The paper introduces Llama4Rec, a framework that integrates large language models (LLMs) with conventional recommendation models to enhance recommendation performance. Llama4Rec addresses the limitations of both approaches by performing mutual augmentation and adaptive aggregation. Data augmentation is used to alleviate data sparsity and long-tail issues in conventional models, while prompt augmentation enriches LLMs with collaborative and sequential information. An adaptive aggregation module combines the predictions from both models to refine the final recommendations. Empirical studies on three real-world datasets validate the effectiveness of Llama4Rec, demonstrating significant improvements over baseline methods in various recommendation tasks. The framework's components, including data augmentation, prompt augmentation, and adaptive aggregation, are evaluated through ablation studies, highlighting their contributions to overall performance. The paper also discusses hyper-parameter analysis and computational efficiency, suggesting future directions for improving LLM-based recommendation systems.
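The summary does not give the exact form of the adaptive aggregation module, but the idea of blending per-item scores from a conventional recommender and an LLM can be sketched as a simple weighted combination. Everything below (the function names, the dictionary-of-scores interface, and the blending weight `alpha`) is a hypothetical illustration, not the paper's actual formulation.

```python
# Hypothetical sketch: blend per-item scores from a conventional
# recommendation model and an LLM with a weight alpha, then re-rank.
# The real Llama4Rec aggregation may be learned and more elaborate.

def aggregate_scores(conv_scores, llm_scores, alpha=0.5):
    """Blend two {item: score} dicts; alpha weights the conventional model."""
    items = set(conv_scores) | set(llm_scores)
    return {
        item: alpha * conv_scores.get(item, 0.0)
              + (1.0 - alpha) * llm_scores.get(item, 0.0)
        for item in items
    }

def rank(scores, k=3):
    """Return the top-k items by aggregated score, highest first."""
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

With `alpha = 0.5`, an item scored 0.9 by the conventional model and 0.1 by the LLM ends up at 0.5; tuning `alpha` (or learning it per user or per item) is one way such a module could adapt to which model is more reliable.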