Large Language Model Distilling Medication Recommendation Model

5 Feb 2024 | Qidong Liu, Xian Wu, Xiangyu Zhao, Yuanshao Zhu, Zijian Zhang, Feng Tian, Yefeng Zheng
This paper proposes LEADER, a medication recommendation model that leverages Large Language Models (LLMs) to enhance semantic understanding while keeping inference efficient. It addresses two key challenges in medication recommendation: (1) the lack of semantic understanding in existing models, and (2) the difficulty of handling single-visit patients who lack a prescription history. To handle single-visit patients, LEADER incorporates the patient's profile information as a pseudo medication record.

Because LLMs are costly to run at inference time, LEADER introduces a feature-level knowledge distillation method that transfers the semantic understanding of an LLM-based teacher into a smaller, more efficient student model. Evaluated on two real-world datasets, MIMIC-III and MIMIC-IV, LEADER outperforms existing state-of-the-art models, and the distilled student (LEADER(S)) surpasses the LLM-based teacher (LEADER(T)) in both performance and efficiency, yielding a better performance-efficiency trade-off than prior models. An ablation study and hyperparameter analysis further validate the effectiveness of the proposed method.
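The feature-level distillation described above can be sketched in a few lines. This is an illustrative toy, not the paper's actual implementation: the function names, the linear projection, and the plain-MSE objective are assumptions. The idea is that the student's hidden feature is projected into the teacher's (LLM) feature space, and a mean-squared-error loss pulls the projected student feature toward the teacher's.

```python
# Toy sketch of feature-level knowledge distillation (illustrative names,
# not the paper's API): a learned linear projection maps the student's
# hidden feature into the teacher's feature space, and an MSE loss
# aligns the projected student feature with the teacher feature.

def project(student_feat, weight):
    """Linearly project the student feature into the teacher's space."""
    return [sum(w * s for w, s in zip(row, student_feat)) for row in weight]

def feature_distill_loss(student_feat, teacher_feat, weight):
    """Mean squared error between projected student and teacher features."""
    proj = project(student_feat, weight)
    return sum((p - t) ** 2 for p, t in zip(proj, teacher_feat)) / len(teacher_feat)

# Toy example: 2-dim student feature, 3-dim teacher (LLM) feature.
student = [1.0, 2.0]
teacher = [1.0, 0.0, 2.0]
W = [[1.0, 0.0],   # 3x2 projection matrix (would be learned in practice)
     [0.0, 0.0],
     [0.0, 1.0]]
print(feature_distill_loss(student, teacher, W))  # 0.0 — projection matches teacher
```

In training, this distillation loss would be added to the student's usual recommendation loss, so the compact model mimics the LLM's internal representations without paying the LLM's inference cost.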