MedAdapter: Efficient Test-Time Adaptation of Large Language Models towards Medical Reasoning

5 May 2024 | Wenqi Shi, Ran Xu, Yuchen Zhuang, Yue Yu, Hang Wu, Carl Yang, May D. Wang
MedAdapter is a test-time adaptation method that improves the performance of large language models (LLMs) on medical reasoning tasks. Instead of fine-tuning the LLM itself, it trains a small BERT-sized language model (110M parameters) to adapt both white-box and black-box LLMs, requiring neither extensive computational resources nor data sharing with third parties.

Evaluated on five biomedical QA datasets, MedAdapter achieves significant gains, with average improvements of 25.48% for white-box LLMs and 11.31% for black-box LLMs. It performs even better when combined with train-time adaptation, making it a flexible complement to existing adaptation methods. The approach is particularly attractive for black-box LLMs, since it avoids the risks and costs of sharing data and fine-tuning through APIs, and it is parameter-efficient, using only 14.75% of the GPU memory needed for white-box LLM adaptation. By balancing model performance, computational resources, and data privacy, MedAdapter offers an efficient, privacy-preserving, cost-effective, and transparent solution for adapting LLMs to real-world biomedical research and practice.
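The core idea, adapting a frozen LLM at test time by re-ranking its candidate outputs with a small scorer, can be sketched as follows. This is a minimal illustration, not MedAdapter's actual implementation: `toy_generate` and `toy_score` are hypothetical stand-ins for the base LLM and the BERT-sized adapter.

```python
from typing import Callable, List

def best_of_n(question: str,
              generate: Callable[[str, int], List[str]],
              score: Callable[[str, str], float],
              n: int = 4) -> str:
    """Sample n candidate answers from the base LLM and return the one
    the small adapter scores highest; the base model is never updated,
    which is what makes this a test-time (not train-time) adaptation."""
    candidates = generate(question, n)
    return max(candidates, key=lambda c: score(question, c))

# --- hypothetical stand-ins for illustration only ---
def toy_generate(question: str, n: int) -> List[str]:
    # a real system would sample n completions from the LLM
    return [f"answer-{i}" for i in range(n)]

def toy_score(question: str, candidate: str) -> float:
    # a real adapter would be a fine-tuned ~110M-parameter scorer
    return float(candidate.endswith("2"))

print(best_of_n("Which drug is first-line for condition X?",
                toy_generate, toy_score))  # prints "answer-2"
```

Because the adapter only needs to score (question, candidate) pairs, the same wrapper works whether the underlying LLM is white-box or accessed through a black-box API.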