FedLLM-Bench: Realistic Benchmarks for Federated Learning of Large Language Models

7 Jun 2024 | Rui Ye1*, Rui Ge1*, Xinyu Zhu1, Jingyi Chai1, Yixin Du1, Yang Liu2, Yanfeng Wang3,4,1, Siheng Chen1,3
The paper introduces FedLLM-Bench, a comprehensive benchmark for federated learning of large language models (FedLLM). FedLLM-Bench comprises 8 training methods, 4 training datasets, and 6 evaluation metrics, aiming to provide a realistic testbed for the FedLLM community. The datasets cover a range of client scales and tasks, including instruction tuning and preference alignment, and exhibit diverse real-world characteristics in language, quality, quantity, instruction, length, embedding, and preference. Each dataset is naturally partitioned by real-world user IDs, yielding between 38 and 747 clients and capturing realistic cross-client heterogeneity. The paper also presents extensive experiments that benchmark existing federated learning methods and explore new research directions, such as multilingual collaboration. The code and datasets are available on GitHub, making FedLLM-Bench a valuable resource for researchers and practitioners in the FedLLM community.