Twin-Merging is a novel method for dynamically integrating modular expertise in model merging. It targets two key obstacles that cause traditional merging methods to underperform individually fine-tuned models: interference between the parameters of the merged models and heterogeneous data at test time. The method proceeds in two stages. First, it decomposes each model's knowledge into a shared component, common across tasks, and a compressed task-exclusive component, reducing redundancy and storage cost. Second, at inference it dynamically composes the shared and task-specific knowledge conditioned on the input, which narrows the performance gap between merged and fine-tuned models and improves adaptability to heterogeneous data. Illustrative sketches of both stages follow below.

Extensive experiments on 12 datasets covering both discriminative and generative tasks show that Twin-Merging achieves an average improvement of 28.34% in absolute normalized score on the discriminative tasks and surpasses the fine-tuned upper bound on the generative ones. The method is scalable, efficient, and adaptable, with minimal hyperparameters and storage requirements, and it remains effective across model architectures and task counts. It is simple to implement, can be combined with other merging methods, and is storage-efficient. It is effective across model scales, with the balance between the two components shifting with size: for large models, shared knowledge is less critical, while for smaller models, task-specific knowledge matters more. By adapting the merge to each input, the dynamic merging strategy handles diverse test data better than static merging techniques, making Twin-Merging a powerful approach for combining multiple fine-tuned models into a single multi-task model.
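The decomposition stage can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's reference implementation: each model is represented as a dict of 2-D weight tensors, the shared expert is built with a task-arithmetic-style average of task vectors (any standard merging method could stand in here), and each task-exclusive residual is compressed with a truncated SVD. The function and parameter names (decompose_knowledge, rank, scaling) are our own.

```python
import torch

def decompose_knowledge(pretrained, finetuned_list, rank=8, scaling=0.3):
    """Split each fine-tuned model's weights into a shared component and a
    compressed, task-exclusive residual.

    `pretrained` and each entry of `finetuned_list` are dicts mapping
    parameter names to 2-D weight tensors (a simplification: real models
    also contain biases, embeddings, etc.).
    """
    # Shared expert: pretrained weights plus the scaled average of all
    # task vectors (a task-arithmetic-style merge; the shared expert
    # could equally be built with another merging method).
    shared = {}
    for name, w0 in pretrained.items():
        avg_tau = torch.stack([ft[name] - w0 for ft in finetuned_list]).mean(0)
        shared[name] = w0 + scaling * avg_tau

    # Exclusive experts: each fine-tuned model's residual w.r.t. the
    # shared expert, compressed to rank `rank` via truncated SVD.
    # In practice one would store the factors (U * S, Vh) to realize the
    # storage savings; we reconstruct the matrix here for simplicity.
    exclusives = []
    for ft in finetuned_list:
        expert = {}
        for name, w_shared in shared.items():
            residual = ft[name] - w_shared
            U, S, Vh = torch.linalg.svd(residual, full_matrices=False)
            expert[name] = (U[:, :rank] * S[:rank]) @ Vh[:rank, :]
        exclusives.append(expert)
    return shared, exclusives
```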
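The dynamic-merging stage then mixes the shared expert with the compressed exclusive experts using per-input weights from a small router. Again a hedged sketch: the router architecture (a single linear layer over a pooled input representation, named TwinRouter here) is an illustrative stand-in for whatever router the method actually trains, and dynamic_merge reuses the outputs of decompose_knowledge from the sketch above.

```python
import torch
import torch.nn as nn

class TwinRouter(nn.Module):
    """Toy router: maps a pooled input representation to mixing weights
    over the task-exclusive experts. The single-linear-layer design is an
    illustrative assumption, not the paper's exact architecture."""
    def __init__(self, hidden_dim, num_experts):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, num_experts)

    def forward(self, pooled):  # pooled: (hidden_dim,)
        return torch.softmax(self.proj(pooled), dim=-1)

def dynamic_merge(shared, exclusives, weights):
    """Compose per-input parameters: the shared expert plus the
    router-weighted sum of compressed task-exclusive residuals."""
    merged = {}
    for name, w_shared in shared.items():
        merged[name] = w_shared + sum(
            w * expert[name] for w, expert in zip(weights, exclusives)
        )
    return merged

# Tiny end-to-end demo with random 2-D "layers".
torch.manual_seed(0)
names = ["layer0", "layer1"]
pre = {n: torch.randn(16, 16) for n in names}
fts = [{n: pre[n] + 0.1 * torch.randn(16, 16) for n in names} for _ in range(3)]

shared, exclusives = decompose_knowledge(pre, fts, rank=4)
router = TwinRouter(hidden_dim=16, num_experts=len(fts))
weights = router(torch.randn(16))            # per-input mixing weights
params = dynamic_merge(shared, exclusives, weights.detach().tolist())
```

Because the composition happens in weight space per input, the model served at any moment is a single network, which is what keeps the approach storage-efficient relative to hosting all fine-tuned checkpoints.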