This paper proposes FedCS, a new Federated Learning (FL) protocol for efficiently training machine learning models in mobile edge computing (MEC) environments with heterogeneous clients. The main challenge is to select clients with sufficient computational and communication resources so that training proceeds efficiently while client privacy is preserved. FedCS addresses this by actively managing clients based on their resource conditions, allowing the server to aggregate as many client updates as possible within a specified deadline. The protocol uses a two-step client selection process: the server first requests resource information from candidate clients, and then selects clients according to their reported resource availability so as to optimize the training process. Client selection is solved with a greedy algorithm that maximizes the number of selected clients subject to their computational and communication constraints.

Experimental results show that FedCS significantly reduces training time compared to the original FL protocol and achieves higher accuracy in both IID and non-IID settings. The protocol is evaluated on large-scale image datasets and demonstrates improved efficiency when training deep neural networks under resource-constrained conditions. The study also highlights the importance of choosing an appropriate deadline for each training round to balance the number of participating clients against overall training efficiency. These results indicate that FedCS outperforms existing FL protocols in both training time and accuracy, making it a promising approach for future AI applications that require training on large-scale private data.
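The greedy selection described above can be sketched as follows. This is a simplified illustration, not the paper's exact formulation: the `Client` fields, the assumption that uploads happen sequentially while local computation can overlap with earlier uploads, and all names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Client:
    # Hypothetical per-client resource estimates (seconds), as would be
    # reported in the protocol's resource-request step.
    name: str
    t_update: float   # estimated local model update (computation) time
    t_upload: float   # estimated model upload (communication) time

def greedy_select(clients: list[Client], deadline: float) -> list[Client]:
    """Greedy sketch of FedCS-style selection: repeatedly add the client
    that increases the estimated round time the least, stopping when no
    remaining client fits within the deadline."""
    selected: list[Client] = []
    remaining = list(clients)
    elapsed = 0.0  # time at which the last scheduled upload finishes
    while remaining:
        best, best_finish = None, float("inf")
        for c in remaining:
            # Uploads are serialized; a client's computation runs in
            # parallel with the previously scheduled uploads.
            finish = max(elapsed, c.t_update) + c.t_upload
            if finish < best_finish:
                best, best_finish = c, finish
        if best is None or best_finish > deadline:
            break  # no remaining client meets the deadline
        selected.append(best)
        remaining.remove(best)
        elapsed = best_finish
    return selected
```

A fast client with a short upload is scheduled first, and clients whose combined computation and upload would push the round past the deadline are simply dropped, which mirrors the paper's goal of packing as many updates as possible into each round.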