PerMetrics is an open-source Python framework for computing performance metrics of machine learning models. It provides a wide range of metrics for regression, classification, and clustering tasks, is well documented, and is compatible with various machine learning libraries.

The framework addresses limitations of existing libraries such as Scikit-Learn and TorchMetrics by offering a comprehensive set of 111 metrics across these three fundamental problems. It is designed to be user-friendly, requiring no prior knowledge of other major libraries such as TensorFlow, Keras, or PyTorch. The project is hosted on GitHub, is continuously maintained, and ships with detailed documentation, examples, and test cases that ease integration into users' workflows.

For regression, metrics such as mean squared error (MSE), root mean square error (RMSE), and the coefficient of determination (COD) evaluate model performance. For classification, metrics such as accuracy, precision, recall, F1-score, and AUC-ROC are employed. For clustering, metrics such as the Silhouette coefficient, the Davies-Bouldin index, and the Calinski-Harabasz index assess clustering quality.

These metrics help researchers compare models, identify strengths and weaknesses, and make informed decisions about model selection and parameter tuning. PerMetrics is also useful in the iterative process of model development and improvement, guiding optimization and enabling the exploration of feature engineering techniques.
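To make the regression metrics named above concrete, the sketch below computes MSE, RMSE, and the coefficient of determination directly from their textbook formulas with NumPy. It illustrates what such a library evaluates rather than reproducing the PerMetrics API itself, and the sample arrays are invented for demonstration.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute MSE, RMSE, and the coefficient of determination (R^2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)           # mean squared error
    rmse = np.sqrt(mse)                             # root mean square error
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                      # coefficient of determination
    return mse, rmse, r2

# Hypothetical target values and model predictions, for illustration only.
mse, rmse, r2 = regression_metrics([3.0, 5.0, 2.5, 7.0], [2.5, 5.0, 4.0, 8.0])
```

A lower MSE/RMSE and an R² closer to 1 indicate a better regression fit, which is the comparison the library is meant to streamline.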
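Similarly, the classification metrics mentioned above can be sketched from scratch for the binary case. This is an illustrative implementation of the underlying definitions (with label 1 treated as the positive class), not the PerMetrics interface, and the sample labels are made up.

```python
from collections import Counter

def binary_classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1-score for binary labels (1 = positive)."""
    pairs = Counter(zip(y_true, y_pred))
    tp = pairs[(1, 1)]  # true positives
    tn = pairs[(0, 0)]  # true negatives
    fp = pairs[(0, 1)]  # false positives
    fn = pairs[(1, 0)]  # false negatives
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Hypothetical ground-truth labels and predictions, for illustration only.
acc, prec, rec, f1 = binary_classification_metrics(
    [1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 0, 1])
```

Precision and recall capture the trade-off between false positives and false negatives, which the F1-score summarizes as their harmonic mean; having all of these available in one place is the kind of convenience the framework advertises.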