**PerMetrics: A Framework of Performance Metrics for Machine Learning Models**
**Nguyen Van Thieu**
**Faculty of Computer Science, Phenikaa University, Hanoi, Vietnam**
**DOI: 10.21105/joss.06143**
**Software:** Review, Repository, Archive
**Editor: Gabriela Alessio Robles**
**Reviewers: @CeciliaCoelho, @KennethEnovoldsen, @GISWLH**
**Submitted: 08 August 2023**
**Published: 09 March 2024**
**License: Creative Commons Attribution 4.0 International License (CC BY 4.0)**
**Summary:**
Performance metrics are crucial in machine learning, particularly for tasks such as regression, classification, and clustering. They provide quantitative measures for assessing the accuracy and efficacy of models, helping researchers and practitioners evaluate, compare, and improve algorithms. This paper introduces PerMetrics, an open-source Python framework, distributed as the `permetrics` package, that offers a comprehensive set of performance metrics for machine learning models. PerMetrics is hosted on GitHub, actively developed and maintained, and ships with extensive documentation, examples, and test cases to ease comprehension and integration into users' workflows.
**Statement of Need:**
PerMetrics is the first open-source framework to collect such a large number of metrics, 111 methods in total, for regression, classification, and clustering problems. It addresses limitations of popular libraries such as Scikit-Learn, Metrics, and TorchMetrics by offering a broader range of metrics, simpler syntax (see the sketch below), and straightforward integration with common computational and machine learning libraries. Because PerMetrics does not depend on frameworks such as TensorFlow, Keras, or PyTorch, no knowledge of those libraries is required, making it accessible to a wider audience.
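To make the simpler-syntax claim concrete, here is a minimal sketch of the evaluator pattern, assuming the `RegressionMetric` class and the `get_metrics_by_list_names` helper described in the project's documentation (treat both names as assumptions to verify against the installed version):
```python
from permetrics import RegressionMetric

y_true = [3, -0.5, 2, 7, 5, 6]
y_pred = [2.5, 0.0, 2, 8, 4.8, 5.9]

# One evaluator object exposes every regression metric as a method.
evaluator = RegressionMetric(y_true=y_true, y_pred=y_pred)

# Several metrics can be computed in a single call and returned as a dict
# (helper name assumed from the documentation).
results = evaluator.get_metrics_by_list_names(["RMSE", "MAE", "MAPE"])
print(results)
```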
**Available Methods:**
PerMetrics provides three types of performance metrics: regression, classification, and clustering. Examples of usage are provided for each type, demonstrating how to evaluate models using metrics such as mean squared error (MSE), F1-score, and Silhouette coefficient.
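The same evaluator pattern applies to all three families. The sketch below assumes the `ClassificationMetric` and `ClusteringMetric` classes exported by `permetrics`; the method names `f1_score` and `silhouette_index` follow the library's documented naming convention but should be checked against the documentation:
```python
import numpy as np
from permetrics import ClassificationMetric, ClusteringMetric

# Classification: compare predicted labels against the ground truth.
y_true = [0, 1, 0, 0, 1, 1]
y_pred = [0, 1, 0, 0, 0, 1]
cls_eval = ClassificationMetric(y_true=y_true, y_pred=y_pred)
print(cls_eval.f1_score())  # F1-score (method name assumed from the docs)

# Clustering: internal metrics need the feature matrix X and predicted labels.
X = np.random.rand(30, 4)             # toy feature matrix
labels = np.random.randint(0, 3, 30)  # toy cluster assignments
clu_eval = ClusteringMetric(X=X, y_pred=labels)
print(clu_eval.silhouette_index())  # Silhouette coefficient (name assumed)
```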
**Installation and Simple Example:**
PerMetrics can be installed via pip:
```bash
pip install permetrics
```
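After installation, a simple regression evaluation looks like the following sketch (again assuming the `RegressionMetric` class and the `MSE`/`RMSE` method aliases from the documentation):
```python
from permetrics import RegressionMetric

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]

# Create the evaluator once, then call any metric method on it.
evaluator = RegressionMetric(y_true=y_true, y_pred=y_pred)
print(evaluator.MSE())   # mean squared error
print(evaluator.RMSE())  # root mean squared error
```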
Further examples covering regression, classification, and clustering metrics are provided in the documentation and in the examples folder on GitHub.
**Acknowledgements:**
The author sincerely thanks the individuals who have contributed to the software through valuable issue reports and insightful feedback.
**References:**
The paper cites several key references, including major machine learning libraries and frameworks, to support the development and validation of PerMetrics.