This supplementary material provides guidelines for filling out the REFORMS checklist, a consensus-based reporting standard for machine learning-based science. The checklist comprises eight modules: study design, computational reproducibility, data quality, data preprocessing, modeling, data leakage, metrics and uncertainty, and generalizability and limitations. Each module contains specific items that researchers should address when reporting their work.

In study design, researchers are asked to state the population or distribution about which the scientific claim is made and to justify that choice. In computational reproducibility, they are encouraged to provide details about the dataset, code, and computing infrastructure used, along with a reproduction script so that others can replicate their results. In data quality, they are asked to describe the source of the data, the sampling frame, and the outcome variable. In data preprocessing, they are asked to describe how missing data were handled and what other transformations were applied. In modeling, they are asked to describe the models they trained, the criteria for selecting the final model, and the method for hyperparameter tuning. In data leakage, they are asked to justify that their features are legitimate and do not lead to leakage. In metrics and uncertainty, they are asked to state the metrics used to assess model performance and to provide uncertainty estimates. In generalizability and limitations, they are asked to describe evidence of external validity and the contexts in which their findings may not hold.

The guidelines also include a sample checklist based on Obermeyer et al. and references to additional resources for each item. The goal of the REFORMS checklist is to improve the transparency, reproducibility, and reliability of machine learning-based scientific research. Brief, hypothetical code sketches below illustrate several of the checklist's more technical items.
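To make the computational-reproducibility item concrete, here is a minimal sketch of what a reproduction script might look like. The file names, seed, and model choice are all hypothetical, not prescribed by the checklist; the point is that one fixed entry point re-runs the analysis end to end with pinned randomness:

```python
"""reproduce.py -- hypothetical reproduction script for a REFORMS-style study.

All file names and parameters are illustrative, not from the checklist itself.
"""
import random

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

SEED = 42  # fixing all seeds is one ingredient of computational reproducibility

random.seed(SEED)
np.random.seed(SEED)

# Load the (hypothetical) dataset shipped alongside the code.
X = np.loadtxt("data/features.csv", delimiter=",")
y = np.loadtxt("data/labels.csv", delimiter=",")

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=SEED
)

model = RandomForestClassifier(n_estimators=100, random_state=SEED)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```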
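For the preprocessing module, the checklist asks that the handling of missing data be described. A sketch of one such choice, median imputation with scikit-learn on toy data, assuming missing entries are encoded as NaN:

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Toy feature matrix with missing entries encoded as np.nan.
X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, np.nan]])

# Median imputation; the checklist asks that this choice be reported and justified.
imputer = SimpleImputer(strategy="median")
X_imputed = imputer.fit_transform(X)
print(X_imputed)
```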
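For the modeling and data-leakage modules, one common safeguard is to nest all preprocessing inside a cross-validated pipeline, so that statistics computed from held-out data never inform training. The synthetic dataset and hyperparameter grid below are arbitrary; the structure is what matters:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Putting the scaler inside the pipeline means it is re-fit on the training
# portion of every CV fold, so no test-set statistics leak into preprocessing.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Hyperparameter tuning over an illustrative grid for the regularization strength.
search = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)
print("best C:", search.best_params_["clf__C"])
print(f"held-out accuracy: {search.score(X_test, y_test):.3f}")
```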
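For the metrics-and-uncertainty module, a percentile bootstrap is one standard way to attach an uncertainty estimate to a reported metric. A sketch on synthetic labels and predictions (the accuracy level and sample size are made up for illustration):

```python
import numpy as np
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical held-out labels and predictions (roughly 80% accurate).
y_true = rng.integers(0, 2, size=200)
y_pred = np.where(rng.random(200) < 0.8, y_true, 1 - y_true)

# Percentile bootstrap: resample test cases with replacement, recompute the metric.
boot = [
    accuracy_score(y_true[idx], y_pred[idx])
    for idx in (rng.integers(0, len(y_true), len(y_true)) for _ in range(2000))
]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"accuracy = {accuracy_score(y_true, y_pred):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```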