The article "Towards A Rigorous Science of Interpretable Machine Learning" by Finale Doshi-Velez and Been Kim discusses the growing importance of interpretable machine learning (interpretable ML) in various applications, such as autonomous vehicles, predictive policing, and email filters. The authors highlight the need for ML systems to not only perform well but also meet criteria like safety, fairness, and explainability. However, these criteria are often difficult to quantify, leading to a reliance on interpretability as a fallback measure. The article critiques the current lack of consensus and rigor in defining and evaluating interpretability, proposing a taxonomy of evaluation approaches: application-grounded, human-grounded, and functionally-grounded. It emphasizes the importance of aligning research claims with evaluation types and suggests creating a shared taxonomy to facilitate communication and comparison of related work. The authors also outline open problems and recommendations for researchers to advance the field of interpretable ML.