October 2020 | The Parliamentary Office of Science and Technology
This POSTnote provides an overview of machine learning (ML) and its role in decision-making, highlighting the challenges of understanding how complex ML systems reach their outputs and the technical approaches to making ML more interpretable. It discusses the importance of accountability in ML systems, including algorithm audits and impact assessments.
Machine learning, a subset of artificial intelligence, is increasingly used in various applications, from identity verification to disease diagnosis. While ML offers potential social and economic benefits, concerns exist about transparency and accountability, particularly with complex models like deep learning, which may be difficult to explain. ML systems can also introduce or perpetuate biases, raising concerns about fairness and reliability.
In 2018, the Lords Committee on AI called for AI systems to be "intelligible to developers, users and regulators," stressing the need for explanations of decisions. In 2020, the Committee on Standards in Public Life highlighted the importance of explanations for ML decisions in the public sector. The UK government has emphasized the importance of ethical ML and the risks posed by a lack of transparency in ML-assisted decision-making.
ML systems are trained on large datasets, and the quality of this data is crucial to avoiding bias. ML is used to support decisions in applications such as recruitment and medical diagnosis. However, some ML systems, such as deep learning models, are so complex that they are described as "black box" ML, because it is difficult to understand how they reach their outputs.
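As a minimal sketch of what "training on data" means in practice, the following Python example (assuming the scikit-learn library and its built-in Iris dataset, which are not part of the POSTnote itself) fits a simple classifier to labelled examples. The model's decision rules are inferred from the training data rather than written by hand, which is why the quality and representativeness of that data shape the decisions the system later makes.

```python
# Minimal sketch: an ML model learns its rules from labelled data,
# assuming scikit-learn and its bundled Iris dataset are available.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The algorithm infers its decision boundaries from the training examples;
# any patterns (or biases) in that data carry over into its decisions.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Performance is then checked on data the model has not seen before.
print("Test accuracy:", model.score(X_test, y_test))
```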
Interpretable ML aims to make ML systems more understandable, improving user trust and performance. Technical approaches include using simpler models and tools to understand complex systems. Proposed ways to improve ML accountability include audits and impact assessments.
The term 'interpretability' refers to the ability to explain ML decision-making in terms that humans can understand. Approaches range from using inherently simpler models to applying tools that probe complex systems, including proxy models, saliency mapping, and counterfactual explanations.
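To illustrate the proxy-model approach, the sketch below (an assumption for illustration, again using scikit-learn and its bundled breast cancer dataset rather than anything from the POSTnote) trains a complex "black box" model and then fits a shallow decision tree to mimic its predictions. The tree's rules can be printed and read directly, and its fidelity score indicates how closely it reproduces the black box's behaviour.

```python
# Minimal sketch of a proxy (surrogate) model explanation, assuming scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The complex model whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A shallow, human-readable tree trained to imitate the black box's outputs.
proxy = DecisionTreeClassifier(max_depth=3, random_state=0)
proxy.fit(X, black_box.predict(X))

# Fidelity: how often the proxy agrees with the black box on the same inputs.
print("Fidelity to black box:", proxy.score(X, black_box.predict(X)))

# The proxy's decision rules can be inspected directly.
print(export_text(proxy, feature_names=list(data.feature_names)))
```

The proxy is only an approximation: a high fidelity score gives some confidence that its readable rules reflect the black box's behaviour, while a low score warns that the explanation may be misleading.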
The importance of explaining ML outcomes to individuals varies by context. The ICO and Alan Turing Institute have produced guidance for organizations to explain AI-based decisions. Interpretable ML can have benefits for organizations, individuals, and society, including improved performance, user trust, and regulatory compliance. However, challenges include commercial sensitivity, risk of gaming, cost, and mistrust or deception.
Wider ML accountability tools include open and documented processes, machine learning fact sheets, algorithmic impact assessments, and algorithm audits. Principles, frameworks, and standards for ML are being developed by various organizations to promote ethical development. The UK has started producing industry standards for ethical design of robots and autonomous systems.
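As a purely hypothetical sketch of what a machine learning fact sheet might record (the field names and values below are illustrative assumptions, not a published standard), such documentation typically captures a model's intended use, training data, and known limitations in a structured, auditable form.

```python
# Hypothetical sketch of a machine learning "fact sheet" as a structured record;
# field names and values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ModelFactSheet:
    name: str
    intended_use: str
    training_data: str
    known_limitations: list[str] = field(default_factory=list)
    evaluation_notes: dict[str, str] = field(default_factory=dict)

sheet = ModelFactSheet(
    name="example-screening-model",
    intended_use="Support, not replace, human decision-makers",
    training_data="Description of the dataset, its provenance and known gaps",
    known_limitations=["Not validated for use outside its original context"],
    evaluation_notes={"accuracy": "reported on a held-out test set"},
)
print(sheet)
```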