Fairness and Abstraction in Sociotechnical Systems

Andrew D. Selbst, danah boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, Janet Vertesi | FAT* '19, January 29–31, 2019, Atlanta, GA, USA
The paper "Fairness and Abstraction in Sociotechnical Systems" by Selbst, Boyd, Friedler, Venkatasubramanian, and Vertesi explores the challenges of developing fair machine learning (fair-ML) systems that achieve social and legal outcomes such as fairness, justice, and due process. The authors argue that the use of concepts like abstraction and modular design in fair-ML can lead to ineffective, inaccurate, and misguided interventions when applied to societal contexts. They identify five "traps" that fair-ML work can fall into: the Framing Trap, Portability Trap, Formalism Trap, Ripple Effect Trap, and Solutionism Trap. These traps arise from failing to consider the interplay between technical systems and social contexts. The authors draw on studies of sociotechnical systems from Science and Technology Studies (STS) to explain why these traps occur and propose ways to avoid them. They suggest that technical designers should shift their focus from solutions to process, drawing abstraction boundaries that include social actors and institutions rather than purely technical ones. The paper emphasizes the importance of understanding the broader social context and the potential unintended consequences of technology in social systems. It concludes with recommendations for fair-ML researchers to engage more meaningfully with social contexts and to recognize when technology may not be the best solution to social problems.The paper "Fairness and Abstraction in Sociotechnical Systems" by Selbst, Boyd, Friedler, Venkatasubramanian, and Vertesi explores the challenges of developing fair machine learning (fair-ML) systems that achieve social and legal outcomes such as fairness, justice, and due process. The authors argue that the use of concepts like abstraction and modular design in fair-ML can lead to ineffective, inaccurate, and misguided interventions when applied to societal contexts. They identify five "traps" that fair-ML work can fall into: the Framing Trap, Portability Trap, Formalism Trap, Ripple Effect Trap, and Solutionism Trap. These traps arise from failing to consider the interplay between technical systems and social contexts. The authors draw on studies of sociotechnical systems from Science and Technology Studies (STS) to explain why these traps occur and propose ways to avoid them. They suggest that technical designers should shift their focus from solutions to process, drawing abstraction boundaries that include social actors and institutions rather than purely technical ones. The paper emphasizes the importance of understanding the broader social context and the potential unintended consequences of technology in social systems. It concludes with recommendations for fair-ML researchers to engage more meaningfully with social contexts and to recognize when technology may not be the best solution to social problems.