Fairness and Abstraction in Sociotechnical Systems


January 29-31, 2019 | Andrew D. Selbst, danah boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, Janet Vertesi
The paper examines the challenges of achieving fairness in machine learning (ML) systems, focusing on the limitations of approaches that rely on technical interventions while ignoring the broader social context. It argues that fairness and justice are properties of social and legal systems, not of technical tools in isolation, and that abstracting away the social context can make interventions ineffective or even harmful. The authors identify five "traps" that fair-ML work can fall into: the Framing Trap, the Portability Trap, the Formalism Trap, the Ripple Effect Trap, and the Solutionism Trap. These traps arise from failing to account for the interplay between technical systems and social worlds, and escaping them requires a deeper understanding of "the social."

Drawing on the concept of sociotechnical systems to ground its observations, the paper suggests that technical designers shift from a solutions-oriented approach to a process-oriented one, and draw abstraction boundaries that include social actors rather than purely technical components. The authors emphasize the importance of the broader social context in the design of fair-ML systems and highlight the need for interdisciplinary collaboration to address these challenges. The paper concludes that while fair-ML research has made significant progress, it must move beyond technical solutions and consider the broader social implications of its work.
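To make the kind of purely technical intervention at issue concrete, here is a minimal sketch (not from the paper; the metric choice, function name, and toy data are illustrative assumptions) of demographic parity, one of the formal fairness criteria whose limits the Formalism Trap describes: the number it produces says nothing about why a disparity exists or what enforcing parity would mean in a given social context.

```python
# Illustrative only: a purely "technical" fairness check of the kind the
# Formalism Trap warns about. Demographic parity is one common formal
# criterion; the paper's point is that such metrics abstract away the
# social context in which predictions are used.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between groups A and B.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels ("A" or "B"), aligned with predictions
    """
    rates = {}
    for g in ("A", "B"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes) if outcomes else 0.0
    return abs(rates["A"] - rates["B"])

# Toy data: group A receives positive predictions at 0.75, group B at 0.25.
preds  = [1, 0, 1, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.5
```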