Transparent AI Disclosure Obligations: Who, What, When, Where, Why, How

May 11-16, 2024 | Abdallah El Ali, Karthikeya Puttur Venkatraj, Sophie Morosoli, Laurens Naudts, Natali Helberger, Pablo Cesar
This paper examines the challenges and considerations surrounding transparent AI disclosure obligations under the European AI Act, in particular Article 52, which mandates transparency for AI systems that interact with humans. The authors conducted two participatory AI workshops with researchers, designers, and engineers across disciplines (N=16), using the 5W1H framework (Who, What, When, Where, Why, How) to deconstruct the relevant clauses of Article 52. Participants generated 149 questions, which the authors clustered into five themes and 18 sub-themes. These questions are intended to inform future legal developments and interpretations of Article 52, and to provide a starting point for Human-Computer Interaction (HCI) research that examines AI disclosure transparency from a human-centered perspective.

The study highlights the ethical, legal, and policy implications of AI disclosures, including who should be responsible for AI-generated content, when users should be informed, and how transparency can be ensured. It also addresses the impact of AI on trust, authenticity, and user empowerment, as well as the practical challenges of implementing AI disclosures. The paper emphasizes the need for interdisciplinary research on the complex, multi-faceted challenge of ensuring transparent AI disclosures, which the authors consider essential for responsible AI development and for democratic societies grounded in truth rather than AI-generated fiction. They argue that AI disclosure obligations should be approached through participatory AI and value-sensitive design to safeguard human well-being and ethical AI practices.