May 11–16, 2024, Honolulu, HI, USA | Zijie J. Wang, Chinmay Kulkarni, Lauren Wilcox, Michael Terry, Michael Madaio
FARSIGHT is an interactive tool designed to help AI prototypers identify potential harms of their large language model (LLM)-powered applications during early prototyping. It provides in situ interfaces and novel techniques that empower users to envision potential harms associated with their AI applications. FARSIGHT highlights news articles about relevant AI incidents and lets users explore and edit LLM-generated use cases, stakeholders, and harms. The tool includes an Alert Symbol that warns users of potential risks, an Awareness Sidebar that surfaces relevant news articles and use cases, and a Harm Envisioner that allows users to interactively envision, assess, and reflect on potential harms. FARSIGHT is publicly accessible at https://pair-code.github.io/farsight.
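The envisioning flow can be pictured as a small tree rooted at the prompt: LLM-generated use cases branch into stakeholders, which branch into harms the user can edit and assess. The TypeScript sketch below is only an illustration of that structure; the type names and fields are assumptions, not FARSIGHT's actual data model.

```typescript
// Illustrative sketch only; not FARSIGHT's actual data model.
// Hypothetical types for the use case -> stakeholder -> harm tree
// that the Harm Envisioner lets users expand and edit.

interface Harm {
  description: string;                   // e.g. "students over-rely on generated summaries"
  severity?: 'low' | 'medium' | 'high';  // user-assigned during reflection
}

interface Stakeholder {
  name: string;          // e.g. "non-native speakers"
  isDirectUser: boolean; // direct users vs. indirectly affected groups
  harms: Harm[];
}

interface UseCase {
  summary: string;       // LLM-generated, user-editable
  intended: boolean;     // intended use vs. misuse scenario
  stakeholders: Stakeholder[];
}

// A prompt under development maps to a set of envisioned use cases.
interface HarmEnvisionerState {
  prompt: string;
  useCases: UseCase[];
}
```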
The study reports design insights from a co-design study with 10 AI prototypers and findings from a user study with 42 AI prototypers. After using FARSIGHT, AI prototypers in our user study are better able to independently identify potential harms associated with a prompt and find our tool more useful and usable than existing resources. Their qualitative feedback also highlights that FARSIGHT encourages them to focus on end-users and to think beyond immediate harms. The findings suggest that FARSIGHT helps AI prototypers engage with AI harms in a meaningful way during the prototyping stage.
FARSIGHT is an open-source, web-based implementation that lowers the barrier to applying responsible AI practices. It is designed to be model-agnostic and environment-agnostic, so it works across different LLMs and prompt-crafting interfaces. The tool uses Web Components and Lit to implement FARSIGHT as reusable modules that can be integrated into any web-based interface regardless of its development stack. FARSIGHT also includes a Chrome extension that integrates into Google AI Studio and a Python package that brings FARSIGHT to computational notebooks.
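Because the modules are standard Web Components, embedding them is framework-agnostic: a host page imports the bundle and drops a custom element into its markup. The Lit sketch below illustrates this pattern with a hypothetical element name and properties; FARSIGHT's real exported tags and attributes may differ.

```typescript
// Illustrative Lit component; element and property names are hypothetical,
// not FARSIGHT's actual module API.
import { LitElement, html, css } from 'lit';
import { customElement, property } from 'lit/decorators.js';

@customElement('harm-alert-symbol')
export class HarmAlertSymbol extends LitElement {
  static styles = css`
    .alert { font-weight: bold; }
    .high  { color: #c5221f; }
    .low   { color: #188038; }
  `;

  // The prompt being drafted; the host page keeps this in sync.
  @property({ type: String }) prompt = '';

  // Risk level computed elsewhere (e.g. via an LLM call) and passed in.
  @property({ type: String }) risk: 'low' | 'high' = 'low';

  render() {
    return html`<span class="alert ${this.risk}">
      ${this.risk === 'high' ? '⚠ potential harms detected' : '✓ no alerts yet'}
    </span>`;
  }
}
```

Once the bundle is loaded, any host page can use the element directly in its HTML, e.g. `<harm-alert-symbol prompt="Summarize this essay" risk="high"></harm-alert-symbol>`, without adopting the component's framework.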
The study evaluated FARSIGHT's effectiveness in helping AI prototypers anticipate potential harms associated with AI features. The evaluation user study included 42 participants with diverse roles and varying experience in prompting LLMs. The study investigated how FARSIGHT and FARSIGHT LITE affect users' ability to identify potential harms, and how effective and useful they are in assisting users in envisioning harms compared to existing resources. The findings suggest that FARSIGHT helps AI prototypers address challenges in envisioning potential harms during the prototyping stage.