May 11–16, 2024 | Orit Shaer, Angelora Cooper, Osnat Mokryn, Andrew L. Kun, Hagit Ben Shoshan
This paper explores the integration of large language models (LLMs) into the group ideation process, specifically in the context of Brainwriting. The study investigates two key aspects: the divergence stage of idea generation and the convergence stage of idea evaluation and selection. A collaborative group-AI Brainwriting framework was developed, incorporating an LLM as an enhancement to the group ideation process. The framework was evaluated in an advanced undergraduate course on tangible interaction design, where students engaged in a Brainwriting session with GPT-3. The results showed that integrating LLMs into Brainwriting can enhance both the ideation process and its outcome.
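To make the divergence stage concrete, the sketch below shows one way an LLM could be prompted to act as an additional Brainwriting participant, contributing ideas to a shared sheet. This is not the authors' implementation: the function name, prompt wording, and model identifier are illustrative, and it assumes the OpenAI Python client (v1.x) with an API key in the environment.

```python
# A minimal sketch (not the authors' code): an LLM contributes ideas to a
# shared Brainwriting sheet, as if it were one more group member.
# Assumes the OpenAI Python client (v1.x) and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

def llm_brainwriting_turn(design_prompt: str,
                          ideas_so_far: list[str],
                          n_ideas: int = 3) -> str:
    """Ask the model to add ideas to the sheet (names here are illustrative)."""
    sheet = "\n".join(f"- {idea}" for idea in ideas_so_far)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in; the study used GPT-3
        temperature=0.9,        # a high temperature favors divergent output
        messages=[
            {"role": "system",
             "content": "You are one participant in a group Brainwriting session."},
            {"role": "user",
             "content": (f"Design prompt: {design_prompt}\n"
                         f"Ideas written so far:\n{sheet}\n\n"
                         f"Add {n_ideas} new ideas that build on or diverge "
                         "from the list above. Return one idea per line.")},
        ],
    )
    return response.choices[0].message.content
```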
Additionally, an LLM evaluation engine was designed to assess the quality of ideas against three criteria: relevance, innovation, and insightfulness. The engine's ratings were compared to those assigned by three expert and six novice evaluators, and the findings suggest that LLMs can also support idea evaluation during the convergence stage. The paper contributes to the HCI field by expanding pedagogical frameworks and offering new AI-augmented tools for educators and novice designers. Specific contributions include a collaborative group-AI Brainwriting ideation framework that enhances both the divergent and convergent stages, an LLM idea evaluation engine, and empirical insights into how novice designers engage with and perceive the process of group-AI Brainwriting. The study also discusses the merits and limitations of integrating LLMs into a collaborative Brainwriting ideation process for both HCI education and practice.
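Returning to the evaluation engine, the sketch below shows one way such a scorer could be wired up: the model is asked to return a JSON object with an integer score per criterion. Again, this is an assumption-laden illustration rather than the study's code; the 1-5 scale, prompt text, and model choice are hypothetical.

```python
# A minimal sketch (not the study's implementation) of an LLM evaluation
# engine that scores one idea on relevance, innovation, and insightfulness.
# The 1-5 scale, prompt wording, and model name are assumptions.
import json
from openai import OpenAI

client = OpenAI()
CRITERIA = ("relevance", "innovation", "insightfulness")

def evaluate_idea(design_prompt: str, idea: str) -> dict[str, int]:
    """Return {criterion: score} for a single idea (illustrative helper)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # any JSON-capable chat model
        response_format={"type": "json_object"},  # request parseable JSON
        temperature=0,                            # keep scoring repeatable
        messages=[
            {"role": "system",
             "content": ("You evaluate design ideas. Reply as a JSON object "
                         "with an integer score from 1 (poor) to 5 (excellent) "
                         f"for each key: {', '.join(CRITERIA)}.")},
            {"role": "user",
             "content": f"Design prompt: {design_prompt}\nIdea: {idea}"},
        ],
    )
    scores = json.loads(response.choices[0].message.content)
    return {c: int(scores[c]) for c in CRITERIA}
```

Setting the temperature to 0 keeps the scores roughly reproducible across runs, which matters when comparing the engine's ratings against those of human evaluators.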