May 11–16, 2024 | Michael Xieyang Liu, Frederick Liu, Alexander J. Fiannaca, Terry Koo, Lucas Dixon, Michael Terry, Carrie J. Cai
This paper examines the need for structured output constraints on large language models (LLMs) so that they can be integrated into existing developer workflows. The authors surveyed 51 industry professionals to understand, from a user-centered perspective, the range of scenarios and motivations for applying output constraints, and identified 134 concrete use cases spanning two levels: low-level constraints, which ensure the output adheres to a structured format and an appropriate length, and high-level constraints, which require the output to follow semantic and stylistic guidelines and to avoid hallucination. Applying output constraints can streamline the development, testing, and integration of LLM prompts, enhance user experience, and increase user trust. The paper also discusses how users prefer to express constraints, whether through natural language or graphical user interfaces (GUIs).

Building on these findings, the authors present an initial design for a constraint prototyping tool called CONSTRAINTMAKER. CONSTRAINTMAKER lets users specify different types of output constraints through a GUI, mixing and matching multiple constraint primitives. The tool pairs a GPT-3.5-class LLM with a finite-state machine (FSM)-based constrained decoding technique so that the output strictly adheres to the defined formats. User tests and feedback were used to refine the design, highlighting its intuitive separation of task description from output format and its flexibility for both developers and non-developers. The paper concludes with a discussion of more controllable and user-friendly interfaces for human-LLM interaction.
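To make the decoding step concrete, the sketch below illustrates the general idea behind FSM-based constrained decoding: at each step, continuations that would violate the target format are masked out before the next token is chosen. This is only a minimal character-level illustration, not the paper's implementation; the `next_logits` interface, the digit-dash-digit format, and greedy selection are all assumptions made for the example (CONSTRAINTMAKER's actual decoder operates on subword tokens and richer, GUI-specified constraint primitives).

```python
from typing import Dict

# Minimal sketch of FSM-constrained decoding (hypothetical character-level model).
# FSM for outputs of the form: one or more digits, a dash, one or more digits,
# e.g. "12-34". States: 0 = start, 1 = in first number, 2 = just after dash,
# 3 = in second number (accepting).
DIGITS = set("0123456789")

def allowed_chars(state: int) -> set:
    """Characters the FSM permits from each state."""
    if state == 0:
        return DIGITS                   # must start with a digit
    if state == 1:
        return DIGITS | {"-"}           # continue first number or emit the dash
    if state == 2:
        return DIGITS                   # second number must start with a digit
    if state == 3:
        return DIGITS | {"<eos>"}       # may extend the second number or stop
    return set()

def step(state: int, ch: str) -> int:
    """FSM transition function."""
    if ch in DIGITS:
        return {0: 1, 1: 1, 2: 3, 3: 3}[state]
    if ch == "-" and state == 1:
        return 2
    raise ValueError(f"illegal character {ch!r} in state {state}")

def constrained_decode(next_logits, max_len: int = 16) -> str:
    """Greedy decoding with illegal continuations masked out at every step.

    `next_logits(prefix)` is an assumed model interface returning a score per
    candidate character (including "<eos>"); only FSM-legal characters compete.
    """
    out, state = "", 0
    for _ in range(max_len):
        logits: Dict[str, float] = next_logits(out)
        legal = {c: s for c, s in logits.items() if c in allowed_chars(state)}
        if not legal:
            break                        # model offers no legal continuation
        ch = max(legal, key=legal.get)   # greedy pick among legal characters only
        if ch == "<eos>":
            break
        out, state = out + ch, step(state, ch)
    return out
```

Because characters outside the automaton's allowed set are never emitted, the returned string conforms to the intended format by construction, which is the same kind of guarantee the paper's decoding technique aims to provide for formats defined in the CONSTRAINTMAKER GUI.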