This paper introduces Generative Students, a prompt architecture that leverages large language models (LLMs) to simulate student profiles and generate responses to multiple-choice questions (MCQs). The architecture is grounded in the Knowledge-Learning-Instruction (KLI) framework, which defines knowledge components (KCs) as the elements students are expected to learn. A generative student profile is a function of the KCs the student has mastered, is confused about, or has shown no evidence of knowing. The paper demonstrates that generative students produce logical and believable responses consistent with their profiles, and that these responses can be used to evaluate the quality of MCQs. Comparing generative students' responses with those of real students, the study finds a high correlation, as well as considerable overlap in the questions both groups identify as difficult. A case study shows that instructors can improve question quality based on the signals generative students provide. The paper also discusses the risks of the approach and the need to elicit instructor input to steer the process. The results suggest that generative students can support rapid prototyping and iteration of questions, and they highlight the potential of using LLMs to simulate student profiles and evaluate question items without requiring historical student performance data. Related work covered includes automatic question generation for educational purposes, metrics and approaches for evaluating questions, and generative agents. The paper concludes that the Generative Students prompt architecture has the potential to improve question quality and support learning at scale.
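To make the profile-as-a-function-of-KCs idea concrete, the sketch below shows one plausible way to encode a generative student profile and render it as an LLM prompt. The paper does not prescribe this implementation: the `GenerativeStudentProfile` dataclass, the prompt wording, and the `build_mcq_prompt` helper are illustrative assumptions, not the authors' code.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the paper describes a profile as a function of the
# KCs a student has mastered, is confused about, or has no evidence of knowing.
# The data structures and prompt wording below are assumptions.

@dataclass
class GenerativeStudentProfile:
    mastered: list = field(default_factory=list)      # KCs the student has mastered
    confused: list = field(default_factory=list)      # KCs the student is confused about
    no_evidence: list = field(default_factory=list)   # KCs with no evidence of knowledge

    def to_prompt(self) -> str:
        """Render the profile as a system-style prompt for an LLM."""
        return (
            "You are a student answering a multiple-choice question.\n"
            f"Knowledge components you have mastered: {', '.join(self.mastered) or 'none'}.\n"
            f"Knowledge components you are confused about: {', '.join(self.confused) or 'none'}.\n"
            f"Knowledge components you have no knowledge of: {', '.join(self.no_evidence) or 'none'}.\n"
            "Answer consistently with this profile and briefly explain your reasoning."
        )


def build_mcq_prompt(profile: GenerativeStudentProfile, stem: str, options: list) -> str:
    """Combine the simulated student profile with an MCQ to form a full prompt."""
    option_lines = "\n".join(f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options))
    return f"{profile.to_prompt()}\n\nQuestion: {stem}\n{option_lines}\n\nChosen option:"


if __name__ == "__main__":
    # Hypothetical KCs for an introductory web-development course (example only).
    profile = GenerativeStudentProfile(
        mastered=["HTML tags", "CSS selectors"],
        confused=["CSS specificity"],
        no_evidence=["flexbox layout"],
    )
    prompt = build_mcq_prompt(
        profile,
        stem="Which rule wins when two selectors target the same element?",
        options=["The one written first", "The more specific selector",
                 "The one with more properties", "The shorter selector"],
    )
    print(prompt)  # This prompt would then be sent to an LLM to obtain the simulated answer.
```

Under this reading, evaluating an MCQ amounts to sending the rendered prompt to an LLM once per simulated profile and aggregating the chosen options, so that questions missed by many profiles surface as candidates for revision.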