UltraEdit is a large-scale dataset for instruction-based image editing, containing approximately 4 million editing samples and 750,000 unique instructions spanning more than 9 editing types. It was created to address limitations of existing image editing datasets: limited instruction diversity, image biases, and a lack of region-based editing data. UltraEdit leverages large language models (LLMs) together with human raters to generate diverse editing instructions, anchors generation on real images to reduce bias, and supports region-based editing through automatic region annotations. The dataset includes both free-form and region-based editing examples; the region-based data in particular yields significant improvements in editing performance, and experiments show that models trained on UltraEdit set new records on the MagicBrush and Emu-Edit benchmarks. UltraEdit is the largest publicly released instruction-based image editing dataset, offering a systematic approach to generating high-quality editing samples across a wide range of tasks. Beyond training, it serves as a testbed for evaluating image editing models and for analyzing the impact of region-based supervision, making it a valuable resource for researchers and developers working on image editing. The dataset, code, and models are available on GitHub.
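To make the distinction between free-form and region-based samples concrete, here is a minimal sketch of how such records might be represented and partitioned. The field names (`instruction`, `source_image`, `edited_image`, `region_mask`) are hypothetical and do not reflect the dataset's actual schema; the key idea is simply that region-based samples carry an extra mask annotation while free-form samples do not.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class EditSample:
    """One instruction-based editing example (hypothetical schema, not the real one)."""
    instruction: str                    # natural-language edit, e.g. "make the sky purple"
    source_image: str                   # path to the real anchor image
    edited_image: str                   # path to the edited result
    region_mask: Optional[str] = None   # mask path; present only for region-based samples

    @property
    def is_region_based(self) -> bool:
        # A sample counts as region-based iff it carries a region annotation.
        return self.region_mask is not None

def split_by_type(samples: List[EditSample]) -> Tuple[List[EditSample], List[EditSample]]:
    """Partition a batch into (free-form, region-based) subsets."""
    free_form = [s for s in samples if not s.is_region_based]
    region_based = [s for s in samples if s.is_region_based]
    return free_form, region_based

batch = [
    EditSample("add a red hat", "img/001.jpg", "img/001_edit.jpg"),
    EditSample("recolor the car", "img/002.jpg", "img/002_edit.jpg",
               region_mask="img/002_mask.png"),
]
free, region = split_by_type(batch)
```

A split like this is useful when training, since region-based samples can be routed to a mask-conditioned editing objective while free-form samples use the plain instruction-conditioned one.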