DiffEditor is a novel approach for improving the accuracy and flexibility of diffusion-based image editing. It addresses two key weaknesses of existing diffusion-based editing methods: (1) inaccurate edits and unexpected artifacts in complex scenarios, and (2) limited flexibility in harmonizing editing operations. DiffEditor introduces image prompts to fine-grained image editing, combining them with text prompts to describe the editing content in more detail. To increase flexibility while preserving content consistency, the method locally integrates stochastic differential equation (SDE) sampling into the ordinary differential equation (ODE) sampling process, injecting randomness into the editing region while keeping the rest of the image consistent. Regional score-based gradient guidance and a time-travel strategy are further incorporated into the diffusion sampling to improve editing quality. The method is designed to be flexible and efficient, with lower complexity than existing diffusion-based editors.

Extensive experiments demonstrate that DiffEditor achieves state-of-the-art performance on various fine-grained image editing tasks, both within a single image (e.g., object moving, resizing, and content dragging) and across images (e.g., appearance replacing and object pasting). The source code is available at https://github.com/MC-E/DragonDiffusion.
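The regional SDE sampling can be pictured as a single reverse-diffusion step that blends a deterministic DDIM (ODE) update with a stochastic update according to the editing mask. Below is a minimal PyTorch sketch, not the authors' implementation: `eps_theta`, `alpha_bar`, and the function name are assumptions, and the SDE branch is realized as DDIM sampling with eta > 0.

```python
import torch

def regional_sde_step(x_t, t, t_prev, eps_theta, alpha_bar, mask, eta=1.0):
    """One reverse step that is stochastic (SDE) inside `mask` and
    deterministic (ODE / DDIM) elsewhere. Hypothetical sketch.

    x_t:       current latent, shape (B, C, H, W)
    eps_theta: noise-prediction network (assumed signature eps_theta(x, t))
    alpha_bar: cumulative alpha schedule, 1-D tensor indexed by timestep
    mask:      1 inside the editing region, 0 outside, shape (B, 1, H, W)
    """
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = eps_theta(x_t, t)

    # Predicted clean latent, shared by both branches.
    x0 = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()

    # Deterministic DDIM (probability-flow ODE) update.
    x_ode = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps

    # Stochastic update: DDIM with eta > 0 injects fresh Gaussian noise.
    sigma = eta * ((1 - a_prev) / (1 - a_t)).sqrt() * (1 - a_t / a_prev).sqrt()
    noise = torch.randn_like(x_t)
    x_sde = (a_prev.sqrt() * x0
             + (1 - a_prev - sigma**2).clamp(min=0).sqrt() * eps
             + sigma * noise)

    # Randomness only where edits happen; consistency everywhere else.
    return mask * x_sde + (1 - mask) * x_ode
```

Because noise is injected only under the mask, the unedited area follows the same deterministic trajectory as plain ODE sampling, which is what preserves content consistency outside the editing region.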
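The regional score-based gradient guidance and the time-travel strategy can be sketched in the same spirit: gradients of a differentiable editing energy steer each region with its own strength, and time travel re-noises an intermediate latent back to an earlier timestep so a difficult edit can be refined over several passes. Again a hedged sketch under assumed interfaces; `energy_fn`, `step_fn`, and all names here are hypothetical placeholders, not the paper's API.

```python
import torch

def guided_step_with_time_travel(x_t, t, t_prev, step_fn, energy_fn,
                                 masks, weights, alpha_bar, travel=False):
    """One guided sampling step with optional time travel. Hypothetical sketch.

    step_fn:   one sampler step, e.g. a partial of regional_sde_step above
    energy_fn: differentiable editing energy E(x) -> scalar; lower energy
               means the latent better matches the editing target
    masks/weights: per-region masks and guidance strengths
    """
    # Score-based gradient guidance, applied per region.
    # (Run under torch.enable_grad() if the sampling loop uses no_grad.)
    x_t = x_t.detach().requires_grad_(True)
    grad = torch.autograd.grad(energy_fn(x_t), x_t)[0]
    guided = x_t.detach()
    for m, w in zip(masks, weights):
        guided = guided - w * m * grad  # steer each region with its own strength

    x_prev = step_fn(guided, t, t_prev)

    if travel:
        # Time travel: roll x_{t-1} back to step t with forward diffusion,
        # then denoise again so the guidance can refine the edit.
        a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
        ratio = a_t / a_prev
        noise = torch.randn_like(x_prev)
        x_t_again = ratio.sqrt() * x_prev + (1 - ratio).sqrt() * noise
        x_prev = step_fn(x_t_again, t, t_prev)

    return x_prev
```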
The paper also discusses limitations of the method, in particular editing scenarios that demand substantial content imagination, and suggests future work on strengthening the editing capabilities of diffusion models.