Magic Fixup: Streamlining Photo Editing by Watching Dynamic Videos

19 Mar 2024 | Hadi Alzayer, Zhihao Xia, Xuaner Zhang, Eli Shechtman, Jia-Bin Huang, Michael Gharbi
The paper introduces Magic Fixup, a generative model designed to synthesize photorealistic images from coarsely edited inputs. The method leverages diffusion models to transfer fine details from the original image while preserving object identities and adapting to the lighting and context defined by the new layout. The key insight is that videos provide rich supervision for this task, as they capture how objects and camera motions change with viewpoint, lighting, and physical interactions. The authors construct a paired image dataset from video frames, using motion models to align source and target frames. This dataset is used to train a diffusion model that translates the warped image into the ground truth, ensuring the output follows the user's specified layout and maintains realism. The method is evaluated through user studies and compared against state-of-the-art techniques, showing superior performance in terms of realism and user preference.
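To make the data-construction step concrete, the sketch below shows one way a (warped source, target) training pair could be assembled from two video frames. This is a minimal illustration, not the authors' implementation: it assumes a precomputed dense backward flow field (target pixel to source pixel offsets), and the names `warp_with_flow` and `make_training_pair` are hypothetical.

```python
import torch
import torch.nn.functional as F


def warp_with_flow(source: torch.Tensor, backward_flow: torch.Tensor) -> torch.Tensor:
    """Warp `source` (1, C, H, W) toward the target frame.

    `backward_flow` (1, 2, H, W) gives, for each target pixel, the (dx, dy)
    offset to the corresponding source-frame pixel.
    """
    _, _, h, w = source.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=source.dtype),
        torch.arange(w, dtype=source.dtype),
        indexing="ij",
    )
    grid_x = xs + backward_flow[:, 0]  # (1, H, W)
    grid_y = ys + backward_flow[:, 1]
    # Normalize to [-1, 1] as required by grid_sample.
    grid = torch.stack(
        (2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0), dim=-1
    )  # (1, H, W, 2)
    return F.grid_sample(source, grid, align_corners=True)


def make_training_pair(source_frame, target_frame, backward_flow):
    # The warped source plays the role of the coarse "edit"; the real target
    # frame is the photorealistic ground truth the diffusion model learns to produce.
    warped = warp_with_flow(source_frame, backward_flow)
    return warped, target_frame
```

In this framing, the diffusion model is trained to map the warped (artifact-laden) frame to the true target frame, which mirrors the paper's idea of using video motion as free supervision for realistic layout changes.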