Advancements in Text-Guided Diffusion Models for Realistic Image Manipulation

Text-Guided Diffusion Models: A Game-Changer in Image Generation

Interest in text-guided diffusion models for image generation has surged thanks to the remarkable realism and diversity of their outputs, and these large-scale models offer users unprecedented creative flexibility. As a result, researchers have been exploring how to harness them for image editing. Recent work on text-based image editing with semantic guidance (SEGA) has shown promising results.

SEGA has demonstrated sophisticated image composition and editing capabilities without requiring external supervision or additional computation during the generation process, and its concept vectors are robust, isolated, composable, and scale monotonically. Other research has investigated different semantically grounded approaches to image editing, such as Prompt-to-Prompt, which exploits the model's cross-attention layers to associate pixels with tokens of the text prompt, enabling a wide range of edits to the generated image.
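As a concrete illustration, Hugging Face diffusers ships a SemanticStableDiffusionPipeline implementing SEGA. The sketch below is minimal; the checkpoint, prompts, and guidance values are illustrative choices rather than settings from the paper.

```python
import torch
from diffusers import SemanticStableDiffusionPipeline

# Minimal SEGA sketch; checkpoint, prompts, and parameter values are illustrative.
pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="a photo of a castle next to a river",
    editing_prompt=["oil painting", "sunset"],  # concepts to steer toward
    reverse_editing_direction=[False, False],   # True would remove a concept
    edit_guidance_scale=[6.0, 6.0],             # per-concept guidance strength
    edit_warmup_steps=[10, 10],                 # steps before guidance kicks in
    edit_threshold=[0.95, 0.95],                # restrict edits to relevant regions
)
out.images[0].save("sega_edit.png")
```

Because each concept receives its own guidance term, edits can be combined, reversed, or rescaled independently, which is what the isolation and scalability of the concept vectors refer to.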

However, a major challenge in text-guided editing of real photos is inverting the given image: finding a sequence of noise vectors that, when fed into the diffusion process, reproduce the input image. Most diffusion-based editing work relies on inversion of the denoising diffusion implicit model (DDIM) scheme, which defines a deterministic mapping from a single noise map to a generated image. More recently, other researchers have proposed inverting the denoising diffusion probabilistic model (DDPM) scheme instead.
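To make the inversion problem concrete, here is a minimal sketch of DDIM inversion: because the DDIM update is deterministic, running it in reverse maps a clean latent back to an initial noise latent. It assumes a diffusers-style UNet and scheduler quantities; the function and its indexing are our illustration, not the authors' code.

```python
import torch

@torch.no_grad()
def ddim_invert(x, unet, alphas_cumprod, timesteps, cond):
    # Walk the deterministic DDIM update backward (toward higher noise).
    # `timesteps` is assumed to be in ascending order, starting near t=0.
    for t_prev, t in zip(timesteps[:-1], timesteps[1:]):
        a_prev, a = alphas_cumprod[t_prev], alphas_cumprod[t]
        eps = unet(x, t_prev, encoder_hidden_states=cond).sample
        # Current estimate of the clean latent implied by the noise prediction.
        x0_pred = (x - (1 - a_prev).sqrt() * eps) / a_prev.sqrt()
        # Deterministic DDIM step, run "forward" in noise level.
        x = a.sqrt() * x0_pred + (1 - a).sqrt() * eps
    return x  # approximately the single noise map that regenerates the input
```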

The proposed method computes noise maps for the DDPM generation process that behave differently from those drawn in conventional DDPM sampling: they have higher variance and are more strongly correlated across timesteps. This edit-friendly DDPM inversion has been shown to deliver state-of-the-art results on text-based editing tasks, either on its own or in combination with other editing methods.
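A simplified sketch of this mechanism, assuming a diffusers-style UNet and DDPM scheduler (the function name and indexing are ours): independent noise is drawn at every timestep, and the noise maps z_t are then solved for so that the standard DDPM sampler retraces the sequence exactly.

```python
import torch

@torch.no_grad()
def edit_friendly_ddpm_invert(x0, unet, scheduler, cond):
    ac, betas = scheduler.alphas_cumprod, scheduler.betas
    T = len(betas)
    # 1) Independent noising: eps is drawn fresh for every t, so consecutive
    #    x_t are far less correlated than on a single forward trajectory.
    xs = [x0] + [ac[t].sqrt() * x0 + (1 - ac[t]).sqrt() * torch.randn_like(x0)
                 for t in range(T)]
    zs = [torch.zeros_like(x0) for _ in range(T + 1)]  # z_1 unused (sigma_1 = 0)
    # 2) Extract the noise map z_t implied by each consecutive pair.
    for t in range(T, 1, -1):
        eps = unet(xs[t], t - 1, encoder_hidden_states=cond).sample
        # Standard DDPM posterior mean and std at step t.
        mu = (xs[t] - betas[t - 1] * eps / (1 - ac[t - 1]).sqrt()) / (1 - betas[t - 1]).sqrt()
        sigma = (betas[t - 1] * (1 - ac[t - 2]) / (1 - ac[t - 1])).sqrt()
        zs[t] = (xs[t - 1] - mu) / sigma  # the noise map that lands on x_{t-1}
    return xs[T], zs  # replaying DDPM sampling with these z_t reproduces x0
```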

In this article, we explore LEDITS, which pairs and integrates SEGA with edit-friendly DDPM inversion. LEDITS adapts the semantically guided diffusion generation mechanism to work on real images, offering the editing capabilities of both methods simultaneously and achieving qualitative results competitive with state-of-the-art techniques. To learn more about LEDITS and access the code and project details, check out the links below.
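Continuing from the inversion sketch above, the composition can be outlined as follows. This is a hedged simplification: the SEGA guidance is reduced here to a single edit direction without warmup or thresholding, and the embedding arguments (cond, uncond, edit_cond) are illustrative names, not LEDITS' actual interface.

```python
import torch

@torch.no_grad()
def ledits_edit(x0, unet, scheduler, cond, uncond, edit_cond,
                guidance_scale=7.5, edit_scale=5.0):
    # Invert the real image first, then denoise with semantic guidance
    # while re-injecting the extracted noise maps.
    x_T, zs = edit_friendly_ddpm_invert(x0, unet, scheduler, cond)
    ac, betas = scheduler.alphas_cumprod, scheduler.betas
    T = len(betas)
    x = x_T
    for t in range(T, 0, -1):
        e_u = unet(x, t - 1, encoder_hidden_states=uncond).sample
        e_c = unet(x, t - 1, encoder_hidden_states=cond).sample
        e_e = unet(x, t - 1, encoder_hidden_states=edit_cond).sample
        # Classifier-free guidance plus a SEGA-style edit direction.
        eps = e_u + guidance_scale * (e_c - e_u) + edit_scale * (e_e - e_u)
        mu = (x - betas[t - 1] * eps / (1 - ac[t - 1]).sqrt()) / (1 - betas[t - 1]).sqrt()
        sigma = (betas[t - 1] * (1 - ac[t - 2]) / (1 - ac[t - 1])).sqrt() if t > 1 else 0.0
        x = mu + sigma * zs[t]  # reuse inverted noise maps instead of fresh noise
    return x  # edited latent: input structure preserved, semantics shifted
```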

Don’t forget to join our ML SubReddit, Discord Channel, and Email Newsletter where we share the latest AI research news, cool projects, and more. If you have any questions or suggestions, feel free to reach out to us via email.

[Link to the Paper](https://arxiv.org/abs/2307.00522)
[Link to the Code](https://huggingface.co/spaces/editing-images/ledits/tree/main)
[Link to the Project](https://editing-images-project.hf.space/index.html)

[Image Courtesy: Aneesh Tickoo](https://www.marktechpost.com/author/aneesh-tickoo/)
