Monday, February 27, 2023

Using a reference image within AIart III

The world of text-to-image systems has been revolutionized by a technique called diffusion. Originally described as a Latent Diffusion Model, diffusion has been widely adopted for its ability to learn how training images dissolve into noise in a latent space, so that the system can then reverse the process and decode new images out of noise.

Night Café is offering a few techniques that take this a step further: the ability to begin with a reference image that has only a limited amount of noise added, rather than starting from pure random noise. Night Café offers two diffusion approaches that can start from a style image: Coherent and Stable Diffusion.

To understand the significance of this development, let's take a closer look at how diffusion works. The core idea behind diffusion is to train a neural network to predict the noise that has been added to an image, so that a little of that noise can be removed at each step. Generation starts from pure random noise and repeats this denoising step until a clean image emerges. And unlike techniques limited to a fixed set of outputs, diffusion can generate an effectively unlimited variety of images, because each run starts from a different random noise vector and decodes it into a different image.
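As a rough illustration (my own sketch in NumPy, not anyone's actual production code), the "forward" half of diffusion just blends a clean image with Gaussian noise, keeping less and less of the original signal as the step number grows. The network is then trained to predict the noise from the noisy result, and generation runs the whole process in reverse:

```python
import numpy as np

def add_noise(image, t, num_steps=1000):
    """Forward diffusion: blend a clean image with Gaussian noise.

    At step t the image keeps sqrt(alpha_bar) of its signal and gains
    sqrt(1 - alpha_bar) worth of noise; alpha_bar shrinks toward zero
    as t grows, so by the last step the result is almost pure noise.
    """
    alpha_bar = np.cos((t / num_steps) * np.pi / 2) ** 2  # simple cosine schedule
    noise = np.random.default_rng(0).normal(size=image.shape)
    noisy = np.sqrt(alpha_bar) * image + np.sqrt(1 - alpha_bar) * noise
    return noisy, noise

image = np.ones((8, 8))                     # stand-in for a real picture
slightly_noisy, _ = add_noise(image, t=100)  # early step: mostly image
very_noisy, _ = add_noise(image, t=900)      # late step: mostly noise
```

The schedule here (a cosine curve) is just one common choice I picked for the sketch; real systems tune it carefully.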

This is where Night Café's Coherent and Stable Diffusion techniques come in. These techniques allow the system to start with a style image, rather than random noise, and generate images that are coherent with that style.

Coherent Diffusion works by blending the style image with random noise and then gradually removing the noise until the final image emerges. The result is an image that shares the style of the reference image.
Not quite what I expected! Then I realized Norman Lindsay did paint a lot of scantily clad sirens!

Stable Diffusion, on the other hand, first adds a controlled amount of noise to the style image and then denoises it to produce the final image. This technique is particularly useful for generating images that are similar to the style image, but with slight variations.
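In both approaches the key knob is how much noise gets mixed into the reference image before denoising begins. Here is my own toy sketch of that idea (the function and parameter names are made up for illustration, not Night Café's actual settings):

```python
import numpy as np

def start_latent(reference, strength, rng=None):
    """Build the starting point for reference-image diffusion.

    strength = 0.0 -> start from the reference unchanged (output ~ input)
    strength = 1.0 -> start from pure noise (reference is ignored)
    Values in between yield variations that stay close to the reference.
    """
    if rng is None:
        rng = np.random.default_rng(42)
    noise = rng.normal(size=reference.shape)
    return np.sqrt(1 - strength) * reference + np.sqrt(strength) * noise

ref = np.full((8, 8), 0.5)               # stand-in for a style image
close = start_latent(ref, strength=0.2)  # mild variation on the reference
wild = start_latent(ref, strength=0.9)   # mostly new content
```

A low strength keeps the starting point close to the style image, so the denoiser reconstructs something very similar; a high strength buries the reference under noise, giving the system much more freedom.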
In conclusion, Night Café's Coherent and Stable Diffusion techniques are a major breakthrough in the field of text-to-image systems. By allowing the system to start with a reference image, these techniques offer a new level of control and precision in image generation. The possibilities for creative applications are endless, and we can expect to see even more exciting developments in this field in the future. My next post will return to Google's Deep Dream Generator and its newest feature, Text-2-Dream.
With thanks to ChatGPT, which was able to translate my techno babble into easier-to-follow plain English (but I did have to correct it in a few places, so generative text AI is not perfect yet either). I left in its enthusiasm in the last paragraph even though I still have some reservations.
