Tuesday, February 28, 2023

Using a reference image within AIart IV

I’m returning to Google’s Deep Dream Generator, this time using the same lily photo as my reference via its Text 2 Dream tab and the prompt “Watercolour painting in the style of John Singer Sargent.”


This one really hums. The layout/composition has been altered a little, and the flowers are more roses than lilies, but I can see it has added some butchered signatures. All is not perfect, yet!

My next post will return to DALL·E and investigate its Outpainting feature.

Monday, February 27, 2023

Using a reference image within AIart III

The world of text-to-image systems has been revolutionized by a new technique called diffusion. Originally described as a Latent Diffusion Model, diffusion has been widely adopted for its ability to encode training images into a compressed latent space, progressively add noise to them, and then learn to reverse that noising so the system can decode clean images back out.

Night Café is offering a few techniques that take this a step further. They have a diffusion-like twist: they begin with a reference image that has only limited noise added, rather than starting from pure random noise. Night Café offers two such approaches that start with a style image: Coherent and Stable Diffusion.

To understand the significance of this development, let's take a closer look at how diffusion works. The core idea is to train a neural network to predict the noise that was added to an image at each step, so that the noise can be removed a little at a time. Generation runs the process in reverse: the system starts from pure random noise and denoises it step by step until a complete image emerges. But unlike other image generation techniques, diffusion is not limited to a specific set of images; because every random starting point decodes to a different picture, it can generate an effectively unlimited variety of images.
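To make that concrete, here is a minimal toy sketch (my own illustration, not any particular system's code) of the denoising loop at the heart of diffusion. The `predict_noise` stand-in is where the trained network would go:

```python
import numpy as np

def predict_noise(noisy_image, t):
    # Trivial placeholder: a real system uses a large trained network
    # (typically a U-Net) to estimate the noise present at step t.
    return np.zeros_like(noisy_image)

def generate(shape, betas):
    """Reverse (sampling) process: start from pure noise and remove a
    little of the predicted noise at every step."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = np.random.randn(*shape)            # pure random noise
    for t in reversed(range(len(betas))):
        eps = predict_noise(x, t)          # the network's noise estimate
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                          # re-inject a little noise, except at the end
            x += np.sqrt(betas[t]) * np.random.randn(*shape)
    return x

betas = np.linspace(1e-4, 0.02, 1000)      # a standard DDPM-style noise schedule
image = generate((64, 64, 3), betas)
```

Every different starting noise decodes to a different image, which is where the "effectively unlimited" variety comes from.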

This is where Night Café's Coherent and Stable Diffusion techniques come in. These techniques allow the system to start with a style image, rather than random noise, and generate images that are coherent with that style.

Coherent Diffusion works by blending the style image with random noise and then gradually removing that noise until the final image is generated. The result is an image that shares the style of the reference image.
Not quite what I expected! Then I realized Norman Lindsay did paint a lot of scantily clad sirens!

Stable Diffusion, on the other hand, works by adding a controlled amount of noise to the style image and then denoising it back into a finished picture. This technique is particularly useful for generating images that are similar to the style image but with slight variations; the more noise added, the further the result can drift from the original.
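Night Café doesn't publish its internals, but the same start-from-a-reference idea is exposed in the open-source diffusers library, where a `strength` parameter controls how much noise is added to the starting image before denoising. A sketch under my own assumptions (the model name, file names and settings below are illustrative, not Night Café's):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a public Stable Diffusion checkpoint (assumed model id).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The reference image -- here, a stand-in for the lily photo.
init_image = Image.open("lilies.jpg").convert("RGB").resize((512, 512))

# strength sets how much noise is added to the reference before
# denoising: low values stay close to the photo, high values let
# the text prompt dominate.
result = pipe(
    prompt="watercolour still life of pink lilies",
    image=init_image,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
result.save("lilies_watercolour.png")
```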
In conclusion, Night Café's Coherent and Stable Diffusion techniques are a major breakthrough in the field of text-to-image systems. By allowing the system to start with a reference image, these techniques offer a new level of control and precision in image generation. The possibilities for creative applications are endless, and we can expect to see even more exciting developments in this field in the future.

My next post will return to Google's Deep Dream Generator and its newest feature, Text 2 Dream.
With thanks to ChatGPT, which was able to translate my technobabble into easier-to-follow plain English (but I did have to correct it in a few places, so generative text AI is not perfect yet either). I left in its enthusiasm in the last paragraph even though I still have some reservations.

Saturday, February 25, 2023

Using a reference image within AIart II

In the previous examples, I could see that the very basic style was being found, but the lilies had a wonderful colouring and I wanted to include that. So it was over to Google Image search, looking for “pink flowers”, “painting” and “still life”. There were hundreds to search through, but I soon found a link to a great watercolour demo by Barbara Fox. Not only do I have the "right colours", I also have the pigments used and a step-through of Barbara's painting process.
How would the Style Transfer AIart do?


Well wow, not bad. This is really generating a rich feeling for the subject, looking somewhat reminiscent of Nora Heysen's work.

My next post will look at a slightly different approach, usually called diffusion, which works in a compressed latent space rather than directly on the image pixels.


Friday, February 24, 2023

Using a reference image within AIart I

Artificial intelligence (AI) has been making great strides in recent years, and one of its most fascinating applications is in the field of art (AIart). With the help of generative adversarial networks (GANs), we are able to create art that is completely unique and unlike anything that has been seen before.

Most earlier neural networks used a general approach of starting with random noise. Early attempts were crude and took numerous iterations, yet one of them, Portrait of Edmond de Belamy (2018), fetched a whopping US$432,500 at auction. This approach uses two networks: the first, called the generator, is trained on images; the second, known as the discriminator, scores how plausible the result of each iteration is. The approach is usually referred to as a GAN (Generative Adversarial Network).

The generator's job is to create images from random noise that look as if they could be real. The discriminator's job is to differentiate between the generated images and the real ones. The two networks are trained together in a feedback loop, with the generator trying to improve its results and the discriminator trying to get better at identifying the generated images.
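As an illustration only (a minimal sketch of the adversarial feedback loop, not the code behind any artwork mentioned here), the two-network training step might look like this in PyTorch:

```python
import torch
import torch.nn as nn

# Tiny stand-in networks; real art GANs use much deeper convolutional models.
generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),          # 784 = a flattened 28x28 image
)
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                        # raw "realness" score (logit)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images):                  # real_images: (batch, 784)
    batch = real_images.size(0)
    noise = torch.randn(batch, 64)

    # 1. Train the discriminator: real images should score 1, fakes 0.
    fakes = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
              loss_fn(discriminator(fakes), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator: try to make the discriminator score fakes as 1.
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```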

A slightly simpler approach to creating art with neural networks is to build a smaller network based on a single image, or a small number of images, from a specific artist. This avoids starting from random noise altogether and can produce stunning results.

Back in 2015, German researchers from the University of Tübingen discovered that the earlier layers of neural networks trained on images respond mainly to basic picture elements such as line types, shapes, directions, tones and colour. Collectively, these elements make a good representation of "style". This means that by training a smaller neural network on a specific artist's work, we can use an existing image as the starting point to generate new artwork in that artist's style.
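The "style" representation from that 2015 work (Gatys, Ecker and Bethge's A Neural Algorithm of Artistic Style) boils down to Gram matrices: correlations between the feature channels of the network's early layers. A minimal sketch of that computation, using a pre-trained VGG network from torchvision (the layer cutoff and image sizes are my assumptions):

```python
import torch
from torchvision.models import vgg19, VGG19_Weights

# Early layers of a pre-trained VGG19 act as the feature extractor.
features = vgg19(weights=VGG19_Weights.DEFAULT).features[:12].eval()

def gram_matrix(feature_map):
    # Channel-to-channel correlations: these capture "style"
    # (textures, colours, stroke character) independent of layout.
    b, c, h, w = feature_map.shape
    flat = feature_map.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)

with torch.no_grad():
    style_image = torch.rand(1, 3, 256, 256)   # stand-in for the artist's image
    style_gram = gram_matrix(features(style_image))

# Style transfer then optimizes a new image so its Gram matrices match
# style_gram, while its deeper features still match the content photo.
```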

Google's Deep Dream Generator was an early adopter of this approach and offered a Deep Style tab to perform it. Many other systems now exist that promise to interpret images, like your selfie, in the style of famous artists.

In this example, I’m using a simple black-and-white pen sketch of my own to control the line work, while sampling the original image's colour. It's about what I expected and not exciting; I did, however, like the unexpected suggestion of a face showing up in the flower on the left.

Created with Google Deep Dream Generator

Style Transfer is available at Night Café, and in these examples I used it with two different style reference images. For the first, I'm starting with a different flower photo and again using my tree sketch, only this time using the option to make a short movie of the iterations involved in "finding" the resulting image.


For my last example, I'm using a lesser-known Vincent van Gogh still life (why do so many start with Vincent’s work in their exploration of AIart?).

This time the results are stronger and closer to something that might be considered good, more spontaneous art, but still not really impressive.

In my next post I will look at trying to get a better colour likeness in the result.

Thursday, February 23, 2023

Delving Deeper into the Worlds of AIart

As the world around us changes rapidly, it's important to keep up with the latest technologies and trends. That's why I decided to delve deeper into the world of AIart, exploring the various offerings and services available. Specifically, I've been experimenting with three of the current top AIart tools: Google Deep Dream Generator, DALL·E 2 and Stable Diffusion.

To test out these tools, I've been working with a local group of artists who meet fortnightly on Zoom to paint from a photo. Jeannine Desailly, a co-founder of our little Wednesday Wanderers painting group, kindly supplied a set of photos to paint. I will base my experiment on them.



My first step was to send a suitable prompt, “pink lilly still life”, to DALL·E 2. While the result was superficially impressive, a few things were not quite right about the output. The image was overcropped and a little too zoomed in, and it lacked the aesthetics and composition of a still-life subject.

But I'm not giving up on these AIart tools just yet. In my next post, I'll be exploring ways to "assist" these tools using these reference images. Stay tuned!

Monday, February 20, 2023

Photowalking on Sunday

Next Photowalk: Seeing in Detail

The first photowalk has come to an end, with only a few participants showing up. Thanks to those who came! But don't worry, we'll make sure to notify you of the next one well in advance. In fact, it's already scheduled on the MGA events calendar!

During this photowalk, we explored the concept of seeing by chance, inspired by Freeman Patterson's idea of thinking sideways. I provided a short PDF (optimized for phone or iPad), Seeing by Chance, that outlined the tasks for the day. Firstly, we asked participants to capture three different themes or hashtags. Secondly, we encouraged them to periodically throw a dice and either choose the number of subjects in a photo or the direction they must photograph.

Our objective was to challenge participants to move beyond taking a simple snapshot. With modern cameras and phones, it's easy to obtain well-focused and well-exposed photos. Instead, our random tasks were designed to distract the photowalkers and encourage them to look for different things. By taking numerous photos, participants had the opportunity to review their work later, to see what they captured and how they managed to capture it.

Here are some of my results:

#Tranquil, #shadows


#flowers #curve



Six Dots on the Dice

Five Dots



Straight Ahead

A small aside: I forgot to pick up my dice when I took this, but later I was able to retrace where I took the shot from this photo, and there was my dice on the ground!

 

#sculpture #sky

Start planning for the next photowalk; hope to see you there!

Friday, February 03, 2023

There is always time to find your Zoe & your Magic Marker