I am quite fascinated with the potential of the deep dream generation process, brought to light by Google researchers trying to explain how their artificial intelligence recognises objects in photos (unfortunately that research had started with discovering pictures of cats on social media; I have been known to unfollow/unfriend people who over-post cat photos). To demonstrate the way their neural networks work, they showed the state of the network as it tried to recognise characteristic shapes, outlines, colours and textures/patterns within a given image. As you go deeper and deeper into the layers of the neural network you get wonderfully focused images that eventually become very surreal (trippy) and dream-like. Hence the name of the project: Deep Dream.
There are many competing systems around now, and I probably still prefer Dreamscope on the web and Lucid on my Android phone. The best variations of the deep dream approach also allow you to train the neural network on your own photos or artwork and then apply that network to your intended photo. There is a fair amount of computing involved in training these neural networks, but Google and the other services handle this by spreading the task across their clouds of servers; it still takes time. The deeper you want to go and the larger the output required, the longer it takes (Google seems to abandon the task after 15 minutes, not sure why yet).
Rather than treating this as just a one-click Instagram-style filter, I want to go deeper and see how it can become a tool for developing more compelling images. I had Monday’s daily photo of the reflection in the lake (photo above), and a very ordinary webcam image (photo on the right, from which I wanted to create a profile avatar). I researched the application of neural networks in a former life, and I know that “hidden layers” can add significantly to the success of predictions, so I wanted to add a hidden artification step. Not so easy without access to the source code. I first took the reflection photo through Google’s deep style process using the Dark Pastel trained network. This produced an interim image somewhere between a photo and a pastel sketch. Not exactly exciting in itself, but it remains largely honest to the colours of the original photo while having a strong hand-drawn appearance.
I then fed a square-cropped version of my self portrait into another deep style pass, but this time I used the interim photo/sketch as the image to train a new neural network upon. This is why I am calling this the third generation (1. original photo, 2. pastel-like photo/sketch, 3. neural network filter).
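The only step in this chain I handle locally is the square crop of the self portrait. As a minimal sketch (pure Python; the helper name is my own, not from any of the services mentioned), the centred crop box works out like this:

```python
def square_crop_box(width: int, height: int) -> tuple[int, int, int, int]:
    """Return (left, top, right, bottom) for the largest centred square
    that fits inside an image of the given dimensions."""
    side = min(width, height)          # the square can be no bigger than the shorter edge
    left = (width - side) // 2         # centre horizontally
    top = (height - side) // 2         # centre vertically
    return (left, top, left + side, top + side)


# For a typical 640x480 webcam frame, the centred square is 480x480:
print(square_crop_box(640, 480))       # (80, 0, 560, 480)
```

A box in this (left, top, right, bottom) form can be passed straight to an image library's crop call (for example Pillow's `Image.crop`) before uploading the portrait for the style pass.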
The result was indeed a good combination of the colours and feel of the original photo along with the style of marks you might get from pastels, and yet I was still recognisable, peering between my hands held in a diamond-shaped frame.
I like where this approach is going.