The lighting was not so perfect this afternoon, so I decided to try another seeing exercise from Freeman Patterson. This time I'm looking at the idea of thinking sideways: looking at something familiar but not normally photographed. The trick here is to think of a reason you don't take the photo, and instead look for a way to make it become a photo. Freeman suggests trying to break a rule, like not photographing into the sun, or avoiding balanced and centred compositions. On the way to my favourite spots on the far side of Jells Park lake, to capture the setting or late-afternoon sun, I have to walk along bush tracks. I have seldom photographed them, so next I looked to break a rule or two.
On my way out I turned and looked back towards the sun (and yes, damn, there is sun flare). I also took a bracketed set of exposures so I could play with the tonal aspects. I then processed HDR versions in two different applications (firstly, on the right, using AfterShot's HDR merge, and secondly, on the left, Nik's HDR Efex Pro 2).
I treated the AfterShot Pro merge as I normally do (without using a preset), adjusting the tonal sliders myself to match the scene as I remember it. The HDR Efex software is still a bit new to me, so I chose to use its Soft Landscape preset.
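Neither application publishes its merge algorithm, but if you want a feel for what merging a bracketed set does, OpenCV's Mertens exposure fusion is a quick stand-in. A minimal Python sketch, with placeholder file names for the three brackets:

```python
import cv2
import numpy as np

# Placeholder file names for the bracketed exposures (under, normal, over)
frames = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens exposure fusion: blends the best-exposed parts of each frame
# directly, with no radiance map or separate tone-mapping step required
fused = cv2.createMergeMertens().process(frames)

# The output is float, roughly 0..1; scale back to 8-bit for saving
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```

This is only a sketch of the general idea; the commercial tools add tonal controls and ghost removal on top.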
Further back along the track the sun broke through the clouds for a few moments, so, looking straight back down the track with the classic centred composition you are supposed to avoid, I took another bracketed set of photos. However, the cross lighting and the shadows in the distance do attract the eye and make the composition appear more interesting. Again I created two HDR versions.
Of these images I do like the bottom left one; it has stronger contrast (because I created it that way) and perhaps fits more closely with what I remember of the lighting at the time. I'm glad I spent some time thinking sideways and looked at these tracks afresh.
This is a preliminary test for my entry into the National Portrait Gallery's Digital Portraiture Award 2016.
This is a video of a set of tripod selfies taken in front of one of my art works (the Holocene Mass Extinction). They have been processed through Google's Deep Dream generator trained on a number of Archibald Prize finalists' paintings.
The self-portraits remain in essence images of me, but they borrow from, approximate and draw on other artists' mark making via appropriation from artificially intelligent neural networks.
I noticed that the new PhotoFriday included a monitor adjustment. I also found this older article on Flickr Central, which goes into a fair bit more detail about doing the tonal adjustment for your computer monitor, and it's just as relevant today.
It is not me that benefits from you doing this; it is you. You will be better able to appreciate the effort good photographers have undertaken to capture well-balanced and rich tonal ranges.
If you are serious about looking at photos on-line, check out the test strip above (it should show 17 shades of grey plus black and white). Commonly it is the strips at either end that might look the same. Dive into the post to see what needs to be adjusted.
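If you want to make your own strip to test with, it is only a few lines of code. Here is a minimal Python sketch using Pillow; this is my own rough equivalent, not PhotoFriday's actual strip. Seventeen intermediate greys plus pure black and pure white gives 19 patches:

```python
from PIL import Image, ImageDraw

# 19 patches: pure black, 17 intermediate greys, pure white
PATCHES = 19
PATCH_W, PATCH_H = 40, 120

strip = Image.new("RGB", (PATCHES * PATCH_W, PATCH_H))
draw = ImageDraw.Draw(strip)

for i in range(PATCHES):
    level = round(i * 255 / (PATCHES - 1))  # 0 for black up to 255 for white
    draw.rectangle(
        [i * PATCH_W, 0, (i + 1) * PATCH_W - 1, PATCH_H - 1],
        fill=(level, level, level),
    )

strip.save("grey_test_strip.png")
```

If the patches at either end merge into their neighbours on your screen, your brightness or contrast needs adjusting.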
I must admit I liked the Henri Rousseau-style images made from my photos using Google's Deep Dream generator. So I decided to have a bit of fun using half a dozen famous Australian landscape painters.
Can you guess whose style I am using to render my two photos (above)? (answers in comments)
I have been interested for some time in multiple exposures and superimposing subjects across time. This image is not a multiple exposure in camera; instead it combines two photos taken only a short interval apart. It was just on dusk and two ducks were lazily swimming by. I had already thought of the idea of using flash and non-flash frames for the multiple images, so I grabbed my camera and took a couple of photos, trying to keep the ducks in the centre of the frame.
Back home I put the two images together, but it was quite boring, because the light water in the normal exposure cancelled the dark water in the flash photo (a boring blue/grey) and the ducks were cut in half where they overlapped. So I had the idea of using a luminosity mask in OnOne 10. First I used a dark brocade texture layer and replaced the lighter values in the photo (the water) while keeping the darks (the ducks and shadows). Next I overlaid the flash image and, using an inverted luminosity mask, let the blend bring the modified original photo and texture into the dark background. A slight crop left the illuminated ducks swimming on a surreal surface, with their available-light selves becoming living shadows.
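OnOne 10 has its own tools for this, but the idea behind a luminosity mask is simple enough to sketch in a few lines of Python with Pillow. This is my rough approximation of the steps above, with placeholder file names, not OnOne's actual processing:

```python
from PIL import Image, ImageChops

# Placeholder file names for the two exposures and the texture
ambient = Image.open("ambient_ducks.jpg").convert("RGB")
flash = Image.open("flash_ducks.jpg").convert("RGB")
texture = Image.open("dark_brocade.jpg").convert("RGB").resize(ambient.size)

# A luminosity mask is just the image's own brightness channel
lum = ambient.convert("L")

# Light areas (the water) take the texture; dark areas (the ducks and
# shadows) keep the original ambient exposure
base = Image.composite(texture, ambient, lum)

# Inverted mask on the flash frame: its dark background recedes into the
# blended base while the bright, flash-lit ducks come forward
inv = ImageChops.invert(flash.convert("L"))
result = Image.composite(base, flash, inv)

result.save("ducks_composite.jpg")
```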
I've been submitting photos to the weekly PhotoFriday challenges fairly routinely for a long time, and I normally post the photos here on this blog. I liked their site, but it has now been significantly upgraded (specifically, you can see the images directly without having to click through). It is much more Instagram-like and easy to contribute to. It may take me a while to get used to, but I'm willing to keep contributing, and that means signing up and perhaps not posting the photos here (just their background story, if and when they have one).
The one thing that instantly grabbed my attention was the monitor calibration, something I wish more people would do. However, I also know that colour calibration is a big issue (way beyond those devices that supposedly do it for you), and the ambient lighting has a lot more influence on the colours you see. Further, it may not always address tonal accuracy either. I know that it is the tonal aspects (brightness and contrast) that most folk stumble with; they have either super-contrasty, very dark or extra-light monitors, and they have got used to them. This sometimes means all the tonal effort a photographer has gone to gets lost on the viewers. PhotoFriday's calibration screen will probably get you to a much better place to appreciate the tonality of a good photo. If you see subtle targets in the black and white circles in the test above, you are probably ok; but if either the black or the white shape is just a single circle, you need to do the calibration. Remember to follow the instructions (e.g. dim the room lights).
The USB interface has revolutionised the way, and the number of, devices you can connect to your computer (and Android phones, cameras and Windows computers; the Apple-verse is catching up, but they have an even more complex range of cables to manage). The downside is all those cables, lots of similar ones. Storing them often leads to a jumbled mess, and it often takes time to find the end you need.
Whilst buying some hardware supplies I chanced upon some cheap plastic storage boxes (under $2 each), with removable sliders. I originally visualised them storing longer screws and/or bolts. However, when I put one on the side table beside the cables, the electric globe of inspiration came on. Now I have my cables back in my desk drawer, but I can quickly find the cable I need and don't have to untangle anything. I also have an extra SD card reader in the box, as well as another one in my camera bag.
Inceptionism is a name made up by some of the Google software engineers working on a project (Deep Dream) in which they were training artificial neural networks to recognize objects within images. It is an attempt to characterize a technique they developed to explain what was happening in their networks. The article in the link above explains some of the process in more detail, but I like to think of it as a slightly different way of seeing the knowledge the network is building, played back as superimposed patterns that reflect what it is looking for. The deeper you go, the more things look like what you seek, and the patterns of line, form and light get distorted or re-seen as something else. Positive reinforcement gone wild, in a meta-self-referencing sort of way. If the network is looking for animals, birds or insects, mystical beasts start to appear everywhere, as if in a dream (hence the project name), in an artificially intelligent neural network cycling through its meals of data on a computer somewhere in "the cloud".
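To give a feel for how little is really going on, here is a rough sketch of the feedback loop in PyTorch. This is my paraphrase of the published technique, not Google's released code; the file name and layer number are placeholder choices:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Any pretrained convnet with accessible intermediate layers will do
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)  # we optimize the image, not the network

img = Image.open("input.jpg").convert("RGB")  # placeholder file name
x = T.Compose([T.Resize(512), T.ToTensor()])(img).unsqueeze(0)
x.requires_grad_(True)

LAYER = 20  # which layer to "dream" at; deeper layers hallucinate whole objects

for _ in range(30):
    out = x
    for i, layer in enumerate(model):
        out = layer(out)
        if i == LAYER:
            break
    out.norm().backward()  # amplify whatever this layer already responds to
    with torch.no_grad():
        x += 0.02 * x.grad / (x.grad.abs().mean() + 1e-8)  # normalized ascent step
        x.grad.zero_()
        x.clamp_(0, 1)

T.ToPILImage()(x.detach().squeeze(0)).save("dreamed.jpg")
```

The real systems add tricks like processing at multiple scales and jittering the image, but the feedback idea is exactly this: nudge the pixels to strengthen whatever the network thinks it sees.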
The Deep Dream code and many of the trained networks have been publicly released, and I am still keen to code it up and build something for myself one day (probably not soon). However, the meme of deep dreams has spread quickly on the internet, and there are now many websites that offer the Deep Dream approach. One of the first was Dreamscope, and it is my favourite, but Google also has an on-line Deep Dream generator. Many of these sites, like Lucid for Android phones, offer an Instagram-like pick-and-one-click filter approach. There are already two variants:
Convolutional neural networks. These have been described by a team at the University of Tübingen. These networks are based on a feed-forward approach, and they are starting to be able to detect style and the nature of the marks made, as well as the content. The Tübingen study suggests that style and content are separable, so you can apply a particular style to a different subject. Systems using this approach often have "style" in their name (e.g. Google Style), and I will adopt that name to characterize this group (a small code sketch of the core idea follows below).
Inceptionist neural networks. This is the approach described in the Google paper (link at the top of this post). It is really just a way to describe Google's approach: extended (more layers) convolutional neural networks that have been put into a feedback loop, desperately looking for what they know how to find.
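For the curious, the heart of the Tübingen "style" result is representing style as the correlations between a layer's feature channels (a Gram matrix), which throws away spatial layout and is why style separates from content. A minimal sketch of that computation (my illustration, not their released code):

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Style representation from the Tuebingen paper: channel-by-channel
    correlations of a convnet layer's activations. The spatial layout is
    averaged away, which is why style separates from content."""
    b, c, h, w = features.shape           # batch, channels, height, width
    f = features.view(b, c, h * w)        # flatten the spatial dimensions
    return f @ f.transpose(1, 2) / (c * h * w)  # (b, c, c), one common normalization

# Style transfer then optimizes an image so its Gram matrices match the
# style painting's, while deeper-layer activations still match the photo.
```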
So how useful are these techniques to help an artist create original art?
1. Style
This group is likely to be the most over-used (Instagrammed to death) because they are in general so easy to use. I am experimenting with such a system called Lucid on my Android phone; there are plenty of others for Android and Apple, and even faux look-alike systems. The better ones, like Lucid, do the processing in the cloud: you load the photo, wait a little while, then pick the thumbnail showing the result you want (Lucid uses famous art works for its network training). You pick one, wait a little longer, and presto, you have something like your photo but painted or sketched by that artist. Some images work, and work wonderfully; a lot just look sad. It will be the posting of these unfortunate images that might give this approach a bad name.
There are some web-based "filter" or "app" sites that also offer this style approach. My example here uses Google's Deep Style approach and a Henri Rousseau painting as my network training example. The results are clearly Rousseau-inspired and yet also clearly recognizable as the subjects of my photos.
Whilst I have seen many wonderful examples, there are a lot of very trashy examples as well, and this may lead to the same reaction many non-photographers have to the more extreme HDR works: they may reject these works as a gimmick, without seeing the potential.
2. Inceptionism
It is not only Google that offers access to the original Inceptionism technique; here I am using Dreamscope again, but with two different neural training sets following the feedback approach. The first, called Angel Hair, is more about texture and the flow of lines; the second, called Inception Painting, is about seeing animals and generates those weird animal eyes and patterns. You may need to click on the images below to see this detail.
Inceptionism is very quirky, and needs to be used with restraint. Also, the deep gallery sites in general are becoming far too overloaded with selfies and cats.
Still I see a lot of opportunity to build original art with these techniques.
I have had XnView on my USB Darkroom Key (it's always in my camera bag and carries portable apps, so that if and when I only have access to a cyber cafe or library computer I can still work on my photos). It is a really simple to use but feature-rich photo viewer/manager. I have also had XnView MP (a multi-platform enhanced version) running on a repurposed Linux machine and my desktop computer, and they do get frequently used. However, Picasa is still my go-to application for loading, organizing and culling new photos. The newest version of XnView MP (0.81) may change my mind.
There are three reasons:
Its RAW rendering is fast and decent. I would not suggest it is a fantastic replacement for other RAW editors, but it does an ok conversion. The downside is that it converts the RAW image into an 8-bit JPEG, so you will lose some of the potential dynamic range (most cameras capture 12 to 14 bits per channel in RAW).
It now includes (from version 0.80) the ability to read and write metadata in .xmp sidecar files, which means you can transfer ratings and metadata into other photo managers (those that can read the Adobe sidecar format). You have to go into settings under the Browser/Metadata tab and click on the options highlighted on the right. Again, there is a small caveat in that XnView does not include pick and reject flags (so useful in Lightroom and other software) when first reviewing and culling newly loaded photos. Also, the XMP compatibility does not extend to post-processing options (just the metadata and ratings). I cannot write to XMP in Picasa, so this is a big plus.
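For reference, a sidecar is just a small XML document sitting beside the RAW file. A hand-trimmed illustration of roughly what one carrying a rating and a keyword looks like (XnView's actual output will include more fields):

```xml
<!-- IMG_1234.xmp, sitting beside IMG_1234.CR2 -->
<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description rdf:about=""
        xmlns:xmp="http://ns.adobe.com/xap/1.0/"
        xmlns:dc="http://purl.org/dc/elements/1.1/"
        xmp:Rating="4">
      <dc:subject>
        <rdf:Bag>
          <rdf:li>jells park</rdf:li>
        </rdf:Bag>
      </dc:subject>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>
```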
It has a simple Tools/Import and Sort feature that can load photos from a variety of sources (cameras, card readers, external drives; they just need to be connected to the computer and have a drive letter and/or folder directory). Whilst XnView MP does build a catalogue entry and thumbnail at this time, it is way faster than Lightroom, for example. You can also add metadata and do some filtering by time (assuming the photo has EXIF data). It can also write the photos to a subdirectory named yyyy_mm_dd according to the day each was photographed, and this is how I like to organize my files.
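XnView MP does this internally, but the folder scheme is easy to picture. A rough Python sketch of the same EXIF-date sort (the paths are placeholders, and it copies rather than moves, to be safe):

```python
import shutil
from pathlib import Path
from PIL import Image

SRC = Path("/media/card/DCIM")     # placeholder source (card reader, drive, ...)
DEST = Path("~/Pictures").expanduser()

for photo in sorted(SRC.rglob("*")):
    if photo.suffix.lower() not in (".jpg", ".jpeg"):
        continue
    stamp = Image.open(photo).getexif().get(306)  # EXIF DateTime, "2016:07:16 17:42:10"
    if not stamp:
        continue  # no EXIF date; leave it for manual filing
    folder = DEST / stamp[:10].replace(":", "_")  # -> ".../2016_07_16"
    folder.mkdir(parents=True, exist_ok=True)
    shutil.copy2(photo, folder / photo.name)
```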
There are heaps of other well-loved features that have come from the original XnView program. The real power of XnView is in organizing your images, doing a few basic edits and sharing your photos via email or a social media site (nothing "one click" automatically uploads to those sites, but there are the right tools to make the files the appropriate size and format). There is also a great range of batch editing tools, which can simplify tasks like changing timestamps to synchronize camera times with external GPS tracks for geocoding (an all too tedious task in Lightroom).
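The fiddly part of geocoding is only working out how far the camera clock drifted from GPS time; after that it is simple arithmetic, which the batch tool applies to every file. A small Python sketch of just that calculation (the times are made-up examples):

```python
from datetime import datetime

EXIF_FMT = "%Y:%m:%d %H:%M:%S"

# Made-up example: photograph the GPS unit's clock, then compare the two times
camera_said = datetime.strptime("2016:07:16 17:42:10", EXIF_FMT)
gps_said = datetime.strptime("2016:07:16 17:40:55", EXIF_FMT)
offset = gps_said - camera_said  # here the camera runs 75 seconds fast

def corrected(exif_stamp: str) -> str:
    """Shift one EXIF DateTime string onto GPS time."""
    return (datetime.strptime(exif_stamp, EXIF_FMT) + offset).strftime(EXIF_FMT)

print(corrected("2016:07:16 18:03:00"))  # -> 2016:07:16 18:01:45
```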
There is also the ability to read and write an extensive array of graphic file formats (the web site claims 500). Ok, there are only a few formats I regularly need, but it is handy to be able to at least look at the strange files that turn up from time to time in graphic art. This would make it a great tool for managing the reference photos of a graphic artist who normally works in Illustrator or Corel Draw/Painter.
I'm still only lukewarm on it being able to replace Picasa in my toolkit, but I have made a commitment to put this version on all my computers alongside Picasa and give it a fair workout.
One disappointment already is on the HP Spectre (a 2-in-1 Windows 10 laptop), where it is decidedly awkward to use in tablet mode and the touch screen experience can be very frustrating.
Choose a starting point and take a given number of steps (let someone else choose how many, or throw dice, or just pick a random number). Mark this spot and then count out just seven paces. This second distance gives you the radius of a circular area in which you must now find at least 20 things to photograph.
I had picked a particular track near the lake and stepped out 79 paces (I thought that would get me much further down the track, but I ended up close to where I had taken the shadow self-portrait only a few weeks ago). Still, it was by chance, and coincidence is chance too. Looking around, there was the track, dead grass/reeds and some fresh weeds! In the distance were some high-tension power gantries and scrubby trees: nothing exciting to photograph, really. It is easy enough to take one or two obvious pictures of the area, but Freeman warns this is not enough to start you seeing.
"You should feel desperation during this exercise. You will only start to make visual breakthroughs when you have exhausted the obvious picture possibilities."
Take the obvious picture(s), get it out of your mind. Then spend some time looking a bit closer. What else can you see? Is there a specific pattern or texture, strong lines of flow, interesting close-up detail? What I saw first was that there was not just one track (the human-made walking track) but in fact dozens of animal and/or bird tracks through the reeds.
I also saw delicate lace in the dead grasses and swamp melaleuca.
Freeman may be a harder taskmaster than me, and he goes on to suggest:
"If you want to challenge yourself even more, make thirty or forty images."