Friday, February 23, 2024

Luminar Neo's GenAI Tools Need More Time in the Oven

Luminar Neo recently added three new generative AI tools: GenErase, GenExpand, and GenSwap. I’d seen a bit of hype about them, and they turned up with a 30-day trial, so I had to play around and try these new features. I have to say they clearly needed more time in development before being released to the public.

The tools are only available through Neo's subscription service, presumably because they rely on cloud computing power. This means they will likely never be available to run locally on a desktop.


I was most interested in testing out GenExpand, which is supposed to let you extend the edges of an image. I do like Neo’s Panorama stitching extension, but I often get bulbous, untrimmed results when stitching handheld panoramas, so I thought GenExpand could help tidy those up. Unfortunately, my first attempt to expand a massive 588MB panorama ran forever and then just produced blackness over the area I’d selected. Oh well, back to the drawing board.

A smaller test image did successfully expand, but the new edge area was blurry and grayscale. On closer inspection the horizon matched, but the clouds and waves didn’t match up well with the original.


Hoping for better luck, I tried GenSwap to insert a kangaroo into a photo. The AI clearly wasn't trained on enough Aussie animals; it produced a bizarre creature, but “that’s not a real kangaroo”. At this point, my enthusiasm was waning.


Finally, I tested GenErase to remove objects from photos. It performed decently but didn't seem much better than the standard erase tool already in Luminar Neo. Trying to erase a larger object again resulted in the tool freezing up.


In the end, while the ideas behind these new GenAI tools are intriguing, I feel they simply aren't ready for practical use. Too many bugs, glitches, and failures to finish make them more frustrating than functional. Luminar Neo would have been better served by traditional beta testing before releasing them. For now, I don't trust these tools, or for that matter many other developers' generative AI tools, to deliver satisfactory results. Or are my expectations too high? Maybe someday the technology will mature into something more reliable, but for now GenAI feels more like a flashy, breakable toy and an exercise in “keeping up with the Joneses”.

Tuesday, February 20, 2024

Happy Birthday Flickr

Today marks the eve of Flickr's 20th anniversary, a milestone that brings back memories of its inception in 2004. They have a nice article about their significant moments and timeline on their blog.

Reflecting on my journey with Flickr, I recall how I initially turned to it as a platform to upload and share photos for this blog, which started out as "things they forgot to tell you about digital photography". It was a time when the concept of cloud storage and HTML image links felt groundbreaking. Moreover, Flickr's community features, such as groups and commenting, added depth to the experience, fostering connections among photographers worldwide. Introducing their "interestingness" algorithm brought exposure to countless talented individuals, while Flickr’s commitment to Creative Commons licensing set a standard that others struggled to match.


For me, Flickr and digital photography have been intertwined since I acquired my first privately owned digital camera in 2003. Over the past two decades, I've amassed a collection of cameras, each holding its own story. It's been a little embarrassing laying out the number of cameras and estimating the money spent! While I no longer use most of them, I cherish the memories and photographs those cameras enabled. Most are still operational, though finding the right memory card for my original Olympus Camedia or a battery charger for an early rechargeable may prove challenging.

As a consulting geologist, I purchased a Canon Rebel DSLR for its video capabilities, and it has served me well for filming professional videos over the years. I still use that Canon today with a tethered setup to photograph my art on a copy stand. Over time I invested in two Pentax digital SLRs as megapixels climbed into the 12-16MP range, though I seldom use those models now; they were, and still are, great cameras, just a bit heavy.

My two sleek Olympus mirrorless cameras have re-ignited the joy, fun and passion for capturing moments. Despite the evolution of technology, these cameras continue to serve me well, each with its unique capabilities and charm.

As we celebrate Flickr's anniversary, I can't help but acknowledge how my own photography journey has evolved alongside it. While changes of ownership, lockdowns and revised account limits have impacted my activity, I remain loyal to the platform, eagerly anticipating events like the upcoming worldwide photowalk in Jells Park.

To Flickr, I extend my heartfelt gratitude for two decades of inspiration and community. Here's to many more years of creativity and connection in the digital realm. Cheers from Down Under!


Tuesday, February 13, 2024

Photography without a camera

On Saturday I attended an artist talk at the Museum of Australian Photography focused on three exhibits around “photography without a camera”. One project that really interested me was by Kate Robinson, who created images using generative AI and then made physical prints using the traditional cyanotype process. She ran a workshop on the process in the afternoon.

I've long been fascinated by optical illusions like Rubin's vase, which can be seen as either two faces or a vase. I tried generating the illusion through text prompts to DALL-E 2, but it didn't work; the AI evidently didn't know about the famous illusion, and I just got nice vases. So I experimented with Stable Diffusion instead, knowing I could supply a noisy starter image along with the same prompt, “photo-realistic version of Rubin vase”. This generated something closer to what I needed.



I then gave Stable Diffusion a detailed text prompt asking for “Male & Female Heads in profile facing each other, Professional photography, bokeh, natural lighting”. I used the latest SDXL 1.0 model, which generates very realistic images. This produced some great portraits with sharp focus on the faces and soft, blurred backgrounds, while maintaining the illusion styling.
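
If you'd like to try something similar programmatically rather than through a web interface, here's a minimal sketch of the SDXL text-to-image step using the Hugging Face diffusers library. It isn't the exact setup I used; the model ID, step count and filenames are just illustrative assumptions.

```python
# Minimal sketch: text-to-image with SDXL 1.0 via Hugging Face diffusers.
# The prompt is the one from this post; the model ID, step count and
# filenames are assumptions for illustration only.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # SDXL 1.0 base weights
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("Male & Female Heads in profile facing each other, "
          "Professional photography, bokeh, natural lighting")

# Generate four candidates, then pick the one with the best tonal range.
images = pipe(prompt, num_images_per_prompt=4, num_inference_steps=30).images
for i, img in enumerate(images):
    img.save(f"rubin_vase_candidate_{i}.png")
```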

1. Stable Diffusion Starter
2. Stable Diffusion generated images
3. Selected image upscaled
4. Greyscale & Inverted for transparency


I picked one of the four generated AI images with good tonal range, upscaled it, and inverted it to make a negative, which I then printed onto a transparency. Kate had already prepared some watercolour paper with the light-sensitive cyanotype chemicals. I sandwiched the sensitized paper with the transparency under a piece of perspex and exposed it to sunlight for about 25 minutes.
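
The greyscale-and-invert step can be done in any photo editor, but if you prefer to script it, here's a rough sketch using Python's Pillow library; the filenames are placeholders, not my actual files.

```python
# Rough sketch of preparing a digital negative for cyanotype printing:
# convert the chosen upscaled image to greyscale, then invert the tones.
# Filenames are placeholders for illustration.
from PIL import Image, ImageOps

img = Image.open("rubin_vase_upscaled.png").convert("L")  # greyscale
negative = ImageOps.invert(img)      # invert tones to make the negative
negative.save("rubin_vase_negative.png")  # print this onto transparency film
```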

When the transparency was removed, the image had magically appeared on the paper, somewhat faded, just like seeing that first print develop in the darkroom under the red light. I rinsed the paper first in water and then briefly in vinegar to set the blue cyanotype tones. In only about 40 minutes, I had gone from an AI concept to a one-of-a-kind cyanotype print, all without ever using a camera!



This project showed me the creative potential in blending digital and analogue photographic methods. I'm excited to experiment more with AI-generated images and to bring them into the real world through alternative printing processes.


Wednesday, February 07, 2024

Where Do We Go From Here?

Testing Out New AI Tools for Writing - The Good 👍 and The Bad 👎


I've been experimenting with some of the new large language AI models like ChatGPT, Bard and Claude to help summarize and clean up my dyslexic writing. At first, it seemed amazing - I could just dictate my random thoughts and the AI would turn them into clear, readable text. I even had it generate content like blog posts, YouTube scripts, and Instagram captions.

However, I started noticing some issues:
  • The AI can be overenthusiastic, especially when mentioning product names. It reads like advertising copy. I have had to rewrite these sections to keep them factual.
  • Outrageous claims and incorrect facts. About 40% of the time, the AI includes claims or "facts" that are just plain wrong. I end up removing entire paragraphs.
  • About 30% of the time, the content is good as is. The other 30% needs some reworking to tone down the language.
Clearly there's an issue here with misinformation. My current theory is that these large language models are trained on indiscriminate internet data containing conspiracy theories, misinformation, and bias. Garbage in, garbage out.

I'm finding Anthropic's Claude model more reliable with fewer glaring errors. I have used it on this post, but I still have to carefully review any AI-generated text before publishing. 

As AI becomes more ubiquitous, it's crucial that we understand how these models are trained and what biases they may contain. We have to establish checks and balances, verifying information and not blindly trusting AI outputs.

I'll keep experimenting with AI writing assistants, AI in photography and digital graphics (generative AI images such as the one above), but always maintain oversight. Stay tuned for more on responsible use of generative AI.

Sunday, February 04, 2024

Back in Nature - End of Summer Photowalk

To celebrate their 20th Birthday, Flickr are reviving their worldwide photowalks. I'm going to revisit the first walk I led as part of the worldwide photowalk project back in 2016. We'll walk through what was then the conservation area behind Jells Park Lake to capture the natural world thriving amidst suburbia. The conservation area has since been opened up and some trails are blocked off, but nature is still happily doing its thing.



This year we've had a milder summer and vegetation is thriving, though some weeds are a bit overwhelming. Insect and bird life are literally hopping and buzzing. I've scheduled the walk for later in the day to take advantage of the low-angle summer light. Precious moments, as we only get about half an hour of golden light at this time of year.

The walk winds through the trees and shrubs behind the lake. There's a quiet beauty in the lengthening shadows and soft evening light. All varieties of birds can be spotted flitting through the branches or foraging on the ground. The constant hum of cicadas and other insects fills the air. Underfoot, wildflowers and grasses sway in the breeze. It's a glimpse into the natural rhythm at summer's close.

If you'd like to stay until 8:30pm and the weather is kind, you'll have a great opportunity to watch the full moon rise over the Dandenongs or be reflected in the lake's still waters. The interplay of light on the landscape creates unique photographic possibilities.

Like the original walk, this photowalk is limited to 20 participants and is free. I hope you can join me to capture the magic of nature at summer's end! Let me know if you have any other questions.

Registration via Eventbrite