One holy grail of AI applications in photography (or any visual art) has been the ability to automatically see into the binary files that are photographic images and use that in searches. For a decade or so Google has claimed a lot of progress, starting with the genuinely useful ability to find cats in photos. Ok, I'm being a bit cynical, but I have been tracking the claims in Google Photos, which auto-labels things and places. It can look through the great dumps of photos its Backup & Sync tool uploads daily. Are things improving? You can enter a term and be "surprised" (on the first couple of occasions) when it finds some photo you had forgotten about. It does also pull up a few clearly bad matches. What I have noticed, though, is the really obvious things it misses. So I'm really not seeing any significant improvement.
For a while at least Flickr had a feature that would add tags at the time you uploaded your photo. If you bothered to check, you were able to delete incorrect terms or add your own. This seemed a better approach to me, but alas it seems to have fallen by the wayside now.
Moving forward, I'm now trying out a slightly different alternative, Excire (it has been around as a Lightroom plug-in called Excire Search for a while), which primarily classifies your images into groups to create a hierarchical set of keywords. It can then write these keywords to sidecar files, which allows the information to percolate into any other software that reads this format. I'm only using the trial version, which unfortunately doesn't let me write these files, so I haven't been able to check that part yet. Its classification strikes me as both friendlier to use and possibly better than either of the above.
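To make the sidecar idea concrete: keywords travel between applications via XMP sidecar files, where tools like Lightroom and Photo Mechanic read them from the Dublin Core `dc:subject` bag. Below is a minimal sketch of writing such a sidecar in Python; the filenames and keyword list are made up for illustration, and this is not a claim about Excire's exact output, just the general sidecar convention.

```python
import tempfile
from pathlib import Path

# Skeleton of an XMP sidecar. Most photo software looks for keywords
# in the dc:subject rdf:Bag inside the rdf:Description element.
XMP_TEMPLATE = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
    xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:subject>
    <rdf:Bag>
{items}
    </rdf:Bag>
   </dc:subject>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>
"""

def write_sidecar(image_path, keywords):
    """Write keywords to an .xmp sidecar file next to the image.

    Follows the usual convention: IMG_0001.CR2 gets IMG_0001.xmp.
    """
    items = "\n".join(f"     <rdf:li>{k}</rdf:li>" for k in keywords)
    sidecar = Path(image_path).with_suffix(".xmp")
    sidecar.write_text(XMP_TEMPLATE.format(items=items))
    return sidecar

# Hypothetical raw file and keywords, written to a temp directory.
img = Path(tempfile.mkdtemp()) / "IMG_0001.CR2"
sidecar = write_sidecar(img, ["animal", "bird", "parrot"])
print(sidecar.name)
```

Any application that understands XMP sidecars can then pick those keywords up when it ingests the folder, which is exactly the "percolate into other software" path described above.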
It also has the benefit that you see the groupings visually, laid out from best match at the top to merely possible at the bottom. Further, you don't have to change modes (e.g. go to a new screen to make corrections). You just select an image, click on it, and you can change the keywords in the left tab or albums in the right.
It strikes me as perfect to run as you first upload/ingest your photos, helping you rank and cull while giving you an incentive to keyword your images better. I already have Photo Mechanic (no AI features as yet), which I love and use (though not so much for keywording). So I'm impressed, but not ready to jump ship for keyword Shangri-La just yet. However, that time is getting much closer.