AI has shaken up the world of photography in 2023, from tricking judges to win photography competitions to the proliferation of deepfakes with real-world ramifications, like the depiction of an explosion – which never happened – outside the Pentagon, which caused a brief dip in the markets.
Deepfakes can be fun (Pope in a puffer jacket, anyone?), but they can also be used to spread misinformation, and as AI image generators become more powerful, it’s becoming increasingly difficult to distinguish real photos we see online from fake ones.
Now, according to a report from Nikkei Asia, major camera brands Sony, Canon and Nikon are looking to solve this problem by creating technology that can verify the authenticity of images in new cameras.
Last year, the Leica M11-P became the first camera with built-in Content Credentials: a digital signature that authenticates the time, date and location an image was taken, as well as who took it and whether any changes were made after capture.
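The idea behind such a signature is simple: the camera hashes the image, bundles that hash with the capture metadata into a manifest, and signs the whole thing, so any later edit to the pixels or the metadata breaks verification. The real Content Credentials scheme (C2PA) uses asymmetric signatures backed by X.509 certificates; the sketch below substitutes a symmetric HMAC purely to stay self-contained, and all names (`CAMERA_KEY`, `sign_capture`, `verify_capture`) are hypothetical, not part of any camera maker's API.

```python
import hashlib
import hmac
import json

# Hypothetical signing key embedded in the camera. Real Content Credentials
# use per-device certificates and asymmetric crypto, not a shared secret.
CAMERA_KEY = b"example-secret-embedded-in-camera"

def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Bundle an image hash with capture metadata and sign the manifest."""
    manifest = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        **metadata,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_capture(image_bytes: bytes, manifest: dict) -> bool:
    """Re-derive the signature; any edit to pixels or metadata breaks it."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    # The image itself must still match the hash recorded at capture time.
    if unsigned.get("image_sha256") != hashlib.sha256(image_bytes).hexdigest():
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(manifest.get("signature", ""), expected)

photo = b"\x89fake-image-bytes"
manifest = sign_capture(photo, {"time": "2024-01-05T10:00:00Z", "device": "example-body"})
print(verify_capture(photo, manifest))              # untouched image verifies
print(verify_capture(photo + b"edited", manifest))  # altered image fails
```

In the real scheme, verification happens against a public certificate chain rather than a shared key, which is what lets a third-party tool like the CAI's verifier check a signature the camera produced.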
However, not many people have an $8,000/£7,000 Leica in their bag, and now Sony, Canon and Nikon are set to introduce their own authentication technology.
We don’t yet know which new cameras will be released with Content Credentials-like technology built-in – although we’ve rounded up the 12 most exciting cameras for 2024, which might give us an idea. At the launch of the Sony A9 III, Sony announced that it would update this camera and two other professional models, the A1 and A7S III, with support for Content Provenance and Authenticity (C2PA), although we don’t know when or how it plans to do so. (C2PA is a cross-industry coalition co-founded by Adobe that introduced Content Credentials and includes companies like Nikon and Getty.)
The Nikkei Asia report says future cameras will provide the information needed to authenticate an image on the Content Authenticity Initiative’s (CAI) free, publicly available verification system.
Analysis: are digital signatures behind closed doors enough?
Increasing the number of cameras with authenticity technology capable of marking images with the official Content Credentials stamp is a big step in the right direction, but is it enough?
Even though three of the biggest camera brands are implementing authenticity/anti-AI features, early indications are that these will primarily be reserved for professional cameras typically in the hands of journalists.
While large media organizations will be able to implement protocols to fact-check and authenticate the origin of images via the Content Credentials Verify tool for enabled cameras like the M11-P, images from the majority of cameras will not be properly verified, including the ubiquitous cameras on smartphones from Apple, Samsung and Google.
The biggest challenge, which these verification measures do not address, is websites and social media platforms that host misinformation, with fake images potentially seen and shared by millions of people.
I’m encouraged that camera brands are poised to introduce digital signatures in future cameras and possibly update some existing cameras with this technology. But it appears it will be some time before these features are rolled out more widely to cameras and phones, including devices used by those looking to spread misinformation with fake images.
It’s also unclear whether bad actors will find ways to circumvent digital signatures, whether for AI-generated images or for real images that have been altered. And what about in-camera multi-image sequences like composites? Hopefully these questions will be answered as camera brands put these authentication measures into practice.
For now, the long fight against deepfakes has only just begun.