The above photo was taken in July 1917 by a teenage girl, Elsie Wright, in the garden of her Cottingley home in Yorkshire. It shows her younger cousin, Frances Griffiths, surrounded by dancing fairies. The fairies were paper cut-outs, secured to the ground with hat pins. And yet the image fooled millions of people including, most famously, Sherlock Holmes creator Arthur Conan Doyle.
As photographers, this story should make us feel simultaneously vindicated and uneasy. Vindicated, because we understand better than most how profoundly a camera can deceive. Uneasy, because the lessons this affair teaches have never actually been learned, and the same failure is playing out right now, at unprecedented scale, with AI-generated deepfakes.
Last week, an episode of BBC Radio 4's The Long View drew the parallel with characteristic sharpness. As it pointed out, the number of deepfakes shared online rose from around half a million in 2023 to eight million in 2025. The Cottingley story, it turns out, is not ancient history. It's more of a manual.
What Kodak missed
Here's the detail that should interest photographers most. When Doyle set out to verify the Cottingley fairy photographs, he did not rely on instinct. He sent the images to Kodak, who confirmed there had been no double exposure. He also commissioned Harold Snelling, an independent expert in photographic manipulation, who reached the same conclusion.
Both experts were looking for darkroom trickery. Neither considered the possibility that the deception had happened entirely in front of the lens: physical cut-outs, carefully positioned, photographed straight. The technique required no manipulation of the negative whatsoever.
It is a forensic lesson that echoes loudly today. When we try to detect AI-generated images, we look for the artefacts we know about: the wrong number of fingers, the inconsistent lighting, the smeared background text, the tell-tale compression signatures of GAN-generated faces. But the deception, as in 1917, tends to stay one step ahead of the verification method.
New tech, old story
What the programme makes compellingly clear is that this is not really a story about photography or AI. It's a story about a very old human habit: we grant uncritical trust to whatever our newest representational technology produces, and we keep doing it until something goes badly wrong.
In 1917, photography was still close enough to its origins to feel miraculous. The cultural idea that the camera could not lie was close to absolute. At the same time, spiritualism was flourishing in the grief-saturated aftermath of the First World War. Conan Doyle had lost his son. He desperately wanted these photographs to be real, even if he didn't consciously realise it.
It gets worse: Conan Doyle never knew that Snelling made adjustments to the photographs (improving exposure, adding detail to the fairies' wings) before they were published. The image he championed in The Strand Magazine in 1920 was not the image the girls had taken. The chain of custody was broken before he ever saw a print.
As BBC disinformation specialist Marianna Spring observes in the programme, the mechanics are similar today. Deepfakes succeed not because they are technically flawless – they're often very much not – but because they confirm what the viewer already wants to believe.
The lesson that never lands
The woman who'd taken the Cottingley fairy photos finally confessed to a journalist in 1983, more than 60 years on. Her explanation for the long silence was not shame but consideration: she and Elsie did not want to humiliate Conan Doyle and his associates while they were still alive. The confession came too late to do much good, though. By then, the story had become myth.
So what can photographers—people who spend their working lives understanding exactly what a camera can and cannot show—usefully take from all this into the deepfake debate? I'd say the lesson is this.
The Cottingley photographs were verified by photographic experts and still fooled the world, because the experts were solving the wrong problem and the audience had already decided what it wanted to see. Technical literacy matters, but it will only get you so far when motivated reasoning is doing the heavy lifting on the other side.
The most important lesson from 1917, then, is also the most uncomfortable one. Before you try to spot whether an image is fake, ask yourself whether you, or others, want it to be real. That question will tell you more than any technical fix or detection tool currently on the market.
Tom May is a freelance writer and editor specializing in art, photography, design and travel. He has been editor of Professional Photography magazine, associate editor at Creative Bloq, and deputy editor at net magazine. He has also worked for a wide range of mainstream titles including The Sun, Radio Times, NME, T3, Heat, Company and Bella.