How AI changed everything for photographers and videographers in 2025

Tilly Norwood was everywhere online in 2025. But she's entirely AI-generated, as are the scenes she appears in (Image credit: Particle6 / Xicoia)

In 2025, AI didn’t creep in politely. It barged straight into studios, edit suites and camera bags, uninvited and fully caffeinated. If you shoot, edit or direct for a living, you didn’t just hear about AI developments this year: you lived them.

This year, what was once an “exciting technology on the horizon” became something far more immediate. It altered briefs, workflows, client expectations and, occasionally, the collective blood pressure of photographers and filmmakers.

Most strikingly, we all started to wonder what this would mean for our careers – especially if you were in one of the photography jobs that AI is coming for.

AI image generators achieved photorealism so convincing that retouched portraits from real shoots sometimes struggled to look more polished. On the flipside, editing suites filled with tools that quietly shaved hours off masking, color correction and rough cuts. Truly, AI giveth and AI taketh away.


ABOVE: Watch the Sora 2 sizzle reel

Then, just as everyone thought they’d caught their breath, OpenAI released Sora 2 in September. Suddenly we had software capable of producing physically coherent, dialog-synchronized video clips that felt like they came straight from a mid-budget Hollywood production.

Filmmakers and editors were getting a taste of what photographers had already experienced. They weren’t staring at theoretical disruption any more; it was right there on their screens, asking to be added to the workflow.

The deeper story

Yet the story of 2025 isn't simply that AI improved. Of course it improved. The deeper story was the confrontation it forced: what counts as real, what counts as ours and what creativity looks like when machines can mimic almost anything.

The launch of Sora 2 captured this tension perfectly. Technically, it was astonishing. Shots that once required entire departments could be summoned with a prompt. Movements obeyed physics, voices matched lips, shadows fell in exactly the right places. But the rollout caused immediate chaos.

The discovery that Sora’s training data included copyrighted material unless creators actively opted out provoked rightful outrage from studios and rights organizations. Directors and editors found themselves wondering if adopting the tool too soon would entangle clients in the next big intellectual-property scandal.

Then came a twist: audiences didn’t wholeheartedly embrace AI-generated video, either.

Sora 2's success in generating convincing AI video came amid a wave of rivals, including Google's Veo, Kling AI and Runway Gen-4 (Image credit: Sora 2)

Those viral AI clips enjoyed their brief week in the sun, only to fade almost as quickly as they arrived. Filmmakers reported that when clients were offered the choice between AI-crafted footage and something shot by a human with a clear creative perspective, they still gravitated to the latter.

If anything, the excitement surrounding Sora 2 highlighted how wide the gap remains between technological capability and cultural readiness.

Industry split

Around midway through the year, it felt to me that the industry was starting to cleave into two distinct lanes.

In one lane sits the work that AI can produce cheaply, quickly and convincingly: product shoots, corporate headshots, stock-style photography, and the sort of fast-turnaround brand content that was once the backbone of many photographers’ calendars.

Much of this work evaporated almost overnight, with serious consequences for individuals' bank accounts and mortgage payments.

But in the opposite lane, something interesting happened. Creative labor that relied on emotional intelligence and spontaneity began to flourish. Brands increasingly valued the unmistakable fingerprints of human intention.

At the same time, the middle ground – that comfortable space of technically competent but stylistically neutral work – shrank dramatically.

Lightroom got a ton of new AI-powered features that are genuinely useful (Image credit: Hillary K Grigonis / Future)

For photographers and filmmakers themselves, then, mixed emotions were very much the order of the day in 2025.

On one hand, AI integrated itself into every tool we used, from Lightroom to Premiere Pro. Autofocus seemed to read your mind. Color grading assistants prepared shockingly decent first passes. Retouching tasks that once drained your will to live were handled in a single brushstroke.

Even within cameras themselves, computational photography increasingly took over, with AI-powered phones like the Samsung Galaxy S24 Ultra and Google Pixel 9 leading the way.

But all this convenience came with an equal amount of unease. Subscription costs rose. Storage costs ballooned. And perhaps most painfully, skills that we'd spent years refining became optional at best, redundant at worst, almost overnight. Beneath it all ran the defining revelation of the year: the perfection paradox.

The perfection paradox

As AI became capable of generating flawless imagery at industrial scale, perfection itself lost its cultural value. Social feeds were filled with technically immaculate visuals, yet the images that gained traction were the ones that looked touched by real human hands.

Consequently, many photographers leaned into film grain, motion blur, quirky colors, accidental flare and even cameras with deliberate limitations.

Similarly, some filmmakers embraced handheld jitter, imperfect light and textures that signaled real-world presence. This wasn't nostalgia for nostalgia's sake; it was a deliberate response to visual saturation. When everything looks pristine, people crave the unpredictable.

The Google Pixel 9's camera is packed with AI software (Image credit: Google)

High-end clients caught on to all this quickly. They didn't want work that competed with AI on technical precision; they wanted work that couldn’t be mistaken for AI at all.

Imperfection – or more accurately, human presence – became a marker of value. The more synthetic the landscape became, the more desirable the unpolished truth felt.

Conclusion

So where does that leave us heading into 2026? On the one hand, it seems undeniable that AI will remain a staple of everyday creative work. It’s too useful for battling the repetitive, fiddly, joyless parts of production and post-production to abandon. But it no longer defines the pinnacle of the craft.

Perfection has stopped being the goal. The creators who thrived in 2025 weren’t the ones who matched AI’s precision; they were the ones who focused on the parts of image-making and filmmaking that still belong wholly to humans: perspective, emotion, connection and the ability to turn a real moment into something that resonates.

In short, this year taught us that AI can imitate aesthetics but not intention. It can replicate style but not meaning. And it can generate spectacle but not feeling.

The tools changed. The economics shifted. But the heart of the craft – the spark that makes a viewer feel something – has remained firmly, defiantly human. In the end, I reckon that's the most important lesson 2025 has to offer.

Tom May

Tom May is a freelance writer and editor specializing in art, photography, design and travel. He has been editor of Professional Photography magazine, associate editor at Creative Bloq, and deputy editor at net magazine. He has also worked for a wide range of mainstream titles including The Sun, Radio Times, NME, T3, Heat, Company and Bella.
