
I've spent the past week watching the internet lose its collective mind over Sora 2, and honestly? I can't blame them. OpenAI's latest AI video generator has landed like a bomb on Hollywood's front porch.
For the uninitiated, Sora 2 is OpenAI's text-to-video model on steroids. The original Sora, unveiled in 2024, was impressive but limited. This version steps things up considerably: it generates synchronized audio, models physics far more convincingly, and produces 10-20 second clips at up to 1080p.
The killer feature? "Cameo" lets you scan your face and voice, then insert yourself into AI-generated scenarios. Think TikTok meets Black Mirror.
What's more, Sora 2 is wrapped in a social media app, currently for iOS only, designed for maximum viral spread. Within hours of launch, videos featuring every copyrighted character imaginable were flooding social feeds. The app hit number one in downloads faster than you can say "intellectual property violation". Which is exactly the problem.
The great IP free-for-all
Above: OpenAI's promo video gives a look at Sora 2's video-generating capabilities
The launch of Sora 2 turned copyright protection into chaos, thanks to an opt-out system where copyright holders had to explicitly tell the company not to use their work. The digital equivalent of a burglar announcing he'd nick everything unless you specifically asked him not to.
Cue endless videos online infringing copyright in the most outlandish ways: from scenes of Pikachu being grilled on a barbecue to SpongeBob SquarePants cooking crystal meth. Talent agencies CAA, WME and UTA all issued furious statements. The Motion Picture Association called it a "serious threat" to performers' likeness rights. And they're right to worry.
After the inevitable backlash, OpenAI now says it's shifting from an opt-out model to a stricter opt-in system. However, there still appears to be no way for rights holders to demand a blanket opt-out. And even if that changes, the web is already full of articles with titles such as 'How to bypass Sora 2 copyright rules'. So let's not kid ourselves about the broader trajectory of all this.
What this means
Ultimately, OpenAI has built a tool that makes copyright infringement trivially easy, wrapped it in addictive social mechanics, and released it to millions before sorting out the legal niceties. In this light, OpenAI CEO Sam Altman's blog post acknowledging "edge cases" feels less like reassurance and more like a shrug in corporate speak.
So now we're heading towards a world where anyone can generate convincing footage of anything, featuring anyone, saying anything. The implications for misinformation and the erosion of trust in media are staggering. More immediately, there's the economic question. If clients can generate "good enough" content with AI at a fraction of the cost, where does that leave photographers and filmmakers?
Hollywood is right to be furious. Consumers should be too. This isn't about fearing new technology. It's about demanding that the people building these tools take responsibility for the legal and ethical chaos they're creating.
Sora 2 is powerful and genuinely useful in certain contexts. It's also a piracy nightmare wrapped in a social app. And we're all going to spend the next few years dealing with the consequences.
Tom May is a freelance writer and editor specializing in art, photography, design and travel. He has been editor of Professional Photography magazine, associate editor at Creative Bloq, and deputy editor at net magazine. He has also worked for a wide range of mainstream titles including The Sun, Radio Times, NME, T3, Heat, Company and Bella.