Sony shows us an AI future, but where does that leave creators?

[Image: robot hands making origami. AI is taking a bigger role in creativity (Image credit: Getty)]

This year, Sony opened up its 50th anniversary celebrations to the press for the first time, and we got a rare glimpse behind the scenes. The leading corporation – known for its Alpha range of cameras, its phones, TVs and more – showed off some of the groundbreaking new technologies that are in varying phases of R&D but will eventually become consumer products.

This look into the future was fascinating, with the new developments opening my eyes to just what could be possible. However, the focus on AI, and how it is going to infiltrate and revolutionize every corner of technology, does make me fear for the traditional role of photographers and videographers in our industry.

The main theme running through every presentation in Sony's showcase was AI (artificial intelligence) and ML (machine learning). These examples ranged from small improvements we are already starting to see in devices, all the way through to significant advances that will completely change the way we use technology.

The tech that most of us will be somewhat familiar with is Sony's Deep Generative Models (DGMs). These are the ways Sony is using AI to improve how we currently capture images and video, such as using deep learning for digital noise reduction in an image, sharpening a blurred photo, or removing background noise from audio recordings.

We have already seen a lot of this in the industry, so this isn't just relevant to Sony products. Apple and Google are working overtime on these features for their latest camera phones, with Google’s blur removal tools on the new Pixel 7 range being particularly mind-blowing. 

Sony is the world's largest producer of image sensors, which are found in almost every mobile device, including all current iPhones (Google uses Samsung sensors, for the record). So these are improvements that we could begin to see on devices very quickly, if manufacturers decide to adopt them.

Whilst DGMs are not going to fundamentally change the need for a photographer, they do significantly lower the skill level required. With AI assistance, almost anyone can frame a shot and let the camera and algorithms do the rest. We are starting to see this technology in cameras, with Sony, Canon, Nikon and Fujifilm all touting their own deep-learning algorithms.

With these algorithms, each company is striving to produce the most neutral photo possible, so that it pleases everybody – but does this mean that, quite soon, all photos will just look the same?

AI and AR – how do they work together?

More concerning for photographers and videographers is the role that AI and AR/VR (augmented and virtual reality) are going to play together.

Mapray is a new technology from Sony that combines terrain data, satellite and aerial photos, sunrise and sunset patterns, live weather broadcasts, and open image libraries to build true-to-life models of real places around the world.

While the demo still looks very digital, with graphics similar to Microsoft Flight Simulator, the potential for this technology is huge. If you can create fully realistic rendered images of locations with completely accurate light and weather, then those images would be indistinguishable from a landscape or cityscape photograph taken in the same place at the same time.

This technology gives anybody at home with a computer the potential to 'live' photograph almost any location in the world without getting out of their chair. And movies and TV shows that would once have been filmed on location with a contingent of crew members can now shoot in green-screen studios with a reduced crew and have a realistic backdrop added later.

AI also has the potential to revolutionize sports photography and broadcasting. Hawk-Eye is a sports tracking technology, and tennis fans will be very familiar with its use for line calls; at the 2022 World Cup, it's the technology behind the controversial VAR system. The system is already capable of tracking player positions down to their smallest skeletal movements, and Sony has shown off its proposed future, with Hawkvision able to use AI and machine learning to create real-time predictions of player movements.

Hawkvision is an associated technology that allows a full 3D model of the pitch and players to be rendered, letting viewers use AR to view virtually any angle around the stadium – even from a player's own perspective.

While this is amazing technology, what does it mean for the photographers and camera operators covering the match? Well, if AI camera tracking can know which player has the ball, which way they are facing, and where they are going to move next, then why is any human input needed at all? And will these renders become so realistic that an AI-generated image is indistinguishable from a real image from a camera?

While all this technology is years away from making an impact on the market, many photographers and videographers will be left concerned for their futures in an ever-changing industry. The onward march of progress cannot be slowed, so it is important to consider the ever-growing role of AI in our livelihoods and how it will affect all of our futures.

We'll be bringing you extended coverage on AI in the coming months – including what it actually is, the best AI generators, and examples of projects we've seen from photographers who are using the tech.

For more on this topic, check out what an AI camera is and how AI is changing photography.

Gareth Bevan
Reviews Editor

Gareth is a photographer based in London who has worked as a freelance photographer and videographer for the past several years, shooting for some household names along the way. With work focusing on fashion, portrait and lifestyle content creation, he has developed a range of skills covering everything from editorial shoots to social media videos. Outside of work, he has a personal passion for travel and nature photography, and a devotion to sustainability and environmental causes.