
What is an AI camera, and how does AI photo-editing work?

AI camera: Photoshop Camera
(Image credit: Adobe)

Artificial intelligence (AI) is everywhere, and if you haven't yet got an AI-powered smartphone, you probably soon will. Even your phone's software uses AI to make decisions on your behalf. Adobe's Photoshop Camera, just launched, uses AI to identify objects and scenes in your pictures and suggest 'lenses' (digital effects) for comic and creative impact.

Is it all just marketing hype, or is AI in a smartphone – and particularly, in its camera – something we should all aspire to have? With the term AI increasingly being used not only in camera phones, but in all kinds of cameras, it pays to know what AI is actually doing for your photos.

AI has blurred the boundaries between image capture, image enhancement and image manipulation. It is used in photo-editing, to meld, enhance and 'augment' reality, to make more intelligent object selections, to match processing parameters to the subject and to help you find images automatically based on what's in your photos rather than on manual keywords and descriptions. It is already looking at what you photograph and making its own decisions about how to handle it.

Welcome to the brave new world of AI cameras.

What is AI?

AI is a branch of computer science that examines whether we can teach a computer to think or, at least, learn. It's generally split into subsets of technology that try to emulate what humans do, such as speech recognition, voice-to-text dictation, image recognition and face scanning, computer vision, and machine learning.

There is a whole cluster of buzzwords around this topic. 'AI', 'deep learning', 'machine learning' and 'neural networks' are all intertwined in this new branch of technology.

What’s it got to do with cameras? Computational photography and time-saving photo editing, that’s what. And voice activation.

Voice-activated cameras

Voice Control has featured on a number of GoPro models, starting as far back as the HERO5 (above) and carrying through to the latest HERO8 Black.

The ability for a computer to understand human speech is a form of AI, and it's been creeping onto cameras for the last few years. 

Smartphones have been offering Google Now and Siri for a few years, while Alexa is entering homes via the Amazon Echo speakers. Action cameras have jumped on that bandwagon in recent years, with GoPros and even dash cams able to respond when you utter simple phrases such as 'start video', 'take photo' and so on.

It all makes sense, especially for action cameras, where hands-free operation makes them much easier to use. But is it really AI? Technically, it is, but until recently voice-activated gadgets were simply referred to as 'smart'. Some now allow you to say quite specific things such as ‘take slow-motion video’ or ‘take low-light photo’, but an AI camera needs to do a little more than that to be worthy of the name.
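For the curious, the basic pattern is easy to sketch in software. The snippet below uses the open-source Python SpeechRecognition library to listen for a command phrase; the phrase list and take_photo() function are hypothetical stand-ins, and a real camera would run a small embedded keyword-spotting model rather than cloud transcription.

```python
# Illustrative voice-command loop using the SpeechRecognition library.
# The phrases and take_photo() are hypothetical stand-ins; real cameras
# run small on-device keyword-spotting models instead.
import speech_recognition as sr

COMMANDS = {"take photo", "start video", "stop video"}

def take_photo():
    print("*click*")  # placeholder for the real capture call

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    audio = recognizer.listen(source)

try:
    phrase = recognizer.recognize_google(audio).lower()
    if phrase == "take photo":
        take_photo()
    elif phrase in COMMANDS:
        print(f"Command recognised: {phrase}")
    else:
        print(f"Heard '{phrase}', which isn't a known command")
except sr.UnknownValueError:
    print("Could not understand the audio")
```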

AI software

AI is about new kinds of software, initially developed to make up for smartphones’ lack of zoom lenses. “Software is becoming more and more important for smartphones because they have a physical lack of optics, so we’ve seen the rise of computational photography that tries to replicate an optical zoom,” says Arun Gill, Senior Market Analyst at Futuresource Consulting. “Top-end smartphones are increasingly featuring dual-lens cameras, but the Google Pixel 3 uses a single camera lens with computational photography to replicate an optical zoom and add various effects.”

Since the Pixel 3, multi-camera arrays and computational imaging have merged to produce a hybrid technology that replicates many of the depth of field and lens effects you get from larger cameras. A camera phone is no longer 'just' a camera. It's a calculating, analysing, 'thinking' device that doesn't just capture the scene as it is, but how it thinks you want it to be, or how it thinks you ought to want it to be... 

AI can be like having a know-all assistant. After a while you might start to wonder who is actually in charge.

The Google Pixel 2 was one of the most capable smartphones around, despite only having a single-lens camera on its rear side

The world is not necessarily ready for the full implications of AI cameras. Google's Clips wearable camera used AI to capture and keep only particularly memorable moments. It used an algorithm that understood the basics of photography so it didn’t waste time processing images that would definitely not make the final cut of a highlights reel. For example, it auto-deleted photographs with a finger in the frame and out-of-focus images, and favoured those that complied with the general rule-of-thirds concept of how to frame a photo.
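Google never published Clips' exact criteria, but the out-of-focus check, at least, has a classic heuristic equivalent: sharp images produce strong edges, so the variance of their Laplacian response is high. Here's a minimal sketch with OpenCV – an illustration of the idea, not Google's algorithm, and the threshold is an arbitrary value you'd need to tune.

```python
import cv2

def is_sharp(image_path, threshold=100.0):
    """Crude focus check via variance of the Laplacian.

    Blurry images have few strong edges, so their Laplacian response has
    low variance. The threshold is illustrative and would need tuning.
    """
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() > threshold

# A Clips-style curator could simply discard frames that fail the test
for frame in ["frame_001.jpg", "frame_002.jpg"]:  # hypothetical files
    print(frame, "keep" if is_sharp(frame) else "discard")
```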

Creepy and controlling? Some thought so. In any event, Google pulled the camera in 2019. The question is not whether AI is powerful enough to do the things we want, but whether we're quite ready yet to hand so much power over to a machine... or to the company that owns and operates the AI algorithms behind it.

The Google Clips camera decided what to photograph without user intervention

What is computational photography?

Computational photography is a digital image processing technique that uses algorithms to replace optical processes, and it seeks to improve image quality by using machine vision to identify the content of an image. 

“It's about taking studio effects that you achieve with Lightroom and Photoshop and making them accessible to people at the click of a button,” says Simon Fitzpatrick, Senior Director, Product Management at FotoNation, which provides much of the computational technology to camera brands. 

“So you're able to smooth the skin and get rid of blemishes, but not just by blurring it – you also get texture.” In the past, the technology behind ‘smooth skin’ and ‘beauty’ modes has essentially been about blurring the image to hide imperfections. “Now it’s about creating looks that are believable, and AI plays a key role in that,” says Fitzpatrick. “For example, we use AI to train algorithms about the features of people's faces.”

The LG V50 boasts AI Composition, AI CAM 2.0, Google Assistant, Google Lens and Super Far Field Voice Recognition.  (Image credit: LG)

As far back as the LG V30S ThinQ phone, LG has been using AI for imaging. The phone lets the user select a professional image in its Graphy app and apply the same white balance, shutter speed, aperture and ISO. LG also introduced Vision AI, an image recognition engine that uses a neural network trained on 100 million images, which recommends how to set the camera. It even detects reflections in the picture, the angle of the shot, and the amount of available light.

Depth sensors and blurry backgrounds

In recent years we've seen multi-lens phone cameras use two or more lenses to produce aesthetically pleasing images with a blurry background around the main subject. People (and, therefore, Instagram) love blurry backgrounds, but instead of relying on dual-lens hardware or picking up a DSLR and manipulating the depth of field manually, you can now let AI do it for you.

Commonly called the 'bokeh' effect (Japanese for blur), machine learning identifies the subject, and blurs the rest of the image. “We can now simulate bokeh using AI-based algorithms that segment people from foreground and background, so that we can create an effect that begins to look very much like a portrait taken in a studio,” says Fitzpatrick. The latest smartphones allow you to do this for photos taken with either the rear or the front (selfie) camera. 
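To make the idea concrete, here's a minimal sketch of the segmentation-plus-blur recipe in Python with OpenCV. It is not FotoNation's pipeline: it assumes you already have a person mask from some segmentation network, which is exactly the part the trained AI provides.

```python
import cv2
import numpy as np

def fake_bokeh(image, person_mask, blur_size=31):
    """Composite a sharp subject over a blurred background.

    image:       HxWx3 uint8 photo
    person_mask: HxW float32 mask, 1.0 on the subject, 0.0 elsewhere
                 (assumed to come from a person-segmentation network)
    """
    background = cv2.GaussianBlur(image, (blur_size, blur_size), 0)
    # Feather the mask edge so the subject/background transition looks natural
    mask = cv2.GaussianBlur(person_mask, (15, 15), 0)[..., np.newaxis]
    out = mask * image.astype(np.float32) + (1 - mask) * background.astype(np.float32)
    return out.astype(np.uint8)
```

Real portrait modes go further, scaling the blur with estimated depth and simulating lens-shaped bokeh highlights, but the mask-then-composite structure is the same.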

Apple's latest iPhone 11 Pro series uses a triple-camera array, with a twin-camera array in the base model (Image credit: Apple)

“People refer to it as bokeh, but you don’t get the true blur you get with a DSLR where you can change the depth; with a phone, you can only blur the background,” says Gill. “But a small and growing number of photographers are really impressed with it and are using an iPhone X for everyday capture, and only when they’re on professional jobs will they get out their DSLR.”

AI cameras can automatically blend HDR images in bright light, switch to a multi-image capture mode in low light and use the magic of computational imaging to create a stepless zoom effect with two or more camera modules.
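The HDR half of that is straightforward to demonstrate: OpenCV ships Mertens exposure fusion, which blends a bracketed burst into one well-exposed frame without needing the camera's response curve. A minimal sketch, assuming three hypothetical bracketed shots on disk:

```python
import cv2

# Three hypothetical bracketed exposures of the same scene
exposures = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens fusion weights each pixel by contrast, saturation and
# well-exposedness, then blends the stack into a single frame
fused = cv2.createMergeMertens().process(exposures)  # float image, roughly 0-1

cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```

Phone cameras do this invisibly on every shutter press, aligning the frames first to cancel handshake.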

What about DSLRs and other 'proper' cameras?

Automatic red-eye removal has been in DSLR cameras for years, as has face detection and, lately, even smile detection, whereby a selfie is automatically taken when the subject cracks a grin. All of that is AI. Will the likes of Nikon and Canon ever adopt more advanced AI for their flagship DSLRs? After all, it took many years for Wi-Fi and Bluetooth to appear on DSLRs.

According to the company behind it, Arsenal became the "most funded camera gadget in Kickstarter history"

While we wait, a Kickstarter-funded ‘smart camera assistant’ accessory called Arsenal wants to fill the gap. “Arsenal is an accessory that allows the wireless control of an interchangeable-lens camera (eg a DSLR) from a mobile device, with machine learning algorithms used to take the perfect shot,” says Gill. “What it’s doing is comparing the current scene with thousands of past images, using image recognition to recognise a specific subject and applying the correct settings, such as a fast shutter speed if it recognises wildlife.”

Canon, meanwhile, has leaned heavily on AI technology for the cutting-edge autofocus system in the EOS-1D X Mark III. Or, to be more precise, 'deep learning'. The system is trained using professional photographs, but that training has an end point: once it ships, it stops learning. Artificial intelligence, strictly speaking, is the ability of a machine to keep learning on its own.

It can be difficult to separate true AI from sophisticated automation, however. For years, compact camera makers have been offering subject-orientated scene modes that the camera can choose automatically. Is that 'intelligence', or simply a slightly more advanced implementation of exposure measurement, subject movement and focus distance? Multi-pattern metering systems typically use a complex measurement of light distribution based on thousands of real-world photos – effectively a 'deep learning' process in use before the term was even invented.

Who is AI photography for?

Everyone. For starters, it’s about democratising photography. “In the past photography was the domain of those with the expertise of using a DSLR to create different types of images, and what AI has started to do is to make the effects and capabilities of more advanced photography available to more people,” says Fitzpatrick.

So does this mean Adobe’s Photoshop and Lightroom will soon be defunct? Absolutely not; AI is a complementary technology, and is already making photo editing much more automated. One of FotoNation’s partners is Athentech, whose ‘Perfectly Clear’ AI-based technology carries out automatic batch corrections that mimic the human eye. A plugin for Lightroom, it’s specifically aimed at reducing how long photographers sit in front of computers manually editing. “Professional photographers make money when they’re out taking photos, not when they’re processing images,” says Fitzpatrick. “AI makes professional-looking creative effects more accessible to smartphone users, and it helps professional photographers maximise their ability to make a living.”

AI might not replace Lightroom and other image-editing programs, but it does stand to change editing for the photographer

AI is quickly becoming an overused term in the world of photography. Right now it largely applies to smartphone cameras, but the powerful algorithms and sheer level of automation the technology enables will soon prove irresistible to most of us. It may not be time to chuck out the DSLR quite yet, but AI seems set to change how we take photos.

Not only that, but it could soon take charge of editing and curating our existing photography libraries too. That process has already started. Lightroom CC uses Adobe's server-based Sensei object recognition system to identify images by subject matter so that you no longer have to spend hours manually adding keywords. AI may be an over-hyped term and often a shorthand for what is nothing more than the latest, greatest advanced software, but it does promise to do something incredible for photographers: it’s going to free up more of your time so you can take more, and better, photographs.
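Sensei itself is a server-side black box, but the underlying idea – run a classifier over each image and store its labels as searchable keywords – can be sketched with any off-the-shelf model. Here's an illustrative version using a pretrained ResNet from torchvision; it is not Adobe's technology, just the same pattern in miniature.

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

# A pretrained ImageNet classifier standing in for a real keywording service
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def auto_keywords(image_path, top_k=3):
    """Return the model's top-k labels as candidate search keywords."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        scores = model(image).softmax(dim=1)[0]
    return [labels[int(i)] for i in scores.topk(top_k).indices]

print(auto_keywords("holiday_snap.jpg"))  # hypothetical file
```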

Read more: Why do some phones have two cameras on one side? Dual-camera designs explained

Luminar 4's AI-driven masking technology goes way beyond the smartest regular selections, identifying object types and areas in a scene, not just tones and colors.

Skylum Software is one of the leaders in AI-powered photo editing software. It has introduced AI Sky Replacement in Luminar 4 to eliminate the laborious masking otherwise needed to swap a sky by hand, AI Augmented Skies to add clouds, planets, lightning and more to your images, AI portrait enhancement tools that can autonomously identify human features, and AI Structure to add definition only to those areas of a picture where it's appropriate.
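Mechanically, the replacement step is just masked compositing; the AI's real contribution is generating the sky mask automatically. A minimal sketch of the compositing stage (the mask is assumed to come from a Luminar-style sky-segmentation model – this is not Skylum's code):

```python
import cv2
import numpy as np

def replace_sky(photo, new_sky, sky_mask):
    """Drop a new sky behind the scene using a segmentation mask.

    photo, new_sky: HxWx3 uint8 images of the same size
    sky_mask:       HxW float32 mask, 1.0 where the model found sky
                    (producing this mask is the part AI automates)
    """
    mask = cv2.GaussianBlur(sky_mask, (21, 21), 0)[..., np.newaxis]  # feathered edge
    out = mask * new_sky.astype(np.float32) + (1 - mask) * photo.astype(np.float32)
    return out.astype(np.uint8)
```

Note that it's the same mask-then-composite structure as the portrait-mode sketch earlier; only the mask target and the fill change.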

The use of augmented reality in photography could yet prove controversial. Ever since the invention of image editors it's been possible to distort, twist and 'invent' reality, but AI promises to make this so easy and so convincing that it requires no particular skill (or conscience) to do.

Read more:

The best camera phones you can buy today
The best photo editing software right now
How to download Photoshop
How to download Lightroom