What is an AI-powered camera? AI cameras explained

Arsenal DSLR assistant

Artificial intelligence (AI) is everywhere, and if you haven't yet got an AI-powered smartphone, you probably soon will. Is it all just marketing hype, or is AI in a smartphone – and particularly in its camera – something we should all aspire to have? With the term AI increasingly being used not only in smartphones but in all kinds of cameras, it pays to know what AI is actually doing for your photos.

What is AI?

AI is a branch of computer science that examines whether we can teach a computer to think or, at least, learn. It's generally split into subsets of technology that try to emulate what humans do, such as speech recognition, voice-to-text dictation, image recognition and face scanning, computer vision, and machine learning. What's it got to do with cameras? Computational photography and time-saving photo editing, that's what. And voice activation.

Voice-activated cameras

Voice Control has featured on a number of GoPro models, such as the HERO5 (above) and more recent HERO6 models

The ability of a computer to understand human speech is a form of AI, and it's been creeping into cameras for the last few years.

Smartphones have been offering Google Now and Siri for a few years, while Alexa is entering homes via Amazon Echo speakers. Action cameras have jumped on that bandwagon in recent years, with GoPro models and even dash cams able to respond when you utter simple phrases such as 'start video', 'take photo' and so on.

It all makes sense, especially for action cameras, where hands-free operation makes them much easier to use – but is it really AI? Technically, it is, but until recently voice-activated gadgets were simply referred to as 'smart'. Some now allow you to say quite specific things such as 'take slow-motion video' or 'take low-light photo', but an AI camera needs to do a little more than that to be worthy of the name.

AI software

AI is about new kinds of software, initially developed to make up for smartphones' lack of zoom lenses. “Software is becoming more and more important for smartphones because they have a physical lack of optics, so we’ve seen the rise of computational photography that tries to replicate an optical zoom,” says Arun Gill, Senior Market Analyst at Futuresource Consulting. “Top-end smartphones are increasingly featuring dual-lens cameras, but the Google Pixel 2 uses a single camera lens with computational photography to replicate an optical zoom and add various effects.”

The Google Pixel 2 is one of the most capable smartphones around, despite only having a single-lens camera on its rear side

Google is also using AI in its new Google Clips wearable camera, which captures and keeps only particularly memorable moments. It uses an algorithm that understands the basics of photography, so it doesn’t waste time processing images that would never make the final cut of a highlights reel. For example, it auto-deletes photographs with a finger in the frame and out-of-focus images, and favours those that comply with the general rule-of-thirds concept of how to frame a photo.

The Google Clips camera decides what to photograph without user intervention
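The keep-or-discard idea described above can be sketched in a few lines of code. This is purely illustrative: the sharpness proxy, the rule-of-thirds bonus, the weights and the threshold are all invented for this example, not Google's actual algorithm, which uses trained neural networks.

```python
# Hypothetical keep/discard filter in the spirit of Google Clips.
# All weights and thresholds are invented for illustration.

def sharpness(pixels):
    """Crude sharpness proxy: mean absolute difference between
    horizontally adjacent pixels (a real camera would use a trained
    model or a Laplacian-variance measure instead)."""
    diffs = [abs(row[i] - row[i + 1])
             for row in pixels for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def rule_of_thirds_bonus(subject_xy, width, height):
    """Reward frames whose subject sits near a one-third grid line."""
    x, y = subject_xy
    dx = min(abs(x - width / 3), abs(x - 2 * width / 3)) / width
    dy = min(abs(y - height / 3), abs(y - 2 * height / 3)) / height
    return 1.0 - (dx + dy)  # closer to a third-line => higher bonus

def keep_frame(pixels, subject_xy, min_score=0.8):
    """Keep a frame only if its combined score clears the threshold."""
    w, h = len(pixels[0]), len(pixels)
    score = 0.01 * sharpness(pixels) + rule_of_thirds_bonus(subject_xy, w, h)
    return score >= min_score
```

A sharp frame with a well-placed subject clears the threshold; a flat, centred one does not – which is the essence of auto-deleting out-of-focus shots and favouring rule-of-thirds framing.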

What is computational photography?

Computational photography is a digital image processing technique that uses algorithms to replace optical processes, and it seeks to improve image quality by using machine vision to identify the content of an image. “It's about taking studio effects that you achieve with Lightroom and Photoshop and making them accessible to people at the click of a button,” says Simon Fitzpatrick, Senior Director, Product Management at FotoNation, which provides much of the computational technology to camera brands. “So you're able to smooth the skin and get rid of blemishes, but not just by blurring it – you also get texture.” In the past, the technology behind ‘smooth skin’ and ‘beauty’ modes has essentially been about blurring the image to hide imperfections. “Now it’s about creating looks that are believable, and AI plays a key role in that,” says Fitzpatrick. “For example, we use AI to train algorithms about the features of people's faces.”
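Fitzpatrick's distinction between plain blurring and smoothing that keeps texture can be illustrated with a toy frequency-separation sketch. Everything here is an invented simplification (a 1D row of pixel values and a made-up `detail_keep` fraction), not FotoNation's actual method:

```python
# Toy illustration of texture-preserving smoothing: separate the signal
# into a blurred low-frequency layer ("skin tone") and a high-frequency
# layer ("texture"), then re-add only a fraction of the texture.
# A plain 'beauty mode' blur would be equivalent to detail_keep=0.

def smooth_keep_texture(row, detail_keep=0.5):
    # 3-tap box blur with clamped edges = low-frequency layer
    low = [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, len(row) - 1)]) / 3
           for i in range(len(row))]
    # blend: blurred base plus a scaled-down copy of the fine detail
    return [l + detail_keep * (p - l) for p, l in zip(row, low)]
```

Running it on an alternating row such as `[100, 120, 100, 120]` reduces the pixel-to-pixel variation without flattening it to a single value – imperfections are softened, but texture survives.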

LG’s V30S ThinQ

LG’s V30S ThinQ phone allows the user to select a professional image in its Graphy app and apply the same white balance, shutter speed, aperture and ISO. LG has also just announced Vision AI, an image recognition engine that uses a neural network trained on 100 million images, which recommends how to set the camera. It even detects reflections in the picture, the angle of the shot, and the amount of available light.

Depth sensors and blurry backgrounds

In recent years we've seen many dual-lens phone cameras produce aesthetically pleasing images with a blurry background around the main subject. People (and, therefore, Instagram) love blurry backgrounds, but instead of using a dual-lens camera, or picking up a DSLR and manually manipulating the depth of field, AI can now do it for you.

Commonly called the 'bokeh' effect (Japanese for blur), machine learning identifies the subject, and blurs the rest of the image. “We can now simulate bokeh using AI-based algorithms that segment people from foreground and background, so that we can create an effect that begins to look very much like a portrait taken in a studio,” says Fitzpatrick. The latest smartphones allow you to do this for photos taken with either the rear or the front (selfie) camera. 
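The segmentation idea Fitzpatrick describes can be boiled down to a minimal sketch: given a mask marking which pixels belong to the subject, keep those sharp and blur the rest. In a real phone the mask comes from a trained segmentation network; here it is supplied by hand, and the 3x3 box blur stands in for a far more sophisticated lens-blur simulation.

```python
# Minimal illustration of simulated bokeh: pixels inside the subject
# mask are kept as-is, everything else gets a simple box blur.

def box_blur_at(img, x, y):
    """Average of the 3x3 neighbourhood around (x, y), clamped at edges."""
    h, w = len(img), len(img[0])
    vals = [img[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]
    return sum(vals) / len(vals)

def fake_bokeh(img, mask):
    """Return a copy of img with pixels outside the mask blurred."""
    return [[img[y][x] if mask[y][x] else box_blur_at(img, x, y)
             for x in range(len(img[0]))]
            for y in range(len(img))]
```

The subject pixel survives untouched while its surroundings are averaged away – the same separation of subject from background that the AI-based algorithms perform at full resolution.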

The iPhone 8 Plus (left) is one of a number of phones to use a dual-camera design, in contrast to the iPhone 8 (right)

“People refer to it as bokeh, but you don’t get the true blur you get with a DSLR where you can change the depth; with a phone, you can only blur the background,” says Gill. “But a small and growing number of photographers are really impressed with it and are using an iPhone X for everyday capture, and only when they’re on professional jobs will they get out their DSLR.”

Read more: Apple iPhone X is DxOMark’s top-performing smartphone for stills

What about DSLRs?

Automatic red-eye removal has been in DSLR cameras for years, as has face detection and, lately, even smile detection, whereby a selfie is automatically taken when the subject cracks a grin. All of that is AI. Will the likes of Nikon and Canon ever adopt more advanced AI for their flagship DSLRs? After all, it took many years for Wi-Fi and Bluetooth to appear on DSLRs.

According to the company behind it, Arsenal became the "most funded camera gadget in Kickstarter history"

While we wait, a Kickstarter-funded ‘smart camera assistant’ accessory called Arsenal wants to fill the gap. “Arsenal is an accessory that allows the wireless control of an interchangeable-lens camera (eg a DSLR) from a mobile device, with machine learning algorithms used to take the perfect shot,” says Gill. “What it’s doing is comparing the current scene with thousands of past images, using image recognition to recognise a specific subject and applying the correct settings, such as a fast shutter speed if it recognises wildlife.”
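Stripped of the machine learning, the last step Gill describes amounts to a lookup: once image recognition has labelled the scene, apply settings known to suit it. The sketch below illustrates only that final step; the scene labels and every settings value are invented for illustration, not Arsenal's actual tables.

```python
# Hedged sketch of scene-based settings recommendation, in the spirit
# of an assistant like Arsenal. Labels and values are invented.

SUGGESTED_SETTINGS = {
    "wildlife":  {"shutter": "1/1000", "aperture": "f/5.6", "iso": 800},
    "landscape": {"shutter": "1/60",   "aperture": "f/11",  "iso": 100},
    "portrait":  {"shutter": "1/250",  "aperture": "f/2.8", "iso": 200},
}

def recommend(scene_label):
    """Return suggested exposure settings for a recognised scene,
    falling back to a neutral default for unknown labels."""
    default = {"shutter": "1/125", "aperture": "f/8", "iso": 400}
    return SUGGESTED_SETTINGS.get(scene_label, default)
```

So a scene recognised as wildlife gets the fast shutter speed Gill mentions, while an unrecognised scene falls back to middle-of-the-road settings. The hard part – the image recognition that produces the label – is where the real machine learning lives.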

Who is AI photography for?

Everyone. For starters, it’s about democratising photography. “In the past photography was the domain of those with the expertise of using a DSLR to create different types of images, and what AI has started to do is to make the effects and capabilities of more advanced photography available to more people,” says Fitzpatrick.

So does this mean Adobe’s Photoshop and Lightroom will soon be defunct? Absolutely not; AI is a complementary technology, and it's already making photo editing much more automated. One of FotoNation’s partners is Athentech, whose ‘Perfectly Clear’ AI-based technology carries out automatic batch corrections that mimic the human eye. Available as a plugin for Lightroom, it’s specifically aimed at reducing how long photographers spend sitting in front of computers manually editing images. “Professional photographers make money when they’re out taking photos, not when they’re processing images,” says Fitzpatrick. “AI makes professional-looking creative effects more accessible to smartphone users, and it helps professional photographers maximise their ability to make a living.”

AI might not replace Lightroom and other image-editing programs, but it does stand to change editing for the photographer

AI is quickly becoming an overused term in the world of photography. Right now it largely applies to smartphone cameras, but the remarkable algorithms and sheer level of automation the technology enables will soon prove irresistible to most of us. It may not be time to chuck out the DSLR quite yet, but AI seems set to change how we take photos. Not only that, but it could soon take charge of editing and curating our existing photo libraries too. It may be over-hyped, and often shorthand for nothing more than the latest advanced software, but AI is going to do something incredible for photographers: it’s going to free up more of your time so you can take more, and better, photographs.

Read more: Why do some phones have two cameras on one side? Dual camera-designs explained