The Google Pixel 3 has arrived and it’s a phone that is here to do one big thing: redefine camera phone photography as we know it.
It's up against some heavyweight competition, though, with the recently announced Huawei Mate 20 Pro using its team-up with Leica to great effect and the iPhone XS adding in-camera editing and processing prowess to its shots.
Those cameras do a lot with their two- and three-lens setups. Google, however, claims to trump all this with a single rear camera: a 12.2MP f/1.8 lens with optical and electronic image stabilization. So how exactly is it doing this?
The company has shared many of its camera secrets in a blog post, explaining just how it created its Super Res Zoom technology (a fancy name for digital zooming) and how it is challenging the notion that digital zoom on a phone is no match for true optical zoom.
The blog post is very detailed, so here are five things we learned:
1. The digital zoom isn’t based on one image but many
“The Super Res Zoom technology in Pixel 3 is different and better than any previous digital zoom technique based on upscaling a crop of a single image, because we merge many frames directly onto a higher resolution picture,” explains Google.
This technology means, according to Google, the zoom is the equivalent in quality to a 2x optical zoom.
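The core idea, as Google describes it, is to merge many slightly shifted low-resolution frames onto one higher-resolution grid. Here's a toy sketch of that principle in Python/NumPy (this is our own illustration, not Google's actual pipeline, and the function name and nearest-cell placement are our simplifications):

```python
import numpy as np

def merge_burst(frames, shifts, scale=2):
    """Toy multi-frame super-resolution: place each low-res frame's
    pixels onto a finer grid according to its (sub-pixel) shift, then
    average overlapping samples. Not Google's real algorithm."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each source pixel to its nearest cell on the fine grid.
        gy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        gx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (gy, gx), frame)
        np.add.at(weight, (gy, gx), 1)
    return acc / np.maximum(weight, 1)
```

With four frames shifted by half a pixel in each direction, every cell of a 2x grid gets filled, which is why a burst can recover detail that a single upscaled crop cannot.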
2. Hand motion is the superzoom’s friend, not enemy
Superzoom technology doesn’t work unless the camera you are using is very still. Well, that used to be the case, but Google reckons it has found a way around this.
“When we capture a burst of photos with a handheld camera or phone, there is always some movement present between the frames,” says Google.
“To take advantage of hand tremor, we first need to align the pictures in a burst together. We choose a single image in the burst as the “base” or reference frame, and align every other frame relative to it. After alignment, the images are combined together roughly as in the diagram shown earlier in this post.”
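The alignment step Google mentions can be sketched with simple phase correlation, which estimates how far a frame has moved relative to the chosen reference. This is our own whole-pixel toy version (Google's aligner works on tiles and at sub-pixel precision):

```python
import numpy as np

def estimate_shift(reference, frame):
    """Estimate the translation of `frame` relative to `reference`
    via phase correlation. A simplified stand-in for burst alignment."""
    f_ref = np.fft.fft2(reference)
    f_img = np.fft.fft2(frame)
    cross = f_img * np.conj(f_ref)
    # Normalizing leaves only the phase, whose inverse FFT peaks
    # at the translation between the two images.
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = reference.shape
    # Wrap offsets past the midpoint to negative shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Once each frame's offset is known, its pixels can be placed back onto the reference grid, which is exactly how hand tremor becomes useful: the tiny random shifts sample the scene at slightly different positions.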
3. Creating a superzoom that works is challenging
When creating a digital superzoom on a mobile phone, there are a lot of challenges: burst images are noisy even in good lighting, and there is motion everywhere in the shots you are taking, which makes alignment impractical and interpolation complex and problematic. But Google thinks it has fixed this.
“These challenges would seem to make real-world super-resolution either infeasible in practice, or at best limited to only static scenes and a camera placed on a tripod,” says Google.
“But with Super Res Zoom on Pixel 3, we’ve developed a stable and accurate burst resolution enhancement method that uses natural hand motion, and is robust and efficient enough to deploy on a mobile phone.”
4. Pixel merging cuts through the noise
To combat noise in an image taken with the zoom, Google has had to do a nifty bit of pixel merging with the blog noting: “We analyze the input frames and adjust how we combine them together, trading off increase in detail and resolution vs. noise suppression and smoothing.
“We accomplish this by merging pixels along the direction of apparent edges, rather than across them. The net effect is that our multi-frame method provides the best practical balance between noise reduction and enhancement of details.”
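"Merging along edges rather than across them" can be illustrated with a tiny directional filter: each pixel is averaged only with its neighbours along the local edge direction (perpendicular to the gradient). This is our own crude approximation of the anisotropic merge the post describes:

```python
import numpy as np

def smooth_along_edges(img):
    """Average each pixel with its two neighbours along the local edge
    direction, never across the edge, so edges stay sharp while flat
    areas get denoised. A toy version of anisotropic merging."""
    gy, gx = np.gradient(img.astype(float))
    # The edge direction is perpendicular to the gradient.
    ey, ex = gx, -gy
    norm = np.hypot(ey, ex) + 1e-9
    # Quantize the edge direction to the nearest neighbour offset;
    # in flat regions it degenerates to (0, 0), leaving pixels unchanged.
    sy = np.rint(ey / norm).astype(int)
    sx = np.rint(ex / norm).astype(int)
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    n1 = img[np.clip(ys + sy, 0, h - 1), np.clip(xs + sx, 0, w - 1)]
    n2 = img[np.clip(ys - sy, 0, h - 1), np.clip(xs - sx, 0, w - 1)]
    return (img + n1 + n2) / 3.0
```

Run on an image containing a hard vertical edge, this averages up and down the edge but never across it, which is the balance between noise reduction and detail Google is aiming for.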
5. Reference images are key to combating movement
To avoid ghosting and motion blur in its zoomed images, Google has one image in mind for its shots.
“To make the algorithm handle scenes with complex local motion (people, cars, water or tree leaves moving) reliably, we developed a robustness model that detects and mitigates alignment errors,” notes the blog. “We select one frame as a reference image, and merge information from other frames into it only if we’re sure that we have found the correct corresponding feature.”
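The merge-only-when-sure logic can be boiled down to a per-pixel decision: accept an aligned frame's contribution where it agrees with the reference, and keep the reference pixel where it doesn't. This sketch is a heavily simplified take on it; Google's actual robustness model also weighs local noise statistics:

```python
import numpy as np

def robust_merge(reference, aligned, threshold=0.1):
    """Merge an aligned frame into the reference only where the two
    agree; where they differ too much (likely motion or misalignment),
    keep the reference pixel to avoid ghosting."""
    diff = np.abs(aligned - reference)
    trust = diff < threshold  # per-pixel merge decision
    return np.where(trust, 0.5 * (reference + aligned), reference)
```

Pixels where a car or a branch moved between frames fail the agreement test and fall back to the reference, which is how the zoomed result avoids ghosting and motion blur.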
All of the above has to be taken with a pinch of salt as it's Google explaining how great the Pixel technology is, but read the full post if you want to get your superzoom geek on and find out more - or watch the video below.
- Looking for the best camera phone? Then let us be your guide