Is a 600MP smartphone sensor as stupid as it sounds?

The Samsung Galaxy S20 Ultra has a 108MP sensor… but why stop there? (Image credit: Basil Kronfli / Digital Camera World)

We've had 150MP medium format cameras, 61MP mirrorless cameras and even a 108MP Xiaomi Mi Note 10 smartphone, but now it seems like all of this megapixel mania is going to be taken to the next level – with a 600MP Samsung smartphone sensor. Did anyone ask for this technology? Not necessarily, but Samsung isn't going to let a little thing like that stop it. 

Samsung is known for pushing smartphone camera technology boundaries, having pioneered the industry's first 64MP smartphone sensor and released a 108MP smartphone in early 2020. Admittedly, these camera phones are very impressive pieces of technology, but is a 600MP sensor really the next step?


Samsung explains that our eyes "are said to match a resolution of around 500 megapixels. Compared to most DSLR cameras today that offer 40MP resolution and flagship smartphones with 12MP, we as an industry still have a long way to go to be able to match human perception capabilities."

However, while a high megapixel count is often held up as the gold standard for image quality, it's not actually that simple. As the Samsung release itself explains: "Simply putting as many pixels as possible together into a sensor might seem like the easy fix, but this would result in a massive image sensor…" 

This means that the only way to increase the number of megapixels within a fixed sensor size is to make the pixels smaller, but this hurts image quality due to the "smaller area that each pixel receives light information from". Samsung's solution to this problem is its 'Nonacell' technology, which combines neighboring pixels to increase the amount of light each effective pixel can absorb.
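
To put rough numbers on the size problem, here's a back-of-the-envelope sketch. The 4:3 aspect ratio and 0.8µm pixel pitch used below are illustrative assumptions, not figures Samsung has confirmed for any 600MP design:

```python
# Rough sensor-size arithmetic: how big a chip is needed for a given pixel count?
# The pixel pitch and aspect ratio below are illustrative assumptions only.
def sensor_dimensions_mm(megapixels, aspect=(4, 3), pixel_pitch_um=0.8):
    """Approximate sensor width and height in mm for a given resolution and pitch."""
    w_ratio, h_ratio = aspect
    total_pixels = megapixels * 1_000_000
    height_px = (total_pixels * h_ratio / w_ratio) ** 0.5
    width_px = height_px * w_ratio / h_ratio
    to_mm = pixel_pitch_um / 1000.0
    return width_px * to_mm, height_px * to_mm

for mp in (12, 108, 600):
    w, h = sensor_dimensions_mm(mp)
    print(f"{mp:>4}MP at 0.8µm pitch ≈ {w:.1f} x {h:.1f} mm")
# 12MP ≈ 3.2 x 2.4 mm, 108MP ≈ 9.6 x 7.2 mm, 600MP ≈ 22.6 x 17.0 mm
```

Even at today's tiny 0.8µm pitch, 600 million pixels would need a chip of roughly 23 x 17mm, around APS-C size and far too large for a phone camera module, which is why the pixels themselves would have to shrink.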

One of the biggest arguments against high megapixel cameras is the unwieldy file sizes they produce. If you actually had 600MP image files, they would be absolutely huge. Not only would processing them require a massive amount of power, but your phone storage would quickly be clogged up by a couple of selfies and a few snaps of your dinner.
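
For a sense of scale, here's some quick arithmetic on uncompressed image sizes. Real JPEG or HEIF files would compress far smaller, so treat these as rough upper bounds:

```python
# Back-of-the-envelope uncompressed image sizes; compressed files would be much smaller.
def uncompressed_mb(megapixels: float, bytes_per_pixel: int = 3) -> float:
    """8-bit RGB is 3 bytes per pixel; 10-14 bit RAW lands in a broadly similar range."""
    return megapixels * 1_000_000 * bytes_per_pixel / (1024 ** 2)

for mp in (12, 108, 600):
    print(f"{mp:>4}MP uncompressed (8-bit RGB): ~{uncompressed_mb(mp):,.0f} MB")
# 12MP ~34 MB, 108MP ~309 MB, 600MP ~1,717 MB
```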

However, while a smartphone might have a high megapixel sensor, that doesn't necessarily mean it produces high megapixel images. The Samsung Galaxy S20 Ultra employs 9-in-1 pixel binning, which means its 108MP sensor actually produces 12MP photos. If Samsung produced a 600MP sensor that employed the same 9-in-1 pixel binning, it would output roughly 67MP photos.
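
As a rough illustration of what 9-in-1 binning does (a toy sketch, not Samsung's actual Nonacell pipeline), every 3x3 block of sensor pixels is combined into a single output pixel, trading resolution for light gathering:

```python
import numpy as np

# Toy 3x3 (9-in-1) pixel binning; illustrative only, not Samsung's Nonacell pipeline.
def bin_pixels(frame: np.ndarray, factor: int = 3) -> np.ndarray:
    """Combine each factor x factor block of pixels into one output pixel."""
    h, w = frame.shape
    assert h % factor == 0 and w % factor == 0, "frame must divide evenly into blocks"
    blocks = frame.reshape(h // factor, factor, w // factor, factor)
    # Summing the block pools the light gathered by nine tiny pixels into one value.
    return blocks.sum(axis=(1, 3))

# A 1/100-scale stand-in for a 108MP (12000 x 9000) frame; binning 3x3 cuts the
# pixel count by 9x, just as 108MP -> 12MP on the real sensor.
frame = np.random.poisson(lam=5, size=(900, 1200)).astype(np.uint16)
print(frame.shape, "->", bin_pixels(frame).shape)   # (900, 1200) -> (300, 400)
```

Because each output pixel effectively sees nine pixels' worth of light, binning is how these tiny-pixel sensors claw back low-light performance.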

This would still be a massive file size for a smartphone, but what if Samsung were able to use something like 36-in-1 pixel binning, which would reduce 600 megapixels to a roughly 17MP image? Not only would this be a perfectly serviceable file size, but the amount of detail on offer would be incredible. Whether or not Samsung would be able to effectively counteract the low-light difficulties this sensor would inevitably encounter is probably the biggest question mark hanging over this concept.
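
The arithmetic for the binning factors discussed above (the 600MP figures are this article's hypotheticals, not announced specifications) works out as follows:

```python
# Output resolution after n-in-1 pixel binning is simply sensor megapixels / n.
# 9-in-1 merges 3x3 blocks; 36-in-1 would merge 6x6 blocks.
for sensor_mp, bin_factor in [(108, 9), (600, 9), (600, 36)]:
    print(f"{sensor_mp}MP with {bin_factor}-in-1 binning -> "
          f"{sensor_mp / bin_factor:.1f}MP output")
# 108MP with 9-in-1 binning -> 12.0MP output
# 600MP with 9-in-1 binning -> 66.7MP output
# 600MP with 36-in-1 binning -> 16.7MP output
```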

While a 600MP smartphone sensor might initially sound a bit ridiculous, a combination of more advanced versions of Nonacell technology and pixel binning might actually produce some genuinely fascinating imaging technology. However, we'll likely have to wait a while before we see the fruits of Samsung's labor. 


  • DaveHaynie
    No, our eyes don't have anywhere near 500-600 megapixels. In fact, we have about 120 million rod cells and about 6 million cone cells per eye. The rod cells are not color sensitive and, what's more, they fully saturate and shut down in normally bright light. So you have about 6 million "picture elements" per eye.

    But that's not the whole story. You also have those 6 million pixels connected to a really powerful deep learning supercomputer. Your eyes experience constant microtremors, shifting just a bit as you view a thing, and your brain integrates information from multiple positions to boost the optical resolution by at least 4x. So you're really seeing in about 24 megapixels per eye -- though once again, cone cells are picture elements, but not exactly pixels.

    The rod cells are active in low light, and yes, that's 120 million per eye and 480 million or so with image processing. But that's in very low, photon starved light. The reason the rods are so many and so small is that they're very sensitive to photons. But you'll never have enough photons while they're active for all that many rods to be firing all at once. Even though what you see in low light is the product of multiple rods firing over time, you still see with grain. Just as your camera does when it's photon-starved in low light.

    Samsung will have the same problem. Right now, they're using 0.8µm (800nm) pixels in those 108 megapixel chips in the Xiaomi and the S20 Ultra, and in the 64 megapixel and 48 megapixel chips. Samsung's got a few 0.7µm chips as well, mostly used for Tetracell selfie cameras. But they're probably not going any smaller. For one, the signal-to-noise ratio falls as a pixel's ability to capture a large number of photons diminishes, which it always will as the size shrinks.
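
    To put rough numbers on that shot-noise point, here's a deliberately crude sketch that ignores read noise, quantum efficiency and everything else, with an arbitrary light level:

    ```python
    import math

    # Toy shot-noise-only model: captured photons scale with pixel area,
    # and photon shot noise is sqrt(N), so SNR ≈ sqrt(photons). Flux is arbitrary.
    def shot_noise_snr(pixel_pitch_um: float, photons_per_um2: float = 100.0) -> float:
        photons = photons_per_um2 * pixel_pitch_um ** 2
        return math.sqrt(photons)

    for pitch in (0.7, 0.8, 1.4, 2.0):  # 2.0µm is roughly the rod-cell size mentioned below
        print(f"{pitch}µm pixel: SNR ≈ {shot_noise_snr(pitch):.1f}")
    # 0.7µm ≈ 7.0, 0.8µm ≈ 8.0, 1.4µm ≈ 14.0, 2.0µm ≈ 20.0
    ```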

    Rod cells in the eyes are much larger, about 2µm in diameter, and cones larger still. The eye's "sensor area" is actually quite a bit larger than that of a full frame camera, though maybe not so obviously, because of course it's curved rather than flat. In fairness to Samsung, today's silicon photodiodes have a quantum efficiency as high as 95%, while the eye's photoreceptors convert far fewer of the photons that reach them.

    Secondly, the wavelength of far red light is around 700nm. Correct color capture could be an issue going any smaller.