Photography
How many pixels is enough? It depends . . .
How many megapixels would it take to match the human eye? It depends. To fill the entire 180 degree field of view would take about 576 mp. However, the eye only sees fine detail in a tiny central area called the fovea, and the fovea can only 'read' about 7 mp. All the area around it needs only another 1 mp or so, just enough data to tell you where to move your gaze to focus the fovea on a new area of interest. It's explained a lot better than I can here:
So, in imaging terms, it seems that any picture you can hold in your hands and take in with the fovea -- say a 4x6 snapshot -- needs only 7 mp at most. Larger prints like an 8x10 viewed at arm's length are only seen in 7 mp segments, which for me is about 1/4 of the image at a time. The eye shifts around, and the brain stitches the segments together in memory as if they were one image.
My takeaway is that there is a limit to the resolution the eye can resolve, and that limit drops as viewing distance increases, so pixel count matters less than the edge definition of picture elements, i.e. sharpness.
Anyway, I found the video informative and provocative. Seems that it really is the picture, not the pixels . . .
CaliforniaPeggy
(149,611 posts)
I so agree.
AndyS
(14,559 posts)
It validates 'viewing distance'. The fovea is about as large as both thumbs held up at arm's length. At that distance that's only about 4 square inches into which 7 mp must be packed for maximum resolution. As we move away from a larger image, those two thumbs cover a much larger area; at 8 ft it's a bit more than 9x12, into which the eye can still only resolve 7 mp, so an 18x24 (roughly four 9x12s) viewed at 8' requires 28 mp at most.
So, as I stand 8' from this 13x19 B&W picture of a day lily, my fovea will cover half of it in one 7 mp 'snapshot' and the other half in another, so the image only needs 14 mp max. It's printed from a 20 mp crop sensor. Technically overkill. Plus it's lovely.
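The viewing-distance arithmetic above can be sketched in a few lines of Python. The constants (a ~2x2 inch foveal patch at a 28 inch arm's length, 7 mp per foveal 'snapshot') are the thread's ballpark figures, not vision science, and the function names are mine:

```python
import math

# Rough sketch of the viewing-distance argument. Constants are the
# thread's ballpark estimates, not vision-science values.

def foveal_patch_side(distance_in, patch_at_arm=2.0, arm_length=28.0):
    """Side, in inches, of the square patch the fovea covers at a
    given viewing distance; the patch scales linearly with distance."""
    return patch_at_arm * distance_in / arm_length

def snapshots_needed(print_w, print_h, distance_in, mp_per_snapshot=7):
    """How many foveal 'snapshots' tile a print at a distance,
    and the implied maximum useful megapixels."""
    side = foveal_patch_side(distance_in)
    tiles = math.ceil(print_w / side) * math.ceil(print_h / side)
    return tiles, tiles * mp_per_snapshot

# A 13x19 print viewed from 8 feet (96 inches):
tiles, max_mp = snapshots_needed(13, 19, 96)
print(tiles, max_mp)
```

Note that with these particular constants the tiling comes out finer (more snapshots) than the two-snapshot estimate above; the result is dominated by whatever patch size you assume, which is really the whole point about viewing distance.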
hunter
(38,311 posts)
A 640x480 picture might be blown up to 7 megapixels, but then it would be impossible to know how well that photo represented reality at the time it was taken. A fuzzy face in the crowd, resynthesized and sharpened up in software, might look nothing like the person in the original scene.
Software used for upscaling video has already had problems with turning everyone white because the AI systems were trained on images where a disproportionate number of people are white.
https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias
The human mind does things like that as well. Our personal experiences and training heavily influence what we see.
A few years ago I was experimenting with GIMP and Inkscape to make multi-layer woodcuts, which is as close as I get to being an artist like my wife or my dad, who can actually draw and paint. The resulting vector graphic images can look almost like a photograph, yet there are no "pixels" in them.
Imagine a camera that turned everything into fractals and vector graphics, no bitmaps, no pixels in the saved files.
Such a camera would be useless for scientific or medical work, of course, since much of the information in the photo would be artifice, but every image would be sharp.
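The "no pixels in the saved file" idea is easy to see with a scrap of SVG, which stores geometry rather than a bitmap and so rasterizes sharp at any size. A minimal sketch (the helper name is mine):

```python
# A vector image stores shapes, not a pixel grid, so it can be
# rendered razor-sharp at any output size. Minimal example: a
# standalone SVG document containing a single filled circle.

def svg_circle(size=100, radius=40):
    """Return a tiny standalone SVG string: one circle, no bitmap data."""
    c = size // 2  # center the circle in the viewport
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{size}" height="{size}">'
        f'<circle cx="{c}" cy="{c}" r="{radius}" fill="orange"/></svg>'
    )

print(svg_circle())
```

Scale `size` up a thousandfold and the file barely grows; a bitmap of the same circle would balloon with the pixel count.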
AndyS
(14,559 posts)
Cameras are really small computers, and they're bumping up against the limits of what can fit in that small form factor. They already suffer overheating and battery-life problems just processing compressed files.
That said, technology marches on . . .