Digital Noise: Is This an Actual Problem?

My old friend Rick Sammon would tell a story about his Dad whenever the subject of noise came up. His Dad said that if the first thing you saw when you looked at an image was the noise, it was a lousy picture.

Nothing has changed since then, other than the cash suck that noise reduction and AI have foisted upon those who believe in such nonsense.

Consider Grain

I think that most of us understand that noise and grain are not the same thing at all, even though they are often referred to interchangeably.

Grain is the light-sensitive silver halide crystal structure in a film's emulsion that defines part of an image. In that regard it is not much different from what a pixel does in digital. To get finer grain in our images, we used film stock with finer crystals in its emulsion. When exposed properly it looked great, but if underexposed we started to see the speckling effect that we called grain. Higher ASA and ISO ratings on film, which let us create images in lower light levels, required that the crystals themselves be larger. This resulted in a general perception that higher ISO films were "grainier", had less fidelity and were often perceived as less sharp.

Compare two black and white images of the same subject in the same light, one made on Kodak Panatomic-X at ISO 32 and the other made on Kodak Tri-X at ISO 400, and the difference is immediately evident.

When we cropped in on a negative or positive in the chemical darkroom, we used less of its area to make the print, so each grain element appeared larger because we were effectively enlarging only a part of the image.

Digital

In digital, there is no such thing as grain occurring naturally because there is no chemical substrate. Instead we make an image from the data recorded by millions of independent light-sensitive elements, called pixels, on an electronic sensor. Building sensors from millions of pixels (which is where the term megapixel originates) lets us record a very high resolution image, meaning lots of points of information. The megapixel race scam was only ever about cramming more and smaller pixels into the same physical area.

Did more pixels ever make a better image? Possibly, but not necessarily. The larger the pixel, the more light it gathers, so its signal sits well above the sensor's electronic noise floor. That means an excellent signal-to-noise ratio, and you always want as much signal with as little electrical noise as possible. By adding more pixels, sensor makers made each smaller pixel record less of the overall image individually, so in principle a 50,000,000 pixel sensor is of higher resolution than a 20,000,000 pixel sensor. All very true. What the megapixel scam leaves out is that each of those smaller pixels gathers less light, so its weaker signal must be amplified more, and the signal-to-noise ratio of each smaller pixel is not as good as on sensors with larger pixels.
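The arithmetic behind this can be sketched in a few lines, assuming an idealized 36 x 24 mm full-frame sensor with square pixels and no gaps between them (real sensors differ in microlenses, fill factor and read noise):

```python
# Rough comparison of per-pixel light gathering on a full-frame
# (36 x 24 mm) sensor at two pixel counts. Illustrative only.

SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0
AREA_MM2 = SENSOR_W_MM * SENSOR_H_MM  # 864 mm^2

def pixel_pitch_um(megapixels: float) -> float:
    """Approximate pixel pitch in microns for a given pixel count."""
    pixels = megapixels * 1_000_000
    area_per_pixel_mm2 = AREA_MM2 / pixels
    return (area_per_pixel_mm2 ** 0.5) * 1000  # mm -> um

pitch_20 = pixel_pitch_um(20)   # ~6.6 um
pitch_50 = pixel_pitch_um(50)   # ~4.2 um

# Light gathered per pixel scales with its area:
area_ratio = (pitch_20 / pitch_50) ** 2

print(f"20 MP pitch: {pitch_20:.1f} um, 50 MP pitch: {pitch_50:.1f} um")
print(f"Each 20 MP pixel gathers about {area_ratio:.1f}x the light")
```

Under these simplified assumptions, each pixel on the 20 MP sensor collects about 2.5x the light of its 50 MP counterpart, which is exactly the per-pixel signal advantage described above.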

Sony understood this and built cameras in their A7 family to deliver these outcomes. The R series had high pixel counts but showed more noise than the non-R A7s. Of course science and facts rarely get any space in marketing, and only the folks who understood sensors got this.

There is a benefit to a higher pixel count when we start cropping. When we crop a digital image, just like a film image, we use less of the total data and accept a decrease in resolution. That is the price we pay to crop. Using a lens with a narrower angle of view reduces the amount of cropping required, alleviates some of the problem, and is a big step toward getting things right in camera. The reality remains that a 50MP sensor at any given ISO is going to be noisier at the pixel level than a 20MP sensor.
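The cost of cropping is easy to quantify; the 2x crop below is a hypothetical example value, not a recommendation:

```python
# How cropping reduces usable resolution: keeping a centered fraction
# of the frame keeps only that fraction of the pixels.

def cropped_megapixels(megapixels: float, crop_factor: float) -> float:
    """Remaining megapixels after cropping to 1/crop_factor of each dimension."""
    return megapixels / (crop_factor ** 2)

# Cropping a 20 MP frame to half the width and height (a 2x "zoom")
# leaves only a quarter of the data:
print(cropped_megapixels(20, 2))   # 5.0 MP
# A 50 MP sensor survives the same crop with more resolution intact:
print(cropped_megapixels(50, 2))   # 12.5 MP
```

This is the trade-off in a nutshell: the high-pixel-count sensor tolerates cropping better, while paying the per-pixel noise penalty described above.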

Enter Computational Photography

People like the convenience of smaller cameras, and in the current day, that is the smartphone. We think that smartphones make excellent images. They do, but only after a significant amount of computation prior to creating the stored file. They have to. The sensors are tiny, the pixel counts are limited by sensor area, power demand and signal to noise. When we see an image from a smartphone it has already undergone massive amounts of computer processing before we get to it. If the picture is satisfactory to the user, then that is fine. Just don’t crop it.

Now in our newest mirrorless cameras, we are also seeing computational photography processes taken from smartphones applied to images recorded with larger sensors. In addition to a perceived quality improvement, this enables things like in-camera panoramas, focus stacking and such, because there is now a lot of computer processing occurring between the capture of the sensor data and the creation of the RAW file. In reality, RAW files have already been edited, modified and cooked. They are a finished image, with the only difference being that one can still do pixel-level editing on them.

What Has This to Do with Noise?

Consider that the resolution of an image from a 20MP full frame sensor is 5477 pixels on the long side. The long side is 36mm or 1.417 inches, which means a native sensor resolution of about 3865 pixels per inch. That is an amazing number. Now consider your best possible 4K display and understand that on its best day it can deliver approximately 163 pixels per inch.

Do you see what this means? Your best display can only show 1/23rd the resolution of your sensor. The sensor has 23x the resolution of a 4K display.
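These numbers can be reproduced with a few lines of arithmetic. The 27-inch display size is an assumption on my part, since that is roughly the panel size at which a 4K screen yields about 163 ppi:

```python
import math

# Native "pixels per inch" of a 20 MP full-frame sensor versus a
# 4K UHD display, assumed here to be a 27-inch 16:9 panel.

MEGAPIXELS = 20_000_000
ASPECT = 3 / 2                                   # full-frame aspect ratio
long_side_px = math.sqrt(MEGAPIXELS * ASPECT)    # ~5477 pixels
sensor_long_in = 36 / 25.4                       # 36 mm -> ~1.417 in
sensor_ppi = long_side_px / sensor_long_in       # ~3865 ppi

display_long_px = 3840                                   # 4K UHD width
display_long_in = 27 * math.cos(math.atan(9 / 16))       # width of a 27" 16:9 panel
display_ppi = display_long_px / display_long_in          # ~163 ppi

print(f"Sensor: {sensor_ppi:.0f} ppi, display: {display_ppi:.0f} ppi")
print(f"Ratio: {sensor_ppi / display_ppi:.1f}x")         # ~23.7x
```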

The human eye is a pretty amazing thing, and displays, while very nice, are nowhere near its capability. So when we edit, we get as large a display as possible, but the pixel count does not change, so the pixel spacing grows as the screen gets larger. That means more space between the pixels that actually show data. We could remedy this by not sitting with our noses pressed to a 30-inch diagonal display, but that is not what humans do, and so we perceive that the image is "noisy".

Except that it isn't. The image is fine; it is the display, our eye-to-display distance and the quality of the pixels in the display that create the noise we see.

Ignorance is Revenue

A degraded signal-to-noise ratio in an image comes from underexposure. In a decent image, there will always be some areas with less exposure than others. We call them shadows and blacks, and they contain little data. Our standard editing habit is to increase the exposure in those areas, which amplifies the weak signal and the noise it already contains by the same amount, making the noise plainly visible.
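A toy illustration of the shadow-lift effect; the signal and noise levels here are made-up example values, not measurements from any real sensor:

```python
import numpy as np

# Why lifting shadows reveals noise: the boost multiplies the weak
# signal and the noise it already contains by the same factor, so
# noise that was hidden near black becomes plainly visible.

rng = np.random.default_rng(1)

# A deep shadow: weak signal with sensor-style noise (std ~2).
shadow = np.full(1000, 4.0) + rng.normal(0, 2.0, 1000)

# A +3 stop shadow lift multiplies everything by 8.
boosted = shadow * 8

print(f"shadow noise std: {shadow.std():.1f}")    # small, hidden near black
print(f"boosted noise std: {boosted.std():.1f}")  # ~8x larger, now visible
```

Note that the ratio of signal to noise never improves; the lift simply drags both up into the visible range together.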

Then we choose to apply noise reduction, which, regardless of the label stuck on it, is an adjacent-pixel contrast reduction. If it is called AI, it just has a larger, likely stolen, sample in the data set by which to determine the amount of contrast reduction. And it looks like the noise is reduced. However, with the reduction in noise also comes a decrease in sharpness, because what we as humans perceive as sharpness is actually adjacent-pixel contrast, where a higher contrast difference appears sharper.
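A minimal sketch of the principle, using a simple 3-tap moving average as a stand-in for the far more selective filters real products use:

```python
import numpy as np

# Noise reduction at its core: average each pixel with its neighbours,
# which lowers adjacent-pixel contrast and with it the visible noise.

rng = np.random.default_rng(0)

def box_blur_1d(row: np.ndarray) -> np.ndarray:
    """3-tap moving average over a row of pixel values."""
    padded = np.pad(row, 1, mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3

flat_gray = np.full(100, 128.0)               # a flat mid-gray patch
noisy = flat_gray + rng.normal(0, 10, 100)    # add sensor-style noise

denoised = box_blur_1d(noisy)

# The noise (std deviation) drops, but so would any real fine detail:
print(f"noise before: {noisy.std():.1f}, after: {denoised.std():.1f}")
```

The same averaging that flattens the noise also flattens any genuine single-pixel detail, which is exactly the sharpness loss described above.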

So of course we then add sharpening, which reverses the effect of the noise reduction. If this is beginning to sound like a P.T. Barnum con job, thanks for catching up.
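The reversal can be sketched with a basic unsharp mask, one common sharpening technique (not necessarily what any particular product does):

```python
import numpy as np

# Unsharp-mask sharpening: add back the difference between the image
# and a blurred copy, which increases adjacent-pixel contrast, the
# opposite of what the noise reduction step just did.

def box_blur_1d(row: np.ndarray) -> np.ndarray:
    """3-tap moving average over a row of pixel values."""
    padded = np.pad(row, 1, mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3

def unsharp_mask(row: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Boost local contrast by re-adding the high-frequency detail."""
    return row + amount * (row - box_blur_1d(row))

edge = np.array([50.0, 50.0, 50.0, 200.0, 200.0, 200.0])  # a soft edge
sharpened = unsharp_mask(edge)

# The step across the edge is exaggerated (overshoot and undershoot):
print(edge)
print(sharpened)
```

The dark side of the edge is pushed darker and the bright side brighter, so the adjacent-pixel contrast, and with it the perceived sharpness, goes up again.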

The buyer has spent money on software to reduce noise that was never inherent in the properly exposed image, and then spent money on software to increase sharpness in an image that was already sharper than the screen could display, presuming of course proper focus and proper exposure for the primary subject.

Wrapping Up

Noise reduction software does not remove digital noise; it blurs it out. Sharpening software does not fix an out-of-focus or unsharp image; it increases contrast. Back in the early days of digital, serious photographers figured this out, but that knowledge is buried and hidden lest it cause the uninformed or as yet incompetent user to not spend money on tools that they never needed in the first place.

We have the most awesome capabilities in our modern cameras, with the highest resolution and best signal-to-noise ratios ever, and we couple that with potentially superb lenses. Instead of wasting time and money on computational voodoo, why not invest the time, the effort and the skill development to get things right in camera?

Please become a member on Patreon to help support this channel. A big thanks to all the existing Patreon members! Send in comments or questions, I read and respond to all. If you shop with B&H Photo Video, please use the link on the main page as it pays me a small commission and does not cost you anything to do so. Thanks again and we will see each other again soon.

NO AI CRAPOLA WAS USED IN THE PRODUCTION OF THIS ARTICLE. THE IMAGE IS LICENSED FROM A HUMAN PHOTOGRAPHER