To appreciate how differently a sensor responds to light compared with film, we need to go behind the scenes of the processing software.
The camera sensor reacts to light falling on it in a very simple way, known as a ‘linear’ response.
When you take your photograph, the exposure begins as soon as you press the shutter button. The more light that falls onto the sensor, the stronger the response, and this relationship holds at exactly the same rate from dark to very bright. The camera uses millions of tiny light cavities, or “photosites”, to record an image. Photosites collect and store photons. Once your exposure has finished, the camera closes the photosites and assesses how many photons have fallen onto each one. These photon counts are then converted into discrete intensity levels, the number of which is determined by the bit depth (0 – 255 for an 8-bit image).
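The linear mapping from photon count to digital level can be sketched in a few lines of Python. The full-well capacity of 10,000 photons here is purely illustrative, not a figure from any particular sensor:

```python
# Sketch: a linear sensor response. The photon count maps proportionally
# to a digital level; an 8-bit image quantises the range into 0-255.
def quantise_linear(photon_count, full_well=10000, bit_depth=8):
    """Map a photosite's photon count to a digital level, linearly."""
    max_level = 2 ** bit_depth - 1                  # 255 for 8-bit
    fraction = min(photon_count / full_well, 1.0)   # clip at saturation
    return round(fraction * max_level)

print(quantise_linear(5000))    # half the light gives half the levels: 128
print(quantise_linear(10000))   # a full photosite gives the maximum: 255
```

Note the key property of linear capture: half the light always yields half the digital value, with no compression of bright tones.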
However, the process above will only create a greyscale image, because the cavities cannot distinguish how much of each colour they have collected. To capture colour images, each cavity has a filter placed over the top, which allows only certain colours to pass through.
Most current digital cameras can only capture one of the three primary colours in each cavity, so roughly two-thirds of the incoming light is discarded. The camera must then approximate the other two primary colours in order to have full colour at every pixel. The colour filter array used is called the ‘Bayer array’.
Bayer Array (Colour Filter)
A Bayer array consists of alternating rows of red-green and green-blue filters, so there are twice as many green sensors as red or blue ones. The primary colours do not receive equal fractions of the total area because the human eye is more sensitive to green light than to either red or blue. Having more green pixels produces an image which appears less noisy and holds finer detail than could be achieved if each colour were treated equally.
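The layout described above can be made concrete with a short sketch that prints which primary each photosite records, assuming the common RGGB variant of the pattern:

```python
# Sketch: the RGGB Bayer pattern. Each photosite records one primary;
# green appears twice in every 2x2 block, matching the eye's sensitivity.
def bayer_colour(row, col):
    """Return the primary recorded at (row, col) in an RGGB layout."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"   # even rows: red-green
    return "G" if col % 2 == 0 else "B"       # odd rows: green-blue

# Print a 4x4 patch of the mosaic: rows alternate R-G and G-B.
for r in range(4):
    print(" ".join(bayer_colour(r, c) for c in range(4)))
```

In any 4×4 patch, eight of the sixteen photosites are green and only four each are red and blue, which is the 2:1:1 ratio the text describes. Demosaicing then interpolates the two missing primaries at every pixel from its neighbours.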
The Human Eye, and Film
Film mimics the human eye when it comes to responding to light. Both compress the light they receive, so that ‘twice as bright’, for example, seems less than it really is. This is valuable, as it means our eyesight can cope easily with a wide range of brightness without driving our sensory system into overload. Film, to a lesser extent, does the same. Not so a camera sensor. This may seem strange, because when you take a photograph the results appear exactly as you would expect them to. However, that is because the camera performs some strong processing before you get your first glimpse of the final result. If the in-camera processing were turned off, you would see that the image you first captured is in fact surprisingly dark.
The term ‘gamma’ is often used in computing to describe digital images. When applied to monitor screens, it is a measure of the relationship between voltage input and brightness output, and because of the way a computer display works, a raw, uncorrected digital image would look darker and more contrasty than our eyes would find normal.
To compensate for this, Gamma correction is applied inside the camera after capture.
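The brightening effect of this correction can be sketched as a simple power law. Real cameras apply more elaborate tone curves, so the plain gamma-2.2 encode below is an illustrative assumption rather than any camera's actual pipeline:

```python
# Sketch: gamma encoding. A hedged example assuming a simple power-law
# curve with gamma = 2.2, close to what displays expect.
def gamma_encode(linear, gamma=2.2):
    """Brighten a linear intensity (0.0-1.0) for display."""
    return linear ** (1 / gamma)

# A mid-grey scene reflects about 18% of the light; linearly that value
# looks dark, but gamma encoding lifts it to roughly 0.46, near mid-tone.
print(round(gamma_encode(0.18), 2))
print(gamma_encode(1.0))   # pure white is unchanged
```

This is why the raw, linear capture looks so dark: the shadows and mid-tones occupy only a small slice of the linear range until the gamma curve redistributes them.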