An Introduction To How
The Digital Sensor In Our Camera
Produces The Colour Images
That It Does
My main advantage in compiling this document is that I have a thorough understanding of how a colour image is formed when using film. I was trained in the film era and witnessed first hand the transition from film photography to the digital capture that is now commonplace. Having said that, I took it upon myself to research and study just how the sensor in our digital cameras is able to produce a full-colour image. I fondly recall many of my colleagues stating that they would be very reluctant to embrace digital capture, as digital could never be as good as the images they were producing with their favourite film. Fujifilm’s Velvia, a 50 ISO transparency emulsion, was one of the most popular emulsions in use at the time. I need say no more, as we all know that high-end digital RAW files now far exceed what used to be produced using film.
Included in this edition is an article that covers the invention and some of the workings of the CCD and CMOS sensors, so I need not repeat that here.
What we need to understand is that the light we see with the human eye is made up of a spectrum measured in extremely small units, nanometers: one billionth of a meter in length (0.000 000 001 m). The light that we observe arrives as photons. Our eyes interact with these photons, and when a photon strikes the eye it is turned into electrical energy that is then transmitted to the brain to form an image. What we are able to see with the naked eye is termed the visible spectrum, and this extends from approximately 400 nanometers, the blue side of the spectrum, to approximately 700 nanometers, the red side. Radiation outside this range, ultraviolet and infrared, is beyond the sensitivity of the human eye. In colour film photography and colour printmaking there are two principles at work. The first is the additive formation of colour, where the colours of the visible spectrum can be created by mixing various proportions of primary coloured light. If the three primary colours that make up white light were separately projected onto a screen so that they overlap one another, red at approximately 640 nm, green at 550 nm and blue at 450 nm, we would see not only the three colour primaries but also the complementary or secondary colours for red, green and blue, these being cyan, magenta and yellow respectively, and where all three primaries overlap one another they appear as white (white light).
Additive Colour Process
The three colour primaries, Red, Green and Blue
and the corresponding complementary colours, Cyan, Yellow and Magenta
all merge with one another, white light is produced.
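The additive mixing described above can be sketched in a few lines of code. This is a minimal illustration, not any camera's actual processing: coloured light is modelled as an RGB triple, and overlapping beams simply sum channel by channel (clipped at 255).

```python
# Minimal sketch of additive colour mixing: overlapping beams of light
# add together, so each channel is the clipped sum of the inputs.

def add_light(*colours):
    """Mix coloured light additively: channels sum and clip at 255."""
    return tuple(min(255, sum(c[i] for c in colours)) for i in range(3))

RED   = (255, 0, 0)    # ~640 nm
GREEN = (0, 255, 0)    # ~550 nm
BLUE  = (0, 0, 255)    # ~450 nm

print(add_light(RED, GREEN))        # (255, 255, 0)   -> yellow
print(add_light(GREEN, BLUE))       # (0, 255, 255)   -> cyan
print(add_light(RED, BLUE))         # (255, 0, 255)   -> magenta
print(add_light(RED, GREEN, BLUE))  # (255, 255, 255) -> white
```

Each pair of primaries yields one of the secondaries, and all three together yield white, exactly as on the projection screen.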
The second principle is termed the subtractive formation of colour. If we were to take the three complementary colours as coloured gels or filters corresponding to the wavelengths mentioned previously and place them on a lightbox so that they overlap, each colour would subtract quantities of a primary colour. Where two filters overlap, only one primary colour remains visible to the observer, and where all three overlap, the centre appears black. These three complementary or secondary colours for red, green and blue are cyan, magenta and yellow, and they are often referred to as the subtractive primary colours. When these colours are combined in subtractive colour mixing they produce black (neutral density).
The Subtractive Colour Process
Each colour subtracts quantities of a primary colour. What we are left with:
blocked light (greys to black)
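The subtractive case can be sketched the same way. Here each filter is modelled as a per-channel transmission (1.0 passes, 0.0 blocks), and stacking filters multiplies the transmissions; the specific values are illustrative, not measured gel data.

```python
# Sketch of subtractive colour mixing: each filter blocks (subtracts)
# one primary; stacked filters multiply their per-channel transmissions.

def filter_light(light, *filters):
    """Pass light through stacked filters, each a (t_r, t_g, t_b)
    transmission triple in the range 0.0 to 1.0."""
    out = list(light)
    for f in filters:
        out = [round(o * t) for o, t in zip(out, f)]
    return tuple(out)

WHITE   = (255, 255, 255)
CYAN    = (0.0, 1.0, 1.0)  # blocks red
MAGENTA = (1.0, 0.0, 1.0)  # blocks green
YELLOW  = (1.0, 1.0, 0.0)  # blocks blue

print(filter_light(WHITE, CYAN, YELLOW))           # (0, 255, 0) -> green
print(filter_light(WHITE, CYAN, MAGENTA, YELLOW))  # (0, 0, 0)   -> black
```

Two overlapping filters leave exactly one primary, and all three together block everything, the black centre seen on the lightbox.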
Different strengths or values of neutral density filter are obtained by cutting the intensity of light equally at all wavelengths; an ND filter has equal opacity to all colours of the spectrum. Neutral density filters are manufactured in different strengths, commonly known as the filter factor, and measured accordingly in exposure value (f-stops).
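The relationship between filter factor and f-stops is a simple power of two: a factor of 2 halves the light (1 stop), a factor of 8 passes one eighth (3 stops), and so on. A quick sketch:

```python
import math

# Relation between an ND filter factor and its strength in stops:
# transmission = 1 / factor, and stops = log2(factor).
for factor in (2, 4, 8, 64, 1000):
    stops = math.log2(factor)
    print(f"ND{factor}: passes 1/{factor} of the light = {stops:.1f} stops")
```

So an ND8 costs exactly 3 stops of exposure, and the popular ND1000 works out to just under 10 stops.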
I digress, but it is an interesting digression nonetheless. Long as this introduction was, you can now see that there is a direct link between the formation of colour in both worlds, and we owe a debt of gratitude to the inventor of the Bayer sensor filter, or array.
In 1974, while working for Kodak, Bayer completed his design for the Bayer filter, for which he filed a patent application in 1975 and in 1976 received U.S. patent number 3,971,065. The filter employs what is called the “Bayer Pattern,” a checkerboard-like arrangement of red, green, and blue pixels on a square grid of photosensors that allows digital cameras to capture vivid colour images. Half of the pixels collect green light, and the others are evenly divided between red and blue light. The patent application described the filter as “a sensing array for colour imaging” that “includes individual luminance- and chrominance-sensitive elements that are so intermixed that each type of element…occurs in a repeated pattern with luminance elements dominating the array.”
The checkerboard filter is now common in more than 90% of all digital imaging devices on the market today. As I have just noted, but it is worth repeating, what Bryce Bayer did was arrange a repeating grid of four individual light-sensitive elements on a light-sensitive chip: two diagonally placed green elements, one red element and one blue. As light passed through these filtered elements it was recorded as an array of colour values. If you examine the Bayer pattern you will notice that there are twice as many green elements as either of the other two colours. It was found necessary to include more information from the green pixels in order to create an image that the eye will perceive as true colour.
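The repeating two-by-two tile can be generated from nothing more than the row and column parity of a photosite. The RGGB ordering below is one common arrangement, assumed here for illustration; manufacturers also use the other rotations of the same tile.

```python
# Sketch of the repeating Bayer mosaic: half the photosites are green,
# a quarter red and a quarter blue, determined by row/column parity.
# RGGB ordering is assumed here; other rotations of the tile exist.

def bayer_colour(row, col):
    """Return the filter colour over the photosite at (row, col)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

for r in range(4):
    print(" ".join(bayer_colour(r, c) for c in range(4)))
# R G R G
# G B G B
# R G R G
# G B G B
```

Counting the colours in any even-sided patch confirms the two-to-one ratio: a 4×4 block holds eight green sites, four red and four blue.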
This was done to mimic the way the human eye produces the sharpest overall colour image, as Kodak themselves also noted and discovered. My mind boggles when I think that Bryce Bayer’s invention should have put Kodak at the very forefront of digital camera manufacture. In hindsight, Kodak did develop and produce one of the first digital cameras, but manufacture was halted so as not to interfere with Kodak’s core film business at that time. I owned and used an early Kodak DCS 620 camera that incorporated a Nikon body and lens. At a later stage I will produce an article on this camera, as I suspect it will be of interest to quite a few readers, given my first-hand experience of owning one for a period of time.
The Bayer sensor array may still be the most commonly used, but dozens of alternative patterns have since been developed and are also in use.
Fujifilm X-Trans Sensor
Fujifilm X-Trans sensors have a more randomised pattern of RGB photosites than conventional Bayer array sensors, reducing the likelihood of moiré interference and removing the need for a resolution-lowering optical low-pass filter. X-Trans sensors also offer improved colour reproduction, because every horizontal and vertical line contains at least one red, one green and one blue pixel. While physically smaller, X-Trans sensors are said to have a greater perceived resolution than their pixel count suggests, on a par with some full-frame sensors.
Foveon X-3 Sensor
The Foveon X3 sensor was designed by Foveon Inc., now part of the Sigma Corporation. It uses an array of photosites, each consisting of three vertically stacked photodiodes organised in a two-dimensional grid. Each of the three stacked photodiodes responds to different wavelengths of light. The signals from the three photodiodes are then processed, resulting in data that provides the amounts of the three additive primary colours: red, green and blue.
Panasonic-Smart FSI Pixel Structure
To start with, we need to understand that the light we observe arrives as photons. Our eyes interact with these photons: when a photon strikes the eye it is turned into electrical energy that is then transmitted to the brain to form an image. When the energy from these photons is absorbed by matter, the matter can emit electrons, a phenomenon known as the photoelectric effect.
We need to comprehend that photons are massless and carry no electromagnetic charge, whereas electrons have mass and charge, so they are fundamentally different. When an electron drops from a higher to a lower energy state it can emit a photon that carries the energy difference between those states, so it could be said that an electron can create a photon. In the photoelectric effect, one absorbed photon liberates one photoelectron, and the photon is the smallest particle of light.
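The energy carried by a single photon follows directly from its wavelength via Planck's relation, E = hc/λ, so the blue end of the visible spectrum carries noticeably more energy per photon than the red end:

```python
# Energy of one photon: E = h * c / wavelength.
h = 6.626e-34  # Planck's constant, joule-seconds
c = 2.998e8    # speed of light, metres per second

for name, nm in (("blue", 450), ("green", 550), ("red", 640)):
    wavelength = nm * 1e-9            # nanometers to metres
    energy_j = h * c / wavelength     # joules per photon
    energy_ev = energy_j / 1.602e-19  # convert joules to electronvolts
    print(f"{name} ({nm} nm): {energy_ev:.2f} eV per photon")
```

A 450 nm blue photon carries roughly 2.8 eV, against about 1.9 eV for a 640 nm red one, which is why sensors (and eyes) must respond across a range of photon energies.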
An understanding that the digital sensor…
“The camera’s digital sensor is not capable of producing a colour image, but when it is exposed to coloured light it will deliver a set of colour values to represent that light; the sensor can only measure colour at each individual pixel. Highly specialised demosaicing algorithms are able to convert this mosaic into an equally sized mosaic of true colours. Each pixel can be used more than once, and the true colour of a single pixel is determined by averaging the values from the closest surrounding pixels, with the result that a superlative colour image is produced.”
An understanding that the digital sensor in our cameras is only able to detect and see a greyscale image. Four monochrome pixels are required to measure one colour pixel. Each individual pixel is covered by one of the primary colour filters: either red, green or blue. The pixels take in light photons, and these photons are converted into electrons. This collection of electrons is then converted into a voltage, and the voltage is converted to a digital number. The sensor then uses that digital number to build up the image: from a charge, to a voltage, to a digital number.
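That charge-to-voltage-to-number chain can be sketched for a single photosite. Every constant below (quantum efficiency, full-well capacity, conversion gain, bit depth) is an illustrative assumption, not any particular sensor's figures:

```python
# Sketch of one photosite's readout chain: photons -> electrons ->
# voltage -> digital number. All constants here are assumed values
# for illustration, not any specific sensor's specifications.

QUANTUM_EFFICIENCY = 0.5   # fraction of photons converted to electrons
FULL_WELL = 30000          # electrons the photosite can hold (assumed)
VOLTS_PER_ELECTRON = 5e-6  # conversion gain (assumed)
BIT_DEPTH = 12             # ADC resolution in bits

def photosite_readout(photons):
    """Convert a photon count into the digital number the camera records."""
    electrons = min(int(photons * QUANTUM_EFFICIENCY), FULL_WELL)
    voltage = electrons * VOLTS_PER_ELECTRON
    max_voltage = FULL_WELL * VOLTS_PER_ELECTRON
    return round(voltage / max_voltage * (2**BIT_DEPTH - 1))

print(photosite_readout(0))       # 0: no light, no signal
print(photosite_readout(30000))   # roughly mid-scale
print(photosite_readout(100000))  # 4095: the well is full (clipped highlight)
```

Once the well is full, extra photons change nothing, which is precisely why over-exposed highlights clip to pure white.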
In a camera the image is projected through the lens onto the sensor; the voltages are converted to a sequence of numbers that is sent to the processor, which then uses that sequence to build up the image. There is enough information in the general vicinity of each photosite to make very accurate guesses about the true colour at that location.
This process of looking at the neighbouring sensors is called interpolation, and digital cameras use specialised demosaicing algorithms to convert this mosaic into an equally sized mosaic of true colours. The key is that each coloured pixel can be used more than once: the true colour of a single pixel can be determined by averaging the values from the closest surrounding pixels – interpolation.
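The averaging idea can be shown on a tiny patch of raw values. This is only the plain bilinear notion of demosaicing, the simplest possible sketch; the algorithms in real cameras are considerably more sophisticated, and the sample values are invented for illustration.

```python
# Sketch of demosaicing by interpolation: estimate a missing colour at a
# photosite by averaging the adjacent photosites that did measure it.

def average_neighbours(raw, pattern, row, col, colour):
    """Average the same-colour photosites adjacent to (row, col)."""
    vals = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            r, c = row + dr, col + dc
            if (dr or dc) and 0 <= r < len(raw) and 0 <= c < len(raw[0]) \
                    and pattern[r][c] == colour:
                vals.append(raw[r][c])
    return sum(vals) / len(vals)

# A 3x3 patch of raw sensor values under an RGGB-style Bayer layout:
pattern = [["R", "G", "R"],
           ["G", "B", "G"],
           ["R", "G", "R"]]
raw = [[120, 200, 124],
       [196,  80, 204],
       [118, 198, 122]]

# The centre photosite measured only blue (80); its missing red and
# green values come from the surrounding photosites:
red   = average_neighbours(raw, pattern, 1, 1, "R")  # corners
green = average_neighbours(raw, pattern, 1, 1, "G")  # edges
print((red, green, raw[1][1]))  # (121.0, 199.5, 80)
```

The centre pixel thus ends up with a full RGB triple even though it physically measured a single channel, which is exactly the trick the whole Bayer system relies on.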
The digital sensor in our cameras is only able
to detect and see a greyscale image!
I would like to thank Diana Akhmetianova for the use of her photograph (Unsplash) as part of the above illustration.
To conclude this segment I have included a short video entitled How a Pixel Gets Its Colour. It will aid our understanding and let you appreciate the people involved, and the science behind what we now take for granted when we switch on our digital cameras. A better understanding will, hopefully, give us a better appreciation…
A chromaticity graph of the kind used to determine the gamut (colour space) used in the production of a colour image, formed by the wavelengths of the visible spectrum and the colours perceived in human vision. Keep in mind that this is not the colour space covered by the sensor of a digital camera. Colour gamut is generally not referred to when dealing with digital sensors, as it is very complex to build a colour gamut that encompasses the full range of a digital camera sensor’s capabilities; there are just too many variables. Instead we speak of spectral sensitivity, as sensors are ever changing, particularly with the development of sensors in the latest cameras.
Included below is a link that will take you to a study, and a better understanding, of the spectral sensitivity of a few digital camera sensors.