Let’s take a moment to go back in time. Does anyone out there remember film? Film is (well, was) a semi-translucent strip of light-sensitive plastic or gelatin coated with a photoreactive emulsion of silver halide crystals. A photographer would load film into his or her camera, and then open a shutter. Light would travel from the sun or a bulb, bounce off the subject of the image, transit the lens, and become focused on the film. There, an instant chemical reaction would take place in which minuscule grains of silver halide would become variably translucent when exposed to varying amounts of light. The film “speed,” or sensitivity, was determined by the size of the silver halide crystals, with faster films having larger grains. After exposure, the photographer would take the film out of the camera in a darkroom and run it through a series of chemical baths to develop it and freeze the chemical reaction. Then, more light would be shone through the developed film and an enlarger onto a piece of photosensitive paper to create a print, which then had to itself be developed in more chemicals. Talk about labor intensive!
In the post-film era, we have digital cameras, where the film has been replaced by a sensor device. There are two kinds of digital imaging sensors – the CCD and the CMOS. CCD stands for “charge-coupled device” and CMOS stands for “complementary metal-oxide semiconductor.” Both types of sensors are two-dimensional arrays of photosensitive cells called pixel sensors or “sensels.” In a digital camera, as in a film camera, the sensor lies behind a lens that focuses light onto it so it can measure and record the luminosity and create and digitally store a record of the image being photographed.
The CCD was the first type of digital camera sensor, brought to us in 1969 from AT&T’s Bell Labs in Murray Hill, New Jersey, by Willard Boyle and George Smith. In a CCD, there is a flat layer of capacitors that accumulate an electric charge proportional to the intensity of the light reaching that region of the sensor (more photons = more charge generated and recorded). The charges are passed down the line across the sensor from one row of micro-capacitors to the next, and when they reach the last row in the array, the signals are transferred to an electrical amplifier that converts them into a voltage. This sequence of voltages is digitized and stored in memory on a memory card or on magnetic tape. This method of image capture is power-intensive, meaning that CCDs ate batteries quickly and could not process images very fast. This is bad if you want to shoot a ten-part burst sequence of the aforementioned double backflip.
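The row-by-row “bucket brigade” described above can be sketched in a few lines of Python. This is just a toy model – the 3×3 charge values and the amplifier gain are invented purely for illustration – but it shows the key structural point: every charge packet must march down to one shared output amplifier.

```python
# Toy model of a CCD's "bucket brigade" readout: charge shifts row by row
# toward a single shared output amplifier. The 3x3 charge values and the
# gain are invented purely for illustration.
charges = [
    [10, 50, 30],
    [80,  5, 60],
    [20, 90, 40],
]

def amplify(charge, gain=0.5):
    """The one shared output amplifier: accumulated charge -> voltage."""
    return charge * gain

voltages = []
while charges:
    bottom_row = charges.pop()  # the last row reaches the readout register
    voltages.append([amplify(c) for c in bottom_row])
    # every remaining row has now shifted down one step; repeat until empty

print(voltages)  # the image, serialized one row of voltages at a time
```

Because everything funnels through that single amplifier, readout is inherently serial – which is part of why CCDs were slow and hungry for power.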
CMOS sensors were the next generation developed after CCDs, and they addressed both of those problems. The use of CMOS technology in cameras is called the “active pixel sensor,” or APS. These sensors were first explored at around the same time as CCDs, but the technology didn’t mature until the late 1980s and early 1990s, when a scientist at the Jet Propulsion Laboratory in Southern California by the name of Eric Fossum invented the predecessor to all of the CMOS imaging sensors found in our digital SLR cameras today. Today he holds over 120 patents, teaches at Dartmouth College, and also sponsors fire-fighting robot competitions.
CMOS sensors collect and respond to light in much the same way as CCD sensors, except for one major difference. In a CMOS chip, each photosite (sensor pixel or “sensel”) has its own amplifier, instead of the amplifier being located at the end of the array. This enables CMOS sensors both to use less electricity (longer battery life) and to process images faster. So when Abro shoots you throwing a corked double rodeo 9 off the diving board on Granite with his Nikon D3s, he can grab 11 frames of you flailing in just one second.
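The architectural contrast can be sketched the same way. In this toy Python model (again with invented charge values and gain), conversion happens at each pixel, so any row can be addressed and read directly – no bucket brigade required.

```python
# Sketch of the CMOS / active-pixel difference: every sensel has its own
# amplifier, so rows are independently addressable and converted in place.
# Charge values and gain are invented for illustration.
charges = [
    [10, 50, 30],
    [80,  5, 60],
    [20, 90, 40],
]

def read_row(row, gain=0.5):
    # Conversion happens at each pixel; in real hardware, whole columns
    # are read out largely in parallel rather than marched to one amplifier.
    return [c * gain for c in row]

voltages = [read_row(row) for row in charges]
print(voltages)
```

That per-pixel amplification is what buys the speed for an 11-frames-per-second burst.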
Despite all of their fancy technology, professional-grade commercially available digital cameras are still extremely limited in their ability to capture a wide dynamic range. They pale in comparison to the best camera of them all – the human eye. What exactly is dynamic range?
In terms of light and vision, dynamic range refers to the difference in luminosity between the darkest object resolvable by an imaging device and the brightest. Humans can perceive stars in a dark sky as easily as we can see objects in a desert scene on a bright, cloudless day. That range spans a factor of about 1,000,000,000 in luminosity between the night stars and the bright day – that’s 1 to one billion. The very best CMOS sensors in high-end imaging devices such as optical deep-space telescopes can capture a dynamic range of 11,000 to 1. That’s about one hundred thousand times less range than our eyes. Even my expensive Canon 5D Mark II camera fails at recording high-dynamic-range scenes that I could see just fine by eye, such as that shown at the beginning of this post. Techniques exist to overcome this limitation, and I can go into those in more detail in a future post if anyone is interested.
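Photographers usually talk about dynamic range in “stops” – each stop is a doubling of light. A quick back-of-the-envelope Python sketch, using the rough ratios quoted above, puts the gap in those terms:

```python
import math

# Dynamic range in photographic "stops" (doublings of light).
# Ratios are the rough figures quoted above.
eye_ratio = 1_000_000_000   # human eye: about 1:1,000,000,000
sensor_ratio = 11_000       # top-end scientific CMOS: about 11,000:1

eye_stops = math.log2(eye_ratio)        # about 29.9 stops
sensor_stops = math.log2(sensor_ratio)  # about 13.4 stops

print(f"Eye:    {eye_stops:.1f} stops")
print(f"Sensor: {sensor_stops:.1f} stops")
print(f"Gap:    {eye_ratio / sensor_ratio:,.0f}x more range for the eye")
```

Roughly 30 stops for the eye versus about 13 for the best sensors – a ratio of around 90,000, which is where the “one hundred thousand times” figure comes from.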
At the end of last August, Japanese engineers at Canon created the world’s largest CMOS sensor – a whopping 120 megapixels packed into 29.2 by 20 mm. Check this baby out next to the 35mm full-frame sensor from their flagship 5D Mark II camera (shown above):
The incredible size of this sensor means that it is capable of creating usable images in environments with as little as one one-hundredth of the light needed by the 5D Mark II or its equivalent. Even so, this mega-sensor doesn’t come close to seeing what our eyes can. If you don’t believe me, go out into the desert or Desolation on a very dark night with no moon, and wait an hour or so until your eyes completely adjust to the darkness. Chances are pretty good that you would be able to see enough just by starlight to find your way around if you absolutely had to.
So what’s going on inside a GoPro as compared to its big-daddy cousin, the 5D Mark II? The GoPro also has a CMOS sensor, but it is a 1/2.5″-format chip with a sensel size of 2.3 micrometers. The 5DII has a 35mm full-frame sensor, with sensels of 6.4 micrometers. The GoPro’s sensor is small, and so are its sensels compared to the 5DII’s, but it costs 1/100th as much. Each has its use, and the GoPro is capable of creating some incredible imagery. Check out the dreaminess of slow motion from a GoPro shooting at 60 fps, when the footage is run through an interpolation algorithm (Twixtor) to emulate the smooth slow-mo of the Red camera:
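The sensel comparison above is easy to put in numbers: light-gathering area scales with the square of the sensel pitch, so a quick Python sketch using the two pitches quoted shows how much more light each 5DII photosite collects.

```python
# Light-gathering area scales with the square of the sensel pitch.
# Pitches (in micrometers) are the figures quoted above.
gopro_pitch = 2.3
five_d_ii_pitch = 6.4

area_ratio = (five_d_ii_pitch / gopro_pitch) ** 2
print(f"Each 5D Mark II sensel covers ~{area_ratio:.1f}x the area "
      f"of a GoPro sensel")
```

That works out to roughly 7.7 times the area per sensel – a big part of why the larger camera handles low light and dynamic range so much better.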
What does the future hold for digital imaging? Hopefully, we will be offered cameras whose CMOS sensors have larger sensels rather than simply more of them. The megapixel war is widely regarded as a marketing game, as more megapixels does not equal better or even higher-resolution photographs. Bigger sensels gather more light, which would greatly expand dynamic range and bring digital imaging closer to what we experience with our own eyes and brains. Along with advances in microprocessor data processing, transfer, and capture rates, we can expect cheaper cameras that shoot high-resolution, high-frame-rate video as well as still images. It’s an exciting time to be alive and interested in digital imaging.