The software and hardware (mechanical and electrical) technologies of the filmmaking industry have gone through successive waves of engineering development since film was introduced in the 19th century. The result is a very wide array of compatible and incompatible tech, which adds notable complexity to rigging and operating camera equipment.
I believe that understanding the principles on which a camera runs gives its operator a strong basis for fluid creative performance on set. And since choosing your future professional camera is no piece of cake – budgeting aside – understanding these parameters may also guide you towards the leanest approach to picking what’s right for you in the long run.
In this post, we’re going to take a look at camera sensors: how they are designed and their different attributes.
This diagram illustrates the chain of the hierarchical functions of a camera:
What’s an image sensor and how does it work?
It’s a device that captures light and converts it into an electric signal (photoelectric transduction), which is then used to constitute an image.
In most cameras, light is captured and encoded into a system called RGB (short for Red, Green and Blue). R, G and B are each treated as independent variables (channels).
But why RGB?
Red, Green and Blue are three additive primary colours which, when blended together in various proportions, can produce a vast array of unique colours, known as the RGB colour space.
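To make the additive-mixing idea concrete, here’s a minimal sketch in plain Python (the values are hypothetical 8-bit intensities, purely for illustration):

```python
# Each colour is an (R, G, B) triplet of 8-bit intensities (0-255).
red   = (255, 0, 0)
green = (0, 255, 0)
blue  = (0, 0, 255)

def add_light(*colours):
    """Additive mixing: sum each channel, clipping at the 8-bit maximum."""
    return tuple(min(sum(c[i] for c in colours), 255) for i in range(3))

print(add_light(red, green))        # (255, 255, 0) -> yellow
print(add_light(red, blue))         # (255, 0, 255) -> magenta
print(add_light(red, green, blue))  # (255, 255, 255) -> white
```

Varying the proportions of each channel (rather than just mixing them at full intensity) is what produces the millions of in-between shades.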
A sensor is made of tiny photosites (roughly 1µm to 8µm across), which may also be called ‘pixels’ or ‘sensels’. Here’s what a sensor (CMOS in this example) looks like in theory:
(Source: Image Sensor Technologies – Presentation by Chris Soltesz, available on SlidePlayer)
There are two main types of manufactured sensors:
- CCD – Charge-Coupled Device
This type of sensor was born in 1970 after 10 years of research by Willard Boyle and George E. Smith (Bell Labs). Professional colour cameras using this technology were introduced around 1984–1986. Nowadays, three-CCD video cameras are still widely used, but mainly in the broadcast TV industry (e.g. sports events, studio shows) and in other niche applications (e.g. CCTV security).
A Charge-Coupled Device is made of two regions:
- Primary: a light-sensitive layer (silicon), made of photosites.
- Secondary: transmission layer, made of a shift register (thin slice of semiconductor).
In simple terms, this device captures light information (in the primary region), then translates and stores it as analogue information (sampled in the form of electric charge “buckets” in the secondary region), which in turn is shifted towards a single output where the information is converted from a charge into a voltage (through a charge amplifier).
Let’s take a quick look at an interline transfer CCD1 (image 1, below): each cell of the sensor receives a light intensity, which is converted into a proportional electric charge collected by capacitors2 present in the “vertical shift registers”3. These initial electric charges then sequentially converge4 towards a single point at the output of the device.
Image 1: Interline Transfer CCD
(Source: Arindam CCTV Access Control)
Image 2: Charge packets collection and transfer (vertical shift register)
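The bucket-brigade transfer described above can be sketched in a few lines of plain Python (a toy model, not real sensor code): every clock cycle, each charge packet moves one cell towards the output, and exactly one packet gets converted per cycle.

```python
def read_out_ccd(register):
    """Toy vertical shift register: each clock cycle, every charge packet
    shifts one cell towards the output, and the packet at the output is
    converted from charge to voltage (here, simply appended to a list)."""
    register = list(register)        # charge packets, output end first
    output = []
    while register:
        output.append(register.pop(0))   # packet at the output cell is read
        # the remaining packets have now shifted one cell closer to the output
    return output

charges = [12, 5, 30, 7]  # hypothetical charge units from four photosites
print(read_out_ccd(charges))  # [12, 5, 30, 7] -- one packet per clock cycle
```

The key point the model shows is the *sequential* nature of the readout: the single output has to handle every packet, one after another, which is exactly the bottleneck CMOS designs avoid.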
There are two main categories of use for this technology:
- Single-CCD: consumer-level quality. A Bayer filter is applied on top of the sensor to separate the RGB colours. The image is then reconstructed through algorithms during processing of the video signal.
- Three-CCD (three-chip camera): broadcast quality. It uses a dichroic filter5 (as per the image below) to split the incoming image into three images, each composed of the Red, Green or Blue intensities, which are then projected onto their respective sensors.
Philips type dichroic prism
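To contrast the two approaches: instead of a prism splitting light onto three chips, a single-chip camera records only one colour per photosite through the Bayer mosaic, and software later estimates the two missing channels from neighbouring photosites. Here’s a minimal sketch of which colour each photosite sees, assuming the common RGGB layout (illustrative only):

```python
def bayer_channel(row, col):
    """Return which colour an RGGB Bayer mosaic records at a given photosite."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# Top-left 4x4 corner of the mosaic:
for r in range(4):
    print(' '.join(bayer_channel(r, c) for c in range(4)))
# R G R G
# G B G B
# R G R G
# G B G B
```

Note that green appears twice per 2×2 block – a common design choice, since our eyes are most sensitive to green.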
- CMOS – Complementary Metal-Oxide-Semiconductor
Its main difference from the CCD chip is that, instead of the charge-to-voltage conversion happening at a single output of the sensor, it occurs directly at the output of each photosensitive cell, where a transistor handles conversion and amplification of the signal.
This process has a few notable advantages:
- Each converter on a CMOS chip can perform at a considerably slower pace6 than the single output of a CCD chip, meaning the CMOS chip doesn’t require as much energy (2 to 3 times less) to operate.
- Read/processing speed is one of its main strengths: it has allowed engineers to design larger sensors (the larger the sensor, the more information to process) as well as to increase the shooting frame rate (what we call high-speed cameras, or slow motion).
- CMOS chips are cheaper to produce, since they rely on conventional semiconductor manufacturing and are made at a much larger scale. CCD chips, by contrast, require specialised manufacturing and are therefore pricier to produce.
But it also has its disadvantages:
- This technology introduces an artefact called rolling shutter: because the rows of the sensor are read out one after another rather than all at once, fast-moving subjects can appear skewed or wobbly in the frame.
- Given that there are as many converters/amplifiers as there are photosites on a CMOS chip, these components generate a low level of fixed-pattern noise.
- The electronics (converters/amplifiers) present at each pixel take up some of the chip’s surface (as per the image below), reducing the area of a given pixel exposed to light7. In other words, this makes the CMOS sensor less sensitive to light, which makes it harder to shoot in low-light environments. However, technological advances have allowed finer microelectronics to be built, improving sensors’ sensitivity to light.
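The rolling-shutter skew can be illustrated with a toy model in Python (not real camera code): rows are exposed one time-step after another, so a vertical bar moving horizontally lands at a different column on each row, turning a vertical edge into a diagonal one.

```python
def rolling_shutter_frame(rows, cols, bar_speed):
    """Each row is read one time-step after the previous one; a vertical
    bar moving `bar_speed` columns per step lands further right on later rows."""
    frame = []
    for row in range(rows):
        bar_col = (row * bar_speed) % cols  # bar position when this row is read
        frame.append(''.join('#' if c == bar_col else '.' for c in range(cols)))
    return frame

for line in rolling_shutter_frame(5, 8, 1):
    print(line)
# The bar, vertical in reality, is rendered as a diagonal:
# #.......
# .#......
# ..#.....
# ...#....
# ....#...
```

A global shutter (as on a CCD) samples every row at the same instant, so the bar would stay in one column – which is exactly why fast pans and propellers look strange on many CMOS cameras.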
Let’s Compare & Conclude: CCD vs. CMOS
You’ll find that any camera on the market with a single sensor shooting above HD is built around a CMOS chip, since 3CCD designs don’t scale in definition. I’m not entirely sure why single CCD chips haven’t been scaled up to larger definitions to compete with single CMOS chips. My hypotheses are the following:
- The cost of producing CCD chips at larger definitions (>HD) would lead to high-priced cameras, and therefore ones not as competitive as CMOS cameras.
- An increasing number of electric charges would have to transfer via the vertical shift registers, perhaps implying that the sensors would have to either run at a higher clock frequency or drop to a lower frame rate.
- CMOS cameras can shoot at high frame rates (slow motion), whereas CCD cameras can’t.
Creativity isn’t the sole quality of a confident camera operator. Being a good technician will help you make the best of your tool, and that starts with understanding how its components work and what their limitations are.
Hopefully, this should get you started on the right path to thrive in your shooting shenanigans. Next, we’re going to bring the focus onto how single sensors collect colour (Bayer filter), their different formats/sizes and dynamic range. Now let’s get to [Part 2].
What else do you want to know about cameras? Let us know in the comments.
- The Science of Camera Sensors is a pretty neat video I’d invite you to watch to help you cement this topic from another angle.
- There is a new sensor type which is currently being developed by Gigajot. It’s called QIS – Quanta Image Sensor. It claims to be sensitive to the point that a pixel can detect down to a single photon of light. That means that it will be extremely efficient for applications such as scientific, medical and space imaging, security, low-light and high frame rate filming, and much more. I’m really excited to see this technology become available to the public.
- There are numerous structure designs for CCD sensors: Interline Transfer (IT), Frame Transfer (FT) and Frame Interline Transfer (FIT).
- A capacitor is a device used to store an electric charge.
- We can suppose that image 2 is a zoom into the vertical shift register.
- That’s because each capacitor transfers its content (or charge) to its neighbouring capacitor, as per image 2.
- It’s an optical filter superior to dye filters (which are another type of optical filter, used in “Bayer” filter designs).
- In fact, imagine (case 1) having 10 guys each delivering a bucket of water – each from his own well – to a single truck transforming the buckets one by one (sequentially) into large ice cubes. Now imagine (case 2) the same situation, but where each well has its own truck converting water buckets into ice cubes right at the well. With both cases needing to complete the task in the same timeframe, “case 1” will need a lot more sweat and water-to-ice conversion power to match the delivery pace of “case 2”.
- The larger the pixel, the larger the well depth (its capacity to store photons). It’s like comparing a shot glass to a pint glass; one’s going to have the ability to contain much more water molecules than the other.
- But overall, 3CCD and CMOS cameras sit at roughly the same level in terms of power consumption, since CMOS cameras require additional power for the heavy calculations needed to reconstruct an RGB image from the Bayer filter.
- These sensors are one of the main reasons why digital cameras have declined in price.
- Fun fact: Canon has the World’s largest ultrahigh-sensitivity CMOS image sensor, and it’s 40 times the size of a 35mm full-frame CMOS sensor.
- For this reason, their presence on the market has declined.
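The shot-glass/pint analogy about well depth can also be put in numbers: a photosite’s dynamic range is often approximated as 20·log10(full-well capacity ÷ read noise), both measured in electrons. A quick back-of-the-envelope sketch in Python (the figures are hypothetical, purely for illustration):

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Approximate photosite dynamic range in decibels:
    the ratio of the largest storable charge to the noise floor."""
    return 20 * math.log10(full_well_e / read_noise_e)

# Hypothetical values: a small photosite vs a large one, same read noise.
print(round(dynamic_range_db(5_000, 2), 1))   # 68.0 dB (small "shot glass")
print(round(dynamic_range_db(50_000, 2), 1))  # 88.0 dB (large "pint glass")
```

Ten times the well depth buys roughly 20 dB of extra dynamic range, which is one reason larger photosites tend to handle high-contrast scenes better.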