

In [Part 1], we explained the principles and attributes of CCD & CMOS sensors, then put them in a side-by-side comparison to understand how each is operated to get the best out of it. We also explained how 3CCD sensors collect colour (through a dichroic filter separating light into Red, Green and Blue streams), but not how single sensors do[1]; that deserves some attention, since most photography & video production cameras are now equipped with a single sensor (CMOS in particular).

 

Here, I’d like to bring your attention to the Bayer filter: how it collects colour and how it impacts a camera’s image resolution. Going further, a sensor’s format (full frame, ⅔”, etc.), type (CCD/3CCD/CMOS) and build quality (improved versions of a sensor technology, capacity to shrink the electronics further) are considerable factors in your camera’s image resolution, dynamic range and sensitivity to light, while also influencing the depth of field and focal length of a given shot.

This part is broken down into 3 sections:

      • The Bayer Filter
      • Sensor Format/Size
      • Dynamic Range

 

The Bayer filter

Patented by Bryce Bayer in 1976, this principle was developed to make single-sensor colour cameras possible. It simplified camera designs (enabling compact cameras) compared with 3-chip cameras, which were larger, heavier and required more complex engineering[2].

 

This filter is made of microscopic pads – coloured red, green or blue – layered (with diabolical precision) on top of the sensor’s photosensitive area so that each photosite is exposed to one colour only. From a close-up view, here’s how it looks…

 

In theory

(Source: Wikipedia)

 

Photograph of a Bayer pattern on a sensor, at 600× magnification

(Source: Photo by Kevin Collins)

 

Resolution

As suggested in the images above, the sensor’s photosites are covered by the filter in the following proportions:

      • 50% Green[3]
      • 25% Red
      • 25% Blue

Each photosite records only one colour, so in order to determine the full colour of a single image pixel (where the Green, Red or Blue luminance/chrominance data is missing), each group of neighbouring photosites is subject to interpolation calculations known as “debayering” or “demosaicing”.
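
To make demosaicing a little more concrete, here is a minimal sketch of the simplest approach (bilinear-style averaging). It is purely illustrative: the function name, the RGGB layout and the NumPy-based approach are assumptions, and real cameras use far more sophisticated algorithms.

import numpy as np

def demosaic_bilinear(raw):
    """raw: 2-D array of photosite values laid out as an RGGB Bayer mosaic."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))

    # Masks marking which photosites carry which colour (RGGB layout assumed).
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)  # the remaining 50% of photosites are green

    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        plane = np.where(mask, raw, 0.0)
        total = np.zeros_like(plane)
        count = np.zeros_like(plane)
        # Average each pixel over its 3x3 neighbourhood, counting only the
        # positions where this colour was actually sampled (edges wrap around
        # here, which a real implementation would handle properly).
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                total += np.roll(np.roll(plane, dy, axis=0), dx, axis=1)
                count += np.roll(np.roll(mask.astype(float), dy, axis=0), dx, axis=1)
        rgb[..., c] = total / np.maximum(count, 1)
    return rgb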

 

Because the Bayer method does not allow each individual photosite to capture all three colours at once, in an over-simplified way of looking at things, a camera equipped with a Bayer pattern renders an image resolution of roughly 71% of the sensor’s stated resolution. But it’s much more complex than that; I’d invite you to read Graeme Nattress’s interview (image processing mathematician at RED cameras) by Todd Blankenship if you want to dig deeper.

 

Unfortunately, the Bayer method introduces artefacts such as:

      • Aliasing (which shows up as “moiré” patterns), which can be mitigated with an Optical Low-Pass Filter (OLPF). The downside of an OLPF is that it slightly reduces the sharpness of the image.
      • False colour

Fortunately, these problems are now largely overcome (though still present in theory): manufacturers can produce high-end, large sensors (increased resolution, more photosites), allowing finer detail rendition (sharpness). At lower resolutions, that sharpness couldn’t be sacrificed so easily, so engineers had to find a compromise between aliasing artefacts and the sharpness reduction of an OLPF.

 

Alternatives

There are alternative filter technologies (e.g. CYMG, RGBE and RGBW filters) designed in an attempt to suppress some of the Bayer filter’s artefacts and/or improve single-sensor luminance/chrominance resolution, but the Bayer filter remains the most widely used.

If you’d like to read more on this particular topic, make sure to check out this brilliant post put together by RED: The Bayer Sensor Strategy.

 

Sensor Format/Size

Generally speaking, commercially available single-chip professional (and semi-professional) cameras tend to have sensors varying in size from APS-C to Medium Format. Broadcast-type cameras with 3CCD blocks, however, usually incorporate 2/3” sensors[4].

A sensor’s format (or size) largely determines the amount of light absorbed (depending on the number of photosites and their actual size) to create an image. It influences elements such as colour detail, sharpness, ISO performance[5] (the capacity to absorb light with the lowest possible noise level), deliverable image definition and resolution (which we’ll take a look at further down in this post). The sensor’s performance may, of course, also depend on the manufacturing standards.

Here’s a helpful illustration comparing formats:

(Source: Wikipedia)

 

To illustrate the previous explanation, let’s compare sensors that differ in size or in pixel count:

      • with the exact same number of pixels, the larger sensor has larger individual photosites, so its light sensitivity is increased (each photosite can collect more photons) as well as its dynamic range; your image will contain more detail in the shadows as well as in the highlights (i.e. more usable contrast). A small numerical sketch of this follows the list.
      • with the exact same size, the sensor with (considerably) more pixels will produce a sharper image (finer details) and a larger definition; in fact, you will have more room to crop into the image while keeping the same amount of detail as the sensor with fewer photosites.
      • with a given lens, the larger sensor sees a wider portion of the scene, so framing the same shot requires a longer focal length; at the same aperture and subject distance, that longer focal length gives a shallower depth of field[6].
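
To put the pixel-size point in numbers, here is a tiny sketch (the figures are assumed, typical approximations rather than any specific camera): two sensors with the same 20-megapixel count but different physical sizes end up with very different photosite pitches.

def photosite_pitch_um(sensor_width_mm, sensor_height_mm, megapixels):
    """Approximate pitch of one photosite, in micrometres."""
    area_um2 = (sensor_width_mm * 1000) * (sensor_height_mm * 1000)
    return (area_um2 / (megapixels * 1e6)) ** 0.5

# Full frame (36 x 24 mm) vs. Micro Four Thirds (17.3 x 13 mm), both at 20 MP:
print(photosite_pitch_um(36.0, 24.0, 20))   # ~6.6 um per photosite
print(photosite_pitch_um(17.3, 13.0, 20))   # ~3.4 um per photosite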

In a nutshell, the larger the sensor:

      • the larger the definition
      • the better the dynamic range
      • the better the sensitivity to light
      • the shallower the depth of field (for an equivalent framing)
      • the longer the focal length needed for a given framing (see the short sketch right after this list)
      • the more limited your zoom ratio (well, the limitation depends on your budget and what is commercially available)
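
To illustrate the focal-length point, here is a rough sketch using the crop factor computed from sensor diagonals (the 2/3” dimensions below are an assumed approximation): the smaller the sensor, the shorter the focal length that frames the same shot, and conversely.

import math

def crop_factor(width_mm, height_mm, ref_width=36.0, ref_height=24.0):
    """Crop factor relative to a full-frame (36 x 24 mm) sensor, from the diagonals."""
    return math.hypot(ref_width, ref_height) / math.hypot(width_mm, height_mm)

# A 2/3" broadcast sensor is roughly 9.6 x 5.4 mm:
cf = crop_factor(9.6, 5.4)
print(round(cf, 1))        # ~3.9
# So a 50 mm lens on full frame frames roughly like a ~13 mm lens does on 2/3":
print(round(50 / cf, 1))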

 

Dynamic Range

Dynamic range describes one of the key attributes of a camera’s image sensor: the ratio, or range, between the brightest and the darkest levels of light it can capture, most often measured in stops. The more “stops”[7] of light a camera’s sensor can see, the higher its dynamic range.

 

(Source: PremiumBeat)

 

Remember, colours are merely light reflected into our eyes, so the higher the dynamic range, the more capable a camera is of seeing colour across the whole range of brightness/intensity.

The human eye is estimated to be capable of capturing up to 24 stops, but it doesn’t capture the whole range all at once; it automatically adjusts its overall sensitivity to light to adapt to its environment. Cameras usually have the potential to capture 6 to 14 stops (or more, for higher-end devices), depending on the sensor’s limitations[8].
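
Since each additional stop doubles the ratio between the brightest and darkest recordable light, those figures translate into contrast ratios roughly as follows (a back-of-the-envelope sketch):

# Each stop doubles the captured light ratio, so the contrast ratio grows as 2 ** stops.
for stops in (6, 14, 24):
    print(f"{stops} stops ~ {2 ** stops:,}:1")
# 6 stops  ~ 64:1
# 14 stops ~ 16,384:1
# 24 stops ~ 16,777,216:1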

 

What are you planning to shoot?

Outdoor/indoor, b-roll, city, nature, movies, interviews, sports? A combination of those?

It’s important to understand how you’ll expect your camera to perform under those conditions. Let’s illustrate with some examples[9]; if you’re shooting:

      • concerts and nightlife events, it makes sense to choose a camera that performs well in low light (a sensor with high sensitivity and low noise generation).
      • nature/city/outdoor environments in daylight, most cameras will have enough light.
      • portraits, you might prioritise fine detail.
      • documentaries, you may want something robust that can shoot in a wide range of lighting conditions.
      • as a hobby, you could be looking at cameras with smaller sensors, which will perform less well than the professional ranges.

In theory, this approach works, but it’s not just about the sensor: the maths & circuitry (debayering, A/D conversion, compression), the lenses and the accessories (filters, lighting, etc.) play just as much of a role in determining your final image’s properties.

 

I hope that this post has helped you gain a little more confidence in understanding cameras. If some things remain unclear or you’d like us to develop on a particular point, please drop us a comment.

Subscribe!

Footnotes

  1. In fact, a sensor stripped of its filtering components only collects a monochrome light intensity, so it does not collect colour.
  2. Why weren’t 3-chip cameras equipped with larger sensors? Because if they had been, these would’ve required extremely bulky and complex dichroic filters which weren’t compatible with compact camera designs.
  3. The definition of the Green component is divided by 2; the definitions of the Blue and Red components are each divided by 4.
  4. At comparable definition (let’s say HD), a single-chip CMOS camera would need a larger sensor (therefore more pixels) than the 2/3” of a 3CCD camera to produce an HD image, since its colour resolution is reduced to 1/4 for the Red and Blue channels and to 1/2 for Green.
  5. By the way, you should always aim to shoot at (or close to) your camera’s “base ISO” (or “native ISO”), as it will give you the highest image quality. Otherwise, you’ll likely introduce noise into your shots (a grainy image), which isn’t usually desirable.
  6. That explains why cameras with larger sensors (e.g. cinema) deliver close-up shots with very blurry backgrounds whilst the filmed subject has a sharp focus.
  7. In this case, the term “stops” refers to the unit used to quantify ratios of exposure. This term can also be used to define the relative aperture of a lens’ iris/pupil, usually expressed in “f-stops”.
  8. That’s when lights and filters come into play: a light projector will increase the light intensity of a darker area (and perhaps create different textures and/or emphasise a colour to change the mood of a scene, in a controlled manner), whereas a filter – such as an ND filter – will reduce the light intensity (of all wavelengths, equally) without modifying the colour rendition. The latter is particularly useful when shooting (let’s say) under the sun/outdoors.
  9. Yet, these are only suggestions as it also may very well depend on your style and shooting intentions.
