General Operating Principles of CMOS/CCD Sensors

The photosensitive sensor is the heart of every camera. It’s what converts incident light into an electrical signal. Each sensor is a specially prepared silicon wafer with a photosensitive surface and the necessary accompanying circuitry. Sensors are divided into two main types: CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide-Semiconductor).

The general operating principle for both CCD and CMOS sensors is the same. Light falling on the silicon wafer dislodges electrons, which then accumulate in areas called pixels. The amount of accumulated charge is proportional to the intensity of the incident light. The collected electrons are then converted into a voltage, which in turn is converted into digital data by an A/D (analog-to-digital) converter. Combining the information from all pixels yields a complete image, which is stored in memory.
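
This light-to-digital pipeline can be summarized in a few lines of code. Below is a minimal numerical sketch in Python/NumPy; the quantum efficiency, conversion gain, ADC range, and bit depth are purely illustrative assumptions, not parameters of any real sensor:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Photons hitting a 4x4 patch of pixels during one exposure.
photons = rng.poisson(lam=500, size=(4, 4))

quantum_efficiency = 0.6    # fraction of photons that dislodge an electron (assumed)
conversion_gain = 5e-6      # volts per electron (assumed)
full_scale_voltage = 0.016  # ADC input range in volts (assumed)
bit_depth = 12              # 12-bit A/D converter (assumed)

# 1. Light dislodges electrons; the accumulated charge is proportional to intensity.
electrons = photons * quantum_efficiency

# 2. The collected charge is converted into a voltage.
voltage = electrons * conversion_gain

# 3. The A/D converter quantizes the voltage into digital numbers.
max_dn = 2**bit_depth - 1
digital = np.clip(np.round(voltage / full_scale_voltage * max_dn), 0, max_dn)
digital = digital.astype(np.uint16)

print(digital)  # the raw grayscale patch as it would be stored in memory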

Sensors only provide information about the amount of light that fell on a given pixel at a given time. It’s easy to guess that the raw image stored in memory will be in shades of gray. How color images are recorded is described in detail later in the text.

So, what differentiates CCD and CMOS sensors? The method and order of reading information from individual pixels. The application of different technologies has led to different results in functionality and performance.


CCD Sensors

The first CCD sensor was created in 1969 and consisted of 8 pixels arranged in a column. What initially drove the development of this technology was space observation. Several decades of development have made CCD technology present in a huge number of devices, from industrial equipment to everyday consumer electronics.

A CCD sensor is a silicon wafer divided into pixels, which are square areas electrically isolated from each other. During image capture, a charge proportional to the intensity of the incident light accumulates in them. In a CCD sensor, data from pixels are read out in entire rows, meaning it’s not possible to read the value of just a single pixel. As shown in the diagram below, the bottom row is shifted to a readout channel, where the charge values accumulated in consecutive pixels are proportionally converted into voltage.

Diagram showing the operation of a CCD sensor.

This voltage then passes through an A/D converter, whose output provides digital data that is ultimately stored in the camera’s memory. After an entire row has been read, the next row of pixels is shifted into the channel, and so on, until data from the whole sensor has been collected. In each cycle, the remaining rows move down one level until they too are read and converted into image data. This design does not provide access to individual pixels until data from the entire sensor has been read and stored in the camera’s memory.
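
The row-shift mechanism is easy to mimic in code. The following sketch simulates the readout order under simplifying assumptions (a noiseless sensor and a toy 3×4 frame, both chosen only for illustration):

```python
import numpy as np

def ccd_readout(charge):
    """Simulate CCD readout: rows shift down, one row at a time, into a
    serial register that feeds the sensor's single converter."""
    rows, cols = charge.shape
    image = np.empty_like(charge)
    remaining = charge.copy()
    for cycle in range(rows):
        serial_register = remaining[-1].copy()   # bottom row enters the channel
        remaining = np.vstack([np.zeros((1, cols), dtype=charge.dtype),
                               remaining[:-1]])  # other rows move down one level
        for c in range(cols):                    # pixel by pixel through the
            image[rows - 1 - cycle, c] = serial_register[c]  # single converter
    return image

charge = np.arange(12).reshape(3, 4)  # toy 3x4 frame of accumulated charge
print(ccd_readout(charge))            # equals the input, but note: no pixel is
                                      # available before the whole frame is read
```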


CMOS Sensors

Perhaps surprisingly, CMOS sensor technology has been known for about as long as CCD. For a long time, however, it was difficult to master to a degree that allowed mass production. The general operating principle of a CMOS sensor is identical to that of a CCD sensor – here too, the accumulated charge is converted into a voltage signal. The difference lies in where this conversion happens and in the path the signal subsequently takes.

Diagram showing the operation of a CMOS sensor.

As mentioned above, CMOS sensors also collect light-intensity data in the form of charge. What differentiates this technology from CCD sensors is the ability to access each individual pixel by its (x, y) coordinates. Furthermore, unlike a CCD sensor, which has only one charge-to-voltage converter for the whole chip, a CMOS sensor gives each pixel its own converter. More expensive and advanced models also have dedicated analog-to-digital converters (ADCs) for individual pixels. This allows, for example, higher data readout speeds and, consequently, better performance of the entire camera and vision system. On the other hand, all these elements occupy additional space on the sensor surface, so CMOS sensors typically have a smaller photosensitive area than CCD sensors. To increase the amount of light collected, a common practice is to add an array of microlenses, which focus more light onto the photosensitive layer.
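
The practical consequences of per-pixel conversion and coordinate addressing can be sketched as follows; the array shape, gain, and coordinates are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
charge = rng.poisson(300, size=(480, 640)).astype(float)  # electrons per pixel

gain = 5e-6  # volts per electron (assumed)

# Per-pixel conversion happens in parallel -- modeled here as one vectorized
# operation instead of the pixel-by-pixel loop in the CCD sketch above.
voltage = charge * gain

# Direct access to a single pixel by its (x, y) coordinates:
x, y = 120, 45
print(voltage[y, x])

# Reading out only a region of interest (ROI) -- fewer pixels per frame,
# which on many CMOS cameras translates into a higher frame rate:
roi = voltage[100:200, 300:400]
print(roi.shape)  # (100, 100)
```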


What Makes CMOS Better Than CCD?

  • Operating Speed: CMOS sensors offer higher operating speeds than CCDs. In a CCD sensor, the charge from every pixel must be shifted to a limited number of output nodes (one to several) to be converted into voltage and sent off the chip; in a CMOS sensor, each pixel contains its own amplifier, and sometimes even an analog-to-digital converter (see the rough readout-time comparison after this list).
  • Power Consumption: CCD sensors consume more power, leading to shorter battery life for battery-powered equipment.
  • Production Cost: Manufacturing CMOS sensors is cheaper than CCDs because CMOS sensors can be produced on machines that manufacture other components using CMOS technology (CMOS is not just about sensors!).
  • Amount of Noise in Data Transmission: In CMOS sensors, the distance between the photodiode and the A/D converter is smaller than in CCDs, reducing the chance of signal distortion during transmission.
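
To make the speed point concrete, here is a back-of-the-envelope readout-time comparison. The pixel clock, resolution, and channel counts are illustrative assumptions, not specifications of any real sensor:

```python
pixels = 1920 * 1080   # pixels per frame (assumed resolution)
pixel_rate = 50e6      # pixels per second one converter can digitize (assumed)

for channels, label in [(1, "CCD, single output node"),
                        (4, "CCD, four output nodes"),
                        (1920, "CMOS, one ADC per column")]:
    frame_time = pixels / (channels * pixel_rate)
    print(f"{label}: {frame_time * 1e3:.3f} ms/frame, "
          f"{1 / frame_time:.0f} fps max")
```

The more conversion channels work in parallel, the shorter the time needed to read a full frame, which is exactly why per-pixel and per-column converters give CMOS its speed advantage.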

Early CMOS sensors were characterized by high noise levels; in current models, noise is comparable to that of CCD sensors.


What Makes CCD Better Than CMOS?

  • Photosensitive Area: CMOS sensors have lower photosensitivity than CCDs, because the amplifiers and converters placed next to each pixel take up part of the area that could otherwise collect light.

Rolling Shutter vs. Global Shutter

Another issue is the shutter operation mode. Two modes are distinguished: rolling shutter and global shutter. In a global shutter configuration, all pixels on the sensor start and end exposure at the same moment. In a rolling shutter configuration, by contrast, the sensor is exposed progressively from top to bottom, so only the pixels within a single row start and end exposure at the same moment; readout proceeds in the same rolling fashion. Each mode has its own benefits: global shutter sensors capture fast-moving objects without the geometric skew a rolling shutter introduces, while rolling shutter technology is generally cheaper and works well for observing slowly changing scenes.
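
The timing difference is easiest to see as a table of per-row exposure windows. The following sketch prints them for both modes; the exposure time and line period are illustrative values:

```python
rows = 8            # toy sensor height
exposure = 10.0     # exposure time in ms (assumed)
line_period = 0.5   # ms between starts of consecutive rows, rolling shutter (assumed)

print("row   global start..end      rolling start..end")
for r in range(rows):
    r_start = r * line_period
    print(f"{r:>3}   {0.0:5.1f} .. {exposure:5.1f} ms     "
          f"{r_start:5.1f} .. {r_start + exposure:5.1f} ms")
```

In the rolling column, each row's window is shifted later than the one above it, which is why a fast-moving object appears skewed: different rows see it at different moments.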


Color

The sensors themselves only collect information about the intensity of incident light. Various technologies are used to add color to images, the most popular being the Bayer filter. This is a filter array that mirrors the structure and size of the sensor it covers. Each square of the filter passes only one of the three R, G, B components (red, green, or blue). This means the information provided by a pixel corresponds to the intensity of just one component; the values from the entire sensor are then interpolated and combined into a single color image. It’s worth adding that the individual color filters are not evenly distributed over the sensor: every 2×2 block of pixels contains two green filters, one blue, and one red. This arrangement reflects the human eye, which is more sensitive to green than to the other components. The disadvantage of the Bayer filter is that it limits the effective resolution of the sensor and requires color interpolation.

Bayer filter
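
The mosaic-then-interpolate idea can be demonstrated in a few lines. Below is a minimal sketch assuming an RGGB pattern; the 3×3 neighbor averaging used here is a crude stand-in for the more sophisticated demosaicing algorithms real cameras use:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Keep one of R, G, B per pixel, following the RGGB pattern."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G -- two green sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G -- per 2x2 block
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return mosaic

def neighbor_fill(plane, mask):
    """Fill unknown pixels with the average of known pixels in a 3x3 window."""
    h, w = plane.shape
    p, m = np.pad(plane, 1), np.pad(mask.astype(float), 1)
    s = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    n = sum(m[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return np.where(mask, plane, s / np.maximum(n, 1))

def demosaic(mosaic):
    """Rebuild a full-color image from the mosaic by interpolation."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    out = np.empty((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        out[..., c] = neighbor_fill(mosaic * mask, mask)
    return out

rgb = np.random.default_rng(2).random((8, 8, 3))
approx = demosaic(bayer_mosaic(rgb))
print(np.abs(approx - rgb).mean())  # nonzero: the price of interpolation
```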

Another solution is a prism system that separates light into three components, as shown in the diagram below. An undeniable advantage of this solution is that it does not limit the actual resolution of the sensor, thus producing a very high-quality image. However, it is very expensive: besides the prism system itself, it requires three identical sensors, one for each of the R, G, B components.

Prism system separating light into three components.

The last common solution involves sensors composed of three stacked layers, where each layer records information about one light component and passes the rest to lower layers. Here too, there is no limitation of the sensor’s resolution. However, this solution is very rarely found in commercial devices.


“Backside Illumination” Sensors

Modern CMOS sensors offer the highest image quality, even in low-light conditions, despite their smaller photosensitive area. Sensors manufactured using BSI (“Backside Illumination”) technology perform particularly well in this regard. In this design, all control elements and wiring are moved to the back side of the sensor, so that light focused by the microlenses falls directly onto the photosensitive layer. In conventional front-illuminated sensors, the layer of circuitry on the front surface caused significant light losses through multiple reflections and refractions, lowering image brightness and increasing noise. CMOS BSI sensors, such as the Sony STARVIS line used in Basler’s latest cameras, can capture clear and detailed images even at night.


In recent years, CMOS sensors have gained increasing popularity thanks to their image quality and operating speed, and they are gradually displacing CCD technology.