CCD and CMOS sensors were both invented in the late 1960s and 1970s (DALSA founder Dr. Savvas Chamberlain pioneered the development of both technologies). The CCD became dominant, mainly because it delivered far superior images with the manufacturing technology available at the time. CMOS image sensors required greater process uniformity and smaller feature sizes than silicon wafer foundries could then offer. It was only in the 1990s that lithography advanced to the point where designers could start making a serious case for CMOS again.
The renewed interest in CMOS was based on expectations of reduced power consumption, camera-on-chip integration, and lower manufacturing costs through the reuse of production lines for standard logic and memory devices. Achieving these benefits in practice, while delivering high image quality, required far more time, money, and process adaptation than the original projections suggested, but CMOS sensors have since joined CCDs as a mainstream and mature technology.
In any case, the CCD came first, being the easier of the two to build: Willard Boyle and George Smith created it back in 1969. In this type of sensor, in a nutshell, the electrical charge generated by photons striking the sensor is transferred to the readout circuitry through a few “output nodes”. There it is converted into a potential difference (we are talking about microvolts) and finally leaves the sensor as an analog signal.
In a CMOS sensor, on the other hand, every single photodiode is coupled to its own converter (so the charge is immediately transformed into a potential difference), noise-reduction circuitry, and digitization circuits. The output signal is therefore digital.
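To make the two signal chains concrete, here is a minimal Python sketch of a single pixel's path from collected charge to a digital number; the conversion gain, ADC bit depth, and full-scale voltage are illustrative assumptions, not values from any particular sensor.

```python
# Simplified pixel signal chain: photoelectrons -> voltage -> digital counts.
# All numbers below are illustrative assumptions, not real sensor specs.

CONVERSION_GAIN_UV_PER_E = 5.0   # microvolts produced per collected electron
ADC_BITS = 12                    # bit depth of the analog-to-digital converter
ADC_FULL_SCALE_UV = 100_000.0    # voltage span mapped onto the ADC range (100 mV)

def pixel_to_counts(electrons: int) -> int:
    """Convert a pixel's collected charge into a digital number (DN)."""
    voltage_uv = electrons * CONVERSION_GAIN_UV_PER_E          # charge -> voltage
    counts = round(voltage_uv / ADC_FULL_SCALE_UV * (2**ADC_BITS - 1))
    return min(counts, 2**ADC_BITS - 1)                        # clip at saturation

# In a CCD this conversion happens at a handful of shared output nodes off to
# the side of the array; in a CMOS sensor it happens at (or right next to)
# every pixel, so the chip outputs digital data directly.
print(pixel_to_counts(2_000))   # e.g. 2000 electrons -> ~410 DN
```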
Technology
Most modern electronics are built using CMOS (complementary metal oxide semiconductor) technology. CMOS devices use NMOS and PMOS transistors, which gives them excellent switching characteristics. Building sensors with CMOS technology makes it possible to incorporate additional electronic components on the chip, such as analog-to-digital converters.
Each pixel in a CMOS sensor has its own readout amplifier, and the sensors often have an A/D converter for each column: this makes it possible to read the array extremely quickly. The transistors located on each pixel take up some space, however, resulting in lower sensitivity and a smaller well depth. Aside from speed, the main motivation for developing CMOS sensors was cost, not performance. As a result, the sensitivity, noise, and dark-current performance of CMOS sensors were far inferior to CCD sensors for many years.
CMOS sensors do not require complex external electronics or clocks that produce precise voltages and waveforms to move charge around the sensor. They do not need external readout chains, correlated double samplers, or A/D converters: all the electronics required for readout are integrated directly into the sensor.
The single chip only needs a clean power supply to deliver a good image, and it is read out directly in digital form. This is why CMOS sensors have a big cost advantage. That said, for scientific applications the additional mechanical and electronic hardware required to cool the sensor is still an important cost factor, regardless of sensor type.
Advantages and disadvantages of the two solutions
- CCD produces higher-quality images than CMOS
- CMOS is more susceptible to noise than CCD
- CCD consumes much more energy than CMOS (roughly 3 times as much)
- CMOS consumes less, heats up less, and therefore introduces less temperature-related noise than CCD
- CCD is more expensive than CMOS
- CMOS has greater on-chip complexity than CCD
With the passage of time and the advancement of technology, the research centers of the large manufacturers have continued to improve the two technologies in parallel: for CMOS the focus has been on image quality, for CCDs on reducing consumption. The end result is that, at the moment, the two sensor types are comparable and can be used interchangeably across all cameras.
In fact, the big divide seen until now is fading: CMOS sensors in consumer (compact) cameras and CCD sensors in professional or reflex equipment. Looking at what is currently on the market, only digital backs still use the CCD exclusively (its color rendering is even slightly better). All the big brands (Nikon and Canon included) are bringing both solutions to market.
| Feature | CCD | CMOS |
| --- | --- | --- |
| Photodiode output | Electric charge | Voltage |
| Chip output | Voltage (analog) | Bits (digital) |
| Camera output | Bits (digital) | Bits (digital) |
| Noise | Low | Moderate |
| Sensor complexity | Low | High |
| Dynamic range | Wide | Moderate |
| Uniformity | High | Low to moderate |
| Burst speed | Moderate to high | High |
| Color accuracy | High | Average |
Let’s now try to analyze some substantial differences between the two types of sensors.
CCD vs CMOS: consumption
As mentioned above, a CMOS sensor, because of how it is built, has a lower energy cost than an equivalent CCD: the graph below shows the difference in consumption between the different types of cameras.
The lower consumption comes from the fact that reading out a voltage (CMOS) requires very little power, whereas transferring an electric charge (CCD) means physically moving electrons across the chip. In addition, for CMOS the consumption barely changes as the sensor grows, since the number of output channels stays constant (unless you want to push the already excellent readout speed much higher). With a CCD, on the other hand, as the size increases the number of electrons to be moved increases, and so does the energy consumption.
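To illustrate the scaling argument (not the actual physics), here is a toy Python model in which CCD readout energy grows linearly with pixel count while CMOS readout power stays roughly fixed; every constant in it is a made-up placeholder.

```python
# Toy model of the power-scaling argument (all constants are placeholders).
PIXELS_SMALL, PIXELS_LARGE = 12e6, 48e6

# CCD: energy is spent shifting charge for every pixel, every frame.
ENERGY_PER_TRANSFER_J = 2e-8           # hypothetical energy per pixel transfer
FPS = 10
ccd_power = lambda n: n * ENERGY_PER_TRANSFER_J * FPS

# CMOS: readout power is dominated by a fixed number of output channels,
# so it barely changes with pixel count.
CMOS_FIXED_POWER_W = 0.5               # hypothetical constant readout power

for n in (PIXELS_SMALL, PIXELS_LARGE):
    print(f"{n/1e6:.0f} MP: CCD ~{ccd_power(n):.1f} W, CMOS ~{CMOS_FIXED_POWER_W} W")
```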
CCD vs CMOS: low light
In dim or poor lighting the CCD wins: signal amplification is carried out on all the pixels together, while in a CMOS sensor each photodiode's signal is amplified individually (per-pixel amplifiers, in practice). The result is greater precision and uniformity, and therefore a photo with slightly less noise, in the case of the CCD.
It should also be added that the per-pixel amplifiers of a CMOS sensor have limited bandwidth compared to those of a CCD: this turns out to be a positive, because in low light, where the signal level is close to the sensor's own noise floor, the narrower bandwidth gives the CMOS a better signal-to-noise ratio, which translates into the possibility of using higher ISOs.
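A minimal sketch of the signal-to-noise reasoning, using a simple shot-noise-plus-read-noise model; the electron counts and read-noise figures are hypothetical and only meant to show how read noise dominates when the signal sits close to the noise floor.

```python
import math

def snr(signal_electrons: float, read_noise_e: float, dark_e: float = 0.0) -> float:
    """SNR of one pixel: signal over shot, dark, and read noise added in quadrature."""
    total_noise = math.sqrt(signal_electrons + dark_e + read_noise_e**2)
    return signal_electrons / total_noise

# Hypothetical low-light exposure: only 25 photoelectrons reach the pixel.
signal = 25
print("read noise  3 e- :", round(snr(signal, 3.0), 2))   # quieter, narrow-band amplifier
print("read noise 10 e- :", round(snr(signal, 10.0), 2))  # noisier, wide-band amplifier
```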
However, CCDs, having no electronics next to the photodiodes, have a larger photosensitive surface: more light captured means a stronger electrical signal and therefore less amplification is needed.
CCDs also have another huge advantage: pixel binning. In practice, four adjacent photodiodes can be “joined” to increase the sensitivity of the sensor (almost fourfold). This obviously means a fourfold reduction in resolution, but the final results are excellent: a CCD sensor with pixel binning active can take photos at drastically lower ambient light levels than a standard CCD or CMOS sensor could. This expensive technology is used in digital backs.
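As a software illustration of what binning does to the data (on a real CCD the four charges are summed on-chip, before read noise is added, which is exactly why it works so well), here is a quick NumPy sketch with a made-up noisy frame:

```python
import numpy as np

def bin_2x2(frame: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block of pixels into one 'super-pixel' (quarter resolution)."""
    h, w = frame.shape
    return frame[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).sum(axis=(1, 3))

raw = np.random.poisson(lam=5, size=(4, 6))   # a dim, noisy 4x6 toy frame
binned = bin_2x2(raw)                          # 2x3 frame, ~4x the signal per pixel

print(raw)
print(binned)
```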
CCD vs CMOS: speed
CMOS sensors definitely win: they are faster and allow quicker photo bursts. CCD sensors have the problem of having to transfer all the data collected by the individual pixels through the converter (from electric charge to potential difference) and the amplifier before they are ready to capture a new image. The amplifier and the converter have to work on a large amount of data, and this limits speed (they act as a bottleneck). In CMOS sensors, on the other hand, since there is a converter and an amplifier per pixel, readout is decidedly more efficient and does not bottleneck: each line of photodiodes can be “read” separately and stored as soon as it is ready.
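A back-of-the-envelope Python comparison of the two readout schemes; the pixel clock and row rate below are illustrative figures, not specifications of any real sensor.

```python
# Back-of-the-envelope frame-readout comparison (illustrative figures only).
ROWS, COLS = 4000, 6000            # a hypothetical 24-megapixel sensor

# CCD: every pixel is shifted to one output node and digitized serially.
CCD_PIXEL_RATE = 40e6              # pixels per second through the output node
ccd_frame_time = ROWS * COLS / CCD_PIXEL_RATE

# CMOS: one ADC per column digitizes a whole row in parallel.
CMOS_ROW_RATE = 100e3              # rows per second
cmos_frame_time = ROWS / CMOS_ROW_RATE

print(f"CCD  readout: {ccd_frame_time*1e3:.0f} ms per frame "
      f"(~{1/ccd_frame_time:.1f} fps)")
print(f"CMOS readout: {cmos_frame_time*1e3:.0f} ms per frame "
      f"(~{1/cmos_frame_time:.0f} fps)")
```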
CCD vs CMOS: shutter
The two sensors come out even here, even though the approaches they follow are completely different. The problem is once again linked to the speed at which the “image” is transferred out of the photodiodes: while the transfer takes place, light keeps striking the photodiodes, increasing their electric charge. The result is an exposure time longer than intended.
To solve this problem, CCDs introduced transfer channels (interline transfer, or ILT, technology): a sort of twin photodiode that is permanently shielded from light. The exposed photodiode instantly passes its electric charge to its shielded neighbor, which handles the actual transfer. During this phase the charge is no longer exposed to light and therefore stays constant.
In CMOS sensors, to solve the problem (which is smaller anyway, thanks to the higher transfer speed), a transistor was introduced in each pixel between the photodiode and the charge accumulator. Traditionally each pixel has three transistors: this maximizes the area exposed to light but creates the classic rolling shutter effect.
Rolling shutter is a capture method in which the image is not exposed all at the same instant but by scanning it vertically (or horizontally), line by line. This scan, however, clashes with the movement of the mechanical shutter and with subject motion, and the final effect is precisely the creation of distortions in fast-moving objects such as a rotor blade.
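The geometry is easy to simulate: in the Python sketch below a vertical bar moving horizontally is captured one row at a time, so each row sees the bar slightly later and the bar comes out slanted; all parameters are arbitrary.

```python
import numpy as np

ROWS, COLS = 8, 24
LINE_TIME = 1.0          # time to read one row (arbitrary units)
BAR_SPEED = 1.0          # columns the bar moves per unit time
BAR_START = 4            # bar position when row 0 is read

frame = np.full((ROWS, COLS), '.', dtype='<U1')
for row in range(ROWS):
    t = row * LINE_TIME                        # each row is exposed/read later
    x = int(BAR_START + BAR_SPEED * t) % COLS  # bar has moved by the time this row is read
    frame[row, x] = '#'

print('\n'.join(''.join(r) for r in frame))    # the vertical bar appears as a diagonal
```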
CCD vs CMOS: bright artifacts
In some cases, a very bright source can generate a whole column of white pixels. In practice, the excess electrons at one photosite spill over into all the neighboring ones, affecting an entire column (this happens less along rows because the transfer channel sits beside each photosite). This effect is typical of the CCD and is not present in CMOS sensors, where the extra electronics between photosites provides greater separation.
One of the main disadvantages in long-exposure applications is the amp glow phenomenon. This is stray light produced by the on-chip amplifiers: all semiconductors under bias emit a small amount of light, through the same mechanism that makes an LED work. This parasitic glow is easily dealt with on CCD sensors by reducing the voltage supplied to the on-chip amplifier during long exposures. CMOS sensors have much more active electronics on board, which usually cannot be turned off or put into a low-power state during image capture.
As a result, the sensor can often become saturated by amplifier glow within minutes. Even when shorter exposures are used instead, extra photon noise and read noise accumulate (if you are interested in expanding your knowledge of noise, we have a special section in our photography course). In the current state of technology, amplifier glow can be a significant disadvantage for long-exposure applications such as astronomy.
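A small sketch of why splitting one long exposure into many short ones costs extra read noise: the read noise is paid once per frame and adds in quadrature across the stack. The signal and noise figures below are hypothetical.

```python
import math

def stack_snr(total_signal_e: float, n_frames: int, read_noise_e: float) -> float:
    """SNR of n_frames co-added exposures collecting total_signal_e electrons overall."""
    shot_noise = math.sqrt(total_signal_e)               # same total light either way
    read_noise = read_noise_e * math.sqrt(n_frames)      # paid once per frame
    return total_signal_e / math.sqrt(shot_noise**2 + read_noise**2)

TOTAL_SIGNAL = 400        # electrons gathered over the whole session
READ_NOISE = 5            # electrons rms per readout (hypothetical)

for n in (1, 10, 60):
    print(f"{n:3d} sub-exposures -> SNR {stack_snr(TOTAL_SIGNAL, n, READ_NOISE):.1f}")
```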
CCD vs CMOS: partial exposure
This is directly linked to the rolling shutter problem and is therefore present only on CMOS sensors: in some cases, and at particular flash speeds, only part of the image is photographed “in the light” while the rest remains dark. On current cameras this effect is not visible unless you use very expensive digital backs, because the mechanical curtain shutter is significantly slower than the CMOS sensor itself: the visible result is an upper part that is illuminated while the lower part is totally black.
Conclusion
Choosing the right image sensor for an application has never been an easy task. Different applications have different requirements, and these requirements impose constraints that affect both performance and price. With all these complexities at play, it is not surprising that no general statement about CMOS versus CCD cameras holds for every application.
Line-scan and area-scan CMOS imagers outperform CCDs in most visible-light imaging applications. TDI CCDs, used for high-speed, low-light applications, still outperform TDI CMOS. The need for near-infrared imaging may make CCDs a better choice for some area- and line-scan applications. For UV imaging, surface treatment after backside thinning is essential, as is the requirement for a global shutter in some applications.
The requirement for very low noise introduces new constraints, while CMOS remains generally superior to CCD at high readout speeds. The price-performance balance may favor either CCD or CMOS imagers, depending on leverage, volume, and security of supply.