CCD vs. CMOS
The technologies and the markets that use them continue to mature, but the comparison is still a lot like apples vs. oranges: they can both be good for you. CCD (charge coupled device) and CMOS (complementary metal oxide semiconductor) image sensors are two different technologies for capturing images digitally. Each has unique strengths and weaknesses, giving advantages in different applications. Neither is categorically superior to the other, although vendors selling only one technology have usually claimed otherwise. In the last five years much has changed with both technologies, and many projections regarding the demise or ascendancy of either have been proved false. The current situation and outlook for both technologies is vibrant, but a new framework exists for considering the relative strengths and opportunities of CCD and CMOS imagers.

Both types of imagers convert light into electric charge and process it into electronic signals. In a CCD sensor, every pixel's charge is transferred through a very limited number of output nodes (often just one) to be converted to voltage, buffered, and sent off-chip as an analog signal. All of the pixel can be devoted to light capture, and the output's uniformity (a key factor in image quality) is high. In a CMOS sensor, each pixel has its own charge-to-voltage conversion, and the sensor often also includes amplifiers, noise-correction, and digitization circuits, so that the chip outputs digital bits. These other functions increase the design complexity and reduce the area available for light capture. With each pixel doing its own conversion, uniformity is lower (a toy numerical sketch following this overview illustrates the effect). But the chip can be built to require less off-chip circuitry for basic operation. For more details on device architecture and operation, see our original "CCD vs. CMOS: Facts and Fiction" article and its 2005 update, "CMOS vs. CCD: Maturing Technologies, Maturing Markets."

CCDs and CMOS imagers were both invented in the late 1960s and 1970s (DALSA founder Dr. Savvas Chamberlain was a pioneer in developing both technologies). CCDs became dominant, primarily because they gave far superior images with the fabrication technology available. CMOS image sensors required more uniformity and smaller features than silicon wafer foundries could deliver at the time. Not until the 1990s did lithography develop to the point that designers could begin making a case for CMOS imagers again. Renewed interest in CMOS was based on expectations of lowered power consumption, camera-on-a-chip integration, and lowered fabrication costs from the reuse of mainstream logic and memory device fabrication. While all of these benefits are possible in theory, achieving them in practice while simultaneously delivering high image quality has taken far more time, money, and process adaptation than original projections suggested (see "CMOS Development's Winding Path" below).

Both CCDs and CMOS imagers can offer excellent imaging performance when designed properly. CCDs have traditionally provided the performance benchmarks in the photographic, scientific, and industrial applications that demand the highest image quality (as measured in quantum efficiency and noise) at the expense of system size. CMOS imagers offer more integration (more functions on the chip), lower power dissipation (at the chip level), and the possibility of smaller system size, but they have often required tradeoffs between image quality and device cost. Today there is no clear line dividing the types of applications each can serve.
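To make the uniformity contrast concrete, here is a minimal numerical sketch in Python. The numbers are invented for illustration: the 1% rms gain spread assumed for the CMOS case is an assumption, not a measured figure. It models a CCD as a single shared output amplifier and a CMOS sensor as per-pixel amplifiers with slightly varying gain, then reports the resulting photoresponse nonuniformity.

    import numpy as np

    rng = np.random.default_rng(0)
    scene = np.full((100, 100), 1000.0)  # uniform scene, 1000 electrons per pixel

    # CCD-style readout: every charge packet is converted by ONE shared output
    # amplifier, so pixel-to-pixel gain variation is essentially zero.
    ccd_image = scene * 1.0

    # CMOS-style readout: each pixel has its own amplifier. Assume, hypothetically,
    # a 1% rms gain spread from wafer-processing variation.
    cmos_image = scene * rng.normal(1.0, 0.01, scene.shape)

    # Photoresponse nonuniformity (PRNU): rms deviation relative to the mean signal.
    for name, img in (("CCD", ccd_image), ("CMOS", cmos_image)):
        print(f"{name}: PRNU = {100 * img.std() / img.mean():.2f}%")

Under these assumptions the CCD model reports 0.00% PRNU and the CMOS model about 1%, which is the architectural point: the variation comes from having many converters instead of one.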
CMOS designers have devoted intense effort to achieving high image quality, while CCD designers have lowered their power requirements and pixel sizes. As a result, you can find CCDs in low-cost, low-power cellphone cameras and CMOS sensors in high-performance professional and industrial cameras, directly contradicting the early stereotypes. It is worth noting that the producers succeeding with "crossovers" have almost always been established players with years of deep experience in both technologies.

Costs are similar at the chip level. Early CMOS proponents claimed CMOS imagers would be much cheaper because they could be produced on the same high-volume wafer processing lines as mainstream logic or memory chips. This has not been the case. The accommodations required for good imaging performance have required CMOS designers to iteratively develop specialized, optimized, lower-volume mixed-signal fabrication processes, very much like those used for CCDs. Proving out these processes at successively smaller lithography nodes (0.35 µm, 0.25 µm, 0.18 µm ...) has been slow and expensive; those with a captive foundry have an advantage because they can better maintain the attention of the process engineers. CMOS cameras may require fewer components and less power, but they still generally require companion chips to optimize image quality, increasing cost and reducing the advantage they gain from lower power consumption.

CCD devices are less complex than CMOS, so they cost less to design. CCD fabrication processes also tend to be more mature and optimized; in general, it will cost less (in both design and fabrication) to yield a CCD than a CMOS imager for a specific high-performance application. However, wafer size can be a dominating influence on device cost: the larger the wafer, the more devices it can yield, and the lower the cost per device (the back-of-envelope sketch below illustrates the arithmetic). 200 mm wafers are fairly common at third-party CMOS foundries, while third-party CCD foundries tend to offer 150 mm. Captive foundries use 150 mm, 200 mm, and 300 mm production for both CCD and CMOS.

The larger issue around pricing is sustainability. Since many CMOS start-ups pursued high-volume, commodity applications from a small base of business, they priced below cost to win business. For some, the risk paid off and their volumes provided enough margin for viability. But others had to raise their prices, while still others went out of business entirely. High-risk start-ups can be interesting to venture capitalists, but imager customers require long-term stability and support.

While cost advantages have been difficult to realize and on-chip integration has been slow to arrive, speed is one area where CMOS imagers can demonstrate considerable strength because of the relative ease of building parallel output structures. This gives them great potential in industrial applications.

CCDs and CMOS will remain complementary. The choice continues to depend on the application and the vendor more than the technology.
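The wafer-size point above is simple geometry. The following back-of-envelope Python calculation uses entirely hypothetical numbers for die size and processed-wafer cost (the article gives none), together with the standard gross-die approximation, to show why a larger wafer lowers cost per device even when the wafer itself costs more.

    import math

    def gross_die_per_wafer(wafer_mm, die_mm2):
        # Standard approximation: wafer area / die area, minus an edge-loss term.
        return int(math.pi * (wafer_mm / 2) ** 2 / die_mm2
                   - math.pi * wafer_mm / math.sqrt(2 * die_mm2))

    die_area = 50.0  # mm^2, a hypothetical image-sensor die
    # Assumed processed-wafer costs in USD, for illustration only.
    for wafer_mm, wafer_cost in ((150, 800.0), (200, 1200.0)):
        n = gross_die_per_wafer(wafer_mm, die_area)
        print(f"{wafer_mm} mm wafer: ~{n} die, ~${wafer_cost / n:.2f} per die")

With these assumed figures, the 150 mm wafer yields roughly 306 die at about $2.61 each, while the 200 mm wafer yields roughly 565 die at about $2.12 each: a meaningful per-device saving despite the 50% higher wafer cost.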
Much has been made in recent years of the impending demise of the incumbent image-sensing technology, CCDs. Strong claims by the proponents of a resurgent CMOS technology have been countered by equally forceful claims by CCD defenders. In a pattern typical of battling technologies (both with significant merits but also lacking maturity in some regards), users have become leery of performance representations made by both camps. Overly aggressive promotion of both technologies has led to considerable fear, uncertainty and doubt.

For the foreseeable future, there will be a significant role for both types of sensor in imaging. The most successful users of advanced image-capture technology will be those who consider not only the base technology, but also sustainability, adaptability and support. They will perform best over the long term in the dynamic technology environment that the contest between CCDs and CMOS promises to deliver.

Imager basics

Both image sensors are pixelated metal oxide semiconductors. They accumulate signal charge in each pixel proportional to the local illumination intensity, serving a spatial sampling function. When exposure is complete, a CCD transfers each pixel's charge packet sequentially to a common output structure, which converts the charge to a voltage, buffers it and sends it off-chip. In a CMOS imager, the charge-to-voltage conversion takes place in each pixel. This difference in readout techniques has significant implications for sensor architecture, capabilities and limitations. Eight attributes characterize image-sensor performance:

• Responsivity, the amount of signal the sensor delivers per unit of input optical energy. CMOS imagers are marginally superior to CCDs, in general, because gain elements are easier to place on a CMOS image sensor. Their complementary transistors allow low-power, high-gain amplifiers, whereas CCD amplification usually comes at a significant power penalty. Some CCD manufacturers are challenging this conception with new readout amplifier techniques.

• Dynamic range, the ratio of a pixel's saturation level to its signal threshold. CCDs hold an advantage of roughly a factor of two in comparable circumstances (about 6 dB; see the short calculation below). CCDs still enjoy significant noise advantages over CMOS imagers because of quieter sensor substrates (less on-chip circuitry), inherent tolerance to bus capacitance variations, and common output amplifiers with transistor geometries that can be easily adapted for minimal noise. Externally coddling the image sensor through cooling, better optics, more resolution or adapted off-chip electronics cannot make CMOS sensors equivalent to CCDs in this regard.

Choosing an imager means considering not only the chip, but also its manufacturer and how your application will evolve. With a CCD, most functions take place on the camera's printed circuit board; if the application's demands change, a designer can change the electronics without redesigning the imager.

• Uniformity, the consistency of response for different pixels under identical illumination conditions. Ideally, behavior would be uniform, but spatial wafer-processing variations, particulate defects and amplifier variations create nonuniformities. It is important to distinguish between uniformity under illumination and uniformity at or near dark. CMOS imagers were traditionally much worse under both regimes. Each pixel had an open-loop output amplifier, and the offset and gain of each amplifier varied considerably because of wafer-processing variations, making both dark and illuminated nonuniformities worse than those in CCDs. Some people predicted that this would defeat CMOS imagers as device geometries shrank and variances increased. However, feedback-based amplifier structures can trade off gain for greater uniformity under illumination. These amplifiers have brought the illuminated uniformity of some CMOS imagers closer to that of CCDs, in a way that is sustainable as geometries shrink. Offset variation among CMOS amplifiers, however, remains a problem; it manifests itself as nonuniformity in darkness. While CMOS imager manufacturers have invested considerable effort in suppressing dark nonuniformity, it is still generally worse than that of CCDs. This is a significant issue in high-speed applications, where limited signal levels mean that dark nonuniformities contribute significantly to overall image degradation.
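The factor-of-two gap noted under dynamic range is easier to appreciate in decibels, the unit in which sensor data sheets usually quote it. A minimal Python calculation follows; the full-well and noise-floor figures are assumed for illustration, not taken from any particular device.

    import math

    def dynamic_range_db(saturation_e, threshold_e):
        # Dynamic range = saturation level / signal threshold, expressed in dB.
        return 20 * math.log10(saturation_e / threshold_e)

    # Hypothetical devices: identical 40,000 e- full wells; the CMOS part is
    # given twice the noise floor, reflecting the factor-of-two gap above.
    print(f"CCD : {dynamic_range_db(40_000, 10):.1f} dB")  # ~72 dB
    print(f"CMOS: {dynamic_range_db(40_000, 20):.1f} dB")  # ~66 dB

A factor of two in the ratio is 6 dB, so under these assumptions the CCD delivers about 72 dB against the CMOS imager's 66 dB.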
• Shuttering, the ability to start and stop exposure arbitrarily. It is a standard feature of virtually all consumer and most industrial CCDs, especially interline transfer devices, and is particularly important in machine vision applications. CCDs can deliver superior electronic shuttering, with little fill-factor compromise, even in small-pixel image sensors. Implementing uniform electronic shuttering in CMOS imagers requires a number of transistors in each pixel. In line-scan CMOS imagers, electronic shuttering does not compromise fill factor because shutter transistors can be placed adjacent to the active area of each pixel. In area-scan (matrix) imagers, uniform electronic shuttering comes at the expense of fill factor because the opaque shutter transistors must be placed in what would otherwise be an optically sensitive area of each pixel. CMOS matrix sensor designers have dealt with this challenge in two ways. A nonuniform shutter, called a rolling shutter, exposes different lines of an array at different times. It reduces the number of in-pixel transistors, improving fill factor. This is sometimes acceptable for consumer imaging, but in higher-performance applications, object motion manifests as a distorted image (a toy simulation of this skew appears below). A uniform synchronous shutter, sometimes called a nonrolling shutter, exposes all pixels of the array at the same time. Object motion is frozen without distortion, but this approach consumes pixel area because it requires extra transistors in each pixel. Users must choose between low fill factor and small pixels on a small, less expensive image sensor, or large pixels with much higher fill factor on a larger, more costly image sensor.

• Speed, an area in which CMOS arguably has the advantage over CCDs because all camera functions can be placed on the image sensor. A CMOS imager converts charge to voltage at the pixel, and most functions are integrated into the chip. This makes imager functions less flexible but, for applications in rugged environments, a CMOS camera can be more reliable. With one die, signal and power trace distances can be shorter, with less inductance, capacitance and propagation delay. To date, though, CMOS imagers have established only modest advantages in this regard, largely because of their early focus on consumer applications, which do not demand notably high speeds compared with the CCD's industrial, scientific and medical applications.

• Windowing. One unique capability of CMOS technology is the ability to read out a portion of the image sensor. This allows elevated frame or line rates for small regions of interest. It is an enabling capability for CMOS imagers in some applications, such as high-temporal-precision object tracking in a subregion of an image. CCDs generally have limited windowing abilities.

• Antiblooming, the ability to gracefully drain localized overexposure without compromising the rest of the image in the sensor. CMOS generally has natural blooming immunity. CCDs, on the other hand, require specific engineering to achieve this capability. Many CCDs developed for consumer applications have it, but those developed for scientific applications generally do not.
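The rolling-shutter skew described in the shuttering item above is easy to visualize in a toy simulation. This Python sketch is purely illustrative (the bar position, speed and array size are arbitrary choices): a vertical bar moves right while rows are exposed one after another, so each row captures the bar at a different position and a straight edge comes out slanted.

    import numpy as np

    ROWS, COLS, BAR_WIDTH, SPEED = 8, 24, 3, 1  # SPEED: pixels moved per row-readout time

    def capture(rolling):
        """Image a vertical bar moving right; expose rows together or one by one."""
        img = np.zeros((ROWS, COLS), dtype=int)
        for r in range(ROWS):
            t = r if rolling else 0     # rolling shutter exposes row r a bit later
            x = 5 + SPEED * t           # bar position at that row's exposure time
            img[r, x:x + BAR_WIDTH] = 1
        return img

    for label, rolling in (("synchronous shutter", False), ("rolling shutter", True)):
        print(label)
        print("\n".join("".join(".#"[v] for v in row) for row in capture(rolling)))

The synchronous case prints a straight vertical bar; the rolling case prints the same bar sheared into a diagonal, which is exactly the distortion a moving object exhibits in a rolling-shutter image.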
• Biasing and clocking. CMOS imagers have a clear edge in this regard. They generally operate with a single bias voltage and clock level. Nonstandard biases are generated on-chip with charge-pump circuitry isolated from the user, barring some noise leakage. CCDs typically require a few higher-voltage biases, but clocking has been simplified in modern devices that operate with low-voltage clocks.

Reliability

Both image chip types are equally reliable in most consumer and industrial applications. In ultrarugged environments, CMOS imagers have an advantage because all circuit functions can be placed on a single integrated circuit chip, minimizing leads and solder joints, which are leading causes of circuit failures in extremely harsh environments.

CMOS image sensors also can be much more highly integrated than CCD devices. Timing generation, signal processing, analog-to-digital conversion, interface and other functions can all be put on the imager chip. This means that a CMOS-based camera can be significantly smaller than a comparable CCD camera.

The user needs to consider, however, the cost of this integration. CMOS imagers are manufactured in a wafer fabrication process that must be tailored for imaging performance. These process adaptations, compared with a nonimaging mixed-signal process, come with some penalties in device scaling and power dissipation. Although the pixel portion of the CMOS imager almost invariably has lower power dissipation than a CCD, the power dissipation of other circuits on the device can be higher than that of a CCD using companion chips from optimized analog, digital and mixed-signal processes. At a system level, this calls into question the notion that CMOS-based cameras have lower power dissipation than CCD-based cameras. Often, CMOS is better, but it is not unequivocally the case, especially at high speeds (above about 25 MHz readout).

The other significant considerations in system integration are adaptability, flexibility and speed of change. Most CMOS image sensors are designed for a large consumer or near-consumer application. They are highly integrated and tailored for one or a few applications. A system designer should be careful not to invest fruitlessly in attempting to adapt a highly application-specific device to a use for which it is not suited. CCD image sensors, on the other hand, are more general purpose. The pixel size and resolution are fixed in the device, but the user can easily tailor other aspects, such as readout.

[Figure: Are they really stars? For an ideal detector, each pixel's response to a photon would be identical, and the "starlight" would be confined to the area of the star.]
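The spurious "stars" the figure alludes to are dark nonuniformity: hot pixels standing out in a dark scene. The article does not prescribe a remedy, but a standard off-chip technique is dark-frame subtraction; the following Python sketch, with entirely simulated and hypothetical values, shows the idea of averaging shutter-closed exposures and subtracting the result from each image.

    import numpy as np

    rng = np.random.default_rng(1)
    shape = (64, 64)

    # Simulated fixed per-pixel dark offsets, including a few "hot" pixels that
    # would masquerade as stars in a dark scene (all values hypothetical).
    dark_offset = rng.normal(20.0, 2.0, shape)
    dark_offset[rng.random(shape) < 0.002] += 500.0

    scene = np.full(shape, 100.0)   # the true signal
    raw = scene + dark_offset       # what the sensor reports

    # Calibration: average several exposures taken with the shutter closed,
    # then subtract that master dark frame from every image.
    dark_frame = np.mean(
        [dark_offset + rng.normal(0.0, 3.0, shape) for _ in range(16)], axis=0)
    corrected = raw - dark_frame

    print(f"worst-case error before correction: {np.abs(raw - scene).max():.1f} e-")
    print(f"worst-case error after correction:  {np.abs(corrected - scene).max():.1f} e-")

In this simulation the hot pixels produce errors of hundreds of electrons before correction and only a few after, though the correction costs calibration time and assumes the offsets stay stable between calibration and use.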
Choose Your Imager

CMOS imagers offer superior integration, power dissipation and system size at the expense of image quality (particularly in low light) and flexibility. They are the technology of choice for high-volume, space-constrained applications where image-quality requirements are low. This makes them a natural fit for security cameras, PC videoconferencing, wireless handheld-device videoconferencing, bar-code scanners, fax machines, consumer scanners, toys, biometrics and some automotive in-vehicle uses.

CCDs offer superior image quality and flexibility at the expense of system size. They remain the most suitable technology for high-end imaging applications, such as digital photography, broadcast television, high-performance industrial imaging, and most scientific and medical applications. Furthermore, flexibility means users can achieve greater system differentiation with CCDs than with CMOS imagers. Sustainable cost between the two technologies is approximately equal, a major contradiction of the traditional marketing pitch of virtually all of the solely-CMOS imager companies.

Even when it makes economic sense to pay for sensor customization to suit an application, time to market can be an issue. Because CMOS imagers are systems on a chip, development time averages 18 months, depending on how many circuit functions the designer can reuse from previous designs in the same wafer fabrication process. And this amount of time is growing, because circuit complexity is outpacing design productivity. This compares with about eight months for new CCD designs in established manufacturing processes. CCD systems can also be adapted with printed circuit board modifications, whereas fully integrated CMOS imaging systems require new wafer runs.

Which costs less?

One of the biggest misunderstandings about image sensors is cost. Many early CMOS proponents argued that their technology would be vastly cheaper because it could be manufactured on the same high-volume wafer processing lines as mainstream logic and memory devices. Had this assumption proved out, CMOS would be cheaper than CCDs. However, the accommodations required for good electro-optical performance mean that CMOS imagers must be made on specialty, lower-volume, optically adapted mixed-signal processes and production lines. As a result, CMOS and CCD image sensors do not have significantly different costs when produced in similar volumes and with comparable cosmetic grading and silicon area. Both technologies ship in appreciable volumes, but neither has such commanding dominance over the other as to establish untouchable economies of scale.

CMOS may be less expensive at the system level than CCD when considering the cost of related circuit functions such as timing generation, biasing, analog signal processing, digitization, interface and feedback circuitry. But it is not cheaper at the component level for the pure image-sensor function itself.

The larger issue around pricing, particularly for CMOS users, is sustainability. Many CMOS start-ups are dedicated to high-volume applications. Pursuing the highest-volume applications from a small base of business has meant that these companies have had to price below their costs to win business in commodity markets. Some start-ups will win and sustain these prices. Others will not and will have to raise prices. Still others will fail entirely. CMOS users must be aware of their suppliers' profitability and cost structure to ensure that the technology will be sustainable. The customer's interest and the venture capitalist's interest are not well aligned: investors want the highest return, even if that means the highest risk, whereas customers need stability because of the high cost of midstream system-design change.

Increasingly, money and talent are flowing to CMOS imaging, in large part because of the high-volume applications enabled by small imaging devices and high digital processing speeds. Over time, CMOS imagers should be able to advance into higher-performance applications. For the moment, CCDs and CMOS remain complementary technologies: one can do things uniquely that the other cannot. Over time, this stark distinction will soften, with CMOS imagers consuming more and more of the CCD's traditional applications.
But this process will take the better part of a decade, at the very least.

[Figure: Shuttering is a concern in military target-acquisition applications. A "rolling shutter" can start and stop exposure on a CMOS device, but the technique can result in a distorted image.]