
CMOS Image Sensors - Introduction


Abbas El Gamal and Helmy Eltoukhy

The market for solid-state image sensors has been experiencing explosive growth in recent years due to the increasing demands of mobile imaging, digital still and video cameras, Internet-based video conferencing, surveillance, and biometrics. With over 230 million parts shipped in 2004 and an estimated annual growth rate of over 28% (InStat/MDR), image sensors have become a significant silicon technology driver. Charge-coupled devices (CCDs) have traditionally been the dominant image-sensor technology. Recent advances in the design of image sensors implemented in complementary metal-oxide semiconductor (CMOS) technologies have led to their adoption in several high-volume products, such as the optical mouse, PC cameras, mobile phones, and high-end digital cameras, making them a viable alternative to CCDs. Additionally, by exploiting the ability to integrate sensing with analog and digital processing down to the pixel level, new types of CMOS imaging devices are being created for man-machine interface, surveillance and monitoring, machine vision, and biological testing, among other applications.
In this article, we provide a basic introduction to CMOS image-sensor technology, design, and performance limits and present recent developments and future directions in this area. We begin with a brief description of a typical digital imaging system pipeline. We also discuss image-sensor operation and describe the most popular CMOS image-sensor architectures. We note the main nonidealities that limit CMOS image sensor performance and specify several key performance measures. One of the most important advantages of CMOS image sensors over CCDs is the ability to integrate sensing with analog and digital processing down to the pixel level. Finally, we focus on recent developments and future research directions that are enabled by pixel-level processing, the applications of which promise to further improve CMOS image sensor performance and broaden their applicability beyond current markets.

IMAGING SYSTEM PIPELINE

An image sensor is one of the main building blocks in a digital imaging system such as a digital still or video camera. Figure 1 depicts a simplified block diagram of an imaging-system architecture. First, the scene is focused on the image sensor using the imaging optics. An image sensor comprising a two-dimensional array of pixels converts the light incident at its surface into an array of electrical signals. To perform color imaging, a color-filter array (CFA) is typically deposited in a certain pattern on top of the image sensor pixel array (see Figure 2 for a typical red-green-green-blue Bayer CFA). Using such a filter, each pixel produces a signal corresponding to only one of the three colors, e.g., red, green, or blue. The analog pixel data (i.e., the electrical signals) are read out of the image sensor and digitized by an analog-to-digital converter (ADC). To produce a full color image, i.e., one with red, green, and blue color values for each pixel, a spatial interpolation operation known as demosaicking is used. Further digital-signal processing is used to perform white balancing and color correction as well as to diminish the adverse effects of faulty pixels and imperfect optics. Finally, the image is compressed and stored in memory. Other processing and control operations are also included for performing auto-focus, auto-exposure, and general camera control.

1. The imaging system pipeline.

2. The color filter array Bayer pattern.

Each component of an imaging system plays a role in determining its overall performance. Simulations [1] and experience, however, show that it is the image sensor that often sets the ultimate performance limit. As a result, there has been much work on improving image sensor performance through technology and architecture enhancements, as discussed in subsequent sections.
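Demosaicking itself is a simple spatial interpolation. The following is a minimal sketch of bilinear demosaicking for the RGGB Bayer pattern of Figure 2, written in Python; the function name, the use of scipy, and the assumption of an even-sized, red-first tiling are illustrative choices, not details from the article.

import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Bilinear demosaicking of an RGGB Bayer mosaic (height and width even).
    raw: 2-D array of pixel values; returns an (H, W, 3) RGB image."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True   # R sites
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True   # B sites
    g_mask = ~(r_mask | b_mask)                                  # G sites
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0     # G: half the pixels known
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0     # R/B: a quarter known
    rgb = np.empty((h, w, 3))
    for ch, (mask, k) in enumerate([(r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)]):
        # Spread each measured sample to the neighboring pixels that lack it.
        rgb[:, :, ch] = convolve2d(np.where(mask, raw, 0.0), k, mode="same")
    return rgb

A real camera pipeline would follow this step with the white balancing, color correction, and compression stages described above.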

IMAGE-SENSOR ARCHITECTURES

An area image sensor consists of an array of pixels, each containing a photodetector that converts incident light into photocurrent and some of the readout circuits needed to convert the photocurrent into electric charge or voltage and to read it off the array. The percentage of area occupied by the photodetector in a pixel is known as the fill factor. The rest of the readout circuits are located at the periphery of the array and are multiplexed by the pixels. Array sizes can be as large as tens of megapixels for high-end applications, while individual pixel sizes can be as small as 2 × 2 μm. A microlens array is typically deposited on top of the pixel array to increase the amount of light incident on each photodetector. Figure 3 is a scanning electron microscope (SEM) photograph of a CMOS image sensor showing the color filter and microlens layers on top of the pixel array.

3. A cross-section SEM photograph of an image sensor showing the microlens and CFA deposited on top of the photodetectors.

The earliest solid-state image sensors were the bipolar and MOS photodiode arrays developed by Westinghouse, IBM, Plessey, and Fairchild in the late 1960s [2]. Invented in 1970 as an analog memory device, CCDs quickly became the dominant image sensor technology. Although several MOS image sensors were reported in the early 1980s, today's CMOS image sensors are based on work done starting around the mid 1980s at VLSI Vision Ltd and the Jet Propulsion Laboratory. Up until the early 1990s, the passive pixel sensor (PPS) was the CMOS image sensor technology of choice [3]. The feature sizes of the available CMOS technologies were too large to accommodate more than the single transistor and three interconnect lines in a PPS pixel. PPS devices, however, had much lower performance than CCDs, which limited their applicability to low-end machine-vision applications. In the early 1990s, work began on the modern CMOS active pixel sensor (APS), conceived originally in 1968 [4], [5]. It was quickly realized that adding an amplifier to each pixel significantly increases sensor speed and improves its signal-to-noise ratio (SNR), thus overcoming the shortcomings of PPS. CMOS technology feature sizes, however, were still too large to make APS commercially viable. With the advent of deep submicron CMOS and integrated microlens technologies, APS has made CMOS image sensors a viable alternative to CCDs. Taking further advantage of technology scaling, the digital pixel sensor (DPS), first reported in [6], integrates an ADC at each pixel. The massively parallel conversion and digital readout provide very high speed readout, enabling new applications such as wider dynamic range (DR) imaging, which is discussed later in this article.

Many of the differences between CCD and CMOS image sensors arise from differences in their readout architectures. In a CCD [see Figure 4(a)], charge is shifted out of the array via vertical and horizontal CCDs, converted into voltage via a simple follower amplifier, and then serially read out. In a CMOS image sensor, charge or voltage signals are read out one row at a time in a manner similar to a random access memory using row and column select circuits [see Figure 4(b)].

4. Readout architectures of (a) an interline transfer CCD and (b) a CMOS image sensor.

Each readout architecture has its advantages and disadvantages. The main advantage of the CCD readout architecture is that it requires minimal pixel overhead, making it possible to design image sensors with very small pixel sizes. Another important advantage is that charge transfer is passive and therefore does not introduce temporal noise or pixel-to-pixel variations due to device mismatches, known as fixed-pattern noise (FPN). The readout path in a CMOS image sensor, by comparison, comprises several active devices that introduce both temporal noise and FPN. Charge transfer readout, however, is serial, resulting in limited readout speed. It is also high power due to the need for high-rate, high-voltage clocks to achieve near-perfect charge transfer efficiency. By comparison, the random access readout of CMOS image sensors provides the potential for high-speed readout and window-of-interest operations at low power consumption. There are several recent examples of CMOS image sensors operating at hundreds of frames per second with megapixel or higher resolution [7]-[9]. The high-speed readout also makes CMOS image sensors ideally suited for implementing very high-resolution imagers with multimegapixel resolutions, especially for video applications. Recent examples of such high-resolution CMOS imagers include the 11-megapixel sensor used in the Canon EOS-1 camera and the 14-megapixel sensor used in the Kodak DCS camera.

Other differences between CCDs and CMOS image sensors arise from differences in their fabrication technologies. CCDs are fabricated in specialized technologies solely optimized for imaging and charge transfer. Control over the fabrication technology also makes it possible to scale pixel size down without significant degradation in performance. The disadvantage of using such specialized technologies, however, is the inability to integrate other camera functions on the same chip with the sensor. CMOS image sensors, on the other hand, are fabricated in mostly standard technologies and thus can be readily integrated with other analog and digital processing and control circuits. Such integration further reduces imaging system power and size and enables the implementation of new sensor functionalities, as will be discussed later. Some of the CCD versus CMOS comparison points made here should become clearer as we discuss image sensor technology in more detail.

Photodetection

The most popular types of photodetectors used in image sensors are the reverse-biased positive-negative (PN) junction photodiode and the P+/N/P pinned diode (see Figure 5). The structure of the pinned diode provides improved photoresponsivity (typically with enhanced sensitivity at shorter wavelengths) relative to the standard PN junction [10].

5. A schematic of 3-T and 4-T active pixel sensor (APS) pixels.

Moreover, the pinned diode exhibits lower thermal noise due to the passivation of defect and surface states at the Si/SiO2 interface, as well as a customizable photodiode capacitance via the charge transfer operation through transistor TX. However, imagers incorporating pinned diodes are susceptible to incomplete charge transfer, especially at lower operating voltages, causing ghosting artifacts to appear in video-rate applications.

The main imaging characteristics of a photodiode are external quantum efficiency (QE) and dark current. External QE is the fraction of incident photon flux that contributes to the photocurrent in a photodetector as a function of wavelength (typically in the 400-700 nm range of visible light). It is typically combined with the transmittance of each color filter to determine the overall spectral response. The spectral response for a typical CMOS color image sensor fabricated in a modified 0.18-μm process is shown in Figure 6. External QE can be expressed as the product of internal QE and optical efficiency (OE). Internal QE is the fraction of photons incident on the photodetector surface that contributes to the photocurrent. It is a function mainly of photodetector geometry and doping concentrations and is always less than one for the above silicon photodetectors. OE is the photon-to-photon efficiency from the pixel surface to the photodetector's surface. The geometric arrangement of the photodetector with respect to other elements of the pixel structure, i.e., the shape and size of the aperture, the length of the dielectric tunnel, and the position, shape, and size of the photodetector, all determine OE. Figure 7 is an SEM photograph of a cross section through a CMOS image sensor pixel illustrating the tunnel through which light must travel before reaching the photodetector. Experimental evidence shows that OE can have a significant role in determining the resultant external QE [11].

6. A spectral response curve for a typical 0.18-μm CMOS image sensor.

7. An illustration of the optical tunnel above the photodetector and the pixel vignetting phenomenon.

The second important imaging characteristic of a photodetector is its leakage, or dark, current. Dark current is the photodetector current when no illumination is present. It is generated by several sources, including carrier thermal generation and diffusion in the neutral bulk, thermal generation in the depletion region, thermal generation due to surface states at the silicon-silicon dioxide interface, and thermal generation due to interface traps (caused by defects) at the diode perimeter. As discussed in more detail later in this article, dark current is detrimental to imaging performance under low illumination, as it introduces shot noise that cannot be corrected for, as well as nonuniformity due to its large variation over the sensor array. Much attention is paid to minimizing dark current in CCDs, where it can be as low as 1-2 pA/cm², through the use of gettered, high-resistivity wafers to minimize traps from metallic contamination, as well as buried channels and multiphase pinned operation to minimize surface-generated dark current [12]. Dark current in standard submicron CMOS processes is orders of magnitude higher than in a CCD, and several process modifications are used to reduce it [13]. Somewhat higher dark current can be tolerated in CMOS image sensors, since, in a CCD, dark current affects both photodetection and charge transfer.

As the range of photocurrents produced under typical illumination conditions is too low (in the range of femto- to picoamperes) to be read directly, the photocurrent is typically integrated and read out as charge or voltage at the end of the exposure time. This operation, known as direct integration, is illustrated in Figure 8. The photodiode is first reset to a voltage V_D. The reset switch is then opened, and the photocurrent i_ph as well as the dark current i_dc are integrated over the diode capacitance C_D. At the end of integration, the charge accumulated over the capacitance is either directly read out, as in CCDs or PPS, and then converted to voltage, or directly converted to voltage and then read out, as in APS. In both cases, the charge-to-voltage conversion is linear, and the sensor conversion gain is measured in microvolts per electron. The charge versus time for two photocurrent values is illustrated in Figure 8(b). In the low light case, the charge at the end of integration is proportional to the light intensity, while in the high light case, the diode saturates and the output charge is equal to the well capacity Q_well, which is defined as the maximum amount of charge (in electrons) that can be held by the integration capacitance.

8. (a) A schematic of a pixel operating in direct integration. (b) Charge versus time for two photocurrent values.

Figure 9 depicts the signal path for an image sensor from the incident photon flux to the output voltage. This conversion is nearly linear and is governed by three main parameters: external QE, integration time (t_int), and conversion gain.

9. A block diagram of the signal path for an image sensor: photons (photons/s), QE, photocurrent (A), direct integration, charge, conversion gain, voltage (V).
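As a rough numeric illustration of the Figure 9 signal path, the sketch below converts a photon flux into an output voltage under the direct-integration model just described. The quantum efficiency and conversion-gain values are arbitrary assumptions for illustration; the well capacity, dark current, and integration time are the typical values used later for Figure 15.

Q_E = 1.602e-19  # electron charge (C)

def pixel_output_volts(photon_flux, qe=0.4, i_dc=1e-15, t_int=30e-3,
                       q_well=60000, conv_gain=20e-6):
    """photon_flux in photons/s at the photodetector; qe is external QE;
    i_dc is dark current (A); q_well in electrons; conv_gain in V per electron."""
    i_ph = qe * photon_flux * Q_E              # photocurrent (A)
    electrons = (i_ph + i_dc) * t_int / Q_E    # charge integrated on C_D (electrons)
    electrons = min(electrons, q_well)         # high-light case: the diode saturates
    return electrons * conv_gain               # linear charge-to-voltage conversion

# Low light: output proportional to flux; bright light: clipped at the well capacity.
print(pixel_output_volts(1e4), pixel_output_volts(1e7))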
PPS, APS, and DPS Architectures

There are different flavors of CMOS image-sensor readout architectures. We describe PPS, which is the earliest CMOS image-sensor architecture (see Figure 10); the three- and four-transistor (3-T and 4-T) per pixel APS, which are the most popular architectures at present (see Figure 5); and DPS (see Figure 11).

10. A schematic of a passive pixel sensor (PPS).

11. A schematic of a digital pixel sensor (DPS).

The PPS pixel includes a photodiode and a row-select transistor. The readout is performed one row at a time in a staggered rolling-shutter fashion. At the end of integration, charge is read out via the column charge-to-voltage amplifiers. The amplifiers and the photodiodes in the row are then reset before the next row readout commences. The main advantage of PPS is its small pixel size. The column readout, however, is slow and vulnerable to noise and disturbances. The APS and DPS architectures solve these problems, but at the cost of adding more transistors to each pixel.

The 3-T APS pixel includes a reset transistor, a source follower transistor to isolate the sense node from the large column bus capacitance, and a row-select transistor. The current source component of the follower amplifier is shared by a column of pixels. Readout is performed one row at a time. Each row of pixels is reset after it is read out to the column capacitors via the row access transistors and column amplifiers. The 4-T APS architecture employs a pinned diode, which adds a transfer gate and a floating diffusion (FD) node to the basic 3-T APS pixel architecture. At the end of integration, the accumulated charge on the photodiode is transferred to the FD node. The transferred charge is then read out as voltage in the same manner as in the 3-T architecture. Note that, unlike CCD and PPS, APS readout is nondestructive.

Although the main purpose of the extra transistors in the APS pixel is to provide signal buffering to improve sensor readout speed and SNR, they have been used to perform other useful functions. By appropriately setting the gate voltage of the reset transistor in an APS pixel, blooming, which is the overflow of charge from a saturated pixel into its neighboring pixels, can be mitigated [14]. The reset transistor can also be used to enhance DR via well capacity adjusting, as described in [15].

Each of the APS architectures has its advantages and disadvantages. A 4-T pixel is either larger or has a smaller fill factor than a 3-T pixel implemented in the same technology. On the other hand, the use of a transfer gate and the FD node in a 4-T pixel decouples the read and reset operations from the integration period, enabling true correlated double sampling (CDS), as will be discussed in detail later in this article. Moreover, in a 3-T pixel, conversion gain is set primarily by the photodiode capacitance, while in a 4-T pixel, the capacitance of the FD node can be selected independently of the photodiode size, allowing conversion gain to be optimized for the sensor application.

In applications such as mobile imaging, there is a need for small pixels to increase the spatial resolution without increasing the optical format (area of the sensor). CCDs have a clear advantage over CMOS image sensors in this respect due to their low pixel overhead and the use of dedicated technologies. To compete with CCDs, CMOS image sensor pixel sizes are being reduced by taking advantage of CMOS technology scaling and the process modifications discussed in the following section.

In addition, novel pixel architectures that reduce the effective number of transistors per pixel by sharing some of the transistors among a group of neighboring pixels have recently been proposed [16], [17]. One example is the 1.75-T per pixel APS depicted in Figure 12 [16]. In this architecture, the buffer of the 4-T APS pixel is shared among each four neighboring pixels, using the transfer gates as a multiplexer.

The third and most recently developed CMOS image sensor architecture is DPS, where analog-to-digital (A/D) conversion is performed locally at each pixel, and digital data are read out from the pixel array in a manner similar to a random access digital memory. Figure 11 depicts a simplified block diagram of a DPS pixel consisting of a photodetector, an ADC, and digital memory for temporary storage of data before digital readout via the bit lines. DPS offers several advantages over analog image sensors, such as PPS and APS, including better scaling with CMOS technology due to reduced analog circuit performance demands and the elimination of read-related column FPN and column readout noise. More significantly, employing an ADC and memory at each pixel to enable massively parallel analog-to-digital conversion and high-speed digital readout provides unlimited potential for high-speed snap-shot digital imaging.

The main drawback of DPS is that it requires the use of more transistors per pixel than conventional image sensors, resulting in larger pixel sizes or lower fill factors.

12. A pixel schematic of a 1.75-T/pixel APS.

13. An MCBS DPS pixel schematic.

However, since there is a lower bound on practical pixel size imposed by the wavelength of light, imaging optics, and DR, this problem becomes less severe as CMOS technology scales down to 0.18 μm and below.

As pixel size constraints make it infeasible to use existing ADC architectures, our group has developed several per-pixel ADC architectures that can be implemented with a small number of transistors per pixel. We have designed and prototyped three generations of DPS chips with different per-pixel ADC architectures. The first DPS chip comprised an array of 128 × 128 pixels with a first-order sigma-delta ADC shared within each group of 2 × 2 pixels and was implemented in 0.8-μm CMOS technology [6]. The sigma-delta technique can be implemented using simple circuits and is, thus, well suited to pixel-level implementation in advanced processes. However, since decimation must be performed outside the pixel array, too much data needs to be read out. The second generation of DPS solves this problem by using a Nyquist-rate ADC approach [18]. The sensor comprised a 640 × 512 array of 10.5 × 10.5 μm pixels with a multichannel bit-serial (MCBS) ADC shared within each group of 2 × 2 pixels and was implemented in 0.35 μm. A later 0.18-μm commercial implementation comprised a 742 × 554 array of 7 × 7 μm pixels [19]. Implementing an MCBS ADC requires only a 1-b comparator and a 1-b latch per each group of four pixels, as shown in Figure 13. Data from the array are read out one quad bit plane at a time, and the pixel values are assembled outside the array. Our most recent design utilized a standard digital 0.18-μm CMOS technology to integrate both a single-slope bit-parallel ADC and an 8-b dynamic memory inside each pixel [9]. The chip comprised an array of 288 × 352 pixels and was the first image sensor to achieve a continuous throughput of 10,000 frames per second, or one gigapixel per second. The digitized pixel data are read out over a 64-b wide bus operating at 167 MHz, i.e., over 1.3 GB/s. More specifically, each pixel contains a photodetector, a 1-b comparator, and eight 3-T memory cells in an area of 9.4 × 9.4 μm. Single-slope A/D conversion is performed simultaneously for all pixels via a globally distributed analog ramp and gray-coded digital signals generated outside the array.

NONIDEALITIES AND PERFORMANCE MEASURES

Image sensors suffer from several fundamental and technology-related nonidealities that limit image-sensor performance.

Temporal and Fixed Pattern Noise

Temporal noise is the most fundamental nonideality in an image sensor, as it sets the ultimate limit on signal fidelity. This type of noise is independent across pixels and varies from frame to frame. Sources of temporal noise include photodetector shot noise, pixel reset circuit noise, readout circuit thermal and flicker noise, and quantization noise. There are more sources of readout noise in CMOS image sensors than in CCDs, introduced by the pixel and column active circuits.

In addition to temporal noise, image sensors also suffer from FPN, which is the pixel-to-pixel output variation under uniform illumination due to device and interconnect mismatches across the image sensor array. These variations cause two types of FPN: offset FPN, which is independent of the pixel signal, and gain FPN, or photo response nonuniformity (PRNU), which increases with signal level.

Offset FPN is fixed from frame to frame but varies from one sensor array to another. Again, there are more sources of FPN in CMOS image sensors than in CCDs, introduced by the active readout circuits. The most serious additional source of FPN is the column FPN introduced by the column amplifiers. Such FPN can cause visually objectionable streaks in the image.

Offset FPN caused by the readout devices can be reduced by CDS, as illustrated in Figure 14(a). Each pixel output is read out twice, once right after reset and a second time at the end of integration. The sample after reset is then subtracted from the one after integration. To understand the effect of this operation, we express the sampled noise at the end of integration as the sum of 1) integrated shot noise $Q_{shot}$, 2) reset noise $Q_{reset}$, 3) readout circuit noise $Q_{read}$ due to readout device thermal and flicker (or 1/f) noise, 4) offset FPN due to device mismatches $Q_{FPN}$, 5) offset FPN due to dark current variation, commonly referred to as dark signal nonuniformity (DSNU), $Q_{DSNU}$, and 6) gain FPN, commonly referred to as PRNU. The output charge right after reset can thus be expressed as

$$S_1 = Q_{reset} + Q_{1,read} + Q_{FPN} \quad \text{electrons}.$$

After integration, the output charge is given by

$$S_2 = \frac{(i_{ph} + i_{dc})\,t_{int}}{q} + Q_{shot} + Q_{reset} + Q_{2,read} + Q_{FPN} + Q_{DSNU} + Q_{PRNU} \quad \text{electrons}.$$

Using CDS, the signal charge is estimated by

$$S_2 - S_1 = \frac{(i_{ph} + i_{dc})\,t_{int}}{q} + Q_{shot} - Q_{1,read} + Q_{2,read} + Q_{DSNU} + Q_{PRNU} \quad \text{electrons}.$$

Thus, CDS suppresses offset FPN and reset noise but increases read noise power. This increase depends on how much CDS suppresses flicker noise; the shorter the time between the two samples, the more correlated their flicker noise components become and the more effective CDS is at suppressing flicker noise. CDS is particularly effective at suppressing flicker noise in charge-transfer devices. Specifically, CDS is performed on the floating-diffusion (FD) node directly, without regard to the length of the integration period, since the FD node can be reset immediately before charge transfer.

Note that CDS does not reduce DSNU. Since dark current in CMOS image sensors can be much higher than in CCDs, DSNU can greatly degrade CMOS image-sensor performance under low illumination levels. This is most pronounced at high temperatures, as dark current and, thus, DSNU increase exponentially with temperature, roughly doubling every 7 °C. DSNU can be corrected using digital calibration; however, the strong dependence on temperature makes accurate calibration difficult. Although PRNU is also not reduced by performing CDS, its effect is usually not as detrimental, since it affects sensor performance mainly under high illumination levels.

CDS can be readily implemented in the CCD and 4-T APS architectures, but cannot be implemented in the 3-T APS architecture.

14. Sample times for CDS versus delta-reset sampling.

Instead, an operation known as delta-reset sampling is implemented, whereby the pixel output is read after integration and then once again after the next reset. Since the reset noise added to the first sample is different from that added to the second, the difference between the two samples only suppresses offset FPN and flicker noise and doubles reset noise power [see Figure 14(b)].
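A small Monte Carlo sketch of the CDS arithmetic above: the reset-noise and offset-FPN terms appear in both samples and cancel in the difference, while the two independent read-noise samples add. The noise magnitudes (in electrons) are arbitrary illustrative values, and DSNU and PRNU are omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                   # number of simulated pixels
signal = 1000.0                               # (i_ph + i_dc) * t_int / q, in electrons

q_fpn   = rng.normal(0, 50, n)                # offset FPN (same in both samples)
q_reset = rng.normal(0, 30, n)                # reset (kTC) noise (same in both samples)
q_shot  = rng.normal(0, np.sqrt(signal), n)   # integrated shot noise
read_1  = rng.normal(0, 15, n)                # read noise, sample after reset
read_2  = rng.normal(0, 15, n)                # read noise, sample after integration

s1 = q_reset + read_1 + q_fpn                        # sample right after reset
s2 = signal + q_shot + q_reset + read_2 + q_fpn      # sample at end of integration
cds = s2 - s1                                        # correlated double sampling

print(np.std(s2 - signal))    # single sample: FPN and reset noise included (about 68 e-)
print(np.std(cds - signal))   # after CDS: shot noise plus doubled read-noise power (about 38 e-)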

SNR and DR
Temporal noise and FPN determine the range of illumination that can be detected by the image sensor, known as its DR, and the quality of the signals it produces within the detection range, measured by the SNR. Assuming CDS is performed and reset noise and offset FPN are effectively cancelled, the noise power at the end of integration can be expressed as the sum of four independent components: shot noise with average power $(1/q)(i_{ph} + i_{dc})t_{int}$ electron², where q is the electron charge; read circuit noise due to the two readouts performed, including quantization noise, with average power $\sigma^2_{read}$; DSNU with average power $\sigma^2_{DSNU}$; and PRNU with average power $(1/q^2)(\sigma_{PRNU}\, i_{ph} t_{int})^2$. With this simplified noise model, we can quantify pixel signal fidelity using the SNR, which is the ratio of the signal power to the total average noise power, as
$$\mathrm{SNR} = 10 \log_{10} \frac{(i_{ph}\, t_{int})^2}{q(i_{ph} + i_{dc})\, t_{int} + q^2(\sigma^2_{read} + \sigma^2_{DSNU}) + (\sigma_{PRNU}\, i_{ph}\, t_{int})^2}.$$

SNR for a set of typical sensor parameters is plotted in Figure 15. Note that it increases with photocurrent, first at 20 dB per decade when readout noise and shot noise due to dark current dominate, then at 10 dB per decade when shot noise dominates, and then flattens out when PRNU is dominant, achieving a maximum roughly set by the well capacity Q_well before saturation. SNR also increases with integration time.
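The SNR expression above translates directly into code. In this sketch the default parameters mirror the values annotated on the Figure 15 plot; the function name and the sampled photocurrents are our own choices.

import numpy as np

Q = 1.602e-19  # electron charge (C)

def snr_db(i_ph, t_int=30e-3, i_dc=1e-15,
           sigma_read=30.0, sigma_dsnu=10.0, sigma_prnu=0.006):
    """SNR in dB versus photocurrent i_ph (A); sigma_read and sigma_dsnu are in
    electrons, sigma_prnu is a fraction of the signal."""
    signal_power = (i_ph * t_int) ** 2
    noise_power = (Q * (i_ph + i_dc) * t_int                    # shot noise
                   + Q**2 * (sigma_read**2 + sigma_dsnu**2)     # read noise + DSNU
                   + (sigma_prnu * i_ph * t_int) ** 2)          # PRNU
    return 10 * np.log10(signal_power / noise_power)

# Sweeping i_ph shows the behavior described in the text: read-noise/dark-current
# limited at low photocurrent, shot-noise limited in the middle, PRNU limited at the top.
for i_ph in (1e-15, 1e-14, 1e-13, 1e-12):
    print(f"{i_ph:.0e} A  ->  {snr_db(i_ph):5.1f} dB")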

15. SNR versus photocurrent i_ph for an image sensor with typical parameters (Q_max = 60,000 e-, t_int = 30 ms, i_dc = 1 fA, σ_read = 30 e-, σ_DSNU = 10 e-, σ_PRNU = 0.6%); the read-noise/DSNU-limited, shot-noise-limited, and PRNU-limited regions are indicated, and DR = 66 dB.

Thus, it is always preferred to have as long an integration time as possible. Sensor DR quantifies its ability to image scenes with wide spatial variations in illumination. It is defined as the ratio of a pixel's largest nonsaturating photocurrent i_max to its smallest detectable photocurrent i_min. The largest nonsaturating photocurrent is determined by the well capacity and integration time as $i_{max} = qQ_{well}/t_{int} - i_{dc}$, while the smallest detectable signal is set by the root mean square (rms) of the noise under dark conditions. Using our simplified sensor model, DR can be expressed as
$$\mathrm{DR} = 20 \log_{10} \frac{i_{max}}{i_{min}} = 20 \log_{10} \frac{qQ_{well} - i_{dc}\, t_{int}}{\sqrt{q\, i_{dc}\, t_{int} + q^2(\sigma^2_{read} + \sigma^2_{DSNU})}}.$$

Note that DR decreases as integration time increases due to the adverse effects of dark current. On the other hand, increasing well capacity, decreasing read noise, and decreasing DSNU increase sensor DR.
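Evaluating the DR expression for the parameters annotated on Figure 15 is a short calculation; the names below are our own, and the result applies only to that illustrative parameter set.

import math

Q = 1.602e-19                                  # electron charge (C)
q_well, t_int, i_dc = 60000, 30e-3, 1e-15      # Figure 15 parameters
sigma_read, sigma_dsnu = 30.0, 10.0            # electrons

i_max = Q * q_well / t_int - i_dc              # largest nonsaturating photocurrent (A)
i_min = (Q / t_int) * math.sqrt(i_dc * t_int / Q + sigma_read**2 + sigma_dsnu**2)
print(20 * math.log10(i_max / i_min))          # about 65 dB; shrinking t_int or read noise raises it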

Spatial Resolution
Another important aspect of image-sensor performance is its spatial resolution. An image sensor is a spatial (as well as temporal) sampling device. As a result, its spatial resolution is governed by the Nyquist sampling theorem. Spatial frequencies, in line pairs per millimeter (lp/mm), that are above the Nyquist rate cause aliasing and cannot be recovered. Below the Nyquist rate, low-pass filtering due to the optics, spatial integration of photocurrent, and crosstalk between pixels cause the pixel response to fall off with spatial frequency. Spatial resolution below the Nyquist rate is measured by the modulation transfer function (MTF), which is the contrast in the output image as a function of spatial frequency.

Technology Scaling Effects


CMOS image sensors benefit from technology scaling by reducing pixel size, increasing resolution, and integrating more analog and digital circuits on the same chip with the sensor. At 0.25 μm and below, however, a digital CMOS technology is not directly suitable for designing high-quality image sensors. The use of shallow junctions and high doping results in low photoresponsivity, and the use of shallow trench isolation (STI), thin gate oxide, and salicide results in unacceptably high dark current. Furthermore, in-pixel transistor leakage becomes a significant source of dark current. Indeed, in a standard process, dark current due to the reset transistor off-current and the follower transistor gate leakage current in an APS pixel can be orders of magnitude higher than the diode leakage itself. To address these problems, there have been significant efforts to modify standard 0.18-μm CMOS technologies to improve their imaging performance. To improve photoresponsivity, nonsilicided deep junction diodes with optimized doping profiles are added to a standard process.

To reduce dark current, nonsilicided, double-diffused source/drain implantation as well as pinned diode structures are included. Hydrogen annealing is also used to reduce leakage by passivating defects [13]. To reduce transistor leakage, both the reset and follower transistors in an APS use thick gate oxide (70 Å). The reset transistor threshold is increased to reduce its off-current, while the follower transistor threshold is decreased to improve voltage swing.

Technology scaling also has detrimental effects on pixel OE. The use of silicon dioxide/nitride materials reduces light transmission to the photodetector. Moreover, as CMOS technology scales, the distance from the surface of the chip to the photodiode increases relative to the photodiode lateral dimension (see Figure 7). This is due to the reduction in pixel size and the fact that the thickness of the interconnect layers scales more slowly than the planar dimensions. As a result, light must travel through an increasingly deeper and/or narrower tunnel before reaching the photodiode surface. This is especially problematic for light incident at an oblique angle. In this case, the tunnel walls cast a shadow on the photodiode area. This phenomenon has been referred to as pixel vignetting, since it is similar to vignetting in optical systems. Pixel vignetting reduces the light incident at the correct photodiode surface, resulting both in a severe reduction in OE and in optical color crosstalk between adjacent pixels [20].

Several process modifications are being made in order to increase OE. Oxide materials with better light transmission properties are being used. Thinning of the metal and oxide layers is used to decrease the aspect ratio of the tunnel above each photodetector, thereby reducing pixel vignetting. For example, in [17] a CMOS-based 1P2M process with 30% thinner metal and dielectric layers is developed and used to increase pixel sensitivity. Another technique for increasing OE is the placement of air gaps around each pixel in order to create a rudimentary optical waveguide, whereby incident light at the surface is guided to the correct pixel below via total internal reflection. The air gaps also serve to significantly reduce optical spatial crosstalk, which can be particularly problematic as pixel sizes decrease [21].
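As a rough feel for pixel vignetting, the toy model below treats the tunnel wall as casting a simple shadow of length depth × tan(angle) across a square photodiode; the single-wall geometry and the numbers are our simplification, not a model from the article.

import math

def unshadowed_fraction(tunnel_depth_um, photodiode_width_um, angle_deg):
    """Fraction of the photodiode still illuminated at a given incidence angle."""
    shadow = tunnel_depth_um * math.tan(math.radians(angle_deg))
    return max(0.0, 1.0 - shadow / photodiode_width_um)

# A deeper tunnel (an interconnect stack that scales more slowly than the pixel)
# loses light quickly as the incidence angle moves off normal.
for depth in (3.0, 5.0):   # stack height in micrometers
    print([round(unshadowed_fraction(depth, 2.0, a), 2) for a in (0, 10, 20)])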

16. A pixel block diagram of the image extraction sensor [35].


INTEGRATION OF CAPTURE AND PROCESSING


The greatest promise of CMOS image sensor technology arises from the ability to flexibly integrate sensing and processing on the same chip to address the needs of different applications. As CMOS technology scales, it becomes increasingly feasible to integrate all basic camera functions onto a camera-on-chip [22], enabling applications requiring very small form factor and ultra-low power consumption. Simply integrating blocks of an existing digital imaging system on a chip, however, does not fully exploit the potential of CMOS image sensor technology. With the flexibility to integrate processing down to the pixel level, the entire imaging system can be rearchitected to achieve much higher performance or to customize it to a particular application.

Pixel-level processing promises very significant advantages. This is perhaps best demonstrated by the wide adoption of APS over PPS and the subsequent development of DPS. In addition to adding more transistors to each pixel to enhance basic performance, there have been substantial efforts devoted to the development of computational sensors. These sensors promise significant reduction in system power by performing more sophisticated processing at the pixel level. By distributing and parallelizing the processing, speed is reduced to the point where analog circuits operating in subthreshold can be used. These circuits can perform complex computations while consuming very little power [23]. In the following subsection we provide a brief survey of this work.

Perhaps the most important advantage of pixel-level processing, however, is that signals can be processed in real time during integration. This enables several new applications, including high DR imaging, accurate optical-flow estimation, and three-dimensional (3-D) imaging. In many of these applications, the sensor output data rate can be too high, making multiple-chip implementations costly, if not infeasible. Integrating frame buffer memory and digital-signal processing on the same chip with the sensor can solve this problem.

In this section, we will also briefly describe two related projects with which our group has been involved. The first project involves the use of vertical integration to design ultra-high-speed and high DR image sensors for tactical and industrial applications. The last subsection describes applications of CMOS image sensor technology to the development of lab-on-chips.

17. A high DR image synthesized from the low DR images shown in Figure 18.

18. Images of a high DR scene taken at exponentially increasing integration times (T, 2T, 4T, 8T, 16T, and 32T).

This is a particularly exciting area, with many potential applications in medical diagnostics, pharmaceutical drug discovery, and biohazard detection. The work clearly illustrates the customization and integration benefits of CMOS image sensor technology.


Computational Sensors
Computational sensors, sometimes referred to as neuromorphic sensors or silicon artificial retinas, are aimed mainly at machine-vision applications. Many authors have reported on sensors that derive optical motion flow vectors [24]-[28], which typically involve both local and global pixel calculations. Both temporal and spatial derivatives are locally computed. The derivatives are then used globally to calculate the coefficients of a line using a least squares approximation. The coefficients of the line represent the final optical motion vector. The work on artificial silicon retinas [29]-[31] has focused on illumination-independent imaging and temporal low-pass filtering, both of which involve only local pixel computations. Brajovic et al. [32] describe a computational sensor using both local and global interpixel processing that can perform histogram equalization, scene change detection, and image segmentation in addition to normal image capture. Rodriguez-Vazquez et al. [33] report on programmable computational sensors based on cellular nonlinear networks (CNN), which are well suited for the implementation of image-processing algorithms. Another approach, which is potentially more programmable, is the programmable artificial retina (PAR) described by Paillet et al. [34]. A PAR vision chip is a single instruction stream-multiple data stream (SIMD) array processor in which each pixel contains a photodetector, (possibly) analog preprocessing circuitry, a thresholder, and a digital processing element. Although very inefficient for image capture, the PAR can perform a plethora of retinotopic operations, including early vision functions, image segmentation, and pattern recognition. In [35], Ruedi describes a 120-dB DR sensor that can perform a variety of local pixel-level computations, such as image contrast and orientation extraction. Each pixel communicates with its four neighboring pixels to compute the required spatial derivatives for contrast magnitude and direction extraction as well as to perform other image-processing functions, such as edge thinning via a nonmaximum suppression technique (see Figure 16). The chip consists of relatively large 69 × 69 μm² pixels comprising two multipliers, peak and zero crossing detectors, and a number of amplifiers and comparators.

High DR Sensors

Sensor DR is generally not wide enough to image scenes encountered even in everyday consumer photography. This is especially the case for CMOS image sensors, since their read noise and DSNU are typically larger than in CCDs. For reference, standard CMOS image sensors have a DR of 40-60 dB, CCDs around 60-70 dB, while the human eye exceeds 90 dB by some measures. In contrast, natural scenes often exhibit greater than 100 dB of DR. To solve this problem, several DR extension techniques, such as well-capacity adjusting [15], multiple capture [18], time-to-saturation [36], and self-reset [37], have been proposed. These techniques extend DR at the high illumination end, i.e., by increasing i_max. In multiple capture and time-to-saturation, this is achieved by adapting each pixel's integration time to its photocurrent value, while in self-reset the effective well capacity is increased by recycling the well. To perform these functions, most of these schemes require per-pixel processing. A comparative analysis of these schemes based primarily on SNR is presented in [38]-[40]. Here, we describe in some detail the multiple capture scheme.

19. A photomicrograph of the color video system-on-chip (742 × 554 pixel sensor array with reset/select line and wordline drivers, sense amplifiers, code buffers, frame buffer, control logic, SIMD processor, LUTs, PLL, DAC, and LVDS interface).

Consider the high DR scene in Figure 17. Figure 18 shows a sequence of images taken at different integration times by a sensor whose DR is lower than the scene's. Note that none of these images contains all the details in the scene. The short integration time images contain the detail in the bright areas of the scene but contain no detail in the dark areas due to noise, while the long integration time images contain the details in the dark areas but none of the details in the bright areas due to saturation. Clearly, one can obtain a better high DR image of the scene by combining the details from these different integration time images, which can be done, for example, by using the last sample before saturation, with proper scaling, for each pixel (see Figure 17). This scheme of capturing several images with different integration times and using them to assemble a high DR image is known as the multiple-capture, single-image scheme [18].

Dual capture has been used to enhance the DR for CCD sensors, CMD sensors [41], and CMOS APS sensors [42]. A scene is imaged twice, once using a short integration time and another time using a much longer integration time, and the two images are combined into a high DR image. Two images, however, may not be sufficient to represent the areas of the scene that are too dark to be captured in the first image and too bright to be captured in the second. Also, it is preferred to capture all the images within a single normal integration time, instead of resetting and starting a new integration after each image. Capturing several images within a normal integration time, however, requires high-speed nondestructive readout, which CCDs cannot perform. DPS, on the other hand, can achieve very high-speed nondestructive readout and, therefore, can naturally implement the multiple capture scheme.

To implement multiple capture, high bandwidth between the sensor, memory, and processor is needed to perform the readouts and assemble the high DR image.

By integrating the sensor with an on-chip frame buffer and digital-signal processing, such high bandwidth can be provided without unduly increasing clock speeds and power consumption. Using a modified 0.18-μm CMOS process, a recent paper [19] reported on a video system-on-chip that integrates a 742 × 554 DPS array, a microcontroller, an SIMD processor, and a full 4.9-Mb frame buffer (see Figure 19). The microcontroller and processor execute instructions relating to exposure time, region of interest, result storage, and sensor operation, while the frame buffer stores the intermediate samples used for reconstruction of the high DR image. The imaging system is completely programmable and can produce color video at a rate of 500 frames per second, or standard frame rate video with over 100 dB of DR. Figure 20 shows a sample high DR scene imaged with the system and with a CCD.

The last-sample-before-saturation method used to reconstruct a high DR image from multiple captures extends sensor DR only at the high illumination end. To extend DR at the low end, i.e., to reduce i_min, one needs to reduce read noise and DSNU or increase integration time. Increasing integration time, however, is limited by motion blur and frame rate constraints. In [43], an algorithm is presented for extending DR at both the high and low illumination ends from multiple captures. The algorithm consists of two main procedures: photocurrent estimation and motion/saturation detection. Estimation is used to reduce read noise and thus enhance DR at the low illumination end. Saturation detection is used to enhance DR at the high illumination end, as previously discussed, while motion blur detection ensures that the estimation is not corrupted by motion. The algorithm operates completely locally. Each pixel's final value is computed recursively using only its own captured values. The small storage and computation requirements of this algorithm make it well suited for single-chip implementation.
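A minimal sketch of the last-sample-before-saturation rule used above to assemble a high DR image from multiple captures: each pixel keeps its longest unsaturated sample, rescaled by its integration time. The array names, normalization, and saturation threshold are illustrative assumptions, not the reconstruction algorithm of [43].

import numpy as np

def assemble_hdr(captures, t_ints, sat_level=0.95):
    """captures: same-size arrays taken at increasing integration times t_ints,
    with pixel values normalized to [0, 1]. Returns a photocurrent-like map."""
    hdr = captures[0] / t_ints[0]                    # shortest capture, always usable
    for img, t in zip(captures[1:], t_ints[1:]):
        unsaturated = img < sat_level
        hdr = np.where(unsaturated, img / t, hdr)    # prefer the longest valid exposure
    return hdr

# Exponentially increasing integration times, as in Figure 18: T, 2T, ..., 32T.
t_ints = [1, 2, 4, 8, 16, 32]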

Video Rate Applications of High-Speed Readout


As discussed earlier, one of the main advantages of CMOS image sensors in general, and DPS in particular, is high frame-rate readout. This capability can be used to enhance the performance of many image and video processing applications.

20. Comparison of the CMOS DPS imager (DPS-SOC) versus a CCD imager using a high DR scene.

The idea is to use the high frame-rate capability to temporally oversample the scene and, thus, to obtain more accurate information about scene motion and illumination. This information is then used to improve the performance of image and standard frame-rate video applications. In the previous subsection, we discussed one important application of this general idea, which is DR extension via multiple capture. Another promising application of this idea is optical flow estimation (OFE), a technique used to derive an approximation for the motion field captured by a given video sequence. OFE is used in a wide variety of video-processing tasks such as video compression, 3-D surface structure estimation, super-resolution, motion-based segmentation, and image registration. In a recent paper [44], a method is presented for obtaining high-accuracy optical flow estimates at a conventional standard frame rate, e.g., 30 frames per second, by first capturing and processing a high frame-rate version of the video. The method uses the Lucas-Kanade algorithm (a gradient-based method) to obtain optical flow estimates at a high frame rate, which are then accumulated and refined to estimate the optical flow at the desired standard frame rate. It demonstrates significant improvements in optical flow estimation accuracy, both on synthetically generated video sequences and on a real video sequence captured using an experimental high-speed imaging system. The high-speed OFE algorithm requires a small number of operations per pixel and can be readily implemented in a single-chip imaging system similar to the one discussed in the previous section [45].
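For reference, a single-window Lucas-Kanade step, the gradient-based least-squares estimate named above, can be written in a few lines; the window, frame names, and use of numpy are assumptions of this sketch rather than the implementation of [44].

import numpy as np

def lucas_kanade_patch(patch0, patch1):
    """Least-squares flow (vx, vy) for one small image patch across two frames:
    solve [Ix Iy] v = -It over all pixels of the patch."""
    ix = np.gradient(patch0, axis=1).ravel()     # horizontal spatial derivative
    iy = np.gradient(patch0, axis=0).ravel()     # vertical spatial derivative
    it = (patch1 - patch0).ravel()               # temporal derivative
    a = np.stack([ix, iy], axis=1)
    v, *_ = np.linalg.lstsq(a, -it, rcond=None)
    return v                                     # motion in pixels per frame

# At a high capture rate the inter-frame motion is a small fraction of a pixel, so this
# linearized estimate stays accurate; per-frame vectors are then accumulated to give the
# displacement over a standard 30 frames/s interval.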


3-D Sensors

The extraction of the distance to an object at each point in a scene is referred to as 3-D imaging or depth sensing. Such depth images are useful for several computer vision applications such as tracking, object and face recognition, 3-D computer games, and scene classification and mapping. Various 3-D imagers employing a variety of techniques, such as triangulation, stereovision, or depth-from-focus, have been built; however, those based on light detection and ranging (LIDAR) have gained the most focus in recent years due to their relative mechanical simplicity and accuracy [46]. Time-of-flight (TOF) LIDAR-based sensors measure the time delay between an emitted light pulse, e.g., from a defocused laser, and its incoming reflection to calculate the depth map for a given scene. Niclass et al. [47] describe a sensor consisting of an array of avalanche diodes operating in Geiger mode that is sensitive and fast enough to perform photon counting and, consequently, TOF measurements. The high sensitivity allows for the use of a low-power illumination source, thereby reducing the intrusiveness of operating such a sensor in a normal environment. Another example is the Equinox sensor, an amplitude-modulated continuous wave LIDAR 3-D imager comprising a 64 × 64 array of pixels [48]. It derives a depth map by estimating the phase delay between an emitted modulated light source and the corresponding detected reflected signal. Each pixel includes two photogates: one switched in phase with the frequency of the emitted modulated light and the other switched completely out of phase. This alternation in the photogate voltage levels effectively multiplies the returning light signal by a square wave, hence approximating a pixel-level demodulation operation. This results in an estimate of the phase shift and, consequently, the depth at each pixel point in the image.

Vertically Integrated Sensor Arrays

Approaches that decouple sensing from readout and processing by employing a separate layer for photodetection are commonly used in infrared (IR) imaging sensors. In particular, many IR hybrid focal plane arrays use separately optimized photodetection and readout layers hybridized via indium bumps [49]. Such approaches are becoming increasingly attractive for visible-range imaging in response to the high transistor leakage and low supply voltages of deep submicron processes as well as a desire for increased integration. In [50], photoresponsivity is improved by using a deposited Si:H thin film on ASIC (TFA) layer for photodetection. In [51], it is shown that silicon-on-insulator (SOI) technology can provide for partial decoupling between readout and sensing. The handle wafer is used for photodetection with improved responsivity, especially at longer wavelengths, while the SOI film is used for implementing the active readout circuitry, with the buried oxide providing isolation between the two. Taking this trend a step forward, vertically integrated sensor arrays, whereby multiple wafers are stacked and connected using through-wafer vias, promise even further performance gains. For example, in certain tactical, scientific, and industrial applications there is a need for imaging scenes with illumination/temperature ranges of 120 dB or more at speeds of 1,000 frames per second or more. These requirements far exceed the capability of current sensors, even using DR extension techniques such as multiple capture and well capacity adjusting. Other schemes can achieve higher DR than these two schemes but require significantly more per-pixel processing. Advances in vertical integration have significantly increased the amount of processing that can be integrated at each pixel, thus making the implementation of these high DR schemes more practical. In a recent paper [52], we described a high DR readout architecture, which we refer to as folded multiple capture. This architecture combines aspects from the multiple capture scheme with synchronous self-reset [53] to achieve over 120 dB of DR at 1,000 frames per second with high signal fidelity and low power consumption using simple robust circuits.
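Tying back to the phase-measuring TOF approach described under 3-D Sensors above, depth follows from the phase of the returning modulated light. The four-phase correlation sampling and the names below are a generic textbook formulation, not the specific photogate circuit of [48].

import math

C = 3.0e8  # speed of light (m/s)

def depth_from_phase(c0, c90, c180, c270, f_mod):
    """Four correlation samples taken 90 degrees apart give the phase shift of the
    reflected modulated light; depth follows from that phase at frequency f_mod."""
    phase = math.atan2(c90 - c270, c0 - c180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)     # divide by two for the round trip

# Unambiguous range is C / (2 * f_mod), e.g. 7.5 m at a 20-MHz modulation frequency.
print(depth_from_phase(1.0, 0.8, 0.2, 0.4, 20e6))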

Lab on Chip
Current research in biotechnology has focused on increased miniaturization of biological handling and testing systems for increased speed/throughput, decreased reagent cost, and increased sensitivity. In addition, since most mainstream biological analyses are based on optical testing methods, such as fluorometry, luminometry, or absorptiometry, the use of CMOS-based image sensors for biological applications offers a clear advantage over other choices due to the ease of customizability, high integration, and low cost of CMOS technology. Moreover, integration of such CMOS-based photodetectors with microfabricated microelectromechanical system (MEMS) substrates for reagent handling enables portable bioanalytical platforms for applications such as quantitative PCR, DNA sequencing, and pathogen detection. In [54], a 320 × 320 pixel CMOS-based lab-on-chip that performs cell manipulation is described. The chip is able to arbitrarily move electrically neutral particles via an applied electric field using an array of electrodes as well as simultaneously image the particles using an embedded APS array. Our group has built an 8 × 16 luminescence detection lab-on-chip fabricated in an optimized CMOS imaging process [55]. The integrated system is able to detect luminescence signals of below 10^-6 lux at room temperature, enabling low-cost, portable DNA sequencing and pathogen detection. This high sensitivity is achieved through the use of low dark-current P+/N/Psub diodes, low-noise fully differential circuitry, high-resolution A/D conversion, and on-chip signal processing.

ACKNOWLEDGMENTS
The authors wish to thank Brian Wandell, Peter Catrysse, Ali Ozer Ercan, Sam Kavusi, and Khaled Salama for helpful comments and discussions. Work on DPS partially supported under the Stanford Programmable Digital Camera project by: Agilent, Canon, HP, and Kodak. Work on vertically integrated sensor arrays is supported by DARPA. Work on integrated chemiluminescence is partially supported by the Stanford Genome Technology Center and NIH.

REFERENCES
[1] J.E. Farrell, F. Xiao, P. Catrysse, and B. Wandell, A simulation tool for evaluating digital camera image quality, in Proc. SPIE Electronic Imaging Conf., Santa Clara, CA, Jan. 2004, vol. 5294, pp. 124131. [2] G. Weckler, Operation of p-n junction photodetectors in a photon flux integrating mode, IEEE J. Solid-State Circuits, vol. 2, pp. 6573, Sept. 1967. [3] P. Denyer, D. Renshaw, G. Wang, M. Lu, and S. Anderson, On-chip CMOS sensors for VLSI imaging systems, in Proc. VLSI-91, 1991, pp. 157166. [4] P. Noble, Self-scanned image detector arrays, IEEE Trans. Electron Devices, vol. 15, p. 202, Apr. 1968. [5] E.R. Fossum, Active pixel sensors: Are CCDs dinosaurs?, in Proc. SPIE, Charged-Coupled Devices and Solid State Optical Sensors III, vol. 1900, 1993, pp. 3039. [6] B. Fowler, A. El Gamal, and D.X.D. Yang, A CMOS area image sensor with pixel-level A/D conversion, in 1994 IEEE Int. Solid-State Circuits Tech. Dig., pp. 226227. [7] A. Krymski, D. Van Blerkom, A. Andersson, N Block, B. Mansoorian, and E. R. Fossum, A high speed, 500 frames/s, 1024 1024 CMOS active pixel sensor, in Proc. 1999 Symp. VLSI Circuits, 1999, pp. 137138. [8] N. Stevanovic, M. Hillegrand, B.J. Hostica, and A. Teuner, A CMOS image sensor for high speed imaging, in ISSCC Tech. Dig., 2000, vol. 43, pp. 104105. [9] S. Kleinfelder, S.H. Lim, X.Q. Liu, and A. El Gamal, A 10,000 frames/s 0.18 m CMOS digital pixel sensor with pixel-level memory, in ISSCC Tech. Dig., San Francisco, CA, 2001, pp. 8889. [10] B.C. Burkey, W.C. Chang, J. Littlehale, T.H. Lee, T.J. Tredwell, J.P. Lavine, and E.A. Trabka, The pinned photodiode for an interline transfer CCD imager, in Proc. IEDM, 1984, pp. 2831. [11] P. Catrysse and B. Wandell, Optical efficiency of image sensor pixels, J. Opt. Soc. Amer. A, Opt. Image Sci., vol. 19, no. 8, pp. 16101620, 2002. [12] N. S. Saks, A technique for suppressing dark current generated by interface states in buried channel CCD imagers, IEEE Electron Device Lett., vol. 1, pp. 131133, July 1980. [13] S. Ohba, M. Nakai, H. Ando, S. Hanamura, S. Shimda, K. Satoh, K. Takahashi, M. Kubo, and T. Fujita, MOS area sensor: Part IILownoise MOS area sensor with antiblooming photodiodes, IEEE J. SolidState Circuits, vol. 15, pp.747752, Aug. 1980. [14] S.G. Wuu, H.C. Chien, D.N. Yaung, C.H. Tseng, C.S. Wang, C.K. Chang, and Y.K. Hsiao, A high performance active pixel sensor with 0.18-m CMOS color imager technology, in IEEE IEDM Tech. Dig., 2001, pp. 555558. [15] S. J. Decker, R. D. McGrath, K. Brehmer, and C. G. Sodini, A 256 256 CMOS imaging array with wide dynamic range pixels and columnparalleldigital output, IEEE J. Solid-State Circuits, vol. 33, pp. 20812091, Dec. 1998. [16] M. Mori, M. Katsuno, S. Kasuga, T. Murata, and T. Yamaguchi, A 1/4in 2M pixel CMOS image sensor with 1.75Transistor/Pixel, in ISSCC Tech. Dig., 2004, vol. 47, pp. 110111.


[17] H. Takahashi, M. Kinoshita, K. Morita, T. Shirai, T. Sato, T. Kimura, H. Yuzurihara, and S. Inoue, "A 3.9-µm pixel pitch VGA format 10b digital image sensor with 1.5-transistor/pixel," in ISSCC Tech. Dig., 2004, vol. 47, pp. 108–109.
[18] D. Yang, A. El Gamal, B. Fowler, and H. Tian, "A 640 × 512 CMOS image sensor with ultrawide dynamic range floating-point pixel-level ADC," IEEE J. Solid-State Circuits, vol. 34, pp. 1821–1834, Dec. 1999.
[19] W. Bidermann, A. El Gamal, S. Ewedemi, J. Reyneri, H. Tian, D. Wile, and D. Yang, "A 0.18-µm high dynamic range NTSC/PAL imaging system-on-chip with embedded DRAM frame buffer," in ISSCC Tech. Dig., 2003, pp. 212–213.
[20] X. Liu, P. Catrysse, and A. El Gamal, "QE reduction due to pixel vignetting in CMOS image sensors," in Proc. SPIE Electronic Imaging 2000 Conf., San Jose, CA, 2000, vol. 3965, pp. 420–430.
[21] T.H. Hsu, Y.K. Fang, D.N. Yaung, S.G. Wuu, H.C. Chien, C.S. Wang, J.S. Lin, C.H. Tseng, S.F. Chen, C.S. Lin, and C.Y. Lin, "Dramatic reduction of optical crosstalk in deep-submicrometer CMOS imager with air gap guard ring," IEEE Electron Device Lett., vol. 25, pp. 375–377, June 2004.
[22] K. Yoon, C. Kim, B. Lee, and D. Lee, "Single-chip CMOS image sensor for mobile applications," in ISSCC Tech. Dig., 2002, vol. 45, pp. 36–37.
[23] C.A. Mead, "Adaptive retina," in Analog VLSI Implementation of Neural Systems, C. Mead and M. Ismail, Eds. Norwell, MA: Kluwer, 1989, pp. 239–246.
[24] R. Lyon, "The optical mouse, and an architectural method for smart digital sensors," in Proc. VLSI Systems Conf., Pittsburgh, PA, 1981, p. 1.
[25] J. Tanner and C.A. Mead, "A correlating optical motion detector," in Proc. Advanced Research in VLSI Conf., Cambridge, MA, 1984, p. 57.
[26] X. Arreguit, F.A. Van Schaik, F.V. Bauduin, M. Bidiville, and E. Raeber, "A CMOS motion detector system for pointing devices," in ISSCC Tech. Dig., San Francisco, CA, 1996, pp. 98–99.
[27] J. Kramer, G. Indiveri, and C. Koch, "Motion adaptive image sensor for enhancement and wide dynamic range," in Proc. SPIE, Berlin, 1996, vol. 2950, pp. 50–63.
[28] N. Ancona, G. Creanza, D. Fiore, and R. Tangorra, "A real-time, miniaturized optical sensor for motion estimation and time-to-crash detection," in Proc. SPIE, Berlin, 1996, vol. 2950, pp. 75–85.
[29] M. Sivilotti, M.A. Mahowald, and C.A. Mead, "Real-time visual computations using analog CMOS processing arrays," in Proc. VLSI Conf., Cambridge, MA, 1987, p. 295.
[30] T. Delbruck, "Investigations of Analog VLSI Visual Transduction and Motion Processing," Ph.D. dissertation, California Institute of Technology, 1993.
[31] I. Koren, J. Dohndorf, J.U. Schlbler, J. Werner, A. Kruonig, and U. Ramacher, "Design of a focal plane array with analog neural preprocessing," in Proc. SPIE, Berlin, 1996, vol. 2950, pp. 64–74.
[32] V. Brajovic and T. Kanade, "A sorting image sensor: An example of massively parallel intensity-to-time processing for low-latency computational sensors," in Proc. 1996 IEEE Int. Conf. Robotics and Automation, Minneapolis, MN, 1996, pp. 1638–1643.
[33] A. Rodriguez-Vazquez, S. Espejo, R. Dominguez-Castro, R. Carmona, and E. Roca, "Mixed-signal CNN array chips for image processing," in Proc. SPIE, Berlin, 1996, vol. 2950, pp. 218–229.
[34] F. Paillet, D. Mercier, and T. Bernard, "Making the most of 15 kλ² silicon area for a digital retina PE," in Proc. SPIE, 1998, vol. 3410, pp. 158–167.
[35] P.-F. Ruedi, P. Heim, F. Kaess, E. Grenet, F. Heitger, P.-Y. Burgi, S. Gyger, and P. Nussbaum, "A 128 × 128 pixel 120 dB dynamic range vision sensor chip for image contrast and orientation extraction," in ISSCC Tech. Dig., San Francisco, CA, 2003, pp. 226–227.
[36] D. Stoppa, A. Simoni, L. Gonzo, M. Gottardi, and G.-F. Dalla Betta, "A 138-dB dynamic range CMOS image sensor with new pixel architecture," in ISSCC Tech. Dig., 2002, vol. 45, pp. 40–41.

[37] L. McIlrath, "A low-power low-noise ultrawide-dynamic-range CMOS imager with pixel-parallel A/D conversion," IEEE J. Solid-State Circuits, vol. 36, pp. 846–853, May 2001.
[38] D. Yang and A. El Gamal, "Comparative analysis of SNR for image sensors with enhanced dynamic range," in Proc. SPIE, San Jose, CA, 1999, vol. 3649, pp. 197–211.
[39] S. Kavusi and A. El Gamal, "A quantitative study of high dynamic range image sensor architectures," in Proc. SPIE, 2004, pp. 264–275.
[40] S. Kavusi and A. El Gamal, "Quantitative study of high-dynamic-range Sigma-Delta-based focal plane array architectures," in Proc. SPIE Defense and Security Symp., 2004, pp. 341–350.
[41] T. Nakamura and K. Saitoh, "Recent progress of CMD imaging," presented at the IEEE Workshop on CCDs and Advanced Image Sensors, Bruges, Belgium, 1997.
[42] O. Yadid-Pecht and E. Fossum, "Wide intrascene dynamic range CMOS APS using dual sampling," IEEE Trans. Electron Devices, vol. 44, pp. 1721–1723, Oct. 1997.
[43] X.Q. Liu and A. El Gamal, "Synthesis of high dynamic range motion blur free image from multiple captures," IEEE Trans. Circuits Syst. I, vol. 50, pp. 530–539, Apr. 2003.
[44] S.H. Lim, J. Apostolopoulos, and A. El Gamal, "Optical flow estimation using temporally oversampled video," IEEE Trans. Image Processing, to be published.
[45] S.H. Lim and A. El Gamal, "Integrating image capture and processing: Beyond single chip digital camera," in Proc. SPIE Electronic Imaging 2001 Conf., San Jose, CA, 2001, vol. 4306, pp. 219–226.
[46] M.D. Adams, "Coaxial range measurement: Current trends for mobile robotic applications," IEEE Sensors J., vol. 2, pp. 2–13, Feb. 2002.
[47] C. Niclass, A. Rochas, P.-A. Besse, and E. Charbon, "A CMOS single photon avalanche diode array for 3-D imaging," in ISSCC Tech. Dig., 2004, vol. 47, pp. 120–121.
[48] S.B. Gokturk, H. Yalcin, and C. Bamji, "A time-of-flight depth sensor, system description, issues and solutions," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Washington, DC, 2004, p. 35.
[49] L.J. Kozlowski, Y. Bai, M. Loose, A.B. Joshi, G.W. Hughes, and J.D. Garnett, "Large area visible arrays: Performance of hybrid and monolithic alternatives," in Proc. SPIE Survey and Other Telescope Technologies and Discoveries, 2002, vol. 4836, pp. 247–259.
[50] S. Benthien, T. Lule, B. Schneider, M. Wagner, M. Verhoeven, and M. Bohm, "Vertically integrated sensors for advanced imaging applications," IEEE J. Solid-State Circuits, vol. 35, pp. 939–945, July 2000.
[51] X. Zheng, C. Wrigley, G. Yang, and B. Pain, "High responsivity CMOS imager pixel implemented in SOI technology," in Proc. 2000 IEEE Int. SOI Conf., 2000, pp. 138–139.
[52] S. Kavusi and A. El Gamal, "Folded multiple-capture: An architecture for high dynamic range disturbance-tolerant focal plane array," in Proc. SPIE Infrared Technology and Applications, Orlando, FL, 2004, vol. 5406.
[53] J. Rhee and Y. Joo, "Wide dynamic range CMOS image sensor with pixel level ADC," Electron. Lett., vol. 39, no. 4, pp. 360–361, Feb. 2003.
[54] G. Medoro, N. Manaresi, A. Leonardi, L. Altomare, M. Tartagni, and R. Guerrieri, "A lab-on-a-chip for cell detection and manipulation," IEEE Sensors J., vol. 3, pp. 317–325, June 2003.
[55] H. Eltoukhy, K. Salama, A. El Gamal, M. Ronaghi, and R. Davis, "A 0.18-µm CMOS 10⁻⁶ lux bioluminescence detection system-on-chip," in ISSCC Tech. Dig., San Francisco, CA, 2004, pp. 222–223.

Abbas El Gamal and Helmy Eltoukhy (eltoukhy@stanford.edu) are with Stanford University in Stanford, California.
