Sensors and Actuators A 130–131 (2006) 273–281
doi:10.1016/j.sna.2006.02.031

A 4 μs integration time imager based on CMOS single photon avalanche diode technology

Cristiano Niclass, Alexis Rochas (1), Pierre-André Besse, Radivoje Popovic, Edoardo Charbon (*)
Ecole Polytechnique Fédérale de Lausanne (EPFL), School of Computer and Communication Sciences, School of Engineering, Switzerland

(*) Corresponding author. Tel.: +41 21 693 6487. E-mail address: edoardo.charbon@epfl.ch (E. Charbon).
(1) Present address: IdQuantique SA, Geneva.

Received 17 June 2005; received in revised form 26 January 2006; accepted 11 February 2006. Available online 30 March 2006.

Abstract

An optical imager based on single photon avalanche diodes is reported. The imager, fabricated in 0.8 μm CMOS technology, consists of an array of 1024 pixels, each with an area of 58 μm × 58 μm, for a total chip area of 2.5 mm × 2.8 mm. The architecture of the imager is reduced to a minimum since no A/D converter is required. Moreover, since the output of each pixel is digital, complex read-out circuitry, amplifiers, sample-and-hold stages, and other analog processing circuits are also unnecessary. The maximum measured dynamic range is 120 dB and the minimum noise equivalent intensity is 1.3 mlx. The minimum integration time per pixel is 4 μs, while optical and electrical crosstalk are negligible without the need for any post-processing or other non-standard techniques.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Solid-state imaging; Single photon detection; Avalanche effect; High-speed imaging; Single photon avalanche diode; Geiger mode; Dynamic range; Complementary MOS technology

1. Introduction

In recent times, the field of optical imaging for the characterization of ultra-fast and/or low-intensity phenomena has received renewed attention from the academic and industrial communities. Intense research activity has been focused on bio-imaging applications, whereby assay, bio-luminescence, and bio-scattering methods, among others, are currently the techniques of choice. Assay methods, based for example on fluorescence, may be used to determine the potential of a given cell in vitro or in vivo by means of voltage-sensitive dyes (VSDs). A variety of techniques based on specific VSDs are utilized in neural activity analysis and research [1,2]. The challenge in these methods is generally to detect small photovariability while maintaining high contrast in the presence of massive background illumination. Fluorescence can be utilized in a variety of other methods, such as fluorescence correlation spectroscopy (FCS) [3–5], single- and multi-photon fluorescence lifetime imaging (FLIM) [6,7], Förster resonance energy transfer (FRET) [8], etc. These methods share the need for detectors with very high timing resolution or, equivalently, low time uncertainty for the detection of an event involving a few – or often one – photon. Bio-luminescence methods exploit the fact that light is emitted as the result of a chemical reaction [9]. The difficulty of detection lies in the extremely low emission intensity, typically in the microlux range. Speed is also a concern if complex, possibly parallel, evaluations are needed in a reasonable time frame, as is the case, for example, in DNA sequencing or protein identification.
Scattering techniques are used, among others, to discriminate vessels full of oxygenated haemoglobin from those lacking it, thus enabling real-time functional brain analysis. This technique can be performed using a matrix of independent transmitter–receiver pairs so as to construct a tomographical image of the brain [10]. The challenge of this method is again the faintness of the scattered signals, which are subject to significant background noise. The discrimination between parallel optical channels is also a major problem due to the number of interfering optical signals in any given location, even when various coding/decoding techniques are used.

There exist several applications not involving bio-imaging that require high speed of operation and high timing resolution [11]. An example is the real-time imaging of the motion of natural gravity-driven flows, such as snow avalanches and rapid mass movements (including landslides and debris flows). A growing number of fluid-dynamics models have been proposed over the last decade to predict such phenomena; however, a key problem in these models is that they mostly rely on speculative rheological equations and use empirical coefficients. Such coefficients do not have a physical meaning and must be adjusted for each event, thus making accurate predictions challenging. A much more desirable approach would require gaining insight into the dynamics of gravity-driven flows. Currently used techniques involve videogrammetry, radar, and particle image velocimetry (PIV) [12]; a better approach would be to use ultra-fast imaging, as well as high frame rate 2D or 3D vision devices. Such imagers would require high saturation levels and low integration times to be effective in reducing the impact of shot noise.

In this paper we report on a sensor that may enable imaging methods where speed, sensitivity, and/or timing resolution are key. The imager's core is an array of 1024 pixels based on single photon avalanche diodes (SPADs) fabricated in CMOS technology. To the best of our knowledge, this sensor is the largest optical imager based on SPADs ever integrated in CMOS technology. SPADs are detectors sensitive to a single photon and can be independently read using a standard column/row addressing scheme. Upon photon arrival, a SPAD generates a digital pulse that may be counted with a conventional counter over a given interval of time (the integration time). The counted value is proportional to the impinging radiation intensity. Thus, the architecture of grayscale imagers may be highly simplified compared with conventional CMOS Active Pixel Sensors. Moreover, replacing A/D converters with simple counters, even if one per pixel is used, may result in very significant power savings. SPAD based imagers also have the property of detecting the arrival time of photons with high accuracy. By multiple exposures to a sharp optical laser pulse, it is possible to build a timing spectrum with picosecond resolution. Such a spectrum is the basis, for example, of FLIM characterizations. Accurately detecting photon arrival times is also useful to compute precise time-of-flight (TOF) maps of 3D scenes. In this case, the TOF of photons originating at a pulsed light source and reflected by an object must be detected with a precision of a few picoseconds to achieve millimetric depth accuracy [13,14].
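As a rough numerical illustration of this timing requirement (a sketch, not a measurement from this work), the snippet below relates depth accuracy to the required round-trip timing precision via d = c·t/2; a 1 mm depth accuracy corresponds to a timing precision of roughly 6.7 ps.

    # Rough illustration: timing precision needed for a given TOF depth accuracy.
    # Depth resolution dz relates to round-trip timing resolution dt as dz = c*dt/2,
    # because the light travels to the target and back.
    C = 299_792_458.0  # speed of light [m/s]

    def timing_precision_for_depth(depth_accuracy_m: float) -> float:
        """Return the round-trip timing precision [s] required for a depth accuracy [m]."""
        return 2.0 * depth_accuracy_m / C

    if __name__ == "__main__":
        dt = timing_precision_for_depth(1e-3)                    # 1 mm depth accuracy
        print(f"required timing precision: {dt * 1e12:.1f} ps")  # ~6.7 ps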
This is generally accomplished by averaging a relatively low number of measurements [13,15], thus establishing the potential for high frame rates even in 3D vision. In summary, the sensor presented in this paper has a number of very desirable properties, mainly related to its speed and dynamic range. Such properties are detailed in the paper and demonstrated through a number of experiments. The simplicity and general-purpose nature of the sensor enable a number of apparently disparate applications, such as those described above, with little reconfiguration effort. In fact, extensive experimentation with methods such as FLIM is currently under way. The paper is organized as follows. Section 2 outlines SPAD technology and its properties. Section 3 is a description of the imager architecture, and Section 4 summarizes measurement results for the chip.

Fig. 1. I–V characteristic of a typical diode and optical mean gain G. Three reverse biasing regions exist depending upon the value of G. In the so-called Geiger mode of operation the optical mean gain is virtually infinite. In conventionally biased diodes the gain is unity, while in linear mode avalanche diodes it is larger and highly dependent on bias voltage variations.

2. Single photon avalanche diodes

Avalanche photodiodes are p–n junctions reverse-biased above the breakdown voltage (Vbd) for single photon detection. When a diode is biased above Vbd, it remains in a zero-current state for a relatively long period of time, usually in the millisecond range. During this time, a very high electric field exists within the p–n junction, generating the avalanche multiplication region. Under these conditions, if a primary carrier enters the multiplication region and triggers an avalanche, several hundreds of thousands of secondary electron–hole pairs are generated by impact ionization, thus causing the diode's depletion capacitance to be rapidly discharged. As a result, a sharp current pulse is generated and can be easily measured. This mode of operation is commonly known as Geiger mode. Fig. 1 shows the biasing scheme of Geiger mode photodiodes as compared to conventional and linear mode avalanche diodes. Conventional photodiodes, such as those used in standard imagers, are not compatible with Geiger mode operation since they suffer from premature breakdown when the bias voltage approaches Vbd. The cause of premature breakdown is the fact that the peak electric field is located only in the diode's periphery rather than in the planar region [16]. A SPAD is a photodiode specifically designed to avoid premature breakdown, thus allowing a planar multiplication region to be formed within the whole junction area. Linear mode avalanche photodiodes are biased just below Vbd and thus exhibit a finite multiplication gain. Statistical variations of this finite gain produce an additional noise contribution known as excess noise. SPADs, on the contrary, are not affected by these gain fluctuations since the optical gain is virtually infinite. Nevertheless, the statistical nature of the avalanche buildup translates into a finite detection probability. The probability of detecting a photon hitting the SPAD's surface, known as the photon detection probability (PDP), depends on the diode's quantum efficiency and on the probability that an electron or a hole triggers an avalanche.
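Schematically, and only to formalize the dependence just described (the paper gives no closed-form expression, so the decomposition below is an assumption in standard notation), the PDP can be viewed as the product of the quantum efficiency η(λ) and the avalanche triggering probability Pt(Ve):

    \mathrm{PDP}(\lambda, V_e) \;\approx\; \eta(\lambda)\, P_t(V_e)

where η(λ) is the probability that an incident photon generates a carrier reaching the multiplication region and Pt(Ve) increases with the excess bias Ve, consistent with the trend shown in Fig. 7.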
Additionally, in Geiger mode, the signal amplitude does not provide intensity information since all the current pulses have the same amplitude [16]. Intensity information is, however, obtained by counting the pulses during a certain period of time or by measuring the mean time interval between successive pulses. The same mechanism may be used to evaluate noise. Thermally or tunneling generated carriers within the p–n junction, which produce dark current in linear mode photodiodes, can trigger avalanche pulses. In Geiger mode, they are indistinguishable from regular photon-triggered pulses and they produce spurious pulses at a frequency known as the dark count rate (DCR). DCR strongly depends on temperature and is an important parameter for imagers since it defines the minimal detectable signal, thus setting the lower limit of the imager's dynamic range. Another source of spurious counts is after-pulsing. After-pulses are due to carriers temporarily trapped in the multiplication region during a Geiger pulse and released after a short time interval, thus re-triggering a Geiger event. After-pulsing depends on the trap concentration as well as on the number of carriers generated during a Geiger pulse. The number of carriers depends in turn on the diode's parasitic capacitance and on the external circuit generally used to quench the avalanche. Typically, the quenching process is achieved by temporarily lowering the bias voltage below Vbd. Once the avalanche has been quenched, the SPAD needs to be recharged above Vbd so that it can detect subsequent photons. The time required to quench the avalanche and recharge the diode up to 90% of its nominal excess bias is defined as the dead time (see Fig. 2). This parameter limits the maximal rate of detected photons, thus producing a saturation effect. The dead time consequently sets the upper limit of the dynamic range of the image sensor.

Fig. 2. Voltage waveform at the cathode of a SPAD; definition of dead time tdead.

The statistical fluctuation of the time interval between the arrival of a photon at the sensor and the leading edge of the output pulse is defined as the timing jitter or timing resolution. Timing jitter mainly depends on the time a photo-generated carrier requires to be swept from the absorption point into the multiplication region; in SPADs it is generally a few tens of picoseconds [16]. The jitter may be improved by increasing the thickness of the multiplication region, for example through doping profile customization. During an avalanche, some photons can be emitted due to the electroluminescence effect [17]. These photons may be detected by neighboring pixels in an array of SPADs, thus producing crosstalk. The probability of this effect is called the optical crosstalk probability. This probability is much smaller in fully integrated arrays of SPADs than in hybrid versions, because the diode's parasitic capacitance in the integrated version is orders of magnitude smaller than in hybrid solutions, thus reducing the energy dissipated during a Geiger event. Electrical crosstalk, on the other hand, is produced by the fact that photons absorbed beyond the p–n junction, deep in the substrate, generate carriers that can diffuse to neighboring pixels. The probability of occurrence of this effect, whether optical or electrical, defines the crosstalk probability.
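To make the saturation mechanism concrete, the following sketch assumes a simple non-paralyzable dead-time model (an illustrative assumption; the paper does not specify a count-rate model): during the dead time tdead after each detection the pixel is blind, so the detected rate can never exceed 1/tdead, which for tdead of about 40 ns is roughly 25 Mcounts/s, of the same order as the saturation level reported in Section 4.

    # Illustrative (assumed) non-paralyzable dead-time model for a passively
    # quenched SPAD pixel: the pixel is blind for t_dead after each detection.
    def detected_rate(incident_rate_hz: float, pdp: float, t_dead_s: float) -> float:
        """Expected detected count rate [Hz] for a given incident photon rate."""
        triggered = incident_rate_hz * pdp               # photons that would fire the SPAD
        return triggered / (1.0 + triggered * t_dead_s)  # dead-time saturation

    if __name__ == "__main__":
        T_DEAD = 40e-9   # dead time [s], the upper bound reported for this pixel
        PDP = 0.26       # peak photon detection probability at 460 nm
        for rate in (1e3, 1e6, 1e8, 1e10):
            print(f"{rate:.0e} photons/s -> {detected_rate(rate, PDP, T_DEAD):.3e} counts/s")
        # the detected rate never exceeds 1 / T_DEAD = 25e6 counts/s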
3. Image sensor implementation

Discrete SPAD devices, fabricated in dedicated silicon technologies, have been available for almost two decades [18]. Only recently have hybrid solutions involving arrays of silicon SPAD pixels and CMOS readout circuitry appeared [19]. Fully integrated SPADs fabricated in CMOS technology are a more recent development [16]. The most obvious advantage of full SPAD integration in CMOS technology is miniaturization. There are other important performance improvements directly associated with miniaturization. The reduction of the parasitic capacitance across the diode reduces not only the energy dissipated during a Geiger event, and hence the crosstalk, but also the time required to perform a complete recharge. This time tends to dominate the dead time in SPADs. The recharge time can be reduced using a variety of active quenching and recharge mechanisms [16,20]. However, these schemes add a level of complexity to the detector, often increasing overall power dissipation and detector size, thus preventing the design of large arrays. DCR is another performance measure that generally benefits from integration and miniaturization. DCR is related in great part to the trap concentration in and around the multiplication region. By reducing the size of the multiplication region, the average number of traps per pixel is reduced. Thus, DCR is generally reduced while the counts per unit of active area remain the same. As a result, the dynamic range and signal-to-noise ratio may improve in the overall sensor. In addition, the overall increase in pixel density allows the user to ignore less performing pixels, thus in effect isolating areas with higher trap concentration from the rest of the sensor. Timing jitter, on the contrary, is not directly impacted by miniaturization. However, an integrated solution may drastically reduce the distance between detectors and time discriminators. Thus, inaccuracies due to the propagation of pulses across large distances, thermal gradients, and arrays of repeaters will be reduced.

Fig. 3. Cross-section of a SPAD. The device consists of a p–n junction reverse biased at a voltage in excess of breakdown (Geiger mode). The multiplication region forms at the junction. To avoid premature discharge, a guard ring surrounds the sensitive area. The SPAD p+ anode is nominally biased at a negative voltage Vop equal to −25.5 V. This voltage is common to all the pixels in the array.

The two main challenges in fabricating SPADs in CMOS are early discharge avoidance and management of the excess bias voltage. Early discharge is prevented by surrounding the photodiode with a guard ring of relatively lightly doped diffusion, for example using a combination of p-wells in deep n-wells. The purpose is to eliminate abrupt doping profiles, thereby reducing the potential gradients. High voltages are dealt with by proper layout and by the selection of an appropriate CMOS technology. The pixel at the core of the imager consists of a circular SPAD and a 5-transistor (5T) configuration. The cross-section of the SPAD is shown in Fig. 3. The device was fabricated in a high voltage 0.8 μm CMOS process. It is a dual p+/deep n-tub/p-substrate junction. The planar p+/deep n-tub junction provides the multiplication region where the Geiger breakdown occurs. The fabrication process was a 2M/2P twin-tub technology on a p-substrate allowing an operating voltage of 2.5–50 V.
A useful feature of the technology we selected is the availability of a p-tub implantation to create the guard ring surrounding the p+ anode region. The breakdown voltage Vbd of the p+/deep n-tub junction is 25.5 V. A larger bias voltage Vbd + Ve is applied to the diode to operate in single photon detection mode, where Ve is the excess bias voltage. The SPAD operates in passive quenching mode, i.e. upon photon arrival, the breakdown current discharges the depletion region capacitance through a resistance, causing the bias voltage to drop below the breakdown voltage and thus quenching the avalanche. Fig. 4a shows the schematic of the pixel, whereby the quenching resistance is replaced by a PMOS transistor Tq. The W/L ratio of Tq is set to provide a sufficiently resistive path to quench the avalanche process. After avalanche quenching, the SPAD recharges the parasitic capacitance across the diode via Tq. Thus, photon detection capability is progressively recovered. The time required for quenching the avalanche and recharging the diode, the dead time, is typically less than 40 ns in the 5T digital pixel. At the cathode, an inverted analog voltage pulse of amplitude Ve reflects the detection of a single photon. The CMOS inverter stage converts this analog voltage pulse into a digital pulse. A transmission gate (TG) is used to feed the detection signal to the nearest column output line when s = VDD and s̄ = GND (read phase). The near-infinite internal gain inherent to Geiger mode operation means that no further amplification is required, and the pixel output can, if needed, be routed directly outside the chip. To the best of our knowledge, we are reporting the first scalable and intrinsically digital pixel implemented in a CMOS process for grayscale imaging applications. A photomicrograph of the 5T digital pixel is shown in Fig. 4b. The pixel occupies an area of 58 μm × 58 μm, while the active area of the SPAD is 38 μm2.

Fig. 4. Circuitry and a photomicrograph of a 5T pixel. (a) The deep n-tub cathode is connected to the power supply VDD = 5 V through a long channel PMOS transistor Tq performing the quenching. The excess bias voltage Ve is thus equal to |Vop| + VDD − Vbd = 5 V. The voltage signal at the drain of Tq is converted into a digital signal and routed to the nearest column, where a counter may be used to evaluate light intensity and deliver it to the chip I/O. (b) Pixel photomicrograph. The pixel size is 58 μm × 58 μm.

The imager's complete functional diagram is shown in Fig. 5. It consists of an array of 32 × 32 pixels and requires two power supply buses, VDD = 5 V and Vop = −25.5 V. Digital pixels allow the readout to be designed with less stringent constraints on noise performance. Moreover, no amplification, no sample and hold, and no A/D converter are necessary. Consequently, in SPAD based image sensors, performance is not limited by analog noise and device mismatch in the readout circuitry. Therefore no particular care has to be given to the design of those components, except for minimizing digital noise and optimizing speed. In this implementation, the readout circuitry consists of a 5-bit decoder for row selection and a 32-to-1 multiplexer (MUX) for column selection. A fast 15-bit linear feedback shift register (LFSR) counter has been implemented on-chip and is used to compute intensity images.

Fig. 5. Sensor block diagram. The imaging system was integrated in a 0.8 μm CMOS technology. The chip's total area is approximately 7 mm2.
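An LFSR counter advances a shift register through a deterministic pseudo-random sequence of states using only XOR feedback, which makes it fast and compact; the count is then recovered by mapping the final state back to the number of elapsed clock edges. The feedback polynomial used on-chip is not stated in the paper; the sketch below assumes the common maximal-length 15-bit polynomial x^15 + x^14 + 1, which cycles through 2^15 − 1 states.

    # Illustrative model of a 15-bit LFSR counter (the on-chip feedback polynomial
    # is an assumption: x^15 + x^14 + 1, a common maximal-length choice).
    def lfsr15_step(state):
        """Advance a 15-bit Fibonacci LFSR by one clock (taps at bits 15 and 14)."""
        feedback = ((state >> 14) ^ (state >> 13)) & 1
        return ((state << 1) | feedback) & 0x7FFF

    def build_decode_table(seed=0x0001):
        """Map each LFSR state to the number of clock edges elapsed since the seed."""
        table, state = {}, seed
        for count in range(2**15 - 1):       # maximal-length sequence: 32767 states
            table[state] = count
            state = lfsr15_step(state)
        return table

    if __name__ == "__main__":
        decode = build_decode_table()
        state = 0x0001
        for _ in range(1000):                # simulate 1000 detection pulses
            state = lfsr15_step(state)
        print(decode[state])                 # -> 1000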
Intensity information is obtained by counting detection pulses within the integration time. Addressing and control signals were provided using a National Instruments PXI acquisition rack. The MUX output is also used externally for testing and characterization purposes.

4. Measurement results

Fig. 6 shows the measured PDP as a function of the photon wavelength for a typical pixel with a nominal Ve of 5 V at room temperature. It is larger than 20% between 430 and 570 nm, with a peak of 26% at 460 nm. At 700 nm the PDP is still 8%, without any post-processing of the silicon chip surface. Fig. 7 shows a plot of the PDP as a function of Ve for various wavelengths.

Fig. 6. Photon detection probability vs. wavelength. The excess bias voltage Ve is 5 V.
Fig. 7. Photon detection probability vs. excess bias voltage for several wavelengths.

The distribution of DCR across the sensor array for nominal Ve is plotted in Fig. 8. At room temperature, the limited active area of the SPAD and the outstanding cleanliness of the CMOS process lead to a DCR of 350 Hz on average in the array and negligible after-pulsing effects. The DCR distribution at T = 0 °C is also plotted in Fig. 8; in this case, the mean value of the DCR dropped below 75 Hz. Fig. 9 shows a histogram of the DCR distribution. Note that 95% of the SPADs in the array exhibit a DCR below 1 kHz at room temperature. At T = 0 °C, 61% of the SPADs exhibit an outstanding DCR level below 1 Hz. In addition, almost 93% of them showed a DCR below 100 Hz. Comparisons with recently published devices fabricated in a comparable technology show that the majority of the pixels of this design exhibit two orders of magnitude lower DCR than the relatively larger pixels in [21] under comparable operating conditions. In addition, the passive quench and recharge mechanisms used in this design perform as well as their active counterparts in [21], but at a negligible cost in terms of area and power consumption. Migration to more advanced technologies, where pixels were further miniaturized, confirms this trend [22].

Fig. 8. DCR distribution at room temperature (a) and at 0 °C (b).
Fig. 9. DCR statistics at room temperature and 0 °C.

The timing jitter of the pixel (including the readout electronics) was measured using a pulsed laser source with a pulse width of 30 ps, which was used to excite a single pixel through an optical fiber. An oscilloscope with 2 ps of uncertainty was used to measure the time interval between the laser output trigger and the sensor output signal. The resulting overall timing jitter was 115 ps FWHM.

Fig. 10 illustrates the optical response of a typical pixel to variable illumination, measured with an integration time of 1 s. The sensor was tested under homogeneous illumination using a monochromator whose wavelength was set to 635 nm. At high illumination intensity, when the signal approached 20 megacounts, the SPAD exhibited a saturation behavior due to the dead time [15]. This point is considered to be the saturation level. Due to the absence of readout noise, when the illumination is very low the signal is limited only by the noise in the dark. Since dark counts follow a Poissonian distribution [15], their average value can be electronically stored for each pixel and removed from the final image. Under this assumption, the temporal noise in the dark is defined as the time-varying component of the dark counts, i.e. their square root. At room temperature, the average temporal noise in the dark was 18 counts. The noise equivalent intensity is defined as the optical intensity at which the SNR is equal to 0 dB, and it designates the minimum detectable signal. It was 3.2 × 10^−10 W/cm2 for the imager under these conditions, which corresponds to 1.3 × 10^−3 lx. As a result, a dynamic range (DR) of 120 dB was obtained.

Fig. 10. Response of a SPAD to variable light intensity and dynamic range at 1 s integration time. The excess bias voltage Ve is 5 V.
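These figures are mutually consistent: with an average DCR of 350 Hz and a 1 s integration time, the Poissonian dark noise is about √350 ≈ 19 counts, and the ratio between the roughly 20 megacount saturation level and this noise floor corresponds to approximately 120 dB, as the short check below illustrates (a consistency check using the reported numbers, not an additional measurement).

    import math

    # Consistency check of the reported dynamic range using the paper's numbers
    # (1 s integration time, mean DCR of 350 Hz, ~20 Mcounts saturation level).
    T_INT = 1.0               # integration time [s]
    MEAN_DCR_HZ = 350.0       # average dark count rate at room temperature
    SATURATION_COUNTS = 20e6  # counts at which the response saturates

    dark_counts = MEAN_DCR_HZ * T_INT
    dark_noise = math.sqrt(dark_counts)  # Poissonian temporal noise, ~18.7 counts
    dr_db = 20.0 * math.log10(SATURATION_COUNTS / dark_noise)
    print(f"dark noise ~ {dark_noise:.1f} counts, DR ~ {dr_db:.1f} dB")  # ~120 dB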
Experimental measurements demonstrated that the SPAD array is virtually free of crosstalk effects. Crosstalk was measured by illuminating a single pixel in the center of the array using a highly focused laser beam through the optics of a microscope. Optical crosstalk was expected to be negligible thanks to the very small parasitic capacitance. Electrical crosstalk was minimized by design: minority carriers diffusing in the substrate cannot reach the multiplication region of a pixel due to charge collection at the deep n-tub/p-substrate junction. The total crosstalk was measured to be less than 0.01%. The plot in Fig. 11 shows the result of the characterization.

Fig. 11. Total crosstalk. The experiment consisted of pointing a collimated light source at one of the pixels and measuring the response of the entire array over a given integration time.

The SPAD based imager was tested in a real-time setup coupled to a camera objective at room temperature. Column/row addressing was controlled using a National Instruments PXI® acquisition rack with a 1.26 GHz Pentium® controller. A LabVIEW® interface was used for real-time capture and image rendering. Fig. 12 shows examples of images taken at various integration times as low as 4 μs. Note that the integration time was limited by the acquisition setup, not by the device itself. The scene was illuminated using a 50 W halogen light. Sensitivity was evaluated for a given integration time of 30 ms at various illumination levels. A 50% beam-splitter and a CCD camera were used to precisely quantify the illumination level. A 1/10×, 1/100×, 1/1000× filter-bank attenuated the light source to the desired levels. Fig. 13 shows the electro-optical setup and a set of pictures obtained at various illumination levels. Table 1 summarizes the salient performance measurements of the imager.

Fig. 12. Image of a model at various integration times. The lateral resolution was 32 × 32 pixels. The model was illuminated by a 50 W halogen lamp.
Fig. 13. Intensity level experiments at 30 ms exposure: (a) 2 lx; (b) 2 × 10^−1 lx; (c) 2 × 10^−2 lx; (d) electro-optical setup. The 50% beam-splitter, along with a CCD camera, is used to precisely quantify the illumination levels, while a filter-bank provides the necessary attenuation levels.

Table 1. Performance summary

Level   Measurement                                   Symbol   Value
Pixel   Photon detection probability peak at 460 nm   PDP      26%
        Average dark count rate at T = 27 °C          DCR      350 Hz
        Fill factor                                   –        1.1%
        FWHM timing jitter                            –        115 ps
        Dead time                                     –        <40 ns
        Dynamic range                                 DR       120 dB
Sensor  Minimum integration time                      –        4 μs
        Optical intensity at SNR = 0 dB               –        1.3 × 10^−3 lx
        Chip size                                     –        7 mm2
Lens    F-number                                      f/#      f/3.9

5. Conclusions

In this paper, a CMOS image sensor based on an array of 32 × 32 single photon avalanche diodes was described and its performance demonstrated through experiments. This type of imager differs considerably in architecture and performance from conventional CMOS Active Pixel Sensors.
Due to the outstanding noise performance of the imager, a noise equivalent intensity of 1.3 × 10^−3 lx and a maximum pixel dynamic range of 120 dB were obtained. Images of acceptable quality have been recorded with integration times in the microsecond range. This speed performance makes SPAD based image sensors very suitable for applications requiring very short integration times and/or low timing jitter. Parallel readout circuits, currently being designed, will eventually enable the acquisition of several thousand images per second, and miniaturization will enable even more powerful on-chip processing and greater array sizes while keeping manufacturing costs to a minimum.

Acknowledgments

This research was supported by a grant from the Swiss National Science Foundation (grant No. 620-066110). The authors are grateful to Mark Wang of Ziva Corp. for imaging the man in the hat.

References

[1] A. Grinvald, D. Shoham, A. Shmuel, D. Glaser, I. Vanzetta, E. Shtoyerman, H. Slovin, A. Sterkin, C. Wijnbergen, R. Hildesheim, A. Arieli, In-vivo optical imaging of cortical architecture and dynamics, in: U. Windhorst, H. Johansson (Eds.), Modern Techniques in Neuroscience Research, Springer, 2001.
[2] S. Nagasawa, H. Arai, R. Kanzaki, I. Shimoyama, Integrated multifunctional probe for active measurements in a single neural cell, in: Proceedings of the IEEE International Conference on Solid-State Sensors, Actuators, and Microsystems, vol. 2, 2005, pp. 1230–1233.
[3] P. Schwille, U. Haupts, S. Maiti, W.W. Webb, Molecular dynamics in living cells observed by fluorescence correlation spectroscopy with one- and two-photon excitation, Biophys. J. 77 (1999) 2251–2265.
[4] P. Schwille, F.J. Meyer-Almes, R. Rigler, Dual-color fluorescence cross-correlation spectroscopy for multicomponent diffusional analysis in solution, Biophys. J. 72 (1997) 1878–1886.
[5] M. Gösch, A. Serov, T. Anhut, T. Lasser, A. Rochas, P.A. Besse, R.S. Popovic, Parallel single molecule detection with a fully integrated single-photon 2 × 2 CMOS detector array, J. Biomed. Opt. 9 (5) (2004) 913–921.
[6] A.V. Agronskaia, L. Tertoolen, H.C. Gerritsen, Fast fluorescence lifetime imaging of calcium in living cells, J. Biomed. Opt. 9 (6) (2004) 1230–1237.
[7] O. Berezovska, P. Ramdya, J. Skoch, M.S. Wolfe, B.J. Bacskai, B.T. Hyman, Amyloid precursor protein associates with a nicastrin-dependent docking site on the presenilin 1-γ-secretase complex in cells demonstrated by fluorescence lifetime imaging, J. Neurosci. 23 (11) (2003) 4560–4566.
[8] W. Becker, K. Benndorf, A. Bergmann, C. Biskup, K. König, U. Tirplapur, T. Zimmer, FRET measurements by TCSPC laser scanning microscopy, in: Proceedings of the SPIE 4431, ECBO, 2001.
[9] H. Eltoukhy, K. Salama, A. El Gamal, M. Ronaghi, R. Davis, A 0.18 μm CMOS 10^−6 lx bioluminescence detection system-on-chip, in: Proceedings of the IEEE ISSCC, 2004, pp. 222–223.
[10] A. Maki, Optical tomography for noninvasive imaging of human brain functions, in: Proceedings of the IEEE International Conference on Solid-State Sensors, Actuators, and Microsystems, vol. 2, 2005, pp. 1103–1105.
[11] E. Charbon, Will CMOS imagers ever need ultra-high speed?, in: Proceedings of the IEEE International Conference on Solid-State and Integrated-Circuit Technology, 2004, pp. 1975–1980.
[12] H.M. Fritz, PIV applied to landslide generated impulse waves, in: Proceedings of the 10th International Symposium on Applications of Laser Techniques to Fluid Mechanics, July 2000.
[13] C. Niclass, A. Rochas, P.A. Besse, E. Charbon, A CMOS single photon avalanche diode array for 3D imaging, in: Proceedings of the IEEE ISSCC, 2004, pp. 120–121.
[14] J. Massa, G. Buller, A. Walker, G. Smith, S. Cova, M. Umasuthan, A.M. Wallace, Optical design and evaluation of a three-dimensional imaging and ranging system based on time-correlated single-photon counting, Appl. Opt. 6 (2002) 1063.
[15] C. Niclass, A. Rochas, P.A. Besse, E. Charbon, Toward a 3D camera based on single photon avalanche diodes, IEEE J. Selected Top. Quant. Electron. 10 (4) (2004) 796–802.
[16] A. Rochas, Single Photon Avalanche Diodes in CMOS Technology, PhD Thesis, EPFL, 2003.
[17] R.H. Haitz, Studies on optical coupling between silicon p–n junctions, Solid State Electron. 8 (1965) 417–425.
[18] A. Lacaita, M. Ghioni, S. Cova, Double epitaxy improves single-photon avalanche diode performance, Electron. Lett. 25 (13) (1989) 841–843.
[19] B.F. Aull, A.H. Loomis, D.J. Young, R.M. Heinrichs, B.J. Felton, P.J. Daniels, D.J. Landers, Geiger-mode avalanche photodiodes for three-dimensional imaging, Lincoln Lab. J. 13 (2) (2002) 335–350.
[20] S. Cova, M. Ghioni, A. Lacaita, C. Samori, F. Zappa, Avalanche photodiodes and quenching circuits for single-photon detection, Appl. Opt. 35 (12) (1996) 1956–1976.
[21] S. Tisa, F. Zappa, I. Labanca, On-chip detection and counting of single photons, in: Proceedings of the IEEE International Electron Device Meeting, 2005, pp. 833–836.
[22] C. Niclass, M. Sergio, E. Charbon, A single photon avalanche diode array fabricated in deep-submicron CMOS technology, in: Proceedings of the Design, Automation and Test in Europe Conference (DATE), 2006.

Biographies

Cristiano Niclass received the MSc degree in microtechnology with emphasis in applied photonics from EPFL in 2003. During his master's degree, he designed a novel electronic payment device that led to the creation of a new company in Switzerland. In May 2003, he joined the Processor Architecture Laboratory (LAP) of EPFL and subsequently the Quantum Architecture Group (AQUA), where he is working toward the PhD degree. His interests include high-speed and low-noise digital and mixed-mode application-specific integrated circuits with emphasis on high-performance imaging. He is currently in charge of the design, implementation, and evaluation of fully integrated two- and three-dimensional CMOS image sensors based on single photon avalanche diodes. He is also involved in the design of time discrimination devices with picosecond resolution implemented in conventional technologies. Mr. Niclass is a member of the Institute of Electrical and Electronics Engineers (IEEE).

Alexis Rochas received a Master's degree in physics from the University Lyon 1, France, in 1995. In 1997, he received the Postgraduate Diploma in control sciences and technologies from the Institute of Vision Engineering in Saint-Etienne, France. He received the PhD degree in September 2003. As a Research Engineer, he worked for Crismatec, a Saint-Gobain subsidiary (France). In 1998, he joined the Institute of Microsystems of EPFL as a Research Assistant, working since 2000 in the field of single photon avalanche diodes (SPADs) fabricated in standard CMOS technology. He also developed dedicated single photon detectors in collaboration with Siemens Building Technologies for flame detection applications, with Gnothis SA, Lausanne, Switzerland, for single molecule detection by fluorescence correlation spectroscopy, and with idQuantique, Geneva, Switzerland, for random number generation.
Since July 2004, he has been responsible for the development of single photon detectors at idQuantique. He has published over 30 journal and conference papers. He received the 2004 scientific Omega prize for his work in the field of silicon-based single photon detectors.

Pierre-André Besse was born in Sion, Switzerland, in 1961. He received the Diploma in physics and the PhD degree in semiconductor optical amplifiers from ETH Zürich, in 1986 and 1992, respectively. In 1986, he joined the group of micro- and optoelectronics of the Institute of Quantum Electronics at ETH Zürich, where he was engaged in research in optical telecommunication science. He worked on the theory, modeling, characterization, and fabrication of compound semiconductor devices. In August 1994, he joined the Institute of Microsystems at EPFL as a Senior Assistant, where his research activities include sensor and actuator microsystems. His major fields of interest are physical principles and new phenomena for optical, magnetic, and inductive sensors. He has authored or co-authored over 100 scientific papers and conference contributions.

Radivoje Popovic obtained the Dipl. Ing. degree in applied physics from the University of Beograd, Yugoslavia, in 1969, and the MSc and Dr. Sc. degrees in electronics from the University of Nis, Yugoslavia, in 1974 and 1978, respectively. From 1969 to 1981 he was with Elektronska Industrija Corp., Nis, Yugoslavia, where he worked on research and development of semiconductor devices and later became head of the company's CMOS department. From 1982 to 1993 he worked for Landis & Gyr Corp., Central R&D, Zug, Switzerland, in the field of semiconductor sensors, interface electronics, and microsystems. There he was responsible for research in semiconductor device physics (1983–1985) and for microtechnology R&D (1986–1990), and he was appointed vice president (Central R&D) in 1991. In 1994, he joined the Swiss Federal Institute of Technology at Lausanne (EPFL) as professor for microtechnology systems. His current research interests include sensors for magnetic, optical, and mechanical signals, the corresponding microsystems, the physics of submicron devices, and noise phenomena.

Edoardo Charbon received the Diploma from ETH-Zürich, the MS from UCSD, and the PhD from UC-Berkeley, all in EECS, in 1988, 1991, and 1995, respectively. From 1995 to 2000, he was with Cadence Design Systems, where he was the architect of the company's initiative for intellectual property protection. In 2000, he joined Canesta Inc. as its Chief Architect, leading the development of wireless 3D CMOS image sensors. Since November 2002, he has been a member of the Faculty of EPFL, where he founded the Quantum Architecture Group (AQUA), a research group devoted to the field of high-accuracy imaging and ultra low-power wireless embedded systems. He has consulted for numerous organizations, including Texas Instruments, Hewlett-Packard, Cadence, and the Carlyle Group. He has authored or co-authored over 70 articles in technical journals and conference proceedings and two books, and he holds seven patents. His research interests include 3D micro-imaging, bio-imaging, integrated optical communications, intellectual property protection, substrate modeling and characterization, and micromachined sensor design.
He serves as an associate and guest editor of the Transactions on CAD of Integrated Circuits and Systems, the Journal of Solid-State Circuits, and the Transactions on Circuits and Systems. He is a member of the IEEE and serves on the Technical Committees of VLSI-SOC and ESSCIRC.