This paper describes a CMOS vision sensor which is inspired by biological visual systems. Each pixel independently and in continuous time quantizes local relative intensity changes to generate spike events. These events appear at the output of the sensor as an asynchronous stream of digital pixel addresses. These address-events signify scene reflectance change and have sub-millisecond timing precision. The output data rate depends on the dynamic content of the scene and is typically orders of magnitude lower than those of conventional frame-based imagers. By combining an active front-end logarithmic photoreceptor running in continuous time with a self-timed switched-capacitor differencing circuit, the sensor achieves an array mismatch of 2.1% in relative intensity event threshold and a pixel bandwidth of 3 kHz under 1 klux scene illumination. Dynamic range is >120 dB and chip power consumption is 23 mW. Event latency shows weak light dependency and decreases to 15 µs at >1 klux ...
Outline: Dear reader, today you have probably been filmed by several electronic cameras. You were in the visual field of a video camera while you stood close to a cash machine, while you walked through a railway station or an airport, any time you entered a bank or large public building, while shopping in the supermarket, and in many public spaces. Electronic eyes have become very abundant in our environment in the last few years. But surveillance is just one of many application fields for electronic vision devices. Artificial vision is also used in industrial manufacture, in safety systems in industrial environments, for visual quality control and failure investigation, for visual stock control, for barcode reading, to control automated guided vehicles, etc. In these applications, human labor is replaced by an electronic camera paired with sophisticated computer vision software running on the computer to which the camera is attached. The complete system, comprising one ...
This paper describes an on-chip programmable bias current generator, intended for mixed-signal chips requiring a wide-ranging set of currents. The individual generators share a master current reference. A serial digital interface to the chip controls the biases by bits loaded into a 24-bit shift register. These bits control the steering of current from a current splitter. The summed current-splitter output is actively mirrored to a broadcast bias voltage. Measurements from an implementation in 0.35 µm 4M-2P CMOS show a total bias-current range of over 6 decades (>120 dB), from a few times the off-current up to the master reference current. For currents larger than the minimum, the generator has resolution spanning nearly its full 24-bit range (144 dB); e.g., for a master current of 10 µA, any bias current can be varied by as little as 0.5 pA, with the caveat that the code is not guaranteed monotonic. Each bias occupies an area of 0.026 mm², which is about 65% of the bondin...
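The bit-steered current splitter described above can be sketched numerically: each code bit (MSB first) steers a successively halved copy of the master current onto the summing node. The function below is an idealized model under that assumption — it ignores the off-current floor, device mismatch, and the non-monotonicity caveat mentioned in the abstract.

```python
def splitter_current(master_current, code, nbits=24):
    """Idealized binary current splitter: bit k (MSB first) steers a
    branch carrying master_current / 2**(k + 1) onto the summing node;
    the selected branches sum to the bias current."""
    bias = 0.0
    for k in range(nbits):
        bit = (code >> (nbits - 1 - k)) & 1
        if bit:
            bias += master_current / 2 ** (k + 1)
    return bias

# With a 10 uA master current, one LSB of a 24-bit splitter is
# 10e-6 / 2**24, roughly 0.6 pA -- the order of the 0.5 pA step
# size quoted in the abstract:
lsb = splitter_current(10e-6, 1)
```

The all-ones code gives just under the master current (master × (1 − 2⁻²⁴)), consistent with the stated range from near the off-current up to the master reference.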
What is the simplest solution for counting objects that cross a field of view? We reduce the problem to detecting localized illumination change on two spatially separated borders. Each circular border is formed from pixels that measure the magnitude of local changes in illumination relative to the global average. These local change signals are summed over each border and then amplified. Off-chip processing determines the movement direction by cross-correlating these signals with appropriate delays. We have implemented the crossing-detection functionality with low complexity and power consumption using 5.3 mm² on a standard 5 V 0.8 µm double-poly double-metal CMOS process. Crossing objects that cover 1/8 of the border can be detected with contrasts down to a few percent.
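The off-chip direction-finding step — cross-correlating the two summed border signals at a range of delays — can be sketched as below. The signal names and the mapping of lag sign to direction are illustrative assumptions, not details from the chip.

```python
def crossing_direction(outer, inner, max_lag=5):
    """Sketch of the off-chip processing: cross-correlate the summed
    change signals of the two borders and pick the lag with the largest
    correlation. A positive best lag means the outer border fired first,
    i.e. the object moved inward (naming is illustrative)."""
    def corr_at(lag):
        n = len(outer)
        return sum(outer[t] * inner[t + lag]
                   for t in range(n) if 0 <= t + lag < n)
    best = max(range(-max_lag, max_lag + 1), key=corr_at)
    return "inward" if best > 0 else "outward" if best < 0 else "ambiguous"

# A pulse hits the outer border two samples before the inner one:
outer = [0, 1, 0, 0, 0, 0]
inner = [0, 0, 0, 1, 0, 0]
print(crossing_direction(outer, inner))
```

Counting then amounts to counting correlation peaks of one sign, since each crossing contributes one matched pair of pulses.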
Animals far outperform current technology in reacting to visual stimuli with low processing requirements, demonstrating astonishingly fast reaction times to changes. Current real-time vision-based robotic control approaches, in contrast, typically require high computational resources to extract relevant information from sequences of images provided by a video camera. Most of the information contained in consecutive images is redundant, which often turns the vision processing algorithms into a limiting factor in high-speed robot control. As an example, robotic pole balancing with large objects is a well-known exercise in current robotics research, but balancing arbitrarily small poles (such as a pencil, which is too small for a human to balance) has not yet been achieved due to limitations in vision processing. At the Institute of Neuroinformatics we developed an analog silicon retina (http://siliconretina.ini.uzh.ch), which, in contrast to current video cameras, only reports indivi...
The 18th IEEE International Symposium on Consumer Electronics (ISCE 2014), 2014
This paper presents the design of a dynamic vision sensor for mobile applications. The sensor features a standby mode with less than 250 µW of power dissipation. The sensor switches between standby and normal operation automatically, depending on input activity. The power consumption in normal operation mode is typically 500 µW and is activity dependent. The sensor design is implemented in a 90 nm backside-illumination process with a pixel-array size of 1.44 mm × 1.44 mm.
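The activity-driven mode switching described above amounts to a small state machine: wake on an activity burst, fall back to standby after a run of quiet intervals. The sketch below models that policy; the threshold and timeout values are illustrative assumptions, not the sensor's actual parameters.

```python
def mode_schedule(event_counts, wake_thresh=10, sleep_after=3):
    """Sketch of activity-dependent mode control: enter normal mode when
    the per-interval event count exceeds wake_thresh, return to standby
    after sleep_after consecutive quiet intervals. Both parameters are
    illustrative, not taken from the paper."""
    mode, quiet, modes = "standby", 0, []
    for n in event_counts:
        if n > wake_thresh:
            mode, quiet = "normal", 0
        elif mode == "normal":
            quiet += 1
            if quiet >= sleep_after:
                mode = "standby"
        modes.append(mode)
    return modes

# One activity burst wakes the sensor; three quiet intervals put it back to sleep:
print(mode_schedule([0, 50, 0, 0, 0]))
```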
2006 IEEE International Solid State Circuits Conference - Digest of Technical Papers, 2006
A vision sensor responds to temporal contrast with asynchronous output. Each pixel independently and continuously quantizes changes in log intensity. The 128×128-pixel chip has a 120 dB illumination operating range and consumes 30 mW. Pixels respond in <100 µs at 1 klux scene illumination with <10% contrast-threshold FPN.
Papers by Patrick Lichtsteiner