Seminar Report
CHAPTER 1
INTRODUCTION
1.1 Introduction
In recent years, the demand for optical 3D imaging sensors has become increasingly relevant, and has led to the development of instruments that are now commercially available. During the 1970s and 1980s, their development was mainly performed in research laboratories, and was aimed at designing new techniques exploiting the use of light beams (both coherent and incoherent) instead of contact probes, in view of their application in the mechanical manufacturing industry for measurement and quality control. Novel measurement principles were proposed, and suitable prototypes were developed and characterized to prove their performance.
In parallel, we participated in a considerable effort directed towards the miniaturization and integration of optical light sources, detectors and components into the electronic equipment and the mechanical structure of the sensors. In the last decade, the availability of techniques and components has led to the production of a wide spectrum of commercially available devices, with measurement resolutions from a few nanometers to fractions of a meter, and ranges from microns to a few kilometers. In recent times, the trend has been to produce devices at decreased cost and with increased ruggedness and portability.
The problem of manipulating, editing, and storing the measured data was approached at the software level, by developing very powerful suites of programs that import and manipulate the data files, and output them in popular, well-standardized formats, such as the DXF and IGES formats for CAD applications, the STL format for rapid prototyping machines, and the VRML and 3D formats for visualization.
As a result, the interest in the use of 3D imaging sensors has increased. In the mechanical and manufacturing industry, surface quality control, micro-profiling and macro-profiling are often carried out on a contactless, optical basis. Optical probes and contact probes are used in combination whenever both the accuracy of the measurement and the efficiency of the process are critical.
It is significant that, today, one of the input requirements in the design of Coordinate Measuring Machines (CMMs) is the possibility of mounting both optical 3D measurement sensors and contact probes.
In addition, 3D imaging sensors are becoming of interest in combination with two-dimensional (2D) vision sensors, especially in robotic applications, for collision avoidance, and to solve removing-from-heap and assembling problems. As a result, those companies traditionally focused on the development of 2D vision systems are now enlarging their product spectrum by including 3D range sensors.
The use of optical depth sensors has gone beyond the mechanical field for which they were originally intended. Examples of application fields are geology, civil engineering, archaeology, reverse engineering, medicine, and virtual reality.
The aim of this paper is to briefly review the state of the art concerning 3D sensing techniques and devices, and to present their applications in industry, cultural heritage, medicine and forensics. It focuses in part on the research carried out at our Laboratory, and briefly describes related approaches developed by other groups.
The paper is structured as follows: Section 2 overviews 3D imaging techniques, and evaluates the different approaches to highlight which method can be used and what the main applications are. Section 3 gives an insight into the results achieved in the mentioned applications. As far as the industrial field is concerned, the topics of surface quality control, dimensional measurement and reverse engineering are covered. Significant results in the cultural heritage field, achieved both by us and by other research groups, are shown.
The approaches developed in the medical field are briefly reviewed, and the postmortem analysis of
lesions is cited as an example of the latest applications in this area. The last section is dedicated to the
use of 3D imaging techniques for the accurate, fast and non-invasive measurement and modeling of
crime scenes.
energy. The most important example of transmission gauging is industrial computed tomography (CT), which uses high-energy X-rays and measures the radiation transmitted through the object.
Reflection sensors for shape acquisition can be subdivided into non-optical and optical sensing. Non-optical sensing includes acoustic sensors (ultrasonic, seismic), electromagnetic sensors (infrared, ultraviolet, microwave radar, etc.) and others. These techniques typically measure distances to objects by measuring the time required for a pulse of sound or microwave energy to bounce back from an object.
In reflection optical sensing, light carries the measurement information. There is a remarkable variety of 3D optical techniques, and their classification is not unique. In this section, we mention the more prominent approaches, and we classify them as shown in Table 1.
Table 1. Classification of the optical 3D imaging techniques.

Technique                      Active/Passive    Direct/Indirect   Output                Principle
Laser triangulators            Active            Direct            Range                 Triangulation
Structured light               Active            Direct            Range                 Triangulation
Stereo vision                  Passive           Direct            Range                 Triangulation
Photogrammetry                 Passive           Direct            Range                 Triangulation
Time of flight                 Active            Direct            Range                 Time delay
Interferometry                 Active            Direct            Range                 Time delay
Moiré fringe range contours    Active            Direct            Range                 Triangulation
Shape from focusing            Active/Passive    Indirect          Range                 Monocular images
Shape from shadows             Passive           Indirect          Range                 Monocular images
Texture gradients              Passive           Indirect          Surface orientation   Monocular images
Shape from shading             Passive           Indirect          Surface orientation   Monocular images
Shape from photometry          Active            Indirect          Surface orientation   Monocular images
In the passive form, the reflectance of the object and the illumination of the scene are used to derive the shape information: no active device is necessary. In the active form, suitable light sources are used as the internal vector of information. A distinction is also made between direct and indirect measurements. Direct techniques result in range data, i.e., in a set of distances between the unknown surface and the range sensor. Indirect measurements are inferred from monocular images and from prior knowledge of the target properties. They result either in range data or in surface orientation.
Excellent reviews of optical methods and range sensors for 3D measurement are presented in the references, where a review of 20 years of development in the field of 3D laser imaging, with emphasis on commercial systems available in 2004, is also given. Comprehensive summaries of earlier techniques and systems are provided in other publications. A survey of 3D imaging techniques is also given in the literature, in the wider context of both contact and contactless 3D measurement techniques for the reverse engineering of shapes.
Fig. 1.1 Schematic of the triangulation principle (laser source, object points A and S, baseline d, projection centre OC, and image plane with image point S(i, j)).
The laser source generates a narrow beam, impinging on the object at point S (single-point triangulators). The backscattered beam is imaged at point S' on the image plane. The measurement of
the location (iS, jS) of image point S' defines the line of sight S'OC and, by means of simple geometry, yields the position of S. The measurement of the surface is achieved by scanning. In a conventional triangulation configuration, a compromise is necessary between the Field of View (FOV) and the measurement resolution.
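As a worked illustration of the principle, the following minimal Python sketch recovers the depth of the laser spot from its image location. The geometry (laser firing along the z-axis, camera at baseline b with its optical axis parallel to the beam) and all names and values are illustrative assumptions, not the model of any specific sensor.

    import numpy as np

    # Minimal single-point laser triangulation sketch (illustrative).
    # Assumed geometry: the laser fires along the z-axis from the origin;
    # the camera centre sits at (b, 0, 0) with its optical axis parallel
    # to the beam. The spot S = (0, 0, z) then images at u = -f * b / z
    # pixels from the principal point, so z = -f * b / u.

    def depth_from_spot(u_px: float, f_px: float, baseline_m: float) -> float:
        """Recover depth z from the spot position u_px on the sensor.

        u_px       -- horizontal image offset of the laser spot (pixels)
        f_px       -- focal length expressed in pixels
        baseline_m -- distance between laser and camera centre (metres)
        """
        if u_px == 0.0:
            raise ValueError("spot at the principal point: depth unbounded")
        return -f_px * baseline_m / u_px

    # Example: f = 1000 px, baseline 0.1 m, spot observed at u = -50 px.
    z = depth_from_spot(-50.0, 1000.0, 0.1)
    print(f"depth = {z:.3f} m")   # depth = 2.000 m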
Laser stripes exploit the optical triangulation principle shown in Figure 1.1. However, in this case, the laser is equipped with a cylindrical lens, which expands the light beam along one direction. Hence, a plane of light is generated, and multiple points of the object are illuminated at the same time. In the figure, the light plane is denoted by S, and the illuminated points belong to the intersection between the plane and the unknown object (line AB).
The measurement of the location of all the image points from A to B at the image plane allows the determination of the 3D shape of the object in correspondence with the illuminated points. For dense reconstruction, the plane of light must scan the scene. An interesting enhancement of the basic principle exploits the Bi-Iris principle: a laser source produces two stripes by passing through the Bi-Iris component (a mask with two apertures).
The measurement of the point location at the image plane can then be carried out on a differential basis, which decreases the dependence on the object reflectivity and on laser speckle.
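To make the stripe measurement concrete, the sketch below intersects a camera viewing ray with a calibrated light plane, which is how each illuminated pixel yields one 3D point. The plane parameters, intrinsics and numerical values are illustrative assumptions.

    import numpy as np

    # Hedged sketch: a 3D point on the laser stripe is the intersection
    # of the viewing ray of an illuminated pixel with the light plane
    # n . X = c, both expressed in the camera frame.

    def stripe_point(pixel, f_px, c_px, plane_n, plane_c):
        """pixel   -- (u, v) image coordinates of an illuminated point
        f_px    -- focal length in pixels
        c_px    -- (cx, cy) principal point
        plane_n -- unit normal of the light plane (camera frame)
        plane_c -- plane offset, so that plane_n . X = plane_c"""
        u, v = pixel
        ray = np.array([(u - c_px[0]) / f_px, (v - c_px[1]) / f_px, 1.0])
        t = plane_c / np.dot(plane_n, ray)   # X = t * ray lies on the plane
        return t * ray

    # Example: plane z = 1 (normal (0, 0, 1), offset 1), pixel near centre.
    p = stripe_point((660.0, 360.0), 1000.0, (640.0, 360.0),
                     np.array([0.0, 0.0, 1.0]), 1.0)
    print(p)   # [0.02 0.   1.  ]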
One of the most significant advantages of laser triangulators is their accuracy, and their relative insensitivity to illumination conditions and surface texture effects. Single-point laser triangulators are widely used in the industrial field for the
measurement of distances, diameters and thicknesses, as well as in surface quality control applications. Laser stripes are becoming increasingly popular in applications such as reverse engineering and the modeling of heritage. Examples are the Vivid 910 system (Konica Minolta, Inc.), the sensors of the SmartRay series (SmartRay GmbH, Germany) and the ShapeGrabber systems (ShapeGrabber Inc., CA, USA).
3D sensors are also designed for integration with robot arms, to implement pick-and-place operations in de-palletizing stations. An example is the Ranger system (Sick, Inc.). The NextEngine system (NextEngine, Inc., CA, USA) and the TriAngles scanner (ComTronics, Inc., USA) deserve particular attention, since they show how the field is progressing. These two systems, in fact, break the price barrier of laser-based systems, which is a limiting factor in their diffusion.
No further equipment (e.g., specific light sources) and no special projections are required. The remarkable advantages of the stereo approach are its simplicity and low cost; the major problem is the identification of common points within the image pairs, i.e., the solution of the well-known correspondence problem. Moreover, the quality of the shape extraction depends on the sharpness of the surface texture (affected by variations in surface reflectance). Stereo vision has significant applications in robotics and in computer vision, where the essence of the problem is not the accurate acquisition of high-quality data but, rather, their interpretation (for example, for motion planning, collision avoidance and grasping applications).
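As a concrete illustration of the correspondence problem, the following minimal sketch matches blocks along epipolar scanlines using a sum-of-squared-differences criterion. It is an illustrative toy, not an industrial stereo pipeline, and assumes rectified grayscale images supplied as NumPy arrays.

    import numpy as np

    # Minimal SSD block matching: for each block of the left image, the
    # best match is searched along the same scanline of the right image;
    # depth then follows from z = f * b / disparity.

    def disparity_ssd(left, right, block=5, max_disp=32):
        h, w = left.shape
        half = block // 2
        disp = np.zeros((h, w), dtype=np.float32)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                patch = left[y-half:y+half+1, x-half:x+half+1]
                costs = [np.sum((patch - right[y-half:y+half+1,
                                               x-d-half:x-d+half+1])**2)
                         for d in range(max_disp)]
                disp[y, x] = np.argmin(costs)   # disparity with lowest SSD
        return disp

    # Depth (same units as the baseline b): z = f * b / d, for d > 0.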
A variant of the stereo vision approach implies the use of multiple images from a single camera, which captures the object in a sequence of images from different views (shape from video). A basic requirement for this technique is that the scene does not contain moving parts. 3D urban modeling and object detection are typical applications.
Stereo vision based on the detection of the object silhouette has recently become very popular. The stereo approach is exploited, but the 3D geometry of the object is retrieved by using the object contours under different viewing directions. One of the most prominent applications of this method is the modeling of 3D shapes for heritage, especially when the texture information is captured together with the range. Among the devices on the market, the Prompt Stereo system (Solving 3D GmbH, Germany) shows high performance and simplicity of use.
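A minimal sketch of the silhouette idea follows: a voxel belongs to the reconstructed volume (the visual hull) only if it projects inside the silhouette in every calibrated view. The project callables and the silhouette masks are assumed inputs from a calibrated multi-view setup; all names are illustrative.

    import numpy as np

    def carve(voxels, views):
        """voxels: (N, 3) array of candidate 3D points.
        views: list of (project, mask) pairs, where project maps (N, 3)
        points to (N, 2) pixel coordinates and mask is a boolean image."""
        keep = np.ones(len(voxels), dtype=bool)
        for project, mask in views:
            uv = project(voxels).astype(int)          # (N, 2) pixel coords
            h, w = mask.shape
            inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & \
                     (uv[:, 1] >= 0) & (uv[:, 1] < h)
            hit = np.zeros(len(voxels), dtype=bool)
            hit[inside] = mask[uv[inside, 1], uv[inside, 0]]
            keep &= hit                    # must be inside every silhouette
        return voxels[keep]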
The measurement of image points can be performed by means of automatic or semi-automatic procedures. In the first case, very dense point clouds are obtained, even if at the expense of inaccuracy and missing parts. In the second case, very accurate measurements are obtained over a significantly smaller number of points, and at increased elaboration and operator times.
Typical applications of photogrammetry, besides aerial photogrammetry, are in close-range photogrammetry, for city modeling, medical imaging and heritage. Reliable packages are now commercially available for complete scene modeling or for sensor calibration and orientation. Examples are the Australis software (Photometrix, Australia), PhotoModeler (Eos Systems, Inc., Canada), the Leica Photogrammetry Suite (Leica Geosystems GIS & Mapping), and Menci (Menci Software srl, Italy).
Both stereo vision and photogrammetry share the objective of obtaining 3D models from images and are currently used in computer vision applications. The methods rely on the use of camera calibration techniques. The elaboration chain is basically the same, even if linear models for camera calibration and simplified point measurement procedures are used in computer vision applications. In fact, in computer vision the automation level of the overall process is more important than the accuracy of the models.
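For reference, the sketch below shows the pinhole projection model whose parameters calibration estimates: the intrinsics K and the extrinsics R, t map a world point to pixel coordinates. The numerical values are illustrative assumptions.

    import numpy as np

    # Pinhole model behind camera calibration: x ~ K [R | t] X.
    K = np.array([[1000.0, 0.0, 640.0],     # fx, skew, cx
                  [0.0, 1000.0, 360.0],     # fy, cy
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                           # camera aligned with the world
    t = np.array([0.0, 0.0, 0.0])

    def project(X):
        """Project a 3D world point X (3,) to pixel coordinates (2,)."""
        x_cam = R @ X + t                   # world -> camera frame
        x_img = K @ x_cam                   # camera -> homogeneous pixels
        return x_img[:2] / x_img[2]         # perspective division

    print(project(np.array([0.1, 0.0, 2.0])))   # [690. 360.]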
In recent times, the photogrammetric and computer vision communities have proposed combined procedures for increasing both the measurement performance and the automation levels. Examples are the combined use of laser scanning and photogrammetry, and the use of stereo vision and photogrammetry for facade reconstruction and object tracking. On the market, combinations of structured light projection with photogrammetry are available (ATOS II scanner, GOM GmbH, Germany) for improved measurement performance.
Surface range measurement can be made directly using the radar time-of-flight principle. The emitter unit generates a laser pulse, which impinges onto the target surface. A receiver detects the reflected pulse, and suitable electronics measure the round-trip travel time of the returning signal and its intensity. Single-point sensors perform point-to-point distance measurement, and scanning devices are combined with the optical head to cover large bi-dimensional scenes. Long-range sensors allow measurement ranges from 15 m to 100 m (reflective markers must be put on the target surfaces).
Medium-range sensors can acquire 3D data over shorter ranges. The measurement resolution varies with the range. For large measuring ranges, time-of-flight sensors give excellent results. On the other hand, for smaller objects, about one meter in size, attaining 1 part per 1,000 accuracy with time-of-flight radar requires very high-speed timing circuitry, because the time differences are in the picosecond range. A few amplitude- and frequency-modulated radars have shown promise for close-range distance measurement.
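The picosecond figure quoted above follows directly from the round-trip relation z = c t / 2, as this small check shows.

    # A round-trip pulse covers 2z, so z = c * t / 2 and a depth error dz
    # maps to a timing error dt = 2 * dz / c.

    C = 299_792_458.0          # speed of light, m/s

    def round_trip_time(z_m: float) -> float:
        return 2.0 * z_m / C

    # 1 part per 1,000 on a 1 m object means resolving dz = 1 mm:
    dt = round_trip_time(1e-3)
    print(f"{dt * 1e12:.2f} ps")   # ~6.67 ps -> picosecond-range circuitry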
In many applications, the technique is range-limited by the allowable power levels of laser radiation, determined by laser safety considerations. Additionally, time-of-flight sensors face difficulties with shiny surfaces, which reflect little back-scattered light energy except when oriented perpendicularly to the line of sight.
1.2.6 Interferometry
Interferometry methods operate by projecting a spatially or temporally varying periodic pattern onto a surface, followed by mixing the reflected light with a reference pattern. The reference pattern demodulates the signal to reveal the variation in surface geometry. The measurement resolution is very high, since it is a fraction of the wavelength of the laser radiation. For this reason, surface quality control and micro-profilometry are the most explored applications. The use of multiple wavelengths and of double heterodyne detection lengthens the non-ambiguity range.
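A worked example of how two wavelengths extend the non-ambiguity range: beating them produces a much longer synthetic wavelength, and the beat phase is unambiguous over that synthetic wavelength instead of the optical one. The wavelength values below are illustrative.

    # Synthetic wavelength of a two-wavelength interferometer:
    # Lambda = lambda1 * lambda2 / |lambda1 - lambda2|.

    def synthetic_wavelength(l1_nm: float, l2_nm: float) -> float:
        return l1_nm * l2_nm / abs(l1_nm - l2_nm)

    lam = synthetic_wavelength(632.8, 635.0)   # two illustrative red lines
    print(f"{lam / 1e3:.1f} um")               # ~182.6 um vs ~0.63 um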
Systems based on this principle are successfully used to calibrate CMMs. Holographic interferometry relies on mixing coherent illumination with different wave vectors. Holographic methods typically yield range accuracies of a fraction of the light wavelength over microscopic fields of view.
object to avoid difficulties in discerning surface texture. Most prior work in active depth from focus has yielded moderate accuracy, up to one part per 400 over the field of view. There is a direct trade-off between depth of view and field of view; satisfactory depth resolution is achieved at the expense of sub-sampling the scene, which in turn requires some form of mechanical scanning to acquire range measurements over the whole scene. Moreover, spatial resolution is non-uniform. Specifically, depth resolution is substantially less than the resolution perpendicular to the observation axis. Finally, objects not aligned perpendicularly to the optical axis and having a depth dimension greater than the depth of view will come into focus at different ranges, complicating the scene analysis and interpretation.
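A building block of this approach is a per-pixel sharpness score evaluated across a focus sweep; the Laplacian-variance measure below is one common choice, shown here as an illustrative sketch rather than the method of any specific sensor.

    import numpy as np

    def laplacian_sharpness(img: np.ndarray) -> float:
        """Focus score of a grayscale image patch: higher = sharper."""
        lap = (-4.0 * img[1:-1, 1:-1]
               + img[:-2, 1:-1] + img[2:, 1:-1]
               + img[1:-1, :-2] + img[1:-1, 2:])   # 4-neighbour Laplacian
        return float(np.var(lap))

    # Usage: score a stack of images taken at known focus settings and
    # keep, for each patch, the setting with the highest score; the lens
    # position of best focus indexes the depth of that patch.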
the target surface. A variant of this method implies the variation of the lighting conditions. A review of the algorithms able to extract the shape information, starting from the reflectance map of the image, is given in the references. The acquisition hardware is very simple and low cost. However, low accuracy is obtained, especially in the presence of external factors influencing the object reflectance, and the measurement process is rather complex.
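As a minimal sketch of the photometric idea, classical photometric stereo under a Lambertian assumption recovers a scaled surface normal from three or more images taken under known light directions; the directions and intensities below are illustrative.

    import numpy as np

    # With k >= 3 images under known light directions L (k x 3), each
    # pixel's intensities I (k,) satisfy I = L @ (albedo * n), so a
    # least-squares solve recovers the scaled surface normal.

    def normal_from_photometry(I: np.ndarray, L: np.ndarray):
        """I: (k,) intensities of one pixel; L: (k, 3) light directions."""
        g, *_ = np.linalg.lstsq(L, I, rcond=None)   # g = albedo * normal
        albedo = np.linalg.norm(g)
        return g / albedo, albedo

    # Example: a surface facing straight up, lit from three directions.
    L = np.array([[0.0, 0.0, 1.0],
                  [0.7, 0.0, 0.714],
                  [0.0, 0.7, 0.714]])
    n, rho = normal_from_photometry(L @ np.array([0.0, 0.0, 0.5]), L)
    print(n, rho)   # ~[0, 0, 1], ~0.5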
For these reasons, techniques that are computationally demanding (e.g., passive stereo vision) are now more reliable and efficient. The selection of the sensor type to be used to solve a given depth measurement problem is a very complex task that must consider (i) the measurement time, (ii) the budget, and (iii) the quality expected from the measurement.
In this respect, 3D imaging sensors may be affected by missing or bad-quality data. The reasons are related to the optical geometry of the system, the type of projection and/or acquisition optical device, the measurement technique and the characteristics of the target objects. The sensor performance may depend on the dimension, the shape, the texture, the temperature and the accessibility of the object.
Relevant factors that influence the choice are also the ruggedness, the portability and the adaptability of the sensor to the measurement problem, the ease of data handling, and the simplicity of use of the sensor.
The integration step has been widely studied by the computer graphics community, with the aim of developing efficient methods for the creation of meshes. Volumetric methods [64,65], mesh stitching [66,67], region-growing techniques [68,69], and sculpting-based methods [70] are of utmost importance in this field.
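As an illustration of the volumetric family cited above, the sketch below performs the core update of truncated signed distance function (TSDF) fusion: each range observation is averaged into a voxel grid that is later polygonized (e.g., by marching cubes) into a single mesh. It is a schematic sketch of the idea, not the algorithm of [64,65] verbatim.

    import numpy as np

    def tsdf_update(tsdf, weight, signed_dist, trunc=0.01):
        """Fuse one range observation into the volume (arrays same shape).

        signed_dist -- distance from each voxel to the observed surface
                       along the sensor ray (positive in front of it)
        """
        d = np.clip(signed_dist / trunc, -1.0, 1.0)   # truncate to [-1, 1]
        new_w = weight + 1.0
        tsdf[:] = (tsdf * weight + d) / new_w          # running average
        weight[:] = new_w
        return tsdf, weight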
Table 2. Comparison of optical range imaging techniques.

Laser triangulators
  Strength: relative simplicity; performance generally independent of ambient light; high data acquisition rate.
  Weakness: limited range and measurement volume; missing data in correspondence with shadows; cost.

Structured light
  Strength: high data acquisition rate; intermediate cost.
  Weakness: performance generally dependent on ambient light; limited, middle-complex measurement volume.

Stereo vision
  Strength: simple and inexpensive; high accuracy on well-defined targets.
  Weakness: computationally demanding; sparse data covering.

Photogrammetry
  Strength: high accuracy; simple and inexpensive.
  Weakness: low data acquisition rate.

Time-of-Flight
  Strength: medium to large measurement volume; performance generally independent of ambient light.
  Weakness: cost; accuracy is inferior to that of triangulation at close range.

Interferometry
  Strength: sub-micron accuracy.
  Weakness: measurement capability limited to quasi-flat surfaces and micro-ranges; cost.

Moiré fringe range contours
  Strength: simple and low cost.
  Weakness: short ranges; limited applicability in industrial environments.
The acquisition of reflectance data is required for the correct visualization of the 3D models. Typical applications are the construction of 3D models for virtual reality, animation, and cultural heritage. The research developed in this area has led to approaches based either on parametric reflectance models or on a set of colour images of the object. Excellent reviews of these methods are given in the references.
Examples are PolyWorks (InnovMetric, Inc., Canada).
It is worth noting that, in general, the operations are performed in a semi-automatic way, i.e., the operator's skill is crucial to optimize the various steps, and to keep under control the influence of the alignment errors and of the editing operations on the 3D models.
CHAPTER 2
SENSORS FOR 3D IMAGING
2.1 COLOUR 3D IMAGING TECHNOLOGY
Machine vision involves the analysis of the properties of the luminous flux reflected or radiated by objects. To recover the geometrical structure of these objects, either to recognize them or to measure their dimensions, two basic vision strategies are available.
Passive vision attempts to analyze the structure of the scene under ambient light. Stereoscopic vision is a passive optical technique. The basic idea is that two or more digital images are taken from known locations. The images are then processed to find the correlations between them. As soon as matching points are identified, the geometry can be computed.
Active vision attempts to reduce the ambiguity of scene analysis by structuring the way in which the images are formed. Sensors that capitalize on active vision can resolve most of the ambiguities found with two-dimensional imaging systems. Lidar-based or triangulation-based laser range cameras are examples of the active vision technique. One digital 3D imaging system based on optical triangulation was developed and demonstrated.
2.2 AUTO SYNCHRONIZED SCANNER
The auto-synchronized scanner, depicted schematically on Figure 1, can
provide registered range and colour data of visible surfaces. A 3D surface map is
captured by scanning a laser spot onto a scene, collecting the reflected laser light, and
finally focusing the beam onto a linear laser spot sensor. Geometric and photometric corrections of the raw data give two images in perfect registration: one with x, y, z coordinates and a second with reflectance data. A laser beam composed of multiple visible wavelengths is used for the purpose of measuring the colour map of the scene (reflectance map).
CHAPTER 3
LASER SENSORS FOR 3D IMAGING
The state of the art in laser spot position sensing methods can be divided into two broad classes, according to the way the spot position is sensed. Among those, one finds continuous response position sensitive detectors (CRPSD) and discrete response position sensitive detectors (DRPSD).
A CRPSD provides the centroid of the light distribution with a very fast response time (of the order of 10 MHz). Theory predicts that a CRPSD provides a very precise measurement of the centroid compared with a DRPSD. By precision, we mean measurement uncertainty. It depends, among other things, on the signal-to-noise ratio and the quantization noise. In practice, precision is important, but accuracy is even more important. A CRPSD is in fact a good estimator of the central location of a light distribution.
Only a small portion of the total array contributes to the peak measurement. Once the pertinent light distribution (after windowing around an estimate of the peak) is available, one can compute the location of the desired peak very accurately.
Fig. 3.3 In a monochrome system, the incoming beam is split into two components.
The white zero-order component is directed to the DRPSD, while the RGB first-order components are directed onto three CRPSD, which are used for colour detection. The CRPSDs are also used to find the centroid of the light distribution impinging on them and to estimate the total light intensity. The centroid is computed on chip with the well-known current-ratio method, i.e., (I1 - I2)/(I1 + I2), where I1 and I2 are the currents generated by that type of sensor.
The weighted centroid value is fed to a control unit that selects a sub-set (window) of contiguous photo-detectors on the DRPSD. That sub-set is located around the estimate of the centroid supplied by the CRPSD. Then the best algorithms for peak extraction can be applied to the portion of interest.
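The two-stage idea can be sketched in a few lines: the current-ratio centroid from the CRPSD picks the window, and the peak is then sought only inside that window of the DRPSD. The array sizes and current values below are illustrative assumptions, not parameters of the actual chip.

    import numpy as np

    def crpsd_centroid(i1: float, i2: float, length: float) -> float:
        """Spot position on a lateral-effect detector of given length:
        (I1 - I2) / (I1 + I2) spans [-1, 1] across the active area."""
        return 0.5 * length * (i1 - i2) / (i1 + i2)

    def drpsd_peak(samples: np.ndarray, center: int, half_width: int) -> int:
        """Read only a window of the array around `center` and return
        the index of the strongest response inside it."""
        lo = max(0, center - half_width)
        hi = min(len(samples), center + half_width + 1)
        return lo + int(np.argmax(samples[lo:hi]))

    # Example: currents 0.6/0.4 on a 32-element array place the window
    # near pixel 19; the exact peak is then found inside that window only.
    pos = crpsd_centroid(0.6, 0.4, length=32.0)     # +3.2 from the centre
    print(drpsd_peak(np.random.rand(32), int(16 + pos), half_width=8))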
CHAPTER 4
PROPOSED SENSOR COLORANGE
Many devices have been built or considered in the past for measuring the position of a light beam. Among those, one finds continuous response position sensitive detectors (CRPSD) and discrete response position sensitive detectors (DRPSD). The CRPSD category includes lateral-effect photodiodes and geometrically shaped photodiodes (wedges or segmented).
DRPSD, on the other hand, comprise detectors such as Charge Coupled Devices (CCD) and arrays of photodiodes equipped with a multiplexer for sequential reading. One cannot achieve the high measurement speed found with continuous response position sensitive detectors while keeping the same accuracy as with discrete response position sensitive detectors.
Fig. 4.1 A typical situation where stray light blurs the measurement of the real but much narrower peak.
In Figure 1, we report the two basic types of devices that have been proposed in the literature. A CRPSD provides the centroid of the light distribution with a very fast response time (of the order of 10 MHz) [20]. On the other hand, DRPSD are slower
because all the photo-detectors have to be read sequentially prior to the measurement of
the location of the real peak of the light distribution.
People try to cope with the detector that they have chosen. In other words, they pick a detector for a particular application and accept the trade-offs inherent in their choice of a position sensor. Consider the situation depicted in Figure 2: a CRPSD would provide A as an answer, but a DRPSD can provide B, the desired response. This situation occurs frequently in real applications.
The elimination of all stray light in an optical system requires sophisticated techniques that increase the cost of a system. Also, in some applications, background illumination cannot be completely eliminated, even with optical light filters. We propose to use the best of both worlds. Theory predicts that a CRPSD provides a very precise measurement of the centroid compared with a DRPSD (because of higher spatial resolution). By precision, we mean measurement uncertainty. It depends, among other things, on the signal-to-noise ratio, the quantization noise, and the sampling noise. In practice, precision is important, but accuracy is even more important. A CRPSD is in fact a good estimator of the central location of a light distribution. Accuracy is understood as the closeness to the true value of a quantity. DRPSD are very accurate (because of the knowledge of the full light distribution).
Their slow speed is due to the fact that all the pixels of the array have to be read, but they don't all contribute to the computation of the peak. In fact, only a small portion of the array is required for the measurement of the light distribution peak.
4.1 ARCHITECTURE
An object is illuminated by a collimated RGB laser spot, and a portion of the reflected radiation, upon entering the system, is split into four components by a diffracting optical element, as shown in Figure 4b. The white zero-order component is directed to the DRPSD, while the RGB first-order components are directed onto three CRPSD,
which are used for colour detection. The CRPSDs are also used to find the centroid of the light distribution impinging on them and to estimate the total light intensity. The centroid is computed on chip with the well-known current-ratio method, i.e., (I1 - I2)/(I1 + I2), where I1 and I2 are the currents generated by that type of sensor. The weighted centroid value is fed to a control unit that selects a sub-set (window) of contiguous photo-detectors on the DRPSD. That sub-set is located around the estimate of the centroid supplied by the CRPSD. Then, the best algorithms for peak extraction can be applied to the portion of interest.
The figure shows the architecture and preliminary experimental results of a first prototype chip of a DRPSD with a selectable readout window. This is the first block of a more complex chip that will include all the components.
circuit (CDS). To span 12 bits of dynamic range, the integrating capacitor can assume five different values. In the prototype chip, the proper integrating capacitor value is externally selected by means of external switches C0-C4. In the final sensor, however, the proper value will be automatically set by on-chip circuitry on the basis of the total light intensity as calculated by the CRPSDs.
During normal operation, all 32 pixels are first reset to their bias value and then left to integrate the light for a period of 1 ms. Within this time, the CRPSDs and an external processing unit estimate both the spot position and its total intensity, and those parameters are fed to the window selection logic. After that, 16 contiguous pixels, as addressed by the window selection logic, are read out in 5 ms, for a total frame time of 6 ms. Future sensors will operate at full speed, i.e., an order of magnitude faster. The window selection logic, LOGIC_A, receives the address of the central pixel of the 16 pixels and calculates the addresses of the starting and ending pixels, as sketched below. The analog value at the output of each CA within the addressed window is sequentially put on the bit line by a decoding logic, DECODER, and read by the video amplifier. LOGIC_A also generates synchronization and end-of-frame signals, which are used by the external processing units. LOGIC_B, instead, is devoted to the generation of the logic signals that drive both the CA and the CDS blocks. To add flexibility, the integration time can also be changed by means of the external switches.
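A hedged software sketch of the address computation attributed to LOGIC_A follows; the clamping behaviour at the array edges is an assumption, since the report does not describe it.

    # Window addressing for the 32-pixel prototype: from the central
    # pixel estimated by the CRPSDs, derive the start and end addresses
    # of the 16-pixel readout window, kept inside the array.

    ARRAY_SIZE = 32
    WINDOW = 16

    def window_addresses(central_pixel: int):
        start = central_pixel - WINDOW // 2
        start = max(0, min(start, ARRAY_SIZE - WINDOW))   # clamp (assumed)
        return start, start + WINDOW - 1                  # inclusive range

    print(window_addresses(5))    # (0, 15)  near the left edge
    print(window_addresses(20))   # (12, 27) centred on pixel 20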
CHAPTER 5
FEATURES
5.1 ADVANTAGES
5.2 DISADVANTAGES
5.3 APPLICATIONS
Multi-resolution, random-access laser scanners for fast search and tracking of 3D features.
CHAPTER 6
CONCLUSION
6.1 CONCLUSION
The results obtained so far have shown that optical sensors have reached a high level of development and reliability, and that they are suited for high-accuracy 3D vision systems. The availability of standard fabrication technologies and the acquired know-how in design techniques allow the implementation of application-specific optical sensors: Opto-ASICs.
The trend shows that the use of low-cost CMOS technology leads to competitive optical sensors. Furthermore, post-processing modules, such as anti-reflective coating film deposition and RGB filter deposition to enhance sensitivity and to enable colour sensing, are at the final certification stage and will soon be available in standard fabrication technologies. The work on the Colorange is being finalized, and work has started on a new, improved architecture.
REFERENCES
[1] ..., "... optimized for 3D digitization," IEEE Transactions on ....
[3] ....
[4] ....
[5] X. Arreguit, F.A. van Schaik, F.V. Bauduin, M. Bidiville, ....
[7] ....
[8] ....