Article

Out-of-Focus Projector Calibration Method with Distortion Correction on the Projection Plane in the Structured Light Three-Dimensional Measurement System

School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, Shaanxi, China
* Author to whom correspondence should be addressed.
Sensors 2017, 17(12), 2963; https://doi.org/10.3390/s17122963
Submission received: 17 October 2017 / Revised: 18 December 2017 / Accepted: 19 December 2017 / Published: 20 December 2017
(This article belongs to the Section Physical Sensors)

Abstract

The three-dimensional measurement system with a binary defocusing technique is widely applied in diverse fields. Its measurement accuracy is mainly determined by the calibration accuracy of the out-of-focus projector. In this paper, a high-precision out-of-focus projector calibration method based on distortion correction on the projection plane and a nonlinear optimization algorithm is proposed. To this end, the paper experimentally demonstrates the principle that a projector has noticeable distortions outside its focal plane. Based on this principle, the proposed method uses a high-order radial and tangential lens distortion representation on the projection plane to correct the calibration residuals caused by the projection distortion. The final, accurate parameters of the out-of-focus projector were obtained using a nonlinear optimization algorithm with good initial values, which were provided by coarsely calibrating the parameters of the out-of-focus projector on the focal and projection planes. Finally, the experimental results demonstrated that the proposed method can accurately calibrate an out-of-focus projector, regardless of the amount of defocusing.

1. Introduction

Optical three-dimensional (3D) shape measurement has been widely studied and applied due to its speed, accuracy, and flexibility. Examples of such applications include industrial inspection, reverse engineering, and medical diagnosis [1,2]. As shown in Figure 1, a typical structured light measurement system is composed of a camera and a projector. The projector projects a series of encoded fringe patterns onto the surface of the object, and the camera captures the patterns distorted by the depth variation of the surface. Finally, the 3D surface points are reconstructed based on triangulation, provided that the system parameters have been obtained through system calibration. Hence, one of the crucial aspects of this system is to accurately calibrate the camera and projector, as the measurement accuracy is ultimately limited by the calibration accuracy.
Camera calibration has been extensively studied, and a variety of approaches have been proposed. Several traditional camera calibration methods exist, including direct linear transformation, nonlinear camera calibration, two-step camera calibration, self-calibration, and Zhang's method [3,4,5,6,7]. Moreover, some advanced camera calibration methods building on the traditional ones have been proposed [8,9,10,11]. Qi et al. proposed a method that applies the stochastic parallel gradient descent (SPGD) algorithm to resolve the frequent-iteration and long-calibration-time deficiencies of the traditional two-step camera calibration method [8]. Kamel et al. used three additional objective functions to speed up the convergence of the nonlinear optimization in the two-step camera calibration method [9]. Huang et al. improved camera calibration accuracy by using an active phase target and statistically constrained bundle adjustment (SCBA) [10]. Jia et al. proposed an improved camera calibration method based on perpendicularity compensation to eliminate the effect of non-perpendicularity of the camera motion on calibration accuracy in a binocular stereo vision measurement system [11]. However, projector calibration is more complicated, since a projector is a programmable light source without a capturing function. Existing projector calibration methods can be classified into two categories: the phase-to-depth model and the inverse camera method. In the phase-to-depth model, the projector is calibrated by establishing the relationship between depth and phase value [12,13,14]; however, the fitting polynomial is always complicated. The inverse camera method is widely used for its small number of calibration parameters and low computational cost. In this method, the projector is treated as a device with the inverse optics of a camera, so camera calibration methods can be applied to the projector; this paper uses the inverse camera model to calibrate the projector. With this method, the world coordinates of the projection points are calculated by the calibrated camera, and the projection points are then used to calibrate the projector with the same approach as the camera calibration [15,16]. This technique is simple and convenient, but errors can be significant if the camera is not well calibrated. To solve the problem of error transmission from the camera to the projector, the virtual camera method was proposed to help the projector "see" the scene: the correspondence of the calibration points between pixels on the projector Digital Micro-mirror Device (DMD) and pixels on the camera Charge-Coupled Device (CCD) is established, and the projector is then calibrated with the same method as the camera, using the corresponding points on the projector image and the calibration board [17,18,19]. This approach does not depend on the accuracy of the camera calibration and can achieve higher accuracy, yet its process is complex. Additionally, some methods improve the calibration accuracy of the projector by improving the detection accuracy of the reference feature centers [20], or by correcting the error caused by lens distortion [21,22].
Recently, research on optical 3D measurement has focused on increasing the speed and accuracy of the measurement method. Lei and Zhang proposed the binary defocusing pattern projection technique, which projects defocused binary structured patterns instead of sinusoidal patterns for 3D measurement [23,24,25,26]. The binary defocusing technique has the advantages of high measuring speed, eliminating the measurement errors caused by the nonlinear gamma of the projector, and having no rigid requirement for precise synchronization [27]. However, calibrating a binary defocusing measurement system introduces many challenges because the projector is substantially defocused, and most well-established accurate calibration methods for structured light systems require the projector to be in focus. Two attempts have been made at calibration with a defocused projector. Merner et al. [28] calibrated a structured light system with an out-of-focus projector, in which the depth z of a pixel was a low-order polynomial function of the absolute phase, and the (x, y) coordinates were calculated from the camera calibration with a known z value. This method achieved high depth accuracy, but its spatial precision was limited. Li et al. [29] analyzed the defocused imaging system model to confirm that the center of a projector pixel still corresponds to the center of a camera pixel, regardless of the amount of defocusing, and built a one-to-one mapping between the camera pixels and the centers of the projector pixels in the phase domain. The structured light system with an out-of-focus projector was then calibrated with the standard OpenCV camera calibration toolbox based on Zhang's method [7]. Although Li et al.'s calibration method reached an accuracy of about 73 µm for a proper calibration volume, it neglected the influence of the amount of defocusing on the calibration parameters, which significantly affects the measurement results.
To address the limitations of the approaches mentioned above, an improved out-of-focus projector calibration method is proposed in this paper, adopting a distortion correction method on the projection plane and a nonlinear optimization algorithm. The proposed method is composed of two steps: coarse calibration and final calibration. The coarse calibration finds approximate parameters that serve as initial values for the final calibration based on the nonlinear optimization algorithm. To this end, two special planes of the out-of-focus projector receive particular attention: the focal plane and the projection plane. In the coarse calibration process, the calibration plane was moved to the focal plane (focal plane 1) of the out-of-focus projector, and the projector was calibrated as an inverse camera using the pinhole camera model. The intrinsic and extrinsic parameter matrices on the focal plane were selected as initial values for the final out-of-focus projector calibration. Secondly, we considered the lens distortion on the projection plane as an initial value of the final projector calibration. To calculate the lens distortion on the projection plane using the pinhole camera model, the defocused projector was adjusted so that it focused on the projection plane, and it was calibrated as a standard inverse camera. Finally, based on the re-projection mathematical model with distortion, the final, accurate parameters of the out-of-focus projector were obtained with a nonlinear optimization algorithm, whose objective function minimizes the sum of the re-projection errors of all the reference points on the projector image plane. In addition, the paper experimentally demonstrates the principle that a projector has noticeable distortions outside its focal plane. Compared to the traditional calibration method, the experimental results demonstrate that our proposed method can accurately calibrate an out-of-focus projector regardless of the amount of defocusing.
This paper is organized as follows. Section 2 explains the basic principles used in the proposed calibration method. Section 3 presents the calibration principle and process. Section 4 shows the experimental results verifying the performance of our calibration method, and Section 5 summarizes this paper.

2. Mathematical Model

2.1. Camera Model

The well-known pinhole model is used to describe a camera with intrinsic and extrinsic parameters. The intrinsic parameters include the focal length, principal point, and pixel skew factor. The rotation matrix and translation vector, which define the relationship between the world coordinate system and the camera coordinate system, are the extrinsic parameters [7]. As shown in Figure 2, a 3D point in the world coordinate system $o_w x_w y_w z_w$ can be represented by $P_w = \{x_w, y_w, z_w\}^T$, and the corresponding two-dimensional (2D) point in the camera imaging coordinate system $o_c x_c y_c z_c$ is $p_c = \{u_c, v_c\}^T$. The relationship between a 3D point $P_w$ and its imaging point $p_c$ can be described as follows:
$$s\,\tilde{p}_c = A_c [R_c,\ T_c]\, \tilde{P}_w \tag{1}$$
where $\tilde{p}_c = \{u_c, v_c, 1\}^T$ is the homogeneous coordinate of the point $p_c$ in the camera imaging coordinate system, $\tilde{P}_w = \{x_w, y_w, z_w, 1\}^T$ is the homogeneous coordinate of the point $P_w$ in the world coordinate system, $s$ is a scale factor, $[R_c, T_c]$ is the camera extrinsic matrix, $R_c$ denotes the $3 \times 3$ rotation matrix, and $T_c$ is the translation vector. $A_c$ represents the intrinsic matrix, which can be described as follows:
$$A_c = \begin{bmatrix} f_x & \gamma & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{2}$$
where $f_x$ and $f_y$ are the focal lengths expressed in pixels along the $u_c$ and $v_c$ axes of the image plane, respectively, and $\gamma$ is the skew factor of the $u_c$ and $v_c$ axes; for modern cameras, $\gamma = 0$. $(u_0, v_0)$ is the coordinate of the principal point in the camera imaging plane.
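To make the model concrete, the following minimal sketch (in Python with NumPy) projects a world point through Equations (1) and (2); all parameter values are illustrative placeholders, not calibrated ones.

```python
# A minimal sketch of the pinhole projection of Equations (1) and (2).
# All parameter values below are illustrative, not calibrated ones.
import numpy as np

def project_point(P_w, A, R, T):
    """Project a 3D world point P_w (3,) to pixel coordinates (u, v)."""
    P_c = R @ P_w + T          # world -> camera coordinates
    p = A @ P_c                # apply the intrinsic matrix A_c
    return p[:2] / p[2]        # divide by the scale factor s = z_c

A = np.array([[1200.0, 0.0, 800.0],    # f_x, gamma = 0, u_0
              [0.0, 1200.0, 600.0],    # f_y, v_0
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # identity rotation for illustration
T = np.array([0.0, 0.0, 500.0])        # target 500 mm in front of the camera
print(project_point(np.array([10.0, -5.0, 0.0]), A, R, T))  # [824. 588.]
```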
Furthermore, the image coordinates in the above equations were considered distortion-free. The lens distortion of the camera should be corrected to improve the calibration accuracy. Several models exist for lens distortion, such as the radial and tangential distortion model [30,31], the rational function distortion model [32,33], the division distortion model [31,34], and others [35]. In this paper, a typical radial and tangential distortion model was used due to its simplicity and sufficient accuracy, formulated as:
$$\begin{cases} u_c' = u_c + \delta u_c = u_c + \left[ k_1 u_r r^2 + k_2 u_r r^4 + p_1 (3 u_r^2 + v_r^2) + 2 p_2 u_r v_r \right] \\ v_c' = v_c + \delta v_c = v_c + \left[ k_1 v_r r^2 + k_2 v_r r^4 + p_2 (u_r^2 + 3 v_r^2) + 2 p_1 u_r v_r \right] \end{cases} \tag{3}$$
where $(u_c', v_c')$ represents the imaging point on the camera imaging plane after radial and tangential correction, $(u_c, v_c)$ is the imaging point before correction, and $r = \sqrt{u_r^2 + v_r^2}$ is the distance between the imaging point $(u_c, v_c)$ and the principal point $(u_0, v_0)$, with $u_r = u_c - u_0$ and $v_r = v_c - v_0$. $k_1$, $k_2$ are the radial distortion coefficients, and $p_1$, $p_2$ are the tangential distortion coefficients.
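A short sketch of Equation (3) follows; the coefficients are placeholders rather than values from any calibration.

```python
# Applying the radial and tangential distortion model of Equation (3)
# to a single pixel; coefficient values are illustrative only.
import numpy as np

def distort(u, v, u0, v0, k1, k2, p1, p2):
    ur, vr = u - u0, v - v0    # coordinates relative to the principal point
    r2 = ur**2 + vr**2         # r^2, with r the distance to (u0, v0)
    radial = k1 * r2 + k2 * r2**2
    du = ur * radial + p1 * (3 * ur**2 + vr**2) + 2 * p2 * ur * vr
    dv = vr * radial + p2 * (ur**2 + 3 * vr**2) + 2 * p1 * ur * vr
    return u + du, v + dv      # corrected point (u_c', v_c')

print(distort(900.0, 700.0, 800.0, 600.0, k1=1e-8, k2=0.0, p1=1e-9, p2=1e-9))
```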

2.2. Light Encoding

The projected patterns in the structured light measurement system are usually encoded by a combination of gray code and four-step phase shifting [36,37]. As shown in Figure 3a, the 4-bit gray code encodes $2^4$ subdivisions by projecting four successive gray code stripe patterns. The gray code method encodes with pixel accuracy and does not rely on a spatial neighborhood, but its spatial resolution is low, limited by the number of projected stripe patterns. Four-step phase shifting, in contrast, offers high spatial resolution in every projection; its drawback is the ambiguity that arises when determining the signal periods in the camera images, caused by the periodic nature of the patterns. When the gray code and phase shifting methods are combined, their strengths complement each other, and even discontinuous surfaces with fine details can be measured.
The four-step phase shifting algorithm has been extensively applied in optical measurement because of its speed and accuracy. The four fringe images can be represented as follows:
$$I_i(x, y) = I'(x, y) + I''(x, y) \cos\left[ \phi(x, y) + 2i\pi/4 \right] \tag{4}$$
where $I'(x, y)$ is the average intensity, $I''(x, y)$ is the intensity modulation, $i = 1, 2, 3, 4$, and $\phi(x, y)$ is the phase, which is solved as follows:
$$\phi(x, y) = \arctan\left[ \frac{I_4(x, y) - I_2(x, y)}{I_1(x, y) - I_3(x, y)} \right] \tag{5}$$
where $\phi(x, y)$ is the wrapped phase, as shown in Figure 3b, which lies between 0 and $2\pi$ rad. However, the absolute phase is required for the subsequent work: phase unwrapping detects the $2\pi$ discontinuities and removes them by adding or subtracting multiples of $2\pi$ point by point. In other words, phase unwrapping finds the integer $k$ such that:
$$\Phi(x, y) = \phi(x, y) + 2k\pi \tag{6}$$
where $\Phi(x, y)$ is the absolute phase, and $k$ is the stripe number. When the phase shifting period coincides with the gray code edges, as shown in Figure 3, the phase shifting works within the subdivisions defined by the gray code encoding, and the absolute phase is distributed linearly and spatially continuously over each subdivision area. Thus, all of the pixels in the camera image are tracked by their absolute phases.
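As a concrete illustration, the sketch below (Python with NumPy) computes the wrapped phase of Equation (5) and unwraps it with Equation (6). It assumes the common convention that the $i$-th shift is $(i-1)\pi/2$, under which Equation (5) recovers the encoded phase exactly, and a synthetic fringe order $k$ stands in for the gray code decoding.

```python
# Four-step phase retrieval (Equation (5)) and unwrapping (Equation (6)).
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Wrapped phase in [0, 2*pi) from four pi/2-shifted fringe images."""
    return np.mod(np.arctan2(I4 - I2, I1 - I3), 2 * np.pi)

def absolute_phase(phi, k):
    """Equation (6): add the fringe order k to remove the 2*pi ambiguity."""
    return phi + 2 * np.pi * k

# Synthetic one-row example with a 16-pixel fringe period
x = np.arange(64)
phi_true = 2 * np.pi * x / 16.0
I1, I2, I3, I4 = (128 + 100 * np.cos(phi_true + i * np.pi / 2)
                  for i in (0, 1, 2, 3))   # shifts (i-1)*pi/2 for i = 1..4
phi = wrapped_phase(I1, I2, I3, I4)        # equals phi_true modulo 2*pi
Phi = absolute_phase(phi, k=x // 16)       # k normally comes from gray code
```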

2.3. Digital Binary Defocusing Technique

The digital binary defocusing technique uses computer-generated binary structured fringe patterns, which the defocused projector blurs into sinusoidal structured fringe patterns. Mathematically, the defocusing effect can be simplified to a convolution operation, written as follows:
$$I(x, y) = I_b(x, y) \otimes Psf(x, y) \tag{7}$$
where $\otimes$ represents convolution, $I_b(x, y)$ is the input binary fringe pattern, $I(x, y)$ is the output smoothed fringe pattern, and $Psf(x, y)$ is the point spread function, determined by the pupil function $f(u, v)$ of the optical system:
$$Psf(x, y) = \left| \frac{1}{2\pi} \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} f(u, v)\, e^{i(xu + yv)}\, du\, dv \right|^2 \tag{8}$$
For simplicity, $Psf(x, y)$ can be approximated by a circular Gaussian function [38,39]:
$$Psf(x, y) = G(x, y) = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right) \tag{9}$$
where the standard deviation $\sigma$ is proportional to the degree of defocusing. The defocused optical system is thus equivalent to a spatial two-dimensional (2D) low-pass filter. As shown in Figure 4, a binary fringe pattern was blurred with increasing $\sigma$ to simulate the generation of a sinusoidal fringe pattern. Figure 4a shows the initial binary structured fringe pattern; Figure 4b,c show the generated sinusoidal fringe patterns at a low and a high defocusing degree, respectively; and Figure 4d shows the cross-sections. As seen in Figure 4, as the defocusing degree increased, the binary structure became less distinct and the sinusoidal structure more pronounced, which unfortunately caused a drastic fall in the intensity amplitude. To address this, pulse width modulation (PWM) techniques were applied to generate high-quality sinusoidal patterns [40,41,42], and the dithering technique was proposed for generating wider binary patterns [43]. Even so, it is difficult to select a defocusing degree that yields high-quality sinusoidal patterns with a high fringe intensity amplitude. More importantly, the phase of the defocused fringe patterns remains invariant as the defocusing degree increases [29].
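The following sketch reproduces the spirit of this simulation, approximating the convolution of Equation (7) with the Gaussian PSF of Equation (9) via SciPy; the period and $\sigma$ values are illustrative.

```python
# Simulating projector defocusing: blur a binary fringe pattern with a
# Gaussian PSF (Equations (7) and (9)); sigma sets the defocusing degree.
import numpy as np
from scipy.ndimage import gaussian_filter

period = 16                                    # fringe period in pixels
x = np.arange(256)
row = (np.mod(x, period) < period // 2).astype(float)   # binary square wave
pattern = np.tile(row, (128, 1))               # binary structured pattern

low_defocus = gaussian_filter(pattern, sigma=1.5)   # binary edges still visible
high_defocus = gaussian_filter(pattern, sigma=4.0)  # near-sinusoidal profile,
                                                    # reduced intensity amplitude
```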

3. Calibration Principle and Process

3.1. Camera Calibration

Essentially, the purpose of the camera calibration procedure is to obtain the intrinsic and extrinsic parameters of the camera from reference data composed of the 3D points on the calibration board and the corresponding 2D points on the CCD. In this research, Zhang's method [7] was used to estimate the intrinsic parameters. Instead of a checkerboard, we used a flat black board with a 7 × 21 array of white circles as the calibration target, as shown in Figure 5, and the centers of the circles were extracted as feature points. The calibration board was placed in different positions and orientations (poses), and 15 images were captured to estimate the intrinsic parameters of the camera. This procedure was implemented with the OpenCV camera calibration toolbox. Notably, a typical radial and tangential distortion model was adopted and the distortion was corrected during the camera calibration.
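A minimal OpenCV sketch of this step is shown below; the file names, the 20 mm circle spacing, and the grid orientation are assumptions for illustration, not values taken from the paper.

```python
# Camera calibration with a symmetric circle grid and OpenCV.
import cv2
import numpy as np

pattern = (21, 7)                  # circles per row, per column (assumed)
spacing = 20.0                     # circle-centre spacing in mm (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * spacing

obj_points, img_points = [], []
for i in range(15):                # 15 poses, as in the paper
    img = cv2.imread(f"board_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
    img = 255 - img                # white circles -> dark blobs for the detector
    found, centers = cv2.findCirclesGrid(img, pattern, None,
                                         cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:                      # centres are returned with sub-pixel accuracy
        obj_points.append(objp)
        img_points.append(centers)

rms, A_c, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img.shape[::-1], None, None)
```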

3.2. Out-of-Focus Projector Calibration

Generally, a projector can be regarded as an inverse camera, because it projects images rather than capturing them. If images of the calibration points from the viewpoint of the projector are available, the projector can be calibrated as a camera; our goal can therefore be realized by establishing the mapping relationship between the 2D points on the DMD of the projector and the 2D points on the CCD of the camera. However, defocusing the projector complicates the calibration procedure: the model for calibrating the projector follows the model for camera calibration, and since the pinhole camera model requires the device to be in focus, an out-of-focus projector does not directly satisfy this requirement. In addition, most projectors have noticeable distortions outside their focal plane (projection plane) [19]. In this section, a novel calibration model for a defocused projector is introduced, together with a solution to the problem of calibrating an out-of-focus projector.

3.2.1. Out-of-Focus Projector Model

In the literature [26], the binary defocusing technique is implemented in two ways. In the first, the projector projects onto different planes with a fixed focal length; in the second, the projector is set to different focus distances while the plane remains at a fixed location. In both cases, defocusing degree 1 corresponds to the projector being in focus. To study the influence of the binary defocusing technique on the projector calibration results, we calibrated a projector under defocusing degrees 1 to 5 using the method proposed in [29]. Table 1 shows the calibration results of the projector for defocusing degrees 1 to 5 obtained with the first method, and Table 2 shows the corresponding results obtained with the second method.
From Table 1, the focal length and principal point were unstable under different defocusing degrees using the first method: the maximum change of $f_u$ and $f_v$ reached 50 pixels, and the maximum changes of $u_0$ and $v_0$ reached 24 pixels and 40 pixels, respectively. In addition, the re-projection errors in the u and v directions rose significantly as the defocusing degree of the projector increased, as also seen in Figure 6a. Similarly, we applied different defocusing degrees using the second method; the corresponding statistical calibration results are shown in Table 2 and Figure 6b. The calibration results of the second defocusing method are essentially the same as those of the first. Therefore, all of the parameters vary, whether the projector projects onto different planes or is set to different focus distances. This is because the parameters are influenced by the projector defocusing, and the parameters mutually constrain one another during the projector calibration process.
In addition, the above experiments showed that the distortion coefficients varied with increasing defocusing degree. Moreover, it has been noted that most projectors have noticeable distortions outside their focal plane [19]. Therefore, we measured the lens distortion of the out-of-focus projector under different defocusing degrees, characterized by the average residual error (ARE), defined as follows:
$$ARE = \frac{1}{n} \sum_{i=1}^{n} \sqrt{(x_i - x_i^{id})^2 + (y_i - y_i^{id})^2} \tag{10}$$
where $(x_i, y_i)$ are the image coordinates computed with the calibration parameters, and $(x_i^{id}, y_i^{id})$ are the extracted (ideal) image coordinates. Table 3 shows the ARE statistics of an out-of-focus projector under different defocusing degrees, and Figure 7 shows how the ARE varies with the defocusing degree.
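A one-function sketch of Equation (10), assuming NumPy arrays of matched point pairs:

```python
# Average residual error (ARE) of Equation (10).
import numpy as np

def average_residual_error(computed, ideal):
    """computed, ideal: (n, 2) arrays of image coordinates in pixels."""
    return np.mean(np.linalg.norm(computed - ideal, axis=1))
```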
From Table 3 and Figure 7, the ARE of the projector rose as the defocusing degree increased. This indicates that the projector always has noticeable distortions outside its focal plane, which is central to the discussion that follows.
The structured light system model with an out-of-focus projector is shown in Figure 8. To generate the sinusoidal fringes from binary patterns, the projector was substantially out of focus. From the statistical calibration results of a defocused projector shown in Figure 6, the re-projection errors gradually increased as the projector defocusing increased. This occurred because we directly used the pinhole camera model to calibrate the out-of-focus projector, while the pinhole camera model assumes an in-focus device. When the amount of defocusing is small, the re-projection errors of the projector are small, as for defocusing degrees 1 and 2. Conversely, at a high defocusing degree the pinhole camera model is no longer suitable, and the re-projection errors of the projector become increasingly large, as for defocusing degrees 4 and 5. Therefore, using a pinhole camera model to calibrate an out-of-focus projector introduces errors into the calibration results. In order to improve the calibration precision, this paper proposes an out-of-focus projector calibration method based on nonlinear optimization with lens distortion correction on the projection plane.
As introduced in Section 2.1, the pinhole camera model describes a camera with intrinsic parameters, such as the focal length, principal point, and pixel skew factor, and extrinsic parameters, namely the rotation matrix and translation vector. A model of an out-of-focus projector is shown in Figure 9a. For convenience of expression, the parameters or points related to the focal plane are denoted by the subscript "1", whereas those related to the projection plane are denoted by the subscript "2". If the defocusing degree of the projector is fixed, the correspondence between the focal plane and the projection plane is uniquely determined. The calibration plane can then be moved to the focal plane of the projector, and the projector can be calibrated as an inverse camera using the pinhole camera model. This process can be described as follows:
$$s\,\tilde{p}_{p1} = A_{p1} [R_{p1},\ T_{p1}]\, \tilde{P}_w \tag{11}$$

$$A_{p1} = \begin{bmatrix} f_{x1} & \gamma_1 & u_{01} \\ 0 & f_{y1} & v_{01} \\ 0 & 0 & 1 \end{bmatrix} \tag{12}$$

$$\begin{cases} u_{p1}' = u_{p1} + \delta u_{p1} = u_{p1} + k_1 u_{r1} r^2 + k_2 u_{r1} r^4 + p_1 (3 u_{r1}^2 + v_{r1}^2) + 2 p_2 u_{r1} v_{r1} \\ v_{p1}' = v_{p1} + \delta v_{p1} = v_{p1} + k_1 v_{r1} r^2 + k_2 v_{r1} r^4 + p_2 (u_{r1}^2 + 3 v_{r1}^2) + 2 p_1 u_{r1} v_{r1} \end{cases} \tag{13}$$
$A_{p1}$, $R_{p1}$, and $T_{p1}$ were selected as initial values for the final out-of-focus projector calibration.
To obtain accurate calibration results, the lens distortion was considered; in this paper, we used a radial and tangential distortion model [30,31]. As Figure 9a shows, a distance exists between the focal plane and the projection plane, and as the projector defocusing degree increases, this distance also increases: the measurement plane (projection plane) moves away from the focal plane. Additionally, a previous study showed that most projectors have noticeable distortions outside their focal plane [19]. Therefore, we considered the lens distortion on the projection plane as an initial value of the final projector calibration. However, the pinhole camera model should be applied on a focal plane. To calculate the lens distortion on the projection plane using the pinhole camera model, the defocused projector was adjusted so that it focused on the projection plane; this plane is denoted focal plane 2, as shown in Figure 9b. The projector can then be calibrated as a standard inverse camera, and the lens distortion on the projection plane, $\delta u_{p2}$ and $\delta v_{p2}$, can be obtained. Similarly, the calibration model of the projector on the projection plane can be described as follows:
$$s\,\tilde{p}_{p2} = A_{p2} [R_{p2},\ T_{p2}]\, \tilde{P}_w \tag{14}$$

$$A_{p2} = \begin{bmatrix} f_{x2} & \gamma_2 & u_{02} \\ 0 & f_{y2} & v_{02} \\ 0 & 0 & 1 \end{bmatrix} \tag{15}$$

$$\begin{cases} u_{p2}' = u_{p2} + \delta u_{p2} = u_{p2} + k_1 u_{r2} r^2 + k_2 u_{r2} r^4 + p_1 (3 u_{r2}^2 + v_{r2}^2) + 2 p_2 u_{r2} v_{r2} \\ v_{p2}' = v_{p2} + \delta v_{p2} = v_{p2} + k_1 v_{r2} r^2 + k_2 v_{r2} r^4 + p_2 (u_{r2}^2 + 3 v_{r2}^2) + 2 p_1 u_{r2} v_{r2} \end{cases} \tag{16}$$
$\delta u_{p2}$ and $\delta v_{p2}$ are selected as initial values for the final out-of-focus projector calibration.
Here, two points require explanation. First, when the intrinsic and extrinsic parameters were calculated on the focal plane, the projector was in the same condition as in the final defocused state, so these parameters were close to their true values; they were therefore selected as the initial values of the nonlinear optimization. Secondly, with an increase in the projector defocusing degree, the measurement plane (projection plane) moves away from the focal plane, and a prior study showed that most projectors have noticeable distortions outside their focal plane [19]; we therefore considered the lens distortion on the projection plane (focal plane 2) as the initial value for the final out-of-focus projector calibration. In summary, the initial values for the nonlinear optimization of the out-of-focus projector can be described as follows:
$$A_{pp}^{0} = A_{p1} = \begin{bmatrix} f_{x1} & \gamma_1 & u_{01} \\ 0 & f_{y1} & v_{01} \\ 0 & 0 & 1 \end{bmatrix} \tag{17}$$

$$[R_{pp}^{0},\ T_{pp}^{0}] = [R_{p1},\ T_{p1}] \tag{18}$$

$$\begin{cases} \delta u_{pp}^{0} = \delta u_{p2} = k_1 u_{r2} r^2 + k_2 u_{r2} r^4 + p_1 (3 u_{r2}^2 + v_{r2}^2) + 2 p_2 u_{r2} v_{r2} \\ \delta v_{pp}^{0} = \delta v_{p2} = k_1 v_{r2} r^2 + k_2 v_{r2} r^4 + p_2 (u_{r2}^2 + 3 v_{r2}^2) + 2 p_1 u_{r2} v_{r2} \end{cases} \tag{19}$$
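A sketch of assembling these initial values into a single parameter vector might look as follows; the layout is one we choose for illustration (the paper does not prescribe one), and a single pose is packed for brevity.

```python
# Packing the initial guess of Equations (17)-(19): focal-plane-1
# intrinsics/extrinsics plus the projection-plane distortion K_p2.
import numpy as np
from scipy.spatial.transform import Rotation

def pack_initial_guess(A_p1, R_p1, T_p1, K_p2):
    """K_p2 = (k1, k2, p1, p2), estimated on focal plane 2."""
    fx, fy = A_p1[0, 0], A_p1[1, 1]
    u0, v0 = A_p1[0, 2], A_p1[1, 2]
    rvec = Rotation.from_matrix(R_p1).as_rotvec()   # compact rotation
    return np.concatenate([[fx, fy, u0, v0], K_p2, rvec, T_p1])
```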
Based on the re-projection mathematical model with distortion, the final, accurate parameters of the out-of-focus projector were obtained with a nonlinear optimization algorithm. The objective function minimizes the sum of the re-projection errors of all the reference points on the projector image plane, described as:
$$[A_{pp}, R_{pp}, T_{pp}, K_{pp}] = \arg\min \sum_{i=1}^{N} \sum_{j=1}^{M} \left\| p_{ij} - F(A_{p1}, R_{p1}, T_{p1}, K_{p2}, P_{ij}) \right\|^2 \tag{20}$$
where $[A_{pp}, R_{pp}, T_{pp}, K_{pp}]$ are the final, accurate parameters of the out-of-focus projector; $N$ is the number of reference points; $M$ is the number of images used for projector calibration; $A_{p1}$ is the intrinsic parameter matrix on focal plane 1; $R_{p1}$ and $T_{p1}$ are the extrinsic parameters on focal plane 1; $K_{p2}$ contains the distortion coefficients on focal plane 2; $p_{ij}$ is the point coordinate on the image plane; $F$ is the function representing the re-projection process of the projector; and $P_{ij}$ is the space coordinate of a calibration point. This nonlinear optimization problem can be solved using the Levenberg-Marquardt method [44], with good initial values provided by coarsely calibrating the parameters of the out-of-focus projector on the focal plane and the projection plane, respectively.
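The sketch below illustrates this refinement with scipy.optimize.least_squares (Levenberg-Marquardt). For brevity it handles a single pose with synthetic data, whereas Equation (20) sums over $M$ poses; the parameter layout matches the packing sketch above, and the projection function is our illustrative stand-in for $F$.

```python
# Refining the out-of-focus projector parameters (Equation (20)).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, P_w):
    """Re-project world points P_w (n, 3) onto the projector image plane."""
    fx, fy, u0, v0, k1, k2, p1, p2 = params[:8]
    R = Rotation.from_rotvec(params[8:11]).as_matrix()
    P_c = P_w @ R.T + params[11:14]            # world -> projector frame
    u = fx * P_c[:, 0] / P_c[:, 2] + u0        # ideal pinhole projection
    v = fy * P_c[:, 1] / P_c[:, 2] + v0
    ur, vr = u - u0, v - v0
    r2 = ur**2 + vr**2
    radial = k1 * r2 + k2 * r2**2              # distortion as in Equation (13)
    du = ur * radial + p1 * (3 * ur**2 + vr**2) + 2 * p2 * ur * vr
    dv = vr * radial + p2 * (ur**2 + 3 * vr**2) + 2 * p1 * ur * vr
    return np.column_stack([u + du, v + dv])

def residuals(params, P_w, p_obs):
    return (project(params, P_w) - p_obs).ravel()

# Synthetic ground truth and a perturbed initial guess standing in for the
# coarse estimates of Equations (17)-(19).
rng = np.random.default_rng(0)
true = np.array([1500.0, 1500.0, 512.0, 384.0, 1e-8, 0.0, 1e-9, 1e-9,
                 0.01, -0.02, 0.0, 10.0, -5.0, 600.0])
P_w = np.column_stack([rng.uniform(-100, 100, 147),
                       rng.uniform(-70, 70, 147), np.zeros(147)])
p_obs = project(true, P_w)
x0 = true * (1 + rng.normal(0, 1e-3, true.size))
result = least_squares(residuals, x0, args=(P_w, p_obs), method="lm")
```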

3.2.2. Phase-Domain Invariant Mapping

To improve the calibration accuracy of an out-of-focus projector, a unique one-to-one mapping between the pixels on the projector DMD and the pixels on the camera CCD should be virtually established in the phase domain using the phase shifting algorithm. The mapped projector images were generated as proposed in [17], and the basic mapping principle is as follows. If vertical structured light patterns, encoded with a combination of gray code and phase shifting, are projected onto the calibration board and the camera captures the pattern images, the absolute phase $\Phi_V(u_c, v_c)$ can be retrieved for all camera pixels with Equations (5) and (6). Following the same process with horizontal structured light patterns, the absolute phase $\Phi_H(u_c, v_c)$ is extracted. The accuracy of the phase mapping depends on high-quality phase generation, which in turn depends on the fringe width and the number of fringes in the phase shifting method. In this paper, the four-step phase shifting algorithm was used, and the sinusoidal fringe pattern period was 16 pixels, as shown in Figure 10. Hence, six gray code patterns, the finest of which has the same period as the sinusoidal fringe pattern, were used, as shown in Figure 11. These vertical and horizontal absolute phases can be used to construct the mapping between the pixels on the DMD and the CCD as follows:
$$\begin{cases} \Phi_V(u_c, v_c) = \Phi_V(u_p) \\ \Phi_H(u_c, v_c) = \Phi_H(v_p) \end{cases} \tag{21}$$
Equation (21) provides a pixel-to-pixel mapping from the CCD to the DMD; Figure 12 illustrates an example of the extracted correspondences for a single translation.

3.3. Out-of-Focus Projector Calibration Process

The camera can be calibrated with the OpenCV camera calibration toolbox. Once the image coordinates of the calibration points on the DMD have been obtained by phase-domain invariant mapping, an out-of-focus projector can be calibrated using the abovementioned approach. Specifically, the calibration process consists of the following major steps:
Step 1:
Image capture. The calibration board was placed at the preset location, and a white paper was attached to the surface of the calibration board. A set of horizontal and vertical gray code patterns was projected onto the calibration board, and these fringe images were captured by the camera. Similarly, pattern images were captured while projecting a sequence of horizontal and vertical four-step phase shifting fringes. Afterwards, the white paper was removed and an image of the calibration board was captured. For each pose, a total of 21 images were recorded, which were used to recover the absolute phase using the combination of gray code and four-step phase shifting introduced in Section 2.2.
Step 2:
Camera calibration and determination of the circle center locations on the DMD. The camera calibration method described in Section 3.1 was used. For each calibration pose, the horizontal and vertical absolute phase maps $\Phi_H(u_c, v_c)$ and $\Phi_V(u_c, v_c)$ were recovered. A unique point-to-point mapping between the CCD and the DMD was determined as follows:
$$\begin{cases} u_p = \dfrac{\Phi_V(u_c, v_c)}{2\pi} T_V \\ v_p = \dfrac{\Phi_H(u_c, v_c)}{2\pi} T_H \end{cases} \tag{22}$$
where $T_V$ and $T_H$ are the periods of the four-step phase shifting patterns in the vertical and horizontal directions, respectively; in this paper, $T_V = T_H = 16$ pixels. Using Equation (22), the phase value was converted into projector pixel coordinates (a short sketch of this conversion is given after this list). Furthermore, because the circle centers in the camera image are detected with sub-pixel accuracy, we assigned sub-pixel absolute phases obtained by bilinear interpolation of the absolute phases of the four adjacent pixels. For high-accuracy camera circle centers, the standard OpenCV toolbox was used. Figure 12 shows an example of the extracted correspondences for a single translation.
Step 3:
Calculate the initial values of the intrinsic and extrinsic parameters on the focal plane (focal plane 1). To find approximate parameters, images at 15 different positions and orientations (poses) were captured within the planned measurement volume for the projector calibration. With the reference calibration data on focal plane 1 extracted in Step 2, the coarse intrinsic and extrinsic parameters of the out-of-focus projector were estimated using the same software algorithms as for the camera calibration on focal plane 1, as described in Section 3.2.
Step 4:
Compute the initial value of the lens distortion on the projection plane. According to the experimental results in Section 3.2.1, the lens distortion varies with increasing defocusing degree. To find approximate parameters, the lens distortion on the projection plane was taken as the initial value of the lens distortion of the out-of-focus projector. In this process, the projector was adjusted to focus on the projection plane, called focal plane 2. With the calibration points on focal plane 2 and their corresponding image points on the DMD, the lens distortion on the projection plane was obtained using the pinhole camera model.
Step 5:
Compute the precise calibration parameters of the out-of-focus projector using a nonlinear optimization algorithm. All of the parameters were solved by minimizing the cost function of Equation (20).
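As referenced in Step 2, the following sketch converts absolute phases to projector pixels via Equation (22), including the bilinear sub-pixel lookup; the helper names are ours, and the synthetic phase maps are for illustration only.

```python
# Phase-to-projector-pixel conversion (Equation (22)) with a bilinear
# sub-pixel lookup of the absolute phase maps.
import numpy as np

T_V = T_H = 16                         # fringe period in projector pixels

def bilinear(phase_map, u, v):
    """Bilinearly interpolate a phase map at sub-pixel position (u, v)."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    a, b = u - u0, v - v0
    return ((1 - a) * (1 - b) * phase_map[v0, u0]
            + a * (1 - b) * phase_map[v0, u0 + 1]
            + (1 - a) * b * phase_map[v0 + 1, u0]
            + a * b * phase_map[v0 + 1, u0 + 1])

def ccd_to_dmd(Phi_V, Phi_H, u_c, v_c):
    """Map a sub-pixel camera point to projector coordinates (u_p, v_p)."""
    u_p = bilinear(Phi_V, u_c, v_c) / (2 * np.pi) * T_V
    v_p = bilinear(Phi_H, u_c, v_c) / (2 * np.pi) * T_H
    return u_p, v_p

# Synthetic phase maps that grow linearly across a 1600 x 1200 camera image
Phi_V = np.tile(2 * np.pi * np.arange(1600) / 16.0, (1200, 1))
Phi_H = np.tile(2 * np.pi * np.arange(1200)[:, None] / 16.0, (1, 1600))
print(ccd_to_dmd(Phi_V, Phi_H, 100.25, 200.75))   # -> (100.25, 200.75)
```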

4. Experiment and Discussion

To verify the validity of the proposed calibration method for an out-of-focus projector, laboratory experiments were carried out. The experimental system, shown in Figure 13, was composed of a DLP projector (model: Optoma DN322) with 1024 × 768 pixel resolution and a camera (model: Point Grey GX-FW-28S5C/M-C) with a 12 mm focal length lens (model: KOWA LM12JC5M2). The camera pixel size was 4.54 µm × 4.54 µm, with a maximum resolution of 1600 × 1200 pixels. A calibration board with a 7 × 21 array of white circles printed on a flat black board was used, and a calibration volume of 200 × 150 × 200 mm was attained. The system calibration then followed the method described in Section 3. To evaluate the performance of the proposed calibration method, the system was also calibrated with the calibration method in [29]. In addition, since calibration images are generally contaminated by noise during image acquisition and transmission, the original images in the experiment were preprocessed with an image de-noising method based on a weighted regularized least-squares algorithm [45], which effectively eliminates image noise while preserving edge information; this helps reduce the noise impact and improves the calibration results.
Table 4 shows the calibration system parameters using both our proposed method and the conventional method in [29] under defocusing degree 2, as mentioned in Table 1. As shown, the camera parameters were almost the same for both methods, whereas the out-of-focus projector parameters were obviously different. This is because the calibration process for the projector in [29] is unreliable under the influence of projector defocusing. In addition, the re-projection errors of the calibration data on the camera and out-of-focus projector image planes are shown in Figure 14a–c. As shown in Figure 14a, with re-projection errors of the calibration data on the camera image plane of 0.1567 ± 0.0119 pixels, the re-projection errors of the calibration data for the out-of-focus projector using the method in [29] are 0.2258 ± 0.0217 pixels, as shown in Figure 14b. With the re-projection errors of the camera unchanged, the re-projection errors of the calibration data for the out-of-focus projector using the proposed method decreased to 0.1648 ± 0.0110 pixels, as shown in Figure 14c, a reduction of 27.01% compared to the method in [29].
To evaluate the performance of the proposed calibration method, the standard distances between two adjacent points on the calibration board in the x and y directions were measured using the two methods. The standard distances were obtained by moving the calibration board by 20 mm in the parallel direction within a volume of 200 × 140 × 100 mm, and a total of 1295 distances were measured. Figure 15 shows the measured distances within this volume, and Figure 16 shows the histogram of the distribution of the distance measurement error. The measurement error was 0.0253 ± 0.0364 mm for our proposed calibration method, as shown in Figure 16a, and 0.0389 ± 0.0493 mm for the calibration method in [29], as shown in Figure 16b. The measurement accuracy and the uncertainty were thus improved by 34.96% and 26.17%, respectively.
To further test our proposed calibration method, a planar board and an aluminum alloy hemisphere were measured with the defocused camera-projector system under the different defocusing degrees listed in Table 1. The measurement results of the planar board under the five defocusing degrees are shown in Figure 17. The measurement error of the board is defined as the distance between a measured point and the fitted plane. To determine the measurement errors of the board, the board was also measured using a coordinate measuring machine (CMM) with a precision of 0.0019 mm. Table 5 presents the statistics of the measurement results of the board for the five defocusing degrees. The board fitting residuals of the CMM's measurement data were 0.0065 ± 0.0085 mm, and the maximum was less than 0.0264 mm. Figure 17a,b show the fitted plane and the fitting residuals of the plane with the projector in focus (defocusing degree 1), respectively. It is important to note that under defocusing degree 1 our calibration method coincides with the method in [29], so there is only one set of measurement results. Figure 17c–f show the measurement results under defocusing degrees 2 to 5 using our proposed calibration method; the corresponding results using the calibration method in [29] are shown in Figure 17g–j. When the defocusing degree was small, such as degrees 2 and 3, the fitting residuals of the two methods were similar: 0.0147 ± 0.0184 mm and 0.0159 ± 0.0195 mm using our calibration method, versus 0.0169 ± 0.0210 mm and 0.0183 ± 0.0257 mm using the calibration method in [29]. However, as the defocusing degree increased to degrees 4 and 5, the differences between the measurement results became obvious. In particular, for defocusing degree 5, the fitting residual was 0.0172 ± 0.0234 mm using our proposed calibration method, whereas it reached 0.0276 ± 0.0447 mm using the calibration method in [29]. Figure 18 shows the fitting error as a function of the defocusing degree for both methods. From Figure 18, the change in the fitting error across defocusing degrees is not obvious for our proposed calibration method, whereas the fitting error increases rapidly for the calibration method in [29], because our proposed calibration method accounts for the influence of defocusing on the calibration results.
An aluminum alloy hemisphere was also measured using the defocusing camera-projector system for three different defocusing degrees: defocusing degrees 1, 2, and 5. The captured fringe images for the three defocusing degrees and their cross sections of intensity are shown in Figure 19. The measurement and statistics results are shown in Figure 20 and Table 6, respectively. The measurement results under defocusing degree 1 (projector in focus) are shown in Figure 20a–c. Figure 20a shows the reconstructed 3D surface. To evaluate the accuracy of measurement, we obtained a cross section of the hemisphere and fitted it with an ideal circle. Figure 20b shows the overlay of the ideal circle and the measured data points. The error between these two curves is shown in Figure 20c. Figure 20d–f and Figure 20j–l show the measurement results under defocusing degrees 2 and 5 using our proposed calibration method, respectively. Correspondingly, Figure 20g–i and Figure 20m–o show the measurement results for defocusing degrees 2 and 5 using the calibration method in [29], respectively.
To evaluate our proposed calibration method, the hemisphere was also measured by a CMM with a precision of 0.0019 mm. The fitted radius of the CMM's measurement data was 20.0230 mm, and the hemisphere fitting residuals were 0.0204 ± 0.0473 mm. The fitted radius obtained with the defocused camera-projector system calibrated by our proposed method was 19.9542 mm, a deviation of 0.0688 mm from the CMM's fitted radius, and the hemisphere fitting residuals were 0.0543 ± 0.0605 mm for defocusing degree 2. Under the same experimental conditions, the hemisphere was also measured by the defocused camera-projector system with the calibration method proposed in [29]: the fitted radius was 19.9358 mm, a deviation of 0.0872 mm from the CMM's fitted radius, and the hemisphere fitting residuals were 0.0745 ± 0.0733 mm. For defocusing degree 5, the hemisphere fitting residuals were 0.0574 ± 0.0685 mm using our proposed calibration method and 0.0952 ± 0.0936 mm using the calibration method in [29]. The fitting error as a function of the defocusing degree for both methods is shown in Figure 21. From Figure 21, the change in the fitting error was not obvious for our proposed calibration method, whereas the fitting error increased rapidly for the calibration method in [29]. Thus, the measurement results of the camera-projector system with our proposed projector calibration method were better than those with the method in [29]. All of the experimental results verified that the camera and out-of-focus projector system attains satisfactory accuracy using our proposed projector calibration method.

5. Conclusions

This paper proposes an accurate and systematic method to calibrate an out-of-focus projector in a structured light system using the binary defocusing technique. To achieve high accuracy, the calibration method comprises two parts. First, good initial values are provided by coarsely calibrating the parameters of the out-of-focus projector on the focal plane and the projection plane. Secondly, the final, accurate parameters of the out-of-focus projector are obtained using a nonlinear optimization algorithm based on the re-projection mathematical model with distortion. Specifically, a polynomial distortion representation on the projection plane, rather than the focal plane, in which the high-order radial and tangential lens distortions are considered, was used to reduce the residuals caused by the projection distortion. In addition, the calibration points in the camera image plane were mapped to the projector according to the phase of the planar projection. The experimental results showed that satisfactory calibration accuracy was achieved using our proposed method, regardless of the amount of defocusing. The method is not without limitations: compared to the traditional calibration method, the computation takes somewhat longer.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC 51375377).

Author Contributions

Jiarui Zhang performed the experiments and analyzed the data under the guidance of Yingjie Zhang. Bo Chen participated in the establishment of the model and the design of the experiments. All authors contributed to writing the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, F.; Brown, G.M.; Song, M. Overview of three-dimensional shape measurement using optical methods. Opt. Eng. 2000, 39, 10–22. [Google Scholar]
  2. D’Apuzzo, N. Overview of 3D surface digitization technologies in Europe. In Proceedings of the SPIE Electronic Imaging, San Jose, CA, USA, 26 January 2006; pp. 605–613. [Google Scholar]
  3. Abdel-Aziz, Y.I.; Karara, H.M.; Hauck, M. Direct Linear Transformation from Comparator Coordinates into Object Space Coordinates in Close-Range Photogrammetry. Photogramm. Eng. Remote Sens. 2015, 81, 103–107. [Google Scholar] [CrossRef]
  4. Faig, W. Calibration of close-range photogrammetry systems: Mathematical formulation. Photogramm. Eng. Remote Sens. 1975, 41, 1479–1486. [Google Scholar]
  5. Tsai, R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J. Robot. Autom. 1987, 3, 323–344. [Google Scholar] [CrossRef]
  6. Luong, Q.-T.; Faugeras, O.D. Self-Calibration of a Moving Camera from Point Correspondences and Fundamental Matrices. Int. J. Comput. Vis. 1997, 22, 261–289. [Google Scholar] [CrossRef]
  7. Zhang, Z.Y. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the IEEE Conference on Computer Vision, Corfu, Greece, 20–27 September 1999; pp. 666–673. [Google Scholar]
  8. Qi, Z.H.; Xiao, L.X.; Fu, S.H.; Li, T.; Jiang, G.W.; Long, X.J. Two-Step Camera Calibration Method Based on the SPGD Algorithm. Appl. Opt. 2012, 51, 6421–6428. [Google Scholar] [CrossRef] [PubMed]
  9. Bacakoglu, H.; Kamel, M. An Optimized Two-Step Camera Calibration Method. In Proceedings of the IEEE International Conference on Robotics and Automation, Albuquerque, NM, USA, 25 April 1997; pp. 1347–1352. [Google Scholar]
  10. Huang, L.; Zhang, Q.C.; Asundi, A. Camera Calibration with Active Phase Target: Improvement on Feature Detection and Optimization. Opt. Lett. 2013, 38, 1446–1448. [Google Scholar] [CrossRef] [PubMed]
  11. Jia, Z.Y.; Yang, J.H.; Liu, W.; Wang, F.J.; Liu, Y.; Wang, L.L.; Fan, C.N.; Zhao, K. Improved Camera Calibration Method Based on Perpendicularity Compensation for Binocular Stereo Vision Measurement System. Opt. Express 2015, 23, 15205–15223. [Google Scholar] [CrossRef] [PubMed]
  12. Zhu, F.P.; Shi, H.J.; Bai, P.X.; Lei, D.; He, X.Y. Nonlinear Calibration for Generalized Fringe Projection Profilometry under Large Measuring Depth Range. Appl. Opt. 2013, 52, 7718–7723. [Google Scholar] [CrossRef] [PubMed]
  13. Huang, L.; Chua, P.S.K.; Asundi, A. Least-squares calibration method for fringe projection profilometry considering camera lens distortion. Appl. Opt. 2010, 49, 1539–1548. [Google Scholar] [CrossRef] [PubMed]
  14. Lu, J.; Mo, R.; Sun, H.B.; Chang, Z.Y. Flexible Calibration of Phase-to-Height Conversion in Fringe Projection Profilometry. Appl. Opt. 2016, 55, 6381–6388. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, X.; Zhu, L.M. Projector calibration from the camera image point of view. Opt. Eng. 2009, 48, 208–213. [Google Scholar] [CrossRef]
  16. Gao, W.; Wang, L.; Hu, Z.Y. Flexible Method for Structured Light System Calibration. Opt. Eng. 2008, 47, 767–781. [Google Scholar] [CrossRef]
  17. Huang, Z.R.; Xi, J.T.; Yu, Y.G.; Guo, Q.H. Accurate projector calibration based on a new point-to-point mapping relationship between the camera and projector images. Appl. Opt. 2015, 54, 347–356. [Google Scholar] [CrossRef]
  18. Huang, J.; Wang, Z.; Gao, J.M.; Xue, Q. Projector calibration with error surface compensation method in the structured light three-dimensional measurement system. Opt. Eng. 2013, 52, 043602. [Google Scholar] [CrossRef]
  19. Moreno, D.; Taubin, G. Simple, Accurate, and Robust Projector-Camera Calibration. In Proceedings of the 2012 IEEE Second International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, Zurich, Switzerland, 13–15 October 2012; pp. 464–471. [Google Scholar]
  20. Chen, R.; Xu, J.; Chen, H.P.; Su, J.H.; Zhang, Z.H.; Chen, K. Accurate Calibration Method for Camera and Projector in Fringe Patterns Measurement System. Appl. Opt. 2016, 55, 4293–4300. [Google Scholar] [CrossRef] [PubMed]
  21. Liu, M.; Sun, C.K.; Huang, S.J.; Zhang, Z.H. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation. Sensors 2015, 15, 26567–26582. [Google Scholar] [CrossRef] [PubMed]
  22. Yang, S.R.; Liu, M.; Song, J.H.; Yin, S.B.; Guo, Y.; Ren, Y.J.; Zhu, J.G. Flexible Digital Projector Calibration Method Based on Per-Pixel Distortion Measurement and Correction. Opt. Lasers Eng. 2017, 92, 29–38. [Google Scholar] [CrossRef]
  23. Gong, Y.Z.; Zhang, S. Ultrafast 3-D shape measurement with an Off-the-shelf DLP projector. Opt. Express 2010, 18, 19743–19754. [Google Scholar] [CrossRef] [PubMed]
  24. Karpinsky, N.; Zhang, S. High-resolution, real-time 3D imaging with fringe analysis. J. Real-Time Image Process. 2012, 7, 55–66. [Google Scholar] [CrossRef]
  25. Zhang, S. Recent progresses on real-time 3D shape measurement using digital fringe projection techniques. Opt. Lasers Eng. 2010, 48, 149–158. [Google Scholar] [CrossRef]
  26. Lei, S.; Zhang, S. Flexible 3-D shape measurement using projector defocusing. Opt. Lett. 2009, 34, 3080–3082. [Google Scholar] [CrossRef] [PubMed]
  27. Lei, S.; Zhang, S. Digital sinusoidal fringe pattern generation: Defocusing binary patterns VS focusing sinusoidal patterns. Opt. Lasers Eng. 2010, 48, 561–569. [Google Scholar] [CrossRef]
  28. Merner, L.; Wang, Y.; Zhang, S. Accurate calibration for 3D shape measurement system using a binary defocusing technique. Opt. Lasers Eng. 2013, 51, 514–519. [Google Scholar] [CrossRef]
  29. Li, B.; Karpinsky, N.; Zhang, S. Novel calibration method for structured-light system with an out-of-focus projector. Appl. Opt. 2014, 53, 3415–3426. [Google Scholar] [CrossRef] [PubMed]
  30. Weng, J.Y.; Cohen, P.; Herniou, M. Calibration of Stereo Cameras Using a Non-Linear Distortion Model. In Proceedings of the Tenth International Conference on Pattern Recognition, Atlantic City, NJ, USA, 16–21 June 1990; pp. 246–253. [Google Scholar]
  31. Bukhari, F.; Dailey, M.N. Automatic Radial Distortion Estimation from a Single Image. J. Math. Imaging Vis. 2013, 45, 31–45. [Google Scholar]
  32. Hartley, R.I.; Saxena, T. The Cubic Rational Polynomial Camera Model. In Proceedings of the Image Understanding Workshop, 1997; pp. 649–653. [Google Scholar]
  33. Claus, D.; Fitzgibbon, A.W. A Rational Function Lens Distortion Model for General Cameras. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; pp. 213–219. [Google Scholar]
  34. Santana-Cedrés, D.; Gómez, L.; Alemán-Flores, M.; Agustín, S.; Esclarín, J.; Mazorra, L.; Álvarez, L. An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models. SIAM J. Imaging Sci. 2015, 8, 1574–1606. [Google Scholar] [CrossRef]
  35. Tang, Z.W.; von Gioi Grompone, R.; Monasse, P.; Morel, J.M. A Precision Analysis of Camera Distortion Models. IEEE Trans. Image Process. 2017, 26, 2694–2704. [Google Scholar] [CrossRef] [PubMed]
  36. Salvi, J.; Pagès, J.; Batlle, J. Pattern codification strategies in structured light systems. Pattern Recognit. 2004, 37, 827–849. [Google Scholar] [CrossRef]
  37. Sansoni, G.; Carocci, M.; Rodella, R. Three-dimensional vision based on a combination of gray-code and phase-shift light projection: Analysis and compensation of the systematic errors. Appl. Opt. 1999, 38, 6565–6573. [Google Scholar] [CrossRef] [PubMed]
  38. Stokseth, P.A. Properties of a Defocused Optical System. J. Opt. Soc. Am. 1969, 59, 1314–1321. [Google Scholar] [CrossRef]
  39. Wang, Y.J.; Zhang, S. Comparison of the Squared Binary, Sinusoidal Pulse Width Modulation, and Optimal Pulse Width Modulation Methods for Three-Dimensional Shape Measurement with Projector Defocusing. Appl. Opt. 2012, 51, 861–872. [Google Scholar] [CrossRef] [PubMed]
  40. Zuo, C.; Chen, Q.; Feng, S.J.; Feng, F.X.Y.; Gu, G.H.; Sui, X.B. Optimized Pulse Width Modulation Pattern Strategy for Three-Dimensional Profilometry with Projector Defocusing. Appl. Opt. 2012, 51, 4477–4490. [Google Scholar]
  41. Wang, Y.J.; Zhang, S. Optimal Pulse Width Modulation for Sinusoidal Fringe Generation with Projector Defocusing. Opt. Lett. 2010, 35, 4121–4123. [Google Scholar] [CrossRef] [PubMed]
  42. Ayubi, G.A.; Ayubi, J.A.; Di Martino, J.M.; Ferrari, J.A. Pulse-Width Modulation in Defocused Three-Dimensional Fringe Projection. Opt. Lett. 2010, 35, 3682–3684. [Google Scholar]
  43. Wang, Y.J.; Zhang, S. Three-Dimensional Shape Measurement with Binary Dithered Patterns. Appl. Opt. 2012, 51, 6631–6636. [Google Scholar] [CrossRef] [PubMed]
  44. Moré, J.J. The Levenberg-Marquardt Algorithm: Implementation and Theory. In Proceedings of the Numerical Analysis: Proceedings of the Biennial Conference, Dundee, UK, 28 June–1 July 1977; pp. 105–116. [Google Scholar]
  45. Srikanth, M.; Krishnan, K.S.G.; Sowmya, V.; Soman, K.P. Image Denoising Based on Weighted Regularized Least Square Method. In Proceedings of the International Conference on Circuit, Power and Computing Technologies (ICCPCT), Kollam, India, 20–21 April 2017; pp. 1–5. [Google Scholar]
Figure 1. The principle of the structured light three-dimensional (3D) measurement system: (a) system components; and, (b) measurement principle.
Figure 2. Pinhole camera model.
Figure 3. Patterns encoding methods: (a) Four-bit gray-code; (b) Four-step phase-shifting; and, (c) absolute phase by combining four-bit gray-code and four-step phase-shifting.
Figure 4. Simulation of the binary defocusing technique. (a) Binary structured pattern; (b) sinusoidal fringe pattern with low defocusing degree; (c) sinusoidal fringe pattern with high defocusing degree; and, (d) cross-section.
Figure 5. Design of the calibration board.
Figure 6. Re-projection errors under different defocusing degrees: (a) defocusing degrees obtained using the first method; and, (b) defocusing degrees obtained using the second method.
Figure 7. The average residual error (ARE) of a projector under different defocusing degrees.
Figure 8. Model of a structured light system with an out-of-focus projector.
Figure 9. Model of an out-of-focus projector: (a) original model; and, (b) doubled focal plane model.
Figure 10. The four generated images in the first row are the vertical phase-shifting fringe images $I_{V1}$, $I_{V2}$, $I_{V3}$, and $I_{V4}$; the corresponding horizontal phase-shifting fringe images $I_{H1}$, $I_{H2}$, $I_{H3}$, and $I_{H4}$ are shown in the second row.
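Assuming the standard inverse-camera convention, the vertical and horizontal absolute phases of Figure 10 give each camera pixel a unique partner on the projector DMD. A minimal sketch follows; the fringe periods are assumptions.

```python
import numpy as np

period_v, period_h = 64, 64            # assumed fringe periods on the DMD (pixels)

def projector_pixel(phi_v, phi_h):
    """Map the absolute phases measured at one camera pixel to the
    corresponding projector DMD coordinates (u_p, v_p)."""
    u_p = phi_v * period_v / (2 * np.pi)   # column, from the vertical fringes
    v_p = phi_h * period_h / (2 * np.pi)   # row, from the horizontal fringes
    return u_p, v_p
```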
Figure 11. Generated gray-code patterns.
Figure 12. Example of the extracted corresponding circle centers for the camera and projector: (a) example of one calibration pose; (b) camera; and, (c) projector.
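Circle-center extraction such as that shown in Figure 12 can be performed with OpenCV's circle-grid detector; a minimal sketch follows, in which the image name and board layout are hypothetical.

```python
import cv2

img = cv2.imread("calib_pose.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image file
pattern_size = (9, 7)                                      # assumed circles per row/column
found, centers = cv2.findCirclesGrid(img, pattern_size,
                                     flags=cv2.CALIB_CB_SYMMETRIC_GRID)
if found:
    # Sub-pixel circle centers, ordered row by row, ready to be paired
    # with the board's world coordinates for calibration.
    print(centers.reshape(-1, 2)[:5])
```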
Figure 13. Experimental system.
Figure 14. Re-projection errors of the calibration points on the image planes: (a) camera; (b) out-of-focus projector using the conventional method in [29]; and, (c) out-of-focus projector using the proposed method.
Figure 15. The measured distances in a 420 × 150 × 100 mm volume.
Figure 16. Histogram of the measurement error of the 20 mm distances by: (a) the proposed method; and (b) the method in [29].
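The 20 mm distance errors histogrammed in Figure 16 amount to comparing reconstructed point-to-point distances with their nominal value; below is a sketch under the assumption that the evaluated features come in known pairs (the pairing and nominal value are taken from the captions, not from published code).

```python
import numpy as np

def distance_errors(points_a, points_b, nominal=20.0):
    """points_a, points_b: (N, 3) arrays of paired reconstructed points (mm);
    returns the signed errors of the measured distances (histogram these)."""
    return np.linalg.norm(points_a - points_b, axis=1) - nominal
```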
Figure 17. Measurement results and error of the plane under different defocusing degrees: (a) fitting plane; (b) measurement error of the plane under defocusing degree 1 (projector in focus); (c–f) measurement error of the plane under defocusing degrees 2–5 by our proposed calibration method; and, (g–j) the corresponding measurement error of the plane under defocusing degrees 2–5 by the calibration method in [29].
Figure 18. Measurement error of the plane under different defocusing degrees using our proposed calibration method and the calibration method in [29].
Figure 19. Illustration of three different defocusing degrees: (a) a captured fringe image under defocusing degree 1 (projector in focus); (b) a captured fringe image under defocusing degree 2 (projector slightly defocused); (c) a captured fringe image under defocusing degree 5 (projector very defocused); and, (d–f) corresponding cross sections of the intensity of (a–c).
Figure 20. Measurement results and error of a hemisphere surface under different defocusing degrees: (a) fitting hemisphere measured under defocusing degree 1 (projector in focus); (b) cross section of the measurement result and the ideal circle; (c) error estimated in the x direction; (d–f) and (j–l) correspond to (a–c) measured using our proposed calibration method under defocusing degrees 2 and 5; and, (g–i) and (m–o) correspond to (a–c) measured using the calibration method in [29] under defocusing degrees 2 and 5.
Figure 21. Measurement error of the hemisphere under different defocusing degrees using our proposed calibration method and the calibration method in [29].
Table 1. Calibration results of a projector under different defocusing degrees when using the first method. Here K = [k_1, p_1; k_2, p_2] collects the radial (k_1, k_2) and tangential (p_1, p_2) distortion coefficients.

Defocusing Degree | f_u | f_v | u_0 | v_0 | k_1 | p_1 | k_2 | p_2 | Re-Projection Error u | Re-Projection Error v
1 | 2033.15992 | 2029.32333 | 481.85752 | 794.04516 | −0.00501 | 0.00741 | 0.10553 | −0.00781 | 0.02071 | 0.03627
2 | 2066.07579 | 2060.38058 | 480.66343 | 817.00819 | 0.00025 | 0.00075 | −0.02750 | −0.00774 | 0.02356 | 0.04631
3 | 2066.02355 | 2061.98782 | 482.08093 | 824.49272 | −0.01202 | −0.00303 | −0.01985 | −0.00817 | 0.02856 | 0.05781
4 | 2083.24450 | 2079.93614 | 457.53174 | 834.40594 | −0.03166 | −0.00456 | 0.08715 | −0.01328 | 0.03862 | 0.07487
5 | 2082.66646 | 2090.35224 | 458.68461 | 801.22181 | −0.09894 | −0.01726 | 0.13975 | −0.01243 | 0.05492 | 0.09577
Table 2. Calibration results of a projector under different defocusing degrees when using the second method. As in Table 1, K = [k_1, p_1; k_2, p_2].

Defocusing Degree | f_u | f_v | u_0 | v_0 | k_1 | p_1 | k_2 | p_2 | Re-Projection Error u | Re-Projection Error v
1 | 2065.84461 | 2068.06661 | 486.73708 | 815.66697 | 0.00832 | −0.00052 | −0.03886 | −0.00749 | 0.02950 | 0.03762
2 | 2074.79015 | 2074.45555 | 503.27888 | 824.02444 | 0.00111 | 0.00496 | 0.03516 | −0.00402 | 0.03941 | 0.05078
3 | 2131.24929 | 2135.41564 | 526.11764 | 799.98280 | 0.00240 | −0.00229 | 0.03484 | −0.00229 | 0.05352 | 0.07076
4 | 2165.81230 | 2166.67073 | 541.55585 | 794.56998 | −0.02157 | −0.00543 | 0.17737 | 0.00042 | 0.07101 | 0.09316
5 | 2173.40017 | 2172.43898 | 531.46318 | 810.09141 | 0.01313 | −0.00125 | 0.10122 | −0.00151 | 0.09358 | 0.14790
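The coefficient vector K = [k_1, p_1, k_2, p_2] in Tables 1 and 2 corresponds to the usual radial-plus-tangential (Brown) distortion model on normalized image coordinates; a minimal sketch of that model, not the authors' implementation, is:

```python
import numpy as np

def distort(xn, yn, k1, p1, k2, p2):
    """Apply two radial (k1, k2) and two tangential (p1, p2) distortion
    terms to normalized image coordinates (xn, yn)."""
    r2 = xn * xn + yn * yn
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2 * yn * yn) + 2 * p2 * xn * yn
    return xd, yd

def to_pixels(xd, yd, fu, fv, u0, v0):
    # Pixel coordinates using the focal lengths and principal point
    # reported in the tables.
    return fu * xd + u0, fv * yd + v0
```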
Table 3. Average residual error (ARE) of a projector under different defocusing degrees (unit: pixel).

Defocusing Degree | 1 | 2 | 3 | 4 | 5
ARE | 0.21043 | 0.22585 | 0.29015 | 0.39374 | 0.56108
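Assuming the ARE in Table 3 is the mean Euclidean distance between detected and re-projected calibration points (the usual definition), it can be computed as in this sketch:

```python
import numpy as np

def average_residual_error(detected, reprojected):
    """detected, reprojected: (N, 2) arrays of pixel coordinates of the
    calibration points; returns the mean re-projection distance (pixels)."""
    return np.linalg.norm(detected - reprojected, axis=1).mean()
```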
Table 4. Calibration results of the camera and out-of-focus projector. K = [k_1, p_1; k_2, p_2]; R and T are the extrinsic rotation and translation.

Proposed method:
  Camera: f_u = 2708.93985, f_v = 2732.74604, u_0 = 684.18114, v_0 = 740.39548, K = [−0.01640, 0.00698; 0.03143, −0.00944]
  Projector: f_u = 2065.25354, f_v = 2061.88752, u_0 = 461.4964, v_0 = 798.62552, K = [−0.06638, −0.00567; 0.02323, −0.00562]
  R = [0.94231, −0.01042, −0.25359; −0.00342, 0.99926, −0.00936; 0.23016, 0.00653, 0.73162], T = [−389.37651, 179.73663, 229.32452]^T

The method in [29]:
  Camera: f_u = 2708.93865, f_v = 2732.74653, u_0 = 684.16035, v_0 = 740.39749, K = [−0.01640, 0.00698; 0.03143, −0.00944]
  Projector: f_u = 2066.07579, f_v = 2060.38058, u_0 = 480.66343, v_0 = 817.00819, K = [0.00025, 0.00075; −0.02750, −0.00774]
  R = [0.94242, −0.01038, −0.25338; 0.00364, 0.99952, −0.00929; 0.23024, 0.00681, 0.73139], T = [−389.35619, 180.0085, 230.02781]^T
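With the intrinsics and extrinsics of Table 4, a 3D point can be recovered by linear triangulation from one camera pixel and the projector column obtained from the absolute phase. The sketch below assumes A_c and A_p are the 3×3 intrinsic matrices built from (f_u, f_v, u_0, v_0) after distortion correction, and that [R | T] is the projector pose in the camera frame; these conventions are assumptions, not the paper's stated formulation.

```python
import numpy as np

def triangulate(uc, vc, up, A_c, A_p, R, T):
    """Reconstruct one 3D point from an undistorted camera pixel (uc, vc)
    and the matching projector column up (decoded from the phase)."""
    P_c = A_c @ np.hstack([np.eye(3), np.zeros((3, 1))])   # camera at the origin
    P_p = A_p @ np.hstack([R, T.reshape(3, 1)])            # projector pose [R | T]
    # Two linear constraints from the camera pixel, one from the projector column.
    M = np.vstack([
        uc * P_c[2] - P_c[0],
        vc * P_c[2] - P_c[1],
        up * P_p[2] - P_p[0],
    ])
    _, _, Vt = np.linalg.svd(M)
    X = Vt[-1]
    return X[:3] / X[3]                                    # homogeneous -> Euclidean
```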
Table 5. Statistics of the measurement results of the plane (unit: mm).

Measurement Method | Defocusing Degree | Mean | SD | Max.
Plane by CMM | Null | 0.0065 | 0.0085 | 0.0264
Plane by camera-projector system with our proposed projector calibration method | 1 | 0.0138 | 0.0168 | 0.0620
  | 2 | 0.0147 | 0.0184 | 0.0837
  | 3 | 0.0159 | 0.0195 | 0.0853
  | 4 | 0.0162 | 0.0208 | 0.0864
  | 5 | 0.0172 | 0.0234 | 0.0882
Plane by camera-projector system with the projector calibration method in [29] | 1 | 0.0138 | 0.0168 | 0.0620
  | 2 | 0.0169 | 0.0210 | 0.0889
  | 3 | 0.0183 | 0.0257 | 0.0895
  | 4 | 0.0215 | 0.0303 | 0.0913
  | 5 | 0.0276 | 0.0447 | 0.0986
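The plane statistics in Table 5 can be reproduced, in principle, by fitting a least-squares plane to the measured point cloud and summarizing the point-to-plane distances; a minimal sketch, not the authors' evaluation code:

```python
import numpy as np

def plane_error_stats(points):
    """points: (N, 3) measured plane points (mm); returns (mean, SD, max)
    of the orthogonal point-to-plane distances."""
    x, y, z = points.T
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)      # fit z = a*x + b*y + c
    dist = np.abs(a * x + b * y + c - z) / np.sqrt(a * a + b * b + 1)
    return dist.mean(), dist.std(), dist.max()
```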
Table 6. Statistics of the measurement results of the hemisphere (unit: mm).

Measurement Method | Defocusing Degree | Fitting Radius | Mean | SD | Max.
Hemisphere by CMM | Null | 20.0230 | 0.0204 | 0.0473 | 0.1165
Hemisphere by camera-projector system with our proposed projector calibration method | 1 | 19.9745 | 0.0523 | 0.0587 | 0.1236
  | 2 | 19.9542 | 0.0543 | 0.0605 | 0.1328
  | 5 | 19.9537 | 0.0574 | 0.0685 | 0.1432
Hemisphere by camera-projector system with the projector calibration method in [29] | 1 | 19.9745 | 0.0523 | 0.0587 | 0.1236
  | 2 | 19.9358 | 0.0745 | 0.0733 | 0.1653
  | 5 | 19.9108 | 0.0952 | 0.0936 | 0.1832
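Similarly, the hemisphere statistics in Table 6 follow from a sphere fit; the linear least-squares fit below is one standard way to obtain the fitting radius and the radial residuals (again a sketch, not the paper's code):

```python
import numpy as np

def sphere_fit_stats(points):
    """points: (N, 3) measured hemisphere points (mm); returns the fitting
    radius and (mean, SD, max) of the absolute radial residuals."""
    x, y, z = points.T
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    b = x * x + y * y + z * z
    (cx, cy, cz, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(d + cx * cx + cy * cy + cz * cz)
    residuals = np.abs(np.linalg.norm(points - [cx, cy, cz], axis=1) - radius)
    return radius, residuals.mean(), residuals.std(), residuals.max()
```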

Share and Cite

MDPI and ACS Style

Zhang, J.; Zhang, Y.; Chen, B. Out-of-Focus Projector Calibration Method with Distortion Correction on the Projection Plane in the Structured Light Three-Dimensional Measurement System. Sensors 2017, 17, 2963. https://doi.org/10.3390/s17122963

