Calibration Method For Misaligned Catadioptric Camera
catadioptric camera calibration method. This method relaxes the assumption of perfect orthographic projection and placement. Micusik and Pajdla [17] also propose autocalibration and 3D reconstruction methods using the mirror boundary and an epipolar geometry approach. In this method, they regard the mirror boundary in an image as a circle. Because of the approximation of central projection, these two methods compensate only for minor mirror misalignment. Strelow et al. [12] proposed a model of the relation between the mirror and the camera with 6 degrees of freedom (translation and rotation). They determined the 6 parameters through nonlinear optimization. This has the advantage that the translation and rotation parameters are determined simultaneously. The disadvantage is that the accuracy of the estimated parameters is worse and depends on the initial values because of the nonlinear optimization.

In this paper, we propose a calibration method for a catadioptric camera system consisting of a surface-of-revolution mirror and a perspective camera, such as HyperOmni Vision [1]. Our method estimates the full degrees of freedom of the mirror posture. Furthermore, it is free from the volatility of nonlinear optimization, such as the local minimum problem, the initial value problem, and the computational complexity problem. We suppose that the mirror posture has five degrees of freedom, because the mirror surface is a surface of revolution. Our method uses the mirror boundary and is based on extrinsic parameter calibration using an elliptic pattern [6]. Because of the conic-based analytical method, our method avoids the initial value and local minimum problems arising from nonlinear optimization. As the mirror posture estimated analytically is not unique, we also propose a selection method for finding the best one. We conducted experiments on synthesized images and real images captured by HyperOmni Vision [1]. We also evaluate the performance of our method by image transformation and 3D reconstruction.

2 Catadioptric Camera Model

2.1 HyperOmni Vision

HyperOmni Vision [1] consists of a camera and a hyperboloidal mirror. The hyperboloidal mirror surface is expressed by Eq. (1) and has two focal points: (0, 0, +c_h) and (0, 0, -c_h). The focal point of the camera is aligned with the lower focal point, (0, 0, -c_h) (Figure 1). The image plane, x-y, is parallel to the X_M Y_M plane and is fixed at (0, 0, f_C - c_h), where f_C is the focal length of the camera.

    (X_M^2 + Y_M^2) / a_h^2 - Z_M^2 / b_h^2 = -1,  where c_h = \sqrt{a_h^2 + b_h^2}    (1)

Figure 1: Optics of HyperOmni Vision (a: top view, b: side view)

A point p(x, y) in the image is related to the reflection point (X_M, Y_M, Z_M) on the mirror by:

    Y_M / X_M = -(y - y_c) / (x - x_c)    (2)
    Z_M = \sqrt{X_M^2 + Y_M^2} \tan\alpha_C + c_h    (3)
    \alpha_C = \tan^{-1} [ ((b_h^2 + c_h^2) \sin\beta_C - 2 b_h c_h) / ((b_h^2 - c_h^2) \cos\beta_C) ]    (4)
    \beta_C = \tan^{-1} [ f_C / \sqrt{(x - x_c)^2 + (y - y_c)^2} ]    (5)

Here a_h and b_h are parameters of the hyperboloidal surface, \alpha_C is the depression angle, and \beta_C is the angle between the optical axis and the projected point (x, y). As this sensor has the same optical characteristic as a common camera (a single viewpoint), we can easily transform the catadioptric image into a perspective image.

If the mirror is misaligned, the single viewpoint does not exist and the image is distorted. The image distortion can be corrected up to a certain level by trial-and-error adjustment of the parameters f_C, x_c, and y_c. However, the relations expressed in Figure 1 and Eqs. (2)-(5) are then incorrect. The relation between the misalignment and the incident rays is analyzed by Swaminathan et al. [8].

2.2 Catadioptric Camera Model for Mirror Misalignment

To accommodate the misalignment of the mirror, we apply a camera model and a mirror model separately. The omnidirectional camera model for mirror misalignment consists of three coordinate systems (image, camera, and mirror), and the correspondence between a pixel and a ray is calculated by ray tracing. The camera model expresses the coordinate transformation between the image coordinate system and the camera coordinate system. The mirror posture expresses the coordinate transformation between the camera coordinate system and the mirror coordinate system. The mirror model expresses the ray reflection by the mirror. In this section, we describe the camera model, the mirror model, and the ray tracing.
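As a concrete illustration of the aligned-mirror mapping of Sect. 2.1, Eqs. (2)-(5) can be sketched in code as follows. This is a minimal sketch, not the paper's implementation; the function and parameter names are ours, and the azimuth sign convention follows Eq. (2) but depends on the axis conventions of the actual sensor.

```python
import math

def pixel_to_incident_ray(x, y, xc, yc, fC, ah, bh):
    """Unit direction of the incident ray for image point (x, y),
    assuming a perfectly aligned hyperboloidal mirror (Eqs. (2)-(5)).
    The ray passes through the upper focal point (0, 0, +c_h)."""
    ch = math.sqrt(ah * ah + bh * bh)
    dx, dy = x - xc, y - yc
    rho = math.hypot(dx, dy)
    if rho == 0.0:                       # pixel at the image center: ray along the axis
        return (0.0, 0.0, 1.0)
    beta = math.atan2(fC, rho)                                    # Eq. (5)
    alpha = math.atan(((bh * bh + ch * ch) * math.sin(beta) - 2.0 * bh * ch)
                      / ((bh * bh - ch * ch) * math.cos(beta)))   # Eq. (4)
    # Eq. (2): the azimuth of the incident ray is opposite to the pixel offset;
    # alpha is the depression angle (Eq. (3) would give the mirror point itself).
    return (-dx / rho * math.cos(alpha),
            -dy / rho * math.cos(alpha),
            math.sin(alpha))
```

The mirror parameters passed in (e.g. `ah = 30.0`, `bh = 40.0`) are illustrative values, not values from the paper.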
Figure 2: Reflection in the mirror coordinate system

2.2.1 Camera Model

We assume that the camera model is a perspective camera model. Let x̃ = (x, y, 1)^T and X̃_C = (X_C, Y_C, Z_C, 1)^T be the augmented vectors of points in the image coordinate system and in the camera coordinate system, respectively. The relation between x̃ and X̃_C is expressed by the following equations:

    x̃ = s K X_C    (6)

    K = [ f_C k_x   f_C k_s   x_I0 ]
        [    0      f_C k_y   y_I0 ]    (7)
        [    0         0        1  ]

where s is a scale factor, K is the camera intrinsic matrix, f_C is the focal length, (x_I0, y_I0) is the image center, k_x and k_y are the scale factors of the pixel size, and k_s is the skewness of the x and y axes. Although any camera model may be used, the rank of the camera intrinsic matrix must be three to estimate the mirror posture.

2.2.2 Mirror Model

In this section, we explain the mirror coordinate system, the mirror model, and the reflection model. The mirror surface must be a surface of revolution, although a hyperboloidal mirror is not necessary for estimating the mirror posture. To explain the mirror model, we use a hyperboloidal mirror as an example.

The hyperboloidal mirror expressed by Eq. (1) is used as the mirror model. The point P_M at which a ray from the camera intersects the mirror surface is calculated from the focal point F_M = (F_MX, F_MY, F_MZ)^T of the camera in the mirror coordinate system, the direction vector V_M = (V_MX, V_MY, V_MZ)^T of the ray in the mirror coordinate system, and Eq. (1). A ray from the camera can be defined as F_M + k V_M. The coefficient k at the intersection point P_M is calculated by the following equations:

    k = (-\beta_M \pm \sqrt{\beta_M^2 - \alpha_M \gamma_M}) / \alpha_M    (8)
    \alpha_M = (V_MX^2 + V_MY^2) / a^2 - V_MZ^2 / b^2    (9)
    \beta_M = (V_MX F_MX + V_MY F_MY) / a^2 - V_MZ F_MZ / b^2    (10)
    \gamma_M = (F_MX^2 + F_MY^2) / a^2 - F_MZ^2 / b^2 + 1    (11)

Let N be the normal vector of the mirror at the intersection point P_M. Figure 2 shows the geometric relation between P_M, N, V_M, F_M, and the direction vector V_Mout of the ray reflected at P_M. N satisfies the following condition because of visibility:

    (N, V_M) < 0,    (12)

where (,) denotes the inner product. Equation (8) has two solutions, and we choose the k that satisfies Eq. (12).

To calculate the tangential plane, Eq. (1) of the hyperboloidal surface is rewritten as

    f(X_M, Y_M) = Z = b \sqrt{(X_M^2 + Y_M^2) / a^2 + 1}  (Z > 0),    (13)

and its partial derivatives are f_XM and f_YM. The normal vector N is

    N = (f_XM(P_M), f_YM(P_M), -1).    (14)

Finally, the direction vector V_Mout of the ray reflected at the intersection point P_M is calculated by

    V_Mout = V_M - 2 N (N, V_M),    (15)

where V_M is the direction vector of the ray from the camera. The viewpoint of this ray is the intersection point P_M.

2.2.3 Ray Tracing

Ray tracing is implemented by using coordinate transformations. Figure 3 shows the relation of the coordinate systems and the transformations. A ray from the camera in camera coordinates, V_C, and the focal point of the camera in camera coordinates, F_C, are transformed to the mirror coordinate system by using the rotation matrix R_C and the translation vector T_C as follows:

    V_M = R_C V_C,    (16)
    F_M = R_C F_C + T_C.    (17)

As mentioned in Sect. 2.2.2, the ray from the camera is reflected at the intersection point P_M, and the reflected ray V_Mout can also be calculated. The viewpoint of the reflected ray is the intersection point P_M.
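Putting Eqs. (8)-(17) together, one pass of the ray tracing can be sketched as below. This is a sketch with NumPy, not the paper's implementation: the function name is ours, and the normal from Eq. (14) is normalized so that Eq. (15) acts as a pure reflection.

```python
import numpy as np

def trace_ray(VC, FC, RC, TC, a, b):
    """Trace one camera ray through the misaligned-mirror model:
    transform it to mirror coordinates (Eqs. (16)-(17)), intersect it
    with the hyperboloid (Eqs. (8)-(11)), and reflect it (Eqs. (12)-(15)).
    Returns (PM, VMout) or None if the ray misses the visible sheet."""
    VM = RC @ VC                                                    # Eq. (16)
    FM = RC @ FC + TC                                               # Eq. (17)
    aM = (VM[0]**2 + VM[1]**2) / a**2 - VM[2]**2 / b**2             # Eq. (9)
    bM = (VM[0]*FM[0] + VM[1]*FM[1]) / a**2 - VM[2]*FM[2] / b**2    # Eq. (10)
    gM = (FM[0]**2 + FM[1]**2) / a**2 - FM[2]**2 / b**2 + 1.0       # Eq. (11)
    disc = bM**2 - aM * gM
    if disc < 0.0:
        return None
    root = np.sqrt(disc)
    for k in ((-bM + root) / aM, (-bM - root) / aM):                # Eq. (8)
        PM = FM + k * VM
        if PM[2] <= 0.0:                 # mirror sheet has Z > 0 (Eq. (13))
            continue
        w = np.sqrt((PM[0]**2 + PM[1]**2) / a**2 + 1.0)
        N = np.array([b * PM[0] / (a**2 * w),    # Eq. (14): gradient of
                      b * PM[1] / (a**2 * w),    # f(X, Y) = b*sqrt((X^2+Y^2)/a^2 + 1)
                      -1.0])
        N /= np.linalg.norm(N)           # normalize so Eq. (15) reflects exactly
        if N @ VM < 0.0:                 # visibility condition, Eq. (12)
            return PM, VM - 2.0 * N * (N @ VM)                      # Eq. (15)
    return None
```

As a sanity check, for an aligned mirror (R_C = I, T_C = (0, 0, -c_h)^T with illustrative parameters a = 3, b = 4, c_h = 5), a ray along the optical axis hits the mirror apex (0, 0, 4) and is reflected straight back down.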
Figure 3: Coordinate systems

Figure 4: Mirror boundary (broken line)

Figure 5: Relationships among the coordinate systems and the rotation matrices

3 Mirror Posture Estimation

This section presents the mirror posture estimation method. Our method uses the ellipse formed in the image by the mirror boundary and is based on extrinsic parameter calibration using a circular pattern [6]. The mirror postures estimated by this method are not unique: there are four possible solutions. Therefore, we also propose a selection method for finding the best one.

3.1 Mirror Posture Estimation Based on a Conic Curve

We apply Chen's method [6], based on conic fitting, to estimate the mirror postures.

In order to estimate the mirror posture, we assume the following conditions: the camera is calibrated, the rank of the camera intrinsic matrix K is three (full rank), the mirror boundary is within the input image, and its radius r is known.

The mirror boundary is projected onto the omnidirectional image as an ellipse (conic) curve, whose equation is

    a x^2 + b y^2 + 2 f x + 2 g y + 2 h x y + c = 0,    (20)

or, in matrix form,

    x̃^T Q_I x̃ = 0,    (21)

where

    Q_I = [ a  h  f ]
          [ h  b  g ]    (22)
          [ f  g  c ]

and x̃ = (x, y, 1)^T is the augmented vector of a point in the image coordinate system. The relation between a point in the image coordinate system and one in the camera coordinate system is expressed by Eq. (6). By substituting Eq. (6) into Eq. (21), we obtain

    s^2 X_C^T Q_e X_C = 0,    (23)

where

    Q_e = K^T Q_I K.    (24)

By eigenvalue decomposition, we obtain

    Q_e = V Λ V^T.    (25)
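The chain from the fitted ellipse coefficients to the decomposition of Eq. (25) could look as follows. This is a sketch under our own naming; the eigenvalue ordering anticipates the conditions of Eqs. (32)-(33) in the Appendix.

```python
import numpy as np

def decompose_boundary_conic(coeffs, K):
    """From ellipse coefficients (a, b, c, f, g, h) of Eq. (20), build Q_I
    (Eq. (22)), move it into camera coordinates (Eq. (24)), and return the
    eigendecomposition Q_e = V Lambda V^T (Eq. (25)), with eigenvalues
    ordered so that lam1*lam2 > 0, |lam1| >= |lam2|, and lam3 has the
    opposite sign."""
    a, b, c, f, g, h = coeffs
    QI = np.array([[a, h, f],
                   [h, b, g],
                   [f, g, c]], dtype=float)      # Eq. (22)
    Qe = K.T @ QI @ K                            # Eq. (24)
    lam, V = np.linalg.eigh(Qe)                  # Qe is symmetric
    pos = [i for i in range(3) if lam[i] > 0.0]
    neg = [i for i in range(3) if lam[i] < 0.0]
    # For a nondegenerate ellipse, two eigenvalues share one sign and the
    # third has the opposite sign; put the lone one last.
    pair, (i3,) = (pos, neg) if len(pos) == 2 else (neg, pos)
    i1, i2 = sorted(pair, key=lambda i: -abs(lam[i]))
    order = [i1, i2, i3]
    return lam[order], V[:, order]
```

For instance, the unit circle x^2 + y^2 - 1 = 0 with K = I yields eigenvalues (1, 1, -1), and the reordered decomposition still reconstructs Q_e exactly.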
We consider a circle of radius r centered at (x_0, y_0, z_0) on the Z = z_0 plane. According to Chen's method [6], the circle can be written in quadratic form:

    Q_C = [     1           0         -x_0/z_0                   ]
          [     0           1         -y_0/z_0                   ]    (26)
          [ -x_0/z_0    -y_0/z_0   (x_0^2 + y_0^2 - r^2)/z_0^2   ]

    X^T Q_C X = 0.    (27)

We consider the rotation from the coordinate system O_E - X_E Y_E Z_E (see Figure 5) to the mirror coordinate system O_M - X_M Y_M Z_M. The Z-axis of the mirror coordinate system is parallel to the normal vector N_C of the cross-section surface P_C. To express this rotation, we introduce the rotation matrix U, and the relation can be expressed as

    -(1/λ_3) U^T Λ U = k Q_C.    (28)

To solve this equation, we obtain U by using Chen's method [6] (described in detail in the Appendix). By substituting U and r into Eq. (28), we can compute C_0 = [x_0, y_0, z_0], the center of the circle. The rotation matrix R from the mirror coordinate system to the camera coordinate system is obtained by

    R = V U.    (29)

Figure 5 shows the relationships among the coordinate systems and the rotation matrices. The center of the circle in the camera coordinate system, C_C, is obtained by C_C = R C_0. Since R is a rotation matrix, it can be represented by three orthogonal unit vectors, [r_1, r_2, r_3]. Specifically, r_3 is the normal vector of the circle, i.e., the aspect of the mirror, in the camera coordinate system.

Figure 6: Synthesized omnidirectional images (aligned mirror; misaligned mirror)

because the catadioptric camera can be assumed to be a normal camera. In the case of misalignment, N does not exist. However, if the line is very far from the camera (i.e., k_i → ∞), Eq. (30) can be assumed to become

    (V_Mout_i, N) → 0,    (31)

and this can be regarded as the case of an aligned mirror. We apply this assumption to select the mirror posture. If the posture is correct, N exists and satisfies Eq. (31); otherwise, N does not satisfy the condition because the rays do not intersect the line. We estimate N by minimizing Σ_i (V_Mout_i, N)^2; such an N is the eigenvector associated with the minimum eigenvalue of Σ_i V_Mout_i V_Mout_i^T. The minimum eigenvalue can be regarded as an evaluation value, and the mirror posture with the minimum evaluation value is the correct posture.
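The selection rule can be sketched as follows: for each candidate posture, score the reflected-ray directions V_Mout_i computed for points on a very distant line by the minimum eigenvalue of Σ_i V_Mout_i V_Mout_i^T, and keep the posture with the smallest score. This is a sketch under our own naming; it assumes the reflected rays have already been traced for each candidate.

```python
import numpy as np

def select_posture(candidate_ray_sets):
    """candidate_ray_sets[p] holds the reflected-ray directions V_Mout_i
    obtained under candidate posture p for points on a distant line.
    The minimum eigenvalue of M = sum_i v_i v_i^T equals the minimum of
    sum_i (v_i, N)^2 over unit vectors N, so the posture with the smallest
    such value best admits a common normal N (Eq. (31))."""
    scores = []
    for rays in candidate_ray_sets:
        V = np.asarray(rays, dtype=float)
        V = V / np.linalg.norm(V, axis=1, keepdims=True)   # unit directions
        M = V.T @ V                                        # sum_i v_i v_i^T
        scores.append(float(np.linalg.eigvalsh(M)[0]))     # smallest eigenvalue
    return int(np.argmin(scores)), scores
```

A set of coplanar ray directions scores (near) zero, while rays with no common normal score strictly higher, which is exactly the discrimination the selection method relies on.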
(a) Uncalibrated, aligned mirror; (b) uncalibrated, misaligned mirror; (c) calibrated, aligned mirror; (d) calibrated, misaligned mirror
Figure 9: Cube, normal vectors, and angles

Table 1: Angles between each normal vector

                  θ_1 [deg]   θ_2 [deg]   θ_3 [deg]
    Uncalibrated     58.4        88.1       126.9
    Hand adjusted    81.4        79.3       134.8
    Calibrated       84.9        89.9        82.9

five degrees of freedom of the mirror posture and is free from the volatility of nonlinear optimization. Our method uses a conic curve in the image: the mirror boundary. Because of the conic-based analytical method, our method avoids the initial value problem arising from nonlinear optimization. We also proposed a method for mirror posture selection, because the estimation yields four solutions for the mirror posture.

We conducted experiments on synthesized images and real images to evaluate the performance of our method, and discussed its accuracy.

References

[1] K. Yamazawa, Y. Yagi, and M. Yachida, "Omnidirectional imaging with hyperboloidal projection," Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 2, pp. 1029–1034 (1993).

[2] S. K. Nayar, "Catadioptric omnidirectional camera," Proc. of IEEE Computer Vision and Pattern Recognition, pp. 482–488 (1997).

[3] J. Gaspar, C. Decco, J. Okamoto Jr., and J. Santos-Victor, "Constant resolution omnidirectional cameras," Proc. of the Third Workshop on Omnidirectional Vision, pp. 27–34 (2002).

[4] R. Hicks and R. Perline, "Equi-areal catadioptric sensors," Proc. of the Third Workshop on Omnidirectional Vision, pp. 13–18 (2002).

[5] R. Lenz and R. Tsai, "Techniques for calibration of the scale factor and image center for high accuracy 3-D machine vision metrology," IEEE Trans. on Pattern Analysis and Machine Intelligence, 10, 5, pp. 713–720 (1988).

[6] Q. Chen, H. Wu, and T. Wada, "Camera calibration with two arbitrary coplanar circles," Proc. of European Conference on Computer Vision, Vol. 3, pp. 521–532 (2004).

[7] C. Yang, F. Sun, and Z. Hu, "Planar conic based camera calibration," Proc. of International Conference on Pattern Recognition, Vol. 1, pp. 555–558 (2000).

[9] M. V. Srinivasan, "A new class of mirrors for wide angle imaging," Proc. of the Fourth Workshop on Omnidirectional Vision and Camera Networks (2003).

[10] Y. Yagi and M. Yachida, "Real-time generation of environmental map and obstacle avoidance using omnidirectional image sensor with conic mirror," Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 160–165 (1991).

[11] S. B. Kang, "Catadioptric self-calibration," Proc. of IEEE Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. 201–207 (2000).

[12] D. Strelow, J. Mishler, D. Koes, and S. Singh, "Precise omnidirectional camera calibration," Proc. of IEEE Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. 689–694 (2001).

[13] D. G. Aliaga, "Accurate catadioptric calibration for real-time pose estimation of room-size environments," Proc. of IEEE International Conference on Computer Vision, Vol. 1, pp. 127–134 (2001).

[14] C. Geyer and K. Daniilidis, "Paracatadioptric camera calibration," IEEE Trans. on Pattern Analysis and Machine Intelligence, 24, 5, pp. 687–695 (2002).

[15] M. A. Abidi and T. Chandra, "A new efficient and direct solution for pose estimation using quadrangular targets: Algorithm and evaluation," IEEE Trans. on Pattern Analysis and Machine Intelligence, 17, 5, pp. 534–538 (1995).

[16] B. Micusik and T. Pajdla, "Para-catadioptric camera auto-calibration from epipolar geometry," Proc. of Asian Conference on Computer Vision, Vol. 2, pp. 748–753 (2004).

[17] B. Micusik and T. Pajdla, "Autocalibration & 3D reconstruction with non-central catadioptric cameras," Proc. of IEEE Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. 58–65 (2004).

[18] J. P. Barreto and H. Araujo, "Geometric properties of central catadioptric line images," Proc. of European Conference on Computer Vision, Vol. 4, pp. 237–251 (2002).

[19] X. Ying and Z. Hu, "Catadioptric camera calibration using geometric invariants," Proc. of IEEE International Conference on Computer Vision, pp. 1351–1358 (2003).
Appendix

We show the calculation of the rotation matrix U in Chen's method [6].

Q_e has three eigenvalues λ_1, λ_2, and λ_3 satisfying

    λ_1 λ_2 λ_3 < 0.    (32)

Without loss of generality, we can assume that

    λ_1 λ_2 > 0,  |λ_1| > |λ_2|.    (33)

Let v_1, v_2, and v_3 be the normalized eigenvectors corresponding to λ_1, λ_2, and λ_3, respectively. Using the eigenvalues and eigenvectors, Q_e can be decomposed as in Eq. (25), where

    V = [v_1, v_2, v_3],  Λ = diag(λ_1, λ_2, λ_3).    (34)

We assume that

    1/α^2 = -λ_1/λ_3,  1/β^2 = -λ_2/λ_3    (35)

and

    Q_E = -(1/λ_3) Λ = diag(1/α^2, 1/β^2, -1).    (36)

Since U is a rotation matrix, it can be represented by three orthogonal unit vectors, U = [u_1, u_2, u_3], where u_i = [u_ix, u_iy, u_iz]^T (i = 1, 2, 3). We also have

    U^T U = I.    (37)

where

    δ = (1 + 1/β^2) / (1 + 1/α^2)    (42)

and α is a free variable; l and m are arbitrary integers. There are four combinations of l and m, because each of l and m is either even or odd. Therefore, we obtain four solutions.
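The bookkeeping of Eqs. (32)-(36) and (42) can be checked numerically with the sketch below. The function name is ours, and the closed-form entries of U (Eqs. (38)-(41), not reproduced in this text) are outside its scope.

```python
import numpy as np

def appendix_quantities(lam):
    """Given eigenvalues (lam1, lam2, lam3) ordered as in Eqs. (32)-(33),
    return 1/alpha^2 and 1/beta^2 (Eq. (35)), Q_E (Eq. (36)), and
    delta (Eq. (42))."""
    l1, l2, l3 = lam
    assert l1 * l2 * l3 < 0.0                       # Eq. (32)
    assert l1 * l2 > 0.0 and abs(l1) >= abs(l2)     # Eq. (33)
    inv_a2 = -l1 / l3                               # 1/alpha^2, Eq. (35)
    inv_b2 = -l2 / l3                               # 1/beta^2,  Eq. (35)
    QE = np.diag([inv_a2, inv_b2, -1.0])            # Eq. (36): -(1/lam3) * Lambda
    delta = (1.0 + inv_b2) / (1.0 + inv_a2)         # Eq. (42)
    return inv_a2, inv_b2, QE, delta
```

The four postures then arise from the four parities of the integers l and m in the omitted expressions for the columns of U.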