
Projector Calibration Presentation


Daniel Moreno

October 2012
Overview

Geometric calibration
• Camera intrinsics: Kcam
• Projector intrinsics: Kproj
• Projector-Camera extrinsics: rotation and translation R, T

[Figure: camera frame (X, Y, Z) with Kcam and projector frame (X', Y', Z') with Kproj, related by R, T]

The simplest structured-light system consists of a camera and a data projector.

2
Application: 3D scanning

Projector-Camera correspondences + Calibration = Pointcloud

[Figure: pipeline with four steps: 1. Data acquisition, 2. Decode (projector rows and columns), 3. Triangulation, 4. Mesh]

Pointclouds from several viewpoints can be merged into a single one and used to build a 3D model.

3
Camera calibration: well-known problem
Pinhole model + radial distortion

    K = ⎡ fx  s  cx ⎤
        ⎢  0 fy  cy ⎥
        ⎣  0  0   1 ⎦

    x = K · L(X; k1, k2, k3, k4)

X: 3D point
K: camera intrinsics
k1, …, k4: distortion coefficients
x: projection of X into the image plane

How do we find correspondences? Use an object of known dimensions (a checkerboard)
and capture images from different viewpoints:

    x1 = K · L(R1·X + T1; k1, k2, k3, k4)
    x2 = K · L(R2·X + T2; k1, k2, k3, k4)
    x3 = K · L(R3·X + T3; k1, k2, k3, k4)
    …

If we have enough X ↔ x point correspondences we can solve for all the unknowns.

4
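The projection model above can be sketched in NumPy. One caveat: the slides only say "radial distortion" with coefficients k1…k4, so the grouping below (k1, k2 radial; k3, k4 tangential, as in the common OpenCV-style model) is an assumption, not something the deck specifies.

```python
import numpy as np

def project(X, K, R, T, k):
    """Sketch of x = K * L(R X + T; k1..k4) for Nx3 points X.

    Assumes k = (k1, k2, k3, k4) with k1, k2 radial and k3, k4 tangential
    coefficients (the slides only say "radial distortion").
    """
    Xc = X @ R.T + T                                   # world -> camera frame
    x, y = Xc[:, 0] / Xc[:, 2], Xc[:, 1] / Xc[:, 2]    # normalized coordinates
    r2 = x * x + y * y
    k1, k2, k3, k4 = k
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xd = x * radial + 2 * k3 * x * y + k4 * (r2 + 2 * x * x)
    yd = y * radial + k3 * (r2 + 2 * y * y) + 2 * k4 * x * y
    fx, s, cx = K[0]                                   # intrinsics: row [fx s cx]
    fy, cy = K[1, 1], K[1, 2]
    return np.stack([fx * xd + s * yd + cx, fy * yd + cy], axis=1)
```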
Projector calibration: ?

Use the pinhole model to describe the projector:
• Projectors work as an inverse camera

    Kproj = ⎡ fx  s  cx ⎤
            ⎢  0 fy  cy ⎥
            ⎣  0  0   1 ⎦

    x = Kproj · L(X; k1, k2, k3, k4)

If we model the projector the same as our camera, we would like to calibrate the
projector just as we do for the camera:
• We need correspondences between 3D world points and projector image plane points: X ↔ x
• But the projector cannot capture images

Challenge: How do we find point correspondences?

5
Related works

Several projector calibration methods have been proposed*; they can be divided
into three groups:

1. Methods that rely on camera calibration
• First the camera is calibrated; then the camera calibration is used to find the
  3D world coordinates of the projected pattern
• Inaccuracies in the camera calibration translate into errors in the projector
  calibration

2. Methods that find projector correspondences using homographies between planes
• Cannot model projector lens distortion because of the linearity of the
  transformation

3. Methods that are too difficult to perform
• Require special equipment or calibration artifacts
• Require color calibration
• …

(*) See the paper for references

Existing methods were not accurate enough or not practical.

6
Proposed method: overview

Features:

Simple to perform:
- no special equipment required
- reuses existing components

Accurate:
- no constraints on the mathematical model used to describe the projector
- we use the full pinhole model with radial distortion (as for cameras)

Robust:
- can handle small decoding errors

Block diagram: Acquisition → Decoding → { Projector intrinsics, Camera intrinsics, System extrinsics }

7
Proposed method: acquisition

Traditional camera calibration
• requires a planar checkerboard (easy to make with a printer)
• capture pictures of the checkerboard from several viewpoints

Structured-light system calibration
• use the same planar checkerboard
• capture structured-light sequences of the checkerboard from several viewpoints

8
Proposed method: decoding

Decoding depends on the projected pattern
• The method does not rely on any specific pattern

Our implementation uses complementary Gray code patterns
• Robust to lighting conditions and different object colors (notice that we used
  the standard B&W checkerboard)
• Does not require photometric calibration (as phase shifting does)
• We prioritize calibration accuracy over acquisition speed
• Reasonably fast to project and capture: if the system is synchronized at 30 fps,
  the 42 images used for each pose are acquired in 1.4 seconds

Our implementation decodes the pattern using "robust pixel classification" (*)
• High-frequency patterns are used to separate the direct and global light
  components of each pixel
• Once the direct and global components are known, each pixel is classified as
  ON, OFF, or UNCERTAIN using a simple set of rules

9    (*) Y. Xu and D. G. Aliaga, "Robust pixel classification for 3D modeling with structured light"
Proposed method: projector calibration

Once the structured-light pattern is decoded we have a mapping between projector
and camera pixels:

1) Each camera pixel is associated with a projector row and column, or set to UNCERTAIN

   For each (x, y): Map(x, y) = (row, col) or UNCERTAIN

2) The map is not bijective: many camera pixels correspond to the same projector pixel

3) Checkerboard corners are not located at integer pixel locations

10
Proposed method: projector calibration

Solution: local homographies
1. The surface is locally planar: actually the complete checkerboard is a plane
2. Radial distortion is negligible in a small neighborhood
3. Radial distortion is significant in the complete image:
   • a single global homography is not enough

[Figure: corners p1, …, pn in the captured image mapped to q1, …, qn in the
projected image by local homographies: q1 = H1·p1, q2 = H2·p2, …, qn = Hn·pn]

For each checkerboard corner solve:

    𝐻 = argmin_𝐻 Σ_{∀p} ‖q − 𝐻·p‖²,  then  q = 𝐻·p

    𝐻 ∈ ℝ3×3,  p = [x, y, 1]T,  q = [col, row, 1]T

11
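The per-corner least-squares fit above can be sketched with the standard DLT estimator. A caveat: DLT minimizes the algebraic error ‖A·h‖ rather than the geometric objective on the slide, which is the usual approximation; the paper's exact solver may differ.

```python
import numpy as np

def fit_local_homography(p, q):
    """Estimate H from neighborhood correspondences p (Nx2, camera image)
    and q (Nx2, projector (col, row)).

    Standard DLT: each correspondence contributes two rows of A h = 0;
    h is the right singular vector with the smallest singular value.
    """
    A = []
    for (x, y), (u, v) in zip(p, q):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the scale so H[2, 2] = 1

def apply_homography(H, p):
    """Map a camera point p = (x, y) to projector coordinates q = H [x, y, 1]^T."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

For well-conditioned neighborhoods (a few dozen decoded pixels around each corner) the algebraic and geometric solutions are essentially identical.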
Proposed method: projector calibration

Summary:
1. Decode the structured-light pattern: camera ↔ projector map
2. Find checkerboard corner locations in camera image coordinates
3. Compute a local homography H for each corner
4. Translate each corner from image coordinates x to projector coordinates x' by
   applying the corresponding local homography H:

       x' = H · x

5. Using the correspondences between projector corner coordinates and 3D world
   corner locations, X ↔ x', find the projector intrinsic parameters:

       x'1 = Kproj · L(R1·X + T1; k1, k2, k3, k4)
       x'2 = Kproj · L(R2·X + T2; k1, k2, k3, k4)
       x'3 = Kproj · L(R3·X + T3; k1, k2, k3, k4)
       …

   No difference with camera calibration!

12
Camera calibration and system extrinsics

Camera intrinsics
Using the corner locations in image coordinates and their 3D world coordinates,
we calibrate the camera as usual
- Note that no extra images are required

System extrinsics
Once the projector and camera intrinsics are known, we calibrate the extrinsic
parameters (R and T) as is done for camera-camera systems.

Using the previous correspondences, x ↔ x', we fix the coordinate system at the
camera and solve for R and T:

    x̃1 = L⁻¹(Kcam⁻¹·x1; k1, k2, k3, k4)    x'1 = Kproj · L(R·x̃1 + T; k'1, k'2, k'3, k'4)
    x̃2 = L⁻¹(Kcam⁻¹·x2; k1, k2, k3, k4)    x'2 = Kproj · L(R·x̃2 + T; k'1, k'2, k'3, k'4)
    x̃3 = L⁻¹(Kcam⁻¹·x3; k1, k2, k3, k4)    x'3 = Kproj · L(R·x̃3 + T; k'1, k'2, k'3, k'4)
    …

13
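The step x̃ = L⁻¹(Kcam⁻¹·x) has no closed form, so it is commonly computed by fixed-point iteration. A sketch, assuming the same unspecified coefficient grouping as before (k1, k2 radial, k3, k4 tangential) and zero skew when inverting K:

```python
import numpy as np

def undistort_normalized(u, v, K, k, iters=20):
    """Compute x~ = L^{-1}(K^{-1} x; k): normalized, distortion-free coordinates.

    L has no closed-form inverse; this uses the common fixed-point iteration.
    Assumes k = (k1, k2, k3, k4) with k1, k2 radial and k3, k4 tangential,
    and zero skew in K (both assumptions, not stated on the slides).
    """
    k1, k2, k3, k4 = k
    xd = (u - K[0, 2]) / K[0, 0]        # distorted normalized coordinates
    yd = (v - K[1, 2]) / K[1, 1]
    x, y = xd, yd                       # initial guess: ignore distortion
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1 + k1 * r2 + k2 * r2 ** 2
        dx = 2 * k3 * x * y + k4 * (r2 + 2 * x * x)
        dy = k3 * (r2 + 2 * y * y) + 2 * k4 * x * y
        x = (xd - dx) / radial          # invert the forward distortion model
        y = (yd - dy) / radial
    return x, y
```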
Calibration software

Software
The proposed calibration method can be implemented to run fully automatically:
- The user provides a folder with all the images
- On pressing "calibrate", the software automatically extracts the checkerboard
  corners, decodes the structured-light pattern, and calibrates the system

Algorithm
1. Detect checkerboard corner locations for each plane orientation
2. Estimate global and direct light components
3. Decode structured-light patterns
4. Compute a local homography for each checkerboard corner
5. Translate corner locations into projector coordinates using local homographies
6. Calibrate camera intrinsics using image corner locations
7. Calibrate projector intrinsics using projector corner locations
8. Fix projector and camera intrinsics and calibrate system extrinsic parameters
9. Optionally, all the parameters, intrinsic and extrinsic, can be optimized together

14
Results

Comparison with existing software: procamcalib
• Projector-Camera Calibration Toolbox
• http://code.google.com/p/procamcalib/
• Paper checkerboard used to find the plane equation
• Projected checkerboard used for calibration

Reprojection error comparison

Method                   Camera    Projector
Proposed                 0.3288    0.1447
With global homography   0.3288    0.2176
Procamcalib              0.3288    0.8671

• Only projector calibration is compared
• The same camera intrinsics are used for all methods
• "Global homography" means that a single homography is used to translate all corners

15
Results

Example of projector lens distortion

Distortion coefficients

k1        k2       k3        k4
-0.0888   0.3365   -0.0126   -0.0023

Non-trivial distortion!

16
Results

Error distribution on a scanned 3D plane model
[Figure: error distribution on the plane]

Laser scanner comparison
[Figure: 3D model with small details, reconstructed using SSD; compared via Hausdorff distance]

17
Conclusions

 It works 
 No special setup or materials required
 Very similar to standard stereo camera calibration
 Reuse existing software components
Camera calibration software
Structured-light projection, capture, and decoding software
 Local homographies effectively handle projector lens distortion
 Adding projector distortion model improves calibration
accuracy
 Well-calibrated structured-light systems have a precision
comparable to some laser scanners

18
Gray vs. binary codes

Dec   Bin   Gray
0     000   000
1     001   001
2     010   011
3     011   010
4     100   110
5     101   111
6     110   101

[Figure: binary code patterns vs. Gray code patterns]

19
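The table is the standard reflected Gray code, where consecutive values differ in exactly one bit (so a single mis-read bit shifts the decoded stripe by at most one position). Both directions of the conversion are one-liners:

```python
def binary_to_gray(n: int) -> int:
    """Reflected Gray code: adjacent codes differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray code by cumulatively XOR-ing the shifted bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```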
Direct/Global light components

    L⁺ = Ld + α·Lg + b·(1−α)·Lg        L⁻ = b·Ld + (1−α)·Lg + b·α·Lg

(α: fraction of illuminated pattern pixels, α = 1/2 for the high-frequency
patterns; b: projector black level)

    Ld = (L⁺ − L⁻) / (1 − b)           Lg = 2·(L⁻ − b·L⁺) / (1 − b²)

    L̂⁺ = max_{0≤i<K} I_i               L̂⁻ = min_{0≤i<K} I_i

Robust pixel classification

    Ld < m              → UNCERTAIN
    Ld > Lg ∧ p > p̄     → ON
    Ld > Lg ∧ p < p̄     → OFF
    p < Ld ∧ p̄ > Lg     → OFF
    p > Lg ∧ p̄ < Ld     → ON
    otherwise           → UNCERTAIN

20
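The separation formulas and classification rules can be sketched per pixel. The black level b and threshold m below are illustrative defaults, not values from the slides; p and p̄ are the pixel's intensities under a pattern and its complement.

```python
def classify_pixel(p, p_bar, images, b=0.3, m=5.0):
    """Separate direct/global light and classify one pixel as ON/OFF/UNCERTAIN.

    images: intensities I_i of this pixel under the K high-frequency patterns;
    p, p_bar: intensities under a pattern and its complement;
    b (projector black level) and m (minimum direct-light threshold) are
    illustrative values, not from the slides.
    """
    L_max, L_min = max(images), min(images)
    Ld = (L_max - L_min) / (1.0 - b)                  # direct component
    Lg = 2.0 * (L_min - b * L_max) / (1.0 - b * b)    # global component
    if Ld < m:                                        # too little signal
        return "UNCERTAIN"
    if Ld > Lg:                                       # direct light dominates
        if p > p_bar:
            return "ON"
        if p < p_bar:
            return "OFF"
    if p < Ld and p_bar > Lg:
        return "OFF"
    if p > Lg and p_bar < Ld:
        return "ON"
    return "UNCERTAIN"
```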
Triangulation

    λ1·u1 = R1·X + T1
    λ2·u2 = R2·X + T2

Multiplying each equation by the cross-product (skew-symmetric) matrix û of its
ray eliminates the unknown depths λ1, λ2, since û·u = u × u = 0:

    û1·λ1·u1 = û1·R1·X + û1·T1 = 0
    û2·λ2·u2 = û2·R2·X + û2·T2 = 0

In homogeneous coordinates:

    ⎡ û1·R1   û1·T1 ⎤ ⎡ X ⎤
    ⎣ û2·R2   û2·T2 ⎦ ⎣ 1 ⎦ = 0

21
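The homogeneous system can be solved linearly with an SVD: the point is the right singular vector with the smallest singular value, dehomogenized. A minimal sketch, taking u1 and u2 as normalized image rays:

```python
import numpy as np

def skew(u):
    """Cross-product matrix [u]_x such that [u]_x v = u x v."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def triangulate(u1, u2, R1, T1, R2, T2):
    """Solve [[u1]x R1, [u1]x T1; [u2]x R2, [u2]x T2] [X; 1] = 0 for X."""
    A = np.vstack([
        np.hstack([skew(u1) @ R1, (skew(u1) @ T1).reshape(3, 1)]),
        np.hstack([skew(u2) @ R2, (skew(u2) @ T2).reshape(3, 1)]),
    ])
    _, _, Vt = np.linalg.svd(A)       # null vector = smallest singular vector
    X = Vt[-1]
    return X[:3] / X[3]               # dehomogenize
```

With noisy correspondences the system has no exact null vector; the SVD solution is then the linear least-squares point, which is usually refined by a nonlinear reprojection-error minimization.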
