@ttttamaki
tamaki@hiroshima-u.ac.jp
References on 3D registration:
• Survey paper (in Japanese), Vol. 10, No. 3, pp. 429-436, 2005.10.
• The PCL Registration API, http://pointclouds.org/documentation/tutorials/registration_api.php
• Open3D, used in this talk for 3D point cloud processing
Problem setting: register data 1 to data 2, in 2D or 3D. Combinations include 2D-2D (image to image) and 3D-3D (point cloud to point cloud); the talk first reviews registration in the 2D case, then focuses on 3D-3D.
Fundamentals of 3D Registration and 3D Point Cloud Processing with Open3D
The inputs of registration go by many names:
• 1st image / Template / Fixed image / model: $I_1$
• 2nd image / Observation / Moving image / measurements: $I_2$
Registration estimates the parameters p of a transformation that warps $I_2$ into $I'_2(p)$ so that it matches $I_1$:
$\min_p \mathrm{dist}(I_1, I'_2(p))$
where dist is a dissimilarity (or similarity) measure between the two images: the L2 distance (SSD), NCC, or mutual information (MI).
The Robotics Institute, Carnegie Mellon University, AAM Fitting Algorithms, http://www.ri.cmu.edu/research_project_detail.html?project_id=448&menu_id=261
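To make two of these measures concrete, here is a minimal sketch assuming numpy (not code from the talk): SSD, an L2 dissimilarity, and NCC, a similarity in [-1, 1].

import numpy as np

def ssd(I1, I2):
    # sum of squared differences: 0 for identical images
    return np.sum((I1 - I2) ** 2)

def ncc(I1, I2):
    # normalized cross-correlation: 1 for identical images
    a = I1 - I1.mean()
    b = I2 - I2.mean()
    return np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b))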
Feature-based registration: instead of comparing the images directly, match feature points $x_{1i}$ of $I_1$ to $x_{2i}$ of $I_2$ (e.g. $x_{11} \leftrightarrow x_{21}$, $x_{12} \leftrightarrow x_{22}$, $x_{13} \leftrightarrow x_{23}$) and minimize over the warp parameters p:
$\min_p \sum_i \|x_{1i} - W(x_{2i}, p)\|^2$, or more generally $\min_p \sum_i \mathrm{dist}(x_{1i}, W(x_{2i}, p))$
where W(x, p) warps a point x of $I_2$ toward $I_1$.
Simplest case: pure translation. The warp is W(x, t) = x + t with parameter p = t, and the cost becomes
$\min_t \sum_i \|(x_{1i} + t) - x_{2i}\|^2$
If the correspondences were exact, every equation $x_{1i} + t = x_{2i}$ (i = 1, 2, 3, ...) would hold, i.e. every squared residual $\|x_{1i} + t - x_{2i}\|^2 = 0$ simultaneously. With real, noisy correspondences no single t satisfies them all, so instead we only ask each residual to be approximately zero, $\|x_{1i} + t - x_{2i}\|^2 \approx 0$, and minimize their sum. This L2 cost is known as the SSD (sum of squared differences).
The cost E is called the objective function (or cost function):
$E = \sum_i \|(x_{1i} + t) - x_{2i}\|^2$
In 1D, with $t = (t_x, 0)^T$, E is a parabola in $t_x$: one global minimum. (A cost function in general may also have local minima, but this quadratic E does not.) In 2D, with $t = (t_x, t_y)^T$, E is a bowl over the $(t_x, t_y)$ plane, again with a single global minimum.
■ How do we find the minimum?
• where E is minimal the gradient vanishes: $\partial E / \partial t = 0$
• away from the minimum, $\partial E / \partial t \neq 0$
■ Number of unknowns
• 1D: one unknown, $t_x$
• 2D: two unknowns in t
■ Let's derive the solution.
Let's expand E:
$E = \sum_i \|(x_{1i} + t) - x_{2i}\|^2$
$= \sum_i ((x_{1i} + t) - x_{2i})^T ((x_{1i} + t) - x_{2i})$
$= \sum_i ((x_{1i} - x_{2i}) + t)^T ((x_{1i} - x_{2i}) + t)$
$= \sum_i (x_{1i} - x_{2i})^T (x_{1i} - x_{2i}) + (x_{1i} - x_{2i})^T t + t^T (x_{1i} - x_{2i}) + t^T t$
$= \sum_i (x_{1i} - x_{2i})^T (x_{1i} - x_{2i}) + 2 (x_{1i} - x_{2i})^T t + t^T t$
$= \sum_i \|x_{1i} - x_{2i}\|^2 + 2 (x_{1i} - x_{2i})^T t + \|t\|^2$
Set the gradient with respect to $t = (t_x, t_y)^T$ to zero:
$\frac{\partial E}{\partial t} = \sum_i (2 (x_{1i} - x_{2i}) + 2 t) = 0$
$\Bigl( \sum_i^{N} 2 (x_{1i} - x_{2i}) \Bigr) + 2 N t = 0$
$\therefore\ t = -\frac{1}{N} \sum_i (x_{1i} - x_{2i}) = \bar{x}_2 - \bar{x}_1$   (N: number of points)
using the identities $\|a\|^2 = a^T a$, $\partial (a^T a) / \partial a = 2a$, $\partial (b^T a) / \partial a = b$.
■ Summary: given correspondences $x_{1i} \leftrightarrow x_{2i}$ between $I_1$ and $I_2$, the equations $x_{1i} + t = x_{2i}$ cannot all hold exactly, so we make each L2 residual $\|x_{1i} + t - x_{2i}\|^2 \approx 0$ by minimizing their sum E, a quadratic bowl over $(t_x, t_y)$. The minimizer is the difference of the centroids:
$t = \bar{x}_2 - \bar{x}_1$
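A quick numerical check of the closed form (a sketch assuming numpy; the variable names are ours):

import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=(100, 2))                        # points in image 1
t_true = np.array([0.7, -0.3])
x2 = x1 + t_true + 0.01 * rng.normal(size=x1.shape)   # noisy translated copies
t_star = x2.mean(axis=0) - x1.mean(axis=0)            # t = mean(x2) - mean(x1)
print(t_star)                                         # close to t_true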
I1 I2
x11
x12
x13 t
x21
x22
x23
||x11 + t x21||2
' 0
||x12 + t x22||2
' 0
||x13 + t x23||2
' 0L2
2
■
n L1
n
n RANSAC
t = ¯x1 ¯x2
■ Beyond translation: for a general warp with parameters p, require $W(x_{1i}, p) = x_{2i}$, i.e. make every residual $\|W(x_{1i}, p) - x_{2i}\|^2 \approx 0$, and minimize the summed cost E over p.
• a closed form is no longer guaranteed in general
• nonrigid warps (e.g. FFD, free-form deformation) fit the same framework
From here: 3D-3D registration, i.e. aligning a point cloud to a point cloud.
ICP
■ Iterative Closest Point (ICP)
• the standard approach for registering two 3D point clouds
• alternately estimates correspondences and the transformation
■ Goal
• find the rotation R and translation t that align point cloud X to point cloud Y
Survey (in Japanese), Vol. 10, No. 3, pp. 429-436, 2005.10.
Chen, Y. and Medioni, G. "Object Modeling by Registration of Multiple Range Images," Proc. IEEE Conf. on Robotics and Automation, 1991.
Besl, P. and McKay, N. "A Method for Registration of 3-D Shapes," Trans. PAMI, Vol. 14, No. 2, 1992.
■ Setting
• two point clouds X and Y
• no correspondences given in advance
■ The ICP algorithm (see the sketch after this list)
1. For each point of X, find its closest point in Y (closest point).
2. Estimate R and t from the resulting pairs X → Y.
3. Transform X by RX + t and re-find the closest points in Y (closest point).
4. Repeat steps 2-3 (iterate).
■ Properties of ICP
• converges, but only to a local minimum
• therefore needs a reasonable initial pose
[Figure: iterations 1-4; each x_i is paired with a y_i, R and t are re-estimated, and Rx_i + t moves toward y_i.]
IEICE Technical Report (in Japanese), SIP2009-48, SIS2009-23, Vol. 109, No. 202, pp. 59-64, Sep. 2009.
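The loop above maps directly to code. Below is a minimal point-to-point ICP sketch, assuming numpy and scipy (an illustration, not the talk's implementation); the rigid-fit helper uses the SVD closed form covered on the next slides.

import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(A, B):
    # closed-form R, t minimizing sum_i ||R a_i + t - b_i||^2 (Arun et al. 1987)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)             # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp(X, Y, iters=50):
    # X, Y: (N, 3) and (M, 3) point arrays
    tree = cKDTree(Y)                     # step 1: closest-point queries into Y
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(X @ R.T + t)  # steps 1/3: closest points of RX + t
        R, t = best_rigid_transform(X, Y[idx])  # step 2: re-estimate R, t
    return R, t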
Estimating R and t between the paired sets X and Y
■ Decoupling
• once R is known, t follows from the centroids, so center both point sets first
• stack the centered points into 3×n matrices X and Y
■ Rotation via SVD
• minimize the squared Frobenius norm $\|R X - Y\|_F^2$ over rotations R
• expanding it with the trace (tr) reduces the problem to maximizing $\mathrm{tr}(R X Y^T)$
K. S. Arun, T. S. Huang, and S. D. Blostein. Least-squares fitting of two 3-D point sets. PAMI, Vol. 9, No. 5, pp. 698-700, 1987.
Peter H. Schönemann. A generalized solution of the orthogonal Procrustes problem. Psychometrika, Vol. 31, No. 1, pp. 1-10, 1966.
SVD solution (Singular Value Decomposition, SVD): with $X Y^T = U S V^T$, the trace is maximized (via the Schwarz inequality) at $R = V U^T$. One caveat: the optimum over orthogonal matrices may have det R = -1 (a reflection) instead of det R = +1 (a rotation); the fix is $R = V \,\mathrm{diag}(1, 1, \det(V U^T))\, U^T$.
Kenichi Kanatani. Analysis of 3-D rotation fitting. PAMI, Vol. 16, No. 5, pp. 543-549, 1994.
Shinji Umeyama. Least-squares estimation of transformation parameters between two point patterns. PAMI, Vol. 13, No. 4, pp. 376-380, 1991.
Textbook (in Japanese), Sec. 3.4, 1990.
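A quick numerical check of this solution, reusing the best_rigid_transform helper from the ICP sketch above (assuming numpy and scipy):

import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))                    # source points
R_true = Rotation.random(random_state=1).as_matrix()
B = A @ R_true.T + np.array([0.2, -0.1, 0.4])   # rotated and translated copy
R_est, t_est = best_rigid_transform(A, B)
print(np.allclose(R_est, R_true), np.linalg.det(R_est))  # True 1.0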
This is the Orthogonal Procrustes Problem.
Procrustes: "Now then, you fellows; I mean to fit you all to my little bed!"
The Modern Bed of Procrustes - cartoon from the Project Gutenberg eBook of Punch, Volume 101, September 19, 1891, by John Tenniel.
http://commons.wikimedia.org/wiki/File:The_Modern_Bed_of_Procustes_-_Punch_cartoon_-_Project_Gutenberg_eText_13961.png
Variants: Orthogonal Procrustes, Extended Orthogonal Procrustes, Generalized Orthogonal Procrustes.
Peter H. Schönemann and Robert M. Carroll. Fitting one matrix to another under choice of a central dilation and a rigid motion. Psychometrika, Vol. 35, No. 2, pp. 245-255, 1970.
Devrim Akca. Generalized Procrustes analysis and its applications in photogrammetry. Technical report, ETH, Swiss Federal Institute of Technology Zurich, Institute of Geodesy and Photogrammetry, 2003.
Peter H. Schönemann. A generalized solution of the orthogonal Procrustes problem. Psychometrika, Vol. 31, No. 1, pp. 1-10, 1966.
John R. Hurley, Raymond B. Cattell. The Procrustes program: Producing direct rotation to test a hypothesized factor structure. Behavioral Science, Vol. 7, No. 2, pp. 258-262, April 1962.
Estimating R, t and scale s (similarity transform)
• needed when the two point clouds are at different scales
• closed-form solutions exist; see the references below (and the sketch after this list)
• Technical report (in Japanese), 2011-CG-145/CVIM-179-12, Nov. 2011.
• Technical report (in Japanese), 2010-CVIM-176-15, Mar. 2011.
• I. L. Dryden, K. V. Mardia, Statistical shape analysis, Wiley, 1998.
• Berthold K. P. Horn. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America, Vol. 4, pp. 629-642, 1987.
• Shinji Umeyama. Least-squares estimation of transformation parameters between two point patterns. IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 13, No. 4, pp. 376-380, 1991.
• Toru Tamaki, Shunsuke Tanigawa, Yuji Ueno, Bisser Raytchev, Kazufumi Kaneda: "Scale matching of 3D point clouds by finding keyscales with spin images," ICPR2010.
• Baowei Lin, Toru Tamaki, Fangda Zhao, Bisser Raytchev, Kazufumi Kaneda, Koji Ichii, "Scale alignment of 3D point clouds with different scales," Machine Vision and Applications, Vol. 25, No. 8, pp. 1989-2002, Nov. 2014. DOI: 10.1007/s00138-014-0633-2
• keyscale-based scale matching; keypoint detection with spin images
• PCL: pcl::registration::TransformationEstimationSVDScale
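One simple route to the scale (our sketch, not necessarily the estimator of the references above): for $\min_{s,t} \sum_i \|s R a_i + t - b_i\|^2$ with R given, centering removes t and the optimum is $s^* = \sum_i \tilde{b}_i^T R \tilde{a}_i / \sum_i \|\tilde{a}_i\|^2$.

import numpy as np

def scale_given_R(A, B, R):
    # least-squares scale aligning s * R * A + t to B, for a known rotation R
    a = A - A.mean(axis=0)        # centered source points (n x 3)
    b = B - B.mean(axis=0)        # centered target points (n x 3)
    return np.sum(b * (a @ R.T)) / np.sum(a * a)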
Estimating R, T with the point-to-point cost in Open3D
• closed-form least squares between matched point sets X and Y
• optionally estimates the scale s as well
• re-run at every ICP iteration on the current correspondences
• C++: three::TransformationEstimationPointToPoint
  • with_scaling_ = false (default): rigid only
  • with_scaling_ = true: with scale
• Python: py3d.TransformationEstimationPointToPoint()
  • with scale: py3d.TransformationEstimationPointToPoint(with_scaling=True)
point-to-point distance vs point-to-plane distance
• point-to-point: the distance between a source point and its matched target point
• point-to-plane: the distance from a source point to the tangent plane (defined by the normal) at its matched target point
Szymon Rusinkiewicz, Marc Levoy, Efficient Variants of the ICP Algorithm, 3DIM, 2001. http://www.cs.princeton.edu/~smr/papers/fasticp/
Szymon Rusinkiewicz, Derivation of point to plane minimization, 2013. http://www.cs.princeton.edu/~smr/papers/icpstability.pdf
Kok-Lim Low, Linear Least-Squares Optimization for Point-to-Plane ICP Surface Registration, Technical Report TR04-004, Department of Computer Science, University of North Carolina at Chapel Hill, February 2004. https://www.comp.nus.edu.sg/~lowkl/publications/lowk_point-to-plane_icp_techrep.pdf
Estimating R, T with the point-to-plane cost
• minimizes distances to the target's tangent planes, so the target needs normals
• no closed form; solved by linearized least squares (see the sketch below)
• typically converges in fewer ICP iterations than point-to-point
• C++: three::TransformationEstimationPointToPlane
• Python: py3d.TransformationEstimationPointToPlane()
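A sketch of one linearized point-to-plane step in the spirit of Low's report cited above (assuming numpy; x: source points, y: matched target points, n: target normals, all (N, 3)). With the small-angle model R ≈ I + [w]×, the residual (R x + t - y)·n becomes linear in (w, t):

import numpy as np

def point_to_plane_step(x, y, n):
    # residual: (x - y)·n + w·(x × n) + t·n ≈ 0 for each correspondence
    A = np.hstack([np.cross(x, n), n])      # (N, 6) coefficients for (w, t)
    b = -np.einsum('ij,ij->i', x - y, n)    # (N,) right-hand side
    wt, *_ = np.linalg.lstsq(A, b, rcond=None)
    return wt[:3], wt[3:]                   # rotation vector w, translation t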
ICP with the point-to-plane cost: the algorithm itself is unchanged (closest point → estimate R, t → transform → iterate); only step 2 swaps the point-to-point estimator for the point-to-plane one.
ICP in Open3D
• py3d.registration_icp registers a source point cloud X to a target Y
• estimation_method selects the cost:
  • estimation_method=py3d.TransformationEstimationPointToPoint(): point-to-point
  • estimation_method=py3d.TransformationEstimationPointToPoint(True): point-to-point with scale s
  • estimation_method=py3d.TransformationEstimationPointToPlane(): point-to-plane
Open3D ICP example:

th = 0.02
criteria = py3d.ICPConvergenceCriteria(relative_fitness=1e-6,  # stop when the relative fitness change drops below this
                                       relative_rmse=1e-6,     # stop when the relative RMSE change drops below this
                                       max_iteration=1)        # ICP iterations per call
info = py3d.registration_icp(pcd1, pcd2,
                             max_correspondence_distance=th,
                             init=T,
                             estimation_method=py3d.TransformationEstimationPointToPoint(),
                             # estimation_method=py3d.TransformationEstimationPointToPlane(),
                             criteria=criteria)
print("correspondences:", np.asarray(info.correspondence_set))
print("fitness: ", info.fitness)
print("RMSE: ", info.inlier_rmse)
print("transformation: ", info.transformation)
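With max_iteration = 1, each registration_icp call performs a single ICP sweep; calling it repeatedly and feeding info.transformation back in as init lets you watch the alignment converge step by step. For a one-shot registration, raise max_iteration instead.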
ICP parameters
■ ICP on PCL 1.7.1
• setMaximumIterations: maximum number of iterations
• setRANSACIterations: number of RANSAC iterations
• setRANSACOutlierRejectionThreshold: outlier rejection threshold for RANSAC
• setMaxCorrespondenceDistance: maximum distance for closest-point pairs
• setTransformationEpsilon: convergence threshold on the transformation change
• setEuclideanFitnessEpsilon: convergence threshold on the fitness
■ ICP on Open3D
• max_correspondence_distance
• estimation_method
• criteria
■ Variants
• softassign
• EM-ICP
• sparse ICP
Correspondences in ICP: each $x_i \in X$ is matched to its closest point in Y,
$y^*_i = \arg\min_{y_j \in Y} \|y_j - x_i\|$
and once an estimate of the transformation exists, to the closest point of the transformed point,
$y^*_i = \arg\min_{y_j \in Y} \|y_j - (R x_i + t)\|$
which yields the correspondence set Y* used to re-estimate R and t.
[Figure: points x1, x2 of X and their closest points among y1, ..., y4 of Y.]
ICP is hard assignment: the correspondence between X and Y* is a 0/1 matrix with a single 1 per row, e.g.
1 0 0 0
0 0 0 1
[Figure: each x_i linked to exactly one y_j; R and t estimated from the pairs.]
Softassign is soft assignment: the correspondence is a weight matrix $m_{ij}$ with fractional entries, e.g.
0.6 0.1 0.02 0.3
0.5 0.1 0.1 0.2
R and t are then estimated from the weighted pairs $(m_{ij} x_i, y_j)$. The weights are normalized by Sinkhorn iterations so that every row and every column sums to 1.
Steven Gold, Anand Rangarajan, Chien-Ping Lu, Suguna Pappu, Eric Mjolsness, "New algorithms for 2D and 3D point matching: pose estimation and correspondence," Pattern Recognition, Vol. 31, No. 8, pp. 1019-1031, 1998.
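A toy sketch of the Sinkhorn normalization (assuming numpy; illustration only): alternately normalize rows and columns of a positive matrix until both marginals are close to 1.

import numpy as np

def sinkhorn(M, iters=100):
    # M: positive matching matrix (n x n)
    M = M.copy()
    for _ in range(iters):
        M /= M.sum(axis=1, keepdims=True)  # make each row sum to 1
        M /= M.sum(axis=0, keepdims=True)  # make each column sum to 1
    return M

rng = np.random.default_rng(0)
M = sinkhorn(rng.random((4, 4)))
print(M.sum(axis=0), M.sum(axis=1))  # both close to (1, 1, 1, 1)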
Estimating R from weighted pairs: Horn's method. Build the 3×3 correlation matrix S from the stacked (n×3 and 3×n) point sets, form the 4×4 matrix K from S, and take the unit quaternion q that maximizes $q^T K q$: the eigenvector of K for its largest eigenvalue. q is then converted to R.
Berthold K. P. Horn, Hugh M. Hilden, and Shahriar Negahdaripour. Closed-form solutions of absolute orientation using orthonormal matrices. Journal of the Optical Society of America, Vol. 5, pp. 1127-1135, 1988.
Berthold K. P. Horn. Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America, Vol. 4, pp. 629-642, 1987.
Toru Tamaki, Miho Abe, Bisser Raytchev, Kazufumi Kaneda: "Softassign and EM-ICP on GPU", Proc. of UPDAS2010; The 2nd Workshop on Ultra Performance and Dependable Acceleration Systems, in Proc. of ICNC'10, pp. 179-183, Higashi-Hiroshima, Japan, November 17-19, 2010.
Pipeline of Softassign.GPU: compute $m_{ij}$ with a CUDA kernel → Sinkhorn.GPU (normalization) → Centering.GPU → weighted Horn's method (solve the eigenvalue problem) → R and t.
Toru Tamaki, Miho Abe, Bisser Raytchev, Kazufumi Kaneda, Marcos Slomp: "CUDA-based implementations of Softassign and EM-ICP," CVPR2010 demo.
Toru Tamaki, Miho Abe, Bisser Raytchev, Kazufumi Kaneda: "Softassign and EM-ICP on GPU", Proc. of UPDAS2010.
EM-ICP also performs soft assignment with weights $m_{ij}$ (e.g. rows 0.6 0.1 0.02 0.3 and 0.5 0.1 0.1 0.2), but normalizes only the rows (no Sinkhorn column step); R and t are estimated from the weighted pairs $(m_{ij} x_i, y_j)$ against the correspondence set Y*.
Sebastien Granger, Xavier Pennec, "Multi-scale EM-ICP: A Fast and Robust Approach for Surface Registration," ECCV2002, Vol. 4, pp. 69-73, 2002.
Pipeline of EM-ICP.GPU: compute $m_{ij}$ with a CUDA kernel → row normalization on GPU → Centering.GPU → 2-step weighted Horn's method (solve the eigenvalue problem) → R and t.
Toru Tamaki, Miho Abe, Bisser Raytchev, Kazufumi Kaneda, Marcos Slomp: "CUDA-based implementations of Softassign and EM-ICP," CVPR2010 demo.
Toru Tamaki, Miho Abe, Bisser Raytchev, Kazufumi Kaneda: "Softassign and EM-ICP on GPU", Proc. of UPDAS2010.
ICP is still evolving: point cloud registration at ICCV2017
• Colored Point Cloud Registration Revisited, Jaesik Park, Qian-Yi Zhou, Vladlen Koltun; ICCV2017, pp. 143-152.
• Joint Layout Estimation and Global Multi-View Registration for Indoor Reconstruction, Jeong-Kyun Lee, Jaewon Yea, Min-Gyu Park, Kuk-Jin Yoon; ICCV2017, pp. 162-171.
• Local-To-Global Point Cloud Registration Using a Dictionary of Viewpoint Descriptors, David Avidar, David Malah, Meir Barzohar; ICCV2017, pp. 891-899.
• Point Set Registration With Global-Local Correspondence and Transformation Estimation, Su Zhang, Yang Yang, Kun Yang, Yi Luo, Sim-Heng Ong; ICCV2017, pp. 2669-2677.
[Figure: page excerpts from the four papers.]
2D keypoint detection / matching
David Lowe, Demo Software: SIFT Keypoint Detector,
http://www.cs.ubc.ca/~lowe/keypoints/
3D keypoint detection / matching
Salti, S.; Tombari, F.; Di Stefano, L., "A Performance Evaluation of 3D Keypoint Detectors," 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), 2011 International Conference on, pp. 236-243, 16-19 May 2011. doi: 10.1109/3DIMPVT.2011.37
Federico Tombari, Samuele Salti, Luigi Di Stefano, Performance Evaluation of 3D Keypoint Detectors, International Journal of Computer Vision, Volume 102, Issue 1-3, pp. 198-220, March 2013.
http://vision.deis.unibo.it/keypoints3d/
http://www.cs.princeton.edu/~vk/projects/CorrsBlended/doc_data.php
Keypoint detectors and descriptors for 3D data: several families exist, including volumetric/3D approaches and depth (range-image / RGB-D) approaches; see the references below.
• P. Scovanner, S. Ali, and M. Shah, “A 3-dimensional SIFT
descriptor and its application to action recognition,” in ACM
Multimedia, 2007, pp. 357–360.
• W. Cheung and G. Hamarneh, “n-sift: n-dimensional scale
invariant feature transform,” in Trans. IP, vol. 18(9), 2009,
pp. 2012–2021.
• Jan Knopp, Mukta Prasad, Geert Willems, Radu Timofte,
Luc Van Gool, Hough Transform and 3D SURF for Robust
Three Dimensional Classification, ECCV 2010, pp 589-602,
2010.
• Tsz-Ho Yu, Oliver J. Woodford, Roberto Cipolla, A
Performance Evaluation of Volumetric 3D Interest Point
Detectors, International Journal of Computer Vision, Volume
102, Issue 1-3, pp 180-197, March 2013.
Range image / RGB-D
• 3D Geometric Scale Variability in Range Images: Features
and Descriptors P. Bariya, J. Novatnack, G. Schwartz, and
K. Nishino, in Int'l Journal of Computer Vision, vol. 99, no. 2,
pp232-255, Sept., 2012
• S. Filipe and L. A. Alexandre, "A comparative evaluation of
3D keypoint detectors in a RGB-D Object Dataset," 2014
International Conference on Computer Vision Theory and
Applications (VISAPP), 2014, pp. 476-483.
• Maks Ovsjanikov, Quentin Mérigot, Facundo Mémoli and Leonidas Guibas, "One Point Isometric Matching with the Heat Kernel," Symposium on Geometry Processing 2010, Computer Graphics Forum, Volume 29, Issue 5, pp. 1555-1564, July 2010.
• M. Bronstein and I. Kokkinos, Scale-invariant heat kernel
signatures for non-rigid shape recognition, Proc. IEEE Conf.
on Computer Vision and Pattern Recognition (CVPR), 2010.
Survey talk on 3D keypoints (in Japanese), ViEW2014, http://isl.sist.chukyo-u.ac.jp/Archives/ViEW2014SpecialTalk-Hashimoto.pdf
Registration example with Open3D
• estimate normals on the input clouds
• voxel downsampling, then FPFH features
• coarse alignment: RANSAC with the point-to-point estimator
• refinement: point-to-plane ICP
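The snippets on the following slides call four helpers (show, FPFH, RANSAC, ICP) whose definitions are not on the slides. One plausible set of definitions against the old py3d module (our assumption, not the talk's code):

import copy
import numpy as np
import py3d  # Open3D's early Python module name

FPFH = py3d.compute_fpfh_feature
RANSAC = py3d.registration_ransac_based_on_feature_matching
ICP = py3d.registration_icp

def show(model, scene, T=np.identity(4)):
    # draw the model transformed by T together with the scene
    m = copy.deepcopy(model)
    m.transform(T)
    py3d.draw_geometries([m, scene])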
model = py3d.read_point_cloud("milk.pcd")
scene = py3d.read_point_cloud("milk_cartoon_all_small_clorox.pcd")
# voxel size: 1/10 of the model's largest bounding-box extent
size = np.abs((model.get_max_bound() - model.get_min_bound())).max() / 10
kdt_n = py3d.KDTreeSearchParamHybrid(radius=size, max_nn=50)
py3d.estimate_normals(model, kdt_n)
py3d.estimate_normals(scene, kdt_n)
show(model, scene)
model_d = py3d.voxel_down_sample(model, size)
scene_d = py3d.voxel_down_sample(scene, size)
py3d.estimate_normals(model_d, kdt_n)
py3d.estimate_normals(scene_d, kdt_n)
show(model_d, scene_d)
kdt_f = py3d.KDTreeSearchParamHybrid(radius=size * 50, max_nn=50)
# FPFH features on the downsampled clouds
model_f = FPFH(model_d, kdt_f)
scene_f = FPFH(scene_d, kdt_f)
http://pointclouds.org/documentation/tutorials/fpfh_estimation.php
Coarse registration with RANSAC for R, T:

# pruning checks applied to candidate correspondence sets
checker = [py3d.CorrespondenceCheckerBasedOnEdgeLength(0.9),
           py3d.CorrespondenceCheckerBasedOnDistance(size * 2)]
est_ptp = py3d.TransformationEstimationPointToPoint()
criteria = py3d.RANSACConvergenceCriteria(max_iteration=40000,  # RANSAC iterations
                                          max_validation=500)   # validation count
# RANSAC over FPFH feature matches
result1 = RANSAC(model_d, scene_d,                    # downsampled clouds
                 model_f, scene_f,                    # their FPFH features
                 max_correspondence_distance=size * 2,
                 estimation_method=est_ptp,           # point-to-point
                 ransac_n=4,                          # correspondences per hypothesis (at least 3)
                 checkers=checker,
                 criteria=criteria)
show(model, scene, result1.transformation)
Refinement with point-to-plane ICP:

est_ptpln = py3d.TransformationEstimationPointToPlane()
# ICP starting from the RANSAC estimate
result2 = ICP(model, scene, size, result1.transformation, est_ptpln)
show(model, scene, result2.transformation)
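This stage refines result1.transformation on the full-resolution clouds, with a tighter correspondence threshold (size) than the size * 2 used in the RANSAC stage.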
■ Open3D ICP & global registration samples
• https://qiita.com/search?q=user%3Atttamaki+open3d
■ PCL ICP & R, T estimation samples
• https://github.com/tttamaki/ICP-test
■ sparse ICP
• https://github.com/tttamaki/SICP-test
• scip1: sparse ICP with point-to-plane
• scip2: sparse ICP with point-to-plane
■ EM-ICP / softassign (CUDA)
• https://github.com/tttamaki/cuda_emicp_softassign
• ICP, EM-ICP.CPU, softassign.GPU, EM-ICP.GPU