Structure From Motion: Computer Vision, Jia-Bin Huang, Virginia Tech

Structure from motion (SfM) is a computer vision technique for recovering the 3D structure of a scene from 2D images taken from different camera viewpoints. SfM estimates the camera locations and orientations as well as the 3D positions of scene points. It works by first using feature matching and epipolar geometry to recover the camera motion between image pairs, and then triangulating the 3D points using multiple views. SfM has applications in areas such as 3D modeling, surveying, robot navigation, and visual effects.
Structure from Motion

Computer Vision
Jia-Bin Huang, Virginia Tech
Many slides from S. Seitz, N. Snavely, and D. Hoiem
Administrative stuff

• HW 3 due 11:55 PM, Oct 17 (Wed)

• Submit your alignment results! [Link]

• HW 2 will be out this week


Perspective and 3D Geometry
• Projective geometry and camera models
• Vanishing points/lines
• Single-view metrology and camera calibration
  • Calibration using a known 3D object or vanishing points
  • Measuring size using perspective cues
• Photo stitching
  • Homography relates rotating cameras
  • Recover homography using RANSAC + normalized DLT
• Epipolar geometry and stereo vision
  • Fundamental/essential matrix relates two cameras
  • Recover using RANSAC + normalized 8-point algorithm; enforce rank 2 using SVD
• Structure from motion (this class)
  • How can we recover 3D points from multiple images?
Recap: Epipoles
• Point x in the left image corresponds to epipolar line l’ in the right image
• Every epipolar line passes through the epipole (the intersection of the cameras’ baseline with the image plane)
Recap: Fundamental Matrix
• The fundamental matrix maps a point in one image to a line in the other: l’ = Fx
• If x and x’ correspond to the same 3D point X: x’ᵀFx = 0
Recap: Automatic Estimation of F
Assume we have matched points x ↔ x’, with outliers.

8-Point Algorithm for Recovering F
• Correspondence relation: x’ᵀFx = 0
1. Normalize image coordinates: x̃ = Tx, x̃’ = T’x’
2. RANSAC with 8 points:
  • Randomly sample 8 point correspondences
  • Compute F̃ via least squares
  • Enforce det(F̃) = 0 by SVD
  • Repeat, and choose the F̃ with the most inliers
3. De-normalize: F = T’ᵀF̃T
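A minimal MATLAB sketch of these steps (illustrative, not the HW solution; x1, x2 are assumed to be 3×N homogeneous point matrices whose third row is 1, and the function names and threshold are only for this sketch):

% Normalized 8-point algorithm inside a RANSAC loop (illustrative sketch).
% x1, x2: 3 x N homogeneous coordinates of matched points (with outliers).
function F = ransac_eight_point(x1, x2, nIter, thresh)
  N = size(x1, 2);
  bestInliers = 0;
  for it = 1:nIter
    s = randperm(N, 8);                    % randomly sample 8 correspondences
    Fs = eight_point(x1(:, s), x2(:, s));  % fit a candidate F
    l2 = Fs * x1;                          % epipolar lines a*u + b*v + c = 0 in image 2
    d = abs(sum(x2 .* l2, 1)) ./ sqrt(l2(1, :).^2 + l2(2, :).^2);
    nInliers = sum(d < thresh);            % point-to-epipolar-line distance test
    if nInliers > bestInliers
      bestInliers = nInliers;  F = Fs;
    end
  end
end

function F = eight_point(x1, x2)
  [x1n, T1] = normalize_pts(x1);           % 1. normalize image coordinates
  [x2n, T2] = normalize_pts(x2);
  % Each correspondence gives one row of the constraint x2n' * F * x1n = 0.
  A = [x2n(1,:)'.*x1n(1,:)', x2n(1,:)'.*x1n(2,:)', x2n(1,:)', ...
       x2n(2,:)'.*x1n(1,:)', x2n(2,:)'.*x1n(2,:)', x2n(2,:)', ...
       x1n(1,:)', x1n(2,:)', ones(size(x1n, 2), 1)];
  [~, ~, V] = svd(A);
  F = reshape(V(:, end), [3, 3])';         % least-squares solution
  [U, S, V] = svd(F);  S(3, 3) = 0;        % enforce det(F) = 0 (rank 2)
  F = U * S * V';
  F = T2' * F * T1;                        % 3. de-normalize
end

function [xn, T] = normalize_pts(x)
  % Translate centroid to origin; scale mean distance to sqrt(2).
  c = mean(x(1:2, :), 2);
  s = sqrt(2) / mean(sqrt(sum((x(1:2, :) - c).^2, 1)));
  T = [s, 0, -s*c(1); 0, s, -s*c(2); 0, 0, 1];
  xn = T * x;
end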
This class: Structure from Motion
• Projective structure from motion
• Affine structure from motion
• HW 3
  • Fundamental matrix
  • Affine structure from motion
• Multi-view stereo (optional)

Structure [ˈstrək(t)SHər]: 3D Point Cloud of the Scene

Motion [ˈmōSH(ə)n]: Camera Location and Orientation

Structure from Motion (SfM): Get the Point Cloud from Moving Cameras
SfM Applications – 3D Modeling
http://www.3dcadbrowser.com/download.aspx?3dmodel=40454

SfM Applications – Surveying (cultural heritage structure analysis)
Guidi et al. High-accuracy 3D modeling of cultural heritage, 2004

SfM Applications – Robot navigation and mapmaking
https://www.youtube.com/watch?v=1HhOmF22oYA

SfM Applications – Visual effects (matchmove)
https://www.youtube.com/watch?v=bK6vCPcFkfk
Steps
Images → Points: Structure from Motion
Points → More points: Multiple View Stereo
Points → Meshes: Model Fitting
+ Meshes → Models: Texture Mapping
= Images → Models: Image-based Modeling

Example: https://photosynth.net/

Slide credit: J. Xiao

Triangulation: Linear Solution
• Generally, the rays through C, x and C’, x’ will not exactly intersect
• Solve via SVD: a least-squares solution to a system of equations

Given $\mathbf{x} = P\mathbf{X}$ and $\mathbf{x}' = P'\mathbf{X}$, with $\mathbf{x} = (u, v, 1)^T$ and $\mathbf{p}_i^T$ the rows of $P$:

$A\mathbf{X} = \mathbf{0}, \qquad A = \begin{bmatrix} u\,\mathbf{p}_3^T - \mathbf{p}_1^T \\ v\,\mathbf{p}_3^T - \mathbf{p}_2^T \\ u'\,\mathbf{p}_3'^T - \mathbf{p}_1'^T \\ v'\,\mathbf{p}_3'^T - \mathbf{p}_2'^T \end{bmatrix}$

Further reading: HZ p. 312-313
Triangulation: Linear Solution
Write the rows of the projection matrix as $P = \begin{bmatrix} \mathbf{p}_1^T \\ \mathbf{p}_2^T \\ \mathbf{p}_3^T \end{bmatrix}$. Then

$\mathbf{x} = w \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = P\mathbf{X} = \begin{bmatrix} \mathbf{p}_1^T \mathbf{X} \\ \mathbf{p}_2^T \mathbf{X} \\ \mathbf{p}_3^T \mathbf{X} \end{bmatrix} \quad\Rightarrow\quad \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \mathbf{p}_1^T \mathbf{X} / \mathbf{p}_3^T \mathbf{X} \\ \mathbf{p}_2^T \mathbf{X} / \mathbf{p}_3^T \mathbf{X} \\ 1 \end{bmatrix}$

and similarly for $\mathbf{x}' = P'\mathbf{X}$. Cross-multiplying gives the two linear equations per view used to build $A$ above.
Triangulation: Linear Solution u  u 
x  wv  x   w  v 
   
Given P, P’, x, x’ 1   1 
1. Precondition points and projection
matrices p1T  p1T 
2. Create matrix A  
P  pT2  P  pT 
3. [U, S, V] = svd(A)  2 
p 3 
T
p3T 
 
4. X = V(:, end)
 upT3  p1T 
Pros and Cons  T T 
 vp 3  p 2 
• Works for any number of correspondingA  up3T  p1T 
images  T T
  
v p3  p 2 
• Not projectively invariant

Code: http://www.robots.ox.ac.uk/~vgg/hzbook/code/vgg_multiview/vgg_X_from_xP_lin.m
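A minimal MATLAB sketch of the two-view case of this linear triangulation (assuming P1, P2 are 3×4 projection matrices and x1, x2 are [u; v; 1] image points; the preconditioning step is omitted):

% Linear triangulation of one point from two views (illustrative sketch).
function X = triangulate_linear(P1, P2, x1, x2)
  A = [x1(1) * P1(3, :) - P1(1, :);    % u  * p3'  - p1'
       x1(2) * P1(3, :) - P1(2, :);    % v  * p3'  - p2'
       x2(1) * P2(3, :) - P2(1, :);    % u' * p3'' - p1''
       x2(2) * P2(3, :) - P2(2, :)];
  [~, ~, V] = svd(A);                  % least-squares solution to A X = 0
  X = V(:, end);
  X = X / X(4);                        % de-homogenize
end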
Triangulation: Non-linear Solution
• Minimize the reprojection error while satisfying the epipolar constraint $\hat{\mathbf{x}}'^T F \hat{\mathbf{x}} = 0$:

$\mathrm{cost}(\mathbf{X}) = \mathrm{dist}(\mathbf{x}, \hat{\mathbf{x}})^2 + \mathrm{dist}(\mathbf{x}', \hat{\mathbf{x}}')^2$

(Figure source: Robertson and Cipolla, Chpt 13 of Practical Image Processing and Computer Vision)

• The solution reduces to a 6th-degree polynomial in t, where t parameterizes the pencil of epipolar lines over which the cost is minimized

Further reading: HZ p. 318


Projective structure from motion
• Given: m images of n fixed 3D points
  x_ij = P_i X_j, i = 1, …, m, j = 1, …, n
• Problem: estimate the m projection matrices P_i and the n 3D points X_j from the mn corresponding 2D points x_ij

[Figure: cameras P1, P2, P3 observing point X_j at image locations x_1j, x_2j, x_3j]

Slides from Lana Lazebnik
Projective structure from motion
• Given: m images of n fixed 3D points
  x_ij = P_i X_j, i = 1, …, m, j = 1, …, n
• Problem:
  • Estimate the unknown m projection matrices P_i and n 3D points X_j from the known mn corresponding points x_ij
  • With no calibration info, cameras and points can only be recovered up to a 4×4 projective transformation Q: X → QX, P → PQ⁻¹
• We can solve for structure and motion when
  2mn ≥ 11m + 3n − 15
  (each observation gives 2 equations; each P_i has 11 DoF, each X_j has 3 DoF, minus 15 for the 4×4 projective ambiguity Q)
• For two cameras, at least 7 points are needed (m = 2: 4n ≥ 3n + 7, so n ≥ 7)
Sequential structure from motion
• Initialize motion (calibration) from two images using the fundamental matrix
• Initialize structure by triangulating points
• For each additional view:
  • Determine the projection matrix of the new camera using all the known 3D points that are visible in its image (calibration/resectioning)
  • Refine and extend structure: compute new 3D points, re-optimize existing points that are also seen by this camera (triangulation)
• Refine structure and motion: bundle adjustment
Bundle adjustment
• Non-linear method for refining structure and motion
• Minimize the reprojection error (a sketch of this cost follows below):

$E(P, \mathbf{X}) = \sum_{i=1}^{m} \sum_{j=1}^{n} D\!\left(\mathbf{x}_{ij}, P_i \mathbf{X}_j\right)^2$

• Theory: the Levenberg-Marquardt algorithm
• Practice: the Ceres Solver from Google

[Figure: point X_j seen by cameras P1, P2, P3, with reprojection errors between the predicted P_i X_j and the measured x_ij]
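A minimal MATLAB sketch of evaluating the reprojection error E(P, X) above (evaluation only, not the optimization; the inputs are assumptions for this sketch: Ps is a cell array of 3×4 camera matrices, X is 4×n homogeneous points, xs is a 2×n×m array of observations):

% Total squared reprojection error E(P, X) (illustrative sketch).
function E = reprojection_error(Ps, X, xs)
  E = 0;
  for i = 1:numel(Ps)
    p = Ps{i} * X;                % project all points into view i
    p = p(1:2, :) ./ p(3, :);     % de-homogenize
    d = p - xs(:, :, i);          % residuals against the observations
    E = E + sum(d(:).^2);
  end
end

A solver such as Levenberg-Marquardt (or Ceres) would minimize this E jointly over all camera and point parameters.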
Auto-calibration
• Auto-calibration: determining intrinsic camera parameters directly from uncalibrated images
• For example, we can use the constraint that a moving camera has a fixed intrinsic matrix
  • Compute an initial projective reconstruction and find the 3D projective transformation matrix Q such that all camera matrices are in the form P_i = K [R_i | t_i]
• Can use constraints on the form of the calibration matrix, such as zero skew
Summary so far
• From two images, we can:
  • Recover the fundamental matrix F
  • Recover canonical camera projection matrices P and P’ from F
  • Estimate 3D positions (if K is known) that correspond to each pixel
• For a moving camera, we can:
  • Initialize by computing F, P, X for two images
  • Sequentially add new images, computing a new P, refining X, and adding points
  • Auto-calibrate assuming a fixed calibration matrix to upgrade to a similarity transform
Recent work in SfM
• Reconstruct from many images by efficiently finding subgraphs
  • http://www.cs.cornell.edu/projects/matchminer/ (Lou et al. ECCV 2012)
• Improving the efficiency of bundle adjustment
  • http://vision.soic.indiana.edu/projects/disco/ (Crandall et al. ECCV 2011)
  • http://imagine.enpc.fr/~moulonp/publis/iccv2013/index.html (Moulon et al. ICCV 2013)
    (best method with software available; also has a good overview of recent methods)

Reconstruction of Cornell (Crandall et al. ECCV 2011)

3D from multiple images
Building Rome in a Day: Agarwal et al. 2009
Structure from motion under orthographic projection
3D Reconstruction of a Rotating Ping-Pong Ball

• A reasonable choice when:
  • The change in depth of points in the scene is much smaller than the distance to the camera
  • Cameras do not move towards or away from the scene

C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. IJCV, 9(2):137-154, November 1992.
Orthographic Projection - Examples

Orthographic projection for rotated/translated camera
[Figure: image axes a1, a2 and world point X]
Affine structure from motion
• Affine projection is a linear mapping plus a translation in inhomogeneous coordinates:

$\mathbf{x} = \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix} = A\mathbf{X} + \mathbf{t}$

where t is the projection of the world origin.

1. We are given corresponding 2D points (x) in several frames
2. We want to estimate the 3D points (X) and the affine parameters of each camera (A)
Step 1: Simplify by getting rid of t: shift to the centroid of the points for each camera

$\mathbf{x}_i = A_i \mathbf{X} + \mathbf{t}_i, \qquad \hat{\mathbf{x}}_{ij} = \mathbf{x}_{ij} - \frac{1}{n} \sum_{k=1}^{n} \mathbf{x}_{ik}$

$\mathbf{x}_{ij} - \frac{1}{n} \sum_{k=1}^{n} \mathbf{x}_{ik} = A_i \mathbf{X}_j + \mathbf{t}_i - \frac{1}{n} \sum_{k=1}^{n} \left( A_i \mathbf{X}_k + \mathbf{t}_i \right) = A_i \left( \mathbf{X}_j - \frac{1}{n} \sum_{k=1}^{n} \mathbf{X}_k \right) = A_i \hat{\mathbf{X}}_j$

$\hat{\mathbf{x}}_{ij} = A_i \hat{\mathbf{X}}_j$: the centered 2D point (observed) is a linear (affine) mapping of the centered 3D point.
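In MATLAB, this centering is a single line per camera (a sketch, assuming x is a 2×n matrix of the points tracked in one image):

% Shift the tracked points of one camera to their centroid.
xhat = x - mean(x, 2);   % subtract the per-camera centroid (implicit expansion)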
Suppose we know the 3D points and affine camera parameters…
…then we can compute the observed 2D positions of each point:

$\begin{bmatrix} A_1 \\ A_2 \\ \vdots \\ A_m \end{bmatrix} \begin{bmatrix} \mathbf{X}_1 & \mathbf{X}_2 & \cdots & \mathbf{X}_n \end{bmatrix} = \begin{bmatrix} \hat{\mathbf{x}}_{11} & \hat{\mathbf{x}}_{12} & \cdots & \hat{\mathbf{x}}_{1n} \\ \hat{\mathbf{x}}_{21} & \hat{\mathbf{x}}_{22} & \cdots & \hat{\mathbf{x}}_{2n} \\ \vdots & \vdots & & \vdots \\ \hat{\mathbf{x}}_{m1} & \hat{\mathbf{x}}_{m2} & \cdots & \hat{\mathbf{x}}_{mn} \end{bmatrix}$

Camera parameters (2m×3) · 3D points (3×n) = 2D image points (2m×n)

What if we instead observe corresponding 2D image points? Can we recover the camera parameters and 3D points?

$D = \begin{bmatrix} \hat{\mathbf{x}}_{11} & \hat{\mathbf{x}}_{12} & \cdots & \hat{\mathbf{x}}_{1n} \\ \vdots & \vdots & & \vdots \\ \hat{\mathbf{x}}_{m1} & \hat{\mathbf{x}}_{m2} & \cdots & \hat{\mathbf{x}}_{mn} \end{bmatrix} \overset{?}{=} \begin{bmatrix} A_1 \\ A_2 \\ \vdots \\ A_m \end{bmatrix} \begin{bmatrix} \mathbf{X}_1 & \mathbf{X}_2 & \cdots & \mathbf{X}_n \end{bmatrix}$  (2m camera rows, n point columns)

What rank is the matrix of 2D points? (At most 3: D is the product of a 2m×3 matrix and a 3×n matrix.)
Factorizing the measurement matrix
D = AX

• Singular value decomposition of D: D = U W Vᵀ
• Obtaining a factorization from SVD: keep only the three largest singular values and the corresponding columns of U and V, giving the rank-3 approximation D ≈ Ã X̃

Source: M. Hebert
Affine ambiguity
D = Ã S̃

• The decomposition is not unique: we get the same D by using any invertible 3×3 matrix C and applying the transformations A → AC, X → C⁻¹X
• Why? We have only an affine transformation, and we have not enforced any Euclidean constraints (e.g., perpendicular image axes)

Source: M. Hebert
Eliminating the affine ambiguity
• Orthographic: image axes are perpendicular and of unit length:
  a1 · a2 = 0
  |a1|² = |a2|² = 1

Source: M. Hebert
Solve for orthographic constraints
Three equations for each image i:

$\tilde{\mathbf{a}}_{i1}^T C C^T \tilde{\mathbf{a}}_{i1} = 1, \qquad \tilde{\mathbf{a}}_{i2}^T C C^T \tilde{\mathbf{a}}_{i2} = 1, \qquad \tilde{\mathbf{a}}_{i1}^T C C^T \tilde{\mathbf{a}}_{i2} = 0, \qquad \text{where } \tilde{A}_i = \begin{bmatrix} \tilde{\mathbf{a}}_{i1}^T \\ \tilde{\mathbf{a}}_{i2}^T \end{bmatrix}$

• Solve for L = CCᵀ (the equations are linear in the entries of L)
• Recover C from L by Cholesky decomposition: L = CCᵀ
• Update A and X: A = ÃC, X = C⁻¹X̃
How to solve L = CCᵀ?
Each constraint of the form $\mathbf{a}^T L \mathbf{b} = k$ (with known $\mathbf{a} = [a\ b\ c]^T$, $\mathbf{b} = [d\ e\ f]^T$) is linear in the nine entries of L:

$\begin{bmatrix} ad & bd & cd & ae & be & ce & af & bf & cf \end{bmatrix} \begin{bmatrix} L_{11} \\ L_{12} \\ L_{13} \\ L_{21} \\ L_{22} \\ L_{23} \\ L_{31} \\ L_{32} \\ L_{33} \end{bmatrix} = k$

(since L is symmetric, the index ordering is immaterial). In MATLAB, the coefficient row is:
reshape([a b c]' * [d e f], [1, 9])
Algorithm summary
• Given: m images and n tracked features x_ij
• For each image i, center the feature coordinates
• Construct a 2m × n measurement matrix D:
  • Column j contains the projection of point j in all views
  • Row i contains one coordinate of the projections of all n points in image i
• Factorize D:
  • Compute the SVD: D = U W Vᵀ
  • Create U₃ by taking the first 3 columns of U
  • Create V₃ by taking the first 3 columns of V
  • Create W₃ by taking the upper-left 3 × 3 block of W
• Create the motion (affine) and shape (3D) matrices:
  A = U₃W₃^(1/2) and S = W₃^(1/2)V₃ᵀ
• Eliminate the affine ambiguity (a sketch of the whole pipeline follows below):
  • Solve L = CCᵀ using the metric constraints
  • Solve for C using Cholesky decomposition
  • Update A and S: A = AC, S = C⁻¹S
Source: M. Hebert
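A minimal MATLAB sketch of this factorization pipeline (illustrative, not the HW solution; D is assumed to be the centered 2m×n measurement matrix with rows ordered u then v for each camera):

% Tomasi-Kanade affine SfM via factorization (illustrative sketch).
m = size(D, 1) / 2;
[U, W, V] = svd(D);
U3 = U(:, 1:3);  V3 = V(:, 1:3);  W3 = W(1:3, 1:3);
A = U3 * sqrt(W3);                 % motion, 2m x 3
S = sqrt(W3) * V3';                % shape, 3 x n

% Eliminate the affine ambiguity: metric constraints, linear in L = C*C'.
G = zeros(3 * m, 9);  b = zeros(3 * m, 1);
for i = 1:m
  a1 = A(2*i - 1, :);  a2 = A(2*i, :);
  G(3*i - 2, :) = reshape(a1' * a1, [1, 9]);  b(3*i - 2) = 1;  % |a1|^2 = 1
  G(3*i - 1, :) = reshape(a2' * a2, [1, 9]);  b(3*i - 1) = 1;  % |a2|^2 = 1
  G(3*i,     :) = reshape(a1' * a2, [1, 9]);  b(3*i)     = 0;  % a1 . a2 = 0
end
L = reshape(G \ b, [3, 3]);
L = (L + L') / 2;                  % symmetrize against numerical noise
C = chol(L, 'lower');              % recover C (requires L positive definite)
A = A * C;                         % metric motion
S = C \ S;                         % metric shape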
Dealing with missing data
• So far, we have assumed that all points are visible in all views
• In reality, the measurement matrix typically has many missing entries, since each camera (block of rows) observes only a subset of the points (columns)

One solution:
• Solve using a dense submatrix of visible points
• Iteratively add new cameras
Reconstruction results

C. Tomasi and T. Kanade. Shape and motion from image streams under orthography:
A factorization method. IJCV, 9(2):137-154, November 1992.
Further reading
• Short explanation of affine SfM: class notes from Lischinski and Gruber
  http://www.cs.huji.ac.il/~csip/sfm.pdf
• Clear explanation of epipolar geometry and projective SfM:
  http://mi.eng.cam.ac.uk/~cipolla/publications/contributionToEditedBook/2008-SFM-chapters.pdf
Review of Affine SfM from Interest Points
1. Detect interest points (e.g., Harris)

$\mu(\sigma_I, \sigma_D) = g(\sigma_I) * \begin{bmatrix} I_x^2(\sigma_D) & I_x I_y(\sigma_D) \\ I_x I_y(\sigma_D) & I_y^2(\sigma_D) \end{bmatrix}$

1. Image derivatives: I_x, I_y
2. Squares of derivatives: I_x², I_y², I_x I_y
3. Gaussian filter g(σ_I): g(I_x²), g(I_y²), g(I_x I_y)
4. Cornerness function (large when both eigenvalues λ₁, λ₂ are strong, since det M = λ₁λ₂ and trace M = λ₁ + λ₂):

$har = \det[\mu(\sigma_I, \sigma_D)] - \alpha \left[\operatorname{trace}\!\left(\mu(\sigma_I, \sigma_D)\right)\right]^2 = g(I_x^2)\, g(I_y^2) - \left[g(I_x I_y)\right]^2 - \alpha \left[g(I_x^2) + g(I_y^2)\right]^2$

5. Non-maxima suppression
Review of Affine SfM from Interest Points
2. Correspondence via Lucas-Kanade tracking
a) Initialize (x’, y’) = (x, y) at the original position, and compute I_t = I(x’, y’, t+1) − I(x, y, t)
b) Compute the displacement (u, v) from the 2nd moment matrix of the feature patch in the first image
c) Shift the window by (u, v): x’ = x’ + u; y’ = y’ + v
d) Recalculate I_t
e) Repeat steps (b)-(d) until the change is small
• Use interpolation for subpixel values
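A minimal MATLAB sketch of one Lucas-Kanade update (step b above) for a single patch (Ix, Iy, It are assumed precomputed patch gradients and the temporal difference; interpolation and the outer iteration loop are omitted):

% One Lucas-Kanade displacement update for a feature patch (illustrative).
M = [sum(Ix(:).^2),     sum(Ix(:).*Iy(:));
     sum(Ix(:).*Iy(:)), sum(Iy(:).^2)];          % 2nd moment matrix
rhs = -[sum(Ix(:).*It(:)); sum(Iy(:).*It(:))];
uv = M \ rhs;                                    % displacement (u, v)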
Review of Affine SfM from Interest Points
3. Get the affine camera matrices and 3D points using Tomasi-Kanade factorization, then solve for the orthographic constraints
HW 3 – Part 1: Epipolar Geometry
Problem: recover F from matches with outliers

load matches.mat
• [c1, r1] – 477 x 2
• [c2, r2] – 500 x 2
• matches – 252 x 2
  • matches(:,1): matched point in im1
  • matches(:,2): matched point in im2

Write-up:
• Describe what test you used for deciding inlier vs. outlier
• Display the estimated fundamental matrix F after normalizing it to unit length
• Plot the outlier keypoints with green dots on top of the first image: plot(x, y, '.g');
• Plot the corresponding epipolar lines

Distance of a point to an epipolar line, with l = Fx = [a b c]ᵀ and x’ = [u v 1]ᵀ:

$d(l, x') = \frac{|au + bv + c|}{\sqrt{a^2 + b^2}}$
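The same distance in MATLAB (a sketch, assuming F is 3×3 and x1, x2 are homogeneous column vectors with third component 1):

% Distance of x2 to the epipolar line l = F * x1 in the second image.
l = F * x1;                                 % l = [a; b; c]
d = abs(l' * x2) / sqrt(l(1)^2 + l(2)^2);   % |a*u + b*v + c| / sqrt(a^2 + b^2)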
HW 3 – Part 2: Affine SfM
Problem: recover motion and structure

load tracks.mat
• track_x – [500 x 51]
• track_y – [500 x 51]

Use plotSfM(A, S) to display motion and shape
• A – [2m x 3] motion matrix
• S – [3 x n] shape matrix
HW 3 – Part 2: Affine SfM
• Eliminate the affine ambiguity: three equations for each image i,

$\tilde{\mathbf{a}}_{i1}^T C C^T \tilde{\mathbf{a}}_{i1} = 1, \qquad \tilde{\mathbf{a}}_{i2}^T C C^T \tilde{\mathbf{a}}_{i2} = 1, \qquad \tilde{\mathbf{a}}_{i1}^T C C^T \tilde{\mathbf{a}}_{i2} = 0, \qquad \text{where } \tilde{A}_i = \begin{bmatrix} \tilde{\mathbf{a}}_{i1}^T \\ \tilde{\mathbf{a}}_{i2}^T \end{bmatrix}$

• Solve for L = CCᵀ:
  L = reshape(A\b, [3,3]); % A – 3m x 9, b – 3m x 1
• Recover C from L by Cholesky decomposition: L = CCᵀ
• Update A and X: A = AC, X = C⁻¹X
HW 3 – Graduate credits
Single-view metrology

Assume the sign is 1.65 m tall.

Question: what are the heights of
- the building
- the tractor
- the camera?
HW 3 – Graduate credits
Automatic vanishing point detection

Input:
• lines: a matrix of size [NumLines x 5], where each row represents a line segment as (x1, y1, x2, y2, lineLength)

Output:
• VP: [2 x 3], each column corresponds to a vanishing point, in the order X, Y, Z
• lineLabel: [NumLines x 3], each column is a logical vector indicating which line segments correspond to that vanishing point
HW 3 – Graduate credits
Epipolar geometry

Try the “un-normalized” 8-point algorithm.
Report and compare its accuracy with the normalized version.
HW 3 – Graduate credits
Affine structure from motion

• Missing track completion
• Some keypoints will fall out of frame, or come into frame, throughout the sequence
• Fill in the missing data and visualize the predicted positions of points that aren't visible in a particular frame
Multi-view stereo
• Generic problem formulation: given several images of the same object or scene, compute a representation of its 3D shape
• “Images of the same object or scene”
  • Arbitrary number of images (from two to thousands)
  • Arbitrary camera positions (special rig, camera network, or video sequence)
  • Calibration may be known or unknown
• “Representation of 3D shape”
  • Depth maps
  • Meshes
  • Point clouds
  • Patch clouds
  • Volumetric models
  • …
Multi-view stereo: Basic idea

Source: Y. Furukawa
Plane Sweep Stereo
[Figure: input images and a reference camera with a family of depth planes]

• Sweep a family of planes at different depths w.r.t. a reference camera
  • For each depth, project each input image onto that plane
  • This is equivalent to a homography warping each input image into the reference view
  • What can we say about the scene points that are at the right depth?

R. Collins. A space-sweep approach to true multi-image matching. CVPR 1996.
Plane Sweep Stereo
[Figure: a sweeping plane cutting the scene surface, observed by Image 1 and Image 2]
Plane Sweep Stereo
• For each depth plane:
  • For each pixel in the composite image stack, compute the variance
• For each pixel, select the depth that gives the lowest variance (see the sketch below)
• Can be accelerated using graphics hardware

R. Yang and M. Pollefeys. Multi-Resolution Real-Time Stereo on Commodity Graphics Hardware, CVPR 2003
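A minimal MATLAB sketch of this depth-selection rule (the per-depth homography warps are assumed already done: warped is a hypothetical h×w×k×d array holding the k input images warped into the reference view for each of d candidate depths, and depths is the 1×d vector of those depths):

% Plane-sweep depth selection (illustrative sketch).
cost = squeeze(var(warped, 0, 3));   % per-pixel variance across the k views -> h x w x d
[~, idx] = min(cost, [], 3);         % depth hypothesis with the lowest variance
depthMap = depths(idx);              % convert plane indices to metric depths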
Merging depth maps
• Given a group of images, choose each one as reference and compute a depth map w.r.t. that view using a multi-baseline approach
• Merge the multiple depth maps into a volume or a mesh (see, e.g., Curless and Levoy 96)

[Figure: Map 1 + Map 2 → Merged]
Stereo from community photo collections
• Need structure from motion to recover the unknown camera parameters
• Need view selection to find good groups of images on which to run dense stereo
Towards Internet-Scale Multi-View Stereo
• YouTube video, high-quality video

Yasutaka Furukawa, Brian Curless, Steven M. Seitz and Richard Szeliski, Towards Internet-scale Multi-view Stereo, CVPR 2010.
Internet-Scale Multi-View Stereo

The Visual Turing Test for Scene Reconstruction

Q. Shan, R. Adams, B. Curless, Y. Furukawa, and S. Seitz, "The Visual Turing Test for Scene Reconstruction," 3DV 2013.
The Reading List
• “A computer algorithm for reconstructing a scene from two projections”, Longuet-Higgins, Nature 1981
• “Shape and motion from image streams under orthography: a factorization method”, C. Tomasi and T. Kanade, IJCV 9(2):137-154, November 1992
• “In defense of the eight-point algorithm”, Hartley, PAMI 1997
• “An efficient solution to the five-point relative pose problem”, Nistér, PAMI 2004
• “Accurate, dense, and robust multiview stereopsis”, Furukawa and Ponce, CVPR 2007
• “Photo tourism: exploring photo collections in 3D”, Snavely et al., ACM SIGGRAPH 2006
• “Building Rome in a day”, Agarwal et al., ICCV 2009
• https://www.youtube.com/watch?v=kyIzMr917Rc, 3D Computer Vision: Past, Present, and Future
Next class
• Grouping and Segmentation
