The document discusses various matrix and tensor tools for computer vision, including principal component analysis (PCA), singular value decomposition (SVD), robust PCA, low-rank representation, non-negative matrix factorization, tensor decompositions, and incremental methods for SVD and tensor learning. It provides definitions and explanations of the techniques along with references for further information.
1. Matrix and Tensor Tools for Computer Vision
Andrews C. Sobral
andrewssobral@gmail.com
Ph.D. student, Computer Vision
Lab. L3i, Univ. de La Rochelle, France
3. Principal Component Analysis
PCA is a statistical procedure that uses orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components.
Dimensionality reduction
Variants: Multilinear PCA, ICA, LDA, Kernel PCA, Nonlinear PCA, ...
http://www.nlpca.org/pca_principal_component_analysis.html
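As a minimal illustration, PCA can be obtained from the SVD of the centered data matrix. The sketch below is plain Matlab on synthetic data; all variable names and the data are illustrative.

```matlab
% Minimal PCA sketch via the SVD of the centered data matrix.
X  = randn(100, 5) * randn(5, 5);             % 100 observations of 5 correlated variables
Xc = X - repmat(mean(X, 1), size(X, 1), 1);   % center each variable
[U, S, V] = svd(Xc, 'econ');                  % columns of V are the principal directions
scores    = Xc * V(:, 1:2);                   % project onto the first two principal components
explained = diag(S).^2 ./ sum(diag(S).^2);    % fraction of variance per component
```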
7. Singular Value Decomposition
Formally, the singular value decomposition of an m×n real or complex matrix M is a factorization of the form M = UΣV*,
where U is an m×m real or complex unitary matrix, Σ is an m×n rectangular diagonal matrix with nonnegative real numbers on the diagonal, and V* (the conjugate transpose of V, or simply the transpose of V if V is real) is an n×n real or complex unitary matrix. The diagonal entries Σi,i of Σ are known as the singular values of M. The m columns of U and the n columns of V are called the left-singular vectors and right-singular vectors of M, respectively.
The SVD is a generalization of the eigenvalue decomposition.
http://www.numtech.com/systems/
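A minimal sketch of the factorization in Matlab, using the built-in svd; the example matrix is arbitrary.

```matlab
% Minimal sketch: factor a matrix as M = U*S*V' and verify the reconstruction.
M = magic(4);                    % any real m-by-n matrix
[U, S, V] = svd(M);              % U: m-by-m, S: m-by-n diagonal, V: n-by-n
sigma = diag(S);                 % singular values, in non-increasing order
err = norm(M - U*S*V', 'fro');   % ~1e-14, i.e. numerically exact
```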
10. Robust PCA
Robust PCA decomposes a matrix of corrupted observations M into the sum of an underlying low-rank matrix L and a sparse error matrix S: M = L + S.
http://perception.csl.illinois.edu/matrix-rank/home.html
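A hedged sketch of one standard way to compute this decomposition, principal component pursuit via the inexact augmented Lagrange multiplier method (in the style of Candès et al.); this is an illustrative implementation, not the code behind the URL above.

```matlab
% Hedged sketch of Robust PCA by principal component pursuit (inexact ALM).
function [L, S] = rpca_sketch(M, lambda)
    [m, n] = size(M);
    if nargin < 2, lambda = 1/sqrt(max(m, n)); end
    mu = m*n / (4*sum(abs(M(:))));                    % common heuristic step size
    Y  = M / max(norm(M, 2), norm(M(:), inf)/lambda); % dual variable init
    L = zeros(m, n); S = zeros(m, n);
    for iter = 1:500
        % Singular value thresholding step for the low-rank part L
        [U, Sig, V] = svd(M - S + Y/mu, 'econ');
        L = U * max(Sig - 1/mu, 0) * V';
        % Soft-thresholding (shrinkage) step for the sparse part S
        S = sign(M - L + Y/mu) .* max(abs(M - L + Y/mu) - lambda/mu, 0);
        Y = Y + mu * (M - L - S);                     % dual ascent on M = L + S
        if norm(M - L - S, 'fro') < 1e-7 * norm(M, 'fro'), break; end
    end
end
```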
23. Introduction to tensors
Tensors are simply mathematical objects that can be used to describe physical properties. In fact, tensors are merely a generalization of scalars, vectors and matrices; a scalar is a zero-rank tensor, a vector is a first-rank tensor and a matrix is a second-rank tensor.
28. Introduction to tensors
Tensor transposition
◦While there is only one way to transpose a matrix, there are an exponential number of ways to transpose an order-n tensor.
The 3rd-order case: the three modes of a 3rd-order tensor can be permuted in 3! = 6 ways (see the sketch below).
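A minimal Matlab sketch of both ideas, using permute for tensor transposition and reshape for a mode-1 unfolding; the tensor is illustrative.

```matlab
% Minimal sketch of tensor transposition: each permutation of the modes of an
% order-3 tensor is one of its 3! = 6 "transposes".
A = reshape(1:24, [2 3 4]);      % a 2x3x4 third-order tensor
A132 = permute(A, [1 3 2]);      % swap modes 2 and 3 -> 2x4x3
A321 = permute(A, [3 2 1]);      % reverse the mode order -> 4x3x2
% Mode-1 unfolding (matricization): mode-1 fibers become the columns
A1 = reshape(A, size(A, 1), []); % 2x12 matrix
```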
37. Incremental SVD
Problem:
◦The matrix factorization step in SVD is computationally very expensive.
Solution:
◦Have a small pre-computed SVD model, and build upon this model incrementally using inexpensive techniques.
Businger (1970) and Bunch and Nielsen (1978) were the first to propose updating the SVD sequentially as new samples arrive, i.e. appending/removing a row/column.
Subsequently, various approaches have been proposed to update the SVD more efficiently and to support new operations.
References:
Businger, P.A. Updating a singular value decomposition. 1970
Bunch, J.R.; Nielsen, C.P. Updating the singular value decomposition. 1978
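A hedged sketch of one such inexpensive update, appending a single column in the spirit of Brand's method; the function name isvd_append and its structure are illustrative, not taken from the references above.

```matlab
% Hedged sketch of an incremental SVD column-append update (Brand-style).
function [U, S, V] = isvd_append(U, S, V, c)
    % Given the thin SVD X = U*S*V', return the SVD of [X c] for a new column c.
    l = U' * c;                   % component of c inside the current subspace
    h = c - U * l;                % residual orthogonal to the subspace
    k = norm(h);
    j = h / max(k, eps);          % new orthonormal direction (guard against k = 0)
    r = size(S, 1);
    K = [S, l; zeros(1, r), k];   % small (r+1)x(r+1) core matrix
    [Uk, S, Vk] = svd(K);         % cheap SVD of the small core
    U = [U, j] * Uk;              % rotate the enlarged left basis
    V = blkdiag(V, 1) * Vk;       % enlarge and rotate the right basis
end
```

Usage would start from a pre-computed thin SVD, e.g. [U,S,V] = svd(X,'econ'), and then call isvd_append once per arriving column instead of refactorizing.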
40. Incremental SVD
Operations proposed by Matthew Brand (2006):
References:
Fast low-rank modifications of the thin singular value decomposition (Matthew Brand, 2006)
42. Incremental SVD algorithms in Matlab
By Christopher Baker (Baker et al., 2012)
http://www.math.fsu.edu/~cbaker/IncPACK/
[Up,Sp,Vp] = SEQKL(A,k,t,[U S V])
Original version only supports the updating operation
Added exponential forgetting factor to support the downdating operation
By David Ross (Ross et al., 2007)
http://www.cs.toronto.edu/~dross/ivt/
[U, D, mu, n] = sklm(data, U0, D0, mu0, n0, ff, K)
Supports mean-update, updating and downdating
By David Wingate (Matthew Brand, 2006)
http://web.mit.edu/~wingated/www/index.html
[Up,Sp,Vp] = svd_update(U,S,V,A,B,force_orth)
update the SVD to be [X + A'*B] = Up*Sp*Vp' (a general matrix update)
[Up,Sp,Vp] = addblock_svd_update(U,S,V,A,force_orth)
update the SVD to be [X A] = Up*Sp*Vp' (add columns, i.e. new data points)
size of Vp increases
A must be a square matrix
[Up,Sp,Vp] = rank_one_svd_update(U,S,V,a,b,force_orth)
update the SVD to be [X + a*b'] = Up*Sp*Vp' (a general rank-one update; this can be used to add columns, zero columns, change columns, recenter the matrix, etc.)
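A hedged usage sketch of the rank-one update listed above, assuming the package is on the Matlab path; only the signature shown in the list is relied on.

```matlab
% Hedged usage sketch of rank_one_svd_update (Wingate's implementation of Brand, 2006).
X = randn(50, 20);
[U, S, V] = svd(X, 'econ');
a = randn(50, 1);  b = randn(20, 1);
[Up, Sp, Vp] = rank_one_svd_update(U, S, V, a, b, true);  % SVD of X + a*b'
err = norm((X + a*b') - Up*Sp*Vp', 'fro');                % should be ~0
```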
48. Incremental tensor learning
Proposed by Hu et al. (2011)
◦Incremental rank-(R1,R2,R3) tensor-based subspace learning
◦IRTSA-GBM (grayscale)
◦IRTSA-CBM (color)
References:
Incremental Tensor Subspace Learning and Its Applications to Foreground Segmentation and Tracking (Hu et al., 2011)
Applies iSVD via SKLM (Ross et al., 2007)
50. Incremental rank-(R1,R2,R3) tensor-based subspace learning (Hu et al., 2011)
Workflow (reconstructed from the slide's diagram):
◦Background Model Initialization: for the first N frames, the set of N background images forms the sub-tensor Â(t); a standard rank-R SVD is applied to its mode-1,2,3 unfoldings to obtain the low-rank sub-tensor model B(t+1).
◦Background Model Maintenance: for each next background sub-frame (a new sub-frame J(t+1) computed from the streaming video data by exponential moving average), the last frame is dropped and the incremental SVD (SKLM, Ross et al., 2007) is applied to the mode-1,2,3 unfoldings, yielding the updated sub-tensor Â(t+1) and model B(t+1).
◦Foreground Detection: for the next frames, the foreground mask is obtained as {bg|fg} = P( J[t+1] | U[1], U[2], V[3] ).
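A hedged sketch of the maintenance idea only, not Hu et al.'s actual code: one SVD is tracked per unfolding mode of the background sub-tensor, and each would be refreshed incrementally (e.g. with SKLM or the isvd_append sketch above) rather than recomputed from scratch.

```matlab
% Hedged sketch: keep one SVD per unfolding mode of the background sub-tensor.
A = rand(32, 32, 10);            % sub-tensor holding N = 10 background frames
U = cell(1, 3); S = cell(1, 3); V = cell(1, 3);
for mode = 1:3
    others = setdiff(1:3, mode);
    An = reshape(permute(A, [mode, others]), size(A, mode), []);  % mode-n unfolding
    [U{mode}, S{mode}, V{mode}] = svd(An, 'econ');                % initial rank-R model
end
% When a new background sub-frame arrives, its mode-n fibers would be appended
% to each unfolding via an incremental update instead of recomputing each SVD.
```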
51. Incremental and Multi-feature Tensor Subspace Learning applied for Background Modeling and Subtraction
Workflow (reconstructed from the slide's diagram):
◦Store the last N frames of the streaming video data in a sliding block 풜푡: remove the last frame from the sliding block and add the first frame coming from the video stream.
◦Perform feature extraction in the sliding block and build or update the tensor model 풯푡 (values × pixels × features).
◦Perform the iHoSVD to build or update the low-rank model ℒ푡.
◦Perform the Foreground Detection to obtain the foreground mask.
Proposed by Sobral et al. (2014)
Highlights:
* Proposes an incremental low-rank HoSVD (iHoSVD) for background modeling and subtraction.
* A unified tensor model to represent the features extracted from the streaming video data.
More info: https://sites.google.com/site/ihosvd/
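For reference, a hedged sketch of a truncated higher-order SVD (HoSVD), the batch counterpart of the iHoSVD named above; the tensor sizes and ranks R = [R1 R2 R3] are illustrative values.

```matlab
% Hedged sketch of a truncated higher-order SVD (HoSVD / Tucker decomposition).
A = rand(16, 16, 8);
R = [4, 4, 2];
G = A;                                    % will be reduced to the core tensor
U = cell(1, 3);
for mode = 1:3
    others = setdiff(1:3, mode);
    An = reshape(permute(A, [mode, others]), size(A, mode), []);  % mode-n unfolding
    [Um, ~, ~] = svd(An, 'econ');
    U{mode} = Um(:, 1:R(mode));           % leading left singular vectors per mode
    % Mode-n product G = G x_mode U{mode}': project the core along this mode
    Gn = reshape(permute(G, [mode, others]), size(G, mode), []);
    sz = size(G);  sz(mode) = R(mode);
    G = ipermute(reshape(U{mode}' * Gn, sz([mode, others])), [mode, others]);
end
% A is approximated by expanding the core G with the factors U{1}, U{2}, U{3}.
```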