Matrix and Tensor Tools 
FOR COMPUTER VISION 
ANDREWS C. SOBRAL 
ANDREWSSOBRAL@GMAIL.COM 
PH.D. STUDENT, COMPUTER VISION 
LAB. L3I – UNIV. DE LA ROCHELLE, FRANCE
Principal Component Analysis (PCA)
Principal Component Analysis 
PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. 
Dimensionality reduction 
Variants: Multilinear PCA, ICA, LDA, Kernel PCA, Nonlinear PCA, .... 
http://www.nlpca.org/pca_principal_component_analysis.html
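As a minimal illustration (ours, not from the slides), PCA can be computed from the SVD of the mean-centered data matrix in a few lines of MATLAB:

% X: n-by-d data matrix (n observations, d variables)
X  = randn(100, 5) * randn(5, 5);                 % toy correlated data
Xc = X - repmat(mean(X, 1), size(X, 1), 1);       % center each variable
[U, S, V] = svd(Xc, 'econ');                      % columns of V are the principal directions
scores   = U * S;                                 % principal component scores (same as Xc*V)
variance = diag(S).^2 / (size(X, 1) - 1);         % variance explained by each component
k = 2;                                            % keep the first k components
Xreduced = Xc * V(:, 1:k);                        % dimensionality reduction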
Principal Component Analysis 
http://store.elsevier.com/Introduction-to-Pattern-Recognition-A-Matlab-Approach/Sergios-Theodoridis/isbn-9780123744869/
Singular Value Decomposition (SVD)
Singular Value Decomposition 
Formally, the singular value decomposition of an m×n real or complex matrix M is a factorization of the form: 
M = U Σ V* 
where U is an m×m real or complex unitary matrix, Σ is an m×n rectangular diagonal matrix with nonnegative real numbers on the diagonal, and V* (the conjugate transpose of V, or simply the transpose of V if V is real) is an n×n real or complex unitary matrix. The diagonal entries Σ_{i,i} of Σ are known as the singular values of M. The m columns of U and the n columns of V are called the left-singular vectors and right-singular vectors of M, respectively. 
generalization of eigenvalue decomposition 
http://www.numtech.com/systems/
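A short MATLAB sketch (our illustration) of the decomposition and of a truncated rank-k approximation:

M = randn(8, 5);                          % any m-by-n real matrix
[U, S, V] = svd(M);                       % M = U*S*V'
k = 2;                                    % truncation rank
Mk = U(:, 1:k) * S(1:k, 1:k) * V(:, 1:k)';  % best rank-k approximation in the Frobenius norm
err = norm(M - Mk, 'fro');                % approximation error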
Original data and its low-rank reconstructions (panels: ORIGINAL, Z = 1, Z = 1 + 2).
Robust PCA (RPCA)
Robust PCA 
M = L + S, where M is the matrix of corrupted observations, L is the underlying low-rank matrix and S is the sparse error matrix. 
http://perception.csl.illinois.edu/matrix-rank/home.html
Robust PCA 
http://perception.csl.illinois.edu/matrix-rank/home.html
Robust PCA 
One effective way to solve PCP for the case of large matrices is to use a standard augmented Lagrangian multiplier method (ALM) (Bertsekas, 1982), which works with the augmented Lagrangian 
l(L, S, Y) = ||L||* + λ||S||_1 + ⟨Y, M − L − S⟩ + (μ/2) ||M − L − S||_F² 
and then minimizing it iteratively by setting 
(L_{k+1}, S_{k+1}) = arg min_{L,S} l(L, S, Y_k),   Y_{k+1} = Y_k + μ (M − L_{k+1} − S_{k+1}) 
where λ is the weight of the sparse term and μ > 0 is the penalty parameter (a simplified code sketch is given after the references below). 
More information: 
More information: 
(Qiu and Vaswani, 2011), (Pope et al., 2011), (Rodríguez and Wohlberg, 2013)
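Below is a compact MATLAB sketch of an inexact-ALM iteration for PCP (a simplified illustration in the spirit of Lin et al., 2010, not the reference implementation linked on the next slide; lambda defaults to 1/sqrt(max(m,n)) as is common):

function [L, S] = rpca_alm_sketch(M, lambda, tol, maxIter)
% Simplified inexact ALM for Principal Component Pursuit:
%   min ||L||_* + lambda*||S||_1   s.t.   M = L + S
if nargin < 2, lambda = 1 / sqrt(max(size(M))); end
if nargin < 3, tol = 1e-7; end
if nargin < 4, maxIter = 500; end
[m, n] = size(M);
Y  = M / max(norm(M, 2), norm(M(:), inf) / lambda);  % common dual initialization
mu = 1.25 / norm(M, 2);  rho = 1.5;
L = zeros(m, n);  S = zeros(m, n);
for iter = 1:maxIter
    % L-step: singular value thresholding of (M - S + Y/mu)
    [U, D, V] = svd(M - S + Y/mu, 'econ');
    L = U * diag(max(diag(D) - 1/mu, 0)) * V';
    % S-step: soft thresholding (shrinkage) of (M - L + Y/mu)
    T = M - L + Y/mu;
    S = sign(T) .* max(abs(T) - lambda/mu, 0);
    % dual update
    Z = M - L - S;
    Y = Y + mu * Z;
    mu = min(mu * rho, 1e7);
    if norm(Z, 'fro') / norm(M, 'fro') < tol, break; end
end
end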
Robust PCA 
For more information see: (Lin et al., 2010) 
http://perception.csl.illinois.edu/matrix-rank/sample_code.html
LOW-RANK REPRESENTATION (LRR)
Low-rank Representation (LRR) 
Subspace clustering problem!
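For reference (our addition, following the standard formulation of Liu et al., 2010), LRR recovers the lowest-rank representation Z of the data X in terms of the data itself, together with a column-sparse error term E: 

min_{Z,E} ||Z||* + λ ||E||_{2,1}   s.t.   X = XZ + E 

The matrix |Z| + |Z'| is then used as an affinity matrix for spectral clustering of the subspaces.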
NON-NEGATIVE MATRIX FACTORIZATION (NMF)
Non-Negative Matrix Factorizations (NMF) 
In many applications, data is non-negative, often due to physical considerations. 
◦ images are described by pixel intensities; 
◦ texts are represented by vectors of word counts; 
It is desirable to retain the non-negative characteristics of the original data.
Non-Negative Matrix Factorizations (NMF) 
NMF provides an alternative approach to decomposition that assumes that the data and the components are non-negative. 
For interpretation purposes, one can think of imposing non-negativity constraints on the factor U so that basis elements belong to the same space as the original data. 
H >= 0 constrains the basis elements to be non-negative. Moreover, in order to force the reconstruction of the basis elements to be additive, one can impose the weights W to be non-negative as well, leading to a part-based representation. 
W >= 0 imposes an additive reconstruction. (A minimal sketch of the multiplicative updates is given after the references below.) 
References: 
The Why and How of Nonnegative Matrix Factorization (Nicolas Gillis, 2014) 
Nonnegative Matrix Factorization: Complexity, Algorithms and Applications (Nicolas Gillis, 2011)
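A minimal MATLAB sketch of NMF with the classical multiplicative updates of Lee and Seung (our illustration; here X ≈ W*H with W as the non-negative basis and H as the non-negative weights, so the naming differs slightly from the slide):

% Multiplicative-update NMF sketch:  X (m-by-n, non-negative) ~ W (m-by-r) * H (r-by-n)
X = abs(randn(50, 40));               % toy non-negative data
r = 5;                                % number of components
W = rand(size(X, 1), r);  H = rand(r, size(X, 2));
for iter = 1:200
    H = H .* (W' * X) ./ max(W' * W * H, eps);    % update H, stays non-negative
    W = W .* (X * H') ./ max(W * (H * H'), eps);  % update W, stays non-negative
end
err = norm(X - W * H, 'fro');         % reconstruction error

MATLAB's Statistics Toolbox also offers nnmf for the same purpose.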
Non-Negative Matrix Factorizations (NMF) 
Similar to sparse and low-rank matrix decompositions, e.g. RPCA, MahNMF robustly estimates the low-rank part and the sparse part of a non-negative matrix and thus performs effectively when data are contaminated by outliers. 
https://sites.google.com/site/nmfsolvers/
Introduction to tensors
Introduction to tensors 
Tensors are simply mathematical objects that can be used to describe physical properties. In fact, tensors are merely a generalization of scalars, vectors and matrices; a scalar is a zero rank tensor, a vector is a first rank tensor and a matrix is a second rank tensor.
Introduction to tensors 
Subarrays, tubes and slices of a 3rd order tensor.
Introduction to tensors 
Matricization of a 3rd order tensor.
Horizontal, vertical and frontal slices from a 3rd order tensor.
Introduction to tensors 
Unfolding a 3rd order tensor.
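A small MATLAB sketch of the mode-n unfoldings of a 3rd order tensor (our illustration, following the common Kolda and Bader ordering; with the Tensor Toolbox this corresponds to tenmat(tensor(X), n)):

X = reshape(1:24, [3 4 2]);                   % a 3x4x2 tensor with entries 1..24
[I, J, K] = size(X);
X1 = reshape(X, I, J*K);                      % mode-1 unfolding (I x JK)
X2 = reshape(permute(X, [2 1 3]), J, I*K);    % mode-2 unfolding (J x IK)
X3 = reshape(permute(X, [3 1 2]), K, I*J);    % mode-3 unfolding (K x IJ)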
Introduction to tensors 
Tensor transposition 
◦ While there is only one way to transpose a matrix, there are an exponential number of ways to transpose an order-n tensor. 
The 3rd order case:
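For instance, in MATLAB the transpositions of a 3rd order tensor are obtained with permute (a small illustration):

X = randn(3, 4, 5);
Xt1 = permute(X, [1 3 2]);   % swap modes 2 and 3
Xt2 = permute(X, [2 1 3]);   % swap modes 1 and 2 (transposes each frontal slice)
Xt3 = permute(X, [3 2 1]);   % reverse the mode order
% the six permutations of [1 2 3] give the six transpositions of a 3rd order tensor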
Tensor decomposition methods 
Approaches: 
◦Tucker / HOSVD 
◦CANDECOMP-PARAFAC (CP) 
◦Hierarchical Tucker (HT) 
◦Tensor-Train decomposition (TT) 
◦NTF (Non-negative Tensor Factorization) 
◦NTD (Non-negative Tucker Decomposition) 
◦NCP (Non-negative CP Decomposition) 
References: 
Tensor Decompositions and Applications (Kolda and Bader, 2008)
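As a usage sketch (assuming the Tensor Toolbox of Bader and Kolda is on the MATLAB path; exact options depend on the toolbox version), Tucker and CP decompositions can be computed as follows:

X = tensor(randn(10, 12, 14));        % wrap a dense array as a tensor object
T = tucker_als(X, [4 4 4]);           % Tucker decomposition via ALS, 4x4x4 core
P = cp_als(X, 5);                     % CP decomposition with 5 rank-one components
Xhat_tucker = full(T);                % reconstruct the approximations
Xhat_cp     = full(P);
fit_cp = 1 - norm(X - Xhat_cp) / norm(X);   % relative fit of the CP model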
Tucker / HoSVD
CP 
The CP model is a special case of the Tucker model, where the core tensor is superdiagonal and the number of components in the factor matrices is the same. 
Solving by ALS (alternating least squares) framework (a plain-MATLAB sketch follows below).
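A plain-MATLAB sketch of those ALS iterations for a 3rd order CP model (our simplified illustration, with no normalization, initialization strategy or convergence check):

function [A, B, C] = cp_als_sketch(X, R, nIter)
% X: I x J x K array, R: number of rank-one components
[I, J, K] = size(X);
X1 = reshape(X, I, J*K);                       % mode-1 unfolding
X2 = reshape(permute(X, [2 1 3]), J, I*K);     % mode-2 unfolding
X3 = reshape(permute(X, [3 1 2]), K, I*J);     % mode-3 unfolding
A = randn(I, R);  B = randn(J, R);  C = randn(K, R);
for it = 1:nIter
    A = X1 * kr(C, B) * pinv((C'*C) .* (B'*B));   % update A with B, C fixed
    B = X2 * kr(C, A) * pinv((C'*C) .* (A'*A));   % update B with A, C fixed
    C = X3 * kr(B, A) * pinv((B'*B) .* (A'*A));   % update C with A, B fixed
end
end

function P = kr(A, B)
% column-wise Khatri-Rao product
P = zeros(size(A, 1) * size(B, 1), size(A, 2));
for r = 1:size(A, 2)
    P(:, r) = kron(A(:, r), B(:, r));
end
end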
Tensor decomposition methods 
Software 
◦ There are several MATLAB toolboxes available for dealing with tensors in CP and Tucker decomposition, including the Tensor Toolbox, the N-way toolbox, the PLS Toolbox, and Tensorlab. The TT-Toolbox provides MATLAB classes covering tensors in TT and QTT decomposition, as well as linear operators. There is also a Python implementation of the TT-Toolbox called ttpy. The htucker toolbox provides a MATLAB class representing a tensor in HT decomposition.
Incremental SVD
Incremental SVD 
Problem: 
◦The matrix factorization step in SVD is computationally very expensive. 
Solution: 
◦Have a small pre-computed SVD model, and build upon this model incrementally using inexpensive techniques. 
Businger (1970) and Bunch and Nielsen (1978) were the first authors to propose updating the SVD sequentially with the arrival of more samples, i.e. appending/removing a row/column. 
Subsequently, various approaches have been proposed to update the SVD more efficiently and to support new operations (see the sketch after the references below). 
References: 
Businger, P.A. Updating a singular value decomposition. 1970 
Bunch, J.R.; Nielsen, C.P. Updating the singular value decomposition. 1978
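To make the updating operation concrete, here is a sketch of the classical add-columns update in the spirit of Brand (2006): given a thin SVD X ≈ U*S*V' and a block of new columns A, the SVD of [X A] is recovered from the SVD of a small (k+c)-by-(k+c) core matrix instead of refactoring everything (our simplified illustration, without rank truncation or re-orthogonalization):

function [U1, S1, V1] = isvd_addcols_sketch(U, S, V, A)
% U: n x k, S: k x k, V: m x k  with  X ~ U*S*V';  A: n x c new columns.
k = size(U, 2);  c = size(A, 2);
Mproj = U' * A;                 % component of A inside the current subspace
P0    = A - U * Mproj;          % component of A orthogonal to it
[P, Ra] = qr(P0, 0);            % orthonormal basis of the new directions
% small core matrix whose SVD re-diagonalizes the update
Kmat = [S, Mproj; zeros(c, k), Ra];
[Uk, S1, Vk] = svd(Kmat);
U1 = [U, P] * Uk;                                          % updated left singular vectors
V1 = [V, zeros(size(V, 1), c); zeros(c, k), eye(c)] * Vk;  % updated right singular vectors
end

In practice the result is truncated back to rank k after each update and U, V are periodically re-orthogonalized; the MATLAB implementations listed a few slides below handle such details.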
Incremental SVD 
Sliding-window example over the frame stream F(t), …, F(t+5): an initial batch SVD is computed on the first frames and then updated incrementally as new frames arrive, e.g. 
[U, S, V] = SVD( [ F(t), …, F(t+2) ] ) 
[U', S', V'] = iSVD( [ F(t+3), …, F(t+5) ], [U, S, V] )
Incremental SVD 
Updating operation proposed by Sarwar et al. (2002): 
References: 
Incremental Singular Value Decomposition Algorithms for Highly Scalable Recommender Systems (Sarwar et al., 2002)
Incremental SVD 
Operations proposed by Matthew Brand (2006): 
References: 
Fast low-rank modifications of the thin singular value decomposition (Matthew Brand, 2006)
Incremental SVD 
Operations proposed by Melenchón and Martínez (2007): 
References: 
Efficiently Downdating, Composing and Splitting Singular Value Decompositions Preserving the Mean Information (Melenchón and Martínez, 2007)
Incremental SVD algorithms in MATLAB 
By Christopher Baker (Baker et al., 2012) 
http://www.math.fsu.edu/~cbaker/IncPACK/ 
[Up,Sp,Vp] = SEQKL(A,k,t,[U S V]) 
Original version only supports the Updating operation 
Added exponential forgetting factor to support the Downdating operation 
By David Ross (Ross et al., 2007) 
http://www.cs.toronto.edu/~dross/ivt/ 
[U, D, mu, n] = sklm(data, U0, D0, mu0, n0, ff, K) 
Supports mean-update, updating and downdating 
By David Wingate (Matthew Brand, 2006) 
http://web.mit.edu/~wingated/www/index.html 
[Up,Sp,Vp] = svd_update(U,S,V,A,B,force_orth) 
update the SVD to be [X + A'*B]=Up*Sp*Vp' (a general matrix update). 
[Up,Sp,Vp] = addblock_svd_update(U,S,V,A,force_orth) 
update the SVD to be [X A] = Up*Sp*Vp' (add columns [i.e., new data points]) 
size of Vp increases 
A must be a square matrix 
[Up,Sp,Vp] = rank_one_svd_update(U,S,V,a,b,force_orth) 
update the SVD to be [X + a*b'] = Up*Sp*Vp' (that is, a general rank-one update. This can be used to add columns, zero columns, change columns, recenter the matrix, etc.).
Incremental Tensor Learning
Incremental tensor learning 
Proposed by Sun et al. (2008) 
◦Dynamic Tensor Analysis (DTA) 
◦Streaming Tensor Analysis (STA) 
◦Window-based Tensor Analysis (WTA) 
References: 
Incremental tensor analysis: Theory and applications (Sun et al., 2008)
Incremental tensor learning 
Dynamic Tensor Analysis (DTA) 
◦ [T, C] = DTA(Xnew, R, C, alpha) 
◦ Approximately updates tensor PCA, according to the new tensor Xnew, the old variance matrices in C and the specified dimensions in vector R. The input Xnew is a tensor. The result returned in T is a tucker tensor and C is the cell array of new variance matrices.
Incremental tensor learning 
Streaming Tensor Analysis (STA) 
◦ [Tnew, S] = STA(Xnew, R, T, alpha, samplingPercent) 
◦ Approximately updates tensor PCA, according to the new tensor Xnew, the old tucker decomposition T and the specified dimensions in vector R. The input Xnew is a tensor or sptensor. The result returned in Tnew is a new tucker tensor for Xnew; S has the energy vector along each mode.
Incremental tensor learning 
Window-based Tensor Analysis (WTA) 
◦ [T, C] = WTA(Xnew, R, Xold, overlap, type, C) 
◦ Computes a window-based tensor decomposition, according to Xnew (Xold), the new (old) tensor window; overlap, the number of overlapping tensors; C, the variance matrices (except for the time mode) of the previous tensor window; and the specified dimensions in vector R. type can be 'tucker' (default) or 'parafac'. The input Xnew (Xold) is a tensor or sptensor, where the first mode is time. The result returned in T is a tucker or kruskal tensor depending on type, and C is the cell array of new variance matrices.
Incremental tensor learning 
Proposed by Hu et al. (2011) 
◦Incremental rank-(R1,R2,R3) tensor-based subspace learning 
◦IRTSA-GBM (grayscale) 
◦IRTSA-CBM (color) 
References: 
Incremental Tensor Subspace Learning and Its Applications to Foreground Segmentation and Tracking (Hu et al., 2011) 
Apply iSVD: SKLM (Ross et al., 2007)
Incremental tensor learning 
IRTSA architecture
Incremental rank-(R1,R2,R3) tensor-based subspace learning (Hu et al., 2011) 
Background Model Initialization (first N frames): the streaming video data is summarized by a set of N background images (exponential moving average), which form the sub-tensor Â(t); a standard rank-R SVD is applied to its mode-1, 2, 3 unfoldings to obtain the low-rank sub-tensor model. 
Background Model Maintenance (next frames): given the new sub-frame J(t+1) and the new background sub-frame B(t+1), the sub-tensor is updated to Â(t+1) (the last frame is dropped) and the incremental SVD (SKLM, Ross et al., 2007) is applied to the mode-1, 2, 3 unfoldings of the updated sub-tensor. 
Foreground Detection: {bg|fg} = P( J[t+1] | U[1], U[2], V[3] ), which yields the foreground mask.
Incremental and Multi-feature Tensor Subspace Learning applied for Background Modeling and Subtraction 
Proposed by Sobral et al. (2014) 
◦ Stores the last N frames of the streaming video data in a sliding block: the last frame is removed and the first frame coming from the video stream is added. 
◦ Performs feature extraction in the sliding block and builds or updates the tensor model 풯푡 (values × pixels × features). 
◦ Performs the iHoSVD to build or update the low-rank model ℒ푡. 
◦ Performs the Foreground Detection, which yields the foreground mask. 
Highlights: 
* Proposes an incremental low-rank HoSVD (iHoSVD) for background modeling and subtraction. 
* A unified tensor model to represent the features extracted from the streaming video data. 
More info: https://sites.google.com/site/ihosvd/
