Linear Algebra - Module 1
Matrix Notation
Matrices are denoted by capital letters (e.g. A, B, C) to distinguish them from other variables.
The dimensions are stated as rows by columns (r x c).
In the examples to the right:
Matrix A is 3 x 3
Matrix B is 2 x 2
Matrix C is 1 x 3
Terminology
Norm, sometimes referred to as length, of a vector is the square root of the sum of the squares of its elements. It may be written with two bars (‖v‖) or one (|v|).
Normalized Vector – a vector divided by its norm, so that its length is 1.
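The norm and normalization can be sketched in plain Python (the function names here are illustrative, not from the slides):

```python
import math

def norm(v):
    """Euclidean norm: the square root of the sum of squared elements."""
    return math.sqrt(sum(x * x for x in v))

def normalize(v):
    """Divide each element by the norm, so the result has length 1."""
    n = norm(v)
    return [x / n for x in v]

v = [3, 4]
print(norm(v))       # 5.0
print(normalize(v))  # [0.6, 0.8]
```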
Vector Operations
Inner Product – The inner product multiplies the corresponding entries of two vectors and sums the results: a·b = a1b1 + a2b2 + ··· + anbn.
Projection of a on b: proj_b(a) = (a·b / b·b) b
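Both operations can be sketched in plain Python (function names are illustrative):

```python
def inner(a, b):
    """Inner (dot) product: multiply corresponding entries and sum."""
    return sum(x * y for x, y in zip(a, b))

def project(a, b):
    """Projection of a onto b: (a.b / b.b) * b."""
    scale = inner(a, b) / inner(b, b)
    return [scale * x for x in b]

a, b = [2, 3], [4, 0]
print(inner(a, b))    # 8
print(project(a, b))  # [2.0, 0.0]
```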
Transpose of Matrix
The transpose of a matrix swaps its rows and columns. The transpose is denoted by the apostrophe (') symbol. Therefore, given matrix A, A' is shown below; similarly C and C'.
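A plain-Python sketch of the transpose (entry (i, j) of A' is entry (j, i) of A):

```python
def transpose(A):
    """Swap rows and columns: entry (i, j) becomes entry (j, i)."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

C = [[1, 2, 3]]      # 1 x 3
print(transpose(C))  # [[1], [2], [3]] -- 3 x 1
```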
Matrix Addition & Subtraction
Matrix addition and subtraction are element-wise: simply add or subtract the corresponding elements of the two matrices to obtain the result matrix. The two matrices must have the same dimensions.
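A plain-Python sketch of element-wise addition and subtraction:

```python
def mat_add(A, B):
    """Element-wise sum; A and B must have the same dimensions."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    """Element-wise difference; same dimension requirement."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
```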
Matrix Multiplication (Scalar)
Sometimes we will multiply a matrix by a scalar, a single number. Simply multiply the
scalar by each element in the matrix:
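For example, in plain Python:

```python
def scalar_mul(c, A):
    """Multiply every element of the matrix A by the scalar c."""
    return [[c * x for x in row] for row in A]

print(scalar_mul(2, [[1, 2], [3, 4]]))  # [[2, 4], [6, 8]]
```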
Matrix Multiplication
Multiplying matrices is a bit more complicated. Each entry of the product is obtained by summing the products of the row entries of one matrix (A) with the corresponding column entries of another matrix (B). Let's see an example before we look at the formula:
Matrix Multiplication / Division : Note
When multiplying matrices, it is important to ensure that the "inner dimensions" are the same: an (m x n) matrix can only multiply an (n x p) matrix, and the result is (m x p). Example:
If matrix A is (2x3) and matrix B is (3x2), the multiplication AB is possible because the inner dimensions (3 and 3) are the same.
If matrix A is (3x4) and matrix B is (3x4), the multiplication AB is impossible because the inner dimensions (4 and 3) are different.
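The row-by-column rule and the inner-dimension check can be sketched in plain Python:

```python
def mat_mul(A, B):
    """Entry (i, j) of AB is the sum of products of row i of A with
    column j of B; the inner dimensions must match."""
    if len(A[0]) != len(B):
        raise ValueError("inner dimensions differ")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6]]       # 2 x 3
B = [[7, 8], [9, 10], [11, 12]]  # 3 x 2
print(mat_mul(A, B))             # [[58, 64], [139, 154]] -- 2 x 2
```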
Linear Algebra
Module 1.2: Special Matrices & Linear Independence
Special Matrices
Diagonal Matrix – a square matrix whose off-diagonal entries are all zero: xij = 0 wherever i ≠ j (the entries on the diagonal, where i = j, may be nonzero).
Identity Matrix – a diagonal matrix whose diagonal entries are all 1: xij = 1 where i = j, and xij = 0 where i ≠ j. The identity matrix is denoted by a capital I.
Symmetric Matrix – a matrix that equals its own transpose: A = A'.
Special Matrices
A – Square matrix (3x3)
O – Zero matrix
D – Diagonal matrix
I – Identity matrix
M – Symmetric matrix
A set of vectors a1 , a2 , . . . , an is said to be linearly dependent if constants c1, c2, . . . , cn (not all
zero) can be found such that
c1a1+c2a2+···+cnan =0
If no constants c1 , c2 , . . . , cn can be found satisfying the equation above, the set of vectors is
said to be linearly independent.
A is linearly dependent because row 2 is a multiple (3x) of row 1; B is linearly independent, and C is linearly independent.
Matrix Properties – Rank of Matrix
The rank of a matrix is the number of linearly independent rows or columns. Mathematically, it can be shown that the number of linearly independent rows always equals the number of linearly independent columns (we will not prove that here), so either count gives the rank.
A matrix has full rank if its rank equals the smaller of the number of rows and columns. For example, if matrix B is (2x3) and has rank 2, then B has full rank, because its rank equals the number of rows.
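The rank can be computed by Gaussian elimination: reduce the matrix and count the nonzero pivot rows. A plain-Python sketch (function name illustrative):

```python
def rank(A):
    """Rank via Gaussian elimination: count the pivot rows."""
    M = [row[:] for row in A]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # find a row at or below r with a nonzero entry in column c
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > 1e-10), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # eliminate the entries below the pivot
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3], [3, 6, 9]]    # row 2 is 3 x row 1 -> rank 1
B = [[1, 0], [0, 1], [5, 7]]  # two independent columns -> rank 2 (full rank)
print(rank(A), rank(B))       # 1 2
```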
Matrix Properties – Rank of Matrix
C has a rank of 2
Linear Algebra
Module 1.3: The Determinant, Inverse & Trace
Matrix Properties – Determinant of Matrix
The determinant is a scalar computed from a square matrix; for a 2x2 matrix with rows (a, b) and (c, d) the determinant is ad − bc.
Matrix Properties – Inverse
The inverse of a matrix, denoted A⁻¹, is such that when multiplied by the matrix it yields the identity matrix:
AA⁻¹ = A⁻¹A = I
To verify the inverse, we multiply A by A⁻¹ and the result is I.
Matrix Properties – Singular Matrix
Matrices that have an inverse are known as invertible, and those that do not are known as non-invertible.
It is important to note that not all matrices have an inverse satisfying AA⁻¹ = A⁻¹A = I.
A square matrix that has no inverse is called a singular matrix. One of the key properties of a singular matrix is that its determinant is 0.
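A 2x2 sketch in plain Python ties the determinant, the inverse, and singularity together (helper names are illustrative):

```python
def det2(A):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    """Inverse of a 2x2 matrix; a determinant of 0 means A is singular."""
    d = det2(A)
    if d == 0:
        raise ValueError("singular matrix: no inverse exists")
    return [[ A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d,  A[0][0] / d]]

A = [[4, 7], [2, 6]]
print(det2(A))   # 10
print(inv2(A))   # [[0.6, -0.7], [-0.2, 0.4]]
print(det2([[1, 2], [2, 4]]))  # 0 -> singular, no inverse
```

Multiplying A by inv2(A) returns the identity matrix, which verifies the inverse.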
Matrix Properties - Trace
The trace of a matrix is the sum of its diagonal elements, denoted tr(A).
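For example:

```python
def trace(A):
    """Sum of the diagonal elements of a square matrix."""
    return sum(A[i][i] for i in range(len(A)))

print(trace([[2, 1], [1, 2]]))  # 4
```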
Random Matrices and Vectors
The matrix X (n x p) has p columns; each column has a mean, so there are p means. We call this the vector of means, and it can be represented as E(X).
The covariance matrix is
Σ = Cov(X) = [σik] = [Cov(Xi, Xk)] = E[(X − μ)(X − μ)′]
Notice that the entries on the diagonal of the covariance matrix are the variances of the individual random variables.
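A minimal plain-Python sketch of the mean vector and covariance matrix, following E[(X − μ)(X − μ)′] (the divisor n is the population convention; a sample covariance would divide by n − 1):

```python
def cov_matrix(X):
    """Population covariance: Sigma[i][k] is the mean of
    (Xi - mu_i)(Xk - mu_k). X is a list of rows (observations);
    columns are the variables."""
    n, p = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(p)]  # mean vector E(X)
    return [[sum((row[i] - mu[i]) * (row[k] - mu[k]) for row in X) / n
             for k in range(p)]
            for i in range(p)]

X = [[1, 2], [2, 4], [3, 6]]
S = cov_matrix(X)
# the diagonal entries S[0][0] and S[1][1] are the variances of each column
```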
Random Matrices and Vectors (Con’d)
Generalized Variance of a random matrix is |Σ|, the determinant of its covariance matrix
Refer to μ, Σ, and ρ as the population mean (vector), population covariance (matrix), and population correlation (matrix), respectively
A dataset with 40 variables would yield a 40x40 covariance matrix. Unfortunately, it's
very difficult and impractical to work with 40 dimensions.
So data mining and multivariate analysis techniques rely heavily on reducing the
number of dimensions.
Eigenvalues & Eigenvectors
An eigenvector 𝛎 and its eigenvalue λ of a matrix A satisfy
A𝛎 = λ𝛎
Rearranging gives
(A − λI)𝛎 = 0
For a non-trivial solution 𝛎 ≠ 0 to exist, (A − λI) must have no inverse, and hence
det(A − λI) = 0.
Eigenvalues & Eigenvectors
Solving det(A − λI) = 0 gives the eigenvalues λ. For each eigenvalue we can then obtain a vector 𝛎 which satisfies the equation
A𝛎 − λI𝛎 = 0,
Notice how A𝛎 is a 2x1 matrix … How does this relate to the original matrix A?
Eigenvalues & Eigenvectors
So now we have A𝛎, and according to our formula before we should be able to find a
scalar (𝜆) that we can multiply by 𝛎 to make the two sides equal.
Because we have found this solution for 𝜆 and 𝛎, we can now summarize A by a scalar 𝜆, known as an eigenvalue, and a vector 𝛎, known as the eigenvector. Thus, the eigenvalue and eigenvector provide information about A, reduced into two components of smaller dimension.
Eigenvalues & Eigenvectors
If λ is an eigenvalue of Σ, then
It should also be noted that if Σ is a 2x2 matrix, there will be two solutions to the eigenvalue-eigenvector problem. More generally, given an n x n matrix Σ, there will be n eigenvalue-eigenvector solutions.
Eigenvalues & Eigenvectors
The result will be two eigenvalues and two eigenvectors. The eigenvalues are returned in decreasing order, so the first is always the largest, and the eigenvectors are the columns of the $vectors component of the result.
$values
[1] 3 1
$vectors
[,1] [,2]
[1,] 0.7071068 -0.7071068
[2,] 0.7071068 0.7071068
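As a check on the R output above, 2x2 eigenvalues can be recovered by hand from det(A − λI) = 0, which is a quadratic in λ. The matrix Sigma below is an assumption (the slide's input matrix is not shown), chosen because it reproduces the eigenvalues 3 and 1:

```python
import math

def eig2(A):
    """Eigenvalues of a 2x2 matrix from det(A - lambda*I) = 0, i.e. the
    quadratic lambda^2 - tr(A)*lambda + det(A) = 0; largest value first.
    Assumes real eigenvalues (true for symmetric matrices)."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    root = math.sqrt(tr * tr - 4 * det)
    return [(tr + root) / 2, (tr - root) / 2]

Sigma = [[2, 1], [1, 2]]  # hypothetical example matrix
print(eig2(Sigma))        # [3.0, 1.0]
```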
Linear Algebra
Module 1.6: Singular Value Decomposition
Singular Value Decomposition (SVD)
The D matrix is a diagonal matrix whose entries are the singular values of A – the square roots of the nonzero eigenvalues of A′A.
We can express any (real) matrix A in terms of eigenvalues and eigenvectors of A′A and AA′.
Let A be an n × p matrix of rank k.
Then the singular value decomposition of A can be expressed as
A = UDV′,
From the previous slide, we can see, using a bit of Linear Algebra, the equation
above is similar to the eigenvalue-eigenvector formula.
Eigenvalue - Eigenvector Av = 𝝀 v
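Following the relationship above, the diagonal of D can be recovered from the eigenvalues of A′A. A 2x2 plain-Python sketch (the matrix A below is an illustrative example, not from the slides):

```python
import math

def singular_values_2x2(A):
    """Singular values of a 2x2 matrix: the square roots of the
    eigenvalues of A'A, largest first."""
    # form A'A (symmetric, so its eigenvalues are real and non-negative)
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    AtA = [[sum(At[i][k] * A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    # eigenvalues of A'A via the characteristic quadratic
    tr = AtA[0][0] + AtA[1][1]
    det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
    root = math.sqrt(tr * tr - 4 * det)
    return [math.sqrt((tr + root) / 2), math.sqrt((tr - root) / 2)]

A = [[3, 0], [4, 5]]
print(singular_values_2x2(A))  # about [6.708, 2.236]
```

The product of the singular values equals the absolute value of det(A), which gives a quick sanity check.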
Visit, Follow, Share
Visit our blog site for news on analytics and code samples
http://blogs.5eanalytics.com