Linear Algebra Notes
A vector space (or linear space) over a field F is a set V equipped with two operations, vector addition and scalar multiplication, satisfying the usual vector space axioms: associativity and commutativity of addition, existence of a zero vector and of additive inverses, and the identity, compatibility, and distributivity laws for scalar multiplication.
Examples
1. Euclidean Space: Rn with componentwise operations.
2. Polynomial Space: Pn, the polynomials of degree at most n.
3. Matrix Space: The set of all m × n matrices over F.
4. Function Space: The set of all real-valued continuous functions on a given interval.
Non-Examples
1. The set of natural numbers N with standard addition and scalar multiplication is not a vector space because
additive inverses do not exist.
2. The set of all 2 × 2 matrices with determinant 1, as it is not closed under addition (for instance, I + I = 2I has determinant 4 ≠ 1).
Summary
A vector space is a fundamental concept in linear algebra, generalizing the notions of Euclidean space to arbitrary
dimensions and fields. Its structure allows for the study of linear independence, transformations, and
eigenvectors, forming the basis for advanced mathematical theories and applications.
Example 1: Euclidean Space Rn
1. Definition:
Rn = {(x1, x2, …, xn) ∣ xi ∈ R, i = 1, 2, …, n}.
2. Operations: componentwise addition and scalar multiplication.
3. Properties:
Dimension: n.
Example 2: Polynomial Space Pn
1. Definition:
Pn = {p(x) = a0 + a1 x + ⋯ + an x^n ∣ ai ∈ F}.
2. Operations: coefficientwise addition and scalar multiplication.
3. Properties:
Contains the zero polynomial 0(x).
Dimension: n + 1.
Example 3: Matrix Space
1. Definition: The set of all m × n matrices with entries in F.
2. Operations: entrywise addition and scalar multiplication.
3. Properties:
Dimension: m · n, with basis consisting of the matrices with a single 1 in one position and 0 elsewhere.
Example 4: Function Space C([a, b])
1. Definition: The set of all continuous real-valued functions on the interval [a, b]:
C([a, b]) = {f : [a, b] → R ∣ f is continuous}.
2. Operations: pointwise addition and scalar multiplication.
3. Properties: Infinite-dimensional.
Example 5: Sequence Space ℓ2
1. Definition: The set of all square-summable sequences (a1, a2, …) of real numbers:
∑_{i=1}^{∞} ∣ai∣^2 < ∞.
2. Operations: termwise addition and scalar multiplication.
3. Properties: Infinite-dimensional.
Non-Examples
1. The set of natural numbers N with standard addition and scalar multiplication (no additive inverses).
Summary
These examples illustrate the diversity of vector spaces, ranging from finite-dimensional spaces like Rn to infinite-
dimensional spaces like C([a, b]) and ℓ2 . Each example satisfies the vector space axioms, providing a foundation
for applications in various branches of mathematics and science.
A subspace of a vector space V over a field F is a subset W ⊆ V that is itself a vector space under the operations
of addition and scalar multiplication inherited from V .
Subspace Conditions
A nonempty subset W ⊆ V is a subspace if and only if:
1. Zero vector: 0 ∈ W.
2. Closure under addition: u + v ∈ W for all u, v ∈ W.
3. Closure under scalar multiplication: c ⋅ v ∈ W for all c ∈ F and v ∈ W.
Examples of Subspaces
1. The Zero Subspace: {0} is a subspace of any vector space V.
2. The Whole Space: V is a subspace of itself.
3. Lines through the Origin: In R2 or R3, any line passing through the origin is a subspace.
4. Planes through the Origin: In R3, any plane passing through the origin is a subspace.
Non-Examples
1. A subset that does not contain the zero vector is not a subspace (for example, a line in R2 that does not pass through the origin).
Intersection of Subspaces: If W1 and W2 are subspaces of V, then so is their intersection
W1 ∩ W2 = {v ∈ V ∣ v ∈ W1 and v ∈ W2}.
Proof: The intersection is closed under addition and scalar multiplication and contains 0, satisfying the subspace conditions.
Sum of Subspaces: The sum
W1 + W2 = {v ∈ V ∣ v = u1 + u2, u1 ∈ W1, u2 ∈ W2}
is also a subspace of V.
1. R3:
Subspaces include the origin {0}, any line through the origin, and any plane through the origin.
2. Space of Matrices:
Subspace: The set of all symmetric n × n matrices, inside the space of all n × n matrices.
3. Space of Polynomials Pn:
Subspace: Pk for any k ≤ n, the polynomials of degree at most k.
Summary
Vector subspaces are subsets of a vector space that inherit all the properties of a vector space. They play a crucial
role in understanding the structure of vector spaces, forming the building blocks for studying linear
independence, bases, and dimension.
Definition: Linear Combination
A vector v ∈ V is a linear combination of vectors v1, v2, …, vk ∈ V if it can be written as
v = a1 v1 + a2 v2 + ⋯ + ak vk,
where a1, a2, …, ak ∈ F are scalars.
Examples of Linear Combinations
1. In R3, given v1 = (1, 0, 0), v2 = (0, 1, 0), and v3 = (0, 0, 1), the vector (2, 3, 4) can be written as:
(2, 3, 4) = 2v1 + 3v2 + 4v3.
2. In the space of polynomials P2, given p1(x) = 1, p2(x) = x, and p3(x) = x2, the polynomial 3 + 2x + 4x2 is a linear combination:
3 + 2x + 4x2 = 3p1(x) + 2p2(x) + 4p3(x).
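As a quick numerical check of the first example, here is a minimal NumPy sketch (the variable names are ours, not from the notes):

```python
import numpy as np

# Standard basis vectors of R^3 from Example 1 above.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([0.0, 0.0, 1.0])

# The linear combination 2*v1 + 3*v2 + 4*v3 reproduces (2, 3, 4).
v = 2 * v1 + 3 * v2 + 4 * v3
print(v)  # [2. 3. 4.]
```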
Definition: Span
The span of a set of vectors {v1, v2, …, vk} in a vector space V is the set of all possible linear combinations of these vectors:
Span(v1, v2, …, vk) = {a1 v1 + a2 v2 + ⋯ + ak vk ∣ a1, a2, …, ak ∈ F}.
Examples of Span
1. In R2, with v1 = (1, 0) and v2 = (0, 1):
Span(v1, v2) = {a1 (1, 0) + a2 (0, 1) ∣ a1, a2 ∈ R} = R2.
Properties of Span
1. The span of any set of vectors is a subspace of V; it is the smallest subspace containing those vectors.
2. If v1, v2, …, vk are linearly dependent, some vectors in the set can be expressed as linear combinations of the others, and removing them does not change the span.
Summary
Linear combinations represent specific vectors derived from a given set of vectors.
The span represents the entire collection of all such vectors, forming the smallest subspace containing the
given set.
Linear combinations and span provide foundational tools for describing vector spaces and their subspaces.
A set of vectors {v1, v2, …, vk} in a vector space V over a field F is said to be linearly independent if the only solution to
a1 v1 + a2 v2 + ⋯ + ak vk = 0
is a1 = a2 = ⋯ = ak = 0, where a1, a2, …, ak ∈ F are scalar coefficients.
Conversely, the set is linearly dependent if there exist scalars a1, a2, …, ak, not all zero, such that
a1 v1 + a2 v2 + ⋯ + ak vk = 0.
If vectors are linearly dependent, at least one vector can be expressed as a linear combination of the others.
If vectors are linearly independent, none of the vectors can be written as a linear combination of the others.
Example 1: Linearly Independent Vectors
Consider the equation
a1 v1 + a2 v2 = 0.
Substituting v1 and v2 and comparing components gives a homogeneous system whose only solution is a1 = 0 and a2 = 0. Since there are no other solutions, v1 and v2 are linearly independent.
Example 2: Linearly Dependent Vectors
Consider
a1 v1 + a2 v2 = 0.
Substituting v1 and v2, the vector equation gives:
a1 + 2a2 = 0 ⟹ a1 = −2a2.
Nonzero solutions exist (for example a2 = 1, a1 = −2), so v1 and v2 are linearly dependent: one is a scalar multiple of the other,
v2 = 2v1.
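The notes do not preserve the explicit vectors here, so the sketch below assumes v1 = (1, 2) and v2 = (2, 4), chosen to satisfy v2 = 2v1; it tests dependence by comparing the matrix rank to the number of vectors:

```python
import numpy as np

# Hypothetical vectors consistent with v2 = 2*v1 (the originals are
# not preserved in the notes).
v1 = np.array([1.0, 2.0])
v2 = np.array([2.0, 4.0])

# Stack the vectors as columns; independent iff rank equals the count.
A = np.column_stack([v1, v2])
rank = np.linalg.matrix_rank(A)
print(rank)                # 1
print(rank == A.shape[1])  # False -> linearly dependent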
The dimension of a vector space is the number of vectors in any of its bases.
Testing Linear Independence
For finite-dimensional vector spaces, testing linear independence can be translated into solving a homogeneous system of linear equations:
1. Form the matrix whose columns are the given vectors:
A = [v1 v2 … vk].
2. Determine the rank of the matrix A: the vectors are linearly independent if and only if rank(A) = k.
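A minimal NumPy sketch of this rank test (the helper name is ours):

```python
import numpy as np

def is_linearly_independent(vectors):
    """Return True iff the vectors are linearly independent, by comparing
    the rank of the column matrix A = [v1 v2 ... vk] to k."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

print(is_linearly_independent([np.array([1, 1, 0]),
                               np.array([1, 0, 1])]))  # True
```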
Key Points:
1. Linear Independence: A set of vectors is linearly independent if the only linear combination producing 0 is the trivial one.
2. Linear Dependence: A set of vectors is linearly dependent if at least one vector can be expressed as a linear combination of the others.
3. Testing: Independence can be checked by forming the matrix with the vectors as columns and computing its rank.
4. Connection to Basis: A basis is a maximal linearly independent subset of a vector space that spans the space.
Linear independence is foundational in understanding the structure of vector spaces, constructing bases, and
determining the dimensionality of vector spaces.
Lecture 6: Basis
A basis of a vector space V over a field F is a subset of vectors {v1, v2, …, vn} ⊆ V that satisfies the following two conditions:
1. Spanning: the vectors span V.
2. Linear independence: the vectors are linearly independent.
Formally:
Span(v1, v2, …, vn) = V
and
a1 v1 + a2 v2 + ⋯ + an vn = 0 ⟹ a1 = a2 = ⋯ = an = 0.
Example 1: Basis in R2
The standard basis for R2 is given by:
{e1, e2} = {(1, 0), (0, 1)}.
These vectors are linearly independent, and they span all of R2, as any vector in R2 can be expressed as a linear combination of these two vectors:
(x, y) = x e1 + y e2.
Example 2: Basis in P2
The space of polynomials of degree at most 2,
P2 = {p(x) = a0 + a1 x + a2 x2 ∣ a0, a1, a2 ∈ F},
has the standard basis
{1, x, x2}.
Proof Sketch: Every p ∈ P2 is by definition a linear combination a0 · 1 + a1 · x + a2 · x2, so the set spans P2; and a0 + a1 x + a2 x2 is the zero polynomial only when a0 = a1 = a2 = 0, so the set is linearly independent.
Example 3: Finding the Dimension
Let V = Span(v1, v2, v3) in R3 where v1 = (1, 1, 0), v2 = (1, 0, 1), v3 = (0, 1, 1).
1. Form the matrix with these vectors as columns:
A =
[ 1 1 0 ]
[ 1 0 1 ]
[ 0 1 1 ]
2. Determine the rank of the matrix A (row reduction or rank computation). Row reduction gives rank(A) = 3, since the three columns are linearly independent.
3. The dimension of V is therefore dim(V) = 3, and in fact V = R3.
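A quick NumPy check of this rank computation:

```python
import numpy as np

# Columns are v1 = (1,1,0), v2 = (1,0,1), v3 = (0,1,1).
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]])
print(np.linalg.matrix_rank(A))  # 3, so dim(V) = 3
```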
Properties of a Basis
1. Any Vector is a Unique Linear Combination of Basis Vectors:
Every vector v ∈ V can be expressed uniquely as a linear combination of the basis vectors:
v = a1 v1 + a2 v2 + ⋯ + an vn,
where the scalars a1, a2, …, an ∈ F are uniquely determined.
2. Basis is Minimal:
A basis contains the fewest number of vectors required to span the vector space.
If a vector is removed, the set will no longer span the vector space.
3. Basis Change:
A vector space can have many different bases. Changing the basis allows different representations of
vectors, particularly useful in applications such as coordinate transformations.
The dimension of a subspace is always less than or equal to the dimension of the parent vector space.
Summary
1. A basis is a set of vectors that is both linearly independent and spans the entire vector space.
2. The dimension of a vector space is the number of vectors in any of its bases.
3. The choice of basis depends on the context of the problem, but every vector space has at least one basis.
The concept of basis is fundamental in linear algebra because it allows for efficient representation and
computation within vector spaces.
Lecture 7: Dimension
The dimension of a vector space V over a field F, denoted as dim(V ), is defined as the number of vectors in any
basis of V .
The dimension measures the number of degrees of freedom in the vector space V .
A basis of a vector space is a linearly independent set of vectors that spans the entire space.
Properties of Dimension
1. Uniqueness:
If a vector space has a basis, the number of elements in any basis is unique.
This means that all bases of a given finite-dimensional vector space have the same number of elements.
2. Dimension of Subspaces:
If W is a subspace of a finite-dimensional vector space V, then dim(W) ≤ dim(V), and any basis of W can be extended to a basis of V.
Examples of Dimension
Example 1: Dimension of Rn
1. Definition:
Rn = {(x1, x2, …, xn) ∣ x1, x2, …, xn ∈ R}.
2. The standard basis
{e1, e2, …, en}
has n elements.
3. Therefore:
dim(Rn) = n.
Example 2: Dimension of a Subspace
Let W be the subspace of R3 spanned by two given vectors, and let A be the 3 × 2 matrix having those vectors as columns.
1. Form the matrix A.
2. Perform row reduction:
A row-reduces to
[ 1 0 ]
[ 0 1 ]
[ 0 0 ]
3. The rank of the matrix is 2, implying:
dim(W) = 2.
Example 3: The Space of All Polynomials
The set:
{1, x, x2, x3, …}
is a basis for the space of all polynomials. This basis is infinite because polynomials can have arbitrarily high degrees, so the space is infinite-dimensional.
In general, if a basis of V has n elements, then:
dim(V) = n.
If the space is infinite-dimensional, the basis will have infinitely many elements.
For any subspace W of a finite-dimensional vector space V:
dim(W) ≤ dim(V).
When extending a basis of a subspace W ⊆ V , if you add vectors that are not in the span of W , the dimension of
the new subspace increases by the number of independent vectors added.
Given a basis {v1, v2, …, vn} of V, every vector v ∈ V can be written uniquely as
v = a1 v1 + a2 v2 + ⋯ + an vn,
where a1, a2, …, an ∈ F. The coordinates of v are the scalars (a1, a2, …, an).
Coordinate systems depend on the choice of the basis, but the dimension remains invariant.
Summary
1. Definition: The dimension of a vector space is the number of vectors in any of its bases.
2. Subspaces: dim(W) ≤ dim(V) for any subspace W ⊆ V.
3. Invariance: The dimension does not depend on the choice of basis.
4. Direct Sums: If V = W1 ⊕ W2, then
dim(V) = dim(W1) + dim(W2).
5. The dimension provides a fundamental understanding of the structure and complexity of vector spaces.
Understanding dimension is crucial for many areas of mathematics, physics, and engineering, as it provides a
measure of how many independent directions or degrees of freedom exist in a vector space.
Replacement Theorem
The Replacement Theorem is a fundamental result in linear algebra that relates bases, dimension, and the notion
of extending or replacing subsets within a vector space. It formalizes how a set of vectors can be replaced by
another set of vectors while maintaining their role as a basis under certain conditions.
If S ⊆ V is linearly independent with ∣S∣ = m, and T is a spanning set of V of size n, then m ≤ n, and S can be extended to a basis of V by replacing a subset of T while maintaining linear independence.
More formally: there exists a subset T′ ⊆ T with n − m elements such that
B = S ∪ T′
is a basis of V.
This means that you can "replace" m elements of the spanning set T with the vectors of S while preserving the linear structure of the vector space: keeping suitable vectors u1, u2, …, un−m ∈ T,
B = S ∪ {u1, u2, …, un−m}
is a basis for V.
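A small NumPy sketch of this extension process, assuming V = R3 and numerically given vectors (the function name is ours): starting from an independent set S, we adjoin a vector from the spanning set T whenever doing so raises the rank.

```python
import numpy as np

def extend_to_basis(S, T):
    """Adjoin vectors from the spanning set T to the independent set S
    whenever doing so raises the rank (a numeric replacement sketch)."""
    basis = list(S)
    for u in T:
        if np.linalg.matrix_rank(np.column_stack(basis + [u])) > len(basis):
            basis.append(u)
    return basis

S = [np.array([1.0, 1.0, 0.0])]          # linearly independent, |S| = m = 1
T = [np.array([1.0, 0.0, 0.0]),          # spanning set of R^3, |T| = n = 3
     np.array([0.0, 1.0, 0.0]),
     np.array([0.0, 0.0, 1.0])]
print(len(extend_to_basis(S, T)))        # 3 = dim(R^3)
```

In this run exactly n − m = 2 vectors of T are adjoined to S, as the theorem predicts.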
Consequences of the Replacement Theorem
1. Dimension and Basis Extension:
The Replacement Theorem implies that the dimension of a vector space is invariant under changes of
basis.
Given a linearly independent subset and a spanning subset, you can always construct a basis by
extending the linearly independent subset appropriately.
2. Uniqueness of Dimension:
The theorem leads directly to the idea that the dimension of a vector space is unique and does not
depend on the choice of basis.
3. Exchange of Vectors:
The Replacement Theorem allows replacing elements in a spanning set with elements from a linearly independent subset, provided the number of replacements aligns with the dimension of the space.
4. Basis Completeness:
Any linearly independent set can be completed to a basis using vectors drawn from a spanning set.
Example: Let V = R3, let S consist of a single nonzero vector, and let T be a spanning set of three vectors. Here:
∣S∣ = 1,
∣T∣ = 3,
dim(V) = 3,
so two suitable vectors of T can be adjoined to S to form a basis.
Key Insight
The Replacement Theorem is a guarantee of flexibility. It ensures that no matter the initial choice of spanning
vectors, you can always align them with a desired linearly independent subset while maintaining the properties of
a basis.
Definition: Rank
Let T : V → W be a linear map between two vector spaces V and W over a field F. The rank of T, denoted rank(T), is defined as the dimension of the image (or range) of T:
rank(T) = dim(Im(T)).
Here:
Im(T) = {T(v) ∣ v ∈ V} ⊆ W is the range of T,
V is the domain, W is the codomain.
The rank represents the dimension of the subspace of W that is spanned by the images of all vectors in V under
T.
Definition: Nullity
The nullity of T, denoted nullity(T), is defined as the dimension of the kernel (or null space) of T:
nullity(T) = dim(Ker(T)).
Here:
Ker(T) = {v ∈ V ∣ T(v) = 0} ⊆ V.
The nullity represents how many degrees of freedom in V are lost under the mapping T.
The Rank-Nullity Theorem
The Rank-Nullity Theorem provides a fundamental relationship between the rank and nullity of a linear map T :
V → W:
Theorem: Rank-Nullity
rank(T) + nullity(T) = dim(V).
Interpretation:
This relationship states that the sum of the rank (the "effective dimension" of the map) and nullity (the number of
dimensions mapped to 0) must account for all the dimensions of V .
Proof idea: Choose a basis {v1, …, vk} of Ker(T) and extend it to a basis {v1, v2, …, vn} of V, where n = dim(V). The vectors v1, v2, …, vk ∈ Ker(T) are mapped to 0, while the remaining vectors vk+1, vk+2, …, vn are mapped to nonzero vectors whose images form a basis of Im(T), so rank(T) = n − k.
Key Idea
The nullity tells us how much of the input space gets "collapsed" to zero under T , while the rank tells us how much
of the codomain is "covered" by the mapping. Their sum always equals the total dimension of the domain vector
space V .
Example: Computing Rank and Nullity
Define T : R3 → R2 by:
T(x1, x2, x3) = (x1 + x2, x2 + x3).
1. Kernel: Setting T(v) = 0 gives
(x1 + x2, x2 + x3) = (0, 0),
that is:
x1 + x2 = 0,
x2 + x3 = 0.
From x1 + x2 = 0, we have:
x1 = −x2 .
From x2 + x3 = 0, we have:
x3 = −x2 .
Therefore every vector in the kernel has the form
v = (−x2, x2, −x2) = t (−1, 1, −1), t ∈ R.
Thus:
dim(Ker(T )) = 1.
2. Rank: Using the Rank-Nullity Theorem:
Here:
dim(V ) = 3,
nullity(T ) = 1, so:
rank(T ) = 3 − 1 = 2.
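A NumPy check of this example: in the standard bases, T is given by the matrix [[1, 1, 0], [0, 1, 1]], and rank plus nullity must equal dim(V) = 3.

```python
import numpy as np

# Matrix of T(x1, x2, x3) = (x1 + x2, x2 + x3) in the standard bases.
A = np.array([[1, 1, 0],
              [0, 1, 1]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank   # rank-nullity: nullity = dim(V) - rank
print(rank, nullity)          # 2 1
```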
Summary
1. Rank: rank(T) = dim(Im(T)).
2. Nullity: nullity(T) = dim(Ker(T)).
3. Rank-Nullity Theorem: Relates the dimensions of a linear map's kernel, image, and the domain vector space:
rank(T) + nullity(T) = dim(V).
A linear transformation (or linear map) is a function between two vector spaces that preserves the operations of
vector addition and scalar multiplication.
Let V and W be vector spaces over the same field F. A function T : V → W is called a linear transformation if it
satisfies the following two properties for all vectors u, v ∈ V and all scalars c ∈ F:
1. Additivity:
T (u + v) = T (u) + T (v)
2. Homogeneity:
T (c ⋅ v) = c ⋅ T (v)
These two conditions ensure that the transformation respects the vector space structure under both addition and
scalar multiplication.
1. Matrix Maps: For any m × n matrix A, the map
T(v) = A ⋅ v
is a linear transformation.
2. Zero Vector: Every linear transformation satisfies
T(0) = 0.
3. Matrix Representation: Every linear transformation can be represented uniquely by a matrix when a basis is
chosen for V and W .
4. Composition of Linear Maps: If T : V → W and S : W → U are linear maps, then the composition S ∘
T : V → U is also a linear map.
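A small NumPy sketch checking the two defining properties for a matrix map (the sample matrix and vectors are ours):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def T(v):
    return A @ v  # T(v) = A . v

u = np.array([1.0, -1.0])
v = np.array([4.0, 2.0])
c = 5.0

print(np.allclose(T(u + v), T(u) + T(v)))  # additivity: True
print(np.allclose(T(c * v), c * T(v)))     # homogeneity: True
```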
Example 1: Identity Map
Define T : Rn → Rn by:
T(v) = v for all v ∈ Rn.
This map is linear because:
1. T(u + v) = u + v = T(u) + T(v),
2. T(c ⋅ v) = c ⋅ v = c ⋅ T(v).
Example 2: Scaling Map
Define T : Rn → Rn by:
T(v) = c ⋅ v,
where c ∈ F is a fixed scalar. Both linearity properties follow directly from the vector space axioms.
Example 3: Projection
Define T : R2 → R2 by:
T(x, y) = (x, 0).
This map keeps the x-component of a vector while setting the y -component to zero. This is a linear map because:
1. T(v1 + v2) = T(x1 + x2, y1 + y2) = (x1 + x2, 0) = T(v1) + T(v2),
2. T(c ⋅ v) = T(c x, c y) = (c x, 0) = c ⋅ T(v).
Its matrix with respect to the standard basis is:
A =
[ 1 0 ]
[ 0 0 ]
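Applying this projection matrix numerically, as a minimal check:

```python
import numpy as np

A = np.array([[1, 0],
              [0, 0]])  # projection onto the x-axis
v = np.array([3, 7])
print(A @ v)            # [3 0]: the x-component is kept, y is zeroed
```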
Example 4: Zero Map
Define T : Rn → Rm by:
T(v) = 0 for all v ∈ Rn.
This map sends every vector to the zero vector, irrespective of v. The zero map is trivially linear because:
1. T(v + u) = 0 = T(v) + T(u),
2. T(c ⋅ v) = 0 = c ⋅ T(v).
Matrix Representation
If {e1, e2, …, en} is the standard basis for Rn and {f1, f2, …, fm} is the standard basis for Rm, the action of T can be written with a matrix A whose columns are the images of the basis vectors under T:
A = [ T(e1) T(e2) … T(en) ].
If v ∈ Rn is written in coordinates as v = (v1, v2, …, vn), then:
T(v) = A ⋅ v.
Properties of Linear Transformations
1. Preserve Vector Addition:
T(v + u) = T(v) + T(u).
2. Preserve Scalar Multiplication:
T(c ⋅ v) = c ⋅ T(v).
3. Composition of Linear Maps: If S : W → U and T : V → W are linear transformations, then their
composition S ∘ T : V → U is also a linear transformation.
4. Invertibility: A linear map T : V → W is invertible if there exists a linear map T−1 : W → V such that:
T−1 ∘ T = idV and T ∘ T−1 = idW.
Summary
Linear transformations map vectors from one vector space to another while preserving vector addition and
scalar multiplication.
They can always be represented by a matrix with respect to chosen bases.
Common examples include identity maps, scaling, projections, and the zero map.
Introduction
The concept of linear transformations is deeply connected to the idea of basis. This lecture explores how linear
transformations interact with bases in vector spaces, the role of basis changes, and how to represent linear
transformations with respect to chosen bases.
Key Idea
Linear transformations can be expressed with matrices when vector spaces V (domain) and W (codomain) are
given specific bases. Understanding how a linear map interacts with these bases allows efficient computation and
insight into the structure of the map.
Let:
B = {v1 , v2 , … , vn } be a basis of V ,
C = {w1 , w2 , … , wm } be a basis of W .
The matrix representation of T with respect to these bases B (for V) and C (for W) is denoted by a matrix [T]B,C.
2. Matrix Representation
The matrix representation of T is constructed by expressing T(vi) (the image of each basis vector vi of V) in terms of the basis C. The procedure:
1. Compute T(vi) ∈ W for each basis vector vi ∈ B.
2. Express T(vi) in terms of the basis C as a linear combination:
T(vi) = a1i w1 + a2i w2 + ⋯ + ami wm,
where aji are the scalar coefficients of this linear combination for j = 1, 2, …, m.
3. Write T(vi) as a column vector with respect to C:
[T(vi)]C = (a1i, a2i, …, ami)ᵀ.
4. Form the matrix [T]B,C by using these column vectors as its columns:
[T]B,C = [ [T(v1)]C [T(v2)]C ⋯ [T(vn)]C ].
Here:
The i-th column is the coordinate vector of T(vi); the entries of these columns are coordinates with respect to the codomain basis C.
3. Example
Let’s compute an example for clarity:
Let
T : R2 → R2
be defined by:
T(x, y) = (x + y, x − y).
We compute [T]B,C with respect to the standard bases B = C = {(1, 0), (0, 1)} for R2.
1. Map v1 = (1, 0):
T(v1) = T(1, 0) = (1 + 0, 1 − 0) = (1, 1).
2. Map v2 = (0, 1):
T(v2) = T(0, 1) = (0 + 1, 0 − 1) = (1, −1).
Now represent these mappings as column vectors:
[T(v1)]C = (1, 1)ᵀ,
[T(v2)]C = (1, −1)ᵀ.
These columns give the matrix:
[T]B,C =
[ 1 1 ]
[ 1 −1 ]
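A NumPy sketch that builds [T]B,C exactly this way, by applying T to each basis vector and using the images as columns:

```python
import numpy as np

def T(v):
    x, y = v
    return np.array([x + y, x - y])  # T(x, y) = (x + y, x - y)

# The columns of the matrix are the images of the basis vectors.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
M = np.column_stack([T(e1), T(e2)])
print(M)  # [[ 1.  1.]
          #  [ 1. -1.]]
```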
4. Change of Basis
If the basis of V or W changes, the matrix representation of T will change. Let:
B′ = {v1′, v2′, …, vn′} be a new basis for V,
C′ = {w1′, w2′, …, wm′} be a new basis for W.
The change of bases can be expressed using transition matrices. The new matrix representation [T]B′,C′ is given by:
[T]B′,C′ = P−1 [T]B,C Q,
where:
Q is the transition matrix whose columns are the coordinates of the new basis vectors of B′ with respect to B,
P is the transition matrix whose columns are the coordinates of the new basis vectors of C′ with respect to C.
These transformations are essential in applications where different bases are used to describe computations,
including computer graphics, numerical methods, or theoretical physics.
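A numeric sketch of this formula for the map T above, using a hypothetical new basis B′ = C′ = {(1, 1), (1, −1)} (our choice, not from the notes):

```python
import numpy as np

M = np.array([[1.0, 1.0],
              [1.0, -1.0]])       # [T]_{B,C} in the standard bases

# Hypothetical new bases B' = C' = {(1, 1), (1, -1)}; the transition
# matrices have the new basis vectors as columns.
Q = np.array([[1.0, 1.0],
              [1.0, -1.0]])       # coordinates of B' in B
P = Q.copy()                      # coordinates of C' in C

M_new = np.linalg.inv(P) @ M @ Q  # [T]_{B',C'} = P^{-1} [T]_{B,C} Q
print(M_new)
```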
Summary
1. Matrix Representation: Once bases B and C are chosen, a linear map T is represented by the matrix [T]B,C.
2. Construction: Compute T(vi) for each basis vector, express it in the basis C, and use the coordinate vectors as columns.
3. Change of Basis:
Changing the basis in either V or W changes the matrix representation of T according to the transition-matrix formula above.
Understanding these relationships allows us to work with linear transformations concretely via matrix
computations.
Introduction
Linear transformations can be represented by matrices, and matrices themselves define how a linear map acts on
vectors. This lecture will explore the connection between linear transformations and their corresponding matrix
representations, the role of matrix multiplication, and how matrices encode geometric transformations.
When dim(V ) = n and dim(W ) = m, the linear transformation T can be uniquely represented by an m × n
matrix A with respect to chosen bases of V and W .
B = {v1 , v2 , … , vn } is a basis of V ,
C = {w1 , w2 , … , wm } is a basis of W .
1. Compute T(vi) ∈ W for each basis vector vi ∈ B.
2. Express T(vi) in terms of the basis C:
T(vi) = a1i w1 + a2i w2 + ⋯ + ami wm,
where aji ∈ F are scalar coefficients corresponding to the coordinates of T(vi) with respect to C.
3. Write T(vi) as a column vector with respect to C:
[T(vi)]C = (a1i, a2i, …, ami)ᵀ.
4. Form A from these columns: A = [ [T(v1)]C [T(v2)]C ⋯ [T(vn)]C ].
The matrix A represents the action of T with respect to the bases B and C .
Example: Scaling Map
Define T : R2 → R2 by:
T(x, y) = c (x, y).
Here, the standard bases of R2 are B = C = {(1, 0), (0, 1)}.
1. Map v1 = (1, 0):
T(v1) = c (1, 0) = (c, 0).
2. Map v2 = (0, 1):
T(v2) = c (0, 1) = (0, c).
The column vectors are:
[T(v1)]C = (c, 0)ᵀ, [T(v2)]C = (0, c)ᵀ.
A = [ [T(v1)]C [T(v2)]C ] =
[ c 0 ]
[ 0 c ]
which is c times the identity matrix I2.
1. Composition: If T : V → W has matrix A and S : W → U has matrix B (with respect to compatible bases), then the composition has matrix (see the sketch after this list):
[S ∘ T] = B ⋅ A.
2. Identity Map: The identity transformation I : V → V is represented by the identity matrix In .
3. Inverse Map: If a linear map T is invertible, the inverse transformation corresponds to the inverse matrix
A−1 .
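A quick NumPy check that composing maps multiplies their matrices (the sample matrices are ours):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])  # matrix of T
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # matrix of S

v = np.array([3.0, 4.0])
# Applying T then S agrees with the single matrix B @ A.
print(np.allclose(B @ (A @ v), (B @ A) @ v))  # True
```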
These geometric properties allow us to visualize how matrix transformations affect geometric shapes and vectors.
6. Summary
Key Points:
1. Every linear transformation can be expressed uniquely as a matrix with respect to given bases.
2. The matrix representation is constructed by expressing the image of basis vectors under the map T in terms
of the codomain's basis.
3. Composition of linear maps corresponds to multiplication of their matrices.
4. Inverse maps and identity transformations correspond to matrix inverses and identity matrices, respectively.
5. Geometric transformations like scaling, rotations, and projections are encoded using matrices.