
Linear Algebra Notes

Lecture 1: Vector Spaces

Definition of Vector Spaces

A vector space (or linear space) over a field F is a set V equipped with two operations: vector addition and
scalar multiplication, satisfying the following axioms:

1. Addition: A binary operation + : V × V → V such that ∀u, v ∈ V , u + v ∈ V .


2. Scalar Multiplication: An operation ⋅ : F × V → V such that ∀c ∈ F, v ∈ V , c ⋅ v ∈ V .

Axioms of Vector Spaces

For all u, v, w ∈ V and a, b ∈ F, the following must hold:


1. Closure under Addition: u + v ∈ V.
2. Commutativity of Addition: u + v = v + u.
3. Associativity of Addition: (u + v) + w = u + (v + w).
4. Existence of Additive Identity: There exists an element 0 ∈ V such that u + 0 = u.
5. Existence of Additive Inverse: For each u ∈ V, there exists −u ∈ V such that u + (−u) = 0.
6. Closure under Scalar Multiplication: a ⋅ u ∈ V.
7. Distributivity of Scalar Multiplication with respect to Vector Addition: a ⋅ (u + v) = (a ⋅ u) + (a ⋅ v).
8. Distributivity of Scalar Multiplication with respect to Field Addition: (a + b) ⋅ u = (a ⋅ u) + (b ⋅ u).
9. Compatibility of Scalar Multiplication with Field Multiplication: (a ⋅ b) ⋅ u = a ⋅ (b ⋅ u).
10. Identity of Scalar Multiplication: 1 ⋅ u = u, where 1 is the multiplicative identity in F.

Examples of Vector Spaces

1. Euclidean Space: Rn with usual addition and scalar multiplication.

2. Space of Polynomials: The set of all polynomials of degree at most n over R.

3. Matrix Space: The set of all m × n matrices over R or C.

4. Function Space: The set of all real-valued continuous functions on a given interval.

Non-Examples

1. The set of natural numbers N with standard addition and scalar multiplication is not a vector space because
additive inverses do not exist.

2. The set of all 2 × 2 matrices with determinant 1, as it is not closed under addition.
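A quick numerical check makes non-example 2 concrete. The sketch below uses NumPy (an assumption for illustration; the notes themselves contain no code) to exhibit two determinant-1 matrices whose sum has determinant 4.5:

```python
import numpy as np

# Two 2x2 matrices, each with determinant 1.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
B = np.array([[2.0, 0.0],
              [0.0, 0.5]])

print(np.linalg.det(A), np.linalg.det(B))  # 1.0 1.0
print(np.linalg.det(A + B))                # 4.5: the sum leaves the set
```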

Subspaces

A subset W ⊆ V is a subspace if:


1. W is closed under addition.
2. W is closed under scalar multiplication.
3. W contains the additive identity 0.

Properties of Vector Spaces

1. The additive identity 0 is unique.

2. The additive inverse −v is unique for each v ∈ V.


3. 0 ⋅ v = 0 and a ⋅ 0 = 0 for all a ∈ F, v ∈ V .
4. (−1) ⋅ v = −v for all v ∈ V .

Basis and Dimension

A basis of a vector space V is a linearly independent set of vectors that spans V .

The dimension of V, denoted dim V, is the cardinality of any of its bases (all bases of V have the same cardinality).

Summary

A vector space is a fundamental concept in linear algebra, generalizing the notion of Euclidean space to arbitrary dimensions and fields. Its structure allows for the study of linear independence, transformations, and eigenvectors, forming the basis for advanced mathematical theories and applications.

Lecture 2: Examples of Vector Spaces

Euclidean Space (Rn )

1. Definition: The set of all ordered n-tuples of real numbers:

Rn = {(x1, x2, …, xn) ∣ xi ∈ R, i = 1, 2, …, n}.

2. Operations:

Vector addition: (x1, x2, …, xn) + (y1, y2, …, yn) = (x1 + y1, x2 + y2, …, xn + yn).

Scalar multiplication: a ⋅ (x1, x2, …, xn) = (ax1, ax2, …, axn), a ∈ R.

3. Properties:

Contains the zero vector: (0, 0, … , 0).

Closed under addition and scalar multiplication.

Dimension: n.

Space of Polynomials (Pn)

1. Definition: The set of all polynomials of degree at most n:

Pn = {p(x) = a0 + a1 x + ⋯ + an xn ∣ ai ∈ F}.

2. Operations:

Polynomial addition: (p + q)(x) = p(x) + q(x).


Scalar multiplication: (a ⋅ p)(x) = a ⋅ p(x), a ∈ F.
3. Properties:

Contains the zero polynomial 0(x).

Closed under addition and scalar multiplication.

Dimension: n + 1, with basis {1, x, x2 , … , xn }.

Matrix Space (Mm×n(F))

1. Definition: The set of all m × n matrices with entries from F:

Mm×n (F) = {A ∣ A is an m × n matrix with entries in F}.


2. Operations:

Matrix addition: (A + B)ij = Aij + Bij.

Scalar multiplication: (a ⋅ A)ij = a ⋅ Aij, a ∈ F.


3. Properties:

Contains the zero matrix O .

Closed under addition and scalar multiplication.

Dimension: m × n, with basis consisting of matrices with a single 1 in one position and 0 elsewhere.
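The counting argument behind this dimension formula is easy to verify numerically. The following sketch (NumPy is an illustrative assumption, not part of the notes) enumerates the standard basis of M2×3(R) and reconstructs an arbitrary matrix from it:

```python
import numpy as np

m, n = 2, 3

# Standard basis of M_{2x3}(R): E_ij has a single 1 in position (i, j).
basis = []
for i in range(m):
    for j in range(n):
        E = np.zeros((m, n))
        E[i, j] = 1.0
        basis.append(E)

print(len(basis))  # 6 == m * n, the dimension of M_{2x3}(R)

# Any matrix equals the linear combination sum_{i,j} A[i, j] * E_ij.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
recombined = sum(A[i, j] * basis[i * n + j] for i in range(m) for j in range(n))
print(np.allclose(A, recombined))  # True
```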

Space of Continuous Functions (C([a, b]))

1. Definition: The set of all continuous real-valued functions on the interval [a, b]:

C([a, b]) = {f : [a, b] → R ∣ f is continuous}.


2. Operations:

Function addition: (f + g)(x) = f (x) + g(x).


Scalar multiplication: (a ⋅ f )(x) = a ⋅ f (x), a ∈ R.
3. Properties:

Contains the zero function f (x) = 0.


Closed under addition and scalar multiplication.

Infinite-dimensional, as there is no finite basis.

Sequence Space (ℓ2 )

1. Definition: The set of all infinite sequences (a1, a2, …) such that:

∑_{i=1}^{∞} |ai|^2 < ∞.

2. Operations:

Sequence addition: (a1, a2, …) + (b1, b2, …) = (a1 + b1, a2 + b2, …).

Scalar multiplication: c ⋅ (a1, a2, …) = (ca1, ca2, …), c ∈ R or C.

3. Properties:

Contains the zero sequence (0, 0, …).

Closed under addition and scalar multiplication.

Infinite-dimensional, typically used in functional analysis.

Non-Examples of Vector Spaces (for Reference)

1. The set of natural numbers N with standard addition and scalar multiplication.

2. The set of 2 × 2 invertible matrices, as it is not closed under addition (e.g., A + (−A) = O is not invertible).

Summary

These examples illustrate the diversity of vector spaces, ranging from finite-dimensional spaces like Rn to infinite-
dimensional spaces like C([a, b]) and ℓ2 . Each example satisfies the vector space axioms, providing a foundation
for applications in various branches of mathematics and science.

Lecture 3: Vector Subspaces

Definition of Vector Subspaces

A subspace of a vector space V over a field F is a subset W ⊆ V that is itself a vector space under the operations
of addition and scalar multiplication inherited from V .

Subspace Conditions

A subset W ⊆ V is a subspace if and only if it satisfies the following three conditions:


1. Closure under Addition: ∀u, v ∈ W, u + v ∈ W.
2. Closure under Scalar Multiplication: ∀c ∈ F, u ∈ W , c ⋅ u ∈ W .
3. Contains Zero: W contains the zero vector 0 ∈ V (given closure under scalar multiplication, it is enough that W is non-empty, since then 0 ⋅ u = 0 ∈ W).

Examples of Subspaces

1. Zero Subspace: {0} is a subspace of any vector space V .

2. The Whole Space: W = V is a trivial subspace of V .


3. Lines through the Origin: In R2 , any line passing through the origin is a subspace.

4. Planes through the Origin: In R3 , any plane passing through the origin is a subspace.

Testing for Subspaces

To verify whether a given subset W ⊆ V is a subspace:


1. Confirm that 0 ∈ W.
2. Check closure under addition: u + v ∈ W ∀u, v ∈ W .
3. Check closure under scalar multiplication: c ⋅ u ∈ W ∀c ∈ F, u ∈ W .
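These three checks can be spot-tested numerically. The sketch below is a hedged illustration: the candidate subspace W = {(x, y, z) : x + y + z = 0} and the use of NumPy are assumptions, and random sampling can only refute closure, never prove it (a genuine proof needs algebra):

```python
import numpy as np

rng = np.random.default_rng(0)

def in_W(v, tol=1e-9):
    """Membership test for the candidate W = {(x, y, z) : x + y + z = 0}."""
    return abs(v.sum()) < tol

# 1. W contains the zero vector.
assert in_W(np.zeros(3))

# 2 and 3. Spot-check closure under addition and scalar multiplication.
for _ in range(1000):
    u = rng.normal(size=3); u -= u.mean()   # subtracting the mean forces the sum to 0
    v = rng.normal(size=3); v -= v.mean()
    c = rng.normal()
    assert in_W(u + v) and in_W(c * u)

print("W passed every sampled subspace check")
```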

Non-Examples

1. A subset that does not contain the zero vector is not a subspace.

Example: The set of vectors {(1, y) ∣ y ∈ R} in R2 .


2. A subset that is not closed under addition or scalar multiplication is not a subspace.

Example: The set of vectors in R2 with positive coordinates.

Theorem: Intersection of Subspaces

The intersection of two subspaces W1 and W2 of a vector space V is also a subspace:

W1 ∩ W2 = {v ∈ V ∣ v ∈ W1 and v ∈ W2}.

Proof: The intersection is closed under addition and scalar multiplication and contains 0, satisfying the
subspace conditions.

Theorem: Sum of Subspaces

The sum of two subspaces W1 and W2 is defined as:

W1 + W2 = {v ∈ V ∣ v = u1 + u2, u1 ∈ W1, u2 ∈ W2}.

The sum W1 + W2 is also a subspace of V.


Examples in Common Spaces

1. R3 :

Subspaces include the origin {0}, any line through the origin, and any plane through the origin.

2. Space of Matrices Mn×n(R):

Subspace: The set of all symmetric n × n matrices.

3. Space of Polynomials Pn:

Subspace: The set of polynomials with zero constant term.

Summary

Vector subspaces are subsets of a vector space that inherit all the properties of a vector space. They play a crucial
role in understanding the structure of vector spaces, forming the building blocks for studying linear
independence, bases, and dimension.

Lecture 4: Linear Combinations and Span

Definition: Linear Combination

A linear combination of a set of vectors {v1, v2, …, vk} in a vector space V is any vector v ∈ V that can be expressed as:

v = a1 v1 + a2 v2 + ⋯ + ak vk,

where a1, a2, …, ak ∈ F are scalars and v1, v2, …, vk ∈ V.

Examples of Linear Combinations

1. In R3, given v1 = (1, 0, 0), v2 = (0, 1, 0), and v3 = (0, 0, 1), the vector (2, 3, 4) can be written as:

(2, 3, 4) = 2v1 + 3v2 + 4v3.

2. In the space of polynomials P2, given p1(x) = 1, p2(x) = x, and p3(x) = x2, the polynomial 3 + 2x + 4x2 is a linear combination:

3 + 2x + 4x2 = 3p1(x) + 2p2(x) + 4p3(x).

Definition: Span

The span of a set of vectors {v1, v2, …, vk} in a vector space V is the set of all possible linear combinations of these vectors:

Span(v1, v2, …, vk) = {a1 v1 + a2 v2 + ⋯ + ak vk ∣ a1, a2, …, ak ∈ F}.

The span of {v1, v2, …, vk} is the smallest subspace of V containing v1, v2, …, vk.

Examples of Span

1. In R2, the span of v1 = (1, 0) and v2 = (0, 1) is the entire space R2, since:

Span(v1, v2) = {a1(1, 0) + a2(0, 1) ∣ a1, a2 ∈ R} = R2.

2. In R3, the span of v1 = (1, 0, 0) and v2 = (0, 1, 0) is the xy-plane:

Span(v1, v2) = {a1(1, 0, 0) + a2(0, 1, 0) ∣ a1, a2 ∈ R}.

3. In P2, the span of p1(x) = 1 and p2(x) = x is the set of all linear polynomials:

Span(p1, p2) = {a1 + a2 x ∣ a1, a2 ∈ F}.

Properties of Span

1. The span of a set of vectors {v1, v2, …, vk} is always a subspace of V.

2. If W = Span(v1, v2, …, vk), then W contains:

The zero vector 0, since 0 = 0v1 + 0v2 + ⋯ + 0vk.

Any scalar multiple of vi, as avi ∈ W for all a ∈ F.

3. If the span of {v1, v2, …, vk} is V, then v1, v2, …, vk are said to span V.
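Span membership is itself a computation: w ∈ Span(v1, …, vk) exactly when appending w to the matrix [v1 … vk] does not increase its rank. A short sketch (NumPy assumed, not part of the notes):

```python
import numpy as np

def in_span(w, *vectors):
    """w lies in Span(vectors) iff appending w does not raise the column rank."""
    V = np.column_stack(vectors)
    return np.linalg.matrix_rank(np.column_stack([V, w])) == np.linalg.matrix_rank(V)

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])

print(in_span(np.array([2.0, 3.0, 0.0]), v1, v2))  # True: lies in the xy-plane
print(in_span(np.array([0.0, 0.0, 1.0]), v1, v2))  # False: has a z-component
```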

Relation between Span and Linear Independence

If v1, v2, …, vk are linearly independent, then they form a basis for Span(v1, v2, …, vk).

If v1, v2, …, vk are linearly dependent, some vectors in the set can be expressed as linear combinations of the others, and the set is not a basis for its span.

Summary

Linear combinations represent specific vectors derived from a given set of vectors.

The span represents the entire collection of all such vectors, forming the smallest subspace containing the
given set.

Linear combinations and span provide foundational tools for describing vector spaces and their subspaces.

Lecture 5: Linear Independence

Definition: Linear Independence

A set of vectors {v1, v2, …, vk} in a vector space V over a field F is said to be linearly independent if the only solution to the equation

a1 v1 + a2 v2 + ⋯ + ak vk = 0

is a1 = a2 = ⋯ = ak = 0, where a1, a2, …, ak ∈ F are scalar coefficients.

If there exist scalars a1, a2, …, ak, not all zero, such that

a1 v1 + a2 v2 + ⋯ + ak vk = 0,

the vectors are said to be linearly dependent.

Intuitive Meaning of Linear Independence


Linear independence implies that no vector in the set can be expressed as a linear combination of the others.
Conversely:

If vectors are linearly dependent, at least one vector can be expressed as a linear combination of the others.

If vectors are linearly independent, none of the vectors can be written as a linear combination of the others.

Example 1: Linearly Independent Vectors in R2


Let v1 = (1, 0) and v2 = (0, 1) in R2.

We check if these are linearly independent by solving:

a1 v1 + a2 v2 = 0.

Substituting v1 and v2:

a1(1, 0) + a2(0, 1) = (a1, a2).

We need:

(a1, a2) = (0, 0).

This implies a1 = 0 and a2 = 0. Since there are no other solutions, v1 and v2 are linearly independent.

Example 2: Linearly Dependent Vectors in R2


Let v1 = (1, 2) and v2 = (2, 4) in R2.

We check for dependence by solving:

a1 v1 + a2 v2 = 0.

Substituting v1 and v2:

a1(1, 2) + a2(2, 4) = (a1 + 2a2, 2a1 + 4a2).

The vector equation gives:

(a1 + 2a2, 2a1 + 4a2) = (0, 0).

From the first component:

a1 + 2a2 = 0 ⟹ a1 = −2a2.

Substituting into the second component:

2a1 + 4a2 = 2(−2a2) + 4a2 = 0.

This holds for every value of a2, so nonzero solutions exist (for example, a2 = 1, a1 = −2). Hence v1 and v2 are linearly dependent; indeed, one is a scalar multiple of the other:

v2 = 2v1.

Theorem: Basis and Linear Independence


A basis of a vector space V is a set of vectors that are linearly independent and span V . Thus:

Every vector space has a basis.

The dimension of a vector space is the number of vectors in any of its bases.

Criteria for Testing Linear Independence


To check if a given set of vectors {v1, v2, …, vk} is linearly independent:

1. Write the equation:


a1 v1 + a2 v2 + ⋯ + ak vk = 0.

2. Express this equation in terms of the scalar coefficients a1, a2, …, ak.

3. Determine whether the only solution is a1 = a2 = ⋯ = ak = 0.

For finite-dimensional vector spaces, this can be translated into solving a homogeneous system of linear
equations.

Matrix Representation for Linear Independence


To test the linear independence of vectors v1, v2, …, vk in Rn:

1. Form a matrix A whose columns are the vectors v1, v2, …, vk:

A = [v1 v2 … vk].

2. Determine the rank of the matrix A.

If rank(A) = k, the vectors are linearly independent.

If rank(A) < k, the vectors are linearly dependent.
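The rank test is one line in practice. The sketch below (NumPy assumed; the notes contain no code) applies it to the two examples above:

```python
import numpy as np

# Example 1: columns (1, 0) and (0, 1).
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(np.linalg.matrix_rank(A))  # 2 == k: linearly independent

# Example 2: columns (1, 2) and (2, 4), where v2 = 2 * v1.
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(np.linalg.matrix_rank(B))  # 1 < k: linearly dependent
```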

Summary of Key Points


1. Linear Independence: A set of vectors is linearly independent if no vector can be expressed as a linear
combination of the others.

2. Linear Dependence: A set of vectors is linearly dependent if at least one vector can be expressed as a linear
combination of the others.

3. Testing Linear Independence:

Use the definition directly by solving the equation a1 v1 + a2 v2 + ⋯ + ak vk = 0.

Translate the problem into a matrix and analyze its rank.

4. Connection to Basis: A basis is a maximal linearly independent subset of a vector space that spans the space.

Linear independence is foundational in understanding the structure of vector spaces, constructing bases, and
determining the dimensionality of vector spaces.

Lecture 6: Basis

Definition: Basis of a Vector Space

A basis of a vector space V over a field F is a subset of vectors {v1, v2, …, vn} ⊆ V that satisfies the following two conditions:

1. Linear Independence: The vectors v1, v2, …, vn are linearly independent.

2. Spanning Property: The vectors v1, v2, …, vn span the vector space V.

Formally:

Span(v1, v2, …, vn) = V

and

a1 v1 + a2 v2 + ⋯ + an vn = 0 only when a1 = a2 = ⋯ = an = 0 (no nontrivial linear combination gives the zero vector).

The number of vectors in a basis of V is called the dimension of V .

Example 1: Basis in R2
The standard basis for R2 is given by:

{e1, e2} = {(1, 0), (0, 1)}.

These vectors are linearly independent.

They span all of R2 , as any vector in R2 can be expressed as a linear combination of these two vectors:

a1(1, 0) + a2(0, 1) = (a1, a2).

Therefore, {e1, e2} is a basis for R2.

Example 2: Basis in Polynomial Spaces


Consider the vector space of all polynomials of degree at most 2, denoted P2, defined as:

P2 = {p(x) = a0 + a1 x + a2 x2 ∣ a0, a1, a2 ∈ F}.

A standard basis for P2 is:

{1, x, x2}.

These vectors (functions) are linearly independent.

They span all polynomials of degree at most 2.

Hence, {1, x, x2} forms a basis for P2.

Theorem: Uniqueness of Basis Dimension


If a finite-dimensional vector space V has a basis {v1, v2, …, vn}, then every basis of V has exactly n elements. This number is called the dimension of V.

Proof Sketch:

Suppose V has two bases {v1, …, vn} and {w1, …, wm}.

Both sets are linearly independent and span V .

By the Replacement Theorem (Lecture 8), a linearly independent set cannot have more elements than a spanning set, so n ≤ m and m ≤ n; hence n = m.

Definition: Dimension of a Vector Space


The dimension of a vector space V , denoted dim(V ), is defined as the number of vectors in any basis of V .

Example 3: Finding the Dimension
Let V = Span(v1, v2, v3) in R3, where v1 = (1, 1, 0), v2 = (1, 0, 1), v3 = (0, 1, 1).

1. Form a matrix whose columns are v1, v2, v3:

A =
[ 1  1  0 ]
[ 1  0  1 ]
[ 0  1  1 ]

2. Determine the rank of the matrix A (by row reduction or a rank computation).

Row reduction gives rank(A) = 3; equivalently, det(A) = −2 ≠ 0, so the three columns are linearly independent.

3. The dimension of V is therefore dim(V) = 3, and V is all of R3.
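The rank computation can be checked mechanically (a sketch, with NumPy as an assumption):

```python
import numpy as np

# Columns are v1 = (1, 1, 0), v2 = (1, 0, 1), v3 = (0, 1, 1).
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

print(np.linalg.matrix_rank(A))  # 3, so dim(V) = 3
print(np.linalg.det(A))          # -2.0: a nonzero determinant confirms independence
```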

Properties of a Basis
1. Any Vector is a Unique Linear Combination of Basis Vectors:

Every vector v ∈ V can be expressed uniquely as a linear combination of the basis vectors:

v = a1 v1 + a2 v2 + ⋯ + an vn,

where {v1, v2, …, vn} is a basis.

2. Basis is Minimal:

A basis contains the minimum number of vectors required to span the vector space.

If a vector is removed, the set will no longer span the vector space.

3. Basis Change:

A vector space can have many different bases. Changing the basis allows different representations of
vectors, particularly useful in applications such as coordinate transformations.

4. Dimension and Subspaces:

The dimension of a subspace is always less than or equal to the dimension of the parent vector space.

Summary
1. A basis is a set of vectors that is both linearly independent and spans the entire vector space.

2. The dimension of a vector space is the number of vectors in any of its bases.

3. The choice of basis depends on the context of the problem, but every vector space has at least one basis.

4. The dimension is invariant under a change of basis.

The concept of basis is fundamental in linear algebra because it allows for efficient representation and
computation within vector spaces.

Lecture 7: Dimension

Definition: Dimension of a Vector Space

The dimension of a vector space V over a field F, denoted as dim(V ), is defined as the number of vectors in any
basis of V .

The dimension measures the number of degrees of freedom in the vector space V .

A basis of a vector space is a linearly independent set of vectors that spans the entire space.

Properties of Dimension
1. Uniqueness:

If a vector space has a basis, the number of elements in any basis is unique.

This means that all bases of a given finite-dimensional vector space have the same number of elements.

2. Dimension of Subspaces:

For any subspace W ⊆ V , dim(W ) ≤ dim(V ).


3. Dimension and Basis:

The number of vectors in a basis of a vector space is always equal to the dimension of that space.

4. Direct Sum of Subspaces:

If V = W1 ⊕ W2 (direct sum), then:

dim(V) = dim(W1) + dim(W2).

Examples of Dimension

Example 1: Dimension of Rn

The vector space Rn consists of all n-tuples of real numbers:

Rn = {(x1, x2, …, xn) ∣ x1, x2, …, xn ∈ R}.

The standard basis of Rn is given by:

{e1, e2, …, en},

where ei is the i-th standard basis vector defined as:


e1 = (1, 0, 0, …, 0), e2 = (0, 1, 0, …, 0), …, en = (0, 0, 0, …, 1).

The number of vectors in this basis is n, so:

dim(Rn) = n.

Example 2: Dimension of a Subspace

Let V = R3 and consider a subspace W spanned by:

W = Span{(1, 1, 0), (0, 1, 1)}.

1. We check if these vectors are linearly independent:

Write the matrix whose columns are these vectors:

A =
[ 1  0 ]
[ 1  1 ]
[ 0  1 ]

2. Perform row reduction:

A →
[ 1  0 ]
[ 0  1 ]
[ 0  0 ]

The rank of the matrix is 2, implying:

dim(W) = 2.

Example 3: Infinite-Dimensional Vector Space

The vector space of all polynomials P over R is defined by:

P = {p(x) ∣ p(x) is a polynomial with real coefficients}.

The set:

{1, x, x2 , x3 , … }

forms a basis for P.

This basis is infinite because polynomials can have arbitrarily high degrees.

Therefore, the dimension of P is infinite.

Dimension and Basis


The dimension of a vector space is directly tied to its basis:

1. Let V have a basis {v1, v2, …, vn}.

2. The number of elements in this basis is n.

3. Therefore:

dim(V) = n.

If the space is infinite-dimensional, the basis will have infinitely many elements.

Subspace Dimension Properties

Theorem 1: Dimension of a Subspace

Let W ⊆ V be a subspace. Then:

dim(W ) ≤ dim(V ).

Theorem 2: Dimension Formula for Direct Sums

If V = W1 ⊕ W2 (a direct sum), the dimension satisfies:

dim(V) = dim(W1) + dim(W2).

Theorem 3: Adding Vectors to Span a Space

When extending a basis of a subspace W ⊆ V , if you add vectors that are not in the span of W , the dimension of
the new subspace increases by the number of independent vectors added.

Dimension and Coordinate Representations


When a vector space has dimension n, every vector can be uniquely written with respect to a basis {v1, v2, …, vn} as:

v = a1 v1 + a2 v2 + ⋯ + an vn,

where a1, a2, …, an ∈ F. The coordinates of v are the scalars (a1, a2, …, an).

Coordinate systems depend on the choice of the basis, but the dimension remains invariant.
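Concretely, finding the coordinates of v in a basis {v1, …, vn} of Rn means solving the linear system B a = v, where the columns of B are the basis vectors. A sketch (NumPy assumed; the basis below is the one from Example 3 of Lecture 6):

```python
import numpy as np

# Columns are the basis vectors v1 = (1, 1, 0), v2 = (1, 0, 1), v3 = (0, 1, 1).
B = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
v = np.array([2.0, 3.0, 5.0])

a = np.linalg.solve(B, v)      # the coordinate vector (a1, a2, a3)
print(a)                       # [0. 2. 3.]
print(np.allclose(B @ a, v))   # True: v = a1*v1 + a2*v2 + a3*v3, uniquely
```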

Summary
1. Definition: The dimension of a vector space is the number of vectors in any of its bases.

2. Finite vs Infinite Dimensional Spaces:

A finite-dimensional vector space has a finite basis.

An infinite-dimensional space has an infinite basis.

3. Subspaces: The dimension of any subspace W ⊆ V satisfies dim(W ) ≤ dim(V ).


4. Direct Sum Property: If V = W1 ⊕ W2, then:

dim(V) = dim(W1) + dim(W2).

5. The dimension provides a fundamental understanding of the structure and complexity of vector spaces.

Understanding dimension is crucial for many areas of mathematics, physics, and engineering, as it provides a
measure of how many independent directions or degrees of freedom exist in a vector space.

Lecture 8: Replacement Theorem and its Consequences

Replacement Theorem

The Replacement Theorem is a fundamental result in linear algebra that relates bases, dimension, and the notion
of extending or replacing subsets within a vector space. It formalizes how a set of vectors can be replaced by
another set of vectors while maintaining their role as a basis under certain conditions.

Statement of the Replacement Theorem


Let V be a vector space over a field F, let S = {v1, v2, …, vm} ⊆ V be a linearly independent subset of V, and let T = {w1, w2, …, wn} ⊆ V be a spanning subset of V. The Replacement Theorem states:

If ∣S∣ = m ≤ n and T is a spanning set of size n, then S can be extended to a spanning set of V by adjoining a suitable subset of T, while maintaining linear independence of S.

More formally:

There exists a subset T′ ⊆ T of size n − m such that:

B = S ∪ T′

spans V; if moreover T is a basis of V (so n = dim(V)), then B is a basis of V.

This means that you can "replace" m of the vectors in the spanning set T with the m vectors of S while preserving the spanning property.

Formal Version of the Replacement Theorem


Let V be a vector space over F, let S = {v1, v2, …, vm} be a linearly independent subset, and let T = {w1, w2, …, wn} be a spanning subset. Then m ≤ n, and there exist vectors u1, u2, …, un−m ∈ T such that

B = S ∪ {u1, u2, …, un−m}

spans V; when T is a basis of V, B is a basis for V.

Consequences of the Replacement Theorem
1. Dimension and Basis Extension:

The Replacement Theorem implies that the dimension of a vector space is invariant under changes of
basis.

Given a linearly independent subset and a spanning subset, you can always construct a basis by
extending the linearly independent subset appropriately.

2. Uniqueness of Dimension:

The theorem leads directly to the idea that the dimension of a vector space is unique and does not
depend on the choice of basis.

3. Spanning Set Replacement:

The Replacement Theorem allows replacing elements in a spanning set with elements from a linearly
independent subset, provided the number of replacements aligns with the dimension of the space.

4. Basis Completeness:

If m = dim(V ), then the subset S itself forms a basis of V .

Example 1: Using the Replacement Theorem


Let V = R3 (a 3-dimensional real vector space), and let:
S = {(1, 0, 0)}, a linearly independent subset.
T = {(1, 1, 0), (0, 1, 1), (1, 1, 1)}, a spanning set for R3 .

Here:

∣S∣ = 1,
∣T ∣ = 3,
dim(V ) = 3.

Using the Replacement Theorem:

1. Since m = 1 and n = 3, we keep n − m = 2 suitable elements of T to complete the basis.

2. Choose u1 = (1, 1, 0), u2 = (0, 1, 1).

3. The new basis becomes:

{(1, 0, 0), (1, 1, 0), (0, 1, 1)}.

These three vectors are linearly independent, so they form a basis of R3. Note that the theorem guarantees that some choice of two vectors from T works, not that every choice does: taking u1 = (0, 1, 1) and u2 = (1, 1, 1) would fail, since (1, 1, 1) = (1, 0, 0) + (0, 1, 1).
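The constructive content of the theorem is a greedy scan: walk through T and adjoin each vector that enlarges the span of the current independent set. The sketch below (NumPy assumed; this is one natural reading of the proof, not the notes' own algorithm) reproduces the example:

```python
import numpy as np

def extend_to_basis(S, T):
    """Greedily extend the independent set S using vectors from the spanning set T:
    adjoin w whenever it raises the rank of the current collection."""
    current = list(S)
    for w in T:
        if np.linalg.matrix_rank(np.column_stack(current + [w])) == len(current) + 1:
            current.append(w)
    return current

S = [np.array([1.0, 0.0, 0.0])]
T = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0]), np.array([1.0, 1.0, 1.0])]

basis = extend_to_basis(S, T)
print(np.column_stack(basis))   # columns (1,0,0), (1,1,0), (0,1,1): a basis of R3
```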

Key Insight

The Replacement Theorem is a guarantee of flexibility. It ensures that no matter the initial choice of spanning
vectors, you can always align them with a desired linearly independent subset while maintaining the properties of
a basis.

Summary of Core Idea


The Replacement Theorem ensures that given a linearly independent subset S and a spanning subset T , you can
construct a basis of the vector space by appropriately replacing vectors from T while preserving linear
independence. This result has direct implications for understanding dimensions, constructing bases, and
exploring the structure of vector spaces.

Lecture 9: Rank and Nullity

Definition: Rank of a Linear Map

Let T: V → W be a linear map between two vector spaces V and W over a field F. The rank of T , denoted
rank(T ), is defined as the dimension of the image (or range) of T :

rank(T ) = dim(Im(T )) = dim({T (v) ∣ v ∈ V }).

Here:

Im(T ) = range of T ⊆ W ,
V is the domain, W is the codomain.

The rank represents the dimension of the subspace of W that is spanned by the images of all vectors in V under
T.

Definition: Nullity of a Linear Map

The nullity of T , denoted nullity(T ), is defined as the dimension of the kernel (or null space) of T :

nullity(T ) = dim(Ker(T )) = dim({v ∈ V ∣ T (v) = 0}).

Here:

Ker(T ) = {v ∈ V ∣ T (v) = 0},


This is the subspace of V consisting of all vectors that are mapped to the zero vector under T .

The nullity represents how many degrees of freedom in V are lost under the mapping T .

The Rank-Nullity Theorem
The Rank-Nullity Theorem provides a fundamental relationship between the rank and nullity of a linear map T :
V → W:

Theorem: Rank-Nullity

For a linear map T : V → W , the following equation holds:

dim(V ) = rank(T ) + nullity(T ).

Interpretation:

dim(V ) is the dimension of the domain space V ,


rank(T ) is the dimension of the image of T ,
nullity(T ) is the dimension of the kernel of T .

This relationship states that the sum of the rank (the "effective dimension" of the map) and nullity (the number of
dimensions mapped to 0) must account for all the dimensions of V .

Proof Sketch of the Rank-Nullity Theorem


1. Let {v1, v2, …, vk} be a basis of Ker(T) ⊆ V, where Ker(T) = {v ∈ V ∣ T(v) = 0}.

2. Extend this basis to a full basis of V by adding vectors {vk+1, vk+2, …, vn}, where n = dim(V), so that {v1, v2, …, vn} forms a basis for V.

3. Map each of these vectors under T :

Vectors v1, v2, …, vk ∈ Ker(T) are mapped to 0.

The remaining vectors vk+1, vk+2, …, vn are mapped to nonzero vectors under T.

4. The images T(vk+1), T(vk+2), …, T(vn) are linearly independent and span Im(T).

5. The number of these images is therefore n − k = rank(T).


6. Therefore:

dim(V ) = nullity(T ) + rank(T ).

Key Idea

The nullity tells us how much of the input space gets "collapsed" to zero under T , while the rank tells us how much
of the codomain is "covered" by the mapping. Their sum always equals the total dimension of the domain vector
space V .

Example 1: Linear Map from R3 to R2


Let T : R3 → R2 be defined by:

T(x1, x2, x3) = (x1 + x2, x2 + x3).

1. Kernel (Null Space): Solve T (v) = 0:

(x1 + x2, x2 + x3) = (0, 0), which gives the system:

x1 + x2 = 0,
x2 + x3 = 0.

From x1 + x2 = 0, we have x1 = −x2. From x2 + x3 = 0, we have x3 = −x2. Therefore:

v = (−x2, x2, −x2).

Let x2 = t (a free variable). Then:

v = t(−1, 1, −1).

Thus:

dim(Ker(T)) = 1.
2. Rank: Using the Rank-Nullity Theorem:

dim(V ) = nullity(T ) + rank(T ).

Here:

dim(V ) = 3,
nullity(T ) = 1, so:

rank(T ) = 3 − 1 = 2.
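The same numbers fall out of a direct computation on the standard matrix of T (a sketch; NumPy is assumed):

```python
import numpy as np

# Matrix of T(x1, x2, x3) = (x1 + x2, x2 + x3) in the standard bases.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank      # rank-nullity: nullity = dim(V) - rank
print(rank, nullity)             # 2 1

# The kernel direction found above, t * (-1, 1, -1), is indeed sent to zero.
print(A @ np.array([-1.0, 1.0, -1.0]))  # [0. 0.]
```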

Example 2: Zero Map


Consider the zero linear map T : Rn → Rm defined by T (v) = 0 for all v ∈ Rn .
1. Rank:

The image is {0},

Hence rank(T ) = dim({0}) = 0.


2. Nullity:

The entire space Rn maps to 0,

Hence nullity(T ) = dim(Rn ) = n.

From the Rank-Nullity Theorem:

rank(T ) + nullity(T ) = 0 + n = dim(V ).

Summary of Key Points


1. Rank: Dimension of the image of a linear map.

2. Nullity: Dimension of the kernel (set of vectors mapped to zero).

3. Rank-Nullity Theorem: Relates the dimensions of a linear map's kernel, image, and the domain vector space:

dim(V ) = rank(T ) + nullity(T ).


4. Understanding the rank and nullity is critical in linear algebra, as it describes how much of the domain's
degrees of freedom are "lost" or preserved under a linear map.

Lecture 10: Linear Transformations

Definition: Linear Transformation

A linear transformation (or linear map) is a function between two vector spaces that preserves the operations of
vector addition and scalar multiplication.

Let V and W be vector spaces over the same field F. A function T : V → W is called a linear transformation if it
satisfies the following two properties for all vectors u, v ∈ V and all scalars c ∈ F:

1. Additivity:

T (u + v) = T (u) + T (v)
2. Homogeneity:

T (c ⋅ v) = c ⋅ T (v)

These two conditions ensure that the transformation respects the vector space structure under both addition and
scalar multiplication.

Standard Representation of Linear Transformations


A linear transformation can be described using matrices when a basis is chosen for both the domain and
codomain. Specifically:

Let V be an n-dimensional vector space,

Let W be an m-dimensional vector space,

A linear transformation T : V → W can be represented by an m × n matrix A.

If v ∈ V is expressed as a column vector, then the action of T can be written as:

T (v) = A ⋅ v,

where A is the matrix corresponding to T with respect to chosen bases in V and W .

Properties of Linear Transformations


1. Linearity Conditions: A function T : V → W must satisfy:
T (u + v) = T (u) + T (v),
T (c ⋅ v) = c ⋅ T (v).
2. Preservation of the Zero Vector:

T (0) = 0

where 0 represents the zero vector in the domain and codomain.

3. Matrix Representation: Every linear transformation can be represented uniquely by a matrix when a basis is
chosen for V and W .

4. Composition of Linear Maps: If T : V → W and S : W → U are linear maps, then the composition S ∘
T : V → U is also a linear map.

Examples of Linear Transformations

Example 1: Identity Map

Define T : Rn → Rn by:

T (v) = v for all v ∈ Rn .

The identity map is trivially a linear transformation because:

1. T(v1 + v2) = v1 + v2 = T(v1) + T(v2),

2. T (c ⋅ v) = c ⋅ v = c ⋅ T (v).

The corresponding matrix is the identity matrix In.

Example 2: Scaling Transformation

Define T : Rn → Rn by:

T (v) = c ⋅ v,

where c ∈ F is a scalar constant.


This maps every vector v in Rn to its scalar multiple c ⋅ v . This is a linear transformation because it satisfies both
additivity and homogeneity.

The corresponding matrix representation is cI , where I is the identity matrix.

Example 3: Projection

Let V = R2 , and define T : R2 → R2 by projecting every vector onto the x-axis:

T(x, y) = (x, 0).

This map keeps the x-component of a vector while setting the y -component to zero. This is a linear map because:
1. T(v1 + v2) = T(x1 + x2, y1 + y2) = (x1 + x2, 0) = T(v1) + T(v2),

2. T(c ⋅ v) = T(c⋅x, c⋅y) = (c⋅x, 0) = c ⋅ T(v).

The corresponding matrix representation is:

A =
[ 1  0 ]
[ 0  0 ]
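A two-line check (NumPy assumed, as an illustrative sketch) confirms the geometric behavior, including the characteristic property that projecting twice is the same as projecting once:

```python
import numpy as np

P = np.array([[1.0, 0.0],
              [0.0, 0.0]])    # projection onto the x-axis

v = np.array([3.0, 4.0])
print(P @ v)                  # [3. 0.]: keeps x, zeroes y
print(np.allclose(P @ P, P))  # True: P is idempotent, as every projection is
```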

Example 4: Zero Map

Define T : Rn → Rm by:

T (v) = 0 (the zero vector in Rm ) for all v ∈ Rn .

This map sends every vector to the zero vector, irrespective of v . The zero map is trivially linear because:

1. T(v1 + v2) = 0 = T(v1) + T(v2),

2. T (c ⋅ v) = 0 = c ⋅ T (v).

The corresponding matrix is a zero matrix.

Matrix Representation of Linear Transformations


Given a linear transformation T : Rn → Rm, if {e1, e2, …, en} is the standard basis for Rn and {f1, f2, …, fm} is the standard basis for Rm, the action of T can be written with a matrix A whose columns are the images of the basis vectors under T:

A = [T(e1) T(e2) … T(en)].

If v ∈ Rn is written in coordinates as v = (v1, v2, …, vn), then:

T(v) = A ⋅ v.

Properties of Linear Transformations


1. Preserve Vector Addition:

T (v + u) = T (v) + T (u).
2. Preserve Scalar Multiplication:

T (c ⋅ v) = c ⋅ T (v).
3. Composition of Linear Maps: If S : W → U and T : V → W are linear transformations, then their
composition S ∘ T : V → U is also a linear transformation.
4. Invertibility: A linear map T : V → W is invertible if there exists a linear map T −1 : W → V such that:

T −1 (T (v)) = v for all v ∈ V and T (T −1 (w)) = w for all w ∈ W .

This can happen only if T is bijective.

Summary
Linear transformations map vectors from one vector space to another while preserving vector addition and
scalar multiplication.

They can always be represented by a matrix with respect to chosen bases.

Common examples include identity maps, scaling, projections, and the zero map.

The concept of invertibility depends on whether the transformation is bijective.

Lecture 11: Linear Transformations and Basis

Introduction
The concept of linear transformations is deeply connected to the idea of basis. This lecture explores how linear
transformations interact with bases in vector spaces, the role of basis changes, and how to represent linear
transformations with respect to chosen bases.

Key Idea
Linear transformations can be expressed with matrices when vector spaces V (domain) and W (codomain) are
given specific bases. Understanding how a linear map interacts with these bases allows efficient computation and
insight into the structure of the map.

1. Linear Transformations and Basis Representation


Let T : V → W be a linear transformation between vector spaces V and W . To work with T concretely, we can
represent it with respect to chosen bases of V and W .

Let:

B = {v1, v2, …, vn} be a basis of V,

C = {w1, w2, …, wm} be a basis of W.

The matrix representation of T with respect to these bases B (for V) and C (for W) is denoted [T]B,C.

2. Matrix Representation
The matrix representation of T is constructed by expressing T(vi) (the image of each basis vector vi of V) in terms of the basis C of W.

Steps to Construct the Matrix Representation


1. For each basis vector vi ∈ B , compute T (vi ).

2. Express T(vi) in terms of the basis C as a linear combination:

T(vi) = a1i w1 + a2i w2 + ⋯ + ami wm,

where aji are the scalar coefficients of this linear combination, for j = 1, 2, …, m.
3. Write T(vi) as a column vector with respect to C:

[T(vi)]C = (a1i, a2i, …, ami)ᵀ.

4. Form the matrix [T]B,C by using these column vectors as its columns:

[T]B,C = [[T(v1)]C [T(v2)]C … [T(vn)]C].

Here:

Each column corresponds to the image of the basis vector vi under T , ​

The entries of these columns are coordinates with respect to the codomain basis C .

3. Example
Let’s compute an example for clarity:

Let V = R2 , W = R2 , and let:

T : R2 → R2

be defined by:

T(x, y) = (x + y, x − y).

We compute [T]B,C with respect to the standard basis B = C = {(1, 0), (0, 1)} of R2.

Step 1: Map the Basis Vectors


1. Map v1 = (1, 0):

T(v1) = T(1, 0) = (1 + 0, 1 − 0) = (1, 1).

2. Map v2 = (0, 1):

T(v2) = T(0, 1) = (0 + 1, 0 − 1) = (1, −1).

Step 2: Write the Matrix

Now represent these mappings as column vectors:

[T(v1)]C = (1, 1)ᵀ, [T(v2)]C = (1, −1)ᵀ.

The matrix representation is:

[T]B,C =
[ 1   1 ]
[ 1  −1 ]
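As a sanity check (a sketch with NumPy, which the notes do not use), the matrix built column by column from T(v1) and T(v2) reproduces T on an arbitrary vector:

```python
import numpy as np

# Columns are T(e1) = (1, 1) and T(e2) = (1, -1).
A = np.column_stack([[1.0, 1.0],
                     [1.0, -1.0]])

v = np.array([2.0, 5.0])
print(A @ v)  # [ 7. -3.] == (2 + 5, 2 - 5), matching T(x, y) = (x + y, x - y)
```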

4. Change of Basis
If the basis of V or W changes, the matrix representation of T will change. Let:

B′ = {v1′, v2′, …, vn′} be a new basis for V,

C′ = {w1′, w2′, …, wm′} be a new basis for W.

The change of bases is expressed using transition matrices. The new matrix representation [T]B′,C′ is given by:

[T]B′,C′ = Q−1 [T]B,C P,

where:

P is the transition matrix in the domain V (it converts B′-coordinates into B-coordinates),

Q is the transition matrix in the codomain W (it converts C′-coordinates into C-coordinates), so Q−1 converts C-coordinates back into C′-coordinates.

These transformations are essential in applications where different bases are used to describe computations,
including computer graphics, numerical methods, or theoretical physics.

5. Summary of Key Points


1. Matrix Representation:

The matrix [T]B,C represents a linear transformation T : V → W with respect to bases B (domain) and C (codomain).
2. How to Construct the Matrix:

Map basis vectors vi ∈ B under T,

Express T(vi) in terms of C,

Use these expressions as columns of [T]B,C.

3. Change of Basis:

Changing the basis in either V or W affects the matrix representation of T ,

Transition matrices P and Q are used to compute these new representations.

Understanding these relationships allows us to work with linear transformations concretely via matrix
computations.

Lecture 12: Linear Transformations and Matrices

Introduction
Linear transformations can be represented by matrices, and matrices themselves define how a linear map acts on
vectors. This lecture will explore the connection between linear transformations and their corresponding matrix
representations, the role of matrix multiplication, and how matrices encode geometric transformations.

1. Definition: Linear Transformations and Matrices


Let V and W be vector spaces over a field F (e.g., the real numbers R or the complex numbers C), and let T : V → W be a linear transformation.

When dim(V ) = n and dim(W ) = m, the linear transformation T can be uniquely represented by an m × n
matrix A with respect to chosen bases of V and W .

The matrix A is called the matrix representation of T , denoted by [T ]B,C , where: ​

B = {v1, v2, …, vn} is a basis of V,

C = {w1, w2, …, wm} is a basis of W.

2. Constructing the Matrix of a Linear Transformation


The matrix representation of a linear map is obtained by expressing the image under T of each basis vector of V in terms of the basis C for W.

Steps to Determine the Matrix A:


Let v1, v2, …, vn ∈ V be the basis vectors of V, and let T(vi) denote the image of vi under the linear map T.

1. Map each basis vector vi ∈ B under T:

T(vi) ∈ W.

2. Express T(vi) in terms of the basis C:

T(vi) = a1i w1 + a2i w2 + ⋯ + ami wm,

where aji ∈ F are scalar coefficients corresponding to the coordinates of T(vi) with respect to C.

3. Write the coordinates as a column vector:


[T(vi)]C = (a1i, a2i, …, ami)ᵀ.

4. Construct the matrix A = [T ]B,C by placing these vectors as columns:


A = [[T(v1)]C [T(v2)]C … [T(vn)]C].

The matrix A represents the action of T with respect to the bases B and C .

3. Example: Matrix Representation


Let’s compute the matrix representation of a specific linear transformation.

Example: 2D Scaling Transformation


Define a linear map T : R2 → R2 by scaling a vector by a constant c ∈ R:

T(x, y) = c(x, y) = (cx, cy).

Here, the standard basis of R2 is B = {(1, 0), (0, 1)}.

Step 1: Map the basis vectors


1. Map v1 = (1, 0):

T(v1) = c(1, 0) = (c, 0).

2. Map v2 = (0, 1):

T(v2) = c(0, 1) = (0, c).

Step 2: Write the columns


The images of the basis vectors are:

[T(v1)]C = (c, 0)ᵀ, [T(v2)]C = (0, c)ᵀ.

Step 3: Form the matrix

The matrix representation of T is:

A = [[T(v1)]C [T(v2)]C] =
[ c  0 ]
[ 0  c ]

This matrix corresponds to uniform scaling by factor c.

4. Matrix Operations and Linear Transformations


Matrices and their operations can be linked directly to linear transformations:

1. Matrix Multiplication as Composition of Linear Maps: If T : V → W and S : W → U are linear maps represented by matrices A and B, respectively, their composition S ∘ T : V → U is represented by the product (see the sketch after this list):

[S ∘ T] = B ⋅ A.

2. Identity Map: The identity transformation I : V → V is represented by the identity matrix In.

3. Inverse Map: If a linear map T is invertible, the inverse transformation corresponds to the inverse matrix A−1.
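A minimal numerical sketch of point 1 (NumPy assumed; the specific maps are illustrative choices, not from the notes): applying T and then S agrees with multiplying by B ⋅ A, with B on the left.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, -1.0]])   # T: reflection across the x-axis
B = np.array([[0.0, -1.0],
              [1.0, 0.0]])    # S: rotation by 90 degrees

v = np.array([2.0, 3.0])

print(B @ (A @ v))   # apply T, then S
print((B @ A) @ v)   # multiply by the product B @ A: same result
```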

5. Geometric Interpretation of Matrices


Matrices as linear transformations also have geometric interpretations:

1. Rotations: Orthogonal matrices with determinant 1 correspond to rotational transformations in spaces like R2.

2. Scaling: Diagonal matrices represent stretching or shrinking in different coordinate directions.

3. Reflections: Specific matrices represent reflection across a line or plane.

4. Projections: Projection matrices map points onto subspaces.

These geometric properties allow us to visualize how matrix transformations affect geometric shapes and vectors.

6. Summary
Key Points:
1. Every linear transformation can be expressed uniquely as a matrix with respect to given bases.

2. The matrix representation is constructed by expressing the image of basis vectors under the map T in terms
of the codomain's basis.

3. Matrix multiplication corresponds to the composition of linear maps.

4. Inverse maps and identity transformations correspond to matrix inverses and identity matrices, respectively.

5. Geometric transformations like scaling, rotations, and projections are encoded using matrices.

