Module 1: Introduction to Vectors
A vector is a directed line segment that corresponds to a displacement from a point $A$ to a point $B$.
The vector from $A$ to $B$ is denoted by $\overrightarrow{AB}$.
The set of all points in the plane corresponds to the set of all vectors whose initial point is the origin $O$.
For example, $\mathbf{a} = \overrightarrow{OA} = [2,3]$ and $\mathbf{b} = \overrightarrow{OB} = [3,1]$.
Two vectors are equal if and only if their corresponding components are equal. For example, $[3,2] \neq [2,3]$: the order of the components matters.
The zero vector is the vector all of whose components are zero; it is denoted by $\mathbf{0}$.
Geometrically, two vectors are equal if they have the same length and direction.
Vector Addition:
$\mathbf{u} + \mathbf{v} = [u_1 + v_1,\, u_2 + v_2]$.
Scalar Multiplication:
$c\mathbf{v} = c[v_1, v_2] = [cv_1, cv_2]$.
For example, if $\mathbf{v} = [-2,4]$, then $\frac{1}{2}\mathbf{v} = \frac{1}{2}[-2,4] = [-1,2]$ and $-2\mathbf{v} = -2[-2,4] = [4,-8]$.
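As a quick computational check of the formulas above, here is a minimal sketch assuming NumPy is available (the vector values are those of the example):

```python
import numpy as np

# Component-wise vector addition and scalar multiplication
u = np.array([1.0, 2.0])    # an arbitrary vector, chosen for illustration
v = np.array([-2.0, 4.0])   # the vector from the example above

print(u + v)      # [-1.  6.]  -> [u1 + v1, u2 + v2]
print(0.5 * v)    # [-1.  2.]  -> (1/2)v
print(-2 * v)     # [ 4. -8.]  -> -2v
```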
Vector Subtraction:
$\mathbf{u} - \mathbf{v} = [u_1 - v_1,\, u_2 - v_2]$.
Dot Product:
The vector versions of length, distance and angle can all be described using the notion of the dot product.
If $\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$, then the dot product $\mathbf{u} \cdot \mathbf{v}$ of $\mathbf{u}$ and $\mathbf{v}$ is defined by
$$\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n.$$
Example 1:
Let $\mathbf{u} = \begin{bmatrix} 1 \\ 2 \\ -3 \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} -3 \\ 5 \\ 2 \end{bmatrix}$. Then $\mathbf{u} \cdot \mathbf{v} = 1(-3) + 2(5) + (-3)(2) = 1$.
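A one-line numerical check of Example 1, assuming NumPy:

```python
import numpy as np

# Dot product u . v = u1*v1 + u2*v2 + ... + un*vn
u = np.array([1, 2, -3])
v = np.array([-3, 5, 2])

print(np.dot(u, v))   # 1*(-3) + 2*5 + (-3)*2 = 1
```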
Length of a Vector:
The length (or norm) of a vector $\mathbf{v} = [v_1, \ldots, v_n]$ is the nonnegative scalar $\|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}} = \sqrt{v_1^2 + \cdots + v_n^2}$.
Example 2:
$\|[2,3]\| = \sqrt{2^2 + 3^2} = \sqrt{13}$
Theorem:
Let 𝐯 be a vector in ℝ𝑛 and let c be a scalar. Then,
a) $\|\mathbf{v}\| = 0$ if and only if $\mathbf{v} = \mathbf{0}$.
b) $\|c\mathbf{v}\| = |c|\,\|\mathbf{v}\|$.
Unit Vector:
A vector of length 1 is called a unit vector.
Example:
• In $\mathbb{R}^2$, let $e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and $e_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$. Then $e_1$ and $e_2$ are unit vectors.
• In $\mathbb{R}^3$, let $e_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$, $e_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$ and $e_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$. Then $e_1$, $e_2$ and $e_3$ are unit vectors.
Given a nonzero vector 𝐯, we can always find a unit vector in the same direction as 𝐯 by dividing 𝐯 by its own
length.
If $\mathbf{u} = \dfrac{1}{\|\mathbf{v}\|}\mathbf{v}$, then $\|\mathbf{u}\| = \left\|\dfrac{1}{\|\mathbf{v}\|}\mathbf{v}\right\| = \dfrac{1}{\|\mathbf{v}\|}\|\mathbf{v}\| = 1$.
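The length and normalization formulas can be checked numerically; a minimal sketch assuming NumPy, using the vector from Example 2:

```python
import numpy as np

# Length of a vector and the unit vector in the same direction
v = np.array([2.0, 3.0])

length = np.linalg.norm(v)   # sqrt(2^2 + 3^2) = sqrt(13)
u = v / length               # divide v by its own length

print(length)                # 3.6055... = sqrt(13)
print(np.linalg.norm(u))     # 1.0 -> u is a unit vector
```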
Important Inequalities:
a) $\|\mathbf{u} + \mathbf{v}\| \le \|\mathbf{u}\| + \|\mathbf{v}\|$ (Triangle Inequality)
b) $|\mathbf{u} \cdot \mathbf{v}| \le \|\mathbf{u}\|\,\|\mathbf{v}\|$ (Cauchy-Schwarz Inequality)
Distance between Two Vectors:
The distance between two vectors $\mathbf{u}$ and $\mathbf{v}$ is defined by $d(\mathbf{u}, \mathbf{v}) = \|\mathbf{u} - \mathbf{v}\|$.
Example 3:
The distance between $\mathbf{u} = \begin{bmatrix} \sqrt{2} \\ 1 \\ -1 \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} 0 \\ 2 \\ -2 \end{bmatrix}$ is
$$d(\mathbf{u}, \mathbf{v}) = \|\mathbf{u} - \mathbf{v}\| = \left\| \begin{bmatrix} \sqrt{2} \\ -1 \\ 1 \end{bmatrix} \right\| = \sqrt{2 + 1 + 1} = 2.$$
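A quick numerical check of Example 3, assuming NumPy (the first component of $\mathbf{u}$ is taken as $\sqrt{2}$, consistent with the stated answer of 2):

```python
import numpy as np

# Distance d(u, v) = ||u - v||
u = np.array([np.sqrt(2), 1.0, -1.0])
v = np.array([0.0, 2.0, -2.0])

print(np.linalg.norm(u - v))   # sqrt(2 + 1 + 1) = 2.0
```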
Angle between Two Vectors:
$$\cos\theta = \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\|\,\|\mathbf{v}\|},$$
where $\theta$ is the angle between $\mathbf{u}$ and $\mathbf{v}$.
Example 4:
Find the angle between $\mathbf{u}$ and $\mathbf{v}$, where $\mathbf{u} = \begin{bmatrix} 2 \\ 1 \\ -2 \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$.
Calculate $\mathbf{u} \cdot \mathbf{v} = 2(1) + 1(1) + (-2)(1) = 1$, $\|\mathbf{u}\| = \sqrt{2^2 + 1^2 + (-2)^2} = 3$ and $\|\mathbf{v}\| = \sqrt{1^2 + 1^2 + 1^2} = \sqrt{3}$. Therefore $\cos\theta = \dfrac{1}{3\sqrt{3}}$, so
$$\theta = \cos^{-1}\left(\frac{1}{3\sqrt{3}}\right) \approx 1.377 \text{ radians}.$$
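Example 4 can be verified numerically; a minimal sketch assuming NumPy:

```python
import numpy as np

# Angle between u and v: cos(theta) = (u . v) / (||u|| ||v||)
u = np.array([2.0, 1.0, -2.0])
v = np.array([1.0, 1.0, 1.0])

cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(cos_theta)

print(theta)   # approximately 1.377 radians
```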
Orthogonal Vectors:
Two vectors $\mathbf{u}$ and $\mathbf{v}$ in $\mathbb{R}^n$ are orthogonal if $\mathbf{u} \cdot \mathbf{v} = 0$.
Projection:
If 𝐮 and 𝐯 are vectors in ℝ𝑛 and 𝐮 ≠ 𝟎, then the projection of 𝐯 onto 𝐮 is the 𝐩𝐫𝐨𝐣𝐮 (𝐯) defined by
𝐮. 𝐯
𝐩𝐫𝐨𝐣𝐮 𝐯 = 𝐮
𝐮. 𝐮
Example: Find the projection of $\mathbf{v}$ onto $\mathbf{u}$, where $\mathbf{v} = \begin{bmatrix} -1 \\ 3 \end{bmatrix}$ and $\mathbf{u} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$.
Solution:
We compute $\mathbf{u} \cdot \mathbf{v} = \begin{bmatrix} 2 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} -1 \\ 3 \end{bmatrix} = 1$ and $\mathbf{u} \cdot \mathbf{u} = \begin{bmatrix} 2 \\ 1 \end{bmatrix} \cdot \begin{bmatrix} 2 \\ 1 \end{bmatrix} = 5$, so
$$\mathrm{proj}_{\mathbf{u}}(\mathbf{v}) = \frac{\mathbf{u} \cdot \mathbf{v}}{\mathbf{u} \cdot \mathbf{u}}\,\mathbf{u} = \frac{1}{5}\begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 2/5 \\ 1/5 \end{bmatrix}.$$
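The projection formula translates directly into code; a minimal sketch assuming NumPy, using the example above:

```python
import numpy as np

# Projection of v onto u: proj_u(v) = ((u . v) / (u . u)) * u
u = np.array([2.0, 1.0])
v = np.array([-1.0, 3.0])

proj = (np.dot(u, v) / np.dot(u, u)) * u
print(proj)   # [0.4 0.2] = [2/5, 1/5]
```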
Linear Equations and Systems of Linear Equations
Linear Equation:
A linear equation in the $n$ variables $x_1, x_2, \ldots, x_n$ is an equation that can be written in the form
$$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b.$$
For example, $[3,0,0]$, $[0,1,2]$, and $[6,1,-1]$ are solutions of $x - y + 2z = 3$.
System of Linear Equations:
A system of linear equations is a finite set of linear equations in the same variables.
Example:
$$\begin{aligned} 2x - y &= 3 \\ x + 3y &= 5 \end{aligned}$$
forms a system of linear equations.
Example:
$$\begin{aligned} x - y &= 2 \\ 2x - 2y &= 4 \end{aligned}$$
Infinitely many solutions (the second equation is twice the first).
$$\begin{aligned} x - y &= 1 \\ x + y &= 3 \end{aligned}$$
Unique solution: $x = 2$, $y = 1$.
$$\begin{aligned} x - y &= 1 \\ x - y &= 3 \end{aligned}$$
No solution.
There are two important matrices associated with a linear system:
1. Coefficient Matrix
2. Augmented Matrix
For the system
$$\begin{aligned} 2x + y - z &= 3 \\ x + 5z &= 1 \\ -x + 3y - 2z &= 0 \end{aligned}$$
the coefficient matrix is
$$\begin{bmatrix} 2 & 1 & -1 \\ 1 & 0 & 5 \\ -1 & 3 & -2 \end{bmatrix}$$
and the augmented matrix is
$$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 3 \\ 1 & 0 & 5 & 1 \\ -1 & 3 & -2 & 0 \end{array}\right].$$
Row Echelon Form:
A matrix is in row echelon form if (1) any rows consisting entirely of zeros are at the bottom, and (2) in each nonzero row, the first nonzero entry (the leading entry) lies in a column to the left of the leading entries of the rows below it.
Example: $\begin{bmatrix} 2 & 4 & 1 \\ 0 & -1 & 2 \\ 0 & 0 & 0 \end{bmatrix}$ and $\begin{bmatrix} 0 & 2 & 0 & 1 & -1 & 3 \\ 0 & 0 & -1 & 1 & 2 & 2 \\ 0 & 0 & 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 0 & 0 & 5 \end{bmatrix}$ are in row echelon form.
Elementary Row Operations:
The elementary row operations are: (1) interchange two rows, (2) multiply a row by a nonzero constant, and (3) add a multiple of one row to another row. The procedure by which any matrix is reduced to a matrix in echelon form using these operations is known as row reduction.
Remark:
1. The row echelon form of a matrix is not unique.
2. The leading entry in each row is used to create the zeros below it.
3. The pivots are not necessarily the entries that are originally in the positions eventually occupied by the leading
entries.
4. Once we have pivoted and introduced zeros below the leading entry in a column, that column does not change.
5. Elementary row operations are reversible.
Example: Reduce $\begin{bmatrix} 1 & 2 & -4 & -4 & 5 \\ 2 & 4 & 0 & 0 & 2 \\ 2 & 3 & 2 & 1 & 5 \\ -1 & 1 & 3 & 6 & 5 \end{bmatrix}$ to row echelon form.
1. Work column by column, from left to right and from top to bottom.
2. The strategy is to create a leading entry in a column and then use it to create zeros below it.
3. The entry chosen to become a leading entry is called a pivot, and this phase of the process is called
pivoting.
$$\begin{bmatrix} 1 & 2 & -4 & -4 & 5 \\ 2 & 4 & 0 & 0 & 2 \\ 2 & 3 & 2 & 1 & 5 \\ -1 & 1 & 3 & 6 & 5 \end{bmatrix} \xrightarrow{\substack{R_2 - 2R_1 \\ R_3 - 2R_1 \\ R_4 + R_1}} \begin{bmatrix} 1 & 2 & -4 & -4 & 5 \\ 0 & 0 & 8 & 8 & -8 \\ 0 & -1 & 10 & 9 & -5 \\ 0 & 3 & -1 & 2 & 10 \end{bmatrix} \xrightarrow{R_2 \leftrightarrow R_3} \begin{bmatrix} 1 & 2 & -4 & -4 & 5 \\ 0 & -1 & 10 & 9 & -5 \\ 0 & 0 & 8 & 8 & -8 \\ 0 & 3 & -1 & 2 & 10 \end{bmatrix} \xrightarrow{R_4 + 3R_2}$$
$$\begin{bmatrix} 1 & 2 & -4 & -4 & 5 \\ 0 & -1 & 10 & 9 & -5 \\ 0 & 0 & 8 & 8 & -8 \\ 0 & 0 & 29 & 29 & -5 \end{bmatrix} \xrightarrow{\frac{1}{8}R_3} \begin{bmatrix} 1 & 2 & -4 & -4 & 5 \\ 0 & -1 & 10 & 9 & -5 \\ 0 & 0 & 1 & 1 & -1 \\ 0 & 0 & 29 & 29 & -5 \end{bmatrix} \xrightarrow{R_4 - 29R_3} \begin{bmatrix} 1 & 2 & -4 & -4 & 5 \\ 0 & -1 & 10 & 9 & -5 \\ 0 & 0 & 1 & 1 & -1 \\ 0 & 0 & 0 & 0 & 24 \end{bmatrix}$$
(Row echelon form)
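The same sequence of elementary row operations can be replayed numerically; a minimal sketch assuming NumPy:

```python
import numpy as np

A = np.array([[ 1,  2, -4, -4,  5],
              [ 2,  4,  0,  0,  2],
              [ 2,  3,  2,  1,  5],
              [-1,  1,  3,  6,  5]], dtype=float)

# R2 - 2R1, R3 - 2R1, R4 + R1
A[1] -= 2 * A[0]
A[2] -= 2 * A[0]
A[3] += A[0]

# R2 <-> R3
A[[1, 2]] = A[[2, 1]]

# R4 + 3R2, then (1/8)R3, then R4 - 29R3
A[3] += 3 * A[1]
A[2] /= 8
A[3] -= 29 * A[2]

print(A)   # last row is [0, 0, 0, 0, 24] -- the row echelon form above
```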
Row Equivalence of Two Matrices:
Matrices A and B are row equivalent if there is a sequence of elementary row operations that converts A to B.
Matrices A and B are row equivalent if and only if they can be reduced to the same echelon form.
Example 9: $\begin{bmatrix} 1 & 2 & -4 & -4 & 5 \\ 2 & 4 & 0 & 0 & 2 \\ 2 & 3 & 2 & 1 & 5 \\ -1 & 1 & 3 & 6 & 5 \end{bmatrix}$ and $\begin{bmatrix} 1 & 2 & -4 & -4 & 5 \\ 0 & -1 & 10 & 9 & -5 \\ 0 & 0 & 1 & 1 & -1 \\ 0 & 0 & 0 & 0 & 24 \end{bmatrix}$ are row equivalent.
Solution of System of Linear Equations by Gaussian Elimination
Gaussian elimination:
The entire process of row reduction applied to the augmented matrix of a system of linear equations to
create an equivalent system that can be solved by back substitution is known as Gaussian elimination.
Example A: Solve the system with augmented matrix
$$\left[\begin{array}{ccc|c} 0 & 2 & 3 & 8 \\ 2 & 3 & 1 & 5 \\ 1 & -1 & -2 & -5 \end{array}\right] \xrightarrow{R_1 \leftrightarrow R_3} \left[\begin{array}{ccc|c} 1 & -1 & -2 & -5 \\ 2 & 3 & 1 & 5 \\ 0 & 2 & 3 & 8 \end{array}\right] \xrightarrow{R_2 - 2R_1} \left[\begin{array}{ccc|c} 1 & -1 & -2 & -5 \\ 0 & 5 & 5 & 15 \\ 0 & 2 & 3 & 8 \end{array}\right] \xrightarrow{\frac{1}{5}R_2}$$
$$\left[\begin{array}{ccc|c} 1 & -1 & -2 & -5 \\ 0 & 1 & 1 & 3 \\ 0 & 2 & 3 & 8 \end{array}\right] \xrightarrow{R_3 - 2R_2} \left[\begin{array}{ccc|c} 1 & -1 & -2 & -5 \\ 0 & 1 & 1 & 3 \\ 0 & 0 & 1 & 2 \end{array}\right]$$
(Associated system)
𝑥 − 𝑦 − 2𝑧 = −5
𝑦+𝑧 =3
𝑧=2
The above system can now be solved using back substitution: $z = 2$, $y = 3 - z = 1$, and $x = -5 + y + 2z = 0$. Therefore $x = 0$, $y = 1$, $z = 2$ is the solution of the system.
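Since the coefficient matrix of Example A is square and invertible, the back-substitution result can be confirmed numerically; a sketch assuming NumPy:

```python
import numpy as np

# Example A: coefficient matrix and right-hand side of the original system
A = np.array([[0,  2,  3],
              [2,  3,  1],
              [1, -1, -2]], dtype=float)
b = np.array([8, 5, -5], dtype=float)

print(np.linalg.solve(A, b))   # [0. 1. 2.] -> x = 0, y = 1, z = 2
```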
Let's look at another example.
Example B: Solve the system
$$\begin{aligned} w - x - y + 2z &= 1 \\ 2w - 2x - y + 3z &= 3 \\ -w + x - y &= -3 \end{aligned}$$
$$\left[\begin{array}{cccc|c} 1 & -1 & -1 & 2 & 1 \\ 2 & -2 & -1 & 3 & 3 \\ -1 & 1 & -1 & 0 & -3 \end{array}\right] \xrightarrow{\substack{R_2 - 2R_1 \\ R_3 + R_1}} \left[\begin{array}{cccc|c} 1 & -1 & -1 & 2 & 1 \\ 0 & 0 & 1 & -1 & 1 \\ 0 & 0 & -2 & 2 & -2 \end{array}\right] \xrightarrow{R_3 + 2R_2} \left[\begin{array}{cccc|c} 1 & -1 & -1 & 2 & 1 \\ 0 & 0 & 1 & -1 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$$
The associated system is
𝑤 − 𝑥 − 𝑦 + 2𝑧 = 1
𝑦−𝑧 =1
Note: In the above example, $w$ and $y$ are known as leading variables and $x$ and $z$ are free variables.
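For comparison, the general solution of Example B can be obtained symbolically; a minimal sketch assuming SymPy is available:

```python
import sympy as sp

w, x, y, z = sp.symbols('w x y z')

# Example B as an augmented matrix; linsolve expresses the leading
# variables in terms of the free variables x and z.
aug = sp.Matrix([[ 1, -1, -1, 2,  1],
                 [ 2, -2, -1, 3,  3],
                 [-1,  1, -1, 0, -3]])

# Solution set: w = x - z + 2, y = z + 1, with x and z free
print(sp.linsolve(aug, w, x, y, z))
```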
Rank of a Matrix:
The rank of a matrix is the number of nonzero rows in its row echelon form.
Let $A$ be the coefficient matrix of a system of linear equations with $n$ variables. If the system is consistent, then
$$\text{number of free variables} = n - \operatorname{rank}(A).$$
For example, in Example A we have 0 free variables, whereas in Example B we have 2 free variables.
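These counts can be checked computationally; a sketch assuming NumPy, using Example B's coefficient matrix:

```python
import numpy as np

# Example B: 4 variables, rank 2 -> 2 free variables
A = np.array([[ 1, -1, -1, 2],
              [ 2, -2, -1, 3],
              [-1,  1, -1, 0]], dtype=float)

rank = np.linalg.matrix_rank(A)
print(rank, A.shape[1] - rank)   # 2 2  -> number of free variables = n - rank(A)
```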
Example C: Consider the system of equations
𝑥 − 𝑦 + 2𝑧 = 3
𝑥 + 2𝑦 − 𝑧 = −3.
2𝑦 − 2𝑧 = 1
Row reduction of the augmented matrix gives
$$\left[\begin{array}{ccc|c} 1 & -1 & 2 & 3 \\ 1 & 2 & -1 & -3 \\ 0 & 2 & -2 & 1 \end{array}\right] \xrightarrow{R_2 - R_1} \left[\begin{array}{ccc|c} 1 & -1 & 2 & 3 \\ 0 & 3 & -3 & -6 \\ 0 & 2 & -2 & 1 \end{array}\right] \xrightarrow{\frac{1}{3}R_2} \left[\begin{array}{ccc|c} 1 & -1 & 2 & 3 \\ 0 & 1 & -1 & -2 \\ 0 & 2 & -2 & 1 \end{array}\right] \xrightarrow{R_3 - 2R_2} \left[\begin{array}{ccc|c} 1 & -1 & 2 & 3 \\ 0 & 1 & -1 & -2 \\ 0 & 0 & 0 & 5 \end{array}\right].$$
The last row corresponds to the equation $0 = 5$, so the system is inconsistent and has no solution.
Reduced Row Echelon Form:
A matrix is in reduced row echelon form if it is in row echelon form, each leading entry is 1, and each leading 1 is the only nonzero entry in its column. For example, the matrix
$$\begin{bmatrix} 1 & 2 & 0 & 0 & -3 & 1 & 0 \\ 0 & 0 & 1 & 0 & 4 & -1 & 0 \\ 0 & 0 & 0 & 1 & 3 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$
is in reduced row echelon form.
Gauss-Jordan Elimination:
1. Write the augmented matrix of the system of linear equations.
2. Use elementary row operations to reduce the augmented matrix to reduced row echelon form.
3. If the resulting system is consistent, solve for the leading variables in terms of any remaining free variables, as illustrated in the sketch below.
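A small computational sketch of the procedure, assuming SymPy, applied to Example B's augmented matrix:

```python
import sympy as sp

# Reduced row echelon form of Example B's augmented matrix
aug = sp.Matrix([[ 1, -1, -1, 2,  1],
                 [ 2, -2, -1, 3,  3],
                 [-1,  1, -1, 0, -3]])

rref_matrix, pivot_columns = aug.rref()
print(rref_matrix)     # leading variables w and y expressed in terms of x and z
print(pivot_columns)   # (0, 2) -> pivots in the w and y columns
```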
Homogeneous System of Linear Equations:
A system of linear equations is called homogeneous if the constant term in each equation is zero.
For example,
$$\begin{aligned} 2x + 3y - z &= 0 \\ -x + 5y + 2z &= 0 \end{aligned}$$
is a homogeneous system of equations.
Theorem:
If $[A \mid \mathbf{0}]$ is a homogeneous system of $m$ linear equations with $n$ variables, where $m < n$, then the system has infinitely many solutions.
Some Examples
Example: Find the line of intersection of the planes 𝑥 + 2𝑦 − 𝑧 = 3 and 2𝑥 + 3𝑦 + 𝑧 = 1.
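One way to carry out this example is to treat the two plane equations as a linear system with one free variable; a sketch assuming SymPy (an illustration, not the module's worked solution):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# The two planes as a linear system; z is left as the free parameter.
solution = sp.linsolve([x + 2*y - z - 3, 2*x + 3*y + z - 1], x, y, z)

# x = -7 - 5z, y = 5 + 3z, i.e. the line (x, y, z) = (-7, 5, 0) + t(-5, 3, 1)
print(solution)
```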
Spanning Sets and Linear Independence
Theorem:
A system of linear equations with augmented matrix [𝐴 | 𝒃] is consistent if and only if 𝒃 is a linear combination
of the columns of 𝐴.
If 𝑆 = {𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒌 } is a set of vectors in ℝ𝑛 , then the set of all linear combinations of 𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒌 is called
the span of 𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒌 and is denoted by span(𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒌 ) or span(S).
Example: Show that $\mathbb{R}^2 = \operatorname{span}\left(\begin{bmatrix} 2 \\ -1 \end{bmatrix}, \begin{bmatrix} 1 \\ 3 \end{bmatrix}\right)$.
Solution: We need to show that an arbitrary vector $\begin{bmatrix} a \\ b \end{bmatrix}$ can be written as a linear combination of $\begin{bmatrix} 2 \\ -1 \end{bmatrix}$ and $\begin{bmatrix} 1 \\ 3 \end{bmatrix}$, i.e., we must show that the equation
$$x\begin{bmatrix} 2 \\ -1 \end{bmatrix} + y\begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} a \\ b \end{bmatrix}$$
can always be solved for $x$ and $y$ (in terms of $a$ and $b$), regardless of the values of $a$ and $b$. Solving gives $x = \dfrac{3a-b}{7}$ and $y = \dfrac{a+2b}{7}$, so
$$\frac{3a-b}{7}\begin{bmatrix} 2 \\ -1 \end{bmatrix} + \frac{a+2b}{7}\begin{bmatrix} 1 \\ 3 \end{bmatrix} = \begin{bmatrix} a \\ b \end{bmatrix}.$$
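The coefficients found above can be checked symbolically; a small sketch assuming SymPy:

```python
import sympy as sp

a, b = sp.symbols('a b')

# Verify that x = (3a - b)/7 and y = (a + 2b)/7 reproduce [a, b]
x = (3*a - b) / 7
y = (a + 2*b) / 7

combo = x * sp.Matrix([2, -1]) + y * sp.Matrix([1, 3])
print(combo.applyfunc(sp.simplify))   # Matrix([[a], [b]])
```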
Example: Let $e_1$, $e_2$ and $e_3$ be the standard unit vectors in $\mathbb{R}^3$. Then for any vector $\begin{bmatrix} x \\ y \\ z \end{bmatrix}$, we have
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = x\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + y\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} + z\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = x e_1 + y e_2 + z e_3.$$
Thus, $\mathbb{R}^3 = \operatorname{span}(e_1, e_2, e_3)$.
In general, ℝ𝑛 = span(𝑒1 , 𝑒2 , … , 𝑒𝑛 ).
Linearly Dependent and Independent Sets:
A set of vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k$ is linearly dependent if there are scalars $c_1, c_2, \ldots, c_k$, at least one of which is not zero, such that
$$c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_k \mathbf{v}_k = \mathbf{0}.$$
A set of vectors that is not linearly dependent is called linearly independent.
Theorem:
Vectors 𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒎 in ℝ𝑛 are linearly dependent if and only if at least one of the vectors can be expressed as a
linear combination of the others.
Proof:
If one of the vectors, say $\mathbf{v}_1$, is a linear combination of the others, then there are scalars $c_2, \ldots, c_m$ such that
$$\mathbf{v}_1 = c_2 \mathbf{v}_2 + \cdots + c_m \mathbf{v}_m.$$
Rearranging, we obtain
𝒗𝟏 − 𝑐2 𝒗𝟐 − ⋯ − 𝑐𝑚 𝒗𝒎 = 𝟎
which implies that 𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒎 are linearly dependent, since at least one of the scalars (namely, the coefficient
1 of 𝒗𝟏 ) is nonzero.
Conversely, suppose that $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_m$ are linearly dependent. Then there are scalars $c_1, c_2, \ldots, c_m$, not all zero, such that
$$c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_m \mathbf{v}_m = \mathbf{0}.$$
Suppose $c_1 \neq 0$. Then
$$c_1 \mathbf{v}_1 = -c_2 \mathbf{v}_2 - \cdots - c_m \mathbf{v}_m \implies \mathbf{v}_1 = -\frac{c_2}{c_1}\mathbf{v}_2 - \cdots - \frac{c_m}{c_1}\mathbf{v}_m,$$
so $\mathbf{v}_1$ is a linear combination of the others.
Example:
Any set of vectors containing the zero vector is linearly dependent. For if $\mathbf{0}, \mathbf{v}_2, \ldots, \mathbf{v}_m$ are in $\mathbb{R}^n$, then we can find a nontrivial combination of the form
$$c_1 \mathbf{0} + c_2 \mathbf{v}_2 + \cdots + c_m \mathbf{v}_m = \mathbf{0},$$
namely $1\,\mathbf{0} + 0\,\mathbf{v}_2 + \cdots + 0\,\mathbf{v}_m = \mathbf{0}$.
Theorem:
Let 𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒎 be (column) vectors in ℝ𝑛 and let 𝐴 be the 𝑛 × 𝑚 matrix [𝒗𝟏 𝒗𝟐 … 𝒗𝒎 ] with these vectors as
its columns. Then 𝒗𝟏 , 𝒗𝟐 , … , 𝒗𝒎 are linearly dependent if and only if the homogeneous linear system with
augmented matrix [𝐴 | 𝟎] has a nontrivial solution.
Example: The standard unit vectors 𝑒1 , 𝑒2 , and 𝑒3 are linearly independent in ℝ3 , since the system with
augmented matrix [𝑒1 𝑒2 𝑒3 | 𝟎] is already in the reduced row echelon form
$$\left[\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{array}\right]$$
and so clearly has only the trivial solution. In general, we see that 𝑒1 , 𝑒2 , … , 𝑒𝑛 will be linearly independent in
ℝ𝑛 .
Theorem:
Let $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_m$ be (row) vectors in $\mathbb{R}^n$ and let $A$ be the $m \times n$ matrix with these vectors as its rows. Then $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_m$ are linearly dependent if and only if $\operatorname{rank}(A) < m$.
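A quick illustration of this rank criterion; a sketch assuming NumPy, with vectors chosen for illustration:

```python
import numpy as np

# Three vectors in R^3 placed as the rows of A (m = 3, n = 3)
v1 = [1, 2, 3]
v2 = [4, 5, 6]
v3 = [5, 7, 9]   # v3 = v1 + v2, so the set is linearly dependent

A = np.array([v1, v2, v3], dtype=float)
print(np.linalg.matrix_rank(A) < 3)   # True -> rank(A) < m, hence dependent
```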
Theorem:
Any set of m vectors in ℝ𝑛 is linearly dependent if 𝑚 > 𝑛.