
10/27/2020 University of Manchester -

Assignment Worksheet
Online Homework System
10/27/20 - 10:17:01 AM GMT
Name: ____________________________ Class:
Class #: ____________________________ Section #: ____________________________
Instructor: Ralf Becker Assignment: Vectors

Question 1: (1 point)

Motivation
We learned in the introduction to multivariate optimisation that, in order to find the maximum of a multivariate function f(x) we need to
1. find a stationary point which requires ∇f(x) = 0, and
2. we need the "sign" of H(x) at that point to identify whether a function was convex or concave at the stationary point

If the argument vector x was a (k × 1) vector, then ∇f(x) was also a (k × 1) vector and H(x) was a (k × k) matrix.
The issue we stumbled upon was that we couldn't easily say what "sign" the matrix H(x) had. In the univariate world the sign of the
second order derivative (which is contained in the Hessian matrix) was the key to understanding whether the function at a particular
point was concave or convex.
In order to be able to solve this issue we will have to learn some matrix algebra. Once we know some matrix algebra we can also
revisit the issue of solving linear equation systems. That issue will become a bit more straightforward (especially in high dimensions).
However, these are not the only applications of matrix algebra. Have a look at this video to see a few more real-life applications of
matrix algebra.

The Applications of Matrices | What I wish my teachers tol…


This video goes over just a few applications of matrices that may give you some insight into how they can be used in the real world: linear algebra, electronics, infection transmission, face recognition/image processing, computer animation/graphics, network analysis, machine learning, etc. There are many more applications than anyone could possibly list.

Learning Outcomes
As we will see later, vectors are special cases of matrices. But they are special in another way: we can actually visualise them easily, at least for a very small number of dimensions (2 or 3).
Here we will
motivate and introduce the idea of a vector in 2 and higher dimensions
provide a geometric interpretation
review the rules for manipulating vectors

https://online.manchester.ac.uk/webapps/blackboard/content/contentWrapper.jsp?content_id=_11843989_1&displayName=Mobius+Home&cour… 1/21

Vectors as special cases of matrices


It is the ability to not only have a geometric interpretation but also the ability to visualise vectors which makes them quite special.
Often we wish to collect together and manipulate a number of variables in a compact way

prices of k goods: p 1, p 2, . . . , p k
price of ONE good over n consecutive time periods: p 1, p 2, . . . , p n
price of k goods over n consecutive time periods
Here the n periods are represented by the n rows and the k products are represented by the k columns.

This is a matrix with n rows and k columns. If we are only looking at the prices for one product over n periods, or for all k products but
only in one period, then we are looking at either one column or one row respectively. These are vectors.

(2 x 1) vectors

These are the simplest form of vectors.


We define a (2 × 1) vector

x = (x 1, x 2)

where x 1 ∈ R and x 2 ∈ R.

You could understand such a vector as a collection of coordinates in the (x 1, x 2) space.

Thinking about this geometric interpretation of a vector you could immediately be interested in the LENGTH of x, which is called the
EUCLIDEAN NORM.
Definition:
The EUCLIDEAN NORM, or length, of a (2 × 1) vector is defined as:

ǁxǁ = √(x 1² + x 2²) = OX

which is of course just the famous result by Pythagoras.


A vector with a very special length is the NULL vector 0 = (0, 0). x ≠ 0 ⟺ x 1 ≠ 0 and/or x 2 ≠ 0. So in words, if any of the coordinates
is not 0, then we cannot be looking at the null vector.
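A quick numerical check of the norm definition (a minimal Python sketch; the helper name `norm` is our own, not part of the course material):

```python
import math

def norm(v):
    """Euclidean norm (length) of a vector given as a list of coordinates."""
    return math.sqrt(sum(c * c for c in v))

print(norm([3, 4]))   # the classic 3-4-5 triangle: length 5.0
print(norm([0, 0]))   # the null vector has length 0.0
```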


What is the length of x = (3, 4)?


ǁxǁ = ____________

What is the length of m = (a, b) in terms of a and b?


Input language, e.g. √q is "sqrt(q)" or "q^(1/2)" or "q^(0.5)"; q 3 is "q^3" or "q*q*q".

ǁmǁ = __________

Question 2: (1 point)

Vector Operations
We will have to learn to do some algebra with vectors. The first simple operation we shall learn is to transpose.

Transpose
Let x = (x 1, x 2) be a (2 × 1) column vector; then the transpose of x is defined as the (1 × 2) row vector

x T = (x 1  x 2)
Transposing a vector turns a column vector into a row vector and a row vector into a column vector. There is no sensible geometric interpretation of the transpose operation.
There is an important note on notation. Often you will see x ′ instead of x T. The ′ notation is in some sense more prevalent, and is indeed used in our textbook. However, in this course we also deal with derivatives, for which we often use the ′ notation, and hence we use T as the transpose operator. Please be aware of this alternative notation; it is usually clear from the context whether a ′ represents a derivative or a transpose.

What is the transpose of

y = (−2  4)?

(a) y T = (4, −2) T

(b) y T = (−4, 2) T

(c) y T = (−2, 4) T

(d) y T = (2, −4) T

Question 3: (1 point)

Scaling
The next operation is called scaling.

A vector x = (x 1, x 2) T is scaled by a factor α by the following operation:

αx = (αx 1, αx 2) T
The factor α ∈ R is called a scalar.

Let y = (3, 4) and α = 2, β = 0.5 and γ = − 0.5. What are p = αy, q = βy and r = γy?

p = αy = 2 ( 3, 4 ) = ( 6, 8)

q = βy = 0.5 ( 3, 4 ) = ( 1.5, 2)

r = γy = − 0.5 ( 3, 4 ) = ( − 1.5, −2 )
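These scalings can be checked with a short sketch (Python; the helper name `scale` is our own):

```python
def scale(alpha, v):
    """Multiply every coordinate of v by the scalar alpha."""
    return [alpha * c for c in v]

y = [3, 4]
print(scale(2, y))     # p = [6, 8]
print(scale(0.5, y))   # q = [1.5, 2.0]
print(scale(-0.5, y))  # r = [-1.5, -2.0]
```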


These operations have a graphical equivalent as illustrated in the next image.

It is straightforward to calculate the length of a scaled vector.

αx = (αx 1, αx 2) T ⟹ ǁαxǁ = |α|ǁxǁ

Let y = (3, 4) and α = 2, β = − 0.5 as well as p = αy, q = βy. What are ǁyǁ, ǁpǁ and ǁqǁ?

ǁyǁ = √(3² + 4²) = √25 = 5

ǁpǁ = 2 √(3² + 4²) = 2√25 = 10

ǁqǁ = |− 0.5| √(3² + 4²) = 0.5√25 = 2.5

What is the length of x = ( − 2, 5) when you scale it by a factor 3 (to 4dp)?


____________

Question 4: (1 point)

Addition
You can add two vectors which have the same dimension, for example two (2 × 1) vectors x = (x 1, x 2) T and y = (y 1, y 2) T.
Note that when we state a vector such as (x 1, x 2), that is a (1 × 2) vector, a row vector. Applying the transpose T turns this into a (2 × 1) column vector.
So, addition of vectors, or indeed matrices as we shall see, works with vectors of the same dimension, and the actual operation is very straightforward: we merely add the corresponding elements.


x ± y = (x 1 ± y 1, x 2 ± y 2) T

x = (3, 4) T, y = (4, 2) T

x + y = ((3 + 4), (4 + 2)) T = (7, 6) T

Vector addition has a nice geometric interpretation as shown in the next Figure. We take one vector and basically attach the other to
the end of the vector to get to the point (here A), representing the coordinates for the resulting vector addition.

From the Figure it also becomes apparent that

x ± y = (x 1 ± y 1, x 2 ± y 2) T

implies

ǁx + yǁ = OA.

With the above vectors x and y

ǁx + yǁ = √(7² + 6²) = √(49 + 36) = √85 = 9.2195

Note that this is not the same as ǁxǁ + ǁyǁ
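A numerical check of the addition, and of the point that ǁx + yǁ is not the same as ǁxǁ + ǁyǁ (Python sketch; the helper names `add` and `norm` are our own):

```python
import math

def add(u, v):
    """Element-wise sum of two equal-length vectors."""
    return [a + b for a, b in zip(u, v)]

def norm(v):
    """Euclidean norm of a vector."""
    return math.sqrt(sum(c * c for c in v))

x, y = [3, 4], [4, 2]
s = add(x, y)
print(s)                    # [7, 6]
print(round(norm(s), 4))    # 9.2195
print(norm(x) + norm(y))    # 5 + 4.4721... — clearly not the same
```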

Let m = (− 2, 5) T, n = (3, − 5) T. What is (m + n) and what is its length?


(m + n) = ( ____________ ,____________ ) T

ǁm + nǁ = ____________


Question 5: (1 point)

Subtraction
Vector subtraction is defined in exactly the same way as vector addition.

Let x = (x 1, x 2) T and y = (y 1, y 2) T, then

x ± y = (x 1 ± y 1, x 2 ± y 2) T

x = (3, 4) T, y = (4, 2) T

x − y = (− 1, 2) T

With this result we can then also determine the length

ǁx − yǁ = √5 = 2.2361
Vector subtraction has a geometric interpretation, just as addition, and shown in the next Figure. We take the first vector (here x) and
basically attach the other to the end of the vector (just flipped, as we subtract) to get to the point (here A), representing the
coordinates for the resulting vector subtraction.

Claim: In general, x − y = y − x.
(a) True
(b) False


Question 6: (1 point)

Linear combinations
Let x = (x 1, x 2) T and y = (y 1, y 2) T. A linear combination of the two vectors (of identical dimension), z = αx + βy, is defined as

z = αx + βy = (αx 1 + βy 1, αx 2 + βy 2) T

Geometrically we are again at either an addition or subtraction, but this time just of scaled vectors.

Recall that we often write column vectors as row vectors with a transpose (as after the last equation sign). The only reason for that is usually to save some space on the page.

Let:

x = (3, 4) T, y = (4, 2) T

α = 2, β = 0.5

What is αx + βy?

αx + βy = (2 ⋅ 3 + 0.5 ⋅ 4, 2 ⋅ 4 + 0.5 ⋅ 2) T

= (8, 9) T
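The worked linear combination above can be sketched as follows (Python; the helper name `lincomb` is our own):

```python
def lincomb(alpha, u, beta, v):
    """Return alpha*u + beta*v for two equal-length vectors u and v."""
    return [alpha * a + beta * b for a, b in zip(u, v)]

x, y = [3, 4], [4, 2]
print(lincomb(2, x, 0.5, y))   # [8.0, 9.0]
```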

Define the following vectors:

x = (4, 3) T, y = (2, 1) T, z = (−3, 2) T

With these elements, calculate the following:

3x = (____________ , ____________ ) T

2y = (____________ , ____________ ) T

− 4z = ( ____________ , ____________ ) T


ǁxǁ = ____________
ǁ3xǁ = ____________
ǁ2yǁ = ____________ ǁyǁ
ǁ− 4zǁ = ____________ ǁzǁ

x + y = (____________ , ____________ ) T

y − z = ( ____________ , ____________ ) T

3x + 2y − 4z = ( ____________ , ____________ ) T

Question 7: (1 point)

Inner Product, Angle and Orthogonality


A (2 × 1) vector, x, can be defined in the following two ways:
x 1 and x 2 (its co-ordinates)
ǁxǁ and ϕ : length and angle (to horizontal)

Let x, and y, be (2 × 1) vectors. Below we define the INNER PRODUCT between two vectors (of identical dimension), x and y. The
inner product is a scalar, a number, and it contains some information about how the two vectors relate to each other. In particular we
will be able to use the inner product to determine whether two vectors are orthogonal to each other.

The inner product between x and y is defined as

x Ty = (x 1  x 2) (y 1, y 2) T = x 1y 1 + x 2y 2 = y Tx


The result is a scalar. For example:

(2  3) (−1, 4) T = − 2 + 12 = 10.
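The inner product, and the orthogonality test it enables, can be sketched as follows (Python; the helper name `dot` is our own):

```python
def dot(u, v):
    """Inner product of two equal-length vectors: a scalar."""
    return sum(a * b for a, b in zip(u, v))

print(dot([2, 3], [-1, 4]))   # -2 + 12 = 10
print(dot([1, 0], [0, 1]))    # 0: these two vectors are orthogonal
```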

Just to re-iterate this, for you to calculate an inner product the two vectors have to have identical dimensions!
An inner product of 0 has a special meaning. Two vectors whose inner product is 0, are called orthogonal. Orthogonal vectors have
an angle of 90 degrees between them. The image below depicts three vectors:

p = (3, 4) T, y = (6, 3) T, z = (−1, 2) T

What is the inner product of p and y?

p Ty = ____________

Which of these vectors are orthogonal to each other? (multiple correct answers are possible)

(a) p and y

(b) p and z

(c) y and z

(d) p and y, as well as p and z

(e) p and y, as well as p and z


Question 8: (1 point)

Resolution of a Vector

Let e 1 = (1, 0) T and e 2 = (0, 1) T


These are called unit length vectors as their respective length is 1: ǁe jǁ = 1, j = 1, 2.
These unit vectors are interesting as any (2 × 1) vector, x, can be expressed as a linear combination of e 1 and e 2:

x = (x 1, x 2) T = (x 1, 0) T + (0, x 2) T = x 1 (1, 0) T + x 2 (0, 1) T = x 1e 1 + x 2e 2
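This decomposition is easy to check numerically (Python sketch; the helper names `scale` and `add` are our own, and the vector [7, -3] is just an illustrative choice):

```python
def scale(alpha, v):
    """Multiply every coordinate of v by the scalar alpha."""
    return [alpha * c for c in v]

def add(u, v):
    """Element-wise sum of two equal-length vectors."""
    return [a + b for a, b in zip(u, v)]

e1, e2 = [1, 0], [0, 1]
x = [7, -3]
# x1*e1 + x2*e2 reassembles x from its coordinates
print(add(scale(x[0], e1), scale(x[1], e2)))   # [7, -3]
```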

Claim: The two unit vectors e 1 and e 2 are orthogonal.

(a) True
(b) False


Question 9: (1 point)

Vectors in n Dimensions
We can think of vectors in any higher dimension n > 2. For n = 3 we can still give vectors geometric interpretations, thinking about a three dimensional Cartesian system. You can see this visualised in the following figure.
Thinking in three dimensions will eventually also lead us to thinking about planes (one of which you can see visualised on the right
hand side image).

Of course there are similar interpretations in higher dimensional systems; it is just that we cannot really visualise them anymore. But you can think of an n dimensional vector as containing, for instance, n pieces of information about some subject, say a firm.

c = (Company name, Number of employees, Turnover in 2017, Turnover in 2018, Profit in 2017, Profit in 2018) T

This way of thinking of data and vectors (and later matrices), i.e. as vectors and matrices just being containers in which to store
information, is very important when you get to deal with large datasets, such as in econometrics or as a data analyst.

Resolution of a Vector: 3 dimensions

As we did for (2 × 1) vectors, higher dimensional vectors can be decomposed into linear combinations of the orthogonal unit vectors.
Let e 1 = (1, 0, 0) T, e 2 = (0, 1, 0) T, e 3 = (0, 0, 1) T. They have unit length: ǁe jǁ = 1, j = 1, 2, 3.

What is the inner product of e 1 and e 3?

e 1 Te 3 = ____________


Question 10: (0 points)

Any (3 × 1) vector, x = (x 1, x 2, x 3) T, can be expressed as a linear combination of e 1, e 2 and e 3:

x = (x 1, x 2, x 3) T = x 1 (1, 0, 0) T + x 2 (0, 1, 0) T + x 3 (0, 0, 1) T = x 1e 1 + x 2e 2 + x 3e 3

Question 11: (0 points)

Special vectors
For later work it is important to note that sometimes we will be dealing with special vectors:
Vector of zeros or null vector: 0 = (0, 0, . . . , 0) T
Vector of ones: i = (1, 1, . . . , 1) T

Vector Operations
Let's recap some of the operations we discussed for (2 × 1) vectors and how they generalise to higher dimensional vectors.

Let's first define a higher dimensional (n × 1) vector x = {x i} with typical element x i, i = 1, . . . , n:

x = (x 1, x 2, . . . , x n) T

As you can see here, we tend to think of vectors as column vectors by default (unless otherwise mentioned), but we often display them as row vectors with a transpose. The only reason is that this takes less space on the page; sometimes the reasons are very mundane.


Question 12: (0 points)

Length of a vector
We also called this the (Euclidean) norm of a vector. It was an application of Pythagoras' famous theorem. Did you think, when you first learned it, that you would still be referring to it at uni?
Here is the n dimensional version of the norm:

ǁxǁ = √(x 1² + x 2² + . . . + x n²)

Scalar multiplication
Scalar multiplication works exactly as in the 2 dimensional case. Every element is individually multiplied with the scaling factor α ∈ R.

αx = {αx i} = (αx 1, αx 2, . . . , αx n) T

⟹ ǁαxǁ = |α|ǁxǁ

Addition
If in addition to the (n × 1) vector x you have y = {y i}, i = 1, . . . , n, then you can add these:

x ± y = {x i ± y i} = (x 1 ± y 1, x 2 ± y 2, . . . , x n ± y n) T
Inner product
Inner products for (n × 1) vectors work in the same way as for (2 × 1) vectors:

x Ty = y Tx = (x 1  x 2  ⋯  x n) (y 1, y 2, . . . , y n) T

= x 1y 1 + x 2y 2 + . . . + x ny n = ∑ x iy i (summing over i = 1, . . . , n)
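All of these n dimensional operations are one-liners over lists of any length (Python sketch; the helper names and the 4-dimensional example vectors are our own):

```python
import math

def dot(u, v):
    """Inner product: sum of element-wise products."""
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    """Euclidean norm in n dimensions: sqrt of the inner product of v with itself."""
    return math.sqrt(dot(v, v))

x = [1, 2, 3, 4]
y = [4, 3, 2, 1]
print(dot(x, y))   # 4 + 6 + 6 + 4 = 20
print(norm(x))     # sqrt(30)
```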


Question 13: (0 points)

Orthogonality
x and y are ORTHOGONAL IFF x Ty = 0.
Orthogonality has exactly the same meaning for (n × 1) as it has for (2 × 1) vectors. Two (n × 1) vectors which have an inner product of
0 are called orthogonal. It is just difficult to visualise this graphically.

Linear combination
Linear combinations of equally sized (n × 1) vectors work in the same way as for (2 × 1) vectors. Let α ∈ R,β ∈ R and (n × 1) vectors
x and y. Then

αx + βy = (αx 1 + βy 1, αx 2 + βy 2, . . . , αx n + βy n) T = {αx i + βy i}


Recall that, when we specify vectors, terms in curly brackets represent typical elements, here {αx i + βy i}.

Question 14: (1 point)

Exercise
We are going to practice some of the operations. We will practice these for (3 × 1) vectors, but you should see how this easily
generalises to n > 3. We shall use the following vectors

x = (4, 3, 2) T, y = (2, 1, −2) T, z = (−3, 2, 0) T

Let's start by applying a scalar multiplication

3x = (12, 9, 6) T

What are 2y and − 4z?


2y = ( ____________ , ____________ , ____________ ) T

− 4z = ( ____________ , ____________ , ____________ ) T

Let's continue with a vector addition and look at x + y.

x + y = (4 + 2, 3 + 1, 2 + (−2)) T = (6, 4, 0) T

What are (y − z) and (3x + 2y − 4z)?

(y − z) = ( ____________ , ____________ , ____________ ) T

The dimension of (3x + 2y − 4z) is: ( ____________ × ____________ )

(3x + 2y − 4z) = ( ____________ , ____________ , ____________ ) T

Now we will look at inner product calculations. For example we want x Ty.

x Ty = (4  3  2) (2, 1, −2) T = (4 ⋅ 2) + (3 ⋅ 1) + (2 ⋅ (−2)) = 8 + 3 − 4 = 7

What are x Tz and y Tz?

x Tz = ____________

y Tz = ____________


Question 15: (1 point)

Independence of Vectors
We begin our consideration of vector independence with two vectors of dimension (2 × 1). Towards the end of this section we will
consider the independence of k (n × 1) vectors.
Consider the following two vectors:

x = (x 1, x 2) T, y = (y 1, y 2) T

Two vectors x and y are said to be linearly dependent IFF one can be written as a scalar multiple of the other. This definition applies
to any (n × 1) vectors.
Here is an example where x and y are linearly dependent:

x = (3, 2) T, y = (− 6, − 4) T

The two vectors are linearly dependent as x = δy = − 0.5y (or, equivalently, y = − 2x). Perhaps a more general (but essentially identical) definition of linearly dependent vectors is

αx + βy = 0, α ≠ 0 and/or β ≠ 0

In fact this is not different to the previous definition, as (for α ≠ 0) it can be reformulated to

x = − (β / α) y = γy;  γ = − β / α

Vectors which are linearly dependent are, in a diagram, on the same "ray", passing through the "origin".
If one of the vectors x and y is the null vector then the two vectors are linearly dependent.
Let's approach the issue from the other direction and think about when two vectors can be said to be independent.
Two (2 × 1) vectors x ≠ 0 and y ≠ 0 are said to be linearly independent IFF the only solution to αx + βy = 0 is α = 0 and β = 0.
As said earlier, if any of the two vectors is a null vector then the two vectors cannot be linearly independent.
When two vectors which are linearly independent are depicted in a diagram they will not be on the same line; in other words, there is a non-zero angle between the two vectors. If there is a 90 degree angle between two vectors we call them orthogonal (as previously discussed). Therefore orthogonal vectors are also linearly independent (as a 90 degree angle is a non-zero angle).
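For two (2 × 1) vectors there is a simple numerical test: x and y are linearly dependent exactly when x 1y 2 − x 2y 1 = 0 (a quantity that will reappear when we study matrices). A sketch (Python; the function name is our own, and the exact comparison is fine for integer coordinates — with floats you would compare against a small tolerance):

```python
def dependent_2d(x, y):
    """Two (2 x 1) vectors are linearly dependent iff x1*y2 - x2*y1 == 0."""
    return x[0] * y[1] - x[1] * y[0] == 0

print(dependent_2d([3, 2], [-6, -4]))  # True: y = -2x, same ray through the origin
print(dependent_2d([3, 4], [4, 2]))    # False: not on the same ray
```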

Which of the following statements are correct? (multiple correct answers are possible)

(a) Linear independence is a sufficient condition for two vectors to be orthogonal.

(b) Linear independence is a necessary condition for two vectors to be orthogonal.

(c) Orthogonality is a sufficient condition for two vectors to be linearly dependent.

(d) Orthogonality is a sufficient condition for two vectors to be linearly independent.

(e) Orthogonality is a necessary condition for two vectors to be linearly independent.


Question 16: (1 point)

Orthogonality is a property between two vectors (of any dimension). Dependence or independence is a (potential) property of k (n × 1)
vectors. As it turns out, it is impossible for k vectors of dimension (n × 1) to be independent if k > n. We shall illustrate this for the
special case (k = 3 > n = 2) as for this case we can capture the argument with a graphical illustration.
Consider the following three vectors:

x = (2, 0) T;  y = (3, 2) T;  z = (1, 1) T

x and y are linearly independent as αx + βy = 0 can only be true for α = β = 0.

x and z are linearly independent.


(a) True
(b) False

y and z are linearly independent.


(a) True
(b) False

Question 17: (1 point)

So all three possible combinations of two vectors are linearly independent.


However, when considering vector independence we can also consider the independence of the trio of vectors, i.e. all vectors
together.
The definition of independence translates in a straightforward manner. Three vectors (of any dimension) are said to be independent if and only if (IFF)

αx + βy + γz = 0

is only true for α = β = γ = 0.


This translates to the following two equations:

α⋅2+β⋅3+γ⋅1= 0
α⋅0+β⋅2+γ⋅1= 0

Here you see that both these equations are actually true for α = 0.5, β = − 1 and γ = 2. Note that α = − 1, β = 2 and γ = − 4 would also
make the statement true as would any other multiples of these parameters. So, αx + βy + γz = 0 is true for parameter values other than
0 and hence the three vectors, collectively, are dependent despite each combination of two of these vectors being independent.
In order to fully appreciate the graphical illustration of what it means for some vectors to be dependent, we first solve the above equation for y.

αx + βy + γz = 0
0.5x − 1y + 2z = 0
y = 0.5x + 2z
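We can confirm y = 0.5x + 2z numerically (Python sketch; the helper name `lincomb` is our own):

```python
def lincomb(alpha, u, beta, v):
    """Return alpha*u + beta*v for two equal-length vectors u and v."""
    return [alpha * a + beta * b for a, b in zip(u, v)]

x, z, y = [2, 0], [1, 1], [3, 2]
print(lincomb(0.5, x, 2, z))   # [3.0, 2.0] — exactly y
```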


This makes it obvious that y is a linear combination of x and z; in other words, y is linearly dependent on x and z.

In fact, if you have two linearly independent (2 × 1) vectors x and z, you can find a linear combination of these two vectors which is equal to any (2 × 1) vector y. In other words, any set of three or more (2 × 1) vectors must be linearly dependent.
What is true in 2 dimensions is also true in 3 and n dimensions. If you have n linearly independent (n × 1) vectors, then you can find a linear combination of these n vectors which is equal to any other (n × 1) vector. This also means that any n + 1 vectors of dimension (n × 1) cannot be linearly independent.
So, if, in a set of vectors of equal dimension, one vector can be written as a linear combination of the remaining vectors, then that set
of vectors is dependent. If none of the vectors in that set can be written as a linear combination of the others, then that set of vectors
is independent.
If any of the vectors is the null vector, then the set is automatically linearly dependent.

Which of the following statements are correct? (multiple correct answers are possible)

(a) Any set of 5 vectors of dimension (4 × 1) is linearly dependent.

(b) Any set of 3 vectors of dimension (4 × 1) is linearly independent.

(c) Any set of m vectors of dimension (n × 1), where m > n is necessarily linearly independent.

(d) A set of 6 vectors of dimension (6 × 1) could be linearly dependent or independent.


Question 18: (0 points)

Example
Consider the following three (3 × 1) vectors:

x = (4, 6, 1) T;  y = (−1, −4, 2) T;  z = (5, 0, 8) T

Are x, y and z a linearly independent set of vectors?


YES IFF the ONLY solution to αx + βy + γz = 0 is α = 0, β = 0, γ = 0
IMPOSSIBLE to express any of x, y or or z as a linear combination of the remaining two
NO IFF at least one of α, β or γ is non-zero
it is POSSIBLE to express at least one of x, y or z as a linear combination of the remaining two

Let's consider αx + βy + γz = 0.

α ⋅ (4, 6, 1) T + β ⋅ (−1, −4, 2) T + γ ⋅ (5, 0, 8) T = (0, 0, 0) T

This translates into the following three equations:

4α − β + 5γ = 0
6α − 4β + 0 ⋅ γ = 0
α + 2β + 8γ = 0

A solution is α = 2, β = 3, γ = − 1.

As there is a solution to αx + βy + γz = 0 which is not equal to the trivial solution (all zeros), these three vectors are not linearly
independent, they are linearly dependent. One of these can be expressed as a linear combination of the other two.

αx + βy + γz = 0
2x + 3y − 1z = 0
z = 2x + 3y
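Again we can confirm z = 2x + 3y numerically (Python sketch; the helper name `lincomb` is our own):

```python
def lincomb(alpha, u, beta, v):
    """Return alpha*u + beta*v for two equal-length vectors u and v."""
    return [alpha * a + beta * b for a, b in zip(u, v)]

x, y, z = [4, 6, 1], [-1, -4, 2], [5, 0, 8]
print(lincomb(2, x, 3, y))   # [5, 0, 8] — exactly z
```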

Question 19: (0 points)

Summary
We looked at vectors which were basically containers of numerical values in either a column or row of numbers. There is no reason
why we should restrict ourselves to only one row or column. In the next section we will be looking at collections of rows or columns.
These objects will be called matrices.
Many of the fundamental vector operations we covered here will have fairly obvious equivalents for matrices. As it turns out,
evaluating the dependence or independence of vectors is an important tool in characterising matrices.

