Vectors

Math 122 Calculus III


D Joyce, Fall 2012

Vectors in the plane R2 . A vector v can be interpreted as an arrow in the plane R2 with
a certain length and a certain direction. The same vector can be moved around in the plane
if you don’t change its length or direction. Thus, one vector can be interpreted as lots of
different arrows with a given length and direction.
When a vector is put in “standard position,” the head of the arrow is at a point (v1 , v2 ) and
the tail of the arrow is at the origin (0, 0). Using this standard position, we identify a vector
v with the point (v1 , v2 ). Thus, sometimes we say “the vector (v1 , v2 )” deliberately confusing
the point (v1 , v2 ) with the vector v. We’ll call R2 the vector space of two dimensions.
The zero vector, 0 = (0, 0), is special since it’s the only vector that has no direction. It
has 0 length, too. We represent it as a point, in fact, any point, since it can be moved around
in the plane.
When we think of the vector v as an arrow somewhere else in the plane, it is an arrow
parallel to the standard arrow and with the same length. Then it can be any arrow with head
(c, d) and tail (a, b) so long as c − a = v1 and d − b = v2 . We’ll say the vector v = (c − a, d − b)
is the displacement vector from the point (a, b) to the point (c, d).
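As a quick computational aside, the displacement vector from one point to another is found by subtracting the tail's coordinates from the head's, coordinate by coordinate. A minimal Python sketch (the function name `displacement` is ours, chosen for illustration):

```python
def displacement(tail, head):
    """Displacement vector from point `tail` to point `head`,
    computed coordinatewise: (c - a, d - b)."""
    return tuple(h - t for t, h in zip(tail, head))

# The arrow with tail (1, 2) and head (4, 6) represents the vector (3, 4).
print(displacement((1, 2), (4, 6)))
```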

Vectors in space R3 . A vector in space also has a certain length and direction. You
can move it around in space if you don’t change its length or direction. It can be placed
in standard position with its tail at the origin (0, 0, 0) and its head at a point with three
coordinates (v1 , v2 , v3 ). We’ll give the vector those coordinates so that v = (v1 , v2 , v3 ).
Most of the time we’ll look at vectors in R2 and R3 , but the basic concepts generalize to
Rn where n is any positive integer. That allows us to work with n-dimensional space easily.

Geometric interpretation of addition of vectors. When you interpret vectors as arrows, there are a couple of ways to interpret addition. One is by what is called the “parallelogram law.” In order to see the sum v + w of two vectors v and w, draw both v and w
in their standard positions with their tails at the origin. Then translate v so that its tail is
at the head of w to get another arrow which can also be labelled v. Likewise, translate w
so that its tail is at the head of the original arrow v to get another arrow labelled w. That
completes a parallelogram with two sides being parallel v arrows and the other two sides
parallel w arrows.

[Figure: the parallelogram with sides v and w; the diagonal arrow from the origin is v + w.]
The arrow which goes from the origin to the opposite corner of the parallelogram is the
vector v + w. Thus, you can see v + w either as the arrow v followed by the arrow w, or as
the arrow w followed by the arrow v.
We can use this parallelogram to determine coordinates for the sum v + w.

v + w = (v1 , v2 ) + (w1 , w2 ) = (v1 + w1 , v2 + w2 ).

In words, the coordinates of the sum of two vectors are the sums of the coordinates of the
vectors. That is, sums of vectors are computed coordinatewise.
Once we’ve got a geometric interpretation for addition, we automatically get geometric
interpretations for negation and subtraction. The negation −v of a vector v is an arrow that
points in the opposite direction of v. That implies that

−v = −(v1 , v2 ) = (−v1 , −v2 ).

Thus, negations are computed coordinatewise. Likewise, subtraction is computed coordinatewise:

v − w = (v1, v2) − (w1, w2) = (v1 − w1, v2 − w2).
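These coordinatewise rules translate directly into code. A short Python sketch (the helper names `add`, `neg`, and `sub` are ours):

```python
def add(v, w):
    """Vector sum, computed coordinatewise."""
    return tuple(a + b for a, b in zip(v, w))

def neg(v):
    """Negation: flip the sign of each coordinate."""
    return tuple(-a for a in v)

def sub(v, w):
    """Difference: subtract coordinatewise."""
    return tuple(a - b for a, b in zip(v, w))

print(add((1, 2), (3, 4)))   # (4, 6)
print(neg((1, 2)))           # (-1, -2)
print(sub((1, 2), (3, 4)))   # (-2, -2)
```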

Multiplication of vectors by scalars. Another operation on vectors we can perform is to multiply a vector by a real number. We’ll use the term scalar as a synonym for real number, since all our vectors are real vectors. (Sometimes other fields of scalars are used. If the coordinates of the vectors are complex numbers, then the scalars are complex rather than real.)
For example we can double a vector v to get the vector 2v, which is v+v. Coordinatewise,
it’s 2v = 2(v1 , v2 ) = (2v1 , 2v2 ). Likewise we can multiply a vector v by any scalar c to get
cv = c(v1 , v2 ) = (cv1 , cv2 ).
If the scalar c is greater than 1, then stretch v by a factor of c to get cv. If c is between 0
and 1, then squeeze v by that factor. If c is negative, then cv points in the opposite direction
of v stretched or squeezed by the appropriate factor. Of course, 1v = v, −1v = −v, and
0v = 0.
You can also divide a vector by a scalar, but we won’t treat it as a separate operation.
Instead, we’ll take it to be multiplication by the reciprocal of the scalar. So, for example,
instead of v/3, we’ll use (1/3)v.
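Scalar multiplication is also a one-line coordinatewise computation. A sketch in Python (the name `scale` is ours):

```python
def scale(c, v):
    """Multiply the vector v by the scalar c, coordinatewise."""
    return tuple(c * a for a in v)

print(scale(2, (3, 4)))     # (6, 8), the same as (3, 4) + (3, 4)
print(scale(1/3, (3, 6)))   # division by 3 as multiplication by 1/3
```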
We say one vector is in the same direction as another if each is a positive multiple of the
other. They’re in the opposite direction if each is a negative multiple of the other.

Properties of vector operations. The expected properties of the operations on vectors
hold. For example, addition is commutative, v + w = w + v, and it’s associative, (u + v) + w = u + (v + w), so when you have several vectors to add, you can add them in any order.
The zero vector acts like it should, v + 0 = v. Negation works right, −v + v = 0.
Subtraction works right, too. It’s the inverse of addition: u + v = w if and only if u = w − v.
Furthermore, multiplication by scalars has the expected properties. It’s associative,
(bc)v = b(cv), so we won’t have to write extra parentheses. It distributes over addition
on the left, (b + c)v = bv + cv, and on the right c(v + w) = cv + cw.
Because of all these properties, you can use much of what you know about ordinary algebra
for vector algebra, too.
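These laws can be spot-checked numerically. A quick Python sketch, with tuples standing in for vectors (the helper `add` is ours):

```python
def add(x, y):
    """Coordinatewise vector sum."""
    return tuple(a + b for a, b in zip(x, y))

u, v, w = (1, 2), (3, 4), (5, 6)

# Commutativity and associativity of addition:
print(add(v, w) == add(w, v))                   # True
print(add(add(u, v), w) == add(u, add(v, w)))   # True
```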
Note that we don’t have multiplication of two vectors. Soon we’ll look at two different kinds of multiplication: first ‘dot products’, and later ‘cross products’.

The norm ‖v‖ of a vector v. By the Pythagorean theorem of plane geometry, the distance ‖(x, y)‖ between the point (x, y) and the origin (0, 0) is

‖(x, y)‖ = √(x² + y²).

Thus, we define the norm of a vector v = (v1, v2), also called the length of the vector, as

‖v‖ = ‖(v1, v2)‖ = √(v1² + v2²).

Sometimes the length of a vector is denoted |v|.
In R3, the norm is defined by ‖v‖ = ‖(v1, v2, v3)‖ = √(v1² + v2² + v3²).
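The same formula covers any number of coordinates: square each coordinate, add, and take the square root. A Python sketch (the name `norm` is ours):

```python
import math

def norm(v):
    """Length of v: the square root of the sum of squared coordinates."""
    return math.sqrt(sum(a * a for a in v))

print(norm((3, 4)))      # 5.0, by the 3-4-5 right triangle
print(norm((1, 2, 2)))   # 3.0, the same formula in R3
```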
Norms have a few special properties. First of all, ‖0‖ = ‖(0, 0)‖ = 0. Also, ‖v‖ is always greater than or equal to 0, and it equals 0 only when v = 0.
The most important property of norms is the triangle inequality

‖v + w‖ ≤ ‖v‖ + ‖w‖,

which comes from the theorem that in any triangle, the length of one side is less than or equal to the sum of the lengths of the other two sides.
One more property of norms says that the norm of av, a scalar times a vector, is the product of the absolute value of the scalar and the norm of the vector:

‖av‖ = |a| ‖v‖.

Dot products v · w. The dot product of two vectors is defined as the sum of the products
of corresponding elements. In R2 , it’s

v · w = (v1 , v2 ) · (w1 , w2 ) = v1 w1 + v2 w2 .

Likewise, in R3 it’s v · w = (v1 , v2 , v3 ) · (w1 , w2 , w3 ) = v1 w1 + v2 w2 + v3 w3 .


Dot products are often called inner products instead, and there are various other notations for them, including (u, v) and (u|v) and ⟨u|v⟩.
Note that norms can be described in terms of dot products:

‖v‖² = v · v, so ‖v‖ = √(v · v).
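The dot product is a one-line sum of coordinatewise products, and the norm falls out of it. A Python sketch (the name `dot` is ours):

```python
import math

def dot(v, w):
    """Sum of the products of corresponding coordinates."""
    return sum(a * b for a, b in zip(v, w))

print(dot((1, 2, 3), (4, 5, 6)))   # 1*4 + 2*5 + 3*6 = 32

# The norm is the square root of the dot product of v with itself:
v = (3, 4)
print(math.sqrt(dot(v, v)))        # 5.0
```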

The dot product acts like multiplication in a lot of ways, but not in all ways. First of all,
the dot product of two vectors is a scalar, not another vector. That means you can’t even
ask if it’s associative because the expression (u · v) · w doesn’t even make sense; (u · v) is a
scalar, so you can’t take its dot product with the vector w.
But aside from associativity, dot products act a lot like ordinary products. For instance,
dot products are commutative:
u · v = v · u.
Also, dot products distribute over addition,
u · (v + w) = u · v + u · w,
and over subtraction,
u · (v − w) = u · v − u · w.
Also, the zero vector 0 acts like zero:
v · 0 = 0.
Furthermore, dot products and scalar products have a kind of associativity, namely, if c is a
scalar, then
(cu) · v = c(u · v) = u · (cv).

The dot product v · w between two vectors and the cosine of the angle between
them. The law of cosines for oblique triangles says that given a triangle with sides a, b,
and c, and angle θ between sides a and b,

[Figure: a triangle with sides a, b, and c, where θ is the angle between sides a and b.]

c² = a² + b² − 2ab cos θ.
Now, start with two vectors v and w, and place them in the plane with their tails at the
same point. Let θ be the angle between these two vectors. The vector that joins the head of
v to the head of w is w − v. Now we can use the law of cosines to see that

[Figure: vectors v and w drawn from a common point, with angle θ between them and the vector w − v joining the head of v to the head of w.]

‖w − v‖² = ‖v‖² + ‖w‖² − 2‖v‖ ‖w‖ cos θ.
We can convert the norms to dot products to simplify this equation:

‖w − v‖² = (w − v) · (w − v)
         = w · w − 2 w · v + v · v
         = ‖w‖² − 2 w · v + ‖v‖²

Now, if we subtract ‖v‖² + ‖w‖² from both sides of our equation and then divide by −2, we get

w · v = ‖v‖ ‖w‖ cos θ.
That gives us a way of geometrically interpreting the dot product. We can also solve the last equation for cos θ,

cos θ = (w · v) / (‖v‖ ‖w‖),

which will allow us to do trigonometry by means of linear algebra. Note that

θ = arccos((w · v) / (‖v‖ ‖w‖)).
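This formula is easy to run. A Python sketch of the angle computation (the name `angle` is ours; it assumes neither vector is the zero vector, since the formula divides by the norms):

```python
import math

def angle(v, w):
    """Angle between v and w, via cos θ = (w · v) / (‖v‖ ‖w‖).
    Assumes v and w are both nonzero."""
    d = sum(a * b for a, b in zip(v, w))
    nv = math.sqrt(sum(a * a for a in v))
    nw = math.sqrt(sum(a * a for a in w))
    return math.acos(d / (nv * nw))

print(math.degrees(angle((1, 0), (1, 1))))   # ≈ 45 degrees
```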

Orthogonal vectors. The word “orthogonal” is synonymous with the word “perpendicu-
lar,” but for some reason is preferred in many branches of mathematics. We’ll write w ⊥ v
if the vectors w and v are orthogonal, or perpendicular.

[Figure: two perpendicular vectors v and w.]
Two vectors are orthogonal if the angle between them is 90°. Since the cosine of 90° is 0, that means

w ⊥ v if and only if w · v = 0.
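This gives a purely algebraic test for perpendicularity. A Python sketch (the name `orthogonal` and the tolerance parameter are ours; the tolerance guards against floating-point roundoff):

```python
def orthogonal(v, w, tol=1e-12):
    """Check w ⊥ v by testing whether the dot product is (nearly) zero."""
    return abs(sum(a * b for a, b in zip(v, w))) <= tol

print(orthogonal((1, 2), (-2, 1)))   # True: 1*(-2) + 2*1 = 0
print(orthogonal((1, 2), (1, 1)))    # False: 1*1 + 2*1 = 3
```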

Math 122 Home Page at http://math.clarku.edu/~djoyce/ma122/
