1 Introduction Building Blocks

Introduction to building blocks. Metallurgy.

Uploaded by avesh badal

Mechanics of Material: Course Objectives

Topics:
• Three-Dimensional State of Stress and Strain (Tensors)
• Theories of Elastic Failure
• Curved Beams
• Statically Indeterminate Beams
• Unsymmetrical Bending
• Axi-symmetric Problems
• Elastic Strain Energy and Energy Methods

Perspective:
• Results as specific instances of generic mathematical objects: tensors
• Relationships with the principles of Continuum Mechanics (Balance & Constitutive Laws)
• Generalization from 2D to 3D
• Expansion of scope through the inclusion of newer problems: curved beams and unsymmetrical bending
• Newer (energy-based) methods
Mechanics of Material

Domain (prior knowledge):
• Vectors: dot product, cross product, triple product
• Moment of a force, moment of a couple
• Free body diagrams

Building Blocks (this module):
• Vector space
• Index notation
• Tensors

Both feed into the Core Course Content.
Notation by Indices
• It is often efficient (even necessary) to describe quantities through letters attached with
indices (subscripts or superscripts)
• Quantities defined through letters attached with indices are referred to as systems
• When such quantities obey certain transformation laws, they are called tensor systems
• Until we understand tensors, let us call them terms
A term may result from the product of terms; a system may result from the addition of
terms, each of which in turn results from a product of terms.

Free index: an index appearing ONCE & ONLY ONCE:
• in a term or a product of terms
• necessarily in ALL additive terms forming a system

Dummy index: an index appearing TWICE:
• in a term or a product of terms
• not necessarily in ALL additive terms forming a system

Rank of a term/system: the number of free indices in the system
(e.g. one free index gives an order-1 system).
Notation by Indices

             Free index (unrepeated)                  Dummy index (repeated TWICE)
Definition   An index appearing ONCE & ONLY ONCE:     An index appearing TWICE:
             • in a term or a product of terms        • in a term or a product of terms
             • NECESSARILY IN EACH additive term      • NOT NECESSARILY IN EACH additive
               forming a system                         term forming a system

Implication  Each free index implies as many          Each dummy index implies summation
             equations as the range of the index.     over the range of the index:
             Number of equations = range^rank.        y = aj xj means
                                                      y = a1 x1 + a2 x2 + a3 x3

Renaming     A free index has a global presence;      A dummy index is local to an additive
             hence it is to be renamed across ALL     term; hence it can be renamed only
             additive terms:                          locally (ensuring the same range):
             Spq uq + Tpr vr → Siq uq + Tir vr        Spq uq + Tpr vr → Spq uq + Tpq vq
             (p renamed to i in every term)           (permitted if range(r) = range(q))

An index appearing more than twice in a term/product is invalid: e.g. Tii vi.


Given indicial notation: numerical expansion into equations.
Given equations: how do we write the indicial notation?

y1 = a11 x1 + a12 x2 + a13 x3
y2 = a21 x1 + a22 x2 + a23 x3

Why not use a dummy index, which implies summation over a range?

y1 = a1j xj
y2 = a2j xj     } j = 1, 2, 3

Don't we get as many equations as the range of a free index?

yi = aij xj     i = 1, 2 (free index); j = 1, 2, 3 (dummy index)
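The compact form yi = aij xj can be checked numerically. A minimal NumPy sketch (the matrix and vector values here are made-up illustrations; `np.einsum` implements exactly this summation convention, summing over any repeated index):

```python
import numpy as np

# y_i = a_ij x_j : j is the dummy (summed) index, i is the free index.
# The free index survives, giving as many equations as its range (here 2).
a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # a_ij, i = 1,2 ; j = 1,2,3
x = np.array([1.0, 0.0, -1.0])    # x_j

y = np.einsum('ij,j->i', a, x)    # sums over the repeated index j

# Expanded form, written out term by term for comparison:
y1 = a[0, 0]*x[0] + a[0, 1]*x[1] + a[0, 2]*x[2]
y2 = a[1, 0]*x[0] + a[1, 1]*x[1] + a[1, 2]*x[2]
print(y, (y1, y2))  # the compact and expanded forms agree
```

The einsum subscript string makes the free/dummy distinction visible: `j` appears twice (summed away), `i` appears once and survives on the output.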
Notation by Indices: System Identifiers (Rank & Type)

Rank: the number of free indices in a system.

Type: the distribution of free indices between subscripts and superscripts, in terms of numbers:
• Same type: same number of free subscripts and superscripts (all indices of the same range)
• Different type: different numbers of free subscripts and superscripts

The interpretation of free/dummy indices remains the same; yet the distinction is sometimes made to:
• make representations more compact (the horizontal length could be cut by up to half)
• streamline the operations of addition and subtraction

The criterion for distinguishing indices as subscripts and superscripts relates to the
notion of covariant and contravariant bases (to be touched upon later).
Notation by Indices: Addition, Multiplication & Symmetry

Operation        Application                              Resulting System
Addition         Applies to systems of the same           Is of the same type & rank
                 type & rank
Multiplication   Each component of the first system is    Its rank is the SUM of the ranks of
(outer product)  multiplied with each component of the    the systems involved
                 second system (different types too)      (e.g. rank 2 × rank 3 → rank 5)

Symmetric system: an interchange of 2 indices leaves the system unchanged.
Skew-symmetric system: an interchange of 2 indices reverses the sign of the system.
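A quick numerical illustration of the symmetric/skew-symmetric definitions (the matrix values are arbitrary examples): any rank-2 system splits uniquely into a symmetric part and a skew-symmetric part, which is easy to verify with NumPy.

```python
import numpy as np

# Any rank-2 system T_ij splits into a symmetric and a skew-symmetric part:
#   S_ij = (T_ij + T_ji)/2   (unchanged when the two indices are interchanged)
#   W_ij = (T_ij - T_ji)/2   (reverses sign when the two indices are interchanged)
T = np.array([[1.0, 4.0, 2.0],
              [0.0, 3.0, 5.0],
              [6.0, 1.0, 2.0]])

S = 0.5 * (T + T.T)
W = 0.5 * (T - T.T)

print(np.allclose(S, S.T))    # symmetric: S_ij = S_ji
print(np.allclose(W, -W.T))   # skew-symmetric: W_ij = -W_ji
print(np.allclose(S + W, T))  # the two parts recombine to T
```

Transposing the array interchanges the two indices, so `S == S.T` is precisely the slide's "interchange of 2 indices leaves the term unchanged".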
Vector Space

• Vector Space
• Norm of elements in VS
• Inner Product Space
• Euclidean Space
• Spanning set of a VS
• Basis of a VS

Motivation
Tensor algebra deeply involves the use of two symbols/operators:
• Kronecker Delta
• Permutation symbol

The vector space concept will introduce these symbols smoothly.


Vector Space
A non-empty set V is called a vector space if two operations, addition and scalar
multiplication, are defined on its objects (called vectors), such that the following
axioms are met:

Addition:                               Scalar multiplication:
1. The sum belongs to the set V         1. The scalar multiple belongs to the set V
2. The sum is commutative               2. Unit multiplication (1·x = x)
3. The sum is associative               3. Associative law for multiplication of scalars
4. Additive identity: zero vector       4. Distributive law over a sum of scalars
5. Additive inverse vector              5. Distributive law over a sum of vectors

Norm of Elements in a Vector Space
Norm: a function that assigns a strictly positive length or size to each vector in a
vector space, except for the zero vector, which is assigned a length of zero.

Lp norm:                     ||x||p = (Σ |xi|^p)^(1/p)
L1/Manhattan/Taxicab norm:   ||x||1 = Σ |xi|
L2/Euclidean norm:           ||x||2 = (Σ xi²)^(1/2)
L∞ norm:                     ||x||∞ = max |xi|

The unit circle (the set of all vectors of norm 1) is different in different norms:
in the 1-norm it is a rhombus; in the 2-norm, a circle of unit radius; in the ∞-norm, a square.
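The three norms above can be compared on one vector. A minimal sketch using NumPy's `np.linalg.norm`, whose `ord` argument selects the norm (the vector (3, −4) is an arbitrary example):

```python
import numpy as np

x = np.array([3.0, -4.0])

l1   = np.linalg.norm(x, 1)       # Manhattan/taxicab: |3| + |-4| = 7
l2   = np.linalg.norm(x, 2)       # Euclidean: sqrt(9 + 16) = 5
linf = np.linalg.norm(x, np.inf)  # max(|3|, |-4|) = 4

print(l1, l2, linf)  # 7.0 5.0 4.0
```

The same vector has three different "lengths", which is why the unit circle looks different in each norm.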
Inner Product Space
A vector space associated with an inner product is an inner product space.

An inner product on V is a mapping that:

1. Associates to each ordered pair of elements x, y a scalar <x, y> satisfying the
   following properties:
   • Additivity: <x + y, z> = <x, z> + <y, z>
   • Homogeneity: <αx, y> = α <x, y>
   • Symmetry: <x, y> = <y, x>
   • Positive definiteness: <x, x> > 0 for x ≠ 0

2. Provides the notion of orthogonal elements, in that:
   • x, y are orthogonal if <x, y> = 0
   • A set of elements {x, y, z} of V is said to form an orthogonal set if every element
     in the set is orthogonal to every other element
Relationships:
Vector Space, Normed Space, Inner Product Space

Vector space
 ⊃ Normed space (a VS equipped with a norm)
 ⊃ Inner product space (a VS equipped with an inner product)

An inner product induces a norm, ||x|| = <x, x>^(1/2): the positive definiteness,
additivity, homogeneity and symmetry of the inner product guarantee the properties of a norm.
Relationships:
Vector Space, Normed Space, Inner Product Space, and Euclidean Space

Vector space ⊃ Normed space (a VS equipped with a norm) ⊃ Inner product space (a VS
equipped with an inner product)

Euclidean Space: the vector space Rⁿ with the following definitions:
• Inner product:    <x, y> = Σ xi yi
• Norm in L2:       ||x|| = <x, x>^(1/2)              (the IP of a vector with itself)
• Distance metric:  d(x, y) = <x − y, x − y>^(1/2)    (the IP of the difference b/w vectors with itself)
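The three Euclidean-space definitions chain together: the norm is the inner product of a vector with itself, and the distance is the norm of the difference. A short NumPy sketch with arbitrary example vectors:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([4.0, 6.0, 2.0])

ip   = np.dot(x, y)                   # inner product <x, y> = sum x_i y_i
norm = np.sqrt(np.dot(x, x))          # L2 norm: sqrt of the IP of x with itself
dist = np.sqrt(np.dot(x - y, x - y))  # distance: sqrt of the IP of (x - y) with itself

print(ip, norm, dist)  # 20.0 3.0 5.0
```

Note that `dist` agrees with `np.linalg.norm(x - y)`: the distance metric is just the induced norm applied to the difference.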
Vector Space: Spanning Set
A set of vectors X1, X2, …, Xk is said to span the vector space V if any vector X ϵ V can
be expressed through this set of k vectors, in that: X = ∑ αi Xi

     X1   X2   X3   X4   X5
      1    1    0   -1    1
      0    1    1    0   -1

Any vector X ϵ R² can be expressed as a linear combination of X1, …, X5; hence
{X1, …, X5} can be said to span the vector space V = R².

X2 = X1 + X3     This spanning set seems to be over-defined, because three of the
X4 = -X1         vectors can be expressed in terms of just two of them.
X5 = X1 - X3     {X1, X3} alone are enough.

For a spanning set to be efficient, linear independence of its members is important.

A set of vectors X1, X2, …, Xk is said to be linearly independent if their scalar
multiples can sum up to zero only by making all those scalars zero at the same time:
∑ αi Xi = 0 ONLY IF each αi = 0
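Spanning and linear independence can both be checked with the matrix rank. A small NumPy sketch using the five example vectors from the table above (stored as columns of a 2×5 matrix):

```python
import numpy as np

# The five vectors from the example, as columns of a 2x5 matrix.
X = np.array([[1, 1, 0, -1,  1],
              [0, 1, 1,  0, -1]], dtype=float)

# Rank 2 = dimension of R^2, so the columns span R^2 ...
print(np.linalg.matrix_rank(X))             # 2
# ... but 5 columns with rank 2 cannot be linearly independent.
# A 2-column subset is independent iff its rank is 2:
print(np.linalg.matrix_rank(X[:, [0, 2]]))  # {X1, X3}: rank 2, independent
print(np.linalg.matrix_rank(X[:, [0, 3]]))  # {X1, X4}: rank 1, dependent (X4 = -X1)
```

The rank condition is exactly the slide's definition: ∑ αi Xi = 0 forces all αi = 0 precisely when the column rank equals the number of columns.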
Vector Space: Spanning Set to Basis
A set of vectors X1, X2, …, Xk is said to be a basis for a vector space V if:
• the vectors are linearly independent: ∑ αi Xi = 0 ONLY IF each αi = 0
• the vectors span V: any vector X ϵ V can be written X = ∑ αi Xi

     X1   X2   X3   X4   X5      Possible bases: {X1, X2}, {X2, X3}, {X3, X4},
      1    1    0   -1    1      {X4, X5}, {X1, X3}, {X2, X4}, {X1, X5}
      0    1    1    0   -1

A basis for a vector space is both:
• a maximal independent set: add any element to it, and the set would no longer be independent
• a minimal spanning set: remove any element from it, and the set will not span the vector space

The basis of a vector space NEED NOT BE:
• unique; however, the cardinality of the different bases is the same: n for Rⁿ
• orthogonal ({X1, X3} is; {X1, X2} isn't), but an orthogonal basis is the most convenient to deal with

Standard basis for Rⁿ: {e1, e2, …, en}, where ei is an n-dimensional vector with all
elements = 0 except for the ith element = 1.
Vector Space: Cartesian Basis (Euclidean Space)
Cartesian basis: formed by vectors {e1, e2, e3} forming a right-handed orthonormal set.

Orthonormal set: an orthogonal set of unit vectors.
Right-handed set: curl the fingers of the right hand from the first element to the
second; the thumb points towards the third element.

Why Cartesian, when any 3 non-coplanar vectors in R³ may form a basis of R³?

Tremendous ease: for the basis coefficients {a1, a2, a3} of a given vector, the
ith coefficient = the dot product of the given vector with the ith basis element.
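The "tremendous ease" claim is easy to demonstrate: for an orthonormal basis, each coefficient is a single dot product, with no linear system to solve. A minimal sketch with the standard Cartesian basis and an arbitrary example vector:

```python
import numpy as np

# Orthonormal (Cartesian) basis in R^3: the rows of the identity matrix
e1, e2, e3 = np.eye(3)

v = np.array([2.0, -3.0, 5.0])

# For an orthonormal basis, the ith coefficient is just a dot product:
a1, a2, a3 = np.dot(v, e1), np.dot(v, e2), np.dot(v, e3)
print(a1, a2, a3)                             # 2.0 -3.0 5.0
print(np.allclose(a1*e1 + a2*e2 + a3*e3, v))  # the coefficients reconstruct v
```

For a general (non-orthogonal) basis of three non-coplanar vectors, the coefficients would instead require solving a 3×3 linear system; the dot-product shortcut is exactly what makes the Cartesian basis so convenient.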
Vector Space: Cartesian Basis (Euclidean Space)

The intention is to interpret the dot & cross products through the prism of
orthonormality (orthogonality + unit magnitude) and right-handedness,
but let us first revisit what we already know: the dot and cross products in
component form.

Orthonormality → Kronecker Delta symbol:
    δij = 1 if i = j; δij = 0 if i ≠ j
    Orthonormal basis rule: ei · ej = δij
    Dot product in index form: u · v = ui vi

Right-handedness → Permutation/alternating symbol:
    εijk = +1 for even permutations of (1 2 3); −1 for odd permutations;
    0 if any index repeats
    Right-handed basis rule: ei × ej = εijk ek
    Cross product in index form: (u × v)i = εijk uj vk

Scalar triple product: u · (v × w) = εijk ui vj wk
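Both symbols can be built explicitly and fed to `np.einsum` to reproduce the index-form dot and cross products. A sketch (the test vectors e1 and e2 verify the right-handed basis rule e1 × e2 = e3):

```python
import numpy as np

# Kronecker delta: delta_ij = entries of the identity matrix
delta = np.eye(3)

# Permutation (Levi-Civita) symbol eps_ijk from the parity of (i, j, k)
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1    # even permutations of (1 2 3)
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1   # odd permutations

u = np.array([1.0, 0.0, 0.0])   # e1
v = np.array([0.0, 1.0, 0.0])   # e2

dot   = np.einsum('i,i->', u, v)            # u . v = u_i v_i
cross = np.einsum('ijk,j,k->i', eps, u, v)  # (u x v)_i = eps_ijk u_j v_k

print(dot, cross)  # 0.0 [0. 0. 1.]  ->  e1 . e2 = 0,  e1 x e2 = e3
print(np.allclose(cross, np.cross(u, v)))   # matches NumPy's built-in cross product
```

The einsum result agreeing with `np.cross` confirms that the index formula (u × v)i = εijk uj vk really is the ordinary cross product.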
….A Little More on the Permutation Symbol

123, 231, 312 are even permutations of 123.
132, 213, 321 are odd permutations of 123.

What is the criterion for an even/odd permutation? It relates to the number of
transpositions, where a transposition is an interchange of two consecutive terms in a
sequence.

Example: 123 → 132 (1st interchange: 2 and 3) → 312 (2nd interchange: 3 and 1).
Total: 2 interchanges = 2 transpositions → an even permutation.

Even permutations correspond to any three consecutive terms picked from the sequence
123123.
Odd permutations correspond to any three consecutive terms picked from the sequence
321321.
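The transposition-counting criterion can be coded directly: repeatedly interchanging consecutive out-of-order terms (bubble-sort swaps) sorts the sequence, and the parity of the swap count is the parity of the permutation. A small sketch (`parity` is an illustrative helper name, not from the slides):

```python
# Parity of a permutation by counting transpositions of consecutive elements:
# each interchange of two neighbouring terms is one transposition.
def parity(seq):
    s = list(seq)
    swaps = 0
    for i in range(len(s)):
        for j in range(len(s) - 1):
            if s[j] > s[j + 1]:
                s[j], s[j + 1] = s[j + 1], s[j]   # interchange of consecutive terms
                swaps += 1
    return +1 if swaps % 2 == 0 else -1

print([parity(p) for p in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]])  # [1, 1, 1]     even
print([parity(p) for p in [(1, 3, 2), (2, 1, 3), (3, 2, 1)]])  # [-1, -1, -1]  odd
```

The swap count equals the number of inversions in the sequence, which is also what the line-crossing picture on the next slide counts: `parity((6, 1, 2, 4, 5, 3))` returns −1, matching its 7 intersections.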
….A Little More Through/On the Permutation Symbol

What happens to the determinant if two rows are interchanged?
• Once (say R2–R3, so the order of rows is 1-3-2): the determinant changes sign.
• Twice (say R2–R3, followed by R1–R3, so the order of rows is 3-1-2): the sign changes
  twice, restoring |A|.

By interchanging the rows of a 3×3 matrix, a more general result can be obtained:
if (p q r) is some permutation of the integers (1 2 3), then the determinant of the
matrix with its rows in the order p-q-r equals εpqr |A|, where |A| is with respect to
the row sequence 1-2-3.
….A Little More on the Permutation Symbol

Is the permutation 6 1 2 4 5 3 of 1 2 3 4 5 6 even or odd?

6 1 2 4 5 3
  (map it onto)
1 2 3 4 5 6

Draw a line from each entry to its position in 1-2-3-4-5-6 and count the crossings:
7 intersections imply an odd number of transpositions, implying −1.

Vector – Vector Multiplication

Vector representation: v = vi ei, with the components stored as an n×1 matrix.

Dot product of two vectors: u · v = ui vi
The dummy index implies summation: the result is a scalar. Only the elements at
corresponding positions multiply.

Tensor product of two vectors (the dyad): (u ⊗ v)ij = ui vj
Each element of one vector is multiplied by every element of the other vector.
Range^rank = 3² = 9 terms.
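The contrast between the two products shows up directly in code: the dot product contracts the dummy index to a scalar, while the dyad keeps both indices free and produces range² = 9 components. A sketch with arbitrary example vectors:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Dot product: dummy index, result is a scalar (corresponding positions multiply)
s = np.einsum('i,i->', u, v)      # u_i v_i

# Dyad: no dummy index, rank 2, 3^2 = 9 components (every element with every element)
D = np.outer(u, v)                # D_ij = u_i v_j

print(s)         # 32.0
print(D.shape)   # (3, 3)
print(D[1, 2])   # u_2 * v_3 = 2 * 6 = 12.0
```

`np.outer` is the matrix form of the dyad; `np.einsum('i,j->ij', u, v)` would give the same result with the index structure spelled out.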
Matrix – Vector Multiplication: the Way to Indices

| Q11 Q12 | | u1 |   | Q11 u1 + Q12 u2 |   | Q1j uj |
| Q21 Q22 | | u2 | = | Q21 u1 + Q22 u2 | = | Q2j uj |          (Q u)i = Qij uj

A column vector cannot pre-multiply a matrix:
| u1 |  | Q11 Q12 |
| u2 |  | Q21 Q22 |   →  NOT POSSIBLE

A row vector can pre-multiply a matrix:

| u1 u2 | | Q11 Q12 | = | u1 Q11 + u2 Q21   u1 Q12 + u2 Q22 | = | uj Qj1   uj Qj2 |
          | Q21 Q22 |                                            i.e. (uT Q)i = uj Qji

| u1 u2 | | Q11 Q21 | = | u1 Q11 + u2 Q12   u1 Q21 + u2 Q22 | = | uj Q1j   uj Q2j |
          | Q12 Q22 |                                            i.e. (uT QT)i = uj Qij
Matrix – Vector Multiplication: Rules for Indices

(AB)T = BT AT

Whenever a matrix is involved in multiplication, the summation (dummy) indices sit
beside each other: (Q u)i = Qij uj. Whenever a matrix transpose is involved, the
summation indices are NOT beside each other: (QT u)i = Qji uj.

The vector's index on the RHS (the summation/dummy index) is different from the index
being sought on the LHS.

A vector can pre-multiply a matrix only in transposed (row) form.
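The "beside each other / not beside each other" rule is visible in the einsum subscript strings. A sketch with an arbitrary 2×2 matrix and vector:

```python
import numpy as np

Q = np.array([[1.0, 2.0],
              [3.0, 4.0]])
u = np.array([5.0, 6.0])

# Matrix times vector: summation indices beside each other -> Q_ij u_j
v1 = np.einsum('ij,j->i', Q, u)
print(np.allclose(v1, Q @ u))     # True

# Transpose involved: summation indices NOT beside each other -> Q_ji u_j
v2 = np.einsum('ji,j->i', Q, u)
print(np.allclose(v2, Q.T @ u))   # True
print(np.allclose(v2, u @ Q))     # True: the row vector pre-multiplying Q

print(v1, v2)
```

In `'ij,j->i'` the dummy `j` sits adjacent across the comma (plain multiplication); in `'ji,j->i'` it does not, which is exactly the transpose pattern Qji uj.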
Matrix – Vector to Matrix – Matrix Multiplication

The same rules carry over to matrix – matrix products:
• (A B)ij = Aik Bkj: the summation indices sit beside each other.
• Under transpose, Aik becomes Aki, so (AT B)ij = Aki Bkj: the summation indices are
  NOT beside each other.
Co-ordinate Transformation for Vector Components
• Vector components depend on the co-ordinate system. If the components with respect to
  one system are known, can they be found with respect to another co-ordinate system?
• To be able to relate the components in two different systems, it is important to relate
  the two systems in the first place.

Co-ordinate system 1 → Translation + Rotation → Co-ordinate system 2

Translation does NOT affect the components; rotation DOES affect the components.

Rotation or transformation matrix [Q]:

[u] = [Q] [u′]

The transformation matrix is orthogonal: [Q⁻¹] = [Qᵀ], hence
[u′] = [Q⁻¹] [u] = [Qᵀ] [u]
Co-ordinate Transformation for Vector Components

The entries of [Q] are direction cosines, Qij = ei · e′j, which can relate the base
vectors in any two Cartesian co-ordinate systems {e1, e2, e3} and {e′1, e′2, e′3}.

Formal derivation of the transformation equations:
• A vector pre-multiplying implies it should be in transposed (row) form.
• Summation indices NOT beside each other imply a Q transpose.
• Transposing both sides returns the column form: u′i = Qji uj.
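The derivation above can be checked numerically. A sketch using, as an assumed example, a direction-cosine matrix for a rotation of the basis about e3 (any angle would do; π/6 is arbitrary):

```python
import numpy as np

# Direction-cosine matrix Q_ij = e_i . e'_j for a basis rotated by angle t about e3
t = np.pi / 6
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

# The transformation matrix is orthogonal: Q^-1 = Q^T
print(np.allclose(np.linalg.inv(Q), Q.T))  # True

u = np.array([1.0, 2.0, 3.0])   # components in system 1:  [u]  = [Q][u']
u_prime = Q.T @ u               # components in system 2:  [u'] = [Q^T][u]
print(np.allclose(Q @ u_prime, u))  # transforming back recovers [u]
```

Orthogonality is what lets the inverse transformation be written as a transpose, so no matrix inversion is ever needed for Cartesian-to-Cartesian changes of basis.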
Co-ordinate Transformation for Vector Components

Two co-ordinate systems: I in red (basis e1, e2) and II in blue (basis e′1, e′2).
If a transformation 𝕋I leads to vI from uI, what transformation 𝕋II will lead to vII
from uII?

Step 1: Get uI from uII:    uI = Q uII
Step 2: Get vI from uI:     vI = 𝕋I uI
Step 3: Get vII from vI:    vII = Q⁻¹ vI

Combining the steps: vII = Q⁻¹ 𝕋I Q uII = Qᵀ 𝕋I Q uII = 𝕋II uII
Covariant & Contravariant Transformations

• A covariant transformation is a rule that specifies how certain entities, such as
  vectors (or tensors), change under a change of basis.
• Conventionally, the indices identifying the basis vectors are placed as lower
  indices, while the components of a vector carry upper indices.
Co-ordinate Transformation for Vector Components: Worked Example

Q = | e1·e′1  e1·e′2 | = | 0  -1 |        Q⁻¹ = Qᵀ = |  0  1 |
    | e2·e′1  e2·e′2 |   | 1   0 |                   | -1  0 |

In system I:   uI = (1, 1),   𝕋I = | 0  1 |,   vI = 𝕋I uI = (1, 3)
                                   | 1  2 |
In system II:  uII = Qᵀ uI = (1, -1),   vII = Qᵀ vI = (3, -1)

𝕋II should be equivalent to 𝕋I, where equivalence is ensured by:
• a co-ordinate transformation
• an inverse co-ordinate transformation:   𝕋II = Q⁻¹ 𝕋I Q

Does 𝕋II map uII onto vII?

𝕋II = Q⁻¹ 𝕋I Q = |  0  1 | | 0  1 | | 0  -1 | = |  2  -1 |
                 | -1  0 | | 1  2 | | 1   0 |   | -1   0 |

𝕋II uII = |  2  -1 | |  1 | = |  3 | = vII   ✓
          | -1   0 | | -1 |   | -1 |
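The worked example is easy to replay in NumPy, using the same numbers as the slide:

```python
import numpy as np

# Numbers from the worked example
Q  = np.array([[0.0, -1.0],
               [1.0,  0.0]])        # Q_ij = e_i . e'_j
TI = np.array([[0.0, 1.0],
               [1.0, 2.0]])         # the map T_I in system I

uI = np.array([1.0, 1.0])
vI = TI @ uI                        # (1, 3)

# Components of the same vectors in system II
uII = Q.T @ uI                      # (1, -1)
vII = Q.T @ vI                      # (3, -1)

# The equivalent map in system II: a similarity transformation
TII = Q.T @ TI @ Q                  # = Q^-1 T_I Q, since Q is orthogonal

print(TII)                          # [[ 2. -1.] [-1.  0.]]
print(np.allclose(TII @ uII, vII))  # True: T_II maps u_II onto v_II
```

This confirms the slide's closing check: the similarity transform Q⁻¹ 𝕋I Q produces the matrix [[2, −1], [−1, 0]], which indeed carries uII = (1, −1) to vII = (3, −1).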

You might also like