Lecture 12
Glen Henshaw
Craig Carignan
October 28, 2023
and
\[
\frac{\partial C}{\partial \Delta\lambda} = 0^T = \Delta x^T - \Delta q^T J^T
\quad\Rightarrow\quad \Delta x = J\,\Delta q \tag{2}
\]
Substituting (1) into (2) gives
\[
\Delta x = J J^T \lambda \quad\Rightarrow\quad \lambda = (J J^T)^{-1}\,\Delta x
\]
and substituting this back into (1) yields
\[
\Delta q = J^T (J J^T)^{-1}\,\Delta x = J^\dagger\,\Delta x
\]
which gives us the result we want.
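This result is easy to check numerically. Below is a minimal NumPy sketch using a hypothetical random 3×7 Jacobian (full row rank almost surely), confirming that $J^T(JJ^T)^{-1}$ matches the Moore–Penrose pseudoinverse and satisfies the constraint:

```python
import numpy as np

# Hypothetical 3x7 Jacobian for a redundant arm (random illustrative values)
rng = np.random.default_rng(0)
J = rng.standard_normal((3, 7))

# Right pseudoinverse from the derivation: J+ = J^T (J J^T)^{-1}
J_pinv = J.T @ np.linalg.inv(J @ J.T)

# For full-row-rank J this equals NumPy's Moore-Penrose pseudoinverse
assert np.allclose(J_pinv, np.linalg.pinv(J))

# Minimum-norm joint step achieving a desired task-space step dx
dx = np.array([0.01, -0.02, 0.005])
dq = J_pinv @ dx
assert np.allclose(J @ dq, dx)  # constraint dx = J dq holds
```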
Note that you can change the initial optimization problem and derive different pseudo–inverses. For instance, you can add a term that penalizes distance from the arm to an object in the environment. Assume that $D(q + \Delta q\,\Delta t)$ is a positive, scalar function that has a “hat” shape with its maximum at a distance of zero and that asymptotically approaches (or actually reaches) zero as the distance from the object increases. We can then write the following minimization problem:
\[
\min_{\Delta q}\; \|\Delta q\|_2^2 + \gamma D(q + \Delta q\,\Delta t)
\quad\text{subject to}\quad \Delta x = J(q)\,\Delta q
\]
We proceed as above:
\[
C = \tfrac{1}{2}\,\Delta q^T \Delta q + \gamma D(q + \Delta q\,\Delta t) + \lambda^T(\Delta x - J\,\Delta q)
\]
Take the partial with respect to both $\Delta q$ and $\Delta\lambda$, set both equations to zero, and solve:
\[
\frac{\partial C}{\partial \Delta q} = 0^T = \Delta q^T + \gamma\,\frac{\partial D}{\partial \Delta q}\,\Delta t - \lambda^T J
\quad\Rightarrow\quad \Delta q = J^T\lambda - \gamma\,\frac{\partial D}{\partial \Delta q}^T \Delta t \tag{4}
\]
and
\[
\frac{\partial C}{\partial \Delta\lambda} = 0^T = \Delta x^T - \Delta q^T J^T
\quad\Rightarrow\quad \Delta x = J\,\Delta q \tag{5}
\]
Substituting (4) into (5) gives
\[
\Delta x = J\!\left[J^T\lambda - \gamma\,\frac{\partial D}{\partial \Delta q}^T \Delta t\right]
\]
\[
\Rightarrow\; J J^T \lambda = \Delta x + \gamma J\,\frac{\partial D}{\partial \Delta q}^T \Delta t
\]
\[
\Rightarrow\; \lambda = (J J^T)^{-1}\!\left[\Delta x + \gamma J\,\frac{\partial D}{\partial \Delta q}^T \Delta t\right]
\]
Substituting this back into (4) gives
\[
\Delta q = J^\dagger\,\Delta x + \left(J^\dagger J - I\right)\gamma\,\frac{\partial D}{\partial \Delta q}^T \Delta t \tag{6}
\]
Notice that this result is identical to the one above except for the addition of the strange partial derivative of the distance function. It turns out that $J^\dagger J - I$ is a projection into the nullspace of $J$, i.e. we can choose any vector $r$, multiply it by the projection, and add it to $\Delta q$ without changing the result:
\[
\Delta x = J(q)\,\Delta q = J\!\left[\Delta q + \left(J^\dagger J - I\right) r\right]
\]
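This nullspace property can be verified numerically. A minimal sketch with a hypothetical random Jacobian: any vector $r$ pushed through $J^\dagger J - I$ produces joint motion that $J$ maps to zero, so the task-space result is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((3, 7))        # hypothetical redundant-arm Jacobian
J_pinv = np.linalg.pinv(J)
P = J_pinv @ J - np.eye(7)             # projection into the nullspace of J

r = rng.standard_normal(7)             # arbitrary joint-space vector
dq = J_pinv @ rng.standard_normal(3)   # some particular solution

# The projected vector adds self motion only: task-space result is unchanged
assert np.allclose(J @ P @ r, np.zeros(3))
assert np.allclose(J @ (dq + P @ r), J @ dq)
```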
Inverse kinematics
[Figure: joint space $q \in \mathbb{R}^N$ (dim $N$) maps to task space $x \in \mathbb{R}^M$ (dim $M$) under the forward kinematics; the augmented task space $x_{ext} \in \mathbb{R}^N$ has dimension $N$.]
The basic idea is that we are going to increase the dimensionality of “task space” to include not only end effector trajectories but trajectories in terms of other parts of our arm. If our original Jacobian relationship is
\[
\Delta x = J_{M\times N}(q)\,\Delta q
\]
where $\dim(x) = M$ and $\dim(q) = N$, then we’re going to “extend” the Jacobian as follows:
\[
\underbrace{\begin{bmatrix}\Delta x \\ \Delta x_A\end{bmatrix}}_{\Delta x_{ext}}
=
\underbrace{\begin{bmatrix}J \\ J_A\end{bmatrix}}_{J_{ext}} \Delta q
\]
Note that we no longer need a pseudo–inverse because our extended
Jacobian is square.
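Concretely, once the Jacobian is extended to a square matrix, an ordinary linear solve suffices. A minimal sketch with hypothetical random $J$ and $J_A$ (a random square stack is nonsingular almost surely):

```python
import numpy as np

rng = np.random.default_rng(2)
J   = rng.standard_normal((3, 7))   # end-effector Jacobian (M = 3)
J_A = rng.standard_normal((4, 7))   # augmented-task Jacobian (N - M = 4 rows)

J_ext  = np.vstack([J, J_A])        # square 7x7 extended Jacobian
dx_ext = rng.standard_normal(7)     # stacked [dx; dx_A]

# Plain linear solve -- no pseudoinverse needed while J_ext is nonsingular
dq = np.linalg.solve(J_ext, dx_ext)
assert np.allclose(J_ext @ dq, dx_ext)
```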
[Figure: a 7-DOF arm with joints labeled 3–7, illustrating self motion: the elbow can move while the end effector stays fixed.]
The self motion is an “orbit” of the elbow about the line $\overrightarrow{sw}$. We need to characterize the elbow’s orbit dynamics. Let’s assign a base frame and define some helpful vectors:
[Figure: base frame $(\hat x_0, \hat y_0, \hat z_0)$ at the shoulder $s$; elbow $e$ and wrist $w$; vectors $w$, $e$, $d$, $p$, $l$, reference vector $\hat v$, and self-motion angle $\psi$.]
Define
\[
w = \overrightarrow{sw} = p_w - p_s
\]
\[
e = \overrightarrow{se} = p_e - p_s
\]
where $\hat v$ is a reference vector in the base frame and $w \times \hat v$ defines the reference plane. Then
\[
d = \hat w\,(\hat w^T e)
\]
is the projection of $e$ onto $w$,
\[
p = e - d = (I - \hat w \hat w^T)\,e
\]
and
\[
l = (w \times \hat v) \times w
\]
is perpendicular to $w$ and lies in the reference plane.
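These geometric relations can be sanity-checked numerically. A sketch with hypothetical shoulder, elbow, and wrist positions (illustrative values only), verifying that $p$ and $l$ are both perpendicular to $w$:

```python
import numpy as np

# Hypothetical shoulder, elbow, and wrist positions (meters)
p_s = np.array([0.0, 0.0, 0.0])
p_e = np.array([0.3, 0.1, 0.4])
p_w = np.array([0.6, 0.0, 0.7])
v   = np.array([1.0, 0.0, 0.0])     # reference vector in the base frame

w = p_w - p_s                        # shoulder-to-wrist vector
e = p_e - p_s                        # shoulder-to-elbow vector
w_hat = w / np.linalg.norm(w)

d = w_hat * (w_hat @ e)              # projection of e onto w
p = e - d                            # = (I - w_hat w_hat^T) e
l = np.cross(np.cross(w, v), w)      # in-plane vector perpendicular to w

assert np.allclose(p, (np.eye(3) - np.outer(w_hat, w_hat)) @ e)
assert abs(p @ w) < 1e-9             # p is perpendicular to w
assert abs(l @ w) < 1e-9             # so is l
```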
\[
\dot\phi = J_\phi\,\dot q
\]
where
\[
J_\phi = \begin{bmatrix} \times & \times & \times & \times & 0 & 0 & 0 \end{bmatrix}
\]
and note that the last three entries are zero because $\phi$ does not depend on $q_5$, $q_6$, $q_7$. So the extended Jacobian is:
\[
\begin{bmatrix}\dot x \\ \dot\phi\end{bmatrix}
=
\begin{bmatrix}J \\ J_\phi\end{bmatrix}\dot q \tag{7}
\]
3 Example Problem
One very common way to use redundant degrees of freedom is to try to keep
the manipulator’s joints in the middle of their range of motion, so that it’s less
likely to hit a joint limit. Suppose we wanted to derive a pseudoinverse to make
a manipulator with f > 6 degrees of freedom do this.
We’ll define an auxiliary loss term that penalizes angular distance from the
center of the robot’s range of motion:
\[
D = \sum_{i=1}^{f} \left(\frac{q_i - q_i^{mid}}{q_i^{min} - q_i^{max}}\right)^2 \tag{8}
\]
\[
q_i^{mid} \triangleq \frac{1}{2}\left(q_i^{min} + q_i^{max}\right) \tag{9}
\]
where qimin and qimax are the minimum and maximum joint angles for joint i.
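As a quick check of equations (8) and (9), a sketch with hypothetical symmetric joint limits of ±2 rad on all seven joints:

```python
import numpy as np

# Hypothetical joint limits (radians) for a 7-DOF arm
q_min = np.full(7, -2.0)
q_max = np.full(7,  2.0)
q_mid = 0.5 * (q_min + q_max)        # equation (9)

def D(q):
    """Loss (8): sum of squared normalized distances from mid-range."""
    return np.sum(((q - q_mid) / (q_min - q_max)) ** 2)

assert D(q_mid) == 0.0               # minimum at the middle of the range
assert abs(D(q_max) - 1.75) < 1e-12  # each joint contributes (2/-4)^2 = 0.25
```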
We want to use Resolved Motion Rate Control (RMRC) with a “custom” pseudoinverse that uses the extra degrees of freedom to minimize this loss term. What is the RMRC Jacobian relationship for this term, i.e. what is $\Delta q$ in terms of $\Delta x$?
3.1 Answer
Recall that we derived a general formula to do this in Lecture 12 (specifically, equation 6):
\[
\Delta q = J^\dagger\,\Delta x + \left(J^\dagger J - I\right)\gamma\,\frac{\partial D}{\partial \Delta q}^T \Delta t
\]
So all we have to do is take the partial of $D$ with respect to each of the joint angles $q_i$:
\[
\frac{\partial D}{\partial \Delta q}^T
= 2\begin{bmatrix}
\dfrac{q_1 - q_1^{mid}}{\left(q_1^{min} - q_1^{max}\right)^2} \\
\vdots \\
\dfrac{q_f - q_f^{mid}}{\left(q_f^{min} - q_f^{max}\right)^2}
\end{bmatrix}
\triangleq G
\]
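The analytic gradient of the loss $D$ can be validated against central finite differences. A sketch with the same hypothetical ±2 rad limits and an arbitrary joint configuration:

```python
import numpy as np

q_min = np.full(7, -2.0)             # hypothetical limits, as before
q_max = np.full(7,  2.0)
q_mid = 0.5 * (q_min + q_max)

def D(q):
    return np.sum(((q - q_mid) / (q_min - q_max)) ** 2)

def G(q):
    # Analytic gradient of D: 2 (q_i - q_i^mid) / (q_i^min - q_i^max)^2
    return 2.0 * (q - q_mid) / (q_min - q_max) ** 2

q = np.array([0.5, -1.0, 0.2, 0.0, 1.5, -0.3, 0.8])
eps = 1e-6
G_num = np.array([
    (D(q + eps * np.eye(7)[i]) - D(q - eps * np.eye(7)[i])) / (2 * eps)
    for i in range(7)
])
assert np.allclose(G(q), G_num, atol=1e-6)
```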
Therefore the full RMRC algorithm is:
\[
\Delta q = J^\dagger\,\Delta x + \left(J^\dagger J - I\right)\gamma\,G\,\Delta t
\]
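Putting it together, one RMRC step can be sketched as follows (hypothetical random 6×7 Jacobian, illustrative limits, and $\gamma = 1$). The nullspace term nudges the joints toward mid-range without disturbing the commanded end-effector motion:

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.standard_normal((6, 7))      # hypothetical 6x7 Jacobian (f = 7 > 6)
q_min = np.full(7, -2.0)
q_max = np.full(7,  2.0)
q_mid = 0.5 * (q_min + q_max)

q     = np.array([1.0, -0.5, 0.3, 1.2, -1.1, 0.4, 0.0])
dx    = np.array([0.01, 0.0, -0.02, 0.0, 0.005, 0.0])
dt    = 0.01
gamma = 1.0

J_pinv = np.linalg.pinv(J)
G = 2.0 * (q - q_mid) / (q_min - q_max) ** 2   # gradient of the loss D

# dq = J+ dx + (J+ J - I) gamma G dt
dq = J_pinv @ dx + (J_pinv @ J - np.eye(7)) @ (gamma * G) * dt

assert np.allclose(J @ dq, dx)       # end-effector motion is unaffected
```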