
Optimization and Computational Linear Algebra for Data Science

Lecture 6: Eigenvalues, eigenvectors and Markov chains


Léo Miolane · leo.miolane@gmail.com

October 9, 2020

Warning: This material is not meant to be lecture notes. It only gathers the main concepts
and results from the lecture, without any additional explanation, motivation, examples, figures...

1 Eigenvalues and eigenvectors


Definition 1.1
Let A ∈ Rn×n . A non-zero vector v ∈ Rn is said to be an eigenvector of A if there exists
λ ∈ R such that
Av = λv.
The scalar λ is called the eigenvalue (of A) associated to v. The set

Eλ (A) = { x ∈ Rn | Ax = λx } = Ker(A − λId)




is called the eigenspace of A associated to λ. The dimension of Eλ (A) is called the multiplicity
of the eigenvalue λ.

Remark 1.1. Notice that Eλ (A) is a subspace of Rn : any (non-zero) linear combination of
eigenvectors associated with the eigenvalue λ is also an eigenvector of A associated with λ.
Remark 1.2. Definition 1.1 can be generalized to allow complex eigenvalues and eigenvectors:
λ ∈ C and v ∈ Cn . However in this course, we only consider real eigenvalues and eigenvectors.
Example 1.1. For λ1 , . . . , λn ∈ R, we introduce the notation

Diag(λ1 , . . . , λn ) def= the matrix of Rn×n whose diagonal entries are λ1 , . . . , λn and whose
off-diagonal entries are all equal to 0.

Let (e1 , e2 , . . . , en ) be the canonical basis of Rn . Then, for all i ∈ {1, . . . , n}

Diag(λ1 , . . . , λn ) ei = λi ei ,

since multiplying ei by Diag(λ1 , . . . , λn ) scales its i-th coordinate (the only non-zero one, equal to 1) by λi .

Hence the vector ei is an eigenvector of the matrix Diag(λ1 , . . . , λn ) associated with the eigenvalue
λi .
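A quick numerical check of this example (my own illustration, not part of the original notes), using NumPy with an arbitrary choice of diagonal entries:

```python
import numpy as np

# Arbitrary diagonal entries, chosen only for illustration.
D = np.diag([2.0, -1.0, 5.0])          # Diag(2, -1, 5)
e2 = np.array([0.0, 1.0, 0.0])         # second vector of the canonical basis

print(D @ e2)                          # [ 0. -1.  0.] = -1 * e2
print(np.allclose(D @ e2, -1.0 * e2))  # True: e2 is an eigenvector with eigenvalue -1
```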

Proposition 1.1
Let A ∈ Rn×n . Suppose that A has an eigenvalue λ ∈ R and let x ∈ Rn be an eigenvector
associated to λ. The following holds:

• For all α ∈ R, αλ is an eigenvalue of the matrix αA and x is an associated eigenvector.

• For all α ∈ R, λ + α is an eigenvalue of the matrix A + αId and x is an associated eigenvector.

• For all k ∈ N, λk is an eigenvalue of the matrix Ak and x is an associated eigenvector.

• If A is invertible then 1/λ is an eigenvalue of the matrix inverse A−1 and x is an associated eigenvector.
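The following sketch (an illustration of mine, with an arbitrarily chosen symmetric matrix so that its eigenpairs are real) checks the four properties numerically:

```python
import numpy as np

# Arbitrary symmetric matrix, so its eigenvalues/eigenvectors are real.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lams, vecs = np.linalg.eigh(A)
lam, x = lams[0], vecs[:, 0]           # one eigenpair: A x = lam x
alpha, k = 0.7, 3

print(np.allclose((alpha * A) @ x, (alpha * lam) * x))               # scaling by alpha
print(np.allclose((A + alpha * np.eye(2)) @ x, (lam + alpha) * x))   # shift by alpha*Id
print(np.allclose(np.linalg.matrix_power(A, k) @ x, lam**k * x))     # k-th power
print(np.allclose(np.linalg.inv(A) @ x, (1.0 / lam) * x))            # inverse (A is invertible)
```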

Definition 1.2
The set of all eigenvalues of A is called the spectrum of A and denoted by Sp(A).

Proposition 1.2
Let v1 , . . . , vk be eigenvectors of A corresponding (respectively) to the eigenvalues λ1 , . . . , λk .
If the λi are all distinct (λi ≠ λj for all i ≠ j) then the vectors v1 , . . . , vk are linearly
independent.

It follows from Proposition 1.2:


Corollary 1.1
An n × n matrix A admits at most n different eigenvalues: #Sp(A) ≤ n.

In fact, we have the following stronger result:


Proposition 1.3
Let A ∈ Rn×n . If λ1 , . . . , λk are distinct eigenvalues of A of multiplicities m1 , . . . , mk respec-
tively, then
m1 + · · · + mk ≤ n.
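As a small illustration of Proposition 1.3 (mine, with an arbitrary example), the inequality can be strict:

```python
import numpy as np

# This matrix has 2 as its only eigenvalue, but E_2(A) = Ker(A - 2 Id) is
# one-dimensional, so the multiplicities sum to 1 < n = 2.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
print(np.linalg.eigvals(A))                              # [2. 2.]
dim_E2 = 2 - np.linalg.matrix_rank(A - 2.0 * np.eye(2))
print(dim_E2)                                            # 1
```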

2 Application to Markov chains


2.1 First definitions and properties
A finite Markov chain is a random process which moves among the elements of a finite set E in
the following manner: when at x ∈ E, the next position is chosen according to a fixed probability
distribution P (x, ·). More formally:
Definition 2.1
A sequence of random variables (X0 , X1 , . . . ) is a Markov chain with state space E and
“transition matrix” P if for all t ≥ 0,

P(Xt+1 = y | X0 = x0 , . . . , Xt = xt ) = P (xt , y)

for all x0 , . . . , xt such that P(X0 = x0 , . . . , Xt = xt ) > 0.

The transition matrix P therefore verifies, for all x ∈ E,

Σ_{y∈E} P (x, y) = 1.    (1)

In order to simplify the notations, we will assume that E = {1, 2, . . . , n} and write, for all
i, j ∈ E, Pi,j = P (j, i). Note that we switched here the order of i and j. This is not
what is usually done in the literature, but it will allow us to be more coherent
with our linear algebra framework. Such a matrix is said to be stochastic:
Definition 2.2 (Stochastic matrix)
A matrix P ∈ Rn×n is said to be stochastic if:

(i) Pi,j ≥ 0 for all 1 ≤ i, j ≤ n.


(ii) Σ_{i=1}^n Pi,j = 1, for all 1 ≤ j ≤ n.

Let (X0 , X1 , . . . ) be a Markov chain on {1, . . . , n} with transition matrix P . For t ≥ 0 we will
encode the distribution of Xt in the vector
x(t) = (x1(t) , . . . , xn(t) ) = (P(Xt = 1), . . . , P(Xt = n)) ∈ ∆n

where ∆n is the “n-simplex”

∆n def= { x ∈ Rn | Σ_{i=1}^n xi = 1 and xi ≥ 0 for all i }.

Proposition 2.1
For all t ≥ 0
x(t+1) = P x(t) and consequently, x(t) = P t x(0) .

Proof. Let i ∈ {1, . . . , n}.


xi(t+1) = P(Xt+1 = i) = Σ_{j=1}^n P(Xt+1 = i | Xt = j) P(Xt = j) = Σ_{j=1}^n Pi,j xj(t) = (P x(t) )i .


Corollary 2.1
Let P be a stochastic matrix. Then

• For all x ∈ ∆n , P x ∈ ∆n .

• For all t ≥ 1, P t is stochastic.
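To make Proposition 2.1 and Corollary 2.1 concrete, here is a minimal sketch (with a made-up 3-state transition matrix, columns summing to 1 as in our convention) that iterates x(t+1) = P x(t):

```python
import numpy as np

# Made-up 3-state transition matrix; column j is the distribution of the
# next state given that the current state is j (columns sum to 1).
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.4]])

x = np.array([1.0, 0.0, 0.0])      # x(0): start in state 1 with probability 1
for t in range(5):
    print(t, x, x.sum())           # each x(t) stays in the simplex
    x = P @ x                      # x(t+1) = P x(t)
```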

2.2 Invariant measures and the Perron-Frobenius Theorem


We will be interested in the distribution of Xt for large t, that is, the limit of x(t) = P t x(0) . As
we will see, under suitable conditions on the matrix P , x(t) converges to some µ ∈ ∆n as t → ∞.
In that case, by taking the t → ∞ limit in x(t+1) = P x(t) we get µ = P µ. This motivates the
following definition:
Definition 2.3 (Invariant measure)
A vector µ ∈ ∆n is called an invariant measure for the transition matrix P if µ = P µ, i.e. if
µ is an eigenvector of P associated with the eigenvalue 1.

Theorem 2.1 (Perron-Frobenius, stochastic case)
Let P be a stochastic matrix such that there exists k ≥ 1 such that all the entries of P k are
strictly positive. Then the following holds:

(i) 1 is an eigenvalue of P and there exists an eigenvector µ ∈ ∆n associated to 1.


(ii) The eigenvectors associated to 1 are unique up to scalar multiple (i.e. Ker(P − Id) =
Span(µ)).
(iii) For all x ∈ ∆n , P t x → µ as t → ∞.

Theorem 2.1 is proved in the next section. Theorem 2.1 tells us that there is a unique µ ∈ ∆n
such that P µ = µ. We call µ the Perron-Frobenius eigenvector of P .

Remark 2.1. There exists a stronger version of the Perron-Frobenius Theorem which does not
require the columns of P to sum to 1, see for instance Theorem 1.1 in [2]. The proof is however
more involved.
Corollary 2.2
Let P be a stochastic matrix such that there exists k ≥ 1 such that all the entries of P k are
strictly positive. Then there exists a unique invariant measure µ and, for any initial condition
x(0) ∈ ∆n ,
x(t) = P t x(0) → µ as t → ∞.

Corollary 2.2 tells us that the Markov chain “forgets” its initial condition to converge to its
invariant measure µ. We say that the chain is “mixing”.
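A minimal sketch (assuming the same column-stochastic convention, with a made-up matrix) of how one can approximate µ in practice by simply iterating x ← P x, and of this "forgetting" phenomenon:

```python
import numpy as np

def invariant_measure(P, tol=1e-12, max_iter=10_000):
    """Approximate the invariant measure of a column-stochastic matrix P
    by iterating x <- P x (power iteration), starting from the uniform vector."""
    x = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        x_new = P @ x
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x

P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.4]])
mu = invariant_measure(P)
print(mu, np.allclose(P @ mu, mu))     # mu satisfies P mu = mu

# Two different initial conditions end up at the same mu: the chain "mixes".
for x0 in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])):
    x = x0
    for _ in range(200):
        x = P @ x
    print(x)
```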

Working a little bit more, one can prove the “ergodic” Theorem, which states that µi corresponds
to the long-run fraction of time spent by the Markov chain in state i.

Theorem 2.2 (Ergodic Theorem)


Let (Xt )t≥0 be a Markov chain whose transition matrix is P . Assume that there exists k ≥ 1
such that all the entries of P k are strictly positive and let µ be the unique invariant measure
of P . Then for any initial condition X0 , we have with probability 1, for all i = 1, . . . , n:
(1/T) #{ t < T | Xt = i } → µi as T → ∞.
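A small simulation sketch (my own illustration, with a made-up positive transition matrix) comparing the empirical fractions of time to µ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up positive transition matrix (columns sum to 1).
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.4]])
n = P.shape[0]

# Simulate the chain: given X_t = j, draw X_{t+1} from column j of P.
T = 100_000
counts = np.zeros(n)
state = 0
for _ in range(T):
    counts[state] += 1
    state = rng.choice(n, p=P[:, state])

print(counts / T)   # empirical fraction of time spent in each state

# Compare to the invariant measure, obtained here from np.linalg.eig:
# the eigenvector of P for the eigenvalue 1, rescaled to sum to 1.
vals, vecs = np.linalg.eig(P)
mu = np.real(vecs[:, np.argmax(np.real(vals))])
mu /= mu.sum()
print(mu)
```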

2.3 Proof of Theorem 2.1


We first prove the theorem in the case k = 1, when Pi,j > 0 for all i, j.
Lemma 2.1
The mapping
ϕ : ∆n → ∆n , x ↦ P x
is a contraction mapping for the ℓ1 -norm: there exists c ∈ (0, 1) such that for all x, y ∈ ∆n :

‖P x − P y‖1 ≤ c ‖x − y‖1 .

Proof. First notice that ϕ is well-defined by Corollary 2.1. Let us write α def= mini,j Pi,j ∈ (0, 1).
Let x, y ∈ ∆n . We will show that ‖P x − P y‖1 ≤ (1 − α)‖x − y‖1 , i.e. ‖P z‖1 ≤ (1 − α)‖z‖1 where
z = x − y. Compute

‖P z‖1 = Σ_{i=1}^n |(P z)i | = Σ_{i=1}^n | Σ_{j=1}^n Pi,j zj |.

Since Σ_j zj = 0 we have Σ_j (Pi,j − α/n)zj = Σ_j Pi,j zj . Hence

‖P z‖1 = Σ_{i=1}^n | Σ_{j=1}^n (Pi,j − α/n)zj | ≤ Σ_{i=1}^n Σ_{j=1}^n (Pi,j − α/n)|zj | = Σ_{j=1}^n (1 − α)|zj | = (1 − α)‖z‖1 .


Let now µ be a minimizer of x ↦ ‖P x − x‖1 on ∆n .

• P µ = µ, because otherwise ‖P µ − µ‖1 > 0 and by Lemma 2.1

‖P (P µ) − P µ‖1 ≤ c ‖P µ − µ‖1 < ‖P µ − µ‖1 ,

which contradicts the optimality of µ. This proves (i).

• For all x ∈ ∆n , we have

‖P t x − µ‖1 = ‖P t x − P t µ‖1 ≤ c^t ‖x − µ‖1 → 0 as t → ∞,

which proves (iii).

• Let us now prove (ii). Let x ∈ Rn such that P x = x. Then for all t ≥ 1:

x = P t x = x1 P t e1 + · · · + xn P t en    (2)
    → x1 µ + · · · + xn µ = (x1 + · · · + xn )µ ∈ Span(µ) as t → ∞,    (3)

which proves (ii).

This proves Theorem 2.1 in the case k = 1.

In the case k > 1 we simply apply the result for k = 1 to P k . This gives that there exists
a unique µ ∈ ∆n such that P k µ = µ. Multiplying by P on both sides leads to P k (P µ) = P µ.
Since P µ ∈ ∆n we obtain that P µ = µ by uniqueness of µ. This proves (i). To prove (ii) we
consider x ∈ Rn such that P x = x. By iteration we get P k x = x which implies (using the result
on P k ) that x ∈ Span(µ). To prove (iii) we fix ℓ ∈ {0, . . . , k − 1}. Let x ∈ ∆n . By applying the
point (iii) to P k , we have
P kt P ℓ x → µ as t → ∞.
Since this holds for all ℓ ≤ k − 1 we obtain that P r x → µ as r → ∞, using the Euclidean division of r
by k.

2.4 About the condition «P k > 0»


We discuss in this section why the condition

« there exists k ≥ 1 such that all the entries of P k are strictly positive »

is needed in the Perron-Frobenius Theorem. Consider for instance the transition matrix

P = ( 0  1
      1  0 ) .

This matrix does not verify the condition above since we have, for all k ≥ 1, P k = Id2 if k is
even and P k = P otherwise.

Even though P admits an invariant measure µ = (1/2, 1/2), we see that for x0 = (1, 0), P t x0 does
not converge to µ as t → ∞. Hence (iii) of Theorem 2.1 does not hold.
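A two-line experiment (illustration only) makes this failure of (iii) visible:

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
x = np.array([1.0, 0.0])        # x0 = (1, 0)
for t in range(6):
    print(t, x)                 # alternates between (1, 0) and (0, 1)
    x = P @ x
# The iterates never approach the invariant measure (1/2, 1/2).
```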

3 Example: Google’s PageRank algorithm


3.1 The PageRank algorithm
The PageRank algorithm was invented by Larry Page and Sergey Brin [1]. The goal is to rank n
web pages in terms of “importance”. L. Page and S. Brin considered a “drunk surfer” that goes
from a page j to another page i by randomly clicking on one of the links on page j. This
can be modeled by a Markov chain with state space {1, . . . , n} and transition matrix P given by

Pi,j = 1/deg(j) if there is a link j → i, and Pi,j = 0 otherwise,

where deg(j) denotes the number of outgoing links on page j. The idea behind PageRank is to
quantify the importance of a page i by the fraction of time spent by the “drunk surfer” on it. By
Theorem 2.2 we know that this corresponds to the coefficient µi of the invariant measure µ of P .

The matrix P is however not guaranteed to satisfy the hypotheses of Theorem 2.2 and Corollary 2.2.
Brin and Page proposed to use, instead of P , the matrix

G = αP + ((1 − α)/n) 1,

where α ∈ (0, 1) is a parameter close to 1 (Google takes α ≈ 0.85) and 1 denotes the all-one
matrix. The PageRank algorithm computes µ, the Perron-Frobenius eigenvector of the matrix G,
and ranks the webpages according to their coordinates in the vector µ: the higher µi , the better
page i will be ranked.
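A minimal sketch (with a made-up 4-page link structure, illustration only) of how one might build G and compute the PageRank scores by power iteration:

```python
import numpy as np

# A tiny made-up web of n = 4 pages; links[j] lists the pages that page j
# points to (every page here has at least one outgoing link).
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = 4

# P[i, j] = 1/deg(j) if there is a link j -> i, and 0 otherwise.
P = np.zeros((n, n))
for j, targets in links.items():
    for i in targets:
        P[i, j] = 1.0 / len(targets)

alpha = 0.85
G = alpha * P + (1 - alpha) / n * np.ones((n, n))   # the "Google matrix"

# Power iteration: mu approximates the Perron-Frobenius eigenvector of G.
mu = np.full(n, 1.0 / n)
for _ in range(100):
    mu = G @ mu

print(mu)                 # PageRank scores (they sum to 1)
print(np.argsort(-mu))    # pages ordered from most to least important
```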

3.2 Ranking tennis players


The ideas behind PageRank can be applied in many different contexts, not only for ranking
webpages. In the following example we aim at ranking the following n = 54 tennis players:

Federer, Nadal, Djokovic, Murray, Del Potro, Roddick, Coria, Zverev, Ferrer, Soderling, Tsonga,
Nishikori, Raonic, Nalbandian, Wawrinka, Berdych, Hewitt, Tsitsipas, Monfils, Gonzalez,
Thiem, Ljubicic, Davydenko, Cilic, Pouille, Safin, Isner, Dimitrov, Medvedev, Ferrero, Goffin,
Bautista Agut, Sock, Gasquet, Simon, Blake, Monaco, Coric, Stepanek, Khachanov, Almagro,
Robredo, Verdasco, Anderson, Youzhny, Baghdatis, Dolgopolov, Kohlschreiber, Fognini, Melzer,
Paire, Querrey, Tomic, Basilashvili.

To do so, we have access to the “head to head” record between them (see Figure 2) in the
form of the matrix R ∈ Rn×n :

Ri,j = « number of wins of player i against player j ». (4)

We will use the approach of the previous section to rank the players. In our case, instead
of a “drunk surfer” we will consider a “drunk spectator”. At time t the value Xt ∈ {1, . . . , n}
indicates which tennis player the spectator believes to be the best. At time t + 1, the spectator
picks uniformly at random a game played by its favorite player Xt against one of the other players,
x. If the game was won by Xt , then the spectator still believes that Xt is the best: Xt+1 = Xt .
Otherwise the spectator changes his mind: Xt+1 = x.
This can be modeled by a Markov chain with transition matrix:

Pi,j = Vj /Gj if i = j, and Pi,j = Ri,j /Gj otherwise,

where Vj denotes the total number of victories of player j and where Gj denotes the total number
of games played by j:

Vj = Σ_{i=1}^n Rj,i   and   Gj = Σ_{i=1}^n (Ri,j + Rj,i ).
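A sketch of this construction (with a small made-up head-to-head matrix R, illustration only):

```python
import numpy as np

# Small made-up head-to-head matrix for 3 players:
# R[i, j] = number of wins of player i against player j.
R = np.array([[0, 6, 4],
              [2, 0, 5],
              [3, 1, 0]], dtype=float)
n = R.shape[0]

wins = R.sum(axis=1)                     # V_j: total victories of player j
games = R.sum(axis=0) + R.sum(axis=1)    # G_j: total games played by j

# "Drunk spectator" transition matrix: stay on j with probability V_j/G_j,
# move to i != j with probability R[i, j]/G_j.
P = R / games                            # broadcasting divides column j by G_j
np.fill_diagonal(P, wins / games)
print(P.sum(axis=0))                     # every column sums to 1, so P is stochastic

# Invariant measure of P = ranking scores, again by power iteration.
mu = np.full(n, 1.0 / n)
for _ in range(1000):
    mu = P @ mu
print(mu, np.argsort(-mu))               # scores and induced ranking
```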

Let µ be the “Perron-Frobenius” eigenvector of P . The vector µ is displayed in Figure 1. Applying
Corollary 2.2 and Theorem 2.2 to the matrix P we get that the “drunk spectator” will (in the t → ∞
limit) spend a fraction µi of its time thinking that player i is the best. The values (µ1 , . . . , µn )
can therefore be used to rank the players. We obtain the following order (see also Figure 1):

Federer (14.4%), Djokovic (13.7%), Nadal (13.6%), Murray (5.8%), Ferrer (3.0%), Del Potro
(2.8%), Berdych (2.5%), Roddick (2.4%), Wawrinka (2.3%), Tsonga (2.1%), Nishikori (1.6%),
Nalbandian (1.6%), Hewitt (1.5%), Monfils (1.5%), Davydenko (1.5%), Cilic (1.4%), Soderling
(1.4%), Verdasco (1.2%), Gonzalez (1.2%), Raonic (1.2%), Ljubicic (1.2%), Gasquet (1.2%),
Simon (1.1%), Thiem (1.1%), Isner (1.0%), Zverev (1.0%), Youzhny (1.0%), Robredo (0.9%),
Kohlschreiber (0.9%), Ferrero (0.9%), Stepanek (0.8%), Safin (0.8%), Dimitrov (0.8%), Almagro
(0.7%), Baghdatis (0.7%), Blake (0.7%), Anderson (0.7%), Goffin (0.7%), Coria (0.7%),
Bautista Agut (0.6%), Monaco (0.6%), Fognini (0.6%), Querrey (0.6%), Melzer (0.6%),
Dolgopolov (0.5%), Coric (0.5%), Pouille (0.4%), Tsitsipas (0.4%), Sock (0.4%), Paire (0.3%),
Medvedev (0.3%), Khachanov (0.3%), Tomic (0.2%), Basilashvili (0.1%).

References
[1] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The pagerank citation
ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.

[2] Eugene Seneta. Non-negative matrices and Markov chains. Springer Science & Business
Media, 2006.

[Two horizontal bar charts over the 54 players: (a) Percentage of wins, (b) PageRank.]

Figure 1: Comparison of the ranking by the percentage of wins (on the left) and
the ranking using PageRank.

Fe Na Dj Mu De Ro Co Zv Fe So Ts Ni Ra Na Wa Be He Ts Mo Go Th Lj Da Ci Po Sa Is Di Me Fe Go Ba So Ga Si Bl Mo Co St Kh Al Ro Ve An Yo Ba Do Ko Fo Me Pa Qu To Ba
Fe 0 15 22 14 18 21 3 3 17 16 11 7 11 11 23 20 18 1 10 12 2 13 19 9 1 10 6 7 3 10 7 8 4 18 7 10 4 4 14 1 5 11 7 6 17 7 5 14 4 4 7 3 4 1
Na 24 0 26 17 11 7 4 5 26 6 8 11 7 5 18 20 7 4 14 7 9 7 5 6 2 2 7 12 0 7 4 3 4 16 8 4 7 2 7 6 15 7 17 5 13 9 7 15 11 3 4 4 3 3
Dj 25 28 0 25 16 4 2 3 16 6 17 16 9 4 19 25 6 1 15 1 6 7 6 17 1 0 9 8 3 2 5 7 1 13 11 3 8 3 13 1 5 7 11 8 7 8 6 10 8 3 1 8 6 2
Mu 11 7 11 0 7 8 0 1 14 3 14 9 9 5 11 11 1 0 4 1 2 4 6 12 4 0 8 8 0 3 6 3 0 8 16 2 5 2 7 1 5 6 13 6 4 5 4 5 4 7 2 7 5 0
De 7 6 4 3 0 4 0 2 7 4 5 6 3 1 4 5 2 1 2 2 4 1 4 11 0 1 8 6 0 1 2 3 1 7 5 2 1 1 3 3 4 2 5 7 4 4 5 7 1 5 1 3 2 1
Ro 3 3 5 3 1 0 5 0 4 2 2 1 1 4 1 6 7 0 3 9 0 7 5 1 0 4 4 1 0 5 0 0 1 3 2 9 1 0 7 0 2 11 10 2 3 3 1 4 2 10 0 6 1 0
Co 0 1 2 0 0 0 0 0 4 1 0 0 0 2 0 2 0 0 0 5 0 3 2 0 0 1 0 0 0 3 0 0 0 1 0 1 2 0 0 0 1 3 2 0 6 0 0 0 0 2 0 0 0 0
Zv 3 0 2 0 0 0 0 0 5 0 1 2 1 0 2 2 0 1 0 0 2 0 0 6 0 0 5 2 4 0 2 4 1 4 4 0 1 1 0 2 1 0 1 4 3 1 1 2 3 0 1 0 0 1
Fe 0 6 5 6 6 7 1 3 0 4 3 4 4 9 7 8 3 0 3 5 1 6 2 4 1 1 7 5 0 7 2 3 2 10 8 1 5 0 8 0 15 8 14 3 4 4 10 11 11 7 3 3 4 1
So 1 2 1 2 1 4 0 0 10 0 5 1 0 1 2 7 2 0 3 5 0 2 7 2 0 0 1 0 0 1 0 0 0 2 5 2 4 0 5 0 5 5 5 0 5 3 0 1 3 2 0 4 0 0
Ts 6 4 6 2 2 1 0 2 1 0 0 3 2 1 3 5 4 1 4 1 2 3 4 2 2 1 2 4 1 3 4 2 3 5 9 3 6 1 3 2 6 1 3 3 3 7 3 11 4 6 3 4 3 1
Ni 3 2 2 2 2 0 0 1 10 0 6 0 5 0 4 5 0 1 4 0 3 0 2 9 1 0 2 5 2 1 3 4 1 3 1 3 1 1 1 2 2 3 5 5 2 1 5 3 2 3 7 6 3 1
Ra 3 2 0 3 2 0 0 2 0 0 5 2 0 1 3 6 1 0 3 0 2 0 1 1 3 0 1 2 0 0 3 5 8 3 5 2 1 1 3 0 2 6 4 1 3 3 2 2 2 1 0 4 5 0
Na 8 2 1 2 3 2 2 0 5 6 1 1 0 0 3 4 3 0 1 5 0 4 7 4 0 3 2 1 0 4 0 0 0 7 2 0 3 0 2 0 4 6 0 0 2 2 0 2 2 1 1 0 0 0
Wa 3 3 5 8 3 3 0 0 7 2 5 7 5 6 0 11 2 1 3 0 3 3 1 12 1 3 1 5 0 3 3 1 1 1 4 3 4 3 3 2 6 3 3 5 3 6 2 5 5 2 9 5 1 0
Be 6 4 3 6 4 5 1 4 8 3 8 1 3 1 5 0 3 0 6 3 2 2 3 6 1 1 7 3 0 2 2 4 2 9 8 3 7 3 3 0 9 7 11 12 6 4 4 9 3 5 4 5 5 0
He 9 4 1 0 3 7 2 0 1 3 0 2 1 3 2 0 0 0 2 2 0 1 4 1 0 6 4 1 0 6 0 0 1 2 0 8 1 0 3 0 1 1 0 1 5 3 0 2 0 7 0 3 0 0
Ts 1 1 1 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 2 0 2 0 0 0 1 0 0 0 0 0 4 1 0 1 0 0 0 1 0 2 0 1 1 2 0 0 0 2 2 0 0 0 0 1
Mo 4 2 0 2 0 5 0 3 3 0 4 1 3 3 3 1 2 1 0 2 0 3 2 4 2 4 7 4 1 0 2 3 1 10 2 3 2 2 6 0 3 2 3 5 1 4 4 14 4 4 0 3 2 1
Go 1 3 2 2 3 3 1 0 5 4 1 0 1 3 5 4 5 0 0 0 0 4 0 1 0 6 1 0 0 3 0 0 0 1 1 7 6 0 1 0 2 3 3 1 4 1 1 0 0 2 0 2 0 0
Th 4 4 3 1 0 0 0 5 1 0 0 2 1 0 1 0 0 3 5 0 0 0 0 1 0 0 1 2 2 0 3 1 3 0 9 0 1 3 1 1 2 0 0 2 2 0 2 1 3 1 2 3 2 0
Lj 3 2 2 3 1 4 2 0 1 3 3 2 0 5 3 3 0 0 4 4 0 0 4 2 0 2 0 0 0 3 0 0 0 0 3 4 2 0 1 0 3 5 3 0 8 2 1 3 0 0 2 2 0 0
Da 2 6 2 4 3 1 1 0 4 4 2 0 0 5 2 9 0 0 2 6 0 4 0 2 0 4 3 0 0 3 1 0 0 2 3 1 4 0 7 0 1 5 7 1 4 2 2 3 4 6 1 1 0 0
Ci 1 2 2 3 2 2 0 1 2 0 6 6 2 2 2 6 1 1 0 1 0 1 3 0 2 0 8 4 0 2 3 4 0 2 1 1 2 7 1 1 2 4 10 6 5 5 2 5 3 7 4 6 3 0
Po 0 1 0 1 1 0 0 0 2 0 2 0 1 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 4 0 3 2 0 4 1 0 1 2 0 1 1 0 1 0 0 1 0 2 2 0 3 0 1 0
Sa 2 0 2 1 0 3 1 0 1 2 0 0 0 6 1 2 7 0 0 3 0 2 4 1 0 0 1 0 0 6 0 0 0 4 0 2 0 0 1 0 0 4 1 0 3 1 0 0 2 1 0 1 0 0
Is 2 0 2 0 4 2 0 1 2 0 3 1 5 1 3 2 2 2 4 1 1 0 3 3 0 1 0 2 0 0 2 3 5 2 3 3 3 1 2 0 2 3 1 8 2 8 0 4 2 2 1 3 3 0
Di 0 1 1 3 2 0 0 1 1 0 0 1 3 0 4 3 0 0 1 0 2 0 1 2 2 0 1 0 1 0 7 3 2 3 4 0 1 0 1 0 2 2 4 6 1 8 4 2 4 2 1 3 2 1
Me 0 0 1 1 0 0 0 0 0 0 2 2 2 0 1 0 0 4 1 0 0 0 0 0 0 0 0 1 0 0 1 0 2 0 0 0 0 1 0 0 0 0 2 0 0 0 0 1 1 0 1 1 0 0
Fe 3 2 1 0 2 0 3 0 2 1 1 1 0 3 3 0 4 0 3 4 0 3 2 0 0 6 1 0 0 0 0 0 0 1 2 3 2 0 1 0 2 3 3 1 4 1 1 1 0 4 0 2 0 0
Go 1 1 1 0 3 0 0 0 0 0 3 0 2 0 2 1 0 2 3 0 7 0 0 3 1 0 1 1 0 0 0 3 3 1 3 0 0 4 2 4 3 0 3 1 0 2 1 2 0 2 3 2 1 2
Ba 0 0 3 1 2 0 0 2 1 0 3 1 0 0 1 3 0 0 1 0 3 0 0 2 3 0 1 1 1 0 2 0 2 2 1 0 0 4 1 3 1 1 1 1 2 2 2 2 3 1 6 3 1 2
So 0 0 0 0 1 0 0 2 2 0 0 2 3 0 0 0 0 0 0 0 1 0 0 3 1 0 3 3 0 0 0 2 0 4 1 1 0 1 1 1 1 0 2 2 0 1 1 1 1 1 1 1 2 0
Ga 2 0 1 3 1 2 0 0 3 3 4 7 1 0 2 8 0 1 7 0 2 2 6 2 1 2 3 5 1 0 1 1 0 0 8 2 0 1 1 0 4 2 7 7 6 1 3 2 2 3 6 3 8 0
Si 2 1 1 2 3 2 0 0 2 2 3 0 1 1 3 7 4 0 7 0 2 2 5 6 3 0 0 5 1 0 2 5 2 1 0 1 4 2 2 1 3 1 2 1 4 3 1 5 5 4 6 6 3 3
Bl 1 3 0 1 2 3 1 0 2 1 0 1 0 2 0 2 1 0 2 3 0 2 7 2 0 2 0 0 0 1 0 0 0 2 2 0 2 0 4 0 1 4 2 1 2 1 0 1 1 1 0 6 0 0
Mo 0 1 0 2 1 1 0 0 4 1 0 2 2 1 1 0 0 0 5 0 0 1 1 2 1 2 1 3 0 2 1 1 1 1 3 0 0 0 2 0 5 3 7 2 1 1 0 2 3 7 3 5 1 0
Co 2 2 0 2 1 0 0 3 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 2 0 1 0 3 0 0 3 2 1 0 0 0 0 0 1 1 1 1 1 1 1 1 2 0 0 2 1 1 3
St 2 0 1 2 3 1 0 0 3 2 1 0 0 1 4 1 1 0 3 4 0 2 4 3 0 2 3 0 0 3 0 0 0 3 2 3 5 0 0 0 4 2 4 2 4 3 2 6 2 0 1 1 3 0
Kh 0 0 1 0 1 0 0 1 2 0 0 2 0 0 0 2 0 0 0 0 1 0 0 0 2 0 4 0 1 0 1 2 0 0 2 0 0 0 0 0 0 1 0 0 0 1 0 1 0 0 0 2 0 2
Al 0 1 0 1 0 0 1 0 1 3 1 1 0 2 3 4 2 0 3 0 0 2 3 1 0 3 2 1 0 4 3 0 2 3 2 1 6 0 1 0 0 8 5 1 1 2 3 7 4 3 2 5 0 0
Ro 1 0 2 2 0 0 1 2 2 1 1 0 0 3 6 4 1 0 3 2 1 1 2 3 0 6 1 3 0 2 1 0 1 3 5 3 3 0 5 0 1 0 6 1 3 3 0 5 5 4 1 4 0 1
Ve 0 3 4 3 1 3 0 2 7 2 2 2 3 3 3 4 0 0 3 3 4 1 2 5 1 0 1 3 1 3 3 4 0 8 3 0 5 0 3 2 7 5 0 5 2 0 3 4 4 6 0 1 1 0
An 1 0 1 2 0 2 0 0 3 0 0 4 1 0 4 0 2 1 1 0 7 0 1 1 1 0 4 2 0 0 0 0 2 4 4 0 1 4 1 1 0 0 4 0 1 2 2 4 4 2 1 8 1 3
Yo 0 4 3 0 0 2 2 0 5 1 4 1 1 2 3 6 2 1 3 1 0 1 2 5 0 0 2 0 1 1 0 0 0 5 8 2 4 0 4 0 5 2 0 2 0 4 3 3 1 5 0 0 2 0
Ba 1 1 0 3 2 1 0 1 2 2 0 4 1 3 0 3 2 0 1 0 1 4 2 1 1 1 0 1 0 2 4 1 0 3 3 2 1 0 2 0 1 3 3 1 5 0 1 5 1 1 3 2 1 1
Do 0 2 0 0 0 0 0 0 4 1 3 1 1 1 2 2 0 0 0 1 1 0 1 1 0 0 0 1 1 1 1 0 1 0 3 2 1 2 1 2 2 1 0 2 1 4 0 1 5 1 0 2 6 0
Ko 0 1 2 1 2 2 0 3 3 4 1 0 1 1 0 2 2 0 2 0 2 0 2 7 1 0 4 0 1 3 1 3 1 2 5 2 2 2 2 3 3 3 6 0 7 3 3 0 7 4 4 2 1 1
Fo 0 4 0 3 1 0 0 1 0 0 2 1 0 0 1 2 2 0 4 1 1 0 1 1 2 1 0 2 1 0 1 7 1 2 0 0 2 2 2 0 3 5 3 1 3 3 1 2 0 0 1 1 2 0
Me 1 1 1 0 1 0 0 1 2 0 0 1 2 1 2 2 0 0 1 2 1 5 1 3 0 4 3 0 0 3 3 2 0 2 2 0 1 0 2 0 3 4 3 1 2 1 1 0 3 0 1 1 0 0
Pa 0 0 1 0 1 0 0 0 1 0 1 2 1 0 3 0 0 1 1 0 0 0 1 1 1 0 0 2 0 1 3 0 0 1 3 1 1 0 0 1 0 4 0 0 2 2 1 3 3 0 0 0 2 3
Qu 0 1 2 2 1 2 0 0 1 0 4 4 2 0 2 1 2 0 0 0 1 0 1 1 2 0 5 0 0 0 1 0 0 1 2 1 1 0 2 0 2 1 4 8 3 3 4 2 0 2 0 0 0 2
To 0 0 0 0 0 0 0 0 2 1 0 2 0 0 1 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 2 1 1 2 0 1 0 1 1 0 0 0 6 4 0 0 4 2 2 0 0 3 0 0
Ba 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 2 0 1 0 0 2 0 1 0 0 1 0 1 0 0 1 0 0 0 0 0 0

Figure 2: The matrix R given by (4)
