
Department of Electronics and Communication

Subject: Information Theory And Coding Sem-V Term-I AY.2022-23

_____________________________________________________________________________

Unit – 1: Information Theory


1.1 Introduction:
 Communication
Communication involves the transmission of information from one point to another through a succession of processes.
 Basic elements of every communication system
o Transmitter
o Channel and
o Receiver

 Information sources are classified as shown below:

(Figure: Information Source → Analog / Discrete)

 Source definitions
Analog: emits a continuous-amplitude, continuous-time electrical waveform.
Discrete: emits a sequence of letters or symbols.
The output of a discrete information source is a string or sequence of symbols.


An analog information source can be converted into a discrete information source through the processes of sampling and quantizing.
Discrete information sources are characterized by the following parameters:
1) Source alphabet
2) Symbol rate
3) Source alphabet probabilities
4) Probabilistic dependence of symbols in a sequence

Probability of a symbol (example)

X = (x1, x2, x3, x4, x1, x2, x2, x3, x4, x4)
P(x1) = 2/10 = 0.2
P(x2) = 3/10 = 0.3
P(x3) = 2/10 = 0.2
P(x4) = 3/10 = 0.3
P = probability; P(x1) = probability of the letter x1.
Information is inversely proportional to probability.
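As a quick illustration (added here, not part of the original notes), the following Python sketch estimates the symbol probabilities of the sample sequence above and prints the self-information log2(1/p) of each symbol, showing that less probable symbols carry more information.

from collections import Counter
from math import log2

# Sample sequence from the notes: x1..x4 emitted 10 times in total
sequence = ["x1", "x2", "x3", "x4", "x1", "x2", "x2", "x3", "x4", "x4"]

counts = Counter(sequence)
L = len(sequence)

for symbol in sorted(counts):
    p = counts[symbol] / L            # estimated probability of the symbol
    info = log2(1 / p)                # self-information in bits
    print(f"P({symbol}) = {p:.1f}, I({symbol}) = {info:.3f} bits")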

The basic ideas that form the foundation of modern information theory were given by the great American mathematician, electrical engineer, and computer scientist Claude E. Shannon in his paper “A Mathematical Theory of Communication”, published in The Bell System Technical Journal in October 1948.

1.2 Measure of Information:


To measure the information content of a message quantitatively, we are required to arrive at an intuitive concept of the amount of information.
Consider the following examples:
A trip to Mercara (Coorg) in the winter time during evening hours,
1. It is a cold day
2. It is a cloudy day
3. Possible snow flurries

Amount of information received is obviously different for these messages.


o Message (1) contains very little information, since the weather in Coorg is
‘cold’ for most of the time during the winter season.
o The forecast of a ‘cloudy day’ contains more information, since it is not an
event that occurs often.
o In contrast, the forecast of ‘snow flurries’ conveys even more
information, since the occurrence of snow in Coorg is a rare event.

On an intuitive basis, then, with a knowledge of the occurrence of an event, what can be said about the amount of information conveyed?
It is related to the probability of occurrence of the event.
What do we conclude from the above example with regard to the quantity of information? The message associated with the event ‘least likely to occur’ contains the most information.
The above concepts can now be formalized in terms of probabilities as follows:
Say that an information source emits one of ‘q’ possible messages m1, m2, ……, mq with p1, p2, ……, pq as their probabilities of occurrence.
Based on the above intuition, the information content of the k-th message can be written as

I(mk) ∝ 1/pk

Also, to satisfy the intuitive concept of information, I(mk) must approach zero as pk approaches 1.
Therefore,

I(mk) > I(mj)   if pk < pj
I(mk) → 0       as pk → 1        ------ (I)
I(mk) ≥ 0       when 0 < pk < 1

Another requirement is that when two independent messages are received, the total information content is the sum of the information conveyed by each of the messages. Thus we have

I(mk & mq) = I(mk) + I(mq)        ------ (II)

 We can therefore define a measure of information as

I(mk) = log (1/pk)        ------ (III)

Unit of information measure


Base of the logarithm will determine the unit assigned to the information content.
Natural logarithm base : ‘nat’
Base - 10 : Hartley / decit
Base - 2 : bit


I = log2 (1/P)  bit
I = loge (1/P)  nat
I = log10 (1/P)  Hartley (decit)

Why use the binary digit as the unit of information?

It is based on the fact that if two possible binary digits occur with equal probability (p1 = p2 = ½), then the correct identification of the binary digit conveys one bit of information:
I(m1) = I(m2) = –log2(½) = 1 bit
 One bit is the amount of information that we gain when one of two possible and equally likely events occurs.

Illustrative Example

HW: Calculate I for the above messages in nats and Hartley
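As an aid for the homework above, here is a small Python sketch (my own addition) that evaluates I = log(1/p) in bits, nats and Hartleys for a few probabilities; the probabilities 0.2 and 0.3 are taken from the sample sequence of Section 1.1.

from math import log, log2, log10

def information(p):
    """Return the information content of an event of probability p
    in bits (base 2), nats (base e) and Hartleys/decits (base 10)."""
    return log2(1 / p), log(1 / p), log10(1 / p)

for p in (0.2, 0.3, 0.5):
    bits, nats, hartleys = information(p)
    print(f"p = {p}: {bits:.3f} bit, {nats:.3f} nat, {hartleys:.3f} Hartley")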

Types of source:

1. Discrete memoryless source: a source in which each symbol is generated independently of the previously emitted symbols.

2. Binary memoryless source: a discrete memoryless source that emits the symbols 0 and 1 with probabilities P0 and P1 = 1 − P0. Its entropy is

H(X) = P0 log2 (1/P0) + P1 log2 (1/P1)
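A minimal sketch of the binary-source entropy formula above, written in Python for illustration (the function name binary_entropy is my own choice):

from math import log2

def binary_entropy(p0):
    """Entropy of a binary memoryless source with P(0) = p0, P(1) = 1 - p0."""
    p1 = 1 - p0
    if p0 in (0.0, 1.0):          # lim p*log2(1/p) = 0 as p -> 0
        return 0.0
    return p0 * log2(1 / p0) + p1 * log2(1 / p1)

print(binary_entropy(0.5))   # 1.0 bit/symbol (the maximum)
print(binary_entropy(0.1))   # about 0.469 bit/symbol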

1.3 Entropy and Rate of Information of an Information Source / Model of a Markov Source

Average Information Content of Symbols in Long Independent Sequences

Suppose that a source is emitting one of M possible symbols s1, s2, ….., sM in a statistically independent sequence.
Let P1, P2, ......, PM be the probabilities of occurrence of the M symbols respectively. Suppose further that during a long period of transmission a sequence of L symbols has been generated.

On average, s1 will occur L·P1 times,
s2 will occur L·P2 times,
: :
si will occur L·Pi times.

The contribution of the i-th symbol to the total information content of the sequence is

I(si) = Pi × L × log2 (1/Pi)  bits
 Total information content of the message = sum of the contributions of all the symbols.

Consider X = {AAABBCABCD}, a sequence of length L = 10.

P(A) = 4/10 = 0.4,  P(B) = 3/10 = 0.3,  P(C) = 2/10 = 0.2,  P(D) = 1/10 = 0.1

I_total = P(A)·L·log2(1/P(A)) + P(B)·L·log2(1/P(B)) + P(C)·L·log2(1/P(C)) + P(D)·L·log2(1/P(D))

Source entropy:  H(X) = I_total / L

H(X) = P(A) log2(1/P(A)) + P(B) log2(1/P(B)) + P(C) log2(1/P(C)) + P(D) log2(1/P(D))

The average information content per symbol is also called the source entropy. It tells how much information, on average, each emitted symbol carries.

H(X) = Σ (K=1 to M) PK log2 (1/PK)   bits/symbol
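The entropy definition can be checked numerically. The sketch below (an illustration added here) computes H(X) for the example sequence X = AAABBCABCD directly from the symbol counts and verifies that I_total / L equals H(X):

from collections import Counter
from math import log2

X = "AAABBCABCD"
L = len(X)
counts = Counter(X)

# Total information content of the message (sum of the contributions of all symbols)
I_total = sum(n * log2(L / n) for n in counts.values())   # n/L = P(symbol)

# Source entropy: average information content per symbol
H = sum((n / L) * log2(L / n) for n in counts.values())

print(f"I_total = {I_total:.3f} bits, H(X) = {H:.3f} bits/symbol")
print(f"I_total / L = {I_total / L:.3f}  (equals H)")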


Properties of Entropy
1) I(sk) = 0 for Pk = 1
2) I(sk) ≥ 0
3) I(sk) > I(si) for Pk < Pi
4) H(S) = log2 K if and only if Pk = 1/K for all K, i.e., all symbols in the alphabet are equiprobable. This upper bound on entropy corresponds to maximum uncertainty.

Maximum Entropy

Let p1, p2, ..., pM be the source symbol probabilities and let q1, q2, ..., qM be any other probability distribution on the same alphabet, so that Σ qK = 1 as well as Σ PK = 1. Using the inequality ln x ≤ x − 1 (the curves of log x and x − 1 touch only at x = 1):

Σ (K=1 to M) PK log2 (qK/pK) ≤ (1/ln 2) Σ (K=1 to M) pK (qK/pK − 1)

Σ (K=1 to M) PK log2 (qK/pK) ≤ (1/ln 2) Σ (K=1 to M) (qK − pK) = 0

Hence

Σ (K=1 to M) PK log2 (qK/pK) ≤ 0

Expanding the logarithm,

Σ PK log2 qK + Σ PK log2 (1/PK) ≤ 0

Σ PK log2 (1/PK) ≤ − Σ PK log2 qK

Σ PK log2 (1/PK) ≤ Σ PK log2 (1/qK)

Now let qK = 1/M for all K, i.e., all symbols in the alphabet are equally likely:

Σ PK log2 (1/PK) ≤ Σ PK log2 M = log2 M · Σ PK

Since Σ PK = 1,

Σ PK log2 (1/PK) ≤ log2 M

H(X)max = log2 M
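The bound H(X)max = log2 M can be verified numerically; the following sketch (added for illustration) compares the entropy of a few example distributions over M = 4 symbols against log2 4 = 2 bits:

from math import log2

def entropy(probs):
    """Entropy in bits of a discrete distribution (zero-probability terms contribute 0)."""
    return sum(p * log2(1 / p) for p in probs if p > 0)

M = 4
for probs in ([0.25, 0.25, 0.25, 0.25],   # equiprobable -> maximum entropy
              [0.5, 0.25, 0.125, 0.125],
              [0.7, 0.1, 0.1, 0.1]):
    print(f"H = {entropy(probs):.3f} bits  <=  log2(M) = {log2(M):.3f} bits")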

1.4 The average information associated with extremely unlikely and extremely likely messages, and the dependence of H on the message probabilities

Consider the situation where you have just two messages with probabilities p and (1 − p).
The average information per message is

H = p log2 (1/p) + (1 − p) log2 (1/(1 − p))

At p = 0, H = 0, and at p = 1, H = 0 again.
The maximum value of H is easily obtained as

Hmax = ½ log2 2 + ½ log2 2 = log2 2 = 1

 Hmax = 1 bit/message

(Plot of H versus p: H rises from 0 at p = 0 to its maximum of 1 bit at p = ½ and falls back to 0 at p = 1.)

The above observation can be generalized for a source with an alphabet of M symbols.

Entropy attains its maximum value when the symbol probabilities are equal, i.e., when P1 = P2 = ... = PM = 1/M:

Hmax = Σ PM log2 (1/PM) = Σ (1/M) log2 M = log2 M

 Hmax = log2 M bits/symbol

Information rate
If the source is emitting symbols at a fixed rate of rs symbols/sec, the average source information rate R is defined as

R = rs · H  bits/sec


 Illustrative Examples
1. Consider a discrete memoryless source with source alphabet A = {s0, s1, s2} with
respective probabilities p0 = ¼, p1 = ¼, p2 = ½. Find the entropy of the source.
Solution: By definition, the entropy of a source is given by

H = Σ (i=1 to M) pi log2 (1/pi)  bits/symbol

For this example,

H(A) = Σ (i=0 to 2) pi log2 (1/pi)

Substituting the values given, we get

H(A) = p0 log2 (1/p0) + p1 log2 (1/p1) + p2 log2 (1/p2)
     = ¼ log2 4 + ¼ log2 4 + ½ log2 2
     = 3/2 bits/symbol

If r = 1 symbol per second, then

H′(A) = r · H(A) = 1.5 bits/sec


2. An analog signal is band-limited to B Hz, sampled at the Nyquist rate, and the samples are
quantized into 4 levels. The quantization levels Q1, Q2, Q3, and Q4 (messages) are assumed
independent and occur with probabilities P1 = P4 = 1/8 and P2 = P3 = 3/8. Find the information rate of the source.

Solution: By definition, the average information H is given by

H = P1 log2 (1/P1) + P2 log2 (1/P2) + P3 log2 (1/P3) + P4 log2 (1/P4)

Substituting the values given, we get

H = (1/8) log2 8 + (3/8) log2 (8/3) + (3/8) log2 (8/3) + (1/8) log2 8
  ≈ 1.8 bits/message.

Information rate of the source by definition is


R = rs · H
Since the signal is sampled at the Nyquist rate, rs = 2B samples/sec, so
R = 2B × 1.8 = 3.6B bits/sec
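A quick numerical check of Example 2 (my own sketch; the bandwidth B is kept as a symbolic factor):

from math import log2

probs = [1/8, 3/8, 3/8, 1/8]                       # P1..P4 from the example
H = sum(p * log2(1 / p) for p in probs)            # average information per message
print(f"H = {H:.3f} bits/message")                 # about 1.811, i.e. ~1.8

# Nyquist-rate sampling of a signal band-limited to B Hz gives rs = 2B messages/sec,
# so R = rs * H = 2B * H ~ 3.6B bits/sec
print(f"R = {2 * H:.2f} * B bits/sec")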

3. Compute the values of H and R if, in the above example, the quantization levels are chosen so that they are equally likely to occur.
Solution:
Average information per message is
H = 4 × (¼ log2 4) = 2 bits/message

and R = rs · H = 2B × 2 = 4B bits/sec


1.5 Markov Model for Information Sources

 A Markov process is a simple stochastic process in which the distribution of future states depends only on the present state and not on how it arrived in the present state.

 A random sequence has the Markov property if its distribution is determined by its current state. Any random process having this property is called a Markov random process, and an information source with this property is said to follow the Markov model.

 For observable state sequences (the state is known from the data), this leads to the Markov chain model.

 Discrete stationary Markov model: provides a statistical model for the symbol sequences emitted by a discrete source.

 It is represented in graphical form, where the states are represented by nodes of the graph and a transition between states is represented by a directed line from the initial to the final state.

 The state transition and symbol generation can also be illustrated using a tree diagram.

 Consider the following example for understanding the Markov model, with a state diagram for dependent sequences.


 A typical example of such a source is given below (a first-order Markov model, since the distribution of future states depends only on the present state and not on how it arrived in the present state):

o It is an example of a source emitting one of three symbols A, B, and C.

o The probability of occurrence of a symbol depends on the particular symbol in question and the symbol immediately preceding it.
o Residual or past influence lasts only for a duration of one symbol.
Last symbol emitted by this source

o The last symbol emitted by the source can be A, B, or C. Hence the past history can be represented by three states, one for each of the three symbols of the alphabet.
 Nodes of the source
o Suppose that the system is in state (1) and the last symbol emitted by the source was A.

o The source now emits symbol A with probability ½ and returns to state (1), OR
o the source emits symbol B with probability ¼ and goes to state (3), OR
o the source emits symbol C with probability ¼ and goes to state (2).


State transition and symbol generation can also be illustrated using a tree diagram.

1.6 Entropy and Information Rate of Markov Sources

The entropy of each state is given by

Hi = Σ (over j) pij log2 (1/pij),   where i = initial state and j = final state

This gives H1, H2, ....., one entropy per state, depending on how many states there are.
The overall entropy of the source is

H = Σ (i=1 to n) Pi Hi

where Pi is the probability of being in state i, as given in the state diagram. For a three-state source,

H = P1 H1 + P2 H2 + P3 H3

For successive stages, let N be the stage number (message length) and let GN denote the average information per symbol when messages of length N are considered. GN is a monotonically decreasing function of N. In other words, G1 is obtained when considering probabilities of messages of length 1, G2 when considering probabilities of messages of length 2 (second-order Markov model), and G3 when considering probabilities of messages of length 3 (third-order Markov model):

GN = (1/N) Σ (over i) P(mi) log2 (1/P(mi)),   N = 1 for the first stage, N = 2 for the second stage, and so on

G1: N = 1
G2: N = 2

For the given example,

H = Σ (i=1 to n) Pi Hi

H1 = p11 log2 (1/p11) + p12 log2 (1/p12) + p13 log2 (1/p13)

H2 = p21 log2 (1/p21) + p22 log2 (1/p22) + p23 log2 (1/p23)

H3 = p31 log2 (1/p31) + p32 log2 (1/p32) + p33 log2 (1/p33)

H for stage 1:  H = P1 H1 + P2 H2 + P3 H3
With P1 = P2 = P3 = 1/3:  G1 = H = (1/3) H1 + (1/3) H2 + (1/3) H3

The entropy of the source is defined as the average of the entropies of the individual states.
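To make the computation H = Σ Pi Hi concrete, here is a short Python sketch (added for illustration; the transition matrix and state probabilities below are hypothetical placeholders, not values taken from a figure in these notes):

from math import log2

def state_entropy(row):
    """Hi = sum_j pij * log2(1/pij) for one row of the transition matrix."""
    return sum(p * log2(1 / p) for p in row if p > 0)

# Hypothetical 3-state transition matrix P[i][j] = p_ij (each row sums to 1)
P = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
state_probs = [1/3, 1/3, 1/3]        # assumed state probabilities Pi

H_states = [state_entropy(row) for row in P]
H = sum(Pi * Hi for Pi, Hi in zip(state_probs, H_states))
print(f"H1..H3 = {[round(h, 3) for h in H_states]}, H = {H:.3f} bits/symbol")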


Tree diagram
 A tree diagram is a planar graph in which the nodes correspond to states and the branches correspond to transitions. Transitions between states occur once every Ts seconds. Along the branches of the tree, the transition probabilities and the symbols emitted are indicated. The tree diagram for the source considered is shown below.

Use of the tree diagram
Tree diagram can be used to obtain the probabilities of generating various symbol sequences.

Generating a symbol sequence, say AB


This sequence can be generated by any one of the following state transitions:

1 → 1 → 3
OR
2 → 1 → 3
OR
3 → 1 → 3

Therefore the probability of the source emitting the two-symbol sequence AB is given by

P(AB) = P(S1 = 1, S2 = 1, S3 = 3)
        or P(S1 = 2, S2 = 1, S3 = 3)        ----- (1)
        or P(S1 = 3, S2 = 1, S3 = 3)

Note that the three transition paths are disjoint.

Therefore P(AB) = P(S1 = 1, S2 = 1, S3 = 3) + P(S1 = 2, S2 = 1, S3 = 3)
                + P(S1 = 3, S2 = 1, S3 = 3)        ----- (2)

The first term on the RHS of equation (2) can be written as
P(S1 = 1, S2 = 1, S3 = 3)
= P(S1 = 1) P(S2 = 1 | S1 = 1) P(S3 = 3 | S1 = 1, S2 = 1)
= P(S1 = 1) P(S2 = 1 | S1 = 1) P(S3 = 3 | S2 = 1)
Recall the Markov property:
the transition probability to S3 depends on S2 but not on how the system got to S2.

Therefore, P(S1 = 1, S2 = 1, S3 = 3) = 1/3 × ½ × ¼ = 1/24

Similarly, the other terms on the RHS of equation (2) can be evaluated, giving

P(AB) = 1/12
Similarly, the probabilities of occurrence of other symbol sequences can be computed. Therefore, in general, the probability of the source emitting a particular symbol sequence can be computed by summing the
product of probabilities in the tree diagram along all the paths that yield the particular sequences of
interest.
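The path-summing procedure can be expressed in code. In the sketch below (my own addition), the transitions out of state 1 follow the description given earlier (A with probability ½ back to state 1, C with probability ¼ to state 2, B with probability ¼ to state 3); the rows for states 2 and 3 are placeholder values chosen only to illustrate the mechanics, since the original state diagram is not reproduced in this text:

# Each entry: state -> list of (symbol, probability, next_state).
# State 1 follows the description in the notes; states 2 and 3 use
# placeholder values (the original state diagram is not reproduced here).
transitions = {
    1: [("A", 1/2, 1), ("C", 1/4, 2), ("B", 1/4, 3)],
    2: [("C", 1/2, 2), ("A", 1/4, 1), ("B", 1/4, 3)],   # placeholder row
    3: [("B", 1/2, 3), ("A", 1/4, 1), ("C", 1/4, 2)],   # placeholder row
}
initial = {1: 1/3, 2: 1/3, 3: 1/3}   # initial state probabilities

def sequence_probability(seq):
    """P(seq) = sum over all paths of P(S1) * product of transition probabilities."""
    # paths holds (current_state, probability_so_far) pairs
    paths = list(initial.items())
    for symbol in seq:
        new_paths = []
        for state, prob in paths:
            for sym, p, nxt in transitions[state]:
                if sym == symbol:
                    new_paths.append((nxt, prob * p))
        paths = new_paths
    return sum(prob for _, prob in paths)

print(f"P(AB) = {sequence_probability('AB'):.4f}")

With these assumed rows the sketch prints P(AB) ≈ 0.0833, i.e. 1/12, consistent with the value obtained above.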

Illustrative Example:
Q.1 The state diagram of the stationary Markov source is shown below.
Draw the tree diagram and find:
1. The probabilities of the messages of lengths 1, 2 and 3.
2. The entropy of the source.
3. G1, G2, G3, and verify that G1 ≥ G2 ≥ G3 > H, the entropy of the source.

Solution: The given source emits one of 3 symbols A, B and C.

The tree diagram for the source outputs can be easily drawn as shown.


Messages of length 1 and their probabilities


For G1:
P(A) = ½ × ¾ = 3/8
P(B) = ½ × ¾ = 3/8
P(C): C can be emitted from the first or the second state = ½ × ¼ + ½ × ¼ = 1/8 + 1/8 = 1/4

N = 1
G1 = (1/N)[3/8 log2(8/3) + 3/8 log2(8/3) + 1/4 log2 4]
   = (1/1)[3/8 log2(8/3) + 3/8 log2(8/3) + 1/4 log2 4] = 1.561 bits/symbol

For G2, consider all two-symbol sequences:


P(AA) = ½ × ¾ × ¾ = 9/32
P(AC) = ½ × ¾ × ¼ = 3/32
P(CC) = ½ × ¼ × ¼ = 1/32
P(CB) = ½ × ¼ × ¾ = 3/32
P(CA) = ½ × ¼ × ¾ = 3/32
P(CC) = ½ × ¼ × ¼ = 1/32
P(BC) = ½ × ¾ × ¼ = 3/32
P(BB) = ½ × ¾ × ¾ = 9/32

Check for identical sequences and add their probabilities: P(CC) = 1/32 + 1/32 = 1/16; there are two sequences with probability 9/32 and four with probability 3/32.
G2 = (1/2)[(9/32 log2(32/9)) × 2 + (3/32 log2(32/3)) × 4 + 1/16 log2 16] = 1.279 bits/symbol

Similarly for G3, i.e., the third stage:


P(AAA) = ½ × ¾ × ¾ × ¾ = 27/128
P(AAC) = ½ × ¾ × ¾ × ¼ = 9/128
P(ACC) = ½ × ¾ × ¼ × ¼ = 3/128
P(ACB) = ½ × ¾ × ¼ × ¾ = 9/128
P(CCA) = ½ × ¼ × ¼ × ¾ = 3/128
P(CCC) = ½ × ¼ × ¼ × ¼ = 1/128
P(CBC) = ½ × ¼ × ¾ × ¼ = 3/128
P(CBB) = ½ × ¼ × ¾ × ¾ = 9/128
P(CAA) = ½ × ¼ × ¾ × ¾ = 9/128
P(CAC) = ½ × ¼ × ¾ × ¼ = 3/128
P(CCC) = ½ × ¼ × ¼ × ¼ = 1/128
P(CCB) = ½ × ¼ × ¼ × ¾ = 3/128
P(BCA) = ½ × ¾ × ¼ × ¾ = 9/128
P(BCC) = ½ × ¾ × ¼ × ¼ = 3/128
P(BBC) = ½ × ¾ × ¾ × ¼ = 9/128
P(BBB) = ½ × ¾ × ¾ × ¾ = 27/128

Combining the two P(CCC) terms into 1/64, there are two sequences with probability 27/128, six with probability 9/128, six with probability 3/128, and the combined CCC with probability 1/64:

G3 = (1/3)[(27/128 log2(128/27)) × 2 + (9/128 log2(128/9)) × 6 + (3/128 log2(128/3)) × 6 + 1/64 log2 64]
   = 1.140 bits/symbol

Hence, comparing the values, G1 > G2 > G3: the average information per symbol decreases at each stage.

G1 > G2 > G3 > H

 Statement
It can be stated that the average information per symbol in a message reduces as the length of the message increases.
 The generalized form of the above statement
If P(mi) is the probability of a sequence mi of N symbols from the source, with the average information content per symbol in messages of N symbols defined by

GN = − (1/N) Σ (over i) P(mi) log2 P(mi)

where the sum is over all sequences mi containing N symbols, then GN is a monotonically decreasing function of N, and in the limiting case it becomes

lim (N → ∞) GN = H bits/symbol

Recall that H is the entropy of the source.


The above example illustrates the basic concept that the average information content per symbol from a source emitting dependent sequences decreases as the message length increases.
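The monotonic decrease of GN can be reproduced numerically for Q.1. The sketch below (added here) assumes the two-state source implied by the probability calculations above: each state is initially occupied with probability ½; from state 1 the source emits A with probability ¾ and stays in state 1, or emits C with probability ¼ and moves to state 2; state 2 behaves symmetrically with B in place of A.

from math import log2
from collections import defaultdict

# Assumed two-state model consistent with the probabilities worked out above:
# (symbol, probability, next_state) for each state.
transitions = {
    1: [("A", 3/4, 1), ("C", 1/4, 2)],
    2: [("B", 3/4, 2), ("C", 1/4, 1)],
}
initial = {1: 1/2, 2: 1/2}   # initial state probabilities

def message_probabilities(N):
    """Probability of every N-symbol message, summed over all state paths."""
    probs = defaultdict(float)

    def walk(state, prob, msg):
        if len(msg) == N:
            probs["".join(msg)] += prob
            return
        for sym, p, nxt in transitions[state]:
            walk(nxt, prob * p, msg + [sym])

    for state, p0 in initial.items():
        walk(state, p0, [])
    return probs

for N in (1, 2, 3):
    probs = message_probabilities(N)
    G = sum(p * log2(1 / p) for p in probs.values()) / N
    print(f"G{N} = {G:.3f} bits/symbol")   # expected: 1.561, 1.279, 1.140

Running it reproduces G1 ≈ 1.561, G2 ≈ 1.279 and G3 ≈ 1.140 bits/symbol.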

Q.2) The state diagram of the stationary Markov source is shown below.
Draw the tree diagram and find:
1. The probabilities of the messages of lengths 1, 2 and 3.
2. The entropy of the source.
3. G1, G2, G3, and verify that G1 ≥ G2 ≥ G3 > H, the entropy of the source.
Calculate the entropy for G1, G2, G3.

Solution: Tree diagram


For G1:
P(A) = 1/3 × 1/2 = 1/6
P(B) = 1/3 × 1/4 = 1/12
P(C) = 1/3 × 1/4 = 1/12
P(A) = 1/12
P(B) = 1/12
P(C) = 1/6
P(A) = 1/12
P(C) = 1/12
P(B) = 1/6

Combining the contributions of each symbol: P(A) = 1/6 + 1/12 + 1/12 = 1/3, and similarly P(B) = P(C) = 1/3.

N = 1
G1 = (1/1)[P(A) log2(1/P(A)) + P(B) log2(1/P(B)) + P(C) log2(1/P(C))] = log2 3
G1 = 1.585 bits/symbol

For G2,
N=2
PAA=1/12
PAB=1/24
PAC=1/24

PCA=1/48
PCC=1/24
PCB=1/48
PBA=1/24
PBC=1/48
PBB=1/48
PCA=1/12
PCB=1/24
PCC=1/24
PCA=1/48
PCC=1/24
PCB=1/48
PBA=1/24
PBC=1/48
PBB=1/48

G2 = (1/2) Σ (over i) P(mi) log2(1/P(mi)), where the sum runs over all two-symbol messages and the probabilities of identical sequences are combined.


Q.3 For the Markov source shown, calculate the information rate.

By definition, the entropy of the source is given by

H = Σ (i=1 to n) Pi Hi

With n = 3,

H = P1 H1 + P2 H2 + P3 H3

H1 = p11 log2(1/p11) + p12 log2(1/p12) + p13 log2(1/p13) = 1 bit/symbol
H2 = p21 log2(1/p21) + p22 log2(1/p22) + p23 log2(1/p23) = 1.5 bits/symbol
H3 = p31 log2(1/p31) + p32 log2(1/p32) + p33 log2(1/p33) = 1 bit/symbol

Substituting H1, H2, H3 gives
H = 1.25 bits/symbol
Information rate: R = r·H = 1 × 1.25 = 1.25 bits/sec


Prepared by: Dr.Tanuja S.Dhope
