

INFORMATION THEORY AND CODING

UNIT-1
1. Introduction
2. Elements of Digital Communication
3. Measure of information
4. Average information content (entropy) in long independent sequences
 INTRODUCTION :

The block diagram of an information system can be drawn as shown in the figure. The
meaning of the word "information" in information theory is "message" or "intelligence".
This message may be an electrical message such as voltage, current or power, a speech
message, a picture message such as television, or a music message. A source which produces
these messages is called an "information source".
 Information is a measure of the predictability and complexity of a message.
ELEMENTS OF DIGITAL COMMUNICATION
 FATHER OF INFORMATION THEORY

In 1948, C.E. SHANNON, known as the "Father of Information Theory", published a
treatise on the mathematical theory of communication in which he established basic theoretical
bounds for the performance of communication systems. Shannon's theory is based on
probabilistic models for information sources and communication channels. In the forthcoming
sections, we present some of the important aspects of Shannon's work.
 MEASURE OF INFORMATION

In order to know and compare the "information content" of various messages produced by an information
source, a measure is necessary to quantify that information content. For this, let us consider an
information source producing an independent sequence of symbols from the source alphabet S = {s1, s2, … sq} with
probabilities P = {p1, p2, … pq} respectively.
Let SK be a symbol chosen for transmission at any instant of time with a probability equal to PK. Then the
"Amount of Information" or "Self-Information" of the message SK (provided it is correctly identified by the
receiver) is given by
IK = log(1/PK)
If the base of the logarithm is 2, then the units are called "BITS", which is the short form of "Binary Units".
If the base is "10", the units are "HARTLEYS" or "DECITS". If the base is "e", the units are "NATS", and if the
base, in general, is "r", the units are called "r-ary units".
The most widely used unit of information is "BITS", where the base of the logarithm is 2. Throughout this
book, log to the base 2 is simply written as log and the units can be assumed to be bits, unless otherwise
specified.
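
As a quick check of the definition IK = log(1/PK) and of the different units, the following minimal Python sketch (not part of the original text; the function name self_information is an illustrative choice) evaluates the self-information of one symbol in several bases:

import math

def self_information(p, base=2):
    # Self-information I = log(1/p); base 2 gives bits, base e gives nats,
    # base 10 gives Hartleys/decits, and a general base r gives r-ary units.
    return math.log(1.0 / p, base)

p = 0.25
print(self_information(p, 2))        # 2.0 bits
print(self_information(p, math.e))   # about 1.386 nats
print(self_information(p, 10))       # about 0.602 Hartleys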
 JUST TO GET THE CONCEPT OF THE “AMOUNT OF
INFORMATION”, LET US CONSIDER THE FOLLOWING EXAMPLES

1. The sun rises in the east: P = 1, I = 0, a universal truth that is already known.
2. There will be an earthquake in Shankarghatta: P ≈ 0, I → ∞, an almost impossible event.
From the above examples, we can conclude that the message associated with the event least likely to occur
contains the most information. This statement can be confirmed with another numerical example.
Example 1.1: The binary symbols '0' and '1' are transmitted with probabilities ¼ and ¾ respectively. Find
the corresponding self-information of each symbol.
Solution

Self-information in a '0' = I0 = log(1/P0) = log 4 = 2 bits.

Self-information in a '1' = I1 = log(1/P1) = log(4/3) = 0.415 bits.
Thus, it can be observed that more information is carried by a less likely message.
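
The arithmetic of Example 1.1 can be reproduced with a few lines of Python (a sketch added here purely for illustration):

import math

# Example 1.1: binary symbols with P(0) = 1/4 and P(1) = 3/4
p0, p1 = 0.25, 0.75

I0 = math.log2(1 / p0)   # log2(4)   = 2.000 bits
I1 = math.log2(1 / p1)   # log2(4/3) = 0.415 bits

print(f"I0 = {I0:.3f} bits, I1 = {I1:.3f} bits")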
PROPERTIES OF INFORMATION

A logarithmic expression is chosen for measuring information for the following reasons:

1. The information content or self-information of any message cannot be negative. Each message
must contain a certain amount of information.

2. The lowest possible self-information is "zero", which occurs for a sure event, since P (sure event) =
1.
3. If the information content of a message is high, its probability of occurrence is low.
4. If the information content of a message is low, its probability of occurrence is high.
5. If the receiver already knows the message being transmitted, the information it carries is zero.
 PROPERTIES OF INFORMATION: continued

6. If the source emits two independent messages m1 & m2 with information content I1 & I2
respectively, then the total information is equal to the sum of the individual information content.
Let I1 = log2(1/P(m1))
I2 = log2(1/P(m2))
Itotal = log2(1/P(m1m2))
Since m1 and m2 are independent, P(m1m2) = P(m1)P(m2), so
Itotal = log2(1/P(m1)) + log2(1/P(m2))
Itotal = I1 + I2
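
This additivity property can be verified numerically with a small Python sketch (the two probabilities below are assumed purely for illustration):

import math

# Two statistically independent messages m1 and m2 (illustrative probabilities).
p_m1, p_m2 = 0.5, 0.125

I1 = math.log2(1 / p_m1)                 # information in m1 -> 1.0 bit
I2 = math.log2(1 / p_m2)                 # information in m2 -> 3.0 bits
I_total = math.log2(1 / (p_m1 * p_m2))   # P(m1 m2) = P(m1) P(m2) by independence

print(I1 + I2 == I_total)                # True: Itotal = I1 + I2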
 ZERO-MEMORY SOURCE:

It represents a model of a discrete information source emitting a sequence of symbols from a
fixed finite source alphabet S = {s1, s2, …. sq}. Successive symbols are selected according to some
fixed probability law and are statistically independent of one another. This means that there is no
connection between any two symbols and that the source has no memory. Such sources are
called "memoryless" or "zero-memory" sources.
 AVERAGE INFORMATION CONTENT (ENTROPY) OF SYMBOLS IN
LONG INDEPENDENT SEQUENCES
Let us consider a zero-memory source producing independent sequences of symbols. While the receiver
of these sequences may interpret the entire message as a single unit, communication systems often have to deal
with individual symbols.
Let us consider the source alphabet S = {S1, S2, …. Sq} with probabilities P = {P1, P2, … Pq} respectively.
Let us consider a long independent sequence of length L symbols. This long sequence contains
P1L number of messages of type S1,

P2L number of messages of type S2, …, and PqL number of messages of type Sq.

From the equation of the measure of information, the self-information of S1 is I1 = log(1/P1) bits,
and similarly for the remaining symbols S2, …, Sq.
 AVERAGE INFORMATION CONTENT (ENTROPY) OF
SYMBOLS IN LONG INDEPENDENT SEQUENCES
continued….

Itotal = P1L log(1/P1) + P2L log(1/P2) + ……. + PqL log(1/Pq) bits
= L Σ Pi log(1/Pi) bits.
Average self-information = Itotal / L
= Σ Pi log(1/Pi) bits.
The average self-information, also called ENTROPY, is given by
H(S) = Σ Pi log(1/Pi) bits/message symbol
Thus H(S) represents the “average uncertainty” or the “average amount of surprise” of the source.
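
A minimal Python sketch of this entropy formula follows (added for illustration; the probability values are assumed, not taken from the text):

import math

def entropy(probs):
    # H(S) = sum over i of Pi * log2(1/Pi), in bits per message symbol.
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# Illustrative source alphabet probabilities.
P = [0.5, 0.25, 0.125, 0.125]
print(entropy(P))   # 1.75 bits/message symbol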
INFORMATION RATE:
 If the time rate at which the source X emits symbols is r symbols per second, the information rate R of the source
is given by R = rH(X) bits/second,
or
 equivalently, it is the product of the average self-information content per symbol and the rate of symbols emitted by the source
at a fixed time rate “rs” symbols per second.
The average source information rate Rs in bits/sec is
Rs = H(S) rs bits/sec or bps.
Here R is the information rate, H(S) is the entropy or average information, and r is the rate at which symbols
are generated.
The information rate is expressed as the average number of information bits per second. It is
calculated as
R = r (symbols/second) × H(X) (information bits/symbol), giving R in information bits/second.
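
A short Python sketch of this rate calculation (the symbol rate and probabilities below are assumed purely for illustration):

import math

def entropy(probs):
    # Average information H(S) in bits per symbol.
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

P = [0.5, 0.25, 0.125, 0.125]   # illustrative symbol probabilities
r_s = 1000                      # assumed symbol rate, symbols/second

R = r_s * entropy(P)            # information rate R = rs * H(S)
print(R)                        # 1750.0 bits/second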
