
UNIT I

CONTENT BEYOND SYLLABUS


INTRODUCTION TO INFORMATION THEORY
What is Information Theory?
Information Theory answers two fundamental questions in communication theory:
What is the ultimate data compression? (the entropy H)
What is the ultimate transmission rate of communication? (the channel capacity C)
It provides the most basic theoretical foundation of communication theory.
Moreover, Information Theory intersects with several other fields:
Physics (Statistical Mechanics)
Mathematics (Probability Theory)
Electrical Engineering (Communication Theory)
Computer Science (Algorithm Complexity)
Economics (Portfolio / Game Theory)
Electrical Engineering (Communication Theory)
In the 1940s, Shannon proved that the probability of transmission error could be made nearly
zero for all communication rates below the channel capacity.
Source (entropy H) --> Channel (capacity C) --> Destination
The capacity, C, can be computed simply from the noise characteristics (described by
conditional probabilities) of the channel.
Shannon further argued that random processes (signals) such as music and speech
have an irreducible complexity below which the signal cannot be compressed. This he named
the entropy. Shannon argued that if the entropy of the source is less than the capacity of the
channel, asymptotically (in a probabilistic sense) error-free communication can be achieved.
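As a small numerical illustration (this sketch is not part of the original notes; the source probabilities and the crossover probability are assumed values), the C program below computes the entropy H of a hypothetical binary source and the capacity C of a binary symmetric channel using the standard formula C = 1 - H(p), then checks the condition H < C under which Shannon's result promises reliable transmission at one source symbol per channel use.

/* Sketch (not from the notes): source entropy H and the capacity C of a
 * binary symmetric channel (BSC) with an assumed crossover probability. */
#include <stdio.h>
#include <math.h>

/* Entropy of a discrete distribution, in bits per symbol. */
static double entropy_bits(const double *p, int n)
{
    double h = 0.0;
    for (int i = 0; i < n; i++)
        if (p[i] > 0.0)
            h -= p[i] * log2(p[i]);
    return h;
}

int main(void)
{
    /* Hypothetical binary source emitting 0 with probability 0.9. */
    double source[] = { 0.9, 0.1 };
    double H = entropy_bits(source, 2);       /* ~0.469 bits/symbol */

    /* BSC with crossover probability p: C = 1 - H(p). */
    double p = 0.05;
    double flip[] = { p, 1.0 - p };
    double C = 1.0 - entropy_bits(flip, 2);   /* ~0.714 bits/channel use */

    printf("H = %.3f bits/symbol, C = %.3f bits/channel use\n", H, C);
    printf("Reliable transmission at one symbol per channel use: %s\n",
           H < C ? "possible" : "not possible");
    return 0;
}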
Computer Science (Kolmogorov Complexity)
Kolmogorov, Chaitin, and Solomonoff put forward the idea that the complexity of a string of
data can be defined by the length of the shortest binary computer program for computing the
string.
The complexity is the minimum description length!
This definition of complexity is universal, that is, computer independent, and is of
fundamental importance.
Kolmogorov Complexity lays the foundation for the theory of descriptive complexity.
Gratifyingly, the Kolmogorov complexity K is approximately equal to the Shannon entropy H
if the sequence is drawn at random from a distribution that has entropy H. Kolmogorov
complexity is considered to be more fundamental than Shannon entropy. It is the ultimate data
compression and leads to a logically consistent procedure for inference.
Computation vs. Communication.
Computation is communication limited, and communication is computation limited. These
become intertwined, and thus all of the developments in communication theory via
information theory should have a direct impact on the theory of computation.
New Trends in Information Theory.
Compress each of many sources and then put the compressed descriptions together into a
joint reconstruction of the sources (the Slepian-Wolf theorem).
If one has many senders sending information independently to a common receiver, what is
the channel capacity of this multiple-access channel? (the Liao and Ahlswede theorem)
If one has one sender and many receivers and wishes to communicate (perhaps different)
information simultaneously to each of the receivers, what is the channel capacity of this
broadcast channel?
If one has an arbitrary number of senders and receivers in an environment of interference and
noise, what is the capacity region of achievable rates from the various senders to the receivers?


UNIT II
CONTENT BEYOND SYLLABUS
A vocoder (voice encoder) is an analysis/synthesis system used to reproduce human
speech. In the encoder, the input is passed through a multiband filter, each band is passed
through an envelope follower, and the control signals from the envelope followers are
communicated to the decoder. The decoder applies these (amplitude) control signals to
corresponding filters in the synthesizer. Since the control signals change only slowly
compared to the original speech waveform, the bandwidth required to transmit speech can be
reduced. This allows more speech channels to share a radio circuit or submarine cable. By
encoding the control signals, voice transmission can be secured against interception.
The vocoder was originally developed as a speech coder for telecommunications
applications in the 1930s, the idea being to code speech for transmission. Transmitting the
parameters of a speech model instead of a digitized representation of the speech waveform
saves bandwidth in the communication channel; the parameters of the model change relatively
slowly, compared to the changes in the speech waveform that they describe. Its primary use in
this fashion is for secure radio communication, where voice has to be encrypted and then
transmitted. The advantage of this method of "encryption" is that no 'signal' is sent, but rather
envelopes of the bandpass filters. The receiving unit needs to be set up in the same channel
configuration to resynthesize a version of the original signal spectrum. The vocoder as both
hardware and software has also been used extensively as an electronic musical instrument.
Whereas the vocoder analyzes speech, transforms it into electronically transmitted
information, and recreates it, the Voder (from Voice Operating Demonstrator) generates
synthesized speech by means of a console with fifteen touch-sensitive keys and a pedal,
basically consisting of the "second half" of the vocoder, but with manual filter controls,
needing a highly trained operator.
The human voice consists of sounds generated by the opening and closing of the
glottis by the vocal cords, which produces a periodic waveform with many harmonics. This
basic sound is then filtered by the nose and throat (a complicated resonant piping system) to
produce differences in harmonic content (formants) in a controlled way, creating the wide
variety of sounds used in speech. There is another set of sounds, known as the unvoiced and
plosive sounds, which are created or modified by the mouth in different fashions.
The vocoder examines speech by measuring how its spectral characteristics change
over time. This results in a series of numbers representing these modified frequencies at any
particular time as the user speaks. In simple terms, the signal is split into a number of
frequency bands (the larger this number, the more accurate the analysis) and the level of
signal present at each frequency band gives the instantaneous representation of the spectral
energy content. Thus, the vocoder dramatically reduces the amount of information needed to
store speech, from a complete recording to a series of numbers. To recreate speech, the
vocoder simply reverses the process, processing a broadband noise source by passing it
through a stage that filters the frequency content based on the originally recorded series of
numbers. Information about the instantaneous frequency (as distinct from spectral
characteristic) of the original voice signal is discarded; it wasn't important to preserve this for
the purposes of the vocoder's original use as an encryption aid, and it is this "dehumanizing"
quality of the vocoding process that has made it useful in creating special voice effects in
popular music and audio entertainment.
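As a rough illustration of this analysis step (the code below is a sketch, not from the original notes; the band buffers, frame length, and band count are assumed names and values), the level of each band can be measured frame by frame to produce the "series of numbers" described above.

/* Sketch of vocoder analysis (not from the notes): measure the level of
 * each frequency band once per frame. Band-pass filtering is assumed to
 * have been done already; bands[b] holds band b of the input signal. */
#include <math.h>
#include <stddef.h>

#define NUM_BANDS 16    /* assumed; typically 8-20 bands */
#define FRAME_LEN 256   /* assumed analysis frame length in samples */

/* RMS level of one frame of one band-filtered signal. */
static double band_level(const double *band, size_t offset)
{
    double sum = 0.0;
    for (size_t i = 0; i < FRAME_LEN; i++)
        sum += band[offset + i] * band[offset + i];
    return sqrt(sum / FRAME_LEN);
}

/* One level per band per frame: these numbers change slowly compared to
 * the waveform itself, which is what makes the representation compact. */
static void analyze_frame(const double *bands[NUM_BANDS], size_t offset,
                          double levels[NUM_BANDS])
{
    for (int b = 0; b < NUM_BANDS; b++)
        levels[b] = band_level(bands[b], offset);
}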
Since the vocoder process sends only the parameters of the vocal model over the
communication link, instead of a point by point recreation of the waveform, it allows a
significant reduction in the bandwidth required to transmit speech.
Analog vocoders typically analyze an incoming signal by splitting the signal into a
number of tuned frequency bands or ranges. A modulator and carrier signal are sent through a
series of these tuned band pass filters. In the example of a typical robot voice the modulator is
a microphone and the carrier is noise or a sawtooth waveform. There are usually between 8
and 20 bands.
The amplitude of the modulator for each of the individual analysis bands generates a
voltage that is used to control amplifiers for each of the corresponding carrier bands. The
result is that frequency components of the modulating signal are mapped onto the carrier
signal as discrete amplitude changes in each of the frequency bands.
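A minimal sketch of this amplitude mapping is given below (illustrative only, not the original notes' implementation): an envelope follower tracks the level of the band-filtered modulator (speech), and that envelope scales the corresponding band of the carrier. The buffer names, the smoothing coefficient, and the assumption that band-pass filtering has already been done are all hypothetical.

/* One band of a channel vocoder (illustrative sketch, not from the notes).
 * Assumes the modulator (speech) and carrier have already been split into
 * matching band-pass-filtered buffers. */
#include <math.h>
#include <stddef.h>

typedef struct {
    double env;     /* current envelope value               */
    double coeff;   /* smoothing coefficient, 0 < coeff < 1  */
} envelope_t;

/* Track the slowly varying level of one band of the modulator. */
static double envelope_step(envelope_t *e, double sample)
{
    double level = fabs(sample);            /* rectify            */
    e->env += e->coeff * (level - e->env);  /* one-pole low-pass  */
    return e->env;
}

/* The modulator's band envelope controls the gain of the carrier band;
 * out[] accumulates the contributions of all 8-20 bands. */
static void vocode_band(envelope_t *e,
                        const double *mod_band,   /* band-filtered speech  */
                        const double *car_band,   /* band-filtered carrier */
                        double *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] += envelope_step(e, mod_band[i]) * car_band[i];
}

In a full vocoder this routine would be called once per band, with a separate noise path mixed in for the unvoiced/sibilance channel described below.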
Often there is an unvoiced band or sibilance channel. This is for frequencies that are outside
the analysis bands for typical speech but are still important in speech. Examples are words that start
with the letters s, f, ch or any other sibilant sound. These can be mixed with the carrier output
to increase clarity. The result is recognizable speech, although somewhat "mechanical"
sounding. Vocoders also often include a second system for generating unvoiced sounds, using
a noise generator instead of the fundamental frequency.
The first experiments with a vocoder were conducted in 1928 by Bell Labs engineer
Homer Dudley, who was granted a patent for it on March 21, 1939. The Voder (Voice
Operating Demonstrator) was introduced to the public at the AT&T building at the 1939-1940
New York World's Fair. The Voder consisted of a series of manually controlled oscillators,
filters, and a noise source. The filters were controlled by a set of keys and a foot pedal to
convert the hisses and tones into vowels, consonants, and inflections. This was a complex
machine to operate, but a skilled operator could produce recognizable speech.
Dudley's vocoder was used in the SIGSALY system, which was built by Bell Labs
engineers in 1943. SIGSALY was used for encrypted high-level voice communications during
World War II. Later work in this field has been conducted by James Flanagan.
Vocoder applications

Terminal equipment for Digital Mobile Radio (DMR) based systems.
Digital Trunking
DMR TDMA
Digital Voice Scrambling and Encryption
Digital WLL
Voice Storage and Playback Systems
Messaging Systems
VoIP Systems
Voice Pagers
Regenerative Digital Voice Repeaters


UNIT III
CONTENT BEYOND SYLLABUS
HAMMING DISTANCE
Hamming weight, w(c), is defined as the number of nonzero elements in a code vector.
Hamming distance, d(c1, c2), between two codewords c1 and c2 is defined as the number of
bits in which they differ.
Minimum distance, dmin, is the minimum Hamming distance between any two distinct
codewords of a code.
In information theory, the Hamming distance between two strings of equal length is
the number of positions at which the corresponding symbols are different. Put another way, it
measures the minimum number of substitutions required to change one string into the other, or
the number of errors that transformed one string into the other.
For a fixed length n, the Hamming distance is a metric on the vector space of the
words of that length, as it obviously fulfills the conditions of non-negativity, identity of
indiscernibles and symmetry, and it can be shown easily by complete induction that it satisfies
the triangle inequality as well. The Hamming distance between two words a and b can also be
seen as the Hamming weight of a - b for an appropriate choice of the subtraction (-) operator.
For binary strings a and b the Hamming distance is equal to the number of ones
(population count) in a XOR b. The metric space of length-n binary strings, with the
Hamming distance, is known as the Hamming cube; it is equivalent as a metric space to the
set of distances between vertices in a hypercube graph. One can also view a binary string of
length n as a vector in R^n by treating each symbol in the string as a real coordinate; with this
embedding, the strings form the vertices of an n-dimensional hypercube, and the Hamming
distance of the strings is equivalent to the Manhattan distance between the vertices.
The Hamming distance is named after Richard Hamming, who introduced it in his
fundamental paper on Hamming codes Error detecting and error correcting codes in 1950. It is
used in telecommunication to count the number of flipped bits in a fixed-length binary word
as an estimate of error, and therefore is sometimes called the signal distance. Hamming
weight analysis of bits is used in several disciplines including information theory, coding
theory, and cryptography. However, for comparing strings of different lengths, or strings
where not just substitutions but also insertions or deletions have to be expected, a more
sophisticated metric like the Levenshtein distance is more appropriate. For q-ary strings over
an alphabet of size q ≥ 2, the Hamming distance is applied in the case of orthogonal modulation,
while the Lee distance is used for phase modulation. If q = 2 or q = 3, both distances coincide.
The Hamming distance is also used in systematics as a measure of genetic distance.
On a grid (such as a chessboard), the points at a Lee distance of 1 constitute the von Neumann
neighborhood of that point.
The following C function will compute the Hamming distance of two integers
(considered as binary values, that is, as sequences of bits). The running time of this procedure
is proportional to the Hamming distance rather than to the number of bits in the inputs. It
computes the bitwise exclusive or of the two inputs, and then finds the Hamming weight of
the result (the number of nonzero bits) using an algorithm of Wegner (1960) that repeatedly
finds and clears the lowest-order nonzero bit.
unsigned hamdist(unsigned x, unsigned y)
{
    unsigned dist = 0, val = x ^ y;

    /* Count the number of set bits */
    while (val) {
        ++dist;
        val &= val - 1;   /* clear the lowest-order nonzero bit (Wegner's method) */
    }
    return dist;
}
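As a usage illustration tying this back to the minimum-distance definition given earlier (this helper is assumed, not part of the original notes), hamdist() can be applied pairwise to find dmin of a small block code.

/* Illustrative helper (not from the notes): minimum Hamming distance of a
 * code whose codewords are stored as unsigned integers. */
unsigned min_distance(const unsigned *code, unsigned n)
{
    unsigned dmin = (unsigned)-1;          /* larger than any real distance */
    for (unsigned i = 0; i < n; i++)
        for (unsigned j = i + 1; j < n; j++) {
            unsigned d = hamdist(code[i], code[j]);
            if (d < dmin)
                dmin = d;
        }
    return dmin;
}

/* Example: for the codewords 000, 011, 101 (i.e. 0, 3, 5), every pair
 * differs in exactly two bit positions, so min_distance returns 2. */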


INFORMATION CODING TECHNIQUES
OBJECTIVES

To introduce the fundamental concepts of information theory:
data compaction
data compression
data transmission
error detection and correction.
OBJECTIVES
To have a complete understanding of error-control coding.
To understand encoding and decoding of digital data streams.
To introduce methods for the generation of these codes and their decoding techniques.
To have a detailed knowledge of compression and decompression techniques.
To introduce the concepts of multimedia communication.


IT 2302 INFORMATION THEORY AND CODING
UNIT I
OBJECTIVES

Students will be able to understand the fundamentals of information entropy in this unit.
They will be able to understand the concepts of uncertainty, information, and entropy.
Students will understand the source coding theorem and coding techniques such as Huffman
coding and Shannon-Fano coding.
These topics will give students a clear understanding of joint and conditional entropies.
After these concepts, they will learn about channels for communication.
They will learn the channel coding theorem and the channel capacity theorem.


IT 2302 INFORMATION THEORY AND CODING
UNIT I
OUTCOME

Students know the fundamental concepts of information theory.
At the end of this unit, students are able to solve problems based on Shannon-Fano and
Huffman coding.
They also have a clear view of the fundamentals of information entropy.
They are capable of computing the channel capacity of discrete memoryless channels.
By solving these problems, they gain an understanding of channels for communication.


IT 2302 INFORMATION THEORY AND CODING
UNIT II
OBJECTIVES

In this unit, students will be able to understand adaptive Huffman coding and its related
problems.
By solving various problems, students will understand the concepts of arithmetic coding.
Students will understand the LZW algorithm through various dictionary techniques and
problems.
Students will be able to understand audio-related concepts such as masking techniques, the
psychoacoustic model, and channel vocoders.





IT 2302 INFORMATION THEORY AND CODING
UNIT II
OUTCOME

At the end of this unit, students know the concepts of adaptive Huffman coding clearly.
They are able to solve problems based on arithmetic coding.
They also understand the LZW algorithm through worked problems.
The audio-related concepts are explained to students thoroughly.


IT 2302 INFORMATION THEORY AND CODING
UNIT III
OBJECTIVES

Students will be able to understand the concepts of image and video formats.
Students will know about GIF, TIFF, SIF, CIF, and QCIF.
The JPEG standard will be explained in this unit.
Video compression principles, including I, B, and P frames and motion estimation and
compensation, will be explained to students.
They will understand the H.261 and MPEG standards.


IT 2302 INFORMATION THEORY AND CODING
UNIT III
OUTCOME

Students understand the concepts of image and video formats.
Students gain a clear understanding of GIF, TIFF, SIF, CIF, and QCIF.
They learn the concepts of JPEG in this unit.
Video compression principles, including I, B, and P frames and motion estimation and
compensation, are taught to students thoroughly.
They understand the H.261 and MPEG standards.
