
DEI FACULTY OF ENGINEERING

COMPUTER NETWORKS EEM 720


Course Teacher Dr K. Srinivas

Unit I Notes

UNIT 1
Evolution and uses of Computer Networks, Network structure, concepts of data transmission,
Analog and digital data transmission. Transmission Media: Twisted pair, Coaxial cable, Optical
Fiber, Terrestrial and Satellite Microwave, Radio.

What is a Computer Network?

 A computer network is a collection of autonomous computers interconnected by a single
technology.

 The connection can be via copper wire, fiber optic cable, coaxial cable, microwaves, infrared, or
communication satellites. The computers can be geographically located anywhere.

 A computer network is an interconnected set of autonomous computers. The term
autonomous implies that the computers can function independently of one another. However,
these computers can exchange information with each other through the communication
network system. Computer networks emerged from the convergence of two technologies:
computers and communications. The consequence of this revolutionary merger is an
integrated system that transmits all types of data and information.
 The purpose of the network is to serve users, which can be humans or processes. Network links
can be point-to-point or multipoint and implemented with several transmission media.
Information exchanged can be represented in multiple media (audio, text, video, images,
etc.). Services provided to users can vary widely.

Uses of Computer networks

• Resource Sharing
– Hardware (computing resources, disks, printers)
– Software (application software)
• Information Sharing
• Reliable backup of data
• Online collaboration
• Easy accessibility from anywhere (files, databases)
• Search Capability (WWW)
• Communication
– Email
– Message broadcast
– Video conferencing
– Internet Telephony (VoIP)
– Chat Groups
– Instant Messengers
• Remote computing
• Distributed processing (GRID Computing)
• Distance Education
• E-Commerce
• News Groups
• Internet Radio
• Tele-medicine
• Social networking
• Video on demand

Evolution of Computer Networks


In the 1960s, Multi-terminal computer systems emerged which provided users direct access to
the centralized computer through interactive terminals connected by point-to-point low-speed
data links. In this situation, a large number of users, some of them in remote locations,
could simultaneously access the centralized computer in time-division multiplexed mode. The
users could get immediate interactive feedback from the computer and correct errors
immediately. Following the introduction of on-line terminals and time-sharing operating
systems, remote terminals were used to access the central computer. Multi-terminal systems,
working in time-sharing mode, became the first step toward the development of computer
networks.

Subsequently, access to a computer from remote terminals located hundreds and sometimes thousands
of miles apart was also made possible by connecting them through telephone lines and using Modems.
Such networks allowed multiple remote users to access the shared resources of several powerful super-
computers. By the time distributed systems appeared, not only were there connections between
terminals and computers, connections between computers were also implemented.

In 1969, the US Department of Defense initiated research into connecting the computers of defense and
research centres across the US into a network. They wanted a network that could survive a nuclear
attack or any natural calamity. The network they came up with is known as ARPANET. In the 1970s,
advancements in LSI technology led to the emergence of minicomputers, which were powerful and
affordable. The ARPANET subnet consisted of these minicomputers, called Interface Message Processors
(IMPs), connected by 56 kbps transmission lines. Each node of the network consisted of a host and an
IMP. The host was connected to the IMP by a short wire, and both were housed in the same room.
The original ARPANET Design

ARPANET used packet switching. Later, experiments demonstrated that the existing ARPANET protocols
were not suitable for running over multiple networks. The TCP/IP model and protocols were developed by
Vinton Cerf and Robert Kahn in 1974 and were specifically designed to handle communication over
internetworks. Adoption of TCP/IP became increasingly important as more and more networks
were hooked to ARPANET. ARPANET joined computers of different types, running various
operating systems with different add-on modules, by implementing communications protocols common
to all computers participating in the network. Such operating systems can be considered the first true
network operating systems.

Computers became capable of exchanging data in an automatic mode, which, essentially, is the basic
mechanism of any computer network. Developers of the first networks implemented services for file
exchange, database synchronization, e-mail, and other network services that have become
commonplace.
Chronologically, Wide Area Networks (WANs) were the first to appear. WANs connected geographically
distributed computers, even those located in different cities or countries. It was in the course of the
development of WANs that many ideas fundamental to modern computer networks were introduced
and developed, such as:
 Multilayer architecture of communications protocols
 Packet-switching technology
 Packet routing in heterogeneous networks

True network operating systems, in contrast to multi-terminal ones, allowed the system not only to
distribute users but also to organize data storage. They also allowed for processing to be distributed
among several computers connected by electric links. Any network operating system is capable of
performing all functions of a local operating system.

With the emergence of minicomputers even small companies could afford these computers. This was
the origin of the distributed computing concept, with computing resources becoming distributed over
the entire enterprise. With time, the needs of computer users evolved. End users were no longer
satisfied with isolated work on a standalone computer. They needed to exchange computer data (often
automatically) with users in other branches and offices. To satisfy these requirements, the first LANs
appeared.
At first, nonstandard networking technologies were used to connect computers to the network. Network
technology is a coordinated set of software and hardware (for example, drivers, network adapters, cables
and connectors) and mechanisms of data transmission across the communications links, sufficient for
building a computer network. With the emergence of TCP/IP, many universities in the US that had research
collaborations with DARPA also adopted TCP/IP on their LANs and conveniently got hooked to the ARPANET.
During the 1980s, advancements in VLSI technology led to the emergence of microcomputers, or personal
computers. The adoption of personal computers was a powerful incentive for the development of these
technologies. PCs became ideal elements for building networks. On the one hand, they were powerful
enough to support networking software; on the other hand, they needed to pool their processing power
to solve complex tasks and to share expensive peripheral devices and disk arrays. Because of this, PCs
became prevalent in LANs, not only playing the role of clients but also performing data storage and
processing functions (i.e., becoming network servers). The addition of these new LANs increased the
scale of ARPANET, so the Domain Name System (DNS) was created to organize machines into domains
and map host names onto IP addresses.
ARPANET allowed people to share scarce computing resources through remote access. In 1973, ARPANET
crossed the Atlantic and reached Europe. In the mid-1980s, the National Science Foundation joined this
effort and created NSFNET. When ARPANET and NSFNET were connected, the network grew
exponentially. A number of regional networks joined, and connections were made to Canada, Europe,
and the Pacific. The combination of ARPANET, NSFNET, and other networks was called the Internet.
Seeing the success of ARPANET and NSFNET, a number of other networks sprang up with their own
backbone networks. ARPANET was shut down in the early 1990s, and funding for NSFNET was
discontinued in 1995, but commercial Internet backbone services replaced them, and the Internet
found its way into businesses and homes. The emergence of the World Wide Web (WWW) in the early
1990s brought millions of new nonacademic users to the Internet. Today the Internet connects
thousands of networks and billions of users around the world.

All standard LAN technologies were based on the same switching principle that turned out to be so
successful when transmitting traffic in WANs, i.e., the packet-switching principle. LAN developers
introduced many innovations affecting the organization of end-user work. Tasks such as accessing
shared network resources became significantly simpler. MANs are intended for serving large cities.
Being characterized by rather long distances between network nodes (sometimes tens of miles) they
also provide high-quality communications links and support high speed data exchange. MANs ensure
economic and efficient connection of LANs, providing them access to WANs. The most important stage
in the evolution of computer networks was the arrival of standard networking technologies, including
Ethernet, FDDI, and Token Ring. These technologies allow different types of computers to connect
quickly and efficiently.

During the late 1980s, LANs and WANs were characterized by significant differences between the length
and the quality of communications links, the complexity of the data transmission methods, data
exchange rates, the range of provided services, and scalability. Later, as a result of the close integration
of LAN, WAN, and MAN, the convergence of these technologies took place.

The trend of convergence of the different types of networks is characteristic not only for LANs and
WANs but also for other types of telecommunications networks, including telephone, radio, and TV
networks. For now, research is aimed at creating universal multiservice networks, capable of efficiently
transmitting information of any kind, including data, voice, and video.

Architecture of the Internet

Network Structure

 network edge: applications and hosts

 network core: interconnected routers, network of networks

 access networks, physical media: wired, wireless communication links

Access Networks

 Residential Access networks: Dial-up modem, Digital subscriber line, Hybrid Fiber-
Coaxial cable

 Institutional Networks

 Mobile Access Networks


Classification of Computer Networks based on Scale

Interprocessor distance   Processors located in the same   Example

1 m                       Square meter                     Personal Area Network
10 m                      Room                             Local Area Network
100 m                     Building                         Local Area Network
1 km                      Campus                           Local Area Network
10 km                     City                             Metropolitan Area Network
100 km                    Country                          Wide Area Network
1,000 km                  Continent                        Wide Area Network
10,000 km                 Planet                           The Internet

Wide Area Networks


• Span Large geographical area
• Crossing public rights of way
• Rely in part on common carrier circuits
• Alternative technologies
— Circuit switching
— Packet switching
— Frame relay

Circuit switching
• In circuit switching, a dedicated communication path is established between two stations (or
multiple end points) through the nodes of the WAN.
• The transmission path is a connected sequence of physical links between nodes.
• On each link, a logical channel is dedicated to the connection. Data generated by the source
station are transmitted along the dedicated path as rapidly as possible.
• At each node, incoming data are routed or switched to the appropriate outgoing channel
without excessive delay. However, if data processing is required, some delay is experienced.
• The classic example of circuit switching is the telephone network.

Packet switching
• Data is sent in packets (the header contains control information, e.g., source and destination addresses)
• Per-packet routing
• At each node the entire packet is received, stored, and then forwarded (store-and-forward
networks)
• No capacity is allocated; best-effort service
• The link is allocated on demand
• Different packets may follow different paths, so reordering may be required (e.g., the Internet)
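The cost of store-and-forward operation can be sketched numerically. The function below is a minimal illustration (its name and parameters are ours, not from the text): because each node must receive the entire packet before forwarding it, the transmission delay L/R is paid once per link.

```python
def store_and_forward_delay(packet_bits, rate_bps, hops, prop_delay_s=0.0):
    """Total delay for one packet crossing `hops` store-and-forward links.

    Each intermediate node receives the whole packet before forwarding it,
    so the transmission time (packet size / link rate) accrues per link.
    """
    transmission = packet_bits / rate_bps
    return hops * (transmission + prop_delay_s)

# A 1500-byte packet over three 1 Mbps links (propagation ignored):
delay = store_and_forward_delay(1500 * 8, 1_000_000, hops=3)
print(f"{delay * 1000:.0f} ms")  # 12 ms per link over three links -> 36 ms
```

By contrast, a circuit-switched path would pay the transmission delay only once, since bits flow through intermediate nodes without being stored.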

Classification Based on Transmission Technology


Computer networks can be broadly categorized into two types based on transmission
technologies:
• Broadcast networks
• Point-to-point networks

Network Topologies

Network Topology is the geometric arrangement of various network components (nodes and links) in a
network.

Network Topology is important for

 Designing efficient protocols
 Solving internetworking problems (routing, resource reservation, administration)
 Creating models for simulation
 Studying fault tolerance

Four Steps to Networking

 Communicating across a link

 Connecting together multiple links (internetworking)

 Finding and routing data to nodes on internetwork

 Matching application requirements

Network Devices

Network interface cards

A network card, network adapter, or NIC (network interface card) is a piece of computer
hardware designed to allow computers to communicate over a computer network. It provides
physical access to a networking medium and often provides a low-level addressing system
through the use of MAC addresses.
Repeaters

A repeater is an electronic device that receives a signal and retransmits it at a higher power
level, or to the other side of an obstruction, so that the signal can cover longer distances
without degradation. In most twisted pair Ethernet configurations, repeaters are required for
cable which runs longer than 100 meters.

Hubs

A network hub contains multiple ports. When a packet arrives at one port, it is copied
unmodified to all ports of the hub for transmission. The destination address in the frame is not
changed to a broadcast address.

Bridges

A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI
model. Bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which MAC
addresses are reachable through specific ports. Once the bridge associates a port and an
address, it will send traffic for that address only to that port. Bridges do send broadcasts to all
ports except the one on which the broadcast was received.

Bridges learn the association of ports and addresses by examining the source addresses of the
frames they see on their various ports. When a frame arrives through a port, its source address
is stored and the bridge assumes that MAC address is associated with that port. The first time a
previously unknown destination address is seen, the bridge forwards the frame to all ports
other than the one on which the frame arrived.
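The learning-and-forwarding behaviour described above can be sketched in a few lines. `LearningBridge` below is a hypothetical illustration, not a real driver: it keeps a MAC-to-port table, learns from source addresses, and floods frames with unknown destinations.

```python
class LearningBridge:
    """Minimal sketch of a transparent bridge's MAC learning and forwarding."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # MAC address -> port it was last seen on

    def frame_arrived(self, in_port, src_mac, dst_mac):
        # Learn: the source address must be reachable via the arrival port.
        self.table[src_mac] = in_port
        # Known unicast goes out one port; unknown destinations (and
        # broadcasts, which are never learned) flood every other port.
        if dst_mac in self.table:
            out = {self.table[dst_mac]}
        else:
            out = self.ports - {in_port}
        return out - {in_port}  # never send back out the arrival port

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.frame_arrived(1, "aa", "bb"))  # "bb" unknown: flood {2, 3}
print(bridge.frame_arrived(2, "bb", "aa"))  # "aa" learned on port 1: {1}
```

Note that a frame whose destination is learned on its own arrival port is forwarded nowhere, which is exactly the filtering that keeps local traffic off the other segments.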

Network Switch

A network switch is a device that forwards and filters OSI layer 2 frames (chunks of data
communication) between ports (connected cables) based on the MAC addresses in the frames.
It is distinct from a hub in that it forwards frames only to the ports involved in the
communication rather than to all connected ports. It has a switch matrix that can rapidly connect
and disconnect ports, allowing different nodes of a network to communicate directly with each other
at the same time without slowing each other down.

Router

A router is a networking device that forwards packets between networks using information in
protocol headers and forwarding tables to determine the best next router for each packet.
Routers work at the Network Layer of the OSI model and the Internet Layer of TCP/IP.
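A router's forwarding-table lookup typically uses longest-prefix matching: the most specific prefix that contains the destination wins. The sketch below illustrates the idea with Python's standard `ipaddress` module; the table contents and names are made up for the example.

```python
import ipaddress

def longest_prefix_match(dest, table):
    """Return the next hop whose prefix most specifically matches `dest`.

    `table` is a list of (prefix_string, next_hop) pairs; the entry with
    the longest matching prefix wins, and a 0.0.0.0/0 entry acts as the
    default route.
    """
    dest = ipaddress.ip_address(dest)
    best = None
    for prefix, next_hop in table:
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, next_hop)
    return best[1] if best else None

table = [("10.0.0.0/8", "routerA"),
         ("10.1.0.0/16", "routerB"),
         ("0.0.0.0/0", "default")]
print(longest_prefix_match("10.1.2.3", table))   # routerB (most specific match)
print(longest_prefix_match("192.0.2.1", table))  # default
```

Real routers use specialized data structures (tries, TCAMs) to make this lookup fast, but the selection rule is the same.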
Gateway

A gateway is a network point that acts as an entrance to another network. It can translate
information between different networks, handling tasks such as protocol translation,
impedance matching, rate conversion, fault isolation, and signal translation, which are
necessary for interoperability. On the Internet, in terms of routing, the network
consists of gateway nodes and host nodes. Host nodes are the computers of network users and the
computers that serve content (such as Web pages). Gateway nodes are computers that control
traffic within your company’s network or at your local Internet service provider (ISP).

Layer              Network Device

Application layer  Application gateway
Transport layer    Transport gateway
Network layer      Router
Data link layer    Bridge, switch
Physical layer     Repeater, hub

Network Protocol

A communication protocol is a set of rules that specify the format and meaning of messages
exchanged between computers across a network.

A set of related protocols that are designed for compatibility is called a protocol suite.

Transmission Media

Guided: Transmission capacity depends critically on the medium, the length, and whether the medium is
point-to-point or multipoint (e.g., LAN). Examples are coaxial cable, twisted pair, and optical fiber.
Unguided: Provides a means for transmitting electromagnetic signals but does not guide them. Example:
wireless transmission.

The characteristics and quality of data transmission are determined by the medium and the signal
characteristics. For guided media, the medium is more important in determining the limitations of
transmission. In the case of unguided media, the bandwidth of the signal produced by the transmitting
antenna and the size of the antenna matter more than the medium. Signals at lower frequencies are
omnidirectional (they propagate in all directions); at higher frequencies, the signal can be focused into a
directional beam. These properties determine what kind of medium one should use in a particular
application.

Twisted Pair

Twisted pair cable is the most common type of cable used in computer networks. It is reliable, flexible
and cost effective. The twisting of the pairs of copper cable is one method used to protect the cable's
signals from interference and crosstalk. This type of protection is known as cancellation.

There are two main categories of twisted pair cable:

 Unshielded Twisted Pair (UTP) Cable

 Shielded Twisted Pair (STP) Cable

• Commonly used for communications within buildings and in telephone networks


• Produced in unshielded (UTP) and shielded (STP) forms, and in different performance categories.
• Cables may hold hundreds of pairs. Neighbor pairs typically have different twist lengths to reduce
crosstalk.
• Bandwidth: 10 Mbps/100Mbps/1000 Mbps
• Used for short distances.

A common implementation of UTP is 100BaseT, which allows a maximum cable segment length of
100 metres. There are different categories of UTP cable; one differentiating factor is the number of
twists per foot of cable. The more twists, the higher the quality of the cable; for example, Cat3 has
fewer twists per foot than Cat5 UTP cable.
Coaxial Cable
• Offers longer distances and better speeds than twisted pair, due to better shielding.
• Coaxial cable used in telephone networks has largely been replaced by fiber.

Fiber Optic Cable

Fiber-optic cable consists of glass or plastic fibers that carry data in the form of light signals. Unlike
copper cable, there is no electricity, only light. This adds a level of security that copper lacks: the cable
cannot easily be tapped to detect signals. Fiber-optic cable is ideal for high-speed, high-quality data
transmission, and although it suffers from a form of attenuation, it is nowhere near as severe as that of
copper cable. The reliability, security, and distances covered by fiber-optic cable make it the natural
choice for backbone cabling within and between buildings.
The typical optical fiber consists of a very narrow strand of glass called the core. Around the core is a
concentric layer of glass called the cladding. A typical core diameter is 62.5 microns (1 micron = 10^-6
meters); the cladding typically has a diameter of 125 microns. Coating the cladding is a protective
plastic layer called the jacket.

Optical fibers work on the principle of total internal reflection

Light is kept in the core by total internal reflection, which causes the fiber to act as a waveguide. At both
ends of the fiber link, the optics must be precisely aligned with the core to avoid signal loss. The signal is
placed on the fiber by a Light Emitting Diode (LED) or an Injection Laser Diode (ILD) and detected at the
far end by a photodiode. Fibers that support many propagation paths, or transverse modes, are called
multi-mode fibers (MMF), while those that support only a single mode are called single-mode fibers
(SMF). Multi-mode fibers generally have a larger core diameter and are used for short-distance
communication links and for applications where high power must be transmitted.
Types of Fiber Optic cables

Single-mode fiber has a thinner core than multimode and uses a single beam of light for data
transmission, typically generated by a laser diode (multimode links commonly use LEDs). Single mode is
faster and works over longer distances than multimode, making it suitable for backbones between
buildings. Single mode has roughly 50% greater data-carrying capacity than multimode cable. Even
though fiber is manufactured to a high quality, there can still be imperfections that lead to signal loss.

Why Fiber Optic Transmission?

 Enormous information carrying capacity


 Easily upgradeable
 Ease of installation
 Allows fully symmetric services
 Reduced operational and maintenance costs
 Benefits of optical fiber:
 Very long distances
 Strong, flexible, and reliable
 Allows small diameter and light weight cables
 Secure
 Immune to electromagnetic interference (EMI)

FIBER VERSUS COPPER


 A single copper pair is capable of carrying 6 phone calls
 A single fiber pair is capable of carrying over 2.5 million simultaneous phone calls
(64 channels at 2.5 Gb/s)
 A fiber optic cable with the same information-carrying capacity (bandwidth) as a comparable copper
cable is less than 1% of both the size and weight
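The 2.5-million figure is just arithmetic: divide the aggregate capacity of the fiber pair (64 channels at 2.5 Gb/s each) by the 64 kb/s rate of a single digital voice call.

```python
channels = 64               # channels per fiber pair (from the text)
rate_per_channel = 2.5e9    # bits per second each
voice_call = 64e3           # one digitized phone call (64 kb/s PCM)

# Aggregate capacity divided by the per-call rate:
calls = channels * rate_per_channel / voice_call
print(f"{calls:,.0f} simultaneous calls")  # 2,500,000 simultaneous calls
```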

Fiber cable                            Copper cable

Uses light                             Uses electricity
Transparent                            Opaque
Dielectric material (nonconductive)    Electrically conductive material
EMI immune                             Susceptible to EMI
Low thermal expansion                  High thermal expansion
Brittle, rigid material                Ductile material
Chemically stable                      Subject to corrosion and galvanic reactions

Unguided /Wireless Transmission


Transmission using radio waves in the range 1GHz to 30GHz is referred to as Microwave
Transmission.

Terrestrial Microwave Transmission

Above 100 MHz, the waves travel in nearly straight lines and can therefore be narrowly focused.
Concentrating all the energy into a small beam by means of a parabolic antenna (like the familiar
satellite TV dish) gives a much higher signal-to-noise ratio, but the transmitting and receiving antennas
must be accurately aligned with each other. In addition, this directionality allows multiple transmitters
lined up in a row to communicate with multiple receivers in a row without interference, provided some
minimum spacing rules are observed. Since the microwaves travel in a straight line, if the towers are too
far apart, the earth will get in the way. Consequently, repeaters are needed periodically. The higher the
towers are, the farther apart they can be. The distance between repeaters goes up very roughly with the
square root of the tower height.
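The square-root relationship follows from simple geometry: ignoring atmospheric refraction (which in practice extends the range somewhat), the line-of-sight distance from the top of a tower of height h to the horizon is sqrt(2Rh), where R is the earth's radius. A quick sketch (function name is ours):

```python
import math

EARTH_RADIUS_M = 6.371e6  # mean earth radius

def horizon_distance_km(tower_height_m):
    """Geometric line-of-sight distance to the horizon from a tower of the
    given height; it grows with the square root of the height, as the text
    notes. Two towers of equal height can be roughly twice this far apart."""
    return math.sqrt(2 * EARTH_RADIUS_M * tower_height_m) / 1000

for h in (50, 100, 200):
    print(f"{h:4d} m tower -> {horizon_distance_km(h):5.1f} km to the horizon")
```

Doubling the tower height buys only about a 41% increase in range, which is why very tall towers give diminishing returns.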

Satellite Transmission

In its simplest form, a communication satellite can be thought of as a big microwave repeater in the sky.
It contains several transponders, each of which listens to some portion of the spectrum, amplifies the
incoming signal, and then rebroadcasts it at another frequency to avoid interference with the incoming
signal. The higher the satellite, the longer the period. Near the surface of the earth, the period is about
90 minutes. Consequently, low-orbit satellites pass out of view fairly quickly, so many of them are
needed to provide continuous coverage. At an altitude of about 35,800 km, the period is 24 hours. To
keep the satellite stationary with respect to the ground based stations, the satellite is placed in a
geostationary orbit above the equator at an altitude of about 36,000 km. A satellite can be used for
point-to-point communication between two ground-based stations or it can be used to broadcast a
signal received from one station to many ground-based stations. With current technology, it is unwise to
have geostationary satellites spaced much closer than 2 degrees in the 360-degree equatorial plane, to
avoid interference. However, each transponder can use multiple frequencies and polarizations to
increase the available bandwidth. The lifetime of modern satellites typically ranges from 10 to 15 years.

[Figure: broadcast links vs. a point-to-point link]

Satellite Microwave Communication

[Table: the principal satellite bands]

Communication satellites have several unique properties. The most important is the long
communication delay for the round trip (about 270 ms) because of the long distance (about
72,000 km) the signal has to travel between two earth stations. This poses a number of
problems that must be tackled for successful and reliable communication. Another
interesting property of satellite communication is its broadcast capability: all stations under the
downward beam can receive the transmission. It may be necessary to send encrypted data to
protect against piracy.
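The 270 ms figure can be sanity-checked from the geometry: between two earth stations the signal goes up to the satellite and back down, roughly 2 x 36,000 km, at the speed of light. The constants below are round numbers for illustration.

```python
SPEED_OF_LIGHT = 3e8        # m/s
GEO_ALTITUDE_M = 36_000e3   # geostationary altitude, roughly

# Ground -> satellite -> ground path between two earth stations:
path_m = 2 * GEO_ALTITUDE_M
delay_s = path_m / SPEED_OF_LIGHT
print(f"~{delay_s * 1000:.0f} ms")  # ~240 ms; slant paths push it toward 270 ms
```

The extra ~30 ms in the quoted figure comes from the stations rarely sitting directly beneath the satellite, so the actual slant distances exceed the altitude.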

Now-a-days communication satellites are not only used to handle telephone, telex and
television traffic over long distances, but are used to support various internet based services
such as e-mail, FTP, World Wide Web (WWW), etc. New types of services, based on
communication satellites, are emerging.

Comparison/contrast with other technologies:


 Propagation delay is very high. On LANs, for example, the propagation time is in the
nanosecond range, which is negligible.
 One of the few alternatives to phone companies for long distances.
 Uses broadcast technology over a wide area: everyone on earth could receive a
message at the same time!
 Easy to place unauthorized taps into the signal.

Satellites have recently fallen out of favor relative to fiber.


However, fiber has one big disadvantage: no one has it coming into their house or building,
whereas anyone can place an antenna on a roof and lease a satellite channel.
It looks like the mainstream communication of the future will be terrestrial fiber optics combined
with cellular radio, but for some specialized uses satellites are better.
Data Transmission
• Any transmission system has a limited band of frequencies. This limits the data rate that
can be carried.
• Data
— Entities that convey meaning
• Signals
— Electric or electromagnetic representations of data
• Transmission
— Communication of data by propagation and processing of signals
• Analog
— Continuous values within some interval
— e.g. sound, video
• Digital
— Discrete values
— e.g. text, integers

Analog Transmission
• Analog signal transmitted without regard to content
• May be analog or digital data
• Attenuated over distance
• Use amplifiers to boost signal
• Also amplifies noise

Digital Transmission
• Concerned with content
• Integrity endangered by noise, attenuation etc.
• Repeaters used
• Repeater receives signal
• Extracts bit pattern
• Retransmits
• Attenuation is overcome
• Noise is not amplified
Advantages of Digital Transmission
• Digital technology
— Low cost LSI/VLSI technology
• Data integrity
— Longer distances over lower quality lines
• Capacity utilization
— High bandwidth links economical
— High degree of multiplexing easier with digital techniques
• Security & Privacy
— Encryption
• Integration
— Can treat analog and digital data similarly

Transmission Impairments
When a signal is transmitted over a communication channel, it is subjected to different types of
impairments because of imperfect characteristics of the channel. As a consequence, the
received and the transmitted signals are not the same. Outcome of the impairments are
manifested in two different ways in analog and digital signals. These impairments introduce
random modifications in analog signals leading to distortion. On the other hand, in case of
digital signals, the impairments lead to error in the bit values. The impairment can be broadly
categorized into the following three types:
• Attenuation and attenuation distortion
• Delay distortion
• Noise
Attenuation
• Signal strength falls off with distance
• Depends on medium
• Received signal strength:
— must be enough to be detected
— must be sufficiently higher than noise to be received without error
• Attenuation is an increasing function of frequency
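Attenuation is normally quoted in decibels, which conveniently add across cascaded segments. A minimal sketch (function name and figures are ours):

```python
import math

def attenuation_db(p_in_watts, p_out_watts):
    """Attenuation in decibels: 10 * log10(P_in / P_out)."""
    return 10 * math.log10(p_in_watts / p_out_watts)

# A signal drops from 10 mW to 5 mW over one segment:
print(f"{attenuation_db(0.010, 0.005):.1f} dB")  # 3.0 dB (a halving of power)
```

Because dB values add, two such segments in series attenuate the signal by about 6 dB, i.e. to a quarter of its original power.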
Delay Distortion
• Only in guided media
• Propagation velocity varies with frequency
Noise
• Additional signals inserted between transmitter and receiver
• Thermal
— Due to thermal agitation of electrons
— Uniformly distributed
— White noise
• Intermodulation
— Signals that are the sum and difference of original frequencies sharing a medium
• Crosstalk
— A signal from one line is picked up by another
• Impulse
— Irregular pulses or spikes
— e.g. External electromagnetic interference
— Short duration
— High amplitude

A very important consideration in data communications is how fast we can send data, in bits per
second, over a channel. Data rate depends on three factors:

1. The bandwidth available

2. The level of the signals we use

3. The quality of the channel (the level of noise)

• The maximum rate at which data can be correctly communicated over a channel in the
presence of noise and distortion is known as its channel capacity.
• If the signal transmission rate is 2B, then a signal with frequencies no greater than B is
sufficient to carry that signal rate.
• Given bandwidth B, the highest signal rate is 2B.
• Given a binary signal, the data rate supported by B Hz is 2B bps.
• Nyquist bit rate for a noiseless channel:
C = 2B log2(L)
• C can be increased by using more signal levels (L), but this may reduce the
reliability of the system.
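The Nyquist bit rate above can be sketched directly; the 3 kHz bandwidth below is an illustrative figure of our choosing.

```python
import math

def nyquist_capacity(bandwidth_hz, levels):
    """Nyquist bit rate for a noiseless channel: C = 2 * B * log2(L)."""
    return 2 * bandwidth_hz * math.log2(levels)

# A noiseless 3 kHz channel:
print(f"{nyquist_capacity(3000, 2):.0f} bps")   # binary signalling: 6000 bps
print(f"{nyquist_capacity(3000, 4):.0f} bps")   # four levels: 12000 bps
```

Doubling the number of levels adds one bit per signal element, so capacity grows only logarithmically in L while the receiver's job of distinguishing levels gets harder.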

Shannon Capacity for a Noisy channel


• Considers data rate, noise and error rate
• Faster data rate shortens each bit so burst of noise affects more bits
— At a given noise level, high data rate means higher error rate
• Signal to noise ratio (in decibels)
• SNRdb=10 log10 (signal/noise)
• Capacity C=B log2(1+SNR)
The Shannon capacity gives us the upper limit; the Nyquist formula tells us how many signal levels we
need.
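The Shannon limit can be computed the same way; note the SNR in the formula is a power ratio, while it is usually quoted in dB. The telephone-line figures below are illustrative.

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon capacity C = B * log2(1 + SNR), with SNR supplied in dB."""
    snr = 10 ** (snr_db / 10)   # convert dB back to a plain power ratio
    return bandwidth_hz * math.log2(1 + snr)

# A telephone line: roughly 3 kHz of bandwidth at about 30 dB SNR.
c = shannon_capacity(3000, 30)
print(f"{c:.0f} bps")  # roughly 30 kbps upper bound
```

One could then apply the Nyquist formula in reverse to ask how many signal levels are needed to approach this limit, which is the interplay the text describes.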

Baud Rate
The baud rate or signaling rate is defined as the number of distinct signal elements transmitted
per second, irrespective of the form of encoding.
S = c x N x (1/r) baud, where N is the bit rate in bps, r is the number of data elements carried
by each signal element, and c is the case factor, which varies from case to case with a value
between 0 and 1.
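The relation between bit rate and signal rate can be sketched as below; the function name and the 100 kbps figure are illustrative.

```python
def baud_rate(bit_rate_bps, r, c=1.0):
    """Signal rate S = c * N * (1/r): N is the bit rate and r is the number
    of data elements (bits) carried by each signal element."""
    return c * bit_rate_bps / r

# 100 kbps with one bit per signal element, worst case (c = 1):
print(baud_rate(100_000, r=1))  # 100000.0 baud
# Carrying two bits per signal element halves the signalling rate:
print(baud_rate(100_000, r=2))  # 50000.0 baud
```

This is why multi-level schemes (large r) matter: they squeeze a higher bit rate through a channel whose bandwidth limits the achievable baud rate.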

The channel bridging the transmitter and the receiver may be a guided transmission medium
such as a wire or a wave-guide or it can be an unguided atmospheric or space channel. But,
irrespective of the medium, the signal traversing the channel becomes attenuated and
distorted with increasing distance. Hence a process is adopted to match the properties of the
transmitted signal to the channel characteristics so as to efficiently communicate over the
transmission media. There are two alternatives: the data can be converted to either a digital or
an analog signal. Both approaches have pros and cons; which one to use depends on the
situation and the available bandwidth. Either form of data can be encoded into either form of
signal.

Approaches for conversion of data to signal

Data Signal Approach


Digital Digital Encoding
Analog Digital Encoding
Analog Analog Modulation
Digital Analog Modulation

Digital Data - Digital Signal


Some of the main problems which need to be considered in designing a line coding scheme are:
Baseline Wandering
In decoding a digital signal the receiver calculates a running average of the received signal
power. This average is called the baseline. A long string of 0s or 1s can cause a drift in the
baseline, known as baseline wandering, which makes it difficult for the decoder to decode
correctly.
DC Component
When the voltage level in a digital signal is constant for a while, the spectrum creates very low
frequencies around zero called DC component that creates a problem to systems that cannot
pass low frequencies
Lack of Synchronization
To interpret the received signal correctly, the bit interval of the receiver should be exactly the
same as, or within a certain limit of, that of the transmitter. Any mismatch between the two may
lead to wrong interpretation of the received signal. Usually, the clock is generated and
synchronized from the received signal.

Unipolar NRZ scheme

Polar NRZ-L and NRZ-I schemes

• In NRZ-L the level of the voltage determines the value of the bit. In NRZ-I the inversion
or the lack of inversion determines the value of the bit.
• NRZ-L and NRZ-I both have an average signal rate of N/2 Bd.
• NRZ-L and NRZ-I both have a DC component problem.
The advantages of NRZ coding are:
• Detecting a transition in the presence of noise is more reliable than comparing a value to a
threshold.
• NRZ codes are easy to engineer and make efficient use of bandwidth.
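A minimal sketch of the two polar NRZ rules described above (Python; ±1 is an arbitrary choice of voltage levels, and polarity conventions vary between texts):

```python
def nrz_l(bits):
    """NRZ-L: the voltage level itself encodes the bit (+1 for 1, -1 for 0)."""
    return [+1 if b else -1 for b in bits]

def nrz_i(bits):
    """NRZ-I: a 1 inverts the current level, a 0 holds it."""
    level, out = -1, []
    for b in bits:
        if b:
            level = -level   # the transition, not the level, encodes a 1
        out.append(level)
    return out

bits = [0, 1, 0, 0, 1, 1]
print(nrz_l(bits))   # [-1, 1, -1, -1, 1, 1]
print(nrz_i(bits))   # [-1, 1, 1, 1, -1, 1]
```

A long run of identical NRZ-L bits (or of 0s in NRZ-I) produces a constant level, which is exactly the baseline-wandering and synchronization problem noted earlier.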

Polar RZ scheme

Key characteristics of the RZ coding are:


• Three voltage levels
• The signal rate is double the data rate
• No DC component
• Good synchronization
• Main limitation is the increased bandwidth

Polar biphase: Manchester and differential Manchester schemes

Key characteristics are:
• In Manchester and differential Manchester encoding, the transition at the middle of the bit is
used for synchronization.
• The minimum bandwidth of Manchester and differential Manchester is 2 times that of NRZ
• Two levels
• No DC component
• Good synchronization
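The two biphase rules can be sketched as follows (Python; this follows the common IEEE 802.3-style convention of 0 → high-to-low and 1 → low-to-high for Manchester, and "a transition at the start of the interval encodes a 0" for differential Manchester — conventions differ between texts):

```python
def manchester(bits):
    """Manchester: every bit gets a mid-bit transition, so there are two
    signal elements per bit (hence double the NRZ bandwidth)."""
    out = []
    for b in bits:
        out += ([-1, +1] if b else [+1, -1])
    return out

def diff_manchester(bits):
    """Differential Manchester: the mid-bit transition is always present;
    an extra transition at the START of the interval encodes a 0."""
    level, out = +1, []
    for b in bits:
        if not b:
            level = -level      # bit-boundary transition for a 0
        out += [level, -level]  # guaranteed mid-bit transition
        level = -level          # continue from the second half's level
    return out

bits = [0, 1, 1, 0]
print(manchester(bits))        # [1, -1, -1, 1, -1, 1, 1, -1]
print(diff_manchester(bits))   # [-1, 1, 1, -1, -1, 1, -1, 1]
```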

Analog Data-Digital Signal


• PAM
• PWM/PDM
• PCM
• Delta Modulation

PCM
A PCM stream is a digital representation of an analog signal, in which the magnitude of the analog
signal is sampled regularly at uniform intervals, with each sample being quantized to the nearest value
within a range of digital steps. PCM streams have two basic properties that determine their fidelity to the
original analog signal: the sampling rate, which is the number of times per second that samples are
taken; and the bit depth, which determines the number of possible digital values that each sample can
take.

The result of sampling is a series of pulses with amplitude values between the maximum
and minimum amplitudes of the signal. The set of amplitudes can be infinite with
nonintegral values between the two limits. These values cannot be used in the encoding
process. The following are the steps in quantization:
1. We assume that the original analog signal has instantaneous amplitudes between Vmin and
Vmax
2. We divide the range into L zones, each of height Δ
Δ = (Vmax - Vmin) / L
3. We assign quantized values of 0 to L - 1 to the midpoint of each zone.
4. We approximate the value of the sample amplitude to the quantized values.
Components of PCM encoder
As a simple example, assume that we have a sampled signal and the sample amplitudes are
between -20 and +20 V. We decide to have eight levels (L = 8). This means that delta=5 V.
Figure below shows this example.

Quantization and encoding of a sampled signal

Three different sampling methods for PCM


The quantization error changes the signal-to-noise ratio of the signal, which in turn reduces the
upper limit capacity according to Shannon. The contribution of the quantization error to the
SNRdB of the signal depends on the number of quantization levels L, or the bits per sample nb
and is given as
SNRdB =6.02 nb + 1.76 dB
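The quantization steps above, applied to the text's own example (amplitudes between -20 V and +20 V, L = 8), can be sketched as:

```python
def quantize(samples, vmin, vmax, L):
    """Divide [vmin, vmax] into L zones of height delta = (vmax - vmin) / L
    and map each sample to its zone's code (0 .. L-1)."""
    delta = (vmax - vmin) / L
    codes = []
    for s in samples:
        zone = int((s - vmin) / delta)
        codes.append(min(max(zone, 0), L - 1))   # clamp to valid codes
    return delta, codes

delta, codes = quantize([-6.1, 7.5, 16.2, -11.3], -20, 20, 8)
print(delta)   # 5.0 V, matching the example
print(codes)   # [2, 5, 7, 1]

# Quantization SNR from the formula above, with nb = log2(8) = 3 bits:
print(6.02 * 3 + 1.76)   # about 19.82 dB
```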

Delta Modulation

DM is the simplest form of differential pulse-code modulation (DPCM) where the difference between
successive samples is encoded into n-bit data streams. In delta modulation, the transmitted data is
reduced to a 1-bit data stream.

Its main features are:

 The analog signal is approximated with a series of segments where each segment of the
approximated signal is compared to the original analog wave to determine the increase or decrease
in relative amplitude.
 The decision process for establishing the state of successive bits is determined by this
comparison: only the change of information is sent; that is, only an increase or decrease of the
signal amplitude from the previous sample is sent, whereas a no-change condition causes the
modulated signal to remain at the same 0 or 1 state as the previous sample.
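The comparison loop described above can be sketched in a few lines (Python; the step size delta and the sample values are invented for illustration):

```python
def delta_modulate(samples, delta=1.0, start=0.0):
    """1-bit DM: emit 1 and step the staircase up if the input is at or
    above the current approximation, else emit 0 and step it down."""
    approx, bits = start, []
    for s in samples:
        if s >= approx:
            bits.append(1)
            approx += delta
        else:
            bits.append(0)
            approx -= delta
    return bits

# A signal that rises and then falls; near the peak the output alternates:
print(delta_modulate([0.4, 1.1, 1.9, 1.5, 0.6], delta=1.0))
# [1, 1, 0, 1, 0]
```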

The process of delta modulation

Delta modulation components

Digital Data- Analog Signal


Amplitude Shift Key Modulation
 In this method the amplitude of the carrier assumes one of two amplitudes depending on
the logic states of the input bit stream.
 A typical output waveform of an ASK modulator is shown in the figure below. The frequency
components are the USB and LSB with a residual carrier frequency. A low-amplitude carrier is
allowed to be transmitted to ensure that at the receiver the logic 1 and logic 0 conditions can be
recognized uniquely.

ASK

Frequency Shift Key Modulation


 In this method the frequency of the carrier is changed to two different frequencies depending
on the logic state of the input bit stream.
 The typical output waveform of an FSK is shown below. Notice that a logic high causes the
centre frequency to increase to a maximum and a logic low causes the centre frequency to
decrease to a minimum.
Phase shift key (PSK) modulation

In phase shift keying, the phase of the carrier is varied to represent two or more different signal
elements. Both peak amplitude and frequency remain constant as the phase changes. Today,
PSK is more common than ASK or FSK.

 With this method the phase of the carrier changes between different phases determined by the
logic states of the input bit stream.
 There are several different types of phase shift key (PSK) modulators.
 Two-phase (2 PSK)
 Four-phase (4 PSK)
 Eight-phase (8 PSK)
 Sixteen-phase (16 PSK)

Two-phase (2 PSK)
 In this modulator the carrier assumes one of two phases.
 A logic 1 produces no phase change and a logic 0 produces a 180° phase change. The output
waveform for this modulator is shown below.
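The phase rule just stated can be sketched as follows (Python; degrees are used for readability):

```python
def two_psk_phases(bits, carrier_phase=0.0):
    """Follow the rule above: a logic 1 keeps the current carrier phase,
    a logic 0 flips it by 180 degrees."""
    phases = []
    for b in bits:
        if b == 0:
            carrier_phase = (carrier_phase + 180.0) % 360.0
        phases.append(carrier_phase)
    return phases

print(two_psk_phases([1, 0, 0, 1, 0]))   # [0.0, 180.0, 0.0, 0.0, 180.0]
```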
Four-Phase Shift Key Modulation

 With 4 PSK, 2 bits are processed to produce a single phase change.


In this case each symbol consists of 2 bits, which are referred to as a dibit. The actual phases
that are produced by a 4 PSK modulator are shown in the table below.

Because each signal element carries two bits, the signal (baud) rate is half the bit rate, which
results in a smaller bandwidth for the signal.

Eight-Phase Shift Key Modulation


 With this modulator 3 bits are processed to produce a single phase change. This means that
each symbol consists of 3 bits.
Sixteen-Phase Shift Key Modulation
 With this modulator 4 bits are processed to produce a single phase change. This means that
each symbol consists of 4 bits. The constellation for this modulator scheme is shown below.

PSK is limited by the ability of the equipment to distinguish between small differences in
phases. This limits the potential data rate.

Quadrature Amplitude Modulation (QAM)

 QAM is a combination of ASK and PSK. The two carrier waves, usually sinusoids, are out of phase
with each other by 90° and are thus called quadrature carriers or quadrature components —
hence the name of the scheme. The modulated waves are summed, and the resulting waveform
is a combination of both phase-shift keying (PSK) and amplitude-shift keying (ASK), or (in the
analog case) of phase modulation (PM) and amplitude modulation. In the digital QAM case, a
finite number of at least two phases and at least two amplitudes are used.
 The possible variations of QAM are numerous. We can have x variations in phase and y
variations in amplitude, giving x * y possible variations (and therefore greater data rates).
 The minimum bandwidth required for QAM transmission is the same as that required for ASK
and PSK transmission. QAM has the same advantages as PSK over ASK. QAM is used extensively
as a modulation scheme for digital telecommunication systems.

 In a 16-QAM modulator, 4 bits are processed to produce a single vector. The resultant
constellation consists of three different amplitudes distributed across 12 different phases, as shown
below.
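The amplitude/phase count can be checked against a standard square 16-QAM constellation on the grid {-3, -1, +1, +3} (an assumed but common layout):

```python
import math

# All 16 points of a square 16-QAM constellation:
points = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]

amplitudes = sorted({round(abs(p), 6) for p in points})
phases = sorted({round(math.degrees(math.atan2(p.imag, p.real)), 3)
                 for p in points})

# The 16 points collapse onto 3 distinct amplitudes and 12 distinct
# phases, matching the constellation described above:
print(len(points), len(amplitudes), len(phases))   # 16 3 12
```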
16-QAM

Bit Baud comparison


Modulation Units Bits/Baud Baud rate Bit Rate

ASK, FSK, 2-PSK Bit 1 N N

4-PSK, 4-QAM Dibit 2 N 2N

8-PSK, 8-QAM Tribit 3 N 3N

16-QAM Quadbit 4 N 4N

32-QAM Pentabit 5 N 5N

64-QAM Hexabit 6 N 6N

128-QAM Septabit 7 N 7N

256-QAM Octabit 8 N 8N
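The table's pattern (bit rate = baud rate x bits per signal element) can be sketched as follows (Python; the 1000-baud line is an invented example):

```python
def bit_rate(baud, bits_per_symbol):
    """Bit rate N = baud rate S * bits carried per signal element."""
    return baud * bits_per_symbol

# A 1000-baud line under a few of the schemes from the table:
for name, bits in [("ASK/FSK/2-PSK", 1), ("4-QAM", 2),
                   ("16-QAM", 4), ("256-QAM", 8)]:
    print(name, bit_rate(1000, bits), "bps")
```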

Analog-to-analog Modulation
Analog-to-analog conversion is the representation of analog information by an analog signal. This
modulation is needed if the medium is bandpass in nature or if only a bandpass channel is available.

Amplitude Modulation

In AM transmission, the carrier signal is modulated so that its amplitude varies with the changing
amplitudes of the modulating signal. While the amplitude of the carrier changes to follow variations in
the modulating signal, the frequency and phase remain the same.

AM is normally implemented by using a simple multiplier because the amplitude of the carrier signal
needs to be changed according to the amplitude of the modulating signal.
The total bandwidth required for AM can be determined from the bandwidth of the audio signal:
BAM = 2B.
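A minimal AM sketch (Python; it uses the multiplier implementation and the B_AM = 2B rule from the text, with a single made-up audio tone as the modulating signal):

```python
import math

def am_modulate(fc, fm, fs, n_samples, ka=0.5):
    """AM by simple multiplication: s(t) = (1 + ka*m(t)) * cos(2*pi*fc*t),
    here with a single tone m(t) = cos(2*pi*fm*t)."""
    return [(1 + ka * math.cos(2 * math.pi * fm * n / fs))
            * math.cos(2 * math.pi * fc * n / fs)
            for n in range(n_samples)]

# Bandwidth rule: a 5 kHz audio signal needs B_AM = 2 * 5 kHz = 10 kHz.
print(2 * 5_000)   # 10000 Hz

s = am_modulate(fc=10_000, fm=1_000, fs=100_000, n_samples=8)
print(len(s))      # 8 samples of the modulated carrier
```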

Frequency Modulation
In FM transmission, the frequency of the carrier signal is modulated to follow the changing
voltage level (amplitude) of the modulating signal while the phase and amplitude (of the carrier
signal) remain constant.
FM is implemented by using a voltage-controlled oscillator, as with FSK, where the frequency of
the oscillator changes according to the input voltage, which is the amplitude of the modulating
signal.
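The voltage-controlled oscillator can be imitated in software by integrating the instantaneous frequency (Python sketch; fc, kf and the tone are invented example values):

```python
import math

def fm_modulate(message, fc, kf, fs):
    """FM: the instantaneous frequency is fc + kf*m(t); accumulate it into
    a phase, which is what a voltage-controlled oscillator does in hardware."""
    phase, out = 0.0, []
    for m in message:
        phase += 2 * math.pi * (fc + kf * m) / fs
        out.append(math.cos(phase))
    return out

# A 1 kHz tone modulating a 10 kHz carrier at fs = 48 kHz:
msg = [math.sin(2 * math.pi * 1_000 * n / 48_000) for n in range(16)]
s = fm_modulate(msg, fc=10_000, kf=2_000, fs=48_000)
print(len(s))   # 16 constant-amplitude samples
```

Unlike AM, the output amplitude stays constant; only the spacing of the zero crossings carries the message.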

FM Band Allocation

Phase Modulation
In PM transmission, the phase of the carrier signal is modulated to follow the changing voltage
level (amplitude) of the modulating signal while the peak amplitude and frequency of the
carrier signal remain constant.

It can be proved mathematically that PM is the same as FM with one difference: in FM the
instantaneous change in the carrier frequency is proportional to the amplitude of the
modulating signal; in PM the instantaneous change in the carrier frequency is proportional to
the derivative of the amplitude of the modulating signal.
