CS2363-COMPUTER NETWORKS
Unit I
Introduction to networks - network architecture - network performance - Direct link networks - encoding - framing - error detection - transmission - Ethernet - Rings - FDDI - Wireless networks - Switched networks - bridges
Unit II
Internetworking - IP - ARP - Reverse Address Resolution Protocol - Dynamic Host Configuration Protocol - Internet Control Message Protocol - Routing - Routing algorithms - Addressing - Subnetting - CIDR - Inter domain routing - IPv6
Unit III
Transport Layer - User Datagram Protocol (UDP) - Transmission Control Protocol - Congestion control - Flow control - Queuing Disciplines - Congestion Avoidance Mechanisms
Unit IV
Data Compression - introduction to JPEG, MPEG, and MP3 - cryptography - symmetric-key - public-key - authentication - key distribution - key agreement - PGP - SSH - Transport layer security - IP Security - wireless security - Firewalls
Unit V
Domain Name System (DNS) - E-mail - World Wide Web (HTTP) - Simple Network Management Protocol - File Transfer Protocol (FTP) - Web Services - Multimedia Applications - Overlay networks
REFERENCES / WEBSITES:
[1] Larry L. Peterson and Bruce S. Davie, Computer Networks: A Systems Approach, Fourth Edition, Morgan Kaufmann Publishers (Elsevier), 2009.
[2] Andrew S. Tanenbaum, Computer Networks, Sixth Edition, PHI Learning, 2003.
Circuit switched networks:
First, a dedicated circuit is established across a sequence of links, and then the source node sends a stream of bits across this circuit to the destination node.
Packet switched networks:
Each node receives a complete packet over some link, stores the packet in its internal memory and then
forwards the complete packet to the next node.
Internetwork : (internet)
A set of computers can be indirectly connected. A set of independent networks (clouds) are interconnected to form
an internetwork. A node that is connected to two or more networks is commonly called a router or gateway. A
router/gateway forwards messages from one network to another.
Network performance is measured using the metrics bandwidth and latency. The bandwidth, also referred to as the throughput, of a network is the number of bits that can be transmitted over the network in a certain period of time. The more sophisticated the transmitting and receiving technology, the narrower each bit can become, and thus the higher the bandwidth. Latency corresponds to the time taken by a message to travel from one end of the network to the other and is measured in units of time. There are situations in which it is important to know how long it takes to send a message from one end of a network to the other and back, rather than the one-way latency. This two-way latency is referred to as the round-trip time (RTT) of the network.
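As a rough illustration (a minimal Python sketch with made-up numbers, not figures from these notes), the one-way time to move a message can be estimated as the propagation latency plus the transmit time, size/bandwidth:

def transfer_time(latency_s, bandwidth_bps, message_bits):
    # One-way transfer time = propagation latency + transmit time (size / bandwidth).
    return latency_s + message_bits / bandwidth_bps

# Hypothetical values: 10 ms latency, 10 Mbps link, 1 MB message.
print(transfer_time(0.010, 10e6, 8 * 1_000_000))   # 0.81 seconds; a round trip doubles the latency term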
1.1 INTRODUCTION
Data Communications is the transfer of data or information between a source and a receiver. The source
transmits the data and the receiver receives it.
The effectiveness of a data communication system depends on three characteristics:
1. Delivery
2. Accuracy
3. Timeliness
Delivery: The system must deliver data to the correct destination.
Accuracy: The system must deliver data accurately.
Timeliness: The system must deliver data in a timely manner. Timely delivery means delivering data as they are
produced, in the same order that they are produced and without significant delay. This kind of delivery is
called real time transmission.
1.1.1 Components
Source: It is the transmitter of data. Some examples are Terminal, Computer, and Mainframe.
Medium: The communications stream through which the data is being transmitted. Some examples are:
Cabling, Microwave, Fiber optics, Radio Frequencies (RF), Infrared Wireless
Receiver: The receiver of the data transmitted. Some examples are Printer, Terminal, Mainframe, and Computer.
Message: It is the data that is being transmitted from the Source/Sender to the Destination/Receiver.
Protocol: It is the set of rules that are to be followed for sending the data from the sender to the receiver as well as for interpreting the data received.
DCE: The interface between the Source & the Medium, and the Medium & the Receiver is called the DCE
(Data Communication Equipment) and is a physical piece of equipment.
DTE: Data Terminal Equipment is the Telecommunication name given to the Source and Receiver's equipment.
Half-Duplex: In this type of data communication the data flows in both directions, but only in one direction at a time on the data communication line. Example: conversation on walkie-talkies is a half-duplex data flow; each person takes turns talking, and if both talk at once nothing occurs. Figure 1.4 represents the half-duplex data flow.
1.1.3 Networks
A network is a set of devices (called nodes) connected by media links. A node is any device that is capable of sending/receiving data generated by other nodes on the network. Examples of nodes are computers and workstations.
Components of networks
The basic hardware components of computer networks are
Network Interface Card (NIC): It is the hardware that allows a computer to communicate with another computer on an Ethernet network. If a computer has to be connected to a network, a NIC should be installed in the computer. The NIC provides a dedicated, full-time connection to the network. It contains firmware that controls the protocol and the Ethernet controller required to support the Media Access Control (MAC) protocol used by Ethernet. It allows systems to communicate using the physical address, also referred to as the MAC address, which is burnt into the card.
A MAC address is a 48-bit hardware address that uniquely identifies the device on the network. It is expressed as 12 hexadecimal digits, of which the first 6 digits form the vendor code (Organizationally Unique Identifier) and the last 6 digits represent the interface serial number assigned by that vendor. The NIC works at layers 1 and 2 of the OSI model.
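As a small illustration (a Python sketch with a made-up address, not one taken from these notes), the vendor code and interface serial number can be split out of the 12 hexadecimal digits:

def split_mac(mac):
    # First 6 hex digits: Organizationally Unique Identifier; last 6: interface serial number.
    digits = mac.replace(":", "").replace("-", "").lower()
    assert len(digits) == 12, "a MAC address has 12 hexadecimal digits (48 bits)"
    return digits[:6], digits[6:]

oui, serial = split_mac("00:1A:2B:3C:4D:5E")   # hypothetical address, for illustration only
print(oui, serial)                             # 001a2b 3c4d5e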
Categories of networks
The three primary categories of networks are Local Area Network (LAN), Metropolitan Area Network (MAN), and Wide Area Network (WAN). The category into which a network falls is determined by its size, its ownership, the distance it covers and its physical architecture.
LAN: A LAN is usually privately owned and links the devices in a single office, building or campus. A
LAN can be as simple as two PCs or it can extend throughout a company. LAN size is limited to a few
kilometers. The most widely used LAN system is the Ethernet system developed by the Xerox Corporation.
MAN: A MAN is designed to extend over an entire city. It could be a single network such as a cable TV network, or it could connect a number of LANs into a larger network so that resources may be shared LAN to LAN as well as device to device. A company can use a MAN to connect the LANs in all its offices throughout a city. A MAN can be owned by a private company, or it may be a service provided by a public company, such as a local telephone company. Telephone companies provide a popular MAN service called Switched Multi-megabit Data Service (SMDS).
WAN: A WAN provides long-distance transmission of data, voice, image and video information over large geographic areas. It may span a country, a continent or even the whole world. Transmission rates are typically 2 Mbps, 34 Mbps, 45 Mbps, 155 Mbps and 625 Mbps. WANs utilize public, leased, or private communication equipment, usually in combination, and can therefore span an unlimited number of miles. A WAN that is wholly owned and used by a single company is referred to as an Enterprise Network. The figure represents the comparison of the different types of networks.
1.1.5 Topologies
Physical topology refers to the way in which a network is laid out physically. Two or more links form a topology. The topology of a network is the geometric representation of the
relationship of all the links and linking devices to one another. The basic topologies are:
Mesh
Star
Bus and
Ring
Mesh: In a mesh topology each device has a dedicated point-to-point link to every other device. The term dedicated means that the link carries traffic only between the two devices it connects. A fully connected mesh network therefore has n(n-1)/2 physical channels to link n devices. To accommodate that many links, every device on the network must have (n-1) I/O ports. Generally a partial mesh topology is implemented, in which case the decision on which links to implement should be made appropriately (a small calculation of the full-mesh requirements follows the demerits list below).
Merits
Since every message travels along a dedicated line, only the intended recipient receives the message, and hence the data is secure.
Demerits
The amount of cabling and the I/O ports required increases with the number of devices connected in the
network
Installation and reconnection are difficult
The sheer bulk of the wiring can occupy more space than is available.
The hardware required to connect each link can be prohibitively expensive.
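A minimal Python sketch (my own illustration, not part of the original notes) of how quickly the link and port counts of a fully connected mesh grow:

def mesh_requirements(n):
    # Fully connected mesh: n(n-1)/2 links, and (n-1) I/O ports per device.
    return n * (n - 1) // 2, n - 1

for n in (4, 8, 16):
    links, ports = mesh_requirements(n)
    print(n, "devices ->", links, "links,", ports, "ports per device")
# 4 -> 6 links, 8 -> 28 links, 16 -> 120 links: the cabling grows quadratically.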
Star topology: Each device has a dedicated point to point link only to a central controller usually called a hub.
If one device has to send data to another it sends the data to the controller, which then relays the data to the
other connected device.
Merits
Less expensive than a mesh topology. Each device needs only one link and one I/O port to connect it to any number of others.
Installation and reconfiguration are easy.
Robustness: if one link fails, only that link is affected.
Requires less cable than a mesh.
Demerits
Requires more cable than bus and ring topologies.
Failure of the central controller incapacitates the entire network.
Bus: One long cable acts as a backbone to link all the devices in a network. Nodes are connected to the bus cable by drop lines and taps. A drop line is a connection running between the device and the main cable. A tap is a connector that either splices into the main cable or punctures the sheathing of the cable to create a contact with the metallic core. As the signal travels farther and farther, it becomes weaker, so there is a limit on the number of taps a bus can support and on the distance between those taps.
Merits
Ease of installation.
Bus uses less cabling than mesh or star topologies.
Demerits
Difficult reconnection and isolation.
Signal reflection at the taps can cause degradation in quality.
A fault or break in the bus cable stops all transmission. It also reflects signals back in the direction of
origin creating noise in both directions.
Ring: Each device has a dedicated point-to-point connection only with the two devices on either side of it. A signal is passed along the ring in one direction, from device to device, until it reaches the destination. Each device in the ring incorporates a repeater: when it receives a signal intended for another device, it regenerates the bits and passes them along.
Merits:
2. Internet model, sometimes called the TCP/IP protocol suite (refer 1.2.2).
Layers
A networking system is simpler, cheaper, and more reliable if it is implemented in terms of layers. Each layer accepts responsibility for a small part of the functionality. A clean separation of function between layers means that multiple layers do not need to duplicate functionality. It also means that layers are less likely to interfere with one another. Layering provides two features. First, it decomposes the problem into more manageable components, each of which solves one part of the problem. Second, it provides a more modular design, so if a new service has to be added, only the functionality of a specific layer needs to be modified.
Layer
Each layer provides services to the next higher layer and shields the upper layer from the details
implemented in the lower layers.
Each layer appears to be in direct (virtual) communication with its associated layer on the other computer. Actual communication between adjacent layers takes place on one computer only.
Layering simplifies design, implementation, and testing by dividing the communications process into parts; only the lowest layer can directly communicate with its peer.
Peer-to-Peer Processes
The processes on each machine that communicate at a given layer are called peer-to-peer processes.
At higher layers, communication must move down through the layers on device A, over to device B, and then back up through the layers.
Each layer in the sending device adds its own information to the message it receives from the layer just above it and passes the whole package to the layer just below, from where it is transferred to the receiving device.
Interfaces between layers
The passing of data and network information down through the layers of the sending device and
back up through the layers of the receiving device is made possible by an interface between each pair
of adjacent layers.
Each interface defines what information and services a layer must provide for the layer above it.
The OSI model provides the basic rules that allow multiprotocol networks to operate. Understanding the OSI model is instrumental in understanding how the many different protocols fit into the networking jigsaw puzzle. It forms a valuable reference model and defines much of the language used in data communications.
Physical Layer: It coordinates the functions required to transmit a bit stream over a physical medium. It deals
with the mechanical (cable, plugs, pins etc.) and electrical (modulation, signal strength, voltage levels, bit times)
specifications of the interface and transmission media. It also defines the procedures and functions that physical
devices and interfaces have to perform for transmission to occur.
Major responsibilities of Physical layer are:
Physical characteristics of interfaces and media: It defines the characteristics of the interface between the
devices and the transmission media. Also defines the type of transmission medium.
Representation of bits: To transmit the bits, they must be encoded into electrical or optical signals. The physical layer defines the type of representation, i.e. how 0s and 1s are changed into signals.
Data rate: The number of bits sent each second is also defined by the physical layer. That is the physical
layer defines the duration for which the bit lasts.
Synchronization of bits: Sender and the receiver must be synchronized at the bit level. That is the physical
layer ensures that the sender and the receiver clocks are synchronized.
Signals: An analog signal has infinitely many levels of intensity over a period of time. A digital signal has a limited number of defined values, generally 1 and 0. A periodic signal completes a pattern within a measurable time frame and repeats that pattern over subsequent identical periods; the completion of one full pattern is called a cycle. An aperiodic signal changes without exhibiting a pattern or cycle that repeats over time. Period refers to the amount of time, in seconds, a signal needs to complete one cycle. Frequency refers to the number of periods in one second. Period is the inverse of frequency and frequency is the inverse of period.
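As a small illustration (my own numbers, not from the notes), period and frequency are reciprocals, and the data rate chosen by the physical layer fixes the duration of each bit:

def period(frequency_hz):
    # Period (seconds) is the inverse of frequency (hertz), and vice versa.
    return 1.0 / frequency_hz

def bit_duration(data_rate_bps):
    # Each bit lasts 1 / data-rate seconds.
    return 1.0 / data_rate_bps

print(period(50.0))        # 0.02 s: a 50 Hz signal has a 20 ms period
print(bit_duration(10e6))  # 1e-07 s: at 10 Mbps each bit lasts 0.1 microseconds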
Data link layer: The data link layer is responsible for hop-to-hop (node-to-node) delivery. It transforms the physical layer, a raw transmission facility, into a reliable link, making the physical layer appear error-free to the network layer.
The duties of the data link layer are:
Framing: The data link layer divides the stream of bits received from the network layer into manageable
data units called frames.
Physical Addressing: If the frames are to be distributed to different systems on the network the data link
layer adds a header to the frame to define the receiver or sender of the frame. If the frame is intended for
a system located outside the sender's network then the receiver address is the address of the connecting
device that connects the network to the next one.
Flow Control: If the rate at which the data is absorbed by the receiver is less than the rate produced at the sender, the data link layer imposes a flow control mechanism to avoid overwhelming the receiver.
Error control: Reliability is added to the physical layer by data link layer to detect and retransmit loss or
damaged frames and also to prevent duplication of frames. This is achieved through a trailer added to the
end of the frame.
Access control: When two or more devices are connected to the same link it determines which device has
control over the link at any given time.
Network Layer: The network layer is responsible for source-to-destination delivery of a packet across
multiple networks. It ensures that each packet gets from its point of origin to its final destination. It does not
recognize any relationship between those packets. It treats each one independently, as though each belongs to a separate message.
The functions of the network layer are:
> Internetworking: The logical gluing of heterogeneous physical networks together to look like a
single network to the upper layers.
> Logical Addressing: If a packet has to cross the network boundary then the header contains
information of the logical addresses of the sender and the receiver.
> Routing: When independent networks or links are connected to create an internetwork or a large
network the connective devices route the packet to the final destination.
> Packetizing: Encapsulating packets received from the upper layer protocol. Internet Protocol is used
for packetizing in the network layer.
> Fragmenting: A router has to process the incoming frame and encapsulate it as per the protocol used by the physical network to which the frame is going.
Transport Layer: The transport layer is responsible for process-to-process delivery, i.e., source to destination
delivery of the entire message.
The responsibilities of Transport layer are:
Service-point (port) addressing: Computers run several programs at the same time. Source-to-destination delivery here means delivery from a specific process on one computer to a specific process on the other. The transport layer header therefore includes a type of address called a port address.
Segmentation and reassembly: A message is divided into segments, and each segment contains a sequence number. These numbers enable the transport layer to reassemble the message correctly upon arrival at the destination. Packets lost in transmission are identified and replaced (a short sketch follows this list of responsibilities).
Connection control: The transport layer can be either connectionless or connection-oriented. A connectionless transport layer treats each segment as an independent packet and delivers it to the transport layer at the destination machine. A connection-oriented transport layer first makes a connection with the transport layer at the
destination machine and delivers the packets. After all the data are transferred the connection is
terminated.
Flow control: Flow control at this layer is performed end to end.
Error Control: Error control is performed end to end. At the sending side, the transport layer makes sure that the entire message arrives at the receiving transport layer without error. Error correction is achieved through retransmission.
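A rough Python sketch (the helper names and the 8-byte segment size are my own, not from the notes) of segmentation and reassembly using sequence numbers:

def segment(message, size):
    # Split a message into (sequence number, chunk) pairs of at most `size` bytes.
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(segments):
    # Reorder segments by sequence number and rebuild the original message.
    return b"".join(data for _, data in sorted(segments))

segs = segment(b"process-to-process delivery", 8)
segs.reverse()                                     # simulate out-of-order arrival
assert reassemble(segs) == b"process-to-process delivery"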
Session Layer: Session layer is the network dialog controller. It establishes, maintains, and synchronizes the
interaction between communicating systems.
Specific responsibilities of the layer are:
Dialog Control: The session layer allows two systems to enter into a dialog. Communication between two processes takes place in either half-duplex or full-duplex mode.
Synchronization: The session layer allows a process to add checkpoints into a stream of data. Example: if a system is sending a file of 2000 pages, checkpoints may be inserted after every 100 pages to ensure that each 100-page unit is received and acknowledged independently. So if a crash happens during the transmission of page 523, retransmission begins at page 501; pages 1 to 500 need not be retransmitted.
Presentation layer: It is concerned with the syntax and semantics of the information exchanged between two
systems.
Responsibilities of the presentation layer are
Translation: The processes in two systems are usually exchanging information in the form of
character strings, numbers, and so on. Since different computers use different encoding systems, the
presentation layer is responsible for interoperability between these different encoding methods. At
the sender, the presentation layer changes the information from its sender-dependent format into a
common format. The presentation layer at the receiving machine changes the common format into its
receiver dependent format.
Encryption: The sender transforms the original information from one form to another and sends the resulting message out over the network. Decryption reverses the original process to transform the message back to its original form.
Compression: It reduces the number of bits to be transmitted. It is important in the transmission of text,
audio and video.
Application Layer: It enables the user (human/software) to access the network. It provides user interfaces and
support for services such as electronic mail, remote file access and transfer, shared database management
and other types of distributed information services.
Services provided by the application layer are
Network Virtual terminal: A network virtual terminal is a software version of a physical terminal and
allows a user to log on to a remote host.
File transfer, access and management: This application allows a user to access files in a remote
computer, to retrieve files from a remote computer and to manage or control files in a remote
computer.
Mail services: This application provides the basis for e-mail forwarding and storage.
Directory services: It provides distributed database sources and access for global information about
various objects and services.
At the network layer (or, more accurately, the internetwork layer), TCP/IP supports the Internetworking Protocol.
IP, in turn, uses four supporting protocols: ARP, RARP, ICMP, and IGMP. Each of these protocols is described
in greater detail in later chapters.
Internetworking Protocol (IP)
The Internetworking Protocol (IP) is the transmission mechanism used by the TCP/IP protocols. It is an
unreliable and connectionless protocol - a best-effort delivery service. The term best effort means that IP provides
no error checking or tracking. IP assumes the unreliability of the underlying layers and does its best to get a
transmission through to its destination, but with no guarantees. IP transports data in packets called datagrams,
each of which is transported separately. Datagrams can travel along different routes and can arrive out of
sequence or be duplicated. IP does not keep track of the routes and has no facility for reordering datagrams
once they arrive at their destination. The limited functionality of IP should not be considered a weakness,
however. IP provides bare-bones transmission functions that free the user to add only those facilities necessary for
a given application and thereby allows for maximum efficiency.
Address Resolution Protocol
The Address Resolution Protocol (ARP) is used to associate a logical address with a physical address. On a
typical physical network, such as a LAN, each device on a link is identified by a physical or station address,
usually imprinted on the network interface card (NIC). ARP is used to find the physical address of the node
when its Internet address is known.
Reverse Address Resolution Protocol
The Reverse Address Resolution Protocol (RARP) allows a host to discover its Internet address when it knows
only its physical address. It is used when a computer is connected to a network for the first time or when a
diskless computer is booted.
Internet Control Message Protocol
The Internet Control Message Protocol (ICMP) is a mechanism used by hosts and gateways to send
notification of datagram problems back to the sender. ICMP sends query and error reporting messages.
Internet Group Message Protocol
The Internet Group Message Protocol (IGMP) is used to facilitate the simultaneous transmission of a message
to a group of recipients.
Transport Layer
Traditionally the transport layer was represented in TCP/IP by two protocols: TCP and UDP. IP is a host-to-host protocol, meaning that it can deliver a packet from one physical device to another. UDP and TCP are transport-level protocols responsible for the delivery of a message from one process (running program) to another process. A new transport layer protocol, SCTP, has been devised to meet the needs of some newer applications.
User Datagram Protocol
The User Datagram Protocol (UDP) is the simpler of the two standard TCP/IP transport protocols. It is a process-to-process protocol that adds only port addresses, checksum error control, and length information to the data from the upper layer.
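The UDP header carries exactly those four fields: source port, destination port, length and checksum, 16 bits each. A minimal Python sketch of packing such an 8-byte header (the port numbers are made up, and the checksum is simply left at 0 here instead of being computed):

import struct

def udp_header(src_port, dst_port, payload):
    # Four 16-bit fields: ports, total length (header + payload), checksum (0 = not computed here).
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, 0)

print(udp_header(5000, 53, b"query").hex())   # 13880035000d0000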
Transmission Control Protocol
The Transmission Control Protocol (TCP) provides full transport-layer services to applications. TCP is a reliable stream transport protocol. The term stream, in this context, means connection-oriented: a connection must
be established between both ends of a transmission before either can transmit data. At the sending end of each
transmission, TCP divides a stream of data into smaller units called segments. Each segment includes a sequence
number for reordering after receipt, together with an acknowledgment number for the segments received.
Segments are carried across the internet inside of IP datagrams. At the receiving end, TCP collects each
datagram as it comes in and reorders the transmission based on sequence numbers.
Stream Control Transmission Protocol
The Stream Control Transmission Protocol (SCTP) provides support for newer applications such as voice
over the Internet. It is a transport layer protocol that combines the best features of UDP and TCP.
Application Layer
The application layer in TCP/IP is equivalent to the combined session, presentation, and application layers in the
OSI model. Many protocols are defined at this layer. The protocols pertaining to this layer are discussed in detail as part of chapter 5.
1.4 Encoding
(NRZ, NRZI, Manchester, 4B/5B)
Signals propagate over physical links. The task, therefore, is to encode the binary data that the source node wants to send into the signals that the links are able to carry, and then to decode the signal back into the corresponding binary data at the receiving node. The functions of encoding and decoding are performed by a network adaptor, a piece of hardware that connects a node to a link. The network adaptor contains a signaling component that actually encodes bits into signals at the sending node and decodes signals into bits at the receiving node. As in Figure 2.5, signals travel over a link between two signaling components, and bits flow between network adaptors.
Demerits:
The problem with NRZ is that a sequence of several consecutive 1s means that the signal stays high on the link for an extended period
of time, and similarly, several consecutive 0s means that the signal stays low for a long time. There are two fundamental problems
caused by long strings of 1s or 0s.
The first is that it leads to a situation known as baseline wander. Specifically, the receiver keeps an average of the signal it
has seen so far, and then uses this average to distinguish between low and high signals. Whenever the signal is significantly
lower than this average, the receiver concludes that it has just seen a 0, and likewise, a signal that is significantly higher than
the average is interpreted to be a 1. The problem, of course, is that too many consecutive 1s or 0s cause this average to
change, making it more difficult to detect a significant change in the signal.
The second problem is that frequent transitions from high to low and vice versa are necessary to enable clock recovery. Intuitively, the clock recovery problem is that both the encoding and the decoding processes are driven by a clock: every clock cycle the sender transmits a bit and the receiver recovers a bit. The sender's and the receiver's clocks have to be precisely synchronized in order for the receiver to recover the same bits the sender transmits. If the receiver's clock is even slightly faster or slower than the sender's clock, then it does not correctly decode the signal.
Solution:
You could imagine sending the clock to the receiver over a separate wire, but this is typically avoided because it makes the cost of cabling twice as high.
So instead, the receiver derives the clock from the received signal; this is the clock recovery process. Whenever the signal changes, such as on a transition from 1 to 0 or from 0 to 1, the receiver knows it is at a clock cycle boundary, and it can resynchronize itself. However, a long period of time without such a transition leads to clock drift. Thus, clock recovery depends on having lots of transitions in the signal, no matter what data is being sent.
Manchester Encoding:
An alternative, called Manchester encoding, does a more explicit job of merging the clock with the signal by transmitting the
exclusive-OR of the NRZ-encoded data and the clock. The Manchester encoding is also illustrated in Figure 2.7. Observe that the
Manchester encoding results in 0 being encoded as a low-to-high transition and 1 being encoded as a high-to-low transition. Because
both 0s and 1s result in a transition to the signal, the clock can be effectively recovered at the receiver.
Demerits:
The problem with the Manchester encoding scheme is that it doubles the rate at which signal transitions are made on the link, which
means that the receiver has half the time to detect each pulse of the signal. The rate at which the signal changes is called the link's
baud rate. In the case of the Manchester encoding, the bit rate is half the baud rate, so the encoding is considered only 50% efficient.
If the receiver had been able to keep up with the faster baud rate required by the Manchester encoding in Figure 2.7, then both NRZ
and NRZI could have been able to transmit twice as many bits in the same time period.
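A minimal Python sketch (helper names are my own) of these line codings over a bit string. NRZI is taken here in its usual sense, make a transition to encode a 1 and hold the current level to encode a 0, and Manchester is the XOR of the NRZ signal with the clock, as described above:

def nrz(bits):
    # NRZ: the signal level simply follows the bit value.
    return [int(b) for b in bits]

def nrzi(bits, level=0):
    # NRZI: invert the current level on a 1, hold it on a 0.
    out = []
    for b in bits:
        if b == "1":
            level ^= 1
        out.append(level)
    return out

def manchester(bits):
    # Manchester: XOR each bit with a low-then-high clock, giving two half-bit levels per bit.
    out = []
    for b in bits:
        out += [int(b) ^ 0, int(b) ^ 1]   # 0 -> low-to-high, 1 -> high-to-low
    return out

data = "0101111"
print(nrz(data))         # [0, 1, 0, 1, 1, 1, 1]
print(nrzi(data))        # [0, 1, 1, 0, 1, 0, 1]
print(manchester(data))  # a transition in the middle of every bit time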
4B/5B:
A final encoding that we consider, called 4B/5B, attempts to address the inefficiency of the Manchester encoding without suffering
from the problem of having extended durations of high or low signals. The idea of 4B/5B is to insert extra bits into the bit stream so as
to break up long sequences of 0s or 1s. Specifically, every 4 bits of actual data are encoded in a 5-bit code that is then transmitted to
the receiver; hence the name 4B/5B. The 5-bit codes are selected in such a way that each one has no more than one leading 0 and no
more than two trailing 0s. Thus, when sent back-to-back, no pair of 5-bit codes results in more than three consecutive 0s being
transmitted. The resulting 5-bit codes are then transmitted using the NRZI encoding, which explains why the code is only concerned
about consecutive 0s (NRZI already solves the problem of consecutive 1s). Note that the 4B/5B encoding results in 80% efficiency.
Table 2.4 gives the 5-bit codes that correspond to each of the 16 possible 4-bit data symbols. Notice that since 5 bits are enough to
encode 32 different codes, and we are using only 16 of these for data, there are 16 codes left over that we can use for other purposes.
Of these, code 11111 is used when the line is idle, code 00000 corresponds to when the line is dead, and 00100 is interpreted to mean
halt. Of the remaining 13 codes, 7 of them are not valid because they violate the one leading 0, two trailing 0s rule, and the other 6
represent various control symbols
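Table 2.4 is not reproduced in these notes, so the Python sketch below uses only a few entries of the commonly published 4B/5B map (treat the table values as illustrative). Each 4-bit data nibble is looked up and the resulting 5-bit codes would then be handed to an NRZI encoder such as the one sketched earlier:

FOUR_B_FIVE_B = {           # illustrative subset of the 4B/5B table only
    "0000": "11110",
    "0001": "01001",
    "0010": "10100",
    "1111": "11101",
}

def encode_4b5b(bits):
    # Replace every 4 data bits with their 5-bit code before NRZI transmission.
    assert len(bits) % 4 == 0, "pad the data to a multiple of 4 bits first"
    return "".join(FOUR_B_FIVE_B[bits[i:i + 4]] for i in range(0, len(bits), 4))

encoded = encode_4b5b("00000001")
print(encoded)                            # 1111001001
print(len("00000001") / len(encoded))     # 0.8 -> 4 data bits per 5 transmitted bits (80% efficiency)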
The main functions of the data link layer are:
1. Framing - It divides the stream of bits received from the network layer into manageable data units called frames. It also adds a header to the frame in order to define the physical addresses of the communicating partners.
2. Error control - It is the process used to detect and retransmit damaged or lost frames and to detect duplication of frames. It performs error control using the information passed as part of the trailer.
3. Flow Control - It is the process used to avoid data loss. Flow control coordinates and decides the amount of data that can be sent before an acknowledgement is received from the receiver.
1.5.1 Framing: It divides the stream of bits received from the upper layer (network layer) into manageable data units called frames. It adds a header to the frame to define the physical addresses (source address & destination address).
Blocks of data (called frames at this level), not bit streams, are exchanged between nodes. It is the network
adaptor that enables the nodes to exchange frames. When node A wishes to transmit a frame to node B, it tells
its adaptor to transmit a frame from the node's memory. This results in a sequence of bits being sent over the
link. The adaptor on node B then collects together the sequence of bits arriving on the link and deposits the
corresponding frame in B's memory. Recognizing exactly what set of bits constitutes a frame, that is, determining where the frame begins and ends, is the central challenge faced by the adaptor.
Although the whole message could be packed in one frame it is not normally done. One
reason is that a frame can be very large, making flow and error control very inefficient.
When a message is carried in one very large frame, even a single-bit error would require the retransmission of
the whole message. When a message is divided into smaller frames, a single-bit error affects only that small
frame.
Fixed-Size Framing: Frames can be of fixed or variable size. In fixed-size framing, there is no need for
defining the boundaries of the frames; the size itself can be used as a delimiter. An example of this type of
framing is the ATM wide-area network, which uses frames of fixed size called cells.
Variable-Size Framing: In variable-size framing, we need a way to define the end of the frame and the
beginning of the next. Three approaches were used for this purpose:
Framing Protocols:
(i) Byte-oriented protocols
(ii) Bit-oriented protocols
(iii) Clock-based protocols
Byte-oriented protocols:
(i) BISYNC - Binary Synchronous Communication
(ii) PPP - Point-to-Point Protocol
(iii) DDCMP - Digital Data Communication Message Protocol
Bit-oriented protocols:
(i) HDLC - High-Level Data Link Control
Clock-based protocols:
(i) SONET - Synchronous Optical Network
1.5.1.1 Byte oriented Protocols
Binary Synchronous Communication (BISYNC): a protocol developed by IBM in the late 1960s. The beginning
of a frame is denoted by sending a special SYN (synchronization) character. The data portion of the frame is
then contained between special sentinel characters: STX (start of text) and ETX (end of text). The SOH (start
of header) field serves much the same purpose as the STX field. The problem with the sentinel approach, of
course, is that the ETX character might appear in the data portion of the frame. BISYNC overcomes this
problem by "escaping" the ETX character by preceding it with a DLE (data-link-escape) character whenever it
appears in the body of a frame; the DLE character is also escaped (by preceding it with an extra DLE) in the
frame body. (C programmers may notice that this is analogous to the way a quotation mark is escaped by the
backslash when it occurs inside a string.) This approach is often called character stuffing because extra
characters are inserted in the data portion of the frame.
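A minimal Python sketch of the DLE-based character stuffing described above, using the usual ASCII code points for STX, ETX and DLE (the framing helper names are my own):

STX, ETX, DLE = b"\x02", b"\x03", b"\x10"   # ASCII control characters used by BISYNC

def stuff(body):
    # Escape any DLE or ETX byte inside the frame body by preceding it with DLE.
    out = bytearray()
    for byte in body:
        if bytes([byte]) in (DLE, ETX):
            out += DLE
        out.append(byte)
    return bytes(out)

def frame(body):
    # Sentinel framing: STX, character-stuffed body, ETX.
    return STX + stuff(body) + ETX

print(frame(b"data\x03more").hex())   # the ETX byte inside the body is preceded by 10 (DLE)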
PPP - Point-to-Point Protocol (PPP), which is commonly run over dialup modem links, is similar to
BISYNC in that it uses character stuffing. The format for a PPP frame is given in Figure 1.42. The special
start-of-text character, denoted as the Flag field in Figure 1.42, is 01111110, which is byte stuffed if it occurs
within the payload field.
The Address and Control fields usually contain default values. The Address field which is always set to the
binary value 11111111, indicates that all stations are to accept the frame. This value avoids the issue of using
data link addresses. The default value of the Control field is 00000011. This value indicates an unnumbered
frame. In other words, PPP does not provide reliable transmission using sequence numbers and
acknowledgements.
The Protocol field is used for demultiplexing: it identifies the high-level protocol such as IP or IPX (an IP-like protocol developed by Novell). The frame payload size can be negotiated, but it is 1500 bytes by
default. The Checksum field is either 2 (by default) or 4 bytes long. The PPP frame format is unusual in that
several of the field sizes are negotiated rather than fixed. This negotiation is conducted by a protocol called LCP
(Link Control Protocol). PPP and LCP work in tandem: LCP sends control messages encapsulated in PPP frames
(such messages are denoted by an LCP identifier in the PPP Protocol field) and then turns around and changes PPP's frame format based on the information contained in those control messages. LCP is also involved in establishing a link between two peers when both sides detect the carrier signal.
1.5.1.2 Bit-oriented Protocols
High-Level Data Link Control (HDLC) denotes both the beginning and the end of a frame with the distinguished bit sequence 01111110, and the sender inserts (stuffs) a 0 after any five consecutive 1s in the body of the frame. On the receiving side, should five consecutive 1s arrive, the receiver makes its decision based on the next bit it sees (that is, the bit following the five 1s). If the next bit is a 0, it must have been stuffed, and so the receiver removes it. If the next bit is a 1, then one of two things is true: either this is the end-of-frame marker or an error has been introduced into the bit stream. By looking at the next bit, the receiver can distinguish between these two cases: if it sees a 0 (i.e., the last eight bits it has looked at are 01111110), then it is the end-of-frame marker; if it sees a 1 (i.e., the last eight bits it has looked at are 01111111), then there must have been an error and the whole frame is discarded. In the latter case, the receiver has to wait for the next 01111110 before it can start receiving again, and as a consequence, there is the potential that the receiver will fail to receive two consecutive frames. Obviously, there are still ways that framing errors can go undetected, such as when an entire spurious end-of-frame pattern is generated by errors, but these failures are relatively unlikely.
An interesting characteristic of bit stuffing, as well as character stuffing, is that the size of a frame is
dependent on the data that is being sent in the payload of the frame. It is in fact not possible to make all frames
exactly the same size, given that the data that might be carried in any frame is arbitrary.
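A minimal Python sketch of the sender-side rule implied above, insert a 0 after five consecutive 1s, together with the matching receiver-side removal (function names are my own):

def bit_stuff(bits):
    # Sender: insert a 0 after every run of five consecutive 1s in the frame body.
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")
            run = 0
    return "".join(out)

def bit_unstuff(bits):
    # Receiver: remove the 0 that follows every run of five consecutive 1s.
    out, run, skip = [], 0, False
    for b in bits:
        if skip:               # this bit was stuffed by the sender; drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
            run = 0
    return "".join(out)

body = "0111111111"            # nine 1s in a row
stuffed = bit_stuff(body)
print(stuffed)                 # 01111101111: a 0 inserted after the first five 1s
assert bit_unstuff(stuffed) == body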
1.5.1.3 Clock based Protocols
Clock-Based Framing (SONET) :
A third approach to framing is exemplified by the Synchronous Optical Network (SONET) standard. For lack of
a widely accepted generic term, we refer to this approach simply as clock-based framing. SONET was first
proposed by Bell Communications Research (Bellcore), and then developed under the American National
Standards Institute (ANSI) for digital transmission over optical fiber; it has since been adopted by the ITU-T.
Who standardized what and when is not the interesting issue, though. The thing to remember about SONET is
that it is the dominant standard for long-distance transmission of data over optical networks.
SONET addresses both the framing problem and the encoding problem. It also addresses a problem that is very
important for phone companies - the multiplexing of several low-speed links onto one high-speed link.
Framing:
A SONET frame has some special information that tells the receiver where the frame starts and ends. Notably,
no bit stuffing is used, so that a frame's length does not depend on the data being sent. So the question to ask is,
How does the receiver know where each frame starts and ends? We consider this question for the lowest-speed
SONET link, which is known as STS-1 and runs at 51.84 Mbps.
An STS-1 frame is shown in Figure 2.13. It is arranged as nine rows of 90 bytes each, and the first 3 bytes of
each row are overhead, with the rest being available for data that is being transmitted over the link. The first 2
bytes of the frame contain a special bit pattern, and it is these bytes that enable the receiver to determine where
the frame starts. However, since bit stuffing is not used, there is no reason why this pattern will not occasionally
turn up in the payload portion of the frame. To guard against this, the receiver looks for the special bit pattern
consistently, hoping to see it appearing once every 810 bytes, since each frame is 9 x 90 = 810 bytes long.
When the special pattern turns up in the right place enough times, the receiver concludes that it is in sync and
can then interpret the frame correctly.
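A quick arithmetic check in Python (a sketch using only the numbers quoted above) that 810 bytes every 125 microseconds is exactly the STS-1 rate, and that three interleaved STS-1 frames give the STS-3 frame size:

FRAME_BYTES = 9 * 90           # 810 bytes per STS-1 frame
FRAME_TIME = 125e-6            # every SONET frame lasts 125 microseconds

print(FRAME_BYTES * 8 / FRAME_TIME / 1e6)   # 51.84 Mbps, the STS-1 line rate
print(3 * FRAME_BYTES)                      # 2430 bytes, the STS-3 frame size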
Figure 1.14 Three STS-1 Frames multiplexed onto one STS-3c frame.
SONET supports the multiplexing of multiple low-speed links in the following way. A given SONET link runs
at one of a finite set of possible rates, ranging from 51.84 Mbps (STS-1) to 2488.32 Mbps (STS-48) and
beyond. Note that all of these rates are integer multiples of STS-1. The significance for framing is that a single
SONET frame can contain subframes for multiple lower-rate channels. A second related feature is that each
frame is 125 microseconds long. This means that at STS-1 rates, a SONET frame is 810 bytes long, while at STS-3 rates, each SONET frame is 2430 bytes long. Notice the synergy between these two features: 3 x 810 = 2430,
meaning that three STS-1 frames fit exactly in a single STS-3 frame.
Intuitively, the STS-N frame can be thought of as consisting of N STS-1 frames, where the bytes from these
frames are interleaved; that is, a byte from the first frame is transmitted, then a byte from the second frame is
transmitted, and so on. The reason for interleaving the bytes from each STS-N frame is to ensure that the bytes
in each STS-1 frame are evenly paced; that is, bytes show up at the receiver at a smooth 51 Mbps, rather than all bunched up during one particular 1/Nth of the 125-microsecond interval.
Although it is accurate to view an STS-N signal as being used to multiplex N STS-1 frames, the payload from these STS-1 frames can be linked together to form a larger STS-N payload; such a link is denoted STS-Nc (for
concatenated). One of the fields in the overhead is used for this purpose. Figure 2.14 schematically depicts
concatenation in the case of three STS-1 frames being concatenated into a single STS-3c frame. The
significance of a SONET link being designated as STS-3c rather than STS-3 is that, in the former case, the user
of the link can view it as a single 155.25-Mbps pipe, whereas an STS-3 should really be viewed as three 51.84Mbps links that happen to share a fiber.
Finally, the preceding description of SONET is overly simplistic in that it assumes that the payload for each
frame is completely contained within the frame. In fact, we should view the STS-1 frame just described as
simply a placeholder for the frame, where the actual payload may float across frame boundaries.
This situation is illustrated in Figure 2.15. Here we see both the STS-1 payload floating across two STS-1
frames, and the payload shifted some number of bytes to the right and, therefore, wrapped around. One of the
fields in the frame overhead points to the beginning of the payload. The value of this capability is that it
simplifies the task of synchronizing the clocks used throughout the carriers networks, which is something that
carriers spend a lot of their time worrying about.
2. Sliding window: the sender can transmit several frames before needing an ACK; the receiver acknowledges only some of the frames, using a single ACK to confirm the receipt of multiple frames.
The sender window represents the number of frames the sender can still send. The sender window shrinks when a frame is sent and expands when an ACK is received.
The receiver window represents the number of frames the receiver can still receive. As a new frame comes in, the receiver window shrinks; as frames are acknowledged, it expands.
The sender sends the data frames that are present within the sender's window without waiting for ACK.
If the receiver receives a frame without any error it slides its window and sends an ACK frame, ACK could be
sent together for a group of frames. When the sender receives the ACK frame it shifts the window by one or more
number of frames depending on the sequence number in the ACK frame. Send the next frame in the sender's
window.
A damaged or lost frame is treated in the same manner by the receiver. If the receiver detects an error
in the received frame, it simply discards the frame and sends no acknowledgement.
The sender has a control variable, which we call S, that holds the sequence number of the most recently sent frame. The receiver has a control variable, which we call R, that holds the sequence number of the next frame expected.
The sender starts a timer when it sends a frame. If an ACK is not received within an allotted time period
the sender assumes that the frame was lost or damaged and resends it.
The receiver sends a positive ACK only for frames received safe and sound; it is silent about frames that are damaged or lost.
Four different operations namely Normal, Lost/Error data, Lost ACK and Delayed ACK are possible.
1. Normal Operation: The sender sends the data frame and the receiver receives it
without any error and sends an ACK frame. The sender receives the ACK frame
and sends the next frame. The timing diagram for this process is shown in Figure 1.47.
2. Data Lost/Error: The sender sends the data frame, which is lost during transmission. A time-out occurs at the sender's end and hence the sender retransmits the data frame. If the data is received with an error, the receiver will not send an ACK frame, so a time-out again occurs at the sender and the sender retransmits the data frame. The timing diagram for this process is shown in Figure 1.48.
3. ACK Lost: The sender sends the data frame and the receiver receives it without any error. The receiver sends an ACK frame, which is lost during transmission. A time-out occurs at the sender's end, hence it retransmits the data frame. The receiver, which is waiting for the next frame, recognizes the retransmitted frame as a duplicate, discards it, and resends the ACK frame. The timing diagram for this process is shown in Figure 1.49.
4. Delayed ACK: The sender sends the data frame and the receiver receives it without any error. The receiver sends an ACK frame, but before it reaches the sender a time-out occurs at the sender's end and the data frame is retransmitted. The receiver discards the retransmitted data frame as a duplicate and retransmits the ACK frame. The timing diagram for this process is shown in Figure 1.50.
Figure 1.51: Time diagram for the stop and wait ARQ with Piggybacking
The advantage of stop-and-wait is simplicity. The disadvantage is inefficiency, because if the distance
between the devices is long then the time spent on waiting adds significantly to the total transmission
time.
It is possible to use a piggybacking mechanism if both the devices have data to be transmitted. Piggybacking is a method that combines a data frame with an ACK. Piggybacking can save bandwidth since it eliminates the overhead of a separate ACK frame.
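A minimal Python sketch of the stop-and-wait sender loop described above (the timeout value, the frame list and the toy channel functions are placeholders of my own, not an API from the notes):

import time

TIMEOUT = 0.1                                   # retransmission timer (placeholder value)

def stop_and_wait_send(frames, send, recv_ack):
    # Send one frame at a time; retransmit on timeout, advance only when its ACK arrives.
    s = 0                                       # sequence number of the frame in flight
    while s < len(frames):
        send(frames[s], s)
        ack = recv_ack(time.time() + TIMEOUT)   # ACKed sequence number, or None on timeout
        if ack == s:
            s += 1                              # ACK received: move on to the next frame
        # otherwise: timeout or wrong ACK; loop and retransmit the same frame

delivered, last_seq = [], None                  # toy loss-free "channel" so the sketch can run
def send(frame, seq):
    global last_seq
    delivered.append(frame)
    last_seq = seq
def recv_ack(deadline):
    return last_seq                             # the receiver ACKs every frame immediately

stop_and_wait_send(["f0", "f1", "f2"], send, recv_ack)
print(delivered)                                # ['f0', 'f1', 'f2']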
1.5.3.2 Sliding Window
In order to improve its efficiency sender can send more than one frame before waiting for
ACK. This technique needs a window of frames to be maintained at the sender's end. The
size of the window is fixed. As the sender receives an ACK for a frame, the window is shifted. At any time the window shows the frames that can be sent before receiving an ACK from the receiver. Since the window keeps sliding over the frames, this technique is referred to as sliding window. There are two variations of the sliding window technique: Go-Back-N ARQ and Selective Repeat ARQ.
1.5.3.2.1 Go-Back-N ARQ
In the Go-back-n scheme, upon an error, the sender retransmits all the frames that came after the error. For
example, sender may send frames 1,2,3,4 and does not receive ACK for frame 2 and the timer expires. The
sender will resend all the frames after frame 2. The features of Go-Back-N ARQ are:
Sequence numbers of transmitted frames are maintained in the header of the frame. If k is the number of bits used for the sequence number, then the numbering can range from 0 to 2^k - 1. Example: if k = 3, the sequence numbers are 0 to 7.
The sender's window covers the set of frames in the buffer waiting for ACK. As an ACK is received, the corresponding frame goes out of the window and a new frame to be sent comes into the window. For example, if
the sender receives ACK 4, then it knows that frames up to and including frame 3 were correctly received. The window will slide to just past frame 3.
On the receiver side, the size of the window is always one. The receiver will accept only the frame it is expecting; if any other frame is received, it is discarded. The receiver slides its window over after receiving the expected frame.
The sender has three control variables: S to hold the sequence number of the most recently sent frame, SF to hold the sequence number of the first frame in the window, and SL to hold the sequence number of the last frame in the window. The receiver has a single variable R to hold the sequence number of the expected frame.
The sender has a timer for each transmitted frame; the receiver does not have any timer.
The receiver responds to a frame arriving safely with a positive ACK. For damaged or lost frames the receiver does not reply; the sender has to retransmit the frame when its timer elapses. The receiver may send a single ACK for several frames.
If the timer for any frame expires, the sender has to resend that frame and all subsequent frames as well; hence the protocol is called Go-Back-N ARQ. (A small window-sliding sketch follows this list.)
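A small Python sketch (variable names follow the SF, SL and S notation above; the sequence-number width is illustrative) of how the Go-Back-N sender window slides when a cumulative ACK arrives:

K = 3                               # bits used for sequence numbers
MOD = 2 ** K                        # sequence numbers run from 0 to 2^k - 1
WINDOW = MOD - 1                    # Go-Back-N uses a window of at most 2^k - 1 frames

sf = 0                              # SF: first outstanding frame
sl = (sf + WINDOW - 1) % MOD        # SL: last frame allowed in the window
print(sf, sl)                       # 0 6

ack = 4                             # ACK 4 confirms every frame up to and including frame 3
sf = ack % MOD                      # slide SF forward to the ACKed number
sl = (sf + WINDOW - 1) % MOD
print(sf, sl)                       # 4 2 (the window has wrapped around modulo 8)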
Figure 1.54: Time diagram for Go-Back-N ARQ with lost frame
1.5.3.2.2 Selective Repeat ARQ
In Selective Repeat ARQ the window size is greater than one at the receiver's end also. If the receiver receives any of the frames in its window, it will accept the frame, but the window will not slide until the first frame in the receiver's window is received. In selective repeat both the sender and the receiver have the same window size. Features of selective repeat are:
The sender's window covers a set of frames in the buffer waiting for ACK. As an ACK is received, the corresponding frame goes out of the window and a new frame to be sent comes into the window.
The sender has three control variables: S to hold the sequence number of the most recently sent frame, SF to hold the sequence number of the first frame in the window, and SL to hold the sequence number of the last frame in the window.
The size of the window should be one half of 2^m, where m is the number of bits used for the sequence number (a comparison with the Go-Back-N window size follows this feature list).
The receiver window size must be the same as the sender's window size. Here the receiver is looking for a range of sequence numbers. The receiver has control variables RF and RL to denote the boundaries of the window.
The sender has a timer for each transmitted frame; the receiver does not have any timer.
The receiver responds to a frame arriving safely with a positive ACK. For damaged or lost frames the receiver does not reply; the sender has to retransmit the frame when its timer elapses. The receiver may send a single ACK for several frames.
If the timer for any frame expires, the sender has to resend that frame alone.
The receiver sends a negative ACK to the sender if it receives a frame that is not the first frame in
its window.
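A one-line comparison in Python (a sketch; m is the sequence-number width in bits) of the maximum window sizes implied above for the two ARQ variants:

def max_window(m, selective_repeat):
    # Go-Back-N allows up to 2^m - 1 outstanding frames; Selective Repeat only 2^m / 2.
    return 2 ** (m - 1) if selective_repeat else 2 ** m - 1

print(max_window(3, selective_repeat=False))   # 7
print(max_window(3, selective_repeat=True))    # 4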
The sender and the receiver window size are the same in case of selective repeat. There are four operations in
case of selective repeat. They are:
1. Normal Operation: The sender sends the data frames that are present within the sender's window
without waiting for ACK. If the receiver receives a frame without any error it slides its window and
sends an ACK frame, ACK could be sent together for a group of frames. When the sender receives the
ACK frame it shifts the window by one or more number of frames depending on the sequence number in
the ACK frame. Sends the next frame in the sender's window.
2. Lost Frame: Sender sends the frames present within the sender's window without waiting for ACK. If a
frame is lost in the transmission the receiver sends a negative ACK for that frame. The receiver will
receive the other frames that reach it, if it is within the receiver's window. The sender retransmits only
the frame for which NACK (negative acknowledgement) has been sent. The timing diagram for this
operation is shown in Figure 1.57.
3. Lost ACK: The sender sends the frames that are present within the sender's window. If the receiver receives a frame without any error, it slides its window and sends an ACK. If that ACK is lost during transmission, then either a time-out occurs at the sender's end and the frame is retransmitted, or the ACK of a later frame makes the sender slide its window anyway. If the sender does retransmit the frame, the receiver discards it since it identifies it as a duplicate. In the meanwhile, if the receiver has received the next frame, it will have sent the ACK for that frame.
Figure 1.57: Time diagram for Selective Repeat ARQ with lost frame
4. Delayed ACK: The sender sends the frames that are present within the sender's window. If the receiver receives a frame without any error, it slides its window and sends an ACK. If the ACK is delayed in transmission and a time-out occurs at the sender's end, the frame for which the time-out occurred is retransmitted. The receiver receives the retransmitted frame and discards it since it identifies it as a duplicate. In the meanwhile, if the next frame has arrived, the receiver will have received it and sent its ACK.
Burst length (B) = Bl - Bf + 1, where Bl is the position of the last bit in error and Bf is the position of the first bit in error.
1.5.4.1 Detection
Error detection is the ability to detect the presence of errors caused by noise or other impairments during
transmission from the transmitter to the receiver. The technique of including extra information in the
transmission for error detection is called redundancy. There are various techniques available for error
detection. The well known
methods are: Parity, Cyclic Redundancy Check (CRC), Longitudinal Redundancy Check (LRC) and
Checksum.
Modulo 2 Arithmetic
Modulo-2 arithmetic is performed digit by digit on binary numbers. Each digit is considered
independently from its neighbors. Numbers are not carried or borrowed.
A B | A XOR B
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0
Adding: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, 1 + 1 = 0
Subtracting: 0 - 0 = 0, 0 - 1 = 1, 1 - 0 = 1, 1 - 1 = 0
Addition: Modulo 2 addition is performed using an exclusive OR (XOR) operation on the corresponding
binary digits of each operand. The XOR operation is described in the Figure 1.59. We can add two binary
numbers, X and Y as follows:
(X) 10110100
(Y) 00101010 +
(Z) 10011110
Division: Modulo-2 division can be performed in a manner similar to arithmetic long division: subtract the divisor from the leading bits of the dividend, then proceed along the dividend until its end is reached, remembering that we are using modulo-2 subtraction. For example, we can divide 100100111 by 10011 as follows:
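The worked long-division figure is not reproduced in these notes; the Python sketch below (function name my own) carries out the same modulo-2 division with XOR and, for this example, leaves a remainder of 0100:

def mod2_div(dividend, divisor):
    # Modulo-2 long division: XOR the divisor in wherever the current leading bit is 1.
    bits = list(dividend)
    n = len(divisor)
    for i in range(len(bits) - n + 1):
        if bits[i] == "1":
            for j in range(n):
                bits[i + j] = str(int(bits[i + j]) ^ int(divisor[j]))
    return "".join(bits[-(n - 1):])   # the remainder is the last (n - 1) bits

print(mod2_div("100100111", "10011"))   # 0100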
Subtraction: Modulo-2 subtraction provides the same results as addition. This can be illustrated by
adding the numbers X and Z from the addition example.
(X)
10110100
(Z) 10011110 +
(Y)
00101010
The addition example shows us that X + Y = Z, so Y = Z - X. However, the subtraction example shows us that Y = Z + X. As neither Z nor X is zero, the addition and subtraction operators must behave in the same way.
1.5.4.1.1 Parity
A parity bit is a redundant bit used in an error detection mechanism that can detect all single-bit errors and burst errors that involve an odd number of errors. The parity mechanism can use either odd parity (an odd number of ones in the data stream) or even parity (an even number of ones in the data stream). The mechanism of adding one bit to the data stream is referred to as simple parity. The mechanism that adds one bit to every data stream plus a block parity for a group of data streams is referred to as two-dimensional parity.
Simple Parity: The stream of data is broken up into blocks of bits, and the number of 1 bits is counted. The parity bit is:
o Set (1) if the number of 1 bits is odd and even parity is used.
-- Data (without parity): 1100100 Processed Data: 11001001
o Cleared (0) if the number of 1 bits is even and even parity is used.
-- Data (without parity): 1100101 Processed Data: 11001010
o Set (1) if the number of 1 bits is even and odd parity is used.
-- Data (without parity): 1100101 Processed Data: 11001011
o Cleared (0) if the number of 1 bits is odd and odd parity is used.
-- Data (without parity): 1100100 Processed Data: 11001000
Parity scheme can detect all single bit errors. A parity bit is only guaranteed to detect an odd number of bit
errors. If an even number of bits is in error the parity bit appears to be correct, even though the data is corrupt.
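A minimal Python sketch of computing and checking a simple (even) parity bit, matching the examples above:

def add_even_parity(data):
    # Append a parity bit so that the total number of 1s becomes even.
    return data + str(data.count("1") % 2)

def parity_ok(codeword):
    # With even parity, a valid codeword contains an even number of 1s.
    return codeword.count("1") % 2 == 0

print(add_even_parity("1100100"))   # 11001001 (three 1s in the data, so the parity bit is 1)
print(parity_ok("11001001"))        # True
print(parity_ok("11011001"))        # False: a single flipped bit is detected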
2-D parity: The process of arranging data in a table and associating a parity bit with each row and column is referred to as 2-D parity.
Example: If the data to be sent is 1101010011010000101111011101 and if we consider that for every seven
bits a parity bit is added. Let us also consider that even parity is used. The parity generator will perform the
action shown in the Figure 1.60.
Advantage: This increases the likelihood of detecting burst errors. A redundancy of n bits can easily detect a burst error of n bits.
Disadvantage: If two bits in one data unit are damaged and two bits in exactly the same positions in another data unit are also damaged, the 2-D parity checker will not be able to detect the error.
Original data: 1101010011010000101111011101
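A minimal sketch of 2-D parity generation for the example above, assuming seven data bits per row and even parity (the function name is illustrative):

    def two_d_parity(data_bits, width=7):
        # Arrange the data in rows, append a parity bit to each row,
        # then compute a parity bit for every column (even parity).
        rows = [data_bits[i:i + width] for i in range(0, len(data_bits), width)]
        with_row_parity = [r + str(r.count("1") % 2) for r in rows]
        column_parity = "".join(
            str(sum(int(r[c]) for r in with_row_parity) % 2) for c in range(width + 1))
        return with_row_parity + [column_parity]

    for row in two_d_parity("1101010011010000101111011101"):
        print(row)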
1.5.4.1.2 CRC
Complex error detection methods make use of the properties of finite fields and polynomials over such
fields. The cyclic redundancy check considers a block of data as the coefficients of a polynomial and then divides it (using modulo-2 division) by a fixed, predetermined polynomial. The coefficients of the remainder of the division are taken as the redundant data bits, the CRC. On reception, the receiver recomputes the CRC from the payload bits and compares it with the CRC that was received. A mismatch indicates that an error occurred.
Example: Consider that the bit sequence to be sent is 11010011101100 and a predefined 4-bit divisor 1011 is used. Since the divisor is 4 bits in length, the CRC is 3 bits in length. Append three zeros to the data and perform modulo-2 division.
The appended zeros are replaced by the remainder 100, and hence the transmitted data with CRC is 11010011101100100.
The divisor used in the CRC is often represented as a polynomial. A polynomial should be selected in such a way that it is not divisible by x but is divisible by x+1, where x is the base of the number system being used; in our case it is 2, as we use binary digits.
The condition that the polynomial is not divisible by x guarantees that all burst errors of length less than or equal to the degree of the polynomial are detected.
The condition that the polynomial should be divisible by x+1 guarantees that all burst errors affecting an odd number of bits are detected.
CRC can detect all burst errors that affect an odd number of bits. It can detect all burst errors of length less than or equal to the degree of the polynomial. With a high probability it can also detect burst errors of length greater than the degree of the polynomial.
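The CRC computation sketched above can be expressed in a few lines of Python. This is a straightforward bit-string implementation (an assumed helper, not the textbook's code) that reproduces the example, giving the remainder 100 for data 11010011101100 and divisor 1011.

    def crc_remainder(data_bits, divisor_bits):
        # Append (degree) zeros and take the modulo-2 division remainder.
        r = len(divisor_bits) - 1
        dividend = list(data_bits + "0" * r)
        for i in range(len(data_bits)):
            if dividend[i] == "1":
                for j, d in enumerate(divisor_bits):
                    dividend[i + j] = "0" if dividend[i + j] == d else "1"   # XOR
        return "".join(dividend[-r:])

    crc = crc_remainder("11010011101100", "1011")
    print(crc)                                  # 100
    print("11010011101100" + crc)               # transmitted frame with CRC appended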
1.5.4.1.3 Checksum
A checksum of a message is an arithmetic sum of message code words of a certain word length, for example byte
values, and their carry value. The sum is negated by means of ones-complement, and stored or transferred as an
extra code word extending the message. On the receiver side, a new checksum may be calculated, from the
extended message. If the new checksum is not 0, error is detected. Checksum schemes include parity bits,
check digits, and longitudinal redundancy check.
The checksum generator performs the following steps:
The data is divided into k segments of n bits each
It adds all the k segments using one's complement arithmetic to get an n-bit sum
The sum is complemented to obtain the checksum
The checksum is sent along with the data. The checksum checker performs the following steps:
The data with the checksum is divided into k+1 segments of n bits each
All the segments are added using one's complement addition to get an n-bit sum
The sum is complemented
If the result obtained is zero, the data is accepted as there is no error. If the complemented result is a non-zero value, the data is discarded as it is corrupted.
Example: If we consider that the bit sequence to be sent is 110100111011000100001010 and n as 8, the data is divided into three segments of eight bits each. The one's complement sum of the three segments is 10001111. The complement of the sum is 01110000. This is the checksum; it is appended to the data and transmitted.
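A short sketch of the one's complement checksum for the example above (n = 8); the wrap-around of carries is what distinguishes one's complement addition from ordinary binary addition. The function name is an assumption for illustration.

    def ones_complement_checksum(bits, n=8):
        # Add the n-bit segments with end-around carry, then complement the sum.
        total = 0
        for i in range(0, len(bits), n):
            total += int(bits[i:i + n], 2)
            total = (total & (2 ** n - 1)) + (total >> n)   # wrap the carry back in
        return format((~total) & (2 ** n - 1), "0{}b".format(n))

    print(ones_complement_checksum("110100111011000100001010"))   # 01110000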
1.5.4.1.4 LRC
Longitudinal redundancy check (LRC) or horizontal redundancy check is a form of redundancy check that
is applied independently to each of a parallel group of bit streams. The data must be divided into transmission
blocks, to which the additional check data is added. The term usually applies to a single parity bit per bit
stream, although it could also be used to refer to a larger Hamming code. While simple longitudinal parity
can only detect errors, it can be combined with additional error control coding, such as a transverse redundancy
check, to correct errors.
A longitudinal redundancy check for a sequence of characters may be computed in software by the following
algorithm:
Set LRC = 0
for each character c in the string do
    set LRC = LRC XOR c
end do
An 8-bit LRC such as this is equivalent to a cyclic redundancy check using the polynomial x8+1, but the
independence of the bit streams is less clear when looked at that way. An example of a protocol that uses such an
XOR-based longitudinal redundancy check character is the IEC 62056-21 standard for electrical meter reading.
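A runnable Python equivalent of the pseudocode above (an illustrative helper, not taken from the standard):

    def lrc(data):
        # XOR all byte values together; 'data' is any bytes-like object.
        result = 0
        for byte in data:
            result ^= byte
        return result

    print(hex(lrc(b"example message")))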
The most popular forward error correction method is the Hamming code. This method corrects all single-bit errors. In this method, the redundant bits included in the data help in identifying the bit in which the error has occurred. To calculate the number of redundant bits (r) required to correct a given number of data bits (m), we must find the relationship between m and r. When there are m bits of data and r redundant bits, the resulting data size is m + r. The r bits must be able to indicate m + r + 1 different states in order to identify the bit in error (including the no-error state). Since r bits can indicate 2^r states, we need 2^r >= m + r + 1. The value of r can be calculated by substituting the value of m.
The redundant bits are labelled r1, r2, r4 and r8.
It is a code that can be applied to data of any length. For example, let us consider the 7-bit ASCII code. It requires 4 redundant bits, since r = 4 is the smallest value satisfying 2^r >= m + r + 1 (16 >= 7 + 4 + 1). The redundant bits can be added to the end of the data or interspersed with the original data. Here their placement is at bit positions 1, 2, 4 and 8 (the positions that are powers of 2), and the redundant bits are numbered based on their positions, i.e. r1, r2, r4 and r8. Figure 1.62 shows the placement of the data and the redundant bits.
In the Hamming code, each of the r bits is a parity bit over a combination of the data bits, as shown in Figure 1.61. It could be seen that each of the data bits is included in at least two sets. The parity value for each combination of bits is the value of the corresponding r bit. Let us consider that the data 1001101 is to be transmitted. The Hamming code that would be transmitted by the sender is 10011100101. The generation of the Hamming code is shown in Figure 1.63. The number of 1's in bit positions 3, 5, 7, 9 and 11 is 3, hence r1 is set to 1. The number of 1's in bit positions 3, 6, 7, 10 and 11 is 4, hence r2 is set to 0. The number of 1's in bit positions 5, 6 and 7 is 2, hence r4 is set to 0. The number of 1's in bit positions 9, 10 and 11 is 1, hence r8 is set to 1.
Let us consider that the data sent is 10011100101. The receiver calculates the bits r1, r2, r4 and r8 as per the sets mentioned in Figure 1.62. There are two situations:
Situation 1: The data is received (10011100101) without any error. The calculation of the r bits is shown in Figure 1.64. The calculation shows that all the parity bits are 0 and hence the data is without any error. The parity bits are dropped and the data is retrieved as 1001101.
Situation 2: The data is received (10011110101) with an error in one bit. The calculation of the r bits is shown in Figure 2.8. If the bits are ordered in descending bit order (r8 r4 r2 r1) we get the value 0101. The decimal equivalent of 0101 is 5, hence it is detected that bit 5 is in error and the 5th bit is flipped. The corrected code is 10011100101 and the data is retrieved as 1001101 by dropping the parity bits.
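The encoding and syndrome calculation described above can be sketched in Python as follows; the functions assume even parity, 7 data bits and r bits at positions 1, 2, 4 and 8, and the names are illustrative only.

    def hamming11_encode(data_bits):
        # Place the 7 data bits at positions 11,10,9,7,6,5,3 and fill r1,r2,r4,r8.
        code = [0] * 12                              # index 0 unused; positions 1..11
        for pos, bit in zip([11, 10, 9, 7, 6, 5, 3], data_bits):
            code[pos] = int(bit)
        for r in (1, 2, 4, 8):                       # even parity over covered positions
            code[r] = sum(code[p] for p in range(1, 12) if p & r and p != r) % 2
        return "".join(str(code[p]) for p in range(11, 0, -1))

    def hamming11_syndrome(code_bits):
        # Returns 0 if no error, otherwise the position of the single bit in error.
        code = [0] + [int(b) for b in reversed(code_bits)]
        return sum(r for r in (1, 2, 4, 8)
                   if sum(code[p] for p in range(1, 12) if p & r) % 2)

    print(hamming11_encode("1001101"))        # 10011100101
    print(hamming11_syndrome("10011110101"))  # 5 -> bit 5 is in error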
1.6. Ethernet
Any system that is connected via a LAN to the Internet needs all five layers of the Internet model. The data link layer is divided into the logical link control (LLC) and medium access control (MAC) sublayers. LAN standards define the physical media and the connectors used to connect devices to the media at the physical layer of the OSI reference model. The IEEE Ethernet data link layer has two sublayers, namely LLC and MAC.
Traditional Ethernet was designed to operate at 10 Mbps. Access to the network by a device is through a
contention method Carrier Sense Multiple Access/Collision Detection (CSMA/CD). The media is shared
between all stations. The MAC sublayer governs the operation of the medium access. It also frames data
received from the upper layer and passes them to the Physical Layer Signaling sublayer for encoding.
1.6.1 IEEE 802.3
Ethernet protocols refer to the family of local-area network (LAN) technologies covered by the IEEE 802.3 standard. In the Ethernet standard, there are two modes of operation: half-duplex and full-duplex. In the half-duplex mode, data
are transmitted using the popular Carrier-Sense Multiple Access/Collision Detection (CSMA/CD) protocol on a
shared medium. The main disadvantages of the half-duplex are the efficiency and distance limitation, in which
the link distance is limited by the minimum MAC frame size. This restriction reduces the efficiency drastically
for high-rate transmission. Therefore, the carrier extension technique is used to ensure the minimum frame size
of 512 bytes in Gigabit Ethernet to achieve a reasonable link distance.
Four data rates are currently defined for operation over optical fiber and twisted-pair cables:
10 Mbps - 10Base-T Ethernet (IEEE 802.3)
100 Mbps - Fast Ethernet (IEEE 802.3u)
1000 Mbps - Gigabit Ethernet (IEEE 802.3z)
10-Gigabit - 10 Gbps Ethernet (IEEE 802.3ae).
Figure 2.17 shows the IEEE 802.3 logical layers and their relationship to the OSI reference model. As with all
IEEE 802 protocols, the ISO data link layer is divided into two IEEE 802 sublayers, the Media Access Control
(MAC) sublayer and the MAC-client sublayer. The IEEE 802.3 physical layer corresponds to the ISO physical
layer.
or more of the defined optional protocol extensions. The only requirement for basic communication between
two network nodes is that both MACs must support the same transmission rate.
The physical layer of IEEE 802.3 is specific to the transmission data rate, the signal encoding, and the type of media interconnecting the two nodes. Gigabit Ethernet, for example, is defined to operate over either twisted-pair or optical fiber cable, but each specific type of cable or signal-encoding procedure requires a different physical layer implementation.
The MAC sub layer has two primary responsibilities:
Data encapsulation, including frame assembly before transmission, and frame parsing/error detection
during and after reception.
Media access control, including initiation of frame transmission and recovery from transmission failure
The Basic Ethernet Frame Format
The IEEE 802.3 standard defines a basic data frame format that is required for all MAC implementations, plus
several additional optional formats that are used to extend the protocol's basic capability. The basic data frame
format contains the seven fields shown in Figure 2.18.
Preamble (PRE): It consists of 7 bytes. The PRE is an alternating pattern of ones and zeros that tells
receiving stations that a frame is coming, and that provides a means to synchronize the frame-reception
portions of receiving physical layers with the incoming bit stream.
Start-of-frame delimiter (SFD): It is a 1 byte field. The SFD is an alternating pattern of ones and zeros,
ending with two consecutive 1-bits (10101011) indicating that the next bit is the left-most bit in the leftmost byte of the destination address.
Destination address (DA): It is 6 bytes in length. The DA field identifies which station(s) should receive
the frame. The left-most bit in the DA field indicates whether the address is an individual address
(indicated by a 0) or a group address (indicated by a 1). The second bit from the left indicates whether
the DA is globally administered (indicated by a 0) or locally administered (indicated by a 1). The
remaining 46 bits are a uniquely assigned value that identifies a single station, a defined group of
stations, or all stations on the network.
Source addresses (SA): It is a 6 byte field. The SA field identifies the sending station. The SA is always
an individual address and the left-most bit in the SA field is always 0.
Length/Type: It consists of 2 bytes. This field indicates either the number of MAC-client data bytes that
are contained in the data field of the frame, or the frame type ID if the frame is assembled using an
optional format. If the Length/Type field value is less than or equal to 1500, the number of LLC bytes in
the Data field is equal to the Length/Type field value. If the Length/Type field value is greater than
1536, the frame is an optional type frame, and the Length/Type field value identifies the particular type
of frame being sent or received.
Data: It is a sequence of n bytes of any value, where n is less than or equal to 1500. If the length of the
Data field is less than 46, the Data field must be extended by adding a filler (a pad), sufficient to bring
the Data field length to 46 bytes.
Pre (7 bytes) | SFD (1 byte) | DA (6 bytes) | SA (6 bytes) | Length/Type (2 bytes) | Data (46-1500 bytes) | FCS (4 bytes)
Figure 1.18: The Basic IEEE 802.3 MAC Data Frame Format
Frame check sequence (FCS): It consists of 4 bytes. This sequence contains a 32-bit cyclic redundancy
check (CRC) value, which is created by the sending MAC and is recalculated by the receiving MAC to
check for damaged frames. The FCS is generated over the DA, SA, Length/Type, and Data fields.
Individual addresses are known as unicast addresses because they refer to a single MAC and are assigned by
the NIC manufacturer from a block of addresses allocated by the IEEE. Group addresses, known as multicast addresses, identify the end stations in a workgroup and are assigned by the network manager. A special group address, called the broadcast address, which indicates all stations on the network, has all its bits set to 1.
Whenever an end station MAC receives a transmit-frame request with the accompanying address and data
information from the LLC sublayer, the MAC begins the transmission sequence by transferring the LLC
information into the MAC frame buffer.
The preamble and start-of-frame delimiter are inserted in the PRE and SFD fields.
The destination and source addresses are inserted into the address fields.
The LLC data bytes are counted, and the number of bytes is inserted into the Length/Type field.
The LLC data bytes are inserted into the Data field. If the number of LLC data bytes is less than 46, a
pad is added to bring the Data field length up to 46.
An FCS value is generated over the DA, SA, Length/Type, and Data fields and is appended to the end of
the Data field.
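As a rough sketch of the assembly steps listed above (assuming the preamble and SFD are added by the physical layer, and using Python's zlib.crc32 for the 32-bit FCS), a frame could be built as follows; build_frame is a hypothetical helper, not part of any standard API.

    import struct
    import zlib

    def build_frame(dst, src, payload):
        # dst and src are 6-byte MAC addresses; payload is the LLC data.
        if len(payload) < 46:
            payload = payload + bytes(46 - len(payload))      # pad the Data field to 46 bytes
        header = dst + src + struct.pack("!H", len(payload))  # DA + SA + Length/Type
        fcs = zlib.crc32(header + payload)                    # CRC-32 over DA, SA, L/T, Data
        return header + payload + struct.pack("<I", fcs)

    frame = build_frame(b"\xaa\xbb\xcc\xdd\xee\xff", b"\x11\x22\x33\x44\x55\x66", b"hello")
    print(len(frame))    # 14-byte header + 46-byte padded data + 4-byte FCS = 64 bytes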
After the frame is assembled, actual frame transmission will depend on whether the MAC is operating in half-duplex or full-duplex mode.
The IEEE 802.3 standard currently requires that all Ethernet MACs support half-duplex operation, in which the
MAC can be either transmitting or receiving a frame, but it cannot be doing both simultaneously. Full-duplex
operation is an optional MAC capability that allows the MAC to transmit and receive frames simultaneously.
MAC Frame with Gigabit Ethernet Carrier Extension (IEEE 802.3z)
1000Base-X has a minimum frame size of 416 bytes, and 1000Base-T has a minimum frame size of 520 bytes.
The Extension is a non-data variable extension field to frames that are shorter than the minimum length.
Pre (7 bytes) | SFD (1 byte) | DA (6 bytes) | SA (6 bytes) | Length/Type (2 bytes) | Data (variable) | FCS (4 bytes) | Ext (variable)
Frame control field   Name
00000000              Claim_token
00000001              Solicit_successor_1
00000010              Solicit_successor_2
00000011              Who_follows
00000100              Resolve_contention
00001000              Token
00001100              Set_successor
vary in size and depend on the size of the information field. Data frames carry information for the upper-layer protocols, while command frames contain control information.
the number of bits that can be stored in the ring, the sending station will be draining the first part of the packet
from the ring while still transmitting the latter part.
One issue we must address is how much data a given node is allowed to transmit each time it possesses the
token, or said another way, how long a given node is allowed to hold the token. We call this the token holding
time (THT).
Before putting each packet onto the ring, the station must check that the amount of time it would take to
transmit the packet would not cause it to exceed the token holding time. This means keeping track of how long
it has already held the token, and looking at the length of the next packet that it wants to send. From the token
holding time we can derive another useful quantity, the token rotation time (TRT), which is the amount of time
it takes a token to traverse the ring as viewed by a given node. It is easy to see that TRT <= ActiveNodes x THT + RingLatency, where RingLatency denotes how long it takes the token to circulate around the ring when no one has data to send, and ActiveNodes denotes the number of nodes that have data to transmit. The 802.5
protocol provides a form of reliable delivery using 2 bits in the packet trailer, the A and C bits. These are both 0
initially. When a station sees a frame for which it is the intended recipient, it sets the A bit in the frame. When it
copies the frame into its adaptor, it sets the C bit. If the sending station sees the frame come back over the ring
with the A bit still 0, it knows that the intended recipient is not functioning or absent. If the A bit is set but not
the C bit, this implies that for some reason (e.g., lack of buffer space) the destination could not accept the frame.
Thus, the frame might reasonably be retransmitted later in the hope that buffer space had become available.
Another detail of the 802.5 protocol concerns the support of different levels of priority. The token contains a 3-bit priority field, so we can think of the token as having a certain priority n at any time. Each device that wants to send a packet assigns a priority to that packet, and the device can only seize the token to transmit a packet if the packet's priority is at least as great as the token's. The priority of the token changes over time through the use of reservation bits in the frame header. For example, a station X waiting to send a priority-n packet may set these bits to n if it sees a data frame going past and the bits have not already been set to a higher value. This causes the station that currently holds the token to elevate its priority to n when it releases it. Station X is responsible for lowering the token priority to its old value when it is done.
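Returning to the timing quantities above, a tiny worked illustration of the TRT bound (the numbers used here are made up for the example):

    def max_trt(active_nodes, tht, ring_latency):
        # Upper bound: every active node may hold the token for up to THT,
        # plus the time the token needs to circulate an otherwise idle ring.
        return active_nodes * tht + ring_latency

    # e.g. 20 active stations, 10 ms THT each, 2 ms ring latency -> 0.202 s
    print(max_trt(20, 0.010, 0.002))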
2.5 Fiber Distributed Data Interface (FDDI)
FDDI networks are typically used as backbones for wide-area networks. FDDI rings are normally constructed in
the form of a "dual ring of trees". A small number of devices, typically infrastructure devices such as routers and
concentrators rather than host computers, are connected to both rings - these are referred to as "dual-attached".
Host computers are then connected as single-attached devices to the routers or concentrators. The dual ring in
its most degenerate form is simply collapsed into a single device. In any case, the whole dual ring is typically
contained within a computer room.
This network topology is required because the dual ring actually passes through each connected device and
requires each such device to remain continuously operational (the standard actually allows for optical bypasses
but these are considered to be unreliable and error-prone). Devices such as workstations and minicomputers that
may not be under the control of the network managers are not suitable for connection to the dual ring.
As an alternative to a dual-attached connection, the same degree of resilience is available to a workstation
through a dual-homed connection which is made simultaneously to two separate devices in the same FDDI ring.
One of the connections becomes active while the other one is automatically blocked. If the first connection fails,
the backup link takes over with no perceptible delay.
An extension to FDDI, called FDDI-2, supports the transmission of voice and video information as well as data.
Another variation of FDDI called FDDI Full Duplex Technology (FFDT) uses the same network infrastructure
but can potentially support data rates up to 200 Mbps.
FDDI's four specifications are the Media Access Control (MAC), Physical Layer Protocol (PHY), Physical-Medium Dependent (PMD), and Station Management (SMT) specifications. The MAC specification defines
how the medium is accessed, including frame format, token handling, addressing, algorithms for calculating
cyclic redundancy check (CRC) value, and error-recovery mechanisms. The PHY specification defines data
encoding/decoding procedures, clocking requirements, and framing, among other functions. The PMD
specification defines the characteristics of the transmission medium, including fiber-optic links, power levels,
bit-error rates, optical components, and connectors. The SMT specification defines FDDI station configuration,
ring configuration, and ring control features, including station insertion and removal, initialization, fault
isolation and recovery, scheduling, and statistics collection.
The FDDI frame format is similar to the format of a Token Ring frame. This is one of the areas in which FDDI
borrows heavily from earlier LAN technologies, such as Token Ring. FDDI frames can be as large as 4,500
bytes. Figure 2.25 shows the frame format of an FDDI data frame and token. The following descriptions
summarize the FDDI data frame and token fields illustrated in Figure 2.25.
Preamble: It is a unique sequence that prepares each station for an upcoming frame.
Start delimiter: It indicates the beginning of a frame by employing a signaling pattern that differentiates
it from the rest of the frame.
Frame control: It indicates the size of the address fields and whether the frame contains asynchronous or
synchronous data, among other control information.
Destination Address: It contains a unicast (singular), multicast (group), or broadcast (every station)
address. As with Ethernet and Token Ring addresses, FDDI destination addresses are 6 bytes long.
Source Address: It identifies the single station that sent the frame. As with Ethernet and Token Ring
addresses, FDDI source addresses are 6 bytes long.
Data: It contains either information destined for an upper-layer protocol or control information.
Frame check sequence (FCS): It is filled by the source station with a calculated cyclic redundancy check value that depends on the frame contents. The destination station recalculates the value to determine whether the frame was damaged in transit. If so, the frame is discarded.
End delimiter: It contains unique symbols that cannot be data symbols to indicate the end of the frame.
Frame status: It allows the source station to determine whether an error occurred; identifies whether the
frame was recognized and copied by a receiving station.
Figure 2.27: Basic Service Set (BSS without an AP, and BSS with an AP)
frequency.
o The channel uses a persistence strategy with backoff until the channel is idle.
o After the channel is found idle, the station waits for a period of time called the distributed inter-frame space (DIFS); then the station sends a control frame called request to send (RTS).
o After receiving the RTS and waiting a short period of time, called short inter-frame space (SIFS), the
destination station sends a control frame, called clear to send (CTS), to the source station. This control
frame indicates that the destination station is ready to receive data.
o The source station sends data after waiting an amount of time equal to SIFS.
o The destination station, after waiting for an amount of time equal to SIFS, sends an acknowledgement to
show that the frame has been received. Acknowledgement is needed in this protocol because the station
does not have any means to check for the successful arrival of its data at the destination.
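The sequence of steps above can be summarised in the following rough Python sketch. The channel object and its idle(), send() and wait_for() methods are hypothetical, and the timing constants are placeholders; this is only meant to show the order DIFS -> RTS -> SIFS -> CTS -> SIFS -> data -> SIFS -> ACK, with backoff on a busy channel or a missing CTS.

    import random
    import time

    def csma_ca_send(channel, frame, difs=50e-6, sifs=10e-6, slot=20e-6, cw=15):
        while True:
            while not channel.idle():                    # persistence with random backoff
                time.sleep(slot * random.randint(0, cw))
            time.sleep(difs)                             # wait DIFS on an idle channel
            channel.send("RTS")
            if not channel.wait_for("CTS", timeout=2 * sifs):
                cw = min(2 * cw + 1, 1023)               # no CTS: assume collision, back off
                continue
            time.sleep(sifs)
            channel.send(frame)                          # data after SIFS
            return channel.wait_for("ACK", timeout=2 * sifs)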
PLCP PDU length (no. of bytes in packet), PLCP signaling (rate information) and Header error check (16 bit
CRC).
MAC Data: The general MAC frame format is shown in Figure 2.31.
Frame Control: This field is two bytes in length and contains the following subfields:
Protocol Version: It is a two-bit field which is used to indicate future versions. The current value for this field is 0.
Type: The next two bits indicate the type of the frame. It is set as 00 for management frames, 01 for control frames and 10 for data frames.
Subtype: The next four bits indicate the subtype of the frame. Figure 2.32 shows the values of the type and subtype fields along with their descriptions.
ToDS: This bit is set to 1 when the frame is addressed to the Access Point for forwarding to the distribution system. The bit is set to 0 in all other frames.
FromDS: This bit is set to 1 when the frame is coming from the distribution system.
More flag: This bit is set to 1 when more fragments belonging to the same frame follow the current frame.
Type  Type description  Subtype     Subtype description
00    Management        0000        Association Request
00    Management        0001        Association Response
00    Management        0010        Reassociation Request
00    Management        0011        Reassociation Response
00    Management        0100        Probe Request
00    Management        0101        Probe Response
00    Management        0110-0111   Reserved
00    Management        1000        Beacon
00    Management        1001        ATIM
00    Management        1010        Disassociation
00    Management        1100        Deauthentication
00    Management        1101-1111   Reserved
01    Control           0000-1001   Reserved
01    Control           1010        PS-Poll
01    Control           1011        RTS
01    Control           1100        CTS
01    Control           1101        ACK
01    Control           1110        CF End
01    Control           1111        CF End + CF-ACK
10    Data              0000        Data
10    Data              0001        Data + CF-ACK
10    Data              0010        Data + CF-Poll
10    Data              0011        Data + CF-ACK + CF-Poll
10    Data              0100        Null function (no data)
10    Data              0101        CF-ACK (no data)
10    Data              0110        CF-Poll (no data)
10    Data              0111        CF-ACK + CF-Poll (no data)
10    Data              1000-1111   Reserved
Figure 2.32: Description for the frame type and subtype field combinations
Duration/ID: It indicates the station ID in the case of Power-Save Poll messages, and the duration value used for Network Allocation Vector calculation in all other frames.
Address Fields: Depending on the ToDS and FromDS bits, a frame may contain up to four addresses as follows:
Address 1: It always indicates the recipient address; it is the address of the AP if the ToDS bit is set and the end station address otherwise.
Address 2: It always indicates the transmitter address; it is the address of the AP if the FromDS bit is set and the station address otherwise.
Address 3: In most cases it carries the remaining, missing address; it indicates the original source address if FromDS is set to 1 and the destination address otherwise.
Address 4: It is used in the special case where a wireless distribution system is used and the frame is being transmitted from one AP to another. In this case both the ToDS and FromDS bits are set, so the original destination and source addresses are missing from the first two address fields and are carried in Address 3 and Address 4.
Sequence Control: It is used to represent the order of different fragments belonging to the same frame and to
recognize duplicates. It consists of two fields namely Fragment number and the Sequence number.
Frame Body: Data or information that is being transmitted
Cyclic Redundancy Check: It is a 32-bit field containing the Cyclic Redundancy Check.
A wireless LAN defined by IEEE 802.11 has three categories of frames namely management, control and
data frames. Management frames are used for initial communication between stations and access points.
Control frames are used for accessing the channel and acknowledging frames. Data frames are used for
carrying data and control information. There are three control frames namely RTS, CTS and ACK, their
format is shown in Figure 2.33.
RTS frame: FC | D | Address 1 | Address 2 | FCS
CTS or ACK frame: FC | D | Address 1 | FCS
Figure 2.33: Control Frames
Four different cases of addressing mechanisms are supported by IEEE 802.11. The interpretation of four
addresses in the MAC frame depends on the value of ToDS and FromDS flags of the FC field and is shown
in Figure 2.34. Figures 2.35 to 2.38 represent the addressing mechanisms for the four cases.
To DS  From DS  Address 1            Address 2       Address 3            Address 4       It Means
0      0        Destination address  Source station  BSS ID               N/A             Frame is neither going to nor coming from the distribution system
0      1        Destination address  Sending AP      Source station       N/A             Frame is coming from the distribution system
1      0        Receiving AP         Source station  Destination address  N/A             Frame is going to the distribution system
1      1        Receiving AP         Sending AP      Destination address  Source station  The distribution system is wireless
Figure 2.34: Interpretation of Addresses
network segment) when an entry for the corresponding MAC is found in the forwarding table. If the source and destination stations are in different network segments and the forwarding table does not have a corresponding entry, the bridge broadcasts the frame on all ports except the source port. Transparent bridges work fine as long as there are no redundant bridges in the system. Redundant bridges actually help to make the system more reliable, but they can create loops, which are not desirable in the system. To solve the looping problem, the bridges use the spanning tree algorithm to create a loopless topology.
Figure 2.40: Learning Bridge. The forwarding table is initially empty ("Original"). After A sends a frame to D, the table contains the entry (A, port 1). After E sends a frame to A, the entry (E, port 3) is added. After B sends a frame to C, the entry (B, port 1) is added.
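A minimal sketch of the learning behaviour illustrated in Figure 2.40 (the class and method names are illustrative):

    class LearningBridge:
        def __init__(self, ports):
            self.ports = ports          # e.g. [1, 2, 3]
            self.table = {}             # learned mapping: MAC address -> port

        def receive(self, src, dst, in_port):
            self.table[src] = in_port                            # learn where src is reachable
            out = self.table.get(dst)
            if out is None:
                return [p for p in self.ports if p != in_port]   # unknown destination: flood
            if out == in_port:
                return []                                        # same segment: filter the frame
            return [out]                                         # known: forward on one port

    bridge = LearningBridge([1, 2, 3])
    print(bridge.receive("A", "D", 1))   # table empty for D: flood on ports 2 and 3
    print(bridge.receive("E", "A", 3))   # A was learned on port 1: forward on [1]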
4. Mark the root port and the designated port as forwarding ports, and all the other ports as blocking ports. A forwarding port forwards the frames that the bridge receives; a blocking port does not forward them.
Consider the architecture in Figure 2.42. After applying the steps of the spanning tree algorithm, the representation is shown in Figure 2.43. In Figure 2.44, forwarding links are represented as solid lines and dotted lines represent blocking links.
Figure 2.42: Connections prior to spanning tree application (LANs 1-4 interconnected by bridges B1-B5)
Figure 2.43: The topology after the spanning tree algorithm, with B1 as the root bridge; forwarding ports are marked and the remaining ports are blocked
In the simplest terms, a switch is a mechanism that allows us to interconnect links to form a larger network.
A switch is a multi-input, multi-output device, which transfers packets from an input to one or more outputs.
Thus, a switch adds the star topology (see Figure 2.45) to the point-to-point link, bus (Ethernet), and ring
(802.5 and FDDI) topologies established in the last chapter. A star topology has several attractive properties:
Even though a switch has a fixed number of inputs and outputs, which limits the number of hosts that
can be connected to a single switch, large networks can be built by interconnecting a number of
switches.
We can connect switches to each other and to hosts using point-to-point links, which typically means
that we can build networks of large geographic scope.
Adding a new host to the network by connecting it to a switch does not necessarily mean that the
hosts already connected will get worse performance from the network.
For example, it is impossible for two hosts on the same Ethernet to transmit continuously at 10 Mbps
because they share the same transmission medium. Every host on a switched network has its own link to the
switch, so it may be entirely possible for many hosts to transmit at the full link speed (bandwidth), provided
that the switch is designed with enough aggregate capacity. Providing high aggregate throughput is one of
the design goals for a switch. In general, switched networks are considered more scalable (i.e., more capable of growing to large numbers of nodes) than shared-media networks because of this ability to support many hosts at full speed.
Figure 2.47: Example Switch with three input and output ports
A switch is connected to a set of links and, for each of these links, runs the appropriate data link protocol to communicate with the node at the other end of the link. A switch's primary job is to receive incoming packets on one of its links and to transmit them on some other link. This function is sometimes referred to as either switching or forwarding, and in terms of the OSI architecture, it is the main function of the network
layer. Figure 2.46 shows the protocol graph that would run on a switch that is connected to two T3 links and
one STS-1 SONET link. A representation of this same switch is given in Figure 2.47. In this figure, we have
split the input and output halves of each link, and we refer to each input or output as a port. (In general, we
assume that each link is bidirectional, and hence supports both input and output.) In other words, this
example switch has three input ports and three output ports.
There are three common switching approaches:
(i) The datagram (or connectionless) approach.
(ii) The virtual circuit (or connection-oriented) approach.
(iii) Source routing, which is less common than the other two, but is simple to explain and does have some useful applications.
One thing that is common to all networks is that we need to have a way to identify the end nodes. Such
identifiers are usually called addresses. We have already seen examples of addresses in the previous chapter,
for example, the 48-bit address used for Ethernet. The only requirement for Ethernet addresses is that no two
nodes on a network have the same address. This is accomplished by making sure that all Ethernet cards are
assigned a globally unique identifier.
the connection will traverse. We also need to ensure that the chosen VCI on a given link is not currently in
use on that link by some existing connection.
There are two broad classes of approach to establishing connection state. One is to have a network
administrator configure the state, in which case the virtual circuit is permanent. A permanent virtual circuit
(PVC) is a long-lived or administratively configured VC. Alternatively, a host can send messages into the
network to cause the state to be established. This is referred to as signaling, and the resulting virtual circuits
are said to be switched. The salient characteristic of a switched virtual circuit (SVC) is that a host may set up
and delete such a VC dynamically without the involvement of a network administrator. SVC could be more
accurately called a signaled VC, since it is the use of signaling (not switching) that distinguishes an SVC from a PVC. Let's assume that a network administrator wants to manually create a new virtual connection from host A to host B. First, the administrator needs to identify a path through the network from A to B. In the example network of Figure 2.49, there is only one such path, but in general this may not be the case. The administrator then picks a VCI value that is currently unused on each link for the connection. For the purposes of our example, let's suppose that the VCI value 5 is chosen for the link from host A to switch 1,
and that 11 is chosen for the link from switch 1 to switch 2. In that case, switch 1 needs to have an entry in
its VC table configured as shown in the second row of Table 2.2.
Similarly, suppose that the VCI of 7 is chosen to identify this connection on the link from switch 2 to
switch 3, and that a VCI of 4 is chosen for the link from switch 3 to host B. In that case, switches 2 and 3
need to be configured with VC table entries as shown in rows three and four of Table 2.2. Note that the
outgoing VCI value at one switch is the incoming VCI value at the next switch. Once the VC tables
have been set up, the data transfer phase can proceed, as illustrated in Figure 2.50.
Switch Number  Incoming Interface  Incoming VCI  Outgoing Interface  Outgoing VCI
1              2                   5             1                   11
2              3                   11            2                   7
3              0                   7             1                   4
Table 2.2: Virtual Circuit Table Entries
For any packet that it wants to send to host B, A puts the VCI value of 5 in the header of the packet and
sends it to switch 1. Switch 1 receives any such packet on interface 2, and it uses the combination of the
interface and the VCI in the packet header to find the appropriate VC table entry. As shown in Table 2.2, the
table entry in this case tells switch 1 to forward the packet out of interface 1 and to put the VCI value 11 in
the header when the packet is sent. Thus, the packet will arrive at switch 2 on interface 3 bearing VCI 11.
Switch 2 looks up interface 3 and VCI 11 in its VC table (as shown in Table 2.2) and sends the packet on to
switch 3 after updating the VCI value in the packet header appropriately, as shown in Figure 2.51.
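The lookup just described amounts to a simple table indexed by (incoming interface, incoming VCI); a small sketch for switch 1 follows, with illustrative names.

    # VC table for switch 1 (see Table 2.2), keyed by (incoming interface, incoming VCI).
    vc_table = {(2, 5): (1, 11)}

    def forward(in_interface, in_vci):
        # Look up the entry and rewrite the VCI for the outgoing link.
        out_interface, out_vci = vc_table[(in_interface, in_vci)]
        return out_interface, out_vci

    print(forward(2, 5))   # -> (1, 11): send on interface 1 carrying VCI 11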
switch removes the relevant entry from its table and forwards the message on to the other switches in the
path, which similarly delete the appropriate table entries. At this point, if host A were to send a packet with a
VCI of 5 to switch 1, it would be dropped as if the connection had never existed.
There are several things to note about virtual circuit switching:
Since host A has to wait for the connection request to reach the far side of the network and return before it can send its first data packet, there is at least one RTT of delay before data is sent.
While the connection request contains the full address for host B (which might be quite large, being a
global identifier on the network), each data packet contains only a small identifier, which is only
unique on one link. Thus, the per-packet overhead caused by the header is reduced relative to the
datagram model.
If a switch or a link in a connection fails, the connection is broken and a new one will need to be established. Also, the old one needs to be torn down to free up table storage space in the switches.
The issue of how a switch decides which link to forward the connection request on has been glossed
over. In essence, this is the same problem as building up the forwarding table for datagram
forwarding, which requires some sort of routing algorithm.
A third approach to switching is known as source routing. The name derives from the fact that all the
information about network topology that is required to switch a packet across the network is provided by the
source host. There are various ways to implement source routing. One would be to assign a number to each
output of each switch and to place that number in the header of the packet. The switching function is then
very simple: For each packet that arrives on an input, the switch would read the port number in the header
and transmit the packet on that output.
Figure 2.52: Source Routing in a switched network
However, since there will in general be more than one switch in the path between the sending and the
receiving host, the header for the packet needs to contain enough information to allow every switch in the
path to determine which output the packet needs to be placed on. One way to do this would be to put an
ordered list of switch ports in the header and to rotate the list so that the next switch in the path is always at
the front of the list. Figure 2.52 illustrates this idea. In this example, the packet needs to traverse three
switches to get from host A to host B. At switch 1, it needs to exit on port 1, at the next switch it needs to
exit at port 0, and at the third switch it needs to exit at port 3. Thus, the original header when the packet
leaves host A contains the list of ports (3, 0, 1), where we assume that each switch reads the rightmost
element of the list. To make sure that the next switch gets the appropriate information, each switch rotates
the list after it has read its own entry. Thus, the packet header as it leaves switch 1 en route to switch 2 is
now (1, 3, 0); switch 2 performs another rotation and sends out a packet with (0, 1, 3) in the header.
Although not shown, switch 3 performs yet another rotation, restoring the header to what it was when host A
sent it. There are several things to note about this approach.
First, it assumes that host A knows enough about the topology of the network to form a header that
has all the right directions in it for every switch in the path. This is somewhat analogous to the problem of
building the forwarding tables in a datagram network or figuring out where to send a setup packet in a
virtual circuit network.
Second, observe that we cannot predict how big the header needs to be, since it must be able to hold
one word of information for every switch on the path. This implies that headers are probably of variable
length with no upper bound, unless we can predict with absolute certainty the maximum number of switches
through which a packet will ever need to pass.
Third, there are some variations on this approach. For example, rather than rotate the header, each
switch could just strip the first element as it uses it. Rotation has an advantage over stripping, however: Host
B gets a copy of the complete header, which may help it figure out how to get back to host A. Yet another
alternative is to have the header carry a pointer to the current next port entry, so that each switch just
updates the pointer rather than rotating the header; this may be more efficient to implement. These three
approaches are illustrated in Figure 2.53. In each case, the entry that this switch needs to read is A, and the
entry that the next switch needs to read is B. Source routing can be used in both datagram networks and
virtual circuit networks. For example, the Internet Protocol, which is a datagram protocol, includes a source
route option that allows selected packets to be source routed, while the majority is switched as conventional
datagrams.
Source routing is also used in some virtual circuit networks as the means to get the initial setup
request along the path from source to destination. Finally, we note that source routing suffers from a scaling
problem. In any reasonably large network, it is very hard for a host to get the complete path information it
needs to construct correct headers.
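A small sketch of the rotation-based forwarding step described earlier, reproducing the (3, 0, 1) example; the function name is illustrative.

    def source_route_forward(header):
        # The switch reads the rightmost port number, transmits on that port,
        # and rotates the list so the next switch's entry becomes rightmost.
        ports = list(header)
        out_port = ports[-1]
        return out_port, tuple([out_port] + ports[:-1])

    header = (3, 0, 1)                       # header as it leaves host A
    for hop in (1, 2, 3):
        out_port, header = source_route_forward(header)
        print("switch", hop, "-> port", out_port, "header becomes", header)
    # switch 1 -> port 1, (1, 3, 0); switch 2 -> port 0, (0, 1, 3); switch 3 -> port 3, (3, 0, 1)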
PART A
Questions & Answers
1. What are the three criteria necessary for an effective and efficient network?
The most important criteria are performance, reliability and security.
Performance of the network depends on the number of users, the type of transmission medium, the capabilities of the connected hardware and the efficiency of the software.
Reliability is measured by the frequency of failure, the time it takes a link to recover from a failure and the network's robustness in a catastrophe.
Security issues include protecting data from unauthorized access and viruses.
2. Group the OSI layers by function?
The seven layers of the OSI model belonging to three subgroups.
Physical, data link and network layers are the network support layers; they deal with the physical aspects of
moving data from one device to another.
Session, presentation and application layers are the user support layers; they allow interoperability among
unrelated software systems.
The transport layer ensures end-to-end reliable data transmission.
3. What are header and trailers and how do they get added and removed?
Each layer in the sending machine adds its own information to the message it receives from the layer
just above it and passes the whole package to the layer just below it. This information is added in the form of
headers or trailers. Headers are added to the message at the layers 6,5,4,3, and 2. A trailer is added at layer2.
At the receiving machine, the headers or trailers attached to the data unit at the corresponding sending layers
are removed, and actions appropriate to that layer are taken.
4. What are the features provided by layering?
Two nice features:
It decomposes the problem of building a network into more manageable components.
It provides a more modular design.
5. Why are protocols needed?
In networks, communication occurs between the entities in different systems. Two entities cannot just
send bit streams to each other and expect to be understood. For communication, the entities must agree on a
protocol. A protocol is a set of rules that govern data communication.
6. What are the two interfaces provided by protocols?
Service interface
Peer interface
Service interface- defines the operations that local objects can perform on the protocol.
Peer interface- defines the form and meaning of messages exchanged between protocol peers to implement
the communication service.
7. Mention the different physical media?
Twisted pair (the wire that your phone connects to)
Coaxial cable (the wire that your TV connects to)
Optical fiber (the medium most commonly used for high-bandwidth, long-distance links)
Space (the stuff that radio waves, microwaves and infrared beams propagate through)
8. Define Signals?
Signals are actually electromagnetic waves traveling at the speed of light. The speed of light is, however, medium dependent: electromagnetic waves traveling through copper and fiber do so at about two-thirds the speed of light in a vacuum.
9. What is a wave's wavelength?
The distance between a pair of adjacent maxima or minima of a wave, typically measured in meters, is called the wave's wavelength.
10. Define Modulation?
Modulation is varying the frequency, amplitude or phase of a signal to effect the transmission of information. A simple example of modulation is to vary the power (amplitude) of a single wavelength.
11. Explain the two types of duplex?
Full duplex-two bit streams can be simultaneously transmitted over the links at the
same time, one going in each direction.
Half duplex-it supports data flowing in only one direction at a time.
12. What is CODEC?
A device that encodes analog voice into a digital ISDN link is called a CODEC, for coder/decoder.
13. What is spread spectrum and explain the two types of spread spectrum?
Spread spectrum is to spread the signal over a wider frequency band than normal in such a way as to
minimize the impact of interference from other devices.
Frequency Hopping
Direct sequence
14. What are the different encoding techniques?
NRZ
NRZI
Manchester
4B/5B
15. How does NRZ-L differ from NRZ-I?
In the NRZ-L sequence, positive and negative voltages have specific meanings: positive for 0 and negative for 1. In the NRZ-I sequence, the voltages are meaningless.
Instead, the receiver looks for changes from one level to another as its basis for recognition of 1s.
16. What are the responsibilities of data link layer?
Specific responsibilities of data link layer include the following. a) Framing b) Physical addressing c)
Flow control d) Error control e) Access control.
17. What are the ways to address the framing problem?
Byte-Oriented Protocols(PPP)
Bit-Oriented Protocols(HDLC)
Clock-Based Framing(SONET)
18. Distinguish between a peer-to-peer relationship and a primary-secondary relationship.
Peer-to-peer relationship: All the devices share the link equally.
Primary-secondary relationship: One device controls traffic and the others must transmit through it.
19. Mention the types of errors and define the terms?
There are 2 types of errors
Single-bit error.
Burst-bit error.
Single-bit error: The term single-bit error means that only one bit of a given data unit (such as a byte, character, data unit or packet) is changed from 1 to 0 or from 0 to 1.
Burst error: Means that 2 or more bits in the data unit have changed from 1 to 0 or from 0 to 1.
20. List out the available detection methods.
There are 4 types of redundancy checks are used in data communication.
Vertical redundancy checks (VRC).
Longitudinal redundancy checks (LRC).
Cyclic redundancy checks (CRC).
Checksum.
21. Write short notes on VRC.
The most common and least expensive mechanism for error detection is the vertical redundancy check (VRC), often called a parity check. In this technique a redundant bit, called a parity bit, is appended to every data unit so that the total number of 1s in the unit (including the parity bit) becomes even.
Each receiving device has a block of memory called a buffer, reserved for storing incoming data until
they are processed.
34.What is the difference between a passive and an active hub?
An active hub contains a repeater that regenerates the received bit patterns before sending them out.
A passive hub provides a simple physical connection between the attached devices.
35. For n devices in a network, what is the number of cable links required for a
mesh and ring topology?
Mesh topology n (n-1)/2
Ring topology n
36. Group the OSI layers by function. (MAY/JUNE2007)
The seven layers of the OSI model belonging to three subgroups. Physical, data link and network
layers are the network support layers; they deal with the physical aspects of moving data from one device to
another. Session, presentation and application layers are the user support layers; they allow interoperability
among unrelated software systems. The transport layer ensures end-to-end reliable data transmission.
37. We have a channel with a 1 MHz bandwidth. The SNR for this channel is 63; what are the appropriate bit rate and signal level?
First, we use the Shannon formula to find the upper limit:
C = B log2(1 + SNR) = 10^6 x log2(1 + 63) = 10^6 x log2(64) = 6 Mbps
The Shannon capacity is an upper limit; choosing a lower rate of 4 Mbps, the Nyquist formula gives the number of signal levels:
4 Mbps = 2 x 1 MHz x log2 L, so L = 4.
(Note that for a channel with SNR = 0, C = B log2(1 + 0) = B x 0 = 0.)
38. List the channelization protocols.
Frequency Division Multiple Access (FDMA): the total bandwidth is divided into channels.
Time Division Multiple Access (TDMA): the band is one channel that is time-shared among stations.
Code Division Multiple Access (CDMA): one channel carries all transmissions simultaneously.
39. What is a protocol? What are its key elements? (NOV/DEC 2007)
A protocol is a set of rules that govern data communication. The key elements are:
i) Syntax ii) Semantics iii) Timing
40. What is meant by half duplex?
Each Station can both transmit and receive but not at the same time. When one device is sending the
other can only receive and vice versa.
41.
42.
43.
Sol: The physical, data link and network layers are the network support layers, and the session, presentation and application layers are the user support layers. The transport layer links the network support and user support layers.
44.
Write the advantages of Optical fiber over twisted pair and coaxial cable.
45.
Sol. Between machines, layer x on one machine can communicate with layer x of another machine. The processes on each machine that communicate at a given layer are called peer-to-peer processes.
46.
What does the IEEE 10 Base 5 standard signify?
It is an Ethernet standard.
The number 10 signifies the data rate of 10 Mbps and the number 5 signifies the maximum cable length of 500 meters.
The word Base specifies a baseband (digital) signal; 10 Mbps Ethernet uses Manchester encoding.
47.
What is CSMA/CD?
CSMA/CD is the access method used in an Ethernet.
It stands for Carrier Sense Multiple Access with Collision Detection.
Collision: Whenever multiple users have unregulated access to a single line, there is a danger
of signals overlapping and destroying each other. Such overlaps, which turn the signals into
unusable noise, are called collisions.
In CSMA/CD the station wishing to transmit first listens to make certain the link is free, then
transmits its data, then listens again. During the data transmission, the station checks the line
for the extremely high voltages that indicate a collision.
If a collision is detected, the station quits the current transmission and waits a predetermined amount of time for the line to clear, then sends its data again.
48. What is token ring?
Token ring is a LAN protocol standardized by IEEE and numbered as IEEE 802.5.
In a token ring network, the nodes are connected into a ring by point-to-point links.
It supports data rate of 4 & 16 Mbps.
Each station in the network transmits during its turn and sends only one frame during each
turn.
The mechanism that coordinates this rotation is called Token passing.
A token is a simple placeholder frame that is passed from station to station around the ring. A
station may send data only when it has possession of the token.
49. Differentiate 1000 Base SX and 100 Base FX.
S.No  Feature        1000 Base SX       100 Base FX
1     Type           Gigabit Ethernet   Fast Ethernet
2     Data rate      1 Gbps             100 Mbps
3     Medium         Optical fiber      Optical fiber
4     Signal         Short-wave laser
5     Max. Distance  550 m
6     Encoding       8B/10B             4B/5B
50.
16. Explain the access method and frame format of IEEE 802.3 and IEEE 802.5 in detail.
17. Explain the classification of ETHERNET.
18. Explain the functioning of wireless LAN in detail.