Advanced Computer Network-Unit-1

A computer network connects independent computers to share information and resources, utilizing hardware and software for communication. It consists of nodes (like servers and personal computers) and links (wired or wireless), governed by communication protocols. Networks can be categorized by area (LAN, MAN, WAN), communication type (point-to-point, multipoint), and architecture (client-server, peer-to-peer).


What is a Computer Network?

A computer network is a system that connects many independent computers to share information (data) and resources. Integrating computers and other devices allows users to communicate more easily. At minimum, a network is a collection of two or more linked computer systems; the connection can be established over either cabled or wireless media, with hardware and software working together to connect the computers and devices.

A computer network consists of various kinds of nodes. Servers, networking hardware, personal computers, and other specialized or general-purpose hosts can all be nodes in a computer network. Hostnames and network addresses are used to identify them.
Components of a Computer Network
In simple terms, a computer network is made up of two main parts: devices
(called nodes) and connections (called links). The links connect the devices to
each other. The rules for how these connections send information are called
communication protocols. The starting and ending points of these
communications are often called ports.

1. Network Devices
Basic hardware interconnecting network nodes, such as Network Interface Cards (NICs), bridges, hubs, switches, and routers, is used in all networks. In addition, a medium for connecting these building blocks is necessary; this is usually copper (galvanic) cable, with optical fiber as a less common alternative. The following are the common network devices:
 NIC (Network Interface Card): A network card, often known as a
network adapter or NIC (network interface card), is computer hardware that
enables computers to communicate via a network. It offers physical access
to networking media and, in many cases, MAC addresses serve as a low-
level addressing scheme. Each network interface card has a distinct
identifier. This is stored on a chip that is attached to the card.
 Repeater: A repeater is an electronic device that receives a signal, cleans it of unwanted noise, regenerates it, and retransmits it at a higher power level or to the other side of an obstruction, allowing the signal to travel greater distances without degradation. In most twisted-pair Ethernet networks, repeaters are necessary for cable runs longer than 100 meters. Repeaters operate at the physical layer.
 Hub: A hub is a device that joins together many twisted pairs or fiber optic
Ethernet devices to give the illusion of a formation of a single network
segment. The device can be visualized as a multiport repeater. A network
hub is a relatively simple broadcast device. Any packet entering any port is
regenerated and broadcast out on all other ports, and hubs do not control
any of the traffic that passes through them. Packet collisions occur as a
result of every packet being sent out through all other ports, substantially
impeding the smooth flow of communication.
 Bridges: A bridge initially floods incoming data to all ports except the one that received the transmission, just as a hub does. Unlike a hub, however, a bridge learns which MAC addresses are reachable through specific ports. Once a port and an address are associated, the bridge forwards traffic destined for that address only to that port.
 Switches: A switch differs from a hub in that it only forwards frames to the ports participating in the communication, rather than to all connected ports. A switch breaks the collision domain into separate domains, one per port, but all of its ports still belong to a single broadcast domain. Switches make frame-forwarding decisions based on MAC addresses.
 Routers: Routers are networking devices that use headers and forwarding
tables to find the optimal way to forward data packets between networks. A
router is a computer networking device that links two or more computer
networks and selectively exchanges data packets between them. A router
can use address information in each data packet to determine if the source
and destination are on the same network or if the data packet has to be
transported between networks. When numerous routers are deployed in a
wide collection of interconnected networks, the routers share target system
addresses so that each router can develop a table displaying the preferred
pathways between any two systems on the associated networks.
 Gateways: To provide system compatibility, a gateway may contain
devices such as protocol translators, impedance-matching devices, rate
converters, fault isolators, or signal translators. It also necessitates the
development of administrative procedures that are acceptable to both
networks. By completing the necessary protocol conversions, a protocol
translation/mapping gateway joins networks that use distinct network
protocol technologies.
2. Links
Links are the ways information travels between devices, and they can be of
two types:
 Wired: Communication over a physical medium such as copper wire, twisted pair, or fiber-optic cable. A wired network uses cables to link devices such as laptops or desktop PCs to the Internet or to another network.
 Wireless: Wireless means without wires; the medium is made up of electromagnetic (EM) waves or infrared waves. All wireless devices have antennas or sensors. A wireless network uses radio-frequency waves rather than wires for data or voice communication.
3. Communication Protocols
A communication protocol is a set of rules that all devices follow when they
share information. Some common protocols are TCP/IP, IEEE 802, Ethernet,
wireless LAN, and cellular standards. TCP/IP is a model that organizes how
communication works in modern networks. It has four functional layers for
these communication links:
 Network Access Layer: This layer controls how data is physically
transferred, including how hardware sends data through wires or fibers.
 Internet Layer: This layer packages data into packets and handles their addressing and routing so they can be sent and received across networks.
 Transport Layer: This layer keeps the communication between devices
steady and reliable.
 Application Layer: This layer allows high-level applications to access the
network to start data transfer.
Most of the modern internet structure is based on the TCP/IP model, although
the similar seven-layer OSI model still has a strong influence.
IEEE 802 is a group of standards for local area networks (LAN) and
metropolitan area networks (MAN). The most well-known member of the
IEEE 802 family is wireless LAN, commonly known as WLAN or Wi-Fi.
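As a hedged illustration of these layers in action, the sketch below uses Python's standard socket API: the application layer hands bytes to the transport layer (TCP), and the lower layers deliver them. The echo server and the loopback address are purely illustrative.

```python
# A tiny TCP echo exchange over the loopback interface, showing the
# application layer using the transport layer (TCP) via the socket API.
import socket
import threading

def server(sock):
    conn, _ = sock.accept()
    data = conn.recv(1024)          # transport layer delivers the bytes
    conn.sendall(b"echo: " + data)  # application-level reply
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # let the OS pick a free port
srv.listen(1)
t = threading.Thread(target=server, args=(srv,))
t.start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(b"hello")               # application layer sends data
print(cli.recv(1024))               # b'echo: hello'
cli.close()
t.join()
srv.close()
```

Everything below the `sendall`/`recv` calls (segmentation, IP routing, framing on the wire) is handled by the lower TCP/IP layers, which is exactly the division of labor the model describes.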
4. Network Defense
While nodes, links, and protocols are the building blocks of a network, a
modern network also needs strong defenses. Security is crucial because huge
amounts of data are constantly being created, moved, and processed. Some
examples of network defense tools are firewalls, intrusion detection systems
(IDS), intrusion prevention systems (IPS), network access control (NAC),
content filters, proxy servers, anti-DDoS devices, and load balancers.

Types of Computer Networks


Division Based on Area Covered
 Local Area Network (LAN): A LAN is a network that covers an area of around 10 kilometers, for example a college network or an office network. Depending on the needs of the organization, a LAN can span a single office, a building, or a campus. It can be as small as two PCs and one printer in a home office, or it can extend throughout the company and include audio and video devices. Each host in a LAN has an identifier, an address, that uniquely defines it. A packet sent from one host to another carries both the source host's and the destination host's addresses.
 Metropolitan Area Network (MAN): MAN refers to a network that
covers an entire city. For example: consider the cable television network.
 Wide Area Network (WAN): WAN refers to a network that connects countries or continents. For example, the Internet allows users to access a distributed system called the WWW from anywhere around the globe. A WAN interconnects connecting devices such as switches, routers, or modems. A LAN, by contrast, is normally privately owned by the organization that uses it. We see two distinct examples of WANs today: point-to-point WANs and switched WANs.
o Point To Point: Connects two connecting devices through
transmission media.
o Switched: A switched WAN is a network with more than two
ends.
Based on Types of Communication
 Point To Point networks: Point-to-Point networking is a type of data
networking that establishes a direct link between two networking nodes.
A direct link between two devices, such as a computer and a printer, is
known as a point-to-point connection.
 Multipoint: A multipoint connection is one in which more than two specific devices share a link. In a multipoint environment, the capacity of the channel is shared, either spatially or temporally. If several devices can use the link simultaneously, it is a spatially shared connection.
 Broadcast networks: In broadcast networks, a single sender transmits a signal that numerous parties can hear. Radio stations are an excellent everyday illustration of a broadcast network: the radio station is the sender of the data/signal, and the data travels in only one direction, away from the radio transmission tower.
Based on the Type of Architecture
 P2P Networks: Computers with similar capabilities and configurations are
referred to as peers.
The “peers” in a peer-to-peer network are computer systems that are
connected to each other over the Internet. Without the use of a central
server, files can be shared directly between systems on the network.
 Client-Server Networks: Each computer or process on the network is
either a client or a server in a client-server architecture (client/server). The
client asks for services from the server, which the server provides. Servers
are high-performance computers or processes that manage disc drives (file
servers), printers (print servers), or network traffic (network servers).
 Hybrid Networks: The hybrid model uses a combination of client-server
and peer-to-peer architecture. Eg: Torrent.
Types of Computer Network Architecture
Computer Network Architecture is of two types. These types are mentioned
below.
 Client-Server Architecture: In Client-Server Architecture, dedicated devices in the network act as servers while the other machines act as clients; clients request services from the server, and communication between clients passes through it.
 Peer-to-Peer Architecture: In Peer-to-Peer Architecture, computers are connected to each other and each computer is equally capable, as there is no central server. Each device can act as either a client or a server.

Computer Network Architecture


Computer Network Architecture is defined as the physical and logical design of the software, hardware, protocols, and media used for data transmission. Simply put, it describes how computers are organized and how tasks are allocated among them.

Two types of network architectures are used:

o Peer-To-Peer network
o Client/Server network

Peer-To-Peer Network

o A Peer-To-Peer network is a network in which all the computers are linked together with equal privileges and responsibilities for processing data.
o A Peer-To-Peer network is useful for small environments, usually up to 10 computers.
o A Peer-To-Peer network has no dedicated server.
o Special permissions are assigned to each computer for sharing resources, but this can lead to a problem if the computer holding a resource is down.
Advantages of Peer-To-Peer Network:

o It is less costly, as it does not require a dedicated server.
o If one computer stops working, the other computers continue working.
o It is easy to set up and maintain, as each computer manages itself.
Disadvantages of Peer-To-Peer Network:

o A Peer-To-Peer network has no centralized system, so data cannot be backed up centrally; it is scattered across different locations.
o It has security issues, as each device manages itself.

Client/Server Network

o A Client/Server network is a network model designed for end users, called clients, to access resources such as songs, videos, etc. from a central computer known as the server.
o The central controller is known as a server while all other computers in
the network are called clients.
o A server performs all the major operations such as security and network
management.
o A server is responsible for managing all the resources such as files,
directories, printer, etc.
o All the clients communicate with each other through the server. For example, if client 1 wants to send data to client 2, it first sends a request to the server for permission. The server then responds to client 1, authorizing it to initiate communication with client 2.
Advantages Of Client/Server network:

o A Client/Server network is a centralized system; therefore, we can back up the data easily.
o A Client/Server network has a dedicated server that improves the overall
performance of the whole system.
o Security is better in Client/Server network as a single server administers
the shared resources.
o It also increases the speed of the sharing resources.
Disadvantages Of Client/Server network:

o A Client/Server network is expensive, as it requires a server with a large amount of memory.
o A server runs a Network Operating System (NOS) to provide resources to the clients, but the cost of a NOS is very high.
o It requires a dedicated network administrator to manage all the resources.

The performance of a network pertains to the measure of service quality of a network as perceived by the user. There are different ways to measure the performance of a network, depending upon the nature and design of the network. Network performance depends on both the quality of the service the network delivers and the quantity (capacity) it can handle.
Parameters for Measuring Network Performance
 Bandwidth
 Latency (Delay)
 Bandwidth – Delay Product
 Throughput
 Jitter
BANDWIDTH

One of the most essential conditions of a website's performance is the amount of bandwidth allocated to the network. Bandwidth determines how rapidly the web server is able to upload the requested information. While there are different factors to consider with respect to a site's performance, bandwidth is often the limiting factor.
Bandwidth is defined as the amount of data or information that can be transmitted in a fixed amount of time. The term can be used in two different contexts with two distinct measuring values. For digital devices, bandwidth is measured in bits per second (bps) or bytes per second. For analog devices, bandwidth is measured in cycles per second, or Hertz (Hz).
Bandwidth is only one component of what an individual perceives as the speed of a network. People frequently confuse bandwidth with internet speed because Internet Service Providers (ISPs) tend to advertise a fast "40 Mbps connection" in their campaigns. True internet speed is actually the amount of data you receive every second, and that has a lot to do with latency too. "Bandwidth" means "capacity", while "speed" means "transfer rate".
More bandwidth does not mean more speed. Suppose we double the width of a tap pipe, but the water flows at the same rate as it did when the pipe was half the width; there will be no improvement in speed. When we talk about WAN links, we mostly mean bandwidth, but when we talk about LANs, we mostly mean speed. This is because over a WAN we are generally constrained by expensive cable bandwidth, whereas over a LAN the constraint is hardware and interface data transfer rates (or speed).
 Bandwidth in Hertz: It is the range of frequencies contained in a composite
signal or the range of frequencies a channel can pass. For example, let us
consider the bandwidth of a subscriber telephone line as 4 kHz.
 Bandwidth in Bits per Seconds: It refers to the number of bits per second
that a channel, a link, or rather a network can transmit. For example, we can
say the bandwidth of a Fast Ethernet network is a maximum of 100 Mbps,
which means that the network can send 100 Mbps of data.
Note: There exists an explicit relationship between the bandwidth in hertz and
the bandwidth in bits per second. An increase in bandwidth in hertz means an
increase in bandwidth in bits per second. The relationship depends upon
whether we have baseband transmission or transmission with modulation.
LATENCY

In a network, during the process of data communication, latency (also known as delay) is defined as the total time taken for a complete message to arrive at the destination, starting from the time the first bit of the message is sent out from the source and ending when the last bit of the message is delivered at the destination. Network connections with small delays are called "low-latency networks", and network connections that suffer from long delays are known as "high-latency networks".
High latency creates bottlenecks in network communication. It prevents the data from taking full advantage of the network pipe and effectively decreases the usable bandwidth of the network. The effect of latency on a network's bandwidth can be temporary or persistent depending on the source of the delays. Latency is also informally called ping rate and is measured in milliseconds (ms).
 In simpler terms latency may be defined as the time required to successfully
send a packet across a network.
 It is measured in many ways like a round trip, one-way, etc.
 It might be affected by any component in the chain used to carry the data, such as workstations, WAN links, routers, LANs, and servers, and for large networks it may ultimately be limited by the speed of light.
Latency = Propagation Time + Transmission Time + Queuing Time +
Processing Delay
Propagation Time
It is the time required for a bit to travel from the source to the destination.
Propagation time can be calculated as the ratio between the link length
(distance) and the propagation speed over the communicating medium. For
example, for an electric signal, propagation time is the time taken for the signal
to travel through a wire.
Propagation time = Distance / Propagation speed
Example:
Input: What will be the propagation time when the distance between two points is 12,000 km, assuming the propagation speed in cable to be 2.4 * 10^8 m/s?

Output: We can calculate the propagation time as:
Propagation time = (12,000 * 1000) / (2.4 * 10^8) = 0.05 s = 50 ms
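The same calculation can be checked with a short script, using the figures from the example above:

```python
# Propagation time = distance / propagation speed.
distance_m = 12_000 * 1_000   # 12,000 km expressed in metres
speed_mps = 2.4e8             # propagation speed in the cable (m/s)

propagation_time_ms = distance_m / speed_mps * 1000
print(propagation_time_ms)    # approximately 50.0 (ms)
```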
Transmission Time
Transmission time is the time it takes to push the signal onto the transmission line. It includes costs such as the training signals that the sender usually puts at the front of a packet to help the receiver synchronize clocks. The transmission time of a message depends on the size of the message and the bandwidth of the channel.
Transmission time = Message size / Bandwidth
Example:
Input: What will be the propagation time and the transmission time for a 2.5-kbyte message when the bandwidth of the network is 1 Gbps? Assume the distance between sender and receiver is 12,000 km and the propagation speed is 2.4 * 10^8 m/s.

Output: We can calculate the propagation and transmission times as:
Propagation time = (12,000 * 1000) / (2.4 * 10^8) = 50 ms
Transmission time = (2560 * 8) / 10^9 = 0.020 ms

Note: Since the message is short and the bandwidth is high, the dominant factor is the propagation time, not the transmission time (which can be ignored).
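The transmission-time side of the example can be verified the same way:

```python
# Transmission time = message size / bandwidth, for the 2.5-kbyte
# message over a 1 Gbps link in the example above.
message_bits = 2_560 * 8      # 2.5 kbyte = 2,560 bytes = 20,480 bits
bandwidth_bps = 1e9           # 1 Gbps

transmission_time_ms = message_bits / bandwidth_bps * 1000
print(transmission_time_ms)   # approximately 0.02048 (ms), i.e. about 0.020 ms
```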
Queuing Time
Queuing time is the time a packet has to sit in a router's queue. Quite frequently the wire is busy, so we are not able to transmit a packet immediately. Queuing time is not a fixed factor; it changes with the load on the network. In such cases, the packet sits in a queue, ready to go. These delays are predominantly characterized by the amount of traffic on the system: the more the traffic, the more likely a packet is stuck in the queue, sitting in memory, waiting.
Processing Delay
Processing delay is the delay based on how long it takes the router to figure out
where to send the packet. As soon as the router finds it out, it will queue the
packet for transmission. These costs are predominantly based on the complexity
of the protocol. The router must decipher enough of the packet to make sense of
which queue to put the packet in. Typically the lower-level layers of the stack
have simpler protocols. If a router does not know which physical port to send the packet to, it will send it out all ports, queuing the packet in many queues immediately. By contrast, at a higher level, such as the IP protocol, processing may include making an ARP request to find the physical address of the destination before queuing the packet for transmission; this, too, counts as processing delay.

BANDWIDTH – DELAY PRODUCT

Bandwidth and delay are two performance measurements of a link. However, what is significant in data communications is the product of the two, the bandwidth-delay product. Let us take two hypothetical cases as examples.
Case 1: Assume a link has a bandwidth of 1 bps and a delay of 5 s. The bandwidth-delay product is 1 x 5 = 5, the maximum number of bits that can fill the link: there can be at most 5 bits on the link at any time.

Bandwidth Delay Product

Case 2: Assume a link has a bandwidth of 3 bps and the same 5 s delay. There can be a maximum of 3 x 5 = 15 bits on the line: at each second there are 3 bits on the line, and the duration of each bit is 0.33 s.
Bandwidth Delay

For both examples, the product of bandwidth and delay is the number of bits that can fill the link. This estimate is significant when we have to send data in bursts and wait for the acknowledgment of each burst before sending the next one. To utilize the full capacity of the link, we should make the size of our burst twice the product of bandwidth and delay; that is, we need to fill up the full-duplex channel. The sender should send a burst of 2 * bandwidth * delay bits and then wait for the receiver's acknowledgment of part of the burst before sending the next one. The quantity 2 * bandwidth * delay is the number of bits that can be in transit at any time.
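The two cases, and the suggested burst size, can be checked with a few lines:

```python
# Bandwidth-delay product: the number of bits that can fill the link.
# Doubling it gives the burst size that fills the full-duplex channel.
def bdp(bandwidth_bps, delay_s):
    return bandwidth_bps * delay_s

print(bdp(1, 5))      # Case 1: 5 bits fill the link
print(bdp(3, 5))      # Case 2: 15 bits fill the link
print(2 * bdp(3, 5))  # full-duplex burst size for Case 2: 30 bits
```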

THROUGHPUT

Throughput is the number of messages successfully delivered per unit time. It is controlled by the available bandwidth, the available signal-to-noise ratio, and hardware limitations. The maximum throughput of a network may consequently be higher than the actual throughput achieved in everyday use. The terms "throughput" and "bandwidth" are often treated as the same, yet they are different: bandwidth is the potential measurement of a link, whereas throughput is an actual measurement of how fast we can send data.
Throughput is measured by tabulating the amount of data transferred between multiple locations during a specific period of time, usually expressed in bits per second (bps), with larger units such as kilobits (Kbps), megabits (Mbps), and gigabits per second (Gbps) also common. Throughput may be affected by numerous factors, such as limitations of the underlying analog physical medium, the available processing power of the system components, and end-user behavior. When the overheads of the various protocols are taken into account, the useful rate of transferred data can be significantly lower than the maximum achievable throughput.
Consider a highway with the capacity to move, say, 200 vehicles at a time, where at a random moment only about 150 vehicles are moving through it due to congestion on the road. The capacity (bandwidth) is 200 vehicles per unit time, while the throughput is 150 vehicles per unit time.
Example:
Input: A network with a bandwidth of 10 Mbps can pass only an average of 12,000 frames per minute, where each frame carries an average of 10,000 bits. What will be the throughput of this network?

Output: We can calculate the throughput as:
Throughput = (12,000 x 10,000) / 60 = 2 Mbps
The throughput is equal to one-fifth of the bandwidth in this case.
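The same arithmetic as a quick script:

```python
# Throughput for the example above: 12,000 frames per minute,
# 10,000 bits per frame, over a 10 Mbps link.
frames_per_minute = 12_000
bits_per_frame = 10_000

throughput_bps = frames_per_minute * bits_per_frame / 60
print(throughput_bps / 1e6)   # 2.0 Mbps, one-fifth of the 10 Mbps bandwidth
```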

JITTER

Jitter is another performance issue related to delay. In technical terms, jitter is "packet delay variation": jitter becomes a problem when different packets of data face different delays in a network and the data at the receiving application is time-sensitive, e.g., audio or video data.
Jitter is measured in milliseconds (ms). It is defined as an interference in the normal order of sending data packets. For example, if the delay for the first packet is 10 ms, for the second 35 ms, and for the third 50 ms, then a real-time destination application consuming the packets experiences jitter.
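One simple way to quantify jitter from such a trace (a common approach, though not the only definition) is to look at the variation between successive packet delays:

```python
# Jitter as packet-delay variation, using the example delays above
# (10 ms, 35 ms, 50 ms). A mean variation of 0 would mean perfectly
# steady delivery; anything larger is jitter.
delays_ms = [10, 35, 50]

variations = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
print(variations)                         # [25, 15]
print(sum(variations) / len(variations))  # 20.0 ms average jitter
```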
Simply, a jitter is any deviation in or displacement of, the signal pulses in a
high-frequency digital signal. The deviation can be in connection with the
amplitude, the width of the signal pulse, or the phase timing. The major causes
of jitter are electromagnetic interference(EMI) and crosstalk between signals.
Jitter can lead to flickering of a display screen, affect the ability of a processor in a desktop or server to perform as expected, introduce clicks or other undesired effects into audio signals, and cause loss of data transmitted between network devices. Jitter is harmful and contributes to network congestion and packet loss.
 Congestion is like a traffic jam on the highway: cars cannot move forward at a reasonable speed. Similarly, in congestion all the packets arrive at a junction at the same time, and nothing gets through.
 The second negative effect is packet loss. When packets arrive at unexpected intervals, the receiving system is unable to process the information, which leads to missing information, also called "packet loss". This has negative effects on video viewing: if a video becomes pixelated and skips, the network is experiencing jitter, and the result of that jitter is packet loss. When you are playing a game online, packet loss can make a player appear to move around the screen randomly; even worse, the game may jump from one scene to the next, skipping over part of the gameplay.

Jitter

In the figure above, it can be seen that the time at which packets are sent is not the same as the time at which they arrive at the receiver side. One of the packets faces an unexpected delay on its way and is received after the expected time. This is jitter.
A jitter buffer can reduce the effects of jitter, either in a network, on a router or
switch, or on a computer. The system at the destination receiving the network
packets usually receives them from the buffer and not from the source system
directly. Each packet is fed out of the buffer at a regular rate. Another approach
to diminish jitter in case of multiple paths for traffic is to selectively route
traffic along the most stable paths or to always pick the path that can come
closest to the targeted packet delivery rate.
Factors Affecting Network Performance
Below mentioned are the factors that affect the network performance.
 Network Infrastructure
 Applications used in the Network
 Network Issues
 Network Security

Network Infrastructure

Network infrastructure is one of the factors that affect network performance. It consists of routers, switches, and network services such as IP addressing and wireless protocols, and these components directly affect the performance of the network.

Applications Used in the Network

Applications used in the network can also have an impact on its performance: poorly performing applications can consume large amounts of bandwidth, and more complicated applications require ongoing maintenance, both of which affect network performance.

Network Issues

Network issues are a factor in network performance, as flaws or loopholes can lead to many systemic problems. Hardware faults can also impact the performance of the network.

Network Security

Network security provides privacy, data integrity, and related protections. It can influence performance by consuming network bandwidth and processing power for tasks such as scanning devices and encrypting data; these overheads can negatively affect network throughput.

A high-speed network refers to a network infrastructure that transmits data at a rapid rate, often exceeding 10 Gbps, making it challenging for network forensics to capture and analyze all packets efficiently due to the large volume of data flowing through interconnected devices.

Network-Centric View

The network-centric approach aims to tap into the hidden resources of knowledge workers, supported and enabled by ICT, in particular the social technologies associated with Web 2.0 and Enterprise 2.0. Essentially, though, a network-centric organization is more about people and culture than technology.
Error Detection in Computer Networks
Last Updated : 31 Jan, 2025


An error is a condition in which the receiver's information does not match the sender's. Digital signals suffer from noise during transmission, which can introduce errors into the binary bits traveling from sender to receiver: a 0 bit may change to 1, or a 1 bit may change to 0.
Data may get scrambled by noise or corrupted whenever a message is transmitted. To detect such errors, error-detection codes (implemented at either the Data Link layer or the Transport layer of the OSI model) are added as extra data to digital messages. This helps in detecting any errors that may have occurred during message transmission.
Types of Errors
Single-Bit Error
A single-bit error refers to a type of data transmission error that occurs when
one bit (i.e., a single binary digit) of a transmitted data unit is altered during
transmission, resulting in an incorrect or corrupted data unit.

Single-Bit Error

Multiple-Bit Error
A multiple-bit error is an error type that arises when more than one bit in a data
transmission is affected. Although multiple-bit errors are relatively rare when
compared to single-bit errors, they can still occur, particularly in high-noise or
high-interference digital environments.
Multiple-Bit Error

Burst Error
When several consecutive bits are flipped mistakenly in digital transmission, it
creates a burst error. This error causes a sequence of consecutive incorrect
values.

Burst Error

Error Detection Methods


To detect errors, a common technique is to introduce redundancy bits that
provide additional information. Various techniques for error detection include:
 Simple Parity Check
 Two-Dimensional Parity Check
 Checksum
 Cyclic Redundancy Check (CRC)
Simple Parity Check
Simple-bit parity is a simple error detection method that involves adding an
extra bit to a data transmission. It works as:
 1 is added to the block if it contains an odd number of 1’s, and
 0 is added if it contains an even number of 1’s
This scheme makes the total number of 1’s even, which is why it is called even
parity checking.

Advantages of Simple Parity Check


 Simple parity check can detect all single-bit errors.
 Simple parity check can detect any odd number of bit errors.
 Implementation: Simple Parity Check is easy to implement in both
hardware and software.
 Minimal Extra Data: Only one additional bit (the parity bit) is added per
data unit (e.g., per byte).
 Fast Error Detection: The process of calculating and checking the parity bit
is quick, which allows for rapid error detection without significant delay in
data processing or communication.
 Single-Bit Error Detection: It can effectively detect single-bit errors within
a data unit, providing a basic level of error detection for relatively low-error
environments.
Disadvantages of Simple Parity Check
 Single parity check cannot detect an even number of bit errors.
 For example, suppose the data to be transmitted is 101010. With even parity,
the codeword sent to the receiver is 1010101.
Let’s assume that during transmission, two bits of the codeword flip, giving
1111101.
On receiving the codeword, the receiver finds the number of 1’s to be even and
concludes there is no error, which is a wrong assumption.
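The behaviour above can be sketched in a few lines of Python; the helper names are illustrative:

```python
# Even-parity sketch using the numbers from the example above.
def add_even_parity(bits: str) -> str:
    """Append a parity bit so that the total number of 1s is even."""
    return bits + str(bits.count("1") % 2)

def parity_ok(codeword: str) -> bool:
    """Receiver check: accept only if the count of 1s is even."""
    return codeword.count("1") % 2 == 0

codeword = add_even_parity("101010")   # three 1s, so the parity bit is 1
assert codeword == "1010101"
assert not parity_ok("0010101")        # a single flipped bit is detected
assert parity_ok("1111101")            # two flipped bits go undetected
```

Flipping any single bit makes the count of 1s odd and is caught; flipping two bits keeps the count even, so the corrupted codeword is wrongly accepted.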
Two-Dimensional Parity Check
Two-dimensional Parity check bits are calculated for each row, which is
equivalent to a simple parity check bit. Parity check bits are also calculated for
all columns, then both are sent along with the data. At the receiving end, these
are compared with the parity bits calculated on the received data.
Advantages of Two-Dimensional Parity Check
 Two-dimensional parity check can detect and correct all single-bit errors.
 Two-dimensional parity check can detect two- or three-bit errors that occur
anywhere in the matrix.
Disadvantages of Two-Dimensional Parity Check
 Two-dimensional parity check cannot correct two- or three-bit errors; it can
only detect them.
 If there is an error in a parity bit itself, the scheme may fail.
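A minimal sketch of the scheme, assuming even parity and equal-length rows (function and variable names are illustrative):

```python
# Two-dimensional even parity: one parity bit per row, then one per column.
def two_d_parity(rows):
    """rows: equal-length bit strings; returns rows with parity bits attached."""
    with_row_parity = [r + str(r.count("1") % 2) for r in rows]
    width = len(with_row_parity[0])
    column_parity = "".join(
        str(sum(int(r[i]) for r in with_row_parity) % 2) for i in range(width)
    )
    return with_row_parity + [column_parity]

matrix = two_d_parity(["1100", "1010"])
assert matrix == ["11000", "10100", "01100"]
```

The receiver recomputes both sets of parity bits; a single-bit error shows up as one failing row and one failing column, and their intersection locates (and thus corrects) the flipped bit.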
Checksum
Checksum error detection is a method used to identify errors in transmitted data.
The process involves dividing the data into equally sized segments and using
a 1’s complement to calculate the sum of these segments. The calculated sum is
then sent along with the data to the receiver. At the receiver’s end, the same
process is repeated and if all zeroes are obtained in the sum, it means that the
data is correct.
Checksum – Operation at Sender’s Side
 Firstly, the data is divided into k segments each of m bits.
 On the sender’s end, the segments are added using 1’s complement
arithmetic to get the sum. The sum is complemented to get the checksum.
 The checksum segment is sent along with the data segments.
Checksum – Operation at Receiver’s Side
 At the receiver’s end, all received segments are added using 1’s complement
arithmetic to get the sum. The sum is complemented.
 If the result is zero, the received data is accepted; otherwise discarded.
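Both sides of the procedure can be sketched as follows; the 4-bit segment size is an illustrative choice (the Internet checksum, for instance, uses 16-bit segments):

```python
# 1's complement checksum over m-bit segments with end-around carry.
def ones_complement_sum(segments, bits=4):
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        if total > mask:                     # wrap the carry back in
            total = (total & mask) + (total >> bits)
    return total

def checksum(segments, bits=4):
    # Complement of the 1's complement sum.
    return ones_complement_sum(segments, bits) ^ ((1 << bits) - 1)

data = [0b1001, 0b1010]                      # k = 2 segments of m = 4 bits
c = checksum(data)                           # sender appends this value

# Receiver: summing segments + checksum and complementing must give zero.
assert checksum(data + [c]) == 0
```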
Cyclic Redundancy Check (CRC)
 Unlike the checksum scheme, which is based on addition, CRC is based
on binary division.
 In CRC, a sequence of redundant bits, called cyclic redundancy check bits,
are appended to the end of the data unit so that the resulting data unit
becomes exactly divisible by a second, predetermined binary number.
 At the destination, the incoming data unit is divided by the same number. If
at this step there is no remainder, the data unit is assumed to be correct and is
therefore accepted.
 A remainder indicates that the data unit has been damaged in transit and
therefore must be rejected.

CRC
Working
Given a dataword of length n and a divisor of length k:
Step 1: Append (k-1) zeros to the dataword.
Step 2: Perform modulo-2 division by the divisor.
Step 3: The remainder of the division is the CRC.
Step 4: Codeword = dataword followed by the CRC.
Note:
 The CRC is k-1 bits long.
 Length of codeword = n+k-1 bits.
Example: Let the data to be sent be 1010000 and the divisor, expressed as a
polynomial, be x^3 + 1 (binary 1001). The CRC is computed using the steps above.
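The four steps can be traced in code for this example; `mod2_div` is a hypothetical helper implementing binary (XOR) long division:

```python
# CRC via modulo-2 long division. Divisor x^3 + 1 corresponds to bits 1001.
def mod2_div(dividend: str, divisor: str) -> str:
    k = len(divisor)
    work = list(dividend)
    for i in range(len(dividend) - k + 1):
        if work[i] == "1":                       # XOR the divisor in at this offset
            for j in range(k):
                work[i + j] = "1" if work[i + j] != divisor[j] else "0"
    return "".join(work[-(k - 1):])              # last k-1 bits are the remainder

data, divisor = "1010000", "1001"
crc = mod2_div(data + "0" * (len(divisor) - 1), divisor)   # append k-1 zeros
assert crc == "011"

codeword = data + crc                            # 1010000011 is transmitted
assert mod2_div(codeword, divisor) == "000"      # receiver: zero remainder, accept
```

So for dataword 1010000 and divisor 1001, the CRC is 011 and the transmitted codeword is 1010000011.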

Advantages of Error Detection


 Increased Data Reliability: Error detection ensures that the data transmitted
over the network is reliable, accurate, and free from errors. This ensures that
the recipient receives the same data that was transmitted by the sender.
 Improved Network Performance: Error detection mechanisms can help to
identify and isolate network issues that are causing errors. This can help to
improve the overall performance of the network and reduce downtime.
 Enhanced Data Security: Error detection can also help to ensure that the
data transmitted over the network is secure and has not been tampered with.
Disadvantages of Error Detection
 Overhead: Error detection requires additional resources and processing
power, which can lead to increased overhead on the network. This can result
in slower network performance and increased latency.
 False Positives: Error detection mechanisms can sometimes generate false
positives, which can result in unnecessary retransmission of data. This can
further increase the overhead on the network.
 Limited Error Correction: Error detection can only identify errors but
cannot correct them. This means that the recipient must rely on the sender to
retransmit the data, which can lead to further delays and increased network
overhead

Reliable Transmission in Computer Networks

The Transmission Control Protocol (TCP) guarantees reliable transmission by
breaking messages into packets, keeping track of which packets have been
received successfully, resending any that are lost, and specifying the order for
reassembling the data at the other end.
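A toy sketch of that loop (not real TCP): sequence-numbered packets, retransmission when no acknowledgement arrives, and in-order reassembly. All names here are illustrative:

```python
# Stop-and-wait style reliability sketch over an unreliable send function.
def deliver(packets, send, max_tries=3):
    """send(seq, payload) returns the ACKed seq number, or None if lost."""
    received = {}
    for seq, payload in enumerate(packets):
        for _ in range(max_tries):
            if send(seq, payload) == seq:        # matching ACK: packet arrived
                received[seq] = payload
                break
        else:
            raise TimeoutError(f"packet {seq} lost after {max_tries} tries")
    return "".join(received[i] for i in sorted(received))  # reassemble in order

# Simulated channel that drops the first attempt at every packet.
attempts = {}
def lossy_send(seq, payload):
    attempts[seq] = attempts.get(seq, 0) + 1
    return seq if attempts[seq] > 1 else None

assert deliver(["he", "llo"], lossy_send) == "hello"
```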

What is Ethernet?

A LAN is a data communication network connecting various terminals or


computers within a building or limited geographical area. The connection
between the devices could be wired or wireless. Ethernet, Token rings, and
Wireless LAN using IEEE 802.11 are examples of standard LAN
technologies. In this article, we will look at Ethernet in detail.
Ethernet
Ethernet is the most widely used LAN technology and is defined under IEEE
standards 802.3. The reason behind its wide usability is that Ethernet is easy to
understand, implement, and maintain, and allows low-cost network
implementation. Also, Ethernet offers flexibility in terms of the topologies that
are allowed. Ethernet generally uses a bus topology. Ethernet operates in two
layers of the OSI model, the physical layer and the data link layer. For Ethernet,
the protocol data unit is a frame, since we mainly deal with the data link layer.
To handle collisions, the access control mechanism used in Ethernet is CSMA/CD.
Although Ethernet has been largely displaced by wireless networks in many
settings, wired networking still relies heavily on Ethernet. Wi-Fi eliminates the need for
cables by enabling users to connect their smartphones or laptops to a network
wirelessly. The 802.11ac Wi-Fi standard offers faster maximum data transfer
rates when compared to Gigabit Ethernet. However, wired connections are more
secure and less susceptible to interference than wireless networks.
History of Ethernet
Robert Metcalfe’s invention of Ethernet in 1973 completely changed computer
networking. It first gained popularity in 1982, when Ethernet Version 2 raised
the data rate from the original 2.94 Mbps to 10 Mbps. Ethernet’s adoption was
accelerated by the IEEE 802.3 standardization in 1983. Local area networks
(LANs) and the internet were able to expand quickly because of the rapid
evolution and advancement of Ethernet, which over time reached speeds of 100
Mbps, 1 Gbps, 10 Gbps, and higher. It evolved into the standard technology for
wired network connections, enabling dependable and quick data transmission
for private residences, commercial buildings, and data centers all over the
world.
There are different types of Ethernet networks that are used to connect devices
and transfer data.
1. Fast Ethernet: This type of Ethernet network uses cables called twisted pair
or CAT5. It can transfer data at a speed of around 100 Mbps (megabits per
second). Fast Ethernet uses both fiber optic and twisted pair cables to enable
communication. There are three categories of Fast Ethernet: 100BASE-TX,
100BASE-FX, and 100BASE-T4.
2. Gigabit Ethernet: This is an upgrade from Fast Ethernet and is more
common nowadays. It can transfer data at a speed of 1000 Mbps or 1 Gbps
(gigabit per second). Gigabit Ethernet also uses fiber optic and twisted pair
cables for communication. It often uses advanced cables like CAT5e, which
can transfer data at a speed of 10 Gbps.
3. 10-Gigabit Ethernet: This is an advanced and high-speed network that can
transmit data at a speed of 10 gigabits per second. It uses special cables like
CAT6a or CAT7 twisted-pair cables and fiber optic cables. With the help of
fiber optic cables, this network can cover longer distances, up to around
10,000 meters.
4. Switch Ethernet: This type of network involves using switches or hubs to
improve network performance. Each workstation in this network has its own
dedicated connection, which improves the speed and efficiency of data
transfer. Switch Ethernet supports a wide range of speeds, from 10 Mbps to
10 Gbps, depending on the version of Ethernet being used.

Since we are talking about IEEE 802.3 standard Ethernet, a 0 is expressed by a
high-to-low transition and a 1 by a low-to-high transition. In both Manchester
encoding and differential Manchester encoding, the baud rate is double the bit
rate (baud rate = 2 * bit rate).
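That mapping can be sketched directly, with 1 and 0 standing for high and low signal levels:

```python
# Manchester encoding per IEEE 802.3: 0 -> high-to-low, 1 -> low-to-high.
def manchester_encode(bits: str):
    signal = []
    for b in bits:
        signal += [1, 0] if b == "0" else [0, 1]   # two half-bit levels per bit
    return signal

assert manchester_encode("01") == [1, 0, 0, 1]
# Each bit produces two signal levels, hence baud rate = 2 * bit rate.
assert len(manchester_encode("0110")) == 2 * 4
```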
Key Features of Ethernet
1. Speed: Ethernet is capable of transmitting data at high speeds, with current
Ethernet standards supporting speeds of up to 100 Gbps.
2. Flexibility: Ethernet is a flexible technology that can be used with a wide
range of devices and operating systems. It can also be easily scaled to
accommodate a growing number of users and devices.
3. Reliability: Ethernet is a reliable technology that uses error-correction
techniques to ensure that data is transmitted accurately and efficiently.
4. Cost-effectiveness: Ethernet is a cost-effective technology that is widely
available and easy to implement. It is also relatively low-maintenance,
requiring minimal ongoing support.
5. Interoperability: Ethernet is an interoperable technology that allows
devices from different manufacturers to communicate with each other
seamlessly.
6. Security: Ethernet includes built-in security features, including encryption
and authentication, to protect data from unauthorized access.
7. Manageability: Ethernet networks are easily managed, with various tools
available to help network administrators monitor and control network traffic.
8. Compatibility: Ethernet is compatible with a wide range of other
networking technologies, making it easy to integrate with other systems and
devices.
9. Availability: Ethernet is a widely available technology that can be used in
almost any setting, from homes and small offices to large data centers and
enterprise-level networks.
10.Simplicity: Ethernet is a simple technology that is easy to understand and
use. It does not require specialized knowledge or expertise to set up and
configure, making it accessible to a wide range of users.
11.Standardization: Ethernet is a standardized technology, which means that
all Ethernet devices and systems are designed to work together seamlessly.
This makes it easier for network administrators to manage and troubleshoot
Ethernet networks.
12.Scalability: Ethernet is highly scalable, which means it can easily
accommodate the addition of new devices, users, and applications without
sacrificing performance or reliability.
13.Broad compatibility: Ethernet is compatible with a wide range of protocols
and technologies, including TCP/IP, HTTP, FTP, and others. This makes it a
versatile technology that can be used in a variety of settings and applications.
14.Ease of integration: Ethernet can be easily integrated with other networking
technologies, such as Wi-Fi and Bluetooth, to create a seamless and
integrated network environment.
15.Ease of troubleshooting: Ethernet networks are easy to troubleshoot and
diagnose, thanks to a range of built-in diagnostic and monitoring tools. This
makes it easier for network administrators to identify and resolve issues
quickly and efficiently.
16.Support for multimedia: Ethernet supports multimedia applications, such
as video and audio streaming, making it ideal for use in settings where
multimedia content is a key part of the user experience.
Overall, Ethernet is a reliable, cost-effective, and widely used LAN technology
that offers high-speed connectivity and easy manageability for local networks.
Advantages of Ethernet
Speed: When compared to a wireless connection, Ethernet provides
significantly higher speed because it is a dedicated one-to-one connection. As a
result, speeds of up to 10 gigabits per second (Gbps) or even 100 Gbps are
possible.
Efficiency: An Ethernet cable, such as Cat6, consumes less electricity than a
Wi-Fi connection, making Ethernet cables among the most energy-efficient
options.
Good data transfer quality: Because Ethernet is resistant to noise, the quality
of the transferred information is high.
Disadvantages of Ethernet
Distance limitations: Ethernet has distance limitations, with the maximum
cable length for a standard Ethernet network being 100 meters. This means that
it may not be suitable for larger networks that require longer distances.
Bandwidth sharing: Ethernet networks share bandwidth among all connected
devices, which can result in reduced network speeds as the number of devices
increases.
Security vulnerabilities: Although Ethernet includes built-in security features,
it is still vulnerable to security breaches, including unauthorized access and data
interception.
Complexity: Ethernet networks can be complex to set up and maintain,
requiring specialized knowledge and expertise.
Compatibility issues: While Ethernet is generally interoperable with other
networking technologies, compatibility issues can arise when integrating with
older or legacy systems.
Cable installation: Ethernet networks require the installation of physical
cables, which can be time-consuming and expensive to install.
Physical limitations: Ethernet networks require physical connections between
devices, which can limit mobility and flexibility in network design.

Multiple Access Protocols in Computer Network


Multiple Access Protocols are methods used in computer networks to control
how data is transmitted when multiple devices are trying to communicate over
the same network. These protocols ensure that data packets are sent and
received efficiently, without collisions or interference. They help manage the
network traffic so that all devices can share the communication channel
smoothly and effectively.
Who is Responsible for the Transmission of Data?
The Data Link Layer is responsible for the transmission of data between two
nodes. Its main functions are:
 Data Link Control
 Multiple Access Control

Data Link Layer Functions

Data Link Control


The data link control is responsible for the reliable transmission of messages
over transmission channels by using techniques like framing, error control and
flow control. For Data link control refer to – Stop and Wait ARQ.
Multiple Access Control
If there is a dedicated link between the sender and the receiver then data link
control layer is sufficient, however if there is no dedicated link present then
multiple stations can access the channel simultaneously. Hence multiple access
protocols are required to decrease collision and avoid crosstalk. For example, in
a classroom full of students, when a teacher asks a question and all the students
(stations) start answering simultaneously (sending data at the same time), a lot
of chaos is created (data overlaps or is lost). It is then the job of the teacher
(the multiple access protocol) to manage the students and make them answer
one at a time.
Thus, protocols are required for sharing data on non-dedicated channels.
Multiple access protocols can be subdivided further as
1. Random Access Protocol
In this, all stations have the same priority; no station has higher priority than
another. Any station can send data depending on the medium’s state (idle or
busy). It has two features:
 There is no fixed time for sending data
 There is no fixed sequence of stations sending data
The Random access protocols are further subdivided as:
ALOHA
It was designed for wireless LAN but is also applicable for shared medium. In
this, multiple stations can transmit data at the same time and can hence lead to
collision and data being garbled.
ALOHA

Pure ALOHA
When a station sends data, it waits for an acknowledgement. If the
acknowledgement does not arrive within the allotted time, the station waits
for a random amount of time called back-off time (Tb) and re-sends the data.
Since different stations wait for different amounts of time, the probability of
further collision decreases.
Vulnerable Time = 2 * Frame transmission time
Throughput: S = G e^(-2G)
Maximum throughput = 0.184 at G = 0.5

Pure ALOHA

Slotted ALOHA
It is similar to pure aloha, except that we divide time into slots and sending of
data is allowed only at the beginning of these slots. If a station misses out the
allowed time, it must wait for the next slot. This reduces the probability of
collision.
Vulnerable Time = Frame transmission time
Throughput: S = G e^(-G)
Maximum throughput = 0.368 at G = 1

Slotted ALOHA
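The two throughput formulas can be checked numerically:

```python
from math import exp

# S = G * e^(-2G) for pure ALOHA and S = G * e^(-G) for slotted ALOHA,
# where G is the average number of frames generated per frame time.
def pure_aloha_throughput(G):
    return G * exp(-2 * G)

def slotted_aloha_throughput(G):
    return G * exp(-G)

assert round(pure_aloha_throughput(0.5), 3) == 0.184     # maximum at G = 0.5
assert round(slotted_aloha_throughput(1.0), 3) == 0.368  # maximum at G = 1
```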
CSMA
Carrier Sense Multiple Access ensures fewer collisions as the station is required
to first sense the medium (for idle or busy) before transmitting data. If it is idle
then it sends data, otherwise it waits till the channel becomes idle. However
there is still chance of collision in CSMA due to propagation delay. For
example, if station A wants to send data, it will first sense the medium and,
finding the channel idle, start sending. However, before the first bit from
station A reaches station B (owing to propagation delay), station B may also
sense the medium, find it idle, and start sending. This results in a collision
between the data from stations A and B.

CSMA

CSMA Access Modes

 1-Persistent: The node senses the channel; if idle, it sends the data, otherwise
it continuously keeps checking the medium and transmits unconditionally
(with probability 1) as soon as the channel becomes idle.
 Non-Persistent: The node senses the channel, if idle it sends the data,
otherwise it checks the medium after a random amount of time (not
continuously) and transmits when found idle.
 P-Persistent: The node senses the medium; if idle, it sends the data with
probability p. If the data is not transmitted (probability 1-p), it waits for
some time and checks the medium again; if it is found idle, it again sends
with probability p. This repeats until the frame is sent. It is used in Wi-Fi
and packet radio systems.
 O-Persistent: Superiority of nodes is decided beforehand and transmission
occurs in that order. If the medium is idle, node waits for its time slot to send
data.
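The p-persistent strategy above can be sketched in code; the channel history and random draws are passed in as precomputed lists so the behaviour is deterministic (all names are illustrative):

```python
# p-persistent decision sketch: in each slot, if the channel is idle,
# transmit with probability p; otherwise wait for the next slot.
def p_persistent_slot(idle_history, draws, p):
    """Returns the index of the slot in which transmission happens."""
    for slot, (idle, draw) in enumerate(zip(idle_history, draws)):
        if idle and draw < p:
            return slot
    return None                      # frame not sent within the given slots

# Channel idle in both slots; draws 0.9 and 0.2 stand in for random numbers.
assert p_persistent_slot([True, True], [0.9, 0.2], p=0.5) == 1
# 1-persistent is the special case p = 1: transmit in the first idle slot.
assert p_persistent_slot([False, True], [0.0, 0.0], p=1.0) == 1
```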
CSMA/CD
Carrier sense multiple access with collision detection. Stations can terminate
transmission of data if collision is detected. For more details refer – Efficiency
of CSMA/CD.
CSMA/CA
Carrier sense multiple access with collision avoidance. Collision detection
relies on the sender receiving signals: if there is just one signal (its own), the
data was sent successfully, but if there are two signals (its own and the one
with which it collided), a collision has occurred. To distinguish between these
two cases, the collision must have a significant impact on the received signal.
This is not the case in wireless networks, so CSMA/CA is used there instead.
CSMA/CA Avoids Collision
 Interframe Space: Station waits for medium to become idle and if found
idle it does not immediately send data (to avoid collision due to propagation
delay) rather it waits for a period of time called Interframe space or IFS.
After this time it again checks the medium for being idle. The IFS duration
depends on the priority of station.
 Contention Window: It is the amount of time divided into slots. If the
sender is ready to send data, it chooses a random number of slots as wait
time which doubles every time medium is not found idle. If the medium is
found busy it does not restart the entire process, rather it restarts the timer
when the channel is found idle again.
 Acknowledgement: The sender re-transmits the data if acknowledgement is
not received before time-out.
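The contention-window behaviour can be sketched as a window that doubles on each busy sensing, a form of binary exponential backoff; the initial and maximum window sizes here are illustrative:

```python
import random

# The station draws a random wait (in slots) from [0, cw); cw doubles each
# time the medium is found busy, capped at cw_max.
def backoff_slots(cw, rng=random.randrange):
    return rng(cw)

def widen(cw, cw_max=1024):
    return min(2 * cw, cw_max)

cw = 16
for _ in range(3):                # medium found busy three times in a row
    cw = widen(cw)
assert cw == 128                  # 16 -> 32 -> 64 -> 128
assert widen(1024) == 1024        # window never exceeds the cap
assert 0 <= backoff_slots(cw) < cw
```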
2. Controlled Access
Controlled access protocols ensure that only one device uses the network at a
time. Think of it like taking turns in a conversation so everyone can speak
without talking over each other.
In this, the data is sent by that station which is approved by all other stations.
For further details refer – Controlled Access Protocols.
3. Channelization
In this, the available bandwidth of the link is shared in time, frequency, or code
among multiple stations so that they can access the channel simultaneously.
 Frequency Division Multiple Access (FDMA) – The available bandwidth
is divided into equal bands so that each station can be allocated its own band.
Guard bands are also added so that no two bands overlap to avoid crosstalk
and noise.
 Time Division Multiple Access (TDMA) – In this, the bandwidth is shared
between multiple stations. To avoid collision time is divided into slots and
stations are allotted these slots to transmit data. However, there is an overhead
of synchronization, as each station needs to know its time slot. This is
resolved by adding synchronization bits to each slot. Another issue with
TDMA is propagation delay, which is resolved by the addition of guard times.
For more details refer – Circuit Switching
 Code Division Multiple Access (CDMA) – One channel carries all
transmissions simultaneously. There is neither division of bandwidth nor
division of time. For example, if there are many people in a room all speaking
at the same time, perfect reception is still possible as long as each pair of
speakers uses a different language. Similarly, data from different stations can
be transmitted simultaneously using different code languages.
 Orthogonal Frequency Division Multiple Access (OFDMA) – In
OFDMA the available bandwidth is divided into small subcarriers in order to
increase the overall performance. The data is then transmitted through these
small subcarriers. OFDMA is widely used in 5G technology.
Advantages of OFDMA
 High data rates
 Good for multimedia traffic
 Increase in efficiency
Disadvantages of OFDMA
 Complex to implement
 High peak-to-average power ratio

 Spatial Division Multiple Access (SDMA) – SDMA uses multiple antennas


at the transmitter and receiver to separate the signals of multiple users that
are located in different spatial directions. This technique is commonly used
in MIMO (Multiple-Input, Multiple-Output) wireless communication
systems.
Advantages of SDMA
 Uses the frequency band effectively
 The overall signal quality is improved
 The overall data rate is increased
Disadvantages of SDMA
 It is complex to implement
 It requires accurate information about the channel
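The CDMA idea described above, one channel, different codes, can be sketched with two-chip orthogonal (Walsh) codes; the station names are illustrative:

```python
# Each station spreads its bit over its chip code; the channel carries the sum.
CODE_A = [1, 1]      # orthogonal Walsh codes: their dot product is zero
CODE_B = [1, -1]

def spread(bit, code):
    level = 1 if bit == 1 else -1            # map the bit to a +/-1 signal level
    return [level * chip for chip in code]

def despread(channel, code):
    correlation = sum(s * c for s, c in zip(channel, code))
    return 1 if correlation > 0 else 0       # sign of correlation recovers the bit

# A sends 1 and B sends 0 at the same time on the same channel.
channel = [a + b for a, b in zip(spread(1, CODE_A), spread(0, CODE_B))]
assert despread(channel, CODE_A) == 1        # A's code extracts A's bit
assert despread(channel, CODE_B) == 0        # B's code extracts B's bit
```

Because the codes are orthogonal, each receiver's correlation cancels the other station's signal, which is how simultaneous transmissions stay separable.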
Features of Multiple Access Protocols
 Contention-Based Access: Multiple access protocols are typically
contention-based, meaning that multiple devices compete for access to the
communication channel. This can lead to collisions if two or more devices
transmit at the same time, which can result in data loss and decreased
network performance.
 Carrier Sense Multiple Access (CSMA): CSMA is a widely used multiple
access protocol in which devices listen for carrier signals on the
communication channel before transmitting. If a carrier signal is detected,
the device waits for a random amount of time before attempting to transmit
to reduce the likelihood of collisions.
 Collision Detection (CD): CD is a feature of some multiple access protocols
that allows devices to detect when a collision has occurred and take
appropriate action, such as backing off and retrying the transmission.
 Collision Avoidance (CA): CA is a feature of some multiple access
protocols that attempts to avoid collisions by assigning time slots to devices
for transmission.
 Token Passing: Token passing is a multiple access protocol in which
devices pass a special token between each other to gain access to the
communication channel. Devices can only transmit data when they hold the
token, which ensures that only one device can transmit at a time.
 Bandwidth Utilization: Multiple access protocols can affect the overall
bandwidth utilization of a network. For example, contention-based protocols
may result in lower bandwidth utilization due to collisions, while token
passing protocols may result in higher bandwidth utilization due to the
controlled access to the communication channel.

Overlay Network

An overlay network is a virtual or logical network that is created on top of an


existing physical network. The internet, which began as an overlay on the
circuit-switched telephone network, is an example of an overlay network.


An overlay network is any virtual layer on top of physical network


infrastructure. This may be as simple as a virtual local area network (VLAN)
but typically refers to more complex virtual layers from software-defined
networking (SDN) or a software-defined wide area network (SD-WAN).

The overlay creates a new layer where traffic can be programmatically directed
through new virtual network routes or paths instead of requiring physical links.
Overlays enable administrators to define and manage traffic flows, irrespective
of the underlying physical infrastructure.

Overlay networks and SDN

SDN is a quickly growing network strategy where the network operating system
separates the data plane (packet handling) from the control plane (the network
topology and routing rules). SDN acts as an overlay, running on the distributed
switches, determining how packets are handled, instead of a centralized router
handling those tasks.
In a traditional network, devices communicate directly using the physical
network's topology and routing mechanisms. However, with a network overlay,
an additional layer of abstraction is added, which allows for advanced network
functionalities such as resource virtualization, redundancy, and fault tolerance.

What are Network Overlays?


A network overlay is a virtual network that runs on top of a physical network.
It allows for the creation of logical communication channels and connections
between devices that may be geographically dispersed or belong to separate
physical networks. Overlays are built using software-based layers that
encapsulate the communication between nodes, hiding the underlying physical
network's complexity.
Characteristics of Network Overlays
Below are the characteristics of Network Overlays:
 Abstraction: Overlays provide a virtualized view of the network,
simplifying the management of complex infrastructures.
 Custom Routing: Unlike physical networks, overlays can implement
custom routing algorithms to optimize communication paths.
 Interoperability: They allow nodes from different networks or domains to
communicate seamlessly.
 Flexibility: Overlay networks are adaptable to changes in network
conditions, topology, or traffic.
Importance of Overlay Networks in Distributed Systems
Distributed systems consist of multiple independent nodes that work together
to perform a task. In such systems, communication between nodes is essential.
The underlying physical network, however, might not always support the
requirements of these systems. This is where overlay networks become
important.
They provide a unified communication model that abstracts the complexity of
the physical network, enabling seamless interactions between nodes.
Key Benefits of Overlay Networks in Distributed Systems
Below are the key benefits of Overlay Networks in Distributed Systems:
 Scalability: Overlays allow distributed systems to scale efficiently by
adding or removing nodes without impacting the overall system.
 Fault Tolerance: Through redundancy and alternative routing paths,
overlays increase the fault tolerance of distributed systems.
 Dynamic Topology Management: As nodes join or leave the network,
overlays can quickly adjust the logical topology, maintaining a stable
communication framework.
 Optimized Resource Usage: Overlay networks enable better utilization of
network resources, reducing congestion and latency.
Types of Network Overlay Models
Network overlay models can be broadly categorized into three types, based on
how they organize and manage the nodes: Structured, Unstructured, and
Hierarchical.
1. Structured Overlay Networks
Structured overlays organize nodes in a predefined topology, ensuring that
data is distributed evenly and can be efficiently retrieved. These networks
employ Distributed Hash Tables (DHTs) to map data to specific nodes based
on a unique identifier. Each node in the network is responsible for storing a
portion of the data, and the DHT ensures that any data can be quickly located
by traversing the network.
Key Characteristics:
 Deterministic Data Placement: Data is assigned to specific nodes based
on a hash function.
 Efficient Lookups: DHTs enable logarithmic search times, making
lookups efficient even in large networks.
 Scalability: The structured approach allows for predictable scaling as
nodes are added or removed.
Examples:
 Chord: In Chord, nodes are arranged in a ring-like topology, with each
node responsible for a portion of the hash space.
 Pastry: Pastry organizes nodes into a circular identifier space, using
proximity-based routing for efficient data retrieval.
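Deterministic data placement on a Chord-like ring can be sketched with a hash function and a successor lookup; the node names and the 8-bit identifier space are illustrative:

```python
import hashlib

# Hash node names and keys onto a small identifier ring (0..255 here).
def ring_id(name: str, bits: int = 8) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

def successor(key: str, nodes):
    """A key is stored on the first node at or clockwise after its identifier."""
    key_id = ring_id(key)
    node_ids = sorted(ring_id(n) for n in nodes)
    for node_id in node_ids:
        if node_id >= key_id:
            return node_id
    return node_ids[0]                        # wrap around the ring

nodes = ["node-a", "node-b", "node-c"]
owner = successor("some-file.txt", nodes)
assert owner in {ring_id(n) for n in nodes}   # exactly one node is responsible
```

Because placement depends only on the hash, any node can compute which node owns a key without global knowledge; adding finger tables on top of this is what gives Chord its logarithmic lookups.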
2. Unstructured Overlay Networks
Unstructured overlays are more flexible and do not follow a predefined
topology. Nodes in these networks randomly connect to each other, forming a
loose mesh. These networks rely on techniques like flooding or gossiping to
locate data, making them less efficient in terms of search performance but
more resilient to dynamic network changes.
Key Characteristics:
 Random Topology: Nodes are connected in an ad-hoc fashion, with no
strict organization.
 Data Discovery via Search: Data is located through flooding or peer-to-
peer search mechanisms.
 High Resilience: Due to the lack of strict structure, unstructured networks
can handle dynamic changes well.
Examples:
 Gnutella: Gnutella employs a peer-to-peer unstructured network, where
nodes search for resources through a series of broadcast messages.
 Freenet: Freenet uses a similar approach, focusing on anonymity and data
sharing.
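The flooding-based discovery described above can be sketched as a TTL-limited breadth-first search. The mesh topology, node names, and file placement below are illustrative assumptions, not taken from any specific protocol:

```python
from collections import deque

def flood_search(adjacency, start, target_key, holders, ttl=3):
    """TTL-limited flooding: forward the query to all unvisited neighbors,
    decrementing a hop budget, until a node holding the key is found."""
    seen = {start}
    frontier = deque([(start, ttl)])
    while frontier:
        node, hops = frontier.popleft()
        if target_key in holders.get(node, set()):
            return node                  # found a node holding the key
        if hops == 0:
            continue                     # hop budget exhausted on this path
        for neighbor in adjacency[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, hops - 1))
    return None                          # query died out before reaching a holder

# Random ad-hoc mesh: node -> list of neighbors
adjacency = {
    "A": ["B", "C"], "B": ["A", "D"],
    "C": ["A", "D"], "D": ["B", "C", "E"], "E": ["D"],
}
holders = {"E": {"song.mp3"}}
print(flood_search(adjacency, "A", "song.mp3", holders))   # E, within 3 hops
```

Note how a small TTL trades search coverage for reduced traffic: with `ttl=1` the same query never reaches the holder.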
3. Hierarchical Overlay Networks
Hierarchical overlays use a multi-tiered structure, where nodes are organized
into clusters or groups, with each group having a representative or "super
node." These super nodes form a higher-level overlay that connects the groups,
allowing for more efficient data lookup and management.
Key Characteristics:
 Cluster-based Organization: Nodes are grouped into clusters, with a
super node responsible for each group.
 Efficient Routing: Super nodes handle inter-cluster communication,
reducing the search space.
 Scalability: The hierarchical structure improves scalability by dividing the
network into manageable segments.
Examples:
 Super-peer Networks: In networks like Kazaa, super-peers act as
intermediaries, reducing the load on regular nodes.
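The super-peer idea can be sketched as a two-tier index: regular peers publish their content to their cluster's super node, and searches only touch super nodes. The class, peer names, and keys below are illustrative assumptions:

```python
class SuperPeerNetwork:
    """Two-tier overlay: regular peers register content with their
    cluster's super node; lookups query only the super-node tier."""
    def __init__(self):
        self.supers = set()
        self.index = {}            # super node -> {key: peer holding it}

    def add_cluster(self, super_node):
        self.supers.add(super_node)
        self.index[super_node] = {}

    def register(self, super_node, peer, key):
        # a regular peer publishes its content to its super node
        self.index[super_node][key] = peer

    def search(self, key):
        # inter-cluster lookup: the search space shrinks from
        # "all peers" to "all super nodes"
        for s in self.supers:
            if key in self.index[s]:
                return self.index[s][key]
        return None

net = SuperPeerNetwork()
net.add_cluster("S1"); net.add_cluster("S2")
net.register("S1", "peer-3", "report.pdf")
print(net.search("report.pdf"))    # peer-3
```

This is why hierarchical overlays scale well: adding peers to a cluster does not increase the number of nodes a search must visit.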
Overlay Routing Techniques
Routing in overlay networks involves determining the most efficient path for
data to travel between nodes, considering both the overlay and the underlying
physical network. Different routing techniques are employed based on the type
of overlay:
 Proximity Routing: Nodes route data through the nearest available
neighbor, minimizing latency.
 Source Routing: The sender determines the entire route, specifying the
path the data should take through the network.
 Greedy Routing: Nodes forward data to the neighbor that is closest to the
destination in terms of logical distance.
 Flooding: In unstructured networks, data is broadcast to all neighbors
(usually with a hop limit) so that it eventually reaches its destination.
 Hierarchical Routing: In hierarchical networks, data is routed through
super nodes, which manage inter-cluster communication.
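Greedy routing, for example, can be sketched on a logical identifier ring: each node knows only a few neighbors and always forwards toward the neighbor logically closest to the destination. The 256-slot ring and the neighbor table below are illustrative assumptions:

```python
RING = 256                           # illustrative logical ID space

def ring_dist(a: int, b: int) -> int:
    """Shortest logical distance between two IDs on the ring."""
    return min((a - b) % RING, (b - a) % RING)

def greedy_route(neighbors, start, dest):
    """At each hop, forward to the neighbor closest to the destination;
    stop if no neighbor improves on the current node (local minimum)."""
    path = [start]
    node = start
    while node != dest:
        best = min(neighbors[node], key=lambda n: ring_dist(n, dest))
        if ring_dist(best, dest) >= ring_dist(node, dest):
            break                    # stuck: greedy step makes no progress
        path.append(best)
        node = best
    return path

# Each node knows only a few logical neighbors on the ring
neighbors = {10: [40, 90], 40: [10, 90], 90: [40, 130], 130: [90]}
print(greedy_route(neighbors, 10, 130))   # [10, 90, 130]
```

The early `break` illustrates greedy routing's known weakness: it can stall at a local minimum, which structured overlays avoid by guaranteeing each node a neighbor that makes progress.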
Applications of Overlay Networks in Distributed Systems
Overlay networks are widely used across various domains of distributed
systems:
 Content Delivery Networks (CDNs): CDNs use overlays to distribute
content across multiple servers, reducing latency and load on the central
servers.
 Peer-to-Peer (P2P) Systems: P2P applications like file-sharing (e.g.,
BitTorrent) and decentralized computing use overlays for efficient data
distribution.
 Internet of Things (IoT): Overlays help connect diverse IoT devices
across different networks, facilitating communication and data exchange.
 Blockchain Networks: Many blockchain implementations use overlay
networks to manage peer-to-peer communication and transaction
validation.
 Virtual Private Networks (VPNs): VPNs create a secure overlay on top of
the public internet, allowing secure communication between nodes.
Advantages of Network Overlays in Distributed Systems
Overlay networks offer several advantages in distributed systems:
 Scalability: Overlays can easily scale to support thousands or millions of
nodes.
 Fault Tolerance: Through redundant paths and routing mechanisms,
overlays ensure data is still accessible even when nodes fail.
 Efficient Resource Usage: Custom routing and load-balancing techniques
help optimize network traffic.
 Flexibility: Overlays can be adapted to different physical networks and
optimized for various application needs.
Challenges in Network Overlay Models
While network overlays provide significant benefits, they also pose
challenges:
 Latency: Additional routing layers may introduce latency, especially in
large-scale overlays.
 Network Congestion: In unstructured overlays, the use of flooding and
broadcast mechanisms can lead to congestion.
 Management Complexity: Maintaining the overlay structure, especially in
dynamic environments, can be complex.
 Data Consistency: In peer-to-peer systems, ensuring that all nodes have
consistent data is a major challenge.
Security in Overlay Networks
Security is a crucial aspect of overlay networks, especially in distributed
systems where data is shared across multiple nodes. Several security
challenges arise due to the virtual nature of overlays:
 Data Privacy: Ensuring that data is encrypted and only accessible to
authorized nodes.
 Authentication: Verifying the identity of nodes in the network to prevent
unauthorized access.
 Denial of Service (DoS) Attacks: Overlay networks can be vulnerable to
DoS attacks, where malicious nodes flood the network with traffic.
 Routing Security: Ensuring that malicious nodes do not tamper with or
reroute data.
Security Techniques:
 Encryption: Secure data transmission using end-to-end encryption.
 Node Authentication: Digital certificates or cryptographic methods to
verify the identity of nodes.
 Consensus Mechanisms: In blockchain-based overlays, consensus
algorithms ensure the integrity and validity of transactions.
 Redundant Routing: Implementing multiple routing paths to prevent a
single point of failure.
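Node authentication can be sketched as a shared-secret challenge/response: the verifier sends a fresh nonce, and the node proves possession of its key without revealing it. This minimal example uses Python's standard `hmac` module; the node names and keys are illustrative assumptions, and real overlays more often use digital certificates as noted above:

```python
import hashlib
import hmac
import os

# Keys known to the verifier (illustrative; real systems would use
# certificates or a key-distribution mechanism)
SECRET_KEYS = {"node-A": b"key-for-A"}

def challenge() -> bytes:
    """Issue a fresh random nonce so responses cannot be replayed."""
    return os.urandom(16)

def respond(key: bytes, nonce: bytes) -> bytes:
    """The node proves key possession by MACing the nonce."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def verify(node: str, nonce: bytes, response: bytes) -> bool:
    expected = hmac.new(SECRET_KEYS[node], nonce, hashlib.sha256).digest()
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, response)

nonce = challenge()
resp = respond(b"key-for-A", nonce)
print(verify("node-A", nonce, resp))   # True only for the correct key
```

Because each challenge uses a fresh nonce, a captured response is useless for later impersonation attempts.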
Delay-tolerant networking (DTN)


Delay-tolerant networking (DTN) is an approach to computer
network architecture that seeks to address the technical issues in heterogeneous
networks that may lack continuous network connectivity. Examples of such
networks are those operating in mobile or extreme terrestrial environments, or
planned networks in space.

The term disruption-tolerant networking has also gained currency in the
United States due to support from DARPA, which has funded many DTN
projects. Disruption may occur because of the limits of wireless radio range,
sparsity of mobile nodes, energy resources, attack, and noise.

In the 1970s, spurred by the decreasing size of computers, researchers began
developing technology for routing between non-fixed locations of computers.
While the field of ad hoc routing was inactive throughout the 1980s, the
widespread use of wireless protocols reinvigorated the field in the 1990s
as mobile ad hoc networking (MANET) and vehicular ad hoc
networking became areas of increasing interest.

Concurrently with (but separate from) the MANET activities, DARPA had
funded NASA, MITRE and others to develop a proposal for the Interplanetary
Internet (IPN). Internet pioneer Vint Cerf and others developed the initial IPN
architecture, relating to the necessity of networking technologies that can cope
with the significant delays and packet corruption of deep-space
communications. In 2002, Kevin Fall started to adapt some of the ideas in the
IPN design to terrestrial networks and coined the term delay-tolerant
networking and the DTN acronym. A paper published at the 2003 SIGCOMM
conference gives the motivation for DTNs.[1] The mid-2000s brought about
increased interest in DTNs, including a growing number of academic
conferences on delay and disruption-tolerant networking, and growing interest
in combining work from sensor networks and MANETs with the work on DTN.
This field saw many optimizations on classic ad hoc and delay-tolerant
networking algorithms and began to examine factors such as security,
reliability, verifiability, and other areas of research that are well understood in
traditional computer networking.
