Advanced Computer Network-Unit-1
1. Network Devices
Basic hardware interconnecting network nodes, such as Network Interface
Cards (NICs), Bridges, Hubs, Switches, and Routers, are used in all networks.
In addition, a medium for connecting these building blocks is necessary; this is
usually galvanic (copper) cable, while optical fiber cable is less common. The
following are the network devices:
NIC (Network Interface Card): A network card, often known as a
network adapter or NIC (network interface card), is computer hardware that
enables computers to communicate via a network. It offers physical access
to networking media and, in many cases, MAC addresses serve as a low-
level addressing scheme. Each network interface card has a distinct
identifier. This is stored on a chip that is attached to the card.
Repeater: A repeater is an electrical device that receives a signal, cleans it
of unwanted noise, regenerates it, and retransmits it at a higher power level
or to the opposite side of an obstruction, allowing the signal to travel
greater distances without degradation. In most twisted-pair Ethernet networks,
repeaters are required for cable runs longer than 100 meters. Repeaters operate
at the physical layer.
Hub: A hub is a device that joins together multiple twisted-pair or fiber-optic
Ethernet devices, making them act as a single network segment. The device can
be visualized as a multiport repeater. A network hub is a relatively simple
broadcast device: any packet entering any port is regenerated and broadcast out
on all other ports, and hubs do not manage any of the traffic that passes
through them. Because every packet is sent out through all other ports, packet
collisions occur, substantially impeding the smooth flow of communication.
Bridges: A bridge initially forwards incoming frames to every port except the
one on which they arrived. Unlike hubs, however, bridges learn which MAC
addresses are reachable through specific ports rather than copying every
message to all ports. Once a port and an address are associated, the bridge
will forward traffic destined for that address only to that port.
Switches: A switch differs from a hub in that it forwards frames only to the
ports that are participating in the communication, rather than to all connected
ports. A switch breaks up the collision domain (each port is its own collision
domain), but all of its ports still belong to a single broadcast domain.
Switches make frame-forwarding decisions based on MAC addresses.
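As a rough sketch (not from the original text, using a toy Python model with
hypothetical class and method names), the following shows how a learning bridge
or switch builds its MAC-address table and forwards frames only where needed:

    class LearningSwitch:
        def __init__(self):
            self.mac_table = {}  # MAC address -> port it was last seen on

        def receive(self, src_mac, dst_mac, in_port, all_ports):
            # Learn: remember which port the source address arrived on.
            self.mac_table[src_mac] = in_port
            # Forward: to the known port if the destination has been learned,
            # otherwise flood to every port except the one it came in on.
            if dst_mac in self.mac_table:
                return [self.mac_table[dst_mac]]
            return [p for p in all_ports if p != in_port]

    sw = LearningSwitch()
    print(sw.receive("aa:aa", "bb:bb", 1, [1, 2, 3]))  # unknown dst -> flood [2, 3]
    print(sw.receive("bb:bb", "aa:aa", 2, [1, 2, 3]))  # aa:aa learned on port 1 -> [1]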
Routers: Routers are networking devices that use headers and forwarding
tables to find the optimal way to forward data packets between networks. A
router is a computer networking device that links two or more computer
networks and selectively exchanges data packets between them. A router
can use address information in each data packet to determine if the source
and destination are on the same network or if the data packet has to be
transported between networks. When numerous routers are deployed in a
wide collection of interconnected networks, the routers share target system
addresses so that each router can develop a table displaying the preferred
pathways between any two systems on the associated networks.
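To illustrate the forwarding-table idea, here is a minimal sketch of a
longest-prefix-match lookup using Python's standard ipaddress module; the
prefixes and next-hop addresses are made up for the example:

    import ipaddress

    # Hypothetical forwarding table: prefix -> next hop
    forwarding_table = {
        "10.0.0.0/8":  "192.168.1.1",
        "10.1.0.0/16": "192.168.1.2",
        "0.0.0.0/0":   "192.168.1.254",  # default route
    }

    def lookup(destination: str) -> str:
        dst = ipaddress.ip_address(destination)
        best_net, best_hop = None, None
        for prefix, next_hop in forwarding_table.items():
            net = ipaddress.ip_network(prefix)
            if dst in net and (best_net is None or net.prefixlen > best_net.prefixlen):
                best_net, best_hop = net, next_hop
        return best_hop

    print(lookup("10.1.2.3"))  # -> 192.168.1.2 (most specific prefix wins)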
Gateways: To provide system compatibility, a gateway may contain
devices such as protocol translators, impedance-matching devices, rate
converters, fault isolators, or signal translators. It also necessitates the
development of administrative procedures that are acceptable to both
networks. By completing the necessary protocol conversions, a protocol
translation/mapping gateway joins networks that use distinct network
protocol technologies.
2. Links
Links are the ways information travels between devices, and they can be of
two types:
Wired: Communication is carried over a physical wired medium; copper wire,
twisted pair, or fiber-optic cables are all options. A wired network employs
cables to link devices such as laptops or desktop PCs to the Internet or
another network.
Wireless: Wireless means without wires; the medium is made up of
electromagnetic (EM) waves or infrared waves. All wireless devices have
antennas or sensors. For data or voice communication, a wireless network uses
radio-frequency waves rather than wires.
3. Communication Protocols
A communication protocol is a set of rules that all devices follow when they
share information. Some common protocols are TCP/IP, IEEE 802, Ethernet,
wireless LAN, and cellular standards. TCP/IP is a model that organizes how
communication works in modern networks. It has four functional layers for
these communication links:
Network Access Layer: This layer controls how data is physically
transferred, including how hardware sends data through wires or fibers.
Internet Layer: This layer packages data into understandable packets and
ensures it can be sent and received.
Transport Layer: This layer keeps the communication between devices
steady and reliable.
Application Layer: This layer allows high-level applications to access the
network to start data transfer.
Most of the modern internet structure is based on the TCP/IP model, although
the similar seven-layer OSI model still has a strong influence.
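As a small illustration of this layering, the sketch below lets the application
layer hand data to the transport layer (TCP) through a socket, while the
operating system takes care of the internet and network access layers
underneath; the host and request shown are only examples:

    import socket

    # Open a TCP connection (transport layer) and send an application-layer
    # request; IP routing and physical transmission happen below the socket.
    with socket.create_connection(("example.com", 80)) as s:
        s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(s.recv(4096)[:80])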
IEEE 802 is a group of standards for local area networks (LAN) and
metropolitan area networks (MAN). The most well-known member of the
IEEE 802 family is wireless LAN, commonly known as WLAN or Wi-Fi.
4. Network Defense
While nodes, links, and protocols are the building blocks of a network, a
modern network also needs strong defenses. Security is crucial because huge
amounts of data are constantly being created, moved, and processed. Some
examples of network defense tools are firewalls, intrusion detection systems
(IDS), intrusion prevention systems (IPS), network access control (NAC),
content filters, proxy servers, anti-DDoS devices, and load balancers.
Based on how the participating devices are organized, a network can be one of
the following types:
o Peer-To-Peer network
o Client/Server network
Note: Since the message is short and the bandwidth is high, the dominant factor
is the propagation time and not the transmission time (which can be ignored).
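A quick back-of-the-envelope check with hypothetical numbers shows why
propagation dominates in that case:

    message_bits = 1_000
    bandwidth_bps = 1e9           # 1 Gbps link (assumed)
    distance_m = 1_000_000        # 1000 km (assumed)
    propagation_speed = 2e8       # roughly 2/3 the speed of light in cable

    transmission_time = message_bits / bandwidth_bps    # 1 microsecond
    propagation_time = distance_m / propagation_speed   # 5 milliseconds
    # propagation_time >> transmission_time, so transmission time can be ignored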
Queuing Time
Queuing time is the time a packet spends waiting inside a router. Quite
frequently the outgoing line is busy, so the packet cannot be transmitted
immediately. The queuing time is not a fixed quantity; it changes with the load
on the network. In such cases the packet sits waiting, ready to go, in a queue.
These delays are predominantly determined by the amount of traffic on the
system: the more the traffic, the more likely a packet is stuck in the queue,
just sitting in memory, waiting.
Processing Delay
Processing delay is the time it takes the router to figure out where to send
the packet. As soon as the router has worked this out, it queues the packet for
transmission. These costs are predominantly determined by the complexity of the
protocol: the router must decipher enough of the packet to work out which queue
to put it in. Typically, the lower-level layers of the stack have simpler
protocols. If a router does not know which physical port to send the packet to,
it will send it to all ports, queuing the packet in many queues at once. In
contrast, at a higher level such as the IP protocol, the processing may include
making an ARP request to find out the physical address of the destination
before queuing the packet for transmission; this too can be considered
processing delay.
Case 2: Assume a link with a bandwidth of 3 bps and a one-way propagation delay
of 5 seconds. There can be a maximum of 3 x 5 = 15 bits on the line, because in
each second 3 bits enter the line (each bit lasting 0.33 s) and every bit
spends 5 seconds in transit.
Bandwidth-Delay Product
For both cases, the product of bandwidth and delay is the number of bits that
can fill the link. This estimate is significant when we have to send data in
bursts and wait for the acknowledgment of each burst before sending the next
one. To use the maximum capability of the link, we have to make the size of our
burst twice the product of bandwidth and delay; that is, we need to fill up the
full-duplex channel in both directions. The sender therefore sends a burst of
(2 x bandwidth x delay) bits and then waits for the receiver's acknowledgement
of part of the burst before sending another burst. The quantity
2 x bandwidth x delay is the number of bits that can be in transit at any time.
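Using the numbers from Case 2 above, a minimal calculation of the
bandwidth-delay product and the burst size looks like this:

    bandwidth = 3                  # bits per second (Case 2)
    delay = 5                      # seconds of one-way propagation delay (Case 2)

    bdp = bandwidth * delay        # 15 bits fill the link in one direction
    burst = 2 * bandwidth * delay  # 30 bits keep the full-duplex channel busy
    print(bdp, burst)              # 15 30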
THROUGHPUT
JITTER
Jitter is another performance issue related to delay. In technical terms,
jitter is "packet delay variation": it becomes a problem when different packets
of data face different delays in a network and the data at the receiving
application is time-sensitive, e.g. audio or video data. Jitter is measured in
milliseconds (ms). It is defined as an interference in the normal order of
sending data packets. For example, if the delay for the first packet is 10 ms,
for the second 35 ms, and for the third 50 ms, then a real-time destination
application that uses these packets experiences jitter.
Put simply, jitter is any deviation in, or displacement of, the signal pulses
in a high-frequency digital signal. The deviation can be in the amplitude, the
width of the signal pulse, or the phase timing. The major causes of jitter are
electromagnetic interference (EMI) and crosstalk between signals. Jitter can
lead to flickering of a display screen, affect the ability of a processor in a
desktop or server to perform as expected, introduce clicks or other undesired
artifacts into audio signals, and cause loss of transmitted data between
network devices.
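Using the delays from the example above, jitter can be estimated as the average
variation between consecutive packet delays (a simple illustration, not a
formal standard metric):

    delays_ms = [10, 35, 50]      # per-packet delays from the example above

    variations = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    average_jitter_ms = sum(variations) / len(variations)
    print(average_jitter_ms)      # (25 + 15) / 2 = 20 ms of jitter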
Jitter is harmful and causes network congestion and packet loss.
Congestion is like a traffic jam on a highway: cars cannot move forward at a
reasonable speed. Likewise, during congestion all the packets arrive at a
junction at the same time and nothing can get through.
The second negative effect is packet loss. When packets arrive at unexpected
intervals, the receiving system is not able to process the information, which
leads to missing information, also called "packet loss". This has negative
effects on video viewing: if a video becomes pixelated and skips, the network
is experiencing jitter, and the result of that jitter is packet loss. When you
are playing a game online, the effect of packet loss can be that a player
begins moving around the screen randomly or, even worse, the game jumps from
one scene to the next, skipping over part of the gameplay.
Figure: Jitter. The times at which packets are sent are not the same as the
times at which they arrive at the receiver: one of the packets faces an
unexpected delay on its way and is received after the expected time. This is
jitter.
A jitter buffer can reduce the effects of jitter, either in a network, on a router or
switch, or on a computer. The system at the destination receiving the network
packets usually receives them from the buffer and not from the source system
directly. Each packet is fed out of the buffer at a regular rate. Another approach
to diminish jitter in case of multiple paths for traffic is to selectively route
traffic along the most stable paths or to always pick the path that can come
closest to the targeted packet delivery rate.
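A minimal sketch of the jitter-buffer idea (hypothetical class, purely
illustrative): packets are pushed in as they arrive at irregular times, but are
handed to the application only once per regular playout tick after a small
pre-fill:

    from collections import deque

    class JitterBuffer:
        def __init__(self, prefill=3):
            self.queue = deque()
            self.prefill = prefill
            self.playing = False

        def push(self, packet):
            # Packets arrive from the network at irregular intervals.
            self.queue.append(packet)

        def pop(self):
            # Called once per playout tick: hold back output until the
            # buffer is pre-filled, then release one packet per tick.
            if not self.playing and len(self.queue) < self.prefill:
                return None
            self.playing = True
            return self.queue.popleft() if self.queue else None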
Factors Affecting Network Performance
Below mentioned are the factors that affect the network performance.
Network Infrastructure
Applications used in the Network
Network Issues
Network Security
Network Infrastructure

Applications Used in the Network
Applications used in the network can also have an impact on its performance:
applications that perform poorly can consume large amounts of bandwidth, and
more complicated applications also require maintenance, which in turn affects
the performance of the network.
Network Issues
Network Security
Error is a condition when the receiver’s information does not match the
sender’s. Digital signals suffer from noise during transmission that can
introduce errors in the binary bits traveling from sender to receiver. That means
a 0 bit may change to 1 or a 1 bit may change to 0.
Data may get scrambled by noise or corrupted whenever a message is transmitted.
To prevent such errors, error-detection codes (implemented at either the Data
Link layer or the Transport layer of the OSI model) are added as extra data to
digital messages. This helps in detecting any errors that may have occurred
during message transmission.
Types of Errors
Single-Bit Error
A single-bit error refers to a type of data transmission error that occurs when
one bit (i.e., a single binary digit) of a transmitted data unit is altered during
transmission, resulting in an incorrect or corrupted data unit.
Multiple-Bit Error
A multiple-bit error is an error type that arises when more than one bit in a data
transmission is affected. Although multiple-bit errors are relatively rare when
compared to single-bit errors, they can still occur, particularly in high-noise or
high-interference digital environments.
Burst Error
When several consecutive bits are flipped mistakenly in digital transmission, it
creates a burst error. This error causes a sequence of consecutive incorrect
values.
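These error types can be pictured with simple XOR masks (an illustrative
sketch, not from the original text): flipping one bit gives a single-bit error,
while flipping several consecutive bits gives a burst error:

    data = 0b10101100

    single_bit_error = data ^ 0b00001000   # exactly one bit flipped in transit
    burst_error      = data ^ 0b00111100   # several consecutive bits flipped

    print(f"{data:08b} -> {single_bit_error:08b}")  # 10101100 -> 10100100
    print(f"{data:08b} -> {burst_error:08b}")       # 10101100 -> 10010000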
CRC
Working
We are given a dataword of length n and a divisor of length k.
Step 1: Append (k-1) zeros to the original message
Step 2: Perform modulo-2 division by the divisor
Step 3: The remainder of the division is the CRC
Step 4: Codeword = dataword followed by the CRC (equivalently, the dataword
with k-1 zeros appended, plus the remainder)
Note:
The CRC must be k-1 bits long
Length of codeword = n + k - 1 bits
Example: Let the data to be sent be 1010000 and the divisor, in polynomial
form, be x^3 + 1 (i.e., 1001). The CRC method is illustrated below.
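A minimal Python sketch of the modulo-2 division (the function name is made up
for illustration); for dataword 1010000 and divisor 1001 it produces the CRC
011, so the transmitted codeword is 1010000011:

    def crc_remainder(dataword: str, divisor: str) -> str:
        k = len(divisor)
        # Step 1: append (k-1) zeros to the dataword
        bits = list(dataword + "0" * (k - 1))
        # Step 2: modulo-2 (XOR) division
        for i in range(len(dataword)):
            if bits[i] == "1":
                for j in range(k):
                    bits[i + j] = str(int(bits[i + j]) ^ int(divisor[j]))
        # Step 3: the last k-1 bits are the remainder, i.e. the CRC
        return "".join(bits[-(k - 1):])

    crc = crc_remainder("1010000", "1001")  # divisor 1001 = x^3 + 1
    print(crc)                              # 011
    print("1010000" + crc)                  # codeword: 1010000011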
What is Ethernet?
Pure ALOHA
When a station sends data, it waits for an acknowledgement. If the
acknowledgement does not arrive within the allotted time, the station waits for
a random amount of time called the back-off time (Tb) and re-sends the data.
Since different stations wait for different amounts of time, the probability of
further collisions decreases.
Vulnerable time = 2 x frame transmission time
Throughput S = G x e^(-2G)
Maximum throughput ≈ 0.184 at G = 0.5
Slotted ALOHA
It is similar to pure aloha, except that we divide time into slots and sending of
data is allowed only at the beginning of these slots. If a station misses out the
allowed time, it must wait for the next slot. This reduces the probability of
collision.
Vulnerable time = frame transmission time
Throughput S = G x e^(-G)
Maximum throughput ≈ 0.368 at G = 1
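The two throughput formulas can be evaluated directly (a small sketch; G is the
average number of frames generated per frame-transmission time):

    import math

    def pure_aloha(G):
        return G * math.exp(-2 * G)       # S = G * e^(-2G)

    def slotted_aloha(G):
        return G * math.exp(-G)           # S = G * e^(-G)

    print(round(pure_aloha(0.5), 3))      # 0.184, maximum for pure ALOHA
    print(round(slotted_aloha(1.0), 3))   # 0.368, maximum for slotted ALOHA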
CSMA
Carrier Sense Multiple Access ensures fewer collisions as the station is required
to first sense the medium (for idle or busy) before transmitting data. If it is idle
then it sends data; otherwise it waits till the channel becomes idle. However,
there is still a chance of collision in CSMA due to propagation delay. For
example, if station A wants to send data, it will first sense the medium. If it
finds the channel idle, it will start sending data. However, by the time the first
bit of data is transmitted (delayed due to propagation delay) from station A, if
station B requests to send data and senses the medium it will also find it idle and
will also send data. This will result in collision of data from station A and B.
1-Persistent: The node senses the channel; if it is idle, it sends the data,
otherwise it continuously keeps checking the medium and transmits
unconditionally (with probability 1) as soon as the channel becomes idle.
Non-Persistent: The node senses the channel, if idle it sends the data,
otherwise it checks the medium after a random amount of time (not
continuously) and transmits when found idle.
P-Persistent: The node senses the medium; if it is idle, it sends the data with
probability p. If the data is not transmitted (probability 1-p), it waits for
some time and checks the medium again; if the medium is then found idle, it
again transmits with probability p. This repeats until the frame is sent (a
small sketch follows this list). It is used in Wi-Fi and packet radio systems.
O-Persistent: Superiority of nodes is decided beforehand and transmission
occurs in that order. If the medium is idle, node waits for its time slot to send
data.
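A rough sketch of the p-persistent behaviour described above (the function and
the idle probability used in the example are hypothetical):

    import random

    def p_persistent_send(channel_is_idle, p=0.3, max_slots=50):
        # channel_is_idle: a caller-supplied function that senses the medium.
        for _ in range(max_slots):
            if channel_is_idle():
                if random.random() < p:
                    return "frame transmitted"   # transmit with probability p
                # with probability 1-p: defer one slot, then sense again
            # medium busy (or deferred): wait for the next slot
        return "gave up"

    # Example: a medium that happens to be idle 80% of the time (illustrative).
    print(p_persistent_send(lambda: random.random() < 0.8))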
CSMA/CD
Carrier sense multiple access with collision detection. Stations can terminate
transmission of data if collision is detected. For more details refer – Efficiency
of CSMA/CD.
CSMA/CA
Carrier sense multiple access with collision avoidance. The process of
collision detection involves the sender receiving acknowledgement signals. If
there is just one signal (its own), the data was sent successfully, but if
there are two signals (its own and the one with which it collided), a collision
has occurred. To distinguish between these two cases, the collision must have a
significant impact on the received signal. This is true in wired networks but
not in wireless networks, which is why CSMA/CA is used for wireless media.
CSMA/CA Avoids Collision
Interframe Space: Station waits for medium to become idle and if found
idle it does not immediately send data (to avoid collision due to propagation
delay) rather it waits for a period of time called Interframe space or IFS.
After this time it again checks the medium for being idle. The IFS duration
depends on the priority of station.
Contention Window: This is an amount of time divided into slots. A sender that
is ready to transmit chooses a random number of slots as its wait time, and the
window from which this number is drawn doubles every time the medium is not
found idle. If the medium is found busy, the station does not restart the
entire process; rather, it restarts the timer when the channel is found idle
again (a short backoff sketch follows this list).
Acknowledgement: The sender re-transmits the data if acknowledgement is
not received before time-out.
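A minimal sketch of the contention-window backoff described above; the initial
window of 15 slots and the cap of 1023 slots are illustrative values, not taken
from the original text:

    import random

    def backoff_slots(attempt, cw_min=15, cw_max=1023):
        # The contention window doubles with every failed attempt
        # (binary exponential backoff), up to a cap.
        cw = min((cw_min + 1) * (2 ** attempt) - 1, cw_max)
        return random.randint(0, cw)

    for attempt in range(4):
        print(attempt, backoff_slots(attempt))  # windows of 15, 31, 63, 127 slots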
2. Controlled Access
Controlled access protocols ensure that only one device uses the network at a
time. Think of it like taking turns in a conversation so everyone can speak
without talking over each other.
In this, the data is sent by that station which is approved by all other stations.
For further details refer – Controlled Access Protocols.
3. Channelization
In this, the available bandwidth of the link is shared in time, frequency, or
code among multiple stations so that they can access the channel simultaneously.
Frequency Division Multiple Access (FDMA) – The available bandwidth
is divided into equal bands so that each station can be allocated its own band.
Guard bands are also added so that no two bands overlap to avoid crosstalk
and noise.
Time Division Multiple Access (TDMA) – In this, the bandwidth is shared
between multiple stations. To avoid collision time is divided into slots and
stations are allotted these slots to transmit data. However, there is an
overhead of synchronization, as each station needs to know its time slot. This
is resolved by adding synchronization bits to each slot. Another issue with
TDMA is propagation delay, which is resolved by the addition of guard times
between slots.
For more details refer – Circuit Switching
Code Division Multiple Access (CDMA) – One channel carries all transmissions
simultaneously; there is neither division of bandwidth nor division of time.
For example, if there are many people in a room all speaking at the same time,
perfect reception is still possible as long as each pair of speakers converses
in its own language. Similarly, data from different stations can be transmitted
simultaneously using different code sequences (see the sketch after this list).
Orthogonal Frequency Division Multiple Access (OFDMA) – In OFDMA the available
bandwidth is divided into many small subcarriers in order to increase the
overall performance, and the data is transmitted over these subcarriers. It is
widely used in 5G technology.
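As a sketch of the CDMA idea, the example below uses length-4 Walsh codes as
the "code languages": every station transmits at the same time on the same
channel, and a receiver recovers one station's bit by correlating the combined
signal with that station's code (the codes and bits are illustrative):

    # Walsh codes of length 4: mutually orthogonal chip sequences
    codes = {
        "A": [ 1,  1,  1,  1],
        "B": [ 1, -1,  1, -1],
        "C": [ 1,  1, -1, -1],
    }

    # Each station multiplies its data bit (+1 or -1) by its own code,
    # and the shared channel carries the sum of all chip sequences.
    bits = {"A": +1, "B": -1, "C": +1}
    channel = [sum(bits[s] * codes[s][i] for s in codes) for i in range(4)]

    # A receiver recovers one station's bit by correlating the channel
    # signal with that station's code and dividing by the code length.
    def decode(station):
        return sum(c * chip for c, chip in zip(channel, codes[station])) // len(codes[station])

    print(decode("A"), decode("B"), decode("C"))  # 1 -1 1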
Advantages of OFDMA
High data rates
Good for multimedia traffic
Increase in efficiency
Disadvantages of OFDMA
Complex to implement
High peak-to-average power ratio
Overlay Network
The overlay creates a new layer where traffic can be programmatically directed
through new virtual network routes or paths instead of requiring physical links.
Overlays enable administrators to define and manage traffic flows, irrespective
of the underlying physical infrastructure.
SDN is a quickly growing network strategy in which the network operating system
separates the data plane (packet handling) from the control plane (the network
topology and routing rules). SDN acts as an overlay: a centralized controller,
rather than the individual distributed switches, determines how packets are
handled.
In a traditional network, devices communicate directly using the physical
network's topology and routing mechanisms. However, with a network overlay,
an additional layer of abstraction is added, which allows for advanced network
functionalities such as resource virtualization, redundancy, and fault tolerance.
Concurrently with (but separate from) the MANET activities, DARPA had
funded NASA, MITRE and others to develop a proposal for the Interplanetary
Internet (IPN). Internet pioneer Vint Cerf and others developed the initial IPN
architecture, relating to the necessity of networking technologies that can cope
with the significant delays and packet corruption of deep-space
communications. In 2002, Kevin Fall started to adapt some of the ideas in the
IPN design to terrestrial networks and coined the term delay-tolerant
networking and the DTN acronym. A paper published at the 2003 SIGCOMM
conference gives the motivation for DTNs.[1] The mid-2000s brought about
increased interest in DTNs, including a growing number of academic
conferences on delay and disruption-tolerant networking, and growing interest
in combining work from sensor networks and MANETs with the work on DTN.
This field saw many optimizations on classic ad hoc and delay-tolerant
networking algorithms and began to examine factors such as security,
reliability, verifiability, and other areas of research that are well understood in
traditional computer networking.