
Switching Techniques



In large networks, there can be multiple paths from sender to receiver. The switching technique decides the route used for data transmission.

Switching techniques connect systems to establish one-to-one communication.

Circuit switching is a switching technique that establishes a dedicated path between sender and receiver.

In the circuit switching technique, once the connection is established, the dedicated path remains in place until the connection is terminated.

Circuit switching in a network operates in a similar way to the telephone system.

A complete end-to-end path must exist before communication takes place.

In circuit switching, when a user wants to send data, voice, or video, a request signal is sent to the receiver; the receiver sends back an acknowledgment to confirm the availability of the dedicated path. After the acknowledgment is received, the data is transferred over the dedicated path.

Circuit switching is used in the public telephone network, primarily for voice transmission.

The circuit provides a fixed data rate for the duration of the connection.
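The dedicated-path idea can be sketched in a few lines of Python. This is a toy model, not a real signalling protocol: every link on the path must be reserved before any data flows, and a second call that needs a busy link is blocked until the circuit is torn down. The node and link names are illustrative.

```python
# Toy model of circuit switching: reserve every link on the path before
# any data flows; the circuit holds the links until it is torn down.
class Network:
    def __init__(self, links):
        self.free = set(links)      # links not held by any circuit

    def establish(self, path):
        links = list(zip(path, path[1:]))
        if not all(l in self.free for l in links):
            return None             # some link is busy: the call is blocked
        for l in links:
            self.free.remove(l)     # dedicate these links to the circuit
        return links

    def teardown(self, circuit):
        self.free.update(circuit)   # release the links for other calls

net = Network({("A", "B"), ("B", "C"), ("B", "D")})
circuit = net.establish(["A", "B", "C"])
assert circuit is not None
assert net.establish(["A", "B", "D"]) is None   # A-B link already in use
net.teardown(circuit)
assert net.establish(["A", "B", "D"]) is not None
```

Blocking a call when any link is busy is exactly the behaviour of the telephone network described above: capacity is dedicated whether or not data is flowing.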


Message Switching is a switching technique in which a message is transferred as a complete unit and routed through
intermediate nodes at which it is stored and forwarded.

In Message Switching technique, there is no establishment of a dedicated path between the sender and receiver.

Message switches are programmed in such a way so that they can provide the most efficient routes.

Each node stores the entire message and then forwards it to the next node. This type of network is known as a
store-and-forward network.

Message switching treats each message as an independent entity.

Traffic congestion can be reduced because the message is temporarily stored in the nodes.

Message priority can be used to manage the network.

The size of a message sent over the network can vary; message switching therefore supports messages of unlimited size.

Packet switching is a switching technique in which the message is divided into smaller pieces that are sent individually.

The message is split into smaller pieces known as packets, and each packet is given a unique number so that its order can be identified at the receiving end.

Every packet contains some information in its headers such as source address, destination address and sequence
number.

Packets travel across the network, each taking the shortest available path.

All the packets are reassembled at the receiving end in the correct order.

If any packet is missing or corrupted, a message is sent asking the sender to resend it.

If all packets arrive in the correct order, an acknowledgment message is sent.
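The split-number-reassemble cycle can be sketched directly. This is an illustration, not a real protocol: the header fields (src, dst, seq) mirror the ones mentioned above, and shuffling the packets stands in for them taking different routes through the network.

```python
# Sketch of packet switching: split a message into numbered packets,
# let them arrive out of order, and reassemble by sequence number.
import random

def packetize(message: bytes, size: int, src: str, dst: str):
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"src": src, "dst": dst, "seq": n, "payload": c}
            for n, c in enumerate(chunks)]

def reassemble(packets):
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

msg = b"packets may arrive out of order"
pkts = packetize(msg, 8, "10.0.0.1", "10.0.0.2")
random.shuffle(pkts)                 # simulate packets taking different routes
assert reassemble(pkts) == msg       # sequence numbers restore the order
```

The sequence number in each header is what lets the receiver rebuild the message no matter what order the packets arrive in.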

IPv4 (Internet Protocol version 4) and IPv6 (Internet Protocol version 6) are two different addressing schemes used to
identify and locate devices on a network. Here's an explanation of both addressing schemes:

IPv4:
IPv4 is the most widely used version of the Internet Protocol and has been in use since the early days of the Internet.

It uses a 32-bit address space, which means it can support approximately 4.3 billion unique addresses.

An IPv4 address is written in the form of four sets of numbers separated by dots, such as 192.168.0.1.

The 32-bit address is divided into two parts: the network portion and the host portion. The division between these two
parts is determined by the subnet mask.

The subnet mask specifies how many bits are used for the network portion and how many bits are used for the host
portion. For example, a subnet mask of 255.255.255.0 means the first three sets of numbers represent the network
portion, and the last set represents the host portion.
IPv4 addresses are hierarchical, with different classes of addresses designated for different purposes. This includes public
addresses used on the internet and private addresses used within private networks.
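Python's standard `ipaddress` module makes the network/host split concrete. A small sketch, using the 192.168.0.1 address and 255.255.255.0 mask from the example above:

```python
import ipaddress

# An interface = an address plus its netmask; the module derives the
# network portion and the address count from the mask.
iface = ipaddress.ip_interface("192.168.0.1/255.255.255.0")
print(iface.network)                 # 192.168.0.0/24 (the network portion)
print(iface.ip)                      # 192.168.0.1
print(iface.network.num_addresses)   # 256 addresses in this network
```

The /24 in the output is the prefix-length form of the same mask: 24 bits for the network, 8 bits for hosts.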

IPv6:
IPv6 was introduced as an upgrade to IPv4 to address the growing need for more IP addresses due to the increasing
number of devices connecting to the internet.

It uses a 128-bit address space, which allows for an astronomical number of unique addresses—approximately 340
undecillion (3.4 x 10^38).

An IPv6 address is written in the form of eight sets of four hexadecimal digits separated by colons, such as
2001:0db8:85a3:0000:0000:8a2e:0370:7334.

IPv6 addresses offer a much larger address space, improved security features, and enhanced support for auto-
configuration and mobility.

With IPv6, the division between the network and host portions is typically based on the network prefix, specified as a
part of the address. The remaining bits represent the host portion.

IPv6 also introduces the concept of "anycast" addressing, where multiple devices can share the same address, and the
routing infrastructure determines the most appropriate device to handle incoming packets.
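The `ipaddress` module handles IPv6 as well, which is a handy way to see the 128-bit size and the standard zero-compression of the example address above:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(addr.compressed)       # 2001:db8:85a3::8a2e:370:7334 (:: replaces zeros)
print(addr.exploded)         # full eight-group form with leading zeros
print(addr.max_prefixlen)    # 128 bits of address space
```

The compressed form drops leading zeros in each group and replaces the longest run of zero groups with `::`, which is why most published IPv6 addresses look much shorter than the full form.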

In summary, IPv4 is the older addressing scheme with a limited number of addresses, while IPv6 is the newer addressing
scheme that provides a vastly expanded address space and additional features. The transition from IPv4 to IPv6 is
ongoing to accommodate the increasing demands of the internet

ICMP
Internet Control Message Protocol (ICMP) is a network layer protocol used to diagnose communication errors by
performing an error control mechanism. Since IP has no built-in mechanism for sending error and control
messages, it depends on the Internet Control Message Protocol (ICMP) to provide error control.

ICMP is used for reporting errors and management queries. It is a supporting protocol and is used by network devices
like routers for sending error messages and operations information. For example, the requested service is not available
or a host or router could not be reached.

Uses of ICMP

ICMP is used for error reporting: when two devices communicate over the internet and an error occurs, a router
sends an ICMP error message to the source describing the error. For example, whenever a device sends a message
that is too large for the receiver, the receiver drops the message and replies to the source with an ICMP
message.

How Does ICMP Work?

ICMP is a core protocol of the IP suite, but it isn’t associated with a transport layer protocol (TCP or UDP):
it is connectionless, so it doesn’t need to establish a connection with the destination device before sending a
message.
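To make the message format concrete, here is a sketch of building an ICMP echo request (the message behind ping) per RFC 792, using only the standard library. It constructs the header bytes and the one's-complement internet checksum; it does not send anything (that would need a raw socket and root privileges). The identifier, sequence number, and payload are arbitrary.

```python
import struct

def inet_checksum(data: bytes) -> int:
    # Standard internet checksum (RFC 1071): one's-complement sum of
    # 16-bit words, with carries folded back in.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    # Type 8 (echo request), code 0; checksum is computed over the
    # whole message with the checksum field zeroed first.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = icmp_echo_request(0x1234, 1, b"ping")
# A correctly checksummed ICMP message re-sums to zero:
assert inet_checksum(pkt) == 0
```

The receiver runs the same checksum over the message it gets; a nonzero result means the packet was corrupted in transit, which ties back to ICMP's role in error detection.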

CIDR, which stands for Classless Inter-Domain Routing, is a method used in computer networking to allocate and
manage IP addresses more efficiently. It helps in organizing and routing network traffic across the internet.

In the earlier days of networking, IP addresses were divided into classes, such as Class A, Class B, and Class C. Each
class had a fixed number of available IP addresses. However, this system was wasteful, as many IP addresses were
unused or allocated inefficiently.

CIDR, on the other hand, allows for more flexible allocation of IP addresses. It allows network administrators to
group IP addresses together and create smaller subnets based on their needs. CIDR uses a notation that combines
the IP address and a prefix length, separated by a slash (/). The prefix length indicates the number of bits used for
the network portion of the IP address.

For example, instead of assigning a whole Class C network with 256 IP addresses to a small organization that needs
only 30 addresses, CIDR allows the administrator to allocate just the required number of addresses. The IP address
might look something like this: 192.168.0.0/27. Here, the "/27" indicates that the first 27 bits of the IP address are
used for the network, leaving 5 bits for host addresses. This provides a total of 32 possible addresses (2^5).
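The 192.168.0.0/27 arithmetic above can be checked with the stdlib `ipaddress` module:

```python
import ipaddress

net = ipaddress.ip_network("192.168.0.0/27")
print(net.num_addresses)         # 32 total addresses (2**5)
print(net.netmask)               # 255.255.255.224 (27 network bits)
hosts = list(net.hosts())
print(len(hosts))                # 30 usable hosts (network and
print(hosts[0], hosts[-1])       # broadcast addresses excluded)
```

Note the distinction the module makes explicit: of the 32 addresses in a /27, two (the network address and the broadcast address) are reserved, leaving 30 for hosts, which is exactly the small-organization case described above.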

CIDR also enables efficient routing by allowing the aggregation of multiple smaller networks into a single larger
network. This reduces the number of entries in routing tables, making the routing process faster and more efficient.
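Route aggregation can be demonstrated the same way; the four contiguous /24 networks here are made-up examples:

```python
import ipaddress

# Four contiguous /24 networks collapse into a single /22 supernet,
# shrinking four routing-table entries down to one.
nets = [ipaddress.ip_network(f"10.0.{i}.0/24") for i in range(4)]
merged = list(ipaddress.collapse_addresses(nets))
print(merged)    # [IPv4Network('10.0.0.0/22')]
```

This is precisely the routing-table reduction CIDR enables: an upstream router needs one entry for 10.0.0.0/22 instead of four separate /24 entries.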

In summary, CIDR is a method that allows for more efficient allocation and routing of IP addresses by grouping them
together into smaller subnets and aggregating multiple networks. It helps conserve IP addresses, improve network
management, and enhance overall network performance.

NAT stands for Network Address Translation. It is a technology used in computer networks
to allow multiple devices on a local network to share a single public IP address.

Imagine you have a home network with several devices like computers, smartphones, and
smart TVs. Your Internet Service Provider (ISP) assigns you a single public IP address, which
is the unique address that identifies your network on the internet. However, you have
multiple devices that need to access the internet simultaneously.
This is where NAT comes in. NAT acts as a middleman between your local network and the
internet. It assigns a unique private IP address to each device within your network. These
private IP addresses are only used within your local network and cannot be directly
accessed from the internet.

When a device from your local network wants to communicate with a website or server on
the internet, NAT translates the private IP address of the device into the public IP address
assigned by your ISP. This translation allows the device to send and receive data over the
internet using the public IP address.

When the data comes back from the internet, NAT receives it and looks at the destination
IP address. It then checks its translation table to determine which device in the local
network the data is meant for, based on the port number used. NAT translates the public
IP address back into the corresponding private IP address and forwards the data to the
appropriate device.

In simple terms, NAT allows multiple devices in your network to share a single public IP
address by assigning unique private IP addresses to each device and translating between
these private IP addresses and the public IP address when communicating with the
internet. This way, you can have several devices using the internet at the same time, even
though you only have one public IP address.
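The translation table at the heart of NAT can be sketched with a pair of dictionaries. This is a toy model of port-based NAT (the common home-router variant), not a faithful implementation; the public IP, private addresses, and port range are all illustrative.

```python
# Minimal sketch of NAT port translation: each outbound (private IP,
# port) flow is assigned a public port; replies are translated back
# by looking that port up in the reverse table.
PUBLIC_IP = "203.0.113.5"

class Nat:
    def __init__(self):
        self.table = {}       # (private_ip, private_port) -> public_port
        self.reverse = {}     # public_port -> (private_ip, private_port)
        self.next_port = 40000

    def outbound(self, src_ip: str, src_port: int):
        key = (src_ip, src_port)
        if key not in self.table:            # new flow: allocate a port
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.table[key]    # what the internet sees

    def inbound(self, dst_port: int):
        return self.reverse[dst_port]        # which local device to forward to

nat = Nat()
pub_ip, pub_port = nat.outbound("192.168.1.10", 51000)
assert pub_ip == PUBLIC_IP
assert nat.inbound(pub_port) == ("192.168.1.10", 51000)
```

Two devices can now both talk to the same server: each gets its own public port, and the reverse table is what lets the router demultiplex the replies, exactly as described above.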

Services of Network Layer


Features of Network Layer
1. The main responsibility of the network layer is to carry data packets from the source to the
destination without modifying them.
2. If the packets are too large for delivery, they are fragmented i.e., broken down into smaller packets.
3. It decides the route to be taken by the packets to travel from the source to the destination among the
multiple routes available in a network (also called routing).
4. The source and destination addresses are added to the data packets inside the network layer.

Services Offered by Network Layer


The services offered by the network layer protocol are as follows:
1. Packetizing
2. Routing
3. Forwarding

1. Packetizing
The process of encapsulating the data received from the upper layers of the network (also called payload) in a
network layer packet at the source and decapsulating the payload from the network layer packet at the
destination is known as packetizing.

The source host adds a header that contains the source and destination address and some other relevant
information required by the network layer protocol to the payload received from the upper layer protocol and
delivers the packet to the data link layer.
The destination host receives the network layer packet from its data link layer, decapsulates the packet, and
delivers the payload to the corresponding upper layer protocol. The routers in the path are not allowed to
change either the source or the destination address. The routers in the path are not allowed to decapsulate the
packets they receive unless they need to be fragmented.
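The encapsulate-forward-decapsulate flow above can be sketched in a few lines. The field names are illustrative, not a real header format:

```python
# Minimal sketch of packetizing: the source wraps the payload in a
# network-layer header; the destination strips it off again.
def encapsulate(payload: bytes, src: str, dst: str) -> dict:
    return {"src": src, "dst": dst, "payload": payload}

def decapsulate(packet: dict) -> bytes:
    return packet["payload"]

def router_forward(packet: dict) -> dict:
    # A transit router forwards the packet untouched: it may change
    # neither the addresses nor the payload.
    return packet

pkt = encapsulate(b"hello", "10.0.0.1", "10.0.0.9")
pkt = router_forward(pkt)
assert decapsulate(pkt) == b"hello"
assert (pkt["src"], pkt["dst"]) == ("10.0.0.1", "10.0.0.9")
```

The assertions capture the rule stated above: the source and destination addresses set at encapsulation survive every hop unchanged.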

2. Routing
Routing is the process of moving data from one device to another. In a network there are a number of routes
available from the source to the destination; the network layer specifies strategies for finding the best
possible route among them. This process is referred to as routing. A number of routing protocols are used in
this process, and they must be running so that routers can coordinate with each other and establish
communication throughout the network.

3. Forwarding
Forwarding is simply defined as the action applied by each router when a packet arrives at one of its interfaces.
When a router receives a packet from one of its attached networks, it needs to forward the packet to another
attached network (unicast routing) or to some attached networks (in the case of multicast routing). Routers are
used on the network for forwarding a packet from the local network to the remote network. So, the process of
routing involves packet forwarding from an entry interface out to an exit interface.
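Forwarding boils down to a table lookup: find the most specific prefix that matches the destination and send the packet out the associated interface. A small longest-prefix-match sketch (the prefixes and interface names are made up):

```python
import ipaddress

# Toy forwarding table: destination prefix -> exit interface.
table = {
    ipaddress.ip_network("10.0.0.0/8"):  "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"):   "eth2",   # default route
}

def forward(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [n for n in table if addr in n]
    best = max(matches, key=lambda n: n.prefixlen)  # longest prefix wins
    return table[best]

assert forward("10.1.2.3") == "eth1"   # /16 beats the broader /8
assert forward("10.9.9.9") == "eth0"
assert forward("8.8.8.8") == "eth2"    # falls through to the default route
```

Routing is the process that fills this table in; forwarding is the per-packet lookup shown here.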

Routing vs Forwarding

- Routing: the process of moving data from one device to another device. Forwarding: the action applied by
each router when a packet arrives at one of its interfaces.
- Routing: operates at the network layer. Forwarding: operates at the network layer.
- Routing: its work is the basis of the forwarding table. Forwarding: checks the forwarding table and acts
according to it.
- Routing: works on protocols like the Routing Information Protocol (RIP). Forwarding: works on protocols
like UDP Encapsulating Security Payloads.
NAT, CIDR, ICMP


Comparison of NAT (Network Address Translation), CIDR (Classless Inter-Domain Routing), and ICMP (Internet
Control Message Protocol):

Definition
- NAT: technique to map private IP addresses to public IP addresses in networking.
- CIDR: method for allocating and managing IP addresses more efficiently.
- ICMP: protocol for diagnostics and troubleshooting within IP networks.

Purpose
- NAT: allows multiple devices within a private network to share a single public IP address.
- CIDR: provides flexibility in defining network boundaries and efficient IP address allocation.
- ICMP: facilitates network diagnostics, testing connectivity, and conveying network status.

Functionality
- NAT: translates source IP addresses and port numbers between private and public networks.
- CIDR: specifies variable-length subnet masks for subnetting and IP address allocation.
- ICMP: sends error messages, tests network connectivity, and provides feedback on network conditions.

Protocol/Standard
- NAT: implemented at the network layer (Layer 3).
- CIDR: implemented at the network layer (Layer 3).
- ICMP: implemented as part of the IP protocol suite.

Example Use Case
- NAT: allowing multiple devices in a home network to share a single public IP address for internet access.
- CIDR: dividing a large IP address space into smaller subnets for efficient IP address allocation.
- ICMP: sending ICMP echo request messages (ping) to test network reachability.

Impact on IP Addressing
- NAT: helps conserve IPv4 addresses by allowing multiple devices to use a single public IP address.
- CIDR: allows for efficient allocation of IP addresses by defining variable-length subnet masks.
- ICMP: does not directly impact IP addressing, but provides diagnostic information about IP packets.

Primary Benefit
- NAT: mitigates address scarcity and enables private networks to connect to the internet.
- CIDR: efficient utilization of IP addresses and improved routing efficiency.
- ICMP: network troubleshooting and diagnostics for error detection and status reporting.

Routing protocol
Routing protocols play a crucial role in computer networks by determining the best path for data packets to
travel from the source to the destination. Here's an explanation of three common types of routing protocols:
Distance Vector, Link State, and Path Vector.

A routing protocol specifies how routers communicate with each other, distributing information that
enables them to select routes between any two nodes on a computer network. There are 3 main
categories of routing protocols:

- Distance Vector (RIP)
- Link-state (OSPF, IS-IS)
- Path Vector (BGP)

Distance Vector
The distance vector (DV) protocol is the oldest routing protocol in practice. With distance vector, routes
are advertised based upon the following characteristics:

- Distance - How far the destination network is, based upon a metric such as hop count.
- Vector - The direction (next-hop router or egress interface) required to reach the destination.
This routing information is exchanged between directly connected neighbours. Therefore, when a
node receives a routing update, it has no knowledge of where the neighbour learned it from. In other
words, the node has no visibility of the network past its own neighbour. For this reason, distance vector is
also known as "routing by rumour".

Furthermore, routing updates are sent in full (rather than as delta-based updates) and at regular intervals,
resulting in extremely slow convergence times - one of the key downfalls of distance vector protocols.
Due to the slow convergence and "routing by rumour" behaviour, distance vector protocols are also
prone to routing loops.

However, on the flipside, the resource consumption is low compared to link-state, due to not having to
hold the full state of the entire topology.
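The neighbour-to-neighbour exchange can be sketched as a toy Bellman-Ford iteration: each node repeatedly updates its table from its neighbours' tables until nothing changes. The three-node topology and link costs are made up.

```python
# Toy distance-vector exchange over a made-up topology.
INF = float("inf")
links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 5}
nodes = {"A", "B", "C"}
neigh = {n: {} for n in nodes}
for (u, v), c in links.items():
    neigh[u][v] = c
    neigh[v][u] = c

# Each node starts knowing only itself and its direct neighbours -
# it has no visibility past them ("routing by rumour").
dist = {n: {m: (0 if m == n else neigh[n].get(m, INF)) for m in nodes}
        for n in nodes}

changed = True
while changed:                      # exchange vectors until convergence
    changed = False
    for u in nodes:
        for v, cost in neigh[u].items():
            for d in nodes:         # believe a neighbour's cheaper route
                if cost + dist[v][d] < dist[u][d]:
                    dist[u][d] = cost + dist[v][d]
                    changed = True

assert dist["A"]["C"] == 3          # A -> B -> C beats the direct cost-5 link
```

Note that A ends up routing to C via B purely on B's say-so, without ever seeing the topology itself; that blindness is exactly what makes DV protocols loop-prone when links fail.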

Link-state
In contrast to distance vector routing, link-state routing (OSPF, IS-IS) relies on each node
advertising/flooding the state (i.e. delay, bandwidth etc.) of its links to every node within the link-state
domain. This results in each node building a complete map of the network and computing a shortest path
tree with itself as the root, using the shortest path first algorithm, also known as Dijkstra's algorithm.

Unlike distance vector, link-state neighbours send only incremental routing updates (LSAs) rather
than a full routing update. These updates are sent only when a change in the network
topology occurs, rather than at regular intervals.

Link-state protocols provide extremely low convergence times and, due to each router having a
complete view of the network, aren't prone to the same routing loops seen with DV-based protocols.

However, due to the computation required to run the shortest path algorithm on each node, link-state
protocols consume greater resources than distance vector. This is rarely a concern with the systems of
today.
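The shortest-path-first computation each link-state node runs can be sketched as textbook Dijkstra over the full topology the node has learned by flooding. The graph here is a made-up example:

```python
import heapq

# Full topology as one node sees it after link-state flooding:
# neighbour -> link cost.
graph = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 2},
    "C": {"A": 5, "B": 2},
}

def dijkstra(graph, root):
    # Shortest path first: grow a tree outward from the root,
    # always settling the closest unsettled node next.
    dist = {root: 0}
    pq = [(0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                 # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

assert dijkstra(graph, "A") == {"A": 0, "B": 1, "C": 3}
```

Because every node runs this over the same complete map, all nodes agree on the tree, which is why link-state protocols avoid the routing loops that plague distance vector.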

Path Vector
Path vector (PV) protocols, such as BGP, are used across domains aka autonomous systems. In a path
vector protocol, a router does not just receive the distance vector for a particular destination from its
neighbor; instead, a node receives the distance as well as path information (aka BGP path attributes),
that the node can use to calculate (via the BGP path selection process) how traffic is routed to the
destination AS.



Quality of Service (QoS) Network Congestion Management

Congestion management, or congestion avoidance, is a QoS tool that seeks to improve network
performance by prematurely discarding some TCP packets in order to reduce total packet loss. Network
congestion occurs when a network (or a portion of the network) or a network node is overloaded with
data.

Causes of Network Congestion


Listed below are the most common causes of network congestion in a network:

Low bandwidth – Congestion occurs when the bandwidth is not large enough to accommodate all
the traffic at the same time.

Poor network design – Your network topology must be designed not only to ensure that all parts of
your network are connected but also to enhance performance across all coverage areas. Correct
subnetting ensures that traffic flows towards the destined network and stays in that same network,
which reduces congestion.

Outdated hardware – Bottlenecks can occur when data is transported through obsolete switches,
routers, servers, and Internet exchanges. If the hardware isn’t up to par, a slowdown in data
transmission occurs. Wire and cable connections as well must be in their optimal layout.

Too many devices in the broadcast domain – When you put too many hosts or multiple devices in
a broadcast domain, you get a congested network.

Broadcast storms – Too many requests or broadcast traffic in a network.

High CPU utilization – Network devices are designed to handle a certain maximum data rate.
Constantly pushing more data through them results in overutilized devices.

Consider how packets are delivered in a network that is experiencing congestion. The router is the
central node in the network, responsible for routing packets to their destinations.

New packets entering the router are placed in the interface output queue. The interface output queue is a
buffer that stores packets that are waiting to be sent out of the router. Packets that exit the interface have
been successfully sent and are on their way to their destination.

If the interface output queue is full, arriving packets are dropped. Dropped packets are lost and never
reach their destination. Congestion therefore causes packet loss: the output queue can hold only a limited
number of packets, and once it is full, new packets are dropped even if they are important.

There are a number of ways to prevent network congestion. One is to use quality of service (QoS) to
prioritize important packets. QoS can ensure that important packets are not dropped, even if the interface
output queue is full.
Another is to use a congestion avoidance algorithm. Congestion avoidance algorithms slow down the rate at
which packets are sent into the network, which helps keep the interface output queue from becoming full.

This is a simplified picture of how network congestion works. In reality the situation is much more
complex, but it provides a basic understanding of how congestion causes packets to be dropped.

What is Multiprotocol Label Switching (MPLS)?


Multiprotocol Label Switching (MPLS) is a switching mechanism used in wide area
networks (WANs).
MPLS uses labels instead of network addresses to route traffic optimally via shorter
pathways. MPLS is protocol-agnostic and can speed up and shape traffic flows across
WANs and service provider networks. By optimizing traffic, MPLS reduces downtime and
improves speed and quality of service (QoS).

MPLS is a technique used in wide area networks (WANs) to make network traffic faster and more efficient. Instead of
using traditional network addresses, MPLS uses labels to guide traffic through the network along shorter routes.

Think of it like this: Imagine you're driving on a highway with multiple exits. Each exit represents a different network
path. In traditional routing, you would need to check the address on each vehicle (packet) and decide which exit it
should take. This process can be slow and inefficient.

With MPLS, instead of checking the addresses on each packet, we assign labels to packets at the beginning of their
journey. These labels act like signs directing the packets to take a specific route. Once the packets reach the appropriate
exit, the labels are removed, and the packets continue to their destination.

This label-based approach allows traffic to be routed more quickly and efficiently. It can prioritize certain types of traffic,
ensuring that important data or applications get to their destination faster. By optimizing traffic, MPLS reduces
downtime, improves speed, and enhances the quality of service (QoS) for network users.

Overall, MPLS simplifies and speeds up the process of routing traffic in wide area networks, making network
communication more reliable and efficient.
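The label-swapping behaviour can be sketched with a per-router label table. This is a toy illustration of the forwarding plane only (no label distribution protocol); the routers, labels, and hops are all made up.

```python
# Sketch of MPLS label switching: each hop swaps the incoming label
# for an outgoing label from its table, never inspecting the IP
# destination address.
lfib = {   # per-router label table: in-label -> (out-label, next hop)
    "R1": {100: (200, "R2")},
    "R2": {200: (300, "R3")},
    "R3": {300: (None, "dest")},   # None = pop the label (egress router)
}

def switch(label, router, path=None):
    path = [] if path is None else path
    out_label, next_hop = lfib[router][label]
    path.append((router, label, out_label))
    if out_label is None:          # label popped: packet leaves the MPLS domain
        return next_hop, path
    return switch(out_label, next_hop, path)

end, path = switch(100, "R1")
assert end == "dest"
assert [hop[0] for hop in path] == ["R1", "R2", "R3"]
```

This is the "signs on the highway" analogy above in code: once the ingress router assigns label 100, every later hop is a single table lookup on the label, and the egress router removes the label before the packet continues to its destination.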

Routing in MANET using AODV and DSR


In a Mobile Ad hoc Network (MANET), routing refers to how devices (like smartphones or laptops) communicate with
each other by finding the best paths to send data. Two common routing protocols used in MANETs are AODV and DSR.
1. AODV (Ad hoc On-Demand Distance Vector):
- AODV works like a GPS navigation system for devices in a MANET.
- When a device wants to send data to another device, it first checks if it knows the best path to reach that device. If
not, it sends a request to nearby devices, asking for directions.
- The nearby devices forward this request to their own neighbors, creating a chain of requests that eventually reaches
a device that knows the way.
- Once the device with the requested information is found, it sends the directions back to the requesting device.
- These directions include the number of steps (or "hops") needed to reach the destination and the next device to pass
the data to.
- With this information, the device can send its data along the recommended path, hopping from device to device until
it reaches the destination.

2. DSR (Dynamic Source Routing):


- DSR is like passing a message from person to person in a game of telephone.
- When a device wants to send data, it writes down the complete path the data should follow, listing the devices it
needs to pass through.
- The device then sends the data to its nearest neighbor, along with the complete path.
- Each neighbor, upon receiving the data, reads the path and passes the data to the next device mentioned in the path.
- This process continues until the data reaches its destination.
- Along the way, devices can remember the paths they've learned, so if a similar request comes up in the future, they
can provide the path without starting from scratch.

In simple terms, both AODV and DSR help devices find the best routes to send data in a MANET. AODV asks for
directions from nearby devices until it finds the right path, while DSR writes down the complete path and passes it from
device to device. These protocols allow devices to communicate with each other effectively, even when the network
topology is constantly changing.
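The on-demand route discovery described for AODV can be sketched as a flood of route requests spreading hop by hop, modelled here as a breadth-first search. This is a toy illustration, not the AODV protocol itself (no sequence numbers, timers, or route maintenance), and the topology is made up.

```python
from collections import deque

# Toy AODV-style route discovery: flood a route request (RREQ)
# outward from the source; the first chain of hops to reach the
# destination becomes the route.
topology = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def discover(src, dst):
    queue = deque([[src]])       # each entry is the path the RREQ took
    seen = {src}                 # nodes drop duplicate requests
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path          # the route reply would retrace this path
        for nxt in topology[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                  # destination unreachable

route = discover("A", "E")
assert route[0] == "A" and route[-1] == "E"
assert len(route) == 4           # A -> B -> D -> E (3 hops)
```

DSR's behaviour differs mainly in what travels with the data: here the discovered path would be written into each packet's header (source routing), whereas AODV keeps only next-hop entries at each node along the way.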
