
Unit 5


ROUTING

•One of the important functions of the network layer is routing packets
from the source machine to the destination machine.
•To perform this routing, the network layer defines several different
algorithms.
•These routing algorithms play a vital role in the network and are used
to define the route for packets.
•A routing algorithm is the part of the network layer software that
decides to which output line an incoming packet should be
transmitted.
•If the subnet uses a datagram approach, the choice of route has to be
made afresh for each incoming packet.
Desirable Properties of Routing Algorithms

•Correctness -- packets should eventually reach the correct destination.

•Simplicity -- simpler algorithms usually run faster.

•Robustness -- should be able to handle new routers coming online, as
well as others going offline or malfunctioning.

•Stability -- under constant conditions, should converge to some
equilibrium.
CLASSIFICATION OF ROUTING ALGORITHM

1.Based on working.

2.Based on the number of destinations to which a packet is routed --
the transmission technique.
CLASSIFICATION OF ROUTING ALGORITHM

Based on Working

Non-Adaptive Algorithm:

1.Routing decisions are not based on measurement or estimation of
current traffic and topology.
2.The route from source to destination is calculated in advance and
downloaded to the routers when the network is initialized.
3.Also known as a Static Routing Algorithm.
CLASSIFICATION OF ROUTING ALGORITHM

Based on Working

Adaptive Algorithm:

1.Also known as a Dynamic Routing Algorithm.
2.Changes its routing decisions to reflect changes in traffic and
topology.
3.Updates routing information continuously.
CLASSIFICATION OF ROUTING ALGORITHM

Based on Transmission Technique

1.Unicast Routing Algorithm
2.Multicast Routing Algorithm
CLASSIFICATION OF ROUTING ALGORITHM

Based on Transmission Technique

1.Unicast Routing Algorithm

•Unicast means transmission from a single sender to a single receiver.
It is point-to-point communication between sender and receiver. There
are various protocols that operate over unicast, such as TCP and HTTP.

•TCP is the most commonly used unicast protocol. It is a connection-
oriented protocol that relies on acknowledgements from the receiver side.

•HTTP stands for Hyper Text Transfer Protocol. It is an object-oriented
protocol for communication.
CLASSIFICATION OF ROUTING ALGORITHM

Based on Transmission Technique

2.Multicast Routing Algorithm

•Packets sent by a sender are received by a group of receivers.

•The decision on which computers will accept the packet is taken by
looking at the destination address of the packet.
CLASSIFICATION OF ROUTING ALGORITHM

Different Routing Algorithms:

• Optimality principle
• Shortest path routing algorithm
• Flooding
• Distance vector routing
• Link state routing
• Hierarchical Routing
The Optimality Principle

•One can make a general statement about optimal routes without regard
to network topology or traffic. This statement is known as the
optimality principle.

•It states that if router J is on the optimal path from router I to router K,
then the optimal path from J to K also falls along the same route.

•As a direct consequence of the optimality principle, we can see that the
set of optimal routes from all sources to a given destination form a tree
rooted at the destination. Such a tree is called a sink tree. The goal of
all routing algorithms is to discover and use the sink trees for all routers.
Shortest Path Routing Algorithm - Dijkstra's

•In the Shortest Path Routing algorithm, a graph of the subnet is created.

• The idea is to build a graph of the subnet, with each node of the
graph representing a router and each arc of the graph representing a
communication line or link.

•To choose a route between a given pair of routers, the algorithm just
finds the shortest path between them on the graph.
Shortest path routing algorithm

1. Start with the local node (router) as the root of the tree. Assign a
cost of 0 to this node and make it the first permanent node.

2. Examine each neighbor of the node that was the last permanent
node.

3. Assign a cumulative cost to each node and make it tentative.

4. Among the list of tentative nodes:
a. Find the node with the smallest cost and make it permanent.
b. If a node can be reached from more than one route, then select
the route with the smallest cumulative cost.

5. Repeat steps 2 to 4 until every node becomes permanent.
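The steps above are essentially Dijkstra's algorithm. A minimal Python
sketch, assuming the subnet graph is given as an adjacency dictionary of
link costs (the example graph is hypothetical):

import heapq

def dijkstra(graph, source):
    """Shortest-path computation from `source` over the subnet graph.

    graph: dict mapping node -> {neighbor: link cost}.
    Returns (cumulative cost, previous hop) tables for reachable nodes.
    """
    cost = {source: 0}            # cumulative cost of the best known route
    prev = {source: None}         # previous node on that route
    permanent = set()
    tentative = [(0, source)]     # priority queue of (cost, node)

    while tentative:
        c, node = heapq.heappop(tentative)   # step 4a: smallest-cost tentative node
        if node in permanent:
            continue
        permanent.add(node)                  # make it permanent
        for neighbor, link_cost in graph[node].items():
            new_cost = c + link_cost         # step 3: cumulative cost via this node
            if neighbor not in cost or new_cost < cost[neighbor]:
                cost[neighbor] = new_cost    # step 4b: keep the cheaper route
                prev[neighbor] = node
                heapq.heappush(tentative, (new_cost, neighbor))
    return cost, prev

# Example on a small hypothetical subnet
graph = {
    'A': {'B': 2, 'C': 5},
    'B': {'A': 2, 'C': 1, 'D': 4},
    'C': {'A': 5, 'B': 1, 'D': 2},
    'D': {'B': 4, 'C': 2},
}
print(dijkstra(graph, 'A'))   # costs: A=0, B=2, C=3, D=5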


Shortest path routing algorithm

(A sequence of figure slides works the shortest path computation step by
step on an example subnet; figures not reproduced.)
FLOODING

•Another static algorithm is flooding, in which every incoming packet is
sent out on every outgoing line except the one it arrived on.

•Flooding obviously generates vast numbers of duplicate packets; in
fact, an infinite number unless some measures are taken to damp the
process.

•One such measure is to have a hop counter contained in the header of
each packet, which is decremented at each hop, with the packet being
discarded when the counter reaches zero.

•Ideally, the hop counter should be initialized to the length of the path
from source to destination.
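A minimal sketch of flooding with a hop counter, assuming each router
object simply knows its neighbors (the Router and Packet classes are
illustrative, not part of any real protocol stack):

class Packet:
    def __init__(self, data, hops_left):
        self.data = data
        self.hops_left = hops_left   # ideally the source-to-destination path length

class Router:
    def __init__(self, name):
        self.name = name
        self.neighbors = {}          # neighbor name -> Router (one per outgoing line)

    def connect(self, other):
        self.neighbors[other.name] = other
        other.neighbors[self.name] = self

    def flood(self, packet, arrived_from=None):
        """Send the packet on every line except the one it arrived on."""
        if packet.hops_left == 0:    # hop counter exhausted: discard to damp flooding
            return
        for name, neighbor in self.neighbors.items():
            if name == arrived_from:
                continue             # never send back on the arrival line
            neighbor.flood(Packet(packet.data, packet.hops_left - 1),
                           arrived_from=self.name)

a, b, c = Router('A'), Router('B'), Router('C')
a.connect(b); b.connect(c); a.connect(c)
a.flood(Packet("hello", hops_left=3))   # duplicates die out after three hops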
FLOODING

•A variation of flooding that is slightly more practical is selective
flooding.

•In this algorithm the routers do not send every incoming packet out on
every line, only on those lines that are going approximately in the right
direction.

•Flooding is not practical in most applications.


Distance Vector Routing

•In distance vector routing, the least-cost route between any two nodes
is the route with minimum distance.
•In this protocol, as the name implies, each node maintains a vector
(table) of minimum distances to every node.
•There are mainly three phases:
- Initialization
- Sharing
- Updating
Distance Vector Routing

Initialization
•Each node can know only the distance between itself and its immediate
neighbors, those directly connected to it.

•So for the moment, we assume that each node can send a message to
the immediate neighbors and find the distance between itself and these
neighbors.

•The figure below shows the initial table for each node. The distance for
any entry that is not a neighbor is marked as infinite (unreachable).
Initialization Of Tables In Distance Vector Routing

Initialization
Sharing Of Tables In Distance Vector Routing

•The whole idea of distance vector routing is the sharing of information
between neighbors.
•Although node A does not know about node E, node C does. So if node
C shares its routing table with A, node A can also know how to reach
node E.
•On the other hand, node C does not know how to reach node D, but
node A does.
•If node A shares its routing table with node C, node C also knows how
to reach node D.
• In other words, nodes A and C, as immediate neighbors, can improve
their routing tables if they help each other.
Updating Of Tables In Distance Vector Routing

•When a node receives a two-column table from a neighbor, it needs to
update its routing table.

•Updating takes three steps:

1. The receiving node adds the cost between itself and the sending
node to each value in the second column of the received table (x + y).
2. If the receiving node uses information from any row of the received
table, the sending node becomes the next node in that route.
Updating Of Tables In Distance Vector Routing

3. The receiving node compares each row of its old table with the
corresponding row of the modified version of the received table.
a. If the next-node entry is different, the receiving node chooses
the row with the smaller cost. If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses
the new row.
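A minimal sketch of this update, assuming each table is a dict mapping
destination -> (cost, next hop) and INF marks an unreachable entry (node
names and costs are illustrative):

INF = float('inf')

def dv_update(my_table, received, sender, cost_to_sender):
    """Merge a neighbor's distance vector into this node's routing table."""
    updated = dict(my_table)
    for dest, (their_cost, _) in received.items():
        new_cost = cost_to_sender + their_cost        # step 1: add x + y
        old_cost, old_next = updated.get(dest, (INF, None))
        if old_next == sender:
            updated[dest] = (new_cost, sender)        # step 3b: same next node, take the new row
        elif new_cost < old_cost:
            updated[dest] = (new_cost, sender)        # steps 2 and 3a: smaller cost wins,
                                                      # sender becomes the next node
    return updated

# Node A (cost 2 to its neighbor C) merges C's vector and learns a route to E
a_table = {'A': (0, None), 'C': (2, 'C'), 'D': (3, 'D')}
c_vector = {'A': (2, 'A'), 'C': (0, None), 'E': (4, 'E')}
print(dv_update(a_table, c_vector, sender='C', cost_to_sender=2))
# -> E is now reachable via C with cost 6; ties would keep the old row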
Updating Of Tables In Distance Vector Routing
Final Routing Table
Hierarchical Routing

•As networks grow in size, the routing tables grow proportionally.

•Not only is router memory consumed by ever-increasing tables, but
more CPU time is needed to scan them and more bandwidth is needed
to send status reports about them.

•At a certain point, the network may grow to the point where it is no
longer feasible for every router to have an entry for every other router,
so the routing will have to be done hierarchically, as it is in the
telephone network.
Hierarchical Routing

•When hierarchical routing is used, the routers are divided into what we
will call regions.

•Each router knows all the details about how to route packets to
destinations within its own region but knows nothing about the internal
structure of other regions.

•For huge networks, a two-level hierarchy may be insufficient; it may
be necessary to group the regions into clusters, the clusters into zones,
the zones into groups, and so on, until we run out of names for
aggregations.
Hierarchical Routing
Hierarchical Routing

1. Level 1 – Region.
2. Level 2 – Cluster: a collection of regions.
3. Level 3 – Zone: a collection of clusters.
4. Level 4 – Group: a collection of zones.
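A rough sketch of why hierarchy shrinks the tables, assuming (purely for
illustration) that the network splits evenly into regions of equal size: a
router then keeps one entry per router in its own region plus one entry
per other region, instead of one entry per router in the whole network.

def flat_table_entries(total_routers):
    # every router keeps an entry for every other router
    return total_routers - 1

def two_level_entries(total_routers, region_size):
    # entries for the local region plus one entry per remote region
    regions = total_routers // region_size
    return (region_size - 1) + (regions - 1)

print(flat_table_entries(1000))        # 999 entries per router without hierarchy
print(two_level_entries(1000, 50))     # 49 local + 19 remote = 68 entries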
Link State Routing

•Link state routing is based on the assumption that, although the global
knowledge about the topology is not clear, each node has partial
knowledge: it knows the state (type, condition, and cost) of its links.

•In other words, the whole topology can be compiled from the partial
knowledge of each node.
Link State Routing
Building Routing Tables

1. It is a dynamic / adaptive routing algorithm.

2. Each router must discover its neighbors and obtain their network
addresses.
3. It should measure the delay or cost to each of its neighbors.
4. It should construct a packet containing the network address and the
delays of all its neighbors.
5. This packet is sent to all other routers.
6. Each router then computes the shortest path to every other router.
Building Routing Tables

•Neighbors are discovered with HELLO packets; an ECHO packet and its
ECHO REPLY measure the round-trip time (RTT) to each neighbor, from
which the average delay is estimated.
•Each router then builds a packet containing all this data, called a
Link State Packet.
•This packet contains four fields: ID of the sender, destination network,
ID of the neighboring router, and cost.
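A minimal sketch of a link state packet entry with the four fields listed
above (the field names and values are illustrative; a full packet would
typically carry one such entry per neighbor):

from dataclasses import dataclass

@dataclass
class LinkStateEntry:
    sender_id: str            # ID of the router that built the packet
    destination_network: str  # destination network being advertised
    neighbor_id: str          # ID of the neighboring router
    cost: int                 # measured delay/cost (e.g. from the ECHO round trip)

entry = LinkStateEntry(sender_id='B', destination_network='Net-1',
                       neighbor_id='C', cost=2)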
Routing Table

•A routing table is a set of rules, often viewed in table format, that is
used to determine where data packets traveling over an Internet
Protocol (IP) network will be directed.

•All IP-enabled devices, including routers and switches, use routing
tables.
Routing Table

•A routing table contains the information necessary to forward a
packet along the best path toward its destination.

•Each packet contains information about its origin and destination.

•When a packet is received, a network device examines the packet and
matches it to the routing table entry providing the best match for its
destination.

•The table then provides the device with instructions for sending the
packet to the next hop on its route across the network.
Routing Table

A basic routing table includes the following information:

1. Destination: the IP address of the packet's final destination.
2. Next hop: the IP address to which the packet is forwarded.
3. Interface: the outgoing network interface the device should use
when forwarding the packet to the next hop or final destination.
4. Metric: assigns a cost to each available route so that the most cost-
effective path can be chosen.
5. Routes: includes directly attached subnets, indirect subnets that are
not attached to the device but can be accessed through one or more
hops, and default routes to use for certain types of traffic or when
information is lacking.
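A minimal sketch of such a table and a lookup, assuming longest-prefix
matching selects the "best match" and using Python's standard ipaddress
module (all addresses, interfaces, and metrics are made up):

import ipaddress

# Each entry: (destination prefix, next hop, interface, metric)
routing_table = [
    ("192.168.1.0/24", "0.0.0.0",       "eth0", 1),   # directly attached subnet
    ("10.0.0.0/8",     "192.168.1.1",   "eth0", 5),   # indirect subnet, one hop away
    ("0.0.0.0/0",      "192.168.1.254", "eth1", 10),  # default route
]

def lookup(dest_ip):
    """Return the routing table entry that best matches the destination."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [e for e in routing_table
               if dest in ipaddress.ip_network(e[0])]
    # prefer the most specific prefix; break ties with the lowest metric
    return max(matches, key=lambda e: (ipaddress.ip_network(e[0]).prefixlen, -e[3]))

print(lookup("10.1.2.3"))   # matches the 10.0.0.0/8 entry
print(lookup("8.8.8.8"))    # only the default route matches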
CONGESTION CONTROL

•Congestion in a network may occur if the load on the network (the
number of packets sent to the network) is greater than the capacity of
the network (the number of packets a network can handle).

•Congestion control refers to the mechanisms and techniques used to
control the congestion and keep the load below the capacity.

•When too many packets are pumped into the system, congestion
occurs, leading to degradation of performance.
Difference Between Congestion Control And Flow
Control

•Congestion control has to do with making sure the network is able to
carry the offered traffic. It is a global issue, involving the behavior of all
the hosts and routers.

•Flow control, in contrast, relates to the traffic between a particular
sender and a particular receiver. Its job is to make sure that a fast
sender cannot continually transmit data faster than the receiver is able
to absorb it.
Congestion Control

•In general, we can divide congestion control mechanisms into two
broad categories:

•open-loop congestion control (prevention), and

•closed-loop congestion control (removal).
Congestion Control
Open Loop Congestion Control:

•In open-loop congestion control, policies are applied to prevent
congestion before it happens.

•In these mechanisms, congestion control is handled by either the
source or the destination.
Open Loop Congestion Control:

Retransmission Policy:

1.This is the policy governing how retransmission of packets is
handled.

2.If the sender feels that a sent packet is lost or corrupted, the packet
needs to be retransmitted. This retransmission may increase the
congestion in the network.

3.To prevent congestion, retransmission timers must be designed to
prevent congestion while also optimizing efficiency.
Open Loop Congestion Control:

Window Policy:
•The type of window at the sender may also affect congestion.

•The Selective Repeat window is better than the Go-Back-N window
for congestion control.

•In the Go-Back-N window, when the timer for a packet times out,
several packets may be resent, although some may have arrived safe and
sound at the receiver.

•This duplication may make the congestion worse. The Selective Repeat
window, on the other hand, tries to resend only the specific packets that
have been lost or corrupted.
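A tiny sketch of how many packets each policy puts back on the network
when one packet in the outstanding window is lost (the window contents
are purely illustrative):

def go_back_n_resend(window, lost_index):
    """Go-Back-N: resend the lost packet and everything sent after it."""
    return window[lost_index:]

def selective_repeat_resend(window, lost_index):
    """Selective Repeat: resend only the lost packet."""
    return [window[lost_index]]

window = [1, 2, 3, 4, 5, 6, 7]               # sequence numbers currently outstanding
print(go_back_n_resend(window, 2))           # [3, 4, 5, 6, 7]: five packets re-enter the network
print(selective_repeat_resend(window, 2))    # [3]: only the lost packet is resent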
Open Loop Congestion Control:

Acknowledgment Policy :

•The acknowledgment policy imposed by the receiver may also affect
congestion.
•If the receiver does not acknowledge every packet it receives, it may
slow down the sender and help prevent congestion.
•Several approaches are used in this case.
•A receiver may send an acknowledgment only if it has a packet to be
sent or a special timer expires.
•A receiver may decide to acknowledge only N packets at a time (for
example, a single ACK 5 acknowledging packets 1 to 5). We need to
remember that acknowledgments are also part of the load in a network.
•Sending fewer acknowledgments means imposing less load on the
network.
Open Loop Congestion Control:

Discarding Policy :

•A good discarding policy adopted by the routers may prevent
congestion and at the same time partially discard corrupted or less
sensitive packets while still maintaining the quality of a message.

•In the case of audio file transmission, routers can discard less sensitive
packets to prevent congestion and still maintain the quality of the
audio file.
Open Loop Congestion Control:

Admission Policy :

•An admission policy, which is a quality-of-service mechanism, can also
prevent congestion in virtual-circuit networks.

•Switches in a flow first check the resource requirements of a flow
before admitting it to the network.

•A router can deny establishing a virtual-circuit connection if there is
congestion in the network or if there is a possibility of future
congestion.
Closed Loop Congestion Control:

Closed-loop congestion control mechanisms try to alleviate congestion
after it happens. Several mechanisms have been used by different
protocols.
Closed Loop Congestion Control: Back-pressure

•The technique of backpressure refers to a congestion control
mechanism in which a congested node stops receiving data from the
immediate upstream node or nodes.

•This may cause the upstream node or nodes to become congested, and
they, in turn, reject data from their own upstream nodes, and so on.

•Backpressure is a node-to-node congestion control that starts with a
node and propagates, in the opposite direction of data flow, to the
source.

•The backpressure technique can be applied only to virtual-circuit
networks, in which each node knows the upstream node from which a
flow of data is coming.
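A minimal sketch of how pressure might propagate back along a
virtual-circuit path toward the source, assuming each node knows its
upstream neighbor and is considered congested when its queue exceeds a
threshold (the path, queue lengths, and threshold are all illustrative):

def apply_backpressure(path, queue_len, threshold=10):
    """path: node names from source to destination.
    queue_len: dict node -> current queue occupancy.
    A congested node asks its upstream neighbor to stop sending; the
    request keeps moving toward the source while nodes stay congested."""
    throttled = []
    for i in range(len(path) - 1, 0, -1):        # walk opposite to the data flow
        node, upstream = path[i], path[i - 1]
        if queue_len[node] > threshold:
            throttled.append((upstream, node))   # upstream must stop sending to node
            queue_len[upstream] += 5             # traffic now backs up upstream
        else:
            break                                # this node can absorb it: pressure stops
    return throttled

path = ['source', 'R1', 'R2', 'R3']
print(apply_backpressure(path, {'source': 0, 'R1': 3, 'R2': 12, 'R3': 15}))
# -> [('R2', 'R3'), ('R1', 'R2')]: pressure moves from R3 back toward the source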
Closed Loop Congestion Control: Back-pressure
Closed Loop Congestion Control: Choke Packet

•A choke packet is a packet sent by a node to the source to inform it of
congestion.

•Note the difference between the backpressure and choke packet
methods.

•In backpressure, the warning is from one node to its upstream node,
although the warning may eventually reach the source station.

•In the choke packet method, the warning is from the router that has
encountered congestion to the source station directly. The intermediate
nodes through which the packet has traveled are not warned.
Closed Loop Congestion Control: Choke Packet
Closed Loop Congestion Control: Implicit Signaling:

Implicit Signaling:

•In implicit signaling, there is no communication between the congested
node or nodes and the source.

•The source guesses that there is congestion somewhere in the network
from other symptoms.

•For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is
congested.

•The delay in receiving an acknowledgment is interpreted as congestion
in the network; the source should slow down.
Closed Loop Congestion Control: Explicit Signaling:

Explicit Signaling:

•The node that experiences congestion can explicitly send a signal to
the source or destination.

•The explicit signaling method, however, is different from the choke
packet method.

•In the choke packet method, a separate packet is used for this
purpose; in the explicit signaling method, the signal is included in the
packets that carry data.

•Explicit signaling, as we will see in Frame Relay congestion control, can
occur in either the forward or the backward direction.
Closed Loop Congestion Control: Explicit Signaling:

Explicit Signaling:

Backward Signaling:
•A bit can be set in a packet moving in the direction opposite to the
congestion.
•This bit can warn the source that there is congestion and that it needs to
slow down to avoid the discarding of packets.

Forward Signaling:
•A bit can be set in a packet moving in the direction of the congestion.
This bit can warn the destination that there is congestion.
•The receiver in this case can use policies, such as slowing down the
acknowledgments, to alleviate the congestion.
Traffic Shaping

•Traffic shaping is a mechanism to control the amount and the rate of
the traffic sent to the network.
•Another method of congestion control is to “shape” the traffic before it
enters the network.
•Traffic shaping controls the rate at which packets are sent (not just
how many).
•At connection set-up time, the sender and carrier negotiate a traffic
pattern (shape).
•Two techniques can shape traffic: leaky bucket and token bucket.
Traffic Shaping - Leaky Bucket

Leaky Bucket Algorithm:

•It is used to control the rate of traffic in a network.

•It is implemented as a single-server queue with constant service time.

•If the bucket (buffer) overflows, then packets are discarded.

Traffic Shaping - Leaky Bucket

Leaky Bucket Algorithm:

•Suppose we have a bucket into which we are pouring water at a random
rate, but we need water to come out at a fixed rate; for this we make a
hole at the bottom of the bucket.

•This ensures that the water coming out is at some fixed rate, and if the
bucket is full we stop pouring into it.

•The input rate can vary, but the output rate remains constant.
Similarly, in networking, a technique called leaky bucket can smooth
out bursty traffic. Bursty chunks are stored in the bucket and sent out
at an average rate.
Traffic Shaping - Leaky Bucket

(a) A leaky bucket with water.
(b) A leaky bucket with packets.
Traffic Shaping - Leaky Bucket

•A simple leaky bucket algorithm can be implemented using a FIFO
queue.

•A FIFO queue holds the packets.

•If the traffic consists of fixed-size packets, the process removes a fixed
number of packets from the queue at each tick of the clock.

•If the traffic consists of variable-length packets, the fixed output rate
must be based on the number of bytes or bits.
Traffic Shaping - Leaky Bucket
Traffic Shaping - Leaky Bucket

The following is an algorithm for variable-length packets:

1.Initialize a counter to n at the tick of the clock.

2.If n is greater than the size of the packet, send the packet and
decrement the counter by the packet size. Repeat this step until n is
smaller than the packet size.

3.Reset the counter and go to step 1.
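A minimal sketch of these three steps, assuming packet sizes are in bytes
and n is the number of bytes allowed per clock tick (the queue contents
are illustrative):

from collections import deque

def leaky_bucket_tick(queue, n):
    """One clock tick of the variable-length leaky bucket.

    queue: deque of packet sizes, front of the queue at the left.
    n: bytes that may be sent during this tick."""
    sent = []
    counter = n                              # step 1: initialize the counter to n
    while queue and counter >= queue[0]:     # step 2: counter still covers the front packet
        size = queue.popleft()
        counter -= size                      # send it and decrement the counter
        sent.append(size)
    return sent                              # step 3: the counter is reset on the next tick

queue = deque([200, 400, 450, 600])
print(leaky_bucket_tick(queue, 1000))   # [200, 400]: 450 exceeds the remaining 400
print(leaky_bucket_tick(queue, 1000))   # [450]: 600 exceeds the remaining 550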


Traffic Shaping - Leaky Bucket

Example – Let n = 1000. (The queue of packets comes from a figure not
reproduced here; the packet at the front has size 200.)

Since n > front of queue, i.e. n > 200:
n = 1000 - 200 = 800, and the packet of size 200 is sent to the network.
Traffic Shaping - Leaky Bucket

Now again n > front of the queue, i.e. n > 400:
n = 800 - 400 = 400, and the packet of size 400 is sent to the network.

Now n < front of the queue, so the procedure stops.
n is re-initialized to 1000 on the next tick of the clock.
This procedure is repeated until all the packets are sent to the network.
Traffic Shaping - Token Bucket

•The leaky bucket algorithm enforces a rigid output pattern at the
average rate, no matter how bursty the traffic is.

•So, in order to deal with bursty traffic, we need a more flexible
algorithm so that data is not lost. One such algorithm is the token
bucket algorithm.
Traffic Shaping - Token Bucket

1. In contrast to the leaky bucket, the token bucket algorithm allows the
output rate to vary, depending on the size of the burst.

2. In the token bucket algorithm, the bucket holds tokens. To transmit a
packet, the host must capture and destroy one token.

3. Tokens are generated by a clock at the rate of one token every t sec.

4. Idle hosts can capture and save up tokens (up to the maximum size of
the bucket) in order to send larger bursts later.
Traffic Shaping - Token Bucket

Steps of this algorithm can be described as follows:

1.At regular intervals, tokens are thrown into the bucket.
2.The bucket has a maximum capacity.
3.If there is a ready packet, a token is removed from the bucket, and
the packet is sent.
4.If there is no token in the bucket, the packet cannot be sent.
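A minimal sketch of these steps, with one token added per clock tick and a
hypothetical bucket capacity (all numbers are illustrative):

class TokenBucket:
    def __init__(self, capacity):
        self.capacity = capacity     # maximum number of tokens the bucket can hold
        self.tokens = 0

    def tick(self):
        """Step 1: a token is thrown into the bucket at each regular interval,
        but never beyond the bucket's maximum capacity (step 2)."""
        self.tokens = min(self.tokens + 1, self.capacity)

    def try_send(self, packet):
        """Steps 3 and 4: send only if a token can be captured and destroyed."""
        if self.tokens > 0:
            self.tokens -= 1
            return True              # packet is sent
        return False                 # no token: the packet must wait

bucket = TokenBucket(capacity=5)
for _ in range(8):                   # an idle host saves up tokens (at most 5)
    bucket.tick()
burst = ["pkt%d" % i for i in range(7)]
sent = [p for p in burst if bucket.try_send(p)]
print(sent)                          # the first 5 packets go out as a burst; 2 must wait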
Traffic Shaping - Token Bucket

(Figure slides showing the token bucket before and after a burst; figures
not reproduced.)
Leaky Bucket vs. Token Bucket
Some advantage of token Bucket over leaky bucket

1. If the bucket is full, the token bucket discards tokens, not packets,
while the leaky bucket discards packets.

2. The token bucket can send large bursts at a faster rate, while the
leaky bucket always sends packets at a constant rate.
