TECHNIQUE DE COMMUTATION, ITT2 A AND B


CHAPTER 1
INTRODUCTION TO NETWORK SWITCHES

A network switch is a hardware device that channels incoming data from multiple input ports
to a specific output port that will take it toward its intended destination. It is a small device
that transfers data packets between multiple network devices such as computers, routers,
servers or other switches.

In a local area network (LAN) using Ethernet, a network switch determines where to send
each incoming message frame by looking at the physical device address (also known as the
Media Access Control address or MAC address). Switches maintain tables that match each
MAC address to the port on which that MAC address was learned. In practice, the concept of a
LAN is made possible by Ethernet.
The main difference between Ethernet and a LAN is that Ethernet's operation is decentralized,
whereas a LAN's operation is centralized. Another difference is that Ethernet transmission is
subject to limitations, while the LAN concept itself imposes no such limitation. A network
switch operates at the data link layer, Layer 2 of the OSI model.

Network device layers

Network devices can be separated by the layer they operate on, as defined by the OSI model.
The OSI model conceptualizes networks by separating protocols into layers. Control is typically
passed from one layer to the next. Some layers include:

 Layer 1 - the physical layer or below, which can transfer data but cannot manage
the traffic coming through it. Examples include Ethernet hubs and cables.
 Layer 2 - the data link layer, which uses hardware addresses to receive and
forward data. A network switch is an example of a Layer 2 device.
 Layer 3 - the network layer, which performs functions similar to a router and also
supports multiple kinds of physical networks on different ports. Examples include
routers and Layer 3 switches.

Other layers include layer 4 (the transport layer), layer 5 (the session layer), layer 6 (the
presentation layer) and layer 7 (the application layer).
SWITCHING TECHNOLOGY IN LAN

Identify basic switching concepts and the operation of switches

 Introduction

The role of a LAN switch is to forward Ethernet frames. To achieve that goal,
switches use logic—logic based on the source and destination MAC address in each
frame’s Ethernet header.

LAN switches receive Ethernet frames and then make a switching decision: either
forward the frame out some other port(s) or ignore the frame. To accomplish this
primary mission, transparent bridges perform three actions:

• Deciding when to forward a frame or when to filter (not forward) a frame, based on
the destination MAC address

• Learning MAC addresses by examining the source MAC address of each frame
received by the switch.

• Creating a (Layer 2) loop-free environment with other bridges by using Spanning
Tree Protocol (STP)
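
To make the learning and forwarding/filtering logic concrete, here is a minimal, hypothetical Python sketch of a switch's MAC address table. The class, port numbers and frame fields are invented for illustration; this is not a real switch implementation.

```python
# Minimal sketch of transparent-bridge learning and forwarding logic.
# Frame fields and class names are illustrative only.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = ports              # e.g. [1, 2, 3]
        self.mac_table = {}             # MAC address -> port it was learned on

    def receive(self, frame, in_port):
        # Learn: remember which port the source MAC was seen on.
        self.mac_table[frame["src_mac"]] = in_port

        dst = frame["dst_mac"]
        if dst in self.mac_table:
            out_port = self.mac_table[dst]
            if out_port == in_port:
                return []               # filter: destination is on the same segment
            return [out_port]           # forward out the known port
        # Unknown destination (or broadcast): flood out every other port.
        return [p for p in self.ports if p != in_port]


if __name__ == "__main__":
    sw = LearningSwitch(ports=[1, 2, 3])
    print(sw.receive({"src_mac": "AA", "dst_mac": "BB"}, in_port=1))  # flood -> [2, 3]
    print(sw.receive({"src_mac": "BB", "dst_mac": "AA"}, in_port=2))  # known  -> [1]
```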

HOW A NETWORK SWITCH WORKS/WAYS TO SWITCH

LAN switches are characterized by the forwarding method that they support, such as a
store-and-forward switch, cut-through switch, or fragment-free switch. In the store-
and-forward switching method, error checking is performed against the frame, and any
frame with errors is discarded. With the cut-through switching method, no error
checking is performed against the frame, which makes forwarding the frame through
the switch faster than store-and-forward switches.

Switch Internal processing


STORE AND FORWARD SWITCHING

Store-and-forward switching means that the LAN switch copies each complete frame
into the switch memory buffers and computes a cyclic redundancy check (CRC) to detect
errors. CRC is an error-checking method that uses a mathematical formula, applied to
the bits of the frame, to determine whether the received frame is errored.

Store-and-Forward Switching Operation

Store-and-forward switches store the entire frame in internal memory and check the
frame for errors before forwarding the frame to its destination. Store-and-forward
switch operation ensures a high level of error-free network traffic, because bad data
frames are discarded rather than forwarded across the network.

Errors before Forwarding to Destination Network Segment 

The store-and-forward switch inspects each received frame for errors before
forwarding it on to the frame's destination network segment. If a frame fails this
inspection, the switch drops the frame from its buffers, and the frame is thrown into
the proverbial bit bucket.
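
As a rough sketch of the store-and-forward decision described above, the following Python fragment buffers a whole "frame", recomputes a CRC-32 over it with the standard zlib module, and drops the frame on a mismatch. The frame layout used here (payload followed by a 4-byte CRC trailer) is a simplification for illustration, not the exact Ethernet frame format.

```python
import zlib

# Sketch of a store-and-forward decision: buffer the whole frame, verify its
# CRC, and only then forward it.  The layout is simplified: payload bytes
# followed by a 4-byte CRC-32 trailer.

def store_and_forward(frame: bytes) -> bool:
    payload, received_fcs = frame[:-4], frame[-4:]
    computed_fcs = zlib.crc32(payload).to_bytes(4, "big")
    if computed_fcs != received_fcs:
        return False          # drop: frame is errored
    return True               # forward to the destination port


if __name__ == "__main__":
    data = b"hello switch"
    good = data + zlib.crc32(data).to_bytes(4, "big")
    bad = b"hellp switch" + zlib.crc32(data).to_bytes(4, "big")
    print(store_and_forward(good))  # True  -> forwarded
    print(store_and_forward(bad))   # False -> discarded
```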

A drawback to the store-and-forward switching method is performance: the switch has to
store the entire data frame before checking it for errors and forwarding it. This error
checking results in high switch latency (delay). If multiple switches are connected, with
the data being checked at each switch point, total network performance can suffer as a
result. Another drawback to store-and-forward switching is that the switch requires more
memory and processor (central processing unit, CPU) cycles to perform the detailed
inspection of each frame than cut-through or fragment-free switching.

 
CUT THROUGH SWITCHING

With cut-through switching, the LAN switch copies into its memory only the
destination MAC address, which is located in the first 6 bytes of the frame following
the preamble. The switch looks up the destination MAC address in its switching table,
determines the outgoing interface port, and forwards the frame on to its destination
through the designated switch port. A cut-through switch reduces delay because the
switch begins to forward the frame as soon as it reads the destination MAC address
and determines the outgoing switch port.

The cut-through switch inspects each received frame's header to determine the
destination before forwarding it on to the frame's destination network segment. Frames
with and without errors are forwarded in cut-through switching operations, leaving the
error detection of the frame to the intended recipient. If the recipient determines the
frame is errored, the frame is thrown into the bit bucket and discarded.

Before Forwarding to Destination Network Segment

Cut-through switching was developed to reduce the delay in the switch processing
frames as they arrive at the switch and are forwarded on to the destination switch port.
The switch pulls the frame header into its port buffer. When the destination MAC
address is determined by the switch, the switch forwards the frame out the correct
interface port to the frame's intended destination.

Cut-through switching reduces latency inside the switch. If the frame was
corrupted in transit, however, the switch still forwards the bad frame. The destination
receives this bad frame, checks the frame's CRC, and discards it, forcing the source to
resend the frame. This process wastes bandwidth and, if it occurs too often, network
users experience a significant slowdown on the network. In contrast, store-and-
forward switching prevents errored frames from being forwarded across the network
and provides for quality of service (QoS) managing network traffic flow.

 Cut-Through Switching Operation

Cut-through switches do not perform any error checking of the frame because the
switch looks only for the frame's destination MAC address and forwards the frame out
the appropriate switch port. Cut-through switching results in low switch latency. The
drawback, however, is that bad data frames, as well as good frames, are sent to their
destinations. At first blush, this might not sound bad because most network cards do
their own frame checking by default to ensure good data is received. You might find
that if your network is broken down into workgroups, the likelihood of bad frames or
collisions might be minimized, in turn making cut-through switching a good choice for
your network. 

• Switches that use cut-through forwarding start sending a frame immediately after
reading the destination MAC address into their buffers.

• The main benefit of cut-through forwarding is a reduction in latency.

• The drawback is the potential for errors in the frame that the switch would be unable
to detect, because the switch only reads a small portion of the frame into its buffer.

FRAGMENT-FREE SWITCHING

Fragment-free switching is also known as runtless switching and is a hybrid of cut-through
and store-and-forward switching. Fragment-free switching was developed to solve the
late-collision problem.

 Fragment-Free Forwarding:

• Fragment-free forwarding represents an effort to provide more error-reducing benefits
than cut-through switching, while keeping latency lower than store-and-forward switching.

• A fragment-free switch reads the first 64 bytes of an Ethernet frame and then begins
forwarding it to the appropriate port(s).
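
As a small, hypothetical illustration of how the three forwarding methods differ, the sketch below reports how many bytes of a frame each method must examine before it may begin forwarding, using the figures given in the text: the 6-byte destination MAC for cut-through, the first 64 bytes for fragment-free, and the whole frame for store-and-forward.

```python
# Illustrative comparison of the three forwarding methods described above.
# Each function returns how many bytes of the frame are inspected before the
# switch may begin forwarding; numbers follow the text, not any vendor spec.

def cut_through_inspected(frame: bytes) -> int:
    return 6                    # destination MAC only

def fragment_free_inspected(frame: bytes) -> int:
    return min(64, len(frame))  # the collision window

def store_and_forward_inspected(frame: bytes) -> int:
    return len(frame)           # the entire frame, so the CRC can be checked


if __name__ == "__main__":
    frame = bytes(1500)         # a full-size Ethernet payload, for illustration
    for method in (cut_through_inspected, fragment_free_inspected,
                   store_and_forward_inspected):
        print(f"{method.__name__}: {method(frame)} bytes")
```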

 
 

Fundamental concepts of a networking switch.

Switches, physical and virtual, comprise the vast majority of network devices in modern data
networks. They provide the wired connections to desktop computers, wireless access points,
industrial machinery and some internet of things (IoT) devices such as card entry systems.
They interconnect the computers that host virtual machines (VMs) in data centers, as well as
the dedicated physical servers, and much of the storage infrastructure. They carry vast
amounts of traffic in telecommunications provider networks.

A network switch can be deployed in the following ways:

 Edge, or access switches: These switches manage traffic either coming into or
exiting the network. Devices like computers and access points connect to edge
switches.
 Aggregation, or distribution switches: These switches are placed within an optional
middle layer. Edge switches connect into these and they can send traffic from
switch to switch or send it up to core switches.
 Core switches: These network switches comprise the backbone of the network,
connecting either aggregation or edge switches, connecting user or device edge
networks to data center networks and, typically, connecting enterprise LANs to the
routers that connect them to the internet.

If a frame is destined for a MAC address unknown to the switch infrastructure, it is flooded
to all ports in the switching domain. Broadcast and multicast frames are also flooded. This is
known as BUM flooding -- broadcast, unknown unicast, and multicast flooding. This
capability makes a switch a Layer 2, or data-link layer, device in the Open Systems
Interconnection (OSI) communications model.

Types of networking switches

There are several types of switches in networking in addition to physical devices:

 Virtual switches are software-only switches instantiated inside VM hosting
environments.
 A routing switch connects LANs; in addition to doing MAC-based Layer 2
switching it can also perform routing functions at OSI Layer 3 (the network layer)
directing traffic based on the Internet Protocol (IP) address in each packet.

 Unmanaged switches –
These are the switches mostly used in home networks and small businesses: they can be
plugged in and instantly start doing their job, and they do not need to be watched or
configured. They require only simple cable connections. An unmanaged switch allows
devices on a network to connect with each other, such as a computer to a computer or a
computer to a printer, in one location. They are the least expensive switches among all
the categories.
1. Managed switches –
These types of switches offer many features, such as the highest levels of security, precision
control and full management of the network. They are used in organisations with large
networks and can be customized to enhance the functionality of a certain network. They are
the most costly option, but their scalability makes them an ideal option for a network that is
growing. They are managed using the Simple Network Management Protocol (SNMP).
They are of two types:

 (I) Smart switches:
These switches offer basic management features with the ability to create some levels of
security, but have a simpler management interface than the other managed switches. Thus
they are often called partially managed switches. They are mostly used in fast and constant
LANs which support gigabit data transfer and allocations. They can accept configuration of
VLANs (Virtual LANs).
 (II) Enterprise managed switches:
They have features such as the ability to fix, copy, transform and display different network
configurations, along with a web interface, SNMP agent and command-line interface. They
are also known as fully managed switches and are more expensive than smart switches, as
they have more features that can be enhanced. They are used in organisations that contain a
large number of ports, switches and nodes.
2. LAN switches –
These are also known as Ethernet switches or data switches and are used to reduce network
congestion or bottlenecks by distributing a packet of data only to its intended recipient. They
are used to connect points on a LAN.
3. PoE switches –
PoE switches are used in PoE (Power over Ethernet) technology, which integrates data and
power on the same cable, allowing powered devices to receive data in parallel with power.
These switches therefore provide greater flexibility by simplifying the cabling process.

Network switches can look similar to both hubs and routers; however, they have different
functionalities and operate on separate layers. For example, a hub is relatively simple
compared to a network switch. The goal of a hub is to connect all the nodes in a network.
Because a hub can't manage data going in and out of it as a network switch can, there are a lot
of communication collisions. Hubs are Layer 1 (physical) devices, whereas a network switch
is a Layer 2 device on the OSI model.

A router is a device which joins networks and routes traffic between them. Routers are
Layer 3 devices on the OSI model and deal with IP addresses. IP addresses route packets
across the internet. As an example, an individual's router connects their local network to
their ISP's network.

Difference between layer-2 and layer-3 switches

A switch is a device which forwards data packets within a local network. What is its advantage
over a hub? A hub floods the network with the packet and only the destination system accepts
it, while the others simply drop it, which greatly increases traffic. The switch was introduced
to solve this problem. A switch first learns, by flooding the network just like a hub while it
fills its MAC address table, on which port a particular device is connected. After learning, it
sends packets only to that particular port.
A Layer 2 switch works at Layer 2 of the OSI model, the data link layer, and sends a frame to
the destination port using a MAC address table which stores the MAC address of the device
associated with each port. A Layer 3 switch works at Layer 3 of the OSI model, the network
layer, where it routes packets using IP addresses; it is widely used with VLANs.

LAYER 2 SWITCH | LAYER 3 SWITCH
Operates at Layer 2 (data link) of the OSI model. | Operates at Layer 3 (network) of the OSI model.
Sends frames to the destination on the basis of the MAC address. | Routes packets with the help of the IP address.
Works with MAC addresses only. | Can perform the functions of both a Layer 2 and a Layer 3 switch.
Used to reduce traffic on the local network. | Mostly used to implement VLANs (Virtual Local Area Networks).
Quite fast, as it does not look at the Layer 3 portion of the data packets. | Takes time to examine data packets before sending them to their destination.
Has a single broadcast domain. | Has multiple broadcast domains.
Can communicate within a network only. | Can communicate within or outside a network.

CHAPTER TWO

SWITCHING SYSTEM

In this chapter, we will understand how switching systems work. A switching system
can be understood as a collection of switching elements arranged and controlled in such a way
as to set up a common path between any two distant points. The introduction of switching
systems reduced the complexity of wiring and made telephony hassle-free.

Classification of Switching Systems

In the early stages of telecommunication systems, the process and stages of switching played
an important role in making or breaking connections. Initially, the switching systems were
operated manually; these systems were later automated. The following flowchart shows how
the switching systems were classified.
The switching systems in the early stages were operated manually. The connections were
made by the operators at the telephone exchanges in order to establish a connection. To
minimize the disadvantages of manual operation, automatic switching systems were
introduced.
THE AUTOMATIC SWITCHING SYSTEMS
The Automatic switching systems are classified as the following −
 Electromechanical Switching Systems − Here, mechanical switches are electrically
operated.
 Electronic Switching Systems − Here, electronic components such as diodes,
transistors and ICs are used for switching purposes.

1. Electromechanical Switching Systems

The Electromechanical switching systems are a combination of mechanical and electrical
switching types. The electrical circuits and the mechanical relays are deployed in them. The
Electromechanical switching systems are further classified into the following.

Step-by-step

The Step-by-step switching system is also called the Strowger switching system after its
inventor A B Strowger. The control functions in a Strowger system are performed by circuits
associated with the switching elements in the system.

Crossbar

The Crossbar switching systems have hard-wired control subsystems which use relays and
latches. These subsystems have limited capability and it is virtually impossible to modify
them to provide additional functionalities.

2. Electronic Switching Systems

The Electronic Switching systems are operated with the help of a processor or a computer
which controls the switching timings. The instructions are programmed and stored on a
processor or computer that controls the operations. This method of storing the programs on a
processor or computer is called Stored Program Control (SPC) technology. New
facilities can be added to an SPC system by changing the control program.
The switching scheme used by the electronic switching systems may be either Space Division
Switching or Time Division Switching. In space division switching, a dedicated path is
established between the calling and the called subscribers for the entire duration of the call. In
time division switching, sampled values of speech signals are transferred at fixed intervals.
The time division switching may be analog or digital. In analog switching, the sampled
voltage levels are transmitted as they are, whereas in digital switching they are binary coded
and transmitted. If the coded values are transferred during the same time interval from input
to output, the technique is called Space Switching. If the values are stored and transferred to
the output at a later time interval, the technique is called Time Switching. A time division digital
switch may also be designed by using a combination of space and time switching techniques.
The hardware used to establish connection between inlets and outlets is called the Switching
Matrix or the Switching Network. This switching network is the group of connections
formed in the process of connecting inlets and outlets. Hence, it is different from the
telecommunication network mentioned above.
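
The following Python sketch illustrates the idea of time switching with a simple time-slot interchange: samples written in the incoming time slots are read out in different slots. The slot mapping and sample values are invented purely for illustration.

```python
# Hypothetical sketch of time switching via a time-slot interchange (TSI):
# samples arriving in one time slot are written to memory and read out in a
# different slot, moving speech samples between channels.

def time_slot_interchange(frame_in, slot_map):
    """frame_in: list of samples, one per incoming time slot.
    slot_map: output slot -> input slot whose sample it should carry."""
    return [frame_in[slot_map[out_slot]] for out_slot in range(len(frame_in))]


if __name__ == "__main__":
    incoming = ["A", "B", "C", "D"]          # samples in slots 0..3
    mapping = {0: 2, 1: 0, 2: 3, 3: 1}       # e.g. output slot 0 carries input slot 2
    print(time_slot_interchange(incoming, mapping))  # ['C', 'A', 'D', 'B']
```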

Types of Connections

There are four types of connections that can be established in a telecommunication network.
The connections are as follows −

 Local call connection between two subscribers in the system.
 Outgoing call connection between a subscriber and an outgoing trunk.
 Incoming call connection between an incoming trunk and a local subscriber.
 Transit call connection between an incoming trunk and an outgoing trunk.

Traffic

The product of the calling rate and the average holding time is defined as the Traffic Intensity.
The continuous sixty-minute period during which the traffic intensity is high is the Busy
Hour. When the traffic exceeds the limit to which the switching system is designed, a
subscriber experiences blocking.

Erlang

The traffic in a telecommunication network is measured by an internationally accepted unit of
traffic intensity known as the Erlang (E). A switching resource is said to carry one Erlang of
traffic if it is continuously occupied through a given period of observation.
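
A small worked example of the definitions above (traffic intensity = calling rate x average holding time, expressed in Erlangs). The calling rate and holding time used below are invented for illustration.

```python
# Worked example of traffic intensity in Erlangs:
#   traffic intensity = calling rate x average holding time
# The numbers are illustrative only.

def traffic_intensity_erlangs(calls_per_hour: float, avg_holding_time_min: float) -> float:
    return calls_per_hour * (avg_holding_time_min / 60.0)


if __name__ == "__main__":
    # e.g. 120 calls per hour, each lasting 3 minutes on average
    print(traffic_intensity_erlangs(120, 3))   # 6.0 Erlangs
```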

STORED PROGRAM CONTROL

In this chapter, we will discuss how Stored Program Control works in Telecommunication
Switching Systems and Networks. In order to increase the efficiency and speed of control and
signaling in switching, the use of electronics was introduced. Stored Program Control,
in short SPC, is the concept of electronics that brought about a change in telecommunication.
It permits features like abbreviated dialing, call forwarding, call waiting, etc. The Stored
Program Control concept is where a program, or a set of instructions to the computer, is stored
in its memory and the instructions are executed automatically one by one by the processor.
As the exchange control functions are carried out through programs stored in the memory of a
computer, it is called the Stored Program Control (SPC). The following figure shows the
basic control structure of an SPC telephony exchange.

The processors used by SPC are designed based on the requirements of the exchange. The
processors are duplicated; and, using more than one processor makes the process reliable. A
separate processor is used for the maintenance of the switching system.
There are two types of SPCs −

 Centralized SPC
 Distributed SPC

Centralized SPC

The previous version of Centralized SPC used a single main processor to perform the
exchange functions. The dual processor replaced the single main processor at a later stage of
advancement. This made the process more reliable. The following figure shows the
organization of a typical Centralized SPC.
A dual processor architecture may be configured to operate in one of three modes −

 Standby Mode
 Synchronous Duplex Mode
 Load Sharing Mode

Standby Mode

As the name implies, of the two processors present, one processor is active and the other is in
standby mode. The processor in standby mode is used as a backup in case the active
one fails. This mode of exchange uses a secondary storage common to both the processors.
The active processor copies the status of the system periodically and stores it in this
secondary storage, but the processors are not directly connected. The programs and
instructions related to the control functions, routine programs and other required information
are stored in the secondary storage.

Synchronous Duplex Mode

In the Synchronous Duplex mode, two processors are connected and operated in synchronism.
Two processors P1 and P2 are connected and separate memories like M1 and M2 are used.
These processors are coupled to exchange the stored data. A Comparator is used in between
these two processors. The Comparator helps in comparing the results.
During the normal operation, both of the processors function individually receiving all the
information from the exchange and also related data from their memories. However, only one
processor controls the exchange; the other one remains in synchronism with the previous one.
The comparator, which compares the results of both the processors, identifies if any fault
occurs and then the faulty processor among them is identified by operating them individually.
The faulty processor is brought into service only after the rectification of fault and the other
processor serves meanwhile.

Load Sharing Mode


Load sharing mode is where a task is shared between two processors. The Exclusion Device
(ED) is used instead of the comparator in this mode. The processors call for ED to share the
resources, so that both the processors do not seek the same resource at the same time.
In this mode, both the processors are simultaneously active. These processors share the
resources of the exchange and the load. In case one of the processors fails, the other one takes
over the entire load of the exchange with the help of the ED. Under normal operation, each
processor handles one-half of the calls on a statistical basis. The exchange operator can,
however, vary the processor load for maintenance purposes.

STRUCTURE OF THE SWITCHING SYSTEM


Functional blocks. A switching configuration has a variety of functional blocks, which are
either involved in or support the actual switching process:
 Switching: connection of subscribers by means of subscriber lines and link lines, in order
to create individual communication relationships.
 Administration: administration of the subscriber lines associated with the exchange, the
trunk lines, the equipment of the exchange and the processes which run on this equipment.
The collection and processing of fee and traffic data is also included.
 Maintenance: ensuring the availability of the equipment of the central unit.
 Operation: communication between the central units and their operating personnel.

[Figure: terminals connect over the User Network Interface (UNI) to the switching equipment (switching matrix, control, signalling, adaptors and converters), which connects over trunks and the Network Network Interface (NNI) to other exchanges.]
Principle elements of a switching configuration from the point of view of the switching
process
The figure represents a local exchange. This is the most general case of a switching system,
because here connections to subscribers, as well as connections to other exchanges, are
represented. On the left side, subscriber lines connecting terminal equipment are represented,
using the user network interface (User Network Interface - UNI). On the right side are trunk
lines between the switching stations. Exchanges are connected by means of network interfaces
(Network Network Interface - NNI).

A connection between two terminals attached to the same switching station is called an
internal connection, and is represented with dotted lines. A connection from or to a
subscriber, which is attached to another exchange is called an external connection. This kind
of a connection is drawn in bold lines in the Figure.

Control. An important element of the switching system is the control, which processes the
signalling information from and to the terminal equipment and between the exchanges. The
control system obtains the necessary information for adaptation from adapters and converters
and from subscriber lines and trunk lines.
Switching matrix. The actual creation of connections takes place in the switching matrix,
also called switching network. It is the basic element of a switching system and is set up by
the control system.

Periphery. The periphery of the switching system must provide additional functionality so
that the switching node can successfully integrate into the rest of the environment. The most
important task requirements of this periphery are:
 the supply of power to the subscribers line, i.e. supplying the electrical energy,
 the protection of the switching system from electrical influences on the connections (for
example, due to cable error, voltage overload, lightning etc.),
 the separation of payload and control signals for inband signalling (for example, from and
to subscribers in a telephone network),
 the interference suppression of payload and control signals,
 the conversion of message forms (e.g. 2 wire, 4 wire conversion),
 recognition of incoming signalling,
 Search for a free unit for carrying out a function. Such a unit can be a free link in a certain
direction (path seek), but also can be a software procedure instance for realising a service
characteristic.
 Testing of identifications and access privileges.
 The occupation of a long-distance unit upon request. This unit is assigned to a connection
to be created and locked for any other attempts at occupation.
 Switching on of dial tones.
 Receiving and evaluation of dialling information, i.e. reception of dialling information and
its evaluation in terms of the selected direction, of the subscriber or of service characteristics.
 Signalling transmission, i.e. transmission of a telephone number from the switching
system to another switching system or to terminal equipment.
 Connection, i.e. creation of a connection in the switching network.

 Connection termination, i.e. determination of fees, the signalling of the connection
completion, release of the equipment.
 The disabling of a facility from use in case of malfunction, during maintenance or for
other reasons (for example, to prevent traffic overload of other elements of the central
unit or of the network).
 Release of allocated or disabled equipment within the exchange.

Switching matrix
The switching matrix is an arrangement of switching elements which are used to connect
payload channels in a switching system.
The switching network is the central element of a switching facility. With switching networks,
the required connections of transmission channels between the switching exchanges are
created.
Based on the signalling information and available channels, the switching arrangement
connects input ports and output ports. The task of the switching matrix is the set-up and
release of connections, as well as handling the administration of the simultaneously existing
connections.
In general, a switching network consists of a number of connecting stages. They are
individual layers with a multiplicity of switching elements which are functionally parallel.

Function groups
The complete switching network is divided into three important functional groups, in which
the traffic to be switched is concentrated, distributed, and finally expanded. The most
important function is the distribution of the traffic. The required technical equipment in
general is very complex and can be better utilised with concentration. The concentration /
distribution / expansion structure is functional. This basic structure of switching systems is the
same for all principles that can be applied to switching, independent of whether it is switching
between a variety of spatial connections, time slots or packets.

Concentration. Concentrating switching networks are used when more inputs than outputs
are involved. Concentration is the switching of a number of input lines onto a few output
lines. The traffic of the lightly utilised input lines is concentrated on more heavily utilised
output lines. The expensive equipment assigned to the output lines is also better utilised.

Distribution. Linear switching networks are used when an equal number of inputs and
outputs are involved. In distribution, the traffic is distributed according to its direction.

Expansion. Expanding switching networks are used when more outputs than inputs are
involved. After distribution, the traffic must be reconstituted to the separate individual
subscriber lines at the destination local exchange. The traffic is expanded.
[Figure: concentration (more inputs than outputs, m > n), distribution (equal numbers, m = n) and expansion (more outputs than inputs, m < n) stages of a switching network.]
Concentrating, distributing and expanding in a switching network

A connection in a switching system is processed at first with a concentrating, then a
distributing, and finally with an expanding switching arrangement. This arrangement of the
individual components of the coupling network is purely functional. For the practical
realisation of a switching network, a concentrating and an expanding switching arrangement
can comprise the same physical elements.

CHAPTER 3

SWITCHING TECHNIQUES

Switching network
The connection of terminal equipment, between which messages are to be exchanged, is
performed by a switching network.
The switching network must be able to perform the following basic tasks:
 At any time, from every piece of terminal equipment or from every entry point, a
connection to all terminal equipment on the network or the transfer to other networks must
be possible in principle.
 Every connection must be controllable by the user.

On the one hand, the network must be in a position to fulfil the expected connection requests
with sufficiently high probability and to satisfy guaranteed quality parameters. On the other
hand, the technical effort needed to satisfy connection requests must be reasonably limited.

With the introduction of computer-based control systems, transmission systems have begun to
develop characteristics that are more and more similar to those of switching technology. The
major remaining difference is the control system, which uses the measures of network
management (transmission technology) or signalling during connection set-up (switching
technology). Both technologies are rapidly converging.

The switching network is structured according to different points of view:


 requirements of the switching principle employed,
 amount of traffic,
 technical and economic parameters of the technology utilised,
 regulatory requirements.

[Figure: a switching network made up of network nodes linked by network paths grouped into trunk groups, with subscriber lines connecting subscriber terminals (source/sink) to the nodes.]

Switching network

The most important elements of the network are the nodes and paths. The payload between
the network nodes is transported in the paths. Network edges are connection lines which link
the terminal equipment on the network and are connection trunks between the network nodes
and users. Groups of connections or channels between these same network nodes are brought
together in trunk groups. The payload is switched in the network nodes.
Connections
A connection is a coupling of at least two pieces of terminal equipment on network access
interfaces, network paths and network nodes of a network for the purpose of exchanging
information.
For all forms of information exchange the rule is: at first, a connection through the network
must be created. This connection can exist continuously or it can be created for a certain time
period. If the connection has been created for a limited period of time, then there must be
switching. A connection then exists for the duration of the complete information transmission
(for example, in a telephone network) or the time for the transmission of a part of the
information (for example, in ATM networks). The switching is carried out in the network
nodes.
A switching process is always carried out in connection with a definite communication
relationship.
Switching
 Switching is the creation of connections for a limited period of time in a network by means
of connecting channels, which make up the partial segments of the connection.
 Switching is the creation of the connection by means of control signalling.

Switching technology
 All technical equipment which is used for the switching in a network can be designated
switching technology.
 The switching technology ensures that the information in a network, according to the
switching principles current in this network, reach exactly those network nodes or
subscribers for which they were designated.
 From the point of view of the user of a network, switching is a service that can be
employed in order to exchange information with one or many other users on the network.
A switching node is that part of a network where, by evaluating technical switching
information, partial segments of the network are put together for a connection.
Simultaneously, depending on the traffic volume, the traffic of many terminals on the
network is concentrated onto a few paths of the network by switching.
The place where a switching node is located is called an exchange. Switching nodes are
distinguished according to their location in the network hierarchy as well as by their
technical configuration.

Switching Principles

The switching principle is the way the switching of connections or messages is carried out.
Connectionless transmission
The connectionless mode is appropriate for networks in which sporadic, short information
segments must be exchanged between the terminals, such that the time required for setting up
and terminating a connection can be reduced. For this reason, these networks have mainly
developed for communication between computers. The disadvantage of this kind of network
is that all nodes are loaded with traffic, even if the information is not intended for them.
Connection-oriented transmission
If the time required for the set-up of a connection is short compared with the time period that
the connection exists, then connection-oriented service modes are more advantageous.
Information is transported only to nodes that are necessarily involved with the
communication. Telephone networks have evolved on this model. Connection-oriented
networks can work with switched channels (channel switching) or with message switching
(packet switching or virtual connections).
Connection-oriented channel switching includes switching in the spatial domain (spatial
separation of the channels - spatial switching) and in the time domain (time multiplexing of
the channels). Message switching consists of packet switching (a number of packets per
message) and consignment switching (one packet per message).
A special position must be given to ATM switching.

In large networks, there can be multiple paths from sender to receiver. The switching
technique will decide the best route for data transmission.

Switching technique is used to connect the systems for making one-to-one communication.

Difference between Connection-oriented and Connection-less Services

Both connection-oriented service and connection-less service are used for establishing
communication between two or more devices. These types of services are offered by the
network layer.
Connection-oriented service is related to the telephone system. It includes connection
establishment and connection termination. In connection-oriented service, a handshake
method is used to establish the connection between sender and receiver.

Connection-less service is related to the postal system. It does not include any connection
establishment or connection termination. Connection-less service does not guarantee
reliability. In this case, packets do not follow the same path to reach the destination.

Difference between Connection-oriented and Connection-less Services:


S.NO | CONNECTION-ORIENTED SERVICE | CONNECTION-LESS SERVICE
1. | Connection-oriented service is related to the telephone system. | Connection-less service is related to the postal system.
2. | Connection-oriented service is preferred by long and steady communication. | Connection-less service is preferred by bursty communication.
3. | Connection-oriented service is necessary. | Connection-less service is not compulsory.
4. | Connection-oriented service is feasible. | Connection-less service is not feasible.
5. | In connection-oriented service, congestion is not possible. | In connection-less service, congestion is possible.
6. | Connection-oriented service gives the guarantee of reliability. | Connection-less service does not give the guarantee of reliability.
7. | In connection-oriented service, packets follow the same route. | In connection-less service, packets do not follow the same route.

Classification Of Switching Techniques

CIRCUIT SWITCHING

o Circuit switching is a switching technique that establishes a dedicated path between
sender and receiver.
o In the circuit switching technique, once the connection is established, the dedicated
path continues to exist until the connection is terminated.
o Circuit switching in a network operates in a similar way to the telephone system.
o A complete end-to-end path must exist before the communication takes place.
o In the circuit switching technique, when a user wants to send data, voice or video, a
request signal is sent to the receiver, and the receiver sends back an acknowledgment
to confirm the availability of the dedicated path. After receiving the acknowledgment,
the dedicated path transfers the data.
o Circuit switching is used in the public telephone network. It is used for voice
transmission.
o A fixed amount of data can be transferred at a time in circuit switching technology.

Communication through circuit switching has 3 phases, sketched below:

o Circuit establishment or channel establishment
o Data transfer or information exchange
o Circuit disconnect or connection release
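
The three phases listed above can be viewed as a tiny state machine; the class and method names in the Python sketch below are purely illustrative and are not part of any real signalling protocol.

```python
# Minimal sketch of the three phases of a circuit-switched call described above.

class Circuit:
    def __init__(self, a_subscriber, b_subscriber):
        self.a, self.b = a_subscriber, b_subscriber
        self.state = "idle"

    def establish(self):
        # Phase 1: circuit establishment - reserve a dedicated end-to-end path.
        self.state = "established"

    def transfer(self, data):
        # Phase 2: information exchange over the dedicated path.
        assert self.state == "established", "path must exist before data flows"
        return f"{self.a} -> {self.b}: {data}"

    def release(self):
        # Phase 3: circuit disconnect - free the reserved channels.
        self.state = "released"


if __name__ == "__main__":
    call = Circuit("A", "B")
    call.establish()
    print(call.transfer("hello"))
    call.release()
```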

1. CHANNEL SWITCHING

For channel switching, the relationship between the communication partners is implemented
by connecting channels. After the relationship is created, the subscribers are directly
connected with each other for the complete duration of the communication.
Channel switching is also designated as circuit switching. For circuit switching, the creation
of a connection is necessary before the actual communication is made; after the
communication, the connection must be terminated again. Therefore the connection is divided
into phases.

o Connection phases
Connection set-up. The connection set-up is carried out by an exchange of signalling
information between the active terminal equipment and the exchange, and between the
exchanges. The initiative is taken by the terminal equipment which wants to set up the
communication relationship (in telecommunication technology and in the above example in
Figure 3.3: ‘A’-subscriber). Thereafter follows the reservation of the switching device
equipment to which the A-subscriber is connected. If this reservation is accepted, that is, if a
facility is free to process the connection request, then the terminal equipment is informed (in
the telephone network: using dial tone). Next, the terminal equipment notifies, by dialling,
which other terminal it desires to connect to (dial information, address information). Then an
attempt is made to establish a path to the destination terminal (B-subscriber). If this is
successful, then the B-subscriber is called, and the A-subscriber is informed of the connection
set-up (call display, in telephone network: ringing tone). After the B-subscriber has
acknowledged the call (logon), the connection enters into the second phase. The created
occupancy is, from the point of view of the A- subscriber, an outgoing call and, from the point
of view of the B-subscriber, an incoming call.
In general, the requested connection extends over a number of switching configurations, and
signalling is also necessary between them.

2. Information exchange.
In the second phase of the connection the actual information exchange occurs which also can
be accompanied by signalling. Thus, during the course of a connection, service components
can be switched on and off and teleservices can be managed.
3. Connection release.
The third phase of the connection is the connection release, which one of the terminals
initiates by means of signalling. The switching equipment engaged and the occupied channels
are released again. Data is collected for the recording of connection-dependent fees.

Advantages Of Circuit Switching:

o In the case of the circuit switching technique, the communication channel is dedicated.
o It has fixed bandwidth.

Disadvantages Of Circuit Switching:

o Once the dedicated path is established, the only delay occurs in the speed of
data transmission.
o It takes a long time to establish a connection approx 10 seconds during which
no data can be transmitted.
o It is more expensive than other switching techniques as a dedicated path is
required for each connection.
o It is inefficient to use because once the path is established and no data is
transferred, then the capacity of the path is wasted.
o In this case, the connection is dedicated therefore no other data can be
transferred even if the channel is free.

Message Switching

o Message switching is a switching technique in which a message is transferred as a
complete unit and routed through intermediate nodes, at which it is stored and
forwarded.
o In the message switching technique, there is no establishment of a dedicated path
between the sender and receiver.
o The destination address is appended to the message. Message switching provides
dynamic routing, as the message is routed through the intermediate nodes based on
the information available in the message.
o Message switches are programmed in such a way that they can provide the most
efficient routes.
o Each and every node stores the entire message and then forwards it to the next node.
This type of network is known as a store-and-forward network.
o Message switching treats each message as an independent entity.
Advantages Of Message Switching

o Data channels are shared among the communicating devices that improve the
efficiency of using available bandwidth.
o Traffic congestion can be reduced because the message is temporarily stored in the
nodes.
o Message priority can be used to manage the network.
o The size of the message which is sent over the network can be varied. Therefore, it
supports the data of unlimited size.

Disadvantages Of Message Switching

o The message switches must be equipped with sufficient storage to enable them to
store the messages until the message is forwarded.
o The Long delay can occur due to the storing and forwarding facility provided by
the message switching technique.

Packet Switching

o Packet switching is a switching technique in which the message is sent in one go,
but it is divided into smaller pieces that are sent individually (a small sketch
follows this list).
o The message is split into smaller pieces known as packets, and the packets are given
a unique number to identify their order at the receiving end.
o Every packet contains some information in its headers, such as source address,
destination address and sequence number.
o Packets travel across the network, taking the shortest path possible.
o All the packets are reassembled at the receiving end in the correct order.
o If any packet is missing or corrupted, a message is sent asking the sender to resend it.
o If the packets arrive in the correct order, an acknowledgment message is sent.
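
Here is a small, hypothetical Python sketch of the behaviour described in this list: the message is split into sequence-numbered packets and reassembled in order at the receiver, even if the packets arrive out of order. The field names and packet size are invented for illustration.

```python
# Rough sketch of packet switching: split a message into numbered packets,
# each carrying source, destination and a sequence number, then reassemble
# them in order at the receiver.

def packetize(message: str, size: int, src: str, dst: str):
    return [
        {"src": src, "dst": dst, "seq": i, "data": message[i * size:(i + 1) * size]}
        for i in range((len(message) + size - 1) // size)
    ]

def reassemble(packets):
    # Packets may arrive out of order; the sequence number restores the order.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))


if __name__ == "__main__":
    pkts = packetize("switching techniques", size=6, src="A", dst="B")
    pkts.reverse()                       # simulate out-of-order arrival
    print(reassemble(pkts))              # "switching techniques"
```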
Approaches Of Packet Switching:

There are two approaches to Packet Switching:

Datagram Packet switching:

o It is a packet switching technology in which each packet, known as a datagram, is
considered an independent entity. Each packet contains information about the
destination, and the switch uses this information to forward the packet to the correct
destination.
o The packets are reassembled at the receiving end in the correct order.
o In the datagram packet switching technique, the path is not fixed.
o Intermediate nodes take the routing decisions to forward the packets.
o Datagram packet switching is also known as connectionless switching.

Virtual Circuit Switching

o Virtual circuit switching is also known as connection-oriented switching.
o In the case of virtual circuit switching, a preplanned route is established before the
messages are sent.
o Call request and call accept packets are used to establish the connection between
sender and receiver.
o In this case, the path is fixed for the duration of a logical connection.

Let's understand the concept of virtual circuit switching through a diagram:


o In the above diagram, A and B are the sender and receiver respectively. 1 and 2 are
the nodes.
o Call request and call accept packets are used to establish a connection between the
sender and receiver.
o When a route is established, data will be transferred.
o After transmission of data, an acknowledgment signal is sent by the receiver that
the message has been received.
o If the user wants to terminate the connection, a clear signal is sent for the
termination.

Differences b/w Datagram approach and Virtual Circuit approach

Datagram approach | Virtual Circuit approach
Node takes routing decisions to forward the packets. | Node does not take any routing decision.
Congestion cannot occur as all the packets travel in different directions. | Congestion can occur when the node is busy, and it does not allow other packets to pass through.
It is more flexible as all the packets are treated as independent entities. | It is not very flexible.

Advantages Of Packet Switching:

o Cost-effective: In the packet switching technique, switching devices do not require
massive secondary storage to store the packets, so cost is minimized to some extent.
Therefore, we can say that packet switching is a cost-effective technique.
o Reliable: If any node is busy, then the packets can be rerouted. This ensures that
the packet switching technique provides reliable communication.
o Efficient: Packet switching is an efficient technique. It does not require any
established path prior to the transmission, and many users can use the same
communication channel simultaneously; hence it makes use of available bandwidth
very efficiently.

Disadvantages Of Packet Switching:

o The packet switching technique cannot be implemented in those applications that
require low delay and high-quality services.
o The protocols used in the packet switching technique are very complex and require
a high implementation cost.

If the network is overloaded or corrupted, then it requires retransmission of lost packets. It can
also lead to the loss of critical information if errors are not recovered.
SPACE TIME SWITCHING
CONGESTION

Congestion in a network is said to have occurred when the load on the network is greater than
the capacity of the network. When the data received exceeds the buffer size of a node, the
traffic becomes high, which further leads to congestion. The amount of data moved from one
node to another is called the throughput.
The following figure shows congestion.

In the above figure, when the data packets arrive at Node from the senders A, B and C then
the node cannot transmit the data to the receiver at a faster rate. There occurs a delay in
transmission or may be data loss due to heavy congestion.
When too many packets arrive at a port in a packet-switched network, the performance
degrades; such a situation is called congestion. The data waits in a queue for transmission.
When the queue is utilized more than 80%, it is said to be congested. Congestion control
techniques help in controlling the congestion. The following graph, drawn between throughput
and packets sent, shows the difference between congestion-controlled transmission and
uncontrolled transmission.
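
As a tiny illustration of the 80% rule of thumb mentioned above, the sketch below flags a queue as congested once its utilisation crosses that threshold; the queue sizes used are invented for illustration.

```python
# Simple illustration of the 80% queue-utilisation rule of thumb; the
# threshold comes from the text, the function itself is only a sketch.

def is_congested(queued_packets: int, queue_capacity: int, threshold: float = 0.8) -> bool:
    return queued_packets / queue_capacity > threshold


if __name__ == "__main__":
    print(is_congested(75, 100))   # False - 75% utilisation
    print(is_congested(90, 100))   # True  - 90% utilisation, queue is congested
```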

The techniques used for congestion control are of two types – open loop and closed loop. The
loops differ by the protocols they issue.

Open Loop

The open loop congestion control mechanism produces protocols to avoid congestion. These
protocols are applied at the source and the destination.

Closed Loop

The closed loop congestion control mechanism produces protocols that allow the system to
enter the congested state and then detect and remove the congestion. Explicit and implicit
feedback methods help in the running of this mechanism.

Difference between Flow Control and Congestion Control

Both flow control and congestion control are traffic-controlling methods used in different
situations.
The main difference between flow control and congestion control is that flow control regulates
the traffic flowing from a sender to a receiver, whereas congestion control regulates the traffic
entering the network.
Let's see the difference between flow control and congestion control:

S.NO | FLOW CONTROL | CONGESTION CONTROL
1. | In flow control, the traffic flowing from a sender to a receiver is controlled. | In congestion control, the traffic entering the network is controlled.
2. | The data link layer and the transport layer handle it. | The network layer and the transport layer handle it.
3. | The receiver's data is prevented from being overwhelmed. | The network is prevented from congestion.
4. | In flow control, only the sender is responsible for the traffic. | In congestion control, the transport layer is responsible for the traffic.
5. | Traffic is prevented by the sender transmitting slowly. | Traffic is prevented by the transport layer transmitting slowly.

SIGNALING TECHNIQUES
Signaling techniques enable the circuit to function as a whole by interconnecting all varieties
of switching systems. There are three forms of signaling involved in a telecommunication
network.

 Subscriber loop signaling
 Intraexchange or register signaling
 Interexchange or inter-register signaling
The subscriber loop signaling depends upon the type of telephone instrument used. The intra
exchange signaling refers to the internal portion of a switching system that is heavily
dependent upon the type and design of a switching system, which varies depending upon the
model. The inter-exchange signaling takes place between exchanges. This helps in the
exchange of address digits, which pass from exchange to exchange on a link-by-link basis.
The network-wide signaling that involves end-to-end signaling between the originating
exchange and the terminating exchange is called the Line signaling.
The two main types of signaling techniques are −

In-Channel Signaling

In-Channel Signaling is also known as Per Trunk Signaling. It uses the same channel which
carries user voice or data to pass the control signals related to that call or connection. No
additional transmission facilities are needed for in-channel signaling.

Common Channel Signaling


Common Channel Signaling uses a separate common channel for passing control signals for a
group of trunks or information paths. This signaling does not use the speech or the data path
for signaling.
We will discuss the signaling techniques in depth in our subsequent sections.

Types of Signaling Techniques

As discussed above, the signaling techniques are categorized into two: in-channel signaling
and common channel signaling. However, these are further divided into a few types depending
upon the frequencies and frequency techniques used.
The division is as shown in the following figure −

CHAPTER FOUR

ROUTING
Routing is the directing of data packets, based on the complete destination address contained
in the data header, from the sender to the receiver over a varying number of nodes (routers)
through the network. The job of the routing function is, for example, to transport datagrams
in a packet network from a transmitter to one (unicast) or numerous (multicast, broadcast)
destinations. For this, two sub-tasks must be performed:
 the construction of routing tables, and;
 the forwarding of the datagrams using the routing tables.

The routing process described here is the forwarding of data packets. It has nothing to do with
path searching for switched circuits under certain network conditions, such as in the case of
overload, errors, or for optimising the costs of a connection (least-cost routing).

The datagrams are transferred from one router (next-hop) to the next (hop-by-hop). A given
router knows the next router which lies in the direction of the destination. The decision on the
next router (next-hop) depends on the destination address of the datagram (destination based
routing). An entry in the routing tables contains the destination and the next-hops that belong
with it, as well as supplementary data.

The routing table determines the next node that a data packet must reach in order to get to the
desired destination. Routing tables can be:
 static, or;
 dynamic.

In the case of static routing, the next-hop of a route is entered as a fixed entry in the tables.
Static routing is appropriate for smaller networks and networks with a simple topology. In the
case of dynamic routing, the next hop is determined from network state information. Dynamic
routing makes sense for larger networks with a complex topology, for automatic path
adaptation in case of error (backup), and in case of overloading of parts of the network.

The whole world is digitalized and connected over the network. Packets, which are the atomic
unit of information in packet-switched communication networks, are exchanged between the nodes
(a node might be an end device, a router, a data-generating device, etc.). The process of
transferring these packets of information from their source node to the destination node, with
one or more hops in between, along the most optimal path is called 'routing'. Routers and
switches are the devices used for this purpose; they work with the routing protocols and
algorithms they are configured with. The routing of packets is taken care of by the L3 layer,
the network layer of the OSI Reference Model.

 
How does it take place?
When a packet is introduced into the network and received by one of the routers, the router
reads the headers of the packet to understand the destination and checks its routing table,
marked with routing metrics, to see what the next best hop would be for the packet to reach the
destination optimally. It then pushes the packet to the next node, and the process repeats at
the new node until the packet reaches the destination node.
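This lookup-and-forward cycle can be sketched in a few lines of Python; the node names and the
per-node tables below are purely hypothetical.

# Hop-by-hop forwarding sketch: each node only knows the next hop per destination.
# Node names (A, B, C, D) and their tables are hypothetical.
tables = {
    "A": {"D": "B"},   # at A, packets destined for D are handed to B
    "B": {"D": "C"},
    "C": {"D": "D"},   # C delivers directly to D
}

def forward(destination, current_node):
    # Repeat the read-header / consult-table / push-to-next-node step until delivery.
    path = [current_node]
    while current_node != destination:
        current_node = tables[current_node][destination]
        path.append(current_node)
    return path

print(forward("D", "A"))  # -> ['A', 'B', 'C', 'D']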

Routing metrics –
Routing tables hold the information on the basis of which packets are forwarded along the most
optimal path. This information consists of different metrics, or variables, which the routing
algorithms evaluate before deciding on a path. The standard metrics include:
1. Path Length – Here the administrator assigns a cost to each link (between two nodes). The
path length is the sum of the costs of all the links along the path. The path with the smallest
path length is chosen as the most optimal one (a minimal sketch follows this list).
2. Delay – This is the measure of the time it takes for a packet to travel from source to
destination. It depends on many factors such as network bandwidth, the number of intermediate
nodes and congestion at the nodes. The sooner the transfer, the better the Quality of Service
(QoS).
3. Bandwidth – This refers to the amount of data a link can transfer through it. Enterprises
usually lease network lines to achieve higher link capacity and bandwidth.
4. Load – Load refers to the traffic that a router or a link is handling. An unbalanced or
unhandled load can cause congestion, lower transmission rates and packet loss.
5. Communication Cost – This is the operational expense that the company incurs by sending the
packets over the leased lines between the nodes.
6. Resilience and Reliability – This refers to the error-handling capacity of the routers and
the routing algorithms. If some nodes in the network fail, the resilience and reliability
measures show how well the other nodes can handle the traffic.
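To make the path-length metric concrete, the short sketch below sums administrator-assigned link
costs for two candidate paths and picks the cheaper one; the node names and costs are invented
for the example.

# Path length = sum of the administrator-assigned costs of the links along the path.
# Link costs and candidate paths below are hypothetical.
link_cost = {("A", "B"): 10, ("B", "D"): 10, ("A", "C"): 5, ("C", "D"): 25}

def path_length(path):
    return sum(link_cost[(a, b)] for a, b in zip(path, path[1:]))

candidates = [["A", "B", "D"], ["A", "C", "D"]]
best = min(candidates, key=path_length)
print(best, path_length(best))  # -> ['A', 'B', 'D'] 20, chosen as the most optimal path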

 
Types of Routing

 Static Routing – This is the type of routing in which the optimal path between all possible
pairs of sources and destinations in the given network is pre-defined and fed into the routing
tables of the routers of the network.

Advantages –
1. There is no CPU overhead for the routers to decide the next hop for a packet, as the paths
are predefined.
2. This offers higher security, as the administrator has full control over the permissions for
packet flow along a defined path.
3. No bandwidth is used between the routers for tasks like updating the routing tables.

Disadvantages –
1. For a larger network topology, it is difficult for the administrator to identify and
pre-define an optimal path for all possible combinations of source and destination nodes.
2. The administrator is expected to be thorough in the concepts of networking and in the
topology. Handing over to a new administrator takes time, as the new administrator has to
understand the topology and the policies that have been defined.

 Dynamic Routing – This type gives the router the ability to discover the network through
protocols like OSPF (Open Shortest Path First) and RIP (Routing Information Protocol), to update
its routing table by itself, and to decide effectively upon the path that an incoming packet
must follow to reach its destination.

Advantages –
1. It is easy to configure.
2. It is efficient at discovering remote networks and routing to them.

Disadvantages –
1. When one of the routers in a network implementing dynamic routing discovers a change or
generates an update, it broadcasts it to all the nodes, thus consuming a higher amount of
bandwidth.
2. It is relatively less secure than static routing.

 
Types of Routing Algorithms
There are two types of algorithms –

 Adaptive – The routes are decided dynamically based on changes in the network topology.
1. Distance Vector Routing – In this algorithm, each router maintains a routing table containing
an entry for each router in the network. These entries are updated periodically by exchanging
distance vectors with the neighbours. This is also called the Bellman-Ford algorithm and was
originally the ARPANET routing algorithm (a minimal sketch is given after this list).
2. Link State Routing – Each router discovers its neighbours, measures the cost to each
neighbour, constructs a packet describing this information, distributes it to the other routers
and then computes the shortest path to every other router.
 Non-Adaptive – The routes are decided in a static fashion by the routers.
1. Flooding – Here every incoming packet is sent out to every other neighbouring router; these
in turn do the same, and by some path the packet reaches its destination. This duplicates
packets, but the reliability of this type of routing is very high. It is mostly used in defence
networks, distributed databases, wireless networks and to populate routing tables.
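As mentioned in the distance-vector item above, here is a minimal sketch of Bellman-Ford style
table relaxation; the three-node topology and the link costs are invented for the illustration.

# Distance-vector sketch: each router keeps a distance estimate to every destination and
# repeatedly relaxes it using its neighbours' estimates (Bellman-Ford).
# Topology and costs are hypothetical.
INF = float("inf")
neighbours = {                      # directly connected links and their costs
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2},
    "C": {"A": 4, "B": 2},
}
nodes = list(neighbours)
# Initial tables: cost 0 to self, link cost to direct neighbours, infinity otherwise.
dist = {n: {m: (0 if n == m else neighbours[n].get(m, INF)) for m in nodes} for n in nodes}

def one_round():
    # Every router compares its own estimates with "neighbour's estimate + link cost"
    # and keeps any cheaper route it finds.
    updated = False
    for n in nodes:
        for via, link in neighbours[n].items():
            for dest in nodes:
                if link + dist[via][dest] < dist[n][dest]:
                    dist[n][dest] = link + dist[via][dest]
                    updated = True
    return updated

while one_round():
    pass
print(dist["A"]["C"])  # -> 3 (A reaches C via B: 1 + 2)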


CHAPTER FOUR

SWITCHING TECHNOLOGIES

ISDN

In this chapter, we will learn about the Integrated Services Digital Network. Earlier, the
transmission of both data and voice was possible through normal POTS, the Plain Old Telephone
System. With the introduction of the Internet came advancements in telecommunication too. Yet,
sending and receiving data along with voice was not an easy task: one could use either the
Internet or the telephone. The invention of ISDN helped mitigate this problem.
The process of connecting a home computer to the Internet Service Provider used to take a lot
of effort. The modulator-demodulator unit, simply called the modem, was essential to establish
a connection. The following figure shows how the modem worked in the past.

The above figure shows that the digital signals have to be converted into analog, and the analog
signals back into digital, by modems along the whole path. What if the digital information at
one end could reach the other end in the same form, without all these conversions? It is this
basic idea that led to the development of ISDN.
As the system had to use the telephone cable through the telephone exchange to access the
Internet, the telephone could not be used for voice calls at the same time. The introduction of
ISDN resolved this problem by allowing the transmission of both voice and data simultaneously.
It has many advanced features over the traditional PSTN, the Public Switched Telephone Network.
ISDN was first defined in the CCITT Red Book in 1988. The Integrated Services Digital Network,
in short ISDN, is a telephone-network-based infrastructure that allows the transmission of voice
and data simultaneously at high speed and with greater efficiency. It is a circuit-switched
telephone network system, which also provides access to packet-switched networks.
The model of a practical ISDN is as shown below.
ISDN supports a variety of services. A few of them are listed below −

 Voice calls
 Facsimile
 Videotext
 Teletext
 Electronic Mail
 Database access
 Data transmission and voice
 Connection to internet
 Electronic Fund transfer
 Image and graphics exchange
 Document storage and transfer
 Audio and Video Conferencing
 Automatic alarm services to fire stations, police, medical services, etc.

Types of ISDN

Among the several kinds of interfaces present, some contain channels such as the B-channels or
bearer channels, which are used to transmit voice and data, and the D-channels or delta
channels, which are used for signaling purposes to set up the communication.
ISDN has several kinds of access interfaces, such as −

 Basic Rate Interface (BRI)
 Primary Rate Interface (PRI)
 Narrowband ISDN
 Broadband ISDN

Basic Rate Interface (BRI)

The Basic Rate Interface or Basic Rate Access, simply called the ISDN BRI connection, uses the
existing telephone infrastructure. The BRI configuration provides two data or bearer channels at
64 kbit/s and one control or delta channel at 16 kbit/s. This is a standard rate.
The ISDN BRI interface is commonly used by smaller organizations, by home users, or within a
local group covering a smaller area.

Primary Rate Interface (PRI)

The Primary Rate Interface or Primary Rate Access, simply called the ISDN PRI connection, is
used by enterprises and offices. In the US, Canada and Japan, the PRI configuration is based on
the T-carrier or T1, consisting of 23 data or bearer channels and one control or delta channel,
each at 64 kbit/s, for a bandwidth of 1.544 Mbit/s. In Europe, Australia and a few Asian
countries, the PRI configuration is based on the E-carrier or E1, consisting of 30 data or
bearer channels and two control or delta channels, each at 64 kbit/s, for a bandwidth of
2.048 Mbit/s.
The ISDN PRI interface is used by larger organizations or enterprises and by Internet Service
Providers.
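The channel arithmetic behind these figures can be checked with a few lines of Python; treating
the remaining 8 kbit/s of the T1 figure as framing overhead is an assumption made here for the
illustration.

# Channel arithmetic for the ISDN interfaces described above (values in kbit/s).
bri = 2 * 64 + 16                  # BRI: 2 bearer channels + 1 delta channel = 144 kbit/s
pri_t1 = 23 * 64 + 1 * 64 + 8      # PRI (T1): 23 B + 1 D, plus 8 kbit/s assumed framing = 1544
pri_e1 = 30 * 64 + 2 * 64          # PRI (E1): 30 B + 2 control/delta channels = 2048 kbit/s
print(bri, pri_t1 / 1000, pri_e1 / 1000)   # -> 144 1.544 2.048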

Narrowband ISDN

The Narrowband Integrated Services Digital Network is called N-ISDN. It can be understood as
telecommunication that carries voice information in a narrow band of frequencies; it is
essentially an attempt to digitize the analog voice information and uses 64 kbit/s circuit
switching.
Narrowband ISDN is implemented to carry voice data, which uses less bandwidth, on a limited
number of frequencies.

Broadband ISDN

The Broadband Integrated Services Digital Network is called the B-ISDN. This integrates the
digital networking services and provides digital transmission over ordinary telephone wires,
as well as over other media. The CCITT defined it as, “Qualifying a service or system
requiring transmission channels capable of supporting rates greater than primary rates.”
The broadband ISDN speed ranges from about 2 Mbit/s to 1 Gbit/s, and the transmission is based
on ATM, i.e., Asynchronous Transfer Mode. Broadband ISDN communication is usually carried over
fiber optic cables.
As the speed is greater than 1.544 Mbps, the communications based on this are called
Broadband Communications. The broadband services provide a continuous flow of
information, which is distributed from a central source to an unlimited number of authorized
receivers connected to the network. Though a user can access this flow of information, he
cannot control it.

Advantages of ISDN

ISDN is a telephone network based infrastructure, which enables the transmission of both
voice and data simultaneously. There are many advantages of ISDN such as −

 As the services are digital, there is less chance for errors.
 The connection is faster.
 The bandwidth is higher.
 Voice, data and video − all of these can be sent over a single ISDN line.

Disadvantages of ISDN

The disadvantage of ISDN is that it requires specialized digital services and is costlier.
However, the advent of ISDN has brought great advancement in communications. Multiple
transmissions with greater speed are being achieved with higher levels of accuracy.

ATM switching
In the case of ATM switching, the composition of information packets is similar to that for
packet switching. They all have the same length of 53 bytes. All packets of an ATM
connection take the same path through the network, for which the transmission capacity has
been reserved in advance.
ATM switching differs from classical packet switching by the constant packet length and the
determination of a connection path. This allows the switching of ATM cells to be simpler and
computationally easier to control.
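The fixed cell format (a 5-byte header carrying the path/channel identifiers plus a 48-byte
payload, 53 bytes in total) can be sketched as follows; the class below is a simplified model,
not a bit-exact ATM header layout.

# Simplified ATM cell sketch: fixed length of 53 bytes = 5-byte header + 48-byte payload.
# Only the identifiers used for switching are modelled; the header is not bit-exact.
HEADER_BYTES = 5
PAYLOAD_BYTES = 48

class AtmCell:
    def __init__(self, vpi, vci, payload):
        self.vpi = vpi     # virtual path identifier
        self.vci = vci     # virtual channel identifier
        # Pad or truncate so every cell carries exactly the same payload length.
        self.payload = payload[:PAYLOAD_BYTES].ljust(PAYLOAD_BYTES, b"\x00")

    def length(self):
        return HEADER_BYTES + PAYLOAD_BYTES   # always 53 bytes

cell = AtmCell(vpi=1, vci=42, payload=b"user data")
print(cell.length())  # -> 53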
Storage principles
A requirement for the switching of ATM cells is that the cells in every switching system are
temporarily stored. For this purpose, the following basic principles can be applied:

 Input memory: Per input, the incoming cells are stored in memory on the first-in-first-out
(FIFO) principle. For the switching process, an internal blocking-free matrix is employed. The
disadvantage of this storage method is the possible blocking of waiting cells in the FIFO: even
though the respective output is free, a cell may have to wait for switching because previous
cells destined for other outputs must be handled first.
 Output memory: Immediately after arriving, the cells are switched to a FIFO per output and
read out from there at the output line rate. On the input side, only the storage of one cell
per lead is necessary. The disadvantage of this storage method is that the internal speed of
the switching matrix must be greater than the combined speed of all incoming cells.
 Central memory: All incoming cells are stored in a common memory. This can be smaller than
the sum of all separate memory requirements, but the control system for memory access is
complex and very high-speed memory access is required.
 Distributed memory: In a matrix made up of input and output lines, memory is allocated at
every crosspoint to allow the multiplexing of the cells onto the output lines. The disadvantage
of this method is the large memory requirement.
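The head-of-line blocking drawback of pure input memory can be shown with a tiny Python
simulation; the queue contents and the contention outcome below are hypothetical.

# Head-of-line blocking sketch: with one FIFO per input, only the head cell can be switched,
# so a cell for a free output may wait behind a cell destined for a busy output.
from collections import deque

# Each entry is the output port a queued cell wants (hypothetical queue contents).
fifo_in0 = deque([0, 1])   # head cell wants output 0, the next cell wants output 1
fifo_in1 = deque([0])      # this head cell also wants output 0

granted = set()
for fifo in (fifo_in1, fifo_in0):      # suppose input 1 wins the contention for output 0
    head = fifo[0]
    if head not in granted:
        granted.add(head)
        fifo.popleft()                 # this cell is switched in the current cycle

# Output 1 stays idle in this cycle even though fifo_in0 holds a cell for it:
print(1 in granted, list(fifo_in0))    # -> False [0, 1]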

Storage principles in ATM switching (figure): input memory, output memory, central memory and
distributed memory.


3.1.1 Virtual connections
In the case of virtual connections, individual packets are switched, but all packets of an
information relationship are transmitted along only one path which is established at
connection set-up.

Connection orientation. Before the information exchange begins, there is a connection set-up
phase, which determines whether a path with adequate transmission capacity is available between
source and sink. This channel is not occupied for the total connection time, but only when
transmission capacity is actually required. If no packets are available for some time, the
transmission channel can be used for other virtual connections. The capacity of transmission
sections can even, within certain limits, be overbooked (statistical multiplex gain);
nevertheless, all virtual connections have access to guaranteed resources and at times can even
use more bandwidth than they were guaranteed.
Virtual connections combine the advantages of packet switching and channel switching. They:
 make good use of the resources of the network (an advantage of packet switching);
 can quickly make large transmission capacities available (an advantage of packet switching);
 guarantee resources (an advantage of channel switching), and;
 have a control system which is inherently less complex to realise than with a strict packet
switching system.
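As a rough sketch of how a switch along such a virtual connection could forward cells: at
connection set-up each switch records a translation from (incoming port, incoming channel
identifier) to (outgoing port, outgoing channel identifier), and afterwards every cell only
needs this small label lookup. The port numbers and identifiers below are invented for the
example.

# Virtual-connection sketch: the path is fixed at set-up by installing a translation entry in
# every switch along the route; forwarding is then a simple label lookup per cell.
# Port numbers and channel identifiers (VCIs) are hypothetical.
vc_table = {
    # (incoming port, incoming VCI) -> (outgoing port, outgoing VCI)
    (1, 37): (4, 12),
    (2, 55): (4, 13),
}

def switch_cell(in_port, in_vci, payload):
    out_port, out_vci = vc_table[(in_port, in_vci)]   # path chosen at connection set-up
    return out_port, out_vci, payload                 # forward with the translated label

print(switch_cell(1, 37, b"data"))  # -> (4, 12, b'data')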

EFON ESTHER ABIA PERMANENT TEACHER SUP’PTIC
