Signal Transmission
Signaling is the means by which data is transmitted across a medium, using electrical
energy to communicate. The information to be transmitted can exist in two formats: analog or
digital. Analog data is continuously changing and represents all values within a range. Digital
data, on the other hand, consists of discrete states: ON or OFF, 1 or 0.
Current-State Encoding
Unipolar
Polar
Return-to-Zero
Biphase
State-Transition Encoding
State-transition encoding methods differ from current-state encoding methods in that they use
a transition in the signal to represent data, as opposed to encoding data by means of a
particular voltage level or state.
Encoding Schemes
Unipolar – Uses two levels of signal for encoding data. The presence of a signal,
either positive or negative (+5 and –5, for example) represents one value, while the
absence of signal represents the other value.
Polar – Similar to unipolar, except that it can use both positive and negative voltages
for encoding data. For example, a –3V could represent a 1 while a +3V could
represent a 0.
Return-to-Zero (RZ) – An encoding
scheme that uses a signal transition to zero in the middle of each bit interval. A
positive voltage transitioning to zero could represent a 1 while a negative voltage
transitioning to zero could represent a 0, for example.
Biphase – This encoding scheme requires at least one mid-bit transition per bit
interval. Manchester and Differential Manchester are two examples.
Manchester – Used most commonly on Ethernet LANs, this encoding scheme uses a
low-to-high mid-bit transition to represent one value and a high-to-low transition to
represent another value.
Differential Manchester – This encoding scheme is also considered to be a biphase
encoding scheme because it uses a mid-bit transition. In this scheme, however, the
mid-bit transition doesn’t represent data; it is used for timing. The actual data is
represented by a transition at the beginning of the bit interval. The presence of a
transition indicates one value while the absence of a transition represents another
value. This scheme is used on Token Ring LANs.
Non-Return-to-Zero (NRZ) – The main difference between this scheme and
Differential Manchester is that NRZ, unlike Differential Manchester, does not use a
mid-bit transition for clocking.
B8ZS (Binary Eight-Zero Substitution) – An improved method of formatting
T1 data streams. In B8ZS, long strings of zeros are replaced with a sequence of
intentional bipolar violations. Because these substitutions contain pulses,
sufficient pulses are always on the line to retain synchronization. A one
is sent on a T1 by sending a pulse; a zero by not sending a pulse. The alternating
mark rule means that if the last pulse sent had positive polarity, the next
pulse sent must be negative. If a T1 device receives two pulses in a row of the
same polarity, a bipolar violation (BPV) has occurred. In B8ZS, a
specific combination of valid pulses and bipolar violations is substituted
whenever the user data contains eight zeros in a row; therefore, the
user no longer needs to give up one bit in eight as in AMI. When configuring a CSU
for a B8ZS T1, the density selection should be NONE.
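The AMI and B8ZS rules above can be illustrated with a minimal Python sketch. The function name and the +1/0/−1 list representation are my own; the substitution used is the commonly documented 000VB0VB pattern, where V is a violation pulse (same polarity as the preceding pulse) and B is a valid, alternating pulse:

```python
def ami_b8zs(bits):
    """Encode a bit list as AMI line pulses (+1/-1 for ones, 0 for
    zeros), substituting each run of eight zeros with the B8ZS
    000VB0VB pattern so pulses stay on the line."""
    out = []
    last = -1      # polarity of the most recent pulse; first mark goes out as +1
    run = 0        # length of the current run of zeros
    for bit in bits:
        if bit == 1:
            out.extend([0] * run)   # emit any pending zeros (fewer than 8)
            run = 0
            last = -last            # alternate mark inversion
            out.append(last)
        else:
            run += 1
            if run == 8:
                v = last            # V: violates alternation (same as last pulse)
                out.extend([0, 0, 0, v, -v, 0, -v, v])
                run = 0             # final pulse is v == last, so 'last' is unchanged
    out.extend([0] * run)
    return out
```

For example, a one followed by eight zeros and a one produces the substitution in the middle, and the mark after it still alternates correctly.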
Bit Synchronization
Asynchronous
Uses stop and start bits to signal beginning and end of each package of data
Synchronous
Analog signaling:
Amplitude – the strength of the signal, or the height of the wave.
Frequency – the number of cycles a wave completes per second, measured in hertz
(Hz). (The time taken for one complete cycle is the period, the inverse of frequency.)
Phase – the relative state of one wave when timing began. This is measured in
degrees.
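The three properties above combine into the usual sine-wave formula. A small Python helper (the function and parameter names are illustrative, not from any standard) samples such a wave:

```python
import math

def analog_sample(t, amplitude, frequency_hz, phase_deg):
    """Instantaneous value of a sine carrier at time t (seconds):
    amplitude sets the height of the wave, frequency the number of
    cycles per second, and phase the offset (in degrees) of the
    wave when timing began."""
    return amplitude * math.sin(2 * math.pi * frequency_hz * t
                                + math.radians(phase_deg))
```

At t = 0 with a 90-degree phase, the wave starts at its positive peak (the full amplitude); with zero phase it starts at zero.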
Broadband
Physical Circuits
Basic Terminology
Local Loop – Often called the last mile, this is the physical wiring between the
demarcation point and the telephone company’s equipment.
Digital signal - A signal with only two values, normally 0 and 1, during transmission,
unlike an analog signal whose values constantly vary
Circuit - (1) The physical connection, or path, of channels, conductors, and equipment
between two given points through which an electric current may be established. (2) A
switched or dedicated communications path with a specified bandwidth (transmission
speed/capacity). (3) A communication path or network; usually a pair of channels
providing bidirectional communication.
Punch Down Block - A block mounted on the wall in a telco closet where cable pairs
are punched down for termination or cross connection.
Analog
Analog Multiplexing
Each voice channel was assigned to a different 4 kHz band. When added together, 24 voice
channels totaled 96 kHz, well within the capacity of twisted-pair technology at the time. Due
to restrictions in vacuum-tube technology at the time, 100 kHz was actually the top end;
96 kHz was used as a conservative figure. This is known as frequency division
multiplexing.
Line Noise
Because every wire acts like an antenna, any electrical signal is subject to electromagnetic
interference. Line noise exists uniformly across the circuit. As the analog signal traverses the
line, it starts out strong near the sender and diminishes in quality farther from the sender. The
voice signal can be amplified easily; however, the amplifier cannot separate the signal from
the noise, so the noise is amplified along with it.
TDM
To bring the noise problem under control, the telephone industry developed digital
transmission techniques. Digital techniques allowed for most of the line noise to be filtered
from a signal by using digital signal regenerators. With the advent of the transistor and digital
logic, a new system was developed for multiplexing signals: time division multiplexing. A
time division multiplexer, or channel bank, assigns short time intervals, or time slots, to each
channel in rotation. With the aid of multiplexers, 24 voice channels take turns using the line.
The analog telephone remains the standard, however, so its signal must be converted to digital
form. This function is performed by a coder-decoder, or codec. Input is an analog voice signal
from 300-3,300 Hz, and output is a 64,000 bits per second digital stream.
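The 64,000 bps codec rate follows directly from Nyquist sampling of the 4 kHz voice band at 8 bits per sample. A quick sketch of the arithmetic (constant names are illustrative):

```python
CHANNEL_BANDWIDTH_HZ = 4_000              # allotted voice band, including guard bands
SAMPLE_RATE = 2 * CHANNEL_BANDWIDTH_HZ    # Nyquist: sample at twice the bandwidth
BITS_PER_SAMPLE = 8                       # PCM word size

# 8,000 samples/s * 8 bits/sample = 64,000 bps: one DS-0 channel
codec_rate = SAMPLE_RATE * BITS_PER_SAMPLE
```

This is exactly the DS-0 rate that the channel banks below multiplex onto a T1.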
Loading coils and other filtering equipment limit the frequencies that can be transmitted
across a voice grade analog channel. If we were to draw a graph of the typical telephone
channel (line), we could make the vertical axis signal power (the more power the louder the
signal) and the horizontal axis the signal frequency. A voice grade channel would permit
signal power between frequencies 300 and 3,400 cycles per second. Below 300 and above
3,400, all signal power would be soaked up or absorbed by the telephone network. (The total
frequency range used by the telephone company is 0 to 4,000 cycles per second. Guard bands
limit the effective voice range from 300 to 3,400 cycles per second.)
Now, the line is limited to 64 Kbps; what about the bandwidth from 56 Kbps to
64 Kbps? This has been allotted to the telephone companies, per the FCC, for signaling
purposes. All of the digital encoding techniques listed below rob bits from the 64 Kbps signal
for signaling, timing, and other information. You will see this concept of robbed bits come up
several times in the rest of the class.
Key words
Local Loops - A local loop is the wire that runs from your facility to the closest telephone
central office. It is generally from 2 to 25 miles of 19 AWG unshielded twisted pair wire.
Today local loops are terminated at one point in a facility. It is then the responsibility of the
facility owner to route the local loop wires to their final destination. Some local loops are trunk
lines that carry more than a single call. Other local loops provide high-speed digital voice
access to the telephone company central office. They may be run over glass fiber links rather
than the more common copper wire.
Loading Coils - Loading Coils are devices placed in analog local loops to assure predictable
electrical signaling over the range of frequencies that carries voice communications on the
telephone network. Loading coils cannot be used on digital transmission links because they
absorb the digital pulses effectively killing the digital signaling.
Channel Banks - Channel banks are multiplexing/demultiplexing analog to digital and digital
to analog conversion devices. A channel bank converts the analog signal from a phone into a
DS-0 64 Kbps digital signal that is time division multiplexed (combined) with other phone
signals and sent over the telephone network. Typically, digital encoding of a voice analog
signal is done using Pulse Code Modulation (PCM).
Digital Switches - In the telephone company central office (the office nearest your facility)
the telephone company has a digital switching system. The digital switching system is a very
large, unmanned branch exchange telephone switch. It takes the digital signals from the
channel bank and routes them with other signals across the telephone company wide area
backbone network. The digital switch belongs to your Local Exchange Carrier (LEC).
Trunk Circuits - Trunk circuits are typically high speed (1.544 Mbps and up) digital links
between telephone company central office switches. A phone call is switched from the Local
Exchange Carrier (LEC) central office switch onto a high speed digital trunk that carries the
call to a long distance carrier (an Inter-eXchange Carrier -- IXC) Point of Presence (POP)
location.
Troubleshooting
Line Impairment - Variations in line quality are typically the cause of low connection
rates. Occasionally, users will experience "a bad line" connection, and have to hang
up and call again. However, if you find that you never or rarely connect at rates above
19200 bps, you will need to investigate the line condition of your connections.
Trunking
Components
SPIDs
Switch Types
This refers to variations on implementation of the signaling protocols by different switch
vendors.
5ESS
DMS-100
National ISDN-1 (NI-1)
If the telephone company tells you the switch is 5ESS or DMS-100, ask for the software type.
If they say the switch uses National software, then use NI-1 as the switch type.
Centrex
Centrex (central office exchange service) is a service from local telephone companies in the
United States in which up-to-date phone facilities at the phone company's central (local)
office are offered to business users so that they don't need to purchase their own facilities.
The Centrex service effectively partitions part of its own centralized capabilities among its
business customers. The customer is spared the expense of having to keep up with fast-
moving technology changes (for example, having to continually update their private branch
exchange infrastructure) and the phone company has a new set of services to sell.
In many cases, Centrex has now replaced the private branch exchange. Effectively, the central
office has become a huge branch exchange for all of its local customers. In most cases,
Centrex (which is sold by different names in different localities) provides customers with as
much if not more control over the services they have than PBX did. In some cases, the phone
company places Centrex equipment on the customer premises.
Typical Centrex service includes direct inward dialing (DID), sharing of the same system
among multiple company locations, and self-managed line allocation and cost-accounting
monitoring.
CRCs
Link Down
Centrex
Trunking/Switch problems
The T1 system was designed to carry 24 digitized telephone calls. Hence the capacity of a T1
line is divided into 48 channels, 24 in each direction. The 24 channels at 64 Kbps each, plus
8 Kbps of framing overhead, combine for a bandwidth of 1.544 Mbps.
Physical
Cable segments. Typically 4 wires (two twisted pairs). Two wires to transmit and two
to receive.
Regenerators that clean up and repeat the signal from one segment to the next.
A specialized DSU device is used to connect to a T3 line as a T3 uses coaxial cable for its
transmission media.
Multiplexing
Although we often think of these channels as flowing across the line together, the bits are
actually transmitted across the line one at a time. One byte from the first call is sent, followed
by the next, and so forth. This is done using time division multiplexing. Each call is assigned
a 1-byte time slot. The transmitting device sends one byte for a channel each time the
channel’s time slot comes around.
DS1 Frames
Twenty-four 64-Kbps channels plus one 8-Kbps signaling channel (the framing channel).
A DS1 frame consists of a framing bit followed by 24 bytes, one each for each of the 24
channels. Thus, a frame consists of 193 bits. Eight thousand frames are sent per second, the
sampling rate used by the telephone company, giving a total signal rate of 1,544,000 bits per
second. This signal is called digital signal level 1 (DS1). T1 and DS1 are often used
interchangeably, however, T1 is a physical implementation while DS1 defines the format of
the signal transmitted on a T1 line.
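The frame arithmetic above can be checked in a few lines (the constant names are illustrative):

```python
CHANNELS = 24
BITS_PER_SLOT = 8            # one byte per channel per frame
FRAMING_BITS = 1             # the F bit that starts each frame
FRAMES_PER_SECOND = 8_000    # matches the telephone sampling rate

frame_bits = CHANNELS * BITS_PER_SLOT + FRAMING_BITS    # 193 bits per frame
line_rate = frame_bits * FRAMES_PER_SECOND              # 1,544,000 bps: the DS1 rate
channel_rate = BITS_PER_SLOT * FRAMES_PER_SECOND        # 64,000 bps: one DS-0
framing_rate = FRAMING_BITS * FRAMES_PER_SECOND         # 8,000 bps framing channel
```

Note how the 8 Kbps framing channel falls out of the same arithmetic: one framing bit per frame, eight thousand frames per second.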
The original packaging that was defined for a DS1 signal was called a D4 superframe, and
consisted of 12 consecutive frames. The 12 framing (F) bits in a D4 superframe contained the
pattern:
100011011100
Troubleshooting
CSU/DSU Alarms:
AIS - Alarm indication signal: all ones, unframed (11111111). Also known as a
Blue Alarm, which signals that an upstream failure has occurred.
CRC (Cyclic Redundancy Check) A method of detecting errors in the serial
transmission of data. A CRC for a block of data is calculated before it is sent, and is
then sent along with the data. A new CRC is calculated on the received data. If the
new CRC does not match the one that has been sent along with the data then an error
has occurred.
Yellow Alarm - A yellow alarm indicates a transmission problem at the remote
CSU/DSU. A specific bit pattern identifies the alarm; the mechanism differs
depending on the frame format. Of course, for the remote CSU/DSU to signal an
alarm, the basic T1 circuit has to be operational.
Loss of Synchronization - If the CSU/DSU can’t locate the synchronization flag over
some number of frames, it will indicate that it has lost "synch" with the remote
CSU/DSU.
Red Alarm - A red alarm warns that the CSU/DSU has lost synchronization
over a longer period of time.
Bipolar Violations - Indicates that unintentional bipolar violations have been
detected on the circuit. In digital transmission facilities, successive pulses are
supposed to alternate between positive and negative states; a violation occurs when
one side of the link sends two pulses of the same polarity in a row.
Loss of Service - When an insufficient number of ‘1’ bits or pulses are received, the
CSU/DSU may declare the circuit to be out of service.
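The CRC mechanism described in the alarm list can be sketched in a few lines of Python. This uses the standard-library CRC-32 purely as a stand-in (the actual T1 ESF checksum is a CRC-6), but the send-and-verify pattern is the same:

```python
import zlib

def send(payload: bytes):
    """Sender side: calculate a CRC over the block of data before it
    is sent, and transmit the CRC along with the data."""
    return payload, zlib.crc32(payload)

def receive(payload: bytes, crc: int) -> bool:
    """Receiver side: calculate a new CRC on the received data. If it
    does not match the CRC sent with the data, an error occurred."""
    return zlib.crc32(payload) == crc
```

A single flipped bit anywhere in the payload changes the recomputed CRC, so the receiver detects the corruption.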
http://www.cisco.com/univercd/cc/td/doc/cisintwk/itg_v1/itg_seri.htm
3 in 24 - This is a 24 bit pattern which contains three ones. The longest string of
consecutive zeros is fifteen (15), and the average ones density is 12.5%. This pattern
is used primarily to test timing (clock) recovery and may be used framed or unframed
for that purpose. This pattern will force a B8ZS code if transmitted through equipment
optioned for B8ZS*.
1:7 ( also written as 1 in 8 ) - This is an eight bit pattern which contains a single one.
This pattern is used primarily to test timing (clock) recovery and may be used framed
or unframed for that purpose. When transmitted unframed the maximum number of
consecutive zeros is seven (7) and the pattern will not force a B8ZS code. When
transmitted framed, framing bits force the maximum number of consecutive zeros to
eight (8), and the pattern will force a B8ZS code from equipment optioned for B8ZS.*
QRSS - This pseudo-random sequence is based on a twenty bit shift register and
repeats every 1,048,575 bits. Consecutive zeros are suppressed to no more than
fourteen (14) in the pattern and it contains both high and low density sequences.
QRSS is the most commonly used test pattern for T-1 maintenance and installation.
This pattern will stress timing recovery, ALBO and equalizer circuits, but it also
simulates customer traffic on the circuit. The QRSS pattern can be used framed or
unframed and will force a B8ZS code in circuits optioned as B8ZS.*
ALL ONES - This pattern is composed of ones only, and causes line driver circuitry to
consume the maximum amount of power. In a circuit with repeaters, this pattern will
verify that the dc power is regulated correctly. When transmitted unframed, an all
ones pattern is defined in some networks as an Alarm Indication Signal (AIS). An
unframed all ones signal may also be referred to as a "Blue Alarm" and is sent
forward by a device that has lost its input signal. Framed all ones is often used as an
idle condition on a circuit that is not yet in service. Thus, all ones is the most common
pattern found on a circuit during installation. This pattern will not have a B8ZS code
present, even if the circuit is optioned for B8ZS.
ALL ZEROS - This pattern is composed of zeros only and must only be used on
circuits optioned for B8ZS. The pattern verifies that all circuit elements are optioned
for B8ZS and should be used whenever a B8ZS circuit is under test.
T1-DALY and 55 OCTET - Each of these patterns contains fifty-five (55) eight-bit
octets of data in a sequence that changes rapidly between low and high density. These
patterns are used primarily to stress the ALBO and equalizer circuitry but they will
also stress timing recovery. 55 OCTET has fifteen (15) consecutive zeroes and can
only be used unframed without violating ones density requirements. For framed
signals, the T1-DALY pattern should be used. Both patterns will force a B8ZS code in
circuits optioned for B8ZS*.
*Note: When a B8ZS code is injected into a test pattern that contains a long string of
zeros, the pattern is no longer testing to the full consecutive zero requirement. Circuit
elements, such as line repeaters, that are intended to operate with or without B8ZS
should be tested without B8ZS.
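The QRSS description above amounts to a twenty-bit linear feedback shift register. The polynomial x^20 + x^17 + 1 is the one commonly cited for the 2^20 - 1 test sequence (ITU O.151); the zero-suppression step that caps runs at fourteen consecutive zeros is deliberately omitted here, so this sketch shows only the underlying shift register:

```python
def prbs20(seed=0x12345):
    """Bit generator for the 2**20 - 1 pseudo-random sequence built on
    a twenty-bit shift register (polynomial x^20 + x^17 + 1). QRSS adds
    zero suppression on top of this raw sequence; the seed must simply
    be a nonzero 20-bit value."""
    state = seed & 0xFFFFF
    while True:
        out = (state >> 19) & 1          # output taken from stage 20
        fb = out ^ ((state >> 16) & 1)   # feedback: stage 20 XOR stage 17
        yield out
        state = ((state << 1) | fb) & 0xFFFFF
```

Because the polynomial is maximal-length, the sequence repeats only after every one of the 1,048,575 nonzero register states has been visited, exactly as the text states.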
T3/DS3
Verio typically uses these as a channelized T3 and sells each of the 28 lines as a separate T1.
Lower level signals can be multiplexed into a payload of a DS3 M-frame in a number of
ways. For example, one way is to take the input of 28 complete DS1 signals and send these
into a multiplexor. Each consists of 1.544 Mbps, and includes the T1 framing bits. These
signals are byte interleaved into a DS3 signal at a multiplexor. This direct multiplexing
scheme is called the synchronous DS3 M13 multiplex format.
An alternative way to pack the DS1s into a DS3 is to bit-interleave groups of four T1s into
DS2 signals, and then bit-interleave seven DS2 signals into a DS3 signal. This is called the
M23 multiplex format.
Troubleshooting
SMDS
Switched Multimegabit Data Service (SMDS) is a connectionless high speed digital network
service based on cell relay for end-to-end application usage. This allows for a logical
progression to ATM if the need arises. Switched means it can be used to reach multiple
destinations with a single physical connection. Originally rolled out in December 1991,
SMDS allows transport of mixed data, voice, and video on the same network.
SMDS provides higher speeds (56 Kbps to 34 Mbps) than Frame Relay or ISDN and is a cross
between Frame Relay and ATM. It uses the same 53-byte cell transmission technology as
ATM but differs from Frame Relay in that destinations are dynamic (not predefined). This
allows data to travel over the least congested route. However, it does provide some of the
same benefits as Frame Relay including:
Protocol transparency
Inexpensive meshed redundancy
Reliability
High speeds
There are six implementations of SMDS that currently exist (that I know of). An SMDSU (a
specialized SMDS CSU/DSU) must be used; common SMDSUs are Kentrox and Digital Link.
4) T3 DXI (45 Mbps) - I don't know much about this because B.A. doesn't sell
it (to my knowledge). I do know that it does not use SIP and a normal T3
CSU/DSU is used.
5) ATM to SMDS - I don't know anything about this, as I've only had the issue
come up once, and that was several years ago. As you'll see from the
documentation, SIP is Layers 1, 2, and a little bit of 3.
A typical configuration on the interface facing the telco:
no ip directed-broadcast
encapsulation smds
The customer router has the same config (T1s would be on a serial interface,
though). For some reason in the past, I've had to use "no smds dxi-mode"
with a DXI T1 (strange as that looks); otherwise, the interface went up/down.
I don't know if that was a bug or not; you might want to check with the guys
currently configuring customer routers.
For more information about SMDS, check out Cisco’s web site:
http://www.cisco.com/univercd/cc/td/doc/product/software/ios11/rbook/rsmds.htm
http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/smds.htm
Troubleshooting
Bridges and switches are data communications devices that operate principally at Layer 2 of
the OSI reference model. As such, they are widely referred to as data link layer devices.
Bridging and switching occur at the link layer, which controls data flow, handles transmission
errors, provides physical (as opposed to logical) addressing, and manages access to the
physical medium. Bridges provide these functions by using various link-layer protocols that
dictate specific flow control, error handling, addressing, and media-access algorithms.
Examples of popular link-layer protocols include Ethernet, Token Ring, and FDDI.
Bridges and switches are not complicated devices. They analyze incoming frames, make
forwarding decisions based on information contained in the frames, and forward the frames
toward the destination. In some cases, such as source-route bridging, the entire path to the
destination is contained in each frame. In other cases, such as transparent bridging, frames are
forwarded one hop at a time toward the destination.
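The transparent-bridging behavior just described (learn source addresses, forward or flood on destination lookups) can be sketched as follows. The class and method names are illustrative, not any vendor's API:

```python
class LearningBridge:
    """Minimal transparent-bridge sketch: learn which port each
    source address appears on, then forward, flood, or filter
    frames based on the destination lookup."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}        # MAC address -> port it was last seen on

    def handle(self, in_port, src, dst):
        """Return the set of ports the frame should be sent out of."""
        self.table[src] = in_port           # learn from the source address
        out = self.table.get(dst)
        if out is None:
            return self.ports - {in_port}   # unknown destination: flood
        if out == in_port:
            return set()                    # same segment: filter the frame
        return {out}                        # known destination: forward
```

The filtering case is what reduces the traffic seen on the other segments, as the following paragraphs discuss.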
Bridges are capable of filtering frames based on any Layer 2 fields. A bridge, for example,
can be programmed to reject (not forward) all frames sourced from a particular network.
Because link-layer information often includes a reference to an upper-layer protocol, bridges
usually can filter on this parameter. Furthermore, filters can be helpful in dealing with
unnecessary broadcast and multicast packets.
By dividing large networks into self-contained units, bridges and switches provide several
advantages. Because only a certain percentage of traffic is forwarded, a bridge or switch
diminishes the traffic experienced by devices on all connected segments. The bridge or
switch will act as a firewall for some potentially damaging network errors, and both
accommodate communication between a larger number of devices than would be supported
on any single LAN connected to the bridge. Bridges and switches extend the effective length
of a LAN, permitting the attachment of distant stations that previously could not be connected.
Although bridges and switches share most relevant attributes, several distinctions
differentiate these technologies. Switches are significantly faster because they switch in
hardware, while bridges switch in software and can interconnect LANs of unlike bandwidth.
A 10-Mbps Ethernet LAN and a 100-Mbps Ethernet LAN, for example, can be connected
using a switch. Switches also can support higher port densities than bridges. Some switches
support cut-through switching, which reduces latency and delays in the network, while
bridges support only store-and-forward traffic switching. Finally, switches reduce collisions
on network segments because they provide dedicated bandwidth to each network segment.
Frame Relay
Frame relay is a packet-switching protocol based on X.25 and ISDN standards. Unlike X.25
however, which assumed low speed, error-prone lines and had to perform error correction,
frame relay assumes error-free lines. By leaving the error correction and flow control
functions to the end points (customer premise equipment), frame relay has lower overhead
and can move variable-sized data packets at much higher rates.
Each location gains access to the frame relay network through a Frame Relay Access Device
(FRAD). A router with frame relay capability is one example. The FRAD is connected to the
nearest carrier point-of-presence (POP) through an access link, usually a leased line. A port
on the edge switch provides entry into the frame relay network.
FRADs assemble the data to be sent between locations into variable-sized frame relay frames,
like putting a letter in an envelope. Each frame contains the address of the target site, which is
used to direct the frame through the network to its proper destination. Once the frame enters
the shared network cloud or backbone, any number of networking technologies can be
employed to carry it.
The path defined between the source and the destination sites is known as a virtual circuit.
While a virtual circuit defines a path between two sites, no backbone bandwidth is actually
allocated to that path until the devices need it. Frame relay supports both permanent and
switched virtual circuits.
A Permanent Virtual Circuit (PVC) is a logical point-to-point circuit between sites through
the public frame relay cloud. PVCs are permanent in that they are not set up and torn down
with each session. They may exist for weeks, months or years, and have assigned end points
which do not change. The PVC is available for transmitting and receiving all the time and, in
that regard, is analogous to a leased line.
In contrast, a Switched Virtual Circuit (SVC) is analogous to a dial-up connection. It is a
duplex circuit, established on demand, between two points. Existing only for the duration of
the session, it is set up and torn down like a telephone call. FRADs which support SVCs
perform the call establishment procedures. Currently, all public frame relay service providers
offer PVCs, while only a very small number offer SVCs.
By supporting several PVCs simultaneously, frame relay can directly connect multiple sites,
through a single physical connection. (In contrast, a leased line network would require
multiple physical connections, one for each site.)
A Data Link Connection Identifier (DLCI), assigned by the service provider, identifies each
PVC. A header in each frame contains the DLCI, indicating which virtual circuit the frame
should use.
The real benefit of frame relay comes from its ability to dynamically allocate bandwidth and
handle bursts of peak traffic. When a particular PVC is not using backbone bandwidth it is
"up for grabs" by another.
When purchasing PVCs, the bandwidth or Committed Information Rate (CIR) must be
specified. The CIR is the average throughput the carrier guarantees to be always available for
a particular PVC.
A device can burst up to the Committed Burst Information Rate (CBIR) and still expect the
data to get through. The duration of a burst transmission should be short, less than three or
four seconds. If long bursts persist, then a higher CIR should be purchased.
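One way to picture CIR and burst enforcement is to classify the bits offered during a single measurement interval. This is a simplified sketch (the function and parameter names are my own, and real switches meter continuously rather than per fixed interval): traffic up to the committed burst passes normally, traffic beyond that up to the excess burst is a candidate for DE marking, and anything above may be dropped at ingress.

```python
def classify_interval(bits_offered, cir, burst_excess, tc=1.0):
    """Split the bits offered during one interval of tc seconds into
    (committed, excess, dropped) buckets, given a CIR in bps and an
    allowed excess burst in bits."""
    bc = cir * tc                               # committed burst size
    committed = min(bits_offered, bc)           # within the guarantee
    excess = min(max(bits_offered - bc, 0),     # above CIR: DE-eligible
                 burst_excess)
    dropped = bits_offered - committed - excess # beyond even the excess burst
    return committed, excess, dropped
```

For a 64 Kbps CIR with a 32 Kbit excess burst, offering 80 Kbits in one second puts 16 Kbits into the excess (discard-eligible) bucket and drops nothing.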
The frame relay network does try to police itself and keep congestion and thus packet loss
down. It can do this in two ways. It can try to control the flow of packets with Forward
Explicit Congestion Notification (FECN), which is a bit set in a packet to notify a receiving
interface device that it should initiate congestion avoidance procedures. Backward Explicit
Congestion Notification (BECN) is a bit set to notify a sending device to stop sending frames
because congestion avoidance procedures are being initiated.
A second way to inform the end devices that there is congestion is through the Local
Management Interface (LMI). This specification describes special management frames sent to
access devices.
A Discard Eligibility Bit (DE Bit) is set by the public frame relay network in packets the
device is attempting to transmit above the CIR or the CBIR for any length of time. It will also
be set if there is high network congestion. This means that if data must be discarded, packets
with the DE bit set should be dropped before other packets.
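The DLCI, FECN, BECN, and DE fields all live in the two-byte frame relay address field. A decoding sketch, assuming the standard Q.922 bit layout (DLCI high bits in the first byte, low bits plus the congestion bits in the second):

```python
def parse_fr_header(b0: int, b1: int):
    """Decode the two-byte frame relay address field: a 10-bit DLCI
    plus the FECN, BECN and DE congestion-management bits."""
    return {
        "dlci": ((b0 >> 2) << 4) | (b1 >> 4),  # 6 high bits + 4 low bits
        "fecn": (b1 >> 3) & 1,   # congestion seen ahead of the frame
        "becn": (b1 >> 2) & 1,   # congestion on the reverse path
        "de":   (b1 >> 1) & 1,   # discard this frame first if needed
    }
```

For example, bytes 24 and 67 decode to DLCI 100 with the DE bit set and no congestion notifications.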
Notice that the network itself has no way to enforce congestion flow control. It is up to the
end device to support and obey these codes. When all is said and done, the frame travels to its
destination where it is disassembled by the receiving FRAD, and data is passed to the user.
X.25
There are major differences between frame relay and X.25 data networks. See the table below
for a brief summary:
X.25: Has both a link level and a packet level.
Frame relay: Is a simple data link protocol.
X.25: The link level consists of a data link protocol called LAPB. LAPB information
frames are numbered and acknowledged.
Frame relay: Provides basic data transfer that doesn’t guarantee reliable delivery of
data. Frames are not numbered or acknowledged.
X.25: Circuits are defined at the packet layer.
Frame relay: Circuits are identified by an address field in the frame.
X.25: Complex rules govern the flow of data across the interface, and these rules
often interrupt and impede the flow of data.
Frame relay: Data is packaged into simple frames and transmitted toward its
destination whenever there is bandwidth available to carry it.
ATM
ATM is a cell-switching and multiplexing technology that combines the benefits of circuit
switching (guaranteed capacity and constant transmission delay) with those of packet
switching (flexibility and efficiency for intermittent traffic). It provides scalable bandwidth
from a few megabits per second (Mbps) to many gigabits per second (Gbps).
ATM is a layered architecture allowing multiple services like voice, data and video, to be
mixed over the network. Three lower level layers have been defined to implement the
features of ATM.
The Adaptation layer assures the appropriate service characteristics and divides all types of
data into the 48 byte payload that will make up the ATM cell.
The ATM layer takes the data to be sent and adds the 5 byte header information that assures
the cell is sent on the right connection.
The Physical layer defines the electrical characteristics and network interfaces. This layer
"puts the bits on the wire." ATM is not tied to a specific type of physical transport.
Three types of ATM services exist: permanent virtual circuits (PVC), switched virtual circuits
(SVC), and connectionless service (which is similar to SMDS).
A PVC allows direct connectivity between sites. In this way, a PVC is similar to a leased line.
Among its advantages, a PVC guarantees availability of a connection and does not require
call setup procedures between switches. Disadvantages of PVCs include static connectivity
and manual setup.
An SVC is created and released dynamically and remains in use only as long as data is being
transferred. In this sense, it is similar to a telephone call. Dynamic call control requires a
signaling protocol between the ATM endpoint and the ATM switch. The advantages of SVCs
include connection flexibility and call setup that can be handled automatically by a
networking device. Disadvantages include the extra time and overhead required to set up the
connection.
ATM networks are fundamentally connection oriented, which means that a virtual channel
(VC) must be set up across the ATM network prior to any data transfer. (A virtual channel is
roughly equivalent to a virtual circuit.)
Two types of ATM connections exist: virtual paths, which are identified by virtual path
identifiers, and virtual channels, which are identified by the combination of a VPI and a
virtual channel identifier (VCI).
A virtual path is a bundle of virtual channels, all of which are switched transparently across
the ATM network on the basis of the common VPI. All VCIs and VPIs, however, have only
local significance across a particular link and are remapped, as appropriate, at each switch.
Thus, ATM uses a "cloud" system much like that of Frame Relay. Unlike Frame Relay,
however, ATM uses a fixed 53-byte cell length, replaces DLCIs with VPIs and
VCIs, and offers a guaranteed service level.
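The label remapping described above can be sketched as a lookup table keyed by the incoming port and VPI/VCI pair. This is only an illustration of the concept; the port numbers and label values below are invented, not taken from any real switch:

```python
# Hypothetical ATM switching table: maps (in_port, in_vpi, in_vci)
# to (out_port, out_vpi, out_vci). Labels have only local (per-link)
# significance, so each switch rewrites them on the way out.
SWITCH_TABLE = {
    (1, 0, 100): (3, 5, 42),   # VC arriving on port 1 leaves port 3 relabeled
    (2, 1, 200): (3, 5, 43),
}

def switch_cell(in_port, vpi, vci, payload):
    """Look up the incoming labels and rewrite them for the outgoing link."""
    out_port, out_vpi, out_vci = SWITCH_TABLE[(in_port, vpi, vci)]
    return out_port, out_vpi, out_vci, payload

# A cell entering port 1 with VPI 0 / VCI 100 exits port 3 with VPI 5 / VCI 42.
print(switch_cell(1, 0, 100, b"x" * 48))
```

Because every switch along the path performs this remapping, the VPI/VCI values seen at the two endpoints of a virtual channel are generally different.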
HDLC
One of the oldest data communications protocols in use today is IBM’s Synchronous
Data Link Control (SDLC). SDLC defined rules for transmitting data across a digital
line and was used for long distance communications between terminals and computers.
IBM submitted SDLC to the standards organizations, which revised and
generalized it into the High-Level Data Link Control (HDLC) protocol. HDLC is the
basis of a family of related protocols.
The LAPD protocol also belongs to the High-Level Data Link Control (HDLC) family of protocols.
Background
The Point-to-Point Protocol (PPP) originally emerged as an encapsulation protocol for
transporting IP traffic over point-to-point links. PPP also established a standard for the
assignment and management of IP addresses, asynchronous (start/stop) and bit-oriented
synchronous encapsulation, network protocol multiplexing, link configuration, link quality
testing, error detection, and option negotiation for such capabilities as network-layer address
negotiation and data-compression negotiation. PPP supports these functions by providing an
extensible Link Control Protocol (LCP) and a family of Network Control Protocols (NCPs) to
negotiate optional configuration parameters and facilities. In addition to IP, PPP supports
other protocols, including Novell's Internetwork Packet Exchange (IPX) and DECnet. This
chapter provides a summary of PPP's basic protocol elements and operations. PPP is capable
of operating across any DTE/DCE interface. The only absolute requirement imposed by PPP
is the provision of a duplex circuit, either dedicated or switched, that can operate in either an
asynchronous or synchronous bit-serial mode, transparent to PPP link-layer frames. PPP does
not impose any restrictions regarding transmission rate other than those imposed by the
particular DTE/DCE interface in use.
PPP Components
PPP provides a method for transmitting datagrams over serial point-to-point links. PPP
contains three main components:
A method for encapsulating datagrams over serial links---PPP uses the High-Level
Data Link Control (HDLC) protocol as a basis for encapsulating datagrams over
point-to-point links.
An extensible LCP to establish, configure, and test the data-link connection.
A family of NCPs (Network Control Protocols) for establishing and configuring
different network-layer protocols---PPP is designed to allow the simultaneous use of
multiple network-layer protocols.
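The multiplexing of several network-layer protocols over one link rests on the two-byte protocol field in each frame. The handler strings below are placeholders, but the protocol numbers are the standard assigned PPP values:

```python
# Illustrative demultiplexing of PPP's two-byte protocol field.
# 0x0021 = IP, 0x002B = Novell IPX, 0xC021 = LCP, 0x8021 = IPCP.
HANDLERS = {
    0x0021: "IPv4 datagram",
    0x002B: "Novell IPX packet",
    0xC021: "Link Control Protocol frame",
    0x8021: "IP Control Protocol (NCP for IP) frame",
}

def dispatch(protocol: int) -> str:
    # An unrecognized protocol would normally trigger an LCP Protocol-Reject.
    return HANDLERS.get(protocol, "unknown protocol (send LCP Protocol-Reject)")
```

Because the receiver demultiplexes on this field, IP, IPX, and other protocols can share the same link simultaneously, each configured by its own NCP.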
General Operation
To establish communications over a point-to-point link, the originating PPP first sends LCP
frames to configure and (optionally) test the data link. After the link has been established and
optional facilities have been negotiated as needed by the LCP, the originating PPP sends NCP
frames to choose and configure one or more network-layer protocols. When each of the
chosen network-layer protocols has been configured, packets from each network-layer
protocol can be sent over the link. The link will remain configured for communications until
explicit LCP or NCP frames close the link, or until some external event occurs (for example,
an inactivity timer expires or a user intervenes).
The following descriptions summarize the PPP frame fields illustrated in Figure 13-1:
Flag---A single byte that indicates the beginning or end of a frame. The flag field
consists of the binary sequence 01111110.
Address---A single byte that contains the binary sequence 11111111, the standard
broadcast address. PPP does not assign individual station addresses.
Control---A single byte that contains the binary sequence 00000011, which calls for
transmission of user data in an unsequenced frame. A connectionless link service
similar to that of Logical Link Control (LLC) Type 1 is provided.
Protocol---Two bytes that identify the protocol encapsulated in the information field
of the frame. The most up-to-date values of the protocol field are specified in the most
recent Assigned Numbers Request for Comments (RFC).
Data---Zero or more bytes that contain the datagram for the protocol specified in the
protocol field. The end of the information field is found by locating the closing flag
sequence and allowing 2 bytes for the FCS field. The default maximum length of the
information field is 1,500 bytes. By prior agreement, consenting PPP implementations
can use other values for the maximum information field length.
Frame Check Sequence (FCS)---Normally 16 bits (2 bytes). By prior agreement,
consenting PPP implementations can use a 32-bit (4-byte) FCS for improved error
detection.
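The field layout above can be sketched in code. The FCS algorithm below is the 16-bit FCS defined in RFC 1662 (a reflected CRC-16 with polynomial 0x8408); byte stuffing of flag-like octets inside the frame is omitted for brevity, and the payload is a placeholder:

```python
def ppp_fcs16(data: bytes) -> int:
    """16-bit FCS per RFC 1662: reflected CRC-16, poly 0x8408, init 0xFFFF."""
    fcs = 0xFFFF
    for byte in data:
        fcs ^= byte
        for _ in range(8):
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs ^ 0xFFFF

def build_ppp_frame(protocol: int, payload: bytes) -> bytes:
    """Flag, Address (0xFF), Control (0x03), Protocol, Data, FCS, Flag."""
    body = bytes([0xFF, 0x03]) + protocol.to_bytes(2, "big") + payload
    fcs = ppp_fcs16(body)  # FCS covers address, control, protocol, and data
    # The FCS is transmitted least-significant octet first.
    return b"\x7e" + body + fcs.to_bytes(2, "little") + b"\x7e"

frame = build_ppp_frame(0x0021, b"placeholder IP datagram")  # 0x0021 = IP
```

A receiver finds the end of the data field just as the text describes: it locates the closing flag and counts back two bytes for the FCS.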
First, link establishment and configuration negotiation occurs. Before any network-
layer datagrams (for example, IP) can be exchanged, LCP first must open the
connection and negotiate configuration parameters. This phase is complete when a
configuration-acknowledgment frame has been both sent and received.
This is followed by link-quality determination. LCP allows an optional link-quality
determination phase following the link-establishment and configuration-negotiation
phase. In this phase, the link is tested to determine whether its quality is
sufficient to bring up network-layer protocols. LCP can delay
transmission of network-layer protocol information until this phase is complete.
At this point, network-layer protocol configuration negotiation occurs. After LCP has
finished the link-quality determination phase, network-layer protocols can be
configured separately by the appropriate NCP and can be brought up and taken down
at any time. If LCP closes the link, it informs the network-layer protocols so that they
can take appropriate action.
Finally, link termination occurs. LCP can terminate the link at any time. This usually
will be done at the request of a user but can happen because of a physical event, such
as the loss of carrier or the expiration of an idle-period timer.
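The phase progression just described can be sketched as a small state table. The phase and event names below are simplified labels for illustration, not the full state machine specified in RFC 1661:

```python
# Simplified PPP phase progression: establish -> (optional quality test)
# -> network -> terminate. Unknown events leave the phase unchanged.
def next_phase(current: str, event: str) -> str:
    transitions = {
        ("dead", "carrier-up"): "establish",                 # LCP opens the link
        ("establish", "lcp-configured"): "quality-test",     # optional phase
        ("quality-test", "quality-ok"): "network",           # NCPs come up
        ("network", "lcp-close"): "terminate",               # user or idle timer
        ("terminate", "carrier-down"): "dead",
    }
    return transitions.get((current, event), current)
```

Note that the quality-test phase may be skipped entirely, and that LCP can move the link to termination from any phase, which is why the protocols above it must be told when the link closes.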
Three classes of LCP frames exist. Link-establishment frames are used to establish and
configure a link. Link-termination frames are used to terminate a link, while link-
maintenance frames are used to manage and debug a link. These frames are used to
accomplish the work of each of the LCP phases.
PPP Multilink