
Signal Transmission


Signaling is the way that data is transmitted across a medium. This is done using electrical
energy to communicate. The information to be transmitted can exist in two formats, analog or
digital. Analog data is continuously changing and represents all values within a range. Digital
data, on the other hand, consists of discrete states: ON or OFF, a 1 or a 0.

Current-State Encoding

Data is encoded in current-state strategies by the presence or absence of a signal
characteristic or state. For example, a +5 voltage might represent a binary 0, while a –5
voltage could represent a binary 1.

The following encoding techniques use current-state encoding:

 Unipolar
 Polar
 Return-to-Zero
 Biphase

State-Transition Encoding

State-transition encoding methods differ from current-state encoding methods in that they use
a transition in the signal to represent data, as opposed to encoding data by means of a
particular voltage level or state.

The following encoding techniques use state-transition encoding:

 Bipolar-Alternate Mark Inversion (AMI)
 Binary Eight Zero Substitution (B8ZS)
 Non-Return-to-Zero (NRZ)
 Manchester
 Differential Manchester
 Biphase Space (FM-0)

Encoding Schemes

 Unipolar – Uses two levels of signal for encoding data. The presence of a signal,
either positive or negative (+5 or –5, for example), represents one value, while the
absence of a signal represents the other value.
 Polar – Similar to unipolar, except that it can use both positive and negative voltages
for encoding data. For example, a –3V could represent a 1 while a +3V could
represent a 0.
 Return-to-Zero (RZ) – An encoding
scheme that uses a signal transition to zero in the middle of each bit interval. A
positive voltage transitioning to zero could represent a 1 while a negative voltage
transitioning to zero could represent a 0, for example.

 Biphase – This encoding scheme requires at least one mid-bit transition per bit
interval. Manchester and Differential Manchester are two examples.
 Manchester – Used most commonly on Ethernet LANs, this encoding scheme uses a
low-to-high bit transition to represent one value and a high-to-low bit transition to
represent another value.
 Differential Manchester – This encoding scheme is also considered to be a biphase
encoding scheme because it uses a mid-bit transition. In this scheme, however, the
mid-bit transition doesn’t represent data; it is used for timing. The actual data is
represented by a transition at the beginning of the bit interval. The presence of a
transition indicates one value while the absence of a transition represents another
value. This scheme is used on Token Ring LANs.
 Non-Return-to-Zero (NRZ) – The main difference between this scheme and
Differential Manchester is that NRZ, unlike Differential Manchester, does not use a
mid-bit transition for clocking.

 AMI = Alternate Mark Inversion. This is the original method of formatting T1
datastreams. In AMI, a zero is sent by doing nothing: at the time when a pulse might
otherwise be sent, no pulse is sent. A one is sent on an AMI T1 by sending a pulse.
The alternating mark rule means that if the last pulse sent was of positive polarity, the
next pulse sent must be negative. If an AMI T1 device receives two pulses in a row of
the same polarity, a bipolar violation (BPV) has occurred. Thus AMI has a
rudimentary error-checking capability, with a 50% probability of detecting altered,
inserted, or lost bits end to end. Since a T1 uses a single pair of wires in each direction,
and the only signals on those wires are the pulses that represent data, the only way
to recover clock and retain synchronization on a T1 is by detecting the rate at which
pulses are received. All of the equipment in a T1 circuit must operate at the
same rate, because each device must sense the line at the correct time in order
to determine whether a pulse (1) or non-pulse (0) has been received at each bit time. Since
only ones are sent as pulses and zeroes are represented by doing nothing, if too many
zeroes are sent in a row there will be no pulses on the T1 at all, and the clock circuitry
in all of the hardware will rapidly fall out of synchronization. The design of AMI
therefore requires that a certain ONES DENSITY be maintained: a certain minimum of the
bits over a certain period of time must be guaranteed to be a one (pulse). This is why AMI
circuits require density enforcement. Briefly stated: on average, one bit in eight
must be a one, and no more than a certain number of zeroes (the limit varies
according to the specific standard) may be sent in a row. To satisfy the ones density
requirement on an AMI T1, one bit out of every eight is taken away from the user, not
available for voice or data traffic, and that 1 bit in 8 is always sent as a one. Once this
has been done, the ones density requirement is satisfied and the user is free to send any
data pattern in the remaining bandwidth. The rate of a T1 is 1.544 megabits per second; 8K
is used for framing, leaving 1.536Mbps. The 1.536Mbps is usually divided into 24
timeslots (DS0s), or "channels," each being inherently 64Kbps. Taking the 1 bit in
8 that is reserved to satisfy ones density leaves the user with 56K per timeslot. When
configuring a CSU for an AMI T1, the selection for density must be other than NONE.

 B8ZS = Binary Eight Zero Substitution. This is the improved method of formatting
T1 datastreams. In B8ZS, a method was devised to replace long strings of zeroes with
a sequence of intentional bipolar violations. These substitutions include pulses, so
sufficient pulses are always on the line to retain synchronization. As in AMI, a one
is sent on a T1 by sending a pulse, and the alternating mark rule means that if the last
pulse sent was of positive polarity, the next pulse sent must be negative. If a T1
device receives two pulses in a row of the same polarity, a bipolar violation (BPV)
has occurred. In B8ZS, whenever the user data contains eight zeroes in a row, a
specific combination of valid pulses and bipolar violations is substituted for that
string of eight zeroes; the user therefore no longer needs to give up one bit in eight as
in AMI. When configuring a CSU for a B8ZS T1, the selection for density should be NONE.
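The AMI and B8ZS rules above can be sketched in a few lines of Python. This is an illustrative model only: pulses are represented as +1/−1 and a non-pulse as 0, and the function names are made up for this sketch.

```python
def ami_encode(bits):
    """AMI: a zero sends no pulse; a one sends a pulse whose polarity
    alternates with the previous pulse (the alternating mark rule)."""
    out, last = [], -1           # so the first one is sent as +1
    for b in bits:
        if b:
            last = -last
            out.append(last)
        else:
            out.append(0)
    return out

def b8zs_encode(bits):
    """B8ZS: like AMI, but each run of eight zeroes is replaced by a
    fixed pattern containing two intentional bipolar violations."""
    out, last, i = [], -1, 0
    while i < len(bits):
        if bits[i:i + 8] == [0] * 8:
            # substitution 000VB0VB: V repeats the last pulse polarity
            # (a violation), B alternates normally (a valid pulse)
            out += [0, 0, 0, last, -last, 0, -last, last]
            i += 8               # final pulse polarity equals `last`
        elif bits[i]:
            last = -last
            out.append(last)
            i += 1
        else:
            out.append(0)
            i += 1
    return out

def count_bpvs(pulses):
    """Count bipolar violations: two successive pulses, same polarity."""
    bpvs, prev = 0, 0
    for p in pulses:
        if p:
            bpvs += p == prev
            prev = p
    return bpvs
```

With these definitions, `b8zs_encode([1] + [0]*8 + [1])` replaces the eight zeroes with a pattern containing exactly two bipolar violations, while plain AMI output never contains any.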

Bit Synchronization
Asynchronous

 Uses start and stop bits to signal the beginning and end of each package of data

Synchronous

 Guaranteed State Change


 Separate Clock Signals
 Oversampling

Analog signaling:

Analog waves have three characteristics used to describe them:

 Amplitude – measures the strength of the signal, or the height of the wave.
 Frequency – the number of cycles the wave completes in a given amount of time;
the reciprocal of the time it takes to complete one cycle.
 Phase – the relative state of one wave when timing began. This is measured in
degrees.

Analog Signal modulation:


All three of these characteristics can be used to encode data in an analog signal. For example,
a higher amplitude could represent a 1 and a lower amplitude could represent a 0. Using
frequency, a 0 could be represented with a higher frequency while a 1 could be represented
with a lower frequency. There are three main strategies for encoding data using analog
signals. Amplitude shift keying and frequency shift keying are both considered current-state
encoding schemes because a measurement is made to detect a particular state or key. Phase
shift keying, on the other hand, is a state-transition encoding scheme because it relies on the
presence or absence of a transition from one phase to another.

 Amplitude Shift Keying (ASK)
 Frequency Shift Keying (FSK)
 Phase Shift Keying (PSK)
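A minimal sketch of the three keying strategies; the amplitudes, frequencies, and phase values below are arbitrary examples, not values from any standard:

```python
import math

def ask(bit, t, f=1.0):
    amp = 1.0 if bit else 0.5        # the amplitude carries the data
    return amp * math.sin(2 * math.pi * f * t)

def fsk(bit, t):
    f = 2.0 if bit else 1.0          # the frequency carries the data
    return math.sin(2 * math.pi * f * t)

def psk(bit, t, f=1.0):
    phase = math.pi if bit else 0.0  # a phase shift carries the data
    return math.sin(2 * math.pi * f * t + phase)
```

Sampling `psk(0, t)` and `psk(1, t)` at the same instant yields values of opposite sign, which is the phase transition a PSK receiver looks for.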

Broadband vs. Baseband Transmission


Baseband

 Uses a single frequency
 Statistical Multiplexing
 Time Division Multiplexing

Broadband

 Frequency Division Multiplexing
 Generally used in fiber optic systems.

Physical Circuits
Basic Terminology

 Local Loop – Often called the last mile, this is the physical wiring between the
demarcation point and the telephone company’s equipment.

 Demarc - A point (such as a jack or cross-connect panel) at which ownership or
responsibility for operating and maintaining facilities passes from one party to
another.

 Demarcation point - The location of the separation of network equipment from
subscriber equipment. Usually occurs at a connecting device. Also called point of
demarcation.

 CPE – Customer premises equipment.
 Central Office - CO - One local Class 5 Switch with lines to customer locations.
(Usually less than 100,000 telephone lines per Central Office.) COs are usually owned
and operated by LECs or BOCs. COs have connections to Tandem (Class 4 Toll)
Offices and often connect directly to other COs and IECs like LDDS WorldCom,
AT&T, MCI, Sprint, etc. A CO is a major equipment center designed to serve the
communications traffic of a specific geographic area. CO coordinates are used in
mileage calculations for local and interexchange service rates. A Non-Conforming CO
is one that does not (yet) support Equal Access.

 Analog signal - An electrical signal that varies continuously in amplitude and
frequency.

 Digital signal - A signal with only two values, normally 0 and 1, during transmission,
unlike an analog signal, whose values constantly vary.

 LEC – Local exchange carrier

 POTS - Plain Old Telephone Service

 Circuit - (1) The physical connection, or path, of channels, conductors, and equipment
between two given points through which an electric current may be established. (2) A
switched or dedicated communications path with a specified bandwidth (transmission
speed/capacity). (3) A communication path or network; usually a pair of channels
providing bidirectional communication.

 Punch Down Block - A block mounted on the wall in a telco closet where cable pairs
are punched down for termination or cross connection.

Analog

Analog Multiplexing

Each voice channel was assigned to a different 4 kHz band. When added together, 24 voice
channels totaled 96 kHz, well within the capacity of twisted pair technology at the time. Due
to restrictions in vacuum tube technology at the time, 100 kHz was actually the top end;
however, 96 kHz was used as a conservative figure. This is known as frequency division
multiplexing.
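The arithmetic behind this frequency division multiplexing, as a quick sketch (the band edges shown are illustrative):

```python
BAND_KHZ = 4                      # one voice channel per 4 kHz band
bands = [(ch * BAND_KHZ, (ch + 1) * BAND_KHZ) for ch in range(24)]
total_khz = bands[-1][1]          # 24 channels * 4 kHz = 96 kHz
```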

Line Noise

Because every wire acts like an antenna, any electrical signal is subject to electromagnetic
interference. Line noise exists uniformly across the circuit. As the analog signal traverses the
line, it starts out strong near the sender and diminishes in quality farther from the sender. The
voice signal can be amplified easily; however, the amplifier cannot separate the signal from
the noise.

TDM

To bring the noise problem under control, the telephone industry developed digital
transmission techniques. Digital techniques allowed for most of the line noise to be filtered
from a signal by using digital signal regenerators. With the advent of the transistor and digital
logic, a new system was developed for multiplexing signals: time division multiplexing. A
time division multiplexor, or channel bank, assigns short time intervals, or time slots, to each
channel in rotation. With the aid of multiplexors, 24 voice channels take turns using the line.

The analog telephone remains the standard; however, its signal must be converted to digital
form. This function is done by a COder-DECoder, or CODEC. Input is an analog voice signal
from 300-3,300 Hz and output is a 64,000 bits per second digital stream.
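The CODEC's 64,000 bps output follows directly from pulse code modulation arithmetic; the figures below are the standard telephony values:

```python
SAMPLE_RATE = 8000       # samples per second, enough for a ~4 kHz voice channel
BITS_PER_SAMPLE = 8      # each sample is quantized to one byte
codec_rate = SAMPLE_RATE * BITS_PER_SAMPLE   # 64,000 bits per second
```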

Why is analog bandwidth limited to 56K?

Loading coils and other filtering equipment limit the frequencies that can be transmitted
across a voice grade analog channel. If we were to draw a graph of the typical telephone
channel (line), we could make the vertical axis signal power (the more power the louder the
signal) and the horizontal axis the signal frequency. A voice grade channel would permit
signal power between frequencies 300 and 3,400 cycles per second. Below 300 and above
3,400, all signal power would be soaked up or absorbed by the telephone network. (The total
frequency range used by the telephone company is 0 to 4,000 cycles per second. Guard bands
limit the effective voice range from 300 to 3,400 cycles per second.)

Now, the line is limited to 64Kbps; however, what about the bandwidth from 56Kbps to
64Kbps? This has been allotted to the telephone companies, per the FCC, for signaling
purposes. All of the digital encoding techniques listed below rob bits from the 64Kbps signal
for signaling, timing, and other information. You will see this concept of robbed bits come up
several times in the rest of the class.

Key words

Local Loops - A local loop is the wire that runs from your facility to the closest telephone
central office. It is generally from 2 to 25 miles of 19 AWG unshielded twisted pair wire.
Today local loops are terminated at one point in a facility. It is then the responsibility of the
facility owner to route the local loop wires to its final destination. Some local loops are trunk
lines that carry more than a single call. Other local loops provide high digital voice access to
the telephone company central office. They may be run over glass fiber links rather than the
more common copper wire.

Loading Coils - Loading Coils are devices placed in analog local loops to assure predictable
electrical signaling over the range of frequencies that carries voice communications on the
telephone network. Loading coils cannot be used on digital transmission links because they
absorb the digital pulses effectively killing the digital signaling.

Channel Banks - Channel banks are multiplexing/demultiplexing analog to digital and digital
to analog conversion devices. A channel bank converts the analog signal from a phone into a
DS-0 64 Kbps digital signal that is time division multiplexed (combined) with other phone
signals and sent over the telephone network. Typically, digital encoding of a voice analog
signal is done using Pulse Code Modulation (PCM).

Digital Switches - In the telephone company central office (the office nearest your facility)
the telephone company has a digital switching system. The digital switching system is a very
large un-manned branch exchange telephone switch. It takes the digital signals from the
channel bank and routes them with other signals across the telephone company wide area
backbone network. The digital switch belongs to your Local Exchange Carrier (LEC).

Trunk Circuits - Trunk circuits are typically high speed (1.544 Mbps and up) digital links
between telephone company central office switches. A phone call is switched from the Local
Exchange Carrier (LEC) central office switch onto a high speed digital trunk that carries the
call to a long distance carrier (an Inter-eXchange Carrier -- IXC) Point of Presence (POP)
location.

Troubleshooting

 Line Impairment - Variations in line quality are typically the cause of low connection
rates. Occasionally, users will experience "a bad line" connection, and have to hang
up and call again. However, if you find that you never or rarely connect at rates above
19200 bps, you will need to investigate the line condition of your connections.
 Trunking

Integrated Services Digital Network (ISDN) (DS0) (BRI)

 Supports 3 digital channels across a local loop
 Two are digital 64Kbps bearer (B) channels
 Bidirectional
 One is a digital 16Kbps data (D) channel
 Always ready to use

Components

 Terminal Adapters (TAs)
 Network-Termination Devices (NT1) – Provides the physical interface between a
four-wire local network that connects to devices and the two-wire local loop. Needs
power.
 Line-Termination Equipment – The physical component that terminates the line at
the telephone company.
 Exchange-Termination Equipment – The logical device that connects the line to the
central office switch.

SPIDs

 Directory number (DN, 10 digits)
 Sharing terminal identifier (2 digits) – Can be from 01 to 32.
 Terminal identifier (TID, 2 digits) – Used to distinguish between terminals that have
the same directory number and sharing terminal identifier.

Switch Types
This refers to variations on implementation of the signaling protocols by different switch
vendors.

Three switch types are commonly used in North America:

 5ESS
 DMS-100
 National ISDN -1 (NI-1)

If the telephone company tells you the switch is 5ESS or DMS-100, ask for the software type.
If they say the switch uses National software, then use NI-1 as the switch type.

Centrex

Centrex (central office exchange service) is a service from local telephone companies in the
United States in which up-to-date phone facilities at the phone company's central (local)
office are offered to business users so that they don't need to purchase their own facilities.
The Centrex service effectively partitions part of its own centralized capabilities among its
business customers. The customer is spared the expense of having to keep up with fast-
moving technology changes (for example, having to continually update their private branch
exchange infrastructure) and the phone company has a new set of services to sell.

In many cases, Centrex has now replaced the private branch exchange. Effectively, the central
office has become a huge branch exchange for all of its local customers. In most cases,
Centrex (which is sold by different names in different localities) provides customers with as
much if not more control over the services they have than PBX did. In some cases, the phone
company places Centrex equipment on the customer premises.

Typical Centrex service includes direct inward dialing (DID), sharing of the same system
among multiple company locations, and self-managed line allocation and cost-accounting
monitoring.

Call Connection Procedures:


Troubleshooting

 CRCs
 Link Down
 Centrex
 Trunking/Switch problems

T1/Primary Rate Interface (PRI) (DS1)

The T1 system was designed to carry 24 digitized telephone calls. Hence the capacity of a T1
line is divided into 48 channels, 24 in each direction. The 24 channels plus framing combine
to give a bandwidth of 1.544 Mbps.

Physical

 Cable segments. Typically 4 wires (two twisted pairs). Two wires to transmit and two
to receive.
 Regenerators that clean up and repeat the signal from one segment to the next.

CSU/DSU (channel service unit/data service unit)


 Terminates the line from the telecommunications network. A 2 or 4 wire line is used
for 56/64 Kbps service, while a 4 wire line is used for T1 service.
 Places the signal on the line and controls the strength of the transmission signal.
 Support loop back tests.
 Provides timing or synchronizes the timing with the timing received on the line.
 Frames the T1 signal. (More on this in a bit)

A specialized DSU device is used to connect to a T3 line as a T3 uses coaxial cable for its
transmission media.

Multiplexing

Although we often think of these channels as flowing across the line together, the bits are
actually transmitted across the line one at a time. One byte from the first call is sent, followed
by one from the next, and so forth. This is done using time division multiplexing. Each call is
assigned a 1-byte time slot. The transmitting device sends one byte for a channel each time
the channel’s time slot comes around.
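Byte-interleaved TDM can be sketched as a simple round robin; the channel bytes here are made-up illustration data:

```python
def tdm_interleave(channels):
    """Send one byte per channel per time slot, in rotation."""
    out = []
    for time_slot in zip(*channels):   # one byte from each channel
        out.extend(time_slot)
    return out

ch1 = [0x41, 0x42]                     # bytes from call 1
ch2 = [0x61, 0x62]                     # bytes from call 2
interleaved = tdm_interleave([ch1, ch2])
# line carries: ch1 byte, ch2 byte, ch1 byte, ch2 byte, ...
```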

DS1 Frames

Twenty-four 64 Kbps channels plus one 8 Kbps signaling (framing) channel.

Can be used for voice calls or data or both.

Uses multiplexors and inverse multiplexors to combine channels as needed.

Multiplexors are typically part of the router or within the PBX.

A DS1 frame consists of a framing bit followed by 24 bytes, one each for each of the 24
channels. Thus, a frame consists of 193 bits. Eight thousand frames are sent per second, the
sampling rate used by the telephone company, giving a total signal rate of 1,544,000 bits per
second. This signal is called digital signal level 1 (DS1). T1 and DS1 are often used
interchangeably, however, T1 is a physical implementation while DS1 defines the format of
the signal transmitted on a T1 line.
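The frame arithmetic above, spelled out:

```python
CHANNELS = 24
FRAME_BITS = 1 + CHANNELS * 8        # 1 framing bit + 24 bytes = 193 bits
FRAMES_PER_SECOND = 8000             # the telco sampling rate
ds1_rate = FRAME_BITS * FRAMES_PER_SECOND   # 1,544,000 bits per second
```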

Packaging the DS1 signal – D4 and ESF

The original packaging that was defined for a DS1 signal was called a D4 superframe, and
consisted of 12 consecutive frames. The 12 framing (F) bits in a D4 superframe contained the
pattern:

100011011100

Telecommunications equipment locked onto this pattern to locate D4 superframes and
maintain alignment.

An improved extended superframe (ESF) was adopted later. It is made up of 24 consecutive
frames. Its framing bits are used for three purposes:
 Alignment – Framing bits from six of the frames repeat the pattern 0 0 1 0 1 1. (This
consumes 2Kbps.)
 Error Checking – Framing bits from six of the frames contain a cyclic redundancy
check (CRC) computed on the previous extended superframe. (This consumes 2Kbps)
 A messaging link – Framing bits from 12 of the frames are used to form a messaging
channel called a facility data link. (This consumes 4Kbps.)
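The framing-bit budget adds up as follows: the F bit position carries 8 Kbps in total (one bit per frame at 8,000 frames per second), and ESF divides its 24 F bits among the three uses above:

```python
F_BIT_RATE = 8000                    # bps carried by the framing-bit position
ESF_F_BITS = 24                      # framing bits per extended superframe
alignment_bps = F_BIT_RATE * 6 // ESF_F_BITS    # 2,000 bps
crc_bps       = F_BIT_RATE * 6 // ESF_F_BITS    # 2,000 bps
fdl_bps       = F_BIT_RATE * 12 // ESF_F_BITS   # 4,000 bps
assert alignment_bps + crc_bps + fdl_bps == F_BIT_RATE
```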

Troubleshooting
CSU/DSU Alarms:

 AIS - Alarm indication signal that is all ones, unframed -- 11111111. Also known as a
Blue Alarm which signals that an upstream failure has occurred
 CRC (Cyclic Redundancy Check) - A method of detecting errors in the serial
transmission of data. A CRC for a block of data is calculated before it is sent, and is
then sent along with the data. A new CRC is calculated on the received data. If the
new CRC does not match the one that was sent along with the data, then an error
has occurred.
 Yellow Alarm - A yellow alarm indicates a transmission problem at the remote
CSU/DSU. A specific bit pattern identifies the alarm; the mechanism differs
depending on the frame format. Of course, for the remote CSU/DSU to signal an
alarm, the basic T1 circuit has to be operational.
 Loss of Synchronization - If the CSU/DSU can’t locate the synchronization flag over
some number of frames, it will indicate that it lost "synch" with the remote
CSU/DSU.
 Red Alarm - A red alarm indication warns that the CSU/DSU has lost synchronization
over a longer period of time.
 Bipolar Violations - Indicates that unintentional bipolar violations have been
detected on the circuit, that is, two successive pulses of the same polarity where the
negative and positive states should alternate. Used in digital transmission facilities.
 Loss of Service - When an insufficient number of ‘1’ bits or pulses are received, the
CSU/DSU may declare the circuit to be out of service.

 Troubleshooting Serial Links
http://www.cisco.com/univercd/cc/td/doc/cisintwk/itg_v1/itg_seri.htm
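The CRC mechanism described above (compute before sending, recompute on receipt, compare) can be sketched as follows. The CRC-16/CCITT polynomial used here is an arbitrary illustrative choice; the CRC actually carried in ESF framing is a 6-bit code.

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC: shift left, XOR in the polynomial on carry-out."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

sent_crc = crc16_ccitt(b"frame payload")
# the receiver recomputes; a mismatch means the data changed in transit
assert crc16_ccitt(b"frame payload") == sent_crc
assert crc16_ccitt(b"frame pAyload") != sent_crc
```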

Telephone company testing

 3 in 24 - This is a 24 bit pattern which contains three ones. The longest string of
consecutive zeros is fifteen (15), and the average ones density is 12.5%. This pattern
is used primarily to test timing (clock) recovery and may be used framed or unframed
for that purpose. This pattern will force a B8ZS code if transmitted through equipment
optioned for B8ZS*.
 1:7 ( also written as 1 in 8 ) - This is an eight bit pattern which contains a single one.
This pattern is used primarily to test timing (clock) recovery and may be used framed
or unframed for that purpose. When transmitted unframed the maximum number of
consecutive zeros is seven (7) and the pattern will not force a B8ZS code. When
transmitted framed, framing bits force the maximum number of consecutive zeros to
eight (8), and the pattern will force a B8ZS code from equipment optioned for B8ZS.*
 QRSS - This pseudo-random sequence is based on a twenty bit shift register and
repeats every 1,048,575 bits. Consecutive zeros are suppressed to no more than
fourteen (14) in the pattern and it contains both high and low density sequences.
QRSS is the most commonly used test pattern for T-1 maintenance and installation.
This pattern will stress timing recovery, ALBO and equalizer circuits, but it also
simulates customer traffic on the circuit. The QRSS pattern can be used framed or
unframed and will force a B8ZS code in circuits optioned as B8ZS.*
 ALL ONES - This pattern is composed of ones only, and causes line driver circuitry to
consume the maximum amount of power. In a circuit with repeaters, this pattern will
verify that the dc power is regulated correctly. When transmitted unframed, an all
ones pattern is defined in some networks as an Alarm Indication Signal (AIS). An
unframed all ones signal may also be referred to as a "Blue Alarm" and is sent
forward by a device that has lost its input signal. Framed all ones is often used as an
idle condition on a circuit that is not yet in service. Thus, all ones is the most common
pattern found on a circuit during installation. This pattern will not have a B8ZS code
present, even if the circuit is optioned for B8ZS.
 ALL ZEROS - This pattern is composed of zeros only and must only be used on
circuits optioned for B8ZS. The pattern verifies that all circuit elements are optioned
for B8ZS and should be used whenever a B8ZS circuit is under test.
 T1-DALY and 55 OCTET - Each of these patterns contains fifty-five (55) eight-bit
octets of data in a sequence that changes rapidly between low and high density. These
patterns are used primarily to stress the ALBO and equalizer circuitry but they will
also stress timing recovery. 55 OCTET has fifteen (15) consecutive zeroes and can
only be used unframed without violating ones density requirements. For framed
signals, the T1-DALY pattern should be used. Both patterns will force a B8ZS code in
circuits optioned for B8ZS*.

*Note: When a B8ZS code is injected into a test pattern that contains a long string of
zeros, the pattern is no longer testing to the full consecutive zero requirement. Circuit
elements, such as line repeaters, that are intended to operate with or without B8ZS
should be tested without B8ZS.
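The density bookkeeping behind these patterns is easy to sketch. The 3-in-24 arrangement below is one illustrative placement of the three ones that satisfies the stated properties, not necessarily the exact standard bit order:

```python
def density_stats(bits):
    """Return (fraction of ones, longest run of consecutive zeroes)."""
    longest = run = 0
    for b in bits:
        run = 0 if b else run + 1
        longest = max(longest, run)
    return sum(bits) / len(bits), longest

three_in_24 = [0, 1, 0, 0, 0, 1] + [0] * 15 + [1, 0, 0]
density, longest_zeros = density_stats(three_in_24)
# 3 ones in 24 bits = 12.5% density, longest zero run of fifteen
one_in_8 = [1, 0, 0, 0, 0, 0, 0, 0]   # the unframed 1:7 pattern
```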

T3/DS3

Consists of 672 sixty-four Kbps channels, or 28 DS1s.

Verio typically uses these as a channelized T3 and sells each of the 28 lines as a separate T1.
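The channel count works out as follows. (The DS3 line rate of 44.736 Mbps is the standard figure; the capacity above the 43.008 Mbps of DS0 payload is multiplexing overhead.)

```python
DS1_PER_DS3 = 28
DS0_PER_DS1 = 24
ds0_channels = DS1_PER_DS3 * DS0_PER_DS1   # 672 channels
payload_bps = ds0_channels * 64_000        # 43,008,000 bps of DS0 payload
# the remaining DS3 line capacity carries framing and stuffing bits
```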

Multiplexing Lower Level Signals into a DS3 Signal

Lower level signals can be multiplexed into a payload of a DS3 M-frame in a number of
ways. For example, one way is to take the input of 28 complete DS1 signals and send these
into a multiplexor. Each consists of 1.544 Mbps, and includes the T1 framing bits. These
signals are byte interleaved into a DS3 signal at a multiplexor. This direct multiplexing
scheme is called the synchronous DS3 M13 multiplex format.

An alternative way to pack the DS1s into a DS3 is to bit-interleave groups of four T1s into
DS2 signals, and then bit-interleave seven DS2 signals into a DS3 signal. This is called the
M23 multiplex format.

Troubleshooting

See T1/PRI troubleshooting above.

SMDS

Switched Multimegabit Data Service (SMDS) is a connectionless high speed digital network
service based on cell relay for end-to-end application usage. This allows for a logical
progression to ATM if the need arises. Switched means it can be used to reach multiple
destinations with a single physical connection. Originally rolled out in December 1991,
SMDS allows transport of mixed data, voice, and video on the same network.

SMDS provides higher speeds (56kbps - 34Mbps) than Frame Relay or ISDN and is a cross
between Frame Relay and ATM. It uses the same 53 byte cell transmission technology as
ATM but differs from Frame Relay in that destinations are dynamic (not predefined). This
allows data to travel over the least congested route. However, it does provide some of the
same benefits as Frame Relay including:

 Protocol transparency
 Inexpensive meshed redundancy
 Reliability
 High speeds

There are 6 implementations of SMDS that currently exist (that I know of):

1) 1.17Mbps SIP (SMDS Interface Protocol) - a special (T1) SMDSU (not
CSU/DSU) must be used. Common SMDSUs are Kentrox and Digital Link.

2) 1.536Mbps DXI (a regular T1 CSU/DSU is used)

3) 4,10,16,25,34Mbps - a special (T3) SMDSU is used. Common SMDSUs are
Kentrox and Digital Link.

4) T3 DXI (45Mbps) - I don't know much about this because B.A. doesn't sell
it (to my knowledge). I do know that it does not use SIP and a normal T3
CSU/DSU is used.

5) ATM to SMDS (I don't know anything about this. Bell Atlantic is not
selling this to my knowledge, however it exists.)

6) 64Kbps SMDS - not very common.


You can probably skip over the DQDB section; it's not really necessary
information, as I've only had the issue come up once and that was several
years ago. As you'll see from the document, SIP is Layers 1, 2, and a
little bit of 3.

There used to be an organization called the SMDS Interest Group (who I
think wrote the SMDS standards), but the URL I had for their site is no
longer valid and I can not find a new one.

Simple router config (from ucsc1, but made simpler):

interface Hssi0/1/0
 description Bell Atlantic SMDS CID: 3QCDQ650002
 ip address 130.94.46.3 255.255.255.128
 no ip redirects
 no ip directed-broadcast
 no ip proxy-arp ! THIS IS VERY IMPORTANT. WITHOUT IT, THE ROUTER ACTS AS A
                 ! PROXY FOR ARP RESPONSES FOR OTHER ROUTERS
                 ! AND GIVES ITS OWN HARDWARE ADDR INSTEAD OF THEIRS
 encapsulation smds
 smds address c121.5215.1279 ! Unique "single-cast" address assigned by telco
 smds multicast ARP e101.2150.2129 130.94.46.0 255.255.255.128
 smds multicast IP e101.2150.2129 130.94.46.0 255.255.255.128
 ! The multicast address e101... is assigned by the telco
 ! and is used to "group" the circuits together.
 ! This is what makes ARP work. BOTH IP and ARP lines
 ! must be present in the config.
 smds enable-arp ! Tells the router to use ARP
 crc 32 ! CRC is set by telco on the switch; either 16 or 32.

Customer router has the same config (T1s would be on a serial interface
though). For some reason in the past, I've had to use "no smds dxi-mode"
and that was with a DXI T1 (strange enough as it looks); otherwise, the
interface went up/down. I don't know if that was a bug or not... you might
want to check with the guys currently config'ing customer routers to see if
they have run into that problem more recently.

For more information about SMDS, check out Cisco’s web site:

http://www.cisco.com/univercd/cc/td/doc/product/software/ios11/rbook/rsmds.htm

http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/smds.htm

Troubleshooting

 ARP tables – viewing the layer 2 link and resetting interfaces.

Layer 2 – Data Link Layer


Bridging and Switching

Bridges and switches are data communications devices that operate principally at Layer 2 of
the OSI reference model. As such, they are widely referred to as data link layer devices.
Bridging and switching occur at the link layer, which controls data flow, handles transmission
errors, provides physical (as opposed to logical) addressing, and manages access to the
physical medium. Bridges provide these functions by using various link-layer protocols that
dictate specific flow control, error handling, addressing, and media-access algorithms.
Examples of popular link-layer protocols include Ethernet, Token Ring, and FDDI.

Bridges and switches are not complicated devices. They analyze incoming frames, make
forwarding decisions based on information contained in the frames, and forward the frames
toward the destination. In some cases, such as source-route bridging, the entire path to the
destination is contained in each frame. In other cases, such as transparent bridging, frames are
forwarded one hop at a time toward the destination.

Upper-layer protocol transparency is a primary advantage of both bridging and switching. Because both device types operate at the link layer, they are not required to examine upper-layer information. This means that they can rapidly forward traffic representing any network-layer protocol.

Bridges are capable of filtering frames based on any Layer 2 fields. A bridge, for example,
can be programmed to reject (not forward) all frames sourced from a particular network.
Because link-layer information often includes a reference to an upper-layer protocol, bridges
usually can filter on this parameter. Furthermore, filters can be helpful in dealing with
unnecessary broadcast and multicast packets.

By dividing large networks into self-contained units, bridges and switches provide several advantages. Because only a certain percentage of traffic is forwarded, a bridge or switch diminishes the traffic experienced by devices on all connected segments. The bridge or switch acts as a firewall against some potentially damaging network errors, and both accommodate communication among a larger number of devices than would be supported on any single LAN connected to the bridge. Bridges and switches also extend the effective length of a LAN, permitting the attachment of distant stations that could not previously be connected.

Although bridges and switches share most relevant attributes, several distinctions
differentiate these technologies. Switches are significantly faster because they switch in
hardware, while bridges switch in software. Both can interconnect LANs of unlike bandwidth:
a 10-Mbps Ethernet LAN and a 100-Mbps Ethernet LAN, for example, can be connected
using a switch. Switches also support higher port densities than bridges. Some switches
support cut-through switching, which reduces latency and delay in the network, while
bridges support only store-and-forward switching. Finally, switches reduce collisions
on network segments because they provide dedicated bandwidth to each segment.

Frame Relay

Frame relay is a packet-switching protocol based on X.25 and ISDN standards. Unlike X.25,
however, which assumed low-speed, error-prone lines and had to perform error correction,
frame relay assumes error-free lines. By leaving error correction and flow control
to the end points (customer premises equipment), frame relay has lower overhead
and can move variable-sized data packets at much higher rates.

Each location gains access to the frame relay network through a Frame Relay Access Device
(FRAD). A router with frame relay capability is one example. The FRAD is connected to the
nearest carrier point-of-presence (POP) through an access link, usually a leased line. A port
on the edge switch provides entry into the frame relay network.

FRADs assemble the data to be sent between locations into variable-sized frame relay frames,
like putting a letter in an envelope. Each frame contains the address of the target site, which is
used to direct the frame through the network to its proper destination. Once the frame enters
the shared network cloud or backbone, any number of networking technologies can be
employed to carry it.

The path defined between the source and the destination sites is known as a virtual circuit.
While a virtual circuit defines a path between two sites, no backbone bandwidth is actually
allocated to that path until the devices need it. Frame relay supports both permanent and
switched virtual circuits.

A Permanent Virtual Circuit (PVC) is a logical point-to-point circuit between sites through
the public frame relay cloud. PVCs are permanent in that they are not set up and torn down
with each session. They may exist for weeks, months or years, and have assigned end points
which do not change. The PVC is available for transmitting and receiving all the time and, in
that regard, is analogous to a leased line.
In contrast, a Switched Virtual Circuit (SVC) is analogous to a dial-up connection. It is a
duplex circuit, established on demand, between two points. Existing only for the duration of
the session, it is set up and torn down like a telephone call. FRADs which support SVCs
perform the call establishment procedures. Currently, all public frame relay service providers
offer PVCs, while only a very small number offer SVCs.

By supporting several PVCs simultaneously, frame relay can directly connect multiple sites
through a single physical connection. (In contrast, a leased line network would require
multiple physical connections, one for each site.)

A Data Link Connection Identifier (DLCI), assigned by the service provider, identifies each
PVC. A header in each frame contains the DLCI, indicating which virtual circuit the frame
should use.

The real benefit of frame relay comes from its ability to dynamically allocate bandwidth and
handle bursts of peak traffic. When a particular PVC is not using backbone bandwidth it is
"up for grabs" by another.

When purchasing PVCs, the bandwidth or Committed Information Rate (CIR) must be
specified. The CIR is the average throughput the carrier guarantees to be always available for
a particular PVC.
A device can burst up to the Committed Burst Information Rate (CBIR) and still expect the
data to get through. The duration of a burst transmission should be short, less than three or
four seconds. If long bursts persist, then a higher CIR should be purchased.

Devices using the extra free bandwidth available do run a risk: any data beyond the CIR is eligible for discard, depending on network congestion. The greater the network congestion, the greater the risk that frames transmitted above the CIR will be lost. While the risk is typically very low up to the CBIR, if a frame is discarded it will have to be re-sent. Data can even be transmitted at rates higher than the CBIR, but doing this carries the greatest risk of lost packets.
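The CIR/CBIR rules above amount to a three-way classification of the sending rate. The sketch below is illustrative only: the rate values are made up, and real switches meter traffic against committed and excess burst sizes per measurement interval rather than a simple average rate.

```python
# Sketch of CIR/CBIR traffic classification over one measurement
# interval. Rates are illustrative; real frame relay switches meter
# committed (Bc) and excess (Be) burst sizes per interval.

CIR = 64_000    # bits/s the carrier guarantees
CBIR = 128_000  # bits/s burst ceiling

def classify(frames_bits, interval_s=1.0):
    """Return (action, rate) for the traffic sent in one interval."""
    rate = sum(frames_bits) / interval_s
    if rate <= CIR:
        return "forward", rate           # within the guarantee
    if rate <= CBIR:
        return "forward, DE set", rate   # bursting: eligible for discard
    return "discard above CBIR", rate    # beyond the burst ceiling

print(classify([8_000 * 8]))        # 64 kbit/s  -> forward
print(classify([8_000 * 8] * 2))    # 128 kbit/s -> forward, DE set
print(classify([8_000 * 8] * 3))    # 192 kbit/s -> discard above CBIR
```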

The frame relay network does try to police itself and keep congestion and thus packet loss
down. It can do this in two ways. It can try to control the flow of packets with Forward
Explicit Congestion Notification (FECN), which is a bit set in a packet to notify a receiving
interface device that it should initiate congestion avoidance procedures. Backward Explicit
Congestion Notification (BECN) is a bit set to notify a sending device to stop sending frames
because congestion avoidance procedures are being initiated.

A second way to inform the end devices that there is congestion is through the Local
Management Interface (LMI). This specification describes special management frames sent to
access devices.

A Discard Eligibility (DE) bit is set by the public frame relay network in packets the
device is attempting to transmit above the CIR or the CBIR for any length of time. It will also
be set if there is high network congestion. This means that if data must be discarded, packets
with the DE bit set should be dropped before other packets.
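The DLCI, FECN, BECN, and DE fields discussed above all live in the two-byte frame relay address field. The parser below assumes the common two-byte header layout; the sample byte values are illustrative.

```python
# Parse the common two-byte frame relay address field.
# Byte 0: DLCI high 6 bits | C/R bit | EA = 0
# Byte 1: DLCI low 4 bits | FECN | BECN | DE | EA = 1

def parse_fr_header(b0, b1):
    return {
        "dlci": ((b0 >> 2) << 4) | (b1 >> 4),
        "fecn": (b1 >> 3) & 1,
        "becn": (b1 >> 2) & 1,
        "de":   (b1 >> 1) & 1,
    }

# Example bytes for DLCI 100 with the DE bit set: 0x18 0x43
print(parse_fr_header(0x18, 0x43))
# {'dlci': 100, 'fecn': 0, 'becn': 0, 'de': 1}
```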

Notice that the network itself has no way to enforce congestion flow control. It is up to the
end device to support and obey these codes. When all is said and done, the frame travels to its
destination where it is disassembled by the receiving FRAD, and data is passed to the user.

X.25
There are major differences between frame relay and X.25 data networks. The comparison below gives a brief summary:

 X.25 has both link and packet protocol levels; frame relay is a simple data link protocol.
 The link level of X.25 consists of a data link protocol called LAPB; frame relay provides basic data transfer that does not guarantee reliable delivery of data.
 X.25 LAPB information frames are numbered and acknowledged; frame relay frames are not numbered or acknowledged.
 X.25 circuits are defined at the packet layer, which runs on top of the LAPB data link layer, and packets are numbered and acknowledged; frame relay circuits are identified by an address field in the frame header.
 Complex rules govern the flow of data across an X.25 interface, and these rules often interrupt and impede that flow; frame relay data is packaged into simple frames and can be transmitted toward its destination whenever there is bandwidth available to carry it.

ATM
ATM is a cell-switching and multiplexing technology that combines the benefits of circuit
switching (guaranteed capacity and constant transmission delay) with those of packet
switching (flexibility and efficiency for intermittent traffic). It provides scalable bandwidth
from a few megabits per second (Mbps) to many gigabits per second (Gbps).

ATM is a layered architecture allowing multiple services like voice, data and video, to be
mixed over the network. Three lower level layers have been defined to implement the
features of ATM.

The Adaptation layer assures the appropriate service characteristics and divides all types of
data into the 48 byte payload that will make up the ATM cell.

The ATM layer takes the data to be sent and adds the 5 byte header information that assures
the cell is sent on the right connection.

The Physical layer defines the electrical characteristics and network interfaces. This layer
"puts the bits on the wire." ATM is not tied to a specific type of physical transport.
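The layering described above, where the adaptation layer chops data into 48-byte payloads and the ATM layer prepends a 5-byte header, can be sketched as follows. The header here is a placeholder; a real cell header carries the VPI/VCI, payload type, CLP, and HEC fields.

```python
# Sketch of ATM layering: the adaptation layer splits data into
# 48-byte payloads (padding the last one), and the ATM layer prepends
# a 5-byte header to form 53-byte cells. The header is a placeholder;
# a real one carries VPI/VCI, payload type, CLP, and HEC fields.

CELL_PAYLOAD = 48
HEADER = bytes(5)  # placeholder 5-byte header

def to_cells(data: bytes) -> list:
    cells = []
    for i in range(0, len(data), CELL_PAYLOAD):
        payload = data[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
        cells.append(HEADER + payload)
    return cells

cells = to_cells(b"x" * 100)
print(len(cells), len(cells[0]))  # 3 cells, each 53 bytes
```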

Three types of ATM services exist: permanent virtual circuits (PVC), switched virtual circuits
(SVC), and connectionless service (which is similar to SMDS).

A PVC allows direct connectivity between sites. In this way, a PVC is similar to a leased line.
Among its advantages, a PVC guarantees availability of a connection and does not require
call setup procedures between switches. Disadvantages of PVCs include static connectivity
and manual setup.

An SVC is created and released dynamically and remains in use only as long as data is being
transferred. In this sense, it is similar to a telephone call. Dynamic call control requires a
signaling protocol between the ATM endpoint and the ATM switch. The advantages of SVCs
include connection flexibility and call setup that can be handled automatically by a
networking device. Disadvantages include the extra time and overhead required to set up the
connection.

ATM networks are fundamentally connection oriented, which means that a virtual channel
(VC) must be set up across the ATM network prior to any data transfer. (A virtual channel is
roughly equivalent to a virtual circuit.)

Two types of ATM connections exist: virtual paths, which are identified by virtual path
identifiers, and virtual channels, which are identified by the combination of a VPI and a
virtual channel identifier (VCI).

A virtual path is a bundle of virtual channels, all of which are switched transparently across
the ATM network on the basis of the common VPI. All VCIs and VPIs, however, have only
local significance across a particular link and are remapped, as appropriate, at each switch.
Thus, ATM uses a "cloud" system almost exactly like that of frame relay. Unlike frame relay,
however, ATM uses a fixed 53-byte cell length, DLCIs are replaced by VPIs and
VCIs, and ATM offers guaranteed service levels.
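The "local significance" of VPIs and VCIs can be illustrated with a per-switch translation table: each switch rewrites the labels as the cell crosses it, so the same connection carries different identifiers on each link. All port and identifier values below are made up.

```python
# Illustrative per-switch VPI/VCI translation table. Identifiers have
# only local significance on each link and are remapped at every
# switch; all values here are invented for the example.

remap = {
    # (in_port, in_vpi, in_vci) -> (out_port, out_vpi, out_vci)
    (1, 0, 100): (3, 2, 57),
    (2, 1, 42):  (3, 2, 58),
}

def switch_cell(in_port, vpi, vci):
    """Look up the outgoing port and rewritten labels for a cell."""
    return remap[(in_port, vpi, vci)]

print(switch_cell(1, 0, 100))  # (3, 2, 57): same connection, new labels
```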

HDLC
One of the oldest data communications protocols in use today is IBM’s Synchronous
Data Link Control (SDLC). SDLC defined rules for transmitting data across a digital
line and was used for long distance communications between terminals and computers.
IBM submitted SDLC to the standards organizations and they revised it and
generalized it into the High Level Data Link Control (HDLC) protocol. HDLC is the
basis of a family of related protocols.

 Link Access Procedure on the D-channel (LAPD) – Used with ISDN


 Link Access Protocol Balanced (LAPB) – Used with X.25
 Link Access Procedures to Frame-Mode Bearer (LAPF) – Used with Frame
Relay
 Point-to-Point (PPP) – Used for general communications access across wide area
lines.

LAPD Protocol - Belongs to the High-level Data Link Control (HDLC) family of protocols

Three types of HDLC/LAPD frames:

 Information Frames – Used to carry the ISDN signaling messages.
 I – Carries information.
 Supervisory Frames – Used for acknowledgements and flow control, and to report an
out-of-sequence frame.
 RR – Receive ready. Indicates a ready state and acknowledges data.
 RNR – Receive not ready. Indicates a busy state and acknowledges data.
 REJ – Reject. Indicates that one or more frames need to be retransmitted.
 Unnumbered Frames – Used to initiate and terminate a LAPD link, to negotiate
parameters, and to report errors.
 SABME – Set asynchronous balanced mode extended. Initiates a link.
 UA – Unnumbered acknowledgment of link setup or termination.
 DISC – Disconnect. Terminates a link.
 DM – Disconnect mode. Refuses a SABM or SABME, or simply announces a
disconnected state.
 FRMR – Frame reject. Announces a non-recoverable error. The link must be reset.
 XID – Exchange identification. Used to negotiate parameters when a link is
established.

PPP – Point-to-Point Protocol

Background
The Point-to-Point Protocol (PPP) originally emerged as an encapsulation protocol for
transporting IP traffic over point-to-point links. PPP also established a standard for the
assignment and management of IP addresses, asynchronous (start/stop) and bit-oriented
synchronous encapsulation, network protocol multiplexing, link configuration, link quality
testing, error detection, and option negotiation for such capabilities as network-layer address
negotiation and data-compression negotiation. PPP supports these functions by providing an
extensible Link Control Protocol (LCP) and a family of Network Control Protocols (NCPs) to
negotiate optional configuration parameters and facilities. In addition to IP, PPP supports
other protocols, including Novell's Internetwork Packet Exchange (IPX) and DECnet. This
section provides a summary of PPP's basic protocol elements and operations. PPP is capable
of operating across any DTE/DCE interface. The only absolute requirement imposed by PPP
is the provision of a duplex circuit, either dedicated or switched, that can operate in either an
asynchronous or synchronous bit-serial mode, transparent to PPP link-layer frames. PPP does
not impose any restrictions regarding transmission rate other than those imposed by the
particular DTE/DCE interface in use.

PPP Components
PPP provides a method for transmitting datagrams over serial point-to-point links. PPP
contains three main components:

 A method for encapsulating datagrams over serial links---PPP uses the High-Level
Data Link Control (HDLC) protocol as a basis for encapsulating datagrams over
point-to-point links.
 An extensive LCP to establish, configure, and test the data-link connection.
 A family of NCP’s (Network Control protocols) for establishing and configuring
different network-layer protocols---PPP is designed to allow the simultaneous use of
multiple network-layer protocols.

General Operation
To establish communications over a point-to-point link, the originating PPP first sends LCP
frames to configure and (optionally) test the data-link. After the link has been established and
optional facilities have been negotiated as needed by the LCP, the originating PPP sends NCP
frames to choose and configure one or more network-layer protocols. When each of the
chosen network-layer protocols has been configured, packets from each network-layer
protocol can be sent over the link. The link will remain configured for communications until
explicit LCP or NCP frames close the link, or until some external event occurs (for example,
an inactivity timer expires or a user intervenes).

The following descriptions summarize the PPP frame fields illustrated in Figure 13-1:
 Flag---A single byte that indicates the beginning or end of a frame. The flag field
consists of the binary sequence 01111110.
 Address---A single byte that contains the binary sequence 11111111, the standard
broadcast address. PPP does not assign individual station addresses.
 Control---A single byte that contains the binary sequence 00000011, which calls for
transmission of user data in an unsequenced frame. A connectionless link service
similar to that of Logical Link Control (LLC) Type 1 is provided.
 Protocol---Two bytes that identify the protocol encapsulated in the information field
of the frame. The most up-to-date values of the protocol field are specified in the most
recent Assigned Numbers Request for Comments (RFC).
 Data---Zero or more bytes that contain the datagram for the protocol specified in the
protocol field. The end of the information field is found by locating the closing flag
sequence and allowing 2 bytes for the FCS field. The default maximum length of the
information field is 1,500 bytes. By prior agreement, consenting PPP implementations
can use other values for the maximum information field length.
 Frame Check Sequence (FCS)---Normally 16 bits (2 bytes). By prior agreement,
consenting PPP implementations can use a 32-bit (4-byte) FCS for improved error
detection.
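The 16-bit FCS described above is the standard HDLC/X.25 CRC (reflected polynomial 0x8408, initial value 0xFFFF, final one's complement). A minimal implementation:

```python
# 16-bit PPP/HDLC frame check sequence (CRC-16/X.25): reflected
# polynomial 0x8408, initial value 0xFFFF, one's-complemented result.

def fcs16(data: bytes) -> int:
    fcs = 0xFFFF
    for byte in data:
        fcs ^= byte
        for _ in range(8):
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs ^ 0xFFFF

# The standard CRC check string "123456789" yields 0x906E for this CRC.
print(hex(fcs16(b"123456789")))  # 0x906e
```

The transmitter computes this over the address, control, protocol, and data fields and appends it before the closing flag; the receiver recomputes it to detect corrupted frames.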

PPP Link-Control Protocol


The PPP LCP provides a method of establishing, configuring, maintaining, and terminating
the point-to-point connection. LCP goes through four distinct phases:

 First, link establishment and configuration negotiation occurs. Before any network-
layer datagrams (for example, IP) can be exchanged, LCP first must open the
connection and negotiate configuration parameters. This phase is complete when a
configuration-acknowledgment frame has been both sent and received.
 This is followed by link-quality determination. LCP allows an optional link-quality
determination phase following the link-establishment and configuration-negotiation
phase. In this phase, the link is tested to determine whether the link quality is
sufficient to bring up network-layer protocols. This phase is optional. LCP can delay
transmission of network-layer protocol information until this phase is complete.
 At this point, network-layer protocol configuration negotiation occurs. After LCP has
finished the link-quality determination phase, network-layer protocols can be
configured separately by the appropriate NCP and can be brought up and taken down
at any time. If LCP closes the link, it informs the network-layer protocols so that they
can take appropriate action.
 Finally, link termination occurs. LCP can terminate the link at any time. This usually
will be done at the request of a user but can happen because of a physical event, such
as the loss of carrier or the expiration of an idle-period timer.

Three classes of LCP frames exist. Link-establishment frames are used to establish and
configure a link. Link-termination frames are used to terminate a link, while link-
maintenance frames are used to manage and debug a link. These frames are used to
accomplish the work of each of the LCP phases.
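The four LCP phases above can be sketched as a simple ordered progression. Real LCP is a much richer state machine (defined in RFC 1661); this sketch only shows the ordering, with the link-quality phase optional as the text notes.

```python
# Simplified sketch of the LCP phase ordering described above. Real
# LCP is a full state machine (RFC 1661); this only models the
# sequence, with the link-quality determination phase optional.

PHASES = [
    "link establishment",
    "link-quality determination",   # optional phase
    "network-layer configuration",
    "link termination",
]

def lcp_phases(test_quality: bool = True):
    for phase in PHASES:
        if phase == "link-quality determination" and not test_quality:
            continue  # skip the optional phase when not requested
        yield phase

print(list(lcp_phases(test_quality=False)))
# ['link establishment', 'network-layer configuration', 'link termination']
```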
PPP Multilink

 PPP multilink protocol (MP)


 Allows two channels to bond together to form one single pipe for data at the
establishment of a call.
 Bandwidth Allocation Protocol (BAP)
 Older protocol that allows bandwidth to be changed dynamically as needed.
 Bandwidth Allocation Control Protocol (BACP)
 The current version of the protocol, used between different vendors to negotiate
bandwidth dynamically.
