BSC e3
Local Maintenance
OML14 Course
Course #1596AEN
14.00/EN
April 2003
Publication History

Reference: PE/TRD/CN/4273

Date          Version    Comments
March 2001    13.01/EN   Creation
July 2001     13.02/EN   Update
April 2003    14.00/EN   Update of Section 7: BSC e3 and TCU e3 Troubleshooting (TML part)
Volume Composition

No.   Title   Reference   Version/Edition
Course Presentation

This course covers BSC e3 and TCU e3 local maintenance.
It describes how to use the TML e3 equipment to troubleshoot a BSC e3/TCU e3 on site.
It also describes fault finding and software upgrading.

Course Objectives

Upon completion of this course, you will be able to:
Describe the architecture of the BSC e3 and TCU e3.
Describe all board functions and interfaces.
Use the TML e3 equipment to interpret events coming from the BSC e3 and TCU e3, and to perform tests and upgrades.
Identify faulty modules by means of LEDs and panel displays, and replace them.

Prerequisites

This course is designed for people who maintain the BSS on site (Operator Field Technicians, Supervisors).
Before attending this course, you need a good understanding of telecommunication systems (hardware and software) or equivalent systems.
This knowledge is provided by the following course:
SY1: GSM System and Products Overview.

Scope

This course applies to the BSC e3 and TCU e3 V14 version.
Table of Contents

Publication History
OML14 Course
Volume Composition
Course Presentation

1. Introduction
   Objectives
   Contents

2. BSC e3 and TCU e3 Functional Architecture
   Objectives
   Contents
   BSS Architecture
   BSC e3 Architecture: Description, Functional Architecture
   TCU e3 Architecture: Description

3. BSC e3 and TCU e3 Board Description
   Objectives
   Contents
   Control Node: Architecture, Hardware Modules, the CN Slices, TM Functions, OMU, ATM-SW, TMU
   Interface Node: Architecture, Board Layout, CEM, 8K-RM, ATM-RM, LSA-RC
   Transcoding Node: Architecture, Board Layout, TRM

4. Thermic, Energetic and Cabling Aspects
   Objectives
   Contents
   PCIU Modules
   SIM Module
   Cooling System
   SAI Frame
   Optional Hub

5. Hardware Features, Configuration and Dimensioning
   Objectives
   Contents
   Hardware Features: Main Characteristics, Filler Module

6. BSC e3 and TCU e3 Startup
   Objectives
   Contents
   Equipment Startup: Principle, LED Display
   Hot Startup (MIB built), Cold Startup (MIB not built)
   CN Startup: Main Principles, Board Recovery, Slice Recovery, CN Startup Timer
   IN Startup: Principles
   TN Startup
   Fault Tolerance: Software, Cellgroup Concept

7. BSC e3 Troubleshooting
   Objectives
   Contents
   Maintenance Overview
   RACE: Environment, Overview, Login Window
   TML e3: Environment, Overview, Connections, Login Window, Connection Windows, Starting Windows, Configuration Menu, Miscellaneous Menu, View Menu

8. BSC e3 and TCU e3 Module Replacement
   Objectives
   Contents
   Safety Instructions
   Extraction/Insertion of a Module
   Location of Modules
   General Principles
   OMU Module, ATM-SW Module, ATM-RM Module, TRM Module, TMU Module, CEM Module, 8K-RM Module, SIM Module
   Fan Unit
   Air Filter

9. Glossary
Section 1
Introduction
The copyright of this document is the property of Nortel Networks. Without
the written consent of Nortel Networks, given by contract or otherwise, this
document must not be copied, reprinted or reproduced in any material form,
either wholly or in part, and the contents of this document, or any methods or
techniques available therefrom, must not be disclosed to any other person
whatsoever.
[Figure: Nortel GSM training curriculum. System courses (SY0, SY1, SY2, SYS), Telecom overview courses (TL1: Telecommunications Overview, TL2: Frame Relay Overview, TL3: TCP/IP Overview, TL4: ATM Overview), Product courses (PR1, PR3, PR4), GPRS courses (GP0, GP1, GP3, GP20, GP21, GP22) and BSS Operation & Maintenance courses (OM1/2, OM4, OM6, OM9, OM31, OM36, OML14, OMV14), each lasting between 1 and 10 days.]
The BSS and NSS training courses are split into several families according to the different skills required to deal with GSM networks:
System: general knowledge about GSM, as well as a general view of the different Telecom technologies.
BSS System: general knowledge of the BSS system: products, dimensioning, optimization.
BSS Operation and Maintenance: how to operate and maintain a telecommunication network by using the OMC-R facilities fully. It gives an in-depth understanding of BSS functions and equipment.
NSS System: knowledge of the operation and maintenance of the NSS part of the system.
Radio and Network Engineering: cell planning, BSS network topology, field tests, data fill or BSS parameter optimization.
BSS Installation and Commissioning: how to install, cable, and run on-site tests.
GPRS: an overview of this new system and an advanced description of the new nodes.
[Figure: UMTS training curriculum — general and product courses (UM20, UM21, UM22, UM30, UM31: UMTS General Overview, UMTS Introduction, UMTS System Description, UMTS System & Products Description, Advanced UMTS Radio Interface) and radio engineering courses (UM51, UM52, UM53: UMTS RF Engineering Fundamentals, UMTS Radio Network Planning Fundamentals, UMTS Radio Network Planning Project), each lasting between 1 and 5 days.]
UMTS: an overview of this new system and an advanced description of the new nodes.
[Figure: GSM BSS NTP documentation suite (PE/CDC/DD/0004 CD-ROM of GSM BSS NTPs, PE/CDC/DD/0026 CD-ROM of BSS Parameters User Guide, PE/CDC/DD/0083 CD-ROM of GPRS Access Network Parameters User Guide): general information, reference manuals, user manuals, operations manuals and maintenance manuals for the BSC, TCU, BSC e3/TCU e3, PCUSN, the BTS families (S2000/S2000E, S2000H/L, S4000, S8000, S8002, S8006, e-cell) and the TML (BSC/TCU and BTS), each NTP identified by a two- or three-digit number.]
Contents
Introduction
BSC e3 and TCU e3 Functional Architecture
BSC e3 and TCU e3 Board Description
Thermic, Energetic and Cabling Aspects
Hardware Features, Configuration and Dimensioning
BSC e3 and TCU e3 Startup
BSC e3 Troubleshooting
BSC e3 and TCU e3 Module Replacement
Annex: ATM Reminders
Glossary
Objectives
Section 2
BSC e3 and TCU e3
Functional Architecture
The copyright of this document is the property of Nortel Networks. Without
the written consent of Nortel Networks, given by contract or otherwise, this
document must not be copied, reprinted or reproduced in any material form,
either wholly or in part, and the contents of this document, or any methods or
techniques available therefrom, must not be disclosed to any other person
whatsoever.
Objectives

Contents
[Figure: BSS architecture. MSs reach the BTSs (S2000H&L, S8000 Indoor/Outdoor) over the radio interface; BTSs connect to the BSC over the Abis interface; the BSC connects to the TCU over the Ater interface and to the PCUSN over the Agprs interface; the TCU connects to the MSC (NSS) over the A interface; the PCUSN reaches the GPRS core over the Gb interface; the OMC-R (with its Sun StorEdge A5000) supervises the BSS over the OMN interface.]
The Base Station Subsystem includes the equipment and functions related to the management of the connection on the radio path.
It mainly consists of one Base Station Controller (BSC) and several Base Transceiver Stations (BTSs), linked by the Abis interface.
A dedicated equipment, the Transcoder/Rate Adapter Unit (TRAU), called the TransCoder Unit (TCU) in Nortel Networks BSS products, is designed to reduce the number of PCM links.
These different units are linked together through specific BSS interfaces:
each BTS is linked to the BSC by an Abis interface,
the TCUs are linked to the BSC by an Ater interface,
the A interface links the BSC/TCU pair to the MSC.
BSS Architecture

[Figure: BSS architecture with BSC e3 and TCU e3. The BTS connects over Abis to the Interface Node of the BSC e3; the Control Node and Interface Node are linked by an optical interface; two TCU e3s (TCU 0, TCU 1) connect over Ater and reach the DMS MSC/HLR via their Service Area Interfaces and the A interface; the PCUSN connects over Agprs and reaches the GPRS core network over the Gb interface; the OMC-R and a remote RACE client reach the BSC e3 over TCP/IP (Ethernet, Internet/PSTN) on the OMN interface.]
[Figure: signaling links in the BSS. Over Abis, the BSC exchanges LAPD OML and RSL links with the BTS, plus a LAPD GSL for GPRS data; over Ater, it exchanges a LAPD OML with the TCU, SS7 links towards the MSC, and voice and data channels; the OMC-R is reached over Ethernet and the PCUSN over Agprs.]
[Figure: interworking of V14.3 and V12.4 equipment. A BSC e3 V14.3 can control BTSs V12.4 or V14.3 and work with either a TCU e3 V14.3 or a TCU 2G V12.4 (TCB2), plus a PCUSN V14.3; the OMC-R reaches the BSC e3 over Ethernet, whereas a BSC 2G V12.4 uses X.25.]
The BSC e3 and TCU e3 are intended to interwork with the current BSC 2G (12000), BTS and OMC-R products.
Note that the BSC e3 is able to support the TCU 2G as well, but only with TCB2 boards (EFR).
The OMC-R - BSC e3 link is TCP/IP over Ethernet, instead of the native X.25 used for the BSC 2G.
The OMC-R - BSC e3 link over the A/Ater interface is not available in the V14.3 release (V15 candidate feature).
Either the TCU 2G or the TCU e3 (or both) can be used to recover the synchronizing clock and to carry SS7 links.
Each TCU (2G and e3) requires a LAPD link to communicate with the BSC e3.
[Figure: BSC e3 and TCU e3 cabinets, front views.]
The BSC e3 and the TCU e3 are single-cabinet equipment, composed of two Nodes and one Service Area Interface.
Each Node is housed in a subrack comprising two shelves.
The cabinet is designed for indoor applications.
The design allows front access to the equipment.
External cabling from below or above is supported.
The Service Area Interface (SAI) is installed on the left side of the cabinet:
It provides front access to the PCM cabling.
It contains the electrical equipment to interface the BSC or the TCU and the customer cables.
The product is EMC compliant. No rack enclosure is required for this purpose, as EMC compliance is achieved at the subrack level (Control and Interface Node).
BSC e3 Architecture
1 - Description

[Figure: BSC e3 cabinet with doors closed and open. The Service Area Interface stands on the left; the cabinet houses the Control Node and the Interface Node, with the power supplies on top and fan units below each node.]
BSC e3 Architecture
2 - Functional Architecture

[Figure: BSC e3 functional architecture. In the Control Node, the OMU (OAM) and the TMUs (traffic management) are interconnected by the ATM SW, which reaches the Interface Node over the optical interface. In the Interface Node, the ATM RM terminates the optical link; the CEM switching unit (64 kbps) and the 8K RM (8 kbps subrate) switch traffic between the LSA RC PCM interfaces towards the BTSs (Abis interface) and the TCUs (Ater interface).]
The Control Node is a computing and signaling platform built around an ATM switch.
It contains the BSC processing core that handles overall BSC operations, including Interface Node operations, and enables communication with the OMC-R.
It is composed of the following three functional modules:
the ATM-SW (Asynchronous Transfer Mode Switch),
the OMU (Operation and Maintenance Unit),
the TMU (Traffic Management Unit).
The Interface Node is a circuit-switch platform which provides dense PCM connectivity.
It is made up of the following four major hardware modules:
the ATM-RM (Asynchronous Transfer Mode Resource Module),
the CEM (Common Equipment Module),
the 8K-RM (8K subrate matrix Resource Module),
the LSA-RC (Low Speed Access Resource Complex).
NB: The BSC e3 cabinet is powered by four SIMs (Shelf Interface Modules).
TCU e3 Architecture
1 - Description

[Figure: TCU e3 cabinet with doors closed and open. The Service Area Interface stands on the left; the cabinet houses two Transcoding Nodes, with the power supplies on top and fan units below each node.]
TCU e3 Architecture
2 - Functional Architecture of a Transcoding Node

[Figure: Transcoding Node functional architecture. Up to 12 TRMs connect through S-links to the CEM (64 kbps switching); two LSA RC PCM interfaces connect the node towards the BSC (Ater interface) and towards the MSC (A interface).]
The Transcoding Node performs the main tasks related to communication, switching and transcoding.
The TCU e3 cabinet is made of two Transcoding Nodes.
Each node is composed of the following three major hardware modules:
the CEM (Common Equipment Module),
the TRM (Transcoding Resource Module),
the LSA RC (Low Speed Access Resource Complex).
NB: The TCU e3 cabinet is powered by four SIMs (Shelf Interface Modules).
Section 3
BSC e3 and TCU e3 Board Description

Objectives

Contents
Control Node
1 - Architecture

[Figure: Control Node architecture. The active and passive OMUs (with their private disks and mirrored shared disks) and up to 14 TMUs are interconnected by two redundant ATM SWs over 25 Mb/s ATM links; each ATM SW reaches the Interface Node over a 155 Mb/s ATM link; the OMUs also provide the Ethernet link towards the OMC-R.]
The Control Node is the processing unit of the BSC e3. It is an ATM-based engine that handles the following functions:
OAM,
Traffic Management,
Call & Signaling processing.
These main functions are performed by three sub-assemblies:
OMU = Operation and Maintenance Unit (OA&M + disk management),
ATM-SW = ATM Switch (interconnection between OMUs and TMUs with Communication Controller boards, and optical connection with the Interface Node),
TMU = Traffic Management Unit (traffic management + signaling processing).
The platform is fully ATM-based internally: the links between the different modules inside the CN are 25 Mb/s ATM links, and they are all redundant for safety reasons.
The Control Node is connected to the Interface Node by an optical fiber cable based on a standard 155 Mb/s ATM interface.
Control Node
2 - Hardware Modules

[Figure: Control Node board layout (dual-shelf). Shelf 00 holds TMUs (slots 1-4 and 11-14, unused slots filled with Fillers), the MMS private and shared disks, an OMU and an ATM SW in the center slots, and a SIM in slot 15; Shelf 01 mirrors this layout with the second OMU, ATM SW, MMS disks and SIM.]
The OMU (Operation & Maintenance Unit) controls all the BSC e3 elements (both Control and Interface Nodes) and the TCU e3 elements, is responsible for Operation, Administration and Maintenance (OA&M) of the BSS, deals with disk management, and provides Ethernet access to the OMC-R and TML.
The MMS (Mass Memory Storage) are the 4 storage disks (2 private disks for the OMUs and 2 shared disks; only one of these shared disks is mandatory). If a private MMS is in default state, the whole BSC e3 is in Exposure state.
The ATM SW is the ATM switch that provides the interconnection between the OMU and TMU modules. It also provides connectivity with the Interface Node through an OC-3c link.
The TMU (Traffic Management Unit) is in charge of GSM traffic and signaling processing (LAPD and SS7).
The SIM (Shelf Interface Module) is the power supply for both shelves and the alarm interface between the dual-shelf and the PCIU. It provides -48 V dc to the Control Node. For redundancy purposes, there are 2 SIMs per equipment: each SIM contributes to supplying each shelf (at a 50% level).
Filler boards are empty containers which occupy any unused slots to ensure EMC shielding.
Duplication schemes (see the sketch below):
1 + 1 redundancy = 1 active element + 1 passive (or active) element.
N + P redundancy = N active elements providing the targeted performance, plus P spare elements; up to P boards can be in default state without losing any established communication.
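To make the two duplication schemes concrete, here is a minimal illustrative sketch (hypothetical helper names and example figures, not product software):

    # Illustrative sketch of the two duplication schemes described above.
    # All names and numbers are hypothetical examples, not product software.

    def one_plus_one_available(active_ok: bool, mate_ok: bool) -> bool:
        """1 + 1 redundancy: service survives as long as one of the two mates is healthy."""
        return active_ok or mate_ok

    def n_plus_p_available(healthy_boards: int, n_required: int) -> bool:
        """N + P redundancy: service is kept as long as at least N boards stay healthy."""
        return healthy_boards >= n_required

    # Example: a 3000 Erlang BSC e3 uses 12+2 TMUs; losing 2 TMUs keeps full service.
    print(one_plus_one_available(active_ok=False, mate_ok=True))   # True (after a SWACT)
    print(n_plus_p_available(healthy_boards=12, n_required=12))    # True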
Control Node
3 - The CN Slices

[Figure: the CN slices — OMU, MMS, ATM-SW and TMU.]
A slice is the name given to a set of boards plugged into a slot on a shelf.
The Control Node is composed of the following slices:
OMU, MMS, TMU and ATM-SW,
plus SIM and Fillers.
The OMU, TMU and ATM-SW slices have a common hardware architecture and are divided into 3 parts:
a Single Board Computer board (SBC) = computer board,
a PCI Mezzanine Card (PMC) = front panel board,
a Transition Module board (TM) = interface adapter board.
To identify each part of the slice, suffixes have been added to the board names: xxx-SBC (Single Board Computer), xxx-PMC (PCI Mezzanine Card) and xxx-TM (Transition Module).
Each module has two visual indicators on the top of the front panel, which indicate its status:
a red LED with a triangular shape,
a green LED with a rectangular shape.
Control Node
4 - TM Functions

[Figure: TM main function — ATM adaptation. The TMU TM converts VME & SCbus to ATM; the OMU TM provides the VME interface with the SBC plus the ATM 25 interface with the ATM-SW; the ATM-SW TM provides the interface with the CN backplane plus the OC-3c optical interface.]
Control Node
5 - Memory Mass Storage

[Figure: MMS front panel view and disk organization. The two OMUs reach the always-available mirrored shared disks (BSS data) and their own private disks (O.S.) over SCSI buses; each MMS front panel carries a removal request push button.]
The MMS (Memory Mass Storage) modules are 9-Gbyte SCSI hard disks in the Control Node.
These 4 MMS are linked to the OMU modules through 4 SCSI buses.
They are split as follows:
Two mirrored shared hard disks for both OMU modules. They contain the data that must be secured and remain accessible in the event of an OMU failure or a disk failure (BSS data).
Two private disks (one for each OMU). These disks hold all the private data for the module (Operating System data).
External interfaces on the front panel:
two LEDs,
one removal request push button.
Redundancy scheme: 1 + 1, operating simultaneously for the mirrored shared disks.
Board location: Dual Shelf 01, Shelf 00, slots 5, 6, 9 and 10.
LED status: red LED unlit + green LED lit = disk operational and updated.
Control Node
6 - OMU

[Figure: OMU module front panel view and interfaces. The OMU board and its TM board connect over the backplane to the shared and private MMS (SCSI buses), to the other OMU (Ethernet link for private disk duplication), to the ATM-SW (ATM 25 link) and to the MTM bus (board reset and LED commands); the front panel carries an RJ45 connector for the OMC-R/TML Ethernet link, a 9-pin D-sub connector for RS 232 debug, and a removal request push button.]
The OMU (Operation and Maintenance Unit) manages all the BSC resources.
It does the following:
disk management (private and shared MMS; private disk duplication),
interface with the OMC-R or TML through an Ethernet access,
system maintenance (by using the TML) and OAM of the BSS.
External interfaces on the front panel:
two LEDs,
one RJ45 connector for one 10/100 Base-T Ethernet OMC-R + TML port,
one 9-pin D-sub connector for the RS 232 debug port,
one removal request push button (shutdown and SWACT of the OMU).
Redundancy scheme: 1 + 1 Hot Stand-by.
Board location: Dual Shelf 01, Shelf 01, slots 5+6 and 9+10.
LED status: red LED unlit + green LED lit = module active and unlocked.
Control Node
7 - ATM SWitch

[Figure: ATM SW front panel view and interfaces. Over the backplane (Utopia bus), the ATM SW provides the ATM 25 interfaces (6 x and 3 x ATM 25) that interconnect the OMU and TMU modules; its optical interface carries the 155 Mb/s OC-3 link towards the Interface Node: the TX OC-3c connector goes to the RX connector on the IN, and the RX OC-3c connector comes from the TX on the IN. Main function: board interconnection.]
29
The ATM SW (ATM Switch) provides a backplane board interconnection with live
insertion capabilities.
It provides:
interconnection between the OMU and TMU modules,
ATM switching, adaptation and interface on an OC-3 optical multimode fiber
towards the Interface Node. The TX connector on the ATM-SW is linked to the
ATM -RM RX connector; the RX connector on the ATM -SW is linked to the ATMRM TX connector.
External Interfaces on the Front Panel:
Two LEDs,
1 TX OC-3 (upper) + 1 RX OC-3 (lower) optical connectors
Redundancy Scheme: 1+ 1 simultaneous work
Board Location: Dual Shelf 01, Shelf 01, slot 7 & 8.
LED Status:
Red LED
Green LED
Status
Lit
Module active
and Unlocked
Unlit
V14.00/EN
29
June 2003
Control Node
8 - TMU

[Figure: TMU module front panel view and interfaces. The TMU board and its TM board connect over the backplane to the ATM-SW (ATM 25 link) and to the OMU (VME link), with SCSI and VME buses between the two boards.]
The TMU (Traffic Management Unit) manages traffic. It is equivalent to a set of three boards in the 2G release (SICD + CCS7 + BIFP).
It is in charge of:
GSM & GPRS traffic management,
GSM signaling processing (LAPD & SS7),
GPRS signaling processing,
BTS OAM (software downloading, BTS configuration).
External interfaces on the front panel:
two LEDs.
Base operating system: VxWorks.
Redundancy scheme: N + P load sharing.
Board location: Dual Shelf 01, Shelves 00 & 01, slots 1, 3, 4 and 11 to 14.
LED status: red LED unlit + green LED lit = module active and unlocked.
Control Node
9 - Minimal Configuration for the CN

1 OMU
the corresponding private MMS
1 shared MMS
1 ATM-SW (+ the corresponding ATM-RM in the IN)
n TMUs (according to the traffic load)
1 SIM
Interface Node
1 - Architecture

[Figure: Interface Node architecture. The ATM RM links the node towards the CN over S-links; the CEM switching unit (64 kbps) and the 8K RM (8 kbps subrate) switch traffic between the LSA RC PCM interfaces towards the BTSs (Abis) and the TCUs (Ater).]
Interface Node
2 - Board Layout

[Figure: Interface Node board layout (dual-shelf). Shelf 00 holds LSA-RCs no. 0, 2 and 3, the two CEMs (slots 7 and 8, synchronization), the two 8K-RMs and a SIM in slot 15; Shelf 01 holds LSA-RCs no. 1, 4 and 5, the two ATM RMs (slots 5 and 6), Fillers and the second SIM.]
The Interface Node is the connectivity component of the BSC e3, after the SAI.
It is responsible for:
establishing all the connections between the BSC and the other entities of the network,
supervising the physical links.
The Interface Node is divided into the following hardware modules:
The CEM (Common Equipment Module), which controls the resource modules of the IN, and provides system maintenance, clock synchronization and traffic switching.
The ATM RM (ATM Resource Module), which adapts Time Slot (DS0) based voice and data channels of S-links to ATM cells for transmission over a Synchronous Optical NETwork (SONET) OC-3c interface.
The 8K RM (8K subrate matrix Resource Module), which adds subrate switching capability to the IN, as the CEM is only capable of switching at the TS (DS0) level (64 kbps).
The LSA RC (Low Speed Access Resource Complex), which is the PCM interface module, used to interface the BSC to both the TCU and the BTS, providing modularity (up to 21 E1 or 28 T1 links). Each LSA-RC block consists of 3 boards. They must be inserted in ordered steps.
The SIM (Shelf Interface Module), which is the power supply for both shelves and the alarm interface between the dual-shelf and the PCIU. It provides -48 V dc to the Interface Node. For redundancy purposes, there are 2 SIMs per equipment: each SIM contributes to supplying each shelf (at a 50% level).
Interface Node
3 - CEM

[Figure: CEM front panel view and interfaces. The CEM switching unit connects over S-links to the ATM RM (3 S-links), the LSA RC PCM interfaces (3 S-links) and the 8K RM (9 S-links); it handles clock synchronization, alarm processing and the OAM interface. The front-panel RJ45 Ethernet connector for the TML is a rescue means of connection only.]
The CEM (Common Equipment Module) is the master board of the Interface Node.
It provides the following features:
64K traffic switching matrix,
OA&M interface,
control of the resource modules (8K RM, ATM-RM and LSA RC),
clock synchronization,
alarm processing.
External interfaces on the front panel:
two LEDs,
one RJ45 connector (Ethernet link) for the TML (rescue connection only),
4 unused connectors.
Redundancy scheme: 1 + 1 Hot Stand-by.
Board location: Dual Shelf 00, Shelf 00, slots 7 and 8.
LED status: red LED unlit + green LED lit = module active and unlocked.
Interface Node
4 - 8K RM

[Figure: 8K RM front panel view. The 8K RM connects to the CEM switching unit over 9 S-links. Main function: subrate switching.]

LED status: red LED unlit + green LED lit = module active and unlocked.
Interface Node
5 - ATM RM

[Figure: ATM RM front panel view and interfaces. A redundant optical connection (OC-3c link, 155 Mb/s) links the ATM RM to the ATM SW in the Control Node: the TX OC-3c connector goes to the RX on the CN, and the RX OC-3c connector comes from the TX on the CN. Towards the CEMs, the ATM RM provides 2 x 3 S-links (768 TS each), with AAL1 adaptation for LAPD and SS7 and AAL5 adaptation for OAM and CallP.]
The ATM RM (ATM Resource Module) provides the centralized resources required to support the Interface Node applications.
It performs:
a SONET OC-3c physical interface, allowing direct connection to the ATM network located in the Control Node. Caution: an optical attenuator must be inserted on the TX connector output.
adaptation between the ATM cells of the Control Node (high bitrate: 155 Mb/s) and the DS0 circuits of the Interface Node (low bitrate: 64 kbps):
AAL1 adaptation for LAPD and SS7 channels,
AAL5 adaptation for OAM and Call Processing signaling.
External interfaces on the front panel:
two LEDs,
1 TX OC-3 (upper) + 1 RX OC-3 (lower) optical connectors.
Redundancy scheme: 1 + 1 (simultaneous work).
Board location: Dual Shelf 00, Shelf 01, slots 5 and 6.
LED status: red LED unlit + green LED lit = module active and unlocked.
Interface Node
6 - LSA-RC Module 1/3

[Figure: LSA RC module. Two IEM boards (one active, one passive) and one TIM board sit on the backplane; the TIM routes the PCMs to and from the SAI towards the active IEM.]
The LSA RC (Low Speed Access Resource Complex) is the PCM interface module. All external communications run through this board.
Each LSA RC can manage up to 21 E1 or 28 T1 PCM links.
It provides the electrical interface for the signal on the PCM links.
This module is common to the Interface Node and the Transcoding Node:
in the IN, it is used to interface the BTS and the TCU;
in the Transcoding Node, it is used to interface the MSC and the BSC.
Each LSA block is a 3-slot slice made of:
2 IEM boards (Interface Electronic Module), which are in charge of the PCMs,
1 TIM board (Terminal Interface Module), which is a passive board that routes the PCMs towards the active IEM board.
Redundancy scheme:
for the IEM: 1 + 1 Hot Stand-by,
for the TIM: no redundancy (only connecting and filtering functions).
Interface Node
6 - LSA-RC Module 2/3

[Figure: LSA-RC front panel views for PCM E1 and T1 links. Each panel carries a 62-pin Sub-D connector to the SAI, a 62-pin Sub-D connector from the SAI, up and down buttons, and a red LED that blinks as a signal failure indication.]

LED status: red LED unlit + green LED lit = module active and unlocked.
Interface Node
6 - LSA-RC Module 3/3

[Figure: front panel details for PCM E1 and T1 links — red multiple span alarms, signal failure indication, PCM failure indication, up and down buttons. No information is displayed when there is no problem to report.]

Interface Node
7 - Minimal Configuration for the IN
Transcoding Node
1 - Architecture

[Figure: Transcoding Node architecture. Up to 12 TRMs connect over S-links to the CEM (64 kbps switching); two LSA RC PCM interfaces link the node towards the BSC (Ater interface) and towards the MSC (A interface).]
Transcoding Node
2 - Board Layout

[Figure: board layout of Transcoding Nodes no. 00 and 01 (dual-shelves). Shelf 00 holds TRMs, the two CEMs (synchronization) and LSA-RCs no. 0, 2 and 3, plus a SIM in slot 15; Shelf 01 holds further TRMs (two of them optional, in slots 1 and 2), LSA-RC no. 1, Fillers and the second SIM.]
The main function of the TCU (TransCoder Unit) is to perform the main tasks related to communication, switching and transcoding.
The following hardware modules are part of the Transcoding Node:
The CEM (Common Equipment Module), which controls the Transcoding Node resource modules, and provides system maintenance, clock synchronization and traffic switching.
The TRM (Transcoder Resource Module), which performs the GSM transcoding functions. Each shelf of the TCU can contain up to 12 TRMs (the boards located in slots 1 and 2 are optional).
The LSA RC (Low Speed Access Resource Complex), which is used to interface the TCU to both the MSC and the BSC using PCM links (E1 or T1). The LSA-RC boards must be inserted in ordered steps.
The SIM (Shelf Interface Module), which is the power supply for both shelves and the alarm interface between the dual-shelf and the PCIU. It provides -48 V dc to the TCU. For redundancy purposes, there are 2 SIMs per equipment: each SIM contributes to supplying each shelf (at a 50% level).
Transcoding Node
3 - TRM

[Figure: TRM module organization. A QUICC processor connects to the CEM over 3 S-links; the module contains 3 DSP archipelagoes, each made of a mailbox and 3 islands, each island grouping 1 PPU and 4 SPUs. Main function: vocoding of speech/data channels.]
The TRM (Transcoder Resource Module) performs the GSM vocoding of the speech/data channels. Up to 12 TRM boards can be housed in one single TCU shelf.
The TRM provides voice coding/decoding in Full Rate (FR), Enhanced Full Rate (EFR) and AMR.
Physical organization (see the worked arithmetic below):
9 islands (1 island = 1 PPU (Pre-Processing Unit) + 4 SPUs (Signal Processing Units)),
3 archipelagoes = 1 TRM module (1 archipelago = 1 MLB (mailbox) + 3 islands),
1 TRM = 216 voice channels in normal mode,
1 TRM = 180 voice channels in TTY mode (US specific).
External interfaces on the front panel:
two status LEDs.
Redundancy scheme: N + P load sharing.
Board location, for both Dual Shelves 00 & 01:
Shelf 00, slots 1 to 3 + slots 9 to 14,
Shelf 01, slots 5, 6 and 14.
LED status: red LED unlit + green LED lit = module active and unlocked.
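As a quick sanity check on these figures, here is a small illustrative computation (the per-SPU channel count is inferred from the totals above, not stated in the source):

    # Worked arithmetic for the TRM capacity figures quoted above.
    # The 6-channels-per-SPU figure is inferred from 216 channels / 36 SPUs.

    ARCHIPELAGOES_PER_TRM = 3
    ISLANDS_PER_ARCHIPELAGO = 3
    SPUS_PER_ISLAND = 4

    spus_per_trm = ARCHIPELAGOES_PER_TRM * ISLANDS_PER_ARCHIPELAGO * SPUS_PER_ISLAND  # 36
    channels_per_spu = 216 // spus_per_trm  # 6 voice channels per SPU in normal mode

    assert spus_per_trm == 36
    assert channels_per_spu * spus_per_trm == 216  # matches "1 TRM = 216 voice channels"
    print(f"{spus_per_trm} SPUs per TRM, {channels_per_spu} channels per SPU")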
Transcoding Node
4 - Minimal Configuration for the TN

1 CEM
n TRMs (according to the traffic load)
n LSA-RCs (= 1 TIM + 1 IEM; LSA no. 0 is always required)
1 SIM
Section 4
Thermic, Energetic and Cabling Aspects

The copyright of this document is the property of Nortel Networks. Without the written consent of Nortel Networks, given by contract or otherwise, this document must not be copied, reprinted or reproduced in any material form, either wholly or in part, and the contents of this document, or any methods or techniques available therefrom, must not be disclosed to any other person whatsoever.

Objectives

Contents
The Power Supply and Alarm Systems of the BSC e3/TCU e3 are composed of:
One PCIU (Power Cabling Interface Unit): provides central distribution and gathering of all power and alarm cabling used inside the BSC e3/TCU e3 frames.
4 SIMs (Shelf Interface Modules): used to transfer the -48 V dc and the alarms to and from each module via the PCIU.
The PCIU is located in a frame power distribution tray and is mounted on the top of the BSC e3/TCU e3 frame. It contains the following modules:
ALM (Alarm Module): monitors the SIM modules, the cooling units and the fuse failures, provides control for each LED on the fan units, reports alarms on each dual-shelf, and reports the PCIU fail function.
2 FMUs (Fan Management Units): soft-start used to limit capacitor inrush current, capacitor fault alarm, 48 V / 60 V at 30 A input capability, input transient protection alarm.
When the frame summary indicator (amber lamp) located on the front cover is:
OFF: there is no active alarm in the BSC e3 or TCU e3 frame;
ON: there is an active alarm in the BSC e3 or TCU e3 frame.
[Figure: SIM front panel view — On/Off switch, amber alarm indicators, and the -48 V dc/alarms connector to/from the PCIU.]
SIM means Shelf Interface Module. It is the power supply of the BSC e3/TCU e3 frames.
The input voltage is -48 V dc. It also transmits alarm notifications.
The SIM boards are the dc power conditioners for each dual-shelf.
The SIM board manages the following functions:
current limiting during startup,
alarms,
filtered -48 V dc and power conditioning.
External interfaces on the front panel:
two status LEDs,
one On/Off switch,
amber LED alarm indicators,
a -48 V dc/alarms connector (7 pins).
Redundancy scheme: 1 + 1, simultaneous work.
Board location: Dual Shelves 00 & 01, Shelves 00 & 01, slot 15.
LED status: red LED unlit + green LED lit = module active and unlocked.
Cooling System
1 - Location of the Cooling & Fan Units

[Figure: location of the upper and lower grill assemblies on the cabinet.]
Cooling System
2 - Cooling & Fan Units

[Figure: cooling unit and fan unit, with the green/red LED combinations on the fan unit front panel indicating whether a module is faulty and which action to take.]

Note
The Test Lamp button re-lights (for 20 seconds) all the LEDs which have turned to sleep mode, in order to detect any LED malfunction.
BSC e3 Cabinet
Hardware Overview

The SAI (Service Area Interface) is a 30 cm-wide auxiliary frame attached to the left side of the BSC e3/TCU e3 frame. It enables front access to the PCM cabling.
The SAI cabinet can host:
in the TCU e3: up to 8 CTUs (Cable Termination Units);
in the BSC e3: up to 6 CTUs + 2 optional HUBs.
The CTU module is a frame assembly which provides the physical interface (PCM E1/T1 links) between the TIM module of the LSA-RC and the other BSS products (copper concentration).
It is split as follows:
1 x CTB (Cable Transition Board), which is the backplane,
7 x CTMx (Cable Transition Modules), which are either:
CTMP, E1, twisted pair, Z = 120 Ohms: processes 3 spans;
CTMC, E1, coax, Z = 75 Ohms: processes 3 spans;
CTMD, T1, twisted pair, Z = 100 Ohms: processes 4 spans.
For local maintenance purposes, the TML can be plugged into a HUB of the BSC e3.
Note
The CTU provides the ability for each E1 or T1 PCM to be set in loopback mode, in order to help diagnose PCM faults.
[Figure: OC-3c optical fiber connections between the ATM-SW modules in the CN and the ATM-RM modules in the IN (TX to RX, RX to TX).]
This figure shows how to connect the OC-3c optical multimode fibers.
They are used to connect the ATM backplane in the Control Node, via the ATM-SW module, to the S-links backplane in the Interface Node, via the ATM-RM module.
Notes
The optical link goes from the TX (ATM-SW in the CN) to the RX (ATM-RM in the IN).
The RX (ATM-SW in the CN) goes to the TX (ATM-RM in the IN).
An optical attenuator must be inserted on the optical fiber at the output of the ATM-SW module.
Reference of the optical fiber: NTQE0607.
[Figure: full cabinet layouts of the TCU e3 (upper and lower Transcoder Nodes) and the BSC e3 (Control Node and Interface Node), showing each slice (TRM, CEM, LSA-RC with IEM/TIM/IEM, 8K-RM, ATM-RM, ATM-SW, OMU, TMU, MMS, SIM, Fillers), the PCIU e3, cooling units and air filters, and the SAI columns of CTMx modules facing them.]
Note: For both the BSC e3 and the TCU e3, all the cables linking the CTUs and the LSA-RCs have the same length (1.66 m).
BSC e3
In the case of a BSC e3, the SAI includes a maximum of 6 CTUs, numbered from top to bottom: 0, 1, 2, 3, 4, 5.
Each CTU must be connected to the relevant LSA-RC as follows (see the mapping sketch below):
CTU 0 <--> LSA 1
CTU 1 <--> LSA 2
CTU 2 <--> LSA 3
CTU 3 <--> LSA 5
CTU 4 <--> LSA 0
CTU 5 <--> LSA 4
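Because the CTU-to-LSA pairing is not sequential, it is easy to cable the wrong pair. A small illustrative lookup (hypothetical helper, not a product tool) makes the BSC e3 mapping explicit:

    # BSC e3 SAI cabling: CTU number -> LSA-RC number, as listed above.
    # Illustrative sketch only; the TCU e3 mapping table is not reproduced here.

    BSC_E3_CTU_TO_LSA = {0: 1, 1: 2, 2: 3, 3: 5, 4: 0, 5: 4}

    def lsa_for_ctu(ctu: int) -> int:
        """Return the LSA-RC that a given BSC e3 CTU must be cabled to."""
        return BSC_E3_CTU_TO_LSA[ctu]

    assert lsa_for_ctu(3) == 5  # CTU 3 <--> LSA 5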
TCU e3
In the case of a TCU e3, the SAI includes a maximum of 8 CTUs; the 4 upper CTUs are dedicated to the upper transcoder node dual-shelf, and the 4 lower CTUs to the lower transcoder node dual-shelf. They are numbered from top to bottom: 0, 1, 2, 3, 4, 5, 6, 7.
Each CTU must be connected as follows:
[Figure: Tx and Rx cabling between the LSA-RC (TIM and IEM boards, with their multiple span alarms) and the CTU, each direction over a 62-pin connector.]
Both cables are identical, and each is symmetrical (its two connectors are identical 62-pin connectors).
They have to be connected as follows:
Tx signals: upper connector of the CTU with the upper connector on the front panel of the TIM module.
Rx signals: lower connector of the CTU with the lower connector on the front panel of the TIM module.
Note
The Rx cable must be connected before the Tx cable.
[Figure: BSC e3 cabinet and SAI, detailed view. The Control Node (OMUs, ATM-SWs, TMUs, MMS disks, SIMs, Fillers) and the Interface Node (CEMs, 8K-RMs, ATM-RMs, LSA-RCs with IEM/TIM/IEM, SIM, Fillers) face the SAI CTUs and their CTMx modules, with the PCIU e3 on top and the cooling units and air filters between the nodes.]
The number of the CTM in the SAI depends on the number of the given LSA-RC in the shelf.
The numbering of the CTM ports goes from left to right and from bottom to top: from 0 to 20 for E1 and from 0 to 27 for T1.
Example:
PCM no. 0 of LSA-RC no. 0 is linked to CTU no. 0, CTM no. 0, port no. 0. A small sketch of this numbering follows.
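As an illustration of this numbering, here is a hypothetical helper; it assumes ports are assigned to CTMs in consecutive groups of 3 spans for E1 CTMs and 4 spans for T1 CTMs, as described in the SAI overview:

    # Illustrative sketch of SAI port numbering, assuming consecutive grouping:
    # each E1 CTM (CTMP/CTMC) carries 3 spans, each T1 CTM (CTMD) carries 4 spans.

    def ctm_and_port(pcm: int, link_type: str = "E1") -> tuple[int, int]:
        """Map a PCM number (0-20 for E1, 0-27 for T1) to (CTM number, port on that CTM)."""
        spans_per_ctm = 3 if link_type == "E1" else 4
        limit = 21 if link_type == "E1" else 28
        if not 0 <= pcm < limit:
            raise ValueError(f"PCM number out of range for {link_type}")
        return pcm // spans_per_ctm, pcm % spans_per_ctm

    assert ctm_and_port(0, "E1") == (0, 0)   # PCM 0 -> CTM 0, port 0 (the example above)
    assert ctm_and_port(20, "E1") == (6, 2)  # last E1 span lands on the 7th CTM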
Optional Hub

A Hub is an active node which regenerates the Ethernet signal: it is the central switch in a twisted-pair network.
The equipment used is the BayStack 250 (Nortel equipment).
BayStack 250 Series
The BayStack 250 is a standard stackable Ethernet Hub that contains:
12 RJ45 ports for twisted-pair 10/100 Base-T conductors. It is possible to connect up to 5 Hubs together, obtaining a 60-port logical Hub.
One LED display: each port has two LEDs to indicate its port status.
One switch for connection to an Ethernet switch or another Hub.
Notes
Each port is a repeater.
The ports can be active simultaneously.
A slot is available for a management module. Two chained Hubs make a logical Hub.
Alarm Links

[Figure: internal and external alarm links for the frame assembly of the BSC e3 cabinet.]

[Figure: internal and external alarm links for the frame assembly of the TCU e3 cabinet.]
Fuses

The power supply box is equipped with:
1 main breaker for the whole site,
and 4 breakers or fuses (32 A) for the cabinet.
The main breaker/fuse value is the general value on the client's site, which depends on the on-site equipment.
The following boards house a fixed fuse to protect each component:
OMU,
TMU,
MMS,
CEM,
ATM-RM,
8K-RM,
IEM (from the LSA-RC),
TRM.
Section 5
BSC e3 and TCU e3 Hardware Features and Configurations

The copyright of this document is the property of Nortel Networks. Without the written consent of Nortel Networks, given by contract or otherwise, this document must not be copied, reprinted or reproduced in any material form, either wholly or in part, and the contents of this document, or any methods or techniques available therefrom, must not be disclosed to any other person whatsoever.

Objectives

Contents
Hardware Features
Configurations
Hardware Features
1 - Main Characteristics

[Figure: BSC e3 and TCU e3 cabinets, each with its Service Area Interface beside the nodes; dimensions 2200 high x 600 wide x 960 deep (mm).]

Operating temperature (long term): +5 C to +40 C
Relative humidity: 5% to 85%
Maximum weight: 570 kg
Hardware Features
2 - Filler Module

Filler Module main function: fill in the unused slots.

The Filler Module is an empty module container which can be used in any slot of the BSC e3/TCU e3 Nodes that is not filled with any other module.
It manages the following functions:
maintaining Electro-Magnetic Interference (EMI) integrity,
maintaining shelf airflow patterns to ensure proper cooling.
External interfaces on the front panel: none.
Board location: the Filler Module can occupy any slot that does not house a module.
Note
Caution: if one or more slots remain empty on a powered shelf, the TCU e3 or BSC e3 frames may be damaged. The fillers ensure:
good equipment cooling,
proper EMI shielding.
Minimum and maximum configurations:

                     Min        Max
Erlang               600        3000
TRX                  360        1000
BTS                  120        500
Cells                360        600
LAPD links           120        600
BSC PCMs (E1 / T1)   42 / 56    126 / 168
TCU PCMs (E1 / T1)   21 / 28    84 / 112
SS7 links            up to 16
This table gives the minimum and maximum possible configurations for the BSC e3 and TCU e3 cabinets.
BSC e3 configuration:
The minimum is a 600 Erlang BSC e3 with 3 TMU modules (2+1 for redundancy) and 2 LSAs (42 E1 or 56 T1 PCMs).
The maximum is a 3000 Erlang BSC e3 with 14 TMU modules (12+2 for redundancy) and 6 LSAs (126 E1 or 168 T1 PCMs). In this case, the BSC e3 requires 2 TCU e3 cabinets.
TCU e3 configuration:
The minimum is a 200 Erlang TCU e3 (in the case of EFR) with 2 TRM modules (1+1 for redundancy) and 1 LSA (21 E1 or 28 T1 PCMs).
The maximum is a 1800 Erlang TCU e3 with 10 TRM modules (9+1 for redundancy) and 4 LSAs (84 E1 or 112 T1 PCMs) in each TCU e3 shelf.
Notes
Between these minimum and maximum configurations, different configurations can be offered. Nevertheless, in the TCU e3 cabinets, the number of TRMs and LSAs is directly linked to the A Interface capacity.
Moreover, some product engineering rules have been defined to avoid inconsistency between the number of TMUs and the number of LSAs.
Typical configurations:

BSC e3    TMU     LSA    Nb of LAPD    Nb of E1    Nb of T1
600 E     2+1     2      120           42          56
1500 E    5+1     3      300           63          84
2400 E    8+2     5      480           105         140
3000 E    10+2    6      600           126         168

TCU e3    TRM     LSA    Nb of E1    Nb of T1
200 E     1+1     1      21          28
600 E     3+1     2      42          56
1200 E    6+1     3      63          84
1800 E    9+1     4      84          112
Nortel Networks has defined some market model configurations (rural, semi-urban, urban) and optional extension kits (comprising TMU, TRM & LSA) in order to help operators select the appropriate number of modules.
A rural type of configuration has:
a relatively low number of TMUs (low traffic capacity),
a maximum number of LSAs (because many small BTSs used for coverage need to be connected).
An urban type of configuration has:
a high number of TMUs (high traffic capacity),
a relatively low number of LSAs (because BTSs have many TRXs per cell, and there are relatively few BTSs to be connected to the BSC).
Note: The BSC can have a maximum of 14 TMU modules (12+2) for very demanding traffic profiles.
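As a simple illustration of how these typical configurations could be used, here is a hypothetical helper that picks the smallest BSC e3 model covering a target traffic load (table values copied from above; not a product dimensioning tool):

    # Hypothetical dimensioning helper based on the typical BSC e3 configurations above.

    BSC_E3_MODELS = [  # (Erlang, TMU, LSA, LAPD links, E1, T1)
        (600, "2+1", 2, 120, 42, 56),
        (1500, "5+1", 3, 300, 63, 84),
        (2400, "8+2", 5, 480, 105, 140),
        (3000, "10+2", 6, 600, 126, 168),
    ]

    def smallest_model(target_erlang: int):
        """Return the smallest typical configuration covering the target load."""
        for model in BSC_E3_MODELS:
            if model[0] >= target_erlang:
                return model
        raise ValueError("Load exceeds the 3000 Erlang maximum of a BSC e3")

    print(smallest_model(1200))  # -> (1500, '5+1', 3, 300, 63, 84)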
Dimensioning examples per BTS site type:

                      S111       S222       S333      S444      S888
Nb of BTSs            200        125        70        50        20
BSC capacity (Erl)    1300       3000       3000      3000      3000
Nb of TRXs            600        750        630       600       480
TMU                   6          12         12        12        12
LSA                   6          6          5         4         4
Abis E1 / T1          115 / 150  100 / 126  70 / 70   50 / 62   40 / 46
Ater E1 / T1          11 / 18    26 / 42    26 / 42   26 / 42   26 / 42
Agprs E1 / T1         10 / 14    7 / 10     7 / 10    5 / 7     4 / 6
This table gives some dimensioning examples for the BSC e3 according to the type of BTS site.
The figures in italics are the dimensioning factors for the BSC e3.
With the exception of a pure S111 BTS configuration, the only dimensioning factor is the maximum Erlang capacity (3000 E).
If the network consists of 100% S111 BTSs, the maximum number of cells (600) supported by a BSC e3 is reached before the maximum Erlang capacity.
In the S111 and S222 cases, Abis concentration is assumed.
Section 6
BSC e3 and TCU e3 Startup

The copyright of this document is the property of Nortel Networks. Without the written consent of Nortel Networks, given by contract or otherwise, this document must not be copied, reprinted or reproduced in any material form, either wholly or in part, and the contents of this document, or any methods or techniques available therefrom, must not be disclosed to any other person whatsoever.

Objectives

Contents
Equipment Startup
BSC e3 and TCU e3 Startup at the OMC-R
CN Startup
IN Startup
TN Startup
Fault Tolerance
Equipment Startup
1 - Principle

[Figure: startup scope — the BSS comprises the BSC (Control Node + Interface Node) and the TCU (Transcoder Node).]
Equipment Startup
2 - LED Display (1/2)

Each module inside each dual-shelf houses the same two LEDs on the upper part of the front panel, to ease on-site maintenance and reduce the risk of human error:
a red LED with a triangular shape,
a green LED with a rectangular shape.
The red and green LEDs indicate the module status.
Equipment Startup
2 - LED Display (2/2)

[Tables 1 and 2: combinations of red and green LED states (unlit, lit, winking) defining the successive steps of module startup and removal — Table 1 for all modules except the MMS, Table 2 for the MMS modules.]
77
Table 1 gives the description, combinations and states of the red and green LEDs for
each module (except the MMS module) inside the BSC e3 and TCU e3 cabinets.
Note: Table 2 is for the MMS modules.
Scenarii for modules except MMS: (description of LED behavior).
Scenario 1: module insertion (general case): step 1 -> step 2 -> step 1 -> step 4.
Scenario 2: insertion of a TMU or an ATMSW module (administrative state unlocked):
step 1 -> step 2 -> step 1 -> step 3 -> step 4.
Scenario 3: removal of a passive OMU module, one must press the removal request
button (a TML command also exists): step 3 -> step 1 -> step 6.
Scenario 4: removal of an active OMU: step 4 ->step 3 -> step 1 -> step 6.
Scenarii for MMSs modules: (description of LED behavior).
Scenario 1: insertion of a MMS module (normal case: administrative state unlocked):
step 1 -> step 2 -> step 3 [updating ...] -> step 4.
Scenario 2: insertion of a MMS module (administrative state locked):
-> step 2 -> step 3.
step 1
Scenario 3: removal of a MMS module, one must press the removal request button (a
TML command also exists): step 4 -> step 3 -> step 6.
V14.00/EN
77
June 2003
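These scenarios are just step sequences; the following illustrative encoding (a hypothetical representation, with step numbers referring to Tables 1 and 2) summarizes them for quick reference:

    # Illustrative encoding of the LED scenarios above as step sequences.
    # Step numbers refer to the rows of Tables 1 and 2.

    LED_SCENARIOS = {
        "module insertion (general case)":          [1, 2, 1, 4],
        "TMU/ATM-SW insertion (unlocked)":          [1, 2, 1, 3, 4],
        "passive OMU removal (push button or TML)": [3, 1, 6],
        "active OMU removal":                       [4, 3, 1, 6],
        "MMS insertion (unlocked)":                 [1, 2, 3, 4],  # step 3 = updating
        "MMS insertion (locked)":                   [1, 2, 3],
        "MMS removal (push button or TML)":         [4, 3, 6],
    }

    for scenario, steps in LED_SCENARIOS.items():
        print(f"{scenario}: {' -> '.join(map(str, steps))}")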
BSC e3 and TCU e3 Startup at the OMC-R
1 - Hot Startup (MIB built)

[Figure: Hot Startup — the BSC e3/TCU e3 sends notifications to the OMC-R.]
A Hot Startup is performed when the MIB is already built. This is the case when a BSC e3 is restarted, for example. (Note that the Hot Startup principle is the same as in the BSC 2G.)
Three cases may occur:
A module was extracted: a specific event is sent, indicating a state change to disabled/{not installed} of the object that was previously in the slot. On reception of this state change, the OMC-R deletes the corresponding logical object at the MMI and in the MIB. An alarm is triggered at the father object level to indicate the deletion.
A new module was inserted (it was plugged into a previously free slot): the corresponding CN & IN objects are automatically created at the MMI as well as in the MIB. The platform sends notifications indicating that the hardware configuration has been detected on the corresponding platform object (CN, IN, LSA or Transcoder equipment). This information is stored on the OMU disk and sent to the OMC. The information is also stored at OMC-R level and can be displayed upon operator request.
A module was replaced by another one: the initial object is removed at the MMI and deleted in the MIB. The new object is created at the MMI as well as in the MIB. Alarms are triggered at the father object level to indicate both modifications.
BSC e3 and TCU e3 Startup at the OMC-R
2 - Cold Startup (MIB not built)

[Figure: Cold Startup — 1. the OMC-R/BSC link is established; 2. the BSC sends a MIB build request; the OMC-R (BDE) then builds the MIB for the BSC e3 and its BTSs.]
A Cold Startup is performed when the MIB (Management Information Base) is not built. (Note that the Cold Startup principle is the same as in the BSC 2G.)
Note: the maximum acceptable configuration of the equipment is stored in the MIB, but newly inserted modules are taken into account in the configuration only when the OMC-R is connected to the BSC e3.
The startup sequence includes the following steps:
The operator builds the network at the OMC-R level and creates the BSC logical object. He also has to define the CN, IN and LSA-RC modules, indicating their hardware positions.
As soon as the OMC-R/BSC link is established, the BSC sends a notification indicating that a MIB build is requested.
Upon receipt of this notification, the OMC-R triggers the MIB build phase. This phase ends with the creation of the MIB logical objects, followed by the reception of a report build message.
The BSC sends a global notification giving the OMC-R the description of the detected hardware components.
The detected modules appear on the MMI.
These modules are created at the BSC/OMC-R interface.
The supervision software reports the state of all created modules.
CN Startup
1 - Main Principles

[Figure: hardware startup progress in the Control Node — each board performs board recovery, then each slice performs slice recovery.]
CN Startup
2 - Board Recovery

[Figure: board recovery sequence — boot sequence, then platform initialization, then application initialization, with startup dependencies between the OMU SBC, OMU TM, ATM-SW SBC, TMU TM, TMU SBC and TMU PMC boards.]
CN Startup
3 - Slice Recovery

Slice     Number of slices    Boards included    Processor present
OMU       2                   TM                 Yes
                              SBC                Yes
                              PMC                No
TMU       2 to 14             TM                 Yes
                              SBC                Yes
                              PMC                Yes
ATM-SW    2                   TM                 No
                              SBC                Yes
CN Startup
4 - Dead Office Recovery

[Figure: Control Node board layout (Shelves 00 and 01) — all slices (OMUs, MMS, ATM SWs, TMUs, SIMs) perform their slice recovery.]
Dead Office Recovery consists of all the slices performing their slice recovery.
A BSC e3 is in Dead Office State when both OMUs are in an undefined activity state, i.e. neither is passive nor active.
When a Dead Office State is detected, the active OMU resets all the TMUs (loss of traffic service).
If a Dead Office State is incorrectly detected, it still results in a service interruption, and all the TMUs are reset.
The entire Control Node startup sequence must then be performed.
CN Startup
5 - CN Complete Startup Sequence

[Figure: complete startup sequence — board recovery of the active OMU-SBC, passive OMU-SBC, OMU-TM/TMU-TM, TMU-SBC, TMU-PMC and ATM-SW boards, then slice recovery of the OMU, TMU and ATM-SW slices; the CN startup is followed by the IN and TN startups.]
CN Startup
6 - CN Startup Timer

[Figure: startup timer — if the timer expires, some TMUs are not operational.]
Note: The BSC e3 has a startup timer in order to detect problems with the Control Node and the Interface Node.
The Control Node startup sequence runs in parallel with, and independently of, the IN startup sequence: if the CN and the IN indicate that they are operational, then the BSC e3 is said to be operational.
The Control Node has a startup timer in order to detect problems with slices during recovery. This timer is started once the active OMU has completed platform initialization.
Each slice sends a notification when it becomes operational. If all the slices send a notification before the end of the startup timer, the timer is stopped.
If the timer expires, the slices which have not sent a notification are considered inoperative.
When the startup timer is stopped, or if it expires, the CN startup sequence continues if and only if there are enough operational slices to handle the BSC e3 theoretical workload.
This theoretical workload is a configured parameter which indicates the minimum number of operational slices required for service. A sketch of this decision logic follows.
If there are not enough operational slices to handle the BSC e3 engineered workload, the CN manages a reduced number of cells with the TMUs available, and a notification is sent to the operator to indicate a lack of resources.
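The following minimal sketch illustrates that decision (all names and the notification mechanism are hypothetical; the real software is event-driven rather than polled):

    # Illustrative sketch of the CN startup timer decision described above.
    # Hypothetical names; the product software is event-driven, not polled.

    def cn_startup_decision(operational_slices: int,
                            total_slices: int,
                            min_slices_for_service: int) -> str:
        """Decide how the CN startup sequence continues once the timer stops or expires."""
        inoperative = total_slices - operational_slices  # slices that never notified
        if operational_slices >= min_slices_for_service:
            return f"continue startup ({inoperative} slice(s) declared inoperative)"
        # Not enough slices for the engineered workload: degraded service + operator alert.
        return "degraded service: manage fewer cells, notify operator of lack of resources"

    print(cn_startup_decision(operational_slices=10, total_slices=12, min_slices_for_service=9))
    print(cn_startup_decision(operational_slices=6, total_slices=12, min_slices_for_service=9))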
IN Startup
1 - Main Principles

[Figure: at switch-on, the critical path to the CN is set up — creation of the platform object, creation and setting in service of a CEM board, creation and setting in service of the ATM-RM — using the object configuration stored in the MIB.]
Each card of the IN contains a software release in its Flash memory, so that at least the boards of the critical path are able to run as soon as possible.
Critical path = the set of boards (CEM + ATM-RM) that must be ready to establish immediate communication with the CN when the IN starts.
The configuration data for these boards are stored in the IN non-volatile memory. The other cards are treated later, once this link is established.
An operator can add a module in the IN, whatever its software configuration.
The starting phase of a module depends on the state of its Flash memory:
either the Flash is empty: it contains only the IBL (Initial Boot Loading);
or it contains a software release that will be loaded into the RAM, and the module is valid.
4 starting cases can occur:
1) restarting with the CEM in the IBL state,
2) restarting a valid CEM: minimal configuration,
3) restarting a critical-path RM in the IBL state,
4) starting a non-critical-path RM in the IBL state.
IN Startup
2 - CEM/RM Module States

[Figure: module state transitions between the initial state, the IBL state and the valid state.]

2) The first release of the software has been loaded into the Flash, or the software has been downloaded again after a failure during the previous upgrade.
3) There has been a failure during the previous on-field release upgrade. The module automatically comes back to the IBL state.
TN Startup

[Figure: at switch-on, the critical path to the BSC e3 is set up — creation of the platform object, creation and setting in service of a CEM board, creation of the LSA-RC, creation and setting in service of its IEM and of the associated PCMs — using the object configuration stored in the MIB.]
When the TCU e3 is switched on, it has to deal with its critical path management. The critical path represents the TCU e3 objects which need to be in service in order to enable the dialogue between the CN and the TCU e3 through the IN.
On the TCU e3, critical path management consists of the following ordered steps (sketched below):
creation of the platform object (logical representation of the TCU),
creation and setting in service of one of the CEM boards,
creation of LSA no. 0, located in slots no. 4, 5 and 6 (synchronization),
creation and setting in service of the IEM of this LSA,
creation and setting in service of the PCMs associated with this LSA,
attempting to open a LAPD dialogue with the CN on one of these PCMs.
Once the BSC e3/TCU e3 dialogue is established, the TCU sends the OMC-R its hardware configuration (i.e. the identification of the different boards detected in the TCU by the CEM).
Then it can start the creation of the logical objects (Platform, LSA, PCMs) and the creation of the hardware objects (CEM, TRM, IEM).
The MIB is updated, and the hardware objects associated with the newly inserted boards are created.
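Since the critical path is essentially an ordered sequence that must succeed end to end, a minimal sketch (hypothetical step names and callback, not the product software) could look like:

    # Illustrative sketch of the TCU e3 critical path sequence described above.

    TCU_E3_CRITICAL_PATH = [
        "create platform object (logical TCU)",
        "create + set in service one CEM board",
        "create LSA no. 0 (slots 4, 5, 6 - synchronization)",
        "create + set in service the IEM of LSA no. 0",
        "create + set in service the PCMs of LSA no. 0",
        "open a LAPD dialogue with the CN on one of these PCMs",
    ]

    def run_critical_path(execute):
        """Run the steps in order; the TCU is reachable only if every step succeeds."""
        for step in TCU_E3_CRITICAL_PATH:
            if not execute(step):
                return False  # the dialogue with the CN cannot be established
        return True

    print(run_critical_path(lambda step: True))  # True when every step succeeds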
Fault Tolerance
1 - Fault Tolerance Software

[Figure: a fault-tolerant application has an active instance on one platform and a passive instance on another; the active instance continuously updates the passive instance's context, and on a fault a SWACT turns the passive instance into the active one.]
A Fault Tolerant (FT) application is an application which is replicated: it has at least one passive mate hosted on another board. The passive module can take over and continue to run the application without any break in service.
The active instance of the FT application runs the application code.
The passive instances of the FT application are simply kept up to date with the current context of the active instance.
The process of changing a passive instance into an active instance is called a SWACT (SWitch of ACTivity).
Load Balancing (LB) is the ability of the Control Node to balance the resources of Fault Tolerant applications (CPU load, memory, ATM network, Abis links, Ater links, timers) across the TMUs. It is designed to minimize overload by distributing resources evenly and by placing the passive entities so that the load remains well balanced after a SWACT.
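The active/passive mechanism can be sketched as follows. This is a minimal illustration of context replication and SWACT, not the product implementation; the class and the context dictionary are invented for the example:

```python
# Illustrative sketch only: context replication and SWACT between the active
# and passive instances of a Fault Tolerant application.
class FTApplication:
    def __init__(self) -> None:
        self.active_ctx: dict = {}    # context of the active instance
        self.passive_ctx: dict = {}   # replica hosted on another board

    def update(self, key: str, value) -> None:
        # The active instance runs the application code; every context
        # change is replicated so the passive mate stays up to date.
        self.active_ctx[key] = value
        self.passive_ctx[key] = value

    def swact(self) -> None:
        # SWitch of ACTivity: the passive instance becomes active and
        # continues from the replicated context, without break in service.
        self.active_ctx, self.passive_ctx = self.passive_ctx, self.active_ctx
```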
Fault Tolerance
2 - Cellgroup Concept
[Figure: Load Balancing on the BSC e3 when a new site is added]
The Cellgroup (CG) is a group of radio cells managed by the same call processing application instances. A BSC e3 can manage a maximum of 96 CGs. Each Cellgroup carries an average of 60 Erlang, and its maximum traffic capacity is that of a TMU board: 300 Erlang. Each TMU module can manage up to 95 sites arranged in 16 Cellgroups per TMU (8 active, 8 passive), up to 100 TRX. The Cellgroups are determined at boot time by the Load Balancing function.
The CG dimensioning must respect the following limits (a sketch checking them follows):
96 CGs maximum per BSC e3
24 sites maximum per CG
48 cells maximum per CG
48 TRX maximum per CG
300 Erlang maximum per CG
Note: if BTS sites or TRX are added, the Load Balancing function may reorganize the traffic load on the TMUs.
A site is placed in a CG by the BSC at its creation and cannot be moved to another CG after that. The only way to move a site from one CG to another is to delete it and then re-create it.
Another possibility is to perform an on-line build (with complete service loss on the whole BSC for a few minutes).
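A minimal sketch of the dimensioning check implied by these rules (illustrative Python; the limits come from the list above, the function itself is invented):

```python
# Illustrative sketch only: checking a Cellgroup against the limits above.
MAX_CG_PER_BSC    = 96
MAX_SITES_PER_CG  = 24
MAX_CELLS_PER_CG  = 48
MAX_TRX_PER_CG    = 48
MAX_ERLANG_PER_CG = 300

def cellgroup_ok(sites: int, cells: int, trx: int, erlang: float) -> bool:
    return (sites  <= MAX_SITES_PER_CG and
            cells  <= MAX_CELLS_PER_CG and
            trx    <= MAX_TRX_PER_CG and
            erlang <= MAX_ERLANG_PER_CG)

# 24 sites, 48 cells, 48 TRX and 300 Erlang is the largest valid CG:
assert cellgroup_ok(24, 48, 48, 300.0)
assert not cellgroup_ok(25, 48, 48, 300.0)   # one site too many
```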
Fault Tolerance
3 - Example: SWACT on TMU Failure
[Figure: SWACT on TMU failure: active (A) and passive (P) processes redistributed across TMU#1, TMU#2 and TMU#3]
Section 7
BSC e3 and TCU e3 Troubleshooting
The copyright of this document is the property of Nortel Networks. Without
the written consent of Nortel Networks, given by contract or otherwise, this
document must not be copied, reprinted or reproduced in any material form,
either wholly or in part, and the contents of this document, or any methods or
techniques available therefrom, must not be disclosed to any other person
whatsoever.
Objectives
Contents
Overview
TML e3/RACE Hardware Architecture
TML e3/RACE Environment
RACE: Remote Access Equipment
TML e3: Terminal de Maintenance Locale
Overview
[Figure: maintenance access points: the OMC-R, the TML (Terminal de Maintenance Locale) and the RACE (Remote ACcess Equipment)]
The BSC e3 and TCU e3 can perform many OAM tasks in parallel.
This decreases:
The upgrade duration
The time required to bring the whole BSS network back into service after a restart.
Immediate and precise fault detection, down to module level, is provided for software and hardware failures on the BSC e3 and TCU e3.
Each hardware module is a Field Replaceable Unit (FRU) and is hot Plug and Play.
The maintenance operations can be done from three types of equipment:
OMC-R: Operation and Maintenance Center-Radio,
TML: Terminal de Maintenance Locale (Local Maintenance Terminal),
RACE: Remote ACcess Equipment.
TML e3/RACE Hardware Architecture
[Figure: laptop PC with PCMCIA interfaces, used as TML or RACE]
RACE
Remote ACcess Equipment
RACE
1 - Environment
[Figure: RACE environment: RACE clients on the OMC-R LAN, (1) a remote RACE client reaching the OMC-R server through the PSTN, and RACE clients at BTS and BSC e3 sites]
The RACE (Remote ACcess Equipment) is a Web interface to the OMC-R. The RACE can be used as a regular OMC-R workstation, except for some particular functions. This equipment replaces the ROT.
The advantages of this new product are the following:
Operations and maintenance can be done from a remote site without requiring an on-site OMC-R operator:
(1) using the PSTN, a modem and a firewall
(2) through a LAN (using an Ethernet board)
(3) via BTS S8000 / S12000 equipment
(4) via BSC e3 equipment
The interface is user-friendly and close to the OMC-R interface, so the tool is easy to handle for a user who is familiar with the OMC-R.
The only requirement is a Web browser: the installation is quick, upgrades are done on-line, and nothing has to be modified on the client side.
It ensures secure access to the network, which was no longer guaranteed with the ROT.
RACE
2 - Overview
[Figure: RACE overview: the RACE client (Web browser) sends requests to the http server on the OMC-R server side; the RACE server and its Java MMI interface relay them to the OMC-R Kernel]
The aim of the RACE is to access the OMC-R from a remote client PC using a Web browser.
This application is composed of Web pages and Java applets that run in a Web browser (Netscape or Internet Explorer). All the software (HTML pages and Java applets) resides on the server and is downloaded to the client.
The RACE client sends requests to an http server located on the LAN of the OMC-R server. The requests are transmitted to a RACE server running on an OMC-R workstation.
RACE client: the laptop PC on which the RACE application runs.
http server: it receives the requests coming from the RACE client and transmits them to the RACE server; it is installed on the same OMC-R workstation as the RACE server.
RACE server: the link between the RACE client and the applications hosted on the OMC-R station. It transmits commands to the Kernel, and the submitted requests to the relevant applications. It also translates the internal messages of the MMI for the RACE client.
This application is adapted to individual operator needs:
when the operator must work from home
when operating at BTS or BSC e3 sites.
RACE
3 - Login Window
The RACE needs the OMC-R username and password.
[Screenshot: RACE login window]
When starting a session, the user is asked for his OMC-R user name and password. The password is encrypted before being sent to the server.
The login and password are then checked by the security task on the server. If they match the security information, a new OMC-R session is started.
The RACE application can manage several connections; the maximum number of simultaneous connections is defined off-line.
The process running on the RACE server manages the list of connected users and the beginning and end of sessions. It works as a new OMC-R task.
As the feature is multi-user, the first step is to identify each connection: since http is not a continuous communication, the user has to be authenticated each time he sends a new request to the server (see the sketch below).
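Because http is stateless, the per-request identification described above could look like the following sketch. This is illustrative Python only: the user table, session ids and function names are invented, and a simple hash stands in for the real encryption:

```python
# Illustrative sketch only: per-request authentication over stateless http.
import hashlib
import uuid

USERS = {"operator": hashlib.sha256(b"secret").hexdigest()}  # demo data
SESSIONS: dict = {}   # session id -> user name, kept on the server side

def open_session(user: str, password: str):
    # The password is encrypted before being sent; a hash stands in here.
    digest = hashlib.sha256(password.encode()).hexdigest()
    if USERS.get(user) == digest:        # checked by the security task
        sid = uuid.uuid4().hex
        SESSIONS[sid] = user
        return sid                       # a new OMC-R session is started
    return None

def handle_request(sid: str, request: str) -> str:
    # http is not continuous: every request must identify the user again.
    if sid not in SESSIONS:
        return "rejected: user not authenticated"
    return f"forwarded to the RACE server for {SESSIONS[sid]}: {request}"
```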
A better presentation of the data saves the customer time. Suppose, for instance, that an operator must modify a list of parameters and makes a mistake: with the RACE, using the Back button of the browser, he only has to modify the wrong parameters.
TML e3
Terminal de Maintenance Locale
TML e3
1 - Environment
[Figure: TML e3 environment: TML PCs reaching the BSC e3 over TCP/IP/Ethernet, either directly on the OMU or CEM, through the site LAN, or remotely through a modem and the PSTN (connection cases 1 to 4)]
TML e3
2 - Overview
[Figure: TML e3 overview: the TML loads HTML/Java from the HTTP and FTP servers of the BSC e3 and talks to the Test Server, which reaches the Physical Path Manager, Interface Node Access, ATM Manager and Hardware Manager over the software bus (test management)]
The TML e3 hardware is a laptop PC running Windows and behaving like a Java browser.
The TML e3 application is a Java applet stored on the BSC e3 disk (in the MMS module). Its interface is independent of the BSC e3/TCU e3 software evolution.
The TML e3 allows the user to:
Perform tests after an equipment installation (commissioning). In this case, the equipment is off-line and the OMC-R link is out of service.
Perform an audit before an important operation (an upgrade, for example). In this case, the equipment is on-line and the OMC-R link is in service.
Perform some upgrade (software or hardware) tasks. In this case, the equipment is on-line and the OMC-R link is in service.
Perform corrective maintenance. In this case, the equipment can be on-line and processing traffic. The OMC-R link can be in service or down.
Investigate and localize any product problem. In this case, the equipment can be on-line and processing traffic. The OMC-R link can be in service or down.
Note: the TML PC used for the e3 equipment is the same as the one used for the 2G BTS/BSC/TCU.
TML e3
3 - Connections
Example of connection to a BSC e3
[Figure: the Web browser loads the TML e3 applet page (http://xxxxxxxxxx.html) from the HTTP server (HTML/Java); the applet then tries the connection, sends USER and PASSWORD, and exchanges commands and answers with the Test Server on the e3 platform]
Principle:
Using a Web browser, the TML e3 operator loads (through http) an html page holding the TML e3 applet.
Once the TML e3 applet has been downloaded to the TML e3 laptop PC from the http server, a test session can be started.
The messages exchanged between the TML e3 and the BSC e3 are then carried over a TCP/IP connection.
The TML e3 communicates with the Test Server software module.
Advantages:
The TML e3 PC can run on a platform with any OS (theoretically including Windows, Mac OS, Unix, Linux and so on).
No specific software to install: only a Web browser (with Java enabled) is needed.
The TML e3 release is always on time, on site and up to date (the software is integrated in the e3 equipment).
It is possible to have a remote TML e3 connected to the BSC e3 LAN.
Only one standard hardware interface is used: Ethernet.
The TML e3 is used to perform a whole set of tests to check the integrity of the BSC e3 configuration: checking the correct operation of a hardware module, checking the communication between two hardware modules, and performing loop-back testing (LAPD, PCM, etc.).
The TML e3 also provides on-line equipment-monitoring capabilities such as software spies, traces, notification decoding and dump decoding.
TML e3
4 - TML e3 Man Machine Interface
[Screenshots: TML e3 man machine interface (V14)]
TML e3
5 - Login Window
[Figure: TML connected over TCP/IP/Ethernet; V14 login window with front panel view]
TML e3
6 - Connection Windows
[Screenshots: 1. Connection window; 2. TML connection menu; 3. Error window]
In the Connection window, select the node you want to connect to and click the Validate button:
CN for the Control Node of the BSC e3
IN for the Interface Node of the BSC e3
TCU for the Transcoding Node of the TCU e3.
In the TML connection menu, enter the IP address and the port number corresponding to the node you want to connect to (port number 11000 for IN and TCU, 12000 for CN), then click the Connection button. A sketch of this node/port mapping follows.
In case of a connection problem, an Error window appears. It may be due to the fact that the module is not the active one; in this case, connect the cable to the other module and repeat the connection procedure.
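The node/port mapping given above can be captured in a few lines (illustrative Python; only the port numbers come from the text, everything else is invented for the example):

```python
# Illustrative sketch only: selecting the TCP port from the target node.
NODE_PORTS = {"IN": 11000, "TCU": 11000, "CN": 12000}

def tml_endpoint(node: str, ip_address: str):
    """Return the (ip, port) pair the TML e3 should connect to."""
    if node not in NODE_PORTS:
        raise ValueError(f"unknown node: {node}")
    return ip_address, NODE_PORTS[node]

# e.g. tml_endpoint("CN", "<BSC e3 address>") -> ("<BSC e3 address>", 12000)
```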
TML e3
7 - Starting Windows
[Screenshots: Command in progress window; Summary window]
If the connection to the module is correct, the command Get hardware configuration is launched at the startup of the TML e3.
The windows shown above then appear. The Summary window gives the list of all the modules detected by the TML e3, as well as their location in the cabinet.
Clicking the OK button closes the Summary window and displays the main window (see next slide).
TML e3
8 - Main Windows: Interface Node
[Screenshots: TML e3 main window for the Interface Node (V14); contextual menu]
For each connection node (CN, IN and TCU), there is a corresponding main window.
The TML e3 main window is divided into two parts:
the menu items: Configuration, Test, Disk/Memory, Miscellaneous, View;
the graphical view of the e3 equipment. Each module present in each node is displayed in the equipment view with the indication of its position in the shelf. Faulty modules are also flagged on this view.
Notes about the general use of the TML e3:
Each time a new command is launched, a window appears allowing the command in progress to be cancelled.
At the end of each command, a Summary window appears and gives the results of the last command executed.
The results of all the commands are stored in the Result window. This window may be opened or closed through the View / Result view command.
From this window, the TML e3 user can start commands and tests using:
the main menu
the contextual menu, displayed by clicking the right mouse button while the cursor is on the graphical view of a module.
TML e3
9 - Configuration Menu
[Screenshot: Configuration menu (V14)]
TML e3
10 - Test Menu (1/2)
[Screenshots: Module Isolation summary window; Warning window (V14)]
The Test Menu gives access to several tests that the TML can perform to check the e3 equipment modules and links. The sub-menus are:
Module test:
SIM modules alarms: this test checks the SIM module alarm signal states. The target module can be selected by giving its location in the equipment or by selecting all the modules of the same type. The result is displayed in the Summary window: the state (Enabled or Disabled) of the PCU alarm and the CU alarm for each SIM module is given.
LED test: this helps to check the LED status visually. The concerned modules are: OMU, TMU, ATM-SW (CC1) and MMS. The target module can be selected by giving its location in the equipment or by selecting all the modules of the same type. The LEDs of the selected module light for 5 seconds. If several modules are selected, the test is performed sequentially on each module.
Reset: resets a module in the BSC e3 or in the TCU e3. The concerned modules are: OMU, TMU, ATM-SW (CC1). Only one module, defined by its position, can be selected. A Warning window appears, notifying that this action may have an impact on service. Once launched, the reset command cannot be aborted. CAUTION: DO NOT APPLY TO THE ACTIVE OMU OR ACTIVE CEM MODULES.
Module Isolation: resets a module for a defined time, to simulate a module failure. The concerned modules are: OMU, TMU, ATM-SW (CC1). Only one module, defined by its position, can be selected. A duration setting window appears (hours, minutes and seconds). As a result, the selected module is isolated for the defined duration.
Set MMS busy: simulates a busy state of a selected MMS module for a defined duration. Only one module, defined by its position, can be selected. A duration setting window appears. As a result, the selected MMS module is set to a busy state for the defined duration.
TML e3
11 - Test Menu (2/2)
[Screenshot: Test menu (V14)]
Link test:
CCS7/LAPD Global Path: this test consists of sending a LAPD sequence from one or several source modules to one or several target modules, on which a loop-back is performed. The possible source modules are OMU and TMU. The possible loop-back target modules are: CEM, LSA-RC and SAI. All PCM links, or one specific link, can be selected. The Summary window gives the list of the failing links between the OMU and TMU modules.
Check CC1 switch: this test checks the communications between OMU and TMU modules of the Control Node through the ATM-SW (CC1) module switches. The Summary window presents the source module, the destination module and the percentage of received frames versus sent frames.
Check OMU-OMU Ethernet: this test checks the Ethernet communication link between both OMU modules. The Summary window gives the percentage of received frames versus sent frames.
Check IN-CN optical link: this test checks the OC-3 optical connection between the Control Node and the Interface Node.
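The received/sent frame figure reported by these link tests is a simple ratio; a sketch (illustrative Python, names invented for the example):

```python
# Illustrative sketch only: the percentage shown in the Summary window.
def frame_success_rate(sent: int, received: int) -> float:
    """Percentage of frames received back over frames sent on a link."""
    if sent <= 0:
        raise ValueError("at least one frame must be sent")
    return 100.0 * received / sent

# e.g. 1000 frames sent, 998 looped back -> 99.8 %
assert frame_success_rate(1000, 998) == 99.8
```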
TML e3
12 - Disk Menu (1/2)
[Screenshots: Disk menu; Customization window (V14)]
TML e3
13 - Disk Menu (2/2)
[Screenshots: Initialization menu; Verification menu (V14)]
TML e3
14 - Miscellaneous Menu
[Screenshot: Miscellaneous menu (V14)]
TML e3
15 - View Menu
[Screenshots: View menu; Result window (V14)]
The View Menu displays the Result window, the BSC directory containing the notification files, and the BSC software error file directory.
Result view: displays the Summary window presenting all past commands and their results.
Software error view: displays the BSC e3 error log file directory.
Section 8
BSC e3 and TCU e3
Module Replacement
The copyright of this document is the property of Nortel Networks. Without
the written consent of Nortel Networks, given by contract or otherwise, this
document must not be copied, reprinted or reproduced in any material form,
either wholly or in part, and the contents of this document, or any methods or
techniques available therefrom, must not be disclosed to any other person
whatsoever.
Objectives
Contents
Safety Instructions
Extraction/Insertion of a Module
Location of Modules inside the Cabinets
Module Replacement Procedures
Safety Instructions
Extraction/Insertion of a Module
[Figure: module extraction and insertion]
Location of Modules
Inside the BSC e3 Cabinet

Equipment Type | Location | Equipment Number
OMU Module | Control Node, Dual-shelf 01, Shelf 01 |
MMS Module | Control Node, Dual-shelf 01, Shelf 00 | 5 and 10
MMS Module | Control Node, Dual-shelf 01, Shelf 00 | 6 & 9 (7 & 8 for private MMS duplication)
ATM-SW Module | Control Node, Dual-shelf 01, Shelf 01 | 7 or 8
ATM-RM Module | Interface Node, Dual-shelf 00, Shelf 01 | 5 or 6
TMU Module | Control Node, Dual-shelf 01, Shelf 00 or 01 |
CEM Module | Interface Node, Dual-shelf 00, Shelf 00 | 7 or 8
8K-RM Module | Interface Node, Dual-shelf 00, Shelf 00 | 9 or 10
IEM Module | Interface Node, Dual-shelf 00, Shelf 00 or 01 |
TIM Module | Interface Node, Dual-shelf 00, Shelf 00 or 01 |
SIM Module | | 15
Fan Unit | | 1, 2, 3, 4
Location of Modules
Inside the TCU e3 Cabinet

Equipment Type | Location | Equipment Number
TRM Module | Transcoder Node, Dual-shelf 00 or 01, Shelf 00 or 01 | Shelf 00 (1, 2, 3, 9, 10, 11, 12, 13, 14) or ...
CEM Module | Transcoder Node, Dual-shelf 00 or 01, Shelf 00 | 7, 8
IEM Module | Transcoder Node, Dual-shelf 00 or 01, Shelf 00 or 01 | Shelf 00 (4, 6) or ...
TIM Module | Transcoder Node, Dual-shelf 00 or 01, Shelf 00 or 01 | Shelf 00 (5) or Shelf 01 (3, 9, 12)
SIM Module | Transcoder Node, Dual-shelf 00 or 01, Shelf 00 or 01 | 15
[Figure: faulty-module identification table: Green LED and Red LED states, action to perform, and whether the module is faulty (YES/NO)]
Location of the faulty module: the OMC-R has given the shelf number and the logical slot number of the faulty module.
State of the LEDs located on the front panel of the module (see the sketch below):
Green LED OFF and red LED OFF: press the lamp test button (located on the front panel of the cooling unit). If both LEDs turn on, the module is not faulty.
Green LED ON and red LED ON: wait for the end of the self-test (if any). If both LEDs still remain ON, the module is faulty.
Green LED ON and red LED OFF: press the lamp test button (located on the front panel of the cooling unit). If both LEDs turn on, the module is not faulty.
Green LED OFF and red LED ON: the module is faulty.
Check of the LED status after module insertion:
At first, both green and red LEDs turn ON (during the self-test).
After a while, the red LED turns OFF and the green LED remains ON: the module is operational.
Later (after about 15 to 20 minutes), the green LED turns OFF (it goes into sleep mode, which prolongs LED life).
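The decision table above can be condensed into a small sketch (illustrative Python; the function and its parameters are invented, and the branches the procedure leaves open simply report no verdict):

```python
# Illustrative sketch only: faulty-module diagnosis from the front-panel LEDs.
def diagnose(green_on: bool, red_on: bool,
             lamp_test_lights_both: bool = False,
             self_test_finished: bool = True) -> str:
    if not red_on:
        # Red OFF (green ON or OFF): press the lamp test button on the
        # cooling unit front panel; both LEDs lighting means not faulty.
        return "not faulty" if lamp_test_lights_both else "no verdict"
    if green_on:
        # Green ON and red ON: wait for the end of the self-test (if any);
        # if both LEDs still remain ON, the module is faulty.
        return "faulty" if self_test_finished else "self-test in progress"
    return "faulty"   # green OFF and red ON: the module is faulty
```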
MMS Module Replacement
Product Reference:
Impact on service: none (one private MMS per OMU). If the private MMS is extracted, the corresponding OMU is out of service.
Update the spare module with the content of the active private MMS disk:
Insert the new MMS module into slot 7 or 8 of shelf 00.
At first, both green and red LEDs turn on (self-test).
Then the content of the active MMS module is duplicated onto the new MMS module. When the green LED turns off and the red LED begins to flicker, the duplication is complete.
Insert the MMS at its final position (slot 5 or 10, shelf 00).
Extraction of an MMS module: two cases can occur:
The module is faulty (red LED ON): extract it without pressing the removal push button.
The module is not faulty (red LED OFF): the operator must first isolate the module by pressing the Removal request push button. After a few minutes, the green LED turns off, then the red LED begins to flicker: from this moment, it is possible to extract the MMS module.
Abort procedure: if the module was not faulty and is not extracted within the 15 minutes following the start of the red LED flickering, the module automatically returns to service.
Insertion of the updated MMS module:
Insert the updated module into slot 5 or 10 of shelf 00. Both green and red LEDs turn on.
These LEDs remain on for a few minutes while the corresponding OMU boots. Then only the green LED remains on.
ATM-RM Module Replacement
Product Reference:
[Figure: ATM-RM optical Tx/Rx links fitted with optical attenuators]
[Figure: front panel view for PCM E1 links]
Student notes