Communication Server 1000E Planning and Engineering
Avaya Communication Server 1000
7.5
NN43041-220, 05.12
May 2012
© 2012 Avaya Inc.
All Rights Reserved.
Notice
While reasonable efforts have been made to ensure that the information in this document is complete and accurate at the time of printing, Avaya assumes no liability for any errors. Avaya reserves the right to make changes and corrections to the information in this document without the obligation to notify any person or organization of such changes.
Documentation disclaimer
“Documentation” means information published by Avaya in varying mediums which may include product information, operating instructions and performance specifications that Avaya generally makes available to users of its products. Documentation does not include marketing materials. Avaya shall not be responsible for any modifications, additions, or deletions to the original published version of documentation unless such modifications, additions, or deletions were performed by Avaya. End User agrees to indemnify and hold harmless Avaya, Avaya's agents, servants and employees against all claims, lawsuits, demands and judgments arising out of, or in connection with, subsequent modifications, additions or deletions to this documentation, to the extent made by End User.
Link disclaimer
Avaya is not responsible for the contents or reliability of any linked Web sites referenced within this site or documentation provided by Avaya. Avaya is not responsible for the accuracy of any information, statement or content provided on these sites and does not necessarily endorse the products, services, or information described or offered within them. Avaya does not guarantee that these links will work all the time and has no control over the availability of the linked pages.
Warranty
Avaya provides a limited warranty on its Hardware and Software (“Product(s)”). Refer to your sales agreement to establish the terms of the limited warranty. In addition, Avaya’s standard warranty language, as well as information regarding support for this Product while under warranty, is available to Avaya customers and other parties through the Avaya Support Web site: http://support.avaya.com. Please note that if you acquired the Product(s) from an authorized Avaya reseller outside of the United States and Canada, the warranty is provided to you by said Avaya reseller and not by Avaya.
Licenses
THE SOFTWARE LICENSE TERMS AVAILABLE ON THE AVAYA WEBSITE, HTTP://SUPPORT.AVAYA.COM/LICENSEINFO/ ARE APPLICABLE TO ANYONE WHO DOWNLOADS, USES AND/OR INSTALLS AVAYA SOFTWARE, PURCHASED FROM AVAYA INC., ANY AVAYA AFFILIATE, OR AN AUTHORIZED AVAYA RESELLER (AS APPLICABLE) UNDER A COMMERCIAL AGREEMENT WITH AVAYA OR AN AUTHORIZED AVAYA RESELLER. UNLESS OTHERWISE AGREED TO BY AVAYA IN WRITING, AVAYA DOES NOT EXTEND THIS LICENSE IF THE SOFTWARE WAS OBTAINED FROM ANYONE OTHER THAN AVAYA, AN AVAYA AFFILIATE OR AN AVAYA AUTHORIZED RESELLER; AVAYA RESERVES THE RIGHT TO TAKE LEGAL ACTION AGAINST YOU AND ANYONE ELSE USING OR SELLING THE SOFTWARE WITHOUT A LICENSE. BY INSTALLING, DOWNLOADING OR USING THE SOFTWARE, OR AUTHORIZING OTHERS TO DO SO, YOU, ON BEHALF OF YOURSELF AND THE ENTITY FOR WHOM YOU ARE INSTALLING, DOWNLOADING OR USING THE SOFTWARE (HEREINAFTER REFERRED TO INTERCHANGEABLY AS “YOU” AND “END USER”), AGREE TO THESE TERMS AND CONDITIONS AND CREATE A BINDING CONTRACT BETWEEN YOU AND AVAYA INC. OR THE APPLICABLE AVAYA AFFILIATE (“AVAYA”).
Copyright
Except where expressly stated otherwise, no use should be made of materials on this site, the Documentation, Software, or Hardware provided by Avaya. All content on this site, the documentation and the Product provided by Avaya including the selection, arrangement and design of the content is owned either by Avaya or its licensors and is protected by copyright and other intellectual property laws including the sui generis rights relating to the protection of databases. You may not modify, copy, reproduce, republish, upload, post, transmit or distribute in any way any content, in whole or in part, including any code and software unless expressly authorized by Avaya. Unauthorized reproduction, transmission, dissemination, storage, and or use without the express written consent of Avaya can be a criminal, as well as a civil offense under the applicable law.
Third-party components
Certain software programs or portions thereof included in the Product may contain software distributed under third party agreements (“Third Party Components”), which may contain terms that expand or limit rights to use certain portions of the Product (“Third Party Terms”). Information regarding distributed Linux OS source code (for those Products that have distributed the Linux OS source code), and identifying the copyright holders of the Third Party Components and the Third Party Terms that apply to them is available on the Avaya Support Web site: http://support.avaya.com/Copyright.
Preventing Toll Fraud
“Toll fraud” is the unauthorized use of your telecommunications system by an unauthorized party (for example, a person who is not a corporate employee, agent, subcontractor, or is not working on your company's behalf). Be aware that there can be a risk of Toll Fraud associated with your system and that, if Toll Fraud occurs, it can result in substantial additional charges for your telecommunications services.
Avaya Toll Fraud Intervention
If you suspect that you are being victimized by Toll Fraud and you need technical assistance or support, call Technical Service Center Toll Fraud Intervention Hotline at +1-800-643-2353 for the United States and Canada. For additional support telephone numbers, see the Avaya Support Web site: http://support.avaya.com. Suspected security vulnerabilities with Avaya products should be reported to Avaya by sending mail to: securityalerts@avaya.com.
Trademarks
The trademarks, logos and service marks (“Marks”) displayed in this site, the Documentation and Product(s) provided by Avaya are the registered or unregistered Marks of Avaya, its affiliates, or other third parties. Users are not permitted to use such Marks without prior written consent from Avaya or such third party which may own the Mark. Nothing contained in this site, the Documentation and Product(s) should be construed as granting, by implication, estoppel, or otherwise, any license or right in and to the Marks without the express written permission of Avaya or the applicable third party.
Avaya is a registered trademark of Avaya Inc.
All non-Avaya trademarks are the property of their respective owners, and “Linux” is a registered trademark of Linus Torvalds.
Downloading Documentation
For the most current versions of Documentation, see the Avaya Support Web site: http://support.avaya.com.
Contact Avaya Support
Avaya provides a telephone number for you to use to report problems or to ask questions about your Product. The support telephone number is 1-800-242-2121 in the United States. For additional support telephone numbers, see the Avaya Web site: http://support.avaya.com.
The following section details what is new in this document for Avaya Communication Server 1000 (Avaya
CS 1000) Release 7.5.
Navigation
• Features on page 15
• Other changes on page 15
Features
The Extend Local Calls (ELC) feature is introduced. The engineering calculations are updated
to support the ELC feature.
The Avaya Aura® Session Manager (SM) is supported. The SM can replace the Network
Routing Service (NRS) for most CS 1000E deployments. The Signaling Server algorithm
calculations are updated to include SM content.
Other changes
This section contains the following topic:
• Revision history on page 15
Revision history
March 2012 Standard 05.09. This document is up-issued to include information about
supported codecs and the MAS session connection call rate.
February 2012 Standard 05.08. This document is up-issued for changes in technical
content. The section Signaling Server capacity limits on page 238 is
updated.
November 2011 Standard 05.07. This document is up-issued to include an update to the
number of IP Attendant Consoles available for IP Media Services
sessions.
July 2011 Standard 05.06. This document is up-issued to update the Security Server
capacities table with MAS cph details.
June 2011 Standard 05.05. This document is up-issued to include the Avaya
Common Server (HP DL360 G7).
March 2011 Standard 05.04. This document is up-issued to include recommended fax
configurations in the Communication Server 1000 Release 7.5.
February 2011 Standard 05.03. This document is up-issued to remove legacy feature and
hardware content that is no longer applicable to or supported by
Communication Server 1000 systems.
November 2010 Standard 05.02. This document is issued to support Avaya
Communication Server 1000 Release 7.5.
November 2010 Standard 05.01. This document is issued to support Avaya
Communication Server 1000 Release 7.5.
March 2012 Standard 04.05. This document is up-issued to include updates to CSQI/
CSQO limits.
March 2012 Standard 04.04. This document is up-issued to include information about
supported codecs and the MAS session connection call rate.
October 2010 Standard 04.03. This document is up-issued to update the dedicated
Signaling Server capacity limits to support Avaya Communication Server
1000 Release 7.0.
August 2010 Standard 04.02. This document is up-issued to update planning and
engineering capacities, and the Signaling Server algorithm to support
Avaya Communication Server 1000 Release 7.0.
June 2010 Standard 04.01. This document is issued to support Avaya
Communication Server 1000 Release 7.0.
February 2010 Standard 03.08. This document is up-issued to replace Baystack with
Ethernet Routing Switch.
October 2009 Standard 03.07. This document is up-issued to include Media Gateway
Extended Peripheral Equipment Controller (MG XPEC) content.
September 2009 Standard 03.06. This document is up-issued to include Media Gateway
1010 content.
August 2009 Standard 03.05. This document is up-issued to update Memory-Related
parameters.
Visit the Avaya Web site to access the complete range of services and support that Avaya provides. Go
to www.avaya.com or go to one of the pages listed in the following sections.
Navigation
• Getting technical documentation on page 19
• Getting product training on page 19
• Getting help from a distributor or reseller on page 19
• Getting technical support from the Avaya Web site on page 20
Contents
This chapter contains the following topics:
Introduction on page 21
Engineering a new system on page 26
Engineering a system upgrade on page 26
Communication Server 1000 task flow on page 28
Enterprise Configurator on page 30
Introduction
Warning:
Before an Avaya Communication Server 1000E (Avaya CS 1000E) system can be installed,
a network assessment must be performed and the network must be VoIP-ready.
If the minimum VoIP network requirements are not met, the system will not operate
properly.
For information about the minimum VoIP network requirements and converging a data
network with VoIP, see Avaya Converging the Data Network with VoIP Fundamentals,
NN43001-260.
A switch must be engineered upon initial installation, during upgrades, and when traffic loads
change significantly or grow beyond the bounds anticipated when the switch was last
engineered. A properly engineered switch is one in which all components work within their
capacity limits during the busy hour.
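Busy-hour trunk provisioning of this kind is conventionally based on the Erlang B formula, which relates offered traffic, circuit count, and blocking probability. The following Python sketch illustrates the general calculation only; it is not one of the Avaya engineering algorithms specified in this document:

```python
def erlang_b(trunks: int, offered_erlangs: float) -> float:
    """Blocking probability for `trunks` circuits offered `offered_erlangs`
    of busy-hour traffic, using the numerically stable recurrence
    B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

def trunks_for_grade_of_service(offered_erlangs: float,
                                max_blocking: float = 0.01) -> int:
    """Smallest trunk count whose blocking meets the target (P.01 default)."""
    n = 1
    while erlang_b(n, offered_erlangs) > max_blocking:
        n += 1
    return n
```

For example, 10 Erlangs of busy-hour traffic at a P.01 grade of service (1 percent blocking) requires 18 trunks.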
This document is not intended to provide a theoretical background for engineering principles,
except to the extent required to make sense of the information. Furthermore, in order to control
complexity, technical details and data are sometimes omitted when the impact is sufficiently
small.
This document does not address the engineering or functionality of major features, such as
Automatic Call Distribution (ACD) or Network Automatic Call Distribution (NACD), and of
auxiliary processors and their applications, such as Symposium and Avaya CallPilot.
Guidelines for feature and auxiliary platform engineering are given in documents relating to
the specific applications involved. This document provides sufficient information to determine
and account for the impact of such features and applications upon the capacities of the system
itself.
Subject
Warning:
Before an Avaya CS 1000E system can be installed, a network assessment must be
performed and the network must be VoIP-ready.
If the minimum VoIP network requirements are not met, the system will not operate
properly.
For information about the minimum VoIP network requirements and converging a data
network with VoIP, see Avaya Converging the Data Network with VoIP Fundamentals,
NN43001-260.
This document provides the information necessary to properly engineer a Communication
Server 1000E (CS 1000E) system. There are two major purposes for using this document: to
engineer an entirely new system, and to evaluate a system upgrade.
The Enterprise Configurator provides an alternative to the manual processes given in this
document. It is beyond the scope of this document to describe the Enterprise Configurator
process.
Note on legacy products and releases
This document contains information about systems, components, and features that are
compatible with CS 1000 software. For more information about legacy products and releases,
click the Technical Documentation link under Support & Training on the Avaya home page:
http://www.avaya.com
Applicable systems
This document applies to the CS 1000E system.
When upgrading software, memory upgrades can be required on the Signaling Server, the Call
Server, or both.
Intended audience
This document is intended for system engineers responsible for engineering the switch and
the Avaya Technical Assistance Support personnel who support them. Engineers can be
employees of the end user, third-party consultants, or distributors.
The engineer responsible for system implementation should have several years of experience
with Avaya PBX systems.
Others who are interested in this information, or find it useful, are Sales and Marketing, Service
Managers, Account Managers, and Field Support.
Conventions
In this document, CS 1000E is referred to generically as system.
In this document, the following Chassis or Cabinets are referred to generically as Media
Gateway:
• Option 11C Mini Chassis (NTDK91) and Chassis Expander (NTDK92)
• Option 11C Cabinet (NTAK11)
• Avaya MG 1000E Chassis (NTDU14) and Expansion Chassis (NTDU15)
• Media Gateway 1010 (MG 1010) (NTC310)
• IPE module (NT8D37) with MG XPEC card (NTDW20)
In this document, the following cards are referred to generically as Gateway Controller:
• Media Gateway Controller (MGC) card (NTDW60 and NTDW98)
• Common Processor Media Gateway (CP MG) card (NTDW56 and NTDW59)
• Media Gateway Extended Peripheral Equipment Controller (MG XPEC) card (NTDW20)
In this document, the following hardware platforms are referred to generically as Server:
• Call Processor Pentium IV (CP PIV)
• Common Processor Pentium Mobile (CP PM)
• Common Processor Media Gateway (CP MG)
• Common Processor Dual Core (CP DC)
• Commercial off-the-shelf (COTS) servers
- IBM x306m server (COTS1)
- HP DL320 G4 server (COTS1)
- IBM x3350 server (COTS2)
- Dell R300 server (COTS2)
- HP DL360 G7 (Avaya Common Server)
In this document, the generic term COTS refers to all COTS servers. The term COTS1, COTS2,
or Common Server refers to the specific servers in the preceding list.
Co-res CS and SS is not supported on COTS1 servers (IBM x306m, HP DL320 G4).
The following table shows CS 1000 supported roles for various hardware platforms.
Table 1: Hardware platform supported roles
Note:
The CP MG card functions as both the Co-res CS and SS and the Gateway Controller while
occupying slot 0 in a Media Gateway.
Related information
This section lists information sources that relate to this document.
Documents
The following documents are referenced in this document:
• Avaya Feature Listing Reference, NN43001-111
• Avaya Signaling Server IP Line Applications Fundamentals, NN43001-125
• Avaya Network Routing Service Fundamentals, NN43001-130
• Avaya Converging the Data Network with VoIP Fundamentals, NN43001-260
• Avaya Electronic Switched Network Signaling and Transmission Guidelines,
NN43001-280
• Avaya Transmission Parameters, NN43001-282
• Avaya Dialing Plans Reference, NN43001-283
• Avaya Circuit Card Reference, NN43001-311
• Avaya IP Peer Networking Installation and Commissioning, NN43001-313
• Avaya Branch Office Installation and Commissioning, NN43001-314
• Avaya Linux Platform Base and Applications Installation and Commissioning,
NN43001-315
• Avaya SIP Line Fundamentals, NN43001-508
• Avaya Co-resident Call Server and Signaling Server Fundamentals, NN43001-509
• Avaya Automatic Call Distribution Fundamentals, NN43001-551
• Avaya System Management Reference, NN43001-600
• Avaya Access Control Management Reference, NN43001-602
• Avaya Software Input Output Administration, NN43001-611
• Avaya Security Management, NN43001-604
• Avaya Element Manager System Reference - Administration, NN43001-632
• Avaya Telephones and Consoles Fundamentals, NN43001-567
• Avaya IP Phones Fundamentals, NN43001-368
• Avaya ISDN Primary Rate Interface Fundamentals, NN43001-569
• Avaya Basic Network Feature Fundamentals, NN43001-579
• Avaya ISDN Basic Rate Interface Feature Fundamentals, NN43001-580
• Avaya Traffic Measurement Formats and Outputs Reference, NN43001-750
• Avaya Software Input Output Reference Maintenance, NN43001-711
• Avaya Communication Server 1000M and Meridian 1 Large System Planning and
Engineering, NN43021-220
• Avaya Communication Server 1000E Installation and Commissioning, NN43041-310
• Avaya Communication Server 1000E Software Upgrades, NN43041-458
• Avaya CallPilot Planning and Engineering, 555-7101-101
Online
To access Avaya documentation online, click the Technical Documentation link under
Support & Training on the Avaya home page:
http://www.avaya.com
Enterprise Configurator
The Enterprise Configurator (EC) is a global engineering and quotation tool to assist the site
engineer, sales person, or customer in engineering the switch. It is available in both stand-
alone and web-based versions. For users in North America and the Caribbean and Latin
America (CALA), it replaces Meridian Configurator and 1-Up. For users in Europe, Middle East,
and Africa (EMEA) countries, it replaces NetPrice.
The EC provides a simple "needs-based" provisioning model that allows for easy configuring
and quoting. The EC supports CS 1000E new system sales and upgrades by analyzing input
specifications for a digital PBX to produce a full range of pricing, engineering reports, and
graphics. These reports include equipment lists, cabling reports, software matrix, engineering
capacities, and pricing for currently available CS 1000E configurations. Graphics depict the
engineered platform, card slot allocations as well as loop assignments.
The EC runs on the user's Windows-based or MacOS personal computer. It uses standard
browser and Microsoft Office applications. For details on computer system requirements and
for user instructions, refer to the Avaya web site. Enterprise Configurator implements the
algorithms specified in this document for real time, memory, and physical capacities. It is the
official tool for determining whether a proposed configuration will meet the customer's capacity
requirements.
Where applicable, in this document, references are made to the EC inputs that correspond to
parameters being described.
Contents
This chapter contains the following topics:
System approval on page 33
Electromagnetic compatibility on page 34
Notice for United States installations on page 35
Notice for Canadian installations on page 37
Canadian and US network connections on page 38
Notice for International installations on page 39
Notice for Germany on page 40
System approval
The Avaya Communication Server 1000E (Avaya CS 1000E) system has approvals to be sold
in many global markets. Regulatory labels on the back of system equipment contain national
and international regulatory information.
Some physical components in systems have been marketed under different names in the past.
Previous naming conventions utilizing the terms Succession 1000 and CSE 1000 have been
harmonized to use the term Avaya CS 1000. Similarly, previous naming conventions utilizing
the terms Meridian and Option have been harmonized to use the term Meridian 1 PBX. Product
names based on earlier naming conventions can still appear in some system documentation
and on the system regulatory labels. From the point of view of regulatory standards compliance,
the physical equipment is unchanged. As such, all the instructions and warnings in the
regulatory sections of this document apply to the CS 1000M, CS 1000S, and CS 1000E
systems, as well as the Meridian, Succession 1000, and CSE 1000 systems.
Electromagnetic compatibility
Caution:
In a domestic environment, the system can cause radio interference. In this case, the user
can be required to take adequate measures.
Table 2: EMC specifications for Class A devices on page 34 lists the EMC specifications for
the system.
Table 2: EMC specifications for Class A devices
working correctly. If possible, the telephone company notifies you before they disconnect the
equipment. You are notified of your right to file a complaint with the FCC.
Your telephone company can make changes in its facilities, equipment, operations, or
procedures that can affect the correct operation of your equipment. If the telephone company
does make changes, they give you advance notice. With advance notice, it is possible for you
to make arrangements to maintain uninterrupted service.
If you experience trouble with your system equipment, contact your authorized distributor or
service center.
You cannot use the equipment on public coin service provided by the telephone company.
Connection to party line service is subject to state tariffs. Contact the state public utility
commission, public service commission, or corporation commission for information.
The equipment can provide access to interstate providers of operator services through the use
of Equal Access codes. Failure to provide Equal Access capabilities is a violation of the
Telephone Operator Consumer Services Improvement Act of 1990 and Part 68 of the FCC
Rules.
Make sure that the electrical ground connections of the power utility, telephone lines, and
internal metallic water pipe system, if present, connect together. This precaution is for the
users' protection, and is very important in rural areas.
Voltage:
DANGER OF ELECTRIC SHOCK
The system frame ground of each unit must be tied to a reliable building ground
reference.
Voltage:
DANGER OF ELECTRIC SHOCK
Do not attempt to make electrical ground connections yourself. Contact your local electrical
inspection authority or electrician to make electrical ground connections.
Supported interfaces
Analog interfaces are approved based on national or European specifications. Digital
interfaces are approved based on European specifications.
Safety specifications
The system meets the following European safety specifications: EN 60825, EN 60950, and
EN 41003.
Example: At ambient temperatures above 45°C (113°F), diskette and hard disk drives no
longer operate reliably. For a device installed in an enclosure, note that the internal ambient
temperature can, under certain circumstances, rise above the maximum external ambient
temperature.
USE AN ESD ANTISTATIC WRIST STRAP
Avaya recommends using an antistatic wrist strap and a dissipative foam pad during all
installation or upgrade work on the system. Electronic components, such as disk drives,
circuit boards, and memory modules, can be extremely sensitive to ESD. After removing a
component from the system or from its protective wrapper, place it flat on a grounded,
static-free surface and, in the case of a circuit board, component side up. Do not slide the
component back and forth across the surface.
If no ESD workstation is available, you can avoid ESD hazards by wearing an antistatic
wrist strap (available from electronics retailers). Place one end of the strap around your
wrist. Attach the grounding end (usually a piece of copper foil or an alligator clip) to an
electrical ground. This can be a piece of metal that leads directly to earth (for example, an
uncoated metal pipe) or a metal part of a grounded electrical device. An electrical device is
grounded if it has a three-prong plug connected to a grounded outlet. The system itself
cannot be used as a ground connection, because it is disconnected from the mains during
all work.
Voltage:
WARNING
Before carrying out these procedures, switch off the system power and disconnect the
system from the mains. If the power is not switched off before the system is opened, there
is a risk of personal injury and damage to the equipment. Hazardous voltages, currents, and
energy levels are present inside the unit. Hazardous voltages can be present at the
connection points of the power switches even when the switch is in the off position. Do not
operate the system with the enclosure cover removed. Always fit the enclosure cover before
switching on the system.
Because the manufacturer cannot foresee which devices will be used with this enclosure or
how this enclosure will be used, the system integrator and the installer are fully responsible
for ensuring that the entire finished system meets the safety requirements of UL/CSA/VDE
as well as the EMI/RFI emission limits.
Caution:
Danger of explosion if the lithium batteries are incorrectly replaced. Replace the batteries
only with the same or an equivalent type recommended by the manufacturer. Dispose of
used batteries according to the manufacturer's instructions.
Caution:
Do not service or replace the lithium batteries yourself on site. To have the batteries properly
serviced or replaced, contact your Avaya service representative.
Caution:
Do not mount the chassis at the top of the rack. A top-heavy rack can tip over, damaging
equipment and injuring personnel.
To avoid injury to personnel or damage to equipment, the following steps should be carried
out by two people.
1. Slide the chassis into the front of the rack.
2. Secure the chassis with screws. (For details about the recommended screw types,
contact the manufacturer of the rack.)
3. Ensure that the power switch (ON/OFF) on the chassis is set to OFF (O). If your
system is equipped with a voltage selector switch, set the switch to the operating
voltage appropriate for your location.
Voltage:
WARNING
Before servicing the chassis, disconnect the power cord from the mains to reduce the risk
of electric shock and other possible hazards.
Contents
This chapter contains the following topics:
Introduction on page 45
Data network planning for VoIP on page 46
100BaseTx IP connectivity on page 48
Introduction
Warning:
Before an Avaya Communication Server 1000E (Avaya CS 1000E) system can be installed,
a network assessment must be performed and the network must be VoIP-ready.
If the minimum VoIP network requirements are not met, the system will not operate
properly.
For information about the minimum VoIP network requirements and converging a data
network with VoIP, see Avaya Converging the Data Network with VoIP Fundamentals,
NN43001-260.
The data network's infrastructure, engineering, and configuration are critical to achieve
satisfactory IP Telephony voice quality. A technical understanding of data networking and Voice
over IP (VoIP) is essential for optimal performance of the Avaya CS 1000E system.
See Avaya Converging the Data Network with VoIP Fundamentals, NN43001-260 for detailed
information about network requirements. These requirements are critical to the system Quality
of Service (QoS).
To evaluate requirements for the VoIP network, review network topology, feature capabilities,
and protocol implementations. Measure redundancy capabilities of the network against
availability goals with the network design recommended for VoIP.
Evaluate the overall network capacity to ensure that the network meets overall capacity
requirements. Overall capacity requirements must not impact existing network and application
requirements. Evaluate the network baseline in terms of the impact on VoIP requirements.
To ensure that both VoIP and existing network requirements are met, it can be necessary to
add one or more of the following:
• memory
• bandwidth
• features
QoS planning
An IP network must be engineered and provisioned to achieve high voice quality performance.
It is necessary to implement QoS policies network-wide to ensure that voice packets receive
consistent and proper treatment as they travel across the network.
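One common network-wide QoS mechanism is Differentiated Services marking: voice packets carry the Expedited Forwarding code point (DSCP 46) so that switches and routers can queue them ahead of best-effort traffic. The following generic Python sketch shows how an application can mark a UDP socket's traffic; it illustrates the mechanism only and is not a CS 1000E configuration step:

```python
import socket

DSCP_EF = 46  # Expedited Forwarding, the code point commonly used for voice media

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The DSCP value occupies the upper six bits of the IP TOS byte,
# so it is shifted left by two before being written.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
```

Marking at the endpoint is only effective if the switches and routers along the path are configured to honor the code point.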
IP networks that treat all packets identically are called "best-effort networks". In a best-effort
network, traffic can experience varying amounts of delay, jitter, and loss at any time. This can
produce speech breakup, speech clipping, pops and clicks, and echo. A best-effort network
does not guarantee that bandwidth is available at any given time. Use QoS mechanisms to
ensure bandwidth is available at all times, and to maintain consistent, acceptable levels of loss,
delay, and jitter.
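The jitter referred to above is commonly quantified with the interarrival jitter estimator defined in RFC 3550: a running average of the variation in packet transit time. The following sketch illustrates the general formula on matched send and receive timestamps; it is not specific to any Avaya component:

```python
def interarrival_jitter(send_times, recv_times):
    """RFC 3550 interarrival jitter: J += (|D| - J) / 16, where D is the
    change in transit time between consecutive packets. Timestamps may be
    in any unit; the jitter is returned in that same unit."""
    jitter = 0.0
    prev_transit = None
    for sent, received in zip(send_times, recv_times):
        transit = received - sent
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16
        prev_transit = transit
    return jitter
```

A stream with a constant transit time yields zero jitter; any variation in transit time raises the running average, which decays as packets arrive on schedule again.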
For planning details for QoS, see Avaya Converging the Data Network with VoIP
Fundamentals, NN43001-260.
100BaseTx IP connectivity
Between the Call Server and Media Gateway, the CS 1000E supports 100BaseTx IP
connectivity or campus data network connectivity. Campus data network connectivity is
provided through ELAN and Layer 2 switches.
To satisfy voice quality requirements, adhere to applicable engineering guidelines. See Avaya
Converging the Data Network with VoIP Fundamentals, NN43001-260 for details. Contact the
local Data Administrator to obtain specific IP information.
Contents
This chapter contains the following topics:
Main components and architecture on page 51
Communication Server 1000E Call Server on page 56
Media Gateway on page 66
Signaling Server on page 84
Terminal Server on page 90
Layer 2 switch on page 91
Power over LAN (optional) on page 92
Telephones on page 93
Component dimensions on page 94
• The Signaling Server provides the CS 1000E system with SIP/H.323 signaling between
components. The total number of Signaling Servers required depends on capacity and
survivability levels (see Signaling Server on page 234).
• The Layer 2 switch provides the CS 1000E system with additional ports to transmit data
packets to devices interconnected by Ethernet to the ELAN or TLAN subnets (see Layer
2 switch on page 91).
• The Terminal Server is an option that provides the CS 1000E system with additional serial
ports for applications and maintenance. For more information about the MRV Terminal
Server, see Terminal Server on page 90.
CS 1000E systems can be configured for Standard Availability, High Availability (system
redundancy), or Co-resident Call Server and Signaling Server (Co-res CS and SS).
Figure 4: CP PM Standard Availability on page 53 shows the typical main components of a
Standard Availability CP PM Communication Server 1000E solution.
Figure 5: Co-resident Call Server and Signaling Server on page 54 shows the typical main
components of a Co-resident Call Server and Signaling Server CS 1000E solution.
Figure 6: CP PM High Availability on page 55 shows the typical main components of a High
Availability CP PM equipped CS 1000E solution.
Figure 7: CP PIV High Availability on page 56 shows the typical main components of a High
Availability CP PIV equipped CS 1000E solution.
Important:
A CP PIV-equipped CS 1000E cannot be configured for Standard Availability. CS 1000E
systems equipped with a stand-alone CP PM processor can be configured for Standard
Availability, or upgraded to High Availability with an additional CP PM processor and software
package 410 HIGH_AVAIL HIGH AVAILABILITY. CS 1000E systems with a Co-resident Call
Server and Signaling Server cannot be configured for High Availability. The remainder of
this chapter discusses each component in further detail.
The Communication Server 1000E system supports various types of hardware platforms. You
must ensure that your hardware platform can support your target CS 1000E configuration. For
more information about supported roles for each hardware platform, see Table 1: Hardware
platform supported roles on page 24.
Functional description
The Call Servers provide the following functionality:
• provide the main source of call processing
• process all voice and data connections
• control telephony services
• control circuit cards installed in Media Gateways
• provide resources for system administration and user database maintenance
Operating parameters
The CS 1000E can be equipped as Standard Availability (single Call Server) or High Availability
(dual Call Servers, Core 0 and Core 1) to provide a fully redundant system. The CP PIV supports
High Availability only. The Co-resident Call Server and Signaling Server does not support High
Availability.
Core 0 and Core 1 can operate in redundant mode over the High Speed Pipe (HSP) with
software package 410 HIGH_AVAIL HIGH AVAILABILITY: one runs the system while the other
runs in a warm standby mode, ready to take over system control if the active Call Server
fails.
The system configuration and user database are synchronized between the active and inactive
Call Servers. This lets the inactive Call Server assume call processing in the event of failure
of the active Call Server.
The Call Server uses a proprietary protocol to control the Media Gateways. This proprietary
protocol is similar to the industry-standard Media Gateway Control Protocol (MGCP) and H.248
protocols.
CS 1000E Call Servers can control up to 50 Media Gateways.
The Call Servers provide connectivity to telephony devices using IP signaling through Media
Gateways rather than by direct physical connections.
The CS 1000E system supports lineside T1 (NT5D14) and lineside E1 (NT5D34) cards. For
further information about T1/E1 lineside cards, see Avaya Circuit Card Reference,
NN43001-311.
Similar to the set of core circuit cards used in CS 1000M Large System, each CP PIV Call
Server contains the following:
• CP PIV Call Processor card
• System Utility card
In addition, each Call Server is equipped with the following modules:
• Power supply module
• Alarm/fan module
The CP PIV Call Processor card (NT4N39AA) is the main processor for the Call Server,
controlling all call processing and telephony services. It also provides the system memory
required to store operating software and customer data.
The CP PIV Call Processor card provides the following connectors:
• The Com 1 port is an RS232 serial port you directly connect to a system terminal for
system access. You can optionally connect the Com 1 port to an IP-based Terminal
Server, which provides standard serial ports for system maintenance and third-party
applications (for more information, see Terminal Server on page 90).
• The Com 2 port is an additional RS-232 port (for system maintenance only).
• The LAN 1 Ethernet port connects the Call Server to the Embedded LAN (ELAN) subnet
through an ELAN Layer 2 switch to provide IP connections between the Call Server,
Signaling Servers, and Media Gateways. The port is a 10/100/1000 Mb/s autonegotiating
port.
• The LAN 2 Ethernet port connects Call Server 0 to Call Server 1 over a 1 Gbps
autonegotiating high speed pipe to provide communication and database synchronization.
• The USB port is not supported by the CS 1000E system and cannot be used.
The System Utility card (NT4N48) provides auxiliary functions for the Call Server.
The minimum vintage for the System Utility card with CS 1000E is NT4N48BA.
System Utility card functions include:
• LCD display for system diagnostics
• interface to the Call Server alarm monitor functions
• Core-selection DIP switches to specify Call Server 0 or Call Server 1
• software security device holder
The software security device enables the activation of features assigned to the CS 1000E
system. The security device for a CS 1000E Call Server is similar to the one used on a CS
1000M Large System.
Filler Blank
The filler blank covers the disk carrier slot used in the older CP PII-based system. The
blank supports the blue LEDs that illuminate the logo.
The AC power supply module (NTDU65) is the main power source for the Call Server and is
field-replaceable.
Alarm/fan module
The alarm/fan module (NTDU64) provides fans for cooling the Call Server and provides status
LEDs indicating the status of Call Server components. The alarm/fan module is field-
replaceable.
CP PM chassis
The CP PM is a circuit card that you insert in a Media Gateway. For more information, see
Media Gateway on page 66 . For information about upgrading Option 11C equipment to
support the CP PM card, see Avaya Communication Server 1000E Upgrades,
NN43041-458.
For information about upgrading CS 1000M IPE modules to support CS 1000E Server cards,
see Avaya Communication Server 1000M and Meridian 1 Large System Planning and
Engineering, NN43021-220 and Avaya Communication Server 1000E Installation and
Commissioning, NN43041-310.
CP PM card
The Common Processor Pentium Mobile (CP PM) card can be configured as a Call Server.
The CP PM offers features similar to those of the CP PIV processor, but uses an IPE slot form
factor, allowing for a CS 1000E product with only a single Media Gateway chassis. The CP PM
card can also be configured as a stand-alone Signaling Server, or a Co-resident Call Server
and Signaling Server.
Figure 11: CP PM card NTDW61 on page 64 shows the CP PM faceplate and CP PM circuit
card. The NTDW61 CP PM card is designed for use in Media Gateway IPE slots.
The NTDW99 CP PM card contains a metal faceplate that provides enhanced EMC
containment. The NTDW99 CP PM card is designed for use in Media Gateway 1010 chassis
slots 22 and 23.
• Two Compact Flash sockets: 1 GB fixed media disk (FMD) on the card and a hot swap
removable media disk (RMD) accessible on the faceplate
• 1 GB DDR RAM (expandable up to 2 GB)
• Three Ethernet ports (TLAN, ELAN, HSP)
• One USB 2.0 port, for future use
• Security device, housed on board
For more information about Co-res CS and SS, see Avaya Co-resident Call Server and
Signaling Server fundamentals, NN43001-509. For information about installing or configuring
Co-res CS and SS applications, see Avaya Linux Base and Applications Installation and
Commissioning, NN43001-315.
Media Gateway
Media Gateways provide basic telephony media services, including tone detection, generation,
and conference to CS 1000E telephones. The Media Gateway houses IPE circuit cards and
connectors for access to the Main Distribution Frame. The Media Gateway with Media Gateway
Controller (MGC) supports digital trunk and PRI access to the PSTN and to other PBX systems.
The Media Gateway also supports Avaya Integrated Applications, including Integrated
Recorded Announcer. It can also provide connectivity for digital and analog (500/2500-type)
telephones as well as analog trunks for telephone and fax.
Functional description
The Media Gateway provides the following functionality:
• tones, conference, and digital media services (for example, Music and Recorded
Announcement) to all phones
• support for CallPilot and Avaya Integrated Applications
• direct physical connections for analog (500/2500-type) phones, digital phones, and fax
machines
Circuit cards
The following circuit cards are supported in Media Gateways:
• One Gateway Controller card is required in each Media Gateway chassis or cabinet. See
Media Gateway Controller (MGC) card on page 68 or Common Processor Media
Gateway (CP MG) card on page 71 for Gateway Controller card details.
• An MG XPEC card in each IPE module. See Media Gateway Extended Peripheral
Equipment Controller (MG XPEC) card on page 72 for card details.
• Server cards (CP PM, CP DC, CP MG).
• Voice Gateway Media Cards. See Voice Gateway Media Card on page 81 for card
details.
• Intelligent Peripheral Equipment (IPE) cards. See Operating parameters on page 67 for
specific cards supported on Media Gateways.
For more information about circuit cards, see Avaya Circuit Card Reference, NN43001-311.
Security device
The security device on the Media Gateway is a generic security device that allows Media
Gateways to register with the CS 1000E Call Servers.
Control for the activation of features assigned to the CS 1000E system, including Media
Gateways, is provided by the security device on the System Utility card in CP PIV systems,
and the security dongle on all other supported hardware platforms.
For more information about security devices, see Avaya Security Management,
NN43001-604.
Operating parameters
The Media Gateway operates under the direct control of the Call Server. Up to 50 Media
Gateways can be configured on the Call Server.
To allow IP Phones to access digital media services, you must configure Media Gateways with
Digital Signal Processor (DSP) ports. The MGC with DSP daughterboards can provide up to
128 DSP ports, or 256 DSP ports when configured as a PRI Gateway. The CP MG card is
available with 32 or 128 DSP ports. You can install Voice Gateway Media Cards into a Media
Gateway to provide additional DSP ports beyond the DSP port limit of the Gateway Controller
card.
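As an illustration only, the DSP port figures above can be turned into a rough sizing sketch. The capacity numbers come from this section; the function name and the sizing formula are illustrative assumptions, not an Avaya planning tool, and assume extra capacity is added with 32-channel Voice Gateway Media Cards (MC32S).

```python
import math

# Onboard DSP capacity of the Gateway Controller options quoted in this
# chapter (MGC with daughterboards, MGC as PRI Gateway, CP MG variants).
GATEWAY_CONTROLLER_DSP = {
    "MGC": 128,
    "MGC-PRI": 256,
    "CP MG 32": 32,
    "CP MG 128": 128,
}

MC32S_CHANNELS = 32  # channels per MC32S Voice Gateway Media Card

def extra_media_cards(required_ports, controller):
    """Return how many 32-channel Voice Gateway Media Cards are needed
    to cover DSP ports beyond the controller's onboard limit."""
    onboard = GATEWAY_CONTROLLER_DSP[controller]
    shortfall = max(0, required_ports - onboard)
    return math.ceil(shortfall / MC32S_CHANNELS)

print(extra_media_cards(160, "MGC"))        # 1: 32 ports beyond 128 onboard
print(extra_media_cards(100, "CP MG 128"))  # 0: fits onboard
```

Actual DSP requirements depend on traffic engineering (see the engineering chapters referenced later in this document), so treat this only as a worst-case starting point.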
The Media Gateways support the following circuit cards and applications:
• Voice Gateway Media Cards: transcode between the IP network and digital circuit cards
• Service cards: provide services such as Music or Recorded Announcements (RAN)
• Analog interfaces to lines and trunks: support analog (500/2500-type) phones and fax,
analog PSTN trunks, and external Music or RAN sources
• Analog trunk cards
• Digital line cards: support digital terminals, such as attendant consoles, M2000 and Avaya
3900 Series Digital Deskphones, and external systems that use digital line emulation,
such as Avaya CallPilot Mini
• Digital PSTN Interface Cards, including E1, T1, and ISDN Basic Rate interfaces: provide
access to PSTN
• CLASS Modem card (XCMC)
• DECT Mobility cards
• Avaya Integrated Applications, including:
- Integrated Conference Bridge
- Integrated Call Assistant
- Integrated Call Director
- Integrated Recorded Announcer
- Hospitality Integrated Voice Services
- MGate cards for CallPilot
- CallPilot IPE
Digital Trunks, PRI and BRI, and DECT Mobility Cards are only supported in Media Gateways
with MGC.
The MGC card occupies the system controller slot 0 in a Media Gateway chassis.
The MGC card provides a gateway controller for IP Media Gateways in a CS 1000E system.
The MGC only functions as a gateway controller under control of a CS 1000E Call Server.
The MGC card supports up to 128 DSP ports with two expansion sites that accommodate
digital signal processor daughterboards. The 128-port DSP daughterboard NTDW78, 96-port
DSP daughterboard NTDW64, and 32-port DSP daughterboard NTDW62 can be installed on the
MGC. The DSP daughterboards support VoIP voice gateway resources on the MGC, reducing
the number of separate Voice Gateway Media Cards required.
The MGC DSP daughterboard security feature provides an infrastructure to allow endpoints
capable of SRTP/SRTCP to engage in secure media exchanges. The media security feature
can be configured by the administrator or, optionally, by the end user. This feature provides for
the exchange of cryptographic material needed by the SRTP-capable endpoints to secure
media streams originating from those endpoints.
For more information about Media Security or SRTP, see Avaya Security Management,
NN43001-604.
DSP daughterboards include the voice gateway (VGW) application; they do not include the
Terminal Proxy Server (TPS) application. DSP daughterboards cannot be used for load sharing
of IP Phones from Signaling Servers, or as a backup TPS in case of failures.
Figure 12: Media Gateway Controller card (NTDW60) on page 70 shows the MGC faceplate
and MGC circuit card (with two DSP daughterboards installed).
The MGC card (NTDW98) contains a metal faceplate for enhanced EMC containment. The
NTDW98 MGC card is designed for use in a Media Gateway 1010 chassis.
The MGC card (without expansion daughterboards) includes the following components and
features:
• ARM processor
• 128 MB RAM
• 4 MB boot flash
• Internal Compact Flash (CF) card mounted on the card. It appears to the software as a
standard ATA hard drive
• Embedded Ethernet switch
• Six 100BaseT Ethernet ports for connection to external networking equipment
• Four-character LED display on the faceplate
• Two PCI Telephony Mezzanine Card form factor sites for system expansion
Important:
The MGC is a gateway controller that replaces the SSC in a Media Gateway. It also reduces
the need for separate Voice Gateway Media Cards with the use of onboard DSP
daughterboards. The MGC-based Media Gateway supports PRI/PRI2/DTI/DTI2 trunks, BRI
trunks, D-channels, and clock controllers. The MGC remote SDI feature reduces the need
for separate Terminal Servers.
For more information about the MGC card, see Avaya Circuit Card Reference,
NN43001-311.
The Common Processor Media Gateway (CP MG) card integrates a Common Processor, a
Gateway Controller, and non-removable Digital Signal Processor (DSP) resources into a single
card for use in a Communication Server 1000E system. The CP MG card design is based on
the CP PM card and the MGC card with DSP daughterboards. The CP MG card is available in
two versions:
• NTDW56BAE6 - CP MG card with 32 DSP ports
• NTDW59BAE6 - CP MG card with 128 DSP ports
The CP MG card provides improvements in port density and cost reductions by functioning as
a Co-resident Call Server and Signaling Server (Co-res CS and SS) and a Gateway Controller
with DSP ports while only occupying one slot in a Media Gateway cabinet or chassis. The CP
MG card occupies the system controller slot 0 in a Media Gateway.
The CP MG card includes the following components and features:
• Intel EP80579 integrated processor, 1200 MHz (Common Processor)
• 2 GB DDR2 RAM (expandable to 4 GB)
The MG XPEC card is a dual assembly that consists of a motherboard and a daughterboard.
Each board of the dual assembly contains 192 non-removable Digital Signal Processor (DSP)
ports. The MG XPEC card is essentially equivalent to two Media Gateway Controller (MGC)
cards. The MG XPEC motherboard controls slots 0 to 7 on the left half of the IPE module, and
the MG XPEC daughterboard controls slots 0 to 7 on the right half of the IPE module.
For information about converting an IPE module into Media Gateways with the MG XPEC card,
see Avaya Communication Server 1000M and Meridian 1 Large System Planning and
Engineering, NN43021-220.
Network connections
The ELAN of the Media Gateway can reside in a separate Layer 3 subnet from that of the Call
Server ELAN. When connecting the Media Gateway to the ELAN through a Layer 3 switch, the
connection from the Call Server to the Media Gateway must have a round-trip delay of less
than 80 msec and a packet loss of less than 0.5% (0% recommended).
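The thresholds above can be captured in a small validation sketch. This is not an Avaya utility: the limits come from this section, while the function name and the idea of feeding in measured averages (for example, from a long ping run) are illustrative assumptions.

```python
# Connectivity thresholds for a Call Server to Media Gateway link
# routed through a Layer 3 switch, per the figures quoted above.
RTT_LIMIT_MS = 80.0    # round-trip delay must stay below 80 msec
LOSS_LIMIT_PCT = 0.5   # packet loss must stay below 0.5% (0% recommended)

def link_meets_requirements(rtt_ms, loss_pct):
    """Check measured link statistics against the Media Gateway
    connectivity requirements stated in this section."""
    return rtt_ms < RTT_LIMIT_MS and loss_pct < LOSS_LIMIT_PCT

print(link_meets_requirements(42.0, 0.0))  # True: within both limits
print(link_meets_requirements(95.0, 0.0))  # False: delay too high
```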
Figure 14: Redundant network connections with MGC Dual Homing on page 74 is a
schematic representation of redundant network connections for Media Gateways with MGC.
The Call Servers, Signaling Servers, switches, and chassis can be any of the supported
types.
The separate LAN subnets that connect the Media Gateway and the Call Server to the
customer IP network are as follows:
• ELAN: The ELAN subnet (100BaseT, full-duplex) is used to manage signaling traffic
between the Call Server, Signaling Server, and Media Gateways. The ELAN subnet
isolates critical telephony signaling between the Call Servers and the other
components.
• TLAN: The TLAN subnet (100BaseT, full-duplex) is used to manage voice and signaling
traffic. It connects the Signaling Server and Voice Gateway Media Cards to the customer
LAN. It also isolates the IP Telephony node interface from broadcast traffic.
The HSP (high speed pipe) is a 1000BaseT connection used to provide standby Call Server
redundancy. The HSP provides connectivity for High Availability if two CP PM or CP PIV Call
Servers are connected through the HSP, and the Campus Redundancy software package 410
HIGH_AVAIL HIGH AVAILABILITY has been purchased. The HSP is not supported on Co-res
CS and SS configurations.
The MG 1000E chassis supports an optional Media Gateway Expander through copper
connections for a total of eight slots.
Physical description
The following sections describe the front and rear components of the MG 1000E (NTDU14).
Front components
Figure 15: Front components in the MG 1000E (NTDU14) on page 75 shows the Media
Gateway with the front cover removed. Note the following:
• The DIP switches configure telephone ringing voltages, ringing frequencies, and message
waiting voltages.
• The 100BaseT bulkhead ports 1 and 2 provide MGC daughterboard ports with
connections to rear bulkhead ports.
Rear components
Figure 16: Rear components in the MG 1000E on page 76 shows the rear components on
the Media Gateway. Note the following:
• The AC power cord connector provides AC connection to the Media Gateway.
• AUX extends Power Failure Transfer Unit (PFTU) signals to the Main Distribution Frame
(MDF).
• GND is used for ground cable termination.
• 100BaseT bulkhead ports 1 and 2 provide connections from IP daughterboard ports on
the MGC card to other system components.
• The Attachment Unit Interface (AUI) is used with cards that require a Media Access Unit
(MAU).
• The AUI is used with MGC cards as a clock reference. When the AUI connection is used
with an MGC, the Ethernet link and speed LEDs do not function.
• The serial port connects to maintenance terminals.
• DS-30X and CE-MUX interconnect the Media Gateway to the Media Gateway
Expander.
• 25-pair connectors extend the IPE card data to the MDF.
Physical description
Figure 17: Media Gateway Expander (NTDU15) on page 77 shows the Media Gateway
Expander (NTDU15).
Rear components
Figure 18: Rear components in the Media Gateway Expander on page 78 shows the rear
components in the Expander. Note the following:
• The AC power cord connector provides an AC connection to the Expander.
• GND is used for ground cable termination.
• DS-30X and CE-MUX are used to interconnect the Media Gateway and the Expander.
• 25-pair connectors are used to extend IPE card data to the MDF.
Physical description
The Media Gateway 1010 chassis (NTC310AAE6) consists of:
• MG 1010 rack mount kit (NTC316AAE6)
• backplane assembly (NTC31002)
• Media Gateway Utility (MGU) card (NTC314AAE6)
• power supply, maximum of two with load sharing (NTC312AAE6)
• blower fans, N+1 arrangement for redundant cooling (NTC320AAE6)
• air filter (NTC315AAE6)
• front cover with EMC containment and a window to view status LEDs
• MG 1010 serial cable kit (NTC325AAE6)
Metal-faceplate Server cards and Gateway Controller cards are required for enhanced EMI
containment. Avaya recommends using the metal-faceplate CP PM card (NTDW99) and MGC
card (NTDW98) in an MG 1010. All CP MG and CP DC cards contain metal faceplates.
The following sections describe the front and rear components of the MG 1010 (NTC310).
Front components
Figure 19: Front components in the MG 1010 on page 79 shows the Media Gateway 1010
without the front cover. Note the following:
• Ten IPE card slots
• Two Server card slots
• One Gateway Controller card slot
• One Media Gateway Utility (MGU) card provides LED status, ringing, message waiting
voltage, dual homing Ethernet cable ports, and serial cable ports
• One metal divider in chassis to separate MGU, Server cards, and Gateway Controller
card from the IPE cards.
Figure 20: MG 1010 front cover on page 80 shows the MG 1010 with the front cover. Note
the following:
• Window to view LED status of all cards
• Decorative cover provides additional EMC shielding
• Two locking latches in top corners of front cover.
Rear components
Figure 21: Rear components of the MG 1010 on page 81 shows the rear components of the
MG 1010. Note the following:
• Hot swappable redundant power supplies
• Hot swappable fans in a redundant N + 1 configuration for chassis cooling
• One DECT connector
• One AUX connector
• Ten MDF connectors
The MC32S provides 32 channels of IP-TDM connectivity between an IP device and a TDM
device in the CS 1000 network. The MC32S is an IPE form factor card and can interwork with
other voice gateway application cards, such as the MGC card.
The MC32S Media Security feature provides an infrastructure to allow endpoints capable of
SRTP/SRTCP to engage in secure media exchanges. The media security feature can be
configured by the administrator or, optionally, by the end user. This feature provides for the
exchange of cryptographic material needed by the SRTP-capable endpoints to secure media
streams originating from those endpoints.
For more information about Media Security or SRTP, see Avaya Security Management,
NN43001-604.
For more information about Media Card features or the IP Line application, see Avaya Signaling
Server IP Line Applications Fundamentals, NN43001-125.
Signaling Server
Main role
The Signaling Server provides SIP and H.323 signaling between components in a CS 1000E
system.
The hardware platforms supported as stand-alone Signaling Servers are CP PM, CP DC, and
Commercial off-the-shelf (COTS) Servers. Available COTS servers are IBM x306m, IBM
x3350, HP DL320 G4, HP DL360 G7, and Dell R300 servers.
The Communication Server 1000E Linux Platform Base includes many operational,
performance, and security hardening updates. The User Access Control (UAC) introduces
eight Linux groups to define user privileges. Central Authentication provides user
authentication across the security domain with a single password. The Emergency Account
allows you to log on through the Command Line Interface (CLI) if both the Primary and
Secondary UCM are offline. Secure File Transfer Protocol (SFTP) is the default file transfer
protocol. You must explicitly identify FTP users; all users can use SFTP.
The Linux Platform Base operating system installs on the Signaling Server and can run multiple
applications, including:
• SIP and H.323 Signaling Gateways
• Terminal Proxy Server (TPS)
• Network Routing Service (NRS)
• SIP Line Gateway (SLG)
• Element Manager
• Application Server for Personal Directory (PD), Callers List (CL), Redial List (RL), and
Unicode Name Directory (UND) for UNIStim IP Phones
SIP Line Gateway includes:
• SIP Line (SIPL)
• SIP Management Service, an Element Manager (EM) system management interface you
use to configure and manage the SIP Line Service.
NRS includes:
• H.323 Gatekeeper
• SIP Proxy Server
The Common Processor Dual Core (CP DC) card can replace the Common Processor Pentium Mobile (CP PM) card. The CP DC card contains a
dual core AMD processor and upgraded components which can provide improvements in
processing power and speed over the CP PM card. The CP DC card requires the Linux Base
Operating System, and supports Co-resident Call Server and Signaling Server, or stand-alone
Signaling Server configurations.
The CP DC card is available in two versions:
• NTDW53AAE6 - single slot metal faceplate CP DC card (CS 1000E).
• NTDW54AAE6 - double slot metal faceplate CP DC card (CS 1000M).
The CP DC card provides performance improvements in MIPS, maximum memory capacity,
and network transfer rate, and occupies one IPE slot in a Media Gateway.
Software applications
The following software components operate on the Signaling Server:
• Terminal Proxy Server (TPS)
• SIP Gateway (Virtual Trunk)
• SIP Line Gateway (SLG)
- SIP Line
- SIP Management Service
• H.323 Gateway (Virtual Trunk)
• H.323 Gatekeeper
• Network Routing Service (NRS)
- SIP Redirect Server
- SIP Proxy Server
- SIP Registrar
- NRS Manager
• CS 1000 Element Manager
• Application Server for the Personal Directory, Callers List, Redial List, and Unicode Name
Directory features
Signaling Server software elements can coexist on one Signaling Server or reside individually
on separate Signaling Servers, depending on traffic and redundancy requirements for each
element.
For descriptions of the function and engineering requirements of each element, see Table 43:
Elements in Signaling Server on page 234. For detailed Signaling Server engineering rules
and guidelines, see Signaling Server algorithm on page 271. For more information about
H.323, SIP Trunking, NRS, and SIP Proxies, see Avaya IP Peer Networking Installation and
Commissioning, NN43001-313 and Avaya Network Routing Service Fundamentals,
NN43001-130.
For more information about SIP Line and IP Line, see Avaya SIP Line Fundamentals,
NN43001-508 and Avaya Signaling Server IP Line Applications Fundamentals,
NN43001-125.
Functional description
The Signaling Server provides the following functionality:
• provides IP signaling between system components on the LAN
• enables the Call Server to communicate with IP Phones and Media Gateways
• supports key software components (see Software applications on page 88)
Operating parameters
The Signaling Server provides signaling interfaces to the IP network using software
components that run on the Linux Base operating system. For more information about the
Signaling Server Linux Base, see Avaya Linux Platform Base and Applications Installation and
Commissioning, NN43001-315.
The Signaling Server can be installed in a load-sharing, survivable configuration.
The total number of Signaling Servers that you require depends on the capacity and
redundancy level that you require (see Signaling Server calculations on page 284).
Terminal Server
The MRV IR-8020M IP-based Terminal Server provides the Call Server with standard serial
ports for applications and maintenance.
Important:
A CS 1000E configured with a Gateway Controller does not require a separate Terminal
Server. The Gateway Controller provides serial ports for connectivity.
Physical description
Figure 25: Terminal Server on page 90 shows the Terminal Server.
Hardware components
The MRV Terminal Server provides 20 console ports with modular RJ-45 connectors. It is also
equipped with one RJ-45 10BaseT connection for network interface to the ELAN subnet and
an internal modem to provide remote access.
Operating parameters
A CS 1000E configured with a Gateway Controller does not require a separate Terminal Server.
The Gateway Controller provides serial ports for connectivity.
Traditionally, serial ports are used to connect terminals and modems to a system for system
maintenance. In addition, many third-party applications require serial port interfaces to connect to
a PBX. Because the Call Server provides only two local serial ports for maintenance purposes,
an IP-based Terminal Server is required to provide the necessary serial ports.
The Terminal Server provides standard serial ports for applications. These applications include
billing systems that analyze Call Detail Recording (CDR) records, Site Event Buffers (SEB)
that track fault conditions, and various legacy applications such as Property Management
System (PMS) Interface and Intercept Computer applications. In addition, serial ports are used
to connect system terminals for maintenance, modems for support staff, and printers for system
output.
The Terminal Server is configured to automatically log in to the active Call Server at start-up.
For this reason, each Call Server pair requires only one Terminal Server. Customers can
configure up to 16 TTY ports for each Call Server pair.
The Terminal Server can be located anywhere on the ELAN subnet. However, if the Terminal
Server is used to provide local connections to a Com port on the Call Server, it must be
collocated with the system.
The Terminal Server can also be used as a central point to access and manage several devices
through their serial ports.
Important:
Currently, the CS 1000E only supports the MRV IR-8020M commercial Terminal Server.
Layer 2 switch
The Layer 2 switch transmits data packets to devices interconnected by Ethernet to the ELAN
or TLAN subnets. The switch only directs data to the target device, rather than to all attached
devices.
Physical description
Operating parameters
These components must be supplied by the customer. For more information, see Avaya
Converging the Data Network with VoIP Fundamentals, NN43001-260.
Telephones
The CS 1000E system supports the following:
• IP Phones
- IP Phone 2001
- IP Phone 2002
- IP Phone 2004
- Avaya 2007 IP Deskphone
- Avaya 1120E IP Deskphone
- Avaya 1140E IP Deskphone
- Avaya 1150E IP Deskphone
- Avaya 1210 IP Deskphone
- Avaya 1220 IP Deskphone
- Avaya 1230 IP Deskphone
- Avaya 2050 Mobile Voice Client (MVC)
- Avaya 2050 IP Softphone
- Avaya 2033 IP Conference Phone
- WLAN Handset 2210, 2211 and 2212
- IP Phone Key Expansion Module (KEM)
• analog (500/2500-type) telephones
• digital deskphones
• attendant consoles
• DECT handsets
• 802.11 Wireless LAN terminals
Component dimensions
All rack mount components fit in 19-inch racks. Table 4: Height dimension of CS 1000E
components on page 94 lists the height of each rack mount component. COTS Servers
require the use of four-post racks.
The Option 11C cabinet is a supported CS 1000E enclosure. Option 11C cabinets are 25 in.
high by 22 in. wide. They can be wall mounted or floor mounted, but are not rack mountable.
Table 4: Height dimension of CS 1000E components
Component Height
NTDU62 CP PIV Call Server 3U
Signaling Server (COTS) 1U
NTDU14 Media Gateway <5U
NTDU15 Media Gateway Expander <5U
NTC310 Media Gateway 1010 9U
MRV Terminal Server 1U
Ethernet Routing Switch 2526T 1U
Ethernet Routing Switch 2550T 1U
1 U = 4.4 cm (1-3/4 in.)
The clearance in front of rack-mounted equipment is the same for all major components. For
the Call Servers, Media Gateways, and Media Gateway Expanders, the distance from the
mounting rails of the rack to the front of the bezel/door is 7.6 cm (3 in.).
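Table 4 can be used for a quick rack-space estimate. The sketch below is illustrative only (not an Avaya tool): the heights and the 1 U = 4.4 cm conversion come from Table 4, while the function name is an assumption and the "<5U" entries are rounded up to 5 U for a worst-case figure.

```python
U_CM = 4.4  # 1 U = 4.4 cm (1-3/4 in.), from Table 4

# Heights in rack units from Table 4; "<5U" entries rounded up to 5.
HEIGHT_U = {
    "CP PIV Call Server": 3,
    "Signaling Server (COTS)": 1,
    "Media Gateway": 5,
    "Media Gateway Expander": 5,
    "Media Gateway 1010": 9,
    "MRV Terminal Server": 1,
    "Ethernet Routing Switch": 1,
}

def rack_space(components):
    """Return (total rack units, total height in cm) for the given list."""
    total_u = sum(HEIGHT_U[name] for name in components)
    return total_u, round(total_u * U_CM, 1)

u, cm = rack_space(["CP PIV Call Server", "Media Gateway", "Media Gateway Expander"])
print(u, cm)  # 13 57.2
```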
Contents
This chapter contains the following topics:
Introduction on page 95
Fax/Modem pass through on page 96
Recommendations for Fax configuration in the CS 1000 system on page 98
Option 1: Campus-distributed Media Gateways on page 101
Option 2: Campus Redundancy on page 102
Option 3: Branch Office on page 103
Option 4: Geographic Redundancy Survivable Media Gateway on page 104
Introduction
The IP-distributed architecture of the Avaya Communication Server 1000E (Avaya CS 1000E)
enables flexibility when it comes to component location. Given this flexibility, the Avaya CS
1000E offers many configuration options to support increased system redundancy.
The CS 1000E can be deployed in LAN and WAN environments. Most deployments fall into
one of the following categories:
• Multiple buildings in a campus
- Campus-distributed Media Gateways
- Campus Redundancy
• Multiple sites
- Central Call Server with Branch Office
- Geographic Redundancy
- Geographic Redundancy Survivable Media Gateway
These configurations provide CS 1000E systems with many options for redundancy and
reliability. Careful planning is required to determine the configuration that is appropriate for
your needs.
The following sections describe each of these configuration options.
Note:
CLS MPTA and MPTD are included in LD 10 for analog line card units.
For information about feature packaging requirements see Table 6: Feature packaging
requirements on page 97.
Table 6: Feature packaging requirements
Modem traffic
The CS 1000E supports modem traffic in a campus-distributed network with the following
characteristics:
• Media Card configuration:
- G.711 codec
- 20 msec packet size
• one-way delay less than 5 msec
• low packet loss
• V.34 rate (33.6 Kbps)
Performance degrades significantly with packet loss.
Modem and fax performance improves when the Modem/Fax Pass Through Allowed (MPTA)
class of service is configured on an analog phone TN. The Call Server then sets up the call with
a G.711 codec and no Voice Activity Detection (VAD). MPTA eliminates the Call Server
disconnections and slow modem and fax data rates that were caused by closing the voice
connection and reconnecting with the T.38 codec.
The CS 1000E supports Modem Pass Through and Super G3 (SG3) fax at V.34 (33.6 Kbps).
When you configure MPTA for a TN, the DSP for that TN uses only the G.711 codec with no
VAD.
Packet loss and latency in modem pass-through mode can reduce the connection rate and
throughput, and can drop calls.
If you plan to route modem calls through analog trunks, you must configure the line cards for
the modems and the analog trunks in the same Media Gateway. Delay issues can also be
addressed by configuring the modem for 2400 baud or lower, provided customer needs can be
met at that rate.
In large systems that require modems and a large number of trunks, Avaya recommends digital
trunks and hardware modems only. Do not deploy software modems across different
Media Gateways, regardless of the trunk type. Replace software modems with hardware
modems or other IP interfaces.
The following hardware modems have been tested with the Communication Server 1000E
using digital trunks:
• U.S. Robotics 5637
• U.S. Robotics 5685E
• U.S. Robotics 5699B
Consider the guidelines listed above when upgrading to a CS 1000E system from an
Option 11C or CS 1000M. Some configuration changes may be necessary.
Important:
Avaya has conducted extensive but not exhaustive tests of modem-to-modem calls, data
transfers, and file transfers between a CS 1000E and Media Gateway, using Virtual Trunks
and PRI tandem trunks. While all tests have been successful, Avaya cannot guarantee that
all modem brands will operate properly over all G.711 Voice over IP (VoIP) networks. Before
deploying modems, test the modem brand within the network to verify reliable operation.
Contact your system supplier or your Avaya representative for more information.
Important:
Fax performance at higher speeds (33.6 Kbps) requires that all network elements are
properly engineered to support it. When high-speed fax calls cannot be completed with a
consistent success rate, Avaya recommends setting the fax units to a lower speed (14.4
Kbps).
Typical scenarios
Typical scenarios for faxing in a CS 1000E solution are:
• two faxes connected to analog lines in the same Media Gateway Controller (MGC)
• two faxes connected to analog lines in different MGCs of the same CS 1000E system
• one fax connected to an analog line in an MGC, to an IP trunk, to a remote system with
a fax
• one fax connected to an analog line in an MGC, to an analog trunk in the same MGC,
and then to a TN (Local or Long Distance (LD)) fax
• one fax connected to an analog line in an MGC, to a digital trunk in the same MGC, and
then to a TN (Local or LD) fax
• one fax connected to an analog line in an MGC, to a digital trunk in a different MGC of
the same CS 1000E system, to a TN (Local or LD) fax
Note:
The following scenario is not supported for faxing: one fax connected to an analog line in an
MGC, to an analog trunk in a different MGC of the same CS 1000E system, to a TN (Local or
LD) fax.
Important:
Performance degrades significantly with packet loss (must be less than 0.5%) and when the
delay (round trip) is greater than 50 ms and the mean jitter is greater than 5 ms.
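The thresholds stated above (packet loss below 0.5%, round-trip delay no greater than 50 ms, mean jitter no greater than 5 ms) can be captured in a simple pre-deployment check. The following is an illustrative Python sketch, not Avaya software; the function name and the example measurement values are assumptions:

```python
# Sketch: validate measured network QoS against the fax pass-through
# thresholds stated in this section: packet loss < 0.5%,
# round-trip delay <= 50 ms, mean jitter <= 5 ms.

def qos_ok_for_fax(loss_pct: float, rtt_ms: float, jitter_ms: float) -> bool:
    """Return True if the measured path meets the stated thresholds."""
    return loss_pct < 0.5 and rtt_ms <= 50.0 and jitter_ms <= 5.0

# Example measurements (hypothetical)
print(qos_ok_for_fax(loss_pct=0.1, rtt_ms=30.0, jitter_ms=2.0))  # within thresholds
print(qos_ok_for_fax(loss_pct=1.2, rtt_ms=30.0, jitter_ms=2.0))  # excessive loss
```

In practice the measurement inputs would come from your own network monitoring tools, sampled on the path between the fax endpoints.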
Important:
Avaya conducted extensive but not exhaustive tests of fax calls in different scenarios. While
all tests succeeded, Avaya cannot guarantee that all fax brands can operate properly over
all G.711 Voice over IP (VoIP) networks. Before you deploy faxes, test the fax within the
network to verify reliable operation. Contact your system supplier or your Avaya
representative for more information.
Option 1: Campus-distributed Media Gateways
In this configuration, a CS 1000E system is installed at the main site, and additional Media
Gateways and an optional Signaling Server are installed at a second campus site. All IP
Phones are configured and managed centrally from the main site.
For details on the specific operating and network parameters for the Media Gateway, see
Media Gateway on page 66.
Option 2: Campus Redundancy
In a Campus Redundancy configuration, the ELAN subnet and the subnet of the High Speed
Pipe (HSP) are extended between the two Call Servers using a dedicated Layer 2 Virtual LAN
configured to meet specified network parameters. Figure 29: Campus Redundancy configuration on page 103
shows a CS 1000E system in a Campus Redundancy configuration. For more information, see
Avaya System Redundancy Fundamentals, NN43001-507.
Option 3: Branch Office
In this configuration, the Branch Office Media Gateway is survivable. This ensures that
telephone service remains available if the main office fails. For more information about Branch
Office, see Avaya Branch Office Installation and Commissioning, NN43001-314. For more
information about Survivable Media Gateway, see Avaya System Redundancy Fundamentals,
NN43001-507.
Option 4: Geographic Redundancy Survivable Media Gateway
For more information about Geographic Redundancy Survivable Media Gateway, see Avaya
System Redundancy Fundamentals, NN43001-507.
If a connection to the primary Call Server cannot be established, the Media Gateway reboots
and registers to its configured alternate Call Server 1.
During a WAN failure, the Media Gateway reboots and registers to its configured alternate Call
Server 2. Once the alternate Call Server connection is established, the Media Gateway can
provide service to the resources in its own area.
When the Media Gateway is registered to any of the alternate Call Servers, it continues to poll
configured Call Servers. When the primary Call Server is detected, the Media Gateway can
automatically switch back to register with the primary Call Server if the registration switching
policy is defined as automatic. Switching policy can also be set to manual and the Media
Gateway remains registered to an alternate Call Server until a command is entered.
With Voice Gateway Media Card Triple Registration, the Voice Gateway Media Card can
register with the primary Call Server, alternate Call Server 1, or alternate Call Server 2. The
Voice Gateway Media Card is configured with three IP addresses: primary, alternate 1, and
alternate 2. The IP addresses of the three Call Servers must be defined on the Media Card
level. To avoid the Media Gateway and Voice Gateway Media Card registering on different
alternate Call Servers during a primary Call Server failure, the Media Gateway sends a
message to Voice Gateway Media Card in each Media Gateway to register with the same
alternate Call Server that the Media Gateway is registered with.
Contents
This chapter contains the following topics:
Introduction
Reliability in the Avaya Communication Server 1000E (Avaya CS 1000E) system is based on:
Network failure
Figure 33: Network failure with Survivable Media Gateway on page 110 illustrates a network
failure with Survivable Media Gateway.
NRS failure
Figure 35: NRS failure on page 112 illustrates an NRS failure.
NRS redundancy
Figure 35: NRS failure on page 112 depicts a distributed environment where the TPS and NRS
software reside with Call Server A and Call Server B on their own Signaling Server.
The NRS, TPS, and Gateway software can all reside on a single Signaling Server. Furthermore,
primary software, the TPS, and the SIP and H.323 Gateways can all reside on Call Server A,
while the second instance of NRS software can reside on a separate Signaling Server with the
TPS.
CS 1000E networks are equipped with at least one NRS to provide management of the network
numbering plan for private and public numbers. An optional redundant NRS can be installed
in the network. This alternate NRS automatically synchronizes its database with the primary
NRS periodically.
When planning NRS survivability strategies, install a second or redundant NRS. If the primary
NRS fails, the alternate NRS assumes control. The Gateways time out and register to the
alternate NRS. Network calls resume.
In addition to NRS redundancy, SIP and H.323 Gateway interfaces can withstand
communication loss to both NRS by reverting to a locally cached copy of the Gateway
addressing information. Since this cache is static until one NRS becomes accessible, it is only
intended for a brief network outage.
The NRS can be configured as primary, alternate, or Fail-safe. If both NRS fail or a network
outage to an NRS occurs, the Gateways route calls using cached data until communication to
the NRS resumes.
Resiliency Scenario 1
IP Phone 2004 A1 and A2 are talking over the LAN and Call Server A fails.
What happens to the call in progress?
The call stays up until the Media Gateway is finished rebooting, and then the call is dropped.
Describe what happens:
Media Gateway A reboots and, if it is configured as an alternate Call Server, it begins taking
over all call processing. The Signaling Server reregisters to the alternate Call Server so service
can be restored for all IP Phones.
Minutes before the call described in the situation can be initiated:
1.5 minutes for the Media Gateway reboot plus switchover timer. (Default for switchover timer
is 2 minutes.)
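The recovery time quoted above is simply the reboot time plus the switchover timer. As a worked example using the values from this scenario (with the default 2-minute switchover timer):

```python
# Worked example: time before a new call can be initiated after the
# Call Server fails, per the scenario above.
reboot_min = 1.5       # Media Gateway reboot time
switchover_min = 2.0   # default switchover timer
print(reboot_min + switchover_min)  # 3.5 minutes
```

A shorter configured switchover timer reduces this window proportionally.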
Resiliency Scenario 2
IP Phone 2004 A1 and IP Phone 2004 B2 are talking and Call Server A fails.
What happens to the call in progress?
Same as Scenario 1.
The call stays up until the Media Gateway is finished rebooting, and then it is dropped.
Describe what happens:
Same as Scenario 1.
Media Gateway A reboots and if it is configured as an alternate Call Server, it begins taking
over all call processing. The Signaling Server reregisters to the alternate Call Server so service
can be restored for all IP Phones.
Minutes before the call described in the situation can be initiated:
Same as Scenario 1.
1.5 minutes for reboot plus switchover timer. (Default for switchover timer is 2 minutes.)
Resiliency Scenario 3
IP Phone 2004 A1 is talking to someone locally or off-net over a PSTN trunk in Media Gateway
A, and Call Server A fails.
What happens to the call in progress?
Same as Scenario 1.
The call stays up until the Media Gateway is finished rebooting, and then it is dropped.
Describe what happens:
Same as Scenario 1.
Media Gateway A reboots and if it is configured as an alternate Call Server, it begins taking
over all call processing. The Signaling Server reregisters to the alternate Call Server so service
can be restored for all IP Phones.
Minutes before the call described in the situation can be initiated:
Same as Scenario 1.
1.5 minutes for reboot plus switchover timer. (Default for switchover timer is 2 minutes.)
Resiliency Scenario 4
IP Phone 2004 A1 and IP Phone 2004 B2 are talking and Signaling Server A fails. A redundant
Signaling Server is configured on Site A.
What happens to the call in progress?
• 50% of calls on Site A stay up for 2.5 minutes, and then are dropped.
• The other 50% of telephones registered to the redundant Signaling Server on Site A do
not drop the call.
Describe what happens:
• IP Phone A1 (that is, 50% of calls) reboots and then reregisters with the redundant
Signaling Server.
• The other 50% have no impact on the calls in progress and the telephones stay registered
to the redundant Signaling Server.
Minutes before the call described in the situation can be initiated:
• 2 to 5 minutes depending on number of telephones (2 minutes for all telephones to realize
the first Signaling Server is not responding, and then all telephones from the first Signaling
Server reboot and start registering with the redundant Signaling Server). At this stage,
100% of telephones from Site A are registered to the redundant Signaling Server.
• Not applicable for other 50% of telephones.
Resiliency Scenario 5
IP Phone 2004 A1 and IP Phone 2004 A2 are talking and Signaling Server A fails. A redundant
Signaling Server is configured on Site A.
What happens to the call in progress?
Same as Scenario 4.
• 50% of the calls stay up for 2.5 minutes, and then are dropped.
• Other 50% of telephones registered to the redundant Signaling Server do not drop the
call.
Describe what happens:
Same as Scenario 4.
• 50% of telephones on Site A1 reboot and then reregister with the redundant Signaling
Server.
• Other 50% are unaffected and have no impact on the calls in progress. Telephones stay
registered to the redundant Signaling Server. At this stage, 100% of the telephones from
Site A are registered to the redundant Signaling Server.
Minutes before the call described in the situation can be initiated:
Same as Scenario 4.
• 2 to 5 minutes depending on number of telephones (2 minutes for all telephones to realize
the first Signaling Server is not responding, and then all the telephones reboot and start
registering with redundant Signaling Server).
• Not applicable for other 50% of the telephones.
NRS failure
Resiliency Scenario 6
IP Phone 2004 A1 and IP Phone 2004 B2 are talking and the primary NRS fails. An alternate
NRS is configured on Site B. Assume the primary NRS is a stand-alone box (without a TPS).
What happens to the call in progress?
The calls in progress are unaffected.
Describe what happens:
The alternate NRS takes over as Active NRS after the 30-second polling timer expires.
There is also a Time to Live timer for the H.323 endpoints registered to the Gatekeeper. This
timer is usually configured to be shorter than the polling timer, and it is user configurable.
Minutes before the call described in the situation can be initiated:
New calls are established after the following occur:
• the 30-second polling timer expires
• the alternate NRS switches over to the Active NRS
• the Time to Live timer expires
Resiliency Scenario 7
IP Phone 2004 A1 and IP Phone 2004 B2 are talking and the primary NRS (Signaling Server)
fails. Assume the primary NRS is Co-resident with the Signaling Server TPS on Site A. An
alternate NRS is configured on Site B. Assume the alternate NRS is Co-resident with Signaling
Server TPS on Site B. A redundant Signaling Server is configured on Site A.
What happens to the call in progress?
Similar to Scenario 4.
• 50% of the calls on Site A stay up for 2.5 minutes, and then are dropped.
• Other 50% of the telephones registered to the redundant Signaling Server on Site A do
not drop the call.
• Calls in progress are unaffected by the NRS switchover. If transient calls (for example,
calls in ringing stage) exist, they are dropped due to the Signaling Server switchover.
Describe what happens:
• 50% of telephones on Site A (that is, 50% of the calls) reboot and then reregister with the
redundant Signaling Server.
• Other 50% have no impact on the calls in progress and telephones stay registered to the
redundant Signaling Server.
• The alternate NRS takes over as Active NRS after the 30-second polling timer expires.
• There is also the Time to Live timer for the H.323 endpoints to the Gatekeeper. This Time
to Live timer is usually configured shorter than the 30-second polling timer. This timer is
also user configurable. The Virtual Trunks from the first Signaling Server register to the
redundant Signaling Server like the telephones.
Minutes before the call described in the situation can be reinitiated:
2 to 5 minutes depending on the number of telephones (2 minutes for all telephones to realize
the first Signaling Server is not responding, and then all telephones from the first Signaling
Server reboot and start registering with redundant Signaling Server). At this stage, 100% of
the telephones from Site A are registered to the redundant Signaling Server.
For the other 50% of the telephones already registered to the redundant Signaling Server, new
calls are established after the following occur:
• the 30-second polling timer expires
• the alternate NRS switches over to the Active NRS
• the Time to Live timer expires
Resiliency Scenario 8
IP Phone 2004 A1 and IP Phone 2004 B2 are talking and both the primary and alternate NRS
fail (both are stand-alone NRS).
What happens to the call in progress?
Same as scenario 6.
Resiliency Scenario 9
IP Phone 2004 C1 and C2 are talking over the LAN and Call Server A fails.
What happens to the call in progress?
The call stays up until Media Gateway A is finished rebooting, and then it is dropped.
Describe what happens:
Media Gateway A reboots at the Main Office site and acts as an alternate Call Server at Site
A. The Branch Office telephones on Signaling Server A register with the alternate Call
Server.
Minutes before the call described in the situation can be initiated:
1.5 minutes for reboot plus switchover timer. (Default for timer is 2 minutes.)
Resiliency Scenario 10
IP Phone 2004 C1 and A2 are talking and Call Server A fails.
What happens to the call in progress?
Same as Scenario 9.
The call stays up until Media Gateway A is finished rebooting, and then it is dropped.
Describe what happens:
Same as Scenario 9.
Media Gateway A reboots at the Main Office site and acts as an alternate Call Server at Site
A. The Branch Office telephones on Signaling Server A register with the alternate Call
Server.
Minutes before the call described in the situation can be initiated:
Same as Scenario 9.
1.5 minutes for reboot plus switchover timer. (Default for timer is 2 minutes.)
Resiliency Scenario 11
IP Phone 2004 C1 and C2 are talking over the LAN and Signaling Server A fails.
What happens to the call in progress?
The call stays up for 2.5 minutes, and then it is dropped.
Describe what happens:
C1 and C2 reboot and register with the branch office Signaling Server. The telephones are
redirected back to the Main Office to register with the redundant Signaling Server.
Minutes before the call described in the situation can be initiated:
2 to 6 minutes; 2 to 5 minutes to reboot C1 and C2, plus the extra minute for redirection.
Resiliency Scenario 12
IP Phone 2004 C1 and A2 are talking over LAN and Signaling Server A fails.
What happens to the call in progress?
The call stays up for 2.5 minutes, and then it is dropped.
Describe what happens:
A2 reboots and registers with the redundant Signaling Server at the Main Office. C1 reboots,
registers with the branch office Signaling Server, and then is redirected to register with the
redundant Signaling Server at the Main Office. This assumes telephones are registered to the
failing Signaling Server in this scenario. If 50% of telephones were registered to the surviving
Signaling Server, telephones and calls would proceed as per normal healthy operation.
Minutes before the call described in the situation can be initiated:
For telephone A2, 2 to 5 minutes depending on the number of telephones (2 minutes for all
telephones to realize the first Signaling Server is not responding, and then all telephones from
the first Signaling Server reboot and start registering with the redundant Signaling Server). At
this stage, 100% of telephones from Site A are registered to the redundant Signaling Server.
For telephone C1, 2 to 6 minutes. The extra minute is needed to register to the branch office
Signaling Server and then be redirected back to the Main Office.
Not applicable for the other 50% of telephones if registered to the redundant Signaling
Server.
Resiliency Scenario 13
IP Phone 2004 C1 and C2 at the branch office are talking and the WAN data network
connection to the Main Office goes down.
What happens to the call in progress?
The call stays up for 2.5 minutes, and then it is dropped.
Describe what happens:
Telephones C1 and C2 reboot and then reregister with the Signaling Server at the branch
office.
Minutes before the call described in the situation can be initiated:
Minimum of 1 minute after the call is dropped. The time depends on the number of Branch
Office telephones. It is approximately 6 minutes for 400 telephones.
Resiliency Scenario 14
IP Phone 2004 C1 and A2 are talking and the WAN data network connection to the Main Office
goes down.
What happens to the call in progress?
The speech path is lost as soon as the network connection is down.
Describe what happens:
A2 stays registered with Signaling Server A. C1 reboots and registers with Signaling Server at
the branch office.
Minutes before the call described in the situation can be initiated:
Calls between Site A and Site C over IP can be placed only after the WAN connection is
restored. Calls routed over PSTN trunks can be completed as soon as the IP Phones reboot.
Resiliency Scenario 15
IP Phone 2004 C1 is talking to someone off-net over a PSTN trunk in Branch Office C and Call
Server A fails.
Resiliency Scenario 16
IP Phone 2004 C1 is talking to IP Phone 2004 C2 and the branch office Signaling Server
fails.
What happens to the call in progress?
No impact on the call in progress.
Describe what happens:
No impact on existing or future Branch Office IP to IP calls in progress.
Minutes before the call described in the situation can be initiated:
Not applicable.
Resiliency Scenario 17
IP Phone 2004 C1 is talking to someone off-net over a PSTN trunk in Branch Office C and the
Signaling Server C (branch office) fails. (The behavior is the same as IP Phone 2004 A1 talking
to someone off-net over a PSTN trunk in Media Gateway B and Signaling Server B fails.)
What happens to the call in progress?
No impact on the call in progress.
Describe what happens:
Telephone C1 is registered to the TPS at the Main Office site. A Virtual Trunk (SIP or H.323)
session is initiated between the Signaling Server at the Main Office site and the Signaling
Server at the branch office. With the loss of the Signaling Server at the branch office, the SIP
or H.323 session fails. All idle Virtual Trunks become idle unregistered. Virtual Trunks that are
busy on established calls also become unregistered, but they remain busy until the calls are
released.
Minutes before the call described in the situation can be initiated:
If there is no redundant Signaling Server in the branch office, calls of this type cannot be
initiated until the Signaling Server is reestablished. The call would, in this instance, be routed
out over an alternative PSTN route.
Resiliency Scenario 18
A digital telephone in the branch office is talking to someone off-net over a PSTN trunk in
Branch Office C and the Signaling Server C (branch office) fails.
What happens to the call in progress?
No impact on the call in progress.
Describe what happens:
The call from the digital telephone proceeds as normal. The Signaling Server does not
participate in this call.
Minutes before the call described in the situation can be initiated:
Not applicable.
Resiliency Scenario 19
A digital telephone in the Main Office is talking to someone off net over a PSTN trunk in Branch
Office C and Signaling Server A (Main Office) fails. A redundant Signaling Server is installed
at Site A. (This is the same as a digital telephone in Media Gateway A talking to someone off-
net over a PSTN trunk in Media Gateway B and Signaling Server A fails.)
What happens to the call in progress?
No impact on the call in progress.
Describe what happens:
The call from the digital telephone proceeds as normal. When the Signaling Server at Site A
fails, the Virtual Trunk (SIP or H.323) session required to continue the call remains up. All idle Virtual
Trunks become idle unregistered and then register with the redundant Signaling Server
installed at Site A. Virtual Trunks that are busy on established calls also become unregistered,
but they remain busy. There is no impact on the media path between the DSP connected to
digital telephone in the Main Office and that connected to the PSTN trunk. When the call is
released by the user, the Virtual Trunk in the Main Office becomes idle, and then registers with
the redundant Signaling Server installed at Site A.
Minutes before the call described in the situation can be initiated:
The call from the digital telephone proceeds as normal with no delay. The redundant Signaling
Server at Site A initiates the Virtual Trunk (SIP or H.323 session) required to complete the
call.
Resiliency Scenario 20
A digital telephone in the Main Office Site A is talking to someone off-net over a PSTN trunk
in Branch Office C and Signaling Server A (Main Office) fails. No redundant Signaling Server
is installed at Site A. PSTN is configured as an alternate route. (This is the same as a digital
telephone in Media Gateway A talking to someone off-net over a PSTN trunk in Media Gateway
B and Signaling Server A fails.)
What happens to the call in progress?
No impact to the call in progress.
Describe what happens:
All idle Virtual Trunks become idle unregistered. Virtual Trunks that are busy on established
calls also become unregistered, but they remain busy until the calls are released. There is no
impact on the media path between the DSP connected to the digital telephone and that
connected to the PSTN trunk.
Minutes before the call described in the situation can be initiated:
The call from the digital telephone proceeds as normal. The PSTN from the Main Office site is
used as an alternative route to complete the call.
Resiliency Scenario 21
A digital telephone in the Main Office Site A is talking to a digital telephone in Branch Office C
and Signaling Server A (Main Office) fails. No redundant Signaling Server is installed at Site
A. PSTN is configured as an alternate route. (This is the same as digital telephone in Media
Gateway A talking to digital telephone in Media Gateway B and Signaling Server A fails.)
What happens to the call in progress?
No impact to the call in progress.
Describe what happens:
All idle Virtual Trunks become idle unregistered. Virtual Trunks that are busy on established
calls also become unregistered, but they remain busy until the calls are released. There is no
impact on the media path between the DSP connected to digital telephone in Main Office Site
A and that connected to digital telephone in Branch Office C.
Minutes before the call described in the situation can be initiated:
The call from the digital telephone proceeds as normal with no delay. The PSTN is used as an
alternative route to complete the call.
Resiliency Scenario 22
A digital telephone in the Main Office Site A is talking to IP Phone C1 in Branch Office C and
Signaling Server A (Main Office) fails. There is no redundant Signaling Server installed at Site
A.
What happens to the call in progress?
The call stays up for 2.5 minutes on average, and then it is dropped. The time varies because
of the watchdog timer on the IP Phone.
Describe what happens:
IP Phones reboot and attempt to register to Signaling Server A.
Minutes before the call described in the situation can be initiated:
IP calls between Site A and Site C are offline and can resume once the Signaling Server is
restored.
a different building, then you can use VLANs to keep IP addresses on the same logical subnet.
For further implementation details, see Avaya Converging the Data Network with VoIP
Fundamentals, NN43001-260.
If nodes reside in different Media Gateways, the nodes can be configured to register to
different alternate Call Servers. This approach optimizes system reliability against possible
system outages. Associate each IP telephony node with an appropriate (for example,
collocated) alternate Call Server.
If the node IDs are configured using the guidelines for the 'Enhanced Redundancy for IP Line
Nodes' feature, then the IP Phones can register (if needed) to an alternate node on a Media
Gateway Expander. This further improves the survivability of the IP Phones by allowing them
to register to a different node should a system outage occur on their primary node's Media
Gateway.
For a description of the enhanced redundancy for IP Line nodes, see Avaya Signaling Server
IP Line Applications Fundamentals, NN43001-125.
Multiple D-channels
Avaya recommends that you do not split the Primary and Backup D-channels of the same ISDN
Trunk Group across multiple GR/CR CS 1000E Media or PRI Gateways. While this
configuration ensures D-channel redundancy during some Primary D-channel failure
situations, states could exist where both D-channels register to different Call Servers and
activate simultaneously, creating a conflict in the Central Office. This conflict can affect service
and can lead to a complete ISDN Trunk Group outage in most service provider Central
Offices.
If your service provider supports ISDN Trunk Group hunting, Avaya recommends that you
maintain multiple ISDN Trunk Groups with each ISDN service provider. Configure each Trunk
Group with its own Primary and Backup D-channels on PRI circuits in each Media Gateway.
This solution offers resilient configuration in larger systems distributed geographically and
operates well even if your service provider is unable to support a D-channel for each ISDN PRI
circuit.
Avaya can provide VoIP Session Border Controllers as an alternative to large scale ISDN
Trunking facilities. This solution offers improved flexibility in deployment and resiliency
performance. For more information, see www.avaya.com/support.
Contents
This chapter contains the following topics:
Installation planning on page 129
Milestone chart on page 130
Evaluating existing telephony infrastructure on page 130
Telephony planning issues on page 131
Numbering plans on page 132
DTI/PRI clocking on page 133
Clocking operation on page 141
Installation and configuration on page 145
Installation planning
Use Table 7: Installation planning on page 129 as a guide to prepare a detailed plan for every
installation.
Table 7: Installation planning
Procedure: Requirements
Research: Determine requirements for fire protection and safety, the equipment room,
grounding and power, and cables.
Site planning: Select a site with suitable qualifications. Develop the site to meet requirements.
Prepare the building cabling plan.
Delivery and installation preparation: Perform pre-installation inspections. Examine the
delivery route. Review equipment handling precautions. Gather all delivery items.
Milestone chart
Site preparation activities are easier to plan and monitor when a milestone chart is used. A
milestone chart is a general schedule that shows all required activities in order, with a start and
end date for each. Individual operations and an overall installation schedule should both be
represented. Table 8: Milestone chart on page 130 lists typical activities in a milestone chart.
For a complex site, a more detailed chart can be required.
Table 8: Milestone chart
Step Action
1 Select the site and complete planning activities.
• Plan fire prevention and safety features.
• Plan the equipment room layout.
• Plan grounding and power.
• Plan cable routes and terminations.
• Plan and start any renovations to the equipment room.
3 Complete construction and ensure that grounding and power are in place.
• Test air conditioning and heating systems.
• Make equipment delivery arrangements.
• Complete equipment room inspection, identifying and resolving any delivery
constraints.
The Telecom infrastructure analysis examines the products, services, and features used in the
existing environment, including:
• PBX systems and locations
• system and network level features
• existing dial plan
• supported applications
• key systems
• PBX inter-connectivity
• telephone users and features
• PSTN trunking
Applications
For information about Avaya CallPilot, Symposium, and other applications, see the following:
• Avaya Automatic Call Distribution Fundamentals, NN43001-551
• CallPilot 555-7101-xxx series publications
• Symposium 297-2183-xxx series publications
• Remote Office 555-8421-xxx series publications
• MDECT 553-3601-xxx series publications
• other applications publications
Access
For information about signaling (ISDN PRI, E1R2, CCS, and CAS), see the following:
• Avaya ISDN Primary Rate Interface Fundamentals, NN43001-569
• Avaya ISDN Basic Rate Interface Feature Fundamentals, NN43001-580
For information about FXS, FXO, or ground/loop start COT trunks, see Avaya Circuit Card
Reference, NN43001-311.
Numbering plans
A CS 1000E network can use many numbering plans, depending upon dialing preferences and
configuration management requirements. Primary options include:
• Uniform Dialing Plan (UDP)
• Coordinated Dialing Plan (CDP)
• Zone Based Dialing (ZBD)
• Transferable Directory Numbers (TNDN)
See Avaya Network Routing Service Fundamentals, NN43001-130 for information about the
following:
• the Network Routing Service (NRS) and how it performs address translation
• numbering plans
• call routing
• zoning plans
• collaborative servers
For more information about dialing plans, see Avaya Dialing Plans Reference,
NN43001-283.
DTI/PRI clocking
When digital signals are transported over a digital communication link, the receiving end must
operate at the same frequency as the originating end to prevent data loss; this is called link
synchronization. If one end of a communication link is not synchronized, data bit slips occur
and data loss results. To ensure reliable data transfer, accurate timing is important and
synchronized timing is critical.
When only two PBX systems interconnect in an isolated private network, the two systems can
operate in master-slave mode to achieve synchronization. In master-slave mode, one system
derives its timing from the other. Slips can be lessened by forcing all systems to use a common
reference clock through a network clocking hierarchy, shown in Figure 38: Hierarchical
synchronization on page 134.
Synchronization methods
There are two common methods of maintaining timing synchronization between switching
systems:
• Plesiochronous operation
• Mesochronous operation
Plesiochronous operation
In plesiochronous mode, nodal clocks run independently (free-run) at the same nominal
frequency. Frequency differences between clocks result in frame slips. The magnitude of frame
slips is directly proportional to the difference in frequency. Slips, though inevitable, can be
minimized by using stable clocks and elastic stores or buffers. The buffers absorb data bits to
compensate for slight variances in clock frequencies.
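The proportionality between clock offset and slip rate can be made concrete. A DS-1 frame is 193 bits repeated 8000 times per second, so a fractional frequency offset between two clocks accumulates one full frame of drift, and therefore one controlled slip, at a predictable interval. The sketch below is illustrative only; the stratum accuracy figures are commonly cited bounds, not values taken from this document.

```python
# Illustrative sketch (not from the source document): estimating controlled-slip
# rates on a DS-1 link from the fractional frequency offset between two clocks.
# A DS-1 frame is 193 bits sent 8000 times per second (1.544 Mb/s).

FRAMES_PER_SECOND = 8000  # DS-1 frame rate

def seconds_per_slip(fractional_offset):
    """Seconds between frame slips for a given clock offset (e.g. 1e-6 = 1 ppm)."""
    # Phase drifts by `fractional_offset` frames every frame time; a full
    # 193-bit frame of drift accumulates after 1 / (offset * frame rate) seconds.
    return 1.0 / (fractional_offset * FRAMES_PER_SECOND)

# Two free-running clocks each accurate to +/-32 ppm (a typical Stratum 4 bound)
# can differ by up to 64 ppm:
worst_case = seconds_per_slip(64e-6)   # roughly one slip every 2 seconds
good_case = seconds_per_slip(1e-11)    # Stratum 1: one slip every ~145 days
```

This is why the text notes that slips are inevitable in plesiochronous operation but can be minimized with stable clocks: the more accurate the oscillators, the longer the interval between controlled slips.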
Mesochronous operation
In mesochronous mode, nodal clocks are commonly and automatically locked to an external
reference clock, yielding virtually slip-free operation. With this method, frame slips are
eliminated if elastic stores are large enough to compensate for transmission variances.
If the CS 1000E system is not used as a master in a private network, Avaya recommends that
systems be configured in mesochronous mode. To do this, users can configure the clock
controller circuit cards to lock onto an external reference source.
If the CS 1000E system is used as a master in a private network, end users can configure the
system in plesiochronous mode. Since a private network has no digital links to a higher node
category, a CS 1000E clock controller in an isolated private network can operate in free run
mode and act as a master clock. Other PBX systems in the private network can then track the
master clock.
Timing reference
In the North American network, the Primary Timing Reference is derived from a cesium beam
atomic clock.
In Canada, the digital network is divided into two regions that interact plesiochronously, each
with its own cesium atomic clock. Their common boundary lies between the Manitoba
Telephone System and Bell Canada. The Eastern Region clock is located in Ottawa, the
Western region clock in Calgary. Any DS-1 signal leaving these switches is synchronized to
cesium oscillators. Every digital node in Canada (whether a Central Office (CO), a digital PBX
with CO connectivity, or a digital multiplexer) can trace its clock back to the cesium oscillator
in Ottawa or Calgary, unless the digital system is operating plesiochronously.
In the United States, a similar arrangement exists. The U.S. digital network is supported by
two primary clocks, one in St. Louis, Missouri, and a second in Boulder, Colorado.
Node categories/Stratum
In the North America digital network, nodes are synchronized using a priority master/slave
method. Digital networks are ranked in Node Categories A to E in Canada, as shown in Table
10: Node categories on page 137, and in Stratum levels 1 to 5 in the USA. Each node is
synchronized to the highest-ranking clock to which the node has a direct link.
Table 9: Stratum data
Frame slip
Digital signals must have accurate clock synchronization for data to be interleaved into or
extracted from the appropriate timeslot during multiplexing and de-multiplexing operations. A
frame slip is defined as the repetition or deletion of the 193 bits of a DS-1 frame due to a
sufficiently large discrepancy in the read and write rates at the buffer. Frame slips occur when
clocks do not operate at the same exact speed.
When data bits are written into a buffer at a higher rate than they are read, the buffer
overflows; this is known as a slip-frame deletion. When data bits are written into a buffer at a
lower rate than they are read, the buffer runs dry, or underflows; this is known as a slip-frame
repetition. Both occurrences are called a slip or a controlled slip. Frame slippage has a negative impact
on data transfer, but can be controlled or avoided with proper clock synchronization.
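The overflow and underflow behavior described above can be illustrated with a toy elastic-store model: writes arrive at one clock rate and reads at another, and whenever the buffer fill crosses its limits a controlled slip (a frame deletion or a frame repetition) recenters it. The rates and buffer depth below are arbitrary values chosen for illustration.

```python
# Toy elastic-store model (illustrative only): a write clock and a read clock
# run at slightly different rates; buffer overflow forces a frame deletion and
# underflow forces a frame repetition -- the two kinds of controlled slip.

def count_slips(write_rate, read_rate, seconds, depth=2.0):
    """Count controlled slips over `seconds`, with a buffer `depth` frames deep."""
    fill = depth / 2.0          # start half full, in frames
    deletions = repetitions = 0
    for _ in range(seconds):
        fill += write_rate - read_rate   # net frames accumulated this second
        if fill >= depth:                # overflow: discard a frame
            deletions += 1
            fill -= 1.0
        elif fill <= 0.0:                # underflow: repeat a frame
            repetitions += 1
            fill += 1.0
    return deletions, repetitions

# Writer runs 1/64 frame per second fast: one deletion roughly every 64 s,
# so 15 deletions over 1000 s. Swapping the rates yields repetitions instead.
d, r = count_slips(write_rate=8000.015625, read_rate=8000.0, seconds=1000)
```

The model shows why proper clock synchronization eliminates slips: when the two rates are equal, the fill level never moves and neither branch is taken.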
Guidelines
Design guidelines for CS 1000E Network Synchronization are as follows:
• Where possible, the master Clock Source should always be from a Node Category/
Stratum with a higher clock accuracy. When the PBX is connected to the CO, the CO is
always the master and the PBX is the slave. For example, the PBX clock controller prompt
PREF is set to the slot number of the DTI/PRI connected to the CO.
• Clock controllers within the system should not be in free-run unless they operate in a fully
independent network where the source clock controller acts as a master. Only one clock
controller in the system can operate in free-run mode.
• When connecting two PBXs together with no CO connections, the most reliable PBX
should be the master clock source.
• Avoid timing loops. A timing loop occurs when a clock uses, as its reference frequency, a
signal that is traceable to the output of the same clock. This produces a closed loop that
leads to frequency instability.
• All Central Offices/PBX links that serve as clock references must offer a traceable path
back to the same Stratum 1 clock source.
• If a Media Gateway has at least one DTI, PRI, or BRI trunk card, it must also have one
clock controller installed. The clock controller tracks to the same traceable reference as
the other Media Gateways.
• All slave clock controllers must set their primary reference (PREF) to the slot in which
they are installed. For example, a clock controller installed in slot 9 must have its PREF set to
slot 9.
• The Media Gateway Expander does not support clock controllers.
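The timing-loop guideline above can be checked mechanically before configuration: model each system's clock reference as a directed edge from slave to master, and reject any plan whose reference graph contains a cycle. The sketch below is illustrative; the node names are hypothetical, not from this document.

```python
# Illustrative timing-loop check (node names are hypothetical). Each entry maps
# a system to the system it derives timing from; None marks a free-running
# master clock. A timing loop exists iff following references revisits a node.

def find_timing_loop(refs):
    """Return the nodes of a timing loop, or None if the plan is loop-free."""
    for start in refs:
        seen, node = [], start
        while node is not None:
            if node in seen:                 # reference chain revisits a node
                return seen[seen.index(node):]
            seen.append(node)
            node = refs.get(node)            # follow the slave -> master edge
    return None

good = {"CO": None, "PBX-A": "CO", "PBX-B": "PBX-A"}   # all traceable to the CO
bad = {"PBX-A": "PBX-B", "PBX-B": "PBX-A"}             # mutual reference: a loop
```

Applied to the examples, `good` yields no loop (every chain ends at the free-running CO clock), while `bad` reports the two mutually referencing PBXs, exactly the closed-loop condition the guideline warns against.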
Clocking modes
The CS 1000E system supports up to one clock controller in each Media Gateway. Each clock
controller can operate in one of two modes: tracking or nontracking (free-run).
Tracking mode
In tracking mode, the DTI/PRI card supplies a clock reference to a clock controller
daughterboard. Also, one DTI/PRI with clock controller is defined as the primary reference
source for clock synchronization. The other (within the same Media Gateway) is defined as the
secondary reference source (PREF and SREF in LD 73).
There are two stages to clock controller tracking:
1. tracking a reference
2. locking onto a reference
When tracking a reference, the clock controller uses an algorithm to match its frequency to the
frequency of the incoming clock. When the frequencies are nearly matched, the clock controller
locks on to the reference. The clock controller makes small adjustments to its own frequency
until incoming frequencies and system frequencies correspond. If the incoming clock reference
is stable, the internal clock controller tracks it, locks on to it, and matches frequencies exactly.
Occasionally, environmental circumstances cause the external or internal clocks to drift. When
this occurs, the internal clock controller briefly enters the tracking stage. The green LED flashes
momentarily until the clock controller once again locks on to the reference.
If the incoming reference is unstable, the internal clock controller is continually in the tracking
stage, with the green LED flashing continually. This condition does not present a problem; rather,
it shows that the clock controller is continually attempting to lock on to the signal. If slips occur,
a problem exists with the clock controller or the incoming line.
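The two stages described above behave like a simple control loop: while the frequency error is large the controller stays in the tracking stage and nudges its own frequency toward the reference; once the error falls inside a small lock window it reports locked (steady green LED). The threshold and gain below are hypothetical values for illustration, not Avaya specifications.

```python
# Illustrative two-stage tracking loop (threshold and gain are hypothetical,
# not Avaya specifications). The controller makes small adjustments toward
# the incoming reference and declares "locked" inside a small error window.

LOCK_WINDOW_PPM = 0.5   # assumed lock threshold, in parts per million
GAIN = 0.2              # assumed adjustment gain per iteration

def track(local_ppm, reference_ppm, steps=50):
    """Yield (state, local_ppm) as the controller converges on the reference."""
    for _ in range(steps):
        error = reference_ppm - local_ppm
        state = "LOCKED" if abs(error) <= LOCK_WINDOW_PPM else "TRACKING"
        yield state, local_ppm
        local_ppm += GAIN * error        # small adjustment toward the reference

# Starting 10 ppm off a stable reference: early iterations report TRACKING,
# then LOCKED persists once the error has been pulled inside the window.
states = [s for s, _ in track(local_ppm=10.0, reference_ppm=0.0)]
```

An unstable reference corresponds to `reference_ppm` jumping each iteration, which keeps the loop bouncing back into the tracking state, matching the continually flashing LED described above.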
Monitoring references
Primary and secondary synchronization references are continuously monitored to provide
autorecovery.
Reference switchover
Switchover can occur with reference degradation or signal loss. When reference performance
degrades to a point where the system clock is not able to follow the timing signal, the reference
is out of specification. If the primary reference is out of specification but the secondary
reference is within specification, an automatic switchover is initiated without software
intervention. If both references are out of specification, the clock controller provides
holdover.
If the command "track to secondary" is given, the clock controller tracks to the secondary
reference and continuously monitors the quality of both primary and secondary references. If
secondary goes out of specification, the clock controller automatically tracks to primary,
provided that primary is within specification.
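The switchover rules above reduce to a small decision function: honor the commanded reference while it is within specification, fall back to the other reference if not, and enter holdover when both are out of specification. The sketch is a simplification for illustration, not the actual clock-controller firmware logic.

```python
# Illustrative sketch of the reference-switchover rules described above
# (a simplification; not the actual clock-controller firmware logic).

def select_reference(primary_ok, secondary_ok, commanded="primary"):
    """Choose which reference to track, or 'holdover' if both are out of spec."""
    if commanded == "secondary" and secondary_ok:
        return "secondary"         # honor "track to secondary" while in spec
    if primary_ok:
        return "primary"           # auto-recover to primary when it is in spec
    if secondary_ok:
        return "secondary"         # automatic switchover, no software intervention
    return "holdover"              # both references out of specification
```

Note the asymmetry the text describes: when tracking secondary by command and secondary goes out of specification, the controller returns to primary provided primary is within specification.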
Free-run (nontracking)
In free-run mode, the clock controller does not synchronize on any source. Instead, the clock
controller provides its own internal clock to the system. Free-run mode can be used when the
CS 1000E system acts as a master clock source for other systems in the network. If the CS
1000E system is a slave, free-run mode is not desirable. Free-run mode can take effect when
primary and secondary clock sources are lost due to hardware faults. Administrators can
invoke free-run mode by using software commands.
Faceplate LEDs
Table 11: NTAK20 LED indications on page 140 provides a description of the NTAK20
LEDs.
Table 11: NTAK20 LED indications
Clocking operation
The CS 1000E system can support up to 50 active clock controllers, one for each Media
Gateway with a PRI. However, a Media Gateway can support only one clock controller, and a
Media Gateway Expander cannot support a clock controller.
The following are clock controller acronyms:
• CC - Clock Controller
• FRUN - Free Running mode
• PREF - Primary Reference
• SREF - Secondary Reference
Free-running clocks
Free-running clocks are allowed only if the CS 1000E system does not connect to a CO.
Figure 39: Acceptable connection to an isolated private network with primary reference on
page 141 to Figure 41: Acceptable connection with a combined CO and private network on
page 142 show acceptable connections.
Figure 39: Acceptable connection to an isolated private network with primary reference
Figure 40: Acceptable connection to an isolated private system with primary and secondary
reference
Connecting to a CO
Any Media Gateway that supplies a reference to a remote PBX must have a trunk tracking to
a CO. There is no clock relationship between gateways. Each media gateway operates in a
separate clock domain.
Figure 42: Acceptable connection: Media Gateway 1 and Media Gateway 2 receive clock
reference directly from CO
Figure 43: Acceptable connection: Media Gateway 1 receives clock reference directly from CO/
Remote, Media Gateway 2 receives clock reference indirectly from CO
Figure 44: Unacceptable connection: Media Gateway 1 references remote PBX; clock loop, no
master clock reference
Figure 45: Acceptable connection: Media Gateway 1 references remote PBX; Media Gateway 2
provides master reference to remote PBX
Clock controllers are configured in LD 73. For 1.5 Mb and 2 Mb DTI/PRI, the following
commands are used.
Command Description
DIS CC l s Disable system clock controller on specified superloop and shelf.
DSCK loop Disable the clock for loop. This is not applicable for 1.5 Mb DTI/PRI.
DSYL loop Disable yellow alarm processing for loop.
ENCK loop Enable the clock for loop. This is not applicable for 1.5 Mb DTI/PRI.
ENL CC l s Enable system clock controller on specified superloop and shelf.
ENYL loop Enable yellow alarm processing for loop.
SSCK l s Get status of system clock on specified superloop and shelf.
TRCK aaa l s Configure clock controller on Media Gateway specified by the superloop,
loop and shelf tracking to primary, secondary or free-run. Where aaa is:
• PCK = track primary clock
• SCLK = track secondary clock
• FRUN = free-run mode
Track primary clock (PCK) or secondary clock (SCLK) as the reference
clock or go to free-run (FRUN) mode.
Examples
Status of the CC when it is tracking to Primary.
.ssck 4 0
ENBL
CLOCK ACTIVE
CLOCK CONTROLLER - LOCKED TO SLOT 1
PREF - 1
SREF -
AUTO SWREF CLK - ENBL
.ssck 12 0
ENBL
CLOCK ACTIVE
CLOCK CONTROLLER - FREE RUN
PREF -
SREF -
AUTO SWREF CLK - ENBL
.ssck 40 0
ENBL
CLOCK ACTIVE
CLOCK CONTROLLER - LOCKED TO SLOT 2
PREF - 1
SREF - 2
AUTO SWREF CLK - ENBL
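The SSCK output shown above has a regular line layout, so a maintenance script can extract the controller state from it. The parser below is an illustrative sketch whose field layout is assumed from these examples only; verify it against the output of your own system before relying on it.

```python
# Illustrative parser for the SSCK sample output shown above (field layout
# assumed from the examples; verify against your own system's output).

def parse_ssck(text):
    """Extract clock-controller mode, PREF, and SREF from SSCK output."""
    info = {"mode": None, "pref": None, "sref": None}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("CLOCK CONTROLLER"):
            info["mode"] = line.split("-", 1)[1].strip()
        elif line.startswith("PREF"):
            value = line.split("-", 1)[1].strip()
            info["pref"] = value or None     # "PREF -" with no slot becomes None
        elif line.startswith("SREF"):
            value = line.split("-", 1)[1].strip()
            info["sref"] = value or None
    return info

sample = """ENBL
CLOCK ACTIVE
CLOCK CONTROLLER - LOCKED TO SLOT 1
PREF - 1
SREF -
AUTO SWREF CLK - ENBL"""
result = parse_ssck(sample)
# result == {'mode': 'LOCKED TO SLOT 1', 'pref': '1', 'sref': None}
```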
The tracking mode on an installed clock controller can be changed by the following
commands.
Table 15: Clock Controller commands (LD 60)
Command Description
TRCK PCK l s Configure clock controller tracking to primary on specified superloop and
shelf.
PCK = track primary clock
Instructs the installed clock controller to track to a primary reference clock
source also referred to as "SLAVE" mode.
TRCK FRUN l s Configure clock controller tracking to free-run on specified superloop and
shelf.
FRUN = free-run mode
Instructs the installed clock controller to free-run. In this mode, the system
provides a reference or "MASTER" clock to all other systems connected
through DTI/PRI links. This mode can be used only if there are no other
clock controllers in SLAVE mode anywhere within the system.
The Call Server can be locked to any Media Gateway with the following command.
Table 16: Clock Controller commands (LD 60)
Command Description
TRCK PLL l s Overrides the default search order and locks to specified superloop and
shelf.
Contents
This chapter contains the following topics:
Introduction on page 149
Creating an installation plan on page 150
Fire, security, and safety requirements on page 152
Equipment room requirements on page 154
Grounding and power requirements on page 161
Cable requirements on page 162
LAN design on page 163
Preparing a floor plan on page 166
Creating a building cable plan on page 166
Enterprise Configurator on page 30
Preparing for installation on page 172
Introduction
Warning:
Before an Avaya Communication Server 1000E (Avaya CS 1000E) system can be installed,
a network assessment must be performed and the network must be VoIP-ready.
If the minimum VoIP network requirements are not met, the system will not operate
properly.
For information about the minimum VoIP network requirements and converging a data
network with VoIP, see Avaya Converging the Data Network with VoIP Fundamentals,
NN43001-260.
Planning for system installation affects the installation cost, as well as operation and
maintenance, and can have an overall effect on system performance. Consider the following
requirements (in addition to local and national building and electrical codes) when you plan a
system installation.
Select and evaluate sites according to the requirements in this document and the following
criteria:
• Space:
- The site must provide adequate space for unpacking, installation, operation, potential
expansion, service, and storage. The site must provide space for sufficient cooling. You
may need additional space for a maintenance and technician area.
• Location:
- The location should be convenient for equipment delivery and close to related work
areas. Consider the location of related equipment, such as the distribution frame and
batteries for Uninterruptible Power Supply (UPS) units. Also consider cable
limitations.
• Grounding and power:
- Proper grounding and sufficient power facilities must be available.
• Structural integrity:
- The floor must be strong enough to support anticipated loads and, if applicable, the
ceiling must be able to support overhead cable racks.
Installation outline
Use Table 17: Installation plan outline on page 151 as a guide for preparing a detailed
installation plan.
Procedure Requirements
Researching site requirements • Determine fire, security, and safety requirements
• Determine equipment room requirements
• Determine grounding and power requirements
• Determine cable requirements
Milestone chart
Planning and monitoring site preparation activities is easier when you use a milestone chart.
A milestone chart is a general site planning schedule showing the sequence of activities
necessary to complete a job.
Table 18: Milestone chart on page 151 lists typical activities included in a milestone chart. For
a complex site, you must create a more detailed chart.
Table 18: Milestone chart
Task Action
1 Select the site.
2 Plan fire prevention and safety features.
3 Plan the equipment room layout.
4 Plan grounding and power.
5 Plan cable routes and terminations.
6 Plan and start any renovations to the equipment room.
7 Continue site construction and renovation tasks.
8 Install grounding, power, air conditioning, and heating.
9 Install special rigging, such as overhead cable racks and distribution
frame equipment, as required.
Task Action
10 Test site wiring to ensure that minimum requirements are met.
11 Complete construction and ensure that grounding and power are in
place.
12 Test air conditioning and heating systems.
13 Make equipment delivery arrangements.
14 Complete equipment room inspection, identifying and resolving any
delivery constraints.
When you prepare a milestone chart, consider not only individual operations, but the overall
installation schedule. The milestone chart should show the necessary operations in order and
should assign a start and end date for each activity.
heat, and smoke from spreading from one part of a building to another. Install smoke detectors
in all appropriate places.
Regularly check services such as steam, water, and power, and inspect pipes for excess
condensation, leaks, or corrosion.
Danger:
Avaya does not recommend using Halon or any other fire extinguishing system that is not
described above.
Security precautions
You may need to extend and improve existing building security to provide adequate protection
for the equipment. For example, you can install safeguards such as tamper-proof keylock door
controls and electrically taped glass doors and windows that can tie into an alarm system. You
can also install a monitoring unit using closed-circuit television.
Important:
Electric locks, such as push button access code or card reader locks, are not recommended
unless you provide a battery backup or a key override.
Protect critical data, such as business records, by storing backups well away from the
equipment room. A regular updating program is highly recommended.
Important:
The acoustic noise generated by a system ranges from 45 dBA to 60 dBA (decibels "A"-
weighted).
Environmental requirements
The environment that the Avaya CS 1000E system operates in must meet the following general
conditions:
• The room must be clean, relatively dust-free, and well ventilated. On equipment,
ventilating openings must be free of obstructions.
• The room must meet the requirements for temperature and humidity. For more information
about temperature and humidity requirements, see Temperature and humidity control on
page 157 and Air conditioning guidelines on page 158.
• The room cooling system must meet the requirements for the installed equipment. For
estimating cooling requirements based on thermal generation from system components,
see Power consumption on page 187.
• Select a location for equipment installation that is not subject to constant vibration.
• Locate equipment at least 12 ft (3660 mm) away from sources of electrostatic,
electromagnetic, or radio frequency interference. These sources can include:
- power tools
- appliances (such as vacuum cleaners)
- office business machines (such as copying machines)
- elevators
- air conditioners and large fans
- radio and TV transmitters
- high-frequency security devices
- all electric motors
- electrical transformers
Space requirements
Space and equipment layout requirements differ with each installation. When you plan the site,
consider the following requirements:
• Primary storage
• Secondary storage
• Maintenance and technician space
Primary storage
The floor area required for a system depends on the number of racks, the length-to-width ratio
of the area, and the location of walls, partitions, windows, and doors. To determine the exact
layout required, prepare a detailed floor plan after reviewing all of the requirements in this
chapter.
Wall jacks and outlets must be provided for all devices located in the equipment room.
Secondary storage
Provide space in the equipment area for storing disks, printer paper, printouts, and daily
reports. A secure storage room for spare parts is recommended.
Whenever possible, maintain the same environmental conditions in the equipment room and
storage areas. If it is not possible to maintain the environment of the storage area exactly the
same as the environment of the operating equipment, give stored materials time to adjust to
the equipment room environment before using them.
Danger:
Damage to Equipment
Do not expose equipment to absolute temperature limits for more than 72 hours. Do not
place heat sources (such as floor heaters) near the equipment.
Table 19: Operating environment
Telephones Absolute:
• 5°C to 40°C (41°F to 104°F)
• RH 5% to 95%, noncondensing
If you operate the system within recommended temperature limits, there are no thermal
restrictions on any equipment.
Follow the specifications listed in Table 20: Storage environment on page 158 to store or
transport equipment.
Other terminal devices Refer to the specific Avaya publication or the manufacturer's
guidelines
Important:
Temperature changes must be less than 30°C (54°F) per hour for storage and during
transportation.
Caution:
Damage to Equipment
Because digital systems require constant power (even if the system is idle), they
generate heat continuously. Air conditioning requirements must be met at all times.
• Table 23: Current, power, and cooling requirements for CS 1000E components on
page 187 and Table 24: Power and cooling requirements for Media Gateway packs on
page 189 show the thermal dissipation for system components.
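Air-conditioning capacity can be sized directly from the electrical load: essentially all power drawn by continuously powered digital equipment is dissipated as heat, and the standard conversion is 1 W = 3.412 BTU/hr. The component wattages below are hypothetical placeholders; take real values from the power-consumption tables cited above.

```python
# Illustrative cooling-load estimate. Component wattages are hypothetical
# placeholders -- take real values from the power-consumption tables.
# Virtually all power drawn by always-on digital equipment leaves as heat.

BTU_PER_WATT_HOUR = 3.412   # standard conversion: 1 W of load = 3.412 BTU/hr

def cooling_btu_per_hour(component_watts):
    """Total continuous cooling load, in BTU/hr, for the listed components."""
    return sum(component_watts.values()) * BTU_PER_WATT_HOUR

load = cooling_btu_per_hour({
    "call_server": 150,      # hypothetical wattages, for illustration only
    "media_gateway_1": 300,
    "media_gateway_2": 300,
})
# 750 W of equipment requires roughly 2559 BTU/hr of cooling capacity.
```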
Static electricity
Electronic circuits are extremely sensitive to static discharge. Static discharge can damage
circuitry permanently, interrupt system operation, and cause lost data.
Static electricity can be caused by physical vibration, friction, and the separation of materials.
Other common causes of static electricity build-up are low humidity, certain types of carpeting,
the wax on equipment room floors, and plastic-soled shoes. The human body is the most
common collector of static electricity. A combination of plastic-soled shoes, certain flooring
materials, and low humidity can cause body charges in excess of 15 kV.
Important:
IEEE Standard 142-1982 recommends that flooring resistance be more than 25 000 ohms
and less than 1 million megohms, measured by two electrodes 0.91 m (3 ft) apart on the
floor. Each electrode must weigh 2.2 kg (5 lb) and have a dry, flat contact area 6.35 cm
(2.5 in.) in diameter.
Antistatic wrist straps, sprays, and mats are available. Avaya recommends at least using an
antistatic wrist strap whenever you work on equipment.
Vibration
Vibration can cause the slow deterioration of mechanical parts and, if severe, can cause
serious disk errors. Avoid structure-borne vibration and consequent noise transferred to the
equipment room. Raised floors must have extra support jacks at strategic places to prevent
the transmission of vibration.
Limit vibration in an office environment to a frequency range of 0.5–200 Hz and a G-force
magnitude of 0.1 G (in accordance with the Bellcore "Network Equipment Building Systems
Generic Equipment Requirements" specification TR-EOP-000063).
Dust
Accumulated dust and dirt can degrade system reliability and performance. Dust and dirt can:
• Scratch the contacts on circuit cards, causing intermittent failures
• Have conductive contents that increase static electricity in the environment
• Cause components to operate at higher temperatures
Average dust density for an office environment must be 0.00014 g/m3 or better. False ceilings
and tiled floors help maintain dust density requirements.
Lighting
Lighting illumination of 50 to 75 footcandles measured 76 cm (30 in.) above the equipment
room floor is recommended. Avoid direct sunlight in the equipment room to prevent
malfunctions by devices with light sensors (such as disk units).
Lighting must not be powered from the equipment room service panel. For large system
installations, consider provisions for emergency lighting in the equipment room.
Earthquake bracing
Earthquake (seismic) bracing is required or should be considered in some locations.
Structural features
Use sealed concrete, vinyl, or mastic tile for flooring and ensure that it meets the floor loading
requirements described later in this document. Avoid using sprayed ceilings or walls.
4. Well-secured
5. Accessible (the doorway must not be blocked)
6. Meet all floor loading requirements and the noise levels required by OSHA
standards 1910.5 (or local standards)
For detailed instructions on battery usage, see ANSI/IEEE Standard 450-1987: "Maintenance,
Testing and Replacement of Large Storage Batteries."
Cable requirements
This section describes the types of cable used in the system. It also provides some cabling
guidelines.
Cable types
The system uses the following major types of wiring:
• 25-pair main distribution frame (MDF) cables: These cables carry voice and data
information between gateways and the distribution frame. One end of the cable must be
equipped with a 25-pair female connector that terminates on the module input/output (I/
O) panel. The other end of the cable terminates on the MDF block.
• Interface cables: Interface, or I/O, cables are typically 25-conductor interfaced through
RS-232-C connectors. These cables are used to connect data units to printers, host
computers, and modems.
• Three-port cables: These cables interconnect terminal equipment and the terminal port
on the Media Gateway 1000E. The cable also functions as a remote TTY if it has been
configured with an MGC. On the Avaya MG 1000E, it is required only for initial
configuration of IP addresses.
• Cat 5 cables: These are standard cables used to connect LAN equipment and are
terminated with RJ45 connectors. They are specified as either standard (straight-through)
or crossover. They are not recommended for speeds greater than 100 Mbps.
• Cat 5E (Cat 5 Enhanced) cables: Cat 5E cables are the same as Cat 5 cables, but made
to more stringent requirements. They are also designed for speeds up to 1 Gbps.
• Cat 6 cables: The same as Cat 5E, but made to more stringent standards. Designed for
speeds up to 1 Gbps.
• Terminal server cables: Terminal server cables are proprietary cables that can be used
to interface between the MRV Terminal Server and various system components in order
to allow terminal access.
• Twisted-pair telephone cables: These cables carry analog voice and digitized voice and
data information between distribution frames and terminal devices throughout the
building. They connect to 8-pin modular jacks located within 2.4 m (8 ft) of each device.
• Surge-suppression cables: These cables prevent transient voltages from damaging
certain Central Office Trunk (COT) and Direct-Dial Inward (DDI) cards. The cable has a
male connector on one end and a female connector on the other so that you can connect
it serially with the existing cable. For a list of cards that require surge-suppression cables
and installation instructions, see Circuit Card Reference, NN43001–311.
Consider cable length requirements and limitations for both initial installation and later growth
when you plan a system.
Cable access
The customer is responsible for supplying all access for station, feeder, and riser cabling. This
includes (where necessary):
• Conduit
• Floor boring
• Wall boring
• Access into hung ceilings
LAN design
Network requirements are critical to the CS 1000E quality of service. Ensure the network meets
the following requirements:
• Provision 100BaseTx IP connectivity between the Call Server and the Media Gateway.
The 100BaseTx IP connectivity can be either a point-to-point network or a distributed
campus data network. IP daughterboards in the Call Server and the Media Gateway
provide connectivity.
• Ensure that the 100BaseTx Layer 2 (or Layer 3) switch supports full-duplex connection.
Routers are not supported in Call Server to Media Gateway connections. The ports on
Layer 2 (or Layer 3) switching equipment must be configured with autonegotiation
enabled.
• Provision the ELAN subnet and the TLAN subnet on separate subnets.
• Provision all applications on the ELAN subnet on the same subnet. This includes Voice
Gateway Media Cards, which must be on the same ELAN subnet.
• Ensure that Voice Gateway Media Cards are in the same node on the same TLAN
subnet.
For information about the requirements for creating a robust, redundant network, see Avaya
Converging the Data Network with VoIP Fundamentals, NN43001-260.
Keep a record of the IP addresses assigned to system components. See Figure 47: Sample
IP address record sheet on page 165 for a sample.
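The subnet rules above can be verified with a short script when you compile the IP address record: the ELAN and TLAN must be distinct, non-overlapping subnets, and every ELAN device, including the Voice Gateway Media Cards, must fall inside the ELAN subnet. All addresses below are hypothetical examples.

```python
# Illustrative subnet check (all addresses are hypothetical examples).
# Verifies the ELAN and TLAN are separate subnets and that every ELAN
# device falls inside the ELAN subnet, per the LAN design rules above.
import ipaddress

elan = ipaddress.ip_network("192.168.10.0/24")
tlan = ipaddress.ip_network("192.168.20.0/24")
assert not elan.overlaps(tlan), "ELAN and TLAN must be separate subnets"

elan_devices = {
    "call_server": "192.168.10.2",
    "media_gateway": "192.168.10.3",
    "voice_gw_media_card": "192.168.10.4",
}
for name, addr in elan_devices.items():
    assert ipaddress.ip_address(addr) in elan, f"{name} is off the ELAN subnet"
```

Running the same check against the recorded addresses after any change to the record sheet catches a mis-assigned device before it causes connectivity problems.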
Important:
According to the National Fire Code, equipment must be at least 30.5 cm (12 in.) from a
sprinkler head.
Ensure that the site configuration meets all requirements of the third-party suppliers of the 19-
inch racks.
a letter or number, and assign a block of numbers to each zone. Figure 48: Building
cable zones on page 169 illustrates zoning.
Be sure to leave room for expansion.
Wire routing
Refer to the appropriate electrical code for your region for standards you are required to meet.
For the US, refer to the National Electrical Code (NEC).
To plan wire routing, establish the start and end point of each cable relative to the location of
the terminal devices in the building, and then examine the construction of the office to
determine the best wiring routes. Consider the following guidelines when performing this
task.
• Floors:
- In the open, wires can run along baseboard, ceiling moldings, or door and window
casings. For the safety of employees, never run wire across the top of the floor.
- When concealed, wires can run inside floor conduits that travel between distribution
frames and jacks. (Under-carpet cable is not recommended.)
• Ceilings:
National and local building codes specify the types of telephone wire that you can run in
each type of ceiling. Local building codes take precedence.
• Walls:
Cables that run vertically should, when possible, run inside a wall, pole, or similar facility
for vertical wire drops. Cables that run horizontally cannot be blind-fed through walls.
• Between floors:
Locate distribution frames as close to one another as possible. Local codes specify
whether or not a licensed contractor is required if conduit is installed.
• EMI:
Data degradation can occur if wires travel near strong EMI sources. See Electromagnetic
and radio frequency interference on page 160 for a description of common interference
sources.
Termination points
After you determine the wire routing, establish termination points. Cables can terminate at:
• the MDF (typically in the equipment room)
• intermediate distribution frames, typically on each floor in telephone utility closets
• wall jacks to terminal boxes, typically located near the terminal device
At the distribution frame (also called the cross-connect terminal), house cables terminate on
the vertical side of the two-sided frame and cross connect to equipment that is typically located
on the horizontal side. If you use a color field scheme, house cables typically terminate in the blue
field and the equipment terminates on the purple (US) or white (Canada) field.
In all cases, clearly designate the block where the cables terminate with the cable location
information and the cable pair assignments. Keep a log book (cable record) of termination
information. See Figure 49: Sample cable record on page 170 for an example.
Contents
This chapter contains the following topics:
Introduction on page 175
Grounding requirements on page 175
Grounding methods on page 180
Commercial power requirements on page 183
Alternative AC-powered installation on page 185
AC input requirements on page 186
Power consumption on page 187
Heat dissipation on page 195
Uninterruptible Power Supply on page 195
Power requirements for IP Phones on page 197
Introduction
Avaya Communication Server 1000E (Avaya CS 1000E) system components are AC-
powered. This section outlines the system's grounding and electrical requirements.
Grounding requirements
For system grounding in new installations, Avaya recommends following ANSI/TIA/EIA-607
(Commercial Building Grounding and Bonding Requirements for Telecommunications).
In building installations where the ANSI/TIA/EIA-607 method is not used, connect the
equipment ground to the AC ground at the respective service panel.
If you are having difficulty interpreting the grounding methods in this document, Avaya
recommends obtaining the services of a certified power contractor or auditor prior to system
installation or cutover.
Warning:
Failure to follow grounding recommendations can result in a system installation that is:
• unsafe for personnel handling or using the equipment
• not properly protected from lightning or power transients
• subject to service interruptions
Before installing the equipment and applying AC power, measure the impedance of the building
ground reference. An ECOS 1023 POW-R-MATE or similar meter is acceptable for this
purpose. Ensure that the ground path connected to the system has an impedance of 4 ohms
or less. Make any improvements to the grounding system before attempting installation.
Voltage:
DANGER OF ELECTRIC SHOCK
Never connect the single point ground conductor from the system to structural steel
members or electrical conduit. Never tie this conductor to a ground source or grounded
electrode that is not hard-wired to the building reference conductor.
System grounding must adhere to the following requirements:
• The ground path must have an impedance of 4 ohms or less.
• Ground conductors must be at least #6 AWG (16 mm²) at any point (see Table 21: Area-
specific ground wire requirements on page 176 for a list of grounding wire requirements
specific to some areas).
• Ground conductors must not carry current under normal operating conditions.
• Spliced conductors must not be used. Continuous conductors have lower impedance and
are more reliable.
• All conductors must terminate in a permanent way. Make sure all terminations are easily
visible and available for maintenance purposes.
• Tag ground connections with a clear message such as "CRITICAL CONNECTION: DO
NOT REMOVE OR DISCONNECT."
Table 21: Area-specific ground wire requirements
For more information about standards and guidelines for grounding telecommunications
equipment, refer to ANSI/TIA/EIA-607 (Commercial Building Grounding and Bonding
Requirements for Telecommunications).
Voltage:
DANGER OF ELECTRIC SHOCK
For an installed Call Server, Media Gateway, Media Gateway Expander, or Signaling Server,
link impedance between the ground post of any equipment and the single point ground that
it connects to must be less than 0.25 ohms.
Caution:
Damage to Equipment
Transients in supply conductors and ground systems can damage integrated circuits. This
damage can result in unreliable system operation. Damage caused by transients is not
always immediately apparent. Degradation can occur over a period of time.
Voltage:
DANGER OF ELECTRIC SHOCK
Do not perform work inside electrical panels unless you are a qualified electrician. Do not
try to remove bonding conductors without approval from qualified personnel.
In an ANSI/TIA/EIA-607 installation, the Telecommunications Main Grounding Busbar (TMGB)/
Telecommunications Grounding Busbar (TGB) links the telecommunications equipment to the
ground. Other grounding terminology is:
• building principal ground, normally in a building with one floor
• floor ground bar, normally in buildings with more than one floor
Configure telecommunications subsystems, such as groups of frames or equipment, as
separate single-point ground entities connected to the equipment's dedicated service panel
via a single-point ground bar. The service panel ground connects to the building principal
ground via the main service panel or, in an ANSI/TIA/EIA-607 installation, via the TGB. Refer
to Figure 51: Typical wiring plan on page 180.
Grounding methods
This section describes the grounding methods for:
• Ground bar (NTBK80) on page 181
• Ground bar (NTDU6201) on page 181
• CP PIV Call Server (NTDU62) on page 181
• COTS servers on page 182
• Media Gateway on page 182
Voltage:
DANGER OF ELECTRIC SHOCK
To prevent ground loops, power all CS 1000E system equipment from the same dedicated
power panel.
COTS servers
The Commercial off-the-shelf (COTS) server does not connect to a ground bar. It is properly
grounded when:
• The COTS server power cord is plugged into the rack's AC outlet. The rack's AC outlet
must be grounded to its dedicated electrical panel.
• The COTS server power cord is plugged into a wall AC outlet. The Server is grounded
outside of the rack via the safety grounding conductor in the power cord. This method
only ensures proper grounding of the Signaling Server itself. It does not provide grounding
protection for other rack-mounted pieces of equipment. Therefore, ensure that other
devices in the rack are properly grounded as required.
Media Gateway
The grounding method used for the Media Gateway depends on the number of units used and
whether the units are powered by the same service panel.
All equipment located in a series of equipment racks that are physically bonded together must
be grounded to and powered by the same service panel. If additional service panels are
required, collocate them beside the original service panel.
If racks are not bonded together, then the equipment located in the racks can be grounded and
powered by separate service panels.
Connect a #6 AWG (16 mm²) ground wire from the rear panel grounding lug of each Media
Gateway to the ground bar. See Table 21: Area-specific ground wire requirements on page 176
for area-specific ground wire requirements. Connect the ground bar to a ground source in the
dedicated service panel.
In the UK, connect the ground wire from the equipment to a ground bar or through a Krone
Test Jack Frame.
Important:
Power each Media Gateway and Media Gateway Expander pair from the same service
panel.
Conduit requirements
Conductive conduit linking panels and equipment is legal for use as a grounding network in
most countries. For all CS 1000E system ground paths, route the correct size of insulated
copper conductors inside conduit. A ground link that depends on a conduit can defeat the
improvements achieved with the installation of dedicated electrical panels and transformers.
A grounding failure can result from the following:
• Personnel who service different equipment can separate conduit links. If such a
separation occurs between the system and the building ground reference, the conduit
cannot provide a ground path. This situation is hazardous.
• Corrosion of metal conduits increases resistance. Threaded connections are prone to
corrosion. This problem becomes worse when there are multiple links. Applying paint over
the conduit accelerates the corrosion process.
• Conduit is not always fastened to secure surfaces. Often, the conduit bolts on to structural
steel members, which can function as ground conductors to noisy equipment (for
example, compressors and motors). Adding noisy equipment into the grounding system
can damage the system's performance. The resulting intermittent malfunctions can be
difficult to trace.
such as TTYs and printers. There is no expectation that system components that are located
off-site will be powered by this dedicated electrical panel.
Voltage:
DANGER OF ELECTRIC SHOCK
Avaya does not recommend connecting any CS 1000E system telecommunications ground
bus to untested horizontal structural steel or water pipes, or other unreliable ground paths.
Use a ground point known to be "clean" and permanent. Place a "DO NOT DISCONNECT"
tag on it.
Installing an isolation transformer without pluggable power cords on page 185 describes the
method to install an isolation transformer without pluggable power cords.
Installing an isolation transformer without pluggable power cords
1. If the transformer does not have a pluggable cord, hardwire the transformer to an
electrical panel. Route all wires (including grounds) through a single conduit.
Some electrical codes permit the use of conduit as the only ground conductor
between pieces of equipment.
2. Run a separate insulated ground conductor through the conduit to hold unit grounds
together. Such a conductor maintains the safety ground connection in the event that
the conduit becomes corroded or disconnected.
3. Run all ground lines through the same conduit as the phase conductors that serve
the equipment. Figure 52: Typical hardwired isolation transformer wiring plan on
page 186 shows the isolation transformer connections.
AC input requirements
For the AC input current requirements of Communication Server 1000E components, see Table
23: Current, power, and cooling requirements for CS 1000E components on page 187.
North America: voltage range 90 to 132 V AC, 60 Hz.
Europe and UK: voltage range 180 to 250 V AC, 50 Hz.
Note: Regulations in Germany allow a maximum supply panel fuse or breaker of 16 A.
If other data communications equipment is in the same rack as the CS 1000E system, power
each piece of equipment from the same electrical panel. Install additional outlets, if
necessary.
Because local power specifications vary, consult a qualified local electrician when planning
power requirements.
Power consumption
System power consumption depends on the number of components installed.
Table 23: Current, power, and cooling requirements for CS 1000E components on page 187
summarizes the current, power, and cooling requirements for CS 1000E components. Table
23: Current, power, and cooling requirements for CS 1000E components on page 187 shows
absolute maximum ratings as well as typical ratings for configured systems. The typical values
are provided as a guide to avoid over-engineering, particularly for Uninterruptible Power
Supply (UPS) requirements.
Table 23: Current, power, and cooling requirements for CS 1000E components
Table 24: Power and cooling requirements for Media Gateway packs on page 189 provides
the power consumption and thermal dissipation of Media Gateway packs (circuit cards and
daughterboards) commonly installed in CS 1000E Media Gateways and Media Gateway
Expanders. Use the data in the following table in conjunction with the system and Media
Gateway power consumption worksheets. See Power consumption worksheets on
page 190.
Electrical load for analog line cards varies with traffic load. The figures in the following table
assume that 50% of analog lines are active.
For digital and analog (500/2500-type) telephones, most thermal dissipation will be external
to the switch room. This is accounted for in Table 24: Power and cooling requirements for Media
Gateway packs on page 189, and the Power consumption worksheets on page 190. This
thermal dissipation is also accounted for in the typical values shown in Table 23: Current,
power, and cooling requirements for CS 1000E components on page 187.
Table 24: Power and cooling requirements for Media Gateway packs
Important:
To determine the required UPS rating for Media Gateways you must allow for the efficiency
factor of the Media Gateway power supply plus peak inrush. For NTAK11, NTDU14, and
NTDU15, multiply the total power consumption of the components by 1.5. For NTC310,
multiply the total power consumption of the components by 1.3. See the Media Gateway
power consumption worksheets for this calculation.
Prepare one worksheet for each Media Gateway. Use the appropriate worksheet for the Media
Gateway type from the following list.
• Table 26: NTAK11 Media Gateway Option 11C power consumption worksheet on
page 192
• Table 27: NTDU14 and NTDU15 Media Gateway power consumption worksheet on
page 193
• Table 28: NTC310 Media Gateway 1010 power consumption worksheet on page 194
For the power and thermal dissipation requirements for the individual components, see Table
24: Power and cooling requirements for Media Gateway packs on page 189.
Table 26: NTAK11 Media Gateway Option 11C power consumption worksheet
Table 27: NTDU14 and NTDU15 Media Gateway power consumption worksheet
MG 1010 chassis: 100 W
Slot 0: Gateway Controller
Slot 0: DSP daughterboard
Slot 0: DSP daughterboard
Slots 1 through 10: IPE cards
Slot 22: Server card
Slot 23: Server card
TOTAL power consumption (W)
Required UPS Power (W or VA): multiply total power consumption by 1.5
System thermal dissipation (W): subtract total thermal dissipation outside the system (W)
System thermal dissipation (BTU): multiply by 3.412 to convert to BTU
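The worksheet arithmetic above can be sketched as a small calculation. The function name and the example wattages are hypothetical; per the preceding Important note, the UPS factor is 1.5 for NTAK11, NTDU14, and NTDU15 gateways and 1.3 for NTC310.

```python
def media_gateway_worksheet(component_watts, ups_factor=1.5,
                            external_dissipation_w=0.0):
    """Worksheet steps: sum the per-component power figures from
    Table 24, apply the UPS efficiency/inrush factor (1.5 for NTAK11,
    NTDU14, NTDU15; 1.3 for NTC310), subtract heat dissipated outside
    the switch room, and multiply by 3.412 to convert watts to BTU."""
    total_w = sum(component_watts)
    required_ups_va = total_w * ups_factor          # Required UPS power (W or VA)
    dissipation_w = total_w - external_dissipation_w
    dissipation_btu = dissipation_w * 3.412
    return total_w, required_ups_va, dissipation_w, dissipation_btu

# Hypothetical gateway: 100 W chassis plus three cards, with 40 W of
# heat dissipated outside the switch room
totals = media_gateway_worksheet([100, 50, 30, 30], 1.5, 40)
```

The same sketch applies to any of the worksheet tables; only the per-slot figures and the UPS factor change.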
Heat dissipation
The CS 1000E is equipped with a cooling system and does not have heat dissipation problems
under normal applications. Mounting in the rack is not restricted.
Use the Power consumption worksheets on page 190 to determine the thermal load generated
by system components and Media Gateway packs.
For air conditioning purposes, 1 ton of cooling = 12 000 BTU/hr.
UPS sizing
To determine UPS sizing, sum the values given in Table 23: Current, power, and cooling
requirements for CS 1000E components on page 187 and Table 24: Power and cooling
requirements for Media Gateway packs on page 189 for UPS requirements for the applicable
components and Media Gateway packs. The value in watts (W) is equivalent to a volt-ampere
(VA) rating. Size the UPS in terms of its rating in VA (or kVA). For AC-powered systems,
Enterprise Configurator calculates the system power consumption in both watts and volt-
amperes.
To determine the sizing and provisioning of UPS batteries, follow the instructions provided by
the UPS manufacturer. A general approach is to take the total system power in watts, divide
by the UPS inverter efficiency, and convert to battery current drain by dividing by the nominal
discharge voltage of the battery string. Then determine the battery requirements in ampere-
hours (A-hrs) by multiplying the battery current drain by the required reserve power operating
time.
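The general battery-sizing approach described above can be sketched as follows; the function name and the example figures (2000 W load, 90% inverter efficiency, 48 V battery string, 2 hours of reserve) are illustrative assumptions.

```python
def battery_ampere_hours(total_system_w, inverter_efficiency,
                         battery_string_v, reserve_hours):
    """Divide total system power by the UPS inverter efficiency,
    convert to battery current drain by dividing by the nominal
    discharge voltage of the battery string, then multiply by the
    required reserve power operating time to get ampere-hours."""
    dc_power_w = total_system_w / inverter_efficiency
    battery_current_a = dc_power_w / battery_string_v
    return battery_current_a * reserve_hours

# 2000 W system, 90% efficient inverter, 48 V string, 2 hours reserve
ah = battery_ampere_hours(2000, 0.90, 48, 2)   # about 92.6 A-hrs
```

Always confirm the result against the UPS manufacturer's own sizing instructions, as the text advises.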
UPS installation
When installing a UPS, follow the vendor's instructions carefully.
Avaya recommends installing a bypass switch during the initial UPS wiring (if the switch
function is not inherently a part of the UPS itself). The UPS bypass switch lets the system run
directly from the commercial power source while the UPS is taken off-line during installation,
service, or battery maintenance.
Caution:
Damage to Equipment
Take care when connecting battery cables to the UPS. Connecting battery cables backward
can result in severe damage to the UPS.
Figure 53: Typical UPS wiring plan on page 197 shows a typical UPS wiring plan.
Contents
This chapter contains the following topics:
Introduction on page 199
System parameters on page 200
Customer parameters on page 200
Console and telephone parameters on page 201
Trunk and route parameters on page 202
ACD feature parameters on page 202
Special feature parameters on page 203
Hardware and capacity parameters on page 205
Call Server memory related parameters on page 206
Introduction
This section describes sets of design parameters that set an upper boundary on certain system
capacities. Changes to these parameters generally require a revision to the software and are
constrained by other basic capacities such as memory and traffic or system load. The design
parameters are set to provide the best possible balance between limits.
Note on terminology
The term Media Gateway refers to the Avaya CS 1000 Media Gateway 1000E (Avaya MG
1000E) and the Media Gateway 1010 (MG 1010). The MG 1010 provides ten IPE slots. The
Avaya MG 1000E provides four IPE slots.
System parameters
Table 29: System parameters on page 200 lists system parameters and provides their
maximum values.
Table 29: System parameters
Customer parameters
Table 30: Customer parameters on page 200 lists customer parameters and their maximum
values.
Table 30: Customer parameters
Media Cards
A Media Card is a card that provides additional DSP resources for a Media Gateway beyond
the DSP resources provided by the Gateway Controller. Media Card is a term for the Media
Card 32-port secure line card, Media Card 32-port line card, and the Media Card 8-port line
card.
In the CS 1000E, Media Cards are used primarily for DSP connections between the TDM
devices in a Media Gateway and IP circuits.
Media Cards can be assigned to any nonblocking slot other than slot 0. You must provision
each Media Gateway with enough DSP ports to support the TDM devices in that Media
Gateway.
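The provisioning rule above can be sketched as a simple sufficiency check, assuming a nonblocking one-DSP-port-per-TDM-device model; the function name and the counts in the example are hypothetical.

```python
def dsp_ports_sufficient(controller_dsp_ports, media_card_ports, tdm_devices):
    """Check that the total DSP ports in a Media Gateway (Gateway
    Controller resources plus Media Cards) cover the TDM devices in
    that same gateway.

    media_card_ports: list of per-card port counts, for example 32 for
    the 32-port line cards or 8 for the 8-port line card.
    """
    return controller_dsp_ports + sum(media_card_ports) >= tdm_devices

# Hypothetical gateway: 32 controller DSP ports, one 32-port Media
# Card, and 60 TDM devices
ok = dsp_ports_sufficient(32, [32], 60)   # True: 64 ports >= 60 devices
```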
Parameter                                     Values (CS 1000E)
Low-priority input buffers                    95 – 5000 (recommended default 3500)
Call registers for AML input queues (CSQI)    Up to 25% of total call registers (NCR), minimum 20
Call registers for AML output queues (CSQO)   Up to 25% of total call registers (NCR), minimum 20
Auxiliary input queue                         20 to the lesser of 25% of total call registers or 255 (default 20)
Auxiliary output queue                        20 to the lesser of 25% of total call registers or 255 (default 20)
History file buffer length (characters)       0 – 65 535
In a system with Avaya CallPilot, AML, and Symposium, add the number of CSQI and CSQO
to the Call Register (CR) requirement obtained from feature impact calculations.
The buffer estimates were based on relatively conservative scenarios, which should cover
most practical applications in the field. However, most models deal with "average traffic".
When traffic spikes occur, buffers can overflow. In these cases, raise the buffer size,
depending on the availability of CRs. The maximum number of buffers allowed for CSQI
and CSQO is up to 25% of total call registers (NCR).
Buffer limits
The buffer limit is the maximum number of Call Registers (CR) that can be used for that
particular function out of the total CR pool. If the designated limit is larger than needed and
there are still spare CRs, the unused CRs will not be tied up by this specific function. Therefore,
there is little penalty for overstating the buffer size limit, as long as the limit is within the number
of CRs available to the system.
The values provided in Table 36: Memory related parameters on page 206 indicate the relative
requirements for various buffers. They are the minimum buffer size needed to cover most
applications under the constraint of tight memory availability. When increasing buffer sizes,
make the increases proportional to the values in Table 36: Memory related parameters on
page 206. This guideline applies in all cases except CSQI/CSQO, which is relatively
independent of other buffers and can be increased without affecting others.
For example, with a CS 1000E Call Center (maximum 25 000 CRs) using many applications
(such as CallPilot), it would be advisable to set the CSQI/CSQO to a high value (even up to
the limit of 25% of NCR). Note that the value of NCR should be increased to account for the
requirements of CSQI and CSQO.
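The CSQI/CSQO and auxiliary queue bounds described above can be sketched as follows; the function name is hypothetical.

```python
def buffer_limits(ncr):
    """Bounds for the call-register buffers described above.

    ncr: total number of call registers (NCR).
    Returns (csq_range, aux_range) as (minimum, maximum) pairs:
    CSQI/CSQO run from 20 up to 25% of NCR; the auxiliary input and
    output queues run from 20 up to the lesser of 25% of NCR or 255.
    """
    quarter = ncr // 4
    csq_range = (20, quarter)
    aux_range = (20, min(quarter, 255))
    return csq_range, aux_range

# The Call Center example above: 25 000 call registers
csq_range, aux_range = buffer_limits(25_000)   # CSQI/CSQO up to 6250
```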
Contents
This chapter contains the following topics:
Introduction on page 209
Memory size on page 210
Mass storage on page 212
Physical capacity on page 213
CS 1000E network traffic on page 217
Real time capacity on page 229
Signaling Server on page 234
Software configuration capacities on page 247
CS 1000E capacities on page 247
Zone/IP Telephony Node Engineering on page 248
Introduction
This chapter describes the system's primary capacity categories. For each category, this
chapter:
• identifies the units in which the capacity is measured
• details the primary physical and functional elements affecting the capacity
• describes actions that can be used to engineer the capacity
Resource calculations on page 249 provides the algorithms for engineering the system within
the capacity limits. In some cases, applications such as Call Center require detailed
engineering. These applications are discussed in Application engineering on page 323.
Memory size
Table 37: Avaya CS 1000 memory requirements on page 210 shows the minimum amount of
memory required for Avaya Communication Server 1000 (Avaya CS 1000) software.
Table 37: Avaya CS 1000 memory requirements
Table 38: Recommended call register counts on page 210 shows the call register count
recommended for Communication Server 1000 software, so that the system's memory
requirements do not exceed the processor's memory capacity.
Table 38: Recommended call register counts
Note:
A large deployment is greater than 11 500 SIP Line users or greater than 22 500 UNIStim
users.
Memory engineering
Current call processors for the CS 1000E are shipped with sufficient memory for the supported
line sizes of the individual CPU types. Memory engineering is not required for most items.
Customer data is split between unprotected data store (UDS) and protected data store (PDS).
Using LD 10 or LD 11 and looking at the memory usage, you can determine the amount of
memory left on a system.
>ld 11
SL1000
MEM AVAIL: (U/P): 8064848  USED U P: 8925713 4998811  TOT: 21989372
The preceding example shows that there are 8,064,848 SL1 words (32,259,392 bytes) of
memory left that can be used for either UDS or PDS. When the amount of available memory
drops very low, it is shown separately as the amount of UDS available and PDS available.
The preceding example also shows that 8,925,713 SL1 words (35,702,852 bytes) of UDS
and 4,998,811 SL1 words (19,995,244 bytes) of PDS are currently in use.
The major consumer of unprotected data store (UDS) is call register definitions. Therefore,
before increasing the number of call registers on a system, check that there is sufficient UDS
available.
The major consumer of protected data store (PDS) is speed call lists. The overlay used to
create speed call lists does the memory calculations (based on the number of lists, size of lists
and DN sizes).
Before defining large numbers of sets, it is recommended that you look at the available
memory, create a single set, and see how much memory was consumed. Then determine
whether there is sufficient memory left to create all of the desired sets.
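The word-to-byte conversion and the create-one-set-and-measure approach above can be sketched as follows, assuming 4 bytes per SL1 word (consistent with the LD 11 example); the function names and figures are hypothetical.

```python
def words_to_bytes(sl1_words):
    """An SL1 word is 4 bytes: 8 064 848 words = 32 259 392 bytes,
    matching the LD 11 example above."""
    return sl1_words * 4

def sets_fit(avail_before, avail_after_one_set, desired_sets):
    """Create one set, measure the memory it consumed, then check
    whether the remaining memory can hold the rest of the sets."""
    per_set = avail_before - avail_after_one_set
    if per_set <= 0:
        return True   # the set consumed no measurable memory
    # One set already exists, so only desired_sets - 1 remain to create
    return avail_after_one_set // per_set >= desired_sets - 1
```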
Assumptions:
• Call Register Traffic Factor (CRF) = 1.865
• The formula for calculating the recommended number of call registers depends on traffic
load for the system.
• 28 centi-call seconds (CCS) for each ACD trunk
• Snacd = (Number of calls overflowed to all target ACD DNs × 2.25) – (Number of calls
overflowed to local target ACD DNs × 1.8) (= 0 if the system is not a source node)
• Tnacd = 0.2 × Number of expected calls overflowed from source (= 0 if the system is not
a target node)
• ISDN CCS = PRI CCS + BRI CCS
• ISDN penetration factor: p = ISDN CCS ÷ Total Voice Traffic
• ISDN factor: (1 – p)^2 + [4 × (1 – p)] × p + (3 × p^2)
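The ISDN penetration and ISDN factor formulas above compute directly; the function name is hypothetical.

```python
def isdn_factor(isdn_ccs, total_voice_ccs):
    """ISDN penetration factor p = ISDN CCS / total voice traffic, then
    ISDN factor = (1 - p)^2 + 4(1 - p)p + 3p^2, per the assumptions
    above (ISDN CCS = PRI CCS + BRI CCS)."""
    p = isdn_ccs / total_voice_ccs
    return (1 - p) ** 2 + 4 * (1 - p) * p + 3 * p ** 2

# With no ISDN traffic (p = 0) the factor is 1; with all-ISDN traffic
# (p = 1) it is 3
```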
Mass storage
The system processor program and data are loaded from a Fixed Media Disk (FMD).
Depending on the hardware platform, the FMD can be a hard disk drive (HDD) or Compact
Flash (CF) card.
Software installation
Software, customer databases, and PEPS are delivered to the system using a Removable
Media Disk (RMD), either CF card or USB, inserted into the Server. An installation process
copies the software to the on-board FMD. The software subsequently operates on the Server
FMD.
Database backup
The RMD can also be used for customer database backups.
Physical capacity
The following physical capacities are discussed in this section:
• CS 1000E SA and HA physical capacity on page 213
• CS 1000E Co-resident Call Server and Signaling Server physical capacity on
page 214
• CS 1000E TDM physical capacity on page 214
information about phantom and virtual loops, see the Global Software Licenses chapter in
Avaya Features and Services Fundamentals, NN43001-106.
For information about loop and card slot usage and requirements for the Media Gateways in
the CS 1000E, see Assigning loops and card slots in the Communication Server 1000E on
page 375.
The CS 1000E TDM system does not support any IP Phones (UNIStim, SIP Line, or SIP
DECT), virtual trunks, or an NRS. For more information about CS 1000E TDM, see Avaya Co-
resident Call Server and Signaling Server Fundamentals, NN43001-509.
Physical links
There are two types of physical links to consider:
• Serial Data Interface (SDI) on page 215
• Local Area Network (LAN) on page 215
Functional links
For each of the following functions, the type of link and resulting capacity are given.
OAM
The system uses an SDI port to connect to a terminal/computer (TTY) to receive maintenance
commands or to print traffic reports, maintenance messages, or CDR records.
D-Channel
A PRI interface consists of 23 B-channels (30 in Europe, based on E1) and 1 D-channel. The
64 kbps D-channel is used for signaling. A D-channel communicates with the system
through a DCHI card or a DCHI port on the D-channel handler. A D-channel on a BRI set is a
16 kbps link that is multiplexed to make a 64 kbps channel.
Terminology
Basic traffic terms used in this section are:
• ATTEMPT – any effort on the part of a traffic source to seize a circuit/channel/timeslot
• CALL – any actual engagement or seizure of a circuit or channel by two parties
• CALLING RATE – the number of calls per line per busy hour (Calls/Line)
• BUSY HOUR – the continuous 60-minute period of day having the highest traffic usage,
usually beginning on the hour or half-hour
• HOLDING TIME – the length of time that a call engages a traffic path or channel
• TRAFFIC – the total occupied time of circuits or channels, generally expressed in Centi-
Call Seconds (CCS) or Erlangs (CCS = a circuit occupied 100 seconds; Erlang = a circuit
occupied one hour)
• BLOCKING – attempts not accepted by the system due to unavailability of the resource
• OFFERED traffic = CARRIED traffic + BLOCKED traffic
• Traffic load in CCS = Number of calls × AHT ÷ 100 (where AHT = average holding time)
• Network CCS = Total CCS handled by the switching network or CCS offered to the network
by stations, trunks, attendants, Digitone Receivers, conference circuits, and special
features
Communication Server 1000E engineering is typically based on measurements you perform
in an hour (typical busy hour). This applies to traffic load in CCS and Network CCS.
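The traffic definitions above translate directly into code; 36 CCS = 1 Erlang because an Erlang is a circuit occupied for one hour (3600 seconds) and a CCS is 100 seconds. The function names are hypothetical.

```python
def traffic_ccs(number_of_calls, avg_holding_time_s):
    """Traffic load in CCS = number of calls x AHT / 100, with the
    average holding time (AHT) in seconds."""
    return number_of_calls * avg_holding_time_s / 100

def ccs_to_erlangs(ccs):
    """A CCS is a circuit occupied 100 seconds; an Erlang is a circuit
    occupied one hour, so 36 CCS = 1 Erlang."""
    return ccs / 36

# 200 calls of 90 seconds average holding time in the busy hour
busy_hour_load = traffic_ccs(200, 90)   # 180 CCS, or 5 Erlangs
```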
Loop counting
• 1 Virtual Superloop has 1024 TNs (IP Phones, Vrtks, IP Media Services)
• 2 Media Gateways per Superloop (Media Gateway PRI Gateway counts as 1 Media
Gateway)
• Gateway Controller card
- 1 loop per TDS definition (30 units per loop).
- 1 loop per Conference definition (30 units per loop).
- Can define up to 2 Conference loops and 2 TDS loops.
• IP Media Services are IP tone, IP conference, IP attendant consoles, IP recorded
announcer, and IP music.
- IP tone - 30 units for each tone loop
- IP conference - 30 units for each conference loop
• 1 Phantom loop has 512 units. Used for M39xx Virtual Office.
• 1 Phantom loop has 1024 DECT users.
• 1024 i200x Virtual Office sets per Superloop.
• Every PRI definition requires 1 loop (23 channels T1, 30 channels E1).
• 1024 PCAs per Superloop.
• Limit of 64 Superloops (256 loops).
• Superloops = ROUNDUP((IP Phones + Vrtks) / 1024) + ROUNDUP(MGs / 2) +
ROUNDUP(M39XX_vo / 512) + ROUNDUP(i200x_vo / 1024) + ROUNDUP(PCA / 1024)
+ ROUNDUP(DECT users / 1024) + ROUNDUP((2 × (SIPN + SIP3 users)) / 1024)
• Loops = ROUNDUP(Conference Ports / 30) + TDS loops (minimum 1 for each MGC) +
PRI or DTI cards + (1000E PRI Gateways × 4)
• Total Loops = Loops + IP tone loops + Superloops × 4
• Total Loops > 256 is an error, too many loops being used.
• Total Superloops = ROUNDUP(Total Loops / 4)
Note:
Conference ports can be TDM or IP.
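A minimal sketch of the loop-counting formulas above; the parameter names are shorthand for the quantities in the bullets, not configuration terms.

```python
def ceil_div(n, d):
    """ROUNDUP of n / d for non-negative integers."""
    return -(-n // d)

def loop_counts(ip_phones, vtrks, mgs, m39xx_vo, i200x_vo, pca, dect_users,
                sip_users, conference_ports, tds_loops, pri_dti_cards,
                pri_gateways, ip_tone_loops):
    """Apply the loop-counting bullets above. sip_users is the sum of
    SIPN and SIP3 users; pri_gateways is the count of 1000E PRI
    Gateways (4 loops each)."""
    superloops = (ceil_div(ip_phones + vtrks, 1024) + ceil_div(mgs, 2)
                  + ceil_div(m39xx_vo, 512) + ceil_div(i200x_vo, 1024)
                  + ceil_div(pca, 1024) + ceil_div(dect_users, 1024)
                  + ceil_div(2 * sip_users, 1024))
    loops = (ceil_div(conference_ports, 30) + tds_loops
             + pri_dti_cards + pri_gateways * 4)
    total_loops = loops + ip_tone_loops + superloops * 4
    if total_loops > 256:
        raise ValueError("too many loops being used")
    return superloops, loops, total_loops, ceil_div(total_loops, 4)

# 600 IP Phones plus virtual trunks, 4 Media Gateways (1 TDS loop
# each), 30 conference ports, 2 PRI cards
counts = loop_counts(500, 100, 4, 0, 0, 0, 0, 0, 30, 4, 2, 0, 0)
```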
Superloop capacity
On a TDM-based system (CS 1000M), each superloop is constrained by the number of talkslots
and the number of CCS that the superloop can carry.
The CS 1000E is an IP-based system and does not have the same constraints. All Virtual
superloops use "virtual talkslots" and are nonblocking (one virtual talkslot per virtual TN). This
also removes the CCS per superloop constraint.
The Media Gateway has a nonblocking TDM backplane (1 talkslot per TDM unit). Call blocking
can occur here only for other required resources (DTR, TDS, DSP, and so on), which must all
exist within the same Media Gateway as the telephone requiring the resource.
Loop capacity and Media Gateway TDM resources are subject to the Grade-of-Service (GoS)
described under Grade-of-Service on page 227.
Table 40: Connection type resources required on page 221 lists the resources required for
each type of connection.
Table 40: Connection type resources required
See Resource calculations on page 249 for the algorithms to calculate the required
resources.
TDS
The Tone and Digit Switch (TDS) loop provides dial tone, busy tone, overflow tone, ringing
tone, audible ringback tone, DP or dual tone multifrequency (DTMF) outpulsing, and
miscellaneous tones. All these tones are provided through the maximum 30 timeslots in the
TDS loop.
A minimum of one TDS loop is required in each Media Gateway. The TDS circuits are provided
by the MGC card. If additional TDS circuits are required in any Media Gateway, a second TDS
loop can be configured in it. TDS circuits in a Media Gateway provide tones for TDM telephones
or trunks in that Media Gateway only.
Conference
The MGC has a maximum of 2 conference loops, with 30 conference circuits for each
conference loop, for a total of 60 conference circuits for each MGC-based Media Gateway.
The maximum number of parties involved in a single conference on a Media Gateway with
Gateway Controller is 30. Conference circuits in the CS 1000E are a system resource.
Conference loops can be TDM or IP.
Broadcast circuits
The Avaya Integrated Recorded Announcer (Recorded Announcer) card provides either 8 or
16 ports to support Music, Recorded Announcement (RAN), and Automatic Wake Up. There
is a maximum of 60 simultaneous connections to an individual card for broadcast within a
Media Gateway. The use of controlled broadcast with Symposium and MGate cards has the
same simultaneous connection limit as broadcast circuits. With special provisioning, the limit
can be increased to 120 connections (see Broadcast circuits on page 383).
Music
Music Broadcast requires any Music trunk and an external music source or a Recorded
Announcer card. The Recorded Announcer has the capability to provide audio input for external
music. A CON loop is not required for Music Broadcast.
Network Music
With the Network Music feature, a networked Central Audio Server is attached to the CS 1000E
system to be used as the music source on demand for all parties on hold. With Network Music,
the CS 1000E system supports MOH features without a locally equipped music source for
each node. The Network Music feature provides music to every node in the system.
The Central Audio Server is accessed over the network through H.323/SIP virtual trunks or
TDM trunks. Virtual trunks or TDM trunks are connected to a network music trunk through an
analog TIE trunk, the Network Music TIE trunk. Network Music is implemented with an XUT
pack (NT8D14) and a network music agent. Broadcast music or conference music is set up so
that multiple held parties can share the same music trunk.
To maximize resource efficiency, the music is broadcast so that multiple parties can share
the same music trunk. One music trunk can support a maximum of 64 listeners with broadcast
music.
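The 64-listener limit makes music-trunk provisioning a simple ceiling division. A minimal sketch in Python (the function name and the sample peak-listener figure are illustrative, not from this document):

```python
import math

def music_trunks_required(peak_held_parties: int,
                          listeners_per_trunk: int = 64) -> int:
    """One broadcast music trunk serves up to 64 simultaneous listeners,
    so the trunk count is the ceiling of held parties over that limit."""
    return math.ceil(peak_held_parties / listeners_per_trunk)

# Example: 130 simultaneously held parties require 3 music trunks.
```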
RAN
RAN trunks are located on eight-port trunk cards on PE shelves just like regular trunk circuits.
They provide voice messages to waiting calls. RAN trunks are also needed to provide music
to conference loops for music on hold.
Each RAN trunk is connected to one ACD call at a time, for the duration of the RAN message.
Different RAN sources require different RAN trunk routes. If the first RAN is different from the
second RAN, they need different RAN trunk routes. However, if the same message is to be
used, the first RAN and second RAN can use the same route.
Use the following formula to calculate RAN traffic:
RAN CCS = (Number of ACD calls using RAN × RAN HT) ÷ 100
A RAN message typically runs from 20 to 40 seconds. If the average for a specific
application is not known, use a default of 30 seconds. After RAN CCS is obtained, estimate
RAN trunk requirements from a Poisson P.01 table or a delay table (such as the DTR table)
matching the holding time of a RAN message.
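The RAN CCS formula can be sketched as follows; the function name and example inputs are illustrative, and the trunk count itself would still be read from the referenced Poisson P.01 table:

```python
def ran_ccs(acd_calls_using_ran: int, ran_holding_time_s: float = 30.0) -> float:
    """RAN CCS = (number of ACD calls using RAN x RAN holding time) / 100.
    One CCS is 100 call-seconds of traffic; the 30 s default is the
    document's suggested average when no measured value exists."""
    return acd_calls_using_ran * ran_holding_time_s / 100.0

# Example: 120 ACD calls per hour hearing a 30-second announcement
# offer 36 CCS of RAN traffic.
```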
DTR
A Digitone Receiver (DTR) serves features involving 2500 telephones or Digitone trunks. In
CS 1000E systems, DTRs are not system-wide resources. They support only the telephones
and trunks in the Media Gateway in which they reside.
The MGC card provides 16 DTRs, or 8 DTRs and 4 Multifrequency Receivers (MFR). Additional
DTRs can be provided by XDTR cards.
There are a number of features that require DTRs. General assumptions for DTR traffic
calculations are:
• DTR traffic is inflated by 30% to cover unsuccessful dialing attempts.
• Call holding time used in intraoffice and outgoing call calculations is 135 seconds if actual
values are unknown.
• DTR holding times are 6.2 and 14.1 seconds for intraoffice and outgoing calls,
respectively.
• The numbers of incoming calls and outgoing calls are assumed to be equal if actual values
are not specified.
The major DTR traffic sources and their calculation procedures are as follows:
1. Calculate intraoffice DTR traffic:
Intraoffice = 100 × DTR station traffic (CCS) ÷ AHT × (R ÷ 2) (Recall that R is the
intraoffice ratio.)
2. Calculate outgoing DTR traffic:
Outgoing = 100 × DTR station traffic (CCS) ÷ AHT × (1 - R ÷ 2)
3. Calculate direct inward dial (DID) DTR traffic:
DID calls = DID DTR trunk traffic (CCS) × 100 ÷ AHT
4. Calculate total DTR traffic: Total = [(1.3 × 6.2 × intra) + (1.3 × 14.1 × outgoing calls)
+ (2.5 × DID calls)] ÷ 100
5. See Digitone receiver load capacity 6 to 15 second holding time on page 418 to
determine the number of DTRs required. Note that a weighted average for holding
times should be used.
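Steps 1 through 4 above can be collected into one calculation. This sketch uses the document's formulas directly; the function name, parameter names, and default values (135 s AHT, R = 0.5) are illustrative:

```python
def dtr_traffic_ccs(dtr_station_ccs: float, did_trunk_ccs: float = 0.0,
                    aht_s: float = 135.0, intra_ratio: float = 0.5) -> float:
    """Total DTR traffic (CCS) per the four-step procedure above.
    dtr_station_ccs: busy-hour CCS from Digitone telephones
    did_trunk_ccs:   busy-hour CCS on DID Digitone trunks
    aht_s:           average call holding time (default 135 s)
    intra_ratio:     R, the intraoffice ratio"""
    intra = 100.0 * dtr_station_ccs / aht_s * (intra_ratio / 2.0)        # step 1
    outgoing = 100.0 * dtr_station_ccs / aht_s * (1.0 - intra_ratio / 2.0)  # step 2
    did_calls = did_trunk_ccs * 100.0 / aht_s                            # step 3
    # 1.3 inflates for unsuccessful attempts; 6.2 s and 14.1 s are the
    # DTR holding times for intraoffice and outgoing calls.
    return (1.3 * 6.2 * intra + 1.3 * 14.1 * outgoing + 2.5 * did_calls) / 100.0
```

The result is then taken to the Digitone receiver load capacity table (step 5) to find the DTR count.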
IP Media Services
IP Media Services is a term used for a group of services. The services supported are:
• IP Tone generation for IP Phones
• IP Conference
• IP Music
• IP Recorded Announcer
• IP Attendant Console
IP Tone
IP Tone is required for IP Phone blind transfers (a ringback tone is required). Approximately 3
percent of IP Phones on a system are involved in a blind transfer at any given time. Therefore,
IP Tone consumes a small number of media sessions on a system.
Perform the following steps to determine the number of media sessions required to support IP
Tone.
1. Determine the number of calls per hour for IP Phones (UNIStim and SIP).
• Calls involving at least one UNIStim Phone and TPS: CIPtone = C2IP + C1IP +
CSTIV + CSTID + CTSVI + CTSDI + C2SIPUIP
• Calls involving at least one SIP Line Phone using SLG: CSIPtone = C2SIP + C1SIP
+ C2SIPUIP + CSTSV + CSTSD + CTSVS + CTSDS
Total_IP_calls = CIPtone + CSIPtone
2. Determine the number of IP based calls that require an IP Tone.
IP_Tone_calls = Total_IP_calls × 0.03
3. Determine the traffic load (CCS) for IP Tone. Assume the tone is required for 20
seconds.
IPT_CCS = (IP_Tone_calls × 20) ÷ 100
4. Determine the number of IP Tone sessions required.
Use a Poisson 1% blocking (P.01 GOS) formula or table to determine the number
of IP Tone sessions you require.
Note:
IP Tone loops are the same density as TDM loops. Loop counting for determining the number
of loops consumed for IP Tone is the same as TDM.
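Steps 2 and 3 of the procedure reduce to a short calculation; the function and parameter names are illustrative, and the session count (step 4) would still come from a Poisson P.01 table:

```python
def ip_tone_ccs(total_ip_calls: float,
                blind_transfer_fraction: float = 0.03,
                tone_holding_time_s: float = 20.0) -> float:
    """IP Tone traffic in CCS: the share of IP calls needing a tone
    (step 2), held for 20 seconds each, converted to CCS (step 3)."""
    ip_tone_calls = total_ip_calls * blind_transfer_fraction
    return ip_tone_calls * tone_holding_time_s / 100.0

# Example: 10 000 IP calls per hour yield 300 tone calls and 60 CCS.
```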
IP Conference
IP Conference provides IP based conference. The conference port calculations and loop
counting remain the same as the TDM based Conference calculations. DSP resources are not
required for IP Conference.
The number of IP media sessions required to support IP Conference equals the number of
required conference ports.
Conference ports for each system = (Number of total telephones) × rcon × 0.4
IP Conference Sessions = Conference ports for each system
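The two formulas above can be sketched as one function. Here rcon is the system's conference-traffic ratio as defined with the other Enterprise Configurator inputs; the function name and the round-then-ceiling guard are illustrative:

```python
import math

def ip_conference_sessions(total_telephones: int, rcon: float) -> int:
    """Conference ports for each system = telephones x rcon x 0.4;
    the IP Conference session count equals the conference port count."""
    ports = total_telephones * rcon * 0.4
    # Round before taking the ceiling to guard against float round-off.
    return math.ceil(round(ports, 9))
```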
IP Music
You require one IP media session for every IP Music ISM ordered. You enter the number of IP
Music ISM on the IP Media Services input page.
IP Recorded Announcer
You require one IP media session for every IP Recorded Announcer (IP RAN) ISM ordered.
You enter the number of IP RAN ISM on the IP Media Services input page.
IP Attendant Console
You require one IP Attendant ISM for every IP Attendant console ordered. You order IP
Attendant Consoles on the Consoles and terminals input page. The number of IP Attendant
ISM is displayed on the IP Media Services input page.
Media Cards
Media Cards (MC32 or MC32S) do not run the Terminal Proxy Server (TPS) application. Media
Cards provide DSP resources. The TPS application only runs on a Signaling Server.
All the Media Cards in a specific Media Gateway must be in the same zone, so that bandwidth
management and codec selection can be performed properly.
Grade-of-Service
In a broad sense, the Grade-of-Service (GoS) encompasses everything a telephone user
perceives as the quality of services rendered. This includes:
• frequency of connection on first attempt
• speed of connection
• accuracy of connection
• average speed of answer by an operator
• quality of transmission
In the context of system capacity engineering, the primary GoS measures are blocking
probability and average delay.
Based on the EIA Subcommittee TR-41.1 Traffic Considerations for PBX Systems, the
following GoS requirements must be met:
• Dial tone delay is not greater than 3 seconds for more than 1.5% of call originations.
• The probability of network blocking is 0.01 or less on line-to-line, line-to-trunk, or trunk-
to-line connections.
• Blocking for ringing circuits is 0.001 or less.
• Post-dialing delay is less than 1.5 seconds on all calls.
Traffic models
Table 41: Traffic models on page 228 summarizes the traffic models that are used in various
subsystem engineering procedures.
Table 41: Traffic models
Typically, the GoS for line-side traffic is based on Erlang B (or Erlang Loss formula) at P.01
GoS. When there is no resource available to process a call entering the system, the call is
blocked out of the system. Therefore, the correct model to calculate the call's blocking
probability is a "blocked call cleared" model, which is the basis of Erlang B.
When a call is already in the system and seeking a resource (trunk) to go out, the usual model
to estimate trunk requirements is based on the Poisson formula. The reasons are:
• The Poisson model is more conservative than Erlang B (in that it projects a higher number
of circuits to meet the same GoS). This reflects trunking requirements more accurately,
since alternative routing (or routing tables) for outgoing trunk processing tends to increase
loading on the trunk group.
• General telephony practice is to provide a better GoS for calls already using system
resources (such as tones, digit dialing, and timeslots). Incomplete calls inefficiently waste
partial resources. With more trunk circuits equipped, the probability of incomplete calls is
lower.
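The two models can be compared numerically. This sketch implements the standard Erlang B recursion and the Poisson (blocked-calls-held) tail probability, then sizes a group to a P.01 GoS; function names are illustrative, and for production engineering the tables referenced in this document remain the authority:

```python
import math

def erlang_b(servers: int, erlangs: float) -> float:
    """Blocking probability under Erlang B (blocked calls cleared),
    computed with the standard numerically stable recursion."""
    b = 1.0
    for k in range(1, servers + 1):
        b = (erlangs * b) / (k + erlangs * b)
    return b

def poisson_blocking(servers: int, erlangs: float) -> float:
    """Poisson-model blocking: the probability that Poisson-distributed
    demand reaches or exceeds the number of servers."""
    cdf = sum(math.exp(-erlangs) * erlangs ** k / math.factorial(k)
              for k in range(servers))
    return 1.0 - cdf

def trunks_required(erlangs: float, gos: float = 0.01,
                    model=poisson_blocking) -> int:
    """Smallest trunk count whose blocking meets the target GoS."""
    n = 1
    while model(n, erlangs) > gos:
        n += 1
    return n
```

For the same offered load, the Poisson model yields a trunk count at least as large as Erlang B, which is the conservatism described above.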
Note:
Co-resident Call Server and Signaling Server effective throughput is significantly reduced
because of high RTM factors on all calls and the UCM load. The effective call throughput is the
following:
• CP PM - 10 000 cph
• CP DC - 15 000 cph
• CP MG - 8000 cph
• COTS2 - 20 000 cph
• Common Server - 20 000 cph
Feature impact
Every feature that is applied to a call increases the CP real time consumed by that call. These
impacts can be measured and added incrementally to the cost of a basic call to determine the
cost of a featured call. This is the basis of the algorithm used by Enterprise Configurator to
determine the rated capacity of a proposed switch configuration.
The incremental impact of a feature, expressed in EBC, is called the real time factor for that
feature. Real time factors are computed by measuring the incremental real time for the feature
in milliseconds and dividing by the call service time of a basic call.
Each call is modeled as a basic call plus feature increments. For example, an incoming call
from a DID trunk terminating on a digital telephone with incoming CDR is modeled as a basic
call plus a real time increment for incoming DID plus an increment for digital telephones plus
an increment for incoming CDR.
A second factor is required to determine the overall impact of a feature on a switch. This is the
penetration factor. The penetration factor is simply the proportion of calls in the system that
invoke the feature.
The real time impact, in EBC, of a feature on the system is computed as follows:
(Calls) × (penetration factor) × (real time factor)
The sum of the impacts of all features, plus the number of calls, is the real time load on the
system, in EBC.
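The feature-impact calculation above can be sketched as a single sum; the function name and the (penetration factor, real time factor) pair representation are illustrative:

```python
def real_time_load_ebc(calls: float,
                       features: list[tuple[float, float]]) -> float:
    """System real-time load in EBC: the call count plus, for each
    feature, calls x penetration factor x real time factor."""
    return calls + sum(calls * penetration * rt_factor
                       for penetration, rt_factor in features)

# Example: 1000 calls, one feature on 50% of calls with factor 0.2 and
# another on 10% of calls with factor 1.0.
```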
For penetration and real time factors and for the detailed EBC calculations, see System
calls on page 253 and Table 52: Real time factors on page 258.
Auxiliary processors
Interactions with auxiliary processors also have real time impacts on the system CP depending
on the number and length of messages exchanged. Several applications are described in
Application engineering on page 323.
as well. The relevant number is the average of the highest ten values from the busiest four-
week period of the year. An estimate is sufficient, based on current observations, if this data
is not available.
If the switch is not accessible and call load is not known or estimated from external knowledge,
call load can be computed. For this purpose, assumptions about the usage characteristics of
telephones and trunks must be made. For a description of the parameters that are required
and default values, see Resource calculation parameters on page 250.
Telephones
As the primary traffic source to the system, telephones have a unique real time impact on the
system. For the major types listed below, the number of telephones of each type must be given,
and the CCS and AHT must be estimated. In some cases it can be necessary to separate a
single type into low-usage and high-usage categories. For example, a typical office
environment with analog (500/2500-type) telephones can have a small call center with agents
on analog (500/2500-type) telephones. A typical low-usage default value is 6 CCS. A typical
high-usage default value is 28 CCS.
The principal types of telephones include:
• Analog: 500/2500-type, message waiting 500, message waiting 2500, and CLASS
telephones
• Digital: M2000 series Meridian Modular Telephone, voice and/or data ports
• Consoles
• IP Phones 200x and 11xxE
• IP Softphone 2050
Trunks
Depending on the type of trunk and application involved, trunks can either be traffic sources,
which generate calls to the system, or resources that satisfy traffic demands. Default trunk
CCS in an office environment is 26 CCS. Call Center applications can require the default to
be as high as 28 to 33 CCS.
Voice
Analog:
• CO
• DID
• WATS
• FX
• CCSA
• TIE E&M
• TIE Loop Start
Digital:
• DTI: number given in terms of links, each provides 24 trunks under the North American
standard
• PRI: number given in terms of links, each provides 23B+D under the North American
standard
• European varieties of PRI, each provides 30B+D: VNS, DASS, DPNSS, QSIG, ETSI PRI
DID
H.323 Virtual Trunk
An IP Peer H.323 Virtual Trunk is identified with a trunk route that is not associated with a
physical hardware card.
SIP Virtual Trunk
A Session Initiation Protocol (SIP) Virtual Trunk is identified with a trunk route that is not
associated with a physical hardware card.
Data
• Sync/Async CP
• Async Modem Pool
• Sync/Async Modem Pool
• Sync/Async Data
• Async Data Lines
RAN
The default value for AHT RAN is 30 seconds.
Music
The default value for AHT MUSIC is 60 seconds.
Signaling Server
The following software components operate on the Signaling Server:
• Terminal Proxy Server (TPS)
• H.323 Gateway (Virtual Trunk)
• SIP Gateway (Virtual Trunk)
• SIP Line Gateway (SLG)
• Network Routing Service (NRS)
• H.323 Gatekeeper
• Network Connection Service (NCS)
• CS 1000 Element Manager Web Server
• Application Server
• Avaya Unified Communication Manager (Avaya UCM)
Signaling Server software elements can coexist on one Signaling Server or reside individually
on separate Signaling Servers, depending on traffic and redundancy requirements for each
element. For any co-resident Signaling Server software element combination, the maximum
supported call rate is 10 000 cph.
A Signaling Server can also function as an application server for the Personal Directory, Callers
List, Redial List, and Unicode Name Directory applications and Password administration. See
Application server for Personal Directory, Callers List, Redial List, and Unicode Name
Directory on page 246.
Table 43: Elements in Signaling Server on page 234 describes the function and engineering
requirements of each element.
Table 43: Elements in Signaling Server
Terminal Proxy Server (TPS)
• The TPS manages the firmware for the IP Phones that are registered to it. Accordingly, the TPS also manages the updating of the firmware for those IP Phones.
• The redundancy of TPS is N+1. Therefore, you can provide one extra Signaling Server to cover TPS functions from N other servers.
H.323 Gateway (Virtual Trunk)
• The IP Peer H.323 Gateway trunk, or H.323 Virtual Trunk, provides the function of a trunk route without a physical presence in the hardware. The H.323 Gateway supports direct, end-to-end voice paths using Virtual Trunks.
• The H.323 Signaling software (Virtual Trunk) provides the industry-standard H.323 signaling interface to H.323 Gateways. It supports both en bloc and overlap signaling. This software uses an H.323 Gatekeeper to resolve addressing for systems at different sites.
• The H.323 Gateway supports up to 1200 H.323 Virtual Trunks per Signaling Server, assuming a combination of incoming and outgoing H.323 calls (see Maximum number of SIP and H.323 Virtual Trunks on page 245). Beyond that, a second Signaling Server is required.
• The redundancy mode of the H.323 Gateway is 2 × N. Two H.323 Gateways handling the same route can provide redundancy for each other, but not for other routes.
SIP Gateway (Virtual Trunk)
• The SIP Gateway trunk, or SIP Virtual Trunk, provides a direct media path between users in the CS 1000E domain and users in the SIP domain.
• The SIP trunking software functions as a SIP User Agent and as a signaling gateway for all IP Phones.
• The SIP Gateway supports a maximum of 3700 SIP Virtual Trunks on CP DC and COTS2, and a maximum of 1800 SIP Virtual Trunks on CP PM and COTS1 (see Maximum number of SIP and H.323 Virtual Trunks on page 245).
• The redundancy mode of the SIP Gateway is 2 × N. Two SIP Gateways handling the same route can provide redundancy for each other, but not for other routes.
SIP Line Gateway (SLG)
• The SLG fully integrates Session Initiation Protocol (SIP) endpoints in the Communication Server 1000 system and extends the Communication Server 1000 features to SIP clients.
• The Call Server requires Package 417 (SIPL).
• The maximum number of SIPL users for each SLG is 3700 on CP DC, COTS2, and Common Server, and 1800 on CP PM and COTS1. The maximum number of SIPL users for a Communication Server 1000E Call Server is 11 250. In a pure IP system, the maximum is 20 000.
• You configure SIPL users as SIPL UEXT (SIPN and SIP3). SIPL users require two TNs from the Call Server: one for the line TN (SIP UEXT) and one for the SIPL VTRK. There must be a 1:1 ratio between SIPL UEXT and SIPL VTRK TNs.
• SIPL redundancy can be a leader and follower configuration for an SLG node. Both Signaling Servers share the same node IP; however, SIPL clients register only on the SLG node leader. The two Signaling Servers do not load share.
Network Routing Service (NRS)
• The NRS has the following components: H.323 Gatekeeper, SIP Redirect Server, SIP Proxy Server, and Network Connection Service (NCS).
• The NRS must reside on the Leader Signaling Server.
• For NRS redundancy, there are two modes:
- Active-Active mode for SIP Redirect and SIP Proxy provides load balancing across two NRS servers. You must appropriately engineer Active-Active mode to carry the redundant load in the case of an NRS server failure.
- Primary-Secondary mode uses the Primary NRS server to handle the total call rate from all the registered endpoints. If the Primary NRS fails, the endpoints register to the Secondary NRS server to handle the calls.
• The Primary and Secondary NRS Servers must be matched pairs. Unmatched vendor NRS Servers are not supported. You must use matched software configurations and engineering on each server for optimal performance.
• For NRS failsafe, you must identify a Gateway Server as the NRS failsafe.
• You can configure a Server as either Primary, Secondary, or Failsafe. You cannot combine multiple roles on one Server.
• The NRS software limit for the total number of endpoints is 5000. An exception is SIP Proxy mode with SIP TCP transport, where the endpoints limit is 1000.
• The total number of routing entries is 50 000.
• The redundancy of the NRS is in a mode of 2 × N.
H.323 Gatekeeper
• All systems in the network register to the H.323 Gatekeeper, which provides telephone number to IP address resolution.
• The capacity of the H.323 Gatekeeper is limited by the endpoints it serves and the number of entries at each endpoint.
SIP Redirect Server
• The SIP Redirect Server provides telephone number to IP address resolution. It uses a Gateway Location Service to match a fully qualified telephone number with a range of Directory Numbers (DN) and uses a SIP gateway to access that range of DNs.
• The SIP Redirect Server logically routes (directly or indirectly) SIP requests to the proper destination.
• The SIP Redirect Server receives requests, but does not pass the requests to another server. The SIP Redirect Server sends a response back to the SIP endpoint, indicating the IP address of the called user. The caller can directly contact the called party because the response includes the address of the called user.
SIP Proxy Server (SPS)
• The SIP Proxy acts as both a server and a client. The SIP Proxy receives requests, determines where to send the requests, and acts as a client for the SIP endpoints to pass requests to another server.
Network Connection Service (NCS)
• The NCS provides an interface to the TPS, enabling the TPS to query the NRS using the UNIStim protocol. The NCS is required to support the Avaya MG 1000B, Virtual Office, and Geographic Redundancy features.
CS 1000 Element Manager Web Server
• Has a negligible impact on capacity and can reside with any other element.
Application Server
• The Application Server for the Personal Directory, Callers List, Redial List, and Unicode Name Directory feature runs on the Signaling Server.
• Only one database can exist in the network, and redundancy is not supported.
• The database can coexist with the other software applications on a Signaling Server. However, if the number of IP users exceeds the following PD and UND limits, the database must be stored on a dedicated Signaling Server.
- CP PM: PD limit of 2000 IP users, UND limit of 2000 IP users
- COTS1 (HP DL320-G4 or IBM x306m): PD limit of 3000 IP users, UND limit of 3000 IP users
- COTS2 (Dell R300 or IBM x3350): PD limit of 5000 IP users, UND limit of 5000 IP users
The feasibility of combining the TPS, H.323 Gateway, SIP Gateway, and NRS on a Signaling
Server is determined by traffic associated with each element and the required redundancy
of each function.
Table 45: Non-dedicated Signaling Server limits (multiple SS applications per server)
Table 46: CS 1000E Co-resident Call Server and Signaling Server limits
Platform key: COTS2 = Dell R300 or IBM x3350; Common Server = HP DL360 G7.

                                      CP MG 128   CP PM   COTS2   Common Server   CP DC
ACD Agents (IP agents, IP trunks)           200     200     200             200     200
UNIStim Phones *                            700    1000    1000            1000    1000
Personal Directory users                    700    1000    1000            1000    1000
SIP Line Phones *                           400     400    1000            1000     400
Virtual Trunks (H323 and/or SIP) *          400     400     400             400     400
TDM                                         800     128 (Branch Office) or 800 (stand-alone)
                                                    on CP PM, COTS2, Common Server, and CP DC
PRI Spans                                    16      16      16              16      16
UCM Elements                                100     100     100             100     100
Media Gateways (IPMG)                         5       5       5               5       5
Service/GW endpoints on NRS                   5       5       5               5       5
NRE on NRS                                   20      20      20              20      20
OCS TR87                                    700    1000    1000            1000    1000
Media Server Controller (MSC)               400     400     400             400     400
  IPConf sessions
MSC IPMusic sessions                        400     400     400             400     400
MSC IPRan sessions                          400     400     400             400     400
MSC IPTone sessions                         400     400     400             400     400
MSC IPAttn sessions                         256     256     256             256     256
MSC total sessions **                       400     400     400             400     400
Calls per hour (sum of CS +               8 000  10 000  20 000          20 000  15 000
  NRS + MSC calls)
* Note: (UNIStim + SipN + Sip3) <= 700 sets on CP MG 128 and <= 1000 sets on the other platforms, AND TDM <= 800, AND (MSC + VTRK + ELC) <= 400.
** Note: MSC = (IPConf + IPRan + IPTone + IPMusic + IPAttn) <= 400.
Table 47: MG 1000B Co-resident Call Server and Signaling Server limits
                                      CP MG 32   CP MG 128 CoRes   CP PM CoRes   CP DC
ACD Agents (IP agents, IP trunks)           20               200           200     200
UniStim sets *                             100               400           400     400
PD users                                   100               400           400     400
SIP Line sets *                              0                 0             0       0
Virtual Trunks (H323 and/or SIP) *         400               400           400     400
TDM                                         32               128           128     128
PRI Spans                                    1                16            16      16
UCM Elements                               N/A               N/A           N/A     N/A
Media Gateways (IPMG) **                     1                 5             5       5
HD PRI GW                                    0                 2             2       2
NRS                                        N/A               N/A           N/A     N/A
Media Server Controller (MSC)               10                20            20      20
  IPConf sessions
MSC IPMusic sessions                        30               120           120     120
MSC IPRan sessions                          30               120           120     120
MSC IPTone sessions                         10                20            20      20
MSC IPAttn sessions                         16                64            64      64
MSC total sessions ***                     100               350           350     350
Calls per hour (sum of CS + MSC)         8,000             8,000        10,000  15,000
* Note: (UniStim + SipLine) <= 100 sets on CP MG 32 and <= 400 sets on the other platforms, AND TDM <= 32 on CP MG 32 (128 on the other platforms), AND (MSC + VTRK + ELC) <= 400.
** Note: CP MG 32 is a single four-slot chassis; on the other platforms, Media Gateways + HD GW <= 5.
*** Note: MSC = (IPConf + IPRan + IPTone + IPMusic + IPAttn) <= 100 on CP MG 32 and <= 350 on the other platforms.
[Table: maximum combinations of SIP and H.323 Virtual Trunks per Signaling Server platform; the row and column headings for these values (such as 3700, 1800, 1200, 900, 600) were lost in extraction. *Assumes H.245 tunneling is enabled.]
CS 1000E capacities
Because IP telephony consumes less processing capacity than TDM telephony, the total
number of telephones that a particular platform can support depends on the type of traffic as
well as the physical capacity and applications of a specific configuration.
Table 49: CS 1000E capacities summary on page 247 summarizes the capacities of CS 1000E
systems. Values in each cell indicate the total number of telephones that can be supported in
a particular configuration. These values are calculated from the point of view of call server
processing capacity, not from the point of view of physical card slot capacity.
Values in each cell are exclusive, not additive.
Table 49: CS 1000E capacities summary
Contents
This chapter contains the following topics:
Introduction on page 249
Resource calculation parameters on page 250
Resource calculation equations on page 251
Total system traffic on page 253
System calls on page 253
Real time calculations on page 257
DSP/Media Card calculations on page 264
Virtual Trunk calculations on page 269
Signaling Server algorithm on page 271
Reducing imbalances (second round of algorithm calculations) on page 305
Illustrative engineering example on page 308
Introduction
This chapter describes the algorithms implemented by the Enterprise Configurator tool in order
to calculate the resources required by the system.
In many cases, the calculations require user inputs that are the result of pre-engineering
performed in accordance with the capacities and guidelines described in System capacities on
page 209 and Application engineering on page 323.
When a proposed new system is equipped with more ports than the initial configuration actually
uses, treat the two sets of input data like two separate configurations. Run each set of data
through the algorithm and then compare results. For a viable solution, both sets of calculation
results must be within the capacities of the proposed system.
The ACD traffic for TDM terminations is integral in LTDM for all systems. Large systems can
contain both standard and nonblocking telephones. You must enter ACD agents in the
nonblocking telephone count (for line card provisioning); therefore, adjust the CCS using the
nonblocking CCS rate.
LIP is correct for IP ACD agents. LIP does not require a CCS adjustment because all IP Phones
are nonblocking.
System calls
The total number of calls the system must be engineered to handle is given by:
TCALL = 0.5 × TCCS × 100 ÷ WAHT
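The TCALL formula can be sketched directly. The function name is illustrative; the comment's reading of the 0.5 factor (total CCS counts both ends of each call) is an interpretation, not a statement from this document:

```python
def system_calls(tccs: float, waht_s: float) -> float:
    """TCALL = 0.5 x TCCS x 100 / WAHT.
    tccs:   total system traffic in CCS
    waht_s: weighted average holding time in seconds
    Dividing CCS (in call-seconds/100) by the holding time gives call
    ends; the 0.5 factor presumably merges the two ends of each call."""
    return 0.5 * tccs * 100.0 / waht_s

# Example: 5400 CCS at a 135 s weighted AHT gives 2000 calls.
```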
The real time factor adjusts for the fact that a feature call generally requires more real time to
process than a basic call. The impact on the system is a function of the frequency with which
the feature call appears during the busy hour. The penetration factor of a feature is the ratio
of that type of feature call to the overall system calls. See Traffic equations and penetration
factors on page 254 for the equations to calculate penetration factors for the 21 major call
types.
The real time factors and penetration factors are used to generate the real time multiplier
(RTM), which in turn is used to calculate the overall system EBC.
The real time multiplier is given by:
RTM = 1 + Error_term + (P_UIPtoUIP × f1) + (P_UIPtoL × f2) + (P_LtoL × f3) + (P_VTtoTr × f4)
+ (P_TrtoTr × f5) + (P_VhtoVs × f6) + (P_UIPtoVT × f7) + (P_UIPtoTr × f8) + (P_LtoVT × f9) +
(P_LtoTr × f10) + (P_VTtoL × f11) + (P_VTtoUIP × f12) + (P_TrtoUIP × f13) + (P_TrtoL × f14) +
(P_SIPtoSIP × f15) + (P_SIPtoUIP × f16) + (P_SIPtoL × f17) + (P_SIPtoVT × f18) + (P_SIPtoTr
× f19) + (P_VTtoSIP × f20) + (P_TrtoSIP × f21)
The Error_term accounts for features such as call transfer, three-way conference, call-forward-
no-answer, and others that are hard to single out to calculate real time impact. The Error_term
is usually assigned the value 0.25.
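The RTM equation is a dot product of penetration factors and real time factors plus the error term. A minimal sketch (dictionary keys and the two-type example are illustrative; the real calculation uses the 21 call types listed above):

```python
def real_time_multiplier(penetrations: dict[str, float],
                         factors: dict[str, float],
                         error_term: float = 0.25) -> float:
    """RTM = 1 + Error_term + sum over call types of P_i x f_i.
    Both dicts are keyed by call type, for example 'UIPtoUIP'."""
    return 1.0 + error_term + sum(p * factors[call_type]
                                  for call_type, p in penetrations.items())
```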
Table 55: Ratio of existing processor capacity to new processor capacity (CPTU) on page 263
gives capacity ratio values for supported processor upgrades.
Table 55: Ratio of existing processor capacity to new processor capacity (CPTU)
Because DSPs cannot be shared between Media Gateways, the efficiency of the DSP ports
on a Media Card is not as high as in a system-wide group. To calculate port and Media Card
requirements, use the following (and round up to the next integer if the result is a fraction):
• 794 CCS per Media Card (32 ports)
• 1822 CCS per two Media Cards (64 ports)
• 2891 CCS per three Media Cards (96 ports)
For example, 2000 CCS requires 96 DSP ports to provide a P.01 GoS (2000 > 1822), as
calculated from Table 56: Erlang B and Poisson values, in 32-port increments on page 264. In
this example, you must provide 3 Media Cards, or 96 DSP ports.
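The per-card CCS thresholds above can be applied as a lookup; the function name is illustrative, and loads beyond 96 ports must be sized from the referenced Poisson P.01 table:

```python
def media_cards_required(ccs: float) -> int:
    """Media Cards (32 DSP ports each) needed for the given general-traffic
    CCS at P.01 GoS, per the per-card capacities quoted above."""
    capacities = [794.0, 1822.0, 2891.0]  # CCS for 1, 2, 3 cards (32/64/96 ports)
    for cards, capacity in enumerate(capacities, start=1):
        if ccs <= capacity:
            return cards
    raise ValueError("beyond 96 ports, size from the Poisson P.01 table")

# Example: 2000 CCS exceeds 1822 but not 2891, so 3 cards (96 ports).
```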
For information about allocating Media Cards to Media Gateways, see Assigning loops and
card slots in the Communication Server 1000E on page 375.
Note:
For IP Media Services, no dedicated DSP resources are required to support IP
conference.
Number of DSP ports for applications = DSP for Integrated Recorded Announcer + DSP for
Integrated Conference Bridge + ... + DSP for Agent Greeting ports
Total number of DSP ports = Number of DSP ports for general traffic (equation on page 266)
+ Number of DSP ports for applications (equation on page 267) + Number of DSP ports for
nonblocking traffic (equation on page 267)
Note:
For IP Media Services, no dedicated DSP resources are required.
Each Communication Server 1000E system can have two types of Media Gateway DSP
configurations:
• maximum of 2 nonblocking resource units
• any number of blocking cards (limited by the available card slots in the Media Gateway)
• maximum of 2 standard consoles
A nonblocking resource unit can be any of the following:
• 1 x PRI or digital trunk card (for example, 24-channel TMDI/T1 or 30-channel PRI/E1)
• 1 x CallPilot card
• 1 x DECT card (for example, DMC / DMC8 or DMC-E / DMC8-E)
• 1 x Agent Greeting, SECC VSC, CRQM and RAO cards
• 2 x nonblocking digital or analog line cards (includes analog card for CLASS and Reach
line card)
• 2 x nonblocking analog trunk cards
• 2 x nonblocking consoles
• 1 x nonblocking line side interface card
• 30 x broadcast circuits (RAN, MUSIC)
- Accounts for MIRAN cards. If you assign more than 60 broadcast circuits to a single
MIRAN or trunk card, you must install it in a dedicated nonblocking Media Gateway. If
less than 1 MIRAN or trunk card is available or assigned, you can have a maximum of
30 RAN or MUSIC for each card and install each in a standard Media Gateway.
A blocking card can be any of the following:
• CP PM and XCMC (CLASS clock)
• Standard digital or analog lines
• Standard Reach line cards
• Standard trunk cards
If there are high CCS rates for each card, you must configure the cards as nonblocking.
A nonblocking Media Gateway can also be referred to as a Dedicated Media Gateway. A
nonblocking Media Gateway has one DSP port for each resource in the Media Gateway. 12
DSP ports are required for each nonblocking console.
For sites where the proportion of ACD agent telephones is less than 15% of the total
telephones in the system, CVT includes all general traffic seeking an access port.
Sites where the proportion of ACD agent telephones exceeds 15% of the total
telephones in the system are considered to be call centers. For call centers, CVT is
a reduced total that excludes ACD CCS. See Special treatment for nonblocking
access to DSP ports on page 267.
2. Convert Virtual Trunk calls to CCS.
Virtual Trunk CCS (VTCCS) = CVT × WAHT ÷ 100
3. For call centers, since the calculated Virtual Trunk calls exclude ACD traffic, restore
ACD traffic so that the final number of Virtual Trunks will be sufficient to handle both
general and ACD traffic.
Final Virtual Trunk CCS = (Calculated VTCCS without ACD) + [(Number of IP ACD
agent telephones) + (Number of TDM ACD agent telephones)] × V × ACD CCS ÷
TRKCCS
The expanded Virtual Trunk CCS is inflated by the ratio of 33/28 to reflect the fact
that more Virtual Trunks are needed to carry each agent CCS. This is because the
traffic levels engineered for ACD agents and Virtual Trunks are different.
4. Use the SIP and H.323 ratios to determine how the Virtual Trunk access ports will
be allocated to the two groups.
SIP Virtual Trunk CCS (SVTCCS) = VTCCS × vS
H.323 Virtual Trunk CCS (HVTCCS) = VTCCS × vH
5. Using the Poisson table for P.01 GoS (see Table 56: Erlang B and Poisson values,
in 32-port increments on page 264 or Trunk traffic Erlang B with P.01 Grade-of-
Service on page 409), find the corresponding number of SIP and H.323 access
ports required.
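Steps 2 through 4 above can be sketched as a short Python helper. The function names and default ratios are illustrative only, and the Poisson port lookup of step 5 is still done against the printed tables:

```python
def final_virtual_trunk_ccs(cvt, waht, ip_acd_agents=0, tdm_acd_agents=0,
                            v=0.0, acd_ccs=33.0, trk_ccs=28.0):
    """Steps 2-3: convert Virtual Trunk calls to CCS and, for call
    centers, restore the excluded ACD traffic (the acd_ccs/trk_ccs
    term reflects the 33/28 inflation described in the text)."""
    vtccs = cvt * waht / 100.0                                         # step 2
    vtccs += (ip_acd_agents + tdm_acd_agents) * v * acd_ccs / trk_ccs  # step 3
    return vtccs

def split_by_protocol(vtccs, vs, vh):
    """Step 4: allocate Virtual Trunk CCS to the SIP (vS) and
    H.323 (vH) groups."""
    return vtccs * vs, vtccs * vh
```

For a non-call-center site the agent terms are zero, and the final CCS is simply CVT × WAHT ÷ 100.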
Although a Virtual Trunk does not need the physical presence of a superloop, it does
utilize a logical superloop. A superloop of 128 timeslots can support 1024 Virtual
Trunk channels.
Imbalanced Virtual Trunk traffic renders the resulting equipment recommendation
unreliable.
For example, if the calculated number of Virtual Trunks is 80 but the original input value was
60, and the user decides to use the original input value of 60 to calculate bandwidth and
Signaling Server requirements, the resulting system will likely provide service inferior to the
normal expected P.01 GoS. On the other hand, if the user input was 80 and the calculated
result is 60, it is up to the user to choose the number to use for further calculations for necessary
resources, such as the LAN/WAN bandwidth requirement. Unless the configuration is
constrained in some way, the larger of the two values (input number or calculated number) is
always preferred.
The algorithm determines the number of Signaling Servers required by each application, taking redundancy requirements into consideration. The calculation for each application is performed separately. Once the individual requirements are determined, the algorithm evaluates sharing options. The results are then summed to determine the total Signaling Server requirement.
In most cases, the individual calculations divide the configuration's requirement for an
applicable parameter (endpoint, call, telephone, trunk) into the system limit for that parameter.
The particular application's Signaling Server requirement is determined by the parameter with
the largest proportional resource requirement, adjusted for redundancy.
The Signaling Server hardware platform can be CP PM, CP DC, IBM x306m, HP DL320-G4,
HP DL360-G7, IBM x3350, or Dell R300 servers. For the calculations, each variable is indexed
by Signaling Server type: type_index = CP PM, CP DC, COTS1 (HP DL320-G4, IBM x306m),
COTS2 (IBM x3350, Dell R300), or Common Server (HP DL360-G7).
Table 58: Signaling Server algorithm constants on page 272 defines the constants you use in
the Signaling Server algorithm.
Table 58: Signaling Server algorithm constants
Table 59: Signaling Server algorithm user inputs on page 277 describes the user inputs you
use in the Signaling Server algorithm.
Table 59: Signaling Server algorithm user inputs
Table 60: Signaling Server algorithm variables on page 279 describes the variables you use
in the Signaling Server algorithm.
Table 60: Signaling Server algorithm variables
Table 61: Constant and variable definitions for Co-resident Call Server and Signaling
Server on page 281 describes the constant and variable definitions for each Co-resident Call
Server and Signaling Server hardware type.
Table 61: Constant and variable definitions for Co-resident Call Server and Signaling
Server
Server types: COTS1 (HP DL320-G4, IBM x306m), COTS2 (IBM x3350, Dell R300), or Common Server (HP DL360-G7).
Note:
The Avaya Aura® Session Manager does not run on any of these servers and requires its own Aura® hardware.
1. Avaya Aura® Session Manager Servers
A Session Manager (SM) can replace the NRS except for when IPv6, H.323
Gateway, or a High Scalability system is required.
The Aura® Session Manager does not run on any of the CS 1000 supported
signaling servers. It runs on its own Aura® supported platform. The calculation
provided here is to help determine the number of Aura® Session Manager servers
required. These are separate from the CS 1000 Signaling Server calculations and
are not part of the total Signaling Server calculation.
Once you determine if an Aura® Session Manager is required, the following table
provides the information required for the Session Manager calculations.
Table 62: Session Manager constants and variables
SME is the sum of the value entered for index 1h and index 1d-1. The value entered
for SME cannot exceed 100 000. The value calculated for SMC or BHCC cannot
exceed 3 000 000, as the total number of SM is 10.
SMC is the calculated value of the BHCC for the SM.
SMC0 = VTSIP × CCS × 100 ÷ WAHT
SMCNET = VTSMNET × CCS × 100 ÷ WAHT ÷ 2
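A minimal sketch of the Session Manager sizing, assuming (the text does not state this explicitly) that the 3 000 000 BHCC cap across 10 SMs implies 300 000 BHCC per SM:

```python
import math

SME_LIMIT = 100_000       # max Session Manager endpoints (from the text)
SMC_LIMIT = 3_000_000     # max total BHCC across 10 Session Managers

def session_manager_bhcc(vt_sip, vt_sm_net, ccs_per_trunk, waht):
    """SMC0 + SMCNET per the formulas above; returns total BHCC."""
    smc0 = vt_sip * ccs_per_trunk * 100 / waht
    smc_net = vt_sm_net * ccs_per_trunk * 100 / waht / 2
    return smc0 + smc_net

def session_managers_required(smc, per_sm_bhcc=SMC_LIMIT // 10):
    # per_sm_bhcc = 300 000 is an assumption: the text caps 10 SMs
    # at 3 000 000 BHCC total, suggesting 300 000 BHCC for each SM.
    if smc > SMC_LIMIT:
        raise ValueError("SMC exceeds the 3 000 000 BHCC system limit")
    return math.ceil(smc / per_sm_bhcc)
```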
All Signaling Server calculations are required. Calculate the following algorithm
in sequence
If (HS_Primary = true)
Then {
HS_SS[type_index] = 2; (one NRS, one HS manager)
If (HS_NRSA = yes) then %alternate NRS required
HS_SS[type_index] = HS_SS[type_index] + 1
If (HS_ManA = yes) then %alternate HS manager required
HS_SS[type_index] = HS_SS[type_index] +1
}
Else
HS_SS[type_index] = 0;
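The High Scalability branch above translates directly to Python (names follow the pseudocode):

```python
def hs_signaling_servers(hs_primary, hs_nrsa=False, hs_mana=False):
    """High Scalability server count: one NRS plus one HS manager when
    this is the primary location, plus optional alternates for each."""
    if not hs_primary:
        return 0
    count = 2          # one NRS, one HS manager
    if hs_nrsa:        # alternate NRS required
        count += 1
    if hs_mana:        # alternate HS manager required
        count += 1
    return count
```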
All Signaling Server calculations are required. Calculate the following algorithm
in sequence
}
Else
one Co-res_CS_type server provision is correct
}
Else
If (NRE > CS_SS_NRS_EP[Co-res_CS_type])
OR (NRD > CS_SS_NRS_RE[Co-res_CS_type]) then
{
If TCALL > CS_SS_CallRate[Co-res_CS_type] then
{
error ("The call rate calculated exceeds the limit this server can
handle.");
Exit this section and select a new system type;
}
Else % stand-alone NRS required, one of the Co-res NRS limits
exceeded
SST[type_index] = SST[type_index] + NRA (=2 if true, else 1);
} % End of Co-res NRS bounds checking
% Provision additional SS for SIP Line if required
If SIP Line users > 0 and SIP Line dedicated= YES then
if SIPLineRedundant = true then
SST[type_index] = SST[type_index] + 2
Else
SST[type_index] = SST[type_index] + 1
} % End of Co-res CS + SS checks
Else
All Signaling Server calculations are required. Calculate the following algorithm
in sequence
Since the capacity for handling H.323 calls is different from that for SIP calls, you must
determine the SIP call loading factor on the NRS. There are two SIP modes, SIP_Proxy
and SIP_Redirect. To calculate the SIP loading factor, see Table 63: SIP mode
factors on page 289.
Table 63: SIP mode factors
If you require a dedicated NRS Signaling Server, round up SSNR for the following
calculations.
NRC could be a hardware, CPU, or memory limit; it includes NRC0 (calls resulting
from the main switch calculation) and network VTNET for the Network Routing Service:
NRC = NRC0 + NRCNET
Both VT323 and VTSIP must be converted to H.323 and SIP calls from your input:
H.323 calls = VT323 × CCS per VT × 100 ÷ WAHT
SIP calls = VTSIP × CCS per VT × 100 ÷ WAHT + (ELCVT × CCS per telephone ×
100 ÷ WAHT)
Determine the SIP loading factor on the NRS:
NRC0 = H.323 calls + (Factor × SIP calls)
NRCNET = VTNET × CCS for each VT × 100 ÷ WAHT ÷ 2
NRC = NRC0 + NRCNET
Formula (c) in SSNR equation = NRC ÷ NRCHL[nrs_type_index]
The previous equation represents the load on the Signaling Server to handle NRS
calls. Compare it with (a) and (b) to determine the highest of all potential uses.
6. Terminal Proxy Server calculation (SSTR)
Calculate the TPS call rate: CUIP = (C2IP × 2) + C1IP + C2SIPUIP + CSTIV + CSTID + CSTVI
The Call Server CPU calculations define the variables.
If the user wants Terminal Proxy Server(s) in a dedicated Signaling Server, round
up SSTR before proceeding with further calculations:
The number of SIP virtual trunk calls is calculated from the SVTCCS:
SCVT = (SVTCCS × 100) ÷ WAHT
The number of virtual trunks required for the Extend Local Calls (ELC) feature:
ELCVT = ELC ISM value (the requested number of ELC users)
The number of ELC calls that impact SIP virtual trunks:
ELCCVT = CSS × ELC_P
If the user wants SIP CTI/TR87 in a dedicated Signaling Server, then round up
SSTR87[type_index] before proceeding with further calculations.
c If (SIPL + IPL > IPL_db) or (SIPL + SVT + HVT > SIPL_VtrkSL[type_index]) % non-dedicated limit
then 1, else 0
}
Round up SSSLGR before performing further calculations
If (SSSLGR[type_index] >= 1) or dedicated SIP Line Gateway then
{
SSSLGW[type_index] = ROUNDUP(SSSLGR[type_index]) × SIPLA (=2 if true, else 1); % SIPLA = true if 1 + 1 redundant SIPL is required
SST[type_index] = SST[type_index] + SSSLGW[type_index];
}
Else Co-res
If SSSLGR[type_index] > 0 then
{
SLG_Co-res = true;
NumOfCo-res = NumOfCo-res + 1;
}
}
SSMSCR[type_index] = ROUNDUP (SSMSCR[type_index]);
If (SSMSCR[type_index] >= 1) or dedicated MSC then
{
If MSCA[type_index] = true then % redundant MSC requested
SSMSCW[type_index] = SSMSCR[type_index] + 1
SST[type_index] = SST[type_index] + SSMSCW[type_index];
}
Else Co-res
If SSMSCR[type_index] > 0 then
{
MSC_Co-res = true;
NumOfCo-res = NumOfCo-res + 1;
}
Case of NumOfCo-res:
Null; % No SS applications Co-res
{
SST[type_index] = SST[type_index] + 1; % One SS application - assign one SS
% If redundant needed, add one SS and reset Co-res flag
If (NRS_Co-res = true and NRA = true) or (TPS_Co-res = true and TPSA = true)
or (H323_Co-res = true and GWA = true) or (SIP_Co-res = true and GSA = true)
or (TR87_Co-res = true and TR87A = true) or (MSC_Co-res = true and MSCA = true)
TMSCC = 0
SST[type_index] = SST[type_index] + MSCA (=2 if true, else 1);
}
}
Repeat DO until CallRate <= Co-resCR[type_index]
If NumOfCo-res = 0, then exit
Else If NumOfCo-res = 1, then do % One SS application - assign one SS
Else { SST[type_index] = SST[type_index] + 1;
If (NRS_Co-res = true and NRA = true) or (TPS_Co-res = true and
TPSA = true) or (H323_Co-res = true and GWA = true) or (SIP_Co-res
= true and GSA = true) or (TR87_Co-res = true and TR87A = true) or
(SIPL_Co-res = true and SIPLA = true) or (MSC_Co-res = true and
MSCA = true)
Then SST[type_index] = SST[type_index] + 1;
}
}
}
}
If ucm_pss_required = true
Then {
SST[ucm_pss_type] =
SST[ucm_pss_type] + 1;
If ucm_backup_required = true then
SST[ucm_pss_type] = SST[ucm_pss_type] + 1;
}
Determine the number of calls per hour for SIP Trunk Bridge:
STBC[type_index] = (STB[type_index] × TRKCCS × 100) ÷ WAHT
The number of servers required increases with media anchoring usage. Media anchoring lowers the supported call rate and number of sessions for SIP Trunk Bridge.
The MAS application requires additional dedicated Signaling Servers. The MAS
application is supported on CP DC and COTS2 Signaling Servers only.
MAS servers are deployed in clusters, with a maximum cluster size of 7 servers.
The MAS license server does not know the capacity limitations of the MAS servers
in the cluster. Therefore, all MAS servers within the cluster MUST be of the same
type, or at a minimum, have the same capacity rating.
To ensure the MAS servers can handle the load of a cluster when one of the servers
fails, a redundant server may be selected. Selecting redundancy for MAS puts an
additional server in each cluster (N+1 servers per cluster).
Since the cluster has a limit of 7 servers, adding a redundant server lowers the
cluster size by 1 server, which could potentially increase the number of MAS clusters
required. An example is shown in the following table:
No Redundancy: Servers in Cluster: 1–7
Redundancy: Servers in Cluster: 1–6; Redundant Server: 1; Total Servers in Cluster: 2–7
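The cluster arithmetic in the table can be sketched as follows. The grouping formula is an interpretation of the text (at most 7 servers per cluster, one slot consumed by the N+1 redundant server), not an algorithm the document states explicitly:

```python
import math

MAX_CLUSTER = 7  # maximum MAS cluster size (from the text)

def mas_clusters(servers_needed, redundant=False, additional=0):
    """Group MAS servers into clusters of at most 7. With redundancy,
    each cluster carries one extra (N+1) server, so only 6 slots in
    each cluster carry load. 'additional' models the MAS_Additional
    input mentioned below."""
    servers_needed += additional
    usable = MAX_CLUSTER - 1 if redundant else MAX_CLUSTER
    clusters = math.ceil(servers_needed / usable)
    total = servers_needed + (clusters if redundant else 0)
    return clusters, total
```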
Additional MAS servers requested by the customer must also be considered in the
cluster calculations.
The Systems Options page contains an input field for Additional MAS Servers,
referred to as MAS_Additional.
The codec used on the MAS server has a large impact on the number of conference
sessions that can be supported. Therefore, the MAS codec selection must be taken
into consideration when determining the number of sessions that can be supported
on a MAS server.
The following table shows the MAS Codec Ratios (MASCRatio) used for each MAS
server type:
The calculations used to determine the number of Signaling Servers required for
MAS depend on the MAS codec selection, the use of Media Security on the MAS
server, and two MSC variables: MSC Sessions (MSC_sessions) and MSC call rate
(MSCC).
If (MAS_MSEC=off) then
MASSRTP_factor = 1;
Else
MASSRTP_factor = MASSRTP[codec];
SSMASR[type_index] = larger of:
{
a (MSC_sessions + XMSC_sessions) ÷ ((MASSSL[type_index] × MASSRTP_factor) ÷ MASCRatio[MAScodec]) % number of sessions (software limit)
b MSCC ÷ MASCHL[type_index] % calls per hour (hardware limit)
}
SSMASR[type_index] = ROUNDUP(SSMASR[type_index]) + MAS_Additional
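The "larger of" rule can be sketched in Python. Parameter names mirror the constants above; any sample values used with it are illustrative only:

```python
import math

def ss_mas_required(msc_sessions, xmsc_sessions, mscc,
                    masssl, maschl, mas_cratio,
                    mas_msec=False, massrtp=1.0, mas_additional=0):
    """Signaling Servers for MAS: the larger of the session (software)
    bound and the calls-per-hour (hardware) bound, rounded up, plus
    any customer-requested additional MAS servers."""
    srtp_factor = massrtp if mas_msec else 1.0   # MASSRTP_factor rule above
    by_sessions = (msc_sessions + xmsc_sessions) / (
        (masssl * srtp_factor) / mas_cratio)
    by_call_rate = mscc / maschl
    return math.ceil(max(by_sessions, by_call_rate)) + mas_additional
```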
If you are calculating for the primary location of a High Scalability (HS)
deployment, then deploy the HS servers at this location:
SST[type_index] = SST[type_index] + HS_SS[type_index]
See Signaling Server calculation on page 317 for a numerical example illustrating the
algorithm.
Virtual Trunks
When the VT number input by the user differs significantly from the calculated VT number
(more than 20% difference), the Enterprise Configurator tool uses the calculated number and
reruns the algorithm to obtain a new VT number. If the resulting VT number is not stable (in
other words, after each rerun, a new calculated VT number is obtained), the program repeats
the calculation cycle up to six times. This recalculation looping is built into the Enterprise
Configurator and occurs automatically, with no user action required. At the end of the
recalculation cycle, the user has the choice of using the original input VT number or the final
calculated VT number in the configuration.
The user inputs for the number of H.323 Virtual Trunks and SIP Virtual Trunks are a function
of other parameters in the system. For example, the number of Virtual Trunks required is
affected by the total number of trunks in the system and by the intraoffice/incoming ratios: Virtual
Trunks and TDM trunks cannot carry a high volume of trunk traffic if the system is characterized
as carrying mostly intraoffice calls. If pre-engineering has not provided consistent ratios and
CCS, the VT input numbers tend to diverge from the calculated results based on input trunking
ratios.
Use the following formula to calculate the VT CCS to compare against user input, in order to
determine the size of the deviation. Note that for this consistency check, H.323 VT CCS and
SIP VT CCS are separated out from the VT total by using the user input ratio of H.323 to
SIP.
HVT = CVT × vH × WAHT ÷ 100
SVT = CVT × vS × WAHT ÷ 100
The respective differences between HVT and HVTCCS, and between SVT and SVTCCS, are the
deviations between input data and calculated values.
After the automatic Enterprise Configurator recalculations, if the discrepancy between the input
VT number and the final calculated number is still significant (more than 20%), follow the
recommendations for reducing line and trunk traffic imbalance (see Line and trunk traffic on
page 307). Adjusting the number of Virtual Trunks and trunk CCS alone, without changing the
intraoffice ratio or its derivatives, may never get to a balanced configuration.
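The 20% consistency check described in this section can be sketched as:

```python
def vt_deviation(cvt, vh, vs, waht, hvtccs_input, svtccs_input):
    """Convert calculated VT calls back to CCS for each protocol
    (HVT and SVT formulas above) and measure the relative deviation
    from the user-input CCS values."""
    hvt = cvt * vh * waht / 100.0
    svt = cvt * vs * waht / 100.0
    dev_h = abs(hvt - hvtccs_input) / hvtccs_input
    dev_s = abs(svt - svtccs_input) / svtccs_input
    return dev_h, dev_s

def needs_recalculation(dev_h, dev_s, threshold=0.20):
    # Enterprise Configurator reruns the algorithm (up to six times)
    # when either deviation exceeds 20%.
    return dev_h > threshold or dev_s > threshold
```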
Trunk traffic too high:
• Reduce CCS per trunk or number of trunks.
• Reduce the intraoffice ratio.
• Increase the tandem ratio (if justified; changing the incoming/outgoing ratio has no impact on line/trunk traffic imbalance).
Assumptions
The example uses the following values for key parameters.
These parameter values are typical for systems in operation, but the values for the ratios are
not the defaults.
• Intraoffice ratio (RI): 0.25
• Tandem ratio (RT): 0.03
• Incoming ratio (I): 0.60
• Outgoing ratio (O): 0.12
In fraction of calls, the above ratios add up to 1.
• AHTSS = 60 [average hold time (AHT) for telephone to telephone (SS)]
• AHTTS = 150 [AHT for trunk to telephone (TS)]
• AHTST = 150 [AHT for telephone to trunk (ST)]
• AHTTT = 180 [AHT for trunk to trunk (TT)]
Given configuration
A Communication Server 1000E CP PIV system with the following configuration data:
• 1200 digital and analog telephones at 5 CCS/telephone
- including 170 ACD agents with digital telephones at 33 CCS/agent
• 1600 IP telephones at 5 CCS/IP telephone
- including 50 IP ACD agent telephones at 33 CCS/IP agent telephone
• 200 MDECT mobile phones at 5 CCS/telephone
• 1200 SIP Line telephones
• 820 trunks
- 450 Virtual Trunks (300 H.323 and 150 SIP) at 28 CCS/trunk (The numbers for H.323
and SIP Virtual Trunks are input from user response to a GUI request in the EC.)
- 370 TDM (PRI) trunks at 28 CCS/trunk
• Network Virtual Trunks served by this Gatekeeper: 800 (This is another input from the
user interface.)
• CallPilot ports at 26 CCS/CP port
- 36 local CallPilot ports
- 24 network CallPilot ports (input from user interface)
• Other traffic-insensitive, preselected application ports that require DSP channels and real
time feature overhead. The DSP required for IP Phones to access these special
applications is proportional to the percentage of IP calls in the system.
- Agent greeting ports: 4
- Integrated Conference Bridge ports: 16 (HT = 1800)
- Integrated Recorded Announcer ports: 12 (HT = 90)
- Integrated Call Assistant ports: 8 (HT = 180)
- Hospitality Integrated Voice Service ports: 8 (HT = 90)
- Integrated Call Director ports: 12 (HT = 60)
- BRI users: 8 (HT = 180)
- MDECT mobile telephones: 200 (HT = WAHT)
• Features with processing overhead but no hardware ports:
- CPND percentage: CPND calculation assumes all calls involving a telephone use
CPND
Calculations
The calculations in this example were performed by spreadsheet. Some rounding off may have
occurred.
• The percentage of ACD agent to total telephones = (50 + 170) ÷ (1200 + 1600 + 1200 +
200) × 100 = 5.238 % This ratio is less than the 15% threshold, so the site is not considered
a call center. All ACD traffic will be included in call distribution calculations. For more
information, see DSP ports for general traffic on page 266. The following calculations use
the default nonblocking telephone CCS rate of 18 CCS.
• LTDM TDM telephones CCS = [(1200 – 170) × 5] + (170 × 18) = 8210 CCS
• LIP IP telephones CCS = (1600 – 50) × 5 = 7750 CCS
- LACD TDM ACD agent CCS = 170 × 33 = 5610 CCS
- LACDIP IP ACD agent CCS = 50 × 33 = 1650 CCS
- LDECT DECT telephones CCS = 200 × 5 = 1000 CCS
- LSIPL SIP Line telephones CCS = 1200 × 5 = 6000 CCS
- ACDadj ACD CCS adjustment for TDM agents = 170 × 18 = 3060 CCS
- LCCS Total line CCS = 8210 + 7750 + 5610 + 1650 + 1000 + 6000 − 3060 = 27160
CCS
• TTDM TDM trunk CCS = 370 × 28 = 10360 CCS
- HVTCCS H.323 trunk CCS = 300 × 28 = 8400 CCS
- SVTCCS SIP trunk CCS = 150 × 28 = 4200 CCS
- VTCCS Total Virtual Trunk CCS = 8400 + 4200 = 12600 CCS
- TTCCS Total Trunk CCS = 12600 + 10360 = 22960 CCS
• Fraction of H.323 CCS of total Virtual Trunk CCS (VH) = 8400 ÷ 12600 = 0.67
• Fraction of SIP CCS of total Virtual Trunk CCS (VS) = 4200 ÷ 12600 = 0.33
• Fraction of Virtual Trunk CCS of total trunk CCS (V) = 12600 ÷ 22960 = 0.549
• Fraction of UNIStim IP CCS (PU) = (7750 + 1650) ÷ 27160 = 0.346
• Fraction of SIP CCS (PS) = 6000 ÷ 27160 = 0.221
• Fraction of IP CCS (PIP) = 0.346 + 0.221 = 0.567
• Weighted average holding time (WAHT) = (60 × 0.25) + (150 × 0.60) + (150 × 0.12) +
(180 × 0.03) = 128 seconds
• CP1 local CallPilot CCS = 36 × 26 = 936
• CP2 network CallPilot CCS = 24 × 26 = 624
• Total CCS (TCCS) = LCCS + TTCCS = 27160 + 22960 = 50120 CCS
• Total calls (TCALL) = 0.5 × TCCS × 100 ÷ WAHT = 0.5 × 50120 × 100 ÷ 128 = 19578
• The system calls comprise four different types of traffic: Intraoffice calls
(telephone-to-telephone) (CSS); Tandem calls (trunk-to-trunk) (CTT); Originating/Outgoing
calls (telephone-to-trunk) (CST); Terminating/Incoming calls (trunk-to-telephone) (CTS).
a. Intraoffice calls (CSS) = TCALL × RI = 19578 × 0.25 = 4895 calls
i. Intraoffice UNIStim IP to UNIStim IP calls (C2IP) = CSS × PU × PU =
4895 × 0.346 × 0.346 = 586 (require no DSP, no VT) P_UIPtoUIP
= 586 ÷ 19578 = 0.03
ii. Intraoffice UNIStim IP to TDM calls (C1IP) = CSS × 2 × PU × (1 – PIP)
= 4895 × 2 × 0.346 × (1 – 0.567) = 1467 (require DSP) P_UIPtoL =
1467 ÷ 19578 = 0.07
iii. Intraoffice TDM to TDM calls (CNoIP) = CSS × (1 – PIP) × (1 – PIP) = 4895 × (1
– 0.567) × (1 – 0.567) = 918 (require no DSP, no VT) P_LtoL = 918
÷ 19578 = 0.05
iv. Intraoffice SIP Line to SIP Line calls (C2SIP) = CSS × PS × PS = 4895 ×
0.221 × 0.221 = 239 (require no DSP, no VT) P_SIPtoSIP = 239 ÷
19578 = 0.01
v. Intraoffice SIP Line to UNIStim IP calls (C2SIPUIP) = CSS × 2 × PS × PU =
4895 × 2 × 0.221 × 0.346 = 748 (require no DSP, no VT) P_SIPtoUIP
= 748 ÷ 19578 = 0.04
vi. Intraoffice SIP Line to TDM calls (C1SIP) = CSS × 2 × PS × (1 – PIP)
= 4895 × 2 × 0.221 × (1 – 0.567) = 936 (require DSP, no VT)
P_SIPtoL = 936 ÷ 19578 = 0.05
b. Tandem calls (CTT) = TCALL × RT = 19578 × 0.03 = 587 calls
i. Tandem VT to TDM calls (CT1VT) = 2 × CTT × V × (1 – V) = 2 × 587
× 0.549 × (1 – 0.549) = 291 (require DSP and VT) P_VTtoTr = 291
÷ 19578 = 0.015
ii. Tandem TDM to TDM calls (CT2NoVT) = CTT × (1 – V) × (1 – V) = 587
× (1 – 0.549) × (1 – 0.549) = 120 (require no DSP, no VT) P_TrtoTr
= 120 ÷ 19578 = 0.006
(0.05 × 0.81) + (0.015 × 1.14) + (0.006 × 1.20) + (0.008 × 1.09) + (0.02 × 1.20) + (0.02 ×
1.16) + (0.03 × 2.44) + (0.01 × 1.25) + (0.14 × 1.72) + (0.11 × 0.97) + (0.09 × 1.25) + (0.12
× 1.34) + (0.01 × 2.72) + (0.04 × 1.36) + (0.05 × 1.78) + (0.01 × 1.97) + (0.01 × 2.25) +
(0.07 × 2.17) + (0.06 × 3.57) = 1.552
• Calculate the System EBC SEBC = (TCALL × (1 + PF + Error_term)) = 19578 × (1 + 1.552
+ 0.25) = 54854
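The arithmetic in this example can be verified with a short script. Values follow the text; note that the ACD adjustment is subtracted from the line total, WAHT is rounded to 128 seconds before TCALL is computed, and the call-type splits use PIP = 0.567:

```python
# --- Line CCS (TDM agents are counted once at 18 CCS in LTDM and once
# at 33 CCS in LACD, so the 18 CCS portion is subtracted once) ---
l_tdm   = (1200 - 170) * 5 + 170 * 18      # 8210
l_ip    = (1600 - 50) * 5                  # 7750
l_acd   = 170 * 33                         # 5610
l_acdip = 50 * 33                          # 1650
l_dect  = 200 * 5                          # 1000
l_sipl  = 1200 * 5                         # 6000
acd_adj = 170 * 18                         # 3060
lccs = l_tdm + l_ip + l_acd + l_acdip + l_dect + l_sipl - acd_adj  # 27160

# --- Trunk CCS ---
ttccs = 370 * 28 + 300 * 28 + 150 * 28     # 10360 + 8400 + 4200 = 22960

# --- Traffic fractions and weighted average holding time ---
pu  = (7750 + 1650) / lccs                 # 0.346
ps  = 6000 / lccs                          # 0.221
pip = pu + ps                              # 0.567
v   = 12600 / ttccs                        # 0.549
waht = round(60*0.25 + 150*0.60 + 150*0.12 + 180*0.03)  # 128.4 -> 128

# --- Total calls ---
tccs  = lccs + ttccs                       # 50120
tcall = 0.5 * tccs * 100 / waht            # 19578

# --- Call-type distribution (CSS and CTT as rounded in the text) ---
css, ctt = 4895, 587
c2ip     = css * pu * pu                   # ~586  UNIStim to UNIStim
c1ip     = css * 2 * pu * (1 - pip)        # ~1467 UNIStim to TDM (DSP)
cnoip    = css * (1 - pip) ** 2            # ~918  TDM to TDM
c2sip    = css * ps * ps                   # ~239  SIP to SIP
c2sipuip = css * 2 * ps * pu               # ~748  SIP to UNIStim
c1sip    = css * 2 * ps * (1 - pip)        # ~936  SIP to TDM (DSP)
ct1vt    = 2 * ctt * v * (1 - v)           # ~291  tandem VT to TDM
ct2novt  = ctt * (1 - v) ** 2              # ~120  tandem TDM to TDM
```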
Endpoints served by this NRS: 100
NRS entries (CDP + UDP + …): 1000
Virtual Trunks from other endpoints served by this NRS: 800
NRS alternate (NRA): Yes
TPSA (TPS N+1 redundancy required): Yes
H.323 Gateway alternate (GWA): Yes
SIP Gateway alternate (GSA): Yes
PD/CL/RL feature available to IP Phones: Yes
Sharing Database with other traffic: Yes
SIP Proxy or SIP Redirect: Proxy
SIP Proxy TCP: Yes
SIP Line Alternate (SIPLA): Yes
NRC could be a hardware, CPU, or memory limit; it includes local NRC0 (calls from
the main switch calculation) and network VTNET for the NRS: NRC = NRC0 + NRCNET
Both VT323 and VTSIP from user input must be converted to H.323 and SIP calls.
H.323 calls = VT323 × CCS × 100 ÷ WAHT = 300 × 28 × 100 ÷ 128 = 6562
SIP calls = VTSIP × CCS × 100 ÷ WAHT = 150 × 28 × 100 ÷ 128 = 3281
Determine the SIP loading factor on the NRS:
Factor = if SIP_mode = Proxy, then 4
NRC0 = H.323 calls + (Factor × SIP calls) = 6562 + (4 × 3281) = 19686
NRCNET = (VTNET × CCS for each VT × 100 ÷ WAHT ÷ 2) = 800 × 28 × 100 ÷ 128
÷ 2 = 8750
NRC = NRC0 + NRCNET = 19686 + 8750 = 28436
Formula (c) in SSNR equation = NRC ÷ NRCHL[nrs_type_index] = 28436 ÷ 200 000
= 0.15
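The NRS loading numbers above reduce to a few lines of Python (truncation to whole calls matches the figures in the text):

```python
# Inputs from the example: 300 H.323 VTs and 150 SIP VTs at 28 CCS,
# WAHT = 128 s, 800 network VTs, SIP Proxy mode (Factor = 4).
waht, factor = 128, 4

h323_calls = int(300 * 28 * 100 / waht)     # 6562
sip_calls  = int(150 * 28 * 100 / waht)     # 3281

nrc0    = h323_calls + factor * sip_calls   # 19686
nrc_net = int(800 * 28 * 100 / waht / 2)    # 8750
nrc     = nrc0 + nrc_net                    # 28436

load = nrc / 200_000                        # formula (c), NRCHL limit; ~0.14
```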
SIPL = SIPN + SIP3, where SIPL is the total number of SIP Phones. SIPL = 1200
Calculate total number of SIP Line calls
CSIP = (2 × C2SIP) + C1SIP + C2SIPUIP + CSTSV + CSTSD + CSTVS + CSTDS
CSIP = 5277
SSSLGR[type_index] = larger of:
{
Contents
This chapter contains the following topics:
Introduction on page 323
Access Restrictions on page 324
Converged Desktop on page 325
Exchange 2007 Unified Messaging SIP trunk provisioning on page 332
Microsoft Office Communications Server users on page 333
Mobile Extension engineering on page 338
D-channel on page 340
D-channel handler engineering procedure on page 351
Avaya CallPilot engineering on page 356
Call Center on page 357
Symposium Call Center on page 359
ELAN engineering on page 360
HSP LAN Engineering on page 365
CLASS network engineering rules on page 368
Configuration parameters on page 370
Media Application Server (MAS) on page 371
Introduction
Certain applications have significant capacity impact and require engineering in order to
operate properly from a capacity perspective. This section provides suggestions for
engineering these applications.
For descriptions of the features and their functionality, refer to the Avaya feature
documentation.
For more information about voice networks, see Avaya Converging the Data Network with VoIP
Fundamentals, NN43001-260.
Access Restrictions
The Access Restrictions feature, also known as the port blocking facility, is a VxWorks-based
firewall designed to prevent port-based attacks on the CP PIV, MGC, and MC32S running
VxWorks software. Access Restrictions use port blocking rules to accept or reject
packets to open ports. The port blocking rules are preconfigured during installation and
are distributed from the Call Server to the MGC and MC32S. You can customize the port blocking
rules after installation with Overlay 117 or EM.
Adding port blocking rules increases CPU utilization. Avaya recommends that you maintain
minimal port blocking rules to minimize the CPU performance impact. Access Restrictions
provide a minimal but essential firewall to secure the VxWorks platforms. If you require a full
firewall, Avaya recommends the use of a dedicated third-party hardware firewall.
CPU utilization depends on the type and number of rules configured. Table 68: CP PIV packet
throughput drop at 10 percent CPU utilization on page 324 provides an example of the CP
PIV performance drop with increasing rule depth.
Table 68: CP PIV packet throughput drop at 10 percent CPU utilization
Rule Depth | Reject at end (packets/second) | Accept at rule (packets/second) | Throughput drop against no firewall | Throughput drop against accept-all default rule
(All measurements use 60-byte packets.)
No firewall 66 500 66 500 0 n/a
0 64 000 57 000 14.3% 0
1 57 000 51 000 23.3% 10.5%
4 52 000 48 000 27.8% 15.8%
8 43 000 41 000 38.3% 28.1%
16 35 000 33 000 50.4% 42.1%
32 24 750 25 000 62.4% 56.1%
64 13 250 13 500 79.7% 76.3%
128 7 500 7 500 88.7% 86.8%
Converged Desktop
The Converged Desktop is a TDM or IP Phone configured to access Avaya Multimedia
Communication Server 5100 (Avaya MCS 5100) multimedia applications through a Session
Initiation Protocol (SIP) Virtual Trunk.
The columns under "% voice traffic carried by SIP trunk" indicate the fraction of calls that use
a SIP trunk for conversation. A percentage of zero means that the SIP port is used only for
signaling during the ringing period and is dropped from the connection once the call is
answered.
To use the table, users must know (1) the number of Converged Desktop users and (2) the
percentage of Converged Desktop users using SIP trunks to carry voice traffic. The readings
below the percentage column are the number of SIP trunks and PCA ports required for a given
number of Converged Desktop users.
The number of users shown in Table 69: SIP port and PCA requirements for Converged
Desktop (with P.01 GoS) on page 327 increments by increasingly large amounts as the
number of users increases. If you are calculating requirements for a number of users not shown
in the table, use the following formula:
Inputs
• Total number of Converged Desktop users required (MCS_CD_Users)
• Percentage of calls that are answered on a soft client configured as a Converged Desktop
(P_CD_SIP)
• Total Number of nonconverged desktop users required (MCS_Non_CD_Users)
• Number of Meet-Me Audio Conference ports configured on the MCS (MeetMe_Ports)
Calculations
• Traffic for CD = (MCS_CD_Users) x (CCS per user) x 10%
• Traffic for SIP ports = (MCS_Non_CD_Users) x (CCS per user) + (MCS_CD_Users x
P_CD_SIP) x (CCS per user)
• Total SIP Traffic = (Traffic for CD) x (1 - P_CD_SIP) + (Traffic for SIP ports)
• Number of SIP ports = Poisson (Total SIP Traffic) at P.01 + MeetMe_Ports
• Number of MCS PCA ports = Poisson (Traffic for CD) at P.01
• Number of ACD agents = Number of MCS PCA ports
If detailed information about the network is not available, use the following formula to estimate
the percentage of Converged Desktop users using SIP trunks to carry voice traffic, rounding
up the result:
(Number of SIP trunks) ÷ [(Number of SIP trunks) + (Number of H.323 trunks)]
Assumptions
1. The ringing period is 10% of the conversation time. (One ring is a 6-second cycle;
the ringing period is usually 2–3 rings; average conversation time is 120–180
seconds.)
2. PCA holding time is equal to the length of the ringing period for each call. This is a
conservative assumption, because it implies that every call needs a PCA.
Example
Two hundred Converged Desktop users with 0% SIP trunk conversation require 8 SIP access
ports and 8 PCAs. If 20% use SIP for conversation, the requirements are 16 SIP access ports
and 8 PCAs.
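The example can be reproduced without the printed tables by computing the Poisson blocking directly. The port-search function below is a standard Poisson P.01 computation rather than a formula the document spells out, and 5 CCS per user is assumed for the example:

```python
import math

def poisson_ports(traffic_ccs, gos=0.01):
    """Smallest number of ports N such that the Poisson blocking
    P(X >= N) <= gos for the offered traffic (CCS / 36 = Erlangs)."""
    a = traffic_ccs / 36.0
    n, term, cdf = 0, math.exp(-a), 0.0
    while True:
        cdf += term
        n += 1
        if 1.0 - cdf <= gos:
            return n
        term *= a / n

def converged_desktop_ports(cd_users, p_cd_sip, non_cd_users=0,
                            meetme_ports=0, ccs_per_user=5):
    """Converged Desktop sizing per the Calculations list above."""
    traffic_cd = cd_users * ccs_per_user * 0.10     # ringing-period traffic
    traffic_sip = (non_cd_users * ccs_per_user
                   + cd_users * p_cd_sip * ccs_per_user)
    total_sip = traffic_cd * (1 - p_cd_sip) + traffic_sip
    sip_ports = poisson_ports(total_sip) + meetme_ports
    pca_ports = poisson_ports(traffic_cd)
    return sip_ports, pca_ports
```

With 200 users and 0% SIP conversation this returns 8 SIP ports and 8 PCAs; with 20% it returns 16 SIP ports, matching the example.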
Table 69: SIP port and PCA requirements for Converged Desktop (with P.01 GoS)
Trunking
To handle the traffic between the Communication Server 1000 and the Office Communications
Server 2007, you must configure sufficient SIP trunks and Universal Extensions (UEXT). The
number of additional SIP trunks needed is determined by:
The number of Office Communicator users using the SIP Gateway feature.
multiplied by:
The percentage of users expected to be on the phone at any given time.
For example, 100 Office Communicator SIP Gateway users × 10% on the phone at any given
time = 10 additional SIP trunks.
The percentage of users on a phone is decided by standard practice and the environment
involved (Call Center, Normal Office, and so on).
Telephony services (TLSV) has replaced Personal Call Assistant (PCA). TLSV extends the call
over a SIP trunk to the OCS client from the Communication Server 1000 system.
Input Description
Calculations
Use the following formulas to calculate traffic requirements:
Traffic for UEXTs = (UEXT_MO_Users) × (CCS per user) × (1 - P_UEXT_SIP) × 10%
Traffic for SIP ports = (TN_MO_Users - UEXT_MO_Users) × (CCS per user) +
(UEXT_MO_Users × P_UEXT_SIP) × (CCS per user)
Total SIP Traffic = (Traffic for UEXTs) + (Traffic for SIP ports)
Number of MO SIP ports = Poisson (Total SIP Traffic) at P.01 Grade of Service
MO = Microsoft® Office Communicator
Table 71: Traffic figures on page 334 shows traffic (in Erlangs, assuming 1 CCS per user) and the
number of ports calculated based on the Poisson formula at P.01 Grade of Service.
Table 71: Traffic figures
Users  Traffic (Erlangs)  Ports
60 1.67 6
65 1.81 6
70 1.94 7
75 2.08 7
80 2.22 7
85 2.36 7
90 2.5 8
95 2.64 8
100 2.78 8
125 3.47 9
150 4.14 10
175 4.86 12
200 5.56 13
225 6.25 14
250 6.94 15
275 7.64 16
300 8.33 17
325 9.03 18
350 9.72 19
375 10.42 19
400 11.11 20
425 11.81 21
450 12.5 22
475 13.19 23
500 13.89 24
550 15.28 26
600 16.67 28
650 18.06 29
700 19.44 31
750 20.83 33
800 22.22 35
850 23.61 36
900 25 38
950 26.39 40
1000 27.78 42
1500 41.67 58
2000 55.56 74
2500 69.44 90
3000 83.33 106
3500 97.22 121
4000 111.11 137
4500 125 152
5000 138.89 168
6000 166.67 198
7000 194.44 228
8000 222.22 258
9000 250 288
10000 277.78 318
20000 555.56 611
30000 833.33 908
40000 1111.11 1205
50000 1388.89 1502
60000 1666.67 1799
70000 1944.44 2096
Port use
The Communication Server 1000 uses the following ports for TCP and TLS:
• 5060: TCP
• 5061: TLS
The dynamic port range Office Communicator uses for SIP and RTP is 1024 - 65535. You can
restrict the port range with group policy settings. Port ranges must not overlap. For more
information, see the help and support page on the Microsoft Web site at http://
www.microsoft.com.
SIP CTI/TR87
When planning for capacity with SIP CTI services, observe the following fundamental
restriction:
For a single call server that supports multiple nodes, each with SIP CTI services enabled,
multiple SIP CTI/TR87 sessions can be established for a given DN through the same node,
but not through different nodes.
To illustrate this restriction, consider the following high-level example:
Client A sends a TR/87 SIP INVITE to Node 1 to monitor DN 1000. The TR/87 association is
established. Client B then sends a TR/87 SIP INVITE to Node 1 (the same node) to monitor
DN 1000. Both sessions are established successfully. As a result of this sequence, two TR/87
sessions exist for DN 1000 through Node 1.
However, if Client B attempts to send a TR/87 SIP INVITE to Node 2 (which has an AML link to
the same call server as Node 1), the attempt to establish the TR/87 session fails because the
DN is already in use by Client A's session through Node 1.
To solve this issue when planning for capacity, SIP routing must ensure that all TR/87 sessions
for a given DN always terminate on the same node when a single Call Server has multiple
nodes. (See Figure 55: Capacity example on page 338.)
This situation can arise in cases where there is an expectation that a single user has multiple
clients logged on simultaneously, such as a client at home, a client in the office, and a mobile
client all with TR/87 capability.
Impact on Signaling Server
The maximum number of SIP CTI/TR87 users on a single Signaling Server is 5000. One
Signaling Server can support up to 1800 SIP trunks; therefore, you require two Mediation
servers for each Signaling Server to correctly deploy OCS 2007. To increase the system
capacity, associate a pool of Mediation servers with each Call Server. The Multimedia
Convergence Manager (MCM) routes inbound calls from the Signaling Server to the
appropriate Mediation server within the Mediation server pool. The CP PIV and CP PM Call
Server can support up to 13 000 users.
For more information about Converged Office features and engineering, see Avaya Converged
Office Fundamentals - Microsoft Office Communications Server 2007, NN43001-121.
Mobile Extension
You can configure a mobile user with a Mobile Extension (MOBX), which provides a logical
connection to the user's mobile phone. Each mobile user requires a configured MOBX.
There is a limit of 4000 Mobile Extensions per customer.
D-channel
D-channel handling interfaces are based on the Multi-purpose Serial Data Link (MSDL) used
in Large Systems.
CS 1000E usage of D-channels for digital trunking is the same as the CS 1000M, therefore
this section applies to the engineering of D-channels for digital trunking on the CS 1000E.
Engineering considerations
The engineering guidelines assume normal traffic consisting of valid call processing and
administrative messages. Engineering rules cannot prevent a piece of equipment on the
network from malfunctioning and generating spurious messages, which overload the links. At
this point the recovery mechanism becomes essential. The mechanism is graceful, requires
no manual intervention, and provides as much diagnostic information as possible to help
isolate the root cause of the problem.
Outgoing messages originate from the system Core Processor (CP), are passed to the D-
channel handler, and travel across the appropriate link to the destination. In equilibrium, or
over a relatively long period of time (on the order of several minutes), the system cannot
generate messages faster than the D-channel handler can process them, than the link can
transmit them, or than the destination can process them. Otherwise, messages build up at the
bottleneck and are eventually lost. The entity with the lowest capacity is the system bottleneck.
For very short periods of time, however, one or more entities may be able to send messages
at a higher rate than the system bottleneck, because buffers are available to queue the excess
messages. These periods are referred to as bursts. The length of the burst and the size of the
burst that can be supported depend on the sizes of the buffers.
Multiple D-channels
Avaya recommends that you do not split the Primary and Backup D-channels of the same
ISDN Trunk Group across multiple GR/CR CS 1000E Media or PRI Gateways. While this
configuration ensures D-channel redundancy during some Primary D-channel failure
situations, states can arise in which both D-channels register to different Call Servers and
activate simultaneously, creating a conflict in the Central Office. This conflict can affect service
and can lead to a complete ISDN Trunk Group outage in most service provider Central
Offices.
If your service provider supports ISDN Trunk Group hunting, Avaya recommends that you
maintain multiple ISDN Trunk Groups with each ISDN service provider. Configure each Trunk
Group with its own Primary and Backup D-channels on PRI circuits in each Media Gateway.
This solution offers resilient configuration in larger systems distributed geographically and
operates well even if your service provider is unable to support a D-channel for each ISDN PRI
circuit.
Avaya can provide VoIP Session Border Controllers as an alternative to large scale ISDN
Trunking facilities. This solution offers improved flexibility in deployment and resiliency
performance. For more information, see www.avaya.com/support.
When a port is overloaded, an error message is printed. Manual intervention is required to
clear the overloaded port. This feature prevents
a single port from locking up the whole link.
Several software tasks exist on the D-channel handler. Layer 1 message processing operates
at the highest priority. If the link is noisy, Layer 1 processing can starve the Layer 2 and Layer
3 processing tasks, resulting in buffer overflows. If such a problem is suspected, the Protocol
Log (PLOG) can be examined. PLOG reporting is requested in LD 96, as described in Avaya
Software Input Output Administration, NN43001-611.
D-channel
For interfaces including NI-2, Q-SIG, and Euro-ISDN, Layer 3 processing is also performed on
the D-channel handler, thus reducing its capacity. These interfaces are referred to as R20+
interfaces. The steady state message rate allowable for D-channel messages is 29 msg/sec
for R20+ interfaces.
The SL-1 software output queue for DCH messages is the Output Buffer (OTBF), which is
user-configurable between 1 and 127 buffers in LD 17. This is a single system resource shared
by all D-channels.
It is possible to define overload thresholds per D-channel for R20+ interfaces. The
ISDN_MCNT (ISDN message count), defined in LD 17, specifies the number of ISDN Layer 3
call control messages allowed per 5-second interval. Overload control thresholds can be set
per D-channel, ranging from 60 to 350 messages in a 5-second window, with a default of 300
messages. If the overload control threshold is exceeded, DCH421 is output. When the
message rate exceeds the threshold for two consecutive 5-second periods, overload control
is invoked and new incoming call requests are rejected by the Layer 3 protocol control in the
third 5-second time interval. Layer 3 resumes accepting new calls at the end of the third time
interval. This flexibility lets the user regulate the processing required by a specific R20+ DCH
port.
The default value implies no overload control, since 300 messages per 5 seconds (60
messages/second) exceeds the rated capacity of 29 messages/second.
Peak analysis
When there is a link restart, STATUS messages are sent to all trunks with established calls.
Since the SL-1 software task does not implement flow control on this mechanism, a burst of
up to several hundred messages can be sent to the D-channel handler, exceeding flow control
thresholds. When this happens, messages back up on the OTBF buffer, possibly resulting in
buffer overflow, as indicated by DCH1030 messages. OTBF overflow is also possible after an
initialization, since a burst of messages is sent to each D-channel in the system, and the OTBF
is a shared system resource.
The system capacity is significantly higher in this scenario than in the steady state one because
it is sending out D-channel messages that do not involve call processing. D-channel handling
and Link capacities are also higher because, for equilibrium analysis, some capacity is
reserved for peaking.
In the worst case scenario for a single D-channel, if the system sends messages at its peak
rate, OTBF buffer overflow is possible. Also, once the messages are sent, a burst of responses
can be expected in the incoming direction, resulting in additional congestion at the D-channel
handler.
This situation also occurs when a backup D-channel becomes active, since STATUS messages
are exchanged to resynchronize the link.
To reduce the possibility of this problem occurring, limit the number of B-channels supported
by a D-channel, separate D-channels onto several cards so that message bursts are not being
sent to ports on the same D-channel handling card after initialization, and increase the size of
OTBF to the maximum value of 127.
The Status Enquiry Message Throttle is implemented. This feature applies only to system-to-
system interface networks. It lets the user configure the number of Status Enquiry messages
sent within 128 msec on a per-D-channel basis. The SEMT parameter is set in LD 17 with a
range between 1 and 5. The default value is 1. Since this feature provides a flow control
mechanism for Status Enquiry messages, the likelihood of buffer overload is reduced.
B-channel overload
In an Automatic Call Distribution (ACD) environment, in which the number of ACD agents plus
the maximum ACD queue length is considerably less than the number of B-channels available
for incoming calls, a burst of incoming messages can impact the performance of the D-channel
handler as well as the system via the following mechanism: Calls from the CO terminate on a
specified ACD queue. When the destination is busy (the destination telephone is busy or the
ACD queue has reached its maximum limit of calls), the system immediately releases the call.
The CO immediately presents another call to the same destination, which is released
immediately by the PBX, and so on.
The B-channel Overload Control feature addresses this problem by delaying the release of an
ISDN PRI call by a user-configurable time when the call encounters a busy condition. The
delay in releasing the seized B-channel prevents a new call from being presented on the same
B-channel, decreasing the incoming call rate. The timer BCOT is set in LD 16 with a range
between 0 and 4000 msec.
For the baud rates listed in Table 72: ISL link capacities on page 344, the link is the limiting
constraint. The potential peak traffic problems described in Peak analysis on page 343 apply
here as well, to an even greater extent because of the larger rate mismatch between the system
and the system bottleneck. To minimize the risk, set the baud rate as high as possible.
Suppose outgoing facility call requests generate 25 messages in one second, and incoming
facility call request acknowledgments add 25 messages in the same second. The outgoing and
incoming messages total 50.
In this example, the bit rate load on the D-channel equals:
50 messages × 70 octets × 8 bits/octet = 28 000 bits/second
Total bandwidth of a 9600 baud modem is approximately:
9600 baud × 2 = 19 200 bits/second
With a total bandwidth of 19 200 bits/second and a bit rate load of 28 000 bits/second, the D-
channel cannot handle the messaging. D-channel messaging is backlogged.
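As a quick check, this arithmetic can be scripted. The sketch below (the function names are illustrative, not from the product) computes the bit-rate load (50 × 70 × 8 = 28 000 bits/second) and compares it against the approximate modem bandwidth:

```python
def dchannel_load_bps(msgs_per_sec, octets_per_msg=70):
    """Bit-rate load on the D-channel: messages x octets x 8 bits/octet."""
    return msgs_per_sec * octets_per_msg * 8

def modem_bandwidth_bps(baud):
    """Approximate total modem bandwidth, per the baud x 2 rule above."""
    return baud * 2

load = dchannel_load_bps(50)          # 50 msgs/sec -> 28 000 bits/second
capacity = modem_bandwidth_bps(9600)  # 9600 baud  -> 19 200 bits/second
backlogged = load > capacity          # True: messaging backs up on this link
```

Any link whose computed load exceeds its bandwidth will accumulate a backlog of D-channel messages, as described above.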
If the customer is having problems networking calls during high traffic, then the D-channel can
be the cause (especially if the bandwidth is less than 2800 baud). If the D-channel messaging
is delayed to the point where VNS call processing gets delayed, the calls fail to network and
many PRI/VNS/DCH messages are output at both the source and target nodes.
NACD network
A Network ACD (NACD) network is difficult to engineer, since performance depends on specific
network configuration details including connectivity, routing tables, the number of nodes, the
number of queues at each node, and calling patterns.
Diverting calls in NACD is controlled by Routing Tables with timers. Calls diverted by NACD
can be answered by the Source ACD DN or any one of up to 20 Target ACD DNs. Each Target
can have an individual timer defined, from 0 to 1800 seconds. By using ISDN D-channel
messaging to queue Call Requests at remote Target ACD DNs, voice calls are not physically
diverted until an idle agent is reserved for that call at the remote Target node.
Avaya recommends that the Routing Table be designed so that Call Requests cascade to the
network with the timers staggered. The node that is most likely to have available agents should
have the smallest timer value. Otherwise Call Requests flood the network, resulting in
inefficient use of network and real time resources.
An Active Target is available to accept NACD calls, while a Closed Target is closed to incoming
calls. When calls in the Call Request queue exceed the Call Request Queue Size (CRQS)
threshold, the status changes to Closed. A Status Exchange message is sent from the Target
node to the Source ACD DNs indicating the new status. The Target ACD DN remains Closed
to further network call requests until the number of calls in the queue is reduced by the Flow
Control Threshold (FCTH).
Equilibrium analysis
At the source node, for each call queued to the network but not answered, 4 messages are
exchanged. For each call queued to the network and answered, 11 messages are exchanged.
Likewise, at the target node, a network call that is queued but not answered requires 4
messages, while a call that is queued and answered requires 11 messages. Messages average
31 bytes.
From a single D-channel perspective, the most difficult network topology is a star network in
which each agent node is connected to a tandem node. All messages to the other nodes are sent
across the D-channel connected to the tandem node.
As an example, consider a site with 2000 calls arriving locally during the busy hour. The timers
in the Routing Table are staggered so that 1000 are answered locally without being queued to
the network, 500 are answered locally after being queued to an average of two network target
queues, and 500 are answered in the network after being queued to an average of four network
target queues. Meanwhile, 200 Logical Call Requests arrive from the network, of which 100
calls are answered.
For this same network, assume now that the timers in the Routing Table are not staggered;
instead, Logical Call Requests are broadcast to the 4 target nodes in the network as soon as
calls arrive at the local node. Also assume that a total of 4000 calls arrive elsewhere in the
network and are queued at local ACD DNs. Even if the calls are answered exactly where they
were before, the number of messages exchanged increases significantly:
• 1500 calls queued on 4 ACD DNs and not answered × 4 msgs/call/DN = 24 000 msgs
• 500 calls answered × 11 msgs/call = 5500 msgs
• 500 calls queued on 3 ACD DNs and not answered × 4 msgs/call/DN = 6000 msgs
• 3900 network calls queued on local DN and not answered × 4 msgs/call = 15 600 msgs
• 100 network calls answered × 11 msgs/call = 1100 msgs
• Total 52 200 msgs/hr
• (52 200 msgs/hr) ÷ (3600 secs/hr) = 14.5 msgs/sec
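The broadcast-scenario totals above can be reproduced with a short script (a sketch; the 4-message and 11-message weights come from the equilibrium analysis above):

```python
MSGS_NOT_ANSWERED = 4    # messages per call queued (per target DN) but not answered
MSGS_ANSWERED = 11       # messages per call queued and answered

total_msgs = (
    1500 * 4 * MSGS_NOT_ANSWERED    # 1500 calls queued on 4 ACD DNs, not answered
    + 500 * MSGS_ANSWERED           # 500 calls answered
    + 500 * 3 * MSGS_NOT_ANSWERED   # 500 calls queued on 3 more ACD DNs, not answered
    + 3900 * MSGS_NOT_ANSWERED      # 3900 network calls queued locally, not answered
    + 100 * MSGS_ANSWERED           # 100 network calls answered
)
msgs_per_sec = total_msgs / 3600    # 52 200 msgs/hr -> 14.5 msgs/sec
```

Comparing this 14.5 msgs/sec against the staggered-timer scenario shows why flooding the network with broadcast Logical Call Requests wastes real time resources.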
Peak analysis
When the CRQS threshold is reached, the target queue broadcasts messages to the source
ACD DNs informing them that it no longer accepts calls. The size of this outgoing burst of
messages depends on the number of source ACD DNs in the network.
Once the FCTH threshold is reached, another Status Exchange message is sent. At that point,
Logical Call Request messages are sent by the Source ACD DNs. While the target queue was
closed, many calls may have queued at source ACD DNs, resulting in a burst of Logical
Call Request messages once the DN becomes available.
If CRQS values are set high, many messages are exchanged, with the network emulating a
single virtual queue. If the CRQS values are lowered, fewer Call Requests are sent across the
network. However, average source delays can be increased. If FCTH levels are set too low,
target nodes can bounce between Active and Closed states, resulting in network congestion
and excessive real time utilization. However, if FCTH levels are set too high, a target node can
be inundated with Logical Call Request messages once it becomes available. CRQS is
configurable for the range 0 to 255, while FCTH is configurable for the range 10 to 100.
Since the impact of these parameters depends on the configuration, it is not possible to make
general recommendations on how to configure them. They can be determined as part of the
custom network design process. Contact your local Avaya representative for network
engineering services.
Impact of proper engineering of B-channels
In the NACD environment, another problem arises when insufficient B-channels are configured
across the network. When an agent becomes available, an Agent Free Notification message
is sent to the source node. An ISDN Call Setup message is sent from the source node to the
target node. Since no B-channel is available, the agent reservation timer expires, an ISDN
Cancellation Message is sent from the target node to the source node, and an ISDN
Cancellation Acknowledge message is sent from the source node to the target node. At this
point, the agent is still free, so the process repeats until a trunk becomes available or the target
closes. This scenario results in a significant amount of message passing.
Parameter settings
The following are parameters that can be configured in LD 17 for CS 1000 D-channels. Items
are listed with their input ranges, with default values shown in brackets.
1. OTBF 1 - (32) - 127: Size of output buffer for DCH
This parameter configures how many output buffers are allocated for DCH
messages outgoing from the system CP to the D-channel handling card. The more
that are created, the deeper the buffering. For systems with extensive D-channel
messaging, such as call centers using NACD, the parameter can be set at 127. For
other systems with moderate levels of D-channel messaging, OTBF can be set at
the smaller of the following two quantities: Total B-channels – (30 × MSDL cards
with D-channels) or 127.
For example, if a system in a standard office environment is configured with 7 T1
spans, 2 D-channels located on two different NTBK51 daughterboards, and 2 back-
up D-channels, the total number of B-channels is (7 × 24) – 4 = 164. OTBF can be
configured to be the smaller of 164 – (30 × 2) = 104 and 127, which is 104.
2. T200 2 - (3) - 40: Maximum time for acknowledgment of frame (units of 0.5 secs)
This timer defines how long the D-channel handler's Layer 2 LAPD waits before it
retransmits a frame. If it does not receive an acknowledgment from the far end for
a given frame before this timer expires, it retransmits the frame. Setting this value too
low can cause unnecessary retransmissions. The default of 1.5 seconds is long
enough for most land connections. Special connections, over radio, for instance,
can require higher values.
3. T203 2 - (10) - 40: Link Idle Timer (units of seconds)
This timer defines how long the Layer 2 LAPD waits without receiving any frames
from the far end. If no frames are received for a period of T203 seconds, the Layer
2 sends a frame to the other side to check that the far end is still alive. The expiration
of this timer causes the periodic "RR" or Receiver Ready to be sent across an idle
link. Setting this value too low causes unnecessary traffic on an idle link. However,
setting the value too high delays the system from detecting that the far end has
dropped the link and initiating the recovery process. The value should be higher than
T200 and should be coordinated with the far end so that one end does not use a
small value while the other end uses a large value.
4. N200 1 - (3) - 8: Maximum Number of Retransmissions
This value defines how many times the Layer 2 resends a frame if it does not receive
an acknowledgment from the far end. Every time a frame is sent by Layer 2, it
expects to receive an acknowledgment. If it does not receive the acknowledgment,
it retransmits the frame N200 times before attempting link recovery action. The
default (3) is a standard number of retransmissions and is enough for a good link
to accommodate occasional noise on the link. If the link is bad, increasing N200 can
keep the D-channel up longer, but in general this is not recommended.
5. N201 4 - (260): Maximum Number of Octets (bytes) in the Information Field
This value defines the maximum I-frame (Info frame) size. There is no reason to
reduce the number from the default value unless the system is connected to a
system that does not support the 260-byte I-frame.
6. K 1 - (7): Maximum number of outstanding frames
This value defines the window size used by the Layer 2 state machine. The default
value of 7 means that the Layer 2 state machine sends up to 7 frames out to the
link before it stops and requires an acknowledgment for at least one of the frames.
A larger window allows for more efficient transmission. Ideally, the Layer 2 receives
an acknowledgment for a message before reaching the K value so that it can send
a constant stream of messages. The disadvantage of a large K value is that more
frames must be retransmitted if an acknowledgment is not received. The default
value of 7 should be sufficient for all applications. The K value must be the same
for both sides of the link.
7. ISDN_MCNT (ISDN Message Count) 60 - (300) - 350: Layer 3 call control messages
per 5-second interval
It is possible to define overload thresholds for interfaces on a per-D-channel basis.
This flexibility lets the user regulate the D-channel handler processing required
by a specific R20+ DCH port. The default value of 300 messages/5 seconds is
equivalent to allowing a single port to utilize the full real time capacity of the D-
channel handler. To limit the real time utilization of a single R20+ DCH port to (1 ÷
n) of the real time capacity of the D-channel handler, for n > 1, set ISDN_MCNT to
(300 ÷ n) × 1.2, where the 1.2 factor accounts for the fact that peak periods on
different ports are unlikely to occur simultaneously. For example, to limit a single
port to one-third of the processing capacity of the D-channel handler, ISDN_MCNT
is set to (300 ÷ 3) × 1.2 = 120.
If the ISDN_MCNT threshold is exceeded for one 5-second period, error message DCH421 is
printed. If the threshold is exceeded for two consecutive periods, incoming call requests
arriving in the third 5-second interval are rejected by the D-channel handler Layer 3 software.
At the end of the third 5-second interval, Layer 3 resumes accepting incoming call requests.
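The OTBF and ISDN_MCNT sizing rules above can be expressed as small helper functions (a sketch under the stated guidelines; the function names are ours, not from the product):

```python
def otbf_size(total_b_channels, msdl_cards_with_dch):
    """OTBF guideline for moderate D-channel messaging: the smaller of
    (total B-channels - 30 x MSDL cards with D-channels) and 127."""
    return min(total_b_channels - 30 * msdl_cards_with_dch, 127)

def isdn_mcnt(n):
    """ISDN_MCNT to limit one R20+ DCH port to 1/n of the D-channel
    handler's capacity: (300 / n) x 1.2, for n > 1."""
    return round(300 / n * 1.2)

# Worked examples from the text:
b_channels = 7 * 24 - 4          # 7 T1 spans minus 4 D-channels = 164
otbf = otbf_size(b_channels, 2)  # min(164 - 60, 127) = 104
mcnt = isdn_mcnt(3)              # limit one port to one third -> 120
```

For call-center systems with heavy NACD messaging, the text instead recommends setting OTBF directly to the maximum of 127.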
The system is not aware that an XOFF has been received. After the buffer is full, if further output
is received, the oldest data is discarded. Output resumes when an XON is received or 1 minute
has passed since the output was halted by an XOFF. At this point, the contents of the buffer
are emptied first, followed by output from the system. If any data has been discarded, an error
message is sent.
In the input direction, every character received by the Layer 1 Driver is passed to the SDI
Application. The SDI Application echoes any input character unless the system tells it not to.
In Line Editing Mode, the SDI Application buffers a line of up to 80 characters that can be edited
before being sent to the system.
Under certain conditions, control characters can cause messages to bounce between a
modem or printer and the system. To avoid these situations, configure modems in dumb mode
and disable printer flow control.
The system input buffer is the TTY input buffer, which can store 512 characters. The system
output buffer is the TTY output buffer, which can store 2048 characters.
Modem baud rate    Link capacity (msg/sec, peak)    Calls/Hour for FCDR=old    Calls/Hour for FCDR=new
300 30 831 390
1200 120 3323 1560
2400 240 6646 3120
4800 480 13 292 6241
9600 960 26 585 12 481
19 200 1920 53 169 24 962
38 400 3840 106 338 49 924
Equilibrium analysis
The system capacity for messages per second is conservatively based on the assumption of
100% outgoing calls with FCDR=new. Typically, CDR records are not generated for 100% of
the calls.
Peak analysis
Since each character is sent as a separate message, every time a CDR record is sent, a traffic
peak is generated.
To prevent system buffers from building up, set the baud rate at 38 400. If a lower baud rate
is chosen, assume that the CDR application is frequently in a state of flow control. Note that
this is true even if the steady state message rate is low, due to the nature of the SDI
interface.
The burst sizes are even greater if CDR is configured with queue records for incoming ACD
calls.
If measured traffic data is available, insert that value into Column A. Otherwise, follow the
guidelines provided. Values in parentheses are
default values. For example, the default number of calls/hr/trunk is 15.6. The value in Column
E can be inserted in the Real Time Required column of Table 74: D-channel handler
engineering worksheet on page 351, and the appropriate Peak Buffer Usage values should be
inserted in the corresponding Peak Buffer Usage columns of Table 74: D-channel handler
engineering worksheet on page 351.
DCH applications
If several applications share a D-channel, add the final real time requirements for the
applications and then enter the total in the appropriate entry in Table 80: Real time
requirements for D-channel applications on page 356.
Table 75: Real time requirements for D-channel applications
The calculations described for NACD provide a simplified approximation of a "typical" NACD
network. If call flows can be predicted or estimated, they can be used to develop a more
accurate model using the number of messages. When this is done, the msgs/hr is computed
directly, so columns A and B are not used. See Examples on page 354 for a detailed example
of how this can be done.
If a live system is being modeled, add the "number of all incoming messages received on the
D-channel" and the "number of all outgoing messages sent on the D-channel" field from a busy
hour TFS009 report to derive the entry for Column C. See Avaya Traffic Measurement Formats
and Outputs Reference, NN43001-750 for details.
Table 76: Peak buffer requirements for D-channel applications
• Low: 10
• Medium: 20
• High: 30
NMS 10 10
In the case of an ISL D-channel, ensure that the baud rate of the connection is greater than
(C msgs/hr × 29 bytes/msg × 8 bits/byte) ÷ 3600 sec/hr
where C comes from column C in Table 80: Real time requirements for D-channel
applications on page 356.
If the baud rate is too low to meet requirements, performance of the entire D-channel handler
can be jeopardized, since 30 of the output buffers are occupied with ISL D-channel messages
and the real time spent processing these messages increases due to additional flow control
and queueing logic.
SDI applications
In the HSL analysis, include live agents, automated agents, and Avaya CallPilot agents in the
agent total. This compensates for the assumption of simple calls.
Table 77: Real time requirements for SDI applications
There are no traffic reports that provide information about the number of SDI messages directly.
For CDR records, determine whether CDR is enabled for incoming, outgoing, and/or internal
calls. The number of incoming, outgoing, internal, and tandem calls is available from TFC001.
Tandem calls are considered both incoming and outgoing. Alternatively, the number of CDR
records can be counted directly.
TTY 10 10
Examples
The DCH1 and DCH2 columns indicate whether the messages can be included in the DCH1
and DCH2 message count, respectively. For each row, multiply the entry in the "Queued and
answered" column by 11 messages and multiply the entry in the "Queued but not answered"
column by 4 messages. The sum of these two values is provided in the "Total messages"
column. By summing the rows that can be included for DCH1 and DCH2, we derive the total
messages for DCH1: 56 350 msg/hr and DCH2: 59 150 msg/hr. Note that these messages do
not include the impact of CRQS and FCTH, which are beyond the scope of this analysis (see
Table 80: Real time requirements for D-channel applications on page 356).
Assuming that no non-NACD calls are carried, Node B carries 3750 calls/hour.
Table 81: Real time requirements for SDI applications
Port    Application    Real Time required    Peak Buffer usage (outgoing)    Peak Buffer usage (incoming)
0 CDR 39 938 10 1
1 DCH-NACD 495 880 7 10
2 DCH-NACD 520 520 7 10
3
Total 1 056 338 24 21
The projected D-channel handler utilization is 1 056 338 ÷ 2 770 000 = 38%. Assuming low
network congestion, incoming and outgoing peak buffer usage are below 60, so a single D-
channel handler is able to support this configuration. However, due to the potentially high
messaging impact of NACD, re-engineer this configuration periodically to determine whether
the call volumes or call flow patterns have changed.
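The worksheet totals in Table 81 can be cross-checked in a few lines (a sketch; the 2 770 000 rated capacity and the 60-buffer limit are taken from the surrounding analysis):

```python
# (application, real time required, peak out buffers, peak in buffers)
ports = [
    ("CDR",      39_938, 10,  1),
    ("DCH-NACD", 495_880, 7, 10),
    ("DCH-NACD", 520_520, 7, 10),
]
real_time = sum(p[1] for p in ports)    # 1 056 338
out_bufs = sum(p[2] for p in ports)     # 24
in_bufs = sum(p[3] for p in ports)      # 21
utilization = real_time / 2_770_000     # ~0.38, i.e. 38%
fits = out_bufs < 60 and in_bufs < 60   # True: one D-channel handler suffices
```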
Call Center
The Call Center is an ACD switch whose calls are mostly incoming, with extensive applications
features such as Avaya Hospitality Integrated Voice Services. A port in the Call Center
environment, either as an agent telephone or trunk, tends to be more heavily loaded than other
types of applications.
System capacity requirements depend on customer application requirements, such as calls
processed in a busy hour, and feature suites such as Recorded Announcement (RAN), Music,
and Interactive Voice Response (IVR).
ACD
Automatic Call Distribution (ACD) is an optional feature available with the system. It is used
by organizations where the calls received are for a service rather than a specific person.
For basic ACD, incoming calls are handled on a first-come, first-served basis and are
distributed among the available agents. The agent that has been idle the longest is presented
with the first call. This ensures an equitable distribution of incoming calls among agents.
The system is managed or supervised by supervisors who have access to the ACD information
through a video display terminal. These supervisors deal with agent-customer transactions
and the distribution of incoming calls among agents.
Many sophisticated control mechanisms have been built on the basic ACD features. Various
packages of ACD features have real time impact on the system CP capacity.
ACD-D package
The ACD-D system is designed to serve customers whose ACD operation requires
sophisticated management reporting and load management capabilities. It has an enhanced
management display, as the system is supplemented by an auxiliary data system. The system
and the auxiliary processor are connected by data links through SDI ports for communications.
Call processing and service management functions are split between the system and the
auxiliary processor.
ACD-MAX
ACD-MAX offers a customer managerial control over the ACD operation by providing past
performance reporting and current performance displays. It is connected through an SDI port
to communicate with the system CP. The ACD-MAX feature makes the necessary calculations
of data received from the system to produce ACD report data for current and past performance
reports. Every 30 seconds, ACD-MAX takes the last 10 minutes of performance data and uses
it to generate statistics for the current performance displays. The accumulated past
performance report data is stored on disk every 30 minutes.
ACD-MAX calls impact capacity engineering in the real time area only.
NACD
The majority of tasks in the engineering of Network ACD (NACD) involve the design of an
NACD routing table and the engineering of overflow traffic. The process is too complex to be
included here. The engineering procedure in this document is for single-node capacity
engineering, which accounts for the real time impact of NACD calls on a switch either as a
source node or remote target node. Therefore, the overall design of a network is not in the
scope of this document.
ELAN engineering
The Embedded Local Area Network (ELAN) subnet is designed to handle messaging traffic
between the system and its applications, such as Symposium and Avaya CallPilot. It is not
meant to handle functions of the customer's LAN, which carries customer application traffic.
A 64 kbps link can handle messaging traffic of over 80 000 calls. The ELAN subnet, an
Ethernet with an autonegotiated data rate of 10/100/1000 Mbps, is not a bottleneck in a Symposium/
CallPilot configuration. However, observe the following engineering guidelines to avoid
performance problems. For more information, see Avaya Converging the Data Network with
VoIP Fundamentals, NN43001-260.
• Ensure that settings on the physical interface of the system to the Ethernet are correct.
• Although no traffic engineering is required on the ELAN subnet, if the loading on the link
is extremely high (for example, above 10% on a 10BASE-T 10 Mbps link), collisions can
occur on the Ethernet. Use a sniffer to detect any performance problems. Decrease the
loading on the link if it is overloaded.
• Set a consistent data rate with the application.
Certain remote maintenance applications can utilize the ELAN subnet to access the system
from a remote location. Ensure that no other customer LAN traffic is introduced.
Adding Media Gateways requires a recalculation of the ELAN traffic estimation to ensure
proper data networking.
You can estimate the ELAN traffic by estimating the load on each card in the Media Gateway
in an estimation table. Table 83: Media Gateway ELAN traffic estimation example on page 361
includes the traffic load on each card in CCS. Record each card load as normal or maximum
in the table, then copy the appropriate traffic bandwidth and idle traffic bandwidth for
each card from Table 84: Estimated traffic for cards on page 362. Sum the ELAN
traffic bandwidth needed for each Media Gateway. If a card is not found in Table 84: Estimated
traffic for cards on page 362, use the Unknown IPE Card (UIC) values. Table 83: Media Gateway
ELAN traffic estimation example on page 361 shows an example ELAN traffic estimation table.
Table 84: Estimated traffic for cards on page 362 shows estimated ELAN traffic per card
required for the calculation.
Table 83: Media Gateway ELAN traffic estimation example
With IPSec, there is an estimated 30% increase in traffic overhead. ELAN traffic estimation
with IPSec is 409630 Bits/sec.
Table 84: Estimated traffic for cards
• ELAN Traffic is additional traffic from the ELAN ports on the MGC that must be included
in the overall data network bandwidth estimation. This traffic is specifically between the
MGC and the controlling Communication Server. The MGC may re-home to three
Communication Servers.
• Idle Traffic (Bits/sec) is the minimum ELAN traffic that occurs under no load conditions.
This traffic is always present and must be added to the total ELAN traffic estimation.
• Normal Traffic (Bits/sec) is the normal ELAN traffic during 20% card load or 5 CCS per
port.
• Maximum Traffic (Bits/sec) is the maximum ELAN traffic during 100% card load or 33 CCS
per port.
• Normal Load CCS is the normal card load measured in CCS. For example, the NT8D09
Digital Line Card under normal load carries 80 CCS (16 × 5 CCS = 80 CCS).
• Maximum Load CCS is the maximum card load measured in CCS. For example, the
NT8D09 Digital Line Card under maximum load carries 528 CCS (16 × 33 CCS = 528 CCS).
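The per-gateway estimation procedure above can be sketched in Python. The per-card bandwidth figures below are illustrative placeholders only, not values from Table 84; substitute the actual table values for real planning:

```python
# Sketch of the Media Gateway ELAN traffic estimation procedure.
# CARD_TRAFFIC values are HYPOTHETICAL placeholders -- use Table 84 figures.
IPSEC_OVERHEAD = 1.30  # IPSec adds an estimated 30% traffic overhead

# (traffic_bits_per_sec, idle_bits_per_sec) per (card type, load level)
CARD_TRAFFIC = {
    ("Digital Line Card", "normal"): (2000, 100),
    ("Digital Line Card", "maximum"): (13000, 100),
    ("Unknown IPE Card (UIC)", "maximum"): (15000, 200),
}

def elan_traffic(cards, with_ipsec=False):
    """Sum traffic plus idle bandwidth for every card in one Media Gateway."""
    total = 0.0
    for card_type, load in cards:
        traffic, idle = CARD_TRAFFIC[(card_type, load)]
        total += traffic + idle  # idle traffic is always present
    return total * IPSEC_OVERHEAD if with_ipsec else total

gateway = [("Digital Line Card", "maximum"), ("Unknown IPE Card (UIC)", "maximum")]
print(elan_traffic(gateway))                    # base estimate
print(elan_traffic(gateway, with_ipsec=True))   # with 30% IPSec overhead
```

The same loop is repeated for each Media Gateway, and the per-gateway totals feed the overall data network bandwidth estimation.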
Table 84: Estimated traffic for cards on page 362 is only to be used for estimating the Media
Gateway ELAN traffic. Do not use these table values for other calculations. Additional network
parameters such as TLAN traffic, packet loss and round trip delay are also required for proper
data network planning and engineering. See Avaya Converging the Data Network with VoIP
Fundamentals, NN43001-260 for additional details on Distributed Media Gateways.
The HSP can be connected using a cable directly between the two CPUs, or using networking
equipment. CP PIV requires the use of a crossover cable for HSP. When using networking
equipment to connect, the HSP ports are assigned unique IP addresses.
The following are recommendations and rules for configuring the HSP network and network
interfaces of two Call Server CPUs using network equipment:
• The HSP must be connected through an Ethernet cable (cross-over on CP PIV) or by a
dedicated VLAN through switches.
• The HSP must be in its own IP subnet. It cannot be combined with the ELAN subnet.
• The minimum throughput of the HSP must be 100 Mbps. Therefore, the HSP port must
be 100 Mbps and full duplex. This must be confirmed using the STAT HSP command in
LD 137 after the equipment is operational. This must also be verified on the network
equipment to which the HSP is attached.
• The network switches must be capable of port mapping to 802.1p/Q.
• When running the HSP across network equipment, the HSP must be isolated in its own
VLAN. Do not include other traffic in this VLAN. This VLAN must be given higher VLAN
priority than any other traffic on the network, except for network control traffic (network
control traffic is the traffic necessary to keep the network operational). The VLAN must
be 802.1p/Q-capable and must be set to a very high setting so as not to starve the HSP.
Avaya strongly recommends 802.1p Level 7 (Network Control and OAM).
• When using third-party vendor network equipment that has not been validated by Avaya,
a pre-test of the network must be performed. This test includes mixed traffic going across
the networks in different VLANs. The network must meet the round trip delay
and packet loss requirements.
• The round trip delay of the HSP VLAN must be less than 30 msec, and the packet loss of
the HSP VLAN must be below 0.1%.
• The HSP port on the CP PIV is set to autonegotiate the link speed and duplex. Therefore,
the network equipment to which the CP PIV is attached must also use autonegotiation. Verify
that the speed and duplex settings of the CP PIV and the network equipment match.
• Avaya recommends that MLT (Multi Link Trunking) be used across the enterprise IP
network for the Campus Redundancy configuration.
• Cabling for the HSP port on the CP PIV must be at least Cat 5e when running the link
speed at 1 Gbps.
Caution:
Duplex mismatches occur in the LAN environment when one side is set to autonegotiate
and the other is hard configured.
The autonegotiate side adapts only to the speed setting of the fixed side. For duplex
operations, the autonegotiate side sets itself to half-duplex mode. If the forced side is full-
duplex, a duplex mismatch occurs.
Switching Equipment
Important:
The HSP cannot be routed, and as a result, it cannot be extended through a Layer 3 router
unless that device supports a method of providing Layer 2 end-to-end connectivity
(Example: Layer 2 tunneling). Therefore, when passing through routing equipment, the HSP
must remain in the same subnet from one Call Server to the other (Example: tunneling the
HSP over the network).
Feature operation
A call originated from Telephone A (or Trunk A) seeks to terminate on a CLASS Telephone B.
When Telephone B starts to ring, Telephone A hears ringback. A unit in CLASS Modem
(CMOD) is assigned to collect the originator's CND information and waits for the CND delivery
interval. After the first ring at Telephone B, a silence period (delivery interval) ensues, and the
CMOD unit begins to deliver CND information to the CLASS telephone.
The CND information of a traffic source (Telephone A) is system information, which is
obtained by the system when a call is originated. During the two-second ringing period of the
CLASS Telephone B, Telephone A's CND is delivered to CMOD by SSD messages (using
signaling channel only). When the CND information is sent from CMOD to CLASS Telephone
B, it is delivered through a voice path during the four-second silence cycle of Telephone B. The
CMOD unit is held for a duration of six seconds.
The system delivers SSD messages containing CND information to CMOD and then sends it
to Telephone B during the delivery interval through a voice path.
Table 85: CMOD Unit Capacity on page 368 is the CMOD capacity table. It provides the
number of CMOD units required to serve a given number of CLASS telephones with the desired
GoS (P.001). The required number of CMOD units can have a capacity range whose upper
limit is greater than the number of CLASS telephones equipped in a given configuration.
Table 85: CMOD Unit Capacity
Configuration parameters
Design parameters are constraints on the system established by design decisions and
enforced by software checks. Defaults are provided in the factory-installed database. However,
some parameter values must be set manually, through the OA&M interface, to reflect the actual
needs of the customer's application.
For guidelines on how to determine appropriate parameter values for call registers, I/O buffers,
and so on, see Design parameters on page 199 and Memory engineering on page 211.
First, calculate the number of MAS clusters required for the number of servers that will be
deployed (MAS_Clusters).
If (MASA[MASA_type_index] = true) then
MAS_Clusters = roundup(SSMASR / (MAS_Cluster_size - 1))
Else
MAS_Clusters = roundup(SSMASR / MAS_Cluster_size)
The MAS licenses need to be divided among the clusters, and the number of servers
in each cluster must be taken into consideration.
Second, calculate the number of servers in each Cluster:
Where Servers_in_cluster_one_and_full is defined as the
number of servers in the first cluster and the number of
servers in each full cluster. Therefore when there is only
one cluster this number can be less than the cluster
size.
Servers_in_cluster_N is defined as the number of
servers in the last cluster. When there is only one cluster
this value will be zero. This value will often be smaller than
the cluster size.
If (MASA[MASA_type_index] = true) then % have redundancy
{
Servers_in_cluster_one_and_full = MAS_Cluster_size -1
If MAS_Clusters > 1 then
Servers_in_cluster_N = roundup(SSMASR -
((MAS_Clusters-1)*(MAS_cluster_size-1)))
Else % only one cluster
Servers_in_cluster_N =0
}
Else % no redundancy
{
Servers_in_cluster_one_and_full = MAS_Cluster_size
If MAS_Clusters > 1
then
Servers_in_cluster_N = roundup(SSMASR-
((MAS_Clusters-1)*MAS_cluster_size))
Else % only one cluster
Servers_in_cluster_N = 0
}
Now that the number of clusters and the number of servers per cluster are known, the
size of the keycode per cluster can be calculated.
Note that cluster N may not be a full cluster and therefore will have a different number
of licenses than a full cluster.
If MAS_Clusters > 1 then
{
Numb_Cluster_one_keycodes = MAS_Clusters – 1
Numb_ClusterN_keycodes = 1
If MASA = true % have redundancy
{
Cluster_one_keycode_Sessions =
ROUNDUP(Total_MAS_Sessions_Licenses / SSMASR * (MAS_cluster_size-1))
}
Else % no redundancy MAS servers
{
Cluster_one_keycode_Sessions =
ROUNDUP(Total_MAS_Sessions_Licenses / SSMASR * MAS_cluster_size)
} % end redundancy check
% there is more than one cluster, so determine keycode sessions for clusterN
ClusterN_keycode_Sessions = Total_MAS_Sessions_Licenses -
(Cluster_one_keycode_Sessions * (MAS_Clusters-1))
} % end > 1 MAS cluster
Else % only one MAS cluster
{
Cluster_one_Keycode_Sessions = Total_MAS_Sessions_Licenses
ClusterN_keycode_Sessions = 0
Numb_Cluster_one_keycodes = 1
Numb_ClusterN_keycodes = 0
}
The customer now needs Numb_Cluster_one_keycodes of size
Cluster_one_keycode_Sessions. If the number of MAS clusters (MAS_Clusters) is greater
than one, an additional keycode of size ClusterN_keycode_Sessions will be required.
To verify the number of keycodes that must be generated:
Numb_Cluster_one_keycodes + Numb_ClusterN_keycodes = MAS_Clusters
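The whole procedure above can be rendered compactly in Python. The variable names follow the pseudocode; the example values (7 servers, clusters of 4, 700 session licenses) are hypothetical:

```python
import math

def mas_keycodes(ssmasr, cluster_size, total_sessions, redundant):
    """MAS cluster and keycode sizing, following the pseudocode above.

    ssmasr         -- number of MAS servers (SSMASR)
    cluster_size   -- MAS_Cluster_size
    total_sessions -- Total_MAS_Sessions_Licenses
    redundant      -- MASA redundancy flag
    """
    per_cluster = cluster_size - 1 if redundant else cluster_size
    clusters = math.ceil(ssmasr / per_cluster)  # MAS_Clusters
    # Servers in the last cluster (zero when there is only one cluster)
    servers_last = ssmasr - (clusters - 1) * per_cluster if clusters > 1 else 0
    if clusters > 1:
        c1_keycodes, cn_keycodes = clusters - 1, 1
        c1_sessions = math.ceil(total_sessions / ssmasr * per_cluster)
        cn_sessions = total_sessions - c1_sessions * (clusters - 1)
    else:
        c1_keycodes, cn_keycodes = 1, 0
        c1_sessions, cn_sessions = total_sessions, 0
    # Verification step from the text: keycode counts must equal MAS_Clusters
    assert c1_keycodes + cn_keycodes == clusters
    return clusters, servers_last, c1_sessions, cn_sessions

# 7 redundant servers, clusters of 4, 700 session licenses (hypothetical)
print(mas_keycodes(7, 4, 700, redundant=True))  # (3, 1, 300, 100)
```

With redundancy, each full cluster holds cluster_size − 1 servers, so 7 servers need 3 clusters: two full-size keycodes of 300 sessions each and one cluster-N keycode of 100 sessions.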
Contents
This chapter contains the following topics:
Introduction on page 375
Loops and superloops on page 376
Card slot usage and requirements on page 377
Assigning loops and cards in the CS 1000E on page 380
Preparing the final card slot assignment plan on page 389
Introduction
Calculating the number and assignment of cards, and therefore of Media Gateways, is an
iterative procedure because of specific capacity and usage requirements.
In an Avaya Communication Server 1000E (Avaya CS 1000E) system, Digital Signal Processor
(DSP), Digitone receiver (DTR), Tone and Digit Switch (TDS), and other services are provided
by circuit cards such as Media Cards, and the Media Gateway Controller (MGC). These
resources are available only to the Media Gateway (with optional Expander) in which the circuit
cards reside. Other services, such as Conference, are available as system resources but
require Media Gateway-specific DSP resources in order to access them.
System capacities on page 209 and Resource calculations on page 249 describe the
theoretical, traffic-based calculations used by Enterprise Configurator to estimate the required
number of Media Cards and Media Gateways. This chapter describes the steps to allocate the
cards to specific Media Gateways. The process can result in an increase in the required
number of Media Cards and Media Gateways.
Note on terminology
The term Media Gateway refers to the Media Gateway 1010 (MG 1010) and Avaya CS 1000
Media Gateway 1000E (Avaya MG 1000E). The MG 1010 provides ten IPE slots. The Avaya
MG 1000E provides four IPE slots.
Each MG 1000E can be connected to an optional Media Gateway Expander in order to
increase capacity to eight IPE slots. In this chapter, the term MG 1000E includes the optional
Media Gateway Expander, if equipped.
Virtual superloops
There are no physical timeslots on Media Gateways. Timeslots are defined within virtual
superloops that benefit from the nonblocking timeslot architecture used by IP Phones and
Virtual Trunks.
The superloop is layered into 16 banks of virtual superloops interfacing the 16 card slots in the
two Media Gateways. This expands the superloop's 120 timeslots to 1920 timeslots (= 16 ×
120) to service a maximum of 1024 TNs in the address space. Media Gateways are therefore
nonblocking with respect to timeslots.
Internally, a card number separates the banks of software timeslots. Since a superloop is
associated with 16 cards, each card is associated with one virtual superloop.
The network-level circuits, such as Conference and Tones, use additional loops outside of this
address space. They also use DSPs from within the nonblocking superloops.
With MGTDS, you can configure two Media Gateway TDS loops, with 30 parties on each loop.
MGC-based Media Gateway conference capacity is 2 MGCONF loops with 30 parties on each
loop.
PRI/PRI2/DTI/DTI2 loops
An MGC is required in slot 0 of any Media Gateway that will contain a PRI/PRI2/DTI/DTI2
card.
Each T1/E1 span consumes a loop, as well as a card slot.
The CP PIV and CP PM processors can support up to 100 PRI/PRI2/DTI/DTI2 spans. However,
this many T1/E1 spans would consume most of the loops on the system.
Virtual card slots:
• Virtual slot 0: 32-port MGC DSP daughterboard. 32-port DSP daughterboards use virtual
slot 0 and are supported in MGC daughterboard location 1 and location 2.
• Virtual slots 11, 12, and 13: 96-port MGC DSP daughterboard. The 96-port DSP
daughterboard uses virtual slots 11, 12, and 13, and is supported in MGC daughterboard
location 1.
• Virtual slots 14 and 15: DTRs (maximum: 8 per slot). Required if any analog terminals or
trunks are equipped in the MG 1000E.
• Virtual slot 15: MF tone detectors (maximum: 4). Must be provided on each MG 1000E
that requires tone-based signaling.
If DTRs are configured in any other card slot, a receiver hardware pack must be equipped
in the slot.
Important:
DSP resources cannot be shared between Media Gateways. Therefore, each
Media Gateway must contain sufficient DSP resources required by the equipment
configured in that Media Gateway.
3. There must be at least one TDS loop in each Media Gateway.
4. Allocate the users and Media Cards for dedicated DSPs first, then fill remaining
empty slots in Media Gateways with other IPE cards.
Important:
There is no way to reserve DSP resources for dedicated usage (such as
Conference). If a system has higher than expected call rates for standard
telephones, these standard telephones can effectively hijack DSP resources
required for dedicated functions. Therefore, in a system with high call rates for
standard telephones, place dedicated and standard resources in different Media
Gateways.
Provision resources in the following order:
a. Conference on page 381
b. TDS on page 382
c. Broadcast circuits on page 383
d. Other service circuits on page 385
e. TDM telephones and TDM agents on page 385
f. Consoles on page 386
g. Standard telephones on page 387
Conference
Each Media Gateway provides up to 60 conference circuits (ports), which can be used to form
conferences of up to 30 parties each. The MGC card has 60 conference circuits (2 loops).
Users can configure 2 conference loops on each MGC-based Media Gateway, with each loop
providing 30 conference circuits.
The conference circuits are available to all Media Gateways in the system. Calls are assigned
to conference circuits on a "round robin" basis. Each conference circuit is accessed through a
DSP port in the Media Gateway in which the conference loop is defined. In addition, the device
using the service can require another DSP in order to reach the conference port (see DSP
ports for Conference on page 265).
For nonblocking access, provide an equal number of DSP ports and conference ports. In other
words, provide one 32-port Media Card for every defined conference loop.
1. Calculate the number of Media Gateways required for Conference based on the
number of conference circuits needed, in multiples of 60:
Number of Media Gateways = ROUNDUP(Number of conference circuits ÷ 60)
2. Calculate the number of DSPs required for Conference based on the number of
conference circuits needed, in multiples of 32:
Provide 32 ports of DSP for every defined conference loop.
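The two sizing rules can be combined in a short sketch; it assumes one conference loop per 30 circuits, which follows from the 30-party loop capacity stated above:

```python
import math

def conference_resources(circuits):
    """Size Media Gateways and DSP ports for a required number of
    conference circuits, per the rules above."""
    gateways = math.ceil(circuits / 60)  # 60 conference circuits per Media Gateway
    loops = math.ceil(circuits / 30)     # each conference loop provides 30 circuits
    dsp_ports = 32 * loops               # 32 DSP ports per defined conference loop
    return gateways, dsp_ports

print(conference_resources(90))  # (2, 96): 2 Media Gateways, 96 DSP ports
```

For 90 conference circuits, two Media Gateways and three 32-port Media Cards (96 DSP ports) give nonblocking access.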
Examples
TDS
A minimum of one TDS loop is required in each Media Gateway. The TDS circuits are provided
by the MGC card. If additional TDS circuits are required in any Media Gateway, a second TDS
loop can be configured in it.
PRI/PRI2/DTI/DTI2
Each digital trunk in a CS 1000E system requires a dedicated DSP resource. Each T1 span
requires 24 ports of DSP and each E1 span requires 30 ports of DSP.
PRI/PRI2/DTI/DTI2 cards require the use of CEMUX and are supported in slots 1-9 of a Media
Gateway.
The definition of each PRI/PRI2/DTI/DTI2 span consumes 1 loop and can be configured in LD
17.
Controlled broadcast
If an MGate card is used for controlled broadcast, the rules for card placement of the MGate
card and timeslot usage are the same as for a MiRan card. The MGate card requires 1 DSP
for every listener.
Table 89: Example of timeslot sharing in a superloop on page 383 shows the timeslot sharing
used for a MiRan card; the same sharing applies to an MGate card used for controlled broadcast.
Broadcast circuits
Music and Recorded Announcement (RAN) are broadcast circuits. One channel can support
many listeners. Each listener needs one DSP port. A broadcast music trunk is required for
every 60 broadcast users.
In order to maximize the number of simultaneous connections to an Avaya Integrated
Recorded Announcer card in one Media Gateway shelf of a superloop, use all the timeslots
for the superloop for that card. The software "steals" the timeslots from the other shelf of the
superloop, provided the equivalent card slot in the second Media Gateway is not used. Table
89: Example of timeslot sharing in a superloop on page 383 illustrates the strategy.
Table 89: Example of timeslot sharing in a superloop
An alternative strategy is to use just one Media Gateway on a superloop when broadcast
circuits are required.
Consoles
Each Avaya 2250 Attendant Console and PC Console requires two TNs (originating and
terminating) on an XDLC card and one Aux TN (for supervisor function). Avaya also
recommends two power TNs per console.
DSPs are used when a call is active on an Attendant loop key. Each side (originating and
terminating) requires one DSP, for a total of two DSPs per active/held call on the console.
Queued calls (ICI key indicators) do not consume DSP resources until the Attendant answers
the call on a loop key.
DSP calculations
For standard access, provide 4 DSPs per console.
For dedicated DSPs, provide 12 DSPs per console (2 × 6 loop keys).
IP Attendant Consoles
Each IP Attendant 3260 in the system requires four SIP ports. You must ensure that there are
enough SIP ports to support the intended number of consoles.
Traffic estimation
You only need to perform traffic estimation calculations if the overall bandwidth between the
IP Attendant Consoles and the registered Media Services server is less than 20 Mbps. This
applies to all deployments.
The traffic estimation calculations shown in the table below are only applicable to the traffic
between the IP Attendant Console and the Media Services server.
Table 90: Traffic estimation calculations for IP Attendant Console
Standard telephones
Standard telephones are the average line users configured with a standard configuration.
1. Using a rule of thumb of five telephones per unallocated DSP, distribute line cards
to the Media Gateways with empty slots and unused DSPs.
The rule of thumb is derived as follows:
• A Media Card with 32 DSPs supports 794 CCS. This approximates to 24.8
CCS per DSP (794 ÷ 32).
• The default value for average user traffic is 5 CCS. At 5 CCS per standard
user, 24.8 CCS per DSP translates to 5 telephones per DSP.
2. Using a rule of thumb of one Media Card per seven line cards, fill empty Media
Gateways with the remaining line cards and their required Media Cards.
The rule of thumb assumes average traffic of less than 7 CCS per telephone. This
is derived as follows:
• There are a total of 8 card slots available in each MG 1000E.
• If 1 card slot is used by a Media Card, a maximum of 7 line cards, or 112
telephones (7 × 16 ports), can be added to the MG 1000E.
• There are a total of 10 card slots available in each MG 1010.
• If 1 card slot is used by a Media Card, a maximum of 9 line cards, or 144
telephones (9 × 16 ports), can be added to the MG 1010.
• A Media Card with 32 DSPs supports 794 CCS. This is the traffic capacity of
this particular Media Gateway.
• A capacity limit of 794 CCS means each MG 1000E based telephone must
generate less than 7 CCS, on average (794 ÷ 112).
• A capacity limit of 794 CCS means each MG 1010 based telephone must
generate less than 5.5 CCS, on average (794 ÷ 144).
For average traffic of more than 7 CCS per telephone, use Table 91: Maximum
number of Media Cards, line cards, and telephones in a Media Gateway on
page 388 to determine the number of Media Cards and telephones that can be
assigned to an MG 1000E.
Table 91: Maximum number of Media Cards, line cards, and telephones in a
Media Gateway
3. Use a similar rule to add trunk cards (XUT) and their required Media Cards. See
Table 92: Maximum number of Media Cards, trunk cards, and trunks in a Media
Gateway on page 388.
Table 92: Maximum number of Media Cards, trunk cards, and trunks in a Media
Gateway
4. To mix line and trunk cards in a Media Gateway, calculate the total CCS for the
number of lines and trunks. Then use Table 93: Traffic capacity of Media Cards
(Erlang B at P.01) on page 388 to identify the number of Media Cards required to
support that CCS rate.
Table 93: Traffic capacity of Media Cards (Erlang B at P.01)
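As a quick check before consulting Table 93, the mixed line-and-trunk calculation in step 4 can be approximated linearly at 794 CCS per 32-port Media Card. The CCS/T values below are the example figures used earlier in this chapter, and Table 93 remains the authoritative source:

```python
import math

CCS_PER_MEDIA_CARD = 794  # traffic capacity of a 32-port Media Card

def media_cards_for_mix(lines, trunks, line_ccs_t=6.2, trunk_ccs_t=4.1):
    """Approximate the Media Cards needed for a mix of lines and trunks
    by totaling their CCS (linear approximation of Table 93)."""
    total_ccs = lines * line_ccs_t + trunks * trunk_ccs_t
    return math.ceil(total_ccs / CCS_PER_MEDIA_CARD)

print(media_cards_for_mix(112, 32))  # 2 Media Cards
```

Here 112 lines at 6.2 CCS plus 32 trunks at 4.1 CCS total about 826 CCS, exceeding one card's 794 CCS capacity, so two Media Cards are needed.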
CLASS cards
CLASS cards can be placed in any Media Gateway. Each CLASS card requires
32 ports of DSP.
The telephones that use the CLASS cards do require extra DSP resources. The rules for
allocating standard telephones apply.
Contents
This chapter contains the following topics:
Introduction on page 391
Step 1: Define and forecast growth on page 392
Step 2: Estimate CCS per terminal on page 393
Step 3: Calculate number of trunks required on page 394
Step 4: Calculate line, trunk, and console load on page 395
Step 5: Calculate Digitone receiver requirements on page 396
Step 6: Calculate total system load on page 397
Step 7: Calculate the number of IPE cards required on page 397
Step 8: Calculate the number of Media Cards required on page 397
Step 9: Calculate the number of Signaling Servers required on page 398
Step 10: Provision conference/TDS loops on page 398
Step 11: Calculate the number of Media Gateways required on page 398
Step 12: Assign equipment and prepare equipment summary on page 398
Resource calculation worksheets on page 399
Introduction
This section provides a high-level overview of the steps required to determine general
equipment requirements. Consult your Avaya representative and use a configuration tool, such
as Enterprise Configurator, to fully engineer a system.
Important:
The values used in the examples in this chapter are for illustrative purposes only, and should
not be interpreted as limits of the system capacity. The values must be adjusted to suit the
application of a particular system.
Example
A customer has 500 employees and needs 275 telephones to meet the system cutover. The
customer projects an annual increase of 5% of employees based on future business
expansion. The employee growth forecast is:
• 500 employees × 0.05 (percent growth) = 25 additional employees at 1 year
• 525 employees × 0.05 = 27 additional employees at 2 years
• 552 employees × 0.05 = 28 additional employees at 3 years
• 580 employees × 0.05 = 29 additional employees at 4 years
• 609 employees × 0.05 = 31 additional employees at 5 years
• 640 employees × 0.05 = 32 additional employees at 6 years
The ratio of telephones to employees is 275 ÷ 500 = 0.55.
To determine the number of telephones required from cutover through a five-year interval,
multiply the number of employees required at each of the time periods by the ratio of telephones
to employees (0.55).
• 500 employees × 0.55 = 275 telephones required at cutover
• 525 employees × 0.55 = 289 telephones required at 1 year
• 552 employees × 0.55 = 304 telephones required at 2 years
• 580 employees × 0.55 = 319 telephones required at 3 years
• 609 employees × 0.55 = 335 telephones required at 4 years
• 640 employees × 0.55 = 352 telephones required at 5 years
This customer requires 275 telephones at cutover, 304 telephones at two years, and 352
telephones at five years.
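The forecast above can be reproduced programmatically; note that the worked example rounds each year's employee growth up to the next whole employee:

```python
import math

def forecast_telephones(employees, phones, growth_pct=5, years=5):
    """Reproduce the growth forecast above: annual employee growth
    (rounded up, as in the example) with a constant telephone ratio."""
    ratio = phones / employees  # 275 / 500 = 0.55 telephones per employee
    results = [round(employees * ratio)]
    for _ in range(years):
        employees += math.ceil(employees * growth_pct / 100)
        results.append(round(employees * ratio))
    return results

print(forecast_telephones(500, 275))  # [275, 289, 304, 319, 335, 352]
```

The output matches the worked example: 275 telephones at cutover, 304 at two years, and 352 at five years.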
Each DN assigned to a telephone requires a TN. Determine the number of TNs required for
each customer. Perform this calculation for cutover, two-year, and five-year intervals.
Comparative method
Select three existing systems that have an historical record of traffic study data. The criteria
for choosing comparative systems are:
1. Similar line size (±25%)
2. Similar business (such as bank, hospital, insurance, manufacturing)
3. Similar locality (urban or rural)
Calculate the average station, trunk, and intra-system CCS/T for the selected systems. Apply
these averages to calculate trunk requirements for the system being provisioned.
Manual calculation
Normally, the customer can estimate the number of trunks required at cutover and specify the
Grade-of-Service (GoS) to be maintained at two-year and five-year periods (see Table 95:
Example of manual calculation of CCS/T on page 394).
Use an appropriate trunking table (see Reference tables on page 409) to obtain estimated
trunk group usage for the number of trunks. Divide the number of lines that are accessing the
group at cutover into the estimated usage. The result is the CCS/T, which can be used to
estimate trunk requirements.
Table 95: Example of manual calculation of CCS/T on page 394 provides an example of the
manual calculation.
Traffic source Cutover (CCS) Two years (CCS) Five years (CCS)
Line 275 × 6.2 = 1705 304 × 6.2 = 1885 352 × 6.2 = 2183
Trunk 275 × 4.1 = 1128 304 × 4.1 = 1247 352 × 4.1 = 1444
Subtotal 2833 3132 3627
Console 30 30 30
Total system load 2863 3162 3657
Line CCS/T = 6.2; Trunk CCS/T = 4.1; two consoles = 30 CCS.
Repeat this method for each trunk group in the system, with the exception of small special
services trunk groups (such as TIE, WATS, and FX trunks). Normally, customers tolerate a
lesser GoS on these trunk groups.
Default method
Studies estimate that the average line CCS/T is no greater than 5.5 in 90% of
all businesses. If attempts to calculate the CCS/T using the comparative method or the manual
calculation are not successful, the default of 5.5 line CCS/T can be used.
Determine the network line usage by multiplying the number of lines by 5.5 CCS/T. Then
multiply the total by 2 to incorporate the trunk CCS/T. However, this method double-counts the
intra-CCS/T, resulting in over-provisioning if the intra-CCS/T is high. Also, this method is not
able to forecast individual trunk groups. The trunk and intra-CCS/T are forecast as a group
total.
Example
The customer requires a Poisson 1% blocking GoS (see Trunk traffic Poisson 1 percent
blocking on page 411). The estimated trunk CCS/T is 1.14 for a DID trunk group. Determine
the total trunk CCS by multiplying the number of lines by the trunk CCS/T for cutover, two-year,
and five-year intervals:
Use Trunk traffic Poisson 1 percent blocking on page 411 to determine the quantity of trunks
required to meet the trunk CCS at cutover, two-year, and five-year intervals. In this case:
• 17 DID trunks are required at cutover
• 18 DID trunks are required in two years
• 21 DID trunks are required in five years
For trunk traffic greater than 4427 CCS, allow 29.5 CCS/T.
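The table lookups in this example can be checked with a small Poisson blocking calculation (36 CCS = 1 erlang). This is a sketch of the Poisson sizing model that underlies the tables, not a replacement for them; the published tables remain authoritative:

```python
import math

def poisson_trunks(ccs, blocking=0.01, max_trunks=500):
    """Smallest trunk count whose Poisson blocking probability is at or
    below the target GoS (default: 1 percent blocking)."""
    erlangs = ccs / 36.0          # offered load: 36 CCS per erlang
    pmf = math.exp(-erlangs)      # P(X = 0)
    cdf = pmf
    for n in range(1, max_trunks + 1):
        if 1.0 - cdf <= blocking:  # P(X >= n): probability all n trunks busy
            return n
        pmf *= erlangs / n         # advance to P(X = n)
        cdf += pmf
    raise ValueError("increase max_trunks")

print(poisson_trunks(275 * 1.14))  # 17 DID trunks at cutover
```

At cutover, 275 lines at 1.14 CCS/T offer about 313.5 CCS, and the calculation confirms the example's 17 DID trunks; values very close to a table boundary can differ by one trunk from the published figures because of rounding.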
Line load
Calculate line load by multiplying the total number of TNs by the line CCS/T. The number of
TNs is determined as follows:
• one TN for every DN assigned to one or more single-line telephones
• one TN for every multi-line telephone without data option
• two TNs for every multi-line telephone with data option
Trunk load
The number of Virtual Trunks to provision is calculated by the ordering and configuration tool
as part of the Media Card provisioning calculation. See Resource calculation worksheets on
page 399 for the manual calculation.
Console load
Calculate console load by multiplying the number of consoles by 30 CCS per console.
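Worksheet-style, the three loads combine as in Table 95. The CCS/T values and the 30 CCS console figure below are the example values from this chapter, not fixed constants:

```python
def total_system_load(tns, console_load_ccs=30, line_ccs_t=6.2, trunk_ccs_t=4.1):
    """Total system load = line load + trunk load + console load,
    using the Table 95 example traffic values."""
    line_load = tns * line_ccs_t    # one TN per DN, multiplied by line CCS/T
    trunk_load = tns * trunk_ccs_t  # trunk load per the example CCS/T
    return line_load + trunk_load + console_load_ccs

print(total_system_load(275))  # about 2862.5 CCS; Table 95 rounds to 2863
```

This matches the cutover column of Table 95, where the trunk load (1127.5 CCS) is rounded up to 1128 before totaling.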
Model 1
Digitone receiver requirements Model 1 on page 414 is based on the following factors:
• 33% intraoffice calls, 33% incoming calls, and 33% outgoing calls
• 1.5% dial tone delay GoS
• no Digitone DID trunks or incoming Digitone TIE trunks
Model 2
Digitone receiver requirements Model 2 on page 415 is based on the following factors:
• the same traffic pattern as Model 1
• Digitone DID trunks or incoming Digitone TIE trunks
• Poisson 0.1% blockage GoS
Model 3
Digitone receiver requirements Model 3 on page 416 is based on the following factors:
• 15% intraoffice calls, 28% incoming calls, and 56% outgoing calls
• 1.5% dial tone delay GoS
• no Digitone DID trunks or incoming Digitone TIE trunks
Model 4
Digitone receiver requirements Model 4 on page 417 is based on the following factors:
• the same traffic pattern as Model 3
• Digitone DID trunks or incoming Digitone TIE trunks
• Poisson 0.1% blockage GoS
To determine final Media Card requirements, see Assigning loops and card slots in the
Communication Server 1000E on page 375.
Important:
Another step to consider at this point is system security. For more information, see
Avaya Access Control Management Reference, NN43001-602.
Use the calculations in Table 96: Worksheet A: Resource calculation procedure on page 399
for input into the Table 98: Worksheet C: Virtual Trunk calculation on page 403, and for input
into the Real time calculation worksheets on page 404. Worksheet A is not required for input
into the DSP calculations for a Communication Server 1000E system.
Table 96: Worksheet A: Resource calculation procedure
Table 97: Worksheet B: Detailed DSP and Media Card calculation for Media Gateway
List of tables
Trunk traffic Erlang B with P.01 Grade-of-Service on page 409
Table 103: Trunk traffic Poisson 1 percent blocking on page 411
Table 104: Trunk traffic Poisson 2 percent blocking on page 412
Digitone receiver requirements Model 1 on page 414
Digitone receiver requirements Model 2 on page 415
Digitone receiver requirements Model 3 on page 416
Digitone receiver requirements Model 4 on page 417
Digitone receiver load capacity 6 to 15 second holding time on page 418
Digitone receiver load capacity 16 to 25 second holding time on page 419
Digitone receiver requirements Poisson 0.1 percent blocking on page 421
Conference and TDS loop requirements on page 422
Digitone receiver provisioning on page 423
Trunks CCS Trunks CCS Trunks CCS Trunks CCS Trunks CCS
1 0.4 21 462 41 1076 61 1724 81 2387
2 5.4 22 491 42 1108 62 1757 82 2419
3 16.6 23 521 43 1140 63 1789 83 2455
4 31.3 24 550 44 1171 64 1822 84 2488
5 49.0 25 580 45 1203 65 1854 85 2520
6 68.8 26 611 46 1236 66 1886 86 2552
7 90.0 27 641 47 1268 67 1922 87 2588
8 113 28 671 48 1300 68 1955 88 2621
9 136 29 702 49 1332 69 1987 89 2653
10 161 30 732 50 1364 70 2020 90 2689
11 186 31 763 51 1397 71 2052 91 2722
12 212 32 794 52 1429 72 2088 92 2758
13 238 33 825 53 1462 73 2120 93 2790
14 265 34 856 54 1494 74 2153 94 2822
15 292 35 887 55 1526 75 2185 95 2858
16 319 36 918 56 1559 76 2221 96 2891
17 347 37 950 57 1591 77 2254 97 2923
18 376 38 981 58 1624 78 2286 98 2959
19 404 39 1013 59 1656 79 2318 99 2992
20 433 40 1044 60 1688 80 2354 100 3028
101 3060 121 3740 141 4424 161 5119 181 5810
102 3092 122 3776 142 4460 162 5155 182 5843
103 3128 123 3809 143 4493 163 5188 183 5879
104 3161 124 3845 144 4529 164 5224 184 5915
105 3197 125 3877 145 4561 165 5260 185 5947
106 3229 126 3913 146 4597 166 5292 186 5983
107 3265 127 3946 147 4630 167 5328 187 6019
108 3298 128 3982 148 4666 168 5360 188 6052
109 3330 129 4014 149 4702 169 5396 189 6088
110 3366 130 4050 150 4738 170 5429 190 6124
111 3398 131 4082 151 4770 171 5465 191 6156
112 3434 132 4118 152 4806 172 5501 192 6192
113 3467 133 4151 153 4842 173 5533 193 6228
114 3503 134 4187 154 4874 174 5569 194 6260
115 3535 135 4219 155 4910 175 5602 195 6296
116 3571 136 4255 156 4946 176 5638 196 6332
117 3604 137 4288 157 4979 177 5670 197 6365
118 3640 138 4324 158 5015 178 5706 198 6401
119 3672 139 4356 159 5051 179 5738 199 6433
120 3708 140 4392 160 5083 180 5774 200 6469
For trunk traffic greater than 6469 CCS, allow 32.35 CCS per trunk.
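The Erlang B figures above can be cross-checked programmatically. The sketch below is an illustration, not part of the Avaya engineering procedure: it applies the standard Erlang B recursion and the 1 Erlang = 36 CCS conversion to find the smallest trunk group that meets a P.01 Grade-of-Service.

```python
def erlang_b(trunks: int, offered_erlangs: float) -> float:
    """Blocking probability B(N, A) via the standard Erlang B recursion:
    B(0, A) = 1;  B(n, A) = A*B(n-1, A) / (n + A*B(n-1, A))."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = (offered_erlangs * b) / (n + offered_erlangs * b)
    return b

def trunks_for_ccs(load_ccs: float, gos: float = 0.01) -> int:
    """Smallest trunk group whose Erlang B blocking meets the Grade-of-Service.
    1 Erlang = 36 CCS (hundred call seconds per hour)."""
    offered = load_ccs / 36.0
    n = 1
    while erlang_b(n, offered) > gos:
        n += 1
    return n

# Example: 430 CCS of offered trunk traffic at P.01 needs 20 trunks,
# consistent with the 20-trunk / 433 CCS row in the table above.
```

For loads beyond the table, the linear rule above (32.35 CCS per trunk) replaces the recursion.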
Table 103: Trunk traffic Poisson 1 percent blocking
Trunks CCS Trunks CCS Trunks CCS Trunks CCS Trunks CCS
1 0.4 41 993 81 2215 121 3488 161 4786
2 5.4 42 1023 82 2247 122 3520 162 4819
3 15.7 43 1052 83 2278 123 3552 163 4851
4 29.6 44 1082 84 2310 124 3594 164 4884
5 46.1 45 1112 85 2341 125 3616 165 4917
6 64 46 1142 86 2373 126 3648 166 4949
7 84 47 1171 87 2404 127 3681 167 4982
8 105 48 1201 88 2436 128 3713 168 5015
9 126 49 1231 89 2467 129 3746 169 5048
10 149 50 1261 90 2499 130 3778 170 5081
11 172 51 1291 91 2530 131 3810 171 5114
12 195 52 1322 92 2563 132 3843 172 5146
13 220 53 1352 93 2594 133 3875 173 5179
14 244 54 1382 94 2625 134 3907 174 5212
15 269 55 1412 95 2657 135 3939 175 5245
16 294 56 1443 96 2689 136 3972 176 5277
17 320 57 1473 97 2721 137 4004 177 5310
18 346 58 1504 98 2752 138 4037 178 5343
19 373 59 1534 99 2784 139 4070 179 5376
20 399 60 1565 100 2816 140 4102 180 5409
21 426 61 1595 101 2847 141 4134 181 5442
22 453 62 1626 102 2879 142 4167 182 5475
23 480 63 1657 103 2910 143 4199 183 5508
24 507 64 1687 104 2942 144 4231 184 5541
25 535 65 1718 105 2974 145 4264 185 5574
26 562 66 1749 106 3006 146 4297 186 5606
27 590 67 1780 107 3038 147 4329 187 5639
28 618 68 1811 108 3070 148 4362 188 5672
29 647 69 1842 109 3102 149 4395 189 5705
30 675 70 1873 110 3135 150 4427 190 5738
31 703 71 1904 111 3166 151 4460 191 5771
32 732 72 1935 112 3198 152 4492 192 5804
33 760 73 1966 113 3230 153 4525 193 5837
34 789 74 1997 114 3262 154 4557 194 5871
35 818 75 2028 115 3294 155 4590 195 5904
36 847 76 2059 116 3326 156 4622 196 5937
37 876 77 2091 117 3359 157 4655 197 5969
38 905 78 2122 118 3391 158 4686 198 6002
39 935 79 2153 119 3424 159 4721 199 6035
40 964 80 2184 120 3456 160 4754 200 6068
For trunk traffic greater than 6068 CCS, allow 30.34 CCS per trunk.
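The Poisson figures assume the blocked-calls-held model, under which the blocking for N trunks offered A Erlangs is the upper tail of a Poisson distribution with mean A. A minimal sketch of that formula (an illustration, not the Avaya procedure):

```python
import math

def poisson_blocking(trunks: int, offered_erlangs: float) -> float:
    """Blocked-calls-held (Poisson) blocking: P(N or more calls present)
    = 1 - sum over k = 0..N-1 of e^(-A) * A^k / k!"""
    cdf = 0.0
    term = math.exp(-offered_erlangs)  # k = 0 term
    for k in range(trunks):
        cdf += term
        term *= offered_erlangs / (k + 1)
    return 1.0 - cdf

# The 20-trunk row above rates 399 CCS (399/36, about 11.08 Erlangs) at
# 1 percent: poisson_blocking(20, 399 / 36.0) evaluates to roughly 0.01.
```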
Table 104: Trunk traffic Poisson 2 percent blocking
Trunks CCS Trunks CCS Trunks CCS Trunks CCS Trunks CCS
1 0.4 31 744 61 1659 91 2611 121 3581
2 7.9 32 773 62 1690 92 2643 122 3614
3 20.9 33 803 63 1722 93 2674 123 3647
4 36.7 34 832 64 1752 94 2706 124 3679
5 55.8 35 862 65 1784 95 2739 125 3712
6 76.0 36 892 66 1816 96 2771 126 3745
7 96.8 37 922 67 1847 97 2803 127 3777
8 119 38 952 68 1878 98 2838 128 3810
9 142 39 982 69 1910 99 2868 129 3843
10 166 40 1012 70 1941 100 2900 130 3875
11 191 41 1042 71 1973 101 2931 131 3910
12 216 42 1072 72 2004 102 2964 132 3941
13 241 43 1103 73 2036 103 2996 133 3974
14 267 44 1133 74 2067 104 3029 134 4007
15 293 45 1164 75 2099 105 3061 135 4039
16 320 46 1194 76 2130 106 3094 136 4072
17 347 47 1225 77 2162 107 3126 137 4105
18 374 48 1255 78 2194 108 3158 138 4138
19 401 49 1286 79 2226 109 3190 139 4171
20 429 50 1317 80 2258 110 3223 140 4204
21 458 51 1348 81 2290 111 3255 141 4237
22 486 52 1379 82 2322 112 3288 142 4269
23 514 53 1410 83 2354 113 3321 143 4302
24 542 54 1441 84 2386 114 3353 144 4335
25 571 55 1472 85 2418 115 3386 145 4368
26 599 56 1503 86 2450 116 3418 146 4401
27 627 57 1534 87 2482 117 3451 147 4434
28 656 58 1565 88 2514 118 3483 148 4467
29 685 59 1596 89 2546 119 3516 149 4500
30 715 60 1627 90 2578 120 3548 150 4533
For trunk traffic greater than 4533 CCS, allow 30.2 CCS per trunk.
Number of DTRs DTR load (CCS) Number of DTRs DTR load (CCS)
1 0 26 469
2 2 27 495
3 7 28 520
4 15 29 545
5 27 30 571
6 40 31 597
7 55 32 624
8 71 33 650
9 88 34 676
10 107 35 703
11 126 36 729
12 145 37 756
13 166 38 783
14 187 39 810
15 208 40 837
16 231 41 865
17 253 42 892
18 276 43 919
19 299 44 947
20 323 45 975
21 346 46 1003
22 370 47 1030
23 395 48 1058
24 419 49 1086
25 444 50 1115
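Provisioning from the load-capacity table is a lookup: find the smallest receiver count whose rated CCS meets the offered DTR load. A sketch under that assumption (the transcribed list below is this example's own data structure, not an Avaya API):

```python
import bisect

# Load capacity (CCS) for 1 to 50 Digitone receivers, transcribed from the
# table above: index i holds the capacity of i + 1 receivers.
DTR_CAPACITY = [
    0, 2, 7, 15, 27, 40, 55, 71, 88, 107,
    126, 145, 166, 187, 208, 231, 253, 276, 299, 323,
    346, 370, 395, 419, 444, 469, 495, 520, 545, 571,
    597, 624, 650, 676, 703, 729, 756, 783, 810, 837,
    865, 892, 919, 947, 975, 1003, 1030, 1058, 1086, 1115,
]

def dtrs_required(load_ccs: float) -> int:
    """Smallest number of receivers whose rated capacity covers the load."""
    i = bisect.bisect_left(DTR_CAPACITY, load_ccs)
    if i == len(DTR_CAPACITY):
        raise ValueError("load exceeds the 50-DTR table; extend the table")
    return i + 1

# Example: a 100 CCS DTR load needs 10 receivers (9 receivers handle
# only 88 CCS; 10 handle 107 CCS).
```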
A
ACD (Automatic Call Distribution) ............................202
design parameters .............................................202
attendant consoles ...................................................201
design parameters .............................................201
C
E
engineering Meridian 1 systems .......................200–202
console/telephone parameters ...........................201
customer parameters .........................................200
system parameters .............................................200
trunk and network parameters ...........................202
H
hardware ...................................................................205
design parameters .............................................205
M
memory .....................................................................206
design parameters .............................................206
N
S
schedules (milestone chart) ......................................130
system parameters ...................................................200
T
telephones ................................................................201
design parameters .............................................201
trunks ........................................................................202
design parameters .............................................202