One NDS - 8
Version 8.0
Product Description
Confidential
The information in this document is subject to change without notice and describes only the product
defined in the introduction of this documentation. This documentation is intended for the use of Nokia
Siemens Networks customers only for the purposes of the agreement under which the document is
submitted, and no part of it may be used, reproduced, modified or transmitted in any form or means
without the prior written permission of Nokia Siemens Networks. The documentation has been prepared
to be used by professional and properly trained personnel, and the customer assumes full responsibility
when using it. Nokia Siemens Networks welcomes customer comments as part of the process of
continuous development and improvement of the documentation.
The information or statements given in this documentation concerning the suitability, capacity, or
performance of the mentioned hardware or software products are given “as is” and all liability arising in
connection with such hardware or software products shall be defined conclusively and finally in a
separate agreement between Nokia Siemens Networks and the customer. However, Nokia Siemens
Networks has made all reasonable efforts to ensure that the instructions contained in the document are
adequate and free of material errors and omissions. Nokia Siemens Networks will, if deemed necessary
by Nokia Siemens Networks, explain issues which may not be covered by the document.
Nokia Siemens Networks will correct errors in this documentation as soon as possible. IN NO EVENT
WILL NOKIA SIEMENS NETWORKS BE LIABLE FOR ERRORS IN THIS DOCUMENTATION OR FOR
ANY DAMAGES, INCLUDING BUT NOT LIMITED TO SPECIAL, DIRECT, INDIRECT, INCIDENTAL OR
CONSEQUENTIAL OR ANY LOSSES, SUCH AS BUT NOT LIMITED TO LOSS OF PROFIT,
REVENUE, BUSINESS INTERRUPTION, BUSINESS OPPORTUNITY OR DATA, THAT MAY ARISE
FROM THE USE OF THIS DOCUMENT OR THE INFORMATION IN IT.
This documentation and the product it describes are considered protected by copyrights and other
intellectual property rights according to the applicable laws.
The wave logo is a trademark of Nokia Siemens Networks Oy. Nokia is a registered trademark of Nokia
Corporation. Siemens is a registered trademark of Siemens AG.
Other product names mentioned in this document may be trademarks of their respective owners, and
they are mentioned for identification purposes only.
Portions of this solution are provided under licence from Orange Personal Communication Services
Limited and from Apertio Limited.
Copyright © Nokia Siemens Networks 2010. All rights reserved.
Contents
1 Overview ..........................................................................................................................7
4 Directory-based Transactions......................................................................................31
6 Schema ..........................................................................................................................39
6.1 Directory Schema............................................................................................................39
6.2 Common Data Model (CDM)...........................................................................................40
6.2.1 Data Schema ..................................................................................................................41
6.3 Schema Adaptation.........................................................................................................42
6.4 User Adaptation ..............................................................................................................43
7 Security ..........................................................................................................................45
7.1 Access Control ................................................................................................................45
7.2 Transport Layer Security (TLS) .......................................................................................46
8 Data Consistency.......................................................................................................... 47
8.1 Overview......................................................................................................................... 47
8.2 Synchronisation .............................................................................................................. 47
8.3 Replication ...................................................................................................................... 48
8.4 Reconciliation ................................................................................................................. 49
8.5 Chaining.......................................................................................................................... 49
8.6 Journals .......................................................................................................................... 50
8.7 Database Back-up and Restore...................................................................................... 51
8.7.1 Backup............................................................................................................................ 51
8.7.2 Restore ........................................................................................................................... 51
10 Provisioning .................................................................................................................. 55
10.1 Provisioning Gateway ..................................................................................................... 55
10.1.1 PGW Administration ....................................................................................................... 56
10.1.2 PGW Modularisation ....................................................................................................... 56
10.1.3 Provisioning Gateway DSA (PGW-DSA) ........................................................................ 57
10.2 LDAP Interface ............................................................................................................... 58
12 Hardware ....................................................................................................................... 71
12.1 Sun Netra X42xx............................................................................................................. 71
12.1.1 Sun Netra X4270 ............................................................................................................ 71
12.1.2 Sun Netra X4250 ............................................................................................................ 72
12.1.3 Sun Netra X4200 M2 ...................................................................................................... 73
12.2 Fujitsu Primergy RX200 Sx............................................................................................. 73
12.2.1 Fujitsu-Siemens Primergy RX200 S6 ............................................................................. 74
12.2.2 Fujitsu-Siemens Primergy RX200 S5 ............................................................................. 74
12.3 IBM Blade Center ........................................................................................................... 75
12.3.1 IBM BladeCenter H Chassis ........................................................................................... 75
12.3.2 IBM BladeCenter HT Chassis ......................................................................................... 76
12.3.3 IBM BladeCenter LS 42 .................................................................................................. 77
12.3.4 IBM BladeCenter HS22................................................................................................... 77
12.4 Racks.............................................................................................................................. 77
13 One-NDS Software........................................................................................................ 79
13.1 Extension Packages ....................................................................................................... 79
14 New Features.................................................................................................................81
14.1 Upgrading from One-NDS 7.1 .........................................................................................81
14.2 Upgrading from C-NTDB 2.0 ...........................................................................................81
References ........................................................................................................................................83
Glossary ........................................................................................................................................85
Abbreviations ........................................................................................................................................89
Summary of changes
Version Description Author Release Date
1.0 Released NSN 01 December 2008
2.0 Released NSN 10 December 2009
3.0 Released NSN 20 April 2010
4.0 Released NSN 27 May 2010
5.0 Released NSN 08 December 2010
Document Conventions
Note: Notes draw your particular attention to the information that accompanies them.
1 Overview
One-NDS, designed specifically for 2G and 3G telecommunications networks, provides an open,
centralised directory database that complies with the X.500 and LDAP standards.
The fundamental aim behind the One-NDS architecture is support for the decoupling of application
logic and data. Subscription and service data located in the directory are made available to all other
applications for query and update on a controlled and secure basis. This separation of the data
from the application, together with the provision of an open external interface to that data, removes
the problem of platform- and application-specific 'closed data' scenarios prevalent in many network
deployments.
In addition to the directory itself, One-NDS 8.0 provides new opportunities for provisioning and
trigger notification as well as operation, administration and maintenance functions.
With release 8.0, Nokia Siemens Networks is able to bring together the strengths and operationally-
based expertise acquired through the integration of Apertio Ltd in February 2008. Building on the
highly successful (Apertio) One-NDS 7.1 and (NSN) C-NTDB 2.0 releases, One-NDS 8.0 marks a
significant stage in the maturation of this technology which is deployed by operators across the
world.
The basis for efficient and personalized services delivery is a user-centric approach, which can be
attained by consolidating subscriber data in a common directory.
Subscriber data can be consolidated by transitioning from multiple subscriber data repositories to a
common subscriber directory. This contrast may be characterised as follows:
Current networks:
Multiple provisioning points
Multiple subscriber data repositories
Multiple subscriber data records
Proprietary interfaces.
Future network with a common subscriber directory:
Single provisioning point
One common repository for all subscriber data
All application front ends (FEs) operate using the common directory.
Figure 1 shows the transition from multiple repositories to a common subscriber directory.
provisioning interfaces can be adapted quickly and at low cost to new service and data
requirements.
3 One-NDS Architecture
modify subscriber profile data that needs to be forwarded to the connected session controller (for
example, SGSNs in a mobile core network environment).
Directory Schema
Entries are arranged in a tree structure, the directory information tree (DIT). This facilitates data:
Distribution
Management
Naming
Locating
The tree structure models the hierarchical relationships of the real-world objects. The tree structure
is maintained within the database in such a manner that standard directory access services can be
used to name, locate, and manage the objects.
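The hierarchical naming and locating of entries can be illustrated with a minimal sketch (the tree and the naming attributes below are hypothetical, not the actual One-NDS DIT):

```python
# A miniature DIT: each entry is named relative to its parent, and an
# entry is located by walking the tree from the root (hypothetical names).
dit = {
    "dc=example": {
        "dc=subscribers": {
            "imsi=262011234567890": {},
        },
    },
}

def locate(tree, dn):
    """Locate an entry by its distinguished name (leaf-first, as in LDAP)."""
    node = tree
    for rdn in reversed(dn.split(",")):
        node = node[rdn]
    return node
```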
Directory entry data values and object names (keys):
Data values
Directory entries consist of a set of attributes. Each attribute consists of:
A type (equivalent to a syntax in LDAP) that classifies the information represented by
the attribute
One or more values
The definition of an attribute type includes a unique identifier and the ASN.1 (abstract syntax
notation one) to which all values of the attribute must conform.
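As a rough illustration of this entry model (the attribute names are hypothetical, and real attribute values are additionally constrained by their ASN.1 definitions):

```python
# A directory entry is a set of attributes; each attribute has a type and
# one or more values (simplified sketch, not the real One-NDS schema).
entry = {
    "dn": "imsi=262011234567890,dc=subscribers,dc=example",
    "attributes": {
        "objectClass": ["top", "subscriber"],   # multi-valued attribute
        "imsi": ["262011234567890"],            # single-valued attribute
    },
}

def get_values(entry, attr_type):
    """Return all values of an attribute type, or an empty list if absent."""
    return entry["attributes"].get(attr_type, [])
```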
The entire directory, which was originally conceived to extend to cover all organizations in the
world, is partitioned or distributed into logical regions. The directory service for each region is
provided by a logical directory system agent (DSA), each of which typically contains a collection of
related objects within a single organization. Each DSA of the directory maps to a set, or cluster, of
physical directory servers (DS).
The ability to distribute the directory is essential to ensure the scalability of the implementation. The
DSAs interoperate to implement the overall directory. A user or application FE of the directory
(directory user agent - DUA) is typically assigned to a particular DSA but it can transparently access
any entity in the entire directory over this point of access.
Data Model
The modelling of the DIT is based on a general structure that can encompass all the subscriber-
related profile data of a service/network provider and can therefore service multiple-application FEs.
The DIT also enables real-time access to data as required for the subscriber data repository.
Although Nokia Siemens Networks provides a default common data model for use with NSN
applications, the data model can also be customer- or project-specific. Nokia Siemens Networks
recommends user-centric data models and follows this structure itself. Two possible data models
are described below.
The first data model is structured as follows:
Subscriber tree (subscriber-related)
The subscriber tree represents permanent subscriber data and subscriber-specific service
data. The general structure of a subscriber in a One-NDS subscriber tree is based on
multiple subscriber identifiers in which all object classes are referred to at least once. Other
types of subscribers can also be derived from this DIT.
Service tree (non-subscriber-related)
The service tree represents common service data with infrequent changes, which are
usually identical for groups of subscribers.
Because the service tree data is subscriber-independent, the data must be stored in the
directory separately from the subscriber tree (that is, on the Routing DSA, because it does
not contain a large amount of data) and referenced from the subscriber entry when needed.
This avoids unnecessary redundancies in the data model and simplifies administrative tasks
(for example, when changing the general characteristics of a service using the PGW).
The second data model is structured as follows:
Subscriber data (with service data)
The subscriber data segment sub-tree can be subdivided into bearer services (which contain
the underlying IP service needed for services), subscriber services (which contain service-
specific configuration for a particular subscriber), primary device (which contains a reference
to the subscriber’s device in the device sub-tree), sub-account (which contains sub-accounts
administered and owned by this subscriber), and so on.
Device data
A device data segment sub-tree can contain all devices registered in the network (for
example, devices for 3G and 4G interfaces).
Standard Support
One-NDS is based on the X.500 (2005) series of recommendations. More specific details are
available in [5].
One-NDS is based on LDAPv3 as defined by RFCs 4510-4519 (which replaced RFCs 2251-56,
2829-30 and 3377 in 2006). More specific details are available in [6].
[Figure: One-NDS system overview — the NetAct system and System Monitor connect over SOAP
to the Routing DSAs, PGW-DSA and Notification Manager; the Install Server, BE DSAs and LDAP
interface are administered by the One-NDS Administrator]
A typical DSA configuration has three directory servers (a DS triplet), but configurations with more
or fewer servers are also supported. Three directory servers are sufficient to meet high-availability
requirements, although more may be required to meet certain performance requirements. For
geographic redundancy, the directory servers of a DSA are installed at different sites.
The directory consists of the following logical components:
Routing DSA
Holds subscriber data location information
Works as X.500 relay server
N-server in cluster (read replica)
Scales with number of servers (mostly read traffic)
BE-DSA
Holds subscriber information
Segmentation and scaling via multiple clusters
Multiple servers in a cluster for redundancy and read performance
Triplet: primary server and secondary servers
Synchronous update in the cluster (two phase commit)
The Routing DSA cluster is essential for database access. It contains information about the
segmentation and the distribution of the database content. For application FEs activity carried out
over the PGW (such as provisioning tasks and SIM card management tasks), the Routing DSA
cluster acts as a central entry point into the database. The cluster resolves key information provided
by the application FE and automatically routes database requests towards the appropriate BE-DSA.
The Routing DSA stores access keys and references to database entries (subscriber profile data).
It only retains the information required to route LDAP requests to the BE-DSAs and subscriber-
independent common service data in the service tree (see section 3.2). The number of Routing DSs
in a Routing DSA cluster depends on the total amount of data stored in the BE-DSAs, the access
frequency (dynamic performance), and the redundancy requirements. To ensure persistence and
redundancy, at least three Routing DSs must be used. Each Routing DS contains identical data.
This means that One-NDS only contains one Routing DSA cluster with the appropriate number of
single DSs.
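The key resolution performed by the Routing DSA can be sketched as a simple lookup (the subscriber keys and BE-DSA names below are invented for illustration):

```python
# Each Routing DS holds identical key-to-BE-DSA references; resolving a
# subscriber key yields the BE-DSA that stores the profile (sketch only).
routing_table = {
    "262011111111111": "be-dsa-1",
    "262012222222222": "be-dsa-2",
}

def route(subscriber_key):
    """Resolve a subscriber key to the BE-DSA holding its profile data."""
    if subscriber_key not in routing_table:
        raise LookupError("unknown subscriber key: " + subscriber_key)
    return routing_table[subscriber_key]
```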
To read and update subscriber data, the application FE server uses LDAP. The application FE
server does not know where the data of a particular subscriber is located. A read/update request
can be sent towards any of the Routing DSs. The Routing DSA knows the location of the subscriber
data (from the key information in the read/update request) and acts as an X.500 relay server. This
way, the application FE server remains free of subscriber data location information. Some common
service data is cached in the application FE servers to avoid unnecessary access to almost
constant service data definitions for every subscriber read operation. These caches are
automatically aligned with the actual database contents whenever needed.
BE-DSA clusters are the part of the directory that contains the actual data content representing all
subscriber profile data for all network services. Each BE-DSA represents a logical construction that
stores subscriber profile data, that is, a portion of the total number of the served subscribers.
Depending on the traffic model of a specific network, the data for a much higher number of
subscribers can be stored in a single BE-DSA.
Subscriber profile data is stored in BE-DSA clusters in the form of a directory information tree (DIT),
based on a specified data model. The DIT data model consists of a general structure that
encompasses all of the subscriber profile data of a service/network provider. It can therefore
efficiently service all combinations of multi-application FEs. The subscriber profile data model also
takes real-time data access into consideration. The subscriber profile data stored on a single BE-
DSA can thus be considered as a sub-tree of the DIT, the so-called subscriber tree (see section
3.2). The subscriber profile data stored on all of the BE-DSAs represents the entirety of the DIT
available at that particular time.
The appropriate subscriber profile data can be read and updated through the requests that the BE-
DSAs receive from the application FEs over the Routing DSA cluster. The data requested in read
requests is then forwarded over the Routing DSA cluster to the requesting application FEs.
System Architecture
The directory service as defined by the X.500 standard allows any DUA to request service
from any DSA; the initial DSA relays the request via its links to neighbours so that the
request arrives at the DSA hosting the object on which the operation is requested. This can
mean that a request traverses multiple hops in the DIT until the correct DSA is located.
NDS implements a more robust and scalable solution while remaining within the bounds of the
X.500 and LDAP standards. A Routing DSA is configured for an application and holds a minimal
object (with no data) for each subscriber in the directory. This Routing DSA contains enough
directory information to be able to optimally route requests to the correct data Back End (BE) DSA
(through the X.500 subordinate references maintained by the DSA) and without any traversal of the
DIT. These references are maintained automatically by the directory implementation, removing
the need for mapping tables.
[Figure 6: The logical/directory plane, in which the Directory (DIB) is partitioned into directory
regions (DSAs), maps onto the physical/data plane of DS clusters composed of DS nodes]
Each logical region (DSA) of the directory maps onto a set, or cluster, of physical servers executing
the directory service application (as shown in Figure 6). Each server in a cluster holds an identical
copy of the data contained within the DSA, because any or all servers within the cluster may be
used to process read and update requests from external users (DUAs) of the directory.
The synchronization of the databases in each server of a cluster is handled by a proprietary
directory service mechanism. This employs a two-phase commit strategy to ensure that an update
operation is synchronously replicated to all servers before the requestor is notified of the completion
of the request. Since the protocol used to support this replication uses TCP/IP as a transport, the
constituent servers can be deployed on disparate geographical sites (as indicated by the colour
coding above). The ability to distribute components in this way ensures that the service provided by
the directory is not only fault-tolerant, but is also geographically resilient (so an entire site may be
lost without service outage).
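The two-phase commit strategy described above can be sketched as follows (a simplified, in-memory illustration; the real replication protocol runs over TCP/IP between geographically separate sites):

```python
class DirectoryServer:
    """One server of a DS cluster (minimal in-memory sketch)."""
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.pending = None

    def prepare(self, key, value):
        """Phase 1: stage the update and vote to commit."""
        self.pending = (key, value)
        return True

    def commit(self):
        """Phase 2: apply the staged update."""
        key, value = self.pending
        self.data[key] = value
        self.pending = None

    def abort(self):
        self.pending = None

def replicate_update(cluster, key, value):
    """Apply an update synchronously to every server, or to none;
    only then is the requestor notified of completion."""
    if all(server.prepare(key, value) for server in cluster):
        for server in cluster:
            server.commit()
        return True
    for server in cluster:
        server.abort()
    return False
```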
The most appropriate number of servers to be used within a cluster is a deployment issue and
depends on a number of factors, including:
Redundancy Requirements: NDS implements a load-sharing (N+k) redundancy policy,
where “N” is the number of servers necessary to handle the maximum load of the cluster
and “k” is the number of redundant servers desired by the operator. For example: an “N+1”
implementation would allow the continued operation of service in the event that one of the
constituent servers was unavailable for whatever reason (hardware failure, planned
maintenance, software upgrade, etc.).
Traffic Model: The access load on the data within the DSA and the profile of that
traffic (for example, the ratio of read to update operations). Data which is subject to a large
number of read requests but few updates (such as a number portability database) may be
hosted in a DSA with a large number of servers if required. Highly volatile data (for example:
prepaid balances, current VLR location) is more appropriately implemented on DSAs with
fewer servers.
Server Type: The type of hardware (amount of memory, speed of processor, etc.) is a factor
in designing the distributed system.
Available Sites: The number of suitable sites owned by the operator and the quality of
facilities available (such as data communications infrastructure).
The use of multiple replicas, together with the partitioning options inherent in the distribution
facilities, allows massive scalability in many dimensions.
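The N+k policy can be expressed as a simple sizing calculation (illustrative only; real cluster sizing also weighs the traffic model, server type, and available sites listed above):

```python
import math

def cluster_size(peak_load, per_server_capacity, k):
    """N+k sizing: N servers to carry the peak load, plus k redundant
    servers chosen by the operator (hypothetical capacity units)."""
    n = math.ceil(peak_load / per_server_capacity)
    return n + k
```

For example, a cluster whose peak load requires three servers would be deployed with four servers under an N+1 policy.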
At any time, one of the servers in the cluster, referred to as the primary, is responsible for handling
the update, and then replicating it to all the other servers in the cluster (secondary servers). If the
primary fails, one of the remaining secondary servers will assume the primary role. Any of the
servers can act in the primary role when required.
As all redundant sites provide the same functionality, the system operates in a load sharing
configuration. In particular, updates to the data store in NDS can be made via any site.
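The primary/secondary role handling can be sketched as follows (server names are invented; real role election is handled by the cluster software):

```python
class DsCluster:
    """Primary and secondary roles within a DS cluster (sketch)."""
    def __init__(self, servers):
        self.servers = list(servers)   # first entry acts as primary

    @property
    def primary(self):
        return self.servers[0]

    def server_failed(self, name):
        """A remaining secondary assumes the primary role on failure."""
        self.servers.remove(name)
        if not self.servers:
            raise RuntimeError("no servers left in cluster")
```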
[Figure 8: The System Monitor collects alarms from the One-NDS servers and forwards them
north-bound to NetAct or another NMS]
The Fault Management (FM) interface provides the aggregate alarms for the whole monitored
system as shown in Figure 8. It supports the following north-bound protocols:
NE3S (SOAP) for integration with NetAct
SNMP v2c for integration with another NMS.
The alarms represent active system conditions. They are raised and cleared according to defined
thresholds, as described above. It is possible to have multiple severities of the same alarm, each
triggered by its own threshold.
[Figure: The System Monitor collects performance data from the One-NDS servers for NetAct or
another NMS]
Similarly, the Performance Management (PM) interface provides the aggregate performance data
for the whole monitored system. Statistics are combined from the separate servers into overall
system counters. Counters are also maintained where appropriate for individual server statistics.
Statistic counters accumulate over a reporting period of thirty minutes. At the end of each period,
the counters are written to CSV files and then reset. The files are retained on the SM server.
For NetAct, the O&M Agent also sends the files over the NE3S interface.
The NMS is responsible for collecting the files from the SM server when they are available.
4 Directory-based Transactions
Transaction support provides the facility to perform a sequence of isolated LDAP operations across
multiple DSAs that can be applied to the database, or discarded, as a single unit. Transactions do
not affect other operations until they have been committed to the database at the end of the
transaction.
Incoming requests from client applications to the LDAP interface are Start Transaction, Transaction
Update/Read and End Transaction messages. These are responded to by similarly named
response messages. In addition, an Abort Transaction Notice message can be sent to the client
application on the LDAP interface.
The LDAP server handles the requirements of these messages that relate to their specific
transmission protocol. Requirements that are the same for both transmission protocols are
handled in the transaction layer, which administers all transactions.
Updates performed within a transaction are of lower priority than non-transactional updates. If a
non-transactional update modifies an object which has already been modified as part of a
transaction, the transaction will fail in order to allow the non-transactional update to proceed.
Transactional updates which have already been performed are of higher priority than incoming
transactional updates. If a new transactional update attempts to modify an object which has already
been updated by a different transaction, the incoming transaction will be aborted. The transaction is
aborted in its entirety (rather than just that update) to avoid deadlock where two transactions try to
update each other’s objects at the same time.
Non-transactional updates do not last as long as transactional updates, so there is less chance of
them being outstanding on an object when a transactional update arrives. Where this occurs, for
consistency with the rules outlined above, the transaction of the incoming update is aborted.
Transactional updates obtain a lock for each object they update. If they are pre-empted and lose
the lock, the whole transaction will fail cleanly, the updates rolled back and the user notified. There
may be cases where the user does not need to update a particular object, but does rely on its data
not being changed for the duration of the transaction. In this case, the transaction may lock the
object by sending an update to set the requestLock operational attribute. This stops other
transactions from modifying the object and ensures that the transaction is notified if a non-
transactional user modifies the object.
Another use of the requestLock attribute is to implement co-operative sub-tree locking. If all
transactional users agree that before modifying a certain sub-tree, they will lock an agreed object,
probably the root of the sub-tree, they can be sure that for as long as they have this lock, no other
transaction is working within the sub-tree.
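The abort-on-conflict rule can be sketched as follows (a much-simplified model; the real implementation also handles non-transactional pre-emption and the requestLock attribute):

```python
class TransactionManager:
    """Per-object locks; a transaction that hits an object locked by a
    different transaction is aborted in its entirety (sketch)."""
    def __init__(self):
        self.locks = {}    # object DN -> holding transaction id
        self.owned = {}    # transaction id -> set of locked DNs

    def update(self, txn_id, dn):
        """Try to lock and update an object; on conflict, abort the
        whole incoming transaction to avoid deadlock."""
        holder = self.locks.get(dn)
        if holder is not None and holder != txn_id:
            self.abort(txn_id)
            return False
        self.locks[dn] = txn_id
        self.owned.setdefault(txn_id, set()).add(dn)
        return True

    def abort(self, txn_id):
        """Release every lock held by the transaction."""
        for dn in self.owned.pop(txn_id, set()):
            del self.locks[dn]
```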
For operators upgrading from C-NTDB 2.0, transaction support is no longer provided by PGW-DSA.
5.1 Triggering
Triggering provides the ability to perform a set of actions when One-NDS detects that a
combination of one or more of the following conditions is true:
Operation type matches that of the database update
Object class matches that of the target object of the database update
Object DN matches the DN of the target object of the database update
Attribute type/value match values of the target object of the database update
The actions that may be performed are a combination of one or more of the following:
The delivery of a SOAP message
The raising of an alarm
The triggering configuration is constructed from the following elements in order to reduce
duplication:
A global triggering configuration object containing overload and sizing details.
Trigger objects containing condition details: a trigger will reference one or more resulting
actions.
Action objects containing reporting and onward adaptation details: an action may be
referenced by zero or more triggers and may reference zero or more trigger view
extensions.
Trigger view extension objects specifying source entries and attribute mappings to augment
the reported entry: a trigger view extension may be referenced by zero or more actions.
Trigger, action, and view configuration objects can be provisioned into the system at runtime.
Triggers and actions can be enabled and disabled individually.
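The condition matching described above can be sketched as follows (the field names are invented; the actual trigger object format is not shown in this document):

```python
def trigger_matches(trigger, update):
    """A trigger fires only when every condition it configures matches
    the database update (hypothetical field names)."""
    if "operation" in trigger and trigger["operation"] != update["operation"]:
        return False
    if "object_class" in trigger and \
            trigger["object_class"] not in update["object_classes"]:
        return False
    if "dn" in trigger and trigger["dn"] != update["dn"]:
        return False
    return True
```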
Any actions triggered by updates that are part of a transaction will be processed when the
transaction is successfully committed. Specifically, any SOAP output generated by the transaction
is combined into one message body per Notification Manager server.
For operators upgrading from C-NTDB 2.0, triggering is no longer provided by PGW-DSA.
Trigger Notifications
NDS can generate triggers when entries are modified. NTF enables these triggers to be distributed
readily to multiple applications.
As it is co-located with PGW-DSA, NTF can fan a single trigger out to several
applications without increasing the load on the directory itself, whilst also guaranteeing
delivery of the notifications.
Its distribution of notifications is flow-controlled to prevent application overload.
An application with data stored in NDS typically runs on multiple application FE servers. With NTF,
a trigger can either be broadcast to all such servers, or it can be sent just to one of them. In the
latter case, each trigger would be sent to the next server in turn, in a round-robin approach, to
distribute the load over the application’s servers.
Command Distribution
With its central point of provisioning, One-NDS can be expected to receive provisioning commands
that are to be executed by one of its applications, for example, when a subscriber needs to be
barred on a VLR by a local HLR application.
NTF distributes such commands from PGW to an application’s servers in SOAP messages. As
with triggers, a command can be sent to just one application server, with successive commands
being sent to each server in turn.
Although the anticipated use case is in distributing commands from PGW, NTF is also able to
accept such commands from any other application that implements the appropriate SOAP interface.
Trigger and notification information is held in the registry and stored in NDS. When a trigger is
generated by NDS — or when an application such as PGW sends a command — NTF distributes
the message to each registered application.
Registration
An interested client that uses NTF for notifications configures support for these via the
ADM. The ADM uses trigger definitions which are loaded into the ADM as “trigger definition
templates”. These templates contain the basic trigger definitions plus placeholders for
the tenant, the receiving client, the application (or LDAP user), and the DSA.
The standard notification information is defined via the ADM. The ADM generates the final trigger
definitions for the One-NDS directory and the subscription data for the NTF by replacing the
variables with values required for the actual deployment. The deployment specific data can be
loaded at installation or entered via the ADM.
The registration information for triggers is stored in the directory, but it is held in memory in the
NTF’s registry for quick access. The registry is populated from the directory at the start-up of the
NTF.
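The template mechanism above amounts to placeholder substitution at deployment time. A minimal sketch follows; the placeholder names and template text here are invented for illustration and are not the actual trigger definition template format.

```python
from string import Template

# Hypothetical trigger definition template; real templates define their
# own placeholder names for tenant, receiving client, LDAP user and DSA.
template = Template(
    "trigger dsa=$dsa tenant=$tenant notify=$client bind=$ldap_user"
)

# Deployment-specific values, loaded at installation or entered via the ADM.
deployment = {
    "dsa": "BE-DSA-1",
    "tenant": "operatorA",
    "client": "hlr-fe1.example.net",
    "ldap_user": "cn=hlr,ou=apps",
}

definition = template.substitute(deployment)
```

The ADM would generate one such final definition per tenant/client/DSA combination and load it into the directory and the NTF subscription data.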
Message Distribution
NTF essentially acts as a hub for SOAP messages. It receives SOAP messages from NDS
(triggers) or applications (commands, e.g. from PGW) and sends them to the intended recipients.
The messages it receives do not contain details of where the messages should be sent. Instead, it
must look up the type of message in its registry to determine the destination(s) according to
configuration and/or subscriptions.
Distribution Modes
Messages can be distributed in two modes: multi-cast and round-robin. In the former, each
message is sent to all servers that have subscribed to the notification. In the latter, a single
message is sent to only one of a group of subscribed servers; each subsequent message of the
same type is sent to the next server in the group in turn.
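The two modes can be illustrated with a small sketch; the class and method names here are ours for illustration, not the NTF API.

```python
from itertools import cycle

class Dispatcher:
    """Illustrative dispatcher for the two NTF distribution modes."""

    def __init__(self, subscribers):
        self.subscribers = list(subscribers)   # servers subscribed to one message type
        self._next = cycle(self.subscribers)   # round-robin cursor

    def multicast(self, message):
        # every subscribed server receives its own copy of the message
        return [(server, message) for server in self.subscribers]

    def round_robin(self, message):
        # exactly one server receives each message, rotating through the group
        return [(next(self._next), message)]

d = Dispatcher(["fe1", "fe2", "fe3"])
assert d.multicast("m1") == [("fe1", "m1"), ("fe2", "m1"), ("fe3", "m1")]
assert [d.round_robin(m)[0][0] for m in ("m1", "m2", "m3", "m4")] \
    == ["fe1", "fe2", "fe3", "fe1"]
```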
Guaranteed Delivery
Delivery of messages is guaranteed. Each message is stored in the local file system. It is only
removed when it has been sent to all intended recipients. If NTF fails to deliver the message to a
recipient, the delivery is rolled back and the message is queued again for a subsequent attempt.
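The spool-then-forward behaviour described above can be sketched as follows. This is a simplified model under assumed interfaces: `send` is taken to be a callable returning True on success, and the JSON spool layout is invented for illustration.

```python
import json
import os

def deliver_guaranteed(message, recipients, send, spool_dir):
    """Persist the message before any send attempt; remove the spool file
    only once every recipient has received it, otherwise keep the pending
    recipients on disk for a later attempt."""
    path = os.path.join(spool_dir, message["id"] + ".json")
    with open(path, "w") as f:
        json.dump({"message": message, "pending": recipients}, f)

    with open(path) as f:
        state = json.load(f)
    still_pending = [r for r in state["pending"] if not send(r, state["message"])]
    if still_pending:
        # roll back: keep the message spooled, queued for a subsequent attempt
        with open(path, "w") as f:
            json.dump({"message": state["message"], "pending": still_pending}, f)
        return False
    os.remove(path)   # delivered everywhere: safe to forget
    return True
```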
Flow-Control
NTF monitors the rate at which it is sending messages to each recipient, for each type of message
and overall. Maximum rates can be set for each measure. When a maximum rate is reached, NTF
throttles the messages so the limit is not exceeded.
Individual NTF servers operate independently. The maximum rates apply to each server and not
for the total set of NTF servers.
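One common way to implement this kind of throttling is a token bucket; the sketch below assumes that mechanism for illustration (the document does not specify NTF's internal algorithm), with one limiter per recipient, message type, or overall rate.

```python
import time

class RateLimiter:
    """Token-bucket throttle: refills at `max_per_second`, holds messages
    back once the bucket is empty. Names are illustrative, not NTF's."""

    def __init__(self, max_per_second, now=time.monotonic):
        self.rate = float(max_per_second)
        self.tokens = float(max_per_second)   # allow a short initial burst
        self.now = now
        self.last = now()

    def allow(self):
        t = self.now()
        # refill proportionally to elapsed time, capped at the bucket size
        self.tokens = min(self.rate, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False   # caller must hold the message back
```

A caller would queue, rather than drop, any message for which `allow()` returns False, so the limit is respected without losing notifications.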
Configuration
Configuration of NTF is performed from ADM which controls the configuration data and default
parameters held in NDS. The configuration includes the definitions of triggers and static
notifications that are associated with them. ADM also instructs NTF to reload its configuration from
NDS as needed.
6 Schema
A command line utility is provided to generate an LDIF file containing the current schema entries.
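The output format of such a utility follows LDIF (RFC 2849). The sketch below shows the format only, a minimal subset, and is not the product utility itself.

```python
def to_ldif(entries):
    """Minimal LDIF writer (RFC 2849 subset): one block per entry,
    a 'dn:' line first, then one 'attribute: value' line per value,
    with entries separated by a blank line."""
    blocks = []
    for dn, attrs in entries:
        lines = ["dn: " + dn]
        for attr, values in attrs.items():
            lines += [f"{attr}: {v}" for v in values]
        blocks.append("\n".join(lines))
    return "\n\n".join(blocks) + "\n"
```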
operations). Clear rules apply if more than one application wants to write shared data (also
MODIFY operations via variants).
Creation and deletion of objects is principally performed by Provisioning Gateway (PGW) only.
Exceptions are explicitly negotiated and specified (e.g. SIMCARD change-over). Variant objects are
explicitly managed by the PGW.
A schema merge has to be done via migration (during live operation) that creates the new object
instances (and corresponding variants if applicable) and deletes the old object instances.
[Figure 11: DIT structure: ROOT; Primary Key-Area; Subscriber-Area with cn=SUBSCRIBER (renamed: formerly sssd=SSSD, for modelling subscriber-centric data); uid=<value>]
As shown in Figure 11, the structure level elements for the primary keys (dc=…) have been
replaced. The currently used and standardized (RFC 4519) LDAP attribute and object class for
domain component are not appropriate as they are semantically and syntactically incorrect with
regard to the application purpose of these keys.
The previous structure element ‘sssd=SSSD’ is replaced by ‘cn=SUBSCRIBER’ to be in line with
the new structure elements for access (network) centric data and device centric data introduced at
the same level.
New elements are used to structure the subscriber-specific data into profile data, service data and
data for mapping the relations between primary keys and their appropriate application-specific sub-
trees. These new structure levels replace the ‘nss=NSS’ element of the legacy model:
One-NDS provides a number of facilities to overcome these types of problems, which include both
standard X.500 features and advanced features.
Relationships model real-life hierarchies and an application may require its own model, hierarchies
and views. Schema adaptations create the data views required by the application. Adaptations, or
‘variants’, are a key enabler of multi-application support.
Schema adaptation comprises a number of related functions:
Aliases and alias hiding
Variant objects
Adaptive naming
Attribute adaptation
Username adaptation
7 Security
Users and groups are defined per DSA. Users can be added as required and made members of the
appropriate groups. Group settings are customisable.
Each user can be a member of more than one group.
An NDS server requires the following information in order to make the access control decision:
Username and authentication level
Operation being performed
Access control information (ACI) associated with a target object and its attributes which
includes permissions for each operation.
Access control information which comprises user permissions and ACI building blocks is typically
set up at initial installation. These building blocks can then be referenced when application data is
configured into the database.
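The access-control decision described above, combining the user's group memberships, the operation, and the per-operation permissions in the target object's ACI, can be sketched as follows. The data layout is an assumption for illustration, not the actual ACI representation.

```python
def access_allowed(user, operation, aci):
    """Grant access if any of the user's groups is permitted to perform
    the operation according to the target object's ACI."""
    return any(operation in aci.get(group, ()) for group in user["groups"])

# Hypothetical ACI building block: group -> permitted operations
aci = {
    "provisioning": {"add", "modify", "delete", "read"},
    "applications": {"read", "modify"},
}
hlr_user = {"name": "cn=hlr", "groups": ["applications"]}

assert access_allowed(hlr_user, "read", aci)
assert not access_allowed(hlr_user, "delete", aci)
```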
1 RFC 4513 references TLS 1.1 as defined in RFC 4346 and is the ‘officially’ recommended version. However, TLS 1.0, as defined
in RFC 2246, apparently remains the de facto standard and the almost ubiquitous version currently in use. In addition, the two
main SSL libraries, available from OpenSSL and Mozilla, only claim to support TLS 1.0.
8 Data Consistency
8.1 Overview
NDS has a variety of mechanisms, both standard and non-standard, for ensuring data consistency
between all the data servers in a One-NDS environment. These comprise:
Synchronisation: gets the database in line with the primary’s database
Replication: ensures the database on the node remains identical to the primary
Reconciliation: checks secondary databases are the same as primary and distributed
references are consistent across DSAs
Chaining: sends requests across multiple DSAs
Journaling: as part of backup and restore mechanisms
8.2 Synchronisation
Synchronisation is the process by which a NDS server can join or rejoin a live cluster. The NDS
server is started from backup and journal files, and the primary automatically supplies all update
operations made since the last update held in the local journal. Once all historical updates have
been applied, the server is considered synchronised and is available for service.
Synchronisation is also used to recover from a temporary communications failure between the
primary and another of the NDS servers in the cluster. This operation is performed automatically, as
soon as the communication link is restored.
There are a number of failures that are automatically detected:
communications failure to the primary
update cannot be applied, even though it has been validated by the primary
database inconsistencies detected by background reconciliation process
software subsystem failure
Where a detected failure implies that an NDS server is no longer guaranteed to be identical to that
of the primary, the individual NDS server will withdraw its services and attempt to resynchronise.
8.3 Replication
Updates are sent to the primary node. The primary updates itself and sends a replicated update to
other nodes which respond with either success or fail.
If successful, the primary commits the update to itself, sends a commit to other nodes and sends a
successful response to the user agent.
If any secondary node reports a failure, the commit is not applied; the primary rolls back
the update and a fail message is sent.
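The replicated-update flow above can be sketched as a two-phase sequence. The `Node` class and the prepare/commit/rollback names are illustrative assumptions, not the NDS implementation.

```python
class Node:
    """Toy replica that records the calls it receives (illustrative only)."""
    def __init__(self, ok=True):
        self.ok, self.log = ok, []
    def prepare(self, update):
        self.log.append(("prepare", update))
        return self.ok                      # validate / tentatively apply
    def commit(self, update):
        self.log.append(("commit", update))
    def rollback(self, update):
        self.log.append(("rollback", update))

def replicate(update, primary, secondaries):
    """Prepare the update everywhere; commit only if every secondary
    acknowledged, otherwise roll back and report failure to the caller."""
    if not primary.prepare(update):
        return "fail"
    acks = [s.prepare(update) for s in secondaries]
    if all(acks):
        primary.commit(update)              # primary commits itself first
        for s in secondaries:
            s.commit(update)
        return "success"                    # response to the user agent
    primary.rollback(update)
    for s, ok in zip(secondaries, acks):
        if ok:
            s.rollback(update)
    return "fail"
```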
8.4 Reconciliation
Reconciliation is used to check consistency within a DSA, i.e. replicated data between primary and
synchronised secondary nodes.
It is also used when multiple DSAs hold distributed data, to check the validity of all
references on a DSA by verifying that the other end of each reference exists.
8.5 Chaining
The X.500 protocols are used as the basis of directory distribution, however the replication within
the DSA feature of NDS means that additional routing options are available when an operation is
chained.
In all cases, update operations are chained to the current primary in the target DSA.
Reads may be chained to a server on the same site or to the least utilised server on any site. The
former option should be configured where inter-site communication bandwidth is limited.
Updates are sent to the primary which checks its database, finds data is not there and forwards the
update to a remote primary where the data is mastered.
The remote primary finds data and updates itself and sends replicated update to the other nodes in
the cluster which respond with ‘success’. The remote primary commits the update to itself and
sends a commit to the other nodes. Finally, the remote primary responds to the local primary which
sends ‘success’ to the user agent.
8.6 Journals
During any replicated update, NDS will not only apply the update to the in-memory database but it
will also store details of the transaction in the in-memory journal.
Additionally, completed transactions from the in-memory journal are written out to a disk-based
journal file as a more permanent record.
In-memory journal: This is a shared memory area storing details about all transactions (circular
buffer).
Disk journal: A journal file is created each time the node starts up, at midnight, or when the file
reaches the maximum permitted size (currently 200 MB or 500,000 entries, whichever
occurs first).
The files also contain information about when the transaction was performed and by whom.
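The disk-journal rotation rule just described can be stated compactly; the function below is a sketch of that rule only (start-up is not modelled, since a fresh file is always opened then).

```python
MAX_BYTES = 200 * 1024 * 1024   # 200 MB ceiling per journal file
MAX_ENTRIES = 500_000           # entry-count ceiling per journal file

def needs_rotation(size_bytes, entry_count, crossed_midnight):
    """A new journal file is started at midnight, or as soon as either
    the size or the entry-count ceiling is reached, whichever occurs first."""
    return (crossed_midnight
            or size_bytes >= MAX_BYTES
            or entry_count >= MAX_ENTRIES)

assert not needs_rotation(1024, 10, False)
assert needs_rotation(1024, 10, True)          # midnight boundary
assert needs_rotation(MAX_BYTES, 10, False)    # size ceiling
assert needs_rotation(0, MAX_ENTRIES, False)   # entry ceiling
```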
8.7.1 Backup
Backups are automatically created once a day at a configurable number of seconds past midnight
but can also be requested from the command line at other times.
Backups are typically run simultaneously on each DSA. The backup file is ASN.1 encoded.
The backup process involves walking the directory tree and writing each object to the backup file.
To limit the impact of the backup, the database remains available for normal activities throughout
the backup process.
As a result, updates could be occurring to areas of the tree already backed up, introducing
inconsistencies in the backup file. These inconsistencies are resolved at restore time by
replaying, from the journal files, the updates that occurred during the backup.
In addition to the on-server disk backups, a second type of daily backup to an external B&R server
can be performed, allowing a copy of the backup files to be stored elsewhere. This is supported by
the EMC Legato Networker agent, which replicates the backups to a separate B&R server. EMC
Legato Networker is supported by @Com and NetAct.
8.7.2 Restore
The restore process needs the backup and journal files for the transactions made during the
backup in order to:
restore the schema and load objects from the main backup
resolve any inconsistencies with objects moved during the backup using the information in
the supplementary backup sections.
replay from the associated journal any transactions that occurred during the backup.
At this point the database is restored to the moment that the backup completed and is consistent:
the node then only needs a small number of transactions to be retransmitted by the primary during
synchronisation.
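The restore sequence above, loading the backed-up objects and then replaying the journal window, can be sketched as follows. The record layout (dn/attrs/op/timestamp) is assumed for illustration; the real backup and journal formats are ASN.1 encoded.

```python
def restore(backup_objects, journal, backup_start, backup_end):
    """Load the backed-up objects, then replay every journalled transaction
    timestamped during the backup window, so updates that raced the tree
    walk are re-applied and the database is consistent as of backup end."""
    db = {dn: dict(attrs) for dn, attrs in backup_objects}
    for txn in journal:
        if backup_start <= txn["time"] <= backup_end:
            if txn["op"] == "modify":
                db.setdefault(txn["dn"], {}).update(txn["attrs"])
            elif txn["op"] == "delete":
                db.pop(txn["dn"], None)
    return db
```

After this replay only the transactions made since the backup completed remain to be retransmitted by the primary during synchronisation.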
9 Overload Protection
10 Provisioning
Subscriber data is stored in a single central repository which provides the opportunity to have a
single provisioning point. In One-NDS, the Provisioning Gateway (PGW) acts as this single point for
provisioning.
The PGW supports a recognised standard interface for provisioning commands from the customer
CRM system: SPML (Service Provisioning Markup Language), an XML-based standard released
by the OASIS group.
Using SPML allows One-NDS to decouple provisioning from the detailed directory model; SPML
is more abstract than the directory representation, which is optimised for fast access by the
applications. SPML presents a unified subscriber view to the provisioning system, which can then
operate on a subscriber with all its services (e.g. HSS, HLR, AAA).
On a project-specific basis it is also possible to use a combination of other provisioning
mechanisms or approaches (such as the LDAP interface for a single application, or LDAP
methods), but this runs the risk of compromising data consistency and uniqueness.
Interface Versioning.
PGW supports different interface versions in parallel. This allows smooth introduction
of new features.
Mass provisioning.
PGW is an optional component within the One-NDS system and provides a flexible, standards-
based approach to integration of operators’ existing provisioning engines and new applications that
use One-NDS for data storage.
Tasks from, for example, a customer relationship management system (CRM) or a SIM card
management system are forwarded to the PGW where they are processed and forwarded to One-
NDS via an internal LDAP interface. These systems would use the PGW SPML interface for
providing subscriber specific data and services as well as SIM card data. The CRM might use the
SPML interface via SOAP/HTTP and the SIM card management might use bulk files containing the
SPML document.
The PGW supports SPML over different interfaces: an online interface (SPML over SOAP)
supporting single and batch requests, and a bulk interface over SFTP. The bulk interface accepts a
file containing the requests to be executed.
The PGW supports a number of different provisioning use cases, which the operator can adopt
either wholesale or selectively. Single SPML requests such as add, modify and
delete are sent independently via the SOAP interface (online). A batch request contains several
single SPML requests (a series of requests within an "envelope", which can be handled as a single
transaction) and is likewise sent via the SOAP interface (online). Only a single
response is provided to a batch request, detailing the response for each of the requests it
contains.
The bulk request supports operations on more than one object in the One-NDS and is used for
mass operations. A Bulk Request can be specified in two ways:
By a filter condition – the provided filter identifies all the objects that have to be
modified/deleted.
By a list of identifiers – on all the objects with the provided identifiers the modify/delete is
performed. Identifiers are stored in an external file.
An extended bulk request allows execution of special logic, which has to be implemented for
specific use cases that can’t be expressed via other standard request types.
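The two bulk-request addressing modes above, by filter condition or by identifier list, can be sketched as a target-selection step. The dict-based directory and the spec layout are assumptions for illustration.

```python
def select_targets(directory, spec):
    """Return the identifiers of the objects a bulk request applies to:
    either every object matching the filter predicate, or the explicitly
    listed identifiers that actually exist in the directory."""
    if "filter" in spec:
        return [oid for oid, obj in directory.items() if spec["filter"](obj)]
    return [oid for oid in spec["identifiers"] if oid in directory]

directory = {"u1": {"svc": "hlr"}, "u2": {"svc": "hss"}, "u3": {"svc": "hlr"}}

# mode 1: a filter condition identifies all objects to be modified/deleted
assert select_targets(directory, {"filter": lambda o: o["svc"] == "hlr"}) \
    == ["u1", "u3"]

# mode 2: a list of identifiers (e.g. read from an external file)
assert select_targets(directory, {"identifiers": ["u2", "u9"]}) == ["u2"]
```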
Some applications are provided with their own dedicated Web-based GUI for subscriber
administration. Supervision and modification of batch file execution is also provided by the GUI.
through a chain of AMs, which can be statically configured for each plug-in version. The AMs are
completely independent of one another from a build and installation perspective; that is, it is
possible to add, remove, update, and exchange any AM in the chain irrespective of other AMs.
Module controllers invoke the AMs in a sequence as defined during plug-in configuration.
Therefore, each of the above extensions can be part of a different plug-in or can be an extension of
the same plug-in. That is, a newly developed AM can be part of existing plug-in or it can be part of a
new plug-in. Furthermore, the plug-in configuration (PC) file controls the configuration of the plug-in,
that is, a configuration file lists all AMs applicable for a plug-in version and the chain to invoke the
AMs during request processing.
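The module-controller behaviour just described amounts to passing a request through a configured chain. The sketch below assumes simple callable modules and an invented handler signature; the real AM interface is not specified here.

```python
def run_chain(request, chain, modules):
    """Invoke the AMs in the sequence defined by the plug-in configuration:
    each module receives the request and returns a (possibly transformed)
    request for the next module in the chain."""
    for name in chain:                      # order comes from the PC file
        request = modules[name](request)
    return request

# Hypothetical AMs: independent units that can be added or exchanged
modules = {
    "validate": lambda r: {**r, "validated": True},
    "map_to_ldap": lambda r: {**r, "ldap": "uid=" + r["subscriber"]},
}

out = run_chain({"subscriber": "42"}, ["validate", "map_to_ldap"], modules)
assert out["validated"] and out["ldap"] == "uid=42"
```

Because each module only depends on the request it receives, any AM in the chain can be added, removed, or exchanged irrespective of the others, which mirrors the independence property described above.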
The plug-in versioning concept is intended to cover the following basic operational use
cases, arising either from extension of an existing application or from introduction of a new one:
Changes in the NDS LDAP schema/DIT only, which are not reflected at the SPML interface.
Changes in the SPML schema only, which do not reflect any changes in NDS. Changes at the
SPML interface can only affect the application schema; that is, data items, not the requests.
Changes in both the SPML schema and the NDS LDAP schema/DIT.
The best example of this type of change is the introduction of a new service that is also supported
at the SPML interface. It can be assumed that this use case is the one used most.
The management associated with PGW modularisation supports the configuration of application-
specific modules. With the help of PGW plug-ins, it is possible to customise modifications of LDAP
directory server schema and/or version of the SPML interface. A PGW plug-in contains AM
packages, application support (AS) packages, and a PC package. A single AM can be re-used for
several plug-ins or versions of the SPML interface and an AM can have its independent versioning
rules. The PC package controls the configuration of the plug-in, that is, a configuration file lists all
AMs applicable for a plug-in version and the chain to invoke the AMs during request processing.
Thus a PC package refers to several AS packages and, for example, different PC packages can
contain one and the same AM. PGW plug-ins are provided as separate Red Hat Package
Manager (RPM) packages. The PGW module framework allows the execution of the modules
according to the following module configurations, which can be defined at two levels:
A global configuration file defines the module list for a plug-in version and also the sequence
in which the modules are invoked during request processing.
A module level configuration file called the deployment descriptor defines the module
specific settings.
Furthermore, it is also possible to have multiple modules per application, that is, it is possible to
design functional "delta modules", which contain only the delta functionality.
During SPML processing a plug-in controller executes the different modules in a defined sequence.
This sequence is configured per SPML Plug-in. One module version may be reused in different
Plug-ins. This means that the final SPML interface is defined at deployment time, where all SPML
contributions from all PGW modules are “merged” into the SPML interface offered by the specific
plug-in. The SPML interface can then easily be enhanced by adding a new module and changing
the configuration. A new module may be added for support of additional attributes for existing
applications as well as for complete new applications.
It should be noted that two of the functions of PGW-DSA in C-NTDB 2.0 – notably transaction
support and triggering – are now provided by the One-NDS Directory itself (see Chapters 4 and 5).
In a fully configured One-NDS system, PGW-DSA is co-located with NTF.
DSA Management
Using the new “One-NDS setup in one step” procedure in One-NDS 8.0, all NDS
components can be defined one-by-one on a single Web page and the configuration itself
can be executed all in one step.
The introduction of a new DSA can be entered on a single Web page and executed in a
single step. This provides an efficient way of entering information, and it also offers an
advantage in that the complete set of data entered can be verified before it is applied to
NDS. Directory management reporting enables the operator to obtain an overview of the
directory’s operational parameters.
Consistency Check
Data Management
Data management enables the operator to administer server-specific data for the individual DSAs.
Data sets are composed by adding/importing one or more LDIF data files, for example, subscriber
profile data definitions. Further, data can be configured per tenant or for a corresponding
application.
Subscriber data has to be distributed over the entire directory, that is, both Routing and BE DSAs.
Subscriber data may need to be moved if a particular server is approaching its limit for traffic-
handling capacity, while other servers have available resources. One-NDS supports moving
individual data sets between subscriber data directory servers.
In connection with the relocation of subscribers over the DSAs, operator actions are automatically
accompanied by a mechanism that eliminates incorrect subscriber distribution, thus reducing the
operator workload considerably. It also ensures that the stability of the active network is not
endangered in any way.
Monitoring of subscriber data enables the operator to monitor the fill level of the subscriber data
directory in terms of the number of subscribers (of subscriber profile data) and of memory usage.
The NDS directory servers must know the LDAP clients. Therefore, LDAP clients must authenticate
themselves before they are given access to NDS.
An LDAP user is any application that accesses NDS over LDAP. In other words, any application
that accesses NDS over LDAP needs an LDAP user account.
Job Management
ADM provides job management, which is used to check the processing status of jobs after starting
administration tasks which create a job.
Log Management
ADM provides a log file browser component, which is used to retrieve log files from ADM as well as
from any other host, for example, the directory servers.
The implemented emergency actions cover both disaster recovery (system backup/restore) and
software fallback.
PGW Control
ADM provides the ability to start, stop, and restart PGW application modules running on PGW
servers on demand. These PGW servers must first be registered in the ADM database.
The ADM GUI (PGW administration part) serves as an administration front-end application for the
PGW. ADM GUI provides a user interface to modify the configuration data at runtime.
PGW project and office data can be managed, for example feature flags, configurable parameters
that denote feature behaviour, and basic customer project settings.
NTF Control
The ADM provides the ability to start, stop, and restart NTF applications running on a PGW DSA
server on demand.
Registry Data
Registration
Registration allows adding (and deleting) notification receivers to existing notification instances.
Command Interface
A few commands are available for controlling System Monitor, for example: start, stop, and
reconfigure. These are executed from the command line.
Configuration
Logs
Several log files are maintained on both the clients and the SM server. These include:
Operational activity, including errors, heartbeat messages, etc.
All events on the clients, whether they are notified to the server or suppressed
Optionally, the north-bound alarms that are sent to the NMS.
A mechanism is provided to archive old log files.
System Monitor is used with NetAct or a 3rd party NMS. System Monitor is not required with
deployments supported by @Com.
It provides the NMS with a single, simple interface instead of one for each of One-NDS's
many constituent servers.
It aggregates data according to the logical and physical components of the system to which
they relate e.g. DSA.
It gives information relevant to the overall system, not just a collection of data from individual
servers within the system.
11.2.1 Architecture
The architecture is shown in Figure 23.
[Figure 23: System Monitor architecture: System Monitor sits between NetAct (or a third-party NMS) and the One-NDS servers]
Fault Management
NetAct presents alarms from One-NDS, allowing effective real-time monitoring of the One-NDS.
Performance Management
Performance management involves monitoring the performance of directory read and update
operations and comparing these with response time limits. One-NDS can passively measure
transaction rates and actively stimulate test transactions as well as measure the response times
compared with defined limits. Service level agreements (SLAs) can be tracked and remedial action
taken to maintain performance levels.
B&R facilities enable the directory data to be permanently and securely stored. In the very unlikely
case of a complete system failure, a backup enables the operator to resume operation from the
point in time at which the last backup was performed. One-NDS is designed so that it can be
integrated with the B&R system within NetAct. EMC Legato Networker agents on the Directory
servers interface to the B&R system.
For detailed information about OAM, see the One-NDS User Manual or the documentation and
Online Help for NetAct.
Fault Management
Fault management involves monitoring alarms and taking the appropriate action. One-NDS is fully
integrated into a common multi-layered network/component management architecture and features
network/component management. This network/component management can provide a network
view of the Directory servers, collect alarms, and pre-correlate alarms to reduce traffic towards an
enterprise management system.
Performance Management
Performance management involves monitoring the performance of directory read and update
operations and comparing these with response time limits. One-NDS can passively measure
transaction rates and actively stimulate test transactions as well as measure the response times
compared with defined limits. Service level agreements (SLAs) can be tracked and remedial action
taken to maintain performance levels.
B&R facilities enable the directory data to be permanently and securely stored. In the very unlikely
case of a complete system failure, a backup enables the operator to resume operation from the
point in time at which the last backup was performed. One-NDS is designed so that it can be
integrated with the B&R system within @vantage Commander (@Com). EMC Legato Networker
agents on the Directory servers interface to the B&R system.
Configuration Management
Software Management
Software Management (SWM) is handled by the Install Server within One-NDS. @Com supports
secure shell access to the Install Server to allow the operator to use the SUFDirector software
management scripts to prepare network component software updates (including downloading/importing
software packages to the INS), view logs, and display the software status of network components.
Inventory Management
Inventory management is used to provide previously scanned inventory information about the
various static resources of managed One-NDS network components and the network environment.
Static resources consist of hardware equipment as well as the software installed at the network
components.
Security Management
@Com provides role-based user management coupled with a sophisticated authorization concept.
It also supports an interface for central external user management.
For detailed information about OAM, see the One-NDS User Manual or the documentation and
Online Help for @vantage Commander.
12 Hardware
The current reference hardware platform for all One-NDS components is the Sun Netra
X4250/X4270. Alternate hardware supported includes Fujitsu-Siemens Primergy RX200
S3/S4/S5/S6, RX600 S4 and S5 and IBM Bladecenter.
Item Type
Processor 2 x Intel Xeon Quad-Core CPU
Memory Max 64GB RAM (16 DIMM slots)
Disk capacity Max 4 x 146GB SAS HDD
Network 4 x Gigabit Ethernet ports
I/O 2 x PCI-X and 3 x PCI Express slots
Footprint 2 RU
Power Redundant AC or DC 650W PSU
12.4 Racks
The Sun rack II 19” cabinet (AC/DC) and the FSC 19” Primecenter rack are used for hardware
configurations based on the Sun Netra X4200, X4250, X4270 or the FSC RX200 S3/S4/S5/S6
respectively.
IBM 42U
The Cisco 3560 or Juniper EX3200-48T is utilized for internal communication in AC powered racks
and the Cisco 4948 or Juniper EX3200-48T in DC powered racks.
13 One-NDS Software
This is the technical data on the main functional elements of the One-NDS software, which runs on
SUSE Linux Enterprise Server 10.
The following software is installed on the servers of these product components:
Component Software
NDS NDS software providing the Routing DSA and BE-DSA functions.
NTF Notification Manager software as well as OEM software including J2SE
(Linux version), Jakarta Tomcat, Mule, Apache
PGW DSA PGW-DSA as well as OEM software including J2SE (Linux version),
Jakarta Tomcat, Mule, Apache
PGW PGW as well as OEM software including scientific use file (SUF) agents,
J2SE (for Linux), Apache XML security, XML data types library, XML
commons, Java architecture for XML binding (JAXB), Java system XML
streaming parser (SJSXP)
ADM ADM as well as OEM software including scientific use file (SUF)
agents, JBoss application server, Sun JDK, Apache MyFaces,
Ganymed SSH, Mozilla Directory SDK
INS Install Server, as well as OEM software
SM System Monitor and OEM software including MySQL, Ganymed SSH,
Java SDK, Java SCTP Library, Axis Java, and MySQL Java connector
Customer Customer application, customer application version
Each component of an extension package is optional, but at least one component must be included
in a package. The set of components which may be part of an extension package (and not part of
the core medium):
Data model (schema and data)
ADM context definition
PGW/NTF configuration (schema and data)
SOAP trigger definition
User definition (LDAP users, PGW OS users)
Old style PGW plug-in (One-NDS 8.0 binary compatible)
PGW application modules/application support/plug-in configuration (AM/AS/PC)
System Monitor configuration for application specific counters
The extension packages are cumulative. This means that the latest correction version of a package will
always contain the complete set of functionality (for example, full data model, latest PGW modules, full
trigger definition files etc.) Each latest extension package (except the initial version) supports an update
procedure from each previously released correction version to the latest version (for example, over a series
of delta steps per each correction release). The core medium is able to support this update mechanism.
Extension package development follows different software delivery cycles from the core medium. Therefore it
has its own deliverable software, packaged as a Unix-compatible tape archiver (TAR) file including all
components bundled together. The package carries a versioning scheme that distinguishes
correction releases (maintenance version) and supports versioning (at least major and minor version).
14 New Features
References
[1] One-NDS Version 8.0 Developers Guide
[2] One-NDS Version 8.0 Directory Information Tree
[3] One-NDS Version 8.0 Operations and Maintenance Guide
[4] One-NDS Version 8.0 Directory Interfaces and Protocols
[5] One-NDS Version 8.0 X.500 Support
[6] One-NDS Version 8.0 LDAP Support
[7] RFC 2246: The Transport Layer Security (TLS) Protocol Version 1.0, January 1999
[8] RFC 4511: Lightweight Directory Access Protocol (LDAP): The Protocol, June 2006
[9] RFC 4512: Lightweight Directory Access Protocol (LDAP): Directory Information Models,
June 2006
[10] RFC 4513: Lightweight Directory Access Protocol (LDAP): Authentication Methods and
Security Measures, June 2006
[11] One-NDS Directory Version 8.0 Alarm Guide
Glossary
This glossary does not contain all the terms associated with the abbreviations in the abbreviations
list.
3*N redundancy 3*N redundancy means site redundancy. 3 is the number of geographical sites
and N is the number of servers required for the network-specific load.
Abstract syntax notation one (ASN.1) The standard and flexible notation ASN.1 describes the abstract syntax of
information, that is, the data structures for representing, encoding, transmitting,
and decoding data. ASN.1 facilitates the exchange of structured data, especially
between network-wide application programs. ASN.1 is independent of machine
architecture and implementation language and is used by application layer
protocols (for example, by X.500).
American standard code for information interchange (ASCII): ASCII is a character encoding based on the
English alphabet. ASCII defines several control characters, for example for devices such as printers, and 95
printable characters including the space character, numbers, the English
alphabet, and some special characters. It does not include country-specific
characters, such as German umlauts.
Apache: Apache is a freely available Web server that is distributed under an “open source”
license. Version 2.0 runs on most UNIX-based operating systems (such as Linux,
Solaris, Digital UNIX, and AIX), on other UNIX/POSIX-derived systems (such as
Mac OS X, BeOS, and BS2000/OSD), on AmigaOS, and on Windows 2000.
Application programming interface (API): API is an interface with commands and perhaps routines and/or
macros that is offered by an operating system or an operating system extension. Application
programs can use such interfaces to cause the operating system to perform the
provided functions. A well-known API is the common application programming
interface, which is used in ISDN.
Cluster: In a cluster, multiple computers (typically PCs or UNIX workstations), multiple
storage devices, and redundant interconnections are used to form a single, highly
available system for the user. A cluster can be used for load balancing as well as
for high availability. Advocates of clustering suggest that in some cases, the
approach can help an enterprise to achieve 99.999% availability. One of the main
ideas of clustering is that the cluster appears as a single system to the outside
world.
Comma-separated values (CSV): Files in the CSV format are ASCII files which are often used to extract
database contents and transfer them into another database. The single fields are separated
by commas. The common format and MIME type for CSV files is defined in RFC
4180.
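As an illustration, Python's standard csv module reads and writes comma-separated records in the RFC 4180 style. The column names below are invented for the example and do not come from the One-NDS data model.

```python
import csv
import io

# Hedged example: round-trip two database-style records through CSV text.
# The column names are illustrative only.
rows = [["subscriber_id", "msisdn"],
        ["1001", "+49123456"],
        ["1002", "+49654321"]]

buf = io.StringIO()
csv.writer(buf).writerows(rows)                          # serialize, comma-separated
parsed = list(csv.reader(io.StringIO(buf.getvalue())))   # parse the text back

assert parsed == rows  # the round trip preserves every field
```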
Core network: The core network is the actual backbone network. It can be a mobile radio
network, telephone network, integrated services digital network, or data network to
which the access networks are connected. The core network provides
connections with high transmission rates over the entire supported area.
Directory access protocol (DAP): DAP is a network standard protocol introduced by the ITU-T and the
International Organization for Standardization in 1988. Since then, DAP’s basic operations have been
implemented in LDAP.
Directory operational binding management protocol (DOP): DOP defines how the DSAs establish connections
between each other. This protocol also defines which server stores what information and the functions that
each machine has.
Directory service mark-up language (DSML): DSML represents directory service information in XML syntax.
DSML version 1 is a document type definition for files which contain XML representations of LDAP
directory entries. DSML version 2 is an XML schema for the representation of
directory access operations. These operations are based on those of LDAP and
can be carried in SOAP.
Ethernet: Ethernet is the most widely-installed local area network (LAN) technology.
Specified in a standard, IEEE 802.3, Ethernet was originally developed by Xerox
and then developed further by Xerox, DEC, and Intel. An Ethernet LAN typically
uses coaxial cables or special grades of twisted pair wires. Ethernet is also used
in wireless LANs. The most commonly installed Ethernet systems are called
10BaseT and provide transmission speeds of up to 10 Mbit/s. Devices are
connected to the cable and compete for access using a carrier sense multiple
access with collision detection (CSMA/CD) protocol.
Extensible mark-up language (XML): XML is a data format language consisting of structured text containing
embedded tags. XML is an extensible, hierarchical data format that is both human and
machine readable. It is a standard which is processed by parsers, such as the Simple
API for XML (SAX). XML only describes the content and does not take the
presentation into consideration. XML is used in publishing, data exchange, e-business,
and integration servers.
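A minimal example with Python's built-in xml.etree parser shows the tag-and-content structure described above; the element names are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hedged sketch: parse a small XML document. The tag names are
# illustrative only, not taken from any One-NDS schema.
doc = "<subscriber><id>1001</id><status>active</status></subscriber>"
root = ET.fromstring(doc)

print(root.tag)                  # subscriber
print(root.find("status").text)  # active
```

Note how the document carries only content and structure; any presentation (fonts, layout) would have to be supplied separately, for example by a stylesheet.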
File transfer protocol (FTP): The open standard FTP is based on TCP/IP. It is used to transfer files between
computers. Usually, an FTP server and an FTP client with client software are
involved (see RFC 0959).
Gateway: Hardware and software used to connect different networks with one another after
performing protocol conversion or data security checks, thus enabling
communication between the connected systems. A gateway has the task of
conveying messages from one computer network to another, whereby
translation of the communication protocols is necessary. A gateway can thus be
thought of as a protocol converter. For the purpose of performing these tasks, a
gateway contains a specially developed computer.
General Packet Radio Service (GPRS): Introduced as a standard by the European Telecommunications
Standards Institute, GPRS is now standardized by 3GPP. The technology is often referred to
as 2.5G, that is, a mobile communications system between 2G and 3G. Now,
GPRS is integrated into the GSM standards. GPRS is packet-switched, that is,
multiple users share the same transmission channel when sending data. The
bandwidth can be fully utilized because it is only used intermittently for the actual
data transmission.
Gigabit Ethernet: A transmission technology based on the Ethernet frame format and protocol used
in local area networks (LANs) that provides a data rate of 1 Gbit/s. Gigabit
Ethernet is defined in the IEEE 802.3 standard and is currently being used as the
backbone in many enterprise networks.
Global System for Mobile communications (GSM): GSM is an international standard for mobile communications
and the so-called “second generation” (2G) of mobile communications. GSM is currently developed
by 3GPP. Open standards make interoperability easy, which means that service
providers can use equipment from different vendors.
High availability: High availability refers to a system or component that is continuously operational
for a desirably long length of time. Availability can be measured relative to “100%
operational” or “never failing”. A widely-held but difficult-to-achieve standard of
availability for a system or product is known as “five 9s” (99.999 percent)
availability.
Hypertext transfer protocol secure (HTTPS): HTTPS is essentially the same as HTTP, except that the
transmission of data is encrypted. To enable HTTPS connections on a Web server, the
administrator must prepare certificates for the Web server. Additionally,
certificates can be used to limit Web server access to authorized users. (For
the alternative S-HTTP, see RFC 2660.)
International Telecommunication Union (ITU): ITU is an international union of public and private organizations.
ITU’s aim is to develop and enhance telecommunication standards. ITU was founded as the
International Telegraph Union in Paris on 17 May 1865. ITU is one of the
specialized agencies of the United Nations and has its headquarters in Geneva,
Switzerland.
Internet protocol (IP): IP is the lowest protocol in the TCP/IP stack and therefore represents its basis. IP
is the main component of the open systems interconnection (OSI) layer 3 in this
stack, that is, it is the network layer component of the TCP/IP stack. It
implements the functions necessary for the transmission of data between Internet
terminals, which can be present in different combinations in different transmission
networks.
LDAP data interchange format (LDIF): LDIF is typically used to import and export directory information between
LDAP-based directory servers or to describe a set of changes that is to be applied to a
directory. This data is contained in LDIF files (plain text), each of which contains a
series of records. A record describes a directory entry or a set of changes to a
directory entry. See RFC 2849.
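The record structure can be sketched with a deliberately simplified parser. This toy ignores RFC 2849 features such as line folding, base64 values, comments, and changetype records, and the entry contents are invented for the example.

```python
# Hedged sketch: parse one simplified LDIF record into attribute lists.
# Real LDIF (RFC 2849) additionally allows folded lines, base64 values,
# comments, and change records, which this toy parser ignores.

def parse_ldif_record(text):
    entry = {}
    for line in text.strip().splitlines():
        name, _, value = line.partition(": ")   # attribute name, then value
        entry.setdefault(name, []).append(value)
    return entry

record = """dn: cn=Example User,ou=people,o=example
objectClass: person
cn: Example User
sn: User"""

entry = parse_ldif_record(record)
print(entry["dn"][0])        # cn=Example User,ou=people,o=example
print(entry["objectClass"])  # ['person']
```

Attributes map to lists because an LDIF record may repeat an attribute name with several values.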
Lightweight directory access protocol (LDAP): LDAP is a networking protocol, running over TCP/IP, to query
and modify directory services. The protocol accesses LDAP directories that conform to the X.500
model.
N + k redundancy: N + k redundancy is an availability concept, where N is the number of servers
necessary to perform business operations and k is the number of redundant
servers. As many as k servers can fail without impact on the business.
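The concept amounts to simple arithmetic, as this short sketch shows (the server counts are example numbers only):

```python
# Hedged illustration of N + k redundancy: N servers carry the load,
# k spares are deployed, and up to k servers may fail before the
# business load can no longer be served.

def deployed_servers(n_required, k_spare):
    """Total servers installed for an N + k deployment."""
    return n_required + k_spare

def tolerable_failures(n_required, k_spare):
    """Failures survivable while N working servers remain."""
    return k_spare

# Example: 4 servers carry the load and 2 spares are deployed.
print(deployed_servers(4, 2))    # 6
print(tolerable_failures(4, 2))  # 2
```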
Secure shell (SSH): SSH is both a program and a network protocol to log in to a remote computer and
to execute tasks on the remote computer. The SSH protocol provides secure
encrypted communications. For secure file transfer, SSH is usually used with
sFTP (see RFC 4251 for the SSH-2 architecture).
Secure file transfer protocol (sFTP): The acronym sFTP stands for SSH file transfer protocol; it is also called
secure FTP. The network protocol sFTP is used for secure file transfer and for access to
remote computers over the SSH protocol.
Service provisioning mark-up language (SPML): SPML is an XML-based framework that has been developed
by the OASIS Provisioning Services Technical Committee (PSTC). SPML is used to query and
exchange user, account, and resource provisioning requests.
The SPML protocol is an open standard protocol for the integration and
interoperation of service provisioning requests.
Simple network management protocol (SNMP): SNMP is a widely-used network monitoring and control protocol
(on the application layer). Data is forwarded from SNMP agents that reside in the network
devices and report to the main console used to supervise the network. The
agents return information that is contained in the management information base
(MIB). SNMP is part of the TCP/IP protocol suite. It enables network
administrators to manage network performance, find and solve network problems,
and plan for network growth.
Simple object access protocol (SOAP): Originally, SOAP was an acronym for simple object access protocol.
The full name was dropped in specification version 1.2 because the focus shifted from
object access to object interoperability. According to the World Wide Web
Consortium recommendation (June 2003), SOAP version 1.2 is a lightweight
protocol intended for exchanging structured information in a decentralized,
distributed environment. Using XML technologies, it defines an extensible
messaging framework containing a message construct that can be exchanged
over a variety of underlying protocols.
Third Generation Partnership Project (3GPP): 3GPP is an association of organizations that plays a leading role
in developing standardized technical specifications and coordinating the implementation of
future UMTS mobile phone networks and Internet Protocol Multimedia Subsystem
(IMS) networks. Its partners cooperate to produce globally applicable technical
specifications, as well as technical reports for a third-generation mobile system
based on evolved GSM core networks and the radio access technologies that
they support (for example, universal terrestrial radio access (UTRA), as well as
frequency-division duplex (FDD) and time-division duplex (TDD) modes).
Transmission control protocol (TCP): TCP defines a series of standards that regulate data transfer in the
Internet. Internal computer networks, such as intranets, are also based on this standard, and
therefore Internet access is possible. The layers above IP are responsible for
reliable delivery service when it is required. TCP is the primary virtual-circuit
transport protocol for the Internet suite. TCP provides reliable, in-sequence
delivery of a full-duplex stream of octets. TCP is used by applications
needing a reliable, connection-oriented transport service, such as mail (SMTP),
file transfer (FTP), and virtual terminal service (Telnet).
X.500: X.500 is a standard for directory services. X.500 components manage information
about objects and make it possible to search for information. This information is
contained in a directory information base (DIB). DIB entries are uniquely identified
by their distinguished name (DN) and are arranged in a tree structure, the
directory information tree (DIT). A directory schema defines the attributes of each
object class.
Abbreviations
2G 2nd generation
3G 3rd generation
3GPP 3rd Generation Partnership Project
ACID atomic, consistent, isolated, durable
ADM One-NDS Administrator
API application programming interface
ASCII American standard code for information interchange
ASN.1 abstract syntax notation one
B&R backup and restore
BE-DSA back-end DSA
BSS business support system
CAPEX capital expenditure
CCC customer care center
CFG configuration block
CMS core mobility server
CN core network
C-NTDB common network technology database
CORBA common object request broker architecture
CPU central processing unit
CRM customer relationship management
CSV comma-separated values
DAP directory access protocol
DIB directory information base
DIT directory information tree
DN distinguished name
DOP directory operational binding management protocol
DS directory server
DSA directory system agent
DSE DSA-specific entries
DSML directory services mark-up language
DUA directory user agent
ExP Extension Packages (for PGW)
FE front-end
FM fault management
FTP file transfer protocol
GPRS general packet radio service
GSM global system for mobile communications
GUI graphical user interface
HLR home location register
HOB hierarchical operational binding
HSS home subscriber server
HTTP hypertext transfer protocol
HTTPS hypertext transfer protocol secure
ID identity
IP Internet protocol
IS information system
INS install server
IT information technology
ITU-T International Telecommunication Union, Telecommunication Standardization Sector
LAN local area network
LDAP lightweight directory access protocol
LDIF LDAP data interchange format
NDS Network Directory Server
NEM network element management
NMS network management system
NTF notification manager
OAM operation, administration and maintenance
OEM original equipment manufacturer
OS operating system
OSI open systems interconnection
OSS operation support system
PGW provisioning gateway
PGW-DSA provisioning gateway directory system agent
SDK software development kit
sFTP secure FTP
SM System Monitor
SNMP simple network management protocol
SOAP simple object access protocol