Module-5
The fundamental premise of mobile computing is that information will be accessed from
outside the organization. As long as the information stays within the four walls, the environment is
better known; it is easier to control this environment and make it secure. When the information or
computing environment is outside the controlled environment, we have much less control over its
users or usage patterns. Today, all the computers of the world are interconnected through the
Internet. Moreover, in a majority of cases, mobile computing uses wireless networks. Wireless
media work on the principle of broadcast: information is radiated to everyone within the radio
range, which increases the security threats. Unlike a physical attack, cyber attacks can be replicated
quite easily. Therefore, unless special care is taken, all systems are open to attack. This chapter
discusses different techniques to secure information in a mobile computing environment.
Is prevention better than cure? If prevention is expensive or impractical, what is the strategy to
recover from the loss following an attack?
There is no absolute security. What may appear to be absolute security in one context may not
be absolute security in another context. Therefore, while building a security system, we need to
arrive at a proper balance amongst the answers emerging from the above questions. In a mobile
environment, the user roams through different networks with heterogeneous security environments.
Attacks
A security system is a system to defend our assets from attacks. In the physical world, these
attacks are carried out at the weak points in the defense system. Likewise, in the electronic world,
attacks are carried out at points of vulnerability. When a vulnerability is exploited for some
interest or selfish motive, it is an attack on the system. Of course, there could be occasions where
a vulnerability is exposed by accident as well. Whenever a vulnerability is exploited, there is a
loss. This loss can be either of a static information asset (static asset) or of an information asset in
transit (dynamic asset). If we look at an information system, static assets cover a large portion of
the asset base. All the databases, files, documents, etc. in the computers fall in this category.
Examples of attacks on static assets are a virus deleting files in a computer or the jamming of a
network. An example of an attack on a dynamic asset is the theft of a credit card number while a
user is doing a credit card transaction on the web.
• Interception: An unauthorized party gaining access to an asset is part of this attack.
This is an attack on confidentiality, like unauthorized copying of files or tapping a
conversation between parties. Some of the sniffing attacks fall in this category.
• Fabrication: An unauthorized party inserts counterfeit objects into the system; for
example, impersonating someone and inserting a spurious message in a network.
• Virus and Worms: These are programs that replicate and propagate from one
system to another. Most viruses perform malicious, destructive functions in the system.
• Denial of Service: These are attacks on the system to prevent legitimate users from using
the service.
• Intrusion: These are people or software that enter computer systems and perform
functions without the knowledge of the owner of the asset. Such people are also called hackers.
• Replay Attack: In a replay attack the opponent passively captures the data without trying
to analyze the content. At a later time, the same data is replayed in the same sequence to
impersonate an event and gain unauthorized access to a resource.
A security system needs to be so designed that the system is able to counter and recover from
attacks.
5.2.1.1 Confidentiality
Confidentiality is ensured through encryption of the data. To a person, a comprehensible message
is written in a particular language. The language can be English, Hindi, French or any other
language. Such messages are called plaintext or cleartext messages. Through encryption (or
enciphering) we disguise the message in such a fashion that it is no longer understandable by either a
person or a machine. An encrypted message is called ciphertext. The process of converting a
ciphertext back into plaintext is called decryption (or deciphering). Plaintext need not be written
text; it can be an audio or video message as well. When leaders of two countries talk, the
message is encrypted so that an eavesdropper cannot make any sense of the conversation.
The plaintext message can also be a data file on the computer disk. Figure 18.2 depicts the process
of encryption and decryption.
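To make this round trip concrete, the following small Python sketch encrypts and then decrypts a message with a shared secret key. It assumes the third-party cryptography package (and its Fernet recipe) is installed; the message and variable names are only illustrative.

    # A minimal sketch of symmetric encryption/decryption (assumes the third-party
    # "cryptography" package is installed: pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # the secret key (here KEYE and KEYD are the same)
    cipher = Fernet(key)

    plaintext = b"Meet me at the usual place at 9 pm"
    ciphertext = cipher.encrypt(plaintext)   # disguised, not human readable
    recovered = cipher.decrypt(ciphertext)   # decryption restores the plaintext

    print(ciphertext)                        # unintelligible token
    assert recovered == plaintext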
In cryptography there are two components, viz., algorithms and protocols. A cryptographic
algorithm is a mathematical function used for encryption and decryption, whereas a
protocol relates to the process and procedure of using algorithms. A protocol is the way
algorithms are used to ensure that security is achieved and the system is less prone to attack. In
a security system the plaintext message is encrypted using a key KEYE. The encrypted
message is then sent from the sender to the receiver through a medium (wired, wireless, or even
postal) using some protocol. The encrypted message is then decrypted using a key KEYD to
extract the original message. A cryptographic key is generally a large number. The range of
possible values of a key is called the keyspace. The larger the keyspace, the more difficult it is for
an attacker to guess the key and recover the original message. Therefore a larger keyspace makes a
ciphertext more secure. This is similar to a lock: a conventional lock with 11 levers is more secure
compared to a 7-lever lock.
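The effect of keyspace size can be demonstrated with a deliberately weak toy cipher. The sketch below XORs a message with a single-byte key, a keyspace of only 256 values, and then recovers it by exhaustive search; the cipher and the plaintext are invented for illustration and are in no way secure.

    # Toy illustration of keyspace size: a 1-byte XOR "cipher" has only 256 keys,
    # so an attacker can simply try them all.
    def xor_cipher(data: bytes, key: int) -> bytes:
        return bytes(b ^ key for b in data)

    secret_key = 0x5A
    ciphertext = xor_cipher(b"attack at dawn", secret_key)

    # Brute force: try every key in the keyspace and look for readable text.
    for guess in range(256):
        candidate = xor_cipher(ciphertext, guess)
        if candidate == b"attack at dawn":    # in practice: test for plausible plaintext
            print("key found:", hex(guess), candidate)
            break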
The art of keeping messages secure using the science of encryption and decryption is called
cryptography. People who practice cryptography are called cryptographers. There are people who
try to break the secrecy of encryption. This is done for many purposes; some do it for research
purposes, to measure the strength of the security, and some for stealing information. Some are
hackers who try to break the security for fun or for a price. These people who try to break the
secrecy of cryptography are called cryptanalysts. The practice of cryptanalysts is called
cryptanalysis. There is another science in security engineering, called steganography.
Steganography is the science of hiding a secret message inside other messages so that the existence of
the secret message is concealed; for example, sending a secret message by changing some bits
in a large picture message. By looking at the picture, others will not be able to guess that in reality
the picture is carrying a secret message.
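The idea of hiding bits inside an innocuous carrier can be sketched as follows. The "pixels" are invented sample values, and the scheme (hiding one bit in the least significant bit of each byte) is only a toy illustration of steganography.

    # Toy least-significant-bit steganography: hide one secret byte inside eight
    # "pixel" bytes by overwriting only their lowest bit (illustrative only).
    def hide_byte(pixels, secret):
        out = []
        for i, p in enumerate(pixels[:8]):
            bit = (secret >> (7 - i)) & 1
            out.append((p & 0xFE) | bit)      # change only the lowest bit
        return out + pixels[8:]

    def recover_byte(pixels):
        value = 0
        for p in pixels[:8]:
            value = (value << 1) | (p & 1)
        return value

    cover = [200, 201, 198, 197, 203, 202, 199, 196]   # looks like ordinary image data
    stego = hide_byte(cover, ord("A"))
    assert recover_byte(stego) == ord("A")             # hidden byte recovered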
5.2.1.2 Integrity
Integrity ensures that the message has not been altered. Integrity is achieved by adding additional
information to the message. This is done through checksums, message digests or digital
signatures. In a cryptosystem, the receiver of the message checks this extra information to verify
whether the message has been tampered with. This is similar to a bank cheque: a cheque issued to
a customer is honored only when the customer signs it, and the cheque number and the signature are
verified to ensure integrity. An integrity check is advised for both static assets and assets in transit.
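As a minimal illustration of an integrity check with a message digest, the sketch below uses SHA-256 from the Python standard library; the message text is an invented example.

    # A minimal integrity check using a message digest (Python standard library).
    import hashlib

    message = b"Pay Rs. 5000 to account 1234"
    digest = hashlib.sha256(message).hexdigest()     # sent along with the message

    # Receiver recomputes the digest and compares it with the one received.
    received_message = b"Pay Rs. 5000 to account 1234"
    if hashlib.sha256(received_message).hexdigest() == digest:
        print("message intact")
    else:
        print("message tampered with")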
5.2.1.3 Authorization
Authorization deals with privileges. In any transaction, there is a subject (a person) and an object
(a data item or file). The subject wants some function to be performed on
the object. The privileges on an object are defined through an ACL or Access Control List. The ACL is
consulted while allowing access to the object. The privilege on an object can be read, write, or
execute. Besides privileges on objects, subjects also need to be typed according to their privileges.
This is done through authorization.
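A simple sketch of ACL-based authorization is shown below; the object names, subjects and privileges are invented for illustration.

    # A minimal sketch of ACL-based authorization: each object carries a list of
    # (subject, privileges) entries that is consulted before any operation.
    ACL = {
        "salary.db": {"alice": {"read", "write"}, "bob": {"read"}},
        "report.doc": {"bob": {"read", "write", "execute"}},
    }

    def is_authorized(subject, obj, privilege):
        return privilege in ACL.get(obj, {}).get(subject, set())

    print(is_authorized("bob", "salary.db", "write"))    # False: bob may only read
    print(is_authorized("alice", "salary.db", "write"))  # True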
5.2.1.4 Non-repudiation
Authentication and non-repudiation have some overlapping properties. Authentication is a
process by which we validate the identity of the parties involved in a transaction. In non-
repudiation we establish the identity of these parties beyond any point of doubt. Non-repudiation
can be considered as authentication with a formal record. These records have legal binding.
Like a signature on a cheque, non-repudiation is achieved using a digital signature.
5.2.1.5 Availability
Media management is not within the scope of security protocols and algorithms. However,
media management is part of the larger security framework. Media management is needed to ensure
availability of service. For a message, confidentiality may be maintained and the integrity may be
intact, but an attacker can manipulate the media to make sure that the message does not reach the
destination. This is like a power grid where there is no theft of power and the power quality is good,
but someone blows up a transmission line. Attacks on availability happen for industrial espionage or from
political motivation. During a festive season, one company may try to block the e-commerce site
of a competitor. In a social framework, someone may try to silence people's voices by using threats or
other means of intimidation to compel an author to remove a web page. If these methods prove
unsuccessful, various denial of service attacks can be launched against the site to make it
impossible to access. In less high-profile cases, people often enjoy far less support for exposing
corruption or criticizing employers and particularly litigious organizations. Also, there needs to be
some way by which terrorist organizations or dictators cannot block mass opinion. This field of
research is called censorship-resistant publishing. Censorship-resistant publishing is achieved
through document entanglement.
5.2.1.6 Trust
Computers rely on user authentication and access control to provide security. Within a
network, it may be safe to assume that the keyholder is authentic and that the keyholder is using the
key assigned to him or her. However, these strategies are inadequate for mobile computing
environments with a high level of flexibility. Mobile computing lacks centralized control, and its
users are not all predetermined. Mobile users expect to access resources and services anywhere and
anytime. This leads to serious security risks and access control problems. To handle such a dynamic,
ever-changing context, trust-based security management is necessary. Trust involves developing a
security policy, assigning credentials to entities, verifying that the credentials fulfill the policy,
delegating trust to third parties, and reasoning about users' access rights.
5.2.1.7 Accounting
For any service, the service provider needs to be paid. The service can be either a content
service or a network service. Accounting and billing are very critical aspects of a mobile computing
environment. Accounting is the process by which the usage of the service is metered. Based upon
the usage, the service provider collects the fee either directly from the customer or through the
home network. This will be true even if the user is roaming in a foreign network, and using the
services in the foreign network.
RADIUS (Remote Authentication Dial In User Service) protocol (RFC 2865) has been in use for
a long time for the AAA (Authentication, Authorization, and Accounting) functions in the Internet.
With the demanding service requirement of mobile computing, it is now apparent that RADIUS is
incapable of supporting all these complexities. A new protocol called Diameter (RFC 3588) has
been released to address the AAA needs for data roaming and mobile computing. Diameter can
work in both local and roaming AAA situations.
In a block cipher, a group of plaintext bits is encrypted as a whole to produce a ciphertext block of equal
length. Typically a block of 64 bits (8 octets) or 128 bits (16 octets) is used for a block cipher. The
majority of cryptosystems use block ciphers.
3DES: This is a modification of DES. In this algorithm, DES is used three times in succession.
AES: Advanced Encryption Standard; this is the current standard for encryption accepted by
FIPS (Federal Information Processing Standards) of the USA.
DES employs the principles of scrambling and substitution. These processes are repeated a number
of times with keys to ensure that the plaintext is completely transformed into a thoroughly scrambled
bit stream. DES can be divided into the following major functions:
• Permutation of bits in a block. This is the first and last step in DES. In this step the 64-bit
plaintext block is rearranged through an Initial Permutation (IP). This is done through a 64-bit
register where the bits of the input block are scrambled in a particular fashion. As the last
step, the reverse permutation is done through IP-1, the inverse of the initial permutation.
• Key schedule. This breaks the 56-bit key into two 28-bit subkeys and uses them to
compute the bits in the data blocks. In each iteration, the bits within the subkeys are shifted
to generate a new subkey.
• The key-dependent computation is run through 16 rounds. Each round uses the data from
the previous round as input.
The beauty of the DES algorithm is that the same algorithm is used for both encryption and
decryption. DES demonstrates a very high avalanche effect. In an avalanche effect, one bit of
change in either the input data or the key changes many bits in the output. For example, in DES
one bit of change in the input data changes 34 bits, whereas one bit of change in the key affects 35
bits.
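The avalanche effect can be observed experimentally by flipping a single input bit and counting how many output bits change. The Python standard library has no DES implementation, so the sketch below uses SHA-256 merely to illustrate the measurement; the counts it prints relate to the hash function, not to the 34/35-bit figures quoted above for DES.

    # Measure the avalanche effect: flip one input bit and count changed output bits.
    import hashlib

    def bit_diff(a, b):
        return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

    block = bytearray(b"8-byte!!")          # a 64-bit input block
    flipped = bytearray(block)
    flipped[0] ^= 0x01                      # flip a single bit

    out1 = hashlib.sha256(block).digest()
    out2 = hashlib.sha256(flipped).digest()
    print("output bits changed:", bit_diff(out1, out2), "out of", len(out1) * 8)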
3DES (Triple DES): With the increase of processing power available in PCs, the 56-bit key
became vulnerable to attack. Therefore, to protect the investment and increase security, 3DES
(commonly known as Triple DES) was proposed. 3DES uses the same DES algorithm three times
in succession with different keys. This increases the key size, resulting in higher security. Also, as
the fundamental algorithm in 3DES is practically the DES, it is easily adaptable without
additional investment. There are two different flavors of 3DES. One uses two 56-bit keys and the
other uses three 56-bit keys. By using three 56-bit keys, the effective security can be increased to the
full key size of 168 bits. Till today, 3DES is the most widely used algorithm for symmetric
cryptography.
Like DES, AES also uses permutation and substitution. However, AES does not use the Feistel
structure. In a Feistel structure, one half of the data block is used to modify the other half of the
data, and the two halves are then swapped.
Whitfield Diffie and Martin Hellman came up with the principle of asymmetric key or
public key cryptography in 1976. Public key cryptography proposed by Diffie and Hellman solved two
difficult problems in cryptography: key distribution and digital signatures.
In public key cryptography, there are six components (Figure 18.3). These are:
• Plaintext: This is the human readable message or data given to the public key algorithm as
input for encryption.
• Ciphertext: This is the scrambled data produced as output of the encryption algorithm.
It depends on the plaintext and on the key used for encryption.
• Encryption algorithm: This is the algorithm that does computation and various
transformations on the input plaintext. The output of the transformation is too garbled to
be decipherable for an intruder.
• Decryption algorithm: This algorithm does the reverse function of the encryption
algorithm. This function accepts the ciphertext as input and does some transformation on
the data so that the original data is recovered.
• Public key: This is one of the keys from the key pair. This key is made public for anybody
to access. This key can be used either for encryption or decryption.
• Private key: This is the other key from the key pair. This key is called the private key,
because this is kept secret. This can be used either for encryption or decryption.
There are three public key cryptosystems most widely used today. These are Diffie-Hellman, RSA,
and Elliptic Curve.
The methodology used for encryption of data and for the digital signature is different. During
encryption, the sender uses the public key of the receiver. This is because only the receiver
should be able to decrypt the message, using his or her own secret private key. If there is an
intruder who is able to intercept the encrypted message, he will not be able to decrypt the
message, as the key required to do so is the receiver's private key, which is kept secret with
the receiver. The methodology used for authentication or digital signature is just the reverse. In the case
of signing a transaction, the private key of the sender is used by the sender. The receiver uses
the public key of the sender to verify the signature. This authenticates that the transaction was
indeed done by the sender.
Let us assume that there are two parties A and B. A and B choose some prime number p and
another number g less than p. These numbers are selected and made available to both A and B in
advance. The steps followed in the Diffie-Hellman algorithm for key generation are as follows:
• A selects a secret random number x and computes SA = g^x mod p. A sends SA to B.
• B selects a secret random number y and computes SB = g^y mod p. B sends SB to A.
• A computes the key KA = SB^x mod p, and B computes the key KB = SA^y mod p. Both
values are equal to g^(xy) mod p, which becomes the shared secret key.
Neither A nor B shared their secret key for use in symmetric encryption, but arrived at it using
some properties of modulo arithmetic with prime numbers. The example above may look trivial.
However, when these numbers are large, nobody can calculate the key just by knowing p, g and the
exchanged values SA and SB in a reasonable period of time. An eavesdropper cannot compute the
discrete logarithm, i.e., figure out the secret exponents (and hence the key KA) from seeing SA and SB.
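The exchange can be sketched with small illustrative numbers; real systems use primes that are hundreds of digits long. The values of p, g and the secret exponents below are chosen only for the example.

    # A small-number sketch of Diffie-Hellman key agreement (illustrative values).
    p, g = 23, 5                 # agreed in advance and known to everyone

    x = 6                        # A's secret
    y = 15                       # B's secret

    SA = pow(g, x, p)            # A sends SA to B
    SB = pow(g, y, p)            # B sends SB to A

    KA = pow(SB, x, p)           # A computes the shared key
    KB = pow(SA, y, p)           # B computes the shared key
    assert KA == KB              # both arrive at the same secret: g^(x*y) mod p
    print("shared key:", KA)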
5.3.3.2 RSA
RSA is named after its inventors R.L. Rivest, A. Shamir and L. Adleman. It is a public key
algorithm that does encryption/decryption, authentication, and digital signature. The key length is
variable, and the most commonly used key size is 512 bits. The key length used in India by the CCA
(Controller of Certifying Authorities) is 2048 bits. The key length can be larger for higher security, or
smaller for better efficiency. The plaintext data block is always smaller than the
key length, whereas the ciphertext block is the same size as the key length. RSA is much slower than
symmetric key encryption. That is why RSA is generally not used for payload encryption. RSA is
used primarily for encrypting a secret key for key exchange.
Let us take an example where we choose two prime numbers p = 7 and q = 17. Calculate n = p *
q = 7 * 17 = 119.
Find the value of ɸ(n) using the formula ɸ(n) = (p - 1) * (q - 1) = (7 - 1) * (17 - 1) = 6 * 16 = 96.
Now we need to select an e. e must be relatively prime to ɸ(n) and less than ɸ(n). We can see
that 2, 3 and 4 share factors with 96 and therefore are not relatively prime to it, whereas 5 is relatively
prime to 96. Therefore, we can choose e to be 5.
We know that d * e = 1 mod ɸ(n), which in other words means d * e = Y * ɸ(n) + 1 for some
integer Y. To find the value of d, we use the formula d = (Y * ɸ(n) + 1) / e. Replace Y with 1, then 2,
then 3 and so on until we get an integer. When we set Y = 4, the equation evaluates to:
(4 * 96 + 1) / 5 = (384 + 1) / 5 = 385 / 5 = 77
Therefore, we get d = 77. We have just generated our key pair. The public key is (5, 119) and the
private key is (77, 119). We can now use this to encrypt and decrypt values.
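The same small-number example can be verified in a few lines of Python; the message value is an arbitrary illustration and, of course, such tiny keys are useless in practice.

    # The RSA example above, carried out in code with the same small numbers
    # (p = 7, q = 17, e = 5, d = 77).
    p, q = 7, 17
    n = p * q                    # 119
    phi = (p - 1) * (q - 1)      # 96
    e, d = 5, 77                 # as derived in the text: (e * d) % phi == 1

    assert (e * d) % phi == 1

    message = 42                          # plaintext must be smaller than n
    ciphertext = pow(message, e, n)       # encrypt with the public key (e, n)
    recovered = pow(ciphertext, d, n)     # decrypt with the private key (d, n)
    print(ciphertext, recovered)          # recovered == 42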
ECC is believed to offer a similar level of security with a much smaller size of key. For example,
it is claimed that the level of security that 1024 bits of RSA provide can be achieved by 160 bits of
ECC. A 210-bit key of ECC is equivalent to 2048 bits of RSA. This makes ECC very attractive for
small footprint devices like cell phones or PDAs.
5.3.3.5 MD5
MD5 (Message Digest version 5) hashing algorithm is described in RFC 1321. The MD5
algorithm is an extension of the MD4 message-digest algorithm and is slightly slower than MD4.
The MD5 algorithm takes a message of arbitrary length as input and produces a 128-bit 'message
digest' as output. The algorithm processes the input message in blocks of 512 bits. The digest
produced by the algorithm can also be considered as a 'fingerprint' of the message. It is conjectured
that it is computationally infeasible to produce two messages having the same message digest. It is
also conjectured that it is computationally infeasible to produce any message having a given message
digest. The MD5 algorithm is intended for digital signature applications in a public key
cryptosystem.
5.3.3.6 SHA
The Secure Hash Algorithm (SHA) was developed by NIST (National Institute of
Standards and Technology). SHA was first published in 1993. Later, in 1995, a revised version of the
algorithm was published as SHA-1. SHA processes the input in 512-bit blocks and produces 160 bits of
output. Like MD5, SHA-1 is also based on the MD4 algorithm. As both MD5 and SHA-1 are based on
MD4, they are quite similar in nature. However, as SHA-1 generates a longer digest of 160 bits
compared to 128 bits for MD5, it is considered to be more secure.
5.3.3.7 MAC
MAC stands for Message Authentication Code. A MAC is used to do an integrity check on the
message. A secret key is used to generate a small, fixed-size data block from the message. This is
similar to a checksum of the message. Both the sender and the receiver share the same secret key for
the MAC. When the sender has a message to be sent to the receiver, the message is sent along with its
MAC. The receiver receives the message and calculates the MAC from the message and the shared
key. The receiver then compares the calculated MAC with the MAC received from the sender. If they
are the same, the message is considered to be in a perfect state. HMAC is another mechanism for
message authentication using cryptographic hash functions like MD5 or SHA-1 in combination with a
secret shared key. HMAC has been defined in RFC 2104. The cryptographic strength of HMAC
depends on the properties of the underlying hash function.
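A minimal HMAC sketch using the Python standard library is given below; the shared key and the message are invented examples.

    # A minimal HMAC sketch: sender and receiver share a secret key; the receiver
    # recomputes the MAC and compares it with the one received.
    import hmac, hashlib

    shared_key = b"a-secret-shared-in-advance"
    message = b"transfer 100 to account 42"

    mac = hmac.new(shared_key, message, hashlib.sha1).hexdigest()   # sent with the message

    # Receiver side: recompute and compare (constant-time comparison).
    expected = hmac.new(shared_key, message, hashlib.sha1).hexdigest()
    print("message authentic:", hmac.compare_digest(mac, expected))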
In system level security we secure our systems to protect our assets. In the security framework
provided by the operating system, shells are part of system level security. Authentication
challenges during a Unix login, or login into a mainframe computer through a username and password,
are examples of system level security. Access control lists (ACLs), file system security, memory security, etc. are also
part of system level security. It protects the system from worms, viruses and Trojan horses.
Prevention of other forms of attack, like buffer overflow attacks and intrusion, can also be part of
system level security, as can security protocols like SSL and TLS. There is a concept of a capability-
based system, where security is policy-driven and managed through capabilities. Even if a virus enters
such a system, or an intrusion happens, it will not be able to damage any asset in the system. One
such operating system is EROS.
Database security is part of system level security. In database security, data in the database is
secured by the database software. This can be encryption of a column in a row, or some special check
based on ACLs and capabilities. Most database software today offers security at this layer. This
will be over and above the security offered by the operating system.
Security systems implemented for wired networks in organizations are primarily policy based.
Effective security policies make frequent references to standards and guidelines that exist within an
organization. A policy is a written-down set of rules about what is allowed and what is not allowed in the
network. Policies are usually area-specific, covering a single area. RFC 2196 defines a security
policy as follows: “A security policy is a formal statement of the rules by which people
who are given access to an organization's technology and information assets must abide.” For
example, there could be a rule in the organization that nobody will be allowed to have a global IP
address. To stop spam and mail-borne viruses, there may be another rule that prevents access to
external email systems (like Hotmail) from the corporate network. To stop the possibility of
espionage, there could be a rule that FTP from the Internet is not allowed to any machine within the
intranet. A standard is typically a collection of system-specific requirements that must be met by
everyone. A guideline is typically a collection of system-specific procedural “suggestions”
for best practice. Guidelines are not requirements to be met, but are strongly recommended.
In a wired network, where systems are stationary and the network structure is static, it is
possible to define security policies, and it is possible to enforce these policies or rules.
However, things are different when we move to a mobile computing environment. In a mobile
computing environment, a user will move from one device to another and from one network to
another. These devices or networks may be of similar or different types; for example, a
user may move from a WiFi network to a CDMA2000 network, or from PalmOS to Windows CE. In a
network with static nodes, it is possible to define a security policy. It may be possible to enforce such
policies as well. However, in the case of mobile computing, where nodes roam from one network
to another, it may not be practical to define a security policy and implement it. Therefore, over and
above policy-based security, we need object security for mobile nodes. In object security, objects
carry their security signatures and capabilities. This is achieved through the concept of a principal.
Therefore, when a device moves from network to network, the device carries its security
requirements and security signature with it.
In OFDM, the condition that needs to be maintained is that the carrier spacing should be equal to the
reciprocal of the symbol period (for example, in IEEE 802.11a/g OFDM a useful symbol period of
3.2 µs corresponds to a subcarrier spacing of 1/3.2 µs = 312.5 kHz).
In order that OFDM works, there should be very accurate synchronization between the
communicating nodes. If frequency deviation occurs in the sub-streams, they will no longer be
orthogonal, due to which interference between the signals will occur.
5.7 WATM
Wireless ATM (WATM; sometimes also called wireless, mobile ATM, wmATM) does not
only describe a transmission technology but tries to specify a complete communication system
(Acampora, 1996), (Ayanoglu, 1996). While many aspects of the IEEE WLANs originate
from the data communication community, many WATM aspects come from the
telecommunication industry (Händel, 1994). This specific situation can be compared to the case of
competition and merging with regard to the concepts TCP/IP and ATM (IP-switching, MPLS).
Similar to fixed networks, where ATM never made it to the desktop, WATM will not make it to mobile
terminals. However, many concepts found in WATM can also be found in QoS-supporting WLANs
such as HiperLAN2.
and for ensuring security features such as privacy, authentication, or authorization. These and more
functions are summarized under the term location management.
Several requirements for location management have been identified (Bhat, 1998):
Transparency of mobility: A user should not notice the location management function under
normal operation. Any change of location should be performed without user activity. This
puts certain constraints on the permissible time delay of the functions associated with location
management. Transparent roaming between different domains (private/private, private/public,
public/public) should be possible. This may include roaming between networks based on
different technologies using, for example, a dual-mode terminal.
Security: To provide a security level high enough to be accepted for mission-critical use
(business, emergency etc.), a WATM system requires special features. All location and user
information collected for location management and accounting should be protected against
unauthorized disclosure. This protection is particularly important for roaming profiles that
allow the precise tracking of single terminals. As the air interface is very simple to access,
special access restrictions must be implemented to, e.g., keep public
Quality of service (QoS) guarantees are one of the main advantages envisaged for WATM
networks compared to, e.g., mobile IP working over packet radio networks. While the internet
protocol IP does not guarantee QoS, ATM networks do (at the cost of higher complexity). WATM
networks should provide mobile QoS (M-QoS). M-QoS is composed of three different parts:
Wired QoS: The infrastructure network needed for WATM has the same QoS properties as any
wired ATM network. Typical traditional QoS parameters are link delay, cell delay variation,
bandwidth, cell error rate etc.
Wireless QoS: The QoS properties of the wireless part of a WATM network differ from those of
the wired part. Again, link delay and error rate can be specified, but now the error rate is typically
some orders of magnitude higher than that of, e.g., fiber optics. Channel reservation and
multiplexing mechanisms at the air interface strongly influence cell delay variation.
Handover QoS: A new set of QoS parameters is introduced by handover. For example, handover
blocking due to limited resources at target access points, cell loss during handover, or the speed of
the whole handover procedure represent critical factors for QoS.
T (terminal): A standard ATM terminal offering ATM services defined for fixed ATM networks.
MT (mobile terminal): A standard ATM terminal with the additional capability of reconnecting
after an access point change. The terminal can be moved between different access points within a
certain domain.
WT (wireless terminal): This terminal is accessed via a wireless link, but the terminal itself is fixed,
i.e., the terminal keeps its access point to the network.
WMT (wireless mobile terminal): The combination of a wireless and a mobile terminal results in
the WMT. This is exactly the type of terminal
RAS (radio access system): Point of access to a network via a radio link.
EMAS (End user mobility supporting ATM switch, -E: edge, -N: network): Switches with the
support of end-user mobility.
NMAS (network mobility-supporting ATM switch): A whole network can be mobile not just
terminals. Certain additional functions are needed to support this mobility from the fixed network.
MS (mobile ATM switch): ATM switches can also be mobile and can use wireless access to
another part of the ATM network.
ACT (ad-hoc controller terminal): For the configuration of ad-hoc networks, special terminal types
might be required within the wireless network. These terminals could, for example, control
wireless access without an RAS.
4. Time to Live (TTL): This field is 8 bits long and its value is decreased by one at each hop to prevent
the packet from getting stuck in the network.
do. This learning starts with the PE routers. Each PE router learns routes to different subnets from its CE
router. Suppose PE router PE1 learns the route to a subnet (say subnet 1) from a CE router. Now PE1 will
add a label to the packet, forward it to its neighbouring LSR, and tell it that if it receives a packet whose
destination address belongs to that subnet, it should forward the packet to PE1.
Again this process is repeated by that LSR. In this way, each LSR learns routes and adds this information to its
Label Forwarding Information Base (LFIB). Now if any PE receives a packet destined to the
subnet, then by looking at the labels and the LFIB, the LSRs can easily forward the IP packet.
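The label-based forwarding decision can be sketched with a dictionary standing in for the LFIB; the label values, next hops and prefixes below are invented purely for illustration.

    # Toy sketch of an LSR forwarding decision using a Label Forwarding
    # Information Base (LFIB): incoming label -> (outgoing label, next hop).
    LFIB = {
        17: (24, "PE1"),           # packets labelled 17 go towards PE1 with label 24
        24: (None, "CE-subnet1"),  # label popped at the egress PE
    }

    def forward(label, payload):
        out_label, next_hop = LFIB[label]
        if out_label is None:
            print(f"pop label, deliver '{payload}' to {next_hop}")
        else:
            print(f"swap label {label} -> {out_label}, forward to {next_hop}")

    forward(17, "IP packet for subnet 1")
    forward(24, "IP packet for subnet 1")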
exploit the available unlicensed spectrum. IEEE 802.11ac, the latest evolution of the WiFi
technology, can provide broadband wireless pipes with multi-Gbps data rates. It uses a wider
bandwidth of up to 160 MHz in the less polluted 5 GHz ISM band, employing up to 256-Quadrature
Amplitude Modulation (QAM). It can also support simultaneous transmission of up to four streams
using the multi-user MIMO technique. The incorporated beamforming technique has boosted the
coverage by several orders of magnitude compared to its predecessor (IEEE 802.11n). Finally,
major telecom companies such as Qualcomm have recently been working on developing LTE in the
unlicensed spectrum as well as integrating 3G/4G/WiFi transceivers into a single multi-mode base
station (BS) unit. In this regard, it is envisioned that the future UE will be intelligent enough to select
the best interface to connect to the RAN based on the QoS requirements of the running application.
Small-cell enhancement was the focal point of LTE Rel-12, where the New Carrier Type (NCT) (also known as
the Lean Carrier) was introduced to assist small cells by the host macro-cell. This allows more
efficient control plane functioning (e.g. for mobility management, synchronisation, resource
allocation, etc.) through the macro layer, while providing a high-capacity and spectrally efficient data
plane through the small cells. Finally, reducing the cell size can also improve the EE of the network
by bringing the network closer to the UEs and hence shrinking the power budget of the wireless links.
5.9.3 Self-Organising Network
Self-Organising Network (SON) capability is another key component of 5G. As the population
of the small cells increases, SON gains more momentum. Almost 80% of the wireless traffic is
generated indoors. To carry this huge traffic, we need hyperdense small-cell deployments in homes
— installed and maintained mainly by the users — out of the control of the operators. These indoor
small cells need to be self-configurable and installed in a plug and play manner. Furthermore, they
need to have SON capability to intelligently adapt themselves to the neighbouring small cells to
minimize inter-cell interference. For example, a small cell can do this by autonomously
synchronising with the network and cleverly adjusting its radio coverage.
The traditional sub-3 GHz spectrum is becoming increasingly congested and the present RATs
are approaching Shannon's capacity limit. As such, research on exploring cm- and mmWave bands
for mobile communications has already started. Although research in this field is still in its
infancy, the results look promising. There are three main impediments to mmWave mobile
communications.
First, the path loss is relatively higher at these bands, compared to the conventional sub-3GHz
bands. Second, electromagnetic waves tend to propagate in the Line-Of-Sight (LOS) direction,
rendering the links vulnerable to being blocked by moving objects or people. Last but not least, the
penetration loss through the buildings is substantially higher at these bands, blocking the outdoor
RATs for the indoor users. Despite these limitations, there are myriad advantages for mmWave
communications.
An enormous amount of spectrum is available in the mmWave band; for example, at 60 GHz, there is
9 GHz of unlicensed spectrum available. This amount of spectrum is huge, especially when we consider
that the global allocated spectrum for all cellular technologies hardly exceeds 780 MHz. This amount
of spectrum can completely revolutionize mobile communications by providing ultra-broadband
wireless pipes that can seamlessly glue together the wired and the wireless networks. Other advantages of
mmWave communications include the small antenna sizes (λ/2) and their small separations (also
around λ/2), enabling tens of antenna elements to be packed in just one square centimetre. This in
turn allows us to achieve very high beamforming gains in relatively small areas, which can be
implemented at both the BS and the UE. Incorporating smart phased-array antennas, we can fully
exploit the spatial degrees of freedom of the wireless channel (using Space-Division Multiple Access
(SDMA)), which can further improve the system capacity. Finally, as the mobile station moves
around, the beamforming weights can be adjusted adaptively so that the antenna beam is always pointing
to the BS.
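A quick back-of-the-envelope calculation confirms the antenna-size claim: at 60 GHz the wavelength is 5 mm, so half-wavelength elements are only about 2.5 mm apart.

    # Wavelength and half-wavelength element spacing at 60 GHz.
    c = 3e8                       # speed of light, m/s
    f = 60e9                      # carrier frequency, Hz

    wavelength = c / f            # 0.005 m = 5 mm
    print("wavelength:", wavelength * 1e3, "mm")
    print("lambda/2 element spacing:", wavelength / 2 * 1e3, "mm")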
EE will remain an important design issue while developing 5G. Today, Information and
Communication Technology (ICT) consumes as much as 5% of the electricity produced around the globe
and is responsible for approximately 2% of global greenhouse gas emissions, roughly equivalent to
the emissions created by the aviation industry. What is more concerning is the fact that if we do not take
any measures to reduce the carbon emissions, this contribution is projected to double by 2020. Hence,
it is necessary to pursue energy-efficient design approaches from the RAN and backhaul links to the UEs.
The benefits of energy-efficient system design are manifold. First, it can play an important role
in sustainable development by reducing the carbon footprint of the mobile industry itself. Second,
ICT, as the core enabling technology of the future smart cities, can also play a fundamental role in
reducing the carbon footprint of other sectors (e.g. transportation). Third, it can increase the revenue
of mobile operators by reducing their operational expenditure (Opex) through savings on their
electricity bills. Fourth, reducing the 'Joule per bit' cost can keep mobile services affordable for the
users, allowing flat rate pricing in spite of the 10 to 100x data rate improvement expected by 2020.
Last but not least, it can extend the battery life of the UEs, which has been identified by the market
research company TNS as the number one criterion of the majority of the consumers purchasing a
mobile phone.
5.9.8 Allocation of New Spectrum for 5G:
Another critical issue of 5G is the allocation of new spectrum to fuel wireless
communications in the next decade. The 1000x traffic surge can hardly be managed by only
improving the spectral efficiency or by hyper-densification. In fact, leading telecom companies
such as Qualcomm and NSN believe that, apart from technology innovations, 10 times more spectrum
is needed to meet the demand. The allocation of around 100 MHz of bandwidth in the 700 MHz band
and another 400 MHz of bandwidth around 3.6 GHz, as well as the potential allocation of several
GHz of bandwidth in cm- or mmWave bands to 5G, will be the focal point of the next WRC
conference, organised by ITU-R in 2015.
The last but not least critical enabler of 5G is the virtualisation of the RAN and the sharing of wireless
infrastructure among multiple operators. Network virtualisation needs to be pushed from the wired
core network (e.g. switches and routers) towards the RAN. For network virtualisation, the
intelligence needs to be taken out of the RAN hardware and controlled in a centralised manner using
a software brain, which can be done in different network layers. Network virtualisation can bring
myriad advantages to the wireless domain, including both Capex (capital expenditure) and Opex
savings through multi-tenant network and equipment sharing, improved EE, on-demand up- or down-
scaling of the required resources, and increased network agility through reduction of the time-to-
market for innovative services (from 90 hours to 90 minutes), as well as easy maintenance and
fast troubleshooting through increased transparency of the network. Virtualisation can also serve to
converge the wired and the wireless networks by jointly managing the whole network from a central
orchestration unit, further enhancing the efficiency of the network. Finally, multi-mode RANs
supporting 3G, 4G or WiFi can be adopted, where different radio interfaces can be turned on or off
through the central software control unit to improve the EE or the Quality of Experience (QoE) for
the end users.