Module-5


Vidya Academy of Science and Technology Technical Campus

Security Issues in Mobile Computing


5.1 INTRODUCTION
Mobile computing is pervading our society and our lifestyles with great momentum. Mobile
computing with networked information systems helps increase productivity and operational efficiency.
This, however, comes at a price: it also increases the risk that sensitive information supporting
critical functions in the organization will be exposed to attack.

The fundamental premise of mobile computing is that information will be accessed from
outside the organization. As long as the information stays within the organization's four walls, the
environment is better known; it is easier to control that environment and make it secure. When the
information or computing environment is outside the controlled environment, we have little control
over its users or usage patterns. Today, computers across the world are interconnected through
extranets. Moreover, in a majority of cases, mobile computing uses wireless networks. Wireless
media work on the principle of broadcast: information is radiated to everyone within radio range,
which increases the security threats. Unlike a physical attack, cyber attacks can be replicated quite
easily. Therefore, unless special care is taken, all systems are open to attack. This chapter discusses
different techniques to secure information in a mobile computing environment.

5.2 INFORMATION SECURITY


In any defense system, we need to know our enemy. We also need to determine the possible areas
(weak points, vulnerabilities) where the enemy may attack, and build the defense system around
these vulnerabilities. To build an information security system, we need to answer the following
questions:
• Who is the enemy?
• What are the vulnerabilities? What are the weak links in the system?
• How could these vulnerabilities be exploited, and what attacks could result?
• What needs special protection?
• To protect our assets from attack, we need to build a security system. How much does the
security system cost in terms of money, resources and time?
• When the security system is deployed, to what extent will it affect openness and add
inconvenience?
• Is prevention better than cure? If prevention is expensive or impractical, what is the strategy to
recover from the loss following an attack?

There is no absolute security. What may appear to be absolute security in one context may not
be absolute security in another context. Therefore, while building a security system, we need to
arrive at a proper balance amongst the answers emerging from the above questions. In a mobile
environment, the user roams through different networks with heterogeneous security
infrastructure. In such an environment, where device mobility and network mobility are a
necessity, offering homogeneous service over heterogeneous devices and networks is the key. In
such an environment, a weak security link in a wireless network could become a point of
vulnerability for the entire system. Therefore, a mobile computing environment needs a robust
security infrastructure.

Attacks
A security system is a system to defend our assets from attacks. In the physical world, attacks
are carried out at the weak points in the defense system. Likewise, in the electronic world,
attacks are carried out at points of vulnerability. When a vulnerability is exploited for some
interest or selfish motive, it is an attack on the system. Of course, there could be occasions where
a vulnerability is exposed by accident as well. When a vulnerability is exploited, there is a
loss. This loss can be of either a static information asset (static asset) or an information asset in
transit (dynamic asset). If we look at an information system, static assets cover a large portion of
the asset base; all the databases, files, documents, etc. in computers fall in this category.
Examples of attacks on static assets are a virus deleting files in a computer or the jamming of a
network. An example of an attack on a dynamic asset is the theft of a credit card number while a
user is doing a credit card transaction on the web.

Attacks on dynamic assets can be of the following types:

• Interception: An unauthorized party gains access to an asset. This is an attack on
confidentiality, such as unauthorized copying of files or tapping a conversation between
parties. Sniffing attacks fall in this category.

• Modification: An unauthorized party gains control of an asset and tampers with it. This is
an attack on integrity, such as changing the content of a message being transmitted through
the network. Different types of man-in-the-middle attacks fall in this category.

• Fabrication: An unauthorized party inserts counterfeit objects into the system; for
example, impersonating someone and inserting a spurious message into a network.

• Interruption: An asset is destroyed or made unusable. This is an attack on availability.
It can target a static asset or a dynamic asset. Examples include cutting a
communication line or making a router so busy that a user cannot reach a server on the
network. These are all denial-of-service attacks.

Attacks on static assets can be of the following types:

• Viruses and Worms: These are programs that replicate and propagate from one
system to another. Most viruses perform malicious, destructive functions on the system.


• Denial of Service: These are attacks on the system to prevent legitimate users from using
the service.

• Intrusion: These are people or software that enter computer systems and perform
functions without the knowledge of the owner of the asset. Human intruders are also called hackers.

• Replay Attack: In a replay attack the opponent passively captures the data without trying
to analyze the content. At a later time, the captured data is used in the same sequence to
impersonate an event and gain unauthorized access to a resource.

• Buffer overflow attacks: In a buffer overflow attack, a vulnerability in an executable
program is exploited to force a stack overflow condition, causing the program counter of
the process to change. The program counter is then manipulated to do the attacker's work.

• Trapdoor attacks: These are exploitations of undocumented features of a system. Such
undocumented functionality is typically designed to debug, service, support or take control
of the system.

A security system needs to be so designed that the system is able to counter and recover from
attacks.

5.2.1 Components of Information Security


For centuries, information security was synonymous with secrecy. The art of keeping a
message secret was to encrypt the message and thus hide it from others. However, in today's
netcentric electronic world, the taxonomy of information security goes much beyond encryption.
Information security needs to cater to all the possible attacks related to confidentiality, integrity,
availability, non-repudiation, authorization, trust and accounting (CIANATA). Confidentiality is
the property whereby information is kept secret so that unauthorized persons cannot get at it.
Integrity is the property of keeping the information intact. Availability is the property of a system
by which the system will be available to its legitimate users. Non-repudiation is the property by
which the identity of both the sender and the receiver of a message can be identified and verified.
Authorization is the property by which a user's privileges can be associated with information
access. Trust is the property of expectation, confidence, and belief over time. Accounting is the
property of calculating the fee for a service rendered.

5.2.1.1 Confidentiality
Confidentiality is ensured through encryption of the data. A message comprehensible to a person
is written in a particular language; the language can be English, Hindi, French or any other
language. Such messages are called plaintext (or cleartext) messages. Through encryption (or
enciphering) we disguise the message in such a fashion that it is no longer understandable by either
a person or a machine. An encrypted message is called ciphertext. The process of converting a
ciphertext back into plaintext is called decryption (or deciphering). Plaintext need not be written
text; it can be an audio or video message as well. When leaders of two countries talk, the
conversation is encrypted so that an eavesdropper cannot make any sense of it. The plaintext
message can also be a data file on a computer disk. Figure 18.2 depicts the process of encryption
and decryption.

In cryptography there are two components, viz., algorithms and protocols. A cryptographic
algorithm is a mathematical function used for encryption and decryption, and a protocol relates to
the process and procedure of using algorithms.

Figure 18.2 Encryption and decryption with a key

A protocol is the way algorithms are used to ensure that security is achieved and the system is less
prone to attack. In a security system the plaintext message is encrypted using a key KEYE. The
encrypted message is then sent from the sender to the receiver through a medium (wired, wireless,
or even postal) using some protocol. The encrypted message is then decrypted using a key KEYD
to extract the original message. A cryptographic key is generally a large number. The range of
possible values of a key is called the keyspace. The larger the keyspace, the more difficult it is for
an attacker to guess the key and recover the original message; therefore a larger keyspace makes a
ciphertext more secure. This is similar to a lock: a conventional lock with 11 levers is more secure
than a 7-lever lock.
The art of keeping messages secure using the science of encryption and decryption is called
cryptography, and people who practice cryptography are called cryptographers. There are also
people who try to break the secrecy of encryption, for many purposes: some do it for research, to
measure the strength of the security, and some to steal information. Some are hackers who try to
break the security for fun or for a price. These people who try to break the secrecy of cryptography
are called cryptanalysts, and their practice is called cryptanalysis. There is another science in
security engineering called steganography. Steganography is the science of hiding a secret message
in other messages so that the existence of the secret message is concealed; for example, sending a
secret message by changing some bits in a large picture message. By looking at the picture, others
will not be able to guess that in reality the picture is carrying a secret message.
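As a minimal illustration of encrypting and decrypting with a key, the sketch below uses the third-party Python cryptography package (an assumption; any symmetric cipher library would serve) to turn plaintext into ciphertext and back.

    # Minimal encryption/decryption sketch, assuming the third-party
    # "cryptography" package is installed (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()             # the secret key (here KEYE = KEYD)
    cipher = Fernet(key)

    plaintext = b"Meet me at the usual place at 9 pm"
    ciphertext = cipher.encrypt(plaintext)  # unreadable without the key
    recovered = cipher.decrypt(ciphertext)  # only a holder of the key can do this
    assert recovered == plaintext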

5.2.1.2 Integrity
Integrity ensures that the message has not been tampered with. Integrity is achieved by adding
additional information to the message. This is done through checksums, message digests or digital
signatures. In a cryptosystem, the receiver of the message checks this extra information to verify
whether the message has been tampered with. This is similar to a bank cheque: a cheque issued to
a customer is honored only when the customer signs it, and the cheque number and the signature are
verified to ensure integrity. An integrity check is advised for both static assets and assets in transit.

5.2.1.3 Authorization
Authorization deals with privileges. In any transaction, there is a subject (a person) and an object
(a data item or file). The subject wants some function to be performed on
the object. The privileges on an object are defined through an ACL, or Access Control List. The
ACL is consulted while allowing access to the object. The privilege on an object can be read, write,
or execute. Besides privileges on objects, subjects also need to be classified by privilege; this is
done through authorization.

Authorization is implemented through policy-based resource accessibility. In an
organization (or society) where there is a hierarchy, certain functions are allowed only to
certain levels in the hierarchy. A clerk in a corporation may have authorization to approve an
expense claim less than a specified threshold, supervisors might have a higher limit, and
vice-presidents might have a still higher limit. Similarly, role-based security is used when an
application requires multiple layers of authorization and approvals to complete an action.
A privilege management infrastructure together with role-based authorization allows the
administration and enforcement of user privileges and transaction entitlements. In the
authorization process, users are checked to see if they have the required rights to access the
resource. If they have been granted the required rights, they can access the resource; otherwise
they are denied access.
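A toy sketch of an ACL lookup may make the idea concrete; the roles, object and privileges below are hypothetical examples, not part of the text.

    # Toy ACL check: roles, object and privileges are hypothetical examples.
    ACL = {
        "expense_report.xlsx": {
            "clerk": {"read"},
            "supervisor": {"read", "write"},
            "vice_president": {"read", "write", "execute"},
        },
    }

    def is_authorized(role: str, obj: str, privilege: str) -> bool:
        """Return True if the role holds the requested privilege on the object."""
        return privilege in ACL.get(obj, {}).get(role, set())

    print(is_authorized("clerk", "expense_report.xlsx", "write"))       # False
    print(is_authorized("supervisor", "expense_report.xlsx", "write"))  # True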

5.2.1.4 Non-repudiation
Authentication and non-repudiation have some overlapping properties. Authentication is a
process by which we validate the identity of the parties involved in a transaction. In
non-repudiation we establish the identity of these parties beyond any point of doubt. Non-repudiation
can be considered as authentication with a formal record; these records have legal binding.
Like a signature on a cheque, a digital signature is used to achieve non-repudiation.

5.2.1.5 Availability
Media management is not within the scope of security protocols and algorithms. However,
media management is part of the larger security framework; it is needed to ensure availability of
service. The confidentiality of a message may be maintained and its integrity kept intact, but an
attacker can still manipulate the media to make sure that the message does not reach its
destination. This is like a power grid where there is no theft of power and the power quality is good,
but someone blows up a transmission line. Attacks on availability happen for industrial espionage or
from political motivation. During a festive season, one company may try to block the e-commerce
site of a competitor. In a social framework, someone may try to silence people's voices by using
threats or other means of intimidation to compel an author to remove a web page. If these methods
prove unsuccessful, various denial of service attacks can be launched against the site to make it
impossible to access. In less high-profile cases, people often enjoy far less support for exposing
corruption or criticizing employers and particularly litigious organizations. Also, there needs to be
some way by which terrorist organizations or dictators cannot block mass opinion. This field of
research is called censorship-resistant publishing, and it is achieved through document entanglement.

5.2.1.6 Trust
Computers rely on user authentication and access control to provide security. Within a
network, it may be safe to assume that a keyholder is authentic and is using the key assigned to
him or her. However, these strategies are inadequate for mobile computing environments with a
high level of flexibility. Mobile computing lacks centralized control and its users are not all
predetermined. Mobile users expect to access resources and services anywhere and anytime. This
leads to serious security risks and access control problems. To handle such a dynamic, ever-changing
context, trust-based security management is necessary. Trust involves developing a security policy,
assigning credentials to entities, and verifying that the credentials fulfill the policy. We also need
delegation of trust to third parties and reasoning about users' access rights.

5.2.1.7 Accounting

For any service, the service provider needs to be paid. The service can be either a content
service or a network service. Accounting and billing are critical aspects of a mobile computing
environment. Accounting is the process by which the usage of the service is metered. Based on
the usage, the service provider collects the fee either directly from the customer or through the
home network. This is true even if the user is roaming in a foreign network and using the
services of the foreign network.

The RADIUS (Remote Authentication Dial In User Service) protocol (RFC 2865) has been in use
for a long time for the AAA (Authentication, Authorization, and Accounting) functions on the
Internet. With the demanding service requirements of mobile computing, it is now apparent that
RADIUS is incapable of supporting all these complexities. A new protocol called Diameter
(RFC 3588) has been released to address the AAA needs of data roaming and mobile computing.
Diameter can work in both local and roaming AAA situations.

5.3 SECURITY TECHNIQUES AND ALGORITHMS


Generally the encryption algorithms are divided into two main groups. These are symmetric
key encryption and public key encryption. In a symmetric key encryption, the key used for
decryption is the same as the key for encryption. In some cases of symmetric encryption even the
algorithm used for encryption and decryption is the same. In the case of public key algorithms, the
key used for decryption is different from the key used for encryption.

5.3.1 Stream Ciphering and Block Ciphering


In a stream cipher, a bit or a byte is taken at a time and encrypted. The algorithm looks at the
input plaintext as a stream of bits and encrypts it one bit (or byte) at a time as the stream
progresses. In this technique, the keystream used for encryption is as long as the plaintext.
Wireless LAN (WiFi) encryption uses a stream cipher. In this methodology, the keystream has to be
unique for every encryption; if the same keystream is used for multiple packets, and these packets
can be captured, there is a vulnerability. The other technique is block ciphering. In a block cipher,
one block of plaintext is taken as a whole and used to produce a ciphertext block of equal length.
Typically a block of 64 bits (8 octets) or 128 bits (16 octets) is used for a block cipher. The
majority of cryptosystems use block ciphers.
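A toy Python sketch of the stream-cipher idea: the plaintext is XORed with a keystream of equal length. The random keystream here is only a stand-in for the output of a real stream cipher, and, as the text notes, it must never be reused for another message.

    import os

    def xor_stream(data: bytes, keystream: bytes) -> bytes:
        """Toy stream cipher: XOR each plaintext byte with one keystream byte."""
        return bytes(d ^ k for d, k in zip(data, keystream))

    plaintext = b"attack at dawn"
    keystream = os.urandom(len(plaintext))   # as long as the plaintext; never reused
    ciphertext = xor_stream(plaintext, keystream)
    assert xor_stream(ciphertext, keystream) == plaintext  # same operation decrypts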

5.3.2 Symmetric Key Cryptography


In symmetric key cryptography, the same key is used for both encryption and decryption.
This is like a lock where the same key is used to lock and unlock. Symmetric key algorithms have
been in use for centuries; that is why they are also called conventional or classical algorithms. In
this type of encryption, the key is secret and known only to the encrypting (sender) and decrypting
(receiver) parties; therefore, it is also known as a secret key algorithm. Some authors refer to
symmetric key cryptography as shared key cryptography as well, because the same key is shared
between the sender and the receiver of the message. The unique key chosen for use in a particular
transaction makes the result of encryption unique; selection of a different key causes the cipher
produced for any given set of inputs to be different. The cryptographic security of the data depends
on the security of the algorithm used and the key used to encipher the data. The strength of the
security depends on the size of the key. Unauthorized recipients of the cipher who know the
algorithm but do not have the correct key cannot derive the original data algorithmically. However,
anyone who has both the key and the algorithm can easily decipher the cipher and obtain the
original data. Symmetric key algorithms are much faster compared to their asymmetric (public key)
counterparts.
In symmetric key cryptography, there are four components: the plaintext, the
encryption/decryption algorithm, the secret key (the key for encryption and decryption), and the
ciphertext. If we make KeyE = KeyD, this becomes a symmetric key algorithm. There are many
symmetric key algorithms. The most popular ones are:
DES: Data Encryption Standard. This algorithm is the most widely used and researched, and has
had the longest life so far.

3DES: This is a modification of DES. In this algorithm, DES is applied three times in succession.

AES: Advanced Encryption Standard. This is the current accepted standard for encryption by
FIPS (Federal Information Processing Standards) of the USA.

Skipjack/FORTEZZA: This is a token-based symmetric algorithm used by defense personnel in
the US.

5.3.2.1 DES (Data Encryption Standard)


In the late 1960s, IBM set up a research project in computer cryptography led by Horst
Feistel. In 1971, the project concluded with an algorithm named Lucifer. The original algorithm
used a 64-bit block and a 128-bit key. IBM reduced the length of the key to fit the algorithm into a
single chip. This algorithm was adopted in 1977 by NIST (National Institute of Standards and
Technology) as the Data Encryption Standard (DES). A DES key consists of 64 bits, of which 56
bits are randomly generated and used directly by the algorithm; the other 8 bits are used for error
detection and not for encryption.

DES employs the principles of scrambling and substitution. These processes are repeated a number
of times with keys to ensure that the plaintext is completely transformed into a thoroughly scrambled
bit stream. DES can be divided into the following major functions:

• Permutation of bits in a block. This is the first and last step in DES. In the first step, the 64-bit
plaintext block is rearranged through the Initial Permutation (IP). This is done through a 64-bit
register where the bits of the input block are scrambled in a particular fashion. As the last
step, the reverse permutation is done through the inverse initial permutation, IP-1.

• A key-dependent computation. This includes multiple rounds (iterations) of transformation
through a combination of permutation and substitution. This is the core of the encryption
function.

• Swapping of half blocks of data in each round.

• Key schedule. This breaks the 56-bit key into two 28-bit subkeys and uses them to compute
the bits in the data blocks. In each iteration, the bits within the subkeys are shifted to generate
a new subkey.

• The key-dependent computation is run through 16 rounds. Each round uses the data from
the previous round as input.
The beauty of the DES algorithm is that the same algorithm is used for both encryption and
decryption. DES demonstrates a very high avalanche effect: one bit of change in either the input
data or the key changes many bits in the output. For example, in DES one bit of change in the
input data changes 34 bits of the output, whereas one bit of change in the key affects 35 bits.
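The avalanche effect can be observed directly by flipping one plaintext bit and counting how many ciphertext bits change. The sketch below uses AES from the third-party cryptography package purely for convenience; the 34/35-bit figures quoted above are for DES.

    # Avalanche-effect demo using AES (for convenience; the text's figures are for DES).
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_block(key: bytes, block: bytes) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    key = bytes(16)                      # all-zero 128-bit key, for illustration only
    p1 = bytes(16)                       # all-zero 128-bit plaintext block
    p2 = bytes([0x01]) + bytes(15)       # same block with exactly one bit flipped

    c1, c2 = encrypt_block(key, p1), encrypt_block(key, p2)
    diff = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    print(diff)                          # typically around 64 of the 128 output bits change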

3DES (Triple DES): With the increase in processing power available in PCs, a 56-bit key
became vulnerable to attack. Therefore, to protect the investment and increase security, 3DES
(commonly known as Triple DES) was proposed. 3DES uses the same DES algorithm three times
in succession with different keys. This increases the key size, resulting in higher security. Also, as
the fundamental algorithm in 3DES is practically DES itself, it is easily adoptable without
additional investment. There are two different flavors of 3DES: one uses two 56-bit keys and the
other uses three 56-bit keys. By using three 56-bit keys, the effective key size is increased to
168 bits. Till today, 3DES remains one of the most widely used algorithms for symmetric
cryptography.

5.3.2.2 AES (Advanced Encryption Standard)


We have discussed that the strength of a cryptographic algorithm depends on the size
of the key: the larger the key, the longer it takes to decipher the encrypted data
through brute force. With GHz of computing power easily available, a 56-bit key size is found to be
unsafe today. To overcome these challenges, 3DES became popular. However, 3DES is quite
slow. Also, scientists found that the 64-bit block which both DES and 3DES use may not be the
best; a larger block size is desirable from the efficiency and security points of view.
To overcome these drawbacks, in 1997 NIST (National Institute of Standards and
Technology) in the US issued a call for algorithms for an advanced encryption standard, or AES.
According to the call for proposals, the AES standard was to have equal or better security
compared to 3DES and be more efficient than 3DES. NIST also specified that AES had to be a
symmetric cipher with a block size of 128 bits. Also, it had to support keys of size 128 bits,
192 bits, and 256 bits. Many algorithms competed for the AES standard. Following a rigorous
evaluation process, in November 2001 NIST selected Rijndael as the AES algorithm.
Rijndael is named after the two researchers from Belgium who developed the algorithm,
Joan Daemen and Vincent Rijmen. Rijndael was designed to have the following characteristics:
Resistance against all known attacks
Design simplicity
Speed and code compactness on a wide range of platforms.


Like DES, AES also uses permutation and substitution. However, AES does not use a Feistel
structure. In a Feistel structure, one half of the data block is used to modify the other half of the
data, and then the two halves are swapped. AES instead processes the entire data block in parallel
during each round, using substitutions and permutations.
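A brief sketch of AES encryption and decryption, assuming the third-party Python cryptography package; the hard-coded key and ECB mode are used only to keep the illustration short, not as a recommendation for real systems.

    # AES sketch using the third-party "cryptography" package (an assumption).
    # ECB mode is used only for brevity; real systems should use an authenticated mode.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = bytes(range(32))              # 256-bit key (AES also allows 128 and 192 bits)
    plaintext = b"sixteen byte blk"     # exactly one 128-bit block

    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    decryptor = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    assert decryptor.update(ciphertext) + decryptor.finalize() == plaintext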


5.3.3 Public Key Cryptography


In symmetric key encryption we use the same key for both encryption and decryption. In
public key cryptography we use two different keys: one key for encryption and a different key
for decryption. As two different keys are used, this is also called asymmetric key
cryptography. The development of public key cryptography can be considered the greatest
advance in the history of cryptography. Public key cryptosystems are based on mathematical
functions rather than permutation and substitution. However, it is not true that public key
cryptosystems are in general more secure; there is nothing in principle which makes one
algorithm superior to another from the point of view of resisting cryptanalysis. It is
computationally infeasible to derive the decryption key given only the encryption key and
knowledge of the cryptographic algorithm. The encryption key and the decryption key together
form a key pair. One of the keys from the key pair is made public and the other is kept
private or secret. That is why this is called a public key cryptosystem.

Whitfield Diffie and Martin Hellman came up with the principle of asymmetric key or
public key cryptography in 1976. Public key cryptography as proposed by Diffie and Hellman
solved two difficult problems of cryptography: key distribution and digital signatures.

In public key cryptography, there are six components (Figure 18.3). These are:

• Plaintext: This is the human readable message or data given to the public key algorithm as
input for encryption.

• Ciphertext: This is the scrambled data produced as output of the encryption algorithm.
The ciphertext is unique and depends on the plaintext and the key used for encryption.

• Encryption algorithm: This is the algorithm that does computation and various
transformations on the input plaintext. The output of the transformation is too garbled to
be decipherable for an intruder.

• Decryption algorithm: This algorithm does the reverse function of the encryption
algorithm. This function accepts the ciphertext as input and does some transformation on
the data so that the original data is recovered.

• Public key: This is one of the keys from the key pair. This key is made public for anybody
to access. This key can be used either for encryption or decryption.

• Private key: This is the other key from the key pair. This key is called the private key,
because this is kept secret. This can be used either for encryption or decryption.


There are three public key cryptosystems most widely used today. These are Diffie Hellman, RSA,
and Elliptic curve.

The methodology used for encryption of data and for digital signature is different. During
encryption, the sender uses the public key of the receiver. This is because only the receiver
should be able to decrypt the message, using his or her own secret private key. If a surrogate is
able to intercept the encrypted message, he will not be able to decrypt it, as the key required to do
so is the private key, which is kept secret with the receiver. The methodology used for
authentication or digital signature is just the reverse. When signing a transaction, the sender uses
his or her private key, and the receiver uses the public key of the sender to verify the signature.
This authenticates that the transaction was indeed done by the sender.

5.3.3.1 Diffie Hellman


Whitfield Diffie and Martin Hellman first introduced the notion of public key cryptography in
1976. In the Diffie-Hellman technique, secret keys are never exchanged; the technique
allows two parties to arrive at a shared secret key through the use of public keys. The communicating
parties each select a pair of private and public keys. The public keys are exchanged, and the shared
secret key is generated from a party's own private key and the public key of the other party.

Let us assume that there are two parties, A and B. A and B choose a prime number p and
another number g less than p. These numbers are selected and made available to both A and B in
advance. The steps followed in the Diffie-Hellman algorithm for key generation are as follows:

1. Let p and g be: p = 13 and g = 3.
2. A chooses a random number SA. This number is kept secret as a private key with A. Let
this number be 5.
3. B chooses a random number SB. This number is kept secret as a private key with B. Let
this number be 7.
4. A takes g and raises it to his secret key SA modulo p. This gives TA = (g^SA) mod p =
(3^5) mod 13 = 243 mod 13 = 9. This number 9 is A's public key. A has already
chosen 5 as his private key.
5. B takes g and raises it to his secret key SB modulo p. This gives TB = (g^SB) mod p =
(3^7) mod 13 = 2187 mod 13 = 3. This number 3 is B's public key. B has already chosen 7 as
his private key.
6. The public keys of A and B are exchanged. This means A sends the public key 9 to B and B
sends his public key 3 to A over a public channel like the Internet.
7. A takes B's public key and raises it to his own private key mod p. Therefore, we now
have KA = (TB^SA) mod p = (3^5) mod 13 = 243 mod 13 = 9.
8. B now takes A's public key and raises it to his own private key mod p in a similar
fashion as A. The result will be KB = (TA^SB) mod p = (9^7) mod 13 = 4782969 mod
13 = 9.
9. The value of (TA^SB) mod p = (TB^SA) mod p = 9. Though KA and KB have been
calculated by A and B independently, they will always be equal. Therefore, these keys KA
and KB can now be used by A and B as the shared key for payload encryption.


Neither A nor B shared their secret key for use in symmetric encryption, but they arrived at it using
some properties of modular arithmetic with prime numbers. The example above may look trivial.
However, when these numbers are large, nobody can calculate the key just by knowing p, g and the
exchanged public values in a reasonable period of time. An eavesdropper cannot compute the
discrete logarithm, i.e., recover the private keys SA or SB from the public keys TA and TB, and
therefore cannot compute the shared key.
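The numeric example above can be reproduced in a few lines of Python; the variable names mirror the steps. This is a sketch of the arithmetic only, since a real key exchange would use much larger parameters.

    # Reproduces the worked Diffie-Hellman example (p = 13, g = 3, SA = 5, SB = 7).
    p, g = 13, 3          # agreed in advance, public
    SA, SB = 5, 7         # private keys of A and B

    TA = pow(g, SA, p)    # A's public key  -> 9
    TB = pow(g, SB, p)    # B's public key  -> 3

    KA = pow(TB, SA, p)   # A combines B's public key with A's private key -> 9
    KB = pow(TA, SB, p)   # B combines A's public key with B's private key -> 9

    assert KA == KB == 9  # both sides derive the same shared secret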

5.3.3.2 RSA
RSA is named after its inventors, R.L. Rivest, A. Shamir and L. Adleman. It is a public key
algorithm that supports encryption/decryption, authentication, and digital signature. The key length is
variable, and the most commonly used key size is 512 bits. The key length used in India by CCA
(Controller of Certifying Authorities) is 2048 bits. The key length can be large for higher security
or smaller for better efficiency. The plaintext data block is always smaller than the key length,
whereas the ciphertext block is the same size as the key length. RSA is much slower than
symmetric key encryption; that is why RSA is generally not used for payload encryption. RSA is
used primarily for encrypting a secret key for key exchange.

The RSA algorithm works as follows:

1. Choose two prime numbers p and q.
2. Multiply p and q to generate n. n will be used as the modulus.
3. Calculate ɸ(n) = (p - 1) * (q - 1). ɸ(n) is Euler's totient function: ɸ(p) is the
number of positive integers less than p and relatively prime to p.
4. Choose a number e such that it is relatively prime to ɸ(n).
5. Find d such that it is the multiplicative inverse of e: d = e^(-1) mod ɸ(n).
6. (e, n) is the public key and (d, n) is the private key.
7. To encrypt we use the formula (Ciphertext block) = (Plaintext block)^e mod n.
8. To decrypt we use the formula (Plaintext block) = (Ciphertext block)^d mod n.

Let us take an example where we choose two prime numbers p = 7 and q = 17. Calculate n = p *
q = 7 * 17 = 119.
Find the value of ɸ(n) using the formula ɸ(n) = (p - 1) * (q - 1) = (7 - 1) * (17 - 1) = 6 * 16 = 96.

Now we need to select an e. e must be relatively prime to ɸ(n) and less than ɸ(n). We can see
that 2, 3 and 4 share factors with 96 and are therefore not relatively prime to it, whereas 5 is
relatively prime to 96. Therefore, we can choose e to be 5.

We know that d * e = 1 mod ɸ(n), which in other words means d * e = Y * ɸ(n) + 1 for some
integer Y. To find the value of d, we use the formula d = (Y * ɸ(n) + 1) / e and replace Y with 1,
then 2, then 3, and so on until we get an integer. When we set Y = 4, the equation evaluates to:
(4 * 96 + 1) / 5 = (384 + 1) / 5 = 385 / 5 = 77
Therefore, we get d = 77. We have just generated our key pair. The public key is (5, 119) and the
private key is (77, 119). We can now use this to encrypt and decrypt values.


To encrypt we use the formula (Ciphertext block) = (Plaintext block)^e mod n. Assume that the
plaintext block is 8 bits long and its value is 65. The ciphertext will then be (65^5) mod 119 =
1160290625 mod 119 = 46. To decrypt, we use the formula (Plaintext block) =
(Ciphertext block)^d mod n = (46^77) mod 119 = 65.

The example above may look trivial, and someone may think that by knowing (5, 119) one
can easily find d. This is almost impossible if the numbers are large, for example 128 bits long.
Also, to learn the private key, an eavesdropper needs to recover p and q from n; the eavesdropper
has to factorize n into its two large prime factors, which is extremely hard even over a huge
timeframe. RSA relies on the complexity of prime factorization.
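The same key pair and the encryption of the block 65 can be checked in Python. The three-argument pow performs modular exponentiation without forming the huge intermediate value; this is a sketch of the textbook numbers, not a real RSA implementation, which would use large primes and padding.

    # Reproduces the worked RSA example: p = 7, q = 17, e = 5, d = 77.
    p, q = 7, 17
    n = p * q                   # 119, the modulus
    phi = (p - 1) * (q - 1)     # 96, Euler's totient of n
    e, d = 5, 77                # public and private exponents

    m = 65                      # plaintext block
    c = pow(m, e, n)            # encrypt: 65^5 mod 119 -> 46
    assert c == 46
    assert pow(c, d, n) == m    # decrypt: 46^77 mod 119 -> 65
    assert (d * e) % phi == 1   # d is the multiplicative inverse of e modulo phi(n)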
5.3.3.3 Elliptic Curve
A majority of the products and standards that use public key cryptography use RSA for
encryption, authentication, and digital signature. Due to extensive research in cryptanalysis of RSA
and the increase in available computing power, some vulnerabilities of RSA have been discovered.
There are subexponential algorithms available today for breaking RSA and Diffie-Hellman.
To overcome these threats, the size of the RSA key has been increasing over time, which
puts a tremendous demand on computing power. Elliptic Curve Cryptography (ECC) has shown a lot
of promise for higher security with fewer resources. Elliptic curve cryptography was proposed by
Victor Miller and Neal Koblitz in the mid 1980s. To date there are no subexponential algorithms
available to break ECC. An elliptic curve is the set of solutions (x, y) to an equation of the form
y^2 = x^3 + ax + b, together with an extra point O which is called the point at infinity.

ECC is believed to offer a similar level of security with a much smaller key size. For example,
it is claimed that the level of security that 1024 bits of RSA provide can be achieved with 160 bits of
ECC, and a 210-bit ECC key is equivalent to 2048 bits of RSA. This makes ECC very attractive for
small-footprint devices like cell phones or PDAs.

5.3.3.4 Hashing Algorithms


Hashing functions are one-way functions used for message digests. A hash function takes
input data of any size and produces an output of some fixed size. The outputs are collision
free: this means that two different inputs will, in practice, not produce the same output. It is also not
possible to derive the input from a known output; if we have a message digest, it is infeasible
to derive the original message. The most commonly used hash functions are MD5 and SHA-1.

5.3.3.5 MD5
The MD5 (Message Digest version 5) hashing algorithm is described in RFC 1321. The MD5
algorithm is an extension of the MD4 message-digest algorithm and is slightly slower than MD4.
The MD5 algorithm takes a message of arbitrary length as input and produces a 128-bit 'message
digest' as output. The algorithm processes the input message in blocks of 512 bits. The digest
produced by the algorithm can be considered a 'fingerprint' of the message. It is conjectured
that it is computationally infeasible to produce two messages having the same message digest. It is
also conjectured that it is computationally infeasible to produce any message having a given message
digest. The MD5 algorithm is intended for digital signature applications in a public key
cryptosystem.


5.3.3.6 SHA
The Secure Hash Algorithm (SHA) was developed by NIST (National Institute of
Standards and Technology). SHA was first published in 1993. Later, in 1995, a revised version of the
algorithm was published as SHA-1. SHA processes input in 512-bit blocks and produces 160 bits of
output. Like MD5, SHA-1 is also based on the MD4 algorithm; as both MD5 and SHA-1 are based on
MD4, they are quite similar in nature. However, as SHA-1 generates a longer digest of 160 bits
compared to 128 bits for MD5, it is considered to be more secure.
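The fixed digest sizes can be verified with Python's standard hashlib module; MD5 and SHA-1 appear here only to mirror the text, as both are considered broken for new designs.

    import hashlib

    message = b"The quick brown fox jumps over the lazy dog"

    md5_digest = hashlib.md5(message).digest()    # 16 bytes = 128 bits
    sha1_digest = hashlib.sha1(message).digest()  # 20 bytes = 160 bits
    print(len(md5_digest) * 8, len(sha1_digest) * 8)   # 128 160

    # Any change to the input, however small, produces a completely different digest.
    print(hashlib.md5(b"The quick brown fox jumps over the lazy dog.").hexdigest())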

5.3.3.7 MAC
MAC stands for Message Authentication Code. A MAC is used to perform an integrity check on the
message. A secret key is used to generate a small, fixed-size data block from the message, similar
to a checksum of the message. Both the sender and the receiver share the same secret key for the
MAC. When the sender has a message to send to the receiver, the message is sent along with its
MAC. The receiver receives the message, calculates the MAC from the message and the shared
key, and checks it against the MAC received from the sender. If they are the same, the message is
considered to be intact. HMAC is another mechanism for message authentication, using
cryptographic hash functions like MD5 or SHA-1 in combination with a shared secret key. HMAC
is defined in RFC 2104. The cryptographic strength of HMAC depends on the properties of
the underlying hash function.
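Python's standard hmac module illustrates the sender/receiver flow described above; the key and message are arbitrary examples.

    import hashlib
    import hmac

    shared_key = b"shared-secret-key"        # known to both sender and receiver
    message = b"transfer 100 to account 42"

    # Sender computes the MAC and transmits (message, tag).
    tag = hmac.new(shared_key, message, hashlib.sha1).hexdigest()

    # Receiver recomputes the MAC over the received message and compares.
    expected = hmac.new(shared_key, message, hashlib.sha1).hexdigest()
    print(hmac.compare_digest(tag, expected))   # True -> message considered intact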

5.4 SECURITY MODELS


Security algorithms and protocols are used to protect our assets. These assets can be either
static assets, in the form of priceless data in a database, files, or documents within a computer, or
assets in transit. The security and trust model we choose should be able to secure our assets and
protect our interests. To protect ourselves from different threats, we need to look at security and trust
at the system level and at the application level.

5.4.1 Infrastructure Level Security


Infrastructure level security offers security at the perimeter of the system. This primarily
includes the networks and the infrastructure; we can call this network security as well. Infrastructure
level security includes protecting the infrastructure or the network so that attacks from worms,
viruses and Trojan horses can be prevented. Prevention of other forms of attack like intrusion, and
the different firewalls in the network, are all part of infrastructure-level security. Virtual private
networks (VPNs) are part of infrastructure security as well. Infrastructure level security for a mobile-
computing environment needs to handle some additional threats compared to a wired network. For a
mobile-computing network, the last-mile access network will be wireless in most situations.
Therefore, at the access level, additional infrastructure security is necessary. An example is
encryption in GSM using the A5 algorithm. WiFi/wireless LAN networks use WEP. Some
vulnerabilities have been identified in the infrastructure security of WiFi; therefore, new security
protocols like 802.1x and 802.11i have been proposed to take care of the over-the-air interface in the
access network.


5.4.2 System Level Security

In system level security we secure our systems to protect our assets. The security framework
provided by the operating system and its shells is part of system level security. Authentication
challenges during Unix login, or login into a mainframe computer through username and password,
are system level security. Access control lists (ACLs), file system security, memory security, etc.
are also part of system level security. It protects the system from worms, viruses and Trojan horses.
Prevention of other forms of attack like buffer overflow attacks, intrusion etc. can also be part of
system level security, as are security protocols like SSL and TLS. There is a concept of a capability-
based system, where security is policy-driven and managed through capabilities. Even if a virus
enters such a system, or an intrusion happens, it will not be able to damage any asset in the system.
One such operating system is EROS.

Database security is part of system level security. In database security, data in the database is
secured by the database software. This can be encrypting a column in a row or some special check
based on ACLs and capabilities. Most database software today offers security at this layer. This
is over and above the security offered by the operating system.

5.4.3 Policy Based Security

Security systems implemented for wired networks in organizations are primarily policy based.
Effective security policies make frequent references to standards and guidelines that exist within an
organization. A policy is a written-down set of rules about what is allowed and what is not allowed in
the network. Policies are usually area-specific, covering a single area. RFC 2196 defines a
security policy as "a formal statement of the rules by which people who are given access to an
organization's technology and information assets must abide." For example, there could be a rule in
the organization that nobody is allowed to have a global IP address. To stop spam and mail-borne
viruses, there may be another rule that prevents access to external email systems (like Hotmail) from
the corporate network. To stop the possibility of espionage, there could be a rule that FTP from the
Internet is not allowed to any machine within the intranet. A standard is typically a collection of
system-specific requirements that must be met by everyone. A guideline is typically a collection of
system-specific procedural "suggestions" for best practice. Guidelines are not requirements to be
met, but are strongly recommended.

In a wired network where systems are stationary and the network structure is static, it is
possible to define security policies, and it is possible to enforce these policies or rules.
However, things are different when we move to a mobile computing environment. In a mobile
computing environment, a user will move from one device to another and from one network to
another. These devices or networks may be of similar or different types; for example, a
user moves from a WiFi network to a CDMA2000 network, or from PalmOS to Windows CE. In a
network with static nodes, it is possible to define a security policy, and it may be possible to enforce
such policies as well. However, in the case of mobile computing, where nodes roam from one
network to another, it may not be practical to define a security policy and implement it. Therefore,
over and above policy-based security, mobile nodes need object security. In object security, objects
carry their own security signatures and capabilities. This is achieved through the concept of a
principal. Therefore, when a device moves from network to network, the device carries its security
requirements and security signature with it.

5.4.4 Application Level Security


In a mobile-computing environment we need security at the application level. Application
security looks at security at the content level; this can also be termed peer-to-peer security.
The application at the client device talks to the application at the server and handles security
requirements end-to-end as the content may demand. In a mobile computing environment, we cannot
make any assumptions about the client context or the network context. Therefore, security
needs to be addressed at the content level, using context awareness, in J2ME, .NET, WIM or MExE
(Mobile Execution Environment) environments. Using cryptographic libraries, we can build security
at the application level. This security will be custom-built and can use standard algorithms or new
algorithms as agreed by the peer nodes. Of course the system/infrastructure level security, if any,
will be over and above the application level security.

5.4.5 Java Security


The security model provided by Java covers both system level security and application level
security. Java system level security is provided through the 'sandbox' model. The sandbox provides a
restricted environment for code execution through the Java virtual machine. In the sandbox model,
local code is trusted to have full access to system resources like the file system, memory, etc.
However, code downloaded from a remote site as an applet is not trusted; an applet can access only
the limited resources provided inside the sandbox. Java supports digitally signed trusted applets. A
digitally signed applet is treated like local code, with full access to resources. Digitally signed applets
use public key infrastructure. Prior to transmission, the applet server signs the applet JAR file using
its digital certificate. Upon receipt, the client-side Java security manager verifies the signature and
decides whether the origin and integrity of the application are trusted. Once the authentication is
successful, the application code is delivered to the client for execution.
As part of application level security, the Java framework supports a cryptographic library through
the Java Cryptography Architecture (JCA). JCA refers to a framework for accessing and developing
cryptographic functionality for the Java platform. These cryptographic services are:

• Symmetric key encryption algorithms
• Public key encryption algorithms
• Digital signature algorithms
• Message digest algorithms
• Message authentication code generation
• Key generation algorithms

• Key exchange algorithms
• Keystore creation and management
• Algorithm parameter management
• Algorithm parameter generation
• Key factory support to convert between different key representations
• Certificate factory support to generate certificates and certificate revocation lists (CRLs)
from their encodings
• Random-number generation (RNG) algorithms
• Support for SSL and TLS through HTTP support.

5.6 Orthogonal Frequency Division Multiplexing (OFDM)


Orthogonal Frequency Division Multiplexing (OFDM) is a digital modulation technique that
allows efficient data transmission over wireless communication channels. It is widely used in
various applications, including Wi-Fi, 4G LTE, and digital broadcasting. OFDM divides a high-speed
data stream into multiple parallel substreams and transmits them simultaneously over different
orthogonal subcarrier frequencies. A comparison of FDM and OFDM spectra shows the saving in
bandwidth obtained by OFDM.

5.6.1 Working Principle of OFDM


OFDM is a specialised form of FDM with the constraint that the sub-streams into which the main
signal is divided are orthogonal to each other. Orthogonal signals are signals that are mutually
perpendicular in the mathematical sense; a main property of orthogonal signals is that they do not
interfere with each other. When any signal is modulated by the sender, its sidebands spread out on
either side. A receiver can successfully demodulate the data only if it receives the whole signal. In
the case of FDM, guard bands are inserted so that interference between the signals, resulting in
cross-talk, does not occur. However, since orthogonal signals are used in OFDM, no interference
occurs between the signals even if their sidebands overlap, so the guard bands can be removed, thus
saving bandwidth. The criterion that needs to be maintained is that the carrier spacing should be
equal to the reciprocal of the symbol period.

In order for OFDM to work, there must be very accurate synchronization between the
communicating nodes. If frequency deviation occurs in the sub-streams, they will no longer be
orthogonal, and interference between the signals will occur.
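A small numerical check of the orthogonality condition: with the carrier spacing equal to the reciprocal of the symbol period, two different subcarriers integrate to approximately zero over one symbol. This is a plain-Python sketch; a practical OFDM modulator would generate the subcarriers with an IFFT.

    import math

    T = 1e-3          # symbol period in seconds; subcarrier spacing = 1/T = 1 kHz
    N = 10_000        # samples used for numerical integration over one symbol

    def correlate(k: int, m: int) -> float:
        """Numerically integrate cos(2*pi*k*t/T) * cos(2*pi*m*t/T) over one symbol."""
        dt = T / N
        return sum(math.cos(2 * math.pi * k * i * dt / T) *
                   math.cos(2 * math.pi * m * i * dt / T) for i in range(N)) * dt

    print(correlate(3, 3))   # ~T/2 = 5e-4: a subcarrier correlated with itself
    print(correlate(3, 4))   # ~0: subcarriers spaced at 1/T are orthogonal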

5.7 WATM
Wireless ATM (WATM; sometimes also called wireless, mobile ATM, wmATM) does not
only describe a transmission technology but tries to specify a complete communication system
(Acampora, 1996), (Ayanoglu, 1996). While many aspects of the IEEE WLANs originate
from the data communication community, many WATM aspects come from the
telecommunication industry (Händel, 1994). This specific situation can be compared to the case of
competition and merging with regard to the concepts TCP/IP and ATM (IP switching, MPLS).
Similar to fixed networks, where ATM never made it to the desktop, WATM will not make it to
mobile terminals. However, many concepts found in WATM can also be found in QoS-supporting
WLANs such as HiperLAN2.

5.7.1 WIRELESS ATM GROUP

• Location management: Similar to other cellular networks, WATM networks must
be able to locate a wireless terminal or a mobile user, i.e., to find the current
access point of the terminal to the network.
• Mobile routing: Even if the location of a terminal is known to the system, it still has to
route the traffic through the network to the access point currently responsible for the
wireless terminal. Each time a user moves to a new access point, the system must
reroute traffic.
• Handover signaling: The network must provide mechanisms which search for new
access points, set up new connections between intermediate systems, and signal
the actual change of the access point.
• QoS and traffic control: In contrast to wireless networks offering only best-effort traffic,
and to cellular networks offering only a few different types of traffic, WATM should be
able to offer many QoS parameters. To maintain these parameters, all actions such as
rerouting, handover etc. have to be controlled. The network must pay attention to the
incoming traffic (and check if it conforms to some traffic contract) in a similar way to
today's ATM (policing).
• Network management: All extensions of protocols or other mechanisms also require an
extension of the management functions to control the network.


Fig. Example of a generic WATM reference model

5.7.2 Location management


As for all networks supporting mobility, special functions are required for looking up the
current position of a mobile terminal, for providing the moving terminal with a permanent address,
and for ensuring security features such as privacy, authentication, and authorization. These and more
functions are grouped under the term location management.
Several requirements for location management have been identified (Bhat, 1998):

 Transparency of mobility: A user should not notice the location management function under
normal operation. Any change of location should be performed without user activity. This
puts certain constraints on the permissible time delay of the functions associated with location
management. Transparent roaming between different domains (private/private, private/public,
public/public) should be possible. This may include roaming between networks based on
different technologies using, for example, a dual mode terminal.
 Security: To provide a security level high enough to be accepted for mission-critical use
(business, emergency etc.), a WATM system requires special features. All location and user
information collected for location management and accounting should be protected against
unauthorized disclosure. This protection is particularly important for roaming profiles that
allow the precise tracking of single terminals. As the air interface is very simple to access,
special access restrictions must be implemented to, e.g., keep public

5.7.3 Mobile quality of service

Quality of service (QoS) guarantees are one of the main advantages envisaged for WATM
networks compared to, e.g., mobile IP working over packet radio networks. While the Internet
Protocol IP does not guarantee QoS, ATM networks do (at the cost of higher complexity). WATM
networks should provide mobile QoS (M-QoS). M-QoS is composed of three different parts:
 Wired QoS: The infrastructure network needed for WATM has the same QoS properties as any
wired ATM network. Typical traditional QoS parameters are link delay, cell delay variation,
bandwidth, cell error rate etc.
 Wireless QoS: The QoS properties of the wireless part of a WATM network differ from those of
the wired part. Again, link delay and error rate can be specified, but now the error rate is typically
some orders of magnitude higher than that of, e.g., fiber optics. Channel reservation and
multiplexing mechanisms at the air interface strongly influence cell delay variation.
 Handover QoS: A new set of QoS parameters is introduced by handover. For example, handover
blocking due to limited resources at target access points, cell loss during handover, or the speed of
the whole handover procedure represent critical factors for QoS.
The generic WATM reference model (see the figure above) comprises the following terminal and network components:
 T (terminal): A standard ATM terminal offering the ATM services defined for fixed ATM networks.
 MT (mobile terminal): A standard ATM terminal with the additional capability of reconnecting
after an access point change. The terminal can be moved between different access points within a
certain domain.
 WT (wireless terminal): This terminal is accessed via a wireless link, but the terminal itself is fixed,
i.e., the terminal keeps its access point to the network.
 WMT (wireless mobile terminal): The combination of a wireless and a mobile terminal results in
the WMT. This is exactly the type of terminal usually considered in WATM mobility scenarios.
 RAS (radio access system): Point of access to the network via a radio link.
 EMAS (end-user mobility-supporting ATM switch; -E: edge, -N: network): Switches with support
for end-user mobility.
 NMAS (network mobility-supporting ATM switch): A whole network can be mobile, not just
terminals. Certain additional functions are needed to support this mobility from the fixed network.
 MS (mobile ATM switch): ATM switches can also be mobile and can use wireless access to
another part of the ATM network.
 ACT (ad-hoc controller terminal): For the configuration of ad-hoc networks, special terminal types
might be required within the wireless network. These terminals could, for example, control
wireless access without an RAS.

5.8 Multi Protocol Label Switching (MPLS)


Multi Protocol Label Switching (MPLS) is an IP packet routing technique that routes IP
packets through paths via labels instead of looking up complex routing tables in routers. This
feature helps in increasing the delivery rate of IP packets. MPLS uses the layer 3 service, i.e., the
Internet Protocol, and uses routers as forwarding devices. The traffic of different customers is
separated from each other because MPLS works somewhat like a VPN. It does not work like a
regular VPN that encrypts the data, but it ensures that a packet from one customer cannot be
received by another customer. An MPLS header is added to the packet and lies between layers 2
and 3. Hence, it is also considered to be a layer 2.5 protocol.

Fig. MPLS Header

5.8.1 MPLS Header


The MPLS header is 32 bits long and is divided into four parts (a small packing/unpacking sketch in Python follows the list):
1. Label - This field is 20 bits long and can take values between 0 and 2^20 - 1.
2. Exp - This field is 3 bits long and is used for Quality of Service (QoS).
3. Bottom of Stack (S) - This field is 1 bit in size. MPLS labels are stacked one over another. If only
one label remains in the MPLS header, this bit is 1; otherwise it is 0.

4. Time to Live (TTL) - It is 8 bits long and its value is decreased by one at each hop to prevent
the packet from getting stuck in the network.
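The header layout described above can be made concrete with a short Python sketch (illustrative only; the example label and TTL values are arbitrary) that packs the four fields into a 32-bit word, with the label in the top 20 bits, then the 3 Exp bits, the S bit, and the 8 TTL bits, and unpacks them again.

```python
import struct

def pack_mpls(label, exp, s, ttl):
    """Pack the four MPLS header fields into a 4-byte big-endian word."""
    assert 0 <= label < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    word = (label << 12) | (exp << 9) | (s << 8) | ttl
    return struct.pack("!I", word)

def unpack_mpls(header):
    """Inverse of pack_mpls: recover (label, exp, s, ttl) from 4 bytes."""
    (word,) = struct.unpack("!I", header)
    return (word >> 12) & 0xFFFFF, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF

hdr = pack_mpls(label=1055, exp=0, s=1, ttl=64)   # arbitrary example values
print(hdr.hex())          # '0041f140'
print(unpack_mpls(hdr))   # (1055, 0, 1, 64)
```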

5.8.2 Forwarding in MPLS


LSRs receive IP packets from the CE (customer edge) router and add an MPLS header between layer 3
and layer 2, which means the labeled packet is then encapsulated in link-layer (layer 2) frames. This
allows LSRs to receive packets carried in frames of different protocols, such as Frame Relay or Metro
Ethernet, which is why the technique is called multi-protocol.
MPLS forwarding is based on the label attached to the IP packet. This label attachment is regulated by
a protocol called the Label Distribution Protocol (LDP). Each LSR initially learns routes as normal routers
do. This learning starts with the PE (provider edge) routers. Each PE router learns routes to different
subnets from the CE router. Suppose PE router PE1 learns the route to a subnet (say subnet 1) from the
CE router. PE1 will then assign a label for this route, advertise it to its neighboring LSRs, and tell them
that if they receive a packet whose destination address belongs to that subnet, they should attach this
label and forward the packet to PE1.

This process is then repeated by that LSR. In this way, each LSR learns routes and adds this information
to its Label Forwarding Information Base (LFIB). Now, if any PE receives a packet destined to that
subnet, then by looking at the labels and the LFIB, the LSRs can easily forward the IP packet.
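The label-based forwarding described above can be sketched in a few lines of Python. This is not a real LDP/MPLS implementation; the labels, next hops and the packet representation are all invented for illustration. Each LSR looks up the incoming label in its LFIB, swaps it for the outgoing label, and forwards the packet towards the recorded next hop.

```python
# Toy LFIB for one LSR: in_label -> (out_label, next_hop). All values are invented.
LFIB = {
    100: (200, "LSR-B"),   # e.g. traffic for subnet 1, label learned via LDP from LSR-B
    101: (201, "LSR-C"),
}

def forward(packet):
    """Label-switch a packet of the form {'label': int, 'payload': ...}."""
    in_label = packet["label"]
    if in_label not in LFIB:
        raise KeyError(f"no LFIB entry for label {in_label}")
    out_label, next_hop = LFIB[in_label]
    packet["label"] = out_label          # label swap
    print(f"label {in_label} -> {out_label}, forwarding to {next_hop}")
    return next_hop, packet

forward({"label": 100, "payload": b"IP packet for subnet 1"})
```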

Fig. MPLS Network

5.9 Ten Pillars of 5G


We identify 10 key building blocks for 5G, as illustrated in the figure. In the following, we elaborate
on each of these blocks and highlight their role and importance for achieving 5G.

5.9.1 Evolution of Existing RATs


As mentioned before, 5G will hardly be a single specific RAT; rather, it is likely to be a
collection of RATs including the evolution of the existing ones complemented with novel
revolutionary designs. As such, the first and most economical solution to address the 1000x
capacity crunch is the improvement of the existing RATs in terms of SE, EE and latency, as well as
supporting flexible RAN sharing among multiple vendors. Specifically, LTE needs to evolve to
support massive/3D MIMO to further exploit the spatial degrees of freedom (DoF) through advanced
multi-user beamforming, and to further enhance interference cancellation and interference coordination
capabilities in hyperdense small-cell deployment scenarios. WiFi also needs to evolve to better
exploit the available unlicensed spectrum. IEEE 802.11ac, the latest evolution of the WiFi
technology, can provide broadband wireless pipes with multi-Gbps data rates. It uses a wider
bandwidth of up to 160 MHz in the less polluted 5 GHz ISM band, employing up to 256-QAM
(Quadrature Amplitude Modulation). It can also support simultaneous transmissions of up to four
streams using the multi-user MIMO technique. The incorporated beamforming technique has boosted the
coverage by several orders of magnitude compared to its predecessor (IEEE 802.11n). Finally,
major telecom companies such as Qualcomm have recently been working on developing LTE in the
unlicensed spectrum as well as integrating 3G/4G/WiFi transceivers into a single multi-mode base
station (BS) unit. In this regard, it is envisioned that the future UE will be intelligent enough to select
the best interface to connect to the RAN based on the QoS requirements of the running application.
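As a rough back-of-the-envelope illustration of how these ingredients (160 MHz channels, 256-QAM, four spatial streams) combine into multi-Gbps rates, the following Python snippet computes an approximate peak PHY rate. The subcarrier count, coding rate and symbol duration are quoted from memory of the 802.11ac VHT PHY and should be treated as approximate assumptions rather than normative figures.

```python
# Approximate 802.11ac (VHT160, short guard interval) parameters -- assumptions, not normative.
data_subcarriers = 468          # data subcarriers in a 160 MHz channel
bits_per_symbol  = 8            # 256-QAM carries log2(256) = 8 bits per subcarrier
coding_rate      = 5 / 6        # highest VHT coding rate
symbol_time_s    = 3.6e-6       # 3.2 us OFDM symbol + 0.4 us short guard interval
streams          = 4            # simultaneous spatial streams (multi-user MIMO)

per_stream = data_subcarriers * bits_per_symbol * coding_rate / symbol_time_s
total = streams * per_stream
print(f"per stream: {per_stream / 1e6:.0f} Mbit/s")   # ~867 Mbit/s
print(f"4 streams : {total / 1e9:.2f} Gbit/s")        # ~3.5 Gbit/s
```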

5.9.2 Hyper dense small-cell deployment


Hyperdense small-cell deployment is another promising solution to meet the capacity crunch, while
bringing additional EE to the system as well. This innovative solution, also referred to as HetNet, can
help to significantly enhance the area spectral efficiency of the network. There are two ways to realise
a HetNet: (i) with small cells of the same cellular technology, that is, with micro-, pico-, or femtocells;
(ii) with small cells of different technologies in contrast to just the cellular one (e.g. High Speed Packet
Access (HSPA), LTE, WiFi, and so on).
The former is called a multi-tier HetNet, while the latter is referred to as a multi-RAT HetNet.
Qualcomm, a leading company in addressing the 1000x capacity challenge through small-cell
deployments, has demonstrated that adding small cells can scale the capacity of the network almost in a
linear fashion. That is, the capacity doubles every time we double the number of small cells (see the
sketch at the end of this section). However, reducing the cell size increases the inter-cell interference
and the required control signaling. To overcome this drawback, advanced inter-cell interference
management techniques are needed at the system level, along with complementary interference
cancellation techniques at the UEs. Small-cell
enhancement was the focal point of LTE R-12, where the New Carrier Type (NCT) (also known as
the Lean Carrier) was introduced to assist small cells by the host macro-cell. This allows more
efficient control plane functioning (e.g. for mobility management, synchronisation, resource
allocation, etc.) through the macro-layer while providing a high-capacity and spectrally efficient data
plane through the small cells. Finally, reducing the cell size can also improve the EE of the network
by bringing the network closer to the UEs and hence shrinking the power budget of the wireless links.
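The near-linear capacity scaling claim can be illustrated with a trivial Python sketch; the 100 Mbit/s per-cell figure is a hypothetical value, and the model deliberately ignores the inter-cell interference discussed above.

```python
def area_capacity(num_cells, per_cell_mbps=100):
    """Idealised model: total area capacity scales linearly with the cell count
    (real deployments lose some of this to inter-cell interference)."""
    return num_cells * per_cell_mbps

for n in (1, 2, 4, 8, 16):
    print(n, "cells ->", area_capacity(n), "Mbit/s")
# Doubling the number of small cells roughly doubles the area capacity.
```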
5.9.3 Self-Organising Network

Self-Organising Network (SON) capability is another key component of 5G. As the population
of the small cells increases, SON gains more momentum. Almost 80% of the wireless traffic is
generated indoors. To carry this huge traffic, we need hyperdense small-cell deployments in homes
— installed and maintained mainly by the users — out of the control of the operators. These indoor
small cells need to be self-configurable and installed in a plug and play manner. Furthermore, they
need to have SON capability to intelligently adapt themselves to the neighbouring small cells to
minimize inter-cell interference. For example, a small cell can do this by autonomously
synchronising with the network and cleverly adjusting its radio coverage.

5.9.4 Machine Type Communication


Apart from people, connecting mobile machines is another fundamental aspect of 5G. Machine
type communication (MTC) is an emerging application where either one or both of the end users of
the communication session involve machines. MTC imposes two main challenges on the network.
First, the number of devices that need to be connected is tremendously large. Ericsson (one of the
leading companies in exploring 5G) foresees that 50 billion devices need to be connected in the future
networked society; the company envisages that 'anything that can benefit from being connected will be
connected'. The other challenge imposed by MTC is the accelerating demand for real-time and
remote control of mobile devices (such as vehicles) through the network. This requires an extremely
low latency of less than a millisecond, the so-called "tactile Internet", dictating a 20x latency improvement
from 4G to 5G.
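A trivial arithmetic check of the latency target mentioned above, in Python; the 20 ms figure for a representative 4G round-trip latency is an assumption chosen only to illustrate the quoted 20x improvement.

```python
lte_latency_ms = 20.0                    # assumed representative 4G round-trip latency
improvement_factor = 20                  # 20x latency improvement targeted for 5G
target_5g_ms = lte_latency_ms / improvement_factor
print(f"5G target latency ~ {target_5g_ms} ms")   # 1.0 ms, the 'tactile Internet' regime
```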
5.9.5 Developing Millimetre-Wave RATs

The traditional sub-3 GHz spectrum is becoming increasingly congested and the present RATs
are approaching Shannon's capacity limit. As such, research on exploring cm- and mmWave bands
for mobile communications has already started. Although the research in this field is still in its
infancy, the results look promising. There are three main impediments for mmWave mobile
communications.

First, the path loss is relatively higher at these bands, compared to the conventional sub-3GHz
bands. Second, electromagnetic waves tend to propagate in the Line-Of-Sight (LOS) direction,
rendering the links vulnerable to being blocked by moving objects or people. Last but not least, the
penetration loss through the buildings is substantially higher at these bands, blocking the outdoor

RATs for the indoor users. Despite these limitations, there are myriad advantages for mmWave
communications.

An enormous amount of spectrum is available in the mmWave bands; for example, at 60 GHz, there is
9 GHz of unlicensed spectrum available. This amount of spectrum is huge, especially when we think
that the global allocated spectrum for all cellular technologies hardly exceeds 780 MHz. This amount
of spectrum can completely revolutionize mobile communications by providing ultra-broadband
wireless pipes that can seamlessly glue the wired and the wireless networks. Other advantages of
mmWave communications include the small antenna sizes (λ/2) and their small separations (also
around λ/2), enabling tens of antenna elements to be packed in just one square centimetre. This in
turn allows us to achieve very high beamforming gains in relatively small areas, which can be
implemented at both the BS and the UE. Incorporating smart phased array antennas, we can fully
exploit the spatial degree of freedom of the wireless channel (using Space-Division Multiple Access
(SDMA)), which can further improve the system capacity. Finally, as the mobile station moves
around, beamforming weights can be adjusted adaptively so that the antenna beam is always pointing
to the BS.
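A quick Python calculation makes the antenna-size argument concrete: at 60 GHz the wavelength is 5 mm, so half-wavelength element spacing is about 2.5 mm, which lets a small grid of elements fit within one square centimetre. The square-grid packing count below is only a rough illustration.

```python
c = 3e8                              # speed of light (m/s)
f = 60e9                             # 60 GHz carrier frequency
wavelength = c / f                   # 0.005 m = 5 mm
spacing = wavelength / 2             # typical lambda/2 element spacing, ~2.5 mm

elements_per_row = round(0.01 / spacing)          # elements that fit along 1 cm
print(f"wavelength       : {wavelength * 1e3:.1f} mm")
print(f"lambda/2 spacing : {spacing * 1e3:.2f} mm")
print(f"grid per cm^2    : {elements_per_row ** 2} elements")  # 4 x 4 = 16, i.e. 'tens'
```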

5.9.6 Redesigning Backhaul Links


Redesigning the backhaul links is the next critical issue of 5G. In parallel to improving the RAN,
backhaul links also need to be reengineered to carry the tremendous amount of user traffic generated
in the cells. Otherwise, the backhaul links will soon become bottlenecks, threatening the proper
operation of the whole system. The problem gains more momentum as the population of small cells
increases. Different communication mediums can be considered, including optical fibre, microwave
and mmWave. In particular, mmWave point-to-point links exploiting array antennas with very sharp
beams can be considered for reliable self-backhauling without interfering with other cells or with the
access links.

5.9.7 Energy Efficiency

EE will remain an important design issue while developing 5G. Today, Information and
Communication Technology (ICT) consumes as much as 5% of the electricity produced around the globe
and is responsible for approximately 2% of global greenhouse gas emissions, roughly equivalent to
the emissions created by the aviation industry. What is more concerning is the fact that if we do not take
any measures to reduce the carbon emissions, this contribution is projected to double by 2020. Hence,
it is necessary to pursue energy-efficient design approaches from the RAN and backhaul links to the UEs.
The benefit of energy-efficient system design is manifold. First, it can play an important role
in sustainable development by reducing the carbon footprint of the mobile industry itself. Second,
ICT as the core enabling technology of the future smart cities can also play a fundamental role in
reducing the carbon footprint of other sectors (e.g. transportation). Third, it can increase the revenue
of mobile operators by reducing their operational expenditure (Opex) through saving on their
electricity bills. Fourth, reducing the ‘Joule per bit’ cost can keep mobile services affordable for the
users, allowing flat rate pricing in spite of the 10 to 100x data rate improvement expected by 2020.
Last but not least, it can extend the battery life of the UEs, which has been identified by the market
research company TNS as the number one criterion of the majority of the consumers purchasing a
mobile phone.
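The 'Joule per bit' metric mentioned above can be illustrated with a one-line Python calculation; both numbers below are hypothetical and serve only to show how the metric is formed.

```python
bs_power_w = 1000.0                  # hypothetical base-station power draw (watts)
carried_traffic_bps = 1e9            # hypothetical carried traffic (1 Gbit/s)
joules_per_bit = bs_power_w / carried_traffic_bps
print(f"{joules_per_bit * 1e6:.1f} microjoules per bit")   # 1.0 uJ/bit in this example
```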
5.9.8 Allocation of New Spectrum for 5G:
Another critical issue of 5G is the allocation of new spectrum to fuel wireless
communications in the next decade. The 1000x traffic surge can hardly be managed by only
improving the spectral efficiency or by hyper-densification. In fact, leading telecom companies
such as Qualcomm and NSN believe that apart from technology innovations, 10 times more spectrum
is needed to meet the demand. The allocation of around 100 MHz of bandwidth in the 700 MHz band
and another 400 MHz of bandwidth at around 3.6 GHz, as well as the potential allocation of several
GHz of bandwidth in the cm- or mmWave bands to 5G, will be the focal point of the next WRC
conference, organised by ITU-R in 2015.
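A back-of-the-envelope tally in Python, using only the figures quoted in this section, shows why new allocations plus mmWave bands are seen as the route to roughly 10x more spectrum; the 9 GHz figure is the 60 GHz unlicensed example quoted earlier and is used here purely for illustration.

```python
current_cellular_mhz = 780            # global cellular allocation quoted earlier
candidate_sub6_mhz = 100 + 400        # 700 MHz band + ~3.6 GHz band allocations under discussion
mmwave_example_mhz = 9000             # e.g. the unlicensed spectrum available around 60 GHz

print(f"sub-6 GHz candidates: {candidate_sub6_mhz} MHz "
      f"(~{candidate_sub6_mhz / current_cellular_mhz:.1f}x today's cellular total)")
print(f"with the mmWave example included: "
      f"~{(candidate_sub6_mhz + mmwave_example_mhz) / current_cellular_mhz:.0f}x")
```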

5.9.9 Spectrum Sharing:


The regulatory process for new spectrum allocation is often very time consuming, so the efficient
use of available spectrum is always of critical importance. Innovative spectrum allocation models
(different from the traditional licensed or unlicensed allocation) can be adopted to overcome the
existing regulatory limitations. Plenty of radio spectrum has traditionally been allocated for military
radars where the spectrum is not fully utilised all the time (24/7) or in the entire geographic region.
On the other hand, spectrum cleaning is very difficult as some spectrum can never be cleaned or can
only be cleaned over a very long time; beyond that, the spectrum can be cleaned in some places but
not in the entire nation. As such, the Authorised/Licensed Shared Access (ASA/LSA) model has been
proposed by Qualcomm to exploit the spectrum in small cells (with limited coverage) without
interfering with the incumbent user (e.g. military radars). This kind of spectrum allocation model can
compensate for the very slow process of spectrum cleaning. It is also worth mentioning that as mobile
traffic growth accelerates, spectrum refarming becomes important, to clean previously allocated
spectrum and make it available for 5G. Cognitive Radio concepts can also be revisited to jointly utilize
licensed and unlicensed spectrum. Finally, new spectrum sharing models might be needed as multi-
tenant network operation becomes widespread.
5.9.10 RAN Virtualisation:

The last but not least critical enabler of 5G is the virtualisation of the RAN, i.e., the sharing of wireless
infrastructure among multiple operators. Network virtualisation needs to be pushed from the wired
core network (e.g. switches and routers) towards the RAN. For network virtualisation, the
intelligence needs to be taken out of the RAN hardware and controlled in a centralised manner using
a software brain, which can be done in different network layers. Network virtualisation can bring
myriad advantages to the wireless domain, including both Capex (Capital Expenditure) and Opex
savings through multi-tenant network and equipment sharing, improved EE, on-demand up- or down-
scaling of the required resources, and increased network agility through the reduction of the time-to-
market for innovative services (from 90 hours to 90 minutes), as well as easy maintenance and
fast troubleshooting through increased transparency of the network. Virtualisation can also serve to
converge the wired and the wireless networks by jointly managing the whole network from a central
orchestration unit, further enhancing the efficiency of the network. Finally, multi-mode RANs
supporting 3G, 4G or WiFi can be adopted, where different radio interfaces can be turned on or off
through the central software control unit to improve the EE or the Quality of Experience (QoE) for
the end users.
