
5. Conventional Encryption
Security of information, security attacks, classical techniques, Caesar cipher, block cipher
principles, data encryption standard, key generation for DES, design and modes of operation,
S-box design, triple DES with two/three keys, introduction to the international data encryption
algorithm, key distribution.

Security of information:
A Definition of Computer Security
The NIST Computer Security Handbook [NIST95] defines the term computer security as
follows:
Computer Security: The protection afforded to an automated information system in order to
attain the applicable objectives of preserving the integrity, availability, and confidentiality of
information system resources (includes hardware, software, firmware, information/data, and
telecommunications).
This definition introduces three key objectives that are at the heart of computer security:
 Confidentiality: This term covers two related concepts:
Data confidentiality: Assures that private or confidential information is not made available or
disclosed to unauthorized individuals.
Privacy: Assures that individuals control or influence what information related to them may
be collected and stored and by whom and to whom that information may be disclosed.
 Integrity: This term covers two related concepts:
Data integrity: Assures that information (both stored and in transmitted packets) and programs
are changed only in a specified and authorized manner.
System integrity: Assures that a system performs its intended function in an unimpaired
manner, free from deliberate or inadvertent unauthorized manipulation of the system.
 Availability: Assures that systems work promptly and service is not denied to authorized
users.
These three concepts form what is often referred to as the CIA triad. The three concepts embody
the fundamental security objectives for both data and information and computing services.
For example, the NIST standard FIPS 199 (Standards for Security Categorization of Federal
Information and Information Systems) lists confidentiality, integrity, and availability as the
three security objectives for information and for information systems. FIPS 199 provides a
useful characterization of these three objectives in terms of requirements and the definition of
a loss of security in each category:
 Confidentiality: Preserving authorized restrictions on information access and
disclosure, including means for protecting personal privacy and proprietary
information. A loss of confidentiality is the unauthorized disclosure of information.
 Integrity: Guarding against improper information modification or destruction,
including ensuring information nonrepudiation and authenticity. A loss of integrity is
the unauthorized modification or destruction of information.
 Availability: Ensuring timely and reliable access to and use of information. A loss of
availability is the disruption of access to or use of information or an information system.
Alipta Anil Pawar
Assistant Professor,
Dept. of Electronics and Telecommunication Engineering,
Dr. Babasaheb Ambedkar Technological University, Lonere, Raigad
Although the use of the CIA triad to define security objectives is well established, some in the
security field feel that additional concepts are needed to present a complete picture (Figure
1.1). Two of the most commonly mentioned are as follows:

 Authenticity: The property of being genuine and being able to be verified and trusted;
confidence in the validity of a transmission, a message, or message originator. This
means verifying that users are who they say they are and that each input arriving at the
system came from a trusted source.
 Accountability: The security goal that generates the requirement for actions of an
entity to be traced uniquely to that entity. This supports nonrepudiation, deterrence,
fault isolation, intrusion detection and prevention, and after-action recovery and legal
action. Because truly secure systems are not yet an achievable goal, we must be able to
trace a security breach to a responsible party. Systems must keep records of their
activities to permit later forensic analysis to trace security breaches or to aid in
transaction disputes.

The Challenges of Computer Security


Computer and network security is both fascinating and complex. Some of the reasons
follow:
1. Security is not as simple as it might first appear to the novice. The requirements seem
to be straightforward; indeed, most of the major requirements for security services can
be given self-explanatory, one-word labels: confidentiality, authentication,
nonrepudiation, or integrity. But the mechanisms used to meet those requirements can
be quite complex, and understanding them may involve rather subtle reasoning.
2. In developing a particular security mechanism or algorithm, one must always consider
potential attacks on those security features. In many cases, successful attacks are
designed by looking at the problem in a completely different way, therefore exploiting
an unexpected weakness in the mechanism.

3. Because of point 2, the procedures used to provide particular services are often
counterintuitive. Typically, a security mechanism is complex, and it is not obvious from
the statement of a particular requirement that such elaborate measures are needed. It is
only when the various aspects of the threat are considered that elaborate security
mechanisms make sense.
4. Having designed various security mechanisms, it is necessary to decide where to use
them. This is true both in terms of physical placement (e.g., at what points in a network
are certain security mechanisms needed) and in a logical sense (e.g., at what layer or
layers of an architecture such as TCP/IP [Transmission Control Protocol/Internet
Protocol] should mechanisms be placed).
5. Security mechanisms typically involve more than a particular algorithm or protocol.
They also require that participants be in possession of some secret information (e.g., an
encryption key), which raises questions about the creation, distribution, and protection
of that secret information. There also may be a reliance on communications protocols
whose behaviour may complicate the task of developing the security mechanism. For
example, if the proper functioning of the security mechanism requires setting time
limits on the transit time of a message from sender to receiver, then any protocol or
network that introduces variable, unpredictable delays may render such time limits
meaningless.
6. Computer and network security is essentially a battle of wits between a perpetrator who
tries to find holes and the designer or administrator who tries to close them. The great
advantage that the attacker has is that he or she need only find a single weakness, while
the designer must find and eliminate all weaknesses to achieve perfect security.
7. There is a natural tendency on the part of users and system managers to perceive little
benefit from security investment until a security failure occurs.
8. Security requires regular, even constant, monitoring, and this is difficult in today’s
short-term, overloaded environment.
9. Security is still too often an afterthought to be incorporated into a system after the
design is complete rather than being an integral part of the design process.
10. Many users and even security administrators view strong security as an impediment to
efficient and user-friendly operation of an information system or use of information.

Security attacks:
A useful means of classifying security attacks, used both in X.800 and RFC 4949, is in terms
of passive attacks and active attacks (Figure 1.2). A passive attack attempts to learn or make
use of information from the system but does not affect system resources. An active attack
attempts to alter system resources or affect their operation.

Passive Attacks
Passive attacks (Figure 1.2a) are in the nature of eavesdropping on, or monitoring of,
transmissions. The goal of the opponent is to obtain information that is being transmitted. Two
types of passive attacks are the release of message contents and traffic analysis.
The release of message contents is easily understood. A telephone conversation, an
electronic mail message, and a transferred file may contain sensitive or confidential
information. We would like to prevent an opponent from learning the contents of these
transmissions.
A second type of passive attack, traffic analysis, is subtler. Suppose that we had a way
of masking the contents of messages or other information traffic so that opponents, even if they
captured the message, could not extract the information from the message. The common
technique for masking contents is encryption. If we had encryption protection in place, an
opponent might still be able to observe the pattern of these messages. The opponent could
determine the location and identity of communicating hosts and could observe the frequency
and length of messages being exchanged. This information might be useful in guessing the
nature of the communication that was taking place.
Passive attacks are very difficult to detect, because they do not involve any alteration
of the data. Typically, the message traffic is sent and received in an apparently normal fashion,
and neither the sender nor receiver is aware that a third party has read the messages or observed
the traffic pattern. However, it is feasible to prevent the success of these attacks, usually by
means of encryption. Thus, the emphasis in dealing with passive attacks is on prevention rather
than detection.

Active Attacks
Active attacks (Figure 1.2b) involve some modification of the data stream or the creation of a
false stream and can be subdivided into four categories: masquerade, replay, modification of
messages, and denial of service.
A masquerade takes place when one entity pretends to be a different entity (path 2 of
Figure 1.2b is active). A masquerade attack usually includes one of the other forms of active
attack. For example, authentication sequences can be captured and replayed after a valid
authentication sequence has taken place, thus enabling an authorized entity with few privileges
to obtain extra privileges by impersonating an entity that has those privileges.
Replay involves the passive capture of a data unit and its subsequent retransmission to
produce an unauthorized effect (paths 1, 2, and 3 active).

Modification of messages simply means that some portion of a legitimate message is
altered, or that messages are delayed or reordered, to produce an unauthorized effect (paths 1
and 2 active). For example, a message meaning “Allow John Smith to read confidential file
accounts” is modified to mean “Allow Fred Brown to read confidential file accounts.”
The denial of service prevents or inhibits the normal use or management of
communications facilities (path 3 active). This attack may have a specific target; for example,
an entity may suppress all messages directed to a particular destination (e.g., the security audit
service). Another form of service denial is the disruption of an entire network, either by
disabling the network or by overloading it with messages so as to degrade performance.
Active attacks present the opposite characteristics of passive attacks. Whereas passive
attacks are difficult to detect, measures are available to prevent their success. On the other
hand, it is quite difficult to prevent active attacks absolutely because of the wide variety of
potential physical, software, and network vulnerabilities. Instead, the goal is to detect active
attacks and to recover from any disruption or delays caused by them. If the detection has a
deterrent effect, it may also contribute to prevention.

Caesar Cipher:

The earliest known, and the simplest, use of a substitution cipher was by Julius Caesar. The
Caesar cipher involves replacing each letter of the alphabet with the letter standing three places
further down the alphabet. For example,

plain: meet me after the toga party


cipher: PHHW PH DIWHU WKH WRJD SDUWB

Note that the alphabet is wrapped around, so that the letter following Z is A. We can define the
transformation by listing all possibilities, as follows:

plain: a b c d e f g h i j k l m n o p q r s t u v w x y z
cipher: D E F G H I J K L M N O P Q R S T U V W X Y Z A B C

Let us assign a numerical equivalent to each letter (a = 0, b = 1, ..., z = 25):


Then the algorithm can be expressed as follows. For each plaintext letter p, substitute the
ciphertext letter C:
C = E(3, p) = (p + 3) mod 26
A shift may be of any amount, so that the general Caesar algorithm is
C = E(k, p) = (p + k) mod 26 (3.1)
where k takes on a value in the range 1 to 25. The decryption algorithm is simply
p = D(k, C) = (C - k) mod 26 (3.2)
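Equations (3.1) and (3.2) translate directly into code. The following Python sketch (an illustration, not part of the original text) implements E(k, p) and D(k, C) for letters only, leaving spaces untouched, and reproduces the toga-party example:

```python
def caesar_encrypt(plaintext: str, k: int) -> str:
    """C = (p + k) mod 26; plaintext lowercase, ciphertext uppercase."""
    out = []
    for ch in plaintext.lower():
        if ch.isalpha():
            p = ord(ch) - ord('a')                    # numerical equivalent 0..25
            out.append(chr((p + k) % 26 + ord('A')))  # shifted letter
        else:
            out.append(ch)                            # spaces pass through
    return ''.join(out)

def caesar_decrypt(ciphertext: str, k: int) -> str:
    """p = (C - k) mod 26."""
    out = []
    for ch in ciphertext:
        if ch.isalpha():
            c = ord(ch.upper()) - ord('A')
            out.append(chr((c - k) % 26 + ord('a')))
        else:
            out.append(ch)
    return ''.join(out)

print(caesar_encrypt("meet me after the toga party", 3))
# PHHW PH DIWHU WKH WRJD SDUWB
```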
If it is known that a given ciphertext is a Caesar cipher, then a brute-force cryptanalysis is easily
performed: simply try all the 25 possible keys. Figure 3.3 shows the results of applying this
strategy to the example ciphertext. In this case, the plaintext leaps out as occupying the third
line.
Three important characteristics of this problem enabled us to use a brute-force
cryptanalysis:
1. The encryption and decryption algorithms are known.
2. There are only 25 keys to try.
3. The language of the plaintext is known and easily recognizable.
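With only 25 keys, the brute-force attack is trivial to script. This illustrative sketch tries every key against the example ciphertext; inspecting the output, key k = 3 yields readable English:

```python
def brute_force_caesar(ciphertext: str):
    """Try all 25 Caesar keys; return (key, candidate plaintext) pairs."""
    results = []
    for k in range(1, 26):
        plain = ''.join(
            chr((ord(ch) - ord('A') - k) % 26 + ord('a')) if ch.isalpha() else ch
            for ch in ciphertext
        )
        results.append((k, plain))
    return results

for k, candidate in brute_force_caesar("PHHW PH DIWHU WKH WRJD SDUWB"):
    print(k, candidate)   # the line for k = 3 reads as English
```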
In most networking situations, we can assume that the algorithms are known. What generally
makes brute-force cryptanalysis impractical is the use of an algorithm that employs a large
number of keys. For example, the triple DES algorithm makes use of a 168-bit key, giving a
key space of 2^168, or greater than 3.7 × 10^50 possible keys.

The third characteristic is also significant. If the language of the plaintext is unknown, then
plaintext output may not be recognizable. Furthermore, the input may be abbreviated or
compressed in some fashion, again making recognition difficult.
For example, Figure 3.4 shows a portion of a text file compressed using an algorithm called
ZIP. If this file is then encrypted with a simple substitution cipher (expanded to include more
than just 26 alphabetic characters), then the plaintext may not be recognized when it is
uncovered in the brute-force cryptanalysis.

Data encryption standard:


Until the introduction of the Advanced Encryption Standard (AES) in 2001, the Data
Encryption Standard (DES) was the most widely used encryption scheme. DES was issued in
1977 by the National Bureau of Standards, now the National Institute of Standards and
Technology (NIST), as Federal Information Processing Standard 46 (FIPS PUB 46). The
algorithm itself is referred to as the Data Encryption Algorithm (DEA). For DEA, data are
encrypted in 64-bit blocks using a 56-bit key. The algorithm transforms 64-bit input in a series
of steps into a 64-bit output. The same steps, with the same key, are used to reverse the
encryption.
Over the years, DES became the dominant symmetric encryption algorithm, especially
in financial applications. In 1994, NIST reaffirmed DES for federal use for another five years;
NIST recommended the use of DES for applications other than the protection of classified
information. In 1999, NIST issued a new version of its standard (FIPS PUB 46-3) that indicated
that DES should be used only for legacy systems and that triple DES (which in essence involves
repeating the
DES algorithm three times on the plaintext using two or three different keys to produce
the ciphertext) be used. Because the underlying encryption and decryption algorithms are the
same for DES and triple DES, it remains important to understand the DES cipher. This section
provides an overview. For the interested reader, Appendix S provides further detail.

DES Encryption
The overall scheme for DES encryption is illustrated in Figure 4.5. As with any
encryption scheme, there are two inputs to the encryption function: the plaintext to be encrypted
and the key. In this case, the plaintext must be 64 bits in length and the key is 56 bits in length.

Looking at the left-hand side of the figure, we can see that the processing of the plaintext
proceeds in three phases. First, the 64-bit plaintext passes through an initial permutation (IP)
that rearranges the bits to produce the permuted input.

This is followed by a phase consisting of sixteen rounds of the same function, which
involves both permutation and substitution functions. The output of the last (sixteenth) round
consists of 64 bits that are a function of the input plaintext and the key. The left and right halves
of the output are swapped to produce the preoutput. Finally, the preoutput is passed through a
permutation (IP^-1) that is the inverse of the initial permutation function, to produce the 64-bit
ciphertext. With the exception of the initial and final permutations, DES has the exact structure
of a Feistel cipher.
The right-hand portion of Figure 4.5 shows the way in which the 56-bit key is used.
Initially, the key is passed through a permutation function. Then, for each of the sixteen rounds,
a subkey (Ki) is produced by the combination of a left circular shift and a permutation. The
permutation function is the same for each round, but a different subkey is produced because of
the repeated shifts of the key bits.
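The shift-and-permute idea can be sketched as follows. This is a simplified illustration, not the real DES key schedule: DES rotates two 28-bit halves separately and compresses each result to 48 bits with the PC-2 permutation, both omitted here; only the per-round shift amounts follow DES.

```python
SHIFTS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]  # DES per-round shifts

def rotl(value: int, shift: int, width: int) -> int:
    """Left circular shift of a width-bit register."""
    mask = (1 << width) - 1
    return ((value << shift) | (value >> (width - shift))) & mask

def subkeys(key56: int, rounds: int = 16, width: int = 56):
    """One subkey per round from repeated left circular shifts of the key
    register (real DES would apply the PC-2 compression permutation here)."""
    reg = key56
    out = []
    for r in range(rounds):
        reg = rotl(reg, SHIFTS[r], width)
        out.append(reg)
    return out

ks = subkeys(0x0F1571C947D9E8)   # an arbitrary 56-bit key, for illustration
```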

DES Decryption

As with any Feistel cipher, decryption uses the same algorithm as encryption, except that the
application of the subkeys is reversed. Additionally, the initial and final permutations are
reversed.
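This symmetry is easy to demonstrate with a generic Feistel skeleton. The round function F below is an arbitrary toy mixer (not the DES F), chosen only to show that running the same code with the subkeys in reverse order recovers the plaintext:

```python
def F(half: int, k: int) -> int:
    """Toy round function (real DES uses expansion, S-boxes, a permutation)."""
    return ((half * 2654435761) ^ k) & 0xFFFFFFFF

def feistel(block: int, subkeys, half_bits: int = 32) -> int:
    """Generic Feistel network; the same code encrypts and decrypts."""
    mask = (1 << half_bits) - 1
    left, right = block >> half_bits, block & mask
    for k in subkeys:
        left, right = right, left ^ F(right, k)   # one Feistel round
    # final swap of the halves (the 'preoutput' swap, as in DES)
    return (right << half_bits) | left

keys = [0x1234 + 7 * r for r in range(16)]
ct = feistel(0x0123456789ABCDEF, keys)            # encrypt
pt = feistel(ct, list(reversed(keys)))            # decrypt: subkeys reversed
assert pt == 0x0123456789ABCDEF
```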

Block cipher principles and design:


Although much progress has been made in designing block ciphers that are cryptographically
strong, the basic principles have not changed all that much since the work of Feistel and the
DES design team in the early 1970s. In this section we look at three critical aspects of block
cipher design: the number of rounds, design of the function F, and key scheduling.

Number of Rounds
The cryptographic strength of a Feistel cipher derives from three aspects of the design: the
number of rounds, the function F, and the key schedule algorithm. Let us look first at the choice
of the number of rounds.
The greater the number of rounds, the more difficult it is to perform cryptanalysis, even
for a relatively weak F. In general, the criterion should be that the number of rounds is chosen
so that known cryptanalytic efforts require greater effort than a simple brute-force key search
attack. This criterion was certainly used in the design of DES. Schneier [SCHN96] observes
that for 16-round DES, a differential cryptanalysis attack is slightly less efficient than brute
force: the differential cryptanalysis attack requires 2^55.1 operations, whereas brute force
requires 2^55. If DES had 15 or fewer rounds, differential cryptanalysis would require less effort
than a brute-force key search.
This criterion is attractive, because it makes it easy to judge the strength of an algorithm
and to compare different algorithms. In the absence of a cryptanalytic breakthrough, the
strength of any algorithm that satisfies the criterion can be judged solely on key length.

Design of Function F
The heart of a Feistel block cipher is the function F, which provides the element of confusion
in a Feistel cipher. Thus, it must be difficult to “unscramble” the substitution performed by F.
One obvious criterion is that F be nonlinear, as we discussed previously. The more nonlinear
F, the more difficult any type of cryptanalysis will be.
There are several measures of nonlinearity. In rough terms, the more difficult it is to
approximate F by a set of linear equations, the more nonlinear F is. Several other criteria should
be considered in designing F. We would like the algorithm to have good avalanche properties.
Recall that, in general, this means that a change in one bit of the input should produce a change
in many bits of the output. A more stringent version of this is the strict avalanche criterion
(SAC) [WEBS86], which states that any output bit j of an S-box (see Appendix S for a
discussion of S-boxes) should change with probability 1/2 when any single input bit i is
inverted for all i, j. Although SAC is expressed in terms of S-boxes, a similar criterion could
be applied to F as a whole. This is important when considering designs that do not include S-
boxes.

Another criterion proposed in [WEBS86] is the bit independence criterion (BIC),
which states that output bits j and k should change independently when any single input bit i is
inverted for all i, j, and k. The SAC and BIC criteria appear to strengthen the effectiveness of
the confusion function.
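The avalanche property can be measured empirically: flip one input bit and count how many output bits change. The sketch below uses a toy 32-bit mixing function (modeled on a common hash finalizer, not a real cipher component) purely to illustrate the measurement; a good F scores close to 0.5.

```python
import random

def avalanche(func, in_bits: int, out_bits: int, trials: int = 2000) -> float:
    """Average fraction of output bits that flip when one input bit flips."""
    rng = random.Random(1)                  # fixed seed: reproducible estimate
    flips = 0
    for _ in range(trials):
        x = rng.getrandbits(in_bits)
        i = rng.randrange(in_bits)
        flips += bin(func(x) ^ func(x ^ (1 << i))).count("1")
    return flips / (trials * out_bits)

def toy_F(x: int) -> int:
    """32-bit mixer in the style of a MurmurHash3 finalizer (good avalanche)."""
    x ^= x >> 16
    x = (x * 0x85EBCA6B) & 0xFFFFFFFF
    x ^= x >> 13
    x = (x * 0xC2B2AE35) & 0xFFFFFFFF
    x ^= x >> 16
    return x

print(round(avalanche(toy_F, 32, 32), 2))           # close to 0.5
print(round(avalanche(lambda x: x, 32, 32), 2))     # 1/32: poor avalanche
```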

Key Schedule Algorithm


With any Feistel block cipher, the key is used to generate one subkey for each round. In general,
we would like to select subkeys to maximize the difficulty of deducing individual subkeys and
the difficulty of working back to the main key. No general principles for this have yet been
promulgated.
Adams suggests [ADAM94] that, at minimum, the key schedule should guarantee
key/ciphertext Strict Avalanche Criterion and Bit Independence Criterion.

Modes of operation:
A block cipher takes a fixed-length block of text of length b bits and a key as input and produces
a b-bit block of ciphertext. If the amount of plaintext to be encrypted is greater than b bits, then
the block cipher can still be used by breaking the plaintext up into b-bit blocks. When multiple
blocks of plaintext are encrypted using the same key, a number of security issues arise. To
apply a block cipher in a variety of applications, five modes of operation have been defined by
NIST (SP 800-38A).
In essence, a mode of operation is a technique for enhancing the effect of a
cryptographic algorithm or adapting the algorithm for an application, such as applying a block
cipher to a sequence of data blocks or a data stream. The five modes are intended to cover a
wide variety of applications of encryption for which a block cipher could be used. These modes
are intended for use with any symmetric block cipher, including triple DES and AES. The
modes are summarized in following Table:

Mode: Electronic Codebook (ECB)
Description: Each block of plaintext bits is encoded independently using the same key.
Typical applications: Secure transmission of single values (e.g., an encryption key).

Mode: Cipher Block Chaining (CBC)
Description: The input to the encryption algorithm is the XOR of the next block of plaintext
and the preceding block of ciphertext.
Typical applications: General-purpose block-oriented transmission; authentication.

Mode: Cipher Feedback (CFB)
Description: Input is processed s bits at a time. Preceding ciphertext is used as input to the
encryption algorithm to produce pseudorandom output, which is XORed with plaintext to
produce the next unit of ciphertext.
Typical applications: General-purpose stream-oriented transmission; authentication.

Mode: Output Feedback (OFB)
Description: Similar to CFB, except that the input to the encryption algorithm is the preceding
encryption output, and full blocks are used.
Typical applications: Stream-oriented transmission over a noisy channel (e.g., satellite
communication).

Mode: Counter (CTR)
Description: Each block of plaintext is XORed with an encrypted counter. The counter is
incremented for each subsequent block.
Typical applications: General-purpose block-oriented transmission; useful for high-speed
requirements.
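The ECB/CBC distinction described above can be demonstrated with a toy 8-byte "block cipher" (an insecure stand-in, used only so the example is self-contained): under ECB, two identical plaintext blocks produce identical ciphertext blocks, while CBC's chaining with the previous ciphertext block hides the repetition.

```python
def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    """Stand-in for a real block cipher: XOR with key, then rotate bytes."""
    x = bytes(a ^ b for a, b in zip(block, key))
    return x[1:] + x[:1]

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def ecb_encrypt(key: bytes, plaintext: bytes, bs: int = 8) -> bytes:
    """Each block encrypted independently with the same key."""
    return b''.join(toy_encrypt_block(key, plaintext[i:i + bs])
                    for i in range(0, len(plaintext), bs))

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes, bs: int = 8) -> bytes:
    """Each plaintext block XORed with the preceding ciphertext block."""
    prev, out = iv, []
    for i in range(0, len(plaintext), bs):
        prev = toy_encrypt_block(key, xor_blocks(plaintext[i:i + bs], prev))
        out.append(prev)
    return b''.join(out)

key, iv = b'\x13' * 8, b'\x07' * 8
msg = b'ATTACK!!' * 2                     # two identical 8-byte blocks
ecb = ecb_encrypt(key, msg)
cbc = cbc_encrypt(key, iv, msg)
print(ecb[:8] == ecb[8:])                 # True: ECB leaks the repetition
print(cbc[:8] == cbc[8:])                 # False: CBC hides it
```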

S-box design:
S-box (substitution-box) is a basic component of symmetric key algorithms which performs
substitution. In block ciphers, they are typically used to obscure the relationship between the
key and the ciphertext, thus ensuring Shannon's property of confusion.
The forward substitute byte transformation, called SubBytes, is a simple table lookup (Figure
6.5a). AES defines a 16 × 16 matrix of byte values, called an S-box (Table 6.2a), that contains
a permutation of all possible 256 8-bit values. Each individual byte of State is mapped into a
new byte in the following way: The leftmost 4 bits of the byte are used as a row value and the
rightmost 4 bits are used as a column value. These row and column values serve as indexes
into the S-box to select a unique 8-bit output value. For example, the hexadecimal value {95}
references row 9, column 5 of the S-box, which contains the value {2A}. Accordingly, the
value {95} is mapped into the value {2A}.
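The lookup itself is simple nibble indexing. In the sketch below only the single S-box entry quoted above is included; a real implementation would carry the full 256-entry table from Table 6.2a.

```python
SBOX = {0x95: 0x2A}   # row 9, column 5 of Table 6.2a; full table omitted here

def sub_byte(b: int) -> int:
    """SubBytes lookup: high nibble selects the row, low nibble the column."""
    row, col = b >> 4, b & 0x0F        # leftmost / rightmost 4 bits
    return SBOX[(row << 4) | col]      # same as SBOX[b] for a flat table

assert sub_byte(0x95) == 0x2A          # {95} maps to {2A}
```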

The result is {2A}, which should appear in row {09} column {05} of the S-box. This is
verified by checking Table 6.2a.
Triple DES with two, three keys:
Triple DES with Two Keys
An obvious counter to the meet-in-the-middle attack is to use three stages of encryption with
three different keys. Using DES as the underlying algorithm, this approach is commonly
referred to as 3DES, or Triple Data Encryption Algorithm (TDEA). As shown in Figure 7.1b,
there are two versions of 3DES; one using two keys and one using three keys. NIST SP 800-
67 (Recommendation for the Triple Data Encryption Block Cipher, January 2012) defines the
two-key and three-key versions. We look first at the strength of the two-key version and then
examine the three-key version.

Two-key triple encryption was first proposed by Tuchman [TUCH79]. The function follows
an encrypt-decrypt-encrypt (EDE) sequence (Figure 7.1b):
C = E(K1, D(K2, E(K1, P)))
There is no cryptographic significance to the use of decryption for the second stage. Its only
advantage is that it allows users of 3DES to decrypt data encrypted by users of the older single
DES: with K2 = K1,
C = E(K1, D(K1, E(K1, P))) = E(K1, P)
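The EDE structure and its backward-compatibility property can be sketched with a toy invertible stand-in for DES (modular addition, chosen only so the algebra is visible; it is not a cipher):

```python
def E(k: int, block: int) -> int:
    """Toy 'encryption': modular addition (stand-in for DES)."""
    return (block + k) % (1 << 64)

def D(k: int, block: int) -> int:
    """Inverse of E."""
    return (block - k) % (1 << 64)

def ede_encrypt(k1: int, k2: int, p: int) -> int:
    return E(k1, D(k2, E(k1, p)))          # C = E(K1, D(K2, E(K1, P)))

def ede_decrypt(k1: int, k2: int, c: int) -> int:
    return D(k1, E(k2, D(k1, c)))          # P = D(K1, E(K2, D(K1, C)))

p, k1, k2 = 0x0123456789ABCDEF, 111, 222
c = ede_encrypt(k1, k2, p)
assert ede_decrypt(k1, k2, c) == p         # roundtrip
assert ede_encrypt(k1, k1, p) == E(k1, p)  # K1 == K2: reduces to single E
```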

Key distribution:
Symmetric key distribution using symmetric encryption:
For symmetric encryption to work, the two parties to an exchange must share the same
key, and that key must be protected from access by others. Furthermore, frequent key changes
are usually desirable to limit the amount of data compromised if an attacker learns the key.
Therefore, the strength of any cryptographic system rests with the key distribution technique,
a term that refers to the means of delivering a key to two parties who wish to exchange data
without allowing others to see the key. For two parties A and B, key distribution can be
achieved in a number of ways, as follows:
1. A can select a key and physically deliver it to B.
2. A third party can select the key and physically deliver it to A and B.
3. If A and B have previously and recently used a key, one party can transmit the new
key to the other, encrypted using the old key.
4. If A and B each has an encrypted connection to a third party C, C can deliver a key
on the encrypted links to A and B.
Options 1 and 2 call for manual delivery of a key. For link encryption, this is a reasonable
requirement, because each link encryption device is going to be exchanging data only with its
partner on the other end of the link. However, for end-to-end encryption over a network,
manual delivery is awkward. In a distributed system, any given host or terminal may need to
engage in exchanges with many other hosts and terminals over time. Thus, each device needs
a number of keys supplied dynamically. The problem is especially difficult in a wide-area
distributed system.

Symmetric key distribution using asymmetric encryption:


Because of the inefficiency of public-key cryptosystems, they are almost never used for the
direct encryption of sizable blocks of data, but are limited to relatively small blocks. One of
the most important uses of a public-key cryptosystem is to encrypt secret keys for distribution.
 Simple Secret Key Distribution
An extremely simple scheme was put forward by Merkle [MERK79], as illustrated in Figure
14.7. If A wishes to communicate with B, the following procedure is employed:
1. A generates a public/private key pair {PUa, PRa} and transmits a message to B
consisting of PUa and an identifier of A, IDA.
2. B generates a secret key, Ks, and transmits it to A, encrypted with A’s public key.
3. A computes D (PRa, E (PUa, Ks)) to recover the secret key. Because only A can decrypt
the message, only A and B will know the identity of Ks.
4. A discards PUa and PRa and B discards PUa.
A and B can now securely communicate using conventional encryption and the session key Ks.
At the completion of the exchange, both A and B discard Ks. Despite its simplicity, this is an
attractive protocol. No keys exist before the start of the communication and none exist after the
completion of communication. Thus, the risk of compromise of the keys is minimal. At the
same time, the communication is secure from eavesdropping.
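The four steps can be simulated end to end. The sketch below uses classic textbook-RSA parameters (p = 61, q = 53) as the public-key system; these tiny numbers are utterly insecure and serve only to make the D(PRa, E(PUa, Ks)) algebra concrete.

```python
# Toy textbook-RSA parameters (insecure; for illustration only).
p, q = 61, 53
n = p * q                              # modulus
e = 17                                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (2753)

# Step 1: A generates {PUa, PRa} and transmits PUa (with IDa) to B.
PUa, PRa = (e, n), (d, n)

# Step 2: B generates a secret session key Ks and sends E(PUa, Ks) to A.
Ks = 42
c = pow(Ks, PUa[0], PUa[1])

# Step 3: A computes D(PRa, E(PUa, Ks)) to recover the secret key.
assert pow(c, PRa[0], PRa[1]) == Ks

# Step 4: A discards PUa and PRa, and B discards PUa; only Ks remains.
```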
The protocol depicted in Figure 14.7 is insecure against an adversary who can intercept
messages and then either relay the intercepted message or substitute another message. Such an
attack is known as a man-in-the-middle attack.
