
CS 387

Applied Cryptography

David Evans
written by

Daniel Winter

special thanks to:

Wolfgang Baltes

16.04.2012
Contents
1 Symmetric Ciphers 3
1.1 Cryptology, Symmetric Cryptography & Correctness Property . . . . . . . . . . 3
1.2 Kerckhoffs's Principle & xor-function . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 One-Time Pad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Secret Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.6 Perfect Cipher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.7 Monoalphabetic Substitution Cipher (Toy-Cipher) . . . . . . . . . . . . . . . . . 13
1.8 Lorenz Cipher Machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.9 Modern Symmetric Ciphers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2 Application of Symmetric Ciphers 18


2.1 Application of Symmetric Ciphers . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2 Generating Random Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3 Pseudo Random Number Generator (PRNG) . . . . . . . . . . . . . . . . . . . . 20
2.4 Modes of Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4.1 Electronic Codebook Mode (ECB) . . . . . . . . . . . . . . . . . . . . . . 21
2.4.2 Cipher Block Chaining Mode (CBC) . . . . . . . . . . . . . . . . . . . . . 22
2.4.3 Counter Mode (CTR) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.4.4 CBC versus CTR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.4.5 Cipher Feedback Mode (CFB) . . . . . . . . . . . . . . . . . . . . . . . . 26
2.4.6 Output Feedback Mode (OFB) . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4.7 CBC versus CFB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.4.8 Parallel Decrypting Modes . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5 Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.6 Padding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.7 Cryptographic Hash Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.8 Random Oracle Assumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.9 Strong Passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.10 Dictionary Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.11 Salted Password Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.12 Hash Chain, S/Key Password Scheme . . . . . . . . . . . . . . . . . . . . . . . . 37

3 Key Distribution 38
3.1 Key Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.2 Pairwise Shared Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3 Trusted Third Party . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4 Merkle's Puzzles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.5 Diffie-Hellman Key Exchange . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.6 Discrete Logarithm Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.7 Decisional Diffie-Hellman Assumption . . . . . . . . . . . . . . . . . . . . . . . . 45
3.8 Implementing Diffie-Hellman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.9 Finding Large Primes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.10 Faster Primality Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.11 Fermat's Little Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.12 Rabin-Miller Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

4 Asymmetric Cryptosystems 51
4.1 Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2 RSA Cryptosystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.3 Correctness of RSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.4 Euler's Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.5 Proving Euler's Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.6 Invertibility of RSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.7 Pick and Compute e and d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.8 Security Property of RSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.9 Difficulty of Factoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.9.1 Best Known Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.10 Public Key Cryptographic Standard (PKCS#1), Insecurity of RSA in Practice . 62
4.11 Using RSA to Sign a Document . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.11.1 Problem with RSA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

5 Cryptographic Protocols 64
5.1 Encrypted Key Exchange Protocol (EKE) . . . . . . . . . . . . . . . . . . . . . . 65
5.2 Secure Shell Protocol (SSH) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.3 SSH Authentication in Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.4 Transport Layer Security Protocol (TLS) . . . . . . . . . . . . . . . . . . . . . . 71
5.5 TLS Information Leaks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.6 Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5.7 Certificate Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
5.8 Signature Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

6 Using Cryptography to Solve Problems 78


6.1 Traffic Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.2 Onion Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.3 Voting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
6.4 Auditing MIXnet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.5 Digital Cash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.6 RSA Blind Signature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6.7 Blind Signature Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
6.8 Deanonymizing Double Spenders . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
6.9 Identity Challenges - Spending Cash . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.10 Bitcoin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
6.10.1 Providing Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
6.10.2 Finding New Blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.10.3 Avoid Double Spending . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

7 Secure Computation 92
7.1 Encrypted Circuits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
7.2 Garbled Circuit Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

1 Symmetric Ciphers
1.1 Cryptology, Symmetric Cryptography & Correctness Property
Definition cryptography, cryptology
Cryptography comes from Greek: crypto- means "hidden, secret" and -graphy means "writing". The broader term is cryptology, where the Greek suffix -logy means "science".
Example 1:
These actions involve cryptology:

Opening a door

Playing poker

Logging into an internet account

.
Definition symmetric Cryptography
Symmetric Cryptography means all parties have the same key to do encryption and decryption.
The keys may be identical or there may be a simple transformation to go between the two keys.
Definition symmetric Cryptosystem, decryption function, encryption function
In this paper a symmetric cryptosystem always looks as follows:

    m --> E --(c)--> insecure channel (eavesdropper X) --(c)--> D --> m
          ^                                                     ^
          |                                                     |
          k                                                     k

where

m is a plaintext message from the set M of all possible messages,

k is the key from the set K of all possible keys and

c is a ciphertext from the set C of all possible ciphertexts.

E is the encryption function,

X is a possible eavesdropper on the insecure channel and

D is the decryption function, with:

E : M × K → C : (m, k) ↦ c
D : C × K → M : (c, k) ↦ m

.
Definition correctness property
In order to get the same message back after decrypting the ciphertext (encrypted message), we need the correctness property: ∀m ∈ M, ∀k ∈ K:

D_k(E_k(m)) = m

Example 2:
These functions satisfy the correctness property for a symmetric cryptosystem:

1. E_k(m) = m + k, D_k(c) = c − k with M = K = N, because

D_k(E_k(m)) = D_k(m + k) = m + k − k = m

2. E_k(m) = am + b, D_k(c) = a^(−1)(c − b) with key k = (a, b), M = Z/nZ and a ∈ (Z/nZ)^×, because

D_k(E_k(m)) = a^(−1)((am + b) − b) = a^(−1)am = m
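Both example systems can be checked against the correctness property directly; a minimal sketch (the function names are our own, and the affine cipher is instantiated with the assumed modulus n = 26):

```python
# Sketch: checking D_k(E_k(m)) = m for the two example cryptosystems.
def encrypt_add(m, k):
    return m + k

def decrypt_add(c, k):
    return c - k

# Affine cipher over Z/26Z with key k = (a, b); a must be invertible mod 26.
def encrypt_affine(m, a, b):
    return (a * m + b) % 26

def decrypt_affine(c, a, b):
    a_inv = pow(a, -1, 26)  # modular inverse, requires gcd(a, 26) == 1
    return (a_inv * (c - b)) % 26

assert all(decrypt_add(encrypt_add(m, k), k) == m
           for m in range(50) for k in range(50))
assert all(decrypt_affine(encrypt_affine(m, 5, 8), 5, 8) == m
           for m in range(26))
```

Note that `pow(a, -1, 26)` (Python 3.8+) fails with a `ValueError` if a is not coprime to 26, which is exactly the condition a ∈ (Z/26Z)^× from the definition.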

1.2 Kerckhoffs's Principle & xor-function

Theorem 1.2.1 (Kerckhoffs's Principle)
The general assumption is that the channel for message transmission is not secure. Even if the encryption function E and decryption function D are public, the message is still secure because of a secret: only the key has to be kept secret. If the key becomes public, you have to use another key. So the keys must be kept secret in a cryptosystem.

Definition xor function

The xor function or exclusive or (symbol: ⊕) is given by its truth table:

A B A⊕B
0 0  0
1 0  1
0 1  1
1 1  0

Useful properties of the xor function, for all x, y, z, are:

Associativity
x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z

Commutativity
x ⊕ y = y ⊕ x

Negation
x ⊕ 1 = ¬x

Identity
x ⊕ 0 = x

Self-inverse
x ⊕ x = 0

which are applied in ciphers. Since the xor function is commutative and associative, it follows that

x ⊕ y ⊕ x = x ⊕ x ⊕ y = 0 ⊕ y = y

so with m ⊕ k = c we also have c ⊕ k = m: the ciphertext c is computed from the message m and the key k, and xoring c with the same key k recovers the message m.
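These identities, and the encrypt/decrypt round trip they imply, can be checked exhaustively; a minimal sketch using Python's bitwise `^` operator:

```python
# Sketch: verify the xor identities on bits, then the round trip
# c = m ^ k, m = c ^ k on a small integer message and key.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert x ^ (y ^ z) == (x ^ y) ^ z   # associativity
        assert x ^ y == y ^ x                   # commutativity
    assert x ^ 1 == 1 - x                       # negation
    assert x ^ 0 == x                           # identity
    assert x ^ x == 0                           # self-inverse

m, k = 0b1010011, 0b1100100   # arbitrary 7-bit message and key
c = m ^ k                     # encrypt
assert c ^ k == m             # decrypt with the same key
```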

1.3 One-Time Pad
Definition One-Time Pad
Let us assume the set of all possible messages is M := {0, 1}^n for some length n. That means every message is represented as a string of zeros and ones with fixed length n ∈ N. Any key k ∈ K has to be as long as the message m ∈ M. It follows that K := {0, 1}^n. The encryption function with m = m_0 m_1 … m_(n−1) and k = k_0 k_1 … k_(n−1) looks as follows:

E : M × K → C : (m, k) ↦ m ⊕ k = c_0 c_1 … c_(n−1) = c

with the value of each bit of the ciphertext defined for all i:

c_i = m_i ⊕ k_i

Example 3:
Let us assume our message is m = "CS". We have to convert the string to an element of M := {0, 1}^n where n = 14: every character is represented by 7 bits. Python provides a built-in function ord(<one character string>) which maps every character to a specific decimal number. The code for converting a string to a valid message looks as follows:

def convert_to_bits(n, pad):
    # binary representation of n, left-padded with zeros to length pad
    result = []
    while n > 0:
        if n % 2 == 0:
            result = [0] + result
        else:
            result = [1] + result
        n = n // 2      # integer division (Python 3)
    while len(result) < pad:
        result = [0] + result
    return result

def string_to_bits(s):
    result = []
    for c in s:
        result = result + convert_to_bits(ord(c), 7)
    return result

string_to_bits('CS') => [1,0,0,0,0,1,1,1,0,1,0,0,1,1]


and it follows with a randomly chosen key k (ord('C') = 67 = 1000011, ord('S') = 83 = 1010011):

             C       S
m = CS =  1000011 1010011
k =       1100100 0100110
m ⊕ k = c = 0100111 1110101

If someone gets the ciphertext but not the key, that person is not able to figure out the original message. Taking c and another key k' = 11001010100110 and trying to get the message, for all i ∈ {0, 1, …, n−1}:

c_i ⊕ k'_i = m'_i  ⇒  m' = 1000010 1010011

If m' is separated into its 2 parts of length 7, 1000010 and 1010011, converting each to a decimal number and applying the built-in Python function chr(<number>) yields the result "BS"
instead of the correct string "CS".
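The decoding step just described is the inverse of string_to_bits; a minimal sketch (the helper name bits_to_string is our own):

```python
def bits_to_string(bits, width=7):
    # Inverse of string_to_bits: group the bit list into 7-bit chunks,
    # convert each chunk to an integer and map it back with chr().
    chars = []
    for i in range(0, len(bits), width):
        chunk = bits[i:i + width]
        n = 0
        for b in chunk:
            n = 2 * n + b
        chars.append(chr(n))
    return ''.join(chars)

# 1000010 = 66 = 'B', 1010011 = 83 = 'S'
assert bits_to_string([1,0,0,0,0,1,0, 1,0,1,0,0,1,1]) == 'BS'
```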
Example 4:
Suppose an eavesdropper X knows A is sending B a message m ∈ {00, 01, 10, 11} using a one-time pad, where the key k is a perfectly random key unknown to X and the distribution of messages is uniform, i.e. each message is equally likely. Then the conditional probability is

P[m = 01 | X intercepts c = 01] = 0.25

because, since m is encrypted using a one-time pad, we gain no information about m from c, and therefore:

P[m = 01 | X intercepts c = 01] = P[m = 01] = 1/4 = 0.25

Additionally, suppose X learns that A generated k using a biased random number generator that outputs 0 with probability 0.52. Then

P[m = 01 | X intercepts c = 01] = 0.2704

because for c = 01 to map to m = 01 the key has to be k = 00. It follows that

P[k = 00] = 0.52 · 0.52 = 0.2704
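The biased-key posterior can also be obtained by enumerating all message/key pairs and applying Bayes' rule; a small sketch (the helper names are our own):

```python
# Sketch of Example 4: P(m = '01' | c = '01') by full enumeration,
# assuming uniform 2-bit messages and key bits that are 0 with
# probability 0.52 (the biased generator from the text).
from itertools import product

p0 = 0.52
two_bit = ['00', '01', '10', '11']

def key_prob(k):
    return (p0 if k[0] == '0' else 1 - p0) * (p0 if k[1] == '0' else 1 - p0)

def xor(a, b):
    return ''.join('1' if x != y else '0' for x, y in zip(a, b))

joint = {}  # joint probability of each (message, ciphertext) pair
for m, k in product(two_bit, two_bit):
    c = xor(m, k)
    joint[(m, c)] = joint.get((m, c), 0.0) + 0.25 * key_prob(k)

p_c = sum(p for (m, c), p in joint.items() if c == '01')  # P(c = '01')
posterior = joint[('01', '01')] / p_c
assert abs(posterior - 0.2704) < 1e-9
```

Note that p_c comes out to 0.25 regardless of the key bias, because summing P(k = m ⊕ c) over all messages covers every key exactly once.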

1.4 Probability
Definition Probability Space
The probability space Ω is the set of all possible outcomes (ω_i)_(i∈I).
Example 5:

Flipping a coin. The coin can land on head H, tail T or edge E, hence Ω = {H, T, E}

Rolling a die. The outcomes are 1, 2, 3, 4, 5 or 6, hence Ω = {1, 2, 3, 4, 5, 6}

.
Definition Bernoulli Trial
A Bernoulli trial is an experiment whose outcome is random and can be either of two possible outcomes:

Success (S)

Failure (F)

As probabilities, we assign

p as the probability for success

and

1 − p as the probability for failure

.
Definition Geometric Distribution
As experiment, we perform a Bernoulli trial until success. This means k − 1 trials fail and success occurs at the k-th trial, for k = 1, 2, …. The probability space is never changed by any trial. It follows:
Using the symbols S for success, F for failure and F^n S := FF…FS (n failures, then success), Ω is an infinite set with

Ω = {S, FS, FFS, FFFS, F^4 S, F^5 S, …}

Since successive trials are independent, the probability distribution is given by

P(F^n S) = P(F)^n P(S) = (1 − p)^n p

The average number of trials until success is

Σ_(k=1)^∞ k (1 − p)^(k−1) p = p · (−d/dp) Σ_(k=1)^∞ (1 − p)^k = p · (−d/dp)(1/p − 1) = p · 1/p² = 1/p

.
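The 1/p mean can be checked by simulation; a small sketch with p = 0.25 (so the mean should be near 4):

```python
# Simulation sketch: the mean number of Bernoulli trials until the first
# success is 1/p. With p = 0.25 the empirical mean should be close to 4.
import random

random.seed(1)
p = 0.25

def trials_until_success():
    k = 1
    while random.random() >= p:  # failure with probability 1 - p
        k += 1
    return k

n = 100_000
mean = sum(trials_until_success() for _ in range(n)) / n
assert abs(mean - 1 / p) < 0.1
```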
Definition Uniform Distribution
A uniform distribution has as underlying probability space a finite set, e.g. Ω = {ω_1, ω_2, …, ω_r} with r elements, and a probability measure P that has (by definition) the property

P({ω_i}) = 1/r = 1/|Ω|

Example 6:
Assume Ω = {H, T} for head and tail (without edge), then

P(H) = P(T) = 1/2

Example 7:
Assume a die with Ω = {1, 2, 3, 4, 5, 6}, then

∀i: P(i) = 1/6

Example 8:
Another example is given by a similar, but slightly different experiment to the one described for the geometric distribution, which deals with a set Ω_0 := {ω_1, ω_2, …, ω_N} of N numbers. Let us assume that exactly one of the numbers ω_i is prime. Again, we perform a draw until success, where success means we draw a prime. Additionally, in case the number drawn is not prime, we do not place this number back into Ω, which affects the probability space for the next draw.
The probability distribution of each individual draw is the uniform distribution. It follows:
The probability space, using S for success, F for failure and F^n S := FF…FS (n failures, then success), is the finite set

Ω = {S, FS, FFS, FFFS, F^4 S, …, F^(N−1) S}

Successive trials reduce the actual probability space (each time by one element), so we evaluate the outcomes' probabilities as follows:

P(S) = 1/N

P(FS) = (N−1)/N · 1/(N−1) = 1/N

P(F^(N−1) S) = (N−1)/N · (N−2)/(N−1) · (N−3)/(N−2) · … · 1/2 · 1/1 = 1/N

hence ∀k ∈ {0, 1, 2, …, N−1}:

P(F^k S) = 1/N

The average number of trials until success is

Σ_(i=1)^N P(F^(i−1) S) · i = (1/N) Σ_(i=1)^N i = (1/N) · N(N+1)/2 = (N+1)/2
.
Definition Event
An event A is a subset of the probability space Ω (i.e. A ⊆ Ω). In this course A consists of finitely or infinitely many outcomes.
Example 9:
An event A of tossing a coin would be "landing on head", therefore A = {H}. The event B, "a valid coin toss", is given by B = {H, T}.
Definition Probability Measure, certain event, impossible event
The probability measure is a function that maps each outcome to a non-negative value less than or equal to 1. That means

P : Ω → [0, 1] : ω ↦ P(ω)

where ω is an outcome.
If P(A) = 1 then the event A is called a certain event. Recall that also P(Ω) = 1, but in general A ≠ Ω may hold. We see similar behavior for impossible events: in case of P(B) = 0 it may still be that B ≠ ∅ (only for elementary probability spaces does P(A) = 1 force A = Ω and P(B) = 0 force B = ∅).
Example 10:
Rolling a 7 on a die is an impossible event, because Ω = {1, 2, 3, 4, 5, 6} and

P({7}) = 0/6 = 0

On the other hand, rolling a 1, 2, 3, 4, 5 or 6 with a die is certain:

P({1, 2, 3, 4, 5, 6}) = P(Ω) = 6/6 = 1

Theorem 1.4.1:
Assume the probability space Ω with probability measure P. Then it holds that

Σ_(ω∈Ω) P(ω) = 1
Example 11:
Let us assume Ω = {H, T, E} with probabilities

P(H) = 0.49999
P(T) = 0.49999

The probability for edge E is given by

1 = P(Ω) = P(H) + P(T) + P(E) = 0.49999 + 0.49999 + P(E)  ⇒  P(E) = 0.00002

Theorem 1.4.2:
The probability of an event A is given by

P(A) = Σ_(ω∈A) P(ω)

Example 12:
The probability for a valid coin toss B = {H, T } is

P (B) = P (H) + P (T ) = 0.49999 + 0.49999 = 0.99998

Definition Conditional Probability

Given two events A and B in the same probability space Ω, the conditional probability of B, given that A occurred, is:

P(B|A) = P(A ∩ B) / P(A)    (1.1)

Example 13:
Given that a coin toss is valid, what is the probability that it is heads? Let A = {H, T} be the event that a coin toss is valid and B = {H} the event that the coin toss is heads. With P(A) = P({H, T}) = P({H}) + P({T}) = 0.49999 + 0.49999 = 0.99998 and P(B) = P({H}) = 0.49999 it follows that:

P(B|A) = P(A ∩ B)/P(A) = P({H, T} ∩ {H})/P({H, T}) = P({H})/P({H, T}) = 0.49999/0.99998 = 1/2

Example 14:
The relative frequencies of the vowels in English, as a percentage of all letters in a sample of typical English text:

e: 13%, a: 8%, o: 7%, i: 7%, u: 3%

For a letter x drawn randomly from the text, it holds:

P(x is a vowel):

P(x ∈ {a, e, i, o, u}) = 0.13 + 0.08 + 0.07 + 0.07 + 0.03 = 0.38

P(x is e | x is a vowel):

P(x = e | x ∈ {a, e, i, o, u}) = P(x = e ∧ x ∈ {a, e, i, o, u}) / P(x ∈ {a, e, i, o, u})
                               = P(x = e) / P(x ∈ {a, e, i, o, u})
                               = 0.13 / 0.38 ≈ 0.34

P(x is a vowel | x is not a):

P(x ∈ {a, e, i, o, u} | x ≠ a) = P(x ∈ {a, e, i, o, u} ∧ x ≠ a) / P(x ≠ a)
                               = (P(x = e) + P(x = i) + P(x = o) + P(x = u)) / (1 − P(x = a))
                               = (0.13 + 0.07 + 0.07 + 0.03) / (1 − 0.08) = 0.30/0.92 ≈ 0.33
.
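These frequencies can be plugged in directly; a small sketch of the computations above (note 0.30/0.92 ≈ 0.326):

```python
# Sketch of Example 14, computed from the given letter frequencies.
freq = {'e': 0.13, 'a': 0.08, 'o': 0.07, 'i': 0.07, 'u': 0.03}

p_vowel = sum(freq.values())                        # P(x is a vowel)
p_e_given_vowel = freq['e'] / p_vowel               # P(x = e | vowel)
p_vowel_given_not_a = (p_vowel - freq['a']) / (1 - freq['a'])

assert abs(p_vowel - 0.38) < 1e-9
assert abs(p_e_given_vowel - 0.13 / 0.38) < 1e-9    # about 0.34
assert abs(p_vowel_given_not_a - 0.30 / 0.92) < 1e-9  # about 0.33
```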

1.5 Secret Sharing

Definition Secret Sharing
A useful property of xor is that it can be used to share a secret (message) among at least 2 people as follows:
1. The secret message of length n is

x = x_0 x_1 x_2 … x_(n−1)

2. Generate a random key k ∈ {0, 1}^n:

k = k_0 k_1 k_2 … k_(n−1)

3. Compute

c = k ⊕ x

with ∀i ∈ {0, 1, 2, …, n−1}: c_i = k_i ⊕ x_i

4. Give c and k to different persons (and keep x yourself).

The message can only be decrypted by computing c ⊕ k, so neither person alone learns anything about x: as long as the people holding c and k do not come together, the secret stays hidden.
Theorem 1.5.1:
To share an n-bit secret x among m people one needs to compute (m − 1) · n random key bits (equivalently: m − 1 keys).

Proof:
We show that m − 1 keys are enough to share a secret securely among m people, with the property that only all m people together can decrypt the message.
Compute m − 1 keys

k_1, k_2, …, k_(m−1)

with ∀i ∈ {1, 2, …, m−1}: k_i ∈ {0, 1}^n.
Then for m people p_i with i ∈ {0, 1, …, m−1} the information is distributed as follows:

x ⊕ k_1 ⊕ k_2 ⊕ … ⊕ k_(m−1) ↦ p_0
k_1 ↦ p_1
k_2 ↦ p_2
⋮
k_(m−1) ↦ p_(m−1)

so every person p_i with i ∈ {0, 1, 2, …, m−1} gets one piece of information, and the secret is perfectly shared: xoring all m pieces together recovers x, while any subset of fewer than m pieces reveals nothing about it. Every key holds n bits, so the m − 1 keys hold (m − 1) · n bits together. Therefore

(m − 1) · n

randomly chosen bits are needed and spread among m people to provide secrecy among m people. □
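The scheme in the proof can be sketched directly; `share` and `reconstruct` are our own helper names:

```python
# Sketch of Theorem 1.5.1: split an n-bit secret x among m people using
# m - 1 random keys; only the xor of all m shares recovers x.
import random, functools

random.seed(42)

def share(x, n_bits, m):
    keys = [random.getrandbits(n_bits) for _ in range(m - 1)]
    # person p_0 receives x xor k_1 xor ... xor k_{m-1}
    first = functools.reduce(lambda a, b: a ^ b, keys, x)
    return [first] + keys          # m shares in total

def reconstruct(shares):
    return functools.reduce(lambda a, b: a ^ b, shares)

secret = 0b101100111010
shares = share(secret, 12, 5)
assert reconstruct(shares) == secret
# any m - 1 shares leave the secret perfectly hidden: without one share
# the xor of the rest equals secret xor (the missing share)
assert reconstruct(shares[1:]) == secret ^ shares[0]
```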

1.6 Perfect Cipher

Definition perfect cipher
The ciphertext provides an attacker with no additional information about the plaintext. Assume m, m* ∈ M, k ∈ K, c ∈ C. The property for a perfect cipher is given by

P[m = m* | E_k(m) = c] = P[m = m*]    (1.2)

That means: for an attacker/eavesdropper, the probability that m = m* without knowing the ciphertext is equal to the probability that m = m* knowing the ciphertext. Note that the property

P[m = m* | E_k(m) = c] = 1/|M|

where |M| is the cardinality of M (the number of possible messages), would only be correct if, a priori, the attacker knew nothing about the messages, i.e. all messages were equally likely (which is obviously not the case: not all sentences make sense).
Theorem 1.6.1:
The one-time pad is a perfect cipher.

Proof:
Remember the perfect cipher property (1.2) and the definition of conditional probability (1.1). Set A = (m = m*) and B = (E_k(m) = c).
For any message–ciphertext pair there is exactly one key that maps that message to that ciphertext, and the keys are chosen uniformly at random, therefore for every message m_i:

Σ_(k_i∈K) P(E_(k_i)(m_i) = c) = 1  and  P(E_k(m_i) = c) = 1/|K|

Summing over all messages, weighted by their probabilities, gives

P(B) = P(E_k(m) = c) = Σ_(m_i∈M) P(m = m_i) · 1/|K| = 1/|K|

That is the probability of event B: the probability that some message encrypts to the observed ciphertext (computed over all the messages).
Then

P(A ∩ B) = P(m = m* ∧ E_k(m) = c) = P(m = m*) · P(k = k*) = P(m = m*) · 1/|K| = P(m = m*)/|K|

To see this, consider k* ∈ K, m* ∈ M: the distribution of M need not be uniform (not all messages are equally likely), but every key maps each message to exactly one ciphertext and the keys are equally likely (the distribution of the keys is uniform), therefore P(k = k*) = 1/|K|, where k* is the unique key with E_(k*)(m*) = c.
Plugging everything into the conditional probability formula gives

P[m = m* | E_k(m) = c] = P(m = m* ∧ E_k(m) = c) / P(E_k(m) = c) = (P(m = m*)/|K|) / (1/|K|) = P(m = m*)

which is the definition of the perfect cipher. It follows that the one-time pad is a perfect cipher. □

Definition malleable cipher, impractical cipher
A cipher is

malleable if the encrypted message E_k(m) = c can be modified by an active attacker X: the attacker intercepts c, replaces it with some c', and the receiver decrypts D_k(c') = m' without noticing the change;

impractical if and only if

|K| ≥ |M|

The one-time pad is very impractical, because the keys have to be as long as the messages, and a key can never be reused. That means

|K| = |M|

Unfortunately, Claude Shannon proved that finding a practical perfect cipher is impossible.

.
Theorem 1.6.2 (Shannon's Keyspace Theorem)
Every perfect cipher is impractical.

Proof (by contradiction):
Assume there is a perfect cipher that does not satisfy the impracticality property; that is, suppose E is a perfect cipher with |M| > |K|.
Let c_0 ∈ C with P(E_k(m) = c_0) > 0. That means there is some key that encrypts some message m to c_0.
Decrypt c_0 with all k ∈ K using the decryption function D (not necessarily the same as E). The cipher is correct — in order to be perfect it has to be both correct and perfectly secure — so the decryption function must have the property

D_k(E_k(m)) = m

Let

M_0 = ⋃_(k∈K) D_k(c_0)

Therefore M_0 is the set of all possible messages obtained by decrypting c_0 with every key k ∈ K (a brute-force attack). It follows that

(a) |M_0| ≤ |K|
Because of the construction of M_0 (a union over all keys)

(b) |M_0| < |M|
Because of the assumption |M| > |K| combined with (a): |M_0| ≤ |K|

(c) ∃m* ∈ M: m* ∉ M_0
Follows directly from (b): |M_0| < |M|

Now consider the perfect cipher property

P[m = m* | E_k(m) = c_0] = P[m = m*]

Due to (c), P[m = m* | E_k(m) = c_0] = 0, since no key decrypts c_0 to m*; but a priori P[m = m*] > 0 is possible, since m* ∈ M.
We have contradicted the requirement for the perfect cipher, therefore the assumption |M| > |K| must be false. Thus there exists no perfect cipher where

|M| > |K|

Therefore every cipher that is perfect must be impractical. □
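The brute-force step in the proof can be made concrete for the one-time pad on 2-bit messages; a minimal sketch (the toy keyspaces are our own choice):

```python
# Decrypt a fixed ciphertext c0 under every key. For the one-time pad
# (|K| = |M|), M0 contains every message: the ciphertext excludes nothing.
c0 = 0b01
M0 = {c0 ^ k for k in range(4)}          # D_k(c0) = c0 xor k, all 2-bit keys
assert M0 == {0b00, 0b01, 0b10, 0b11}    # M0 = M

# With |K| < |M| (here only two keys), some message is never reached, so
# intercepting c0 rules messages out -- the cipher cannot be perfect.
M0_small = {c0 ^ k for k in (0b00, 0b01)}
assert len(M0_small) == 2
assert 0b10 not in M0_small              # the m* from step (c) of the proof
```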

1.7 Monoalphabetic Substitution Cipher (Toy-Cipher)

Definition Monoalphabetic Substitution Cipher (Toy-Cipher)
The Toy-Cipher is a monoalphabetic substitution cipher where each letter of the alphabet is mapped to a substitution letter. The decryption is done by the reversed mapping. The cipher uses M = {A, B, C, …, Z}^n (words of length n, using letters from the alphabet) and has a keyspace of size |K| = 26! (the permutations of 26 letters).
Example 15:
One possible mapping is given by (every letter maps to the next letter and z maps to a)

a 7 b
b 7 c
..
.
y 7 z
z 7 a

Thus the encryption function E encrypts the message m = "hello" as follows:

h 7 i
e 7 f
l 7 m
l 7 m
o 7 p

It follows that E_1(m) = "ifmmp", where the key k = 1 is the translation or shift of each letter.
Theorem 1.7.1:
The Monoalphabetic Substitution Cipher (Toy-Cipher) is imperfect for a minimum message length of 19.

Proof:
Shannon's keyspace theorem states that a perfect cipher requires

|K| ≥ |M|

(the keyspace is at least as big as the message space). It follows that a cipher is imperfect if

|K| < |M|

For the Toy-Cipher,

|K| = 26 · 25 · 24 · … · 2 · 1 = 26!

because the key is a permutation of the alphabet: there are 26 choices for what "a" can map to, 25 choices for what "b" can map to, and so on.
The number of possible messages (the message space) of length n is

|M| = 26^n

Thus the smallest such n follows from

26! < 26^n  ⇔  n ≥ 19

□

Proof (by counterexample):
Any two-letter ciphertext consisting of two identical letters (e.g. "aa", "bb", …) cannot decrypt to a two-letter message with two different letters (e.g. "ab", "dk", "lt", …), since a ciphertext letter always decrypts to the same plaintext letter (i.e. "aa" can only decrypt to a message with two identical letters). □
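A minimal sketch of the Toy-Cipher (the helper names and the random key choice are our own):

```python
# Sketch: the key is a permutation of the alphabet; decryption inverts it.
import string, random

random.seed(7)
alphabet = string.ascii_lowercase
key = list(alphabet)
random.shuffle(key)                      # one of the 26! possible keys
enc = dict(zip(alphabet, key))
dec = {v: k for k, v in enc.items()}     # reversed mapping

def encrypt(m):
    return ''.join(enc[ch] for ch in m)

def decrypt(c):
    return ''.join(dec[ch] for ch in c)

assert decrypt(encrypt('hello')) == 'hello'

# the shift-by-one key reproduces Example 15: 'hello' -> 'ifmmp'
shift = dict(zip(alphabet, alphabet[1:] + alphabet[0]))
assert ''.join(shift[ch] for ch in 'hello') == 'ifmmp'
```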

1.8 Lorenz Cipher Machine

Theorem 1.8.1:
Given two ciphertexts m ⊕ k = c = c_0 c_1 … c_(n−1) and m' ⊕ k = c' = c'_0 c'_1 … c'_(n−1) encrypted with the same key k, with c, c' ∈ C and for some j ∈ I := {0, 1, 2, …, n−1}:

c_j ≠ c'_j

then xoring the ciphertexts eliminates the key:

c ⊕ c' = m ⊕ k ⊕ m' ⊕ k = m ⊕ m'

If there is only a slight difference between c and c', it is possible, by guessing m, to get a candidate for the other message via (xoring with the intercepted ciphertexts)

m ⊕ c ⊕ c'

which should give back the other message m'.
Once the two messages m, m' are known, it is easy to get the key with

k = m ⊕ c

Definition Lorenz Cipher Machine

Each letter of the message m is divided into 5 bits m_0 m_1 m_2 m_3 m_4, and those are xored with the values coming from the corresponding, differently sized k-wheels, which have a 0 or a 1 at each position. The result is also xored with the values of the s-wheels, which work similarly. The k-wheels turn on every character; the s-wheels turn conditionally on the result of 2 other wheels: the m_1-wheel (which turns every time) and the m_2-wheel (which rotates depending on the value of the m_1-wheel). Depending on m_1 ⊕ m_2, either all the s-wheels rotate by 1 or none of them do. The result of all these xors is the ciphertext c = c_0 c_1 c_2 c_3 c_4.
The Lorenz Cipher works similarly to a one-time pad: xoring a message with a key leads to a ciphertext.
Knowing the structure of the machine is not enough to break the cipher; it is also necessary to know the initial configuration.

Example 16:
Let z = z_0 z_1 z_2 z_3 z_4 z_5 z_6 z_7 … z_(n−1) be the intercepted ciphertext. The ciphertext z is broken into 5 channels, where each bit at position i with i ∈ {0, 1, 2, …, n−1} is transmitted over channel (i mod 5) + 1. Thus for 5 channels c_1, c_2, c_3, c_4, c_5:

c_1: z_0 z_5 z_10 z_15 …
c_2: z_1 z_6 z_11 z_16 …
c_3: z_2 z_7 z_12 z_17 …
c_4: z_3 z_8 z_13 z_18 …
c_5: z_4 z_9 z_14 z_19 …

So channel 1 transmits the first bit of the first letter z_0, the first bit of the second letter z_5, …; channel 2 transmits the second bit of the first letter z_1, the second bit of the second letter z_6, and so on.
Now subscript z by the channel and the letter position within that channel, writing z_(c,i). It follows that

z_(0,i) = z_0, z_5, z_10, …
z_(1,i) = z_1, z_6, z_11, …

The subscripts break up the ciphertext into channels and thereby expose the weakness of the cipher (all s-wheels move in unison). Thus

z_(c,i) = m_(c,i) ⊕ k_(c,i) ⊕ s_(c,i)

and by separating the ciphertext into these 3 pieces, it is possible to take advantage of the properties they have. The important point is that the s-wheels do not always turn: looking at subsequent characters, there is a good chance that the s-wheels have not changed. Let us define

Δz_(c,i) := z_(c,i) ⊕ z_(c,i+1)

and notice that z_(c,i), z_(c,i+1) are 5 characters apart in the intercepted ciphertext, but adjacent within their channel. It follows that

Δz_(0,i) ⊕ Δz_(1,i) = z_(0,i) ⊕ z_(0,i+1) ⊕ z_(1,i) ⊕ z_(1,i+1)
= m_(0,i) ⊕ k_(0,i) ⊕ s_(0,i) ⊕ m_(0,i+1) ⊕ k_(0,i+1) ⊕ s_(0,i+1) ⊕ m_(1,i) ⊕ k_(1,i) ⊕ s_(1,i) ⊕ m_(1,i+1) ⊕ k_(1,i+1) ⊕ s_(1,i+1)
= (m_(0,i) ⊕ m_(0,i+1) ⊕ m_(1,i) ⊕ m_(1,i+1)) ⊕ (k_(0,i) ⊕ k_(0,i+1) ⊕ k_(1,i) ⊕ k_(1,i+1)) ⊕ (s_(0,i) ⊕ s_(0,i+1) ⊕ s_(1,i) ⊕ s_(1,i+1))
=: Δm ⊕ Δk ⊕ Δs

Theorem 1.8.2:
With the example above it follows that

(a) P(Δm = 0) > 1/2

(b) P(Δs = 0) > 1/2

Proof:

(a) P(Δm = 0) > 1/2 depends on subsequent message letters: if adjacent letters in the message are the same, that ensures Δm = 0, and repeated letters are common in natural language (for German the probability is about 0.61).

(b) P(Δs = 0) > 1/2 follows from the structure of the machine: when the s-wheels advance, this probability is about 1/2, but when they do not advance, Δs is always 0. This means the probability that Δs = 0 is significantly greater than 1/2 (for the structure of the Lorenz Cipher Machine it is about 0.73). □

Example 17:
Assume P(Δk = 0) = 1/2, P(Δm = 0) > 1/2 and P(Δs = 0) > 1/2, with Δz_(c,i) = Δm_(c,i) ⊕ Δk_(c,i) ⊕ Δs_(c,i). It is possible to break the cipher by learning more about the key k. If the key is uniformly distributed, whatever patterns Δm and Δs have are lost when they are xored with Δk. The k-wheels in the Lorenz Cipher Machine produce the key. Look at Δz for two channels, i.e. only at the first two k-wheels (size 41 and size 31). Then there are 41 · 31 = 1271 different configurations for k_0 and k_1. That means those wheels repeat every 1271 letters, and there are only 1271 different possible settings for the first two k-wheels. Try all 1271 possible settings: for the right setting we know all the key bits, so Δk = 0. If we guess right, then P(Δk = 0) = 1; otherwise (false guess) P(Δk = 0) = 1/2. With Δz = Δm ⊕ Δk ⊕ Δs it follows, for a correct guess (Δk = 0), that P(Δz = 0) ≈ 0.55, because:

P(Δz = 0) = P(Δm = 0) · P(Δs = 0) + P(Δm = 1) · P(Δs = 1)
          = 0.61 · 0.73 + (1 − 0.61) · (1 − 0.73)
          ≈ 0.55

Compute, over the whole intercept, the number of positions where Δz = 0. If the count is near |Δz|/2, it was a bad guess; if it is near 0.55 · |Δz|, it was a good guess.
Assume a 5000-letter message. For each of the 1271 configurations of k_0 and k_1 we compute this count, guessing that Δs = 0:

Δz_(0,i) ⊕ Δz_(1,i) ⊕ k_(0,i) ⊕ k_(0,i+1) ⊕ k_(1,i) ⊕ k_(1,i+1)

Thus we compute 7 xors for each character and count the number of times the result equals 0. It follows that the total number of xors is 5000 · 1271 · 7 = 44,485,000, which is the maximum amount of work (we expect to try about half of the settings before finding the correct configuration of k_0 and k_1; then we do similar things with the other k-wheels, after which we can decrypt the whole message). With a 2 GHz processor this takes well under a second.
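The arithmetic in this example can be checked directly; a small sketch with the probabilities from the text:

```python
# Sketch of Example 17's arithmetic: with P(dm = 0) = 0.61 and
# P(ds = 0) = 0.73, a correct wheel guess forces dk = 0, so
# P(dz = 0) = P(dm = 0) P(ds = 0) + P(dm = 1) P(ds = 1).
p_dm0, p_ds0 = 0.61, 0.73
p_dz0 = p_dm0 * p_ds0 + (1 - p_dm0) * (1 - p_ds0)
assert abs(p_dz0 - 0.5506) < 1e-9     # about 0.55, as in the text

# total xors for a 5000-letter message over all 41 * 31 = 1271 settings
assert 41 * 31 == 1271
assert 5000 * 1271 * 7 == 44_485_000
```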
Theorem 1.8.3 (Goal of a Cipher)
The goal of a cipher is to hide statistical properties of the message space and of the key (which should be perfectly random).
Two properties of the operation of a secure cipher are:

Confusion:
making the relationship between the plaintext and the ciphertext as complex and involved as possible

Diffusion:
non-uniformity in the distribution of the individual letters (and pairs of neighbouring letters) in the plaintext should be redistributed into non-uniformity in the distribution of much larger structures of the ciphertext, which is much harder to detect (the output bits should depend on the input bits in a very complex way; see also the avalanche effect).

Theorem 1.8.4 (Goal of a Cryptanalyst)

The goal of a cryptanalyst is to find statistical properties in the ciphertext and use those to break the key and/or message. (The Lorenz Cipher Machine had statistical properties, visible when looking across channels at subsequent letters, that were not hidden by the cipher: a mechanical weakness, namely that the s-wheels either all moved or none did, and a mathematical weakness, namely that there were only 1271 different positions of the first two k-wheels.)

1.9 Modern Symmetric Ciphers


Definition modern symmetric ciphers, stream ciphers, block ciphers
There are two main types of modern symmetric ciphers:
stream cipher:
consists of a stream of data and the cipher can encrypt small chunks at a time (usually 1
byte at a time)

block cipher:
the data is separated in larger chunks and the cipher encrypts a block at a time (usually
a block size is at least 64 bits and can be up to 128 or 256 bits)
The main difference is the chunk size. The different ciphers are designed for different
purposes.
Definition Advanced Encryption Standard (AES), Data Encryption Standard (DES)
The Advanced Encryption Standard or AES is the most important block cipher nowadays
(standardized in 2001, after a competition that began in 1997) and works on blocks of 128 bits.
It displaced the Data Encryption Standard or DES, which had been the standard for the previous
decades. AES was the winner of a competition that was run by the United States. The main
criteria for the submitted ciphers in the competition were:

security (as provable security is only achievable for the one-time pad), measured as

    security = actual # of rounds / minimal # of rounds that is breakable

where breakable means anything that showed you could reduce the search space even a
little bit would be enough

speed: implementing it both in hardware and in software and

simplicity, which is usually in tension with security.

The winner of the AES competition was a cipher known as Rijndael (developed by two Belgian
cryptographers). A brute force attack with a 128 bit key would require on average
2^128 / 2 = 2^127 attempts. The best known attack needs 2^126 attempts.
AES works with xor and has two main operations:

shift (permuting bits - moving bits around)

s-boxes (non-linearity: mixes up data in a way that is not linear):
This is done by lookup tables:
An s-box takes 8 bits and has a lookup table (with 256 entries) mapping each set of 8 bits
to some other set of 8 bits. Designing the lookup table is a challenge. The lookup table
has to be as nonlinear as possible, and one must make sure there are no patterns in the data
in this table.

The way AES works is combining shifts and s-boxes with xor to scramble up the data, for
multiple rounds: the output of one round is put back through a further series of shifts,
s-boxes and xors. The number of rounds depends on the key size: for the smallest key size
for AES (128 bits) we would do 10 rounds through the cycle before getting the output
ciphertext for that block.

2 Application of Symmetric Ciphers


2.1 Application of Symmetric Ciphers
Ciphers provide 2 main functions:

Encryption:
Takes a message m from some message space M and a key k from some key space K and
produces a ciphertext c.

Decryption:
Is the inverse of encryption. It takes a ciphertext and, given the corresponding key k,
it will produce the same message that we started with.

The correctness property (as mentioned earlier):

Dk (Ek (m)) = m

All of our assumptions about security depend on the key.


There are 2 main key properties:

k is selected randomly and uniformly from K. This means each key is equally likely to be
selected and there is no predictability about what the key is.

k can be kept secret (but shared). That means that the adversary can't learn the key, but
it can be shared between the 2 endpoints.

2.2 Generating Random Keys
Definition (Kolmogorov) Randomness
A string of bits is random if and only if it is shorter than any computer program that can
produce that string (Kolmogorov Randomness).
This means that random strings are those that cannot be compressed.
Example 18:
k ∈ K for K := {0, 1}^n with some n ∈ N. A key with a visible pattern (e.g. 100100100...)
is not random; if there are no visible patterns or obvious repetitions (e.g.
100000101111101011111...), then it is likely that the key is random.
Definition Complexity of a Sequence (Kolmogorov Complexity)
How random a certain string is can be measured by the complexity K of the sequence s,
defined as the length of the shortest possible description of that sequence:

    K(s) = length of the shortest possible description of s

where a description is e.g. a Turing machine, a Python program, or whatever we select as
the description language. As long as that description language is powerful enough to describe
any algorithm, this is a reasonable way to define complexity.
Definition Random Sequence
A sequence s is random if and only if

    K(s) = |s| + C

That means that as the sequence gets longer, its description gets longer at the same rate (up to
a constant C).
Therefore a sequence that can be produced by a short program is not random: there is structure
in the sequence, and the program shows what that structure is.
If there isn't a short program that can describe the sequence, that is an indication that the
sequence is random (there is no simpler way to understand that sequence other than to see the
whole sequence).
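A rough way to apply the "random means incompressible" idea in practice is to ask a general-purpose compressor (here zlib, as a crude stand-in for the uncomputable K) whether it can shrink the data:

```python
import random
import zlib

def is_compressible(data):
    """Data that zlib can shrink is certainly structured (not random);
    failure to compress is only weak evidence of randomness."""
    return len(zlib.compress(data, 9)) < len(data)

patterned = b"100100100" * 100  # visible pattern: compresses well

random.seed(1)
unpatterned = bytes(bytearray(random.getrandbits(8) for _ in range(1000)))
```

Note the one-sided nature of the test: compressibility proves structure, but incompressibility under one particular compressor is not a proof of randomness.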
Theorem 2.2.1:
For a given sequence s it is theoretically impossible to compute K(s).

Proof (sketch):
Any program that outputs s only gives an upper bound on K(s). For example, the program
print s prints out s; its length is the length of s plus 6 characters (5 for print plus 1 for
the space), so K(s) ≤ |s| + 6. But that doesn't prove that there is no shorter program that
can produce s, and in general no procedure can rule such a program out. 
Example 19:
The Berry Paradox gives an idea of the proof of the former theorem:
"What is the smallest natural number that cannot be described in eleven words?"
This relies on 2 properties:

The natural numbers that cannot be described in eleven words form a set.

Any non-empty set of natural numbers has a smallest element.


The answer:

    "The smallest natural number that cannot be described in eleven words"

has itself eleven words. That suggests there is no such number, but this contradicts the 2
properties (paradox).
Definition Statistical Test
A statistical test can show that something is non-random. A statistical test can't prove that
something is random.
Definition Unpredictability
Unpredictability is the property a sequence must have to count as random for generating a good
key.
Example 20:
Assume a sequence s of length n

    s = x0, x1, x2, ..., x_{n−1}

with xi ∈ [0, 2^n − 1]. Even after seeing x0, x1, x2, ..., x_{m−1}, it is only possible to guess
xm with probability 1/2^n.
Definition Physically Random Events
Physically random events include:

quantum mechanics (e.g. radioactive decay, and others)

thermal noise

key presses or user actions

many others.

2.3 Pseudo Random Number Generator (PRNG)


Definition Pseudo Random Number Generator, seed, state
A pseudo random number generator takes as input a small amount of physical randomness
(the seed) and produces a long sequence of random-looking bits. The PRNG is an algorithm for
generating a sequence of numbers that approximates the properties of random numbers. The
sequence is not truly random in that it is completely determined by a relatively small set of
initial values, called the PRNG's state, which includes a truly random seed.
Example 21:
Assume extracting a seed s from a Random Pool (finitely many true random numbers) and using
s as a key. The PRNG may look as follows:

    x0 = E_s(0),  x1 = E_s(1),  x2 = E_s(2),  ...

For the first random output x0 we encrypt 0 with s, and so on.
Theorem 2.3.1:
It is impossible to write a program that tests whether a sequence of bits is truly random; and a
sequence produced by a PRNG, no matter how many statistical tests it passes, is still not truly
random.
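A minimal sketch of such a generator, with SHA-256 of seed||counter standing in for the encryption function E_s (an assumption, made so the example is self-contained; the lecture's construction uses a cipher):

```python
import hashlib

def prng(seed, n_blocks):
    """Counter-style PRNG sketch: output block i is E_seed(i), with
    SHA-256 of seed||i standing in for the encryption function E."""
    out = b""
    for i in range(n_blocks):
        out += hashlib.sha256(seed + str(i).encode()).digest()
    return out

stream = prng(b"seed-from-random-pool", 4)
```

The output is completely determined by the seed, which is exactly why it is only pseudo-random.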

2.4 Modes of Operation
A mode of operation is a procedure for enabling the repeated and secure use of a block
cipher (e.g. AES) under a single key. That means modes of operation are ways to encrypt a file
such that the ciphertext c does not give away much information about the message m.

2.4.1 Electronic Codebook Mode (ECB)


Definition Electronic codebook mode (ECB)
The electronic codebook mode maps each possible input block i ∈ {0, 1, 2, ..., 2^j − 1} (in AES
j = 128) to the value Ek(i). That is (for one fixed key):

    0        →  Ek(0)
    1        →  Ek(1)
    ...
    2^j − 1  →  Ek(2^j − 1)

Thus for m = m0 m1 m2 ... m_{n−1} we have, for all i ∈ {0, 1, 2, ..., n − 1},

    Ek(mi) = ci

and therefore

    c = c0 c1 c2 ... c_{n−1}
The electronic codebook mode works as follows:

1. The message m is divided into blocks

       m = m0 m1 m2 ...

   with a block length depending on the cipher (assume each block mi, i ∈ {0, 1, 2, ...}, has
   a block length of 128 bits).

2. The ciphertext is

       c = c0 c1 c2 ...

   where for all i ∈ {0, 1, 2, ...}:

       ci = Ek(mi)
Assume E has perfect secrecy (impossible due to reusing the key, since then |K| < |M|). Even
then the attacker (knowing only c) can still figure out:

The length of m, because the length of c is equal to the length of m.

Which blocks in m are equal. For a 128 bit block and an 8 bit character length there are
128/8 = 16 characters per block. That means after every 16 characters a new block starts.
The 2 main problems of electronic codebook mode are:

The electronic codebook mode doesn't hide repetitions.

An attacker can move or replace blocks, and decryption would still produce a perfectly valid
message with the blocks in a different order.
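The repetition leak can be demonstrated with a toy stand-in for Ek (a keyed hash truncated to the block size; hypothetical, not a real or even invertible cipher):

```python
import hashlib

def toy_block_enc(block, key):
    """Hypothetical stand-in for E_k: a keyed hash truncated to the
    8-byte block size (illustration only, not a real cipher)."""
    return hashlib.sha256(key + block).digest()[:8]

def ecb_encrypt(message, key, bs=8):
    """ECB: each block is encrypted independently with the same key."""
    blocks = [message[i:i + bs] for i in range(0, len(message), bs)]
    return [toy_block_enc(b, key) for b in blocks]

# "ATTACKAT" appears twice, so plaintext blocks 0 and 1 are identical,
# and therefore so are ciphertext blocks 0 and 1.
c = ecb_encrypt(b"ATTACKATATTACKAT", b"k")
```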

2.4.2 Cipher Block Chaining Mode (CBC)
Definition Cipher block chaining mode
The idea of CBC mode is using the ciphertext from the previous block to impact the next
block. Breaking the message m into blocks m = m0 m1 m2 . . . mn1 of block size b, then the
CBC may look as follows

    m0 ⊕ IV  → Ek → c0
    m1 ⊕ c0  → Ek → c1
    m2 ⊕ c1  → Ek → c2
    ...

This means instead of encrypting each block independently, each message block will be xor'ed
with the previous cipher block and then encrypted:

1. The first message block m0 will be xor'ed with an initialization vector (IV), which is a
   random block of size b, and then encrypted with E using k to get c0. The IV doesn't need
   to be kept secret, but it is important not to reuse an IV.

2. m1 will be xor'ed with c0 and then encrypted with E using k to get c1.

3. This repeats until m_{n−1} is xor'ed with c_{n−2} and then encrypted with E using k to
   get c_{n−1}.

The result of CBC is, for all i ∈ {1, 2, 3, ..., n − 1}:

    c0 = Ek(m0 ⊕ IV)
    ci = Ek(mi ⊕ c_{i−1})

Note that CBC encrypts the result of m0 ⊕ IV, not the IV itself.


Theorem 2.4.1 (Recovering m)
If we lose the value of IV but still have c and k, then the message m (except m0) can be
recovered: from

    ci = Ek(mi ⊕ c_{i−1})

it follows that

    mi = Dk(ci) ⊕ c_{i−1}

except for

    c0 = Ek(m0 ⊕ IV)    ⇒    m0 = Dk(c0) ⊕ IV

thus m0 is lost.
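A toy sketch of this recovery, using xor with the key as a stand-in for Ek (an assumption for brevity; real CBC would use a block cipher):

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy CBC where E_k is just "xor with the key" -- insecure, but it has
# the CBC structure c_i = E_k(m_i xor c_{i-1}) with 8-byte blocks.
def cbc_encrypt(blocks, key, iv):
    cipher, prev = [], iv
    for m in blocks:
        c = xor_bytes(xor_bytes(m, prev), key)
        cipher.append(c)
        prev = c
    return cipher

def cbc_decrypt_without_iv(cipher, key):
    """m_i = D_k(c_i) xor c_{i-1}: every block except m_0 is
    recoverable without the IV."""
    return [xor_bytes(xor_bytes(cipher[i], key), cipher[i - 1])
            for i in range(1, len(cipher))]

blocks = [b"BLOCK-00", b"BLOCK-01", b"BLOCK-02"]
ct = cbc_encrypt(blocks, b"secretk!", b"init-vec")
recovered = cbc_decrypt_without_iv(ct, b"secretk!")
```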

Example 22:
Implementing cipher block chaining mode in Python may look as follows:
from Crypto.Cipher import AES

def non_encoder(block, key):
    """A basic encoder that doesn't actually do anything"""
    return pad_bits_append(block, len(key))

def xor_encoder(block, key):
    block = pad_bits_append(block, len(key))
    cipher = [b ^ k for b, k in zip(block, key)]
    return cipher

def aes_encoder(block, key):
    block = pad_bits_append(block, len(key))
    # the pycrypto library expects the key and block in 8 bit ascii
    # encoded strings so we have to convert from the bit string
    block = bits_to_string(block)
    key = bits_to_string(key)
    ecb = AES.new(key, AES.MODE_ECB)
    return string_to_bits(ecb.encrypt(block))

# this is an example implementation of
# the electronic codebook cipher
# illustrating manipulating the plaintext,
# key, and init_vec
def electronic_cookbook(plaintext, key, block_size, block_enc):
    """Return the ecb encoding of plaintext"""
    cipher = []
    # break the plaintext into blocks
    # and encode each one
    for i in range(len(plaintext) / block_size + 1):
        start = i * block_size
        if start >= len(plaintext):
            break
        end = min(len(plaintext), (i+1) * block_size)
        block = plaintext[start:end]
        cipher.extend(block_enc(block, key))
    return cipher

###############
def xor(x, y):
    return [xx ^ yy for xx, yy in zip(x, y)]

def cipher_block_chaining(plaintext, key, init_vec, block_size, block_enc):
    # plaintext = bits to be encoded
    # key = bits used as key for the block encoder
    # init_vec = bits used as initialization vector for the block encoder
    # block_size = size of blocks used by block_enc
    # block_enc = function that encodes a block using key
    cipher = []
    xor_input = init_vec
    # break the plaintext into blocks
    # and encode each one
    for i in range(len(plaintext) / block_size + 1):
        start = i * block_size
        if start >= len(plaintext):
            break
        end = min(len(plaintext), (i+1) * block_size)
        block = plaintext[start:end]
        input_ = xor(xor_input, block)
        output = block_enc(input_, key)
        xor_input = output
        cipher.extend(output)
    return cipher
####################
####################

def test():
    key = string_to_bits('4h8f.093mJo:*9#$')
    iv = string_to_bits('89JIlkj3$%0lkjdg')
    plaintext = string_to_bits("One if by land; two if by sea")

    cipher = cipher_block_chaining(plaintext, key, iv, 128, aes_encoder)
    assert bits_to_string(cipher) == '\xeaJ\x13t\x00\x1f\xcb\xf8\xd2\x032b\xd0\xb6T\xb2\xb1\x81\xd5h\x97\xa0\xaeogtNi\xfa\x08\xca\x1e'

    cipher = cipher_block_chaining(plaintext, key, iv, 128, non_encoder)
    assert bits_to_string(cipher) == 'wW/i\x05\rJQ]\x05\\\r\x05\x0e_G\x03 @Ilkj3$%/hd\x00\x00\x00'

    cipher = cipher_block_chaining(plaintext, key, iv, 128, xor_encoder)
    assert bits_to_string(cipher) == 'C?\x17\x0f+=sb0O37/7|c\x03 @Ilkj3$%/hd9#$'

###################
# Here are some utility functions
# that you might find useful

BITS = ('0', '1')
ASCII_BITS = 8

def display_bits(b):
    """converts list of {0, 1}* to string"""
    return ''.join([BITS[e] for e in b])

def seq_to_bits(seq):
    return [0 if b == '0' else 1 for b in seq]

def pad_bits(bits, pad):
    """pads seq with leading 0s up to length pad"""
    assert len(bits) <= pad
    return [0] * (pad - len(bits)) + bits

def convert_to_bits(n):
    """converts an integer n to bit array"""
    result = []
    if n == 0:
        return [0]
    while n > 0:
        result = [(n % 2)] + result
        n = n / 2
    return result

def string_to_bits(s):
    def chr_to_bit(c):
        return pad_bits(convert_to_bits(ord(c)), ASCII_BITS)
    return [b for group in
            map(chr_to_bit, s)
            for b in group]

def bits_to_char(b):
    assert len(b) == ASCII_BITS
    value = 0
    for e in b:
        value = (value * 2) + e
    return chr(value)

def list_to_string(p):
    return ''.join(p)

def bits_to_string(b):
    return ''.join([bits_to_char(b[i:i + ASCII_BITS])
                    for i in range(0, len(b), ASCII_BITS)])

def pad_bits_append(small, size):
    # as mentioned in lecture, simply padding with
    # zeros is not a robust way of padding
    # as there is no way of knowing the actual length
    # of the file, but this is good enough
    # for the purpose of this exercise
    diff = max(0, size - len(small))
    return small + [0] * diff

2.4.3 Counter Mode (CTR)


Definition Counter Mode (CTR)
A message m is divided into blocks m = m0 m1 ... m_{n−1}. In CTR mode, instead of a message
block, a counter (some value that cycles through the natural numbers) is the input to the
encryption function, and the results are encrypted blocks. These blocks, xor'ed with the
corresponding message block, are the final ciphertext blocks. To avoid the problem of using the
same sequence of counters every time, we add a nonce (in fact: prepending the nonce to the
counter value). A nonce is simply a one-time, unpredictable value (similar to a key) which
doesn't need to be kept secret (e.g. with AES: the size of a block is always 128 bits, so the
nonce and the counter are each 64 bits long). CTR mode may look as follows:

    c0 = Ek(nonce|0) ⊕ m0,   c1 = Ek(nonce|1) ⊕ m1,   ...,   c_{n−1} = Ek(nonce|n−1) ⊕ m_{n−1}

It follows (encryption)

    ci = Ek(nonce|i) ⊕ mi

and (decryption):

    mi = ci ⊕ Ek(nonce|i)
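A compact sketch of CTR mode, with a keyed hash of nonce||counter standing in for Ek(nonce|i) (an assumption so the example is self-contained):

```python
import hashlib

def keystream_block(key, nonce, i):
    """Stand-in for E_k(nonce|i): keyed hash of nonce||counter,
    truncated to an 8-byte block (assumption, not real AES)."""
    return hashlib.sha256(key + nonce + str(i).encode()).digest()[:8]

def ctr_crypt(blocks, key, nonce):
    """CTR mode: c_i = E_k(nonce|i) xor m_i. Decryption is the exact
    same operation, so this one function does both directions."""
    return [bytes(a ^ b for a, b in zip(keystream_block(key, nonce, i), blk))
            for i, blk in enumerate(blocks)]

msg = [b"block-00", b"block-01"]
ct = ctr_crypt(msg, b"key", b"nonce")
```

Because both directions xor the same keystream, E is only ever computed forward, never inverted.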

2.4.4 CBC versus CTR


Due to the former definitions:

                    CBC                                  CTR
    Encryption      ci = Ek(mi ⊕ c_{i−1}),               ci = Ek(nonce|i) ⊕ mi
                    c0 = Ek(m0 ⊕ IV)
    Decryption      mi = Dk(ci) ⊕ c_{i−1},               mi = ci ⊕ Ek(nonce|i)
                    m0 = Dk(c0) ⊕ IV
    Speed           slower: encrypting ci requires       faster: Ek(nonce|i) can be computed
                    c_{i−1} first (no parallel           without knowing the message; the xor
                    encryption)                          itself is much cheaper than encryption

2.4.5 Cipher Feedback Mode (CFB)


Definition Cipher Feedback Mode (CFB)
CFB mode works on n-bit values x0, x1, x2, ..., where the first value is an initialization
vector: x0 = IV. The encryption function takes an x value and a key k of length n and gives an
n-bit output. The output is separated into two blocks: the first block c'_i has length s (the
s block) and the second block c''_i has length n − s. The message m is divided into blocks of
length s: m = m0 m1 m2 .... Each message block is xor'ed with the corresponding s block c'_i to
get a ciphertext block ci of length s. The next x value is composed of the last n − s bits of
the former x value (written x̃_i) followed by the ciphertext block from the former step.
Updating the x value works as follows:

    x0 = IV
    xi = x̃_{i−1} || c_{i−1}

and the ciphertext values are given by:

    ci = c'_i ⊕ mi    where c'_i = the first s bits of Ek(xi)
The decryption of a message given the ciphertext c = c0 c1 ... c_{n−1}:

    mi = ci ⊕ c'_i    where c'_i = the first s bits of Ek(xi)
    xi = x̃_{i−1} || c_{i−1}
    x0 = IV
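A sketch of the full-block special case s = n, where each x value is simply the previous ciphertext block (keyed_func is an assumed stand-in for Ek; note CFB only ever computes E forward, never its inverse, so a hash works here):

```python
import hashlib

def keyed_func(x, key):
    """Assumed stand-in for E_k, truncated to 8-byte blocks."""
    return hashlib.sha256(key + x).digest()[:8]

def cfb(blocks, key, iv, decrypt=False):
    """Full-block CFB (s = n): output_i = E_k(x_i) xor input_i, and the
    feedback x_{i+1} is always the ciphertext block."""
    out, x = [], iv
    for b in blocks:
        o = bytes(p ^ q for p, q in zip(keyed_func(x, key), b))
        out.append(o)
        # when encrypting, the ciphertext is the output o;
        # when decrypting, the ciphertext is the input b
        x = b if decrypt else o
    return out

msg = [b"pay Bob ", b"$100.00 "]
ct = cfb(msg, b"key", b"init-vec")
pt = cfb(ct, b"key", b"init-vec", decrypt=True)
```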

2.4.6 Output Feedback Mode (OFB)


Definition Output Feedback Mode
OFB mode is similar to CFB mode, but instead of feeding the ciphertext block back into the next
x value, we feed back the corresponding part of the output of the encryption E itself. That is
the only difference between OFB and CFB:

    x0 = IV
    xi = x̃_{i−1} || c'_{i−1}
    ci = c'_i ⊕ mi    where c'_i = the first s bits of Ek(xi)

Unlike CFB, with OFB it is possible to recover most of an encrypted file even if one cipher
block is lost. For the same reason, OFB could not be the basis of a cryptographic hash function:
in a cryptographic hash function the ciphertext block must depend on the previous message
blocks, which is not the case in OFB (c2 doesn't depend on m1).

2.4.7 CBC versus CFB


                                                 CBC     CFB
    Requires that E is invertible                true    false
    Requires IV to be secret                     false   false
    Can use small message blocks                 false   true
    Protects against tampering                   false   false
    Final c_{n−1} depends on all message blocks  true    true

2.4.8 Parallel Decrypting Modes


Theorem 2.4.2:
Modes of operation that can perform most of the decryption work in parallel:

ECB

CTR

CBC

CFB
2.5 Protocol
Definition Protocol
A protocol involves 2 or more parties and is a precisely defined sequence of steps. Each
step can involve some computation and communication (sending data between the parties). A
cryptographic protocol also involves a secret.
Definition Security Protocol
A security protocol is a protocol that provides some guarantee even if some of the participants
cheat (don't follow the protocol's steps as specified).
Example 23:
Making a coin toss over a channel via 2 parties A and B:

1. A picks a value for x ∈ {0, 1}, with 0 representing "Heads" and 1 representing "Tails",
   and a random key k of length n (the security parameter): k ∈ {0, 1}^n.

2. A will create a message m by encrypting x with k: m = Ek (x).

3. A sends the message m to B.

4. B receives m and makes a guess g {0, 1}.

5. A receives g from B

6. A sends k to B so that B gets the result of the coin toss by decrypting m with k:
x = Dk (m). If x = g B knows who won the coin toss.

This looks as follows:

    A                                      B
    picks x, k;  m = Ek(x)
                     -------- m ------->
                                           guesses g
                     <------- g --------
                     -------- k ------->
    checks g = x?                          x = Dk(m);  checks g = x?
Note that A can cheat by finding 2 keys k0, k1 where

    Ek0(0) = Ek1(1)  or  Ek0(1) = Ek1(0)

so that whether A wins depends on the guess of B: A sends a different key to B depending on the
choice B makes. A harder way to cheat is finding 2 keys k0, k1 where:

    Ek0(0) = Ek1(1)  and  Ek0(1) = Ek1(0)

which always lets A claim the opposite of B's guess. And yet another way to cheat is finding
k' such that

    Ek'(0) = Ek(1)
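The protocol steps above can be sketched as follows, with a hash of key||x standing in for the commitment Ek(x) (an assumption for a self-contained example; the protocol in the text uses encryption):

```python
import hashlib

def commit(x, key):
    """A's commitment to coin x: H(key || x), with SHA-256 standing
    in for E_k(x) (assumption)."""
    return hashlib.sha256(key + bytes([x])).digest()

# Steps 1-3: A picks x and k, and sends m to B.
x, k = 1, b"16-byte-rand-key"
m = commit(x, k)

# Step 4: B, seeing only m, makes a guess.
g = 0

# Steps 5-6: A reveals k; B opens the commitment and checks the toss.
assert commit(x, k) == m   # B verifies A did not change x afterwards
b_wins = (g == x)
```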

2.6 Padding
Definition Padding
If we use a block cipher that requires an input of n bits (n = 128 for the message block,
the key, and therefore the output in AES), the message sometimes has to be padded to reach
the required length. The simplest form is zero padding (filling up with zeroes until the
required length is reached).
Example 24:
For the protocol we used in Example 23, the value of x was only 1 bit (0 or 1). Using ECB mode
requires 128-bit blocks (in AES). The simplest solution is zero padding: padding the input with
127 0-bits added after x.

2.7 Cryptographic Hash Function


Definition Cryptographic Hash Function
The Cryptographic Hash Function H is a function that takes some large value as input and
outputs a small value:
h = H(x)
Regular hash functions have these properties:

Compression:
H takes a large input and gives a small, fixed-size output.

Distribution:
H is well distributed: P(H(x) = i) ≈ 1/N, where N is the size of the output range
(output range: [0, N)).
A cryptographic hash function additionally has these properties:

Pre-image resistance (one-way-ness):
Given h, it is hard to find an x such that h = H(x).

Weak collision resistance:
Given h = H(x), it is hard to find any x' ≠ x such that H(x') = h.

Strong collision resistance:
It is hard to find any pair (x, y) with x ≠ y such that H(x) = H(y).
Example 25:
An almost-good cryptographic hash function is to use CBC to encrypt x and take the last
output block as the value of the hash function, because this provides the compression property
as well as the collision resistance properties. This construction is similar to the
Merkle-Damgård construction. For the hash function, using a fixed key will work (e.g. select
the key to be 0).

2.8 Random Oracle Assumption
Definition Random Oracle Assumption
A random oracle is an ideal (has all required properties) cryptographic hash function

    H(x) → h

that maps any input to h with a uniform distribution. An attacker trying to find a collision can
do no better than a brute force search over the size of h.

Theorem 2.8.1:
It is impossible to construct a random oracle.

Proof:
The hash function must be deterministic, so it produces the same output for the same input.
We want a uniform distribution over the outputs, which would mean adding randomness to what
comes in, but that is impossible: since the function is deterministic, the only randomness it
could use is what is provided by x, and there is no way to amplify that to provide more
randomness in the output. 
Example 26:
Consider a coin toss as in Example 23 with a hash function:

1. A picks a number x ∈ {0, 1} and computes the ideal cryptographic hash function (despite
   the fact that an ideal cryptographic hash function doesn't exist): m = H(x)

2. A sends m to B

3. B makes a guess g ∈ {0, 1} and sends it to A.

4. A sends x to B

5. B can check if m = H(x). If m ≠ H(x) then B suspects A has cheated.

If x = g then B wins, otherwise A wins. This may look as follows:

    A                                      B
    picks x;  m = H(x)
                     -------- m ------->
                                           guesses g
                     <------- g --------
                     -------- x ------->
    checks x = g?                          checks m = H(x) and x = g?

In this protocol B can easily cheat:
The hash function is not encryption. There are no secrets that go into it; it only provides
a commitment to the input. B can compute H(0) and H(1) and check which one equals m, and
instead of guessing, B can pick whichever one did.
Example 27:

Assume an attacker has enough computing power to perform 2^62 hashes; then the (ideal) hash
function should produce 63 bits of output to provide weak collision resistance. We want the
attacker's success probability to satisfy P(attacker can find x' where H(x') = h) ≤ 1/2 in 2^62
tries. Consider (with b the number of output bits and k the number of guesses, k = 2^62):

    P(one guess hits: H(x') = h) = 2^−b
    P(one guess is bad) = 1 − 2^−b

And over a series of guesses, the probability that they are all bad:

    P(k guesses all bad) = (1 − 2^−b)^k

Computing gives the following output:

    1 − (1 − 2^−62)^(2^62) ≈ 0.63
    1 − (1 − 2^−63)^(2^62) ≈ 0.39

That means 63 is the fewest number of bits at which the attacker has less than a 50% chance of
finding a pre-image that maps to the same hash value in 2^62 guesses.
As a rule of thumb:

    weak collision resistance:    ~ 2^b attacker work
    strong collision resistance:  ~ 2^(b/2) attacker work

This means for weak collision resistance an attacker needs to do about 2^b work, where b is the
number of hash bits. Strong collision resistance is actually much harder to obtain: there the
attacker's effort is more like 2^(b/2), so we need about twice as many output bits in our hash
function to provide it.
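The numbers 0.63 and 0.39 above can be checked directly. Note that computing (1 − 2^−62)^k naively in floating point fails, because 1 − 2^−62 rounds to exactly 1.0 in a double; working in log space avoids that:

```python
import math

def p_find_preimage(b, k):
    """P(attacker finds a preimage of a b-bit ideal hash within k
    guesses): 1 - (1 - 2^-b)^k, computed via log1p so the tiny
    2^-b term is not lost to rounding."""
    return 1 - math.exp(k * math.log1p(-(2.0 ** -b)))
```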
Example 28:
The birthday paradox (not really a paradox) gives an unexpected result of the probability that
2 people have the same birthday.
Assume a group of k people. The probability that 2 people have the same birthday is computed as
follows (no leap years, birthdays uniformly distributed). The complement probability (that there
are no duplicates) is:

    P(no duplicates) = (365/365) · (364/365) · (363/365) · ... · ((365 − k + 1)/365)
                     = (n!/(n − k)!) / n^k

where 365 · 364 · 363 · · · (365 − k + 1) is the number of ways to assign birthdays with no
duplication among k people and 365^k is the number of ways to assign them when duplication is
allowed; in general n is the number of possible days (hash outputs) and k is the number of
trials. The probability that there is at least one duplicate is:

    P(one or more duplicates) = 1 − (n!/(n − k)!) / n^k

It follows:

    strong collision resistance: 1 − (n!/(n − k)!) / n^k   >   weak collision resistance: 1 − (1 − 1/n)^k

Example 29:

For n = 365 days and k people:

    k     P(at least one duplicate)
    2     0.0027
    3     0.0082
    6     0.0405
    20    0.4114
    21    0.4437
    23    0.5073

That means: in a group of 23 people, it is more likely that 2 people have a birthday in common
than that no duplicate birthdays occur.
With n = 2^64 hash outputs and an attacker who can do k = 2^32 operations, the probability of a
hash collision is about 0.39. For k = 2^34 the probability of a hash collision is about 0.99.
Conclusion:
Given an ideal hash function with N possible outputs, an attacker is expected to need about N
guesses to find an input x' that hashes to a particular value (weak):

    H(x') = h

but only about √N guesses to find a pair x, y that collide (strong):

    H(x) = H(y)

This assumes the attacker can store all those hash values while trying the guesses. This is the
reason why hash functions need to have a large output.
Example 30:
SHA-1 uses 160 bits of output and has been broken: there is a way to find a collision with only
about 2^51 operations.
SHA-2 uses 256 or 512 bits of output. For an ideal hash function, this would be big enough to
defeat any realistic attacker.
Example 31:
This is an example of finding a hash collision in Python, assuming we have already implemented
a hash function that uses CTR mode to encrypt and then xor's together all the blocks of
ciphertext that come out, using that as the hash output:
from Crypto.Cipher import AES
from copy import copy

def find_collision(message):
    new_message = copy(message)
    def swap_blocks(block_a, block_b, cblock_a, cblock_b):
        # returns the 2 blocks necessary for the message text
        # in order to swap 2 blocks in the cipher text
        eblock_a = xor_bits(block_a, cblock_a)
        eblock_b = xor_bits(block_b, cblock_b)
        new_block_a = xor_bits(eblock_a, cblock_b)
        new_block_b = xor_bits(eblock_b, cblock_a)
        return new_block_a, new_block_b
    block_size, block_enc, key, ctr = hash_inputs()
    cipher = counter_mode(message, key, ctr, block_size, block_enc)
    # swap blocks 0 and 1 (note: the cipher blocks must come from
    # the ciphertext, not the message)
    block_a = get_block(message, 0, block_size)
    block_b = get_block(message, 1, block_size)
    cblock_a = get_block(cipher, 0, block_size)
    cblock_b = get_block(cipher, 1, block_size)
    new_block_a, new_block_b = swap_blocks(block_a, block_b, cblock_a, cblock_b)
    new_message[0:block_size] = new_block_a
    new_message[block_size:2*block_size] = new_block_b
    return new_message

def test():
    messages = ["Trust, but verify. -a signature phrase of President Ronald Reagan",
        "The best way to find out if you can trust somebody is to trust them. (Ernest Hemingway)",
        "If you reveal your secrets to the wind, you should not blame the wind for revealing them to the trees. (Khalil Gibran)",
        "I am not very good at keeping secrets at all! If you want your secret kept do not tell me! (Miley Cyrus)",
        "This message is exactly sixty four characters long and no longer"]
    for m in messages:
        m = string_to_bits(m)
        new_message = find_collision(m)
        if not check(m, new_message):
            print "Failed to find a collision for %s" % m
            return False
    return True


# Below are some functions


# that you might find useful

BITS = ('0', '1')
ASCII_BITS = 8

def display_bits(b):
    """converts list of {0, 1}* to string"""
    return ''.join([BITS[e] for e in b])

def seq_to_bits(seq):
    return [0 if b == '0' else 1 for b in seq]

def pad_bits(bits, pad):
    """pads seq with leading 0s up to length pad"""
    assert len(bits) <= pad
    return [0] * (pad - len(bits)) + bits

def convert_to_bits(n):
    """converts an integer n to bit array"""
    result = []
    if n == 0:
        return [0]
    while n > 0:
        result = [(n % 2)] + result
        n = n / 2
    return result

def string_to_bits(s):
    def chr_to_bit(c):
        return pad_bits(convert_to_bits(ord(c)), ASCII_BITS)
    return [b for group in
            map(chr_to_bit, s)
            for b in group]

def bits_to_char(b):
    assert len(b) == ASCII_BITS
    value = 0
    for e in b:
        value = (value * 2) + e
    return chr(value)

def list_to_string(p):
    return ''.join(p)

def bits_to_string(b):
    return ''.join([bits_to_char(b[i:i + ASCII_BITS])
                    for i in range(0, len(b), ASCII_BITS)])

def pad_bits_append(small, size):
    # as mentioned in lecture, simply padding with
    # zeros is not a robust way of padding
    # as there is no way of knowing the actual length
    # of the file, but this is good enough
    # for the purpose of this exercise
    diff = max(0, size - len(small))
    return small + [0] * diff

def xor_bits(bits_a, bits_b):
    """returns a new bit array that is the xor of bits_a and bits_b"""
    return [a ^ b for a, b in zip(bits_a, bits_b)]

def bits_inc(bits):
    """modifies bits array in place to increment by one

    wraps back to zero if bits is at its maximum value (each bit is 1)
    """
    # start at the least significant bit and work towards
    # the most significant bit
    for i in range(len(bits) - 1, -1, -1):
        if bits[i] == 0:
            bits[i] = 1
            break
        else:
            bits[i] = 0

def aes_encoder(block, key):
    block = pad_bits_append(block, len(key))
    # the pycrypto library expects the key and block in 8 bit ascii
    # encoded strings so we have to convert from the bit array
    block = bits_to_string(block)
    key = bits_to_string(key)
    ecb = AES.new(key, AES.MODE_ECB)
    return string_to_bits(ecb.encrypt(block))

def get_block(plaintext, i, block_size):
    """returns the ith block of plaintext"""
    start = i * block_size
    if start >= len(plaintext):
        return None
    end = min(len(plaintext), (i+1) * block_size)
    return pad_bits_append(plaintext[start:end], block_size)

def get_blocks(plaintext, block_size):
    """iterates through the blocks of blocksize"""
    i = 0
    while True:
        start = i * block_size
        if start >= len(plaintext):
            break
        end = (i+1) * block_size
        i += 1
        yield pad_bits_append(plaintext[start:end], block_size)

def _counter_mode_inner(plaintext, key, ctr, block_enc):


eblock = block_enc(ctr, key)
cblock = xor_bits(eblock, plaintext)
bits_inc(ctr)
return cblock

def counter_mode(plaintext, key, ctr, block_size, block_enc):
    """Return the counter mode encoding of plaintext"""
    cipher = []
    # break the plaintext into blocks
    # and encode each one
    for block in get_blocks(plaintext, block_size):
        cblock = _counter_mode_inner(block, key, ctr, block_enc)
        cipher.extend(cblock)
    return cipher

def counter_mode_hash(plaintext):
    block_size, block_enc, key, ctr = hash_inputs()
    hash_ = None
    for block in get_blocks(plaintext, block_size):
        cblock = _counter_mode_inner(block, key, ctr, block_enc)
        if hash_ is None:
            hash_ = cblock
        else:
            hash_ = xor_bits(hash_, cblock)
    return hash_

def hash_inputs():
    block_size = 128
    block_enc = aes_encoder
    key = string_to_bits("Vs7mHNk8e39%CXeY")
    ctr = [0] * block_size
    return block_size, block_enc, key, ctr

def _is_same(bits_a, bits_b):
    if len(bits_a) != len(bits_b):
        return False
    for a, b in zip(bits_a, bits_b):
        if a != b:
            return False
    return True

def check(message_a, message_b):
    """return True if message_a and message_b are
    different but hash to the same value"""
    if _is_same(message_a, message_b):
        return False
    hash_a = counter_mode_hash(message_a)
    hash_b = counter_mode_hash(message_b)
    return _is_same(hash_a, hash_b)

2.9 Strong Passwords


Any web application that can send you your password (in cleartext)
is doing something very wrong. The goal is that even if someone gains
access to the database and reads all the username and password
information, they still cannot break into your account.
Bad ideas for storing usernames and passwords:

Store usernames and passwords in cleartext.

Generate a random key k ∈ {0, 1}^n and store each password as E_k(password) using
CFB with s = 8. This is a bad idea because:

It reveals the length of the password: the stored output of the encryption has the
same length as the password. Any revealed information about the password is bad,
and short passwords are easier to break.
Solution: use a hash function, so the size of the output does not depend on the
size of the input. No matter how long the password is, the output length is always
the same.
It reveals if 2 users have the same password.
If k is compromised, it reveals all passwords: we need the key k to decrypt, and
the server needs the function E_k on every check to verify passwords. So the
program running on the server that checks the passwords needs this key all the
time, and if the password file is compromised, the key is likely compromised as
well, as it is available in memory and stored with the program.
Solution: don't decrypt the stored password to check whether it is correct. Only
recompute some function of the entered password and compare that with what is
stored. Then there is no reason to have a key stored at all.

A slightly better idea is:

For each user, store:
E^n_password(0), which means the encryption of 0 is done n times with the password as
the key. There is no key kept secret on the server, and encrypting twice doubles the
work to check a password (checking that the result is 0). This scales up the attacker's
work more than the checking work. Unix uses n = 25: x = E^25_password(0). Unix worked
with DES, which uses a 56-bit key, so longer passwords are cut off after 8 characters.
Due to the restrictions of this scheme (only 8 characters, and only upper- and lowercase
characters and numbers allowed) there are only 26 + 26 + 10 = 62 possible characters.
This means there are only 62^8 possible passwords (62^8 < 2^48).
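The size of this password space is easy to sanity-check (a quick arithmetic check, not part of the original lecture):

```python
# Unix crypt truncated passwords to 8 characters; assuming an alphabet
# of 26 + 26 + 10 = 62 characters, the password space is 62^8.
alphabet = 26 + 26 + 10      # upper, lower, digits
passwords = alphabet ** 8    # 8-character passwords
print(passwords)             # 218340105584896
print(passwords < 2 ** 48)   # True: smaller than even 2^48
```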

2.10 Dictionary Attacks


Definition Dictionary Attacks
An attacker can pre-compute a dictionary: pre-compute

E^n_w(0)

(Unix: n = 25) for a set of common passwords w, store those pre-computed values, and
then for every new password file that is compromised check all the passwords against
this list. There is a good likelihood of finding some accounts that can be broken into.
Making Dictionary Attacks Harder:

Train (coerce) users to pick better passwords (doesn't work well)

Protect encrypted passwords better

add salt

2.11 Salted Password Scheme


The password file includes

username (userID)

salt

encrypted password

Definition Salt
Salt is a set of random bits (Unix: 12 bits). They don't need to be kept secret. The
salt bits are different for each user. The encrypted password is the result of hashing
the salt concatenated with the user's password:

x = E^n_{salt||password}(0)

That means: as long as the salts are different, even if the passwords are the same, the
encrypted passwords will be different (Unix: DES^25_{salt||password}(0)). An attacker
who compromises the password file does not have much harder work for a single account,
because the salt is not kept secret: the attacker tries possible passwords concatenated
with that salt and looks for one that matches the stored hash value. But an attacker
running an offline dictionary attack has 2^n times more work (n the length of the salt
in bits), because the dictionary has to be pre-computed for all the different salt
values to be able to look for password matches. Salting adds a lot of value for very
little cost: just some extra bits, which don't need to be kept secret, stored in the
password file.
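A minimal sketch of a salted scheme, using SHA-256 from Python's standard library in place of iterated DES (the function names here are illustrative, not from the lecture):

```python
import hashlib
import os

def store_password(password):
    """Return the (salt, digest) pair kept in the password file."""
    salt = os.urandom(12)  # the salt need not be secret, just per-user
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def check_password(salt, stored_digest, attempt):
    """Recompute the hash of salt || attempt and compare."""
    return hashlib.sha256(salt + attempt.encode()).hexdigest() == stored_digest

salt, digest = store_password("hunter2")
print(check_password(salt, digest, "hunter2"))  # True
print(check_password(salt, digest, "wrong"))    # False
```

In practice the hash would also be deliberately slowed down by iterating it, which is exactly the role the repeated encryption E^n plays above.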

2.12 Hash Chain, S/Key Password Scheme


Definition Hash Chain
A hash chain is the result of hashing a value (secret) s ∈ {0, 1}^n over and over again:
computing the hash of that secret, H(s), then hashing again, H(H(s)), and so on.
Example 32:
A user U registers with a webpage (server S), gets the hash function, and computes
H(s), H(H(s)), . . .. The only thing the server stores is the last value of this hash
chain: for hashing n times, the server only stores H^n(s).
The first login protocol works as follows (assume n = 100):

1. U uses the hash function to compute the hash chain up to H^(n-1)(s) = H^99(s) from
his secret s.

2. U sends p = H^99(s) to S

3. S checks if H(p) = x, where x = H^100(s) is the stored value

4. S sets the next expected hash chain endpoint to n − 2 = 98

U                                      S
computes H(s), H(H(s)), . . .          stores x = H^100(s)
         -------- p ------->
                                       checks H(p) = x, then x <- p

The next login requires the user to send H^98(s): the hash chain is consumed backwards,
and hashes can only be verified in one direction. The hash is easy to compute in the
forward direction and hard to invert (the valuable property of the hash function). If
someone just knows x, or intercepts p, they know the previous password value and can
easily compute H^100(s), H^101(s), . . ., but H^98(s) is hard to compute.
Definition S/Key Password Scheme
In the S/Key password scheme the server generates the hash chain of length n:

H^n(s), H^(n-1)(s), . . . , H(s)

The user prints out the received hash values.

The server only stores the last entry in the hash chain, so what is stored on the server
cannot be used to log in. The downside is that the user has to carry around a list of
passwords, consult the list on every login, use the correct password, cross it off, and
use the next password on the next login. The user also has to get a new password list
after using the last password.
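The chain construction and the login check can be sketched in a few lines, using SHA-256 as the hash H (the helper names are illustrative):

```python
import hashlib

def H(x):
    return hashlib.sha256(x).digest()

def hash_chain(secret, n):
    """Return [H(s), H(H(s)), ..., H^n(s)]."""
    chain, v = [], secret
    for _ in range(n):
        v = H(v)
        chain.append(v)
    return chain

n = 100
chain = hash_chain(b"my secret", n)
x = chain[-1]          # the server stores only H^100(s)

p = chain[-2]          # the user sends p = H^99(s)
assert H(p) == x       # the server verifies the login
x = p                  # the next login will require H^98(s)
```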

3 Key Distribution
3.1 Key Distribution
In symmetric ciphers all participating parties have the same key to do encryption and
decryption. The keys may be identical or there may be a simple transformation to go
between the two keys. The important property that makes a cipher symmetric is that the
same key is used for encrypting and decrypting. If 2 or more parties want to talk to
each other, they first have to agree on a secret key. That means there has to be a way
for the parties to communicate this key without exposing it. Earlier this was done with
a codebook (physically distributed to the endpoints), which is not practical. Nowadays
there are different ways to establish a secure key.

3.2 Pairwise Shared Keys


Definition Pairwise Shared Key
A pairwise shared key is a secret key only used by 2 parties to communicate with each
other. That means every party has a different key for every other party. This works with
a small number of people; otherwise it gets pretty expensive, due to the number of
different keys.
Example 33:
Consider 4 parties: A, B, C, D. Then

A has a key with B, C, D

B has a key with C, D

C has a key with D.

For 4 parties 6 keys are needed.


Theorem 3.2.1:
The number of pairwise keys for n people is:

sum_{i=1}^{n-1} (n − i) = (n − 1) + (n − 2) + · · · + (n − (n − 1)) = sum_{i=1}^{n-1} i = n(n − 1)/2    (3.3)

Proof:
Consider n people. The first person has to make a secret key with everyone except
himself, so in the first step n − 1 keys are needed. The second person has to make a
secret key with everyone except himself and the first person (that key already exists),
so in the second step n − 2 keys are needed, and so on until the penultimate person has
to make a key only with the n-th person. The last person already has a key with every
other party.
Reading the summation backwards as 1 + 2 + 3 + . . . + (n − 2) + (n − 1) we get a much
simpler summation.
With induction it is simple to show that sum_{i=1}^{n-1} i = n(n − 1)/2:
Initial step, n = 1:

sum_{i=1}^{0} i = 0 = (1 · (1 − 1))/2

Assume the formula (3.3) is true for n; then for n + 1:

sum_{i=1}^{(n+1)-1} i = sum_{i=1}^{n} i = 1 + 2 + · · · + (n − 1) + n = n(n − 1)/2 + n
                      = (n^2 − n + 2n)/2 = (n^2 + n)/2
                      = ((n + 1) · n)/2 = ((n + 1) · ((n + 1) − 1))/2

Example 34:
Assume the network has 100 people using pairwise shared keys. Then the number of needed
keys is

sum_{i=1}^{99} i = (100 · 99)/2 = 4950

For a group of 10^9 people we would need approximately 5.0 · 10^17 keys to have pairwise
keys.
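The formula can be checked directly:

```python
def pairwise_keys(n):
    """Number of pairwise shared keys for n parties: n(n - 1)/2."""
    return n * (n - 1) // 2

print(pairwise_keys(4))      # 6, as in Example 33
print(pairwise_keys(100))    # 4950, as in Example 34
print(pairwise_keys(10**9))  # 499999999500000000, about 5 * 10^17
```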

3.3 Trusted Third Party


Definition Trusted Third Party
A trusted third party is some trustworthy place TP which has a shared secret with each
individual in the network.
Example 35:
Assume 2 parties A, B in the network, and a trustworthy place TP has a secret key with
each party: one secret key kA with A and one secret key kB with B. When A and B want to
communicate, the protocol looks as follows:

                 TP (knows kA, kB; generates kAB)
                /                             \
      E_kA(B || kAB)                   E_kB(A || kAB)
              v                               v
              A  <------ E_kAB(m) ------>  B

This means:

1. A sends a request to TP for a communication with B.

2. TP generates a random key kAB ∈ {0, 1}^n

3. TP sends A the key kAB concatenated with B's name (userID), encrypted with the shared
key kA, and sends B the key kAB concatenated with A's name (userID), encrypted with the
shared key kB.

4. A and B can communicate with the key kAB.

The problems with a trusted third party are:

TP can read all messages between A and B: because TP generates the key kAB, TP can read
every intercepted message on the insecure channel between A and B.

TP can impersonate every customer: TP can generate a fake conversation and make it seem
like A is communicating with B.

An attacker may be able to tamper with E_kA(B || kAB) to steal kAB (depending on the
encryption and mode of operation used).

So the TP is here more a theoretical construct than a practical implementation.

3.4 Merkle's Puzzles


Definition Merkle's Puzzles
Merkle's Puzzles was the first key exchange protocol in which the parties do not share a
prior secret with one another or with a third (trusted) party.
The idea behind Merkle's Puzzles is that 2 parties A, B want to establish a shared
secret key. First the parties agree on some parameters:

Encryption function: E

Security parameters: s, n, N with s < n
The protocol works as follows:

40
1. A creates N secrets:
s' = [s'_1, s'_2, . . . , s'_N], each s'_i ∈ {0, 1}^s
which means every secret is an s-bit long random number.
2. Then A creates a set of N puzzles:
p = [E_{s'_1 || 0^(n-s)}("Puzzle: 1"), E_{s'_2 || 0^(n-s)}("Puzzle: 2"), . . . , E_{s'_N || 0^(n-s)}("Puzzle: N")]
which means the i-th puzzle is encrypted with the i-th secret concatenated with enough
0s to reach the required key length (in AES every key has at least 128 bits). The
message is the word "Puzzle:" followed by the number of the corresponding secret.
3. A shuffles the puzzle set and sends all N puzzles to B.
4. B randomly picks one of the puzzles m and does a brute-force key search. That is, B
tries
D_{g || 0^(n-s)}(m)
with a guessed key g ∈ {0, 1}^s and the known decryption function (inverse of E).

5. Eventually B is going to find a guess that decrypts to

"Puzzle: p"
where 1 ≤ p ≤ N is the number of the puzzle.
6. So B knows the guess g and the puzzle number p. For longer keys it is better to
encrypt
"Puzzle: p, Key: k"
where k is a new random key of any length associated with that secret.
7. A keeps track of the keys in a list
k' = [k_1, k_2, . . . , k_N]

8. If B decrypts a puzzle, B acquires the puzzle number and the key.
9. B sends the number of the puzzle back to A.
10. A does a lookup in the key list and figures out which key was in the puzzle that B
decrypted.
The protocol looks as follows:
A                                           B
creates secrets s' and puzzles p
         ---- p (shuffled) ---->
                                            picks a puzzle m,
                                            brute-force key search -> p, k
         <-------- p -----------
looks up k_p
         <------ E_{k_p}(m) ---->
41
Merkle's Puzzles are an impractical idea, as A has to create a lot of secrets and
puzzles and needs a good bandwidth to send this information to B, so that an attacker
cannot get the key too easily.
Theorem 3.4.1:
Assuming perfect encryption and random keys, an attacker who intercepts all transmitted
information is expected to need N/2 times as much work as B to find the key.

Proof:
Since B randomly picks a puzzle out of the shuffled set p, solves it with a brute-force
key search, and sends A only the number of the puzzle back, the attacker does not know
which of the encrypted puzzles corresponds to this number. The attacker has to try to
break all of the encrypted puzzles and is expected to find the one that B picked after
trying about N/2 of them. 
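A toy sketch of the whole protocol; a hash-derived keystream stands in for the encryption function E, and the 2-byte weak keys keep B's brute-force search instant (all names and parameters here are illustrative, not from the lecture):

```python
import hashlib
import os
import random

def keystream(key, length):
    # toy "cipher": xor with SHA-256 of the key (illustration only)
    return hashlib.sha256(key).digest()[:length]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_puzzles(N, s=2):
    """A creates N puzzles; each hides (puzzle number, key)
    under a weak s-byte key that B can brute-force."""
    keys, puzzles = [], []
    for i in range(N):
        keys.append(os.urandom(16))                       # k_i
        plain = b"Puzzle:" + i.to_bytes(4, "big") + keys[i]
        weak = os.urandom(s)                              # s'_i
        puzzles.append(xor(plain, keystream(weak, len(plain))))
    return keys, puzzles

def solve_puzzle(puzzle, s=2):
    """B tries all 2^(8s) weak keys until one decrypts sensibly."""
    for g in range(2 ** (8 * s)):
        plain = xor(puzzle, keystream(g.to_bytes(s, "big"), len(puzzle)))
        if plain.startswith(b"Puzzle:"):
            return int.from_bytes(plain[7:11], "big"), plain[11:]

keys, puzzles = make_puzzles(50)
i, k = solve_puzzle(random.choice(puzzles))  # B picks one puzzle
assert k == keys[i]  # A looks up puzzle number i and gets the same key
```

An eavesdropper who only sees the shuffled puzzles and the number i that B sends back would have to break about N/2 puzzles to find k.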

3.5 Diffie-Hellman Key Exchange


Definition Properties of Multiplication
The properties of multiplication (used in DHKE) are:
Commutativity: for all a, b ∈ Z, n ∈ N:

a · b mod n = b · a mod n

Commutativity of powers (follows directly from commutativity): for all a ∈ Z and
b, c ∈ N, n ∈ N:

(a^b)^c mod n = a^(bc) mod n = a^(cb) mod n = (a^c)^b mod n

Note: the properties remain valid under the mod operation.
Definition Primitive Root
For q ∈ P consider the multiplicative group ((Z/qZ)^*, ·) (hence: the operation is
multiplication). We know that (Z/qZ)^* has q − 1 elements, namely
(Z/qZ)^* = {1, 2, . . . , q − 1}
We say that a number g ∈ Z is a primitive root of q if and only if
(Z/qZ)^* = {g^i mod q | i = 1, 2, . . . , q − 1}

Example 36:
Consider the prime number q = 7; then g = 3 is a primitive root of q because

g^1 = 3      3 ≡ 3 mod 7
g^2 = 9      9 ≡ 2 mod 7
g^3 = 27     27 ≡ 6 mod 7
g^4 = 81     81 ≡ 4 mod 7
g^5 = 243    243 ≡ 5 mod 7
g^6 = 729    729 ≡ 1 mod 7

Every number x ∈ {1, 2, 3, 4, 5, 6} occurs once, so 3 is a primitive root of 7.
Theorem 3.5.1:
If p is a prime number and p > 3, then there are always at least 2 primitive roots.
Example 37:
The Python code for finding all primitive roots of a prime number n may look as follows:

def square(x):
    return x*x

def mod_exp(a,b,q): #recursive definition
    if b==0: #base case: a^0=1
        return 1
    if b%2==0:
        return square(mod_exp(a,b/2,q))%q
    else:
        return a*mod_exp(a,b-1,q)%q

def primitive_roots(n):
    roots=[]
    def is_primitive_root(r):
        s=set()
        for i in range(1,n):
            t=mod_exp(r,i,n)
            if t in s:
                return False
            s.add(t)
        return True
    for i in range(2,n):
        if is_primitive_root(i):
            roots.append(i)
    return roots

Definition Diffie-Hellman Key Exchange (DHKE)
The Diffie-Hellman key exchange allows 2 parties without any prior agreement to
establish a shared secret key.
The protocol for the DHKE with 2 parties A, B is:
1. A picks some values q (large prime number) and g (primitive root of q) and sends them
to B.
2. A selects a random value x_A ∈ {0, 1}^n and B selects a random value x_B ∈ {0, 1}^n
3. A computes y_A = g^(x_A) mod q
4. B computes y_B = g^(x_B) mod q
5. A and B exchange their computed values
6. A computes the key k_AB = y_B^(x_A) mod q
7. B computes the key k_BA = y_A^(x_B) mod q
This looks as follows:
A                                   B
picks q, g, x_A                     picks x_B
         ---- q, g ---->
y_A = g^(x_A) mod q                 y_B = g^(x_B) mod q
         ---- y_A ---->
         <--- y_B -----
k_AB = y_B^(x_A) mod q              k_BA = y_A^(x_B) mod q
First the 2 parties A, B agree on some values: q, a large prime number, and g, a
primitive root of q. Then A and B each select a random value of length n. A computes y_A
and B computes y_B as shown in the protocol above. Then A and B exchange their computed
values and compute the keys k_AB and k_BA.
Theorem 3.5.2 (Correctness Property of DHKE)
In the DHKE protocol:
k_AB = k_BA

Proof:
A:
k_AB = y_B^(x_A) mod q
     = (g^(x_B))^(x_A) mod q
     = g^(x_B x_A) mod q

B:
k_BA = y_A^(x_B) mod q
     = (g^(x_A))^(x_B) mod q
     = g^(x_A x_B) mod q

Due to the commutativity of powers in multiplication,
g^(x_B x_A) = g^(x_A x_B)
Theorem 3.5.3 (Security Property of DHKE (against passive attacker))
A passive (listen-only) attacker gets the public values q, g, y_A, y_B. It is possible
to reduce some known hard problems to the Diffie-Hellman problem. This shows that anyone
who could solve the Diffie-Hellman problem efficiently would also be able to solve some
problem that we already know to be hard. Breaking the Diffie-Hellman problem efficiently
requires a way to compute discrete logarithms efficiently.
The security of the scheme depends on it being difficult to solve
a^x ≡ b mod n
for x, given a, b, n (the discrete logarithm problem).
If the modulus n is not prime, the Diffie-Hellman scheme would still be correct (the 2
participants produce the same key) but might not be secure, because for some non-prime
moduli calculating the discrete logarithm is not hard.

Theorem 3.5.4 (Security Property of DHKE (against active attacker))

The DHKE protocol is not secure against an active attacker (one who can change or inject
messages). An attacker can change the values of y_A and y_B (e.g. if the attacker
changes y_A and y_B to 1, then each computed key will be 1 raised to a secret exponent,
which is still 1). Or the attacker M can see the values q, g, y_A and establish a fake
secure connection with A using a key k_AM and with B using a key k_BM. The attacker then
sits in the middle:

A: c = E_kAM(m)  ->  M: m = D_kAM(c), changes m to m', c' = E_kBM(m')  ->  B: m' = D_kBM(c')

This means: A sends an encrypted message to M thinking it is B. M can decrypt the
message, change it, and forward it to B, who thinks he receives a message from A.

3.6 Discrete Logarithm Problem
Definition Continuous Logarithm
The solution of the equation for known a, b ∈ R
a^x = b
is
x = log_a b
which can be computed efficiently.
Example 38:
log_2 8 = 3 because 2^3 = 8.
Definition Discrete Logarithm
The solution of the equation for known a, b, n ∈ N
a^x ≡ b mod n
is
x = dlog_a b
where dlog is the discrete logarithm. This turns out to be a really hard problem when n
is a large prime number. It is not clear that the dlog always exists (for certain
choices of a, b, n it does not exist).
Here a is a generator, which means
a^1, a^2, . . . , a^(n-1)
is a permutation of 1, 2, 3, . . . , n − 1 ∈ Z_n (that means: every number occurs
exactly once).
Conclusion:
Given a power, it is hard to find the corresponding entry in a list like the one in
Example 36 (for larger values than 7). The fastest known solutions need exponential time
(not polynomial). That means the only way to solve this is to try all possible powers
until you find the one that works. You can do a little better by trying powers in a
clever way (and excluding some powers), but there is no better way than this search,
which is exponential in the size of n (linear in the value of n).
If it were possible to compute dlog efficiently, then the attacker, who knows
q, g, y_A, y_B, would compute

k = y_B^(dlog_g y_A mod q) mod q

where dlog_g y_A mod q = x_A and k is the key.
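The exponential-time search described above can be sketched directly:

```python
def dlog(a, b, n):
    """Brute-force discrete log: the x with a^x = b mod n.
    Linear in the value of n, i.e. exponential in its size."""
    for x in range(1, n):
        if pow(a, x, n) == b:
            return x
    return None

# from Example 36: 3 is a primitive root of 7 and 3^4 = 81 = 4 mod 7
print(dlog(3, 4, 7))  # 4
```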

3.7 Decisional Diffie-Hellman Assumption


Definition Decisional Diffie-Hellman Assumption
Assuming the discrete logarithm is hard, it is believed (but not provable) that breaking
Diffie-Hellman requires solving the discrete logarithm efficiently. The security of
Diffie-Hellman relies on a strong assumption: the Decisional Diffie-Hellman Assumption
(a bit circular).
The Decisional Diffie-Hellman Assumption, with the former notation x = x_A, y = x_B, is:
k = g^(xy) mod q
is indistinguishable from random, given q, g, g^x, g^y (the intercepted messages).
This assumption is not true for certain values.

3.8 Implementing Diffie-Hellman
The first issue is to do fast modular exponentiation:
g^(x_A) mod q
where x_A is some large random number, q is a large prime number, and g is a primitive
root of q.
Theorem 3.8.1:
Computing a^n with n ∈ N can be done using O(log n) multiplications. That means modular
exponentiation scales linearly in the size (bits to represent) of the power. It follows
that we can make the power n a very large number and still compute a^n quickly.

Example 39:
Using x^(2a) = (x^a)^2, 2^20 can be computed with 5 multiplications:

2^20 = (2^10)^2 = ((2^5)^2)^2 = ((2^4 · 2)^2)^2 = (((2^2)^2 · 2)^2)^2 = ((((2 · 2)^2)^2 · 2)^2)^2

As seen, 5 multiplications (counting each squaring as one multiplication) are needed to
compute 2^20.


Example 40:
A fast modular exponentiation that takes 3 values a, b, q and returns a^b mod q, with a
running time that is linear in the size of b, is given by this Python code:
def square(x):
    return x*x

def mod_exp(a,b,q): #recursive definition
    if b==0: #base case: a^0=1
        return 1
    if b%2==0:
        return square(mod_exp(a,b/2,q))%q
    else:
        return a*mod_exp(a,b-1,q)%q

Theorem 3.8.2:
The fast modular exponentiation technique used in these notes suffers from an important
security flaw: the time it takes to execute depends on the value of the power, which may
be secret. This means an attacker who can measure precisely how long the encryption
takes can learn something about the key.

Example 41:
Assume mod and multiplication cost 0 time units when multiplying by 1 or 2, and 1 time
unit otherwise.
In binary, telling if a number is odd or even depends on the last digit, e.g.

10 = 1010   last digit 0   even
11 = 1011   last digit 1   odd

and dividing by 2 is a shift right:

10100 (20)  -> :2 ->  01010 (10)  -> :2 ->  00101 (5)

In the modular exponentiation routine:
If the exponent is even, we divide it by 2; this costs 1 multiplication.
If the exponent is odd, we subtract 1 from it (which makes it even) and then divide by
2, which costs 2 multiplications in total.
Writing the exponent in binary, it follows:
For every 1 in the exponent (in binary) we do 2 multiplications.
For every 0 in the exponent (in binary) we do 1 multiplication.
Of the following 4 exponents, the most expensive is:

dec.   binary          costs
1023   1111111111      10 · 2 = 20
1025   10000000001     2 · 2 + 9 · 1 = 13
4096   1000000000000   1 · 2 + 12 · 1 = 14
4097   1000000000001   2 · 2 + 11 · 1 = 15

Therefore 1023 is here the most expensive exponent.
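The bit-counting cost model above can be written out as a short sketch:

```python
def cost(exponent):
    """Cost of mod_exp under the toy model: every 1 bit costs 2
    multiplications, every 0 bit costs 1."""
    return sum(2 if bit == "1" else 1 for bit in bin(exponent)[2:])

for e in (1023, 1025, 4096, 4097):
    print(e, cost(e))  # 1023 -> 20, 1025 -> 13, 4096 -> 14, 4097 -> 15
```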
Example 42:
Suppose A and B execute the Diffie-Hellman protocol, but A picks a value for g that is
not a primitive root of q. Then
the generated key would be more vulnerable to an eavesdropper;
the number of possible keys that could be generated would be smaller than q.
Remember that the Diffie-Hellman protocol relies upon the difficulty of discrete logs.
If g is a primitive root of q then
|{g^1, g^2, g^3, . . .} mod q| = q − 1
If g is not a primitive root of q then
|{g^1, g^2, g^3, . . .} mod q| < q − 1

This implies that if g is not a primitive root of q, it is easier to solve the discrete
log, which means the generated key would be more vulnerable to an eavesdropper, and also
that the number of possible keys that could be generated would be smaller than q.

3.9 Finding Large Primes


Theorem 3.9.1:
There are infinitely many prime numbers.

Proof (by contradiction - Euclid):

Assume there is a finite set of primes:

P = {p_1, p_2, . . . , p_n}

with |P| = n.
Compute the product

p = p_1 · p_2 · · · p_n = prod_{i=1}^{n} p_i

Then

p' = p + 1 = p_1 · p_2 · · · p_n + 1 = prod_{i=1}^{n} p_i + 1

is not a prime number, due to the assumption that P = {p_1, p_2, . . . , p_n} are the
only primes and p' ∉ P.
Since p' is not prime, it must be the product of a prime p_i ∈ P and an integer q, so p'
is a composite number:
p' = p_i · q
But
p' = p_1 · p_2 · · · p_n + 1 = p_i · q
Dividing by p_i results in

(p_1 · p_2 · · · p_n + 1)/p_i = q  =>  p_1 · p_2 · · · p_{i-1} · p_{i+1} · · · p_n + 1/p_i = q

Since q is an integer and p_1 · p_2 · · · p_{i-1} · p_{i+1} · · · p_n is an integer, and
p_i ∈ P means p_i ≠ 1, we have 1/p_i ∉ Z. It follows that

p_1 · p_2 · · · p_{i-1} · p_{i+1} · · · p_n + 1/p_i ∉ N

but q ∈ N. Contradiction.
Thus the assumption is false; it follows that there are infinitely many primes. 
Definition Asymptotic Law of Distribution of Prime Numbers (Density of Primes)
The density of primes is the number N of primes that occur below an upper bound x:

N ≈ x / ln x    (3.4)

Therefore, assuming the set of all prime numbers P, the probability that a number x is
prime is given by:

P(x ∈ P) ≈ 1 / ln x

Theorem 3.9.2:
The expected number of guesses N needed to find an n-decimal-digit prime number is

N ≈ ln(10^n) / 2    (3.5)

Proof:
This follows directly from the asymptotic law of distribution of prime numbers,

P(x ∈ P) ≈ 1 / ln x

The probability that a randomly selected odd integer in the interval from 10^y to 10^x
is prime is, using (3.4), approximately

number of primes / number of odd integers ≈ (10^x / ln 10^x − 10^y / ln 10^y) / ((10^x − 10^y)/2)
                                          = 2/(10^(x-y) − 1) · (10^(x-y)/(x ln 10) − 1/(y ln 10))

Example 43:
The probability that a 100-digit number is prime is approximately

2/9 · (10/(100 ln 10) − 1/(99 ln 10)) ≈ 0.008676

and on average

1/0.008676 ≈ 115

odd 100-digit numbers would be tested before a prime number is found.
Or directly from (3.5):

ln(10^100)/2 ≈ 115
Example 44:
A Python procedure for finding a large prime number near some randomly selected large
number x may look as follows:
def find_prime_near(x):
    assert x%2==1 #only odd numbers
    while True:
        if is_prime(x):
            return x
        x=x+2 #skip even numbers

def is_prime(x):
    for i in range(2,x):
        if x%i==0: #test divisibility
            return False
    return True

but the primality test is exponential in the size of x, so this wouldn't work
efficiently for large x; the prime test is very naive.

3.10 Faster Primality Test


Definition Faster Primality Test, Probabilistic Test
A faster primality test uses a probabilistic test:

x passes the test  =>  P(x ∉ P) ≤ 2^(-k)

That means if x passes the primality test, then the probability that x is composite (not
prime) is equal to or less than some value 2^(-k). Normally k = 128.

3.11 Fermat's Little Theorem


Definition Fermat's Little Theorem
A useful property of prime numbers is Fermat's little theorem: for p ∈ P and a ∈ N with
1 ≤ a < p,
a^p ≡ a mod p
or, if a is not a multiple of p (a ≠ n · p for all n ∈ Z), then
a^(p-1) ≡ 1 mod p

With this it is easy to try many values of a, and if the relation always holds, p is
probably prime. But there are composite numbers, the Carmichael numbers, for which
a^(p-1) ≡ 1 mod p also holds for all a relatively prime to p. Also, this test is not
fast enough to try all a with 1 ≤ a < p, because that running time is exponential in the
size of p.
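A sketch of a Fermat-based probabilistic test (the function name is illustrative); note that a Carmichael number such as 561 = 3 · 11 · 17 fools the base-2 check:

```python
import random

def fermat_test(p, trials=20):
    """Probably-prime test: if p is prime, a^(p-1) = 1 mod p
    for every a with 1 <= a < p that is coprime to p."""
    if p < 4:
        return p in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, p - 1)
        if pow(a, p - 1, p) != 1:
            return False  # a is a witness: p is composite
    return True  # probably prime (unless p is a Carmichael number)

print(fermat_test(101))  # True
print(fermat_test(100))  # False
print(pow(2, 560, 561))  # 1, although 561 is composite
```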

3.12 Rabin-Miller Test


Definition Rabin-Miller Test
The Rabin-Miller test is a probabilistic prime number test.
Start by guessing an odd number n ∈ N (if n is even, then it is not prime). If n is odd,
then n can be written as
n = 2^t · s + 1
with s odd. Next choose some random a ∈ [1, n) ∩ N. If n ∈ P then

a^s ≡ 1 mod n    (3.6)

or, for some 0 ≤ j < t:

a^(s·2^j) ≡ n − 1 mod n    (3.7)

The big advantage is that it is sufficient to try only a small number of values.
Example 45:
For n < 1373653 it is sufficient to try a = 2 and a = 3.
Theorem 3.12.1:
If some value a satisfies neither (3.6) nor (3.7), then n is composite.
The difference between the Rabin-Miller test and the Fermat test is that the probability
that a composite number passes the test is always less than some constant, and there are
no bad numbers like the Carmichael numbers in the Fermat test.

Example 46:
If the probability that the guess n is composite should be less than 2^(-128), we need
to run the test 64 times with randomly selected a values, because

(2^(-2))^64 = 2^(-128)

Therefore 64 test rounds are sufficient.
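The claim in Example 45 can be cross-checked with a compact, self-contained variant of the test that uses only the bases 2 and 3:

```python
def miller_rabin(n, bases=(2, 3)):
    """Rabin-Miller test; deterministic for n < 1373653 with bases 2, 3."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    s, t = n - 1, 0
    while s % 2 == 0:      # write n - 1 = 2^t * s with s odd
        s //= 2
        t += 1
    for a in bases:
        if a % n == 0:
            continue       # skip bases that are multiples of n (tiny n)
        x = pow(a, s, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(t - 1):
            x = (x * x) % n
            if x == n - 1:
                break
        else:
            return False   # a proves that n is composite
    return True

def naive_is_prime(n):
    return n > 1 and all(n % i for i in range(2, int(n ** 0.5) + 1))

assert all(miller_rabin(n) == naive_is_prime(n) for n in range(2, 3000))
```

Note that the Carmichael number 561, which fools the Fermat test for coprime bases, is correctly rejected here.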


Example 47:
The Rabin-Miller test in Python is:

from random import randrange

#randrange returns a random integer between start and end:
#r=randrange(start,end)
def square(x):
    return x*x

def mod_exp(a,b,q): #recursive definition
    if b==0: #base case: a^0=1
        return 1
    if b%2==0:
        return square(mod_exp(a,b/2,q))%q
    else:
        return a*mod_exp(a,b-1,q)%q

def rabin_miller(n,target=128):
    def calculate_t(n):
        n=n-1
        t=0
        while n%2==0:
            n=n/2
            t=t+1
        return t
    if n%2==0:
        return False
    #n=(2**t)*s+1
    t=calculate_t(n)
    s=(n-1)/(2**t)
    n_test=target/2 #number of tests needed to get the desired probability
    tried = set()
    if n_test>n:
        raise Exception("n is too small")
    for i in range(n_test): #randomly pick a, if not tried before
        while True:
            a=randrange(1,n)
            if a not in tried:
                break
        tried.add(a)
        #2 tests in Rabin-Miller
        #1st test:
        if mod_exp(a,s,n)==1:
            continue
        #2nd test:
        found = False
        for j in range(0,t):
            if mod_exp(a,2**j*s,n)==(n-1):
                found=True
                break
        if not found:
            #failed both tests => composite
            return False
    #if we made it until here, all tests passed
    return True

4 Asymmetric Cryptosystems
Definition Asymmetric Cryptosystems I
In symmetric cryptosystems the key k used for encryption and decryption is the same.
This leads to the key distribution problem between 2 parties, and even after solving
that problem, using the same key for encryption and decryption limits what is possible
to do with the cryptosystem.
With asymmetric cryptosystems there are different keys for encrypting, k_U (public key,
no need to be kept secret), and decrypting, k_R (private key). That means:

m -> E ~~~~~~~ c ~~~~~~~ D -> m
     ^                   ^
     k     symmetric     k
     k_U   asymmetric    k_R

Definition Correctness Property of Asymmetric Cryptosystems

Assume 2 parties A, B. A wants to send a private message to B over an insecure channel.
A and B do not have a shared key, but A has B's public key k_UB, and B has his own
private key k_RB that corresponds to the public key k_UB. So A does:

m -> E ~~~ c ~~~ D -> m
     ^           ^
     k_UB        k_RB

So only someone knowing B's private key k_RB can decrypt the message. The correctness
property here is:
D_kRB(E_kUB(m)) = m
because decrypting with the private key is the inverse of encrypting with the public
key.

4.1 Signatures
Definition Physical Signature
Physical signatures are used to authenticate documents. When something is signed, the
signer takes responsibility for the message in the document.
Physical signatures don't work well for digital documents because the signature can be
cut and pasted, or a signed document can be modified after it is signed.
Definition Digital Signature
A digital signature is a signature on a digital document. The person who signed the document
agreed to the message, and the message cannot be changed without also breaking the signature.
Example 48:
Asymmetric cryptography used by 2 parties:
Assume 2 parties A, B. A signs a message and transmits it to B; B should be able to read
the message and know that this message could only have come from A. This may look as
follows:

m -> D ~~~ c ~~~ E -> m
     ^           ^
     k_RA        k_UA

That means:
A uses his own private key k_RA (which only A should have) to "encrypt" the message m.
Anyone who has A's public key k_UA (including B) can decrypt the message, and if it
decrypts to a reasonable message, B knows that it came from A.

The correctness of this depends on the inverse property. In order for signatures to work
we need to be able to apply the operations in the reverse order and have them still be
inverses:

E_kUA(D_kRA(m)) = m

using the appropriate private key for the decryption operation and the public key for
the encryption operation.
Definition One-Way Function, Trapdoor Function
For building an asymmetric cryptosystem we have to build a one-way (trapdoor) function,
which is a function that is easy to compute in one direction, f(x), and hard to compute
in the other direction, f^(-1)(y):

x  --- f(x) --->      y   (easy)
x  <-- f^(-1)(y) ---  y   (hard)

In asymmetric cryptosystems we want to reveal the easy (forward) direction while the
reverse direction stays hard, but we also want some way to do the reverse (hard)
direction easily if we know some secret.
Definition Asymmetric Cryptosystems II
In an asymmetric cryptosystem it is hard to do the reverse direction unless you have a
key, but revealing the easy way to do the forward (easy) direction does not reveal an
easy way to do the reverse (hard) direction.

4.2 RSA Cryptosystems


Definition RSA Cryptosystem
The RSA cryptosystem, named after Rivest, Shamir and Adleman, is the most famous
asymmetric cryptosystem, used worldwide.
It works as follows:

The public key is a pair of 2 numbers: k_U = (e, n), where n = p · q is the product of 2
large prime numbers and e is the public exponent.

The secret key is a pair of 2 numbers: k_R = (d, n), where n is the same modulus as
before and d is a secret number.

To perform encryption of some message m:

E_kU(m) = m^e mod n

To perform decryption on a ciphertext c using the secret key k_R:

D_kR(c) = c^d mod n
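A toy sketch with small textbook parameters (p = 61, q = 53; far too small for real use). The private exponent d satisfies e · d ≡ 1 mod (p − 1)(q − 1), which is what makes decryption invert encryption (see the correctness discussion below):

```python
p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime to phi
d = 2753                  # private exponent: e*d = 1 mod phi
assert (e * d) % phi == 1

def encrypt(m):
    return pow(m, e, n)   # E_kU(m) = m^e mod n

def decrypt(c):
    return pow(c, d, n)   # D_kR(c) = c^d mod n

m = 65
c = encrypt(m)
print(decrypt(c) == m)  # True
```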
Example 49:
Suppose n = 6371. Then the maximum value for m is 6370: for encryption to be invertible,
given that the output is taken mod n, the possible values must be in {0, 1, 2, . . . , 6370}. We need

E_kU(m) = m^e mod n

to map each message to a unique ciphertext, otherwise this would not be invertible (if 2 messages
mapped to the same value, we would not know which one to decrypt to); that would definitely
happen if there were more than n possible messages (some ciphertext value would be used twice).
As long as m and n are relatively prime (which means their greatest common divisor is 1) the
encryption generates all the different values and uses each of them for a different message. Note
the special cases:

m = 0 ⟹ m^e mod n = 0, so the encryption function does not depend on the key
m = 1 ⟹ m^e mod n = 1, so the encryption function does not depend on the key
Theorem 4.2.1:
It is dangerous to use small values for m (see later).

Proof:
In
E_kU(m) = m^e mod n
the key kU is public and we assume that an adversary knows e. If there is only a small possible
set of m values, the adversary can just try them all and see which one maps to the intercepted
ciphertext. □
Example 50:
Let m be a 2-digit number. The value of m is encrypted using RSA and A's public key kUA
using no padding:
E_kUA(m) = c
Given the values of A's public key kUA:
n = 1231392572501263290727737266752432575805507280655102503550501
05040961069379571235556595226046554214827510742496799970195881106
28043795915870723829687621960877649085435570879287913576999645140
49216311301341042070900557515495046418603754603553362356149067749
88828468421536178729380641488939620270782145238346367
e = 65537
and the ciphertext:
c = 579654303362576360840355738165819750714799492255
5475032249008245888336713874585037805559949385687100525344312
7798571076614963471843544300372785467082456903912582920032952
0758983756800872241184575378396329063230847828100301360222673
8631730848965119239243053133548975221346897531038080236731282
5891307058273972
then m can be recovered by brute force over the small message space:

def solve(e, n, c):
    # m has at most 2 digits, so just try every candidate
    for i in range(100):
        if pow(i, e, n) == c:
            return i
    return None

print(solve(e, n, c))

where pow(a, b, c) returns a^b mod c.

4.3 Correctness of RSA
Theorem 4.3.1 (Invertibility Property of RSA)
Assuming encryption and decryption with

E_kU(m) = m^e mod n    (4.8)

D_kR(c) = c^d mod n    (4.9)

the property for RSA to be invertible is

m^(ed−1) ≡ 1 mod n    (4.10)

Proof:
To get the message back after encryption:

D_kR(E_kU(m)) = (m^e mod n)^d mod n = m^(e·d) mod n ≡ m mod n

⟺ m^(ed−1) ≡ 1 mod n

Therefore the goal is to select values for e, d, n that satisfy:

∀m ∈ Z_n: m^(ed−1) ≡ 1 mod n

Note that (4.8) and (4.9) work in both directions. For signatures we want invertibility where
we do decryption first and then encryption; that is equal to:

E_kU(D_kR(c)) = c^(d·e) mod n ≡ c mod n

So we have correctness in both directions.

4.4 Euler's Theorem


Definition Totient Function, Euler's Function
The totient function (Euler's function) φ(n) of an integer n ∈ N returns the number of positive
integers less than n that are relatively prime to n.
Example 51:
It is
φ(15) = 8
because 15 = 3 · 5, so every multiple of 3 or 5 below 15 is excluded, and

φ(15) = |{1, 2, 4, 7, 8, 11, 13, 14}| = 8
Theorem 4.4.1:
It is
n ∈ P ⟹ φ(n) = n − 1

Proof:
If n is prime then all of the positive integers less than n are relatively prime to n. □

Example 52:
It follows for the prime number n = 277:

φ(277) = 277 − 1 = 276

Theorem 4.4.2:
If n = p · q with p, q ∈ P then
φ(n) = φ(p) · φ(q)

Proof:
Counting the numbers below n that share a factor with n (the q − 1 multiples of p and the
p − 1 multiples of q) and using 4.4.1:

φ(n) = pq − 1 − (q − 1) − (p − 1) = pq − (p + q) + 1 = (p − 1)(q − 1) = φ(p) · φ(q) □

This is useful for RSA:

If we know the factors of n, we have an easy way to compute the value of the totient of n, but
if we do not know p and q it appears to be hard to compute φ(p · q) = φ(n).
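The totient examples above can be checked by brute-force counting directly from the definition (a sketch; fine for small n, hopelessly slow for RSA-sized moduli):

```python
from math import gcd

def phi(n):
    # Euler's totient by brute-force counting (only practical for small n)
    return sum(1 for k in range(1, n) if gcd(k, n) == 1)

assert phi(15) == 8             # counts {1, 2, 4, 7, 8, 11, 13, 14}
assert phi(277) == 276          # 277 is prime, so phi = 277 - 1
assert phi(11 * 13) == 10 * 12  # phi(p*q) = (p-1)*(q-1)
```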
Theorem 4.4.3 (Euler's Theorem)
If gcd(a, n) = 1 (that means a and n are relatively prime, i.e. their greatest common divisor is 1) then

a^φ(n) ≡ 1 mod n    (4.11)

where φ(n) is the totient of n. If we then choose e and d such that

e·d − 1 = k · φ(n)

for some integer k, we have the correctness property we need, under the assumption that a and n are relatively prime.

4.5 Proving Euler's Theorem


Theorem 4.5.1 (Fermat's Little Theorem)
If n ∈ P and gcd(a, n) = 1 then
a^(n−1) ≡ 1 mod n

Proof:
It follows that
{a mod n, 2a mod n, . . . , (n − 1)·a mod n} = {1, 2, . . . , n − 1}
so

a · 2a · 3a ⋯ (n − 1)·a ≡ (n − 1)! mod n
(1 · 2 ⋯ (n − 1)) · a^(n−1) ≡ (n − 1)! mod n
(n − 1)! · a^(n−1) ≡ (n − 1)! mod n
a^(n−1) ≡ 1 mod n □
Theorem 4.5.2 (Euler's Theorem)
If gcd(a, n) = 1 for a, n ∈ N then
a^φ(n) ≡ 1 mod n

Proof:
If n ∈ P this follows from Fermat's Little Theorem 4.5.1, since then φ(n) = n − 1:

a^(n−1) ≡ 1 mod n ⟺ a^φ(n) ≡ 1 mod n

If n ∉ P and gcd(a, n) = 1, let R = {x_1, x_2, . . . , x_r} be the set of positive integers less
than n with gcd(x_i, n) = 1 for all i ∈ {1, 2, . . . , r}.
This means R is the set of numbers which are relatively prime to n, so by the definition of
the totient:

|R| = r = φ(n)

Multiplying the elements of R by a and reducing mod n leads to:

S = {a·x_1 mod n, a·x_2 mod n, . . . , a·x_r mod n}

and it follows that:
|S| = |R| = φ(n) and S = R
because every a·x_i mod n is again relatively prime to n, and since a and n are relatively
prime, multiplication by a is injective mod n:

a·x_i mod n = a·x_j mod n ⟹ x_i = x_j

It follows

∏(R) = x_1 · x_2 ⋯ x_r = ∏(S)
     = (a·x_1 mod n) · (a·x_2 mod n) ⋯ (a·x_r mod n)
     ≡ a^φ(n) · (x_1 · x_2 ⋯ x_r) mod n

and therefore

x_1 · x_2 ⋯ x_r ≡ a^φ(n) · (x_1 · x_2 ⋯ x_r) mod n ⟹ 1 ≡ a^φ(n) mod n □
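Euler's theorem can also be checked numerically for a small modulus (a sanity check of the statement, not a proof):

```python
from math import gcd

n = 15
# phi(15) counted directly from the definition
phi_n = sum(1 for k in range(1, n) if gcd(k, n) == 1)
assert phi_n == 8

# a^phi(n) ≡ 1 (mod n) holds for every a relatively prime to n
for a in range(1, 200):
    if gcd(a, n) == 1:
        assert pow(a, phi_n, n) == 1
```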

4.6 Invertibility of RSA


Following 4.3.1:
Definition Invertibility Property of RSA
Assume n = p · q is a product of 2 primes p, q ∈ P; then there are the encryption function (4.8)
and the decryption function (4.9). The invertibility property we need is:

m^(ed−1) ≡ 1 mod n    (4.12)

From Euler we know:
a^φ(n) ≡ 1 mod n
with gcd(a, n) = 1.
So we want to pick e and d such that

e·d − 1 = k · φ(n)

But we cannot guarantee that gcd(m, n) = 1 (that m and n are relatively prime). We only know:

n = p · q and m < n

and it is possible that

m = c_1 · p or m = c_2 · q

for some values c_1, c_2, and then (if c_1 < q or c_2 < p) m < n but gcd(m, n) ≠ 1.
So we cannot use Euler's theorem directly - we have to deal with these special cases separately.

4.7 Picking and Computing e and d


For correctness we need, with n = p · q and p, q ∈ P:

e·d − 1 = k·φ(n) = k·φ(p · q) = k·φ(p)·φ(q) = k·(p − 1)(q − 1)

Now d should be selected randomly, because the private key kR = (d, n) includes the secret d
and the public key kU = (e, n) includes e; if we want the private key to be secret and hard to
guess then it must include something that is unpredictable, and that will be the value of d.
Since n is part of the public key, it does not provide any security for the private key.
Since e is public it is okay if e is computed in a predictable way, and in fact e is often chosen
to be a small value.
Theorem 4.7.1:
Since d is relatively prime to φ(n), it has a multiplicative inverse e such that:

d · e ≡ 1 mod φ(n)

Theorem 4.7.2 (Computing e)


Given d and φ(n) we can use the extended Euclidean algorithm to compute e.

Theorem 4.7.3 (Computing d)


Given only e and n in
d · e ≡ 1 mod φ(n)
it is hard to compute d (otherwise RSA would be insecure), which is only believed to be true
if n is big enough and a product of 2 large prime numbers.
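The inverse computation mentioned above can be sketched with a minimal extended Euclidean algorithm (since Python 3.8 the built-in pow(e, -1, m) computes the same inverse):

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(e, phi_n):
    """d with e*d ≡ 1 (mod phi_n); requires gcd(e, phi_n) == 1."""
    g, x, _ = extended_gcd(e, phi_n)
    assert g == 1, "e must be relatively prime to phi(n)"
    return x % phi_n

# e.g. the inverse of 23 modulo 120 is 47, since 23*47 = 1081 = 9*120 + 1
assert mod_inverse(23, 120) == 47
```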

4.8 Security Property of RSA
An attacker who does not have access to the private key faces a difficult problem performing
decryption.
Definition Security Property of RSA
The security property of RSA is: given e and n it is hard to find d, except for someone who
knows the factors of n = p · q, because:

d = e^(−1) mod φ(n) = e^(−1) mod φ(p · q) = e^(−1) mod (φ(p) · φ(q)) = e^(−1) mod ((p − 1)(q − 1))

So the security argument relies on 2 things:

1. Showing that all ways of breaking RSA would allow easy ways to factor n.

2. The claim that factoring n is hard, where n is the result of multiplying two large prime
numbers.

ad 1:
We show that it is not possible to compute φ(n) more easily than factoring n, or equivalently:
we show that given φ(n) there is an easy way to compute p and q.
We know:

φ(n) = φ(pq) = φ(p)·φ(q) = (p − 1)(q − 1) = pq − (p + q) + 1 = n − (p + q) + 1 ⟹ p + q = n − φ(n) + 1

The goal is: if we know φ(n) we can easily find p and q!


First consider:

(p − q)² = p² − 2pq + q² = p² − 2n + q²
(p + q)² = p² + 2pq + q² = p² + 2n + q²
(p + q)² − (p − q)² = (p² + 2n + q²) − (p² − 2n + q²) = 4n
(p − q)² = (p + q)² − 4n
         = (n − φ(n) + 1)² − 4n
p − q = √((n − φ(n) + 1)² − 4n)

Putting it all together and adding the 2 equations:

p + q = n − φ(n) + 1
p − q = √((n − φ(n) + 1)² − 4n)
2p = n − φ(n) + 1 + √((n − φ(n) + 1)² − 4n)

and so

p = (n − φ(n) + 1 + √((n − φ(n) + 1)² − 4n)) / 2

which is easy to compute, because p ∈ P ⊂ N and therefore √((n − φ(n) + 1)² − 4n) ∈ N.
It follows:

φ(n) = (p − 1)(q − 1) ⟹ q − 1 = φ(n)/(p − 1) ⟹ q = φ(n)/(p − 1) + 1
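The derivation above translates directly into code; a sketch that recovers p and q from n and φ(n) (math.isqrt is exact, matching the observation that the square root is an integer):

```python
from math import isqrt

def factor_from_totient(n, phi_n):
    """Recover p and q from n = p*q and phi(n), following the derivation above."""
    s = n - phi_n + 1             # s = p + q
    t = isqrt(s * s - 4 * n)      # t = p - q; the radicand is a perfect square
    assert t * t == s * s - 4 * n
    return (s + t) // 2, (s - t) // 2   # p, q

# Toy check with n = 11 * 13 = 143 and phi(143) = 120
p, q = factor_from_totient(143, 120)
assert {p, q} == {11, 13} and p * q == 143
```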
ad 2:
If factoring is hard, breaking RSA is hard, and factoring is indeed believed to be hard.

Theorem 4.8.1:
There is no easier way to compute d than finding the factors of n.

Proof:
This follows from:
e·d ≡ 1 mod φ(n)
If we know φ(n), we can easily find the factors p and q, as the correctness of RSA depends
on this property.
It means there is some k ∈ N such that
k · φ(n) = e·d − 1
We already know the value of e. If we find out the value of d, we therefore know a multiple of
the totient of n.
Once we know a multiple of the totient it is easy to find the factors p and q.
So any easier way to find d than factoring the modulus n would provide an easy way to factor.
That shows that all the obvious mathematical ways to break RSA are equivalent to factoring n. □
Example 53:
Given the public key n and e:
n = 11438162575788886766923577997614661201021829672124236256256184293570
6935245733897830597123563958705058989075147599290026879543541
e = 9007
and the intercepted ciphertext:
c = 9686961375462206147714092225435588290575999112457431987469512093081629
the message can be figured out by factoring n (this is the famous RSA-129 challenge). It took
about 17 years from publication of the challenge until n was factored:
n = p · q with
p = 3490529510847650949147849619903898133417764638493387843990820577
q = 32769132993266709549961988190834461413177642967992942539798288533
So RSA keys must be longer than 129 digits.
Example 54:
Suppose the public key is e = 79, n = 3737, the private key is d = 319, n = 3737 and the
intercepted ciphertext is c = 903. Then the plaintext is m = 387 because

E: m^e mod n
D: c^d mod n

so
903^319 mod 3737 = 387
To check the answer simply encrypt 387:
387^79 mod 3737 = 903

Example 55:
Generating the public key kU = (e, n) and the private key kR = (d, n):

1. Pick 2 random prime numbers p, q

   p = 11, q = 13

2. Compute the modulus n = p · q

   n = 11 · 13 = 143

3. Compute φ(n)

   φ(n) = φ(p · q) = φ(p) · φ(q) = (p − 1) · (q − 1) = 10 · 12 = 120

4. Choose e with 1 < e < φ(n) and gcd(e, φ(n)) = 1

   e = 23 ⟹ kU = (23, 143)

5. Compute the multiplicative inverse of e (which is d) using the extended Euclidean algorithm,

   using e · d ≡ 1 mod φ(n). It follows: e · d + k · φ(n) = 1 = gcd(e, φ(n))

   23 · d + k · 120 = 1 = gcd(23, 120) ⟹ d = 47, k = −9 ⟹ kR = (47, 143)

p, q, φ(n) are no longer needed, but these values can be easily recomputed using e, d, n.
Encrypting a message:

1. Encrypt using
   m = 7, kU = (23, 143)

2. Using c ≡ m^e mod n, with m < n

   7^23 mod 143 ≡ 2 ⟹ c = 2

Decrypting a message:

1. Given
   c = 2, kR = (47, 143)

2. Using m ≡ c^d mod n

   2^47 mod 143 ≡ 7 ⟹ m = 7
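The whole example can be replayed in a few lines (the modular inverse via pow(e, -1, m) needs Python 3.8+):

```python
# Toy RSA with the numbers from this example (far too small for real security)
p, q = 11, 13
n = p * q                      # 143
phi_n = (p - 1) * (q - 1)      # 120
e = 23                         # public exponent, gcd(23, 120) == 1
d = pow(e, -1, phi_n)          # multiplicative inverse of e mod phi(n)
assert d == 47

m = 7
c = pow(m, e, n)               # encryption: 7^23 mod 143
assert c == 2
assert pow(c, d, n) == m       # decryption: 2^47 mod 143 recovers 7
```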

4.9 Difficulty of Factoring


The largest RSA public key broken so far was RSA-768 (768 bits), which is equivalent to 232
decimal digits.
So if we want to know that RSA is secure we need to understand how the cost of factoring
depends on the size of the numbers that we need to factor.
Desired: if we pick a large enough key, even an adversary with a large amount of computational
power still will not be able to factor the number and break RSA.

4.9.1 Best Known Algorithms
Measure the size (number of bits) of the input b as

b = log₂ n

with the RSA modulus n.


Brute Force Algorithm:
A brute force search would look in Python as follows:

from math import isqrt

for i in range(2, isqrt(n) + 1):
    if is_factor(i, n):
        return i

assuming is_factor (a divisibility test) runs in constant time. Then we need to go through this
loop √n times, so the running time will be linear in √n; but b = log₂ n, so the running
time will be O(2^(b/2)), which will not work for large b.

General Number Field Sieve (for classical computers):


The general number field sieve is much faster than brute force but still not polynomial: its
running time is exponential in b^(1/3) · (log b)^(2/3), which is still much worse than being
polynomial.

Shor's Algorithm (for quantum computers):


Shor's algorithm has a polynomial running time, O(b³) in the number of bits.
So factoring is in the complexity class BQP: bounded-error quantum polynomial time.
It is unknown whether factoring is NP-hard; the NP-hard problems are at least as hard as the
hardest problems in NP, the class of problems that can be solved by a non-deterministic
Turing machine in polynomial time.

Theorem 4.9.1:
If it is proven that factoring is NP-hard then

NP ⊆ BQP

4.10 Public Key Cryptography Standard (PKCS#1), Insecurity of RSA in Practice

The security property of RSA assumed a large, random number m.
Example 56:
Suppose n = RSA-1024 and e and the ciphertext c are:

n = 13506641086599522334960321627880596993888147560566702752448514385152651060
48595338339402871505719094417982072821644715513736804197039641917430464965
89274256239341020864383202110372958725762358509643110564073501508187510676
59462920556368552947521350085287941637732853390610975054433499981115005697
7236890927563
e = 17
c = 232630513987207

Then
c = m^e mod n
and if m^e < n then we never wrapped around the modulus n.
That means decryption is as easy as computing the integer e-th root

m = c^(1/e)

and so
232630513987207^(1/17) = 7 = m
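The root extraction can be sketched with a binary search for the exact integer e-th root, using the numbers from this example:

```python
# With e = 17 and m^e < n, the modulus never wraps, so m is the exact
# integer 17th root of the ciphertext
e, c = 17, 232630513987207

lo, hi = 0, 1 << 16            # binary search for the smallest x with x**e >= c
while lo < hi:
    mid = (lo + hi) // 2
    if mid ** e < c:
        lo = mid + 1
    else:
        hi = mid

assert lo ** e == c            # the root is exact, so no wrap-around occurred
assert lo == 7                 # the recovered plaintext m
```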

Example 57:
Suppose we want to send a small number like the day of the year:

m ∈ {1, 2, . . . , 365}

using RSA. To avoid message guessing we add some random padding to make m large and
unpredictable.
Definition Public Key Cryptography Standard (PKCS#1)
PKCS#1 adds some padding to make m long enough and unpredictable and keep an
attacker from message guessing.
The new message m′ is:
m′ = 00 . . . 010 || r || 00000000 || m
where r is a string of random bits with |r| ≥ 64; depending on the length of m it may use more bits.
This prevents the small message space attack: even if the set of possible messages is fairly
small, an attacker needs to try all possible choices for the random bits (at least 64 bits, so
2^64 combinations) in order to test those messages.
A better way to do this:
Definition Optimal Asymmetric Encryption Padding (OAEP)
The idea of OAEP is to xor the message with the output of a cryptographic hash function
that takes in a random value; the recipient can still decrypt the message because they can
recover the random value and xor out the result of the cryptographic hash.
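The OAEP idea can be illustrated with a small sketch; sha256 here is only a stand-in for OAEP's mask generation function, and the layout is simplified compared to the real construction:

```python
import hashlib
import os

def mask(seed, length):
    # hash-based mask, a simplified stand-in for OAEP's mask generation function
    return hashlib.sha256(seed).digest()[:length]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pad(m, r):
    masked_m = xor(m, mask(r, len(m)))          # m xor H(r)
    masked_r = xor(r, mask(masked_m, len(r)))   # r xor H(masked m)
    return masked_m + masked_r                  # this block is what RSA would encrypt

def unpad(block, mlen):
    masked_m, masked_r = block[:mlen], block[mlen:]
    r = xor(masked_r, mask(masked_m, len(masked_r)))  # recover the random value
    return xor(masked_m, mask(r, len(masked_m)))      # xor the hash back out

m = b"day 123"
padded = pad(m, os.urandom(32))   # fresh randomness for every encryption
assert unpad(padded, len(m)) == m
```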

4.11 Using RSA to Sign a Document


The straightforward way may look as follows:

A: m → D_kRA → c (the signed document)
A → B: c
B: c → E_kUA → m

A decrypts a message m using his private key kRA. That produces the ciphertext c, which is
really the signed document. Anyone who has A's public key kUA (including B) can now apply
encryption with the public key to that signed document, obtain the document, and have it
verified, because this document was decrypted using A's private key, which only A has.
Example 58:

Suppose A has a public key kUA and a private key kRA, and B has a public key kUB and a
private key kRB. A wants to send a message to B in a way that protects it from eavesdroppers
and proves to B that the message was generated by A and intended for B, so A should send to B:

E_kUB(E_kRA(m)):


B can use his private key to decrypt the entire message and then use A's public key to
get the message and verify it came from A.

(Not E_kRA(E_kUB(m)):


here every eavesdropper can reverse the outer encryption using A's public key, although
they still cannot decrypt the full message without B's private key.)

4.11.1 Problem with RSA


RSA is very expensive.
Do not use it for large documents: it costs about 1000 times as much computing power to do 1
RSA encryption as it does to do a symmetric encryption. So we do not want to encrypt a whole
document like this.
To avoid this computational cost, assume A has kUA, kRA, RSA and a cryptographic hash
function H; then A sends

m′ = ⟨m, RSA_kRA(H(m))⟩

So we send the document in cleartext (enough for a signature) but we send along with it something
that proves it is the document that A intended. The output of the hash function H(m) is a small
fixed-size value and we can encrypt that with RSA_kRA much more cheaply than encrypting the whole
document using RSA.
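A hash-then-sign sketch with toy-sized RSA numbers (the parameters are illustrative only; real signature schemes also pad the digest rather than reducing it mod n):

```python
import hashlib

# Toy RSA key (far too small for real use); pow(e, -1, m) needs Python 3.8+
p, q = 61, 53
n = p * q                              # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent

def H(message):
    # hash the (possibly long) document down to a short value < n
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message):
    # the expensive private-key operation touches only the short digest
    return pow(H(message), d, n)

def verify(message, signature):
    return pow(signature, e, n) == H(message)

doc = b"a long document that we do not want to RSA-encrypt in full"
sig = sign(doc)
assert verify(doc, sig)
```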

5 Cryptographic Protocols
In cryptographic protocols we are going to solve problems using

symmetric encryption

cryptographic hash functions

asymmetric encryption

Definition Cryptographic Protocol


The main problem in cryptographic protocols that we have to solve is how to authenticate
between a client and a server. In this course we are going to look at these cryptographic
protocols:

Encrypted Key Exchange Protocol (EKE)

Secure Shell (SSH)

Transport Layer Security (TLS),


which is the most widely used cryptographic protocol, as the basis of HTTPS

These 3 are used to authenticate a client and a server. This can be in either direction or in
both directions. They all involve a mix of asymmetric and symmetric techniques.
Definition Threat Model
Every time we talk about cryptographic protocols we have to think about what our threat model is.
If we want to argue that our protocol is secure, then we need to understand the threat model,
which means knowing the capabilities of the adversary. In order to argue that the protocol is
secure, we need to argue that an adversary with only those capabilities will not be able to break
the protocol. Therefore we need to assume:

The adversary has limited computational power.


That means in general:

An attacker who intercepts a message encrypted with some key k is not able to
decrypt it without knowing k or having some other advantage for decrypting the message.
Hash functions have the properties they should have: they are preimage resistant, so an
adversary who has the hash of some value x cannot figure out what x was, and they are
strongly collision resistant, so an adversary cannot find 2 values that hash to the same
output.

A passive attacker can only eavesdrop.


That means the attacker can only listen to messages on the network, but cannot modify
messages and cannot inject their own messages.

An active attacker controls the network.


That means the attacker can modify data and messages, replay messages, and mount
in-the-middle attacks (intercepting traffic between 2 parties and replacing the messages
with their own messages).
Example 59:
For an adversary who controls a router on the Internet the threat model

limited computational power and active attacker

would be appropriate, because an attacker who controls the network can modify messages, replay
messages, and act as an attacker in the middle. Such an attacker can do far more than a passive
attacker, who can only intercept and analyze messages.

5.1 Encrypted Key Exchange Protocol (EKE)


Definition Encrypted Key Exchange Protocol (EKE)
There are many variations on the encrypted key exchange protocol, many of which are still
used today. The protocol builds on Diffie-Hellman, which looks, for client C and server S, as
follows:

C: x_A, g, q, y_A = g^x_A mod q
C → S: y_A
S: x_B, g, q, y_B = g^x_B mod q
S → C: y_B
C, S: k = g^(x_A·x_B) mod q

This means (doing Diffie-Hellman):

1. C and S agree on a generator g and modulus q

2. C picks a random value x_A, computes y_A = g^x_A mod q and sends y_A to S

3. S picks a random value x_B, computes y_B = g^x_B mod q and sends the result to C

4. C and S can compute the same key k = g^(x_A·x_B) mod q

The problem here is that an active attacker can change the values of y_A and y_B and thereby
act as an attacker in the middle. The idea of encrypted key exchange is to combine this with
symmetric encryption to allow C and S to authenticate each other even if there is an attacker
in the middle.
This works as follows:

1. Assume C and S have some shared secret password p

2. C sends ⟨ID, E_p(g^x_A mod q)⟩ to S (the name of C (ID) together with the symmetrically
encrypted value)

3. S can decrypt this with D_p(E_p(g^x_A mod q)) to obtain g^x_A mod q (which would have been
sent in the clear in the Diffie-Hellman protocol)

4. S can compute the key k = g^(x_A·x_B) mod q using S's own secret value x_B

5. S sends E_p(g^x_B mod q) to C

6. C can decrypt this with D_p(E_p(g^x_B mod q)) to obtain g^x_B mod q

7. C can compute the key k = g^(x_A·x_B) mod q using C's own secret value x_A

This looks like:

C: p, x_A, g, q, y_A = g^x_A mod q
C → S: ⟨ID, E_p(y_A)⟩
S: p, x_B, g, q, y_B = g^x_B mod q
S → C: E_p(y_B)
C: D_p(E_p(y_B)) = y_B              S: D_p(E_p(y_A)) = y_A
C: k = g^(x_A·x_B) mod q            S: k = g^(x_A·x_B) mod q

The drawback of the encrypted key exchange protocol for authenticating a user to a website is
that it requires the server to store passwords in cleartext, which is never a good idea, but EKE
is not vulnerable to offline dictionary attacks or to in-the-middle attacks.
This protocol, as described, does not yet provide authentication, because the way to authenticate
depends on proving possession of the same key. To authenticate, C needs to prove knowledge of
the password, and S needs to prove knowledge of the password. The assumption is that the
password is shared between C and S and only knowledge of this password proves the
authentication. So merely establishing a key does not prove anything yet.
Definition Encrypted Key Exchange Implementation
We have to prove that both parties obtained the same key. For S to obtain the key it needed to be
able to decrypt the message from C using the password p. So we add to the message that S sends
to C, instead of just E_p(y_B), a challenge, which is some random value r ∈ {0, 1}^n
of length n. So S sends ⟨E_p(y_B), E_k(r)⟩. Now C needs to be able to obtain the right key from
E_p(y_B), which proves that C knew the password, and using that key C can decrypt E_k(r) and
obtain r. This demonstrates to C that C is talking to the right server S, because E_k(r) could
only be produced correctly by a server that knew the password and was able to decrypt the
message that C sent (encrypted with that password) to derive the key k. This has not yet proven
anything to S. To finish the protocol C has to send a response back to S that proves that C
was able to obtain r. Therefore C sends a message encrypted with the key k: E_k(r || r_A), which
adds another nonce r_A to r. Now S can decrypt this, check that r matches, extract r_A, and
send r_A back encrypted with k: E_k(r_A).
Finally both the server and the client have proved knowledge of the password, they have
established a shared secret (the key k) which can be used for further communication, and they
have done this in a way that, even if there is an active attacker intercepting and modifying all
sent messages, an attacker who does not know the password p has no chance to establish an
in-the-middle connection or learn anything about the messages.

So the former protocol now looks as follows:

C: p, x_A, g, q, y_A = g^x_A mod q
C → S: ⟨ID, E_p(y_A)⟩
S: p, x_B, g, q, r ∈ {0, 1}^n, y_B = g^x_B mod q
S: D_p(E_p(y_A)) = y_A, k = g^(x_A·x_B) mod q
S → C: ⟨E_p(y_B), E_k(r)⟩
C: D_p(E_p(y_B)) = y_B, k = g^(x_A·x_B) mod q
C: r = D_k(E_k(r)), picks r_A ∈ {0, 1}^m
C → S: E_k(r || r_A)
S: checks r, extracts r_A
S → C: E_k(r_A)
C: checks r_A = D_k(E_k(r_A))
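The EKE message flow can be sketched in code; the xor "cipher" and the tiny Diffie-Hellman group below are insecure stand-ins chosen only to keep the sketch self-contained:

```python
import hashlib
import secrets

# Public Diffie-Hellman parameters (2**32 - 5 is prime; real groups are 2048+ bits)
q, g = 0xFFFFFFFB, 5

def E(key, value):
    # Stand-in symmetric cipher: xor with a hash-derived pad.
    # NOT secure; it only illustrates "encrypt under the password / key".
    pad = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return value ^ pad

D = E  # the xor stand-in is its own inverse

p = b"shared password"

# C -> S: <ID, E_p(y_A)>
x_a = secrets.randbelow(q - 2) + 1
msg1 = ("C", E(p, pow(g, x_a, q)))

# S: decrypts y_A, computes k, picks a challenge r, replies <E_p(y_B), E_k(r)>
x_b = secrets.randbelow(q - 2) + 1
y_a = D(p, msg1[1])
k_s = pow(y_a, x_b, q)
r = secrets.randbits(64)
msg2 = (E(p, pow(g, x_b, q)), E(k_s.to_bytes(8, "big"), r))

# C: decrypts y_B, computes the same k, and recovers the challenge r
y_b = D(p, msg2[0])
k_c = pow(y_b, x_a, q)
r_recovered = D(k_c.to_bytes(8, "big"), msg2[1])

assert k_c == k_s          # both sides derived the same key
assert r_recovered == r    # C proves it could decrypt the challenge
```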

5.2 Secure Shell Protocol (SSH)


Definition Secure Shell Protocol (SSH)
SSH, based on Diffie-Hellman, uses aspects of symmetric and asymmetric cryptography to
solve the problem of client-server authentication. The protocol may look as follows:

C: x_C, y_C = g^x_C mod p
C → S: y_C
S: k_US, k_RS, x_S, y_S = g^x_S mod p, k = y_C^x_S mod p
S: H = hash(protocol params || k_US || y_C || y_S || k)
S → C: ⟨k_US, y_S, ⟨H, E_kRS(H)⟩⟩
C: k = y_S^x_C mod p
C: checks D_kUS(E_kRS(H)) = H
C: checks H = hash(protocol params || k_US || y_C || y_S || k)

Which means:

1. The first steps are in fact the same as Diffie-Hellman:

2. C picks a large random number x_C ∈ {0, 1}^n and sends y_C = g^x_C mod p to S, where g is
the generator and p is the modulus just like in Diffie-Hellman.

3. S picks his own large random value x_S ∈ {0, 1}^n, computes y_S = g^x_S mod p and then
S computes the key k = y_C^x_S mod p.
So far it is the same as the Diffie-Hellman protocol (just with some variable names changed).

4. S computes a hash H = hash(protocol parameters || k_US || y_C || y_S || k), using some cryp-
tographic hash function. The inputs to that hash function are some protocol parameters
that identify the protocol, concatenated with the public key of the server k_US, the value
of y_C that was sent by C (which verifies it is part of the same session and prevents replay
attacks, because that value was determined by C), and finally the value of
y_S, which is the normal Diffie-Hellman response, and the key k. Note that this is all in a
one-way hash, so someone who intercepts that hash will not be able to learn anything about
the inputs. Someone who knows the inputs is able to verify the hash is correct.

5. S sends the value of its public key k_US, the value of y_S and the hash signed with S's
private key k_RS. This means sending the hash H along with the hash encrypted with the
private key; that is what it means to do a signature in asymmetric cryptosystems.

6. C can compute the key k = y_S^x_C mod p

7. C and S have a shared key.

8. C has to check (to verify S) that

   D_kUS(E_kRS(H)) = H

   which means checking the signature by decrypting using the public key; this verifies
   that the message was created by someone who knows the private key. Then:

   H = hash(protocol params || k_US || y_C || y_S || k)

   which means recomputing the hash. Note that the hash is one-way (we cannot use the
   hash to learn the key, but C can compute the key), and C checks that the key and
   the hash match by recomputing the hash. Now we know there is no replay attack,
   because the values y_C, y_S, k are fresh. If there was a replay attack and a different
   hash value was replayed, then this hash would not match.

Example 60:
Using SSH where C does not know the value of the public key k_US provides no authentication,
but does establish a shared key. This provides no benefit against an active adversary (attack in
the middle), but it provides some security against a passive adversary, because the messages do
get to the right place.

5.3 SSH Authentication in Practice


Example 61:
Using SSH to log into the Vienna University of Technology (* stands for private information):

>>ssh e*@tuwien.ac.at
The authenticity of host 'tuwien.ac.at (128.130.35.76)' can't be established.
RSA key fingerprint is 70:05:f4:85:ec:f5:3a:59:65:22:f6:4a:35:82:6b:54.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added tuwien.ac.at,128.130.35.76 (RSA) to the list
of known hosts.
>>e0625005@tuwien.ac.at's password: *
/home/>>logout
>>cat ~/.ssh/known_hosts
tuwien.ac.at,128.130.35.76 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA7
+92EdxA6IktvoPVIObaZmK3HjVcjYFoQ30jnMh0khLj1SzFnPoLx1j8M4Ub
Tipp+4DGS9U1fMJG//z09vNSdjjYsrynv2HRFU5AXaWJkNq0qTadmWcHG
tJzw/gC4u9voVoEi1wbPVLHNO0+OFyOlLJl5L6O5aiB1gmlZk+BtL3nYbjk8y
j2vkXZk0ZE1aAqoYOOvc1+Y+1GqBiv0guxZkFCJshCMSUgsFCCJ8tBn90i
LTQ6j8VSuWyS/VPCpH9ztmUupfeBbGaUoFatAl3tHyKxOTzfg6KF6yDufjEH
t7SdYE+9zK3wXgQwQSBpkEuNyH5Jq8uOgT221nel2LrRpw==
>>emacs ~/.ssh/known_hosts #change some value of the public key
>>ssh e*@tuwien.ac.at
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
70:05:f4:85:ec:f5:3a:59:65:22:f6:4a:35:82:6b:54.
Please contact your system administrator.
Add correct host key in /Users/danielwinter/.ssh/known_hosts to get rid of
this message.
Offending key in /Users/danielwinter/.ssh/known_hosts:8
RSA host key for tuwien.ac.at has changed and you have requested strict checking.
Host key verification failed.

So at first, since there is no stored public key for that host, the authenticity of the host cannot
be established. SSH shows the fingerprint (an easier way for a human to check the key than
reading all the bits). Connecting by typing yes will store the public key and add it to the list
of already known hosts. Now we have a secure channel, encrypted with a key agreed to using
this SSH protocol, to send the password to the host knowing that it cannot be intercepted (but
this only has value if we know that the fingerprint really belongs to the host's public key).
Entering the password leads to being logged into the server.
Typing cat ~/.ssh/known_hosts shows the list of all known hosts; the IP address of the host
and the public key are stored. Doing ssh e*@tuwien.ac.at again will lead directly to the
password request, as the public key is now known.
Modifying the file (simply changing some character) and trying to connect again will give a warn-
ing message!

5.4 Transport Layer Security Protocol (TLS)
Definition Transport Layer Security Protocol (TLS), TLS Handshake Protocol, TLS Record
Protocol, Pre-Master Secret, Master Secret
TLS, formerly known as Secure Sockets Layer (SSL), is a complex protocol for clients (web
browsers) and servers (hosts) to communicate securely, which is essential for e-commerce (it
allows sending credit-card numbers over the network, as well as personal information, with some
confidence that they only go to the intended destination).
It consists of 2 main parts:

1. TLS Handshake Protocol

used to authenticate a server to a client (and a client to a server, but this rarely
happens, because the client would require a public key and this public key would have
to be known by the server) using a combination of symmetric and asymmetric cryptography
agreement on a cryptographic protocol, because TLS allows many different encryption
algorithms. TLS can be implemented on top of many other protocols. The most com-
monly used combination is with the Hypertext Transfer Protocol (HTTP); combining
TLS with HTTP results in Hypertext Transfer Protocol Secure (HTTPS).
establishes a shared session key, a key shared between the server and the client

2. TLS Record Protocol

starts after successfully finishing the TLS Handshake Protocol


enables communication using the session key, which was established in the TLS Hand-
shake Protocol
uses symmetric cryptography only, because a session key was established by the end
of the TLS Handshake Protocol and symmetric encryption is much
faster (cheaper) for encrypting all the content of a page.
A simplified version of TLS between a client C and a server S may look as follows:

(a) C → S: m, c, h
(b) S → C: c, h, p
    C: verification of the certificate
(c) C: picks r ∈ {0, 1}^n and sends E_kUS(r) to S
    S: D_kRS(E_kUS(r)) = r
    C, S: k = r

Which means:


(a) C connects to S sending a message m, a list of ciphers c supported by C and a list of hash
functions h supported by C, because different browsers have different ciphers and hash functions
implemented.

(b) Since C and S have to agree on a cipher and a hash function, S picks the strongest
cipher and hash function from the received lists and sends the choice back to C. S also
sends a certificate, which gives the public key k_US of S to C in a way C can trust. The
certificate includes the domain and the public key k_US of S and is signed by a Certificate
Authority (CA).

(c) Next C verifies the certificate, extracts k_US, picks a random value r and sends E_kUS(r)
back to S. S can decrypt E_kUS(r) using the private key to get r, which is used as the
shared key k.

Now both C and S have a shared key k = r and can communicate over the channel using
symmetric encryption with the key k. The protocol to do this is the TLS Record Protocol.
The problems of this simplified version of TLS are:

(a) An attacker in the middle can hijack step (c) and replace the message with E_kUS(r′) for
some r′ ≠ r. Then S has a different key than C, and the attacker can now decrypt the messages
coming from S, but would not be able to get r and figure out the messages of C, because the
attacker does not have a way to decrypt E_kUS(r) (only S has the private key to decrypt
this).

(b) An attacker can force C and S to use a weaker cipher by altering step (a). Since the list of
supported ciphers of C is sent in plaintext, an attacker can replace the cipher list with one
containing only weak ciphers. Then S picks the strongest cipher from a list of weak ciphers,
which is a weak cipher, and sends the picked weak cipher back to C.
Therefore some changes in the protocol are needed to avoid (a) and (b):
(a) C generates a random value rC and adds this value to (a). S also generates a random
value rS and adds this value to (b). The value r, also called the pre-master secret, will still
be created.
Step (c) includes some extra information:
r will be padded using the PKCS #1 protocol (see section 4.10) together with something
identifying the client version. So C sends EkU S (client version||r) to S. Instead of making
the key k just r, the key combines all of the randomness used so far. The key is now called
the Master Secret:

master secret = H(r, "master secret"||rC ||rS )

The way to compute the master secret is by using a pseudo-random function (like a
hash function) with

the pre-master secret r,

a label which just identifies this as the master secret, which gets combined with
the random value rC (the client's randomness) and
the random value rS (the server's randomness) as input

The master secret (in common cases 384 bits) gets divided into 3 parts (I, II, III) of equal
length (128 bits):

(I) will be used as the key k for symmetric encryption (using RC4, a symmetric encryption
algorithm)
(II) will become the initialization vector (IV) that is needed for CBC mode
(III) will be used for a keyed hash function, which is a hash function whose mapping
depends on a key kn .
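The derivation and the split can be sketched in Python. This is a simplified stand-in, not the real TLS PRF: the helper name derive_master_secret and the use of HMAC-SHA384 are assumptions made for illustration.

```python
import hmac, hashlib, os

def derive_master_secret(pre_master, client_random, server_random):
    # Simplified stand-in for the TLS PRF: HMAC keyed with the pre-master
    # secret over a label and both random values (real TLS is more involved).
    return hmac.new(pre_master,
                    b"master secret" + client_random + server_random,
                    hashlib.sha384).digest()   # 384 bits = 48 bytes

pre_master = os.urandom(48)   # r, chosen by the client
r_c = os.urandom(32)          # rC, the client's randomness
r_s = os.urandom(32)          # rS, the server's randomness

master = derive_master_secret(pre_master, r_c, r_s)
# Split the 384-bit master secret into three 128-bit parts:
key, iv, mac_key = master[:16], master[16:32], master[32:48]
```

Both sides compute the same function on the same inputs, so they end up with the same key, IV and MAC key.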

The goal now is to protect the traffic between S and C.


This change means: even if an active attacker interferes with the message
EkU S (client version||r), the attacker still can't control what gets computed here, and before
the channel is used for anything secure, C and S need to verify that they got the same key.
So to finish the TLS Handshake Protocol we add a step:
we encrypt the finish message using the key extracted from the master secret. If any of
the values r, master secret, rC , rS were modified, the keys will be different for C and S
and the TLS Handshake Protocol will never be finished, so there will never be a secure
communication using that key, because the 2 parties have to verify the TLS Handshake
Protocol before continuing.
Since new random values r, rC , rS are used for each protocol execution, there is nothing
to replay. Therefore the random values prevent replay attacks, because the key depends
on the values that were used previously.

(b) This can be avoided by also authenticating the messages (a) and (b): not at the
beginning, as there is no key established yet, but in later steps.
If an attacker can force S and C to use a 40-bit key, then the attacker can do a brute
force attack.
The TLS Record Protocol (with the modifications) may look as follows:

C                                              S
k, IV, kn                                      k, IV, kn

       ------- https: HTTP request ------->
       <-- HTTP response (webpage content) ---

R = M ||HMACkn (M )||padding

So first, C requests the content of a webpage. The response is the content of some webpage,
which can be quite long, so we need a way to encrypt that response and send it to C. We want
both:

confidentiality

integrity checking

The response is R, which includes the message M, a MAC of M computed with the hash
function H keyed with kn (the key of the hash function), and finally some padding to fill up
the block size.
Now we want to send this whole response over the secure channel.
The way this is done in the TLS Record Protocol is to use CBC mode 2.4.2 and some encryption
function.
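Building such a record can be sketched in a few lines of Python. The helper name build_record, the use of HMAC-SHA256 and the exact padding rule are assumptions for illustration, not the precise TLS record format:

```python
import hmac, hashlib

BLOCK_SIZE = 16  # block size of the symmetric cipher in bytes (assumed)

def build_record(message: bytes, mac_key: bytes) -> bytes:
    # R = M || HMAC_kn(M) || padding : append a MAC, then pad up to a
    # whole number of cipher blocks so CBC encryption can be applied.
    tag = hmac.new(mac_key, message, hashlib.sha256).digest()
    body = message + tag
    pad_len = BLOCK_SIZE - (len(body) % BLOCK_SIZE)
    return body + bytes([pad_len]) * pad_len

record = build_record(b"<html>webpage content</html>", b"sixteen byte key")
```

The record is then CBC-encrypted under the key k before being sent.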
In one session there might be multiple responses, so when the next response is done, we don't
want to do the whole TLS Handshake Protocol again. For the next response the next message
block will be encrypted using CBC mode again, which produces the cipher blocks of the next
message, but we need an IV here (the former IV must not be reused, for security reasons).
In TLS the last cipher block of the previous message is used as the IV for the first
block of the next message.
Example 62:
With the notation of 2.4.2, suppose an adversary intercepts the first message and has a way to
control m'0 (the first input message block of the next message). With the values:

c3 = 10101010    cn-1 = 11110000
c4 = 01010101    m*   = 00000000

the adversary can learn if mi = m*, where m* is the guess for block i.

The adversary can set the value of m'0, figure out how to make the server give a particular
response, and examine the first ciphertext block c'0.
The adversary should use:
m'0 = 01011010
Because using CBC mode encryption it is

ci = Ek (ci-1 ⊕ mi )

For the first block of the second message it is

c'0 = Ek (cn-1 ⊕ m'0 )

So the danger here is cn-1, because the adversary already knows the whole (first) encrypted
message and therefore the adversary knows the value of cn-1.
That means an adversary can pick a message value for m'0 so that the value of c'0 reveals
something. It has to be an input to the encryption (assuming the adversary can't break the
encryption). The adversary knows the encryption of all the blocks from the previous message,
and we know, from the way CBC works, that if we want to learn the value of m4, we have to
achieve

cn-1 ⊕ m'0 = c3 ⊕ m4 = c3 ⊕ m*

where we don't know m4 (our guess for m4 is m*).

If c3 ⊕ m* is the input to the encryption function, then we can check if the output matches c4,
as in this CBC short scheme:

          m4
           |
   c3 ----(+)       (xor)
           |
          E_k
           |
          c4

To make this the input, we have the IV = cn-1 (the last cipher block of the previous message
is the initialization vector for the next message) and xor that out:

m'0 = c3 ⊕ m* ⊕ cn-1

Here m* = 00000000 can be left out and then

m'0 = c3 ⊕ cn-1 = 01011010

If 01011010 is used for m'0, then

c'0 = Ek (cn-1 ⊕ c3 ⊕ m* ⊕ cn-1 ) = Ek (c3 ⊕ m* )

because equal values cancel out in xor, and that is the same as in CBC mode

c'0 = Ek (c3 ⊕ m4 )

Knowing c3, we can construct c3 ⊕ m* ⊕ cn-1 to pass into the encryption function. We don't
know what m4 is, but we know that if this result is the same, then

c'0 = Ek (c3 ⊕ m* ) = c4

and that tells us whether m4 = m*.


Example 63:
Generalize Example 62:
m'0 can be picked as
m'0 = cn-1 ⊕ ci-1 ⊕ m*
where ci-1 is the cipher block right before the one we want to test and m* is the guess. That
means: if c'0 = ci, then we know mi = m*.
If the block size is small, we can guess all of these blocks. Using a 64-bit block cipher (such as
3DES) we have a 64-bit block size; using AES we have a 128-bit block size.
The attack is useful if the attacker can guess these 64 bits in a useful way and has a way to
control the message, which is not a small danger.
Definition Browser Exploit Against SSL/TLS (BEAST)
One way is if the attacker can figure out many of the bits (say 56 out of 64, with 8 bits
unknown); then there is an attack called BEAST, which uses this cryptographic weakness and
injects JavaScript into the page. Then the attacker can control the requests enough to actually
use this guessing. Guessing only 8 bits at a time, you expect to need about 128 guesses for
each 8 bits.
If the attacker controls JavaScript on the page that the victim is requesting over TLS, then
the JavaScript can make repeated requests to the server. Those are still part of the encrypted
session, but the attacker can control what the requests are and perhaps can design a request
that the server will respond to in a particular way, giving the attacker control of the first
block in the next message.
Example 64:
To exploit the BEAST vulnerability in HTTPS, suppose the attacker has intercepted the
ciphertext, using 4-bit blocks:

m0      m1      m2      m'0
 |       |       |       |
c0      c1      c2      c'0
1000    1101    0011     ?

where c0, c1, c2 are the intercepted cipher blocks of the message blocks m0, m1, m2.
If an attacker wants to find out if m1, the second block in the intercepted message, is equal
to 1010, then the attacker should use as the target value m'0 = 0001 (the first block in the
next response).
We can test if m1 = 1010. In order to test this, the value going into the encryption with m'0
must be 1010 ⊕ c0 (the guess of m1 xored with c0). It follows

1010 ⊕ c0 = m'0 ⊕ c2   =>   m'0 = 1010 ⊕ c0 ⊕ c2 = 1010 ⊕ 1000 ⊕ 0011 = 0001
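The arithmetic of this example can be checked with a few lines of Python; the helper xor on bit strings is hypothetical, introduced only for this sketch:

```python
def xor(a: str, b: str) -> str:
    # xor two equal-length bit strings, e.g. xor("1010", "1000") == "0010"
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

# Intercepted cipher blocks from the example (4-bit blocks):
c0, c1, c2 = "1000", "1101", "0011"
guess = "1010"   # the guess m* for message block m1

# The IV of the next message is the last cipher block c2, so to test
# whether m1 == m* the attacker injects m0' = m* xor c0 xor c2:
m0_prime = xor(xor(guess, c0), c2)
print(m0_prime)  # -> 0001
```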

Theorem 5.4.1:
The client can trust that it is communicating with the intended server, because the client
verifies the certificate using some other key, checks that it matches the server, and obtains the
server's public key from it.

Proof:
We need some other key that the client already trusts to verify the certificate; then the client
knows it is the right server and knows the server's public key 

5.5 TLS Information Leaks


Assume the encryption works fine, the TLS Handshake Protocol is good and all messages
between C and S are encrypted, but the attacker can listen on this channel (all messages).
Assuming the attacker can't break any encryption but can observe all the messages on this
channel, then the attacker can learn:

The approximate size of the requests

The approximate size of the responses

The pattern of requests and responses

Due to optimizations in HTTP there are often multiple responses to one request, so the
pattern is distinctive.

5.6 Certificate
Definition Certificate
A certificate can be used to verify that a public key belongs to a server in a network.
The certificate has to include something that communicates the public key of the server to
the client in a way that the client can trust.
One way to do this is that the server sends the certificate to the client in this form:

certificate = EkRCA (domain||kU S )

The certificate includes the domain of the server as well as the server's public key (the server
knows its own corresponding private key). This message is encrypted using the private key of
some certificate authority (CA) that the client has to trust.
To verify the certificate, the client decrypts it using the public key of the CA, which is
kU CA, and checks that the domain matches. If it is the right domain, the client can trust
that kU S is the right public key of the intended server.
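A toy version of issuing and checking such a certificate can be sketched with textbook RSA. This is a hash-and-sign variant rather than encrypting the whole message; the tiny primes and the helper names issue_certificate/verify_certificate are assumptions for illustration only:

```python
import hashlib

# Toy CA key pair (textbook RSA with tiny primes; never use such sizes)
p, q = 61, 53
n = p * q                            # modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent

def H(data: bytes) -> int:
    # Hash reduced mod n so it fits into a single RSA value
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def issue_certificate(domain: bytes, server_pub: bytes) -> int:
    # The CA signs H(domain || kU_S) with its private key
    return pow(H(domain + server_pub), d, n)

def verify_certificate(domain: bytes, server_pub: bytes, sig: int) -> bool:
    # Anyone knowing the CA public key (e, n) can check the signature
    return pow(sig, e, n) == H(domain + server_pub)

sig = issue_certificate(b"example.com", b"<server public key>")
```

A client that trusts the CA's public key (e, n) can now verify the binding between the domain and the server key.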

5.7 Certificate Details
The details of the certificate of the server google.com are:

The certificate hierarchy, which gives confidence in the certificate. Every entry in the
certificate hierarchy was signed by the issuer above. At the top of the certificate hierarchy
we have a self-signed certificate. We trust the top-level certificate in the certificate
hierarchy because we got it from a trusted source.

The issuer, which indicates who issued the certificate. For google.com it is:

CN=Thawte SGC CA
O=Thawte Consulting (Pty) Ltd.
C=ZA

The class of certification. The higher the class, the more identity checking was done.

The expiration time, i.e. the date the certificate is valid from (Not Before) until the date
it is valid to (Not After)

The subject, which gives the name of the owner of the certificate

The subject public key info has information about

The subject public key algorithm, which shows the algorithm used for encryption. For
google.com it is PKCS #1 with RSA Encryption
The public key, showing the modulus and the exponent. For google.com it is:
n:
128 Byte : A9 50 A4 1D 0F 96 8E 59 07 9F 13 3D 88 77 EC 4F 93 70 1A 5F
32 DA 7C 90 62 85 63 6D 5B 3C D5 44 BA 36 5A 6E 1B 94 0D EA 6B 1A 72 19
32 B2 1A C5 C3 FB 5A 66 33 FB 76 79 34 4C 11 AB DF 81 1E 90 0B 75 3D 91
22 8C 06 52 A7 EE 84 F0 0F 85 83 C1 E4 1C 2F 9C AD B4 98 21 D6 70 30 23
D5 A4 8E E2 5A 74 F4 4E E3 5E 5A CE 6E F4 51 C5 EB E5 FA F5 80 34 2C
B7 83 05 1F 0A 6C C5 C3 71 3C 82 FE 6D
e:
65537

That alone is not convincing, because any attacker could have generated it. An attacker can
generate a certificate that shows all this information. So we need a signature to verify all
this information.

The certificate signature algorithm is the algorithm used for signing the certificate. For
google.com it is PKCS #1 SHA-1 with RSA Encryption, where PKCS #1 is a padding
scheme and SHA-1 a hash function.

The certificate signature value is what we need to use to convince ourselves that this is a
valid certificate.
So the signature is
EkRissuer (H(⟨certificate content⟩))
where EkRissuer is PKCS #1 using RSA and H is SHA-1.
The important part here is that the signature is encrypted with the private key of the issuer.

5.8 Signature Validation
In order to trust a certificate, the client needs to validate the signature. To do that, the client
needs to know the corresponding public key kUissuer that can be used to decrypt the message
containing the hash value of the certificate content, and can then check that the hash value is
the same as the hash computed over the certificate itself.
Definition Public Key Infrastructure (PKI)
The PKI comprises the ways of distributing public keys. We need a way to securely know the
public key of the issuer. Once we know that, we can use the certificate to learn the public
key of the website.
Example 65:
Suppose Udacity would like to add digital signatures to the course accomplishment certificates
that would allow someone who knows kU CA, the public key of a certificate authority, to
validate that a certificate was generated by Udacity and earned by the person whose name is
in the certificate. Assume m is the certificate string, e.g.: 'Certificate of Accomplishment with
Highest Distinction earned by John Doe in CS387'.
Assume E is a strong public-key encryption algorithm, H is a cryptographic hash function and
r is a random nonce; then these schemes would allow Udacity to generate unforgeable,
verifiable, signed certificates:

cert = m||EkRUdacity (H(m))||EkRCA ('Udacity', kU Udacity )

cert = m||EkRUdacity (m ⊕ r)||EkRUdacity (H(r))||EkRCA ('Udacity', kU Udacity )

6 Using Cryptography to Solve Problems


This section will deal with

Anonymous Communication: using a chain of asymmetric encryption to enable 2 parties
to communicate over a network without anyone knowing that they are even talking to
each other

Voting, with the issue of providing an accurate tally, so that each voter knows their vote
was counted, without revealing who voted for whom. This will also be done using a chain
of asymmetric encryption, but with some added features to ensure the vote tally is correct

Digital Cash, a way to represent and transfer value similar to paper cash. This involves
new techniques such as

blind signatures, a centralized way

the Bitcoin network, a decentralized way that doesn't require any trusted authority but
uses proof of work to create value

6.1 Traffic Analysis


Definition Traffic Analysis
We use HTTPS to do a handshake first, agree on a secret key and then have a secure channel
between a client C and a server S.
The messages go through routers on the internet to reach a distinct destination. There are
maybe many hops between C and S, and along these hops go packets; using TLS every packet
consists of 2 parts:
One part of the packet is the encrypted message and
another part of the packet is the routing information. This is necessary so that every
router knows the direction to send the message.
Any eavesdropper who can see one of these messages can learn that C and S are talking to
each other.
This is a form of Traffic Analysis, where the important property we want to hide is not the
content of the message (which is encrypted and, if HTTPS works correctly, cannot be
understood by the eavesdropper); what we really want to hide is the fact that C is talking
to S.
The mere presence of communication between 2 parties is often enough information to cause
problems.

6.2 Onion Routing


Definition Onion Routing
Assume 2 parties A, B want to communicate without anyone being able to know that the 2
parties are communicating. We have a set of routers {Ri | i in N} = {R1 , R2 , . . .}. The Ri
are all connected to each other and to the 2 parties. So we have a fully connected network,
assuming we have secure channels between each pair of routers and between each router and
A and B.
Now, we select a random router sequence Ri1 , Ri2 , . . . , RiN with N in N. Then each message
in the chain

Mik = EkU Rik ('To: Rik+1 '||Mik+1 )

is the message sent to the router Rik : a message encrypted with the router's public key whose
content gives the next destination as well as the message that should go to that next
destination.
Since we can wrap as many layers as we want, this is called onion routing.
The more layers there are, the more hops we go through and the less risk there is that all of
the links of the chain collude or that a party can observe all the communication and learn
what is being communicated.
Assume an adversary can listen on the connection from A to Ri1 (the first router) and from
RiN (the last router) to B; then the adversary can learn that A and B are communicating if:

There is no other similar traffic on the whole network

There is a way to correlate the message from A to Ri1 and the message from RiN to B,
e.g. by introducing delays in one (start) message and checking if the delay appears in the
next (last) message.
.
Example 66:
For 2 parties and 3 routers R1 , R2 , R3 the network may look as follows:

        R1
      /    \
   A ------ R2 ------ B
      \    /
        R3

(A and B are each connected to R1 , R2 and R3 , and R1 , R2 , R3 are pairwise connected)

where each line is a secure channel.
Assume the random router sequence is R2 , R1 , R3 and the message is m. Then:

A should send to R2 :

EkU R2 ('To: R1 '||EkU R1 ('To: R3 '||EkU R3 ('To: B'||EkU B (m))))

R2 should send to R1 :

EkU R1 ('To: R3 '||EkU R3 ('To: B'||EkU B (m)))

R1 should send to R3 :
EkU R3 ('To: B'||EkU B (m))

R3 should send to B:
EkU B (m)

.
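The layered construction of this example can be sketched as follows. The "encryption" here is only a labelled wrapper so the layering stays visible; a real implementation would use the routers' public keys:

```python
def E(owner, payload):
    # Stand-in for public-key encryption: only `owner` may open the packet.
    return {"enc_for": owner, "data": payload}

def D(owner, packet):
    assert packet["enc_for"] == owner, "wrong recipient"
    return packet["data"]

def wrap(route, message):
    # Build the onion from the inside out: the innermost layer is E_kU_B(m).
    packet = E(route[-1], message)
    for hop, nxt in zip(reversed(route[:-1]), reversed(route[1:])):
        packet = E(hop, {"to": nxt, "msg": packet})
    return packet

onion = wrap(["R2", "R1", "R3", "B"], "m")

# Each router peels exactly one layer and learns only the next hop:
layer = D("R2", onion)
print(layer["to"])  # -> R1
```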
Definition TOR
A very successful project that provides onion routing as a service on the internet is called
TOR (torproject).
It provides a way to connect to a website without revealing to the website where you are
connecting from. You need to get a response as well; that means, in addition to sending a
route for reaching the website, you need a route for returning (you don't want to include your
IP address, which would reveal your location). This project selects routes in both directions:
a random set of routers to reach the website that you want to connect with, as well as a way
for that website to send a response along another random path.
A client who wants to reach a website via a random set of routers has to download the public
keys of all the routers to create a message that can be sent over each hop. This is done by
downloading a list from a trusted directory.

6.3 Voting
Definition Permutation
The notion of permutation is used with several slightly different meanings, all related to the
act of permuting (rearranging) objects or values. Informally, a permutation of a set of objects
is an arrangement of those objects into a particular order.
Example 67:
There are 6 permutations of the set {1, 2, 3}, namely:

{1, 2, 3}, {1, 3, 2}, {2, 1, 3}, {2, 3, 1}, {3, 1, 2}, {3, 2, 1}

because |{1, 2, 3}| = 3 and therefore 3! different permutations exist.
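The enumeration above can be reproduced with Python's standard library:

```python
from itertools import permutations

perms = list(permutations([1, 2, 3]))
print(len(perms))  # -> 6
print(perms)       # all 3! = 6 orderings of (1, 2, 3)
```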


Definition Voting
The security properties that a voting system should provide are:

Anonymity of voters: it shouldn't be possible for an adversary to know who someone
voted for.

Verifiability of the count, which would be easy if each voter were willing to publicly
declare their vote.

Coercion resistance, which means a voter can't prove who they voted for.

Reliability

Security

Efficiency

.
Definition Mixed Network (MIXnet)
The idea (onion routing is based on this) of a MIXnet is that n voters give votes
x1 , x2 , . . . , xn to one of the 3 parties A, B, C:

x1 , x2 , . . . , xn --> [ A: f ] --> [ B: g ] --> [ C: h ] --> v1 , v2 , . . . , vn

A collects the votes x1 , x2 , . . . , xn from the voters. These are inputs to a random
permutation f. The permutation randomly scrambles the order of the votes: the position a
vote came in at doesn't match the position it comes out at. The votes in the new order are
passed along. Now B also scrambles the votes, using a random permutation g, and those
outputs are passed along to C, which again applies a random permutation h.
In order for this to work, we want to know at the end what the actual votes are. The security
assumption is that A, B, C would not collude. So each can be trusted not to collude with the
other parties.
The question is what we should use for xi to enable this chain. A good start for the xi value
would be:
EkU A (EkU B (EkU C (vote)))
The problem with this is that A, B, C and any eavesdropper can learn all the votes, because a
vote is from the set of all possible votes {'A', 'B', 'C'}. Everybody who knows the public keys
kU A, kU B, kU C of the parties A, B, C can compute the value of xi for the three possible
votes, match that up to the incoming votes and know exactly what they are, so there is no
anonymity for the voters.
To avoid this, the voter needs to add some randomization to the chain.
Adding some random value r, selected by the voter and kept secret:

EkU A (EkU B (EkU C (vote||r)))

This works only if an eavesdropper doesn't collude with C, because C learns all the votes and
the random values by decrypting EkU C (vote||r) and can use that in collaboration with the
eavesdropper who heard EkU A (EkU B (EkU C (vote||r))) to figure out which voter voted for
which party. This solution won't really work. Carrying that solution through, we add a
randomness value to each of the layers:

EkU A (EkU B (EkU C (vote||r0 )||r1 )||r2 )

To validate a vote, instead of just publishing the vi values, it is required that C also publishes
the r0 values. This means the voter can check that the nonce the voter used is in that list.
In this case C has the most power, because

C can decide not to include votes

C could tamper with the votes, because the only validation that we have is that a voter
can check if their vote is in the list.

C can add extra votes, if the number of votes at the beginning is unknown.

C can change or replace some votes, and there is no way in this scheme yet for a voter to
prove that C cheated.

To prove a vote: if the tally is published as a list ⟨vk , r0k ⟩, then a voter can prove that their
vote was not included or was corrupted by revealing r0 , r1 , r2 and showing that they produce
xi . This requires the property of the encryption function that it is hard to find x, y with
x ≠ y such that

E(x) = E(y)

or this is even impossible if E is invertible.

This requires that the voter reveals the vote in order to show that the vote was not included by C.
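The effect of chaining the three permutations f, g, h can be simulated in a few lines of Python. The decryption of the layered ciphertexts is omitted here, so this sketch only shows that the tally survives the chain while the order (and hence the link back to each voter) is destroyed:

```python
import random

def mix(ballots):
    # One mixer: strip its encryption layer (omitted in this sketch) and
    # output the ballots under a secret random permutation.
    out = list(ballots)
    random.shuffle(out)
    return out

votes = ["A", "B", "A", "C", "B", "B"]   # x1, ..., xn
v = mix(mix(mix(votes)))                 # permutations f, g, h in sequence

print(sorted(v))   # the tally is preserved: ['A', 'A', 'B', 'B', 'B', 'C']
```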

6.4 Auditing MIXnet

Definition Auditing MIXnet
The idea here is that each participant in the MIXnet can audit some of the outputs of the
previous step. For this we need to provide extra input: instead of providing the vote only as
encrypted to party A, the voter provides it to party B as well. So all the incoming votes go to
parties A and B.
B is going to audit A by picking some random set of A's outputs (e.g. f (xm ) = ym ). Now B
wants A to prove that ym is a valid output. For this, A only needs to provide the nonce rm
(it is helpful to provide the key as well).
B needs to check that
EkU A (ym ||rm ) = xm
where B knows the value of xm .
This proves to B, for the particular output B asked for, that it corresponds to one of the
inputs to A. Now A will provide its output to B and C, and C will be able to verify that B
did the mix correctly by picking some random inputs and having B prove that they are
correct; in the final stage B will provide its output to C as well as to some validator who can
validate that C does the correct thing.
The number of outputs to be audited depends on the tradeoff between voter anonymity and
catching cheating. Auditing all the outputs would reveal exactly what the permutation was.
Example 68:
Suppose there are 100 votes and 20 are audited. The probability that a mixer can cheat on 4
votes without getting caught is about 40%. This follows from:

  number of ways to choose without picking a cheated vote     C(96, 20)
  ------------------------------------------------------  =  ----------
  number of ways to choose                                    C(100, 20)

     96!/(20!(96-20)!)       96 * 95 * ... * 77
  =  ------------------  =  -------------------  =  0.40
     100!/(20!(100-20)!)     100 * 99 * ... * 81
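The computation can be checked directly with Python's math.comb:

```python
from math import comb

# Probability that none of the 20 audited outputs is one of the 4 cheated
# votes: ways to pick 20 of the 96 honest votes, over all ways to pick 20
# of the 100 votes.
p_undetected = comb(96, 20) / comb(100, 20)
print(round(p_undetected, 2))  # -> 0.4
```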

Example 69:
For a MIXnet with 3 mixers, each auditing 20 out of 100 votes, the probability that an
eavesdropper could determine which voter cast a given output vote is 0.8%.
At each stage there is a 20% probability of seeing that particular vote at both sides of the
mixer and knowing how it was mixed. To chase a vote all the way back we need to see that
same vote each time, therefore (1/5)^3 = 1/125 = 0.008.
That means there is a tradeoff here. If we do more auditing (increase from 20% to a higher
value), that would increase the probability that a given voter could be identified. It would
also decrease the probability that the MIXnet could cheat without getting caught.
So we have a tradeoff between the privacy of the voters and the likelihood of detecting
cheating.

6.5 Digital Cash

Definition Digital Cash
We want to find some way to use numbers to approximate something that would be similar to
(maybe better or worse than) physical cash.
The properties of physical cash are:

Universally recognized value, which means everyone agrees that it's worth something

Easy to transfer between 2 individuals. That means transferring cash without going to a
bank or some other trusted third party.

Anonymous and untraceable. Not everyone wants this, e.g. governments.

Light and portable, which means easy to carry.

Hard to copy or forge. If someone could counterfeit the money, it wouldn't have
universally recognized value for long.
Digital cash may work as follows:

A -> B:  100$
B -> A:  m||EkRB (H(m))
A -> C:  m||EkRB (H(m))
C -> B:  m||EkRB (H(m))  (deposit)

A goes to the Indivisible Prime Bank B, which everyone knows is trustworthy, and gives 100$
to B. Then B writes a message m = 'I.P. Bank owes the bearer 100$' and B sends A a signed
version of that message: the message along with B's signature using B's private key (applied
to a hash of that message). The signature proves that it's a valid IOU from B, and now A has
something representing currency. A gives the currency to C (e.g. buys something at C's
shop), and C can take the note (the signed message from B) from A, give it to B and ask B
to deposit it into C's account.
Assuming everyone trusts the bank B and knows its public key, the digital cash is:

easy to transfer, by just sending the bits to another person.

anonymous and untraceable; that's the case if all IOUs the bank creates are the same.

light and portable

not hard to copy or forge (in fact trivial), because A can send the same bits multiple
times and no one knows which ones are valid and which ones are copies. Once we have
lost this property we have:

no universally recognized value

The solution to this is blind signatures.
Definition Blind Signature, cut-and-choose
This technique gives us a way to associate a unique ID with a bill, to be able to detect double
spending, but doesn't allow the bank to associate the unique ID on the bill with the person
who acquires that bill.
The idea is:

A -> B:  100$ and the messages m1 , m2 , . . . , mN
B:       randomly picks one message mk and checks all the others
B -> A:  EkRB (mk )

This means:
1. A will deposit a bill at the bank B, and along with the bill A generates a large number of
messages
mi = 'Bill #rAi . B owes the bearer 100$'
where rAi is some unique ID generated by A.

2. B uses cut-and-choose:
B randomly picks one of the messages mk and checks all other messages
m1 , m2 , . . . , mk-1 , mk+1 , . . . , mN for validity (correct value). If they are all valid,
then, without looking at message mk , B will blindly sign mk .

The point of this is that A generates all the messages and transfers them to B, but B doesn't
see the chosen one until B randomly picks it.
The probability of A being able to cheat without getting caught is 1/N.
To improve this scheme we use the RSA Blind Signature.

6.6 RSA Blind Signature

Definition RSA Blind Signature
An RSA blind signature is used to blind the message from the signer. The protocol looks as
follows:

A (knows kU B , m, k) -> B:  t
B -> A:  t^dB mod nB
A:  computes m^dB mod nB

This means:

1. A wants B to sign a message.

2. A knows B's public key kU B = (eB , nB ) (the key is an RSA key pair with the exponent
eB and the modulus nB ). m is the message A wants B to sign. A also picks a random
value k in ZnB .

3. A will compute t = m * k^eB mod nB . If k in ZnB and gcd(k, nB ) = 1 (that means k is
relatively prime to nB ), then k -> k^eB mod nB is a permutation of the values in ZnB .
It follows that k^eB mod nB is random, therefore m * k^eB mod nB is random, and so
t = m * k^eB mod nB is random, which doesn't reveal the value of m to B. So A can
safely send t to B.

4. B will sign that message using B's private key dB (the private exponent). That produces
the value t^dB mod nB , which B sends back to A.
Now, using t = m * k^eB mod nB , A can compute:

t^dB mod nB = (m * k^eB )^dB mod nB
            = m^dB * k^(eB dB) mod nB
            = m^dB * k mod nB

knowing that eB dB ≡ 1 (mod φ(nB )).

5. A can divide k out and gets

m^dB mod nB

That is the message m signed by B.
This means we need to be careful when outputting RSA decryptions (using the private
key): one can forge an RSA signature by multiplying 2 signatures. In this case, the
message that's being signed might be used to produce other messages.
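The whole exchange above can be carried out with textbook RSA and small primes. This is an illustration only: the tiny primes and variable names are assumptions, and real deployments use large keys and padded hashing:

```python
from math import gcd
import random

# Toy RSA key pair for the bank B (tiny textbook primes)
p, q = 1009, 1013
nB = p * q
eB = 17
dB = pow(eB, -1, (p - 1) * (q - 1))

m = 123456 % nB   # the message A wants signed, encoded as a number < nB

# 1. A picks a blinding factor k relatively prime to nB
k = random.randrange(2, nB)
while gcd(k, nB) != 1:
    k = random.randrange(2, nB)

# 2. A blinds the message and sends t to B
t = (m * pow(k, eB, nB)) % nB

# 3. B signs blindly: t^dB = m^dB * k (mod nB), since k^(eB*dB) = k
blind_sig = pow(t, dB, nB)

# 4. A unblinds by dividing out k (multiplying by k^-1 mod nB)
signature = (blind_sig * pow(k, -1, nB)) % nB
```

The unblinded value equals an ordinary RSA signature on m, yet B never saw m.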

6.7 Blind Signature Protocol

Definition Blind Signature Protocol
The blind signature protocol works as follows:

A -> B:  the N blinded messages built from m1 , m2 , . . . , mN and k1 , k2 , . . . , kN
B -> A:  a random x in {1, 2, . . . , N}
A -> B:  {ki | i ≠ x}
B -> A:  the blind signature on the remaining message mx
A:       unblinds it and obtains EkRB (mx )

1. A generates N messages, each one of the form

mi = 'Bill #ri . B owes the bearer 100$'

but with different values for ri . A also picks N k-values, blinds each mi with its ki and
sends all the blinded messages to B.

2. B picks some random value x in {1, 2, . . . , N} and wants to unblind all the other
messages. That means B sends x to A.

3. A sends {ki | i ≠ x} back to B, which B can use to verify that all of the messages other
than message mx are valid.

4. B unblinds all those messages by dividing out ki^eB and checks that the recovered
messages are valid.

5. At the end of the blind signature protocol A will be able to compute EkRB (mx ), the
message mx signed by B.

6. A can spend the bill, giving it e.g. to C, and C can verify the signature and deposit the
bill at B. B checks that this ID has not been spent before and believes that it's a valid
bill.

7. If A spends the bill again, e.g. with D, and D deposits it at B, B checks the signature
but finds that the rx value was previously used. So B knows the bill was double-spent.
The problem is that B doesn't know who double-spent the bill.

6.8 Deanonymizing Double Spenders

Definition Deanonymizing Double Spenders
The keys to deanonymizing double spenders are:

Cash is anonymous if spent once

The identity of the spender is revealed if cash is spent twice

To do this we need a one-time pad, whose key property is that we can split a message into
2 pieces and xor them to get the message back.
In the blind signature scheme A creates N messages like

mi = 'Bill #ri . Identity: (I1 , I2 , . . . , Im ). B owes the bearer 100$'

with an identity list, where

Ik = H(Ik0 )||H(Ik1 )

Ik is the concatenation of 2 hash values. H is a cryptographic hash function and the property
of Ik is:

Ik0 ⊕ Ik1 = A's identity

It's easy to create those I values:
set Ik0 to a random bit string and Ik1 to A's identity xored with Ik0 .
B needs, for every k, the value Ik0 to verify the checked messages in cut-and-choose, because
B needs to validate the hashes. B knows A's identity, so B can easily compute Ik1 by xoring
Ik0 with A's identity and check that both hashes are correct, which validates that each of
these identity components is correct.
Now we have a good way to reveal double spenders.
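Creating and recombining one such identity pair can be sketched like this (the identity string and variable names are illustrative):

```python
import hashlib, os

identity = b"A: Alice, account 42"

# Split the identity one-time-pad style: Ik0 random, Ik1 = identity xor Ik0
i0 = os.urandom(len(identity))
i1 = bytes(a ^ b for a, b in zip(identity, i0))

# The bill only carries the commitment Ik = H(Ik0) || H(Ik1)
Ik = hashlib.sha256(i0).digest() + hashlib.sha256(i1).digest()

# Either half alone looks random; both halves together reveal the identity:
recovered = bytes(a ^ b for a, b in zip(i0, i1))
print(recovered)  # -> b'A: Alice, account 42'
```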

6.9 Identity Challenges - Spending Cash
The protocol looks as follows:

A <-> B:  blind signature protocol with identities; A obtains the signed bill mk

A -> C:  mk
C -> A:  challenge c = (c1 , c2 , c3 , . . .) in {0, 1}^m
A -> C:  I1^c1 , I2^c2 , I3^c3 , . . .
C:       checks H(Is^cs ) for every s, then deposits the bill and all I values at B

A -> D:  mk  (double spending)
D -> A:  challenge d = (d1 , d2 , d3 , . . .) in {0, 1}^m
A -> D:  I1^d1 , I2^d2 , I3^d3 , . . .
D:       checks H(Is^ds ) for every s, then deposits at B; B sees rk twice

This protocol means:

At the end of the protocol between A and B, A has a signed message (bill) mk , where each
one of the identity pairs is one of those pairs of hashes.
To spend a bill, A sends mk to C, and C sends a challenge c back to A. The challenge is a
list of m random bits, e.g. c = [0, 1, 1, 0, . . .]. These tell A which parts of the identity A
needs to open. For each bit, A has to validate one part of the hash.
Remember that

Is = H(Is0 )||H(Is1 )

with the property

Is0 ⊕ Is1 = A's identity

If the sth challenge bit cs is 0, then A has to send Is0 ; if the sth challenge bit cs is 1, then A
has to send Is1 .
Now C can check whether H(Is^cs ) matches the identities in mk , for all identities. If
everything matches, C accepts the bill. When C deposits it, C has to send all received I
values to B.
Suppose A tries to spend the bill again. This time A sends the bill mk to D. D will run the
same protocol, making a challenge d, sending that challenge to A and getting back the
corresponding I values. D checks all the values before accepting the bill and deposits the bill
at B by sending all the received I values.
As long as one of the 2 challenge bits at the same position is different, B has both parts of
the identity. B knows that the bill was double-spent, because B sees rk twice, and B also
knows the identity of the person who obtained the bill, because B can xor the received I
values of C and D. If e.g. the sth bit of each challenge is different, B computes:

Is0 ⊕ Is1 = A's identity

Now B knows who double-spent the bill.


Example 70:
If A spends a bill twice, with challenge length m = 10, then the probability that A is caught
is 1 − (1/2)^10 ≈ 99.9%. A is not caught only if all bits of the 2 challenges are the same
(in every position), i.e. if C and D picked exactly the same challenge (c = d).
The challenges are 10 bits long, so the probability of that event is (1/2)^10, and the
probability of the inverse event (that A gets caught) is 1 − (1/2)^10 ≈ 99.9%.
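The arithmetic above can be checked directly:

```python
# Probability of catching a double spender with m-bit challenges:
# A escapes only if both challenges agree in every one of the m positions.
m = 10
p_escape = (1 / 2) ** m    # both challenges identical
p_caught = 1 - p_escape
print(f"{p_caught:.1%}")   # 99.9%
```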

6.10 Bitcoin
Definition Bitcoin
Bitcoin is a way to do digital cash in a completely decentralized way. This means there is no
bank and no trusted authority; everyone who participates in the network is considered a
peer, and all peers have an equal say as to what's valid and what's not. Bitcoin combines a
lot of ideas from previous protocols. The way it avoids needing a centralized bank is to keep
a record of every single transaction that ever happened. In order to track transactions we
have a chain of signatures, which shows the history of transactions. This works as follows:
Some coin c comes in. For A to transfer c, A creates a message including c as well as a note
that A transfers it to B. A signs that message with A's private key kRA.
Then A sends this message to B. B can verify the signature by using A's public key.
For B to transfer c to C, B adds a transfer message and signs the whole thing with B's
private key kRB. Now C can do the same:
C takes everything that B sent, adds a transfer message to it, signs the whole thing with
C's private key kRC, and then C can transfer it, and so on.
Every link in this chain, as long as they have all the public keys, can verify the entire
history of transactions.

A: c
A → B: E_kRA('To B:' || c)
B → C: E_kRB('To C:' || E_kRA('To B:' || c))
...
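The chain of signatures can be sketched with textbook RSA (tiny toy parameters, hash-then-sign without padding — purely illustrative, never secure; all names are assumptions):

```python
import hashlib

def keygen(p, q, e):
    """Toy RSA keypair (textbook RSA, illustration only)."""
    n = p * q
    return (n, e), (n, pow(e, -1, (p - 1) * (q - 1)))

def sign(private, msg: bytes) -> int:
    n, d = private
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(h, d, n)          # "encrypt" the hash with the private key

def verify(public, msg: bytes, sig: int) -> bool:
    n, e = public
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    return pow(sig, e, n) == h

pub_a, priv_a = keygen(61, 53, 17)   # A's keypair (kUA, kRA)
pub_b, priv_b = keygen(67, 71, 17)   # B's keypair (kUB, kRB)

coin = b"coin c"
msg1 = b"To B:" + coin                                  # A transfers c to B
link1 = (msg1, sign(priv_a, msg1))

msg2 = b"To C:" + msg1 + link1[1].to_bytes(4, "big")    # B extends the chain to C
link2 = (msg2, sign(priv_b, msg2))

# Anyone holding the public keys can verify the entire history of transfers.
assert verify(pub_a, *link1)
assert verify(pub_b, *link2)
```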

The problem in this protocol is that every peer can spend c as many times as they want.
In order for coins to have some value, they have to be scarce.
It has to be the case that

you can't spend them multiple times

you can't just create them

6.10.1 Providing Security
Definition timestamp, proof of work
Every time someone receives a transaction, they don't just accept it but send it into the
peer-to-peer network.
When someone wants to verify a coin c, they need to send it into the network, so the
transaction can be verified by all the other members of the network; before the transaction
is considered valid, we need to know that this c hasn't already been spent in some other way.
There are 2 important parts to this:

All nodes must agree on all transactions, which requires some sort of timestamp:
Nodes are going to receive messages at different times. We need to ensure that if someone
attempts to spend a coin twice, at most one of the two transactions is validated; otherwise
different parts of the network would have different views of the history of all
transactions. For providing this timestamp, we have to remember that some of the nodes
might be malicious. We have no way to know that all nodes are trusted, because anyone who
wants can join the network. We just need some honest parties to validate the transactions,
and we need to know that the dishonest parties can't invalidate the history of transactions.

The key to this is requiring a proof of work:

For each timestep, we are going to have a new block, and we need to know that creating
those new blocks requires work. If it requires enough work to advance the timestamp,
then it's unlikely that a malicious user can advance the timestamp faster than the whole
network. To make it hard we need some proof of work embedded in the timestamp. A
way to do a proof of work:

    Find a value x such that H(x) starts with 0^k

In order to prove you have done some amount of work, you need to find a value x where
the hash of x starts with k zeroes. Doing that requires work if H is a good cryptographic
hash function: the only way to find such an x is to keep guessing and looking at the output.

Example 71:
The average number of guesses of x needed to find one value where H(x) starts with 0^10 is
1024: if H is a good cryptographic hash it produces a uniform distribution, so the
probability that any bit is a 1 or a 0 is 1/2, and we need an output that starts with 10
zeroes, so 2^10 = 1024 guesses are needed on average, independent of the length of the
output.
For the number of trials needed to have a greater than 50% probability of finding one where
H(x) starts with 0^10, we compute the expected number of trials:

    min { k ∈ N : 1 − (1023/1024)^k > 0.5 } = 710

That means finding a hash value with certain properties is expected to require an amount
of work, and by adjusting those properties we can increase that amount of work.
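The guessing procedure above can be sketched directly (SHA-256 as H, a small k so the demo finishes quickly; the counter-to-bytes encoding is an arbitrary choice):

```python
import hashlib
from itertools import count

def proof_of_work(k: int, challenge: bytes) -> int:
    """Find x such that H(challenge || x) starts with k zero bits.
    Expected number of guesses: 2^k."""
    bound = 1 << (256 - k)   # k leading zero bits <=> digest value < 2^(256-k)
    for x in count():
        digest = hashlib.sha256(challenge + x.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < bound:
            return x

# k = 12 keeps the demo fast (about 4096 guesses on average).
x = proof_of_work(12, b"some challenge r")
digest = hashlib.sha256(b"some challenge r" + x.to_bytes(8, "big")).digest()
assert int.from_bytes(digest, "big") < 1 << (256 - 12)
```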

6.10.2 Finding New Blocks
In order to create a new block, which will validate the next history of transactions, it's
necessary to find some value x such that:

    H(H(state || x)) < target

where H is a SHA-256 hash, the state is a property of the network, and x keeps increasing
until one is found that satisfies the property; that provides the timestamp which allows a
new block to be generated. The timestamp uses 2 hash applications to increase the required
work.
This is the idea Bitcoin uses to generate timestamps: you have to keep finding new blocks, a
block validates a set of transactions, and to generate a new block you have to find a value
whose double hash is below the target.
So you have to find a value where the hash of the hash of the state concatenated with that
value is less than the target. The value of the target controls how hard it is to find such
a value x. The way Bitcoin is designed, the value of the target is adjusted so that the
expected time to find x is about 10 minutes. That's the time for the whole network to find
the next value. So the value of the target will keep decreasing (it is harder to find a
value lower than a smaller target) as the computing power of the network increases.
The state does 2 important things:

The state includes information about the previous block; this is how the timestamps form
a chain.

The state includes some information that is likely to be unique for each member of the
network. This is how Bitcoin avoids the case that all members find the same value x.
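The block-finding condition can be sketched as follows (a generous target so the search finishes quickly; the state string and encoding are illustrative stand-ins):

```python
import hashlib
from itertools import count

def find_block(state: bytes, target: int) -> int:
    """Search for x with H(H(state || x)) < target (the double-hash condition)."""
    for x in count():
        inner = hashlib.sha256(state + x.to_bytes(8, "big")).digest()
        outer = hashlib.sha256(inner).digest()
        if int.from_bytes(outer, "big") < target:
            return x

# Demo target: top 12 bits zero. The real network target is vastly smaller
# and is adjusted so the whole network needs about 10 minutes per block.
state = b"prev block hash || pending transactions || node-specific data"
target = 1 << (256 - 12)
x = find_block(state, target)
outer = hashlib.sha256(hashlib.sha256(state + x.to_bytes(8, "big")).digest()).digest()
assert int.from_bytes(outer, "big") < target
```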
Example 72:
Suppose the current value of the target starts with 0^34 1011...
If you find a value that hashes to a result starting with 0^35... or 0^34 1010... (something
less than the target), then you will be able to create the next block and earn the value of
a new block (currently 50 Bitcoins, 1 Bitcoin ≈ 1 USD), and the rest of the network can
verify that by computing the hash of the value you found. If the hashed value is less than
the target, they will add that block to the Bitcoin block-chain.
To do this computation fast, most participants use GPUs (graphics processing units), because
there are algorithms for implementing the hash function more efficiently there than on a
CPU (central processing unit).

6.10.3 Avoid Double Spending


In the network, at each timestamp, a new block is created that validates all the
transactions. At the time the block is created, this has to be the longest block-chain.
Someone could try to create an alternate block-chain: if someone wants to spend a coin
twice, the double spender has to create a chain that is longer than the longest chain.
When a transaction is validated by the network, all the signatures in the coin are checked
(using the transfer chain), but to prevent double spending there is also a check of the
chain of blocks, where the longest chain is the one that is viewed as correct.
Each peer in the network might see a different view of the block-chain. If they see
different views, the one with the longest chain is the one that will be viewed as the most
correct view of all the transactions. So every participant in the network is keeping track
of all the transactions, and the version of the transaction history that people trust the
most is the one with the longest chain.
If an adversary wants to create a longer chain with a different view of transactions, it
requires finding correct hash values. If the power of the network exceeds the power of the
adversary, then it's likely the network has a longer chain than the adversary can produce.
This avoids the need for a central authority but doesn't provide anonymity (in the
traditional way), because each transaction is known to the network.
One way of providing some anonymity is, instead of using your actual name in transactions,
to have a different identity for each transaction.
Example 73:
Assume

H is a strong cryptographic hash function that produces 128 output bits from any length
input. Computing H(x) takes 1 unit of time.

E indicates RSA encryption. kU is a known public key, but the corresponding private key
is not known. Computing E(x) takes 1000 units of time.

There are no memory limits, but the task has no access to precomputed values.

Then an ordering of tasks by how much work they prove (from least expected work to the most
expected work) is:

Find a value x such that E_kU(r ⊕ x) ends with 0^22: an easy way is to pick x = r, so that
r ⊕ x = 0 and E_kU(0) = 0. This requires no hashing and no encryptions.

Find 2 values x, y such that the last 24 bits of H(r||x) are equal to the first 24 bits of
H(r||y). Remembering the birthday paradox and strong collision resistance, the amount of
work is about sqrt(2^24) = 2^12 < 2^20 hashes.

Find a value x such that H(r||x) ends with 0^20, which will take about 2^20 hashes.

Find a value x such that H(H(r||x)) ends with 0^20, which takes twice as much work per
guess as a single hash, so it will take about 2^21 hashes.

Find a value x such that E_kU(x) = r is the most expensive, because this is equivalent to
breaking RSA.

7 Secure Computation
Definition Secure Computation
Assume 2 parties A, B have some private information. They want to perform some secure
computation, and at the end of it they learn the result of some function on both of their
inputs, but they don't learn anything about the other party's input. This is not achievable
using cryptography alone: what we achieve depends on people and assumptions about the
adversary, as well as on the system that actually runs the protocol.
The idea is that any discrete, fixed-size function can be turned into a circuit of logic
gates, and if we can find a way to implement logic gates securely, we can implement a whole
function this way.
Example 74:
Think of a logic gate as a truth table, here for the function AND:

    A  B  A ∧ B
    0  0    0
    0  1    0
    1  0    0
    1  1    1

To get the value of A ∧ B we need to know the values of A and B.


The goal is to encrypt this circuit in such a way that we can still evaluate it without
actually knowing what the inputs are and without learning what the output is.
But we want to produce an output that we can use for the next circuit. If we can evaluate
each gate while keeping the inputs and outputs encrypted, we can evaluate the whole circuit,
and at the end we can map the final result to a meaningful value.

7.1 Encrypted Circuits


The first step in creating an encrypted circuit is to replace the inputs with encrypted
values for each wire. That means we need some way to represent a 0 and a 1 on each wire.
Let's assume a0, a1 represent 0 and 1 on one wire and b0, b1 represent 0 and 1 on the other.
We replace the entries in the table with these values. So the new truth table is

    a0  b0  x0
    a0  b1  x0
    a1  b0  x0
    a1  b1  x1

Here the values are still in the same order. We want to hide any information, so we do some
permutation:

    0010  1100  0111
    0010  0011  1001
    1010  0011  0111
    1010  1100  0111

where a0, a1 ∈ {0010, 1010} and b0, b1 ∈ {1100, 0011}.
In this form the x values easily reveal all the a and b values (the row with the single
distinct output must be a1 ∧ b1). To hide this pattern we need the outputs x to be encrypted
with different keys. If we encrypted the x values with one single key, then either the
evaluator would be able to determine all the outputs (because the evaluator knows that key)
or the evaluator couldn't determine any of them (because the evaluator doesn't know that
key).
So we need to encrypt the outputs (the x values) with different keys.
A circuit evaluator who wants to decrypt x0 using a1, b0 can use

    E_{a1,b0}(x0)   or   E_{a1}(E_{b0}(x0))

This ensures that a circuit evaluator can decrypt this output and none of the others,
because an evaluator has to know both values a1, b0 to get x0.
In the garbled table we have the outputs encrypted with different keys corresponding to the
inputs that correspond to that output value:

    a0  b0  E_{a0,b0}(x0)
    a0  b1  E_{a0,b1}(x0)
    a1  b0  E_{a1,b0}(x0)
    a1  b1  E_{a1,b1}(x1)

Sending the whole table would reveal the values of a and b, so we remove the input columns,
randomly permute the output values, and add some padding:

    E_{a1,b0}(pad || x0)
    E_{a1,b1}(pad || x1)
    E_{a0,b1}(pad || x0)
    E_{a0,b0}(pad || x0)

Because each of these values is encrypted with a different key, the evaluator can't tell
which one is which, but the evaluator is still able to decrypt exactly one of them to
produce the right output: the evaluator tries to decrypt all the entries with the keys for
the input values it holds and uses the one that decrypts to pad || x.
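A garbled AND gate can be sketched as follows (xor with a hash-derived pad standing in for E; the recognizable zero padding tells the evaluator which row decrypted correctly — all names and sizes are illustrative):

```python
import hashlib
import random
import secrets

def enc(key_a: bytes, key_b: bytes, data: bytes) -> bytes:
    """Xor 'data' with a pad derived from both wire labels; since it is an
    xor pad, the same function also decrypts."""
    pad = hashlib.sha256(key_a + key_b).digest()[:len(data)]
    return bytes(p ^ d for p, d in zip(pad, data))

# Random wire labels standing in for 0 and 1 on each input wire and the output.
a = [secrets.token_bytes(16), secrets.token_bytes(16)]   # labels for A = 0, 1
b = [secrets.token_bytes(16), secrets.token_bytes(16)]   # labels for B = 0, 1
x = [secrets.token_bytes(8), secrets.token_bytes(8)]     # output labels x0, x1
PAD = b"\x00" * 8   # recognizable padding marks the row that decrypted correctly

# Garbled AND gate: encrypt pad||x_{i AND j} under each pair of input labels,
# then shuffle the rows so position reveals nothing.
table = [enc(a[i], b[j], PAD + x[i & j]) for i in (0, 1) for j in (0, 1)]
random.shuffle(table)

def evaluate(table, key_a, key_b):
    """Try every row; only the row matching the held labels yields the padding."""
    for row in table:
        plain = enc(key_a, key_b, row)
        if plain.startswith(PAD):
            return plain[len(PAD):]

assert evaluate(table, a[1], b[1]) == x[1]   # 1 AND 1 -> label for 1
assert evaluate(table, a[0], b[1]) == x[0]   # 0 AND 1 -> label for 0
```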

7.2 Garbled Circuit Protocol


Definition Garbled Circuit Protocol
The Garbled Circuit Protcol for 2 parties A (generator) and B (evaluator) may look as follows:

A → B: garbled tables for each gate, plus the wire labels for A's inputs
B:     evaluates the garbled circuit gate by gate → encrypted outputs x0, x1, ...
B → A: output o

This means:
1. In the beginning A and B agree on some circuit they want to evaluate; it takes inputs
   from both A and B.
2. A generates a garbled table for each logic gate in the circuit and sends the garbled
   circuit to B, as well as the wire labels for A's input values, which are random nonces,
   so B can't tell what they mean.
3. B evaluates the circuit using the garbled circuit protocol, decrypting one entry from
   each table; at the end B gets some output values and turns them into a semantic value.
   The problem is that B can't yet obtain the labels for his own inputs to the circuit,
   although B needs both A's and B's input labels to evaluate the tables.

Definition Oblivious Transfer (1 out of 2 OT)
Oblivious Transfer means that A offers 2 values x0, x1 and B obtains exactly 1 of those
values, xb with b ∈ {0, 1}. So B learns one of x0 or x1, and A doesn't know which one B
obtained:

A (x0, x1, w0, w1, kUA = (n, e))            B (b ∈ {0, 1})
A → B: x0, x1
B:     picks random r, computes v = xb + r^e mod n
B → A: v
A:     k0 = (v − x0)^d mod n, k1 = (v − x1)^d mod n,
       w'0 = w0 + k0, w'1 = w1 + k1
A → B: w'0, w'1
B:     recovers wb = w'b − r

This means:

1. A has 2 wire labels w0, w1, which correspond to the inputs of some gate, and A wants
   to transfer one of them to B without revealing the other one. We use A's public key
   kUA = (n, e), which is known to B. The goal is to transfer one of the wire labels to B.
   A creates 2 random values x0, x1, separate from the wire labels, which are transferred
   to B.

2. B picks some random value b ∈ {0, 1} and picks xb from the transferred values. B also
   picks some random value r to blind the response, because B can't allow A to learn
   whether B picked x0 or x1, which would reveal B's input.

3. B computes

       v = xb + r^e mod n

   to hide the value of xb by adding a random value raised to the eth power.

4. B sends v to A.

5. A performs 2 different RSA decryptions:

       k0 = (v − x0)^d mod n
       k1 = (v − x1)^d mod n

6. A sends a message to B that allows B to learn 1 wire label. A adds the keys to the wire
   labels:

       w'0 = w0 + k0
       w'1 = w1 + k1

   and sends w'0, w'1 to B.

7. B computes w1 (if B picked b = 1) as w'1 − r, or w0 (if B picked b = 0) as w'0 − r.
Example 75:
Assuming B picks b = 1 in the OT protocol, then k0 is meaningless but

    k1 = r

because, considering ed ≡ 1:

    k1 = (v − x1)^d mod n = ((x1 + r^e mod n) − x1)^d mod n
       = (r^e)^d mod n
       = r^(ed) mod n
       = r^1 mod n = r

Now B can easily compute w1: since w'1 = w1 + k1 = w1 + r, we have w1 = w'1 − r.
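The whole OT exchange can be traced with toy numbers (textbook RSA with the classic tiny parameters p = 61, q = 53; the wire-label values are arbitrary — an illustrative sketch, not a secure implementation):

```python
import random

# Toy RSA parameters: n = 3233, e = 17, d = 2753.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

# A's side: two wire labels to offer, and two random values x0, x1 sent to B.
w = [411, 713]
x = [random.randrange(n), random.randrange(n)]

# B's side: choose b, blind x_b with a random r.
b = 1
r = random.randrange(1, n)
v = (x[b] + pow(r, e, n)) % n                  # v = x_b + r^e mod n

# A's side: two RSA "decryptions", then mask each wire label with a key.
k = [pow((v - xi) % n, d, n) for xi in x]      # k_i = (v - x_i)^d mod n
w_prime = [(wi + ki) % n for wi, ki in zip(w, k)]

# For i = b the blinding unwraps: k_b = (r^e)^d = r, so B recovers w_b = w'_b - r.
assert k[b] == r
assert (w_prime[b] - r) % n == w[b]
```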


So B obtains his inputs with OT. To enable B to learn his input labels for the circuit, A
sends the garbled circuit along with A's input labels, and B runs OT for each of his input
wires. B can then evaluate the circuit, and from the encrypted output wire B can obtain the
result of the circuit execution and share that with A, or the parties flip roles and do it
again so that A also obtains the output.
To actually obtain the output value (the outputs of the garbled tables are all encrypted):
at the end of the execution B has a list of encrypted wire labels, and B wants to turn that
into semantic output.

