N19679 Draft Revised SC27 SD12 Clean TT 20190520
SC 27 SD12
REPLACES: N18984
DATE: 2019-05-20
PROJECT: SC 27 SD12
STATUS: In accordance with Resolution 5 of the Comment Resolution Meeting for SC 27/WG 2
projects held on 1st - 5th April, 2019 at Tel Aviv, Israel (contained in SC 27 N19861),
this document is created. It is circulated to SC 27 National Bodies and liaison
organizations for comments and contributions to be considered at the Paris meeting.
PLEASE NOTE: This document is also freely accessible from the public SC 27 web site at:
http://www.din.de/go/jtc1sc27.din.de/sbe/sd12 "Downloads" when approved.
DUE DATE:
MEDIUM: http://isotc.iso.org/livelink/livelink/open/jtc1sc27
The block ciphers described in the following sections are: AES, Camellia, SEED, MISTY1, CAST-128,
HIGHT, PRESENT, CLEFIA and Triple-DES.
1.1 AES
General
The following is an analysis of the AES (Advanced Encryption Standard) in reference to ISO/IEC
18033-3:2010 Information technology – Security techniques – Encryption algorithms – Part 3: Block
ciphers subclause 5.2 “AES”.
The AES is a symmetric block cipher which processes data blocks of 128 bits using a key of 128, 192
or 256 bits. The algorithm is therefore referred to as AES-128, AES-192 or AES-256 according to the
key length. According to [26], the versions with longer keys are slower by 20% and 40% respectively.
The effective strength of these three versions of the cipher is equal to the length of the key (128 bits
for AES-128, etc.).
The AES can be attacked by single-key recovery methods such as impossible differential cryptanalysis
and Square attacks [27][28]. Impossible differential cryptanalysis, instead of tracking the propagation
of differences through the computation of the algorithm, tracks differences that have probability 0 of
occurring at a certain stage of the algorithm.
The AES-192 and the AES-256 are vulnerable to related-key attacks (see Clause 10 on related-key
attacks) [29],[30]. The best attack on the AES-192 requires data complexity 2^123 and time complexity
2^176. For the AES-256, the attack performs better and requires data complexity 2^99.5 and time
complexity 2^99.5. These attacks work because of the difference between the key length and the block
size, which has an impact on the key schedule. That is also the reason why AES-128 is not vulnerable.
Nevertheless, using independent and randomly generated keys avoids these attacks.
As the effective strength of the algorithm is 128 bits (respectively 192 and 256 bits), a brute force attack
requires 2^128 (respectively 2^192 and 2^256) computations. The biclique technique presented in [31]
reduces the complexity of the brute force attack. Bogdanov et al. show that it needs 2^126.1 time
complexity and 2^88 data complexity to break AES-128, 2^189.7 time complexity and 2^80 data
complexity to break AES-192, and 2^254.4 time complexity and 2^40 data complexity to break AES-256.
These few attacks are mostly theoretical for now and demand a lot of computing power. Using the
AES in current and future applications is recommended.
The attack on AES-128 can be compared with a theoretical multi-target attack where, given 2^40
plaintext/ciphertext pairs, all with the same plaintext block but generated under different keys, an
attacker will expect to find one of the keys after 2^88 key guesses.
1.2 Camellia
General
The following is an analysis of Camellia in reference to ISO/IEC 18033-3:2010 Information technology
– Security techniques – Encryption algorithms – Part 3: Block ciphers subclause 5.3 “Camellia”.
Camellia is a symmetric block cipher processing data blocks of 128 bits using a key of 128, 192 or 256
bits. The versions with 192-bit and 256-bit keys are both 33% slower than the 128-bit version.
The effective strength of these three versions of the cipher is equal to the length of the key.
However, because of its algebraic properties, Camellia is theoretically vulnerable to algebraic attacks;
these demand a number of computations far beyond current computational capacities and have
therefore proved ineffective in practice. Hence, Camellia is approved for use in modern and future
applications.
1.3 SEED
General
The following is an analysis of SEED in reference to ISO/IEC 18033-3:2010 Information technology –
Security techniques – Encryption algorithms – Part 3: Block ciphers subclause 5.4 “SEED”.
SEED is a symmetric block cipher processing data blocks of 128 bits using a key of 128 bits.
The effective strength of the cipher is equal to the length of the key.
1.4 MISTY1
General
The following is an analysis of MISTY1 in reference to ISO/IEC 18033-3:2010 Information technology
– Security techniques – Encryption algorithms – Part 3: Block ciphers subclause 4.3 “MISTY1”.
MISTY1 is a symmetric block cipher processing data blocks of 64 bits using a key of 128 bits.
1.5 CAST-128
General
The following is an analysis of CAST-128 in reference to ISO/IEC 18033-3:2010 Information
technology – Security techniques – Encryption algorithms – Part 3: Block ciphers subclause 4.4
“CAST-128”.
CAST-128 is a symmetric block cipher processing data blocks of 64 bits using a key of 128 bits.
The effective strength of the cipher is equal to the length of the key.
1.6 HIGHT
General
The following is an analysis of HIGHT in reference to ISO/IEC 18033-3:2010 Information technology
– Security techniques – Encryption algorithms – Part 3: Block ciphers subclause 4.5 “HIGHT”.
HIGHT is a symmetric block cipher processing data blocks of 64 bits using a key of 128 bits.
1.7 Triple DES (TDEA)
General
The encryption technique commonly known as Triple DES (Data Encryption Standard) is referred to
as the Triple Data Encryption Algorithm (TDEA) by ISO/IEC 18033-3:2011 in 4.1 ‘TDEA’ (see [7]).
Triple DES is a symmetric block cipher that can process data blocks of 64 bits, using keys with a length
of 128 (or 192) bits, of which 112 (or 168) bits can be chosen arbitrarily and the rest may be used for
error detection.
The effective strength of three-key Triple DES is at most 112 bits. A security analysis and practical
demonstration of attacks on TDEA in several real-world protocols, done by Karthikeyan Bhargavan
and Gaëtan Leurent of Inria (Paris), in [23], provide evidence that the collision attack on TDEA
represents a serious security vulnerability for many common uses of these protocols — including the
HTTPS protocol for secure Internet connections. Moreover, the analysis shows that the security
vulnerability remains serious unless more stringent limits are imposed on the amount of data that can
be encrypted under a single 3-key bundle than the currently recommended data limit. The maximum
amount of plaintext allowed to be encrypted under a single TDEA 3-key bundle should be 2^20 (64-bit)
blocks.
The remainder of this clause focuses on the effective strength of two-key Triple DES.
However, in more recent research [24], it has been shown that the van Oorschot-Wiener attack will still
work even if the plaintext/ciphertext pairs available to a cryptanalyst were generated using a multiplicity
of keys. That is, modifying the statement in the previous paragraph, if 2^40 plaintext/ciphertext pairs are
known, possibly computed using more than one secret key, then one of the secret keys can be
determined using of the order of 2^40 storage and 2^80 encryption/decryption operations. This means
that some of the precautions previously proposed to ensure the security of two-key Triple DES in
practice, no longer apply. In particular, it was previously recommended in SD12 that, depending on the
required security level, the maximum number of plaintexts encrypted under a single key should be
limited. Whilst updating keys remains desirable (as always) and can limit the impact of a single key
compromise, the more recent research findings mean that in any environment in which two-key Triple
DES is used, if it is feasible for a cryptanalyst to obtain 2^t plaintext/ciphertext pairs (possibly
generated using multiple keys), then the security of the algorithm is upper bounded by 2^(120−t)
encryption/decryption operations. For example, if it is feasible for a cryptanalyst to accumulate 2^40
plaintext/ciphertext pairs, perhaps over a long period of time and using multiple keys, then of the order
of 2^80 DES encryption/decryption operations will be sufficient to find one of the keys.
MACs can also be used with TDES, and the use of ISO/IEC 9797-1:2011 algorithm 3 with DES with
two independent keys remains quite widespread. Annex C of ISO/IEC 9797-1:2011 summarizes the
security of this MAC technique when used with DES in the third row of the Table C.2. It is mentioned
in this table that the most effective key recovery attack using known message/MAC pairs requires 2^32
such pairs generated with the same key. Mitchell [24] shows that a different key recovery attack, the
generalized van Oorschot-Wiener attack, will work even if the known message/MAC pairs are
generated using a multiplicity of keys. This means that the security offered by the algorithm TDEA
combination is less than previously believed, and that changing keys frequently will not in itself prevent
attacks.
To sum up, the security level of two-key Triple DES is less than the number of bits in a key (112 bits)
because of a ‘meet-in-the-middle’ attack on the Triple DES construction. An attacker with access to
2^40 plaintext/ciphertext pairs, possibly generated using multiple keys, will be able to launch a search
for one of the keys used with effort equivalent to an exhaustive key search for an 80-bit key, using the
attack described in [24]. The implication of this is that the effective key-length of two-key Triple DES in
specific applications can only be regarded as being at most 80 bits (instead of 112 bits).
For many practical applications, this degradability of the effective key-length is not necessarily a
problem, as access to 2^40 plaintext/ciphertext pairs may be deemed unlikely. However, conservative
system security design suggests that this assumption needs to be very carefully analyzed in the
context of system use1 before continuing to employ this encryption technique.
1 It is noted in [24] that in some cases only partial plaintexts need to be known to the attacker.
1.8 PRESENT
General
The following is an analysis of PRESENT in reference to ISO/IEC 29192-2:2012 Information
technology – Security techniques – Lightweight cryptography – Part 2: Block ciphers subclause 5.1
“PRESENT”.
PRESENT is a lightweight symmetric block cipher processing data blocks of 64 bits using a key of 80
or 128 bits. PRESENT-80 and PRESENT-128 refer respectively to these key lengths.
The effective strength of these two versions of the cipher is equal to the length of the key.
1.9 CLEFIA
General
The following is an analysis of CLEFIA in reference to ISO/IEC 29192-2:2012 Information technology
– Security techniques – Lightweight cryptography – Part 2: Block ciphers subclause 6.1 “CLEFIA".
CLEFIA is a lightweight symmetric block cipher processing data blocks of 128 bits using a key of 128,
192 or 256 bits.
The effective strength of these three versions of the cipher is equal to the length of the key.
2 Modes of operation
A mode of operation is an algorithm which encrypts/decrypts a message using a symmetric-key block
cipher in order to provide confidentiality and possibly also authenticity. In the following, five modes
providing confidentiality are presented in the first subclause, each in reference to ISO/IEC 10116:2017
Information technology — Security techniques — Modes of operation for an n-bit block cipher,
Clauses 6 to 10. Annex B of ISO/IEC 10116 also provides “Properties of the modes of operation and
important security guidance”.
In the second subclause, six authenticated encryption modes are presented, all of them from ISO/IEC
19772:2009 Information technology – Security techniques – Authenticated encryption.
These modes of operation should be used with the block ciphers presented in Clause 1 and
standardized in ISO/IEC 18033-3:2010 Information technology – Security techniques – Encryption
algorithms – Part 3: Block ciphers.
The problem with this method is that identical plaintext blocks are encrypted into identical ciphertext
blocks. Thus, repetitions in the plaintext are not hidden, which is a confidentiality issue.
For the very first block, as there is no “previous” ciphertext block, an IV (Initial Vector)2 is used. The IV
must be independent and random for each plaintext in order to make the algorithm indistinguishable
under chosen-plaintext attack. If multiple plaintexts are encrypted under the same key, the IV must be
generated uniformly at random for each plaintext.
Before encryption begins, CBC needs to apply padding to the plaintext, and several padding oracle
attacks against CBC implementations exist that might be exploited in practice, such as [63].
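As an illustration of these two requirements (a fresh, uniformly random IV per message and padding of the final block), the following sketch uses AES-CBC with PKCS#7 padding via the pyca/cryptography package (version 3.1 or later, where no explicit backend is needed); the key and message values are arbitrary placeholders and this is a minimal sketch, not a complete secure protocol (in particular it provides no integrity protection).

# Sketch of CBC encryption with a fresh random IV and PKCS#7 padding,
# using the pyca/cryptography package (key and message are placeholders).
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cbc_encrypt(key: bytes, plaintext: bytes) -> bytes:
    iv = os.urandom(16)                       # fresh, uniformly random IV per message
    padder = padding.PKCS7(128).padder()      # pad to a multiple of the 128-bit block
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + encryptor.update(padded) + encryptor.finalize()  # IV need not be secret

ciphertext = cbc_encrypt(os.urandom(16), b"example plaintext")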
CFB requires an independent and random IV but it does not need to be secret. If multiple plaintexts
are encrypted under the same key, the IV must be generated uniformly at random for each plaintext.
CFB transforms a block cipher into a self-synchronizing stream cipher: a ciphertext block, instead of
depending only on a plaintext block and a key (as in ECB), depends on the previous ciphertext block,
that is to say on the whole preceding plaintext.
To perform decryption, the user only needs the encryption function of the block cipher. In other words,
when the block cipher encryption and decryption functions are different (e.g. for the AES), the block
cipher decryption function does not need to be implemented for decryption in CFB mode.
Besides, this mode does not use padding, thus it is not vulnerable to padding oracle attacks.
2 The Initial Vector (IV) refers to “Starting Variable” (SV) in ISO/IEC 10116.
However, when an error occurs, it propagates into the subsequent ciphertext blocks, but because of
the self-synchronizing property the user only loses a limited part of the plaintext.
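To illustrate why only the forward (encryption) function of the block cipher is needed, the sketch below decrypts full-block CFB ciphertext using AES in the forward direction only. It is a minimal sketch assuming whole 128-bit blocks, not the general s-bit CFB variant of ISO/IEC 10116, and it relies on the pyca/cryptography package.

# Minimal sketch: full-block CFB decryption using only the AES forward function.
# Assumes the ciphertext length is a multiple of 16 bytes; illustration only.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cfb_decrypt_full_block(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    plaintext = b""
    feedback = iv
    for i in range(0, len(ciphertext), 16):
        block = ciphertext[i:i + 16]
        # Keystream block = E_K(previous ciphertext block); AES decryption is never called.
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        keystream = enc.update(feedback) + enc.finalize()
        plaintext += bytes(c ^ k for c, k in zip(block, keystream))
        feedback = block
    return plaintext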
This mode requires a unique IV (it must be a nonce) for each plaintext; otherwise the security is
compromised. If multiple plaintexts are encrypted under the same key, the IV must be generated
uniformly at random for each plaintext or be a counter.
Moreover, the security may also be compromised if any of the input blocks (used to create the
keystream blocks that encrypt the message) is reused for another message.
In OFB, there are no dependencies between the ciphertext blocks. Because of this, the mode is
suitable when the user cannot tolerate error propagation.
Like CFB, OFB does not use padding and only needs the encryption function of the block cipher.
The current ciphertext block is created with the XOR of the current plaintext block and the keystream
block. Each successive keystream block is created by encrypting an input block called counter block.
Each counter block must be distinct from every other counter block.
Moreover, if several plaintexts are encrypted under the same key (of the block cipher), all the counter
blocks from each plaintext must be distinct too.
The counter can be any function which outputs a sequence that does not repeat during the lifetime of
a key. The increment-by-one function is the simplest and the most commonly used. Each counter block
can be concatenated, added or XOR-ed with a random nonce before being encrypted by the block
cipher. If the nonce is not random, it should only be concatenated, in order to prevent security issues.
Like OFB, this mode has no plaintext dependency: each ciphertext block does not depend on the
previous one.
Like CFB and OFB, CTR does not use padding and only needs the encryption function of the block
cipher.
However, the counter must be synchronized between the sender and the receiver. If not, errors during
the decryption could occur.
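The following sketch shows one common realisation of the counter construction described above: a nonce concatenated with an incrementing per-block counter, processed with the block cipher's forward function only. The 96-bit nonce / 32-bit counter split is an illustrative choice, not the only layout permitted by ISO/IEC 10116, and the example assumes the pyca/cryptography package.

# Sketch of CTR encryption: keystream = E_K(nonce || counter), counter incremented per block.
# The 96-bit nonce / 32-bit counter layout is an illustrative choice.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def ctr_encrypt(key: bytes, nonce12: bytes, plaintext: bytes) -> bytes:
    out = b""
    for i in range(0, len(plaintext), 16):
        counter_block = nonce12 + (i // 16).to_bytes(4, "big")   # distinct for every block
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        keystream = enc.update(counter_block) + enc.finalize()
        chunk = plaintext[i:i + 16]
        out += bytes(p ^ k for p, k in zip(chunk, keystream))
    return out

# Decryption is identical: XOR the ciphertext with the same keystream.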
OCB 2.0 can encrypt plaintexts of any bit length which does not have to be a multiple of the block
length, and no padding is ever used.
OCB 2.0 requires the use of a new nonce for each encryption; the nonce does not need to be secret
or random, so a counter is recommended.
Latest attacks on OCB 2.0 show that OCB 2.0 is fully broken, see 11.5 for more details.
Key Wrap
The Key Wrap problem, posed by NIST in the late 1990s, was to develop secure and efficient
cipher-based algorithms to encrypt keys; accordingly, this mode is not efficient for encrypting long
plaintexts. This mode can be used with any 128-bit block cipher; AES, for example, is a reliable choice.
Key Wrap can encrypt plaintexts up to a maximum length, though they must be at least 128 bits long
and a multiple of 64 bits, i.e. of the form 64m for some integer m > 1.
CCM
CCM (Counter with CBC-MAC) was presented by Whiting et al. in [59].
Before using CCM, the originator and recipient of the data must agree on the bit length of the tag and
the octet length of the message length field; the latter determines the maximum length of plaintext that
can be encrypted.
This mode can be used with any 128-bit block cipher; AES, for example, is a reliable choice. Moreover,
the bit length of any data encrypted by this mechanism must be a multiple of 8, i.e. the data must be
an octet string.
EAX
EAX was presented by Bellare et al. in [60] as an alternative to CCM.
Before using EAX, the originator and recipient of the data must agree on the bit length of the tag.
EAX can encrypt plaintexts of any bit length which does not have to be a multiple of the block length,
and no padding is ever used.
Encrypt-then-MAC
Encrypt-then-MAC, as the name indicates, is a combination of any encryption mechanism and any
MAC scheme. In [61], Bellare et al. presented a proof of security for this composition under the
assumption that both the encryption mechanism and the MAC scheme possess the necessary security
properties.
Before using Encrypt-then-MAC, the originator and recipient of the data must agree on the block cipher
to be used, the MAC scheme to be used and a method to derive two secret keys from a single secret
key (one to be used with the block cipher, the other to be used with the MAC scheme).
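A minimal sketch of the Encrypt-then-MAC composition is given below; it assumes AES in CTR mode as the encryption mechanism, HMAC-SHA-256 as the MAC scheme and HKDF to derive the two independent keys from a single master secret. These particular choices and the pyca/cryptography package are illustrative assumptions, not mechanisms mandated by ISO/IEC 19772.

# Sketch of Encrypt-then-MAC: derive two keys, encrypt, then MAC the nonce and ciphertext.
# AES-CTR + HMAC-SHA-256 + HKDF are illustrative choices.
import os
from cryptography.hazmat.primitives import hashes, hmac
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_then_mac(master_key: bytes, plaintext: bytes) -> bytes:
    okm = HKDF(algorithm=hashes.SHA256(), length=64, salt=None,
               info=b"enc-then-mac example").derive(master_key)
    enc_key, mac_key = okm[:32], okm[32:]       # two independent keys
    nonce = os.urandom(16)
    encryptor = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()
    tag = hmac.HMAC(mac_key, hashes.SHA256())
    tag.update(nonce + ciphertext)              # MAC is computed over the ciphertext, not the plaintext
    return nonce + ciphertext + tag.finalize()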
GCM
GCM (Galois/Counter Mode) was presented by McGrew and Viega in [62].
Before using GCM, the originator and recipient of the data must agree on the bit length of the tag.
This mode can be used with any 128-bit block cipher; AES, for example, is a reliable choice.
GCM has been proven secure in the concrete security model, and is widely used because of its
performance results.
3 Stream Ciphers
A stream cipher is a symmetric key algorithm which uses a keystream to encrypt a plaintext in a bitwise
or block-wise manner. A keystream is a sequence of bits or blocks of bits.
Two types of stream cipher exist: the self-synchronizing stream cipher (e.g. see the CFB mode
previously) and the synchronous stream cipher (e.g. see the OFB or CTR mode previously).
3.1 MUGI
General
The following is an analysis of MUGI in reference to ISO/IEC 18033-4:2011 Information technology –
Security techniques – Encryption algorithms – Part 4: Stream ciphers subclause 8.1 “MUGI key stream
generator”.
3.2 SNOW 2.0
General
The following is an analysis of SNOW 2.0 in reference to ISO/IEC 18033-4:2011 Information technology
– Security techniques – Encryption algorithms – Part 4: Stream ciphers subclause 8.2 “SNOW 2.0 key
stream generator”.
It is a synchronous stream cipher with two variants, a 128-bit and a 256-bit key, and a 32-bit IV. It is
used within the 3GPP Confidentiality and Integrity Algorithms UEA2 and UIA2.
In [33], it is shown that a modified version of SNOW 2.0 is vulnerable to algebraic attacks which
requires about 2 time complexity and no more than 1000 outputs.
Kircanski presented in his thesis [34] a sliding attack recovering related-key pair sets for SNOW 2.0.
With these results, a related-key attack can be mounted on the 256-bit version of SNOW 2.0. This
raises questions about the validity of security protocols which depend on SNOW 2.0, because it
indicates that SNOW 2.0 cannot be considered a random function of both the key and the IV; however,
SNOW 2.0 is still beyond the reach of a practical attack.
3.3 Rabbit
General
The following is an analysis of Rabbit in reference to ISO/IEC 18033-4:2011 Information technology –
Security techniques – Encryption algorithms – Part 4: Stream ciphers subclause 8.3 “Rabbit key
stream generator”. It was submitted to the eSTREAM competition and included in the final eSTREAM
portfolio for the software profile.
It is a synchronous stream cipher using a 128-bit key and a 64-bit IV (public) which has been
designed for high performance in software implementations. This cipher also turns out to have high
performance in hardware implementations.
Aumasson presented in [36] the existence of a bias in the output of Rabbit that leads to a distinguisher
which requires about 2 samples of keystream of 128 bits. Additionally, Lu et al. [37] have shown a
distinguisher attack with complexity 2 . Nevertheless the cost of this attack is much higher than an
exhaustive search (2 operations).
3.4 Decimv2
General
The following is an analysis of Decimv2 in reference to ISO/IEC 18033-4:2011 Information technology
– Security techniques – Encryption algorithms – Part 4: Stream ciphers subclause 8.4 “Decimv2 key
stream generator”.
3.6 Enocoro
General
The following is an analysis of Enocoro-128v2 and Enocoro-80 in reference to ISO/IEC 29192-3:2012
Information technology – Security techniques – Lightweight cryptography — Part 3: Stream ciphers
subclause 6.1 “Enocoro-128v2 keystream generator” and 6.2 "Enocoro-80 keystream generator"
respectively.
3.7 Trivium
General
The following is an analysis of Trivium in reference to ISO/IEC 29192-3:2012 Information technology
– Security techniques – Lightweight cryptography — Part 3: Stream ciphers subclause 6.3 “Trivium
keystream generator”. It was submitted to the eSTREAM competition and included in the final
eSTREAM portfolio for the hardware profile.
It is a synchronous stream cipher using an 80-bit key and an 80-bit IV which has been designed for
high performance in hardware implementations.
In [39], Maximov and Biryukov presented an attack recovering the internal state (and thus the secret
key) with time complexity around 2^83.5. The authors also showed that increasing the length of the key
does not increase the security strength of the algorithm. Modifying Trivium to provide a security
strength of 128 bits or more is an open problem.
A distinguishing attack on 790 rounds out of 1152 is referenced in [Note] by Fouque and Vannet.
4 Hash functions
The following is in reference to ISO/IEC 10118-1:2016 Information technology — Security techniques
— Hash-functions.
A hash function is a function which maps strings of bits of variable (but usually upper bounded) length
to fixed-length strings of bits, satisfying the following two properties:
• for a given output, it is computationally infeasible to find an input which maps to this output;
• for a given input, it is computationally infeasible to find a second input which maps to the same
output.
4.1 SHA-3
General
The following is an analysis of SHA-3, which is a set of four hash functions SHA3-224, SHA3-256,
SHA3-384, SHA3-512, in reference to ISO/IEC 10118-3:2018: Information technology — Security
techniques — Hash-functions — Part 3: Dedicated hash-functions.
The input can be of any length. The hashes are respectively of length 224, 256, 384 and 512 bits.
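For a quick illustration, the four SHA-3 functions are available in Python's standard hashlib module; the snippet below simply confirms the output lengths listed above (the message value is an arbitrary placeholder).

# The four SHA-3 hash functions from Python's standard library;
# prints 224, 256, 384 and 512 bits respectively.
import hashlib

message = b"example input of arbitrary length"
for name in ("sha3_224", "sha3_256", "sha3_384", "sha3_512"):
    digest = hashlib.new(name, message).digest()
    print(name, len(digest) * 8)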
4.2 SHA-2
General
The following is an analysis of SHA-2, which is a set of six hash functions SHA-224, SHA-256, SHA-
384, SHA-512, SHA-512/224 and SHA-512/256, in reference to ISO/IEC 10118-3:2018 (in issue):
Information technology — Security techniques — Hash-functions — Part 3: Dedicated hash-functions.
The hash-codes are of length 224, 256, 384, 512, 224 and 256 bits respectively. For SHA-224 and
SHA-256, the maximum plaintext size is 2^64 − 1 bits. For the others, the maximum plaintext size is
2^128 − 1 bits.
SHA-256 and SHA-512 are computed with 32-bit and 64-bit words respectively. SHA-224 and SHA-384
are truncated versions of the first two with different initial values. SHA-512/224 and SHA-512/256 are
also truncations of SHA-512, but their initial values are generated differently.
Khovratovich et al. [40] reported a biclique preimage attack breaking a reduced version of SHA-256
(52 rounds out of 64) with time complexity 2^255 and a reduced version of SHA-512 (57 rounds out of
80) with time complexity 2^511.
Lamberger and Mendel [41] presented a differential pseudo-collision attack which breaks 46 rounds
out of 64 of SHA-256.
The full versions of the SHA-2 family of hash functions have not been broken.
4.3 WHIRLPOOL
General
The following is an analysis of WHIRLPOOL, a hash function in reference to ISO/IEC 10118-3:2004
Information technology -- Security techniques -- Hash-functions -- Part 3: Dedicated hash-functions
subclause 13 “Dedicated Hash-Function 7 (WHIRLPOOL)”.
WHIRLPOOL takes as input a plaintext of at most 2^256 − 1 bits and returns a hash-code of length 512 bits.
This hash function is based on transformations similar to the ones used in the Advanced Encryption
Standard (AES) block cipher.
Others collisions attacks have been presented in [43] on reduced WHIRLPOOL (e.g. with 2 /2 and
2 /2 time complexity/data complexity).
In [44], Sasaki presented a Second Preimage attack on 5 rounds of WHIRLPOOL with 2 time
complexity and 2 data complexity.
4.4 STREEBOG
General
The following is an analysis of STREEBOG, which is a set of two hash functions STREEBOG-256 and
STREEBOG-512, in reference to ISO/IEC 10118-3:2018: Information technology — Security
techniques — Hash-functions — Part 3: Dedicated hash-functions.
STREEBOG-256 and STREEBOG-512 take as input a plaintext of at most 2^512 − 1 bits and return
hash-codes of length 256 and 512 bits respectively.
4.5 SM3
General
The following is an analysis of SM3, which is a hash function, in reference to ISO/IEC 10118-3:2018:
Information technology — Security techniques — Hash-functions — Part 3: Dedicated hash-functions.
SM3 takes as input a plaintext of at most 2^64 − 1 bits and returns a hash-code of length 256 bits.
4.6 RIPEMD
General
RIPEMD-160 and RIPEMD-128 are hash functions specified in ISO/IEC 10118-3:2010 Information
technology -- Security techniques -- Hash-functions -- Part 3: Dedicated hash-functions subclause 7
“Dedicated Hash-Function 7 (RIPEMD-160)” and subclause 8 “Dedicated Hash-Function 8 (RIPEMD-
128)”.
The RIPEMD hash-functions are described in [50] and their construction is based on the design of the
MD4 hash-function.
RIPEMD-160 is still considered resistant to known attacks. In [51], Mendel et al. presented a
cryptanalysis on reduced RIPEMD-160, proposing a semi-free-start collision attack for 42 steps (out
of 80) while the previous best known result was for 36 steps. They moreover described a semi-free-
start collision attack from the first step of RIPEMD-160 for 36 steps.
4.7 SHA-1
General
The following is an analysis of SHA-1, which is a hash function in reference to ISO/IEC 10118-3:2018:
Information technology — Security techniques — Hash-functions — Part 3: Dedicated hash-functions.
SHA-1 takes as input a plaintext of at most 2^64 − 1 bits and returns a hash-code of length 160 bits.
This part is in reference to the ISO/IEC 9797 family of standards on Message Authentication Codes
(MACs). This family is composed of three parts:
A MAC algorithm is an algorithm for computing a function which maps strings of bits and a secret key
to fixed-length strings of bits, satisfying the following properties:
-- for any key and any input string, the function can be computed efficiently;
-- for any fixed key, and given no prior knowledge of the key, it is computationally infeasible to compute
the function value on any input string, even given knowledge of a set of input strings and corresponding
function values, where the value of the ith input string might have been chosen after observing the
value of the first i -1 function values (for integers i >1).
Note 3 to entry: Computational feasibility depends on the user's specific security requirements
and environment.
In short, a MAC is a bit string used to authenticate a message, providing integrity and authenticity.
In this part, six MAC algorithms are specified. The first one is commonly known as CBC-MAC, the
other five are variants of it.
Seven steps are followed when applying a MAC algorithm: key derivation (optional), padding, splitting,
iterative application of the block cipher, final iteration, output transformation, and truncation.
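As an illustration of this iterative structure (padding, splitting, iterated block cipher application, truncation), the following sketch implements a basic CBC-MAC over AES with zero-padding and truncation to the leftmost 64 bits. It omits the optional key derivation, final iteration and output transformation steps, so it is a sketch of the general construction rather than a drop-in implementation of any particular ISO/IEC 9797-1 algorithm; it also assumes the pyca/cryptography package.

# Basic CBC-MAC sketch over AES: zero-pad, split into blocks, chain through the
# block cipher, truncate. Omits key derivation and output transformation.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def basic_cbc_mac(key: bytes, message: bytes, tag_len: int = 8) -> bytes:
    padded = message + b"\x00" * (-len(message) % 16)     # padding
    state = bytes(16)                                     # all-zero initial block
    for i in range(0, len(padded), 16):                   # splitting + iteration
        block = bytes(a ^ b for a, b in zip(state, padded[i:i + 16]))
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        state = enc.update(block) + enc.finalize()
    return state[:tag_len]                                # truncation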
In this part, three MAC algorithms are specified. The first one is commonly known as MDx-MAC, the
second one as HMAC and the third and last one is a variant of MDx-MAC.
MDx-MAC has five steps: key expansion, modification of the constants and the IV, hashing operation,
output transformation and truncation. HMAC has four steps which are key expansion, hashing
operation, output transformation and truncation. The last MAC algorithm of this part has also five steps,
key expansion, modification of the constants and the IV, padding, application of the round-function and
truncation.
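For HMAC, Python's standard library already packages the steps described above; a minimal usage example with SHA-256 and truncation to 128 bits (both of which are illustrative choices, not requirements of ISO/IEC 9797-2) is shown below.

# HMAC-SHA-256 from the Python standard library, truncated to 128 bits (illustrative).
import hashlib, hmac

key = b"sixteen byte key"
tag = hmac.new(key, b"message to authenticate", hashlib.sha256).digest()[:16]
# Verification recomputes the tag and compares it in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, b"message to authenticate", hashlib.sha256).digest()[:16])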
In this part, four MAC algorithms are specified: UMAC, Badger, Poly1305-AES and GMAC.
These MAC algorithms use a master key, a nonce and a message as input. They follow four steps:
key preprocessing, message preprocessing, message hashing, and finalization.
6 Asymmetric Cryptography
The most widespread uses of asymmetric cryptography are, on the one hand, as an asymmetric cipher,
with the public transformation used for encryption and the private transformation for decryption. The
cipher allows a sender to use a recipient’s public key to transmit an encryption of a message to the
recipient, who can use his secret key to decrypt the given ciphertext, thereby obtaining the original
message. On the other hand, asymmetric cryptography is used as a digital signature, to provide sender
authentication, integrity authentication and support for non-repudiation.
The combination of the enciphering and the digital signature constitutes the Enveloped Public Key
Encryption.
The factoring problem relies on the fact that it is computationally feasible to multiply two distinct large
prime numbers, but computationally infeasible to recover those two primes from their product alone.
Computational infeasibility means that a problem, even though it can in principle be solved by an
algorithm, cannot be solved using the resources (e.g. computers) available today.
The RSA Factoring Challenge tracked the cutting edge in integer factorization. The last integers
factored were RSA-576 in 2003, RSA-640 in 2005 and RSA-768 in 2009, using the Number Field Sieve
algorithm [45], which is the most efficient factorization algorithm for large integers (> 100 decimal
digits). Thus it is recommended to use a key of at least 1024 bits for legacy use and a key of at least
3072 bits for future applications.
The product of the two distinct large prime numbers is usually denoted N = p · q. To ensure that
factoring N is hard, p and q should be of the same bit-length but not too close together. Indeed, if for
example length(p) << length(q), it is easier to factorize their product by using the ECM method
presented by Lenstra in 1987 [46]. Moreover, according to [47], if |p − q| < N^(1/4), N can be factorized.
The primes p and q should each be about |N|/2 bits long and generated randomly in order to ensure
that it is difficult to factorize N. See ISO/IEC 18032 for more details on prime number generation. In
quantum computing, Shor’s algorithm [48] solves the factoring problem in time polynomial in the input
size, i.e. the number of digits of the number to be factored. This would force developers to use
impractically large key lengths to achieve basic security, making the use of RSA-based algorithms
pointless; however, it still requires the development of efficient quantum computers, which is still at the
research stage.
In fact, the ciphers based on the factoring problem rely more precisely on the RSA problem, which is
a priori easier than the factoring problem.
Thus e should be large enough to avoid the solution of this problem, e.g. by using Coppersmith’s
lattice-based techniques (e.g. [47]). Currently, it is recommended to take e ≥ 65537 [48] for encryption
schemes, which provides the best security-level/speed ratio. For signature schemes, users usually
take e = 3 to be as fast as possible. For future applications, the minimum to use is e = 65537.
The developer should also be careful with the value of d, the private exponent and inverse of e. Indeed,
a d that is too small (chosen to speed up calculations) introduces security issues; lattice attacks can be
applied when d is too small (e.g. [49]). Selecting e first and then deriving d from e, as is standard
practice, should lead to a d that is large enough.
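The parameter choices discussed above (a modulus of at least 3072 bits for future applications, public exponent e = 65537, and a private exponent d derived from e rather than chosen small) are what a standard library call typically produces; a minimal sketch using the pyca/cryptography package follows, given purely as an illustration of these parameters.

# RSA key generation with e = 65537 and a 3072-bit modulus; p, q and d are
# generated by the library so that d is the inverse of e and is not small.
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
numbers = private_key.private_numbers()
assert numbers.public_numbers.e == 65537
assert numbers.public_numbers.n.bit_length() == 3072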
The security behind ECC is the ECDLP, the Elliptic Curve Discrete Logarithm Problem, i.e. the discrete
logarithm problem for the group of points on an elliptic curve over a finite field. At best, this problem is
solved in exponential time, hence the use of this technique in cryptography.
Let E be an elliptic curve defined over a finite field F_p, with p prime: E: y^2 = x^3 + ax + b, with
a, b ∈ F_p. Let P and Q be points in E(F_p). Find an integer k such that Q = kP. The smallest such k
is called the discrete logarithm.
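To make the definition concrete, the sketch below solves the ECDLP by brute force on a toy curve y^2 = x^3 + 2x + 2 over F_17 with base point P = (5, 1), a textbook example chosen purely for illustration. For curves of cryptographic size this exhaustive search is completely infeasible, which is precisely the assumption ECC relies on.

# Brute-force ECDLP on a toy curve y^2 = x^3 + 2x + 2 over F_17, base point P = (5, 1).
# Infeasible for cryptographic group sizes; illustration of the problem statement only.
p, a = 17, 2
O = None  # point at infinity

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def dlog(P, Q):
    R, k = P, 1
    while R != Q:          # find k with Q = kP by exhaustive search
        R, k = add(R, P), k + 1
    return k

P = (5, 1)
Q = add(add(P, P), P)      # Q = 3P
print(dlog(P, Q))          # prints 3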
Approved: means that the algorithm or the key length is recommended to be used to apply
cryptographic protection on data or to process cryptographic protection on encrypted data. Moreover,
often their security has been proved. For key length versus time, it means that a developer should use
at least this key length if he wants his application to work on plain data for a short or long duration
(> 3 years).
Acceptable: means that the algorithm or the key length can be used to apply cryptographic protection
on data or to process cryptographic protection on encrypted data but better alternatives exist. For key
length versus time, it means that a developer should use this key length if he wants his application to
work on plain data for a short duration (≤ 2-3 years).
Legacy-use: means that the algorithm or the key length could be used only to process already-
encrypted data. For key length versus time, it means that a developer can use this key length if his
application works on already encrypted data.
Disallowed: means that the algorithm or the key length shall not be used to apply cryptographic
protection on data. For key length versus time, it means a developer should not use this length to have
proper security within his application.
The effective strength of each cipher has been mentioned and can be compared to the key length as
input.
According to Clause 1, all the block ciphers listed have known methods of cryptanalysis. However,
most of the presented cryptanalysis does not affect the effective strength of the algorithms, because
the attacks are theoretical or not yet computationally feasible. For Triple-DES, however, because the
attacks are practical, the effective security is reduced.
Algorithm (key length) | Effective strength | Mode | 2018 / 2022 / 2027 / 2032
AES (128 bits) | 128 | En/Decryption | Acceptable
AES (192 and 256 bits) | 192/256 | En/Decryption | Approved
Camellia (128 bits) | 128 | En/Decryption | Acceptable
Camellia (192 and 256 bits) | 192/256 | En/Decryption | Approved
Two-Key TDES (128 bits) | 80 (see footnote 3) | Encryption | Disallowed
Two-Key TDES (128 bits) | 80 (see footnote 3) | Decryption | Legacy-use
Three-Key TDES (192 bits) | 112 | Encryption | Acceptable, then Disallowed
Three-Key TDES (192 bits) | 112 | Decryption | Legacy-use
Algorithm (key length) | Effective strength | IV length | 2018 / 2022 / 2027 / 2032
SNOW 2.0 (128 bits) | 128 | 32 | Acceptable
SNOW 2.0 (256 bits) | 256 | 32 | Approved
3 The effective strength is more or less than 80 bits, depending on the number of plaintext/ciphertext
pairs available to an attacker [1.1]
Hash family | Hash function | Output length | Security strength | Max plaintext length | 2018 / 2022 / 2027 / 2032
SHA-3 | SHA3-224 | 224 | 112 | Any | Acceptable, then Disallowed
SHA-2 | SHA-224 | 224 | 112 | 2^64 − 1 | Acceptable, then Disallowed
SHA-2 | SHA-256 | 256 | 128 | 2^64 − 1 | Acceptable
The 2nd column indicates the key lengths for approved symmetric algorithms, such as AES, which
provide the corresponding recommendation for the developer. For example, using AES with 128-bit
key is acceptable for current applications.
The 3rd column indicates the key lengths for approved asymmetric algorithms based on integer
factorization, i.e. the RSA algorithm, which provide the corresponding recommendation for the
developer. For example, using RSA for digital signatures with 4096-bit key is approved for current and
future applications.
The 4th column indicates the different hash functions that can be used with the approved algorithms
(for example in the case of a digital signature) to provide the corresponding recommendation for the
developer. For example, using SHA1 is disallowed in the context of a digital signature.
The 5th column indicates the size of the elliptic curve group for approved asymmetric algorithms to
provide the corresponding recommendation for the developer. For example, using elliptic curve for
digital signature with a group size higher than 384 bits is approved for current and future applications.
Recommendation type | Symmetric key | Factoring modulus | Hash (digital signatures and applications using only hashes) | Hash (MAC, hash-based DRBGs [56]) | Elliptic curve group size
Disallowed (current applications) | ≤ 80 | ≤ 1024 | SHA1 | - | ≤ 160
Legacy-use (current applications) | = 112 | = 2048 | SHA-224, SHA-512/224, SHA3-224 | - | ≤ 255
Acceptable (current applications) | = 128 | = 3072 | SHA-256, SHA-512/256, SHA3-256 | SHA1 | ≤ 383
Approved (current and future applications) | ≥ 128 | ≥ 3072 | SHA-384, SHA3-384, SHA-512, SHA3-512 | SHA-224, SHA-512/224, SHA-256, SHA-512/256, SHA-384, SHA-512, SHA3-512 | ≥ 384
Note that this table provides general guidance. This information does not take into account the value
and lifetime of the asset to be protected.
Note that these results are estimated using the results from [56].
It is then clear that classical cryptography is threatened; in particular, the currently used public-key
cryptography, i.e. RSA-based and discrete-logarithm-based cryptography, is broken by quantum
computers.
To counter this major threat, using impractically large key lengths would help, though only temporarily.
A more definitive solution is necessary, which is why many investments have been made over the
years in technologies that would be resilient in the world of classical computing as well as in the world
of quantum computing. These technologies belong to post-quantum cryptography. Post-quantum
cryptography follows several paths: experts work on lattices, codes, multivariate polynomials, hashes
and so on.
In order to develop these post-quantum algorithms and protect current and future sensitive data, NIST
launched a standardization process for post-quantum cryptography with a submission deadline at the
end of 2017. This process is still ongoing.
At ISO, SC 27/WG 2 Standing Document 8 [69] is currently in draft, focusing on this major and
particular subject.
The summary of the threats on the current classical cryptography is presented below, taken from
NIST's internal report [70]:
Elliptic Curve Cryptography | Public key | Signatures, key exchange | No longer secure
Finite Field Cryptography | Public key | Signatures, key exchange | No longer secure
• Cost-relevant aspects such as the period of time a specific piece of equipment is expected to be
used before a planned equipment upgrade can take place,
The security analyst should analyse the security requirements in order to determine an appropriate
key length and an appropriate cryptographic algorithm that fit the cost-relevant requirements and meet
the margins of the derived security parameters. The documents and web-sites listed in the bibliography
contain background information which can aid in the cryptographic algorithm and key length selection
process (see 11.5 below).
References [1] and [2] are of a general theoretical nature, while reference [3], [8], [9] and [10] contain
recommendations which are aimed at specific applications or sectors. Reference [4] contains more
references itself to key length selection criteria and also provides implementations of calculations of the
various references above (and more) which may aid in key length selection. Reference [8] is a
technical report specifically aimed at the financial services sector and takes interoperability into
account in its recommendations.
• For applications where large amounts of data might be encrypted using a single key, 128-bit
block length (or larger) block ciphers should always be used.
• For highly constrained applications where only small amounts of data can be encrypted using
a single key, and where a 128-bit block cipher poses a cost or performance barrier to providing
security, smaller block sizes may be appropriate.
• As a rule of thumb, where 128-bit block ciphers can be used, they should be used.
For a block cipher with a 128-bit block (such as AES, Camellia [7] and CLEFIA [55]), this will typically
not be an issue, because for most applications the number of blocks encrypted with a single key will
stay well below 2^(n/2). Underlying these data leakage issues is the “birthday problem”: a collection of
2^((n/2)−k) randomly chosen n-bit values, for k ≥ 0, will exhibit a collision (i.e. a repeated value) with
probability about 2^(−2k).
Note 1 to entry: In fact 2^(−2k) is an upper bound on the collision probability.
Given 2^(n/2) amount of data, a collision will happen with probability about 1 − e^(−1/2) ≈ 0.39, which is
much higher than might naively be guessed.
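These figures can be checked numerically. The short sketch below evaluates the usual approximation 1 − exp(−m(m−1)/2^(n+1)) for an n-bit block and m = 2^((n/2)−k) blocks, taking n = 64 as an illustrative case; it reproduces the ≈ 0.39 probability at k = 0 and the roughly 2^(−2k) fall-off for larger k.

# Collision probability for m = 2^(n/2 - k) random n-bit values,
# using the approximation 1 - exp(-m*(m-1)/2^(n+1)); here n = 64.
import math

n = 64
for k in (0, 4, 8, 16):
    m = 2 ** (n // 2 - k)
    prob = 1.0 - math.exp(-m * (m - 1) / 2.0 ** (n + 1))
    print(f"k={k:2d}  blocks=2^{n // 2 - k}  collision probability ~ {prob:.3g}")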
Attacks on modes typically exploit the fact that information can be leaked if a block is repeated (for
example in CBC mode), or that with enough data it is possible to distinguish a permutation, which has
distinct outputs given distinct inputs, from a function, which can have the same output for two different
inputs (this is an issue for CTR mode). For these attacks, the advantage of an adversary who makes
at most 2^((n/2)−k) queries to the block cipher is bounded above by the collision probability, which is
≤ 2^(−2k). For any value of n, for the standard modes (excluding ECB mode, where collisions in inputs
can easily be constructed and detected), if the amount of data encrypted using a single key stays well
below 2^(n/2) blocks, i.e. if k is not too small, then these attacks are not a concern.
Moreover, McGrew [22] presented attacks against n-bit block ciphers in CBC, CFB and CTR modes
that can recover unknown plaintext values when the birthday bound is not respected. The collision-
based attacks against CBC and CFB are straightforward and relatively inexpensive to carry out against
64-bit block ciphers; attacks against CTR are more involved, but are still feasible. He concludes that
128-bit block ciphers provide much more security than 64-bit block ciphers, which is why, whenever
possible, 128-bit block ciphers should be used.
It is important to note that the idea that there are no security issues until the number of blocks encrypted
using a single key reaches 2^(n/2) is incorrect. Rather, the security degrades as the number of blocks
approaches this number. Thus, the number of blocks encrypted using a single key for an n-bit block
cipher should be kept well under 2^(n/2), whatever the value of n.
Thus, for block sizes n smaller than 128 bits, it is important in practice to ensure that there are limits
on the amount of data encrypted using a single key, because the birthday bound is reached much more
rapidly than for 128-bit block ciphers. In particular, for n = 64, 2^(n/2) is just 2^32, which corresponds
to 32 gigabytes of data. This means that the key for a 64-bit block cipher operating in a standard mode
should be changed well before the birthday bound, i.e. well before 2^32 blocks are encrypted. Ensuring
that the key is changed after 2^(32−k) blocks are encrypted means that collisions occur, along with the
corresponding potential compromise to security, with probability about 2^(−2k). The user’s assessment
of the risk associated with a particular application should dictate how large a value of k is required.
Additionally, according to [23], the birthday attack on a 64-bit block cipher requires about 800 GB of
data and about 20-40 hours. While the attack on 64-bit block ciphers still requires a huge amount of
data and a long time, similar attacks on 32- or 48-bit block ciphers require only a few MB of data and
a few seconds, or a few GB of data and a few minutes, respectively.
Consequently, in order to prevent birthday attacks in the real world, block ciphers whose block length
is less than 64 bits are not recommended.
64-bit block ciphers (in standard modes) are only appropriate for applications where small amounts of
data will be encrypted using a single key, and where 128-bit block ciphers are not viable. They should
never be used (in standard modes) for applications where the amount of data available to an adversary
cannot be tightly controlled.
An example of an appropriate use for a 64-bit block cipher is RFID item tracking for items of low to
moderate value, where each item has a unique key, and where any particular tag is expected to be
queried only a handful of times over its lifetime. Here an adversary would have little incentive to carry
out an attack whose cost would exceed the value of the item. Another appropriate use would be
authentication of users by a central server, where queries in a challenge-response scheme could be
controlled and not repeated, thus allowing more than 2^(n/2) encryptions.
Inappropriate uses would include encryption of large amounts of data under a single key, as could
occur, for example, in file encryption on a desktop machine, TLS applications, etc.
It is important to remark that in the case that a message authentication code (MAC) is based on a
symmetric key block cipher, as discussed in NIST SP 800-38B, the default recommendation is to limit
a key to no more than 2^48 messages when the block size is 128 bits and 2^21 messages when the
block size is 64 bits, in order to ensure that the collision probability is respectively less than one in a
billion and less than one in a million.
Finally, rekeying techniques can be used to mitigate the drawbacks of short block sizes. Rekeying [53]
means systematically changing an encryption key. Key derivation functions from ISO/IEC 11770-5
could be used for this purpose.
10.1 General
Related-key attacks against a block-cipher rely on the following assumption: it is possible for an
attacker to encrypt or decrypt messages under several different keys whose values are initially
unknown, but where some mathematical relationship connecting the keys is known to the attacker.
This is a strong assumption which is not relevant for a large number of practical applications such as
encryption or MAC using a long-term key. However, for "ad-hoc" applications which make use of a
block-cipher known to be vulnerable to one or several related-key attacks, it is strongly recommended
to assess the impact on the security. Indeed, it should not be possible for an attacker to encrypt/decrypt
messages under several different keys by controlling the mathematical relationship connecting the
keys in such a way that the constraints imposed by the attack (e.g. the number of related-keys required
to mount the attack, the mathematical relationship connecting the keys required to mount the attack,
etc.) are realistic.
The original ideas of related key attacks were introduced by Biham [17] and Knudsen [18], and many
more articles followed.
10.2 Example
As an example the following trivial related key attack is applicable to all block ciphers.
A block cipher encrypts one plaintext under an unknown key K. The attacker can modify the key K to
form key Ki by setting Ki equal to the logical AND between K and the bit pattern containing all ones
except in bit position i. This operation must be performed by the system (or in a more formal model,
by an Oracle).
The same plaintext is encrypted again, and if the corresponding ciphertext differs from the sample
ciphertext obtained with key K, the attacker knows that bit position i of K is a one; otherwise bit
position i is a zero. By repeating these steps for each of the bits of K, the attacker can recover the
entire key. If K contains n bits, the attacker requires n masks (or calls to the Oracle) and n+1
encryption calls.
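The attack is simple enough to simulate directly. The sketch below uses AES-128 purely as a stand-in block cipher and a Python function as the (unrealistically powerful) oracle that encrypts a fixed plaintext under K AND-ed with the chosen mask, recovering all n key bits with n masks and n+1 encryption calls, exactly as described above; the pyca/cryptography package is an assumed dependency.

# Simulation of the trivial related-key attack described above, with AES-128 as a
# stand-in block cipher and a function playing the role of the related-key oracle.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_block(key_int: int, plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key_int.to_bytes(16, "big")), modes.ECB()).encryptor()
    return enc.update(plaintext) + enc.finalize()

secret_key = int.from_bytes(os.urandom(16), "big")          # unknown to the attacker
plaintext = bytes(16)

def oracle(mask: int) -> bytes:                              # encrypts under K AND mask
    return encrypt_block(secret_key & mask, plaintext)

reference = oracle((1 << 128) - 1)                           # ciphertext under K itself
recovered = 0
for i in range(128):
    mask = ((1 << 128) - 1) ^ (1 << i)                       # all ones except bit position i
    if oracle(mask) != reference:                            # change implies bit i of K is a one
        recovered |= 1 << i
assert recovered == secret_key                               # full key recovered with n masks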
The attack above is described in terms of block ciphers, but is clearly applicable to more
than just block ciphers. In many practical applications one does not expect such modifications to the
encryption key to be allowable by the system. The described attack is the most basic form of related
key attack, but many more exist that also rely on the structure of the particular block cipher.
11.1 General
This clause deals with perceived defects that came to the attention of ISO/IEC JTC 1/SC 27 on its
cryptography standards and possible ways to deal with these defects should there be concern for their
continued use.
International Standards may contain cryptographic techniques for which, after publication, concerns
are raised as to possible defects that may exist in the mechanisms. In some cases, these perceived
defects are in fact not defects, but rather concerns expressed by the community (such as constants
used in the mechanism which were generated in an unknown way). ISO/IEC JTC 1/SC 27 usually
initiates a study period to which its liaison organizations, experts and National Bodies contribute. The
outcome of such a study period can then either
3. Neither prove nor confirm the defect, but propose mitigation techniques.
In the case of 1. and 3. a Technical Corrigendum to the respective International Standard will be
published. In all the cases further information can be made available in this document if appropriate.
Alternatively, users can study the original paper (see [16]) on how to avoid generating weak moduli.
OCB 2.0 was believed to be secure (under the right assumptions) until the end of 2018. However,
Inoue and Minematsu showed in [64] how to break the authenticity of OCB 2.0 by demonstrating
practical attacks. Poettering then extended the work of [64] in [65] to break the confidentiality of the
mode. Finally, Iwata in [66], building on the two previously mentioned results, concluded that OCB 2.0
(see footnote 4) is fully broken in terms of both confidentiality and authenticity.
A public statement from ISO has been published regarding this breakthrough in [68].
“[…] if a MAC algorithm with a higher security level is needed, it is recommended to perform two MAC
calculations with independent keys and concatenate the results (rather than XORing them).” A recent
paper uploaded to ePrint [71] presents several attacks on the concatenation of the MAC algorithms
included in the standard, including one attack which requires fewer queries than the birthday bound.
Users using the concatenation of two instances of MAC Algorithm 2 with Padding Scheme 2 are
advised to replace it immediately. Users employing this concatenation mechanism with any other pair
of MAC algorithms and padding schemes are advised to consult [71] to assess their exposure to these
attacks.
12 Bibliography
[1] Arjen K. Lenstra and Eric R. Verheul, Selecting Cryptographic Key Sizes, PKC2000: p. 446-465,
01/2000.
[2] European Network of Excellence in Cryptology, ECRYPT-CSA D5.4 Algorithms, Key Size and
Protocols Report (2018).
[3] H. Orman and P. Hoffman, Determining Strengths for Public Keys Used for Exchanging Symmetric
Keys, RFC 3766, 04/2004.
[4] <http://www.keylength.com>.
[5] ISO/IEC 10118-4:1998, Information technology – Security techniques – Hash functions using an
n-bit block cipher
4 Two other versions of OCB exist, namely OCB1 and OCB3, and neither of them has (yet) been broken.
[6] ISO/IEC 18031:2014 Information technology – Security techniques – Random bit generation
[8] ISO TR 14742, Financial services — Recommendations on cryptographic algorithms and their use,
(ISO TC 68, Technical Report).
[9] National Security Agency (US), Fact Sheet Suite B Cryptography, 08/2009.
[10] NIST Special Publication 800-57, Recommendation for Key Management, Part 1:
General (Revised), March 2007.
[14] P. C. van Oorschot and M. J. Wiener, ‘A known-plaintext attack on two-key triple encryption’.
In I. B. Damgård, ed., Proc. Eurocrypt ’90, Springer-Verlag LNCS 473 (1996) page 318-325.
[16] V. G. Antipkin, “Smashing MASH-1”, Math. Asp. of Crypt., 5:2 (2014), 21–28
[17] E. Biham, New types of cryptanalytic attacks using related keys. Journal of Cryptology, 4, Springer.
[18] L.R. Knudsen, Cryptanalysis of LOKI91, Advances in Cryptology, ASIACRYPT '92, LNCS 718,
Springer-Verlag.
[20] D. Shumow, N. Ferguson, On the Possibility of a Back Door in the NIST SP800-90 Dual EC PRNG.
CRYPTO Rump Session 2007. Microsoft, <http://rump2007.cr.yp.to/15-shumow.pdf>.
[21] United States Patent Application Publication, US 2007/0189527 A1, Aug. 16, 2007, “Elliptic Curve
Random Number Generation”.
[22] D. McGrew, Impossible plaintext cryptanalysis and probable-plaintext collision attacks of 64-bit
block cipher modes, <eprint.iacr.org/2012/623>
[23] K. Bhargavan and G. Leurent, On the practical (in-)security of 64-bit block ciphers: Collision
attacks on HTTP over TLS and OpenVPN. In: Proceedings of the 2016 ACM SIGSAC Conference on
Computer and Communications Security. ACM, 2016. p. 456-467.
[24] C. J. Mitchell, 'On the security of 2-key triple DES', IEEE Transactions on Information Theory, 62
(2016) 6260-6267.
[25] L. Wang, J. Guo, G. Zhang, J. Zhao and D. Gu, How to build fully secure tweakable blockciphers
from classical blockciphers. In: International Conference on the Theory and Application of Cryptology
and Information Security. Springer, Berlin, Heidelberg, 2016. p. 455-483.
[27] R. C-W. Phan, Impossible differential cryptanalysis of 7-round Advanced Encryption Standard
(AES). Information processing letters, 2004, 91.1, pp 33-38.
[28] N. Ferguson et al. Improved cryptanalysis of Rijndael. In: International Workshop on Fast
Software Encryption. Springer, Berlin, Heidelberg, 2000. pp. 213-230.
[29] A. Biryukov et al. Key recovery attacks of practical complexity on AES-256 variants with up to 10
rounds. Henri Gilbert. EUROCRYPT 2010. s.l. : Springer, 2010, pp. 299-319.
[30] A. Biryukov and D. Khovratovich Related-key cryptanalysis of the full AES-192 and AES-256.
Mitsuru Matsui. ASIACRYPT 2009. Tokyo: Springer, 2009, pp. 1-18.
[31] A. Bogdanov, D. Khovratovich and C. Rechberger. Biclique Cryptanalysis of the Full AES. 2011.
[32] J. Wallén and K. Nyberg, Improved linear distinguishers for SNOW 2.0.. [ed.] Matthew J. B.
Robshaw. s.l. : Springer, 2006, Lecture Notes in Computer Science, Vol. 4047, pp. 144-162.
[33] O. Billet and H. Gilbert, Resistance of SNOW 2.0 Against Algebraic Attacks. [ed.] Alfred Menezes:
Springer, 2005, Lectures Notes in Computer Science, Vol. 3376.
[35] C. De Cannière, J. Lano and B. Preneel. Comments on the Rediscovery of Time Memory Data
Tradeoffs. 2005.
[37] Lu Y., Wang H., Ling S. (2008) Cryptanalysis of Rabbit. In: Wu TC., Lei CL., Rijmen V., Lee DT.
(eds) Information Security. ISC 2008. Lecture Notes in Computer Science, vol 5222. Springer, Berlin,
Heidelberg
[38] J.P. Aumasson , I. Dinur, W. Meier, A. Shamir, Cube Testers and Key Recovery Attacks on
Reduced-Round MD6 and Trivium. In: Dunkelman O. (eds) Fast Software Encryption. Lecture Notes
in Computer Science, vol 5665. Springer, Berlin, Heidelberg
[39] A. Maximov, A. Biryukov Two Trivial Attacks on Trivium. In: Adams C., Miri A., Wiener M. (eds)
Selected Areas in Cryptography. SAC 2007. Lecture Notes in Computer Science, vol 4876. Springer,
Berlin, Heidelberg
[Note] P. Fouque and T. Vannet, Improving Key Recovery to 784 and 799 rounds of Trivium using
Optimized Cube Attack, https://eprint.iacr.org/2015/312.pdf
[40] D. Khovratovich, C. Rechberger and A. Savelieva. Bicliques for Preimages: Attacks on Skein-512
and the SHA-2 family. 2011.
[41] M. Lamberger and F. Mendel. Higher-Order Differential Attack on Reduced SHA-256. 2011.
[42] F. Mendel et al. The Rebound Attack: Cryptanalysis of Reduced Whirlpool and Grøstl. 2009.
[43] M. Lamberger, et al. The Rebound Attack and Subspace Distinguishers: Application to Whirlpool.
2010.
[44] Y. Sasaki. Meet-in-the-Middle Preimage Attacks on AES Hashing Modes and an Application to
Whirlpool. 2011.
[45] A. Lenstra and H. W. Lenstra. The development of the number field sieve. Springer, 1993.
[47] D. Coppersmith. Small Solutions to Polynomial Equations, and Low Exponent RSA Vulnerabilities.
Journal of Cryptology, 1997.
[49] D. Coppersmith. Finding a small root of a bivariate integer equation; Factoring with high bits
known. 1996.
[50] D.H. Bosselaers, B. Preneel and H. Dobbertin, The cryptographic hash-function RIPEMD-160, Dr.
Dobbs, 1997.
[51] F. Mendel, T. Peyrin, M. Schläffer, L. Wang and S. Wu, Improved cryptanalysis of reduced
RIPEMD-160, <eprint 2013/600>.
[52] M. Stevens, E. Bursztein, P. Karpman, A. Albertini and Y. Markov. The First Collision for Full SHA-
1. In: Katz J., Shacham H. (eds) Advances in Cryptology – CRYPTO 2017. CRYPTO 2017. Lecture
Notes in Computer Science, vol 10401.
[54] E. Barker, et al. NIST special publication 800-57. NIST Special Publication, 2007, 800.57: 1-142
[56] SOG-IS Crypto Evaluation Scheme Agreed Cryptographic Mechanisms Version 1.1, June 2018.
[57] P. Rogaway, Authenticated encryption with associated-data, CCS '02 Proceedings of the 9th ACM
conference on Computer and communications security, Pages 98-107
[58] P. Rogaway, M. Bellare, J. Black and T. Krovetz, OCB: a block-cipher mode of operation for
efficient authenticated encryption, CCS '01 Proceedings of the 8th ACM conference on Computer and
Communications Security Pages 196-205
[59] D. Whiting, R. Housley and N. Ferguson, RFC 3610: Counter with CBC-MAC (CCM). IETF,
September 2003
[60] M. Bellare, P. Rogaway and D. Wagner, 'The EAX mode of operation'. In: B. K. Roy, W. Meier
(eds.): Fast Software Encryption, 11th International Workshop, FSE 2004, Delhi, India, February 5-7,
2004, Revised Papers. Lecture Notes in Computer Science 3017, Springer-Verlag (2004) pp. 389-407.
[61] M. Bellare and C. Namprempre, 'Authenticated encryption: Relations among notions and analysis
of the generic composition paradigm'. In: T. Okamoto (ed.), Advances in Cryptology – ASIACRYPT
2000, 6th International Conference on the Theory and application of Cryptology and Information
Security, Kyoto, Japan, December 3-7, 2000, Proceedings. Lecture Notes in Computer Science 1976,
Springer-Verlag (2000) pp. 531-545
[62] D. A. McGrew and J. Viega, 'The Security and Performance of the Galois/Counter Mode (GCM)
of Operation'. In: Progress in Cryptology – INDOCRYPT 2004, 5th International Conference on
Cryptology in India, Chennai, India, December 20-22, 2004, Proceedings). Lecture Notes in Computer
Sciences 3348, Springer, pp 343-355.
[63] S. Vaudenay, Security Flaws Induced by CBC Padding Applications to SSL, IPSEC, WTLS…,
Advances in Cryptology – EUROCRYPT 2002, International Conference on the Theory and
Applications of Cryptographic Techniques, Amsterdam, The Netherlands, April 28 – May 2, 2002,
Proceedings, pp 534-546.
[64] A. Inoue and K. Minematsu, Cryptology ePrint Archive: Report 2018/1040: Cryptanalysis of OCB2.
See: https://eprint.iacr.org/2018/1040
[65] B. Poettering, Cryptology ePrint Archive: Report 2018/1087: Breaking the confidentiality of OCB2.
See: https://eprint.iacr.org/2018/1087
[66] T. Iwata, Cryptology ePrint Archive: Report 2018/1090: Plaintext Recovery Attack of OCB2. See:
https://eprint.iacr.org/2018/1090
[67] https://www.din.de/blob/236540/5b946078899f420e2b15fb64e3ca3e17/20170425-sc-27-
statement-of-sha-1-data.pdf
[71] Y. Shen, L. Wang and D. Gu, ISO/IEC 9797-1 Revisited: Beyond Birthday Bound. See:
https://eprint.iacr.org/2018/468