“Those who are enamored of practice without theory are like a pilot
who goes into a ship without rudder or compass and never
has any certainty where he is going. Practice should always
be based upon a sound knowledge of theory”
–Leonardo da Vinci (1452-1519)
Abstract Methods of enforcing distributed access control are among the most debated
topics in computer security. From worries about password theft to viruses and
spam, both computer security specialists and everyday users agree that safeguarding
data privacy has become a priority for anyone wishing to connect to the Internet. In
this chapter we give the reader a synopsis of access control trends: how it all began
and where we think it is heading. We discuss the methods that have been proposed,
comparing and analyzing each in turn, with the aim of drawing the reader's attention
to the fact that access control mechanisms are typically designed for relatively static
scenarios, and that, increasingly, hybrid scenarios that cannot be predicted a priori
need to be handled by adaptive security mechanisms or models.
Distributed systems like the Internet are vulnerable to security threats mainly because
of their open architecture and ability to facilitate interactions amongst heterogeneous
systems. Although existing security protocols attest to the amount of work
that has gone into protecting data on the Internet, the evolving nature of Web applications
creates situations that require matching security with performance to avoid
security violations. Yet, the idea of matching security with performance has received
little attention in the computer security research community because the primary
goal of security schemes is to provide protection as opposed to speed. An example
of such a case arises in hierarchical Cryptographic Key Management schemes.
These schemes provide access control with an added layer of security that makes
violation more difficult, but they have not gained widespread popularity because of the
cost of implementation and the lack of scalability.
The examples we use in this chapter and the rest of the monograph are based
on a hypothetical collaborative web application, because it is an ideal
platform for portraying the need for adaptive security. The proposed collaborative
web application is centered on a file sharing environment that users can join
and leave spontaneously. A security administrator centrally oversees updates and
sanctions bad behavior by policing (i.e., a user is completely expelled or suspended
temporarily). The group that a user chooses to join determines the permissions (read,
write, modify, and/or delete) that the user is allowed. Examples of collaborative web
applications on the Internet include chat systems, shared whiteboards, and "social
networking" environments.
For clarity, we begin with some definitions of the common terminology and then
proceed to discuss models of access control in distributed systems in relation to the
problems we raised in Chapter 1, Section 1.2.
2.2 Terminology
Access control generally suggests that there is an active user and/or application pro-
cess, with a desire to read or modify a data object (file, database, etc.). For simplicity,
we will hereafter refer to an entity as a user and a data object as a file. Access control
typically involves two steps: authentication and authorization. In order to authenti-
cate an active user, the distributed system needs some way of determining that a user
is who he/she claims to be. A password is an example of a standard authentica-
tion method. On the other hand, authorization to access a file relies on a set of rules
that are specified formally and are used to decide which users have the permissions
required to access a file.
A permission allows a user to perform a number of well-defined operations on
a file. For example, the security administrator of our collaborative web application
can choose to schedule automatic virus checks with an anti-virus application. In this
way, the application gets assigned the privilege of scanning all the hard disks and
memory on the computers on the network (system) with the aim of eliminating viral
threats.
Permissions imply a hierarchy of access rights and so users are assigned roles
that define what permissions they can exercise and in what context. A role is a set
of operations that a user is allowed to perform on one or more data objects. A user can have
more than one role and more than one user can have the same role [17].
Partial orderings are used to order the permissions associated with a set of
security policies. A partial ordering ⪯ on a set of security classes S, such that
S = {U0, ..., Un−1}, where Ui denotes the ith security class, is a relation on S × S
which satisfies the following properties (a toy check follows the list):
• Reflexive: For all Ui ∈ S, Ui ⪯ Ui holds;
• Antisymmetric: For all Ui, Uj ∈ S, if Ui ⪯ Uj and Uj ⪯ Ui then Ui = Uj;
• Transitive: For all Ui, Uj, Uk ∈ S, if Ui ⪯ Uj and Uj ⪯ Uk then Ui ⪯ Uk.
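Since the rest of the chapter leans on this ordering, a small sketch may help make the properties concrete. The following Python fragment checks the three properties on a toy relation over S = {U0, U1, U2}; the relation chosen here, with U1 and U2 preceding U0, is purely illustrative.

```python
from itertools import product

S = ["U0", "U1", "U2"]
# (a, b) in leq stands for "Ua precedes Ub"; U1 and U2 are below U0.
leq = {(u, u) for u in S} | {("U1", "U0"), ("U2", "U0")}

# Reflexive: every class is related to itself.
assert all((u, u) in leq for u in S)

# Antisymmetric: mutual precedence only holds for identical classes.
assert all(a == b for (a, b) in leq if (b, a) in leq)

# Transitive: precedence composes.
assert all((a, c) in leq
           for a, b, c in product(S, repeat=3)
           if (a, b) in leq and (b, c) in leq)
```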
2.3 General Access Control Models
In this section we discuss the pros and cons of some of the typical access control
models with respect to distributed systems like the Internet. We begin with the dis-
cretionary or mandatory access control models [39, 112, 120] because these were
the earliest models and served as the inspiration for later models like the Role-based
and Multi-Level access control models.
The discretionary access control (DAC) model leaves access decisions for a file to
the discretion of its owner. Suppose, for instance, that Alice owns a Photographs
Folder containing files she would like to share. In this case, she decides to allow
access only to members who belong
in her repertoire of "friends". As shown in the table given in Figure 2.1, there are
three other users, Jane, John, and Sam, on the system. Jane can view and download
files from the Photographs Folder whereas John can only view the photographs and
Sam has no privileges at all with respect to the Photographs Folder.
User | Photographs Folder
-----+-------------------
Sam  | -
Jane | {View, Download}
John | {View}
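A minimal sketch of how the matrix in Figure 2.1 might be represented and queried programmatically; the names mirror the figure, but the dictionary layout is our own illustration rather than part of any particular system.

```python
# Access control matrix of Figure 2.1 as a (user, object) -> permissions map.
acm = {
    ("Jane", "Photographs Folder"): {"View", "Download"},
    ("John", "Photographs Folder"): {"View"},
    ("Sam",  "Photographs Folder"): set(),
}

def allowed(user: str, obj: str, op: str) -> bool:
    # An operation is permitted only if it appears in the user's cell.
    return op in acm.get((user, obj), set())

assert allowed("Jane", "Photographs Folder", "Download")
assert allowed("John", "Photographs Folder", "View")
assert not allowed("Sam", "Photographs Folder", "View")
```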
Besides the confinement and information semantics problems that the access control
matrix model faces, a key problem that the DAC model faces is vulnerability to
Trojan Horse attacks. Trojan horse attacks typically work by deceiving valid
users into running code that then gives a malicious user access to
information on the system [52].
The mandatory access control (MAC) model counters these threats by controlling
access centrally. An ordinary user (i.e., not the central authority) cannot change the
access rights a user has with respect to a file, and once a user logs on to the system
the rights he/she has are always assigned to all the files he/she creates. This pro-
cedure allows the system to use the concept of information flow control to provide
additional security [59]. Information flow control allows the access control system
to monitor the ways and types of information that are propagated from one user
to another. A security system that implements information flow control typically
classifies users into security classes and all the valid channels along which infor-
mation can flow between the classes are regulated by a central authority or security
administrator.
MAC models are typically designed using the concept of information flow con-
trol [49, 22]. Information flow control is different from regulating accesses to files
by users, as in the ACM model, because it prevents the propagation of information
from one user to another. In the MAC model, each user is categorized into a security
class and the files are tagged with security labels that are used to restrict access to
authorized users [77, 120]. All the valid channels along which information can flow
between the classes are regulated [49].
The collaborative file sharing example shown in Figure 2.1 can be extended to
handle a security scenario in which a security administrator prevents transitive dis-
closures, by the users accessing Alice’s Photographs Folder, by using data labels
to monitor information flow. Each data object is tagged with the security clearance
labels of each of the users in the system. A transitive disclosure could occur if a
user, in this case Sam, gains access to Alice’s photographs folder because he be-
longs in Jane’s (who incidentally is on Alice’s list of friends) list of “friends”. The
MAC model prevents such transitive disclosures by labeling information, as shown
in Figure 2.2, to monitor information flow centrally. The users are assigned labels
according to their security clearance and information flow is regulated by authen-
ticating a user and then granting access to the file based on their privileges. Since
each file is labeled with a security clearance tag, Sam can no longer access files that
Jane downloads from Alice’s Photographs Folder because Sam does not have a se-
curity clearance that allows him access. When the access control policy of a system
is based on the MAC model, the security of the system ceases to rely on voluntary
user compliance but rather is centrally controlled, making it easier to monitor usage
patterns and prevent violations.
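The following sketch illustrates the labeling idea on the example above; the numeric levels and the clearance assignments are assumptions made for illustration only.

```python
# Centrally administered labels: access requires the user's clearance to
# dominate the file's label; an ordinary user cannot change these mappings.
LEVELS = {"public": 0, "friends": 1, "owner": 2}

clearance = {"Alice": "owner", "Jane": "friends", "John": "friends", "Sam": "public"}
label = {"photographs_folder": "friends"}

def can_read(user: str, obj: str) -> bool:
    return LEVELS[clearance[user]] >= LEVELS[label[obj]]

assert can_read("Jane", "photographs_folder")
# Sam cannot read the folder, or files derived from it, even via Jane:
assert not can_read("Sam", "photographs_folder")
```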
The idea of role-based access control emerged as a sort of middle ground between
mandatory and discretionary access control because on the one hand, discretionary
access control was considered to be too flexible and on the other hand, mandatory
access control to be too rigid for concrete implementation purposes. Role-based ac-
cess control (RBAC) is a combination of mandatory and discretionary access con-
trol. In the role-based access control model, a role is typically a job function or
authorization level that gives a user certain privileges with respect to a file and these
privileges can be formulated in high level or low level languages. RBAC models are
more flexible than their discretionary and mandatory counterparts because users can
be assigned several roles and a role can be associated with several users. Unlike the
access control lists (ACLs) used in traditional DAC approaches to access control,
RBAC assigns permissions to specific operations with a specific meaning within an
organization, rather than to low level files. For example, an ACL could be used to
grant or deny a user modification access to a particular file, but it does not specify
the ways in which the file could be modified. By contrast, with the RBAC approach,
access privileges are handled by assigning permissions in a way that is meaningful,
because every operation has a specific pre-defined meaning within the application.
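A minimal sketch of this indirection, with role and operation names invented for the example: permissions attach to named, application-level operations via roles rather than to raw files.

```python
# Roles map to named operations with application-level meaning; users map to roles.
role_perms = {
    "viewer": {"view_photo"},
    "member": {"view_photo", "download_photo"},
}
user_roles = {"John": {"viewer"}, "Jane": {"viewer", "member"}}

def authorized(user: str, operation: str) -> bool:
    # A user may perform an operation if any of their roles grants it.
    return any(operation in role_perms[r] for r in user_roles.get(user, set()))

assert authorized("Jane", "download_photo")
assert not authorized("John", "download_photo")
```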
In an RBAC model, a user's role is not mutually exclusive of other roles in which
the user already holds membership. The operations and roles can be subject
to organizational policies or constraints and, when operations overlap, hierarchies
of roles are established. Instead of instituting costly auditing to monitor access,
organizations can put constraints on access through RBAC. For example, it may be
enough to allow all the users on the system (Jane, John and Sam) to have “view” and
“download” access to the Photographs Folder, if their permissions are monitored
and constraints used to prevent tampering.
Initial descriptions of the RBAC model assumed that all the permissions needed
to perform a task could be encapsulated in a role, but this has turned out to be difficult in practice
[59]. The challenge of RBAC is the contention between strong security and easier
administration. On the one hand, for stronger security, it is better for each role to
be more granular, thus having multiple roles per user. On the other hand, for eas-
ier administration, it is better to have fewer roles to manage. Organizations need to
comply with privacy and other regulatory mandates and to improve enforcement of
security policies while lowering overall risk and administrative costs. Meanwhile,
web-based and other types of new applications are proliferating, and the Web ser-
vices application model promises to add to the complexity by weaving separate
components together over the Internet to deliver application services.
An added drawback that RBAC faces is that roles can be assigned in ways that
create conflicts that can open up loopholes in the access control policy. For example,
in the scenario in Figure 2.2, we can assume that Alice is the security administrator
for the Photographs Folder and the Movies Folder, and that she chooses to assign
roles to users in a way that allows the users to either download or upload movies but
not both. Now suppose that at a future date Alice decides to assign a third role that
grants a user, say Sam, the right to veto an existing user’s (e.g. Jane’s) uploads. In
order to veto Jane’s uploads, Sam needs to be able to download as well as temporar-
ily delete questionable uploads, verify the movies and, if satisfied, reload the movies
to the site. So, essentially Sam has the right to both download and upload movies
to the Movies Folder, a role assignment that conflicts with the first two that Alice specified.
Since policy combinations create conflicts that can open up vulnerabilities in a secu-
rity system designed using the RBAC model, extensions to enable adaptability need
to be evaluated with care.
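A sketch of the kind of constraint check that could catch such a conflict before a role is granted; the mutually exclusive pair is an assumption matching the scenario above, not part of any standard.

```python
# Declare "uploader" and "downloader" mutually exclusive, then flag any
# assignment whose combined roles would jointly grant both.
mutually_exclusive = [{"uploader", "downloader"}]

def violates_sod(roles: set) -> bool:
    return any(pair <= roles for pair in mutually_exclusive)

# Sam's veto role effectively requires both rights, so it is flagged:
assert violates_sod({"uploader", "downloader"})
assert not violates_sod({"downloader"})
```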
The multilevel security (MLS) model is essentially a special case of the MAC model
implemented for different contexts or scenarios. In the MLS model, a security goal
is set and information flow is regulated in a way that enforces the objectives de-
termined by the security goal [119]. Practical implementations of security schemes
based on the MLS concept include the Bell-LaPadula (BLP), Biba Integrity Model,
Chinese Wall, and Clark-Wilson models [119, 15, 40, 31, 71, 98]. In the following,
we briefly discuss each of these four models but for a detailed exposition of the
field one should see the works of McLean [104], Sandhu [122], Nie et al. [111], and
Gollmann [59].
In the BLP model [15], high level users are prevented from transmitting sensitive in-
formation to users at lower levels, by imposing conditions that allow users at higher
levels to only read data at lower levels but not write to it. On the other hand users
at lower levels can modify information at higher levels but cannot read it. Although
this method of information flow control prevents sensitive information from being
exposed, allowing users at lower levels to write information to files at higher levels
that they cannot read can create data integrity violations that are difficult
to trace and correct [98]. The Biba integrity model [20] addresses this problem
of data integrity by checking the correctness of all write operations on a file. How-
ever, this approach opens up the possibility of security violations that result from
inferring high level information from low level information.
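The BLP read/write conditions reduce to two simple comparisons, sketched below with illustrative integer levels (higher means more sensitive); this is a sketch of the rules as described above, not of any full implementation.

```python
def blp_can_read(subject_level: int, object_level: int) -> bool:
    # "Read down": a subject may only read objects at or below its level.
    return subject_level >= object_level

def blp_can_write(subject_level: int, object_level: int) -> bool:
    # "Write up": a subject may only write to objects at or above its level.
    return subject_level <= object_level

assert blp_can_read(2, 1) and not blp_can_read(1, 2)
assert blp_can_write(1, 2) and not blp_can_write(2, 1)
```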
In 1989, Brewer and Nash proposed a commercial security model called the Chi-
nese wall security policy [31]. The basic idea is to build a family of impenetrable
walls, called Chinese walls, amongst the datasets of competing companies. So, for
instance, the Chinese wall security policy could be used to specify access rules in
consultancy businesses where analysts need to ensure that no conflicts of interest
arise when they are dealing with different clients. Conflicts can arise when clients
are direct competitors in the same market or because of ownerships of companies.
Analysts therefore need to adhere to a security policy that prohibits information
flows that cause a conflict of interest. The access rights in this model are designed
along the lines of the BLP model but with the difference that access rights are re-
assigned and re-evaluated at every state transition whereas they remain static in the
BLP model. Unfortunately, their mathematical model was faulty and the improve-
ments proposed have failed to completely capture the intuitive characteristics of the
Chinese wall security policy [148, 96, 97].
Like the Biba model, the Clark-Wilson (CLW) model addresses the security requirements
of commercial applications in which data integrity takes precedence over
data confidentiality [40]. The CLW model uses programs as an intermediate control
level between users and data (files). Users are authorized to execute certain pro-
grams that can in turn access pre-specified files. Security policies that are modeled
using the CLW model are based on five rules (a toy sketch of the resulting mediation follows the list):
1. All data items must be in a valid state at the time a verification procedure is run
on them.
2. All data transformation procedures need to be set a priori and certified to be valid.
3. All access rules must satisfy the separation of duty requirements.
4. All transformation procedures must be stored in an append-only log.
5. Any file that has no access control constraints must be transformed into one with
one or more access control constraints before a transformation procedure is ap-
plied to it.
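A minimal sketch of the mediation the rules impose (all names are illustrative): a user reaches a data item only through a transformation procedure that is both certified for that item and authorized for that user.

```python
# Certified (transformation procedure, data item) pairs and
# authorized (user, transformation procedure) pairs.
certified = {("virus_scan", "photographs_folder")}
authorized = {("Alice", "virus_scan")}

def can_execute(user: str, tp: str, item: str) -> bool:
    # Access is mediated by the program: both checks must pass.
    return (user, tp) in authorized and (tp, item) in certified

assert can_execute("Alice", "virus_scan", "photographs_folder")
assert not can_execute("Sam", "virus_scan", "photographs_folder")
```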
The CLW model is therefore more of a security policy specification framework
that extends the concepts in the BIBA model to the general case. The discussion in
this section also illustrates that access control models are typically designed with a
set goal and that the scenarios that they are designed for are assumed to be static.
Although no single access control scheme can be designed to handle every possible
security scenario, web-based security scenarios are increasingly difficult to predict
and control manually, which adds a further complication to the problem of design-
ing good security frameworks in the Web environment. In these cases, therefore, the
need for good security is intertwined with performance because the delays created
in trying to address new situations manually can be exploited maliciously and also
affect performance negatively from the user’s perspective. In the next section we
cover the cryptographic approaches that have been proposed to address the access
control problem. Hierarchical cryptographic access control schemes offer the ad-
vantage of being simpler to model mathematically, and so lessen the security administrator's (SA's) burden of
security policy specification.
2.4 Cryptographic Access Control

Hierarchical cryptographic access control schemes are built on a partially
ordered set (poset) of security classes that represent groups of users requesting
access to a portion of the data on the system. Cryptographic keys for the various user
groups requiring access to part of the shared data in the system are defined by classifying
users into a number of disjoint security groups Ui, represented by a poset
(S, ⪯), where S = {U0, U1, ..., Un−1} [6, 99]. By definition, in the poset, Ui ⪯ Uj
implies that users in group Uj can have access to information destined for users in
Ui but not the reverse. The following paragraphs present a comparative discussion
of Cryptographic Key Management (CKM) schemes that are based on the concept
of posets of security classes, highlighting the pros and cons of each approach in
relation to designing self-protecting access control frameworks.
Models of key management (KM) are based on the concept of posets and can gen-
erally be divided into two main categories: independent and dependent key manage-
ment schemes. Independent Key Management (IKM) schemes originate from the
multicast community where the concern is securing intra-group communications
efficiently. In these protocols, the focus is on how to manage keys within a group in
a way that minimizes the cost of key distribution when the membership of the group
changes [65, 147].
IKM schemes approach hierarchical KM by assigning each security class all the
keys they need to access information both at their level and below. Accesses are
granted only if the user requesting access holds the correct key [65]. So for instance
in Figure 2.3(a.), to access the data located at the security classes below it, a user
in class U0 needs to hold all the keys K1, K2, K3, K4, and K5 in addition to its own key K0.
While this method of KM is easier to implement in practical systems because
of its flexibility, the cost of key distribution, as well as the possibility of security
violations due to mismanaged or intercepted keys, is higher than in dependent
key management schemes [65]. In fact, in the worst case scenario where all the
keys in the hierarchy are updated, 2n + 1 keys are redistributed (n represents the
maximum number of security classes in the hierarchy), making key re-distribution
more costly in comparison to the dependent key management approach where only
n keys are redistributed [65]. So, if a key, say K4, is updated, then the new key needs
to be re-distributed to all the users in the classes U0, U1, U2, and U4 that use it.
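A sketch of the bookkeeping behind this example; the exact shape of the hierarchy in Figure 2.3 is not reproduced here, so the key-holding sets below are assumptions consistent with the text.

```python
# In an IKM scheme each class holds its own key plus those of the classes
# below it, so updating K4 forces redistribution to every class holding it.
holds = {
    "U0": {"K0", "K1", "K2", "K3", "K4", "K5"},
    "U1": {"K1", "K3", "K4"},   # assumed descendant sets, per the text
    "U2": {"K2", "K4"},
    "U4": {"K4"},
}

def classes_to_rekey(key: str) -> set:
    return {u for u, keys in holds.items() if key in keys}

assert classes_to_rekey("K4") == {"U0", "U1", "U2", "U4"}
```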
A good way to alleviate these problems is to design the KM scheme in a way
that minimizes the number of keys distributed to any security class in the hierarchy.
This model, typically referred to as the dependent key management (DKM) scheme,
defines a precedence relationship between the keys assigned to the security classes
in the hierarchy whereby keys belonging to security classes situated at higher levels
in the hierarchy can be used to mathematically derive lower level keys. Access is
not possible if the derivation function fails to yield a valid key. So for instance, in
Figure 2.3(b.), the data d1 , d2 , d3 , d4 , and d5 is encrypted with the keys K1 , K2 , K3 , K4
and K5 to obtain DK1 , DK2 , DK3 , DK4 , and DK5 . Therefore, possession of the key K1
allows access to DK1, DK3, and DK4, since the key is associated with the security
class U1 that is situated at a higher level than the classes U3 and U4, by the
partial ordering U3, U4 ⪯ U1. The reverse is not possible because keys belonging to
the lower classes cannot be used to derive the required keys and access information
at higher levels.
However, there are a number of drawbacks that impede the performance of de-
pendent key management schemes in dynamic scenarios. For instance, as shown
in Figure 2.4, an update request typically requires that the key associated with the
security class concerned be replaced.
So when a user u10 departs from U1 both K1 and the correlated keys K2 , K3 , and
K4 need to be changed to prevent the departed user u10 from continuing to access
DK1 , DK2 , DK3 , DK4 . Likewise, when u20 departs from U2 the keys K1 , K2 , K3 , K4 need
to be changed as well so that K1 can derive the new K2 and more importantly to guar-
antee that the new K2 does not overlap with K1 , K3 , or K4 and unknowingly grant
U3 or U4 access to DK2 or vice versa. This approach to key assignment is not scal-
able for environments with frequent1 group membership changes where meeting the
goals of service level agreements is an additional constraint. Table 2.1 summarizes
the pros and cons of both key management approaches [65, 147].
Akl and Taylor [6] proposed the first Cryptographic Key Management scheme based
on the DKM model. According to their scheme, a central authority (e.g., a security
¹ Here, "frequent" implies that the interval between two rekey events is shorter than the time it
takes the key management scheme to define a new one-way function, check its uniqueness, and
generate a new key for the node (class) concerned.
provider or key server) U0 chooses a secret key K0 as well as two distinct large
primes p and q. The product M = p × q, is computed and made public whereas p
and q are kept secret. The access control hierarchy is composed of a maximum of n
security classes and each class (group) Ui is assigned a public exponent (an integer)
ti that is used, together with a one-way function, to compute a key Ki for the group.
In the schemes that Akl and Taylor proposed, this one-way function is expressed as
Ki = K0^ti mod M. For simplicity, throughout this section, in analyzing the complexity
of a KM scheme, we will assume that a hierarchy comprises a maximum of n
security classes. We assume also that every key Ki is less than M (Ki < M) and that
each key Ki requires O(log M) bits to be represented.
Akl and Taylor suggested two algorithms for assigning the exponents used to
compute the group keys. The first algorithm (referred to hereafter as the Ad-hoc
AT scheme) uses an ad-hoc assignment of exponents that is efficient in time and
space complexity, but is vulnerable to collusion attack. The second algorithm (re-
ferred to hereafter as the CAT scheme) assigns each group Ui a distinct prime pi,
and the exponents are computed from the product of all the pj's associated with the
classes Uj in the poset such that Uj ⋠ Ui. The size of the nth largest prime in an
n-group hierarchy is O(n log n) [89]. Considering that the formula Ki = K0^ti mod M
is used to compute the keys, Ki < M implies that Ki requires O(log M)
bits to be represented. In the CAT scheme the largest exponent, ti, is a product of n
primes, so ti is O((n log n)^n). Each key in the hierarchy is computed by raising a key
to the power of ti, therefore log ti multiplications are required to compute a single
key, that is, O(n log n) multiplications of O(log M) bit numbers. Finally, to generate
keys for the whole hierarchy, since n keys are required in total, we need O(n^2 log n)
multiplications of O(log M) bit numbers. In the Ad-hoc AT scheme, the size of ti is
O(log n) since the integer values assigned to the exponents grow linearly with the
size of the hierarchy. Therefore generating keys for the whole hierarchy requires
O(n log n) multiplications of O(log M) bit numbers. In both schemes, key replace-
ments trigger updates throughout the entire hierarchy, so the costs of key generation
and replacement are equivalent. We note, however, that when the CAT scheme is
used key updates are more expensive than in the Ad-hoc scheme. Therefore, when
key updates are triggered frequently throughout the entire hierarchy, rekeying using
the CAT approach is computationally expensive.
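To make the construction concrete, here is a toy sketch of CAT-style key generation and derivation over a three-class hierarchy; the tiny primes, modulus, and secret K0 are illustrative assumptions, since a real deployment would use large primes and a strong secret.

```python
from math import prod

# Toy CAT-style Akl-Taylor key generation; all parameters are illustrative.
p, q = 61, 53
M = p * q                      # public modulus; p and q stay secret
K0 = 7                         # central authority's secret key

classes = ["U0", "U1", "U2"]
dominated = {"U0": {"U0", "U1", "U2"},   # U1 and U2 precede U0
             "U1": {"U1"}, "U2": {"U2"}}
class_primes = {"U0": 2, "U1": 3, "U2": 5}

def exponent(ui: str) -> int:
    # t_i is the product of the primes of all classes U_j with U_j not <= U_i.
    return prod(class_primes[uj] for uj in classes if uj not in dominated[ui])

t = {u: exponent(u) for u in classes}       # here t = {U0: 1, U1: 10, U2: 6}
K = {u: pow(K0, t[u], M) for u in classes}  # K_i = K0^t_i mod M

# U0 derives K1: t_0 divides t_1, so raising U0's key to t1/t0 yields K1.
assert t["U1"] % t["U0"] == 0
assert pow(K["U0"], t["U1"] // t["U0"], M) == K["U1"]
```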
Mackinnon et al. [99] address this problem with an improved algorithm designed
to determine an optimal assignment of keys by attributing the smallest primes to the
longest chains in the poset that defines the hierarchy. They use a heuristic algorithm
to reduce the number of primes used to obtain the decompositions because obtaining
an optimal decomposition of a poset in polynomial time remains an open problem
[99]. This improves the time efficiency for key generation in the CAT scheme, be-
cause the largest exponent ti , is a product of approximately half the total number
of primes and so, in comparison to the CAT scheme, reduces the total number of
multiplications required to compute a key by a half.
For example, in Figure 2.5(a.), the initial assignment of primes is 2, 3, 5, 7, 11, 13
and therefore using the exponent generation algorithm in the CAT scheme gives the
exponent assignment shown in Figure 2.5(b.). By contrast, using the exponent generation
algorithm in the Mackinnon scheme yields the assignment given in Figure 2.5(c.).
Notice that in the last case the largest exponent is a product of 2, 5, 11 while the
largest exponent in the CAT scheme is a product of 2, 3, 5, 7, 13. The Mackinnon
scheme reduces the cost of exponent generation and consequently key generation,
in the best case but in the worst case, when the algorithm fails to obtain an optimal
decomposition, the complexity bounds for key generation and replacement remain
unchanged from what they are in the CAT scheme.
Sandhu [121] proposed addressing the issue of rekeying and security class addi-
tions/deletions with a key assignment/replacement scheme that is based on a one-
way function. According to his scheme, the security classes are organized in the
form of a rooted tree with the most privileged security class, say U0 , being at the
root and the least privileged at the leaves. In order to generate keys for the security
classes in the hierarchy, the central authority starts by selecting an arbitrary secret
key K0 for the class U0 . The keys for each of the successor classes are created by
assigning the classes an identification parameter, Id(Ui ), that is then encrypted with
the key belonging to its direct ancestor class.
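A minimal sketch of this top-down generation, using HMAC-SHA256 as a stand-in for the encryption-based one-way function; the toy tree and identifiers are our own illustration.

```python
import hashlib
import hmac

K0 = b"root-secret"                                  # key of the root class U0
tree = {"U0": ["U1", "U2"], "U1": ["U3"], "U2": [], "U3": []}

def child_key(parent_key: bytes, child_id: str) -> bytes:
    # K_child = f(K_parent, Id(child)): computable top-down, not invertible.
    return hmac.new(parent_key, child_id.encode(), hashlib.sha256).digest()

keys, stack = {"U0": K0}, ["U0"]
while stack:
    node = stack.pop()
    for child in tree[node]:
        keys[child] = child_key(keys[node], child)
        stack.append(child)

# U0 can re-derive U3's key along the path U0 -> U1 -> U3:
assert keys["U3"] == child_key(child_key(K0, "U1"), "U3")
```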
This scheme makes it easier to completely delete or insert new security classes
into the hierarchy, because insertions/deletions only require the creation/deletion of
the identification parameter associated with the class and encrypting the identifi-
cation parameter with the ancestor key. However, key replacement at any security
class in the hierarchy still requires updating the whole hierarchy to prevent departed
users at higher levels from continuing to derive lower level keys and to allow high
level keys to continue to be able to derive low level keys. Key replacements also
imply updating the affected one-way functions and re-encrypting the data with the
new keys. Ensuring the uniqueness of the one-way functions can be challenging in
complex hierarchies when group membership changes frequently.
In order to evaluate the complexity of the Sandhu scheme, we assume that the
keys are equivalent in size to those obtained in the CAT scheme, i.e. Ki < M, where
M is the product of two large primes, and Ki requires O(log M) bits to be repre-
sented. We assume also, that in the worst case, the number of encryptions needed to
obtain the largest key, in an n group hierarchy is equivalent to the largest prime, so
O(n log n) encryptions of keys of size O(log M) bits are required to compute a key.
Since n keys are required for the whole hierarchy, we need O(n^2 log n) encryptions
of O(log M) bit numbers to compute all n keys. Rekeying results in an update of
keys throughout the entire hierarchy, so the complexity bounds for key generation
and rekeying are equivalent.
In order to overcome the drawbacks of the Sandhu scheme, Yang and Li [142]
proposed a scheme that also uses a family of one-way functions but limits, to a
pre-specified number, the number of lower level security classes that can be directly
attached to any class in the hierarchy. The keys for the security classes in the hierar-
chy are assigned on the basis of the maximum accepted number of classes that can
be attached to a security class. A high level class can only derive the key associ-
ated with the nth lower level security class attached if there is a path that links it to
the lower class and if the high level class key can be used to derive the required low
level class key [142]. Limiting the number of lower level security classes attached to
a higher level security class reduces the number of keys and encryptions needed but
the complexity bounds for generating and replacing the keys remain the same as in
the Sandhu scheme. Moreover, Hassen et al. [65] have shown that the scheme does
not obey the confidentiality requirements expected of a key management graph (i.e.
high level nodes can only derive keys belonging to lower level nodes if they hold a
key that is valid and authorizes them access). According to Hassen et al. [65], in the
Yang and Li scheme, deleted keys can still be used by users to continue to derive
lower level keys since the lower levels keys are not updated when a change occurs
at a higher level security class.
Other schemes based on the concept of one-way functions and the DKM model
include those proposed by Harn and Lin [63], Shen and Chen [125], and Das et al.
[46]. Harn et al. proposed using a bottom-up key generation scheme instead of the
top-down approach that the Akl and Taylor schemes use. In the Harn et al. scheme,
the smallest primes are assigned to the lower level security classes and the larger
primes to the higher level classes. The scheme allows a security administrator to
insert new classes into a key management hierarchy without having to update the
whole hierarchy. Key insertions/deletions are handled by updating only the keys
belonging to higher level classes that have access privileges with respect to the new
class. Therefore, key insertions or deletions do not result in key updates throughout
the entire hierarchy. Moreover, in comparison to the CAT scheme, the cost of key
derivation in the Harn et al. scheme is lower if we consider that key derivation is
bounded by O(log(ti/tj)) operations (i.e., O(n log n) time) when Ui ⪯ Uj. However,
the complexity bounds for key generation and replacement remain unchanged from
what they were in the CAT scheme because the sizes of the largest prime and key
remain the same and rekeying still requires updating all the keys in the hierarchy.
In order to address these issues Shen and Chen [125], and Das et al. [46], pro-
posed using asymmetric cryptography, the Newton interpolation method and a pre-
defined one-way function to generate keys. As in the Harn et al. scheme, both
schemes allow security class additions/deletions without requiring updates through-
out the entire hierarchy. Key generation is handled by assigning each class (group)
a secret key Ki , such that Ki is relatively prime to a large prime number pi , and a
positive integer bi , such that 1 ≤ bi ≤ pi . Both Ki and bi are used to compute the in-
terpolation polynomial Hi (x) and the public parameter Qi associated with a class Ui .
Both Hi (x) and Qi are made public while bi and Ki are kept secret. Key derivation
is possible if a user’s secret key allows them to derive first the bl associated with
the lower class Ul , and then the required secret key Kl . Key updates are handled
by replacing only the interpolation polynomial, Hi(x), and the public parameter Qi
associated with Ui, as well as the old secret key Ki, which is updated to a new one,
say Ki′. All the other keys therefore remain unchanged.
We know, from our analysis of the CAT scheme, that in an n class hierarchy, the
size of the nth largest prime pi is O(n log n). Therefore, if bi < pi , the size of bi
is O(n log n) [89]. A key Ki requires O(log M) bits to be represented. According to
Knuth [89], computing a degree-k interpolation polynomial requires O(k^2) divisions
and subtractions, and since computing Qi requires that Ki be raised to the power
1/bi, this implies that we need O(log n) multiplications of O(log M) bit numbers, in
addition to the interpolation time O(k^2), to compute one key. Therefore the n keys
in the hierarchy are obtained by O(n log n) multiplications of O(log M) bit numbers
in addition to O(nk^2) interpolation time. Key derivation is a two-step process: first,
O(k^2) interpolation operations are required to obtain bl, and next, O(log n) multiplications of
O(log M) bit numbers are required to obtain Kl. It is also worth noting that these
schemes have been shown to be vulnerable to “collusion attack” [151, 69].
We note that in all of the KM schemes that we have discussed in this section,
rekeying requires updating the whole hierarchy, so the costs of key generation and
replacement are equivalent. We also note that these costs are high, because key
replacement requires that the associated data be re-encrypted.
In order to alleviate the costs of key replacement, Atallah et al. [12] proposed
a method of updating keys locally, i.e. in the sub-hierarchy associated with the af-
fected security class. In the Atallah et al. scheme, a user belonging to a higher level
class uses a hash function to derive the key belonging to a lower level class and the
authors show that the scheme is secure against collusion attack. This scheme has the
advantage over previous approaches that security classes can be inserted into, and
deleted from, the hierarchy without having to change the whole hierarchy. In fact,
since each of the paths between the higher and lower level classes is associated with
a cryptographic hash function, replacing, inserting, or deleting a key is achieved by
creating (in the case of replacement or insertion) or deleting a key and recomputing the
hash function for the affected paths using the new key. Cycles are eliminated from
the key management graph, with a shortcut algorithm [11] that operates by dynami-
cally creating direct access paths from high level classes to low level classes. While
this improves the cost of key derivation, it adds more paths to the key management
graph, further complicating security management and opening up potential
loopholes for security violations due to mismanaged edge function assignments.
There are a number of additional points worth noting regarding the Atallah et
al. scheme. First, in order to derive the key belonging to a lower level class, a user
belonging to a higher level class must derive all the keys along the path leading to the
target lower level class. By contrast, in the CAT scheme, key derivation is a one step
process. Although the Atallah et al. scheme concludes with some pointers on how
to reduce the number of hops performed to reach a lower level class, this comes at
the price of additional public key information. Second, when the replacements need
to be done throughout most of the hierarchy, rekeying is more costly than in previous
schemes because it implies replacing the keys, re-computing all the hash functions
on all the affected edges, and re-encrypting the data associated with the updated
keys. So, in essence the scheme does well in the best and average cases but performs
worse than previous schemes in the worst case scenario. Third, because there is a
lot of public information that is computed from the secret key, there is a greater
chance that an adversary can correctly guess at the value of the secret key. Finally,
although the authors claim that their scheme is secure against collusion, a closer look
indicates that this is not the case. In the Atallah et al. scheme, replacement requires
computing a new secret edge value yi,j, and since the old descendant class (group)
key is not replaced, once a high level user performs a key derivation operation to
obtain a key, he/she continues to have access to the lower level classes until the
classes are rekeyed.
In order to evaluate the complexity requirements of the Atallah et al. key man-
agement schemes, we assume that there are n groups in the hierarchy. We assume
also that the largest key Ki < M, so the size of each secret key is bounded by O(log M) bits.
The largest access key, yi,l, is obtained from the function yi,l = Kl − H(Ki, ll) mod 2^ρ,
where H(Ki, ll) is a cryptographic hash function, Kl is the key associated with the
lower level class connected by the edge to the higher level class, and ρ is a prime
number greater than any key value. Since (H(Ki, ll) mod 2^ρ) < 2^ρ, this implies that
H(Ki, ll) mod 2^ρ requires O(log(2^ρ)) = O(ρ) bits to be represented. Therefore, in
the case where a single class has s edges connecting it to the lower classes directly
below it, we need one randomly generated O(log M) bit key and s subtractions of
O(ρ) bit numbers from O(log M) bit numbers to obtain all the keys required. Since
there are n classes in total in the hierarchy, each with s edges connecting it to its
direct descendant classes, we need, n randomly generated class keys and a total of
n − 1 subtractions of O(ρ) bit numbers from O(log M) bit numbers to obtain all
the keys required. Table 2.2 summarizes the worst-case complexity costs of all the
one-way function key generation algorithms that we have discussed in this section.
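To make the edge mechanics concrete, here is a toy sketch of the construction just analyzed; the hash function, the value standing in for ρ, and the class labels are illustrative assumptions.

```python
import hashlib

RHO = 127                      # stands in for a prime larger than any key
MOD = 2 ** RHO

def H(key: int, label: bytes) -> int:
    # Cryptographic hash of (key, label), reduced mod 2^rho.
    data = key.to_bytes(32, "big") + label
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % MOD

K_high, K_low = 1234567, 7654321     # toy class keys
label_low = b"U_low"                 # public label of the lower class

# Public edge value published for the edge (high -> low):
y = (K_low - H(K_high, label_low)) % MOD

# Anyone holding K_high (plus the public y and label) derives K_low:
assert (y + H(K_high, label_low)) % MOD == K_low
```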
All of these solutions handled the access control problem as though keys assigned
to users were intended for an indeterminate period and the only time it
was necessary to regenerate keys was when a new user joined or left the system.
In practical scenarios, such as the examples given in relation to collaborative web
applications, it is likely that users belong to a class for a short period of time
and then acquire a higher role or are eliminated from the group, which may give them
the right to a higher security clearance or disallow them access completely.
Table 2.2 One-Way Function Schemes: Time Complexity Analysis Comparison (Here n is the
number of security classes in a hierarchy, M is the product of two large primes and is used to
generate a key Ki such that Ki < M, and s is the number of descendant classes directly accessible
from a class Ui.)

Scheme              | Generation                        | Rekeying
--------------------+-----------------------------------+----------------------------------
Ad-Hoc AT (Random)  | O(n log n) × O(log M)             | O(n log n) × O(log M)
CAT (Primes)        | O(n^2 log n) × O(log M)           | O(n^2 log n) × O(log M)
Mackinnon et al.    | O(n^2 log n) × O(log M)           | O(n^2 log n) × O(log M)
Sandhu              | O(n^2 log n) × O(log M)           | O(n^2 log n) × O(log M)
Yang and Li         | O(n^2 log n) × O(log M)           | O(n^2 log n) × O(log M)
Shen and Chen       | O(n log n) × O(nk^2) × O(log M)   | O(log n) × O(k^2) × O(log M)
Atallah et al.      | one O(log M) bit key and          | n O(log M) bit keys and
                    | s subtractions of O(ρ) bit        | n − 1 subtractions of O(ρ) bit
                    | numbers from O(log M) bit numbers | numbers from O(log M) bit numbers
Tzeng [131] proposed using time-bounded keys to avoid replacing most of the keys
each time a user is integrated into (e.g., subscriptions to newsletters where new users
are not allowed to view previous data), or excluded from, the system. His solution
supposes that each class Uj has many class keys Kj^t, where Kj^t is the key of class
Uj during time period t. A user from class Uj for time t1 through t2 is given an
information item I(j, t1, t2), such that, with I(j, t1, t2), the key Ki^t of Ui at time t can
be derived from Kj^t if and only if Ui ⪯ Uj and t1 ≤ t ≤ t2. The scheme is efficient in
key storage and computation time because the same key can be used with different
time bounds for multiple sessions.
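An interface-level sketch of this behavior; the table lookup below stands in for Tzeng's actual public-key and Lucas computations, which are omitted, and the toy poset and key values are illustrative.

```python
# With I(j, t1, t2), a user derives K_i^t only if U_i <= U_j and t1 <= t <= t2.
dominated = {"U1": {"U1", "U3"}}        # classes that U1 dominates (toy poset)
period_keys = {("U3", 5): "k-U3-5"}     # (class, period) -> key, toy values

def derive(info: tuple, target: str, t: int):
    j, t1, t2 = info
    if target in dominated[j] and t1 <= t <= t2:
        return period_keys.get((target, t))
    return None                          # outside the bound: no key derivable

assert derive(("U1", 1, 10), "U3", 5) == "k-U3-5"
assert derive(("U1", 1, 4), "U3", 5) is None
```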
The scheme allows a user to keep only their information item I(i, t1, t2), which is
independent of the total number of classes for the time period from t1 to t2, in order
to be able to derive all the keys to which they are entitled. However, the computation
of a cryptographic key, say Kj^t, requires expensive public-key and Lucas computations
[151, 145], which has limited the implementation of this scheme in the
distributed environment. Moreover, Yi and Ye have shown that Tzeng's scheme is
vulnerable to collusion attack [145].
Chien [36] addresses the collusion problem in Tzeng’s scheme with a solution
based on a tamper resistant device but as Yi has shown, this scheme is also vul-
nerable to collusion attack [144]. More recently, Wang et al. [136] have proposed
a time-bounded scheme based on merging that creates a correlation between group
keys by compressing them into single keys that are characterized by time bounds.
Although this results in better efficiency for key derivation in the average case, the
time complexity for key generation is in O(n log n) (here, n is the maximum number
of security classes in a poset that is representative of the access control hierarchy)
and that for computing the number of time intervals z associated with a key, in O(z^2).
More recent time-bounded key management schemes have focused on addressing
the collusion attack problem [36, 145, 144, 136, 13, 14]. However, it is worth em-
phasizing that time bounded schemes are not practically efficient for dynamic sce-
narios where user behavior is difficult to foresee since it is hard to accurately predict
time bounds to associate with keys. Table 2.3 summarizes our analysis of the com-
parative time complexities of the overhead created by the one-way function, and
time-bounded schemes.
Table 2.3 A Comparison of the Time Complexities of the Overhead created by the different Key
Management Approaches

Approach           | Rekeying                 | Collusion
-------------------+--------------------------+----------------------
One-Way Functions  | O(n^2 log n) × O(log M)  | No, for most cases
Time-Bounded       | O(z^2) × O(n log n)      | Yes, for most cases
A further weakness is that a malicious user with legitimate access could copy
information from the file into another file and leave the system without ever being
detected.
More recently, Ateniese et al. [14] have proposed an improvement on the variant
of IKM schemes that Blaze et al. [27] proposed in 1998 whereby proxy-reencryption
is used to assign users access to particular files associated with another user or
group. Basically, each group or user in the hierarchy is assigned two pairs of keys
(a master and a secondary key). The secondary key is used to encrypt files and
load them into a block store where they are made accessible to users outside of the
group. In order to access encrypted data from the block store a user must retrieve
the data and present both the data and their public key to an access control server.
The access control server re-encrypts the data in a format that is decryptable with
the user’s secret key, only if the presented secondary public key authorizes them
access. The problem of having to re-encrypt data, and to update and distribute new keys,
when group membership changes remains.
Therefore, irrespective of how a key management scheme is designed, rekeying is
handled by replacing the affected key and re-encrypting the associated data. Rekeying
is time-consuming and increases the vulnerability window of a CKM scheme,
making it susceptible to two issues: delayed response time in handling key updates
(rekeys) and an increased possibility of security violations during the vulnerability
window.
2.5 Other Access Control Paradigms

2.5.1 Overview
The client-server architecture in its simplest form allows a server to protect itself by
authenticating a client requesting access. Kerberos is an example of an authentica-
tion service designed for such an environment [130, 59]. This client-server archi-
tecture has, however, changed in many respects. For instance, when a client looks at a
web page, the client’s browser will run programs embedded in the page. So, instead
of handling simple accesses either to an operating system or a database, programs
are being sent from the server to be executed at the client side. Clients receive pro-
grams from servers and can store the session states in “cookies”. The World Wide
Web has also created a new paradigm for software distribution. Software can be
downloaded from the Internet and many organizations have learned the hard way to
restrict the kinds of programs that they allow their employees to download.
However, while the Internet has not created fundamentally new security prob-
lems, it has changed the context in which security needs to be enforced. Conse-
quently, the design of access control paradigms is currently going through a transi-
tory phase in which standard paradigms are being re-thought and evolved to cope
with the scenarios that arise on the Internet. The following sections explore some
of the changes that are occurring in access control paradigms, highlighting the pros
and cons of each in relation to the problem of designing adaptive security schemes
that ensure self-protecting access control.
2.5.2 Cookies
The HTTP protocol (HyperText Transfer Protocol) is a stateless protocol that was originally
designed to transfer HTML documents, and is the workhorse of the World
Wide Web [59]. HTTP requests are treated as independent events even when they are
initiated by the same client. Web browsers overcome the problem of having to re-
peat all management tasks associated with a transaction by storing the information
entered with the first request and automatically including that information in all sub-
sequent requests to the server. For instance (see Figure 2.1), when Alice allows Jane to
download photographs from Photographs Folder, Jane’s web browser needs to keep
a record of the state of the download operation so that it is able to return a consistent
state between Jane's client (browser) and the server hosting the Photographs Folder in case the
download operation is interrupted. The browser stores the state of the operation on
the client side as a cookie and the server can retrieve the cookie to learn about the
client’s current state in the operation.
At a basic level, cookies in themselves are not a security problem in the sense
that they are not executable pieces of code: they are data that the server stores at
the client side, and so they do not, by themselves, pose a problem of confidentiality.
A server will only store a cookie on a client
that has passed an authentication test. There are, however, a couple of application-
level attacks that exploit the behavior of cookies. For instance (see Figure 2.1), if
Alice sets up a bonus-points loyalty scheme for users of the online telephony system
Skype, a client could increase the score to get higher discounts with a cookie
poisoning attack. In a cookie poisoning attack, the attacker could be a third party
that makes an educated guess about a client's cookie and then uses the spoofed
cookie to impersonate the client.
Clients can protect themselves by setting up their browsers to control the placement
of cookies, obliging the server to request permission before storing a cookie, or by blocking
cookies altogether, but this can become a nuisance [137]. There is also the option of deleting
the cookies at the end of a session or allowing the server to protect itself by
encrypting cookies. Spoofing attacks can then be prevented by using proper authen-
tication. However, we note again that in this case, all the attack prevention strategies
are statically implemented and as in the approaches we have already discussed, at-
tack types are assumed to be known beforehand.
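As a sketch of the static defenses just mentioned, a server can authenticate its cookies with a message authentication code so that a spoofed or poisoned value fails verification; the key and cookie fields below are illustrative assumptions.

```python
import hashlib
import hmac

SERVER_KEY = b"server-secret"

def make_cookie(value: str) -> str:
    # Append an HMAC tag computed over the cookie value.
    tag = hmac.new(SERVER_KEY, value.encode(), hashlib.sha256).hexdigest()
    return f"{value}|{tag}"

def verify_cookie(cookie: str) -> bool:
    value, _, tag = cookie.rpartition("|")
    expected = hmac.new(SERVER_KEY, value.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

cookie = make_cookie("bonus_points=42")
assert verify_cookie(cookie)
assert not verify_cookie("bonus_points=9999|deadbeef")  # poisoned value fails
```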
In the XACML (eXtensible Access Control Markup Language) architecture, a Policy
Decision Point (PDP) evaluates an access request against the applicable policies and
communicates its decision to a Policy Enforcement Point (PEP). The PEP then allows
or denies access to the resource. The PEP
and PDP components may be embedded within a single application or may be distributed
across a network.
In order to make the PEP and PDP work, XACML provides a policy set, which is
a container that holds either a policy or other policy sets, plus links to other policies.
Each individual policy is stated using a set of rules. Conflicts are resolved through
policy-combining algorithms. XACML also includes methods of combining these
policies and policy sets, allowing some to override others. This is necessary because
the policies may overlap or conflict. For example, a simple policy-combining
algorithm is "Deny Overrides", which causes the final decision to be "Deny" if
any policy results in a "Deny". Conversely, other rules could be established to
allow an action if any of a set of policies results in "Allow".
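A toy rendering of this combining idea; the decision values and list structure are simplified stand-ins for XACML's actual evaluation model.

```python
def deny_overrides(decisions):
    # Final decision is "Deny" if any policy in the set denies.
    if "Deny" in decisions:
        return "Deny"
    if "Permit" in decisions:
        return "Permit"
    return "NotApplicable"

assert deny_overrides(["Permit", "Deny", "Permit"]) == "Deny"
assert deny_overrides(["Permit", "NotApplicable"]) == "Permit"
```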
Determining what policy or policy set to apply is accomplished using the “Tar-
get” component. A target is a set of rules or conditions applied to each subject,
object, and operation. When a rule’s conditions are met for a user (subject), ob-
ject, operation combination, its associated policy or policy set is applied using the
process described above.
The associated access control data for a given enterprise domain can then be en-
coded in an XML document, and the conformance of data to the enterprise access
control model can be obtained by validating the XML document against the XML
schema that represents the enterprise access control model using XML parsers.
These XML parsers are based on standard application programming interfaces such
as the Document Object Model (DOM), and the parser libraries are implemented in
various procedural languages to enable an application program to create, maintain,
and retrieve XML-encoded data.
Although XML-based and other access control languages provide capabilities
for composing policies from scratch, allowing users to specify access control policies
together with the authorizations through the programming of the language, they
lack a formal specification language for access control constraints (like historical-
based and domain constraints) that prevent assigning overlapping privileges. As an
example, consider the case of constraints that require the manipulation and record-
ing of access states (such as granted privileges). This is in order to avoid creating
situations that result in users who were previously denied access to certain files
being unknowingly granted access in a future state. Like most access control lan-
guages, XACML does not provide tools for the expression of historical constraints
for historical-based access control policies, thus leaving the completeness of the
constraint logics to the policy writer. This case is similar to the one raised
in Section 2.3, where Alice unknowingly grants Sam a combination of "view" and
“download” rights with respect to the Movies Folder, by allowing Sam to veto Jane’s
uploads to the site.
Domain constraints are based on the semantic information pertaining to an en-
terprise context; a grammar-based language cannot deal with content-based con-
straints. So, an XML schema is insufficient for a complete specification of the RBAC
model for an enterprise since the latter contains content-based domain constraints.
An example is not allowing more than one user to be assigned to the role of “secu-
rity administrator” (role cardinality constraint) and not allowing the roles “viewer”
and “uploader” to be assigned to the same user (separation-of-duty constraint).
Here, again we note as before that the specification languages assume a static
environment where changes in access control policies are generally effected manu-
ally by a security administrator. So in essence, although XML-based access control
languages provide features that enable them to specify a broad range of policies, a
formal specification is still needed in order to define constraint rules adaptively.
The speed with which viruses can spread over the Internet further emphasizes the need for
adaptive anti-viral programs.
Viruses and malicious code installed on a computer system create, amongst
other problems, that of denial of service. Denial of service attacks that come from
one or two sources can often be handled quite effectively. Matters become much
more difficult when the denial of service attack is distributed. In distributed denial
of service attacks, a collection of malicious processes jointly attempt to bring down
a networked service. Typically, the attackers install malicious code on computers
that are not well protected and then use the combined power of these computers
to carry out the attack. There are basically two types of denial of service attacks:
bandwidth and resource depletion.
Bandwidth depletion attacks are accomplished by sending messages to a single
machine with the effect being that normal messages have difficulty getting to the
receiver. Resource depletion attacks on the other hand deceive the receiver into using
up resources on useless messages. An example of a resource depletion attack is TCP
SYN-flooding [130]. Here, the attacker initiates a large number of connections to a
server but never responds to the acknowledgments from the server. The result is that
the server accumulates half-open connections and keeps resending acknowledgment
messages, consuming resources and slowing down network communications.
Since these attacks occur by secretly installing malicious software (malware)
on badly protected network devices, intrusion detection algorithms aim to detect
and remove such snippets of code from a system before the malware does damage.
Intrusion detection systems work to detect malicious behavior in order to alert a
security administrator so that some action can be taken to stop the intrusion before
the system is damaged irreparably [29, 56].
Firewalls play an added protection role in distributed systems. Essentially, a fire-
wall disconnects any part of a distributed system from the outside world. All out-
going and incoming packets are routed through a special computer and inspected
before they are passed. Unauthorized traffic is discarded and not allowed to con-
tinue. An important point is that the firewall itself needs to be protected against any
form of security threat and should never fail.
There are essentially two categories of firewalls: the packet-filtering gateway and
the application-level gateway. The packet-filtering gateway filters incoming and out-
going packets. In contrast to the packet-filtering gateway, the application-level gate-
way inspects the content of an incoming or outgoing message. An example of an
application-level gateway is a mail gateway that discards incoming or outgoing mail
exceeding a certain size. More sophisticated mail gateways exist that are capable of
filtering spam e-mail. The common pattern inherent in all the approaches discussed
above is the inability to forecast violation scenarios or adapt to new scenarios dy-
namically. Symantec's Norton Anti-Virus software is taking steps towards building
pre-emptive anti-virus software that incorporates adaptivity by using machine learning
and data mining techniques, which is an indication that professional organiza-
tions also recognize the need for an evolution towards adaptive security mechanisms
[64, 78]. Adaptive intrusion detection algorithms are also still at a budding stage but
the idea of moving towards schemes that can adjust to new scenarios is inherent in
all these approaches. Table 2.4 summarizes our discussion and comparative analysis
of the DAC, MAC, CAC, RBAC, Cookies, and XACML approaches to access
control.
When access to outsourced data is enforced cryptographically, the data remains
secret to all the participants in the system, and only those in possession of valid
keys are able to decrypt and read meaningful information from it.
Research on using cryptography to control access to outsourced data
began with the approach that Miklau et al. proposed in 2003 [107]. In
their approach, different cryptographic keys are used to encrypt different
portions of an XML document, with special metadata nodes introduced into the
document's structure to hold the key information. However, while this secures the
outsourced data, and additionally imposes a form of hierarchical access control, it
does not address the problem of handling key updates nor, more crucially, the SP's
need to categorize the data it receives.
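The following sketch conveys the underlying idea rather than Miklau et al.'s actual construction: distinct portions of a document are encrypted under distinct keys, so a recipient can read only the portions for which it holds keys. We use the Python `cryptography` package's Fernet primitive purely for brevity; it is not the primitive of [107], and the key-to-portion assignment here is fixed by hand rather than derived from a policy.

```python
from cryptography.fernet import Fernet

# One key per protection block. In [107] the assignment of keys to
# subtrees is driven by the policy; here it is fixed by hand.
keys = {"public": Fernet.generate_key(), "staff": Fernet.generate_key()}

document = {
    "public": b"<summary>quarterly totals</summary>",
    "staff": b"<salaries>...</salaries>",
}

# Encrypt each portion under its own key before outsourcing.
outsourced = {
    block: Fernet(keys[block]).encrypt(plaintext)
    for block, plaintext in document.items()
}

# A user holding only the "public" key can decrypt just that portion.
user_keys = {"public": keys["public"]}
for block, ciphertext in outsourced.items():
    if block in user_keys:
        print(block, Fernet(user_keys[block]).decrypt(ciphertext))
    else:
        print(block, "unreadable without the key")
```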
De Capitani di Vimercati et al. build on this approach and specifically consider
the problem of authorization policy updates which, when access control is
supported cryptographically, is equivalent to the problem of key updates
[132, 33]. Their approach operates with two keys. The first key is generated by
the data owner and protects the data initially: the owner encrypts the data before
transmitting it to the SP. Depending on the authorization policies, the SP creates a
second key that is used to selectively encrypt portions of the data to reflect policy
modifications. The combination of the two layers provides an efficient and robust
solution to the problem of securing data in outsourced environments. However, a
close look at this solution reveals that policy modifications or updates are handled
by updating the affected cryptographic key and re-encrypting the data, which is
computationally expensive when large amounts of data must be re-encrypted.
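A minimal sketch of the two-layer idea follows, again with Fernet standing in for the scheme's actual primitives: the owner's base layer hides the plaintext from the SP, and a policy update touches only the SP's surface layer, though, as noted above, the affected data must still be re-encrypted.

```python
from cryptography.fernet import Fernet

owner_key = Fernet.generate_key()  # base layer, held by the data owner
sp_key_v1 = Fernet.generate_key()  # surface layer, held by the SP

data = b"<record>confidential</record>"

# The owner encrypts before outsourcing; the SP never sees plaintext.
base = Fernet(owner_key).encrypt(data)

# The SP adds a second layer reflecting the current policy.
stored = Fernet(sp_key_v1).encrypt(base)

# Policy update: the SP strips and redoes ONLY its own layer; the
# owner's layer (and the plaintext) is never exposed in the process.
sp_key_v2 = Fernet.generate_key()
stored = Fernet(sp_key_v2).encrypt(Fernet(sp_key_v1).decrypt(stored))

# An authorized reader holding both current keys recovers the data.
assert Fernet(owner_key).decrypt(Fernet(sp_key_v2).decrypt(stored)) == data
```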
Cryptographic access control approaches to secure data access and cost-effective
key management have also been investigated in the context of distributed
environments like the Internet [9, 10, 44, 82, 66]. Examples of applications in
which access can be controlled using these more conventional Cryptographic Key
Management approaches to enforcing access control hierarchically include pay-TV,
sensor networks, and social networking environments [23, 147, 149]. However, the
case of controlling access to outsourced data differs from these in that the SP must
categorize all the data it receives in a way that prevents malicious access while also
minimizing the cost of authorization policy changes. Moreover, when cryptographically
supported access requires re-encrypting large amounts of data, there is a risk of
jeopardizing data consistency: updates may fail to be written to the current data
version because a time-consuming re-encryption is still in progress.
Autonomic security methods have not gained as much popularity in the domain of
access control, owing to skepticism and reluctance on the part of users towards
autonomic approaches [35]. The main reason for this skepticism is that security
breaches create scandals that are expensive to handle, so business owners prefer
security schemes that react in pre-specified and predictable ways over those that
adapt and evolve dynamically. However, web applications are increasingly faced
with scenarios that are difficult to predict a priori, which makes manual security
management challenging and prone to error [87, 35]. Breaches created by errors
in security policy specifications are already difficult to trace and prevent, and this
will only become harder as systems grow more complex [35].
Security via the Autonomic Computing paradigm was first proposed by Chess et
al. in 2003 [35]. To address the challenge of ensuring security in complex
situations, they suggest using the paradigm of Autonomic Computing that IBM
proposed in 2001 [88, 87]. Autonomic Computing supposes that a system can be
designed to self-regulate, using automatic reactions to defend, optimize, and heal
itself. The functions of an autonomic system are modeled as a feedback control
loop with two major components: the autonomic manager and the managed
resource. The autonomic manager adjusts the behavior of the managed resource
on the basis of recorded observations.
The autonomic model, shown in Figure 2.7, comprises six basic functions:
the sensor, monitor, analyzer, planner, executor, and effector. The sensor captures
information related to the behavior of the managed component and transmits this
information to the monitor. The monitor determines whether or not an event is ab-
normal by comparing observed values to threshold values in the knowledge base.
The analyzer, on reception of a message from the monitor, performs a detailed anal-
ysis to decide what parameters need to be adjusted and by how much, and transmits
this information to the planner where a decision is made on the action to take. The
executor inserts the task into a scheduling queue and calls the effector to enforce the
changes on the managed resource in the order indicated by the planner.
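A minimal sketch of this six-function loop is given below. The monitored quantity (failed logins), the threshold, and the adjustment rule are all chosen arbitrarily for illustration and do not come from any particular autonomic system.

```python
from collections import deque

THRESHOLD = 100        # hypothetical knowledge-base threshold
task_queue = deque()   # the executor's scheduling queue

class ManagedResource:
    """The managed component: exposes an observation and a parameter."""
    def __init__(self):
        self.rate_limit = 1000  # requests/s currently permitted

    def sense(self):
        # Sensor: capture behavior of the managed component.
        return {"failed_logins": 140}

def monitor(reading):
    # Monitor: flag the event as abnormal if it exceeds the threshold.
    return reading["failed_logins"] > THRESHOLD

def analyze(reading):
    # Analyzer: decide which parameter to adjust and by how much.
    excess = reading["failed_logins"] - THRESHOLD
    return {"param": "rate_limit", "delta": -5 * excess}

def plan(analysis):
    # Planner: turn the analysis into a concrete action.
    return ("adjust", analysis["param"], analysis["delta"])

def execute(action, resource):
    # Executor: queue the task, then call the effector in order.
    task_queue.append(action)
    while task_queue:
        _, param, delta = task_queue.popleft()
        effect(resource, param, delta)

def effect(resource, param, delta):
    # Effector: enforce the change on the managed resource.
    setattr(resource, param, getattr(resource, param) + delta)

resource = ManagedResource()
reading = resource.sense()
if monitor(reading):
    execute(plan(analyze(reading)), resource)
print(resource.rate_limit)  # 1000 - 5 * 40 = 800
```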
Autonomic Computing aims to provide survivability and fault tolerance for security
schemes [35, 78]. Johnston et al. [78] propose a preliminary approach that uses
reflex Autonomic Computing to develop a multi-agent security system.
This is an interesting approach to self-protecting security, but the authors indicate
that a real-world implementation of their prototype would require additional
security controls, and the prototype does not allow a security class to operate
independently; as Moreno et al. [109] point out, the connection to the rest of the
system is then lost. We note also that this work on autonomic access control focuses
mainly on security policy definitions and on restricting the messages sent and
received by entities (users and/or agents) in the system, as opposed to key management
for cryptographic access control. The problem of designing adaptive CAC schemes
to support a specified security policy definition in general still needs to be addressed.
The discussion in this chapter has centered on the state of the art in distributed
systems access control. The pros and cons of each approach were highlighted in
relation to the problems that arise in ensuring access control in collaborative
web applications where user group membership is dynamic. Proactive security
approaches like access control are popular because it is easier to prevent the damage
that results from security loopholes than to wait for a violation to occur and then
try to repair the damage caused.
In analyzing each approach, we noted that access control methods typically face
one or more of the following three weaknesses: vulnerability to security violations,
inefficiency in management resulting in delays as well as reduced availability, and
a lack of inbuilt mechanisms that allow them to handle new scenarios adaptively (i.e.,
without a security administrator having to intervene manually). Moreover, all these
security solutions share the fundamental assumption that, if they are specified
correctly, failure (security violation) is unlikely.
However, security attacks show that access control schemes need not only to
be supported by some form of fault tolerance (a way of minimizing the chances of
a vulnerability being exploited by malicious users) but also to be designed
in ways that enable them to adjust their behavior in order to cope with changing
scenarios. In the following chapters we focus specifically on cryptographic access
control schemes that are designed using the DKM approach. We show that adaptability
can enhance the performance and security of cryptographic access control
schemes without necessarily changing their underlying security specifications. In
Table 2.5, we summarize the main attributes of conventional and autonomic access
control approaches.