Chapter 1
Security Concepts and Principles
Our subject area is computer and Internet security—the security of software, comput-
ers and computer networks, and of information transmitted over them and files stored on
them. Here the term computer includes programmable computing/communications de-
vices such as a personal computer or mobile device (e.g., laptop, tablet, smartphone), and
machines they communicate with including servers and network devices. Servers include
front-end servers that host web sites, back-end servers that contain databases, and inter-
mediary nodes for storing or forwarding information such as email, text messages, voice,
and video content. Network devices include firewalls, routers and switches. Our interests
include the software on such machines, the communications links between them, how
people interact with them, and how they can be misused by various agents.
We first consider the primary objectives or fundamental goals of computer security.
Many of these can be viewed as security services provided to users and other system com-
ponents. Later in this chapter we consider a longer list of design principles for security,
useful in building systems that deliver such services.
Figure 1.1: Six high-level computer security goals (properties delivered as a service). Icons denote end-goals. Important supporting mechanisms are shown in rectangles. Labels recoverable from the figure include availability, access control, authentication (data origin and entity), accountability, policy, and digital evidence.
A security policy specifies, explicitly or implicitly, a system's protection rules: the specific users allowed to access specific assets, and the allowed means of access;1 security services to be provided; and system controls that must be in place. Ideally, a system enforces the rules implied by its policy. Depending on viewpoint and methodology, the policy either dictates, or is derived from, the system's security requirements.
THEORY, PRACTICE. In theory, a formal security policy precisely defines each pos-
sible system state as either authorized (secure) or unauthorized (non-secure). Non-secure
states may bring harm to assets. The system should start in a secure state. System ac-
tions (e.g., related to input/output, data transfer, or accessing ports) cause state transi-
tions. A security policy is violated if the system moves into an unauthorized state. In
practice, security policies are often informal documents including guidelines and expec-
tations related to known security issues. Formulating precise policies is more difficult and
time-consuming. Their value is typically under-appreciated until security incidents occur.
Nonetheless, security is defined relative to a policy, ideally in written form.
ATTACKS, AGENTS. An attack is the deliberate execution of one or more steps in-
tended to cause a security violation, such as unauthorized control of a client device. At-
tacks exploit specific system characteristics called vulnerabilities, including design flaws,
implementation flaws, and deployment or configuration issues (e.g., lack of physical iso-
lation, ongoing use of known default passwords, debugging interfaces left enabled). The
source or threat agent behind a potential attack is called an adversary, and often called an
attacker once a threat is activated into an actual attack. Figure 1.2 illustrates these terms.
Figure 1.2: Security policy violations and attacks. a) A policy violation results in a non-secure state. b) A threat agent becomes active by launching an attack, aiming to exploit a vulnerability through a particular attack vector.
THREAT. A threat is any combination of circumstances and entities that might harm assets, or cause security violations. A credible threat has both capable means and intentions. The mere existence of a threat agent and a vulnerability that they have the capability to exploit on a target system does not necessarily imply that an attack will be instantiated in a given time period; the agent may fail to take action, e.g., due to indifference or insufficient incentive. Computer security aims to protect assets by mitigating threats, largely by identifying and eliminating vulnerabilities, thereby disabling viable attack vectors—specific methods, or sequences of steps, by which attacks are carried out. Attacks typically have specific objectives, such as extraction of strategic or personal information.
1 For example, corporate policy may allow authorized employees remote access to regular user accounts
via SSH (Chapter 10), but not remote access to a superuser or root account. A password policy (Chapter 3)
may require that passwords have at least 10 characters including two non-alphabetic characters.
A common way to model risk R is as a product of three factors:

R = T ·V ·C (1.1)
T reflects threat information (essentially, the probability that particular threats are instan-
tiated by attackers in a given period). V reflects the existence of vulnerabilities. C re-
flects asset value, and the cost or impact of a successful attack. Equation (1.1) highlights
the main elements in risk modeling and obvious relationships—e.g., risk increases with
threats (and with the likelihood of attacks being launched); risk requires the presence of a
vulnerability; and risk increases with the value of target assets. See Figure 1.3.
Figure 1.3: Risk equation. R = T·V·C, where P denotes the probability that an activated threat T successfully exploits a vulnerability V, and C is the cost (tangible plus intangible) if an attack succeeds. Intangible costs may include corporate reputation.
Equation (1.1) may be rewritten to combine T and V into a variable P denoting the
probability that a threat agent takes an action that successfully exploits a vulnerability:
R = P ·C (1.2)
Summing expected losses over event types gives the annualized loss expectancy (ALE):

ALE = Σi Fi ·Ci (1.3)

Here the sum is over all security events modeled by index i, which may differ for different types of assets. Fi is the estimated annualized frequency of events of type i (taking into account a combination of threats, and vulnerabilities that enable threats to translate into successful attacks). Ci is the average loss expected per occurrence of an event of type i.
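As an illustration (not from the original text), the following Python sketch computes an ALE from assumed per-event-type frequencies and costs:

```python
# Sketch of equation (1.3): ALE = sum over event types i of F_i * C_i.
# Event types, frequencies and costs below are illustrative assumptions.

events = {
    # event type: (F_i = events per year, C_i = average loss per event, in dollars)
    "phishing-enabled account takeover": (4.0, 12_000),
    "ransomware on a workstation":       (0.5, 80_000),
    "lost or stolen laptop":             (2.0,  5_000),
}

ale = sum(freq * cost for freq, cost in events.values())
print(f"Estimated ALE: ${ale:,.0f} per year")   # 48,000 + 40,000 + 10,000 = 98,000
```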
RISK ASSESSMENT QUESTIONS. Equations (1.1)–(1.3) bring focus to some questions
that are fundamental not only in risk assessment, but in computer security in general:
1. What assets are most valuable, and what are their values?
2. What system vulnerabilities exist?
3. What are the relevant threat agents and attack vectors?
4. What are the associated estimates of attack probabilities, or frequencies?
COST-BENEFIT ANALYSIS. The cost of deploying security mechanisms should be
accounted for. If the total cost of a new defense exceeds the anticipated benefits (e.g.,
lower expected losses), then the defense is unjustifiable from a cost-benefit analysis view-
point. ALE estimates can inform decisions related to the cost-effectiveness of a defensive
countermeasure—by comparing losses expected in its absence, to its own annualized cost.
Example (Cost-benefit of password expiration policies). If forcing users to change
their passwords every 90 days reduces monthly company losses (from unauthorized ac-
count access) by $1000, but increases monthly help-desk costs by $2500 (from users being
locked out of their accounts as a result of forgetting their new passwords), then the cost
exceeds the benefit before even accounting for usability costs such as end-user time.
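For concreteness (the arithmetic is implied by the example, not stated in it): the net monthly effect is $1000 − $2500 = −$1500, roughly $18,000 of added cost per year, so the policy fails a cost-benefit test even before usability costs are counted.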
RISK ASSESSMENT CHALLENGES. Quantitative risk assessment may be suited to
incidents that occur regularly, but not in general. Rich historical data and stable statistics
are needed for useful failure probability estimates—and these exist over large samples,
e.g., for human life expectancies and time-to-failure for incandescent light bulbs. But for
computer security incidents, relevant such data on which to base probabilities and asset
losses is both lacking and unstable, due to the infrequent nature of high-impact security
incidents, and the uniqueness of environmental conditions arising from variations in host
and network configurations, and in threat environments. Other barriers include:
• incomplete knowledge of vulnerabilities, worsened by rapid technology evolution;
• the difficulty of quantifying the value of intangible assets (strategic information, cor-
porate reputation); and
• incomplete knowledge of threat agents and their adversary classes (Sect. 1.4). Ac-
tions of unknown intelligent human attackers cannot be accurately predicted; their
existence, motivation and capabilities evolve, especially for targeted attacks.
Indeed for unlikely events, ALE analysis (see above) is a guessing exercise with little ev-
idence supporting its use in practice. Yet, risk assessment exercises still offer benefits—
e.g., to improve an understanding of organizational assets and encourage assigning values
to them, to increase awareness of threats, and to motivate contingency and recovery plan-
ning prior to losses. The approach discussed next aims to retain the benefits while avoiding
inaccurate numerical estimates.
QUALITATIVE RISK ASSESSMENT. As numerical values for threat probabilities (and
impact) lack credibility, most practical risk assessments are based on qualitative ratings
and comparative reasoning. For each asset or asset class, the relevant threats are listed;
then for each such asset-threat pair, a categorical rating such as (low, medium, high) or
perhaps ranging from very low to very high, is assigned to the probability of that threat
action being launched-and-successful, and also to the impact assuming success. The com-
bination of probability and impact rating dictates a risk rating from a combination matrix
such as Table 1.1. In summary, each asset is identified with a set of relevant threats, and
comparing the risk ratings of these threats allows a ranking indicating which threat(s) pose
the greatest risk to that asset. Doing likewise across all assets allows a ranked list of risks
to an organization. In turn, this suggests which assets (e.g., software applications, files,
databases, client machines, servers and network devices) should receive attention ahead
of others, given a limited computer security budget.
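As an illustration (the matrix values below are assumptions; the book's Table 1.1 is not reproduced here), a small Python sketch combining categorical probability and impact ratings into a risk rating, then ranking asset-threat pairs:

```python
# Qualitative risk rating sketch. The combination matrix is an assumption
# (the book's Table 1.1 may differ). Ratings: L = low, M = medium, H = high.

RISK_MATRIX = {
    # (probability, impact): risk rating
    ("L", "L"): "L", ("L", "M"): "L", ("L", "H"): "M",
    ("M", "L"): "L", ("M", "M"): "M", ("M", "H"): "H",
    ("H", "L"): "M", ("H", "M"): "H", ("H", "H"): "H",
}

def risk_rating(probability: str, impact: str) -> str:
    """Combine categorical probability and impact ratings into a risk rating."""
    return RISK_MATRIX[(probability, impact)]

# Rank asset-threat pairs by risk, to prioritize a limited security budget.
pairs = [("customer database", "SQL injection", "M", "H"),
         ("public web page",   "defacement",    "H", "L")]
for asset, threat, p, i in pairs:
    print(f"{asset} / {threat}: risk {risk_rating(p, i)}")
```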
RISK MANAGEMENT VS. MITIGATION. Not all threats can (or necessarily should)
be eliminated by technical means alone. Risk management combines the technical activity
of estimating risk or simply identifying threats of major concern, and the business activity
of “managing” the risk, i.e., making an informed response. Options include (a) mitigat-
ing risk by technical or procedural countermeasures; (b) transferring risk to third parties,
through insurance; (c) accepting risk in the hope that doing so is less costly than (a) or
(b); and (d) eliminating risk by decommissioning the system.
Insider attacks come from parties having some starting advantage, e.g., employees with physical access or network credentials as legitimate users.
The line between outsiders and insiders can be fuzzy, for example when an outsider some-
how gains access to an internal machine and uses it to attack further systems.
SCHEMAS. Various schemas are used in modeling adversaries. A categorical schema
classifies adversaries into named groups, as given in Table 1.2. A capability-level schema
groups generic adversaries based on a combination of capability (opportunity and re-
sources) and intent (motivation), say from Level 1 to 4 (weakest to strongest). This may
also be used to sub-classify named groups. For example, intelligence agencies from the
U.S. and China may be in Level 4, insiders could range from Level 1 to 4 based on their
capabilities, and novice crackers may be in Level 1. It is also useful to distinguish tar-
geted attacks aimed at specific individuals or organizations, from opportunistic attacks or
generic attacks aimed at arbitrary victims. Targeted attacks may either use generic tools
or leverage target-specific personal information.
SECURITY EVALUATIONS AND PENETRATION TESTING. Some government de-
partments and other organizations may require that prior to purchase or deployment, prod-
ucts be certified through a formal security evaluation process. This involves a third party
lab reviewing, at considerable cost and time, the final form of a product or system, to
verify conformance with detailed evaluation criteria as specified in relevant standards; as
a complication, recertification is required once even the smallest changes are made. In
contrast, self-assessments through penetration testing (pen testing) involve customers or
hired consultants (with prior permission) finding vulnerabilities in deployed products by
demonstrating exploits on their own live systems; interactive and automated toolsets run
attack suites that pursue known design, implementation and configuration errors compiled
from previous experience. Traditional pen testing is black-box, i.e., proceeds without use
of insights from design documents or source code; use of such information (making it
white-box) increases the chances of finding vulnerabilities and allows tighter integration
with overall security analysis. Note that tests carried out by product vendors prior to prod-
uct release, including common regression testing, remain important but cannot find issues
arising from customer-specific configuration choices and deployment environments.
Figure 1.5: Examples of threat modeling approaches: architectural diagrams, attack trees, checklists, and STRIDE (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Escalation of privilege).
Figure 1.8: Attack tree. The root node is the attack goal (here, enter house); children are alternative or jointly required sub-goals (e.g., by window, by door, by tunnel). An attack vector is a full path from root to leaf.
different vectors. Multiple children of a node (Fig. 1.8) are by default distinct alternatives
(logical OR nodes); however, a subset of nodes at a given level can be marked as an
AND set, indicating that all are jointly necessary to meet the parent goal. Nodes can be
annotated with various details—e.g., indicating a step is infeasible, or by values indicating
costs or other measures. The attack information captured can be further organized, often
suggesting natural classifications of attack vectors into known categories of attacks.
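To make the OR/AND semantics concrete, here is a small Python sketch (node names and structure are illustrative assumptions, not from the book) that enumerates attack scenarios, i.e., the sets of leaf steps that jointly achieve the root goal:

```python
# Minimal attack tree sketch (structure and labels are illustrative assumptions).
# OR node: any one child suffices; AND node: all children are jointly required.
from dataclasses import dataclass, field
from itertools import product
from typing import List

@dataclass
class Node:
    label: str
    kind: str = "leaf"            # "leaf", "or", or "and"
    children: List["Node"] = field(default_factory=list)

def scenarios(node: Node) -> List[List[str]]:
    """Return all attack scenarios: sets of leaf steps achieving the node's goal."""
    if node.kind == "leaf":
        return [[node.label]]
    if node.kind == "or":         # alternatives: union of the children's scenarios
        return [s for child in node.children for s in scenarios(child)]
    combined = []                 # "and": cross-product of the children's scenarios
    for combo in product(*(scenarios(c) for c in node.children)):
        combined.append([step for part in combo for step in part])
    return combined

enter_house = Node("enter house", "or", [
    Node("by door"),
    Node("by window", "and", [Node("cut glass"), Node("remove pane")]),
])
for s in scenarios(enter_house):
    print(" + ".join(s))          # "by door", then "cut glass + remove pane"
```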
The main output is an extensive (but usually incomplete) list of possible attacks, e.g.,
Figure 1.9. The attack paths can be examined to determine which ones pose a risk in the
real system; if the circumstances detailed by a node are for some reason infeasible in the
target system, the path is marked invalid. This helps maintain focus on the most relevant
threats. Notice the asymmetry: an attacker need only find one way to break into a system,
while the defender (security architect) must defend against all viable attacks.
An attack tree can help in forming a security policy, and in security analysis to check
that mechanisms are in place to counter all identified attack vectors, or explain why par-
ticular vectors are infeasible for relevant adversaries of the target system. Attack vectors
identified may help determine the types of defensive measures needed to protect specific
assets from particular types of malicious actions. Attack trees can be used to prioritize
vectors as high or low, e.g., based on their ease, and relevant classes of adversary.
The attack tree methodology encourages a form of directed brainstorming, adding
structure to what is otherwise an ad hoc task. The process benefits from a creative mind.
It requires a skill that improves with experience. The process is also best used iteratively,
with a tree extended as new attacks are identified on review by colleagues, or merged
with trees independently constructed by others. Attack trees motivate security architects
to “think like attackers”, to better defend against them.
Example (Enumerating password authentication attacks). To construct a list of at-
tacks on password authentication, one might draw a data flow diagram showing a pass-
word’s end-to-end paths, then identify points where an attacker might try to extract infor-
mation. An alternative approach is to build an attack tree, with root goal to gain access
to a user’s account on a given system. Which method is used is a side detail towards the
desired output: a list of potential attacks to consider further (Figure 1.9).
‡Exercise (Free-lunch attack tree). Read the article by Mauw [12], for a fun example
of an attack tree. As supplementary reading see attack-defense trees by Kordy [10].
Both can result from failing to adapt to changes in technology and attack capabilities.
Model assumptions can also be wrong, i.e., fail to accurately represent a target system, due
to incomplete or incorrect information, over-simplification, or loss of important details
through abstraction. Another issue is failure to record assumptions explicitly—implicit
assumptions are rarely scrutinized. Focusing attention on the wrong threats may mean
wasting effort on threats of lower probability or impact than others. This can result not
only from unrealistic assumptions but also from: inexperience or lack of knowledge, fail-
ure to consider all possible threats (incompleteness), new vulnerabilities due to computer
system or network changes, or novel attacks. It is easy to instruct someone to defend
against all possible threats; anticipating the unanticipated is more difficult.
W HAT ’ S YOUR THREAT MODEL . Ideally, threat models are built using both prac-
tical experience and analytical reasoning, and continually adapted to inventive attackers
who exploit rapidly evolving software systems and technology. Perhaps the most impor-
tant security analysis question to always ask is: What’s your threat model? Getting the
threat model wrong—or getting only part of it right—allows many successful attacks in
the real world, despite significant defensive expenditures. We give a few more examples.
Example (Online trading fraud). A security engineer models attacks on an online
stock trading account. To stop an attacker from extracting money, she disables the ability
to directly remove cash from such accounts, and to transfer funds across accounts. The
following attack nonetheless succeeds. An attacker X breaks into a victim account by
obtaining its userid and password, and uses funds therein to bid up the price of a thinly
traded stock, which X has previously purchased at lower cost on his own account. Then
X sells his own shares of this stock, at this higher price. The victim account ends holding
the higher-priced shares, bought on the (manipulated) open market.
Example (Phishing one-time passwords). Some early online banks used one-time
passwords, sharing with each account holder a sheet containing a list of unique passwords
to be used once each from top to bottom, and crossed off upon use—to prevent repeated
use of passwords stolen (e.g., by phishing or malicious software). Such schemes have
nonetheless been defeated by tricking users to visit a fraudulent version of a bank web site,
and requesting entry of the next five listed passwords “to help resolve a system problem”.
The passwords entered are used once each on the real bank site, by the attacker. (Chapter
3 discusses one-time passwords and the related mechanism of passcode generators.)
Example (Bypassing perimeter defenses). In many enterprise environments, corpo-
rate gateways and firewalls selectively block incoming traffic to protect local networks
from the external Internet. This provides no protection from employees who, bypassing
such perimeter defenses, locally install software on their computers, or directly connect
by USB port memory tokens or smartphones for synchronization. A well-known attack
vector exploiting this is to sprinkle USB tokens (containing malicious software) in the
parking lot of a target company. Curious employees facilitate the rest of the attack.
DEBRIEFING. What went wrong in the above examples? The assumptions, the threat
model, or both, were incorrect. Invalid assumptions or a failure to accurately model the
operational environment can undermine what appears to be a solid model, despite convinc-
ing security arguments and mathematical proofs. One common trap is failing to validate
assumptions: if a security proof relies on assumption A (e.g., hotel staff are honest), then
the logical correctness of the proof (no matter how elegant!) does not provide protection
if in the current hotel, A is false. A second is that a security model may truly provide a
100% guarantee that all attacks it considers are precluded by a given defense, while in
practice the modeled system is vulnerable to attacks that the model fails to consider.
ITERATIVE PROCESS: EVOLVING THREAT MODELS. As much art as science,
threat modeling is an iterative process, requiring continual adaptation to more complete
knowledge, new threats and changing conditions. As environments change, static threat
models become obsolete, failing to accurately reflect reality. For example, many Internet
security protocols are based on the original Internet threat model, which has two core
assumptions: (1) endpoints, e.g., client and server machine, are trustworthy; and (2) the
communications link is under attacker control (e.g., subject to eavesdropping, message
modification, message injection). This follows the historical cryptographer’s model for
securing data transmitted over unsecured channels in a hostile communications environ-
ment. However, assumption (1) often fails in today’s Internet where malware (Chapter 7)
has compromised large numbers of endpoint machines.
Example (Hard and soft keyloggers). Encrypting data between a client machine and
server does not protect against malicious software that intercepts keyboard input, and
relays it to other machines electronically. The hardware variation is a small, inexpensive
memory device plugged in between a keyboard cable and a computer, easily installed and
removed by anyone with occasional brief office access, such as cleaning staff.
1.6.2 Tying security policy back to real outcomes and security analysis
Returning to the big picture, we now pause to consider: How does “security” get tied back
to “security policy”, and how does this relate to threat models and security mechanisms?
OUTCOME SCENARIOS. Security defenses and mechanisms (means to implement
defenses) are designed and used to support security policies and services as in Fig. 1.1.
Consider the following outcomes relating defenses to security policies.
1. The defenses fail to properly support the policy; the security goal is not met.
2. The defenses succeed in preventing policy violations, and the policy is complete in the
sense of fully capturing an organization’s security requirements. The resulting system
is “secure” (both relative to the formal policy and real-world expectations).
3. The formal policy fails to fully capture actual security requirements. Here, even if
defenses properly support policy (attaining “security” relative to it), the common-sense
definition of security is not met, e.g., if an unanticipated simple attack still succeeds.
The third case motivates the following advice: Whenever ambiguous words like “secure”
and “security” are used, request that their intended meaning and context be clarified.
SECURITY ANALYSIS AND KEY QUESTIONS. Figure 1.10 provides overall con-
text for the iterative process of security design and analysis. It may proceed as follows.
Identify the valuable assets. Determine suitable forms of protection to counter identified
threats and vulnerabilities; adversary modeling and threat modeling help here. This helps
refine security requirements, shaping the security policy, which in turn shapes system de-
sign. Security mechanisms that can support the policy in the target environment are then
selected. As always, key questions help:
• What assets are valuable? (Alternatively: what are your protection goals?)
• What potential attacks put them at risk?
• How can potentially damaging actions be stopped or otherwise managed?
Options to mitigate future damage include not only attack prevention by countermeasures
that preclude (or reduce the likelihood of) attacks successfully exploiting vulnerabilities,
but also detection, real-time response, and recovery after the fact. Quick recovery can
reduce impact. Consequences can also be reduced by insurance (Section 1.3).
TESTING IS NECESSARILY INCOMPLETE. Once a system is designed and imple-
mented, how do we test that the protection measures work and that the system is “se-
cure”? (Here you should be asking: What definition of “secure” are you using?) How to
test whether security requirements have been met remains without a satisfactory answer.
Section 1.4 mentioned security analysis (often finding design flaws), third-party security
evaluation, and pen testing (often finding implementation and configuration flaws). Using
checklist ideas from threat modeling, testing can be done based on large collections of
common flaws, as a form of security-specific regression testing; specific, known attacks
can be compiled and attempted under controlled conditions, to see whether a system suc-
cessfully withstands them. This of course leaves unaddressed attacks not yet foreseen or
invented, and thus difficult to include in tests. Testing is also possible only for certain
classes of attacks. Assurance is thus incomplete, and often limited to well-defined scopes.
SECURITY IS UNOBSERVABLE. In regular software engineering, verification in-
volves testing specific features for the presence of correct outcomes given particular in-
puts. In contrast, security testing would ideally also confirm the absence of exploitable
flaws. This may be called a negative goal, among other types of non-functional goals.
To repeat: we want not only to verify that expected functionality works as planned, but
also that exploitable artifacts are absent. This is not generally possible—aside from the
difficulty of proving properties of software at scale, the universe of potential exploits is
unknown. Traditional functional and feature testing cannot show the absence of problems;
this distinguishes security. Security guarantees may also evaporate due to a small detail
of one component being updated or reconfigured. A system’s security properties are thus
difficult to predict, measure, or see; we cannot observe security itself or demonstrate it,
albeit on observing undesirable outcomes we know it is missing. Sadly, not observing bad
outcomes does not imply security either—bad things that are unobservable could be la-
tent, or be occurring unnoticed. The security of a computer system is not a testable feature,
but rather is said (unhelpfully) to be emergent—resulting from the complex interaction of
elements that compose an entire system.
ASSURANCE IS DIFFICULT, PARTIAL. So then, what happens in practice? Evalua-
tion criteria are altered by experience, and even thorough security testing cannot provide
100% guarantees. In the end, we seek to iteratively improve security policies, and likewise
confidence that protections in place meet security policy and/or requirements. Assurance
of this results from sound design practices, testing for common flaws and known attacks
using available tools, formal modeling of components where suitable, ad hoc analysis, and
heavy reliance on experience. The best lessons often come from attacks and mistakes.
might do likewise. The goal is to minimize the number of interfaces, simplify their
design (to reduce the number of ways they might be abused), minimize external ac-
cess to them, and restrict such access to authorized parties. Importantly, security
mechanisms themselves should not introduce new exploitable attack surfaces.
P2 SAFE-DEFAULTS: Use safe default settings (beware, defaults often go unchanged).
For access control, deny-by-default. Design services to be fail-safe, here meaning
that when they fail, they fail “closed” (e.g., denying access). As a method, use explicit
inclusion via allowlists (goodlists) of authorized entities with all others denied, rather
than exclusion via denylists (badlists) that would allow all those unlisted.
NOTE. A related idea, e.g., for data sent over real-time links, is to encrypt by
default using opportunistic encryption—encrypting session data whenever supported
by the receiving end. In contrast, default encryption is not generally recommended in
all cases for stored data, as the importance of confidentiality must be weighed against
the complexity of long-term key management and the risk of permanent loss of data
if encryption keys are lost; for session data, immediate decryption upon receipt at the
endpoint recovers cleartext.
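A minimal sketch of deny-by-default access control using an allowlist (P2); the user names are illustrative assumptions:

```python
# Deny-by-default sketch (illustrative; the entries are assumptions).
# An allowlist grants access only to explicitly authorized entities; everyone
# else is denied, including entities added to the system after the list was made.

ALLOWED_USERS = {"alice", "bob"}          # explicit inclusion (allowlist)

def is_access_granted(user: str) -> bool:
    # Fail closed: any user not on the allowlist is denied, with no exceptions.
    return user in ALLOWED_USERS

for user in ("alice", "mallory"):
    print(user, "->", "granted" if is_access_granted(user) else "denied")
```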
P3 OPEN-DESIGN: Do not rely on secret designs, attacker ignorance, or security by ob-
scurity. Invite and encourage open review and analysis. For example, the Advanced
Encryption Standard was selected from a set of public candidates by open review;
undisclosed cryptographic algorithms are now widely discouraged. Without contra-
dicting this, leverage unpredictability where advantageous, as arbitrarily publicizing
tactical defense details is rarely beneficial (there is no gain in advertising to thieves
that you are on vacation, or posting house blueprints); and beware exposing error
messages or timing data that vary based on secret values, lest they leak the secrets.
NOTE. P3 is often paired with P9, and follows Kerckhoffs' principle—a system's
security should not rely on the secrecy of its design details.
P4 COMPLETE-MEDIATION: For each access to every object, and ideally immediately
before the access is to be granted, verify proper authority. Verifying authorization
requires authentication (corroboration of an identity), checking that the associated
principal is authorized, and checking that the request has integrity (it must not be
modified after being issued by the legitimate party—cf. P19).
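A minimal sketch of complete mediation (P4); the permission table and helper names are illustrative assumptions:

```python
# Complete-mediation sketch (illustrative; policy and helpers are assumptions).
# Authorization is re-checked on every access, immediately before it is granted,
# rather than cached from an earlier check that may since have been revoked.

PERMISSIONS = {("alice", "payroll.db"): {"read"}}   # (principal, object) -> allowed ops

class AccessDenied(Exception):
    pass

def access(principal: str, obj: str, op: str) -> str:
    # 1. authenticate (elided here), 2. check authorization, 3. perform the access
    if op not in PERMISSIONS.get((principal, obj), set()):
        raise AccessDenied(f"{principal} may not {op} {obj}")
    return f"{op} on {obj} performed for {principal}"

print(access("alice", "payroll.db", "read"))
# access("alice", "payroll.db", "write")  would raise AccessDenied
```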
P5 ISOLATED-COMPARTMENTS: Compartmentalize system components using strong
isolation structures (containers) that manage or prevent cross-component commu-
nication, information leakage, and control. This limits damage when failures occur,
and protects against escalation of privileges (Chapter 6); P6 and P7 have similar mo-
tivations. Restrict authorized cross-component communication to observable paths
with defined interfaces to aid mediation, screening, and use of chokepoints. Exam-
ples of containment means include: process and memory isolation, disk partitions,
virtualization, software guards, zones, gateways and firewalls.
NOTE. Sandbox is a term used for mechanisms offering some form of isolation.
P6 LEAST-PRIVILEGE: Allocate the fewest privileges needed for a task, and for the
shortest duration necessary. For example, retain superuser privileges (Chapter 5)
only for actions requiring them; drop and reacquire privileges if needed later. Do not
use a Unix root account for tasks where regular user privileges suffice. This reduces
exposure, and limits damage from the unexpected. P6 complements P5 and P7.
NOTE. This principle is related to the military need-to-know principle—access to
sensitive information is granted only if essential to carrying out one’s official duties.
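A minimal POSIX sketch of least privilege (P6): acquire a privileged resource, then drop to an unprivileged user; the port and uid/gid values are illustrative assumptions:

```python
# Least-privilege sketch (illustrative; the uid/gid values are assumptions).
# A process that needed root only to bind a privileged port drops to an
# unprivileged user immediately afterwards (POSIX; order matters: gid first).
import os, socket

def bind_privileged_port(port: int = 443) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("0.0.0.0", port))        # typically requires root
    return s

def drop_privileges(uid: int = 1000, gid: int = 1000) -> None:
    os.setgid(gid)                   # drop group first, then user
    os.setuid(uid)                   # after this, root-only actions will fail

if os.geteuid() == 0:
    listener = bind_privileged_port()
    drop_privileges()                # continue serving with regular-user privileges
```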
P7 MODULAR-DESIGN: Avoid monolithic designs that embed full privileges into large
single components; favor object-oriented and finer-grained designs that segregate
privileges (including address spaces) across smaller units or processes. P6 guides
more on the use of privilege frameworks, P7 more on designing base architectures.
NOTE. A related financial accounting principle is separation of duties, whereby co-
dependent tasks are assigned to independent parties so that an insider attack requires
collusion. This also differs from requiring multiple authorizations from distinct par-
ties (e.g., two keys or signatures to open a safety-deposit box or authorize large-
denomination cheques), a generalization of which is thresholding of privileges—
requiring k of t parties (2 ≤ k ≤ t) to authorize an action.
P8 SMALL-TRUSTED-BASES: Strive for small code size in components that must be
trusted, i.e., components on which a larger system strongly depends for security. For
example, high-assurance systems centralize critical security services in a minimal
core operating system microkernel (cf. Chapter 5 end notes), whose smaller size al-
lows efficient concentration of security analysis.
NOTE. A related minimize-secrets principle is: Secrets should be few in number.
Benefits include reducing management complexity. Cryptographic algorithms use a
different form of trust reduction, and separate mechanism from secret; note that a
secret key is changeable at far less cost than the cryptographic algorithm itself.
P9 TIME-TESTED-TOOLS: Rely wherever possible on time-tested, expert-built security
tools including protocols, cryptographic primitives and toolkits, rather than designing
and implementing your own. History shows that security design and implementation
is difficult to get right even for experts; thus amateurs are heavily discouraged (don’t
reinvent a weaker wheel). Confidence increases with the length of time mechanisms
and tools have survived under load (sometimes called soak testing).
NOTE. The underlying reasoning here is that a widely used, heavily scrutinized
mechanism is less likely to retain flaws than many independent, scantly reviewed im-
plementations (cf. P3). Thus using well-known crypto libraries (e.g., OpenSSL) is
encouraged. Less understood is an older least common mechanism principle: mini-
mize the number of mechanisms (shared variables, files, system utilities) shared by
two or more programs and depended on by all. It recognizes that interdependencies
increase risk; code diversity can reduce impacts of single flaws.
P10 LEAST-SURPRISE: Design mechanisms, and their user interfaces, to behave as users
expect. Align designs with users’ mental models of their protection goals, to reduce
user mistakes that compromise security. Especially where errors are irreversible (e.g.,
sending confidential data or secrets to outside parties), tailor to the experience of
target users; beware designs suited to experts but triggering mistakes by ordinary users.
trust from a base point (such as a trust anchor in a browser certificate chain, Chapter
8). More generally, verify trust assumptions where possible, with extra diligence at
registration, initialization, software installation, and starting points in the lifecycle of
a software application, security key or credential.
P18 INDEPENDENT-CONFIRMATION: Use simple, independent (e.g., local device) cross-
checks to increase confidence in code or data, especially if it may arrive from outside
domains or over untrusted channels. Example: integrity of downloaded software ap-
plications or public keys can be confirmed (Chapter 8) by comparing a locally com-
puted cryptographic hash (Chapter 2) of the item to a “known-good” hash obtained
over an independent channel (voice call, text message, widely trusted site).
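A minimal sketch of independent confirmation (P18) using a locally computed SHA-256 hash; the file name and the known-good hash value are placeholders, not real values:

```python
# Independent-confirmation sketch: verify a downloaded file against a
# "known-good" SHA-256 hash obtained over a separate channel (the path and
# hash value below are placeholders).
import hashlib

def sha256_of_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

known_good = "d2c8c4e1..."                 # hash published over an independent channel
actual = sha256_of_file("installer.bin")   # hash of what was actually downloaded
if actual != known_good:
    raise SystemExit("hash mismatch: do not install")
```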
P19 REQUEST-RESPONSE-INTEGRITY: Verify that responses match requests in name res-
olution and other distributed protocols. Their design should include cryptographic
integrity checks that bind steps to each other within a given transaction or protocol
run to detect unrelated or substituted responses; beware protocols lacking authenti-
cation. Example: a certificate request specifying a unique subject name or domain
expects in response a certificate for that subject; this field in the response certificate
should be cross-checked against the request.
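A minimal sketch of request-response binding (P19); the message fields are illustrative assumptions, and cryptographic integrity checks are omitted:

```python
# Request-response-integrity sketch (message fields are illustrative assumptions).
# The client checks that a response echoes the transaction ID it generated and
# refers to the subject it asked about, rejecting unrelated or substituted replies.
import secrets

def make_request(subject: str) -> dict:
    return {"txid": secrets.token_hex(16), "subject": subject}

def response_matches(request: dict, response: dict) -> bool:
    return (response.get("txid") == request["txid"]
            and response.get("subject") == request["subject"])

req = make_request("www.example.com")
resp = {"txid": req["txid"], "subject": "www.example.com", "certificate": "..."}
assert response_matches(req, resp)    # cryptographic checks would also be applied
```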
P20 RELUCTANT-ALLOCATION: Be reluctant to allocate resources or expend effort in
interactions with unauthenticated, external agents. For services open to all parties,
design to mitigate intentional resource consumption. Place a higher burden of effort
on agents that initiate an interaction. (A party initiating a phone call should not be
the one to demand: Who are you? When possible, authenticate. Related: processes
should beware abuse as a conduit extending their privileges to unverified agents.)
NOTE. P20 largely aims to avoid denial of service attacks (Chapter 11). P3 also
recommends reluctance, in that case to releasing system data useful to attackers.
We also include two higher-level principles and a maxim.
HP1 SECURITY-BY-DESIGN: Build security in, starting at the initial design stage of a
development cycle, since secure design often requires core architectural support ab-
sent if security is a late-stage add-on. Explicitly state the design goals of security
mechanisms and what they are not designed to do, since it is impossible to evaluate
effectiveness without knowing goals. In design and analysis documents, explicitly
state all security-related assumptions, especially related to trust and trusted parties
(supporting P17); note that a security policy itself might not specify assumptions.
HP2 DESIGN-FOR-EVOLUTION: Design base architectures, mechanisms, and protocols
to support evolution, including algorithm agility for graceful upgrades of crypto
algorithms (e.g., encryption, hashing) with minimal impact on related components.
Support automated secure software update where possible. Regularly re-evaluate
the effectiveness of security mechanisms, in light of evolving threats, technology,
and architectures; be ready to update designs and products as needed.
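One simple way to support algorithm agility (HP2) is to store an algorithm identifier with each digest; the "algorithm$hexdigest" format below is an illustrative choice, not a recommendation from the text:

```python
# Algorithm-agility sketch (the "algorithm$hexdigest" format is illustrative).
# Storing the algorithm name with each digest lets verification keep working
# while new records migrate to a stronger hash function.
import hashlib

def make_digest(data: bytes, algorithm: str = "sha256") -> str:
    return f"{algorithm}${hashlib.new(algorithm, data).hexdigest()}"

def verify_digest(data: bytes, stored: str) -> bool:
    algorithm, _, digest = stored.partition("$")
    return hashlib.new(algorithm, data).hexdigest() == digest

old_record = make_digest(b"message", "sha1")     # legacy record, still verifiable
new_record = make_digest(b"message")             # new records use SHA-256
assert verify_digest(b"message", old_record) and verify_digest(b"message", new_record)
```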
VERIFY FIRST. The diplomatic maxim “trust but verify” suggests that given assertions
by foreign diplomats whom you don’t actually trust, one should feign trust while silently
cross-checking for yourself. In computer security, the rule is: verify first (before trust-
ing). Design principles related to this idea include: COMPLETE-MEDIATION (P4), DATA-TYPE-VALIDATION (P15), TRUST-ANCHOR-JUSTIFICATION (P17), and INDEPENDENT-CONFIRMATION (P18).
10. market economics and stakeholders: market forces often hinder allocations that im-
prove security, e.g., stakeholders in a position to improve security, or who would bear
the cost of deploying improvements, may not be those who would gain benefit.
11. features beat security: while it is well accepted that complexity is the enemy of secu-
rity (cf. P1), little market exists for simpler products with reduced functionality.
12. low cost beats quality: low-cost low-security wins in “market for lemons” scenar-
ios where to buyers, high-quality software is indistinguishable from low (other than
costing more); and when software sold has no liability for consequential damages.
13. missing context of danger and losses: cyberspace lacks real-world context cues and
danger signals to guide user behavior, and consequences of security breaches are often
not immediately visible nor linkable to the cause (i.e., the breach itself).
14. managing secrets is difficult: core security mechanisms often rely on secrets (e.g.,
crypto keys and passwords), whose proper management is notoriously difficult and
costly, due to the nature of software systems and human factors.
15. user non-compliance (human factors): users bypass or undermine computer security
mechanisms that impose inconveniences without visible direct benefits (in contrast:
physical door locks are also inconvenient, but benefits are understood).
16. error-inducing design (human factors): it is hard to design security mechanisms
whose interfaces are intuitive to learn, distinguishable from interfaces presented by
attackers, induce the desired human actions, and resist social engineering.
17. non-expert users (human factors): whereas users of early computers were technical
experts or given specialized training under enterprise policies, today many are non-
experts without formal training or any technical computer background.
18. security not designed in: security was not an original design goal of the Internet
or computers in general, and retro-fitting it as an add-on feature is costly and often
impossible without major redesign (see principle HP1).
19. introducing new exposures: the deployment of a protection mechanism may itself
introduce new vulnerabilities or attack vectors.
20. government obstacles: government desire for access to data and communications (e.g., to monitor criminals, or spy on citizens and other countries), and resulting policies, hinder sound protection practices such as strong encryption by default.
We end by noting that this is but a partial list! Rather than being depressed by this, as op-
timists we see a great opportunity—in the many difficulties that complicate computer se-
curity, and in technology trends suggesting challenges ahead as critical dependence on the
Internet and its underlying software deepens. Both emphasize the importance of under-
standing what can go wrong when we combine people, computing and communications
devices, and software-hardware systems.
We use computers and mobile devices every day to work, communicate, gather in-
formation, make purchases, and plan travel. Our cars rely on software systems—as do
our airplanes. (Does this worry you? What if the software is wirelessly updated, and the
source of updates is not properly authenticated?) The business world comes to a standstill
when Internet service is disrupted. Our critical infrastructure, from power plants and elec-
tricity grids to water supply and financial systems, is dependent on computer hardware,
software and the Internet. Implicitly we expect, and need, security and dependability.
Perhaps the strongest motivation for individual students to learn computer security
(and for parents and friends to encourage them to do so) is this: security expertise may
be today’s very best job-for-life ticket, as well as tomorrow’s. It is highly unlikely that
software and the Internet itself will disappear, and just as unlikely that computer security
problems will disappear. But beyond employment for a lucky subset of the population,
having a more reliable, trustworthy Internet is in the best interest of society as a whole.
The more we understand about the security of computers and the Internet, the safer we
can make them, and thereby contribute to a better world.
‡The double-dagger symbol denotes sections that may be skipped on first reading, or by instructors using
the book for time-constrained courses.
References (Chapter 1)
[1] G. A. Akerlof. The market for “lemons”: Quality uncertainty and the market mechanism. The Quarterly
Journal of Economics, 84(3):488–500, August 1970.
[2] E. Amoroso. Fundamentals of Computer Security Technology. Prentice Hall, 1994. Includes author’s
list of 25 Greatest Works in Computer Security.
[3] A. Avizienis, J.-C. Laprie, B. Randell, and C. E. Landwehr. Basic concepts and taxonomy of dependable and secure computing. IEEE Trans. Dependable and Secure Computing, 1(1):11–33, 2004.
[4] R. G. Bace. Intrusion Detection. Macmillan, 2000.
[5] R. W. Baldwin. Rule Based Analysis of Computer Security. PhD thesis, MIT, Cambridge, MA, June
1987. Describes security checkers called Kuang systems, and in particular one built for Unix.
[6] D. Basin, P. Schiller, and M. Schläpfer. Applied Information Security. Springer, 2011.
[7] D. Gollmann. Computer Security (3rd edition). John Wiley, 2011.
[8] M. Howard and D. LeBlanc. Writing Secure Code (2nd edition). Microsoft Press, 2002.
[9] A. Jaquith. Security Metrics: Replacing Fear, Uncertainty, and Doubt. Addison-Wesley, 2007.
[10] B. Kordy, S. Mauw, S. Radomirovic, and P. Schweitzer. Foundations of attack-defense trees. In Formal
Aspects in Security and Trust 2010, pages 80–95. Springer LNCS 6561 (2011).
[11] J. Lowry, R. Valdez, and B. Wood. Adversary modeling to develop forensic observables. In Digital
Forensics Research Workshop (DFRWS), 2004.
[12] S. Mauw and M. Oostdijk. Foundations of attack trees. In Information Security and Cryptology (ICISC
2005), pages 186–198. Springer LNCS 3935 (2006).
[13] NIST. Special Pub 800-30 rev 1: Guide for Conducting Risk Assessments. U.S. Dept. of Commerce,
September 2012.
[14] D. B. Parker. Risks of risk-based security. Comm. ACM, 50(3):120–120, March 2007.
[15] C. P. Pfleeger and S. L. Pfleeger. Security in Computing (4th edition). Prentice Hall, 2006.
[16] E. Rescorla. SSL and TLS: Designing and Building Secure Systems. Addison-Wesley, 2001.
[17] J. H. Saltzer and M. F. Kaashoek. Principles of Computer System Design. Morgan Kaufmann, 2010.
[18] J. H. Saltzer and M. D. Schroeder. The protection of information in computer systems. Proceedings of
the IEEE, 63(9):1278–1308, September 1975.
[19] A. Shostack. Threat Modeling: Designing for Security. John Wiley and Sons, 2014.
[20] R. E. Smith. A contemporary look at Saltzer and Schroeder’s 1975 design principles. IEEE Security &
Privacy, 10(6):20–25, 2012.
[21] W. Stallings and L. Brown. Computer Security: Principles and Practice (3rd edition). Pearson, 2015.