

UNIT 5

1 PRETTY GOOD PRIVACY (PGP)

PGP provides confidentiality and authentication services that can be used for electronic mail and file storage applications. The steps involved in PGP are:

Select the best available cryptographic algorithms as building blocks.

Integrate these algorithms into a general-purpose application that is independent of operating system and processor and that is based on a small set of easy-to-use commands.

Make the package and its documentation, including the source code, freely available via the Internet, bulletin boards, and commercial networks.

Enter into an agreement with a company to provide a fully compatible, low-cost commercial version of PGP.

PGP has grown explosively and is now widely used. A number of reasons
can be cited for this growth.

It is available free worldwide in versions that run on a variety of platforms.


It is based on algorithms that have survived extensive public review and are considered extremely secure, e.g., RSA, DSS, and Diffie-Hellman for public-key encryption; CAST-128, IDEA, and 3DES for conventional encryption; and SHA-1 for hash coding.
It has a wide range of applicability.
It was not developed by, nor is it controlled by, any governmental or standards organization.

Operational description
The actual operation of PGP consists of five services: authentication,
confidentiality, compression, e-mail compatibility and segmentation.
1. Authentication

The sequence for authentication is as follows:

§ The sender creates the message

§ SHA-1 is used to generate a 160-bit hash code of the message

§ The hash code is encrypted with RSA using the sender's private key and the result is prepended to the message

§ The receiver uses RSA with the sender's public key to decrypt and recover the hash code.

The receiver generates a new hash code for the message and compares it with the decrypted hash code. If the two match, the message is accepted as authentic.
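The sequence above can be sketched in Python. This is a toy illustration only: the known Mersenne primes below stand in for a real RSA key pair, and bare modular exponentiation stands in for full RSA with padding (requires Python 3.8+ for the modular-inverse form of pow).

```python
import hashlib

# Toy sketch of PGP's authentication service: SHA-1 digest "encrypted"
# with the sender's RSA private key. Illustrative keys, not real ones.
p = 2**89 - 1          # known Mersenne prime M89 (toy value)
q = 2**107 - 1         # known Mersenne prime M107 (toy value)
n = p * q              # modulus > 2**160, so a SHA-1 digest fits
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def sign(message: bytes) -> int:
    """Sender: hash with SHA-1, then encrypt the digest with the private key."""
    h = int.from_bytes(hashlib.sha1(message).digest(), 'big')
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Receiver: recover the digest with the public key and compare."""
    h = int.from_bytes(hashlib.sha1(message).digest(), 'big')
    return pow(signature, e, n) == h

msg = b"PGP provides authentication"
sig = sign(msg)
print(verify(msg, sig))            # True
print(verify(b"tampered", sig))    # False
```

Note that the signature is computed over the message digest, not the message itself, which is why it can simply be prepended to the plaintext.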

2. Confidentiality
Confidentiality is provided by encrypting messages to be transmitted or to be
stored locally as files. In both cases, the conventional encryption algorithm
CAST-128 may be used. The 64-bit cipher feedback (CFB) mode is used.

In PGP, each conventional key is used only once. That is, a new key is generated as a random 128-bit number for each message. Thus, although this is referred to as a session key, it is in reality a one-time key. To protect the key, it is encrypted with the receiver's public key.

The sequence for confidentiality is as follows:

The sender generates a message and a random 128-bit number to be used as a session key for this message only.

The message is encrypted using CAST-128 with the session key.

The session key is encrypted with RSA, using the receiver's public key, and is prepended to the message.

The receiver uses RSA with its private key to decrypt and recover the session key. The session key is used to decrypt the message.
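The hybrid scheme above can be sketched as follows. CAST-128 is not in the Python standard library, so a SHA-1-based XOR keystream stands in for it here, and a toy Mersenne-prime RSA key stands in for the receiver's real key pair; only the overall flow matches PGP.

```python
import os, hashlib

# Sketch of PGP's confidentiality service: one-time session key,
# symmetric message encryption, RSA-wrapped key. Toy primitives only.
p, q = 2**89 - 1, 2**107 - 1       # known Mersenne primes (toy key)
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Stand-in symmetric cipher: XOR with a SHA-1 counter keystream."""
    out = bytearray()
    for block in range((len(data) + 19) // 20):
        pad = hashlib.sha1(key + block.to_bytes(8, 'big')).digest()
        chunk = data[block * 20:(block + 1) * 20]
        out.extend(b ^ s for b, s in zip(chunk, pad))
    return bytes(out)

# Sender: fresh 128-bit one-time session key for this message only.
session_key = os.urandom(16)
ciphertext = keystream_xor(session_key, b"the plaintext message")
wrapped_key = pow(int.from_bytes(session_key, 'big'), e, n)  # RSA-wrap key

# Receiver: unwrap the session key with the private key, then decrypt.
recovered = pow(wrapped_key, d, n).to_bytes(16, 'big')
print(keystream_xor(recovered, ciphertext))   # b'the plaintext message'
```

The key point the sketch shows is that only the short session key is ever processed with RSA; the bulk message is handled by the fast conventional cipher.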
Confidentiality and authentication

Here both services may be used for the same message. First, a signature is
generated for the plaintext message and prepended to the message. Then the
plaintext plus the signature is encrypted using CAST-128 and the session key
is encrypted using RSA.

3. Compression

As a default, PGP compresses the message after applying the signature but before encryption. This has the benefit of saving space both for e-mail transmission and for file storage.

The signature is generated before compression for two reasons:

It is preferable to sign an uncompressed message so that one can store only the uncompressed message together with the signature for future verification. If one signed a compressed document, then it would be necessary either to store a compressed version of the message for later verification or to recompress the message when verification is required.

Even if one were willing to generate dynamically a recompressed message for verification, PGP's compression algorithm presents a difficulty. The algorithm is not deterministic; various implementations of the algorithm achieve different tradeoffs in running speed versus compression ratio and, as a result, produce different compressed forms.

Message encryption is applied after compression to strengthen cryptographic security. Because the compressed message has less redundancy than the original plaintext, cryptanalysis is more difficult. The compression algorithm used is ZIP.

4. E-mail compatibility

Many electronic mail systems only permit the use of blocks consisting of ASCII text. To accommodate this restriction, PGP provides the service of converting the raw 8-bit binary stream to a stream of printable ASCII characters. The scheme used for this purpose is radix-64 conversion. Each group of three octets of binary data is mapped into four ASCII characters.

e.g., consider the 24-bit (3-octet) raw text sequence 00100011 01011100 10010001; we can express this input in blocks of 6 bits to produce 4 ASCII characters.
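The worked example above can be checked with Python's standard base64 module (radix-64 and base64 use the same alphabet):

```python
import base64

# The three octets of the example: 00100011 01011100 10010001.
octets = bytes([0b00100011, 0b01011100, 0b10010001])
print(base64.b64encode(octets).decode())   # I1yR

# The four 6-bit groups behind that output (indices into the
# radix-64 alphabet A..Z a..z 0..9 + /):
bits = int.from_bytes(octets, 'big')
groups = [(bits >> shift) & 0b111111 for shift in (18, 12, 6, 0)]
print(groups)   # [8, 53, 50, 17]
```

Group values 8, 53, 50, 17 map to 'I', '1', 'y', 'R' in the radix-64 alphabet, so the 3 binary octets become the 4 printable characters "I1yR".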

5. Segmentation and reassembly

E-mail facilities often are restricted to a maximum message length. E.g., many of the facilities accessible through the Internet impose a maximum length of 50,000 octets. Any message longer than that must be broken up into smaller segments, each of which is mailed separately.

To accommodate this restriction, PGP automatically subdivides a message that is too large into segments that are small enough to send via e-mail. The segmentation is done after all the other processing, including the radix-64 conversion. At the receiving end, PGP must strip off all e-mail headers and reassemble the entire original block before performing the other steps.
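The segmentation step can be sketched as a simple split-and-join; real PGP also handles the per-segment e-mail headers, which this sketch omits. The 50,000-octet limit is the example figure quoted above.

```python
# Sketch of PGP segmentation: split the already radix-64-converted
# message into pieces no larger than the mail system's limit.
MAX_SEGMENT = 50_000

def segment(radix64_text: str, limit: int = MAX_SEGMENT) -> list[str]:
    return [radix64_text[i:i + limit]
            for i in range(0, len(radix64_text), limit)]

def reassemble(segments: list[str]) -> str:
    """Receiver: concatenate segments back into the original block."""
    return ''.join(segments)

big = 'A' * 120_000
parts = segment(big)
print(len(parts))                  # 3
print(reassemble(parts) == big)    # True
```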
2. PGP Operation Summary:

Cryptographic keys and key rings

Three separate requirements can be identified with respect to these keys:

A means of generating unpredictable session keys is needed.

It must allow a user to have multiple public key/private key pairs.

Each PGP entity must maintain a file of its own public/private key pairs as well as a file of public keys of correspondents.

We now examine each of the requirements in turn.

1. Session key generation

Each session key is associated with a single message and is used only for the purpose of encryption and decryption of that message. Random 128-bit numbers are generated using CAST-128 itself. The input to the random number generator consists of a 128-bit key and two 64-bit blocks that are treated as plaintext to be encrypted. Using cipher feedback mode, CAST-128 produces two 64-bit ciphertext blocks, which are concatenated to form the 128-bit session key. The plaintext input to CAST-128 is itself derived from a stream of 128-bit randomized numbers. These numbers are based on the keystroke input from the user.

2. Key identifiers

If multiple public/private key pairs are used, then how does the recipient know which of the public keys was used to encrypt the session key? One simple solution would be to transmit the public key with the message, but this is unnecessarily wasteful of space. Another solution would be to associate an identifier with each public key that is unique at least within each user.

The solution adopted by PGP is to assign a key ID to each public key that is, with very high probability, unique within a user ID. The key ID associated with each public key consists of its least significant 64 bits; i.e., the key ID of public key KUa is (KUa mod 2^64).

A PGP message consists of three components.

Message component – includes the actual data to be transmitted, as well as the filename and a timestamp that specifies the time of creation.

Signature component – includes the following:

Timestamp – time at which the signature was made.

Message digest – hash code.

Two octets of message digest – to enable the recipient to determine if the correct public key was used to decrypt the message.

Key ID of sender's public key – identifies the public key.

Session key component – includes the session key and the identifier of the recipient's public key.
3. Key rings

PGP provides a pair of data structures at each node, one to store the public/private key pairs owned by that node and one to store the public keys of other users known at that node. These data structures are referred to as the private key ring and the public key ring.

The general structures of the private and public key rings are shown below:

Timestamp – the date/time when this entry was made.

Key ID – the least significant 64 bits of the public key.

Public key – public key portion of the pair.

Private key – private key portion of the pair.

User ID – the owner of the key.

Key legitimacy field – indicates the extent to which PGP will trust that this is a valid public key for this user.

Fig. 4.5.3.1 General Structure of Private and Public Key Rings

Signature trust field – indicates the degree to which this PGP user trusts the signer to certify public keys.

Owner trust field – indicates the degree to which this public key is trusted to sign other public-key certificates.

PGP message generation

First consider message transmission and assume that the message is to be both signed and encrypted. The sending PGP entity performs the following steps:
1. Signing the message

PGP retrieves the sender's private key from the private key ring using the user ID as an index. If no user ID was provided, the first private key from the ring is retrieved.

PGP prompts the user for the passphrase (password) to recover the unencrypted private key.

The signature component of the message is constructed.

2. Encrypting the message

PGP generates a session key and encrypts the message.

PGP retrieves the recipient's public key from the public key ring using the user ID as an index.

The session key component of the message is constructed.

The receiving PGP entity performs the following steps:

1. Decrypting the message

PGP retrieves the receiver's private key from the private key ring, using the key ID field in the session key component of the message as an index.

PGP prompts the user for the passphrase (password) to recover the unencrypted private key.

PGP then recovers the session key and decrypts the message.

2. Authenticating the message

PGP retrieves the sender's public key from the public key ring, using the key ID field in the signature component of the message as an index.

PGP recovers the transmitted message digest.

PGP computes the message digest for the received message and compares it to the transmitted message digest to authenticate.
S/MIME

S/MIME (Secure/Multipurpose Internet Mail Extension) is a security enhancement to the MIME Internet e-mail format standard, based on technology from RSA Data Security. S/MIME is defined in a number of documents, most importantly RFCs 3369, 3370, 3850 and 3851.

1. Multipurpose Internet Mail Extensions

MIME is an extension to the RFC 822 framework that is intended to address some of the problems and limitations of the use of SMTP (Simple Mail Transfer Protocol) or some other mail transfer protocol and RFC 822 for electronic mail. The following are the limitations of the SMTP/822 scheme:

1. SMTP cannot transmit executable files or other binary objects.

2. SMTP cannot transmit text data that includes national language characters
because these are represented by 8-bit codes with values of 128 decimal or
higher, and SMTP is limited to 7-bit ASCII.

3. SMTP servers may reject mail messages over a certain size.

4. SMTP gateways that translate between ASCII and the character code EBCDIC
do not use a consistent set of mappings, resulting in translation problems.

5. SMTP gateways to X.400 electronic mail networks cannot handle nontextual data
included in
X.400 messages.

6. Some SMTP implementations do not adhere completely to the SMTP standards defined in RFC 821. Common problems include:

· Deletion, addition, or reordering of carriage return and linefeed
· Truncating or wrapping lines longer than 76 characters
· Removal of trailing white space (tab and space characters)
· Padding of lines in a message to the same length
· Conversion of tab characters into multiple space characters

MIME is intended to resolve these problems in a manner that is compatible with existing RFC 822 implementations. The specification is provided in RFCs 2045 through 2049.

2. Overview
The MIME specification includes the following elements:
1. Five new message header fields are defined, which may be included in an
RFC 822 header. These fields provide information about the body of the
message.

2. A number of content formats are defined, thus standardizing representations that support multimedia electronic mail.

3. Transfer encodings are defined that enable the conversion of any content
format into a form that is protected from alteration by the mail system.

In this subsection, we introduce the five message header fields. The next two
subsections deal with content formats and transfer encodings.

3. The five header fields defined in MIME are as follows:

MIME-Version: Must have the parameter value 1.0. This field indicates that the message conforms to RFCs 2045 and 2046.

Content-Type: Describes the data contained in the body with sufficient detail.

Content-Transfer-Encoding: Indicates the type of transformation that has been used to represent the body of the message in a way that is acceptable for mail transport.

Content-ID: Used to identify MIME entities uniquely in multiple contexts.

Content-Description: A text description of the object within the body; this is useful when the object is not readable (e.g., audio data).
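The header fields can be seen in practice with Python's standard email package, which fills in MIME-Version, Content-Type, and Content-Transfer-Encoding automatically; the Content-ID and Content-Description values below are made-up illustrative entries.

```python
from email.mime.text import MIMEText

# A minimal MIME message; the email package supplies the mandatory
# header fields for a text/plain body.
msg = MIMEText("Hello, MIME world!", "plain", "us-ascii")
msg["Content-ID"] = "<part1@example.invalid>"      # illustrative value
msg["Content-Description"] = "A short greeting"    # illustrative value

print(msg["MIME-Version"])                # 1.0
print(msg["Content-Type"])                # e.g. text/plain; charset="us-ascii"
print(msg["Content-Transfer-Encoding"])   # 7bit
```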

4.MIME Content Types

The bulk of the MIME specification is concerned with the definition of a variety
of content types. This reflects the need to provide standardized ways of dealing
with a wide variety of information representations in a multimedia
environment.

The following lists the content types specified in RFC 2046. There are seven different major types of content and a total of 15 subtypes.
For the text type of body, no special software is required to get the full meaning
of the text, aside from support of the indicated character set. The primary
subtype is plain text, which is simply a string of ASCII characters or ISO 8859
characters. The enriched subtype allows greater formatting flexibility. The
multipart type indicates that the body contains multiple, independent parts.
The Content-Type header field includes a parameter, called boundary, that
defines the delimiter between body parts.

The multipart/digest subtype is used when each of the body parts is interpreted as an RFC 822 message with headers. This subtype enables the construction of a message whose parts are individual messages. For example, the moderator of a group might collect e-mail messages from participants, bundle these messages, and send them out in one encapsulating MIME message.
The message type provides a number of important capabilities in MIME.
The message/rfc822 subtype indicates that the body is an entire
message, including header and body. Despite the name of this subtype,
the encapsulated message may be not only a simple RFC 822 message,
but also any MIME message.
The message/partial subtype enables fragmentation of a large message
into a number of parts, which must be reassembled at the destination.
For this subtype, three parameters are specified in the Content-Type:
Message/Partial field: an id common to all fragments of the same
message, a sequence number unique to each fragment, and the total
number of fragments.

The message/external-body subtype indicates that the actual data to be conveyed in this message are not contained in the body. Instead, the body contains the information needed to access the data. As with the other message types, the message/external-body subtype has an outer header and an encapsulated message with its own header. The only necessary field in the outer header is the Content-Type field, which identifies this as a message/external-body subtype. The inner header is the message header for the encapsulated message. The Content-Type field in the outer header must include an access-type parameter, which indicates the method of access, such as FTP (file transfer protocol).

The application type refers to other kinds of data, typically either uninterpreted binary data or information to be processed by a mail-based application.

5. MIME Transfer Encodings

The other major component of the MIME specification, in addition to content type specification, is a definition of transfer encodings for message bodies. The objective is to provide reliable delivery across the largest range of environments.

The MIME standard defines two methods of encoding data. The Content-Transfer-Encoding field can actually take on six values. For SMTP transfer, it is safe to use the 7bit form. The 8bit and binary forms may be usable in other mail transport contexts. Another Content-Transfer-Encoding value is x-token, which indicates that some other encoding scheme is used, for which a name is to be supplied. The two actual encoding schemes defined are quoted-printable and base64.
The quoted-printable transfer encoding is useful when the data consists
largely of octets that correspond to printable ASCII characters. In
essence, it represents nonsafe characters by the hexadecimal
representation of their code and introduces reversible (soft) line breaks to
limit message lines to 76 characters.
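Both encoding schemes are available in the Python standard library, which makes the contrast easy to demonstrate: quoted-printable keeps mostly-ASCII text readable, escaping each unsafe octet as =XX, while base64 handles arbitrary binary data.

```python
import quopri, base64

# quoted-printable: the non-ASCII octet 0xE9 becomes the escape =E9.
text = "caf\u00e9".encode("latin-1")        # b'caf\xe9'
qp = quopri.encodestring(text)
print(qp)                                    # contains b'=E9'
print(quopri.decodestring(qp) == text)       # True

# base64: arbitrary binary octets survive 7-bit mail transport.
binary = bytes(range(8))
b64 = base64.b64encode(binary)
print(base64.b64decode(b64) == binary)       # True
```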

The base64 transfer encoding, also known as radix-64 encoding, is a common one for encoding arbitrary binary data in such a way as to be invulnerable to the processing performed by mail transport programs.

Canonical Form

An important concept in MIME and S/MIME is that of canonical form. Canonical form is a format, appropriate to the content type, that is standardized for use between systems. This is in contrast to native form, which is a format that may be peculiar to a particular system.

INTRUDERS

One of the most publicized attacks on security is the intruder, generally referred to as a hacker or cracker. Three classes of intruders are as follows:

· Masquerader – an individual who is not authorized to use the computer and who penetrates a system's access controls to exploit a legitimate user's account.

· Misfeasor – a legitimate user who accesses data, programs, or resources for which such access is not authorized, or who is authorized for such access but misuses his or her privileges.

· Clandestine user – an individual who seizes supervisory control of the system and uses this control to evade auditing and access controls or to suppress audit collection.

The masquerader is likely to be an outsider; the misfeasor generally is an insider; and the clandestine user can be either an outsider or an insider.

Intruder attacks range from the benign to the serious. At the benign end
of the scale, there are many people who simply wish to explore internets
and see what is out there. At the serious end are individuals who are
attempting to read privileged data, perform unauthorized modifications
to data, or disrupt the system. Benign intruders might be tolerable,
although they do consume resources and may slow performance for
legitimate users. However, there is no way to know in advance whether an intruder will be benign or malign.

An analysis of previous attacks revealed that there were two levels of hackers:

· The high levels were sophisticated users with a thorough knowledge of the technology.

· The low levels were the 'foot soldiers' who merely use the supplied cracking programs with little understanding of how they work.

One of the results of the growing awareness of the intruder problem has been the establishment of a number of Computer Emergency Response Teams (CERTs). These cooperative ventures collect information about system vulnerabilities and disseminate it to system managers. Unfortunately, hackers can also gain access to CERT reports.

In addition to running password cracking programs, the intruders attempted to modify login software to enable them to capture passwords of users logging onto the systems.
Intrusion techniques

The objective of the intruders is to gain access to a system or to increase the range of privileges accessible on a system. Generally, this requires the intruders to acquire information that should be protected. In most cases, the information is in the form of a user password.

Typically, a system must maintain a file that associates a password with each authorized user. If such a file is stored with no protection, then it is an easy matter to gain access to it. The password file can be protected in one of two ways:

· One-way encryption – the system stores only an encrypted form of the user's password. In practice, the system usually performs a one-way transformation (not reversible) in which the password is used to generate a key for the encryption function and in which a fixed-length output is produced.

· Access control – access to the password file is limited to one or a very few accounts.
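The one-way-encryption option can be sketched with a salted, deliberately slow hash. PBKDF2 here stands in for whatever one-way transformation a given system actually uses; the iteration count is an arbitrary illustrative choice.

```python
import hashlib, hmac, os

# One-way password protection: store only (salt, hash), never the password.
def store(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)   # constant-time compare

salt, digest = store("correct horse")
print(check("correct horse", salt, digest))   # True
print(check("Tr0ub4dor&3", salt, digest))     # False
```

Because the transformation is not reversible, even an attacker who reads the password file must still guess candidate passwords, which is exactly what the cracking techniques listed below attempt.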

The following techniques are used for learning passwords.

· Try default passwords used with standard accounts that are shipped with the
system.
Many administrators do not bother to change these defaults.

· Exhaustively try all short passwords.

· Try words in the system's online dictionary or a list of likely passwords.

· Collect information about users such as their full names, the name of
their spouse and children, pictures in their office and books in their office
that are related to hobbies.

· Try users' phone numbers, social security numbers, and room numbers.

· Try all legitimate license plate numbers.

· Use a Trojan horse to bypass restrictions on access.

· Tap the line between a remote user and the host system.

Two principal countermeasures:

Detection – concerned with learning of an attack, either before or after its success.

Prevention – a challenging security goal and an uphill battle at all times.
INTRUSION DETECTION:

Inevitably, the best intrusion prevention system will fail. A system's second line of defense is intrusion detection, and this has been the focus of much research in recent years. This interest is motivated by a number of considerations, including the following:

· If an intrusion is detected quickly enough, the intruder can be identified and ejected from the system before any damage is done or any data are compromised.

· An effective intrusion detection system can serve as a deterrent, so acting to prevent intrusions.

· Intrusion detection enables the collection of information about intrusion techniques that can be used to strengthen the intrusion prevention facility.

Intrusion detection is based on the assumption that the behavior of the intruder differs from that of a legitimate user in ways that can be quantified.

Figure 5.2.1 suggests, in very abstract terms, the nature of the task
confronting the designer of an intrusion detection system. Although the typical
behavior of an intruder differs from the typical behavior of an authorized user,
there is an overlap in these behaviors. Thus, a loose interpretation of intruder
behavior, which will catch more intruders, will also lead to a number of "false
positives," or authorized users identified as intruders. On the other hand,
an attempt to limit false positives by a tight interpretation of intruder behavior
will lead to an increase in false negatives, or intruders not identified as
intruders. Thus, there is an element of compromise and art in the practice of
intrusion detection.
1. The approaches to intrusion detection:

Statistical anomaly detection: Involves the collection of data relating to the behavior of legitimate users over a period of time. Then statistical tests are applied to observed behavior to determine with a high level of confidence whether that behavior is not legitimate user behavior.

Threshold detection: This approach involves defining thresholds, independent of user, for the frequency of occurrence of various events.

Profile based: A profile of the activity of each user is developed and used to detect changes in the behavior of individual accounts.

Rule-based detection: Involves an attempt to define a set of rules that can be used to decide that a given behavior is that of an intruder.

Anomaly detection: Rules are developed to detect deviation from previous usage patterns.

Penetration identification: An expert system approach that searches for suspicious behavior.

In terms of the types of attackers listed earlier, statistical anomaly detection is effective against masqueraders. On the other hand, such techniques may be unable to deal with misfeasors. For such attacks, rule-based approaches may be able to recognize events and sequences that, in context, reveal penetration. In practice, a system may exhibit a combination of both approaches to be effective against a broad range of attacks.

Audit Records

A fundamental tool for intrusion detection is the audit record. Some record of
ongoing activity by users must be maintained as input to an intrusion
detection system. Basically, two plans are used:

· Native audit records: Virtually all multiuser operating systems include accounting software that collects information on user activity. The advantage of using this information is that no additional collection software is needed. The disadvantage is that the native audit records may not contain the needed information or may not contain it in a convenient form.
· Detection-specific audit records: A collection facility can be
implemented that generates audit records containing only that information
required by the intrusion
detection system. One advantage of such an approach is that it could be made
vendor independent and ported to a variety of systems. The disadvantage is the
extra overhead involved in having, in effect, two accounting packages running
on a machine.

Each audit record contains the following fields:

· Subject: Initiators of actions. A subject is typically a terminal user but might also be a process acting on behalf of users or groups of users.

· Object: Receptors of actions. Examples include files, programs, messages, records, terminals, printers, and user- or program-created structures.

· Resource-Usage: A list of quantitative elements in which each element gives the amount used of some resource (e.g., number of lines printed or displayed, number of records read or written, processor time, I/O units used, session elapsed time).

· Time-Stamp: Unique time-and-date stamp identifying when the action took place.

Most user operations are made up of a number of elementary actions. For example, a file copy involves the execution of the user command, which includes doing access validation and setting up the copy, plus the read from one file, plus the write to another file. Consider the command

COPY GAME.EXE TO <Library>GAME.EXE

issued by Smith to copy an executable file GAME from the current directory to
the <Library> directory. The following audit records may be generated:

In this case, the copy is aborted because Smith does not have write permission to
<Library>. The decomposition of a user operation into elementary actions has three
advantages:
Because objects are the protectable entities in a system, the use of elementary actions enables an audit of all behavior affecting an object. Thus, the system can detect attempted subversions of access controls.

Single-object, single-action audit records simplify the model and the implementation.

Because of the simple, uniform structure of the detection-specific audit records, it may be relatively easy to obtain this information, or at least part of it, by a straightforward mapping from existing native audit records to the detection-specific audit records.

1.1 Statistical Anomaly Detection:

As was mentioned, statistical anomaly detection techniques fall into two broad
categories: threshold detection and profile-based systems. Threshold
detection involves counting the number of occurrences of a specific event
type over an interval of time. If the count surpasses what is considered a
reasonable number that one might expect to occur, then intrusion is assumed.

Threshold analysis, by itself, is a crude and ineffective detector of even moderately sophisticated attacks. Both the threshold and the time interval must be determined.
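The threshold-detection idea can be sketched as a count of one event type inside a sliding time window; the threshold and window length below are arbitrary illustrative values of exactly the two parameters the text says must be determined.

```python
from collections import deque

# Threshold detection sketch: alert when more than `threshold` events
# of one type occur within `window_seconds`.
def make_detector(threshold: int, window_seconds: float):
    events = deque()
    def record(event_time: float) -> bool:
        events.append(event_time)
        # Drop events that have fallen out of the window.
        while events and events[0] <= event_time - window_seconds:
            events.popleft()
        return len(events) > threshold      # True => possible intrusion
    return record

failed_login = make_detector(threshold=3, window_seconds=60.0)
times = [0, 5, 10, 15, 200]
print([failed_login(t) for t in times])   # [False, False, False, True, False]
```

The fourth event lands inside the 60-second window and pushes the count past the threshold; the fifth arrives after the window has emptied, so no alert is raised.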

1.2 Profile-based anomaly detection focuses on characterizing the past behavior of individual users or related groups of users and then detecting significant deviations. A profile may consist of a set of parameters, so that deviation on just a single parameter may not be sufficient in itself to signal an alert.

The foundation of this approach is an analysis of audit records. The audit records provide input to the intrusion detection function in two ways. First, the designer must decide on a number of quantitative metrics that can be used to measure user behavior. Examples of metrics that are useful for profile-based intrusion detection are the following:

· Counter: A nonnegative integer that may be incremented but not decremented until it is reset by management action. Typically, a count of certain event types is kept over a particular period of time. Examples include the number of logins by a single user during an hour, the number of times a given command is executed during a single user session, and the number of password failures during a minute.

· Gauge: A nonnegative integer that may be incremented or decremented. Typically, a gauge is used to measure the current value of some entity. Examples include the number of logical connections assigned to a user application and the number of outgoing messages queued for a user process.
· Interval timer: The length of time between two related events. An example is
the length of time between successive logins to an account.

· Resource utilization: Quantity of resources consumed during a specified period. Examples include the number of pages printed during a user session and total time consumed by a program execution.

Given these general metrics, various tests can be performed to determine whether current activity fits within acceptable limits:

· Mean and standard deviation


· Multivariate
· Markov process
· Time series
· Operational

The simplest statistical test is to measure the mean and standard deviation of a parameter over some historical period. This gives a reflection of the average behavior and its variability.
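The mean-and-standard-deviation test above reduces to a few lines: flag an observation that falls more than k standard deviations from the historical mean. The metric (logins per day) and the choice k = 2 are arbitrary illustrative values.

```python
import statistics

# Mean/standard-deviation anomaly test over a historical parameter.
def is_anomalous(history: list[float], observation: float,
                 k: float = 2.0) -> bool:
    mean = statistics.mean(history)
    sd = statistics.stdev(history)          # sample standard deviation
    return abs(observation - mean) > k * sd

logins_per_day = [4, 5, 6, 5, 4, 6, 5, 5]   # historical profile (mean = 5)
print(is_anomalous(logins_per_day, 5))       # False
print(is_anomalous(logins_per_day, 40))      # True
```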

A multivariate model is based on correlations between two or more variables. Intruder behavior may be characterized with greater confidence by considering such correlations (for example, processor time and resource usage, or login frequency and session elapsed time).

A Markov process model is used to establish transition probabilities among various states. As an example, this model might be used to look at transitions between certain commands.

A time series model focuses on time intervals, looking for sequences of events that happen too rapidly or too slowly. A variety of statistical tests can be applied to characterize abnormal timing.

Finally, an operational model is based on a judgment of what is considered abnormal, rather than an automated analysis of past audit records. Typically, fixed limits are defined and intrusion is suspected for an observation that is outside the limits.

1.3 Rule-Based Intrusion Detection

Rule-based techniques detect intrusion by observing events in the system and applying a set of rules that lead to a decision regarding whether a given pattern of activity is or is not suspicious.
Rule-based anomaly detection is similar in terms of its approach and
strengths to statistical anomaly detection. With the rule-based approach,
historical audit records are analyzed to identify usage patterns and to generate
automatically rules that describe those patterns. Rules may represent past
behavior patterns of users, programs, privileges, time slots, terminals, and so
on. Current behavior is then observed, and each transaction is matched
against the set of rules to determine if it conforms to any historically observed
pattern of behavior.

As with statistical anomaly detection, rule-based anomaly detection does
not require knowledge of security vulnerabilities within the system.
Rather, the scheme is based on observing past behavior and, in effect,
assuming that the future will be like the past.

Rule-based penetration identification takes a very different approach to
intrusion detection, one based on expert system technology. The key feature of
such systems is the use of rules for identifying known penetrations or
penetrations that would exploit known weaknesses.
Example heuristics are the following:

o Users should not read files in other users' personal directories.

o Users must not write other users' files.

o Users who log in after hours often access the same files they used earlier.

o Users do not generally open disk devices directly but rely on
higher-level operating system utilities.

o Users should not be logged in more than once to the same system.

o Users do not make copies of system programs.

2 The Base-Rate Fallacy

To be of practical use, an intrusion detection system should detect a
substantial percentage of intrusions while keeping the false alarm rate
at an acceptable level. If only a modest percentage of actual intrusions
are detected, the system provides a false sense of security. On the other
hand, if the system frequently triggers an alert when there is no
intrusion (a false alarm), then either system managers will begin to
ignore the alarms, or much time will be wasted analyzing the false
alarms.
Unfortunately, because of the nature of the probabilities involved, it is
very difficult to meet the standard of high rate of detections with a low
rate of false alarms. In general, if
the actual numbers of intrusions is low compared to the number of
legitimate uses of a system, then the false alarm rate will be high unless
the test is extremely discriminating.
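The effect can be made concrete with Bayes' theorem. In the sketch below, the base rate, detection rate, and false alarm rate are hypothetical numbers chosen only to illustrate the fallacy:

```python
def p_intrusion_given_alarm(base_rate, detection_rate, false_alarm_rate):
    """Bayes' theorem: P(intrusion | alarm)."""
    p_alarm = (detection_rate * base_rate
               + false_alarm_rate * (1 - base_rate))
    return detection_rate * base_rate / p_alarm

# Suppose 1 in 1000 events is an intrusion; the detector catches 90%
# of intrusions and falsely alarms on only 0.1% of legitimate events.
p = p_intrusion_given_alarm(0.001, 0.9, 0.001)
print(round(p, 3))  # 0.474: more than half of all alarms are still false
```

Even with a seemingly tiny 0.1% false alarm rate, fewer than half of the alarms correspond to real intrusions, because legitimate events vastly outnumber intrusions.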

3 Distributed Intrusion Detection

Until recently, work on intrusion detection systems focused on single-system
stand-alone facilities. The typical organization, however, needs to defend a
distributed collection of hosts supported by a LAN. Porras points out the
following major issues in the design of a distributed intrusion detection
system:

A distributed intrusion detection system may need to deal with different
audit record formats. In a heterogeneous environment, different systems
will employ different native audit collection systems and, if using
intrusion detection, may employ different formats for security-related
audit records.

One or more nodes in the network will serve as collection and analysis
points for the data from the systems on the network. Thus, either raw
audit data or summary data must be transmitted across the network.
Therefore, there is a requirement to assure the integrity and
confidentiality of these data.

Either a centralized or decentralized architecture can be used.

Below figure shows the overall architecture, which consists of three main components:

· Host agent module: An audit collection module operating as a
background process on a monitored system. Its purpose is to collect data on
security-related events on the host and transmit these to the central manager.

· LAN monitor agent module: Operates in the same fashion as a host agent
module except that it analyzes LAN traffic and reports the results to the central
manager.

· Central manager module: Receives reports from LAN monitor and host
agents and processes and correlates these reports to detect intrusion.
The scheme is designed to be independent of any operating system or system
auditing implementation.

· The agent captures each audit record produced by the native
audit collection system.
· A filter is applied that retains only those records that are of security
interest.
· These records are then reformatted into a standardized format
referred to as the host audit record (HAR).
· Next, a template-driven logic module analyzes the records for
suspicious activity.
· At the lowest level, the agent scans for notable events that
are of interest independent of any past events.
· Examples include failed file accesses, accessing system files,
and changing a file's access control.
· At the next higher level, the agent looks for sequences of events,
such as known attack patterns (signatures).
· Finally, the agent looks for anomalous behavior of an individual
user based on a historical profile of that user, such as number of
programs executed, number of files accessed, and the like.
· When suspicious activity is detected, an alert is sent to the central
manager.
· The central manager includes an expert system that can draw
inferences from received data.
· The manager may also query individual systems for copies of
HARs to correlate with those from other agents.
· The LAN monitor agent also supplies information to the central manager.
· The LAN monitor agent audits host-host connections, services
used, and volume of traffic.
· It searches for significant events, such as sudden changes in
network load, the use of security-related services, and network
activities such as rlogin.

The architecture is quite general and flexible. It offers a foundation for a
machine-independent approach that can expand from stand-alone intrusion
detection to a system that is able to correlate activity from a number of sites
and networks to detect suspicious activity that would otherwise remain
undetected.

4 Honeypots

A relatively recent innovation in intrusion detection technology is the honeypot.
Honeypots are decoy systems that are designed to lure a potential attacker
away from critical systems. Honeypots are designed to

· divert an attacker from accessing critical systems

· collect information about the attacker's activity

· encourage the attacker to stay on the system long enough for administrators to
respond
These systems are filled with fabricated information designed to appear valuable
but that a legitimate user of the system wouldn't access. Thus, any access to the
honeypot is suspect.

5 Intrusion Detection Exchange Format

To facilitate the development of distributed intrusion detection systems
that can function across a wide range of platforms and environments,
standards are needed to support interoperability. Such standards are the
focus of the IETF Intrusion Detection Working Group.

The outputs of this working group include the following:

a. A requirements document, which describes the high-level functional
requirements for communication between intrusion detection systems
and with management systems, including the rationale for those
requirements.

b. A common intrusion language specification, which describes data
formats that satisfy the requirements.

c. A framework document, which identifies existing protocols best used
for communication between intrusion detection systems, and describes
how the devised data formats relate to them.

PASSWORD MANAGEMENT

1. Password Protection

The front line of defense against intruders is the password system.
Virtually all multiuser systems require that a user provide not only a
name or identifier (ID) but also a password. The password serves to
authenticate the ID of the individual logging on to the system. In turn,
the ID provides security in the following ways:

· The ID determines whether the user is authorized to gain access to a system.

· The ID determines the privileges accorded to the user.

· The ID is used in what is referred to as discretionary access control. For
example, by listing the IDs of the other users, a user may grant permission to
them to read files owned by that user.

2. The Vulnerability of Passwords

To understand the nature of the threat to password-based systems, let
us consider a scheme that is widely used on UNIX, in which the following
procedure is employed.

· Each user selects a password of up to eight printable characters in length.

· This is converted into a 56-bit value (using 7-bit ASCII) that serves as the
key input to an encryption routine.

· The encryption routine, known as crypt(3), is based on DES. The DES
algorithm is modified using a 12-bit "salt" value.

· Typically, this value is related to the time at which the password is assigned to the
user.

· The modified DES algorithm is exercised with a data input consisting of a 64-bit block
of zeros.

· The output of the algorithm then serves as input for a second encryption.

· This process is repeated for a total of 25 encryptions.

· The resulting 64-bit output is then translated into an 11-character sequence.

· The hashed password is then stored, together with a plaintext copy of the
salt, in the password file for the corresponding user ID.

· This method has been shown to be secure against a variety of cryptanalytic attacks.
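The store-and-verify flow above can be sketched in Python. This is a hedged illustration only: SHA-256 stands in for the 25-iteration DES-based crypt(3) routine, and the salt format is invented for the example.

```python
import hashlib, os

def hash_password(password, salt=None):
    """Illustrative salted hash. The real UNIX scheme iterates a
    DES-based crypt(3) 25 times; SHA-256 here is only a stand-in to
    show how the salt and hashed password are stored together."""
    if salt is None:
        salt = os.urandom(6).hex()  # plays the role of the 12-bit salt
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return salt, digest

def verify(password, salt, stored_digest):
    """Recompute the salted hash and compare with the stored value."""
    return hashlib.sha256((salt + password).encode()).hexdigest() == stored_digest

salt, stored = hash_password("secret8!")
print(verify("secret8!", salt, stored))  # True: password accepted
print(verify("guess", salt, stored))     # False: password rejected
```

As in the real scheme, the salt is stored in plaintext alongside the hash, so verification needs only the candidate password and the stored record.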

The salt serves three purposes:

· It prevents duplicate passwords from being visible in the password file. Even
if two users choose the same password, those passwords will be assigned at
different times. Hence, the "extended" passwords of the two users will differ.

· It effectively increases the length of the password without requiring the user
to remember two additional characters.

· It prevents the use of a hardware implementation of DES, which would ease
the difficulty of a brute-force guessing attack.

When a user attempts to log on to a UNIX system, the user provides an ID and
a password. The operating system uses the ID to index into the password file
and retrieve the plaintext salt and the encrypted password. The salt and user-
supplied password are used as input to the encryption routine. If the result
matches the stored value, the password is accepted. The encryption routine is
designed to discourage guessing attacks. Software implementations of DES
are slow compared to hardware versions, and the use of 25 iterations
multiplies the time required by 25.

Thus, there are two threats to the UNIX password scheme. First, a user can
gain access on a machine using a guest account or by some other means and
then run a password guessing program, called a password cracker, on that
machine.
As an example, a password cracker was reported on the Internet in
August 1993. Using a Thinking Machines Corporation parallel computer,
a performance of 1560 encryptions per second per vector unit was
achieved. With four vector units per processing node (a standard
configuration), this works out to 800,000 encryptions per second on a
128-node machine (which is a modest size) and 6.4 million encryptions
per second on a 1024-node machine.

Password length is only part of the problem. Many people, when permitted to
choose their own password, pick a password that is guessable, such as their
own name, their street name, a common dictionary word, and so forth. This
makes the job of password cracking straightforward.
The following strategy was used:

1. Try the user's name, initials, account name, and other relevant personal
information. In all, 130 different permutations for each user were tried.

2. Try words from various dictionaries.

3. Try various permutations on the words from step 2.

4. Try various capitalization permutations on the words from step 2 that
were not considered in step 3. This added almost 2 million additional
words to the list.
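Step 4 of this strategy can be illustrated in Python; the sample word and the set-based return type are choices made for the example:

```python
from itertools import product

def capitalization_permutations(word):
    """Generate every upper/lower-case variant of a word, as in
    step 4 of the cracking strategy described above."""
    choices = [(c.lower(), c.upper()) for c in word]
    return {''.join(variant) for variant in product(*choices)}

variants = capitalization_permutations("cat")
print(len(variants))       # 2**3 = 8 variants for a 3-letter word
print(sorted(variants))
```

A purely alphabetic n-letter word yields 2**n variants, which is why this step alone added millions of candidates to the cracker's list.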

3. Access Control

One way to thwart a password attack is to deny the opponent access to
the password file. If the encrypted password portion of the file is
accessible only by a privileged user, then the opponent cannot read it
without already knowing the password of a privileged user.

Password Selection Strategies

Four basic techniques are in use:

· User education
· Computer-generated passwords
· Reactive password checking
· Proactive password checking

Users can be told the importance of using hard-to-guess passwords and can be
provided with guidelines for selecting strong passwords. This user education
strategy is unlikely to succeed at most installations, particularly where there is
a large user population or a lot of turnover. Many users will simply ignore the
guidelines.

Computer-generated passwords also have problems. If the passwords are quite
random in nature, users will not be able to remember them. Even if the
password is pronounceable, the user may have difficulty remembering it and
so be tempted to write it down.

A reactive password checking strategy is one in which the system
periodically runs its own password cracker to find guessable passwords.

The most promising approach to improved password security is a proactive
password checker. In this scheme, a user is allowed to select his or her
own password. However, at the time of selection, the system checks to see if
the password is allowable and, if not, rejects it. Such checkers are based on
the philosophy that, with sufficient guidance from the system, users can select
memorable passwords from a fairly large password space that are not likely to
be guessed in a dictionary attack.

The first approach is a simple system for rule enforcement. For example, the
following rules could be enforced:

· All passwords must be at least eight characters long.

· In the first eight characters, the passwords must include at least one each of
uppercase, lowercase, numeric digits, and punctuation marks.

These rules could be coupled with advice to the user. Although this approach
is superior to simply educating users, it may not be sufficient to thwart
password crackers. This scheme alerts crackers as to which passwords not to
try but may still make it possible to do password cracking.
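A minimal sketch of such a rule-enforcing checker in Python, assuming exactly the two rules above (a real proactive checker would combine many more rules):

```python
import string

def password_allowed(pw):
    """Enforce the two example rules: at least eight characters, and
    the first eight must include an upper-case letter, a lower-case
    letter, a digit, and a punctuation mark."""
    if len(pw) < 8:
        return False
    first8 = pw[:8]
    return (any(c.isupper() for c in first8)
            and any(c.islower() for c in first8)
            and any(c.isdigit() for c in first8)
            and any(c in string.punctuation for c in first8))

print(password_allowed("Tr0ub4d!or"))  # True
print(password_allowed("password"))    # False: no upper case, digit, or punctuation
```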

Another possible procedure is simply to compile a large dictionary of possible
"bad" passwords. When a user selects a password, the system checks to make
sure that it is not on the disapproved list.

There are two problems with this approach:

· Space: The dictionary must be very large to be effective.

· Time: The time required to search a large dictionary may itself be large.

Two techniques for developing an effective and efficient proactive
password checker that is based on rejecting words on a list show
promise. One of these develops a Markov model for the generation of
guessable passwords. As an example, consider a language consisting of an
alphabet of three characters. The state of the system at any time is the
identity of the most recent letter. The value on the transition from one
state to another represents the probability that one letter follows
another. Thus, the probability that the next letter is b, given that the
current letter is a, might be 0.5.

In general, a Markov model is a quadruple [m, A, T, k], where m is the number
of states in the model, A is the state space, T is the matrix of transition
probabilities, and k is the order of the model. For a kth-order model, the
probability of making a transition to a particular letter depends on the previous
k letters that have been generated.

The authors report on the development and use of a second-order model. To
begin, a dictionary of guessable passwords is constructed. Then the transition
matrix is calculated as follows:

1. Determine the frequency matrix f, where f(i, j, k) is the number of occurrences
of the trigram consisting of the ith, jth, and kth character. For example, the
password parsnips yields the trigrams par, ars, rsn, sni, nip, and ips.

2. For each bigram ij, calculate f(i, j, ∞) as the total number of trigrams beginning
with ij. For example, f(a, b, ∞) would be the total number of trigrams of the form
aba, abb, abc, and so on.

3. Compute the entries of T as follows:

T(i, j, k) = f(i, j, k) / f(i, j, ∞)
The result is a model that reflects the structure of the words in the dictionary.
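The three steps can be sketched in Python; the single-word dictionary below is used only to reproduce the parsnips example:

```python
from collections import defaultdict

def transition_probabilities(dictionary):
    """Build T(i, j, k) = f(i, j, k) / f(i, j, inf) from a list of
    guessable passwords, following steps 1-3 above."""
    f = defaultdict(int)       # trigram frequency matrix
    totals = defaultdict(int)  # f(i, j, inf): trigrams starting with bigram ij
    for word in dictionary:
        for a, b, c in zip(word, word[1:], word[2:]):
            f[(a, b, c)] += 1
            totals[(a, b)] += 1
    return {tri: count / totals[tri[:2]] for tri, count in f.items()}

T = transition_probabilities(["parsnips"])
# "parsnips" yields the trigrams par, ars, rsn, sni, nip, and ips.
print(len(T))              # 6 distinct trigrams
print(T[('p', 'a', 'r')])  # 1.0: 'r' always follows the bigram 'pa' here
```

With a real dictionary of guessable passwords, candidate passwords whose trigrams score high under T would be rejected as too dictionary-like.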

A quite different approach has been reported by Spafford. It is based on
the use of a Bloom filter. To begin, we explain the operation of the Bloom
filter. A Bloom filter of order k consists of a set of k independent hash
functions H1(x), H2(x), ..., Hk(x), where each function maps a password
into a hash value in the range 0 to N - 1. That is,

Hi(Xj) = y, where 1 ≤ i ≤ k; 1 ≤ j ≤ D; 0 ≤ y ≤ N - 1

where
Xj = jth word in password dictionary
D = number of words in password dictionary

The following procedure is then applied to the dictionary:

· A hash table of N bits is defined, with all bits initially set to 0.

· For each password, its k hash values are calculated, and the corresponding
bits in the hash table are set to 1. Thus, if Hi(Xj) = 67 for some (i, j), then the
sixty-seventh bit of the hash table is set to 1; if the bit already has the value 1,
it remains at 1.

When a new password is presented to the checker, its k hash values are
calculated. If all the corresponding bits of the hash table are equal to 1, then
the password is rejected.
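A hedged sketch of this procedure in Python. The text leaves the k hash functions abstract; here they are simulated by salting SHA-256 with the function index, and the table size and word list are invented for the example:

```python
import hashlib

class BloomFilter:
    """Order-k Bloom filter over an N-bit table, as described above."""

    def __init__(self, n_bits, k):
        self.n, self.k = n_bits, k
        self.table = bytearray((n_bits + 7) // 8)  # all bits start at 0

    def _positions(self, word):
        # Simulate k independent hash functions H1..Hk by salting
        # one hash with the function index.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{word}".encode()).digest()
            yield int.from_bytes(h, "big") % self.n

    def add(self, word):
        for pos in self._positions(word):
            self.table[pos // 8] |= 1 << (pos % 8)

    def probably_contains(self, word):
        return all(self.table[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(word))

bf = BloomFilter(n_bits=10_000, k=4)
for bad in ["password", "123456", "qwerty"]:
    bf.add(bad)
print(bf.probably_contains("password"))   # True: reject this choice
print(bf.probably_contains("zX!9rQ#m"))   # almost certainly False
```

A word in the dictionary is always rejected; a word outside it is rejected only with the filter's (small, tunable) false-positive probability.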

VIRUSES AND RELATED THREATS

Perhaps the most sophisticated types of threats to computer systems are
presented by programs that exploit vulnerabilities in computing systems.

1. Malicious Programs
Malicious software can be divided into two categories:

those that need a host program, and those that are independent.

The former are essentially fragments of programs that cannot exist
independently of some actual application program, utility, or system program.
Viruses, logic bombs, and backdoors are examples. The latter are self-
contained programs that can be scheduled and run by the operating system.
Worms and zombie programs are examples.
2. The Nature of Viruses

A virus is a piece of software that can "infect" other programs by modifying
them; the modification includes a copy of the virus program, which can then go
on to infect other programs.

A virus can do anything that other programs do. The only difference is that it
attaches itself to another program and executes secretly when the host
program is run. Once a virus is executing, it can perform any function, such as
erasing files and programs.

During its lifetime, a typical virus goes through the following four phases:

· Dormant phase: The virus is idle. The virus will eventually be activated by
some event, such as a date, the presence of another program or file, or the
capacity of the disk exceeding some limit. Not all viruses have this stage.

· Propagation phase: The virus places an identical copy of itself into other
programs or into certain system areas on the disk. Each infected program will
now contain a clone of the virus, which will itself enter a propagation phase.

· Triggering phase: The virus is activated to perform the function for which it
was intended. As with the dormant phase, the triggering phase can be caused
by a variety of system events, including a count of the number of times that
this copy of the virus has made copies of itself.

· Execution phase: The function is performed. The function may be
harmless, such as a message on the screen, or damaging, such as the
destruction of programs and data files.

3. Virus Structure

A virus can be prepended or postpended to an executable program, or it can be
embedded in some other fashion. The key to its operation is that the infected
program, when invoked, will first execute the virus code and then execute the
original code of the program.

An infected program begins with the virus code and works as follows.

The first line of code is a jump to the main virus program. The second line is a
special marker that is used by the virus to determine whether or not a
potential victim program has already been infected with this virus.
When the program is invoked, control is immediately transferred to the main
virus program. The virus program first seeks out uninfected executable files
and infects them. Next, the virus may perform some action, usually detrimental
to the system.

This action could be performed every time the program is invoked, or it could
be a logic bomb that triggers only under certain conditions.

Finally, the virus transfers control to the original program. If the infection
phase of the program is reasonably rapid, a user is unlikely to notice any
difference between the execution of an infected and uninfected program.

A virus such as the one just described is easily detected because an infected
version of a program is longer than the corresponding uninfected one. A way to
thwart such a simple means of detecting a virus is to compress the executable
file so that both the infected and uninfected versions are of identical length.
We assume that program P1 is infected with the virus CV. When this program
is invoked, control passes to its virus, which performs the following steps:

1. For each uninfected file P2 that is found, the virus first compresses that
file to produce P'2, which is shorter than the original program by the size of
the virus.
2. A copy of the virus is prepended to the compressed program.
3. The compressed version of the original infected program, P'1, is uncompressed.
4. The uncompressed original program is executed.
In this example, the virus does nothing other than propagate. As in the
previous example, the virus may include a logic bomb.

4. Initial Infection

Once a virus has gained entry to a system by infecting a single program, it is in
a position to infect some or all other executable files on that system when the
infected program executes. Thus, viral infection can be completely prevented
by preventing the virus from gaining entry in the first place. Unfortunately,
prevention is extraordinarily difficult because a virus can be part of any
program outside a system. Thus, unless one is content to take an absolutely
bare piece of iron and write all one's own system and application programs, one
is vulnerable.

VIRUS COUNTERMEASURES

Antivirus Approaches

The ideal solution to the threat of viruses is prevention. The next best
approach is to be able to do the following:

· Detection: Once the infection has occurred, determine that it has
occurred and locate the virus.

· Identification: Once detection has been achieved, identify the specific virus
that has infected a program.

· Removal: Once the specific virus has been identified, remove all traces of the
virus from the infected program and restore it to its original state. Remove the
virus from all infected systems so that the disease cannot spread further.

If detection succeeds but either identification or removal is not possible, then
the alternative is to discard the infected program and reload a clean backup
version.

There are four generations of antivirus software:

· First generation: simple scanners

· Second generation: heuristic scanners

· Third generation: activity traps

· Fourth generation: full-featured protection


A first-generation scanner requires a virus signature to identify a virus.
Such signature-specific scanners are limited to the detection of known viruses.
Another type of first-generation scanner maintains a record of the length of
programs and looks for changes in length.

A second-generation scanner does not rely on a specific signature. Rather,
the scanner uses heuristic rules to search for probable virus infection. One
class of such scanners looks for fragments of code that are often associated
with viruses.

Another second-generation approach is integrity checking. A checksum can be
appended to each program. If a virus infects the program without changing the
checksum, then an integrity check will catch the change. To counter a virus
that is sophisticated enough to change the checksum when it infects a
program, an encrypted hash function can be used. The encryption key is stored
separately from the program so that the virus cannot generate a new hash code
and encrypt that. By using a hash function rather than a simpler checksum,
the virus is prevented from adjusting the program to produce the same hash
code as before.
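One concrete way to realize this keyed-hash idea is HMAC-SHA-256; the sketch below is illustrative, and the key and program bytes are invented for the example:

```python
import hmac, hashlib

def sign_program(key, program_bytes):
    """Keyed hash (HMAC) over a program image. Without the key,
    a virus that modifies the program cannot forge a matching tag."""
    return hmac.new(key, program_bytes, hashlib.sha256).hexdigest()

def integrity_ok(key, program_bytes, stored_tag):
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_program(key, program_bytes), stored_tag)

key = b"kept-separately-from-the-program"   # hypothetical secret key
program = b"\x7fELF...original code..."     # hypothetical program image
tag = sign_program(key, program)

print(integrity_ok(key, program, tag))             # True: unmodified
print(integrity_ok(key, program + b"virus", tag))  # False: infection detected
```

Because the key is stored apart from the program, the virus can recompute neither the tag nor a colliding program image, which is exactly the property the checksum approach lacks.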

Third-generation programs are memory-resident programs that identify a
virus by its actions rather than its structure in an infected program. Such
programs have the advantage that it is not necessary to develop signatures and
heuristics for a wide array of viruses. Rather, it is necessary only to identify the
small set of actions that indicate an infection is being attempted and then to
intervene.

Fourth-generation products are packages consisting of a variety of antivirus
techniques used in conjunction. These include scanning and activity trap
components. In addition, such a package includes access control capability,
which limits the ability of viruses to penetrate a system and then limits the
ability of a virus to update files in order to pass on the infection.

The arms race continues. With fourth-generation packages, a more
comprehensive defense strategy is employed, broadening the scope of defense
to more general-purpose computer security measures.

Advanced Antivirus Techniques

More sophisticated antivirus approaches and products continue to appear. In
this subsection, we highlight two of the most important.

Generic Decryption

Generic decryption (GD) technology enables the antivirus program to easily
detect even the most complex polymorphic viruses, while maintaining fast
scanning speeds. In order to detect such a structure, executable files are run
through a GD scanner, which contains the following elements:

· CPU emulator: A software-based virtual computer. Instructions in an
executable file are interpreted by the emulator rather than executed on the
underlying processor. The emulator includes software versions of all registers
and other processor hardware, so that the underlying processor is unaffected
by programs interpreted on the emulator.
· Virus signature scanner: A module that scans the target code looking
for known virus signatures.

· Emulation control module: Controls the execution of the target code.

Digital Immune System

The digital immune system is a comprehensive approach to virus protection
developed by IBM. The motivation for this development has been the rising
threat of Internet-based virus propagation. Two major trends in Internet
technology have had an increasing impact on the rate of virus propagation in
recent years:

Integrated mail systems: Systems such as Lotus Notes and Microsoft Outlook
make it very simple to send anything to anyone and to work with objects that
are received.

Mobile-program systems: Capabilities such as Java and ActiveX allow
programs to move on their own from one system to another.

· A monitoring program on each PC uses a variety of heuristics based on
system behavior, suspicious changes to programs, or family signature to infer
that a virus may be present. The monitoring program forwards a copy of any
program thought to be infected to an administrative machine within the
organization.

· The administrative machine encrypts the sample and sends it to a central
virus analysis machine.
· This machine creates an environment in which the infected program can be
safely run for analysis. Techniques used for this purpose include emulation, or
the creation of a protected environment within which the suspect program can
be executed and monitored. The virus analysis machine then produces a
prescription for identifying and removing the virus.

· The resulting prescription is sent back to the administrative machine.

· The administrative machine forwards the prescription to the infected client.

· The prescription is also forwarded to other clients in the organization.

· Subscribers around the world receive regular antivirus updates that protect
them from the new virus.

The success of the digital immune system depends on the ability of the virus
analysis machine to detect new and innovative virus strains. By constantly
analyzing and monitoring the viruses found in the wild, it should be possible to
continually update the digital immune software to keep up with the threat.

Behavior-Blocking Software

Unlike heuristics or fingerprint-based scanners, behavior-blocking software
integrates with the operating system of a host computer and monitors program
behavior in real-time for malicious actions. Monitored behaviors can include
the following:

· Attempts to open, view, delete, and/or modify files;

· Attempts to format disk drives and other unrecoverable disk operations;

· Modifications to the logic of executable files or macros;

· Modification of critical system settings, such as start-up settings;

· Scripting of e-mail and instant messaging clients to send executable content; and

· Initiation of network communications.

If the behavior blocker detects that a program is initiating would-be malicious
behaviors as it runs, it can block these behaviors in real-time and/or terminate
the offending software. This gives it a fundamental advantage over such
established antivirus detection techniques as fingerprinting or heuristics.

Distributed denial of service (DDoS) attack

A distributed denial-of-service (DDoS) attack is an attack in which multiple
compromised computer systems attack a target, such as a server, website or
other network resource, and cause a denial of service for users of the
targeted resource. The flood of incoming messages, connection requests
or malformed packets to the target system forces it to slow down or even crash
and shut down, thereby denying service to legitimate users or systems.

DDoS attacks have been carried out by diverse threat actors, ranging from
individual criminal hackers to organized crime rings and government agencies.
In certain situations, often ones related to poor coding, missing patches or
generally unstable systems, even legitimate requests to target systems can
result in DDoS-like results.

How DDoS attacks work

In a typical DDoS attack, the assailant begins by exploiting a vulnerability in
one computer system and making it the DDoS master. The attack master
system identifies other vulnerable systems and gains control over them by
either infecting the systems with malware or through bypassing the
authentication controls (i.e., guessing the default password on a widely
used system or device).

A computer or networked device under the control of an intruder is known as a
zombie, or bot. The attacker creates what is called a command-and-control
server to command the network of bots, also called a botnet. The person in
control of a botnet is sometimes referred to as the botmaster (that term has
also historically been used to refer to the first system "recruited" into a botnet
because it is used to control the spread and activity of other systems in the
botnet).

Botnets can be composed of almost any number of bots; botnets with tens or
hundreds of thousands of nodes have become increasingly common, and there
may not be an upper limit to their size. Once the botnet is assembled, the
attacker can use the traffic generated by the compromised devices to flood the
target domain and knock it offline.

Types of DDoS attacks

There are three types of DDoS attacks. Network-centric or volumetric attacks
overload a targeted resource by consuming available bandwidth with packet
floods. Protocol attacks target network layer or transport layer protocols using
flaws in the protocols to overwhelm targeted resources. And application layer
attacks overload application services or databases with a high volume of
application calls. The inundation of packets at the target causes a denial of
service.

While the target of a DDoS attack is the obvious victim, there can be many other victims in a typical attack, including the owners of the systems used to execute it. Although the owners of infected computers are typically unaware their systems have been compromised, they are nevertheless likely to suffer a degradation of service during a DDoS attack.

Internet of things and DDoS attacks

While the things comprising the internet of things (IoT) may be useful to
legitimate users, in some cases, they are even more helpful to DDoS attackers.
The devices connected to IoT include any appliance into which some computing
and networking capacity has been built, and, all too often, these devices are
not designed with security in mind.

Devices connected to the IoT expose large attack surfaces and display minimal
attention to security best practices. For example, devices are often shipped
with hard-coded authentication credentials for system administration, making
it simple for attackers to log in to the devices. In some cases, the
authentication credentials cannot be changed. Devices also often ship without
the capability to upgrade or patch device software, further exposing them to
attacks that leverage well-known vulnerabilities.

Internet of things botnets are increasingly being used to wage massive DDoS
attacks. In 2016, the Mirai botnet was used to attack the domain name service
provider Dyn, based in Manchester, N.H.; attack volumes were measured at
over 600 Gbps. Another late 2016 attack unleashed on OVH, the French
hosting firm, peaked at more than 1 Tbps.
DDoS defense and prevention

DDoS attacks can create significant business risks with lasting effects.
Therefore, it is important for IT and security administrators and managers, as
well as their business executives, to understand the threats, vulnerabilities
and risks associated with DDoS attacks.

Preventing a DDoS attack outright is practically impossible. However, the business impact of these attacks can be minimized through some core information security practices, including performing ongoing security assessments to look for -- and resolve -- denial of service-related vulnerabilities and using network security controls, including services from cloud-based vendors specializing in responding to DDoS attacks.

In addition, solid patch management practices, email phishing testing and user
awareness, and proactive network monitoring and alerting can help minimize
an organization's contribution to DDoS attacks across the internet.
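As a sketch of one such network security control, the token-bucket rate limiter below caps how many requests from a single client are served in a burst, with excess requests dropped. The class name and parameters are invented for this example, and a clock value is passed in explicitly to keep the behavior deterministic; this is a simplified model of one building block, not a complete DDoS defense.

```python
class TokenBucket:
    """Per-client token-bucket rate limiter: a common network-level
    control for absorbing request floods (illustrative sketch)."""

    def __init__(self, rate, burst, now=0.0):
        self.rate = rate        # tokens replenished per second
        self.capacity = burst   # maximum burst of requests served at once
        self.tokens = burst
        self.last = now

    def allow(self, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = TokenBucket(rate=5, burst=10)
flood = [limiter.allow(now=0.0) for _ in range(100)]  # 100 requests arrive at once
print(flood.count(True))   # → 10: the burst is served, the rest are dropped
```

After the burst is exhausted, service resumes only at the sustained refill rate, so a flood from one source cannot monopolize the server.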

Firewall design principles

Internet connectivity is no longer an option for most organizations. However, while internet access provides benefits to the organization, it also enables the outside world to reach and interact with local network assets, creating a threat to the organization. While it is possible to equip each workstation and server on the premises network with strong security features, such as intrusion protection, this is not a practical approach. The increasingly accepted alternative is the firewall.

The firewall is inserted between the premises network and the internet to establish a controlled link and to erect an outer security wall, or perimeter. The aim of this perimeter is to protect the premises network from internet-based attacks and to provide a single choke point where security and audit controls can be imposed. The firewall can be a single computer system or a set of two or more systems that cooperate to perform the firewall function.

Firewall characteristics:

· All traffic from inside to outside, and vice versa, must pass through the firewall. This is achieved by physically blocking all access to the local network except via the firewall. Various configurations are possible.

· Only authorized traffic, as defined by the local security policy, will be allowed to pass. Various types of firewalls are used, which implement various types of security policies.

· The firewall itself is immune to penetration. This implies the use of a trusted system with a secure operating system.

Four techniques that firewalls use to control access and enforce the site's security policy are as follows:

1. Service control – determines the types of internet services that can be accessed, inbound or outbound. The firewall may filter traffic on the basis of IP address and TCP port number; may provide proxy software that receives and interprets each service request before passing it on; or may host the server software itself, such as a web or mail service.

2. Direction control – determines the direction in which particular service requests may be initiated and allowed to flow through the firewall.

3. User control – controls access to a service according to which user is attempting to access it.

4. Behavior control – controls how particular services are used.
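The four techniques above can be thought of as four dimensions of a single policy rule. The sketch below expresses that idea as a data structure; the field names and values are invented for illustration and do not come from any real firewall product.

```python
from dataclasses import dataclass

@dataclass
class PolicyRule:
    service: str      # service control: which service (e.g., "smtp", "http")
    direction: str    # direction control: "inbound" or "outbound"
    user: str         # user control: a user name, or "*" for any user
    behavior: str     # behavior control: a usage constraint, or "any"

def permits(rule: PolicyRule, service, direction, user, behavior) -> bool:
    """A request is allowed only if it satisfies all four control dimensions."""
    return (rule.service == service
            and rule.direction == direction
            and rule.user in ("*", user)
            and rule.behavior in ("any", behavior))

rule = PolicyRule(service="smtp", direction="outbound", user="*", behavior="any")
print(permits(rule, "smtp", "outbound", "alice", "text-only"))   # True
print(permits(rule, "smtp", "inbound", "alice", "text-only"))    # False
```

A real firewall policy is a list of such rules plus a default action, but each rule still answers the same four questions: which service, which direction, which user, and what kind of use.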

Capabilities of firewall

A firewall defines a single choke point that keeps unauthorized users out of the
protected network, prohibits potentially vulnerable services from entering or
leaving the network, and provides protection from various kinds of IP spoofing
and routing attacks.

A firewall provides a location for monitoring security related events. Audits and
alarms can be implemented on the firewall system.

A firewall is a convenient platform for several internet functions that are not security related. A firewall can also serve as the platform for IPsec.


Limitations of firewall

· The firewall cannot protect against attacks that bypass the firewall. Internal
systems may have dial-out capability to connect to an ISP. An internal LAN
may support a modem pool that provides dial-in capability for traveling
employees and telecommuters.

· The firewall does not protect against internal threats, such as a disgruntled employee or an employee who unwittingly cooperates with an external attacker.

· The firewall cannot protect against the transfer of virus-infected programs or files. Because of the variety of operating systems and applications supported inside the perimeter, it would be impractical and perhaps impossible for the firewall to scan all incoming files, e-mail, and messages for viruses.

Types of firewalls

There are three common types of firewalls:

· Packet filters
· Application-level gateways
· Circuit-level gateways

Packet filtering router

A packet filtering router applies a set of rules to each incoming IP packet and then forwards or discards the packet. The router is typically configured to filter packets going in both directions. Filtering rules are based on information contained in a network packet, such as the source and destination IP addresses, the source and destination transport-level port numbers, the IP protocol field, and the interface on which the packet arrives.
Application level gateway

An application-level gateway, also called a proxy server, acts as a relay of application-level traffic. The user contacts the gateway using a TCP/IP application, such as Telnet or FTP, and the gateway asks the user for the name of the remote host to be accessed. When the user responds and provides a valid user ID and authentication information, the gateway contacts the application on the remote host and relays TCP segments containing the application data between the two endpoints.

Application-level gateways tend to be more secure than packet filters. It is easy to log and audit all incoming traffic at the application level. A prime disadvantage is the additional processing overhead on each connection.
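The authenticate-then-relay flow described above can be sketched as follows. The credential store, function names, and responses here are invented for illustration; a real proxy would speak the actual application protocol (e.g., FTP or Telnet) and open a genuine TCP connection to the remote host.

```python
USERS = {"alice": "s3cret"}   # hypothetical credential store
audit_log = []                # the gateway can log and audit every relayed request

def proxy_request(user, password, remote_host, request):
    # Step 1: the gateway authenticates the user before relaying anything.
    if USERS.get(user) != password:
        return "refused: authentication failed"
    # Step 2: a real gateway would now contact the application on remote_host
    # and relay TCP segments both ways; here the relay step is only modeled.
    audit_log.append((user, remote_host, request))
    return f"relayed to {remote_host}: {request}"

print(proxy_request("alice", "s3cret", "mail.example.com", "HELO client"))
print(proxy_request("mallory", "wrong", "mail.example.com", "HELO client"))
```

Note that the audit log is what makes the gateway's security advantage concrete: every request passes through a point where it can be recorded and examined.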

Circuit level gateway

A circuit-level gateway can be a stand-alone system, or it can be a specialized function performed by an application-level gateway for certain applications. A circuit-level gateway does not permit an end-to-end TCP connection; rather, the gateway sets up two TCP connections, one between itself and a TCP user on an inner host and one between itself and a TCP user on an outer host. Once the two connections are established, the gateway typically relays TCP segments from one connection to the other without examining the contents. The security function consists of determining which connections will be allowed. A typical use of circuit-level gateways is a situation in which the system administrator trusts the internal users. The gateway can be configured to support application-level or proxy service on inbound connections and circuit-level functions for outbound connections.
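The relay step of a circuit-level gateway can be sketched as below: once the inner and outer connections are established, the gateway simply copies bytes in both directions without examining them. The demonstration uses socket pairs to stand in for the two TCP connections; function names are invented for this example.

```python
import socket
import threading

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source side closes the connection."""
    while True:
        data = src.recv(4096)
        if not data:              # one side closed: tear down the circuit
            dst.close()
            return
        dst.sendall(data)         # contents are relayed, never inspected

def splice(inner: socket.socket, outer: socket.socket) -> None:
    """Two one-way relays form the full two-connection circuit."""
    threading.Thread(target=relay, args=(inner, outer), daemon=True).start()
    threading.Thread(target=relay, args=(outer, inner), daemon=True).start()

# Demonstration: socket pairs stand in for the inner and outer TCP connections.
inner_client, inner_gw = socket.socketpair()
outer_gw, outer_server = socket.socketpair()
splice(inner_gw, outer_gw)
inner_client.sendall(b"GET / HTTP/1.0\r\n\r\n")
print(outer_server.recv(4096))    # the bytes arrive at the far side unmodified
```

The security decision (which connections to allow at all) happens before `splice` is called; after that point the gateway is deliberately content-blind, which is what distinguishes it from an application-level gateway.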
Trusted systems
Common Criteria for Information Technology Security Evaluation

1 The CC permits comparability between the results of independent security evaluations. The CC does so by providing a common set of requirements for the security functionality of IT products and for assurance measures applied to these IT products during a security evaluation. These IT products may be implemented in hardware, firmware or software.

2 The evaluation process establishes a level of confidence that the security functionality of these IT products and the assurance measures applied to these IT products meet these requirements. The evaluation results may help consumers to determine whether these IT products fulfil their security needs.

3 The CC is useful as a guide for the development, evaluation and/or procurement of IT products with security functionality.

4 The CC is intentionally flexible, enabling a range of evaluation methods to be applied to a range of security properties of a range of IT products. Therefore users of the standard are cautioned to exercise care that this flexibility is not misused. For example, using the CC in conjunction with unsuitable evaluation methods, irrelevant security properties, or inappropriate IT products may result in meaningless evaluation results.

5 Consequently, the fact that an IT product has been evaluated has meaning only
in the context of the security properties that were evaluated and the evaluation
methods that were used. Evaluation authorities are advised to carefully check the
products, properties and methods to determine that an evaluation will provide
meaningful results. Additionally, purchasers of evaluated products are advised to
carefully consider this context to determine whether the evaluated product is
useful and applicable to their specific situation and needs.

6 The CC addresses protection of assets from unauthorised disclosure, modification, or loss of use. The categories of protection relating to these three types of failure of security are commonly called confidentiality, integrity, and availability, respectively. The CC may also be applicable to aspects of IT security outside of these three. The CC is applicable to risks arising from human activities (malicious or otherwise) and to risks arising from non-human activities. Apart from IT security, the CC may be applied in other areas of IT, but makes no claim of applicability in these areas.

7 Certain topics, because they involve specialised techniques or because they are
somewhat peripheral to IT security, are considered to be outside the scope of the
CC. Some of these are identified below.
a) The CC does not contain security evaluation criteria pertaining to administrative security measures not related directly to the IT security functionality. However, it is recognised that significant security can often be achieved through or supported by administrative measures such as organisational, personnel, physical, and procedural controls.

b) The evaluation of some technical physical aspects of IT security, such as electromagnetic emanation control, is not specifically covered, although many of the concepts addressed will be applicable to that area.

c) The CC does not address the evaluation methodology under which the
criteria should be applied. This methodology is given in the CEM.

d) The CC does not address the administrative and legal framework under
which the criteria may be applied by evaluation authorities. However, it is
expected that the CC will be used for evaluation purposes in the context of
such a framework.

e) The procedures for use of evaluation results in accreditation are outside the scope of the CC. Accreditation is the administrative process whereby authority is granted for the operation of an IT product (or collection thereof) in its full operational environment, including all of its non-IT parts. The results of the evaluation process are an input to the accreditation process. However, as other techniques are more appropriate for the assessments of non-IT related properties and their relationship to the IT security parts, accreditors should make separate provisions for those aspects.

f) The subject of criteria for the assessment of the inherent qualities of cryptographic algorithms is not covered in the CC. Should independent assessment of mathematical properties of cryptography be required, the evaluation scheme under which the CC is applied must make provision for such assessments.

8 ISO terminology, such as "can", "informative", "may", "normative", "shall" and "should", used throughout the document is defined in the ISO/IEC Directives, Part 2. Note that the term "should" has an additional meaning applicable when using this standard; see the note below.

9 The following definition is given for the use of "should" in the CC: within normative text, "should" indicates "that among several possibilities one is recommended as particularly suitable, without mentioning or excluding others, or that a certain course of action is preferred but not necessarily required" (ISO/IEC Directives, Part 2). The CC interprets "not necessarily required" to mean that the choice of another possibility requires a justification of why the preferred option was not chosen.
