Operating System Module
Faculty of Technology
Department of Computer Science
Compiled by
1
Contents
CHAPTER ONE
Introduction
1.1. What is a computer operating system?
1.1.1. Definition
1.2. Computer System Structure
1.3. History of Operating System
1.4. Classes of operating systems
1.5. Types of operating system
Chapter Two
Processes and process management
2.1. Process concept
2.2. Process State
2.3. Process Control Block
2.4. The threads concept
2.4.1. Multithreading
2.4.2. Multithreading Models
2.4.3. Types of Thread
2.5. Inter-process communication
2.6. Process Scheduling
2.6.1. Preemptive vs. non-preemptive scheduling
2.7. Deadlock
2.7.1. Deadlock Characterization
2.7.2. Methods for handling Deadlocks
2.7.3. Deadlock Avoidance
CHAPTER THREE
3.0. Memory management
3.1. Logical versus Physical Address Space
3.2. Swapping
3.2.1. Overlay
3.3. Contiguous Allocation
3.3.1. Single-Partition Allocation
3.3.2. Multiple-Partition Allocation
3.3.3. Fixed Partitioning
3.4. Fragmentation
3.5. Paging
3.6. Segmentation
3.6.1. Segmentation Architecture
CHAPTER FOUR
Device management
4.1. Characteristics of parallel and serial devices
4.2. Direct memory access
4.3. Recovery from failure
Chapter 5
File Systems
5.1. Fundamental concepts on file
5.1.1. File Attributes
5.2. Operations, organization and buffering in file
5.3. File Management Systems
Chapter 6
Security
6.1. Overview of system security
6.1.1. Policies and mechanism of system security
6.2. System protection, authentication
6.2.1. Models of protection
6.2.2. Memory protection
6.2.3. Encryption
6.2.4. Recovery management
CHAPTER ONE
Introduction
There are many flavours of operating systems available in the marketplace today. The
programs for an operating system are generally written specifically for the type of hardware
they are installed on. For example, the Microsoft Windows operating system works primarily
on an IBM-compatible personal computer (PC), whereas the Apple Macintosh operating
system works on an Apple personal computer but will not work on an IBM-compatible
computer (without special software called an 'emulator'). UNIX was generally designed for
larger mini or mainframe computers, but there is now a version available for the desktop
computer.
1.1.1. Definition
An operating system is a program that acts as an intermediary between a user of a computer
and the computer hardware. Operating systems perform basic tasks, such as recognizing
input from the keyboard, sending output to the display screen, keeping track of files and
directories on the disk, and controlling peripheral devices such as disk drives and printers.
The OS is a resource allocator: it decides between conflicting requests for efficient and fair
resource use.
The OS is also a control program: it controls the execution of programs to prevent errors
and improper use of the computer.
The major functions of an operating system are:
Process management
Memory management
Device management
File management
Security management
User interfacing
DOS: Disk Operating System is one of the first operating systems for the personal
computer. When you turned the computer on, all you saw was the command prompt, which
looked like C:\>. You had to type all commands at the command prompt, which might
look like C:\>wp\wp.exe. This is called a command-line interface. It was not very "user
friendly".
Windows: The Windows operating system, a product of Microsoft, is a GUI (graphical
user interface) operating system. This type of "user friendly" operating system is said to
have WIMP features: Windows, Icons, Menus and Pointing device (mouse).
MacOS: Macintosh, a product of Apple, has its own operating system with a GUI and
WIMP features.
UNIX and Linux: Linux is the PC version of UNIX. UNIX and Linux were originally
created with a command-line interface, but have since added GUI enhancements.
1.2. Computer System Structure
A computer system can be divided into four components:
Hardware – provides the basic computing resources (CPU, memory, I/O devices)
Operating system – controls and coordinates the use of the hardware among the applications and users
Application programs – define the ways in which the system resources are used to solve users' computing problems
Users – people, machines, other computers
1. Single user: an operating system described as 'single user' means that only one user
can use the services of the operating system at any one time. If somebody else wants to
use the computer, they have to wait until the person using it finishes. Older personal
computer operating systems such as MS-DOS and Windows up to version 3.0 were single
user operating systems. Note that older versions of Windows actually used MS-DOS to
operate; Windows simply provided the GUI interface.
2. Multi user: Multi user systems allow more than one person to use the operating system
resources simultaneously. Obviously, two or more people would not want to physically
operate the same computer at the same time, so the ability to do this is provided by
network operating systems. A network operating system allows many personal computers
to connect to other computers by means of communication media such as cable or
wireless links.
These multi user systems are more complex than single user operating systems because
they have to handle many requests for devices, resources, etc., by many different users at
the same time. For example, if three users on a network all try to print a document on a
single network printer at the same time, it is the network operating system's responsibility
to ensure that the documents are held on the hard disk (spooled/queued) until the printer
is ready to receive them. Multi user systems also provide security functions such as who
can access the system, what resources they can use when logged in, what environment
areas they can change, etc.
3. Single tasking: These are operating systems in which only one task can be performed
by the operating system at any one time. That single task must finish before the next task
can be started. For example, in MS-DOS, if you wanted to format a floppy disk, the
computer would need to finish that task before it gave control back to you to allow you to
perform the next task. Early single user operating systems were single tasking.
4. Multi-tasking, single user: This means that a user can sit in front of their computer
(that is not attached to a network) and the computer appears to do many tasks at the same
time. For example, while the operating system is printing a 100-page document on your
printer, your database program is sorting hundreds of records for you, while you play your
favourite card game, all at the same time. (Note that the computer does not run these
tasks concurrently, as explained later.)
5. Multi-tasking, multi user: If you read the definition above for a multi user system,
you will probably have realised that all multi user systems must be multi-tasking.
MS-DOS (IBM-compatible computers): Developed around 1980. A single user, single
tasking OS with no GUI features. Not designed for running on a network. Other similar
products were DR-DOS and PC-DOS.
Windows (IBM-compatible computers): First version appeared around 1985. Never really
gained acceptance until the release of Windows 3.1 in 1992. Network capability was added
to a new version called Windows for Workgroups later in the same year. Uses a GUI and
supports multi user/multi-tasking capabilities. The current standard version for the home
computer is Windows XP.
UNIX (mainframes and now IBM-compatible computers): Developed around the late
1960s. Has extensive networking capabilities and handles multi-tasking/multi user
functions extremely well. Originally a command line operating system.
LINUX (IBM-compatible computers): A popular freeware operating system that is very
similar to UNIX, with basically the same features. Very popular for internet applications
such as firewalls, gateways, etc. Has a GUI, but currently it is not quite as user friendly
as Windows or the Apple Mac.
Macintosh OS (Apple Macintosh): Uses a GUI, and used one before Microsoft Windows
was written. It supports multi-tasking. It is very popular in businesses where graphical
design or video work is done.
Time-sharing systems:
Provide the advantage of quick response.
Avoid duplication of software.
Reduce CPU idle time.
A real time system is defined as a data processing system in which the time interval required
to process and respond to inputs is so small that it controls the environment. Real time
processing is always online, whereas an online system need not be real time. The time taken
by the system to respond to an input and display the required updated information is termed
the response time. The response time in this method is therefore very short compared to
online processing.
Real-time systems are used when there are rigid time requirements on the operation of a
processor or the flow of data, and real-time systems can be used as a control device in a
dedicated application. A real-time operating system has well-defined, fixed time constraints,
otherwise the system will fail. Examples include scientific experiments, medical imaging
systems, industrial control systems, weapon systems, robots, home-appliance controllers,
air traffic control systems, etc.
A simple real-time example: when a car driver signals a turn, the indicator lamp flashes
immediately on the corresponding side. Such a system is a real-time system.
Chapter Two
Processes and process management
2.1. Process concept
A process includes:
program counter
stack
A program is a passive entity; a process is an active entity with a value of the program
counter. It is an execution sequence. Multiple processes can be associated with the same
program (e.g. an editor); they are separate execution sequences. For the CPU, all processes
are similar, whether batch jobs or user programs/tasks, such as a word file being edited or
an internet browser, and whether entered via a system call or dispatched by the scheduler.
2.2. Process State
Fig 2: Diagram of Process State
State changes
1. Null → New: a new process is created to execute a program, e.g. a new batch job or a
user log on, or the process is created by the OS to provide a service.
2. New → Ready: the OS moves a process from the new state to the ready state when it is
prepared to take on an additional process.
3. Ready → Running: when it is time to select a new process to run, the OS selects one of
the processes in the ready state.
4. Running → Terminated: the currently running process is terminated by the OS if the
process indicates that it has completed, or if it aborts.
5. Running → Ready: the process has reached the maximum allowable time, or an
interrupt occurs.
6. Running → Waiting: a process is put in the waiting state if it requests something for
which it must wait. Example: a system call request.
7. Waiting → Ready: a process in the waiting state is moved to the ready state when the
event for which it has been waiting occurs.
8. Ready → Terminated: if a parent terminates, its child processes should be terminated.
9. Waiting → Terminated: if a parent terminates, its child processes should be terminated.
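The nine transitions above can be encoded as a small table-driven sketch. This is a hypothetical illustration, not part of any real kernel; the state names and the `Process` class are taken from, or invented for, the list above:

```python
# Legal process state transitions, copied from the nine rules above.
VALID = {
    ("null", "new"), ("new", "ready"), ("ready", "running"),
    ("running", "terminated"), ("running", "ready"), ("running", "waiting"),
    ("waiting", "ready"), ("ready", "terminated"), ("waiting", "terminated"),
}

class Process:
    """Tracks one process's state and rejects illegal transitions."""

    def __init__(self):
        self.state = "null"

    def move(self, new_state):
        if (self.state, new_state) not in VALID:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state
```

For instance, moving a process straight from new to running raises an error, because a new process must pass through the ready queue first.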
2.3. Process Control Block
Each process is represented in the operating system by a process control block (PCB), also
called a task control block. It contains many pieces of information associated with a specific
process, including these:
1. Process state: The state may be new, ready, running, waiting, halted (terminated), and so on.
2. Program counter: The counter indicates the address of the next instruction to be executed
for this process.
3. CPU registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information. Along with the program counter,
this state information must be saved when an interrupt occurs, to allow the process to be
continued correctly afterward.
4. CPU-scheduling information: This information includes a process priority, pointers to
scheduling queues, and any other scheduling parameters.
5. Memory-management information: This may include the values of the base and limit
registers, the page tables, or the segment tables, depending on the memory system used
by the operating system.
6. Accounting information: This information includes the amount of CPU and real time
used, time limits, account numbers, job or process numbers, and so on.
7. I/O status information: This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
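As a rough sketch, the seven PCB fields above can be mirrored in a simple record type. The field names are illustrative only; a real kernel keeps this structure in protected kernel memory, not as an application-level object:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    pid: int
    process_state: str = "new"        # 1. process state
    program_counter: int = 0          # 2. address of the next instruction
    cpu_registers: dict = field(default_factory=dict)   # 3. saved register values
    priority: int = 0                 # 4. CPU-scheduling information
    base_register: int = 0            # 5. memory-management information
    limit_register: int = 0
    cpu_time_used: float = 0.0        # 6. accounting information
    open_files: list = field(default_factory=list)      # 7. I/O status information
```

On a context switch, the kernel would save the running process's registers into its PCB and later reload them from the PCB of the process chosen to run next.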
Representation of Process Scheduling
A new process is initially put in the ready queue. Once a process is allocated the CPU, the
following events may occur:
The process could issue an I/O request
The process could create a new process
The process could be removed forcibly from the CPU, as a result of an interrupt
When a process terminates, it is removed from all queues, and its process control block and
other resources are de-allocated.
Inter-process communication is simple and easy when used occasionally. If there are
many processes sharing many resources, then the mechanism becomes bulky and difficult
to handle. Threads were created to make this kind of resource sharing simple and efficient.
A thread is a basic unit of CPU utilization that consists of:
Thread id
Program counter
Register set
Stack
Threads belonging to the same process share its code, its data section, and other OS
resources.
Processes and Threads
A thread of execution is the smallest unit of processing that can be scheduled by an OS. The
implementation of threads and processes differs from one OS to another, but in most cases
a thread is contained inside a process.
Multiple threads can exist within the same process and share resources such as memory,
while different processes do not share these resources. Like process states, threads also have
states:
New
Ready
Running
Waiting
Terminated
Like processes, the OS will switch between threads (even though they belong to a single
process) for CPU usage. Like process creation, thread creation is supported by APIs.
Creating threads is inexpensive (cheap) compared to creating processes:
They do not need a new address space, global data, program code, or operating
system resources
Context switching is faster, as the only things to save/restore are the program
counter, registers, and stack
Similarity
Both share the CPU, and only one thread/process is active (running) at a time.
Like processes, threads within a process execute sequentially.
Like processes, threads can create children.
Like processes, if one thread is blocked, another thread can run.
Difference
Unlike processes, threads are not independent of one another.
Unlike processes, all threads can access every address in the task.
Unlike processes, threads are designed to assist one another.
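The shared-address-space point can be seen with Python's threading module. This is a minimal sketch (the names `shared`, `worker` and `demo` are invented for the example): both threads append to the same list directly, something two separate processes could only do through an IPC mechanism such as a pipe or a shared-memory segment.

```python
import threading

shared = []   # data section shared by all threads of this process

def worker(tag):
    # Each thread appends directly to the same list: no pipes and no
    # message passing are needed, because threads share the process's memory.
    shared.append(tag)

def demo():
    threads = [threading.Thread(target=worker, args=(t,)) for t in ("a", "b")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(shared)
```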
2.4.1. Multithreading
Multithreading refers to the ability of an operating system to support multiple threads of
execution within a single process. A traditional (heavyweight) process has a single thread of
control: there is one program counter and one set of instructions carried out at a time. If a
process has multiple threads of control, it can perform more than one task at a time.
Each thread has its own program counter, stack and registers, but threads share common
code, data and some operating system data structures, such as files.
Traditionally there is a single thread of execution per process. Example: MS-DOS
supports a single user process and a single thread. UNIX supports multiple user
processes but only one thread per process.
Multithreading: the Java runtime environment is an example of one process with
multiple threads.
Examples of systems supporting multiple processes, with each process supporting
multiple threads, include Windows 2000, Solaris, Linux, Mach, and OS/2.
Single and Multithreaded Processes
In a single-threaded process model, the representation of a process includes its PCB, user
address space, and user and kernel stacks. When a process is running, the contents of the
processor registers are controlled by that process, and the contents of these registers are
saved when the process is not running.
In a multi-threaded environment, there is still a single PCB and address space, but there
are separate stacks for each thread, as well as separate control blocks for each thread
containing register values, priority, and other thread-related state information.
Benefits of Multithreading
There are four major benefits of multithreading:
1. Responsiveness: one thread can continue to respond while other threads are blocked
or slowed down doing computations.
2. Resource sharing: threads share the common code, data and resources of the process
to which they belong. This allows multiple tasks to be performed within the same
address space.
3. Economy: creating and allocating processes is expensive, while creating threads is
cheaper as they share the resources of the process to which they belong. Hence, it is
more economical to create and context-switch threads.
4. Scalability/utilization of multiprocessor architectures: the benefits of multithreading
are increased in a multiprocessor architecture, where threads can execute in parallel.
A single-threaded process can run on only one CPU, no matter how many are
available.
2. One-to-One
Each user-level thread maps to a kernel thread.
A separate kernel thread is created to handle each user-level thread.
It provides more concurrency and solves the problem of blocking system calls.
Managing the one-to-one model involves more overhead, which slows down the system.
Drawback: creating a user thread requires creating the corresponding kernel thread.
Most implementations of this model put restrictions on the number of threads created.
3. Many-to-Many
Allows the mapping of many user threads to many kernel threads.
Allows the OS to create a sufficient number of kernel threads.
It combines the best features of the one-to-one and many-to-one models.
Users have no restrictions on the number of threads created.
Blocking kernel system calls do not block the entire process.
Two-tier model
Threading Issues: some of the issues to be considered for multi-threaded programs are:
Semantics of fork() and exec() system calls
Thread cancellation.
Signal handling
Thread pools
Thread-specific data
1. Semantics of fork() and exec() system calls: In a multithreaded program the semantics
of the fork and exec system calls change. If one thread calls fork, there are two options:
the new process can duplicate all the threads, or the new process is a process with a single
thread. Some systems have chosen to provide two versions of fork.
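The two options can be observed with POSIX fork() from Python. This is a sketch assuming a POSIX system and CPython, where os.fork() follows the second option and duplicates only the calling thread (the names `idle` and `fork_demo` are invented for the example):

```python
import os
import threading
import time

def idle():
    time.sleep(5)   # stands in for a second thread of control

def fork_demo():
    """Fork while two threads exist; count the threads in the child."""
    threading.Thread(target=idle, daemon=True).start()
    parent_threads = threading.active_count()   # main thread + idle thread
    pid = os.fork()
    if pid == 0:
        # Child: in CPython only the thread that called fork() survives,
        # i.e. the new process is a process with a single thread.
        os._exit(threading.active_count())
    _, status = os.waitpid(pid, 0)              # parent waits for the child
    return parent_threads, os.waitstatus_to_exitcode(status)
```

The parent reports two or more threads, while the child reports exactly one.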
2. Thread cancellation: the task of terminating a thread before its completion. Example: if
multiple threads are searching a database and one gets the result, the others should be
cancelled.
Asynchronous cancellation: one thread immediately terminates the target thread.
Deferred cancellation: the target thread periodically checks whether it should
terminate.
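Deferred cancellation can be sketched with a shared flag that each thread polls at safe points, as in the database-search example above. The names (`searcher`, `stop`, `parallel_search`) and the numeric range are illustrative assumptions:

```python
import threading

def searcher(lo, hi, value, stop, results):
    for candidate in range(lo, hi):
        if stop.is_set():          # cancellation point: check the flag
            return                 # exit cleanly, at a safe moment
        if candidate == value:
            results.append(candidate)
            stop.set()             # ask the other searchers to cancel
            return

def parallel_search(value, span=400_000, workers=4):
    stop, results = threading.Event(), []
    step = span // workers
    threads = [threading.Thread(target=searcher,
                                args=(i * step, (i + 1) * step, value, stop, results))
               for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results[0] if results else None
```

Because each thread only exits at its own check, it never dies mid-update, which is the safety advantage of deferred over asynchronous cancellation.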
3. Thread signalling: - Signals are used in UNIX systems to notify a process that a
particular event has occurred
4. Signal handler: used to process signals.
A signal is generated by a particular event.
The signal is delivered to a process.
The signal is handled.
Options:
Deliver the signal to the thread to which the signal applies
Deliver the signal to every thread in the process
Deliver the signal to certain threads in the process
Assign a specific thread to receive all signals for the process
5. Thread pools: creating a new thread every time one is needed and then deleting it
when it is done can be inefficient, and can also lead to a very large (unlimited) number of
threads being created. An alternative solution is to create a number of threads when the
process first starts, and put those threads into a thread pool.
Threads are allocated from the pool as needed and returned to the pool when no longer
needed. When no threads are available in the pool, the process may have to wait until one
becomes available.
The (maximum) number of threads available in a thread pool may be determined by
adjustable parameters, possibly dynamically in response to changing system loads.
Win32 provides thread pools through the QueueUserWorkItem function, and
Java also provides support for thread pools.
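Python exposes the same pattern through concurrent.futures; here is a minimal sketch (the pool size of 4 and the `square` task are arbitrary choices for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n

def pooled_squares(values):
    # Four worker threads are created once, up front, and reused for every
    # task; no thread is created or destroyed per request.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(square, values))
```

Submitting more tasks than workers simply queues them until a pooled thread becomes free, which is exactly the waiting behaviour described above.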
6. Thread-specific data: most data is shared among threads, and this is one of the major
benefits of using threads in the first place. However, in some circumstances each thread
needs its own copy of certain data. Such data is called thread-specific data. Most major
thread libraries (Pthreads, Win32, Java) provide support for thread-specific data.
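Python's threading.local provides the same facility; a short sketch (the names `tls`, `seen` and `worker` are invented for the example):

```python
import threading

tls = threading.local()   # one shared object, but per-thread attribute storage
seen = {}

def worker(name):
    tls.name = name               # thread-specific: invisible to other threads
    # Even though `tls` is a shared global object, each thread reads back
    # exactly the value it stored itself.
    seen[name] = tls.name

def demo():
    threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return seen
```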
Scheduler activation
Both the many-to-many and two-level models require communication to maintain the
appropriate number of kernel threads allocated to the application. Scheduler activations
provide upcalls, a communication mechanism from the kernel to the thread library. This
communication allows an application to maintain the correct number of kernel threads.
2.4.3. Types of Thread
Threads are implemented in two ways:
1. User Level
2. Kernel Level
User Level Thread
With user threads, all of the work of thread management is done by the application, and
the kernel is not aware of the existence of threads. The thread library contains code for
creating and destroying threads, for passing messages and data between threads, for
scheduling thread execution and for saving and restoring thread contexts. The application
begins with a single thread and begins running in that thread.
User level threads are generally fast to create and manage.
Advantage of user level threads over kernel level threads:
1. Thread switching does not require kernel mode privileges.
Kernel Level Thread
With kernel threads, scheduling by the kernel is done on a thread basis. The kernel performs
thread creation, scheduling and management in kernel space. Kernel threads are generally
slower to create and manage than user threads.
Advantages of Kernel level thread:
a. The kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
b. If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
Disadvantages:
1. Kernel threads are generally slower to create and manage than the user threads.
2. Transfer of control from one thread to another within the same process requires a mode
switch to the kernel.
Advantages of Thread
1. Threads minimize context switching time.
2. Use of threads provides concurrency within a process.
3. Efficient communication.
4. Economy- It is more economical to create and context switch threads.
5. Utilization of multiprocessor architectures –
The benefits of multithreading can be greatly increased in a multiprocessor architecture.
A child process executes the same code as its parent but has its own memory, open files
and file resources, whereas threads share these with one another.
If one server process is blocked, no other server process can execute until the first process
is unblocked; while one server thread is blocked and waiting, a second thread in the same
task can run.
Multiple redundant processes use more resources than a multithreaded process;
multithreaded processes use fewer resources than multiple redundant processes.
In multiple processes, each process operates independently of the others; in contrast, one
thread can read, write or even completely wipe out another thread's stack.
That part of the program where the shared memory is accessed is called the critical section
(CS). If we could arrange matters such that no two processes were ever in their critical
sections at the same time, we could avoid race conditions.
Although this requirement avoids race conditions, this is not sufficient for having parallel
processes cooperate correctly and efficiently using shared data. We need four conditions to
hold to have a good solution:
1. No two processes may be simultaneously inside their critical sections.
2. No assumptions may be made about speeds or the number of processors.
3. No process running outside its CS may block other processes.
4. No process should have to wait forever to enter its CS.
Solution to Critical-Section Problem
A solution to the critical-section problem must satisfy the following three
requirements.
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other
processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there exist some
processes that wish to enter their critical sections, then only those processes that are
not executing in their remainder sections can participate in the decision on which will
enter its critical section next, and this selection cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times that other processes
are allowed to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted
Assume that each process executes at a nonzero speed
No assumption concerning relative speed of the N processes
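Mutual exclusion is what a lock (mutex) provides in practice. The following sketch uses Python's threading.Lock; the counter update is the critical section, and the worker and loop counts are arbitrary choices for illustration:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:           # entry section: at most one thread gets past here
            counter += 1     # critical section: access to the shared data
        # exit section: the lock is released when the `with` block ends

def run(workers=4, times=50_000):
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(times,))
               for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

Without the lock, the interleaved read-modify-write steps of `counter += 1` could lose updates (a race condition); with it, the result is always workers x times.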
Solution to CS Problem – Mutual Exclusion
Advantages
Applicable to any number of processes on either a single processor or
multiple processors sharing main memory
It is simple and therefore easy to verify
It can be used to support multiple critical sections
Disadvantages
Busy-waiting consumes processor time
Starvation is possible when a process leaves a critical section and
more than one process is waiting.
Deadlock
If a low priority process holds the critical region and a higher priority process
needs it, the higher priority process will obtain the processor (CPU) only to
busy-wait for the critical region.
2.6. Process Scheduling
A multiprogramming operating system allows more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU using time
multiplexing. The scheduling mechanism is the part of the process manager that handles the
removal of the running process from the CPU and the selection of another process on the
basis of a particular strategy.
Scheduling Queues
Scheduling is deciding which process to execute and when. The objective of
multiprogramming is to have some process running at all times. Timesharing means
switching the CPU frequently enough that users can interact with each program while it is
running. If there are many processes, the rest have to wait until the CPU is free.
Scheduling queues:
Job queue – the set of all processes in the system.
Ready queue – the set of all processes residing in main memory, ready and waiting
to execute.
Device queues – the sets of processes waiting for I/O devices.
Each device has its own queue.
A process migrates between the various queues during its lifetime. When processes enter
the system, they are put into the job queue, which consists of all processes in the system.
The operating system also has other queues.
A device queue holds the list of processes waiting for a particular I/O device; each device
has its own device queue. The figure below shows the queuing diagram of process
scheduling: each queue is represented by a rectangular box, the circles represent the
resources that serve the queues, and the arrows indicate the flow of processes in the system.
Fig: Queuing Diagram
Queues are of two types: the ready queue and a set of device queues. A newly arrived process is put in the ready queue, where processes wait for the CPU to be allocated. Once the CPU is assigned to the process, the process executes. While executing, one of several events could occur:
1. The process could issue an I/O request and then be placed in an I/O queue.
2. The process could create a new subprocess and wait for its termination.
3. The process could be removed forcibly from the CPU as a result of an interrupt and be put back in the ready queue.
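The queue migration described above can be sketched with plain FIFO queues; the process names and the sequence of events are invented for illustration.

```python
from collections import deque

# Invented processes for illustration.
job_queue = deque(["P1", "P2", "P3"])
ready_queue = deque()
device_queue = deque()   # one queue per device in a real system

# Admit every job into main memory: job queue -> ready queue.
while job_queue:
    ready_queue.append(job_queue.popleft())

# Dispatch the process at the head of the ready queue.
running = ready_queue.popleft()

# Event 1: the running process issues an I/O request and joins a device queue.
device_queue.append(running)

# Event 3: the next process is dispatched, then preempted by an interrupt
# and put back at the tail of the ready queue.
running = ready_queue.popleft()
ready_queue.append(running)

print(list(ready_queue), list(device_queue))
```

Running this shows P1 waiting on the device while P2 and P3 cycle through the ready queue.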
Schedulers
1. Long Term Scheduler
It is also called the job scheduler. It selects processes from the job pool and loads them into memory for execution, thereby controlling the degree of multiprogramming, and it governs the change of a process from the new state to the ready state. On some systems, the long term scheduler may be absent or minimal; time-sharing operating systems have no long term scheduler.
2. Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It governs the change of a process from the ready state to the running state: the CPU scheduler selects from among the processes that are ready to execute and allocates the CPU to one of them. The short term scheduler, also known as the dispatcher, executes most frequently and makes the fine-grained decision of which process to execute next. The short term scheduler is faster than the long term scheduler.
3. Medium Term Scheduler
Medium term scheduling is part of the swapping function. It removes the processes from the
memory. It reduces the degree of multiprogramming. The medium term scheduler is in
charge of handling the swapped-out processes.
Comparison between Schedulers

Sr. No. | Long Term | Short Term | Medium Term
1 | It is the job scheduler | It is the CPU scheduler | It does swapping
2 | Speed is less than the short term scheduler | Speed is very fast | Speed is in between both
3 | Controls the degree of multiprogramming | Less control over the degree of multiprogramming | Reduces the degree of multiprogramming
4 | Absent or minimal in time-sharing systems | Minimal in time-sharing systems | Time-sharing systems use the medium term scheduler
5 | Selects processes from the pool and loads them into memory for execution | Selects from among the processes that are ready to execute | A process can be reintroduced into memory and its execution continued
6 | Process state: new to ready | Process state: ready to running | -
7 | Selects a good mix of I/O-bound and CPU-bound processes | Selects a new process for the CPU quite frequently | -
Scheduling Criteria
Performance related: quantitative and measurable, such as response time and throughput.
Non-performance related: qualitative, such as predictability and proportionality.
Scheduling Policies
We will concentrate on scheduling at the level of selecting among a set of ready processes. The scheduler is invoked whenever the operating system must select a user-level process to execute:
after process creation or termination
when a process blocks on I/O
when an I/O interrupt occurs
when a clock interrupt occurs (if preemptive)
Criteria for a good scheduling
fairness: all processes get fair share of the CPU
efficiency: keep CPU busy 100% of time
response time: minimize response time
turnaround: minimize the time batch users must wait for output
throughput: maximize number of jobs per hour
These criteria compete with one another: fairness versus efficiency, and interactive response versus batch throughput.
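Turnaround and waiting time can be computed for a simple FCFS schedule to see how one long burst hurts everyone behind it; the burst times below are example values, and all processes are assumed to arrive at time 0.

```python
def fcfs_metrics(bursts):
    """Return (average waiting time, average turnaround time) for FCFS.

    bursts: CPU burst times of processes in arrival order,
    all assumed to arrive at time 0.
    """
    waiting, turnaround, clock = [], [], 0
    for b in bursts:
        waiting.append(clock)        # time spent so far in the ready queue
        clock += b
        turnaround.append(clock)     # completion time minus arrival time (0)
    n = len(bursts)
    return sum(waiting) / n, sum(turnaround) / n

avg_wait, avg_tat = fcfs_metrics([24, 3, 3])   # example burst times
print(avg_wait, avg_tat)  # 17.0 27.0
```

Reordering the same bursts as [3, 3, 24] drops the average waiting time to 3.0, which is why burst order matters so much for FCFS.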
2.7. Deadlock
Deadlock can be defined as a permanent blocking of processes that either compete for system
resources or communicate with each other .The set of blocked processes each hold a resource
and wait to acquire a resource held by another process in the set. All deadlocks involve
conflicting needs for resources by two or more processes
A set of processes or threads is deadlocked when each process or thread is waiting for a
resource to be freed which is controlled by another process
Example 1: Suppose a system has 2 disk drives. If P1 is holding disk 2 and P2 is holding disk 1, and P1 requests disk 1 while P2 requests disk 2, then deadlock occurs. In a multiprogramming environment, several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a wait state. It may happen that waiting processes will never again change state, because the resources they have requested are held by other waiting processes. This situation is called deadlock.
A process must request a resource before using it, and must release the resource after using it. A process may request as many resources as it requires to carry out its designated task. Under the normal mode of operation, a process may utilize a resource only in the following sequence:
1. Request: If the request cannot be granted immediately, then the requesting process must wait until it can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource.
A resource-allocation graph (RAG) uses the following notation:
1. Process: each process Pi is shown as a circle.
2. Resource: each resource type Rj is shown as a rectangle, with a dot for each instance.
3. Request edge Pi → Rj: Pi has requested an instance of Rj.
4. Assignment edge Rj → Pi: an instance of Rj is allocated to Pi.
• The RAG shown here tells us about the following situation in a system:
• P = {P1, P2, P3}
• R = {R1, R2, R3, R4}
• E = {P1→R1, P2→R3, R1→P2, R2→P2, R2→P1, R3→P3}
The process states:
• P1 is holding an instance of R2 and is waiting for an instance of R1.
• P2 is holding an instance of R1 and an instance of R2, and is waiting for an instance of R3.
• P3 is holding an instance of R3.
Resource Allocation Graph With a Deadlock
2. Hold and Wait
A process must request all of its resources at once and may not proceed until all of them have been granted. Since resources may then be allocated but not used for a long period, resource utilization will be low. A process that needs several popular resources may have to wait indefinitely, because one of the resources it needs is always allocated to another process; hence starvation is possible.
3. No Preemption
If a process holding certain resources is denied a further request, that process must release the resources originally allocated to it. If a process requests a resource allocated to another process that is itself waiting for additional resources, and the requested resource is not being used, then the resource is preempted from the waiting process and allocated to the requesting process. Preempted resources are added to the list of resources for which the process is waiting. A process is restarted only when it can regain its old resources as well as the new ones it is requesting. This approach is practical only for resources whose state can easily be saved and later restored.
4. Circular Wait
A linear ordering of all resource types is defined, and each process requests resources in increasing order of enumeration. So, if a process is initially allocated instances of resource type R, it can subsequently request only instances of resource types following R in the ordering.
If a system is in a safe state, then there are no deadlocks.
If a system is in an unsafe state, then there is a possibility of deadlock.
A deadlock avoidance method ensures that the system will never enter an unsafe state.
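Avoidance hinges on a safety test: a state is safe if some order exists in which every process can obtain its remaining need and finish. A sketch of the banker's-algorithm safety check, using the classic textbook allocation as example data:

```python
def is_safe(available, allocation, need):
    """Banker's safety test: True iff some full completion order exists."""
    work = list(available)
    finish = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progressed = True
    return all(finish)

# Classic 5-process, 3-resource example state.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))  # True: e.g. order P1, P3, P4, P2, P0
```

Before granting a request, the system tentatively applies it and runs this test; if the resulting state is unsafe, the process must wait.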
Deadlock Detection
If a system does not implement either deadlock prevention or avoidance, deadlock may
occur. Hence the system must provide. A deadlock detection algorithm that examines the
state of the system if there is an occurrence of deadlock.
• An algorithm to recover from the deadlock
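One way to sketch a detection algorithm is a cycle search over the wait-for graph (for single-instance resources, a cycle means deadlock). The graph below encodes the two-disk example from earlier: P1 waits for P2 and P2 waits for P1.

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph {process: processes it waits on}."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited, on current path, finished
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:
                return True             # back edge: a deadlock cycle exists
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in wait_for)

# P1 waits for the disk P2 holds, and vice versa -> deadlock.
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))   # True
print(has_cycle({"P1": ["P2"], "P2": []}))       # False
```

With multi-instance resources the test is more involved (it resembles the banker's safety check), but the cycle search conveys the idea.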
CHAPTER THREE
Memory Management
Fig: A base and a limit register define a logical address space.
The binding of instructions and data to memory addresses can be done at any step along the
way:
1. Compile time: If it is known at compile time where the process will reside in
memory, then absolute code can be generated.
2. Load time: If it is not known at compile time where the process will reside in
memory, then the compiler must generate re-locatable code.
3. Execution time: If the process can be moved during its execution from one memory
segment to another, then binding must be delayed until run time.
The run-time mapping from virtual to physical addresses is done by the memory management
unit (MMU), which is a hardware device.
The base register is called a relocation register. The value in the relocation register is added to every address generated by a user process at the time it is sent to memory. For example, if the base is at 14000, then an attempt by the user to address location 0 is dynamically relocated to location 14000; an access to location 346 is mapped to location 14346. The MS-DOS operating system running on the Intel 80x86 family of processors uses four relocation registers when loading and running processes.
The user program never sees the real physical addresses. The program can create a pointer to location 346, store it in memory, manipulate it, and compare it to other addresses, all as the number 346.
The user program deals with logical addresses. The memory-mapping hardware converts logical addresses into physical addresses. Logical addresses are in the range 0 to max; physical addresses are in the range R + 0 to R + max for a base value R. The user generates only logical addresses.
The concept of a logical address space that is bound to a separate physical address space is
central to proper memory management.
3.2.Swapping
A process, can be swapped temporarily out of memory to a backing store, and then brought
back into memory for continued execution.
Backing Store - fast disk large enough to accommodate copies of all memory
images for all users, must provide direct access to these memory images.
Roll out, roll in - swapping variation used for priority based scheduling algorithms,
lower priority process is swapped out, and so higher priority process can be loaded
and executed.
The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
In swapping:
Context-switch time must be taken into consideration.
Swapping a process with pending I/O needs care.
Modified versions of swapping are found on many systems, e.g. UNIX and Microsoft Windows.
The system maintains a ready queue of ready-to-run processes which have memory images on disk.
Assume a multiprogramming environment with a round-robin CPU-scheduling algorithm. When a quantum expires, the memory manager starts to swap out the process that just finished and to swap in another process to the memory space that has been freed (see Fig.). When each process finishes its quantum, it will be swapped with another process.
Swapping requires a backing store. The backing store is commonly a fast disk. It is large
enough to accommodate copies of all memory images for all users. The system maintains a
ready queue consisting of all processes whose memory images are on the backing store or in
memory and are ready to run.
The context-switch time in such a swapping system is fairly high. Let us assume that the user
process is of size 100K and the backing store is a standard hard disk with transfer rate of 1
megabyte per second. The actual transfer of the 100K process to or from memory takes
100K / 1000K per second = 1/10 second = 100 milliseconds
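The same arithmetic can be wrapped in a small helper to make the proportionality between swap size and transfer time explicit (units follow the text: kilobytes and kilobytes per second).

```python
def swap_transfer_ms(process_kb, rate_kb_per_s):
    """Transfer time in milliseconds for one direction of a swap."""
    return process_kb / rate_kb_per_s * 1000

one_way = swap_transfer_ms(100, 1000)   # the 100K example above
round_trip = 2 * one_way                # swap out plus swap in
print(one_way, round_trip)  # 100.0 200.0
```

Doubling the process size doubles the transfer time, which is why swapping whole large processes is expensive.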
3.2.1.Overlay
An overlay is the replacement of a block of stored instructions or data with another. Keep in memory only those instructions and data that are needed at any given time.
Overlays are needed when a process is larger than the amount of memory allocated to it.
They are implemented by the user; no special support from the OS is required.
Programming the design of an overlay structure is complex.
3.3.Contiguous Allocation
The main memory must accommodate both the operating system and the various user
processes. The memory is usually divided into two partitions, one for the resident operating
system, and one for the user processes.
Resident Operating System, usually held in low memory with interrupt vector.
User processes then held in high memory.
3.3.1.Single-Partition Allocation
The operating system resides in low memory, and the user processes execute in high memory. Operating-system code and data must be protected from changes by the user processes, and we also need to protect the user processes from one another. We can provide this protection by using relocation registers.
The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses (for example, relocation = 100,040 and limit = 74,600). With relocation and limit registers, each logical address must be less than the limit register; the MMU maps the logical address dynamically by adding the value in the relocation register. This mapped address is sent to memory.
The relocation-register scheme provides an effective way to allow the operating system size
to change dynamically.
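The relocation/limit mechanism can be sketched as a tiny MMU model, using the register values from the example above (relocation = 100,040, limit = 74,600).

```python
def mmu_map(logical, relocation, limit):
    """Map a logical address, trapping on addresses outside the limit."""
    if not 0 <= logical < limit:
        raise MemoryError(f"trap: address {logical} outside limit {limit}")
    return logical + relocation

# Register values from the example above.
print(mmu_map(0, 100040, 74600))    # 100040
print(mmu_map(347, 100040, 74600))  # 100387
```

An address at or beyond the limit (say 74,600 itself) raises the trap instead of reaching memory, which is exactly the inter-process protection the registers provide.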
3.3.2.Multiple-Partition Allocation
One of the simplest schemes for memory allocation is to divide memory into a number of
fixed-sized partitions. Each partition may contain exactly one process. Thus, the degree of
multiprogramming is bound by the number of partitions. When a partition is free, a process is
selected from the input queue and is loaded into the free partition. When the process
terminates, the partition becomes available for another process.
The operating system keeps a table indicating which parts of memory are available and which
are occupied. Initially, all memory is available for user processes, and is considered as one
large block, of available memory, a hole. When a process arrives and needs memory, we
search for hole large enough for this process.
For example, assume that we have 2560K of memory available and a resident operating system of 400K. This leaves 2160K for user processes. Using FCFS job scheduling, we can immediately allocate memory to processes P1, P2, and P3, leaving a hole of 260K that cannot be used by any of the remaining processes in the input queue. Using round-robin CPU scheduling with a quantum of 1 time unit, a process will terminate at time 14, releasing its memory.
Memory allocation is done using Round-Robin Sequence as shown in fig. When a process
arrives and needs memory, we search this set for a hole that is large enough for this process.
If the hole is too large, it is split into two: One part is allocated to the arriving process; the
other is returned to the set of holes. When a process terminates, it releases its block of
memory, which is then placed back in the set of holes. If the new hole is adjacent to other
holes, we merge these adjacent holes to form one larger hole.
This procedure is a particular instance of the general dynamic storage-allocation problem,
which is how to satisfy a request of size n from a list of free holes. There are many solutions
to this problem. The set of holes is searched to determine which hole is best to allocate, first-
fit, best-fit, and worst-fit are the most common strategies used to select a free hole from the
set of available holes.
First-fit: Allocate the first hole that is big enough. Searching can start either at the beginning
of the set of holes or where the previous first-fit search ended. We can stop searching as soon
as we find a free hole that is large enough.
Best-fit: Allocate the smallest hole that is big enough. We must search the entire list, unless
the list is kept ordered by size. This strategy-produces the smallest leftover hole.
Worst-fit: Allocate the largest hole. Again, we must search the entire list unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
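The three placement strategies can be sketched over a list of hole sizes; the sizes below are invented, and each function returns the index of the chosen hole, or None if no hole fits.

```python
def first_fit(holes, size):
    """Index of the first hole that is big enough."""
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    """Index of the smallest hole that is big enough."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Index of the largest hole."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # invented free-hole sizes, in KB
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
```

For a 212K request, first-fit picks the 500K hole, best-fit the 300K hole (smallest leftover), and worst-fit the 600K hole (largest leftover).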
3.3.3.Fixed Partitioning
Equal-size partitions
Any process whose size is less than or equal to the partition size can be loaded into an available partition. If all partitions are full, the operating system can swap a process out of a partition. A program may not fit in a partition; the programmer must then design the program with overlays.
Main memory use is inefficient. Any program, no matter how small, occupies an entire partition; this is called internal fragmentation. Because all partitions are of equal size, it does not matter which partition is used.
Unequal-size partitions
Each process can be assigned to the smallest partition within which it will fit, with a separate queue for each partition. Processes are assigned in such a way as to minimize wasted memory within a partition.
Dynamic Partitioning
Partitions are of variable length and number.
A process is allocated exactly as much memory as it requires. Eventually, holes appear in the memory; this is called external fragmentation.
Compaction must be used to shift processes so they are contiguous and all free memory is in one block.
Example: 2560K of memory available and a resident OS of 400K. Allocate memory
to processes P1…P4 following FCFS.
Shaded regions are holes
Initially P1, P2, and P3 create the first memory map.

Process   Memory   Time
P1        600K     10
P2        1000K    5
P3        300K     20
P4        700K     8
P5        500K     15
3.4.Fragmentation
External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous.
– 50% rule: given N allocated blocks, another 0.5N blocks will be lost to fragmentation.
Internal Fragmentation – allocated memory may be slightly larger than requested
memory; this size difference is memory internal to a partition, but not being used.
– Consider a hole of 18,464 bytes and a process that requires 18,462 bytes.
– If we allocate exactly the required block, we are left with a hole of 2 bytes.
– The overhead to keep track of this free partition will be substantially larger than the hole itself.
– Solution: allocate the very small free partition as part of the larger request.
Solution to fragmentation
1. Compaction
2. Paging
3. Segmentation
1. Compaction: Reduce external fragmentation by compaction
– Shuffle memory contents to place all free memory together in one large block.
– Compaction is possible only if relocation is dynamic, and is done at execution
time.
– I/O problem
• Latch the job in memory while it is involved in I/O.
• Do I/O only into OS buffers.
– Whether to compact depends on its cost.
3.5. Paging
Basic Idea: logical address space of a process can be made noncontiguous; process is
allocated physical memory wherever it is available.
Divide physical memory into fixed-sized blocks called frames (size is power of 2,
between 512 bytes and 8,192 bytes)
Divide logical memory into blocks of same size called pages
Keep track of all free frames
To run a program of size n pages, need to find n free frames and load program
Set up a page table to translate logical to physical addresses
Address Translation Scheme
• Address generated by CPU is divided into:
– Page number (p) – used as an index into a page table which contains base
address of each page in physical memory.
– Page offset (d) – combined with base address to define the physical memory
address that is sent to the memory unit.
Page number is an index to the page table.
The page table contains base address of each page in physical memory.
The base address is combined with the page offset to define the physical address that is
sent to the memory unit.
The size of a page is typically a power of 2.
o 512 –8192 bytes per page.
The size of the logical address space is 2^m and the page size is 2^n address units.
The high-order m - n bits designate the page number.
The n low-order bits indicate the page offset.
Fig: Paging Hardware
The page size is defined by the hardware. The size of a page is typically a power of 2, varying between 512 bytes and 8192 bytes per page, depending on the computer architecture. The selection of a power of 2 as the page size makes the translation of a logical address into a page number and page offset easy. If the size of the logical address space is 2^m, and the page size is 2^n addressing units (bytes or words), then the high-order m - n bits of a logical address designate the page number, and the n low-order bits designate the page offset. Thus, the logical address is as follows:
Example of Paging
Assume:
o Page size = 4 bytes
o Physical memory = 32 bytes (8 pages)
How is a logical memory address mapped into physical memory?
Logical address 0 (containing 'a'):
i. is on page 0;
ii. is at offset 0.
Indexing into the page table, you can see that page 0 is in frame 5, so logical address 0 maps to physical address 20 = (5x4)+0.
Similarly, logical address 13 (page 3, offset 1) maps to physical address 9 = (2x4)+1.
Logical address 0 maps to (5x4)+0 = 20
Logical address 3 maps to (5x4)+3 = 23
Logical address 4 maps to (6x4)+0 = 24
Logical address 13 maps to (2x4)+1 = 9
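These mappings follow mechanically from the page table. In the sketch below, frames 5, 6, and 2 for pages 0, 1, and 3 come from the example; the frame for page 2 is not given in the text, so frame 1 is assumed for illustration.

```python
PAGE_SIZE = 4
page_table = [5, 6, 1, 2]   # frame number for each page; frame 1 for page 2 is assumed

def translate(logical):
    """Translate a logical address to a physical address via the page table."""
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

for addr in (0, 3, 4, 13):
    print(addr, "->", translate(addr))
```

Running this reproduces the four mappings above: 0 to 20, 3 to 23, 4 to 24, and 13 to 9.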
3.6. Segmentation
A user program can be subdivided using segmentation, in which the program and its
associated data are divided into a number of segments. It is not required that all segments of
all programs be of the same length, although there is a maximum segment length. As with
paging, a logical address using segmentation consists of two parts, in this case a segment
number and an offset.
Because of the use of unequal-size segments, segmentation is similar to dynamic partitioning.
In segmentation, a program may occupy more than one partition, and these partitions need
not be contiguous. Segmentation eliminates internal fragmentation but, like dynamic
partitioning, it suffers from external fragmentation. However, because a process is broken up
into a number of smaller pieces, the external fragmentation should be less. Whereas paging is
invisible to the programmer, segmentation is usually visible and is provided as a convenience
for organizing programs and data.
Another consequence of unequal-size segments is that there is no simple relationship between
logical addresses and physical addresses. Segmentation scheme would make use of a segment
table for each process and a list of free blocks of main memory. Each segment table entry
would have to, as in paging, give the starting address in main memory of the corresponding
segment. The entry should also provide the length of the segment, to assure that invalid
addresses are not used. When a process enters the Running state, the address of its segment
table is loaded into a special register used by the memory management hardware.
Consider an address of n + m bits, where the leftmost n bits are the segment number and the
rightmost m bits are the offset. The following steps are needed for address translation:
Extract the segment number as the leftmost n bits of the logical address.
Use the segment number as an index into the process segment table to find the starting
physical address of the segment. Compare the offset, expressed in the rightmost m bits, to
the length of the segment. If the offset is greater than or equal to the length, the address is
invalid.
The desired physical address is the sum of the starting physical address of the segment plus
the offset. Segmentation and paging can be combined to have a good result.
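The translation steps can be sketched with a per-process segment table of (base, length) entries; the table contents below are invented example values.

```python
# Invented segment table: (base physical address, length) per segment.
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def translate(segment, offset):
    """Translate (segment, offset), trapping on an invalid offset."""
    base, length = segment_table[segment]
    if offset >= length:
        raise MemoryError("trap: offset beyond segment length")
    return base + offset

print(translate(2, 53))    # 4353
print(translate(0, 999))   # 2399
```

An offset at or beyond the segment length (say offset 400 in segment 1) raises the trap, which is the length check described in the steps above.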
o Memory Management Scheme that supports user view of memory.
o A program is a collection of segments.
o A segment is a logical unit such as
– main program, procedure, function
– local variables, global variables, common block
– stack, symbol table, arrays and so on.
Protect each entity independently
Allow each segment to grow independently
Share each segment independently
User’s View of a Program
Logical View of Segmentation
Segment-table base register (STBR) points to the segment table's location in memory.
Segment-table length register (STLR) indicates the number of segments used by a program.
Note: segment number s is legal if s < STLR.
CHAPTER FOUR
Device management
One reason for buffering is to cope with a speed mismatch between the producer and the consumer of a data stream: data accumulate in a second buffer while the first buffer is written to disk. A second use of buffering is to adapt between devices that have different data transfer sizes.
Caching and buffering are two distinct functions, but sometimes a region of memory can be
used for both purposes.
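Double buffering, one form of the speed-mismatch adaptation described above, lets the producer fill one buffer while the other is drained. A minimal single-threaded sketch, with the "device" replaced by a plain list:

```python
BUF_SIZE = 4
buffers = [[], []]
active = 0          # buffer currently being filled by the producer
written = []        # stands in for the disk

def flush(buf):
    """Drain a full buffer to the 'device'."""
    written.extend(buf)
    buf.clear()

def produce(item):
    global active
    buffers[active].append(item)
    if len(buffers[active]) == BUF_SIZE:
        full, active = active, 1 - active   # swap buffer roles
        flush(buffers[full])                # drain one while the other fills

for byte in range(10):
    produce(byte)
print(written)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

After ten items, two full buffers have been flushed and the last two items still sit in the active buffer, waiting for it to fill.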
Error Handling
An operating system that uses protected memory can guard against many kinds of hardware
and application errors.
Device drivers
In computing, a device driver or software driver is a computer program allowing higher-level
computer programs to interact with a hardware device.
A driver typically communicates with the device through the computer bus or
communications subsystem to which the hardware connects. When a calling program invokes
a routine in the driver, the driver issues commands to the device.
Once the device sends data back to the driver, the driver may invoke routines in the original
calling program. Drivers are hardware-dependent and operating-system-specific.
They usually provide the interrupt handling required for any necessary asynchronous time-
dependent hardware interface.
Chapter 5
File Systems
The properties of a file can be different on many systems, however, some of them are
common:
1. Name – unique name for a human to recognize the file.
2. Identifier – A unique number tag that identifies the file in the file system,
and non-human readable.
3. Type – Information about the file to get support from the system.
4. Size – The current size of the file in bytes, kilobytes, or words.
5. Access Control – Defines who could read, write, execute, change, or
delete the file in the system.
6. Time, Date, and User identification – This information kept for date
created, last modified, or accessed and so on.
Repositioning within a file. The directory is searched for the appropriate entry, and the current-file-position pointer is repositioned to a given value. Repositioning within a file need not involve any actual I/O. This file operation is also known as a file seek.
Deleting a file. To delete a file, we search the directory for the named file. Having found the
associated directory entry, we release all file space, so that it can be reused by other files, and
erase the directory entry.
Truncating a file. The user may want to erase the contents of a file but keep its attributes. Rather than forcing the user to delete the file and then recreate it, this function allows all attributes to remain unchanged (except for file length) but lets the file be reset to length zero and its file space released.
A file is a collection of similar records. The file is treated as a single entity by users and
applications and may be referred by name. Files have unique file names and may be created
and deleted. Restrictions on access control usually apply at the file level.
A file is a container for a collection of information. The file manager provides a protection mechanism that allows users to administer how processes executing on behalf of different users can access the information in a file. File protection is a fundamental property of files because it allows different people to store their information on a shared computer.
Files represent programs and data. Data files may be numeric, alphabetic, binary, or alphanumeric. Files may be free form, such as text files. In general, a file is a sequence of bits, bytes, lines, or records.
A field is the basic element of data; an individual field contains a single value. A record is a collection of related fields that can be treated as a unit by some application program. As noted above, a file is a collection of similar records that is treated as a single entity by users and applications, referenced by name, and subject to access control restrictions at the file level.
A database is a collection of related data. Database is designed for use by a number of
different applications. A database may contain all of the information related to an
organization or project, such as a business or a scientific study. The database itself consists of
one or more types of files. Usually, there is a separate database management system that is
independent of the operating system.
To minimize or eliminate the potential for lost or destroyed data.
To provide a standardized set of I/O interface routines to user processes.
To provide I/O support for multiple users, in the case of multiple-user systems.
File System Architecture
At the lowest level, device drivers communicate directly with peripheral devices or their controllers or channels. A device driver is responsible for starting I/O operations on a device and processing the completion of an I/O request. For file operations, the typical devices controlled are disk and tape drives. Device drivers are usually considered to be part of the operating system.
The I/O control consists of device drivers and interrupt handlers to transfer information
between the memory and the disk system. A device driver can be thought of as a translator.
The basic file system needs only to issue generic commands to the appropriate device driver
to read and write physical blocks on the disk.
The file-organization module knows about files and their logical blocks, as well as physical
blocks. By knowing the type of file allocation used and the location of the file, the file-
organization module can translate logical block addresses to physical block addresses for the
basic file system to transfer. Each file's logical blocks are numbered from 0 (or 1) through N,
whereas the physical blocks containing the data usually do not match the logical numbers, so
a translation is needed to locate each block. The file-organization module also includes the
free-space manager, which tracks unallocated blocks and provides them to the file-organization module when requested.
The logical file system uses the directory structure to provide the file-organization module
with the information the latter needs, given a symbolic file name. The logical file system is
also responsible for protection and security.
To create a new file, an application program calls the logical file system. The logical file
system knows the format of the directory structures. To create a new file, it reads the
appropriate directory into memory, updates it with the new entry, and writes it back to the
disk.
Once the file is found, the associated information such as size, owner, access permissions, and data block locations is generally copied into a table in memory, referred to as the open-file table, which holds information about all the currently opened files.
The first reference to a file (normally an open) causes the directory structure to be searched
and the directory entry for this file to be copied into the table of opened files. The index into
this table is returned to the user program, and all further references are made through the
index rather than with a symbolic name. The name given to the index varies. UNIX systems
refer to it as a file descriptor, Windows/NT as a file handle, and other systems as a file
control block.
Consequently, as long as the file is not closed, all file operations are done on the open-file
table. When the file is closed by all users that have opened it, the updated file information is
copied back to the disk-based directory structure.
Chapter 6
Security
Attack: an attempt to break security
Form of malicious access
Breach of confidentiality: Unauthorized reading of data (theft of information)
Breach of integrity: Unauthorized modification of data
Breach of availability: Unauthorized destruction of data
Theft of service: Unauthorized use of resources.
Denial of service: Preventing legitimate use of the system.
Absolute protection against malicious use is not possible; however, the cost to the perpetrator can be made sufficiently high to deter most unauthorized attempts.
Protection is the mechanism that controls the access of programs, processes, or users to the resources defined by a computer system. In a multiprogramming operating system, protection allows multiple users to safely share a common logical namespace, such as a directory of files. Computer resources such as software, memory, and the processor all need protection. Protection is achieved by maintaining confidentiality, integrity, and availability in the OS. It is critical to secure the system from unauthorized access, viruses, worms, and other malware.
memory immediately after A reads F. Alternatively, B may attempt to modify the access
control policy that is stored in the OS memory so that the OS thinks that B is permitted access
to this file.
6.2.3. Encryption
Encryption: Transforms clear text into cipher text. Properties of a good encryption technique:
It is relatively simple for authorized users to encrypt and decrypt data.
The scheme depends not on the secrecy of the algorithm but on a
parameter of the algorithm called the encryption key.
It is extremely difficult for an intruder to determine the encryption key.
The Data Encryption Standard (DES) substitutes characters and rearranges their
order on the basis of an encryption key provided to authorized users via a
secure mechanism; the scheme is only as secure as that mechanism.
RSA: is one of the first practical public-key cryptosystems and is widely used for secure data
transmission. In such a cryptosystem, the encryption key is public and differs from the
decryption key which is kept secret.
Human
Careful screening of users to reduce the chance of unauthorized
access.
Network
No one should intercept the data on the network.
Operating system
The system must protect itself from accidental or purposeful security
breaches.
A weakness at a high level of security allows circumvention of low-level measures.
In the remainder of this chapter we discuss security at OS level
Security measures at OS level
User authentication: Verifying the user's identity
Program threats: Unexpected misuse of programs
System threats: Worms and viruses
Intrusion detection: Detect attempted intrusions or successful intrusions and initiate
appropriate responses to the intrusions.
Cryptography: Ensuring protection of data over network
Authentication: User identity is most often established through passwords, which can be
considered a special case of either keys or capabilities. Passwords are easy to understand
and use, but they can be guessed, illegally transferred, or captured by visual or
electronic monitoring or network sniffing.
Passwords must be kept secret.
Frequent change of passwords.
Use of "non-guessable" passwords.
Log all invalid access attempts.
Passwords may also either be encrypted or allowed to be used only once.
For a given value of x, f(x) is calculated and stored; it is difficult to
recover x from f(x).
One time passwords: Password is different for each session.
Example: My mother's name is K…: Password "MmnisK"
Biometrics: Palm and hand readers: temperature map, finger length, finger width and
line patterns.
Example: Fingerprint readers.
Two factor authentication scheme
Password plus fingerprint scan
Program Threats
1. Trojan horse: Code segment that misuses its environment. Exploits mechanisms for
   allowing programs written by users to be executed by other users. Consider the use of
   the "." character in a search path: "." tells the shell to include the current directory.
   If the user sets the current directory to a friend's directory, a program planted there
   runs in the user's domain, with the user's privileges, and can affect the user's files.
2. Trap Door
The designer of the code might leave a hole in the software that only she is
capable of using.
Specific user identifier or password that circumvents normal security
procedures.
Could be included in a compiler.
3. Stack and Buffer Overflow: Exploits a bug in a program (overflowing either the stack
   or memory buffers). The attacker determines the vulnerability and writes a program to
   do the following: overflow an input field, command-line argument, or input buffer
   until it writes into the stack; overwrite the current return address on the stack with
   the address of the exploit code placed in the next step; and write into the next space
   on the stack a simple set of code containing the commands the attacker wishes to
   execute, for example, spawning a shell.
System Threats
1. Worms – standalone programs that use a spawn mechanism. A worm spawns copies of
   itself, using up system resources and perhaps locking out system use by all other
   processes.
2. Internet worm: Exploited UNIX networking features (remote access) and bugs in the
   finger and sendmail programs. A grappling-hook program uploaded the main worm
   program.
3. Viruses – fragments of code embedded in a legitimate program. They mainly affect
   microcomputer systems and spread by downloading viral programs from public bulletin
   boards or exchanging floppy disks containing an infection. MS Word documents can
   carry macro viruses; the RTF format, which cannot contain macros, is safe. Practice
   safe computing.
4. Denial of Service
   Overload the targeted computer, preventing it from doing any useful work.
   Repeated downloading of a page, or partially started TCP/IP sessions, can eat
   up all resources.
   Denial-of-service attacks are difficult to prevent.
Threat Monitoring
Check for suspicious patterns of activity – e.g., several incorrect password attempts
may signal password guessing.
Audit log – records the time, user, and type of all accesses to an object; useful for
recovery from a violation and developing better security measures.
Scan the system periodically for security holes; done when the computer is relatively
unused.
Check for:
Short or easy-to-guess passwords
Unauthorized set-uid programs
Unauthorized programs in system directories
Unexpected long-running processes
Improper directory protections
Improper protections on system data files
Dangerous entries in the program search path (Trojan horse)
Changes to system programs: monitor checksum values
Firewall: a computer or router placed between trusted and untrusted hosts. It monitors
and logs all connections and limits network access between the two security domains.
Spoofing: An unauthorized host pretends to be an authorized host by meeting some
authorization criterion.
Intrusion Detection:
Detect attempts to intrude into computer systems.
A wide variety of techniques exist, differing in:
The time of detection
The type of inputs examined to detect intrusion activity
The range of response capabilities
Responses include alerting the administrator, killing the intruding process, or exposing
a false resource to the attacker (one that appears real) in order to gain more
information about the attacker. These solutions are known as intrusion detection systems.
Detection methods:
Auditing and logging.
Install logging tool and analyze the external accesses.
Tripwire (UNIX software that checks whether certain files and directories have
been altered – e.g., password files)
Integrity checking tool for UNIX.
It operates on the premise that a large class of intrusions results in
anomalous modification of system directories and files.
It first enumerates the directories and files to be monitored for changes and
deletions or additions. Later it checks for modifications by comparing
signatures.
System call monitoring: Detects when a process is deviating from expected system call
behavior.
Cryptography: Eliminates the need to trust the network. Cryptography enables a recipient
of a message to verify that the message was created by some computer possessing a certain
key, and keys are designed to be computationally infeasible to derive from the messages.