Operating Systems Notes
SYLLABUS
Unit I: What is Operating System? History and Evolution of OS, Basic OS functions, Resource
Abstraction, Types of Operating Systems– Multiprogramming Systems, Batch Systems, Time
Sharing Systems; Operating Systems for Personal Computers, Workstations and Hand-held
Devices, Process Control & Real time Systems.
Unit II: Processor and User Modes, Kernels, System Calls and System Programs, System View
of the Process and Resources, Process Abstraction, Process Hierarchy, Threads, Threading
Issues, Thread Libraries; Process Scheduling, Non-Preemptive and Preemptive Scheduling
Algorithms.
Unit III: Process Management: Deadlock, Deadlock Characterization, Necessary and Sufficient
Conditions for Deadlock, Deadlock Handling Approaches: Deadlock Prevention, Deadlock
Avoidance and Deadlock Detection and Recovery. Concurrent and Dependent Processes,
Critical Section, Semaphores, Methods for Inter-process Communication; Process
Synchronization, Classical Process Synchronization Problems: Producer-Consumer, Reader-
Writer.
Unit IV: Memory Management: Physical and Virtual Address Space; Memory Allocation
Strategies– Fixed and Variable Partitions, Paging, Segmentation, Virtual Memory.
Unit V: File and I/O Management, OS Security: Directory Structure, File Operations, File
Allocation Methods, Device Management, Pipes, Buffer, Shared Memory, Security Policy
Mechanism, Protection, Authentication and Internal Access Authorization. Introduction to
Android Operating System, Android Development Framework, Android Application
Architecture, Android Process Management and File System, Small Application Development
using Android Development Framework.
MODEL QUESTION PAPER
OPERATING SYSTEMS
Time: 3 Hrs.                                                Max. Marks: 75
Section - A
(OR)
b) Explain system view of the process and resources.
11. a) Explain about deadlock Detection and recovery.
(OR)
b) Discuss classical process synchronization problems.
12. a) Explain the following i) Segmentation ii) Fixed and variable partitions.
(OR)
b) Explain in detail about Demand-paging.
13. a) Explain Authentication and Internal Access Authorization.
(OR)
b) Explain Android Development Framework.
Unit-I
Operating Systems:
An operating system works as an interface between the user and the computer
hardware. The OS is the software which performs basic tasks like input, output,
disk management, and controlling peripherals.
(OR)
An Operating System (OS) is an interface between a computer user and computer
hardware. An operating system is a software which performs all the basic tasks like
file management, memory management, process management, handling input and
output, and controlling peripheral devices such as disk drives and printers
Ex: Windows, Linux, VMS, OS/400, AIX, z/OS, etc.
Device Management
An Operating System manages device communication via their respective drivers. It
does the following activities for device management −
Keeps track of all devices. The program responsible for this task is known as
the I/O controller.
Decides which process gets the device, when, and for how much time.
Allocates the device in an efficient way.
De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage.
These directories may contain files and other directories.
An Operating System does the following activities for file management −
Keeps track of information, location, uses, status etc. The collective facilities are
often known as file system.
Decides who gets the resources.
Allocates the resources.
De-allocates the resources.
Security
By means of passwords and other similar techniques, the OS prevents
unauthorized access to programs and data.
Control over system performance
Recording delays between requests for a service and responses from the system.
Job accounting
Keeping track of time and resources used by various jobs and users.
Error detecting aids
Production of dumps, traces, error messages, and other debugging and error
detecting aids.
Coordination between other software and users
Coordination and assignment of compilers, interpreters, assemblers and other
software to the various users of the computer systems.
Resource Abstraction
Modern computers consist of processors, memories, timers, disks, mice, network
interfaces, printers, and a wide variety of other devices.
The Operating System as a Resource Manager: In the bottom-up view, the
operating system provides for an orderly and controlled allocation of the
processors, memories, and I/O devices among the various programs. The
operating system allows multiple programs to be in memory and run at the same
time.
Resource management includes multiplexing (sharing) resources in two
different ways: in time and in space.
In time multiplexing, different programs take turns using a resource; first
one of them gets to use the resource, then another, and so on. E.g. sharing
the printer: when multiple print jobs are queued up for printing on a single
printer, a decision has to be made about which one is to be printed next.
In space multiplexing, instead of the customers taking turns, each one gets
part of the resource. E.g. main memory is divided up among several running
programs, so each one can be resident at the same time.
An operating system abstraction layer (OSAL)
It provides an application programming interface (API) to an abstract operating
system making it easier and quicker to develop code for
multiple software or hardware platforms.
OS abstraction layers present an abstraction of the common system
functionality offered by any operating system, by providing meaningful and
easy-to-use wrapper functions that in turn encapsulate the system functions
offered by the OS to which the code is being ported.
A well designed OSAL provides implementations of an API for several real-time
operating systems (such as vxWorks, eCos, RTLinux, RTEMS). Implementations may
also be provided for non real-time operating systems, allowing the abstracted
software to be developed and tested in a developer friendly desktop environment.
In addition to the OS APIs, the OS Abstraction Layer project may also provide
a hardware abstraction layer, designed to provide a portable interface to hardware
devices such as memory, I/O ports, and non-volatile memory.
To facilitate the use of these APIs, OSALs generally include a directory structure
and build automation (e.g., set of make files) to facilitate building a project for a
particular OS and hardware platform.
Implementing projects using OSALs allows for development of portable embedded
system software that is independent of a particular real-time operating system.
It also allows for embedded system software to be developed and tested on desktop
workstations, providing a shorter development and debug time.
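To make the wrapper-function idea concrete, here is a minimal sketch in C. The
names osal_task_t and osal_task_create() are hypothetical, and the sketch
assumes a POSIX desktop build where the portable interface maps onto the
Pthreads library; a vxWorks or RTEMS build would keep the same interface and
swap in its native task-creation call.

#include <pthread.h>

/* Hypothetical portable handle type; on this POSIX build it maps to pthread_t. */
typedef pthread_t osal_task_t;

/* Hypothetical OSAL wrapper: one portable interface that encapsulates the
   platform's native thread/task creation call. Returns 0 on success. */
int osal_task_create(osal_task_t *task, void *(*entry)(void *), void *arg)
{
    return pthread_create(task, NULL, entry, arg);
}

Application code written against osal_task_create() can then be rebuilt for a
different RTOS by changing only the wrapper's body, which is exactly the
portability benefit described above.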
Types of Operating Systems
An operating system is a well-organized collection of programs that manages
the computer hardware. It is a type of system software that is responsible for the
smooth functioning of the computer system.
Batch Operating System
In the 1970s, batch processing was very popular. In this technique, similar
types of jobs were batched together and executed as a group.
People were used to having a single computer which was called a mainframe.
In Batch operating system, access is given to more than one person; they submit
their respective jobs to the system for the execution.
The system puts all of the jobs in a queue on a first-come, first-served basis
and then executes the jobs one by one. The users collect their respective
outputs when all the jobs have been executed.
The purpose of this operating system was mainly to transfer control from one job to
another as soon as the job was completed. It contained a small set of programs
called the resident monitor that always resided in one part of the main memory. The
remaining part is used for servicing jobs.
Advantages of Batch OS
The use of a resident monitor improves computer efficiency as it eliminates
CPU idle time between two jobs.
Disadvantages of Batch OS
1. Starvation
Batch processing suffers from starvation.
For Example:
There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the
execution time of J1 is very high, then the other four jobs will never be executed, or
they will have to wait for a very long time. Hence the other processes get starved.
2. Not Interactive
Batch Processing is not suitable for jobs that are dependent on the user's
input. If a job requires the input of two numbers from the console, then it will never
get it in the batch processing scenario since the user is not present at the time of
execution.
Multiprogramming Operating System
Multiprogramming is an extension to batch processing where the CPU is always kept
busy.
Each process needs two types of system time: CPU time and IO time.
In a multiprogramming environment, when a process does its I/O, The CPU can start
the execution of other processes. Therefore, multiprogramming improves the
efficiency of the system.
Advantages of Multiprogramming OS
Throughput of the system is increased, as the CPU always has some program to
execute.
Response time can also be reduced.
Disadvantages of Multiprogramming OS
Multiprogramming systems provide an environment in which various systems
resources are used efficiently, but they do not provide any user interaction with the
computer system.
Multiprocessing Operating System
In multiprocessing, parallel computing is achieved. There is more than one
processor present in the system, so more than one process can be executed at
the same time. This increases the throughput of the system.
Your computer's operating system (OS) manages all of the software and hardware on the
computer. Most of the time, there are several different computer programs running at the same
time, and they all need to access your computer's central processing unit (CPU), memory,
and storage. The operating system coordinates all of this to make sure each program gets what it
needs.
Types of operating systems
Operating systems usually come pre-loaded on any computer you buy. Most people use the
operating system that comes with their computer, but it's possible to upgrade or even change
operating systems. The three most common operating systems for personal computers
are Microsoft Windows, macOS, and Linux.
Modern operating systems use a graphical user interface, or GUI (pronounced gooey). A
GUI lets you use your mouse to click icons, buttons, and menus, and everything is clearly
displayed on the screen using a combination of graphics and text.
Each operating system's GUI has a different look and feel, so if you switch to a different
operating system it may seem unfamiliar at first. However, modern operating systems are
designed to be easy to use, and most of the basic principles are the same.
Microsoft Windows
Microsoft created the Windows operating system in the mid-1980s. There have been many
different versions of Windows, but the most recent ones are Windows 10 (released in
2015), Windows 8 (2012), Windows 7 (2009), and Windows Vista (2007). Windows
comes pre-loaded on most new PCs, which helps to make it the most popular operating
system in the world.
macOS
macOS (previously called OS X) is a line of operating systems created by Apple. It comes
preloaded on all Macintosh computers, or Macs. Some of the specific versions
include Mojave (released in 2018), High Sierra (2017), and Sierra (2016).
According to StatCounter Global Stats, macOS users account for less than 10% of global
operating systems—much lower than the percentage of Windows users (more than 80%). One
reason for this is that Apple computers tend to be more expensive. However, many people do
prefer the look and feel of macOS over Windows.
Linux
Linux (pronounced LINN-ux) is a family of open-source operating systems, which means
they can be modified and distributed by anyone around the world. This is different
from proprietary software like Windows, which can only be modified by the company that owns
it. The advantages of Linux are that it is free, and there are many different distributions—or
versions—you can choose from.
Operating systems for mobile devices or Handheld Devices
The operating systems we've been talking about so far were designed to run
on desktop and laptop computers. Mobile devices such as phones, tablet computers,
and MP3 players are different from desktop and laptop computers, so they run operating
systems that are designed specifically for mobile devices. Examples of mobile operating systems
include Apple iOS and Google Android
Workstation:
A Wintel-platform desktop or laptop general-purpose computer deployed to state
knowledge (i.e. business or office) workers. Computers used by developers and
other high-end or specialized users may differ from this definition and
therefore are not required to adhere to this standard.
For the State's purpose, PC and Workstation are used more or less
interchangeably. A state definition is a desktop or laptop general-purpose
computer deployed to a single state knowledge (i.e. business or office)
worker, typically running a Windows operating system on an Intel x86 or
equivalent architecture. Traditional RISC/Unix-based workstations are
considered a very small subset of the total population of workstations and
are referred to as non-Wintel or specialized workstations.
Unit-2
There are two modes of operation in the operating system to make sure it
works correctly. These are user mode and kernel mode.
They are explained as follows −
User Mode
The system is in user mode when the operating system is running a user
application such as handling a text editor. The transition from user mode to kernel
mode occurs when the application requests the help of operating system or an
interrupt or a system call occurs.
The mode bit is set to 1 in the user mode. It is changed from 1 to 0 when
switching from user mode to kernel mode.
Kernel Mode
The system starts in kernel mode when it boots and after the operating
system is loaded, it executes applications in user mode. There are some privileged
instructions that can only be executed in kernel mode.
These are interrupt instructions, input output management etc. If the
privileged instructions are executed in user mode, it is illegal and a trap is
generated.
The mode bit is set to 0 in the kernel mode. It is changed from 0 to 1 when
switching from kernel mode to user mode.
A user process executes in user mode until it makes a system call. A system
trap is then generated and the mode bit is set to 0. The system call is
executed in kernel mode. After the execution is completed, another system
trap is generated and the mode bit is set back to 1. System control returns
to user mode and the process execution continues.
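As a concrete illustration, here is a minimal C program that crosses the
user/kernel boundary through a standard POSIX system call; write() to file
descriptor 1 (standard output) is used as the example here.

#include <unistd.h>

int main(void)
{
    /* write() is a system call: executing it traps from user mode into
       kernel mode (mode bit 1 -> 0); the kernel performs the I/O and then
       returns control to the program in user mode (mode bit 0 -> 1). */
    write(1, "hello\n", 6);
    return 0;
}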
Necessity of Dual Mode (User Mode and Kernel Mode) in Operating System
The lack of a dual mode i.e. user mode and kernel mode in an operating
system can cause serious problems. Some of these are −
A running user program can accidentally wipe out the operating system by
overwriting it with user data.
Multiple processes can write in the same system at the same time, with
disastrous results.
These problems could have occurred in the MS-DOS operating system which had
no mode bit and so no dual mode.
Below are some examples of how a system call varies from a user function.
1. A system call function may create and use kernel processes to execute the
asynchronous processing.
2. A system call has greater authority than a standard subroutine. A system call with
kernel-mode privilege executes in the kernel protection domain.
3. System calls are not permitted to use shared libraries or any symbols that are not
present in the kernel protection domain.
4. The code and data for system calls are stored in global kernel memory.
System calls are required in various situations in the operating system.
There are commonly five types of system calls. These are as follows:
Process Control
File Management
Device Management
Information Maintenance
Communication
Process Control
Process control is the category of system calls used to direct processes.
Some process control examples include create process, load, execute, abort or
end a process, and terminate a process.
File Management
File management is the category of system calls used to handle files. Some
file management examples include create file, delete file, open, close, read,
and write.
Device Management
Device management is the category of system calls used to deal with devices.
Some examples of device management include request device, release device,
read, write, and get or set device attributes.
Information Maintenance
Information maintenance is the category of system calls used to maintain
information. Some examples of information maintenance include get system
data, set system data, get time or date, and set time or date.
Communication
Communication is a system call that is used for communication. There are
some examples of communication, including create, delete communication
connections, send, receive messages, etc.
There are various examples of Windows and Unix system calls, as listed below:

Type of call              Windows                  Unix
Process control           CreateProcess()          fork()
                          ExitProcess()            exit()
                          WaitForSingleObject()    wait()
File management           CreateFile()             open()
                          ReadFile()               read()
                          WriteFile()              write()
                          CloseHandle()            close()
Device management         SetConsoleMode()         ioctl()
Information maintenance   GetCurrentProcessId()    getpid()
Communication             CreatePipe()             pipe()
The Text section is made up of the compiled program code, read in from non-
volatile storage when the program is launched.
The Data section is made up of the global and static variables, allocated and
initialized prior to executing the main.
The Heap is used for the dynamic memory allocation and is managed via calls to
new, delete, malloc, free, etc.
The Stack is used for local variables. Space on the stack is reserved for local
variables when they are declared.
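To make the four sections concrete, here is a small illustrative C program
(the variable names are arbitrary):

#include <stdlib.h>

int counter = 10;                    /* data section: initialized global */

int main(void)                       /* main's compiled code lives in the text section */
{
    int local = 5;                   /* stack: local variable */
    int *dyn = malloc(sizeof *dyn);  /* heap: dynamic allocation via malloc */
    *dyn = counter + local;
    free(dyn);                       /* release the heap block when done */
    return 0;
}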
Program
A program is a piece of code which may be a single line or millions of lines. A
computer program is usually written by a computer programmer in a programming
language. For example, here is a simple program written in C programming language −
#include <stdio.h>
int main() {
printf("Hello, World! \n");
return 0;
}
A computer program is a collection of instructions that performs a specific task when
executed by a computer. When we compare a program with a process, we can conclude
that a process is a dynamic instance of a computer program.
A part of a computer program that performs a well-defined task is known as
an algorithm. A collection of computer programs, libraries and related data are
referred to as a software.
The process, from its creation to completion, passes through various states. The
minimum number of states is five.
The names of the states are not standardized although the process may be in
one of the following states during execution.
1. New
A program which is going to be picked up by the OS into the main memory is
called a new process.
2. Ready
Whenever a process is created, it directly enters the ready state, in which
it waits for the CPU to be assigned. The OS picks new processes from
secondary memory and puts them all in main memory.
The processes which are ready for the execution and reside in the main
memory are called ready state processes. There can be many processes present in
the ready state.
3. Running
One of the processes from the ready state will be chosen by the OS
depending upon the scheduling algorithm. Hence, if we have only one CPU in our
system, the number of running processes for a particular time will always be one. If
we have n processors in the system then we can have n processes running
simultaneously.
4. Block or wait
From the Running state, a process can make the transition to the block or
wait state depending upon the scheduling algorithm or the intrinsic behavior of the
process.
When a process waits for a certain resource to be assigned or for input from
the user, the OS moves this process to the block or wait state and assigns
the CPU to the other processes.
5. Completion or termination
When a process finishes its execution, it comes to the termination state. All
the context of the process (its Process Control Block) will also be deleted,
and the process will be terminated by the operating system.
6. Suspend ready
A process in the ready state, which is moved to secondary memory from the
main memory due to lack of the resources (mainly primary memory) is called in the
suspend ready state.
If the main memory is full and a higher-priority process comes for execution,
then the OS has to make room for the process in main memory by moving a
lower-priority process out into secondary memory. The suspend-ready processes
remain in secondary memory until main memory becomes available.
7. Suspend wait
Instead of removing a process from the ready queue, it is better to remove a
blocked process which is waiting for some resource in main memory. Since it
is already waiting for some resource to become available, it is better if it
waits in secondary memory and makes room for a higher-priority process. These
processes complete their execution once main memory becomes available and
their wait is finished.
In multiple processes, each process operates independently of the others,
whereas one thread can read, write, or change another thread's data.
Advantages of Thread
Threads minimize the context switching time.
Use of threads provides concurrency within a process.
Efficient communication.
It is more economical to create and context switch threads.
Threads allow utilization of multiprocessor architectures to a greater scale and
efficiency.
Types of Thread
Threads are implemented in following two ways −
User Level Threads − User managed threads.
Kernel Level Threads − Operating System managed threads acting on kernel, an
operating system core.
User Level Threads
In this case, the thread management kernel is not aware of the existence of
threads. The thread library contains code for creating and destroying
threads, for passing messages and data between threads, for scheduling thread
execution, and for saving and restoring thread contexts. The application
starts with a single thread.
Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
Disadvantages
In a typical operating system, most system calls are blocking.
Multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In this case, thread management is done by the Kernel. There is no thread
management code in the application area. Kernel threads are supported directly by the
operating system. Any application can be programmed to be multithreaded. All of the
threads within an application are supported within a single process.
The Kernel maintains context information for the process as a whole and for
individual threads within the process. Scheduling by the Kernel is done on a
thread basis.
The Kernel performs thread creation, scheduling and management in Kernel space. Kernel
threads are generally slower to create and manage than the user threads.
Advantages
Kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the
same process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than the user threads.
Transfer of control from one thread to another within the same process requires a
mode switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user level thread and Kernel level
thread facility. Solaris is a good example of this combined approach. In a combined
system, multiple threads within the same application can run in parallel on multiple
processors and a blocking system call need not block the entire process. Multithreading
models are three types
Many to many relationship.
Many to one relationship.
One to one relationship.
Many to Many Model
The many-to-many model multiplexes any number of user threads onto an equal or
smaller number of kernel threads.
In this model, developers can create as many user threads as necessary, and
the corresponding Kernel threads can run in parallel on a multiprocessor
machine. This model provides the best concurrency, and when a thread performs
a blocking system call, the kernel can schedule another thread for execution.
Many to One Model
Many-to-one model maps many user level threads to one Kernel-level thread.
Thread management is done in user space by the thread library. When thread makes a
blocking system call, the entire process will be blocked. Only one thread can access the
Kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
If the user-level thread libraries are implemented on an operating system
whose kernel does not support threads, then the many-to-one model is used.
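A thread library gives the programmer an API for creating and managing
threads. As a minimal illustration, a sketch using the POSIX Pthreads
library:

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    printf("worker thread says: %s\n", (const char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    /* Create a second thread within this process, then wait for it to finish. */
    pthread_create(&tid, NULL, worker, "hello");
    pthread_join(tid, NULL);
    return 0;
}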
Context Switch
A context switch is the mechanism to store and restore the state or context of a CPU
in Process Control block so that a process execution can be resumed from the same point
at a later time. Using this technique, a context switcher enables multiple processes to
share a single CPU. Context switching is an essential feature of a
multitasking operating system.
When the scheduler switches the CPU from executing one process to execute another,
the state from the current running process is stored into the process control block. After
this, the state for the process to run next is loaded from its own PCB and used to set the
PC, registers, etc. At that point, the second process can start executing.
Context switches are computationally intensive, since register and memory
state must be saved and restored. To reduce context switching time, some
hardware systems employ two or more sets of processor registers. When the
process is switched, the following information is stored for later use.
Program Counter
Scheduling information
Base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
Round Robin Scheduling example (time quantum = 2):
Process   Burst time
P1        4
P2        3
P3        5
Step 1) The execution begins with process P1, which has burst time 4. Here, every
process executes for 2 seconds. P2 and P3 are still in the waiting queue.
Step 2) At time =2, P1 is added to the end of the Queue and P2 starts executing
Step 3) At time = 4, P2 is preempted and added at the end of the queue. P3
starts executing.
Step 4) At time = 6, P3 is preempted and added at the end of the queue. P1
starts executing.
Step 5) At time=8 , P1 has a burst time of 4. It has completed execution. P2 starts
execution
Step 6) P2 has a burst time of 3 and has already executed for 2 time units.
At time = 9, P2 completes execution. Then P3 executes until it completes.
Step 7) Let’s calculate the average waiting time for above example.
Wait time:
P1 = 0 + 4 = 4
P2 = 2 + 4 = 6
P3 = 4 + 3 = 7
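The same waiting times can be reproduced with a small C simulation of Round
Robin, using the burst times and the time quantum of 2 from the example
above:

#include <stdio.h>

int main(void)
{
    int burst[]     = {4, 3, 5};   /* P1, P2, P3 from the example */
    int remaining[] = {4, 3, 5};
    int wait[3]     = {0};
    int quantum = 2, time = 0, done = 0, n = 3;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;
            int run = remaining[i] < quantum ? remaining[i] : quantum;
            time += run;
            remaining[i] -= run;
            if (remaining[i] == 0) {
                /* all processes arrive at time 0, so
                   waiting time = completion time - burst time */
                wait[i] = time - burst[i];
                done++;
            }
        }
    }
    for (int i = 0; i < n; i++)
        printf("P%d waiting time = %d\n", i + 1, wait[i]);  /* 4, 6, 7 */
    return 0;
}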
What is Non- Preemptive Scheduling?
In this type of scheduling method, once the CPU has been allocated to a
specific process, that process keeps the CPU busy until it releases the CPU
either by switching context or by terminating.
It is the only method that can be used for various hardware platforms. That’s
because it doesn’t need specialized hardware (for example, a timer) like preemptive
Scheduling.
Non-Preemptive Scheduling occurs when a process voluntarily enters the wait
state or terminates.
Advantages of Non-preemptive Scheduling
Here, are pros/benefits of Non-preemptive Scheduling method:
Offers low scheduling overhead
Tends to offer high throughput
It is conceptually very simple method
Fewer computational resources are needed for scheduling
Disadvantages of Non-Preemptive Scheduling
Here, are cons/drawback of Non-Preemptive Scheduling method:
It can lead to starvation, especially for real-time tasks
Bugs can cause a machine to freeze up
It can make real-time and priority Scheduling difficult
Poor response time for processes
Step 2) At time =2, process P1 arrives and is added to the waiting queue. P4 will
continue execution.
Step 3) At time = 3, process P4 will finish its execution. The burst time of P3 and P1 is
compared. Process P1 is executed because its burst time is less compared to P3.
Step 4) At time = 4, process P5 arrives and is added to the waiting queue. P1 will
continue execution.
Step 5) At time = 5, process P2 arrives and is added to the waiting queue. P1 will
continue execution.
Step 6) At time = 9, process P1 will finish its execution. The burst time of P3, P5, and
P2 is compared. Process P2 is executed because its burst time is the lowest.
Step 7) At time=10, P2 is executing, and P3 and P5 are in the waiting queue.
Step 8) At time = 11, process P2 will finish its execution. The burst time of P3 and P5 is
compared. Process P5 is executed because its burst time is lower.
CPU Scheduling:
A process scheduler schedules different processes to be assigned to the CPU
based on particular scheduling algorithms.
There are different types of scheduling algorithms to schedule processes. They are:
1. First Come, First Served (FCFS) scheduling
2. Shortest Job First (SJF) scheduling
3. Priority scheduling
4. Round Robin (RR) scheduling
5. Multilevel Queue scheduling
These algorithms are either preemptive or non-preemptive:
*In non-preemptive algorithms, a process cannot be preempted until it
completes its allotted CPU time.
*In preemptive scheduling, a running process may be preempted when a
higher-priority process enters the ready state.
FCFS Scheduling:
*The implementation of FCFS is easily managed with a FIFO queue.
*FCFS Scheduling algorithm is non-preemptive.
*The process that request the CPU first is allocated the CPU first.
*When a process enters the ready queue its PCB is linked onto the tail of the queue.
*When the CPU is free, it is allocated to the process at the head of the
queue; the running process is then removed from the queue.
*The code for FCFS Scheduling is simple to write and understand.
*Once the CPU has been allocated to a process, that process keeps the CPU
until it releases it, either by terminating or by requesting I/O.
Eg: consider the following set of processes that arrive at time 0, with the
length of the CPU-burst time given in milliseconds.
Process   Burst time
P1        24
P2        3
P3        3
If the processes arrive in the order P1, P2, P3 and are served in FCFS order,
we get the result shown in the following Gantt chart:
P1             P2    P3
0         24      27     30
If the processes arrive in the order P2, P3, P1 instead, the Gantt chart is:
P2    P3    P1
0   3     6              30
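The waiting times implied by the two Gantt charts above can be checked with a
short C function that computes the average FCFS waiting time:

#include <stdio.h>

/* Average FCFS waiting time when all processes arrive at time 0:
   each process waits for the total burst time of those ahead of it. */
static double avg_wait(const int burst[], int n)
{
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* this process waited 'elapsed' ms */
        elapsed += burst[i];
    }
    return (double)total_wait / n;
}

int main(void)
{
    int order1[] = {24, 3, 3};   /* arrival order P1, P2, P3 */
    int order2[] = {3, 3, 24};   /* arrival order P2, P3, P1 */
    printf("average wait (P1,P2,P3) = %.2f\n", avg_wait(order1, 3)); /* 17.00 */
    printf("average wait (P2,P3,P1) = %.2f\n", avg_wait(order2, 3)); /*  3.00 */
    return 0;
}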
SJF scheduling:
*The SJF algorithm may be either preemptive or non-preemptive.
*A non-preemptive SJF algorithm will allow the currently running process to
finish its CPU burst.
*A preemptive SJF algorithm will preempt the currently executing process when
a newly arrived process has a shorter remaining burst.
*preemptive SJF scheduling is sometimes called shortest- remaining-time-first.
*In this algorithm, the process with the shortest CPU burst time is executed first.
Example (all processes arrive at time 0):
Process   Burst time
P1        6
P2        8
P3        7
P4        3
Non-preemptive SJF Gantt chart:
P4    P1    P3    P2
0    3     9     16    24
Preemptive SJF example (P1: arrival 0, burst 8; P2: arrival 1, burst 4;
P3: arrival 2, burst 9; P4: arrival 3, burst 5):
P1    P2    P4    P1    P3
0    1     5     10    17    26
Multilevel Queue Scheduling:
*The foreground queue might be scheduled by an RR algorithm, while the
background queue is scheduled by an FCFS algorithm.
Deadlock
System model:
A system consists of a finite number of resources to be distributed among a
number of competing processes. The resources are partitioned into several
types, each of which consists of some number of identical instances. Memory
space, CPU cycles, files, and I/O devices are examples of resource types.
A process must request a resource before using it and must release the
resource after using it. A process may request as many resources as it
requires to carry out its designated task. Obviously, the number of resources
requested may not exceed the total number of resources available in the
system.
Under the normal mode of operation, a process may utilize a resource in only the following
sequence.
Request: if the request cannot be granted immediately (for example, the
resource is being used by another process), then the requesting process must
wait until it can acquire the resource.
Use: the process can operate on the resource (for example, if the resource is
a printer, the process can print on the printer).
Release: the process releases the resource.
For example, a resource-allocation graph with the edges P1 → R1, R1 → P2,
P2 → R3, R3 → P3, P3 → R2, R2 → P1 contains a cycle, so the processes P1, P2,
and P3 are deadlocked.
Methods for handling deadlocks:
Deadlock:
In a multiprogramming environment, several processes may compete for a finite
number of resources. A process requests resources; if the resources are not
available at that time, the process enters a waiting state.
Waiting processes may never again change state if the resources they have
requested are held by other waiting processes. This situation is called a
deadlock.
There are 3 different methods for dealing with deadlock:
1.Deadlock Prevention
2.Deadlock Avoidance
3.Deadlock Detection
1.Deadlock Prevention:
A Deadlock situation can arise if the following 4 conditions hold simultaneously in the system
1. Mutual exclusion
2. Hold and wait
3. No pre-emption
4. Circular wait
Mutual exclusion: the mutual exclusion condition must hold for non-sharable
resources. For example, a printer cannot be simultaneously shared by several
processes.
Hold and wait: to ensure that the hold and wait condition never occurs in the system we must
guarantee that whenever a process request a resource it does not hold any other resource.
For this purpose, two protocols can be used:
1. One protocol requires each process to request and be allocated all its
resources before it begins execution.
2. Another protocol allows a process to request resources only when the
process has none.
No pre-emption: to ensure that this condition does not hold, we can use the
following protocol:
1. If a process that is holding some resources requests another resource that
cannot be immediately allocated to it (i.e. the process must wait), then all
resources it is currently holding are preempted, i.e. these resources are
implicitly released.
Circular wait: one way to ensure that the circular wait condition never holds
is to impose a total ordering of all resource types and to require that each
process requests resources in an increasing order of enumeration.
Let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each
resource type a unique integer number, which allows us to compare two
resources and to determine whether one precedes another in our ordering.
2.Deadlock Avoidance
The main drawback of deadlock prevention is low device utilization and
reduced system throughput. An alternative technique is to require additional
information about how resources are to be requested. With this information it
is possible to construct an algorithm that ensures the system will never
enter a deadlock state. The algorithms for the deadlock avoidance approach
are:
1. Resource-allocation graph algorithm
2. Banker's algorithm
3. Safety algorithm
4. Resource-request algorithm
1. Resource-allocation graph algorithm: a claim edge Pi → Rj indicates that
process Pi may request resource Rj; it is represented by a dashed line.
A claim edge converts to a request edge when the process requests the
resource.
When a resource is released by a process, the assignment edge reconverts to a
claim edge.
Resources must be claimed a priori in the system.
2.banker’s algorithm
The resource-allocation graph algorithm is not applicable to a resource
allocation system with multiple instances of each resource type.
The banker's algorithm name was chosen because this algorithm could be used
in a banking system to ensure that the bank never allocates its available
cash in such a way that it can no longer satisfy the needs of all its
customers.
Algorithm:
Available: a vector of length m indicates the number of available resources
of each type. If Available[j] = k, there are k instances of resource type Rj
available.
Max: an n*m matrix defines the maximum demand of each process. If
Max[i,j] = k, then Pi may request at most k instances of resource type Rj.
Allocation: an n*m matrix defines the number of resources of each type
currently allocated to each process. If Allocation[i,j] = k, then process Pi
is currently allocated k instances of resource type Rj.
Need: an n*m matrix indicates the remaining resource need of each process. If
Need[i,j] = k, then Pi may need k more instances of resource type Rj to
complete its task.
Need[i,j] = Max[i,j] - Allocation[i,j]
3. Safety algorithm
The algorithm for finding out whether or not a system is in a safe state can
be described as follows:
1. Let Work and Finish be vectors of length m and n respectively. Initialize
Work := Available and Finish[i] := false for i = 1, 2, ..., n.
2. Find an i such that both
a. Finish[i] = false
b. Needi <= Work
If no such i exists, go to step 4.
3. Work := Work + Allocationi
Finish[i] := true; go to step 2.
4. If Finish[i] = true for all i, then the system is in a safe state.
4. Resource-request algorithm
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition,
since the process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the
resources are not available.
3. Have the system pretend to have allocated the requested resources to
process Pi by modifying the state as follows:
a. Available := Available - Requesti
b. Allocationi := Allocationi + Requesti
c. Needi := Needi - Requesti
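For illustration, here is a compact C sketch of the safety algorithm above;
the dimensions (5 processes, 3 resource types) are arbitrary example values.

#include <stdbool.h>

#define N 5   /* number of processes (example value) */
#define M 3   /* number of resource types (example value) */

/* Safety algorithm: returns true if the system is in a safe state. */
bool is_safe(const int available[M], const int need[N][M],
             const int allocation[N][M])
{
    int work[M];
    bool finish[N] = {false};

    for (int j = 0; j < M; j++)
        work[j] = available[j];          /* step 1: Work := Available */

    for (int count = 0; count < N; ) {
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool can_run = true;         /* step 2: is Need_i <= Work? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                /* step 3: Pi can finish and release its allocation */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                found = true;
                count++;
            }
        }
        if (!found)
            return false;                /* step 4: some Finish[i] is false */
    }
    return true;                         /* all processes can finish */
}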
3.Deadlock Detection
If a system does not employ either a deadlock-prevention or a
deadlock-avoidance algorithm, then a deadlock may occur. In this environment
the system must provide: an algorithm that examines the state of the system
to determine whether a deadlock has occurred, and an algorithm to recover
from the deadlock.
Single instance of each resource type: maintain a wait-for graph whose nodes
are processes, with an edge Pi → Pj if Pi is waiting for Pj. Periodically
invoke an algorithm that searches for a cycle in the graph. An algorithm to
detect a cycle in a graph requires an order of n^2 operations, where n is the
number of vertices in the graph.
Several instance of resource type:
The wait for graph scheme is not applicable to a resource allocation system with multiple
instance of each resource type.
Available: a vector of length m indicates the number of available resources
of each type.
Allocation: an n*m matrix defines the number of resources of each type
currently allocated to each process.
Request: an n*m matrix indicates the current request of each process. If
Request[i,j] = k, then process Pi is requesting k more instances of resource
type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m and n respectively. Initialize
Work := Available, and for i = 1, 2, ..., n, if Allocationi != 0 then
Finish[i] := false, otherwise Finish[i] := true.
2. Find an index i such that both
a. Finish[i] = false
b. Requesti <= Work
If no such i exists, go to step 4.
3. Work := Work + Allocationi
Finish[i] := true; go to step 2.
4. If Finish[i] = false for some i, 1 <= i <= n, then the system is in a
deadlock state. Moreover, if Finish[i] = false, then process Pi is
deadlocked.
Recovery from deadlock
When a detection algorithm determines that a deadlock exists, one possibility
is to inform the operator that a deadlock has occurred and let the operator
deal with it manually. The other possibility is to let the system recover
from the deadlock automatically. There are two options for breaking a
deadlock:
Process termination
To eliminate a deadlock by aborting a process, we use one of two methods:
Abort all deadlocked processes
Abort one process at a time until the deadlock cycle is eliminated
Resource pre-emption:
Selecting a victim: minimize cost.
Rollback: return to some safe state and restart the process from that state.
Starvation: the same process may always be picked as victim, so include the
number of rollbacks in the cost factor.
Multiprogramming Environment:
Process:
A process is a program in execution; the execution of a process must progress
in a sequential fashion. A process is more than the program code, which is
known as the text section. A process generally also includes the process
stack, containing temporary data, etc. A program is a passive entity, such as
the contents of a file stored on disk, whereas a process is an active entity
with a program counter.
Process state:
As a process executes, it changes state. The state of a process is defined in
part by the current activity of that process. Each process may be in one of
the following states.
Process synchronization:
Process synchronization means sharing system resources by processes in such a
way that concurrent access to shared data is handled, thereby minimizing the
chance of inconsistent data. Maintaining data consistency demands mechanisms
to ensure synchronized execution of cooperating processes.
Algorithm1:
Here, the processes share a common integer variable turn, initialized to 0 or
1. If turn = i, then process Pi is allowed to execute in its critical
section.
This solution ensures that only one process at a time can be in its critical
section. However, it does not satisfy the progress requirement, because it
requires strict alternation: for example, if turn = 0, P1 cannot enter its
critical section even when P0 is not ready to enter its own.
Algorithm 2:
The problem with algorithm 1 is that it does not retain sufficient
information about the state of each process; it remembers only which process
is allowed to enter its critical section. To remedy this problem, we can
replace the variable turn with the following array:
var flag: array[0..1] of boolean;
The elements of the array are initialized to false. If flag[i] is true, then
Pi is ready to enter its critical section.
In this solution mutual exclusion is satisfied, but the progress and bounded
waiting conditions are not satisfied.
By combining the key ideas of algorithms 1 and 2, we obtain a correct
solution to the critical section problem (Peterson's algorithm), in which the
processes share the two variables turn and flag.
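A minimal sketch of this combined solution, Peterson's algorithm, in C for
two processes P0 and P1. (On modern hardware a real implementation also needs
memory barriers, which are omitted from this sketch.)

#include <stdbool.h>

/* Shared variables: flag[i] says Pi wants to enter; turn breaks ties. */
volatile bool flag[2] = {false, false};
volatile int turn = 0;

void enter_critical_section(int i)   /* i is 0 or 1 */
{
    int j = 1 - i;                   /* the other process */
    flag[i] = true;                  /* announce intent to enter */
    turn = j;                        /* give the other process priority */
    while (flag[j] && turn == j)
        ;                            /* busy-wait until it is safe to enter */
}

void exit_critical_section(int i)
{
    flag[i] = false;                 /* allow the other process to proceed */
}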
Semaphores:
The solutions to the critical section problem above are not suitable for
complex problems. To overcome this difficulty, we can use a tool called
semaphores.
A semaphore is a synchronization tool. In 1965, Dijkstra proposed a new and
very significant technique for managing concurrent processes and
synchronizing the progress of interacting processes; this new technique is
called the semaphore. A semaphore S is an integer variable which is accessed
only through two standard atomic operations called wait and signal,
designated by P() and V() respectively.
The classical definitions of wait and signal are:
Wait: decrement the value of its argument S as soon as the resulting value
would be non-negative.
wait(S): while S <= 0 do no-op;
         S := S - 1;
Signal: increment the value of its argument S as an indivisible operation.
signal(S): S := S + 1;
Use of semaphores:
Semaphores are used to deal with the n-process critical section problem. The
n processes share a semaphore mutex (mutual exclusion), which is initialized
to 1. Each process Pi is organized as follows:
repeat
    wait(mutex);
        critical section
    signal(mutex);
        remainder section
until false;
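The same structure can be written with POSIX semaphores in C; sem_wait()
plays the role of wait(mutex) and sem_post() the role of signal(mutex). A
minimal single-pass demo:

#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;            /* binary semaphore guarding the critical section */

int main(void)
{
    sem_init(&mutex, 0, 1);    /* initial value 1: one process may enter */

    sem_wait(&mutex);          /* entry section: wait(mutex) */
    printf("in critical section\n");
    sem_post(&mutex);          /* exit section: signal(mutex) */

    sem_destroy(&mutex);
    return 0;
}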
Properties of semaphores:
1. Simple.
2. Works with many processes.
3. There can be many different critical sections with different semaphores.
4. Each critical section is guarded by its own unique semaphore.
Types of semaphores
Semaphores are mainly two types
Binary semaphores:
1. It is a special form of semaphore used for implementing mutual exclusion;
hence it is often called a mutex.
2. A binary semaphore is initialized to 1 and only takes the values 0 and 1
during the execution of a program.
Counting semaphores:
These are used to implement bounded concurrency
Implementation:
The main disadvantage of these mutual exclusion solutions is that they all
require busy waiting. While a process is in its critical section, any other
process that tries to enter its critical section must loop continuously in
the entry code. This continual looping is clearly a problem in a real
multiprogramming system, where a single CPU is shared among many processes.
Busy waiting wastes CPU cycles that some other process might be able to use
productively. This type of semaphore is also called a "spinlock". Spinlocks
are useful in multiprocessor systems. The semaphore operations can now be
defined as:
wait(S): S.value := S.value - 1;
         if S.value < 0
         then begin
             add this process to S.L (list of processes);
             block;
         end;
signal(S): S.value := S.value + 1;
           if S.value <= 0
           then begin
               remove a process P from S.L;
               wakeup(P);
           end;
Limitations of semaphores:
1. Priority inversion is a big limitation of semaphores.
2. With improper use, a process may block indefinitely; such a situation is
called deadlock.
Methods for Inter-process Communication:
The commonly used inter-process communication mechanisms are:
1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect communication
6. Message Passing
7. FIFO
To understand them in more detail, we will discuss each of them individually.
Pipe:-
The pipe is a type of data channel that is unidirectional in nature. It means
that the data in this type of data channel can be moved in only a single direction at a
time. Still, one can use two-channel of this type, so that he can able to send and
receive data in two processes. Typically, it uses the standard methods for input and
output. These pipes are used in all types of POSIX systems and in different
versions of the Windows operating system as well.
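A minimal C example of the unidirectional pipe described above, using the
POSIX pipe() and fork() calls: the child writes into one end and the parent
reads from the other.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[32] = {0};

    pipe(fd);                    /* fd[0] is the read end, fd[1] the write end */
    if (fork() == 0) {           /* child process: write into the pipe */
        write(fd[1], "hello parent", 13);   /* 12 chars + terminating NUL */
        _exit(0);
    }
    read(fd[0], buf, sizeof buf - 1);       /* parent: read what the child sent */
    printf("received: %s\n", buf);
    return 0;
}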
Shared Memory:-
It can be referred to as a type of memory that can be used or accessed by
multiple processes simultaneously. It is primarily used so that the processes can
communicate with each other. Therefore the shared memory is used by almost all
POSIX and Windows operating systems as well.
Message Queue:-
In general, several different messages are allowed to read and write the data
to the message queue. In the message queue, the messages are stored or stay in
the queue unless their recipients retrieve them. In short, we can also say that the
message queue is very helpful in inter-process communication and used by all
operating systems.
To understand the concept of Message queue and Shared memory in more
detail, let's take a look at its diagram given below:
Message Passing:-
It is a type of mechanism that allows processes to synchronize and
communicate with each other. By using message passing, processes can
communicate with each other without resorting to shared variables.
Usually, the inter-process communication mechanism provides two operations
that are as follows:
1. send(message)
2. receive(message)
Direct Communication:-
In this type of communication process, usually, a link is created or established
between two communicating processes. However, in every pair of communicating
processes, only one link can exist.
Indirect Communication
Indirect communication can only exist or be established when processes share
a common mailbox, and each pair of these processes shares multiple communication
links. These shared links can be unidirectional or bi-directional.
FIFO:-
It is a type of general communication between two unrelated processes. It
can also be considered as full-duplex, which means that one process can
communicate with another process and vice versa.
Unit-4
Memory Management
Logical VS physical address space:
*The concept of a logical address space that is bound to a separate physical
address space is central to proper memory management.
- Logical address: generated by the CPU , also referred to as virtual address.
- Physical address: address seen by the memory unit.
*Logical and physical addresses are the same in compile-time and load-time
address-binding schemes; logical (virtual) and physical addresses differ in
the execution-time address-binding scheme.
*The set of all physical addresses corresponding to these logical addresses is referred to
as a physical address space.
*The run-time mapping from virtual to physical addresses is done by the
memory-management unit (MMU), which is a hardware device.
*There are a number of different schemes for accomplishing such mapping, as
follows:
*The base register is known called a re-location register.
*The value in the re location register is added to every address generated by a user
process at the time it is sent to memory.
*The user program never sees the real physical addresses.
*The program can create a pointer to location 346, store it in memory,
manipulate it, and compare it to other addresses, all as the number 346.
*The user program deals with logical address .
Memory Allocation Methods :
*The direct-access nature of disks gives us flexibility in the implementation
of files.
*In almost every case , many files will be stored on the same disk.
*The main problem is how to allocate space to these files so that disk space
is utilized effectively and files can be accessed quickly.
*Three major methods of allocating disk space are in wide use. They are:
1.Contiguous allocation
2.Linked allocation
3.Indexed allocation
Contiguous Allocation:
*The contiguous allocation method requires each file to occupy a set of contiguous
blocks on the disk.
*Disk address define a linear ordering on disk.
*Notice that with this ordering, accessing block b+1 after block b normally
requires no head movement.
*When head movement is needed, it is only one track.
*Thus, the number of disk seeks required for accessing contiguously allocated
files is minimal, as is seek time when a seek is finally needed.
-Random access
-wasteful of space
-files cannot grow
Linked allocation of disk space
*each block contains a pointer to the next block.
*To create a new file , we simply create a new entry in a directory.
*With linked allocation, each directory entry has a pointer to the disk block of the file.
*This pointer is initialized to nil (the end-of-list pointer value) to
signify an empty file.
*The size field is also set to 0.
*There is no external fragmentation with linked allocation and any free block on the
free-space list can be used to satisfy a request.
*The major problem is that it can be used effectively for only sequential-access files.
*To find the i'th block of a file, we must start at the beginning of the file
and follow the pointers until we get to the i'th block.
*Another disadvantage to linked allocation is the space required for the pointer.
*The usual solution to this problem is to collect blocks into multiples,
called clusters, and to allocate clusters rather than blocks.
Indexed Allocation:
*Linked allocation solves the external -fragmentation and size - declaration problems of
contiguous allocation.
*Linked allocation cannot support efficient direct access.
*Indexed allocation solves this problem by bringing all the pointers together into one
locations the index block.
*Each file has its own index block ,which is an array of disk-block address
*The i'th entry in the index block points to the i'th block of file.
*The directory contains the address of the index block.
*To read i'th block, we use the pointer in the i'th index block entry to find and read the
desired block.
*Indexed allocation supports direct access, without suffering from external
fragmentation, because any free block on the disk may satisfy a request for more
space.
*Indexed allocation does suffer from wasted space .
*The pointer overhead of the index block is generally greater than the pointer overhead
of linked allocation
Paging:
*Logical address space of a process can be non-contiguous; process is allocated physical
memory whenever the latter is available.
*Divide physical memory into fixed-size blocks called frames. Frame size is a
power of 2, between 512 bytes and 8192 bytes.
*Divide logical memory into blocks of same size called pages.
*Keep track of all free frames.
*To run a program of size n-pages , need to find n-free frames and load program.
*set up a page table to translate logical to physical addresses .
*Internal fragmentation.
Address translation scheme:
Address generated by the CPU is divided into
Page number (p): used as an index into a page table which contains the base
address of each page in physical memory.
Page offset (o): combined with the base address to define the physical memory
address that is sent to the memory unit.
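The split into page number and offset is simple arithmetic on the logical
address. A small illustrative C program, assuming a 4 KB (2^12 byte) page
size:

#include <stdio.h>

int main(void)
{
    unsigned long logical   = 0x12345;   /* an example logical address */
    unsigned long page_size = 4096;      /* assumed page size: 2^12 bytes */

    unsigned long p      = logical / page_size;  /* page number */
    unsigned long offset = logical % page_size;  /* page offset */

    /* The MMU uses p to index the page table, obtains the frame base
       address, and adds the offset to form the physical address. */
    printf("page number = %lu, offset = %lu\n", p, offset);
    return 0;
}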
Segmentation:
*A memory management scheme that supports the user view of memory.
*A program is a collection of segments. A segment is a logical unit such as:
program
procedure
function
method
object
local variables, global variables
stack
symbol table , arrays
Segmentation Architecture :
A logical address consists of a two-tuple:
< segment-number, offset >
Segment table - maps two-dimensional user-defined addresses into
one-dimensional physical addresses; each table entry has:
base - contains the starting physical address where the segment resides in
memory.
limit - specifies the length of the segment.
Segment-table base register (STBR): points to the segment table's location in
memory.
Segment-table length register (STLR): indicates the number of segments used
by a program; segment number s is legal if s < STLR.
Relocation:
- dynamic
- by segment table
Sharing:
- shared segments
- same segment number
Allocation:
- first fit / best fit
- external fragmentation
The MULTICS system solved problems of external fragmentation and lengthy search
times by paging the segments.
Solution differs from pure segmentation in that the segment-table entry contains not
the base address of the segment , but rather the base address of a page table for
this segment.
Thrashing:
If the number of frames allocated to a low-priority process falls below the
minimum number required by the computer architecture, we must suspend that
process's execution.
We should then page out its remaining pages, freeing all its allocated
frames.
This provision introduces a swap-in, swap-out level of intermediate CPU
scheduling.
Although it is technically possible to reduce the number of allocated frames
to the minimum, there is some (larger) number of pages that are in active
use.
If the process does not have this number of frames, it will very quickly page
fault.
The process continues to fault, replacing pages for which it will then fault
and bring back in right away.
This high paging activity is called thrashing.
A process is thrashing if it is spending more time paging than executing.
Virtual memory:
Virtual memory is a concept used in some large computer systems that permits
the user to construct programs as though a large memory space were available,
equal to the totality of auxiliary memory.
Virtual memory is used to give programmers the illusion that they have a
very large memory at their disposal, even though the computer actually has a
relatively small main memory .A virtual memory system provides a mechanism for
translating program-generated addresses into correct main memory locations. this is
done dynamically , while programs are being executed in the CPU. The translation or
mapping is handled automatically by the hardware by means of a mapping table.
Address space and memory space:
An address used by a programmer will be called a virtual address , and the
set of such addresses the address space . An address in main memory is called the
memory space
The address space is allowed to the larger than the memory space in
computers with virtual memory.
In a multi program computer system , programs and data are transferred to
and from auxiliary memory and main memory based on demands imposed by the
CPU. suppose that program 1 is currently being executed in the CPU. program 1 and
a portion of its associated data are moved from auxiliary memory into main memory.
In a virtual memory system, the address field of an instruction code will
consist of 20 bits, but physical memory addresses must be specified with only
15 bits. Thus the CPU will reference instructions and data with a 20-bit
address, but the information at this address must be taken from physical
memory.
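Worked out, the numbers above mean that the 20-bit virtual address gives an
address space of 2^20 = 1024K locations, while the 15-bit physical address
gives a memory space of only 2^15 = 32K locations. The mapping table must
therefore translate each 20-bit program-generated address into one of the 32K
physical locations, bringing the referenced information in from auxiliary
memory when it is not already resident.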
Unit-5
File and I/O Management ,OS Security
Directory Structure
What is a directory?
Directory can be defined as the listing of the related files on the disk. The
directory may store some or the entire file attributes.
To get the benefit of different file systems on different operating systems,
a hard disk can be divided into a number of partitions of different sizes.
The partitions are also called volumes or mini disks.
Each partition must have at least one directory in which, all the files of the
partition can be listed. A directory entry is maintained for each file in the directory
which stores all the information related to that file.
A directory can be viewed as a file which contains the metadata of a bunch of
files. Every directory supports a number of common operations on files:
1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files
Single Level Directory
The simplest method is to have one big list of all the files on the disk. The
entire system will contain only one directory which is supposed to mention all the
files present in the file system. The directory contains one entry per each file present
on the file system.
8. Close operation:
When the processing of the file is complete, it should be closed so that all
the changes made become permanent and all the resources occupied are
released. On closing, the OS de-allocates all the internal descriptors that
were created when the file was opened.
9. Append operation:
This operation adds data to the end of the file.
10. Rename operation:
This operation is used to rename the existing file.
Allocation Methods
There are various methods which can be used to allocate disk space to
files. Selection of an appropriate allocation method significantly affects the
performance and efficiency of the system. The allocation method determines the way
in which the disk is utilized and the files are accessed.
The following methods can be used for allocation.
1. Contiguous Allocation
2. Extents
3. Linked Allocation
4. Clustering
5. FAT
6. Indexed Allocation
7. Linked Indexed Allocation
8. Multilevel Indexed Allocation
9. Inode
We will discuss three of the most used methods in detail.
Contiguous Allocation
If the blocks are allocated to the file in such a way that all the logical blocks
of the file get the contiguous physical block in the hard disk then such allocation
scheme is known as contiguous allocation.
Consider, for example, a directory with three files. The starting block and the
length of each file are recorded in the directory table, and contiguous blocks are
assigned to each file as per its need (a small sketch of the resulting address
computation follows the lists below).
Advantages
1. It is simple to implement.
2. We will get Excellent read performance.
3. Supports Random Access into files.
Disadvantages
1. The disk will become fragmented.
2. It may be difficult to have a file grow.
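As a small sketch of the address arithmetic behind contiguous allocation (the directory-entry fields and the example numbers are assumptions for illustration), finding logical block b of a file is a single addition:

#include <stdio.h>

/* A directory entry under contiguous allocation: start block and length. */
struct dir_entry {
    const char *name;
    int start;    /* first physical block of the file */
    int length;   /* number of contiguous blocks      */
};

/* Logical block b of the file lives at physical block start + b. */
int physical_block(const struct dir_entry *f, int b)
{
    if (b < 0 || b >= f->length)
        return -1;             /* beyond the end of the file */
    return f->start + b;
}

int main(void)
{
    struct dir_entry mail = { "mail", 19, 6 };   /* occupies blocks 19..24 */
    printf("logical block 2 of %s -> physical block %d\n",
           mail.name, physical_block(&mail, 2)); /* prints 21 */
    return 0;
}

This one-step computation is exactly why contiguous allocation gives excellent read performance and random access.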
Linked Allocation
In linked allocation, each file is stored as a linked list of disk blocks, which may be
scattered anywhere on the disk; each block contains a pointer to the next block of
the file.
Advantages
1. There is no external fragmentation with linked allocation.
2. Any free block can be utilized in order to satisfy the file block requests.
3. File can continue to grow as long as the free blocks are available.
4. Directory entry will only contain the starting block address.
Disadvantages
1. Random access is not provided.
2. Pointers require some space in the disk blocks.
3. If any pointer in the linked list is broken, the rest of the file becomes inaccessible
and the file gets corrupted.
4. Blocks must be traversed one by one to reach a given block.
File Allocation Table
The main disadvantage of linked list allocation is that random access to a
particular block is not provided. In order to access a block, we need to access all its
previous blocks.
File Allocation Table overcomes this drawback of linked list allocation. In this
scheme, a file allocation table is maintained, which gathers all the disk block links.
The table has one entry for each disk block and is indexed by block number.
File allocation table needs to be cached in order to reduce the number of head seeks.
Now the head doesn't need to traverse all the disk blocks in order to access one
successive block.
It simply accesses the file allocation table, reads the desired block entry from
there, and accesses that block. This is how random access is accomplished using FAT
(a small sketch follows the lists below). It is used by MS-DOS and pre-NT Windows versions.
Advantages
1. Uses the whole disk block for data.
2. A bad disk block doesn't cause the loss of all successive blocks.
3. Random access is provided, although it's not very fast.
4. Only the FAT needs to be traversed in each file operation.
Disadvantages
1. Each disk block needs a FAT entry.
2. The FAT may be very big, depending upon the number of entries.
3. The number of FAT entries can be reduced by increasing the block size, but this
also increases internal fragmentation.
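A minimal sketch of the FAT idea described above (the table size and the end-of-chain marker are assumptions for illustration): the table is indexed by physical block number, and each entry holds the number of the file's next block, so only the cached table, not the disk, is traversed:

#include <stdio.h>

#define NBLOCKS   16
#define END_CHAIN -1              /* assumed end-of-chain marker */

static int fat[NBLOCKS];          /* fat[b] = block that follows b in its file */

/* Find the physical block holding logical block n of a file,
 * given the file's starting block from its directory entry.  */
int fat_lookup(int start, int n)
{
    int b = start;
    while (n-- > 0 && b != END_CHAIN)
        b = fat[b];               /* follow the chain in the cached table */
    return b;
}

int main(void)
{
    for (int i = 0; i < NBLOCKS; i++)
        fat[i] = END_CHAIN;
    fat[9] = 2; fat[2] = 12; fat[12] = END_CHAIN;   /* a file on blocks 9 -> 2 -> 12 */
    printf("logical block 2 is in physical block %d\n", fat_lookup(9, 2)); /* 12 */
    return 0;
}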
Indexed Allocation (Limitation of FAT)
Limitations in an existing technology cause the evolution of a new
technology. Till now, we have seen various allocation methods, each carrying
several advantages and disadvantages.
The file allocation table tries to solve as many problems as possible, but it leads to a
drawback: the more blocks there are, the bigger the FAT becomes. Therefore, we
need to allocate more space to the file allocation table, and since the table needs to
be cached, it may be impossible to keep such a large table in the cache. Here we
need a new technique to solve such problems.
Indexed Allocation Scheme
Instead of maintaining a file allocation table of all the disk pointers, the indexed
allocation scheme stores all the disk pointers for a file in one of its blocks, called the
index block. The index block doesn't hold the file data, but it holds the pointers to all
the disk blocks allocated to that particular file. The directory entry will only contain
the index block address (a small sketch follows the lists below).
Advantages
1. Supports direct access.
2. A bad data block causes the loss of only that block.
Disadvantages
1. A bad index block could cause the loss of the entire file.
2. The size of a file depends upon the number of pointers an index block can hold.
3. Having an index block for a small file is a total waste of space.
4. More pointer overhead.
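By contrast with FAT, indexed allocation needs no chain traversal at all. A minimal sketch (the block numbers here are invented for illustration):

#include <stdio.h>

/* Under indexed allocation the index block is just an array of disk-block
 * pointers, so logical block n of the file is found in a single step.   */
int indexed_lookup(const int *index_block, int nblocks, int n)
{
    if (n < 0 || n >= nblocks)
        return -1;                 /* beyond the file's last block      */
    return index_block[n];         /* direct access, no chain to follow */
}

int main(void)
{
    int index_block[] = { 9, 16, 1, 10, 25 };   /* the file's blocks, in order */
    printf("logical block 3 -> physical block %d\n",
           indexed_lookup(index_block, 5, 3));  /* prints 10 */
    return 0;
}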
Linked Index Allocation (Single-level linked Index Allocation)
In indexed allocation, the maximum file size is limited by the number of pointers
that fit in one index block. To allow larger files, we have to link several index blocks
together. In linked index allocation, an index block contains:
o A small header giving the name of the file
o The set of the first 100 block addresses
o A pointer to another index block
For larger files, the last entry of the index block is a pointer which points to
another index block. This is also called the linked scheme.
Advantage: It removes file size limitations
Disadvantage: Random Access becomes a bit harder
Multilevel Index Allocation
In Multilevel index allocation, we have various levels of indices. There are outer
level index blocks which contain the pointers to the inner level index blocks and the
inner level index blocks contain the pointers to the file data.
o The outer level index is used to find the inner level index.
o The inner level index is used to find the desired data block.
Advantage: Random access becomes better and more efficient.
Disadvantage: Access time for a file will be higher, since several levels of index
blocks must be read.
Inode
In UNIX-based operating systems, each file is indexed by an inode. Inodes are
special disk blocks which are created when the file system is created. The
number of files or directories in a file system depends on the number of inodes in
the file system.
An inode includes the following information:
1. Attributes (permissions, time stamp, ownership details, etc.) of the file.
2. A number of direct pointers, which point to the first 12 blocks of the file.
3. A single indirect pointer, which points to an index block. If the file cannot be indexed
entirely by the direct pointers, the single indirect pointer is used.
4. A double indirect pointer, which points to a disk block that is a collection of
pointers to disk blocks that are index blocks. The double indirect pointer is used if the
file is too big to be indexed entirely by the direct pointers and the single indirect
pointer.
5. A triple indirect pointer, which points to a disk block that is a collection of pointers. Each
of these pointers points to a disk block which also contains a collection
of pointers, each of which in turn points to an index block that contains the pointers
to the file blocks.
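To make the direct/indirect scheme concrete, here is a hedged sketch that classifies which pointer of such an inode covers a given logical block (the 12 direct pointers match the notes; the 256 pointers per indirect block is an assumption, since the real number depends on block and pointer sizes):

#include <stdio.h>

#define NDIRECT 12        /* direct pointers in the inode (as in the notes)     */
#define NPTRS   256       /* assumed pointers per indirect block (illustrative) */

const char *inode_region(long n)
{
    long single = NDIRECT;                       /* first block needing 1 hop  */
    long dbl    = single + (long)NPTRS;          /* first block needing 2 hops */
    long triple = dbl + (long)NPTRS * NPTRS;     /* first block needing 3 hops */
    if (n < single)                                return "direct block";
    if (n < dbl)                                   return "single indirect";
    if (n < triple)                                return "double indirect";
    if (n < triple + (long)NPTRS * NPTRS * NPTRS)  return "triple indirect";
    return "beyond maximum file size";
}

int main(void)
{
    long samples[] = { 0, 11, 12, 267, 268, 65804 };
    for (int i = 0; i < 6; i++)
        printf("logical block %6ld -> %s\n", samples[i], inode_region(samples[i]));
    return 0;
}

With these assumed sizes, blocks 0-11 are reached directly, blocks 12-267 through the single indirect pointer, blocks 268-65803 through the double indirect pointer, and everything beyond through the triple indirect pointer.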
Instruction Pipeline
Pipeline processing can occur not only in the data stream but in the
instruction stream as well.
Most digital computers with complex instructions require an instruction
pipeline to carry out operations like fetching, decoding and executing instructions.
In general, the computer needs to process each instruction with the following
sequence of steps.
1. Fetch instruction from memory.
2. Decode the instruction.
3. Calculate the effective address.
4. Fetch the operands from memory.
5. Execute the instruction.
6. Store the result in the proper place.
Each step is executed in a particular segment, and there are times when different
segments may take different times to operate on the incoming information.
Moreover, there are times when two or more segments may require memory access
at the same time, causing one segment to wait until another is finished with the
memory.
The organization of an instruction pipeline will be more efficient if the instruction
cycle is divided into segments of equal duration. One of the most common examples
of this type of organization is a Four-segment instruction pipeline.
A four-segment instruction pipeline combines two or more of the steps above
into a single segment. For instance, the decoding of the instruction can be
combined with the calculation of the effective address into one segment.
In a typical example of a four-segment instruction pipeline, the instruction
cycle is completed in four segments.
Segment 1: The instruction fetch segment can be implemented using a first-in, first-out
(FIFO) buffer.
Segment 2: The instruction fetched from memory is decoded in the second segment, and
eventually, the effective address is calculated in a separate arithmetic circuit.
Segment 3: An operand from memory is fetched in the third segment.
Segment 4: The instructions are finally executed in the last segment of the pipeline
organization.
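A standard timing result, not derived in these notes, makes the benefit of this organization concrete: if each of the k segments takes one clock cycle, then n instructions complete in (k + n - 1) cycles, instead of the n x k cycles a non-pipelined unit would need. For the four-segment pipeline above (k = 4) and n = 6 instructions, that is 4 + 6 - 1 = 9 cycles versus 4 x 6 = 24 cycles.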
Buffering in Operating System
The buffer is an area in the main memory used to store or hold
data temporarily. In other words, a buffer temporarily stores data transmitted from
one place to another, either between two devices or between a device and an
application. The act of storing data temporarily in the buffer is called buffering.
A buffer may be used when moving data between processes within a
computer. Buffers can be implemented in a fixed memory location in hardware or by
using a virtual data buffer in software, pointing at a location in the physical memory.
In all cases, the data in a data buffer are stored on a physical storage medium.
Most buffers are implemented in software, which typically uses the faster RAM
to store temporary data due to the much faster access time than hard disk drives.
Buffers are typically used when there is a difference between the rate of received
data and the rate of processed data, for example, in a printer spooler or online video
streaming.
A buffer often adjusts timing by implementing a queue or FIFO algorithm in
memory, simultaneously writing data into the queue at one rate and reading it at
another rate.
Purpose of Buffering
You encounter buffering when watching videos on YouTube or live streams. In a video
stream, a buffer represents the amount of data required to be downloaded before
the video can play to the viewer in real time. A buffer in a computer environment
means that a set amount of data will be stored in order to preload the required data
before it gets used by the CPU.
Computers have many different devices that operate at varying speeds, and a
buffer is needed to act as a temporary placeholder for everything interacting. This is
done to keep everything running efficiently and without issues between all the
devices, programs, and processes running at that time. There are three reasons
behind buffering of data,
1. It helps in matching the speed between two devices between which data is transmitted.
For example, a hard disk has to store a file received from a modem. As we know,
the transmission speed of a modem is slow compared to the hard disk. So the bytes
coming from the modem are accumulated in the buffer space, and when all the bytes
of the file have arrived at the buffer, the entire data is written to the hard disk in a
single operation.
2. It helps the devices with different sizes of data transfer to get adapted to each
other. It helps devices to manipulate data before sending or receiving it. In computer
networking, the large message is fragmented into small fragments and sent over the
network. The fragments are accumulated in the buffer at the receiving end and
reassembled to form a complete large message.
3. It also supports copy semantics. With copy semantics, the version of data in the
buffer is guaranteed to be the version of data at the time of system call, irrespective
of any subsequent change to data in the buffer. Buffering increases the performance
of the device. It overlaps the I/O of one job with the computation of the same job.
Types of Buffering
There are three main types of buffering in the operating system, such as:
1. Single Buffer
In single buffering, only one buffer is used to transfer the data between two
devices. The producer produces one block of data into the buffer, and then the
consumer consumes the buffer. Only when the buffer is empty does the producer
produce the next block of data.
Block oriented device: The following operations are performed in the block-oriented
device,
o System buffer takes the input.
o After taking the input, the block gets transferred to the user space and then requests
another block.
o Two blocks work simultaneously. When the user processes one block of data, the
next block is being read in.
o OS can swap the processes.
o The OS can move the data from the system buffer to user processes.
Stream oriented device: It performs the following operations, such as:
o Line-at-a-time operation is used for scroll-mode terminals. The user inputs one line
at a time, with a carriage return signaling the end of the line.
o Byte-at-a-time operation is used on forms-mode terminals, where each keystroke is
significant.
2. Double Buffer
In double buffering, two buffers are used in place of one. The producer
produces into one buffer while the consumer consumes from the other buffer
simultaneously. So, the producer does not need to wait for the buffer to be emptied
before filling it. Double buffering is also known as buffer swapping.
Block oriented: This is how a double buffer works. There are two buffers in the system.
o The driver or controller uses one buffer to store data while waiting for it to be taken
by a higher hierarchy level.
o Another buffer is used to store data from the lower-level module.
o A major disadvantage of double buffering is that the complexity of the process gets
increased.
o If the process performs rapid bursts of I/O, then even double buffering may be
insufficient.
Stream oriented: It performs these operations, such as:
o With line-at-a-time I/O, the user process does not need to be suspended for input or
output unless the process runs ahead of the double buffer.
o For byte-at-a-time operations, the double buffer offers no advantage over a single
buffer of twice the length.
3. Circular Buffer
When more than two buffers are used, the collection of buffers is called
a circular buffer. Each individual buffer is one unit in the circular buffer. The data
transfer rate will increase using the circular buffer rather than double buffering.
o In this scheme, the data does not pass directly from the producer to the consumer,
because the data could otherwise change due to buffers being overwritten before
being consumed.
o The producer can only fill up to buffer x-1 while the data in buffer x is waiting to be
consumed.
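A minimal C sketch of such a circular buffer (the slot count and the integer items are assumptions for illustration; a real producer and consumer would run concurrently and need synchronization, which is omitted here). Note how put() stops at x-1 filled slots, exactly as described above, so that a full buffer can be distinguished from an empty one:

#include <stdio.h>

#define N 8                      /* number of buffer slots (assumed) */

static int buf[N];
static int head = 0, tail = 0;   /* head: next write slot, tail: next read slot */

/* Producer: returns 0 if the buffer is full (only N-1 slots usable). */
int put(int item)
{
    if ((head + 1) % N == tail)
        return 0;                /* the "x-1" limit: one slot is left empty */
    buf[head] = item;
    head = (head + 1) % N;
    return 1;
}

/* Consumer: returns 0 if the buffer is empty. */
int get(int *item)
{
    if (tail == head)
        return 0;                /* nothing to consume */
    *item = buf[tail];
    tail = (tail + 1) % N;
    return 1;
}

int main(void)
{
    int v;
    for (int i = 1; i <= 10; i++)
        if (!put(i))
            printf("buffer full, dropping item %d\n", i);
    while (get(&v))
        printf("consumed %d\n", v);
    return 0;
}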
How Buffering Works
In an operating system, a buffer works in the following way:
o Buffering is done to deal effectively with a speed mismatch between the producer
and consumer of the data stream.
o A buffer is created in main memory to accumulate the bytes received from, say, a
modem.
o After receiving the data in the buffer, the data gets transferred from the buffer to
disk in a single operation.
o This process of data transfer is not instantaneous, so the modem needs
another buffer to store additional incoming data.
o When the first buffer is filled, a request is made to transfer its data to disk.
o The modem then fills the second buffer with additional incoming data while the
data in the first buffer is being transferred to the disk.
o When both buffers have completed their tasks, the modem switches back to the first
buffer while the data from the second buffer is transferred to the disk.
o The two buffers decouple the producer and the consumer of the data, thus minimising
the time each has to wait for the other.
o Buffering also accommodates devices that have different data transfer sizes.
Advantages of Buffer
Buffering plays a very important role in any operating system during the
execution of any process or task. It has the following advantages.
o The use of buffers allows uniform disk access. It simplifies system design.
o The system places no data alignment restrictions on user processes doing I/O. By
copying data from user buffers to system buffers and vice versa, the kernel
eliminates the need for special alignment of user buffers, making user programs
simpler and more portable.
o The use of the buffer can reduce the amount of disk traffic, thereby increasing
overall system throughput and decreasing response time.
o The buffer algorithms help ensure file system integrity.
Disadvantages of Buffer
Buffering is not better in all respects. Therefore, there are a few disadvantages, as
follows:
o It is costly and impractical to have the buffer be the exact size required to hold the
number of elements. Thus, the buffer is slightly larger most of the time, with the rest
of the space being wasted.
o Buffers have a fixed size at any point in time. When the buffer is full, it must be
reallocated with a larger size, and its elements must be moved. Similarly, when the
number of valid elements in the buffer is significantly smaller than its size, the buffer
must be reallocated with a smaller size and elements be moved to avoid too much
waste.
o Use of a buffer requires an extra data copy when reading and writing to and from
user processes. When transmitting large amounts of data, the extra copy slows down
performance.
Shared Memory
For applications that exchange large amounts of data, shared memory is far
superior to message-passing techniques like message queues, which require system
calls for every data exchange. To use shared memory, we have to perform two basic
steps:
1. Request from the operating system a memory segment that can be shared between
processes.
2. Associate a part of that memory, or the whole memory, with the address space of the
calling process.
A shared memory segment is a portion of physical memory that is shared by
multiple processes. In this region, processes can set up structures, and others may
read/write on them. When a shared memory region is established in two or more
processes, there is no guarantee that the regions will be placed at the same base
address. Semaphores can be used when synchronization is required.
For example, one process might have the shared region starting at address
0x60000 while the other process uses 0x70000. It is critical to understand that these
two addresses refer to the exact same piece of data. So storing the number 1 in the
first process's address 0x60000 means the second process has the value of 1 at
0x70000. The two different addresses refer to the exact same location.
Usually, communication between related processes is performed using pipes or
named pipes, while communication between unrelated processes can be performed
using named pipes or through the popular IPC techniques of shared memory and
message queues.
But the problem with pipes, FIFOs, and message queues is that the information
exchanged between two processes goes through the kernel, which works as follows.
o The server reads from the input file into its own buffer.
o The server writes this data in a message using a pipe, FIFO, or message queue,
which copies it into the kernel's IPC buffer.
o The client reads the data from the IPC channel, again requiring the data to be copied
from the kernel's IPC buffer to the client's buffer.
o Finally, the data is copied from the client's buffer to the output file.
A total of four copies of data are required (2 read and 2 write). Shared
memory avoids this by letting two or more processes share a memory segment.
With shared memory, the data is copied only twice: from the input file into shared
memory, and from shared memory to the output file.
Functions of IPC Using Shared Memory
Two functions, shmget() and shmat(), are used for IPC using shared
memory. The shmget() function is used to create the shared memory segment, while
the shmat() function is used to attach the shared segment to the process's address
space.
1. shmget() Function
The first parameter specifies the unique number (called key) identifying the shared
segment. The second parameter is the size of the shared segment, e.g., 1024 bytes
or 2048 bytes. The third parameter specifies the permissions on the shared segment.
On success, the shmget() function returns a valid identifier, while on failure, it
returns -1.
Syntax
#include <sys/ipc.h>
#include <sys/shm.h>
int shmget (key_t key, size_t size, int shmflg);
2. shmat() Function:
The shmat() function is used to attach the shared memory segment identified by
shmid to the calling process's address space. The first parameter is the identifier
which the shmget() function returns on success. The second parameter is the
address at which to attach the segment to the calling process; a NULL value means
that the system will automatically choose a suitable address. The third parameter
can be 0 if the second parameter is NULL; otherwise, flags such as SHM_RND can be
specified.
Syntax
#include <sys/types.h>
#include <sys/shm.h>
void *shmat(int shmid, const void *shmaddr, int shmflg);
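Putting the two calls together, here is a minimal single-process sketch (the key 1234, the 1024-byte size, and the message text are illustrative assumptions; in real use, a second process would call shmget() with the same key and attach the same segment):

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* Step 1: request a shareable segment from the operating system. */
    int shmid = shmget((key_t)1234, 1024, IPC_CREAT | 0666);
    if (shmid == -1) { perror("shmget"); return 1; }

    /* Step 2: attach the segment to this process's address space;
     * NULL lets the system choose a suitable address.              */
    char *shm = (char *)shmat(shmid, NULL, 0);
    if (shm == (char *)-1) { perror("shmat"); return 1; }

    strcpy(shm, "hello via shared memory");  /* write directly into the segment */
    printf("segment contains: %s\n", shm);   /* any attached process sees this  */

    shmdt(shm);                              /* detach from our address space   */
    shmctl(shmid, IPC_RMID, NULL);           /* mark the segment for removal    */
    return 0;
}

Because attached processes read and write the shared memory directly, no system call, and hence no kernel copy, is needed for each data exchange.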
Operating System Security
There are several goals of system security. Some of them are as follows:
1. Integrity
Unauthorized users must not be allowed to access the system's objects, and
users with insufficient rights should not modify the system's critical files and
resources.
2. Secrecy
The system's objects must only be available to a small number of authorized
users. The system files should not be accessible to everyone.
3. Availability
All system resources must be accessible to all authorized users, i.e., no single
user/process should be able to consume all system resources. If such a situation
arises, denial of service may occur, in which malware restricts system resources
and prevents legitimate processes from accessing them.
Operating System Security Policies and Procedures
Various operating system security policies may be implemented based on the
organization that you are working in. In general, an OS security policy is a document
that specifies the procedures for ensuring that the operating system maintains a specific
level of integrity, confidentiality, and availability.
OS security protects systems and data from worms, malware, threats,
ransomware, backdoor intrusions, viruses, etc. Security policies cover all
preventative activities and procedures that ensure an operating system's protection,
including protection against data being stolen, edited, or deleted.
As OS security policies and procedures cover a large area, there are various
techniques for addressing them. Some of them are as follows:
1. Installing and updating anti-virus software
2. Ensure the systems are patched or updated regularly
3. Implementing user management policies to protect user accounts and privileges.
4. Installing a firewall and ensuring that it is properly set to monitor all incoming and
outgoing traffic.
To develop and implement OS security policies and procedures, you must
first determine which assets, systems, hardware, and data are the most vital to your
organization. Once that is done, a policy can be developed to secure and safeguard
them properly.
Authentication
The process of identifying every system user and associating the programs executing
with those users is known as authentication. The operating system is responsible for
implementing a security system that ensures the authenticity of a user who is
executing a specific program. In general, operating systems identify and
authenticate users in three ways.
1. Username/Password
Every user contains a unique username and password that should be input correctly
before accessing a system.
2. User Attribution
These techniques usually include biometric verification, such as fingerprints, retina
scans, etc. This authentication is based on user uniqueness and is compared to
database samples already in the system. Users are only allowed access if there is a
match.
3. User card and Key
To login into the system, the user must punch a card into a card slot or enter a key
produced by a key generator into an option provided by the operating system.
Authentication is used by a server when the server needs to know exactly who is
accessing their information or site.
Authentication is used by a client when the client needs to know that the server is
the system it claims to be.
In authentication, the user or computer has to prove its identity to the server or
client.
Usually, authentication by a server entails the use of a user name and password.
Other ways to authenticate can be through cards, retina scans, voice recognition,
and fingerprints.
Authentication by a client usually involves the server giving a certificate to the client in
which a trusted third party such as Verisign or Thawte states that the server belongs to the
entity (such as a bank) that the client expects it to.
Authentication does not determine what tasks the individual can do or what files the
individual can see. Authentication merely identifies and verifies who the person or
system is.
Authorization
Authorization is the process of determining what an authenticated user or process is
permitted to do, i.e., which resources it can access and which operations it can
perform.
Android Architecture
1. Applications
An application is the top layer of the android architecture. The pre-installed applications
like camera, gallery, home, contacts, etc., and third-party applications downloaded
from the play store like games, chat applications, etc., will be installed on this layer.
These applications run within the Android runtime with the help of the classes and
services provided by the application framework.
2. Application framework
The Application Framework provides several important classes used to create an Android
application. It provides a generic abstraction for hardware access and helps in
managing the user interface and application resources. Generally, it provides the
services with the help of which we can create a particular class and make that class
helpful for application creation.
It includes different types of services, such as activity manager, notification manager,
view system, package manager etc., which are helpful for the development of our
application according to the prerequisite.
The Application Framework layer provides many higher-level services to applications in
the form of Java classes. Application developers are allowed to make use of these
services in their applications. The Android framework includes the following key
services:
o Activity Manager: Controls all aspects of the application lifecycle and activity stack.
o Content Providers: Allows applications to publish and share data with other
applications.
o Resource Manager: Provides access to non-code embedded resources such as
strings, colour settings and user interface layouts.
o Notifications Manager: Allows applications to display alerts and notifications to the
user.
o View System: An extensible set of views used to create application user interfaces.
3. Application runtime
Android Runtime environment contains components like core libraries and the Dalvik
virtual machine (DVM). It provides the base for the application framework and
powers our application with the help of the core libraries.
Unlike the stack-based Java Virtual Machine (JVM), the Dalvik Virtual Machine (DVM) is
a register-based virtual machine, designed and optimized for Android to ensure that a
device can run multiple instances efficiently.
It depends on the Linux kernel layer for threading and low-level memory management.
The core libraries enable us to implement android applications using the
standard JAVA or Kotlin programming languages.
4. Platform libraries
The Platform Libraries include various C/C++ core libraries and Java-based libraries such
as Media, Graphics, Surface Manager, OpenGL, etc., to support Android
development.
o app: Provides access to the application model and is the cornerstone of all Android
applications.
o content: Facilitates content access, publishing and messaging between applications
and application components.
o database: Used to access data published by content providers and includes SQLite
database management classes.
o OpenGL: A Java interface to the OpenGL ES 3D graphics rendering API.
o os: Provides applications with access to standard operating system services,
including messages, system services and inter-process communication.
o text: Used to render and manipulate text on a device display.
o view: The fundamental building blocks of application user interfaces.
o widget: A rich collection of pre-built user interface components such as buttons,
labels, list views, layout managers, radio buttons etc.
o WebKit: A set of classes intended to allow web-browsing capabilities to be built into
applications.
o media: The media library provides support for playing and recording audio and video formats.
o surface manager: It is responsible for managing access to the display subsystem.
o SQLite: It provides database support, and FreeType provides font support.
o SSL: Secure Sockets Layer is a security technology to establish an encrypted link
between a web server and a web browser.
5. Linux Kernel
Linux Kernel is the heart of the android architecture. It manages all the available drivers
such as display, camera, Bluetooth, audio, memory, etc., required during the
runtime.
The Linux Kernel will provide an abstraction layer between the device hardware and the
other android architecture components. It is responsible for the management of
memory, power, devices etc. The features of the Linux kernel are:
o Security: The Linux kernel handles the security between the application and the
system.
o Memory Management: It efficiently handles memory management, thereby
providing the freedom to develop our apps.
o Process Management: It manages processes well, allocating resources to
processes whenever they need them.
o Network Stack: It effectively handles network communication.
o Driver Model: It ensures that applications work properly on the device; hardware
manufacturers are responsible for building their drivers into the Linux build.
Android Applications
Android applications are usually developed in the Java language using the Android
Software Development Kit. Once developed, Android applications can be packaged
easily and sold either through a store such as Google Play, SlideME, Opera
Mobile Store, Mobango, F-droid or the Amazon Appstore.
Android powers hundreds of millions of mobile devices in more than 190 countries
around the world. It's the largest installed base of any mobile platform and growing
fast. Every day more than 1 million new Android devices are activated worldwide.
Android Emulator
The emulator is a tool used to develop and test Android applications without using
any physical device.
The Android emulator has all of the hardware and software features of a real mobile
device, except the ability to place actual phone calls. It provides a variety of
navigation and control keys, and it also provides a screen to display your application.
The emulator utilizes Android virtual device configurations. Once your application is
running on it, it can use the services of the Android platform to invoke other
applications, access the network, play audio and video, and store and retrieve data.
Advantages of Android Operating System
Compared with other platforms, Android offers several important advantages, such
as:
o Android Google Developer: The greatest advantage of Android is Google. Google
owns the Android operating system and is one of the most trusted and reputed
names on the web, which gives users the confidence to buy Android devices.
o Android Users: Android is the most widely used mobile operating system, with more
than a billion users, and it is also the fastest-growing operating system in the world.
Its large user base keeps increasing the number of applications and programs
available under the Android name.
o Android Multitasking: Most of us admire this feature of Android. Users can do many
tasks at once, opening several applications at the same time and managing them
easily, thanks to Android's excellent UI.
o Google Play Store App: The best part of Android is the availability of a huge number
of applications. The Google Play store is reported to be the world's largest mobile app
store; it has practically everything from movies to games and much more, all of
which can be easily downloaded and accessed from an Android phone.
o Android Notification and Easy Access: Notifications for any SMS, email, or missed
call are easily accessible from the home screen or the notification bar of the Android
phone. The user can view all the notifications on the top bar, and the UI makes it
easy to view more than five Android notifications at once.
o Android Widget: The Android operating system has a lot of widgets. Widgets improve
the user experience considerably and help with multitasking. You can add any widget
to your home screen depending on the feature you need, and see notifications,
messages, and a great deal more without opening applications.
Disadvantages of Android Operating System
We know that the Android operating system attracts a considerable amount of
interest from users nowadays, but at the same time it has a few weaknesses. The
following are disadvantages of the Android operating system:
o Android Advertisement Pop-ups: Applications are freely available in the Google
Play store, yet many of these applications show tons of advertisements in the
notification bar and inside the application. These advertisements are extremely
annoying and create a real problem in managing your Android phone.
o Android requires a Google ID: You can't get into an Android device without an
email ID and password. A Google ID is also very useful for unlocking an Android
phone.
o Android Battery Drain: Android handsets are considered among the most
battery-consuming devices. In the Android operating system, many processes run
in the background, which results in the draining of the battery, and it is difficult to
stop these applications since the majority of them are system applications.
o Android Malware/Virus/Security: Android devices are not viewed as being as
protected as some other platforms. Hackers keep attempting to steal your data; it
is relatively easy to target an Android phone, and millions of attack attempts are
made on Android phones every day.
Application Framework
The Android OS exposes the underlying libraries and features of the Android device
through a Java API. This is what is known as the Android framework. The framework
exposes a safe and uniform means to utilize Android device resources.
1) Activity Manager
Applications use the Android activity component for presenting an entry point to the app. Android
Activities are the components that house the user interface that app users interact with. As end-
users interact with the Android device, they start, stop, and jump back and forth across many
applications. Each navigation event triggers activation and deactivation of many activities in
respective applications.
The Android ActivityManager is responsible for predictable and consistent behavior during
application transitions. The ActivityManager provides a slot for app creators to have their apps
react when the Android OS performs global actions. Applications can listen to events such as
device rotation, app destruction due to memory shortage, an app being shifted out of focus, and
so on.
Some examples of the ways applications can react to these transitions include pausing
activity in a game or stopping music playback during a phone call.
2) Window Manager
Android reads screen information to determine the requirements needed to create
windows for applications. Windows are the slots where we can view our app user interface.
Android uses the Window manager to provide this information to the apps and the system as they
run so that they can adapt to the mode the device is running on.
The Window Manager helps in delivering a customized app experience. Apps can fill the complete
screen for an immersive experience or share the screen with other apps. Android enables this by
allowing multi-windows for each app.
3) Location Manager
Most Android devices are equipped with GPS receivers that can get the user's location
from satellite information, with precision that can go all the way down to meters.
Programmers can prompt users for location permission and deliver location-aware
experiences.
Android is also able to utilize wireless technologies to further enrich location details
and increase coverage when devices are in enclosed spaces. Android provides these
features under the umbrella of the LocationManager.
4) Telephony Manager
Most Android devices serve a primary role in telephony. Android uses the
TelephonyManager to combine hardware and software components to deliver telephony
features. The hardware components include external parts such as the SIM card, and
device parts such as the microphone, camera, and speakers. The software components
include native components such as the dial pad, phone book, and ringtone profiles. Using
the TelephonyManager, a developer can extend or fine-tune the default calling
functionality.
5) Resource Manager
Android apps usually come with more than just code. They also have other resources
such as icons, audio and video files, animations, text files, and the like. Android helps in
making sure that there is efficient, responsive access to these resources. It also ensures
that the right resources are delivered to end users; for example, the proper language
text files are used when populating fields in the apps.
6) View System
Android also provides a means to easily create common visual components needed for app
interaction. These components include widgets like buttons, image holders such as ImageView,
components to display a list of items such as ListView, and many more. The components are
premade but are also customizable to fit app developer needs and branding.
7) Notification Manager
The Notification Manager is responsible for informing Android users of application events. It does
this by giving users visual, audio or vibration signals or a combination of them when an event
occurs. These events have external and internal triggers. Some examples of internal triggers are
low-battery status events that trigger a notification to show low battery. Another example is user-
specified events like an alarm. Some examples of external triggers include new messages or new
wifi networks detected.
Android provides a means for programmers and end-users to fine-tune the notification
system. This helps ensure that notification events are sent and received in a manner
that best suits users and their current environments.
8) Package Manager
Android also provides access to information about installed applications. Android keeps track of
application information such as installation and uninstallation events, permissions the app
requests, and resource utilization such as memory consumption.
This information can enable developers to make their applications activate or deactivate
functionality depending on new features presented by companion apps.
9) Content Provider
Android has a standardized way to share data between applications on the device using the
content provider. Developers can use the content provider to expose data to other applications.
For example, they can make the app data searchable from external search applications. Android
itself exposes data such as calendar data, contact data, and the like using the same system.
Android File System and Partitions
You can find out which partitions are available, along with the size of each partition,
on your Android device by running the appropriate adb command.
/boot
This is the boot partition of your Android device, as the name suggests. It includes the
android kernel and the ramdisk. The device will not boot without this partition. Wiping this
partition from recovery should only be done if absolutely required and once done, the device
must NOT be rebooted before installing a new one, which can be done by installing a ROM that
includes a /boot partition.
/system
As the name suggests, this partition contains the entire Android OS. This includes the
Android GUI and all the system applications that come pre-installed on the device. Wiping this
partition will remove Android from the device without rendering it unbootable, and you will still
be able to put the phone into recovery or bootloader mode to install a new ROM
/recovery
This partition is specially designed for backup. The recovery partition can be considered
an alternative boot partition that lets the device boot into a recovery console for
performing advanced recovery and maintenance operations on it.
/data
This is also called the userdata partition. It contains the user's data: contacts,
SMS messages, settings, and all the Android applications you have installed. When you
perform a factory reset on your device, this partition is wiped, returning the device to
the state it was in when you first used it, or the way it was after the last official or
custom ROM installation.
/cache
This is the partition where Android stores frequently accessed data and app
components. Wiping the cache doesn't affect your personal data; it simply gets rid of the
existing data there, which gets automatically rebuilt as you continue using the device.
/misc
This partition contains miscellaneous system settings in the form of on/off switches.
These settings may include the CID (Carrier or Region ID), USB configuration, certain
hardware settings, etc. This is an important partition, and if it is corrupt or missing,
several of the device's features will not function normally.
/sdcard
This is not a partition on the internal memory of the device but rather the SD card. In
terms of usage, this is your storage space to use as you see fit: to store your media,
documents, ROMs, etc. Wiping it is perfectly safe as long as you first back up all the
data you require from it to your computer. Note, though, that several user-installed
apps save their data and settings on the SD card, so wiping this partition will make you
lose that data.
/sd-ext
This is not a standard Android partition, but has become popular in the custom ROM
scene. It is basically an additional partition on your SD card that acts as the /data partition. It
is especially useful on devices with little internal memory allotted to the /data partition. Thus,
users who want to install more programs than the internal memory allows can make this
partition and use it for installing their apps.
Sometimes you might prefer to use the traditional file system to store your data. For
example, you might want to store the text of poems you want to display in your applications.
In Android, you can use the classes in the java.io package to do so.
Saving to Internal Storage
To save text into a file, you use the FileOutputStream class. The openFileOutput()
method opens a named file for writing, with the mode specified. For example, the
MODE_WORLD_READABLE constant indicates that the file is readable by all other
applications; other available modes include MODE_PRIVATE, MODE_APPEND, and
MODE_WORLD_WRITEABLE.
To convert a character stream into a byte stream, you use an instance of the
OutputStreamWriter class, passing it an instance of the FileOutputStream object. You
then use its write() method to write the string to the file. To ensure that all the bytes
are written to the file, use the flush() method. Finally, use the close() method to close
the file.
Small Application Development using Android Development Framework