USIT103 Operating System
(IT)
SEMESTER - I (CBCS)
OPERATING SYSTEM
SUBJECT CODE : USIT103
© UNIVERSITY OF MUMBAI
Prof. Suhas Pednekar
Vice Chancellor
University of Mumbai, Mumbai.
Prof. Ravindra D. Kulkarni
Pro Vice-Chancellor,
University of Mumbai.
Prof. Prakash Mahanwar
Director,
IDOL, University of Mumbai.
Published by : Director
Institute of Distance and Open Learning ,
University of Mumbai,
Vidyanagari, Mumbai - 400 098.
DTP Composed : Varda Offset and Typesetters,
Andheri (W), Mumbai - 400 053.
Pace Computronics, "Samridhi" Paranjpe 'B' Scheme, Vile Parle (E), Mumbai - 57.
Printed by : ipin Enterprises,
Tantia Jogani Industrial Estate, Unit No. 2,
Ground Floor, Sitaram Mill Compound,
J.R. Boricha Marg, Mumbai - 400 011.
CONTENT
Chapter No. Title Page No.
UNIT I
Chapter 1 Introduction ............................................................................................................ 1
Chapter 2 Operating System Concept ...................................................................................... 9
Chapter 3 Processes and Threads ...........................................................................................20
UNIT II
Chapter 4 Memory Management..............................................................................................38
Chapter 5 Paging and segmentation.......................................................................................... 48
Chapter 6 File System.............................................................................................................. 60
UNIT III
Chapter 7 Principles of I/O hardware and Software.................................................................. 84
Chapter 8 I/O Devices..............................................................................................................97
Chapter 9 Deadlocks..............................................................................................................112
UNIT IV
Chapter 10 Virtualization and Cloud..........................................................................................127
Chapter 11 Multiprocessing system...........................................................................................133
Chapter 12 Multiple Processing Systems...................................................................................138
UNIT V
Chapter 13 Linux Case Study................................................................................................... 147
Chapter 14 Android Case study................................................................................................169
Chapter 15 Windows Case Study ............................................................................................195
*****
Syllabus
B. Sc (Information Technology) Semester – I
Course Name: Operating Systems Course Code: USIT103
Periods per week (1 Period is 50 minutes): 5
Credits: 2
Evaluation System:                    Hours    Marks
    Theory Examination                2½       75
    Internal                          -        25
List of Practical:
1. Installation of virtual machine software.
1
INTRODUCTION
Unit Structure
1.0 Objectives
1.1 Introduction
1.2 What is operating system
1.2.1 Definition
1.2.2 The Operating System as an Extended Machine
1.2.3 The Operating System as a Resource Manager
1.3 History of operating system
1.3.1 First Generation OS
1.3.2 Second Generation OS
1.3.3 Third Generation OS
1.3.4 Fourth Generation OS
1.3.5 Fifth Generation OS
1.4 Computer hardware
1.4.1 Processor
1.4.2 Memory
1.4.3 Disk
1.4.4 Booting of system
1.5 Let us Sum Up
1.6 List of Reference
1.7 Bibliography
1.8 Unit End Questions
1.0 OBJECTIVES
1.1 INTRODUCTION
The operating system handles resources and distributes them to the different
parts of the system.
It is the intermediary between users and the computer system, and it
provides a level of abstraction that keeps complicated details hidden
from the user.
1.2.1 Definition:
• Operating System is system software which acts as an intermediary
between user and hardware.
• Operating System keeps the complicated details of the hardware
hidden from the user and provides the user with an easy and simple interface.
• It performs functions which involve efficient allocation of resources
among user programs, the file system, and input/output devices.
Explanation of Figure 1:
The hardware components lie at the bottom of the diagram. Hardware is
considered the most crucial part of the computer system, so to protect
it from direct access it is kept at the lowest level of the hierarchy.
Hardware components include circuits, input/output devices, the
monitor, etc.
The operating system runs in the kernel mode of the system, in which the
OS has access to all hardware and can execute all machine
instructions. The rest of the system runs in user mode.
1.2.2 The Operating System as an Extended Machine:
The structure of a computer system at the machine-language level is
complicated to program, especially for input/output. Programmers do
not want to deal with the hardware directly, so a level of abstraction
has to be maintained.
The operating system provides a layer of abstraction for using disks:
files. This abstraction allows programs to create, write, and read
files without having to deal with the messy details of how the
hardware actually works.
Abstraction is the key to managing all the complexity.
Good abstractions turn a nearly impossible task into two manageable
ones.
The first is defining and implementing the abstractions.
The second is using these abstractions to solve the problem at hand.
In this top-down view, the operating system primarily provides
abstractions to application programs.
E.g.: It is much easier to deal with photos, emails, songs, and Web
pages than with the details of how these files are stored on SATA (or
other) disks.
1.3 HISTORY OF OPERATING SYSTEM
The main goal of this generation was that all software, including the
operating system, OS/360, worked on all models.
An important feature introduced in this generation was multiprogramming,
wherein while one job was waiting for I/O to complete, another job
could be using the CPU. In this way, maximum utilization of the CPU
could be achieved.
Spooling gave the ability to read jobs from cards onto the disk.
Time sharing allocated the CPU in turn to a number of users.
Third-generation computers were used for large scientific
calculations and massive commercial data-processing runs.
Fifth-generation systems use natural language processing for analysis
of input. Computers in this era are capable of self-learning.
Year: 1990 – Present
Programming language: High-level programming languages
Operating systems: iOS, Android, Symbian, RIM
Hardware: Ultra large scale integrated chips
Computers: Handheld devices, wearable devices, PDAs, smartphones
1.4.1 Processor:
The CPU is the most vital part of the computer. Instructions are
fetched from memory and executed by the CPU using the
fetch-decode-execute cycle.
CPUs contain registers to hold key variables and temporary data.
A special register called the program counter contains the memory
address of the next instruction to be fetched. The Program Status
Word contains the condition code bits.
The Intel Pentium 4 introduced multithreading or hyperthreading to
the x86 processor, allowing the CPU to hold the state of two
different threads and switch back and forth on a nanosecond time
scale.
A GPU is a processor with thousands of tiny cores, which are very
good for many small computations done in parallel, such as rendering
polygons in graphics applications.
1.4.2 Memory:
• The basic expectations from memory are speed, storage capacity, and
low cost, but no single memory satisfies all of them.
• The memory system is therefore built as a hierarchy of layers.
• Registers inside the CPU form the top layer in the hierarchy and
give the quickest access to data.
• Cache memory is next in the hierarchy. The most heavily used data
are kept in the cache for high-speed access; a lookup either succeeds
(a cache hit) or fails (a cache miss).
• Systems have at least two cache levels, L1 and L2.
• The L1 cache is always inside the CPU and feeds decoded
instructions into the CPU's execution engine.
• The L2 cache holds megabytes of recently used memory words. The
difference between the L1 and L2 caches lies in the timing.
• Main memory, also known as RAM, comes next in the hierarchy. On a
cache miss, the request goes to main memory.
Figure 2. Memory hierarchy
Reference: Modern Operating Systems, Fourth Edition, Andrew S.
Tanenbaum, Herbert Bos
1.7 BIBLIOGRAPHY
Modern Operating Systems by Andrew S. Tanenbaum and Herbert Bos
Operating System Concepts by Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne
*****
2
OPERATING SYSTEM CONCEPT
Unit Structure
2.0 Objectives
2.1 Introduction
2.2 Different Operating Systems
2.2.1 Mainframe Operating Systems
2.2.2 Server Operating Systems
2.2.3 Multiprocessor Operating Systems
2.2.4 Personal Operating Systems
2.2.5 Handheld Operating Systems
2.2.6 Embedded Operating Systems
2.2.7 Sensor-Node Operating Systems
2.2.8 Real-Time Operating Systems
2.2.9 Smart Card Operating Systems
2.3 Operating system Concepts
2.4 System Calls
2.4.1 System calls for Process management
2.4.2 System calls for File management
2.4.3 System calls for Directory management
2.4.4 Windows Win32 API
2.5 Operating System Structure
2.5.1 Monolithic System
2.5.2 Layered System
2.5.3 Microkernels
2.5.4 Client Server System
2.5.5 Exokernel
2.6 Let us Sum Up
2.7 List of Reference
2.8 Bibliography
2.9 Unit End Exercise
2.0 OBJECTIVES
• To understand the operating system services provided to users and
processes.
• To understand the various operating system structures.
• To describe various types of operating systems.
2.1 INTRODUCTION
• These computers have high-speed communication mechanisms with
strong connectivity.
• Personal computers are also built with multiprocessor technology.
• Multiprocessor operating systems give high processing speed by
combining multiple processors in a single system.
2.2.9 Smart Card Operating Systems:
• Smart card operating systems run on smart cards, which contain a
processor embedded inside the card's chip.
• They have severe processing power and memory constraints.
• These operating systems can handle a single function, like making
electronic payments, and are licensed software.
Files are the data which the user wants to retrieve back from the
computer. The operating system is supposed to maintain the data on the
hard disk and retrieve it whenever needed. The operating system arranges
files in the form of a hierarchy, with the data placed inside directories.
Shell: The shell is the command interpreter for UNIX. The shell is the
intermediary between the user on the terminal and the operating system.
Every shell has a terminal for entering data and getting output.
Instructions are given to the computer in the form of commands.
A system call is the way a user program requests a service from the kernel.
System calls provide an interface to the services made available by an
operating system.
2.4.3 System calls for Directory management:
• mkdir is a system call that creates an empty directory, whereas
rmdir removes an empty directory.
• link allows the same file to appear under two or more names, often in
different directories. This lets several members of the same
programming team share a common file, with each of them having the
file appear in his own directory, possibly under a different name.
• By executing the mount system call, a USB file system can be
attached to the root file system.
• The mount call makes it possible to integrate removable media into a
single integrated file hierarchy, without having to worry about which
device a file is on.
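These calls can be exercised from Python's os module, which wraps mkdir, rmdir, and link on POSIX systems (mount is omitted, since it normally requires privileges). All paths below are temporary and invented for the demonstration:

```python
import os
import tempfile

# A sketch of the directory-management calls above via Python's os module.
base = tempfile.mkdtemp()

os.mkdir(os.path.join(base, "projects"))      # mkdir: create an empty directory

orig_path = os.path.join(base, "notes.txt")
with open(orig_path, "w") as f:
    f.write("shared data")

alias = os.path.join(base, "projects", "alias.txt")
os.link(orig_path, alias)                     # link: same file under a second name

# Both names refer to the same inode, i.e. the same physical file.
same = os.stat(orig_path).st_ino == os.stat(alias).st_ino
print(same)  # True

os.unlink(alias)                              # drop the extra name
os.rmdir(os.path.join(base, "projects"))      # rmdir: remove the now-empty directory
```

Note that rmdir fails if the directory is not empty, which is why the link is removed first.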
Processes in outer rings must make a system call to access services in
an inner ring.
• The diagram reflects the structure of the THE operating system, with
the following details:
• Layer 0 dealt with allocation of the processor, switching between
processes when interrupts occurred or timers expired.
• Layer 1 did the memory management. It allocated space for processes
in main memory.
• Layer 2 handled communication between each process and the operator
console.
• Layer 3 took care of managing the I/O devices and buffering the
information streams.
• Layer 4 was where the user programs were found.
• Layer 5 was where the system operator process was located.
• System calls were made via a TRAP instruction whose parameters were
carefully checked for validity before the call was allowed to proceed.
2.5.4 Client-Server System:
• The system is divided into servers, each of which provides some
service, and clients, which use these services. This model is known
as the client-server model.
• Since clients communicate with servers by sending messages, the
clients need not know whether the messages are handled locally on
their own machines, or whether they are sent across a network to
servers on a remote machine.
• As far as the client is concerned, requests are sent and replies come
back.
• Thus the client-server model is an abstraction that can be used for a
single machine or for a network of machines.
2.5.5 Exokernel:
• The exokernel runs in the bottom layer of kernel mode.
• Its job is to allocate resources to virtual machines and then check
attempts to use them, to make sure no machine is trying to use
somebody else's resources.
• The advantage of the exokernel scheme is that it saves a layer of
mapping, whereas a virtual machine monitor must maintain tables to
remap disk addresses.
• The exokernel need only keep track of which virtual machine has been
assigned which resource.
2.6 LET US SUM UP
1. Different types of operating systems are used in different types of
machines depending upon the needs of the user. Some of these are
mainframe operating systems, server operating systems, embedded
operating systems, and handheld operating systems.
2. System calls show what the operating system does. Different types
of system calls are used in operating system activities like file
management, process creation, and directory management.
3. The structure of the operating system has evolved with time. The
most common structures include monolithic, layered, microkernel, etc.
2.8 BIBLIOGRAPHY
*****
3
PROCESSES AND THREADS
Unit Structure
3.0 Objectives
3.1 Introduction
3.2 Process
3.2.1 Process Creation
3.2.2 Process Termination
3.2.3 Process State
3.3 Threads
3.3.1 Thread Usage
3.3.2 Classical Thread Model
3.3.3 Implementing Threads in User Space
3.3.4 Implementing Threads in Kernel Space
3.3.5 Hybrid Implementation
3.4 Interprocess Communication
3.4.1 Race Condition
3.4.2 Critical Region
3.4.3 Mutual Exclusion and busy waiting
3.4.4 Sleep and Wake up
3.4.5 Semaphores
3.4.6 Mutex
3.5 Scheduling
3.5.1 First Come First Serve Scheduling
3.5.2 Shortest Job First Scheduling
3.5.3 Priority Scheduling
3.5.4 Round Robin Scheduling
3.5.5 Multiple Queue
3.6 Classical IPC problem
3.6.1 Dining Philosophers
3.6.2 Reader Writer
3.7 Let us Sum Up
3.8 List of Reference
3.9 Bibliography
3.10 Unit End Questions
3.0 OBJECTIVES
3.1 INTRODUCTION
• The most important concept in any operating system is the process,
which is an abstraction of a running program.
• Processes support the ability to perform concurrent operations even
with a single processor. Modern computing exists only because of
processes.
• The operating system can make the computer more productive by
switching the CPU between processes.
3.2 PROCESS
3.2.1 Process Creation:
Four principal events cause processes to be created:
1. System initialization:
• When an operating system is booted, numerous processes are created.
• Some of these are foreground processes: processes that interact with
(human) users and perform work for them.
• Others run in the background; these are called daemons. They are not
associated with particular users, but instead have some specific
function.
Error exit:
• A process terminates when it discovers an error caused by its input.
• For example, if a user types the command
• cc xyz.c
• to compile the program xyz.c and no such file exists, the compiler
simply announces this fact and exits.
Fatal error:
• The third type of termination is caused by a program bug, like
executing an illegal instruction, referencing non-existent memory, or
dividing by zero.
Figure 3.1
Reference: “Operating System Concepts” by Abraham Silberschatz,
Greg Gagne, and Peter Baer Galvin
Any process in the system is present in any one of the given states
3.3 THREADS
Advantages:
▪ Can be implemented on an OS that does not support threads, since the
threads are implemented by a library.
▪ Requires no modification of the operating system.
▪ Gives better performance, as there is no context switch into the
kernel.
▪ Each process is allowed to have its own customized scheduling
algorithm.
Disadvantages:
• A blocking system call would cause all threads to stop.
• If a thread starts running, no other thread in that process can run
unless the running thread voluntarily gives up the CPU.
Figure 3.2
3.3.4 Implementing Threads in Kernel Space:
• The kernel manages the threads, keeping track of all of them in a
thread table in the system.
• When a thread wants to create a new thread or destroy an existing
thread, it makes a kernel call, which then does the creation or
destruction by updating the kernel thread table.
• The kernel's thread table holds each thread's registers, state, and
other information; the kernel also maintains the traditional process
table to keep track of processes.
Advantages:
• A thread that blocks causes no particular problem. The kernel can run
another thread from the same process or can run another process.
• Similarly, a page fault in one thread does not automatically block the
other threads in the process.
Disadvantages:
• Thread creation and the related calls are now system calls, and hence
much slower.
• There is a relatively greater cost of creating and destroying threads
in the kernel.
• When a signal comes in, which thread should handle it is a problem.
Figure 3.3
3.3.5 Hybrid implementation:
• A hybrid implementation combines the advantages of user-level threads
with kernel-level threads. One way is to use kernel-level threads and
then multiplex user-level threads onto some or all of them.
• This model provides maximum flexibility.
• The kernel is aware of only the kernel-level threads and schedules
those.
• The user-level threads are created, destroyed, and scheduled just like
user-level threads in a process that runs on an operating system
without multithreading capability.
Figure 3.4
3.4 INTERPROCESS COMMUNICATION
• Interprocess communication is a mechanism that allows the exchange of
data between processes.
• It enables resource and data sharing between the processes without
interference, and provides information about process status to other
processes.
• Three problems have to be solved:
• How can one process pass information to another?
• How can we make sure two or more processes do not get in each
other's way?
• How is proper sequencing maintained when dependencies are present?
Figure 3.5
Race conditions can be avoided by ensuring that no two processes are ever
in their critical regions at the same time.
3.4.3.2 Lock Variables:
A single, shared lock variable is initially 0. When a process wants
to enter its critical section, it first tests the lock. If the lock is
0, the process sets it to 1 and then enters the critical section. If the
lock is already 1, the process just waits until the lock variable becomes
0. Thus, a 0 means that no process is in its critical section, and a 1
means that some process is in its critical section, so the others must wait.
Unfortunately, this scheme has a flaw. Suppose one process reads the
lock, sees that it is 0, and is suspended before it can set it to 1; a
second process then sets the lock to 1 and enters its critical region.
When the first process runs again, it will also set the lock to 1, and
two processes will be in their critical regions at the same time.
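The flaw comes from the test and the set being two separate steps. A mutex provided by the system performs them atomically; a minimal sketch using Python's threading.Lock (the thread and iteration counts are arbitrary):

```python
import threading

counter = 0  # shared variable: the "critical region" updates it

def worker(lock, n):
    global counter
    for _ in range(n):
        with lock:          # acquire: test-and-set performed atomically
            counter += 1    # critical region: only one thread at a time

lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(lock, 100_000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: no update is lost
```

Without the lock, two threads could read the same old value of counter and one increment would be lost, exactly the race described above.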
Problem: The producer wants to put data into the buffer, but the buffer
is already full.
Solution: The producer goes to sleep, to be awakened when the consumer
has removed data.
Problem: The consumer wants to remove data from the buffer, but the
buffer is already empty.
Solution: The consumer goes to sleep until the producer puts some data
in the buffer and wakes the consumer up.
3.4.5 Semaphore:
E. W. Dijkstra (1965) suggested the semaphore, an integer variable that
counts the number of wakeups saved for future use. A semaphore could
have the value 0, indicating that no wakeups were saved, or some positive
value if one or more wakeups were pending.
Two operations are performed on semaphores, called down (sleep) and
up (wakeup), which processes use to synchronize their activities.
These operations are also known as wait(), denoted by P, and signal(),
denoted by V:
wait(S)
{
    while (S <= 0)
        ;        /* busy wait */
    S--;
}

signal(S)
{
    S++;
}
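The down/up operations map directly onto acquire()/release() of Python's threading.Semaphore; a minimal sketch in the producer-consumer style (the item count is arbitrary):

```python
import threading

# A semaphore initialized to 0 counts saved wakeups: acquire() is
# down/wait/P (blocks while the count is 0), release() is up/signal/V.
items = threading.Semaphore(0)
results = []

def consumer():
    for _ in range(3):
        items.acquire()                # down: sleep until a wakeup is saved
        results.append("consumed")

def producer():
    for _ in range(3):
        results.append("produced")
        items.release()                # up: save a wakeup for the consumer

c = threading.Thread(target=consumer)
p = threading.Thread(target=producer)
c.start(); p.start()
c.join(); p.join()

print(len(results))  # 6: three produced and three consumed
```

Because the consumer blocks in acquire() whenever the count is 0, it can never "consume" an item that has not yet been produced, which is exactly the lost-wakeup problem the semaphore solves.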
3.4.6 Mutex:
• A mutex is a simplified version of the semaphore, used for managing
mutual exclusion of shared resources.
• Mutexes are easy and efficient to implement, and useful in thread
packages that are implemented entirely in user space.
• A mutex is a shared variable that can be in one of two states:
unlocked or locked.
• A counting semaphore, by contrast, is initialized to the number of
resources available.
• Each process that wishes to use a resource performs a wait() operation
on the semaphore. When a process releases a resource, it performs a
signal() operation.
• When the count for the semaphore goes to 0, all resources are being
used.
• Any process that then wishes to use a resource will block until the
count becomes greater than 0.
3.5 SCHEDULING
• The part of the operating system that makes the choice is called the
scheduler, and the algorithm it uses is called the scheduling algorithm.
• Processes are of two types: compute-bound and I/O-bound.
Compute-bound processes have long CPU bursts and infrequent I/O waits;
I/O-bound processes have short CPU bursts and frequent I/O waits.
• The length of the CPU burst is an important factor: it takes the same
time to issue the hardware request to read a disk block no matter how
much or how little time it takes to process the data after they arrive.
• Scheduling is of two types, preemptive and non-preemptive, and
scheduling algorithms are classified as batch, interactive, and real
time.
• CPU scheduling takes place when one of the following conditions is true:
1. A process switches from the running state to the waiting state.
2. A process switches from the running state to the ready state.
3. A process switches from the waiting state to the ready state.
4. A process terminates.
• Scheduling under conditions 1 and 4 is called non-preemptive
scheduling; scheduling under conditions 2 and 3 is preemptive
scheduling.
Gantt chart:
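A Gantt chart for First-Come First-Serve is just the running sum of burst times, one segment per process. A minimal sketch; the burst times (24, 3, 3, all arriving at time 0) are illustrative:

```python
# FCFS: processes run in arrival order. Each Gantt-chart segment is
# (name, start, end); waiting time = start time when all arrive at 0.
def fcfs(bursts):
    start, chart, waits = 0, [], []
    for name, burst in bursts:
        chart.append((name, start, start + burst))  # one Gantt segment
        waits.append(start)                         # time spent waiting
        start += burst
    return chart, sum(waits) / len(waits)

chart, avg_wait = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
print(chart)     # [('P1', 0, 24), ('P2', 24, 27), ('P3', 27, 30)]
print(avg_wait)  # 17.0
```

Note how the long job P1 makes the short jobs wait (the convoy effect); running P2 and P3 first would cut the average waiting time sharply, which is the motivation for Shortest Job First.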
• However, a philosopher can only eat spaghetti when he has both left
and right forks. Each fork can be held by only one philosopher and so
a philosopher can use the fork only if it is not being used by another
philosopher.
• After he finishes eating, he needs to put down both forks so they
become available to others.
• A philosopher can take the fork on his right or the one on his left as
they become available, but cannot start eating before getting both of
them.
• The problem is how to design a discipline of behaviour (a concurrent
algorithm) such that no philosopher will starve.
• Mutual exclusion is the basic idea of the problem; the dining
philosophers create a generic and abstract scenario useful for
explaining issues of this type.
• The failures these philosophers may experience are analogous to the
difficulties that arise in real computer programming when multiple
programs need exclusive access to shared resources
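One well-known way to keep the philosophers from deadlocking is to impose a global order on fork acquisition, so the circular wait can never form. A minimal sketch with Python threads (the meal count is arbitrary):

```python
import threading

# Dining philosophers, 5 seats. Each philosopher always picks up the
# lower-numbered fork first, so forks are acquired in a global order
# and the circular wait needed for deadlock cannot occur.
N, MEALS = 5, 20
forks = [threading.Lock() for _ in range(N)]
eaten = [0] * N

def philosopher(i):
    first, second = sorted((i, (i + 1) % N))  # global ordering breaks the cycle
    for _ in range(MEALS):
        with forks[first]:
            with forks[second]:
                eaten[i] += 1                 # eating: both forks held

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(eaten)  # [20, 20, 20, 20, 20]: every philosopher eats every meal
```

This prevents deadlock but not necessarily starvation under an unfair lock; Tanenbaum's semaphore-per-philosopher solution addresses both.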
Problem:
3.9 BIBLIOGRAPHY
*****
UNIT II
4
MEMORY MANAGEMENT
Unit Structure
4.0 Objectives
4.1 Introduction
4.2 Address Space
4.3 Virtual Memory
4.4 Let us Sum Up
4.5 List of Reference
4.6 Bibliography
4.7 Unit End Questions
4.0 OBJECTIVES
4.1 INTRODUCTION
For software programs to save and retrieve stored data, each unit of
data must have an address where it can be individually located, or else
the program will be unable to find and manipulate the data. The number
of addresses available depends on the underlying address structure,
which is usually limited by the computer architecture being used.
Fig: Illustration of translation from logical block addressing to physical
geometry
4.2.3 Virtual address space to physical address space:
The Domain Name System maps its names to (and from) network-
specific addresses (usually IP addresses), which in turn may be mapped to
link layer network addresses via Address Resolution Protocol. Also,
network address translation may occur on the edge of different IP spaces,
such as a local area network and the Internet.
An iconic example of virtual-to-physical address translation is
virtual memory, where different pages of virtual address space map either
to page file or to main memory physical address space. It is possible that
several numerically different virtual addresses all refer to one physical
address and hence to the same physical byte of RAM. It is also possible
that a single virtual address maps to zero, one, or more than one physical
address
Virtual memory is commonly implemented by demand paging. It
can also be implemented in a segmentation system. Demand segmentation
can also be used to provide virtual memory.
When the page that was selected for replacement and paged out is
referenced again, it has to be read in from disk, and this requires
waiting for I/O completion. The time spent waiting determines the
quality of the page replacement algorithm: the less time waiting for
page-ins, the better the algorithm.
4.3.6 Least Recently Used (LRU) algorithm:
The page which has not been used for the longest time in main memory is
the one which will be selected for replacement.
It can be implemented by keeping a list of pages ordered by how far
back in time each was last used, and replacing the page at the end.
4.3.8 Least Frequently Used (LFU) algorithm:
The page with the smallest reference count is the one which will be
selected for replacement.
This algorithm suffers in the situation in which a page is used
heavily during the initial phase of a process but is never used
again.
4.4 LET US SUM UP
*****
5
PAGING AND SEGMENTATION
Unit Structure
5.0 Objectives
5.1 Memory management goals
5.2 Segmentation
5.3 Paging
5.4 Page replacement algorithms
5.5 Design issues for paging System
5.6 Summary
5.7 Unit End Questions
Relocation:
Relocatability is the ability to move a process around in memory without
affecting its execution.
The OS manages memory, not the programmer, and processes may be moved
around in memory.
The memory manager must convert the program's logical addresses into
physical addresses.
The process's first address is stored as virtual address 0.
Static Relocation: the program must be relocated before or during loading
of the process into memory. Programs must always be loaded into the
same address space in memory, or the relocator must be run again.
Dynamic Relocation: the process can be freely moved around in memory.
The virtual-to-physical address space mapping is done at run-time.
Protection:
Write protection prevents data and instructions from being overwritten.
Read protection ensures privacy of data and instructions.
The OS needs to be protected from user processes, and user processes need
to be protected from each other.
Memory protection (to prevent memory overlaps) is usually supported
by the hardware (limit registers), because most languages allow
memory addresses to be computed at run-time.
Sharing:
Sometimes distinct processes may need to execute the same code
(e.g., many users executing the same editor), or even share the same data
(when one process prepares data for another process).
When different processes signal or wait on the same semaphore, they
need to access the same memory address.
The OS has to allow sharing, while at the same time ensuring protection.
Different frame sizes are available for data sets with larger or
smaller pages and matching-sized frames. 4 KB to 2 MB are common sizes,
and GB-sized frames are available in high-performance servers.
The main idea behind paging is to divide each process into pages. The
main memory is likewise divided into frames.
Pages of a process are brought into main memory only when they are
required; otherwise they reside in secondary storage.
Fig 5.1A: Paging process
Example:
Let us consider a main memory of size 16 KB and a frame size of 1 KB.
The main memory will be divided into a collection of 16 frames of 1 KB
each.
There are 4 processes in the system, P1, P2, P3 and P4, of 4 KB each.
Each process is divided into pages of 1 KB each, so that one page can
be stored in one frame.
Initially all the frames are empty, so the pages of the processes are
stored in a contiguous way.
The frames, pages and the mapping between the two are shown in the
image below.
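The mapping above can be expressed numerically: a logical address splits into a page number and an offset, and the page table supplies the frame. The page-table contents below are hypothetical, chosen only to show a non-trivial mapping:

```python
# Logical-to-physical translation with 1 KB pages, matching the example:
# page number = address // page_size, offset = address % page_size.
PAGE_SIZE = 1024

# Hypothetical page table for one 4-page process: page -> frame.
page_table = {0: 2, 1: 9, 2: 4, 3: 7}

def translate(logical, table, page_size=PAGE_SIZE):
    page, offset = divmod(logical, page_size)  # split the logical address
    frame = table[page]                        # page-table lookup
    return frame * page_size + offset          # rebuild the physical address

print(translate(2500, page_table))  # 4548: page 2, offset 452 -> frame 4
```

The offset is never translated; only the page number is replaced by a frame number, which is why frames and pages must be the same size.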
5.2 SEGMENTATION
What is Segmentation?
The operating system doesn't care about the user's view of the process.
It may divide the same function into different pages, and those pages
may or may not be loaded into memory at the same time. This decreases
the efficiency of the system.
Fig : Segmentation
Advantages of Segmentation:
1. No internal fragmentation.
2. The average segment size is larger than the actual page size.
3. Less overhead.
4. It is easier to relocate segments than the entire address space.
5. The segment table is of smaller size compared to the page table in
paging.
Disadvantages:
1. It can have external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized
partitions.
3. Costly memory management algorithms.
7. Paging uses a page table which holds the base address of every page,
while segmentation uses a segment table which holds the segment number
and segment offset.
Initially all slots are empty, so when 1, 3, 0 come they are allocated
to the empty slots —> 3 page faults. When 3 comes, it is already in
memory, so —> 0 page faults. Then 5 comes; it is not available in
memory, so it replaces the oldest page slot, i.e. 1 —> 1 page fault.
In the optimal algorithm, the page replaced is the one which will not
be used for the longest duration of time in the future.
Initially all slots are empty, so when 7, 0, 1, 2 come they are
allocated to the empty slots —> 4 page faults.
Now for the further page reference string —> 0 page faults, because
the pages are already available in the memory.
Least Recently Used:
In this algorithm, the page replaced is the one which is least recently
used.
Initially all slots are empty, so when 7, 0, 1, 2 come they are
allocated to the empty slots —> 4 page faults.
0 is already there, so —> 0 page fault.
When 3 comes, it takes the place of 7 because 7 is least recently used
—> 1 page fault.
0 is already in memory, so —> 0 page fault.
4 takes the place of 1 —> 1 page fault.
Now for the further page reference string —> 0 page faults, because
the pages are already available in the memory.
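Fault counts in walkthroughs like these can be checked with a short simulation. The sketch below uses the classic 20-entry reference string with 3 frames, for which FIFO and LRU are commonly shown to give 15 and 12 faults:

```python
from collections import OrderedDict, deque

# Classic page-reference string, 3 frames.
REF = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]

def fifo_faults(ref, frames):
    mem, queue, faults = set(), deque(), 0
    for page in ref:
        if page not in mem:
            faults += 1
            if len(mem) == frames:          # memory full: evict oldest arrival
                mem.discard(queue.popleft())
            mem.add(page)
            queue.append(page)
    return faults

def lru_faults(ref, frames):
    mem, faults = OrderedDict(), 0          # dict order = recency order
    for page in ref:
        if page in mem:
            mem.move_to_end(page)           # hit: mark most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)     # evict least recently used
            mem[page] = True
    return faults

print(fifo_faults(REF, 3), lru_faults(REF, 3))  # 15 12
```

LRU faults less here because it exploits the locality in the string, while FIFO can evict a page that is still in heavy use simply because it arrived first.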
• The Working Set Model: in the purest form of paging, processes are
started up with none of their pages in memory.
• Local versus Global Allocation Policies: in the preceding sections we
have discussed several algorithms for choosing a page to replace when
a fault occurs.
• Page Size.
• Virtual Memory Interface.
5.6 SUMMARY
1. Paging is a storage mechanism that allows the OS to retrieve processes
from the secondary storage into the main memory in the form of
pages.
2. Fragmentation refers to the condition of a disk in which files are
divided into pieces scattered around the disk.
3. Segmentation works almost like paging. The only difference between
the two is that segments are of variable length, whereas, in the
paging method, pages are always of fixed size.
4. With dynamic loading, a routine of a program is not loaded until
the program calls it.
5. Linking is a method that helps the OS to collect and merge various
modules of code and data into a single executable file.
5.7 UNIT END QUESTIONS
1. What is Paging?
2. What is Segmentation? and Paging vs. Segmentation
3. Advantages of Paging
4. Advantage of Segmentation and Disadvantages of Paging
5. Disadvantages of Segmentation
6. Page replacement algorithms numerical
*****
6
FILE SYSTEM
Unit Structure
6.0 Objectives
6.1 Introduction
6.2 File structure
6.3 File type
6.4 File access mechanism
6.5 Space Allocations
6.6 Let us Sum Up
6.7 List of Reference
6.8 Bibliography
6.9 Unit End Questions
6.0 OBJECTIVES
• Files
• Directories
• file system implementation
• file-system management and optimization
• MS-DOS file system
• UNIX V7 file system
• CDROM file system
6.1 FILES
• An object file is a sequence of bytes organized into blocks that are
understandable by the machine.
• When an operating system defines different file structures, it also
contains the code to support these file structures. Unix and MS-DOS
support a minimum number of file structures.
1) Ordinary files:
• These are the files that contain user information.
• These may have text, databases or executable programs.
• The user can apply various operations on such files like add, modify,
delete or even remove the entire file.
2) Directory files:
• These files contain a list of file names and other information related to
these files.
3) Special files:
• These files are also known as device files.
• These files represent physical devices like disks, terminals, printers,
networks, tape drive etc.
6.4 FILE ACCESS MECHANISM
File access mechanism refers to the manner in which the records of a file
may be accessed. There are several ways to access files:
• Sequential access
• Direct/Random access
• Indexed sequential access
1) Sequential access: Records are accessed one after the other, in a fixed,
predefined sequence.
2) Direct/Random access: Records can be read or written directly, in any
order.
3) Indexed sequential access: An index built on top of the sequential
organization is used to locate records quickly.
6.5 SPACE ALLOCATION
Files are allocated disk space by the operating system in one of three ways:
1) Contiguous Allocation:
• Each file occupies a contiguous address space on disk.
• Assigned disk address is in linear order.
• Easy to implement.
• External fragmentation is a major issue with this type of allocation
technique.
2) Linked Allocation:
• Each file carries a list of links to disk blocks.
• Directory contains a link / pointer to the first block of a file.
• No external fragmentation
• Effectively used in a sequential access file.
• Inefficient in case of direct access file.
3) Indexed Allocation:
• Provides solutions to problems of contiguous and linked allocation.
• An index block is created having all pointers to files.
• Each file has its own index block which stores the addresses of disk
space occupied by the file.
• Directory contains the addresses of index blocks of files.
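The indexed allocation scheme described above can be sketched with a toy dictionary standing in for the disk; the block numbers and contents below are hypothetical:

```python
# A toy "disk": block number -> block contents.
disk = {}

def write_file_indexed(disk, index_block_no, data_blocks):
    """Indexed allocation: the index block stores the addresses of all
    the disk blocks occupied by the file."""
    disk[index_block_no] = list(data_blocks)   # the pointers, in file order
    disk.update(data_blocks)                   # the data blocks themselves

def read_file_indexed(disk, index_block_no):
    """Direct access: follow the pointers stored in the index block."""
    return [disk[b] for b in disk[index_block_no]]

write_file_indexed(disk, 7, {12: "he", 40: "ll", 3: "o!"})
print("".join(read_file_indexed(disk, 7)))   # → hello!
```

Note that the directory only needs to remember block 7, the index block, to reach the whole file.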
Structure of directory in OS:
There are several logical structures of a directory, these are given below.
1. Single-level directory:
Advantages:
• Since it is a single directory, its implementation is very easy.
• If the files are smaller in size, searching will become faster.
• The operations like file creation, searching, deletion, updating are very
easy in such a directory structure.
Disadvantages:
• Name collisions are a problem, because no two files can have the
same name.
• Searching becomes time consuming if the directory is large.
• Files of the same type cannot be grouped together.
2. Two-level directory:
In the two-level directory structure, each user has their own user
files directory (UFD). The UFDs have similar structures, but each lists
only the files of a single user. The system’s master file directory (MFD) is
searched whenever a new user logs in. The MFD is indexed by
username or account number, and each entry points to the UFD for that
user.
Advantages:
• We can give a full path like /User-name/directory-name/.
• Different users can have the same directory as well as file name.
• Searching for files becomes easier due to path names and user
grouping.
Disadvantages:
• A user is not allowed to share files with other users.
• Still it is not very scalable; two files of the same type cannot be
grouped together under the same user.
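The MFD/UFD lookup described above can be sketched with nested dictionaries; the user names and inode numbers below are made up:

```python
# The MFD maps each user name to that user's UFD; a UFD maps
# file names to (hypothetical) file metadata.
mfd = {
    "alice": {"notes.txt": "inode 17"},
    "bob":   {"notes.txt": "inode 42"},   # same name, different user
}

def lookup(user, filename):
    """Resolve /user/filename: search the MFD, then that user's UFD."""
    ufd = mfd[user]          # MFD search, indexed by username
    return ufd[filename]     # UFD search within that user's files

print(lookup("alice", "notes.txt"))   # → inode 17
print(lookup("bob", "notes.txt"))     # → inode 42
```

The two users can hold identically named files without collision, exactly the advantage the text notes.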
3. Tree-structured directory:
This structure generalizes the two-level directory to a tree of
arbitrary height. The generalization allows users to create their own
subdirectories and to organize their files accordingly.
Advantages:
• Very general, since full path names can be given.
• Very scalable, the probability of name collision is less.
• Searching becomes very easy; we can use both absolute paths as well
as relative paths.
Disadvantages:
• Not every file fits into the hierarchical model; a file may need to be
saved in multiple directories.
• We cannot share files.
• It is inefficient, because accessing a file may require traversing
multiple directories.
4. Acyclic-graph directory:
This structure allows directories to share files. It is important to
note that a shared file is not the same as a copy of the file: if any
programmer makes a change to the shared file, the change is reflected in
both subdirectories.
Advantages:
• We can share files.
• Searching is easy due to the multiple paths.
Disadvantages:
• Files are shared via links, and deleting a shared file can create
problems.
• If the link is a soft link, then after deleting the file we are left with a
dangling pointer.
• In the case of a hard link, to delete a file we have to delete all the
references associated with it.
5. General-graph directory:
Advantages:
• It allows cycles.
• It is more flexible than the other directory structures.
Disadvantages:
• It is more costly than others.
• It needs garbage collection.
File system implementation:
A file system is implemented in layers: application programs, the
logical file system, the file-organization module, the basic file system, the
I/O control layer, and finally the devices themselves.
The file-organization module has information about files, the
location of files, and their logical and physical blocks. Logical blocks,
numbered from 0 to N, do not necessarily match the physical blocks, so a
translation is needed. The module also manages free space by tracking
unallocated blocks.
Advantages:
1. Duplication of code is minimized.
2. Each file system can have its own logical file system.
Disadvantages:
If we access many files at the same time, performance will be low.
A file system can be implemented using two types of data
structures:
1. On-disk structures:
1) Boot Control Block: contains the information needed to boot an
operating system from that volume.
2) Volume Control Block: contains volume details, such as the number
of blocks, the block size, and a free-block count.
3) Directory Structure: used to organize the files.
4) Per-File FCB (File Control Block): contains details about the file,
such as file permissions, file size, and the location of its data blocks.
2. In-memory structures:
These are maintained in main memory and are helpful for file-
system management and for caching. Several in-memory structures are
given below:
1) Mount table:
It contains information about each mounted volume.
2) Directory-structure cache:
This cache holds the directory information of recently accessed
directories.
3) System-wide open-file table:
It contains a copy of the FCB of each open file.
4) Per-process open-file table:
It contains information about the files opened by that particular
process and maps to the appropriate entry in the system-wide open-file
table.
Directory implementation:
1) Linear list:
It maintains a linear list of filenames with pointers to the data
blocks, which is time consuming to search. To create a new file, we must
first search the directory to be sure that no existing file has the same
name, and then add the new entry at the end of the directory. To delete a
file, we search the directory for the named file and release its space. To
reuse the directory entry, we can either mark the entry as unused or attach
it to a list of free entries.
2) Hash Table:
The hash table takes a value computed from the file name and
returns a pointer to the file. It decreases the directory search time, and
insertion and deletion of files are easy. The major difficulties are that a
hash table generally has a fixed size and that the hash function depends on
that size.
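A directory hash table of the kind described, with a fixed size and collision chains, can be sketched as follows; the table size and hash function are illustrative assumptions:

```python
TABLE_SIZE = 8            # hash tables generally have a fixed size

def h(name):
    # The hash function depends on the table size.
    return sum(name.encode()) % TABLE_SIZE

directory = [[] for _ in range(TABLE_SIZE)]   # chains absorb collisions

def add_entry(name, block_ptr):
    """Insert a (filename, pointer-to-data) directory entry."""
    directory[h(name)].append((name, block_ptr))

def find_entry(name):
    """Hash the name, then scan only that one short chain."""
    for entry_name, ptr in directory[h(name)]:
        if entry_name == name:
            return ptr
    return None

add_entry("report.txt", 120)
add_entry("data.csv", 88)
print(find_entry("report.txt"))   # → 120
```

Growing the directory beyond the fixed table forces a rehash into a bigger table, which is exactly the difficulty the text mentions.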
File-system management and optimization:
1) Disk-space management:
Since all files are normally stored on disk, one of the main
concerns of a file system is the management of disk space.
2) Block Size:
3) Keeping track of free blocks:
After a block size has been finalized, the next issue is how to keep
track of the free blocks. Two methods are widely used:
• Using a linked list: a linked list of disk blocks is used, with each
block holding as many free disk block numbers as will fit.
• Bitmap: a disk with n blocks requires a bitmap with n bits. Free
blocks are represented by 1s and allocated blocks by 0s, as seen in the
figure below.
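The bitmap method can be sketched as follows, using the text's convention of 1 for free and 0 for allocated; the disk size and the initially allocated set are made-up examples:

```python
def make_bitmap(n_blocks, allocated):
    """Disk with n blocks -> bitmap of n bits:
    1 = free, 0 = allocated (the convention used in the text)."""
    return [0 if b in allocated else 1 for b in range(n_blocks)]

def allocate_block(bitmap):
    """Find the first free block, mark it allocated, return its number."""
    for block, bit in enumerate(bitmap):
        if bit == 1:
            bitmap[block] = 0
            return block
    raise RuntimeError("disk full")

bitmap = make_bitmap(8, allocated={0, 1, 4})
print(allocate_block(bitmap))   # → 2 (the first free block)
print(bitmap)                   # → [0, 0, 0, 1, 0, 1, 1, 1]
```

One bit per block makes the map's size fixed and tiny relative to the disk, which is the main attraction over the linked list.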
4) Disk quotas:
5) File-system Backups:
• Since there is an immense amount of data, it is generally desired to
compress the data before taking a backup for the same.
• It is difficult to perform a backup on an active file-system since the
backup may be inconsistent.
• Making backups introduces many security issues.
There are two ways for dumping a disk to the backup disk:
• Physical dump: the dump starts at block 0 of the disk, writes all the
disk blocks onto the output disk in order, and stops after copying the
last one.
Advantages: simplicity and great speed.
Disadvantages: the inability to skip selected directories, make
incremental dumps, or restore individual files upon request.
• Logical dump: the dump starts at one or more specified directories
and recursively dumps all files and directories found that have been
changed since some given base date. This is the most commonly used
approach.
• If a file is linked to two or more directories, it is important that the file
is restored only one time and that all the directories that are supposed
to point to it do so.
• UNIX files may contain holes.
• Special files, named pipes, and all other files that are not real should
never be dumped.
6) File-system Consistency:
To check block consistency, the checker builds two tables, each
containing a counter for every block: the first counts how many times the
block is present in a file, the second how many times it is present in the
free list. If both tables have 0 for a block, the block is missing and is
reported as a missing block. The two other problem situations are a block
that occurs more than once in the free list and a data block that is present
in two or more files.
• In addition to checking that each block is properly accounted for, the
file-system checker also checks the directory system. It, too, uses a
table of counters, but per file rather than per block. The counts start at
1 when a file is created and are incremented each time a (hard) link is
made to the file. In a consistent file system, both counts will agree.
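The two per-block counter tables can be sketched like this; the tiny 6-block disk below is a made-up example:

```python
from collections import Counter

def check_blocks(n_blocks, files, free_list):
    """Block-consistency check: for every block, count how many times
    it is used by a file and how many times it appears in the free list."""
    in_use = Counter(b for blocks in files.values() for b in blocks)
    free = Counter(free_list)
    problems = []
    for block in range(n_blocks):
        u, f = in_use[block], free[block]
        if u == 0 and f == 0:
            problems.append(f"block {block}: missing block")
        elif f > 1:
            problems.append(f"block {block}: duplicated in free list")
        elif u > 1:
            problems.append(f"block {block}: present in {u} files")
    return problems

# Hypothetical 6-block disk: block 3 is in neither table,
# and block 5 is claimed by two files.
for p in check_blocks(6, {"a": [0, 5], "b": [5]}, [1, 2, 4]):
    print(p)
```

A consistent disk is one where every block has exactly one 1 across the two tables, so the loop reports nothing.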
7) File-system Performance:
Since the access to disk is much slower than access to memory,
many file systems have been designed with various optimizations to
improve performance as described below.
8) Caching:
The most common technique used to reduce disk access time is the
block cache or buffer cache. Cache can be defined as a collection of items
of the same type stored in a hidden or inaccessible place. The most
common algorithm works as follows: when a disk access is initiated, the
cache is checked first to see whether the disk block is present. If it is, the
read request can be satisfied without a disk access; otherwise the disk
block is first copied into the cache and then the read request is processed.
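This check-the-cache-first algorithm can be sketched with an LRU eviction policy, one common choice; the toy disk and the capacity of two blocks are assumptions:

```python
from collections import OrderedDict

class BlockCache:
    """A tiny buffer cache: check the cache before going to disk,
    evicting the least recently used block when the cache is full."""
    def __init__(self, disk, capacity):
        self.disk, self.capacity = disk, capacity
        self.cache = OrderedDict()       # insertion/recency ordered
        self.disk_reads = 0

    def read_block(self, block_no):
        if block_no in self.cache:           # cache hit: no disk access
            self.cache.move_to_end(block_no)
            return self.cache[block_no]
        self.disk_reads += 1                 # cache miss: go to disk
        data = self.disk[block_no]
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return data

disk = {n: f"data{n}" for n in range(10)}
cache = BlockCache(disk, capacity=2)
for block in [1, 2, 1, 1, 3, 2]:
    cache.read_block(block)
print(cache.disk_reads)   # → 4 (misses on 1, 2, 3, then 2 again after eviction)
```

Six logical reads cost only four disk accesses here; with a realistic cache size the savings are far larger.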
Fig: i-nodes located near the start of the disk
In the above figure all the i-nodes are near the start of the disk, so the
average distance between an inode and its blocks will be half the number
of cylinders, requiring long seeks. To increase performance, the
placement of the i-nodes can be modified as shown in the next figure.
Fig: the disk divided into cylinder groups, each with its own i-nodes
9) Defragmenting disks:
Due to continuous creation and removal of files the disks get badly
fragmented with files and holes all over the place. As a consequence,
when a new file is created, the blocks used for it may be spread all over
the disk, giving poor performance. The performance can be restored by
moving files around to make them contiguous and to put all (or at least
most) of the free space in one or more large contiguous regions on the
disk.
MS-DOS file system:
Limitations:
Types of UNIX files: The UNIX file system contains several different
types of files:
Ordinary Files
Directories
Special Files
Pipes
Sockets
Symbolic Links
2. Directories:
Directories store both special and ordinary files. For users familiar
with Windows or Mac OS, UNIX directories are equivalent to folders. A
directory file contains an entry for every file and subdirectory that it
houses. If you have 10 files in a directory, there will be 10 entries in the
directory. Each entry has two components.
(1) The Filename
(2) A unique identification number for the file or directory (called the
inode number)
• Branching points in the hierarchical tree.
• Used to organize groups of files.
• May contain ordinary files, special files or other directories.
• Never contain "real" information which you would work with (such as
text); basically, they are just used for organizing files.
• All files are descendants of the root directory (named /) located at the
top of the tree.
3. Special Files:
Used to represent a real physical device such as a printer, tape
drive or terminal, used for Input/Output (I/O) operations. Device or
special files are used for device input/output (I/O) on UNIX and Linux
systems. They appear in a file system just like an ordinary file or a
directory.
On UNIX systems there are two flavors of special files for each device,
character special files and block special files:
When a character special file is used for device Input/Output(I/O), data
is transferred one character at a time. This type of access is called raw
device access.
When a block special file is used for device Input/Output(I/O), data is
transferred in large fixed-size blocks. This type of access is called
block device access.
For terminal devices, it’s one character at a time. For disk devices
though, raw access means reading or writing in whole chunks of data –
blocks, which are native to your disk.
4. Pipes:
CD-ROM file system (CDFS):
Introduction:
CDFS stands for Compact Disc File System. Before CDFS, there
was no convenient medium for people to store files they wanted to keep
for the long term. Storing data and information was a major problem,
because the world needed a system that could store multiple files in
compressed format. The revolution of technology changed this, and new,
advanced things started coming to the market. CDFS came into the
picture on 21 August 1999, and at that time it was considered the most
advanced technology in the industry. Many features offered by CDFS
came into the limelight immediately:
1. It is a file system for read-only and write-once CD-ROMs.
2. It exports all tracks and boot images on a CD as normal files.
3. CDFS provides a wide range of services, which include the creation,
replacement, renaming, and deletion of files on write-once media.
4. It uses a VCACHE driver to control the CD-ROM disc cache allowing
for a smoother playback.
5. It includes several disc properties like volume attributes, file attributes,
and file placement.
History:
CDFS was developed by Simson Garfinkel and J. Spencer Love at
the MIT Media Lab between 1985 and 1986. It was developed from the
write-once CD-ROM simulator and was designed to store data and
information on read-only and write-once media. A great setback for
CDFS was that it was never sold; the file system source code was
published on the internet.
Disk images can be saved using the CDFS standard, which may be
used to burn ISO 9660 discs. ISO 9660, also referred to as CDFS by some
hardware and software providers, is a file system published by ISO
(International Organization for Standardization) for optical disc media.
Applications:
A file system is a systematic, organized way in which files are
arranged on a hard disk. The file system is initiated when a user opens a
hard disk to access files. Here are some applications of the Compact Disc
File System:
1. CDFS creates a way in which the system first sets up the root directory
and then automatically creates all the subsequent folders for it.
2. The system also provides a wide range of services for all users. You
can create new files or folders which are added to the main root file or
we can say the “file tree” of the system.
3. There was also the problem of transferring data or files from CDs to a
laptop or computer. CDFS offers a good solution: it is useful for
burning discs that can be exchanged between different devices.
4. CDFS is not specific to a single operating system: a disc burned on a
Macintosh using CDFS can be read on a Windows or Linux based
computer.
5. It can operate across numerous operating systems: if a user starts
shifting files from a Macintosh using the Compact Disc File System,
he can also work with the files under the Windows operating system.
6. Disc images are also saved using proper system standards; such files
have the typical .ISO name extension.
Types:
There are different versions of the Compact Disc File System:
1. Clustered file system (can be global or grid)
2. Flash file system
3. Object file system
4. Semantic file system
5. Steganographic file system
6. Versioning file system
7. Synthetic file system
6.6 SUMMARY
• A file is a collection of correlated information which is recorded on
secondary or non-volatile storage like magnetic disks, optical disks,
and tapes.
• It provides I/O support for a variety of storage device types.
• Files are stored on disk or other storage and do not disappear when a
user logs off.
• A file structure needs to be in a predefined format such that an
operating system understands it.
• File type refers to the ability of the operating system to differentiate
different types of files, like text files, binary files, and source files.
• The create operation finds space on disk and makes an entry in the
directory.
• The indexed sequential access method is based on simple sequential
access.
• In the sequential access method, records are accessed in a certain
predefined sequence.
• The random access method is also called direct access.
• The three types of space allocation methods are: contiguous
allocation, linked allocation, and indexed allocation.
• Information about files is maintained by directories.
7
PRINCIPLES OF I/O HARDWARE AND
SOFTWARE
Unit Structure
7.0 Objectives
7.1 Introduction
7.2 Principles of I/O software
7.3 I/O software layers
7.4 Summary
7.5 Unit End Questions
7.0 OBJECTIVES
7.1 INTRODUCTION
Each controller has a few registers that are used for communicating
with the CPU. By writing into these registers, the operating system can
command the device to deliver data, accept data, switch itself on or off, or
otherwise perform some action. By reading from these registers, the operating
system can learn what the device’s state is, whether it is prepared to accept a
new command, and so on.
In block mode, the DMA controller tells the device to acquire the
bus, issue a series of transfers, then release the bus. This form of operation
is called burst mode. It is more efficient than cycle stealing because
acquiring the bus takes time and multiple words can be transferred for the
price of one bus acquisition. The down side to burst mode is that it can
block the CPU and other devices for a substantial period if a long burst is
being transferred.
Precise Interrupts:
7.2 PRINCIPLES OF I/O SOFTWARE
First we will look at its goals and then at the different ways I/O can be
done from the point of view of the operating system.
It is up to the operating system to make operations that are actually
interrupt-driven look blocking to the user programs.
The simplest form of I/O is to have the CPU do all the work. This
method is called programmed I/O.
The user process then acquires the printer for writing by making a
system call to open it. If the printer is currently in use by another process,
this call will fail and return an error code or will block until the printer is
available, depending on the operating system and the parameters of the
call. Once it has the printer, the user process makes a system call telling
the operating system to print the string on the printer.
The operating system then (usually) copies the buffer with the
string to an array, say, p, in kernel space, where it is more easily accessed
(because the kernel may have to change the memory map to get at user
space). It then checks to see if the printer is currently available. If not, it
waits until it is. As soon as the printer is available, the operating system
copies the first character to the printer’s data register, in this example
using memory-mapped I/O. This action activates the printer. The character
may not appear yet because some printers buffer a line or a page before
printing anything. In Fig. 7.4 (b), however, we see that the first character
has been printed and that the system has marked the ‘‘B’’ as the next
character to be printed. As soon as it has copied the first character to the
printer, the operating system checks to see if the printer is ready to accept
another one. Generally, the printer has a second register, which gives its
status. The act of writing to the data register causes the status to become
not ready. When the printer controller has processed the current character,
it indicates its availability by setting some bit in its status register or
putting some value in it.
At this point the operating system waits for the printer to become
ready again. When that happens, it prints the next character, as shown in
Fig. 7.4 (c). This loop continues until the entire string has been printed.
Then control returns to the user process.
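The busy-wait loop described above can be sketched as follows; the ToyPrinter class and its two "registers" are a made-up stand-in for real printer hardware:

```python
class ToyPrinter:
    """A hypothetical printer with a status register and a data register."""
    def __init__(self):
        self.printed = []
        self._busy = False

    def ready(self):
        # Writing the data register made the status "not ready"; this toy
        # device becomes ready again after one extra status poll.
        was_busy = self._busy
        self._busy = False
        return not was_busy

    def write(self, ch):
        self.printed.append(ch)     # "print" the character
        self._busy = True           # status becomes not ready

def programmed_io_print(string, printer):
    """Programmed I/O: the CPU does all the work, busy-waiting on the
    printer's status register between characters."""
    kernel_buf = list(string)       # copy the user buffer into kernel space
    for ch in kernel_buf:
        while not printer.ready():  # poll the status register
            pass
        printer.write(ch)           # copy one character to the data register

printer = ToyPrinter()
programmed_io_print("ABCDEFGH", printer)
print("".join(printer.printed))    # → ABCDEFGH
```

The `while not printer.ready(): pass` line is exactly the idle loop the next paragraph criticizes: the CPU spins instead of running another process.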
Now let us consider the case of printing on a printer that does not
buffer characters but prints each one as it arrives. If the printer can print,
say 100 characters/ sec, each character takes 10 msec to print. This means
that after every character is written to the printer’s data register, the CPU
will sit in an idle loop for 10 msec waiting to be allowed to output the next
character. This is more than enough time to do a context switch and run
some other process for the 10 msec that would otherwise be wasted.
The way to allow the CPU to do something else while waiting for
the printer to become ready is to use interrupts. When the system call to
print the string is made, the buffer is copied to kernel space, as we showed
earlier, and the first character is copied to the printer as soon as it is
willing to accept a character. At that point the CPU calls the scheduler and
some other process is run. The process that asked for the string to be
printed is blocked until the entire string has printed.
The big win with DMA is reducing the number of interrupts from
one per character to one per buffer printed. If there are many characters
and interrupts are slow, this can be a major improvement. On the other
hand, the DMA controller is usually much slower than the main CPU. If
the DMA controller is not capable of driving the device at full speed, or
the CPU usually has nothing to do anyway while waiting for the DMA
interrupt, then interrupt-driven I/O or even programmed I/O may be better.
However, there is no technical restriction on having one device driver
control multiple unrelated devices. It is just not a good idea in most cases.
Buffering:
Buffering is also an issue, both for block and character devices, for a
variety of reasons. To see one of them, consider a process that wants to
read data from an (ADSL—Asymmetric Digital Subscriber Line) modem,
something many people use at home to connect to the Internet. One
possible strategy for dealing with the incoming characters is to have the
user process do a read system call and block waiting for one character.
Each arriving character causes an interrupt. The interrupt-service
procedure hands the character to the user process and unblocks it. After
putting the character somewhere, the process reads another character and
blocks again.
Error Reporting:
Errors are far more common in the context of I/O than in other
contexts. When they occur, the operating system must handle them as best
it can. Many errors are device specific and must be handled by the
appropriate driver, but the framework for error handling is device
independent.
The library procedure write might be linked with the program and
contained in the binary program present in memory at run time. In other
systems, libraries can be loaded during program execution. Either way, the
collection of all these library procedures is clearly part of the I/O system.
It formats a string consisting of the 14-character string ''The
square of '' followed by the value i as a 3-character string, then the
4-character string '' is '', then i^2 as 6 characters, and finally a line feed.
7.4 SUMMARY
Different people look at I/O hardware in different ways. In this
book we are concerned with programming I/O devices, not designing,
building, or maintaining them, so our interest is in how the hardware is
programmed, not how it works inside. Different disks may have different
sector sizes. It is up to the device-independent software to hide this fact
and provide a uniform block size to higher layers, for example, by treating
several sectors as a single logical block.
8
I/O DEVICES
8.0 OBJECTIVES
8.1 INTRODUCTION
We will begin with disks, which are conceptually simple, yet very
important. After that we will examine clocks, keyboards, and displays.
8.1.1 Disk Hardware:
Disks come in a variety of types. The most common ones are the
magnetic hard disks. They are characterized by the fact that reads and
writes are equally fast, which makes them suitable as secondary memory
(paging, file systems, etc.). Arrays of these disks are sometimes used to
provide highly reliable storage. For distribution of programs, data, and
movies, optical disks (DVDs and Blu-ray) are also important. Finally,
solid-state disks are increasingly popular as they are fast and do not
contain moving parts. In the following sections we will discuss magnetic
disks as an example of the hardware and then describe the software for
disk devices in general.
Magnetic Disks:
Older disks have little electronics and just deliver a simple serial bit
stream. On these disks, the controller does most of the work. On other
disks, in particular, IDE (Integrated Drive Electronics) and SATA
(Serial ATA) disks, the disk drive itself contains a microcontroller that
does considerable work and allows the real controller to issue a set of
higher-level commands. The controller often does track caching, bad-
block remapping, and much more.
A device feature that has important implications for the disk driver
is the possibility of a controller doing seeks on two or more drives at the
same time. These are known as overlapped seeks. While the controller
and software are waiting for a seek to complete on one drive, the
controller can initiate a seek on another drive. Many controllers can also
read or write on one drive while seeking on one or more other drives, but a
floppy disk controller cannot read or write on two drives at the same time.
(Reading or writing requires the controller to move bits on a microsecond
time scale, so one transfer uses up most of its computing power.) The
situation is different for hard disks with integrated controllers, and in a
system with more than one of these hard drives they can operate
simultaneously, at least to the extent of transferring between the disk and
the controller’s buffer memory. Only one transfer between the controller
and the main memory is possible at once, however. The ability to perform
two or more operations at the same time can reduce the average access
time considerably.
RAID:
Fig 8.2
The preamble starts with a certain bit pattern that allows the
hardware to recognize the start of the sector. It also contains the cylinder
and sector numbers and some other information. The size of the data
portion is determined by the low-level formatting program. Most disks use
512-byte sectors. The ECC field contains redundant information that can
be used to recover from read errors. The size and content of this field
varies from manufacturer to manufacturer, depending on how much disk
space the designer is willing to give up for higher reliability and how
complex an ECC code the controller can handle. A 16-byte ECC field is
not unusual.
First, consider how long it takes to read or write a disk block. The time
required is determined by three factors:
1. Seek time (the time to move the arm to the proper cylinder).
2. Rotational delay (how long for the proper sector to appear under the
reading head).
3. Actual data transfer time.
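The three factors can be combined into a rough estimate of the time to read one block; the drive parameters below are hypothetical:

```python
def access_time_ms(seek_ms, rpm, transfer_mb_per_s, block_kb):
    """Average time to read one block = seek + rotational delay + transfer.
    On average, the proper sector is half a rotation away."""
    rotation_ms = 60_000 / rpm             # one full rotation, in ms
    rotational_delay = rotation_ms / 2     # average: half a rotation
    transfer_ms = block_kb / (transfer_mb_per_s * 1024) * 1000
    return seek_ms + rotational_delay + transfer_ms

# A hypothetical drive: 9 ms average seek, 7200 RPM, 100 MB/s, 4 KB blocks.
print(round(access_time_ms(9, 7200, 100, 4), 2))   # → 13.21 ms
```

Note how seek time and rotational delay dominate: the 4 KB transfer itself contributes well under a tenth of a millisecond, which is why the scheduling algorithms below concentrate on minimizing arm motion.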
If the disk driver accepts requests one at a time and carries them
out in that order, that is, FCFS (First-Come, First-Served), little can be
done to optimize seek time. However, another strategy is possible when
the disk is heavily loaded. It is likely that while the arm is seeking on
behalf of one request, other disk requests may be generated by other
processes. Many disk drivers maintain a table, indexed by cylinder
number, with all the pending requests for each cylinder chained together in
a linked list headed by the table entries.
Given this kind of data structure, we can improve upon the first-
come, first-served scheduling algorithm. To see how, consider an
imaginary disk with 40 cylinders. A request comes in to read a block on
cylinder 11. While the seek to cylinder 11 is in progress, new requests
come in for cylinders 1, 36, 16, 34, 9, and 12, in that order.
They are entered into the table of pending requests, with a separate
linked list for each cylinder. The requests are shown in Fig. 8.3 . When the
current request (for cylinder 11) is finished, the disk driver has a choice of
which request to handle next. Using FCFS, it would go next to cylinder 1,
then to 36, and so on. This algorithm would require arm motions of 10, 35,
20, 18, 25, and 3, respectively, for a total of 111 cylinders.
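The FCFS total of 111 cylinders can be verified in a few lines, and compared with the Shortest Seek First strategy, a common improvement not computed in the text:

```python
def fcfs_motion(start, requests):
    """Total arm motion serving requests first-come, first-served."""
    total, pos = 0, start
    for cyl in requests:
        total += abs(cyl - pos)
        pos = cyl
    return total

def ssf_motion(start, requests):
    """Shortest Seek First: always serve the closest pending cylinder."""
    pending, total, pos = list(requests), 0, start
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))  # nearest request
        total += abs(nxt - pos)
        pending.remove(nxt)
        pos = nxt
    return total

# The example from the text: arm at cylinder 11, requests arrive in order.
print(fcfs_motion(11, [1, 36, 16, 34, 9, 12]))   # → 111 cylinders
print(ssf_motion(11, [1, 36, 16, 34, 9, 12]))    # → 61 cylinders
```

SSF nearly halves the arm motion on this workload, though it can starve requests at the disk edges; the elevator algorithm addresses that.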
Fig 8.3
8.1.4 Error Handling:
There are two general approaches to bad blocks: deal with them in
the controller or deal with them in the operating system. In the former
approach, before the disk is shipped from the factory, it is tested and a list
of bad sectors is written onto the disk. For each bad sector, one of the
spares is substituted for it.
There are two ways to do this substitution. In Fig. 8.4 (a) we see a
single disk track with 30 data sectors and two spares. Sector 7 is defective.
What the controller can do is remap one of the spares as sector 7 as shown
in Fig. 8.4(b). The other way is to shift all the sectors up one, as shown in
Fig. 8.4 (c). In both cases the controller has to know which sector is
which. It can keep track of this information through internal tables (one
per track) or by rewriting the preambles to give the remapped sector
numbers. If the preambles are rewritten, the method of Fig. 8.4(c) is more
work (because 23 preambles must be rewritten) but ultimately gives better
performance because an entire track can still be read in one rotation.
Fig 8.4
Errors can also develop during normal operation after the drive has been
installed.
The first line of defense upon getting an error that the ECC cannot
handle is to just try the read again. Some read errors are transient, that is,
are caused by specks of dust under the head and will go away on a second
attempt. If the controller notices that it is getting repeated errors on a
certain sector, it can switch to a spare before the sector has died
completely. In this way, no data are lost and the operating system and user
do not even notice the problem. Usually, the method of Fig. 8.4(b) has to
be used since the other sectors might now contain data. Using the method
of Fig. 8.4(c) would require not only rewriting the preambles, but copying
all the data as well.
Before describing the algorithm, it is important to have a clear
model of the possible errors. The model assumes that when a disk writes a
block (one or more sectors), either the write is correct or it is incorrect and
this error can be detected on a subsequent read by examining the values of
the ECC fields. In principle, guaranteed error detection is never possible
because with, say, a 16-byte ECC field guarding a 512-byte sector, there
are 2^4096 data values and only 2^144 ECC values. Thus if a block is
garbled during writing but the ECC is not, there are billions upon billions
of incorrect combinations that yield the same ECC. If any of them occur,
the error will not be detected. On the whole, the probability of random
data having the proper 16-byte ECC is about 2^-144, which is small
enough that we will call it zero, even though it is really not.
The model also assumes the CPU can fail, in which case it just
stops. Any disk write in progress at the moment of failure also stops,
leading to incorrect data in one sector and an incorrect ECC that can later
be detected. Under all these conditions, stable storage can be made 100%
reliable in the sense of writes either working correctly or leaving the old
data in place. Of course, it does not protect against physical disasters, such
as an earthquake happening and the computer falling 100 meters into a
fissure and landing in a pool of boiling magma. It is tough to recover from
this condition in software.
1. Stable writes. A stable write consists of first writing the block on drive
1, then reading it back to verify that it was written correctly. If it was not,
the write and reread are done again up to n times until they work. After n
consecutive failures, the block is remapped onto a spare and the operation
repeated until it succeeds, no matter how many spares have to be tried.
After the write to drive 1 has succeeded, the corresponding block on drive
2 is written and reread, repeatedly if need be, until it, too, finally succeeds.
In the absence of CPU crashes, when a stable write completes, the block
has correctly been written onto both drives and verified on both of them.
2. Stable reads: A stable read first reads the block from drive 1. If this
yields an incorrect ECC, the read is tried again, up to n times. If all of
these give bad ECCs, the corresponding block is read from drive 2. Given
the fact that a successful stable write leaves two good copies of the block
behind, and our assumption that the probability of the same block
spontaneously going bad on both drives in a reasonable time interval is
negligible, a stable read always succeeds.
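The two stable-storage operations above can be sketched in Python. `FakeDrive` is a hypothetical in-memory stand-in for a real disk, and `N_RETRIES` is an assumed retry limit; a real driver would talk to hardware and remap bad blocks onto spare sectors instead of raising an error.

```python
N_RETRIES = 3  # assumed retry limit before giving up / remapping to a spare

class FakeDrive:
    """In-memory stand-in for one disk; read returns (data, ecc_ok)."""
    def __init__(self):
        self.blocks = {}
    def write(self, block_no, data):
        self.blocks[block_no] = data
    def read(self, block_no):
        return self.blocks.get(block_no), block_no in self.blocks

def stable_write(block_no, data, drives):
    """Write to drive 1, read back to verify, then do the same on drive 2."""
    for drive in drives:                     # drives = [drive1, drive2]
        for attempt in range(N_RETRIES):
            drive.write(block_no, data)
            readback, ecc_ok = drive.read(block_no)
            if ecc_ok and readback == data:  # verified copy on this drive
                break
        else:
            # after N_RETRIES consecutive failures the block would be
            # remapped onto a spare and the operation repeated
            raise IOError("remap block %d onto a spare" % block_no)

def stable_read(block_no, drives):
    """Try drive 1 up to N_RETRIES times, then fall back to drive 2."""
    for drive in drives:
        for attempt in range(N_RETRIES):
            data, ecc_ok = drive.read(block_no)
            if ecc_ok:
                return data                  # a stable write left a good copy
    raise IOError("both copies bad -- excluded by the error model")
```

After a successful `stable_write`, both fake drives hold a verified copy, which is exactly the invariant the stable read relies on.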
Fig 8.5
8.2 CLOCKS
Clocks (also called timers) are essential to the operation of any
multiprogrammed system for a variety of reasons. They maintain the time
of day and prevent one process from monopolizing the CPU, among other
things. The clock software can take the form of a device driver, even
though a clock is neither a block device, like a disk, nor a character
device, like a mouse. Our examination of clocks will follow the same
pattern as in the previous section: first a look at clock hardware and then a
look at the clock software.
8.2.1 Clock Hardware:
User input comes primarily from the keyboard and mouse (or
sometimes touch screens), so let us look at those. On a personal computer,
the keyboard contains an embedded microprocessor which usually
communicates through a specialized serial port with a controller chip on
the parentboard (although increasingly keyboards are connected to a USB
port). An interrupt is generated whenever a key is struck and a second one
is generated whenever a key is released. At each of these keyboard
interrupts, the keyboard driver extracts the information about what
happened from the I/O port associated with the keyboard. Everything else
happens in software and is pretty much independent of the hardware.
Keyboard Software:
The number in the I/O register is the key number, called the scan
code, not the ASCII code. Normal keyboards have fewer than 128 keys, so
only 7 bits are needed to represent the key number. The eighth bit is set to
0 on a key press and to 1 on a key release. It is up to the driver to keep
track of the status of each key (up or down). So all the hardware does is
give press and release interrupts. Software does the rest.
When the A key is struck, for example, the scan code (30) is put in
an I/O register. It is up to the driver to determine whether it is lowercase,
uppercase, CTRL-A, ALT-A, CTRL-ALT-A, or some other combination.
Since the driver can tell which keys have been struck but not yet released
(e.g., SHIFT), it has enough information to do the job.
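The driver logic described above can be sketched as follows. The scan code for A (30) comes from the text; the SHIFT scan code (42) and the `emit` callback are assumptions made for the sketch.

```python
KEY_RELEASE = 0x80   # eighth bit: 0 on key press, 1 on key release
SHIFT_KEY = 42       # assumed scan code for the SHIFT key
A_KEY = 30           # scan code for the A key (from the text)

keys_down = set()    # driver-maintained up/down status of each key

def on_keyboard_interrupt(scan_byte, emit):
    """Decode one byte read from the keyboard I/O port."""
    code = scan_byte & 0x7F
    if scan_byte & KEY_RELEASE:
        keys_down.discard(code)              # key released
        return
    keys_down.add(code)                      # key pressed
    if code == A_KEY:
        # the driver decides lowercase vs. uppercase by checking which
        # keys have been struck but not yet released
        emit('A' if SHIFT_KEY in keys_down else 'a')
```

The hardware only reports presses and releases; the up/down set is the software state that turns scan codes into characters.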
Mouse Software:
Whenever a new release of the operating system comes out, a great
deal of work has to be done to perform the upgrade on each machine
separately.
At most corporations, the labor costs of doing this kind of software
maintenance dwarf the actual hardware and software costs. For home
users, the labor is technically free, but few people are capable of doing it
correctly and fewer still enjoy doing it. With a centralized system, only
one or a few machines have to be updated and those machines have a staff
of experts to do the work. A related issue is that users should make regular
backups of their gigabyte file systems, but few of them do. When disaster
strikes, a great deal of moaning and wringing of hands tends to follow.
With a centralized system, backups can be made every night by automated
tape robots.
Another advantage is that resource sharing is easier with
centralized systems. A system with 256 remote users, each with 256 MB
of RAM, will have most of that RAM idle most of the time. With a
centralized system with 64 GB of RAM, it never happens that some user
temporarily needs a lot of RAM but cannot get it because it is on someone
else’s PC. The same argument holds for disk space and other resources.
Finally, we are starting to see a shift from PC-centric computing to
Web-centric computing. One area where this shift is very far along is email.
People used to get their email delivered to their home machine and read it
there. Nowadays, many people log into Gmail, Hotmail, or Yahoo and
read their mail there. The next step is for people to log into other Websites
to do word processing, build spreadsheets, and other things that used to
require PC software. It is even possible that eventually the only software
people run on their PC is a Web browser, and maybe not even that. It is
probably a fair conclusion to say that most users want high-performance
interactive computing but do not really want to administer a computer.
This has led researchers to reexamine timesharing using dumb terminals
(now politely called thin clients) that meet modern terminal expectations.
Saving the output of even a few power plants (or an equivalent number
of fossil-fuel plants) is a big win and well worth pursuing.
The other place where power is a big issue is on battery-powered
computers, including notebooks, handhelds, and Webpads, among others.
The heart of the problem is that the batteries cannot hold enough charge to
last very long, a few hours at most. Furthermore, despite massive research
efforts by battery companies, computer companies, and consumer
electronics companies, progress is glacial. To an industry used to a
doubling of performance every 18 months (Moore’s law), having no
progress at all seems like a violation of the laws of physics, but that is the
current situation. As a consequence, making computers use less energy so
existing batteries last longer is high on everyone’s agenda. The operating
system plays a major role here, as we will see below.
At the lowest level, hardware vendors are trying to make their
electronics more energy efficient. Techniques used include reducing
transistor size, employing dynamic voltage scaling, using low-swing and
adiabatic buses, and similar techniques.
These are outside the scope of this book, but interested readers can
find a good survey in a paper by Venkatachalam and Franz (2005). There
are two general approaches to reducing energy consumption. The first one
is for the operating system to turn off parts of the computer (mostly I/O
devices) when they are not in use because a device that is off uses little or
no energy. The second one is for the application program to use less
energy, possibly degrading the quality of the user experience, in order to
stretch out battery time. We will look at each of these approaches in turn,
but first we will say a little bit about hardware design with respect to
power usage.
8.6 SUMMARY
While using memory-mapped I/O, the OS allocates a buffer in
memory and informs the I/O device to use that buffer to send data to the
CPU. The I/O device operates asynchronously with the CPU and
interrupts the CPU when finished. Memory-mapped I/O is used for most
high-speed I/O devices like disks and communication interfaces.
9.0 OBJECTIVES
To understand what a deadlock is
To learn how resource acquisition works
To learn different methods of detecting deadlocks and recovering from
them
To understand the Ostrich Algorithm
9.1 RESOURCES
● Processes make requests, use the resource, and release the resource. The
allocation decisions are made by the system and we will study policies
used to make these decisions.
The following four conditions (Coffman; Havender) are necessary but not
sufficient for deadlock. Repeat: They are not sufficient.
1. Mutual exclusion: A resource can be assigned to at most one process
at a time (no sharing).
2. Hold and wait: A process holding a resource is permitted to
request another.
3. No preemption: Resources cannot be taken away from a process;
the process must release them itself.
4. Circular wait: There must be a chain of processes such that each
member of the chain is waiting for a resource held by the next
member of the chain.
The first three are characteristics of the system and resources. That
is, for a given system with a fixed set of resources, the first three
conditions are either true or false: They don't change with time. The truth
or falsehood of the last condition does indeed change with time as the
resources are requested/allocated/released.
9.2.2: Deadlock Modeling:
Following are several examples of a Resource Allocation Graph,
also called a Reusable Resource Graph.
To find a directed cycle in a directed graph is not hard. The idea is simple.
1. For each node in the graph do a depth-first traversal to see if the graph
is a DAG (directed acyclic graph), building a list as you go down the
DAG (and pruning it as you backtrack back up).
2. If you ever find the same node twice on your list, you have found a
directed cycle, the graph is not a DAG, and deadlock exists among the
processes in your current list.
3. If you never find the same node twice, the graph is a DAG and no
deadlock occurs.
4. The searches are finite since there are a finite number of nodes.
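The four steps above can be sketched in Python; the graph is represented as a hypothetical adjacency dictionary mapping each node (process or resource) to its successors.

```python
def find_cycle(graph):
    """Depth-first traversal looking for a directed cycle.

    Returns the nodes on a cycle, or None if the graph is a DAG.
    """
    def dfs(node, path):
        if node in path:                 # same node twice on the list: cycle
            return path[path.index(node):]
        for succ in graph.get(node, []):
            cycle = dfs(succ, path + [node])   # build the list going down
            if cycle:                          # (pruned as we backtrack)
                return cycle
        return None

    for start in graph:                  # do a traversal from each node
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None                          # no node repeats: a DAG, no deadlock
```

In the test below, P1 holds R1 and requests R2 while P2 holds R2 and requests R1, giving the classic cycle.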
The figure above shows a resource allocation graph with multiple-unit
resources.
Each unit is represented by a dot in the box.
Request edges are drawn to the box since they represent a request for any
dot in the box.
Allocation edges are drawn from the dot to represent that this unit of the
resource has been assigned (but all units of a resource are equivalent and
the choice of which one to assign is arbitrary).
Note that there is a directed cycle in red, but there is no deadlock. Indeed
the middle process might finish, erasing the green arc and permitting the
blue dot to satisfy the rightmost process.
There is an algorithm for detecting deadlocks in this more general
setting. The idea is as follows.
1. Look for a process that might be able to terminate (i.e., all its request
arcs can be satisfied).
2. If one is found pretend that it does terminate (erase all its arcs), and
repeat step 1.
3. If any processes remain, they are deadlocked.
The algorithm just given makes the most optimistic assumption about a
running process: it will return all its resources and terminate normally. If
we still find processes that remain blocked, they are deadlocked.
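A minimal sketch of this detection algorithm, assuming the usual vectors: `available` free units per resource class, and per-process `allocation` and `request` tables (the names are illustrative).

```python
def find_deadlocked(available, allocation, request):
    """Detect deadlock with multiple-unit resources.

    available[r]      -- free units of resource r
    allocation[p][r]  -- units of r currently held by process p
    request[p][r]     -- units of r that p is still asking for
    Returns the set of deadlocked processes (empty if none).
    """
    free = list(available)
    remaining = set(allocation)
    progress = True
    while progress:
        progress = False
        for p in list(remaining):
            # a process that might terminate: all request arcs satisfiable
            if all(request[p][r] <= free[r] for r in range(len(free))):
                for r in range(len(free)):       # pretend it terminates:
                    free[r] += allocation[p][r]  # erase arcs, reclaim units
                remaining.discard(p)
                progress = True
    return remaining              # whoever is left is deadlocked
```

The test shows the optimism paying off: P0 can finish, and the units it returns then let P1 finish too.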
Preemption:
For example, to take a laser printer away from its owner, the
operator can collect all the sheets already printed and put them in a pile.
Then the process can be suspended (marked as not runnable). At this point
the printer can be assigned to another process. When that process finishes,
the pile of printed sheets can be put back in the printer’s output tray and
the original process restarted.
Rollback:
Killing processes:
We have two processes H (horizontal) and V (vertical).
The origin represents them both starting.
Their combined state is a point on the graph.
The parts where the printer and plotter are needed by each process are
indicated.
The dark green is where both processes have the plotter and hence
execution cannot reach this point.
Light green represents both having the printer; also impossible.
Pink is both having both a printer and plotter; impossible.
Gold is possible (H has plotter, V has printer), but the system can't get
there.
The upper right corner is the goal; both processes have finished.
The red dot is ... (cymbals) deadlock. We don't want to go there.
The cyan is safe. From anywhere in the cyan we have horizontal and
vertical moves to the finish point (the upper right corner) without hitting
any impossible area.
The magenta interior is very interesting. It is:
Possible: each process has a different resource.
Not deadlocked: each process can move within the magenta.
Deadly: deadlock is unavoidable; you will hit a magenta-green boundary
and then will have no choice but to turn and go to the red dot.
The cyan-magenta border is the danger zone.
The dashed line represents a possible execution pattern.
With a uniprocessor no diagonals are possible. We either move to the
right meaning H is executing or move up indicating V is executing.
The trajectory shown represents:
1. H executing a little.
2. V executing a little.
3. H executes; requests the printer; gets it; executes some more.
4. V executes; requests the plotter.
The crisis is at hand!
If the resource manager gives V the plotter, the magenta has been
entered and all is lost. “Abandon all hope ye who enter here” --Dante.
The right thing to do is to deny the request, let H execute moving
horizontally under the magenta and dark green. At the end of the dark
green, no danger remains, both processes will complete successfully.
Victory!
This procedure is not practical for a general purpose OS since it
requires knowing the programs in advance. That is, the resource
manager knows in advance what requests each process will make and
in what order.
In the definition of a safe state no assumption is made about the
running processes; that is, for a state to be safe, termination must occur no
matter what the processes do (provided they all terminate and do not
exceed their claims). Making no assumption is the same as making the
most pessimistic assumption.
You can NOT tell whether a state is safe until you are given the initial
claims of each process. Please do not make the unfortunately common
exam mistake of giving an example involving safe states without giving
the claims.
For the figure on the right, if the initial claims are:
P: 1 unit of R and 2 units of S (written (1,2))
Q: 2 units of R and 1 unit of S (written (2,1))
then the state is NOT safe.
But if the initial claims are instead:
P: 2 units of R and 1 unit of S (written (2,1))
Q: 1 unit of R and 2 units of S (written (1,2))
then the state IS safe.
Explain why this is so.
3. The banker now pretends that P has terminated (since the banker knows
that it can guarantee this will happen). Hence the banker pretends that all
of P's currently held resources are returned. This makes the banker richer
and hence perhaps a process that was not eligible to be chosen as P
previously, can now be chosen.
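The banker's safety check can be sketched as below. Only the claims in the test come from the text; the assumed holdings (P holds one unit of R, Q one unit of S, one unit of each free) are a guess at the figure referred to above, so treat the test data as illustrative.

```python
def is_safe(available, held, claims):
    """Banker's algorithm safety check over resource vectors.

    available -- free units per resource class
    held[p]   -- units each process currently holds
    claims[p] -- each process's initial (maximum) claim
    """
    free = list(available)
    remaining = dict(held)
    while remaining:
        for p, h in remaining.items():
            # worst-case extra need of p = claim minus what it holds
            need = [c - x for c, x in zip(claims[p], h)]
            if all(n <= f for n, f in zip(need, free)):
                # pretend p runs to completion and returns its resources
                free = [f + x for f, x in zip(free, h)]
                del remaining[p]
                break
        else:
            return False   # no process is guaranteed to finish: unsafe
    return True
```

With claims P=(2,1), Q=(1,2) the banker can guarantee P finishes first, making it rich enough to guarantee Q; with the claims swapped, neither worst-case need fits in what is free.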
Example 1
9.6 DEADLOCK PREVENTION
9.7 ISSUES
9.7.2: Starvation:
As usual FCFS is a good cure. Often this is done by priority aging
and picking the highest-priority process to get the resource. The system
can also periodically stop accepting new processes until all old ones get
their resources.
Problems on Deadlock:
Problem 01:
A system has 3 processes P1, P2 and P3 sharing a resource R, with
maximum demands of 2, 3 and 4 units of R respectively. The maximum
number of units of R that ensures that deadlock can occur is:
1. 3
2. 5
3. 4
4. 6
Solution:
In the worst case, the number of units that each process holds is one
less than its maximum demand. So,
Process P1 holds 1 unit of resource R
Process P2 holds 2 units of resource R
Process P3 holds 3 units of resource R
Thus,
Maximum number of units of resource R that ensures deadlock
= 1 + 2 + 3 = 6
Minimum number of units of resource R that ensures no deadlock
= 6 + 1 = 7
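The worst-case argument above generalizes to any set of maximum demands and can be checked with a one-line computation:

```python
def min_units_for_no_deadlock(max_demands):
    """Worst case: every process holds one unit less than its maximum
    demand and all are blocked; one extra unit lets some process finish."""
    return sum(d - 1 for d in max_demands) + 1
```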
9.8 SUMMARY
1) Explain how the system can recover from the deadlock using
(a) recovery through preemption.
(b) recovery through rollback.
(c) recovery through killing processes.
2) Explain deadlock detection and recovery.
3) How can deadlocks be prevented?
4) Explain deadlock prevention techniques in detail.
5) Explain Deadlock Ignorance.
6) Define the Deadlock with Suitable examples.
7) Explain Deadlock Avoidance in detail.
8) Explain other issues in deadlocks.
9) Explain Resource Acquisition in Deadlock.
10) A system has 3 user processes P1, P2 and P3 where P1 requires 21
units of resource R, P2 requires 31 units of resource R, P3 requires
41 units of resource R. The minimum number of units of R that
ensures no deadlock is _____?
*****
UNIT IV
10
VIRTUALIZATION AND CLOUD
Unit Structure
10.0 Objectives
10.1 Introduction
10.1.1 About VMM
10.1.2 Advantages
10.2 Introduction - Cloud
10.3 Requirements for Virtualization
10.4 Type 1 & Type 2 Hypervisors
10.5 Let us sum it up
10.6 List of references
10.7 Bibliography
10.8 Unit End Questions
10.0 OBJECTIVES
For instance, organizations often depend on more than one
operating system for their daily operations: a Web server on Linux, a mail
server on Windows, an e-commerce server for customers running on OS
X, and a few other services running on various types of UNIX. The
obvious solution to this is making use of virtual machine technology.
1. It is important that virtual machines act just like the real McCoy (real
thing).
2. In particular, it must be possible to boot them like real machines and
install arbitrary operating systems on them, just as can be done on the
real hardware.
3. It is the task of the hypervisor to provide this illusion and to do it
efficiently. Every hypervisor is measured on the following three
dimensions:
a. Safety: The hypervisor should have full control of the virtualized
resources.
b. Fidelity: The behaviour of the program on a virtual machine
should be identical to that of the same program running on bare
hardware.
c. Efficiency: Much of the code in a virtual machine should run
without intervention by the hypervisor.
4. The interpreter may be able to execute an INC (increment) as it is, but
instructions that are not safe to execute directly must be simulated by
the interpreter.
5. For instance, we cannot really allow the guest operating system to
disable interrupts for the entire machine or modify the page-table
mappings.
6. The idea is to make the operating system on top of the hypervisor
think that it has disabled interrupts, or changed the machine’s page
mappings.
7. Every CPU with kernel mode and user mode has a set of instructions
that behave differently when executed in kernel mode than when
executed in user mode.
8. These include instructions that do I/O, change the MMU settings, and
so on.
9. Popek and Goldberg called these sensitive instructions. There is also a
set of instructions that cause a trap if executed in user mode.
10. Popek and Goldberg called these privileged instructions. Their paper
stated for the first time that a machine is “virtualizable” only if the
sensitive instructions are a subset of the privileged instructions.
b) Type 2 Hypervisor: is a different kind of animal. It is a program that
relies on, say, Windows or Linux to allocate and schedule resources,
very much like a regular process. Of course, the type 2 hypervisor still
pretends to be a full computer with a CPU and various devices. Both
types of hypervisor must execute the machine’s instruction set in a
safe manner. For instance, an operating system running on top of the
hypervisor may change and even mess up its own page tables, but not
those of others.
10.6 LIST OF REFERENCES
10.7 BIBLIOGRAPHY
*****
11
MULTIPROCESSING SYSTEM
Unit Structure
11.0 Objectives
11.1 Pre-requisites
11.2 Techniques for efficient virtualization
11.3 Memory virtualization
11.4 I/O Virtualization
11.5 Virtual appliances
11.6 Let us sum it up
11.7 List of References
11.8 Bibliography
11.9 Unit End Questions
11.0 OBJECTIVES
11.1 PRE-REQUISITES
3. However, the virtual machine runs a guest operating system that thinks
it is in kernel mode. We will call this virtual kernel mode.
4. The virtual machine also runs user processes, which think they are in
user mode.
5. What happens when the guest operating system (which thinks it is in
kernel mode) executes an instruction that is allowed only when the
CPU really is in kernel mode? Normally, on CPUs without VT, the
instruction fails and the operating system crashes.
6. On CPUs with VT, when the guest operating system executes a
sensitive instruction, a trap to the hypervisor does occur.
7. Now, let's understand how to migrate a virtual machine from one
physical machine to another.
a) The goal is to move the virtual machine from one piece of hardware
to the new machine without taking it down at all.
b) What modern virtualization solutions offer is known as live
migration: they move the virtual machine while it is still
operational.
c) For instance, they employ techniques like pre-copy memory
migration.
d) This means that they copy memory pages while the machine is still
serving requests.
e) Most memory pages are not written much, so copying them over is
safe.
f) Remember, the virtual machine is still running, so a page may be
modified after it has already been copied.
g) When memory pages are modified, we have to make sure that the
latest version is copied to the destination, so we mark them as dirty.
h) They will be recopied later. When most memory pages have been
copied, we are left with a small number of dirty pages.
i) We now pause very briefly to copy the remaining pages and resume
the virtual machine at the new location. While there is still a pause, it
is so brief that applications typically are not affected.
j) When the downtime is not noticeable, it is known as a seamless live
migration.
a) The boxes represent pages, and the arrows show the different
memory mappings.
b) The arrows from guest virtual memory to guest physical memory
show the mapping maintained by the page tables in the guest
operating system.
c) The arrows from guest physical memory to machine memory show
the mapping maintained by the VMM.
d) The dashed arrows show the mapping from guest virtual memory
to machine memory in the shadow page tables also maintained by
the VMM.
e) The underlying processor running the virtual machine uses the
shadow page table mappings
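The composition described above can be illustrated with dictionaries standing in for the page tables (the page numbers are made up):

```python
guest_page_table = {0: 7, 1: 3}     # guest virtual  -> guest physical
vmm_map = {7: 42, 3: 19}            # guest physical -> machine memory

def build_shadow(guest_pt, vmm):
    """The shadow page table maps guest virtual pages straight to machine
    pages: the composition of the two mappings, used by the processor."""
    return {gv: vmm[gp] for gv, gp in guest_pt.items() if gp in vmm}
```

The VMM must keep this composed table consistent whenever either underlying mapping changes, which is why guest page-table updates are intercepted.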
1. The guest operating system will typically start out probing the
hardware to find out what kinds of I/O devices are attached. These
probes will trap to the hypervisor. What should the hypervisor do
with them?
2. One approach is for it to report back that the disks, printers, and so on
are the ones that the hardware actually has.
i. The guest will then load device drivers for these devices and try to
use them.
ii. When the device drivers try to do actual I/O, they will read and
write the device’s hardware device registers.
iii. These instructions are sensitive and will trap to the hypervisor,
which could then copy the needed values to and from the hardware
registers, as needed.
iv. But here, too, we have a problem. Each guest OS could think it
owns an entire disk partition, and there may be many more virtual
machines (hundreds) than there are actual disk partitions.
7. These "shrink-wrapped" virtual machines are often called "virtual
appliances".
8. As an example, Amazon's EC2 cloud has many pre-packaged virtual
appliances available for clients, which it offers as convenient software
services ("Software as a Service").
11.8 BIBLIOGRAPHY
*****
12
MULTIPLE PROCESSING SYSTEMS
Unit Structure
12.0 Objectives
12.1 Pre-requisites
12.2 Virtual machines on multicore CPUs
12.3 Licensing Issues
12.4 Clouds
12.4.1 Characteristics
12.4.2 Services Offered
12.4.3 Advantages
12.5 Multiple Processor Systems
12.5.1 Multiprocessors
12.5.2 Multi-computers
12.5.3 Distributed Systems
12.6 Let us sum it up
12.7 List of References
12.8 Bibliography
12.9 Unit End Questions
12.0 OBJECTIVES
12.1 PRE-REQUISITES
3. The problem is much worse in companies that have a license allowing
them to have ‘n’ machines running the software at the same time,
especially when virtual machines come and go on demand.
4. In some cases, software vendors have put an explicit clause in the
license forbidding the licensee from running the software on a virtual
machine or on an unauthorized virtual machine.
5. For companies that run all their software exclusively on virtual
machines, this could be a real problem. Whether any of these
restrictions will hold up in court and how users respond to them
remains to be seen.
12.4 CLOUDS
12.4.2 Services offered by Cloud:
12.4.2.1 Software as a Service (It offers specific software)
12.4.2.2 Platform as a Service (It creates an environment which provides
a specific Operating system, database, web server etc.)
12.4.2.3 Infrastructure as a Service (The same cloud can run different
Operating Systems). We can refer to the diagram below for more
understanding:
b. Each CPU has its own private memory and its own private copy of the
operating system.
c. An alternative to this scheme is to allow all the CPUs to share the
operating system code and make private copies of only the data
structures of the OS.
d. There are 4 aspects of this design:
i. When a process makes a system call, the system call is caught and
handled on its own CPU using the data structures in that operating
system's tables.
ii. Each operating system has its own tables; it also has its own set of
processes that it schedules by itself.
iii. There is no sharing of physical pages, so one CPU may be
overburdened while another is idle, as there is no load sharing.
iv. There is no sharing of memory, so a program cannot grow beyond
the memory of its own CPU.
12.5.1.3 Symmetric Multiprocessor:
a. It eliminates the asymmetry in Master-Slave configuration.
b. There is one copy of the operating system in memory, but any CPU
can run it.
c. It eliminates the master CPU bottleneck, since there is no master.
d. No need to redirect system calls to 1 CPU as each CPU can run the
OS.
e. While running a process, the CPU on which the system call was made
processes the system call.
12.5.2 Multi-computers:
a. Following are the various interconnection technologies used in
multi-computers:
b. Single Switch/Star topology: Every node contains a network
interface card and all computers are connected to a switch/hub. Fast
and expandable, but the switch is a single point of failure: a failure
in the switch/hub can take down the entire system.
c. Ring Topology: Each node has two wires coming out the network
interface card, one into the node on the left and one going into the
node on the right. There is no use of switches in this topology.
d. Grid/mesh topology: two dimensional design with multiple switches
and can be expanded easily to large size. Its diameter is the longest
path between any two nodes.
e. Double Torus: an alternative to the grid, which is a grid with the
edges connected. Compared to the grid its diameter is less and it is
more fault tolerant. The diameter is less because opposite corners
can communicate in only two hops.
f. Cube: Fig e. shows a 2 x 2 x 2 cube, which is a regular three-
dimensional topology. The number of nodes is 2^n for an
n-dimensional cube, so a three-dimensional cube has 2^3 = 8 nodes.
g. A 4-D Hypercube: Fig (f) shows a four-dimensional cube constructed
from two three-dimensional cubes with the equivalent nodes
connected. An n-dimensional cube formed this way is called a
hypercube. Many parallel computers are built using the hypercube
topology.
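The hypercube's wiring rule can be stated compactly: label the 2^n nodes with n-bit numbers; two nodes are connected exactly when their labels differ in one bit. A small sketch:

```python
def hypercube_neighbors(node, n):
    """Neighbors of a node in an n-dimensional hypercube (2**n nodes):
    flip each of the n bits of the node's label in turn."""
    return [node ^ (1 << bit) for bit in range(n)]
```

Each node therefore has exactly n links, and any two nodes are at most n hops apart (the diameter grows only logarithmically with the number of nodes).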
12.6 LET US SUM IT UP
12.7 LIST OF REFERENCES
12.8 BIBLIOGRAPHY
*****
13
LINUX CASE STUDY
Unit Structure
13.0 Objectives
13.1 History
13.1.1 History of UNIX
13.1.2 History of Linux
13.2 OVERVIEW
13.2.1 An Overview of Unix
13.2.2 Overview of Linux
13.3 PROCESS in Linux
13.4 Memory Management
13.5 Input output in Linux
13.6 Linux File system
13.7 Security in Linux
13.8 Summary
13.9 List of references
13.10 Bibliography
13.11 Unit End Questions
13.0 OBJECTIVES
To understand principles of Linux
To learn principles of Process Management, Memory Management
To learn principles of I/O in Linux, File System and Security
IV. Ken Thompson spent a year's sabbatical with the University of
California at Berkeley. While there he and two graduate students,
Bill Joy and Chuck Haley, wrote the first Berkeley version of Unix,
which was distributed to students.
V. This resulted in the source code being worked on and developed by
many different people.
VI. The Berkeley version of Unix is known as BSD, Berkeley Software
Distribution. From BSD came the vi editor, C shell, virtual memory,
Sendmail, and support for TCP/IP.
VII. For several years SVR4 was the more conservative, commercial, and
well supported.
VIII. Today SVR4 and BSD look very much alike. Probably the biggest
cosmetic difference between them is the way the ps command
functions.
IX. The Linux operating system was developed as a Unix look alike and
has a user command interface that resembles SVR4.
Creation:
In 1991, Torvalds became interested in operating systems and began
work on the Linux kernel. He later switched it from its original license,
which prohibited commercial redistribution, to the GNU GPL. The
developers worked to integrate the GNU components with the Linux
kernel, creating a fully functional and free operating system.
13.2 OVERVIEW
The real-time sharing of resources makes UNIX one of the most
powerful operating systems ever.
Multitasking capability:
Many computers can do only one thing at a time, as anyone with a
PC or laptop can demonstrate: try to log in to the company's network
while opening the browser and the word-processing program. When
juggling multiple instructions, the processor may freeze for a few
seconds.
Multiuser capability:
The same design that allows multitasking allows multiple users to use
the computer. The computer can accept many user commands (depending
on the design of the computer) to run programs, access files and print
documents at the same time.
UNIX programs:
A Linux-based system is a modular Unix-like Operating System. It
derives much of its basic design from principles established in Unix
during the 1970s and 1980s. Such a system uses a monolithic kernel, the
Linux kernel, which handles process control, networking, and peripheral
and file system access. Device drivers are either integrated directly with
the kernel or added as modules loaded while the system is running.
A bootloader - for example GRUB or LILO. This is a program which
is executed by the computer when it is first turned on, and loads the
Linux kernel into memory.
An init program - This is a process launched by the Linux kernel, and
is at the root of the process tree; in other words, all processes are
launched through init. It starts processes such as system services and
login prompts (whether graphical or in terminal mode).
Software libraries which contain code which can be used by running
processes. On Linux systems using ELF-format executable files, the
dynamic linker which manages use of dynamic libraries is
"ld-linux.so".
The most commonly used software library on Linux systems is the
GNU C Library. If the OS is set up for the user to compile software
themselves, header files will also be included to describe the interface
of installed libraries.
User interface programs such as command shells or windowing
environments.
Major services in a UNIX system:
Init: The single most vital service in a UNIX system is provided by
init. Init is started as the first process of every UNIX system, as the last
thing the kernel does when it boots. When init starts, it continues the boot
process by doing various start-up chores (checking and mounting
filesystems, starting daemons, etc.). When the system is shut down, it is
init that is responsible for killing all other processes, unmounting all
filesystems and stopping the processor, along with anything else it has
been configured to do.
Syslog:
Periodically executed commands can be set up with the cron service.
Each user can have a crontab file, where she lists the commands she
wishes to execute and the times they ought to be executed. The cron
daemon takes care of starting the commands when specified.
Graphical interface:
This arrangement makes the system more flexible, but has the
disadvantage that it is simple to implement a different interface for every
program, making the system harder to learn.
Networking:
Network logins:
Network logins work a little differently than normal logins. For
every person logging in via the network there is a separate virtual network
connection, and there can be any number of these depending on the
available bandwidth. It is therefore impossible to run a separate getty for
every possible virtual connection.
Techniques are:
1. Paging:
Example: the figure shows main memory holding pages A2, A3, A0
and A1 of a process in non-contiguous frames.
2. Segmentation:
Segment Table:

Limit    Base
1500     1500
1000     4700
 500     4500

According to the above table, the segments are stored in the main
memory as:
Segment-0
Segment-3
Segment-2
Main Memory
Paging vs Segmentation:

Paging                                Segmentation
Paging divides a program into         Segmentation divides a program
fixed-size pages.                     into variable-size segments.
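Both translation schemes can be sketched side by side; `PAGE_SIZE` is an assumed value for the sketch, and the segment-table test data follows the (limit, base) entries shown above.

```python
PAGE_SIZE = 1024   # assumed page size for the sketch

def translate_paged(vaddr, page_table):
    """Paging: split the virtual address into (page, offset), then look
    the page up in the page table to find its frame."""
    page, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

def translate_segmented(segment, offset, segment_table):
    """Segmentation: each entry is (limit, base); the offset is checked
    against the limit before being added to the base."""
    limit, base = segment_table[segment]
    if offset >= limit:
        raise MemoryError("offset beyond segment limit")
    return base + offset
```

Note the difference: paging computes the page number arithmetically from a single flat address, while segmentation takes an explicit (segment, offset) pair and bounds-checks the offset.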
Memory Management:
Demand Paging:
Deciding which pages need to be kept in the main memory and
which need to be kept in the secondary memory is difficult, because we
cannot say in advance that a process will require a particular page at a
particular moment of time. So, to overcome this problem, there is a
concept called Demand Paging. It suggests keeping all pages in the
secondary memory until they are required; in other words, do not load a
page into the main memory until it is required. Whenever a page is
referred to for the first time, it has to be brought in from the secondary
memory.
Page fault:
If the referenced page is not available in the main memory, the
situation is called a page miss or page fault. The CPU then has to fetch
the missed page from the secondary memory. If the number of page faults
is very high, the effective access time of the system becomes very high.
Thrashing:
Page Replacement:
A victim page in the main memory is chosen, swapped out (written to
disk if it has been modified), and replaced by the required page. Page
replacement has to be done when the requested page cannot be found in
the main memory (a page fault).
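The fault-counting behaviour described above can be sketched with a tiny FIFO page-replacement simulation in plain shell (the reference string and frame count are made-up illustration values, not from this text):

```shell
# Hypothetical FIFO page-replacement simulation, 3 frames.
frames=""            # pages currently resident in main memory
capacity=3
faults=0
for page in 1 2 3 4 1 2 5; do
  case " $frames " in
    *" $page "*) ;;                  # page hit: nothing to do
    *) faults=$((faults + 1))        # page fault: load the page
       frames="$frames $page"
       set -- $frames                # evict the oldest page when full
       if [ $# -gt $capacity ]; then shift; frames="$*"; fi ;;
  esac
done
echo "page faults: $faults"          # → page faults: 7
```

Every reference here faults because, with only three frames, FIFO keeps evicting a page that is needed again soon.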
benchmark: other algorithms are compared against it in terms of
optimality.
13.5 INPUT AND OUTPUT
The main advantage of block devices is that they can be read
randomly, whereas serial (character) devices are read sequentially.
Another advantage of using block devices is that they allow access
to random locations on the device, and data from the device is read with
a fixed block size. Input and output to block devices is scheduled using
the "elevator algorithm", which services requests on the same principle as
an elevator services floors.
The slowest part of any Linux system (or any other operating
system) is the disk I/O system. There is a large difference between the
speeds of the CPU, RAM and hard disk, and hence in the time each takes
to complete an input/output request. If one of the processes running on
your system does a lot of read/write operations on the disk, there will be
an intense lag or slow response from other processes, since they are all
waiting for their respective I/O operations to get completed.
Linux I/O Commands:
Output Redirection:
The >> operator can be used to append output to an existing file as
follows:
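A minimal sketch of both redirection operators (the file name users is just an example): `>` creates or truncates the file, while `>>` appends to it.

```shell
echo "first line"  > users    # '>' creates the file (or overwrites it)
echo "second line" >> users   # '>>' appends to the existing file
cat users                     # prints both lines, in order
```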
Input Redirection:
The commands that normally take their input from the standard
input can have their input redirected from a file in this manner. For
example, to count the number of lines in the file users generated above,
you can execute the command as follows
Upon execution, you will receive the following output. You can
count the number of lines in the file by redirecting the standard input of
the wc command from the file users
In the first case, wc knows that it is reading its input from the file
users. In the second case, it only knows that it is reading its input from
standard input, so it does not display the file name.
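The difference described above can be sketched like this (the two-line users file is a made-up stand-in for the one generated earlier):

```shell
printf 'amrood\nmahran\n' > users   # hypothetical two-line users file
wc -l users     # reads the named file: prints the count AND the file name
wc -l < users   # reads standard input: prints only the count
```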
Redirection Commands:
Following is a complete list of commands which you can use for
redirection
/bin: Where the core commands of Linux exist, for example ls, mv.
/home: Here each user is allotted a personal folder, named after the user,
in which to store his or her files, like /home/likegeeks.
/lib: Here the libraries of the installed packages are located. You may find
duplicates in different folders, since libraries are shared among all
packages, unlike Windows.
/media: Here external devices like DVDs and USB sticks are mounted,
and you can access their files here.
/sbin: Similar to /bin, but the binaries here are for the root user only.
/tmp: Contains the temporary files.
/usr: Where the utilities and files shared between the users of Linux
reside.
Ext2: The first Linux file system; it allows up to 2 terabytes of data.
Ext3: Derived from Ext2; it is more upgraded (adding journaling) and has
backward compatibility.
Ext4: It is quite faster and allows larger files to be operated on with
significant speed.
JFS: An old file system made by IBM. It works very well with both small
and big files, but files can get corrupted after long use.
XFS: An old file system which works slowly with small files.
Btrfs: Made by Oracle. It is not as stable as Ext in some distros, but you
can say that it is a replacement for Ext if you need one. It has good
performance.
The Linux file system unifies all physical hard drives and
partitions into a single directory structure. It starts at the top-the root
directory.
162
2. ls – This command lists the contents of the present directory.
4. ls -la – This command lists all the contents of the present directory,
including hidden files and directories.
5. mkdir – This command creates a new directory.
12. cp file1 file2 – This command copies the contents of file file1 into
file file2.
Linux Commands:
File Commands:
ls = List the contents of the directory
ls -at = Show a listing of all files (including hidden ones), sorted by
modification time
ls -lt = Sort the formatted listing by time modified
cd dir = Change to directory dir
cd = Change to the home directory
pwd = Show which directory the user is working in
mkdir dir = Create a directory dir
cat >file = Place the standard input into the file
more file = Show the content of the file one screen at a time
head file = Output the first 10 lines of the file
tail file = Output the last 10 lines of the file
tail -f file = Output the content of the file as it grows, starting with
the last 10 lines
touch file = Create a file or update its modification time
rm file = Delete a file
rm -r dir = Delete an entire directory
rm -f file = Force-remove the file
rm -rf dir = Force-remove a directory
cp file1 file2 = Copy the contents of file1 to file2
cp -r dir1 dir2 = Copy the contents of dir1 to dir2; also creates dir2 if
not present
mv file1 file2 = Rename or move file1 to file2; if file2 is an existing
directory, file1 is moved into it
ln -s file link = Create a symbolic link link to file
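A short session tying several of the file commands above together (the directory and file names are made up for illustration):

```shell
mkdir demo && cd demo        # create a working directory and enter it
touch notes.txt              # create an empty file
cp notes.txt copy.txt        # copy it
mv copy.txt renamed.txt      # rename the copy
ln -s notes.txt link.txt     # symbolic link pointing at the original
ls                           # lists: link.txt  notes.txt  renamed.txt
cd .. && rm -rf demo         # leave and force-remove the whole directory
```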
Process Management:
ps = Display the currently running processes
top = Display all running processes
kill pid = Kill the given process (as per the specified PID)
killall proc = Kill all processes named proc
pkill pattern = Kill all processes matching the given pattern
bg = List stopped or background jobs; resume a stopped job in the
background
fg = Bring the most recent job to the foreground
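A minimal illustration of killing a process by PID, using sleep as a stand-in for any long-running process (interactive commands like ps and top are better explored at a terminal):

```shell
sleep 60 &                        # start a long-running background job
pid=$!                            # $! holds the PID of that job
kill -0 "$pid"                    # signal 0: check that the process exists
kill "$pid"                       # terminate it by PID (default: SIGTERM)
wait "$pid" 2>/dev/null || true   # reap the terminated process
```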
13.7.2 Authentication:
Example: passwd. The setuid and setgid bits allow a process to run with
the privileges of the file's owner; improper use of setuid and setgid can
lead to security breaches. LSM (Linux Security Modules) provides a
framework for additional security checks.
13.8 SUMMARY
Linux is a family of open-source Unix-like operating systems
based on the Linux kernel, an operating system kernel first released on
September 17, 1991, by Linus Torvalds. Linux is usually packaged in a
Linux distribution.
Many of the available software programs, utilities, games available on
Linux are freeware or open source. Even such complex programs such
as Gimp,OpenOffice, StarOffice are available for free or at a low cost.
A GUI makes the system more flexible, but has the disadvantage that it
is possible to implement a different interface for every program, making
the system harder to learn.
The Linux kernel consists of several important parts: process
management, memory management, hardware device drivers,
filesystem drivers, network management, and various other bits and
pieces.
Memory management takes care of assigning memory areas and swap
file areas to processes, parts of the kernel, and for the buffer cache.
Process management creates processes, and implements multitasking
by switching the active process on the processor.
Linux File System or any file system generally is a layer which is
under the operating system that handles the positioning of your data on
the storage.
The Linux file system unifies all physical hard drives and partitions
into a single directory structure. It starts at the top-the root directory.
Kernel provides a minimal set of security features
13.10 BIBLIOGRAPHY
https://www.tutorialspoint.com
https://www.geeksforgeeks.org
https://www.javatpoint.com
https://guru99.com
www.slideshare.net
Unit Structure
14.0 Objectives
14.1 Android History
14.2 Android Overview
14.2.1 Features
14.2.2 Android Architecture
14.3 Android Programming
14.4 Process
14.4.1 Introduction
14.4.2 Process in the application
14.4.3 Structure of process
14.4.4 States of process
14.4.5 Process lifecycle
14.4.6 Interprocess communication
14.5 Android memory management
14.5.1 Introduction
14.5.2 Garbage collection
14.5.3 How to improve memory usage
14.5.4 How to avoid memory leaks
14.5.5 Allocate and reclaim app memory
14.6 File system
14.6.1 Flash memory- android OS file system
14.6.2 Media-based android file system
14.6.3 Pseudo file systems
14.6.4 Android / android application file structure
14.6.5 Files in android studio and explained below
14.7 Security in android
14.7.1 Authentication
14.7.2 Biometrics
14.7.3 Encryption
14.7.4 Keystore
14.7.5 Trusty trusted execution environment (TEE)
14.7.6 Verified boot
14.8 Summary
14.9 List of references
14.10 Bibliography
14.11 Unit End Questions
14.0 OBJECTIVES
widget for the Wi-Fi, Bluetooth,
GPS, etc.
4) Android 2.0 - 2.1 Éclair Google launched the second version
of Android and named it Éclair in
October 2009. It was the first version
of Android with a text-to-speech
support feature, and also included
multiple account support and
navigation with Google Maps. The
first smartphone with the Éclair
version was the Motorola Droid,
which was also the first one sold by
the Verizon wireless company.
5) Android 2.2 Froyo Released in May 2010, also called
"frozen yogurt". New features were
introduced, including Wi-Fi mobile
hotspot functions, push notifications
via the Android Cloud to Device
Messaging (C2DM) service, and
flash support.
6) Android 2.3 Gingerbread Launched in Sept. 2010, it is
currently the oldest version of the
OS that Android devices are still
running. The Nexus S, co-developed
by Google and Samsung, was the
first mobile phone to add both
Gingerbread and NFC hardware. It
also introduced selfie-style features,
by adding support for multiple
cameras and video chat support
within Google Talk.
7) Android 3.0 Honeycomb This version was introduced in Feb.
2011 along with the Motorola Xoom
tablet, and was released by Google
only for tablets and other mobile
devices with larger displays than
normal smartphones. Honeycomb
offered specific features that could
not be handled by the smaller
displays found on smartphones at
the time.
8) Android 4.0 Ice Cream Launched in October 2011. First to
Sandwich introduce the feature of unlocking
the phone using its camera. Other
features are support for on-screen
buttons, the ability to monitor
mobile and Wi-Fi data usage, and
swipe gestures to dismiss
notifications and browser tabs.
9) Android 4.1- 4.3 Jelly Bean Google released versions 4.2 and
4.3, both under the Jelly Bean label,
in Oct. 2012 and July 2013.
Features included software updates,
notifications that showed more
content or action buttons, along
with full support for the Android
version of Google's Chrome web
browser. Google Now made its
debut in Search, and "Project
Butter" sped up and improved touch
responsiveness.
10) Android 4.4 KitKat Officially launched in Sept. 2013;
its internal codename was "Key
Lime Pie". It helped to expand the
overall market, as it was optimized
to run on smartphones that had as
little as 512 MB of RAM. This
allowed many makers to take the
latest version and install it on much
cheaper handsets.
11) Android 5.0 Lollipop Released in November 2014. It
included support for the dual-SIM
card feature, HD Voice calls, and
Device Protection to keep thieves
locked out of your phone even after
a factory reset.
12) Android 6.0 Marshmallow Initially called Macadamia Nut
Cookie, it was previewed in May
2015 and released as Marshmallow
later that year. Features included the
app drawer; it was the first version
with native support for unlocking
the smartphone with biometrics.
USB Type-C support and Android
Pay were also there. Google's
Nexus 6P and Nexus 5X were the
first handsets to ship with it.
13) Android 7.0 Nougat Released in August 2016. It brought
multitasking features designed for
smartphones with bigger screens. It
included a split-screen feature and
fast switching between applications.
Other changes were made by
Google, such as switching to a new
JIT compiler that sped up app
installation. The Pixel, Pixel XL,
and LG V20 were released with this
version.
14) Android 8.0 Oreo This was the second time Google
(August 21, 2017) used a trademarked name for its
Android version; the first was
KitKat. Android 8.0 Oreo launched
in August 2017. It included visual
changes such as native support for
the picture-in-picture mode feature,
new autofill APIs that help in better
managing passwords and form data,
and notification channels.
15) Android 9.0 Pie Released in August 2018, with new
(August 6, 2018) features and updates such as
improved battery life. A gesture-
based home button was added in
this version; when swiped up, it
brings up the apps that were used
recently, a search bar, and
suggestions of five apps at the
bottom of the screen. A new option
was added of swiping left to see the
currently running applications.
16) Android 10 (September Finally, Google opted to drop the
3, 2019) tradition of naming the Android
version after sweets, desserts, and
candies. It was launched in
September 2019. Several new
features were added such as support
for the upcoming foldable
smartphones with flexible displays.
Android 10 also has a dark mode
feature, along with the newly
introduced navigation control using
gestures, the feature for smart reply
for all the messaging apps, and a
sharing menu that is more effective.
14.2.1 Features:
Android is an operating system that supports many great features. A few
of them are listed below:
1. Beautiful UI: Android OS provides a beautiful and intuitive user
interface.
2. Connectivity: Supports a large group of networks like GSM/EDGE,
CDMA, UMTS, Bluetooth, WiFi, LTE, and WiMAX.
3. Storage: Uses SQLite, lightweight relational database storage for data
storage. It is really helpful when there is limited mobile memory
storage to be considered.
4. Media support: It Includes support for a large number of media
formats, Audio as well as for Video, like H.263, H.264, MPEG 4 SP,
AMR, AMR WB, AAC, MP3, JPEG, PNG, GIF & BMP.
5. Messaging: Both SMS and MMS are supported.
6. Web Browser: Based on the open-source WebKit layout engine,
coupled with Chrome's V8 JavaScript engine.
7. Multi-Touch: Supports multi-touch screens.
8. Multi-Task: Supports application multitasking, i.e. the user can jump
from one task to another while various applications run
simultaneously.
9. Resizable widgets: Widgets are resizable, so users can resize them to
show more content or to save space.
10. Multi-Language: Supports single-direction and bi-directional text.
11. Hardware Support: Accelerometer Sensor, Camera, Digital
Compass, Proximity Sensor & GPS, and a lot more.
All these layers are responsible for different roles and features that have been
discussed below.
Linux Kernel:
Libraries:
Android Runtime:
Application Framework:
5) View System: Extensible set of views used to create application user
interfaces.
Applications:
At the top layer, you will find all the Android applications. This
layer uses all the layers below it for the proper functioning of the mobile
app; such applications are Contacts, Calendar, Browser, Games, and
many more.
1. Java:
Java is the official language for Android App Development and it is
the most used language as well. Most apps in the Play Store are built
with Java, and it is also the most supported language by Google.
Java has a great online community for support in case of any
problems.
Java was developed by Sun Microsystems in 1995, and it is used for a
wide range of programming applications. Java code is run by a virtual
machine that runs on Android devices and interprets the code.
However, Java is a complicated language for a beginner to use, as it
contains complex topics like constructors, null pointer exceptions,
concurrency, checked exceptions, etc. The Android Software
Development Kit (SDK) increases the complexity to a great extent.
Development using java also requires a basic understanding of
concepts like Gradle, Android Manifest, and the markup language
XML.
2. Kotlin:
Kotlin is a cross-platform programming language that is used as an
alternative to Java for Android App Development. It was introduced
as a second "official" Android language in 2017.
It can inter-operate with Java and it runs on the Java Virtual Machine.
The only sizable difference is that Kotlin removes superfluous
features of Java such as null pointer exceptions, and removes the
necessity of ending every line with a semicolon.
In short, it is much simpler for beginners to try as compared to Java,
and it can also be used as an "entry point" for Android App
Development.
3. C++:
C++ can be used for Android App Development via the Android
Native Development Kit (NDK). An app cannot be created entirely in
C++; rather, the NDK is used to implement parts of the app in C++
native code. This helps in using C++ code libraries for the app as
required.
While C++ is useful for Android App Development in some cases, it is
much more difficult to set up and is much less flexible. For
applications like 3D games, it can squeeze some extra performance
out of an Android device, and it means that you'll be able to use
libraries written in C or C++.
However, it may also lead to more bugs because of the increased
complexity. So, it is often better to use Java rather than C++, as C++
does not provide enough gain to offset the effort required.
4. C#:
C# is a little bit similar to Java, and so it is ideal for Android App
Development. Like Java, C# also implements garbage collection,
so there are fewer chances of memory leaks. C# also has a cleaner
and simpler syntax than Java, which makes coding with it
comparatively easier.
Earlier, the biggest drawback of C# was that it could run only on
Windows systems, as it used the .NET Framework. However, this
problem was handled by Xamarin.Android, a cross-platform
implementation of the Common Language Infrastructure. Now, the
Xamarin.Android tool can be used to write native Android apps and
share the code across multiple platforms.
5. Python:
It is used for Android App Development even though Android doesn’t
support native Python development. This is done using various tools
that convert the Python apps into Android Packages that can be run on
Android devices.
An example of this is Kivy, an open-source Python library used for
developing mobile apps. It supports Android and also provides rapid
app development. However, a downside is that there won't be native
benefits, as Kivy isn't natively supported.
6. Corona:
It is a software development kit that is used for developing Android
apps using Lua. It contains two operational modes, i.e. Corona
Simulator and Corona Native. The Corona Simulator is used to build
apps directly whereas the Corona Native is used to integrate the Lua
code with an Android Studio project to build an app using native
features.
While Lua is a little limited as compared to Java, it is also much
simpler and easy to learn. It is mostly used to create graphics
applications and games but is by no means limited to that.
We need a text editor like Notepad++ to enter the code, and we can
run said code on an emulator without even needing to compile first.
When we are ready to create an APK and deploy, we can do this using
an online tool.
7. Unity:
Unity is a "game engine", which means it provides things like physics
calculations and 3D graphics rendering, as well as an IDE like
Android Studio.
It is free to use for small developers, it makes it incredibly easy to
create our own games, and the community is strong, which means we
get a lot of support. With just a few lines of code, we can have a basic
platform game set up in less than an hour. It's multiplatform and is
used by many game studios.
It is also a great way to learn object-oriented programming concepts,
as the game objects actually are objects.
It is a common route to becoming a game developer.
For a complete beginner, it is not the entry point to Android
development – but for a small company wanting to create an app for
iOS and Android, it makes sense and there’s plenty of support and
information out there to help you out.
8. PhoneGap:
PhoneGap is the last simple option you can choose for developing
Android apps.
It is powered by Apache Cordova, and it allows you to create apps
using the same code you would normally use to create a website:
HTML, CSS, and JavaScript. This is then shown through a
"WebView" but packaged like an app. It acts like a mediator, allowing
developers to access basic features of the phone, such as the camera.
This is not real Android development, though, and the only real
programming will be in JavaScript.
Conclusion:
There are a lot of apps, such as chat messengers, music players,
games, calculators, etc., that can be created using the above languages.
No single language is the "correct" one for Android development.
So, it is up to you to make the right choice of language based on your
objectives and preferences for each project.
2. Firebase:
With Firebase, we can focus our time and attention on developing the
best possible applications for our business. The operation and internal
functions are very solid. They have taken care of the Firebase
Interface. We can spend more time in developing high-quality apps
that users want to use.
Authentication: Firebase provides built-in, low-friction
authentication.
Hosting: Firebase delivers web content faster.
Remote Configuration: It allows us to customize our app on the go.
Dynamic Links: Dynamic Links are smart URLs that dynamically
change behavior for providing the best experience across different
platforms.
These links allow app users to take directly to the content of their
interest after installing the app - no matter whether they are completely
new or lifetime customers.
Crash Reporting: It keeps our app stable.
Real-time Database: It can store and sync app data in real-time.
Storage: We can easily store the file in the database.
3. Realm DB:
Realm is a relational database management system which is like a
conventional database in that data can be queried, filtered, and
persisted, but it also has objects which are live and fully reactive.
4. ORMLite:
It is a lighter version of Object Relational Mapping, which provides
some simple functionality for persisting Java objects to SQL
databases. It is an ORM wrapper over any mobile SQL-related
database.
It is used to simplify complicated SQL operations by providing a
flexible query builder. It also provides powerful abstract Database
Access Object (DAO) classes.
It is helpful in big applications with complex queries, because it
handles "compiled" SQL statements for repetitive query tasks. It also
supports configuring tables and fields without annotations, and
supports native calls to Android SQLite database APIs.
It does not fulfil all requirements: it is bulky compared to SQLite or
Realm, and slower than both, but faster than most of the other ORMs
present in the market.
14.4 PROCESS
14.4.1 Introduction:
A process is a "program in execution". To accomplish a task, a process
generally needs resources: for instance, CPU time, files, memory, etc.
Resources are allocated to processes in two stages:
at the stage when the process is created, and
dynamically, while the process is running.
Each component supports an android:process attribute that can specify
a process in which that component should run. Developers can set this
attribute so that each component runs in its own process, or so that some
components share a process while others do not. Developers can also set
android:process so that components of different applications run in the
same process, provided that the applications share the same Linux user
ID and are signed with the same certificates.
14.4.5 Process Lifecycle:
There are a total of five levels in the hierarchy. The following list
shows the different types of processes in order of importance:
1. Foreground process:
2. Visible process:
A process that does not have any foreground components, but can
still affect what the user sees on the screen. A process is considered
visible if either of the following conditions is true: it hosts an Activity that
is not in the foreground but is still visible to the user (its onPause() method
has been called) - this could occur, for example, if the foreground activity
starts a dialog, which allows the previous activity to be seen behind it; or
it hosts a Service that is bound to a visible (or foreground) activity. A
visible process is considered extremely important and will not be killed
unless doing so is required to keep all foreground processes running.
3. Service process:
A process that is running a service that has been started with the
startService() method and does not fall into either of the two higher
categories. Although service processes are not directly tied to anything the
user sees, they are generally doing things the user cares about (like playing
music in the background or downloading data over the network), so the
system keeps them running unless there is not enough memory to retain
them along with all foreground and visible processes.
4. Background process:
5. Empty process:
There are two major techniques related to interprocess communication,
namely:
Remote methods: By this we mean remote procedure calls, through which
APIs can be accessed remotely. These calls make methods appear to be
local even though they are executed in another process.
14.5.1 Introduction:
If an app wants to run and live longer in the background, it should
deallocate unnecessary memory; as the app moves further into the
background, the system may otherwise generate an error message or
terminate the application.
Each heap generation has its dedicated upper limit on the quantity
of memory that objects there can occupy. Any time a generation starts to
fill up, the system executes a garbage collection event to release memory.
Even though garbage collection is often quite fast, it can still affect
your app's performance. You don't generally control when a garbage
collection event occurs from within your code.
When the criteria are satisfied, the system stops executing the
process and begins garbage collection. If garbage collection occurs in the
middle of an intensive processing loop like an animation or during music
playback, it can increase processing time.
where "autoboxing/unboxing" is spread all over the usage. Instead, sparse
array-like containers map keys into a plain array.
4. We should avoid creating unnecessary objects. Do not allocate
memory, especially for short-term temporary objects, if you can avoid
it; garbage collection will occur less often when fewer objects are
created.
The logical size of the heap is not the same as the amount of physical
memory used by the heap.
When inspecting our app's heap, Android computes a value called the
Proportional Set Size (PSS), which accounts for both dirty and clean
pages that are shared with other processes - but only in an amount
proportional to how many processes share that RAM. This (PSS) total
is what the system considers to be the physical memory footprint. For
more information regarding PSS, see the Investigating Your RAM
Usage guide.
The Dalvik heap does not compact the logical size of the heap,
meaning Android does not defragment the heap to close up space.
Android can only shrink the logical heap size when there is unused
space at the end of the heap. However, the system can still reduce the
physical memory used by the heap: after garbage collection, Dalvik
walks the heap, finds unused pages, and returns those pages to the
kernel using madvise.
3. JFFS2: It stands for the Journalling Flash File System version 2. It is
the default flash file system for the Android Open Source Project
kernels. This version of the Android file system has been around since
the Android Ice Cream Sandwich operating system was released.
14.6.5 Files in Android Studio and Explained below:
1. AndroidManifest.xml: Every project in Android includes a manifest
file, AndroidManifest.xml, stored in the root directory of its project
hierarchy. The manifest file is an important part of our app because it
defines the structure and metadata of our application, its components
and its requirements. The file includes nodes for each of the Activities,
Services, Content Providers and Broadcast Receivers that make up the
application and, using Intent Filters and Permissions, determines how
they coordinate with each other and with other Android applications.
2. Java: The Java folder contains the Java source code files of the
application. These files are used as controllers for the layout files: a
controller gets the data from the layout file and, after processing that
data, the output is shown in the UI layout. It works as the backend of
an Android application.
3. Drawable: A Drawable folder contains resource-type files (something
that can be drawn). Drawables may be a variety of file types, like
Bitmap, Nine-Patch, Vector (XML), Shape, Layers, States, Levels,
and Scale.
4. Layout: A layout defines the visual structure for the user interface,
such as the UI of an Android application. The Layout folder stores
layout files, which are written in XML. We can add additional layout
objects or widgets as child elements to gradually build a view
hierarchy that defines the layout.
5. Mipmap: The mipmap folder contains the Image Asset files that can
be used in an Android Studio application. It is used to generate icon
types such as launcher icons, action bar and tab icons, and notification
icons.
6. Colors.xml: Colors.xml file contains color resources of the Android
Application. Different color values are identified by a unique name
that can be used in the Android application.
7. Strings.xml: The strings.xml file contains the string resources of the
Android application. Different string values are identified by unique
names that can be used in the Android application program. The file
can also store string arrays, using XML, in the application.
8. Styles.xml: The styles.xml file contains the resources of the theme
style of the Android application. It is written in XML and applies to
all activities in general in Android.
9. build.gradle: This defines and implements the module-specific build
configurations. We add the dependencies needed in the Android
application here in the Gradle module.
and the file attributes and metadata that they associate with files in these
applications.
By default, these files are private and are accessible only by your
application, and they get deleted when the user deletes your Android
application.
14.8 SUMMARY
Android made its first version official in the year 2008, with Android
1.0.
Finally, Google opted to drop the tradition of naming the Android
version after sweets, desserts, and candies. It was launched in
September 2019.
Android OS provides a beautiful and intuitive user interface.
Android is a stack of components of the software which is divided into
five layers.
Android ranks process at the highest level it can, based on the
importance of their components which are currently active in the
process. For e.g. if the process hosts a service and a visible activity,
the process is ranked as a visible process, not a service process.
The Dalvik virtual machine keeps track of memory allocation. Once it
determines that a piece of memory is no longer used by any program,
it frees it back into the heap without any participation from the
programmer.
14.10 BIBLIOGRAPHY
https://www.tutorialspoint.com/
https://www.geeksforgeeks.org/
https://www.javatpoint.com/java-tutorial
https://guru99.com
*****
15
WINDOWS CASE STUDY
Unit Structure
15.0 Objectives
15.1 History of Windows
15.2 Programming Windows
15.3 System Structure
15.4 Process and Threads in Windows
15.5 Memory Management in Window
15.6 Windows IO Management
15.7 Windows NT File System
15.8 Windows Power Management
15.9 Security in Windows
15.10 Summary
15.11 List of References
15.12 Bibliography
15.13 Unit End Questions
15.0 OBJECTIVES
to get people used to moving the
mouse everywhere and clicking
onscreen elements.
2) Windows 2.0 (1987) ● Two years after the release of Windows 1, Microsoft's Windows 2 replaced it in December 1987.
5) Windows 95 (1995) ● As the name implies, Windows 95 arrived in August 1995 and with it brought the first ever Start button and Start menu. It also introduced the idea of "plug and play": connect a peripheral and the operating system finds the suitable drivers for it and makes it work.
● Microsoft's automatic updating played a significant role in Windows 2000, which also became the first Windows to support hibernation.
8) Windows ME (2000) ● Released in September 2000, it was the consumer-aimed operating system paired with Windows 2000, which was aimed at the enterprise market. It introduced some important concepts to consumers, including more automated system recovery tools.
● IE 5.5, Windows Media Player 7 and Windows Movie Maker all made their first appearance here. Autocomplete also appeared in Windows Explorer, but the operating system was notorious for being buggy, failing to install properly and being generally poor.
9) Windows XP (2001) ● It was built on Windows NT like Windows 2000, but brought the consumer-friendly elements from Windows ME. The Start menu and taskbar got a visual overhaul, bringing the familiar green Start button, blue taskbar and vista wallpaper, along with various shadow and other visual effects.
● Its major problem was security: though it had a firewall built in, it was turned off by default. Windows XP's vast popularity turned out to be a boon for hackers and criminals, who exploited its flaws, especially in Internet Explorer, mercilessly - leading Bill Gates to launch a "Trustworthy Computing" initiative and the subsequent issue of two Service Pack updates that hardened XP against attack substantially.
10) Windows Vista (2007) ● Windows XP stayed the course for close to six years before being replaced by Windows Vista in January 2007. Vista updated the look and feel of Windows with more emphasis on transparent elements, search and security. Its development, under the codename "Longhorn", was troubled, with ambitious elements abandoned in order to get it into production.
All Win32 programs largely look the same and act the same but, just like C++ programs, there are slight differences in how a program starts, depending on the compiler you are using. Here we will be testing our programs on Borland C++ Builder, Microsoft Visual C++, and Microsoft Visual C++ .NET.
Start "" (make sure to leave a space before and after "") followed by the absolute path of the program you wish to open, in quotes.
6. Right after that, press Enter and write another line like the one above to open a further application.
7. Make sure to write each Start "" command on a new line, so that each line contains one Start "" command only; otherwise the batch file won't work and you won't be able to open multiple programs!
8. Now save the file with any name you wish, but make sure to save it with the .bat extension (and not as .txt).
Accessing Resources from Code:
The keys that identify resources when they are declared in XAML are also used to retrieve those specific resources in code. The simplest way to retrieve a resource from code is to call either the FindResource or the TryFindResource method from framework-level objects in your application.
Click the Run button (shown below with the arrow) and wait a few seconds. This compiles the program to an EXE file and runs it. When the program runs, you will see the dialog on the screen. It looks like this:
Steps in creating a DOS programme:
Source Code (e.g. .C) → C Compiler → Object File (e.g. .OBJ) → Linker → Finished Programme (e.g. .EXE)
The operating system is divided into a number of layers (levels), each constructed on top of lower layers. The lowest layer (layer 0) is the hardware; the highest (layer N) is the user interface.
A thread can execute any part of the process code, including parts currently being executed by another thread. Each process provides the resources needed to execute a program. A process has a virtual address space, executable code, open handles to system objects, a security context, a unique process identifier, environment variables, a priority class, minimum and maximum working set sizes, and at least one thread of execution. Each process is started with a single thread, often called the primary thread, but can create additional threads from any of its threads. A thread is the entity within a process that can be scheduled for execution. All threads of a process share its virtual address space and system resources. In addition, each thread maintains exception handlers, a scheduling priority, thread local storage, a unique thread identifier, and a set of structures the system will use to save the thread context until it is scheduled.
15.5 MEMORY MANAGEMENT IN WINDOWS
The default 64-bit Windows Operating System (OS) configuration offers up to 16 TB (2^44 bytes) of addressable virtual memory, divided equally between the kernel and user applications.
Each 64-bit process has its own space, while each 32-bit application runs in a virtual 2 GB address space under Windows-on-Windows 64 (WOW64).
15.5.6.1 Caching:
15.6.3 Synchronous and Asynchronous I/O:
• An access control list (ACL) that lets a server administrator control who can access specific files
• Integrated file compression
• Support for names based on Unicode
• Support for long file names as well as "8.3" names
• Data security on both removable and fixed disks
15.7.1 Architecture of Windows NT:
• The design of Windows NT, a line of operating systems produced and sold by Microsoft, is a layered scheme that consists of two key components, user mode and kernel mode.
• To process input/output (I/O) requests, they use packet-driven I/O, which utilizes I/O request packets (IRPs) and asynchronous I/O.
• Kernel mode in Windows NT has full access to the hardware and system resources of the computer. The Windows NT kernel is a hybrid kernel; the architecture comprises a simple kernel, hardware abstraction layer (HAL), drivers, and a range of services (collectively named the Executive), which all exist in kernel mode.
• User mode in Windows NT is made up of subsystems capable of passing I/O requests to the suitable kernel-mode device drivers by using the I/O manager.
• The kernel is also responsible for initializing device drivers at bootup.
• Kernel-mode drivers exist in three levels: highest-level drivers, intermediate drivers and low-level drivers.
• The Windows Driver Model (WDM) exists in the intermediate layer and was mainly designed to be binary- and source-compatible between Windows 98 and Windows 2000.
• The lowest-level drivers are either legacy Windows NT device drivers that control a device directly or a plug and play (PnP) hardware bus driver.
(Diagram: the layered Windows NT architecture)
15.7.2 Layout of NTFS volume:
15.8.1 Edit Plan Setting in Windows:
15.8.4 System Power Status:
The system power status specifies whether the source of power for a computer is a system battery or AC power. For computers that use batteries, the system power status also indicates how much battery life remains and whether the battery is charging.
For now, we are going to discuss only six states of system power:
• Working State (S0)
• Sleep State (Modern Standby)
• Sleep State (S1 – S3)
• Hibernate State (S4)
• Soft Off State (S5)
• Mechanical Off State (G3)
Account Security: User accounts are the core unit of network security. In Windows Server 2003 and Windows 2000, domain accounts are kept in the Active Directory database, whereas local accounts are kept in the Security Accounts Manager (SAM) database. The passwords for the accounts are stored and protected by the System Key. Though the accounts are protected by default, we can secure them even further: go to Administrative Tools in Control Panel (only when you are logged in as an admin) and click on "Local Security Settings".
Account lockout policies: Account lockout duration: locks out the account for a specific period (1-99,999 minutes). This feature exists only in Windows Server 2003 and Windows 2000, not in Windows XP.
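On the server systems named above, the lockout policy can also be inspected and set from an elevated command prompt with the built-in net accounts command. The particular numbers below are examples, not recommended values:

```bat
REM Show the current password and lockout policy
net accounts

REM Lock accounts for 30 minutes after 5 bad logon attempts,
REM resetting the bad-attempt counter after 30 minutes
net accounts /lockoutthreshold:5 /lockoutduration:30 /lockoutwindow:30
```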
15.10 SUMMARY
15.12 BIBLIOGRAPHY
https://www.tutorialspoint.com/
https://www.geeksforgeeks.org/
https://www.javatpoint.com/java-tutorial
https://guru99.com
https://docs.microsoft.com/
https://www.installsetupconfig.com/
*****