Lecture 04: Operating System Architecture
System Services
[Diagram: system services provided on top of the operating system]
System Calls
• Interface between a process and the operating system kernel
– Provides access to operating system services
– An explicit request to the kernel, made via a software interrupt
– Each system call is identified by a system call number
– The execution of a system call takes place in kernel mode
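As a rough user-space illustration (assuming Linux on x86-64 and glibc's syscall() wrapper; the call number 1 for write is architecture-specific), invoking a system call by its number looks like this:

/* Minimal sketch: invoking the write system call by its number
   (assumption: Linux with glibc; SYS_write is 1 on x86-64). */
#include <sys/syscall.h>   /* SYS_write */
#include <unistd.h>        /* syscall() */

int main(void)
{
    const char msg[] = "invoked by system call number\n";

    /* syscall() traps into the kernel; execution continues in kernel
       mode until the service routine returns a result or error code. */
    long written = syscall(SYS_write, 1 /* stdout */, msg, sizeof msg - 1);

    return written < 0 ? 1 : 0;
}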
System Calls
• System calls are the only entry point into the kernel
• Categories
– Process management
– Memory management
– File management
– Device management
– Communication
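As an illustrative mapping (assuming a Linux / POSIX system; the calls shown are just one possible example per category):

/* Representative POSIX/Linux system calls, one or two per category
   (illustrative selection only; names differ between operating systems). */
#include <sys/types.h>
#include <sys/wait.h>      /* waitpid  - process management */
#include <sys/mman.h>      /* mmap     - memory management  */
#include <sys/ioctl.h>     /* ioctl    - device management  */
#include <fcntl.h>         /* open     - file management    */
#include <unistd.h>        /* fork, write, close, pipe      */

int main(void)
{
    pid_t child = fork();                                 /* process management */
    if (child == 0)
        _exit(0);
    waitpid(child, NULL, 0);                              /* process management */

    void *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); /* memory management  */
    munmap(mem, 4096);

    int fd = open("/tmp/demo.txt", O_CREAT | O_WRONLY, 0644); /* file management */
    write(fd, "hello\n", 6);
    close(fd);

    struct winsize ws;
    ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws);                /* device management  */

    int fds[2];
    pipe(fds);                                            /* communication      */
    close(fds[0]);
    close(fds[1]);

    return 0;
}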
APIs and System Calls
• Operating systems usually come with a library that implements an API of functions wrapping these system calls:
– Typically written in a high-level language (C or C++)
– Unix / Linux: libc (the standard C library)
– Usually, each system call has a corresponding wrapper routine, which an application programmer can use in their programs
• POSIX is a standard API implemented by many kernel architectures:
– Many Unix kernels, Linux, Mac OS X, Windows NT
• Win32 is another important API
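A minimal sketch of the wrapper idea, assuming Linux with glibc: the same operation can be requested through the POSIX / libc wrapper or through the raw system call interface, and the wrapper essentially performs the raw call on the programmer's behalf.

/* Sketch: the libc wrapper write() versus the underlying system call
   (assumption: Linux with glibc; both reach the same kernel routine). */
#include <unistd.h>        /* write(), syscall() */
#include <sys/syscall.h>   /* SYS_write */

int main(void)
{
    /* POSIX API: portable wrapper routine provided by the C library. */
    write(STDOUT_FILENO, "via the libc wrapper\n", 21);

    /* Raw interface: roughly what the wrapper above does for us. */
    syscall(SYS_write, STDOUT_FILENO, "via the raw system call\n", 24);

    return 0;
}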
Invoking a System Call
• Typically, a number is associated with each system call
– The process invoking the system call must pass the system call number to the kernel to identify the corresponding system call service routine
• The operating system maintains a table of pointers to system call service routines; the system call number is the index into this table
• The operating system handles the invocation of the service routine and any return status / values
System Call Handling
[Diagram: a user program calls sys_write(); the system call handler uses the system call number i as an index into the kernel's system call dispatch table, and an assembler instruction switches the CPU back into user mode on return]
• Linux implements a system call handler to manage the invocation of system call service routines
• The system call dispatch table holds all the service routine addresses
– The system call number is the index into this table
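The dispatch idea can be modelled in ordinary C as an array of function pointers indexed by the system call number. The sketch below is a simplified user-space model, not the actual Linux sys_call_table; the stub routines and the number assignments are invented for illustration.

/* Simplified user-space model of a system call dispatch table: an array of
   pointers to service routines, indexed by the system call number.
   (Illustrative only - the real table lives in the kernel and is reached
   via a trap instruction, not an ordinary function call.) */
#include <stdio.h>
#include <stdint.h>

typedef long (*syscall_handler_t)(long a1, long a2, long a3);

static long sys_write_stub(long fd, long buf, long len)
{
    (void)fd;   /* always writes to stdout in this model */
    return (long)fwrite((const void *)(intptr_t)buf, 1, (size_t)len, stdout);
}

static long sys_getpid_stub(long a1, long a2, long a3)
{
    (void)a1; (void)a2; (void)a3;
    return 4242;   /* made-up PID, just for the model */
}

/* Slot i holds the service routine for system call number i. */
static const syscall_handler_t dispatch_table[] = {
    sys_write_stub,    /* call number 0 */
    sys_getpid_stub,   /* call number 1 */
};

static long do_syscall(long number, long a1, long a2, long a3)
{
    long nr_calls = sizeof dispatch_table / sizeof dispatch_table[0];
    if (number < 0 || number >= nr_calls)
        return -1;                      /* -ENOSYS in a real kernel */
    return dispatch_table[number](a1, a2, a3);
}

int main(void)
{
    const char msg[] = "dispatched via the table\n";
    do_syscall(0, 1, (long)(intptr_t)msg, (long)(sizeof msg - 1));
    printf("pid-like value: %ld\n", do_syscall(1, 0, 0, 0));
    return 0;
}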
Passing Parameters
• Three general methods:
– Pass via registers (see the sketch after this list)
– Use a memory block:
• Store the parameters in a table or memory block in memory
• Pass the address of this memory block via a register to the service routine
• This approach is taken by Linux and Solaris
– Use a stack:
• The user program pushes parameters onto the stack
• The system service routine pops parameters from the stack
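A hedged sketch of the register method, assuming x86-64 Linux and GCC/Clang inline assembly; on that ABI the system call number is passed in rax and the first three parameters in rdi, rsi and rdx.

/* Sketch of parameter passing via registers (assumption: x86-64 Linux,
   GCC/Clang). The syscall instruction expects the call number in rax and
   the parameters in rdi, rsi, rdx (then r10, r8, r9 for longer calls). */
#include <stddef.h>

static long raw_write(int fd, const char *buf, size_t len)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)                 /* result comes back in rax  */
                      : "a"(1L),                  /* rax: __NR_write on x86-64 */
                        "D"((long)fd),            /* rdi: first parameter      */
                        "S"(buf),                 /* rsi: second parameter     */
                        "d"(len)                  /* rdx: third parameter      */
                      : "rcx", "r11", "memory");  /* clobbered by syscall      */
    return ret;
}

int main(void)
{
    const char msg[] = "parameters passed in registers\n";
    return raw_write(1, msg, sizeof msg - 1) < 0;
}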
Architectural Approaches
• Developments leading to modern operating systems
– Microkernel architecture
– Multithreading
– Symmetric multiprocessing
– Virtualisation
– Distributed operating systems
Kernel Architectures
• The kernel is the core element of the operating system
• Various design and implementation approaches
– Monolithic kernels
– Layered approach
– Microkernel
– Kernel modules
Monolithic Kernel
• Most operating systems, until recently, featured a large monolithic kernel (most Unix systems, Linux)
• Provides:
– Scheduling
– File system management
– Networking
– Device drivers
– Memory management
– Etc.
• Implemented as a single process
– All functional components share same address space
• Benefit
– Performance
• Problem
– Vulnerability to failure in components
Simple Monolithic Structure
• Early operating systems were monolithic
• No well-defined structure
• No layering, not divided into modules
• Started as small and simple systems
• Example: MS-DOS
– Developed to provide the most functionality in the least space
– Levels not well separated, programs can directly access I/O devices
Monolithic, Layered Approach
• Simple, unorganised structures became infeasible
• Introduction of a layered approach
• The operating system is divided into a number of layers (levels), each built on top of lower layers
– The bottom layer (layer 0) is the hardware
– The highest layer (layer N) is the user interface
• Each layer uses only functions and services provided by a lower layer
• All or most of the layers operate in kernel mode
• Examples
– MULTICS, VAX/VMS
Simple Layered Approach
• Approach used by the original Unix kernel
– Minimal layering, thick monolithic layers
– No encapsulation, total visibility of all layer functions and services across the system
– This kernel is effectively a collection of procedures that can call any other procedure in the system
– Enormous amount of functionality in the kernel
• Modern systems are more strictly layered
Unix Kernel
Layered Approach
• If layers are strictly separated, then they can be debugged and replaced independently
• Example
– The TCP/IP networking stack is a strictly layered architecture
• Difficulty
– How to define layers appropriately?
– Layering is only possible if there is a strict calling hierarchy among system calls and no circular dependencies
Layered Approach
• Circular dependencies
– Example: disk device driver
• The device driver may have to wait for I/O completion, and so invokes the CPU scheduling layer
• The CPU scheduling layer may need to call the device driver to swap processes in and out
• The more layers, the more indirections from function to function and the bigger the overhead in function calls
• Backlash against strict layering: return to fewer layers with more functionality
Microkernel
• A microkernel is a reduced operating system core that contains only essential OS functions
• Idea: minimise the kernel by executing as much functionality as possible in user mode
– Run these functions as conventional user processes
• Many services are now external processes
– Device drivers
– File systems
– Virtual memory manager
– Windowing systems
– Security services etc.
• Popularised with the Mach operating system
Microkernel System Structure
• Operating system components external to the microkernel are implemented as server processes
– These processes interact via message passing
• The microkernel facilitates the message exchange
– Validates messages
– Passes messages between components
– Checks whether message passing is permitted
– Grants access to hardware
• The microkernel effectively implements a client-server infrastructure on a single computer
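A rough sketch of this client-server interaction; the message format and the send/handle functions below are hypothetical, and everything runs in one process purely to illustrate that a service request becomes a message routed by the microkernel rather than a direct function call.

/* Hypothetical sketch of microkernel-style message passing: a client
   builds a request message, the microkernel validates and routes it to a
   server, and the reply travels back the same way. In a real microkernel
   the client and the server are separate user-mode processes; here they
   are plain functions so the sketch stays self-contained. */
#include <stdio.h>
#include <string.h>

enum server_id { FILE_SERVER = 1, MEMORY_SERVER = 2 };

struct message {
    int  sender;
    int  receiver;          /* which server should handle the request   */
    int  operation;         /* hypothetical operation code, e.g. "open" */
    char payload[64];
};

/* A user-mode file-system server: handles requests delivered to it. */
static struct message file_server_handle(const struct message *req)
{
    struct message reply = { .sender = FILE_SERVER, .receiver = req->sender };
    snprintf(reply.payload, sizeof reply.payload,
             "opened \"%s\" (simulated)", req->payload);
    return reply;
}

/* The microkernel's role: check that the exchange is permitted,
   then pass the message on to the addressed server. */
static struct message microkernel_send(const struct message *req)
{
    if (req->receiver == FILE_SERVER)
        return file_server_handle(req);

    struct message err = { .sender = 0, .receiver = req->sender };
    strcpy(err.payload, "no such server");
    return err;
}

int main(void)
{
    struct message req = { .sender = 100, .receiver = FILE_SERVER, .operation = 1 };
    strcpy(req.payload, "/tmp/example.txt");

    struct message reply = microkernel_send(&req);
    printf("client got reply: %s\n", reply.payload);
    return 0;
}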
Microkernel
• Benefits
– Uniform interfaces
• Processes pass messages, with no distinction between user-mode and kernel-mode services; all services are provided via message passing as in a client-server infrastructure
– Extensibility
• Easier to extend, new services are introduced as new applications
– Portability
• Only the microkernel has to be adapted to new hardware
– Reliability and security
• Much less code runs in kernel mode, and program failures occurring in user-mode execution do not affect the rest of the system
Microkernel
• Problems
– Performance overhead of communication between system services
• Each interaction involves the kernel and a user-mode / kernel-mode switch
• System services running in user mode are processes, and the operating system has to switch between them
– Solution: reintegration of services running in user mode back into the kernel
• Improves performance: fewer mode switches, and services integrated in the kernel share one address space (one process)
• This was done with the Mach kernel
– Solution: make the kernel even smaller: experimental kernel architectures (nanokernels, picokernels)
Microkernel Design
• Minimal functionality that has to be included in a microkernel
– Low-level memory management
• Mapping of memory pages to physical memory locations, and address space protection
• All other mechanisms of memory management (page replacement algorithms, virtual memory management) are provided by services running in user mode
– Interprocess communication (IPC)
– I/O and interrupt management
Modular Kernel Design
• Many operating systems implement kernel modules
– E.g.: Linux (a minimal module sketch follows after this list)
• Each core component is separate
• Communication via defined interfaces
• Loadable on demand
• Modules are in some ways a hybrid between the layered and microkernel approaches
– Clean software engineering approach
– But: modules are inside kernel space, so they don't require the overhead of message passing
– A compromise with performance benefits
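Since Linux is the cited example, a minimal loadable kernel module looks roughly like this (assuming a Linux system with kernel headers; the module itself is a made-up hello-world):

/* Minimal sketch of a Linux loadable kernel module. Once loaded it runs
   inside the kernel address space, so no message passing is involved. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Hello-world module sketch for the lecture");

static int __init hello_init(void)
{
    pr_info("hello module: loaded on demand\n");
    return 0;                       /* 0 = successful initialisation */
}

static void __exit hello_exit(void)
{
    pr_info("hello module: unloaded\n");
}

module_init(hello_init);            /* called at insmod / modprobe time */
module_exit(hello_exit);            /* called at rmmod time             */

Built with the usual obj-m Makefile, such a module would be loaded with insmod and removed with rmmod; in both cases the code runs in kernel mode inside the kernel's address space.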
Modular Approach
[Diagram: Mac OS X structure, showing application environments and services on top of a kernel that combines Mach and BSD]
• Mac OS X
– takes a hybrid approach: a Mach kernel (microkernel) combined with a BSD kernel
– BSD: provides support for the command line interface, networking, the file system, the POSIX API and threads
– Mach: memory management, remote procedure calls (RPC), interprocess communication (IPC), message passing
Virtual Machines
• First appeared commercially in IBM mainframes in 1972
• A virtual machine takes the layered approach further
– Creates the illusion of a virtual hardware environment (processor, memory, I/O), implemented in software
– Runs as an application on an underlying operating system kernel
– Virtualisation enables a single PC or server to simultaneously run multiple operating systems or multiple sessions of a single OS on a single platform
– A machine can host numerous applications from these different operating systems and execute them
Virtual Machines
VMware Virtualisation
Java Virtual Machine
Multi-Processing
Modern Processor Architectures: Multiple Processors
• Improve performance by introducing multiple processors to allow true parallelism of executing programs
• Symmetric Multiprocessing (SMP)
– Two or more processors
– Processors share the same memory and access to I/O devices
– Uniform instruction set: all processors can perform the same functions
– The operating system takes the SMP architecture into account and manages processor utilisation:
• process / task scheduling
• synchronisation of access to shared hardware resources etc.
SMP Organisation
SMP Advantages
• Performance
– More than one process can be running simultaneously, each on a different processor
• Availability
– Failure of a single processor does not halt the system
• Incremental Growth
– The performance of a system can be enhanced by adding additional processors
• Scaling
– Systems can be scaled to requirements
SMP Design Considerations for the Operating System
• A multiprocessor OS must provide all the features of a multitasking system running on a single processor, and in addition has to handle the difficulties of operating with multiple processors
• Key issues
– Re-entrant kernel routines
• The same kernel code is executed simultaneously by different processors
– Scheduling
• Processes are scheduled on different processors
– Synchronisation
• With true parallelism of process execution and access to shared resources such as I/O and memory, effective synchronisation is needed (see the sketch after this list)
– Memory management
• Processors share the same physical memory; this shared resource has to be managed carefully (page replacement algorithms)
– Reliability and fault tolerance
• If one processor fails, its tasks should be redistributed to the other processors
• “Graceful degradation” of the system
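As a user-space analogy of the synchronisation issue (assuming POSIX threads; this is not kernel code), two threads that may run truly in parallel on different processors must serialise their access to shared data, for example with a mutex:

/* User-space analogy of SMP synchronisation (assumption: POSIX threads).
   Two threads, potentially running on different processors, update a
   shared counter; without the mutex the updates could interleave and
   some increments would be lost. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section */
        counter++;                     /* shared-resource update  */
        pthread_mutex_unlock(&lock);   /* leave critical section  */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* With the mutex this always prints 2000000. */
    printf("counter = %ld\n", counter);
    return 0;
}

Compiled with -pthread, this always prints 2000000; removing the mutex would let the increments interleave and lose updates, which is exactly the hazard a multiprocessor kernel must guard against in its own shared data structures.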
Multicore Systems
• A processor has multiple processing cores
– Parallelism on the processor chip itself
– Each core has all the components of a single processor
• Performance advantages
– Multiple processors on a single chip bring huge performance advantages
– Introduction of different levels of cache memory
Distribution and Object Orientation
• Distributed Operating Systems
– Provide the illusion of
• A single main memory space
• Unified access facilities
– The state of the art for distributed operating systems lags behind that of uniprocessor and SMP operating systems
• Object-oriented Design
– Used for adding modular extensions to a small kernel
– Enables programmers to customise an operating system without disrupting system integrity
– Eases the development of distributed tools and full-blown distributed operating systems