Chapter 4: Threads, SMP, and Microkernels
Dave Bremer, Otago Polytechnic, N.Z.
Operating Systems: Internals and Design Principles, 6/E, William Stallings
©2008, Prentice Hall
Roadmap
- Threads: resource ownership and execution
- Symmetric multiprocessing (SMP)
- Microkernel
- Case studies of threads and SMP: Windows, Solaris, Linux
Processes and Threads
Processes have two characteristics:
- Resource ownership: a process includes a virtual address space to hold the process image
- Scheduling/execution: a process follows an execution path that may be interleaved with other processes
These two characteristics are treated independently by the operating system.

Processes and Threads
- The unit of dispatching is referred to as a thread or lightweight process
- The unit of resource ownership is referred to as a process or task

Multithreading
The ability of an OS to support multiple, concurrent paths of execution within a single process.
Single-Threaded Approaches
- MS-DOS supports a single user process and a single thread
- Some UNIX systems support multiple user processes but only one thread per process
Multithreading
- The Java run-time environment is a single process with multiple threads
- Multiple processes and threads are found in Windows, Solaris, and many modern versions of UNIX

Processes
A process has:
- A virtual address space which holds the process image
- Protected access to processors, other processes, files, and I/O resources
One or More Threads per Process
Each thread has:
- An execution state (Running, Ready, etc.)
- A saved thread context when not running
- An execution stack
- Some per-thread static storage for local variables
- Access to the memory and resources of its process (all threads of a process share these)
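The sharing described above can be made concrete with a short C sketch using POSIX threads (an assumption; the slides name no API). Two threads of one process update the same global counter, which lives in the shared process image, while each keeps its own stack-local variable; the variable names and iteration count are illustrative.

/* Sketch: threads of one process share its memory but keep private stacks.
   Assumes a POSIX system; compile with: gcc demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                 /* in the process image: visible to all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg)
{
    int local = 0;                      /* on this thread's own stack */
    for (int i = 0; i < 100000; i++) {
        local++;
        pthread_mutex_lock(&lock);      /* shared data needs synchronization */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    printf("thread %ld: local=%d\n", (long)(size_t)arg, local);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1);
    pthread_create(&t2, NULL, worker, (void *)2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter=%d\n", shared_counter);  /* 200000: both threads saw the same variable */
    return 0;
}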
One View
One way to view a thread is as an independent program counter operating within a process.
Threads vs. processes
Benefits of Threads
- Takes less time to create a new thread than a process
- Less time to terminate a thread than a process
- Switching between two threads takes less time than switching between processes
- Threads can communicate with each other without invoking the kernel
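One rough way to see the first two benefits is to time creation and teardown. The sketch below, assuming a POSIX system (fork plus pthreads; the iteration count and timing method are arbitrary choices, not from the slides), creates and reaps N short-lived processes and then N short-lived threads, printing the elapsed time for each.

/* Sketch: compare the cost of creating processes vs. threads (POSIX).
   Compile with: gcc bench.c -pthread */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define N 1000

static void *noop(void *arg) { return arg; }

static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        pid_t pid = fork();
        if (pid == 0) _exit(0);          /* child exits immediately */
        waitpid(pid, NULL, 0);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("%d processes: %.3f s\n", N, elapsed(t0, t1));

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        pthread_t th;
        pthread_create(&th, NULL, noop, NULL);
        pthread_join(th, NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("%d threads:   %.3f s\n", N, elapsed(t0, t1));
    return 0;
}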
Thread Use in a Single-User System
- Foreground and background work
- Asynchronous processing
- Speed of execution
- Modular program structure

Threads
Several actions affect all of the threads in a process, and the OS must manage these at the process level. Examples:
- Suspending a process involves suspending all threads of the process
- Terminating a process terminates all threads within the process

Activities Similar to Processes
Threads have execution states and may synchronize with one another, similar to processes. We look at these two aspects of thread functionality in turn:
- States
- Synchronisation

Thread Execution States
Basic operations associated with a change in thread state:
- Spawn (another thread)
- Block (issue: does blocking one thread block some, or all, of the other threads?)
- Unblock
- Finish (thread): deallocate register context and stacks

Example: Remote Procedure Call
Consider a program that performs two remote procedure calls (RPCs) to two different hosts to obtain a combined result.

RPC Using a Single Thread
RPC Using One Thread per Server
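The one-thread-per-server structure might look like the following C sketch with POSIX threads. Here rpc_call() is only a placeholder for a real RPC stub; the point is that both calls are in flight at once and the program waits concurrently for the two replies.

/* Sketch: issue two RPCs concurrently, one thread per server (POSIX threads).
   rpc_call() is a stand-in for a real RPC stub. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct rpc_req {
    const char *host;
    int         result;
};

/* Placeholder: pretend to contact the host and return a value. */
static int rpc_call(const char *host)
{
    printf("contacting %s...\n", host);
    sleep(1);                 /* simulate network latency */
    return 42;
}

static void *do_rpc(void *arg)
{
    struct rpc_req *req = arg;
    req->result = rpc_call(req->host);
    return NULL;
}

int main(void)
{
    struct rpc_req a = { "hostA", 0 }, b = { "hostB", 0 };
    pthread_t ta, tb;

    pthread_create(&ta, NULL, do_rpc, &a);   /* both RPCs are now in flight */
    pthread_create(&tb, NULL, do_rpc, &b);
    pthread_join(ta, NULL);                  /* wait concurrently for both replies */
    pthread_join(tb, NULL);

    printf("combined result: %d\n", a.result + b.result);
    return 0;
}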
Multithreading on a Uniprocessor
Adobe PageMaker
Categories of Thread Implementation
- User-level threads (ULTs)
- Kernel-level threads (KLTs), also called kernel-supported threads or lightweight processes

User-Level Threads
- All thread management is done by the application
- The kernel is not aware of the existence of threads
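A minimal illustration of thread management done entirely in user space, assuming the POSIX ucontext API (getcontext/makecontext/swapcontext, marked obsolescent but still widely available): the kernel sees a single thread, and all switching is ordinary user-mode code. The names thread_a and thread_b are illustrative; a real ULT library adds a scheduler, many contexts, and careful stack handling.

/* Sketch: cooperative user-level "threads" with ucontext.
   The kernel is unaware that two threads exist; switching is pure user code. */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t ctx_main, ctx_a, ctx_b;

static void thread_a(void)
{
    printf("A: running, yielding to B\n");
    swapcontext(&ctx_a, &ctx_b);          /* user-space "context switch" */
    printf("A: back from B, finishing\n");
}

static void thread_b(void)
{
    printf("B: running to completion\n");
    /* returning falls through to uc_link (ctx_a) */
}

static void make_thread(ucontext_t *ctx, void (*fn)(void), ucontext_t *link)
{
    getcontext(ctx);
    ctx->uc_stack.ss_sp   = malloc(STACK_SIZE);   /* each ULT gets its own stack */
    ctx->uc_stack.ss_size = STACK_SIZE;
    ctx->uc_link          = link;                 /* where control goes when fn returns */
    makecontext(ctx, fn, 0);
}

int main(void)
{
    make_thread(&ctx_a, thread_a, &ctx_main);
    make_thread(&ctx_b, thread_b, &ctx_a);
    swapcontext(&ctx_main, &ctx_a);       /* start thread A */
    printf("main: done\n");
    return 0;
}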
Relationships Between ULT and Process States

Kernel-Level Threads
- The kernel maintains context information for the process and the threads
- No thread management is done by the application
- Scheduling is done on a thread basis
- Windows is an example of this approach

Advantages of KLTs
- The kernel can simultaneously schedule multiple threads from the same process on multiple processors
- If one thread in a process is blocked, the kernel can schedule another thread of the same process
- Kernel routines themselves can be multithreaded

Disadvantage of KLTs
The transfer of control from one thread to another within the same process requires a mode switch to the kernel.

Combined Approaches
- Thread creation is done in user space
- The bulk of scheduling and synchronization of threads is done by the application
- Solaris is an example

Relationship Between Threads and Processes
Roadmap
- Threads: resource ownership and execution
- Symmetric multiprocessing (SMP)
- Microkernel
- Case studies of threads and SMP: Windows, Solaris, Linux

Traditional View
- Traditionally, the computer has been viewed as a sequential machine
- A processor executes instructions one at a time, in sequence
- Each instruction is a sequence of operations
- Two popular approaches to providing parallelism: symmetric multiprocessors (SMPs) and clusters (Chapter 16)

Categories of Computer Systems
- Single Instruction, Single Data (SISD) stream: a single processor executes a single instruction stream to operate on data stored in a single memory
- Single Instruction, Multiple Data (SIMD) stream: each instruction is executed on a different set of data by different processors

Categories of Computer Systems
- Multiple Instruction, Single Data (MISD) stream (never implemented): a sequence of data is transmitted to a set of processors, each of which executes a different instruction sequence
- Multiple Instruction, Multiple Data (MIMD) stream: a set of processors simultaneously executes different instruction sequences on different data sets

Parallel Processor Architectures

Symmetric Multiprocessing
- The kernel can execute on any processor, allowing portions of the kernel to execute in parallel
- Typically, each processor does self-scheduling from the pool of available processes or threads

Typical SMP Organization

Multiprocessor OS Design Considerations
The key design issues include:
- Simultaneous concurrent processes or threads
- Scheduling
- Synchronization
- Memory management
- Reliability and fault tolerance
Roadmap
- Threads: resource ownership and execution
- Symmetric multiprocessing (SMP)
- Microkernel
- Case studies of threads and SMP: Windows, Solaris, Linux

Microkernel
- A microkernel is a small OS core that provides the foundation for modular extensions
- The big question is how small a kernel must be to qualify as a microkernel; must drivers be in user space?
- In theory, this approach provides a high degree of flexibility and modularity

Kernel Architecture

Microkernel Design: Memory Management
- Low-level memory management: mapping each virtual page to a physical page frame
- Most memory management tasks occur in user space

Microkernel Design: Interprocess Communication
- Communication between processes or threads in a microkernel OS is via messages
- A message includes a header that identifies the sending and receiving processes, and a body that contains direct data, a pointer to a block of data, or some control information about the process
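The message format just described might be sketched in C as below. This is an illustrative layout only, not the ABI of any particular microkernel (Mach, L4, and others each define their own).

/* Sketch: an illustrative microkernel IPC message layout (not any real kernel's ABI). */
#include <stdint.h>

typedef uint32_t port_id_t;            /* ports identify the sending/receiving processes */

enum msg_kind {
    MSG_INLINE_DATA,                   /* body carries the data directly */
    MSG_DATA_POINTER,                  /* body carries a pointer to a block of data */
    MSG_CONTROL                        /* body carries control information about the process */
};

struct msg_header {
    port_id_t sender;                  /* identifies the sending process */
    port_id_t receiver;                /* identifies the receiving process */
    enum msg_kind kind;
    uint32_t  length;                  /* size of the body in bytes */
};

struct message {
    struct msg_header header;
    union {
        uint8_t  inline_data[64];      /* direct data */
        void    *data_block;           /* pointer to a block of data */
        uint32_t control_code;         /* control information */
    } body;
};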
Microkernel Design: I/O and Interrupt Management
- Within a microkernel it is possible to handle hardware interrupts as messages and to include I/O ports in address spaces
- A particular user-level process is assigned to the interrupt, and the kernel maintains the mapping

Benefits of a Microkernel Organization
- Uniform interfaces on requests made by a process
- Extensibility
- Flexibility
- Portability
- Reliability
- Distributed system support
- Object-oriented operating systems

Roadmap
- Threads: resource ownership and execution
- Symmetric multiprocessing (SMP)
- Microkernel
- Case studies of threads and SMP: Windows, Solaris, Linux

Different Approaches to Processes
Differences among operating systems' support for processes include:
- How processes are named
- Whether threads are provided
- How processes are represented
- How process resources are protected
- What mechanisms are used for inter-process communication and synchronization
- How processes are related to each other

Windows Processes
- Processes and services provided by the Windows kernel are relatively simple and general purpose
- Implemented as objects
- An executable process may contain one or more threads
- Both process and thread objects have built-in synchronization capabilities
Relationship between Process and Resources
Windows Process Object
Windows Thread Object
Thread States
Windows SMP Support
- Threads can run on any processor, but an application can restrict affinity
- Soft affinity: the dispatcher tries to assign a ready thread to the same processor it last ran on, which helps reuse data still in that processor's memory caches from the previous execution of the thread
- Hard affinity: an application restricts its threads to certain processors
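Hard affinity is requested explicitly by the application; the Win32 call SetThreadAffinityMask does this. The sketch below pins the calling thread to processor 0 (soft affinity is a dispatcher policy and needs no application code).

/* Sketch: hard affinity on Windows - restrict the current thread to processor 0. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Bit 0 set => this thread may run only on processor 0. */
    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), 0x1);
    if (previous == 0) {
        printf("SetThreadAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }
    printf("thread pinned to CPU 0 (previous mask: 0x%llx)\n",
           (unsigned long long)previous);
    return 0;
}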
Solaris
- Solaris implements multilevel thread support designed to provide flexibility in exploiting processor resources
- A process includes the user's address space, stack, and process control block

Solaris Process
Solaris makes use of four separate thread-related concepts:
- Process: includes the user's address space, stack, and process control block
- User-level threads: a user-created unit of execution within a process
- Lightweight processes: a mapping between ULTs and kernel threads
- Kernel threads

Relationship Between Processes and Threads

Traditional UNIX vs. Solaris
Solaris replaces the processor state block with a list of LWPs.

LWP Data Structure
- An LWP identifier
- The priority of this LWP
- A signal mask
- Saved values of user-level registers
- The kernel stack for this LWP
- Resource usage and profiling data
- A pointer to the corresponding kernel thread
- A pointer to the process structure
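Rendered as a C structure, the list above might look like the following sketch. Field names and types are illustrative, not actual Solaris source definitions.

/* Sketch: the LWP data structure as described on the slide.
   Names and types are illustrative, not Solaris source definitions. */
#include <stdint.h>

struct kernel_thread;                    /* opaque here */
struct process;                          /* opaque here */

struct lwp {
    uint32_t              lwp_id;        /* LWP identifier */
    int                   priority;      /* priority of this LWP */
    uint64_t              signal_mask;   /* which signals will be accepted */
    struct saved_regs {                  /* user-level registers saved when not running */
        uintptr_t pc, sp;
        uintptr_t gpr[32];
    } regs;
    void                 *kernel_stack;  /* kernel stack for this LWP */
    struct {                             /* resource usage and profiling data */
        uint64_t user_time_ns;
        uint64_t system_time_ns;
    } usage;
    struct kernel_thread *kthread;       /* pointer to the corresponding kernel thread */
    struct process       *proc;          /* pointer to the process structure */
};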
Solaris Thread States
Linux Tasks
- A process, or task, in Linux is represented by a task_struct data structure
- This contains information in a number of categories, including: state, scheduling information, identifiers, interprocess communication, and others
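A drastically simplified sketch of those categories is shown below. The real task_struct in the kernel sources has hundreds of fields; the names here only suggest the categories listed on the slide.

/* Sketch: a drastically simplified view of the categories held in Linux's
   task_struct. This is NOT the real kernel definition. */
#include <sys/types.h>

struct simplified_task {
    long   state;              /* State: running, interruptible, uninterruptible, stopped, zombie */
    int    prio;               /* Scheduling information: priority, policy, time slice, ... */
    pid_t  pid;                /* Identifiers: process ID, user and group IDs */
    void  *ipc;                /* Interprocess communication: signals, pipes, shared memory, ... */
    struct simplified_task *parent;   /* And others: links to parent, children, siblings, ... */
};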
Linux Process/Thread Model

Editor's Notes

  1. These slides are intended to help a teacher develop a presentation. This PowerPoint covers the entire chapter and includes too many slides for a single delivery. Professors are encouraged to adapt this presentation in ways which are best suited for their students and environment.
  2. Multithreading refers to the ability of an OS to support multiple, concurrent paths of execution within a single process.
  3. Animated slide. On load: enlarges top-left to discuss DOS. Click 1: enlarges bottom-left for UNIX. Single-threaded approach: the traditional approach of a single thread of execution per process, in which the concept of a thread is not recognized. Examples are MS-DOS (single process, single thread) and UNIX (multiple single-threaded processes).
  4. Animated slide. On load: emphasis on top-right and the JRE (single process, multiple threads). Click 1: emphasis on multiple processes with multiple threads, which is the main topic of this chapter. The JRE is an example of a system of one process with multiple threads. Of main interest in this chapter is the use of multiple processes, each of which supports multiple threads. Examples include Windows, Solaris, and many modern versions of UNIX.
  5. In a multithreaded environment, a process is defined as the unit of resource allocation and a unit of protection.
  6. Within a process, there may be one or more threads, each with the following: a thread execution state (Running, Ready, etc.) and a saved thread context when not running. One way to view a thread is as an independent program counter operating within a process.
  7. Distinction between threads and processes from the point of view of process management. In a single-threaded process model, the representation of a process includes its process control block, user address space, and user and kernel stacks to manage the call/return behaviour of the execution of the process. While the process is running, it controls the processor registers. The contents of these registers are saved when the process is not running. In a multithreaded environment, there is still a single process control block and user address space associated with the process, but separate stacks for each thread, as well as a separate control block for each thread containing register values, priority, and other thread-related state information. Thus, all of the threads of a process share the state and resources of that process. They reside in the same address space and have access to the same data. When one thread alters an item of data in memory, other threads see the results if and when they access that item. If one thread opens a file with read privileges, other threads in the same process can also read from that file.
  8. If there is an application or function that should be implemented as a set of related units of execution, it is far more efficient to do so as a collection of threads - rather than a collection of separate processes.
  9. Foreground and background work: e.g., in a spreadsheet, one thread looks after the display while another thread updates the results of formulae. Asynchronous processing: e.g., protection against power failure within a word processor; a thread writes the random access memory (RAM) buffer to disk once every minute. This thread schedules itself directly with the OS; there is no need for fancy code in the main program to provide for time checks or to coordinate input and output. Speed of execution: one thread can compute one batch of data while another thread reads the next batch from a device. On a multiprocessor system, multiple threads from the same process may be able to execute simultaneously. Even though one thread may be blocked for an I/O operation to read in a batch of data, another thread may be executing. Modular program structure: threads make it easier to design programs which involve a variety of activities or a variety of sources and destinations of input and output.
  10. Suspension involves swapping the address space of one process out of main memory to make room for the address space of another process. Because all threads in a process share the same address space, all threads are suspended at the same time.Similarly, termination of a process terminates all threads within that process.
  11. A significant issue is whether the blocking of a thread results in the blocking of the entire process. If one thread in a process is blocked, does this prevent the running of any other thread in the same process even if that other thread is in a ready state? Clearly, some of the flexibility and power of threads is lost if the one blocked thread blocks an entire process.
  12. The results are obtained in sequence, so that the program has to wait for a response from each server in turn.
  13. Rewriting the program to use a separate thread for each RPC results in a substantial speedup. Note that if this program operates on a uniprocessor, the requests must be generated sequentially and the results processed in sequence; however, the program waits concurrently for the two replies.
  14. On a uniprocessor, multiprogramming enables the interleaving of multiple threads within multiple processes. In the example, three threads in two processes are interleaved on the processor. Execution passes from one thread to another either when the currently running thread is blocked or its time slice is exhausted.
  15. An example of the use of threads is the Adobe PageMaker application running under a shared system. Three threads are always active: an event-handling thread, a screen-redraw thread, a service thread.
  16. The kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the kernel is done on a thread basis.
  17. In a combined approach, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. If properly designed, this approach should combine the advantages of the pure ULT and KLT approaches while minimizing the disadvantages.
  18. The concepts of resource allocation and dispatching unit have traditionally been embodied in the single concept of the process; that is, as a 1 : 1 relationship between threads and processes. There has been much interest in providing for multiple threads within a single process, which is a many-to-one relationship.However, as the table shows, the other two combinations have also been investigated, namely, a many-to-many relationship and a one-to-many relationship.
  19. Traditionally, the computer has been viewed as a sequential machine. Most computer programming languages require the programmer to specify algorithms as sequences of instructions. A processor executes programs by executing machine instructions in sequence and one at a time. Each instruction is executed in a sequence of operations (fetch instruction, fetch operands, perform operation, store results). This view of the computer has never been entirely true. As computer technology has evolved and as the cost of computer hardware has dropped, computer designers have sought more and more opportunities for parallelism, usually to improve performance and, in some cases, to improve reliability. The two most popular approaches to providing parallelism by replicating processors are symmetric multiprocessors (SMPs) and clusters.
  20. It is useful to see where SMP architectures fit into the overall category of parallel processors.
  21. With the MIMD organization, the processors are general purpose, because they must be able to process all of the instructions necessary to perform the appropriate data transformation. MIMDs can be subdivided by the means in which the processors communicate. If the processors each have a dedicated memory, then each processing element is a self-contained computer. Such a system is known as a cluster, or multicomputer. If the processors share a common memory, then each processor accesses programs and data stored in the shared memory, and processors communicate with each other via that memory; such a system is known as a shared-memory multiprocessor.
  22. In a symmetric multiprocessor (SMP), the kernel can execute on any processor, and typically each processor does self-scheduling from the pool of available processes or threads. The kernel can be constructed as multiple processes or multiple threads, allowing portions of the kernel to execute in parallel. This complicates the OS. It must ensure that two processors do not choose the same process and that processes are not somehow lost from the queue. Techniques must be employed to resolve and synchronize claims to resources.
  23. There are multiple processors, each of which contains its own control unit, arithmetic-logic unit, and registers.Each processor has access to a shared main memory and the I/O devices through some form of interconnection mechanism; a shared bus is a common facility. The processors can communicate with each other through memory (messages and status information left in shared address spaces). It may also be possible for processors to exchange signals directly. The memory is often organized so that multiple simultaneous accesses to separate blocks of memory are possible.
  24. Talk through each of the issues. Simultaneous concurrent processes or threads: kernel routines need to be re-entrant to allow several processors to execute the same kernel code simultaneously. With multiple processors executing the same or different parts of the kernel, kernel tables and management structures must be managed properly to avoid deadlock or invalid operations. Scheduling: scheduling may be performed by any processor, so conflicts must be avoided. If kernel-level multithreading is used, then the opportunity exists to schedule multiple threads from the same process simultaneously on multiple processors. Synchronization: with multiple active processes having potential access to shared address spaces or shared I/O resources, care must be taken to provide effective synchronization. Synchronization is a facility that enforces mutual exclusion and event ordering. A common synchronization mechanism used in multiprocessor operating systems is locks. Memory management: memory management on a multiprocessor must deal with all of the issues found on uniprocessor computers. The OS also needs to exploit the available hardware parallelism, such as multiported memories, to achieve the best performance. The paging mechanisms on different processors must be coordinated to enforce consistency when several processors share a page or segment and to decide on page replacement. Reliability and fault tolerance: the OS should provide graceful degradation in the face of processor failure. The scheduler and other portions of the OS must recognize the loss of a processor and restructure management tables accordingly.
  25. Also, whether to run nonkernel operations in kernel or user space, and whether to keep existing subsystem code (e.g., a version of UNIX) or start from scratch. The microkernel approach was popularized by its use in the Mach OS, which is now the core of the Macintosh Mac OS X operating system. A number of products now boast microkernel implementations, and this general design approach is likely to be seen in most of the personal computer, workstation, and server operating systems developed in the near future.
  26. Figure A: Operating systems developed in the mid to late 1950s were designed with little concern about structure. The problems caused by mutual dependence and interaction were grossly underestimated. In these monolithic operating systems, virtually any procedure can call any other procedure; the approach was unsustainable as operating systems grew to massive proportions. Modular programming techniques were needed to handle this scale of software development. Layered operating systems were developed in which functions are organized hierarchically and interaction only takes place between adjacent layers. Most or all of the layers execute in kernel mode. Problems: major changes in one layer can have numerous effects on code in adjacent layers, many of which are difficult to trace, and security is difficult to build in because of the many interactions between adjacent layers. Figure B: In a microkernel, only absolutely essential core OS functions should be in the kernel. Less essential services and applications are built on the microkernel and execute in user mode. A common characteristic is that many services that traditionally have been part of the OS are now external subsystems that interact with the kernel and with each other; these include device drivers, file systems, the virtual memory manager, the windowing system, and security services. The microkernel functions as a message exchange: it validates messages, passes them between components, and grants access to hardware. The microkernel also performs a protection function; it prevents message passing unless exchange is allowed.
  27. The microkernel has to control the hardware concept of address space to make it possible to implement protection at the process level. Provided the microkernel is responsible for mapping each virtual page to a physical frame, the majority of memory management can be implemented outside the kernel (protection of the address space of one process from another, the page replacement algorithm, other paging logic, etc.).
  28. The basic form of communication between processes or threads in a microkernel OS is messages. A message includes: A header that identifies the sending and receiving process and A body that contains direct data, a pointer to a block of data, or some control information about the process.Typically, we can think of IPC as being based on ports associated with processes.
  29. With a microkernel architecture, it is possible to handle hardware interrupts as messages and to include I/O ports in address spaces.
  30. Uniform interfaces: imposes a uniform interface on requests made by a process. Processes need not distinguish between kernel-level and user-level services because all such services are provided by means of message passing. Extensibility: facilitates the addition of new services as well as the provision of multiple services in the same functional area. When a new feature is added, only selected servers need to be modified or added. The impact of new or modified servers is restricted to a subset of the system, and modifications do not require building a new kernel. Flexibility: existing features can be subtracted to produce a smaller, more efficient implementation. If features are made optional, the base product will appeal to a wider variety of users. Portability: all or at least much of the processor-specific code is in the microkernel. Thus, changes needed to port the system to a new processor are fewer and tend to be arranged in logical groupings. Reliability: a small microkernel can be rigorously tested. Its use of a small number of application programming interfaces (APIs) improves the chance of producing quality code for the OS services outside the kernel. Distributed system support: when a message is sent from a client to a server process, the message must include an identifier of the requested service. If a distributed system (e.g., a cluster) is configured so that all processes and services have unique identifiers, then in effect there is a single system image at the microkernel level. A process can send a message without knowing on which computer the target service resides. Object-oriented operating systems: an object-oriented approach can lend discipline to the design of the microkernel and to the development of modular extensions to the OS.
  31. The native process structures and services provided by the Windows Kernel are relatively simple and general purpose, allowing each OS subsystem to emulate a particular process structure and functionality.
  32. This figure shows a single thread. In addition, the process has access to a file object and to a section object that defines a section of shared memory. Token: each process is assigned a security access token, called the primary token of the process. When a user first logs on, Windows creates an access token that includes the security ID for the user. Every process that is created by or runs on behalf of this user has a copy of this access token. The token is used by Windows to validate the user's ability to access secured objects or to perform restricted functions on the system and on secured objects. The access token controls whether the process can change its own attributes. In this case, the process does not have a handle opened to its access token. If the process attempts to open such a handle, the security system determines whether this is permitted and therefore whether the process may change its own attributes. Address space: a series of blocks that define the virtual address space currently assigned to this process. The process cannot directly modify these structures but must rely on the virtual memory manager, which provides a memory allocation service for the process. Object table: the process includes an object table, with handles to other objects known to this process. One handle exists for each thread contained in this object.
  33. Each Windows process is represented by an object whose general structure is shown in this figure. The object-oriented structure of Windows facilitates the development of a general-purpose process facility. Windows makes use of two types of process-related objects: processes and threads. A process corresponds to a user job or application that owns resources, such as memory, and opens files. A thread is a dispatchable unit of work that executes sequentially and is interruptible, so that the processor can turn to another thread.
  34. This figure depicts the object structure for a thread object. A Windows process must contain at least one thread to execute. That thread may then create other threads. In a multiprocessor system, multiple threads from the same process may execute in parallel.
  35. An existing Windows thread is in one of six states
  36. The threads of any process, including those of the executive, can run on any processor. In the absence of affinity restrictions, the microkernel assigns a ready thread to the next available processor. As a default, the microkernel uses the policy of soft affinity in assigning threads to processors: The dispatcher tries to assign a ready thread to the same processor it last ran on. This helps reuse data still in that processor’s memory caches from the previous execution of the thread. It is possible for an application to restrict its thread execution to certain processors (hard affinity).
  37. Solaris makes use of four separate thread-related concepts. Process: includes the user's address space, stack, and process control block. User-level threads: implemented through a threads library in the address space of a process (invisible to the OS); a user-level thread (ULT) is a user-created unit of execution within a process. Lightweight processes: can be viewed as a mapping between ULTs and kernel threads. Each LWP supports ULTs and maps to one kernel thread. LWPs are scheduled by the kernel independently and may execute in parallel on multiprocessors. Kernel threads: fundamental entities that can be scheduled and dispatched to run on one of the system processors.
  38. Note that there is always exactly one kernel thread for each LWP. A process may consist of a single ULT bound to a single LWP. In this case, there is a single thread of execution, corresponding to a traditional UNIX process. When concurrency is not required within a single process, an application uses this process structure. If an application requires concurrency, its process contains multiple threads, each bound to a single LWP, which in turn are each bound to a single kernel thread.
  39. Animated slide. Point out the traditional UNIX structure; CLICK to emphasise the change. This figure compares, in general terms, the process structure of a traditional UNIX system with that of Solaris. A typical UNIX implementation of a process includes the process ID; the user IDs; a signal dispatch table, which the kernel uses to decide what to do when sending a signal to a process; file descriptors, which describe the state of files in use by this process; a memory map, which defines the address space for this process; and a processor state structure, which includes the kernel stack for this process. Solaris retains this basic structure but replaces the processor state block with a list of structures containing one data block for each LWP.
  40. The LWP data structure includes the following elements: an LWP identifier; the priority of this LWP and hence the kernel thread that supports it; a signal mask that tells the kernel which signals will be accepted; saved values of user-level registers (when the LWP is not running); the kernel stack for this LWP, which includes system call arguments, results, and error codes for each call level; resource usage and profiling data; a pointer to the corresponding kernel thread; and a pointer to the process structure.
  41. A simplified view of thread execution states. These states reflect the execution status of both a kernel thread and the LWP bound to it. Some kernel threads are not associated with an LWP; the same execution diagram applies. The states are as follows. RUN: the thread is runnable; that is, the thread is ready to execute. ONPROC: the thread is executing on a processor. SLEEP: the thread is blocked. STOP: the thread is stopped. ZOMBIE: the thread has terminated. FREE: thread resources have been released and the thread is awaiting removal from the OS thread data structure.
  42. A process, or task, in Linux is represented by a task_struct data structure. The task_struct data structure contains information in a number of categories:
  43. Running: corresponds to two states; a Running process is either executing or it is ready to execute. Interruptible: a blocked state, in which the process is waiting for an event, such as the end of an I/O operation, the availability of a resource, or a signal from another process. Uninterruptible: another blocked state; the difference from the Interruptible state is that in this state a process is waiting directly on hardware conditions and therefore will not handle any signals. Stopped: the process has been halted and can only resume by positive action from another process; e.g., a process that is being debugged can be put into the Stopped state. Zombie: the process has been terminated but, for some reason, still must have its task structure in the process table.