The document discusses the design and functions of an operating system. It begins by describing the basic components of a computer system, then explains that the operating system manages these devices and provides a simpler interface for user programs. It discusses how operating systems perform two main functions: extending the machine by providing abstractions and managing resources like processors, memory, and I/O devices. It provides examples of how operating systems provide abstractions like files to simplify disk access for programmers. Finally, it briefly reviews the history and basic components of computer hardware to provide context for how operating systems interface with and manage the underlying hardware.
• one or more processors
• some main memory
• disks
• printers
• a keyboard
• a display
• network interfaces
• other input/output devices
All in all, a complex system.
Computers are equipped with a layer of software called the operating system, whose job is to manage all these devices and provide user programs with a simpler interface to the hardware.

Computer Platform
The operating system is (usually) the portion of the software that runs in kernel mode or supervisor mode. It is protected from user tampering by the hardware (ignoring for the moment some older or low-end microprocessors that have no hardware protection at all). Compilers and editors run in user mode. Operating systems differ from user programs in ways other than where they reside. In particular, they are huge, complex, and long lived. The source code of an operating system like Linux or Windows is on the order of five million lines of code. It is difficult to pin down precisely what an operating system is. Operating systems perform two basically unrelated functions:
• Extending the machine
• Managing resources

The Operating System as an Extended Machine
What the programmer wants is a simple, high-level abstraction to deal with. In the case of disks, a typical abstraction would be that the disk contains a collection of named files. Each file can be opened for reading or writing, then read or written, and finally closed. No programmer would want to deal with the disk at the hardware level. Instead, a piece of software called a disk driver deals with the hardware and provides an interface to read and write disk blocks without getting into the details. Operating systems contain many drivers for controlling I/O devices. But even this level is much too low for most applications. For this reason, all operating systems provide yet another layer of abstraction for using disks: files. Using this abstraction, programs can create, write, and read files without having to deal with the messy details of how the hardware actually works. This abstraction is the key to managing all this complexity.
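The open/read/write/close life cycle of the file abstraction can be sketched in a few lines. This is a hedged illustration, not part of the original notes; the file name is invented, and the point is only that the program never mentions disk blocks, heads, or sectors:

```python
import os
import tempfile

# The OS hides disk blocks behind the file abstraction: a named file
# that can be opened, written, read, and closed.
path = os.path.join(tempfile.gettempdir(), "demo.txt")  # hypothetical file name

f = open(path, "w")       # ask the OS to create/open the file for writing
f.write("hello, disk\n")  # the OS (via the disk driver) turns this into block writes
f.close()                 # flush buffers and release the file

f = open(path, "r")       # reopen for reading
data = f.read()           # the OS locates the blocks and returns the bytes
f.close()

print(data)               # -> hello, disk
os.remove(path)           # clean up the temporary file
```

Note that the same four-step pattern (open, read/write, close) applies regardless of what kind of disk hardware sits underneath, which is exactly the point of the abstraction.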
Good abstractions turn a nearly impossible task into two manageable ones. The first is defining and implementing the abstractions. The second is using these abstractions to solve the problem at hand. One abstraction that almost every computer user understands is the file. One of the major tasks of the operating system is to hide the hardware and present programs with nice, clean, elegant, consistent abstractions to work with. Operating systems turn the ugly into the beautiful.

The Operating System as a Resource Manager
Modern computers consist of processors, memories, timers, disks, mice, network interfaces, printers, and a wide variety of other devices. In the alternative view, the job of the operating system is to provide for an orderly and controlled allocation of the processors, memories, and I/O devices among the various programs competing for them. Modern operating systems allow multiple programs to run at the same time. Imagine what would happen if three programs running on some computer all tried to print their output simultaneously on the same printer. The operating system can bring order to the potential chaos by buffering all the output destined for the printer on the disk. When one program is finished, the operating system can then copy its output from the disk file where it has been stored to the printer, while at the same time another program can continue generating more output, oblivious to the fact that the output is not really going to the printer (yet). When a computer (or network) has multiple users, the need for managing and protecting the memory, I/O devices, and other resources is even greater, since the users might otherwise interfere with one another. In addition, users often need to share not only hardware, but information (files, databases, etc.) as well. In short, this view of the operating system holds that its primary task is to keep track of who is using which resource, to grant resource requests, to account for usage, and to mediate conflicting requests from different programs and users.
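To make the resource-manager view concrete, here is a minimal sketch of one CPU being time-shared among several programs in round-robin fashion. The program names and time quantum are invented for illustration; real schedulers are far more elaborate:

```python
from collections import deque

# Each "program" needs some number of CPU time units to finish.
programs = deque([("A", 3), ("B", 1), ("C", 2)])  # hypothetical workloads
QUANTUM = 1      # each program runs one time unit, then the next one goes
schedule = []    # which program held the CPU at each time step

while programs:
    name, remaining = programs.popleft()
    schedule.append(name)                   # this program uses the CPU now
    remaining -= QUANTUM
    if remaining > 0:
        programs.append((name, remaining))  # not done: back of the line

print(schedule)  # -> ['A', 'B', 'C', 'A', 'C', 'A']
```

Deciding who goes next and for how long is precisely the scheduling decision the operating system must make; the queue here plays the role of its ready list.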
The Operating System as a Resource Manager
Resource management includes multiplexing (sharing) resources in two ways: in time and in space. When a resource is time multiplexed, different programs or users take turns using it.

Time Multiplexing
For example, with only one CPU and multiple programs that want to run on it, the operating system first allocates the CPU to one program; then, after it has run long enough, another one gets to use the CPU, then another, and then eventually the first one again. Determining how the resource is time multiplexed (who goes next and for how long) is the task of the operating system.

Space Multiplexing
For example, main memory is normally divided up among several running programs, so each one can be resident at the same time (for example, in order to take turns using the CPU). Assuming there is enough memory to hold multiple programs, it is more efficient to hold several programs in memory at once than to give one of them all of it, especially if each one needs only a small fraction of the total.

History of Operating System Generations
1. (1945–55) Vacuum Tubes
2. (1955–65) Transistors and Batch Systems
3. (1965–1980) ICs and Multiprogramming
4. (1980–Present) Personal Computers

Computer Hardware Review
An operating system is intimately tied to the hardware of the computer it runs on. It extends the computer's instruction set and manages its resources. To work, it must know a great deal about the hardware, at least about how the hardware appears to the programmer. Some of the components of a simple PC are described below.

Bus
A microprocessor connects to devices such as memory and I/O via data and address buses. Collectively, these two buses can be referred to as the microprocessor bus. A bus is a collection of wires that serve a common purpose.

Processors
The "brain" of the computer is the CPU. It fetches instructions from memory and executes them.
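This fetch-and-execute behavior can be sketched as a tiny interpreter. The three-instruction machine below is invented purely for illustration; real instruction sets are vastly larger:

```python
# A toy machine: memory holds (opcode, operand) pairs; the accumulator
# plays the role of the register that temporarily holds data.
memory = [("LOAD", 5), ("ADD", 3), ("HALT", None)]  # hypothetical program

pc = 0           # program counter: address of the next instruction
acc = 0          # accumulator
running = True

while running:
    opcode, operand = memory[pc]   # fetch
    pc += 1                        # PC advances to the next instruction
    if opcode == "LOAD":           # decode and execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        running = False

print(acc)  # -> 8
```

Every iteration of the loop is one fetch-decode-execute cycle; the sections that follow expand on each register this toy machine uses.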
The basic cycle of every CPU is to fetch the first instruction from memory, decode it to determine its type and operands, execute it, and then fetch, decode, and execute subsequent instructions. In this way, programs are carried out.

Processors
• Instruction Set
• ALU (Arithmetic/Logic Unit)
• General Registers
• Program Counter
• Stack Pointer
• PSW (Program Status Word)
• Pipeline

Instructions
Programs are composed of many very simple individual operations, called instructions, that specify in exact detail how the microprocessor should carry out an algorithm. A simple program may have dozens of instructions, whereas a complex program can have tens of millions of instructions. Collectively, the programs that run on microprocessors are called software. Each unique instruction is represented as a binary value called an opcode. A microprocessor fetches and executes opcodes one at a time from program memory. A microprocessor is a synchronous logic element that advances through opcodes on each clock cycle. Some opcodes may be simple enough to execute in a single clock cycle, and others may take multiple cycles to complete. Clock speed is often used as an indicator of a microprocessor's performance. When an opcode is fetched from memory, it must be briefly examined to determine what needs to be done, after which the appropriate actions are carried out. This process is called instruction decoding. A central logic block coordinates the operation of the entire microprocessor by fetching instructions from memory, decoding them, and loading or storing any data as required. The accumulator is a register that temporarily holds data while it is being processed.

ALU
The ALU is sometimes the most complex single logic element in a microprocessor. It is responsible for performing arithmetic and logical operations as directed by the instruction decode logic.
Not only does the ALU add or subtract data from the accumulator, it also keeps track of status flags that tell subsequent branch instructions whether the result was positive, negative, or zero, and whether an addition or subtraction operation created a carry or borrow bit.

General Registers
In addition to the general registers used to hold variables and temporary results, most computers have several special registers that are visible to the programmer:
• Program counter
• Stack pointer
• PSW (Program Status Word)

Program Counter
A microprocessor needs a mechanism to keep track of its place in the instruction sequence. Like a bookmark that saves your place as you read through a book, the program counter (PC) maintains the address of the next instruction to be fetched from program memory. The PC is a counter that can be reloaded with a new value from the instruction decoder. Under normal operation, the microprocessor moves through instructions sequentially. After executing each instruction, the PC is incremented, and a new instruction is fetched from the address indicated by the PC.

Subroutines and the Stack
Most programs are organized into multiple blocks of instructions called subroutines rather than a single large sequence of instructions. Subroutines are located apart from the main program segment and are invoked by a subroutine call. This call is a type of branch instruction that temporarily jumps the microprocessor's PC to the subroutine, allowing it to be executed. Subroutines provide several benefits to a program, including modularity and ease of reuse. This concept greatly speeds the software development process.
When a branch-to-subroutine is executed, the PC is saved into a data structure called a stack. The stack is a region of data memory that is set aside by the programmer specifically for the main purpose of storing the microprocessor's state information when it branches to a subroutine.

Stack Pointer
A stack is a last-in, first-out memory structure. When data is stored on the stack, it is pushed on. When data is removed from the stack, it is popped off. Popping the stack recalls the most recently pushed data. The first datum to be pushed onto the stack will be the last to be popped. A stack pointer (SP) holds a memory address that identifies the top of the stack at any given time. The SP decrements as entries are pushed on and increments as they are popped off, thereby growing the stack downward in memory as data is pushed. By pushing the PC onto the stack during a branch-to-subroutine, the microprocessor has a means to return to the calling routine at any time: popping the stack restores the PC to its previous value.

PSW (Program Status Word)
Another register is the PSW (Program Status Word). This register contains the condition code bits, which are set by comparison instructions, the CPU priority, the mode (user or kernel), and various other control bits. User programs may normally read the entire PSW but typically may write only some of its fields. The PSW plays an important role in system calls and I/O.

Pipeline
Many modern CPUs have facilities for executing more than one instruction at the same time. For example, a CPU might have separate fetch, decode, and execute units, so that while it is executing instruction n, it can also be decoding instruction n + 1 and fetching instruction n + 2. Such an organization is called a pipeline.

Figure 1-7. (a) A three-stage pipeline. (b) A superscalar CPU.
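The three-stage pipeline of Figure 1-7(a) can be sketched cycle by cycle: while one instruction executes, the next is being decoded and the one after that fetched. The instruction names below are invented for illustration:

```python
instructions = ["i0", "i1", "i2", "i3"]   # hypothetical instruction stream
STAGES = 3                                # 0 = fetch, 1 = decode, 2 = execute

# In cycle c, the instruction in stage s is the one fetched c - s cycles ago.
timeline = []
for cycle in range(len(instructions) + STAGES - 1):
    row = []
    for s in range(STAGES):
        i = cycle - s                     # which instruction occupies this stage
        row.append(instructions[i] if 0 <= i < len(instructions) else "-")
    timeline.append(tuple(row))
    print(f"cycle {cycle}: fetch={row[0]} decode={row[1]} execute={row[2]}")

# From cycle 2 onward the pipeline is full: i2 is fetched while i1 is
# decoded and i0 executed, so one instruction completes every cycle.
```

Four instructions finish in six cycles instead of twelve, which is the whole benefit of overlapping the stages.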
Superscalar CPU
Even more advanced than a pipeline design is a superscalar CPU. In this design, multiple execution units are present, for example, one for integer arithmetic, one for floating-point arithmetic, and one for Boolean operations. Two or more instructions are fetched at once, decoded, and dumped into a holding buffer until they can be executed. As soon as an execution unit is free, it looks in the holding buffer to see if there is an instruction it can handle, and if so, it removes the instruction from the buffer and executes it.

To obtain services from the operating system, a user program must make a system call, which traps into the kernel and invokes the operating system. The TRAP instruction switches from user mode to kernel mode and starts the operating system.

Multithreaded and Multicore Chips
Figure 1-8. (a) A quad-core chip with a shared L2 cache.
(b) A quad-core chip with separate L2 caches.

Memory
Ideally, a memory should be extremely fast (faster than executing an instruction, so the CPU is not held up by the memory), abundantly large, and dirt cheap.
Figure 1-9. A typical memory hierarchy.
The numbers are very rough approximations.

Register
The top layer consists of the registers internal to the CPU. They are made of the same material as the CPU and are thus just as fast as the CPU. Consequently, there is no delay in accessing them.

Cache
The cache memory is mostly controlled by the hardware. Main memory is divided up into cache lines, typically 64 bytes, with addresses 0 to 63 in cache line 0, addresses 64 to 127 in cache line 1, and so on. The most heavily used cache lines are kept in a high-speed cache located inside or very close to the CPU. Cache memory is limited in size due to its high cost. Some machines have two or even three levels of cache.

RAM
Main memory is often called RAM (Random Access Memory). Old-timers sometimes call it core memory. Currently, memories are tens to hundreds of megabytes and growing rapidly.

Nonvolatile Memory
In addition to the kinds of memory discussed above, many computers have a small amount of nonvolatile random access memory. Unlike RAM, nonvolatile memory does not lose its contents when the power is switched off.
• ROM
• EEPROM (Electrically Erasable Programmable Read-Only Memory)
• CMOS (volatile)

ROM
ROM (Read Only Memory) is programmed at the factory and cannot be changed afterward. It is fast and inexpensive. On some computers, the bootstrap loader used to start the computer is contained in ROM. Also, some I/O cards come with ROM for handling low-level device control.

EEPROM
EEPROM (Electrically Erasable ROM) and flash RAM are also nonvolatile, but in contrast to ROM they can be erased and rewritten. However, writing them takes orders of magnitude more time than writing RAM, so they are used in the same way ROM is, only with the additional feature that it is now possible to correct bugs in the programs they hold by rewriting them in the field.

CMOS
CMOS is volatile. Many computers use CMOS memory to hold the current time and date.
The CMOS memory and the clock circuit that increments the time in it are powered by a small battery, so the time is correctly updated even when the computer is unplugged. The CMOS memory can also hold configuration parameters, such as which disk to boot from. CMOS is used because it draws so little power that the original factory-installed battery often lasts for several years.

Magnetic Disk
Next in the hierarchy is the magnetic disk (hard disk). Disk storage is two orders of magnitude cheaper than RAM per bit and often two orders of magnitude larger as well. The only problem is that the time to randomly access data on it is close to three orders of magnitude slower. This low speed is due to the fact that a disk is a mechanical device. A disk consists of one or more metal platters that rotate at 5400, 7200, or 10,800 rpm. At any given arm position, each of the heads can read an annular region called a track. Together, all the tracks for a given arm position form a cylinder. Each track is divided into some number of sectors, typically 512 bytes per sector.

Figure 1-10. Structure of a disk drive.

Tapes
The medium is often used as backup for disk storage and for holding very large data sets. The big plus of tape is that it is cheap per bit and removable, which is important for backup tapes that must be stored off-site in order to survive fires, floods, earthquakes, and other disasters.

I/O Devices
I/O devices also interact heavily with the operating system. As we saw in Fig. 1-5, I/O devices generally consist of two parts: a controller and the device itself. The controller is a chip or a set of chips on a plug-in board that physically controls the device. It accepts commands from the operating system, for example, to read data from the device, and carries them out. Because each type of controller is different, different software is needed to control each one. The software that talks to a controller, giving it commands and accepting responses, is called a device driver.
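Returning to the disk geometry described above: cylinders, heads (tracks per cylinder), and sectors per track determine a disk's capacity, and a (cylinder, head, sector) address can be flattened into a single block number just like row-major array indexing. The geometry numbers below are invented for illustration; real drives vary widely:

```python
# Hypothetical disk geometry; real drives differ.
cylinders = 10000          # distinct arm positions
heads = 8                  # tracks per cylinder (one per head/surface)
sectors_per_track = 63
bytes_per_sector = 512     # the typical sector size mentioned above

capacity = cylinders * heads * sectors_per_track * bytes_per_sector
print(capacity)            # total bytes (about 2.4 GiB for these numbers)

def chs_to_lba(c, h, s):
    """Flatten a (cylinder, head, sector) address to a logical block number.
    Sectors are 1-based by convention on real disks."""
    return (c * heads + h) * sectors_per_track + (s - 1)

print(chs_to_lba(0, 0, 1))  # -> 0 (the very first block)
print(chs_to_lba(0, 1, 1))  # -> 63 (first block on the next head)
```

This flattening is essentially what disk firmware does when the operating system asks for logical block numbers instead of raw geometry.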