Plan9 Operating System: Ashish Ranjan
A SEMINAR REPORT
Submitted by
ASHISH RANJAN
of
BACHELOR OF TECHNOLOGY
In
SCHOOL OF ENGINEERING
COCHIN – 682022
SEPTEMBER 2008
DIVISION OF COMPUTER SCIENCE AND ENGINEERING
SCHOOL OF ENGINEERING
COCHIN – 682022
Certificate
ASHISH RANJAN
of the VIIth semester, Computer Science and Engineering, in the year 2008, in
partial fulfillment of the requirements for the award of the Degree of Bachelor of
Technology in Computer Science and Engineering of Cochin University of Science
and Technology.
Date:
ACKNOWLEDGEMENT
It is with the greatest pleasure and pride that I present this report before you. At this
moment of triumph, it would be unfair to neglect all those who helped me in completing it.
First of all, I would like to place myself at the feet of God Almighty for His
everlasting love and for the blessings and courage that He gave me, which made it
possible for me to see through the turbulence and set myself on the right path.
I would also like to thank our Head of the Department, Mr. David Peter S, for all
the support extended to me.
I am grateful to my seminar guide, Lata Nair, for the guidance, wholehearted
support, and valued constructive criticism that drove me to complete this work.
I would take this opportunity to thank my friends who were always a source of
encouragement.
ABSTRACT
Table of Contents
9 File Caching
10 File Permissions
14 Parallel Programming
15 Hardware Requirements
16 Features Of Plan9
17 Performance
18 Applications of Plan9
19 Conclusion
20 References
List of figures
1 INTRODUCTION TO PLAN9
Plan 9 from Bell Labs is a distributed operating system. It was developed as the
research successor to Unix by the Computing Sciences Research Center at Bell Labs
between the mid-1980s and 2002. Plan 9 replaced Unix at Bell Labs as the
organization's primary platform for research, and it explores several changes to the
original Unix model that improve the experience of using and programming the
system, notably in distributed multi-user environments.
One of the key features adopted from Unix was the use of the file system to
access resources. Plan 9 is not just an operating system kernel but also a collection
of accompanying software. The bulk of the software is predominantly new, written
for Plan 9 rather than ported from Unix or other systems. The window system,
compilers, file server, and network services are all freshly written for Plan 9.
Plan 9 is most notable for representing all system interfaces, including those
required for networking and the user-interface, through the file system rather than
specialized interfaces. Plan 9 aims to provide users with a workstation-independent
working environment through the use of the 9P protocols. Plan 9 continues to be
used and developed in some circles as a research operating system and by hobbyists.
2 INTRODUCTION TO UNIX
Unix (officially trademarked as UNIX®) is a computer operating system originally
developed in 1969 by a group of AT&T employees at Bell Labs including Ken
Thompson, Dennis Ritchie and Douglas McIlroy. Today's Unix systems are split
into various branches, developed over time by AT&T as well as various commercial
vendors and non-profit organizations.
As of 2007, the owner of the trademark UNIX® is The Open Group, an
industry standards consortium. Only systems fully compliant with and certified to
the Single UNIX Specification qualify as "UNIX®" (others are called "Unix
system-like" or "Unix-like").
During the late 1970s and early 1980s, Unix's influence in academic circles
led to large-scale adoption of Unix (particularly of the BSD variant, originating
from the University of California, Berkeley) by commercial startups, the most
notable of which is Sun Microsystems. Today, in addition to certified Unix systems,
Unix-like operating systems such as Linux and BSD derivatives are commonly
encountered. Sometimes, "traditional Unix" may be used to describe a Unix or an
operating system that has the characteristics of either Version 7 Unix or UNIX
System V.
3 INSTALLATION OF PLAN9
Figure 1. INSTALLATION
When a terminal is powered on, it must be told the name of a file server to boot
from, the operating system kernel to boot, and a user name and password. Once it is
complete, the terminal loads the Plan 9 kernel, which sets some environment
variables and builds an initial namespace from the user input (‘$cputype’,
‘$objtype’, ‘$user’, ‘$home’, union of ‘/$cputype/bin’ and ‘/rc/bin’ into ‘/bin’).
Eventually, the terminal runs ‘rc’ on ‘/usr/$user/lib/profile’. The user name
becomes the terminal’s ID. The password is converted into a 56-bit DES key and
saved as the machine key.
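As an illustration, the union of '/$cputype/bin' and '/rc/bin' into '/bin' can be expressed
with the bind(2) system call. The following Plan 9 C sketch assumes a 386 terminal, so
the literal paths and the function name are only illustrative:

    #include <u.h>
    #include <libc.h>

    /* Sketch: build the /bin union the way the boot-time namespace does.
       Paths assume $cputype=386; other architectures use their own bin. */
    void
    mkbin(void)
    {
        if(bind("/386/bin", "/bin", MREPL) < 0)
            sysfatal("bind /386/bin: %r");
        if(bind("/rc/bin", "/bin", MAFTER) < 0)   /* union: rc scripts searched after binaries */
            sysfatal("bind /rc/bin: %r");
    }

The same effect is usually achieved from the shell profile with the bind command rather
than from C.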
When a CPU or a file server boots, it reads a key, an ID, and a domain name
from non-volatile RAM. This allows the servers to reboot without operator
intervention.
Programs are run where their resource usage suggests (e.g., heavy
computations on a CPU server, frequent file I/O close to the file system). A call to
the command ‘cpu’ starts an ‘rc’ shell on a CPU server. ‘cpu’ is invoked in a ‘rio’
window. Standard input, output, and error files are connected to the ‘/dev/cons’ in
the namespace where the ‘cpu’ command was invoked.
The namespace for the new ‘rc’ is similar to the one from which the ‘cpu’
command was invoked; only architecture-dependent bindings such as ‘/bin’ may
change; CPU-local devices such as fast file systems are still local; only terminal-
local devices are imported; the terminal becomes a file server of the CPU. The result
differs from rlogin, which moves to a distinct namespace, and from NFS, which
keeps the namespace but runs the process locally.
All machines use the same database to talk to the network; there is no need to manage a
distributed naming system or keep parallel files up to date. To install a new machine
on the local Ethernet, choose a name and IP address and add these to a single file
in /lib/ndb; all the machines in the installation will be able to talk to it immediately.
To start running, plug the machine into the network, turn it on, and use BOOTP and
TFTP to load the kernel. All else is automatic.
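For illustration only, a hypothetical entry in /lib/ndb for such a machine might look like
the following; the system name, addresses, and boot file are invented:

    sys=newterm dom=newterm.example.com
        ip=192.168.1.40 ether=0000c0123456
        bootf=/386/9pc

Attribute/value pairs on indented continuation lines belong to the same entry, so one
record carries everything the network services need to know about the machine.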
Finally, the automated dump file system frees all users from the need to
maintain their systems, while providing easy access to backup files without tapes,
special commands, or the involvement of support staff. It is difficult to overstate the
improvement in lifestyle afforded by this service.
Plan 9 runs on a variety of hardware without constraining how to configure
an installation. In our laboratory, we chose to use central servers because they
amortize costs and administration. A sign that this is a good decision is that our
cheap terminals remain comfortable places to work for about five years, much
longer than workstations that must provide the complete computing environment.
We do, however, upgrade the central machines, so the computation available from
even old Plan 9 terminals improves with time.
The commands fall into several broad classes. Some are new programs for old jobs:
programs like ls, cat, and who have familiar names and functions but are new, simpler
implementations. Who, for example, is a shell script, while ps is just 95 lines of C
code. Some commands are essentially the same as their UNIX ancestors: awk, troff,
and others have been converted to ANSI C and extended to handle Unicode, but are
still the familiar tools. Some are entirely new programs for old niches: the shell rc,
text editor sam, debugger acid, and others displace the better-known UNIX tools
with similar jobs. Finally, about half the commands are new.
Access permissions of files in the dump are the same as they were when the
dump was made. Normal utilities have normal permissions in the dump without any
special arrangement. The dump file system is read-only, though, which means that
files in the dump cannot be written regardless of their permission bits; in fact, since
directories are part of the read-only structure, even the permissions cannot be
changed.
Once a file is written to WORM, it cannot be removed, so our users never
see "please clean up your files" messages and there is no df command. We regard
the WORM jukebox as an unlimited resource. The only issue is how long it will
take to fill.
Each day the file system is frozen and queued to the WORM; service is then
restored and the read-only root of the dumped file system appears in a hierarchy of
all dumps ever taken, named by its date. For example, the directory
/n/dump/1995/0315 is the root directory of an image of the file system as it appeared
in the early morning of March 15, 1995. It takes a few minutes to queue the blocks,
but the process to copy blocks to the WORM, which runs in the background, may
take hours.
There are two ways the dump file system is used. The first is by the users
themselves, who can browse the dump file system directly or attach pieces of it to
their namespace. For example, to track down a bug, it is straightforward to try the
compiler from three months ago or to link a program with yesterday’s library. With
daily snapshots of all files, it is easy to find when a particular change was made or
what changes were made on a particular date. People feel free to make large
speculative changes to files in the knowledge that they can be backed out with a
single copy command. There is no backup system as such; instead, because the
dump is in the file name space, backup problems can be solved with standard tools
such as cp, ls, grep, and diff.
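Because the dump is just part of the name space, access from a program is equally
ordinary. A small Plan 9 C sketch (the helper name is made up) that opens the dump's
copy of a file for a given date, using the /n/dump naming convention shown above:

    #include <u.h>
    #include <libc.h>

    /* Sketch: open the dump's copy of path as of a given date (yyyy/mmdd),
       e.g. dumpopen("1995/0315", "/sys/src/cmd/ls.c"). */
    int
    dumpopen(char *date, char *path)
    {
        char buf[256];

        snprint(buf, sizeof buf, "/n/dump/%s%s", date, path);
        return open(buf, OREAD);
    }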
The other (very rare) use is complete system backup. In the event of disaster,
the active file system can be initialized from any dump by clearing the disk cache
and setting the root of the active file system to be a copy of the dumped root.
Although easy to do, this is not to be taken lightly: besides losing any change made
after the date of the dump, this recovery method results in a very slow system. The
cache must be reloaded from WORM, which is much slower than magnetic disks.
The file system takes a few days to reload the working set and regain its full
performance.
9 FILE CACHING
The 9P protocol has no explicit support for caching files on a client. The large
memory of the central file server acts as a shared cache for all its clients, which
reduces the total amount of memory needed across all machines in the network.
Nonetheless, there are sound reasons to cache files on the client, such as a slow
connection to the file server.
The version field of the qid is changed whenever the file is modified, which
makes it possible to do some weakly coherent forms of caching. The most important
is client caching of text and data segments of executable files. When a process execs
a program, the file is re-opened and the qid’s version is compared with that in the
cache; if they match, the local copy is used. The same method can be used to build a
local caching file server. This user-level server interposes on the 9P connection to
the remote server and monitors the traffic, copying data to a local disk. When it sees
a read of known data, it answers directly, while writes are passed on
immediately (the cache is write-through) to keep the central copy up to date. This is
transparent to processes on the terminal and requires no change to 9P; it works well
on home machines connected over serial lines. A similar method can be applied to
build a general client cache in unused local memory, but this has not been done in
Plan 9.
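A minimal Plan 9 C sketch of the validity check described above; the function name and
the caching policy around it are illustrative, not the system's actual code:

    #include <u.h>
    #include <libc.h>

    /* Sketch: a cached copy is still usable if the server reports the same
       qid (path, version, type) that was recorded when the copy was made. */
    int
    cacheok(char *path, Qid cached)
    {
        Dir *d;
        int ok;

        d = dirstat(path);
        if(d == nil)
            return 0;
        ok = d->qid.path == cached.path &&
            d->qid.vers == cached.vers &&
            d->qid.type == cached.type;
        free(d);
        return ok;
    }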
10 FILE PERMISSIONS
One of the advantages of constructing services as file systems is that the solutions to
ownership and permission problems fall out naturally. As in UNIX, each file
or directory has separate read, write, and execute/search permissions for the file
owner, the file's group, and anyone else. The idea of group is unusual: any user name
is potentially a group name. A group is just a user with a list of other users in the
group. Conventions make the distinction: most people have user names without
group members, while groups have long lists of attached names. For example, the
sys group traditionally has all the system programmers, and system files are
accessible by group sys. Consider the following two lines of a user database stored
on a server:
pjw:pjw:
sys::pjw,ken,philw,presotto
The first establishes user pjw as a regular user. The second establishes user sys as a
group and lists four users who are members of that group. The empty colon-
separated field is space for a user to be named as the group leader. If a group has a
leader, that user has special permissions for the group, such as freedom to change
the group permissions of files in that group. If no leader is specified, each member
of the group is considered equal, as if each were the leader. In our example, only
pjw can add members to his group, but all of sys's members are equal partners
in that group. Regular files are owned by the user that creates them. The group name
is inherited from the directory holding the new file. Device files are treated
specially: the kernel may arrange the ownership and permissions of a file
appropriate to the user accessing the file.
A good example of the generality this offers is process files which are owned
and read-protected by the owner of the process. If the owner wants to let someone
else access the memory of a process, for example to let the author of a program
debug a broken image, the standard chmod command applied to the process files
does the job. Another unusual application of file permissions is the dump file system,
which is not only served by the same file server as the original data, but represented
by the same user database. Files in the dump are therefore given identical protection
as files in the regular file system; if a file is owned by pjw and read-protected, once
it is in the dump file system it is still owned by pjw and read-protected. Also, since
the dump file system is immutable, the file cannot be changed; it is read-protected
forever. Drawbacks are that if the file is readable but should have been read-
protected, it is readable forever, and that user names are hard to reuse.
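Returning to the process-file example above, the owner of a process can open its image
to others programmatically as well as with chmod. The sketch below, in Plan 9 C, is the
wstat equivalent of chmod on a /proc file; the pid argument and the chosen mode are
illustrative:

    #include <u.h>
    #include <libc.h>

    /* Sketch: loosen permissions on /proc/<pid>/mem so a group member can
       attach a debugger.  Only the mode field is changed. */
    void
    sharemem(char *pid)
    {
        Dir d;
        char path[64];

        snprint(path, sizeof path, "/proc/%s/mem", pid);
        nulldir(&d);            /* mark every field "don't change" */
        d.mode = 0640;          /* owner read/write, group read */
        if(dirwstat(path, &d) < 0)
            sysfatal("dirwstat %s: %r", path);
    }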
If a compiler is run on another architecture and used to compile the same
program there, the intermediate produced on the new architecture is identical to the
intermediate produced on the native processor. From the compiler's point of view,
every compilation is a cross-compilation.
Although each architecture's loader accepts only intermediate files produced
by compilers for that architecture, such files could have been generated by a
compiler executing on any type of processor. For instance, it is possible to run the
MIPS compiler on a 486, then use the MIPS loader on a SPARC to produce a MIPS
executable.
14 PARALLEL PROGRAMMING
Plan 9's support for parallel programming has two aspects. First, the kernel
provides a simple process model and a few carefully designed system calls for
synchronization and sharing. Second, a new parallel programming language called
Alef supports concurrent programming. Although it is possible to write parallel
programs in C, Alef is the parallel language of choice.
There is a trend in new operating systems to implement two classes of
processes: normal UNIX-style processes and light-weight kernel threads. Instead,
Plan 9 provides a single class of process but allows fine control of the sharing of a
process's resources such as memory and file descriptors. A single class of process is
a feasible approach in Plan 9 because the kernel has an efficient system call
interface and cheap process creation and scheduling.
Parallel
programs have three basic requirements: management of resources shared between
processes, an interface to the scheduler, and fine-grain process synchronization
using spin locks. On Plan 9, new processes are created using the rfork system call.
Rfork takes a single argument, a bit vector that specifies which of the parent
process's resources should be shared, copied, or created anew in the child. The
resources controlled by rfork include the name space, the environment, the file
descriptor table, memory segments, and notes (Plan 9's analog of UNIX signals).
One of the bits controls whether the rfork call will create a new process; if the bit is
off, the resulting modification to the resources occurs in the process making the call.
For example, a process calls rfork(RFNAMEG) to disconnect its name space from
its parent's. Alef uses a fine-grained fork in which all the resources, including
memory, are shared between parent and child, analogous to creating a kernel thread
in many systems.
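A short Plan 9 C sketch of the fine-grained fork described here; the wrapper name and
flag combination are illustrative:

    #include <u.h>
    #include <libc.h>

    /* Sketch: start a child that shares the parent's memory, roughly what an
       Alef-style thread amounts to.  RFNOWAIT means the parent receives no
       wait message when the child exits. */
    void
    spawn(void (*fn)(void*), void *arg)
    {
        switch(rfork(RFPROC|RFMEM|RFNOWAIT)){
        case -1:
            sysfatal("rfork: %r");
        case 0:                 /* child: shares the data segment with the parent */
            fn(arg);
            exits(nil);
        }
        /* parent continues immediately */
    }

Calling rfork(RFFDG|RFPROC), by contrast, is essentially the traditional fork: a new
process with a copied file-descriptor table and separate memory.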
An indication that rfork is the right model is the variety of ways it is used. Other than
the canonical use in the library routine fork, it is hard to find two calls to rfork with
the same bits set; programs use it to create many different forms of sharing and
resource allocation. A system with just two types of processes (regular processes
and threads) could not handle this variety.
15 HARDWARE REQUIREMENTS
IDE/ATAPI CONTROLLERS
Plan 9 supports almost all motherboard IDE/ATAPI controllers, but DMA transfers
are only used on these recognized chipsets (chipsets not listed here will simply run
slower; you can try turning on DMA by editing /sys/src/9/pc/sdata.c).
-AMD 768, 3111
-CMD 640B, 646
-HighPoint HPT366
-Intel PIIX, PIIX3, PIIX4, ICH, ICH0, ICH2-6
-NS PC87415
-nVidia nForce 1, nForce 2, nForce 3, nForce 4
-PC-Tech RZ1000
-Promise PDC202xx, Ultra/133 TX2, 20378
-ServerWorks IB6566
-SiL 3112 SATA, 3114 SATA/RAID
-ATI 4379 SATA
-SiS 962
-VIA 82C686, VT8237 SATA/RAID
SCSI CONTROLLERS
-Buslogic BT-948 or BT-958 (AKA Mylex multimaster series). These aren't being
made any more, but you might be able to buy them used.
-Adaptec 1540 or 1542 for the ISA bus
-Ultrastor 14F ISA or 34F VLB
USB
Intel's UHCI interface is supported, but it only supports USB 1 (12Mb/s) devices.
Support for the alternative OHCI interface is in progress. EHCI (USB 2, 480Mb/s)
support has not been started but is likely to follow before long, since plugging a
USB 2 device (e.g., disk) into a system containing an EHCI controller causes all
USB traffic to be routed to the EHCI controller, despite the presence of UHCI or
OHCI controllers.
ETHERNET
Plan 9 will automatically recognize the PCI Ethernet cards that it can drive. The
following chips/cards are supported:
-AMD 79C970
-Digital (now Intel) 2114x and clones (Tulip, PNIC, PNIC-II, Centaur, Digital
DE-500)
-NE2000 clones
KEYBOARDS
Any PS/2 keyboard should work. USB keyboards might work if you can enable
PS/2 "emulation" in your BIOS.
16 FEATURES OF PLAN9
Plan 9 is designed around the basic principle that all resources appear as files in a
hierarchical file system (namespace) which is unique to each process. These
resources are accessed via a network-level protocol called 9P, which hides the exact
location of services from the user. All servers provide their services as an exported
hierarchy of files.
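Because every resource is presented as a file, ordinary file I/O is the whole
programming interface. For instance, a small Plan 9 C program can read the current
user name from the /dev/user device with nothing but open and read:

    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        char buf[64];
        int fd;
        long n;

        fd = open("/dev/user", OREAD);
        if(fd < 0)
            sysfatal("open /dev/user: %r");
        n = read(fd, buf, sizeof buf - 1);
        if(n < 0)
            sysfatal("read: %r");
        buf[n] = '\0';
        print("user: %s\n", buf);
        exits(nil);
    }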
Features
-The dump file system makes a daily "snapshot" of the filestore available to users
-Unicode character set support throughout the system
-Advanced kernel synchronization facilities for parallel processing
-ANSI/POSIX environment emulator (APE)
-Plumbing, a language-driven way for applications to communicate
-Acme - an editor, shell and window system for programmers
-Sam - a screen editor with structural regular expressions
-Support for MIME mail messages and IMAP4
-Security - there is no super-user or root, and passwords are never sent over the
network
-Venti - archival storage
-Fossil - Hierarchical file system built on top of Venti, with automatic snapshots and
archives
17 PERFORMANCE
As a simple measure of the performance of the Plan 9 kernel, we compared the time
to do some simple operations on Plan 9 and on SGI's IRIX Release 5.3 running on
an SGI Challenge M with a 100MHz MIPS R4400 and a 1-megabyte secondary
cache. The test program was written in Alef, compiled with the same compiler, and
run on identical hardware, so the only variables are the operating system and
libraries. The program tests the time to do a context switch (rendezvous on Plan
9, blockproc on IRIX); a trivial system call (rfork(0) and nap(0)); and a
lightweight fork (rfork(RFPROC) and sproc(PR_SFDS|PR_SADDR)). It also
measures the time to send a byte on a pipe from one process to another and the
throughput on a pipe.
Test            Plan 9    IRIX
System call     6 µs      36 µs
Performance comparison.
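As a rough sketch of how such a figure could be taken on Plan 9 (not the actual Alef
benchmark), the trivial-system-call time can be estimated by timing a loop of rfork(0)
calls with nsec():

    #include <u.h>
    #include <libc.h>

    /* Sketch: average the cost of a do-nothing system call.  rfork(0), with
       no bits set, changes nothing and returns immediately. */
    void
    main(void)
    {
        int i, n;
        vlong t0, t1;

        n = 100000;
        t0 = nsec();
        for(i = 0; i < n; i++)
            rfork(0);
        t1 = nsec();
        print("%lld ns per call\n", (t1 - t0) / n);
        exits(nil);
    }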
18 APPLICATIONS OF PLAN9
18.1 Inferno System
An OS that combines the system structure ideas from Plan 9 with other ideas:
-A virtual operating system that can run either stand-alone on a small
device (hand-held device, set-top box, games console)
-Or as an ordinary application under Windows, Unix, etc.
By chance and circumstance, similar ideas for portable languages and systems were
also re-emerging at the time with Java language technology.
18.3 Viaduct
-A small box (15 cm long) provides VPN (Virtual Private Network) secure
tunneling for homes or small offices
-Does encryption and compression
-Intended for DSL and cable modem connections
-No administration needed; just insert between modem and computer
-Uses Plan 9 as its operating system
-Not a product: used mainly in research group
19 CONCLUSION
20 REFERENCES
Plan 9 Programmer's Manual, Volume 1.
T.J. Killian, “Processes as Files”
B. Clifford Neuman, “The Prospero File System”
http://en.wikipedia.org/wiki/Comparison_of_operating_systems