Operating System: What Do Operating Systems Do?
Time-sharing operating systems schedule tasks for efficient use of the system and may also include
accounting software for cost allocation of processor time, mass storage, printing, and other
resources.
For hardware functions such as input and output and memory allocation, the operating system acts
as an intermediary between programs and the computer hardware,[1][2] although the application code
is usually executed directly by the hardware and frequently makes system calls to an OS function or
is interrupted by it. Operating systems are found on many devices that contain a computer –
from cellular phones and video game consoles to web servers and supercomputers.
There are four general types of operating systems. Their use depends on the type of
computer and the type of applications that will be run on those computers.
Windows 3.1 was released in 1992. By then, Windows had gained significant market share.
Microsoft released Windows 95 in August 1995. It was so well marketed and in such
high demand that people bought the operating system, even if they didn't own a home
computer. With each new release, from Windows 98 to Windows 2000 to Windows XP,
Microsoft gained popularity. Today, almost every new personal computer comes
preloaded with the Windows operating system. Windows can be run on practically any
brand of personal computer. It is estimated that roughly 90 percent of personal computers run
the Windows operating system, with most of the remainder running the Macintosh operating
system.
Linux is a UNIX variant that runs on several different hardware platforms. Linus
Torvalds, a student at the University of Helsinki in Finland, initially created it as a hobby.
The kernel, at the heart of all Linux systems, is developed and released under the
GNU General Public License (GPL), and its source code is freely available to everyone.
There are now hundreds of companies, organizations, and individuals that have
released their own versions of operating systems based on the Linux kernel.
The 32-bit processor was the primary processor used in most computers from the early
1990s until the mid-2000s. Intel Pentium processors and early AMD processors were
32-bit processors. The operating system and software on a computer with a
32-bit processor are also 32-bit based, in that they work with data units that
are 32 bits wide. Windows 95, 98, and XP are all 32-bit operating systems
that were common on computers with 32-bit processors.
Note: A computer with a 32-bit processor cannot have a 64-bit version of an
operating system installed. It can only have a 32-bit version of an operating
system installed.
64-bit processor
The 64-bit computer has been around since 1961 when IBM created the IBM
7030 Stretch supercomputer. However, it was not put into use in home
computers until the early 2000s. Microsoft released a 64-bit version of
Windows XP to be used on computers with a 64-bit processor. Windows
Vista, Windows 7, and Windows 8 also come in 64-bit versions. Other
software has been developed that is designed to run on a 64-bit computer;
such programs are 64-bit based as well, in that they work with data units that are 64
bits wide.
Note: A computer with a 64-bit processor can have a 64-bit or 32-bit
version of an operating system installed. However, with a 32-bit operating
system, the 64-bit processor would not run at its full capability.
Note: On a computer running a 64-bit operating system, you cannot run a
16-bit legacy program, because 64-bit versions of Windows do not include the
16-bit subsystem. Many 32-bit programs will work with a 64-bit processor
and operating system, but some older 32-bit programs may not function
properly, or at all, due to limited or no compatibility.
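To see which case applies on a given machine, a short Python sketch (assuming Python 3 is installed) can report the bitness of the running interpreter and the processor architecture the operating system exposes:

```python
import platform
import struct

# A pointer is 4 bytes on a 32-bit build and 8 bytes on a 64-bit build,
# so the pointer size reveals the interpreter's bitness.
bits = struct.calcsize("P") * 8
print(f"This Python interpreter is {bits}-bit")

# platform.machine() reports the architecture the OS exposes,
# e.g. 'x86_64' or 'AMD64' on a 64-bit system.
print(f"Machine architecture: {platform.machine()}")
```

Note that a 32-bit interpreter can still be running on a 64-bit OS, so this reports a lower bound on the hardware's capability.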
Differences between a 32-bit and 64-bit CPU
So how long will the transition from 32-bit to 64-bit systems take?
The main issue is that your computer works from the hardware, such as the processor (or
CPU, as it is called), through the operating system (OS), to the highest level, which is your
applications. So the computer hardware is designed first, the matching operating systems
are developed, and finally the applications appear.
We can look back at the transition from 16-bit to 32-bit Windows on 32-bit processors. It
took 10 years (from 1985 to 1995) to get a 32-bit operating system and even now, more
than 15 years later, there are many people still using 16-bit Windows applications on older
versions of Windows.
The hardware and software vendors learnt from the previous transition, so the new
operating systems have been released at the same time as the new processors. The
problem this time is that there haven't been enough 64-bit applications. Ten years after the
PC's first 64-bit processors, installs of 64-bit Windows finally exceeded installs of 32-bit
Windows. Further evidence of this inertia is that you are probably reading this tutorial
because you are looking to install your first 64-bit software.
Your computer system in three parts
Now we'll look at those three components of your system. In simple terms they are three
layers, with the processor or CPU as the central or lowest layer and the application as the
outermost or highest layer.
To run a 64-bit operating system you need support from the lower level: the 64-bit CPU.
To run a 64-bit application you need support from all lower levels: the 64-bit OS and the
64-bit CPU.
This simplification will be enough for us to look at what happens when we mix the 32-bit and
64-bit parts. But if you want to understand the issue more deeply then you will also need to
consider the hardware that supports the CPU and the device drivers that allow the OS and
the applications to interface with the system hardware.
What 32-bit and 64-bit combinations are compatible and will work
together?
This is where we get to the practicalities and can start answering common questions.
The general rule is that 32-bit will run on a lower level 64-bit component but 64-bit does
not run on a lower level 32-bit component:
Operating System (OS)   32-bit   32-bit   64-bit   64-bit
CPU (processor)         32-bit   64-bit   32-bit   64-bit
Will it work?           Yes      Yes      No       Yes
The main reason that 32-bit will always run on 64-bit is that the 64-bit components have
been designed to work that way. So the newer 64-bit systems are backward-compatible
with the 32-bit systems (which is the main reason most of us haven't moved to 64-bit
software).
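The rule can be captured in a toy model (a sketch only, not a real compatibility checker, and the function name is our own): an upper layer runs on the layer beneath it only if its word size does not exceed the lower layer's.

```python
def can_run(upper_bits: int, lower_bits: int) -> bool:
    """Toy rule: an upper-layer component (app or OS) runs on the
    layer beneath it only if its word size does not exceed the
    lower layer's word size."""
    return upper_bits <= lower_bits

print(can_run(32, 64))  # True: 32-bit OS/app on a 64-bit layer
print(can_run(64, 32))  # False: 64-bit OS/app on a 32-bit layer
print(can_run(64, 64))  # True: matching word sizes
```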
An example of backward compatibility is 64-bit Windows. It has a subsystem called WOW64
(Windows 32-bit on Windows 64-bit) that provides compatibility by translating between
32-bit applications and the 64-bit system. See the article How Windows 7 /
Vista 64 Support 32-bit Applications if you want to know more. One important point
made in that article is that it is not possible to install a 32-bit device driver on a 64-bit
operating system. This is because device drivers run at the same level as the operating
system. The compatibility layer operates at the operating-system level, so it is available to
the higher layer, the application, but not to device drivers, which run at the
operating-system level itself.
Similarly, some 32-bit software (usually very old programs, particularly their installers)
contains 16-bit code, which is why those 32-bit applications often fail to run properly on a
64-bit OS.
Can a 64-bit CPU with a 32-bit host OS run a virtual machine (VM)
for a 64-bit guest OS?
Sometimes. It depends upon the level of virtualization.
With software virtualization it is unlikely to work, or if it does work it may be very slow.
Hardware virtualization must be supported by the CPU (e.g. with Intel VT-x or AMD-V)
and enabled in the BIOS.
BIT
Short for binary digit, a bit is the smallest unit of information on a machine. The term was first used in
1946 by John Tukey, a leading statistician and adviser to five presidents. A single bit can hold
only one of two values: 0 or 1. More meaningful information is obtained by combining
consecutive bits into larger units. For example, a byte is composed of 8 consecutive bits.
Computers are sometimes classified by the number of bits they can process at one time or by
the number of bits they use to represent addresses. These two values are not always the same,
which leads to confusion. For example, classifying a computer as a 32-bit machine might mean
that its data registers are 32 bits wide or that it uses 32 bits to identify each address in memory.
Whereas larger registers make a computer faster, using more bits for addresses enables a
machine to support larger programs.
Graphics are also often described by the number of bits used to represent each dot. A 1-bit
image is monochrome; an 8-bit image supports 256 colors or grayscales; and a 24- or 32-bit
graphic supports true color.
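These color depths follow directly from the rule that n bits can represent 2^n distinct values, as a quick Python check shows:

```python
# n bits can represent 2**n distinct values.
for bits in (1, 8, 24):
    print(f"{bits}-bit image: {2 ** bits:,} colors or shades")
# 1-bit image: 2 colors or shades
# 8-bit image: 256 colors or shades
# 24-bit image: 16,777,216 colors or shades
```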
Byte
A byte is a unit of measurement used to measure data. One byte contains eight binary bits, or a
series of eight zeros and ones. Therefore, each byte can be used to represent 2^8 or 256
different values.
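A short Python example illustrates how one byte's eight bits encode one of those 256 values, here the ASCII code for the letter 'A':

```python
value = 0b01000001   # one byte written out bit by bit
print(value)         # 65, the ASCII code for 'A'
print(chr(value))    # A
print(2 ** 8)        # 256, the number of values one byte can hold
```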
The byte was originally developed to store a single character, since 256 values is sufficient to
represent all standard lowercase and uppercase letters, numbers, and symbols. However, since
some languages have more than 256 characters, modern character encoding standards, such
as UTF-16, use two bytes, or 16 bits for each character.
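Python's built-in encode method makes the difference in storage visible; the byte counts below assume the standard UTF-8 and little-endian UTF-16 codecs:

```python
text = "A"
print(len(text.encode("utf-8")))      # 1 byte: a basic Latin letter fits in one byte
print(len(text.encode("utf-16-le")))  # 2 bytes: UTF-16 uses 16-bit code units
print(len("€".encode("utf-16-le")))   # 2 bytes: '€' still fits in one 16-bit unit
```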
While the byte was originally designed to measure character data, it is now the fundamental unit
of measurement for all data storage. For example, a kilobyte contains 1,000 bytes (or, in the
binary convention, 2^10 = 1,024 bytes), and a megabyte contains 1,000,000 bytes (or 1,024 x
1,024 = 1,048,576 bytes in the binary convention). Since bytes are so small, they are most
often used to measure specific data within a file, such as pixels or characters. Even the smallest
files are typically measured in kilobytes, while data storage limits are often measured in
gigabytes or terabytes.
Kilobytes
The kilobyte (abbreviated "K" or "KB") is the smallest unit of measurement greater than a byte.
It precedes the megabyte, which contains 1,000,000 bytes. While one kilobyte is technically
1,000 bytes, kilobytes are often used synonymously with kibibytes, which contain 1,024 bytes.
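The two conventions can be sketched in a small Python helper (the function name `to_bytes` is our own for illustration, not a standard library call):

```python
def to_bytes(value, unit, binary=False):
    """Convert a value in KB/MB/GB/TB to bytes.
    binary=False uses decimal (SI) steps of 1,000;
    binary=True uses binary (IEC) steps of 1,024."""
    base = 1024 if binary else 1000
    exponent = {"KB": 1, "MB": 2, "GB": 3, "TB": 4}[unit]
    return int(value * base ** exponent)

print(to_bytes(1, "KB"))                # 1000 (decimal kilobyte)
print(to_bytes(1, "KB", binary=True))   # 1024 (kibibyte)
print(to_bytes(1, "MB", binary=True))   # 1048576 (mebibyte)
```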
Kilobytes are most often used to measure the size of small files. For example, a plain text
document may contain 10 KB of data and therefore would have a file size of 10 kilobytes. Small
website graphics are often between 5 KB and 100 KB in size. While some files contain less than
4 KB of data, modern file systems such as NTFS (Windows) and HFS+ (Mac) have a cluster
size of 4 KB. Therefore, individual files typically take up a minimum of four kilobytes of disk
space.
Megabytes
One megabyte (abbreviated "MB") is equal to 1,000 kilobytes and precedes the gigabyte unit of
measurement. While a megabyte is technically 1,000,000 bytes, megabytes are often used
synonymously with mebibytes, which contain 1,048,576 bytes (2^20, or 1,024 x 1,024 bytes).
Megabytes are often used to measure the size of large files. For example, a high-resolution
JPEG image file might range in size from one to five megabytes. Uncompressed RAW images
from a digital camera may require 10 to 50 MB of disk space. A three minute song saved in a
compressed format may be roughly three megabytes in size, and the uncompressed version
may take up 30 MB of space. While CD capacity is measured in megabytes (typically 700 to 800
MB), the capacity of most other forms of media, such as flash drives and hard drives, is typically
measured in gigabytes or terabytes.
Gigabyte
One gigabyte (abbreviated "GB") is equal to 1,000 megabytes and precedes the terabyte unit of
measurement. While a gigabyte is technically 1,000,000,000 bytes, in some cases gigabytes
are used synonymously with gibibytes, which contain 1,073,741,824 bytes (2^30, or 1,024 x
1,024 x 1,024 bytes).
Gigabytes, sometimes abbreviated "gigs," are often used to measure storage capacity. For
example, a standard DVD can hold 4.7 gigabytes of data. An SSD might hold 256 GB, and a
hard drive may have a storage capacity of 750 GB. Storage devices that hold 1,000 GB of data
or more are typically measured in terabytes.
RAM is also usually measured in gigabytes. For example, a desktop computer may come with
16 GB of system RAM and 2 GB of video RAM. A tablet may only require 1 GB of system RAM
since portable apps typically do not require as much memory as desktop applications.
Terabyte
One terabyte (abbreviated "TB") is equal to 1,000 gigabytes and precedes the petabyte unit of
measurement. While a terabyte is exactly 1 trillion bytes, in some cases terabytes and tebibytes
are used synonymously, though a tebibyte actually contains 1,099,511,627,776 bytes (1,024
gibibytes).
Terabytes are most often used to measure the storage capacity of large storage devices. While
hard drives were measured in gigabytes for many years, around 2007, consumer hard drives
reached a capacity of one terabyte. Now, all hard drives that have a capacity of 1,000 GB or
more are measured in terabytes. For example, a typical internal HDD may hold 2 TB of data.
Some servers and high-end workstations that contain multiple hard drives may have a total
storage capacity of over 10 TB.
Terabytes are also used to measure bandwidth, or data transferred in a specific amount of time.
For example, a Web host may limit a shared hosting client's bandwidth to 2 terabytes per
month. Dedicated servers often have higher bandwidth limits, such as 10 TB/mo or more.
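As a rough worked example (assuming decimal units and a 30-day month), a 2 TB monthly cap corresponds to a fairly modest sustained transfer rate:

```python
# 2 TB/month expressed as a sustained bit rate.
cap_bytes = 2 * 10 ** 12        # 2 TB in decimal bytes
seconds = 30 * 24 * 60 * 60     # seconds in a 30-day month
rate_mbit = cap_bytes * 8 / seconds / 10 ** 6
print(f"About {rate_mbit:.1f} Mbit/s sustained")  # about 6.2 Mbit/s
```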
GHz
1. Short for gigahertz, GHz is a unit of measurement for alternating current (AC) or
electromagnetic (EM) wave frequencies equal to 1,000,000,000 Hz.
2. When referring to a computer processor or CPU, GHz is a clock frequency, also known as a
clock rate or clock speed, representing the number of cycles per second. An oscillator circuit,
usually driven by a quartz crystal, generates this clock signal, which is measured in kHz, MHz,
or GHz. "Hz" is an abbreviation of hertz, and "k" represents kilo (thousand), "M" represents
mega (million), and "G" represents giga (billion).
In general, for processors of a similar design, the higher the GHz number, the faster the
processor can run and process data. The first 1 GHz processors for consumer computers were
released in March 2000 by AMD and Intel. Today, processors commonly reach 3.8 GHz or
higher speeds.
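Since the clock rate is cycles per second, the time taken by one cycle is its reciprocal; a quick Python check for a 3.8 GHz clock:

```python
clock_hz = 3.8e9                  # a 3.8 GHz clock rate
period_ns = 1 / clock_hz * 1e9    # clock period in nanoseconds
print(f"One cycle lasts about {period_ns:.3f} ns")  # about 0.263 ns
```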