Campus
Department of Information
Technology
Computer Maintenance and Technical
Support Module
Version 1.0
October, 2022
Dilla, Ethiopia
Acronyms
BIOS Basic Input Output System
CGA Color Graphics Adapter
Chapter One
Maintenance Basics
1.1. General Concepts about PC and Technical Support
A computer is a programmable machine designed to perform arithmetic and logical operations automatically and sequentially on the input given by the user and to give the desired output after processing. Computer components are divided into two major categories, namely hardware and software. Hardware is the machine itself and its connected devices such as the monitor, keyboard, mouse, etc. Software is the set of programs that make use of the hardware to perform various functions.
A computer has the following characteristics:
A. Speed
Computers work at an incredible speed. A modern processor is capable of executing billions of simple instructions per second.
B. Accuracy
In addition to being fast, computers are also accurate. Errors that may occur can almost always be attributed to human error (inaccurate data, a poorly designed system, or faulty instructions/programs written by the programmer).
C. Diligence
Unlike human beings, computers are highly consistent. They do not suffer from human traits of boredom
and tiredness resulting in lack of concentration. Computers, therefore, are better than human beings in
performing voluminous and repetitive jobs.
D. Versatility
Computers are versatile machines and are capable of performing any task as long as it can be broken
down into a series of logical steps. The presence of computers can be seen in almost every sphere –
Railway/Air reservation, Banks, Hotels, Weather forecasting and many more.
E. Storage Capacity
Today’s computers can store large volumes of data. A piece of information, once recorded (or stored) in the computer, is never forgotten and can be retrieved almost instantaneously.
Computer Organization
A computer system consists mainly of four basic units, namely the input unit, storage unit, central processing unit, and output unit. The central processing unit further includes the arithmetic logic unit and the control unit. A computer performs five major operations or functions irrespective of its size and make. These are:
it accepts data or instructions as input,
it stores data and instructions,
it processes data as per the instructions,
it controls all operations inside a computer, and
it gives results in the form of output.
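These five operations can be mirrored in a few lines of Python; the sketch below is only an illustration added alongside the original text, with the interpreter itself playing the role of the control unit that sequences the steps:

    # Illustrative input-store-process-output cycle (hypothetical example).
    numbers_text = input("Enter numbers separated by spaces: ")  # input unit: accept data
    stored = [float(n) for n in numbers_text.split()]            # storage unit: hold the data
    total = sum(stored)                                          # processing: arithmetic on the data
    print("Sum of the numbers:", total)                          # output unit: give the result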
Maintenance and Technical Support
The main role of maintenance and technical support is understanding how information systems are used, as well as applying technical knowledge related to computer hardware and software. As IT technical support, you will monitor and maintain the computer systems and networks of your organization.
Outsourced IT support companies offer free or paid technical support and maintenance for the products they sell or the services that they provide. IT support can be provided in person or by offshore support staff. Other support services can be conducted via online chat, over the phone, email, forum comments, or incident reporting. Many big companies have in-house IT departments that conduct troubleshooting and technical support for their own staff. However, many small and medium-sized enterprises (SMEs) resort to outsourced IT support to minimize overhead costs and achieve strategic operational advantages. Types of IT technical support:
Computer networks - installing, configuring and maintaining computers in large
organizations
Desktop support - providing direct user assistance
Contract Hardware Maintenance - working for a business with contracts to maintain
and repair computer hardware
Vertical Software Applications - working for a supplier of a software application for a
specific business sector, such as retail, travel, or pharmaceuticals
Managed Hosting Providers - ensuring clients' websites and applications stay up and running and offering technical support
The main types of maintenance are:
1. Corrective maintenance
2. Preventive maintenance
3. Risk-based maintenance
4. Condition-based maintenance
Condition-based maintenance: Maintenance based on monitoring equipment performance and controlling the corrective actions taken as a result. The actual equipment condition is continuously assessed through on-line detection of significant working parameters and their automatic comparison with average values and performance. Maintenance is carried out when certain indicators signal that the equipment is deteriorating and the failure probability is increasing. In the long term, this strategy drastically reduces the costs associated with maintenance, minimizes the occurrence of serious faults, and optimizes the management of the available economic resources.
The care and maintenance of laboratory equipment is an integral part of quality assurance in the lab. Well-maintained lab equipment ensures that data is consistent and reliable, which in turn impacts the productivity and integrity of the work produced.
Furthermore, since laboratory equipment generally takes up a big cut of the budget, good
maintenance contributes to cost-cutting measures, by lowering the chances of premature
repurchases and replacement.
In addition, routine maintenance ensures that lab equipment is safe to use by highlighting and repairing faulty equipment and equipment parts. Various procedures and routines will ensure that your laboratory equipment is well maintained and cared for; these include:
Developing standard operating procedures for all lab equipment.
Preparing documentation on each specific equipment, outlining the repairs and
maintenance undertaken.
Outlining a preventive maintenance program for each piece of equipment.
Training both technical and managerial staff on proper use and care of lab equipment.
A. Standard Operating Procedure for Maintenance of Lab Equipment
Standard operating procedures (SOPs) are a must for all complex lab equipment. This ensures
that the correct use and maintenance of the equipment are integrated into routine work.
Detailed instructions for equipment use should be sourced from the manufacturer’s operator
manual. The SOP can be written by the lab manager, an equipment officer, or staff that
frequently works with the specific equipment. The SOP should also be easily accessible on the
workbench.
A proper SOP should contain the following:
Reference documents, such as the manuals used to prepare the SOP and the manufacturer’s websites, should be outlined for use when further information is required.
B. Equipment Maintenance Documentation
The maintenance log outlines equipment identification and descriptions such as equipment name, model number, manufacturer, purchase date, warranty, etc., as shown in Table 1.
It also contains the description of repair work, parts replacements, tests, measurements,
adjustments, or deep cleaning done on the equipment.
Item identification
Equipment: Brand:
Purchase Date: Model:
Storage/Position in lab: Serial No.:
Warranty Expiration:
Manufacturer: Tel. No.:
Contact Person: Tel. No.:
Table 1: Example of an identification and maintenance log.
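As a hedged illustration only (the module itself assumes a paper or spreadsheet log), the identification fields of Table 1 can also be captured as a small electronic record; the field names below simply mirror the table, and the sample values are invented:

    from dataclasses import dataclass

    @dataclass
    class EquipmentRecord:
        # Fields mirror the identification section of Table 1.
        equipment: str
        brand: str
        model: str
        serial_no: str
        purchase_date: str
        warranty_expiration: str
        storage_position: str
        manufacturer: str
        manufacturer_tel: str
        contact_person: str
        contact_tel: str

    # Hypothetical entry for demonstration purposes.
    record = EquipmentRecord(
        equipment="Centrifuge", brand="ExampleBrand", model="X-100",
        serial_no="SN-0001", purchase_date="2022-10-01",
        warranty_expiration="2024-10-01", storage_position="Bench 3",
        manufacturer="Example Ltd.", manufacturer_tel="000-000-0000",
        contact_person="Lab Officer", contact_tel="000-000-0000")
    print(record)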
The maintenance documentation can also:
Highlight trends such as repair costs, equipment durability, and efficiency, thereby helping lab managers to make decisions on the equipment models and brands that are best suited for the lab.
Point out the equipment that undergoes wear and tear frequently. If the cause of a malfunction is operation related, this can highlight the need for re-training of laboratory staff.
A preventive maintenance program ensures that the equipment is functioning with minimal
interruptions and within the manufacturer’s specifications. It maximizes the equipment
operational efficiency and reduces overall costs. It is mainly recommended for equipment with
moving parts, gas or liquid flow, optical systems, and filters. The maintenance and quality control are performed under an outlined schedule, and the results are documented.
Logs for error reports and failure events; see example in Table 2.
The servicing and calibration done on the equipment and the dates for subsequent
calibrations.
Stickers should be used for equipment labelling to summarize the preventive maintenance
actions undertaken, the date, and the personnel involved.
Failure Events
Date | Event | Corrective action | Operator
Table 2: Failure events log.
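A hedged sketch of how such a failure-events log could be kept electronically, using only Python's standard csv module; the file name and the sample entry are assumptions made for illustration:

    import csv
    from datetime import date

    # Append one failure event using the Table 2 columns:
    # Date, Event, Corrective action, Operator.
    with open("failure_events.csv", "a", newline="") as log_file:
        writer = csv.writer(log_file)
        writer.writerow([date.today().isoformat(),
                         "Spindle motor overheating",   # Event (hypothetical)
                         "Replaced cooling fan",        # Corrective action (hypothetical)
                         "A. Technician"])              # Operator (hypothetical)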
Training of both technical and managerial staff is not a one-time activity. It should be regular
with additional courses given when new equipment or improved models are bought.
The initial induction training should be elaborate with an expert-guided discussion and
demonstration, while follow-up training can be done in-house to refresh the staff technique.
The lab manager or lab quality control officer is responsible for ensuring all staff are well trained.
1. Regular Cleaning
Regular cleaning of lab equipment ensures that it is ready for use when needed, that stubborn stains/substances do not get a firm hold, and that experiments are not contaminated by impurities carried over from previous experiments.
Cleaning reagents and cleaning aids used are specific for laboratory equipment care.
In addition to cleaning lab equipment before and after each use, a schedule is required for more in-depth cleaning. This might involve disassembling certain machines to clean hard-to-reach parts.
Always follow instructions from the manufacturer on cleaning policy. Certain parts of the
equipment might require very specific solvents, cleaning materials, or drying procedures.
2. Calibration
Calibration involves comparing the measurements of equipment against the standard unit of
measure, for the purpose of verifying its accuracy and making necessary adjustments. Regular
calibration of laboratory equipment should be done because over time, biases develop in
relation to the standard unit of measure.
This guards against invalid data and ensures safety during experimentation. An independent
specialist, that can provide calibration certificates where necessary, should be engaged in the
process.
Calibration should also be repeated whenever the equipment is hit by a force, dropped on the ground, or involved in an accident or event of safety concern.
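To make the idea of calibration bias concrete, here is a small sketch (not taken from the module): it compares an instrument's readings with a reference standard and flags the device for adjustment when the average bias exceeds a chosen tolerance. The readings and the tolerance value are arbitrary assumptions.

    def needs_adjustment(readings, reference_value, tolerance):
        """Return True if the mean bias against the standard exceeds the tolerance."""
        biases = [r - reference_value for r in readings]
        mean_bias = sum(biases) / len(biases)
        return abs(mean_bias) > tolerance

    # Example: a balance checked against a 100.000 g reference mass (values hypothetical).
    print(needs_adjustment([100.012, 100.015, 100.011], 100.000, tolerance=0.010))  # True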
3. Repairs and Refurbishment
Lab equipment is generally costly, and repairs and refurbishment prolong the lifespan of equipment, saving the lab the expense of new purchases.
Repair and/or refurbish faulty or worn-out lab equipment without delay. Faulty machines may stop working suddenly in the middle of an experiment, leading to losses, and they can also be a source of safety concerns.
Minor repairs can be done by dedicated staff, while major repairs should be directed to specialists with knowledge of the specific machine or equipment.
Refurbish old equipment to give it a new lease of life by cleaning thoroughly, polishing where necessary, lubricating movable parts, and replacing small worn-out bits.
4. Quality Replacement
High-quality lab equipment is easier to maintain and its durability translates to reduced costs
in the long term.
Non-faulty equipment that is too old should also be replaced. While some wear and tear might not be noticeable during operation, outdated machines are not reliable, and technical support in terms of servicing and acquisition of spare parts may be limited.
Maintenance Tools
Specialized Tools
Identify tools and software used with personal computer components and their purposes.
For every job there is a right tool. Make sure that you are familiar with the correct use of each tool and that the right tool is used for the current task. Skilled use of tools and software makes the job less difficult and ensures that tasks are performed properly and safely.
Hardware tools
Hand Tools: include various screwdrivers, needle-nose pliers, hex drivers, wire cutters, tweezers, a part retriever, a flashlight, assorted flat-blade screwdrivers, assorted Phillips screwdrivers, assorted small nut drivers, assorted small Torx bit drivers, and diagonal pliers.
Cleaning Tools: include a soft cloth, a can of compressed air, cable ties, and a parts organizer.
Other related tools include contact cleaner, foam swabs, cleaning supplies, a magnifying glass, clip leads, and IC extractors.
ii. Identify software tools and their purposes
Software Tools
Disk Management
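The module gives no worked example here, but as a hedged illustration of a basic disk-management check, Python's standard shutil.disk_usage can report total, used, and free space on a drive; the drive path used below is an assumption for a Windows system (use "/" on Linux or macOS):

    import shutil

    # Report capacity and free space for the system drive (path is an assumption).
    total, used, free = shutil.disk_usage("C:\\")
    gib = 1024 ** 3
    print(f"Total: {total / gib:.1f} GiB, Used: {used / gib:.1f} GiB, Free: {free / gib:.1f} GiB")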
Organizational Tools
It is important that a technician document all services and repairs. The documentation can then
be used as reference material for similar problems that are encountered in the future. Good
customer service includes providing the customer with a detailed description of the problem
and the solution.
Personal reference tools
Notes – Make notes as you go through the investigation and repair process. Refer to these notes
to avoid repeating previous steps and to determine what steps to take next.
Journal – Document the upgrades and repairs that you perform. The documentation should
include descriptions of the problem, possible solutions that have been tried in order to correct
the problem, and the steps taken to repair the problem. Be sure to note any configuration
changes made to the equipment and any replacement parts used in the repair. Your journal,
along with your notes, can be valuable when you encounter similar situations in the future.
History of repairs – Make a detailed list of problems and repairs, including the date,
replacement parts, and customer information. The history allows a technician to determine
what work has been performed on a computer in the past.
Internet reference tools - The Internet is an excellent source of information about specific
hardware problems and possible solutions:
Internet search engines
News groups
Manufacturer FAQs (frequently asked questions)
Online computer manuals
Online forums and chat
Technical websites
Manufacturer download areas for new drivers
Miscellaneous tools – With experience, you will discover many additional items to add to the
toolkit. A working computer is a valuable resource to take with you on computer repairs in the
field. A working computer can be used to research information, download tools or drivers, or
communicate with other technicians. Using known good working components to replace
possible bad ones in computers will help you quickly determine which component may not be
working properly.
1.3. Static energy and its effects on computer
Static Electricity can cause numerous problems with a system. These problems usually appear
during the winter months, when humidity is low, or in extremely dry climates. Static charge builds up in dry conditions through friction between materials. In these cases, you may need to take special precautions to ensure that the systems can function properly. The usual symptom
is in the form of a parity-check (memory) error or a totally locked up system. Any time the
system unit is opened or you are handling circuits outside the system unit, however, you must
be careful with static. You can damage a component with static discharges if these charges are
not routed to a ground. It is recommended that you handle boards and adapters by a grounding
point first, to minimize the potential for any static damage. But most static problems
experienced by functional systems are caused by improper grounding. One of the easiest ways
to prevent static problems is with a solid stable ground, which is extremely important for
computer equipment. A poorly designed grounding system is one of the largest causes of static problems. A good way to solve static problems is to prevent the static charge from getting into the computer in the first place. The chassis ground in a properly designed system
serves as a static guard for the computer to redirect the static charge safely to the ground, which
means that the system must be plugged into a properly grounded three-wire outlet. If the
problem is extreme, you can resort to other measures. One is to use a properly grounded static
mat underneath the computer. Touch this mat first before touching the computer itself. This
procedure ensures that any static discharges are routed to the ground, away from the system
unit internals. If problems still persist, you may want to check out the electrical building
ground.
1.4. Safety rules
Electrical Precautions
Pull the plug. Do not work on a system, under any circumstances, while it is plugged in. Don't
just turn it off and think that's enough. It is prudent to disconnect all external cables from the
PC before opening it up, because cables such as network or phone cords may cause electric shock if not disconnected from the computer. Stay out of the power supply unless you know what
you are doing. Similarly, do not open up your monitor unless you are absolutely sure of what
you are doing. You can electrocute yourself inside the monitor even with the power disconnected. Watch out for components being left inside the box. Dropping a screw inside your
case can be a hazard if it isn't removed before the power is applied, because it can cause
components to short-circuit.
Mechanical Precautions
Make sure you have a large, flat area to work on. That will minimize the chance of components
falling, getting bent, or getting lost. Don't tighten screws too far or you may strip them or make
it impossible to loosen them later. Sometimes it makes sense to turn your machine on with the
cover off the case, to see if something works before replacing the cover. If you do this, be very
careful to keep objects from accidentally falling into the box.
Data Precautions
Back up your data before you open the box, even if the work you are doing seems "simple".
This applies doubly to any upgrades or repairs that involve changes to the motherboard,
processor or hard disk since data is irreplaceable. Make a copy of your system's BIOS settings
before doing any major work or changing anything in the BIOS.
Electrostatic Discharge Precautions
Static electricity is a major enemy of computer components. Static electricity can zap and ruin
your CPU, memory or other components instantly. The safest way to avoid this problem is to
work at a static-safe station or use a commercial grounding strap. The chassis ground in a
properly designed system serves as a static guard for the computer to redirect the static charge
safely to the ground, which means that the system must be plugged into a properly grounded
three-wire outlet.
Safety with Children
If you have children, I'd be extra careful about leaving an open machine lying around. Kids are inquisitive, and curiosity and live electrical equipment don't mix well. In general, kids should always be supervised when using a PC. This helps avoid both possible problems: the PC hurting the kids, or the kids damaging the PC (or destroying data).
Other safety rules
Avoid putting computers on the same circuits as air conditioning.
Use a spike suppressor to avoid damage from electrical spikes.
Use a voltage regulator to avoid problems with low voltage or brown-outs.
Use a UPS (uninterruptible power supply) to keep equipment running during a power failure.
1.5. Preventive maintenance and troubleshooting
source. The term UPS is frequently used to describe two different types of power backup
systems: the first is a standby power system, and the second is a truly uninterruptible power
system.
The standby system monitors the power input line and waits for a significant variation to occur.
The batteries in this unit are held out of the power loop and draw only enough current from the
AC source to stay recharged. When an interruption occurs, the UPS senses it and switches the
output of the batteries into an inverter circuit that converts the DC output of the batteries into
an AC current and voltage that resembles the commercial power supply. This power signal is typically applied to the computer within 10 milliseconds. Truly uninterruptible systems do not keep the batteries offline. Instead, the batteries and converters are always actively attached to the output of the UPS. When an interruption in the supply occurs, no switching of the output is required. The battery/inverter section just continues under its own power. Standby systems do
not generally provide a high level of protection from sags and spikes. They do, however,
include additional circuitry to minimize such variations. Conversely, an uninterruptible system
is an extremely good power-conditioning system. Because it always sits between the
commercial power and the computer, it can supply a constant power supply to the system.
Troubleshooting process steps
Step 1 - Identify the problem
During the troubleshooting process, gather as much information from the customer as
possible, but always respectfully.
Use the following strategy during this step:
1. Start by using open-ended questions to obtain general information.
2. Continue using closed-ended (yes/no) questions to get relevant information.
3. Then document the responses in the work order and in the repair journal.
4. And lastly, verify the customer’s description by gathering data from the computer using applications and sources such as the following.
Event Viewer:
The Event Viewer logs details about system and application errors, including:
What problem occurred
Date and time of the problem
Severity of the problem
Source of the problem
Event ID number
Which user was logged in when the problem occurred
Although the Event Viewer lists details about the error, you might need to
further research the solution.
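As a side illustration (not from the module), recent System-log entries can also be pulled from the command line with Windows' built-in wevtutil utility; the sketch below assumes a Windows machine and simply wraps that command in Python:

    import subprocess

    # Query the five most recent System events in plain text (newest first).
    result = subprocess.run(
        ["wevtutil", "qe", "System", "/c:5", "/rd:true", "/f:text"],
        capture_output=True, text=True)
    print(result.stdout)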
Device Manager:
The Device Manager displays all of the devices that are configured on a computer. Any device
that the operating system determines to be acting incorrectly is flagged with an error icon. This
type of error has a yellow circle with an exclamation point (!). If a device is disabled, it is
flagged with a red circle and an "X". A yellow question mark (?) indicates that the hardware is
not functioning properly because the system does not know which driver to install for the
hardware.
Beep Codes:
Each BIOS manufacturer has a unique beep sequence for hardware failures. When
troubleshooting, power on the computer and listen. As the system proceeds through the POST,
most computers emit one beep to indicate that the system is booting properly. If there is an
error, you might hear multiple beeps. Document the beep code sequence, and research the code
to determine the specific hardware failure.
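Beep meanings differ between BIOS manufacturers, so the small lookup table below is purely hypothetical; it only sketches how a technician's reference sheet might be consulted once the sequence has been documented. Always check the codes published for the specific BIOS.

    # Hypothetical beep-code reference; real codes vary by BIOS manufacturer.
    beep_codes = {
        "1 short": "POST completed normally",
        "3 short": "Possible memory (RAM) failure",
        "1 long, 2 short": "Possible video adapter failure",
    }

    observed = "3 short"  # the sequence documented during troubleshooting
    print(beep_codes.get(observed, "Unknown code - consult the BIOS manufacturer's documentation"))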
BIOS Information:
If the computer boots and stops after the POST, investigate the BIOS settings to determine
where to find the problem. A device might not be detected or configured properly. Refer to the
motherboard manual to make sure that the BIOS settings are accurate.
Diagnostic Tools:
Conduct research to determine which software is available to help diagnose and solve
problems. There are many programs available that can help you troubleshoot hardware. Often,
manufacturers of system hardware provide diagnostic tools of their own. For instance, a hard drive manufacturer might provide a tool that you can use to boot the computer and diagnose
why the hard drive does not boot Windows.
Step 2 - Establish a theory of probable causes
Create a list of the most common reasons why the error would occur.
List the easiest or most obvious causes at the top with the more complex causes at the
bottom.
Step 3 – Determine an exact cause
Determine the exact cause by testing the theories of probable causes one at a time,
starting with the quickest and easiest.
After identifying an exact cause of the problem, determine the steps to resolve the
problem.
If the exact cause of the problem has not been determined after you have tested all your
theories, establish a new theory of probable causes and test it.
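The test-one-theory-at-a-time approach described above can be sketched in a few lines of Python; this is an illustration only, and the candidate causes and test functions are hypothetical placeholders:

    def loose_cable_found():          # quick visual check (hypothetical)
        return False

    def swapping_ram_fixes_it():      # test with a known good module (hypothetical)
        return True

    # Theories ordered from the quickest and easiest to test to the most complex.
    theories = [
        ("Loose data cable", loose_cable_found),
        ("Faulty RAM module", swapping_ram_fixes_it),
    ]

    for cause, test in theories:
        if test():
            print("Exact cause determined:", cause)
            break
    else:
        print("No theory confirmed - establish a new theory of probable causes and test it.")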
Step 4 – Implement the solution
Sometimes a quick procedure can determine the exact cause of the problem or even correct the problem. If it does, you can go to step 5.
If a quick procedure does not correct the problem, you might need to research the
problem further to establish the exact cause.
Divide larger problems into smaller problems that can be analysed and solved
individually.
Step 5 – Verify solution and full system functionality
Verify full system functionality and implement any preventive measures if needed.
This ensures that you have not created another problem while repairing the computer.
Step 6 – Document findings
Discuss the solution with the customer.
Have the customer confirm that the problem has been solved.
Document the process:
Problem description
Steps to resolve the problem
Components used in the repair
Chapter Two
disk drives. However, they do possess the shortcomings of full towers (such as heat build-
up).
A typical system unit contains a single power-supply unit that converts commercial power
into the various levels required by the different units in the system. The number and types
of disk drives installed in a system varies according to the intended use of the system.
However, a single floppy disk drive (FDD) unit, a single Hard Disk Drive (HDD) unit, and
a single CD-ROM drive are typically installed to handle the system’s mass-storage
requirements.
The system board is the center of the system. It contains the portions of the system that
define its computing power and speed. System boards also are referred to as motherboards,
main boards, or planar boards. Plug-in options adapter cards (or just adapter cards) permit
a wide array of peripheral equipment to be added to the basic PC system. The most
frequently installed adapter cards in PC systems are video adapter cards. Older units also
may include different types of input/output (I/O) adapter cards.
Adapter cards plug into expansion slot connectors located at the rear of the system board.
Peripheral devices, such as printers and modems, normally connect to adapter cards
through expansion slot openings in the rear of the system unit.
Determine which type of case you are working on.
If the case is a desktop model, does the cover slide off the chassis in a forward
direction, bringing the front panel with it, or does it raise off of the chassis from
the rear? If the back lip of the outer cover folds over the edge of the back panel,
the lid raises up from the back, after the retaining screws are removed.
If the retaining screws go through the back panel without passing through the lip,
the outer cover will slide forward after the retaining screws have been removed.
Major Components
For orientation purposes, the end of the board where the keyboard connector, expansion slots,
and power connectors are located on standard desktop and tower system boards is generally
referred to as the rear of the board. The rear of the system board corresponds to the back of the
system unit. As mentioned earlier, the system board receives power from the power-supply unit
through special power connectors. These connectors are often located along the right-rear
corner of the system board so that they are near the power-supply unit. They also are keyed so
that the power cord cannot be plugged in backward. On AT system boards, the power
connectors are typically labeled P8 and P9 and are always located directly beside each other. However, they look identical and can accidentally be swapped. Therefore, in an AT system the P8 and P9 power connectors should always be installed so that the black wires from each connector are
together. The system’s keyboard connector is normally located along the back edge of the board
as well.
Microprocessors
The microprocessor is the major component of any system board. It can be thought of as the
“brains” of the computer system because it reads, interprets, and executes software instructions,
and also carries out arithmetic and logical operations for the system. The original PC and PC-
XT computers were based on the 8/16-bit 8088 microprocessor from Intel. Nowadays there are many types of microprocessors, and they continue to evolve rapidly. For instance, Intel offers many versions such as the Core i3, i5, and i7, and AMD is another brand that also offers many versions.
Primary Memory
All computers need a place to temporarily store information while other pieces of information
are processed. In digital computers, information storage is normally conducted at two different
levels: primary memory (made up of semiconductor RAM and ROM chips), and mass-storage
memory (usually involving floppy and hard disk drives). Most of the system’s primary memory
is located on the system board. Primary memory typically exists in two or three forms on the
system board:
Read-only memory (ROM): - Contains the computer’s permanent start-up
programs.
Random-access memory (RAM): - Quick enough to operate directly with the
microprocessor and can be read from, and written to, as often as desired.
Cache memory: - A fast RAM system specially designed to hold information
that the microprocessor is likely to use.
ROM devices store information in a permanent fashion and are used to hold programs and data
that do not change. RAM devices retain only the information stored in them as long as electrical
power is applied to the IC. Any interruption of power causes the contents of the memory to
vanish. This is referred to as volatile memory. ROM, on the other hand, is non-volatile.
Chipsets
The first digital computers were giants that took up entire rooms and required several
technicians and engineers to operate. They were constructed with vacuum tubes and their
computing power was limited compared to modern computers. However, the advent of IC
technology in 1964 launched a new era in compact electronic packaging. The much-smaller,
low-power transistor replaced the vacuum tube and the size of the computer started to shrink.
The first ICs were relatively small devices that performed simple digital logic. These basic
digital devices still exist and occupy a class of ICs referred to as small-scale integration (SSI)
devices. SSI devices range up to 100 transistors per chip. As manufacturers improved
techniques for creating ICs, the number of transistors on a chip grew and complex digital
circuits were fabricated together. Eventually, large-scale integration (LSI) and very large-scale
integration (VLSI) devices were produced. LSI devices contain between 3,000 and 100,000
electronic components, and VLSI devices exceed 100,000 elements.
Expansion Slots
It would be expensive to design and build a computer that fit every conceivable user
application. With this in mind, computer designers include standardized connectors that enable
users to configure the system to their particular computing needs. Most PCs use standardized
expansion slot connectors that enable various types of peripheral devices to be attached to the
system. Optional I/O devices, or their interface boards, are plugged into these slots to connect
the devices to the system’s address, data, and control buses.
Note: Changing of the guard. System boards fundamentally change for four reasons: new industry form factors, new microprocessor designs, new expansion slot types, and reduced chip counts. Reduced chip counts are typically the result of improved microprocessor support chipsets.
It should be evident that all system boards are not alike. The term form factor is used to refer
to the physical size and shape of a device. It also describes the general locations of components
and parts. When used in conjunction with system boards, however, it also refers to their case
style and power-supply compatibility, as well as to their I/O connection placement schemes.
These factors come into play when assembling a new system from components, as well as in
repair and upgrade situations in which the system board is being replaced. The original IBM
PC form factor established the industry standard for the PC, PC-XT, and PC-AT clone system
boards. While IBM produced a large AT format board, the industry soon returned to the PC-
XT/Baby AT form factor. Within the parameters of this form factor several variations of the
AT-class system board have been produced. Currently, technicians must deal primarily with only two system board form factors: the older AT system boards and the newer ATX system
boards. Although the AT class of system boards has been around for a long time, the ATX class
currently dominates the new computer market.
The newest system board designation is the ATX form factor developed by Intel for Pentium-
based systems. This specification is an evolution of the older Baby AT form factor that moves
the standard I/O functions to the system board. The ATX specification basically rotates the
Baby AT form factor by 90 degrees, relocates the power-supply connection, and moves the
microprocessor and memory modules away from the expansion slots. The figure below represents
a Pentium-based, ATX system board that directly supports the FDD, HDD, serial, and parallel
ports. The board is 12” (30.5cm) wide and 9.6” (24.4cm) long. A revised, mini-ATX
specification allows for 28.44cm by 20.8cm system boards. The hole patterns for the ATX and
mini-ATX system boards require a case that can accommodate the new boards. Although ATX
shares most of its mounting-hole pattern with the Baby AT specification, it does not match
exactly.
The power-supply orientation enables a single fan to be used to cool the system. This provides
reduced cost, reduced system noise, and improved reliability. The relocated microprocessor
and memory modules allow full-length cards to be used in the expansion slots while providing
easy upgrading of the microprocessor, RAM, and I/O cards. The fully implemented ATX
format also contains specifications for the power-supply and I/O connector placements. In
particular, the ATX specification for the power-supply connection calls for a single, 20-pin
power cord between the system board and the power-supply unit rather than the typical P8/P9
cabling.
AT System Boards
The forerunner of the ATX system board was a derivative of the Industry Standard Architecture
system board developed for the IBM PC-AT. The original PC-AT system board measured 30.5 by 33 centimetres. As the PC-AT design became the real industry standard, printed circuit board
manufacturers began to combine portions of the AT design into larger IC devices to reduce the
size of their system boards. These chipset-based system boards were quickly reduced to match
that of the original PC and PC-XT system boards (22 cm by 33 cm). This permitted the new 80286
boards to be installed in the smaller XT-style cases.
System boards are generally removed for one of two possible reasons.
Either the system board has failed and needs to be replaced or
The user wants to install a new system board with better features.
In either case, it is necessary to remove the current system board and replace it. The removal procedure can be defined in five steps, as described below.
To replace a system board, it is necessary to disconnect several cables from the old system
board and reconnect them to the new system board. The easiest way to handle this is to use
tape (preferably masking tape) to mark the wires and their connection points (on the new
system board) before removing any wires from the old system board.
Unplug all power cords from the commercial outlet. Remove all peripherals from the system
unit. Disconnect the mouse, keyboard, and monitor signal cable from the rear of the unit.
Finally, disconnect the monitor power cord from the system (or the outlet). The following
Figure illustrates the system unit’s back-panel connections.
Unplug the power cord from the system unit. Determine which type of case you are working
on. If the case is a desktop model, does the cover slide off the chassis in a forward direction,
bringing the front panel with it, or does it raise off of the chassis from the rear? If the back lip
of the outer cover folds over the edge of the back panel, the lid raises up from the back, after
the retaining screws are removed. If the retaining screws go through the back panel without
passing through the lip, the outer cover will slide forward after the retaining screws have been
removed.
A wide variety of peripheral devices are used with PC-compatible systems. Many of these
devices communicate with the main system through options adapter cards that fit into
expansion slot connectors on the system board. Remove the retaining screws that secure the
adapter cards to the system unit’s back panel. Remove the adapter cards from the expansion
slots. It is a good practice to place adapter cards back into the same slots they were removed
from, if possible. Store the screws properly. The following Figure shows how to perform this
procedure.
If the system employs an MI/O card, disconnect the floppy drive signal cable (smaller signal cable) and
the hard drive signal cable (larger signal cable) from the card. Also disconnect any I/O port connections
from the card before removing it from the expansion slot.
The Pentium system board provides the interface connections for the system’s disk drives. The
disk drive signal cables must be removed in order to exchange the system board. Although the
FDD and IDE connectors are different sizes, there have been instances in which individuals
have forced the 34-pin FDD cable onto the 40-pin IDE connector. The main consideration
when removing the disk drive cables from the system board is their locations and orientation.
Some system boards furnish keyed (named) FDD and IDE connections so that they cannot be
plugged in backward. However, this is not true for all system boards. Also, it is easy to reverse
the primary and secondary IDE connections because they are identical. It also is possible to
confuse the 80-wire/40-pin IDE cable used in some advanced IDE interfaces with the standard
40-wire/40-pin IDE cable used with other IDE devices.
Removing the System Board
Verify the positions of all jumpers and switch settings on the old system board. Record these
settings and verify their meanings before removing the board from the system. This may require
the use of the board’s User Manual, if available. Remove the grounding screw (or screws) that
secure the system board to the chassis. Store the screw(s) properly.
In a desktop unit, slide the system board toward the left (as you face the front of the unit) to
free its plastic feet from the slots in the floor of the system unit. Tilt the left edge of the board
up, and then lift it straight up and out of the system unit, as illustrated in the following Figure.
Typical symptoms associated with system board hardware failures include the following:
A CMOS System Option Not Set message displays, indicating failure of the CMOS
battery or CMOS checksum test.
A CMOS Checksum Failure message displays, indicating CMOS battery low or
CMOS checksum test failure.
A 201 error code displays, indicating a RAM failure.
A parity check error message displays, indicating a RAM error.
Typical symptoms associated with system board setup failures include the following:
A CMOS Inoperational message displays, indicating failure of the CMOS shutdown register.
A Display Switch Setting Not Proper message displays—failure to verify display type.
A CMOS Display Mismatch message displays—failure of display-type verification.
A CMOS Memory Size Mismatch message displays—system configuration and setup
failure.
A CMOS Time & Date Not Set message displays—system configuration and setup
failure.
An IBM-compatible error code displays, indicating that a configuration problem has
occurred.
Typical symptoms associated with system board I/O failures include the following:
Speaker doesn’t work during operation. The rest of the system works, but no sounds
are produced through the speaker.
Keyboard does not function after being replaced with a known good unit.
The system board normally marks the end of any of the various troubleshooting schemes given
for different system components. It occupies this position for two reasons.
First, the system board supports most of the other system components, either directly
or indirectly.
Second, it is the system component that requires the most effort to replace and test.
Chapter Three
compatible machines or computer architectures with multiple, differing implementations.
Programs written for one machine would run on no other kind, even other kinds from the same
company. This was not a major drawback then because no large body of software had been
developed to run on computers, so starting programming from scratch was not seen as a large
barrier.
The design freedom of the time was very important because designers were very constrained
by the cost of electronics, and only starting to explore how a computer could best be organized.
Some of the basic features introduced during this period included index registers (on
the Ferranti Mark 1), a return address saving instruction (UNIVAC I), immediate operands
(IBM 704), and detecting invalid operations (IBM 650).
By the end of the 1950s, commercial builders had developed factory-constructed, truck-
deliverable computers. The most widely installed computer was the IBM 650, which
used drum memory onto which programs were loaded using either paper punched
tape or punched cards. Some very high-end machines also included core memory which
provided higher speeds. Hard disks were also starting to grow popular.
A computer is an automatic abacus. The type of number system affects the way it works. In the
early 1950s, most computers were built for specific numerical processing tasks, and many
machines used decimal numbers as their basic number system; that is, the mathematical
functions of the machines worked in base-10 instead of base-2 as is common today. These were
not merely binary-coded decimal (BCD). Most machines had ten vacuum tubes per digit in
each processor register. Some early Soviet computer designers implemented systems based
on ternary logic; that is, a bit could have three states: +1, 0, or -1, corresponding to positive,
zero, or negative voltage.
An early project for the U.S. Air Force, BINAC attempted to make a lightweight, simple
computer by using binary arithmetic. It deeply impressed the industry.
As late as 1970, major computer languages were unable to standardize their numeric behavior
because decimal computers had groups of users too large to alienate.
Even when designers used a binary system, they still had many odd ideas. Some used sign-
magnitude arithmetic (-1 = 10001), or ones' complement (-1 = 11110), rather than
modern two's complement arithmetic (-1 = 11111). Most computers used six-bit character sets
because they adequately encoded Hollerith punched cards. It was a major revelation to
designers of this period to realize that the data word should be a multiple of the character size.
They began to design computers with 12-, 24- and 36-bit data words (e.g., see the TX-2).
In this era, Grosch's law dominated computer design: computer cost increased as the square of
its speed.
3.2. Types of CPU
There are a few different options out there and when you buy a central processing unit, it can
be hard to know just what you really need in terms of speed, core, costs, brands, and more. It
is complicated and it isn’t something we talk about all that often.
Still, a CPU is known as the “brain of the computer,” which means that it is actually one of the
most important things. A good CPU helps with multitasking, performance, and speed above all
else, but it helps overall efficiency in a million little ways.
In this guide, our experts will break down the different central processing unit types and help
you understand where you should look, what you should look for, and how you can use each
one. Keep reading for more.
CPU is the shortened and more commonly used form of central processing unit or processor.
The “unit” is composed of two parts: the central processor and the electronic circuitry that is
located within the tower of your computer.
The CPU is a multitasking device that feeds the computer information, performs logic and
arithmetic for programs, and inputs and outputs operations. It is one of the core elements of the
computer but up until recently, there hasn’t been much discussion on them amongst the general
public.
Modern CPUs are microprocessors built from metal-oxide-semiconductor (MOS) integrated circuits. Such a chip can carry the CPU together with memory, microcontrollers, interfaces, and other systems, and it takes up a single socket as a “CPU core.”
Your CPU is one of the most important elements of your computer, managing almost all of the
commands and calculations that make it work properly. It controls components, peripherals,
and important processes no matter what you do.
A CPU needs to rapidly input and output information. The different components all need to be
powerful enough to work together: everything will perform to the capabilities of the slowest,
weakest part.
It is important to know that there are two leading manufacturers of CPU
processors: AMD and Intel. There are other manufacturers, but these are the most prolific.
When we talk about the type of central processing unit, we need to think about not only the
number of cores, but the brand, size, speed, and more. However, the most defining component
is the number of cores, so that is how we classify the different types of CPUs.
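If you are curious how many cores the operating system reports for your own machine, two lines of Python will show it; note that this counts logical processors, so a CPU with simultaneous multi-threading may report twice its physical core count. This snippet is an added illustration, not part of the original text.

    import os

    print("Logical processors reported by the OS:", os.cpu_count())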
A. Single-Core CPU
The original type of CPU was a single-core CPU. It is available widely and used in most
standard personal computers and business computers.
These CPUs can only execute one command at a time, making them poor options for
multitasking. If you are trying to do more than one thing at a time, it is possible, but you will
notice a degradation in performance.
If one operation starts, the next process will have to wait in a virtual queue of sorts until the
first one finishes. The computer doesn’t like when these queues form and it could start to freeze
and take a long time to perform operations that normally take just a few seconds.
If possible, we don’t recommend this type of CPU at all unless someone just needs a computer
for bare-bones activities like word processing or social media browsing.
B. Dual-Core CPU
A dual core CPU consists of two cores that act like one CPU. This means that the CPU can
multitask more effectively and allows two things to happen at once, or more. Once again, you
can overload a dual-core CPU, but it is harder to do.
To most effectively use a dual-core CPU, both the programs and the operating system need to
have a unique code in them called a “simultaneous multi-threading technology” code. This is
something found standard in most operating systems and programs today, but there are a few
exceptions.
Most computers today will have at least a dual-core CPU, even the ones you buy without
customization. Even this number of cores is becoming outdated and limits future-proofing
capabilities in your build.
C. Quad-Core CPU
A quad-core CPU has four cores on a single CPU processor. It is a refined model that is used
in the best computer builds today, especially in some of the best all-in-one computers and
boxed computers.
The CPU evenly divides the workload between the cores, making it the best option for multitasking. It is not limited to handling one operation at a time, which makes it faster and more efficient than the options above.
Just like with the dual-core CPU, quad-core CPUs use the SMT code to speed up the processes
and make them seem instantaneous in many cases.
For some people, a quad-core CPU will be overkill. However, gamers, programmers, and
anyone who heavily uses the computer while streaming, playing music, editing, or using bulky
programs would benefit from this CPU size.
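As a hedged sketch of how a workload is actually divided among cores, Python's standard multiprocessing module can spread a list of tasks across one worker process per reported core; the busy_work function below is an arbitrary stand-in for a real job:

    from multiprocessing import Pool
    import os

    def busy_work(n):
        # Arbitrary CPU-bound stand-in for a real task.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        tasks = [2_000_000] * 8
        with Pool(processes=os.cpu_count()) as pool:  # one worker per logical processor
            results = pool.map(busy_work, tasks)      # the tasks are divided among the workers
        print(len(results), "tasks completed")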
D. Hexa-Core Processor
Even bigger and faster than a quad-core CPU is the Hexa-core processor. This comes with six
cores and can execute tasks faster than other models. This processor is limited and harder to
find on personal computers than one would expect.
Hexa-core processors are most commonly used in smartphones and tablets. Many smartphones (including Android and Apple models) use a hexa-core processor. These processors make it possible to play games, listen to music, text, and get notifications all at the same time.
E. Octa-Core Processor
Even rarer is the octa-core processor that has eight independent cores to go even faster. These
processors are a bit more expensive and only necessary for people who need to work quickly
for their jobs. Gamers can use an octa-core processor, but only professional gamers who play
for money will really get the advantages out of it.
There are core sets in this build that are typically tasked to do the repetitive, minimum powered
activities that we do on a computer and then there are cores set aside for faster processing and
action. With some software and builds, you can actually pick these programs, but that is a far
more advanced level of computer building.
F. Deca-Core Processor
Processors have come with two cores, four cores, six cores, and more; a deca-core processor uses ten independent cores, though these are among the most expensive and hardest-to-find options. These systems function just like the octa-core processors
do, with some cores dedicated to mundane tasks while others perform the more advanced tasks.
Newer smartphones and tablets are being manufactured with deca-core processors and modern
manufacturing has made them lower in cost than other cores. For the foreseeable future at least,
deca-cores are going to be future-proofed.
Most new market items will have the deca-core processor as well, even budget or low-priced
computers because of how much easier and cheaper the manufacturing process is.
AMD Vs Intel Processor
When we talk about computer processor types, we talk about AMD and Intel for the most part.
These manufacturers are both great, though they tend to be used in different situations. While
they can be, and are, used in other builds, they tend to work best in these categories.
AMD CPUs
AMD CPUs are most often used in servers and workstation computers for bigger corporations.
They can be used in gaming computers as well, but that isn’t as common.
The most common examples of CPU from AMD are:
K6-2, K6-III, Athlon, Duron, Athlon XP, Sempron, Athlon 64, Mobile Athlon 64, Athlon XP-M, Athlon 64 FX, Turion 64, Athlon 64 X2, Turion 64 X2, Phenom FX, Phenom X4, Phenom X3, Athlon 6-series, Athlon 4-series, Athlon X2, Phenom II, Athlon II, E2 series, A4 series, A6 series, A8 series, and A10 series.
AMD doesn’t produce as many CPUs as Intel, but their CPUs tend to be the ones that push
manufacturing forward. They use the latest technology in their builds. Their CPUs are generally
more expensive but have incredible longevity.
Intel CPUs
Intel CPUs are used most frequently in PC builds and smaller companies.
The most common and well known Intel CPUs are:
4004, 8080, 8086, 8087, Pentium w/ MMX, Pentium Pro, Pentium II, Celeron, Pentium D, Pentium Extreme Edition, and Core Duo.
For a long time, Intel was known as the “gold standard” for all CPUs, so it has a deeper base
of CPUs. They have slowed down production in the last few years, however. Intel has created
some great processors that have changed the game, but they have also had some clunkers.
ARM CPUs
One manufacturer we don’t talk about all that often is ARM, even though they may make more CPUs than any other company.
These CPUs are used in tablets, smartwatches, and smartphones because they are smaller and
require less power. In turn, they are cheaper and generate less heat.
3.3. CPU Sockets and Slots
In computer hardware, a CPU socket or CPU slot contains one or more mechanical
components providing mechanical and electrical connections between a microprocessor and
a printed circuit board (PCB). This allows for placing and replacing the central processing
unit (CPU) without soldering.
Common sockets have retention clips that apply a constant force, which must be overcome
when a device is inserted. For chips with many pins, zero insertion force (ZIF) sockets are
preferred. Common sockets include Pin Grid Array (PGA) or Land Grid Array (LGA). These
designs apply a compression force once either a handle (PGA type) or a surface plate (LGA
type) is put into place. This provides superior mechanical retention while avoiding the risk of
bending pins when inserting the chip into the socket. Certain devices use Ball Grid
Array (BGA) sockets, although these require soldering and are generally not considered user
replaceable.
CPU sockets are used on the motherboard in desktop and server computers. Because they
allow easy swapping of components, they are also used for prototyping new
circuits. Laptops typically use surface-mount CPUs, which take up less space on the
motherboard than a socketed part.
As the pin density increases in modern sockets, increasing demands are placed on the printed
circuit board fabrication technique, which permits the large number of signals to be
successfully routed to nearby components. Likewise, within the chip carrier, the wire
bonding technology also becomes more demanding with increasing pin counts and pin
densities. Each socket technology will have specific reflow soldering requirements. As CPU
and memory frequencies increase, above 30 MHz or thereabouts, electrical signalling
increasingly shifts to differential signaling over parallel buses, bringing a new set of signal
integrity challenges. The evolution of the CPU socket amounts to a coevolution of all these
technologies in tandem.
Modern CPU sockets are almost always designed in conjunction with a heat sink mounting
system, or in lower power devices, other thermal considerations.
Selecting a Processor
Understand how processors and motherboards work. Your computer's motherboard is
essentially one large circuit board which provides the base into which you'll plug your
computer's other components, including the processor. Since processors' sizes and connectors
vary depending on the model, you will need to ensure that your selected processor works with
your current motherboard.
Know your computer's limitations. While you can upgrade virtually all Windows desktop
processors and motherboards, upgrading a laptop's processor is often impossible; even if your
laptop model supports changing the processor, doing so is a tricky process that is more likely
to harm your computer than help it.
Find your computer's motherboard model. While you can use Command Prompt to find
your motherboard's basic information, using a free service called Speccy will allow you to see
vital information about your motherboard (e.g., the processor's socket type).
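If you prefer the Command Prompt route mentioned above, the built-in wmic tool (still present on most Windows installations, although deprecated) can report the board's maker and model; the short Python wrapper below is just a convenience and assumes a Windows machine:

    import subprocess

    # Ask Windows Management Instrumentation for the motherboard maker and model.
    result = subprocess.run(
        ["wmic", "baseboard", "get", "Manufacturer,Product,Version"],
        capture_output=True, text=True)
    print(result.stdout)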
Determine the type of processor socket used by your motherboard. If you're using Speccy
to find your motherboard's information, you'll click the CPU tab and look at the "Package"
heading to determine the socket.
You can click the Motherboard tab and then review the "Chipset" heading to see your
processor's chipset, though the service you'll use to check processor compatibility
usually determines this for you.
If you decided not to use Speccy, you can enter your motherboard's name and model
number, followed by "socket" and "chipset", into a search engine and search through
the results.
Alternately, you can almost always find the socket type listed on the motherboard
around the CPU socket.
Find processors which match your motherboard. You'll have to find a processor based on
your current motherboard's socket size and chipset:
Go to https://www.gigabyte.com/us/Support/CPU-Support in your computer's web
browser.
Click the Choose Socket drop-down box, then select your motherboard's socket
number.
Click the Choose Chipset drop-down box, then click chipset number (usually, there is
only one number here).
Click the "Search" icon to the right of the chipset number, then review the names of
compatible processors in the pop-up window.
Find a new motherboard to match your processor if necessary. While you can easily type
your processor's specifications and the phrase "supported motherboards" into a search engine
and review the results, using a CPU support site to do the work for you is easier:
Go back to https://www.gigabyte.com/Support/CPU-Support in your computer's web
browser.
Click the Choose Processor Series drop-down box, then select your processor's name.
Click the Choose Model drop-down box, then click your processor's model.
Click the "Search" icon to the right of the model number, then review the list of
compatible motherboards in the "Model" column.
Buy your processor. Now that you know which processors will work with your computer's
motherboard, you can select the one best-suited to your price range, computational needs, and
region.
Steps to Upgrade CPUs
A. Turn off and unplug your computer. Before you move or open up your computer, make
sure that it is both turned off and unplugged from any power sources.
B. Place your computer on its side. Doing so will give you access to the PC's side panel.
C. Remove the side panel. Some cases will require you to unscrew the side panel, while other
cases only need you to unclamp or slide off the side panel.
D. Ground yourself. This will prevent accidental static electricity discharge. Since static can
completely ruin sensitive computer components such as the motherboard, you'll want to make
sure you remain grounded throughout the entire installation process.
E. Locate the motherboard. The motherboard resembles a circuit board with various wires
attached to it. In most cases, you'll find the motherboard resting on the bottom of the tower.
You may find the motherboard perched against the side of the case instead.
F. Remove the current heat sink. The heat sink is mounted on top of the motherboard, and
usually has a large fan on top of it. To remove the heat sink, you may have to unclip it from
the motherboard, unscrew it, or slide it out.
Since each heat sink has a different design—and, thus, a different installation process—
you'll need to consult your heat sink's instruction manual for model-specific removal
steps.
G. Check your current processor's fit. You'll have to install your new processor using the
same fit as the current one, so knowing which direction the processor is facing will help you
install it correctly the first time.
H. Remove the current processor. Carefully lift the processor, which resembles a square chip,
out of its space on the motherboard.
I. Install your new motherboard if necessary. If you're installing a new motherboard, remove
the current one from the housing, then install the new one according to its installation
instructions (if necessary). You'll then need to hook up your computer's various components to
the motherboard.
J. Plug in your new processor. Your processor should only fit into the slot one way, so don't
force it; just gently place the processor in its slot and check to make sure that it's level.
If the processor is tilted or won't seat properly, try rotating it 90 degrees until it does fit.
Try not to touch the connectors on the bottom of the processor, as doing so may harm
the processor.
K. Reinstall the heat sink. Place a dot of thermal paste on top of the processor, then reattach
the heat sink to its mount on the motherboard. The thermal paste on top of the processor should
bridge the gap between your processor and your heat sink.
L. Plug back in any unplugged components. Depending on your computer's orientation, you
may have unplugged a cable or two during the installation process. If so, make sure you
reconnect them to your motherboard before proceeding.
This especially applies if you installed a new motherboard.
M. Reassemble and run your computer. Once your computer's put back together and
plugged back in, you can boot up your computer and click through any setup menus which
appear.
Since Windows will need to download and install new drivers for your processor, you
will most likely be prompted to restart your computer after it finishes starting up.
Chapter Four
Memory
4.1. Random Access Memory Defined
Memory is a collection of storage cells together with the necessary circuits to transfer
information to and from them. In other words, Computer memory is the storage space in
computer where data is to be processed and instructions required for processing are stored.
RAM is Random Access Memory which loses its contents when the computer is switched off
(it is volatile). This memory can be written to, plus instructions and data can be loaded into it.
The kind of memory used for holding programs and data being executed is called RAM. RAM
differs from ROM in that it can be both read and written; it is considered volatile storage
because, unlike ROM, the contents of RAM are lost when the power is turned off. Common
RAM sizes include 1 GB, 2 GB, 4 GB, and larger (typically in powers of two).
The gold or tin pins on the lower edge of the front and back of a SIMM are connected, providing
a single line of communication paths between the module and the system. The pins on a DIMM,
however, are not connected, providing two lines of connection paths between the module and
the system, one on the front and one on the back.
The sections that follow look in turn at cache memory, primary (main) memory, and ROM.
Cache Memory
Cache memory is a very high speed semiconductor memory which can speed up CPU. It acts
as a buffer between the CPU and main memory. It is used to hold those parts of data and
program which are most frequently used by CPU. The parts of data and programs are
transferred from disk to cache memory by operating system, from where CPU can access them.
We will see more on cache memory later in this chapter.
Primary Memory (Main Memory)
Primary memory holds only those data and instructions on which computer is currently
working. It has limited capacity and data is lost when power is switched off. It is generally
made up of semiconductor device. These memories are not as fast as registers. The data and
instruction required to be processed reside in main memory. It is divided into two subcategories
RAM and ROM.
Characteristics of primary Memory
These are semiconductor memories
Usually volatile memory or data is lost in case power is switched off.
It is working memory of the computer.
Faster than secondary memories.
A computer cannot run without primary memory
Functions of Primary Memory:
1. It holds the data and instructions that the CPU is currently processing.
2. It temporarily holds the input instructions from the input devices while the data is being
input and processed.
3. It stores the results temporarily until they are transferred to the respective output devices.
Static RAM (SRAM) has the following characteristics:
It uses six or more transistors to hold a single bit.
It is more costly than DRAM.
It has higher power consumption than DRAM. Cache memory is SRAM.
Secondary Memory
This type of memory is also known as external memory or non-volatile. It is slower than main
memory. These are used for storing data/Information permanently. CPU directly does not
access these memories instead they are accessed via input-output routines. Contents of
secondary memories are first transferred to main memory, and then CPU can access it. For
example: Hard disk, flash drive, CD-ROM, DVD etc. We will see them in detail in chapter six.
B. Erasable Programmable ROM (EPROM)
As its name implies, an EPROM is a semiconductor memory device that can be programmed with data
which can only be read, but not altered, by the application circuit. As such, programming an
EPROM generally takes place prior to its attachment to the application circuit. One of the most
common applications for an EPROM is as a BIOS chip of a personal computer, which stores
information about the computer's basic input/output system. An EPROM is a non-volatile
memory device, i.e., it can retain its stored data even if it is powered off. Reprogramming an
EPROM with new data is possible, but it has to undergo a special data erasure process that
employs ultraviolet (UV) light before it can be done. There are some EPROMs though, known
as one-time programmable (OTP) EPROMs, that are designed to be non-reprogrammable as a
cheaper alternative for storing specific bug-free data that never require any change.
C. Electrically Erasable Programmable ROM (EEPROM)
EEPROM is user-modifiable read-only memory that can be erased and reprogrammed (written
to) repeatedly through the application of higher than normal electrical voltage. Unlike EPROM
chips, EEPROMs do not need to be removed from the computer to be modified. However, an
EEPROM chip has to be erased and reprogrammed in its entirety, not selectively. It also has a
limited life - that is, the number of times it can be reprogrammed is limited to tens or hundreds
of thousands of times. In an EEPROM that is frequently reprogrammed while the computer is
in use, the life of the EEPROM can be an important design consideration. A special form of
EEPROM is flash memory, which uses normal PC voltages for erasure and reprogramming.
Cache Memory
It is a special high-speed storage mechanism. It can be either a reserved section of main
memory or an independent high-speed storage device. Two types of caching are commonly
used in personal computers: memory caching and disk caching. A memory cache,
sometimes called a cache store or RAM cache, is a portion of memory made of high-speed
static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main
memory. Memory caching is effective because most programs access the same data or
instructions over and over. By keeping as much of this information as possible in SRAM, the
computer avoids accessing the slower DRAM.
Some memory caches are built into the architecture of microprocessors. The Intel 80486
microprocessor, for example, contains an 8K memory cache, and the Pentium has a 16K cache.
Such internal caches are often called Level 1 (L1) caches. Most modern PCs also come with
external cache memory, which is located on the motherboard, called Level 2 (L2) caches. These
caches sit between the CPU and the DRAM. Like L1 caches, L2 caches are composed of SRAM
but they are much larger. Disk caching works under the same principle as memory caching, but
instead of using high-speed SRAM, a disk cache uses conventional main memory. The most
recently accessed data from the disk (as well as adjacent sectors) is stored in a memory buffer.
When a program needs to access data from the disk, it first checks the disk cache to see if the
data is there. Disk caching can dramatically improve the performance of applications, because
accessing a byte of data in RAM can be thousands of times faster than accessing a byte on a
hard disk.
When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is
judged by its hit rate. Many cache systems use a technique known as smart caching, in which
the system can recognize certain types of frequently used data.
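The value of a high hit rate can be made concrete with the standard average-access-time formula. The sketch below uses assumed, illustrative timings rather than measured figures.

# Sketch: average memory access time with a cache (illustrative numbers only).
def average_access_time_ns(hit_rate, cache_ns, main_ns):
    # Hits are served from the fast cache; misses fall through to main memory.
    return hit_rate * cache_ns + (1 - hit_rate) * main_ns

print(average_access_time_ns(0.95, 1, 60))  # 3.95 ns with a 95% hit rate
print(average_access_time_ns(0.50, 1, 60))  # 30.5 ns with a 50% hit rate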
Advantages
The advantages of cache memory are as follows:
Cache memory is faster than main memory.
It consumes less access time as compared to main memory.
It stores the program that can be executed within a short period of time.
It stores data for temporary use.
Disadvantages
The disadvantages of cache memory are as follows:
Cache memory has limited capacity.
It is very expensive.
4.5. Identify memory problems and upgrading
Upgrading System Memory
System memory provides the working memory of the CPU. Insufficient quantities of RAM can
cause a system to slow down and run much more poorly than it otherwise could. Conversely,
upgrading the quantity and quality of RAM can turn a sluggish system into a faster, more robust
machine. Furthermore, applications and operating systems require varying quantities of RAM
in which to load, with the basic rule being that newer programs need more RAM than older
versions of the same program; Windows 2000, for example, runs poorly on less than 128 MB
of RAM, whereas Windows 95 runs fine with that amount.
Upgrading RAM modules is one of the most common system upgrades you can perform, but
requires you to consider at least the following issues.
How much RAM does the system currently have? (A quick software check is sketched after this list.)
How much RAM can the system use?
Which type and speed of RAM does the system have? The most common types of
RAM are FPM (Fast Page Mode) and EDO (Extended Data Out) SIMMs, and
SDRAM DIMMs.
Which type and speed of RAM can you put in the system? Check the motherboard
manual.
Can you mix types in different banks? Many Pentium motherboards with two banks
of 72-pin SIMM slots, for example, could handle FPM SIMMs in one bank and EDO
SIMMs in the other. Some Pentium and Pentium II systems came with both SIMM
and 168-pin DIMM slots.
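Before buying modules, it helps to confirm how much memory is already installed. The sketch below assumes the third-party psutil package is available (installed with pip install psutil); it reports totals only, not the module type or speed.

# Sketch: report installed and available physical memory.
import psutil  # third-party package; install with: pip install psutil

mem = psutil.virtual_memory()
print(f"Installed RAM: {mem.total / 1024**3:.1f} GB")
print(f"Available RAM: {mem.available / 1024**3:.1f} GB")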
Steps in the upgrading/replacing Memory
Open the computer case and locate the SIMM or DIMM slots on the system board.
Determine how many RAM modules you need to fill a bank. Remember you should
fill an entire bank or memory errors may occur.
Remove the RAM modules from their antistatic packaging, handling them by the edges.
SIMM modules must be inserted into the SIMM slots at an angle and snapped upright
into position so they are perpendicular to the motherboard.
DIMM modules must be inserted straight down into the slot and pressed firmly until
the retaining clips snap into place.
Chapter Five
Power Supplies
5.1. Power Supplies
The system’s power-supply unit (PSU) is an internal hardware component used to supply the
components in a computer (CPU, expansion cards, RAM, Chipsets, etc…) with power by
converting potentially lethal (Sufficient to cause death) 110-115 or 220-230 volt alternating
current (AC) into a steady low-voltage direct current (DC) usable by the computer. In desktop
and tower PCs, the power supply is the shiny metal box located at the rear of the system unit.
The desktop/tower power supply produces four (or five) different levels of efficiently regulated
DC voltage. These are +5V, –5V, +12V and –12V. (The ATX design also provides a +3.3V
level to the system board.) The power-supply unit also provides the system’s ground. The +5V
level is used by the IC devices on the system board and adapter cards. The +3.3V level is used
by the microprocessor. The 12V levels are typically used to power the motors used in hard and
floppy disk drives.
The other power-supply bundles are used to supply power to optional systems, such as the Disk
drives and CD-ROM drives. System board power connectors provide the system board and the
individual expansion slots with up to 1 ampere of current each. The basic four voltage levels
are available for use through the system board’s expansion slot connectors.
Several bundles of cable emerge from the power supply to provide power to the components
of the system unit and to its peripherals. The power supply delivers power to the system board,
and its expansion slots, through the system board power connectors. Notice that it is keyed so
that it cannot be installed incorrectly. The back end of the power supply is where you
connect the power cord to the computer. On the front end, which is not visible unless the
computer is opened, are several cables that connect the power supply to each of the
devices and to the computer motherboard.
Motherboard Power
CPUs, RAM, chipsets-everything on your motherboard-need electrical power to run. Every
power supply provides specialized connections to the motherboard to provide DC electricity in
several voltages to feed the needs of the many devices. As mentioned earlier, different
motherboard form factors require different connectors.
A pair of connectors, called P8 and P9, links the AT power supply to the AT motherboard. Each
of these connectors has a row of teeth along one side and a small guide on the opposite side
that help hold the connection in place.
You might find that installing P8 and P9 requires a little bit of work, because of facing, keying,
and figuring out which one goes where. P8 and P9 are faced (that is, they have a front and a
back), so you cannot install them backwards. Sometimes the small keys on P8 and P9 require
that you angle the connectors in before snapping them down all the way.
Although you cannot plug P8 and P9 in backwards, you certainly can reverse them by putting
P8 where P9 should go, and vice versa. When connecting P8 and P9 to the motherboard, keep
the black ground wires next to each other. All AT motherboards and power supplies follow this
rule. Be careful!! Incorrectly inserting P8 and P9 can damage both the power supply and other
components in the PC.
ATX Power Supply
The ATX power supply is a newer design that uses a single 20-pin connection to the system
board. This connector is keyed to make sure that it is plugged in properly. Both the AT and
ATX models provide four levels of DC voltage, and ATX power supplies add an additional +3.3V level. The
wires coming out of the power supply are color coded with the black one as the ground wire.
Yellow: +12V
Blue: -12V
Red: +5V
White: -5V
Circuitry (ICs): +/- 5V
Motor: +/- 12V
Microprocessor: +3.3V
ATX Power Connector
ATX uses a single P1 power connector instead of the P8 and P9 commonly found on AT
systems. The P1 connector requires its own special socket on the motherboard. P1 connectors
include a 3.3-volt wire along with the standard 5-volt and 12-volt wires. The invariably white
P1 socket stands out clearly on the motherboard. The P1 has a notched connector that allows
you to insert it one way only-you cannot install the P1 connector incorrectly.
Power-supply units come in a variety of shapes and power ratings. The shapes of the
power supplies are determined by the type of case in which they are designed to be used.
The major difference between these two power-supply types is in their form factors. The
ATX power supply is somewhat smaller in size than the AT-style power supply, and their
hole patterns differ.
Another point that differentiates power supplies is their power (or wattage) rating.
Typical power ratings include 150-, 200-, and 250-watt versions.
Note: Be aware that power supply’s form factor and wattage ratings must be taken into account
when ordering a replacement power supply for a system.
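A rough power budget helps when choosing a wattage rating: add up typical component draws and leave some headroom. The per-component wattages below are assumed ballpark figures, not manufacturer ratings; always check the actual specifications.

# Sketch: rough power-supply sizing estimate (assumed ballpark wattages).
components = {
    "motherboard and chipset": 50,
    "CPU": 90,
    "RAM (two modules)": 10,
    "hard drive": 10,
    "optical drive": 25,
    "fans and peripherals": 15,
}

total = sum(components.values())
recommended = total * 1.3  # leave roughly 30% headroom
print(f"Estimated load: {total} W; recommended PSU rating: about {recommended:.0f} W")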
5.3. Batteries
Laptops and portables utilize an external power supply and rechargeable battery system.
Batteries were typically nickel-cadmium, but newer technologies have introduced nickel metal-
hydride and lithium-ion batteries that provide extended life and shorter recharge times. A small
lithium coin cell is also used to power a computer's CMOS memory and real-time clock.
Select batteries
If you experience problems that you suspect are battery related, exchange the battery with a
known, good battery that is compatible with the laptop. If a replacement battery cannot be
located, take the battery to an authorized repair centre for testing.
Guidelines for selecting a replacement battery include that the replacement battery:
a. matches the model of the laptop,
b. fits the laptop's battery bay,
c. is compatible with the battery connector, and
d. meets the correct voltage requirements.
NOTE: Always follow the instructions provided by the manufacturer when charging a new
battery. The laptop can be used during an initial charge, but do not unplug the AC adapter.
NiCad and NiMH rechargeable batteries should occasionally be discharged completely to
remove the charge memory. When the battery is completely discharged, it should then be
charged to maximum capacity.
CAUTION: Care should always be taken when handling batteries. Batteries can explode if
improperly charged, shorted, or mishandled. Be sure that the battery charger is designed for
the chemistry, size, and voltage of your battery. Batteries are considered toxic waste, and must
be disposed of according to local laws.
Chapter Six
Storage Drives
A storage device is any hardware capable of holding information; storage is the computer's
process of retaining information for future use. There are two types of storage devices used in
computers: a primary storage device, such as RAM, and a secondary storage device, such as a hard
drive. Secondary storage can be removable, internal, or external. Without a storage
device, your computer would not be able to save any settings or information and would be
considered a dumb terminal. The following are some additional examples of storage devices
that are used with computers.
6.1. Floppy Disk
A number of different types of floppy disks have been developed; the size of the floppy got
smaller, and the storage capacity increased; however, in 1990s, other media, including hard
disk drives, ZIP drives, optical drives and USB flash drives, started to replace floppy disks as
the primary storage medium.
Types of floppy disks
The first floppy disks that came on the market, in the late 1970s, were 8 inches (20 cm) in
diameter, protected by a flexible plastic jacket. This was quickly followed by a smaller version
of the same design, the 5 ¼ inch (133 mm) floppy, which could store about the same amount
of information using higher-density media and recording techniques; its most common
capacities were 360 KB and 1.2 MB. In the early 1980s, the 3 ½ inch (90 mm) floppy,
or "microfloppy", came on the market, and this type became the dominant storage medium for
personal computers for many years.
Three different generations of floppy disks: 8-inch, 5.25-inch and 3.5-inch. Each of the floppy
disks requires a different type of floppy disk drive. These were typically built into the computer
case itself.
Floppy disks were quite vulnerable. The disk medium was very sensitive to dust, moisture and
heat. The flexible plastic carrier was also not very sturdy (not solidly built). The hard plastic
case of the 3.5-inch floppy presented a substantial improvement in this respect. The most
common format of this floppy became the double-sided, high-density 1.44 MB disk.
6.2. Hard Drive
From the early days of the computer to the present, computer storage has been classified into a
primary working memory (RAM) which is usually volatile and the non-volatile secondary or
backup storage. For secondary storage, paper tapes and cards were used in the early computers,
giving way subsequently to magnetic tapes, drums and disks.
Because the hard disk drive is expected to retain data until deliberately erased or
overwritten, the hard drive is used to store crucial programming and data.
Most computer hard drives sit in an internal drive bay at the front of the computer.
They connect to the motherboard using a data cable (ATA, SATA, or SCSI) and a power
cable.
Hard disks take their name from the rigidity of the platters used, as compared to floppy disks
and other media which use flexible "platters" (they usually aren't even called platters when
the material is flexible).
The platters are "where the action is"--this is where the data itself is recorded. For this reason
the quality of the platters and particularly, their media coating, is critical. The surfaces of each
platter are precision machined and treated to remove any imperfections, and the hard disk itself
is assembled in a clean room to reduce the chances of any dirt or contamination getting onto
the platters.
The form factor of the hard disk also has a great influence on the number of platters in a drive.
Even if hard disk engineers wanted to put lots of platters in a particular model, the standard PC
"slim line" hard disk form factor is limited to 1 inch in height, which limits the number of
platters that can be put in a single unit. Larger 1.6-inch "half height" drives are often found in
servers and usually have many more platters than desktop PC drives. Of course, engineers are
constantly working to reduce the amount of clearance required between platters, so they can
increase the number of platters in drives of a given height.
Tracks and Sectors
Platters are organized into specific structures to enable the organized storage and retrieval of
data. Each platter is broken into tracks--tens of thousands of them--which are tightly-packed
concentric circles. These are similar in structure to the annual rings of a tree (but not similar to
the grooves in a vinyl record album, which form a connected spiral and not concentric rings).
A track holds too much information to be suitable as the smallest unit of storage on a disk, so
each one is further broken down into sectors. A sector is normally the smallest
individuallyaddressable unit of information stored on a hard disk, and normally holds 512 bytes
of information. The first PC hard disks typically held 17 sectors per track. Today's hard disks
can have thousands of sectors in a single track, and make use of zoned recording to allow more
sectors on the larger outer tracks of the disk.
Head Actuator
The actuator is the device used to position the head arms to different tracks on the surface of
the platter (actually, to different cylinders, since all head arms are moved as a synchronous
unit, so each arm moves to the same track number of its respective surface). The actuator is a
very important part of the hard disk, because changing from track to track is the only operation
on the hard disk that requires active movement: changing heads is an electronic function, and
changing sectors involves waiting for the right sector number to spin around and come under
the head (passive movement). Changing tracks means the heads must be shifted, and so making
sure this movement can be done quickly and accurately is of paramount importance. This is
especially so because physical motion is so slow compared to anything electronic--typically a
factor of 1,000 times slower or more.
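The cost of that mechanical movement can be estimated with a simple model: average access time is roughly the average seek time plus the average rotational latency (half a revolution). The seek times below are assumed typical values for illustration.

# Sketch: estimate average access time for a mechanical hard disk.
def average_access_ms(seek_ms, rpm):
    # Average rotational latency is half of one full revolution, in milliseconds.
    latency_ms = 0.5 * (60_000 / rpm)
    return seek_ms + latency_ms

print(f"{average_access_ms(9, 7200):.1f} ms")   # about 13.2 ms for a 7,200 RPM drive
print(f"{average_access_ms(12, 5400):.1f} ms")  # about 17.6 ms for a 5,400 RPM drive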
Head Crashes
Since the read/write heads of a hard disk are floating on a microscopic layer of air above the
disk platters themselves, it is possible that the heads can make contact with the media on the
hard disk under certain circumstances. Normally, the heads only contact the surface when the
drive is either starting up or stopping. Considering that a modern hard disk is turning over 100
times a second, this is not a good thing.
If the heads contact the surface of the disk while it is at operational speed, the result can be loss
of data, damage to the heads, damage to the surface of the disk, or all three. This is usually
called a head crash, two of the most frightening words to any computer user. The most common
causes of head crashes are contamination getting stuck in the thin gap between the head and
the disk, and shock applied to the hard disk while it is in operation.
Despite the lower floating height of modern hard disks, they are in many ways less susceptible
to head crashes than older devices. The reason is the superior design of hard disk enclosures to
eliminate contamination, more rigid internal structures and special mounting techniques
designed to eliminate vibration and shock. The platters themselves usually have a protective
layer on their surface that can tolerate a certain amount of abuse before it becomes a problem.
Taking precautions to avoid head crashes, especially not abusing the drive physically, is
obviously still common sense. Be especially careful with portable computers; I try to never
move the unit while the hard disk is active.
A hard disk is comprised of four basic parts: platters, a spindle, read/write heads, and integrated
electronics.
Platters are rigid disks made of metal or plastic.
Both sides of each platter are covered with a thin layer of iron oxide or other
magnetisable material.
The platters are mounted on a central axle or spindle, which rotates all the platters at
the same speed.
Read/write heads are mounted on arms that extend over both top and bottom surfaces
of each disk.
There is at least one read/write head for each side of each platter.
The arms jointly move back and forth between the platters' centres and outside edges;
this movement, along with the platters' rotation, allows the read/write heads to access
all areas of the platters.
The integrated electronics translate commands from the computer and move the
read/write heads to specific areas of the platters, thus reading and/or writing the needed
data.
How Is Data Stored and Retrieved?
When a computer saves data, it sends the data to the hard disk as a series of bits.
As the disk receives the bits, it uses the read/write heads to magnetically record or
“write” the bits as a magnetic charge on the oxide coating of a disk platter.
NB: Data bits are not necessarily stored in succession (though they can be); for example,
the data in one file may be written to several different areas on different platters.
When the computer requests data stored on the disk, the platters rotate and the
read/write heads move back and forth to the specified data areas.
The read/write heads read the data by determining the magnetic field of each bit,
positive or negative, and then relay that information back to the computer
Disk Formatting
Formatting is the most basic form of disk organization.
Formatting prepares the hard disk so that files can be written to the platters and then
quickly retrieved when needed.
Hard disks must be formatted in two ways: physically and logically
A hard disk’s physical formatting (also called low-level formatting) is usually
performed by the manufacturer, that is, when the drives are built.
Physical formatting divides the hard disk’s platters into their basic physical elements:
tracks, sectors, and cylinders
The tracks are identified by number, starting with track zero at the outer edge.
Tracks are divided into smaller areas or sectors, which are used to store a fixed amount
of data.
Sectors are usually formatted to contain 512 bytes of data
A cylinder is comprised of a set of tracks that lie at the same distance from the spindle
on all sides of all the platters.
Computer hardware and software frequently work using cylinders.
When data is written to a disk in cylinders, it can be fully accessed without having to
move the read/write heads. Because head movement is slow compared to disk rotation
and switching between heads, cylinders greatly reduce data access time.
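Because cylinders, heads (tracks per cylinder), and sectors fully describe the classic disk geometry, the capacity of an older CHS-addressed drive can be worked out directly. The geometry below is a hypothetical example, not a specific product.

# Sketch: capacity of a drive described by classic CHS geometry (hypothetical values).
cylinders = 1024
heads = 16              # one read/write head per platter surface
sectors_per_track = 63
bytes_per_sector = 512  # the traditional sector size described above

capacity_bytes = cylinders * heads * sectors_per_track * bytes_per_sector
print(f"{capacity_bytes / 1024**2:.0f} MB")  # about 504 MB for this geometry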
Bad sectors are sectors that can no longer be used to hold data due to gradual
deterioration of the magnetic properties of the platter coating.
Consequently, it becomes more and more difficult for the read/write heads to read data
from or write data to the affected platter sectors
Most modern computers can determine when a sector is bad; if this happens, the
computer simply marks the sector as bad (so it will never be used) and then uses an
alternate sector.
Logical Formatting
After a hard disk has been physically formatted, it must also be logically formatted.
Logical formatting places a file system on the disk, allowing an operating system (such
as DOS, OS/2, Windows, or Linux) to use the available disk space to store and retrieve
files.
Understanding Partitions
After a disk has been physically formatted, it can be divided into separate physical
sections or partitions.
Each partition functions as an individual unit, and can be logically formatted with any
desired file system.
Once a disk partition has been logically formatted, it is referred to as a volume.
As part of the formatting operation, you are asked to give the partition a name, called
the “volume label.” This name helps you easily identify the volume.
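On a running system you can list the resulting volumes, their file systems, and their mount points programmatically. The sketch below assumes the third-party psutil package is installed.

# Sketch: list each mounted volume with its file system and total size.
import psutil  # third-party package; install with: pip install psutil

for part in psutil.disk_partitions():
    usage = psutil.disk_usage(part.mountpoint)
    print(f"{part.device} on {part.mountpoint}: {part.fstype}, "
          f"{usage.total / 1024**3:.1f} GB total")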
Why Use Multiple Partitions?
Install more than one OS on your hard disk;
Make the most efficient use of your available disk space;
Make your files as secure as possible;
Physically separate data so that it is easy to find files and back up data.
Partition types
There are three kinds of partitions: primary, extended, and logical.
Primary and extended partitions are the main disk divisions;
One hard disk may contain up to four primary partitions, or three primary partitions and
one extended partition. The extended partition can then be further divided into any
number of logical partitions
Primary partition
A primary partition may contain an operating system along with any number of data
files (for example, program files or user files). Before an OS is installed,
The primary partition must be logically formatted with a file system compatible to the
OS.
If you have multiple primary partitions on your hard disk, only one primary partition
may be visible and active at a time. The active partition is the partition from which an
OS is booted at computer start-up.
Primary partitions other than the active partition are hidden, preventing their data from
being accessed. Thus, the data in a primary partition can be accessed (for all practical
purposes) only by the OS installed on that partition
Extended partition
The extended partition was invented as a way of getting around the arbitrary four
partition limit.
An extended partition is essentially a container in which you can further physically
divide your disk space by creating an unlimited number of logical partitions.
An extended partition does not directly hold data. You must create logical partitions
within the extended partition in order to store data.
Once created, logical partitions must be logically formatted, but each can use a different
file system.
Logical partitions
Logical partitions can exist only within an extended partition and are meant to contain
only data files and OSs that can be booted from a logical partition (OS/2, Linux, and
Windows NT)
You can access logical partition files from multiple OSs
Managing Partitions
Shrinking volume
Extending volume
Creating partition
Deleting volume
Hiding volume
Making primary part as active
When you create multiple primary partitions to hold different operating systems, you
must tell the computer which primary partition to boot from. The primary partition from
which the computer boots is called the active partition.
If there is not an active primary partition on the first physical hard disk, your computer
will not be able to boot from your hard disk.
WARNING! Before you make a primary partition active, make sure that it is a bootable
partition.
Bootable partitions are logically formatted and have the necessary OS files installed.
Partitions without an OS cannot be booted.
On the back of a hard drive there is a circuit board called the disk controller. The disk
controller determines the logical interaction between the device and the computer. The hard
disk drive (HDD) is a magnetic storage device. Its storage capacity is measured in gigabytes
(GB). Magnetic hard drives have drive motors designed to spin magnetic platters and move the
drive heads. Solid state drives (SSDs) do not have moving parts, which results in faster access
to data, higher reliability, and reduced power usage.
Internal Cables
Internal power cables (Molex and Berg) connect drives and fans to the motherboard.
Front panel cables connect the case buttons and lights to the motherboard.
Data cables connect drives to the drive controller.
• Floppy disk drive (FDD) data cable
• PATA (IDE) data cable
• PATA (EIDE) data cable
• SATA data cable
• eSATA data cable
FireWire is a high-speed, hot-swappable interface that can support up to 63 devices.
Some devices can also be powered through the FireWire port.
A parallel cable is used to connect parallel devices, such as a printer or scanner, and
can transmit 8 bits of data at one time.
A SCSI port can transmit data at rates in excess of 320 MBps and can support up to 15
devices. SCSI devices must be terminated at the endpoints of the SCSI chain.
A network port, also known as an RJ-45 port, connects a computer to a network. The
maximum length of network cable is 328 feet (100 m).
A PS/2 port connects a keyboard or a mouse to a computer. The PS/2 port is a 6-pin
mini-DIN female connector.
An audio port connects audio devices to the computer.
A video port connects a monitor cable to a computer.
A hard disk or drive is non-volatile memory, and it is the part of your computer responsible for
long-term storage of information.
Magnetic disks – rigid metal or glass platters covered with magnetic recording material.
Disk surface is logically divided into tracks, which are subdivided into sectors.
The disk controller determines the logical interaction between the device and the
computer
Hard Disk
Track: The area in which data and information are stored on magnetic tape or disk.
Sector: A subdivision of a track on a magnetic disk; used to improve access to data or
information.
Cylinder: A storage concept that refers to the same track location on each of the
platters.
Head Crash: The situation that occurs when the read/write heads, which normally float above the platter surface, make contact with it.
Chapter Seven
Additionally, the bus speed is defined by its frequency (expressed in Hertz), the number
of times data can be sent or received per second. Each time that data is sent or received is called a
cycle.
It is possible to find the maximum transfer speed (Throughput (bps or MB/s)) of the bus, the
amount of data which it can transport per unit of time, by multiplying its width by its frequency.
A bus with a width of 16 bits and a frequency of 133 MHz, therefore, has a transfer speed equal
to:
16 bits × 133 × 10^6 Hz = 2,128 × 10^6 bits/second
= 266 × 10^6 bytes/second
= 266 × 10^3 KB/s ≈ 266 MB/s
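The same calculation can be wrapped in a small helper, which makes it easy to compare buses of different widths and frequencies; the example values mirror the 16-bit, 133 MHz bus above.

# Sketch: maximum theoretical throughput of a bus from its width and frequency.
def bus_throughput_mb_per_s(width_bits, frequency_hz):
    # bits per second = width x frequency; divide by 8 for bytes, 10**6 for MB.
    return width_bits * frequency_hz / 8 / 10**6

print(bus_throughput_mb_per_s(16, 133 * 10**6))  # 266.0 MB/s, the example above
print(bus_throughput_mb_per_s(32, 33 * 10**6))   # 132.0 MB/s, a classic PCI bus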
Types of bus
Architecture
The address bus (sometimes called the memory bus) transports memory addresses
which the processor wants to access in order to read or write data. It is a
unidirectional bus.
The data bus transfers instructions coming from or going to the processor. It is a
bidirectional bus. It is used to send data from one device to another. Data is passed
in a parallel or serial manner; a parallel bus normally passes a multiple of eight
bits at a time, while a serial bus passes one bit at a time.
The control bus (or command bus) transports orders and synchronization signals
coming from the control unit and travelling to all other hardware components. It is
a bidirectional bus, as it also transmits response signals from the hardware.
The primary buses
There are generally two buses within a computer:
The internal bus (sometimes called the Front side bus, or FSB for short). The internal
bus allows the processor to communicate with the system's central memory (the RAM).
The expansion bus (sometimes called the input/output bus) allows various motherboard
components (USB, serial, and parallel ports, cards inserted in PCI connectors, hard
drives, CD-ROM and CD-RW drives, etc.) to communicate with one another. However,
it is mainly used to add new devices using what are called expansion slots connected to
the input/output bus.
FSB (Front Side Bus)
Short for Front Side Bus, FSB is also known as the Processor Bus, Memory Bus, or System
Bus and connects the CPU (chipset) with the main memory and L2 cache. The FSB can range
from speeds of 66 MHz, 100 MHz, 133 MHz, 266 MHz, 400 MHz, and up.
another important consideration when looking at purchasing a computer Motherboard or a new
computer.
The FSB speed can be set either using the system BIOS or with jumpers located on the
computer motherboard. While most motherboards allow you to set the FSB to any setting,
ensure that the FSB is properly set unless you plan to overclock the computer. Keep in mind
that improper settings may cause issues such as hardware lockups, data corruption, or other
errors may arise with older hardware (e.g. SCSI cards). Verify your component's compatibility
with your motherboard and FSB speed.
Chipset
A chipset is the component which routes data between the computer's buses, so that all the
components which make up the computer can communicate with each other. The chipset
originally was made up of a large number of electronic chips, hence the name. It generally has
two components:
The Northbridge (also called the memory controller) is in charge of controlling transfers
between the processor and the RAM, which is why it is located physically near the
processor. It is sometimes called the GMCH, for Graphic and Memory Controller Hub.
The Southbridge (also called the input/output controller or expansion controller)
handles communications between peripheral devices. It is also called the ICH (I/O
Controller Hub). The term bridge is generally used to designate a component which
connects two buses.
Expansion Buses
Over the years many different buses have been developed. Let us see them as follows;-
A. The ISA Bus (Industry Standard Architecture Bus):
The ISA bus is an old, low-speed bus that has since been dropped from PC designs. When this
bus was originally released it was a proprietary bus of IBM, which allowed only IBM to create
peripherals and the actual interface. Later, however, in the early 1980s, the bus was adopted
by other clone manufacturers.
B. PCI Bus (Peripheral Component Interconnect Bus)
The PCI is the high speed bus of the 1990s. The PCI Bus emerged as the answer to the
performance bottleneck. The PCI bus is being used to address all of the problems faced by
video, disk (SCSI and IDE), network, etc. However, it is a high-performance bus that is used
for peripherals requiring CPU-like performance.
The PCI bus is the most common expansion-card bus found on computers today. The bus was
developed by Intel. It is used today in virtually all PCs and other computers for
connecting adapters, such as network controllers, graphics cards, sound cards, etc.
The PCI bus connects PCI slots to the Southbridge. On most systems, the speed of the PCI bus
is 33 MHz. Also compatible with PCI is PCI Express, which is much faster than PCI but is still
compatible with current software and operating systems. PCI Express is likely to replace both
PCI and AGP busses.
The PCI Bus is now a well-defined open standard. There are a massive number of PCI-based
systems, PCI cards, and chipsets. Today you can find PCI video cards, networking cards, SCSI
and IDE controller cards and chips, and others. Furthermore, PCI is processor independent and
is used on a number of different CPU-based systems, such as Intel x86, DEC Alpha, and others.
Most systems shipped today include a PCI bus with slots, or at least PCI-based peripherals.
PCI Express
Originally known as 3rd Generation I/O (3GIO), PCI Express, or PCIe, was approved as a
standard in July 2002 and is an expansion bus found in modern computers. PCI Express is a serial bus
designed to replace PCI and AGP and is available in different formats: x1, x2, x4, x8, x12, x16,
and x32. The data transmitted over PCI-Express is sent over wires called lanes in full duplex
mode (both directions at the same time). Each lane is capable of around 250MBps and the
specification can be scaled from 1 to 32 lanes. This means 16 lanes could support a bandwidth
of up to 4,000 MBps in both directions (16 × 250 MBps = 4,000 MBps ≈ 4 GBps).
AGP (Accelerated Graphics Port)
Introduced by Intel in 1997, AGP or Accelerated Graphics Port is a 32-bit bus designed for the
high demands of 3-D graphics. AGP has a direct line to the computer’s memory, which allows
3-D elements to be stored in the system memory instead of the video memory. For AGP to
work, the computer must have an AGP slot, which comes with most Pentium II and
Pentium III machines. The computer also needs to be running Windows 95 OSR2.1, Windows
98, Windows 98 SE, Windows 2000, or higher. AGP is one of the fastest expansion buses in
use, but it is only used for video or graphics.
USB (Universal Serial Bus)
Short for Universal Serial Bus, USB is a standard that was introduced in 1995 by Intel,
Compaq, Microsoft and other computer companies. USB 1.x is an external bus standard that
supports data transfer rates of 12 Mbps and is capable of supporting up to 127 peripheral
devices and supports hot plugging.
USB transfer speeds
USB 2.0, also known as hi-speed USB, was developed by Compaq, Hewlett Packard, Intel,
Lucent, Microsoft, NEC, and Philips and was introduced in 2001. Hi-speed USB is capable of
supporting a transfer rate of up to 480 Mbps and is backwards compatible, meaning it is capable
of supporting USB 1.0 and 1.1 devices and cables.
As of 2012, USB 3.0 also known as Super Speed USB is the latest version of the USB protocol.
Most new computers feature USB 3.0 ports built-in, offering data transfer speeds of up to 5
Gbps. USB 3.0 improved upon the USB 2.0 technology with speed and performance increases,
improved power management and increased bandwidth capability (providing two
unidirectional data paths for receiving and sending data at the same time). Today, many devices
use the USB 3.0 revision for improved performance and speed, including USB thumb drives,
digital cameras, external hard drives, MP3 players, and other devices.
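The practical difference between the USB generations is easiest to see as a transfer-time estimate. The sketch below uses the nominal signalling rates and ignores protocol overhead, so real transfers will be slower.

# Sketch: rough time to move one decimal gigabyte at nominal USB rates.
def transfer_seconds(file_gb, rate_mbps):
    bits = file_gb * 8 * 10**9        # file size in bits
    return bits / (rate_mbps * 10**6)

for name, rate_mbps in [("USB 1.1", 12), ("USB 2.0", 480), ("USB 3.0", 5000)]:
    print(f"{name}: {transfer_seconds(1, rate_mbps):.1f} s per GB")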
FireWire
Alternatively referred to as IEEE-1394, FireWire was developed by Apple in 1995 and is a bus
that has a bandwidth of 400-800 Mbps, can handle up to 63 units on the same bus, and is hot
swappable. Users more familiar with USB can relate FireWire to USB as it has many
similarities. Like USB, FireWire has dozens of different devices such as removable drives and
cameras.
ATA
Short for Advanced Technology Attachment, ATA was first approved May 12, 1994, under the
ANSI document number X3.221-1994 and is an interface used to connect such devices as hard
drives, CD-ROM drives, and other disk drives. The first ATA interface is now commonly
referred to as PATA, which is short for Parallel AT Attachment after the introduction of SATA.
Today, almost all home computers use the ATA interface, including Apple computers, which
use SATA.
The ATA standard is backwards compatible, which means new ATA drives (excluding SATA)
can be used with older ATA interfaces. Additionally, any new feature introduced is also found
in all future releases. For example, ATA-4 has support for PIO modes 0, 1, 2, 3, and 4, even
though these were first introduced in ATA-1 and ATA-2.
Below is a listing of each of the ATA, IDE, and EIDE standards to help provide a better
understanding of the history behind this interface, as well as an understanding of each
interface's capabilities.
ATA, ATA-1, and IDE
ATA was first developed by Control Data Corporation, Western Digital, and Compaq and first
utilized an 8-bit or 16-bit interface with a transfer rate of up to 8.3MBps, and support for PIO
modes 0, 1, and 2. Today, ATA and ATA-1, are considered obsolete.
ATA-2, EIDE, Fast ATA, Fast IDE, and Ultra ATA
ATA-2, more commonly known as EIDE, and sometimes known as Fast ATA or Fast IDE, is
a standard approved by ANSI in 1996 under document number X3.279-1996. ATA-2
introduces new PIO modes of 3 and 4, transfer rates of up to 16.6MBps, DMA modes 1 and 2,
LBA support, and supports drives up to 8.4GB. Today, ATA-2 is also considered obsolete.
ATA-3, and EIDE
ATA-3 is a standard approved by ANSI in 1997 under document number X3.298-1997. ATA-
3 added additional security features and the new S.M.A.R.T feature.
ATA-4, ATAPI-4, and ATA/ATAPI-4
ATA-4 is a standard approved by ANSI in 1998 under document NCITS 317-1998. ATA-4
includes the ATAPI packet command feature, introduces UDMA/33, also known as
ultraDMA/33 or ultra-ATA/33, which is capable of supporting data transfer rates of up to
33MBps.
ATA-5 and ATA/ATAPI-5
ATA-5 is a standard approved by ANSI in 2000 under document NCITS 340-2000. ATA-5
adds support for Ultra-DMA/66, which is capable of supporting data transfer rates of up to
66MBps, and has the capability of detecting between 40 or 80-wire cables.
ATA-6 and ATA/ATAPI-6
ATA-6 is a standard approved by ANSI in 2001 under document NCITS 347-2001. ATA-6
added support for Ultra-DMA/100 and has a transfer rate of up to 100MBps.
ATA layout
These ATA interfaces on 3.5-inch disk drives have a 40-pin connector and are capable
of supporting up to two drives per interface. Note: 2.5-inch hard drives use a 50-pin connector
and PCMCIA utilizes a 68-pin connector.
IDE (Integrated Drive Electronics)
Short for Integrated Drive Electronics or IBM Disc Electronics, IDE is more commonly known
as ATA or Parallel ATA (PATA) and is a standard interface for IBM compatible hard drives.
IDE is different from the Small Computer Systems Interface (SCSI) and Enhanced Small
Device Interface (ESDI) because its controllers are on each drive, meaning the drive can
connect directly to the motherboard or controller. IDE and its updated successor, Enhanced
IDE (EIDE), are the most common drive interfaces found in IBM compatible computers today.
SATA (Serial ATA)
The thin SATA data cable makes cable routing much easier and offers better airflow in the
computer when compared to the earlier ribbon cables used with ATA drives.
In addition to being an internal solution SATA also supports external drives through External
SATA more commonly known as eSATA. eSATA offers many more advantages when
compared to other solutions. For example, it is hot-swappable, supports faster transfer speeds
and no bottleneck issues when compared with other popular external solutions such as USB
and Firewire, and supports disk drive technologies such as S.M.A.R.T.
Unfortunately, however, eSATA does have some disadvantages, such as not distributing power
through the cable like USB, which means drives require an external power source, and it
only supports a maximum cable length of 2 m. Because of these disadvantages, don't plan
on eSATA becoming the only external solution for computers.
7.2. Cards
The video card
Your system's video card is the component responsible for producing the visual output from
your computer. Virtually all programs produce visual output; the video card is the piece of
hardware that takes that output and tells the monitor which of the dots on the screen to light up
(and in what color) to allow you to see it. Like most parts of the PC, the video card had very
humble beginnings--it was only responsible for taking what the processor produced as output
and displaying it on the screen. Early on, this was simply text, and not even color at that. Video
cards today are much more like coprocessors; they have their own intelligence and do a lot of
processing that would otherwise have to be done by the system processor. This is a necessity
due to the enormous increase both in how much data we send to our monitors today, and the
sophisticated calculations that must be done to determine what we see on the screen. This is
particularly so with the rise of graphical operating systems, and 3D computing.
The video card in your system plays a significant role in the following important aspects of
your computer system:
Performance: The video card is one of the components that has an impact on system
performance. For some people (and some applications) the impact is not that significant; for
others, the video card's quality and efficiency can impact on performance more than any other
component in the PC! For example, many games that depend on a high frame rate (how many
times per second the screen is updated with new information) for smooth animation, are
impacted far more by the choice of video card than even by the choice of system CPU.
Software Support: Certain programs require support from the video card. The software that
normally depends on the video card the most includes games and graphics programs. Some
programs (for example 3D-enhanced games) will not run at all on a video card that doesn't
support them.
Reliability and Stability: While not a major contributor to system reliability, choosing the
wrong video card can cause problematic system behavior. In particular, some cards or types of
cards are notorious for having unstable drivers, which can cause a host of difficulties.
Comfort and Ergonomics: The video card, along with the monitor, determine the quality of
the image you see when you use your PC. This has an important impact on how comfortable
the PC is to use. Poor quality video cards don't allow for sufficiently high refresh rates, causing
eyestrain and fatigue.
Adapter cards increase the functionality of a computer by adding controllers for specific
devices or by replacing malfunctioning ports.
Sound card
A sound card is an expansion board that enables a computer to manipulate and output sounds. Sound cards are
necessary for nearly all CD-ROMs and have become commonplace on modern personal
computers. Sound cards enable the computer to output sound through speakers connected to
the board, to record sound input from a microphone connected to the computer, and manipulate
sound stored on a disk. Nearly all sound cards support MIDI, a standard for representing music
electronically. In addition, most sound cards are Sound Blaster-compatible, which means that
they can process commands written for a Sound Blaster card, the de facto standard for PC
sound.
Sound cards use two basic methods to translate digital data into analog sounds: FM synthesis and wavetable synthesis.
Network card
Network Interface Cards (NICs) enable you to plug network cables into the PC, connecting it
to one of the fundamentally important sides of computing: the network. Most network cables have
either an RJ-45 or BNC connector that connects to the NIC in a corresponding port.
Chapter Eight
I/O Connectors
Ports (Connectors)
External ports appear at the rear of the PC, through slots cut into the case; they are part of either
an expansion card or the motherboard, and are male or female in gender.
8.1. Serial Port
Serial ports transfer data 1 bit at a time and are used to connect mice, external modems, and
other serial devices to the computer. Serial ports can be either 9 pin or 25 pin male ports. All
computers have at least one 9 pin serial port, and many still have a 25 pin serial port.
Network Card Ports (RJ-45 Port): The RJ-45 port is where a network cable plugs into the
NIC. Most network cables have either an RJ-45 or BNC connector that connects to the
NIC in a corresponding port.
USB: is a recent port technology that can easily chain up to 127 USB devices. USB 1.x ports
transfer data at speeds up to 12 megabits per second, making them much faster than traditional
parallel or serial communications. USB is supported by all Windows operating systems after
Windows 95 but was not supported by the original release of Windows 95 or Windows NT.
SCSI (Small Computer System Interface): is a system – level interface that provides a
complete additional expansion bus in to which you add peripherals such as hard disk drives,
CD-ROMs, tape backup drives and scanners. SCSI devices have a variety of interfaces, but the
50-pin SCSI-2 port is the most common. You might also see 68-pin or 25-pin ports on some
devices or PCs.
HDMI (High Definition Multimedia Interface) Port: HDMI is a newer digital audio/video
interface developed to be backward compatible with DVI (Digital Visual Interface).
8.4. Monitor
8.4.1. Introduction to Computer Monitor
The computer monitor is an output device that is part of your computer's display system. A
cable connects the monitor to a video adapter (video card) that is installed in an expansion slot
on your computer’s motherboard. This system converts signals into text and pictures and
displays them on a TV-like screen (the monitor). It is different from most of the other
components of the PC due to its passive nature; it isn't responsible for doing any real
computing, but rather for showing the results of computing. The computer sends a signal to the
video adapter, telling it what character, image or graphic to display. The video adapter converts
that signal to a set of instructions that tell the display device (monitor) how to draw the image
on the screen or tells the monitor which of the dots on the screen to light-up (and in what color).
Monitors are important not because of their impact on performance, but rather their impact on
the usability of the PC. A poor quality monitor can hamper the use of an otherwise very good
PC, because a monitor that is hard to look at can make the PC hard to use. Despite the fact that
they don't have a direct impact on performance, many people spend almost as much on their
monitor when buying a new PC as they do on the PC system itself.
Your monitor plays a significant role in the following important aspects of your computer
system:
o Comfort and Ergonomics: Working with your video card, your monitor determines
the quality of the image you see when you use your PC. This has an important impact
on how comfortable the PC is to use. Poor quality monitors lead directly to eyestrain
and other problems, and can ruin the computing experience.
o Software and Video Mode Support: Use of advanced, high-resolution or high color
depth video modes requires support for these modes from the monitor. A video card
that can drive high resolutions in true color at high refresh rates is useless without a
monitor that can handle them as well.
o Upgradability: Since most monitors are interchangeable with each other and can be
used on any similar PC, they are naturals to carry over to a new machine or to use after
upgrading. Since they hold their value, a frequent upgrader with a good monitor can
use it for many years and through many changes of processors, memory, motherboards
and other components that become dated quite quickly.
8.4.2. Types of Displays
Cathode Ray tube (CRT) Monitors
The CRT, or Cathode Ray Tube, is the "picture tube" of your monitor. Although it is a large
vacuum tube, it's shaped more like a bottle. The tube tapers near the back where there's a
negatively charged cathode, or "electron gun". The electron gun shoots electrons at the back
of the positively charged screen, which is coated with a phosphorous chemical. This excites
the phosphors causing them to glow as individual dots called pixels (picture elements). The
image you see on the monitor's screen is made up of thousands of tiny dots (pixels). The
distance between the pixels has a lot to do with the quality of the image. If the distance
between pixels on a monitor screen is too great, the picture will appear "fuzzy", or grainy.
The closer together the pixels are, the sharper the image on screen. The distance between
pixels on a computer monitor screen is called its dot pitch and is measured in millimetres.
You should try to get a monitor with a dot pitch of .28 mm or less.
There are a couple of electromagnets (yokes) around the collar of the tube that actually
bend the beam of electrons. The beam scans (is bent) across the monitor from left to right
and top to bottom to create, or draw the image, line by line. The number of times in one
second that the electron gun redraws the entire image is called the refresh rate and is
measured in Hertz (Hz). If the scanning beam hits each and every line of pixels, in
succession, on each pass, then the monitor is known as a non-interlaced monitor. A non-
interlaced monitor is preferred over an interlaced monitor. The electron beam on an
interlaced monitor scans the odd numbered lines on one pass, and then scans the even lines
on the second pass. This results in an almost imperceptible flicker that can cause eye-strain.
This type of eye-strain can result in blurred vision, sore eyes, headaches and even nausea.
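The relationship between resolution, refresh rate and interlacing described above can be expressed as simple arithmetic. The short Python sketch below is not part of the original module and its figures are examples only; it works out how many scan lines the electron gun must draw per second and how long one full redraw of the image takes:

```python
# Illustrative sketch: refresh rate, resolution and interlacing as described
# above. All numbers are examples, not specifications from this module.

def crt_scan_stats(width_px, height_px, refresh_hz, interlaced=False):
    """Return the frame time and line-drawing rate for a CRT scan."""
    frame_time_ms = 1000.0 / refresh_hz          # time for one full redraw
    lines_per_second = height_px * refresh_hz    # every line redrawn each refresh
    # An interlaced monitor covers only half of the lines on each pass
    # (odd lines, then even lines), which is the source of the flicker.
    passes_per_frame = 2 if interlaced else 1
    lines_per_pass = height_px / passes_per_frame
    return {
        "frame_time_ms": round(frame_time_ms, 2),
        "lines_per_second": lines_per_second,
        "lines_per_pass": lines_per_pass,
    }

print(crt_scan_stats(800, 600, 75))                  # non-interlaced
print(crt_scan_stats(800, 600, 75, interlaced=True)) # interlaced: two passes
```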
Note some important points:
A monitor is the primary output device of the computer and operates very much like a
regular television set. The principle is based upon the use of an electronic screen
called a Cathode Ray Tube (CRT), which is the major (and most expensive) part of the
monitor.
The CRT screen is coated with a phosphorous material that glows when it is struck by a
stream of electrons. This material is arranged into an array of millions of tiny cells,
usually called dots (pixels).
At the back of the monitor is a set of electron guns which produce a controlled stream
of electrons, much as the name implies. These beams of electrons are aimed at the
phosphorous-coated screen. The phosphors then emit a small spot of light at each position
contacted by the electron beam. As the light emitted by the phosphor fades very rapidly, a method
called refresh is used to keep the phosphor glowing and to redraw the picture repeatedly by
quickly directing the electron beam back over the same points. The maximum number of points
that can be displayed without overlap on a CRT is referred to as the resolution.
A CRT monitor displays color pictures using a combination of phosphors that emit
different colored lights. By combining the emitted light from the different phosphors, a
range of color can be generated.
The size of a monitor, measured diagonally, is typically 14, 15, 17 or 25 inches.
Liquid Crystal Display (LCD) Monitors
Liquid crystal displays (LCDs) create an image by blocking light. A backlight passes through
a layer of pixels, which are formed by liquid crystal molecules, sandwiched between two layers
of polarized glass. An electrical current forces the naturally twisted liquid crystal molecules to
unwind or coil tighter, thereby changing the amount of light that passes through the glass to
the viewer’s eyes. LCD technology uses a progressive scan and thus produces a flicker-free
display. It was introduced in monitors for notebook computers and is now also used to make
much lighter televisions that swivel. One of the important parameters to consider in selecting
an LCD display is the response time, which indicates how much time it takes for the pixels to
change colors. A faster response is needed to reduce the problem with latency (ghosting of
rapidly moving images on television and in computer video games). Currently, a fast response
rate is 3 – 5 milliseconds (for most computer monitors) and 8 milliseconds (for many large
televisions).
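As a rough illustration of why response time matters, the sketch below (illustrative only; the panel values are hypothetical) compares a quoted response time with the time available for one frame at a given refresh rate:

```python
# Illustrative sketch: comparing an LCD panel's quoted response time with the
# time available per frame at a given refresh rate. Values are examples only.

def ghosting_risk(response_time_ms, refresh_hz=60):
    """Rough check: can a pixel finish changing colour within one frame?"""
    frame_time_ms = 1000.0 / refresh_hz
    return {
        "frame_time_ms": round(frame_time_ms, 2),
        "response_time_ms": response_time_ms,
        "likely_ghosting": response_time_ms > frame_time_ms,
    }

print(ghosting_risk(5))    # a fast 5 ms panel at 60 Hz: transitions fit in a frame
print(ghosting_risk(25))   # a slow panel: transitions outlast the frame (ghosting)
```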
A fluorescent or electroluminescent light source (or backlight) (1), occupies the panel's rear.
(In a few new models, the backlighting source is a row of LEDs around the perimeter of the
screen.) In front of it are two glass-mounted polarizing filters (2) and (5), scored with super-fine
parallel grooves and oriented with their grooves facing and rotated 90 degrees to each other.
(A polarizing filter allows light waves to pass or not pass, depending on the waves' orientation;
those waves that do pass are thus oriented in a known plane.) The filters lie a tiny distance
apart, and a layer of liquid-crystal molecules (3) is sandwiched between them.
Liquid crystals, by their nature, arrange themselves in predictable structures. Here, the
molecules' natural tendency is to lie parallel with the grooves in the filters, with the excess
molecules suspending themselves in the tiny space between the filters, arranging themselves in
a helical arrangement. When light from the backlight (polarized by the rear filter) hits a given
helix, it follows the path of the molecules and is "twisted" in the proper direction to pass
through the front polarizing filter and on to your eye. (If the light was not twisted, the front
filter would partially or wholly block it.)
Now, introduce a transparent, thin grid of transistors (4) that can apply current at any given
intersection of the grid, with each intersection representing a "subpixel." Each pixel in a color
LCD employs three addressable subpixels (red, green, and blue) fronted by a matching color
filter. Charge a given transistor, and there the crystal arrangement "untwists," redirecting the
local orientation of light before it reaches the color filters and the front polarizing filter.
Depending on its orientation, the light in each subpixel may pass, pass partially, or be blocked;
by precisely regulating the transistor charges, the display controls how much light can reach a
pixel's three individual color filters and exit the front polarizing filter as visible light (6).
Because the eye perceives any given set of three subpixels as a single color dot, you simply see
a pixel as a dot of blended color; therefore, varying the pixels' ratios of red, green, and blue
creates the illusion of individually colored pixels. Now, multiply this operation by hundreds of
thousands — possibly more than a million —pixels, performed many times per second, and
respect your "humble" desktop LCD.
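The blending of the three sub-pixels into one perceived colour can be illustrated with a few lines of Python. The transmittance values below are hypothetical; a real panel sets them by charging the transistors as described above:

```python
# Illustrative sketch: how three sub-pixel light levels are perceived as one
# blended colour. Transmittance runs from 0.0 (blocked) to 1.0 (fully open).

def blended_pixel(red, green, blue):
    """Map per-sub-pixel transmittance (0.0-1.0) to an 8-bit RGB triple."""
    clamp = lambda t: max(0.0, min(1.0, t))
    return tuple(round(clamp(t) * 255) for t in (red, green, blue))

print(blended_pixel(1.0, 1.0, 1.0))   # all sub-pixels open  -> white (255, 255, 255)
print(blended_pixel(0.0, 0.0, 0.0))   # all blocked          -> black (0, 0, 0)
print(blended_pixel(1.0, 0.5, 0.0))   # mixed levels         -> an orange shade
```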
Video Technologies
Video technologies differ in many ways. However, the two major differences are
resolution and the number of colors that can be produced at those resolutions.
Resolution
Resolution is the number of pixels that are used to draw an image on the screen. If you could
count the pixels in one horizontal row across the top of the screen, and the number of pixels in
one vertical column down the side, that would properly describe the resolution that the monitor
is displaying. It’s given as two numbers. If there were 800 pixels across and 600 pixels down
the side, then the resolution would be 800 X 600. Multiply 800 by 600 and you get the
number of pixels used to draw the image (480,000 pixels in this example). A monitor must be matched
with the video card in the system. The monitor has to be capable of displaying the resolutions
and colors that the adapter can produce. It works the other way around too. If your monitor is
capable of displaying a resolution of 1,024 X 768 but your adapter can only produce 640 X
480, then that’s all you’re going to get.
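The arithmetic above is easy to check in code. The following Python sketch (the adapter and monitor capabilities are example values, not taken from the module) computes the pixel count of a resolution and shows that the usable mode is limited by the weaker of the video card and the monitor:

```python
# Illustrative sketch of the arithmetic above: total pixels at a resolution,
# and the fact that the usable mode is limited by whichever of the video
# adapter or the monitor is weaker. Example capabilities are hypothetical.

def total_pixels(width, height):
    return width * height

def usable_mode(adapter_max, monitor_max):
    """The effective resolution is the lower of the two capabilities."""
    return min(adapter_max, monitor_max, key=lambda wh: wh[0] * wh[1])

print(total_pixels(800, 600))                 # 480000 pixels
print(usable_mode((640, 480), (1024, 768)))   # the adapter limits you to (640, 480)
```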
Monochrome
Monochrome monitors are very basic displays that produce only one color. The basic text mode
in DOS is 80 characters across and 25 down. When graphics were first introduced, they were
fairly rough by today’s standards, and you had to manually type in a command to change from
text mode to graphics mode. A company called Hercules Graphics developed a video adapter
that could do this for you. Not only could it change from text to graphics, but it could do it on
the fly whenever the application required it. Today’s adapters still basically use the same
methods.
Before VGA, colors were produced digitally. Each electron beam could be either on or off. There are
three electron guns, one for each color, red, green and blue (RGB). This combination could
produce 8 colors. By cutting the intensity of the beam in half, we could get 8 more colors for a
total of 16. IBM came up with the idea of developing an analog display system that could
produce 64 different levels of intensity. Their new Video Graphics Array adapter was capable
of a resolution of 640 X 480 pixels and could display up to 256 colors from a palette of over
260,000. This technology soon became the standard for almost every video card and monitor
being developed.
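The colour counts mentioned above follow directly from counting the on/off combinations of the three guns, with and without the extra intensity bit, as this small Python sketch shows:

```python
# Illustrative sketch: counting the colours available when each of the three
# electron guns (R, G, B) is simply on or off, and when a half-intensity bit
# is added, as described above.
from itertools import product

gun_states = list(product((0, 1), repeat=3))       # each gun on or off
print(len(gun_states))                             # 8 basic colours

with_intensity = list(product((0, 1), repeat=4))   # extra intensity bit
print(len(with_intensity))                         # 16 colours in total
```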
Once again, manufacturers began to develop video adapters that added features and
enhancements to the VGA standard. Super-VGA is based on VGA standards and describes
display systems with several different resolutions and a varied number of colors. When SVGA
first came out it could be defined as having capabilities of 800 X 600 with 256 colors or 1024
X 768 with 16 colors. However, these cards and monitors are now capable of resolutions up to
1280 X 1024 with a palette of more than 16 million colors.
Extended Graphics Array was developed by IBM. It improved upon the VGA standard (also
developed by IBM) but was a proprietary adapter for use in Micro Channel Architecture
expansion slots. It had its own coprocessor and bus-mastering ability, which means that it had
the ability to execute instructions independent of the CPU. It was also a 32-bit adapter capable
of increased data transfer speeds. XGA allowed for better performance, could provide higher
resolution and more colors than the VGA and SVGA cards at the time. However, it was only
available for IBM machines. Many of these features were later incorporated by other video
card manufacturers.
The first mainstream video card to support color graphics on the PC was IBM's Color Graphics
Adapter (CGA) standard. The CGA supports several different modes; the highest quality text
mode is 80x25 characters in 16 colors. Graphics modes range from monochrome at 640x200
to 16 colors at 160x200. The card refreshes at 60 Hz. Note that the maximum resolution of
CGA is actually significantly lower than MDA: 640x200. These dots are accessible
individually when in a graphics mode but in text each character was formed from a matrix that
is 8x8, instead of the MDA's 9x14, resulting in much poorer text quality. CGA is obsolete,
having been replaced by EGA.
IBM's next standard after CGA was the Enhanced Graphics Adapter or EGA. This standard
offered improved resolutions and more colors than CGA, although the capabilities of EGA are
still quite poor compared to modern devices. EGA allowed graphical output up to 16 colors
(chosen from a palette of 64) at screen resolutions of 640x350, or 80x25 text with 16 colors,
all at a refresh rate of 60 Hz. You will occasionally run into older systems that still use EGA;
EGA-level graphics are the minimum requirement for Windows 3.x and so some very old
systems still using Windows 3.0 may be EGA. There is of course no reason to stick with EGA
when it is obsolete and VGA cards are so cheap and provide much more performance and
software compatibility.
Troubleshooting Monitors
Symptom: Check
No picture: Check that the monitor power switch and the computer power switch are in the on
position. Check that the signal cable is correctly connected to the video card. Check that the
pins of the D-Sub connector are not bent. Check whether the computer is in power-saving mode.
Power LED is not lit: Check that the power switch is in the on position and that the power cord
is correctly connected.
Image is not stable: Check that the signal cable is suitable for the video card.
Image is not centred: Adjust the H & V centre controls to obtain a proper image.
Picture is blurred: Adjust contrast and brightness.
Broken screen: Replace the screen, following the proper instructions on how to change it.
8.4.3. Health and Safety
One concern for computer-based workers should be posture. Sitting in a poor position for a
whole working day, or even part of the day, can cause a lot of health problems, especially with
the back and neck.
Another worry is spending a long time in front of an electronic screen, which can be damaging
to the eyes.
The Health and Safety Executive (HSE) advises that you must:
Analyze workstations to assess and reduce risk
Make sure workers take breaks from screen work
Provide information and training for workers
Provide eye and eyesight tests on request, and special spectacles if needed
Review the assessment when the user or DSE changes.
Common health and safety issues
Your workplace will have its unique challenges to overcome, but there are health and safety
issues familiar to every business, small or large:
Temperature, light and air conditioning
Harmful surroundings and hazardous substances, like asbestos
Workstation health and safety, like computers and other display screen equipment (DSE)
Manual handling
Noise and sound exposure
Slips, trips and falls
Handling heavy machinery, tools and equipment
8.4.4. Monitor connections
HDMI, DisplayPort, and USB-C™ are the most common types of monitor ports and cables,
and you’ll find them on the majority of modern displays. However, there are legacy options
available as well, such as VGA and DVI, that you may need to connect to older devices.
Selecting the right monitor port type for your needs is essential, because most monitors don’t
come with all five types of display ports. That’s why it’s important to know which monitor
cable is relevant for which device, as well as the benefits and disadvantages of each one of
these video port types on a new monitor.
1. HDMI
HDMI (high-definition multimedia interface) ports are the most ubiquitous on the market. And
in many ways, HDMI is the industry standard. It’s used by film companies such as Universal,
Warner Bros., and Disney to showcase their films, as well as technology and video game
manufacturers such as Panasonic, Philips, Silicon Image, Sony, and Toshiba.
While that may seem to imply that there is only one type of HDMI cable, that is not the case.
There are actually 4 active types of HDMI cables that you can connect to a monitor’s HDMI
port:
HDMI Standard: For resolutions up to 1080p. This is the most common option, but if
you want a higher resolution, you’ll need to opt for one of the other three HDMI
monitor cable types.
HDMI High Speed: For 4K resolution.
HDMI Premium High Speed: For HDR-enabled devices.
HDMI Ultra High Speed: For HDMI 2.1 features, which include uncompressed 8K
video display and 48 Gbps of bandwidth.
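As an illustration only (this is not an official HDMI selection tool), the small Python helper below maps the requirements described in the list above to a cable category:

```python
# Illustrative helper: picking an HDMI cable category from the list above
# based on what the display setup needs. The rules simply restate that list.

def pick_hdmi_cable(resolution="1080p", hdr=False, hdmi_2_1=False):
    if hdmi_2_1:
        return "HDMI Ultra High Speed"     # 8K / 48 Gbps features
    if hdr:
        return "HDMI Premium High Speed"   # HDR-enabled devices
    if resolution == "4K":
        return "HDMI High Speed"
    return "HDMI Standard"                 # up to 1080p

print(pick_hdmi_cable())                   # HDMI Standard
print(pick_hdmi_cable(resolution="4K"))    # HDMI High Speed
print(pick_hdmi_cable(hdr=True))           # HDMI Premium High Speed
```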
2. DVI
Prior to the advent of HDMI ports, DVI (digital visual interface) ports were, together with
analog VGA, one of the two standards widely used by PCs. Many monitors still come with this monitor
port type, often alongside the HDMI and VGA ports. While DVI is less common than a VGA
port, and is not capable of carrying audio, it has specific uses.
For one, a DVI port can provide you with a higher frame rate than an HDMI cable on 1080p
monitors. This is because a DVI port directly transmits digital signals, which increases
transmission speed. This can also provide a clearer picture and increase image sharpness and
detail in comparison to HDMI. A higher frame rate is a boon for gamers, in particular,
especially those with graphics cards capable of more than 60 frames per second (fps).
When selecting a DVI cable type, make sure to get a dual-link cable. This is because a dual-
link DVI cable can support up to 2560 x 1600 resolution, while a single-link cable can only
support up to 1920 x 1200 resolution.
Keep in mind that a DVI cable is not capable of delivering 4K video. If you use a monitor and
graphics card capable of 4K, and you want to utilize those capabilities, then you are better off
using an HDMI High Speed cable or the DisplayPort.
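A quick way to see these limits is to check a requested mode against the single-link and dual-link figures quoted above. The Python sketch below is illustrative; the limits are taken from this section, and refresh rate is ignored for simplicity:

```python
# Illustrative sketch: checking whether a requested mode fits within the
# single-link and dual-link DVI limits quoted above (4K is out of reach).

DVI_LIMITS = {
    "single-link": (1920, 1200),
    "dual-link": (2560, 1600),
}

def dvi_link_needed(width, height):
    for link, (max_w, max_h) in DVI_LIMITS.items():
        if width <= max_w and height <= max_h:
            return link
    return "not supported over DVI (use HDMI High Speed or DisplayPort)"

print(dvi_link_needed(1920, 1080))   # single-link is enough
print(dvi_link_needed(2560, 1600))   # needs a dual-link cable
print(dvi_link_needed(3840, 2160))   # 4K: not supported over DVI
```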
3. DISPLAYPORT
DisplayPort (DP) is a newer connection (launched in 2008) that’s primarily found on premium-
level monitors. Given that status, it is generally reserved for high-end graphics cards and is
mostly used for gaming and video editing or other visually-intensive tasks.
4. VGA
Unlike other monitor plug types, a VGA port consists of a 15-pin connector that features 3
rows of 5 pins. Each pin has a unique function that fits into a VGA cable. Aside from using a
VGA with older PCs, many projectors also use VGA cables. So if you intend to connect your
PC or monitor to a projector for screenings or use in a business or school setting, make sure
your display is VGA compatible in some way. A VGA port is also useful for playing certain
older video game consoles.
5. USB-C
You can find USB-C ports on more versatile monitors, meaning those with more features and
ports in general. While it is largely thought of as a replacement for a traditional USB port, you
can also use it in lieu of a DP or HDMI port.
One of the great things about the USB-C cable is that it's reversible, allowing you to plug it in either
way. This makes it a very convenient monitor port type, especially when you plan to use it with
different devices, since it provides you with the option to connect your smartphone, tablet, and
more to your monitor.
In addition to providing video, the USB-C port can output audio, data, and power as well.
While USB-C is still an emerging technology – its initial design was only finalized in 2014 –
its incredible versatility makes it perfect for anyone looking to use only one cable type for their
monitor.
The USB-C port is also a great option if you want to connect your laptop to a monitor. Let’s
say you work in a home office but want to connect to a larger display. If you own a newer
laptop, chances are you’ll be in luck because most of them are equipped with USB-C ports.
Simply connect the plug-and-play cable between your laptop and monitor and, voila, you’ll be
ready to enjoy a larger screen in moments.
8.4.5. Troubleshooting Video System
There are many video formats and display interfaces in use these days, and more variety means
more potential problems, more precautions and more troubleshooting. A very common complaint
is that nothing appears on the screen. The first question may seem foolish, but it catches many
real cases: is the video output device actually plugged in? If the answer is yes, the next typical
step is a double check of the power cord and the data cable. When that does not resolve the
problem either, the real troubleshooting begins.
Common symptoms
Here are the common symptoms that can indicate a problem:
VGA mode
Newer monitors are not expected to run in basic VGA as their default video mode, but they
cannot be configured so that VGA-format video is never supported. So, before concluding that
the video is not supported by the device, check the configuration. Somewhere in the Control
Panel the supported video formats and resolutions can be chosen; go there and check whether
VGA-format support is enabled. If it is not, enable it and recheck whether the video is viewable.
If that resolves the problem, the troubleshooting for that issue ends there; if not, the problem
must be looked for elsewhere.
No image on screen
Another common problem is a blank screen, no display at all, or a screen showing the message
"no video to display". The very simple things to check first are the power cord of the device
and whether power is actually reaching the output device. If the "no video to display" message
is shown, check the data cable. Most such cases are solved there, but if not, look at another
common problem area: the video file itself may be incomplete. Videos are often downloaded or
copied from somewhere, and sometimes the download did not complete or data went missing
because of an interruption during the download. Likewise, when a file is copied or moved to
another device, parts of it may not be copied properly. This, too, can result in nothing being
displayed on the screen, so the suspect file should be checked by trying to play it on another
device.
Overheat shutdown
Many times the entire display goes off just as a video is about to play. The first thought is
that the video is in a bad format, badly configured or corrupted, but another possible cause
almost never comes to mind: temperature. The system can tolerate heat only up to a certain
limit, and if that limit is crossed it will shut down. If the system has been running for a
long time and is already near its temperature limit, playing a demanding high-resolution video
pushes it over the limit, and the excess temperature makes the system shut down.
Dead pixels
Another common video complaint is a low-quality image: the colours may appear faded or
distorted in places. This is usually due to the poor quality of the input file, and nothing can
be done about that; still, try to establish the quality of the video. If the video format is a
low-resolution one such as VGA or 3GP, go to the Control Panel and check whether all of the
video output formats are selected; enabling them may improve the rendering quality. If videos
from different sources all show the same dead pixels in the same places, however, the problem
is in the display system itself, and you should contact the manufacturer.
Artifacts
Artifacts often appear when the system cannot handle the graphics quality of the video being
displayed; an additional or upgraded graphics card may be needed for a better rendering
experience. These problems are also common with cathode ray tube monitors, where magnetic
distortion makes it difficult to show a good-quality picture every time. So, before concluding
that the video is badly configured or that the file is corrupted, check the file on other
displays or monitors. If the problem appears on a cathode ray tube monitor, check the file on
an LCD monitor and see whether it runs correctly or shows the same problem. If the trouble
appears there as well, search for other possible causes.
Dim image
The image may appear dim, or individual pixels may stand out. An LCD monitor contains a very
large number of pixels: some may go dim, some may show extra brightness, and some may show
different colours. If this problem continues, contact the manufacturer for a warranty
replacement. Separately, videos sometimes play back slowly, which may be due to a hardware
acceleration problem. Freeing up some space on the system and removing temporary files can
give better quality output, and a registry cleaner can be used to clear out unused, unwanted
entries. This lessens the pressure on the hard disk, and disk performance may improve
considerably.
Flickering image
Image flickering is a very common problem on older devices, particularly cathode ray tube
monitors, where magnetic distortion and low refresh rates make it difficult to show a
good-quality picture. The checks are the same as those described under Artifacts and Dead
pixels above: confirm that the system can handle the graphics quality of the video (an
additional graphics card may be needed), test the same file on an LCD display before
concluding that it is badly configured or corrupted, and if videos from every source show the
same faded or distorted colours, contact the manufacturer about the display system itself.
Distorted image
A distorted image may be due to failing LCD pixels that no longer work together correctly. If
that is the case, contact the manufacturer to have the unit replaced within the warranty
period. The other areas of concern are the graphics settings and hardware acceleration. Older
systems may not be able to keep up with fast, high-quality video, so the images are not
rendered properly. If the same video renders well and smoothly on another LCD display, the
problem is with the old display; if the problem persists on the other LCD screen as well, the
case reverses: neither device has a problem, and the fault lies within the video itself. In
another situation the colour pattern is affected throughout the video, and in the same way for
every video file on that device. Then the problem lies in the video settings of the system,
and you should check the video rendering settings in the Control Panel. An incorrect display
configuration can cause distorted or even broken screen images.
Discoloration (degaussing)
Older systems commonly display this kind of problem: the entire picture turns reddish, bluish
or even greenish. A cathode ray tube works by directing electron beams onto the phosphor dots,
and if a beam is not emitted or deflected properly, this type of problem appears; slowing
hardware can also cause it. To confirm the cause, check the same video on a newer LCD monitor.
If the problem does not appear on the new device, it is time to replace the monitor, or at
least to have the old tube replaced with a new one.
BSOD
Sometimes the screen goes off and turns blue, the video stops and the entire system hangs. The
system was running in good condition until the video was played; as soon as the video starts
to render, the screen turns blue and everything stops. In that condition, restart the system
and check it again. As with an overheat shutdown, the machine may already be near its
temperature limit, and playing a demanding high-resolution video pushes it over, causing the
shutdown. Allow the system to cool down for a while and check the video again. If the problem
still remains, the system does not support that video quality, either because it is too high
or because it is too low.
Thus a video may face problems for many reasons: the system configuration, a settings problem,
or device incompatibility. Once the exact reason is identified, troubleshooting becomes much
easier.
8.5. BIOS
The BIOS (Basic Input/output System) is also known as the System BIOS or ROM BIOS. The
BIOS software is built into the PC, and is the first code run by a PC when powered on ('boot
firmware'). When the PC starts up, the first job for the BIOS is the power-on self-test, which
initializes and identifies system devices such as the CPU, RAM, video display card, keyboard
and mouse, hard disk drive, optical disc drive and other hardware. The BIOS then locates boot
loader software held on a peripheral device (designated as a 'boot device'), such as a hard disk
or a CD/DVD, and loads and executes that software, giving it control of the PC. This process
is known as booting, or booting up, which is short for bootstrapping.
The BIOS has a user interface (UI), typically a menu system accessed by pressing a certain key
on the keyboard when the PC starts. In the BIOS UI, a user can:
Configure hardware
Set the system clock
Enable or disable system components
Select which devices are eligible to be a potential boot device
Set various password prompts, such as a password for securing access to the BIOS UI
functions itself and preventing malicious users from booting the system from
unauthorized peripheral devices.
The BIOS provides a small library of basic input/output functions used to operate and control
the peripherals (such as the keyboard, text display functions and so forth), and these software
library functions are callable by external software.
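As a practical aside, on a Linux machine the installed firmware's identification strings can be read from the sysfs DMI entries, which is a quick way to confirm the BIOS vendor, version and date before considering a flash. The sketch below assumes a Linux system with the standard /sys/class/dmi/id fields; on other systems it simply reports the values as unavailable:

```python
# Minimal sketch, assuming Linux: the kernel exposes the firmware's
# identification strings under /sys/class/dmi/id.
from pathlib import Path

DMI_DIR = Path("/sys/class/dmi/id")

def read_bios_info():
    info = {}
    for field in ("bios_vendor", "bios_version", "bios_date"):
        try:
            info[field] = (DMI_DIR / field).read_text().strip()
        except OSError:
            info[field] = "unavailable"   # e.g. not Linux, or no permission
    return info

if __name__ == "__main__":
    print(read_bios_info())
```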
The role of the BIOS has changed over time. As of 2011, the BIOS is being replaced by the
more complex Extensible Firmware Interface (EFI) in many new machines, but BIOS remains
in widespread use. EFI booting has been supported only in Microsoft Windows versions
supporting GPT, in the Linux kernel, and in Mac OS X on Intel-based Macs. However, the
distinction between BIOS and EFI is rarely made in terminology by the average computer user,
making BIOS a catch-all term for both systems.
In modern PCs the BIOS is stored in rewritable memory, allowing the contents to be replaced
or 'rewritten'. This rewriting of the contents is sometimes termed flashing. This can be done by
a special program, usually provided by the system's manufacturer, or at POST, with a BIOS
image in a hard drive or USB flash drive. A file containing such contents is sometimes termed
'a BIOS image'. BIOS might be reflashed in order to upgrade to a newer version to fix bugs or
provide improved performance or to support newer hardware, or a reflashing operation might
be needed to fix a damaged BIOS. BIOS may also be "flashed" by putting the file on the root
of a USB drive and booting.
Most laptops come with a very strong BIOS password capability that locks up the hardware
and makes the laptop completely unusable. This is the password that has to be entered before
the operating system loads, usually on a black screen a few seconds after the laptop is started.
Of course BIOS password can be set on a PC too, but there it is stored together with the other
BIOS settings – date, time, hard disk size, etc. It is very easy to reset the BIOS settings (and
the password) on a PC – usually there is a jumper near the BIOS battery on the motherboard
that needs to be moved from connecting pins 1+2 to pins 2+3 for a few seconds and then moved
back to pins 1+2. Next time the PC is started it will alert you “… BIOS settings invalid…
Defaults loaded… Press F1 to continue…” or something similar, and…. the password is gone!
However most laptops store the BIOS password in a special chip, sometimes even hidden under
the CPU, that is not affected when the rest of the BIOS settings are reset. This makes the
removal of a BIOS password on a laptop almost impossible. The only option in most cases is
to replace the chip, which is quite an expensive and risky procedure and, of course, not supported
by the manufacturers.
Some manufacturers (like Dell) can generate a “master password” for a particular laptop (from
their service tag) if sufficient proof of ownership is provided. Others (like IBM) would advise
replacing the laptop’s motherboard (very expensive). On some old laptops (4 – 5 years or older)
the BIOS password can still be reset relatively easily, usually by shorting two solder points on
the motherboard or by plugging a special plug in the printer port, etc.
BIOS Boot Specification: If an expansion ROM wishes to change the way the system boots
(such as from a network device or a SCSI adapter for which the BIOS has no driver code), it
can use the BIOS Boot Specification (BBS) to register its ability to do so. Once the expansion
ROM has registered using the BBS, the user can select among the available boot options from
within the BIOS's user interface. This is why most BBS-compliant PC BIOS implementations
will not allow the user to enter the BIOS's user interface until the expansion ROMs have
finished executing and registering themselves with the BBS.
Plug and Play
Plug and Play (PnP) consists of a series of standards designed to enable devices
to self-configure. PnP makes device installation trivial: you simply install a
device, and it automatically configures its I/O address, IRQ and DMA with no user
intervention.
For PnP to work properly, the PC needs two items:
o A PnP BIOS – if you have a Pentium or later computer, you have a PnP BIOS.
o A PnP operating system, such as Windows 9x or Windows 2000. Older operating
systems, such as DOS and Windows 3.x, could use PnP devices only with the help of
special drivers and utility programs.
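As a hedged, illustrative check (assuming a modern Windows system where the Get-PnpDevice PowerShell cmdlet is available), the Python sketch below lists the PnP devices that the system has detected and configured:

```python
# Hedged sketch: listing detected Plug and Play devices on a modern Windows
# system by calling the PowerShell Get-PnpDevice cmdlet (Windows 8 / Server
# 2012 and later). On older systems or other platforms the command will fail.
import subprocess

def list_pnp_devices():
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "Get-PnpDevice | Select-Object Status, Class, FriendlyName | Format-Table -AutoSize"],
        capture_output=True, text=True,
    )
    return result.stdout if result.returncode == 0 else result.stderr

if __name__ == "__main__":
    print(list_pnp_devices())
```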
8.6. Printer
A printer is a device that prints text or illustrations on paper.
Printers are add-on peripheral output devices that transform text and graphics from
your PC into hard-copy output on paper or transparency.
Printers can be black-and-white or color.
Printers were traditionally connected to the computer system through the 25-pin female
parallel (LPT) port, but now they are usually connected using USB ports.
Depending on the print quality and speed, printers can be basically of the following kinds:
A. Dot Matrix: The oldest kind of printer common in the PC world is the dot matrix,
which was used on PCs primarily in the 1980s. These units use an array of pins and a
ribbon to print text and, in some cases, graphics. They are noisy, slow and low in
quality, and have been almost entirely pushed out of the market by inkjet and laser
printers. They are still sold for special needs, especially for printing multi-part
continuous forms for business purposes (lasers and inkjets can't print through
multi-sheet carbon forms).
B. Laser: These printers use technology similar to that of a photocopier, using light to
create high-quality printouts at high speed. They are expensive, however, and they are
generally limited to black and white output unless you want to spend a lot of money on
a color laser printer.
C. Ink Jet: The most popular type of printer sold today. Ink jet printers use a special print
head that ejects microscopic dots of colored ink on to paper to create an output image.
They are relatively inexpensive to buy and almost all will print in color.
D. Daisy-Wheel: Similar to a ball-head typewriter, this type of printer has a plastic or metal
wheel on which the shape of each character stands out in relief. A hammer presses the
wheel against a ribbon, which in turn makes an ink stain in the shape of the character
on the paper. Daisy-wheel printers produce letter-quality print but cannot print graphics.
E. LCD & LED: Similar to a laser printer, but uses liquid crystals or light-emitting diodes
rather than a laser to produce an image on the drum.
F. Line Printer: Contains a chain of characters or pins that print an entire line at one time.
Line printers are very fast, but produce low-quality print.
G. Thermal Printer: An inexpensive printer that works by pushing heated pins against
heat sensitive paper. Thermal printers are widely used in calculators and fax machines.
Printers are also classified by the following characteristics:
Quality of type: The output produced by printers is said to be either letter
quality (as good as a typewriter), near letter quality, or draft quality. Only daisy-
wheel, ink-jet, and laser printers produce letter-quality type. Some dot-matrix
printers claim letter-quality print, but if you look closely, you can see the
difference.
Speed: Measured in characters per second (cps) or pages per minute (ppm), the
speed of printers varies widely. Daisy-wheel printers tend to be the slowest,
printing about 30 cps. Line printers are fastest (up to 3,000 lines per minute).
Dot-matrix printers can print up to 500 cps, and laser printers range from about
4 to 20 text pages per minute.
Impact or non-impact: Impact printers include all printers that work by
striking an ink ribbon. Daisy-wheel, dot-matrix, and line printers are impact
printers. Nonimpact printers include laser printers and ink-jet printers. The
important difference between impact and non-impact printers is that impact
printers are much noisier.
Graphics: Some printers (daisy-wheel and line printers) can print only text.
Other printers can print both text and graphics.
Fonts: Some printers, notably dot-matrix printers, are limited to one or a few
fonts. In contrast, laser and ink-jet printers are capable of printing an almost
unlimited variety of fonts. Daisy-wheel printers can also print different fonts,
but you need to change the daisy wheel, making it difficult to mix fonts in the
same document.
Installing Printers
To install a printer on your computer, apply the following steps:
Start – Control Panel
Choose Printers and Other Hardware
Add a printer
Click next
Choose a local or network printer – next
Choose a manufacturer and printer model – next
Type a printer name and choose to make your printer a default printer or not – next
Choose to share your printer or not – next
Choose to print a test page or not – next – finish.
Troubleshooting Printers
Printer problems are sometimes the most difficult computer peripheral problems to diagnose. The
common printer problems and their likely troubleshooting methods are mentioned below.
1. Feed and output Problems
When paper feed problems occur, either the paper becomes jammed in the feed mechanism or no
paper is fed at all, halting the printing process. If there is too much paper in the paper
tray, the feed mechanism may pick up several sheets of paper and try to send them through the
printer simultaneously, which is usually followed by a paper jam.
2. Out of paper Error
You will encounter this problem if there is no paper in the printer. Load enough paper into
the printer.
3. Input/output errors
I/O errors occur when the computer is unable to communicate with the printer. To troubleshoot
this problem, check the following possibilities.
Is the printer plugged in?
Is the printer turned on?
Are all cables firmly connected?
Is the proper driver for your printer installed?
Are the I/O settings for the printer correct?
If all of these check out, try restarting your computer; simply restarting can sometimes fix
an I/O error. If the problem remains, connect the printer to another computer.
If the printer does not work on the other computer either, you know you have a printer problem;
if it works fine on the other computer, you have a configuration problem.
4. No Default Printer Errors
A no default printer error indicates no printer has been installed or that you haven’t set a default
printer. To set a default printer:
Click the start button
Select control panel
Select printers. The printers dialog box will appear
Right-click (alternate-click) the icon of the printer you want to set as the default.
Select Set as Default Printer from the shortcut menu.
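The same task can also be done programmatically. The sketch below is illustrative and assumes a Windows machine with the third-party pywin32 package installed (pip install pywin32); the printer name shown is only an example:

```python
# Hedged sketch: listing printers and setting the default printer on Windows
# with the pywin32 package. The printer name below is a placeholder example.
import win32print

def list_printers():
    flags = win32print.PRINTER_ENUM_LOCAL | win32print.PRINTER_ENUM_CONNECTIONS
    # At the default info level, index 2 of each tuple is the printer name.
    return [p[2] for p in win32print.EnumPrinters(flags)]

def set_default(printer_name):
    win32print.SetDefaultPrinter(printer_name)

if __name__ == "__main__":
    print("Installed printers:", list_printers())
    # set_default("HP LaserJet 1020")   # example name; uncomment to change default
    print("Current default:", win32print.GetDefaultPrinter())
```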
5. Toner Low and Ink Low Errors
When the toner cartridge in a laser printer runs low, the printer issues a warning before the
cartridge runs out completely. Replace the cartridge as soon as you see the warning to avoid
half-finished or delayed print jobs.
Chapter Nine
Software Concept
9.1. Introduction
Software is a set of instructions, data or programs used to operate computers and execute
specific tasks. It is the opposite of hardware, which describes the physical aspects of a
computer. Software is a generic term used to refer to applications, scripts and programs that
run on a device.
As you know, hardware devices need instructions to function. A set of instructions that
achieves a single outcome is called a program or procedure. Many programs functioning
together to perform a task make up software.
For example, a word-processing software enables the user to create, edit and save documents.
A web browser enables the user to view and share web pages and multimedia files. Software
falls into the following categories −
System Software
Application Software
Utility Software
9.2. History of Operating System
A. The First Generation (1940's to early 1950's)
When electronic computers were first introduced in the 1940's they were created without any
operating systems. All programming was done in absolute machine language, often by wiring
up plugboards to control the machine's basic functions. During this generation computers were
generally used to solve simple math calculations, so operating systems were not necessarily
needed.
B. The Second Generation (1955-1965)
The first operating system was introduced in the early 1950's; it was called GMOS and was
created by General Motors for IBM's 701 machine. Operating systems in the 1950's were
called single-stream batch processing systems because the data was submitted in groups. These
new machines were called mainframes, and they were used by professional operators in large
computer rooms. Since there was such a high price tag on these machines, only government
agencies or large corporations were able to afford them.
C. The Third Generation (1965-1980)
By the late 1960's operating system designers were able to develop the system of
multiprogramming, in which a computer is able to perform multiple jobs at the
same time. The introduction of multiprogramming was a major part in the development of
operating systems because it allowed a CPU to be busy nearly 100 percent of the time that it
was in operation. Another major development during the third generation was the phenomenal
growth of minicomputers, starting with the DEC PDP-1 in 1961. The PDP-1 had only 4K of
18-bit words, but at $120,000 per machine (less than 5 percent of the price of a 7094), it sold
like hotcakes. These minicomputers helped create a whole new industry and the development of
more PDPs. These PDPs helped lead to the creation of the personal computers that appear
in the fourth generation.
D. The Fourth Generation (1980-Present Day)
The fourth generation of operating systems saw the creation of personal computing. Although
these computers were very similar to the minicomputers developed in the third generation,
personal computers cost a very small fraction of what minicomputers cost. A personal
computer was so affordable that it made it possible for a single individual to own one for
personal use, while minicomputers were still at such a high price that only corporations
could afford to have them. One of the major factors in the creation of personal computing was
the birth of Microsoft and the Windows operating system. Microsoft was founded in 1975, when
Paul Allen and Bill Gates had a vision to take personal computing to the next level. They
introduced MS-DOS in 1981; although it was effective, it created much difficulty for people
who tried to understand its cryptic commands. Windows went on to become the most widely used
desktop operating system, with releases such as Windows 95, Windows 98, Windows XP (for many
years the most widely used version), and later Windows 7. Along with Microsoft, Apple developed
the other major operating system created in the 1980's. Steve Jobs, co-founder of Apple, created the Apple
Macintosh which was a huge success due to the fact that it was so user friendly. Windows
development throughout the later years was influenced by the Macintosh, and this created a
strong competition between the two companies. Today all of our electronic devices run off of
operating systems, from our computers and smartphones, to ATM machines and
motor vehicles. And as technology advances, so do operating systems.
macOS
According to StatCounter Global Stats, macOS users account for less than 10% of global
operating systems—much lower than the percentage of Windows users (more than 80%). One
reason for this is that Apple computers tend to be more expensive. However, many people do
prefer the look and feel of macOS over Windows.
Linux
Linux (pronounced LINN-ux) is a family of open-source operating systems, which means they
can be modified and distributed by anyone around the world. This is different from proprietary
software like Windows, which can only be modified by the company that owns it. The
advantages of Linux are that it is free, and there are many different distributions—or
versions—you can choose from.
According to StatCounter Global Stats, Linux users account for less than 2% of global
operating systems. However, most servers run Linux because it's relatively easy to customize.
Operating systems for mobile devices
The operating systems we've been talking about so far were designed to run
on desktop and laptop computers. Mobile devices such as phones, tablet computers, and MP3
players are different from desktop and laptop computers, so they run operating systems that are
designed specifically for mobile devices. Examples of mobile operating systems include Apple
iOS and Google Android.
Operating systems for mobile devices generally aren't as fully featured as those made for
desktop and laptop computers, and they aren't able to run all of the same software. However,
you can still do a lot of things with them, like watch movies, browse the Web, manage your
calendar, and play games.