
Training report on Computer Hardware & networking

A REPORT
ON

INDUSTRIAL TRAINING
AT

COMPUTER HARDWARE & NETWORKING
PCS TECHNOLOGY, THALTEJ


Prepared By:

CHAUHAN AMIT D. (S-130607044)


Internal Guide: Mr. P. P. Gajjar (Lecturer, EC Dept.)

External Guide: Er. Swaraj Lukose (SDE, Vastrapur)

ELECTRONICS & COMM. ENGINEERING DEPT., GOVT. POLYTECHNIC COLLEGE, AHMEDABAD-15.

Page 1 of 95


____________________________________________________________________________

This is to certify that Mr. Amit D. Chauhan (S-130607044), a student of the VI semester, EC Dept., has successfully completed his training at PCS Technology Ltd., Thaltej, Ahmedabad.

Training Period: 7th Jan. to 30th April, 2010
Date of Submission:
Place:

Guided By: Mr. P. P. Gajjar ______________________

Head of Dept.: Prof. M. N. Charel ______________________

ACKNOWLEDGEMENT

As a student of Hardware & Network Engineering, it is natural to want to see and experience practical tasks rather than only the knowledge found in the pages of books. I thank PCS Technology for providing the opportunity to be part of their culture, and the selection committee for the chance to work with such a large organization in my field of interest. I would also like to thank Mr. Swaraj Lukose (Sr. Engg.) for making me more practical and expert in the telecommunication field, and the technical staff for sharing the related knowledge. Last but not least, I am very thankful to my college guide, Mr. P. P. Gajjar, for his guidance throughout the training period; his valuable comments helped greatly in completing the documentation. I am also thankful to my parents for their support, without which this would not have been possible. Finally, I would like to thank every person whose active or passive help was present during this period. Thank you all.


ABSTRACT
This training provided practical experience to complement the theoretical concepts of computer hardware and computer networking. The training at PCS Technology was a great experience covering the computer and its peripherals, providing information about various types of hard disks, motherboards, monitors, keyboards, mice, etc. Various practical problems and their possible solutions were also experienced, such as fault finding on the motherboard. Information about the different types of printers and scanners was also covered. The training further introduced networking and its various components, such as NICs, cables, Ethernet cards, switches, and hubs, along with software-side topics like IP addressing and DNS addressing. It also covered software installation, including Windows XP, Windows 2000 Server, and Oracle. Overall, this training has been a great learning experience, opening the doors to the ever-growing field of computer networking.

INDEX

Sr. No.  Title                                     Page No.
01       PCS Profile                               06
02       PCS Services                              08
03       Introduction to Computer Hardware         10
04       Personal Computer Hardware                13
05       Introduction to Windows XP installation   43
06       Introduction to Networking                49
07       Types of Networks                         58
08       Network Models                            65
09       Network devices                           76
10       Network Addressing                        88
11       Adv. & Disadv. of Networking              92
12       References                                94


PCS PROFILE

PCS Technology Limited provides information technology (IT) hardware and software solutions in India and internationally. It engages in hardware manufacturing, software development and services, and facilities management, and also offers location-based services, packaged software, system and network integration services, banking software solutions, healthcare software solutions, and network security solutions. The company's products include computers for personal use, business use, corporate personal computers, servers, and notebook computers. PCS Technology's IT services include third-party maintenance services, such as corrective maintenance and repair and replacement of spares for desktops, laptops, servers, printers, and networking equipment; and managed IT services, including IT help desk services, asset management, vendor management, desktop management, server management, LAN/WAN management, system administration, mail/messaging management, desktop imaging services, IT security management, data center management, and application and database management. The company also offers document management solutions, including enterprise document management solutions, enterprise workflow solutions, check truncation, engineering solutions, and scanning services. In addition, it provides consulting, audit, and training services in the information security management system and information technology service management areas; and telematics

and location intelligence solutions and services. PCS Technology also offers its products and solutions in the United States, Europe, Africa, West Asia, and the United Arab Emirates through an international branch network. The company was founded in 1983 and is headquartered in Mumbai, India. Today it is a $100 million company with a global presence of 32 offices across the globe. The company has the experience and the understanding to improve business performance, business processes, and technology platforms. It possesses a keen understanding of client requirements; hence its solutions focus on delivering competitive business advantage and increased productivity for all its clients in a cost-effective manner. PCS understands and addresses the complex and varying needs of Indian personal computer users. Equipped with reliable and up-to-date technology, its personal computers cater to the customer's requirement of competitive pricing coupled with reliability. PCS has been in the computer business for over 28 years, and with India's second-largest service network on call, the company has the capacity to provide assistance whenever and wherever required.
Focus on Customer Delight: While our culture of thought leadership encourages us to explore new concepts and solutions, our strategic focus of delivering competitive advantage for our clients' business success never varies. The testimony to this lies in the fact that our clients, who include blue-chip and Fortune 500 companies, choose us for repeat business. We meet client challenges by delivering cost-effective solutions while maintaining a roadmap that can evolve to fit their future needs.

Partnering with Global Leaders: We believe that in order to keep pace with global trends, it is imperative to stay connected to the leaders in the field of technology. To that end, we have formed strategic relationships with a number of companies worldwide, which ensures that our products and services always keep pace with global trends.

Highlights:
Global IT service provider.
32+ years of experience in managing mission-critical contracts.
Global representation across 5 continents.
Widespread service network.
State-of-the-art manufacturing facilities.

Installed base of over 5 million PCs.
Distinct quality process, ISO 9001:2000 & ISO 14001 certified.

Vision: To be amongst the 5 most admired Information Technology solution providers globally, with leadership focus in the delivery of products, solutions, and services which are globally competitive.

The New Generation of PCS:
Delivery of products and services which are globally competitive.
Continuous improvement of our products, processes, and people.
A learning organization of committed and contributing employees.
Continuous satisfaction of our customers, shareholders, and employees.
Expansion in our areas of core competency and development of new competencies.

Mission Statement of the company: PCS will provide products and services that not only meet but exceed the expectations of our customers, through planned and continuous improvement of our services, products, processes, and people. The mission of the Human Resources Department is to recruit, develop, and retain a high-caliber, diverse workforce.

Employee-centric organization:
Well-defined policies and processes.
Premeditated induction/orientation programme to suit individual needs.
Long-term engagement with multiple project opportunities.
Diversity in verticals/domain focus: finance, telecom, technology, shipping, airlines, medical.
Well-carved learning curve with performance management system.
Rewards & recognition.
Cross-training/learning opportunities & redeployment opportunities.
Multiple geographic location project opportunities.
International best practices.

Recognition:
Enhance performance by continuous learning.
Continuous motivation.
Provide practical feedback for learning.
Glass-door policy to address any issue or concern.
Culture of performance.
Lead employees on the learning curve.
Rewards & recognition.

Building Commitment:
Focus - employees are given equal opportunities.
Involvement - marching together towards a common goal.
Development - encourage opportunities for learning and growth.
Gratitude - recognize performance (formal or informal).
Accountability - employees are given freedom to work and outshine.

Delivering quality solutions and products is our area of expertise. For this purpose, management will actively and conscientiously pursue the following:
Plan and provide training to employees for acquiring knowledge.
Actively promote and assist the concept of personnel development and team building at various levels, and provide motivation and encouragement to individuals and groups towards higher levels of achievement.
Support, recognize, and reward ideas and actions, both by individuals and groups, which are innovative, improve quality, and are customer-oriented.
Provide suitable resources to achieve the above objectives.
Monitor the quality of our services and products to meet international standards.

PCS SERVICES
OBJECTIVES:
Provide greater customer satisfaction.
Provide services and products of international standards.

Highlights of the company's performance:
Global IT service provider.
32+ years of experience in managing mission-critical contracts.
Global representation across 5 continents.
Widespread service network.
State-of-the-art manufacturing facilities.
Installed base of over 5 million PCs.
Distinct quality process, ISO 9001:2000 & ISO 14001 certified.

Products and services offered by the company:
Facility Management Services
Software Services
Video Conferencing
Logistics Consulting
Networking Solutions
Healthcare Solutions
Location Intelligence
Data Centre Operations
Staffing
EMS

Products:
Server Products
Desktop Products

Industries Served:
Telecom
ECM
ERP

INTRODUCTION TO COMPUTER HARDWARE


INTRODUCTION:
The computer is a programmable electronic device that can store, retrieve, and process data. The basic idea develops as early as the 1200s, when a Muslim cleric proposes solving problems with a series of written procedures. In 1975 the first personal computer is marketed in kit form, and Bill Gates, with others, writes a BASIC compiler for the machine. IBM (International Business Machines) introduces its PC in 1981. Continuing today, companies strive to reduce the size and price of PCs while increasing capacity.

TYPES OF COMPUTER:

Personal Computer (Microcomputer):


The computer used in the home or office is known as a personal computer (PC). Initially the PC was made by IBM, the biggest and oldest computer manufacturer in the world. Later, IBM freely allowed other manufacturers to copy its design, so the PC became popular in a short time with personal users and small business groups. Such PCs are also known as IBM clones. The PC has a low cost and a smaller size, and it is usable by individuals without an intermediate computer operator.


Basically, PCs are identified by their processor, such as the Intel x86, Advanced Micro Devices (AMD), Zilog Z80, or Motorola 6800. Common operating systems such as Microsoft Windows, Mac OS, UNIX, and Linux are used on PCs. A PC is capable of connecting with other PCs on a LAN.

Mainframe Computer:
A mainframe computer is a large, powerful computer that handles processing for many users simultaneously. A mainframe is sometimes called "big iron". It is a high-performance computer which requires greater availability and security than a personal computer. In the past it was basically associated with centralized rather than distributed computing.

Users connect to the mainframe using terminals and submit their tasks to the mainframe for processing. A terminal is a device that has a screen and keyboard for input and output, but it does not do its own processing. The processing power of the mainframe is time-shared among all of the users. Mainframes are used in situations where a company wants the processing power and information storage in a centralized location.

Minicomputer:
A minicomputer is a multi-user computer that is less powerful than a mainframe. The minicomputer has been largely superseded by high-end microcomputer workstations serving multiple users.


Workstation Computer:

A workstation is a powerful, high-end microcomputer. Workstations contain one or more microprocessor CPUs and may be used by a single user for applications requiring more power than a typical PC. The term workstation also has an alternate meaning: in networking, any client computer connected to the network is a workstation; such a client workstation could be a personal computer or another system. Among the most successful makers of the first kind of workstation are Sun Microsystems, Hewlett-Packard, DEC, and IBM.

Super Computer:

A supercomputer is a mainframe computer that has been optimized for speed and processing power. A supercomputer is typically used for applications that must handle very large databases and/or do a

great amount of computation. Most supercomputers are actually multiple computers that work in parallel. The Blue Pacific, IBM's supercomputer, was built to simulate the physics of a nuclear explosion. It operated at 3.9 trillion operations per second, 15,000 times faster than the average PC, and consisted of 5,800 processors containing a total of 2.6 trillion bytes of memory.

Personal computer hardware

Hardware of a modern Personal Computer.



Monitor
Motherboard
CPU
Power supply
Storage devices
Keyboard
Mouse

1. Display monitors (VDU)


These are the most common output devices. Running costs are low and output is silent. LCDs (liquid crystal displays) are used for laptop computers. They have the advantage of low power consumption compared to cathode-ray (electron-beam) devices; they are lighter and have a flat profile. However, they are more expensive to produce, particularly if a larger screen is required, and the display works more slowly than the traditional cathode-ray screen. CRT monitors, which use a cathode-ray tube, are used on most desktop displays. VGA (Video Graphics Array), introduced by IBM in 1987, handles a palette of 256 colours with a resolution up to 640x480. SVGA, which is widely used in current monitors, has much better resolution and refresh rates; it allows 1024x768 and 1280x1024 resolutions in 16 and 256 colours. Modern cards produce 16,777,216 colours.

TFT stands for Thin Film Transistor, a type of LCD flat-panel display screen in which each pixel is controlled by from one to four transistors. TFT technology provides the best resolution of all the flat-panel techniques, but it is also the most expensive. TFT screens are sometimes called active-matrix LCDs. (1280 x 1024 resolution)

The resolution of a monitor indicates how densely packed the pixels are. In general, the more pixels (often expressed in dots per inch), the sharper the image. Common screen sizes are 15 or 17 inches. Bit mapping: an image built up in memory pixel by pixel is used to automatically generate the screen display. With black-and-white images only one bit per pixel is needed (logic 0 to represent black, logic 1 to represent white); with 4 colours, two bits are required, e.g. 00=black, 01=red, 10=yellow, 11=white; three bits give eight colours, and so on. Colour monitors therefore require more memory.
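The bit-mapping arithmetic above (pixels multiplied by bits per pixel) can be sketched in a short Python snippet. This is an illustrative calculation, not part of the report; the function name is made up for the example.

```python
import math

def framebuffer_bytes(width, height, colours):
    """Bytes of memory needed to bit-map a screen: each pixel needs
    ceil(log2(colours)) bits (2 colours -> 1 bit, 4 -> 2 bits, ...)."""
    bits_per_pixel = max(1, math.ceil(math.log2(colours)))
    total_bits = width * height * bits_per_pixel
    return total_bits // 8  # eight bits per byte

# A monochrome 640x480 screen: 640*480*1 bits = 38,400 bytes
print(framebuffer_bytes(640, 480, 2))          # 38400
# 1024x768 with 16,777,216 (24-bit) colours: 2,359,296 bytes
print(framebuffer_bytes(1024, 768, 16777216))  # 2359296
```

This makes concrete why colour monitors need more memory: going from 1 bit to 24 bits per pixel multiplies the framebuffer size by 24.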

2. Motherboard


Function:
The motherboard is a printed circuit board (PCB) that contains and controls the components that are responsible for processing data.

Description:
The motherboard contains the CPU, memory, and basic controllers for the system. Motherboards are often sold with a CPU. The motherboard has a real-time clock (RTC), ROM BIOS, CMOS RAM, RAM sockets, bus slots for attaching devices to a bus, CPU socket(s) or slot(s), cache RAM slots or sockets, jumpers, a keyboard controller, interrupts, internal connectors, and external connectors. The bus architecture and the type of components on it determine a computer's performance. The motherboard with its ribbon cables, power supply, CPU, and RAM is designated a "bare bones" system.

Clock:

The motherboard contains a system clock to synchronize the operation of the bus and other components. Jumpers on the motherboard allow a user to set different clock rates to work with the CPU; other jumpers control other components on the motherboard. 286 and 386 motherboards had an extra socket on board for a math coprocessor, which is responsible for non-integer calculations and is also known as an FPU (floating-point unit). 486DXs and all generations of Pentiums have a math coprocessor built into the CPU chip.

Computer Mother board and its constituent components

A typical PC mother board with important components is given below:

1. Mouse & keyboard
2. USB
3. Parallel port
4. CPU chip
5. RAM slots
6. Floppy controller
7. IDE controller
8. PCI slot
9. ISA slot
10. CMOS battery
11. AGP slot
12. CPU slot
13. Power supply plug-in

1. Mouse & keyboard:

Keyboard connectors are basically of two types. All PCs have a keyboard port connected directly to the motherboard. The oldest, but still quite common, type is a special DIN, and most PCs until recently retained this style of connector. The AT-style keyboard connector is quickly disappearing, being replaced by the smaller mini-DIN PS/2-style keyboard connector. You can use an AT-style keyboard with a PS/2-style socket (or the other way around) by using a converter. Although the AT connector is unique to keyboards on PCs, the PS/2-style mini-DIN is also used in more modern PCs for the mouse. Fortunately, most PCs that use the mini-DIN for both the keyboard and the mouse clearly mark each mini-DIN socket as to its correct use. Some keyboards have a USB connection, but these are fairly rare compared to PS/2-connection keyboards.

2. USB (Universal serial bus):


USB is the general-purpose connection for the PC. You can find USB versions of many different devices, such as mice, keyboards, scanners, cameras, and even printers. A USB connector's distinctive rectangular shape makes it easily recognizable. USB has a number of features that make it particularly popular on PCs. First, USB devices are hot-swappable: you can insert or remove them without restarting your system.

3. Parallel port:
Most printers use a special connector called a parallel port. Parallel ports carry data on more than one wire, as opposed to the serial port, which uses only one wire. Parallel ports use a 25-pin female DB connector. Parallel ports are supported by the motherboard, either through a direct connection or through a dongle.

4. CPU Chip :
The central processing unit, also called the microprocessor, performs all the calculations that take place inside a PC. CPUs come in a variety of shapes and sizes. Modern CPUs generate a lot of heat and thus require a cooling fan or heat sink. The cooling device (such as a cooling fan) is removable, although some CPU manufacturers sell the CPU with a fan permanently attached.

5. RAM slots:
Random-Access Memory (RAM) stores programs and data currently being used by the CPU. RAM is measured in units called bytes. RAM has been packaged in many different ways. The most current package is called a 168-pin DIMM (Dual Inline Memory module).

6. Floppy controller:

The floppy drive connects to the computer via a 34-pin ribbon cable, which in turn connects to the motherboard. The floppy controller is the circuitry used to control the floppy drive.

7. IDE controller:
Industry standards define two common types of hard drives: EIDE and SCSI. The majority of PCs use EIDE drives; SCSI drives show up in high-end PCs such as network servers or graphical workstations. The EIDE drive connects to the computer via a 2-inch-wide, 40-pin ribbon cable, which in turn connects to the motherboard. The IDE controller is responsible for controlling the hard drive.

8. PCI slot:
Intel introduced the Peripheral Component Interconnect (PCI) bus protocol. The PCI bus is used to connect I/O devices (such as NICs or RAID controllers) to the main logic of the computer. The PCI bus has replaced the ISA bus.

9. ISA slot:
(Industry Standard Architecture) It is the standard architecture of the Expansion bus. Motherboard may contain some slots to connect ISA compatible cards.

10.CMOS Battery
To provide CMOS with power when the computer is turned off, all motherboards come with a battery. These batteries mount on the motherboard in one of three ways: the obsolete external battery, the most common onboard battery, or the built-in battery.

11.AGP slot:
If you have a modern motherboard, you will almost certainly notice a single connector that looks like a PCI slot but is slightly shorter and usually brown. You also probably have a video card inserted into this slot. This is an Accelerated Graphics Port (AGP) slot.

12.CPU slot:
To install the CPU, just slide it straight down into the slot. Special notches in the slot make it impossible to install them incorrectly. So remember if it does not go easily, it is probably not correct. Be sure to plug in the CPU fan's power.

13. Power supply plug in:


The power supply, as its name implies, provides the necessary electrical power to make the PC operate. The power supply takes standard 110-V AC power and converts it into 12-volt, 5-volt, and 3.3-volt DC power.

3. Central Processing Unit



The Processor
The processor (really a short form of microprocessor, and also often called the CPU or central processing unit) is the central component of the PC. It is the brain that runs the show inside the PC. All work that you do on your computer is performed directly or indirectly by the processor. Obviously, it is one of the most important components of the PC, if not the most important. It is also, scientifically, not only one of the most amazing parts of the PC, but one of the most amazing devices in the world of technology.

The processor plays a significant role in the following important aspects of your computer system:

Performance: The processor is probably the most important single determinant of system performance in the PC. While other components also play a key role in determining performance, the processor's capabilities dictate the maximum performance of a system; the other devices only allow the processor to reach its full potential.

Software Support: Newer, faster processors enable the use of the latest software. In addition, new processors, such as the Pentium with MMX technology, enable the use of specialized software not usable on earlier machines.

Reliability and Stability: The quality of the processor is one factor that determines how reliably your system will run. While most processors are very dependable, some are not. This also depends to some extent on the age of the processor and how much energy it consumes.

Energy Consumption and Cooling: Originally, processors consumed relatively little power compared to other system devices. Newer processors can consume a great deal of power. Power consumption has an impact on everything from cooling-method selection to overall system reliability.

Motherboard Support: The processor you decide to use in your system will be a major determining factor in what sort of chipset you must use, and hence what motherboard you buy. The motherboard in turn dictates many facets of your system's capabilities and performance.

This section discusses many different aspects of this important component in full detail, and concludes with a look at the major processor families used on the PC platform, from the original IBM PC processors to the latest technology. Note that the explanation of how the processor works and what its characteristics are is kept mostly separate from the information on particular processors. This is done to keep the information organized and easier to find. However, summary tables are also included in each of the description sections showing how the various processors fare in that particular area. This shows the evolution of the technology and lets you more easily compare processors in various ways.


The CPU (central processing unit) is the brain of the computer. Sometimes referred to simply as the processor or central processor, the CPU is where most calculations take place. In terms of computing power, the CPU is the most important element of a computer system. A modern CPU is usually small and square, with many short, rounded, metallic connectors on its underside; some older CPUs have pins instead of metallic connectors. The CPU attaches directly to a CPU socket (or sometimes a "slot") on the motherboard. The CPU is inserted into the socket pin-side down, and a small lever helps to secure the processor. After running even a short while, modern CPUs can get very hot. To help dissipate this heat, it is necessary to attach a heat sink and a small fan directly on top of the CPU; typically, these come bundled with a CPU purchase, and other, more advanced cooling options are also available.

The components of a CPU are:
The arithmetic logic unit (ALU), which performs arithmetic and logical operations.
The control unit, which extracts instructions from memory, then decodes and executes them, calling on the ALU when necessary.
Main memory: the instructions and data that the CPU is working on are stored in the CPU's memory store, normally having been copied to memory from backing storage.

The power of the CPU is determined by:
Clock speed: all processors depend on a series of pulses or clock signals to control the internal movement and processing of data. In general, the faster the clock speed (measured in megahertz, MHz), the faster the processor will be able to process data. A typical modern microprocessor has a clock speed of around 3 GHz.
Word length: the number of bits the processor is designed to handle as a unit - 8, 16, 32, or 64.
Architecture: the number of input/output paths it can handle and its internal circuitry - CISC (Complex Instruction Set Computer) and RISC (Reduced Instruction Set Computer).
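The effect of word length mentioned above can be illustrated with a short Python sketch (an illustrative calculation, not from the report): an n-bit word can represent 2^n distinct values, so the largest unsigned integer it can hold is 2^n - 1.

```python
def max_unsigned(word_length_bits):
    """Largest unsigned integer an n-bit word can hold: 2**n - 1."""
    return 2 ** word_length_bits - 1

for bits in (8, 16, 32, 64):
    print(f"{bits}-bit word: 0 .. {max_unsigned(bits)}")
# An 8-bit machine counts to 255 in one word; a 64-bit machine
# counts to 18,446,744,073,709,551,615.
```

This is one reason a wider word length makes a processor more powerful: larger numbers (and wider addresses) can be handled in a single operation.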

4. Memory

Memory associated with the CPU is short-term, volatile memory: its contents are removed when replaced by new instructions and data, or when electrical power is turned off.

Read Only Memory (ROM)


Read only memory (ROM) contains data that is stored in the chip when it is manufactured. This data cannot be changed and it remains permanently stored in the chip. Normally the memory will contain a small amount of ROM to store the bootstrap loader program. This is the program that sets the computer up when it is first switched on. Its main task is to load the operating system - the program that runs the computer system - as part of the power up sequence.

Random Access Memory (RAM)


RAM is where the operating system, programs, and data in use are stored after booting up. This is called volatile memory: when you switch the machine off, the contents of this memory are lost. RAM is a type of computer memory that can be accessed randomly; that is, any byte of memory can be accessed without touching the preceding bytes. RAM is the most common type of memory found in computers and other devices, such as printers. There are two basic types of RAM: Dynamic RAM (DRAM)

Static RAM (SRAM)


The two types differ in the technology they use to hold data, dynamic RAM being the more common type. Dynamic RAM needs to be refreshed thousands of times per second. Static RAM does not need to be refreshed, which makes it faster; but it is also more expensive than dynamic RAM. Both types of RAM are volatile, meaning that they lose their contents when the power is turned off.

Virtual Memory
When the machine runs out of RAM it is possible to take over part of the hard disk space to use as memory. This has the effect of causing the machine to appear to be running very slowly. If the machine is not shut down properly, there is a danger of the hard disk clogging up with temporary files. The purpose of virtual memory is to enlarge the address space, the set of addresses a program can utilize.

Cache
A memory cache, sometimes called a cache store or RAM cache, is a portion of memory made of high-speed static RAM (SRAM) instead of the slower and cheaper dynamic RAM (DRAM) used for main memory. Memory caching is effective because most programs access the same data or instructions over and over. By keeping as much of this information as possible in SRAM, the computer avoids accessing the slower DRAM. Disk caching works under the same principle as memory caching, but instead of using high-speed SRAM, a disk cache uses conventional main memory. The most recently accessed data from the disk (as well as adjacent sectors) is stored in a memory buffer. When a program needs to access data

from the disk, it first checks the disk cache to see if the data is there. Disk caching can dramatically improve the performance of applications, because accessing a byte of data in RAM can be thousands of times faster than accessing a byte on a hard disk.
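The caching principle described above (keep recently used items in fast storage, fall back to slow storage only on a miss) can be sketched in Python. This is a minimal illustrative model, not the report's material; the class and the fake "slow read" function are made up for the example.

```python
from collections import OrderedDict

class DiskCache:
    """Tiny LRU cache: recently read 'sectors' are kept in a fast
    in-memory buffer; a miss falls back to the slow backing store."""

    def __init__(self, slow_read, capacity=4):
        self.slow_read = slow_read      # function simulating a disk read
        self.capacity = capacity
        self.buffer = OrderedDict()     # sector -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, sector):
        if sector in self.buffer:       # cache hit: serve from fast memory
            self.hits += 1
            self.buffer.move_to_end(sector)
        else:                           # cache miss: go to the slow disk
            self.misses += 1
            self.buffer[sector] = self.slow_read(sector)
            if len(self.buffer) > self.capacity:
                self.buffer.popitem(last=False)  # evict least recently used
        return self.buffer[sector]

cache = DiskCache(slow_read=lambda s: f"data-{s}")
for s in [1, 2, 1, 1, 3]:
    cache.read(s)
print(cache.hits, cache.misses)  # prints "2 3": repeat reads of sector 1 hit
```

The same structure models a memory cache: because programs re-read the same data over and over, even a small fast buffer absorbs most accesses.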

Bits and Byte


Everything in a computer's memory is represented by an on (1) or off (0) electronic pulse: the binary system. One binary digit is called a bit. Several electronic pulses are grouped together - seven, with a special check bit called a parity bit. Eight bits equal a byte. The computer industry calls this arrangement the American Standard Code for Information Interchange, or ASCII. We call a computer whose CPU is designed for bytes of 8 bits an 8-bit machine; supercomputers work with 64- or 128-bit words.

Bit           1 or 0
Byte          8 bits
Kilobyte (K)  1024 bytes
Megabyte (M)  1,000,000 bytes
Gigabyte (G)  1 billion bytes

The size of the associated memory store will affect processor performance. A larger store will allow more data to be present in memory and therefore available for immediate processing. There will then be fewer delays while the processor waits for data to be copied from backing store. A larger memory also allows the processor to have several different programs or tasks available in central memory at the same time.

5. Power supply unit:



A power supply unit (PSU) is the component that supplies power to the other components in a computer. More specifically, a power supply unit is typically designed to convert general-purpose alternating current (AC) electric power from the mains (100-127 V in North America, parts of South America, Japan, and Taiwan; 220-240 V in most of the rest of the world) to usable low-voltage DC power for the internal components of the computer. Some power supplies have a switch to change between 230 V and 115 V. Other models have automatic sensors that switch input voltage automatically, or are able to accept any voltage between those limits. The most common computer power supplies are built to conform to the ATX form factor. This enables different power supplies to be interchangeable with different components inside the computer. ATX power supplies are also designed to turn on and off using a signal from the motherboard, and provide support for modern functions such as the standby mode available in many computers. The most recent specification of the ATX standard PSU as of mid-2008 is version 2.31.

Power rating
Computer power supplies are rated based on their maximum output power. Typical ratings range from below 300 W, for small form factor systems intended as ordinary home computers whose use is limited to Internet surfing and burning and playing DVDs, up to about 500 W. Power supplies used by gamers and enthusiasts mostly range from 450 W to 1400 W. Typical gaming PCs feature power supplies in the range of 500-800 W, with higher-end PCs demanding 800-1400 W supplies. The highest-end units reach up to 2 kW and are intended mainly for servers and, to a lesser degree, extreme-performance computers with multiple processors, several hard disks and multiple graphics cards. The power rating of a PC power supply is not officially certified and is self-claimed by each manufacturer. A common way to reach the power figure for PC PSUs is by adding the power available on each rail, which does not give a true power figure. It is therefore possible to overload a PSU on one rail without ever drawing the maximum rated power.
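The labeling practice described above, quoting the sum of per-rail maxima as "total power", can be illustrated with made-up rail figures for an imaginary PSU:

```python
# Hypothetical per-rail limits (volts, max amps) printed on an imaginary label.
rails = {"+3.3V": (3.3, 25), "+5V": (5.0, 30), "+12V": (12.0, 34)}

# Adding up each rail's maximum gives the optimistic "label" wattage...
label_watts = sum(volts * amps for volts, amps in rails.values())
print(round(label_watts))  # 640

# ...but one rail can still be overloaded on its own, well under that figure:
overload_12v = 12.0 * 40         # drawing 40 A on the 12 V rail
print(overload_12v > 12.0 * 34)  # True: the 12 V limit is exceeded
```

All rail figures here are invented for illustration; the point is only that the per-rail sum is not a continuous-capacity guarantee.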

Sometimes manufacturers inflate their power ratings in order to gain an advantage in the market. This can be done due to a lack of clear standards regarding power supply labeling and testing. Common tactics include:

Advertising the peak power rather than the continuous power;
Determining the continuous output power capability at unrealistically low temperatures (room temperature, as opposed to 40 °C, a more likely temperature inside a PC case);
Advertising total power as a measure of capacity, when modern systems are almost totally reliant on the current available from the 12 volt line(s).

This may mean that if:


PSU A has a peak rating of 550 watts at 25 °C, with 25 amps (300 W) on the 12 volt line, and
PSU B has a continuous rating of 450 watts at 40 °C, with 33 amps (400 W) on the 12 volt line,

and those ratings are accurate, then PSU B would have to be considered the vastly superior unit, despite its lower overall power rating; PSU A may only be capable of delivering a fraction of its rated power under real-world conditions. This tendency has in turn led to greatly overspecified power supply recommendations, and a shortage of high-quality power supplies with reasonable capacities. Very few computers require more than 300-350 watts maximum. Higher-end computers such as servers and gaming machines with multiple high-power GPUs are among the few exceptions.
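The two labels above can be checked arithmetically; the 12 V figures quoted in watts follow directly from P = V x I:

```python
V = 12.0
psu_a_amps = 25  # PSU A: peak-rated, 25 A on the 12 V line
psu_b_amps = 33  # PSU B: continuous-rated, 33 A on the 12 V line

print(V * psu_a_amps)  # 300.0 W, matching the 300 W quoted for PSU A
print(V * psu_b_amps)  # 396.0 W, i.e. roughly the 400 W quoted for PSU B
```

So although PSU A carries the bigger headline number, PSU B delivers about a third more continuous power on the rail that actually matters.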

Appearance
Most computer power supplies are a square metal box, and have a large bundle of wires emerging from one end. Opposite the wire bundle is the back face of the power supply, with an air vent and a C14 IEC connector to supply AC power. There may optionally be a power switch and/or a voltage selector switch. A label on one side of the box lists technical information about the power supply, including safety certifications and maximum output power. Common certification marks for safety are the UL mark, GS mark, TUV, NEMKO, SEMKO, DEMKO, FIMKO, CCC, CSA, VDE, GOST R and BSMI. Common certification marks for EMI/RFI are the CE mark, FCC and C-tick. The CE mark is required for power supplies sold in Europe and India. A RoHS or 80 PLUS mark can also sometimes be seen. Dimensions of an ATX power supply are 150 mm width, 86 mm height, and typically 140 mm depth, although the depth can vary from brand to brand.

Connectors

Typically, power supplies have the following connectors:

PC main power connector (usually called P1): the connector that goes to the motherboard to provide it with power. The connector has 20 or 24 pins. One of the pins belongs to the PS-ON wire (it is usually green). This connector is the largest of all the connectors. In older AT power supplies, this connector was split in two: P8 and P9. A power supply with a 24-pin connector can be used on a motherboard with a 20-pin connector. In cases where the motherboard has a 24-pin connector, some power supplies come with two connectors (one 20-pin and one 4-pin) which can be used together to form the 24-pin connector.

ATX12V 4-pin power connector (also called the P4 power connector): a second connector that goes to the motherboard (in addition to the main 24-pin connector) to supply dedicated power for the processor. High-end motherboards and processors require more power, so EPS12V uses an 8-pin connector.

4-pin peripheral power connectors (usually called Molex, after the manufacturer): the smaller connectors that go to the various disk drives of the computer. Most of them have four wires: two black, one red, and one yellow. Unlike standard mains electrical wire color-coding, each black wire is a ground, the red wire is +5 V, and the yellow wire is +12 V. In some cases these are also used to provide additional power to PCI cards such as FireWire 800 cards.

4-pin Berg power connector (usually called a mini-connector or "mini-Molex"): one of the smallest connectors; it supplies the floppy drive with power. In some cases, it can be used as an auxiliary connector for AGP video cards. Its cable configuration is similar to the peripheral connector.

Auxiliary power connectors: there are several types of auxiliary connectors designed to provide additional power if it is needed.

Serial ATA power connector: a 15-pin connector for components which use SATA power plugs. This connector supplies power at three different voltages: +3.3, +5, and +12 volts.

6-pin connector: most modern computer power supplies include 6-pin connectors, which are generally used for PCI Express graphics cards; a newly introduced 8-pin connector appears on the latest model power supplies. Each PCI Express 6-pin connector can output a maximum of 75 W.

6+2 pin connector: for the purpose of backwards compatibility, some connectors designed for use with PCI Express graphics cards feature this pin configuration. It allows either a 6-pin card or an 8-pin card to be connected by using two separate connection modules wired into the same sheath: one with 6 pins and another with 2 pins.

A C14 IEC connector with an appropriate C13 cord is used to attach the power supply to the local power grid.


AT vs. ATX

A typical installation of an ATX form factor computer power supply.

There are two basic differences between AT and ATX power supplies: the connectors that provide power to the motherboard, and the soft switch. On older AT power supplies, the power-on switch wire from the front of the computer is connected directly to the power supply. On newer ATX power supplies, the power switch on the front of the computer goes to the motherboard over a connector labeled something like PS_ON, Power SW, or SW Power. This allows other hardware and/or software to turn the system on and off. The motherboard controls the power supply through pin 14 of the 20-pin connector or pin 16 of the 24-pin connector. This pin carries 5 V when the power supply is in standby. It can be grounded to turn the power supply on without having to turn on the rest of the components, which is useful for testing, or for using an ATX power supply for other purposes. AT stands for Advanced Technology, while ATX stands for Advanced Technology eXtended.

Laptops
Most portable computers have power supplies that provide 25 to 100 watts. In portable computers (such as laptops) there is usually an external power supply (sometimes referred to as a "power brick" due to its similarity in size, shape, and weight to a real brick) which converts AC power to one DC voltage (most commonly 19 V); further DC-DC conversion occurs within the laptop to supply the various DC voltages required by the other components of the portable computer.

Servers
Some web servers use a single-voltage 12 volt power supply. All other voltages are generated by voltage regulator modules on the motherboard.


Energy efficiency
Computer power supplies are generally about 70-75% efficient. That means that for a 75% efficient power supply to produce 75 W of DC output, it would require 100 W of AC input and dissipate the remaining 25 W as heat. Higher-quality power supplies can be over 80% efficient; more energy-efficient PSUs waste less energy as heat and require less airflow to cool, and as a result are quieter. Google's server power supplies are more than 90% efficient. HP's server power supplies have reached 94% efficiency.

It is important to match the capacity of a power supply to the power needs of the computer. The energy efficiency of power supplies drops significantly at low loads; efficiency generally peaks at about 50-75% load. The curve varies from model to model (examples of how this curve looks can be seen in test reports of energy-efficient models found on the 80 PLUS website). As a rule of thumb for standard power supplies, it is usually appropriate to buy a supply such that the calculated typical consumption of one's computer is about 60% of the rated capacity of the supply, provided that the calculated maximum consumption of the computer does not exceed the rated capacity of the supply. Note that advice on overall power supply ratings given by the manufacturer of a single component, typically a graphics card, should be treated with great skepticism: these manufacturers wish to minimise support issues due to an under-rated power supply and are willing to advise customers to overrate it to avoid this.

Various initiatives are underway to improve the efficiency of computer power supplies. The Climate Savers Computing Initiative promotes energy saving and reduction of greenhouse gas emissions by encouraging development and use of more efficient power supplies. 80 PLUS certifies power supplies that meet certain efficiency criteria, and encourages their use via financial incentives.
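The efficiency arithmetic and the 60% sizing rule of thumb above can be expressed directly (the 300 W "typical consumption" figure below is invented for illustration):

```python
def ac_input_needed(dc_output_watts, efficiency):
    """AC draw required to deliver a given DC output at a given efficiency."""
    return dc_output_watts / efficiency

# The worked example from the text: 75 W of DC output at 75% efficiency.
ac_in = ac_input_needed(75, 0.75)
print(ac_in)       # 100.0 W drawn from the mains
print(ac_in - 75)  # 25.0 W dissipated as heat

# Sizing rule of thumb: typical consumption should be ~60% of the rating.
typical_draw = 300  # watts, a hypothetical measured figure
print(round(typical_draw / 0.60))  # 500, the suggested PSU rating
```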

Small facts to consider

Redundant power supply.

- Life span is usually measured in mean time between failures (MTBF). Higher MTBF ratings are preferable for longer device life and reliability. Quality construction, consisting of industrial-grade electrical components and/or a larger or higher-speed fan, can contribute to a higher MTBF rating by keeping critical components cool, thus preventing the unit from overheating. Overheating is a major cause of PSU failure. An MTBF value of 100,000 hours is not uncommon.
- Power supplies may have passive or active power factor correction (PFC). Passive PFC is a simple way of increasing the power factor by putting a coil in series with the primary filter capacitors. Active PFC is more complex and can achieve a higher power factor, up to 99%.
- In computer power supplies that have more than one +12 V power rail, it is preferable for stability reasons to spread the power load evenly over the 12 V rails, to help avoid overloading one of the rails.
  o Multiple 12 V power supply rails are separately current limited as a safety feature; they are not generated separately. Despite widespread belief to the contrary, this separation has no effect on mutual interference between supply rails.
  o The ATX12V 2.x and EPS12V power supply standards defer to the IEC 60950 standard, which requires that no more than 240 volt-amps be present between any two accessible points. Thus, each wire must be current-limited to no more than 20 A; typical supplies guarantee 18 A without triggering the current limit. Power supplies capable of delivering more than 18 A at 12 V connect wires in groups to two or more current sensors which will shut down the supply if excess current flows. Unlike a fuse or circuit breaker, these limits reset as soon as the overload is removed.
  o Because of the above standards, almost all high-power supplies claim to implement separate rails; however, this claim is often false. Many omit the necessary current-limit circuitry, both for cost reasons and because it is an irritation to customers. (The lack is sometimes advertised as a feature under names like "rail fusion" or "current sharing".)
- When the computer is powered down but the power supply is still on, it can be started remotely via Wake-on-LAN and Wake-on-Ring, or locally via Keyboard Power On (KBPO) if the motherboard supports it.
- Most computer power supplies are a type of switched-mode power supply (SMPS).
- Computer power supplies may have short-circuit protection, overpower (overload) protection, overvoltage protection, undervoltage protection, overcurrent protection, and over-temperature protection.
- Some power supplies come with sleeved cables, which are aesthetically nicer, make wiring easier and cleaner, and have a less detrimental effect on airflow.
- There is a popular misconception that a greater power capacity (watt output capacity) is always better. Since supplies are self-certified, a manufacturer's claims may be double or more what is actually provided. Although a too-large power supply has an extra margin of safety against overloading, a larger unit is often less efficient at lower loads (under 20% of its total capability) and therefore will waste more electricity than a more appropriately sized unit. Additionally, computer power supplies generally do not function properly if they are too lightly loaded; under no-load conditions they may shut down or malfunction.

Another popular misconception is that the greater the total watt capacity, the more suitable the power supply is for higher-end graphics cards. The most important factor for judging a PSU's suitability for a given graphics card is the PSU's total 12 V output, as it is that voltage on which modern graphics cards operate. If the total 12 V output stated on the PSU is higher than the suggested minimum of the card, then that PSU can fully supply the card. It is, however, recommended that a PSU not just barely cover the graphics card's demands, as there are other components in the PC that depend on the 12 V output, including the CPU and disk drives. Power supplies can feature magnetic amplifiers or a double-forward converter circuit design.
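The suitability check described above can be sketched with made-up numbers for a hypothetical card and PSU; the function and all figures here are illustrative, not from any real datasheet:

```python
def psu_ok_for_card(psu_12v_watts, card_min_12v_watts, other_12v_watts):
    """True if the PSU's 12 V output covers the card plus the other 12 V
    loads (CPU, disk drives) that share the same rail(s)."""
    return psu_12v_watts >= card_min_12v_watts + other_12v_watts

# Hypothetical figures: 400 W available on 12 V, card wants 300 W minimum.
print(psu_ok_for_card(400, 300, other_12v_watts=60))   # True: 360 W needed
print(psu_ok_for_card(400, 300, other_12v_watts=150))  # False: 450 W needed
```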


6. STORAGE DEVICES
Primary Storage
The computer's main memory (RAM) is known as primary storage; I shall deal with this in Processing.

Backing Storage
For computers to be useful you need to store data and programs that are not currently being processed outside main memory, for retrieval at a later date. The choice of backing store will depend on volume, physical storage space, accessibility of data, etc.

Formatting
Both magnetic tape and magnetic disc require formatting before use. This is essentially a process of marking out the medium into usable areas in a way that allows the tape unit or disc drive to find its way about the medium when reading and writing at a later time.

Magnetic tapes

Tapes for computers are similar to tapes used to store music. Storing data on tapes is considerably cheaper than storing data on disks. A tape of this type can store tens of gigabytes of data, but it is likely to hold one file only. Accessing data on tapes, however, is much slower than accessing data on disks. Tapes are sequential-access media, which means that to get to a particular point on the tape, the tape must go through all the preceding points. Because tapes are so slow, they are generally used only for long-term storage and backup. Data to be used regularly is almost always kept on a disk. Tapes are also used for transporting large amounts of data. Tapes are sometimes called streamers or streaming tapes. Tape cartridges are a smaller form of magnetic tape. They are sealed units usually holding 0.25-inch tape and, like an audio cassette, both reels are built into the cartridge. Cartridges are used almost exclusively to hold backup copies of hard discs on small computer systems (a tape streamer). A cartridge can store up to about 20 gigabytes of data.
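Sequential access can be contrasted with direct access in a toy cost model: reaching block n on a tape costs n steps past the preceding blocks, while a disk reaches any block in roughly constant time. The step and seek costs below are arbitrary units for illustration, not real drive timings:

```python
def tape_cost(target_block, step_cost=1):
    # A tape must pass every preceding block to reach the target.
    return target_block * step_cost

def disk_cost(target_block, seek_cost=10):
    # A disk can move its head almost directly to any block.
    return seek_cost

print(tape_cost(5))        # 5: cheap when the data is near the start
print(tape_cost(500_000))  # 500000: expensive deep into the tape
print(disk_cost(500_000))  # 10: roughly flat regardless of position
```

This is why tapes suit backup (one long linear write) while regularly used data lives on disk.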


Floppy Disk

A soft magnetic disk. 3½-inch: "floppy" is something of a misnomer for these disks, as they are encased in a rigid envelope. They have a storage capacity from 400K to 1.4MB of data.

Hard Disk.

A magnetic disk on which you can store computer data. The term hard is used to distinguish it from a soft, or floppy, disk. Hard disks hold more data and are faster than floppy disks. A hard disk, for example, can store anywhere from 10 to more than 100 gigabytes, whereas most floppies have a maximum storage capacity of 1.4 megabytes.

Optical Disc
A storage medium from which data is read, and to which it is written, by lasers. Optical disks can store much more data (up to 6 gigabytes, i.e. 6 billion bytes) than most portable magnetic media, such as floppies.

CD-ROM: The data is permanent and can be read any number of times, but CD-ROMs cannot be modified. A CD-ROM can store about 650 megabytes of data. They are used in situations where the data does not age quickly. Examples include encyclopedias, large catalogues and telephone directories.

WORM: Stands for write-once, read-many. With a WORM disk drive, you can write data onto a WORM disk, but only once. After that, the WORM disk behaves just like a CD-ROM.

CD-RW: Optical disks that can be erased and loaded with new data, just like magnetic disks. These are often referred to as EO (erasable optical) disks. Standard CDs store up to 700MB of data.

DVD-RW: DVD discs hold a whopping 4.7GB per side. That means enough capacity for a full-length DVD-quality movie, fewer discs to swap during backups, and less wasted space on the shelf.

7. Keyboard
The QWERTY keyboard remains the most common input device, although in terms of speed of input it is one of the most limited. It is however suitable for entering a wide range of data, and it is a device that is familiar to every clerical worker. The QWERTY layout was designed to reduce the chance of jamming on early mechanical typewriters. This was achieved by spreading the most commonly used letters around the keyboard, effectively slowing the typist down. Keyboards do exist that are non-standard in layout, as do those with ergonomic designs and non-standard arrangements to reduce the incidence of RSI.

Standard key meanings


The PC keyboard with its various keys has a long history of evolution reaching back to teletypewriters. In addition to the 'old' standard keys, the PC keyboard has accumulated several special keys over the years. Some of the additions have been inspired by the opportunity or requirement for improving user productivity with general office application software, while other slightly more general keyboard additions have become factory standards after being introduced by certain operating system or GUI software vendors such as Microsoft.

From mechanical typewriters

Shift selects the upper character, or the upper case of letters. The Shift key in typewriters was attached to a lever that moved the character types so that the uppercase characters could be printed on the paper. Unlike mechanical typewriters, PC keyboards do not capitalize all letters properly when both shift keys are engaged simultaneously.

Caps Lock selects upper case or, if Shift is pressed, lower case of letters. In mechanical typewriters, it worked like the Shift key, but also used a lock to keep the Shift key depressed. The lock was released by pressing the Shift key.

Enter wraps to the next line or activates the default or selected option. ASCII keyboards labeled it CR or Return. Typewriters used a lever that would return the cylinder with the paper to the start of the line.

From Teletype keyboards

Ctrl shifts the value of letters and numbers from the ASCII graphics range down into the ASCII control characters. For example, Ctrl-S is XOFF (stops many programs as they print to screen) and Ctrl-Q is XON (resumes printing stopped by Ctrl-S).

Esc produces an ASCII escape character. It may be used to exit menus or modes.

Tab produces an ASCII tab character and moves to the next tab stop.

~ is the tilde, an accent backspaced and printed over other letters for non-English languages. Nowadays the key does not produce a backspaceable character and is used for 'not' or 'circa'.

` is a grave accent or backtick, also formerly backspaced over letters to write non-English languages; on some systems it is used as an opening quote. The single quote ' is normally used for an acute accent.

^ is a circumflex, another accent for non-English languages. Also used to indicate exponentiation where superscript is not available.

* is an asterisk, used to indicate a note, or multiplication.

_ is an underscore, backspaced and overprinted to add emphasis.

| is a vertical bar, originally used as a typographic separator for optical character recognition. Many character sets break it in the middle so it cannot be confused with the numeral "1" or the letter "l". This character is often known as a "pipe" or a "fencepost".
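The Ctrl mapping described above, shifting a letter down into the ASCII control range, amounts to keeping only the low five bits of the character code (code & 0x1F):

```python
def ctrl(char):
    """ASCII control code produced by holding Ctrl and pressing a key."""
    return ord(char.upper()) & 0x1F

print(hex(ctrl('S')))  # 0x13, the XOFF character (pauses output)
print(hex(ctrl('Q')))  # 0x11, the XON character (resumes output)
print(hex(ctrl('[')))  # 0x1b, ESC
```

This is why, on a terminal, Ctrl-[ behaves exactly like the Esc key: both send the same control character.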

Invented for the PC

The Windows key (also known as the Super key) is a quick way to open the Start menu in Windows' standard Explorer shell, and can usually be configured to behave similarly in other graphical user interfaces, for Windows and other operating systems.

The Menu key brings up a context menu, similar to right-clicking.

Function keys are the numbered keys; their use varies by program, but F1 is often "help".

Arrow keys move the cursor on the screen. When shifted, they select items.

Home moves to the start of text, usually the left side of the screen. End moves to the end of text, usually the right-most edge of the current line. PgUp and PgDn move through the document by pages.

Delete deletes the character after the cursor position, or the selected items. Insert toggles between "insertion" and "overwrite" mode.

Print Screen originally printed a text image of the screen; nowadays it often takes a screenshot. In combination with Alt, it produces a different keycode, SysRq.

Num Lock toggles between states for the numeric keypad. When off, the keypad acts as arrow and navigational keys; when on, it is a 10-key pad similar to a standard calculator. Preferences vary so much that a favorite default for this key can often be configured in the BIOS configuration. Its continued existence on keyboards that separate out the arrow keys has mostly historical reasons.

Scroll Lock is little used. On modern software, typing text usually causes earlier text to scroll off the top of the screen or window. Some old programs could disable this and restart at the top of the window when Scroll Lock was pressed. The advantage is that the entire screen full of text does not shift, making it easier to read. It was also used to lock the cursor on its line and scroll the work area under it. On spreadsheets such as Microsoft Excel, it locks the cell pointer on the current cell, allowing the user to use the arrow keys to move the view window around without moving the cell pointer. On some consoles (such as the Linux console), it prevents scrolling of messages until another key combination is pressed.

Pause pauses either output or processing. In combination with Control, it produces a different keycode, for Break. Ctrl-Break traditionally stopped programs in DOS. Ctrl-Break is also used to halt execution of the debugger in some programming environments such as Microsoft Visual Studio. In combination with the Windows key, it brings up the System Properties window in Microsoft Windows environments.

Alt shifts the letters and numbers into the range above hex 0x80, where the international characters and special characters exist in the PC's standard character set. Alt plus a number typed on the numeric pad produces special characters; see Windows Alt keys.

AltGr works like the Ctrl+Alt key combination, often used in combination with other keys to print special characters like the backslash on non-English keyboards.

Fn may be present on compact keyboards such as those built into laptop computers. When depressed in combination with other keys, it either enables the user to access key functions that do not have dedicated keys on the compact keyboard (such as the numeric block), or it controls hardware functions such as switching between the built-in screen and an external display, changing screen brightness, or changing speaker volume. These alternate meanings are usually indicated with text or symbols of a different color printed on the key, with the 'Fn' key text having that same color.

Turbo appears on some keyboards, usually on the right side of the right Shift key. When depressed in combination with a function key it sets the key repeat rate.

Connectors

There are three types of connector used to connect a PC keyboard to the main system unit. All three are mechanically different from each other, but the first two are electrically identical (except for XT keyboards, which used a connector mechanically identical to the later AT connector, but not electrically compatible with it). The three connector types are listed below in chronological order:

5-pin DIN (DIN 41524) "AT" connector.
6-pin Mini-DIN (DIN 45322) "PS/2" connector.
4-pin USB connector.

Concept Keyboard

This is a keyboard-like device with a number of pictures instead of the usual keys. The user indicates input by pressing one of the pictures. It is used as an input device for young children in computer-based learning systems, and it is sometimes used in cafés and pubs where there is a computerised till. A feature of this device is the capability to change the overlay to provide a different set of pictures. This is particularly useful in computer-based learning, as a single keyboard can then be used with a variety of overlays.

8. Mouse

A computer mouse with the most common standard features: two buttons and a scroll wheel, which can also act as a third button.

In computing, a mouse (plural mice, mouses, or mouse devices) is a pointing device that functions by detecting two-dimensional motion relative to its supporting surface. Physically, a mouse consists of an object held under one of the user's hands, with one or more buttons. It sometimes features other elements, such as "wheels", which allow the user to perform various system-dependent operations, or extra buttons or features that can add more control or dimensional input. The mouse's motion typically translates into the motion of a cursor on a display, which allows for fine control of a graphical user interface.

The name mouse, which originated at the Stanford Research Institute, derives from the resemblance of early models (which had a cord attached to the rear part of the device, suggesting the idea of a tail) to the common mouse. The first marketed integrated mouse, shipped as part of a computer and intended for personal computer navigation, came with the Xerox 8010 Star Information System in 1981. However, the mouse remained relatively obscure until the appearance of the Apple Macintosh; in 1984, PC columnist John C. Dvorak ironically commented on the release of this new computer with a mouse: "There is no evidence that people want to use these things." A mouse now comes with most computers, and many other varieties can be bought separately.

Technologies

Early mice

The world's first trackball, invented by Tom Cranston, Fred Longstaff and Kenyon Taylor working on the Royal Canadian Navy's DATAR project in 1952. It used a standard Canadian five-pin bowling ball. It was not patented, as it was a secret military project.

Early mouse patents. From left to right: opposing track wheels by Engelbart, Nov. 1970, U.S. Patent 3,541,541; ball and wheel by Rider, Sept. 1974, U.S. Patent 3,835,464; ball and two rollers with spring by Opocensky, Oct. 1976, U.S. Patent 3,987,685.

The first computer mouse, held by inventor Douglas Engelbart, showing the wheels that make contact with the working surface.

A Smaky mouse, as invented at the EPFL by Jean-Daniel Nicoud and André Guignard.

Douglas Engelbart at the Stanford Research Institute invented the first mouse prototype in 1963, with the assistance of his colleague Bill English. Engelbart never received any royalties for it, as his patent ran out before the mouse became widely used in personal computers. The invention of the mouse was just a small part of Engelbart's much larger project, aimed at augmenting human intellect. Eleven years earlier, the Royal Canadian Navy had invented the trackball, using a Canadian five-pin bowling ball as a user interface for their DATAR system. Several other experimental pointing devices developed for Engelbart's oN-Line System (NLS) exploited different body movements, for example head-mounted devices attached to the chin or nose, but ultimately the mouse won out because of its simplicity and convenience. The first mouse, a bulky device (pictured), used two gear-wheels perpendicular to each other: the rotation of each wheel translated into motion along one axis. Engelbart received patent US3541541 on November 17, 1970 for an "X-Y Position Indicator for a Display System". At the time, Engelbart envisaged that users would hold the mouse continuously in one hand and type on a five-key chord keyset with the other. The concept was preceded in the 19th century by the telautograph, which also anticipated the fax machine.

Mechanical mouse devices


Mechanical mouse, shown with the top cover removed

Operating an opto-mechanical mouse.


1: Moving the mouse turns the ball.
2: X and Y rollers grip the ball and transfer movement.
3: Optical encoding disks include light holes.
4: Infrared LEDs shine through the disks.
5: Sensors gather light pulses to convert to X and Y vectors.

Bill English, builder of Engelbart's original mouse, invented the ball mouse in 1972 while working for Xerox PARC.[12] The ball mouse replaced the external wheels with a single ball that could rotate in any direction. It came as part of the hardware package of the Xerox Alto computer. Perpendicular chopper wheels housed inside the mouse's body chopped beams of light on the way to light sensors, thus detecting in their turn the motion of the ball. This variant of the mouse resembled an inverted trackball and became the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse when required.

The ball mouse utilizes two rollers rolling against two sides of the ball. One roller detects the forward-backward motion of the mouse and the other the left-right motion. The motion of these two rollers causes two disc-like encoder wheels to rotate, interrupting optical beams to generate electrical signals. The mouse sends these signals to the computer system by means of connecting wires. The driver software in the system converts the signals into motion of the mouse cursor along the X and Y axes on the screen.

Ball mice and wheel mice were manufactured for Xerox by Jack Hawley, doing business as The Mouse House in Berkeley, California, starting in 1975.

Based on another invention by Jack Hawley, proprietor of the Mouse House, Honeywell produced another type of mechanical mouse. Instead of a ball, it had two wheels rotating at off axes. Keytronic later produced a similar product.

Modern computer mice took form at the École polytechnique fédérale de Lausanne (EPFL) under the inspiration of Professor Jean-Daniel Nicoud and at the hands of engineer and watchmaker André Guignard. This new design incorporated a single hard rubber mouseball and three buttons, and remained a common design until the mainstream adoption of the scroll-wheel mouse during the 1990s. In 1985, René Sommer added a microprocessor to Nicoud's and Guignard's design. Through this innovation, Sommer is credited with inventing a significant component of the mouse, which made it more "intelligent", though optical mice from Mouse Systems had incorporated microprocessors by 1984.

Another type of mechanical mouse, the "analog mouse" (now generally regarded as obsolete), uses potentiometers rather than encoder wheels, and is typically designed to be plug-compatible with an analog joystick. The "Color Mouse", originally marketed by Radio Shack for their Color Computer (but also usable on MS-DOS machines equipped with analog joystick ports, provided the software accepted joystick input), was the best-known example.

Optical mice
An optical mouse uses a light-emitting diode and photodiodes to detect movement relative to the underlying surface, rather than moving some of its parts as in a mechanical mouse.

Xerox optical mouse chip


Early optical mice, first demonstrated by two independent inventors in 1980, came in two different varieties: Some, such as those invented by Steve Kirsch of MIT and Mouse Systems Corporation, used an infrared LED and a four-quadrant infrared sensor to detect grid lines printed with infrared absorbing ink on a special metallic surface. Predictive algorithms in the CPU of the mouse calculated the speed and direction over the grid.


Others, invented by Richard F. Lyon and sold by Xerox, used a 16-pixel visible-light image sensor with integrated motion detection on the same chip and tracked the motion of light dots in a dark field of a printed paper or similar mouse pad. These two mouse types had very different behaviors, as the Kirsch mouse used an x-y coordinate system embedded in the pad, and would not work correctly when the pad was rotated, while the Lyon mouse used the x-y coordinate system of the mouse body, as mechanical mice do.

Modern optical mice

Optical mouse sensor disassembled

Modern surface-independent optical mice work by using an optoelectronic sensor to take successive images of the surface on which the mouse operates. As computing power grew cheaper, it became possible to embed more powerful special-purpose image-processing chips in the mouse itself. This advance enabled the mouse to detect relative motion on a wide variety of surfaces, translating the movement of the mouse into the movement of the cursor and eliminating the need for a special mouse pad. This advance paved the way for widespread adoption of optical mice.

Optical mice illuminate the surface that they track over using an LED or a laser diode. Changes between one frame and the next are processed by the image-processing part of the chip and translated into movement on the two axes using an optical flow estimation algorithm. For example, the Avago Technologies ADNS-2610 optical mouse sensor processes 1512 frames per second: each frame consists of a rectangular array of 18×18 pixels, and each pixel can sense 64 different levels of gray. The Razer DeathAdder processes 6400 frames per second.
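The frame-to-frame comparison above can be sketched as a simple block-matching search: try several candidate shifts of the new frame against the old one and keep the shift with the smallest difference. This is an illustration of the idea only (real sensors use dedicated hardware and more sophisticated algorithms); the frame size and the ±1 pixel search range here are assumptions for the example.

```python
def estimate_shift(prev, curr, search=1):
    """Return the (dx, dy) shift that best aligns curr with prev."""
    h, w = len(prev), len(prev[0])
    best, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad, n = 0, 0  # sum of absolute differences over the overlap
            for y in range(max(0, dy), min(h, h + dy)):
                for x in range(max(0, dx), min(w, w + dx)):
                    sad += abs(curr[y][x] - prev[y - dy][x - dx])
                    n += 1
            score = sad / n
            if best is None or score < best:
                best, best_shift = score, (dx, dy)
    return best_shift
```

Summing the per-frame shifts over time gives the cursor displacement reported to the host.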



Laser mice
The laser mouse uses an infrared laser diode instead of an LED to illuminate the surface beneath its sensor. As early as 1998, Sun Microsystems provided a laser mouse with their Sun SPARCstation servers and workstations. However, laser mice did not enter the mainstream market until 2004, when Logitech, in partnership with Agilent Technologies, introduced its MX 1000 laser mouse. This mouse uses a small infrared laser instead of an LED, which significantly increases the resolution of the image taken by the mouse. Via interference effects, the laser provides around 20 times more sensitivity to the surface features used for navigation compared to conventional optical mice.

Color of optical mouse diodes

The color of the optical mouse's light-emitting diodes can vary, but red is most common, as red diodes are inexpensive and silicon is very sensitive to red light. Other colors are sometimes used, such as the blue LED of the V-Mouse VM-101.

Power saving in optical mice

A wireless mouse on a mouse pad



Manufacturers often engineer their optical mice (especially battery-powered wireless models) to save power when possible. To do this, the mouse dims or blinks the laser or LED when in standby mode (each mouse has a different standby time). This function may also increase the laser/LED life. Mice designed specifically for gamers, such as the Logitech G5 or the Razer Copperhead, often lack this feature in an attempt to reduce latency and improve responsiveness.

A typical implementation in Logitech mice has four power states, where the sensor is pulsed at different rates per second:

1500 - full on: condition for accurate response while moving; illumination appears bright.
100 - fallback: active condition while not moving; illumination appears dull.
10 - standby.
2 - sleep state.

Some other mice turn the sensor fully off in the sleep state, requiring a button click to wake. Optical mice utilizing infrared elements (LEDs or lasers) offer substantial increases in battery life. Some Logitech mice, such as the V450 848 nm laser mouse, are capable of functioning on two AA batteries for a full year, due to the low power requirements of the infrared laser.
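The power-state scheme described above can be modelled as a simple lookup driven by idle time. The state names follow the Logitech scheme in the text, but the idle-time thresholds below are assumptions made for illustration; real firmware timings are not published here.

```python
# (minimum idle seconds, pulses per second), in order of decreasing activity.
# Thresholds are illustrative assumptions, not vendor-documented values.
POWER_STATES = [
    (0.0,  1500),   # full on: accurate response while moving
    (0.1,   100),   # fallback: active but not moving
    (5.0,    10),   # standby
    (60.0,    2),   # sleep
]

def pulse_rate(idle_seconds):
    """Pick the sensor pulse rate for the time since the mouse last moved."""
    rate = POWER_STATES[0][1]
    for threshold, candidate in POWER_STATES:
        if idle_seconds >= threshold:
            rate = candidate
    return rate
```

Dropping from 1500 pulses per second to 2 in the sleep state is what makes year-long battery life possible on mice like the V450.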

1 Welcome to the Windows XP Installation Guide



The purpose of this document is to provide users of FTDI chips with a simple procedure for installing drivers for their devices under Windows XP.

2. Installing FTDI Device Drivers


FTDI have previously provided two types of driver for Windows: a D2XX direct driver and a virtual COM port (VCP) driver. Previously, these drivers were mutually exclusive and could not be installed at the same time. The new Windows combined driver model (CDM) allows applications to access FTDI devices through either the D2XX DLL or a COM port without having to change driver type. However, it should be noted that an application can only communicate through one of these interfaces at a time and cannot send commands to the D2XX DLL and the associated COM port at the same time. The CDM driver comes in two parts. The bus layer provides D2XX style functionality and is always installed. The CDM driver will determine whether a COM port should be exposed by reading the EEPROM of FT232R, FT245R and FT2232C devices. In the case of FT232BM, FT245BM, FT8U232AM and FT8U245AM devices, the CDM driver will default to always installing a COM port. This behaviour can be changed and EEPROM settings ignored by changing the driver INF files as detailed in AN232B-10 Advanced Driver Options. Please note that modifying the INF files of a Microsoft WHQL certified driver will invalidate the certification.

2.1 Installing CDM Drivers


To install CDM drivers for an FTDI device under Windows XP, follow the instructions below:

If a device of the same type has been installed on your machine before and the drivers that are about to be installed are different from those installed already, the original drivers need to be uninstalled. Please refer to the Uninstalling CDM Drivers section of this document for further details of this procedure.

Download the latest available CDM drivers from the FTDI web site and unzip them to a location on your PC.

If you are running Windows XP or Windows XP SP 1, temporarily disconnect your PC from the Internet.

This can be done by either removing the network cable from your PC or by disabling your network card by going to "Control Panel\Network and Dial-Up Connections", right-clicking on the appropriate connection and selecting "Disable" from the menu. The connection can be re-enabled after the installation is complete. This is not necessary under Windows XP SP 2 if it is configured to ask before connecting to Windows Update. Windows XP SP 2 can have the settings for Windows Update changed through "Control Panel\System": select the "Hardware" tab and click "Windows Update".

Connect the device to a spare USB port on your PC. If the device is based on the FT2232C, the Microsoft composite device driver is automatically loaded silently in the background. Once the composite driver has been installed, the Windows Found New Hardware Wizard will launch.

If there is no available Internet connection or Windows XP SP 2 is configured to ask before connecting to Windows Update, select "No, not this time" from the options available and then click "Next" to proceed with the installation. If there is an available Internet connection, Windows XP will silently connect to the Windows Update website and install any suitable driver it finds for the device in preference to the driver manually selected.

Select "Install from a list or specific location (Advanced)" and then click "Next".

Select "Search for the best driver in these locations" and enter the file path in the combo-box ("E:\CDM 2.00.00" in the example) or browse to it by clicking the browse button. Once the file path has been entered in the box, click "Next" to proceed.

If Windows XP is configured to warn when unsigned (non-WHQL certified) drivers are about to be installed, a warning will be displayed unless you are installing a Microsoft WHQL certified driver. Click "Continue Anyway" to continue with the installation. If Windows XP is configured to ignore file signature warnings, no message will appear.

If the device is based on the FT2232C, the Found New Hardware Wizard will continue by installing the USB Serial Converter driver for the second port of the FT2232C device. The procedure for installing the second port is identical to that for installing the first port, from the first screen of the Found New Hardware Wizard.

If the device is not based on the FT2232C, the COM port emulation driver is loaded as indicated in the following steps.

The Found New Hardware Wizard will launch automatically to install the COM port emulation drivers. As above, select "No, not this time" from the options and click "Next" to proceed with the installation.

Select "Install from a list or specific location (Advanced)" and then click "Next".

Select "Search for the best driver in these locations" and enter the file path in the combo-box ("E:\CDM 2.00.00" in the example) or browse to it by clicking the browse button. Once the file path has been entered in the box, click "Next" to proceed.

If Windows XP is configured to warn when unsigned (non-WHQL certified) drivers are about to be installed, a warning will be displayed unless you are installing a Microsoft WHQL certified driver. Click "Continue Anyway" to continue with the installation. If Windows XP is configured to ignore file signature warnings, no message will appear.

A progress screen will be displayed as Windows XP copies the required driver files. Windows should then display a message indicating that the installation was successful. Click "Finish" to complete the installation for the first port of the device.

If the device is based on the FT2232C, the second port must also be installed. The procedure for installing the second port is identical to that for installing the first port, from the first screen of the Found New Hardware Wizard for the USB Serial Port device. If the driver is Microsoft WHQL certified, this is done automatically.

Open the Device Manager (located in "Control Panel\System": select the "Hardware" tab and click "Device Manager") and select "View > Devices by Connection"; the device appears as a "USB Serial Converter" with an additional COM port labelled "USB Serial Port". If the device is based on the FT2232C, two ports will be available from a composite USB device.

3. Uninstalling FTDI Devices


When uninstalling devices from Windows XP, it should always be done through the Add/Remove Programs utility, as this uses the FTDI driver uninstaller program to remove files and registry entries and leave a clean system. Other methods may leave fragments of the driver that may interfere with future installations. The FTDI uninstaller will also remove drivers which were pre-installed using DPInst.

3.1 Uninstalling CDM Drivers


To uninstall CDM drivers for FTDI devices, follow the instructions below:

Disconnect any FTDI devices that are attached to the PC.
Open the Add/Remove Programs utility located in "Control Panel\Add/Remove Programs".
Select "FTDI USB Serial Converter Drivers" from the list of installed programs.
Click the "Change/Remove" button. This will run the FTDI uninstaller program.
Click "Continue" to run the uninstaller or "Cancel" to exit.
When the uninstaller has finished removing the device from the system, the caption on the "Cancel" button will change to "Finish". Click "Finish" to complete the process.

4 Troubleshooting
4.1 Windows XP cannot find drivers for my device

This error can occur if the VID and PID programmed into the device EEPROM do not match those listed in the INF files for the driver. The VID and PID programmed into the device EEPROM may be found by using the USBView utility from the FTDI web site. These can then be checked against the VID and PID entries in the driver INF files. If they do not match, that driver cannot be installed for that device without either re-programming the device EEPROM or modifying the list of VID and PID numbers in the INF files.
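The matching rule described above amounts to a set-membership check: installation can only proceed when the VID/PID pair read from the device EEPROM appears in the driver's INF list. The sketch below illustrates this; the PID values listed are examples for a hypothetical INF (FTDI's real vendor ID is 0x0403, but do not treat this as the actual INF content).

```python
# VID/PID pairs a hypothetical driver INF might list (illustrative values).
INF_DEVICE_IDS = {
    (0x0403, 0x6001),   # example: an FT232R-family product ID
    (0x0403, 0x6010),   # example: an FT2232-family product ID
}

def driver_matches(eeprom_vid, eeprom_pid):
    """True when the device's programmed IDs are listed in the INF files."""
    return (eeprom_vid, eeprom_pid) in INF_DEVICE_IDS
```

If this check fails, either the EEPROM must be re-programmed or the INF list extended, as the section above explains.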


Please note that only your own company's VID and PID, or FTDI's VID (0x0403) and an FTDI PID issued for use by your company, should be used in the EEPROM and INF/INI files.

4.2 Windows XP forces a reboot after installing a device


This problem can occur if an application is accessing a file while the New Hardware Wizard is trying to copy it. This usually occurs with the FTD2XX.DLL file. If installing a D2XX device, selecting not to restart the computer and then unplugging and re-plugging the device may allow it to function properly without restarting. Restarting the machine will allow the device to work correctly.

4.3 Driver installation fails and Windows XP gives error code 10

Windows error code 10 indicates a hardware error or failed driver installation. This error may appear if a device has insufficient power to operate correctly (e.g. it is plugged into a bus-powered hub with other devices), or it may indicate a more serious hardware problem. It may also be indicative of USB root hub drivers being incorrectly installed. Please refer to the example schematics on the FTDI web site for standard device configurations. If the error persists, please contact the FTDI support department.

4.4 FT232BM or FT245BM device hangs randomly during operation under Windows XP
This is not caused by the driver, but is a hardware compatibility problem. Some newer USB 2.0 hubs and host controllers can be susceptible to noise and can cause random device failures. This can be overcome by fitting 47 pF capacitors to ground on the USBDP and USBDM lines on the USB connector side of the 27 Ω series resistors.

4.5 Windows XP displays an error and then terminates installation

If an error screen with this message is displayed, Windows XP has been configured to block the installation of any drivers that are not WHQL certified. Two options are available to successfully install the device.

Either a certified version of the driver can be installed (if available) or the driver signing options can be changed to either warn or ignore, to allow the installation to complete. To change the current driver signing setting, go to "Control Panel\System", click on the "Hardware" tab and then click "Driver Signing". The desired signing option may then be selected.

5 Revision History

Version   Release Date   Comments
          January 2005   Initial release
          June 2006      Modified to reflect new driver model



INTRODUCTION TO NETWORKING
Computer networks
Computer networking is the engineering discipline concerned with the communication between computer systems or devices. A computer network is any set of computers or devices connected to each other with the ability to exchange data.[1] Computer networking is sometimes considered a sub-discipline of telecommunications, computer science, information technology and/or computer engineering, since it relies heavily upon the theoretical and practical application of these scientific and engineering disciplines.

The three types of networks are: the Internet, the intranet, and the extranet. Examples of different network methods are:

Local area network (LAN), which is usually a small network constrained to a small geographic area. An example of a LAN would be a computer network within a building.
Metropolitan area network (MAN), which is used for a medium-sized area, for example a city or a state.
Wide area network (WAN), which is usually a larger network that covers a large geographic area.
Wireless LANs and WANs (WLAN & WWAN) are the wireless equivalents of the LAN and WAN.

All networks are interconnected to allow communication with a variety of different kinds of media, including twisted-pair copper wire cable, coaxial cable, optical fiber, power lines and various wireless technologies.[2] The devices can be separated by a few meters (e.g. via Bluetooth) or nearly unlimited distances (e.g. via the interconnections of the Internet[3]).

Views of networks
Users and network administrators often have different views of their networks. Often, users who share printers and some servers form a workgroup, which usually means they are in the same geographic location and are on the same LAN. A community of interest has less of a connection of being in a local area, and should be thought of as a set of arbitrarily located users who share a set of servers, and possibly also communicate via peer-to-peer technologies.

Network administrators see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application layer gateways) that interconnect the physical media. Logical networks, called subnets in the TCP/IP architecture, map onto one or more physical media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using virtual LAN (VLAN) technology.

Both users and administrators will be aware, to varying extents, of the trust and scope characteristics of a network. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration, usually by an enterprise, and is only accessible by authorized users (e.g. employees). Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).

Informally, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISPs). From an engineering standpoint, the Internet is the set of subnets, and aggregates of subnets, which share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).
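The "logical subnet" idea above can be made concrete with Python's standard ipaddress module: two hosts belong to the same subnet when both of their addresses fall inside the same network prefix. The addresses below are made-up private-range examples.

```python
import ipaddress

def same_subnet(host_a, host_b, network):
    """True when both host addresses belong to the given subnet."""
    net = ipaddress.ip_network(network)
    return (ipaddress.ip_address(host_a) in net
            and ipaddress.ip_address(host_b) in net)
```

For example, 192.168.1.10 and 192.168.1.200 share the 192.168.1.0/24 subnet, while 192.168.2.5 does not; traffic to the latter has to cross a router.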

History of Computer Networks


Before the advent of computer networks that were based upon some type of telecommunications system, communication between calculation machines and early computers was performed by human users carrying instructions between them. Many of the social behaviors seen in today's Internet were demonstrably present in the nineteenth century, and arguably in even earlier networks using visual signals.

In September 1940 George Stibitz used a teletype machine to send instructions for a problem set from his Model at Dartmouth College in New Hampshire to his Complex Number Calculator in New York, and received results back by the same means. Linking output systems like teletypes to computers was an interest at the Advanced Research Projects Agency (ARPA) when, in 1962, J.C.R. Licklider was hired and developed a working group he called the "Intergalactic Network", a precursor to the ARPANET. In 1964, researchers at Dartmouth developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at MIT, a research group supported by General Electric and Bell Labs used a DEC computer to route and manage telephone connections.

Throughout the 1960s, Leonard Kleinrock, Paul Baran and Donald Davies independently conceptualized and developed network systems which used datagrams or packets that could be used in a network between computer systems. In 1965, Thomas Merrill and Lawrence G. Roberts created the first wide area network (WAN). The first widely used PSTN switch that used true computer control was the Western Electric switch introduced in 1965. In 1969 the University of California at Los Angeles, SRI (in Stanford), the University of California at Santa Barbara, and the University of Utah were connected as the beginning of the ARPANET network using 50 kbit/s circuits. Commercial services using X.25 were deployed in 1972, and later used as an underlying infrastructure for expanding TCP/IP networks.
Computer networks, and the technologies needed to connect and communicate through and between them, continue to drive the computer hardware, software, and peripherals industries. This expansion is mirrored by growth in the numbers and types of network users, from the researcher to the home user. Today, computer networks are the core of modern communication. All modern aspects of the Public Switched Telephone Network (PSTN) are computer-controlled, and telephony increasingly runs over the Internet Protocol, although not necessarily the public Internet. The scope of communication has increased significantly in the past decade, and this boom in communications would not have been possible without the progressively advancing computer network.

Modes of connection
Communication over a network can be classified into two types:

Circuit-switched
In this network, a dedicated channel exists to the remote machine. An example is that of a telephone network: when you place a call, a dedicated circuit is established for you to the destination. In a circuit-switched network, you are guaranteed to have access to the full bandwidth of the circuit.

Packet-switched
Connections are shared in this type of network. Data that is transported across the network is broken up into chunks called packets. Because packets from different sources (and to different destinations) are now intermixed, each packet must contain the address of the destination (in a circuit-switched network, this isn't needed since the circuit is a dedicated connection). An Ethernet network is an example of this type of network. In a packet-switched network, the bandwidth that you see will usually be less than the capacity of the network since you're sharing the channel with others.
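The packet idea above can be sketched in a few lines: split a byte stream into chunks, stamp each with the destination address and a sequence number, and reassemble at the far end. The header layout here (a plain tuple) is a toy assumption for illustration; real protocols define binary header formats.

```python
def packetize(dest, payload, size=4):
    """Split payload bytes into (destination, seq, chunk) packets."""
    return [(dest, seq, payload[i:i + size])
            for seq, i in enumerate(range(0, len(payload), size))]

def reassemble(packets, dest):
    """Collect packets for one destination and restore the byte stream."""
    chunks = sorted((p for p in packets if p[0] == dest), key=lambda p: p[1])
    return b"".join(chunk for _, _, chunk in chunks)
```

Because each packet names its destination, packets from many conversations can be intermixed on one shared link and still be sorted out on arrival.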

NETWORKING HARDWARE:
NETWORKING HARDWARE can be defined as a collection of physical network components used to establish a network. Networking hardware includes all computers, peripherals, interface cards and other equipment needed to perform data-processing and communications within the network.



Hardware components are:

(1) network interface card
(2) transmission media: guided media (twisted pair, coaxial cable, fiber optic cable) and unguided media (radio wave, microwave, infrared)
(3) workstations / clients / nodes / terminals
(4) servers
(5) connectors
(6) modem
(7) internetworking devices such as repeater, hub, bridge, switch, router and gateway

NETWORKING SOFTWARE:

Network connections
There are two types of network connections:

Peer-to-Peer Networks
Client-Server Networks

Peer-to-Peer Networks
A peer-to-peer network allows two or more PCs to pool their resources together. Individual resources like disk drives, CD-ROM drives, and even printers are transformed into shared, collective resources that are accessible from every PC. Unlike client-server networks, where network information is stored on a centralized file server PC and made available to tens, hundreds, or thousands of client PCs, the information stored across peer-to-peer networks is uniquely decentralized. Because peer-to-peer PCs have their own hard disk drives that are accessible by all computers, each PC acts as both a client (information requestor) and a server (information provider). In the diagram below, three peer-to-peer workstations are shown. Although not capable of handling the same amount of information flow that a client-server network might, all three computers can communicate directly with each other and share one another's resources.

Page 53 of 95

Training report on Computer Hardware & networking

Peer-to-Peer Network

A peer-to-peer network can be built with either 10BaseT cabling and a hub or with a thin coax backbone. 10BaseT is best for small workgroups of 16 or fewer users that don't span long distances, or for workgroups that have one or more portable computers that may be disconnected from the network from time to time.

After the networking hardware has been installed, a peer-to-peer network software package must be installed on all of the PCs. Such a package allows information to be transferred back and forth between the PCs, hard disks, and other devices when users request it. Popular peer-to-peer NOS software includes Windows 95, Windows for Workgroups, Artisoft LANtastic, and NetWare Lite.

Most NOSs allow each peer-to-peer user to determine which resources will be available for use by other users. Specific hard and floppy disk drives, directories or files, printers, and other resources can be attached or detached from the network via software. When one user's disk has been configured to be "sharable", it will usually appear as a new drive to the other users. In other words, if user A has an A and C drive on his computer, and user B configures his entire C drive as sharable, user A will suddenly have an A, C, and D drive (user A's D drive is actually user B's C drive). Directories work in a similar fashion. If user A has an A and C drive, and user B configures his "C:\WINDOWS" and "C:\DOS" directories as sharable, user A may suddenly have an A, C, D, and E drive.

The advantages of peer-to-peer over client-server NOSs include:

No need for a network administrator
The network is fast and inexpensive to set up and maintain
Each PC can make backup copies of its data to other PCs for security

By far the easiest type of network to build, peer-to-peer is perfect for both home and office use.
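The drive-letter behaviour described above (user B's shared C drive appearing as user A's D drive) can be sketched as a letter-assignment function. Starting the assignment at D: is a simplifying assumption here (A: and B: were conventionally reserved for floppy drives); a real NOS handles this with more care.

```python
import string

def map_shares(local_drives, shares):
    """Assign each shared resource the next unused letter from D: onward."""
    taken = set(local_drives)
    # A: and B: are conventionally reserved for floppies, so shares are
    # handed out starting at D: here (an assumption for this sketch).
    free = (c for c in string.ascii_uppercase[3:] if c not in taken)
    return {next(free): share for share in shares}
```

With local drives A and C, one remote share maps to D and a second to E, matching the example in the text.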

Client-Server Networks
In a client-server environment like Windows NT or Novell NetWare, files are stored on a centralized, high-speed file server PC that is made available to client PCs. Network access speeds are usually faster than those found on peer-to-peer networks, which is reasonable given the vast numbers of clients that this architecture can support. Nearly all network services like printing and electronic mail are routed through the file server, which allows networking tasks to be tracked. Inefficient network segments can be reworked to make them faster, and users' activities can be closely monitored. Public data and applications are stored on the file server, where they are run from client PCs' locations, which makes upgrading software a simple task: network administrators can simply upgrade the applications stored on the file server, rather than having to physically upgrade each client PC.

File Server

Other equipment

In the client-server diagram below, the client PCs are shown to be separate and subordinate to the file server. The clients' primary applications and files are stored in a common location. File servers are often set up so that each user on the network has access to his or her "own" directory, along with a range of "public" directories where applications are stored. If the two clients below want to communicate with each other, they must go through the file server to do it. A message from one client to another is first sent to the file server, where it is then routed to its destination. With tens or hundreds of client PCs, a file server is the only way to manage the often complex and simultaneous operations that large networks require.

Table 1 provides a summary comparison between Peer-to-Peer and Client/Server networks.

Peer-to-Peer Networks vs Client/Server Networks

Peer-to-Peer Networks | Client/Server Networks
Easy to set up | More difficult to set up
Less expensive to install | More expensive to install
Can be implemented on a wide range of operating systems | A variety of operating systems can be supported on the client computers, but the server needs to run an operating system that supports networking
More time consuming to maintain the software being used (as computers must be managed individually) | Less time consuming to maintain the software being used (as most of the maintenance is managed from the server)
Very low levels of security supported or none at all; these can be very cumbersome to set up, depending on the operating system being used | High levels of security are supported, all of which are controlled from the server; such measures prevent the deletion of essential system files or the changing of settings
Ideal for networks with less than 10 computers | No limit to the number of computers that can be supported by the network
Does not require a server | Requires a server running a server operating system


Network Topologies
A physical topology depicts how network devices are physically connected, i.e. the cabling. A logical topology depicts the route a signal takes through the network.

BUS
A bus physical topology means that all of the devices are connected to a common backbone. The signal is sent in both directions, although some buses are unidirectional. It can be used for 10BASE5, 10BASE2 or 10BROAD36.

Advantage: good for small networks.

Disadvantage: difficult to troubleshoot and to locate a break in the cable or the machine causing the fault; when one device fails, the rest of the LAN fails.

STAR
A star physical topology means that the nodes/devices are all connected to a centralized hub or switch. It is commonly used for 10BASE-T or 100BASE-TX.

Advantage: cabling is inexpensive and easy to wire; the network is more reliable and easier to manage because hubs allow defective cable segments to be routed around; locating and repairing bad cables is easier because of the concentrators; network growth is easier.

Disadvantage: with a hub, all nodes receive the same signal, so bandwidth is shared; the maximum is 1,024 computers on a LAN; the maximum UTP length is 100 meters (approx. 328 ft); the minimum distance between computers is 2.5 meters.


RING
A ring physical topology means the devices are wired in a circle, but it is almost always implemented as a logical ring on a star physical topology. Each device has a transceiver that behaves like a repeater, moving the signal around the ring. It is ideal for token-passing access methods.

Advantage: signal degeneration is low; only the device that holds the token can transmit, which reduces collisions.

Disadvantage: difficult to locate a problem cable segment; expensive hardware.

MESH

A mesh physical topology means that every device on the network is connected to every other device on the network; it is most commonly used in WAN configurations.

Advantage: helps find the quickest route on the network; provides redundancy.

Disadvantage: very expensive and not easy to set up.
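The cost of a full mesh can be quantified. A network of n devices needs a link from every device to every other, which works out to n*(n-1)/2 links; this formula is not stated above but follows directly from the definition, and the small helper below is only an illustration:

```python
def mesh_links(n):
    # Each of the n devices links to the other n-1; dividing by 2
    # avoids counting each point-to-point link twice.
    return n * (n - 1) // 2

for n in (4, 10, 50):
    print(n, "devices ->", mesh_links(n), "links")
# 4 -> 6, 10 -> 45, 50 -> 1225
```

The quadratic growth in link count is exactly why mesh topologies are described above as very expensive.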

Tree

In a tree (hierarchical) topology, a central 'root' node (the top level of the hierarchy) is connected by a point-to-point link to one or more nodes one level lower in the hierarchy (the second level). Each second-level node is in turn connected, again by point-to-point links, to one or more nodes at the third level, and so on. The root is the only node that has no node above it in the hierarchy, and the hierarchy of the tree is symmetrical. Each node in the network has a specific, fixed number of nodes connected to it at the next lower level; this number is referred to as the 'branching factor' of the hierarchical tree.

1. A network based on the physical hierarchical topology must have at least three levels in the hierarchy of the tree, since a network with a central root node and only one hierarchical level below it would exhibit the physical topology of a star.
2. A network based on the physical hierarchical topology with a branching factor of 1 would be classified as a physical linear topology.
3. The total number of point-to-point links in a network based on the physical hierarchical topology will be one less than the total number of nodes in the network.
4. If the nodes in such a network are required to perform any processing upon the data transmitted between nodes, the nodes at higher levels in the hierarchy will have to perform more processing operations on behalf of other nodes than the nodes lower in the hierarchy.
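Point (3) and the branching-factor definition above can be checked with a short calculation. The helper names below are illustrative, and level 1 is taken to be the root:

```python
def tree_nodes(branching, levels):
    # Level 1 is the root (branching**0 == 1); each further level
    # multiplies the node count by the branching factor.
    return sum(branching ** i for i in range(levels))

def tree_links(branching, levels):
    # Point (3): the link count is always one less than the node count.
    return tree_nodes(branching, levels) - 1

print(tree_nodes(2, 3), tree_links(2, 3))  # 7 nodes, 6 links
```

With a branching factor of 2 and three levels, the tree has 1 + 2 + 4 = 7 nodes and therefore 6 links, consistent with point (3).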

Types of networks

There are three main types of networks:
Local area network (LAN)
Metropolitan area network (MAN)
Wide area network (WAN)

LAN
A LAN is a computer network that spans a relatively small area. Most LANs are confined to a single building or group of buildings; however, one LAN can be connected to other LANs over any distance via telephone lines and radio waves. A system of LANs connected in this way is called a wide area network (WAN). Most LANs connect workstations and personal computers. Each node (individual computer) in a LAN has its own CPU with which it executes programs, but it is also able to access data and devices anywhere on the LAN. This means that many users can share expensive devices, such as laser printers, as well as data. Users can also use the LAN to communicate with each other, by sending e-mail or engaging in chat sessions. There are many different types of LANs, Ethernet being the most common for PCs. Most Apple Macintosh networks are based on Apple's AppleTalk network system, which is built into Macintosh computers.

Advantages of LAN

Workstations can share peripheral devices such as printers. This is cheaper than buying a printer for every workstation. Workstations do not necessarily need their own hard disk or CD-ROM drives, which makes them cheaper to buy than stand-alone PCs.


Users can save their work centrally on the network's file server. This means that they can retrieve their work from any workstation on the network; they don't need to go back to the same workstation all the time. Users can communicate with each other and transfer data between workstations very easily. One copy of each application package, such as a word processor or spreadsheet, can be loaded onto the file server and shared by all users. When a new version comes out, it only has to be loaded onto the server instead of onto every workstation.

Disadvantages of LAN
Special security measures are needed to stop users from using programs and data that they should not have access to. Networks are difficult to set up and need to be maintained by skilled technicians.

WAN
A computer network that spans a relatively large geographical area. Typically, a WAN consists of two or more local-area networks (LANs). Computers connected to a wide-area network are often connected through public networks, such as the telephone system. They can also be connected through leased lines or satellites. The largest WAN in existence is the Internet.

Advantages of WAN
Messages can be sent very quickly to anyone else on the network. These messages can have pictures, sounds, or data included with them (called attachments). Expensive things (such as printers or phone lines to the Internet) can be shared by all the computers on the network without having to buy a separate peripheral for each computer.

Everyone on the network can use the same data. This avoids problems where some users may have older information than others. Information and files can be shared over a larger area.

Disadvantages Of WAN
Setting up a network can be an expensive and complicated experience. The bigger the network the more expensive it is. Security is a real issue when many different people have the ability to use information from other computers. Protection against hackers and viruses adds more complexity and expense. Once set up, maintaining a network is a full-time job which requires network supervisors and technicians to be employed.

MAN
Short for Metropolitan Area Network, a MAN is a data network designed for a town or city. In terms of geographic breadth, MANs are larger than local area networks (LANs) but smaller than wide area networks (WANs). MANs are usually characterized by very high-speed connections using fiber-optic cable or other digital media.


Wireless Networks

Introduction


Wireless local area networks (WLANs) are the same as traditional LANs, but with a wireless interface. With the introduction of small portable devices such as PDAs (personal digital assistants), WLAN technology is becoming very popular. WLANs provide high-speed data communication in small areas such as a building or an office, and allow users to move around in a confined area while remaining connected to the network. Examples of wireless LANs available today are NCR's WaveLAN and Motorola's ALTAIR. This section considers the transmission technology used in WLANs and discusses some of the technical standards for WLANs developed by the IEEE Project 802.11.

Transmission Technology

There are three main ways by which WLANs transmit information: microwave, spread spectrum and infrared.

Microwave Transmission

Motorola's WLAN product (ALTAIR) transmits data using low-powered microwave radio signals. It operates in the 18 GHz frequency band.

Spread Spectrum Transmission

With this transmission technology, wireless LAN products use one of two methods: frequency hopping and direct sequence modulation.

Frequency Hopping

The signal jumps from one frequency to another within a given frequency range. The transmitter device "listens" to a channel; if it detects an idle time (i.e. no signal is being transmitted), it transmits the data using the full channel bandwidth. If the channel is busy, it "hops" to another channel and repeats the process. The transmitter and the receiver "jump" between frequencies in the same sequence.

Direct Sequence Modulation

This method uses a frequency band together with Code Division Multiple Access (CDMA). Signals from different units are transmitted in a given frequency range at very low power levels (just above background noise). A code is transmitted with each signal so that the receiver can identify the appropriate signal transmitted by the sender unit. The frequency band in which such signals are transmitted is called the ISM (industrial, scientific and medical) band; this band is reserved for ISM devices. The ISM band has three frequency ranges: 902-928, 2400-2483.5 and 5725-5850 MHz. An exception is Motorola's ALTAIR, which operates at 18 GHz. Spread spectrum transmission technology is used by many wireless LAN manufacturers, such as NCR for the WaveLAN product and SpectraLink for the 2000 PCS.
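A minimal sketch of the frequency-hopping idea, assuming (for illustration only) that both ends derive the same pseudo-random channel sequence from a shared seed; real products follow standardized hop patterns rather than this toy scheme:

```python
import random

# One of the ISM ranges listed above, treated here as 1 MHz channels.
ISM_CHANNELS = list(range(902, 928))

def hop_sequence(seed, hops):
    # Transmitter and receiver derive the same sequence from a shared
    # seed, so they "jump" between frequencies in the same order.
    rng = random.Random(seed)
    return [rng.choice(ISM_CHANNELS) for _ in range(hops)]

tx = hop_sequence(seed=42, hops=5)
rx = hop_sequence(seed=42, hops=5)
assert tx == rx        # both ends stay in step
print(tx)
```

Because both ends run the same deterministic generator, the receiver always knows which channel the transmitter will hop to next.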


Infrared Transmission
This method uses infrared light to carry information. There are three types of infrared transmission: diffused, directed and directed point-to-point.

Diffused: the infrared light transmitted by the sender unit fills the area (e.g. an office), so a receiver unit located anywhere in that area can receive the signal.

Directed: the infrared light is focused before the signal is transmitted. This method increases the transmission speed.

Directed point-to-point: this provides the highest transmission speed. The receiver is aligned with the sender unit, and the infrared light is transmitted directly to the receiver.

The light source used in infrared transmission depends on the environment: light-emitting diodes (LEDs) are used in indoor areas, while lasers are used in outdoor areas. Infrared radiation (IR) has major biological effects; it greatly affects the eyes and skin. Microwave signals are also dangerous to health, but with proper design of systems these effects are reduced considerably.

Technical Standards
Technical standards are one of the main concerns of users of wireless LAN products. Users would like to be able to buy wireless products from different manufacturers and be able to use them on one network. The IEEE Project 802.11 has set up universal standards for wireless LAN. In this section we will consider some of these standards.

Requirements
In March 1992 the IEEE Project 802.11 established a set of requirements for wireless LAN. The minimum bandwidth needed for operations such as file transfer and program loading is 1Mbps. Operations which need real-time data transmission such as digital voice and process control, need support from time bounded services.

Types of Wireless LAN


The Project 802.11 committee distinguished between two types of wireless LAN: "ad-hoc" and "infrastructure" networks.


Figure 2: (a) Infrastructure wireless LAN; (b) Ad-hoc wireless LAN.

Figure 3: Architecture for wireless LANs.

Ad-hoc Networks
Figure 2b shows an ad-hoc network. Such a network can be set up by a number of mobile users meeting in a small room. It does not need any support from a wired/wireless backbone. There are two ways to implement this network.

Broadcasting/Flooding
Suppose that a mobile user A wants to send data to another user B in the same area. When the packets containing the data are ready, user A broadcasts them. On receiving the packets, each receiver checks the identification on the packet; if it is not the correct destination, it rebroadcasts the packets. This process is repeated until user B gets the data.
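The broadcast/flooding scheme can be sketched as a simple breadth-first search over an adjacency map. The topology and the count of nodes that rebroadcast are illustrative assumptions:

```python
# Flooding sketch: each node rebroadcasts a packet to its neighbours
# until the destination has seen it. The topology is an adjacency dict.
def flood(topology, source, dest, packet):
    seen = set()
    queue = [source]
    while queue:
        node = queue.pop(0)
        if node in seen:
            continue                       # already rebroadcast once
        seen.add(node)
        if node == dest:
            return packet, len(seen)       # delivered; nodes involved
        queue.extend(topology[node])       # rebroadcast to neighbours
    return None, len(seen)                 # destination unreachable

net = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(flood(net, "A", "D", "hello"))
```

Marking nodes as "seen" models the real requirement that a station rebroadcasts a given packet only once; without it, the flood would loop forever.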

Temporary Infrastructure
In this method, the mobile users set up a temporary infrastructure. But this method is complicated and it introduces overheads. It is useful only when there is a small number of mobile users.

Infrastructure Networks
Figure 2a shows an infrastructure-based network. This type of network allows users to move about a building while remaining connected to computer resources. The IEEE Project 802.11 specified the components of a wireless LAN architecture. In an infrastructure network, a cell is also known as a Basic Service Area (BSA); it contains a number of wireless stations. The size of a BSA depends on the power of the transmitter and receiver units, and also on the environment. A number of BSAs are connected to each other and to a distribution system by Access Points (APs). A group of stations belonging to an AP is called a Basic Service Set (BSS). Figure 3 shows the basic architecture for wireless LANs.


Advantages of WLAN
- It is easier to add or move workstations.
- It is easier to provide connectivity in areas where it is difficult to lay cable.
- Installation can be fast and easy, and can eliminate the need to pull cable through walls and ceilings.
- Access to the network can be from anywhere in the school within range of an access point.
- Portable or semi-permanent buildings can be connected using a wireless LAN.
- Where laptops are used, the computer suite can be moved from classroom to classroom on mobile carts.
- While the initial investment required for wireless LAN hardware can be similar to the cost of wired LAN hardware, installation expenses can be significantly lower.
- Where a school is located on more than one site (such as on two sides of a road), it is possible, with directional antennae, to avoid digging trenches under roads to connect the sites.

Disadvantages Of WLAN
- As the number of computers using the network increases, the data transfer rate to each computer will decrease accordingly.
- As standards change, it may be necessary to replace wireless cards and/or access points.
- Lower wireless bandwidth means some applications, such as video streaming, will be more effective on a wired LAN.
- Security is more difficult to guarantee, and requires configuration.
- Devices will only operate at a limited distance from an access point, with the distance determined by the standard used and the buildings and other obstacles between the access point and the user.
- A wired LAN is most likely to be required to provide a backbone to the wireless LAN; a wireless LAN should be a supplement to a wired LAN, not a complete solution.


Network Models
There are several network models which you may hear about, but the one you will hear about most is the OSI model described below. You should realize, however, that there are others, such as the TCP/IP four-layer protocol suite.

The OSI Network Model Standard

The International Organization for Standardization (ISO) has defined a standard called the Open Systems Interconnection (OSI) reference model. This is a seven-layer architecture, described below. Each layer is considered to be responsible for a different part of the communications process; the concept was developed to accommodate changes in technology. The layers are arranged here from the lower levels, starting with the physical (hardware) layer, to the higher levels.
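For quick reference, the seven layers described in the following sections can be written out as a simple numbered table; this snippet only enumerates the stack, nothing more:

```python
# The seven OSI layers, numbered from the physical hardware upward.
OSI_LAYERS = {
    1: "Physical",
    2: "Data Link",
    3: "Network",
    4: "Transport",
    5: "Session",
    6: "Presentation",
    7: "Application",
}

# Outgoing data moves down the stack (7 -> 1); incoming data moves up.
for n in sorted(OSI_LAYERS, reverse=True):
    print(f"Layer {n}: {OSI_LAYERS[n]}")
```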

Layer 1: Physical Layer


The Physical Layer defines the electrical and physical specifications for devices. In particular, it defines the relationship between a device and a physical medium. This includes the layout of pins, voltages, cable specifications, hubs, repeaters, network adapters, host bus adapters (HBAs, used in storage area networks) and more. To understand the function of the Physical Layer in contrast to the functions of the Data Link Layer, think of the Physical Layer as concerned primarily with the interaction of a single device with a medium, whereas the Data Link Layer is concerned more with the interactions of multiple devices (i.e. at least two) with a shared medium. Standards such as RS-232 use physical wires to control access to the medium. The major functions and services performed by the Physical Layer are:

Establishment and termination of a connection to a communications medium.
Participation in the process whereby the communication resources are effectively shared among multiple users; for example, contention resolution and flow control.
Modulation, or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel. These are signals operating over the physical cabling (such as copper and optical fiber) or over a radio link.

Layer 2: Data Link Layer


The Data Link Layer provides the functional and procedural means to transfer data between network entities and to detect and possibly correct errors that may occur in the Physical Layer.

Originally, this layer was intended for point-to-point and point-to-multipoint media, characteristic of wide area media in the telephone system. Local area network architecture, which included broadcast-capable multiaccess media, was developed independently of the ISO work, in IEEE Project 802. The IEEE work assumed sublayering and management functions not required for WAN use.

In modern practice, only error detection, not flow control using a sliding window, is present in data link protocols such as the Point-to-Point Protocol (PPP). On local area networks, the IEEE 802.2 LLC layer is not used for most protocols on Ethernet, and on other local area networks its flow control and acknowledgment mechanisms are rarely used. Sliding-window flow control and acknowledgment are used at the Transport Layer by protocols such as TCP, but are still used at this layer in niches where X.25 offers performance advantages.

The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete Data Link Layer which provides both error correction and flow control by means of a selective-repeat Sliding Window Protocol.

Both WAN and LAN services arrange bits from the Physical Layer into logical sequences called frames. Not all Physical Layer bits necessarily go into frames, as some of these bits are purely intended for Physical Layer functions. For example, every fifth bit of the FDDI bit stream is not used by the Data Link Layer.

Layer 3: Network Layer


The Network Layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination via one or more networks, while maintaining the quality of service requested by the Transport Layer. The Network Layer performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors. Routers operate at this layer, sending data throughout the extended network and making the Internet possible. Network-layer addressing is a logical addressing scheme; the values are chosen by the network engineer, and the addressing scheme is hierarchical.


The best-known example of a Layer 3 protocol is the Internet Protocol (IP). It manages the connectionless transfer of data one hop at a time: from end system to ingress router, router to router, and from egress router to destination end system. It is not responsible for reliable delivery to the next hop, but only for the detection of errored packets so that they may be discarded. When the medium of the next hop cannot accept a packet at its current length, IP is responsible for fragmenting the packet into sufficiently small packets that the medium can accept.

A number of layer-management protocols, a function defined in the Management Annex (ISO 7498/4), belong to the Network Layer. These include routing protocols, multicast group management, Network Layer information and error reporting, and Network Layer address assignment. It is the function of the payload that makes these belong to the Network Layer, not the protocol that carries them.
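A much-simplified sketch of the fragmentation step just described: real IP fragments on 8-byte boundaries and records offsets and flags in each fragment's header, while this illustration only shows the payload being split to fit an MTU:

```python
# Split a payload into pieces no larger than the next hop's MTU.
# (Simplification: real IP also copies the header into each fragment
# and sets the fragment offset and more-fragments flag.)
def fragment(payload: bytes, mtu: int):
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

pieces = fragment(b"x" * 3000, mtu=1480)
print([len(p) for p in pieces])   # [1480, 1480, 40]
```

Reassembly at the destination is simply the concatenation of the fragments in offset order.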

Layer 4: Transport Layer


The Transport Layer provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers. The Transport Layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are state- and connection-oriented; this means that the Transport Layer can keep track of the segments and retransmit those that fail.

Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the Transport Layer, typical examples of Layer 4 are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). Of the actual OSI protocols, there are five classes of connection-mode transport protocols, ranging from class 0 (also known as TP0, which provides the least error recovery) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the Session Layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries, both of which TCP is incapable of. Detailed characteristics of TP0-4 classes are shown in the following table:

Perhaps an easy way to visualize the Transport Layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. Remember, however, that a post office manages only the outer envelope of mail. Higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only.

Roughly speaking, tunneling protocols operate at the Transport Layer, for example carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or providing end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a Network Layer protocol, if the encapsulation of the payload takes place only at the endpoints, GRE becomes closer to a transport protocol that uses IP headers but contains complete frames or packets to deliver to an endpoint. L2TP carries PPP frames inside transport packets.
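The retransmission idea behind reliable transport can be illustrated with a toy stop-and-wait sender. Real TCP keeps a sliding window of several unacknowledged segments; here, the one-segment window and the loss pattern are made-up inputs for illustration only:

```python
def deliver(segments, drop_first_try):
    # Stop-and-wait: send one segment, wait for its ACK, and retransmit
    # on loss. `drop_first_try` holds sequence numbers whose first
    # transmission is lost (a toy substitute for a lossy network).
    delivered, attempts = [], 0
    for seq, seg in enumerate(segments):
        while True:
            attempts += 1
            if seq in drop_first_try:
                drop_first_try.discard(seq)   # lost once; resend it
                continue
            delivered.append(seg)             # ACK received
            break
    return delivered, attempts

data = ["seg0", "seg1", "seg2"]
print(deliver(data, drop_first_try={1}))
# (['seg0', 'seg1', 'seg2'], 4) -- one extra attempt for the lost segment
```

The receiver still gets every segment in order; reliability costs only the retransmitted attempts.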


Layer 5: Session Layer


The Session Layer controls the dialogues (connections) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures. The OSI model made this layer responsible for graceful close of sessions, which is a property of the Transmission Control Protocol, and also for session checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The Session Layer is commonly implemented explicitly in application environments that use remote procedure calls.

Layer 6: Presentation Layer


The Presentation Layer establishes a context between Application Layer entities, in which the higher-layer entities can use different syntax and semantics, as long as the presentation service understands both and the mapping between them. The presentation service data units are then encapsulated into Session Protocol data units and moved down the stack. This layer provides independence from differences in data representation (e.g. encryption) by translating from application format to network format, and vice versa; the presentation layer transforms data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

Layer 7: Application Layer


The application layer is the OSI layer closest to the end user, which means that both the OSI application layer and the user interact directly with the software application. This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. Application layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist. In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer. Some examples of application layer implementations include Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP) and X.400 Mail.
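At this layer, a protocol such as HTTP is just structured text handed down the stack. The sketch below builds a raw HTTP/1.1 GET request by hand; the host and path used are placeholders, not real servers:

```python
# Assemble the text of an HTTP/1.1 GET request. The application layer
# produces this structured text; the transport layer below (TCP) is
# what actually carries the bytes to the server.
def build_get(host, path="/"):
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n"
            f"\r\n")               # blank line ends the header section

print(build_get("example.com", "/index.html"))
```

Sending this string over a TCP connection to port 80 of a web server is all an HTTP GET amounts to at this level.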


TCP/IP Model
This model is sometimes called the DoD model, since it was designed for the US Department of Defense. It is also called the TCP/IP four-layer protocol, or the Internet protocol suite.

[Diagram: two hosts, each running the stack Application - TCP - IP - Network - Physical, joined by the physical network]

TCP/IP ARCHITECTURE

It has the following layers:

Physical Layer
The physical interface between devices that allows transmission of data. Data rate, transmission medium, and related matters are the concern of this layer.

Network
The device driver and network interface card, which map to the Data Link and Physical layers of the OSI model. The link layer corresponds to the hardware, including the device driver and interface card, and has frame formats associated with it depending on the type of network being used, such as ARCnet, Token Ring or Ethernet. In our case, we will be talking about Ethernet.

Internet
Corresponds to the network layer of the OSI model and includes the IP, ICMP and IGMP protocols. The internet layer manages the movement of packets around the network. It is responsible for making sure that packets reach their destinations and, if they don't, for reporting errors.
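The error-detection role at this layer rests on the 16-bit ones'-complement checksum that IP and ICMP carry in their headers (RFC 1071); receivers recompute it to spot corrupted packets. A straightforward sketch:

```python
# 16-bit ones'-complement Internet checksum (RFC 1071), as used in
# IP and ICMP headers. The sample header bytes below are illustrative.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                            # pad to even length
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add 16-bit words
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF                         # ones' complement

hdr = b"\x45\x00\x00\x28\xab\xcd"
print(hex(internet_checksum(hdr)))
```

A handy property: appending the computed checksum to the data and summing again yields zero, which is exactly how a receiver verifies an incoming header.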


Transport
Corresponds to the transport layer of the OSI model and includes the TCP and UDP protocols. The transport layer is the mechanism two computers use to exchange data on behalf of software. The two protocols that act as transport mechanisms are TCP and UDP. There are also other transport protocols for systems other than TCP/IP, but we will talk about TCP and UDP in this document.

Application
Corresponds to the OSI Session, Presentation and Application layers and includes FTP, Telnet, ping, rlogin, rsh, TFTP, SMTP, SNMP, DNS, your own programs, etc. The application layer refers to the networking protocols used to support various services such as FTP, Telnet, BOOTP, etc.


Components of a Network
A computer network comprises the following components:
- At least two computers.
- Cables that connect the computers to each other, although wireless communication is becoming more common.
- A network interface device on each computer (called a network interface card or NIC).
- A switch, used to switch the data from one point to another. Hubs are outdated and are little used for new installations.
- Network operating system software.

Structured Cabling
The two most popular types of structured network cabling are twisted-pair (also known as 10BaseT) and thin coax (also known as 10Base2). 10BaseT cabling looks like ordinary telephone wire, except that it has 8 wires inside instead of 4. Thin coax looks like the copper coaxial cabling that's often used to connect a Video Recorder to a TV.

10BaseT Cabling
When 10BaseT cabling is used, a strand of cabling is inserted between each computer and a hub. If you have 5 computers, you'll need 5 cables. Each cable cannot exceed 100 meters (approximately 328 feet) in length. Because the cables from all of the PCs converge at a common point, a 10BaseT network forms a star configuration.

Fig 4a: Cat5e Cable and a close up of RJ-45 connector

Fig 4b: Cat5e Wall Outlets

Fig 4c: Cat5e Patch Panel

Fig4d: Wall Mounted Cabinet


10BaseT cabling is available in different grades or categories. Some grades, or "cats", are required for Fast Ethernet networks, while others are perfectly acceptable for standard 10 Mbps networks, and less expensive, too. All new networks use a minimum of standard unshielded twisted-pair (UTP) Category 5e 10BaseT cabling, because it offers a performance advantage over lower grades.

TWISTED PAIR
The most common type of twisted pair cabling is Unshielded Twisted Pair (UTP) cabling. This type of cabling is typically made up of 4 twisted pairs of copper wires, as depicted in the image below. Each wire has its own cover, and so does the complete bundle. UTP cabling is categorized using a number; the required category depends on the network technology and the desired transmission speed. The UTP categories are:

Cat.1   Used for voice/telephone communication only.
Cat.2   Data rates up to 4 Mbps.
Cat.3   Data rates up to 4 Mbps in Token Ring networks, 10 Mbps in Ethernet networks; bandwidth of 16 MHz.
Cat.4   Data rates up to 16 Mbps in Token Ring networks, 10 Mbps in Ethernet networks; bandwidth of 20 MHz.
Cat.5   Data rates up to 100 Mbps; bandwidth of 100 MHz.
Cat.5e  Data rates up to 1 Gbps (Gigabit Ethernet); bandwidth of 100 MHz rated (tested up to 350 MHz).
Cat.6   Data rates up to 1 Gbps (Gigabit Ethernet); bandwidth of 250 MHz rated (tested up to 550 MHz).
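Picking a cable grade for a desired speed amounts to a lookup in the table above. The mapping below is simplified to the Ethernet data rates only, and the helper name is illustrative:

```python
# Maximum Ethernet data rate per UTP category, per the table above
# (simplified: Token Ring rates and bandwidth figures omitted).
UTP_MAX_MBPS = {"Cat3": 10, "Cat4": 10, "Cat5": 100, "Cat5e": 1000, "Cat6": 1000}

def categories_for(required_mbps):
    # Every category whose rated data rate meets the requirement.
    return sorted(c for c, rate in UTP_MAX_MBPS.items() if rate >= required_mbps)

print(categories_for(100))   # ['Cat5', 'Cat5e', 'Cat6']
```

So Fast Ethernet (100 Mbps) needs at least Cat 5, and Gigabit Ethernet needs at least Cat 5e, matching the 100BaseTX requirement described below.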

Another, more expensive type of twisted pair cabling is Shielded Twisted Pair (STP). STP cabling includes a metal shield around the bundle of wires, reducing electrical interference and cross-talk. In a cross-over cable, wires 1 & 3 and 2 & 6 are crossed; such cables are typically used to connect a PC directly to a PC, or a switch to a switch, for example. UTP cabling in networks uses the RJ-45 connector, as depicted below:

10BaseT Ethernet, 100BaseTX Fast Ethernet, 1000BaseT and Token Ring are the most common networks that use twisted pair cabling and are described below.

10BaseT

The 10BaseT specification uses Cat 3, 4 and 5 UTP cabling in a star/hierarchical topology. Devices on the network are connected through a central hub. 10BaseT specifications:
- Maximum segment length is 100 meters
- Maximum data transfer speed is 10 Mb/s
- Cat 3, 4 and 5 Unshielded Twisted Pair (UTP)

100BaseTX (Fast Ethernet, 802.3u)


100BaseTX is similar to 10BaseT, except it requires at least Category 5 UTP or Category 1 STP cabling. It uses only 4 of the 8 wires, just like 10BaseT. The maximum data transfer rate is 100 Mb/s.

802.5 (Token Ring)


Token Ring uses the token passing method described earlier in this report. While the logical topology of a Token Ring network is a ring, the physical topology is star/hierarchical, as illustrated in the diagram below. Stations connect to MultiStation Access Units (which look a bit like hubs) using UTP cabling, which in turn are connected in a physical ring. Token Ring specifications:
- Data transfer rate is 4 or 16 Mb/s
- Uses Twisted Pair cabling (Cat 3 for 4 Mb/s, Cat 5 for 16 Mb/s)
- Logical topology is a ring, physical topology is a star
Token Ring was originally created by IBM, and was later standardized by IEEE under the 802.5 specification. The original IBM Token Ring specification uses IBM Class 1 STP cabling with IBM proprietary connectors. This connector is called the IBM-type Data Connector (IDC) or Universal Data Connector (UDC), and is neither male nor female.

Coaxial
Coaxial cabling is used primarily in 10Base2 (Thinnet) and 10Base5 (Thicknet) Ethernet networks. Coaxial cable uses a copper core with a protective shield to reduce interference. The shield is covered with an outer jacket made from PVC or plenum-rated material. The most common types are:
- RG-58U: 50 ohm, used in 10Base2 Ethernet networks (Thinnet).
- RG-8: 50 ohm, used in 10Base5 Ethernet networks (Thicknet).
- RG-59 / RG-6: 75 ohm, used for cable television (hence, cable modem access), video, digital audio, and telecommunication applications (for example E1 coaxial cabling).

10Base2
Commonly referred to as Thinnet, 10Base2 uses a bus topology. Stations are attached using BNC T-connectors, represented in the picture below. Both cable ends are terminated using a 50 ohm terminator.

BNC (Bayonet Neill-Concelman, often incorrectly expanded as "British Naval Connector") T-connector. 10Base2 specifications:
- Maximum segment length is 185 meters
- Maximum data transfer speed is 10 Mb/s
- 0.2 inch, 50 ohm RG-58 coaxial cable (Thinnet)

10Base5
Commonly referred to as Thicknet, 10Base5 uses a bus topology. Stations are attached to the cable using MAUs (Medium Attachment Units), transceivers that are attached to the cable using vampire taps that pierce it. A cable with AUI connectors is used to connect the transceiver to the network interface on, for example, a computer, hub or repeater. Both cable ends are terminated using a 50 ohm terminator.

AUI connectors

MAU transceiver

10Base5 specifications:
- Maximum segment length is 500 meters
- Maximum data transfer speed is 10 Mb/s
- 0.4 inch, 50 ohm coaxial RG-8 cabling (Thicknet)

FIBER OPTIC
Fiber optic cabling is a relatively new technology that allows for fast data transfer over large distances. Fiber optic cabling is not susceptible to electrical interference, but it needs expensive equipment and is fragile. There are two main types of fiber optics. The first is multi-mode, which is typically used in a corporate network's backbone. In a multi-mode cable, light travels down the fiber cable in multiple paths; essentially, the light beam is reflected off the cladding (the material surrounding the actual fiber) as it travels down the core. The other type is single-mode, which is typically used by telephone companies to cover very large distances. In a single-mode cable, light travels through the cable without interacting with the glass cladding, maintaining signal quality over great distances. Fiber optic cabling is connected using SC, ST or MIC connectors.

SC connectors

ST connectors

MIC connectors

Network Devices
Network Interface Card (NIC)
A NIC (pronounced 'nick') is also known as a network card. It connects the computer to the cabling, which in turn links all of the computers on the network together. Each computer on a network must have a network card. Most modern network cards are 10/100 NICs and can operate at either 10 Mbps or 100 Mbps. Only NICs supporting a minimum of 100 Mbps should be used in new installations in schools. Computers with a wireless connection to a network also use a network card (see Advice Sheet 20 for more information on wireless networking).

Fig 5: Network Interface Cards (NICs)

Network Repeater
A repeater connects two segments of your network cable. It retimes and regenerates the signals to proper amplitudes and sends them to the other segments. When talking about Ethernet topology, you are probably talking about using a hub as a repeater. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay which can affect network communication when there are several repeaters in a row; many network architectures limit the number of repeaters that can be used in a row. Repeaters work only at the physical layer of the OSI network model.

Bridge
A bridge reads the outermost section of data on the data packet to tell where the message is going. It reduces the traffic on other network segments, since it does not forward all packets. Bridges can be programmed to reject packets from particular networks. Bridging occurs at the data link layer of the OSI model, which means the bridge cannot read IP addresses, but only the outermost hardware address of the packet. In our case the bridge can read the Ethernet data, which gives the hardware address of the destination, not the IP address. Bridges forward all broadcast messages. Only a special bridge called a translation bridge will allow two networks of different architectures to be connected; bridges do not normally allow connection of networks with different architectures. The hardware address is also called the MAC (media access control) address. To determine which network segment a MAC address belongs to, bridges use one of:
- Transparent bridging: bridges build a table of addresses (the bridging table) as they receive packets. If a destination address is not in the bridging table, the packet is forwarded to all segments other than the one it came from. This type of bridge is used on Ethernet networks.
- Source route bridging: the source computer provides path information inside the packet. This is used on Token Ring networks.
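The transparent-bridging behaviour can be sketched in a few lines of Python. This is a hypothetical simulation, not code from any real bridge; the segment names and MAC strings are invented for the example. The bridge learns which segment each source address lives on, and floods frames whose destination it has not yet learned:

```python
# Minimal simulation of a transparent (learning) bridge.
# Hypothetical sketch: segment names and MAC strings are illustrative.

class LearningBridge:
    def __init__(self):
        self.table = {}  # MAC address -> segment (the bridging table)

    def handle_frame(self, src_mac, dst_mac, arrived_on):
        # Learn: the source address must live on the arrival segment.
        self.table[src_mac] = arrived_on
        if dst_mac in self.table:
            out = self.table[dst_mac]
            # Frames destined for the arrival segment need not be forwarded.
            return [] if out == arrived_on else [out]
        # Unknown destination: flood to every segment except the source one.
        return [seg for seg in ("seg1", "seg2", "seg3") if seg != arrived_on]

bridge = LearningBridge()
print(bridge.handle_frame("AA", "BB", "seg1"))  # BB unknown -> ['seg2', 'seg3']
print(bridge.handle_frame("BB", "AA", "seg2"))  # AA learned -> ['seg1']
print(bridge.handle_frame("AA", "BB", "seg1"))  # BB learned -> ['seg2']
```

Note how the first frame is flooded, while later frames are forwarded only to the learned segment, which is exactly how a bridge reduces traffic on the other segments.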

Network Router
A router is used to route data packets between two networks. It reads the information in each packet to tell where it is going. If the packet is destined for an immediate network the router has access to, it will strip the outer packet, readdress the packet to the proper Ethernet address, and transmit it on that network. If it is destined for another network and must be sent to another router, it will re-package the outer packet to be received by the next router and send it on. The section on routing explains the theory behind this and how routing tables are used to help determine packet destinations. Routing occurs at the network layer of the OSI model. Routers can connect networks with different architectures such as Token Ring and Ethernet. Although they can transform information at the data link level, routers cannot transform information from one data format such as TCP/IP to another such as IPX/SPX. Routers do not forward broadcast packets or corrupted packets. If the routing table does not indicate the proper address for a packet, the packet is discarded.

Brouter
There is a device called a brouter which functions as a bridge for network transport protocols that are not routable, and as a router for routable protocols. It operates at the network and data link layers of the OSI network model.

Gateway
A gateway can translate information between different network data formats or network architectures. It can translate TCP/IP to AppleTalk so computers supporting TCP/IP can communicate with Apple computers. Most gateways operate at the application layer, but they can operate at the network or session layer of the OSI model. A gateway starts at the lower levels and strips information until it gets to the required level, then repackages the information and works its way back toward the hardware layer of the OSI model. To confuse matters, when talking about a router that is used to interface to another network, the word gateway is often used. This does not mean the routing machine is a gateway as defined here, although it could be.

Hub and Switch


A hub is a device used to connect a PC to the network. The function of a hub is to direct information around the network, facilitating communication between all connected devices. However, in new installations switches should be used instead of hubs, as they are more effective and provide better performance. A switch is often termed a 'smart hub'. Switches and hubs are boxes to which computers, printers, and other networking devices are connected. Switches are the more recent technology and the accepted way of building today's networks. With switching, each connection gets dedicated bandwidth and can operate at full speed. In contrast, a hub shares bandwidth across multiple connections, such that activity from one PC or server can slow down the effective speed of other connections on the hub. Now more affordable than ever, dual-speed 10/100 autosensing switches are recommended for all school networks. Schools may want to consider upgrading any hub-based networks with switches to improve network performance, i.e. the speed of data on the network.

Fig 6a: An 8 port Hub

Fig 6b: 2 Examples of 24 port Switches

PROTOCOLS

TCP/IP
TCP/IP is today's most popular network protocol and is the protocol of the Internet. It is a routable protocol that provides connection between heterogeneous systems; these are the main reasons the protocol is so widely adopted. For example, it allows communication between UNIX, Windows, NetWare and Mac OS computers spread over multiple interconnected networks. The "TCP/IP protocol" is actually the "TCP/IP suite", composed of many different protocols, each with its own functions. The two main protocols are in its name: the Internet Protocol and the Transmission Control Protocol. IP addressing means assigning a 32-bit logical numeric address to a network device. Every IP address on the network must be unique. An IP address is represented in a dotted decimal format, for example: 159.101.6.8. As you can see, the address is divided into 4 parts; these parts are called octets. The addressing scheme currently used in version 4 of IP is divided into 5 classes, identified by the first octet:
- Class A: 1 - 126
- Class B: 128 - 191
- Class C: 192 - 223
- Class D: 224 - 239
- Class E: 240 - 254

A subnet mask is used to determine which part is the network part and which is the host part. Default subnet masks:
- Class A: 255.0.0.0
- Class B: 255.255.0.0
- Class C: 255.255.255.0

IANA reserved 4 address ranges to be used in private networks; these addresses will not appear on the Internet, avoiding IP address conflicts:
- 10.0.0.0 through 10.255.255.255
- 172.16.0.0 through 172.31.255.255
- 192.168.0.0 through 192.168.255.255
- 169.254.0.1 through 169.254.255.254 (reserved for Automatic Private IP Addressing)
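Python's standard ipaddress module knows these reserved ranges, which makes it easy to check whether an address is private or link-local. The specific addresses chosen below are just examples:

```python
import ipaddress

# Addresses from the reserved ranges listed above.
print(ipaddress.ip_address("10.1.2.3").is_private)          # True
print(ipaddress.ip_address("172.16.0.1").is_private)        # True
print(ipaddress.ip_address("192.168.1.100").is_private)     # True
print(ipaddress.ip_address("169.254.10.20").is_link_local)  # True (APIPA range)
print(ipaddress.ip_address("8.8.8.8").is_private)           # False: a public address
```

This kind of check is useful, for example, when deciding whether an address needs NAT to reach the Internet.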

TCP/IP Ports and Addresses


Each machine in the network shown below has one or more network cards. The part of the network that does the job of transporting and managing the data across the network is called TCP/IP, which stands for Transmission Control Protocol (TCP) and Internet Protocol (IP).


There are other alternative mechanisms for managing network traffic, but most, such as IPX/SPX for NetWare, will not be described here in much detail. The IP layer requires a 4-byte (IPv4) or 16-byte (IPv6) address to be assigned to each network interface card on each computer. This can be done automatically using network software such as the Dynamic Host Configuration Protocol (DHCP) or by manually entering static addresses into the computer.

Ports
The TCP layer requires what is called a port number to be assigned to each message. This way it can determine the type of service being provided. Please be aware that when we are talking about "ports" here, we are not talking about the ports used for serial and parallel devices, or ports used for computer hardware control. These ports are merely reference numbers used to define a service. For instance, port 23 is used for telnet services, and HTTP uses port 80 for providing web browsing service. A group called IANA (the Internet Assigned Numbers Authority) controls the assigning of ports to specific services. Some ports are assigned, some are reserved, and many are unassigned and may be utilized by application programs. Port numbers are unsigned 16-bit integers, ranging from 0 up to 65535.
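A few of the well-known port assignments mentioned above can be kept in a simple lookup table. The selection here is our own illustrative subset; IANA maintains the authoritative registry:

```python
# A few well-known ports from the IANA registry (illustrative subset).
WELL_KNOWN_PORTS = {
    "ftp": 21,
    "telnet": 23,
    "smtp": 25,
    "http": 80,
    "pop3": 110,
}

def port_for(service):
    """Return the port for a service name, or None if not listed."""
    return WELL_KNOWN_PORTS.get(service.lower())

print(port_for("telnet"))  # 23
print(port_for("HTTP"))    # 80

# Port numbers are unsigned 16-bit values, so they all fit in 0..65535.
assert all(0 <= p <= 65535 for p in WELL_KNOWN_PORTS.values())
```

On a real system, the same mapping can also be read from the operating system's services database.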

Addresses
Addresses are used to locate computers. They work almost like a house address: there is a numbering system to help the mailman locate the proper house to deliver a customer's mail to. Without an IP numbering system, it would not be possible to determine where network data packets should go. IPv4, which means Internet Protocol version 4, is described here. Each IP address is written in what is called dotted decimal notation. This means there are four numbers, each separated by a dot. Each number represents a one-byte value with a possible mathematical range of 0-255. Briefly, the first one or two bytes, depending on the class of network, generally indicate the number of the network, the third byte indicates the number of the subnet, and the fourth number indicates the host number. This numbering scheme varies depending on the network and the numbering method used, such as Classless Inter-Domain Routing (CIDR), which is described later. In the classful scheme the host number cannot be 0 or 255, because addresses with all host bits set are used for broadcasting. Broadcasting is a form of communication that all hosts on a network can read, and is normally used for performing various network queries.

An address of all 0's is not used, because when a machine that does not yet have an address assigned is booted, it provides 0.0.0.0 as its address until it receives its assignment. This would occur for machines that are remote booted or that boot using the Dynamic Host Configuration Protocol (DHCP). The part of the IP address that defines the network is referred to as the network ID, and the latter part of the IP address, which defines the host address, is referred to as the host ID.

IPv6 is an enhancement to the IPv4 standard, created because of the shortage of Internet addresses. IPv6 addresses are 128 bits long instead of 32, and are written as eight groups of four hexadecimal digits separated by colons rather than in dotted decimal notation.

Internet Protocol
The Internet was born in 1969 as a research network of four machines that was funded by the Department of Defense's Advanced Research Projects Agency (ARPA). The goal was to build an efficient, fault-tolerant network that could connect heterogeneous machines and link together separately connected networks. The network protocol is called the Internet Protocol, or IP. It is a connectionless protocol that is designed to handle the interconnection of the large number of local and wide area networks that comprise the Internet. IP may route a packet from one physical network to another. Every machine on an IP network is assigned a unique 32-bit IP address. When an application sends data to a machine, it must address it with the IP address of that machine. The IP address is not the same as the machine address (e.g. the Ethernet address) but is strictly a logical address. A 32-bit address can potentially support 2^32, or 4,294,967,296 addresses. If every machine on an IP network received an arbitrary IP address, then routers would need to keep a table of over four billion entries to know how to direct traffic throughout the Internet! To deal with this more sensibly, routing tables were designed so that one entry can match multiple addresses. To do this, a hierarchy of addressing was created so that machines that are physically close together (say, in the same organization) would share a common prefix of bits in the address. For instance, consider these two machines:
- cs.rutgers.edu, address 128.6.4.2 (in hex: 80 06 04 02)
- remus.rutgers.edu, address 128.6.13.3 (in hex: 80 06 0d 03)
The first sixteen bits identify the entire set of machines within Rutgers University. Systems outside of Rutgers that encounter any destination IP address that begins with 0x8006 only have to know how to route those packets to some machine (router) within Rutgers that can take care of routing the exact address to the proper machine.
This saves the outside world from keeping track of up to 65,536 (2^16) machines within Rutgers. An IP address consists of two parts: the network number, which identifies the network the machine belongs to, and the host number, which identifies a machine on that network.

The network number is used to route the IP packet to the correct local area network. The host number is used to identify a specific machine once in that local area network. If we used a fixed 16-bit partition between network numbers and host numbers, we would be allowed a maximum of 65,536 (2^16) separate networks on the Internet, each with a maximum of 65,536 hosts. The expectation, however, was that there would be a few big networks and many small ones. To support this, networks are divided into several classes. These classes allow the address space to be partitioned into a few big networks that can support many machines and many smaller networks that can support few machines. The first bits of an IP address identify the class of the network:
- Class A: leading bit 0, 7 bits for the network number, 24 bits for the host number
- Class B: leading bits 10, 14 bits for the network number, 16 bits for the host number
- Class C: leading bits 110, 21 bits for the network number, 8 bits for the host number
An IP address is generally written as a sequence of four bytes in decimal separated by periods. For example, an IP address written as 135.250.68.43 translates into the hexadecimal address 87FA442B (135=0x87, 250=0xfa, etc.). In binary, this address is 1000 0111 1111 1010 0100 0100 0010 1011. The leading bits of this address are 10, which identifies the address as belonging to a class B network. The next 14 bits (00 0111 1111 1010) contain the network number (7FA) and the last 16 bits contain the host number (442B).
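The class-determination rule above reduces to comparing the first octet against a few boundaries. A small sketch, using the worked example from the text (135.250.68.43), whose first octet starts with binary 10 and so falls in class B:

```python
def address_class(ip):
    """Classify an IPv4 address by the leading bits of its first octet."""
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"   # leading bit  0
    if first < 192:
        return "B"   # leading bits 10
    if first < 224:
        return "C"   # leading bits 110
    if first < 240:
        return "D"   # leading bits 1110 (multicast)
    return "E"       # leading bits 1111 (reserved)

# The example used in the text: 135.250.68.43 = 0x87FA442B.
octets = [int(o) for o in "135.250.68.43".split(".")]
print("".join(f"{o:02X}" for o in octets))  # 87FA442B
print(address_class("135.250.68.43"))       # B
print(address_class("128.6.4.2"))           # B (the cs.rutgers.edu example)
```

The same boundaries (128, 192, 224, 240) appear directly in the class table, since each extra leading 1 bit halves the remaining first-octet range.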

UDP
The User Datagram Protocol (UDP) is one of the core members of the Internet Protocol Suite, the set of network protocols used for the Internet. With UDP, computer applications can send messages, sometimes known as datagrams, to other hosts on an Internet Protocol (IP) network without requiring prior communications to set up special transmission channels or data paths. UDP is sometimes called the Universal Datagram Protocol. UDP uses a simple transmission model without implicit hand-shaking dialogues for guaranteeing reliability, ordering, or data integrity. Thus, UDP provides an unreliable service and datagrams may arrive out of order, appear duplicated, or go missing without notice. UDP assumes that error checking and correction is either not necessary or is performed in the application, avoiding the overhead of such processing at the network interface level. Time-sensitive applications often use UDP because dropping packets is preferable to using delayed packets. If error correction facilities are needed at the network interface level, an application may use the Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP), which are designed for this purpose. UDP's stateless nature is also useful for servers that answer small queries from huge numbers of clients. Unlike TCP, UDP is compatible with packet broadcast (sending to all on the local network) and multicasting (sending to all subscribers).
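UDP's connectionless send-and-forget model is easy to see with two sockets on the loopback interface. A minimal sketch using Python's standard socket module (the port is chosen by the OS, and the message text is arbitrary):

```python
import socket

# Receiver: bind to an OS-assigned port on the loopback interface.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
addr = receiver.getsockname()

# Sender: no connection setup, no handshake - just send a datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello via UDP", addr)

# The datagram arrives as a single unit (or not at all - UDP is unreliable,
# though loss is not a practical concern on loopback).
data, peer = receiver.recvfrom(1024)
print(data.decode())  # hello via UDP

sender.close()
receiver.close()
```

Contrast this with TCP, where a three-way handshake would occur before any data could flow.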

ICMP
The Internet Control Message Protocol (ICMP) is one of the core protocols of the Internet Protocol Suite. It is chiefly used by networked computers' operating systems to send error messages indicating, for instance, that a requested service is not available or that a host or router could not be reached. ICMP relies on IP to perform its tasks, and it is an integral part of IP. It differs in purpose from transport protocols such as TCP and UDP in that it is typically not used to send and receive data between end systems. It is usually not used directly by user network applications, with some notable exceptions being the ping tool and traceroute. ICMP for Internet Protocol version 4 (IPv4) is also known as ICMPv4; IPv6 has a similar protocol, ICMPv6. ICMP messages are constructed at the IP layer, usually from a normal IP datagram that has generated an ICMP response. IP encapsulates the appropriate ICMP message with a new IP header (to get the ICMP message back to the original sending host) and transmits the resulting datagram in the usual manner. For example, every machine (such as an intermediate router) that forwards an IP datagram has to decrement the time to live (TTL) field of the IP header by one; if the TTL reaches 0, an ICMP "Time to live exceeded in transit" message is sent to the source of the datagram. Each ICMP message is encapsulated directly within a single IP datagram, and thus, like UDP, ICMP is unreliable. Although ICMP messages are contained within standard IP datagrams, ICMP messages are usually processed as a special case, distinguished from normal IP processing, rather than processed as a normal sub-protocol of IP. In many cases, it is necessary to inspect the contents of the ICMP message and deliver the appropriate error message to the application that generated the original IP packet, the one that prompted the sending of the ICMP message. Many commonly used network utilities are based on ICMP messages.
The traceroute command is implemented by transmitting UDP datagrams with specially set IP TTL header fields, and looking for ICMP Time to live exceeded in transit (above) and "Destination unreachable" messages generated in response. The related ping utility is implemented using the ICMP "Echo request" and "Echo reply" messages.
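The TTL mechanism that traceroute exploits can be simulated without any network access. In this purely illustrative model (router names invented), each forwarding hop decrements the TTL by one, and the hop where it reaches zero is the one that would send back the "Time to live exceeded" ICMP message:

```python
def ttl_expires_at(path, ttl):
    """Return the router that would send 'TTL exceeded', or None if the
    datagram survives the whole path (i.e. it reaches its destination)."""
    for router in path:
        ttl -= 1                 # every forwarding hop decrements the TTL
        if ttl == 0:
            return router        # this hop reports back to the source via ICMP
    return None

path = ["r1", "r2", "r3", "r4"]
# traceroute sends probes with TTL = 1, 2, 3, ... to reveal each hop in turn:
print([ttl_expires_at(path, t) for t in range(1, 5)])  # ['r1', 'r2', 'r3', 'r4']
```

Sending probes with increasing TTL values thus reveals the routers one hop at a time, which is exactly the trick traceroute uses.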

IGMP
The Internet Group Management Protocol (IGMP) is a communications protocol used to manage the membership of Internet Protocol multicast groups. IGMP is used by IP hosts and adjacent multicast routers to establish multicast group memberships. It is an integral part of the IP multicast specification, operating above the network layer, though it doesn't actually act as a transport protocol. It is analogous to ICMP for unicast connections. IGMP can be used for online streaming video and gaming, and allows more efficient use of resources when supporting these types of applications. IGMP does allow some attacks, and firewalls commonly allow the user to disable it if not needed. The IGMP protocol is implemented with a host side and a router side. A host side reports its membership of a group to its local router, and a router side listens to reports from hosts and periodically sends out queries. The Linux operating system supports IGMP. The Linux kernel at the core of the operating system only implements the host side, not the router side; however, a daemon such as mrouted can be used to act as an IGMP router under Linux. There are also entire routing suites (such as XORP) which turn an ordinary computer into a full-fledged multicast router.

SMTP
Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (e-mail) transmission across Internet Protocol (IP) networks. SMTP was first defined in RFC 821 (STD 10), and last updated by RFC 5321 (2008), which describes the protocol in widespread use today, also known as extended SMTP (ESMTP). While electronic mail server software uses SMTP to send and receive mail messages, user-level client mail applications typically only use SMTP for sending messages to a mail server for relaying. For receiving messages, client applications usually use either the Post Office Protocol (POP) or the Internet Message Access Protocol (IMAP) to access their mailbox accounts on a mail server. SMTP is a relatively simple, text-based protocol, in which one or more recipients of a message are specified (and in most cases verified to exist) along with the message text and possibly other encoded objects. The message is then transferred to a remote server using a series of queries and responses between the client and server. Either an end-user's e-mail client, a.k.a. MUA (Mail User Agent), or a relaying server's MTA (Mail Transfer Agent) can act as an SMTP client. An e-mail client knows the outgoing mail SMTP server from its configuration. A relaying server typically determines which SMTP server to connect to by looking up the MX (Mail eXchange) DNS record for each recipient's domain name. Conformant MTAs (though not all are conformant) fall back to a simple A record in the case of no MX (relaying servers can also be configured to use a smart host). The SMTP client initiates a TCP connection to the server's port 25 (unless overridden by configuration). It is quite easy to test an SMTP server using the netcat program (see below).

SMTP is a "push" protocol that cannot "pull" messages from a remote server on demand. To retrieve messages only on demand, which is the most common requirement on a single-user computer, a mail client must use POP3 or IMAP. Another SMTP server can trigger a delivery in SMTP using ETRN. It is possible to receive mail by running an SMTP server. POP3 became popular when single-user computers connected to the Internet only intermittently; SMTP is more suitable for a machine permanently connected to the Internet. An e-mail client requires the name or the IP address of an SMTP server as part of its configuration. The server will deliver messages on behalf of the user. This setting allows for various policies and network designs. End users connected to the Internet can use the services of an e-mail provider that is not necessarily the same as their connection provider (ISP). Network topology, or the location of a client within a network or outside of a network, is no longer a limiting factor for e-mail submission or delivery. Modern SMTP servers typically use a client's credentials (authentication) rather than a client's location (IP address) to determine whether it is eligible to relay e-mail.
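The query/response dialogue can be illustrated by building the client's side of a minimal SMTP session as plain text. The addresses and host name below are invented for the example, and a real client would also read and check the server's numeric reply codes between commands:

```python
def smtp_client_lines(sender, recipients, body):
    """Build the command sequence a minimal SMTP client would send.
    Hypothetical sketch: a real session interleaves server replies."""
    lines = ["HELO client.example.org", f"MAIL FROM:<{sender}>"]
    for rcpt in recipients:
        lines.append(f"RCPT TO:<{rcpt}>")
    lines.append("DATA")
    lines.append(body)
    lines.append(".")        # a line with a lone dot terminates the message body
    lines.append("QUIT")
    return lines

for line in smtp_client_lines("alice@example.org",
                              ["bob@example.net"],
                              "Subject: test\r\n\r\nHello Bob"):
    print(line)
```

Typing exactly these commands into a netcat or telnet session connected to port 25 is how an SMTP server is commonly tested by hand.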

RPC
Remote procedure call (RPC) is an inter-process communication technology that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details of this remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program or remote. When the software in question is written using object-oriented principles, RPC may be referred to as remote invocation or remote method invocation. RPC is an obvious and popular paradigm for implementing the client-server model of distributed computing. An RPC is initiated by the client sending a request message to a known remote server in order to execute a specified procedure using supplied parameters. A response is returned to the client, where the application continues along with its process. There are many variations and subtleties in the various implementations, resulting in a variety of different (incompatible) RPC protocols. While the server is processing the call, the client is blocked (it waits until the server has finished processing before resuming execution). An important difference between remote procedure calls and local calls is that remote calls can fail because of unpredictable network problems. Also, callers generally must deal with such failures without knowing whether the remote procedure was actually invoked. Idempotent procedures (those which have no additional effects if called more than once) are easily handled, but enough difficulties remain that code which calls remote procedures is often confined to carefully written low-level subsystems.

Server algorithm:
- Create a passive-mode socket
- Register the socket in the translation database
- Loop: wait for an incoming connection, get data from the client, perform the evaluation, return data to the client, close the connection

Client algorithm:
- Connect to the portmapper (translation database, port 111)
- Ask for the port number of the specified service
- Connect to the RPC server on the received port number
- Send data, wait for results, receive data, close the connection
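The request/blocking-response cycle can be demonstrated with Python's standard XML-RPC modules. This is a stand-in for the portmapper-based ONC RPC described above, run entirely in one process for the sake of the example (the "add" procedure is invented):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register a procedure and serve requests in a background thread.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the call looks like a local function call, but the arguments
# travel over the network. The client blocks until the response arrives.
proxy = ServerProxy(f"http://127.0.0.1:{port}/")
result = proxy.add(2, 3)
print(result)  # 5

server.shutdown()
```

Note how `proxy.add(2, 3)` reads exactly like a local call, which is the whole point of RPC; the failure modes, however, now include network errors that a local call could never raise.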

XDR
External Data Representation (XDR) is an IETF standard from 1995 for the presentation layer in the OSI model. XDR allows data to be wrapped in an architecture-independent manner so data can be transferred between heterogeneous computer systems. Converting from the local representation to XDR is called encoding; converting from XDR to the local representation is called decoding. XDR is implemented as a software library of functions that is portable between different operating systems and is also independent of the transport layer.
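The encode/decode round trip can be illustrated with Python's struct module: like XDR, it packs values into big-endian, 4-byte units regardless of the host machine's native byte order. This is a simplified stand-in for demonstration, not a full XDR library:

```python
import struct

# Encode: host representation -> big-endian 4-byte wire format
# (the ">" format prefix forces network/big-endian byte order).
encoded = struct.pack(">ii", 42, -7)
print(encoded.hex())  # 0000002afffffff9

# Decode: wire format -> host representation, identical on any architecture.
a, b = struct.unpack(">ii", encoded)
print(a, b)  # 42 -7
```

Because both sides agree on the wire format, a big-endian and a little-endian machine can exchange these bytes and recover the same values, which is exactly the problem XDR solves.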

TELNET
Telnet (Telecommunication network) is a network protocol used on the Internet or on local area network (LAN) connections. It was developed in 1969, beginning with RFC 15, and standardized as IETF STD 8, one of the first Internet standards. Typically, telnet provides access to a command-line interface on a remote machine. When Telnet was initially developed in 1969, most users of networked computers were in the computer departments of academic institutions, or at large private and government research facilities. In this environment, security was not nearly as much of a concern as it became after the bandwidth explosion of the 1990s. The rise in the number of people with access to the Internet, and by extension the number of people attempting to crack other people's servers, made encrypted alternatives much more of a necessity. Experts in computer security, such as the SANS Institute and the members of the comp.os.linux.security newsgroup, recommend that the use of Telnet for remote logins should be discontinued under all normal circumstances, for the following reasons:

Telnet, by default, does not encrypt any data sent over the connection (including passwords), and so it is often practical to eavesdrop on the communications and use the password later for malicious purposes; anybody who has access to a router, switch, hub or gateway located on the network between the two hosts where Telnet is being used can intercept the packets passing by and obtain login and password information (and whatever else is typed) with any of several common utilities like tcpdump and Wireshark. Most implementations of Telnet have no authentication that would ensure communication is carried out between the two desired hosts and not intercepted in the middle. Commonly used Telnet daemons have several vulnerabilities discovered over the years.

These security-related shortcomings have seen the usage of the Telnet protocol drop rapidly, especially on the public Internet, in favor of the SSH protocol, first released in 1995. SSH provides much of the functionality of telnet, with the addition of strong encryption to prevent sensitive data such as passwords from being intercepted, and public key authentication, to ensure that the remote computer is actually who it claims to be. As has happened with other early Internet protocols, extensions to the Telnet protocol provide TLS security and SASL authentication that address the above issues. However, most Telnet implementations do not support these extensions, and there has been relatively little interest in implementing them, as SSH is adequate for most purposes. The main advantage of TLS-Telnet would be the ability to use certificate-authority-signed server certificates to authenticate a server host to a client that does not yet have the server key stored. In SSH, there is a weakness in that the user must trust the first session to a host when it has not yet acquired the server key. As of the mid-2000s, while the Telnet protocol itself has been mostly superseded for remote login, Telnet clients are still used, often when diagnosing problems, to manually "talk" to other services without specialized client software.

Network Addressing
IP addresses (IPv4) are broken into 4 octets separated by dots, called dotted decimal notation. An octet is a byte consisting of 8 bits. IPv4 addresses are of the following form: 192.168.10.1

There are two parts to an IP address:
Network ID
Host ID

The various classes of networks assign more or fewer octets to the network ID versus the host ID:

Class   1st Octet   2nd Octet   3rd Octet   4th Octet
A       Net ID      Host ID     Host ID     Host ID
B       Net ID      Net ID      Host ID     Host ID
C       Net ID      Net ID      Net ID      Host ID

When a network is set up, a netmask is also specified. The netmask determines the class of the network as shown below, except under CIDR. When the netmask is set up, it specifies some number of most significant bits with a value of 1, and the rest have values of 0. The most significant part of the netmask, with bits set to 1, specifies the network address, and the lower part of the address specifies the host address. When setting addresses on a network, remember there can be no host address of 0 (no host address bits set), and there can be no host address with all bits set.
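The network/host split performed by the netmask can be illustrated with a short Python sketch; the helper names (to_int, split_address) are invented for this illustration, and the address and mask are arbitrary examples.

```python
# Split an IPv4 address into its network and host parts using a netmask.
def to_int(dotted: str) -> int:
    # "192.168.10.1" -> 32-bit integer
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(n: int) -> str:
    # 32-bit integer -> dotted decimal notation
    return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

def split_address(addr: str, mask: str):
    a, m = to_int(addr), to_int(mask)
    network = a & m                     # bits covered by 1s in the mask
    host = a & ~m & 0xFFFFFFFF          # bits covered by 0s in the mask
    return to_dotted(network), host

print(split_address("192.168.10.1", "255.255.255.0"))
```

The AND with the mask keeps the network portion; the AND with the inverted mask keeps the host portion, which must be neither all zeros nor all ones.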

Class A-E networks


The addressing scheme for class A through E networks is shown below. Note: We use the 'x' character here to denote don't-care positions, which include all possible numbers at that location. It is often used to denote networks.

Network Type   Address Range                  Normal Netmask    Comments
Class A        1.x.x.x to 126.x.x.x           255.0.0.0         For very large networks
Class B        128.1.x.x to 191.254.x.x       255.255.0.0       For medium size networks
Class C        192.0.1.x to 223.255.254.x     255.255.255.0     For small networks
Class D        224.x.x.x to 239.255.255.255   -                 Used to support multicasting
Class E        240.x.x.x to 247.255.255.255   -                 Reserved for experimental use
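A small sketch that maps an address's first octet to the class ranges in the table above (the ranges follow the table, not any particular library; the function name is illustrative):

```python
# Determine the classful network type from the first octet of an address.
def address_class(addr: str) -> str:
    first = int(addr.split(".")[0])
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D"
    if 240 <= first <= 247:
        return "E"
    return "reserved/loopback"   # 0, 127, and the remaining high range

print(address_class("10.0.0.1"))     # A
print(address_class("192.168.1.1"))  # C
```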

RFCs 1518 and 1519 define a system called Classless Inter-Domain Routing (CIDR) which is used to allocate IP addresses more efficiently. This may be used with subnet masks to establish networks rather than the class system shown above. A class C subnet may be 8 bits, but using CIDR it may be, for example, 12 bits. There are some network addresses reserved for private use by the Internet Assigned Numbers Authority (IANA) which can be hidden behind a computer that uses IP masquerading to connect the private network to the Internet. Three sets of addresses are reserved:

10.x.x.x
172.16.x.x - 172.31.x.x
192.168.x.x

Other reserved or commonly used addresses:

127.0.0.1 - The loopback interface address. All 127.x.x.x addresses are used by the loopback interface, which copies data from the transmit buffer to the receive buffer of the NIC when used.
0.0.0.0 - Reserved for hosts that don't know their address and use the BOOTP or DHCP protocols to determine their addresses.
255 - The value 255 is never used as an address for any part of the IP address. It is reserved for broadcast addressing. (This is exclusive of CIDR; when using CIDR, the restriction is that the bits of the address can never be all ones.)

To further illustrate, a few examples of valid and invalid addresses are listed below.

Valid addresses:
10.1.0.1 through 10.1.0.254
10.0.0.1 through 10.0.0.254
10.0.1.1 through 10.0.1.254

Invalid addresses:
10.1.0.0 - Host IP can't be 0.
10.1.0.255 - Host IP can't be 255.
10.123.255.4 - No network or subnet octet can have a value of 255.
0.12.16.89 - No Class A network can have an address of 0.
255.9.56.45 - No network address can be 255.
10.34.255.1 - No network or subnet octet can be 255.
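Python's standard ipaddress module already knows the IANA-reserved ranges listed above, so these rules can be checked directly; a minimal sketch with arbitrary example addresses:

```python
# Check whether addresses fall into the reserved private or loopback ranges.
import ipaddress

for addr in ("10.1.0.1", "172.20.5.9", "192.168.1.1", "8.8.8.8", "127.0.0.1"):
    ip = ipaddress.ip_address(addr)
    print(addr, "private:", ip.is_private, "loopback:", ip.is_loopback)
```

Addresses in the three private blocks report `is_private` as True, while 127.x.x.x addresses additionally report `is_loopback` as True.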

Network/Netmask specification
Sometimes you may see a network interface card (NIC) IP address specified in the following manner: 192.168.1.1/24. The first part, "192.168.1.1", indicates the IP address of the NIC. The second part, "/24", indicates the netmask value, meaning in this case that the first 24 bits of the netmask are set. This makes the netmask value 255.255.255.0. If the last part of the line above were "/16", the netmask would be 255.255.0.0.
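The /N shorthand converts to a dotted netmask mechanically; a minimal conversion sketch (the function name is illustrative):

```python
# Convert a CIDR prefix length (the "/24" notation) into a dotted netmask.
def prefix_to_netmask(prefix: int) -> str:
    # Set the top `prefix` bits of a 32-bit value, then render dotted decimal.
    mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF if prefix else 0
    return ".".join(str((mask >> s) & 0xFF) for s in (24, 16, 8, 0))

print(prefix_to_netmask(24))  # 255.255.255.0
print(prefix_to_netmask(16))  # 255.255.0.0
```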


Subnet masks
Subnetting is the process of breaking a main class A, B, or C network down into subnets for routing purposes. A subnet mask is the same basic thing as a netmask, the only real difference being that you are breaking a larger organizational network into smaller parts, and each smaller section will use a different set of address numbers. This allows network packets to be routed between subnetworks. When subnetting, the number of bits in the subnet mask determines the number of available subnets: two to the power of the number of subnet bits, minus two, is the number of available subnets. When setting up subnets, the following must be determined:

Number of segments
Hosts per segment
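The available-subnet count described above (two to the power of the subnet bits, minus two) is trivial to compute; a one-function sketch, following the older convention that excludes the all-zeros and all-ones subnets:

```python
# Number of usable subnets for a given number of subnet-mask bits,
# under the older rules that reserve the all-0s and all-1s subnets.
def usable_subnets(subnet_bits: int) -> int:
    return 2 ** subnet_bits - 2

print(usable_subnets(3))  # 3 subnet bits -> 6 usable subnets
```

Newer standards permit the two reserved subnets, in which case the count would simply be 2 ** subnet_bits; the formula here matches the legacy rule discussed later in this section.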

Subnetting provides the following advantages:

Network traffic isolation - There is less network traffic on each subnet.
Simplified administration - Networks may be managed independently.
Improved security - Subnets can isolate internal networks so they are not visible from external networks.

A 14-bit subnet mask on a class B network leaves only 2 host bits, giving just 2 usable node addresses, which suits point-to-point WAN links. A routing algorithm like OSPF or EIGRP must be used for this approach, since these protocols support variable length subnet masks (VLSM); RIP and IGRP do not. For this to work, subnet mask information must be transmitted in the update packets of the dynamic routing protocol. The router subnet mask is different than the WAN interface subnet mask.

One network ID is required for each:
Subnet
WAN connection

One host ID is required for each:
NIC on each host
Router interface

Types of subnet masks:

Default - Fits into a Class A, B, or C network category.
Custom - Used to break a default network, such as a Class A, B, or C network, into subnets.

The default subnet masks are as follows:

Class A - 255.0.0.0 - 11111111.00000000.00000000.00000000
Class B - 255.255.0.0 - 11111111.11111111.00000000.00000000
Class C - 255.255.255.0 - 11111111.11111111.11111111.00000000
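The binary forms above can be reproduced from the dotted masks; a small sketch (the function name is illustrative):

```python
# Render a dotted-decimal mask as the dotted binary notation used above.
def mask_to_binary(mask: str) -> str:
    return ".".join(format(int(octet), "08b") for octet in mask.split("."))

print(mask_to_binary("255.255.0.0"))
```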

Additional bits can be added to the default subnet mask for a given class to further subnet, or break down, a network. When a bitwise logical AND operation is performed between the subnet mask and the IP address, the result defines the subnet address (also called the network address or network number). There are some restrictions on the subnet address. Node addresses of all "0"s and all "1"s are reserved for specifying the local network (when a host does not know its network address) and all hosts on the network (the broadcast address), respectively. This also applies to subnets: a subnet address cannot be all "0"s or all "1"s, which also implies that a 1-bit subnet mask is not allowed. This restriction exists because older standards enforced it. Recent standards that allow the use of these subnets have superseded the older ones, but many "legacy" devices do not support the newer standards. If you are operating in a controlled environment, such as a lab, you can safely use these restricted subnets.
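The bitwise AND between address and mask, and the breakdown of a network into subnets, can both be sketched with Python's standard ipaddress module (the addresses and the /26 split are arbitrary examples):

```python
import ipaddress

# Subnet address = IP address AND subnet mask.
# A class C mask extended by 2 bits (255.255.255.192, i.e. /26):
iface = ipaddress.ip_interface("192.168.1.130/255.255.255.192")
print(iface.network)   # the subnet this host belongs to

# Breaking a class C network into four /26 subnets:
for subnet in ipaddress.ip_network("192.168.1.0/24").subnets(prefixlen_diff=2):
    print(subnet)
```

The module applies the same AND operation described above internally, which is why the host 192.168.1.130 lands in the 192.168.1.128/26 subnet.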

Advantages and Disadvantages of Computer Networks


Computer networks are a vital part of any organization these days. Read on to know about the major advantages and disadvantages of computer networks.



A computer network is basically a connection of computers and resources like printers, scanners, etc. Here are some of the advantages and disadvantages of computer networks.

Advantages of Computer Networks


Following are some of the advantages of computer networks:

File Sharing: The major advantage of a computer network is that it allows file sharing and remote file access. A person sitting at one workstation of a network can easily see the files present on another workstation, provided he is authorized to do so. It saves the time that is wasted in copying a file from one system to another using a storage device. In addition, many people can access or update the information stored in a database, keeping it up-to-date and accurate.

Resource Sharing: Resource sharing is also an important benefit of a computer network.


For example, if there are four people in a family, each having their own computer, they will require four modems (for the Internet connection) and four printers, if they want to use the resources at the same time. A computer network, on the other hand, provides a cheaper alternative by the provision of resource sharing. In this way, all the four computers can be interconnected, using a network, and just one modem and printer can efficiently provide the services to all four members. The facility of shared folders can also be availed by family members.

Increased Storage Capacity: As there is more than one computer on a network that can easily share files, the issue of storage capacity gets resolved to a great extent. A standalone computer might fall short of storage memory, but when many computers are on a network, the memory of the other computers can be used in such a case. One can also set up a storage server on the network in order to have a huge storage capacity.


Increased Cost Efficiency: There are many software packages available in the market which are costly and take time to install. Computer networks resolve this issue, as the software can be stored or installed on a system or a server and used by the different workstations.

Disadvantages of Computer Networks


Following are some of the major disadvantages of computer networks:

Security Issues: One of the major drawbacks of computer networks is the security issues involved. If a computer is standalone, physical access becomes necessary for any kind of data theft. However, if a computer is on a network, a hacker can get unauthorized access by using different tools. In the case of big organizations, various network security software packages are used to prevent the theft of any confidential and classified data.

Rapid Spread of Computer Viruses: If any computer system in a network gets affected by a computer virus, there is a possible threat of the other systems getting affected too. Viruses spread easily on a network because of the interconnectivity of workstations. Such spread can be dangerous if the computers hold important databases which can get corrupted by the virus.

Expensive Set Up: The initial setup cost of a computer network can be high, depending on the number of computers to be connected. Costly devices like routers, switches, hubs, etc., can add to the bill of a person trying to install a computer network. He will also have to buy NICs (Network Interface Cards) for each of the workstations, in case they are not built in.

Dependency on the Main File Server: If the main file server of a computer network breaks down, the system becomes useless. In the case of big networks, the file server should be a powerful computer, which often makes it expensive.
