November 15, 2001 marked the 30th anniversary of the microprocessor, and in those 30 years processor speed has increased more than 18,500 times (from 0.108MHz to 2GHz). The 4004 was introduced on November 15, 1971 and originally ran at a clock speed of 108KHz (108,000 cycles per second, or just over one-tenth of a megahertz). The 4004 contained 2,300 transistors and was built on a 10-micron process. This means that each line, trace, or transistor could be spaced about 10 microns (millionths of a meter) apart. Data was transferred 4 bits at a time, and the maximum addressable memory was only 640 bytes. The 4004 was designed for use in a calculator but proved to be useful for many other functions because of its inherent programmability. For example, the 4004 was used in traffic light controllers, blood analyzers, and even in the NASA Pioneer 10 deep space probe!

In April 1972, Intel released the 8008 processor, which originally ran at a clock speed of 200KHz (0.2MHz). The 8008 processor contained 3,500 transistors and was built on the same 10-micron process as the previous processor. The big change in the 8008 was that it had an 8-bit data bus, which meant it could move data 8 bits at a time, twice as much as the previous chip. It could also address more memory, up to 16KB. This chip was primarily used in dumb terminals and general-purpose calculators.

The next chip in the lineup was the 8080, introduced in April 1974, running at a clock rate of 2MHz. Due mostly to the faster clock rate, the 8080 processor had 10 times the performance of the 8008. The 8080 chip contained 6,000 transistors and was built on a 6-micron process. Similar to the previous chip, the 8080 had an 8-bit data bus, so it could transfer 8 bits of data at a time. The 8080 could address up to 64KB of memory, significantly more than the previous chip.

It was the 8080 that helped start the PC revolution because this was the processor chip used in what is generally regarded as the first personal computer, the Altair 8800. The CP/M operating system was written for the 8080 chip, and Microsoft was founded and delivered its first product: Microsoft BASIC for the Altair. These initial tools provided the foundation for a revolution in software because thousands of programs were written to run on this platform.

In fact, the 8080 became so popular that it was cloned. A company called Zilog formed in late 1975, joined by several ex-Intel 8080 engineers. In July 1976, it released the Z-80 processor, which was a vastly improved version of the 8080. It was not pin compatible but instead combined functions such as the memory interface and RAM refresh circuitry, which enabled cheaper and simpler systems to be designed. The Z-80 also incorporated a superset of 8080 instructions, meaning it could run all 8080 programs. It also included new instructions and new internal registers, so software designed for the Z-80 would not necessarily run on the older 8080. The Z-80 ran initially at 2.5MHz (later versions ran up to 10MHz) and contained 8,500 transistors. The Z-80 could access 64KB of memory.

Radio Shack selected the Z-80 for the TRS-80 Model 1, its first PC. The chip was also used by many other pioneering systems, including the Osborne and Kaypro machines. Other companies followed, and soon the Z-80 was the standard processor for systems running the CP/M operating system and the popular software of the day.

Intel released the 8085, its follow-up to the 8080, in March 1976. Even though it predated the Z-80 by several months, it never achieved the popularity of the Z-80 in personal computer systems. It was popular as an embedded controller, finding use in scales and other computerized equipment. The 8085 ran at 5MHz and contained 6,500 transistors. It was built on a 3-micron process and incorporated an 8-bit data bus.

Along different architectural lines, MOS Technologies introduced the 6502 in 1976. This chip was designed by several ex-Motorola engineers who had worked on Motorola's first processor, the 6800. The 6502 was an 8-bit processor like the 8080, but it sold for around $25, whereas the 8080 cost about $300 when it was introduced. The price appealed to Steve Wozniak, who placed the chip in his Apple I and Apple II designs. The chip was also used in systems by Commodore and other system manufacturers. The 6502 and its successors were also used in game consoles, including the original Nintendo Entertainment System (NES) among others. Motorola went on to create the 68000 series, which became the basis for the Apple Macintosh line of computers. Today those systems use the PowerPC chip, also by Motorola and a successor to the 68000 series.

All these previous chips set the stage for the first PC processors. Intel introduced the 8086 in June 1978. The 8086 chip brought with it the original x86 instruction set that is still present in current x86-compatible chips such as the Pentium 4 and AMD Athlon. A dramatic improvement over the previous chips, the 8086 was a full 16-bit design with 16-bit internal registers and a 16-bit data bus. This meant that it could work on 16-bit numbers and data internally and also transfer 16 bits at a time in and out of the chip. The 8086 contained 29,000 transistors and initially ran at up to 5MHz.

The chip also used 20-bit addressing, so it could directly address up to 1MB of memory. Although not directly backward compatible with the 8080, the 8086 instructions and language were very similar and enabled older programs to quickly be ported over to run. This later proved important to help jumpstart the PC software revolution with recycled CP/M (8080) software.
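
As a quick illustration of how address-bus width sets these memory limits, here is a minimal Python sketch (illustrative only; the 14-bit width shown for the 8008 is an assumption inferred from its 16KB limit, not a figure stated above):

# Maximum directly addressable memory is 2 to the power of the address width.
# The 14-bit width for the 8008 is an assumption inferred from its 16KB limit.
address_bits = {
    "8008": 14,   # 16KB
    "8080": 16,   # 64KB
    "8086": 20,   # 1MB
}

for chip, bits in address_bits.items():
    max_bytes = 2 ** bits
    print(f"{chip}: {bits}-bit addressing -> {max_bytes:,} bytes ({max_bytes // 1024}KB)")

Note that 2 to the 20th power is 1,048,576 bytes, which is why 20-bit addressing tops out at exactly 1MB.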

Although the 8086 was a great chip, it was expensive at the time and, more importantly, required expensive 16-bit board designs and infrastructure to support it. To help bring costs down, in 1979 Intel released what some called a crippled version of the 8086, called the 8088. The 8088 processor used the same internal core as the 8086, had the same 16-bit registers, and could address the same 1MB of memory, but the external data bus was reduced to 8 bits. This enabled support chips from the older 8-bit 8085 to be used, and far less expensive boards and systems could be made. These reasons are why IBM chose the 8088 instead of the 8086 for the first PC.
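
To see what the narrower external bus means in practice, the following hypothetical Python sketch splits a 16-bit word into the separate transfers a given bus width requires; it ignores real bus timing and control signals and is not a description of Intel's actual bus protocol:

def bus_transfers(word, bus_width_bits):
    # Split a 16-bit word into the chunks a narrower data bus must move one at a time.
    mask = (1 << bus_width_bits) - 1
    chunks = []
    remaining = 16
    while remaining > 0:
        chunks.append(word & mask)
        word >>= bus_width_bits
        remaining -= bus_width_bits
    return chunks

value = 0x12AB
print([hex(c) for c in bus_transfers(value, 16)])  # 8086-style bus: ['0x12ab']
print([hex(c) for c in bus_transfers(value, 8)])   # 8088-style bus: ['0xab', '0x12']

The 8088 therefore needs two bus cycles where the 8086 needs one, which is the performance price paid for the cheaper 8-bit board design.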

This decision would affect history in several ways. The 8088 was fully software compatible with the 8086, so it could run 16-bit software. Also, because the instruction set was very similar to the previous 8085 and 8080, programs written for those older chips could be quickly and easily modified to run. This enabled a large library of programs to be quickly released for the IBM PC, thus helping it become a success. The overwhelming blockbuster success of the IBM PC left in its wake the legacy of requiring backward compatibility with it. To maintain the momentum, Intel has pretty much been forced to maintain backward compatibility with the 8088/8086 in most of the processors it has released since then.

To date, backward compatibility has been maintained, but innovating and adding new features has still been possible. One major change in processors was the move from the 16-bit internal architecture of the 286 and earlier processors to the 32-bit internal architecture of the 386 and later chips, which Intel calls IA-32 (Intel Architecture, 32-bit). Intel's 32-bit architecture dates to 1985, and it took a full 10 years for both a partial 32-bit mainstream OS (Windows 95) as well as a full 32-bit OS requiring 32-bit drivers (Windows NT) to surface, and another 6 years for the mainstream to shift to a fully 32-bit environment for the OS and drivers (Windows XP). That's a total of 16 years from the release of 32-bit computing hardware to the full adoption of 32-bit computing in the mainstream with supporting software. I'm sure you can appreciate that 16 years is a lifetime in technology.

Now we are in the midst of another major architectural jump, as Intel and AMD are in the process of moving from 32-bit to 64-bit computing for servers, desktop PCs, and even portable PCs. Intel had introduced the IA-64 (Intel Architecture, 64-bit) in the form of the Itanium and Itanium 2 processors several years earlier, but this standard was something completely new and not an extension of the existing 32-bit technology. IA-64 was first announced in 1994 as a CPU development project with Intel and HP (codenamed Merced), and the first technical details were made available in October 1997. The result was the IA-64 architecture and Itanium chip, which was officially released in 2001.

The fact that the IA-64 architecture is not an extension of IA-32 but is instead a whole new and completely different architecture is fine for non-PC environments such as servers (for which IA-64 was designed), but the PC market has always hinged on backward compatibility. Even though emulating IA-32 within IA-64 is possible, such emulation and support is slow.

With the door now open, AMD seized this opportunity to develop 64-bit extensions to IA-32, which it calls AMD64 (originally known as x86-64). Intel eventually released its own set of 64-bit extensions, which it calls EM64T or IA-32e mode. As it turns out, the Intel extensions are almost identical to the AMD extensions, meaning they are software compatible. It seems for the first time that Intel has unarguably followed AMD's lead in the development of PC architecture.

To make 64-bit computing a reality, 64-bit operating systems and 64-bit drivers are also needed. Microsoft began providing trial versions of Windows XP Professional x64 Edition (which supports AMD64 and EM64T) in April 2005, and major computer vendors now offer systems with Windows XP Professional x64 already installed. Major hardware vendors have also developed 64-bit drivers for current and recent hardware. Linux is also available in 64-bit-compatible versions, making the move to 64-bit computing possible.
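
If you want to check whether the environment you are running in is already 64-bit, here is a small Python sketch using only the standard library; it reports what the OS and interpreter expose, not whether the CPU silicon supports AMD64 or EM64T:

import platform
import sys

# platform.machine() reports the architecture name the OS exposes,
# e.g. 'x86_64' on Linux or 'AMD64' on Windows.
print("Machine architecture:", platform.machine())

# A 64-bit Python build has a pointer size larger than 32 bits.
print("64-bit interpreter:", sys.maxsize > 2**32)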

The latest development is the introduction of dual-core processors from both Intel and AMD. Dual-core processors have two full CPU cores operating off of one CPU package, in essence enabling a single processor to perform the work of two processors. Although dual-core processors don't make games (which use single execution threads and are usually not run with other applications) play faster, dual-core processors, like multiple single-core processors, split up the workload caused by running multiple applications at the same time. If you've ever tried to scan for viruses while checking email or running another application, you've probably seen how running multiple applications can bring even the fastest processor to its knees. With dual-core processors available from both Intel and AMD, your ability to get more work done in less time by multitasking is greatly enhanced. Current dual-core processors also support AMD64 or EM64T 64-bit extensions, enabling you to enjoy both dual-core and 64-bit computing's advantages.
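
As a minimal sketch of how a dual-core (or dual-processor) machine splits up that kind of workload, the following Python example, using the standard multiprocessing module, hands two CPU-heavy tasks to two worker processes so each can run on its own core:

from multiprocessing import Pool

def busy_task(n):
    # Stand-in for a CPU-heavy job such as a virus scan or media encode.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # With two worker processes, a dual-core CPU can run both tasks at once
    # instead of time-slicing them on a single core.
    with Pool(processes=2) as pool:
        results = pool.map(busy_task, [5_000_000, 5_000_000])
    print(results)

On a single-core machine the same script still runs, but the two tasks simply take turns; the dual-core advantage shows up as roughly halved wall-clock time for this kind of independent work.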

PCs have certainly come a long way. The original 8088 processor used in the first PC contained 29,000 transistors and ran at 4.77MHz. The AMD Athlon 64FX has more than 105 million transistors, while the Pentium 4 670 (Prescott core) runs at 3.8GHz and has 169 million transistors thanks to its 2MB L2 cache. Dual-core processors, which include two processor cores and cache memory in a single physical chip, have even higher transistor counts: The Intel Pentium D processor has 230 million transistors, and the AMD Athlon 64 X2 includes over 233 million transistors. As dual-core processors and large L2 caches continue to be used in more and more designs, look for transistor counts and real-world performance to continue to increase. And the progress doesn't stop there because, according to Moore's Law, processing speed and transistor counts are doubling every 1.5 to 2 years.
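
As a rough sanity check of that claim against the clock-speed figures at the start of this section (0.108MHz in 1971 to 2GHz in 2001), this short Python calculation, illustrative only, works out the implied doubling time:

import math

growth_factor = 2000 / 0.108          # 2GHz expressed in MHz over 0.108MHz, about 18,500x
years = 2001 - 1971                   # 30 years between the 4004 and a 2GHz processor

doublings = math.log2(growth_factor)  # about 14.2 doublings
doubling_time = years / doublings     # about 2.1 years per doubling

print(f"Growth factor: {growth_factor:,.0f}x")
print(f"Implied doubling time: {doubling_time:.1f} years")

Clock speed alone works out to roughly two years per doubling, consistent with the upper end of the 1.5-to-2-year range cited above.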
