
Intel 8086


Background

In 1972, Intel launched the 8008, the first 8-bit microprocessor.[note 2] It implemented an instruction set designed by Datapoint Corporation with programmable CRT terminals in mind, which also proved to be fairly general purpose. The device needed several additional ICs to produce a functional computer, in part because it was packaged in a small 18-pin "memory package", which ruled out the use of a separate address bus (Intel was primarily a DRAM manufacturer at the time).
Two years later, Intel launched the 8080,[note 3] employing the new 40-pin DIL packages originally developed for calculator ICs to enable a separate address bus. It had an extended instruction set that was source- (not binary-) compatible with the 8008 and also included some 16-bit instructions to make programming easier. The 8080 device, often described as "the first truly useful microprocessor",[citation needed] was eventually replaced by the depletion-load based 8085 (1977), which required only a single +5 V power supply instead of the three different operating voltages of earlier chips.[note 4] Other well-known 8-bit microprocessors that emerged during these years were the Motorola 6800 (1974), General Instrument PIC16X (1975), MOS Technology 6502 (1975), Zilog Z80 (1976), and Motorola 6809 (1978).

The first x86 design

[Image: Intel 8086 CPU die]

The 8086 project started in May 1976 and was originally intended as a temporary substitute for the ambitious and delayed iAPX 432 project. It was an attempt to draw attention from the less-delayed 16- and 32-bit processors of other manufacturers (such as Motorola, Zilog, and National Semiconductor) and at the same time to counter the threat from the Zilog Z80 (designed by former Intel employees), which became very successful. Both the architecture and the physical chip were therefore developed rather quickly by a small group of people, using the same basic microarchitecture elements and physical implementation techniques as the slightly older 8085 (for which the 8086 would also serve as a continuation).
Marketed as source compatible, the 8086 was designed to allow assembly language for the 8008, 8080, or 8085 to be automatically converted into equivalent (suboptimal) 8086 source code, with little or no hand-editing. The programming model and instruction set were (loosely) based on the 8080 in order to make this possible. However, the 8086 design was expanded to support full 16-bit processing, instead of the fairly basic 16-bit capabilities of the 8080/8085. New kinds of instructions were added as well; full support for signed integers, base+offset addressing, and self-repeating operations were akin to the Z80 design[3] but were all made slightly more general in the 8086. Instructions directly supporting nested ALGOL-family languages such as Pascal and PL/M were also added. According to principal architect Stephen P. Morse, this was a result of a more software-centric approach than in the design of earlier Intel processors (the designers had experience working with compiler implementations). Other enhancements included microcoded multiply and divide instructions and a bus structure better adapted to future coprocessors (such as the 8087 and 8089) and multiprocessor systems.
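The additions described above can be sketched with a few 8086 instructions (Intel syntax; the 8080-to-8086 register correspondence shown, such as HL→BX and A→AL, follows Intel's published translation scheme):

```asm
; 8080 source line           ; mechanically translated 8086 equivalent
; MOV A, M  (A <- [HL])      ->  MOV AL, [BX]    ; HL maps to BX, A to AL

; New 8086 capabilities with no direct 8080 counterpart:
MOV  AX, [BX+SI+4]   ; base + index + displacement addressing in one instruction
REP  MOVSB           ; self-repeating block copy: CX bytes from DS:SI to ES:DI
IMUL BX              ; microcoded signed multiply: DX:AX = AX * BX
RET  4               ; return and discard 4 bytes of stack parameters,
                     ; matching Pascal/PL/M-style calling conventions
```

This is an illustrative fragment rather than output of the actual translation tooling; it shows one representative instruction for each feature named in the text.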
The first revision of the instruction set and high-level architecture was ready after about three months,[note 5] and as almost no CAD tools were used, four engineers and 12 layout people worked on the chip simultaneously.[note 6] The 8086 took a little more than two years from idea to working product, which was considered rather fast for a complex design in 1976–1978.
The 8086 was sequenced[note 7] using a mixture of random logic[4] and microcode and was implemented using depletion-load nMOS circuitry with approximately 20,000 active transistors (29,000 counting all ROM and PLA sites). It was soon moved to a new, refined nMOS manufacturing process called HMOS (for High-performance MOS) that Intel originally developed for manufacturing fast static RAM products.[note 8] This was followed by HMOS-II and HMOS-III versions and, eventually, a fully static CMOS version for battery-powered devices, manufactured using Intel's CHMOS processes.[note 9] The original chip measured 33 mm² and the minimum feature size was 3.2 µm.
The architecture was defined by Stephen P. Morse, with assistance from Bruce Ravenel (the architect of the 8087) in refining the final revisions. Logic designers Jim McKevitt and John Bayliss were the lead engineers of the hardware-level development team,[note 10] and Bill Pohlman was the manager for the project. The legacy of the 8086 endures in the basic instruction set of today's personal computers and servers; the 8086 also lent its last two digits to later extended versions of the design, such as the Intel 286 and the Intel 386, all of which eventually became known as the x86 family. (Another reference is that the PCI vendor ID for Intel devices is 8086h.)
