Instruction Set Architecture - Wikipedia
In general, an ISA defines the supported instructions, data types, registers, the
hardware support for managing main memory, fundamental features (such as memory
consistency, addressing modes, and virtual memory), and the input/output
model of a family of implementations of the ISA.
The binary compatibility that they provide makes ISAs one of the most fundamental
abstractions in computing.
Overview
An instruction set architecture is distinguished from a microarchitecture, which is
the set of processor design techniques used, in a particular processor, to implement
the instruction set. Processors with different microarchitectures can share a
common instruction set. For example, the Intel Pentium and the AMD Athlon
implement nearly identical versions of the x86 instruction set, but they have
radically different internal designs.
The concept of an architecture, distinct from the design of a specific machine, was
developed by Fred Brooks at IBM during the design phase of System/360.
Some virtual machines that support bytecode as their ISA, such as Smalltalk, the
Java virtual machine, and Microsoft's Common Language Runtime, implement this
by translating the bytecode for commonly used code paths into native machine code.
In addition, these virtual machines execute less frequently used code paths by
interpretation (see: Just-in-time compilation). Transmeta implemented the x86
instruction set atop VLIW processors in this fashion.
Classification of ISAs
An ISA may be classified in a number of different ways. A common classification is
by architectural complexity. A complex instruction set computer (CISC) has many
specialized instructions, some of which may only be rarely used in practical
programs. A reduced instruction set computer (RISC) simplifies the processor by
efficiently implementing only the instructions that are frequently used in programs,
while the less common operations are implemented as subroutines, having their
resulting additional processor execution time offset by infrequent use.[2]
Other types include very long instruction word (VLIW) architectures, and the closely
related long instruction word (LIW) and explicitly parallel instruction computing
(EPIC) architectures. These architectures seek to exploit instruction-level
parallelism with less hardware than RISC and CISC by making the compiler
responsible for instruction issue and scheduling.[3]
Architectures with even less complexity have been studied, such as the minimal
instruction set computer (MISC) and one-instruction set computer (OISC). These
are theoretically important types, but have not been commercialized.[4][5]
Instructions
Machine language is built up from discrete statements or instructions. On the
processing architecture, a given instruction may specify:
registers
literal/constant values
addressing modes used to access memory
Instruction types
Add, subtract, multiply, or divide the values of two registers, placing the result in
a register, possibly setting one or more condition codes in a status register.[6]
Increment or decrement in some ISAs, saving an operand fetch in trivial cases.
Perform bitwise operations, e.g., taking the conjunction and disjunction of
corresponding bits in a pair of registers, taking the negation of each bit in a
register.
Compare two values in registers (for example, to see if one is less, or if they are
equal).
Floating-point instructions for arithmetic on floating-point numbers.[6]
Coprocessor instructions
Complex instructions
Instruction encoding
On traditional architectures, an instruction includes an opcode that specifies the
operation to perform, such as add contents of memory to register, and zero or more
operand specifiers, which may specify registers, memory locations, or literal data.
The operand specifiers may have addressing modes determining their meaning or may
be in fixed fields. In very long instruction word (VLIW) architectures, which
include many microcode architectures, multiple simultaneous opcodes and operands
are specified in a single instruction.
[Figure: One instruction may have several fields, which identify the logical
operation, and may also include source and destination addresses and constant
values. This is the MIPS "Add Immediate" instruction, which allows selection of
source and destination registers and inclusion of a small constant.]
Some exotic instruction sets, such as transport triggered architectures (TTA), do
not have an opcode field, only operand(s).
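The field layout described above can be made concrete with a small sketch. The function name and register choices below are illustrative, not part of any toolchain; it decodes a 32-bit MIPS I-type word (such as "Add Immediate") into the opcode, two register fields, and the sign-extended 16-bit immediate:

```python
def decode_i_type(word):
    """Split a 32-bit MIPS I-type instruction into its fields."""
    opcode = (word >> 26) & 0x3F   # bits 31-26: operation
    rs     = (word >> 21) & 0x1F   # bits 25-21: source register
    rt     = (word >> 16) & 0x1F   # bits 20-16: destination register (for addi)
    imm    = word & 0xFFFF         # bits 15-0: 16-bit immediate
    if imm & 0x8000:               # sign-extend the immediate
        imm -= 0x10000
    return opcode, rs, rt, imm

# addi $t0, $t1, -5  ->  opcode 0x08, rs=9 ($t1), rt=8 ($t0), imm=-5
word = (0x08 << 26) | (9 << 21) | (8 << 16) | (0x10000 - 5)
print(decode_i_type(word))  # (8, 9, 8, -5)
```

Hardware performs the same field extraction in parallel at the decode stage rather than with sequential shifts.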
Most stack machines have "0-operand" instruction sets in which arithmetic and
logical operations lack any operand specifier fields; only instructions that push
operands onto the evaluation stack or that pop operands from the stack into
variables have operand specifiers. The instruction set carries out most ALU actions
with postfix (reverse Polish notation) operations that work only on the expression
stack, not on data registers or arbitrary main memory cells. This can be very
convenient for compiling high-level languages, because most arithmetic expressions
can be easily translated into postfix notation.[7]
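The ease of translation mentioned above can be sketched in a few lines. This is a minimal, hypothetical model of a 0-operand stack machine evaluating a postfix token stream; the function name and token format are assumptions for illustration:

```python
def eval_postfix(tokens, variables):
    """Evaluate a postfix (RPN) token stream on an expression stack,
    the way a 0-operand stack machine executes compiled arithmetic."""
    stack = []
    for tok in tokens:
        if tok == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif tok == '*':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:                       # a push: load a variable's value
            stack.append(variables[tok])
    return stack.pop()

# (a + b) * c translates to the postfix sequence: a b + c *
print(eval_postfix(['a', 'b', '+', 'c', '*'], {'a': 2, 'b': 3, 'c': 4}))  # 20
```

Note that the arithmetic tokens carry no operand specifiers at all; only the variable pushes name anything.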
Conditional instructions often have a predicate field—a few bits that encode the
specific condition to cause an operation to be performed rather than not performed.
For example, a conditional branch instruction will transfer control if the condition is
true, so that execution proceeds to a different part of the program, and not transfer
control if the condition is false, so that execution continues sequentially. Some
instruction sets also have conditional moves: the move is executed, and the data
stored in the target location, if the condition is true, but not executed, and the
target location left unmodified, if the condition is false. Similarly, IBM
z/Architecture has a conditional store instruction. A few instruction sets include a
predicate field in every instruction; this is called branch predication.
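The conditional-move semantics described above can be modeled in one line. This is a sketch, not any architecture's definition; the function name is hypothetical:

```python
def cmov(condition, target, source):
    """Conditional move: return the new value of the target register.
    The move happens only when the condition is true; otherwise the
    target keeps its old value."""
    return source if condition else target

r1, r2 = 10, 99
r1 = cmov(r2 > 50, r1, r2)   # condition true: r1 becomes 99
print(r1)  # 99
```

Because no branch is taken either way, hardware conditional moves avoid branch-misprediction penalties, which is their main attraction.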
Number of operands
(In the examples that follow, a, b, and c are (direct or calculated) addresses referring
to memory cells, while reg1 and so on refer to machine registers.)
0-operand (zero-address machines), so-called stack machines: all arithmetic
operations take place using the top one or two positions on the stack: push a,
push b, add, pop c. C = A+B needs four instructions.[9] For stack machines, the
terms "0-operand" and "zero-address" apply to arithmetic instructions, but not to
all instructions, as 1-operand push and pop instructions are used to access
memory.
1-operand (one-address machines), so-called accumulator machines, include
early computers and many small microcontrollers: most instructions specify a
single right operand (that is, constant, a register, or a memory location), with the
implicit accumulator as the left operand (and the destination if there is one):
load a, add b, store c.
Due to the large number of bits needed to encode the three registers of a 3-operand
instruction, RISC architectures that have 16-bit instructions are invariably 2-
operand designs, such as the Atmel AVR, TI MSP430, and some versions of ARM
Thumb. RISC architectures that have 32-bit instructions are usually 3-operand
designs, such as the ARM, AVR32, MIPS, Power ISA, and SPARC architectures.
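The accumulator style above (load a, add b, store c) can be sketched as a toy interpreter. The instruction mnemonics and dictionary-backed memory are illustrative assumptions, not any real machine's encoding:

```python
def run_accumulator(program, memory):
    """Tiny 1-operand (accumulator) machine: each instruction names one
    memory operand; the accumulator is the implicit other operand and
    destination."""
    acc = 0
    for op, addr in program:
        if op == 'load':
            acc = memory[addr]
        elif op == 'add':
            acc += memory[addr]
        elif op == 'store':
            memory[addr] = acc
    return memory

# C = A + B on an accumulator machine: load a, add b, store c
mem = {'a': 2, 'b': 3, 'c': 0}
run_accumulator([('load', 'a'), ('add', 'b'), ('store', 'c')], mem)
print(mem['c'])  # 5
```

A 3-operand machine would do the same work in a single instruction naming all three locations, at the cost of wider operand fields.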
Register pressure
Register pressure measures the availability of free registers at any point in time
during the program execution. Register pressure is high when a large number of the
available registers are in use; thus, the higher the register pressure, the more often
the register contents must be spilled into memory. Increasing the number of
registers in an architecture decreases register pressure but increases the cost.[11]
While embedded instruction sets such as Thumb suffer from extremely high register
pressure because they have small register sets, general-purpose RISC ISAs like MIPS
and Alpha enjoy low register pressure. CISC ISAs like x86-64 offer low register
pressure despite having smaller register sets. This is due to the many addressing
modes and optimizations (such as sub-register addressing, memory operands in
ALU instructions, absolute addressing, PC-relative addressing, and register-to-
register spills) that CISC ISAs offer.[12]
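The relationship between register-file size and spilling can be shown with a deliberately crude model: at each program point, any live values beyond the number of available registers must sit in memory. This is a toy illustration, not a real allocator:

```python
def spills_needed(live_counts, num_registers):
    """For each program point, count the values that cannot be kept in
    registers: whenever more values are live than registers exist, the
    excess must be spilled to memory (a toy model of register pressure)."""
    return [max(0, live - num_registers) for live in live_counts]

# The same program under a small and a large register file: fewer
# registers means more spilling, i.e. higher register pressure.
live = [2, 5, 9, 7, 3]           # live values at five program points
print(spills_needed(live, 4))    # [0, 1, 5, 3, 0]
print(spills_needed(live, 16))   # [0, 0, 0, 0, 0]
```

Real allocators are far more subtle (lifetimes overlap partially, and CISC memory operands hide some pressure), but the direction of the effect is the same.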
Instruction length
The size or length of an instruction varies widely, from as little as four bits in some
microcontrollers to many hundreds of bits in some VLIW systems. Processors used
in personal computers, mainframes, and supercomputers have minimum
instruction sizes between 8 and 64 bits. The longest possible instruction on x86 is 15
bytes (120 bits).[13] Within an instruction set, different instructions may have
different lengths. In some architectures, notably most reduced instruction set
computers (RISC), instructions are a fixed length, typically corresponding with that
architecture's word size. In other architectures, instructions have variable length,
typically integral multiples of a byte or a halfword. Some, such as the ARM with
Thumb extension, have mixed variable encoding, that is, two fixed encodings,
usually 32-bit and 16-bit, where instructions cannot be mixed freely but must be
switched between on a branch (or exception boundary in ARMv8).
Code density
In early 1960s computers, main memory was expensive and very limited, even on
mainframes. Minimizing the size of a program to make sure it would fit in the
limited memory was often central. Thus the size of the instructions needed to
perform a particular task, the code density, was an important characteristic of any
instruction set. It remained important on the initially-tiny memories of
minicomputers and then microprocessors. Density remains important today, for
smartphone applications, applications downloaded into browsers over slow Internet
connections, and in ROMs for embedded applications. A more general advantage of
increased density is improved effectiveness of caches and instruction prefetch.
Computers with high code density often have complex instructions for procedure
entry, parameterized returns, loops, etc. (therefore retroactively named Complex
Instruction Set Computers, CISC). However, more typical, or frequent, "CISC"
instructions merely combine a basic ALU operation, such as "add", with the access
of one or more operands in memory (using addressing modes such as direct,
indirect, indexed, etc.). Certain architectures may allow two or three operands
(including the result) directly in memory or may be able to perform functions such
as automatic pointer increment, etc. Software-implemented instruction sets may
have even more complex and powerful instructions.
Certain embedded RISC ISAs like Thumb and AVR32 typically exhibit very high
density owing to a technique called code compression. This technique packs two 16-
bit instructions into one 32-bit word, which is then unpacked at the decode stage
and executed as two instructions.[14]
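The packing scheme just described amounts to simple bit manipulation. A minimal sketch (function names are illustrative):

```python
def pack(hi16, lo16):
    """Pack two 16-bit instructions into one 32-bit word, as in the
    code-compression scheme described above."""
    return ((hi16 & 0xFFFF) << 16) | (lo16 & 0xFFFF)

def unpack(word):
    """At the decode stage, the word is split back into two instructions."""
    return (word >> 16) & 0xFFFF, word & 0xFFFF

w = pack(0x1234, 0xABCD)
print(hex(w))      # 0x1234abcd
print(unpack(w))   # (4660, 43981), i.e. (0x1234, 0xABCD)
```

The memory system fetches one 32-bit word per access, but the pipeline issues two instructions from it, which is where the density gain comes from.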
Minimal instruction set computers (MISC) are commonly a form of stack machine,
where there are few separate instructions (8–32), so that multiple instructions can
be fit into a single machine word. These types of cores often take little silicon to
implement, so they can be easily realized in an FPGA or in a multi-core form. The
code density of MISC is similar to the code density of RISC; the increased
instruction density is offset by requiring more of the primitive instructions to do a
task.[15]
There has been research into executable compression as a mechanism for improving
code density. The mathematics of Kolmogorov complexity describes the challenges
and limits of this.
Representation
The instructions constituting a program are rarely specified using their internal,
numeric form (machine code); they may be specified by programmers using an
assembly language or, more commonly, may be generated from high-level
programming languages by compilers.[16]
Design
The design of instruction sets is a complex issue. There have been two stages in
the history of the microprocessor. The first was the CISC (Complex Instruction Set
Computer), which had many different instructions. In the 1970s, however,
organizations such as IBM did research and found that many instructions in the set
could be eliminated. The result
was the RISC (Reduced Instruction Set Computer), an architecture that uses a
smaller set of instructions. A simpler instruction set may offer the potential for
higher speeds, reduced processor size, and reduced power consumption. However, a
more complex set may optimize common operations, improve memory and cache
efficiency, or simplify programming.
Some instruction set designers reserve one or more opcodes for some kind of system
call or software interrupt. For example, MOS Technology 6502 uses 00H, Zilog Z80
uses the eight codes C7, CF, D7, DF, E7, EF, F7, FFH,[17] while Motorola 68000
uses codes in the range A000..AFFFH.
Fast virtual machines are much easier to implement if an instruction set meets the
Popek and Goldberg virtualization requirements.
1. Some computer designs "hardwire" the complete instruction set decoding and
sequencing (just like the rest of the microarchitecture).
2. Other designs employ microcode routines or tables (or both) to do this—typically
as on-chip ROMs or PLAs or both (although separate RAMs and ROMs have
been used historically). The Western Digital MCP-1600 is an older example,
using a dedicated, separate ROM for microcode.
Some designs use a combination of hardwired design and microcode for the control
unit.
Some CPU designs use a writable control store—they compile the instruction set to a
writable RAM or flash inside the CPU (such as the Rekursiv processor and the Imsys
Cjip),[18] or an FPGA (reconfigurable computing).
Often the details of the implementation have a strong influence on the particular
instructions selected for the instruction set. For example, many implementations of
the instruction pipeline only allow a single memory load or memory store per
instruction, leading to a load–store architecture (RISC). For another example, some
early ways of implementing the instruction pipeline led to a delay slot.
The demands of high-speed digital signal processing have pushed in the opposite
direction—forcing instructions to be implemented in a particular way. For example,
to perform digital filters fast enough, the MAC instruction in a typical digital signal
processor (DSP) must use a kind of Harvard architecture that can fetch an
instruction and two data words simultaneously, and it requires a single-cycle
multiply–accumulate multiplier.
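The role of the MAC instruction is easiest to see in a direct-form FIR filter, where every output sample is a chain of multiply-accumulate steps. This is a reference sketch in plain Python (names are illustrative); a DSP would execute each tap's multiply and add as one single-cycle MAC:

```python
def fir_filter(samples, coeffs):
    """Direct-form FIR filter: each output sample is a chain of
    multiply-accumulate (MAC) steps over the filter taps."""
    n = len(coeffs)
    out = []
    for i in range(n - 1, len(samples)):
        acc = 0
        for j, c in enumerate(coeffs):
            acc += c * samples[i - j]   # one MAC per tap
        out.append(acc)
    return out

# 3-tap moving-sum filter
print(fir_filter([1, 2, 3, 4, 5], [1, 1, 1]))  # [6, 9, 12]
```

The Harvard-style dual data fetch mentioned above exists precisely so that the coefficient and the sample for each MAC can be read in the same cycle.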
See also
Comparison of instruction set architectures
Computer architecture
Processor design
Compressed instruction set
Emulator
Simulation
Instruction set simulator
OVPsim full systems simulator providing ability to create/model/emulate any
instruction set using C and standard APIs
Register transfer language (RTL)
Micro-operation
References
1. Pugh, Emerson W.; Johnson, Lyle R.; Palmer, John H. (1991). IBM's 360 and
Early 370 Systems (https://archive.org/details/ibms360early370s0000pugh). MIT
Press. ISBN 0-262-16123-0.
2. Crystal Chen; Greg Novick; Kirk Shimano (December 16, 2006). "RISC
Architecture: RISC vs. CISC" (http://cs.stanford.edu/people/eroberts/courses/soc
o/projects/risc/risccisc/). cs.stanford.edu. Retrieved February 21, 2015.
3. Schlansker, Michael S.; Rau, B. Ramakrishna (February 2000). "EPIC: Explicitly
Parallel Instruction Computing". Computer. 33 (2). doi:10.1109/2.820037 (https://
doi.org/10.1109%2F2.820037).
4. Shaout, Adnan; Eldos, Taisir (Summer 2003). "On the Classification of Computer
Architecture" (https://www.researchgate.net/publication/267239549_On_the_Cla
ssification_of_Computer_Architecture). International Journal of Science and
Technology. 14: 3. Retrieved March 2, 2023.
5. Gilreath, William F.; Laplante, Phillip A. (December 6, 2012). Computer
Architecture: A Minimalist Perspective. Springer Science+Business Media.
ISBN 978-1-4615-0237-1.
6. Hennessy & Patterson 2003, p. 108.
7. Durand, Paul. "Instruction Set Architecture (ISA)" (http://www.cs.kent.edu/~duran
d/CS0/Notes/Chapter05/isa.html). Introduction to Computer Science CS 0.
8. Hennessy & Patterson 2003, p. 92.
9. Hennessy & Patterson 2003, p. 93.
10. Cocke, John; Markstein, Victoria (January 1990). "The evolution of RISC
technology at IBM" (https://www.cis.upenn.edu/~milom/cis501-Fall11/papers/coc
ke-RISC.pdf) (PDF). IBM Journal of Research and Development. 34 (1): 4–11.
doi:10.1147/rd.341.0004 (https://doi.org/10.1147%2Frd.341.0004). Retrieved
2022-10-05.
11. Page, Daniel (2009). "11. Compilers". A Practical Introduction to Computer
Architecture. Springer. p. 464. Bibcode:2009pica.book.....P (https://ui.adsabs.har
vard.edu/abs/2009pica.book.....P). ISBN 978-1-84882-255-9.
12. Venkat, Ashish; Tullsen, Dean M. (2014). Harnessing ISA Diversity: Design of a
Heterogeneous-ISA Chip Multiprocessor (http://dl.acm.org/citation.cfm?id=2665692
). 41st Annual International Symposium on Computer Architecture.
13. "Intel® 64 and IA-32 Architectures Software Developer's Manual" (https://www.in
tel.com/content/www/us/en/developer/articles/technical/intel-sdm.html). Intel
Corporation. Retrieved 5 October 2022.
14. Weaver, Vincent M.; McKee, Sally A. (2009). Code density concerns for new
architectures. IEEE International Conference on Computer Design.
CiteSeerX 10.1.1.398.1967 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=
10.1.1.398.1967). doi:10.1109/ICCD.2009.5413117 (https://doi.org/10.1109%2FI
CCD.2009.5413117).
15. "RISC vs. CISC" (https://cs.stanford.edu/people/eroberts/courses/soco/projects/ri
sc/risccisc/). cs.stanford.edu. Retrieved 2021-12-18.
16. Hennessy & Patterson 2003, p. 120.
17. Ganssle, Jack (February 26, 2001). "Proactive Debugging" (https://www.embedd
ed.com/electronics-blogs/break-points/4023293/Proactive-Debugging).
embedded.com.
18. "Great Microprocessors of the Past and Present (V 13.4.0)" (http://cpushack.net/
CPU/cpu7.html). cpushack.net. Retrieved 2014-07-25.
Further reading
Bowen, Jonathan P. (July–August 1985). "Standard Microprocessor
Programming Cards". Microprocessors and Microsystems. 9 (6): 274–290.
doi:10.1016/0141-9331(85)90116-4 (https://doi.org/10.1016%2F0141-9331%288
5%2990116-4).
Hennessy, John L.; Patterson, David A. (2003). Computer Architecture: A
Quantitative Approach (https://books.google.com/books?id=XX69oNsazH4C)
(Third ed.). Morgan Kaufmann Publishers. ISBN 1-55860-724-2. Retrieved
2023-03-04.
External links
Media related to Instruction set architectures at Wikimedia Commons
Programming Textfiles: Bowen's Instruction Summary Cards (http://www.textfiles
.com/programming/CARDS/)
Mark Smotherman's Historical Computer Designs Page (http://www.cs.clemson.
edu/~mark/hist.html)