
RISC vs CISC


CPU Processing Methods: RISC vs CISC

Introduction

What is RISC?

RISC, or Reduced Instruction Set Computer, is a type of microprocessor architecture that utilizes a small, highly optimized set of instructions, rather than the more specialized set of instructions often found in other types of architectures.

History

The first RISC projects came from IBM, Stanford, and UC Berkeley in the late 70s and early 80s. The IBM 801, Stanford MIPS, and Berkeley RISC 1 and 2 were all designed with a similar philosophy, which has become known as RISC. Certain design features have been characteristic of most RISC processors:

- One-cycle execution time: RISC processors have a CPI (clock cycles per instruction) of one. This is due to the optimization of each instruction on the CPU and a technique called pipelining.
- Pipelining: a technique that allows for the simultaneous execution of parts, or stages, of instructions, so that instructions are processed more efficiently (a sketch of pipelined execution follows this list).
- A large number of registers: the RISC design philosophy generally incorporates a larger number of registers to prevent large amounts of interaction with memory.
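
To make pipelining more concrete, here is a minimal sketch in C that prints which pipeline stage each instruction occupies in each clock cycle. It assumes the classic five-stage fetch/decode/execute/memory/write-back pipeline; the stage names and the instruction count are illustrative, not tied to any particular processor.

    #include <stdio.h>

    int main(void) {
        /* Classic five-stage pipeline; stage names are illustrative. */
        const char *stages[] = {"IF", "ID", "EX", "MEM", "WB"};
        const int num_stages = 5;
        const int num_instrs = 4;

        /* Instruction i occupies stage (cycle - i) during each cycle, so a
           new instruction finishes every cycle once the pipeline is full. */
        for (int cycle = 0; cycle < num_instrs + num_stages - 1; cycle++) {
            printf("cycle %d:", cycle + 1);
            for (int i = 0; i < num_instrs; i++) {
                int s = cycle - i;
                if (s >= 0 && s < num_stages)
                    printf("  I%d:%-3s", i + 1, stages[s]);
            }
            printf("\n");
        }
        return 0;
    }

The output shows up to four instructions in flight at once, which is how a pipelined RISC processor sustains a CPI of one even though each individual instruction takes five cycles to pass through the pipeline.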

The simplest way to examine the advantages and disadvantages of RISC architecture is by contrasting it with its predecessor: CISC (Complex Instruction Set Computer) architecture.

Multiplying Two Numbers in Memory

Consider the storage scheme for a generic computer. The main memory is divided into locations numbered from (row) 1: (column) 1 to (row) 6: (column) 4. The execution unit is responsible for carrying out all computations. However, the execution unit can only operate on data that has been loaded into one of the six registers (A, B, C, D, E, or F). Let's say we want to find the product of two numbers, one stored in location 2:3 and another stored in location 5:2, and then store the product back in location 2:3.

The CISC Approach

The primary goal of CISC architecture is to complete a task in as few lines of assembly as possible. This is achieved by building processor hardware that is capable of understanding and executing a series of operations.


For this particular task, a CISC processor would come prepared with a specific instruction (we'll call it "MULT"). When executed, this instruction loads the two values into separate registers, multiplies the operands in the execution unit, and then stores the product in the appropriate register. Thus, the entire task of multiplying two numbers can be completed with one instruction:

    MULT 2:3, 5:2

MULT is what is known as a "complex instruction." It operates directly on the computer's memory banks and does not require the programmer to explicitly call any loading or storing functions. It closely resembles a command in a higher-level language. For instance, if we let "a" represent the value of 2:3 and "b" represent the value of 5:2, then this command is identical to the C statement "a = a * b."

One of the primary advantages of this system is that the compiler has to do very little work to translate a high-level language statement into assembly. Because the length of the code is relatively short, very little RAM is required to store instructions. The emphasis is put on building complex instructions directly into the hardware.

The RISC Approach

RISC processors only use simple instructions that can be executed within one clock cycle. Thus, the "MULT" command described above could be divided into three separate commands: "LOAD," which moves data from the memory bank to a register, "PROD," which finds the product of two operands located within the registers, and "STORE," which moves data from a register to the memory banks. In order to perform the exact series of steps described in the CISC approach, a programmer would need to code four lines of assembly:

    LOAD A, 2:3
    LOAD B, 5:2
    PROD A, B
    STORE 2:3, A

At first, this may seem like a much less efficient way of completing the operation. Because there are more lines of code, more RAM is needed to store the assembly-level instructions. The compiler must also perform more work to convert a high-level language statement into code of this form. The two approaches can be contrasted as follows:

    CISC                                          RISC
    Emphasis on hardware                          Emphasis on software
    Includes multi-clock complex instructions     Single-clock, reduced instructions only
    Memory-to-memory: "LOAD" and "STORE"          Register-to-register: "LOAD" and "STORE"
      incorporated in instructions                  are independent instructions
    Small code sizes, high cycles per second      Large code sizes, low cycles per second
    Transistors used for storing complex          Spends more transistors on memory
      instructions                                  registers
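
As a concrete illustration, here is a toy simulation in C of the four-line RISC sequence shown earlier. The 6x4 memory grid, the registers A through F, and the LOAD/PROD/STORE operations mirror the hypothetical machine described in this section; the C function names and the sample values are assumptions made for the sketch, not a real instruction set.

    #include <stdio.h>

    static int memory[6][4];   /* locations addressed as row:column */
    static int reg[6];         /* registers A..F */

    enum { A, B, C, D, E, F };

    /* LOAD r, row:col -- move a value from memory into a register */
    static void load(int r, int row, int col)  { reg[r] = memory[row][col]; }

    /* PROD r1, r2 -- multiply two register operands, result in r1 */
    static void prod(int r1, int r2)           { reg[r1] *= reg[r2]; }

    /* STORE row:col, r -- move a register value back to memory */
    static void store(int row, int col, int r) { memory[row][col] = reg[r]; }

    int main(void) {
        memory[2][3] = 6;      /* operand at 2:3 (indices used directly) */
        memory[5][2] = 7;      /* operand at 5:2 */

        load(A, 2, 3);         /* LOAD A, 2:3  */
        load(B, 5, 2);         /* LOAD B, 5:2  */
        prod(A, B);            /* PROD A, B    */
        store(2, 3, A);        /* STORE 2:3, A */

        /* Nothing here erases the registers: the operand in B survives
           and could be reused without another trip to memory. */
        printf("2:3 now holds %d; B still holds %d\n", memory[2][3], reg[B]);
        return 0;
    }

A CISC-style "MULT 2:3, 5:2" would collapse all four of these steps into a single instruction implemented in hardware.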


However, the RISC strategy also brings some very important advantages. Because each instruction requires only one clock cycle to execute, the entire program will execute in approximately the same amount of time as the multi-cycle "MULT" command. These RISC "reduced instructions" require fewer transistors of hardware space than the complex instructions, leaving more room for general-purpose registers. Because all of the instructions execute in a uniform amount of time (i.e., one clock cycle), pipelining is possible.

Separating the "LOAD" and "STORE" instructions actually reduces the amount of work that the computer must perform. After a CISC-style "MULT" command is executed, the processor automatically erases the registers. If one of the operands needs to be used for another computation, the processor must re-load the data from the memory bank into a register. In RISC, the operand will remain in the register until another value is loaded in its place.

The Performance Equation

The following equation is commonly used for expressing a computer's performance ability:

    time/program = time/cycle × cycles/instruction × instructions/program
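
As a back-of-the-envelope illustration, the short C program below plugs made-up numbers into this equation: a one-instruction CISC program at four cycles per instruction versus a four-instruction RISC program at one cycle per instruction, both at the same (assumed) clock period.

    #include <stdio.h>

    int main(void) {
        double cycle_time_ns = 10.0;                 /* assumed clock period */

        double cisc_instrs = 1.0, cisc_cpi = 4.0;    /* few, multi-cycle instructions */
        double risc_instrs = 4.0, risc_cpi = 1.0;    /* many, single-cycle instructions */

        printf("CISC: %.0f ns/program\n", cycle_time_ns * cisc_cpi * cisc_instrs);
        printf("RISC: %.0f ns/program\n", cycle_time_ns * risc_cpi * risc_instrs);
        return 0;
    }

With these numbers both sides come out to 40 ns per program, matching the claim above that the RISC program runs in roughly the same time as the single multi-cycle "MULT"; the two strategies simply trade off different factors in the same product.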

The CISC approach attempts to minimize the number of instructions per program, sacrificing the number of cycles per instruction. RISC does the opposite, reducing the cycles per instruction at the cost of the number of instructions per program.

RISC Roadblocks

Despite the advantages of RISC-based processing, RISC chips took over a decade to gain a foothold in the commercial world. This was largely due to a lack of software support. Although Apple's Power Macintosh line featured RISC-based chips and Windows NT was RISC compatible, Windows 3.1 and Windows 95 were designed with CISC processors in mind. Many companies were unwilling to take a chance on the emerging RISC technology. Without commercial interest, processor developers were unable to manufacture RISC chips in large enough volumes to make their price competitive.

Another major setback was the presence of Intel. Although their CISC chips were becoming increasingly unwieldy and difficult to develop, Intel had the resources to plow through development and produce powerful processors. Although RISC chips might surpass Intel's efforts in specific areas, the differences were not great enough to persuade buyers to change technologies.

The Overall RISC Advantage

Today, the Intel x86 is arguably the only chip which retains CISC architecture. This is primarily due to advancements in other areas of computer technology. The price of RAM has decreased dramatically. In 1977, 1MB of DRAM cost about $5,000. By 1994, the same amount of memory cost only $6 (when adjusted for inflation). Compiler technology has become more sophisticated, so that the RISC use of RAM and emphasis on software has become ideal.

Definition

RISC (reduced instruction set computer)



RISC (reduced instruction set computer) is a microprocessor that is designed to perform a smaller number of types of computer instructions so that it can operate at a higher speed (perform more millions of instructions per second, or MIPS). Since each instruction type that a computer must perform requires additional transistors and circuitry, a larger list or set of computer instructions tends to make the microprocessor more complicated and slower in operation.

John Cocke of IBM Research in Yorktown, New York, originated the RISC concept in 1974 by proving that about 20% of the instructions in a computer did 80% of the work. The first computer to benefit from this discovery was IBM's PC/XT in 1980. Later, IBM's RISC System/6000 made use of the idea. The term itself (RISC) is credited to David Patterson, a teacher at the University of California, Berkeley. The concept was used in Sun Microsystems' SPARC microprocessors and led to the founding of what is now MIPS Technologies, part of Silicon Graphics. A number of current microchips now use the RISC concept.

The RISC concept has led to a more thoughtful design of the microprocessor. Among design considerations are how well an instruction can be mapped to the clock speed of the microprocessor (ideally, an instruction can be performed in one clock cycle); how "simple" an architecture is required; and how much work can be done by the microchip itself without resorting to software help.

Besides performance improvement, some advantages of RISC and related design improvements are:


- A new microprocessor can be developed and tested more quickly if one of its aims is to be less complicated.
- Operating system and application programmers who use the microprocessor's instructions will find it easier to develop code with a smaller instruction set.
- The simplicity of RISC allows more freedom to choose how to use the space on a microprocessor.
- Higher-level language compilers produce more efficient code than formerly because they have always tended to use the smaller set of instructions to be found in a RISC computer.

After the introduction of RISC, any "full-set" instruction computer was said to use complex instruction set computing (CISC).

The historical approach


Perhaps the most common approach to comparing RISC and CISC is to list the features of each and place them side by side for comparison, discussing how each feature aids or hinders performance. This approach is fine if you're comparing two contemporary and competing pieces of technology, like OSs, video cards, or specific CPUs, but it fails when applied to RISC and CISC. It fails because RISC and CISC are not so much technologies as they are design strategies: approaches to achieving a specific set of goals that were defined in relation to a particular set of problems. Or, to be a bit more abstract, we could also call them design philosophies, or ways of thinking about a set of problems and their solutions.

It's important to see these two design strategies as having developed out of a particular set of technological conditions that existed at a specific point in time. Each was an approach to designing machines that designers felt made the most efficient use of the technological resources then available. In formulating and applying these strategies, researchers took into account the limitations of the day's technology, limitations that don't necessarily exist today. Understanding what those limitations were and how computer architects worked within them is the key to understanding RISC and CISC. Thus, a true RISC vs. CISC comparison requires more than just feature lists, SPEC benchmarks, and sloganeering; it requires a historical context.

In order to understand the historical and technological context out of which RISC and CISC developed, it is first necessary to understand the state of the art in VLSI, storage/memory, and compilers in the late 70s and early 80s. These three technologies defined the technological environment in which researchers worked to build the fastest machines.


Storage and memory

It's hard to overstate the effects that the state of storage technology had on computer design in the 70s and 80s. In the 1970s, computers used magnetic core memory to store program code; core memory was not only expensive, it was agonizingly slow. After the introduction of RAM things got a bit better on the speed front, but this didn't address the cost part of the equation. To help you wrap your mind around the situation, consider the fact that in 1977, 1MB of DRAM cost about $5,000. By 1994, that price had dropped to under $6 (in 1977 dollars) [2].

In addition to the high price of RAM, secondary storage was expensive and slow, so paging large volumes of code into RAM from the secondary store impeded performance in a major way. The high cost of main memory and the slowness of secondary storage conspired to make code bloat a deadly serious issue. Good code was compact code; you needed to be able to fit all of it in a small amount of memory. Because RAM counted for a significant portion of the overall cost of a system, a reduction in code size translated directly into a reduction in the total system cost. (In the early 90s, RAM accounted for around 36% of the total system cost, and this was after RAM had become quite a bit cheaper [4].) We'll talk a bit more about code size and system cost when we consider in detail the rationales behind CISC computing.

Compilers

David Patterson, in a recently published retrospective article on his original proposal paper for the RISC I project at Berkeley, writes:

    Something to keep in mind while reading the paper was how lousy the compilers were of that generation. C programmers had to write the word "register" next to variables to try to get compilers to use registers. As a former Berkeley Ph.D. who started a small computer company said later, "people would accept any piece of junk you gave them, as long as the code worked." Part of the reason was simply the speed of processors and the size of memory, as programmers had limited patience on how long they were willing to wait for compilers. [3]

The compiler's job was fairly simple at that point: translate statements written in a high-level language (HLL), like C or PASCAL, into assembly language. The assembly language was then converted into machine code by an assembler. The compilation stage took a long time, and the output was hardly optimal. As long as the HLL => assembly translation was correct, that was about the best you could hope for. If you really wanted compact, optimized code, your only choice was to code in assembler. (In fact, some would argue that this is still the case today.)
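
For readers who have not seen it, this is what the "register" hint Patterson mentions looks like in C; the function itself is just an illustrative example. Modern compilers do their own register allocation and treat the keyword as little more than a promise that the variable's address is never taken.

    /* Early-C style: mark hot variables so the compiler keeps them in
       registers rather than on the stack. */
    long sum_array(const int *a, int n) {
        register long sum = 0;   /* hint: keep the accumulator in a register */
        register int i;
        for (i = 0; i < n; i++)
            sum += a[i];
        return sum;
    }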

VLSI

The state of the art in Very Large Scale Integration (VLSI) yielded transistor densities that were low by today's standards. You just couldn't fit too much functionality onto one chip.


Back in 1981, when Patterson and Séquin first proposed the RISC I project (RISC I later became the foundation for Sun's SPARC architecture), a million transistors on a single chip was a lot [1]. Because of the paucity of available transistor resources, the CISC machines of the day, like the VAX, had their various functional units split up across multiple chips. This was a problem, because the delay-power penalty on data transfers between chips limited performance. A single-chip implementation would have been ideal, but, for reasons we'll get into in a moment, it wasn't feasible without a radical rethinking of current designs.
