
CAO Unit-2 Entire Notes


Module 2: CENTRAL PROCESSING UNIT

Outline

• Arithmetic logic unit


• Integer multiplication- Booth’s algorithm
• Floating point representation principles
• Machine instruction characteristics
• Addressing modes
• Instruction formats – Instruction length, Allocation of
bits
• Processor organization
• Register organization – User visible registers, Control
and status registers
• Instruction cycle
Arithmetic logic unit
• The ALU is the part of the computer that actually
performs arithmetic and logical operations on data.
• All of the other elements of the computer system—
control unit, registers, memory, I/O—are there mainly
to bring data into the ALU for it to process and then to
take the results back out.
• In the figure below, operands for arithmetic and logic
operations are presented to the ALU in registers, and
the results of an operation are stored in registers.
Figure: ALU Inputs and Outputs
INTEGER REPRESENTATION

• In the binary number system, arbitrary numbers can
be represented with just the digits zero and one, the
minus sign (for negative numbers), and the period, or
radix point (for numbers with a fractional component).
-(1101.0101)2 = -(13.3125)10
• An 8-bit word can represent the numbers from 0 to
255, such as
00000000 = 0
00000001 = 1
00101001 = 41
10000000 = 128
11111111 = 255
• Sign-Magnitude Representation- If the sign bit is 0, the
number is positive; if the sign bit is 1, the number is
negative.
• The simplest form of representation that employs a sign bit
is the sign-magnitude representation. In an n-bit word, the
rightmost n - 1 bits hold the magnitude of the integer.
+18 = 00010010
-18 = 10010010 (sign magnitude)
• There are several drawbacks to sign-magnitude
representation.
• One is that addition and subtraction require a consideration
of both the signs of the numbers and their relative
magnitudes to carry out the required operation.
• Another drawback is that there are two representations of
0:
+ 010 = 00000000
- 010 = 10000000 (sign magnitude)
• Twos Complement Representation
Steps:
1. Take the Boolean complement of each bit of the
integer (including the sign bit). That is, set each 1 to
0 and each 0 to 1.
2. Treating the result as an unsigned binary integer,
add 1.
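These two steps can be sketched in Python; the function name, the 8-bit default word size, and the example value are our own illustration:

```python
def twos_complement(value, n=8):
    """Twos complement of an n-bit value: invert every bit, then add 1."""
    mask = (1 << n) - 1           # n one-bits, e.g. 0xFF for n = 8
    inverted = value ^ mask       # step 1: Boolean complement of each bit
    return (inverted + 1) & mask  # step 2: add 1, keeping only n bits

# +18 is 00010010; its twos complement (the pattern for -18) is 11101110
print(format(twos_complement(0b00010010), "08b"))  # prints 11101110
```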
INTEGER ARITHMETIC

Addition and Subtraction:


• If the result of the operation is positive, we get a
positive number in twos complement form, which is
the same as in unsigned-integer form. If the result of
the operation is negative, we get a negative number
in twos complement form
• On any addition, the result may be larger than can be
held in the word size being used. This condition is
called overflow. When overflow occurs, the ALU must
signal this fact so that no attempt is made to use the
result.
SUBTRACTION RULE: To subtract one number (subtrahend)
from another (minuend), take the twos complement
(negation) of the subtrahend and add it to the minuend
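The rule can be sketched for 8-bit words as follows; the function name and example operands are our own illustration:

```python
def subtract(minuend, subtrahend, n=8):
    """M - S computed as M + twos_complement(S); the end carry is discarded."""
    mask = (1 << n) - 1
    neg_s = ((subtrahend ^ mask) + 1) & mask  # twos complement (negation) of S
    return (minuend + neg_s) & mask           # add, drop any carry out of bit n

# 84 - 67 = 17, i.e. 01010100 - 01000011 = 00010001
print(format(subtract(84, 67), "08b"))  # prints 00010001
```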

Figure: Subtraction
of Numbers in
Twos Complement
Representation (M
- S)
Figure: Block Diagram of Hardware for
Addition and Subtraction
Integer multiplication
• Compared with addition and subtraction,
multiplication is a complex operation, whether
performed in hardware or software.
• Consider first the multiplication of unsigned integers.
Observations during multiplication:

1. Multiplication involves the generation of partial
products, one for each digit in the multiplier. These
partial products are then summed to produce the final
product.
2. The partial products are easily defined. When the
multiplier bit is 0, the partial product is 0. When the
multiplier is 1, the partial product is the multiplicand.
3. The total product is produced by summing the partial
products. For this operation, each successive partial
product is shifted one position to the left relative to the
preceding partial product.
4. The multiplication of two n-bit binary integers results
in a product of up to 2n bits in length
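The four observations can be sketched as a shift-and-add routine. This is an illustrative Python sketch under our own naming, not the register-level hardware of the figures:

```python
def multiply_unsigned(multiplicand, multiplier, n=4):
    """Shift-and-add multiplication of two n-bit unsigned integers."""
    product = 0
    for i in range(n):
        if (multiplier >> i) & 1:         # multiplier bit 1: the partial product
            product += multiplicand << i  # is the multiplicand, shifted left i
        # multiplier bit 0: the partial product is 0, nothing to add
    return product                        # up to 2n bits long

print(multiply_unsigned(11, 13))  # prints 143, which needs 2n = 8 bits
```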
Figure: Block diagram Hardware Implementation of Unsigned Binary Multiplication
Figure: Flowchart for Unsigned Binary Multiplication

UNIT III
Computer Data Representation
Basic computer data types.
 The data types found in the registers of digital computers may be classified as being
one of the following categories:
(1) numbers used in arithmetic computations
(2) letters of the alphabet used in data processing
(3) Other discrete symbols used for specific purposes.
 All types of data, except binary numbers, are represented in computer registers in
binary-coded form.
 This is because registers are made up of flip-flops and flip-flops are two-state devices
that can store only 1's and 0's.

Number System:
Radix
 A radix-R number system uses R distinct symbols for each digit.
Example: AR = an-1 an-2 ... a1 a0 . a-1 ... a-m
 The radix point (.) separates the integer portion from the fractional portion.
 Example:
R = 10 Decimal number system, R = 2 Binary
R = 8 Octal, R = 16 Hexadecimal

1. Decimal
 The decimal number system in everyday use employs the radix 10 system.
 The 10 symbols are 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9.
Example
The string of digits 724.5 is interpreted to represent the quantity
7 x 10^2 + 2 x 10^1 + 4 x 10^0 + 5 x 10^-1
that is, 7 hundreds, plus 2 tens, plus 4 units, plus 5 tenths.
 Every decimal number can be similarly interpreted to find the quantity it
represents.
2. Binary
 The binary number system uses the radix 2.
 The two digit symbols used are 0 and 1. The string of digits 101101 is
interpreted to represent the quantity
1 x 2^5 + 0 x 2^4 + 1 x 2^3 + 1 x 2^2 + 0 x 2^1 + 1 x 2^0 = 45
 To distinguish between different radix numbers, the digits will be enclosed in
parentheses and the radix of the number inserted as a subscript.
For example,
to show the equality between decimal and binary forty-five we will write
(101101)2 = (45)10.


3. Octal
 The octal number system uses radix 8.
 The eight symbols of the octal system are 0, 1, 2, 3, 4, 5, 6, and 7.
For example, octal 736.4 is converted to decimal as follows:
(736.4)8 = 7 x 8^2 + 3 x 8^1 + 6 x 8^0 + 4 x 8^-1
= 7 x 64 + 3 x 8 + 6 x 1 + 4/8 = (478.5)10

4. Hexadecimal
 The hexadecimal number system uses radix 16.
 The 16 symbols of the hexadecimal system are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C,
D, E, and F.
 Among the hexadecimal digits, the symbols A, B, C, D, E, and F correspond to the
decimal numbers 10, 11, 12, 13, 14, and 15, respectively.
 The equivalent decimal number of hexadecimal F3 is obtained from the
following calculation:
(F3)16 = F x 16 + 3 = 15 x 16 + 3 = (243)10
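The radix conversions above can be spot-checked with Python's built-in int, which takes a digit string and a base (fractional parts are not handled):

```python
# Each integer conversion from the examples above, checked with int(text, base)
print(int("101101", 2))  # prints 45
print(int("736", 8))     # prints 478, the integer part of (736.4)8
print(int("F3", 16))     # prints 243
```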

Complement number system


 Complements are used in digital computers for simplifying the subtraction operation
and for logical manipulation.
 There are two types of complements for each base r system: the r's complement and
the (r - 1)'s complement.
 When the value of the base r is substituted in the name, the two types are referred to
as the 2's and 1's complement for binary numbers and the 10's and 9's complement for
decimal numbers.

(r - 1)'s Complement
 9's complement
 Given a number N in base r having n digits, the (r - 1)'s complement of N is
defined as (r^n - 1) - N. For decimal numbers r = 10 and r - 1 = 9, so the 9's
complement of N is (10^n - 1) - N.
 Now, 10^n represents a number that consists of a single 1 followed by n 0's.
 10^n - 1 is a number represented by n 9's.
For example, with n = 4 we have 10^4 = 10000 and 10^4 - 1 = 9999.
 It follows that the 9's complement of a decimal number is obtained by
subtracting each digit from 9.
 For example, the 9's complement of 546700 is 999999 - 546700 = 453299 and
the 9's complement of 12389 is 99999 - 12389 = 87610.


 1's complement
 For binary numbers, r = 2 and r - 1 = 1, so the 1's complement of N is (2^n - 1) - N.
 Again, 2^n is represented by a binary number that consists of a 1 followed by n
0's.
 2^n - 1 is a binary number represented by n 1's.
For example, with n = 4, we have 2^4 = (10000)2 and 2^4 - 1 = (1111)2.
 Thus the 1's complement of a binary number is obtained by subtracting each
digit from 1.
 However, the subtraction of a binary digit from 1 causes the bit to change from
0 to 1 or from 1 to 0.
 Therefore, the 1's complement of a binary number is formed by changing 1's
into 0's and 0's into 1's.
For example, the 1's complement of 1011001 is 0100110 and the 1's
complement of 0001111 is 1110000.
(r's) Complement
 The r's complement of an n-digit number N in base r is defined as r^n - N for N ≠ 0 and 0
for N = 0.
 Comparing with the (r - 1)'s complement, we note that the r's complement is obtained
by adding 1 to the (r - 1)'s complement, since r^n - N = [(r^n - 1) - N] + 1.

 10's complement
 Thus the 10's complement of the decimal 2389 is 7610 + 1 = 7611 and is
obtained by adding 1 to the 9's complement value.
 2's complement
 The 2's complement of binary 101100 is 010011 + 1 = 010100 and is obtained
by adding 1 to the 1's complement value.
 2's complement notation solves the problem of the relationship between
positive and negative numbers, and achieves accurate results in subtractions.

Subtraction of Unsigned Numbers in the Complement Number System
 The subtraction of two n-digit unsigned numbers M - N (N ≠ 0) in base r can be done as
follows:
Add the minuend M to the r's complement of the subtrahend N.
This performs M + (r^n - N) = M - N + r^n.
1. If M ≥ N, the sum will produce an end carry r^n which is discarded, and what is left
is the result M - N.
2. If M < N, the sum does not produce an end carry and is equal to r^n - (N - M),
which is the r's complement of (N - M). To obtain the answer in a familiar form,
take the r's complement of the sum and place a negative sign in front.
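A small Python sketch of the two cases; the helper name and arguments are our own:

```python
def subtract_10s(m, n, digits):
    """Compute M - N for unsigned decimals using the 10's complement."""
    r_pow = 10 ** digits
    total = m + (r_pow - n)       # add M to the 10's complement of N
    if total >= r_pow:            # end carry produced: discard it,
        return total - r_pow      # what is left is M - N
    return -(r_pow - total)       # no end carry: negative, re-complement

print(subtract_10s(72532, 13250, 5))  # prints 59282
print(subtract_10s(13250, 72532, 5))  # prints -59282
```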


 For example, the subtraction 72532 - 13250 = 59282. The 10's complement of 13250 is
86750.

M = 72532
10's complement of N = + 86750
Sum = 159282
Discard end carry 10^5 = - 100000
Answer = 59282

 Now consider an example with M < N. The subtraction 13250 – 72532 produces
negative 59282. Using the procedure with complements, we have

M = 13250
10's complement of N = + 27468
Sum = 40718

There is no end carry. Answer is negative 59282 = 10's complement of 40718.

 Subtraction with complements is done with binary numbers in a similar manner using
the same procedure outlined above.

 Using the two binary numbers X = 1010100 and Y = 1000011, we perform the
subtraction X - Y and Y - X using 2's complements:

X = 1010100
2's complement of Y = + 0111101
Sum = 10010001
Discard end carry 2^7 = - 10000000
Answer: X - Y = 0010001

Y = 1000011
2's complement of X = + 0101100
Sum = 1101111

There is no end carry.
Answer is negative 0010001 = 2's complement of 1101111.


Fixed Point Representation


 This method assumes that the binary point is fixed in one position.
 The binary fixed-point representation is shown below:
X = xn xn-1 xn-2 ... x1 x0 . x-1 x-2 ... x-m
Sign bit (xn): 0 for positive, 1 for negative
Remaining bits (xn-1 xn-2 ... x1 x0 . x-1 x-2 ... x-m): magnitude
 Positive integers, including zero, can be represented as unsigned numbers.
 However, to represent negative integers, we need a notation for negative values.
 In ordinary arithmetic, a negative number is indicated by a minus sign and a positive
number by a plus sign.
 Because of hardware limitations, computers must represent everything with 1's and
0's, including the sign of a number.
 As a consequence, it is customary to represent the sign with a bit placed in the leftmost
position of the number.
 The convention is to make the sign bit equal to 0 for positive and to 1 for negative.
 In addition to the sign, a number may have a binary (or decimal) point.
 The position of the binary point is needed to represent fractions, integers, or mixed
integer-fraction numbers.
 The representation of the binary point in a register is complicated by the fact that it is
characterized by a position in the register.
 There are two ways of specifying the position of the binary point in a register:
1. By giving it a fixed position
2. By employing a floating-point representation.
 The fixed-point method assumes that the binary point is always fixed in one position.
 The two positions most widely used are
(1) A binary point in the extreme left of the register to make the stored number a
fraction, and
(2) A binary point in the extreme right of the register to make the stored number an
integer.
 In either case, the binary point is not actually present, but its presence is assumed from
the fact that the number stored in the register is treated as a fraction or as an integer.

Integer Representation of Signed Numbers


 When an integer binary number is positive, the sign is represented by 0 and the
magnitude by a positive binary number.
 When the number is negative, the sign is represented by 1 but the rest of the number
may be represented in one of three possible ways:
1. Signed-magnitude representation
2. Signed 1's complement representation
3. Signed 2's complement representation
 The signed-magnitude representation of a negative number consists of the magnitude
and a negative sign.


 In the other two representations, the negative number is represented in either the 1's
or 2's complement of its positive value.
 As an example, consider the signed number -14 stored in an 8-bit register
(+14 = 0 0001110):
1. In signed-magnitude representation: 1 0001110
2. In signed-1's complement representation: 1 1110001
3. In signed-2's complement representation: 1 1110010
 The signed-magnitude representation of -14 is obtained from +14 by complementing
only the sign bit.
 The signed-1's complement representation of -14 is obtained by complementing all the
bits of +14, including the sign bit.
 The signed-2's complement representation is obtained by taking the 2's complement of
the positive number, including its sign bit.

Arithmetic Addition
1. Compare their signs
2. If two signs are the same, ADD the two magnitudes - Look out for an overflow
3. If not the same , compare the relative magnitudes of the numbers and then SUBTRACT
the smaller from the larger --> need a subtractor to add
4. Determine the sign of the result

+6   00000110        -6   11111010
+13  00001101        +13  00001101
+19  00010011        +7   00000111

+6   00000110        -6   11111010
-13  11110011        -13  11110011
-7   11111001        -19  11101101

 In each of the four cases, the operation performed is always addition, including the
sign bits.
 Any carry out of the sign bit position is discarded, and negative results are
automatically in 2's complement form.


Arithmetic Subtraction
 Arithmetic Subtraction in 2’s complement.
 Take the 2's complement of the subtrahend (including the sign bit) and add it to the
minuend (including the sign bit).
 A carry out of the sign bit position is discarded.

 This procedure stems from the fact that a subtraction operation can be changed to an
addition operation if the sign of the subtrahend is changed.
 This is demonstrated by the following relationship:
(±A) - (+B) = (±A) + ( - B)
(±A) - ( -B ) = (±A) + (+B)
 But changing a positive number to a negative number is easily done by taking its 2's
complement.
 The reverse is also true because the complement of a negative number in complement
form produces the equivalent positive number.
 Consider the subtraction of (- 6) - (-13) = +7.
 In binary with eight bits this is written as 11111010 - 11110011.
 The subtraction is changed to addition by taking the 2's complement of the subtrahend
( - 1 3 ) to give (+13).
 In binary this is 11111010 + 00001101 = 100000111.
 Removing the end carry, we obtain the correct answer 00000111 (+7).

Overflow
 When two numbers of n digits each are added and the sum occupies n + 1 digits, we
say that an overflow occurred.
 An overflow is a problem in digital computers because the width of registers is finite.
 A result that contains n + 1 bits cannot be accommodated in a register with a standard
length of n bits.
 For this reason, many computers detect the occurrence of an overflow, and when it
occurs, a corresponding flip-flop is set which can then be checked by the user.
 An overflow cannot occur after an addition if one number is positive and the other is
negative, since adding a positive number to a negative number produces a result that is
smaller than the larger of the two original numbers.
 An overflow may occur if the two numbers added are either positive or both negative.
 Consider the following example.
 Two signed binary numbers, +70 and +80, are stored in two 8-bit registers.
 The range of numbers that each register can accommodate is from binary +127 to
binary -128.
 Since the sum of the two numbers is +150, it exceeds the capacity of the 8-bit register.
 This is true if the numbers are either positive or both negative.
 The two additions in binary are shown below together with the last two carries.


carries: 0 1              carries: 1 0
+70     0 1000110         -70     1 0111010
+80     0 1010000         -80     1 0110000
+150    1 0010110         -150    0 1101010

 Note that the 8-bit result that should have been positive has a negative sign bit and the
8-bit result that should have been negative has a positive sign bit.
 If, however, the carry out of the sign bit position is taken as the sign bit of the result,
the 9-bit answer so obtained will be correct.
 Since the answer cannot be accommodated within 8 bits, we say that an overflow
occurred.
 An overflow condition can be detected by observing the carry into the sign bit position
and the carry out of the sign bit position.
 If these two carries are not equal, an overflow condition is produced.
 This is indicated in the examples where the two carries are explicitly shown.
 If the two carries are applied to an exclusive-OR gate, an overflow will be detected
when the output of the gate is equal to 1.
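The carry-in/carry-out test can be sketched in Python; the function models an n-bit adder on unsigned bit patterns and is our own illustration:

```python
def add_with_overflow(a, b, n=8):
    """Add two n-bit 2's complement words; overflow occurs when the carry
    into the sign bit differs from the carry out of it (their XOR is 1)."""
    mask = (1 << n) - 1
    low = (a & (mask >> 1)) + (b & (mask >> 1))  # sum of the n-1 magnitude bits
    carry_in = (low >> (n - 1)) & 1              # carry into the sign position
    total = (a & mask) + (b & mask)
    carry_out = (total >> n) & 1                 # carry out of the sign position
    return total & mask, carry_in ^ carry_out

print(add_with_overflow(70, 80))   # (150, 1): overflow, 150 reads as -106
print(add_with_overflow(70, 176))  # (246, 0): 70 + (-80) = -10, no overflow
```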

Floating - Point Representation


 The floating-point representation of a number has two parts.
 The first part represents a signed, fixed-point number called the mantissa.
 The second part designates the position of the decimal (or binary) point and is called
the exponent.
 The fixed-point mantissa may be a fraction or an integer.
For example, the decimal number +6132.789 is represented in floating-point with a
fraction and an exponent as follows:
Fraction Exponent
+0.6132789 + 04
 Floating-point is always interpreted to represent a number in the following form:
m x re
 Only the mantissa m and the exponent e are physically represented in the register
(including their signs).
 The radix r and the radix-point position of the mantissa are always assumed.
 A floating-point binary number is represented in a similar manner except that it uses
base 2 for the exponent.
 For example, the binary number +1001.11 is represented with an 8-bit fraction and 6-
bit exponent as follows:
Fraction Exponent
01001110 000100
 The fraction has a 0 in the leftmost position to denote positive.
 The binary point of the fraction follows the sign bit but is not shown in the register.
 The exponent has the equivalent binary number +4 (i.e. (000100)2 = (4)10).

 The floating-point number is equivalent to:

m x 2^e = +(0.1001110)2 x 2^+4
Normalization


 A floating-point number is said to be normalized if the most significant digit of the
mantissa is nonzero.
For example, the decimal number 350 is normalized but 00035 is not.
 Regardless of where the position of the radix point is assumed to be in the mantissa,
the number is normalized only if its leftmost digit is nonzero.
For example, the 8-bit binary number 00011010 is not normalized because of the three
leading 0's.
 The number can be normalized by shifting it three positions to the left and discarding
the leading 0's to obtain 11010000.
 The three shifts multiply the number by 2^3 = 8.
 To keep the same value for the floating-point number, the exponent must be
subtracted by 3.
 Normalized numbers provide the maximum possible precision for the floating-point
number.
 A zero cannot be normalized because it does not have a nonzero digit.
 It is usually represented in floating-point by all 0's in the mantissa and exponent.
 Arithmetic operations with floating-point numbers are more complicated than
arithmetic operations with fixed-point numbers and their execution takes longer and
requires more complex hardware.
 However, floating-point representation is a must for scientific computations because of
the scaling problems involved with fixed-point computations.
 Many computers and all electronic calculators have the built-in capability of
performing floating-point arithmetic operations.
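Normalization of a binary fraction mantissa can be sketched as follows, assuming an 8-bit mantissa register; the names and example are our own:

```python
def normalize(mantissa, exponent, n=8):
    """Shift an n-bit fraction mantissa left until its MSB is 1, decrementing
    the exponent once per shift; zero stays as all 0's in both fields."""
    if mantissa == 0:
        return 0, 0                  # zero cannot be normalized
    msb = 1 << (n - 1)
    while not (mantissa & msb):
        mantissa = (mantissa << 1) & ((1 << n) - 1)
        exponent -= 1
    return mantissa, exponent

# 00011010 needs three left shifts, so the exponent drops by 3
print(normalize(0b00011010, 0))  # (208, -3), i.e. 11010000 with exponent -3
```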

Addition and Subtraction with signed-magnitude data


 The flowchart is shown in Figure 7.1. The two signs As and Bs are compared by an
exclusive-OR gate.
If the output of the gate is 0, the signs are identical;
if it is 1, the signs are different.
 For an add operation, identical signs dictate that the magnitudes be added. For a
subtract operation, different signs dictate that the magnitudes be added.
 The magnitudes are added with a microoperation EA ← A + B, where EA is a register that
combines E and A. The carry in E after the addition constitutes an overflow if it is equal
to 1. The value of E is transferred into the add-overflow flip-flop AVF.
 The two magnitudes are subtracted if the signs are different for an add operation or
identical for a subtract operation. The magnitudes are subtracted by adding A to the 2's
complement of B. No overflow can occur if the numbers are subtracted, so AVF is cleared
to 0.
 A 1 in E indicates that A >= B and the number in A is the correct result. If this number is
zero, the sign As must be made positive to avoid a negative zero.
 A 0 in E indicates that A < B. For this case it is necessary to take the 2's complement of the
value in A. The operation can be done with one microoperation A ← A' + 1.
 However, we assume that the A register has circuits for the microoperations complement
and increment, so the 2's complement is obtained from these two microoperations.
 In other paths of the flowchart, the sign of the result is the same as the sign of A, so no
change in As is required. However, when A < B, the sign of the result is the complement of
the original sign of A. It is then necessary to complement As to obtain the correct sign.
 The final result is found in register A and its sign in As. The value in AVF provides an
overflow indication. The final value of E is immaterial.
 Figure 7.2 shows a block diagram of the hardware for implementing the addition and
subtraction operations.
 It consists of registers A and B and sign flip-flops As and Bs.
 Subtraction is done by adding A to the 2's complement of B.
 The output carry is transferred to flip-flop E, where it can be checked to determine the
relative magnitudes of the two numbers.
 The add-overflow flip-flop AVF holds the overflow bit when A and B are added.
 The A register provides other microoperations that may be needed when we specify the
sequence of steps in the algorithm.


Figure 7.1: Flowchart for add and subtract operations.

Figure 7.2: Hardware for signed-magnitude addition and subtraction


Booth’s algorithm
 Booth's algorithm gives a procedure for multiplying binary integers in signed 2's
complement representation.
 It operates on the fact that strings of 0's in the multiplier require no addition but just
shifting, and a string of 1's in the multiplier from bit weight 2^k down to weight 2^m can
be treated as 2^(k+1) - 2^m.
 For example, the binary number 001110 (+14) has a string of 1's from 2^3 to 2^1
(k = 3, m = 1). The number can be represented as 2^(k+1) - 2^m = 2^4 - 2^1 = 16 - 2 = 14.
Therefore, the multiplication M X 14, where M is the multiplicand and 14 the multiplier,
can be done as M X 2^4 - M X 2^1.
 Thus the product can be obtained by shifting the binary multiplicand M four times to the
left and subtracting M shifted left once.

Figure 7.3: Booth algorithm for multiplication of signed-2's complement numbers

 As in all multiplication schemes, Booth's algorithm requires examination of the multiplier
bits and shifting of the partial product.


 Prior to the shifting, the multiplicand may be added to the partial product, subtracted
from the partial product, or left unchanged according to the following rules:
1. The multiplicand is subtracted from the partial product upon encountering the
first least significant 1 in a string of 1’s in the multiplier.
2. The multiplicand is added to the partial product upon encountering the first 0 in a
string of 0’s in the multiplier.
3. The partial product does not change when the multiplier bit is identical to the
previous multiplier bit.
 The algorithm works for positive or negative multipliers in 2’s complement
representation.
 This is because a negative multiplier ends with a string of 1’s and the last operation will
be a subtraction of the appropriate weight.
 The two bits of the multiplier in Qn and Qn+1 are inspected.
 If the two bits are equal to 10, it means that the first 1 in a string of 1's has been
encountered. This requires a subtraction of the multiplicand from the partial product in
AC.
 If the two bits are equal to 01, it means that the first 0 in a string of 0's has been
encountered. This requires the addition of the multiplicand to the partial product in AC.
 When the two bits are equal, the partial product does not change.
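The rules above can be sketched as a bit-level Booth multiplier in Python. Register names follow the flowchart (AC, QR, Qn+1, BR), but the code itself is our own illustration:

```python
def booth_multiply(multiplicand, multiplier, n=5):
    """Booth's algorithm on n-bit signed 2's complement operands.
    AC and QR are shifted together arithmetically each cycle."""
    mask = (1 << n) - 1
    br = multiplicand & mask
    neg_br = (-multiplicand) & mask           # BR' + 1, the 2's complement
    ac, qr, q_extra = 0, multiplier & mask, 0  # q_extra models Qn+1
    for _ in range(n):
        pair = ((qr & 1) << 1) | q_extra      # inspect Qn Qn+1
        if pair == 0b10:                      # first 1 of a string: subtract BR
            ac = (ac + neg_br) & mask
        elif pair == 0b01:                    # first 0 of a string: add BR
            ac = (ac + br) & mask
        # arithmetic shift right of AC & QR & Qn+1
        q_extra = qr & 1
        qr = ((qr >> 1) | ((ac & 1) << (n - 1))) & mask
        ac = (ac >> 1) | (ac & (1 << (n - 1)))  # replicate the sign bit
    product = (ac << n) | qr                    # 2n-bit result in AC:QR
    if product & (1 << (2 * n - 1)):            # interpret as signed
        product -= 1 << (2 * n)
    return product

print(booth_multiply(-9, -13))  # prints 117
print(booth_multiply(15, -13))  # prints -195
```

The two calls reproduce the worked examples that follow; intermediate AC/QR values match the step tables.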

Multiplication Operation on two floating point numbers.


 The multiplication of two floating-point numbers requires that we multiply the mantissas
and add the exponents.
 No comparison of exponents or alignment of mantissas is necessary.
 The multiplication of the mantissas is performed in the same way as in fixed-point to
provide a double-precision product.
 The double-precision answer is used in fixed-point numbers to increase the accuracy of
the product.
 In floating-point, the range of a single-precision mantissa combined with the exponent is
usually accurate enough so that only single-precision numbers are maintained.
 Thus the most significant half of the mantissa product bits and the exponent will be
taken together to form a single-precision floating-point product.
 The multiplication algorithm can be subdivided into four parts:
1. Check for zeros.
2. Add the exponents.
3. Multiply the mantissas.
4. Normalize the product.
 The flowchart for floating-point multiplication is shown in Figure 7.4. The two operands
are checked to determine if they contain a zero.
 If either operand is equal to zero, the product in the AC is set to zero and the operation
is terminated.


 If neither of the operands is equal to zero, the process continues with the exponent
addition.
 The exponent of the multiplier is in q and the adder is between exponents a and b.
 It is necessary to transfer the exponents from q to a, add the two exponents, and
transfer the sum into a.

Figure 7.4: Multiplication of floating-point numbers

 Since both exponents are biased by the addition of a constant, the exponent sum will
have double this bias.
 The correct biased exponent for the product is obtained by subtracting the bias number
from the sum.
 The multiplication of the mantissas is done as in the fixed-point case with the product
residing in A and Q.
 Overflow cannot occur during multiplication, so there is no need to check for it.
 The product may have an underflow, so the most significant bit in A is checked. If it is a
1, the product is already normalized.


 If it is a 0, the mantissa in AQ is shifted left and the exponent decremented.


 Note that only one normalization shift is necessary. The multiplier and multiplicand were
originally normalized and contained fractions. The smallest normalized operand is 0.1, so
the smallest possible product is 0.01.
 Therefore, only one leading zero may occur.
 Although the low-order half of the mantissa is in Q, we do not use it for the
floating-point product. Only the value in the AC is taken as the product.

4. Multiply (-9) with (-13) using Booth's algorithm. Give each
step. (Sum'14)
 A numerical example of Booth's algorithm is shown for n = 5. It shows the step-by-step
multiplication of (-9) X (-13) = +117.
9: 01001                         13: 01101
1's complement of 9: 10110       1's complement of 13: 10010
                   +     1                           +     1
2's complement of 9: 10111 (-9)  2's complement of 13: 10011 (-13)

AC     QR(-13)  Qn+1  M(BR)(-9)  SC  Comments
00000  10011    0     10111      5   Initial value
01001  10011    0     10111          Subtraction: AC=AC+BR'+1
00100  11001    1     10111      4   Arithmetic Shift Right
00010  01100    1     10111      3   Arithmetic Shift Right
11001  01100    1     10111          Addition: AC=AC+BR
11100  10110    0     10111      2   Arithmetic Shift Right
11110  01011    0     10111      1   Arithmetic Shift Right
00111  01011    0     10111          Subtraction: AC=AC+BR'+1
00011  10101    1     10111      0   Arithmetic Shift Right
Answer: -9 X -13 = 117 => 0001110101

5. Multiply (7) with (3) using Booth's algorithm. Give each
step.
7: 0111    3: 0011

AC    QR(3)  Qn+1  M(BR)(7)  SC  Comments
0000  0011   0     0111      4   Initial value
1001  0011   0     0111          Subtraction: AC=AC+BR'+1
1100  1001   1     0111      3   Arithmetic Shift Right
1110  0100   1     0111      2   Arithmetic Shift Right
0101  0100   1     0111          Addition: AC=AC+BR
0010  1010   0     0111      1   Arithmetic Shift Right
0001  0101   0     0111      0   Arithmetic Shift Right
Answer: 7 X 3 = 21 => 00010101

6. Multiply (15) with (13) using Booth's algorithm. Give each
step.
15: 01111    13: 01101
15 X 13 = 195
AC     QR(15)  Qn+1  M(BR)(13)  SC  Comments
00000  01111   0     01101      5   Initial value
10011  01111   0     01101          Subtraction: AC=AC+BR'+1
11001  10111   1     01101      4   Arithmetic Shift Right
11100  11011   1     01101      3   Arithmetic Shift Right
11110  01101   1     01101      2   Arithmetic Shift Right
11111  00110   1     01101      1   Arithmetic Shift Right
01100  00110   1     01101          Addition: AC=AC+BR
00110  00011   0     01101      0   Arithmetic Shift Right
Answer: 15 X 13 = 195 => 0011000011

7. Multiply (+15) with (-13) using Booth's algorithm. Give each
step.
15: 01111    13: 01101
1's complement of 13: 10010
                    +     1
2's complement of 13: 10011 (-13)

AC     QR(-13)  Qn+1  M(BR)(+15)  SC  Comments
00000  10011    0     01111       5   Initial value
10001  10011    0     01111           Subtraction: AC=AC+BR'+1
11000  11001    1     01111       4   Arithmetic Shift Right
11100  01100    1     01111       3   Arithmetic Shift Right
01011  01100    1     01111           Addition: AC=AC+BR
00101  10110    0     01111       2   Arithmetic Shift Right
00010  11011    0     01111       1   Arithmetic Shift Right
10011  11011    0     01111           Subtraction: AC=AC+BR'+1
11001  11101    1     01111       0   Arithmetic Shift Right

Answer: (+15) X (-13) = -195 => 1100111101

To verify: 2's complement of 1100111101 = 0011000010 + 1 = 0011000011 => +195

BCD adder
 BCD representation is a class of binary encodings of decimal numbers where each
decimal digit is represented by a fixed number of bits.
 BCD adder is a circuit that adds two BCD digits in parallel and produces a sum digit in
BCD form.

Figure 7.5: BCD Adder


 Since each input digit does not exceed 9, the output sum cannot be greater than
19 (9 + 9 + 1, where the extra 1 is a possible input carry). For example: suppose we apply two BCD digits to a 4-bit binary adder.
 The adder will form the sum in binary and produce a result that may range from 0 to 19.
 In figure 7.5, these binary outputs are labelled K, Z8, Z4, Z2, and Z1.
 K is the carry, and the subscripts under Z represent the weights 8, 4, 2, and 1 that are
assigned to the four bits in the BCD code.
 When the binary sum is less than or equal to 9, the corresponding BCD number is
identical and therefore no conversion is needed.
 The condition for correction and an output carry can be expressed by the Boolean function:
C = K + Z8Z4 + Z8Z2
 When the sum is greater than 9, we obtain an invalid BCD representation; adding
binary 6 (0110) to the binary sum then converts it to the correct BCD representation.
 The two decimal digits, together with the input-carry, are first added in the top 4-bit
binary adder to produce the binary sum. When the output-carry is equal to 0, nothing is
added to the binary sum.
 When C is equal to 1, binary 0110 is added to the binary sum using bottom 4-bit binary
adder. The output carry generated from the bottom binary-adder may be ignored.
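The correction rule can be sketched as a single-digit BCD adder in Python. This is an illustrative model of the circuit, not part of the source; the function name is an assumption.

```python
def bcd_digit_add(a, b, carry_in=0):
    """One-digit BCD adder sketch: binary add, then +6 correction."""
    assert 0 <= a <= 9 and 0 <= b <= 9
    z = a + b + carry_in               # top 4-bit binary adder output (0..19)
    K = 1 if z > 15 else 0             # carry out of the top 4-bit adder
    z &= 0xF                           # Z8 Z4 Z2 Z1
    Z8, Z4, Z2 = (z >> 3) & 1, (z >> 2) & 1, (z >> 1) & 1
    C = K | (Z8 & Z4) | (Z8 & Z2)      # C = K + Z8Z4 + Z8Z2
    if C:
        z = (z + 6) & 0xF              # bottom adder adds 0110; its carry is ignored
    return C, z                        # output carry and BCD sum digit

print(bcd_digit_add(9, 9, 1))  # (1, 9): 9 + 9 + 1 = 19 in BCD
```

Note that 8 + 5 = 13 gives the invalid code 1101, which the +6 correction turns into carry 1 and digit 3.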

Floating point representation
Principles:
• Floating-point representation supports operations on values over a much
larger range than a fixed-point representation of the same word length.
Numerical evaluations over such ranges are carried out using
floating-point values.
• The floating-point representation breaks the number into two
parts: the left-hand part is a signed, fixed-point number known as the
mantissa, and the right-hand part of the number is known as the
exponent.
• For decimal numbers, we get around the range limitation of fixed-point
notation by using scientific notation. Thus, 976,000,000,000,000 can be
represented as 9.76 × 10^14, and 0.0000000000000976 can be
represented as 9.76 × 10^-14.
• This same approach can be taken with binary numbers. We can
represent a number in the form
±M × B^±E
• This number can be stored in a binary word
with three fields:
■ Sign: plus or minus
■ Significand S or Mantissa M
■ Exponent E

Figure: Typical 32-Bit Floating-Point Format
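To see the three fields concretely, the following sketch (not from the source) unpacks a value with Python's standard struct module, assuming the common IEEE 754 single-precision layout: 1 sign bit, 8 exponent bits with bias 127, and a 23-bit significand.

```python
import struct

def float32_fields(x):
    """Split an IEEE 754 single-precision value into its three fields."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF        # 8-bit biased exponent
    significand = bits & 0x7FFFFF         # 23-bit fraction field
    return sign, exponent, significand

# -13.3125 = -1101.0101 in binary = -1.1010101 × 2^3,
# so the biased exponent is 3 + 127 = 130.
print(float32_fields(-13.3125))  # (1, 130, 5570560)
```

The example value -13.3125 is the same number used earlier to illustrate binary fractions.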


Machine instruction characteristics
• The operation of the processor is determined by the
instructions it executes, referred to as machine
instructions or computer instructions. The collection of
different instructions that the processor can execute is
referred to as the processor’s instruction set.
Elements of a Machine Instruction
• Each instruction must contain the information required
by the processor for execution.
• Refer instruction cycle topic in unit-1
Instruction Representation
• Within the computer, each instruction is represented by a
sequence of bits. The instruction is divided into fields,
corresponding to the constituent elements of the instruction.

Figure: A Simple Instruction Format

• During instruction execution, an instruction is read into an


instruction register (IR) in the processor. The processor must
be able to extract the data from the various instruction fields
to perform the required operation.
• Refer the table of IAS instruction set
• Opcodes are represented by abbreviations, called
mnemonics, that indicate the operation. Common examples
include
• ADD-Add
• SUB- Subtract
• MUL- Multiply
• DIV- Divide
• LOAD-Load data from memory
• STOR- Store data to memory
• Operands are also represented symbolically.
For example, the instruction
ADD R, Y
may mean add the value contained in data
location Y to the contents of register R. In this
example, Y refers to the address of a location in
memory, and R refers to a particular register.
Instruction Types
• Consider a high-level language instruction that could be expressed
in a language such as BASIC or FORTRAN.
For example, X = X + Y
• This statement instructs the computer to add the value stored in Y
to the value stored in X and put the result in X. How might this be
accomplished with machine instructions? Let us assume that the
variables X and Y correspond to locations 513 and 514. If we
assume a simple set of machine instructions, this operation could
be accomplished with three instructions:
1. Load a register with the contents of memory location 513.
2. Add the contents of memory location 514 to the register.
3. Store the contents of the register in memory location 513
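The three-instruction sequence can be mimicked in Python. This is a minimal sketch: locations 513 and 514 come from the text, but the sample values 7 and 5 and the single accumulator variable AC are assumptions for illustration.

```python
# Minimal sketch of the three machine instructions for X = X + Y.
memory = {513: 7, 514: 5}   # X = 7, Y = 5 (assumed sample values)

AC = memory[513]            # 1. LOAD 513 : load register with contents of X
AC = AC + memory[514]       # 2. ADD  514 : add contents of Y to the register
memory[513] = AC            # 3. STOR 513 : store the register back into X

print(memory[513])          # 12
```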
we can categorize instruction types as follows:
• Data processing: Arithmetic and logic instructions.
• Data storage: Movement of data into or out of register and/or
memory locations.
• Data movement: I/O instructions.
• Control: Test and branch instructions.
• Arithmetic instructions provide computational capabilities for
processing numeric data.
• Logic (Boolean) instructions operate on the bits of a word as bits
rather than as numbers; thus, they provide capabilities for
processing any other type of data the user may wish to employ.
These operations are performed primarily on data in processor
registers.
• Memory instructions for moving data between memory and the
registers.
• I/O instructions are needed to transfer programs and data into
memory and the results of computations back out to the user.
• Test instructions are used to test the value of a data word or the
status of a computation.
• Branch instructions are then used to branch to a different set of
instructions depending on the decision made.
Number of Addresses
• Arithmetic and logic instructions will require the most
operands.
• Virtually all arithmetic and logic operations are either unary
(one source operand) or binary (two source operands).
• Thus, we would need a maximum of two addresses to
reference source operands.
• The result of an operation must be stored, suggesting a third
address, which defines a destination operand.
• Finally, after completion of an instruction, the next instruction
must be fetched, and its address is needed.
• In most architectures, many instructions have one, two, or
three operand addresses, with the address of the next
instruction being implicit (obtained from the program
counter).
Programs to Execute Y = (A − B) / (C + D × E)
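As a sketch (not from the source) of how this expression maps onto a zero-address machine, the stack program PUSH A; PUSH B; SUB; PUSH C; PUSH D; PUSH E; MUL; ADD; DIV; POP Y can be simulated with a Python list as the stack; the variable values are illustrative assumptions.

```python
# Zero-address (stack) program for Y = (A - B) / (C + D * E).
A, B, C, D, E = 20, 4, 2, 3, 2          # assumed sample values
stack, mem = [], {"A": A, "B": B, "C": C, "D": D, "E": E}

def push(name): stack.append(mem[name])
def op(f):      b = stack.pop(); a = stack.pop(); stack.append(f(a, b))

push("A"); push("B"); op(lambda a, b: a - b)   # PUSH A, PUSH B, SUB -> A-B
push("C"); push("D"); push("E")
op(lambda a, b: a * b)                         # MUL -> D*E
op(lambda a, b: a + b)                         # ADD -> C + D*E
op(lambda a, b: a / b)                         # DIV -> (A-B)/(C+D*E)
mem["Y"] = stack.pop()                         # POP Y

print(mem["Y"])                                # 2.0
```

Each binary operator pops its two source operands and pushes the result, so no instruction carries an explicit operand address except PUSH and POP.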
Addressing modes
• The address field or fields in a typical instruction
format are relatively small.
• We would like to be able to reference a large
range of locations in main memory or, for some
systems, virtual memory.
• To achieve this objective, a variety of addressing
techniques has been employed.
• They all involve some trade-off between address
range and/or addressing flexibility, on the one
hand, and the number of memory references in
the instruction and/or the complexity of address
calculation, on the other.
Addressing techniques
The most common addressing techniques, or
modes:
• Immediate
• Direct
• Indirect
• Register
• Register indirect
• Displacement
• Stack
• A = contents of an address field in the instruction
• R = contents of an address field in the instruction that
refers to a register
• EA = actual (effective) address of the location containing
the referenced operand
• (X) = contents of memory location X or register X
• One or more bits in the instruction format can be used as a
mode field. The value of the mode field determines which
addressing mode is to be used.
• The effective address will be either a main memory
address or a register. In a virtual memory system, the
effective address is a virtual address or a register.
Immediate Addressing
• The simplest form of addressing is immediate addressing,
in which the operand value is present in the instruction:
Operand = A
• This mode can be used to define and use constants or set
initial values of variables.
Examples-
• ADD 10 will increment the value stored in the
accumulator by 10.
• MOV R #20 initializes register R to a constant value 20.
• The advantage of immediate addressing is
that no memory reference other than the
instruction fetch is required to obtain the
operand, thus saving one memory or cache
cycle in the instruction cycle.
• The disadvantage is that the size of the
number is restricted to the size of the address
field, which, in most instruction sets, is small
compared with the word length.
Direct Addressing
• A very simple form of addressing is direct
addressing, in which the address field contains
the effective address of the operand:
EA = A
• The technique was common in earlier
generations of computers, but is not common on
contemporary architectures. It requires only one
memory reference and no special calculation.
The obvious limitation is that it provides only a
limited address space.
• It is also called as absolute addressing mode.
Example-
• ADD X will increment the value stored in the accumulator by the
value stored at memory location X.
AC ← AC + [X]
Indirect Addressing
• With direct addressing, the length of the address
field is usually less than the word length, thus
limiting the address range. One solution is to
have the address field refer to the address of a
word in memory, which in turn contains a full-
length address of the operand. This is known as
indirect addressing:
EA = (A)
The parentheses are to be interpreted as meaning
contents of.
Example-
• ADD X will increment the value stored in the accumulator by the
value stored at the memory location whose address is held in X.
AC ← AC + [[X]]
• Advantage of this approach is that for a word
length of N, an address space of 2^N is now
available. The disadvantage is that instruction
execution requires two memory references to
fetch the operand: one to get its address and a
second to get its value.
Register Addressing
• Register addressing is similar to direct
addressing. The only difference is that the
address field refers to a register rather than a
main memory address:
EA = R
• To clarify, if the contents of a register address
field in an instruction is 5, then register R5 is
the intended address, and the operand value
is contained in R5
Example-
• ADD R will increment the value stored in the accumulator by the
content of register R.
AC ← AC + [R]
• The advantages of register addressing are that
(1) only a small address field is needed in the
instruction, and
• (2) no time- consuming memory references
are required.
• The disadvantage of register addressing is that
the address space is very limited.
Register Indirect Addressing
• Just as register addressing is analogous to direct
addressing, register indirect addressing is analogous to
indirect addressing. In both cases, the only difference is
whether the address field refers to a memory location or a
register. Thus, for register indirect address
EA = (R)
• The advantages and limitations of register indirect
addressing are basically the same as for indirect addressing.
In both cases, the address space limitation (limited range of
addresses) of the address field is overcome by having that
field refer to a word-length location containing an address.
In addition, register indirect addressing uses one less
memory reference than indirect addressing.
Example-
• ADD R will increment the value stored in the accumulator by the
content of the memory location specified in register R.
AC ← AC + [[R]]
Displacement Addressing
• A very powerful mode of addressing combines
the capabilities of direct addressing and
register indirect addressing. It is known by a
variety of names depending on the context of
its use, but the basic mechanism is the same.
We will refer to this as displacement
addressing:
EA = A + (R)
• Displacement addressing requires that the instruction have
two address fields, at least one of which is explicit. The
value contained in one address field (value = A) is used
directly. The other address field, or an implicit reference
based on opcode, refers to a register whose contents are
added to A to produce the effective address.
• Three of the most common uses of displacement
addressing are: Relative addressing, Base-register
addressing and Indexing
Relative Addressing Mode
• In this addressing mode,
• Effective address of the operand is obtained
by adding the content of program counter
with the address part of the instruction.
Effective Address = Content of Program Counter
+ Address part of the instruction
Base Register Addressing Mode
• In this addressing mode,
• Effective address of the operand is obtained
by adding the content of base register with
the address part of the instruction.
Effective Address = Content of Base Register +
Address part of the instruction
Indexed Addressing Mode
• In this addressing mode,
• Effective address of the operand is obtained
by adding the content of index register with
the address part of the instruction.
Effective Address= Content of Index Register +
Address part of the instruction
Stack Addressing
• The final addressing mode that we consider is
stack addressing.
• A stack is a linear array of locations. It is
sometimes referred to as a pushdown list or last-
in-first-out queue.
• The stack is a reserved block of locations.
• Items are appended to the top of the stack so
that, at any given time, the block is partially filled.
• Associated with the stack is a pointer whose
value is the address of the top of the stack.
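The effective-address rules of the modes above can be collected into one sketch. This is illustrative and not from the source: the memory contents, register names, and the address-field value A = 100 are all assumptions.

```python
# Sketch: effective-address calculation for the common addressing modes.
memory = {100: 500, 500: 42, 600: 7}         # assumed memory contents
regs = {"R1": 600, "PC": 50, "BASE": 90}     # assumed register contents

A = 100  # address field of the instruction

operand_immediate = A                  # Immediate:         operand = A
ea_direct         = A                  # Direct:            EA = A
ea_indirect       = memory[A]          # Indirect:          EA = (A)
ea_register       = "R1"               # Register:          EA = R
ea_register_indir = regs["R1"]         # Register indirect: EA = (R)
ea_displacement   = A + regs["BASE"]   # Displacement:      EA = A + (R)
ea_relative       = A + regs["PC"]     # Relative:          EA = A + (PC)

print(memory[ea_indirect])        # 42 : two memory references in total
print(memory[ea_register_indir])  # 7  : only one memory reference
```

The two final prints illustrate the trade-off discussed above: indirect addressing costs an extra memory reference that register indirect addressing avoids.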
Processor organization
• To understand the organization of the processor, let us
consider the requirements placed on the processor, the
things that it must do:
• Fetch instruction: The processor reads an instruction
from memory (register, cache, main memory).
• Interpret instruction: The instruction is decoded to
determine what action is required.
• Fetch data: The execution of an instruction may
require reading data from memory or an I/O module.
• Process data: The execution of an instruction may
require performing some arithmetic or logical
operation on data.
• Write data: The results of an execution may require
writing data to memory or an I/O module.
• To do these things, it should be clear that the
processor needs to store some data
temporarily. It must remember the location of
the last instruction so that it can know where
to get the next instruction. It needs to store
instructions and data temporarily while an
instruction is being executed.
Figure: Internal Structure of the CPU
• The major components of the processor are an
arithmetic and logic unit (ALU) and a control unit
(CU).
• The ALU does the actual computation or
processing of data.
• The control unit controls the movement of data
and instructions into and out of the processor,
and controls the operation of the ALU.
• In addition, the figure shows a minimal internal
memory, consisting of a set of storage locations,
called registers.
• The data transfer and logic control paths are
indicated, including an element labeled internal
processor bus
• This element is needed to transfer data
between the various registers and the ALU,
because the ALU in fact operates only on data
in the internal processor memory.
• Note the similarity between the internal
structure of the computer as a whole, and the
internal structure of the processor.
• In both cases, there is a small collection of
major elements (computer: processor, I/O,
memory; processor: control unit, ALU,
registers) connected by data paths.
Floating point representation (worked example)

A floating-point number has three parts:
1) Mantissa
2) Base
3) Exponent

Number                          Mantissa   Base   Exponent
9 × 10^8                        9          10     8
436.784 = 0.436784 × 10^3       436784     10     3

IEEE 754 format

Single precision - 32 bits:
Sign (1 bit) | Exponent (8 bits) | Mantissa (23 bits)

Double precision - 64 bits:
Sign (1 bit) | Exponent (11 bits) | Mantissa (52 bits)

Example: Represent (1259.125)10 in single and double precision format.

Step 1: Convert decimal to binary.
(1259)10 = (10011101011)2
(0.125)10 = (0.001)2
(1259.125)10 = (10011101011.001)2

Step 2: Normalize the number (shift the binary point).
N = 1.0011101011001 × 2^10

Step 3: Single-precision (SP) format, bias 127.
E - 127 = 10, so E = 137 = (10001001)2
Sign = 0 (positive)
0 | 10001001 | 00111010110010000000000
1 bit | 8 bits | 23 bits

Step 4: Double-precision (DP) format, bias 1023.
E - 1023 = 10, so E = 1033 = (10000001001)2
Sign = 0 (positive)
0 | 10000001001 | 0011101011001 followed by zeros
1 bit | 11 bits | 52 bits
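The handwritten worked example here (representing 1259.125 in IEEE 754) can be cross-checked with Python's standard struct module; the packed single-precision word should show biased exponent 137 = 10001001 in binary. This check is an addition for illustration, not part of the source notes.

```python
import struct

# Pack 1259.125 as an IEEE 754 single and pull the fields back out.
(bits,) = struct.unpack(">I", struct.pack(">f", 1259.125))
sign = bits >> 31
exponent = (bits >> 23) & 0xFF        # biased exponent, expected 137
fraction = bits & 0x7FFFFF            # 23-bit fraction field

print(sign, format(exponent, "08b"), format(fraction, "023b"))
# 0 10001001 00111010110010000000000
```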
INSTRUCTION FORMATS
• An instruction format defines the layout of the
bits of an instruction, in terms of its constituent
fields.
• An instruction format must include an opcode
and, implicitly or explicitly, zero or more
operands.
• Each explicit operand is referenced using one of
the addressing modes.
• The format must, implicitly or explicitly, indicate
the addressing mode for each operand.
Instruction Length
• The most basic design issue to be faced is the
instruction format length.
• This decision is affected by memory size,
memory organization, bus structure,
processor complexity, and processor speed.
• This decision determines the richness and
flexibility of the machine as seen by the
assembly-language programmer.
• The most obvious trade-off here is between the
desire for a powerful instruction set and a need
to save space.
• Programmers want more opcodes, more
operands, more addressing modes, and greater
address range.
• More opcodes and more operands make life
easier for the programmer, because shorter
programs can be written to accomplish given
tasks.
• Similarly, more addressing modes give the
programmer greater flexibility in implementing
certain functions, such as table manipulations
and multiple- way branching.
• With the increase in main memory size and
the increasing use of virtual memory,
programmers want to be able to address
larger memory ranges.
• All of these things (opcodes, operands,
addressing modes, address range) require bits
and push in the direction of longer instruction
lengths.
• But longer instruction length may be wasteful.
• A 64-bit instruction occupies twice the space
of a 32-bit instruction.
• Either the instruction length should be equal to
the memory-transfer length (in a bus system,
databus length) or one should be a multiple of
the other.
• Memory transfer rate: This rate has not kept up
with increases in processor speed.
• If the processor can execute instructions faster
than it can fetch them, memory becomes the bottleneck.
• One solution to this problem is to use cache
memory.
• Another is to use shorter instructions.
• Thus, 16-bit instructions can be fetched at twice
the rate of 32-bit instructions, but probably can
be executed less than twice as rapidly.
Allocation of Bits:
• For a given instruction length, there is clearly a trade-
off between the number of opcodes and the power of
the addressing capability. More opcodes mean more bits
in the opcode field; for an instruction format of a given
length, this reduces the number of bits available for
addressing.
• Variable-length opcodes can be used to ease this trade-off.
• In this approach, there is a minimum opcode length
but, for some opcodes, additional operations may be
specified by using additional bits in the instruction. For
a fixed-length instruction, this leaves fewer bits for
addressing. Thus, this feature is used for those
instructions that require fewer operands and/or less
powerful addressing.
Interrelated factors
• Number of addressing modes
• Number of operands
• Register versus memory
• Number of register sets
• Address range
• Address granularity
REGISTER ORGANIZATION
• Registers are the smallest and
the fastest accessible memory units in the central
processing unit (CPU).
• According to memory hierarchy, the registers in the
processor, function a level above the main
memory and cache memory.
• The registers used by the central unit are also called
as processor registers.
• Register organization is the arrangement of the registers in
the processor.
• The processor designers decide the organization of the
registers in a processor.
• A register can hold the instruction, address location, or
operands. Sometimes, the instruction has register as a part
of itself.
• Different processors may have different
register organization.
• Depending on the roles played by the registers
they can be categorized into two types, user-
visible register and control and status register.
Definition and purpose
• User-visible registers: Enable the machine- or
assembly language programmer to minimize
main memory references by optimizing use of
registers.
• Control and status registers: Used by the
control unit to control the operation of the
processor and by privileged, operating system
programs to control the execution of
programs.
User-Visible Registers
• A user-visible register is one that may be
referenced by means of the machine language
that the processor executes.
• These registers are visible to the assembly or
machine language programmers and they use
them effectively to minimize the memory
references in the instructions.
• These registers can only be referenced using
the machine or assembly language.
General-purpose registers
• The general-purpose registers can hold either
addresses or data.
• A general-purpose register also accepts
intermediate results in the course of program
execution.
• The programmers can restrict some of the
general-purpose registers to specific functions.
• Like, some registers are specifically used for stack
operations or for floating-point operations.
• The general-purpose register can also be
employed for the addressing functions.
Data Register
• The term itself describes that these registers
are employed to hold the data. But the
programmers can’t use these registers
for calculating operand address.
Address Register
• Now, the address registers contain the address of
an operand or it can also act as a general-
purpose register.
• An address register may be dedicated to a
certain addressing mode.
(a) Segment Pointer Register: A memory divided into
segments requires a segment register to hold the
base address of the segment. There can be
multiple segment registers: one segment register
can be employed to hold the base address of the
segment occupied by the operating system, while
another can hold the base address of the segment
allotted to the current process.
(b) Index Register: The index register is
employed for indexed addressing, and its initial
value is usually 0. Generally, it is used for traversing
memory locations. After each reference, the
index register is incremented or decremented
by 1, depending upon the nature of the
operation. Sometimes the index register may
be auto-indexed.
(c) Stack Pointer Register: The stack pointer holds
the address of the top of the stack. This
allows implicit addressing; that is, push, pop,
and other stack instructions need not contain an
explicit stack operand.
Condition Code
• Condition codes are flag bits that are
part of a control register. The condition
codes are set by the processor as a result of an
operation, and they are implicitly read through
machine instructions.
• Programmers cannot directly alter the
condition codes. Generally, the condition
codes are tested during conditional branch
operations.
Design issues
• Use completely general- purpose registers or
to specialize their use.
• With the use of specialized registers, it can
generally be implicit in the opcode which type
of register a certain operand specifier refers
to. The operand specifier must only identify
one of a set of specialized registers rather
than one out of all the registers, thus saving
bits. On the other hand, this specialization
limits the programmer’s flexibility.
• Number of registers to be provided: either all
general purpose, or split into data plus address
registers. Again, this affects instruction set design
because more registers require more operand specifier
bits.
• Between 8 and 32 registers appears optimum.
• Register length: Registers that must hold
addresses obviously must be at least long
enough to hold the largest address. Data
registers should be able to hold values of most
data types.
Control and Status Registers
• The control and status register holds
the address or data that is important
to control the processor’s operation.
• The most important thing is that these
registers are not visible to the users.
• All the control and status registers
are essential for the execution of an
instruction.
Types
1. Program Counter
• The program counter is a processor register that holds the address
of the instruction that has to be executed next. The processor
updates the program counter with the address of the next
instruction to be fetched for execution.
2. Instruction Register
• The instruction register holds the instruction that is currently
being executed. It helps in analysing the opcode and operand
present in the instruction.

3. Memory Address Register (MAR)


• Memory address register holds the address of a memory location.

4. Memory Buffer Register (MBR)


• The memory buffer register holds the data that has to be written to
a memory location or it holds the data that is recently been read.
• The memory address registers (MAR) and memory buffer registers
(MBR) are used to move the data between processor and memory.
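How PC, MAR, MBR and IR cooperate during instruction fetch can be sketched as follows. This toy loop is an illustration added here, not from the source; storing instructions as (mnemonic, address) tuples is an assumption made for readability.

```python
# Toy fetch loop: PC -> MAR -> memory read -> MBR -> IR, then PC + 1.
memory = {0: ("LOAD", 513), 1: ("ADD", 514), 2: ("STOR", 513)}

PC = 0
for _ in range(len(memory)):
    MAR = PC              # address of the next instruction goes to MAR
    MBR = memory[MAR]     # the memory read returns into MBR
    IR = MBR              # the instruction moves to IR for decoding
    PC = PC + 1           # PC now points at the following instruction
    print(IR)
```

After three iterations PC holds 3 and IR holds the last instruction fetched.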
• Apart from the above registers, several processors have
a register termed as Program Status Word (PSW). As
the word suggests it contains the status information.
• The fields included in Program Status Word (PSW):
– Sign: This field has the resultant sign bit of the last
arithmetic operation performed.
– Zero: This field is set when the result of the operation
is zero.
– Carry: This field is set when an arithmetic operation results
in a carry out of, or a borrow into, the high-order bit.
– Equal: If a logical comparison results in equality, the Equal
bit is set.
– Overflow: This bit indicates the arithmetic overflow.
– Interrupt: This bit is set to enable or disable the interrupts.
– Supervisor: This bit indicates whether the processor is
executing in the supervisor mode or the user mode.
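The status fields can be sketched for a single n-bit addition. This is an illustrative model, not from the source: the function name, the 8-bit width, and the S/Z/C/V labels are assumptions.

```python
def psw_after_add(a, b, bits=8):
    """Sketch: Sign/Zero/Carry/Overflow PSW bits after an n-bit add."""
    mask = (1 << bits) - 1
    raw = (a & mask) + (b & mask)
    result = raw & mask
    sign = (result >> (bits - 1)) & 1            # Sign field
    zero = 1 if result == 0 else 0               # Zero field
    carry = 1 if raw > mask else 0               # Carry out of the MSB
    # Overflow: the operands share a sign that differs from the result's
    sa, sb = (a >> (bits - 1)) & 1, (b >> (bits - 1)) & 1
    overflow = 1 if (sa == sb) and (sa != sign) else 0
    return {"S": sign, "Z": zero, "C": carry, "V": overflow}

print(psw_after_add(0x7F, 0x01))  # 127 + 1 overflows signed 8-bit arithmetic
```

Note how 0x7F + 0x01 sets Sign and Overflow but not Carry, while 0xFF + 0x01 sets Zero and Carry but not Overflow.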
