Digital Logic
Nowadays digital systems are used in a wide variety of industrial and consumer products, such as automated industrial machinery, pocket calculators, microprocessors, digital computers, digital watches, TV games, signal processing, and so on.
Analog systems process information that varies continuously, i.e., they process time-varying signals that can take on any value across a continuous range of voltage, current, or some other physical parameter.
Digital systems use digital circuits that process digital signals, which can take only the values 0 or 1 in the binary system.
Advantages of Digital Systems over Analog Systems
1. Ease of programmability
Digital systems can be used for different applications by simply changing the program, without additional changes in hardware.
2. Reduction in cost of hardware
The cost of hardware is reduced by the use of digital components, and this has been possible due to advances in IC technology. With ICs, the number of components that can be placed in a given area of silicon is increased, which helps in cost reduction.
3. High speed
Digital processing of data ensures a high speed of operation, which is possible due to advances in digital signal processing.
4. High reliability
Digital systems are highly reliable; one of the reasons for this is the use of error-correcting codes.
5. Ease of design
The design of digital systems, which uses Boolean algebra and other digital techniques, is easier than analog design.
6. Reproducibility of results
Since the output of digital systems, unlike that of analog systems, is independent of temperature, noise, humidity and other characteristics of the components, the reproducibility of results is higher in digital systems than in analog systems.
Disadvantages of Digital Systems
1. Digital circuits use more energy than analog circuits to accomplish the same tasks, and thus produce more heat as well.
2. Digital circuits are often fragile, in that if a single piece of digital data is lost or misinterpreted, the meaning of large blocks of related data can completely change.
3. A digital computer manipulates discrete elements of information by means of a binary code, so analog quantities must be sampled and quantized, and quantization error is introduced during analog signal sampling.
NUMBER SYSTEM
A number system is a basis for counting various items. Modern computers communicate and operate with binary numbers, which use only the digits 0 and 1. The basic number system used by humans is the decimal number system.
For example, consider the decimal number 18. This number is represented in binary as 10010. We observe that the binary number system takes more digits to represent a decimal number, and for large numbers we have to deal with very long binary strings. This fact gave rise to other number systems with larger bases.
The base decides the total number of digits available in that number system. The first digit in any number system is always zero and the last digit is always base - 1.
The binary number system has a radix of 2. As r = 2, only two digits are needed, and these are 0 and 1. In the binary system each weight is expressed as a power of 2.
The left-most bit, which has the greatest weight, is called the Most Significant Bit (MSB), and the right-most bit, which has the least weight, is called the Least Significant Bit (LSB).
For example: 1001.01₂ = (1 × 2³) + (0 × 2²) + (0 × 2¹) + (1 × 2⁰) + (0 × 2⁻¹) + (1 × 2⁻²)
1001.01₂ = (1 × 8) + (0 × 4) + (0 × 2) + (1 × 1) + (0 × 0.5) + (1 × 0.25)
1001.01₂ = 9.25₁₀
The decimal system has ten symbols: 0,1,2,3,4,5,6,7,8,9. In other words, it has a base of 10.
Digital systems operate only on binary numbers. Since binary numbers are often very long, two
shorthand notations, octal and hexadecimal, are used for representing large binary numbers. Octal systems
use a base or radix of 8. It uses first eight digits of decimal number system. Thus it has digits from 0 to 7.
The hexadecimal numbering system has a base of 16. There are 16 symbols. The decimal digits 0 to
9 are used as the first ten digits as in the decimal system, followed by the letters A, B, C, D, E and F, which
represent the values 10, 11,12,13,14 and 15 respectively.
Human beings use the decimal number system, while computers use the binary number system; therefore, it is necessary to convert decimal numbers into their equivalent binary representation.
Octal to Decimal Conversion
Ex: 4057.06₈
4057.06₈ = 4×8³ + 0×8² + 5×8¹ + 7×8⁰ + 0×8⁻¹ + 6×8⁻²
         = 2048 + 0 + 40 + 7 + 0 + 0.0937
         = 2095.0937₁₀
Decimal to Octal Conversion
Ex: Convert 378.93₁₀ to octal.
Integer part (repeated division by 8; the remainders, read from bottom to top, give the octal digits):
378 ÷ 8 = 47, remainder 2
 47 ÷ 8 =  5, remainder 7
  5 ÷ 8 =  0, remainder 5
378₁₀ = 572₈
Fractional part (repeated multiplication by 8; the integer parts, read from top to bottom, give the octal digits):
0.93 × 8 = 7.44 → 7
0.44 × 8 = 3.52 → 3
0.52 × 8 = 4.16 → 4
0.16 × 8 = 1.28 → 1
0.93₁₀ ≈ 0.7341₈
378.93₁₀ ≈ 572.7341₈
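As a cross-check of the two procedures above (repeated division for the integer part, repeated multiplication for the fraction), here is a small Python sketch; the function names and the choice of four fractional digits are illustrative, not part of the original text:

```python
# Sketch of decimal <-> base conversion by repeated division/multiplication.
# Function names and the 4-digit fractional precision are illustrative choices.

def decimal_to_base(value, base, frac_digits=4):
    """Convert a non-negative decimal number to a string in the given base."""
    digits = "0123456789ABCDEF"
    int_part = int(value)
    frac_part = value - int_part

    # Integer part: repeated division, remainders read bottom-up.
    int_str = "" if int_part else "0"
    while int_part > 0:
        int_part, r = divmod(int_part, base)
        int_str = digits[r] + int_str

    # Fractional part: repeated multiplication, integer parts read top-down.
    frac_str = ""
    for _ in range(frac_digits):
        frac_part *= base
        d = int(frac_part)
        frac_str += digits[d]
        frac_part -= d
    return int_str + ("." + frac_str if frac_str else "")

def base_to_decimal(text, base):
    """Convert a string in the given base (with optional fraction) to decimal."""
    digits = "0123456789ABCDEF"
    int_text, _, frac_text = text.partition(".")
    value = 0.0
    for ch in int_text:                 # positional weights base^k
        value = value * base + digits.index(ch)
    weight = 1.0 / base
    for ch in frac_text:                # weights base^-1, base^-2, ...
        value += digits.index(ch) * weight
        weight /= base
    return value

print(decimal_to_base(378.93, 8))       # 572.7341, matching the worked example
print(base_to_decimal("572.7341", 8))   # approximately 378.93
print(decimal_to_base(2598.675, 16))    # A26.ACCC
```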
Hexadecimal to Decimal Conversion
Ex: 5C7₁₆ = 5×16² + 12×16¹ + 7×16⁰
          = 1280 + 192 + 7
          = 1479₁₀
viii) Decimal to Hexadecimal Conversion
Ex: 2598.675₁₀
Integer part (repeated division by 16; remainders read from bottom to top):
2598 ÷ 16 = 162, remainder 6
 162 ÷ 16 =  10, remainder 2
  10 ÷ 16 =   0, remainder 10 (A)
2598₁₀ = A26₁₆
Fractional part (repeated multiplication by 16; integer parts read from top to bottom):
0.675 × 16 = 10.8 → A
0.800 × 16 = 12.8 → C
0.800 × 16 = 12.8 → C
0.800 × 16 = 12.8 → C
0.675₁₀ ≈ 0.ACCC₁₆
2598.675₁₀ ≈ A26.ACCC₁₆
The simplest way is to first convert the given octal no. to binary & then the binary no. to
hexadecimal.
Ex: 756.6038
7 5 6 . 6 0 3
111 101 110 . 110 000 011
0001 1110 1110 . 1100 0001 1000
1 E E . C 1 8
First convert the given hexadecimal no. to binary & then the binary no. to octal.
Ex: B9F.AE16
B 9 F . A E
1011 1001 1111 . 1010 1110
101 110 011 111 . 101 011 100
5 6 3 7 . 5 3 4
=5637.534
Complements:
In digital computers, complements are used to simplify the subtraction operation and for logical manipulation. There are two types of complements used in each radix system: the r's complement and the (r-1)'s complement.
Ex (sign-magnitude representation):
0 101001 → sign bit = 0 (positive), magnitude = 101001 = 41, so the number is +41
1 101001 → sign bit = 1 (negative), magnitude = 101001 = 41, so the number is -41
Note: in sign-magnitude form, extra manipulation is necessary to add a positive number to a negative number.
Ex:
Given no.   Sign mag form   2's comp form   1's comp form
01101       +13             +13             +13
010111      +23             +23             +23
10111       -7              -9              -8
1101010     -42             -22             -21
Special case in 2's comp representation:
Whenever a signed number has a 1 in the sign bit and all 0s for the magnitude bits, its decimal equivalent is -2ⁿ, where n is the number of bits in the magnitude.
Ex: 1000 = -8 and 10000 = -16
Decimal   Sign 2's comp form   Sign 1's comp form   Sign mag form
+7        0111                 0111                 0111
+6        0110                 0110                 0110
+5        0101                 0101                 0101
+4        0100                 0100                 0100
+3        0011                 0011                 0011
+2        0010                 0010                 0010
+1        0001                 0001                 0001
+0        0000                 0000                 0000
-0        --                   1111                 1000
-1        1111                 1110                 1001
-2        1110                 1101                 1010
-3        1101                 1100                 1011
-4        1100                 1011                 1100
-5        1011                 1010                 1101
-6        1010                 1001                 1110
-7        1001                 1000                 1111
-8        1000                 --                   --
Methods of obtaining the 2's complement of a number:
The 2's complement can be obtained in three ways:
1. By obtaining the 1's complement of the given number (changing all 0s to 1s and all 1s to 0s) and then adding 1.
2. By subtracting the given n-bit number N from 2ⁿ.
3. Starting at the LSB, copying down each bit up to and including the first 1 encountered, and complementing the remaining bits.
Ex: Express -45 in 8-bit 2's complement form.
+45 = 00101101
Method I: take the 1's complement of 00101101 and then add 1:
  00101101
  11010010   1's complement
 +        1
 ----------
  11010011
Method II: subtract the given number from 2ⁿ:
  2⁸  =  100000000
  -45 = -00101101
 -----------------
         11010011
Method III: copy the bits from the LSB up to and including the first 1 (here just the LSB), and complement the remaining bits:
  00101101 → 11010011
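Method I can be sketched in a few lines of Python for a fixed word size; the function name and the 8-bit width are illustrative:

```python
# Sketch: n-bit 2's complement of a magnitude, by 1's complement + 1 (Method I).
# The function name and the 8-bit example width are illustrative.

def twos_complement(value, bits=8):
    """Return (+value, its 1's complement, its 2's complement) as bit strings."""
    mag = format(value, f"0{bits}b")                              # +value in n bits
    ones = "".join("1" if b == "0" else "0" for b in mag)         # 1's complement
    twos = format((int(ones, 2) + 1) % (1 << bits), f"0{bits}b")  # add 1
    return mag, ones, twos

mag, ones, twos = twos_complement(45, 8)
print("+45      =", mag)    # 00101101
print("1's comp =", ones)   # 11010010
print("-45      =", twos)   # 11010011, the 2's complement form

# Method II for comparison: subtract from 2^n.
print(format((1 << 8) - 45, "08b"))   # also 11010011
```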
Ex: Express -73.75 in 12-bit 2's complement form.
+73.75 = 01001001.1100
Method I:
  01001001.1100
  10110110.0011   1's complement
 +            1
 --------------
  10110110.0100   is the 2's complement
Method II:
  2⁸     =  100000000.0000
  -73.75 = -01001001.1100
 -------------------------
             10110110.0100
Method III:
  Original no.:                  01001001.1100
  Copy up to the first 1:                  100
  Complement the remaining bits: 10110110.0
  Result:                        10110110.0100
Ex: Add -14 to +46 using 8-bit 2's complement arithmetic.
+14 = 00001110
-14 = 11110010   2's complement
  +46 =  00101110
  -14 = +11110010   2's complement form of -14
  -----------------
      (1)00100000   ignore the carry
Ignore the carry. The MSB is 0, so the result is positive and is in normal binary form. So the result is +00100000 = +32.
Ex: Add -75 to +26 using 8-bit 2's complement arithmetic.
+75 = 01001011
-75 = 10110101   2's complement
  +26 =  00011010
  -75 = +10110101   2's complement form of -75
  -----------------
         11001111
There is no carry and the MSB is 1, so the result is negative and is in 2's complement form. The magnitude is the 2's complement of 11001111, i.e., 00110001 = 49. So the result is -49.
+77.25 = 01001101.0100
-77.25 = 10110010.1011 + 1 = 10110010.1100   2's complement
In general, after adding numbers in 2's complement form: if there is no carry and the MSB is 1, the result is negative and is in 2's complement form; if the MSB is 0, the result is positive and is in normal binary form.
Binary codes
Binary codes are codes in which numbers (or other information) are represented by groups of binary digits, often with some modification of the straight binary representation. Binary codes are classified as:
Weighted binary codes
Non-weighted codes
Weighted binary codes are those which obey the positional weighting principle: each position of the number represents a specific weight. The binary counting sequence is an example.
Reflective Code
A code is said to be reflective when code for 9 is complement for the code for 0, and
so is for 8 and 1 codes, 7 and 2, 6 and 3, 5 and 4. Codes 2421, 5211, and excess-3 are
reflective, whereas the 8421 code is not.
Sequential Codes
A code is said to be sequential when two subsequent codes, seen as numbers in binary
representation, differ by one. This greatly aids mathematical manipulation of data. The 8421 and
Excess-3 codes are sequential, whereas the 2421 and 5211 codes are not.
Non weighted codes are codes that are not positionally weighted. That is, each
position within the binary number is not assigned a fixed value. Ex: Excess-3 code
Excess-3 Code
Excess-3 is a non weighted code used to express decimal numbers. The code derives
its name from the fact that each binary code is the corresponding 8421 code plus
0011(3).
Gray Code
The gray code belongs to a class of codes called minimum change codes, in
which only one bit in the code changes when moving from one code to the next. The
Gray code is non-weighted code, as the position of bit does not contain any weight.
The Gray code is a reflective digital code which has the special property that any two subsequent code words differ by only one bit; it is therefore also called a unit-distance code. The Gray code has a special place among digital codes.
8421 BCD Code (Natural BCD Code)
Each decimal digit 0 through 9 is coded by a 4-bit binary number. This is called the natural BCD code because of the 8, 4, 2, 1 weights attached to the bit positions. It is a weighted code and is also sequential, so it is useful for mathematical operations. The advantage of this code is its ease of conversion to and from decimal. It is less efficient than pure binary, since it requires more bits.
Ex: 14 → 1110 in binary, but 0001 0100 in BCD.
The disadvantage of the BCD code is that arithmetic operations are more complex than they are in pure binary. There are six illegal combinations, 1010, 1011, 1100, 1101, 1110 and 1111, in these codes; they are not part of the 8421 BCD code system. A further disadvantage of the 8421 code is that the rules of binary addition do not apply to an entire 8421 number, but only to the individual 4-bit groups.
BCD Addition:
38 0011 1000
No carry , no illegal code .This is the corrected sum
(b). 679.6 + 536.8
  679.6 =  0110 0111 1001 . 0110   in BCD
 +536.8 = +0101 0011 0110 . 1000   in BCD
 -----------------------------------------
           1011 1010 1111 . 1110   all four groups are illegal codes
          +0110 0110 0110 . 0110   add 0110 to each group
 -----------------------------------------
with the inter-group carries propagated, the corrected sum is
      0001 0010 0001 0110 . 0100
        1    2    1    6  .  4     i.e., 1216.4
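The group-by-group correction above (add 0110 whenever a 4-bit sum exceeds 9 or produces a carry) can be sketched in Python; the function name and the digit-list representation are illustrative:

```python
# Sketch of BCD addition: add 4-bit groups from the LSD and add 0110 (6)
# to any group whose sum exceeds 9. Names are illustrative.

def bcd_add(a_digits, b_digits):
    """Add two numbers given as lists of decimal digits (MSD first)."""
    result, carry = [], 0
    for da, db in zip(reversed(a_digits), reversed(b_digits)):
        s = da + db + carry            # binary sum of the two 4-bit groups
        if s > 9:                      # illegal code or carry out of the group
            s += 6                     # add 0110 to skip the six invalid states
            carry, s = 1, s & 0b1111   # keep the low 4 bits, carry into next group
        else:
            carry = 0
        result.append(s)
    if carry:
        result.append(carry)
    return list(reversed(result))

# 679.6 + 536.8, ignoring the decimal point (digits 6,7,9,6 and 5,3,6,8):
print(bcd_add([6, 7, 9, 6], [5, 3, 6, 8]))   # [1, 2, 1, 6, 4] -> 1216.4
```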
BCD Subtraction:
Performed by subtracting the digits of each 4-bit group of the subtrahend from the digits of the corresponding 4-bit group of the minuend in binary, starting from the LSD. If there is a borrow from the next group, then 6₁₀ (0110) is subtracted from the difference term of this group.
(a) 38 - 15
  38 =  0011 1000   in BCD
 -15 = -0001 0101   in BCD
 --------------------------
  23 =  0010 0011
No borrow, so this is the correct difference.
(b) 206.7 - 147.8
Form the 9's (or 10's) complement of the decimal subtrahend, encode that number in the 8421 code, and add the resulting BCD numbers.
Ex: 305.5 - 168.8
  305.5 =  305.5
 -168.8 = +831.1    9's complement of 168.8
 ---------------
        (1)136.6
             +1     end-around carry
 ---------------
          136.7     corrected difference
In BCD:
  305.5 =  0011 0000 0101 . 0101
 +831.1 = +1000 0011 0001 . 0001    9's complement of 168.8 in BCD
 --------------------------------
           1011 0011 0110 . 0110    1011 is an illegal code
          +0110                     add 0110 to that group
 --------------------------------
       (1) 0001 0011 0110 . 0110
                             +1     end-around carry
 --------------------------------
           0001 0011 0110 . 0111  = 136.7
Excess-3 Addition:
Add the XS-3 numbers by adding the 4-bit groups in each column, starting from the LSD. If there is no carry out from the addition of a 4-bit group, subtract 0011 from the sum term of that group (because when two decimal digits are added in XS-3 and there is no carry, the result is in XS-6). If there is a carry out, add 0011 to the sum term of that group (because when there is a carry, the invalid states are skipped and the result is in normal binary).
EX: 37 + 28
   37 →  0110 1010   in XS-3
  +28 → +0101 1011   in XS-3
  ---------------------
         1100 0101    a carry was generated from the LSD group
        -0011 +0011   subtract 0011 (no carry), add 0011 (carry)
  ---------------------
         1001 1000  = 65 in XS-3
Excess-3 Subtraction:
Subtract the XS-3 numbers by subtracting each 4-bit group of the subtrahend from the corresponding 4-bit group of the minuend, starting from the LSD. If there is no borrow from the next 4-bit group, add 0011 to the difference term of that group (because when decimal digits are subtracted in XS-3 and there is no borrow, the result is in normal binary). If there is a borrow, subtract 0011 from the difference term (because taking a borrow is equivalent to adding six invalid states, so the result is in XS-6).
Ex: 267-175
  687      687
 -348  →  +651    9's complement of 348
 ----------------
        (1)338
            +1    end-around carry
 ----------------
           339
Gray code is a non-weighted code and is not suitable for arithmetic operations. It is not a BCD code. It is a cyclic code, because successive code words differ in only one bit position, i.e., it is a unit-distance code; it is the most popular of the unit-distance codes. It is also a reflective code, i.e., it is both reflective and unit-distance. The n least significant bits for 2ⁿ through 2ⁿ⁺¹-1 are the mirror images of those for 0 through 2ⁿ-1. An N-bit Gray code can be obtained by reflecting an (N-1)-bit code about an axis at the end of the code, and putting an MSB of 0 above the axis and an MSB of 1 below the axis.
Reflection of Gray codes:
Gray Code
1 bit   2 bit   3 bit   4 bit     Decimal   4-bit binary
0       00      000     0000      0         0000
1       01      001     0001      1         0001
        11      011     0011      2         0010
        10      010     0010      3         0011
                110     0110      4         0100
                111     0111      5         0101
                101     0101      6         0110
                100     0100      7         0111
                        1100      8         1000
                        1101      9         1001
                        1111      10        1010
                        1110      11        1011
                        1010      12        1100
                        1011      13        1101
                        1001      14        1110
                        1000      15        1111
Binary to Gray Conversion:
Procedure: EX-OR the bits of the binary number with those of the binary number shifted one position to the right. The LSB of the shifted number is discarded, and the MSB of the Gray code number is the same as the MSB of the original binary number.
EX: 1001
(a). Binary : 1 → 0 → 0 → 1
     Gray   : 1   1   0   1
(b). Binary         : 1 0 0 1
     Shifted binary :   1 0 0 (1 discarded)
     ------------------------
     Gray           : 1 1 0 1
Gray to Binary Conversion:
If the Gray code word is Gn Gn-1 … G1 and its binary equivalent is Bn Bn-1 … B1, then the binary bits are obtained from the Gray bits as Bn = Gn and Bi = Bi+1 ⊕ Gi, i.e., each binary bit is the EX-OR of the previously obtained binary bit and the current Gray bit. To convert a number in any system into Gray code, first convert it into binary and then convert the binary to Gray. To convert a Gray number into any required number system, first convert the Gray code into binary and then convert the binary into the required system.
EX: Gray  : 1 1 0 1
    Binary: 1 0 0 1
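Both conversions reduce to simple XOR operations; a small Python sketch (function names are illustrative):

```python
# Sketch of binary <-> Gray conversion. Function names are illustrative.

def binary_to_gray(b):
    """Gray code of integer b: XOR b with b shifted one position to the right."""
    return b ^ (b >> 1)

def gray_to_binary(g):
    """Recover binary from Gray: each binary bit is the XOR of all higher Gray bits."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

print(format(binary_to_gray(0b1001), "04b"))   # 1101, as in the example above
print(format(gray_to_binary(0b1101), "04b"))   # 1001
```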
XS-3 Gray Code:
In the normal Gray code, the bit patterns for 0 (0000) and 9 (1101) do not have a unit distance between them, i.e., they differ in more than one position. In the XS-3 Gray code, each decimal digit is encoded with the Gray code pattern of the decimal digit that is greater by 3. It has a unit distance between the patterns for 0 and 9.
Decimal digit Xs-3 gray code Decimal digit Xs-3 gray code
0 0010 5 1100
1 0110 6 1101
2 0111 7 1111
3 0101 8 1110
4 0100 9 1010
Binary codes block diagram
Error-Detecting Codes: When binary data is transmitted and processed, it is susceptible to noise that can alter or distort its contents: 1s may get changed to 0s and 0s to 1s. Because digital systems must be accurate to the digit, errors can pose a problem. Several schemes have been devised to detect the occurrence of a single-bit error in a binary word, so that whenever such an error occurs the concerned binary word can be corrected and retransmitted.
Parity: The simplest technique for detecting errors is that of adding an extra bit, known as a parity bit, to each word being transmitted. There are two types of parity: odd parity and even parity. For odd parity, the parity bit is set to a 0 or a 1 at the transmitter such that the total number of 1 bits in the word, including the parity bit, is an odd number. For even parity, the parity bit is set to a 0 or a 1 at the transmitter such that the total number of 1 bits in the word, including the parity bit, is an even number.
Ans:
(a) The number of 1s in the word is even (6), so the word has an error.
(b) The number of 1s in the word is even (4), so the word has an error.
(c) The number of 1s in the word is odd (5), so there is no error.
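Generating and checking a parity bit reduces to counting 1s; a minimal Python sketch under the odd-parity convention used above (function names are illustrative):

```python
# Sketch of parity generation and checking. Names are illustrative.

def parity_bit(word, odd=True):
    """Return the parity bit that makes the total number of 1s odd (or even)."""
    ones = word.count("1")
    if odd:
        return "0" if ones % 2 == 1 else "1"
    return "1" if ones % 2 == 1 else "0"

def has_error(word_with_parity, odd=True):
    """For odd parity, an even count of 1s signals a single-bit error."""
    ones = word_with_parity.count("1")
    return (ones % 2 == 0) if odd else (ones % 2 == 1)

data = "1011001"
tx = data + parity_bit(data)             # transmitted word with appended parity bit
print(tx, has_error(tx))                 # no error detected
corrupted = "0" + tx[1:]                 # flip the first bit
print(corrupted, has_error(corrupted))   # error detected
```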
Checksums:
Simple parity can't detect two errors within the same word. To overcome this, a sort of two-dimensional parity can be used. As each word is transmitted, it is added to the sum of the previously transmitted words, and the sum is retained at the transmitter end. At the end of the transmission, the sum (called the checksum) up to that time is sent to the receiver. The receiver can check its sum against the transmitted sum. If the two sums are the same, then no errors were detected at the receiver end. If there is an error, the receiving location can ask for a retransmission of the entire data. This technique is used in teleprocessing systems.
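A minimal Python sketch of the checksum idea, assuming the words are summed modulo 256 at both ends (the modulus and the names are illustrative):

```python
# Sketch of a simple checksum: sum all transmitted words at the sender,
# send the sum, and recompute it at the receiver. The modulo-256 wraparound
# and the function name are illustrative assumptions.

def checksum(words):
    return sum(words) % 256

sent = [0b10110, 0b10001, 0b10101, 0b00010, 0b11000]
tx_sum = checksum(sent)

received = list(sent)
received[2] ^= 0b00100                 # corrupt one bit of one word in transit
print(checksum(received) == tx_sum)    # False -> ask for retransmission
print(checksum(sent) == tx_sum)        # True  -> no error detected
```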
Block parity:
For the block of data shown, create the row and column parity bits using odd parity. The parity bit, 0 or 1, is added column-wise and row-wise such that the total number of 1s in each column and each row, including the data bits and the parity bit, is odd:
Data     Parity bit
10110    0
10001    1
10101    0
00010    0
11000    1
00000    1
11010    0
A code is said to be an error-correcting code if the correct code word can always be deduced from an erroneous word. For a code to be a single-bit error-correcting code, the minimum distance of that code must be three. The minimum distance of a code is the smallest number of bits by which any two code words differ. A code with a minimum distance of 3 can not only correct single-bit errors but also detect (though not correct) two-bit errors. The key to error correction is that it must be possible to detect and locate the erroneous digits. If the location of an error has been determined, then by complementing the erroneous digit, the message can be corrected. One such error-correcting code is the Hamming code. In this code, to each group of m information (message or data) bits, k parity checking bits, denoted by P1, P2, …, Pk and located at positions 1, 2, 4, …, 2ᵏ⁻¹ from the left, are added to form an (m+k)-bit code word. To correct the error, k parity checks are performed on selected digits of each code word, and the position of the error bit is located by forming an error word; the error bit is then complemented. The k-bit error word is generated by putting a 0 or a 1 in the 2ᵏ⁻¹th position depending upon whether the check for parity involving the parity bit Pk is satisfied or not. Error positions and their corresponding values are:
Error Position   For 15-bit code   For 12-bit code   For 7-bit code
                 C4 C3 C2 C1       C4 C3 C2 C1       C3 C2 C1
0                0  0  0  0        0  0  0  0        0  0  0
1                0  0  0  1        0  0  0  1        0  0  1
2                0  0  1  0        0  0  1  0        0  1  0
3                0  0  1  1        0  0  1  1        0  1  1
4                0  1  0  0        0  1  0  0        1  0  0
5                0  1  0  1        0  1  0  1        1  0  1
6                0  1  1  0        0  1  1  0        1  1  0
7                0  1  1  1        0  1  1  1        1  1  1
8                1  0  0  0        1  0  0  0
9                1  0  0  1        1  0  0  1
10               1  0  1  0        1  0  1  0
11               1  0  1  1        1  0  1  1
12               1  1  0  0        1  1  0  0
13               1  1  0  1
14               1  1  1  0
15               1  1  1  1
7-Bit Hamming Code: To transmit four data bits, 3 parity bits located at positions 2⁰, 2¹ and 2² from the left are added to make a 7-bit code word, which is then transmitted. The word format is
P1 P2 D3 P4 D5 D6 D7
where D denotes the data bits and P the parity bits.
Ex: data bits 1 1 0 1
12-Bit Hamming Code: To transmit eight data bits, 4 parity bits located at positions 2⁰, 2¹, 2² and 2³ are added to make a 12-bit code word. The word format is
P1 P2 D3 P4 D5 D6 D7 P8 D9 D10 D11 D12
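A sketch of the 7-bit code in the P1 P2 D3 P4 D5 D6 D7 format, assuming even parity for the checks (the parity sense and the function names are illustrative):

```python
# Sketch of the (7,4) Hamming code in the P1 P2 D3 P4 D5 D6 D7 format.
# Even parity is assumed for the checks; function names are illustrative.

def hamming_encode(d3, d5, d6, d7):
    """Compute P1, P2, P4 so that each parity group has even parity."""
    p1 = d3 ^ d5 ^ d7          # P1 checks positions 1,3,5,7
    p2 = d3 ^ d6 ^ d7          # P2 checks positions 2,3,6,7
    p4 = d5 ^ d6 ^ d7          # P4 checks positions 4,5,6,7
    return [p1, p2, d3, p4, d5, d6, d7]

def hamming_correct(word):
    """Locate a single-bit error from the check bits C3 C2 C1 and fix it."""
    c1 = word[0] ^ word[2] ^ word[4] ^ word[6]
    c2 = word[1] ^ word[2] ^ word[5] ^ word[6]
    c3 = word[3] ^ word[4] ^ word[5] ^ word[6]
    position = c3 * 4 + c2 * 2 + c1          # 0 means no error detected
    if position:
        word[position - 1] ^= 1              # complement the erroneous bit
    return word, position

code = hamming_encode(1, 1, 0, 1)            # data bits 1 1 0 1
print(code)
code[4] ^= 1                                 # introduce an error at position 5
fixed, pos = hamming_correct(code)
print(pos, fixed)                            # error located at position 5 and corrected
```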
Alphanumeric Codes:
These codes are used to encode the characters of the alphabet in addition to the decimal digits. They are used for transmitting data between computers and their I/O devices such as printers, keyboards and video display terminals. Popular modern alphanumeric codes are the ASCII code and the EBCDIC code.
Boolean algebra
In 1854, George Boole developed an algebraic system now called Boolean algebra. In 1938,
Claude E. Shannon introduced a two‐valued Boolean algebra called switching algebra that
represented the properties of bistable electrical switching circuits. For the formal definition of
Boolean algebra, we shall employ the postulates formulated by E. V. Huntington in 1904.
Boolean algebra is a system of mathematical logic. It is an algebraic system consisting of the set
of elements (0, 1), two binary operators called OR, AND, and one unary operator NOT. It is the
basic mathematical tool in the analysis and synthesis of switching circuits. It is a way to express
logic functions algebraically.
Axioms or Postulates of Boolean algebra are a set of logical expressions that we accept without
proof and upon which we can build a set of useful theorems.
Complementation law
Commutative law
Associative law
Distributive law
Absorption law
DeMorgan's Theorems:
Theorem 1: (A + B)' = A'.B'
Theorem 2: (A . B)' = A' + B'
Redundant Literal Rules:
Rule 1: A + A'.B = A + B
Solution: A + A'.B = (A + A').(A + B)    [∵ A + BC = (A + B).(A + C)]
                   = 1.(A + B)           [∵ A + A' = 1]
                   = A + B
Rule 2: A.(A' + B) = AB
Solution: A.(A' + B) = A.A' + A.B
                     = 0 + AB            [∵ A.A' = 0]
                     = AB
Consensus Theorem
The BC term is called the consensus term and is redundant. The consensus term is formed from
a PAIR OF TERMS in which a variable (A) and its complement (A’) are present; the consensus
term is formed by multiplying the two terms and leaving out the selected variable and its
complement
Consensus Theorem1 Proof:
AB+A’C+BC=AB+A’C+(A+A’)BC
=AB+A’C+ABC+A’BC
=AB(1+C)+A’C(1+B)
= AB+ A’C
Principle of Duality
Each postulate consists of a pair of expressions; one expression is transformed into the other by interchanging the operations (+) and (⋅) as well as the identity elements 0 and 1.
Such expressions are known as duals of each other.
If some equivalence is proved, then its dual is also immediately true.
E.g. If we prove: (x.x)+(x’+x’)=1, then we have by duality: (x+x)⋅(x’.x’)=0
The Huntington postulates were listed in pairs and designated by part (a) and part (b) in below
table.
Table for Postulates and Theorems of Boolean algebra
                         Part-A                            Part-B
                         A + 0 = A                         A.0 = 0
                         A + 1 = 1                         A.1 = A
Idempotence law:         A + A = A                         A.A = A
Complementation law:     A + A' = 1                        A.A' = 0
Double inversion law:    (A')' = A                         --
Commutative law:         A + B = B + A                     A.B = B.A
Associative law:         A + (B + C) = (A + B) + C         A.(B.C) = (A.B).C
Distributive law:        A.(B + C) = AB + AC               A + BC = (A + B).(A + C)
Absorption law:          A + AB = A                        A.(A + B) = A
DeMorgan's theorem:      (A + B)' = A'.B'                  (A.B)' = A' + B'
Redundant literal rule:  A + A'.B = A + B                  A.(A' + B) = AB
Consensus theorem:       AB + A'C + BC = AB + A'C          (A+B).(A'+C).(B+C) = (A+B).(A'+C)
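Any identity in the table can be verified exhaustively over the two values 0 and 1; a small Python sketch checking DeMorgan's theorem and the consensus theorem (the helper name is illustrative):

```python
# Sketch: verify DeMorgan's theorem and the consensus theorem by checking
# every combination of 0/1 values. Uses & | and (1 - x) for AND, OR, NOT.
from itertools import product

def check(name, lhs, rhs, vars_count):
    ok = all(lhs(*v) == rhs(*v) for v in product((0, 1), repeat=vars_count))
    print(name, "holds" if ok else "FAILS")

# DeMorgan: (A + B)' = A'.B'
check("DeMorgan 1",
      lambda a, b: 1 - (a | b),
      lambda a, b: (1 - a) & (1 - b), 2)

# Consensus: AB + A'C + BC = AB + A'C
check("Consensus",
      lambda a, b, c: (a & b) | ((1 - a) & c) | (b & c),
      lambda a, b, c: (a & b) | ((1 - a) & c), 3)
```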
Boolean Function
Boolean algebra is an algebra that deals with binary variables and logic operations.
A Boolean function described by an algebraic expression consists of binary variables, the
constants 0 and 1, and the logic operation symbols.
For a given value of the binary variables, the function can be equal to either 1 or 0.
F(vars) = expression
x y z   F1
0 0 0   0
0 0 1   1
0 1 0   0
0 1 1   0
1 0 0   1
1 0 1   1
1 1 0   1
1 1 1   1
Gate implementation of F1 = x + y'z
Note:
Q: Let a function F() depend on n variables. How many rows are there in the truth table of F()?
A: 2ⁿ rows, since there are 2ⁿ possible binary patterns/combinations for the n variables.
Truth Tables
Enumerates all possible combinations of variable values and the corresponding function
value
Truth tables for some arbitrary functions F1(x,y,z), F2(x,y,z), and F3(x,y,z) are shown below.
x y z F1 F2 F3
0 0 0 0 1 1
0 0 1 0 0 1
0 1 0 0 0 1
0 1 1 0 1 1
1 0 0 0 1 0
1 0 1 0 1 0
1 1 0 0 0 0
1 1 1 1 0 1
• Example: Prove
x’y’z’ + x’yz’ + xyz’ = x’z’ + yz’
• Proof:
x’y’z’+ x’yz’+ xyz’
= x’y’z’ + x’yz’ + x’yz’ + xyz’
= x’z’(y’+y) + yz’(x’+x)
= x’z’•1 + yz’•1
= x’z’ + yz’
Complement of a Function
The complement of a function is derived by interchanging (• and +), and (1 and 0), and
complementing each variable.
Alternatively, interchange the 1s and 0s in the truth table column showing F.
The complement of a function IS NOT THE SAME as the dual of a function.
Example
• Find G(x,y,z), the complement of F(x,y,z) = xy’z’ + x’yz
Ans: G = F’ = (xy’z’ + x’yz)’
= (xy’z’)’ • (x’yz)’ DeMorgan
= (x’+y+z) • (x+y’+z’) DeMorgan again
Note: The complement of a function can also be derived by finding the function’s dual, and
then complementing all of the literals
Canonical and Standard Forms
Definitions
Minterm
Represents exactly one combination in the truth table.
Denoted by mj, where j is the decimal equivalent of the minterm’s corresponding binary
combination (bj).
A variable in mj is complemented if its value in bj is 0, otherwise is uncomplemented.
Example: Assume 3 variables (A, B, C), and j=3. Then, bj = 011 and its corresponding minterm is denoted
by mj = A’BC
Maxterm
Represents exactly one combination in the truth table.
Denoted by Mj, where j is the decimal equivalent of the maxterm's corresponding binary combination (bj).
A variable in Mj is complemented if its value in bj is 1, otherwise it is uncomplemented.
Example: Assume 3 variables (A, B, C), and j=3. Then, bj = 011 and its corresponding maxterm is denoted by Mj = A+B'+C'
Truth Table notation for Minterms and Maxterms
• Minterms and Maxterms are easy to denote using a truth table.
Example: Assume 3 variables x,y,z (order is fixed)
x y z Minterm Maxterm
0 0 0 x’y’z’ = m0 x+y+z = M0
0 0 1 x’y’z = m1 x+y+z’ = M1
0 1 0 x’yz’ = m2 x+y’+z = M2
0 1 1 x’yz = m3 x+y’+z’= M3
1 0 0 xy’z’ = m4 x’+y+z = M4
1 0 1 xy’z = m5 x’+y+z’ = M5
1 1 0 xyz’ = m6 x’+y’+z = M6
1 1 1 xyz = m7 x’+y’+z’ = M7
Canonical Forms
• Every function F() has two canonical forms:
– Canonical Sum-Of-Products (sum of minterms)
– Canonical Product-Of-Sums (product of maxterms)
Canonical Sum-Of-Products:
The minterms included are those mj such that F( ) = 1 in row j of the truth table for F( ).
Canonical Product-Of-Sums:
The maxterms included are those Mj such that F( ) = 0 in row j of the truth table for F( ).
Example: Consider the truth table for f1(a,b,c) shown below.
a b c   f1
0 0 0   0
0 0 1   1
0 1 0   1
0 1 1   0
1 0 0   1
1 0 1   0
1 1 0   1
1 1 1   0
The canonical sum-of-products form for f1 is
f1(a,b,c) = m1 + m2 + m4 + m6 = a'b'c + a'bc' + ab'c' + abc'
The canonical product-of-sums form for f1 is
f1(a,b,c) = M0 • M3 • M5 • M7 = (a+b+c)•(a+b'+c')•(a'+b+c')•(a'+b'+c')
• Observe that: mj = Mj'
Shorthand: ∑ and ∏
• f1(a,b,c) = ∑ m(1,2,4,6), where ∑ indicates that this is a sum-of-products form, and m(1,2,4,6)
indicates that the minterms to be included are m1, m2, m4, and m6.
• f1(a,b,c) = ∏ M(0,3,5,7), where ∏ indicates that this is a product-of-sums form, and M(0,3,5,7)
indicates that the maxterms to be included are M0, M3, M5, and M7.
• Since mj = Mj’ for any j,
∑ m(1,2,4,6) = ∏ M(0,3,5,7) = f1(a,b,c)
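Reading the canonical forms off a truth table can be mechanized; a Python sketch for the f1 above, with the variable order a, b, c (names are illustrative):

```python
# Sketch: read the canonical SOP and POS forms of f1(a,b,c) from its truth table.
# The row values below are those of the example table; names are illustrative.

f1 = {0: 0, 1: 1, 2: 1, 3: 0, 4: 1, 5: 0, 6: 1, 7: 0}   # row index -> f1
names = ("a", "b", "c")

def minterm(j):
    bits = format(j, "03b")
    return "".join(n + ("" if b == "1" else "'") for n, b in zip(names, bits))

def maxterm(j):
    bits = format(j, "03b")
    return "(" + "+".join(n + ("'" if b == "1" else "") for n, b in zip(names, bits)) + ")"

sop = " + ".join(minterm(j) for j in f1 if f1[j] == 1)
pos = "".join(maxterm(j) for j in f1 if f1[j] == 0)
print("f1 =", sop)   # a'b'c + a'bc' + ab'c' + abc'
print("f1 =", pos)   # (a+b+c)(a+b'+c')(a'+b+c')(a'+b'+c')
```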
Conversion between Canonical Forms
• Replace ∑ with ∏ (or vice versa) and replace those j’s that appeared in the original form with those
that do not.
• Example:
f1(a,b,c)= a’b’c + a’bc’ + ab’c’ + abc’
= m1 + m2 + m 4 + m6
= ∑(1,2,4,6)
= ∏(0,3,5,7)
= (a+b+c)•(a+b’+c’)•(a’+b+c’)•(a’+b’+c’)
Standard Forms
Another way to express Boolean functions is in standard form. In this configuration, the terms that form
the function may contain one, two, or any number of literals.
There are two types of standard forms: the sum of products and products of sums.
The sum of products is a Boolean expression containing AND terms, called product terms, with one or
more literals each. The sum denotes the ORing of these terms. An example of a function expressed as a
sum of products is
F1 = y’ + xy + x’yz’
The expression has three product terms, with one, two, and three literals. Their sum is, in effect, an OR
operation.
A product of sums is a Boolean expression containing OR terms, called sum terms. Each term may have any
number of literals. The product denotes the ANDing of these terms. An example of a function expressed as
a product of sums is
F2 = x(y’ + z)(x’ + y + z’)
This expression has three sum terms, with one, two, and three literals. The product is an AND operation.
Conversion of SOP from standard to canonical form
Example-1.
Express the Boolean function F = A + B’C as a sum of minterms.
Solution: The function has three variables: A, B, and C. The first term A is missing two variables; therefore,
A = A(B + B’) = AB + AB’
This function is still missing one variable, so
A = AB(C + C’) + AB’ (C + C’)
= ABC + ABC’ + AB’C + AB’C’
The second term B’C is missing one variable; hence,
B’C = B’C(A + A’) = AB’C + A’B’C
Combining all terms, we have
F = A + B’C
= ABC + ABC’ + AB’C + AB’C’+ A’B’C
But AB'C appears twice, and according to theorem (x + x = x), it is possible to remove one of those occurrences. Rearranging the minterms in ascending order, we finally obtain
F = A'B'C + AB'C' + AB'C + ABC' + ABC
= m1 + m4 + m5 + m6 + m7
When a Boolean function is in its sum‐of‐minterms form, it is sometimes convenient to express the
function in the following brief notation:
F(A, B, C) = ∑m (1, 4, 5, 6, 7)
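The same sum of minterms can be confirmed by evaluating F = A + B'C over all input combinations; a short Python sketch (illustrative):

```python
# Sketch: confirm that F = A + B'C corresponds to the sum of minterms 1,4,5,6,7
# by direct evaluation over all input combinations. Names are illustrative.
from itertools import product

minterms = [
    index
    for index, (a, b, c) in enumerate(product((0, 1), repeat=3))
    if a | ((1 - b) & c)                 # F = A + B'C
]
print(minterms)   # [1, 4, 5, 6, 7], i.e. F(A,B,C) = sum m(1,4,5,6,7)
```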
Example-2.
Express the Boolean function F = xy + x’z as a product of maxterms.
Solution: First, convert the function into OR terms by using the distributive law:
F = xy + x’z = (xy + x’)(xy + z)
= (x + x’)(y + x’)(x + z)(y + z)
= (x’+ y)(x + z)(y + z)
The function has three variables: x, y, and z. Each OR term is missing one variable; therefore,
x’+ y = x’ + y + zz’ = (x’ + y + z)(x’ + y + z’)
x + z = x + z + yy’ = (x + y + z)(x + y’ + z)
y + z = y + z + xx’ = (x + y + z)(x’ + y + z)
Combining all the terms and removing those which appear more than once, we finally obtain
F = (x + y + z)(x + y' + z)(x' + y + z)(x' + y + z')
F= M0M2M4M5
A convenient way to express this function is as follows:
F(x, y, z) = πM(0, 2, 4, 5)
The product symbol, π, denotes the ANDing of maxterms; the numbers are the indices of the maxterms of
the function.
Digital Logic Gates
Since Boolean functions are expressed in terms of AND, OR, and NOT operations, it is easy to implement a Boolean function with these types of gates.
Properties of XOR Gates
NAND and NOR gates are called Universal gates. All fundamental gates (NOT, AND, OR) can be
realized by using either only NAND or only NOR gate. A universal gate provides flexibility and
offers enormous advantage to logic designers.
Two-variable k-map:
A two-variable k-map can have 2² = 4 possible combinations of the input variables A and B. Each of these combinations, A'B', A'B, AB', AB (in the SOP form), is called a minterm. The minterms may be represented in terms of their decimal designations: m0 for A'B', m1 for A'B, m2 for AB' and m3 for AB, assuming that A represents the MSB. The letter m stands for minterm and the subscript represents the decimal designation of the minterm. The presence or absence of a minterm in the expression indicates that the output of the logic circuit assumes logic 1 or logic 0 level for that combination of input variables.
For example, if the output is 1 for the combinations A'B', AB' and AB, the output can be written in terms of minterms as F = m0 + m2 + m3 = ∑m(0,2,3).
A two-variable k-map has 2² = 4 squares. These squares are called cells, and each square on the k-map represents a unique minterm. A 1 placed in any square indicates that the corresponding minterm appears in the output expression, and a 0 or no entry in any square indicates that the corresponding minterm does not appear in the expression for the output.
k-map of ∑m(0,2,3)
F = m1 + m2 = ∑m(1,2). The k-map is
To minimize Boolean expressions given in the SOP form using the k-map, look for adjacent squares having 1s (minterms adjacent to each other), and combine them to form larger squares in order to eliminate some variables. Two squares are said to be adjacent to each other if their minterms differ in only one variable (e.g., A'B and AB differ only in the variable A, so they may be combined to form a 2-square to eliminate the variable A; similarly for the other adjacent pairs).
The necessary condition for adjacency of minterms is that their decimal designations must differ by a power of 2. A minterm can be combined with any number of minterms adjacent to it to form larger squares. Two minterms which are adjacent to each other can be combined to form a bigger square called a 2-square or a pair. This eliminates one variable, the variable that is not common to both the minterms. For example:
f1 = m0 + m1 = A'B' + A'B = A'(B' + B) = A'
f2 = m0 + m2 = A'B' + AB' = B'(A' + A) = B'
f3 = m0 + m1 + m2 + m3 = A'B' + A'B + AB' + AB = A'(B' + B) + A(B' + B) = A' + A = 1
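The adjacency condition (decimal designations differing by a power of 2, i.e., the minterms differing in exactly one bit) can be tested directly; a small Python sketch (names are illustrative):

```python
# Sketch: two minterms are adjacent on the k-map if their binary designations
# differ in exactly one bit, i.e. their XOR is a power of two. Names are illustrative.

def adjacent(m1, m2):
    diff = m1 ^ m2
    return diff != 0 and (diff & (diff - 1)) == 0   # exactly one bit set

print(adjacent(0, 1))   # True  (m0 and m1 combine into a 2-square)
print(adjacent(0, 3))   # False (they differ in two variables)
```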
Two 2-squares adjacent to each other can be combined to form a 4-square. A 4-square
eliminates 2 variables. A 4-square is called a quad. To read the squares on the map after
minimization, consider only those variables which remain constant through the square, and
ignore the variables which are varying. Write the non complemented variable if the variable is
remaining constant as a 1, and the complemented variable if the variable is remaining constant as
a 0, and write the variables as a product term. In the above figure, f1 is read as A', because along the square A remains constant as a 0 (that is, as A'), whereas B changes from 0 to 1.
EX: Reduce the expression f = A'B' + A'B + AB using mapping. Expressed in terms of minterms, the given expression is f = m0 + m1 + m3 = ∑m(0,1,3), and the figure shows the k-map for f and its reduction. In one 2-square, A is constant as a 0 but B varies from 0 to 1; in the other 2-square, B is constant as a 1 but A varies from 0 to 1. So, the reduced expression is A' + B.
The 1s on the k-map indicate the presence of minterms in the output expression, whereas the 0s indicate the absence of minterms. Since the absence of a minterm in the SOP expression means the presence of the corresponding maxterm in the POS expression of the same function, when an SOP expression is plotted on the k-map, 0s or no entries on the k-map represent the maxterms. To obtain the minimal expression in the POS form, consider the 0s on the k-map and follow the procedure used for combining 1s. Also, since the absence of a maxterm in the POS expression means the presence of the corresponding minterm in the SOP expression of the same function, when a POS expression is plotted on the k-map, 1s or no entries on the k-map represent the minterms.
Each sum term in the standard POS expression is called a maxterm. A function in two variables (A, B) has four possible maxterms: A+B, A+B', A'+B and A'+B'. They are represented as M0, M1, M2 and M3 respectively. The uppercase letter M stands for maxterm and its subscript denotes the decimal designation of that maxterm, obtained by treating the non-complemented variable as a 0 and the complemented variable as a 1 and putting them side by side for reading the decimal equivalent of the binary number so formed.
For mapping a POS expression on to the k-map, 0s are placed in the squares
corresponding to the maxterms which are presented in the expression an d1s are placed in the
squares corresponding to the maxterm which are not present in the expression. The decimal
designation of the squares of the squares for maxterms is the same as that for the minterms. A
two-variable k-map & the associated maxterms are asthe maxterms of a two-variable k-map
To obtain the minimal expression in POS form, map the given POS expression on to the
K-map and combine the adjacent 0s into as large squares as possible. Read the squares putting
the complemented variable if its value remains constant as a 1 and the non-complemented
variable if its value remains constant as a 0 along the entire square ( ignoring the variables which
do not remain constant throughout the square) and then write them as a sum term.
Various maxterm combinations and the corresponding reduced expressions are shown in the figure. In this, f1 is read as A because A remains constant as a 0 throughout the square and B changes from a 0 to a 1. f2 is read as B' because B remains constant along the square as a 1 and A changes from a 0 to a 1. f5 is read as 0 because both the variables change along the square.
The given expression in terms of maxterms is f = πM(0,1,3). It requires two gate inputs for the realization of the reduced expression
F = AB'
K-map in POS form and logic diagram
In the given expression, the maxterm M2 is absent. This is indicated by a 1 on the k-map. The corresponding SOP expression is ∑m2, or AB'. This realization is the same as that for the POS form.
Three-variable K-map:
A function in three variables (A, B, C) expressed in the standard SOP form can have eight possible combinations: A'B'C', A'B'C, A'BC', A'BC, AB'C', AB'C, ABC' and ABC. Each of these combinations, designated by m0, m1, m2, m3, m4, m5, m6 and m7 respectively, is called a minterm. A is the MSB of the minterm designator and C is the LSB.
In the standard POS form, the eight possible combinations are: A+B+C, A+B+C', A+B'+C, A+B'+C', A'+B+C, A'+B+C', A'+B'+C and A'+B'+C'. Each of these combinations, designated by M0, M1, M2, M3, M4, M5, M6 and M7 respectively, is called a maxterm. A is the MSB of the maxterm designator and C is the LSB.
A three-variable k-map has, therefore, 8 (= 2³) squares or cells, and each square on the
map represents a minterm or maxterm as shown in figure. The small number on the top right
corner of each cell indicates the minterm or maxterm designation.
The three-variable k-map.
The binary numbers along the top of the map indicate the condition of B and C for each
column. The binary number along the left side of the map against each row indicates the
condition of A for that row. For example, the binary number 01 on top of the second column in
fig indicates that the variable B appears in complemented form and the variable C in non-
complemented form in all the minterms in that column. The binary number 0 on the left of the
first row indicates that the variable A appears in complemented form in all the minterms in that
row. The binary numbers along the top of the k-map are not in normal binary order; they are, in fact, in the Gray code. This is to ensure that two physically adjacent squares are really adjacent, i.e., that their minterms or maxterms differ by only one variable.
ABC' = 110 = m6; ABC = 111 = m7. So the expression is f = ∑m(1,5,2,6,7) = ∑m(1,2,5,6,7). The corresponding k-map is
For reducing the Boolean expressions in SOP (POS) form plotted on the k-map, look
at the 1s (0s) present on the map. These represent the minterms (maxterms). Look for the
minterms (maxterms) adjacent to each other, in order to combine them into larger squares.
Combining of adjacent squares in a k-map containing 1s (or 0s) for the purpose of simplification
of an SOP (or POS) expression is called looping. Some of the minterms (maxterms) may have many adjacencies. Always start with the minterm (maxterm) with the least number of adjacencies and try to form as large a square as possible. The groups formed must be geometric squares or rectangles. They can be formed even by wrapping around, but cannot be formed by using diagonal configurations. Next consider the minterm (maxterm) with the next-to-least number of adjacencies and form as large a square as possible. Continue this till all the minterms (maxterms) are taken care of. A minterm (maxterm) can be part of any number of
squares if it is helpful in reduction. Read the minimal expression from the k-map, corresponding
to the squares formed. There can be more than one minimal expression.
Two squares are said to be adjacent to each other (since the binary designations along
the top of the map and those along the left side of the map are in Gray code), if they are
physically adjacent to each other, or can be made adjacent to each other by wrapping around.
For squares to be combinable into bigger squares it is essential but not sufficient that their
minterm designations must differ by a power of two.
f3 is read as C' + B' because, in the 4-square formed by m0, m2, m6 and m4, the variables A and B change whereas the variable C remains constant as a 0, so it is read as C'; and in the 4-square formed by m0, m1, m4 and m5, A and C change but B remains constant as a 0, so it is read as B'. So, the resultant expression for f3 is the sum of these two, i.e., C' + B'.
Some possible maxterm groupings and the corresponding minimal POS expressions read from the k-map are shown in the figure. In this figure, along the 4-square formed by M1, M3, M7 and M5, A and B change from a 0 to a 1, whereas C remains constant as a 1, so it is read as C'. Along the 4-square formed by M3, M2, M7 and M6, the variables A and C change from a 0 to a 1, but B remains constant as a 1, so it is read as B'. The minimal expression is the product of these two terms, i.e., f1 = (C')(B'). Also in this figure, along the 2-square formed by M4 and M6, variable B changes from a 0 to a 1, while variable A remains constant as a 1 and variable C remains constant as a 0, so read it as A' + C. Similarly, the 2-square formed by M7 and M6 is read as A' + B', while the 2-square formed by M2 and M6 is read as B' + C. The minimal expression is the product of these sum terms, i.e., f2 = (A' + C)(A' + B')(B' + C).
Ex: Reduce the expression f = ∑m(0,2,3,4,5,6) using mapping and implement it in AOI logic as well as in NAND logic. The SOP k-map and its reduction, and the implementation of the minimal expression using AOI logic and the corresponding NAND logic, are shown in the figures below.
1. m5 has only one adjacency, m4, so combine m5 and m4 into a 2-square. Along this 2-square A remains constant as 1 and B remains constant as 0, but C varies from 0 to 1. So read it as AB'.
2. m3 has only one adjacency, m2, so combine m3 and m2 into a 2-square. Along this 2-square A remains constant as 0 and B remains constant as 1, but C varies from 1 to 0. So read it as A'B.
3. m6 can form a 2-square with m2, and m4 can form a 2-square with m0, but observe that by wrapping the map from left to right, m0, m4, m2 and m6 can form a 4-square. Out of these, m2 and m4 have already been combined, but they can be utilized again. So make it. Along this 4-square, A changes from 0 to 1 and B also changes from 0 to 1, but C remains constant as 0. So read it as C'.
4. Write all the product terms in SOP form. The minimal SOP expression is
fmin = AB' + A'B + C'
k-map AOI logic NAND logic
Four variable k-maps:
Four-variable k-map expressions can have 2⁴ = 16 possible combinations of input variables, such as A'B'C'D', A'B'C'D, …, ABCD, with minterm designations m0, m1, …, m15 respectively in SOP form, and A+B+C+D, A+B+C+D', …, A'+B'+C'+D', with maxterm designations M0, M1, …, M15 respectively in POS form. A four-variable k-map has 2⁴ = 16 squares or cells. The binary number designations of the rows and columns are in the Gray code: 11 follows 01 and 10 follows 11. This is called adjacency ordering.
EX:
Five variable k-map:
A five-variable k-map can have 2⁵ = 32 possible combinations of input variables, such as A'B'C'D'E', A'B'C'D'E, …, ABCDE, with minterms m0, m1, …, m31 respectively in SOP form, and A+B+C+D+E, A+B+C+D+E', …, A'+B'+C'+D'+E', with maxterms M0, M1, …, M31 respectively in POS form. The 2⁵ = 32 squares or cells of the k-map are divided into 2 blocks of 16 squares each. The left block represents minterms from m0 to m15, in which A is 0, and the right block represents minterms from m16 to m31, in which A is 1. The 5-variable k-map may contain 2-squares, 4-squares, 8-squares, 16-squares or 32-squares involving these two blocks. Squares are also considered adjacent in these two blocks if, when superimposing one block on top of the other, the squares coincide with one another.
The groupings are shown below.
Ex: F=∑m(0,1,4,5,6,13,14,15,22,24,25,28,29,30,31) is SOP
POS is F=πM(2,3,7,8,9,10,11,12,16,17,18,19,20,21,23,26,27)
The real minimal expression is the minimal of the SOP and POS forms.
1. There are no isolated 1s.
2. m12 can go only with m13. Form a 2-square, which is read as A'BCD'.
3. m0 can go with m2, m16 and m18, so form a 4-square, which is read as B'C'E'.
4. m20, m21, m17 and m16 form a 4-square, which is read as AB'D'.
5. m2, m3, m18, m19, m10, m11, m26 and m27 form an 8-square, which is read as C'D.
6. Write all the product terms in SOP form.
Fmin = A'BCD' + B'C'E' + AB'D' + C'D
Six variable k-map:
Don't care combinations: For certain input combinations, the value of the output is unspecified, either because the input combinations are invalid or because the precise value of the output is of no consequence. The combinations for which the value of the expression is not specified are called don't care combinations or optional combinations, and such expressions are said to be incompletely specified. The output is a don't care for these invalid combinations.
Ex: In the XS-3 code system, the binary states 0000, 0001, 0010, 1101, 1110 and 1111 are unspecified and never occur; these are called don't cares.
A standard SOP expression with don't cares can be converted into a standard POS form by keeping the don't cares as they are and writing the missing minterms of the SOP form as the maxterms of the POS form, and vice versa.
Or f = πM(0,3,7,9,10,11,15) . πd(2,4)
Each square or rectangle made up of a bunch of adjacent minterms is called a subcube. Each of these subcubes is called a Prime Implicant (PI). A PI which contains at least one 1 which cannot be covered by any other prime implicant is called an Essential Prime Implicant (EPI). A PI whose every 1 is covered by at least one EPI is called a Redundant Prime Implicant (RPI). A PI which is neither an EPI nor an RPI is called a Selective Prime Implicant (SPI).
F(A,B,C,D)= CD+ABC+A D + B
The RPI BD may be included without changing the function, but the resulting expression would not be in minimal SOP (MSP) form.
Here, the MSP form is obtained by including the two EPIs and selecting a set of SPIs to cover the remaining uncovered minterms 5, 13 and 15, and these can be covered as
False PIs: Essential False PIs, Redundant False PIs & Selective False PIs:
The maxterms are called false minterms. The PIs obtained by using the maxterms are called False PIs (FPIs). An FPI which contains at least one 0 which can't be covered by any other FPI is called an Essential False Prime Implicant (EFPI).
F(A,B,C,D) = ∑m(0,1,2,3,4,8,12)
           = πM(5,6,7,9,10,11,13,14,15)
Fmin = (A' + C')(A' + D')(B' + C')(B' + D')
All the FPIs are EFPIs, as each of them contains at least one 0 which can't be covered by any other FPI.
Essential False Prime Implicants
Quine-McCluskey Method:
Repeated application of PA + PA' = P (where P is a set of literals) to all adjacent pairs of terms yields the set of all PIs, from which a minimal sum may be selected.
Consider the expression
∑m(0,1,4,5) = A'B'C' + A'B'C + AB'C' + AB'C
The first and second terms, and the third and fourth terms, can be combined:
A'B'(C' + C) + AB'(C' + C) = A'B' + AB'
which is reduced to
B'(A' + A) = B'
The same result can be obtained by combining m0 & m4 and m1 & m5 in the first step, and the resulting terms in the second step.
Procedure:
Decimal Representation
Don‘t cares
PI chart
EPI
Dominating Rows & Columns
Determination of minimal expressions in complex cases.
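The repeated application of PA + PA' = P to all pairs of adjacent terms can be mechanized; the sketch below generates the prime implicants of ∑m(0,1,4,5) by pairwise combination, a simplified tabulation step of the Quine-McCluskey method (function names are illustrative):

```python
# Sketch of the Quine-McCluskey combining step: repeatedly merge pairs of terms
# that differ in exactly one bit (PA + PA' = P) until no more merges are possible.
# The terms that never merge are the prime implicants. Names are illustrative.

def combine(term1, term2):
    """Merge two terms (strings over 0,1,-) differing in exactly one bit position."""
    diff = [i for i, (a, b) in enumerate(zip(term1, term2)) if a != b]
    if len(diff) == 1 and "-" not in (term1[diff[0]], term2[diff[0]]):
        return term1[: diff[0]] + "-" + term1[diff[0] + 1 :]
    return None

def prime_implicants(minterms, bits):
    terms = {format(m, f"0{bits}b") for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for t1 in terms:
            for t2 in terms:
                c = combine(t1, t2)
                if c:
                    merged.add(c)
                    used.update({t1, t2})
        primes |= terms - used        # terms that combined with nothing
        terms = merged
    return primes

print(prime_implicants([0, 1, 4, 5], 3))   # {'-0-'}, i.e. B' (A and C eliminated)
```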
Branching Method:
EX:
Combinational Logic Design
Logic circuits for digital systems may be combinational or sequential. The output of a combinational circuit depends on its present inputs only. A combinational circuit performs a processing operation that is fully specified logically by a set of Boolean functions. A combinational circuit consists of input variables, logic gates and output variables. Both input and output data are represented by signals, i.e., they exist at two possible values: one is logic 1 and the other is logic 0.
For n input variables, there are 2ⁿ possible combinations of binary input values. For each possible input combination, there is one and only one possible output combination. A combinational circuit can be described by m Boolean functions, one for each output variable. Usually the inputs come from flip-flops and the outputs go to flip-flops.
Design Procedure:
0+0=0,0+1=1,1+0=1,1+1=10
The first three operations produce a sum whose length is one digit, but when both the augend and addend bits are equal to 1, the binary sum consists of two digits. The higher significant bit of this result is called a carry. A combinational circuit that performs the addition of two bits is called a half-adder. One that performs the addition of three bits (two significant bits and a previous carry) is called a full-adder, and two half-adders can be employed to implement a full-adder.
The Half Adder: A Half Adder is a combinational circuit with two binary inputs (augends and
addend bits and two binary outputs (sum and carry bits.) It adds the two inputs (A and B) and
produces the sum (S) and the carry (C) bits. It is an arithmetic operation of addition of two single
bit words.
According to the rules of binary addition, the sum (S) is the X-OR of A and B (it represents the LSB of the sum). Therefore,
S = A ⊕ B = A'B + AB'
The carry (C) is the AND of A and B (it is 0 unless both inputs are 1). Therefore,
C = AB
A half-adder can be realized by using one X-OR gate and one AND gate, as shown.
NOR Logic:
A Full-adder is a combinational circuit that adds two bits and a carry and outputs a sum
bit and a carry bit. To add two binary numbers, each having two or more bits, the LSBs can be
added by using a half-adder. The carry resulted from the addition of the LSBs is carried over to
the next significant column and added to the two bits in that column. So, in the second and
higher columns, the two data bits of that column and the carry bit generated from the addition in
the previous column need to be added.
The full-adder adds the bits A and B and the carry from the previous column called the
carry-in Cin and outputs the sum bit S and the carry bit called the carry-out Cout . The variable S
gives the value of the least significant bit of the sum. The variable Cout gives the output carry.The
eight rows under the input variables designate all possible combinations of 1s and 0s that these
variables may have. The 1s and 0s for the output variables are determined from the arithmetic
sum of the input bits. When all the bits are 0s , the output is 0. The S output is equal to 1 when
only 1 input is equal to 1 or when all the inputs are equal to 1. The Cout has a carry of 1 if two or
three inputs are equal to 1.
From the truth table, a circuit that will produce the correct sum and carry bits in response to every possible combination of A, B and Cin is described by
S = A ⊕ B ⊕ Cin
Cout = ACin + BCin + AB
The sum term of the full-adder is the X-OR of A, B and Cin, i.e., the sum bit is the modulo-2 sum of the data bits in that column and the carry from the previous column. The logic diagram of the full-adder using two X-OR gates, two AND gates (i.e., two half-adders) and one OR gate is
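The same structure, two half-adders and an OR gate, can be sketched in Python; the function names are illustrative:

```python
# Sketch of a full adder built from two half adders and an OR gate.
# Function names are illustrative.

def half_adder(a, b):
    return a ^ b, a & b            # sum, carry

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)      # first half adder: A, B
    s, c2 = half_adder(s1, cin)    # second half adder: (A xor B), Cin
    return s, c1 | c2              # Cout = AB + Cin.(A xor B)

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, full_adder(a, b, cin))
```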
Even though a full-adder can be constructed using two half-adders, the disadvantage is that the bits must propagate through several gates in succession, which makes the total propagation delay greater than that of a full-adder circuit built directly in AOI logic.
The full-adder can also be realized using universal logic, i.e., either only NAND gates or only NOR gates, as
NAND Logic:
NOR Logic:
Subtractors:
The subtraction of two binary numbers may be accomplished by taking the complement
of the subtrahend and adding it to the minuend. By this, the subtraction operation becomes an
addition operation and instead of having a separate circuit for subtraction, the adder itself can be
used to perform subtraction. This results in reduction of hardware. In subtraction, each
subtrahend bit of the number is subtracted from its corresponding significant minuend bit to form
a difference bit. If the minuend bit is smaller than the subtrahend bit, a 1 is borrowed from the next significant position. The 1 that has been borrowed must be conveyed to the next higher pair of bits by means of a signal coming out (output) of a given stage and going into (input) the next higher stage.
The Half-Subtractor:
A Half-subtractor is a combinational circuit that subtracts one bit from the other and
produces the difference. It also has an output to specify if a 1 has been borrowed. . It is used to
subtract the LSB of the subtrahend from the LSB of the minuend when one binary number is
subtracted from the other.
A circuit that produces the correct difference and borrow bits in response to every possible combination of the two 1-bit numbers is, therefore,
d = A ⊕ B = A'B + AB'    and    b = A'B
That is, the difference bit is obtained by X-ORing the two inputs, and the borrow bit is obtained by ANDing the complement of the minuend with the subtrahend. Note that the logic for the difference is exactly the same as the logic for output S in the half-adder.
A half-subtractor can also be realized using universal logic, either using only NAND gates or using only NOR gates, as:
NAND Logic:
NOR Logic:
The Full-Subtractor:
A full-subtractor is a combinational circuit that subtracts one bit from another, taking into account a borrow from the previous stage. From the truth table, a circuit that will produce the correct difference and borrow bits in response to every possible combination of A, B and bi is
d = A ⊕ B ⊕ bi    and    b = A'B + A'bi + Bbi
NAND Logic:
NOR Logic:
Binary Parallel Adder:
A binary parallel adder is a digital circuit that adds two binary numbers in parallel form
and produces the arithmetic sum of those numbers in parallel form. It consists of full adders
connected in a chain , with the output carry from each full-adder connected to the input carry of
the next full-adder in the chain.
The interconnection of four full-adder (FA) circuits to provide a 4-bit parallel adder. The
augends bits of A and addend bits of B are designated by subscript numbers from right to left,
with subscript 1 denoting the lower –order bit. The carries are connected in a chain through the
full-adders. The input carry to the adder is Cin and the output carry is C4. The S output generates
the required sum bits. When the 4-bit full-adder circuit is enclosed within an IC package, it has
four terminals for the augends bits, four terminals for the addend bits, four terminals for the sum
bits, and two terminals for the input and output carries. An n-bit parallel adder requires n full
adders. It can be constructed from 4-bit, 2-bit and 1-bit full adder ICs by cascading several
packages. The output carry from one package must be connected to the input carry of the one
with the next higher –order bits. The 4-bit full adder is a typical example of an MSI function.
In the parallel adder, the carry –out of each stage is connected to the carry-in of
the next stage. The sum and carry-out bits of any stage cannot be produced, until sometime after
the carry-in of that stage occurs. This is due to the propagation delays in the logic circuitry,
which lead to a time delay in the addition process. The carry propagation delay for each full-
adder is the time between the application of the carry-in and the occurrence of the carry-out.
The 4-bit parallel adder, the sum (S1) and carry-out (C1) bits given by FA1 are not valid, until
after the propagation delay of FA1. Similarly, the sum S2 and carry-out (C2) bits given by FA2 are
not valid until after the cumulative propagation delay of two full adders (FA1 and FA2) , and so
on. At each stage ,the sum bit is not valid until after the carry bits in all the preceding stages are
valid. Carry bits must propagate or ripple through all stages before the most significant sum bit is
valid. Thus, the total sum (the parallel output) is not valid until after the cumulative delay of all
the adders.
The parallel adder in which the carry-out of each full-adder is the carry-in to the next most
significant adder is called a ripple carry adder. The greater the number of bits that a ripple carry
adder must add, the greater the time required for it to perform a valid addition. If two numbers
are added such that no carries occur between stages, then the add time is simply the propagation
time through a single full-adder.
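The ripple of the carry from stage to stage can be modeled directly; a 4-bit Python sketch with LSB-first bit lists (names are illustrative):

```python
# Sketch of a 4-bit ripple carry adder: each stage waits for the carry of the
# previous stage. Bit lists are LSB first; names are illustrative.

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_adder(a_bits, b_bits, cin=0):
    sum_bits, carry = [], cin
    for a, b in zip(a_bits, b_bits):      # LSB to MSB, carry ripples forward
        s, carry = full_adder(a, b, carry)
        sum_bits.append(s)
    return sum_bits, carry                # carry is C4, the output carry

# 0110 (6) + 0111 (7) = 1101 (13), as LSB-first lists:
print(ripple_adder([0, 1, 1, 0], [1, 1, 1, 0]))   # ([1, 0, 1, 1], 0)
```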
The subtraction of binary numbers can be carried out most conveniently by means of
complements , the subtraction A-B can be done by taking the 2‘s complement of B and adding
it to A . The 2‘s complement can be obtained by taking the 1‘s complement and adding 1 to the
least significant pair of bits. The 1‘s complement can be implemented with inverters as
Binary-Adder Subtractor:
A 4-bit adder-subtractor, the addition and subtraction operations are combined into
one circuit with one common binary adder. This is done by including an X-OR gate with each
full-adder. The mode input M controls the operation. When M=0, the circuit is an adder, and
when M=1, the circuit becomes a subtractor. Each X-OR gate receives input M and one of the inputs of B. When M=0, we have B ⊕ 0 = B; the full-adder receives the value of B, the input carry is 0, and the circuit performs A + B. When M=1, we have B ⊕ 1 = B' and C1 = 1. The B inputs are complemented and a 1 is added through the input carry. The circuit performs the operation A plus the 2's complement of B.
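The mode-controlled behaviour (each B input XORed with M, and M fed in as the input carry) can be sketched as follows; bit lists are LSB first and the names are illustrative:

```python
# Sketch of a 4-bit adder-subtractor: each B input is XORed with the mode bit M,
# and M is also fed in as the input carry. M=0 -> A+B, M=1 -> A plus 2's comp of B.
# Bit lists are LSB first; names are illustrative.

def full_adder(a, b, cin):
    return a ^ b ^ cin, (a & b) | (a & cin) | (b & cin)

def add_sub(a_bits, b_bits, m):
    out, carry = [], m                           # input carry = M
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b ^ m, carry)   # X-OR gate on each B input
        out.append(s)
    return out, carry

A = [1, 0, 1, 0]   # 0101 = 5, LSB first
B = [1, 1, 0, 0]   # 0011 = 3, LSB first
print(add_sub(A, B, 0))   # ([0, 0, 0, 1], 0)  -> 1000 = 8
print(add_sub(A, B, 1))   # ([0, 1, 0, 0], 1)  -> 0010 = 2, carry discarded
```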
The method of speeding up the addition process is based on the two additional
functions of the full-adder, called the carry generate and carry propagate functions.
Consider one full adder stage; say the nth stage of a parallel adder as shown in fig.
we know that is made by two half adders and that the half adder contains an X-OR gate to
produce the sum and an AND gate to produce the carry. If both the bits An and Bn are 1s, a carry
has to be generated in this stage regardless of whether the input carry Cin is a 0 or a 1. This is
called generated carry, expressed as Gn= An.Bn which has to appear at the output through the OR
gate as shown in fig.
There is another possibility of producing a carry-out. The X-OR gate inside the half-adder produces an intermediate signal, the propagated carry Pn = An ⊕ Bn. Consider the case of both Pn and Cn being 1. The input carry Cn has to be propagated to the output only if Pn is 1. If Pn is 0, even if Cn is 1, the AND gate in the second half-adder will inhibit Cn. The carry-out of the nth stage is 1 when either Gn = 1 or Pn.Cn = 1 or both Gn and Pn.Cn are equal to 1.
For the final sum and carry outputs of the nth stage, we get the following Boolean expressions:
Sn = Pn ⊕ Cn
Cn+1 = Gn + Pn.Cn
Observe the recursive nature of the expression for the output carry at the nth stage, which becomes the input carry for the (n+1)st stage; it is possible to express the output carry of a higher significant stage in terms of the carry-out of the previous stage.
Based on this, the expressions for the carry-outs of the various full-adders are as follows,
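Expanding the recursion Cn+1 = Gn + Pn.Cn gives every carry directly in terms of the Gs, the Ps and C0, so no carry has to ripple; a 4-bit Python sketch (illustrative):

```python
# Sketch of 4-bit carry lookahead: G = A.B, P = A xor B, and each carry is
# computed from the expanded recursion C(n+1) = G(n) + P(n).C(n), so no carry
# has to ripple through the previous stages. Names are illustrative.

def lookahead_adder(a_bits, b_bits, c0=0):
    g = [a & b for a, b in zip(a_bits, b_bits)]   # generate terms
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # propagate terms

    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c0)
    c4 = g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1]) \
         | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c0)

    carries = [c0, c1, c2, c3]
    sums = [p[i] ^ carries[i] for i in range(4)]  # Sn = Pn xor Cn
    return sums, c4

# 0110 (6) + 0111 (7), LSB first:
print(lookahead_adder([0, 1, 1, 0], [1, 1, 1, 0]))   # ([1, 0, 1, 1], 0) -> 13
```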
Most modern computers use the 2‘s complement system to represent negative numbers
and to perform subtraction operations of signed numbers can be performed using only the
addition operation ,if we use the 2‘s complement form to represent negative numbers.
The circuit shown can perform both addition and subtraction in the 2‘s complement. This
adder/subtractor circuit is controlled by the control signal ADD/SUB‘. When the ADD/SUB‘
level is HIGH, the circuit performs the addition of the numbers stored in registers A and B.
When the ADD/Sub‘ level is LOW, the circuit subtract the number in register B from the number
in register A. The operation is:
When ADD/SUB‘ is a 1:
1. AND gates 1,3,5 and 7 are enabled, allowing B0, B1, B2 and B3 to pass to the OR gates
9,10,11,12. AND gates 2,4,6 and 8 are disabled, blocking B0', B1', B2' and B3' from
reaching the OR gates 9,10,11 and 12.
2. The levels B0 to B3 pass through the OR gates to the 4-bit parallel adder, to be added
to the bits A0 to A3. The sum appears at the outputs S0 to S3.
When ADD/SUB‘ is a 0:
1. AND gates 1,3,5 and 7 are disabled, blocking B0, B1, B2 and B3 from reaching the OR
gates 9,10,11,12. AND gates 2,4,6 and 8 are enabled, allowing B0', B1', B2' and B3'
to pass to the OR gates.
2. The levels B0' to B3' pass through the OR gates to the 4-bit parallel adder, to be
added to the bits A0 to A3. C0 is now 1, thus the number in register B is effectively
converted to its 2's complement form.
Adders/subtractors are used for adding and subtracting signed binary numbers. In computers, the
output is transferred into register A (the accumulator), so that the result of the addition or
subtraction always ends up stored in register A. This is accomplished by applying a transfer
pulse to the CLK inputs of register A.
Serial Adder:
A serial adder is used to add binary numbers in serial form. The two binary numbers to be
added serially are stored in two shift registers A and B. Bits are added one pair at a time through
a single full adder (FA) circuit as shown. The carry out of the full-adder is transferred to a D flip-
flop. The output of this flip-flop is then used as the carry input for the next pair of significant
bits. The sum bit from the S output of the full-adder could be transferred to a third shift register.
By shifting the sum into A while the bits of A are shifted out, it is possible to use one register for
storing both augend and the sum bits. The serial input register B can be used to transfer a new
binary number while the addend bits are shifted out during the addition.
Initially register A holds the augend, register B holds the addend and the carry flip-flop is
cleared to 0. The serial outputs (SO) of A and B provide a pair of significant bits for the full-adder at x
and y. The shift control enables both registers and the carry flip-flop, so, at the next clock pulse, both
registers are shifted once to the right, the sum bit from S enters the leftmost flip-flop of A, and
the output carry is transferred into flip-flop Q. The shift control enables the registers for a
number of clock pulses equal to the number of bits of the registers. For each succeeding clock
pulse a new sum bit is transferred to A, a new carry is transferred to Q, and both registers are
shifted once to the right. This process continues until the shift control is disabled. Thus the
addition is accomplished by passing each pair of bits together with the previous carry through a
single full adder circuit and transferring the sum, one bit at a time, into register A.
Initially, register A and the carry flip-flop are cleared to 0 and then the first number is
added from B. While B is shifted through the full adder, a second number is transferred to it
through its serial input. The second number is then added to the content of register A while a
third number is transferred serially into register B. This can be repeated to form the addition of
two, three, or more numbers and accumulate their sum in register A.
The parallel adder uses registers with parallel load, whereas the serial adder uses shift
registers. The number of full-adder circuits in the parallel adder is equal to the number of bits in
the binary numbers, whereas the serial adder requires only one full adder circuit and a carry flip-
flop. Excluding the registers, the parallel adder is a combinational circuit, whereas the serial
adder is a sequential circuit. The sequential circuit in the serial adder consists of a full-adder and
a flip-flop that stores the output carry.
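The following behavioral sketch (assumed, not taken from the text) mirrors this operation: bits of the augend (register A) and addend (register B) are shifted out LSB first, added one pair per clock through a single full adder, the sum is shifted back into A, and the carry is held in a D flip-flop until the next clock pulse.

```python
def serial_add(A, B):
    """A, B: lists of bits, MSB first; the sum replaces the contents of A."""
    A, B = A[:], B[:]
    carry_ff = 0                                   # carry flip-flop, cleared to 0
    for _ in range(len(A)):
        x, y = A[-1], B[-1]                        # serial outputs of A and B
        s = x ^ y ^ carry_ff                       # full-adder sum bit
        carry_ff = (x & y) | (carry_ff & (x ^ y))  # carry stored for the next pulse
        A = [s] + A[:-1]                           # sum bit enters the MSB end as A shifts right
        B = [0] + B[:-1]                           # B shifts right (a new number could enter here)
    return A, carry_ff

print(serial_add([0, 1, 0, 1], [0, 0, 1, 1]))   # 5 + 3 = 8 -> ([1, 0, 0, 0], 0)
```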
BCD Adder:
1. Add the 4-bit BCD code groups for each decimal digit position using ordinary binary
addition.
2. For those positions where the sum is 9 or less, the sum is in proper BCD form and no
correction is needed.
3. When the sum of two digits is greater than 9, a correction of 0110 should be added to
that sum, to produce the proper BCD result. This will produce a carry to be added to
the next decimal position.
A BCD adder circuit must be able to operate in accordance with the above steps. In other words,
the circuit must be able to do the following:
1. Add two 4-bit BCD code groups, using straight binary addition.
2. Determine if the sum of this addition is greater than 1001 (decimal 9); if it is, add 0110
(decimal 6) to this sum and generate a carry to the next decimal position.
The first requirement is easily met by using a 4-bit binary parallel adder such as the 74LS83
IC. For example, if the two BCD code groups A3A2A1A0 and B3B2B1B0 are applied to a 4-bit
parallel adder, the adder will output S4S3S2S1S0, where S4 is actually C4, the carry-out of the
MSB bits.
The sum outputs S4S3S2S1S0 can range anywhere from 00000 to 10010 (when both the
BCD code groups are 1001 = 9). The circuitry for a BCD adder must include the logic needed to
detect whenever the sum is greater than 01001, so that the correction can be added in. The
cases where the sum is greater than 1001 are listed in the table.
Let us define a logic output X that will go HIGH only when the sum is greater than 01001
(i.e., for the cases in the table). If we examine these cases, we see that X is HIGH whenever
S4 = 1, or whenever S3 = 1 and either S2 or S1 (or both) is 1. This gives:
X=S4+S3(S2+S1)
Whenever X=1, it is necessary to add the correction factor 0110 to the sum bits, and to
generate a carry. The circuit consists of three basic parts. The two BCD code groups A3A2A1A0
and B3B2B1B0 are added together in the upper 4-bit adder, to produce the sum S4S3S2S1S0. The
logic gates shown implement the expression for X. The lower 4-bit adder will add the correction
0110 to the sum bits, only when X=1, producing the final BCD sum output represented by
∑3∑2∑1∑0. The X is also the carry-out that is produced when the sum is greater than 01001.
When X=0, there is no carry and no addition of 0110. In such cases, ∑3∑2∑1∑0= S3S2S1S0.
Two or more BCD adders can be connected in cascade when two or more digit decimal
numbers are to be added. The carry-out of the first BCD adder is connected as the carry-in of the
second BCD adder, the carry-out of the second BCD adder is connected as the carry-in of the
third BCD adder and so on.
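A single-digit stage can be checked with the sketch below (my own, assuming decimal-digit inputs): the two code groups are added in binary, X = S4 + S3(S2 + S1) flags a raw sum above 9, and in that case the lower adder adds the correction 0110 and X becomes the decimal carry-out.

```python
def bcd_digit_add(a, b, cin=0):
    """a, b: decimal digits 0-9; returns (bcd_sum_digit, carry_out)."""
    s = a + b + cin                            # upper 4-bit binary adder
    s4 = (s >> 4) & 1
    s3, s2, s1 = (s >> 3) & 1, (s >> 2) & 1, (s >> 1) & 1
    x = s4 | (s3 & (s2 | s1))                  # correction-detect logic X
    if x:                                      # lower adder adds 0110 when X = 1
        s = (s + 0b0110) & 0b1111
    return s, x

print(bcd_digit_add(7, 5))   # (2, 1)  i.e. BCD 0010 with a carry to the next digit
print(bcd_digit_add(4, 3))   # (7, 0)  no correction needed
```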
EXCESS-3(XS-3) ADDER:
EX:
The implementation of an XS-3 adder using 4-bit binary adders is shown. The augend (A3A2A1A0)
and addend (B3B2B1B0) in XS-3 are added using the 4-bit parallel adder. If the carry-out is a
1, then 0011 (3) is added to the sum bits S3S2S1S0 of the upper adder in the lower 4-bit parallel
adder. If the carry-out is a 0, then 1101 is added to the sum bits (this is equivalent to subtracting
0011 (3) from the sum bits). The correct sum in XS-3 is thus obtained.
The minuend and the 1‘s complement of the subtrahend in xs-3 are added in the upper 4-
bit parallel adder. If the carry-out from the upper adder is a 0, then 1101 is added to the sum bits
of the upper adder in the lower adder and the sum bits of the lower adder are complemented to
get the result. If the carry-out from the upper adder is a 1, then 3=0011 is added to the sum bits
of the lower adder and the sum bits of the lower adder give the result.
Binary Multipliers:
The paper-and-pencil method of binary multiplication is modified somewhat in digital
machines because a binary adder can add only two binary numbers at a time.
In a binary multiplier, instead of adding all the partial products at the end, they are added two at
a time and their sum is accumulated in a register (the accumulator register). In addition, when the
multiplier bit is a 0, 0s are not written down and added, because they do not affect the final result;
instead, the multiplicand is simply shifted left by one bit.
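A shift-and-add sketch of this idea (my own, not the text's circuit) is shown below: partial products are accumulated one at a time in an accumulator, and the multiplicand is shifted left once for every multiplier bit examined.

```python
def shift_add_multiply(multiplicand, multiplier):
    accumulator = 0
    while multiplier:
        if multiplier & 1:           # multiplier bit is 1: add the partial product
            accumulator += multiplicand
        multiplicand <<= 1           # shift the multiplicand left by one bit
        multiplier >>= 1             # examine the next multiplier bit
    return accumulator

print(shift_add_multiply(0b1011, 0b101))   # 11 * 5 = 55
```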
Code converters:
The availability of a large variety of codes for the same discrete elements of
information results in the use of different codes by different digital systems. It is sometimes
necessary to use the output of one system as the input to another. A conversion circuit must be
inserted between the two systems if each uses different codes for the same information. Thus a
code converter is a logic circuit whose inputs are bit patterns representing numbers (or
characters) in one code and whose outputs are the corresponding representations in a different
code. Code converters are usually multiple-output circuits.
To convert from binary code A to binary code B, the input lines must supply the bit
combination of elements as specified by code A and the output lines must generate the
corresponding bit combination of code B. A combinational circuit performs this transformation
by means of logic gates.
For example, a binary-to-gray code converter has four binary input lines B4, B3, B2, B1 and four
gray code output lines G4, G3, G2, G1. When the input is 0010, for instance, the output should be
0011, and so forth. To design a code converter, we use a code table, treating it as a truth table to
express each output as a Boolean algebraic function of all the inputs.
In this example of binary-to-gray code conversion, we can treat the binary-to-gray
code table as four truth tables to derive expressions for G4, G3, G2, and G1. Each of these
four expressions would, in general, contain all the four input variables B4, B3,B2,and B1.
Thus,this code converter is actually equivalent to four logic circuits, one for each of the truth
tables.
The logic expressions derived for the code converter can be simplified using the usual
techniques, including 'don't cares' if present. Even if the input is an unweighted code, the same
cell-numbering method which we used earlier can be used, but the cell numbers must
correspond to the input combinations as if they were an 8-4-2-1 weighted code.
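As a quick software check of the relationship used in the first two design examples below, the following sketch (my own) converts binary to Gray with G = B XOR (B shifted right by one) and recovers binary from Gray by accumulating the XOR of the higher bits.

```python
def binary_to_gray(b):          # b is an integer, e.g. 0b0010
    return b ^ (b >> 1)

def gray_to_binary(g):
    b = 0
    while g:                    # each binary bit is the XOR of the Gray bits above it
        b ^= g
        g >>= 1
    return b

print(format(binary_to_gray(0b0010), '04b'))   # 0011, as in the example above
print(format(gray_to_binary(0b0011), '04b'))   # 0010
```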
Design of a 4-bit binary to gray code converter:
Design of a 4-bit gray to Binary code converter:
Design of a 4-bit BCD to XS-3 code converter:
Design of a BCD to gray code converter:
Design of a SOP circuit to Detect the Decimal numbers 5 through 12 in a 4-bit gray code
Input:
Design of a SOP circuit to detect the decimal numbers 0,2,4,6,8 in a 4-bit 5211 BCD code
input:
Design of a Combinational circuit to produce the 2’s complement of a 4-bit binary number:
Comparators:
1. Magnitude Comparator:
ENCODERS:
This allows multiple circuits to share the same output line or lines (such as a bus which cannot
listen to more than one device at a time).
Three-state outputs are implemented in many registers, bus drivers, and flip-flops in the 7400
and 4000 series as well as in other types, but also internally in many integrated circuits. Other
typical uses are internal and external buses in microprocessors, computer memory, and
peripherals. Many devices are controlled by an active-low input called OE (Output Enable)
which dictates whether the outputs should be held in a high-impedance state or drive their
respective loads (to either 0- or 1-level).
Unit III
Sequential machine fundamentals
Sequential circuits
There are two types of asynchronous circuits: fundamental mode circuits and pulse mode
circuits.
From the diagram you can see that the clock period is the time between successive
transitions in the same direction, that is, between two rising or two falling edges. State transitions
in synchronous sequential circuits are made to take place at times when the clock is making a
transition from 0 to 1 (rising edge) or from 1 to 0 (falling edge). Between successive clock pulses
there is no change in the information stored in memory.
The reciprocal of the clock period is referred to as the clock frequency. The clock
width is defined as the time during which the value of the clock signal is equal to 1. The ratio of
the clock width and clock period is referred to as the duty cycle. A clock signal is said to
be active high if the state changes occur at the clock's rising edge or during the clock width.
Otherwise, the clock is said to be active low. Synchronous sequential circuits are also known
as clocked sequential circuits.
The memory elements used in synchronous sequential circuits are usually flip-flops.
These circuits are binary cells capable of storing one bit of information. A flip-flop circuit has
two outputs, one for the normal value and one for the complement value of the bit stored in it.
Binary information can enter a flip-flop in a variety of ways, a fact which gives rise to the
different types of flip-flops. For information on the different types of basic flip-flop circuits and
their logical properties, see the previous tutorial on flip-flops.
In asynchronous sequential circuits, the transition from one state to another is initiated by the
change in the primary inputs; there is no external synchronization. The memory elements commonly used
in asynchronous sequential circuits are time-delay devices, usually implemented by feedback
among logic gates. Thus, asynchronous sequential circuits may be regarded as combinational
circuits with feedback. Because of the feedback among logic gates, asynchronous sequential
circuits may, at times, become unstable due to transient conditions. The instability problem
imposes many difficulties on the designer. Hence, they are not as commonly used as
synchronous systems.
Latches and flip-flops are the basic elements for storing information. One latch or flip-
flop can store one bit of information. The main difference between latches and flip-flops is that
for latches, their outputs are constantly affected by their inputs as long as the enable signal is
asserted. In other words, when they are enabled, their content changes immediately when their
inputs change. Flip-flops, on the other hand, have their content change only either at the rising or
falling edge of the enable signal. This enable signal is usually the controlling clock signal. After
the rising or falling edge of the clock, the flip-flop content remains constant even if the input
changes.
There are basically four main types of latches and flip-flops: SR, D, JK, and T. The major
differences in these flip-flop types are the number of inputs they have and how they change state.
For each type, there are also different variations that enhance their operations. In this chapter, we
will look at the operations of the various latches and flip-flops. A flip-flop has two outputs,
labeled Q and Q'. The Q output is the normal output of the flip-flop and Q' is the inverted output.
A latch may be an active-HIGH input latch or an active-LOW input latch. Active-HIGH
means that the SET and RESET inputs are normally resting in the LOW state and one of them will
be pulsed HIGH whenever we want to change the latch outputs.
SR latch:
The latch has two outputs Q and Q‘. When the circuit is switched on the latch may enter
into any state. If Q=1, then Q‘=0, which is called SET state. If Q=0, then Q‘=1, which is called
RESET state. Whether the latch is in the SET state or the RESET state, it will continue to remain in the
same state as long as the power is not switched off. But this latch by itself is not a useful circuit, since
there is no way of entering the desired input. It is, however, the fundamental building block in constructing
flip-flops, as explained in the following sections.
NAND latch
NAND latch is the fundamental building block in constructing a flip-flop. It has the
property of holding on to any previous output, as long as it is not disturbed.
The operation of the NAND latch is the reverse of the operation of the NOR latch. If 0's are
replaced by 1's and 1's are replaced by 0's, we get the same truth table as that of the NOR latch
shown.
NOR latch
The analysis of the operation of the active-HIGH NOR latch can be summarized as follows.
1. SET=0, RESET=0: this is the normal resting state of the NOR latch and it has no effect on the
output state. Q and Q' will remain in whatever state they were in prior to the occurrence of this
input condition.
2. SET=1, RESET=0: this will always set Q=1, where it will remain even after SET returns to 0
3. SET=0, RESET=1: this will always reset Q=0, where it will remain even after RESET
returns to 0
4. SET=1, RESET=1: this condition tries to SET and RESET the latch at the same time, and it
produces Q=Q'=0. If the inputs are returned to zero simultaneously, the resulting output state
is erratic and unpredictable. This input condition should not be used.
The SET and RESET inputs are normally in the LOW state and one of them will be pulsed
HIGH whenever we want to change the latch outputs.
RS Flip-flop:
The basic flip-flop is a one-bit memory cell that gives the fundamental idea of a memory
device. It is constructed using two NAND gates. The two NAND gates N1 and N2 are connected
such that the output of N1 is connected to an input of N2 and the output of N2 to an input of N1.
These form the feedback path. The inputs are S and R, and the outputs are Q and Q'. The logic
diagram and the block diagram of the R-S flip-flop with clocked input are shown in the figure.
Figure: RS Flip-flop
The flip-flop can be made to respond only during the occurrence of a clock pulse by adding
two NAND gates to the input latch, so that synchronization is achieved; i.e., flip-flops are
allowed to change their states only at a particular instant of time. The clock pulses are
generated by a clock pulse generator. The flip-flops are affected only with the arrival of
clock pulse.
Operation:
1. When CP=0, the outputs of N3 and N4 are 1 regardless of the values of S and R. This is
given as input to N1 and N2. This keeps the previous values of Q and Q' unchanged.
2. When CP=1 the information at S and R inputs are allowed to reach the latch and
change of state in flip-flop takes place.
3. CP=1, S=1, R=0 gives the SET state i.e., Q=1, Q‘=0.
4. CP=1, S=0, R=1 gives the RESET state i.e., Q=0, Q‘=1.
5. CP=1, S=0, R=0 does not affect the state of flip-flop.
6. CP=1, S=1, R=1 is not allowed, because it is not possible to determine the next state. This
condition is said to be a "race condition".
In the logic symbol CP input is marked with a triangle. It indicates the circuit responds to
an input change from 0 to 1. The characteristic table gives the operation conditions of flip-flop.
Q(t) is the present state maintained in the flip-flop at time ‗t‘. Q(t+1) is the state after the
occurrence of clock pulse.
Figure: truth table, block diagram, logic diagram of edge triggered flip-flop
JK flip-flop (edge triggered JK flip-flop)
The race condition in the RS flip-flop, when R=S=1, is eliminated in the J-K flip-flop. There is a
feedback from the output to the inputs. Figure 3.4 represents one way of building a JK flip-flop.
Figure: JK flip-flop
The J and K are called control inputs, because they determine what the flip-flop does
when a positive clock edge arrives.
Operation:
1. When J=0, K=0 then both N3 and N4 will produce high output and the previous
value of Q and Q‘ retained as it is.
2. When J=0, K=1, N3 will get an output as 1 and output of N4 depends on the value
of Q. The final output is Q=0, Q‘=1 i.e., reset state
3. When J=1, K=0 the output of N4 is 1 and N3 depends on the value of Q‘. The final
output is Q=1 and Q‘=0 i.e., set state
4. When J=1, K=1 it is possible to set (or) reset the flip-flop depending on the current
state of the output. If Q=1, Q'=0, then N4 passes a 0 to N2, which produces Q'=1, Q=0, i.e.,
the reset state. Thus when J=1, K=1, Q changes to the complement of the last state, and the
flip-flop is said to be in the toggle state.
The characteristic equation of the JK flip-flop is Q(next) = JQ' + K'Q.
JK flip-flop operation:
Characteristic table:
J K | Q(next) | Comment
0 0 | Q       | No change
0 1 | 0       | Reset
1 0 | 1       | Set
1 1 | Q'      | Toggle
Excitation table:
Q  Q(next) | J K | Comment
0  0       | 0 X | No change
0  1       | 1 X | Set
1  0       | X 1 | Reset
1  1       | X 0 | No change
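The characteristic equation can be checked with a short behavioral sketch (my own, assuming an edge-triggered device): on every active clock edge the next state follows Q(next) = JQ' + K'Q, which holds, sets, resets or toggles the output.

```python
def jk_next(q, j, k):
    # characteristic equation Q(next) = J.Q' + K'.Q
    return (j & (q ^ 1)) | ((k ^ 1) & q)

q = 0
for j, k in [(0, 0), (1, 0), (0, 0), (0, 1), (1, 1), (1, 1)]:
    q = jk_next(q, j, k)
    print(f"J={j} K={k} -> Q={q}")   # hold, set, hold, reset, toggle, toggle
```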
T flip-flop:
If the T input is high, the T flip-flop changes state ("toggles") whenever the clock input is
strobed. If the T input is low, the flip-flop holds the previous value. This behavior is described by
the characteristic equation Q(next) = T XOR Q = TQ' + T'Q.
When T is held high, the toggle flip-flop divides the clock frequency by two; that is, if the
clock frequency is 4 MHz, the output frequency obtained from the flip-flop will be 2 MHz. This
"divide-by" feature has applications in various types of digital counters. A T flip-flop can also be
built using a JK flip-flop (the J and K pins are connected together and act as T) or a D flip-flop
(T and the previous output Q are fed to the D input through an XOR gate).
T flip-flop operation:
Characteristic table:
T Q | Q(next) | Comment
0 0 | 0       | Hold
0 1 | 1       | Hold
1 0 | 1       | Toggle
1 1 | 0       | Toggle
Excitation table:
Q  Q(next) | T | Comment
0  0       | 0 | No change
0  1       | 1 | Complement
1  0       | 1 | Complement
1  1       | 0 | No change
Consider, for example, that the inputs are J = K = 1 and Q = 1, and a pulse as shown in
Figure is applied at the clock input.
After a time interval t equal to the propagation delay through two NAND gates in series,
the outputs will change to Q = 0. So now we have J = K = 1 and Q = 0.
After another time interval of t the output will change back to Q = 1. Hence, we
conclude that for the duration tp of the clock pulse, the output will oscillate
between 0 and 1. Hence, at the end of the clock pulse, the value of the output is not
certain. This situation is referred to as a race-around condition.
Generally, the propagation delay of TTL gates is of the order of nanoseconds. So
if the clock pulse is of the order of microseconds, then the output will change thousands
of times within the clock pulse.
This race-around condition can be avoided if tp < t < T. Due to the small propagation
delays of the ICs it may be difficult to satisfy this condition.
A more practical way to avoid the problem is to use the master-slave (M-S) configuration
as discussed below.
Applications of flip-flops:
Frequency Division: When a pulse waveform is applied to the clock input of a J-K flip-
flop that is connected to toggle, the Q output is a square wave with half the frequency of the
clock input. If more flip-flops are connected together as shown in the figure below, further
division of the clock frequency can be achieved
Parallel data storage: a group of flip-flops is called a register. To store data of N bits, N
flip-flops are required, since the data is available in parallel form. When a clock pulse is applied
to all flip-flops simultaneously, these bits are transferred to the Q outputs of the
flip-flops.
Serial data storage: to store data of N bits available in serial form, N number of D-flip-
flops is connected in cascade. The clock signal is connected to all the flip-flops. The serial data is
applied to the D input terminal of the first flip-flop.
Transfer of data: data stored in flip-flops may be transferred out in a serial fashion, i.e.,
bit-by-bit from the output of one flip-flop, or may be transferred out in parallel form.
Excitation Tables:
Conversions of flip-flops:
The key here is to use the excitation table, which shows the necessary triggering signal
(S,R,J,K, D and T) for a desired flip-flop state transition :
We need to design the circuit to generate the triggering signal D as a function of T and Q.
Consider the excitation table:
We need to design the circuit to generate the triggering signals S and R as functions of
the given input and the present state Q; consider the excitation table:
The desired signals S and R can be obtained as functions of the given input and the current FF state
from the Karnaugh maps:
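For the T-to-D conversion mentioned above, the excitation table gives D = T XOR Q, so driving a D flip-flop with T XOR Q makes it behave exactly like a T flip-flop. The sketch below (my own) checks this behaviorally.

```python
def d_next(d):
    # a D flip-flop simply stores its input on the clock edge
    return d

def t_flipflop_from_d(q, t):
    return d_next(t ^ q)      # D = T XOR Q

q = 0
for t in [1, 1, 0, 1]:
    q = t_flipflop_from_d(q, t)
    print(f"T={t} -> Q={q}")   # toggles when T=1, holds when T=0
```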
Add a state D.
State D – the third input of the sequence, a 0, has been detected, so the input seen so far is
110. From state D, if the next input is a 1 the sequence has been detected and a 1
is output.
Shift registers:
A number of flip-flops connected together such that data may be shifted into and out of them is
called a shift register. Data may be shifted into or out of the register in serial form or in parallel
form. There are four basic types of shift registers:
1. Serial in, serial out, shift right, shift registers
2. Serial in, serial out, shift left, shift registers
3. Parallel in, serial out shift registers
4. Parallel in, parallel out shift registers
Serial IN, serial OUT, shift right, shift left register:
The logic diagram of a 4-bit serial-in, serial-out, right-shift register with four stages is shown. The register
can store four bits of data. Serial data is applied at the D input of the first FF. The Q output of the
first FF is connected to the D input of the next FF, and so on; the data is taken out from the Q terminal of
the last FF.
When serial data is transferred into a register, each new bit is clocked into the first FF at the
positive-going edge of each clock pulse. The bit that was previously stored by the first FF is
transferred to the second FF, the bit that was stored by the second FF is transferred to the third
FF, and so on.
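A behavioral sketch (my own) of this 4-bit serial-in, serial-out, shift-right register: on each clock edge every stage takes the value of the stage to its left and the new serial bit enters the first flip-flop.

```python
class ShiftRegisterSISO:
    def __init__(self, n=4):
        self.q = [0] * n                         # Q outputs of the four flip-flops

    def clock(self, serial_in):
        self.q = [serial_in] + self.q[:-1]       # shift right by one position
        return self.q[-1]                        # serial output from the last FF

sr = ShiftRegisterSISO()
for bit in [1, 0, 1, 1]:
    sr.clock(bit)
print(sr.q)        # [1, 1, 0, 1] -- the stored data, viewed in parallel
```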
Serial-in, parallel-out, shift register:
In this type of register, the data bits are entered into the register serially, but the data stored in
the register is shifted out in parallel form.
Once the data bits are stored, each bit appears on its respective output line and all bits are
available simultaneously, rather than on a bit-by-bit basis with the serial output. The serial-in,
parallel out, shift register can be used as serial-in, serial out, shift register if the output is taken
from the Q terminal of the last FF.
Parallel-in, serial-out, shift register:
For a parallel-in, serial-out shift register, the data bits are entered simultaneously into their
respective stages on parallel lines, rather than on a bit-by-bit basis on one line as with serial data
entry; the data bits are then transferred out of the register serially, on a bit-by-bit basis, over a single line.
There are four data lines A, B, C, D through which the data is entered into the register in
parallel form. The shift/load signal allows the data to be entered in parallel form into the register,
and the data is shifted out serially from terminal Q4.
Parallel-in, parallel-out, shift register:
In a parallel-in, parallel-out shift register, the data is entered into the register in parallel form,
and the data is also taken out of the register in parallel form. Data is applied to the D input
terminals of the FF‘s. When a clock pulse is applied, at the positive going edge of the pulse, the
D inputs are shifted into the Q outputs of the FFs. The register now stores the data. The stored
data is available instantaneously for shifting out in parallel form.
Bidirectional shift register:
A bidirectional shift register is one in which the data bits can be shifted from left to right
or from right to left. The figure shows the logic diagram of a 4-bit serial-in, serial-out, bidirectional
shift register. Right/left is the mode signal: when right/left is a 1, the logic circuit works as a
shift-right register, and when it is a 0, as a shift-left register. The bidirectional operation is achieved
by using the mode signal and two AND gates and one OR gate for each stage.
A HIGH on the right/left control input enables the AND gates G1, G2, G3 and G4 and
disables the AND gates G5,G6,G7 and G8, and the state of Q output of each FF is passed
through the gate to the D input of the following FF. when a clock pulse occurs, the data bits are
then effectively shifted one place to the right. A LOW on the right/left control input enables the
AND gates G5, G6, G7 and G8 and disables the AND gates G1, G2, G3 and G4, and the Q output
of each FF is passed to the D input of the preceding FF. When a clock pulse occurs, the data bits
are then effectively shifted one place to the left. Hence, the circuit works as a bidirectional shift
register.
Universal shift register:
A register capable of shifting in one direction only is a unidirectional shift register; one that
can shift in both directions is a bidirectional shift register. If the register has both shift and parallel-
load capabilities, it is referred to as a universal shift register. A universal shift register is a
bidirectional register whose input can be either in serial form or in parallel form and whose
output also can be in serial form or in parallel form.
The most general shift register has the following capabilities.
A universal shift register can be realized using multiplexers. The below fig shows the logic
diagram of a 4-bit universal shift register that has all capabilities. It consists of 4 D flip-flops and
four multiplexers. The four multiplexers have two common selection inputs S1 and S0. Input 0 of
each multiplexer is selected when S1S0=00, input 1 is selected when S1S0=01, input 2 is
selected when S1S0=10, and input 3 is selected when S1S0=11. The selection inputs control the
mode of operation of the register according to the function table. When S1S0=00, the present
value of the register is applied to the D inputs of the flip-flops. This condition forms a path from the
output of each flip-flop into the input of the same flip-flop, so the next clock edge transfers into
each flip-flop the binary value it held previously, and no change of state occurs. When S1S0=01,
terminal 1 of the multiplexer inputs has a path to the D inputs of the flip-flops. This causes a
shift-right operation, with the serial input transferred into flip-flop A4. When S1S0=10, a shift-left
operation results, with the other serial input going into flip-flop A1. Finally, when S1S0=11, the
binary information on the parallel input lines is transferred into the register simultaneously
during the next clock cycle.
Mode control:
S1 S0 | Register operation
0  0  | No change
0  1  | Shift right
1  0  | Shift left
1  1  | Parallel load
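The multiplexer selection can be sketched in software as follows (my own sketch, using the mode table above): S1S0 = 00 holds, 01 shifts right, 10 shifts left, and 11 loads the parallel inputs.

```python
def universal_shift(q, s1, s0, serial_right=0, serial_left=0, parallel=None):
    """q: current register contents as a list, leftmost bit first."""
    if (s1, s0) == (0, 0):
        return q[:]                              # no change
    if (s1, s0) == (0, 1):
        return [serial_right] + q[:-1]           # shift right
    if (s1, s0) == (1, 0):
        return q[1:] + [serial_left]             # shift left
    return list(parallel)                        # parallel load

q = [0, 0, 0, 0]
q = universal_shift(q, 1, 1, parallel=[1, 0, 1, 1])   # load 1011
q = universal_shift(q, 0, 1, serial_right=0)          # shift right -> 0101
q = universal_shift(q, 1, 0, serial_left=1)           # shift left  -> 1011
print(q)
```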
Counters:
A counter is a device which stores (and sometimes displays) the number of times a
particular event or process has occurred, often in relationship to a clock signal. A digital counter
is a set of flip-flops whose states change in response to pulses applied at the input to the counter.
Counters may be asynchronous counters or synchronous counters. Asynchronous counters are
also called ripple counters.
In electronics counters can be implemented quite easily using register-type circuits such as
the flip-flops and a wide variety of classifications exist:
Asynchronous (ripple) counter – changing state bits are used as clocks to subsequent state
flip-flops
Synchronous counter – all state bits change under control of a single clock
Decade counter – counts through ten states per stage
Up/down counter – counts both up and down, under command of a control input
Ring counter – formed by a shift register with feedback connection in a ring
Johnson counter – a twisted ring counter
Cascaded counter
Modulus counter.
Each is useful for different applications. Usually, counter circuits are digital in nature and count
in natural binary. Many types of counter circuits are available as digital building blocks; for
example, a number of chips in the 4000 series implement different counters.
Occasionally there are advantages to using a counting sequence other than the natural binary
sequence, such as the binary-coded-decimal counter, a linear-feedback shift-register counter, or
a Gray-code counter.
Counters are useful for digital clocks and timers, and in oven timers, VCR clocks, etc.
Asynchronous counters:
In its simplest form, an asynchronous (ripple) counter is a single flip-flop operated in toggle
mode (for example, a D-type flip-flop with its D input fed from its own inverted output). This
circuit can store one bit, and hence can count from zero to one
before it overflows (starts over from 0). This counter will increment once for every clock cycle
and takes two clock cycles to overflow, so every cycle it will alternate between a transition from
0 to 1 and a transition from 1 to 0. Notice that this creates a new clock with a 50% duty cycle at
exactly half the frequency of the input clock. If this output is then used as the clock signal for a
similarly arranged D flip-flop (remembering to invert the output to the input), one will get
another 1 bit counter that counts half as fast. Putting them together yields a two-bit counter:
A two-bit ripple counter uses two flip-flops. There are four possible states for 2-bit up-
counting, i.e., 00, 01, 10 and 11.
· The counter is initially assumed to be at state 00, where the outputs of the two flip-flops
are noted as Q1Q0, with Q1 forming the MSB and Q0 the LSB.
· On the negative edge of the first clock pulse, the output of the first flip-flop FF1 toggles its
state. Thus Q1 remains at 0 and Q0 toggles to 1, and the counter state is now read as 01.
· During the next negative edge of the input clock pulse, FF1 toggles and Q0 = 0. The output
Q0 acts as the clock signal for the second flip-flop FF2, and this transition is a negative
edge for FF2, so FF2 toggles its state and Q1 = 1. The counter state is now read as 10.
· On the next negative edge of the input clock to FF1, output Q0 toggles to 1. This
transition from 0 to 1 is a positive edge for FF2, so output Q1 remains at 1. The counter state is
now read as 11.
· For the next negative edge of the input clock, Q0 toggles to 0. This transition from 1 to 0
acts as a negative edge clock for FF2 and its output Q1 toggles to 0. Thus the starting state 00 is
attained. Figure shown below
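The same sequence can be reproduced with a short behavioral sketch (my own): both toggle-mode stages respond to falling edges, and the output Q0 of the first stage acts as the clock for the second stage.

```python
def ripple_count(pulses):
    q0 = q1 = 0
    states = []
    for _ in range(pulses):
        old_q0 = q0
        q0 ^= 1                        # FF1 toggles on every falling clock edge
        if old_q0 == 1 and q0 == 0:    # a 1 -> 0 change on Q0 is a falling edge for FF2
            q1 ^= 1
        states.append((q1, q0))
    return states

print(ripple_count(4))   # [(0, 1), (1, 0), (1, 1), (0, 0)]
```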
Two-bit ripple down-counter using negative edge triggered flip flop:
Two-bit ripple up-down counter using negative edge triggered flip flop:
Figure: asynchronous 2-bit ripple up-down counter using negative edge triggered flip flop:
As the name indicates an up-down counter is a counter which can count both in upward
and downward directions. An up-down counter is also called a forward/backward counter
or a bidirectional counter. So, a control signal or a mode signal M is required to choose
the direction of count. When M=1, for up counting, Q1 is transmitted to the clock of FF2, and
when M=0, for down counting, Q1' is transmitted to the clock of FF2. This is achieved by
using two AND gates and one OR gate. The external clock signal is applied to FF1.
Clock signal to FF2 = (Q1 . Up) + (Q1' . Down) = Q1.M + Q1'.M'
To design an asynchronous counter, first we write the counting sequence, then tabulate the values of
the reset signal R for the various states of the counter and obtain the minimal expression for R and R'
using a K-map or any other method. A feedback is provided such that R (or R') resets all the FFs after
the desired count.
Design of a Mod-6 asynchronous counter using T FFs:
A mod-6 counter has six stable states 000, 001, 010, 011, 100, and 101. When the sixth
clock pulse is applied, the counter temporarily goes to the 110 state, but immediately resets to 000
because of the feedback provided. It is a "divide-by-6" counter, in the sense that it divides the
input clock frequency by 6. It requires three FFs, because the smallest value of n satisfying the
condition N <= 2^n is n = 3; three FFs can have 8 possible states, out of which only six are utilized and
the remaining two states, 110 and 111, are invalid. If initially the counter is in the 000 state, then after
the first clock pulse it goes to 001, after the second clock pulse it goes to 010, and so on.
After the sixth clock pulse it goes back to 000. For the design, write the truth table with the present state
outputs Q3, Q2 and Q1 as the variables and reset R as the output, and obtain an expression for R
in terms of Q3, Q2, and Q1; this decides the feedback to be provided. From the truth table,
R=Q3Q2. For an active-LOW reset, R' is used. The reset pulse is of very short duration, of the order
of nanoseconds, and it is equal to the propagation delay time of the NAND gate used. The
expression for R can also be determined as follows.
The logic diagram and timing diagram of the mod-6 counter are shown in the above fig. The count table is:
After pulses | Q3 Q2 Q1 | R
0            | 0  0  0  | 0
1            | 0  0  1  | 0
2            | 0  1  0  | 0
3            | 0  1  1  | 0
4            | 1  0  0  | 0
5            | 1  0  1  | 0
6            | 1  1  0  | 1 (temporary state)
             | 0  0  0  | 0 (after reset, the sequence repeats)
For a mod-10 (decade) ripple counter, the count table and the K-map for reset are shown in fig. From the
K-map, R = Q4Q2. So, feedback is provided from the second and fourth FFs. For an active-HIGH reset, Q4Q2 is
applied to the clear terminal. For an active-LOW reset, (Q4Q2)' is connected to the CLR inputs of all flip-flops.
After pulses | Q4 Q3 Q2 Q1
0            | 0  0  0  0
1            | 0  0  0  1
2            | 0  0  1  0
3            | 0  0  1  1
4            | 0  1  0  0
5            | 0  1  0  1
6            | 0  1  1  0
7            | 0  1  1  1
8            | 1  0  0  0
9            | 1  0  0  1
10           | 0  0  0  0 (after the temporary 1 0 1 0 state is reset)
Synchronous counters:
Asynchronous counters are serial counters. They are slow because each FF can change state
only after all the preceding FFs have changed their state. If the clock frequency is very high, the
asynchronous counter may skip some of the states. This problem is overcome in synchronous
counters or parallel counters. Synchronous counters are counters in which all the flip-flops are
triggered simultaneously by the clock pulses; that is, they have a common clock pulse applied
simultaneously to all flip-flops.
Design of a 3-Bit Synchronous Binary Counter:
Step 1: State diagram: draw the state diagram showing all the possible states. The state diagram,
which may also be called a transition diagram, is a graphical means of depicting the sequence of states
through which the counter progresses.
Step 2: Number of flip-flops: based on the description of the problem, determine the required
number n of flip-flops (the smallest value of n such that the number of states N <= 2^n) and the
desired counting sequence.
Step 3: Choice of flip-flops and excitation table: select the type of flip-flop to be used and write the
excitation table. An excitation table is a table that lists the present state (PS), the next state (NS)
and the required excitations.
Step 4: Minimal expressions for excitations: obtain the minimal expressions for the excitations of
the FFs using K-maps drawn for the excitations of the flip-flops in terms of the present states and
inputs.
Step5: logic diagram: draw a logic diagram based on the minimal expressions
Step 1: Determine the number of flip-flops required. A 3-bit counter requires three FFs. It has 8
states (000 through 111) and all the states are valid; hence there are no don't cares. For
selecting the up and down modes, a control or mode signal M is required: the counter counts up when
M=1 and counts down when M=0. The clock signal is applied to all the FFs simultaneously.
Step2: draw the state diagrams: the state diagram of the 3-bit up-down counter is drawn as
Step3: select the type of flip flop and draw the excitation table: JK flip-flops are selected and the
excitation table of a 3-bit up-down counter using JK flip-flops is drawn as shown in fig.
Step4: obtain the minimal expressions: From the excitation table we can conclude that J1=1 and
K1=1, because all the entries for J1and K1 are either X or 1. The K-maps for J3, K3,J2 and K2
based on the excitation table and the minimal expression obtained from them are shown in fig.
Step5: draw the logic diagram: a logic diagram using those minimal expressions can be drawn as
shown in fig.
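A behavioral check of this 3-bit up-down design is sketched below (my own sketch; the excitation expressions J1=K1=1, J2=K2 = Q1.M + Q1'.M' and J3=K3 = Q2Q1.M + Q2'Q1'.M' are the usual minimal results assumed here, since the K-maps themselves appear only in the figures).

```python
def jk_next(q, j, k):
    return (j & (q ^ 1)) | ((k ^ 1) & q)     # Q(next) = J.Q' + K'.Q

def count(q3, q2, q1, m):
    j1 = k1 = 1
    j2 = k2 = (q1 & m) | ((q1 ^ 1) & (m ^ 1))
    j3 = k3 = (q2 & q1 & m) | ((q2 ^ 1) & (q1 ^ 1) & (m ^ 1))
    # all three flip-flops are clocked simultaneously
    return jk_next(q3, j3, k3), jk_next(q2, j2, k2), jk_next(q1, j1, k1)

state = (0, 0, 0)
for _ in range(4):
    state = count(*state, m=1)     # count up: 001, 010, 011, 100
    print(state)
```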
Design of a mod-6 synchronous gray code counter using T FFs:
Step 1: The number of flip-flops: we know that the counting sequence for a modulo-6 gray code
counter is 000, 001, 011, 010, 110, and 111. It requires n = 3 FFs (N <= 2^n, i.e., 6 <= 2^3). Three FFs can
have 8 states, so the remaining two states, 101 and 100, are invalid. The entries for the excitations
corresponding to the invalid states are don't cares.
Step 2: The state diagram: the state diagram of the mod-6 gray code counter is drawn as shown
in fig.
Step3: type of flip-flop and the excitation table: T flip-flops are selected and the excitation table
of the mod-6 gray code counter using T-flip-flops is written as shown in fig.
PS (Q3 Q2 Q1)   NS (Q3 Q2 Q1)   Required excitations (T3 T2 T1)
0 0 0 0 0 1 0 0 1
0 0 1 0 1 1 0 1 0
0 1 1 0 1 0 0 0 1
0 1 0 1 1 0 1 0 0
1 1 0 1 1 1 0 0 1
1 1 1 0 0 0 1 1 1
Step 4: The minimal expressions: the K-maps for the excitations of FFs T3, T2, and T1 in terms of
the outputs of FFs Q3, Q2, and Q1, their minimization, and the minimal expressions for the excitations
obtained from them are shown in fig.
Step5: the logic diagram: the logic diagram based on those minimal expressions is drawn as
shown in fig.
Design of a synchronous BCD Up-Down counter using FFs:
Step 1: The number of flip-flops: a BCD counter is a mod-10 counter; it has 10 states (0000 through
1001) and so it requires n = 4 FFs (N <= 2^n, i.e., 10 <= 2^4). Four FFs can have 16 states, so out of 16
states, six states (1010 through 1111) are invalid. For selecting the up and down modes, a control or mode
signal M is required: the counter counts up when M=1 and counts down when M=0. The clock signal is
applied to all FFs.
Step2: the state diagram: The state diagram of the mod-10 up-down counter is drawn as shown
in fig.
Step3: types of flip-flops and excitation table: T flip-flops are selected and the excitation table of
the modulo-10 up down counter using T flip-flops is drawn as shown in fig.
Step5: the logic diagram: the logic diagram based on the above equation is shown in fig.
Ring counter: this is the simplest shift register counter. The basic ring counter using D flip-
flops is shown in fig.; the realization of this counter using JK FFs is also possible. The Q output of each
stage is connected to the D input of the next stage, and the Q output of the last stage is fed back to
the D input of the first stage.
Only a single 1 is in the register, and it is made to circulate around the register as long as clock
pulses are applied. Initially the first FF is preset to a 1, so the initial state is 1000, i.e., Q1=1,
Q2=0, Q3=0, Q4=0. After each clock pulse, the contents of the register are shifted to the right by
one bit and Q4 is shifted back to Q1. The sequence repeats after four clock pulses. The number
of distinct states in the ring counter, i.e., the mod of the ring counter, is equal to the number of FFs
used in the counter. An n-bit ring counter can count only n states, whereas an n-bit ripple counter can
count 2^n states. So, the ring counter is uneconomical compared to a ripple counter, but it has the
advantage of requiring no decoder, since we can read the count by simply noting which FF is set.
Since it is entirely a synchronous operation and requires no gates external to the FFs, it has the further
advantage of being very fast.
Timing diagram:
Johnson counter (twisted ring counter): this counter is obtained from a serial-in, serial-out shift
register by providing feedback from the inverted output of the last FF to the D input of the first FF.
The Q output of each stage is connected to the D input of the next stage, but the Q' output of the last
stage is connected to the D input of the first stage; therefore, it is called a twisted ring counter. This
feedback arrangement produces a unique sequence of states.
The logic diagram of a 4-bit Johnson counter using D FF is shown in fig. the realization
of the same using J-K FFs is shown in fig.. The state diagram and the sequence table are shown
in figure. The timing diagram of a Johnson counter is shown in figure.
Let all the FFs initially be reset, i.e., let the state of the counter be 0000. After each clock
pulse, the level of Q1 is shifted to Q2, the level of Q2 to Q3, Q3 to Q4, and the level of Q4' to Q1,
giving the sequence shown in fig.
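Both shift-register counters can be sketched behaviorally as follows (my own sketch): each shifts on every clock, the ring counter feeding Q4 back to D1 and the Johnson counter feeding Q4' back to D1.

```python
def ring_step(q):
    return [q[-1]] + q[:-1]            # Q4 -> Q1, the rest shift right

def johnson_step(q):
    return [q[-1] ^ 1] + q[:-1]        # Q4' -> Q1 (twisted feedback)

r = [1, 0, 0, 0]                       # ring counter: a single 1 is preset
for _ in range(4):
    r = ring_step(r)
    print(r)                           # 0100, 0010, 0001, 1000 -- the 1 circulates

j = [0, 0, 0, 0]                       # Johnson counter starts from all zeros
for _ in range(4):
    j = johnson_step(j)
    print(j)                           # 1000, 1100, 1110, 1111, then 0111, 0011, ...
```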
1. Moore circuit: in this model, the output depends only on the present state of the flip-flops.
2. Mealy circuit: in this model, the output depends on both the present state of the flip-flops
and the inputs.
Sequential circuits are also called finite state machines (FSMs). This name is due to the fact that
the functional behavior of these circuits can be represented using a finite number of states.
State diagram: the state diagram or state graph is a pictorial representation of the relationships
between the present state, the input, the next state, and the output of a sequential circuit. The
state diagram is a pictorial representation of the behavior of a sequential circuit.
A state is represented by a circle (also called a node or vertex), and the transitions between
states are indicated by directed lines connecting the circles. A directed line connecting a circle with
itself indicates that the next state is the same as the present state. The binary number inside each
circle identifies the state represented by the circle. The directed lines are labeled with two binary
numbers separated by a symbol (/): the input value applied during the present state is labeled
before the symbol and the output value during the present state is labeled after the symbol.
PS | NS, O/P (X=0) | NS, O/P (X=1)
a  | a,0           | b,0
b  | b,1           | c,0
c  | d,0           | c,1
d  | d,0           | a,1
In the case of a Moore circuit, the directed lines are labeled with only one binary number,
representing the input that causes the state transition. The output is indicated within the circle,
below the present state, because the output depends only on the present state and not on the input.
PS | NS (X=0) | NS (X=1) | O/P
a  | a        | b        | 0
b  | b        | c        | 0
c  | d        | c        | 1
d  | a        | d        | 0
Serial adder:
Steps 2 and 3: state diagram and state table: let A designate the state of the serial adder at ti if a
carry 0 was generated at ti-1, and let B designate the state of the serial adder at ti if a carry 1 was
generated at ti-1. The state of the adder at the time when the present inputs are applied is referred
to as the present state (PS), and the state to which the adder goes as a result of the new carry value
is referred to as the next state (NS).
The behavior of serial adder may be described by the state diagram and state table.
PS | NS, O/P
   | X1X2=00 | X1X2=01 | X1X2=10 | X1X2=11
A  | A,0     | A,1     | A,1     | B,0
B  | A,1     | B,0     | B,0     | B,1
If the machine is in state B, i.e., the carry from the previous addition is a 1, then inputs X1=0 and X2=1
give a sum of 0 and a carry of 1, so the machine remains in state B and outputs a 0. Inputs X1=1 and
X2=0 give a sum of 0 and a carry of 1, so the machine remains in state B and outputs a 0. Inputs X1=1
and X2=1 give a sum of 1 and a carry of 1, so the machine remains in state B and outputs a 1. Inputs
X1=0 and X2=0 give a sum of 1 and a carry of 0, so the machine goes to state A and outputs a 1. The
state table gives the same information.
Step 4: reduced standard form state table: the machine is already in this form, so nothing needs to
be done.
Step 5: state assignment and transition/output table: assigning A = 0 and B = 1, the transition and output table is:
y | Y for X1X2 = 00, 01, 10, 11 | Z for X1X2 = 00, 01, 10, 11
0 | 0  0  0  1                  | 0  1  1  0
1 | 0  1  1  1                  | 1  0  0  1
Step 6: choose the type of FF and form the excitation table: selecting a D flip-flop as the memory
element, the excitation table is as shown in fig.
PS I/P NS I/P-FF O/P
y x1 x2 Y D Z
0 0 0 0 0 0
0 0 1 0 0 1
0 1 0 0 0 1
0 1 1 1 1 0
1 0 0 0 0 1
1 0 1 1 1 0
1 1 0 1 1 0
1 1 1 1 1 1
Sequence detector:
Step1: word statement of the problem: a sequence detector is a sequential machine which
produces an output 1 every time the desired sequence is detected and an output 0 at all other
times
Suppose we want to design a sequence detector to detect the sequence 1010, with overlapping
permitted; for example, if the input sequence is 01101010, the corresponding output sequence is
00000101.
Steps 2 and 3: state diagram and state table: the state diagram and the state table of the sequence
detector are drawn. At time t1, the machine is assumed to be in the initial state, designated arbitrarily
as A. While in this state, the machine can receive the first input bit, either a 0 or a 1. If the input bit is
a 0, the machine does not start the detection process, because the first bit in the desired sequence is
a 1. If the input bit is a 1, the detection process starts.
PS NS,Z
X=0 X=1
A A,0 B,0
B C,0 B,0
C A,0 D,0
D C,1 B,0
PS (y1y2) | NS (Y1Y2), X=0 | NS (Y1Y2), X=1 | O/P Z, X=0 | O/P Z, X=1
A = 00    | 00             | 01             | 0          | 0
B = 01    | 10             | 01             | 0          | 0
C = 10    | 00             | 11             | 0          | 0
D = 11    | 10             | 01             | 1          | 0
Step6: choose type of flip-flops and form the excitation table: select the D flip-flops as memory
elements and draw the excitation table.
PS      | I/P | NS      | FF inputs | O/P
y1  y2  | X   | Y1  Y2  | D1  D2    | Z
0 0 0 0 0 0 0 0
0 0 1 0 1 0 1 0
0 1 0 1 0 1 0 0
0 1 1 0 1 0 1 0
1 0 0 0 0 0 0 0
1 0 1 1 1 1 1 0
1 1 0 1 0 1 0 1
1 1 1 0 1 0 1 0
Step 7: K-maps and minimal functions: based on the contents of the excitation table, draw the K-maps
and simplify them to obtain the minimal expressions for D1 and D2 in terms of y1, y2 and x,
as shown in fig. The expression for z (z = y1y2x') can be obtained directly from the table.
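The design can be verified behaviorally with the sketch below (my own): the state table above is encoded directly as a dictionary and driven with the example input 01101010, which produces the output 00000101.

```python
state_table = {            # (present state, input) -> (next state, output)
    ('A', 0): ('A', 0), ('A', 1): ('B', 0),
    ('B', 0): ('C', 0), ('B', 1): ('B', 0),
    ('C', 0): ('A', 0), ('C', 1): ('D', 0),
    ('D', 0): ('C', 1), ('D', 1): ('B', 0),
}

def detect(bits, state='A'):
    out = []
    for x in bits:
        state, z = state_table[(state, x)]
        out.append(z)
    return out

print(detect([0, 1, 1, 0, 1, 0, 1, 0]))   # [0, 0, 0, 0, 0, 1, 0, 1]
```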
A finite state machine can be defined as a type of machine whose past histories can affect its future
behavior in a finite number of ways. To clarify, consider the example of a binary full adder. Its
output depends on the present input and the carry generated from the previous input. It may have
a large number of previous input histories, but they can be divided into just two types: (i) histories
that produce a carry of 0 and (ii) histories that produce a carry of 1.
The most general model of a sequential circuit has inputs, outputs and internal states. Such a
sequential circuit is referred to as a finite state machine (FSM). A finite state machine is an abstract
model that describes the synchronous sequential machine. The fig. shows the block diagram of a
finite state model. X1, X2, ..., Xl are the inputs and Z1, Z2, ..., Zm are the outputs. y1, y2, ..., yk are
the present state variables and Y1, Y2, ..., Yk represent the next state.
Let a finite state machine have n states. Let a long sequence of input be given to the machine.
The machine will progress starting from its beginning state to the next states according to the
state transitions. However, after some time the input string may become longer than n, the number of
states. As there are only n states in the machine, it must come to a state it has previously been in,
and from this point, if the input remains the same, the machine will function in a periodically
repeating fashion. From this, the conclusion can be drawn that for an n-state machine the output will
become periodic after a number of clock pulses less than or equal to n. States are memory
elements. As the number of states of a finite state machine is finite, only a finite number of memory
elements are required to design a finite state machine.
Limitations:
1. Periodic sequences and the limitation of finite states: with an n-state machine, we can generate
only periodic sequences whose period is n states or smaller. For example, in a 6-state machine,
the longest periodic sequence we can have is 0,1,2,3,4,5,0,1,....
2. No infinite sequence: consider an infinite sequence such that the output is 1 when and
only when the number of inputs received so far is equal to P(P+1)/2 for P=1,2,3….,i.e.,
the desired input-output sequence has the following form:
Input: x x x x x x x x x x x x x x x x x x x x x x
Output: 1 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1
Mealy model:
When the output of the sequential circuit depends on both the present state of the flip-flops
and on the inputs, the sequential circuit is referred to as a Mealy circuit or Mealy machine.
The fig. shows the logic diagram of the Mealy model. Notice that the output depends upon the
present state as well as the present inputs. We can easily see that changes in the input during
the clock pulse cannot affect the state of the flip-flops; they can, however, affect the output of the
circuit. If the input variations are not synchronized with the clock, the derived output will also not be
synchronized with the clock and we get false outputs. The false outputs can be eliminated by
allowing the input to change only at the active transition of the clock.
Fig: logic diagram of a mealy model
The behavior of a clocked sequential circuit can be described algebraically by means of state
equations. A state equation specifies the next state as a function of the present state and inputs.
The mealy model shown in fig. consists of two D flip-flops, an input x and an output z. since the
D input of a flip-flop determines the value of the next state, the state equations for the model can
be written as
Y1(t+1) = y1(t)x(t) + y2(t)x(t)
Y2(t+1) = y1'(t)x(t)
The stable table of the mealy model based on the above state equations and output equation is
shown in fig. the state diagram based on the state table is shown in fig.
In general form, the mealy circuit can be represented with its block schematic as shown in below
fig.
Moore model: when the output of the sequential circuit depends only upon the present state of
the flip-flops, the sequential circuit is referred to as a Moore circuit or Moore machine.
Notice that the output depends only on the present state. It does not depend upon the input at
all; the input is used only to determine the inputs of the flip-flops, not to determine the
output. The circuit shown has two T flip-flops, one input x, and one output z. It can be described
algebraically by two input equations and an output equation:
T1=y2x
T2=x
Z=y1y2
The state table of the Moore model based on the above state equations and output equation is
shown in fig.
In general form , the Moore circuit can be represented with its block schematic as shown in
below fig.
Terminal state: looking at the state diagram , we observe that no such input sequence exists
which can take the sequential machine out of state E and thus state E is said to be a terminal
state.
Strongly-connected machine: in sequential machines, it often happens that certain subsets of states are
not reachable from other subsets of states, even if the machine does not contain any terminal
state. If for every pair of states si, sj of a sequential machine there exists an input sequence which
takes the machine M from si to sj, then the sequential machine is said to be strongly connected.
State equivalence theorem: it states that two states s1 and s2 are equivalent if, for every possible
input sequence applied, the machine goes to the same next state and generates the same output.
That is,
if s1(t+1) = s2(t+1) and z1 = z2, then s1 = s2.
Here the outputs are different after the 2nd state transition, and hence states A and E are
2-distinguishable. Again consider states A and C; here the outputs are different after the 3rd
transition, and hence states A and C are 3-distinguishable.
The concept of k-distinguishability leads directly to the definition of k-equivalence: states that are
not k-distinguishable are said to be k-equivalent.
PS NS,Z
X=0 X=1
A C,0 F,0
B D,1 F,0
C E,0 B,0
D B,1 E,0
E D,0 B,0
F D,1 B,0
Merger graphs:
The merger graph is a state reducing tool used to reduce states in the incompletely specified
machine. The merger graph is defined as follows.
1. Each state in the state table is represented by a vertex in the merger graph. So it contains
the same number of vertices as the state table contains states.
2. Each compatible state pair is indicated by an unbroken line drawn between the two state
vertices.
3. Every potentially compatible state pair, with non-conflicting outputs but with different
next states, is connected by a broken line. The implied states are written in the line break
between the two potentially compatible states.
4. If two states are incompatible no connecting line is drawn.
Consider the state table of an incompletely specified machine shown in fig.; the corresponding
merger graph is shown in fig.
State table:
PS NS,Z
I1 I2 I3 I4
A … E,1 B,1 ….
B … D,1 … F,1
C F,1 … … …
D … … C,1 …
E C,0 … A,0 F,1
F D,0 A,1 B,0 …
States A and B have non-conflicting outputs, but their successors under input I2 are compatible only
if the implied states D and E are compatible. So, a broken line is drawn from A to B with DE written in
between. States A and C are compatible because the next-state and output entries of states A and
C are not conflicting; therefore, an unbroken line is drawn between nodes A and C. States A and D have
non-conflicting outputs, but their successors under input I3 are B and C, hence A and D are joined by a
broken line with BC entered in between.
Two states are said to be incompatible if no line is drawn between them. If implied states are
incompatible, they are crossed and the corresponding line is ignored. Here, implied states D and
E are incompatible, so states A and B are also incompatible. Next, it is necessary to check
whether the incompatibility of A and B invalidates any other broken line. Observe that
states E and F also become incompatible because the implied pair AB is incompatible. The
broken lines which remain in the graph after all the implied pairs have been verified to be
compatible are regarded as complete lines.
After checking all possibilities of incompatibility, the merger graph gives the following seven
compatible pairs.
These compatible pairs are further checked for larger compatibles. For example, the pairs
(B,C), (B,D) and (C,D) are compatible, so (B,C,D) is also compatible. Also, the pairs (A,C), (A,D) and (C,D)
are compatible, so (A,C,D) is also compatible. In this way the entire set of compatibles of the
sequential machine can be generated from its compatible pairs.
To find the minimal set of compatibles for state reduction, it is useful to find what are called the
maximal compatibles. A set of compatible states is said to be maximal if it is not
completely covered by any other set of compatible states. The maximal compatibles can be
found by looking in the merger graph for complete polygons which are not contained within any
higher-order complete polygons. For example, only the triangles (A,C,D) and (B,C,D) are of higher order.
The set of maximal compatibles for this sequential machine is given as:
Example:
Figure: state table
State Minimization:
Completely Specified Machines
Two states, si and sj of machine M are distinguishable if and only if there exists a finite
input sequence which when applied to M causes different output sequences depending on
whether M started in si or sj.
Such a sequence is called a distinguishing sequence for (si, sj).
If there exists a distinguishing sequence of length k for (si, sj), they are said to be k-
distinguishable.
EXAMPLE:
• states A and B are 1-distinguishable, since a 1 input applied to A yields an output 1,
versus an output 0 from B.
• states A and E are 3-distinguishable, since input sequence 111 applied to A yields output
100, versus an output 101 from E.
• States si and sj (si ~ sj ) are said to be equivalent iff no distinguishing sequence exists for
(si, sj ).
• If si ~ sj and sj ~ sk, then si ~ sk. So state equivalence is an equivalence relation (i.e. it is a
reflexive, symmetric and transitive relation).
• An equivalence relation partitions the elements of a set into equivalence classes.
• Property: If si ~sj, their corresponding X-successors, for all inputs X, are also equivalent.
• Procedure: Group states of M so that two states are in the same group iff they are
equivalent (forms a partition of the states).
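This grouping procedure can be sketched in software as a partition refinement (my own compact sketch, not from the text): start from the partition induced by the output rows and repeatedly split any block whose members have X-successors in different blocks.

```python
def minimize(states, delta, out):
    """delta[s][x] = next state, out[s][x] = output, for every input x."""
    inputs = list(next(iter(delta.values())).keys())
    # initial partition (1-equivalence): states with identical output rows
    blocks = {}
    for s in states:
        blocks.setdefault(tuple(out[s][x] for x in inputs), []).append(s)
    partition = list(blocks.values())
    changed = True
    while changed:                       # refine until no block splits
        changed = False
        def block_of(s):
            return next(i for i, b in enumerate(partition) if s in b)
        new_partition = []
        for b in partition:
            groups = {}
            for s in b:
                # states stay together only if their successors fall in the same blocks
                key = tuple(block_of(delta[s][x]) for x in inputs)
                groups.setdefault(key, []).append(s)
            new_partition.extend(groups.values())
            changed |= len(groups) > 1
        partition = new_partition
    return partition                     # each block is a set of equivalent states

# toy machine (hypothetical): s2 and s3 turn out to be equivalent
states = ['s1', 's2', 's3']
delta = {'s1': {0: 's2', 1: 's3'}, 's2': {0: 's1', 1: 's2'}, 's3': {0: 's1', 1: 's3'}}
out   = {'s1': {0: 0, 1: 0}, 's2': {0: 1, 1: 0}, 's3': {0: 1, 1: 0}}
print(minimize(states, delta, out))      # [['s1'], ['s2', 's3']]
```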
Machine M:
First, attempt to reduce this case to the usual state minimization of completely specified machines.
Brute force method: force the don't cares to all their possible values and choose the smallest of the completely specified machines so obtained.
In this example, this means state-minimizing the two completely specified machines obtained from M by setting the don't care to either 0 or 1.
States s1 and s2 are equivalent only if s3 and s2 are equivalent; but s3 and s2 assert different outputs under input 0, so s1 and s2 are not equivalent.
States s1 and s3 are not equivalent either.
So this completely specified machine cannot be reduced further (3 states is the minimum).
Now suppose that the - is set to 1.
Machine M''red:
Machines M2 and M3 are formed by filling in the unspecified entry in M with 0 and 1, respectively.
Neither M2 nor M3 can be reduced.
Conclusion: M cannot be minimized further!
But is this conclusion correct?
Note that we want to 'merge' two states when, for any input sequence, they generate the same output sequence, but only where both outputs are specified.
Definition: A set of states is compatible if they agree on the outputs where they are all specified.
Machine M'':
In this case we have two compatible sets: A = (s1, s2) and B = (s3, s2). A reduced machine Mred
can be built as follows.
Machine Mred
A set of compatibles that covers all states is: (s3s6), (s4s6), (s1s6), (s4s5), (s2s5).
But (s3s6) requires (s4s6); (s4s6) requires (s4s5); (s4s5) requires (s1s5); (s1s6) requires (s1s2); (s1s2) requires (s3s6); and (s2s5) requires (s1s2).
So this selection of compatibles requires too many other compatibles.
Another set of compatibles that covers all states is (s1s2s5), (s3s6), (s4s5).
But (s1s2s5) requires (s3s6); (s3s6) requires (s4s6); (s4s6) requires (s4s5); and (s4s5) requires (s1s5).
So (s4s6) and (s1s5) must also be selected.
The selection of a minimum closed set of compatibles is therefore a binate covering problem; a small closure-checking sketch follows.
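To make the covering and closure conditions concrete, the Python fragment below (an added illustration; REQUIRES, ALL_STATES and check_selection are names chosen here) encodes the 'requires' relation quoted above and tests a candidate selection of compatibles for state coverage and closure.

from itertools import combinations

# The "requires" (implied-compatible) relation quoted in the text above.
REQUIRES = {
    ("s3", "s6"): [("s4", "s6")],
    ("s4", "s6"): [("s4", "s5")],
    ("s4", "s5"): [("s1", "s5")],
    ("s1", "s6"): [("s1", "s2")],
    ("s1", "s2"): [("s3", "s6")],
    ("s2", "s5"): [("s1", "s2")],
}
ALL_STATES = {"s1", "s2", "s3", "s4", "s5", "s6"}

def requirements(compatible):
    # Implied pairs of a compatible: the union of the requirements of its pairs.
    req = set()
    for pair in combinations(sorted(compatible), 2):
        req.update(REQUIRES.get(pair, []))
    return req

def check_selection(selection):
    """Check that a selection of compatibles covers every state and is closed:
    every implied pair must be contained in some selected compatible."""
    covered = set().union(*(set(c) for c in selection))
    missing_states = ALL_STATES - covered
    missing_closure = {
        needed
        for c in selection
        for needed in requirements(c)
        if not any(set(needed) <= set(sel) for sel in selection)
    }
    return missing_states, missing_closure

# Second selection from the text: (s1 s2 s5), (s3 s6), (s4 s5).
print(check_selection([("s1", "s2", "s5"), ("s3", "s6"), ("s4", "s5")]))
# -> (set(), {('s4', 's6')}): all states are covered, but closure fails because
#    (s3 s6) requires (s4 s6), which is not contained in any selected compatible.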
When a next state is unspecified, the future behavior of the machine is unpredictable. This suggests the definition of an admissible input sequence.
Definition. An input sequence is admissible for a starting state of a machine if no unspecified next state is encountered, except possibly at the final step.
Definition. State si of machine M1 is said to cover, or contain, state sj of M2 provided
1. every input sequence admissible to sj is also admissible to si, and
2. its application to both M1 and M2 (initially in si and sj, respectively) results in identical output sequences whenever the outputs of M2 are specified.
Definition. Machine M1 is said to cover machine M2 if for every state sj in M2, there is a
corresponding state si in M1 such that si covers sj.
The binary information stored in the digital system can be classified as either data or
control information.
The data information is manipulated by performing arithmetic, logic, shift and other data
processing tasks.
The control information provides the command signals that control the various operations on the data in order to accomplish the desired data processing task.
To design a digital system, therefore, we have to design two subsystems: the data path subsystem and the control subsystem.
ASM CHART:
A special flow chart that has been developed specifically to define digital hardware algorithms is called an ASM (Algorithmic State Machine) chart.
A hardware algorithm is a step-by-step procedure to implement the desired task.
A conventional flow chart describes the sequence of procedural steps and decision paths for an algorithm without concern for their time relationship.
An ASM chart, by contrast, describes the sequence of events as well as the timing relationship between the states of the sequential controller and the events that occur while going from one state to the next.
An ASM chart consists of three basic elements (their use within one clock cycle is sketched after the figure below):
1. State box
2. Decision box
3. Conditional box
Figure: state box and decision box symbols.
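As a rough behavioural picture (a hypothetical illustration, not taken from the text), the Python fragment below shows how the three elements act within a single clock cycle of a controller: the state box performs the unconditional operations of the current state, a decision box tests an input to choose the exit path, and a conditional box performs extra operations in the same cycle, but only on the chosen path. The states T0 and T1, the register A and the function asm_step are all invented for the sketch.

def asm_step(state, inputs, regs):
    """One clock cycle of a tiny hypothetical ASM: state T0 waits for 'start';
    when start = 1 it clears register A (conditional box) and moves to T1."""
    if state == "T0":
        # state box of T0: no unconditional operation in this example
        if inputs["start"]:          # decision box: test the start signal
            regs["A"] = 0            # conditional box: clear A in this same cycle
            return "T1", regs
        return "T0", regs            # start = 0: remain in T0
    if state == "T1":
        regs["A"] += 1               # state box of T1: unconditional increment
        return "T0", regs
    raise ValueError(state)

state, regs = "T0", {"A": 7}
state, regs = asm_step(state, {"start": 1}, regs)   # T0 -> T1, A cleared to 0
state, regs = asm_step(state, {"start": 0}, regs)   # T1 -> T0, A incremented to 1
print(state, regs)                                  # T0 {'A': 1}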
BINARY MULTIPLIER
Data path subsystem for binary multiplier
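Since the data path figure is not reproduced here, the following Python fragment gives a behavioural model of the standard shift-and-add multiplier data path that such a figure normally shows. The register names A (partial product), B (multiplicand), Q (multiplier), C (adder carry) and P (bit counter) follow that common convention and, like the function shift_add_multiply, are assumptions of this sketch rather than part of the notes.

def shift_add_multiply(multiplicand, multiplier, n):
    """Multiply two n-bit unsigned numbers the way the data path does it:
    examine Q0, conditionally add B to A, then shift C, A, Q right."""
    mask = (1 << n) - 1
    B = multiplicand & mask          # multiplicand register
    Q = multiplier & mask            # multiplier register
    A, C, P = 0, 0, n                # partial product, carry, bit counter
    while P > 0:
        if Q & 1:                    # decision on Q0 (LSB of the multiplier)
            total = A + B            # conditional add of the multiplicand
            A, C = total & mask, total >> n
        # shift right through C, A and Q: C -> A(n-1), old A0 -> Q(n-1), Q0 dropped
        Q = (Q >> 1) | ((A & 1) << (n - 1))
        A = (A >> 1) | (C << (n - 1))
        C = 0
        P -= 1                       # one multiplier bit processed
    return (A << n) | Q              # 2n-bit product held in A:Q

assert shift_add_multiply(0b1011, 0b1101, 4) == 0b1011 * 0b1101  # 11 * 13 = 143
print(shift_add_multiply(0b1011, 0b1101, 4))                     # 143

Each pass of the loop corresponds to one pass through the controller's add and shift states: the decision on Q0 selects whether the adder output is loaded into A, and the combined C, A, Q register is then shifted right once.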