Onur Digitaldesign 2020 Lecture4 Combinational Logic Afterlecture

The document provides an overview of the lecture on combinational logic circuits. It assigns readings on combinational logic and hardware description languages for the current and following weeks. It notes that students should complete readings in digital logic and computer architecture textbooks by the end of the following week. The document also encourages students to watch the instructor's inaugural lecture for an optional 1% extra credit assignment.


Digital Design & Computer Arch.

Lecture 4: Combinational Logic I

Prof. Onur Mutlu

ETH Zürich
Spring 2020
28 February 2020
Assignment: Required Readings
n This week
q Combinational Logic
n P&P Chapter 3 until 3.3 + H&H Chapter 2

n Next week
q Hardware Description Languages and Verilog
n H&H Chapter 4 until 4.3 and 4.5
q Sequential Logic
n P&P Chapter 3.4 until end + H&H Chapter 3 in full

n By the end of next week, make sure you are done with
q P&P Chapters 1-3 + H&H Chapters 1-4

2
A Note on Hardware vs. Software
n This course might seem like it is only “Computer Hardware”

n However, you will be much more capable if you master both


hardware and software (and the interface between them)
q Can develop better software if you understand the hardware
q Can design better hardware if you understand the software
q Can design a better computing system if you understand both

n This course covers the HW/SW interface and microarchitecture


q We will focus on tradeoffs and how they affect software

n Recall the four mysteries

3
Recap: Four Mysteries
n Meltdown & Spectre (2017-2018)

n Rowhammer (2012-2014)

n Memory Performance Attacks (2006-2007)

n Memories Forget: Refresh & RAIDR (2011-2012)

4
Computer Architecture as an
Enabler of the Future

5
Assignment: Required Lecture Video
n Why study computer architecture?
n Why is it important?
n Future Computing Architectures

n Required Assignment
q Watch Prof. Mutlu’s inaugural lecture at ETH and understand it
q https://www.youtube.com/watch?v=kgiZlSOcGFM

n Optional Assignment – for 1% extra credit


q Write a 1-page summary of the lecture and email us
n What are your key takeaways?
n What did you learn?
n What did you like or dislike?
n Submit your summary to Moodle – Deadline: April 1
6
… but, first …
n Let’s understand the fundamentals…

n You can change the world only if you understand it well


enough…
q Especially the basics (fundamentals)
q Past and present dominant paradigms
q And, their advantages and shortcomings – tradeoffs
q And, what remains fundamental across generations
q And, what techniques you can use and develop to solve
problems

7
Fundamental Concepts

8
What is A Computer?
n Three key components
n Computation
n Communication
n Storage/memory
Burks, Goldstine, von Neumann, “Preliminary discussion of the
logical design of an electronic computing instrument,” 1946.

9
Image source: https://lbsitbytes2010.wordpress.com/2013/03/29/john-von-neumann-roll-no-15/
What is A Computer?
n We will cover all three components

Processing: control (sequencing) + datapath
Memory: program and data
I/O
11
Recall: The Transformation Hierarchy

Problem
Algorithm
Program/Language
System Software
SW/HW Interface (Computer Architecture: expanded view vs. narrow view)
Micro-architecture
Logic
Devices
Electrons

12
What We Will Cover (I)
n Combinational Logic Design

n Hardware Description Languages (Verilog)

n Sequential Logic Design

n Timing and Verification

n ISA (MIPS and LC3b)

n MIPS Assembly Programming

(Sidebar: the transformation hierarchy: Problem, Algorithm, Program/Language, System Software, SW/HW Interface, Micro-architecture, Logic, Devices, Electrons)

13
What We Will Cover (II)
n Microarchitecture Basics: Single-cycle

n Multi-cycle and Microprogrammed Microarchitectures

n Pipelining

n Issues in Pipelining: Control & Data Dependence Handling,


State Maintenance and Recovery, …

n Out-of-Order Execution

n Other Processing Paradigms (SIMD, VLIW, Systolic, …)

n Memory and Caches

n Virtual Memory
14
Processing Paradigms We Will Cover
n Pipelining
n Out-of-order execution
n Dataflow (at the ISA level)
n Superscalar Execution
n VLIW
n SIMD Processing (Vector & array, GPUs)
n Decoupled Access Execute
n Systolic Arrays

(Sidebar: the transformation hierarchy: Problem, Algorithm, Program/Language, System Software, SW/HW Interface, Micro-architecture, Logic, Devices, Electrons)

15
Combinational Logic Circuits
and Design

16
What Will We Learn Today?
n Building blocks of modern computers
q Transistors
q Logic gates

n Boolean algebra

n Combinational circuits

n How to use Boolean algebra to represent combinational


circuits

n Minimizing logic circuits (if time permits)


17
(Micro)-Processors

18
FPGAs

19
Custom ASICs

20
They All Look the Same

                   Microprocessors        FPGAs                   ASICs
In short:          Common building        Reconfigurable          You customize
                   block of computers     hardware, flexible      everything
Program            minutes                days                    months
Development Time
Performance        o                      +                       ++
Good for           Ubiquitous             Prototyping             Mass production,
                   Simple to use         Small volume            Max performance
Programming        Executable file        Bit file                Design masks
Languages          C/C++/Java/…           Verilog/VHDL            Verilog/VHDL
Main Companies     Intel, ARM, AMD        Xilinx, Altera,         TSMC, UMC, ST,
                                          Lattice                 Globalfoundries

In this course: we want to learn how microprocessors work, by programming FPGAs, using the Verilog/VHDL language

26
Building Blocks of Modern
Computers

27
Transistors

28
Transistors
n Computers are built from very large numbers of very
simple structures
q Intel’s Pentium IV microprocessor, first offered for sale in 2000, was made up of more than 42 million MOS transistors
q Intel’s Core i7 Broadwell-E, offered for sale in 2016, is made up of more than 3.2 billion MOS transistors

n This lecture
q How the MOS transistor works (as a logic element)
q How these transistors are connected to form logic gates
q How logic gates are interconnected to form larger units that are needed to construct a computer

(Sidebar: the transformation hierarchy: Problem, Algorithm, Program/Language, Runtime System (VM, OS, MM), ISA (Architecture), Microarchitecture, Logic, Devices, Electrons)
29
MOS Transistor
n By combining
q Conductors (Metal)
q Insulators (Oxide)
q Semiconductors Gate
Source Drain
n We get a Transistor (MOS)

n Why is this useful?


q We can combine many of these to realize simple logic gates
n The electrical properties of metal-oxide semiconductors are
well beyond the scope of what we want to understand in
this course
q They are below our lowest level of abstraction
30
Different Types of MOS Transistors
n There are two types of MOS transistors: n-type and p-type

n-type p-type

n They both operate “logically,” very similar to the way wall


switches work

31
How Does a Transistor Work?
Wall Switch

Power Supply

q In order for the lamp to glow, electrons must flow


q In order for electrons to flow, there must be a closed circuit
from the power supply to the lamp and back to the power
supply
q The lamp can be turned on and off by simply manipulating the
wall switch to make or break the closed circuit

32
How Does a Transistor Work?
n Instead of the wall switch, we could use an n-type or a p-
type MOS transistor to make or break the closed circuit
Drain
Gate
Source

Schematic of an n-type MOS transistor

If the gate of an n-type transistor is supplied with a high voltage, the connection from source to drain acts like a piece of wire (depending on the technology, a “high” voltage is 0.3V to 3V)

If the gate of the n-type transistor is supplied with 0V, the connection between the source and drain is broken

33
How Does a Transistor Work?
n The n-type transistor in a circuit with a battery and a bulb

3
0 Volt
Gate

Power Supply
Shorthand notation

n The p-type transistor works in exactly the opposite fashion


from the n-type transistor
Drain Drain
The circuit is closed The circuit is closed
when the gate is when the gate is
supplied with 3V Gate Gate
supplied with 0V
n-type p-type
Source Source
34
Logic Gates

35
One Level Higher in the Abstraction
n Now, we know how a MOS transistor works
n How do we build logic out of MOS transistors?
n We construct basic logic structures out of individual MOS transistors

n These logical units are named logic gates
q They implement simple Boolean functions

(Sidebar: the transformation hierarchy: Problem, Algorithm, Program/Language, Runtime System (VM, OS, MM), ISA (Architecture), Microarchitecture, Logic, Devices, Electrons)

36
Making Logic Blocks Using CMOS Technology
n Modern computers use both n-type and p-type transistors,
i.e. Complementary MOS (CMOS) technology
nMOS + pMOS = CMOS

n The simplest logic structure that exists in a modern


computer 3V

p-type

In (A) Out (Y) What does this circuit do?


n-type

0V
37
Functionality of Our CMOS Circuit
What happens when the input is connected to 0V?

3V 3V
p-type transistor
pulls the output up

0V Out (Y) Y = 3V

0V 0V

38
Functionality of Our CMOS Circuit
What happens when the input is connected to 3V?

3V 3V

A= 3V Out (Y) Y = 0V

n-type transistor pulls


the output down

0V 0V

39
CMOS NOT Gate
3V
n This is actually the CMOS NOT Gate
n Why do we call it NOT? P
q If A = 0V then Y = 3V In (A) Out (Y)
q If A = 3V then Y = 0V
N
n Digital circuit: one possible interpretation
q Interpret 0V as logical (binary) 0 value
0V
q Interpret 3V as logical (binary) 1 value

A P N Y
0 ON OFF 1 𝑌 = 𝐴̅
1 OFF ON 0

40
CMOS NOT Gate
3V
n This is actually the CMOS NOT Gate
n Why do we call it NOT? P
q If A = 0V then Y = 3V In (A) Out (Y)
q If A = 3V then Y = 0V
N
n Digital circuit: one possible interpretation
q Interpret 0V as logical (binary) 0 value NOT
0V
q Interpret 3V as logical (binary) 1 value
Y = Ā

Truth table: what would be the logical output of the circuit for each possible input

A | Y
0 | 1
1 | 0

We call it a NOT gate or an inverter
41
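As an aside, the voltage-to-logic interpretation on this slide can be modeled in a few lines of Python. This is an illustrative sketch, not part of the lecture; the 3V supply and the half-VDD input threshold are assumptions of the sketch:

```python
# Idealized CMOS inverter: interpret 0V as logical 0 and 3V (VDD) as logical 1.
# The half-VDD input threshold is an assumption of this sketch, not a device fact.
VDD = 3.0

def inverter(a_volts):
    a = 1 if a_volts >= VDD / 2 else 0   # digital interpretation of the input
    # A=0: the pMOS (P) pulls the output up to VDD; A=1: the nMOS (N) pulls it to 0V
    return VDD if a == 0 else 0.0

print(inverter(0.0), inverter(3.0))  # 3.0 0.0
```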
Another CMOS Gate: What Is This?
n Let’s build more complex gates!

3V

P1 P2
Out (Y)
In (A) N1

In (B) N2

0V

42
CMOS NAND Gate
n Let’s build more complex gates!
3V

P1 P2    Y = NOT(A • B) = NOT(AB)
Out (Y)
In (A) N1 A B P1 P2 N1 N2 Y
In (B) N2 0 0 ON ON OFF OFF 1
0 1 ON OFF OFF ON 1
0V
1 0 OFF ON ON OFF 1
1 1 OFF OFF ON ON 0
q P1 and P2 are in parallel; only one must be ON to pull the
output up to 3V
q N1 and N2 are connected in series; both must be ON to pull
the output to 0V
43
CMOS NAND Gate
n Let’s build more complex gates!
3V

P1 P2    NAND: Y = NOT(A • B) = NOT(AB)
Out (Y)
In (A) N1
In (B) N2

Gate symbol: inputs A and B, output Y = NOT(AB)
0V
A B Y
0 0 1
A 0 1 1
Y 1 0 1
B 1 1 0

44
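The parallel/series reasoning above can be checked mechanically. A small Python sketch (illustrative, not from the lecture) models each transistor network's ON condition:

```python
# CMOS NAND at the transistor-network level:
#   pMOS P1, P2 in parallel: pull up if ANY is ON (a pMOS is ON when its gate is 0)
#   nMOS N1, N2 in series: pull down only if ALL are ON (an nMOS is ON when its gate is 1)
def nand(a, b):
    pull_up = (a == 0) or (b == 0)     # P1 or P2 conducts
    pull_down = (a == 1) and (b == 1)  # both N1 and N2 conduct
    assert pull_up != pull_down        # exactly one network is ON at any time
    return 1 if pull_up else 0

print([nand(a, b) for a in (0, 1) for b in (0, 1)])  # [1, 1, 1, 0]
```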
CMOS AND Gate
n How can we make an AND gate?
A B Y    Y = A • B = AB
0 0 0
0 1 0 A
1 0 0 Y
1 1 1 B
3V 3V
We make an AND gate using
one NAND gate and P1 P2 P3
one NOT gate Out (Y)
In (A) N1 N3

In (B) N2
0V

0V

45
CMOS NOT, NAND, AND Gates

NOT: Y = Ā        NAND: Y = NOT(AB)        AND: Y = AB

A | Y             A B | Y                  A B | Y
0 | 1             0 0 | 1                  0 0 | 0
1 | 0             0 1 | 1                  0 1 | 0
                  1 0 | 1                  1 0 | 0
                  1 1 | 0                  1 1 | 1
3V
3V 3V 3V

P1 P2 P1 P2 P3
P Out (Y)
Out (Y)
In (A) Out (Y) In (A) N1 In (A) N1 N3
N
In (B) N2 In (B) N2
0V

0V 0V 0V

46
General CMOS Gate Structure
n The general form used to construct any inverting logic gate,
such as: NOT, NAND, or NOR
q The networks may consist of
transistors in series or in pMOS
parallel pull-up
network
q When transistors are in
parallel, the network is ON if inputs
one of the transistors is ON output
q When transistors are in series,
the network is ON only if all nMOS
pull-down
transistors are ON network

pMOS transistors are used for pull-up


nMOS transistors are used for pull-down

47
General CMOS Gate Structure (II)
n Exactly one network should be ON, and the other network
should be OFF at any given time

q If both networks are ON at the pMOS


same time, there is a short pull-up
network
circuit → likely incorrect
operation inputs
output
q If both networks are OFF at
nMOS
the same time, the output is pull-down
floating → undefined network

pMOS transistors are used for pull-up


nMOS transistors are used for pull-down

48
Digging Deeper: Why This Structure?
n MOS transistors are not perfect switches

n pMOS transistors pass 1’s well but 0’s poorly


n nMOS transistors pass 0’s well but 1’s poorly

n pMOS transistors are good at “pulling up” the output


n nMOS transistors are good at “pulling down” the output
3V 3V
pMOS
pull-up
P1 P2 P3
network
Out (Y)
In (A) N1 N3
inputs
output
In (B) N2
0V
nMOS
pull-down
0V
network

See Section 1.7 in H&H 49


Digging Deeper: Latency
n Which one is faster?
q Transistors in series
q Transistors in parallel

n Series connections are slower than parallel connections


q More resistance on the wire

n How do you alleviate this latency?


q See H&H Section 1.7.8 for an example: pseudo-nMOS Logic

50
Digging Deeper: Power Consumption
n Dynamic Power Consumption
q C * V² * f
n C = capacitance of the circuit (wires and gates)
n V = supply voltage
n f = charging frequency of the capacitor

n Static Power consumption


q V * Ileakage
n supply voltage * leakage current

n Energy Consumption
q Power * Time

n See more in H&H Chapter 1.8


51
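Plugging numbers into these formulas is straightforward; in this sketch every component value is invented purely for illustration:

```python
# Back-of-the-envelope power/energy numbers; every value here is invented.
C = 1e-9          # switched capacitance: 1 nF (assumed)
V = 1.0           # supply voltage: 1 V (assumed)
f = 2e9           # switching frequency: 2 GHz (assumed)
I_leakage = 0.5   # leakage current: 0.5 A (assumed)
t = 10.0          # run time: 10 s

dynamic_power = C * V**2 * f                   # ~2.0 W
static_power = V * I_leakage                   # 0.5 W
energy = (dynamic_power + static_power) * t    # ~25 J
print(dynamic_power, static_power, energy)
```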
Common Logic Gates

52
Larger Gates
n We can extend the gates to more than 2 inputs

n Example: 3-input AND gate, 10-input NOR gate

n See your readings

53
Aside: Moore’s Law:
Enabler of Many Gates on a Chip

54
An Enabler: Moore’s Law

Moore, “Cramming more components onto integrated circuits,”


Electronics Magazine, 1965. Component counts double every other year

Image source: Intel


55
Number of transistors on an integrated circuit doubles ~ every two years
Image source: Wikipedia
56
57
Recommended Reading
n Moore, “Cramming more components onto integrated
circuits,” Electronics Magazine, 1965.

n Only 3 pages

n A quote:
“With unit cost falling as the number of components per
circuit rises, by 1975 economics may dictate squeezing as
many as 65 000 components on a single silicon chip.”

n Another quote:
“Will it be possible to remove the heat generated by tens of
thousands of components in a single silicon chip?”
58
How Do We Keep Moore’s Law
n Manufacturing smaller transistors/structures
q Some structures are already a few atoms in size

n Developing materials with better properties


q Copper instead of Aluminum (better conductor)
q Hafnium Oxide, air for Insulators
q Making sure all materials are compatible is the challenge

n Optimizing the manufacturing steps


q How to use 193nm ultraviolet light to pattern 20nm structures

n New technologies
q FinFET, Gate All Around transistor, Single Electron Transistor…
59
Combinational Logic Circuits

60
We Can Now Build Logic Circuits
Now, we understand the workings of the basic logic gates

What is our next step?

Build some of the logic structures that are important


components of the microarchitecture of a computer!

n A logic circuit is composed of:


q Inputs
functional spec
inputs outputs
q Outputs timing spec

n Functional specification (describes relationship between


inputs and outputs)

n Timing specification (describes the delay between inputs


changing and outputs responding)
61
Types of Logic Circuits
functional spec
inputs outputs
timing spec

n Combinational Logic
q Memoryless
q Outputs are strictly dependent on the combination of input
values that are being applied to circuit right now
q In some books called Combinatorial Logic
n Later we will learn: Sequential Logic
q Has memory
n Structure stores history → Can ”store” data values
q Outputs are determined by previous (historical) and current
values of inputs
62
Boolean Equations

63
Functional Specification
n Functional specification of outputs in terms of inputs
n What do we mean by “function”?
q Unique mapping from input values to output values
q The same input values produce the same output value every
time
q No memory (does not depend on the history of input values)

n Example (full 1-bit adder – more later):


Inputs A, B, Cin → [CL] → Outputs S, Cout

S = F(A, B, Cin)
Cout = G(A, B, Cin)

S = A ⊕ B ⊕ Cin
Cout = AB + ACin + BCin

64
Simple Equations: NOT / AND / OR
Ā (reads “not A”) is 1 iff A is 0

A | Ā
0 | 1
1 | 0

A • B (reads “A and B”) is 1 iff A and B are both 1 𝑨 𝑩 𝑨•𝑩


0 0 0
A 0 1 0
A•B
B
1 0 0
1 1 1

A + B (reads “A or B”) is 1 iff either A or B is 1 𝑨 𝑩 𝑨+𝑩


0 0 0
A 0 1 1
A+B
B 1 0 1
1 1 1
65
Boolean Algebra: Big Picture
n An algebra on 1’s and 0’s
q with AND, OR, NOT operations

n What you start with


q Axioms: basic things about objects and operations
you just assume to be true at the start

n What you derive first


q Laws and theorems: allow you to manipulate Boolean expressions
q …also allow us to do some simplification on Boolean expressions

n What you derive later


q More “sophisticated” properties useful for manipulating digital
designs represented in the form of Boolean equations

George Boole, “The Mathematical Analysis of Logic,” 1847. 66


Boolean Algebra: Axioms
Formal version English version
1. B contains at least two elements,
0 and 1, such that 0 ≠ 1 Math formality...

2. Closure a,b ∈ B, Result of AND, OR stays


(i) a + b ∈ B in set you start with
(ii) a • b ∈ B

3. Commutative Laws: a,b ∈ B, For primitive AND, OR of


(i) a + b = b + a 2 inputs, order doesn’t matter
(ii) a • b = b • a

4. Identities: 0, 1 ∈ B There are identity elements


(i) a + 0 = a for AND, OR, that give you back
(ii) a • 1 = a what you started with
5. Distributive Laws: • distributes over +, just like algebra
(i) a + (b • c) = (a + b) • (a + c) …but + distributes over •, also (!!)
(ii) a • (b + c) = a • b + a • c

6. Complement:
(i) a + ā = 1    There is a complement element;
(ii) a • ā = 0   AND/ORing with it gives the identity elements
67
Boolean Algebra: Duality
n Observation
q All the axioms come in “dual” form
q Anything true for an expression also true for its dual
q So any derivation you could make that is true, can be flipped into
dual form, and it stays true
n Duality — More formally
q A dual of a Boolean expression is derived by replacing
n Every AND operation with... an OR operation
n Every OR operation with... an AND
n Every constant 1 with... a constant 0
n Every constant 0 with... a constant 1
n But don’t change any of the literals or play with the complements!

Example a • (b + c) = (a • b) + (a • c)
➙ a + (b • c) = (a + b) • (a + c)

68
Boolean Algebra: Useful Laws
Dual
Operations with 0 and 1: AND, OR with identities
1. X + 0 = X 1D. X • 1 = X gives you back the original
2. X + 1 = 1 2D. X • 0 = 0 variable or the identity

Idempotent Law:
3. X + X = X 3D. X • X = X AND, OR with self = self
Involution Law:
4. (X̄)̄ = X    double complement = no complement

Laws of Complementarity:            AND, OR with complement
5. X + X̄ = 1    5D. X • X̄ = 0     gives you an identity

Commutative Law:
6. X + Y = Y + X 6D. X • Y = Y • X Just an axiom…

69
Useful Laws (cont)
Associative Laws:
7. (X + Y) + Z = X + (Y + Z) 7D. (X • Y) • Z = X • (Y • Z) Parenthesis order
=X+Y+Z =X•Y•Z does not matter

Distributive Laws:
8. X • (Y+ Z) = (X • Y) + (X • Z) 8D. X + (Y• Z) = (X + Y) • (X + Z) Axiom

Simplification Theorems:
9. X • Y + X • Ȳ = X        9D. (X + Y) • (X + Ȳ) = X
10. X + X • Y = X           10D. X • (X + Y) = X       Useful for simplifying expressions
11. (X + Ȳ) • Y = X • Y     11D. (X • Ȳ) + Y = X + Y

Actually worth remembering — they show up a lot in real designs…


70
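Since each law involves only two or three variables, all of them can be verified by exhaustive enumeration. A quick sketch (not from the lecture), using integer arithmetic with min(…, 1) standing in for Boolean OR:

```python
# Exhaustively verify several Boolean laws over all 0/1 assignments,
# using integer arithmetic: NOT(x) = 1 - x, AND = *, OR = min(x + y, 1).
from itertools import product

def NOT(x):
    return 1 - x

for x, y in product((0, 1), repeat=2):
    assert x * y + x * NOT(y) == x              # (9)  X•Y + X•Ȳ = X
    assert min(x + x * y, 1) == x               # (10) X + X•Y = X
    assert min(x + NOT(y), 1) * y == x * y      # (11) (X + Ȳ)•Y = X•Y

for x, y, z in product((0, 1), repeat=3):
    # (8D) X + (Y•Z) = (X + Y)•(X + Z)
    assert min(x + y * z, 1) == min(x + y, 1) * min(x + z, 1)

print("laws verified for all inputs")
```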
Boolean Algebra: Proving Things
Proving theorems via axioms of Boolean Algebra:

EX: Prove the theorem: X • Y + X • Ȳ = X

X • (Y + Ȳ) = X    Distributive (5)
X • 1 = X          Complement (6)
X = X              Identity (4)

EX2: Prove the theorem: X + X•Y = X


X•1+X•Y =X Identity (4)
X•(1+Y) =X Distributive (5)
X•1 =X Identity (2)
X =X Identity (4)

71
DeMorgan’s Law: Enabling Transformations
DeMorgan's Law:
12. NOT(X + Y + Z + …) = X̄ • Ȳ • Z̄ • …
12D. NOT(X • Y • Z • …) = X̄ + Ȳ + Z̄ + …

¢ Think of this as a transformation


§ Let’s say we have:

F = A + B + C

§ Applying DeMorgan’s Law (12) gives us

F̄ = NOT(A + B + C) = Ā • B̄ • C̄

At least one of A, B, C is TRUE → It is not the case that A, B, C are all false
72
DeMorgan’s Law (Continued)
These are conversions between different types of logic functions
They can prove useful if you do not have every type of gate

A = NOT(X + Y) = X̄ • Ȳ

X Y | NOT(X+Y) | X̄ • Ȳ
0 0 |    1     |   1
0 1 |    0     |   0
1 0 |    0     |   0
1 1 |    0     |   0

NOR is equivalent to AND with inputs complemented

B = NOT(X • Y) = X̄ + Ȳ

X Y | NOT(XY) | X̄ + Ȳ
0 0 |    1    |   1
0 1 |    1    |   1
1 0 |    1    |   1
1 1 |    0    |   0

NAND is equivalent to OR with inputs complemented
73
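Both forms of DeMorgan's Law can likewise be confirmed by brute force for any small number of variables; a sketch:

```python
# Verify DeMorgan's Law (12 and 12D) for 2 to 4 variables by enumeration.
from itertools import product

def NOT(x): return 1 - x
def OR(*xs): return 1 if any(xs) else 0
def AND(*xs): return 1 if all(xs) else 0

for n in (2, 3, 4):
    for bits in product((0, 1), repeat=n):
        assert NOT(OR(*bits)) == AND(*[NOT(b) for b in bits])   # (12)
        assert NOT(AND(*bits)) == OR(*[NOT(b) for b in bits])   # (12D)

print("DeMorgan holds for 2, 3 and 4 variables")
```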
Digital Design & Computer Arch.
Lecture 4: Combinational Logic I

Prof. Onur Mutlu

ETH Zürich
Spring 2020
28 February 2020
We Did Not Cover the
Following Slides in Lecture 4

75
Using Boolean Equations
to Represent a Logic Circuit

76
Sum of Products Form: Key Idea
n Assume we have the truth table of a Boolean Function

n How do we express the function in terms of the inputs in a


standard manner?

n Idea: Sum of Products form


n Express the truth table as a two-level Boolean expression
q that contains all input variable combinations that result in a 1
output
q If ANY of the combinations of input variables that results in a
1 is TRUE, then the output is 1
q F = OR of all input variable combinations that result in a 1

77
Some Definitions
¢ Complement: variable with a bar over it
Ā, B̄, C̄

¢ Literal: variable or its complement
A, Ā, B, B̄, C, C̄

¢ Implicant: product (AND) of literals, e.g.
(A • B • C̄), (Ā • C), (B • C)

¢ Minterm: product (AND) that includes all input variables, e.g.
(A • B • C̄), (Ā • B̄ • C), (Ā • B • C)

¢ Maxterm: sum (OR) that includes all input variables, e.g.
(A + B + C̄), (Ā + B + C), (Ā + B̄ + C)

78
Two-Level Canonical (Standard) Forms
n Truth table is the unique signature of a Boolean function …
q But, it is an expensive representation

n A Boolean function can have many alternative Boolean


expressions
q i.e., many alternative Boolean expressions (and gate
realizations) may have the same truth table (and function)

n Canonical form: standard form for a Boolean expression


q Provides a unique algebraic signature
q If they all say the same thing, why do we care?
n Different Boolean expressions lead to different gate realizations

79
Two-Level Canonical Forms
Sum of Products Form (SOP)
Also known as disjunctive normal form or minterm expansion

A B C | F
0 0 0 | 0
0 0 1 | 0
0 1 0 | 0
0 1 1 | 1
1 0 0 | 1
1 0 1 | 1
1 1 0 | 1
1 1 1 | 1

F = ĀBC + AB̄C̄ + AB̄C + ABC̄ + ABC
(one minterm per 1-row: 011, 100, 101, 110, 111)
• Each row in a truth table has a minterm
• A minterm is a product (AND) of literals
• Each minterm is TRUE for that row (and only that row)

All Boolean equations can be written in SOP form

80
Find all the input combinations (minterms) for which the output of the function is TRUE.
SOP Form — Why Does It Work?
This input (A=1, B=0, C=1)

A B C | F
0 0 0 | 0
0 0 1 | 0
0 1 0 | 0
0 1 1 | 1
1 0 0 | 1
1 0 1 | 1   ← activates the term AB̄C
1 1 0 | 1
1 1 1 | 1

F = ĀBC + AB̄C̄ + AB̄C + ABC̄ + ABC

n Only the shaded product term, AB̄C = 1 • NOT(0) • 1, will be 1
n No other product terms will “turn on” — they will all be 0
n So if inputs A B C correspond to a product term in expression,
q We get 0 + 0 + … + 1 + … + 0 + 0 = 1 for output

n If inputs A B C do not correspond to any product term in expression


q We get 0 + 0 + … + 0 = 0 for output

81
Aside: Notation for SOP
n Standard “shorthand” notation
q If we agree on the order of the variables in the rows of truth
table…
n then we can enumerate each row with the decimal number that
corresponds to the binary number created by the input pattern
𝐀 𝐁 𝐂 𝐅
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 1
1 0 0 1 100 = decimal 4 so this is minterm #4, or m4
1 0 1 1
1 1 0 1
1 1 1 1 111 = decimal 7 so this is minterm #7, or m7

f = m3 + m4 + m5 + m6 + m7 We can write this as a sum of products

= ∑m(3,4,5,6,7) Or, we can use a summation notation

82
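The shorthand can be computed mechanically from a truth table: a row's minterm number is the decimal value of its input pattern. A sketch using this slide's function:

```python
# Minterm shorthand from a truth table: a row's minterm number is the decimal
# value of its input pattern; F is the OR of the minterms with a 1 output.
F_column = [0, 0, 0, 1, 1, 1, 1, 1]   # outputs for rows 000 … 111 (this slide's F)

minterms = [i for i, f in enumerate(F_column) if f == 1]
print(minterms)  # [3, 4, 5, 6, 7], i.e. F = ∑m(3,4,5,6,7)

def F(a, b, c):
    return 1 if (a << 2 | b << 1 | c) in minterms else 0

assert [F(i >> 2 & 1, i >> 1 & 1, i & 1) for i in range(8)] == F_column
```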
Canonical SOP Forms
A B C | minterms
0 0 0 | ĀB̄C̄ = m0
0 0 1 | ĀB̄C = m1
0 1 0 | ĀBC̄ = m2
0 1 1 | ĀBC = m3
1 0 0 | AB̄C̄ = m4
1 0 1 | AB̄C = m5
1 1 0 | ABC̄ = m6
1 1 1 | ABC = m7

Shorthand Notation for Minterms of 3 Variables

F in canonical form:
F(A,B,C) = ∑m(3,4,5,6,7)
         = m3 + m4 + m5 + m6 + m7

F = ĀBC + AB̄C̄ + AB̄C + ABC̄ + ABC

canonical form ≠ minimal form:
F = ĀBC + AB̄(C̄ + C) + AB(C̄ + C)
  = ĀBC + AB̄ + AB
  = ĀBC + A(B̄ + B)
  = ĀBC + A
  = A + BC

2-Level AND/OR Realization

83
From Logic to Gates
¢ SOP (sum-of-products) leads to two-level logic

¢ Example: Y = Ā • B̄ • C̄ + A • B̄ • C̄ + A • B̄ • C

A B C (each input and its complement are available)

minterm: ĀB̄C̄

minterm: AB̄C̄

minterm: AB̄C
Y
84
Alternative Canonical Form: POS
We can have another from of representation
%
DeMorgan of SOP of 𝑭
products
A product of sums (POS) % % + 𝑪)
𝑭 = (𝑨 + 𝑩 + 𝑪)(𝑨 + 𝑩 + 𝑪)(𝑨 +𝑩
Each sum term represents one of the sums
This input
“zeros” of the function
0 0 0 0 0 1 0 1 0
𝑭= 𝑨+𝑩+𝑪 𝑨+𝑩+𝑪 % (𝑨 + 𝑩
% + 𝑪)
𝐀 𝐁 𝐂 𝐅
0 0 0 0
0 0 1 0
Activates this term
0 1 0 0
0 1 1 1
1 0 0 1 For the given input, only the shaded sum term
1 0 1 1 will equal 0
1 1 0 1 𝑨+𝑩% +𝑪=𝟎+𝟏 %+𝟎
1 1 1 1
Anything ANDed with 0 is 0; Output F will be 0

85
Consider A=0, B=1, C=0

F = (A + B + C)(A + B + C̄)(A + B̄ + C)
  = (0 + 1 + 0) • (0 + 1 + 1) • (0 + 0 + 0)
  = 1 • 1 • 0

A B C | F
0 0 0 | 0
0 0 1 | 0
0 1 0 | 0   ← this input
0 1 1 | 1
1 0 0 | 1
1 0 1 | 1
1 1 0 | 1
1 1 1 | 1

F = 0

Only one of the products will be 0, anything ANDed with 0 is 0

Therefore, the output is F = 0


86
POS: How to Write It
F = (A + B + C)(A + B + C̄)(A + B̄ + C)

A B C | F
0 0 0 | 0
0 0 1 | 0
0 1 0 | 0   ← row A=0, B=1, C=0 gives the Maxterm (A + B̄ + C)
0 1 1 | 1
1 0 0 | 1
1 0 1 | 1
1 1 0 | 1
1 1 1 | 1

Maxterm form:
1. Find truth table rows where F is 0

2. 0 in input col ➙ true literal

3. 1 in input col ➙ complemented literal

4. OR the literals to get a Maxterm

5. AND together all the Maxterms

Or just remember: the POS of F is the same as the DeMorgan of the SOP of F̄ !!
87
Canonical POS Forms
Product of Sums / Conjunctive Normal Form / Maxterm Expansion

F = (A + B + C)(A + B + C̄)(A + B̄ + C) = ∏M(0,1,2)

Maxterm shorthand notation for a function of three variables:

A B C | Maxterm
0 0 0 | A + B + C = M0
0 0 1 | A + B + C̄ = M1
0 1 0 | A + B̄ + C = M2
0 1 1 | A + B̄ + C̄ = M3
1 0 0 | Ā + B + C = M4
1 0 1 | Ā + B + C̄ = M5
1 1 0 | Ā + B̄ + C = M6
1 1 1 | Ā + B̄ + C̄ = M7

A B C | F
0 0 0 | 0
0 0 1 | 0
0 1 0 | 0
0 1 1 | 1
1 0 0 | 1
1 0 1 | 1
1 1 0 | 1
1 1 1 | 1

Note that you form the maxterms around the “zeros” of the function. This is not the complement of the function!
88
Useful Conversions
1. Minterm to Maxterm conversion:
rewrite minterm shorthand using maxterm shorthand
replace minterm indices with the indices not already used
E.g., F(A,B,C) = ∑m(3,4,5,6,7) = ∏M(0,1,2)

2. Maxterm to Minterm conversion:
rewrite maxterm shorthand using minterm shorthand
replace maxterm indices with the indices not already used
E.g., F(A,B,C) = ∏M(0,1,2) = ∑m(3,4,5,6,7)

3. Expansion of F to expansion of F̄:
E.g., F(A,B,C) = ∑m(3,4,5,6,7) = ∏M(0,1,2)
      F̄(A,B,C) = ∑m(0,1,2) = ∏M(3,4,5,6,7)

4. Minterm expansion of F to Maxterm expansion of F̄:
rewrite in Maxterm form, using the same indices as F
E.g., F(A,B,C) = ∑m(3,4,5,6,7) = ∏M(0,1,2)
      F̄(A,B,C) = ∏M(3,4,5,6,7) = ∑m(0,1,2)
89
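All four conversions reduce to set complements over the row indices 0 … 2^n − 1; a sketch:

```python
# Minterm/maxterm conversions as set complements over the row indices.
n = 3
rows = set(range(2 ** n))

minterms_F = {3, 4, 5, 6, 7}         # F = ∑m(3,4,5,6,7)
maxterms_F = rows - minterms_F       # conversion 1: F = ∏M(0,1,2)
minterms_notF = rows - minterms_F    # conversion 3: F̄ = ∑m(0,1,2)
maxterms_notF = set(minterms_F)      # conversion 4: F̄ = ∏M(3,4,5,6,7)

print(sorted(maxterms_F))     # [0, 1, 2]
print(sorted(maxterms_notF))  # [3, 4, 5, 6, 7]
```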
Combinational Building Blocks
used in Modern Computers

90
Combinational Building Blocks
n Combinational logic is often grouped into larger building
blocks to build more complex systems

n Hides the unnecessary gate-level details to emphasize the


function of the building block

n We now look at:


q Decoders
q Multiplexers
q Full adder
q PLA (Programmable Logic Array)
Decoder
n n inputs and 2n outputs
n Exactly one of the outputs is 1 and all the rest are 0s
n The one output that is logically 1 is the output
corresponding to the input pattern that the logic circuit is
expected to detect
Example: inputs A=1, B=0

output “1 if A,B is 00” → 0
output “1 if A,B is 01” → 0
output “1 if A,B is 10” → 1
output “1 if A,B is 11” → 0
Decoder
n The decoder is useful in determining how to interpret a bit
pattern
q It could be the A=1
address of a row in 0
B=0
DRAM, that the
processor intends to
read from 0

q It could be an
instruction in the 1
program and the
processor has to
decide what action to 0
do! (based on
instruction opcode)
93
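The decoder's behavior (exactly one output is 1, at the index given by the input pattern) can be sketched as:

```python
# n-to-2^n decoder: all outputs are 0 except the one whose index is the
# decimal value of the input bit pattern (MSB first).
def decoder(bits):
    index = int("".join(str(b) for b in bits), 2)
    outputs = [0] * (2 ** len(bits))
    outputs[index] = 1
    return outputs

print(decoder([1, 0]))  # [0, 0, 1, 0]: only the "A,B is 10" output is 1
```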
Multiplexer (MUX), or Selector
n Selects one of the N inputs to connect it to the output
n Needs log2N-bit control input
n 2:1 MUX: data inputs A and B, 1-bit select S, output C
q Example: when S=0, the output C is connected to input A
Multiplexer (MUX)
n The output C is always connected to either the input A or
the input B
q Output value depends on the value of the select line S

A B

S | C
0 | A
1 | B
C
n Your task: Draw the schematic for an 8-input (8:1) MUX
q Gate level: as a combination of basic AND, OR, NOT gates
q Module level: As a combination of 2-input (2:1) MUXes
95
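For the module-level part of the task, one common structure is a three-level tree of seven 2:1 MUXes, with one select bit per level. A software sketch of that idea (the schematic itself is still left to you):

```python
# 2:1 MUX primitive, then an 8:1 MUX as a three-level tree of seven 2:1 MUXes.
def mux2(a, b, s):
    return b if s else a                     # S=0 selects A, S=1 selects B

def mux8(inputs, s2, s1, s0):
    """inputs: 8 data values; s2 s1 s0: select bits, MSB first."""
    level1 = [mux2(inputs[i], inputs[i + 1], s0) for i in (0, 2, 4, 6)]
    level2 = [mux2(level1[0], level1[1], s1), mux2(level1[2], level1[3], s1)]
    return mux2(level2[0], level2[1], s2)    # output = inputs[4*s2 + 2*s1 + s0]

data = list(range(8))
print([mux8(data, i >> 2 & 1, i >> 1 & 1, i & 1) for i in range(8)])  # [0, 1, ..., 7]
```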
Full Adder (I)
n Binary addition
q Similar to decimal addition
q From right to left
q One column at a time
q One sum and one carry bit

   a(n-1) a(n-2) … a1 a0
 + b(n-1) b(n-2) … b1 b0    carries: c(n) c(n-1) … c1
   s(n-1) … s1 s0

n Truth table of binary addition on one column of bits within
two n-bit operands:

ai bi carryi | carryi+1 Si
0 0 0 0 0
0 0 1 0 1
0 1 0 0 1
0 1 1 1 0
1 0 0 0 1
1 0 1 1 0
1 1 0 1 0
1 1 1 1 1

96
Full Adder (II)
n Binary addition
q N 1-bit additions
q SOP of 1-bit addition

Full Adder (1 bit): inputs ai, bi, ci; outputs si, ci+1

ai bi carryi | carryi+1 Si
0 0 0 0 0
0 0 1 0 1
0 1 0 0 1
si 0 1 1 1 0
1 0 0 0 1
1 0 1 1 0
1 1 0 1 0
1 1 1 1 1

97
4-Bit Adder from Full Adders
n Creating a 4-bit adder out of 1-bit full adders
q To add two 4-bit binary numbers A and B
b3 a3 b2 a2 b1 a1 b0 a0

c4 Full Adder c3 Full Adder c2 Full Adder c1 Full Adder 0

s3 s2 s1 s0
𝒂𝟑 𝒂𝟐 𝒂𝟏 𝒂𝟎 𝟏 𝟎 𝟏 𝟏
+ 𝒃𝟑 𝒃𝟐 𝒃𝟏 𝒃𝟎 + 𝟏 𝟎 𝟎 𝟏
𝒄𝟒 𝒄𝟑 𝒄𝟐 𝒄𝟏 𝟏 𝟎 𝟏 𝟏
𝒔𝟑 𝒔𝟐 𝒔𝟏 𝒔𝟎 𝟎 𝟏 𝟎 𝟎

98
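The 1-bit SOP equations and the ripple chain above can be sketched together; the example reproduces the slide's 1011 + 1001 addition:

```python
# 1-bit full adder from its SOP equations, then a 4-bit ripple-carry adder.
def full_adder(a, b, cin):
    s = a ^ b ^ cin                          # S = A ⊕ B ⊕ Cin
    cout = (a & b) | (a & cin) | (b & cin)   # Cout = AB + ACin + BCin
    return s, cout

def add4(a_bits, b_bits):
    """4-bit ripple-carry adder; bit lists are least-significant first."""
    carry, sums = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        sums.append(s)
    return sums, carry                       # ([s0..s3], c4)

# The slide's example: 1011 + 1001 (a3..a0 + b3..b0)
sums, c4 = add4([1, 1, 0, 1], [1, 0, 0, 1])  # LSB-first bit lists
print(sums, c4)  # [0, 0, 1, 0] 1, i.e. s3 s2 s1 s0 = 0100 with c4 = 1
```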
The Programmable Logic Array (PLA)
n The logic structure below is a very common building block for
implementing any collection of logic functions one wishes to
n An array of AND gates followed by an array of OR gates

(Figure: PLA with inputs A, B, C feeding an array of AND gates,
whose outputs pass through a programmable “Connections” block to
an array of OR gates producing outputs X, Y, Z)

n How do we determine the number of AND gates?
q Remember SOP: the number of possible minterms
q For an n-input logic function, we need a PLA with 2^n n-input
AND gates
n How do we determine the number of OR gates? The
number of output columns in the truth table
99
The Programmable Logic Array (PLA)
n How do we implement a logic function?
q Connect the output of an AND gate to the input of an OR gate
if the corresponding minterm is included in the SOP
q This is a simple programmable logic

n Programming a PLA: we program the connections from
AND gate outputs to OR gate inputs to implement a desired
logic function

(Figure: the same PLA structure — inputs A, B, C, the programmable
“Connections” block, and outputs X, Y, Z)

n Have you seen any other type of programmable logic?
q Yes! An FPGA…
q An FPGA uses more advanced structures, as we saw in Lecture 3
100
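A PLA can be modeled as a fixed AND plane that produces every minterm, plus programmable OR-plane connections. This is a behavioral sketch with illustrative names:

```python
def pla_eval(bits, or_connections):
    """Evaluate a PLA. The AND plane computes all 2^n minterms of the
    input bits; or_connections[k] is the set of minterm indices wired
    to OR gate k (the programmed connections)."""
    index = int(''.join(str(b) for b in bits), 2)  # which minterm is 1
    return [int(index in conns) for conns in or_connections]

# Program the PLA as a full adder (inputs a, b, c):
#   carry-out is the OR of minterms 3, 5, 6, 7; sum of minterms 1, 2, 4, 7
full_adder_plane = [{3, 5, 6, 7}, {1, 2, 4, 7}]
print(pla_eval([1, 0, 1], full_adder_plane))  # → [1, 0]  (1+0+1: carry 1, sum 0)
```

“Programming” the PLA is nothing more than choosing the sets of minterm indices, mirroring the physical connections from AND outputs to OR inputs.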
Implementing a Full Adder Using a PLA
(Figure: the 3-input PLA programmed as a full adder. Inputs ai,
bi, ci feed the AND plane; two OR outputs produce ci+1 and si.
The AND output for the minterm ai = bi = ci = 0, which appears in
neither SOP, should not be connected to any OR inputs, and the
third OR output Z is not needed.)

Truth table of a full adder
ai bi carryi | carryi+1 Si
 0  0   0   |    0     0
 0  0   1   |    0     1
 0  1   0   |    0     1
 0  1   1   |    1     0
 1  0   0   |    0     1
 1  0   1   |    1     0
 1  1   0   |    1     0
 1  1   1   |    1     1
101
Logical (Functional) Completeness
n Any logic function we wish to implement could be
accomplished with a PLA
q PLA consists of only AND gates, OR gates, and inverters
q We just have to program connections based on SOP of the
intended logic function

n The set of gates {AND, OR, NOT} is logically complete


because we can build a circuit to carry out the specification
of any truth table we wish, without using any other kind of
gate

n NAND is also logically complete. So is NOR.


q Your task: Prove this.

102
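One way to approach the proof task is behavioral: build NOT, AND, and OR out of NAND alone and check them exhaustively (a sketch; the argument for NOR is symmetric):

```python
def nand(a, b):
    return int(not (a and b))

def not_from_nand(a):
    return nand(a, a)                    # NAND(a, a) = NOT a

def and_from_nand(a, b):
    return not_from_nand(nand(a, b))     # AND = NOT(NAND)

def or_from_nand(a, b):
    # De Morgan: a OR b = NOT(NOT a AND NOT b) = NAND(NOT a, NOT b)
    return nand(not_from_nand(a), not_from_nand(b))

for a in (0, 1):
    for b in (0, 1):
        assert and_from_nand(a, b) == int(a and b)
        assert or_from_nand(a, b) == int(a or b)
```

Since {AND, OR, NOT} is logically complete and each of the three can be built from NAND, NAND by itself is also logically complete.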
More Combinational Building Blocks
n H&H Chapter 2 in full
q Required Reading
q E.g., see Tri-state Buffer and Z values in Section 2.6

n H&H Chapter 5
q Will be required reading soon.

n You will benefit greatly by reading the “combinational”


parts of Chapter 5 soon.
q Sections 5.1 and 5.2

103
Tri-State Buffer
n A tri-state buffer enables gating of different signals onto a
wire

n Floating signal (Z): Signal that is not driven by any circuit


q Open circuit, floating wire

104
Example: Use of Tri-State Buffers
n Imagine a wire connecting the CPU and memory

q At any time only the CPU or the memory can place a value on
the wire, but not both

q You can have two tri-state buffers: one driven by CPU, the
other memory; and ensure at most one is enabled at any time

105
Example Design with Tri-State Buffers

(Figure: the CPU and Memory each drive a shared bus through a
tri-state buffer; enable signals GateCPU and GateMem ensure at
most one buffer drives the bus at any time)
106
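The shared-bus discipline can be modeled behaviorally, with 'Z' standing for the floating value (an illustrative sketch, not hardware):

```python
def shared_bus(cpu_value, mem_value, gate_cpu, gate_mem):
    """Behavioral model of two tri-state buffers driving one wire.
    At most one enable may be active; enabling both causes bus
    contention, and with both disabled the wire floats ('Z')."""
    if gate_cpu and gate_mem:
        raise ValueError("bus contention: both drivers enabled")
    if gate_cpu:
        return cpu_value
    if gate_mem:
        return mem_value
    return 'Z'  # floating: no circuit drives the wire

print(shared_bus(1, 0, gate_cpu=True, gate_mem=False))   # → 1
print(shared_bus(1, 0, gate_cpu=False, gate_mem=False))  # → Z
```

Raising an error on double-enable mirrors the real hazard: two drivers fighting over one wire can short power to ground.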
Logic Simplification:
Karnaugh Maps (K-Maps)

107
Recall: Full Adder in SOP Form Logic

(Figure: 1-bit full adder module with inputs ai, bi, ci and
outputs ci+1, si)

ai bi carryi | carryi+1 Si
 0  0   0   |    0     0
 0  0   1   |    0     1
 0  1   0   |    0     1
 0  1   1   |    1     0
 1  0   0   |    0     1
 1  0   1   |    1     0
 1  1   0   |    1     0
 1  1   1   |    1     1

108
Goal: Simplified Full Adder

How do we simplify Boolean logic?

109
Quick Recap on Logic Simplification
n The original Boolean expression (i.e., logic circuit) may not
be optimal

F = ~A(A + B) + (B + AA)(A + ~B)

n Can we reduce a given Boolean expression to an equivalent


expression with fewer terms?

F=A+B

n The goal of logic simplification:


q Reduce the number of gates/inputs
q Reduce implementation cost

A basis for what the automated design tools are doing today
110
Logic Simplification
n Systematic techniques for simplifications
q amenable to automation

Key Tool: The Uniting Theorem — F = AB' + AB

F = AB' + AB = A(B' + B) = A(1) = A

Essence of Simplification:
Find two-element subsets of the ON-set (the rows where F == 1)
where only one variable changes its value. This single varying
variable can be eliminated!

B's value changes within the ON-set rows
A's value does NOT change within the ON-set rows
If an input (B) can change without changing the output, that
input value is not needed
➙ B is eliminated, A remains

G = A'B' + AB' = (A' + A)B' = B'

B's value stays the same within the ON-set rows
A's value changes within the ON-set rows
➙ A is eliminated, B remains
111
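Both uniting-theorem results can be verified exhaustively. This is a quick sanity-check pattern, not a simplification algorithm:

```python
from itertools import product

def equivalent(f, g, nvars):
    """True iff two Boolean functions agree on every input combination."""
    return all(f(*v) == g(*v) for v in product((0, 1), repeat=nvars))

# F = AB' + AB  reduces to  A
assert equivalent(lambda a, b: (a and not b) or (a and b),
                  lambda a, b: bool(a), 2)
# G = A'B' + AB'  reduces to  B'
assert equivalent(lambda a, b: (not a and not b) or (a and not b),
                  lambda a, b: not b, 2)
```

Exhaustive checking is feasible only for small variable counts, which is exactly why systematic techniques like K-maps are needed for larger functions.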
Complex Cases
n One example
Cout = A'BC + AB'C + ABC' + ABC
n Problem
q Easy to see how to apply Uniting Theorem…
q Hard to know if you applied it in all the right places…
q …especially in a function of many more variables

n Question
q Is there an easier way to find potential simplifications?
q i.e., potential applications of Uniting Theorem…?

n Answer
q Need an intrinsically geometric representation for Boolean f( )
q Something we can draw, see…

112
Karnaugh Map
n Karnaugh Map (K-map) method
q K-map is an alternative method of representing the truth table
that helps visualize adjacencies in up to 6 dimensions
q Physical adjacency ↔ Logical adjacency

2-variable K-map:
       B=0  B=1
A=0 |   00   01
A=1 |   10   11

3-variable K-map:
      BC=00  01   11   10
A=0 |  000  001  011  010
A=1 |  100  101  111  110

4-variable K-map:
        CD=00   01    11    10
AB=00 |  0000  0001  0011  0010
AB=01 |  0100  0101  0111  0110
AB=11 |  1100  1101  1111  1110
AB=10 |  1000  1001  1011  1010

Numbering Scheme: 00, 01, 11, 10 is called a “Gray Code” — only
a single bit (variable) changes from one code word to the next

113
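The Gray-code ordering used on K-map axes can be generated by the standard reflection construction (the function name is illustrative):

```python
def gray_code(n):
    """n-bit reflected Gray code: consecutive code words (including
    the wrap-around from last back to first) differ in exactly one bit."""
    if n == 0:
        return ['']
    prev = gray_code(n - 1)
    # Prefix the (n-1)-bit code with 0, then its reflection with 1
    return ['0' + code for code in prev] + ['1' + code for code in reversed(prev)]

print(gray_code(2))  # → ['00', '01', '11', '10'], the K-map row/column order
```

The reflection is what makes the sequence cyclic, which is exactly why K-map adjacency wraps around the edges.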
Karnaugh Map Methods
3-variable K-map:
      BC=00  01   11   10
A=0 |  000  001  011  010
A=1 |  100  101  111  110

Adjacent cells differ in exactly one bit. Vertically adjacent
pairs: 000–100, 010–110, 001–101, 011–111. Moving along a row
follows the Gray-code sequence (000, 001, 011, 010 on the top
row; 100, 101, 111, 110 on the bottom row).

K-map adjacencies go “around the edges”
Wrap around from first to last column
Wrap around from top row to bottom row
114
K-map Cover - 4 Input Variables

F(A, B, C, D) = Σm(0, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15)

        CD=00  01  11  10
AB=00 |   1    0   0   1
AB=01 |   0    1   0   0
AB=11 |   1    1   1   1
AB=10 |   1    1   1   1

F = A + B'D' + BC'D

Strategy for “circling” rectangles on the K-map:
As big as possible
Biggest “oops!” that people forget:
Wrap-arounds
115
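The cover read off a K-map can always be checked against the minterm list by exhaustive evaluation (illustrative function names):

```python
from itertools import product

minterms = {0, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15}

def f_spec(a, b, c, d):
    """F defined directly by its minterm list (A is the MSB)."""
    return int(((a << 3) | (b << 2) | (c << 1) | d) in minterms)

def f_cover(a, b, c, d):
    """The cover read off the K-map: F = A + B'D' + BC'D."""
    return int(a or (not b and not d) or (b and not c and d))

# The cover and the minterm specification agree on all 16 inputs
assert all(f_spec(*v) == f_cover(*v) for v in product((0, 1), repeat=4))
```

The B'D' term is the wrap-around group covering the four corner cells (minterms 0, 2, 8, 10), the kind of grouping that is easy to miss.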
Logic Minimization Using K-Maps
n Very simple guideline:
q Circle all the rectangular blocks of 1’s in the map, using the
fewest possible number of circles
n Each circle should be as large as possible
q Read off the implicants that were circled

n More formally:
q A Boolean equation is minimized when it is written as a sum of
the fewest number of prime implicants
q Each circle on the K-map represents an implicant
q The largest possible circles are prime implicants

117
K-map Rules
n What can be legally combined (circled) in the K-map?
q Rectangular groups of size 2^k for any integer k
q Each cell has the same value (1, for now)
q All values must be adjacent
n Wrap-around edge is okay

n How does a group become a term in an expression?


q Determine which literals are constant, and which vary across group
q Eliminate varying literals, then AND the constant literals
n %
constant 1 ➙ use 𝐗, constant 0 ➙ use 𝑿

n What is a good solution?


q Biggest groupings ➙ eliminate more variables (literals) in each term
q Fewest groupings ➙ fewer terms (gates) all together
q OR together all AND terms you create from individual groups
118
K-map Example: Two-bit Comparator
(Figure: comparator block with inputs A, B, C, D and outputs
F1 = 1 if AB = CD, F2 = 1 if AB < CD, F3 = 1 if AB > CD)

A B C D | F1 F2 F3
0 0 0 0 |  1  0  0
0 0 0 1 |  0  1  0
0 0 1 0 |  0  1  0
0 0 1 1 |  0  1  0
0 1 0 0 |  0  0  1
0 1 0 1 |  1  0  0
0 1 1 0 |  0  1  0
0 1 1 1 |  0  1  0
1 0 0 0 |  0  0  1
1 0 0 1 |  0  0  1
1 0 1 0 |  1  0  0
1 0 1 1 |  0  1  0
1 1 0 0 |  0  0  1
1 1 0 1 |  0  0  1
1 1 1 0 |  0  0  1
1 1 1 1 |  1  0  0

Design Approach:
Write a 4-Variable K-map for each of the 3 output functions
119
K-map Example: Two-bit Comparator (2)
K-map for F1:

        CD=00  01  11  10
AB=00 |   1    0   0   0
AB=01 |   0    1   0   0
AB=11 |   0    0   1   0
AB=10 |   0    0   0   1

F1 = A'B'C'D' + A'BC'D + ABCD + AB'CD'

120
K-map Example: Two-bit Comparator (3)
K-map for F2:

        CD=00  01  11  10
AB=00 |   0    1   1   1
AB=01 |   0    0   1   1
AB=11 |   0    0   0   0
AB=10 |   0    0   1   0

F2 = A'C + A'B'D + B'CD

F3 = ? (Exercise for you)

121
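Both covers can be cross-checked against the comparator specification by brute force (illustrative function names):

```python
from itertools import product

def f1(a, b, c, d):
    """F1 read off the K-map: 1 when AB = CD."""
    return int((not a and not b and not c and not d) or
               (not a and b and not c and d) or
               (a and b and c and d) or
               (a and not b and c and not d))

def f2(a, b, c, d):
    """F2 read off the K-map: 1 when AB < CD."""
    return int((not a and c) or (not a and not b and d) or
               (not b and c and d))

# Cross-check both covers against the comparator specification
for a, b, c, d in product((0, 1), repeat=4):
    ab, cd = 2 * a + b, 2 * c + d
    assert f1(a, b, c, d) == int(ab == cd)
    assert f2(a, b, c, d) == int(ab < cd)
```

F1's four isolated 1s sit on the K-map diagonal, so no grouping is possible and each remains a four-literal minterm; F2's 1s cluster and simplify to three shorter terms.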
K-maps with “Don’t Care”
n Don’t Care really means “I don’t care what my circuit outputs if
this appears as input”
q You have an engineering choice to use DON’T CARE patterns
intelligently as 1 or 0 to better simplify the circuit

A B C D | F G
   •••
0 1 1 0 | X X   (I can pick 00, 01, 10, 11 independently of below)
0 1 1 1 |
1 0 0 0 | X X   (I can pick 00, 01, 10, 11 independently of above)
1 0 0 1 |
   •••
122
Example: BCD Increment Function
n BCD (Binary Coded Decimal) digits
q Encode decimal digits 0 - 9 with bit patterns 0000₂ — 1001₂
q When incremented, the decimal sequence is 0, 1, …, 8, 9, 0, 1

A B C D | W X Y Z
0 0 0 0 | 0 0 0 1
0 0 0 1 | 0 0 1 0
0 0 1 0 | 0 0 1 1
0 0 1 1 | 0 1 0 0
0 1 0 0 | 0 1 0 1
0 1 0 1 | 0 1 1 0
0 1 1 0 | 0 1 1 1
0 1 1 1 | 1 0 0 0
1 0 0 0 | 1 0 0 1
1 0 0 1 | 0 0 0 0
1 0 1 0 | X X X X
1 0 1 1 | X X X X
1 1 0 0 | X X X X
1 1 0 1 | X X X X
1 1 1 0 | X X X X
1 1 1 1 | X X X X

These input patterns should never be encountered in practice
(hey -- it’s a BCD number!)
So, associated output values are “Don’t Cares”
123
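The two covers for Z can be checked behaviorally over the ten valid BCD inputs (illustrative function names; the don't-care rows never occur, so they need not be tested):

```python
def z_with_dc(a, b, c, d):
    """Z using don't cares as 1s where helpful: Z = D'."""
    return int(not d)

def z_without_dc(a, b, c, d):
    """Z with all don't cares forced to 0: Z = A'D' + B'C'D'."""
    return int((not a and not d) or (not b and not c and not d))

# Both covers agree with the BCD increment on the ten valid inputs
for n in range(10):
    a, b, c, d = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    z = ((n + 1) % 10) & 1          # low bit of the incremented BCD digit
    assert z == z_with_dc(a, b, c, d) == z_without_dc(a, b, c, d)
```

Exploiting the don't cares shrinks Z from two product terms with five literals down to a single inverter.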
K-map for BCD Increment Function
W:       CD=00  01  11  10
AB=00 |    0    0   0   0
AB=01 |    0    0   1   0
AB=11 |    X    X   X   X
AB=10 |    1    0   X   X

X:       CD=00  01  11  10
AB=00 |    0    0   1   0
AB=01 |    1    1   0   1
AB=11 |    X    X   X   X
AB=10 |    0    0   X   X

Y:       CD=00  01  11  10
AB=00 |    0    1   0   1
AB=01 |    0    1   0   1
AB=11 |    X    X   X   X
AB=10 |    0    0   X   X

Z:       CD=00  01  11  10
AB=00 |    1    0   0   1
AB=01 |    1    0   0   1
AB=11 |    X    X   X   X
AB=10 |    1    0   X   X

Z (without don’t cares) = A'D' + B'C'D'
Z (with don’t cares) = D'

124
K-map Summary

n Karnaugh maps as a formal systematic approach


for logic simplification

n 2-, 3-, 4-variable K-maps

n K-maps with “Don’t Care” outputs

n H&H Section 2.7


125
Digital Design & Computer Arch.
Lecture 4: Combinational Logic I

Prof. Onur Mutlu

ETH Zürich
Spring 2020
28 February 2020
