EE AC9 Logic Circuits and Switching Theory: Module 1 – (Part 2)
Complements are used in digital computers to simplify the subtraction operation and for
logical manipulations. For the binary (base-2) number system, there are two types of
complements: the 1’s complement and the 2’s complement.
There is a simple algorithm to convert a binary number into its 1’s complement: simply invert
the given number, i.e., change every 0 bit to 1 and every 1 bit to 0.
Examples: the 1’s complement of 1011 is 0100, and the 1’s complement of 110100 is 001011.
There is a simple algorithm to convert a binary number into its 2’s complement: invert the given
number and add 1 to the least significant bit (LSB) of the result. In other words, the 2’s
complement of a binary number is 1 added to its 1’s complement.
Examples: the 2’s complement of 1011 is 0100 + 1 = 0101, and the 2’s complement of 110100 is
001011 + 1 = 001100.
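As a quick illustration, the following short Python sketch computes both complements of a fixed-width binary string (the function names are illustrative, not part of the module):

    def ones_complement(bits: str) -> str:
        # Invert every bit: 0 becomes 1 and 1 becomes 0.
        return "".join("1" if b == "0" else "0" for b in bits)

    def twos_complement(bits: str) -> str:
        # 2's complement = 1's complement + 1, kept to the same bit width.
        width = len(bits)
        value = int(ones_complement(bits), 2) + 1
        return format(value % (2 ** width), f"0{width}b")

    print(ones_complement("1011"))   # 0100
    print(twos_complement("1011"))   # 0101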
Bits - The smallest unit of data in a computer is called Bit (Binary Digit). A bit has a single binary
value, either 0 or 1. In most computer systems, there are eight bits in a byte. The value of a bit is
usually stored as either above or below a designated level of electrical charge in a single capacitor
within a memory device. A bit is abbreviated with a small “b”.
Octet - In some systems, the term octet is used for an eight-bit unit instead of byte. In many
systems, four eight-bit bytes or octets form a 32-bit word. In such systems, instruction lengths are
sometimes expressed as full-word (32 bits in length) or half-word (16 bits in length).
Kilobyte - A Kilobyte (KB or Kbyte) is approximately a thousand bytes (actually, 2 to the 10th
power, or 1,024 bytes in decimal).
Megabyte - As a measure of computer processor storage and real and virtual memory, a megabyte
(abbreviated MB) is 2 to the 20th power bytes, or 1,048,576 bytes in decimal notation.
Gigabyte - A Gigabyte (pronounced Gig-a-bite with hard G’s) is a measure of computer data
storage capacity and is roughly a billion bytes. A gigabyte is 2 to the 30th power, or
1,073,741,824 bytes in decimal notation.
Terabyte - A Terabyte is a measure of computer storage capacity and is 2 to the 40th power bytes,
or 1,024 gigabytes.
Binary Addition
- It is the key to binary subtraction, multiplication, and division. There are four rules of binary
addition.
Case    A + B    Sum    Carry
1       0 + 0     0       0
2       0 + 1     1       0
3       1 + 0     1       0
4       1 + 1     0       1
In the fourth case, binary addition produces a sum of 10 (1 + 1 = 10), i.e., 0 is written in
the given column and a carry of 1 is taken over to the next column.
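The four rules can be applied column by column with a carry, exactly as in longhand addition. A minimal Python sketch (the function name is my own, for illustration):

    def add_binary(a: str, b: str) -> str:
        # Add two binary strings column by column, right to left, carrying into the next column.
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        carry = 0
        result = []
        for bit_a, bit_b in zip(reversed(a), reversed(b)):
            total = int(bit_a) + int(bit_b) + carry   # 0, 1, 2, or 3
            result.append(str(total % 2))             # sum bit for this column
            carry = total // 2                        # carry into the next column
        if carry:
            result.append("1")
        return "".join(reversed(result))

    print(add_binary("1011", "0110"))   # 10001  (11 + 6 = 17)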
Binary Subtraction
- Subtraction and borrow are two terms that will be used very frequently in binary
subtraction. There are four rules of binary subtraction.
Case    A – B    Difference    Borrow
1       0 – 0        0           0
2       1 – 0        1           0
3       1 – 1        0           0
4       0 – 1        1           1
In the fourth case, binary subtraction produces a difference of 1 (0 – 1 = 1) while borrowing 1
from the next most significant bit.
- Binary subtraction is similar to decimal subtraction, with the difference that when 1 is
subtracted from 0, it is necessary to borrow 1 from the next higher-order bit; that bit is
reduced by 1 (or, equivalently, 1 is added to the next bit of the subtrahend) and the
remainder is 1.
Example 18: 10010101 – 10001001 = 00001100
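The borrow rules above can also be sketched briefly in Python (illustrative naming; the function assumes the minuend is at least as large as the subtrahend, as in Example 18):

    def subtract_binary(a: str, b: str) -> str:
        # Subtract b from a column by column, borrowing from the next bit when needed.
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)
        borrow = 0
        result = []
        for bit_a, bit_b in zip(reversed(a), reversed(b)):
            diff = int(bit_a) - int(bit_b) - borrow
            if diff < 0:
                diff += 2    # borrow 1 (worth 2) from the next more significant column
                borrow = 1
            else:
                borrow = 0
            result.append(str(diff))
        return "".join(reversed(result))

    print(subtract_binary("10010101", "10001001"))   # 00001100  (149 - 137 = 12)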
Binary Multiplication
Binary multiplication may sound like it would be more difficult than binary addition or
subtraction, but it is actually a simple process. Binary multiplication is similar to decimal
multiplication; it is simpler because only 0s and 1s are involved. There are four rules of
binary multiplication.
Case    A x B    Product
1       0 x 0       0
2       0 x 1       0
3       1 x 0       0
4       1 x 1       1
In the binary multiplication rules themselves, there are no carry-overs or borrows; carries only
appear when the partial products are added together.
Example 20: 11010 × 1100 = 100111000
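As with decimal long multiplication, each 1 bit of the multiplier contributes a shifted copy of the multiplicand (a partial product), and the partial products are added. A brief Python sketch (the function name is my own):

    def multiply_binary(a: str, b: str) -> str:
        # Shift-and-add: for every 1 bit of the multiplier, add a shifted copy of the multiplicand.
        product = 0
        for position, bit in enumerate(reversed(b)):
            if bit == "1":
                product += int(a, 2) << position   # partial product, shifted into place
        return format(product, "b")

    print(multiply_binary("11010", "1100"))   # 100111000  (26 x 12 = 312)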
Binary Division
Binary division is similar to decimal division and follows the long division procedure.
Binary division makes use of two other binary arithmetic operations, multiplication
and subtraction; an example will make the operation clearer.
Example 21: 101010 ÷ 000110 = 111
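A short Python sketch of binary long division (restoring division on bit strings; the names are illustrative): bring down one bit of the dividend at a time and subtract the divisor whenever the partial remainder is large enough.

    def divide_binary(dividend: str, divisor: str) -> tuple:
        # Long division: bring down one bit at a time and subtract the divisor when possible.
        d = int(divisor, 2)
        remainder = 0
        quotient_bits = []
        for bit in dividend:
            remainder = (remainder << 1) | int(bit)   # bring down the next bit
            if remainder >= d:
                remainder -= d                        # the divisor "goes into" the partial remainder
                quotient_bits.append("1")
            else:
                quotient_bits.append("0")
        quotient = "".join(quotient_bits).lstrip("0") or "0"
        return quotient, format(remainder, "b")

    print(divide_binary("101010", "000110"))   # ('111', '0'), i.e., 42 ÷ 6 = 7 remainder 0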
1.4 BINARY CODES
Binary code is any data, text, or computer instructions represented using a two-symbol
system. These two numeral symbols are 0 and 1. Computers and digital electronic devices can
only communicate using 0’s and 1’s. For example, the text on your mobile app is in English,
but there is a background coding system translating those words to binary numbers for a
computer to process.
Binary codes are commonly classified into the following categories:
- Weighted Codes
- Non-Weighted Codes
- Alphanumeric Codes
- Error Detection Codes (Parity Codes)
Weighted binary codes are those binary codes which obey the positional weight principle.
This is a system where every digit is assigned a specific weight based on its position. Several
systems of codes are used to express the decimal digits 0 through 9; in these codes each decimal
digit is represented by a group of four bits.
Binary weights are values assigned to binary numbers based on their position in binary code. They
help to convert binary numbers to decimal numbers easily. To convert 10110 to decimal system,
consider the weights of this binary code.
16, 8, 4, 2, 1
These values originate from 2 to the power of their binary position: 2⁴, 2³, 2², 2¹, 2⁰, starting from
the most significant bit (MSB) to the least significant bit (LSB).
We will only consider the weights corresponding to the 1’s in 10110 because the 0’s return 0 after
multiplication. Now, let’s add the weights. 2 + 4 + 16 = 22
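The same weighted conversion can be sketched in a couple of lines of Python (illustrative code only):

    def weighted_to_decimal(bits: str) -> int:
        # Multiply each bit by its positional weight (a power of 2) and sum the results.
        return sum(int(bit) * 2 ** power for power, bit in enumerate(reversed(bits)))

    print(weighted_to_decimal("10110"))   # 22  (16 + 4 + 2)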
- Weighted binary code is essential for displaying numeric values in digital devices such as
voltmeters and calculators.
- Weighted codes, for example the Binary Coded Decimal (BCD) code, facilitate convenient
input/output display in digital circuits.
- Weighted binary code is used in complex mathematical calculations.
Non-weighted code is used for special applications where a binary weight is not needed.
Non-weighted code does not use positional weights to convert binary code to other systems like
decimal, hexadecimal, octal, or any other system. Examples of non-weighted codes include the
excess-3 code and the Gray code.
Non-weighted binary codes are perfect for error detection because they follow an organized
incrementing sequence.
The Gray code is used in shaft position encoders. These are devices employed in electric
motors to measure speed and position.
Alphanumeric Codes
A binary digit, or bit, can represent only two symbols since it has only two states, '0' and '1'.
This is not enough for communication between two computers, where many more symbols are needed:
the 26 letters of the alphabet in both capital and small forms, the numbers 0 to 9, punctuation
marks, and other symbols.
Computers work with only 0’s and 1’s. However, there is a need for more advanced forms of
communication with machines. This is why alphanumeric code is important.
Alphanumeric codes are codes that represent numbers and alphabetic characters. Mostly, such
codes also represent other characters such as symbols and the various instructions necessary for
conveying information. An alphanumeric code should represent at least the 10 digits and 26 letters
of the alphabet, i.e., a total of 36 items. The following alphanumeric codes are very commonly used
for data representation.
ASCII code is a 7-bit code whereas EBCDIC is an 8-bit code. ASCII code is more commonly used
worldwide while EBCDIC is used primarily in large IBM computers.
When data or instructions are electronically transmitted, there is a chance of errors during
data transmission in the form of scrambling or corruption of data. In order to avoid this, error-
detecting codes are utilized.
An error detection code attaches additional data to a message before sending and this
determines whether the message was corrupted during data transmission.
A parity code is an error detection code in which an extra bit (called a parity bit) is attached
to the message to make the total number of 1’s either even or odd, depending on the type of
parity.
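A minimal Python sketch of an even-parity code (the function names are my own): the parity bit is chosen so that the codeword contains an even number of 1’s, and a received word with an odd count signals a single-bit error.

    def add_even_parity(data: str) -> str:
        # Append a parity bit so that the total number of 1's in the codeword is even.
        return data + str(data.count("1") % 2)

    def check_even_parity(codeword: str) -> bool:
        # An odd number of 1's indicates that a bit was corrupted in transmission.
        return codeword.count("1") % 2 == 0

    word = add_even_parity("1011001")       # '10110010'
    print(check_even_parity(word))          # True
    print(check_even_parity("10110011"))    # False (one bit was flipped)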
BCD code is an example of a weighted binary code. In this code each decimal digit is
represented by a 4-bit binary number; BCD is simply a way to express each of the decimal digits
with a binary code. With four bits we can represent sixteen combinations (0000 to 1111), but in
BCD only the first ten of these are used (0000 to 1001). The remaining six code combinations,
i.e., 1010 to 1111, are invalid in BCD.
But do not get confused: binary coded decimal is not the same as hexadecimal. Whereas a
4-bit hexadecimal number is valid up to F₁₆, representing binary 1111₂ (decimal 15), binary coded
decimal numbers stop at 9 (binary 1001₂). This means that although 16 numbers (2⁴) can be
represented using four binary digits, in the BCD numbering system the six binary code
combinations 1010 (decimal 10), 1011 (decimal 11), 1100 (decimal 12), 1101 (decimal 13),
1110 (decimal 14), and 1111 (decimal 15) are classed as forbidden numbers and cannot be used.
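A brief Python sketch of BCD encoding (the function name is illustrative): each decimal digit is encoded separately as its own 4-bit group.

    def decimal_to_bcd(number: int) -> str:
        # Encode each decimal digit as a separate 4-bit group.
        return " ".join(format(int(digit), "04b") for digit in str(number))

    print(decimal_to_bcd(29))    # 0010 1001
    print(decimal_to_bcd(105))   # 0001 0000 0101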
Gray Code
A Gray Code represents numbers using a binary encoding scheme that groups a sequence
of bits so that only one bit in the group changes from the number before and after. It is named for
Bell Labs researcher Frank Gray, who described it in his 1947 patent submittal on Pulse Code
Communication. He did not call it a Gray Code, but noted there was no name associated with the
novel code and referred to it as a Binary Reflected Code for the way he determined the groupings
and number representations. When the patent was granted in 1953 others began to refer to the
encoding scheme as the Gray Code.
It is a non-weighted code and it is not an arithmetic code; that means there are no specific
weights assigned to the bit positions. It has the very special feature that only one bit changes
each time the decimal number is incremented. Because only one bit changes at a time, the
Gray code is called a unit-distance code. The Gray code is a cyclic code and cannot be used
for arithmetic operations.
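The standard conversion between binary and Gray code can be sketched with a couple of XOR operations in Python (illustrative code):

    def binary_to_gray(n: int) -> int:
        # Each Gray-code bit is the XOR of two adjacent binary bits.
        return n ^ (n >> 1)

    def gray_to_binary(g: int) -> int:
        # Undo the XOR by folding in successively shifted copies of the Gray value.
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    for i in range(8):
        print(i, format(binary_to_gray(i), "03b"))
    # 0 000, 1 001, 2 011, 3 010, 4 110, 5 111, 6 101, 7 100
    # Note that consecutive values differ in exactly one bit.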
ASCII is an acronym for American Standard Code for Information Interchange. It is a code
that uses numbers to represent characters; each character is assigned a number between 0 and 127.
Uppercase and lowercase characters are assigned different numbers. For example, the character A is
assigned the decimal number 65, while a is assigned decimal 97.
ASCII code is a 7-bit code used in smaller computers. ASCII code represents the numbers 0 to 9,
the uppercase and lowercase letters of the alphabet, punctuation marks, and a blank space, for a
total of 95 printable characters. Including control characters, ASCII encodes 128 characters in total.
When a computer sends data, the keys you press and the text you send and receive are transmitted
as a bunch of numbers. These numbers represent the characters you typed or generated. Because the
range of standard ASCII is 0 to 127, each character requires only 7 bits, which fits within 1 byte
of data. Microprocessors only understand bits and bytes; to a microprocessor, everything is a
sequence of bits.
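In Python, the built-in ord() and chr() functions expose these character codes directly, so the mapping is easy to check:

    for ch in ["A", "a", " ", "7"]:
        code = ord(ch)                          # decimal ASCII code of the character
        print(ch, code, format(code, "07b"))    # and its 7-bit binary form
    # A 65 1000001
    # a 97 1100001
    #   32 0100000
    # 7 55 0110111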
1.5 DATA REPRESENTATION
Data
o refers to the symbols that represent people, events, things, and ideas. Data can be a
name, a number, the colors in a photograph, or the notes in a musical composition.
Data Representation
Digitization
Representing Numbers
Representing Text
o Character data is composed of letters, symbols, and numerals that are not used in
calculations.
o Examples of character data include your name, address, and hair color.
o Character data is commonly referred to as “text.”
o Digital devices employ several types of codes to represent character data, including
ASCII, Unicode, and their variants.
o The ASCII code for an uppercase A is 1000001.
Representing Image
o Images also need to be converted into binary in order for a computer to process
them so that they can be seen on our screen. Digital images are made up of pixels.
Each pixel in an image is made up of binary numbers.
o If we say that 1 is black (or on) and 0 is white (or off), then a simple black and
white picture can be created using binary.
o To create the picture, a grid can be set out and the squares coloured (1 – black and
0 – white). But before the grid can be created, the size of the grid needs to be known.
This data is called metadata and computers need metadata to know the size of an
image. If the metadata for the image to be created is 10x10, this means the picture
will be 10 pixels across and 10 pixels down.
The system described so far is fine for black and white images, but most images need to
use colors as well. Instead of using just 0 and 1, allowing four possible numbers lets an
image use four colors. In binary this can be represented using two bits per pixel:
00 – white
01 – blue
10 – green
11 – red
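A short Python sketch of this idea, decoding a stream of 2-bit pixel values with the mapping above (the grid size and pixel data here are made up purely for illustration):

    # Hypothetical 4x2 image: each pixel is two bits, using the color mapping above.
    PALETTE = {"00": "white", "01": "blue", "10": "green", "11": "red"}
    WIDTH, HEIGHT = 4, 2                                 # metadata: size of the grid
    bitstream = "00 01 10 11 11 10 01 00".split()        # one 2-bit code per pixel

    pixels = [PALETTE[bits] for bits in bitstream]
    rows = [pixels[r * WIDTH:(r + 1) * WIDTH] for r in range(HEIGHT)]
    for row in rows:
        print(row)
    # ['white', 'blue', 'green', 'red']
    # ['red', 'green', 'blue', 'white']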
Representing Sound
o Sound needs to be converted into binary for computers to be able to process it. To do
this, sound is captured - usually by a microphone - and then converted into a digital
signal.
o An analogue to digital converter will sample a sound wave at regular time intervals.
For example, a sound wave can be sampled at each time sample point:
The samples can then be converted to binary. They will be recorded to the nearest whole
number.
Time Sample    1     2     3     4     5     6     7     8     9     10
Denary         8     3     7     6     9     7     2     6     6     6
Binary         1000  0011  0111  0110  1001  0111  0010  0110  0110  0110
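The conversion of the rounded samples to 4-bit binary can be sketched in Python (the sample values mirror the table above; the code is illustrative only):

    samples = [8, 3, 7, 6, 9, 7, 2, 6, 6, 6]              # denary values, one per time sample
    binary_samples = [format(s, "04b") for s in samples]  # each value as a 4-bit binary number
    print(" ".join(binary_samples))
    # 1000 0011 0111 0110 1001 0111 0010 0110 0110 0110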
Compression
Processing power and storage space are very valuable on a computer. To get the best out of
both, we often need to reduce the file size of text, image and audio data so that it can be
transferred more quickly and takes up less storage space.
In addition, large files take a lot longer to download or upload which leads to web pages,
songs and videos that take longer to load and play when using the internet.
Any kind of data can be compressed. There are two main types of compression: lossy and lossless.
Lossy compression removes some of a file’s original data in order to reduce the file size. This
might mean reducing the number of colors in an image or reducing the number of samples in a
sound file. This can result in a small loss of quality of an image or sound file.
A popular lossy compression method for images is JPEG, which is why most images on the
internet are JPEG images. A popular lossy compression method for sound is MP3. Once a file
has been compressed using lossy compression, the discarded data cannot be retrieved.
Lossless compression doesn’t reduce the quality of the file at all. No data is lost, so lossless
compression allows a file to be recreated exactly as it was when originally created.
There are various algorithms for doing this, usually by looking for patterns in the data that are
repeated. Zip files are an example of lossless compression.
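Zip uses more sophisticated algorithms, but a toy run-length encoder in Python shows the core idea of lossless compression: runs of repeated data are stored more compactly, and the original can be reconstructed exactly (the function names here are my own).

    from itertools import groupby

    def rle_encode(text: str) -> list:
        # Store each run of repeated characters as a (character, run length) pair.
        return [(ch, len(list(run))) for ch, run in groupby(text)]

    def rle_decode(pairs: list) -> str:
        # Reverse the encoding exactly; no information is lost.
        return "".join(ch * count for ch, count in pairs)

    original = "AAAABBBCCDAAA"
    encoded = rle_encode(original)
    print(encoded)                           # [('A', 4), ('B', 3), ('C', 2), ('D', 1), ('A', 3)]
    print(rle_decode(encoded) == original)   # True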