Microprocessor Module 1 and 2
Elements of microcomputer
• The microcomputer consists of a microprocessor, memory, buses, and I/O devices, as
shown in the block diagram above.
• Microprocessor : The microprocessor is the central processing unit. It performs all
arithmetic and logical operations through its internal registers and ALU
(Arithmetic Logic Unit). It also acts as the bus master, controlling data transfer,
which includes address generation, bus synchronization and the actual data transfer.
Generally, the operating frequency of a microprocessor is in megahertz, which
means its clock period is in the microsecond range; hence the name microprocessor.
• Memory : The microprocessor has no internal memory of its own, so external
memories must be interfaced with it. These include ROM, which is used to store
the program, and RAM, which is used to store data temporarily.
When power is lost, the data in RAM is erased, while the data in ROM is
retained. Apart from these, some systems may include EEPROM, as the
application requires.
• Buses : For actual data transfer between the CPU and peripheral devices or
memory, buses are used. A bus is a collection of conductors/paths (typically in
multiples of 8) used for data transfer. Depending on their use, buses are
classified into three types: the address bus, the data bus and the control bus. The data
bus carries data bidirectionally. The address bus carries the
address, whereas the control bus synchronizes data transfer.
• I/O devices : Input/output devices are necessary to interact with the external
world. These devices can be interfaced through a peripheral controller IC. Similar
to memory, input devices send data to the CPU and output devices receive data from
the CPU over the buses. An I/O device can be as simple as keys and LEDs.
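Since the width of the address bus fixes how much memory the CPU can reach, a quick worked example helps. The Python sketch below is an illustration added to these notes, not part of the original material; the two bus widths shown are those of the 8085 (16 address lines) and the 8086 (20 address lines).

```python
# Each extra address line doubles the number of distinct addresses, so a bus
# with n lines can select 2**n memory locations (bytes, for byte-wide memory).

def addressable_bytes(address_lines: int) -> int:
    """Number of unique locations reachable with the given bus width."""
    return 2 ** address_lines

print(addressable_bytes(16))  # 65536   -> 64 KB, as on the 8085
print(addressable_bytes(20))  # 1048576 -> 1 MB, as on the 8086
```

This is why the later section describes "low memory addressing capability" as a limitation of the 8-bit processors: widening the address bus from 16 to 20 lines multiplied the addressable memory sixteenfold.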
Types of Programming Languages
A computer is simply a machine and hence cannot perform any task on its own.
Therefore, to make a computer functional, different coding languages have been
developed; these are known as programming languages.
Computer programming languages are broadly classified into the following three
main categories:
• Machine Language
• Assembly Language
• High-level Language
The following sections give an overview of assembly language and high-level
languages, and how they differ from each other.
In assembly language, programs are written using mnemonics and expressions
that are easier for humans to understand. The computer's processor can
execute only machine code, so the assembly code must be converted into
machine code. For this purpose, a utility program is used to translate
assembly code into executable machine code. This utility program, which converts
assembly code into machine code, is called an assembler.
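To make the assembler's role concrete, here is a toy sketch in Python, added as an illustration for these notes; it is not a real assembler, but it captures the essence of the job, namely mapping mnemonics to numeric opcodes. The opcode values used are the genuine 8085 encodings for these three instructions.

```python
# A toy "assembler": translate mnemonics into the processor's numeric
# machine codes (opcodes). Values are the real 8085 encodings.

OPCODES = {
    "MVI A": 0x3E,  # load an 8-bit immediate value into accumulator A
    "ADD B": 0x80,  # A = A + B
    "HLT":   0x76,  # halt the processor
}

def assemble(lines):
    """Translate (mnemonic, operand-or-None) pairs into a list of bytes."""
    machine_code = []
    for mnemonic, operand in lines:
        machine_code.append(OPCODES[mnemonic])
        if operand is not None:          # immediate operand follows the opcode
            machine_code.append(operand)
    return machine_code

program = [("MVI A", 0x05), ("ADD B", None), ("HLT", None)]
print([hex(b) for b in assemble(program)])  # ['0x3e', '0x5', '0x80', '0x76']
```

A real assembler additionally resolves labels, checks operand sizes, and emits object files, but the core translation step is exactly this table lookup.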
The major advantages of assembly language are its low memory requirements, fast
execution time and direct control over the hardware.
Conclusion
To conclude, assembly language is a low-level language in which programs are
written using symbolic representations of machine codes, whereas high-level
languages are programming languages that use keywords and phrases
closer to natural language, such as English, to write computer instructions.
Examples of commonly used high-level languages include C, C++, Java,
Python and C#.
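The contrast between the two levels shows up even in a single computation: one high-level statement corresponds to several processor instructions. In the Python sketch below, added as an illustration for these notes, the comments show one possible hand translation of the same addition into 8085 assembly.

```python
# One high-level statement...
total = 5 + 3   # ...versus an equivalent 8085 assembly sequence:
                #   MVI A, 05H   ; load 5 into the accumulator
                #   MVI B, 03H   ; load 3 into register B
                #   ADD B        ; A = A + B
print(total)    # 8
```

The high-level version says *what* to compute; the assembly version spells out *how* the processor computes it, register by register.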
EVOLUTION OF MICROPROCESSOR
Intel introduced its first 4-bit microprocessor, the 4004, in 1971 and its 8-bit
microprocessor, the 8008, in 1972. These microprocessors could not survive as general
purpose microprocessors due to their design and performance limitations. The launch
of the first general purpose 8-bit microprocessor, the 8080, in 1974 by Intel is considered
the first major stepping stone towards the development of advanced
microprocessors. The 8085 followed the 8080, with a few more features added
to its architecture, resulting in a functionally complete microprocessor.
The main limitations of the 8-bit microprocessors were their low speed, low memory
addressing capability, limited number of general purpose registers and a less powerful
instruction set. All these limitations of the 8-bit microprocessors pushed the designers
to build more powerful processors in terms of advanced architecture, more processing
capability, larger memory addressing capability and a more powerful instruction set. The
8086 was a result of such developmental design efforts.
In the family of 16-bit microprocessors, Intel's 8086 was the first to be launched,
in 1978. The introduction of the 16-bit processor was a result of the increasing demand
for more powerful, high-speed computational resources. The 8086 microprocessor
has a much more powerful instruction set, along with architectural developments
that impart substantial programming flexibility and an improvement in speed over the 8-
bit microprocessors.
The peripheral chips designed earlier for the 8085 are compatible with the
8086 with slight or no modifications. Though there is a considerable difference
between the memory addressing techniques of the 8085 and 8086, the memory interfacing
technique is similar, but includes the use of a few additional signals. The clock
requirements also differ from those of the 8085, but the overall minimal system
organisation of the 8086 is similar to that of a general 8-bit microprocessor.
HISTORY OF INTEL PROCESSORS