Introduction To Computer
Definitions
Computer: A computer is basically defined as a tool or machine used for processing data to give required information. It is any electronic device that can accept data, process it, and give out the result (output) on the screen or the Visual Display Unit (VDU).
INPUT (Data) → PROCESSING → OUTPUT (Information)
Data: The term data refers to facts about a person, object or place, e.g. name, age, complexion, school, class, height, etc.
Information: This is referred to as processed data or a meaningful statement, e.g. net pay of workers, examination results of students, list of successful candidates in an interview, etc.
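To make the input-processing-output idea concrete, here is a small illustrative Python sketch (the worker's name, figures and tax rate are made-up values, not from this text):

def compute_net_pay(gross_pay, tax_rate):
    # PROCESSING: turn raw data into information (net pay)
    return gross_pay - (gross_pay * tax_rate)

# INPUT (data): raw facts about a worker (illustrative values)
worker = {"name": "Ada", "gross_pay": 250000, "tax_rate": 0.10}

# OUTPUT (information): a meaningful statement
net_pay = compute_net_pay(worker["gross_pay"], worker["tax_rate"])
print(f"Net pay for {worker['name']}: {net_pay:.2f}")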
The computer is a machine used for a variety of purposes. Its use transcends all areas of human endeavour owing to the advantages of the computer method of data processing over the manual and mechanical methods.
Computing
Computing is the process of using computers and computer technology to perform tasks, solve problems and process data into information.
The following are the three major methods that have been widely used for data processing over the years:
1. The Manual Method: The manual method of data processing involves the use of chalk, wall, pen, pencil and the like. These devices, machines or tools facilitate human efforts in recording, classifying, manipulating, sorting and presenting data or information. The manual data processing operations entail considerable manual effort. Thus, the manual method is cumbersome, slow and error-prone. When there are errors, the reliability, accuracy, neatness, tidiness and validity of the data would be in doubt. The manual method does not allow for the processing of large volumes of data on a regular and timely basis.
2. The Mechanical Method: The mechanical method of data processing involves the use of machines such as the typewriter, roneo machines, adding machines and the like. These machines facilitate human efforts in recording, classifying, manipulating, sorting and presenting data or information. The mechanical operations are basically routine in nature; they can be noisy, hazardous, error-prone and untidy. The mechanical method does not allow for the processing of large volumes of data on a continuous and timely basis.
3. The Computer Method: The computer method of carrying out data processing overcomes the shortcomings of the manual and mechanical methods. Data can be processed steadily and continuously, and the operations are practically noiseless. There is a store where data and instructions can be kept temporarily and permanently. Output reports are usually very neat and decent and can be produced in various forms such as graphs, diagrams and pictures.
CHARACTERISTICS OF A COMPUTER
Speed: The computer can manipulate large data at incredible speed and response time can
be very fast.
Accuracy: Its accuracy is very high and its consistency can be relied upon. Errors committed in computing are mostly due to human rather than technological weakness. There are in-built error-detecting facilities in the computer.
Storage: It has both internal and external storage facilities for holding data and instructions. This capacity varies from one machine to the other. Memories are built up in K (kilo) modules, where 1K = 1024 storage locations.
Automatic: Once a program is in the computer’s memory, it can run automatically each time it is required, with little or no human intervention.
Reliability: Being a machine, a computer does not suffer human traits of tiredness and
lack of concentration. It will perform the last job with the same speed and accuracy as the
first job every time even if ten million jobs are involved.
Flexibility: It can perform any type of task once the task can be reduced to logical steps. Modern computers can be used to perform a variety of functions like on-line processing, multi-programming, real-time processing and batch processing.
Categories of Computers
Although there are no industry standards, computers are generally classified in the following
ways:
1. Classification Based on Signal Type: There are basically three types of electronic computers: the digital, the analog and the hybrid computers.
The Digital Computer This represents its variables in the form of digits. The data it
deals with, whether representing numbers, letters or other symbols, are converted into
binary form on input to the computer. The data undergoes processing, after which the binary digits are converted back to alphanumeric form for output for human use. Because business applications like inventory control, invoicing and payroll deal with discrete values (separate, disunited, discontinuous), they are best processed with digital computers. As a result, digital computers are mostly used in business and commercial data processing.
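As a rough illustration of this digital representation, the short Python sketch below converts an alphanumeric string to binary digits and back; it is only a toy model of the encode-process-decode cycle described above:

def to_binary(text):
    # Encode each character as an 8-bit binary string
    return [format(ord(ch), "08b") for ch in text]

def from_binary(bits):
    # Decode 8-bit binary strings back to characters
    return "".join(chr(int(b, 2)) for b in bits)

encoded = to_binary("PAY42")      # input converted to binary form
decoded = from_binary(encoded)    # output converted back for human use
print(encoded)
print(decoded)                    # -> PAY42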
The Analog Computer It measures rather than counts. This type of computer sets up
a model of a system. The common type represents its variables in terms of electrical
voltage and sets up circuit analogues of the equations connecting the variables. The answer can be obtained either by using a voltmeter to read the value of the variable required, or by feeding the voltage into a plotting device. Analog computers hold data in the form of continuous physical quantities, so a direct reading can give an exact answer because the answer has not been approximated to the nearest digit. However, when we try to obtain the answers using a digital voltmeter, we often find that the accuracy is less than that which could have been obtained from an analog machine.
Analog computers are mainly used for scientific and engineering purposes, particularly for the controlling and monitoring of systems in such areas as hydrodynamics and rocketry production.
The Hybrid Computer: Hybrid computers are a combination of analog and digital
computers. They use analog techniques for data acquisition and digital techniques for data processing and storage. In some cases, the computer user may wish to obtain the output from an analog computer on a digital machine where the two are connected, and the analog computer may be regarded as a peripheral of the digital computer. In such a situation, a hybrid system attempts to gain the advantage of both the digital and the analog elements in the same machine. This kind of machine is usually a special-purpose device which is built for a specific task. It needs a conversion element which accepts analog inputs and outputs digital values. Such converters are called digitisers. There is a need for a converter from digital to analog as well.
Complex calculations can be dealt with by the digital elements, thereby requiring a large memory and giving accurate results after programming. Hybrid computers are mainly used in aerospace and process-control applications.
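To give a feel for what a digitiser does, here is a small, hypothetical Python sketch of analog-to-digital conversion: a continuous voltage is sampled and rounded to discrete levels (the signal, sample rate and resolution are illustrative assumptions, not from this text):

import math

def analog_voltage(t):
    # Stand-in for a continuous analog signal (a 1 Hz sine wave, 0-5 V)
    return 2.5 + 2.5 * math.sin(2 * math.pi * t)

def digitise(value, v_min=0.0, v_max=5.0, levels=256):
    # Map the voltage onto one of 256 discrete steps (an 8-bit ADC)
    step = (v_max - v_min) / (levels - 1)
    return round((value - v_min) / step)

# Sample the signal 8 times over one second
samples = [digitise(analog_voltage(i / 8)) for i in range(8)]
print(samples)  # discrete digital values, e.g. [128, 218, 255, ...]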
2. Classification Based on Purpose
The Special-Purpose Computer: This is designed to solve a restricted class of problems. Such computers may even be designed and built to handle only one job. In such machines, the steps or operations that the computer
follows may be built into the hardware. Most of the computers used for military
purposes fall into this class. Other examples of special purpose computers include:
Computers used as robots in factories like vehicle assembly plants and glass
industries.
Special-purpose computers are usually very efficient for the tasks for which they are specially
designed. They are very much less complex than the general-purpose computers. The simplicity
of the circuiting stems from the fact that provision is made only for limited facilities. They are
very much cheaper than the general-purpose type since they involve fewer components and are
less complex.
The General-Purpose Computer: This is designed to solve a wide variety of problems within the constraints imposed by memory size, speed and the type of input/output devices. Examples of areas where general-purpose computers are employed include the following:
Payroll
Banking
Billing
Sales analysis
Cost accounting
Manufacturing scheduling
Inventory control
General-purpose computers are more flexible than special-purpose computers and can thus handle a wide range of jobs. They are, however, less efficient than the special-purpose computers.
3. Classification Based on Capacity: In the past, the capacity of computers was measured in terms of physical size. Today, however, physical size is not a good measure of capacity; a better measure is the volume of work that a computer can handle. The volume of work that a given computer handles is closely tied to the cost and to the memory size of the computer. Therefore, most authorities today accept rental price as the standard for ranking computers. Here, both memory size and cost shall be used to rank computers into three classes:
Microcomputers
Medium/mini/small computers
Large computer/mainframes.
Microcomputers: Microcomputers, also known as single board computers, are the cheapest
class of computers. In the microcomputer, we do not have a Central Processing Unit (CPU)
as we have in the larger computers. Rather we have a microprocessor chip as the main data
processing unit. They are the cheapest and smallest, and can operate under normal office
conditions. Examples are IBM, APPLE, COMPAQ, Hewlett Packard (HP), Dell and
Toshiba, etc.
Normally, personal computers are placed on the desk; hence they are referred to as desktop personal computers. Still other types are available under the category of portable personal computers, as described below.
Laptop Computers: These are small-sized types that are battery-operated. The screen is used
to cover the system while the keyboard is installed flat on the system unit. They could be
carried about like a box when closed after operation and can be operated in vehicles while on
a journey.
Notebook Computers: These are like laptop computers but smaller in size. Though small, they offer the facilities of larger personal computers.
Palmtop Computers: The palmtop computer is far smaller in size. All the components are
complete as in any of the above, but it is made smaller so that it can be held on the palm.
Uses of the Personal Computer
It can be used to produce documents like memos, reports, letters and briefs.
It can assist in searching for specific information from lists or from reports.
Advantages of the Personal Computer
It can attend to several users at the same time, thereby being able to process several jobs at a time.
Disadvantages of the Personal Computer
It requires special skill to operate.
Some computers cannot function properly without the aid of a cooling system, e.g. air conditioning, in hot environments.
Mini Computers
Mini computers have memory capacities in the range of 128–256 Kbytes and are also not expensive, but are reliable and smaller in size compared to mainframes. They were first introduced in 1965, when DEC (Digital Equipment Corporation) built the PDP-8; other minicomputers soon followed.
Mainframe Computers
The mainframe computers, often called number crunchers, have very large memory capacities and are very expensive. They can execute up to 100 MIPS (Million Instructions per Second). They have large systems and are used by many people for a variety
of purposes.
MILESTONES
A Brief History of Computer Technology
A complete history of computing would include a multitude of diverse devices such as the
ancient Chinese abacus, the Jacquard loom (1805) and Charles Babbage’s “analytical engine”
(1834). It would also include a discussion of mechanical, analog and digital computing
architectures. As late as the 1960s, mechanical devices, such as the Marchant calculator, still
found widespread application in science and engineering. During the early days of electronic
computing devices, there was much discussion about the relative merits of analog vs. digital
computers. In fact, as late as the 1960s, analog computers were routinely used to solve systems
of finite difference equations arising in oil reservoir modeling. In the end, digital computing
devices proved to have the power, economics and scalability necessary to deal with large scale
computations. Digital computers now dominate the computing world in all areas ranging from
the hand calculator to the supercomputer and are pervasive throughout society. Therefore, this
brief sketch of the development of scientific computing is limited to the area of digital, electronic
computers. The evolution of digital computing is often divided into generations. Each generation
is characterised by dramatic improvements over the previous generation in the technology used
to build computers, the internal organisation of computer systems, and programming languages.
Although not usually associated with computer generations, there has been a steady improvement in algorithms, including algorithms used in computational science. The following history has been organised using these widely recognised generations as mileposts.
First Generation Electronic Computers (1937-1953)
These machines used electronic switches, in the form of vacuum tubes, instead of
electromechanical relays. In principle the electronic switches were more reliable, since they
would have no moving parts that would wear out, but technology was still new at that time
and the tubes were comparable to relays in reliability. Electronic components had one major
benefit, however: they could “open” and “close” about 1,000 times faster than mechanical switches. The earliest attempt to build an electronic computer was by J. V. Atanasoff, a professor of physics and mathematics at Iowa State, in 1937. A second early electronic machine was Colossus, designed by Alan Turing for the British military in 1943. This
machine played an important role in breaking codes used by the German army in World War
II. Turing’s main contribution to the field of computer science was the idea of the Turing
Machine, a mathematical formalism widely used in the study of computable functions. The
first general-purpose programmable electronic computer was the Electronic Numerical Integrator and Computer (ENIAC), built by J. Presper Eckert and John V. Mauchly at the University of Pennsylvania. The work was funded by the U.S. Army Ordnance Department, which needed a way to compute ballistics during World War II. The machine
wasn’t completed until 1945, but then it was used extensively for calculations during the
design of the hydrogen bomb. By the time it was decommissioned in 1955 it had been used
for research on the design of wind tunnels, random number generators, and weather
prediction.
Second Generation (1954-1962)
Electronic switches in this era were based on discrete diode and transistor technology with a
switching time of approximately 0.3 microseconds. The first machines to be built with this
technology include TRADIC at Bell Laboratories in 1954 and TX-0 at MIT’s Lincoln
Laboratory. Important commercial machines of this era include the IBM 704 and 7094. The
latter introduced I/O processors for better throughput between I/O devices and main memory.
The second generation also saw the first two supercomputers designed specifically for numeric processing in scientific applications. The term “supercomputer” is generally reserved for a machine that is an order of magnitude more powerful than other machines of
its era. Two machines of the 1950s deserve this title. The Livermore Atomic Research
Computer (LARC) and the IBM 7030 (aka Stretch) were early examples of machines that
overlapped memory operations with processor operations and had primitive forms of parallel
processing.
Third Generation (1963-1972)
The third generation brought huge gains in computational power. Innovations in this era
include the use of integrated circuits, or ICs (semiconductor devices with several transistors
built into one physical component), semiconductor memories starting to be used instead of magnetic cores, microprogramming as a technique for efficiently designing complex processors, the coming of age of pipelining and other forms of parallel processing, and the
introduction of operating systems and time-sharing. The first ICs were based on small-scale
integration (SSI) circuits, which had around 10 devices per circuit (or “chip”), and evolved to
the use of medium-scale integrated (MSI) circuits, which had up to 100 devices per chip.
Multilayered printed circuits were developed and core memory was replaced by faster, solid
state memories. Computer designers began to take advantage of parallelism by using multiple
functional units, overlapping CPU and I/O operations, and pipelining (internal parallelism) in
both the instruction stream and the data stream. In 1964, Seymour Cray developed the CDC
6600, which was the first architecture to use functional parallelism. By using 10 separate
functional units that could operate simultaneously and 32 independent memory banks, the
CDC 6600 was able to attain a computation rate of 1 million floating point operations per
second (1 Mflops).
Fourth Generation (1972-1984)
The next generation of computer systems saw the use of large scale integration (LSI – 1,000 devices per chip) and very large scale integration (VLSI – 100,000 devices per chip) in the
construction of computing elements. At this scale entire processors will fit onto a single chip,
and for simple systems the entire computer (processor, main memory, and I/O controllers)
can fit on one chip. Microcomputers and workstations were introduced and saw wide use as alternatives to time-shared mainframe computers. Developments in software include very high-level languages such as FP (functional programming) and Prolog (programming in logic). These languages tend to use a declarative programming style as opposed to the imperative style of languages such as Pascal, C and FORTRAN: the programmer gives a mathematical specification of what should be computed, leaving many details of how it should be computed to the compiler and/or runtime system. These languages are not yet in
wide use, but are very promising as notations for programs that will run on massively parallel
computers (systems with over 1,000 processors). Compilers for established languages started
to use sophisticated optimisation techniques to improve codes, and compilers for vector
processors were able to vectorise simple loops (turn loops into single instructions that would
initiate an operation over an entire vector). Two important events marked the early part of the
third generation: the development of the C programming language and the UNIX operating system, both at Bell Labs in 1972.
Fifth Generation (1984-1990)
The development of the next generation of computer systems is characterised mainly by the
acceptance of parallel processing. Until this time, parallelism was limited to pipelining and
vector processing, or at most to a few processors sharing jobs. The fifth generation saw the
introduction of machines with hundreds of processors that could all be working on different parts of a single program. The scale of integration in semiconductors continued at an incredible pace, so that by 1990 it was possible to build chips with a million components –
and semiconductor memories became standard on all computers. Other new developments
were the widespread use of computer networks and the increasing use of single-user
workstations. Prior to 1985, large scale parallel processing was viewed as a research goal, but
two systems introduced around this time are typical of the first commercial products to be based on parallel processing. The Sequent Balance 8000 connected up to 20 processors to a single shared memory module (but each processor had its own local cache). The machine
was designed to compete with the DEC VAX-780 as a general purpose Unix system, with
each processor working on a different user’s job. However, Sequent provided a library of
subroutines that would allow programmers to write programs that would use more than one
processor, and the machine was widely used to explore parallel algorithms and programming
techniques. The Intel iPSC-1, nicknamed “the hypercube”, took a different approach. Instead
of using one memory module, Intel connected each processor to its own memory and used a network interface to connect processors. This distributed memory architecture meant memory was no longer a bottleneck and large systems (using more processors) could be
built. The largest iPSC-1 had 128 processors. Toward the end of this period, a third type of
parallel processor was introduced to the market. In this style of machine, known as a data-parallel or SIMD machine, several thousand very simple processors work under the direction of a single control unit.
Sixth Generation (Future): ADVANCED AI, QUANTUM COMPUTING, AND
BEYOND
Transitions between generations in computer technology are hard to define, especially as they are
taking place. Some changes, such as the switch from vacuum tubes to transistors, are
immediately apparent as fundamental changes, but others are clear only in retrospect. Anticipated hallmarks of the sixth generation include the integration of AI with everyday devices and environments, leading to pervasive and ubiquitous computing; quantum computers able to tackle problems previously considered unsolvable; the development of neuromorphic computing, which mimics the neural structure of the human brain for more efficient processing; and the use of nanotechnology for building ever smaller and more powerful components.
Milestones:
Continued advancements in AI, with models like GPT-4 achieving more sophisticated natural language understanding and generation.
Progress in quantum computing, with companies like IBM, Google, and others demonstrating processors with growing numbers of qubits.
Intel and other companies making strides in neuromorphic computing, with chips designed to mimic the structure of biological neural networks.
The sixth generation represents the convergence of multiple cutting-edge technologies, leading to transformative applications in fields such as medicine, finance, climate science, and more, paving the way for a future where computing is embedded in nearly every aspect of life.
COMPUTATIONAL THINKING SKILLS
Computational thinking (CT) is a problem-solving approach that draws on a number of characteristics, such as logically ordering and analyzing data, and creating solutions using a series of ordered steps (algorithms). It involves various techniques and processes that can be used to tackle complex problems in a systematic and efficient manner. It is not just for computer scientists but is a fundamental skill for everyone. The term was popularized by Jeannette Wing in 2006, who argued that CT should be a fundamental skill like reading, writing, and arithmetic. The ability to solve problems is essential in everyday life and various professional fields. It involves identifying issues, analyzing them, and devising effective solutions.
1. Decomposition
Decomposition involves breaking down a complex problem or system into smaller, more
manageable parts.
Benefits: Makes large problems easier to solve, helps in understanding the problem better, and allows different people to work on different parts in parallel.
Examples:
Software Development: Breaking down a software project into modules, functions, or classes.
Daily Life: Planning a trip by splitting it into tasks like booking flights, reserving accommodation, and packing.
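As a quick illustration of decomposition in code, the hypothetical Python sketch below breaks a "produce a student's result" problem into small functions, each solving one sub-problem (the names and scores are illustrative):

def average(scores):
    # Sub-problem 1: compute the mean score
    return sum(scores) / len(scores)

def grade(avg):
    # Sub-problem 2: map an average to a letter grade
    return "A" if avg >= 70 else "B" if avg >= 60 else "C" if avg >= 50 else "F"

def report(name, scores):
    # Sub-problem 3: combine the pieces into the final result
    avg = average(scores)
    return f"{name}: average {avg:.1f}, grade {grade(avg)}"

print(report("Ada", [68, 75, 80]))  # Ada: average 74.3, grade A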
2. Pattern Recognition
This involves identifying patterns or trends in data and leveraging these to solve problems more
efficiently.
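For instance, a small Python sketch of pattern recognition might spot a repeating trend in data and exploit it to predict the next value (the sales figures here are made up):

# Pattern recognition: detect a simple repeating trend in data
sales = [100, 110, 120, 130, 140]

# Look at the differences between consecutive values
diffs = [b - a for a, b in zip(sales, sales[1:])]

if len(set(diffs)) == 1:
    # Constant difference -> a linear pattern; extend it
    prediction = sales[-1] + diffs[0]
    print(f"Linear pattern (+{diffs[0]} each step); next value: {prediction}")
else:
    print("No simple linear pattern found")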
3. Abstraction
Abstraction is the process of filtering out the unnecessary details to focus on the important information.
Benefits: Simplifies complex problems, makes it easier to understand and manage the problem, and keeps attention on the essentials.
Examples:
Modeling: Creating a simplified model of a real-world system, like a map or a business process.
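In code, abstraction often means hiding detail behind a simple interface. A small, hypothetical Python sketch (the cities and coordinates are made-up values):

import math

# Hidden detail: (x, y) map coordinates in kilometres (illustrative values)
_CITY_COORDS = {"Lagos": (0.0, 0.0), "Ibadan": (90.0, 60.0)}

def distance(city_a, city_b):
    # The caller only needs city names, not coordinates or the formula
    (x1, y1), (x2, y2) = _CITY_COORDS[city_a], _CITY_COORDS[city_b]
    return math.hypot(x2 - x1, y2 - y1)

print(f"Lagos -> Ibadan: {distance('Lagos', 'Ibadan'):.1f} km")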
4. Algorithm Design
Algorithm design is the process of developing a step-by-step procedure or set of rules for solving a problem or completing a task.
Examples:
Everyday Life: A recipe that lists the exact steps for preparing a dish.
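As a concrete sketch of algorithm design, here is a simple step-by-step search algorithm in Python (linear search; the list of names is illustrative):

# Algorithm design: a precise, step-by-step procedure.
# Linear search: check each item in turn until the target is found.

def linear_search(items, target):
    for index, item in enumerate(items):  # Step 1: examine items in order
        if item == target:                # Step 2: compare with the target
            return index                  # Step 3: report the position
    return -1                             # Step 4: report "not found"

names = ["Bola", "Chika", "Ada", "Emeka"]
print(linear_search(names, "Ada"))    # 2
print(linear_search(names, "Tunde"))  # -1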
Computational Thinking in Practice
1. Education: Integrating CT into the curriculum helps students develop critical thinking skills from an early age.
2. Workplace: Essential for roles in IT, engineering, healthcare, finance, and many other fields. Examples: Data scientists using CT to analyze and interpret large datasets.
3. Everyday Life: CT can help in making informed decisions, planning, and managing daily tasks.
How to Develop Computational Thinking Skills
1. Educational Programs
Schools and Universities: Offering courses that include programming, algorithms, and problem-
solving exercises.
2. Online Resources
3. Practical Experience
Projects: Engaging in personal or group projects that require problem-solving and algorithm
development.
4. Competitions: Participating in hackathons, coding challenges, and other competitive
programming events.
CAREER OPPORTUNITIES
Computing offers a wide range of career opportunities and pathways due to the increasing
reliance on technology across various industries. Here are some key areas and roles within
computing:
1. Software Development
Software Developer: Design, write, test, and maintain software applications.
2. Data Science and Artificial Intelligence
Data Scientist: Analyze and interpret complex data to help companies make decisions.
Machine Learning Engineer: Develop algorithms that allow computers to learn from and make predictions or decisions based on data.
3. Cybersecurity
Security Analyst: Protect an organization’s information systems and data from cyber threats.
4. IT Management and Support
IT Manager: Oversee the IT department and ensure the smooth operation of IT services.
Help Desk Technician: Assist users with technical issues and troubleshooting.
EMERGING TECHNOLOGIES AND INNOVATION
Emerging technologies and innovation encompass a wide range of new advancements and
creative processes that have the potential to significantly impact various industries and society.
1. Artificial Intelligence and Machine Learning: Enhancements in AI and ML are driving innovation across industries.
2. Blockchain: Provides secure, transparent and decentralized methods for transactions and data storage, impacting finance, supply chain management and more.
3. Internet of Things (IoT): IoT connects everyday objects to the internet, allowing for real-time data collection and analysis. Applications range from smart homes to industrial automation.
4. 5G Networks: Offer faster speeds, lower latency, and enhanced connectivity, enabling new applications in areas like autonomous vehicles and remote work, with gains in productivity.
TECHNOLOGY IN BUSINESS
1. Automation
Robotic Process Automation (RPA): Automates repetitive tasks, such as data entry and invoice processing.
2. Data Analytics
Big Data Analytics: Analyzes large datasets to uncover trends, patterns, and insights that inform
business decisions.
Business Intelligence (BI): Utilizes data visualization and reporting tools to provide actionable
insights.
3. Customer Relationship Management (CRM)
CRM Systems: Manage customer data, track interactions, and improve customer service and retention.
4. Enterprise Resource Planning (ERP)
ERP Systems: Integrate core business processes, such as finance, HR, supply chain, and manufacturing, into a single system.
5. Cloud Computing
Cloud Services: Provide scalable and flexible IT resources, such as storage, computing power, and networking, on demand.
Software as a Service (SaaS): Delivers software applications over the internet, reducing the need for local installation and maintenance.
6. Cybersecurity
Security Solutions: Protect business data and systems from cyber threats through firewalls, encryption, and intrusion detection systems.
7. Artificial Intelligence and Machine Learning
Predictive Analytics: Use machine learning models to predict future trends and behaviors, such as customer demand.
Natural Language Processing (NLP): Improve customer interactions and automate customer support with chatbots and virtual assistants.
8. E-Commerce
E-Commerce Platforms: Enable online sales and manage inventory, orders, and customer data.
Digital Marketing: Utilize analytics, social media, and SEO tools to optimize marketing campaigns.
9. Internet of Things (IoT)
Smart Devices: Use IoT devices to collect and analyze data from physical objects, improving operations and decision-making.
Asset Tracking: Monitor and manage assets in real-time to optimize inventory and logistics.
Blockchain
12. Human Resources Management
13. Financial Technology
Digital Payments: Facilitate online payments and transactions through services like PayPal and other payment platforms.
Together, these technologies help businesses streamline operations, reduce costs, enhance customer experiences, and gain a competitive edge in the market.