
Programming in C
U23CSTC01
Syllabus:
Computer:
A computer is a machine that can be programmed to carry out
sequences of arithmetic or logical operations (computation)
automatically.
• Modern digital electronic computers can perform generic sets of
operations known as programs. These programs enable computers to
perform a wide range of tasks.
• The term computer system may refer to a nominally complete
computer that includes the hardware, operating system, software,
and peripheral equipment needed and used for full operation; or to a
group of computers that are linked and function together, such as a
computer network or computer cluster.
• A broad range of industrial and consumer products use computers as
control systems.
• Computers power the Internet, which links billions of computers and
users.
• Early computers were meant to be used only for calculations. Simple
manual instruments like the abacus have aided people in doing
calculations since ancient times.
• The first digital electronic calculating machines were developed during
World War II, both electromechanical and using thermionic valves.
• The first semiconductor transistors in the late 1940s were followed by
the silicon-based MOSFET (MOS transistor) and monolithic integrated
circuit chip technologies in the late 1950s, leading to the
microprocessor and the microcomputer revolution in the 1970s.
• The speed, power and versatility of computers have been increasing
dramatically ever since then, with transistor counts increasing at a
rapid pace (Moore's law noted that counts doubled every two years),
leading to the Digital Revolution during the late 20th to early 21st
centuries.
• Conventionally, a modern computer consists of at least one
processing element, typically a central processing unit (CPU) in the
form of a microprocessor, together with some type of computer
memory, typically semiconductor memory chips.
• The processing element carries out arithmetic and logical operations,
and a sequencing and control unit can change the order of operations
in response to stored information.
• Peripheral devices include input devices (keyboards, mice, joystick,
etc.), output devices (monitor screens, printers, etc.), and
input/output devices that perform both functions (e.g., the 2000s-era
touchscreen).
• Peripheral devices allow information to be retrieved from an external
source and they enable the result of operations to be saved and
retrieved.
Below are the 8 Mechanical Calculators before modern computers
were invented:
• Abacus (ca. 2700 BC)
• Pascal’s Calculator (1652)
• Stepped Reckoner (1694)
• Arithmometer (1820)
• Comptometer (1887) and Comptograph (1889)
• The Difference Engine (1822)
• Analytical Engine (1834)
• The Millionaire (1893)
Basic terms:
• Vacuum tube – an electronic device that controls the flow of electrons in a vacuum. It is used as a switch, amplifier, or display screen in many older model radios, televisions, computers, etc.

• Transistor – an electronic component that can be used as an amplifier or as a switch. It is used to control the flow of electricity in radios, televisions, computers, etc.
• Integrated circuit (IC) – a small electronic circuit printed on a chip (usually made of silicon) that contains many of its own circuit elements (e.g. transistors, diodes, resistors, etc.).

• Microprocessor – an electronic component held on an integrated circuit that contains a computer’s central processing unit (CPU) and other associated circuits.

• CPU (central processing unit) – It is often referred to as the brain or engine of a computer, where most of the processing and operations take place (the CPU is part of a microprocessor).
• Magnetic drum – a cylinder coated with magnetic material, on which data
and programs can be stored.
• Magnetic core – uses arrays of small rings of magnetized
material called cores to store information.
• Machine language – a low-level programming language
comprised of a collection of binary digits (ones and zeros)
that the computer can read and understand.
• Memory – a physical device that is used to store data, information and programs in a computer.
• Artificial intelligence (AI) – an area of computer science that deals with the simulation and creation of intelligent machines or intelligent behaviour in computers (they think, learn, work, and react like humans).
Classification of generations of computers:
• The evolution of computer technology is often divided into five
generations.
Five Generations of Computers
Generations of computers    Generations timeline    Evolving hardware
First generation            1940s-1950s             Vacuum tube based
Second generation           1950s-1960s             Transistor based
Third generation            1960s-1970s             Integrated circuit based
Fourth generation           1970s-present           Microprocessor based
Fifth generation            Present and future      ULSI / artificial intelligence based
First generation of computers
(1940s-1950s):
• The 1st Generation Computers were introduced using the technology of vacuum tubes, which can control the flow of electrons in a vacuum.
• These tubes are usually used in switches, amplifiers, radios,
televisions, etc.
• First Generation Computers were very heavy and large, and were not ideal for programming.
• They used basic programming and didn’t have an operating system,
which made it tough for users to do programming on them. The 1st
Generation Computers required a big room dedicated to them and
also consumed a lot of electricity.
Cont…
• Some examples of main first-generation computers are-
• ENIAC: Electronic Numerical Integrator and Computer, built by J. Presper Eckert and John W. Mauchly, which contained 18,000 vacuum tubes.
• EDVAC: Electronic Discrete Variable Automatic Computer,
designed by Von Neumann.
• UNIVAC: Universal Automatic Computer, developed by Eckert and
Mauchly in 1952.
Cont…
• Characteristics of 1st Generation Computers
• These computers were designed using vacuum tubes.
• Programming in these computers was done using machine languages.
• The main memory of 1st Generation Computers consisted of magnetic
tapes and magnetic drums.
• Paper tapes and Punched cards were used as input/output devices in
these computers.
• These computers were very huge but worked very slowly.
• Examples of 1st Generation Computers are IBM 650, IBM 701, ENIAC,
UNIVAC1, etc....
Cont…
• Advantages:
1. It made use of vacuum tubes, which were the only electronic components available in those days.
2. These computers could calculate in milliseconds.
• Disadvantages:
1. These were very big in size; the weight was about 30 tons.
2. These computers were based on vacuum tubes.
3. These computers were very costly.
Second generation of computers (1950s-1960s):
• Main electronic component – transistor
• Programming language – assembly language
• Power and size – low power consumption, generated less heat, and
smaller in size (in comparison with the first generation computers).
• Speed – improvement of speed and reliability (in comparison with the
first generation computers).
• Input/output devices – punched cards and magnetic tape.
• Examples – IBM 1401, IBM 7090 and 7094, UNIVAC 1107, etc.
Second generation of computers
• The second generation of computers replaced vacuum tubes with transistors, making them smaller, faster, and more reliable.
• Advantages: Due to the presence of transistors instead of vacuum tubes, the size of electronic components decreased. This resulted in reducing the size of a computer as compared to first generation computers.
1. Consumed less energy and did not produce as much heat as the first generation.
2. Assembly language and punched cards were used for input.
3. Lower cost than first generation computers.
4. Better speed; could calculate data in microseconds.
Third generation of computers
1.Main electronic component – integrated circuits (ICs)
2.Programming language – high level language (FORTRAN, BASIC,
Pascal, COBOL, C, etc.)
3.Size – smaller, cheaper, and more efficient than second generation
computers (they were called minicomputers).
4.Speed – improvement of speed and reliability (in comparison with
the second generation computers).
5.Input / output devices – magnetic tape, keyboard, monitor, printer,
etc.
6.Examples – IBM 360, IBM 370, PDP-11, UNIVAC 1108, etc.
Fourth generation of computers (1970s-present):
• Main electronic component – very large-scale integration (VLSI) and
microprocessor.
• VLSI– thousands of transistors on a single microchip.
• Memory – semiconductor memory (such as RAM, ROM, etc.)
• RAM (random-access memory) – a type of data storage (memory element) used in computers that temporarily stores programs and data (volatile: its contents are lost when the computer is turned off).
• ROM (read-only memory) – a type of data storage used in computers that
permanently stores data and programs (non-volatile: its contents are
retained even when the computer is turned off).
• Programming language – high level language (Python, C#, Java,
JavaScript, Rust, Kotlin, etc.).
• A mix of both third- and fourth-generation languages
• Size – smaller, cheaper and more efficient than third generation
computers.
• Speed – improvement of speed, accuracy, and reliability (in
comparison with the third generation computers).
• Input / output devices – keyboard, pointing devices, optical
scanning, monitor, printer, etc.
• Network – a group of two or more computer systems linked
together.
• Examples – IBM PC, STAR 1000, APPLE II, Apple Macintosh, etc.
Fifth generation of computers (the present
and the future):
• Main electronic component: based on artificial intelligence, uses
the Ultra Large-Scale Integration (ULSI) technology and parallel
processing method.
• ULSI – millions of transistors on a single microchip
• Parallel processing method – use two or more microprocessors to
run tasks simultaneously.
• Language – understand natural language (human language).
• Power – consume less power and generate less heat.
• Speed – remarkable improvement of speed, accuracy and reliability (in comparison with the fourth generation computers).
• Size – portable and small in size, and have a huge storage capacity.
• Input / output devices – keyboard, monitor, mouse, trackpad (or touchpad), touchscreen, pen, speech input (recognise voice / speech), light scanner, printer, etc.
• Examples – desktops, laptops, tablets, smartphones, etc.
Based on Work

• a. Analog Computers:
• Process data that is continuous and not discrete.
• Used for tasks such as measuring and controlling physical quantities like
voltage, pressure, etc.
• b. Digital Computers:
• Work with discrete data.
• Use binary digits (bits) to process data.
• c. Hybrid Computers:
• Combine features of both analog and digital computers.
• Can process both continuous and discrete data.
Based on Size

• a. Supercomputers:
• The most powerful in terms of processing capacity.
• Used for complex scientific calculations, weather forecasting, molecular modeling, etc.
• b. Mainframe Computers:
• Large and powerful machines capable of handling and processing huge amounts of data rapidly.
• Often used by large organizations for bulk data processing like census, industry and consumer
statistics, etc.
• c. Minicomputers (Midrange Computers):
• Smaller and less powerful than mainframes.
• Serve as network servers and internet servers.
• d. Microcomputers (Personal Computers):
• Widely used by individuals and businesses.
• Includes desktop computers, laptops, tablets, and smartphones.
Based on Purpose

• a. General-purpose Computers:
• Designed to perform a range of tasks.
• Can run various types of software applications.
• b. Special-purpose Computers:
• Built for a specific task.
• Often found in industrial machines, medical equipment, and
scientific instruments.
Block Diagram of Computer:
Data → Input Unit (Keyboard/Mouse, etc.) → Central Processing Unit (CPU) → Output Unit (Monitor, Speaker, Printer).
Steps Involved:
• Step 1: Input devices allow the users to provide data and commands to the computer. The data
inserted manually is collected by input devices like keyboard, mouse, scanners, and others. These
devices generate electrical signals or data packets representing the input.
• Step 2: The data generated by input devices is sent to the computer’s input interface/Memory
Unit which processes and formats the data for further use by the computer.
• Step 3: The processed input data is then sent to the computer’s Central Processing Unit (CPU)
which temporarily stores this data in memory (RAM) for immediate processing. The CPU executes
instructions related to the input data.
• For example, if you’re typing a document, the Central Processing Unit (CPU) processes the
keystrokes and stores them in memory. The control unit schedules all the activities for the smooth
working of the computer.
• Step 4: After processing, the CPU sends the results or instructions to the computer’s output
interface where the data is formatted for transmission to the output devices.
• Step 5: Then the output unit receives the final processed output. Output devices such as monitors,
printers, speakers, and others receive the formatted data. Monitors display visual information,
printers produce hard copies, and speakers play audio, based on the data they receive.
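The same input → process → output flow can be sketched in a few lines of C (a minimal illustration; the doubling step is just an arbitrary example of processing):

#include <stdio.h>

int main(void)
{
    int value;   /* data held in the memory unit (RAM) while it is processed */

    /* Input unit: the keyboard delivers data to the program */
    printf("Enter a number: ");
    if (scanf("%d", &value) != 1)
        return 1;                      /* stop if no valid number was entered */

    /* CPU (ALU): process the data - here, simply double it */
    int doubled = value * 2;

    /* Output unit: the monitor displays the processed result */
    printf("Twice %d is %d\n", value, doubled);
    return 0;
}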
Input Unit:
• The input unit takes all the data received by the computer. The input unit
comprises different devices such as a mouse, keyboard, scanner, etc. All of these
devices act as intermediaries between the users and the computer.
• The input unit takes the data that has to be processed.
• The raw data is accepted by the computer in binary form. This data is then
processed and the desired output is produced.

The major functions of the Input Unit are-


• The Input Unit takes the data to be processed by the user.
• The data is then converted into machine-readable form.
• The Input Unit then transmits the converted data into the main memory of the
computer.
• The main purpose of this process is to connect the user and the computer by
creating an easy connection between them.
Central Processing Unit (CPU):

• The Central Processing Unit or CPU is known as the brain of the computer. Just like the
human brain controls all human activities, the CPU also takes care of all the tasks.
• The CPU is responsible for performing all the arithmetic and logical operations within the
computer.
• All the major calculations, operations, and comparisons are performed inside the CPU.

• Some of the main functions of a CPU are-


• All the components of a computer system, software, and data processing are controlled by
the CPU.
• The Input devices provide data to the CPU, which processes it, and then the CPU sends the output to the Output devices.
• All the operations including the arithmetical and logical are processed by the CPU.
• The CPU comprises two parts- ALU (Arithmetic Logic Unit) and CU (Control Unit). These
units work in sync to help the CPU process the whole data. Let us know about these
components.
Arithmetic Logic Unit (ALU):
• The Arithmetic Logic Unit is comprised of two terms- arithmetic and
logic.
The two primary functions that the ALU performs are-
• Data is entered into the primary memory via the input unit. Then, the
ALU carries out essential arithmetic operations on this data, including
addition, subtraction, multiplication, and division. After performing all
sorts of calculations required on the data, it sends back data to the
storage.
• The ALU also performs logical operations such as AND, OR, Equal to,
Less than, etc. In addition, it also handles tasks like merging, sorting,
and selecting the given data.
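These ALU operations map directly onto C’s arithmetic, comparison, and logical operators; a minimal sketch (the values 12 and 5 are arbitrary):

#include <stdio.h>

int main(void)
{
    int x = 12, y = 5;   /* arbitrary example values */

    /* Arithmetic operations (performed by the ALU) */
    printf("x + y = %d\n", x + y);    /* addition: 17 */
    printf("x - y = %d\n", x - y);    /* subtraction: 7 */
    printf("x * y = %d\n", x * y);    /* multiplication: 60 */
    printf("x / y = %d\n", x / y);    /* integer division: 2 */

    /* Logical and comparison operations */
    printf("x == y: %d\n", x == y);                   /* equal to: 0 (false) */
    printf("x <  y: %d\n", x < y);                    /* less than: 0 (false) */
    printf("x > 0 && y > 0: %d\n", x > 0 && y > 0);   /* AND: 1 (true) */
    printf("x < 0 || y > 0: %d\n", x < 0 || y > 0);   /* OR: 1 (true) */

    return 0;
}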
Control Unit (CU):
• The Control Unit (CU) is the controller of all the activities, tasks, and
operations. All these operations are performed inside the computer.
• The memory unit sends a set of instructions to the control unit.
• The CU then converts these instructions into control signals.
• The purpose of these control signals is to help in prioritizing and
scheduling activities. So, the control unit ensures that all tasks inside
the computer work together smoothly, coordinating with the input
and output units.
Memory Unit:
• The Memory Unit stores all the data that has to be processed or has been
processed. The memory unit serves as a central hub for all the data.
• This data is then transmitted to the required part of the computer whenever
necessary.
• This unit works in sync with the Central Processing Unit to help in faster accessing
and processing of the data. This results in making the tasks easier and quicker.
Computer Memory is of two types-
• Primary memory: The primary memory cannot store a vast amount of data.
Hence, it is only used to store recent data which is temporary.
• Once the power is switched off, the data stored can be erased. Hence it is also
called temporary memory or main memory.
• An example of primary memory is Random Access Memory (RAM). This memory
is directly accessible by the CPU and is used for reading and writing purposes. The
data has to be first transferred to the RAM and then to the CPU for processing.
• Secondary memory: Since the primary memory stores temporary
data it cannot be accessed in the future. So, for permanent storage
purposes, secondary memory is used. It is also known as permanent
memory or auxiliary memory.
• An example of secondary memory is the hard disk. The data does not
get erased easily even in case of a power failure
Output Unit:
• Once the information sent to the computer is processed, the user receives the results
through the output unit. Examples of output units are devices such as printers,
monitors, projectors, etc.
• The output unit presents the data either as a soft copy (on the screen) or as a hard
copy (on paper).
• The printer is for the hard copy.
• The monitor is for the display.
• The output unit receives data in binary form from the computer and converts it into a
readable format for the user.

The Output Units perform these functions-


• The Output Unit accepts all the data and information from the main memory of a
computer system in binary form.
• The Output Unit also converts the binary data into a human-readable form for a better
understanding.
Software:
• Software is a collection of instructions, data, or computer programs that are used to run machines and carry out particular activities.
• In a computer system, software is basically a set of instructions or commands that tell a computer what to do. In other words, software is a computer program that provides a set of instructions to execute a user’s commands and tell the computer what to do, for example MS-Word, MS-Excel, PowerPoint, etc.
System Software:
• Operating System
• Language Processor
• Device Driver

Application Software:
• General Purpose Software
• Customize Software
• Utility Software
System Software:
• System software is software that directly operates the computer
hardware and provides the basic functionality to the users as well as
to the other software to operate smoothly.
• Or in other words, system software basically controls a computer’s
internal functioning and also controls hardware devices such as
monitors, printers, and storage devices, etc.
• It is like an interface between hardware and user applications; it helps them to communicate with each other because hardware understands machine language (i.e. 1s and 0s), whereas user applications work in human-readable languages like English, Hindi, German, etc. So system software converts the human-readable language into machine language and vice versa.
Types of System Software:
It has the following subtypes:
• Operating System: It is the main program of a computer system. When the computer system is turned ON, it is the first software that loads into the computer’s memory. Basically, it manages all the resources such as computer memory, CPU, printer, hard disk, etc., and provides an interface to the user, which helps the user to interact with the computer system.
• It also provides various services to other computer software.
Examples of operating systems are Linux, Apple macOS, Microsoft
Windows, etc.
Language Processor:
• As we know that system software converts the human-readable
language into a machine language and vice versa. So, the conversion
is done by the language processor.
• It converts programs written in high-level programming languages like
Java, C, C++, Python, etc (known as source code), into sets of
instructions that are easily readable by machines (known as object
code or machine code).
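For example, a compiler such as GCC is one kind of language processor: it translates a C source file (here, a hypothetical hello.c) into machine code. A small sketch:

/* hello.c - a small C source file written in a high-level language */
#include <stdio.h>

int main(void)
{
    printf("Hello, world\n");   /* this call is translated into machine instructions */
    return 0;
}

/*
 * The compiler converts this source code into object/machine code.
 * On many systems a typical invocation is:
 *     gcc hello.c -o hello
 * The resulting file "hello" contains instructions the CPU can execute directly.
 */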
Device Driver:
• A device driver is a program or software that controls a device and helps that device to perform its functions. Every device like a printer, mouse, modem, etc. needs a driver to connect with the computer system.
• So, when you connect a new device with your computer system, first
you need to install the driver of that device so that your operating
system knows how to control or manage that device.
Features of System Software:
• System Software is closer to the computer system.
• System Software is written in a low-level language in general.
• System software is difficult to design and understand.
• System software is fast in speed(working speed).
• System software is less interactive for the users in comparison to
application software.
Application Software:
• Software that performs special functions or provides functions that
are much more than the basic operation of the computer is known as
application software.
• Or in other words, application software is designed to perform a
specific task for end-users. It is a product or a program that is
designed only to fulfill end-users’ requirements.
• It includes word processors, spreadsheets, database management,
inventory, payroll programs, etc.
Types of Application Software:
There are different types of application software and those are:
• General Purpose Software: This type of application software is used for a variety of tasks and is not limited to performing a specific task only. For example, MS-Word, MS-Excel, PowerPoint, etc.
• Customized Software: This type of application software is used or
designed to perform specific tasks or functions or designed for
specific organizations. For example, railway reservation system, airline
reservation system, invoice management system, etc.
• Utility Software: This type of application software is used to support the computer infrastructure. It is designed to analyze, configure, optimize and maintain the system, and take care of its requirements as well. For example, antivirus, disk defragmenter, memory tester, disk repair, disk cleaners, registry cleaners, disk space analyzer, etc.
Features of Application Software:
• An important feature of application software is that it performs more specialized tasks like word processing, spreadsheets, email, etc.
• Mostly, the size of the software is big, so it requires more storage
space.
• Application software is more interactive for the users, so it is easy to
use and design.
• The application software is easy to design and understand.
• Application software is written in a high-level language in general.
System Software vs Application Software:

• System software is designed to manage the resources of the computer system, like memory and process management, etc.; application software is designed to fulfill the requirements of the user for performing specific tasks.
• System software is written in a low-level language; application software is written in a high-level language.
• System software is less interactive for the users; application software is more interactive for the users.
• System software plays a vital role in the effective functioning of a system; application software is not so important for the functioning of the system, as it is task specific.
• System software is independent of application software to run; application software needs system software to run.
Network Structure:
• A computer network is a system that connects two or more
computing devices for transmitting and sharing information.
Computing devices include everything from a mobile phone to a
server.
• These devices are connected using physical wires such as fiber optics,
but they can also be wireless.
• The first working network, called ARPANET, was created in the late
1960s and was funded by the U.S. Department of Defense.
• An example of a computer network at large is the traffic monitoring
systems in urban cities. These systems alert officials and emergency
responders with information about traffic flow and incidents.
How Does a Computer Network
Work?
• The basic building blocks of a computer network are nodes and links.
• A Network Node can be data communication equipment such as a modem or router, or data terminal equipment such as two or more connected computers.
• A Link in computer networks can be defined as wires or cables, or the free space of wireless networks.
• The working of computer networks can be simply defined as rules or protocols which help in sending and receiving data via the links, which allow computer networks to communicate.
• Each device has an IP Address, which helps in identifying the device.
• Network: A network is a collection of computers and devices that are connected
together to enable communication and data exchange.
• Nodes: Nodes are devices that are connected to a network. These can include
computers, Servers, Printers, Routers, Switches, and other devices.
• Protocol: A protocol is a set of rules and standards that govern how data is transmitted
over a network. Examples of protocols include TCP/IP, HTTP, and FTP.
• Topology: Network topology refers to the physical and logical arrangement of nodes on
a network. The common network topologies include bus, star, ring, mesh, and tree.
• Service Provider Networks: These networks allow customers to lease network capacity and functionality from the provider. Service Provider Networks include Wireless Communications, Data Carriers, etc.
• IP Address: An IP address is a unique numerical identifier that is assigned to every
device on a network. IP addresses are used to identify devices and enable
communication between them.
• DNS: The Domain Name System (DNS) is a protocol that is used to translate human-
readable domain names (such as www.google.com) into IP addresses that computers
can understand.
• Firewall: A firewall is a security device that is used to monitor and control
incoming and outgoing network traffic. Firewalls are used to protect
networks from unauthorized access and other security threats.
Types of network:
LAN: A Local Area Network (LAN) is a network that covers a small area, such as
an office or a home. LANs are typically used to connect computers and other
devices within a building or a campus.
WAN: A Wide Area Network (WAN) is a network that covers a large geographic
area, such as a city, country, or even the entire world. WANs are used to
connect LANs together and are typically used for long-distance
communication.
Cloud Networks: Cloud networks can be viewed as a kind of Wide Area Network (WAN): they are hosted by public or private cloud service providers and are available on demand. Cloud Networks consist of Virtual Routers, Firewalls, etc.
Network Devices:
An interconnection of multiple devices, also known as
hosts, that are connected using multiple paths for the
purpose of sending/receiving data or media.
Network Topology:
• The Network Topology is the layout arrangement of the different
devices in a network. Common examples include Bus, Star, Mesh,
Ring, and Daisy chain.
Algorithm:
• The word Algorithm means ” A set of finite rules or instructions to be
followed in calculations or other problem-solving operations ”
Or
• ” A procedure for solving a mathematical problem in a finite number
of steps that frequently involves recursive operations”.
Use of algorithm:
• Computer Science: Algorithms form the basis of computer
programming and are used to solve problems ranging from simple
sorting and searching to complex tasks such as artificial intelligence
and machine learning.
• Mathematics: Algorithms are used to solve mathematical problems,
such as finding the optimal solution to a system of linear equations or
finding the shortest path in a graph.
• Operations Research: Algorithms are used to optimize and make
decisions in fields such as transportation, logistics, and resource
allocation.
• Artificial Intelligence: Algorithms are the foundation of artificial
intelligence and machine learning, and are used to develop intelligent
systems that can perform tasks such as image recognition, natural
language processing, and decision-making.
• Data Science: Algorithms are used to analyze, process, and extract
insights from large amounts of data in fields such as marketing,
finance, and healthcare.
Need for algorithm:
• Algorithms are necessary for solving complex problems efficiently and
effectively.
• They help to automate processes and make them more reliable,
faster, and easier to perform.
• Algorithms also enable computers to perform tasks that would be
difficult or impossible for humans to do manually.
• They are used in various fields such as mathematics, computer
science, engineering, finance, and many others to optimize processes,
analyze data, make predictions, and provide solutions to problems.
Characteristics of an Algorithm:
• Clear and Unambiguous: The algorithm should be unambiguous. Each of its
steps should be clear in all aspects and must lead to only one meaning.
• Well-Defined Inputs: If an algorithm says to take inputs, those inputs should be well-defined. It may or may not take input.
• Well-Defined Outputs: The algorithm must clearly define what output will be
yielded and it should be well-defined as well. It should produce at least 1
output.
• Finite-ness: The algorithm must be finite, i.e. it should terminate after a finite
time.
• Feasible: The algorithm must be simple, generic, and practical, such that it can be executed with the available resources. It must not depend on some future technology.
• Language Independent: The Algorithm designed must be language-
independent, i.e. it must be just plain instructions that can be implemented in
any language, and yet the output will be the same, as expected.
• Input: An algorithm has zero or more inputs. Each instruction that contains a fundamental operator must accept zero or more inputs.
• Output: An algorithm produces at least one output. Every instruction that contains a fundamental operator must produce at least one output.
• Definiteness: All instructions in an algorithm must be unambiguous, precise,
and easy to interpret. By referring to any of the instructions in an algorithm
one can clearly understand what is to be done. Every fundamental operator
in instruction must be defined without any ambiguity.
• Finiteness: An algorithm must terminate after a finite number of steps in all
test cases. Every instruction which contains a fundamental operator must
be terminated within a finite amount of time. Infinite loops or recursive
functions without base conditions do not possess finiteness.
• Effectiveness: An algorithm must be developed by using very basic, simple,
and feasible operations so that one can trace it out by using just paper and
pencil.
Properties of Algorithm:
• It should terminate after a finite time.
• It should produce at least one output.
• It should take zero or more input.
• It should be deterministic means giving the same output for the same
input case.
• Every step in the algorithm must be effective i.e. every step should do
some work.
Types of Algorithms:
Brute Force Algorithm:
• It is the simplest approach to a problem. A brute force algorithm is the first approach that comes to mind when we see a problem.

Recursive Algorithm:
• A recursive algorithm is based on recursion. In this case, a problem is
broken into several sub-parts and called the same function again and
again.
Backtracking Algorithm:
• The backtracking algorithm builds the solution by searching among all possible solutions. Using this algorithm, we keep on building the solution following the given criteria. Whenever a solution fails, we trace back to the failure point, build the next solution, and continue this process till we find the solution or all possible solutions have been explored.

Searching Algorithm:
• Searching algorithms are the ones that are used for searching
elements or groups of elements from a particular data structure. They
can be of different types based on their approach or the data
structure in which the element should be found.
Sorting Algorithm:
• Sorting is arranging a group of data in a particular manner according
to the requirement. The algorithms which help in performing this
function are called sorting algorithms. Generally sorting algorithms
are used to sort groups of data in an increasing or decreasing manner.

Hashing Algorithm:
• Hashing algorithms work similarly to the searching algorithm. But
they contain an index with a key ID. In hashing, a key is assigned to
specific data.
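To make one of these categories concrete, here is a minimal sketch of a brute-force searching algorithm (linear search) in C; the array contents and the key are only illustrative:

#include <stdio.h>

/* Brute-force (linear) search: examine every element until the key is found.
 * Returns the index of key in arr[0..n-1], or -1 if it is not present. */
int linear_search(const int arr[], int n, int key)
{
    for (int i = 0; i < n; i++) {
        if (arr[i] == key)
            return i;
    }
    return -1;
}

int main(void)
{
    int data[] = { 12, 7, 25, 3, 19 };      /* illustrative values */
    int n = sizeof data / sizeof data[0];
    int key = 25;

    int pos = linear_search(data, n, key);
    if (pos >= 0)
        printf("%d found at index %d\n", key, pos);
    else
        printf("%d not found\n", key);
    return 0;
}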
How to Design an Algorithm?
• The problem that is to be solved by this algorithm i.e. clear problem
definition.
• The constraints of the problem must be considered while solving the
problem.
• The input to be taken to solve the problem.
• The output is to be expected when the problem is solved.
• The solution to this problem is within the given constraints.
Algorithm to add 3 numbers and print their sum:
• START
• Declare 3 integer variables num1, num2, and num3.
• Take the three numbers, to be added, as inputs in variables num1,
num2, and num3 respectively.
• Declare an integer variable sum to store the resultant sum of the 3
numbers.
• Add the 3 numbers and store the result in the variable sum.
• Print the value of the variable sum
• END
// C program to add three numbers
// with the help of the above designed algorithm
#include <stdio.h>

int main()
{
    // Variables to take the input of the 3 numbers
    int num1, num2, num3;

    // Variable to store the resultant sum
    int sum;

    // Take the 3 numbers as input
    printf("Enter the 1st number: ");
    scanf("%d", &num1);
    printf("%d\n", num1);

    printf("Enter the 2nd number: ");
    scanf("%d", &num2);
    printf("%d\n", num2);

    printf("Enter the 3rd number: ");
    scanf("%d", &num3);
    printf("%d\n", num3);

    // Calculate the sum using + operator
    // and store it in variable sum
    sum = num1 + num2 + num3;

    // Print the sum
    printf("\nSum of the 3 numbers is: %d", sum);

    return 0;
}
Flow chart:
A flowchart is a type of diagram that represents a
workflow or process. A flowchart can also be defined as a
diagrammatic representation of an algorithm, a step-by-
step approach to solving a task.
Flow chart symbol:
Uses of Flowcharts:

• It is a pictorial representation of an algorithm that increases the readability of the program.
• Complex programs can be drawn in a simple way using a flowchart.
• It helps team members get an insight into the process and use this
knowledge to collect data, detect problems, develop software, etc.
• A flowchart is a basic step for designing a new process or adding extra
features.
• Communication with other people becomes easy by drawing
flowcharts and sharing them.
When to Use Flowchart?
• It is most importantly used when programmers make
projects. As a flowchart is a basic step to make the design of
projects pictorially, it is preferred by many.
• When the flowcharts of a process are drawn, the programmer
understands the non-useful parts of the process. So
flowcharts are used to separate sound logic from the
unwanted parts.
• Since the rules and procedures of drawing a flowchart are
universal, a flowchart serves as a communication channel to
the people who are working on the same project for better
understanding.
• Optimizing a process becomes easier with flowcharts. The
efficiency of the code is improved with the flowchart drawing.
Types of Flowcharts:

• Process flowchart: This type of flowchart shows all the activities that
are involved in making a product. It provides a pathway to analyze the
product to be built. A process flowchart is most commonly used in
process engineering to illustrate the relation between the major as well
as minor components present in the product. It is used in business product modeling to help employees understand the project requirements and gain some insight into the project.
• Data flowchart: As the name suggests, the data flowchart is used to
analyze the data, specifically it helps in analyzing the structural details
related to the project. Using this flowchart, one can easily understand
the data inflow and outflow from the system. It is most commonly used
to manage data or to analyze information to and fro from the system.
• Business Process Modeling Diagram: Using this flowchart or diagram,
one can analytically represent the business process and help simplify
the concepts needed to understand business activities and the flow of
information. This flowchart illustrates the business process and
models graphically which paves the way for process improvement.
Types of boxes used to make a flowchart:

• Terminal

• This box is of an oval shape which is used to indicate the start or end
of the program. Every flowchart diagram has an oval shape that
depicts the start of an algorithm and another oval shape that depicts
the end of an algorithm.
• Data

• This is a parallelogram-shaped box inside which the inputs or outputs are written. This basically depicts the information that is entering the system or algorithm and the information that is leaving the system or algorithm. For example: if the user wants to input a value from the user and display it, the flowchart for this would be:
Process: This is a rectangular box inside which a programmer writes the main course of action of the algorithm or the main logic of the program. This is the crux of the flowchart, as the main processing code is written inside this box. For example: if the programmer wants to add 1 to the input given by the user, he/she would make the following flowchart:
• On-Page Reference: This circular figure is used to depict that the
flowchart is in continuation with the further steps. This figure comes
into use when the space is less and the flowchart is long. Any
numerical symbol is present inside this circle and that same numerical
symbol will be depicted before the continuation to make the user
understand the continuation. Below is a simple example depicting the
use of On-Page Reference
Draw a flowchart to find the greatest number
among the 2 numbers.
Algorithm:
• Start
• Input 2 variables from user
• Now check the condition If a > b, goto step 4, else goto step 5.
• Print a is greater, goto step 6
• Print b is greater
• Stop
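A C version of this algorithm might look as follows (a minimal sketch; the variable names a and b follow the steps above):

#include <stdio.h>

int main(void)
{
    int a, b;

    /* Step 2: input 2 variables from the user */
    printf("Enter two numbers: ");
    if (scanf("%d %d", &a, &b) != 2)
        return 1;

    /* Steps 3-5: compare and print which one is greater */
    if (a > b)
        printf("a is greater\n");
    else
        printf("b is greater\n");

    return 0;   /* Step 6: stop */
}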
Task:
• Draw a flowchart to check whether the input number is odd or even.
• Draw a flowchart to print the input number 5 times.
• Draw a flowchart to print numbers from 1 to 10.
• Draw a flowchart to print the first 5 multiples of 3.
• Algorithm:
• 1. Start
• 2. Put input a
• 3. Now check the condition: if a % 2 == 0, goto step 5, else goto step 4.
• 4. Now print(“number is odd”) and goto step 6.
• 5. Print(“number is even”)
• 6. Stop
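A C sketch of this odd/even algorithm (the step numbers above are noted in comments):

#include <stdio.h>

int main(void)
{
    int a;

    /* Step 2: put input a */
    printf("Enter a number: ");
    if (scanf("%d", &a) != 1)
        return 1;

    /* Step 3: check the condition a % 2 == 0 */
    if (a % 2 == 0)
        printf("number is even\n");   /* step 5 */
    else
        printf("number is odd\n");    /* step 4 */

    return 0;   /* step 6: stop */
}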
• Algorithm:
• 1. Start
• 2. Input number a
• 3. Now initialise c = 1
• 4. Now we check the condition: if c <= 5, goto step 5, else goto step 7.
• 5. Print a
• 6. c = c + 1 and goto step 4
• 7. Stop
• Algorithm:
• 1. Start
• 2. Now initialise c = 1
• 3. Now we check the condition: if c < 11, then goto step 4, otherwise goto step 6.
• 4. Print c
• 5. c = c + 1, then goto step 3
• 6. Stop
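The remaining counting tasks follow the same loop pattern; the sketch below implements the two algorithms above in C (print an input number 5 times, then print the numbers from 1 to 10):

#include <stdio.h>

int main(void)
{
    int a;

    /* Print the input number 5 times (counter c runs from 1 to 5) */
    printf("Enter a number: ");
    if (scanf("%d", &a) != 1)
        return 1;
    for (int c = 1; c <= 5; c++)
        printf("%d\n", a);

    /* Print the numbers from 1 to 10 (counter c runs while c < 11) */
    for (int c = 1; c < 11; c++)
        printf("%d\n", c);

    return 0;
}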
Number system:
Binary System: The binary system is the system of writing numbers using only two digits, 0 and 1. The base of the binary number system is 2.

Decimal System: The decimal number system is the number system that we use in our daily lives. The base of decimal numbers is 10 and it uses 10 digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9.

In the number system, we may have learned about different types of numbers such as:
• Binary Numbers – Base 2
• Octal Numbers – Base 8
• Decimal Numbers – Base 10
• Hexadecimal Numbers – Base 16
• Convert (101010)2 = (?)10
• Convert (11100)2 = (?)10
• Convert (111)2 to Decimal
• Convert (10110)2 to Decimal
• Convert (10101101)2 to Decimal
• Convert (11000)2 to Decimal
• Convert (10111)2 to Decimal
• Convert (111110000)2 to Decimal
• Convert (00011)2 to Decimal
• Convert (110011)2 to Decimal
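One quick way to check such conversions is the standard library function strtol with base 2, as in this small sketch (the string used is the first exercise above):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* (101010)2 from the first exercise; strtol with base 2 parses binary text */
    const char *binary = "101010";
    long decimal = strtol(binary, NULL, 2);

    printf("(%s)2 = (%ld)10\n", binary, decimal);   /* prints (101010)2 = (42)10 */
    return 0;
}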
• Convert (17)10 into a binary number: (17)10 = (10001)2

Divide by 2    Result    Remainder    Binary Value
17 ÷ 2         8         1            1 (LSB)
8 ÷ 2          4         0            0
4 ÷ 2          2         0            0
2 ÷ 2          1         0            0
1 ÷ 2          0         1            1 (MSB)
• Convert (160)10 to a binary number.
• Convert (244)10 to its equivalent binary number.
• Convert (76)10 to a binary number.
• What is the binary equivalent of the decimal number (891)10?
• Convert (57)10 into a binary number.
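The repeated division by 2 from the table can be written directly in C; this sketch stores the remainders and prints them from MSB to LSB (the value 17 from the worked example is assumed):

#include <stdio.h>

int main(void)
{
    int n = 17;          /* the decimal value from the worked example */
    int bits[32];
    int count = 0;

    /* Repeatedly divide by 2; each remainder is one binary digit (LSB first) */
    while (n > 0) {
        bits[count++] = n % 2;
        n = n / 2;
    }

    /* Print the remainders in reverse order: MSB first */
    printf("(17)10 = (");
    for (int i = count - 1; i >= 0; i--)
        printf("%d", bits[i]);
    printf(")2\n");      /* prints (17)10 = (10001)2 */

    return 0;
}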
Octal Number System: The octal number system, also
known as oct, is a base-8 number system. The system
uses digits from 0 to 7.

Hexadecimal Number System: A hexadecimal number


system is a number system in which the base value is 16.
This means that there are 16 symbols used in the
hexadecimal system. The hexadecimal symbols are 0, 1,
2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F
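In C, these bases appear directly as printf conversion specifiers; the sketch below prints the same (arbitrary) value 255 in decimal, octal, and hexadecimal:

#include <stdio.h>

int main(void)
{
    int value = 255;    /* example value */

    printf("Decimal:     %d\n", value);   /* base 10: 255 */
    printf("Octal:       %o\n", value);   /* base 8:  377 */
    printf("Hexadecimal: %X\n", value);   /* base 16: FF  */

    return 0;
}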
