Unit 1 - Computer and Programming Fundamentals-5

The document covers computer and programming fundamentals, including the evolution of computers, programming concepts, and the anatomy of computer systems. It discusses the classification of programming languages, types of operating systems, and various hardware and software components. Additionally, it introduces key programming concepts in C, control structures, functions, memory management, and file handling, along with the importance of debugging and best practices.

Uploaded by

ninojustin1234
Copyright
© All Rights Reserved

UNIT 1 - COMPUTER AND

PROGRAMMING FUNDAMENTALS
Computer fundamentals – Evolution, classification, Anatomy
of a computer: CPU, Memory, I/O –Introduction to software –
Classification of programming languages – Compiling –Linking
and loading a program – Introduction to OS – Types of OS.
### 1. **Introduction to Programming Concepts**
- **Explain the Basics**: Start by explaining what programming
is and why it's important. Introduce concepts like algorithms,
logic, and problem-solving.
- **Introduction to C**: Discuss why C is a foundational
language, its history, and where it's used today.

### 2. **Setting Up the Environment**


- **IDE/Compiler Setup**: Guide students on how to set up an
IDE or a simple text editor and compiler, such as GCC, to write
and run C programs.
- **Hello, World!**: Start with the traditional "Hello, World!"
program to demonstrate the structure of a C program.
### 3. **Understanding the Basics**
- **Syntax and Semantics**: Teach basic syntax (data types, variables,
operators) and the importance of correct syntax in programming.
- **Input and Output**: Introduce `printf()` and `scanf()` for basic input
and output.

### 4. **Control Structures**


- **Conditional Statements**: Explain `if`, `else if`, and `else` for
decision-making.
- **Loops**: Teach `for`, `while`, and `do-while` loops for repeated
execution of code.

### 5. **Functions**
- **Defining and Using Functions**: Explain what functions are, how to
define them, and how to call them. Stress the importance of modular
code.
- **Function Arguments and Return Values**: Discuss passing
parameters to functions and returning values.
### 6. **Arrays and Strings**
- **Introduction to Arrays**: Teach how to store and manipulate
sequences of data using arrays.
- **Basic String Manipulation**: Introduce character arrays and some
basic string functions.

### 7. **Pointers**
- **Understanding Pointers**: Explain what pointers are and their
significance in C programming.
- **Pointer Operations**: Teach basic pointer operations, pointer
arithmetic, and how to use pointers with arrays and functions.

### 8. **Memory Management**


- **Dynamic Memory Allocation**: Introduce `malloc()`, `calloc()`,
`realloc()`, and `free()` for dynamic memory management.
- **Memory Leaks**: Discuss the importance of freeing allocated
memory.
### 9. **Structures**
- **Defining and Using Structures**: Teach how to define
structures for grouping different data types and how to access
structure members.
- **Structures and Functions**: Show how to pass structures
to functions.

### 10. **File Handling**


- **Basic File Operations**: Teach how to open, read, write,
and close files in C using `fopen()`, `fread()`, `fwrite()`, and
`fclose()`.

### 11. **Debugging and Best Practices**


- **Error Handling**: Introduce common errors and
debugging techniques, such as using `gdb`.
- **Best Practices**: Discuss the importance of writing clean,
readable, and well-documented code.
### 12. **Projects and Practice**
- **Mini Projects**: Encourage students to work on
simple projects like a calculator, a tic-tac-toe game, or
a basic file management system.
- **Code Reviews**: Have students review each
other’s code to learn from mistakes and different
approaches.
Evolution of Computers:

Computing falls into two broad eras: computing in the mechanical era and computing in the electrical era.

I. COMPUTING IN THE MECHANICAL ERA:

1. **Origins of Calculating Machines**:


- The first mechanical calculating apparatus was the **abacus**, invented
around **500 BC in Babylon**.
- The abacus was used extensively without major improvements until
**1642**.

2. **Blaise Pascal's Contribution**:


- In **1642**, **Blaise Pascal** designed a calculator that used gears and
wheels, marking a significant advancement in mechanical calculating devices.
3. **Mechanical Computing Calculators**:
- Practical geared mechanical computing calculators became
available in the **early 1800s**.
- These machines could perform calculations but were not
programmable.

4. **Charles Babbage and the Analytical Engine**:


- In **1823**, **Charles Babbage** began work on a mechanical calculating machine for the British government; he later designed a programmable successor, the **Analytical Engine**, whose operation was documented by **Augusta Ada Byron (Countess of Lovelace)**.
- **Input** for the Analytical Engine was provided through **punched cards**.
- The engine was designed to store **1,000 twenty-digit decimal numbers**
and run a modifiable program, enabling it to execute different
computing tasks.
II. Computing in the electrical era:
Early Mechanical Calculators:
1889: Herman Hollerith developed a mechanical machine
driven by a single electric motor to process punched cards.
1896: Hollerith formed the Tabulating Machine Company,
later merging into IBM.
Development of Electronic Computers:
1941: Konrad Zuse completed the Z3, a programmable calculating
computer built from electromechanical relays.
1943: Colossus, built at Bletchley Park by Tommy Flowers, is often
considered the first electronic computer, though it was programmable
only for special-purpose code-breaking tasks.
1946: ENIAC, the first general-purpose electronic digital
computer, was completed by J.W. Mauchly and S.P. Eckert.
Advancements in Computing Hardware:
1950s: Introduction of computers using vacuum tubes by companies like Sperry-Rand and
IBM.
1948: Transistor invented at Bell Labs.
1958: Development of transistor-based general-purpose computers.
1958: Integrated circuit invented by Jack Kilby at Texas Instruments, leading to the IBM
360/370 and other models.
1971: Intel announced the single-chip microprocessor 4004.
1970s: Intel introduced the 8080, 8085, 8086, and 8088 microprocessors.
1981: IBM PC, using the 8088 microprocessor, became widely popular.
1983-1989: Introduction of 16-bit and 32-bit microprocessors (80286, 80386, 80486).
1993: Pentium microprocessor introduced, enhancing personal computers' capabilities.

Evolution of Portable Computers:


Development of laptops and palmtops with capabilities far surpassing earlier computers.
Ongoing efforts to integrate palmtops with mobile phones.

Development of Programming Languages and Operating Systems:


1950s: Assembly language developed for UNIVAC computers.
1957: IBM developed FORTRAN.
Subsequent languages: ALGOL, COBOL, BASIC, Pascal, C/C++, Ada, Java.
Operating Systems:
UNIX for large and mini-computers.
MS-DOS and MS-Windows for personal computers.
Growing trend toward using Linux as an operating system.
VON NEUMANN ARCHITECTURE:
John von Neumann's architecture is the foundational design
concept for most modern computers. This architecture is based
on the idea that a computer's instructions (the program) and
the data it processes are stored in the same memory space. This
allows the computer to be more flexible and capable of
executing complex programs.
Concepts of von Neumann Architecture:
•Stored Program Concept: Both the program instructions and the data are
stored in the same memory.
•Central Processing Unit (CPU): The CPU performs operations by fetching
instructions from memory, decoding them, and executing them.
•Sequential Execution: Instructions are processed in a sequential manner, one
after the other.
CLASSIFICATION OF COMPUTERS

•Supercomputer
•Mainframe
•Minicomputers
•Microcomputers: desktop computer, laptop computer, palmtop
computer/digital diary/notebook/PDAs
Based on Power:
•Supercomputers: Extremely powerful machines used for complex computations, such
as weather forecasting, scientific simulations, and cryptography.
•Mainframes: Large, powerful systems used by large organizations for bulk data
processing, like census, industry, and enterprise resource planning.
•Minicomputers: Mid-sized systems that are less powerful than mainframes but still
capable of handling multiple users simultaneously. Commonly used in manufacturing
and research.
•Microcomputers: Personal computers (PCs) and laptops that are widely used for
everyday tasks like word processing, internet browsing, and gaming.
Based on Usage:
•Personal Computers (PCs): Used by individuals for general tasks like word processing,
gaming, and internet browsing.
•Workstations: High-performance computers used for tasks requiring greater
computational power, such as graphic design, engineering simulations, and scientific
calculations.
•Servers: Computers that manage network resources and provide services to other
computers in a network. They are designed to handle multiple tasks and requests
simultaneously.
Based on Size:
•Desktops: Computers designed to stay in one place, typically on a
desk, and used for a wide range of tasks.
•Laptops: Portable computers that offer similar capabilities to
desktops but in a compact, portable form.
•Tablets/Palmtops: Smaller, handheld devices that are highly
portable and typically used for basic tasks like browsing, reading,
and media consumption.
Based on Processing Power:
•General-Purpose Computers: These computers can perform a
variety of tasks and run many types of programs.
•Special-Purpose Computers: These are designed to perform
specific tasks and are optimized for that purpose, such as
embedded systems in appliances and vehicles
ANATOMY OF A COMPUTER:
A computer is essentially a machine that follows instructions to do
specific tasks. These tasks can include taking in data (input),
storing or processing that data, and then providing some kind of
result or output.
The computer system is made up of two main parts:
Hardware: These are the physical parts of the computer—things
you can touch and see, like the keyboard, mouse, monitor, and
internal parts like the processor and memory. Hardware is
responsible for actually doing the tasks, like moving data around,
showing results on the screen, etc.
Software: This is the set of invisible instructions or programs that
tell the hardware what to do. Software includes everything from
operating systems (like Windows or macOS) to applications (like
Word or web browsers). Without software, hardware wouldn’t
know how to do anything, and the computer would be useless.
HARDWARE:
Hardware refers to the physical components of a computer,
including mechanical, electrical, and electronic parts. A
computer consists of the following key hardware components:
•Input and output devices
•Central Processing Unit (CPU)
•Memory unit and storage devices
•Interface unit
Input Devices:
Input devices allow users to interact with a computer by feeding
data and instructions. Examples include keyboards, mice, and
scanners.
Output Devices:
Output devices display or print data from the computer. The
most common output devices are monitors and printers.
Central Processing Unit (CPU)
The CPU is the brain of the computer, responsible for processing instructions
and data. It consists of:
Registers: High-speed storage locations for data.
Arithmetic Logic Unit (ALU): Performs mathematical and logical operations.
Control Unit (CU): Manages data flow between CPU, memory, and peripherals.
Memory Unit
Memory is where data and instructions are stored. There are two types:
Primary Memory: Fast and volatile, it includes RAM and cache memory.
Secondary Memory: Slower but non-volatile, it includes hard drives and SSDs.
Interface Unit
This unit connects the CPU with memory and I/O devices via data, address,
and control buses. It allows communication between the computer's internal
components and external peripherals.
SOFTWARE:
Software is what gives instructions to the hardware (the physical parts of a
computer) to perform tasks. These instructions are organized in a specific
sequence, forming a computer program.
So, when we say "a program," we are talking about a list of steps or
instructions that tell the computer what to do, and this is executed by the
computer's processor (its "brain").
But software is more than just one program. It refers to a collection of related
programs that work together to accomplish a task. Software also includes:
The data used by the programs.
The data structures, which are ways of organizing data so the programs can
work with them efficiently.
The documents, which describe how the software works and how to use it.
Comparison of a computer program and software:
1. A computer program is a single set of instructions designed for a specific
task.
2. Software is a broader term that includes multiple programs, the data they
use, the structures to handle the data, and any related documentation.
Software Installation
Most software today needs to be installed before it can be used. This process involves
copying files and configuring settings so that the software can run smoothly on your
computer. Installation may differ depending on the software and the operating system
(like Windows, macOS, or Linux).
Types of Software
Software is generally divided into two categories:
A. System Software
B. Application Software
A. System Software
This type of software helps manage and operate the computer hardware, making it
usable. It works at a low level, closer to the machine itself. Examples include:
Operating Systems (OS) like Windows, macOS, or Linux. The OS is the most important
system software because it manages hardware resources (memory, processor,
input/output devices) and ensures that other programs (both system and application)
can function by providing access to these resources.
Device Drivers, which are specialized programs that allow the OS to communicate with
hardware like printers, graphics cards, or network cards. These drivers either
automatically install (plug-and-play) or need manual setup.
Loader and Linker, which are responsible for preparing programs to run by copying
them from storage (e.g., a hard drive) to memory and setting them up for execution.
B. Application Software
This type of software is designed to perform specific tasks for the
user. Examples include:
Microsoft Word for word processing.
Microsoft Excel for spreadsheets.
Photoshop for image editing.
Application software comes in two types:
Custom Software: Designed for a specific user or company. It's
made based on their particular needs. For example, a bank might
have software custom-built to manage its operations.
Pre-written Software Packages: These are off-the-shelf products
that anyone can buy and use. They may not be customized but
are designed to meet general needs. Examples include Microsoft
Office, Oracle for databases, and SPSS for statistical analysis.
Categories of Application Software Packages
Database Management Software (e.g., Oracle,
Microsoft SQL Server): These manage and organize
data.
Spreadsheet Software (e.g., Microsoft Excel): Used
for calculations, graphs, and data management.
Word Processing (e.g., Microsoft Word) and Desktop
Publishing (DTP) (e.g., Pagemaker): Used for creating
documents and publications.
Graphics Software (e.g., Corel Draw): For designing
images and graphics.
Statistical Software (e.g., SPSS): For performing data
analysis and research.
MEMORY
The different types of memories available for a computer are shown in Fig. 1.4.
1) PRIMARY MEMORY
Semiconductor memory, specifically RAM (Random Access Memory),
plays a central role in modern computers. RAM allows data to be
accessed randomly, meaning any location can be read or written
directly. This memory type is fast, inexpensive, and compact, making
it ideal for primary memory use. RAM is volatile, meaning it loses
stored data when the computer is powered off. RAM stores data,
instructions for programs, and the operating system’s basic functions.
There are two types of RAM: Dynamic RAM (DRAM) and Static RAM
(SRAM). DRAM requires periodic refreshing to retain data, as it uses a
single transistor and a capacitor that holds electrical charges. SRAM,
on the other hand, does not need refreshing, as it uses flip-flop
circuits (with 4-6 transistors) to maintain data; it is typically used in
processor caches.
Various types of DRAM include:
SDRAM (Synchronous Dynamic RAM): Common in earlier computers.
RDRAM (Rambus Dynamic RAM): Proprietary, faster, and more expensive,
used in high-end systems.
DDR RAM (Double Data Rate RAM): A faster version of SDRAM, with improved
versions like DDR2, DDR3.
Additionally, Video RAM (VRAM) is used for graphics, acting as a buffer
between the processor and display.
Cache memory helps bridge the speed gap between the processor and main
memory. It is located closer to the processor and stores frequently accessed
data for faster retrieval. There are two levels:
L1 cache: Embedded in the processor, faster but smaller (8 KB to 64 KB).
L2 cache: Slightly slower but larger (64 KB to 2 MB), and was originally external
to the processor.
Overall, semiconductor memory, especially RAM, plays a critical role in
computing performance, balancing speed and storage efficiency.
Read Only Memory (ROM):
Unlike volatile memory such as RAM, non-volatile memory retains data even
when the power is turned off. Read-Only Memory (ROM) and its variations are
the main examples.
ROM (Read-Only Memory):
Used in personal computers to store permanent instructions, such as startup
routines (bootstrapping).
ROM is programmed during manufacturing and cannot be modified afterward.
PROM (Programmable ROM):
Similar to ROM but can be programmed once after manufacturing using a special
device.
Once programmed, it behaves like ROM and cannot be re-written.
EPROM (Erasable Programmable ROM):
Can be written and erased, making it more flexible than PROM.
To erase data, the memory chip must be exposed to ultraviolet light, which resets all
storage cells to their initial state.
After erasure, new data can be written electrically. The erasing process is time-
consuming.
EEPROM (Electrically Erasable Programmable ROM):
A more advanced version of EPROM where data can be erased and rewritten
electrically without using ultraviolet light.
It retains data when the power is off and allows multiple write operations.
However, writing data to EEPROM takes longer than reading it, and it is more
expensive.
2) SECONDARY MEMORY
There are four main types of secondary storage devices
available in a computer system:
Disk drives
CD drives (CD-R, CD-RW, and DVD)
Tape drives
USB flash drives
These storage mediums are used to retain data for long-term use,
unlike primary memory (RAM), which is volatile.
1. Hard Disks
Hard disks are high-capacity, fast, and non-removable storage
devices.
They consist of metal platters coated with a magnetic material.
These platters spin at a constant speed.
Read/write heads, attached to arms, move over the platters to
either write data by magnetizing the surface or read it by detecting
magnetic marks.
•The platters are divided into tracks (concentric circles), and each
track is divided into sectors (small sections that store data,
typically 512 bytes).
•All tracks aligned vertically across multiple platters form a
cylinder. The heads move in sync, so once aligned, they can read
or write data across multiple platters without additional
movement.
•Hard disks offer large storage capacities (in gigabytes), fast access
speeds, high reliability, and low data errors.
2. Floppy Disks
Floppy disks are thin plastic
disks coated with magnetic
material and enclosed in a
protective jacket. They are
removable but have much
lower capacity (typically 1.44
MB) and slower performance
than hard disks.
The floppy disk drive reads
and writes data by moving a
pair of read/write heads
across the surface of the disk
while the disk spins.
Floppy disks were commonly
used in earlier personal
computers for storing and
exchanging data.
3. Compact Discs (CDs)
CDs are portable, non-volatile storage devices. Data is permanently stored
when the CD is created.
A CD-R (Compact Disc-Recordable) can be written to once and cannot be
erased, while a CD-RW (Compact Disc-Rewritable) allows data to be erased
and rewritten multiple times.
CDs are made from synthetic resin coated with a reflective material, like
aluminum. Data is stored by creating tiny pits on the surface during the
writing process.
Laser technology is used to read CDs: a laser beam reflects off the smooth
or pitted surface, and a sensor detects the difference to determine whether
the data represents a 1 or 0.
4. DVD (Digital Versatile Disc)
DVDs work similarly to CDs but can store more data (due to
higher density of pits), making them ideal for video and large
files.
Like CDs, DVDs are read using laser technology.
5. Magnetic Tapes
Magnetic tapes store data magnetically on a long strip of
material. They are often used for backups and archival
purposes due to their large storage capacity and durability,
though they are slower to access.
INTRODUCTION TO SOFTWARE:
•System Software: Manages the computer's hardware, such
as input/output devices, memory, and the processor. It also
handles the scheduling of multiple tasks, like running
programs and managing resources.
•Application Software: Designed for specific tasks or
general purposes, such as creating documents, drawing, or
playing video games. This software is often developed for
sale to users or organizations.
•Composition of Software:
•Software consists of multiple files, including at least one
executable file (a file that can be run by the user or the
operating system).
•In addition to the main executable, there are program files,
data files, and configuration files that work together to
make the software function properly.
Software Installation:
•Installing software involves more than just copying files
onto the hard drive. It also depends on the type of operating
system and the nature of the software (local, web-based, or
portable).
•Types of Applications:
•Local Applications: Installed on a computer's hard disk
and may require additional configurations with the operating
system. Once installed, they can be run as needed.
•Portable Applications: Designed to run directly from a
removable storage device (e.g., USB flash drive, CD)
without needing installation on the hard disk. When the
storage device is removed, no trace of the application is left
on the computer.
•Web Applications: Accessed through a web browser.
They run on servers over the internet and don't require
installation on the local computer.
PROGRAMMING LANGUAGES
•Low-Level vs. High-Level Languages:
•Low-Level Languages:
•Assembly Language and Machine Language are categorized as
low-level because they are closely aligned with the computer's
hardware.
•Programs written in machine language can be directly executed
by the processor without the need for translation.
•Assembly language represents operations at a single instruction
level, which makes coding specific and hardware-dependent.
•High-Level Languages:
•High-level languages abstract the complexity of hardware,
allowing programmers to focus on problem-solving without delving
into machine-specific details.
•These languages can be further divided based on their
programming paradigms.
Classification of Programming Languages
Programming Paradigms:
A programming paradigm is a framework that defines how problems are approached
and solved using programming languages.
Procedural Programming:
Programs are structured as a sequence of instructions or procedures.
Each program can be broken down into smaller segments or functions that can be
reused.
Examples include C, COBOL, PASCAL, and FORTRAN.
• Algorithmic languages focus on specifying steps to solve problems using a top-down
approach.
• Object-Oriented Programming (OOP) emphasizes the use of objects that contain
both data and methods. Key features of OOP include:
  • Abstraction: Simplifying complex systems by focusing on essential details.
  • Encapsulation: Bundling data and methods into a single unit (object).
  • Inheritance: Allowing new classes to extend existing ones, promoting code
    reuse.
  • Polymorphism: Allowing one name to represent different methods based on
    data types.
  Examples include C++, JAVA, and SMALLTALK.
• Scripting Languages:
Initially considered auxiliary tools, scripting languages are now recognized for
their ability to automate tasks and integrate components.
They are often interpreted and include languages like Python, Tcl, and Perl.
Non-Procedural Languages:
Functional programming applies functions to variables without focusing
on the sequence of execution.
Examples include LISP, ML, and Scheme.
Logic programming languages (e.g., PROLOG) express programs as a set
of facts and rules, solving queries based on logical inference.
Problem-Oriented Languages:
These provide predefined procedures for specific problem domains.
MATLAB is mentioned for its application in engineering and scientific
computations.
MATHEMATICA is noted for symbolic manipulation of mathematical
expressions.
Markup Languages:
Not programming languages, markup languages like HTML are used to
format and structure web content.
Some extensions (like JSP and XSLT) provide limited programming
capabilities.
COMPILING, LINKING, AND LOADING A PROGRAM
Compilation is the process a computer goes through to transform a high-level
program into an executable file.
1. Source Code and Target Language
Source language is the language in which you write your program, such as C,
Java, or Python. It's called "high-level" because it’s easier for humans to
understand.
Target language is what the source code is translated into. The target
language can be machine language (directly understandable by the
computer) or assembly language (closer to machine code but still needs
further translation).
2. Compilation Process
When you compile a program, a compiler translates the source code into a
target language (usually assembly language). However, the output of the
compiler isn’t the final executable file. Several more steps are involved
before the program can run:
The assembler translates assembly language into object code,
which is in machine language but not yet ready to be executed.
The object program (or object code) is combined with other
pieces of object code (from system libraries or other programs) by
a linker. This step creates an executable file, which can be run by
the computer.
The executable file is stored in secondary memory (like your hard
drive) until it is needed for execution.
When you want to run the program, the operating system loads it
from secondary memory into the main memory (RAM) for
execution.
3. **Phases of Compilation**
The compilation process consists of several phases:

a. **Lexical Analysis**
- In this phase, the compiler scans the source code and breaks it down into
**tokens**. Tokens are the smallest units of the code, like identifiers (names
of variables), operators (like +, -), or keywords (like `if`, `while`).
- A **symbol table** is created to store information about user-defined names
(like variables and functions), which is used in later stages of the compilation.

b. **Syntax Analysis (Parsing)**


- Here, the tokens are grouped into **syntactic units** (like expressions and
statements) based on the **grammar** of the programming language.
- The compiler checks if the program follows the syntax rules of the language.
For example, it verifies whether a statement like `if (x == 1)` follows the proper
format.
- This process produces a **parse tree**, which represents the hierarchical
structure of the code.
c. **Semantic Analysis**
- During semantic analysis, the compiler checks the **meaning** of the code. It
ensures that the operations and data types used make sense according to the
language’s rules.
- For example, it will check if you’re trying to divide a string by a number, which
isn’t valid.
- This phase ensures that the program is logically correct according to the source
language’s rules.

d. **Intermediate Code Generation and Optimization**


- After the syntax and semantic analysis, the compiler may produce an
**intermediate code**, which is not yet machine code but easier to optimize
and transform into the final code.
- The purpose of intermediate code generation is to make the final program
**smaller** (in terms of memory) and **faster** (in execution).
- The compiler then **optimizes** the intermediate code, for example, by
removing redundant calculations or rearranging instructions for efficiency.
e. **Code Generation**
- In the final phase, the intermediate code is transformed into **target code**,
which is either in machine language or assembly language.
- This phase depends on the specific machine the program is running on (the
processor architecture, number of registers, etc.). The compiler uses **target
language templates** to generate this code.

4. **Linking and Loading**


- If your program uses **libraries** (pre-written functions like `printf` in C) or
other code that was compiled separately, the **linker** brings them all
together into a single **executable program**.
- Finally, the executable file is **loaded** into the main memory by the
operating system for execution.

5. **Special Case**
- In some cases, the target language may not be machine language or assembly
language. For example, it could be an **intermediate language** like bytecode
(used in Java), which requires another **translator** (like the Java Virtual
Machine) to make it executable.
INTRODUCTION TO OPERATING SYSTEM
An operating system (OS) is a set of programs that acts as a bridge between a
computer’s user and its hardware. It provides the environment for running
programs. Almost every computer system is made up of three parts:
Hardware – This includes memory, the CPU, storage devices, and peripherals like
keyboards and screens.
System programs – These include the operating system, compilers, and utilities
that help the computer run.
Application programs – These are programs like word processors or business
software that help users solve specific tasks.
The OS manages the hardware and ensures that system and
application programs can run smoothly. It acts as a resource
allocator, meaning it keeps track of resources (like CPU time and
memory) and decides how to share them among different
programs and users.
The OS also functions as a control program, overseeing the
execution of user programs and preventing errors or misuse of
the system.
Functions of an Operating System :
An operating system (OS) performs several key functions to
manage a computer's hardware and software resources
efficiently.
A. Process Management
A process is a running program. It needs resources like CPU time, memory, and
I/O devices.
The OS handles:
1. Creating and deleting processes.
2. Suspending and resuming processes.
3. Allocating resources to processes.
4. Scheduling processes (deciding which process gets the CPU and for how
long) and keeping them synchronized.
5. Handling deadlocks, which occur when processes block each other from
accessing resources.
B. Memory Management
Memory is essential for a computer to function. Programs must be loaded into
memory to run.
The OS:
1. Tracks which parts of memory are in use.
2. Decides which programs should be loaded into memory.
3. Allocates and de-allocates memory as needed.
Secondary Memory Management: Since primary memory (RAM) is limited, the
OS uses disks for additional storage and manages how data moves between RAM
and disk space.
C. Device (I/O) Management
The OS hides the complexity of hardware devices from users.
It manages:
1. Buffering data between memory and devices.
2. Device drivers, which control specific hardware components.
D. File Management
The OS manages files, which store data and programs.
It:
1. Creates and deletes files and directories.
2. Manages access to files and maps them to storage devices.
3. Handles file permissions to control user access.
E. Protection
Protection ensures that processes and users do not interfere with each other’s
resources.
The OS:
1. Controls memory access so each process stays within its limits.
2. Prevents unauthorized access to the CPU and I/O devices.
3. Improves system reliability by detecting errors early.
Components of an Operating System
Types of Operating Systems
1. Batch Process Operating System
2. Multiprogramming Operating System
3. Time-sharing Operating Systems
4. Real-time Operating Systems
5. Network Operating System
6. Distributed Operating System
1. Batch Process Operating System
A batch process operating system works by collecting jobs from users in a
central location and then running them in batches. Here's a simplified
overview:
How it works: Users submit jobs (programs, data, and commands), which
are queued and processed one by one. The user does not interact with the
program while it is running. The response time (turnaround time) is the time
between submitting the job and receiving the results, which can take
minutes, hours, or even days.
Job processing: In the past, batch systems grouped similar jobs (like Fortran
or COBOL programs). Modern batch systems process jobs from an input
queue, often on a first-come, first-served basis. The key feature of batch
systems is that there is no interaction with the job while it runs.
Memory and I/O management: Memory is divided into two areas—one for
the operating system and the other for user programs. Jobs are loaded one
at a time, and since only one job runs at a time, managing input/output
devices and files is simple.
Advantages: Batch systems are ideal for tasks that require long processing
times but no user interaction, like payroll, data analysis, and scientific
computations. Users can submit their jobs and come back later for the
results.
Disadvantages:
Non-interactive: Users can't interact with the program while it's running
or control intermediate results. Turnaround time is slow, which makes
software development difficult.
Offline debugging: Programmers can't fix errors immediately. They have
to wait until after the job finishes to find and correct bugs.
2. Multiprogramming Operating System
A multiprogramming operating system allows more than one user program
to be in the main memory at the same time.
This is more advanced than a batch operating system, where jobs are
processed one after another.
In a multiprogramming system, several programs can be ready to run, and
the operating system manages their memory and decides which program
gets to use the CPU at any given moment.
This decision-making process is called CPU scheduling. The system also
ensures that these programs don’t interfere with each other while they share
resources like memory, disk space, and CPU time.
•Multiprogramming: This is the system’s ability to keep multiple
programs in memory and manage their execution so the CPU is always
busy.
•Multitasking: A subset of multiprogramming, this refers to switching
between tasks so that multiple tasks seem to run at the same time.
For example, you could be working on a document while a calculator
is running in the background.
•Example: Operating systems like UNIX and Windows support
multitasking.
•Multi-user: Multiple users can access the system at the same time.
For instance, in a railway reservation system, many terminals can be
connected to the main computer to serve different users
simultaneously.
•Example: Linux and UNIX are multi-user operating systems.
•Multiprocessing: A system with more than one CPU, where each CPU
can work on different tasks at the same time. This boosts performance
because multiple tasks can be processed in parallel.
•Example: Systems with multiple processors can run complex scientific
simulations or manage large databases efficiently.
3. Time-sharing Operating Systems
A time-sharing operating system allows many users to interact with a computer
at the same time. Each user feels like they have their own personal computer,
but in reality, they are sharing the same system, which switches between tasks
very quickly to serve each user. This is made possible by the operating system
managing the computer's resources like the CPU, memory, and input/output
(I/O) devices.
Time-Sharing: The system divides its time among multiple users. Each user gets
a small time slice of the CPU, so it feels like everyone is working on their own
system, even though it's the same computer serving everyone.
Example: Think of using a shared online platform where many people access
the same server, but it responds quickly to each person's request, such as typing
or clicking.
Interactive System: In time-sharing systems, users interact with the computer
directly. You type a command (like pressing a key), the system processes it, and
it shows the result within a few seconds.
CPU Scheduling: To make sure the system is fair to all users, the CPU is given to
each user or task for a short period (called a time slice). If a task takes too long,
it gets interrupted and placed back in line, allowing others to use the CPU.
Memory and I/O Management: The system separates and protects each user's
program in memory so they don't interfere with each other. It also manages
input/output devices (like keyboards or printers) so multiple users can use them
without issues.
Interrupts: When you interact with the computer (like pressing a key), the
system immediately shifts to process your request through an interrupt,
ensuring quick responses.
4. Real-time Operating Systems
A real-time operating system (RTOS) is designed to handle
situations where quick responses are crucial. These systems are
used in applications where a delay could lead to errors,
disasters, or major problems. Examples include airline
reservation systems, controlling machines, or monitoring critical
infrastructure like nuclear power plants.
Real-Time Response: In an RTOS, the system must respond
immediately to events. For example, in flight control systems, any
delay could be catastrophic, so the system must process signals
and give results in real time.
Priority-Based Scheduling: Each task or process in the system is
given a priority level. The operating system will always execute the
highest-priority task first. If a more important task comes up while
a lower-priority one is running, the RTOS interrupts or "pre-empts"
the lower-priority task to handle the higher one immediately.
Memory Management: In RTOS, processes stay in primary
memory (RAM) most of the time to allow quick access, unlike
regular operating systems that swap data between RAM and
storage. Memory management here is simpler because the system
prioritizes speed.
I/O Management: Input/output devices are managed with a focus
on speed. RTOS efficiently handles hardware interrupts (like signals
from sensors or external devices) so it can quickly respond to
events.
File Management: The focus is on quick access to files rather than
storing large amounts of data efficiently. In some real-time
systems, there might not even be secondary storage (like hard
drives), especially in embedded systems like those found in cars or
medical devices.
Types of Applications:
Military Systems: For example, missile guidance systems, where
any delay could be disastrous.
Industrial Control: Systems that control machinery or monitor
power plants need real-time feedback to function safely.
Real-Time Simulations: Used in areas like flight simulators, where
the system must process events as they happen, without delay.
Examples of Real-Time Operating Systems:
Chimera, Lynx, QNX, RTMX, RTX: These are specialized RTOS used
in industries requiring fast, reliable response times.
5. Network Operating System
A networked computing system is a group of
connected computers. Each computer has its own
operating system, but they are connected to share
resources and communicate with each other. In this
setup, a network operating system (NOS) manages
these connections, allowing computers to work
together while keeping their own private systems.
Standalone OS: Each computer has its own operating system.
Users work on their systems and need to use specific commands
to access other machines.
Remote Access: Users can log into other computers on the
network and transfer files between them.
Resource Sharing: Users can access remote resources (like
printers or files) as if they were local, but file transfers need
explicit commands.
Security: NOS controls access, ensuring only authorized users can
access certain resources.
Two Main Ways to Access Files Across the Network:
File Transfer Programs: Users copy files from one computer to
another to access them locally.
Pathname Access: Users can access remote files by specifying the
file's location (like a URL).
Examples of NOS:
Linux
Windows 2000/2003 Server
6. Distributed Operating System
A distributed computing system is a group of computers connected
together to share processing power and resources. Unlike networked
systems, which require users to manually access and manage different
computers, a distributed operating system makes the whole system feel
like one unified computer to users. The system automatically distributes
tasks and resources across the connected machines, making everything
more efficient and transparent to the user.
Unified System: In a distributed OS, it appears like a single system to the
user, even though multiple computers are working together. This is in
contrast to a network OS, where each machine runs its own OS
independently.
Automatic Job Distribution: The operating system automatically distributes
tasks and files across different processors and machines. The user doesn't
need to worry about where a program is running or where files are stored;
the system manages all of it.
Global File System: In a distributed system, all machines share a
single, global file system. This allows the operating system to
manage files across different computers, balancing storage use
and even making backups or copies of important files without
users needing to do anything.
Program Execution: The distributed system automatically
chooses which processor to use based on factors like load
balancing (how busy each machine is) or file location. In
networked systems, the user often chooses which machine to run
the program on.
Protection and User Management: In distributed systems, there
is a global user identification system (UID) that works across all
machines. This makes access control and file protection more
seamless. In a network OS, each machine has its own user
management system, requiring additional mapping between
machines.
Advantages of Distributed Systems:
Cost-Effective: Distributed systems use multiple smaller, cheaper processors
instead of one large, expensive machine.
Scalability: You can easily increase computing power by adding more
machines.
Reliability: If one part of the system goes down, other parts can continue to
function, improving system uptime.
File and Protection Management:
In a distributed system, file storage and access are managed automatically by
the operating system. The system ensures that the file placement and access
are optimized without user intervention.
Protection is simplified in a distributed OS because of the single UID system,
which works the same across all machines in the network.
Example of a Distributed OS:
Amoeba is an example of a true distributed operating system, where the
software manages all computers as one seamless system.
