Components of Computer

BBA-IT Sem I
Index
• List of components of Computer
• CPU
• Memory
• Overview of PC architecture
Components of Computer
• Following are the 5 main
components of Computer
• Central Processing Unit (CPU)
• Input devices
• Output devices
• Primary memory
• Secondary memory
Central Processing Unit (CPU)
• The ‘brain’ of your computer.
• The CPU is a critical part of any modern device.
• Also called a processor, central processor, or microprocessor.
• The CPU receives instructions from both hardware components and
active software.
• It processes these instructions to produce output, performing
calculations and manipulating data as required.
• Essential programs like operating systems (OS) and application software
(e.g., for word processing, web browsing, gaming) are executed by the CPU.
• The CPU facilitates communication between input and output devices.
• It interprets inputs from actions such as clicking a mouse, moving
the cursor, or pressing keys on a keyboard.
• The CPU works with relevant software programs to achieve outcomes
such as printing documents, playing audio, or displaying text on the
screen.
• This ensures seamless interaction with peripherals.
• The CPU is installed into a CPU socket on the motherboard.
• It is equipped with a heat sink to absorb and dissipate heat, ensuring
smooth functionality and optimal operating temperatures.
Central Processing Unit (CPU)
• 1970s Early Microprocessors:
• 1971: Intel introduces the 4004, the first commercially available
microprocessor, designed by Ted Hoff, Federico Faggin, and others.
It was a 4-bit CPU used in calculators.
• 1972: Intel releases the 8008, an 8-bit microprocessor, expanding
capabilities beyond simple calculators.
• 1974: Intel launches the 8080, which becomes widely used in early
personal computers and other systems.
• 1980s Rise of x86 Architecture:
• 1981: IBM introduces the IBM PC with the Intel 8088 processor,
marking the beginning of the x86 architecture dominance in
personal computing.
• 1982: Intel releases the 80286 (286) processor, offering increased
performance and capabilities.
• 1985: Intel launches the 386 processor (80386), introducing 32-bit
architecture and enabling multitasking and larger memory
addressing.
• 1990s Advancements in Performance:
• 1993: Intel introduces the Pentium processor (80586), bringing
enhanced performance and floating-point arithmetic capabilities.
• Late 1990s: AMD becomes a significant competitor with its K5 and
K6 processors, challenging Intel's dominance.
Central Processing Unit (CPU)
• 2000s Multicore Processors and Mobility:
• 2003: AMD launches the Athlon 64, introducing 64-bit processing
to consumer desktops.
• 2005: Intel introduces dual-core processors with the Pentium D
and later the Core 2 Duo, focusing on performance and energy
efficiency.
• 2010s Integration and Efficiency:
• 2011: Intel releases Sandy Bridge processors, integrating CPU and
GPU on the same die, enhancing multimedia and gaming
performance.
• 2017: AMD launches Ryzen processors, offering competitive
performance against Intel's offerings with multi-core architecture.
• 2020s Continuing Innovation:
• 2020: AMD introduces Ryzen 5000 series processors based on Zen
3 architecture, focusing on gaming and content creation
performance.
• 2021: Intel launches Alder Lake processors, combining high-
performance and efficiency cores for desktop and mobile
platforms.
Central Processing Unit (CPU)
• Components of CPU
• Memory unit
• Control unit
• Arithmetic & Logic unit (ALU)
• Clock speed
• The clock speed of a processor, also known as the CPU clock rate,
denotes how many cycles it completes per second.
• It is measured in gigahertz (GHz), where 1 GHz equals 1 billion
cycles per second.
• For instance, assuming one instruction per cycle, a CPU with a
clock speed of 4.0 GHz can execute up to 4 billion instructions per
second.
• Each instruction represents a basic CPU operation, such as a data
transfer or a mathematical calculation.
• Higher clock speeds enable CPUs to process instructions more
quickly, thereby improving overall performance.
• Analogously, in a factory production line, the clock speed
resembles the speed of a conveyor belt, determining how many
workpieces can be processed within a specified period.
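The instructions-per-second arithmetic above can be sketched directly. This is a simplified model that assumes one instruction completes per clock cycle; real CPUs may execute more or fewer, so treat the function and its figures as illustrative only:

```python
# Hypothetical figures for illustration: a 4.0 GHz CPU.
clock_speed_hz = 4.0e9  # 4.0 GHz = 4 billion cycles per second

def instructions_per_second(clock_hz, instructions_per_cycle=1):
    """Upper-bound estimate: cycles per second * instructions per cycle."""
    return clock_hz * instructions_per_cycle

print(instructions_per_second(clock_speed_hz))     # 4 billion (1 per cycle)
print(instructions_per_second(clock_speed_hz, 4))  # a 4-wide superscalar core
```

Changing `instructions_per_cycle` shows why clock speed alone does not determine performance: two CPUs at the same GHz can differ in how much work each cycle completes.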
Control Unit of CPU
• The Control Unit (CU) is an essential component of the Central
Processing Unit (CPU), responsible for overseeing the operation of a
computer system.
• Within the CU, circuitry utilizes electrical signals to direct the computer
in executing stored instructions.
• It retrieves instructions from memory, decodes them, and ensures their
execution.
• Consequently, the CU governs and harmonizes the operations of all
computer components.
• The primary role of the Control Unit (CU) is to oversee and manage the
flow of information within the processor.
• Acting as a traffic controller, it ensures efficient transfer of information
and instructions among different components of the computer system.
• The CU orchestrates the sequence in which instructions are executed
and synchronizes activities among various CPU units.
• Unlike other CPU components involved in processing and storing data,
the CU functions as a supervisor, ensuring instructions are executed
correctly and in the intended order.
Control Unit of CPU
• The Control Unit achieves coordination within the CPU through a
series of steps:
• Fetch: The Control Unit retrieves an instruction from the computer's memory
using the program counter (PC), which holds the address of the next instruction.
• Decode: Upon fetching, the Control Unit breaks down the instruction into its
elements, including the operation code (opcode) and operands. Operands
provide data or memory locations, while the opcode specifies the operation
type.
• Execute: Following decoding, the Control Unit initiates the execution phase. It
coordinates actions among CPU units like the arithmetic logic unit (ALU) to
perform the operation defined by the instruction, involving calculations or data
manipulation.
• Store: After execution, the Control Unit updates registers and flags to reflect the
operation's results. This includes storing outcomes in registers, updating the
program counter, or adjusting status flags (e.g., zero flags, carry flags).
• Repeat: The Control Unit proceeds to fetch the next instruction by incrementing
the program counter. This cycle, known as the fetch-decode-execute cycle,
repeats for each instruction in the program.

• This repetitive cycle ensures the sequential execution of
instructions, facilitated by the Control Unit's management of
information flow and coordination of CPU actions. It enables the
CPU to perform necessary computations and operations,
contributing to the overall functionality of the computer system.
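The fetch-decode-execute cycle described above can be sketched as a toy interpreter. The opcodes, memory layout, and accumulator here are invented for illustration, not a real instruction set:

```python
# A toy fetch-decode-execute loop. Memory holds (opcode, operand) pairs;
# acc is an accumulator register, pc is the program counter.
memory = [("LOAD", 5), ("ADD", 10), ("STORE", 0), ("HALT", None)]
data = [0]   # one data cell, addressed by STORE's operand
acc = 0      # accumulator
pc = 0       # program counter

while True:
    opcode, operand = memory[pc]   # Fetch: read the instruction at PC
    pc += 1                        # Repeat: increment PC for the next fetch
    # Decode and Execute: dispatch on the opcode
    if opcode == "LOAD":
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "STORE":
        data[operand] = acc        # Store: write the result back
    elif opcode == "HALT":
        break

print(data[0])  # 15 (5 loaded, then 10 added, then stored)
```

Each pass through the loop mirrors one fetch-decode-execute-store iteration; the loop ends only when a HALT instruction is decoded.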
Arithmetic Logic Unit of CPU
• The Arithmetic Logic Unit (ALU) performs arithmetic and logical
operations within a CPU.
• Arithmetic operations include addition, subtraction, multiplication,
division, and comparisons.
• Logical operations involve data selection, comparison, and merging.
• Some CPUs may feature multiple ALUs to enhance processing capabilities.
• ALUs can also manage timers crucial for coordinating computer
operations.
• The ALU is divided into two main sections:
• Arithmetic Section:
• Performs fundamental mathematical operations such as addition,
subtraction, multiplication, and division.
• Handles other tasks like bitwise operations and incrementing or
decrementing values.
• Essential for mathematical computations in various applications and
programs.
• Logic Section:
• Executes logical operations involving data manipulation based on conditions.
• Includes operations like selecting or excluding data elements, comparing
values (e.g., equal, greater than, less than), and merging data based on
logical principles.
• Commonly used in decision-making, data filtering, and processing tasks.
• The ALU's arithmetic and logical functions are integral to CPU operations.
• When an instruction is fetched and decoded by the Control Unit, the ALU
performs the specified arithmetic or logical operation.
• For example, if an instruction calls for adding two integers, the Arithmetic
Section of the ALU executes the addition and produces the result.
Memory of CPU
• The memory or storage unit of a computer system stores
instructions, data, and interim results.
• It serves as a repository accessible to other computer
components for storing and retrieving data.
• Known by various names such as internal storage unit, main
memory, primary storage, or Random-access memory
(RAM).
• Essential for the functioning of applications and operating
systems by providing fast access to data and instructions.
• The capacity of the memory unit directly impacts the speed,
power, and overall performance of the computer.
• A larger memory capacity enables storage of more data and
instructions.
• This enhances the computer's ability to efficiently handle
complex tasks.
• Improved machine capacity results in enhanced
performance and responsiveness.
Memory of CPU
• Secondary memory encompasses hard disk drives (HDDs),
solid-state drives (SSDs), and external storage devices.
• Unlike RAM, secondary memory is non-volatile and retains
data even when the computer is powered off.
• It serves as the storage location for operating systems,
software applications, documents, and user data.
• Secondary memory has a larger capacity compared to RAM,
allowing for extensive data storage.
• Retrieving data from secondary memory takes longer than
from primary memory but offers the advantage of long-
term data retention.
Memory of CPU
• Functions of the memory unit
• Storage: The memory unit stores instructions, data, and intermediate
results essential for computer operations.
• Retrieval: It enables rapid and efficient access to stored information,
allowing the processor to retrieve data and instructions during
program execution.
• Temporary Storage: RAM provides temporary storage for actively
running programs, facilitating quick access and manipulation of data
by the CPU.
• Data Transfer: The memory unit facilitates seamless data transfer
between the CPU and other computer components, ensuring
efficient communication and processing.
• Fast Access: It provides swift access to data and instructions,
minimizing delays in program execution and optimizing overall
system performance.
• Random Access: The memory unit allows the CPU to retrieve data
from any location within the memory unit instantly, without needing
to search sequentially, thereby supporting fast and random access to
information.
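The random-access property described above can be contrasted with sequential (tape-like) access in a short sketch; the data values and step counting are invented for illustration:

```python
# Random vs. sequential access: indexing is a direct lookup, while a
# tape-like search must scan cell by cell from the start.
memory_cells = list(range(100, 200))    # 100 cells of example data

def random_access(address):
    return memory_cells[address]        # one step, regardless of address

def sequential_access(address):
    steps = 0
    for i, _ in enumerate(memory_cells):  # walk the "tape" one cell at a time
        steps += 1
        if i == address:
            return memory_cells[i], steps

print(random_access(42))      # 142
print(sequential_access(42))  # (142, 43) - 43 steps to reach the same cell
```

The step count grows with the address only in the sequential case, which is why RAM's constant-time access matters for program execution.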
Types of CPUs
• Single-Core CPUs
• Single Core CPUs, originating in the 1970s, feature a single processing
core for handling operations.
• They can execute only one operation at a time, switching between
different sets of data streams when multiple programs run concurrently.
• Not ideal for multitasking, as performance may degrade when running
more than one application simultaneously.
• Performance of single-core CPUs primarily relies on their clock speed.
• Still utilized in various devices like smartphones due to their efficiency in
specific tasks.
• Multi-core CPUs have become prevalent with advancements in
technology, offering enhanced multitasking capabilities.
• These CPUs can execute multiple instructions simultaneously, leveraging
multiple processing cores.
• Single-core CPUs are less common in desktop and laptop computers
today, although they remain prevalent in embedded systems and mobile
phones.
• Smartphones often integrate single-core or dual-core CPUs designed for
power efficiency, balancing performance and battery life effectively.
Types of CPUs
• Dual core CPU
• Dual Core CPU features two cores within a single Integrated Circuit (IC),
each with its own controller and cache.
• Cores are interconnected to function as a unified unit, enhancing
performance compared to single-core processors.
• Each core in a dual-core CPU can independently execute instructions,
enabling parallel processing.
• This capability significantly boosts multitasking efficiency compared to
single-core processors.
• Users can concurrently run multiple applications on dual-core CPUs
without experiencing notable performance degradation.
• Dual-core CPUs provide advantages beyond multitasking by improving
performance for single-threaded applications.
• Each core can independently process instructions, allowing tasks that
cannot be parallelized to benefit from the dual-core architecture.
• One core can focus on executing the primary application, while the other
manages background processes or system tasks.
• This division of workload enhances user experience by ensuring
smoother operation and boosting overall system responsiveness.
Types of CPUs
• Quad core CPU
• A quad-core processor integrates four independent processing units
(cores) into a single chip or integrated circuit (IC).
• Each core reads and executes CPU instructions independently, enabling
simultaneous execution of multiple instructions and enhancing overall
program speed compatible with parallel processing.
• Quad-core CPUs leverage technology that allows four cores to operate in
parallel on a single chip, boosting performance without increasing clock
speed.
• Performance gains are dependent on software support for
multiprocessing, which distributes processing tasks across multiple cores
rather than utilizing one core at a time.
• Software that supports multiprocessing can divide the processing load
among multiple processors simultaneously, improving productivity and
accelerating processing times.
• Quad-core processors excel in multitasking and computationally intensive
tasks by dividing workloads more evenly among the four cores.
• Tasks such as video editing, 3D graphics rendering, and gaming benefit
from the simultaneous execution of multiple tasks, demonstrating the
effectiveness of quad-core CPUs in parallel processing scenarios.
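Software support for multiprocessing, as described above, can be sketched with Python's standard multiprocessing module. The worker count and workload are arbitrary examples; on a quad-core CPU the four workers can run on separate cores:

```python
# Distributing a CPU-bound task across cores with a process pool.
from multiprocessing import Pool

def heavy(n):
    # Stand-in for a computationally intensive task
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(4) as pool:                      # e.g. one worker per core
        results = pool.map(heavy, [10_000] * 4)  # split work across workers
    print(results)
```

`pool.map` divides the input list among the worker processes, which is exactly the load-splitting the slide describes: the gain appears only because the task itself can be partitioned.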
Types of CPUs
• Hexa core CPU
• Hexa-core CPUs feature six separate cores integrated into a single chip or
integrated circuit (IC), each capable of independent computation and
command execution.
• The inclusion of six cores enhances processing power and overall
performance.
• Hexa-core CPUs excel in multitasking and handling resource-intensive
tasks by efficiently distributing workload among the cores.
• Users can seamlessly run multiple programs simultaneously, including
web browsers, video editing software, and gaming applications, without
experiencing significant performance slowdowns.
• Hexa-core CPUs are particularly advantageous for applications requiring
substantial computational power, such as video editing, 3D rendering,
scientific simulations, and virtualization.
• Tasks in these applications can leverage multiple cores for faster
processing and reduced processing wait times, optimizing performance.
Types of CPUs
• Octa-core CPU
• Octa-core CPUs feature eight separate cores integrated into a single chip
or integrated circuit (IC), with each core functioning as an independent
processing unit capable of executing calculations and commands.
• The inclusion of eight cores significantly enhances processing power and
overall system performance.
• Octa-core CPUs excel in handling demanding workloads and offer robust
multitasking capabilities.
• With eight cores, the CPU efficiently manages multiple simultaneous
tasks by distributing the workload across cores for efficient processing.
• Users can run numerous applications concurrently without experiencing
noticeable performance slowdowns or system lag.
• The primary advantage of octa-core CPUs lies in their ability to execute
instructions in parallel, where each core independently processes
different tasks simultaneously.
• This parallel processing capability enhances overall system performance
and accelerates task completion, particularly beneficial for tasks that can
be divided into smaller subtasks and processed concurrently.
• Octa-core CPUs are ideal for computationally intensive software
applications that require substantial resources, such as high-definition
video editing, 3D rendering, complex scientific simulations, and
virtualization.
• These tasks can leverage multiple cores effectively, resulting in faster
processing speeds and reduced waiting times.
Types of CPUs
• Multi-core CPU
• Multi-core CPUs, also referred to as multi-core processors, integrate multiple
independent cores onto a single chip or integrated circuit.
• These CPUs utilize two or more cores working together to execute instructions and
perform computations, unlike single-core processors that rely on a single core for all
tasks.
• The primary advantage of multi-core CPUs lies in their ability to handle multiple tasks
simultaneously, enhancing overall performance and efficiency.
• Cores within the CPU function as separate processing units capable of independent
operation.
• Multi-core CPUs leverage parallel processing to distribute tasks across cores, enabling
faster and concurrent task completion.
• Users can run multiple programs concurrently on multi-core CPUs without significant
slowdowns or performance bottlenecks.
• Each task can be assigned to a different core for efficient processing, allowing activities
such as web browsing, video streaming, and document work simultaneously.
• This multitasking capability improves overall system responsiveness and enhances user
experience.
• Multi-core CPUs excel in performing computationally demanding tasks in addition to
multitasking.
• They are effective in handling complex activities like video editing, 3D rendering,
scientific simulations, and gaming by distributing workload across multiple cores.
• Distributing tasks among cores results in faster processing speeds and shorter wait
times, optimizing performance for demanding applications.
Memory
• Bits and Bytes: The Building Blocks of Digital Information.
• Bits
• A bit (binary digit) is the basic unit of digital information.
• It can have only two values: 0 or 1.
• Bits are used to represent data, instructions, and addresses in
computers.
• Bytes
• A byte is a group of 8 bits.
• Bytes are used to represent characters, numbers, and other data in
computers.
• Each byte can represent 256 (2^8) unique values.
• Bit and Byte Conversions
• 1 byte = 8 bits; 1 bit = 1/8 byte
• Kilobyte (KB): 1 KB = 1024 bytes = 8192 bits
• Megabyte (MB): 1 MB = 1024 KB = 1,048,576 bytes = 8,388,608 bits
• Bitwise Operations
• AND: Bitwise AND operation compares each bit of two numbers.
• OR: Bitwise OR operation compares each bit of two numbers.
• XOR: Bitwise XOR operation compares each bit of two numbers.
• NOT: Bitwise NOT operation flips the bits of a number.
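These bitwise operations can be demonstrated directly in Python. The operand values are arbitrary; NOT is masked to 8 bits because Python integers are unbounded:

```python
# Bitwise operations on two example 8-bit values.
a, b = 0b1100_1010, 0b1010_0110

print(bin(a & b))        # AND: bit is 1 only where both inputs are 1
print(bin(a | b))        # OR:  bit is 1 where either input is 1
print(bin(a ^ b))        # XOR: bit is 1 where the inputs differ
print(bin(~a & 0xFF))    # NOT within one byte: flip all 8 bits
```

Masking with `0xFF` keeps the result within a single byte, matching the 8-bits-per-byte convention above.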
Memory
• Computer memory is analogous to the
human brain.
• It is used to store data, information, and
instructions.
• Functions as a data storage unit or device.
• Stores data to be processed and instructions
required for processing.
• Can hold both input data and output results.
• Types
• Primary memory
• RAM (Random Access Memory)
• Static RAM
• Dynamic RAM
• ROM (Read-Only Memory)
• MROM(Masked ROM)
• PROM (Programmable Read Only Memory)
• EPROM (Erasable Programmable Read Only Memory)
• EEPROM (Electrically Erasable Programmable Read
Only Memory)
• Secondary memory
• Cache memory
Memory
• Memory hierarchy is a way to structure memory to
balance speed, cost, and size.
• Fastest to slowest
• Registers (inside CPU)
• L1 Cache
• L2 Cache
• L3 Cache
• RAM
• Virtual Memory (on storage devices)
• Secondary Storage (HDD/SSD)
• Each level in the hierarchy has trade-offs between
speed, cost, and capacity.
• Faster memory is more expensive and has lower
capacity, while slower memory is less costly and
offers more storage.
• The memory hierarchy helps optimize performance
by keeping frequently used data in faster, smaller
memory and less frequently used data in slower,
larger memory.
• Understanding these types of memory and their roles
helps in designing efficient computer systems and
troubleshooting performance issues.
Memory - Registers
• Registers
• Small, fast storage locations within the CPU used
to hold data and instructions that are being
processed.
• They provide the CPU with immediate access to
data and instructions needed for computation.
• Characteristics:
• Speed:
• Registers are the fastest type of memory because
they are part of the CPU.
• They operate at the same speed as the CPU, unlike
other types of memory (like RAM or cache) which are
slower.
• Size:
• Registers are typically very small, often only 32 or 64
bits in size, depending on the CPU architecture.
• This small size is balanced by their speed and
accessibility.
• Functionality:
• Registers are used to store temporary data,
intermediate results, and control information.
• Their immediate availability to the CPU enables rapid
data processing and instruction execution.
Memory - Registers
• Types of Registers
• General-Purpose Registers
• Function: Used for a variety of tasks, such as holding intermediate results of computations or data being
processed. They are versatile and can be used for different types of operations as needed by the CPU.
• Examples: In x86 architecture, these include registers like EAX, EBX, ECX, and EDX.
• Special-Purpose Registers
• Function: Serve specific roles and are used for particular purposes in the CPU. They control operations or
provide status information.
• Examples:
• Program Counter (PC): Holds the address of the next instruction to be executed.
• Instruction Register (IR): Contains the current instruction being decoded and executed.
• Stack Pointer (SP): Points to the top of the stack, used for managing function calls and local variables.
• Base Pointer (BP): Used to reference variables in the stack frame.
• Status Register/Flag Register: Holds flags or status bits that indicate the result of operations (e.g., zero, carry, overflow
flags).
• Index Registers
• Function: Used to modify address calculations for indexed addressing modes, facilitating access to arrays or
tables.
• Examples: SI (Source Index) and DI (Destination Index) in x86 architecture.
• Data Registers
• Function: Specifically used for holding operands and intermediate results during arithmetic and logic
operations.
• Examples: The AX register in x86 architecture can be divided into AH (high byte) and AL (low byte) for 8-bit
operations.
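The AH/AL split of the 16-bit AX register can be modeled with shifts and masks; the value 0x1234 is an arbitrary example:

```python
# Splitting a 16-bit AX value into its high byte (AH) and low byte (AL).
ax = 0x1234
ah = (ax >> 8) & 0xFF        # high byte: 0x12
al = ax & 0xFF               # low byte:  0x34
reassembled = (ah << 8) | al # recombining gives back the original AX

print(hex(ah), hex(al), hex(reassembled))  # 0x12 0x34 0x1234
```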
Memory – Registers
• The Memory unit has a capacity of 4096 words, and each word contains 16 bits.
• The Data Register (DR) contains 16 bits which hold the operand read from the memory location.
• The Memory Address Register (MAR) contains 12 bits which hold the address for the memory location.
• The Program Counter (PC) also contains 12 bits which hold the address of the next instruction to be read from
memory after the current instruction is executed.
• The Accumulator (AC) register is a general purpose processing register.
• The instruction read from memory is placed in the Instruction register (IR).
• The Temporary Register (TR) is used for holding the temporary data during the processing.
• The Input Register (INPR) holds the input characters given by the user.
• The Output Register (OUTR) holds the output after processing the input data.
Memory - Registers
• Role in Instruction Execution
• Fetching: The CPU fetches instructions from memory into the Instruction Register.
• Decoding: The instruction is decoded, and the relevant data is fetched from
registers if necessary.
• Executing: The CPU performs the operation using the data in registers.
• Storing: The result is either stored back into a register or written to memory.

• Example in Assembly Language
• Consider an assembly language snippet for an x86 processor:
      MOV AX, 5
      ADD AX, 10
• In this example:
• The MOV instruction places the value 5 into the AX register.
• The ADD instruction adds 10 to the value currently in the AX register.
Memory - Cache
• Cache memory
• is a small, fast, and high-speed memory
storage location that stores frequently
accessed data or instructions.
• It acts as a buffer between the main memory
and the CPU, providing quick access to
essential data.
• Key Characteristics:
• Speed: Cache memory is much faster than
main memory.
• Size: Cache memory is smaller than main
memory.
• Location: Cache memory is located close to
the CPU.
• Content: Cache memory stores frequently
accessed data or instructions.
• Types of Cache Memory:
• Level 1 (L1) Cache: Smallest and fastest cache,
built into the CPU.
• Level 2 (L2) Cache: Larger and slower than L1,
located on the CPU or motherboard.
• Level 3 (L3) Cache: Shared among multiple
CPU cores in a multi-core processor.
Memory - Cache
• Cache Memory Hierarchy and Management
• Hierarchy:
• L1 Cache: Smallest and fastest cache, located closest to the CPU
• L2 Cache: Larger and slower than L1
• L3 Cache: Largest and slowest cache, shared among multiple CPU cores
• Cache Organization:
• Cache memory is divided into fixed-size blocks (lines)
• Each block contains a small amount of data copied from main memory
• CPU accesses cache memory in blocks, not bytes
• A block is a fixed-size group of bytes, used to manage and transfer data
efficiently.
• Block size varies depending on the system or device, but common sizes
include:
• 512 bytes (traditional hard drives)
• 1024 bytes (some SSDs)
• 2048 bytes (some flash memory)
• 4096 bytes (some modern SSDs)
• Cache Access:
• CPU checks cache before reading or writing data
• Cache hit: Data is in cache, CPU retrieves it quickly
• Cache miss: Data is not in cache, CPU fetches it from main memory,
causing a delay
• Cache Coherency:
• Ensures cached data matches main memory data
• Cache coherence techniques update other cores' caches when one core
writes to a memory location
• Cache Replacement Policies:
• Decide which block to evict when the cache is full and a new block is
needed
• Common policies: LRU (Least Recently Used), FIFO (First-In-First-Out),
Random Replacement
• Cache Hierarchy and Management:
• Modern processors have L1, L2, and L3 caches with increasing capacity
and latency
• Parallel access is achieved by splitting L1 cache into instruction and
data caches
• Cache management optimizes cache utilization, maximizing hit rates and
minimizing miss penalties
• Prefetching improves cache performance by predicting memory accesses and
loading data into cache
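The LRU replacement policy and the hit/miss distinction can be sketched in a few lines. The cache capacity, addresses, and backing "main memory" below are invented stand-ins:

```python
# A minimal LRU cache: hits move a block to the recently-used end;
# on a miss with a full cache, the least recently used block is evicted.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # address -> data, oldest first

    def read(self, address, fetch_from_memory):
        if address in self.blocks:            # cache hit
            self.blocks.move_to_end(address)  # mark as recently used
            return self.blocks[address], "hit"
        data = fetch_from_memory(address)     # cache miss: slow path
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)   # evict least recently used
        self.blocks[address] = data
        return data, "miss"

main_memory = {addr: addr * 10 for addr in range(8)}
cache = LRUCache(capacity=2)
print(cache.read(1, main_memory.get))  # (10, 'miss')
print(cache.read(1, main_memory.get))  # (10, 'hit')
print(cache.read(2, main_memory.get))  # (20, 'miss')
print(cache.read(3, main_memory.get))  # (30, 'miss') - evicts address 1
print(cache.read(1, main_memory.get))  # (10, 'miss') - 1 was evicted
```

The final miss shows the penalty of eviction: address 1 must be fetched from main memory again once the LRU policy has pushed it out.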
Memory - RAM
• RAM (Random Access Memory)
• Volatile Memory: Loses all data when power is lost or interrupted.
• Function: Used to boot up the computer and temporarily stores
programs and data that the processor is actively using.
• Types of RAM
• SRAM (Static RAM)
• Uses transistors and maintains its state as long as power
is supplied.
• Composed of flip-flops, with each flip-flop storing 1 bit.
• Faster access time due to less need for refreshing.
• DRAM (Dynamic RAM)
• Utilizes capacitors and transistors, storing data as a
charge on capacitors.
• Contains thousands of memory cells.
• Requires periodic refreshing of the charge on capacitors,
making it slower compared to SRAM.
• Capacity:
• Measured in gigabytes (GB) or terabytes (TB).
• Larger RAM capacity allows for more applications
and data to be handled simultaneously.
Memory - ROM
• ROM (Read-Only Memory)
• Non-Volatile Memory: Retains information even when power is
lost or interrupted.
• Function: Stores essential information used to operate the system.
• Read-Only: Data and programs stored in ROM can only be read
and not modified.
• Data Storage: Information is stored in binary format and is often
referred to as permanent memory.
Memory - ROM
• Types of ROM
• MROM (Masked ROM)
• Early ROMs with pre-programmed data or instructions.
• Low-cost and hard-wired, not modifiable after manufacturing.
• PROM (Programmable Read-Only Memory)
• Can be programmed once by the user after purchase.
• User writes the required contents into a blank PROM using a special programmer.
• Content cannot be erased or modified once written.
• EPROM (Erasable Programmable Read-Only Memory)
• Can be erased and reprogrammed by exposing it to ultraviolet (UV) light.
• Erasing requires about 40 minutes of UV exposure.
• EEPROM (Electrically Erasable Programmable Read-Only Memory)
• Can be erased and reprogrammed electrically.
• Supports up to 10,000 erase and reprogram cycles.
• Erasing and programming are quick, typically taking 4-10 milliseconds.
• Allows selective erasing and programming of specific areas.
Memory – Secondary memory
• Non-volatile storage used for long-term data
retention.
• Unlike primary memory (RAM), secondary
memory retains data even when the
computer is turned off.
• Types of Secondary Memory
• Magnetic Tapes:
• Description: A long, narrow strip of plastic film
coated with a magnetic material.
• Data Storage: Bits are recorded as magnetic patches
called records across multiple tracks (typically 7 or 9
bits recorded concurrently).
• Reading/Writing: Data is recorded and read using a
read/write head that moves along the tape.
• Operation: The tape can be started, stopped, moved
forward or backward, and rewound.
• Magnetic Disks:
• Description: Circular metal or plastic plates coated
with a magnetic material, used on both sides.
• Data Storage: Bits are stored on magnetized surfaces
in concentric rings called tracks.
• Organization: Tracks are divided into smaller pieces
called sectors for data management.
Memory – Secondary memory
• Optical Disks
• A laser-based storage medium used for reading and writing data.
• It is cost-effective, durable, and can be easily removed from the
computer by occasional users.
• Types of Optical Disks
• CD-ROM (Compact Disc Read-Only Memory)
• Function: Read-only; data is pre-written and cannot be modified.
• Data Writing: Uses a laser beam to create pits on the disc surface, which are
read by reflecting light from a highly reflective aluminum layer.
• Diameter: 5.25 inches.
• Track Density: 16,000 tracks per inch.
• Capacity: Approximately 600 MB, with each sector storing 2,048 bytes of data.
• Data Transfer Rate: About 4,800 KB/sec.
• Access Time: Around 80 milliseconds.
• WORM (Write Once, Read Many)
• Function: Data can be written once and read multiple times.
• Data Writing: Uses a laser beam to write data; once written, the data cannot
be changed.
• Capacity: Commonly 650 MB or 5.2 GB, depending on disk size (5.25 inch or
3.5 inch).
• Characteristics: Suitable for lasting records, but has higher access time; new
data can be written to another part of the disk.
Memory – Secondary memory
DVDs (Digital Versatile/Video Disc)
• Types
• DVD-R (Writable): A one-time writable disc similar to
WORM technology.
• DVD-RW (Re-Writable): Allows multiple writing and
erasing cycles.
• DVD-ROM (Read-Only Memory): Pre-recorded discs with
higher capacity than CD-ROMs.
• Specifications
• Capacity: Ranges from 4.7 GB to 17 GB for standard DVDs; 3.5
inch disks hold up to 1.3 GB.
• Construction: Features a thick polycarbonate plastic layer as a
base for other layers.
• Usage: Suitable for both reading and writing data, with capacities
significantly higher than CDs.
Memory – Secondary memory
• USB Flash Drives:
• Description: Portable storage devices with flash
memory, commonly used for transferring files and data
between devices.
• Capacity: Ranges from a few gigabytes (GB) to several
terabytes (TB).
• Speed: Varies based on USB standard (e.g., USB 2.0,
USB 3.0, USB 3.1/3.2, USB4). USB 3.0 and above offer
faster data transfer rates compared to USB 2.0.
• USB Standards and Storage Performance:
• USB 2.0: Offers transfer speeds up to 480 Mbps;
generally considered slower for large data transfers.
• USB 3.0: Provides transfer speeds up to 5 Gbps,
improving performance significantly over USB 2.0.
• USB 3.1/3.2: Enhances speeds further, with USB 3.1
offering up to 10 Gbps and USB 3.2 up to 20 Gbps.
• USB4: Delivers speeds up to 40 Gbps and supports
Thunderbolt 3 compatibility, ideal for high-speed data
transfer and large file handling.
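As a rough arithmetic sketch of what these rates mean in practice, the best-case transfer time for a file can be estimated from the peak speeds above (real-world throughput is lower because of protocol overhead and flash speed limits):

```python
def transfer_seconds(file_gb, link_mbps):
    """Theoretical best-case transfer time: total bits / bits per second."""
    bits = file_gb * 8 * 1000**3           # decimal GB -> bits
    return bits / (link_mbps * 1000**2)    # Mbps -> bits per second

# A 10 GB file over each USB generation (peak rates from above):
for name, mbps in [("USB 2.0", 480), ("USB 3.0", 5000),
                   ("USB 3.1", 10000), ("USB4", 40000)]:
    print(f"{name}: {transfer_seconds(10, mbps):.1f} s")
```

At 480 Mbps the 10 GB file takes roughly 167 seconds, versus about 2 seconds at 40 Gbps, which is why the USB generation matters for large transfers.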
Memory – Secondary memory
• Hard Disk Drive (HDD)
• Mechanical Head: Reads and writes data on
magnetic disks.
• Disk Rotation: Disks spin at high speed
(5400-7200 rpm) to access data.
• Seek Time: Head moves to specific location
on disk to access data.
• Data Transfer: Data is transferred between
disk and computer through mechanical
head.
• Solid State Drive (SSD)
• Flash Memory: Stores data in
interconnected flash memory chips.
• Electrical Signals: Data is accessed and
transferred using electrical signals.
• No Moving Parts: No mechanical head or
disk rotation required.
• Instant Access: Data is accessed instantly,
without seek time or mechanical delay.
Memory – Secondary memory
• How Data is Written on HDD
• Step 1: Data Receipt
• Data is sent from the computer to the HDD controller.
• The controller receives the data and prepares it for writing.
• Step 2: Encoding
• Data is encoded with error-correcting codes to ensure data integrity.
• Encoding converts data into a format suitable for magnetic storage.
• Step 3: Sector Selection
• The HDD controller selects the target sector on the disk.
• Each sector is a small area on the disk that can store a fixed amount of data.
• Step 4: Head Positioning
• The mechanical head is positioned over the selected sector.
• The head is moved using a servo motor and tracks the sector's location.
• Step 5: Data Writing
• The encoded data is written onto the sector using the mechanical head.
• The head magnetizes tiny areas on the disk, representing 0s and 1s.
• Step 6: Verification
• The HDD controller verifies the written data for errors.
• If errors occur, the data is rewritten or corrected.
• Step 7: Sector Marking
• The sector is marked as occupied, and its location is recorded.
• The HDD maintains a map of occupied and free sectors.
In Summary
Data is written on an HDD through a process involving encoding, sector selection, head positioning, data writing, verification, and sector marking. The mechanical head plays a crucial role in writing data onto the magnetic disk.
• Physical Process
• Magnetic fields are used to align and flip tiny magnetic domains on the disk.
• These domains represent 0s and 1s, storing the written data.
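The write path above can be condensed into a toy model (the `ToyHDD` class and its one-byte checksum are illustrative inventions, not a real drive's firmware, but the steps map one-to-one onto encoding, sector selection, writing, verification, and sector marking):

```python
class ToyHDD:
    """Minimal model of the HDD write path described above."""
    def __init__(self, sectors=8):
        self.disk = {}                       # sector number -> stored bytes
        self.free = set(range(sectors))      # map of free sectors (Step 7)

    def encode(self, data):
        # Stand-in for error-correcting encoding (Step 2): append a checksum.
        return data + bytes([sum(data) % 256])

    def write(self, data):
        sector = min(self.free)              # Step 3: sector selection
        encoded = self.encode(data)          # Step 2: encoding
        self.disk[sector] = encoded          # Steps 4-5: position head, write
        assert self.disk[sector] == encoded  # Step 6: verification
        self.free.remove(sector)             # Step 7: mark sector occupied
        return sector

hdd = ToyHDD()
print(hdd.write(b"hello"))   # first write lands in sector 0
```

A real controller also handles bad-sector remapping and caching, which this sketch omits.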
Memory – Secondary memory
• A Solid State Drive (SSD) is a non-volatile storage device
that stores data on interconnected flash memory chips.
• Key Components
• Flash Memory Chips: Store data in arrays of floating-gate transistors,
typically NAND (Not-AND) flash memory.
• Controller: Manages data storage, retrieval, and error
correction.
• Interface: Connects SSD to computer (e.g., SATA, PCIe,
NVMe).
• Working Process
• Write Operation:
• Data is sent from computer to SSD controller.
• Controller writes data to flash memory chips.
• Data is stored in a series of electrical charges.
• Read Operation:
• Controller receives read request from computer.
• Controller reads data from flash memory chips.
• Data is transmitted back to computer.
• Memory Management:
• Controller manages memory allocation and deallocation.
• Wear leveling ensures even usage of memory cells.
• Error Correction:
• Controller detects and corrects data errors.
In Summary
SSDs work by storing data in flash memory chips, managed by a controller that handles write, read, and memory management operations. Their fast access times, low latency, high reliability, and low power consumption make them a popular choice for modern computing applications.
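Wear leveling, mentioned above, can be illustrated with a simplified policy: always write to the least-worn cell so no single flash cell wears out early. The `ToySSD` class is an invented sketch; real controllers use far more elaborate schemes.

```python
class ToySSD:
    """Sketch of wear leveling: always write to the least-worn cell."""
    def __init__(self, cells=4):
        self.wear = [0] * cells       # write count per flash cell
        self.data = [None] * cells

    def write(self, value):
        cell = self.wear.index(min(self.wear))  # pick the least-used cell
        self.data[cell] = value
        self.wear[cell] += 1
        return cell

ssd = ToySSD()
for i in range(8):
    ssd.write(i)
print(ssd.wear)   # [2, 2, 2, 2] -- 8 writes spread evenly over 4 cells
```

Without this policy, repeatedly overwriting one logical address would keep hammering the same physical cell, which flash memory can only endure a limited number of times.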
How memory works?
• When you open MS-Word or any other program, here's what happens with memory:
• Step 1: Program Loading
• Executable File: The OS loads the MS-Word executable file (.exe) from the hard drive into memory.
• Memory Allocation: The OS allocates a chunk of memory to MS-Word, assigning a unique address space.
• Step 2: Program Initialization
• Initialization Code: MS-Word's initialization code is executed, setting up the program's environment.
• Memory Mapping: The program's memory is mapped, allocating space for:
• Code (program instructions)
• Data (program data, settings, and resources)
• Stack (function calls, local variables)
• Heap (dynamic memory allocation)
• Step 3: Memory Usage
• Memory Consumption: MS-Word starts consuming memory, loading:
• Program modules (DLLs, libraries)
• User interface components (menus, toolbars, windows)
• Document data (text, images, formatting)
• Memory Management: The OS and MS-Word work together to manage memory, allocating and deallocating space as needed.
• Memory Usage Patterns
• Peak Memory Usage: Memory usage peaks when the program is actively being used (e.g., typing, editing).
• Idle Memory Usage: Memory usage decreases when the program is idle (e.g., no user input).
• Memory Deallocation
• Program Closure: When you close MS-Word, the OS:
• Deallocates memory assigned to the program
• Releases system resources (file handles, sockets)
• Memory Cleanup: The OS performs memory cleanup, removing any remaining memory allocations.
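The allocate-use-release cycle described above can be observed in miniature with Python's standard `tracemalloc` module — a small list of strings standing in for MS-Word's document data:

```python
import tracemalloc

tracemalloc.start()

# "Program loading": allocate document data on the heap.
document = ["line of text"] * 100_000
current, peak = tracemalloc.get_traced_memory()
print(f"in use: {current} bytes, peak: {peak} bytes")

# "Program closure": releasing the object lets the allocator reclaim memory.
del document
current_after, _ = tracemalloc.get_traced_memory()
print(f"after cleanup: {current_after} bytes")

tracemalloc.stop()
```

The drop in the second reading mirrors the OS deallocating a closed program's memory, though the OS works at the level of whole address spaces rather than individual objects.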
How memory works?
• Memory Components Used in the MS-Word Example
• In the example of opening MS-Word, the following memory components are used:
• RAM (Random Access Memory): MS-Word's executable file, program modules, user interface components, and
document data are loaded into RAM.
• Cache Memory: The CPU's cache memory is used to store frequently accessed data and instructions from MS-Word.
• Stack Memory: The stack is used to store function calls, local variables, and parameters during program execution.
• Heap Memory: The heap is used for dynamic memory allocation, storing data structures and objects created by MS-
Word.
• Virtual Memory: If the system runs low on physical RAM, virtual memory (hard drive space) is used to store less
frequently used data and program components.
• Paging File (Page File): The paging file is used to store pages of memory that are swapped out of RAM to free up
space.
• Registers: The CPU's registers are used to store small amounts of data temporarily during calculations and operations.
• Additionally, the following memory types are also involved:
• Main Memory: The main memory (RAM) is where most of the data and program instructions are stored.
• Secondary Storage: The hard drive is used to store the MS-Word executable file, program modules, and document
data when not in use.
• Video Memory: The graphics card's video memory is used to store graphics and images displayed by MS-Word.
• These memory components work together to enable MS-Word to run efficiently and effectively.
Overview of PC architecture
• Computer architecture refers to the design and organization of a
computer's internal components, including the relationships between
hardware and software components.
• It defines how data is processed, stored, and transmitted within a
computer system.
• Computer architecture is like a blueprint or a floor plan for a
computer.
• It shows how all the components fit together, how data flows
between them, and how the computer executes instructions.
• This design determines the computer's performance, power
consumption, and functionality.
Overview of PC architecture
• Purpose of Computer Architecture
• Every function a system performs, whether it's browsing the web or printing
documents, relies on the manipulation and processing of numerical data.
• Computer architecture is essentially a mathematical framework designed to gather,
transfer, and interpret these numbers.
• Data as Numbers
• Computers represent all data using numbers.
• While developers working with machine learning code or analyzing complex
algorithms might focus on higher-level concepts, it's important to remember that
everything ultimately boils down to numerical values.
• Data Manipulation
• Computers handle information through numerical operations.
• For instance, to display an image on a screen, a matrix of numbers is sent to the
video memory, with each number corresponding to a specific pixel color.
• Complex Functions
• Computer architecture encompasses both software and hardware components.
• The processor, a hardware element responsible for executing computer programs, is
a central component of any computer system.
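The "data as numbers" idea can be made concrete: a tiny grayscale image is just a grid of numbers, and displaying it amounts to mapping each number to a brightness (the threshold values here are arbitrary choices for a text rendering):

```python
# A 2x3 grayscale "image": each number is one pixel's brightness (0-255).
image = [
    [  0, 128, 255],
    [255, 128,   0],
]

# "Rendering": map each number to a character by brightness.
for row in image:
    print("".join("#" if px > 200 else "." if px > 60 else " " for px in row))
```

Video memory holds exactly this kind of numerical array (with three numbers per pixel for red, green, and blue); the graphics hardware turns those numbers into colors on screen.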
Overview of PC architecture
• Booting Up
• When a computer is turned on, the processor runs initial programs that configure the system and
initialize various hardware components.
• This software, known as firmware, is permanently stored in the computer’s memory.
• Temporary Storage
• Memory is a crucial part of computer architecture, often comprising various types within a single
system.
• It temporarily holds programs and data that are actively processed by the CPU.
• Permanent Storage
• Computers also feature components for storing and transferring data to and from external
devices.
• This includes input methods like keyboards, output displays like monitors, and data transfer
mechanisms such as disk drives.
• User-Facing Functionality
• Software controls how a computer operates and interacts with users.
• Computer architecture includes multiple software layers, each typically interacting only with the
layers directly above or below it.
• The operation of a computer architecture starts with the bootup process. During this
phase, the firmware is loaded and plays a crucial role in initializing the entire system.
Once loaded, the firmware ensures that the various components of the computer
architecture function smoothly, enabling the user to access, process, and manage
different types of data effectively.
Overview of PC architecture
• Input unit and associated peripherals
• The input unit serves as the bridge between external data
sources and the computer system.
• It connects the external environment to the computer by
receiving data from input devices, converting it into
machine language, and then integrating it into the
system.
• Common input devices, such as keyboards and mice,
are frequently used and come with device drivers that
enable seamless interaction with the rest of the
computer architecture.
• Output unit and associated peripherals
• The output unit presents the results of computer
processes to the user.
• Common output data includes audio, graphics, text, and video.
Output devices in a computer architecture include
displays, printers, speakers, and headphones.
• For example, to display a high-resolution image, the
system reads a numerical array from memory that
represents the image's pixel data.
• The computer architecture processes these numbers to
decode the image data and then sends the resulting
information to the graphics card. The graphics card then
processes this data and outputs it to the monitor, allowing the
user to view the image in high resolution.
Overview of PC architecture
• Storage unit/memory
• The storage unit comprises the components used to
store data. It is generally divided into two categories,
primary storage and secondary storage, each with the
various types studied earlier.
• Central Processing Unit(CPU)
• The central processing unit (CPU) consists of registers,
an arithmetic logic unit (ALU), and control circuits that
interpret and execute assembly language instructions.
• The CPU coordinates with all other components of the
computer architecture to process data and generate the
required output.
• Bootloader
• Firmware includes the bootloader, a specialized program
run by the processor that retrieves the operating system
from storage (such as a disk, non-volatile memory, or
network interface) and loads it into memory for
execution.
• The bootloader is a critical component found in desktop
computers, workstations, and embedded devices, and it
is essential for all computer architectures.
Overview of PC architecture
• Operating system
• The operating system sits above the firmware in the
computer hierarchy, overseeing the system's
functionality.
• It manages memory allocation and controls devices like
the keyboard, mouse, display, and disk drives.
• Additionally, the OS provides an interface for users to
launch applications and access data stored on the drive.
• Typically, the operating system offers a suite of tools and
services that allow programs to interact with the screen,
disk drives, and other components of the computer
architecture.
• Buses
• A bus is a physical array of signal lines designed for a
specific purpose; a common example is the Universal
Serial Bus (USB).
• Buses facilitate the transfer of electrical signals between
different components of a computer, allowing data to
move from one system component to another.
• The size of a bus refers to the number of signal lines
used for data transfer.
• For instance, an 8-bit bus transfers 8 bits of data in parallel at
a time.
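The effect of bus width can be sketched with the standard `struct` module: a 32-bit value crossing an 8-bit bus must be split into four sequential one-byte transfers, whereas a 32-bit bus moves it in a single parallel transfer:

```python
import struct

value = 0x12345678                    # a 32-bit value

# Over an 8-bit bus: four single-byte transfers, one after another.
chunks = struct.pack(">I", value)     # big-endian byte order
print([hex(b) for b in chunks])       # ['0x12', '0x34', '0x56', '0x78']

# The receiver reassembles the bytes back into the original value.
reassembled = struct.unpack(">I", chunks)[0]
assert reassembled == value
```

This is why, all else being equal, a wider bus moves the same data in fewer transfer cycles.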
Overview of PC architecture
• Interrupts
• Interrupts, sometimes called traps or
exceptions in certain processors, are
mechanisms that temporarily divert the
processor from its current task to handle an
event.
• This event could be an issue with a peripheral
device or a signal indicating that an I/O device
has finished its previous operation and is ready
for the next one.
• For example, each time you press a key or click
a mouse button, the system generates an
interrupt to process these actions.
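Operating-system signals are a software analogy for hardware interrupts: an event diverts control to a registered handler, which runs and then returns control to wherever the program was. This sketch registers a handler and simulates the event in software (a real keyboard interrupt would come from hardware, not from `raise_signal`):

```python
import signal

events = []

def handler(signum, frame):
    # "Interrupt service routine": runs when the event arrives, then
    # control returns to the interrupted code.
    events.append("key pressed")

signal.signal(signal.SIGINT, handler)   # register the handler with the OS
signal.raise_signal(signal.SIGINT)      # simulate the event (e.g. Ctrl+C)
print(events)                           # ['key pressed']
```

The main program never polls for the event; it is interrupted only when the event actually occurs, which is exactly the efficiency argument for hardware interrupts.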
Types of Computer architecture
• Instruction Set Architecture (ISA)
• Instruction Set Architecture (ISA) refers to the design of the instructions that a computer's
processor can execute directly. It defines the syntax, semantics, and format of the
instructions, as well as the addressing modes, data types, and registers used.
• Key components of ISA:
• Instruction format: The structure of an instruction, including opcode, operands, and
addressing modes.
• Opcode: The operation code that specifies the operation to be performed.
• Operands: The data or addresses used by the instruction.
• Addressing modes: The ways in which operands are addressed (e.g., immediate, register,
memory).
• Registers: A small amount of on-chip memory that stores data temporarily.
• Data types: The types of data that can be processed (e.g., integer, floating-point, character).
• Types of ISA:
• CISC (Complex Instruction Set Computing): Many complex instructions, each performing
multiple operations.
• RISC (Reduced Instruction Set Computing): Fewer, simpler instructions, each performing a
single operation.
• VLIW (Very Long Instruction Word): Instructions that specify multiple operations to be
executed in parallel.
• Importance of ISA:
• Performance: ISA affects the speed and efficiency of instruction execution.
• Power consumption: ISA influences power consumption and heat generation.
• Software compatibility: ISA determines the compatibility of software with different
processors.
• Hardware design: ISA impacts the design of the processor and other hardware components.
• In summary, ISA is the foundation of a computer's processor, defining how instructions
are executed and data is processed.
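The instruction-format idea above can be shown with a hypothetical 16-bit ISA (the 4-bit opcode / two 6-bit register fields and the opcode table are invented for illustration, not any real processor's encoding):

```python
# Hypothetical 16-bit format: [4-bit opcode][6-bit dst reg][6-bit src reg]
OPCODES = {0x1: "ADD", 0x2: "SUB", 0x3: "LOAD"}

def decode(instruction):
    opcode = (instruction >> 12) & 0xF    # top 4 bits: the operation
    dst    = (instruction >> 6) & 0x3F    # next 6 bits: destination register
    src    = instruction & 0x3F           # low 6 bits: source register
    return OPCODES[opcode], dst, src

print(decode(0x1081))   # ('ADD', 2, 1), i.e. ADD r2, r1
```

A real processor's decode stage does this same field extraction in hardware, which is why the ISA's choice of instruction format directly shapes the hardware design.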
Types of Computer architecture
• Microarchitecture
• Microarchitecture refers to the internal design and organization of a computer
processor's central processing unit (CPU). It defines how the processor executes
instructions, manages data, and controls the flow of information.
• Key components of Microarchitecture:
• Execution Units: Perform arithmetic, logical, and other operations.
• Registers: Small amount of on-chip memory that stores data temporarily.
• Data Paths: The flow of data between execution units, registers, and memory.
• Control Logic: Manages the flow of instructions and data.
• Pipelining: Breaks down instructions into stages for efficient execution.
• Cache Memory: Small, fast memory that stores frequently accessed data.
• Types of Microarchitecture:
• Von Neumann Architecture: Most common type, uses a single bus for data and
instructions.
• Harvard Architecture: Separate buses for data and instructions.
• Superscalar Architecture: Executes multiple instructions simultaneously.
• Pipelined Architecture: Breaks down instructions into stages for efficient
execution.
• Importance of Microarchitecture:
• Performance: Microarchitecture affects the speed and efficiency of instruction
execution.
• Power consumption: Microarchitecture influences power consumption and heat
generation.
• Area and cost: Microarchitecture impacts the size and cost of the processor.
• In summary, microarchitecture is the internal design of a processor that
determines how it executes instructions, manages data, and controls
the flow of information.
Types of Computer architecture
• Client-Server Architecture
• Client-server architecture is a distributed computing model
that separates the application logic into two distinct
components:
• Client:
• Requests services from the server
• Displays results to the user
• Typically a web browser, mobile app, or desktop application
• Server:
• Provides services to clients
• Manages data and performs computations
• Typically a web server, application server, or database server
• Key characteristics:
• Separation of concerns: Client and server have distinct
responsibilities
• Request-response model: Client requests services, server
responds with results
• Scalability: Servers can be scaled to handle increased client
requests
• Flexibility: Clients and servers can be developed and
updated independently
Types of Computer architecture
• Types of client-server architecture:
• One-tier architecture: Client and server are combined in a
single application
• Two-tier architecture: Client and server are separate, with a
direct connection
• Three-tier architecture: Client, application server, and database
server are separate
• N-tier architecture: Multiple layers of servers provide services
to clients
• Advantages:
• Improved scalability
• Enhanced flexibility
• Better maintainability
• Easier updates and upgrades
• Disadvantages:
• Increased complexity
• Higher costs
• Dependence on network connectivity
• In summary, client-server architecture is a distributed
computing model that separates application logic into
client and server components, enabling scalability,
flexibility, and maintainability.
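The request-response model can be demonstrated end to end with Python's standard `socket` module — a minimal server that upper-cases whatever the client sends (here both run in one process for illustration; in practice they are separate machines on a network):

```python
import socket
import threading

def serve(listener):
    conn, _ = listener.accept()       # wait for a client to connect
    request = conn.recv(1024)         # client requests a service
    conn.sendall(request.upper())     # server computes and responds
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))         # port 0: the OS picks a free port
server.listen(1)
threading.Thread(target=serve, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(("127.0.0.1", server.getsockname()[1]))
client.sendall(b"hello server")       # the request
response = client.recv(1024)          # the response
client.close()
print(response)                       # b'HELLO SERVER'
```

Because client and server only share this request-response contract, either side can be replaced or scaled independently — the separation-of-concerns point made above.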
Types of Computer architecture
• SIMD Architecture
• SIMD (Single Instruction, Multiple Data) architecture is a
parallel processing technique where a single instruction is
executed simultaneously on multiple data elements.
• Key characteristics:
• Single instruction: One instruction is broadcast to all processing
units.
• Multiple data: Each processing unit operates on a different data
element.
• Parallel processing: Multiple data elements are processed
simultaneously.
• Components:
• Control Unit: Broadcasts instructions to processing units.
• Processing Units: Execute instructions on multiple data
elements.
• Data Memory: Stores data elements for processing.
• Types of SIMD architectures:
• Vector processors: Designed for scientific simulations and data
processing.
• Graphics Processing Units (GPUs): Optimized for graphics
rendering and compute tasks.
• Digital Signal Processors (DSPs): Specialized for signal
processing and data analysis.
Types of Computer architecture
• Advantages:
• High throughput: Process large amounts of data in parallel.
• Improved performance: Accelerate compute-intensive tasks.
• Energy efficiency: Reduce power consumption per
operation.
• Applications:
• Scientific simulations
• Data analytics
• Machine learning
• Computer graphics
• Signal processing
• Examples:
• Intel SSE/AVX
• ARM NEON
• NVIDIA CUDA
• OpenCL
• In summary, SIMD architecture enables parallel
processing of multiple data elements with a single
instruction, accelerating compute-intensive tasks and
improving performance.
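The lockstep idea can be modeled in plain Python (real SIMD is executed by hardware vector units — e.g., via AVX intrinsics or array libraries like NumPy — where all lanes are computed in the same clock cycle; `zip` here only models the pairing of lanes):

```python
def simd_add(lanes_a, lanes_b):
    """One 'instruction' (add) applied across every lane of two vectors."""
    return [a + b for a, b in zip(lanes_a, lanes_b)]

a = [1, 2, 3, 4]           # a 4-lane vector register
b = [10, 20, 30, 40]
print(simd_add(a, b))      # [11, 22, 33, 44]
```

The key property is that one instruction produced four results; a scalar processor would need four separate add instructions for the same work.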
Types of Computer architecture
• Multicore Architecture
• Multicore architecture is a design where multiple
processing cores are integrated onto a single processor
die (chip), sharing resources and improving overall
processing capability.
• Key characteristics:
• Multiple processing cores: Execute instructions
independently.
• Shared resources: Cores share memory, I/O, and other
resources.
• Improved multithreading: Enhanced support for
concurrent execution of threads.
• Types of multicore architectures:
• Symmetric Multicore: Cores are identical and share
resources equally.
• Asymmetric Multicore: Cores have different
capabilities and resources.
• Homogeneous Multicore: Cores are identical and have
the same architecture.
• Heterogeneous Multicore: Cores have different
architectures and capabilities.
Types of Computer architecture
• Advantages:
• Improved performance: Increased processing power and throughput.
• Enhanced multithreading: Better support for concurrent execution of
threads.
• Power efficiency: Reduced power consumption per core.
• Scalability: Easier to add more cores to increase processing power.
• Challenges:
• Synchronization: Coordinating data access and execution between
cores.
• Communication: Exchanging data between cores and memory.
• Cache coherence: Maintaining consistent data across cores.
• Programming complexity: Writing software to effectively utilize
multiple cores.
• Examples:
• Intel Core i7
• AMD Ryzen 9
• ARM Cortex-A72
• IBM Power9
• In summary, multicore architecture integrates multiple
processing cores onto a single chip, improving processing
capability, power efficiency, and scalability, while presenting
challenges in synchronization, communication, and programming
complexity.
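The idea of independent tasks spread across cores can be sketched with Python's standard thread pool (a simplification: CPython threads share one interpreter lock, so CPU-bound code would use `multiprocessing` to occupy multiple physical cores; the scheduling pattern is the same):

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    return n * n          # an independent task a core could run

# Four workers standing in for four cores; the OS schedules them
# across the physical cores that are available.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, range(8)))

print(results)            # [0, 1, 4, 9, 16, 25, 36, 49]
```

Note that `pool.map` returns results in input order even though tasks may finish out of order — a small taste of the synchronization challenges listed above.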
Examples of Computer architecture
• Von Neumann Architecture
• Von Neumann architecture is a computer design model
that uses a single bus to transfer data between the
central processing unit (CPU), memory, and
input/output (I/O) devices.
• Key components:
• Central Processing Unit (CPU): Executes instructions
and performs calculations.
• Memory: Stores data and program instructions.
• Input/Output (I/O) Devices: Interact with the user and
external devices.
• Bus: Transfers data between CPU, memory, and I/O
devices.
• Characteristics:
• Fetch-Decode-Execute Cycle: CPU fetches instructions,
decodes them, and executes them.
• Stored-Program Concept: Program instructions are
stored in memory.
• Sequential Processing: Instructions are executed one at
a time.
Examples of Computer architecture
• Advantages:
• Simple and efficient: Easy to implement and understand.
• Flexible: Can be used for a wide range of applications.
• Disadvantages:
• Von Neumann Bottleneck: Data transfer between CPU and
memory can be slow.
• Limited parallel processing: Instructions are executed
sequentially.
• Examples:
• Most modern computers: Use a modified version of the Von
Neumann architecture.
• Embedded systems: Often use a simplified Von Neumann
architecture.
• In summary, Von Neumann architecture is a
fundamental computer design model that uses a
single bus to transfer data between CPU, memory, and
I/O devices, with a fetch-decode-execute cycle and
stored-program concept, but has limitations in parallel
processing and data transfer speed.
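The fetch-decode-execute cycle and the stored-program concept can be sketched with a toy accumulator machine (the LOAD/ADD/HALT opcodes are invented for illustration):

```python
# Toy stored-program machine: the program lives in memory, and the CPU
# repeats the fetch-decode-execute cycle one instruction at a time.
memory = [
    ("LOAD", 5),    # acc = 5
    ("ADD",  3),    # acc = acc + 3
    ("HALT", 0),
]

pc, acc = 0, 0                        # program counter, accumulator
while True:
    op, arg = memory[pc]              # fetch the instruction at pc
    pc += 1
    if op == "LOAD":                  # decode + execute
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "HALT":
        break

print(acc)                            # 8
```

The sequential `while` loop is also the Von Neumann bottleneck in miniature: every instruction must pass through the same fetch path before anything executes.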
Examples of Computer architecture
• Harvard Architecture
• Harvard architecture is a computer design model that
uses separate buses for data and instructions, unlike
the Von Neumann architecture which uses a single bus.
• Key components:
• Central Processing Unit (CPU): Executes instructions
and performs calculations.
• Instruction Memory: Stores program instructions.
• Data Memory: Stores data.
• Data Bus: Transfers data between CPU and data
memory.
• Instruction Bus: Transfers instructions between CPU
and instruction memory.
• Characteristics:
• Separate buses: Data and instructions have separate
buses.
• Improved performance: Faster data transfer and
instruction execution.
• Increased complexity: More complex design than Von
Neumann architecture.
Examples of Computer architecture
• Advantages:
• Faster execution: Instructions and data can be accessed simultaneously.
• Improved parallel processing: Instructions and data can be processed in
parallel.
• Disadvantages:
• Increased cost: More complex design and separate buses increase cost.
• Difficulty in programming: More complex architecture can make
programming challenging.
• Examples:
• Digital Signal Processors (DSPs): Often use Harvard architecture for fast
data processing.
• Embedded systems: Harvard architecture is used in some embedded
systems for improved performance.
• Variations:
• Modified Harvard Architecture: Combined bus with separate
instruction and data caches.
• Super Harvard Architecture: Improved Harvard architecture with
additional features.
• In summary, Harvard architecture uses separate buses for data
and instructions, improving performance and parallel processing,
but increasing complexity and cost, making it suitable for
applications requiring fast data processing and parallel
execution.
Examples of Computer architecture
• CISC (Complex Instruction Set Computing) Architecture
• Instructions can perform multiple operations
• Examples: x86, x64 processors
• Advantages:
• Improved performance
• Reduced number of instructions
• Disadvantages:
• Increased complexity
• Reduced flexibility
• RISC (Reduced Instruction Set Computing) Architecture
• Instructions perform simple operations
• Examples: ARM processors used in embedded systems, PowerPC processors
• Advantages:
• Improved performance
• Increased flexibility
• Reduced power consumption
• Disadvantages:
• Increased number of instructions
• Reduced code density
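The CISC/RISC contrast can be sketched in Python: the same multiplication done by one memory-to-memory "complex instruction" versus a RISC-style load/compute/store sequence (the instruction names, registers r0/r1, and memory cells are hypothetical):

```python
mem = {"x": 6, "y": 7}

# CISC-style: a single instruction reads memory, multiplies, writes back.
def MUL_MEM(m, dst, src):
    m[dst] = m[dst] * m[src]

cisc = dict(mem)
MUL_MEM(cisc, "x", "y")          # one complex instruction

# RISC-style: only loads/stores touch memory; math is register-to-register.
risc = dict(mem)
r0 = risc["x"]                   # LOAD  r0, x
r1 = risc["y"]                   # LOAD  r1, y
r0 = r0 * r1                     # MUL   r0, r1
risc["x"] = r0                   # STORE x, r0

print(cisc["x"], risc["x"])      # 42 42
```

Both styles compute the same result; the trade-off is one powerful instruction (denser code, more complex hardware) against four simple ones (more instructions, simpler and easier-to-pipeline hardware).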
Examples of Computer architecture
• Parallel Architecture
• Multiple processing units (cores) work simultaneously
• Examples:
• Multi-core processors like the Intel Core i7
• Clusters
• Grids
• Advantages:
• Improved performance
• Increased throughput
• Disadvantages:
• Increased complexity
• Reduced scalability
• Distributed Architecture
• Multiple computers connected via a network
• Each computer can act as a client or server
• Examples:
• Client-server architecture
• Peer-to-peer architecture
• Advantages:
• Improved scalability
• Increased flexibility
• Reduced costs
• Disadvantages:
• Increased complexity
• Reduced performance
Examples of Computer architecture
• Embedded Architecture
• Designed for specific applications, such as:
• Consumer electronics
• Industrial control systems
• Medical devices
• Examples:
• Microcontrollers
• System-on-chip (SoC) devices like smartwatches or set-top boxes
• Advantages:
• Improved performance
• Increased efficiency
• Reduced costs
• Disadvantages:
• Reduced flexibility
• Increased complexity
• Real-Time Architecture
• Designed for applications requiring predictable timing
• Examples:
• Medical equipment like heart rate monitors or ventilators
• Robotics
• Aerospace
• Advantages:
• Improved performance
• Increased reliability
• Reduced latency
• Disadvantages:
• Increased complexity
• Reduced flexibility
Examples of Computer architecture
• Virtual Architecture
• Uses virtualization to abstract hardware resources
• Examples:
• Virtual machines
• Cloud computing like Amazon Web Services or Google Cloud Platform
• Advantages:
• Improved flexibility
• Increased scalability
• Reduced costs
• Disadvantages:
• Reduced performance
• Increased complexity
• Quantum Architecture
• Uses quantum-mechanical phenomena for computation
• Still in the experimental phase
• Examples:
• Quantum computing
• Quantum simulation
• Advantages:
• Improved performance
• Increased efficiency
• Reduced energy consumption
• Disadvantages:
• Increased complexity
• Reduced control