
Q1: What is linear pipeline? Explain with block diagram.

A linear pipeline is a cascade of processing stages in which each stage performs one part of the overall task and data flows through the stages in a fixed sequential order. Successive tasks overlap in execution, with different tasks occupying different stages at the same time, which increases throughput and efficiency.

Block Diagram:

Input -> [Stage 1] -> [Stage 2] -> [Stage 3] -> ... -> [Stage n] -> Output

Q2: Classification of pipeline

a) Arithmetic Pipeline with Example

An arithmetic pipeline performs complex arithmetic operations by dividing them into simpler steps. Each stage in the pipeline performs part of the arithmetic operation, allowing different parts of successive calculations to be processed in parallel.

Example:

To compute the expression (a + b) * (c - d), an arithmetic pipeline might be structured as follows:

Input -> [Stage 1: Compute a + b] -> [Stage 2: Compute c - d] -> [Stage 3: Multiply (a + b) by (c - d)] -> Output
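
A rough Python sketch of this staged evaluation (the stage functions and the tuple threading of operands are invented for the illustration; real hardware stages would operate on latched registers):

# Each function models one pipeline stage for computing (a + b) * (c - d).
def stage1(op):                # compute a + b
    a, b, c, d = op
    return (a + b, c, d)

def stage2(op):                # compute c - d
    s, c, d = op
    return (s, c - d)

def stage3(op):                # multiply the two partial results
    s, t = op
    return s * t

result = (3, 4, 10, 6)         # a, b, c, d
for stage in (stage1, stage2, stage3):
    result = stage(result)
print(result)                  # (3 + 4) * (10 - 6) = 28

The loop evaluates one operand set end to end; the benefit of pipelining appears when a new operand set enters stage 1 each cycle while earlier sets are still in stages 2 and 3.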

b) Dynamic (Non-linear) Pipelines

Dynamic pipelines can adjust their structure during execution to accommodate different types of tasks or instructions. Unlike linear pipelines, which have a fixed sequence of stages, dynamic pipelines may contain feedforward and feedback connections, so a task can follow different paths through the stages and even revisit a stage. This makes them more flexible and efficient for a variety of workloads, at the cost of requiring careful scheduling.

Q3: Performance analysis of Pipeline architecture

Clock Cycle: The duration required to complete one stage of the pipeline. The overall clock cycle time is determined by the
slowest stage in the pipeline.

Throughput: The rate at which tasks are completed. For an n-stage pipeline with clock cycle time T, the ideal throughput is one task per clock cycle, i.e. 1/T tasks per unit time.

Speedup: The ratio of the time taken to execute a task without pipelining to the time taken with pipelining. Ideal speedup is
equal to the number of stages n, but practical speedup is usually less due to pipeline hazards.

Efficiency: A measure of how effectively the pipeline is utilized. It is given by the ratio of actual speedup to the ideal
speedup.
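
A worked example of these formulas, using the common textbook model in which a non-pipelined unit takes n cycles per task (the specific values of n, m, and T are arbitrary):

n = 4        # pipeline stages
m = 100      # tasks (e.g. instructions)
T = 10e-9    # clock cycle time in seconds (10 ns)

time_unpipelined = m * n * T         # each task alone takes n cycles
time_pipelined   = (n + m - 1) * T   # n - 1 cycles to fill, then one task per cycle

speedup    = time_unpipelined / time_pipelined   # = m*n / (n + m - 1), about 3.88 here
efficiency = speedup / n                         # fraction of the ideal speedup n
throughput = m / time_pipelined                  # completed tasks per second

print(speedup, efficiency, throughput)

As m grows, the speedup approaches the ideal value n and the efficiency approaches 1.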

Q4: What is pipeline hazard?

A pipeline hazard occurs when the next instruction in the pipeline cannot execute in its designated clock cycle. Hazards can
cause delays and reduce the efficiency of the pipeline. There are three main types of hazards:

Data Hazards: Occur when instructions that exhibit data dependencies affect the pipeline.

Structural Hazards: Occur when hardware resources required for an instruction are not available.

Control Hazards: Occur due to branch instructions and changes in the instruction flow.

Q5: What is pipeline stall?

A pipeline stall is a delay in the pipeline flow caused by a pipeline hazard. During a stall, one or more stages sit idle while the hazard is resolved, interrupting the smooth flow of instructions and reducing the overall performance and throughput of the pipeline.

Q6: Explain Data Hazard, Structural Hazard and Control Hazard. Also explain data dependency at RAW, WAR, WAW
hazard.

Data Hazard: Occurs when an instruction depends on the result of a previous instruction that has not yet completed.

RAW (Read After Write): An instruction needs to read a register after a previous instruction writes to it.

Example: Instruction 1: R1 = R2 + R3, Instruction 2: R4 = R1 + R5.

WAR (Write After Read): An instruction needs to write to a register that a previous instruction reads from.

Example: Instruction 1: R2 = R4 + R5, Instruction 2: R4 = R1 + R3.

WAW (Write After Write): Two instructions write to the same register.

Example: Instruction 1: R1 = R2 + R3, Instruction 2: R1 = R4 + R5.

Structural Hazard: Occurs when hardware resources are insufficient to support all the instructions simultaneously. For
instance, if two instructions need to access the same memory unit at the same time.

Control Hazard: Occurs due to branch instructions that change the flow of execution. When the pipeline encounters a
branch instruction, it may need to wait until the branch decision is made, causing a stall.
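
A small sketch of how the three data-dependency classes can be detected between a pair of instructions (the read/write register-set encoding is invented for this illustration, not a real ISA):

def classify_hazards(i1, i2):
    # i1 executes first; i2 follows it into the pipeline.
    hazards = []
    if i1["writes"] & i2["reads"]:
        hazards.append("RAW")    # i2 reads what i1 writes
    if i1["reads"] & i2["writes"]:
        hazards.append("WAR")    # i2 writes what i1 reads
    if i1["writes"] & i2["writes"]:
        hazards.append("WAW")    # both write the same register
    return hazards

# Instruction 1: R1 = R2 + R3;  Instruction 2: R4 = R1 + R5
i1 = {"reads": {"R2", "R3"}, "writes": {"R1"}}
i2 = {"reads": {"R1", "R5"}, "writes": {"R4"}}
print(classify_hazards(i1, i2))  # ['RAW']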

Q7: What is Reservation Table? Compare with space-time diagram.

A Reservation Table is used to represent the usage of pipeline stages by various operations over time. It helps in identifying
conflicts and scheduling instructions to avoid resource conflicts.

A Space-Time Diagram visualizes the execution of instructions in a pipeline over time and stages, showing how successive instructions occupy different stages cycle by cycle; it helps in understanding the pipeline's performance and identifying bottlenecks. The two are closely related: a reservation table is essentially the space-time pattern of a single operation, while a space-time diagram typically shows many overlapped operations.

Q8: Calculate forbidden latencies, collision vector. Design State Transition diagram and find out MAL and Greedy cycle.

Forbidden Latencies: Latencies (numbers of clock cycles between the initiation of two successive tasks) that would cause both tasks to use the same pipeline stage in the same cycle. They are read off the reservation table as the distances between marks within each row.

Collision Vector: A binary vector C = (Cm ... C2 C1) in which bit Ci is 1 if latency i is forbidden and 0 if it is permissible, where m is the largest forbidden latency. It summarizes when a new task may safely be issued.

State Transition Diagram: Each state is a collision vector; from a state, each permissible latency i leads to a new state formed by shifting the current vector right by i bits and OR-ing it with the initial collision vector. Cycles in this diagram correspond to valid repeating issue schedules.

MAL (Minimum Average Latency): The smallest average latency over all cycles in the state transition diagram; it is the best sustainable issue rate the pipeline allows.

Greedy Cycle: The cycle obtained by always choosing the smallest permissible latency from the current state. It is simple to construct and often, though not always, achieves the MAL.
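
A sketch of deriving the forbidden latencies and collision vector from a reservation table (the three-stage table below is a made-up example; rows are stages, columns are clock cycles, 1 means the stage is busy):

table = [
    [1, 0, 0, 0, 1],   # stage S1 busy at cycles 0 and 4
    [0, 1, 1, 0, 0],   # stage S2 busy at cycles 1 and 2
    [0, 0, 0, 1, 0],   # stage S3 busy at cycle 3
]

# Latency d is forbidden if some stage is busy at cycles t and t + d:
# a task issued d cycles after another would collide on that stage.
forbidden = set()
for row in table:
    busy = [t for t, bit in enumerate(row) if bit]
    for i in range(len(busy)):
        for j in range(i + 1, len(busy)):
            forbidden.add(busy[j] - busy[i])

m = max(forbidden)
collision_vector = "".join("1" if i in forbidden else "0"
                           for i in range(m, 0, -1))
print(sorted(forbidden))   # [1, 4]
print(collision_vector)    # '1001'  (Cm ... C1)

For this table the permissible latencies are the zero bits, 2 and 3; the state diagram and greedy cycle are then built from them by the shift-and-OR rule described above.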

Q9: Explain VLIW Architecture and Superscalar Architecture.

VLIW (Very Long Instruction Word) Architecture: Uses long instruction words that encode multiple operations to be
executed simultaneously. Each instruction word contains several operations that can be executed in parallel, reducing the
need for complex instruction scheduling by the hardware.

Superscalar Architecture: Capable of executing multiple instructions per clock cycle by dispatching them to multiple functional units. This architecture relies on dynamic scheduling and out-of-order execution to achieve parallelism, allowing for higher performance on varied workloads.

Q10: What is Cache memory?

Cache memory is a small, high-speed memory located close to the CPU that stores frequently accessed data and instructions. By keeping this data near the processor, the cache significantly reduces the average time needed to access data compared with main memory, improving overall system performance.

Q11: Explain direct mapping, associative mapping and set associative mapping.

Direct Mapping: Each block of main memory maps to exactly one cache line. This is simple and fast but can lead to frequent
conflicts if multiple blocks map to the same line.

Example: Block i maps to line (i % number_of_lines).

Associative Mapping: Any block can be loaded into any line of the cache, reducing conflicts. This is more flexible but
requires more complex hardware to search all cache lines.

Example: A block can be placed in any cache line.

Set-Associative Mapping: A compromise between direct and associative mapping. The cache is divided into sets, and each
block maps to any line within a set.

Example: Block i maps to set (i % number_of_sets) and can be placed in any line within that set.
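
A sketch of where a given memory block may be placed under each scheme (the cache geometry of 8 lines organized as 4 sets of 2 ways, and the assumption that a set's lines are contiguous, are invented for the example):

NUM_LINES = 8
NUM_SETS  = 4                            # 2-way set associative

def direct_mapped_line(block):
    return block % NUM_LINES             # exactly one candidate line

def associative_lines(block):
    return list(range(NUM_LINES))        # any line is a candidate

def set_associative_lines(block):
    ways = NUM_LINES // NUM_SETS
    s = block % NUM_SETS                 # the set is fixed ...
    return [s * ways + w for w in range(ways)]   # ... the way is free

block = 100
print(direct_mapped_line(block))         # 4
print(set_associative_lines(block))      # set 0 -> lines [0, 1]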

Q12: Difference between RISC and CISC

RISC (Reduced Instruction Set Computer):

Uses a small set of simple instructions.

Emphasizes speed and efficiency with a uniform instruction length.

Instructions typically execute in one clock cycle.

CISC (Complex Instruction Set Computer):

Uses a large set of complex instructions.

Emphasizes instruction capability and flexibility.

Instructions can take multiple clock cycles to execute.

Q13: Modes of Data Transfer, Interrupt, and Program I/O

Programmed I/O: The CPU itself executes the instructions that move each data item between the peripheral and memory, typically polling the device's status; it is simple but keeps the CPU busy for the entire transfer.

Interrupt-Driven I/O: The device raises an interrupt when it is ready, so the CPU can do other work and service the transfer only when signaled; this avoids polling but still moves each item through the CPU.

Direct Memory Access (DMA): Allows peripherals to transfer data directly to or from memory without involving the CPU, which improves efficiency and frees the CPU for other tasks.

DMA Controller: A hardware unit that manages DMA operations, controlling the data transfer between memory and
peripherals.

DMA Transfer: Involves the DMA controller setting up the transfer, initiating it, and signaling completion, allowing for
efficient and high-speed data transfers.

Q14: What is virtual memory?

Virtual memory is a memory management technique that gives an application the impression of having contiguous working
memory, while in reality, it may be fragmented and spread across different physical memory locations. This allows for
efficient use of memory and enables running larger applications than the available physical memory.

Q15: What do you mean by address space & memory space?

Address Space: The set of all addresses a program can generate, defined by the architecture; in a virtual memory system this is the set of virtual addresses.

Memory Space: The set of physical memory locations actually available in the system where data can be stored.

Q16: Address mapping, Associative mapping, Concept of Page Fault (FIFO+LRU)

Address mapping is the process of converting virtual addresses (used by programs) into physical addresses (used by the hardware). This translation is performed by the Memory Management Unit (MMU).

Example: With 4 KB pages, virtual address 0x1234 (page 0x1, offset 0x234) maps to physical address 0xA234 if page 0x1 resides in frame 0xA; the page number is translated while the offset is preserved.
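
A minimal sketch of this translation, assuming 4 KB pages and a one-entry page table (both invented for the example):

PAGE_SIZE = 0x1000                  # 4 KB pages

page_table = {0x1: 0xA}             # virtual page 0x1 -> physical frame 0xA

def translate(vaddr):
    vpage, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[vpage]       # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))       # 0xa234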

Associative mapping is used in cache memory systems where any block of main memory can be placed in any cache line.
This method is more flexible than direct mapping but requires a search mechanism to find the block in the cache.

Example: Block 100 can be placed in any of the 4 cache lines.

Page Fault: Occurs when a program accesses a page that is not currently in physical memory; the operating system must fetch it from secondary storage, and if memory is full, an existing page must be replaced.

Page Replacement Algorithms:

• FIFO (First-In-First-Out): Replaces the oldest page.

• LRU (Least Recently Used): Replaces the page least recently used.

Example: With all frames full after loading A, B, C, D, FIFO evicts A (the first page brought in), while LRU evicts whichever of the four pages was accessed least recently; the sketch below counts the resulting faults on a longer reference string.
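
A sketch comparing FIFO and LRU page replacement by counting faults on one reference string (the string and the three-frame memory size are arbitrary):

from collections import OrderedDict

def fifo_faults(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)                 # evict the oldest resident page
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)             # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)    # evict the least recently used
            mem[p] = True
    return faults

refs = ["A", "B", "C", "D", "A", "B", "E", "A", "B", "C", "D", "E"]
print(fifo_faults(refs, 3))   # 9
print(lru_faults(refs, 3))    # 10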
