
CSC410

Computer Systems
Performance & Evaluation

Olufemi AKINYEDE, PhD


Lecture 1
• Introduction to Performance Evaluation:
o Definition of performance in the context of computer systems.
o Importance of performance evaluation.
o Metrics for measuring performance (response time, throughput,
efficiency, etc.)
Definition of Performance in the Context
of Computer Systems:

• A computer is a programmable electronic device that processes data and performs tasks according to a set of instructions called a program.
• A system is a set of principles or procedures according to which something is done; an organized scheme or method.
Definition of Performance in the Context
of Computer Systems …

• Performance in computer systems refers to a system's efficiency in executing tasks, meeting or exceeding specified requirements. It includes speed, responsiveness, resource utilization, and overall effectiveness in delivering desired outcomes.
• Performance is crucial for handling workloads,
responding to user requests, and achieving
optimal functionality.
Definition of Evaluation in the Context of
Computer Systems …

• Evaluation in computer systems refers to the process of assessing and analyzing the performance, efficiency, reliability, and overall effectiveness of a computer system or its components.
• This evaluation can be carried out for various
purposes, including system design, optimization,
troubleshooting, and decision-making.
Definition of Performance Evaluation in
the Context of Computer Systems …

• Performance Evaluation: This involves measuring and analyzing the speed, throughput, and response time of a computer system. It helps identify bottlenecks and areas for improvement in order to enhance overall system performance.
Definition of Performance Evaluation in
the Context of Computer Systems …
• Computer systems performance and evaluation assess
efficiency, responsiveness, and effectiveness by
measuring processing speed, throughput,
memory utilization, and other metrics. It aims
to identify bottlenecks, optimize resource usage,
and ensure the system meets performance goals.
Utilizing methodologies like benchmarking,
testing, and monitoring enhances system
performance.
Importance of Performance Evaluation

1. User Satisfaction: Performance evaluation ensures that computer systems respond promptly to user input, providing a seamless and satisfactory user experience.
2. Efficiency and Resource Utilization: It helps
optimize the use of hardware resources,
ensuring that computing power, memory,
and storage are utilized efficiently to achieve
maximum output.
Importance of Performance Evaluation ...

3. Cost-Effectiveness: Efficient performance leads to cost savings by minimizing the need for additional hardware or infrastructure to meet performance requirements.
4. Reliability and Stability: Performance
evaluation helps identify and address
bottlenecks, improving the system's reliability
and stability under various conditions.
Importance of Performance Evaluation ...

5. Scalability: Understanding performance characteristics is crucial for scalability, enabling systems to handle increased workloads without significant degradation in performance.
6. Capacity Planning: Performance evaluation informs capacity planning, allowing organizations to anticipate future needs and scale their infrastructure accordingly.
Importance of Performance Evaluation ...

7. Optimizing Software: Performance evaluation guides the optimization of software applications, ensuring that code is efficient and resource-friendly.
8. Competitive Advantage: In competitive
environments, superior performance provides
a competitive advantage, attracting users and
customers.
Metrics for Measuring Performance
1. Response Time: The time it takes for a system
to respond to a user's request or input.
Importance: Short response times contribute to a
more responsive and user-friendly system.
2. Throughput: The rate at which a system
processes and delivers a set of tasks or
transactions over a specific period.
Importance: High throughput indicates the
system's ability to handle a large number of
tasks efficiently.
Metrics for Measuring Performance:
3. Efficiency: The ratio of useful work output to
the total resources input.
Importance: Efficient systems achieve optimal
performance with minimal resource
consumption.
4. Utilization: The extent to which hardware
resources (CPU, memory, disk space) are being
used.
Importance: Monitoring utilization helps prevent
resource bottlenecks (a situation that stops a process
or activity from progressing) and ensures effective
resource allocation.
Metrics for Measuring Performance:

5. Latency: The time delay between the initiation and completion of a task.
Importance: Low latency is crucial for real-time applications, reducing delays in data processing.
6. Scalability Metrics: Metrics indicating how well
a system can handle increased workloads or
user demands.
Importance: Scalability metrics assess the
system's ability to grow without compromising
performance.
Metrics for Measuring Performance:

7. Reliability Metrics: Metrics measuring the system's stability and consistency in delivering desired outcomes.
Importance: Reliable systems provide
consistent performance, contributing to user
trust and satisfaction.
Lecture 2
Worked Examples - Let's go over these performance assessment calculation questions and answers:

Question 1: Throughput and Execution Time
A system processes 500 tasks in 10 seconds. Calculate the throughput of the system.

Answer:
Throughput = Total Work/Execution Time
Throughput = 500 tasks/10 seconds
Throughput = 50 tasks per second
Worked Examples …

Question 2: Response Time
If the service time for a task is 5 seconds, and the wait time is 2 seconds, calculate the response time for the task.

Answer:
Response Time = Service Time + Wait Time
Response Time = 5 seconds + 2 seconds

Response Time = 7 seconds
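The two formulas above translate directly into code. Here is a minimal Python sketch (the function names are my own, chosen for clarity):

```python
def throughput(total_work, execution_time):
    """Throughput = Total Work / Execution Time (tasks per unit time)."""
    return total_work / execution_time

def response_time(service_time, wait_time):
    """Response Time = Service Time + Wait Time."""
    return service_time + wait_time

print(throughput(500, 10))     # 50.0 tasks per second (Question 1)
print(response_time(5, 2))     # 7 seconds (Question 2)
```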


Worked Examples …

Question 3: Speedup
A program takes 100 seconds to execute on a single processor and 60 seconds on a dual-core processor. Calculate the speedup achieved by using the dual-core processor.
Answer:
Speedup = Execution Time (without
optimization)/Execution Time (with optimization)
Speedup = 100 seconds / 60 seconds = 1.67
Worked Examples …

Question 4: Efficiency
If the speedup achieved by using a parallel system with 4 processors is 3, calculate the efficiency of the parallel system.

Answer:
Efficiency = Speedup / Number of Processors
Efficiency = 3 / 4 = 0.75 or 75%
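The speedup and efficiency formulas can be sketched in Python as follows (function names are my own):

```python
def speedup(time_without, time_with):
    """Speedup = Execution Time (without optimization) / Execution Time (with optimization)."""
    return time_without / time_with

def efficiency(speedup_value, num_processors):
    """Efficiency = Speedup / Number of Processors."""
    return speedup_value / num_processors

print(round(speedup(100, 60), 2))   # 1.67 (Question 3)
print(efficiency(3, 4))             # 0.75, i.e. 75% (Question 4)
```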
Worked Examples …

Question 5: Utilization
A server is busy for 80 seconds out of a total time of 100 seconds. Calculate the utilization of the server.

Answer:
Utilization = Busy Time / Total Time
Utilization = 80 seconds/100 seconds

Utilization = 0.8 or 80%


Worked Examples …

Question 6: Response Time Law
In an interactive system, if the arrival rate is 5 tasks per second and the throughput is 8 tasks per second, calculate the average response time using the response time law.

Answer:
Response Time = Number of Users /
(Throughput - Arrival Rate)
Response Time = 5 / (8 - 5) = 1.67 seconds
Worked Examples …
Question 7: Relative Performance
System A processes 200 tasks in 50 seconds, and System B
processes the same tasks in 40 seconds. Calculate the
relative performance of System A compared to System B.
Answer:
Relative Performance of A vs. B = Execution Time of B / Execution Time of A
Relative Performance = 40 seconds / 50 seconds = 0.8
System A delivers 0.8 times the performance of System B (equivalently, System B is 1.25 times faster than System A).
Worked Examples …
Question 8: Cost Performance
System X has a throughput of 100 tasks per second and
costs N5000, while System Y has a throughput of 80 tasks
per second and costs N4000. Calculate the cost performance
of each system and determine which system is more cost-
effective.
Answer:
Cost Performance = Throughput / Cost
Cost Performance for System X = 100/5000 = 0.02 tasks
per naira
Cost Performance for System Y = 80/4000 = 0.02 tasks per
naira
Both systems have the same cost performance.
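The cost-performance comparison is easy to verify with a small Python sketch (the function name is mine):

```python
def cost_performance(throughput_value, cost):
    """Cost Performance = Throughput / Cost (tasks per unit of currency)."""
    return throughput_value / cost

# Question 8: both systems work out to the same cost performance.
x = cost_performance(100, 5000)   # System X
y = cost_performance(80, 4000)    # System Y
print(x, y)                       # 0.02 0.02 tasks per naira
```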
Lecture 3
• Performance Measurement:
o Tools and techniques for measuring performance.
o Profiling and instrumentation.
o Sampling and statistical techniques
Performance Evaluation

Definition of Performance in Computer Systems:
• Performance in computer systems refers to the ability of a system to execute tasks and deliver results efficiently. It involves how well a system meets user expectations and requirements while utilizing its resources effectively.
Importance of Performance Evaluation

1. User Satisfaction: Ensuring that computer systems perform optimally directly impacts user satisfaction. Users expect systems to respond promptly and deliver results in a timely manner.
2. Resource Utilization: Efficient performance
ensures that system resources (CPU,
memory, disk, etc.) are utilized effectively,
avoiding bottlenecks and maximizing
throughput.
Importance of Performance Evaluation ..

3. Cost-Effectiveness: Performance evaluation helps identify areas for improvement, leading to more cost-effective use of hardware and infrastructure.
4. Scalability: Understanding performance
under varying workloads helps design
systems that can scale appropriately as
demands increase.
Importance of Performance Evaluation ..

5. Reliability and Availability: Performance evaluation contributes to the overall reliability and availability of systems by addressing potential performance issues before they impact users.
Metrics for Measuring Performance:

1. Response Time: The time it takes for a system to respond to a user request. It includes processing time, waiting time, and any other delays.
2. Throughput: The number of tasks or
transactions a system can handle in a given
time period. It measures the system's overall
processing capacity.
Metrics for Measuring Performance …

3. Efficiency: How well resources are utilized to achieve system goals. It often involves a trade-off between performance and resource consumption.
4. Utilization: The percentage of time a
resource is actively used. High utilization
doesn't always equate to high performance,
as it may lead to contention and delays.
Metrics for Measuring Performance …

5. Scalability: The ability of a system to maintain or improve performance as workload or demand increases.
6. Reliability: The ability of a system to perform consistently and predictably over time.
7. Availability: The proportion of time a system is operational and available to users.
Performance Measurement

• Performance measurement is a crucial aspect of assessing and improving the effectiveness of various processes, systems, and individuals within organizations.
• It serves as a tool for evaluating the success
of strategic initiatives, monitoring progress
towards goals, and making informed
decisions to enhance overall performance.
Performance Measurement
• This lecture will delve into the key concepts, methods,
and challenges associated with performance
measurement.
Key Concepts:
1. Throughput: The rate at which a system can
process and complete tasks or transactions.
2. Response Time: The time it takes for a system
to respond to a user request or input.
3. Scalability: The ability of a system to handle
increased workload or resource demands.
Performance Measurement -Key Concepts:

4. Reliability: The consistency and stability of a system's performance over time.
5. Resource Utilization: Efficient use of CPU,
memory, storage, and other resources.
6. Benchmarking: Comparing the performance
of a system against standard benchmarks or
industry best practices.
Performance Measurement -Key Concepts:

7. Monitoring: Continuous observation of system metrics and performance in real time.
8. Profiling: Analyzing the behavior of an
application or system under specific
conditions.
Performance Measurement -Methods:

1. Load Testing: Simulating user activity to
evaluate system behavior under various
levels of load.
2. Stress Testing: Assessing system performance
beyond normal operating conditions to
identify failure points.
3. Benchmark Suites: Using standardized sets of
tests to compare and evaluate system
performance.
Performance Measurement -Methods:

4. Profiling Tools: Employing software tools to analyze resource usage, identify bottlenecks, and optimize code.
5. Real User Monitoring (RUM): Collecting data
on user interactions and experiences in a
live environment.
6. Performance Modeling: Creating mathematical
models to predict system behavior under
different conditions.
Performance Measurement -Methods:

7. Capacity Planning: Estimating future resource needs based on performance trends and growth projections.
Performance Measurement –Challenges:

1. Complexity: Modern systems are intricate, involving numerous components and dependencies, making performance measurement challenging.
2. Dynamic Environments: Systems operate in
dynamic conditions where workloads and
user interactions can vary significantly.
Performance Measurement –Challenges:
3. Interactions with External Systems:
Dependencies on external services or APIs
can impact performance, and measuring
these interactions can be complex.
4. Security Concerns: Performance measurement
tools and methods need to ensure data
privacy and security.
5. Subjectivity: User experience and satisfaction,
crucial aspects of performance, can be
subjective and challenging to quantify.
Performance Measurement –Challenges:

6. Costs: Implementing and maintaining performance measurement tools and practices can have associated costs.
7. Continuous Adaptation: Performance
measurement is an ongoing process,
requiring continuous adaptation to changes
in software, hardware, and user
requirements.
Lecture 4
Exercises
• Question 1: A company wants to measure its overall
performance using key performance indicators (KPIs). They
have identified three main KPIs: Revenue Growth, Customer
Satisfaction, and Operational Efficiency. The weights assigned to
these KPIs are 40%, 30%, and 30% respectively. If the
company's performance scores for each KPI are as follows:
Revenue Growth = 8 out of 10, Customer Satisfaction = 9 out of 10,
and Operational Efficiency = 7 out of 10, calculate the overall
performance score.
Answer:
o Overall Performance=(0.4×8)+(0.3×9)+(0.3×7)
o Overall Performance=3.2+2.7+2.1
o Overall Performance=8
o Therefore, the overall performance score is 8 out of 10.
Exercises …
• Question 2: A company's return on investment (ROI)
is calculated by dividing the net profit by the total
investment and expressing it as a percentage. If a
company has a net profit of $500,000 and the total
investment is $2,000,000, calculate the ROI.
• Answer:
ROI=(Net Profit/Total Investment)×100
ROI=(500,000/2,000,000)×100
ROI=0.25×100
ROI=25%
Therefore, the return on investment is 25%.
Exercises …
• Question 3: A website's performance is measured by
its response time, which is the time taken to load a
page. If a website has an average response time of 2
seconds and a goal response time of 1.5 seconds,
calculate the website's performance efficiency.
• Answer:
• Performance Efficiency
=(Goal Response Time/Actual Response Time)×100
• Performance Efficiency=(1.5/2)×100
• Performance Efficiency=0.75×100
• Performance Efficiency=75%
• Therefore, the website's performance efficiency is 75%.
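The three calculations above can be sketched in Python as follows (a minimal illustration; all function names are my own):

```python
def overall_performance(scores, weights):
    """Weighted KPI score: sum of score x weight for each KPI."""
    return sum(s * w for s, w in zip(scores, weights))

def roi(net_profit, total_investment):
    """ROI = (Net Profit / Total Investment) x 100."""
    return net_profit / total_investment * 100

def performance_efficiency(goal_time, actual_time):
    """Performance Efficiency = (Goal Response Time / Actual Response Time) x 100."""
    return goal_time / actual_time * 100

print(round(overall_performance([8, 9, 7], [0.4, 0.3, 0.3]), 2))  # 8.0  (Question 1)
print(roi(500_000, 2_000_000))                                    # 25.0 (Question 2)
print(performance_efficiency(1.5, 2))                             # 75.0 (Question 3)
```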
Exercises …
• Question 4: A web server receives 1000 requests in 10 minutes. If the total response time for all requests is 500 seconds, what is the average response time per request?
• Answer: The average response time (R) per request is the total response time divided by the number of requests:
R = 500 seconds / 1000 requests = 0.5 seconds per request
Exercises …
• Question 5: A database system processes 200 transactions in 2 minutes. Calculate the transaction throughput in transactions per second.
• Answer: Throughput (T) is the number of transactions processed per unit of time:
T = 200 transactions / 120 seconds ≈ 1.67 transactions per second
Exercises …
• Question 6: A parallel processing system completes a task in 50 seconds, while the same task takes 100 seconds on a single processor. Calculate the speedup achieved by using the parallel processing system.
• Answer: Speedup (S) is calculated using the formula:
S = Single-Processor Time / Parallel Time = 100 / 50 = 2
Exercises …
• Question 7: In a parallel system, four processors achieve a speedup of 3.5 for a specific task. Calculate the efficiency of the parallel system.
• Answer: Efficiency (E) is calculated using the formula:
E = Speedup / Number of Processors = 3.5 / 4 = 0.875 or 87.5%
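Questions 4 to 7 each reduce to a one-line formula; here they are sketched in Python (function names are my own):

```python
def avg_response_time(total_response_time, num_requests):
    """Average response time per request."""
    return total_response_time / num_requests

def throughput(transactions, seconds):
    """Transactions processed per second."""
    return transactions / seconds

def speedup(serial_time, parallel_time):
    """Speedup of a parallel run relative to a serial run."""
    return serial_time / parallel_time

def parallel_efficiency(speedup_value, processors):
    """Efficiency = Speedup / Number of Processors."""
    return speedup_value / processors

print(avg_response_time(500, 1000))      # 0.5 s  (Question 4)
print(round(throughput(200, 120), 2))    # 1.67   (Question 5)
print(speedup(100, 50))                  # 2.0    (Question 6)
print(parallel_efficiency(3.5, 4))       # 0.875  (Question 7)
```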
Lecture 5
• Benchmarking:

o Definition and purpose of benchmarking.
o Types of benchmarks: synthetic, application, kernel.
o Challenges and limitations of benchmarking.
Benchmarking

• Definition: Benchmarking is a strategic management process that involves comparing an organization's performance metrics, practices, and processes with those of industry leaders, competitors, or best-in-class organizations.
Benchmarking ..

• The primary purpose of benchmarking is to identify areas for improvement, adopt best practices, and enhance overall performance and competitiveness. By analyzing the performance of others, organizations can set realistic performance targets, improve efficiency, and drive continuous improvement.
Types of Benchmarks

1. Synthetic Benchmarks: They simulate a specific type of workload or activity to measure the performance of a system or application under controlled conditions.
Purpose: They help assess the capabilities of hardware, software, or systems by creating standardized, reproducible tests that mimic real-world scenarios.
Types of Benchmarks …

2. Application Benchmarks: They measure the performance of a system or device using specific applications or software.
Purpose: These benchmarks are tailored to represent the typical workload of specific applications, providing insights into how well a system performs under conditions relevant to real-world usage.
Types of Benchmarks …

3. Kernel Benchmarks: They focus on evaluating the performance of the core components (kernel) of an operating system.
Purpose: These benchmarks assess the efficiency and speed of the operating system's core functions, helping identify areas for optimization and improvement.
Challenges & Limitations of Benchmarking

1. Applicability and Relevance:
Challenge: Benchmarking may not always be directly applicable to every organization, as each may have unique processes, goals, and contexts.
Limitation: The relevance of benchmarks must be carefully evaluated to ensure that the practices being compared align with the organization's specific needs and objectives.
Challenges & Limitations of Benchmarking

2. Lack of Standardization:
Challenge: Benchmarks may lack
standardized metrics or methodologies,
making it difficult to establish consistent
comparisons across different industries or
sectors.
Limitation: Without standardization,
organizations may struggle to interpret
benchmarking results accurately.
Challenges & Limitations of Benchmarking

3. Dynamic Business Environments:
Challenge: Business environments are dynamic, and benchmarks may become quickly outdated due to technological advancements, changes in market conditions, or shifts in industry practices.
Limitation: Benchmarks may not always reflect the latest trends or innovations, limiting their relevance in rapidly evolving industries.
Challenges & Limitations of Benchmarking

4. Resistance to Change:
Challenge: Employees and stakeholders
may resist changes suggested by
benchmarking initiatives.
Limitation: The success of benchmarking
relies on the organization's ability to
implement recommended improvements,
and resistance can hinder effective
implementation.
Challenges & Limitations of Benchmarking

5. Overemphasis on Competition:
Challenge: Focusing too much on competitive benchmarks may lead to a lack of innovation and a narrow perspective.
Limitation: Organizations should balance competition-focused benchmarks with a broader view that considers collaboration and innovation.
Lecture 6
Exercises
Question 1:
A manufacturing company is benchmarking its production efficiency
against industry standards. The industry benchmark for a particular
production metric is set at 90 units per hour. The company produces
80 units per hour. Calculate the company's efficiency relative to the
industry benchmark.
Answer:
Efficiency relative to benchmark = (Actual Output / Benchmark Output) × 100
Efficiency = (80 / 90) × 100 ≈ 88.9%
The company operates at about 88.9% of the industry benchmark.
Exercises
Question 2:
A retail company is benchmarking its financial performance against industry standards. The industry average return on investment (ROI) is 15%. The company's return is $500,000 on an investment of $3,000,000. Calculate the company's ROI and assess its performance relative to the industry benchmark.
Answer:
ROI = (500,000 / 3,000,000) × 100 ≈ 16.7%
The company's ROI of about 16.7% exceeds the 15% industry benchmark.
Exercises

Question 3:
A customer service department is benchmarking its performance in resolving customer issues. The industry benchmark for resolving customer complaints is 24 hours. The department resolves 80% of complaints within this timeframe. Calculate the department's performance against the benchmark.
Answer:
The department meets the 24-hour benchmark for 80% of complaints, falling 20 percentage points short of full compliance.
Exercises
Question 4:
An e-commerce website is benchmarking its page load
time against industry standards. The industry
benchmark for page load time is 3 seconds. The
website's average load time is 2.5 seconds. Calculate
the website's performance relative to the benchmark.
Answer:
Performance = (Benchmark Load Time / Actual Load Time) × 100 = (3 / 2.5) × 100 = 120%
The website's performance is 120%, indicating that it loads pages 20% faster than the industry benchmark of 3 seconds.
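The benchmark-ratio arithmetic used in Questions 1, 2, and 4 can be sketched in Python (a minimal illustration; the function name is mine):

```python
def pct_of_benchmark(actual, benchmark):
    """Performance relative to a benchmark, expressed as a percentage."""
    return actual / benchmark * 100

# Question 1: production efficiency vs. a 90 units/hour benchmark.
print(round(pct_of_benchmark(80, 90), 1))      # 88.9

# Question 2: ROI vs. a 15% industry average.
company_roi = 500_000 / 3_000_000 * 100
print(round(company_roi, 1))                   # 16.7

# Question 4: for load time, lower is better, so invert the ratio.
print(round(pct_of_benchmark(3, 2.5), 1))      # 120.0
```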
Exercises
Question 5:
• Two computer systems, A and B, are benchmarked
using a performance metric. System A scores 1200,
and System B scores 1500. Calculate the relative
performance of System B compared to System A.
• Answer:
Relative Performance = Score of B / Score of A = 1500 / 1200 = 1.25
System B delivers 1.25 times the performance of System A (a 25% higher score).
Exercises
Question 6:
• A benchmark is designed to represent the
performance of a graphics card. The baseline score
is 1000. If a new graphics card scores 1200 on this
benchmark, calculate the benchmark index for the
new card.
• Answer:
Benchmark Index = New Score / Baseline Score = 1200 / 1000 = 1.2
Exercises
Question 7:
• A software benchmark is executed in two different
versions. Version 1 completes in 30 seconds, and
Version 2 completes in 20 seconds. Calculate the
speedup and performance improvement of Version
2 compared to Version 1.
• Answer:
Speedup = Version 1 Time / Version 2 Time = 30 / 20 = 1.5
Performance Improvement = (Speedup - 1) × 100 = 50%
Exercises
Question 8:
• Two servers are benchmarked for energy efficiency.
Server X completes a task using 1500 Joules, and
Server Y completes the same task using 1200 Joules.
Calculate the energy efficiency index for Server Y
compared to Server X.
• Answer:
Energy Efficiency Index = Energy of X / Energy of Y = 1500 / 1200 = 1.25
Server Y is 25% more energy-efficient than Server X.
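Questions 5 through 8 are all ratios of two benchmark results. A small Python sketch (the function name is mine):

```python
def relative_performance(reference, measured):
    """Ratio of two benchmark results; how to orient the ratio depends on
    whether higher (scores) or lower (times, energy) is better."""
    return reference / measured

print(relative_performance(1500, 1200))   # 1.25 (Question 5: score of B / score of A)
print(relative_performance(1200, 1000))   # 1.2  (Question 6: benchmark index)

# Question 7: for times, lower is better, so speedup = old time / new time.
s = relative_performance(30, 20)
print(s, (s - 1) * 100)                   # 1.5 and 50.0 (% improvement)

# Question 8: energy index = baseline energy / improved energy.
print(relative_performance(1500, 1200))   # 1.25
```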
Lecture 7
• Performance Models:

o Analytical modeling.
o Queueing models.
o Simulation models.
o Empirical models.
Performance Models

• Performance Models: They are essential tools in the field of computer systems and software engineering for predicting, analyzing, and optimizing the performance of systems.
Performance Models …
Here are four types of performance models:
1. Analytical Modeling: They are mathematical
representations that use equations to describe
the behavior and performance characteristics
of a system.
• Characteristics:
a. Analytical models often involve solving
equations based on system parameters,
such as input load, processing time, and resource
capacities.
Here are four types of performance models: ..
Analytical Modeling
b. Common analytical modeling techniques include
queuing theory, probability theory, and mathematical
optimization.

Applications:
Useful for predicting performance under
different conditions and analyzing the impact
of changes to system parameters.
Examples include closed-form formulas and
mathematical expressions used to estimate
response time or throughput.
Here are four types of performance models: ..
Queueing Models
2. Queueing Models: They focus on modeling
the queuing behavior of entities within a
system, considering waiting times, service rates,
and resource utilization.
Characteristics:
a. Utilizes concepts from queuing theory to
represent how tasks or requests wait in
queues before being processed by resources.
b. Parameters include arrival rates, service rates,
and the number of servers.
Here are four types of performance models: ..
Queueing Models

Applications:
a. Well-suited for scenarios where resources are shared, and tasks must wait in a queue for service.
b. Used to predict queue lengths, wait times, and resource utilization.
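As a concrete illustration, a minimal M/M/1 queueing model (single server, Poisson arrivals at rate λ, exponential service at rate μ) fits in a few lines of Python. The formulas are the standard steady-state M/M/1 results; the function name is my own:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 metrics; requires arrival_rate < service_rate."""
    rho = arrival_rate / service_rate          # server utilization
    l_q = rho**2 / (1 - rho)                   # mean number waiting in queue
    w = 1 / (service_rate - arrival_rate)      # mean time in system (wait + service)
    return rho, l_q, w

rho, l_q, w = mm1_metrics(2, 3)   # e.g. 2 arrivals/s, 3 served/s
print(round(rho, 3), round(l_q, 2), round(w, 3))   # 0.667 1.33 1.0
```

Changing the arrival or service rate and re-running immediately shows how queue length grows as utilization approaches 1, which is exactly the kind of prediction queueing models are used for.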
Here are four types of performance models: ..
Simulation Models
3. Simulation Models: They involve creating a
computer-based model of a system to
observe its behavior over time through the
simulation of events and interactions.
Characteristics:
a. Simulates the dynamic behavior of the
system by modeling individual components
and their interactions.
b. Involves running experiments using the
simulation to observe how the system
responds under different conditions.
Here are four types of performance models: ..
Simulation Models
Applications:
a. Useful for complex systems where
analytical solutions are difficult to derive.
e.g., urban traffic systems involve
numerous interacting components,
including vehicles, traffic lights,
pedestrians, and road infrastructure.
b. Provides insights into the dynamic
behavior of the system and allows for the
experimentation with different scenarios.
Here are four types of performance models:

4. Empirical Models: They are based on observed data and statistical analysis of actual system performance.
Characteristics:
a. Data-driven models that use real-world
measurements to create relationships
between input and output variables.
b. Statistical techniques, regression analysis,
and machine learning may be employed
to develop these models.
Here are four types of performance models: ..
Empirical Models

• Applications:
a. Ideal for situations where direct
measurements of system
performance are available.
b. Allows for the identification of
patterns and trends based on
historical data.
Lecture 8
Exercises
• Question 1: Consider a computer system with a single processor. The processor can execute 100 instructions per second. If the average execution time for a program is 10 milliseconds, what is the throughput of the system?
• Answer: Throughput is calculated as the number of tasks completed per unit of time. In this case:
T = 1 / Average Execution Time = 1 / 0.010 seconds = 100 tasks per second
So, the throughput of the system is 100 tasks per second.
Exercises
• Question 2:
• System A completes a benchmark in 40 seconds, and
System B completes the same benchmark in 30
seconds. Calculate the normalized performance score
for System A compared to System B
• Answer:
Normalized Performance of A = Time of B / Time of A = 30 / 40 = 0.75
System A achieves 0.75 of System B's performance.
Exercises
• Question 3:
• System X processes 200 transactions in 120 seconds,
and after optimization, it processes the same
transactions in 80 seconds. Calculate the percentage
improvement in performance.
• Answer:
Improvement = ((Old Time - New Time) / Old Time) × 100 = ((120 - 80) / 120) × 100 ≈ 33.3%
Exercises
• Question 4:
• A task takes 60 seconds on a single processor and 15
seconds on a quad-core processor. Calculate the
speedup achieved by using the quad-core processor.
• Answer:
Speedup = Single-Processor Time / Quad-Core Time = 60 / 15 = 4
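Questions 2 through 4 can be sketched in Python as follows (function names are mine):

```python
def normalized_performance(time_reference, time_system):
    """Performance of a system relative to a reference, using execution times."""
    return time_reference / time_system

def pct_improvement(old_time, new_time):
    """Percentage reduction in execution time."""
    return (old_time - new_time) / old_time * 100

print(normalized_performance(30, 40))        # 0.75 (Question 2)
print(round(pct_improvement(120, 80), 1))    # 33.3 (Question 3)
print(normalized_performance(60, 15))        # 4.0  (Question 4: speedup)
```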
Exercises
• Question 5: System P consumes 1200 kWh of energy to
complete a benchmark, and System Q consumes 1000
kWh for the same benchmark. Calculate the energy
efficiency in kWh per unit of benchmark for each
system.
• Answer:
Energy Efficiency = Energy Consumed / Benchmarks Completed
So, the energy efficiency for System P is 1200 kWh per benchmark, and for System Q, it is 1000 kWh per benchmark; lower is better, so System Q is the more energy-efficient system.
Exercises
• Question 6: A software development team estimates that doubling the number of developers working on a project will halve the time required to complete the project. If the original completion time was 12 months, calculate the estimated time with the doubled number of developers.
• Answer: The empirical model assumes completion time is inversely proportional to the number of developers (N): T_new = T_old × (N_old / N_new).
T_new = 12 × (1/2) = 6
So, the estimated time with the doubled number of developers is 6 months.
Exercises
• Question 7: A simulation model of a manufacturing process predicts that a change in the production line can reduce the average processing time from 8 minutes to 5 minutes per item. Calculate the percentage reduction in processing time.
• Answer: The percentage reduction in processing time can be calculated using the formula:
Reduction = ((8 - 5) / 8) × 100 = 37.5%
So, the percentage reduction in processing time is 37.5%.
Exercises
• Question 8: A helpdesk receives an average of 60 support calls per hour. If the average service rate is 70 calls per hour, calculate the average number of calls waiting in the queue.
• Answer: The average number of calls waiting in the queue (Lq) in a single-server (M/M/1) queue can be calculated using the formula:
Lq = λ² / (μ(μ - λ)) = 60² / (70 × (70 - 60)) = 3600 / 700 ≈ 5.14
So, the average number of calls waiting in the queue is approximately 5.14.
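The Lq formula is easy to check in Python (a minimal sketch; the function name is my own):

```python
def mm1_queue_length(arrival_rate, service_rate):
    """Mean number waiting in an M/M/1 queue: Lq = lambda^2 / (mu * (mu - lambda))."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable unless arrival rate < service rate")
    return arrival_rate**2 / (service_rate * (service_rate - arrival_rate))

print(round(mm1_queue_length(60, 70), 2))   # 5.14 (Question 8)
```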
Lecture 9
• Workload Characterization

o Types of workloads (CPU-bound, I/O-bound,


mixed).
o Trace-driven simulation.
o Characterizing real-world workloads
Workload Characterization

Workload characterization involves understanding and analyzing the patterns and behaviors of tasks or activities that a computer system or application encounters. This understanding is crucial for optimizing system performance, resource allocation, and capacity planning.
Workload Characterization
Here are key aspects of workload characterization:
Types of Workloads:
1. CPU-Bound Workload: A CPU-bound workload is a
computing scenario where the CPU's processing
capacity is the primary performance limiting factor,
as tasks or processes in this scenario demand
significant computational power, making the CPU
the key resource determining the system's overall
performance.
• Characteristics: These workloads are characterized by
tasks that require significant CPU processing power.
• Example: Scientific simulations, complex mathematical
calculations.
Types of Workloads …

2. I/O-Bound Workload: An I/O-bound workload is a computing scenario where input/output operations (such as disk or network access) are the primary performance-limiting factor; tasks spend most of their time waiting for I/O to complete rather than using the CPU, making the I/O subsystem the key resource determining the system's overall performance.
• Characteristics: Workloads where the tasks spend a
significant amount of time waiting for input/output
operations to complete.
• Example: File processing, database queries with heavy
disk I/O.
Types of Workloads …

3. Mixed Workload: A mixed workload is a computing environment where multiple tasks are executed concurrently, varying in nature, resource requirements, and characteristics. It is commonly used in various IT contexts, like databases, servers, and distributed systems.
• Characteristics: A combination of CPU-bound and
I/O-bound tasks within the workload.
• Example: Web servers handling user requests
(combining processing and serving static/dynamic
content).
Trace-Driven Simulation
Trace-driven simulation involves analyzing the
behavior of a system or application by
replaying recorded traces of actual system
activities.
• Process:
a. Recorded traces typically include information about
the sequence and timing of events, such as CPU
usage, memory access, disk I/O, or network activity.
b. These traces are then used to simulate the workload
on a model or actual system to observe how it
performs under realistic conditions.
Trace-Driven Simulation

• Benefits:
a. Provides a realistic representation of
actual system behavior.
b. Useful for evaluating system performance,
identifying bottlenecks, and optimizing
resource allocation.
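A toy trace replay makes the idea concrete. The sketch below uses an invented, simplified trace format of (timestamp, resource, duration) records; everything here is illustrative rather than a real trace tool:

```python
# Each record: (timestamp, resource, duration in seconds) -- a hypothetical format.
trace = [
    (0.000, "cpu",  0.004),
    (0.004, "disk", 0.010),
    (0.014, "cpu",  0.002),
    (0.016, "net",  0.005),
]

def replay(trace):
    """Walk a recorded trace and accumulate total busy time per resource."""
    busy = {}
    for _, resource, duration in trace:
        busy[resource] = busy.get(resource, 0.0) + duration
    return busy

print(replay(trace))   # per-resource busy time, e.g. cpu totals about 0.006 s
```

In a real trace-driven study, the trace would come from instrumentation of a live system, and the replay would drive a detailed system model instead of a simple accumulator.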
Characterizing Real-World Workloads
• Characterizing real-world workloads is
essential for understanding and optimizing
the performance of computing systems.
Real-world workloads can vary widely based
on the type of application, industry, and
specific use cases.
o Data Collection:
a. Real-world workload characterization involves
collecting data on how users interact with the
system or application.
b. Data may include information on the frequency
and types of requests, response times, and
resource usage patterns.
Characterizing Real-World Workloads …

o Analysis:
a. Analyzing the collected data helps identify
trends, peak usage periods, and variations in
workload intensity.
b. Understanding the variability and characteristics
of the workload is crucial for effective system
design and resource provisioning.
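As a sketch of this analysis step, the snippet below (with invented log data) buckets request timestamps by hour to find the peak usage period and the average request rate:

```python
# Sketch: find the peak hour and average request rate from a list of
# request timestamps (seconds since start of day). Data is invented.

from collections import Counter

request_times = [3600*9 + 10, 3600*9 + 50, 3600*9 + 300,   # 09:xx
                 3600*14 + 5, 3600*14 + 60,                # 14:xx
                 3600*21 + 30]                             # 21:xx

requests_per_hour = Counter(t // 3600 for t in request_times)
peak_hour, peak_count = requests_per_hour.most_common(1)[0]

print(f"peak hour: {peak_hour}:00 with {peak_count} requests")
print(f"average rate: {len(request_times) / 24:.2f} requests/hour")
```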
Characterizing Real-World Workloads …

o Application:
a. Real-world workload characterization is essential
for designing systems that can handle the actual
usage patterns and demands of users.
b. It helps in making informed decisions about
hardware configurations, scalability
requirements, and system architecture.
Lecture 10
Exercises

• Question 1: A system spends 80% of its time performing
CPU-intensive tasks. Calculate the percentage of time
that is not CPU-bound.

• Answer: If a system spends 80% of its time performing
CPU-intensive tasks, the percentage of time that is not
CPU-bound is found by subtracting the CPU-bound time from 100%:
• Not CPU-bound time = 100% − CPU-bound time
• Not CPU-bound time = 100% − 80% = 20%
• So, the system is not CPU-bound 20% of the time.
Exercises
• Question 2: In a workload, there are 150 tasks, and
90 of them involve substantial I/O operations.
Calculate the I/O-bound task ratio.
• Answer: The I/O-bound task ratio can be calculated by
dividing the number of tasks involving substantial I/O
operations by the total number of tasks and expressing it as
a percentage:
• I/O-bound task ratio = (90 / 150) × 100 = 60%
Exercises
• Question 3: In a trace-driven simulation, a CPU
executes 2 million instructions in 10,000 time steps.
Calculate the average instruction count per time
step.
Answer: The average instruction count per
time step can be calculated by dividing the
total number of instructions executed by the
total number of time steps:
• Average = 2,000,000 / 10,000 = 200 instructions per time step
Exercises
• Question 4: In a real-world database workload, 500
transactions are processed in 2 minutes. Calculate
the transaction throughput.

Answer: Transaction throughput is calculated as
the number of transactions processed per unit of
time:
• T = 500 transactions / 2 minutes = 250 transactions per minute
Exercises
• Question 5: In a cloud computing environment, a
web application experiences an average of 100
concurrent user connections during peak hours. If
each user session requires an average of 5 seconds
of processing time, calculate the throughput of the
system.
Answer: With 100 concurrent sessions each requiring 5 seconds
of processing, the system completes 100 sessions every 5 seconds,
so throughput = 100 / 5 = 20 user sessions per second.
Exercises
• Question 6: A network handles 5 gigabytes of data
in 30 minutes. Calculate the average network traffic
rate.
Answer: Average traffic rate = 5 GB / 30 minutes. Taking
1 GB = 1,024 MB, this is 5,120 MB / 1,800 s ≈ 2.84 MB per second
(equivalently, 10 GB per hour).
Exercises
• Question 7: A data analytics workload processes
large datasets with an average size of 1 terabyte. If
the system processes 5 datasets per day, calculate
the daily data processing volume.
Answer: Daily data processing volume = 5 datasets × 1 TB per
dataset = 5 TB per day.
Exercises
• Question 8: An e-commerce website experiences a
peak concurrent user load of 500 users during a
flash sale. If each user session involves an average
of 10 HTTP requests, calculate the total number
of requests during the peak load.
Answer: Total requests = 500 users × 10 requests per user
= 5,000 requests.
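The last four exercises reduce to single-step arithmetic; a quick way to check them (assuming 1 GB = 1,024 MB for Question 6) is:

```python
# Quick checks for Questions 5-8 (unit assumptions noted inline).

# Q5: 100 concurrent sessions, 5 s each -> sessions completed per second
throughput = 100 / 5
print(throughput, "sessions/second")          # 20.0

# Q6: 5 GB in 30 minutes, taking 1 GB = 1024 MB
traffic_rate = (5 * 1024) / (30 * 60)
print(round(traffic_rate, 2), "MB/second")    # 2.84

# Q7: 5 datasets of 1 TB each per day
daily_volume = 5 * 1
print(daily_volume, "TB/day")                 # 5

# Q8: 500 users x 10 HTTP requests each
total_requests = 500 * 10
print(total_requests, "requests")             # 5000
```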
Lecture 11
• Workload Characterization

o Types of workloads (CPU-bound, I/O-bound, mixed).
o Trace-driven simulation.
o Characterizing real-world workloads
Workload Characterization

Workload characterization is a crucial aspect of
understanding and optimizing the
performance of computer systems. It involves
studying the patterns and characteristics of the
tasks or processes that a system executes.
Types of Workload

1. CPU-Bound Workloads: These workloads
are characterized by tasks that heavily utilize
the central processing unit (CPU). Programs
with extensive calculations or computations
often fall into this category.
Types of Workload …

2. I/O-Bound Workloads: These workloads
are characterized by tasks that spend a
significant amount of time waiting for
input/output operations to complete.
Examples include file reading/writing and
network communication.
Types of Workload …

3. Mixed Workloads: Some workloads exhibit
characteristics of both CPU-bound and
I/O-bound tasks. This could involve a
combination of computation-intensive and
data-intensive operations.
Trace-Driven Simulation

• Trace-driven simulation involves studying the
behavior of a system by analyzing traces or
logs of past executions. These traces provide
a record of events such as CPU usage, disk
I/O, and network activity.
Trace-Driven Simulation…

• Simulating workloads based on traces allows
researchers and system designers to evaluate
the performance of different configurations,
algorithms, or hardware setups in a controlled
environment.
Characterizing Real-World Workloads

• Analyzing real-world workloads is essential
for understanding the diverse and dynamic
nature of system usage. This can involve
collecting and studying traces from actual
systems in operation.
Characterizing Real-World Workloads …

• Real-world workload characterization may
include profiling applications, measuring
resource utilization, and identifying patterns
of user behavior. This information is
valuable for designing systems that can
handle the demands of specific applications
or user scenarios.
Metrics for Characterization
• Throughput: The number of tasks completed
per unit of time.
• Latency: The time it takes for a task to be
completed.
• Utilization: The degree to which system
resources (CPU, memory, disk, etc.) are being
used.
• Concurrency: The number of tasks executing
simultaneously.
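All four metrics can be computed from a simple task log. The record layout below (start time, end time, CPU busy time per task) is an assumption for illustration:

```python
# Compute throughput, average latency, CPU utilization, and peak
# concurrency from hypothetical task records:
# (start_time, end_time, cpu_busy_time), all in seconds.

tasks = [
    (0.0, 4.0, 2.0),
    (1.0, 3.0, 1.0),
    (2.0, 6.0, 2.0),
]
observation_window = 6.0  # seconds observed

throughput = len(tasks) / observation_window                  # tasks per second
latency = sum(end - start for start, end, _ in tasks) / len(tasks)
utilization = sum(cpu for _, _, cpu in tasks) / observation_window

# Peak concurrency: maximum number of tasks overlapping at any instant,
# found by sweeping sorted start (+1) and end (-1) events.
events = [(s, +1) for s, _, _ in tasks] + [(e, -1) for _, e, _ in tasks]
running = peak = 0
for _, delta in sorted(events):
    running += delta
    peak = max(peak, running)

print(f"throughput  = {throughput:.2f} tasks/s")
print(f"avg latency = {latency:.2f} s")
print(f"utilization = {utilization:.0%}")
print(f"peak concurrency = {peak}")
```

Note the design choice in the sweep: ties sort the −1 (end) event before the +1 (start) event, so a task that ends exactly when another starts is not counted as overlapping.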
Application-Specific Characterization

• Different applications may have unique
characteristics. For example, a database
workload may exhibit different patterns
compared to a web server workload.
• Understanding the specific requirements and
behavior of an application is crucial for
optimizing the system to provide efficient
and responsive performance.
Workload Characterization ..

• In summary, workload characterization is an
essential step in designing and optimizing
computer systems. It helps in understanding
the nature of tasks, resource utilization
patterns, and user behavior, ultimately
guiding decisions related to system
architecture, hardware configuration, and
software optimization.
Lecture 12
Exercises

• Question 1: A mixed workload consists of 200 tasks.
Of these, 120 tasks are CPU-bound, and the rest are
equally split between I/O-bound and memory-bound.
Calculate the percentage of tasks that are memory-bound.
Answer 1:
• Memory-bound tasks = (200 − 120) / 2 = 40 tasks
• Percentage of Memory-Bound Tasks = (No of
Memory-Bound Tasks / Total No of Tasks) * 100
• Percentage of Memory-Bound Tasks = (40
tasks/200 tasks) * 100 = 20%
Exercises
• Question 2: A trace-driven simulation of a system's
behavior covers 24 hours, with a time granularity of
1 second. Calculate the total number of time steps
in the simulation.
Exercises

• Question 3: A computational task takes 80 seconds to
complete, and during this time, the CPU is utilized
at 75%. Is this task CPU-bound, I/O-bound, or
mixed?
Exercises

• Question 4: A file processing task reads 120 MB of
data from disk in 15 seconds. Determine the disk
I/O throughput.
Exercises

• Question 5: In a web server log, there are 50,000 page
views, 8,000 search queries, and 500 transactions
over 24 hours. Calculate the transaction throughput
per minute.
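Questions 2 through 5 are left without worked answers above; one way to sanity-check your own answers (with the interpretation of Question 3 noted in a comment) is:

```python
# Sanity checks for Questions 2-5 (interpretations noted inline).

# Q2: 24 hours of simulation at 1-second granularity
time_steps = 24 * 60 * 60
print(time_steps, "time steps")        # 86400

# Q3: the CPU is busy 75% of the 80 s run and waiting ~25% of it,
# so a reasonable reading is "predominantly CPU-bound"
# (some texts would call this mixed).

# Q4: 120 MB read from disk in 15 seconds
disk_throughput = 120 / 15
print(disk_throughput, "MB/s")         # 8.0

# Q5: 500 transactions over 24 hours, expressed per minute
txn_per_minute = 500 / (24 * 60)
print(round(txn_per_minute, 3), "transactions/minute")  # 0.347
```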
