
Tutorial


Simulating and Evaluating Shared Cache Replacement Algorithms for Multi-Core Processors


This is an Advanced Computer Architecture project developed and implemented by:

Alanoud Alsalman / alanoud.alsalman@ucdenver.edu


Arwa Almalki / arwa.almalki@ucdenver.edu
Samaher Alghamdi / samaher.alghamdi@ucdenver.edu
Norah Almaayouf / norah.almaayouf@ucdenver.edu

This file provides a simple tutorial on how to run the Sniper simulator and how to run our
cache replacement algorithms under Sniper on the PDS lab machines. If you
need more information about how to use the Sniper simulator, refer to the Sniper
manual at http://snipersim.org/w/Manual or contact us.

To run Sniper for the first time on a PDS lab machine, do the following:

1- Log in to the Dozer machine.

2- Download the Sniper archive from https://files.fm/f/am5sbpdn and copy it to your
home directory.
3- Uncompress the archive:
tar -xvjf sniper-6.1.tar.bz2

4- Add the sniper-6.1 directory to your PATH.

To do so, add this line to the end of your ~/.bashrc:
export PATH=$PATH:/usr/local/sniper-6.1/
or use the following command in your current shell:
export PATH=/usr/local/sniper-6.1:$PATH
(To check that it was added correctly, type the command: echo $PATH)
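If you edited ~/.bashrc, reload it so the change takes effect in your current shell (a
standard bash step, shown here for convenience):
source ~/.bashrc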
5- Compile the code:
$ cd ~/sniper-6.1/test/fft
$ gcc fft.c -lm -I /usr/local/sniper-6.1/include/ -L /usr/local/sniper-6.1/lib/ -pthread -o fft
6- Run:
$ run-sniper -n 2 -c gainestown --roi --viz -- ./fft -p 2

To download and run the integrated benchmarks for the first time under the
Sniper simulator, do the following: (You will not need to do this, as the
benchmarks were already included in the file you downloaded.)
1- wget http://snipersim.org/packages/sniper-benchmarks.tbz
2- tar xjf sniper-benchmarks.tbz
3- cd benchmarks
4- export SNIPER_ROOT=/path/to/sniper
where /path/to/sniper is the path to your copy of Sniper in your home directory
5- export BENCHMARKS_ROOT=$(pwd)
6- make -j 2
Notes:
The 6 steps above only need to be done once (just the first time); after that you
can run the benchmarks.
The make -j 2 step takes a long time to finish.

Run benchmarks under the Sniper simulator:


- To run benchmarks, you should be in the benchmarks directory, which you can
change to with this command:
cd /home/<user account>/sniper-6.1/benchmarks
- Before going deep into how to run benchmarks, here is a list of the benchmark
suites and workloads used:
1- Parsec:
- bodytrack
- canneal
- dedup
- streamcluster
2- Splash2:
- cholesky
- fft
- fmm
- lu.cont
- lu.ncont
- ocean.cont
- water.nsq

- Two methods for running benchmarks can be used:

1- To run a single multi-threaded workload (one benchmark at a time):


- General command:
./run-sniper -p <program> -i <input size (test)> -n <ncores (1)> -m <machines
(1)> -d <outputdir (.)> -c <config-file> -r <sniper-root-dir> -g <options>
(-p) option: allows you to specify the benchmark name and the workload. As
mentioned above, two benchmark suites are used: Parsec and Splash2.
(-i) option: allows you to specify the input size of the selected benchmark.
For the Parsec benchmarks, there are three input sizes: simsmall,
simmedium, and simlarge.
For the Splash2 benchmarks, there are two input sizes: small and large.
The default input size is test.
(-n) option: allows you to specify the number of cores. This option overrides the
general number of cores in the configuration file.
(-d) option: allows you to specify the output directory for all generated files. To be
able to use our Python script later to analyze the results, you should set the
directory as follows:
single_run/simulated_policy_name/benchmark_name-workload_name-on-simulated_configuration-simulated_policy. For example:
single_run/ewlru/parsec-canneal-on-gainestown-ewlru
(-c) option: allows you to specify the configuration file. Two configurations are used:
Gainestown (the default) and Hydra.
(-g) option: allows you to specify individual configuration settings, i.e., it overrides the
defaults in the configuration file.

- Examples of commands needed to run our replacement policies
(ewlru, ewsrrip, mrut):

Parsec: To run the Parsec benchmark with the Bodytrack workload, the
Gainestown configuration, and the ewlru replacement policy for the L3 cache, this is
the command used:
./run-sniper -p parsec-bodytrack -i simsmall -n 4 -c gainestown -g
--perf_model/l3_cache/replacement_policy=ewlru -d
~/single_run/ewlru/parsec-bodytrack-on-gainestown-ewlru

Splash2: To run the Splash2 benchmark with the FFT workload, the Hydra
configuration, and the mrut replacement policy for the L3 cache, this is the
command used:
./run-sniper -p splash2-fft -i small -n 8 -c hydra -g
--perf_model/l3_cache/replacement_policy=mrut -d
~/single_run/mrut/splash2-fft-on-hydra-mrut
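As a further illustration, here is a sketch of a single-run command for the third
policy, ewsrrip; it simply combines the options described above (the workload and
output directory are our own choices, following the naming convention):
./run-sniper -p parsec-canneal -i simsmall -n 4 -c gainestown -g
--perf_model/l3_cache/replacement_policy=ewsrrip -d
~/single_run/ewsrrip/parsec-canneal-on-gainestown-ewsrrip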

2- To run multiple multi-threaded workloads in parallel (more than one
benchmark at a time):
- General command:
./run-sniper -c <config-file> --benchmarks=<1st benchmark
name>-<workload>-<input size>-<# of assigned cores>,<2nd benchmark
name>-<workload>-<input size>-<# of assigned cores>, etc
- In this type of run, you should use the multiple_run directory for the output, as follows:
multiple_run/simulated_policy_name/benchmark_name-workload_name-on-simulated_configuration-simulated_policy. For example:
multiple_run/ewlru/parsec-canneal-on-gainestown-ewlru

- Examples of commands needed to run our replacement policies
(ewlru, ewsrrip, mrut):

To run two workloads (Canneal and Bodytrack) from the Parsec benchmark
using the Gainestown configuration and giving each workload 2 cores, this is
the command used:
./run-sniper -c gainestown
--benchmarks=parsec-canneal-simsmall-2,parsec-bodytrack-simsmall-2 -d
~/multiple_run/lru/parsec-canneal-parsec-bodytrack-on-gainestown-lru-2-2

To run two workloads, FFT from the Splash2 benchmark and Dedup from the
Parsec benchmark, using the Gainestown configuration, giving each
workload 2 cores, and using the ewlru replacement policy, this is the
command used:
./run-sniper -c gainestown
--benchmarks=splash2-fft-small-2,parsec-dedup-simsmall-2 -g
--perf_model/l3_cache/replacement_policy=ewlru -d
~/multiple_run/ewlru/splash2-fft-parsec-dedup-on-gainestown-ewlru-2-2
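As another illustration, here is a sketch that follows the same pattern for the mrut
policy (the pairing of two Splash2 workloads and the output directory are our own
choices):
./run-sniper -c gainestown
--benchmarks=splash2-fft-small-2,splash2-cholesky-small-2 -g
--perf_model/l3_cache/replacement_policy=mrut -d
~/multiple_run/mrut/splash2-fft-splash2-cholesky-on-gainestown-mrut-2-2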
Notes:
If the input size for the benchmark is not specified correctly, the default input
size (test) will be used.
All result files for a single run will be saved in the single_run path, e.g.,
~/single_run/ewlru/parsec-bodytrack-on-gainestown-ewlru
in your home directory.
All result files for a multiple run will be saved in the multiple_run path, e.g.,
~/multiple_run/ewlru/splash2-fft-parsec-dedup-on-gainestown-ewlru-2-2
in your home directory.
As mentioned above, we use a fixed scheme for naming the folders: the folder
that holds the result files is named after the benchmark and workload. However,
we changed the names of 4 Splash2 workloads for simplicity. These changes
appear in the table below:
Workload name    Used name
lu.cont          lucont
lu.ncont         luncont
water.nsq        waterN
ocean.cont       ocean

If you do not specify the replacement policy using the -g option, lru will be used
as the default replacement policy.
Some syntax errors when setting the -g option, such as using - rather than _, do
not produce any error message, but they also do not apply the intended changes.
To check that all configuration changes have been applied, refer to the sim.cfg file
(an example command is shown in the next section).
In all our tests, the simsmall input size was used for the Parsec benchmarks, while
small was used for the Splash2 benchmarks.
As for the Hydra configuration, please run it using 8 cores (this is due to Sniper
limitations).
When running multiple benchmarks at the same time, be sure that the total
number of assigned cores doesn't exceed the default number of cores for the
configuration (4 for Gainestown and 8 for Hydra); otherwise, it will override it.
How to check the result of the simulation:
After running any benchmark under Sniper, four files will be generated:
sim.cfg: contains all the details about the configuration used.
sim.out: contains the output results of the simulation, including runtime, idle time,
number of instructions, average miss rate for each cache level, etc.
sim.info: provides information about the run command that generated these
output files.
sim.stats.sqlite3: an SQLite database with the raw simulation statistics.
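For example, using the single-run output directory from above, and assuming the
option name appears verbatim in the generated config file, you can confirm the
applied replacement policy and skim the results with:
cd ~/single_run/ewlru/parsec-bodytrack-on-gainestown-ewlru
grep replacement_policy sim.cfg
less sim.out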

In order to read and analyze the data, we developed IPython notebook files that
read and analyze the data in the sim.out files. We developed two scripts:
1- collect_single_run.ipynb: analyzes the data for benchmarks run one at a time.
This file reads the single_run directory.
2- collect_multiple_runs.ipynb: analyzes the data for benchmarks run in
parallel. This file reads the multiple_run directory.
These files calculate the average miss rate, since we use a different way of
calculating the average miss rate based on our key paper. They were also used to plot the
execution time and average miss rates for 11 workloads using the three replacement
policies (ewlru, ewsrrip, and mrut) compared to lru, and to plot the speedup of all
three replacement policies over lru.
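For reference, the directory layout the scripts expect follows the naming convention
described earlier; a quick way to sanity-check it (a sketch, assuming the run-results,
single_run, or multiple_run folders sit in your current directory) is:
ls single_run/*/*/sim.out
ls multiple_run/*/*/sim.out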

How to use the Python scripts:


- This should be done on your personal computer and not on the PDS lab
machine.
- To run our .ipynb files, you need to have the following:
1- Jupyter Notebook:
If you don't already have it, please refer to
http://jupyter.readthedocs.io/en/latest/install.html for information
about the installation.
2- Python 3.3 or greater, or Python 2.7.
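If you use pip, one common way to install Jupyter is the command below (this is
just one option; see the link above for the full instructions):
pip install jupyter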

- After downloading our .ipynb files, run Jupyter Notebook as follows:

1- Use the terminal to run them by typing the command jupyter notebook.

After typing jupyter notebook, a web page will open in your web browser, which
allows you to open one of the files.
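You can also open a specific notebook directly by passing its file name, for example:
jupyter notebook collect_single_run.ipynb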

2- After opening the file, you can directly examine the results and plots we
generated, or you can re-run the code. If you choose to run the .ipynb file, change
the directory (rootdir) in the first cell of the code to the directory where your
Sniper-generated files are or where our files were downloaded (the run-results
folder).
3- Check that the folder names follow the format we mentioned earlier, since any
change in the names will cause an error and the script will not generate any
results.

4- Run the script (all cells) again if you change or add any files in the root directory.

After running the script, plots will appear below each cell.
