AP254XINST
PowerCare: Performance for Power Systems AIX
Instructor Exercises Guide with hints
Trademarks
IBM is a registered trademark of International Business Machines Corporation.
The following are trademarks of International Business Machines Corporation in the United
States, or other countries, or both:
Active Memory, AIX 5L, AIX 6, AIX, BladeCenter, DB2, developerWorks, EnergyScale,
Express, i5/OS, Power Architecture, POWER Hypervisor, Power Systems, Power,
PowerPC, PowerVM, POWER6+, POWER6, POWER7 Systems, POWER7+, POWER7,
Redbooks, System p, System p5, System z, Systems Director VMControl, Tivoli,
Workload Partitions Manager, z/VM, z9, 400
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or
both.
Microsoft, Windows and Windows NT are trademarks of Microsoft Corporation in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java and all Java-based trademarks and logos are trademarks or registered trademarks
of Oracle and/or its affiliates.
Other product and service names might be trademarks of IBM or other companies.
Contents
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v
Exercise A. Using the Virtual I/O Server Performance Analysis Reporting Tool . A-1
Exercise instructions with hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-3
Exercise review/wrapup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-8
Exercise 1. Introduction to the lab environment

Estimated time
00:45
Introduction
This is an exploratory lab. You will explore some commands, reference
materials, and the HMC applications to review prerequisite skills and
to try out commands that you will use later in the course.
Requirements
This workbook
A computer with a network connection to the lab environment.
An HMC that is configured and supporting a POWER7 system.
A POWER7 processor-based system with at least one partition
running AIX 7 and one partition running the Virtual I/O Server code
per student.
Preface
All exercises of this chapter depend on the availability of specific equipment in your
classroom. You need a computer system configured with a network connection to an
HMC.
The hints provided for locating documentation on particular web pages were correct
when this course was written. However, web pages tend to change over time. Ask your
instructor if you have trouble navigating the websites.
All hints are marked by a hint icon.
The output shown in the hints is an example. Your output and answers based on the
output might be different.
At the IBM Systems Hardware Information Center, use the Search option on the left
side of the screen to find topics related to the following keywords:
Installing AIX in a partition
Partitioning your server
Managing server using HMC
__ 4. From http://publib.boulder.ibm.com/eserver, select AIX Information Center. Select
the AIX 7.1 Information Center. At the resulting IBM Systems Information Center
page, look for the following topics and see what information is available:
Click the AIX PDFs link to get a list of PDFs of the AIX documentation.
Notice the Performance management and Performance Tools Guide and
Reference PDFs.
__ 5. Go to the PowerVM virtualization website and see what is available:
http://www.ibm.com/systems/power/software. After looking at the available links,
follow the link to PowerVM Virtualization without limits. Take a moment to see
what is available from this page.
Note: You might have multiple virtual target devices (VTDs) in your list. Fill out the chart
for the first VTD. The PVID is retrieved from the lspv command listed in the next step.
$ lsmap -all
VSA Physloc Client Partition ID
--------------- --------------------------------------------
vhost0 U8204.E8A.652ACF2-V1-C13 0x00000002
VTD lpar1_rootvg
Status Available
LUN 0x8100000000000000
Backing device hdisk2
Physloc
U78A0.001.DNWGGSH-P1-C1-T1-W500507680140581E-L3000000000000
VSA Physloc Client Partition ID
--------------- --------------------------------------------
vhost1 U8204.E8A.652ACF2-V1-C14 0x00000003
VTD lpar2_rootvg
Status Available
LUN 0x8100000000000000
Backing device hdisk3
Physloc
U78A0.001.DNWGGSH-P1-C1-T1-W500507680140581E-L4000000000000
Table 2: Virtual Ethernet configuration
Virtual Ethernet adapter slot
Shared Ethernet adapter
Ethernet backing device
Control channel adapter for
SEA failover
ha_mode
Priority and status
Given the above examples in the hints, you would document:
Virtual Ethernet adapter slot = virtual slot 11
Shared Ethernet adapter = ent4
Ethernet backing device = ent0
Control channel adapter for SEA failover = ent3
ha_mode = enabled
Priority and status = 1 and Active is true (primary)
The output from the lparstat -i command should be similar to the following:
# lparstat -i
Node Name : sys114_lpar1
Partition Name : sys114_lpar1
Partition Number : 2
Type : Shared-SMT
Mode : Capped
Entitled Capacity : 0.35
Partition Group-ID : 32770
Shared Pool ID : 0
Online Virtual CPUs : 1
Maximum Virtual CPUs : 10
Minimum Virtual CPUs : 1
Online Memory : 1024 MB
Maximum Memory : 2048 MB
Minimum Memory : 768 MB
Variable Capacity Weight : 0
Minimum Capacity : 0.10
Maximum Capacity : 2.00
Capacity Increment : 0.01
Maximum Physical CPUs in system : 8
Active Physical CPUs in system : 8
Active CPUs in Pool : 8
Shared Physical CPUs in system : 8
Maximum Capacity of Pool : 800
Entitled Capacity of Pool : 280
Unallocated Capacity : 0.00
Physical CPU Percentage : 35.00%
Unallocated Weight : 0
Memory Mode : Dedicated
Total I/O Memory Entitlement : -
Variable Memory Capacity Weight : -
Memory Pool ID : -
Physical Memory in the Pool : -
Hypervisor Page Size : -
Unallocated Variable Memory Capacity Weight: -
Unallocated I/O Memory entitlement : -
Memory Group ID of LPAR : -
Given the above example, here are the answers:
__ a. Is the partition using dedicated or shared processors? Shared
__ b. Is the partition capped or uncapped? Capped
__ c. Is simultaneous multithreading on or off? On
Network Address.............2E5C5B02E80B
Displayable Message.........Virtual I/O Ethernet Adapter
(l-lan)
Hardware Location Code......U8204.E8A.652ACF2-V2-C11-T1
PLATFORM SPECIFIC
Name: l-lan
Node: l-lan@3000000b
Device Type: network
Physical Location: U8204.E8A.652ACF2-V2-C11-T1
__ 18. Determine the operating system level in your partition using the oslevel -s
command. You should find that the LPAR is running AIX 7.1.
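For example (the technology level and service pack values shown in this output are
only illustrative; yours will differ):
# oslevel -s
7100-01-06-1241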
__ 19. Use the lspath command to verify that your hdisk0 has two different access paths.
MPIO is set up at your client logical partition.
You should see the following for hdisk0:
# lspath -l hdisk0
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
__ 20. Using the PuTTY program on your desktop, log in to the HMC command line.
Determine the HMC's version with the lshmc -V command. The following is an
example command and output that shows an HMC at Version 7, Release 7.4.0,
Service Pack 1. It also has two fix packs installed.
hscroot@sys11hmc:~> lshmc -V
"version= Version: 7
Release: 7.4.0
Service Pack: 1
HMC Build level 20120207.1
MH01302: Fix for HMC V7R7.4.0 SP1 (02-07-2012)
MH01306: Fix for HMC V7R7.4.0 (02-29-2012)
","base_version=V7R7.4.0
"
Your assigned HMC may be at a different level. In your workplace, you might notice
differences in the output of some commands if your HMC is at a later version or
release.
End of exercise
Exercise 2. Shared processors and virtual processor tuning

Estimated time
02:00
Introduction
This exercise is divided into four parts. Throughout this exercise, all of
the partitions have simultaneous multi-threading enabled.
In the first part of the exercise, students gain experience with viewing
micro-partitioning-specific configuration options of a logical partition.
The second part provides details regarding statistics that are relevant
to SMT and a micro-partitioning environment.
In the third part of the exercise, by dynamically changing the capacity
entitlement (CE), the capped/uncapped setting, and the number of
virtual processors (VPs), you will see the impacts of these
configuration options on the logical partition's processing capacity and
performance.
In the fourth part, you will run a CPU stress load on the partitions using
an executable named spload, and monitor the effect. The spload tool
can be found in the /home/an31/ex2 directory.
Requirements
This workbook
A computer with a network connection to the lab environment.
An HMC that is configured and supporting a POWER7 system.
A POWER7 processor-based system with at least one partition
running AIX 7 and one partition running the Virtual I/O Server code
per student.
Preface
All exercises of this chapter depend on the availability of specific equipment in your
classroom.
All hints are marked by a hint icon.
__ 1. Using the HMC, check the properties of your running partition. Verify that your
assigned client partition has the following configuration values. Use the HMC to look
at the partition properties.
Processing mode: Shared
Processing units: Min=0.1, Current=0.35, Max=2
Virtual processors: Min=1, Current=1, Max=20
Sharing mode: Capped
Shared processor pool: DefaultPool
Log in to your assigned HMC. In the HMC Server Management application, select your
server to display the LPAR table. Select your assigned LPAR and choose Properties.
Click the Hardware tab, then the Processors tab. Verify the processor information is
correct.
__ 2. Open a terminal window or a Telnet session to your partition, and log in as the root
user. Your instructor should have provided the root password.
__ 3. Check the partition configuration with the lparstat and lsattr commands,
as follows.
# lparstat -i
# lsattr -El sys0
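The lsattr listing for sys0 is long; individual attributes can be pulled out with the -a
flag. For example (the values shown here are only illustrative):
# lsattr -El sys0 -a realmem -a modelname
realmem 1048576 Amount of usable physical memory in Kbytes False
modelname IBM,8204-E8A Machine name False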
The lparstat command provides a convenient output for checking many
configuration values.
# lparstat -i
Node Name : sys114_lpar1
Partition Name : sys114_lpar1
Partition Number : 2
Type : Shared-SMT-4
Mode : Capped
Entitled Capacity : 0.35
Partition Group-ID : 32771
Shared Pool ID : 0
Online Virtual CPUs : 1
Maximum Virtual CPUs : 10
Minimum Virtual CPUs : 1
Online Memory : 1024 MB
Maximum Memory : 2048 MB
Minimum Memory : 512 MB
Variable Capacity Weight : 0
Minimum Capacity : 0.10
Maximum Capacity : 2.00
Capacity Increment : 0.01
Maximum Physical CPUs in system : 8
Active Physical CPUs in system : 8
Active CPUs in Pool : 8
Shared Physical CPUs in system : 8
Maximum Capacity of Pool : 800
Entitled Capacity of Pool : 280
Unallocated Capacity : 0.00
Physical CPU Percentage : 35.00%
Unallocated Weight : 0
Memory Mode : Dedicated
Total I/O Memory Entitlement : -
Variable Memory Capacity Weight : -
Memory Pool ID : -
Physical Memory in the Pool : -
Hypervisor Page Size : -
Unallocated Variable Memory Capacity Weight : -
Unallocated I/O Memory entitlement : -
Memory Group ID of LPAR : -
Desired Virtual CPUs : 1
Desired Memory : 1024 MB
Desired Variable Capacity Weight : 0
Desired Capacity : 0.35
Target Memory Expansion Factor : -
exactly the same result: utilization can only go up to the partition's entitled capacity, not
higher.
It is possible to dynamically change a partition from capped to uncapped, and change
the weight by using the HMC and the dynamic logical partitioning menu (or the chhwres
HMC command).
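For reference, a sketch of the equivalent change from the HMC command line is shown
below; verify the attribute names against the chhwres man page for your HMC release:
chhwres -r proc -m <managed_system> -o s -p <partition_name> -a "sharing_mode=uncap,uncap_weight=128"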
__ 5. Using the lsdev command, list the available processors in your partition. What type
of processors are listed with the lsdev command?
Example command and its output:
# lsdev -c processor
proc0 Available 00-00 Processor
We have one processor available. This means that we have one virtual processor
configured because your assigned LPAR is configured as a shared processor partition.
If you see any processors listed as Defined, it means that the processor was previously
used by the partition, but is not currently available. This might happen if the partition has
been shut down and then reactivated with a smaller number of processors.
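Each processor device also exposes its SMT settings as device attributes; a quick check
looks like the following (the attribute values shown are only illustrative):
# lsattr -El proc0
frequency 3300000000 Processor Speed False
smt_enabled true Processor SMT enabled False
smt_threads 4 Processor SMT threads False
state enable Processor state False
type PowerPC_POWER7 Processor type False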
__ 6. Using the bindprocessor command, display the processors available in your
partition. What processor type does the bindprocessor command list?
Example command and its output:
# bindprocessor -q
The available processors are: 0 1 2 3
The bindprocessor command lists the available logical processors. In this example,
we have one virtual processor available (revealed by the lsdev command), and four
logical processors. This means that simultaneous multi-threading is enabled and is
using the SMT4 mode.
# smtctl
This system is SMT capable.
This system supports up to 4 SMT threads per processor.
SMT is currently disabled.
SMT boot mode is set to disabled.
SMT threads are bound to the same virtual processor.
# lparstat
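SMT can be toggled dynamically with the smtctl command. For example, to enable it
(see the smtctl man page for the -w boot and -w now options that control when the
change takes effect):
# smtctl -m on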
# smtctl
This system is SMT capable.
This system supports up to 4 SMT threads per processor.
SMT is currently enabled.
SMT boot mode is set to enabled.
SMT threads are bound to the same virtual processor.
# mpstat -s 1 1
System configuration: lcpu=4 ent=0.3 mode=Capped
Proc0
1.35%
cpu0 cpu1 cpu2 cpu3
0.69% 0.22% 0.22% 0.22%
The example output for both commands has a header field (lcpu=) that reports four
logical CPUs. The mpstat report, in addition, shows that there are four logical CPUs per
processor.
Note: New intelligent threads technology enables workload optimization by dynamically
selecting the most suitable threading mode: single thread per core, SMT (simultaneous
multi-threading) with two threads per core, or SMT with four threads per core. As a result,
applications can run at their peak performance and server workload capacity is
increased. In this section, we will see how POWER7 intelligent threads behave as the
workload increases.
__ 15. Run the yes command to generate load on the CPU. Execute one instance of the
yes command in the background.
The suggested command is:
# yes > /dev/null &
__ 16. Monitor the system wide processor utilization by running lparstat for four
intervals of one second each.
The suggested command and example output are:
# lparstat 1 4
System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB psize=16
ent=0.35
%user %sys %wait %idle physc %entc lbusy vcsw phint
----- ----- ------ ------ ----- ----- ------ ----- -----
61.6 2.2 0.0 36.2 0.36 102.5 26.9 274 0
61.3 1.5 0.0 37.2 0.35 99.6 28.5 390 1
61.7 2.0 0.0 36.3 0.35 100.4 28.5 374 1
60.7 2.0 0.0 37.3 0.35 98.9 23.2 379 2
__ 17. Examine the lparstat report. How many processing units are being consumed
(physc)?
In the example output, the single CPU-intensive job is utilizing all of the entitled
capacity (0.35).
__ 18. Monitor the utilization for all logical CPUs, using the sar command. Specify two
intervals of one second each. You might wish to maximize your terminal emulation
window in order to see all of the output without scrolling. What do you notice in the
command output?
The suggested command and example output are:
# sar -P ALL 1 2
You can check that there are two yes jobs running:
# jobs
[2] + Running yes > /dev/null &
[1] - Running yes > /dev/null &
__ 20. Monitor the system wide processor utilization by running lparstat for four intervals
of one second each. What are the current values of physc and %entc?
____________________________________________________________
The suggested command and example output are:
# lparstat 1 4
System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB
psize=4 ent=0.35
The output should show that we are still using all of the entitled capacity. The physc
value matches the ent value, and the entitled capacity percentage is 100 or nearly so.
__ 21. Monitor the utilization for all logical CPUs using the sar command. Specify two
intervals of two seconds each. What do you notice about the sar output?
# jobs
[4] + Running yes > /dev/null &
[2] - Running yes > /dev/null &
[3] Running yes > /dev/null &
[1] Running yes > /dev/null &
There should be four instances of the yes command running on your LPAR now.
__ 23. Monitor the utilization for all logical CPUs using the sar command. Specify two
intervals of two seconds each.
__ 24. Examine the sar report. How many logical CPUs have a very high % utilization
(%user + %sys) along with a significant physical processor consumption (physc)?
What can you say about the distribution of the workload?
The example output shows four logical CPUs in each interval, each with a high
utilization totalling 100% and a significant physical processor consumption that is split
evenly between all four logical processors. Recall that this is a capped processor
partition, so it cannot use more than its entitled capacity.
__ 25. Run the lparstat command with a count and interval of two. Note the physc and
%entc values here: _____________________________________________
Example lparstat command and its output, which shows that physc is the same as the
entitled capacity (0.35) and %entc is about 100%.
# lparstat 2 2
__ 27. In your LPAR login session, run lparstat with a count and interval of two. What
do you notice now about the physc and %entc statistics? How do they compare with
the last lparstat command that you ran?
Here is an example lparstat command and its output showing that physc is now 1.0
and the %entc is 285.8%. Now that the partition is uncapped, it is only limited by its one
virtual processor (and the amount of excess cycles in the shared processor pool).
# lparstat 2 2
Here are example commands and their example outputs. There should be four yes
processes. The sar output should show even distribution of the workload.
# jobs
[4] + Running yes > /dev/null &
[2] - Running yes > /dev/null &
[3] Running yes > /dev/null &
[1] Running yes > /dev/null &
# sar -P ALL 2 2
Proc0
99.95%
cpu0 cpu1 cpu2 cpu3
25.00% 25.01% 24.99% 24.95%
------------------------------------------------
Proc0
99.95%
cpu0 cpu1 cpu2 cpu3
25.00% 25.01% 24.99% 24.95%
You should find that the distribution is even, just like in the sar output.
__ 31. Run the lparstat command with an interval and count of two. What do you notice
about the lbusy statistic? Does this value make sense?
Here is the lparstat command and example output which shows a logical processor
percentage (lbusy) of 100%. This makes sense because you are using all four logical
processors.
# lparstat 2 2
Example commands and their outputs, which show two busy logical processors and two
relatively idle logical processors:
# sar -P ALL 2 2
Proc0
100.02%
cpu0 cpu1 cpu2 cpu3
37.96% 5.90% 43.92% 12.24%
-----------------------------------------
Proc0
99.98%
cpu0 cpu1 cpu2 cpu3
44.04% 6.06% 43.74% 6.14%
__ 34. Once again run the lparstat command with an interval and count of two. What do
you notice about the lbusy statistic now? Does this value make sense?
Here is the lparstat command and example output which shows a logical processor
percentage (lbusy) of about 50%. This makes sense because you are using only two of
the four logical processors.
# lparstat 2 2
cpu
-----------------------
us sy id wa pc ec
87 1 12 0 1.00 285.5
93 1 6 0 1.00 286.0
__ 36. Run the iostat command from your partition and check the CPU utilization
statistics. Use an interval and count of two. What columns display information about
the actual CPU consumption of the partition?
tty: tin tout avg-cpu: % user % sys % idle % iowait physc % entc
0.0 31.2 88.0 0.9 11.2 0.0 1.0 285.6
tty: tin tout avg-cpu: % user % sys % idle % iowait physc % entc
0.0 147.0 88.6 0.7 10.6 0.0 1.0 285.9
Like lparstat, the physc column displays the number of physical processors consumed,
and the %entc column displays the percentage of entitlement consumed.
__ 37. Next, start the topas program on your partition. Press the L key. Look for the physc
and %entc values. Look also for the lbusy field.
The following example shows a partial screen. Notice the physc, %entc, and the
%lbusy fields. There is also a PHYSC field for each logical processor.
__ 38. In the topas window, press a capital C to see cross-partition data. You might have to
wait a moment for information to appear. It should show all the active partitions in
your managed system. If it doesn't work on your system, just move on to the next
step. When you've finished with topas, press the q key to quit the program.
__ 39. Kill all yes processes that are still running. Use the kill %# command where # is the
number of the job. Use the jobs command to verify that there are no running jobs.
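For example, assuming the job numbers are 1 through 4 as shown earlier (check the
jobs output first; your numbers might differ):
# kill %1 %2 %3 %4
# jobs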
__ 41. In a session with your partition, change directory to /home/an31/ex2, and invoke the
following command to generate a workload on the CPU:
# ./spload -t 2
Important: Keep this executable running until instructed to stop it. You will have to
monitor the transaction rate output as you make changes to the system
configuration.
The values in the output are the transaction rates.
The following is an example of running spload with the transaction rate displayed
every two seconds.
# cd /home/an31/ex2
# ./spload -t 2
140
143
144
142
144
...
__ 42. Open another window to your partition. Log in as the root user and check the CPU
consumption using the lparstat command.
What is the physical processor consumed value on your partition?
What is the percent entitlement consumed value of the partition?
What is the app value? What does it mean?
# lparstat 2 2
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.6 0.4 0.0 0.0 1.00 285.8 100.0 2.99 400 8
99.7 0.3 0.0 0.0 1.00 285.8 100.0 2.99 400 9
The physc and %entc show a busy partition. The partition is using the maximum amount
of entitled capacity for a partition with one virtual processor.
The available processing units in the shared pool can be seen with the app statistic. In
the example above, the value is 2.99, representing the equivalent of approximately three
idle processors in the shared processor pool. You will likely see a different amount on
your training system.
__ 43. Using the HMC, dynamically change the capacity entitlement of your partition to 0.8
and mark it as capped. Leave the virtual processor value at 1.
Using the HMC GUI, select your partition, then select Dynamic Logical Partitioning >
Processor Resources > Add or Remove. Enter 0.8 in the Assigned Processing
units field, uncheck the uncapped checkbox, then click OK. The Add / Remove
Processor Resources window is shown below.
__ 44. Check the processing capacity of your partition using the lparstat command with
an interval and count of two. What is the physical processor capacity consumed
compared to the entitled capacity?
The processing capacity consumed is equal to the new entitled capacity of 0.8.
# lparstat 2 2
System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB
psize=4 ent=0.80
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.0 0.8 0.0 0.1 0.80 99.9 100.0 3.18 402 4
99.6 0.4 0.0 0.0 0.80 100.0 100.0 3.19 400 8
__ 45. Now, make two dynamic changes to the LPAR. Use DLPAR to change your partition
back to an uncapped partition. Set the weight value to 128. Also, add one more
virtual processor for a total of two.
Using the HMC GUI, select your partition, then select Dynamic Logical Partitioning >
Processor Resources > Add or Remove. Check the uncapped box, and set the
weight value to 128. Then click the OK button to make the change.
Note that the available system processing values on your training system will likely be
different from those shown on the example screen above.
__ 46. Using the lparstat command, discover the current value of the consumed
processing capacity (physc) on your partition.
The processing capacity consumed value is now 2.0, as shown in the output of the
lparstat command below. Note that your LPAR might show less than 2.0 physc. If
other students are adjusting their entitled capacity and running
processes that take up a lot of the available processing resources in the pool, then your
LPAR may get less than 2.0 processing units for physc.
# lparstat 2 2
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.8 0.2 0.0 0.0 2.00 249.9 100.0 1.99 800 16
99.8 0.2 0.0 0.0 2.00 249.9 100.0 1.99 800 10
__ 47. Did the transaction rate provided by the spload executable increase? Why or why
not?
The transaction rate increased when the number of virtual processors was increased to
two. The physical capacity consumed by the logical partition can grow up to 2.0. The
physical CPU capacity consumed by your partition also depends on the activity of the
other partitions that consume extra CPU cycles from the shared processor pool. The
transaction rate might fluctuate or be lower than the values shown in the example.
# ./spload -t 2
284
289
285
286
287
286
285
285
__ 48. On your partition, use the mpstat command to check the number of context
switches occurring on the partition. Check the columns ilcs and vlcs (involuntary and
voluntary context switches respectively).
# mpstat -d 2
From the mpstat output, we can see that we have only involuntary logical context
switches (ilcs). This means the partition is not ceding any idle cycles.
# mpstat -d 2
System configuration: lcpu=8 ent=0.8 mode=Uncapped
cpu cs ics bound rq push S3pull S3grd S0rd S1rd S2rd S3rd S4rd S5rd ilcs vlcs S3hrd S4hrd S5hrd
0 104 53 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
1 118 57 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
2 1 1 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
3 0 0 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
4 0 0 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
5 0 0 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
6 150 75 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
7 0 0 0 0 0 0 0 100 0 0 0 0 0 150 0 100 0 0
ALL 373 186 0 0 0 0 0 100 0 0 0 0 0 1200 0 100 0 0
If simultaneous multithreading is enabled on a shared processor partition, the mpstat
values for ilcs and vlcs reported for a logical processor are actually the number of
context switches encountered by the virtual processor being used to run the logical
processor. Since each virtual processor has four logical processors, this means the four
logical processors from a single virtual processor will report the same ilcs and vlcs
values. The ALL line of the mpstat output shows the total number of ilcs and vlcs for
all of the virtual processors of the partition.
__ 49. Optional step: Use the nmon command to monitor CPU utilization. Run nmon then
press the h key to view the nmon shortcut keys. Try out the p, c, C, and lowercase L
(l) shortcut keys. See how many statistics you recognize. Use q to quit nmon when
you're finished.
__ 50. Kill the spload process.
Example commands and their outputs:
# ps -ef | grep spload
root 4849716 7077978 493 16:22:20 pts/0 48:24 ./spload -t 2
# kill 4849716
# ps -ef | grep spload
End of exercise
Exercise 3. Configuring multiple shared processor pools

Estimated time
01:30
Requirements
One shared processor LPAR per student on a POWER7 system.
Preface
Use your assigned LPAR for this exercise.
All hints are marked by a hint icon.
Introduction
Multiple shared processor pools are a feature of POWER6 and later system firmware. The
system can have up to 63 additional shared processor pools configured. In this exercise,
you configure and use one user-defined shared processor pool. The topas command is
used to verify the pool utilization.
__ 1. Before starting the exercise, verify your assigned partition has the following CPU
configuration. Use the HMC to look at the partition properties. Use DLPAR
operations to alter any of the dynamic attributes, or perform a full shut down of the
LPAR, alter the Normal configuration file, and activate the LPAR.
If you completed all of Exercise 2, your assigned LPAR may currently be set to 0.8
processing units, 2 virtual processors, and uncapped. Use DLPAR to alter these
three settings to the values shown below. The other two settings, shared mode and
assignment to the default shared processor pool, should already be configured.
Processing mode: Shared
Processing units: 0.35
Virtual processors: 1
Sharing mode: Capped
Shared processor pool: DefaultPool
Select your LPAR and run the Dynamic Logical Partitioning > Processors >
Add or Remove task. Make the necessary changes and click OK. This example
shows the proper values.
__ 2. Access your primary managed system's HMC (GUI or command line) and verify there
are at least 0.5 processors available. You can view the value in the server table's
Available Processing Units column, or open the managed system properties and go
to the Processors tab. A sample server table is shown below, which shows 10.6
processing units available. Your system may have a different number.
Record the number of available processing units on your system: _____________
__ 3. At this time, your partition is assigned to the default shared processor pool named
DefaultPool with an ID of 0. In the next few steps you'll configure a custom pool and
assign your partition to this pool.
Navigate to the Shared Processor Pool Management screen by selecting your
server and running the Configuration > Virtual Resources > Shared Processor
Pool Management task.
__ 4. In the panel that opens, configure the Reserved Processing Units and Maximum
Processing Units for the SharedPool0x, where x is your LPAR number. For example,
an LPAR named sys036_lpar1 will select pool 1; an LPAR named sys036_lpar2 will
select pool 2, and so on. Select the Pool Name with a left click on the name, and
then change the value for Reserved Processing Units to 0.5 and the Maximum
Processing Units to 2. Click OK to close the Modify Pool Attributes panel, then close
the Shared Processor Pool panel.
An example Shared Processor Pool panel is shown below. In this example, the
first user-defined pool is being modified.
__ 5. On the HMC, look at the server table once again. Notice the change in the
Available Processor Units column.
Record the current amount of available processing units: __________________
The number should be reduced by your Reserved Processing Units value (0.5). Note
that multiple students may all be trying this at the same time, so the value may
change by more than the reserved processing units value that you set. If no one else is
using your server, then the adjustment should equal the reserved processing
units value.
__ 6. Now assign your shared processor partition to the shared processor pool that you
configured. The pool assignment can be done in the LPAR profile, or it can be done
dynamically. You will configure it dynamically.
Once again run the Configuration > Virtual Resources > Shared Processor Pool
Management task.
In the Shared Processor Pool window, select the Partitions tab. Then left-click
your assigned partition's name. Use the pulldown for the Pool Name(ID) field and
look for your configured shared processor pool name. The default pool and any
other pool with the maximum processor unit attribute set to a whole number will
display in the pull-down list. Select OK to exit.
__ 7. Using an SSH shell, log in to the HMC and run the following command to display the
shared processor pools' attributes and the LPAR pool assignments:
lshwres -r procpool -m <managed_system>
Here is an example of an lshwres command and its output that shows that the
sys504_lpar3 partition is assigned to the SharedPool03 pool. The rest of the
partitions are assigned to the DefaultPool.
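The output is one comma-separated record per pool. The following is only a
representative sketch (the managed system name sys504 and the exact attribute
names may vary by HMC release; check the lshwres man page):
lshwres -r procpool -m sys504
name=DefaultPool,shared_proc_pool_id=0,"lpar_names=sys504_lpar1,sys504_lpar2,sys504_lpar4"
name=SharedPool03,shared_proc_pool_id=3,max_pool_proc_units=2.0,curr_reserved_pool_proc_units=0.5,"lpar_names=sys504_lpar3"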
__ 8. Use a Telnet or SSH session to connect to your assigned partition. Use the following
lparstat command to verify your partition is assigned to your shared processor
pool: lparstat -i | grep "Shared Pool"
Here is example output showing the shared pool ID 3.
Shared Pool ID : 3
__ 9. In the session to your LPAR, start lparstat 2 5. Does the app column appear?
This was configured in Exercise 2. If there is no app column, enable it by opening the
properties of your partition on the HMC. Go to the Hardware tab, then the
Processor tab, and click the Allow performance information collection checkbox
as shown below.
If you just configured the app column, run the lparstat 2 5 command again.
Can you explain the app column value? Record the value here: ______________
The app column value reflects the amount of idle CPU cycles available in your
shared processor pool. As your LPAR is not CPU constrained, the app value
should be closed to the maximum capacity value of the user-defined shared pool
(2.00).
__ 10. Start topas -C. Wait for topas to initialize. You are looking at host metrics. Press
the p key to go to the shared processor pool metrics.
__ a. In the pool metrics, you should see your shared pool ID. Observe how the psize
is the same value that you set for the maximum processor units for the pool. This
value is also seen in the maxc column (multiplied by 100).
__ b. Observe the entc column value for your shared pool ID. Can you explain the
value?
In the following example the entc column shows a value of 85.0. This
corresponds to the sum of the logical partition capacity entitlement (Desired
CE=0.35) plus the Reserved Capacity value of the shared processor pool
(Reserved value=0.5) multiplied by 100. The reserved value of the shared
processor pool is part of the entitlement for that shared pool.
__ 11. To focus on your pool ID, use the up and down arrow keys to move the cursor onto
your pool ID number in the pool column at the left side; when the highlighting moves
to that number, press the f key to toggle the focus. This changes what is displayed
in the lower portion of the output. Note that when PhysB (busy physical processors)
appears in topas, it is the same as physc or pc (physical processors consumed) in
other places and other tools.
An example topas screen is shown below. To get here, run topas -C, then
press the p key to get to the pool statistics. Use the down arrow to get to your
shared processor pool ID, then press the f key.
__ 12. Observe the CPU resource consumption by watching the PhysB and %EntC for your
partition. Keep this topas running for the next few steps.
__ 13. Dynamically change your logical partition to be an Uncapped partition. Set the
weight to 128.
Using the HMC GUI, select your LPAR, and then select Dynamic Logical
Partitioning > Processor > Add or Remove. Check the uncapped box, and
use the weight value of 128.
__ 14. Start another Telnet session to your partition and change directory to
/home/an31/ex3, and invoke the following command to generate a workload on the
CPU:
# ./spload -t 2
__ 15. Observe the PhysB and %EntC for your partition in the topas screen. Can you explain
the PhysB value?
The PhysB value should be 1.00 and the %EntC value should be 285. The
partition consumes 285% of its entitlement (0.35), which is one CPU (1.0). The
CPU consumption is limited by the number of virtual processors in the LPAR.
__ 16. Dynamically change the number of virtual processors for your partition from 1 to 3.
On the HMC, select your partition and run the Dynamic Logical Partitioning >
Processor > Add or Remove task. Change the assigned virtual processors to
3.
__ 17. Look at the topas screen and notice the PhysB and %EntC values for your LPAR.
Can you explain the PhysB value?
The PhysB value is about 2.00. We could have expected the logical partition to
consume up to three physical processors as we have three VPs configured. But
the maximum capacity of the shared processor pool is 2 and the LPAR cannot
consume more than this maximum capacity. The %EntC is about 571.2%, which
also means that it is using the equivalent of 2.0 processing units.
__ 18. Dynamically reconfigure your LPAR to Capped mode and the number of Desired
Virtual processors to 1. Then stop spload and topas.
On the HMC, select your partition and run the Dynamic Logical Partitioning >
Processor > Add or Remove task. Change the assigned virtual processors to
1, and uncheck the Uncapped checkbox.
In the session where spload is running, press Ctrl-C. In the session where
topas is running, type a q to quit.
__ 20. Assign both of your team's partitions to the same shared processor pool. If your
team is using the lpar1 and lpar2 partitions, assign them both to SharedPool1. If
your team is using the lpar3 and lpar4 partitions, assign them both to SharedPool3.
In this example, lpar3 and lpar4 are assigned to SharedPool3:
__ 21. Start a CPU load on one of your teams partitions. Change the directory to
/home/an31/ex3, and invoke the following command to generate the workload:
# ./spload -t 2
Important
Keep this executable running until instructed to stop it. You will have to monitor the
transaction rate output as you make changes to the system configuration.
What is the value of the available processing capacity (app) in this virtual shared
pool?
What can you say about processing capacity of your second partition using the
same shared processor pool?
The processing capacity available in the virtual shared pool can be seen using
the app column value of the lparstat command output. In this case, the value
is approximately 1.65. Recall that the LPARs are configured as capped with 0.35
processing units.
Since the user-defined shared pool has 1.65 physical processors available, and
one of your LPARs is using 0.35 of a processor capacity, we can say that the
other LPAR in your shared processor pool is not using any processor capacity at
all. We can say that this LPAR cedes all its processor cycles to the physical
shared processor pool.
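One way to observe this is to run lparstat on the second (idle) partition while the
load runs on the first; a minimal sketch:
# lparstat 2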
__ 23. Use DLPAR to change the partition running the CPU load to be an Uncapped
partition. Set the Weight value to 128. Also set the number of virtual processors to 2.
Using the HMC GUI, select your LPAR, and then select Dynamic Logical
Partitioning > Processor > Add or Remove. Check the Uncapped box, and
use the Weight value of 128. Type 2 in the Assigned virtual processors input
box, then click the OK button to make the change.
__ 24. Go to the session for the LPAR that is running the lparstat 2 command. Notice
the physc value and the app value. The app statistics may bounce around a bit.
What do you notice?
With two virtual processors configured, the logical partition is able to consume
nearly all the processor cycles (physc=1.99) from the shared processor pool in
this example. The amount of idle CPU cycles available in the shared processor
pool is mostly zero. Your app numbers may bounce around a bit, but you should
notice at least half the time they are close to zero.
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
0.2 2.4 0.0 97.5 0.02 4.9 0.2 1.97 718 0
99.6 0.4 0.0 0.0 1.99 568.4 100.0 0.03 1868 4
99.7 0.3 0.0 0.0 1.99 568.8 100.0 0.04 1858 0
99.7 0.3 0.0 0.0 1.99 568.8 100.0 0.02 1846 4
99.7 0.3 0.0 0.0 1.99 568.6 100.0 2.00 1844 0
99.7 0.3 0.0 0.0 1.99 568.6 100.0 0.03 1872 0
__ 25. While looking at the physc value, start a CPU load on your second partition. Change
the directory to /home/an31/ex3, and start the executable spload; keep it
running until instructed to stop it.
# ./spload -t 2
__ 26. Using the lparstat command with an interval of 2 on the first (uncapped) partition,
do you notice anything different? What happened to the processing capacity
(physc)? Run lparstat in the second (capped) partition and notice the physc
statistic.
The first LPAR is uncapped and the number of virtual processors is 2, so the
physical capacity can grow up to 2.0 physical CPUs. But it has to contend with
the capped LPAR which is now running a CPU load. The capped LPAR has a
capacity entitlement of 0.35, and cannot grow more than this value. The
uncapped LPAR manages to consume 1.65, and the capped one is at 0.35.
On the uncapped logical partition:
# lparstat 2
System configuration: type=Shared mode=Uncapped smt=4 lcpu=8
mem=1024MB psize=2 ent=0.35
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.7 0.3 0.0 0.0 1.65 471.1 100.0 0.10 2400 4
99.7 0.3 0.0 0.0 1.65 471.1 100.0 2.00 2400 0
99.7 0.3 0.0 0.0 1.65 471.1 100.0 0.09 2400 0
99.6 0.4 0.0 0.0 1.65 471.1 100.0 2.00 2400 2
On the capped logical partition:
# lparstat 2
System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB
psize=2 ent=0.35
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.1 0.8 0.0 0.1 0.35 99.9 100.0 2.00 400 0
99.0 0.8 0.0 0.1 0.35 99.9 100.0 0.07 400 0
99.1 0.8 0.0 0.1 0.35 99.9 100.0 2.00 400 0
99.1 0.8 0.0 0.1 0.35 99.9 100.0 0.08 400 1
__ 27. Now change the configuration of the capped partition to a sharing mode of
Uncapped with a weight of 128, and set the assigned virtual processors to 2. This
configuration is now the same as the other partition's.
Using the HMC GUI, select your LPAR, and then select Dynamic Logical
Partitioning > Processor > Add or Remove. Check the Uncapped box, and
use the default weight value of 128. Set 2 in the Assigned virtual processors
input box, then click the OK button to make the change.
__ 28. What do you notice about the transaction rate reported by spload and the processor
consumed on both logical partitions? Why is this?
The transaction rate on both partitions is now about the same. This is because
the capacity entitlement and the uncapped weight value of both partitions are the
same. This means both the guaranteed and excess shared pool cycles are
allocated equally to both partitions.
If you encounter strange behavior, such as a difference in the CPU physical
capacity consumed between LPARs, you should deactivate the virtual processor
folding feature on your logical partitions by executing
schedo -o vpm_xvcpus=-1. At the time of writing, in some situations the virtual
processor folding feature can prevent dispatching the logical threads to all the
virtual processors in the partition.
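Folding can be restored afterward by setting the tunable back to its default value of 0:
# schedo -o vpm_xvcpus=0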
On the first LPAR:
# lparstat 2
System configuration: type=Shared mode=Uncapped smt=4 lcpu=8
mem=1024MB psize=2 ent=0.35
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.4 0.6 0.0 0.0 1.00 285.3 100.0 0.12 2406 0
99.4 0.6 0.0 0.0 1.00 285.3 100.0 2.00 2400 1
99.4 0.6 0.0 0.0 1.00 285.3 100.0 0.11 2400 7
99.4 0.6 0.0 0.0 1.00 285.4 100.0 2.00 2400 1
99.6 0.4 0.0 0.0 1.00 285.2 100.0 0.12 2400 5
On the second LPAR:
# lparstat 2
System configuration: type=Shared mode=Uncapped smt=4 lcpu=8
mem=1024MB psize=2 ent=0.35
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
99.7 0.3 0.0 0.0 1.00 285.4 100.0 0.00 1600 0
99.7 0.3 0.0 0.0 1.00 285.2 100.0 0.01 1604 0
99.7 0.3 0.0 0.0 1.00 285.4 100.0 2.00 1600 2
99.7 0.3 0.0 0.0 1.00 285.4 100.0 0.00 1600 0
99.6 0.4 0.0 0.0 1.00 285.4 100.0 2.00 1600 0
__ 29. Now use the HMC GUI to change the uncapped weight of one of your logical
partitions to 254. By setting a higher weight value, we want that partition to be able
to get more of the available excess shared pool cycles than the second partition.
What do you notice? Is the partition with the higher weight value getting more CPU
cycles than the other one?
Using the HMC GUI, select your LPAR, and then select Dynamic Logical
Partitioning > Processor > Add or Remove. Change the weight value from
128 to 254, then click OK to make the change.
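The same change can be sketched from the HMC command line as well (the
system name george and partition name lpar1 are assumptions used only for
illustration):
chhwres -m george -r proc -o s -p lpar1 -a "uncap_weight=254"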
For our example system configuration used in the hint examples so far, there are
16 physical CPUs and the two partitions are using a custom shared processor
pool. You will not see any difference in the physc values on the two logical partitions
because the partitions do not compete to get extra CPU cycles. Uncapped
weights have no influence because the partitions never reach the point where
they are in competition for the cycles. This is because there are idle cycles still
available in the overall physical shared pool (rather than the user-defined shared
pool). Uncapped capacity is distributed among the uncapped shared processor
partitions that are configured in the entire server, not just the uncapped
partitions within a specific shared processor pool. Therefore, with our
configuration where there are still idle processing units in the physical shared
processor pool, there is no CPU contention and the uncapped weights are not
used.
__ 30. Kill the spload processes in both partitions with Ctrl-C.
__ 31. Reassign both partitions to the default shared processor pool ID 0. Unconfigure the
two custom pools that you and your partner configured by setting all values back to
zero.
Select the server and run the Configuration > Virtual Resources > Shared
Processor Pool Management task.
Click the Partitions tab. Click each partition in turn and reassign to the
DefaultPool.
Go to the Pools tab.
For each custom pool, click on the name and enter zeros in the Reserved
processing units and the Maximum processing units fields as shown below.
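If you would rather script this cleanup, a heavily hedged sketch against the
procpool resource type might look like the following; the pool name pool1 and the
attribute names are assumptions, so verify them on your HMC level before use:
chhwres -m george -r procpool -o s --poolname pool1 -a "max_pool_proc_units=0,reserved_pool_proc_units=0"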
__ 32. Access the HMC of your managed system using your browser.
__ 33. Select your managed system name and run the Properties task. Go to the
Capabilities tab and verify the managed system has the Active Partition
Processor Sharing Capable capability set to True. You may need to scroll toward
the bottom of the list of capabilities.
Here is an example of the Capabilities tab:
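As an alternative to scrolling through the GUI, the capability list can also be
dumped from the HMC command line (a sketch; the managed system name george
is an assumption). Look for the entry corresponding to Active Partition Processor
Sharing in the comma-separated output:
lssyscfg -r sys -m george -F capabilities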
Make sure both of the Processor Sharing options are not checked. Click OK.
Here is the example screen. Click OK when you have finished making the edits.
__ 36. Activate your partition using the Dedicated profile. It should have successfully shut
down by now.
If you still have the Managed Profiles window open, you can use the Actions >
Activate menu option. Otherwise, select the LPAR and run the Operations >
Activate > Profile task. Select the Dedicated profile, then click OK.
__ 37. Open a virtual terminal or a Telnet session to your partition. Use the lparstat
command to verify the partition type is Dedicated and the mode is Capped. (That is,
the option to share the dedicated processors when active is not enabled.)
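For example, a minimal check might look like this (the output is abridged and
illustrative; exact field formatting varies by AIX level):
# lparstat -i | grep -E "^Type|^Mode"
Type                 : Dedicated-SMT-4
Mode                 : Capped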
To check the current setting you can use the following command (but do this after
checking the lparstat command output):
lshwres -m george -r proc --level lpar -F curr_sharing_mode \
--filter lpar_names=lpar1
share_idle_procs_active
See the configuration changed message in the LPAR window. Notice there are more columns
now in the lparstat output. Also notice the mode is now Donating.
System configuration changed. The current iteration values may be
inaccurate.
0.1 0.6 0.0 99.3 0.95 5.2 0.0 0.0 0.0
__ 41. In your dedicated partition login session, you should have observed the following:
The System configuration changed message.
The mode is now Donating.
There are more columns of output: physc (physical capacity consumed), plus
%idon and %bdon, which show the idle and busy cycles donated to the shared
processor pool. You should see high values for %idle cycles and idle donated (%idon).
__ 49. In your shared processor partition, run the lparstat 4 command. Check the app
column to see the amount of idle cycles in the shared processor pool.
Examine the app column in the lparstat output to see the amount of idle CPU
cycles in the shared processor pool. Your training server may have a different
amount than the example below that shows about 3.7 processing units. Your
results will depend on the size of your system and the workloads in the other
LPARs:
# lparstat 4
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
0.4 2.0 0.0 97.6 0.01 4.2 0.9 3.70 247 0
0.0 1.4 0.0 98.6 0.01 2.7 0.1 3.74 230 0
0.0 1.2 0.0 98.7 0.01 2.3 0.0 3.73 176 0
__ 50. In your shared processor partition, start a CPU load in the background using the
following command:
# yes > /dev/null &
Remember that only uncapped partitions can use excess cycles in the shared
processor pool.
__ 51. Start the lparstat 4 command again and notice the drop in the app values. There
are fewer idle cycles now. Let the command continue to run. Since the LPAR is
configured with one virtual processor, you should see the app value drop by
approximately 1.0 processing units although your results may vary if another lab
team is performing this same procedure on your server. The yes command can
keep one processor 100% busy.
Here is the lparstat command and its expected output, which shows the app value
about 1.0 processing units less than it was before you started the yes program.
If your results are not the same, it could be that the other group of
students is affecting the results. You should see the app value reduced by at
least 1.0 processing units.
# lparstat 4
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
61.6 1.4 0.0 36.9 1.00 285.6 24.7 2.73 298 1
61.6 1.3 0.0 37.1 1.00 285.7 25.7 2.73 309 2
61.5 1.3 0.0 37.2 1.00 285.4 24.4 2.98 290 1
__ 52. While the lparstat 4 command is running, go to the dedicated processor
partition and start a load in the background using the command:
# yes > /dev/null &
__ 53. On the shared processor LPAR, what do you observe about the app value?
The app value is reduced as donated cycles are taken back by the dedicated
processor LPAR.
Example lparstat command and its expected output:
# lparstat 4
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
61.8 1.1 0.0 37.1 1.00 285.9 25.4 1.99 300 2
61.6 1.3 0.0 37.1 1.00 285.3 24.7 1.99 308 1
61.8 1.1 0.0 37.1 1.00 285.9 25.4 1.99 303 2
You may see different results based on the activity of the other lab team;
however, you should see an app value that is even lower than before.
__ 54. On the dedicated processor LPAR, run lparstat -d 2. Notice the idle and donate
values are very low (perhaps even zero).
Example command and its expected output:
# lparstat -d 2
__ 57. Stop the lparstat command and kill the yes job on the shared processor partition.
# jobs
[1] + Running yes > /dev/null &
# kill %1
__ 58. Shut down the dedicated processor partition and activate it with its Normal profile.
You do not need to wait until it is fully booted.
End of exercise
Exercise review/wrap-up
This exercise showed how multiple shared processor pools can be dynamically
configured. Students monitored the user-defined shared pool. The last part of the exercise
showed how to configure and monitor dedicated partitions running in donating mode.
Estimated time
02:00
Introduction
In this exercise, you will first create a logical volume or format an hdisk
device to use as a paging device for your assigned LPAR. Then, you will
configure your assigned LPAR profile to use the shared memory pool.
Each lab team will start a memory load on one of their assigned
LPARs and monitor the logical memory over-commit. AMS statistics
will be monitored using lparstat, vmstat, and topas.
Students will work alone to start a memory load to over-commit the
physical memory in the shared pool. The AMS behavior in this
resource-demanding environment will be observed using lparstat,
vmstat, and topas.
Requirements
This workbook
A computer with a web browser and a network connection to an
HMC running version 7.7.4 or above to support a POWER7
processor-based system
Utility for running Telnet or SSH
Preface
The procedures in this exercise depend on the availability of specific equipment. You
will need a computer system connected to the Internet, a web browser, a Telnet
program, and a utility for running SSH. You will also need a managed system capable of
running shared processor partitions. All lab systems must be accessible to each other
on a network.
All hints are marked by a >> sign.
The hints in this exercise reflect results obtained on a System p750 with a 4 GB shared
memory pool and a partition running AIX V7.1. Your system's specific results might differ,
but the overall conclusions should be the same.
__ 1. Use your web browser to connect to your assigned HMC. Log in with your HMC ID.
Verify your managed system is AMS capable by looking in the Server properties.
Go to the Systems Management application on the HMC. Expand the Servers
information, then select your managed system.
Choose Properties from the Tasks menu or from the tasks pad. The Active Memory
Sharing Capable capability value should be True. This value is set to True only if the
PowerVM Enterprise Edition is activated. The following example has been reduced to
showing only one capability for clarity:
__ 2. Check the shared memory pool configuration from the HMC GUI. This can be done
by accessing the Shared Memory Pool Management wizard. What is the size of
the current shared memory pool? ______________ GB
Go to the Systems Management application on the HMC. Expand the Servers
information, then select your managed system.
Select Configuration > Virtual Resources > Shared Memory Pool Management
from the Tasks menu or from the tasks pad.
In the pop-up window, verify the Pool size is set to 4 GB.
Note that on your system the available system memory and available pool memory may
be different than the example above.
__ 3. Open a Telnet or SSH session to the virtual I/O server partition that will be used as
the paging space partition (amsvios). Refer to your worksheet, or ask your instructor if
you need the connection information.
__ 4. In the next set of steps, you will configure paging space devices for your logical
partition. Perform this step to use an hdisk as a paging device for your partition.
__ a. Identify the hdisk number that you will use as paging space device for your
logical partition. Log in to the AMS virtual I/O server on your managed system
and list the hdisk devices using the lspv command. You should see rootvg
assigned to hdisk0, and other hdisks should be available and not part of any
volume group.
Log in to the AMS virtual I/O server and list the hdisk devices.
For the hdisk selection, the highest student number selects the highest hdisk
number. The lowest student number selects the lowest hdisk number. Record
your hdisk number here ____________.
Log in to the AMS VIOS and run the lspv command. You should see hdisk devices
available. Record your hdisk number.
$ lspv
hdisk0 0002acf2859db562 rootvg
hdisk1 0002acf29d805873 None
hdisk2 0002acf2ad17106b None
hdisk3 0002acf242d3b83d None
hdisk4 0002ace20f96431e None
__ b. To perform this next step you must synchronize with the other students sharing
your managed system. Only one student at a time can perform modifications to
the shared memory pool (adding a paging space device to the pool). If all of the
students perform this step simultaneously, (at least if they select Yes to make
changes to the pool), then only the last modifications will be taken into account.
and all the other modifications performed by the other students will be lost.
One student at a time. From the Shared Memory Pool Management wizard,
select the Paging Space Device(s) tab, then modify the shared memory pool to
add your hdisk device as a paging space device. If you do not see your hdisk, go
to the next step. Specify the AMS virtual I/O server (amsvios) as paging VIOS 1.
Do not specify a second paging VIOS.
Go to the Paging Space Device(s) tab. Then click the Add/Remove Paging Space
Device(s) button. The Modify Shared Memory Pool wizard should appear. Click Next
twice to get to the Paging VIOS screen.
The amsvios partition should already be listed as the Paging VIOS 1 as shown below.
If it is not already configured, then select it. Do not specify any VIO server in the paging
VIOS 2 option. Click Next.
Answer Yes to the question Do you wish to make paging space device changes to the
pool? Click Next.
In the Paging Space Device(s) menu, click Select Device(s).
Click the Refresh button.
A device list of available physical drives should appear. Here is an example output with
available physical disk drives. The list's content depends on the available hdisk devices
attached to the virtual I/O server.
Select your hdisk device name; then click OK. If your device name does not appear,
go to the next step.
The Paging Space Device(s) menu should appear showing your hdisk device name as
Paging space device.
Click Next, then check the Summary of all the selections that you made.
Click Finish.
__ c. When you have configured your paging device, you should see it in the Pool
Properties window.
When done, click OK to close the window. Tell the other students sharing the
same shared memory pool that you are done with the paging device
configuration. Then go to Part 2: Configure your LPAR to use the shared memory
pool.
__ d. If your disk device did not appear in the Paging Space Device(s) panel, then you
need to clear any lingering disk information from previous classes. To clear the
disk information, assign it to a new volume group then remove the volume group.
Here is an example using hdisk2:
$ mkvg -f -vg myvg1 hdisk2
$ reducevg myvg1 hdisk2
Now, return to the previous step 4b above to assign your paging device to the
shared memory pool.
Choose Configuration > Manage Profiles from the Tasks menu or from the tasks pad.
In the pop-up window, check the profile name (Normal_AMS), and choose Edit from
the Actions menu.
__ c. Change the profile to use the Shared memory mode. Configure the partition
shared memory to be: 512 MB minimum, 1 GB 512 MB desired, and 2 GB
maximum. Leave the properties window open until you have finished configuring the
profile.
Click the Memory tab in the Logical Partition Profile Properties window that pops up.
In the Memory mode box, select the Shared radio button.
In the Logical Memory box, enter 0 GB 512 MB for the minimum, 1 GB 512 MB for the
desired, and 2 GB for the maximum parameters.
Do not click OK yet.
__ d. Change the memory weight to 128. Select the xxx_amsvios as the Primary
Paging VIOS. Leave the Secondary Paging VIOS set to None. Do not select the
Custom I/O entitled memory box. Click the OK button when done.
On the same Memory tab you used in the last step, enter 128 in the Memory weight
field, and select amsvios virtual I/O server as the Primary Paging VIOS.
The Memory tab should now look like the example below. Click OK to make the
changes.
__ 9. As there is no relationship between a paging device and a logical partition before the
first LPAR activation, and because four students use the same shared memory pool,
you cannot be sure which paging device will be used by your partition when it
activates. Depending on which partition activates first in your managed system, your
partition might not use the device that you created.
__ a. When your partition has finished booting, identify the paging device used by your
partition. Use the HMC Shared Memory Pool Management wizard to check the
paging device configuration. Record the hdisk name or logical volume name of
the paging device assigned to your partition. Your partition is identified by its
partition ID number. Paging device name: _____________________
Go to the Systems Management application on the HMC. Expand the Servers
information, then select your managed system.
Choose Configuration > Virtual Resources > Shared Memory Pool Management
from the Tasks menu or from the tasks pad. In the pop-up window, select the Paging
Space Device(s) tab.
Record the Device name associated with your logical partition ID. Here is an example
showing the device name (hdisk devices in this example) and the partition ID.
__ 10. Once you see that the LPAR has booted in the console, log in as the root user.
__ a. Use the lparstat AIX command with the -i option and view the available
memory information. Check for the Memory Mode, the I/O Memory Entitlement,
the Variable Memory Capacity Weight value, and the physical memory size in the
shared memory pool. Use the man page for lparstat if you have questions
about the output of this command.
Log in to your partition and run lparstat -i. The output shows the memory settings
for your partition.
Identify the Memory Mode, the I/O Memory Entitlement, the Variable Memory Capacity
Weight value, and the physical memory size in the shared memory pool.
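For example, you might filter the relevant fields as sketched below. The output
shown is abridged and illustrative; exact field names and values depend on your
AIX level and configuration:
# lparstat -i | grep -iE "memory mode|memory entitlement|capacity weight|pool"
Memory Mode                     : Shared
Total I/O Memory Entitlement    : 77.000 MB
Variable Memory Capacity Weight : 128
Memory Pool ID                  : 0
Physical Memory in the Pool     : 4.00 GB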
__ b. Run the lparstat command to get statistics about the I/O memory
entitlement for your shared memory partition. What is the value of the I/O entitled
memory for your LPAR? ________ MB.
I/O memory entitlement can be seen using the lparstat -m command. Here is an
example lparstat command and its output, showing 77 MB of I/O entitled memory:
# lparstat -m
System configuration: lcpu=4 mem=1536MB mpsz=4.00GB iome=77.00MB iomp=9
ent=0.35
physb hpi hpit pmem iomin iomu iomf iohwm iomaf %entc vcsw
----- ----- ----- ----- ------ ------ ------ ------ ----- ----- -----
0.00 3318 2171 1.11 23.7 12.0 53.3 12.7 0 0.0 147943
__ c. How would you get detailed statistics about I/O memory pools?
I/O memory pool statistics can be seen using the lparstat -me command. Here is an
lparstat command output example:
# lparstat -me
physb hpi hpit pmem iomin iomu iomf iohwm iomaf %entc vcsw
----- ----- ----- ----- ------ ------ ------ ------ ----- ----- -----
0.00 3318 2171 1.11 23.7 12.0 53.3 12.7 0 0.0 180276
__ d. Run the vmstat -h command without any interval or count arguments. Look at the memory
mode, the shared memory pool size, and the pmem and loan values displayed
under the hypv-page section. If we consider your partition idle at that time, what
can you say about the sum of the pmem and loan column values?
When your partition is idle, the sum of the pmem and loan values is equal to the logical
memory size of the partition.
# vmstat -h
System configuration: lcpu=4 mem=1536MB ent=0.35 mmode=shared mpsz=4.00GB
kthr memory page faults cpu hypv-page
----- ----------- ------------ ---------- ----------- -----------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec hpi hpit pmem loan
1 1 174070 125243 0 0 0 2 5 0 6 617 371 0 0 99 0 0.00 0.1 3 1 1.11 0.00
__ e. Using the HMC, dynamically add 512 MB of memory to your LPAR. Then run the
vmstat -h command with no interval or count arguments again. What do you observe about
the pmem and loan column values?
The loan value increases as the overall logical memory size increases. Without any
memory load on your LPAR, your logical partition working set is completely backed by
physical memory in the shared pool, so your LPAR operating system can loan logical
memory to the PHYP. That is why the loan column value increased. Depending on the
memory activity on the other LPARs, the pmem value can fluctuate on your LPAR.
Here is an example vmstat -h command output:
# vmstat -h
System configuration: lcpu=4 mem=2048MB ent=0.35 mmode=shared mpsz=4.00GB
kthr memory page faults cpu hypv-page
----- ----------- ------------ ---------- ----------- ------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec hpi hpit pmem loan
1 1 177647 106784 0 0 0 1 5 0 6 608 370 0 0 99 0 0.00 0.1 3 1 1.12 0.63
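As an aside, the same dynamic 512 MB memory add can be driven from the HMC
command line (a sketch; the managed system name george and partition name
lpar1 are assumptions, and -q is specified in MB):
chhwres -m george -r mem -o a -p lpar1 -q 512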
__ f. The topas cross-partition view (topas -C) can be used to get statistics on the
shared memory pool and shared memory partitions. On your shared memory
partition, issue topas -C. The cross-partition panel should appear showing the
partitions.
__ g. Wait for topas to find the other LPARs. Then, to display the Memory Pool panel
from the CEC Panel, press the m key. This panel displays the statistics of all of
the memory pools in the system (at this time we have only one memory pool).
__ h. Display the partitions associated with the shared memory pool by selecting the
shared memory pool (move the cursor to highlight it) and press the f key. You
can look at the logical memory that is used for each partition (memu column) and
the logical memory loaned to the hypervisor by each LPAR (meml column). Consult
the topas man page for more information about this panel and its columns.
Keep this topas panel running while performing the next step.
Here is an example topas output showing the partitions using the shared
memory pool:
Topas CEC Monitor Interval: 10 Wed May 20 17:53:04 2009
Partitions Memory (GB) Memory Pool(GB) I/O Memory(GB)
Mshr: 4 Mon: 6.0 InUse: 4.4 MPSz: 4.0 MPUse: 4.0 Entl: 308.0 Use: 47.9
Mded: 0 Avl: 1.6 Pools: 1
Host mem memu pmem meml iome iomu hpi hpit vcsw physb %entc
-------------------------------------------------------------
lpar1 2.00 1.63 1.14 0.36 77.0 12.0 0 0 714 0.25 248.86
lpar4 2.00 1.68 1.00 0.50 77.0 12.0 0 0 280 0.01 5.49
lpar3 2.00 1.68 0.90 0.60 77.0 12.0 2 1 462 0.18 183.03
lpar2 2.00 1.42 0.96 0.54 77.0 12.0 0 0 622 0.01 6.10
__ i. Using the HMC GUI, perform a dynamic operation to remove 512 MB of memory
from your LPAR for a total of 1.5 GB. Monitor the mem, memu, and meml values
for your LPAR from the topas output. You should notice the values change
according to the logical memory size.
The mem, memu, and meml values change according to the logical memory size in the
partition.
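The equivalent dynamic removal can be sketched from the HMC command line as
well (again assuming the names george and lpar1; -q is in MB):
chhwres -m george -r mem -o r -p lpar1 -q 512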
__ 12. In your team's LPAR, issue topas -C to get the cross-partition panel. Notice the
pmem value.
Keep the topas -C command running while performing the following steps.
Here is a topas -C command output example:
Topas CEC Monitor Interval: 10 Mon Dec 10 18:03:03 2012
Partitions Memory (GB) Processors
Shr: 4 Mon: 6.0 InUse: 4.6 Shr:0.4 PSz: 4 Don: 0.0 Shr_PhysB 0.03
Ded: 0 Avl: - Ded: 0 APP: 4.0 Stl: 0.0 Ded_PhysB 0.00
Host OS Mod Mem InU Lp Us Sy Wa Id PhysB Vcsw Ent %EntC PhI pmem
-------------------------------------shared--------------------------------------
lpar1 A71 U-d 1.5 1.2 2 1 3 0 95 0.01 1272 0.35 7.5 0 0.97
lpar2 A71 U-d 1.5 1.1 2 1 3 0 95 0.01 408 0.35 7.4 0 1.08
lpar3 A71 U-d 1.5 1.1 2 0 2 0 96 0.01 308 0.35 6.0 0 1.03
lpar4 A71 U-d 1.5 1.2 2 0 2 0 96 0.01 322 0.35 5.9 0 0.93
__ 13. Open another Telnet or SSH window on your team's partition. In a session on your
team's LPAR, set the ulimit data limit to unlimited. There should be no output.
# ulimit -d unlimited
__ 14. In the second session, change the directory to /home/an31/ex4 and then run the
following command:
./amsload -M 800m -d 60
Note: The amsload tool generates a memory-intensive load. It will allocate 800 MB
of memory on your LPAR. The full allocation is reached at the end of
60 seconds (specified by the -d option). The output gives you the amount of
allocated memory versus the target memory to allocate.
# ./amsload -M 800m -d 60
minsize = 10485760 bytes (10.000000 MB)
maxsize = 838860800 bytes (800.000000 MB)
rate = 13806250 bytes/s (13.166666 MB/s)
rampup=60 sec (1.000000 min)
Todo loop=-1
Delay=1
Verbose=0
36 / 800 (MB)
49 / 800 (MB)
59 / 800 (MB)
63 / 800 (MB)
__ 15. Leave this command running for about 10 minutes while checking the pmem values
in the topas -C command output. What can you conclude?
In this topas output example, we have two logical partitions running an intensive
memory workload while the other two partitions are without any memory load. The
LPARs without the load have low pmem values, while the LPARs with the
load have high pmem values. The hypervisor allocated memory pages to
the demanding logical partitions. The partitions that are not memory constrained
have probably loaned a lot of free memory to the PHYP, so the amount of their
logical memory backed by physical memory in the shared pool is low.
Topas CEC Monitor Interval: 10 Mon Dec 10 18:24:42 2012
Partitions Memory (GB) Processors
Shr: 4 Mon: 6.0 InUse: 5.9 Shr:0.4 PSz: 4 Don: 0.0 Shr_PhysB 2.02
Ded: 0 Avl: - Ded: 0 APP: 2.0 Stl: 0.0 Ded_PhysB 0.00
Host OS Mod Mem InU Lp Us Sy Wa Id PhysB Vcsw Ent %EntC PhI pmem
-------------------------------------shared-----------------------------------
lpar1 A71 U-d 1.5 1.5 2 97 1 0 0 1.00 394 0.10 1002.7 0 1.37
lpar2 A71 U-d 1.5 1.4 2 97 2 0 0 1.00 412 0.10 998.1 2 1.37
lpar3 A71 U-d 1.5 1.5 2 1 3 0 95 0.01 937 0.10 8.2 0 0.63
lpar4 A71 U-d 1.5 1.5 2 0 3 0 95 0.01 372 0.10 7.1 0 0.63
__ 16. Stop the memory workload by typing Ctrl-C. Open a Telnet or SSH session to your
team's other LPAR. Run the ulimit -d unlimited command.
__ 17. Start the memory workload by running the following commands:
# cd /home/an31/ex4
# ./amsload -M 800m -d 60
Leave this command running for about 10 minutes while checking the pmem values
in the topas -C command output. The pmem value for the LPAR running the
memory-intensive workload should increase, while the pmem value for your team's
other LPAR decreases.
This is the scenario where the LPAR previously running a memory-demanding workload
stops, and another LPAR then starts an intensive memory workload. The PHYP
allocates memory pages to the demanding partitions.
Topas CEC Monitor Interval: 10 Mon Dec 10 18:34:42 2012
Partitions Memory (GB) Processors
Shr: 4 Mon: 6.0 InUse: 5.9 Shr:0.4 PSz: 4 Don: 0.0 Shr_PhysB 2.02
Ded: 0 Avl: - Ded: 0 APP: 2.0 Stl: 0.0 Ded_PhysB 0.00
Host OS Mod Mem InU Lp Us Sy Wa Id PhysB Vcsw Ent %EntC PhI pmem
-------------------------------shared------------------------------------
lpar1 A71 U-d 1.5 1.5 2 97 1 0 0 1.00 394 0.10 1002.7 0 1.57
lpar2 A71 U-d 1.5 1.4 2 97 2 0 0 1.00 412 0.10 998.1 2 1.63
lpar3 A71 U-d 1.5 1.5 2 1 3 0 95 0.01 937 0.10 8.2 0 0.40
lpar4 A71 U-d 1.5 1.5 2 0 3 0 95 0.01 372 0.10 7.1 0 0.40
Your statistics will differ from the example above because of variable activity in
the other partitions.
__ 18. Stop any amsload processes and stop the topas program with Ctrl-C.
In the next test, you will simultaneously run memory-intensive workloads on both of your
team's partitions. You will then check the partitions' loaning and hypervisor paging
activities. The observed AMS statistical values will be affected by the activity of the
partitions that share the memory pool.
__ 19. Restart the operating systems of both your and your teammate's partitions to reset
the statistics.
Run the shutdown -Fr command in each partition.
__ 20. Once the partitions have rebooted, open a wide Telnet window on each partition and
issue the vmstat -h 2 command. Leave these commands running during the next
few steps.
__ 21. Open a Telnet window on the VIO server used as the paging partition, then check
the paging device activity. Start topas, then press ~ to switch to the nmon screen.
From the nmon screen, type d to get the disk I/O graph.
You should see this type of I/O graph:
+-topas_nmon--S=WLMsubclasses----Host=george4--------Refresh=2 secs---18:53.04-+
Disk-KBytes/second-(K=1024,M=1024*1024) -------------------------------------
Disk Busy Read Write 0----------25-----------50------------75--------100
Name KB/s KB/s | | | | |
hdisk3 0% 0 0| |
hdisk2 0% 0 0| |
hdisk4 0% 0 0| |
hdisk0 0% 0 0| |
hdisk5 0% 0 0| |
hdisk1 0% 0 0| |
Totals 0 0+-----------|------------|-------------|----------+
------------------------------------------------------------------------------
__ 22. Start a memory load on both partitions. Run the following command in both LPARs:
./amsload -M 1000m -d 60
This will consume 1 GB of memory in each LPAR. If you get a realloc: Not
enough space. message, run the ulimit -d unlimited command.
Example steps:
# ulimit -d unlimited
# cd /home/an31/ex4
# ./amsload -M 1000m -d 60
__ 23. While the memory is consumed by the amsload tool, use the vmstat command
output to see the fre (amount of free memory) and loan values
decreasing. Also, verify that the hypervisor page-in activity (hpi column) increased.
Here is a vmstat output example. After a while, the loan value goes to zero. The hpi
column shows hypervisor paging space activity. This activity depends on the activity of the other
partitions sharing the memory pool. AIX operating system paging space activity
might also occur.
# vmstat -h 2
kthr memory page faults cpu hypv-page
----- ----------- ------------------------ ------------ ----------------------- -------------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec hpi hpit pmem loan
2 1 438809 1091 0 88 2070 2244 2320 0 224 187 939 70 7 19 5 0.35 99.8 0 0 1.50 0.00
2 0 443041 1165 0 31 2085 2244 2310 0 176 178 846 62 6 30 2 0.35 98.8 0 0 1.50 0.00
1 0 447296 1409 0 0 2166 2249 2295 0 122 171 703 60 8 32 0 0.34 98.4 0 0 1.50 0.00
1 0 451509 1021 0 2 1966 1987 2048 0 112 1082 615 62 6 31 1 0.35 100.6 0 0 1.50 0.00
1 0 457765 34 0 0 2434 3209 3886 0 130 97 631 60 7 32 0 0.35 99.0 0 0 1.50 0.00
1 0 464204 1164 0 0 3541 3209 3343 0 138 103 704 60 8 32 0 0.35 99.5 0 0 1.50 0.00
__ 24. On the VIOS paging partition, check the paging device activity in the nmon output.
You should see your paging device is busy. Here is an example of I/O graph activity
that shows two disks are busy.
In this example nmon output, we can see read and write activity for hdisk5 and hdisk6.
These hdisks are the paging space devices of the two partitions running the workload
generated by the amsload tool.
__ 25. Keep watching the activity until you are no longer curious. Stop all amsload
processes with Ctrl-C, and stop any analysis tools that are still running.
__ 26. Shut down your logical partition. (Your teammate should do the same with his or her
partition.) Reactivate your partition using the Normal profile to use dedicated
memory.
End of exercise
Estimated time
00:35
Introduction
This exercise is designed to give you experience working with Active
Memory Expansion.
Known problems
There are no known problems.
Preface
Two versions of these instructions are available: one with hints and one without. You can
use either version to complete this exercise. Also, don't hesitate to ask the instructor if you
have questions.
The output shown in the answers is an example. Your output and answers based on the
output might be different.
All hints are marked with a >> sign.
__ 4. In the monitoring window, run vmstat with an interval of five seconds and no limit
on iterations.
The suggested command and sample output are:
# vmstat 5
System configuration: lcpu=4 mem=1024MB ent=0.35
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec
0 0 182845 27343 0 0 0 0 0 0 5 519 247 0 2 98 0 0.01 3.9
0 0 182845 27343 0 0 0 0 0 0 7 29 232 0 1 99 0 0.01 2.4
1 0 182829 27359 0 0 0 0 0 0 2 407 227 0 2 98 0 0.01 3.5
2 0 182829 27359 0 0 0 0 0 0 6 37 230 0 1 99 0 0.01 2.5
2 0 182964 27156 0 0 0 0 0 0 11 926 250 0 2 97 0 0.02 4.6
__ 5. Return to your first window, change directory to /home/an31/ex5, and list the files.
The suggested commands and sample output are:
# cd /home/an31/ex5
# ls
memory-eater
__ 6. Execute the memory-eater program in the background and then execute the
lparstat command for an interval of five seconds and for four intervals.
What is the average physical processor consumption (physc)?
The suggested commands and sample output are:
# ./memory-eater &
Allocating memory
[1] 5374038
# lparstat 5 4
System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB
psize=4 ent=0.35
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
0.2 1.3 0.0 98.5 0.01 2.7 0.0 2.98 183 0
0.4 1.7 0.0 97.9 0.01 3.8 0.8 2.97 247 0
0.2 1.3 0.0 98.5 0.01 2.8 0.0 2.98 199 0
0.3 1.7 0.0 98.1 0.01 3.4 0.0 2.98 181 0
In the example output, the average physical processor consumption was about 0.01
processing units. Note: You may or may not see an app column in the output.
__ 7. In your monitoring window, observe the effect of the memory-eater workload on the
paging space I/O.
Did the extra memory workload cause any thrashing on the paging space?
(Thrashing is persistent page-ins and page-outs in each interval.)
# lparstat 5 4
System configuration: type=Shared mode=Capped smt=4 lcpu=4 mem=1024MB
psize=4 ent=0.35
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
3.9 4.4 6.7 85.0 0.06 16.7 1.4 2.92 1913 0
12.1 11.3 16.2 60.4 0.16 46.8 3.7 2.82 5172 3
10.6 13.4 14.5 61.5 0.16 46.9 6.3 2.81 4723 3
12.5 12.0 16.2 59.4 0.17 49.3 5.4 2.80 5487 3
In the example output, the average physical processor consumption was between 0.06
and 0.17 processing units.
This is a significant increase over the previous situation. Part of this is the activity of the
memory-eater threads, but part of it is the memory management overhead related to the
paging space activity.
__ 9. In your monitoring window, observe the effect of the memory-eater workload on the
paging space I/O.
Did the extra memory workload cause any thrashing on the paging space?
(Thrashing is persistent page-ins and page-outs in each interval.)
The example output is:
kthr memory page faults cpu
----- ----------- ------------------------ ------------ -----------------------
r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec
1 1 305555 2400 0 1147 1121 1129 10201 0 1206 35 2517 4 6 81 8 0.07 20.6
1 2 305556 2454 0 2449 2473 2474 2967 0 2546 472 5360 10 9 63 17 0.14 38.7
2 2 305689 2507 0 2459 2467 2498 15911 0 2563 122 5300 17 11 58 15 0.18 51.0
1 1 305673 2520 0 2558 2487 2580 27004 0 2702 472 5754 10 14 61 15 0.16 44.8
In the example output, there is persistent paging space paging activity (pi and po
columns).
__ 10. List and then terminate all of your background jobs.
The suggested commands and sample output are:
# jobs
[4] + Running ./memory-eater &
[3] - Running ./memory-eater &
[2] Running ./memory-eater &
[1] Running ./memory-eater &
# kill %1 %2 %3 %4
[4] + Terminated ./memory-eater &
[3] + Terminated ./memory-eater &
[2] + Terminated ./memory-eater &
[1] + Terminated ./memory-eater &
__ 11. Shut down your AIX operating system with no delays. Do not wait for the shutdown
to complete, but instead go directly to the next step in this exercise.
The suggested command is:
# shutdown -F
# lparstat -c 5 4
System configuration: type=Shared mode=Capped mmode=Ded-E smt=4 lcpu=4 mem=1536MB
tmem=1024MB psize=4 ent=0.35
%user %sys %wait %idle physc %entc lbusy app vcsw phint %xcpu xphysc dxm
----- ----- ------ ------ ----- ----- ------ --- ----- ----- ------ ------ ------
0.8 1.6 0.8 96.8 0.02 4.9 1.1 2.96 589 0 9.9 0.0017 0
1.2 2.5 0.8 95.4 0.02 6.7 0.7 2.96 422 0 8.9 0.0021 0
0.3 1.5 0.3 97.9 0.01 3.5 0.3 2.97 334 0 10.1 0.0012 0
0.8 2.1 0.1 97.0 0.02 5.1 0.7 2.97 326 0 5.5 0.0010 0
In the example output, the average physical processor consumption was between 0.01
and 0.02 processing units.
In the example output, the average physical processor consumption due to memory
expansion and compression was between 0.0010 and 0.0021 processing units.
__ 23. In your monitoring window, observe the effect of the memory-eater workload on the
paging space I/O.
Did the extra memory workload cause any thrashing on the paging space?
Did the extra memory workload cause any thrashing on the compressed memory
pools?
The example vmstat -c 5 output is:
kthr memory page
------- ------------------------------------------------------ -------------------
r b avm fre csz cfr dxm ci co pi po
0 0 283219 114287 18490 2523 0 2 77 0 0
0 0 283219 114287 18490 2523 0 0 0 0 0
0 0 283221 114273 18490 2534 0 5 0 0 0
0 0 283221 114269 18490 2534 0 0 0 0 0
The columns to the right of the po column are not shown to improve readability.
In the example output:
There was no paging space paging activity (pi and po columns).
There was some transitory compression pool paging activity (ci and co columns).
__ 24. Return to your first window and execute two more memory-eater programs in the
background and then execute the lparstat command for an interval of five
seconds and for four intervals.
What is the average physical processor consumption (physc)?
How much physical processor consumption is due to AME compression and
decompression?
# ./memory-eater &
[4] 6684812
Allocating memory
# lparstat -c 5 4
System configuration: type=Shared mode=Capped mmode=Ded-E smt=4 lcpu=4 mem=1536MB
tmem=1024MB psize=4 ent=0.35
%user %sys %wait %idle physc %entc lbusy app vcsw phint %xcpu xphysc dxm
----- ----- ------ ------ ----- ----- ------ --- ----- ----- ------ ------ ------
15.6 23.4 2.2 58.7 0.17 47.2 27.2 2.82 883 1 73.7 0.1217 0
29.0 49.1 8.8 13.0 0.30 84.9 72.2 2.69 593 7 86.7 0.2576 0
7.9 11.5 2.3 78.4 0.09 25.0 12.5 2.90 737 0 63.3 0.0553 0
20.9 36.5 7.8 34.8 0.23 64.3 50.0 2.76 737 13 81.8 0.1841 0
In the example output, the average physical processor consumption was between 0.09
and 0.30 processing units.
This is a significant increase over the previous situation, though not that much more
overhead than the non-AME, two memory-eater case. (You can refer back to the
hints in step 8 to see the non-AME lparstat output with four memory-eater processes.)
Part of this is the CPU activity of the memory-eater threads, but part of it is the
memory management overhead related to the compressed memory pool
paging activity.
__ 25. In your monitoring window, observe the effect of the memory-eater workload on the
paging space I/O.
Did the extra memory workload cause any thrashing on the paging space?
Did the extra memory workload cause any thrashing on the compressed memory
pools?
The example output is:
kthr memory page
------- ------------------------------------------------------
r b avm fre csz cfr dxm ci co pi po
4 2 345281 56985 39052 3640 0 9483 9666 0 0
5 4 345281 57525 39052 3445 0 29908 29775 0 0
3 0 345281 57230 39052 3553 0 11162 11216 0 0
4 2 345281 58590 39052 3523 0 29454 29179 0 0
In the example output:
- There was no paging space paging activity (pi and po columns).
- There was significant and persistent compressed memory pool paging activity (ci
and co columns).
__ 26. Return to your first window and execute two more memory-eater programs in the
background and then execute the lparstat command for an interval of five
seconds and for four intervals.
What is the average physical processor consumption (physc)?
How much physical processor consumption is due to AME compression and
decompression?
The suggested commands and sample output are:
# ./memory-eater &
[5] 6946916
Allocating memory
# ./memory-eater &
[6] 4915302
Allocating memory
# lparstat -c 5 4
System configuration: type=Shared mode=Capped mmode=Ded-E smt=4 lcpu=4 mem=1536MB
tmem=1024MB psize=4 ent=0.35
%user %sys %wait %idle physc %entc lbusy app vcsw phint %xcpu xphysc dxm
----- ----- ------ ------ ----- ----- ------ --- ----- ----- ------ ------ ------
31.4 48.7 11.4 8.5 0.33 95.1 66.0 2.65 2061 117 71.5 0.2382 0
29.3 46.2 11.9 12.6 0.31 88.6 62.7 2.67 1794 114 71.9 0.2230 0
26.4 34.2 6.5 32.9 0.28 79.1 36.2 2.71 2245 32 63.7 0.1762 0
29.3 49.5 13.6 7.6 0.33 93.0 61.4 2.66 1646 91 73.3 0.2386 0
In the example output, the average physical processor consumption was between 0.28
and 0.33 processing units.
Part of this is the CPU activity of the memory-eater threads, but part of it is the
memory management overhead related to the compressed memory pool
paging activity.
In the example output, the processor consumption due to AME compression and
decompression was reported to be between 0.17 and 0.24 processing units. This is an
increase over the previous case.
__ 27. In your monitoring window, observe the effect of the memory-eater workload on the
paging space I/O.
Did the extra memory workload cause any thrashing on the paging space?
Did the extra memory workload cause any thrashing on the compressed memory
pools?
__ 32. Scroll down and view the System Resource Statistics and the AME Statistics. The
AME statistics show how much CPU resource was needed to compress a certain
amount of memory. Notice the actual compression ratio.
Here is example output which shows that 0.18 processing units were needed to compress
680 MB of memory:
System Resource Statistics: Current
--------------------------- ----------------
CPU Util (Phys. Processors) 0.22 [ 22%]
Virtual Memory Size (MB) 1590 [104%]
True Memory In-Use (MB) 1019 [100%]
Pinned Memory (MB) 330 [ 32%]
File Cache Size (MB) 2 [ 0%]
Available Memory (MB) 0 [ 0%]
Example output showing the achievable compression ratio is much higher (4.04) than
the current expansion factor:
Active Memory Expansion Modeled Statistics:
-------------------------------------------
Modeled Expanded Memory Size : 1.50 GB
Achievable Compression ratio : 4.04
The recommendations state that this LPAR's memory could be reduced to 1.25 GB
(from 1.5 GB). This would allow you to allocate the extra 0.25 GB to other workloads. It
also recommends setting the expansion factor to 1.20 to save some CPU resources.
__ 34. List and then terminate all of your background jobs.
__ 35. In your monitoring window, terminate your vmstat execution with Ctrl-C.
__ 36. Shut down your LPAR with shutdown -F. When the state on the HMC is Not
Activated, activate it with the Normal profile.
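If you prefer the HMC command line for this restart, here is a sketch using
chsysstate (the managed system name george and partition name lpar1 are
assumptions):
chsysstate -m george -r lpar -o shutdown -n lpar1 --immed
chsysstate -m george -r lpar -o on -n lpar1 -f Normal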
End of exercise
Estimated time
01:00
Introduction
The purpose of this exercise is to monitor and measure a virtualized
environment.
Requirements
This workbook
A computer with a web browser and a network connection to an
HMC running Version 7.7.4.0 2 or later configured to support a
POWER7 processor-based system
A Virtual I/O Server version 2.2.2.1 and a client logical partition
running AIX 7
Utility for running Telnet or SSH
how the virtual Ethernet bandwidth increases. The second test shows
how the throughput scales at different processor capacity entitlement
values. A scalability factor will be used to see if the performance and
throughput scales linearly.
Part 3 Shared Ethernet: In this part, the students will monitor the
shared Ethernet adapter, then tune it by adjusting the capacity
entitlement of the virtual I/O server. The students must use the virtual
I/O server command line interface when checking the devices
attributes or displaying the activity and statistics.
The TCP/IP and device tuning are not covered here and are not part of
the exercise objectives. Checking and reconfiguring the real and
virtual devices is also part of a tuning activity, but in this exercise, the
monitoring and tuning options are limited to the CPU entitlement of the
virtual I/O server.
In some cases, the systems assigned to the class might contain
10/100/1000 Ethernet adapters in both the AIX partitions. If these
adapters are connected to a gigabit Ethernet switch, then students will
observe performance numbers greater than those shown in the hints.
Preface
All procedures of this exercise depend on the availability of specific equipment. You will
need a computer system connected to the Internet, a web browser, a Telnet program,
and a utility for running SSH. You will also need a managed system with Fibre Channel
adapters. All lab systems need to be accessible to each other on a network.
All hints are marked by a sign.
__ 1. Before starting the exercise, use the HMC GUI to check the VIOS processor
configuration. The desired configuration is:
Assigned Processing units: 0.10
Assigned Virtual Processors: 1
Partition is Capped
Dynamically change the processor configuration if necessary.
To check the configuration, open the VIOS partition properties. Here's an example in
which the partition is configured correctly:
If your VIOS partition needs its processor configuration changed, select the partition
and run the Dynamic Logical Partitioning > Processor > Add or Remove task. Fill
out the attributes as shown below. The Uncapped Weight value will not be used, so the
value doesn't matter. Click OK to execute the change.
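If you prefer the HMC command line over the GUI, here is a sketch of an equivalent dynamic change (the names in angle brackets are placeholders; this example removes 0.1 processing units from the partition):
chhwres -r proc -m <managed system> -o r -p <VIOS partition> --procunits 0.1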
__ 2. On your client logical partition, log in as root and use the lspv command to check
your virtual SCSI disk configuration. You should have a free virtual SCSI disk
available.
Command and expected output:
# lspv
hdisk0 000bf8411dc9dee1 rootvg active
hdisk1 000bf8417b8194fd None
__ 3. On your client logical partition, change directory to /home/an31/ex6 and start the
VIOload script with no options. You will need to confirm by entering YES (all capital
letters) when asked to start the load generation. The VIOload script generates I/O
load for about five minutes. Continue with the exercise as soon as it starts running
because you want to monitor this workload.
__ 4. In a session on your assigned VIOS partition, start the topas command. Press the
d key twice to obtain disk summary statistics as shown below.
While the VIOload script is running on the AIX partitions, look at the Physc, %Entc,
and disk total KBPS values displayed by topas. Record these values in Table 4,
VIOS capacity entitlement tuning, on page 8, in the row for capacity entitlement
value 0.1.
Here is an example of topas output:
$ topas
Topas Monitor for host: george5 EVENTS/QUEUES FILE/TTY
Tue Dec 11 18:43:35 2012 Interval: 2 Cswitch 244 Readch 288
Syscall 910 Writech 3330.5K
CPU User% Kern% Wait% Idle% Physc Entc Reads 3 Rawin 0
ALL 74.7 13.4 0.0 11.9 0.10 99.7 Writes 835 Ttyout 210
Forks 0 Igets 0
Network KBPS I-Pack O-Pack KB-In KB-Out Execs 0 Namei 1
Total 0.5 2.0 2.0 0.2 0.4 Runqueue 1.0 Dirblk 0
Waitqueue 0.0
Disk Busy% KBPS TPS KB-Read KB-Writ MEMORY
Total 7.6 46.0K 580.0 46.0K 0.0 PAGING Real,MB 1024
Faults 0 % Comp 96
FileSystem KBPS TPS KB-Read KB-Writ Steals 0 % Noncomp 1
Total 0.2 1.5 0.2 0.0 PgspIn 0 % Client 1
PgspOut 0
Name PID CPU% PgSp Owner PageIn 0 PAGING SPACE
yes 8192090 38.6 0.2 root PageOut 0 Size,MB 1536
yes 8978558 38.6 0.2 root Sios 0 % Used 6
topas 8781970 0.4 1.6 padmin % Free 94
sec_rcv 3342438 0.2 0.4 root NFS (calls/sec)
getty 7209196 0.1 0.6 root SerV2 0 WPAR Activ
sched 196614 0.1 0.4 root CliV2 0 WPAR Total
sshd 6946876 0.0 0.9 padmin SerV3 0 Press: "h"-help
java 9109708 0.0 86.0 root CliV3 0 "q"-quit
In this case, you would record the value 0.10 in the Physc column, 99.7 in the %EntC
column, and 46.0K in the KBPS column of Table 4. Once the KBPS numbers grow
beyond 99999 KBPS, the topas command displays the numbers with a K multiplier, for
example: 62.2K. This means the throughput is about 62200 KBPS.
__ 5. Stop the topas execution in the VIOS partition by pressing the q key.
__ 6. Use the HMC GUI to dynamically change the capacity entitlement of the VIOS
partition to the next value shown in the Capacity Entitlement column of Table 4.
Select your VIOS partition, then select Dynamic Logical Partitioning > Processor >
Add or Remove. A window similar to the following will be displayed.
Enter the appropriate number of processing units value to change the partition to have
the next capacity entitlement value listed in Table 4, and then click the OK button to
implement the change.
__ 7. Make sure that the VIOload script is still running on your logical partition. If it has
stopped, then start it again.
__ 8. On the VIOS partition, start the topas command.
Note: It is important to stop and restart topas after each dynamic change to
the processor configuration. If topas is not restarted, it will show incorrect values for
%EntC.
__ 9. Record the values for Physc, %EntC, and disk total KBPS displayed by topas in the
appropriate fields of Table 4 for the current capacity entitlement value.
__ 10. Repeat the steps from step 5 to step 9 to complete the fields of Table 4 for the other
capacity entitlement values.
Capacity entitlement   Physc   %EntC   Overall throughput (KBPS)
0.1                    0.06    59.0%    65.5K
0.12                   0.09    73.5%   105.3K
0.14                   0.10    72.4%   133.1K
0.25                   0.20    79.6%   270.0K
0.50                   0.25    49.7%   339.1K
__ 11. Looking at the throughput values, the capacity entitlement, and the CPU
consumption on the virtual I/O server partition, what do you notice?
Notice that when the virtual I/O server is CPU constrained (%EntC over 70%), the
throughput is strongly impacted. If there are perceived performance issues on disks or
storage adapters, be sure to check the CPU utilization on the VIOS partition, and
monitor it on a regular basis.
When running with shared processors, the virtual I/O server should be configured as
uncapped. That way, if the capacity entitlement of the partition is undersized, there is
opportunity to get more processor resources (assuming there is some available in the
pool) to service I/O.
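As a sketch of a command-line alternative, the uncapped setting can also be made persistent in the partition profile from the HMC CLI (profile and partition names are placeholders; a profile change takes effect at the next activation of that profile):
chsyscfg -r prof -m <managed system> -i "name=<profile>,lpar_name=<VIOS partition>,sharing_mode=uncap,uncap_weight=128"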
__ 12. The next few steps will investigate the memory configuration considerations on the
virtual I/O server. This test is going to show you the memory consumption when a
virtual SCSI client partition is writing on a file system.
Stop topas on the VIOS partition if it is still running. Wait for the VIOload script
running on your logical partition to finish before continuing with the next step.
__ 13. On the VIOS partition, invoke the following command:
$ vmstat 2
Look at the fre column. This is the amount of free memory.
The following is an example of the output from vmstat on the VIOS partition:
$ vmstat 2
The following shows the output of the vmstat command on the VIOS partition while the
dd command was running on the AIX LPAR:
System configuration: lcpu=4 mem=1024MB ent=0.50
__ 17. What can you say about the memory requirement of a virtual I/O server partition
serving virtual SCSI disks to client partitions?
Notice that while the dd command runs, the number of free memory pages decreases
on the client partition. This indicates the virtual memory cache is used while writing.
However, there is not any memory activity on the virtual I/O server.
There is not any data caching in memory on the server partition. All I/Os that it
services are essentially synchronous disk I/Os. Therefore, the virtual I/O server's
memory requirements should be modest.
Use the lparstat command in the LPARs to see the current processor configuration. In
this example, the mode is listed as capped, which means it needs to be changed to
uncapped:
# lparstat
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
0.0 0.0 3.0 97.0 0.00 0.0 5.9 3.27 1590588 16622
To change the LPARs to the uncapped mode, select the partition and run the Dynamic
Logical Partitioning > Processors > Add or Remove task.
In the pop-up window, check the Uncapped checkbox and enter 128 for the weight
value (if it's not already there). Here is an example of this window:
To dynamically add a virtual Ethernet adapter to your partitions, perform the following
actions.
In the HMC GUI Server Management application, select your logical partition, then
select Dynamic Logical Partitioning > Virtual Adapters from the pop up menu.
Use the Actions menu to create a virtual Ethernet adapter as shown below:
Enter 88 as the Adapter ID value, and enter 88 in the Port Virtual Ethernet value. Do
not select the Access external network or IEEE 802.1Q compatible adapter options.
Keep the VSwitch set to ETHERNET0(Default). After entering the adapter ID and port
virtual Ethernet ID value, click OK. This will return you to the previous window. Click OK
to add the adapter to the partition.
Be sure to add a virtual Ethernet adapter to both of the AIX partitions.
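If you prefer the HMC command line, here is a sketch of an equivalent dynamic add (system and partition names are placeholders; the attribute string matches the values described above):
chhwres -r virtualio --rsubtype eth -m <managed system> -o a -p <LPAR name> -s 88 -a "port_vlan_id=88,ieee_virtual_eth=0"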
__ 20. Run the cfgmgr command in each partition to detect the newly added virtual
Ethernet device.
__ 21. Verify the additional virtual Ethernet adapter is marked as Available in each partition.
Perform the following command sequence to check the existence of the newly added
virtual Ethernet adapter:
# lsdev -c adapter | grep ^ent
ent0 Available Virtual I/O Ethernet Adapter (l-lan)
ent1 Available Virtual I/O Ethernet Adapter (l-lan)
vsa0 Available LPAR Virtual Serial Adapter
vscsi0 Available Virtual SCSI Client Adapter
You should see ent1 marked as Available.
__ 22. Configure the newly added interfaces using smitty chinet. Use subnet mask
255.255.255.0. The name of the interface is based on the name of the adapter
instance. For example, if the virtual Ethernet adapter is ent1, then the interface to
use is en1.
If you followed the instructions in previous lab exercises, you should be using en1
for both logical partitions.
Use the following IP addresses depending on your team number.
Table 5: IP addresses

Team number   IP@ first logical partition   IP@ second logical partition
Team1         10.10.10.1                    10.10.10.2
Team2         10.10.20.1                    10.10.20.2
Team3         10.10.30.1                    10.10.30.2
Team4         10.10.40.1                    10.10.40.2
Team5         10.10.50.1                    10.10.50.2
Team6         10.10.60.1                    10.10.60.2
For the other ISNO network parameters, just use the default values (that is, leave
the fields blank).
Change the current STATE of the adapter to up in the SMIT panel.
Invoke smitty chinet, and then select the desired interface from the list presented.
The following SMIT panel will be shown:
Change / Show a Standard Ethernet Interface
[Entry Fields]
Network Interface Name en1
INTERNET ADDRESS (dotted decimal) []
Network MASK (hexadecimal or dotted decimal) []
Current STATE up +
Use Address Resolution Protocol (ARP)? yes +
BROADCAST ADDRESS (dotted decimal) []
Interface Specific Network Options
('NULL' will unset the option)
rfc1323 []
tcp_mssdflt []
tcp_nodelay []
tcp_recvspace []
tcp_sendspace []
Apply change to DATABASE only no +
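As a command-line alternative to the SMIT panel, here is a chdev sketch for the same configuration (the values shown are for Team1's first logical partition; adjust them for your team):
# chdev -l en1 -a netaddr=10.10.10.1 -a netmask=255.255.255.0 -a state=up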
__ 25. Check the status of the I/O Completion Ports pseudo device on both logical
partitions with the following command:
# lsdev -l iocp0
iocp0 Defined I/O Completion Ports
If the device is marked as Defined (rather than Available) as shown above, activate
the device permanently with the following command sequence:
# chdev -l iocp0 -a autoconfig=available
# mkdev -l iocp0
__ 26. Start the netperf server program on both logical partitions, by issuing the following
command on each partition:
# /home/an31/ex6/start_netserver
To determine if the netperf server is running, use the netstat command to verify
that the default port 12865 has LISTEN for its status.
Example command and expected output:
# netstat -an | grep 12865
tcp4 0 0 *.12865 *.* LISTEN
__ 27. On each partition, use the no command to verify the use_isno network attribute is
enabled. Enable it with the no command if necessary. The use_isno attribute is a
restricted parameter.
Example command and expected output:
# no -a -F | grep use_isno
If use_isno is set to a value of 0, use the following command to enable it:
# no -F -o use_isno=1
__ 29. On the first logical partition, use the ifconfig command to determine the values of
tcp_sendspace and tcp_recvspace for the interface associated with the virtual
Ethernet adapter. For example, in the output below, tcp_sendspace is 262144 and
tcp_recvspace is 262144.
# ifconfig en1
en1: flags=1e080863,1<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE)>
     inet 10.10.X0.1 netmask 0xff000000 broadcast 10.255.255.255
     tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1
Note
The virtual Ethernet devices have predefined Interface Specific Network Options (ISNO)
attributes that are automatically set based on the MTU size. So, it is not necessary to
explicitly specify the tcp_sendspace and tcp_recvspace attributes for the virtual Ethernet
devices.
__ 30. Record the values for tcp_sendspace and tcp_recvspace from your logical
partition in the tcp_sendspace (local) and tcp_recvspace (local) fields for the
MTU size 1500 column of Table 6, Virtual Ethernet throughput, on page 19.
__ 31. On your second logical partition, use the ifconfig command to determine the
values of tcp_sendspace and tcp_recvspace for the interface associated with the
virtual Ethernet adapter. Record the values for tcp_sendspace and tcp_recvspace
in the tcp_sendspace (remote) and tcp_recvspace (remote) fields for the
MTU size 1500 column of Table 6.
__ 32. Open a new terminal session to your first partition and run the netstat -r 2
command. Leave it running until you're instructed to stop it. You'll be monitoring the
total number of packets column.
Example command and expected output:
# netstat -r 2
input (en0) output input (Total) output
packets errs packets errs colls packets errs packets errs colls
26446 0 5398 0 0 29897 0 8854 0 0
13 0 2 0 0 13 0 2 0 0
__ 33. On a different session to your first logical partition, change directory to
/home/an31/ex6.
__ 34. The next part of the exercise uses the tcp_stream.sh script located in the
/home/an31/ex6 directory. This script needs four arguments as follows:
# ./tcp_stream.sh <remote host IP> <msg size> <mode> <duration>
The mode can be specified as simplex or duplex, and the duration is in seconds. On
your first logical partition, start the tcp_stream.sh network load generator using
10.10.X0.2 as the remote host IP address (X in the IP address is your team number;
this is the address of your second logical partition on the virtual Ethernet). Use
1000000 (1 Megabyte) as the message size, simplex as the mode, and 20 seconds
for the duration.
# ./tcp_stream.sh 10.10.X0.2 1000000 simplex 20
(Replace X in the IP address with your team number.)
__ 35. Record the megabits/second (listed as 10^6bits/s) throughput result given by the
tcp_stream.sh script in the MTU 1500 column in Table 6. Below is example output
from the tcp_stream.sh script. The network interface maximum throughput is listed
at the end of the output in megabits per second (10^6bits/s) and kilobytes per
second (KBytes/s). In this example, you would log 1151 in the Simplex mode
throughput (Megabits/s) row of the MTU 1500 column of Table 6. Be sure to do both
the Simplex test and the Duplex test before proceeding to the next step.
# ./tcp_stream.sh 10.10.X0.2 1000000 simplex 20
TCP STREAM TEST: 10.10.X0.2:4
(+/-5.0% with 99% confidence) - Version: 5.4.0.1 Nov 18 2004 14:18:06
Recv Send Send ---------------------
Socket Socket Message Elapsed Throughput
Size Size Size Time (iter) ---------------------
bytes bytes bytes secs. 10^6bits/s KBytes/s
Table 6: Virtual Ethernet throughput

                                       MTU size
                                       1500     9000     32768    65390
tcp_sendspace (local)                  _____    _____    _____    _____
tcp_sendspace (remote)                 _____    _____    _____    _____
tcp_recvspace (local)                  _____    _____    _____    _____
tcp_recvspace (remote)                 _____    _____    _____    _____
Simplex mode throughput (Megabits/s)   _____    _____    _____    _____
Duplex mode throughput (Megabits/s)    _____    _____    _____    _____
Here are the throughput results measured by the authors. You can compare these
results with the results you measure.
                                       MTU size
                                       1500     9000     32768    65390
tcp_sendspace (local)                  262144   262144   262144   262144
tcp_sendspace (remote)                 262144   262144   262144   262144
tcp_recvspace (local)                  262144   262144   262144   262144
tcp_recvspace (remote)                 262144   262144   262144   262144
Simplex mode throughput (Megabits/s)   1151     3723     5075     6056
Duplex mode throughput (Megabits/s)    1540     5755     9700     12133
__ 38. What conclusions can you draw from that test?
You should have observed that the virtual Ethernet throughput increases as the MTU
size increases. Choose a large MTU of 65390 or 32768 if you expect large amounts of
data to be transferred inside your virtual Ethernet network.
With a large MTU such as 32768 or 65390, a bulk transfer needs fewer packets to
move the same amount of data. Fewer packets mean fewer trips up and down the
protocol stack, which means less CPU consumption.
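For example, to set a large MTU on the virtual Ethernet interface of both partitions (assuming the interface is en1):
# chdev -l en1 -a mtu=65390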
__ 39. Stop the netstat execution.
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ----- ------ ------ ----- ----- ------ --- ----- -----
0.0 0.0 1.3 98.7 0.00 0.0 2.8 3.27 2176660 16712
__ 42. In the next part of the exercise, you will alter the MTU size of the original virtual
Ethernet interface that is connected to the outside network. You will do this in both of
your team's AIX LPARs. Use the ifconfig -a command if you need to look up which
interface is on the external network. Change the MTU size of the virtual Ethernet
interface on both logical partitions back to 1500.
Run the following command on both partitions to change the MTU size.
# chdev -l enX -a mtu=1500
__ 43. From your first logical partition, start the network load generator script
tcp_stream.sh located in the /home/an31/ex6 directory. This script requires four
arguments. You must specify the remote host's IP address for its virtual Ethernet
adapter, the message size, the mode (simplex or duplex), and the test duration. In
this sequence of tests, we will only be using the duplex mode.
Use 10000 (10 KBytes) as the message size, 10 seconds for the duration, and
perform the test in duplex mode.
# cd /home/an31/ex6
# ./tcp_stream.sh 10.10.X0.2 10000 duplex 10
(Replace X in the IP address with your team number.)
__ 44. Record the megabits per second throughput value (10^6bits/s) in Table 7 below, in
the entry for Capacity entitlement 0.2 and MTU size 1500.
__ 45. Change the MTU size of the virtual Ethernet interface on both logical partitions to
9000.
Change the MTU on both partitions.
# chdev -l enX -a mtu=9000
__ 46. Restart the network load generator script tcp_stream.sh located in the
/home/an31/ex6 directory. Use 10000 (10 KBytes) as the message size, 10 seconds
for the duration, and perform the test in duplex mode. Record the megabits per
second throughput value (10^6bits/s) in Table 7 below, in the entry for Capacity
entitlement 0.2 and MTU size 9000.
# ./tcp_stream.sh 10.10.X0.2 10000 duplex 10
(Replace X in the IP address with your team number.)
__ 47. Change the MTU size of the virtual Ethernet interface on both logical partitions back
to 1500.
Change the MTU on both partitions.
# chdev -l enX -a mtu=1500
__ 48. Using the HMC GUI, dynamically change the desired capacity entitlement of your
partition to 0.4 processing units.
Using the HMC GUI, select your logical partition, then select Dynamic Logical
Partitioning > Processor Resources > Add or Remove. Enter 0.4 in the Assigned
Processing units field; then click OK to make the changes.
__ 49. Repeat the tasks from step 40 to step 47 for the different capacity entitlement values
listed in Table 7 below.
__ 50. Once you have recorded the megabits/s throughput results in the table, calculate the
scalability factor for each column by dividing the throughput in megabits/s by the CE
value.
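For example, with illustrative numbers: if you measure 1540 megabits/s at a CE of 0.2, the scalability factor is 1540 / 0.2 = 7700. If you then measure about 3080 megabits/s at a CE of 0.4, the factor is again 3080 / 0.4 = 7700, which indicates linear scaling.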
Here are example throughput results with different capacity entitlement values.
__ 51. What do you notice about the scalability value for each MTU size as the capacity
entitlement of the partition is increased?
For each MTU size, the scalability value is almost the same for each different CE value.
This indicates that the virtual Ethernet throughput scales linearly with processor
capacity entitlement, so there is no need to specifically dedicate processors to partitions
for performance. The throughput performance is dependent on the processor
entitlement of the partitions.
partition. To generate a workload in the partitions, you will use the scripts ftpload.sh and
tcp_rr.sh available in the directory /home/an31/ex6.
__ 52. Identify which VIOS partition on your assigned server is the primary SEA. There
should be a failover configuration between the vios1 and the vios2 partitions. There
should be another failover configuration between the vios3 and the vios4 partitions.
For your teams set of VIOS partitions, you need to find the one configured as the
primary SEA. Use the entstat -all ent4 | grep -i active command on both
of your assigned VIOS partitions. This example output shows the primary SEA:
Priority 1 Active: True
If the output is the following, this is the secondary SEA in the failover configuration.
Priority 2 Active: False
Document the name of the VIOS partition which has the primary SEA configuration:
___________________
__ 53. On the VIOS that you documented above, verify the MTU size of the virtual Ethernet
interface and the physical Ethernet adapter configured in the SEA are set to 1500.
Use the lsmap -net -all command to see the names of the Ethernet adapters.
On the VIOS partition, check the MTU size using the lsdev command.
$ lsdev -dev en0 -attr mtu
value
1500
On the VIOS partition, if necessary, set the MTU size:
$ chdev -dev en0 -attr mtu=1500
__ 54. Use the first LPAR (based on its number) assigned to your team for the AIX LPAR
that will use a virtual Ethernet adapter. In that partition, make sure that the interface
connected to the external network is using an MTU size of 1500.
On the logical partition, check the MTU size using the lsattr or netstat commands.
# lsattr -El en1 -a mtu
mtu 1500 Maximum IP Packet Size for This Device True
# netstat -i
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en1 1500 link#2 0.9.6b.6b.d.f6 10802 0 1779 3 0
en1 1500 9.47.88 max92 10802 0 1779 3 0
lo0 16896 link#1 889 0 892 0 0
lo0 16896 127 loopback 889 0 892 0 0
lo0 16896 localhost 889 0 892 0 0
On the client logical partition, if necessary, set the MTU size:
# chdev -l enX -a mtu=1500
__ 55. In the following steps, you will add a logical Host Ethernet adapter to your teams
second logical partition. You will also assign the IP address of the logical partition to
the logical host Ethernet adapter port interface.
__ a. On your second logical partition, perform a dynamic operation to add a logical
host Ethernet adapter port. The first team on each managed system should use
physical port ID 0. The second team should use physical port ID 1.
To get to the above screen, run the Dynamic Logical Partitioning > Host Ethernet >
Add command.
__ b. Select an available logical port. For team 1, use logical port 1. For team 2, use
logical port 2. The ports that do not have LPAR names after them are available.
__ c. Open a virtual terminal on your logical partition, then run the cfgmgr command.
Check for a logical host Ethernet port entX Available.
Here is the output of the command:
lsdev -Cc adapter | grep hea
ent1 Available Logical Host Ethernet Port (lp-hea)
lhea0 Available Logical Host Ethernet Adapter (l-hea)
__ 56. Record the hostname, IP address, netmask, and gateway of the interface configured
for the external network in your second logical partition. You can use a combination
of hostname, netstat -rn, and ifconfig -a to get this information.
Hostname: ___________________________________
IP Address: ___________________________________
Netmask: _____________________________________
Gateway Address: _____________________________
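For example, here is one command sequence to collect these values (this sketch assumes the external interface is en0):
# hostname
# ifconfig en0
# netstat -rn | grep default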
__ 57. Detach the current IP configuration using the chdev -l en0 -a state=detach
command. The IP address should be configured on the first virtual Ethernet adapter
at this time.
__ 58. Assign the same IP configuration that you recorded to the new logical host Ethernet
port.
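One way to do this is with the mktcpip command. Here is a sketch that assumes the new logical host Ethernet port interface is en1; substitute the values you recorded in the previous step:
# mktcpip -h <hostname> -a <IP address> -m <netmask> -g <gateway> -i en1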
__ 59. Try to ping the default gateway to check the HEA configuration.
__ 60. In the following steps, you will start a network load and monitor the statistics of the
VIOS partition's shared Ethernet adapter.
__ 61. Log in to the partition with the HEA port configured. As the root user, check the
status of the I/O Completion Ports pseudo device with the following command:
# lsdev -l iocp0
If the device is marked as Defined, then activate the device permanently with the
following command sequence:
# chdev -l iocp0 -a autoconfig=available
# mkdev -l iocp0
__ 62. As the root user, verify the netperf server is running. To determine if the netperf
server is running, use the netstat command to verify that the default port 12865
has LISTEN for its status as shown below.
# netstat -an | grep 12865
tcp4 0 0 *.12865 *.* LISTEN
If netperf is not running, start it by issuing the following command:
# /home/an31/ex6/start_netserver
__ 63. Repeat the previous two steps on your first LPAR with the virtual Ethernet adapter
configured.
__ 64. As the root user on your second partition (the one with the HEA port configured),
use the tcp_rr.sh script located in the directory /home/an31/ex6 to generate a
network load to the other logical partition.
Syntax of the tcp_rr.sh script:
# ./tcp_rr.sh <remote host IP> <sessions> <duration>
The tcp_rr.sh script requires three arguments: the IP address of the remote
partition, the number of sessions, and the duration of the test. Start the command
with 10 sessions and a duration of 300.
Here is an example command. Substitute 9.47.88.153 with the actual IP address of
your remote logical partition:
# ./tcp_rr.sh 9.47.88.153 10 300
__ 65. On your logical partition configured with the virtual Ethernet adapter, use the
netstat command to list the network packets going through the virtual Ethernet
adapter interface. Remember the first line of the netstat report is the number of
packets since system boot.
You should see some input and output packets, since the shared Ethernet adapter
on the VIOS partition is forwarding packets being generated by the tcp_rr.sh script
running on the other partition.
There are a few ways of using the netstat command to see packet statistics. Here is
one example, followed by a sample output. Substitute en0 with the device name for the
virtual Ethernet adapter on your logical partition.
# lsdev -Cc adapter | grep ^e
ent0 Available Virtual I/O Ethernet Adapter (l-lan)
# netstat -I en0 2
input (en0) output input (Total) output
packets errs packets errs colls packets errs packets errs colls
1262778 0 697553 0 0 1273553 0 708365 0 0
51701 0 51693 0 0 51701 0 51693 0 0
51692 0 51668 0 0 51693 0 51669 0 0
51759 0 51735 0 0 51759 0 51735 0 0
51633 0 51618 0 0 51633 0 51618 0 0
51705 0 51682 0 0 51705 0 51682 0 0
51593 0 51594 0 0 51593 0 51594 0 0
__ 66. From the padmin CLI on the VIOS partition, use the netstat command to list the
packet flow going through the shared Ethernet adapter device. Do you see any IP
packets? Why or why not?
$ netstat -stats 2
Here is the output of the command netstat -stats 2:
$ netstat -stats 2
input (en3) output input (Total) output
packets errs packets errs colls packets errs packets errs colls
133562 0 5218 0 0 136478 0 8144 0 0
4 0 0 0 0 4 0 0 0 0
5 0 0 0 0 5 0 0 0 0
6 0 0 0 0 6 0 0 0 0
4 0 0 0 0 4 0 0 0 0
16 0 0 0 0 16 0 0 0 0
4 0 1 0 0 4 0 1 0 0
Notice the statistics on the left side are related to the interface en3. The statistics on the
right side of the output are the total statistics on the partition. There might be a small
number of packets received and transmitted; however, they are likely the packets used
to transmit output to the terminal you have used to log in (assuming you have not used
the console window from the HMC GUI application).
You might have expected to see the total number of packets received and transmitted
(displayed on the right side of the output) be at least the same as the statistics being
shown on your logical partition (around 52000 packets). However, remember that the
shared Ethernet adapter is a layer 2 bridge device. It is functioning at the Ethernet
frame level, not at the IP packet level, which is where netstat reports statistics. The
way to see the number of packets going through the shared Ethernet adapter is to use
the entstat entX command on the shared Ethernet adapter logical device name.
__ 67. On the VIOS partition, try to list the packet flow going through the physical and the
virtual adapters associated with the shared Ethernet adapter, using the netstat or
entstat command. What are the results?
$ entstat entX
$ netstat -stats 2
Use lsdev to list all of the Ethernet adapters on the VIOS partition.
$ lsdev | grep ent
ent0 Available 10/100 Mbps Ethernet PCI Adapter II (1410ff01)
ent1 Available Virtual I/O Ethernet Adapter (l-lan)
ent2 Available Virtual I/O Ethernet Adapter (l-lan)
ent3 Available Shared Ethernet Adapter
If you are following along with the examples in this exercise, the real physical adapter is
ent0 and the associated virtual adapter is ent2. You can see this on your system from
the output of the lsdev -dev ent3 -attr command. The ent2 adapter is virtual, and it
is configured for access to the VIOS partition.
It is not possible using entstat or netstat to directly display the statistics of the virtual
and physical devices associated with the shared Ethernet adapter. Here is an example
of error messages when trying entstat:
$ entstat -all ent2
entstat: 0909-003 Unable to connect to device ent1, errno = 19
$ entstat -all ent0
entstat: 0909-003 Unable to connect to device ent0, errno = 19
__ 68. A way to see the shared Ethernet adapter statistics is to list the Ethernet device
driver and device statistics of the shared Ethernet adapter itself. Examine the
statistics for the adapters in the Shared Ethernet Adapter section of the output.
Execute this command multiple times to see the statistics updating.
From the padmin CLI on the VIOS partition:
$ entstat -all entX | more
Replace entX with the name of the shared Ethernet adapter device. The output of
the command can be quite large. If you only want to view the real and virtual side
statistics for packet counts, try passing the output to grep for the paragraph titled
"Statistics for adapters in the Shared Ethernet Adapter entX".
For example:
$ entstat -all entX | grep -p "SEA"
--------------------------------------------------------------
Statistics for adapters in the Shared Ethernet Adapter ent3
--------------------------------------------------------------
Number of adapters: 2
SEA Flags: 00000001
<THREAD >
VLAN Ids :
ent1: 0 1
Real Side Statistics:
Packets received: 38039595
Packets bridged: 38039595
Packets consumed: 2
Packets fragmented: 0
Packets transmitted: 37448355
Packets dropped: 2
Virtual Side Statistics:
Packets received: 37448355
Packets bridged: 37448355
Packets consumed: 0
Packets fragmented: 0
Packets transmitted: 38039593
Packets dropped: 0
Other Statistics:
Output packets generated: 0
Output packets dropped: 0
Device output failures: 0
Memory allocation failures: 0
ICMP error packets sent: 0
Non IP packets larger than MTU: 0
Thread queue overflow packets: 0
--------------------------------------------------------------
Real Adapter: ent0
__ 69. Another way to check the statistics of the shared Ethernet adapter is using the
seastat command. The seastat command generates a per client view of the
shared Ethernet adapter statistics. To gather network statistics at this level of detail,
advanced accounting should be enabled on the shared Ethernet adapter to provide
additional information about its network traffic.
__ a. Use the chdev command to enable the advanced accounting on the SEA.
$ chdev -dev entX -attr accounting=enabled
Replace entX with your shared Ethernet adapter device name.
__ b. Invoke the seastat command to display the shared Ethernet adapter statistics
per client logical partition. Check that the transmit and receive packet counts are
increasing.
$ seastat -d entX
================================================================
MAC: 56:ED:E7:C9:03:0B
----------------------
VLAN: None
VLAN Priority: None
IP: 10.6.112.42
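When you have finished, you can disable advanced accounting again to return the SEA to its default configuration (replace entX with your shared Ethernet adapter device name):
$ chdev -dev entX -attr accounting=disabled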
__ 70. Stop any monitoring commands that are running in any of your partitions. Stop the
tcp_rr.sh script with Ctrl-C.
__ 71. Use the HMC GUI to dynamically configure both of your AIX logical partitions as
uncapped with an uncapped weight value of 128. You can leave the assigned
processing units at the current setting.
Select a partition and run the Dynamic Logical Partitioning > Processor > Add or
Remove task. Make sure the uncapped checkbox is checked and the weight value is
128. Repeat for the other AIX partition.
__ 72. Use the HMC GUI to check that your VIOS partition has the following CPU
configuration:
Assigned Processing units: 0.10
Assigned Virtual Processors: 1
Partition is Capped
If needed, dynamically change your VIOS partition configuration using the HMC
GUI.
Using the HMC GUI, from the Navigation area on the left, select your server, then your
assigned VIOS partition. From the Tasks menu, select Dynamic Logical Partitioning
> Processor > Add or Remove.
Reconfigure the partition, and then click OK to implement the change.
__ 73. On the VIOS partition that you documented previously with the primary SEA
configuration, monitor the CPU consumed (physc value) using the viostat
command.
$ viostat -tty 2
__ 74. As the root user on your second AIX partition (the partition with the HEA port
configured), use the ftpload.sh script located in the /home/an31/ex6 directory to
generate a network load between your logical partitions.
Syntax of the ftpload.sh script:
# ./ftpload.sh <remote host IP> <user> <password>
Here is an example command. Substitute abcxyz with the actual password for the
root account, and substitute 9.47.88.153 with the actual IP address of your second
logical partition:
# ./ftpload.sh 9.47.88.153 root abcxyz
Here is an example of the output from the script when the CE of the VIOS partition is
0.10:
# ./ftpload.sh 9.47.88.153 root abcxyz
Verbose mode on.
200 PORT command successful.
150 Opening data connection for /dev/null.
300+0 records in.
300+0 records out.
226 Transfer complete.
314572800 bytes sent in 47.82 seconds (6424 Kbytes/s)
local: | dd if=/dev/zero bs=1m count=300 remote: /dev/null
221 Goodbye.
__ 75. Convert the throughput value reported by the ftpload.sh script from Kbytes/s into
Megabits/s, and record the value in the ftp throughput column for Capacity
Entitlement value 0.1 in Table 8, Throughput scalability, on page 33.
To convert from Kbytes/s to Megabits/s, multiply the value by 8, and then divide by
1000.
If the throughput value reported by the ftpload.sh script is in scientific e-notation,
convert it to Kbytes/s and then convert to Megabits/s. For example, if the result
shows 1.216e+04 KB, multiply the base number (1.216) by 10000 to get 12160
Kbytes/s, which converts to 97.28 Megabits/s.
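As another worked example, using the 6424 Kbytes/s result shown in the ftpload.sh output above: 6424 x 8 / 1000 gives approximately 51.4 Megabits/s.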
__ 76. Monitor the %entc value from the viostat output on the VIOS partition while the
ftpload.sh script is running on your logical partition. Record the result in the %entc
column of the row for Capacity Entitlement value 0.1 in Table 8, Throughput
scalability, on page 33.
Here is an example of the output when the CE is 0.10:
$ viostat -tty 2
System configuration: lcpu=2 ent=0.10
tty: tin tout avg-cpu: % user % sys % idle % iowait physc % entc
0.0 41.0 0.0 1.0 99.0 0.0 0.0 2.8
0.0 41.0 0.0 0.9 99.0 0.0 0.0 2.7
0.0 41.0 0.0 0.9 99.1 0.0 0.0 2.7
0.0 41.0 0.0 1.1 98.9 0.0 0.0 3.0
0.0 39.7 0.0 30.0 70.0 0.0 0.1 53.2
0.0 40.4 0.0 45.2 54.8 0.0 0.1 79.0
0.0 40.6 0.0 45.3 54.7 0.0 0.1 80.2
0.0 40.5 0.1 45.6 54.2 0.0 0.1 79.6
0.0 39.8 0.0 45.2 54.8 0.0 0.1 79.4
0.0 39.4 0.0 45.5 54.5 0.0 0.1 79.8
0.0 39.6 0.0 44.9 55.1 0.0 0.1 78.5
0.0 40.6 0.0 44.7 55.2 0.0 0.1 79.5
0.0 39.0 0.0 44.8 55.2 0.0 0.1 78.8
0.0 41.3 0.0 44.7 55.3 0.0 0.1 78.8
0.0 38.8 0.0 45.3 54.6 0.0 0.1 79.6
In the above viostat output, notice the physc value is equal to the entitled capacity of
the VIOS partition (0.1), yet %entc is reporting that only 80% of the entitled capacity is
being used. This is because the physc value is rounded up. If you used the lparstat
command, you would see physc is 0.08, as shown below:
# lparstat 2
System configuration: type=Shared mode=Capped smt=On lcpu=2 mem=512
psize=2 ent=0.10
%user %sys %wait %idle physc %entc lbusy app vcsw phint
----- ---- ----- ----- ----- ----- ------ --- ---- -----
0.0 45.7 0.0 54.2 0.08 79.4 64.1 1.80 4728 87
0.0 45.5 0.0 54.5 0.08 79.2 65.4 1.79 4692 79
0.0 45.7 0.0 54.3 0.08 79.4 61.5 1.79 4622 90
0.0 45.9 0.0 54.1 0.08 79.7 61.1 1.79 4752 88
__ 77. Use the HMC GUI application to change the capacity entitlement of the VIOS
partition to the next value in Table 8, then repeat the actions from Step 63 to
Step 76, recording the results in the appropriate row of Table 8. Continue repeating
the steps until you have recorded values for all the different capacity entitlement
values listed in the table.
Use the following sequence of steps to change the capacity entitlement of the VIOS
partition.
In the LPAR table view, select the VIOS partition, then right-click and select Dynamic
Logical Partitioning > Processor > Add or Remove. Enter the new desired amount
of processing units in the Assigned field, then click the OK button to make the change.
The actual values you obtain here are not important. The point is to understand the
throughput of a shared Ethernet adapter will be restricted if the virtual I/O server
partition is CPU constrained.
Here are the results obtained by the authors:
End of exercise
Exercise 7. Live Partition Mobility

Estimated time
01:00
Introduction
In this exercise, you will configure the assigned lab environment to
support a Live Partition Mobility operation. The source and destination
systems may be managed by different HMCs. You will verify the virtual
I/O servers are configured as mover service partitions, configure the
remote HMC SSH key authentication, and start and monitor the
migration process.
Requirements
Each student must have access to two POWER7 systems and the
associated HMCs. The systems must have the PowerVM Enterprise
Edition feature enabled in the system firmware.
The exercise depends on the following:
Four VIO servers per managed system with one VIO per student
and four students per managed system. This allows the students to
perform the Live Partition Mobility in a dual VIO environment.
Preface
Use your assigned LPAR for this exercise.
All hints are marked by a sign.
__ 1. In the following tables, write down the source and destination system names
assigned to your class.
Table: Migration information - for students 1 to 8

Student      LPAR     Source    Source VIO server     Destination    Destination VIO server
number       name     system    (Source mover         system         (Destination mover
                                service partition)                   service partition)
student 1    lpar1              VIOS1                                VIOS1
student 2    lpar2              VIOS2                                VIOS2
student 3    lpar3              VIOS3                                VIOS3
student 4    lpar4              VIOS4                                VIOS4
student 5    lpar1              VIOS1                                VIOS1
student 6    lpar2              VIOS2                                VIOS2
student 7    lpar3              VIOS3                                VIOS3
student 8    lpar4              VIOS4                                VIOS4
If there are more than eight students attending the class, there will be additional
systems with similar configurations for student9 through student16.
Table: Migration information - for students 9 to 16

Student      LPAR     Source    Source VIO server     Destination    Destination VIO server
number       name     system    (Source mover         system         (Destination mover
                                service partition)                   service partition)
student 9    lpar1              VIOS1                                VIOS1
student 10   lpar2              VIOS2                                VIOS2
student 11   lpar3              VIOS3                                VIOS3
student 12   lpar4              VIOS4                                VIOS4
student 13   lpar1              VIOS1                                VIOS1
student 14   lpar2              VIOS2                                VIOS2
student 15   lpar3              VIOS3                                VIOS3
student 16   lpar4              VIOS4                                VIOS4
Access the HMC GUI, go to the Systems Management menu, and click Servers in
the left navigation pane. In the Table view, locate the Available Processing Units
and the Available Memory values associated with your destination system. (In the
upper right side, make sure the View is Table and not Tree; you will not see
these values in the Tree view.)
You can also list the current available memory and processor values using the
HMC command line. Type the following commands and look at the
curr_avail_sys_proc_units and curr_avail_sys_mem attributes:
lshwres -r proc -m <managed_system> --level sys
configurable_sys_proc_units=8.0,curr_avail_sys_proc_units=5.0,
pend_avail_sys_proc_units=5.0,installed_sys_proc_units=8.0,
max_capacity_sys_proc_units=deprecated,deconfig_sys_proc_units=0,
min_proc_units_per_virtual_proc=0.1,max_virtual_procs_per_lpar=64,
max_procs_per_lpar=64,max_shared_proc_pools=64
VTD vtscsi0
Status Available
LUN 0x8100000000000000
Backing device hdisk1
Physloc U78AA.001.WZSGHWS-P2-D5
Mirrored false
VTD vtscsi2
Status Available
LUN 0x8200000000000000
Backing device hdisk2
Physloc U78AA.001.WZSGHWS-P2-D1
Mirrored false
__ 4. In your assigned client LPAR, document the PVIDs for the two disks using the lspv
command.
This example shows the PVIDs in the second column:
# lspv
hdisk0 00f6bcc9f7c38cc0 rootvg active
hdisk1 00f6bcc9a30e4bdc None
__ 5. Verify that both backing devices (hdisk<#>) have the reserve_policy attribute set
to no_reserve on both the source and destination VIO servers. If needed, use the
chdev command to change the attribute value.
Note: The hdisk numbers might not be the same on the source and destination VIO
servers. Use the PVIDs to determine the correct disks.
Use lspv on the source VIO server to view the PVIDs of the hdisk backing
devices and verify the same PVID is also visible on the destination VIO server.
lsdev -dev hdisk<#> -attr reserve_policy
If necessary, run the following command to change the reserve_policy
attribute value:
chdev -dev hdisk<#> -attr reserve_policy=no_reserve
Note: For the source VIOS partition, if you need to change the
reserve_policy attribute for the client LPAR, you will need to shut down the
client LPAR, remove the virtual target devices, run the chdev command to
change the attribute, remake the virtual target devices with mkvdev, then activate
the LPAR. Because the disks are not yet in use on the destination, this procedure
is not necessary on the destination server. You can simply run the chdev
command if necessary.
__ 6. Determine the VLAN ID that the client LPAR is using. In a terminal session to your
AIX LPAR, run the following command:
# entstat -d ent0 | grep VLAN
Example output that shows only one VLAN ID (10) in use:
# entstat -d ent0 | grep VLAN
Invalid VLAN ID Packets: 0
Port VLAN ID: 10
VLAN Tag IDs: None
__ 7. Verify that the destination VIO server is also bridging the same VLAN associated
with your client partition's virtual network adapter. Log in to your destination VIO
server, and execute lsmap -all -net and check for the SEA adapter name.
When done, execute entstat -all ent# | grep ID to verify the PVID is the
same as on your AIX client partition.
Output example of lsmap command:
$ lsmap -net -all
SVEA Physloc
------ --------------------------------------------
ent1 U8204.E8A.06BCC9P-V3-C31-T1
SEA ent3
Backing device ent0
Status Available
Physloc U7311.D20.WZSGHWS-P1-C06-T1
Output example of entstat command:
$ entstat -all ent3 | grep ID
Control Channel PVID: 19
Invalid VLAN ID Packets: 0
Port VLAN ID: 10
VLAN Tag IDs: None
Switch ID: ETHERNET0
Invalid VLAN ID Packets: 0
Port VLAN ID: 19
VLAN Tag IDs: None
Switch ID: ETHERNET0
__ 8. Verify each HMC is capable of performing remote migrations. In a CLI session to
each HMC, run the lslparmigr command.
Example command:
$ lslparmigr -r manager
__ 12. Start an HMC command line session to your source HMC (use ssh/putty) and
execute the lslparmigr command to list the source and destination systems'
mover service partitions (managed system names are displayed in the HMC
browser session).
Example command for systems with only one HMC for both the source and
target systems:
hscroot@<hmc_hostname> lslparmigr -r msp -m <source managed system name> \
-t <destination managed system name> \
--filter "lpar_names=<mobile partition name>"
Example command for remote LPM (two HMCs):
hscroot@<hmc_hostname> lslparmigr -r msp -m <source managed system name> \
-t <destination managed system name> --ip <remote HMC IP> -u hscroot \
--filter "lpar_names=<mobile partition name>"
__ 13. Dynamically change the MSP partitions (VIOS partitions) so that the processors are
uncapped. The migration process consumes about 0.85 CPU on the MSPs so the
partitions need to be able to use more processing resources.
Select the VIOS partition. Run the Dynamic Logical Partitioning > Processor
> Add or Remove task. Check the Uncapped checkbox and make the weight
value 192. The processing units can stay at whatever value is currently
configured. Change both the source and target MSPs.
__ 14. Using the HMC browser session, start the migration process. View the Migration
Information panel and click Next.
Select your mobile LPAR by clicking the check box on the same row as your
LPAR. Then from the LPAR's context menu, click Operations > Mobility >
Migrate. The Migration wizard displays. Click Next.
__ 15. For the New destination profile name panel, type default, and click Next.
__ 16. For remote LPM operations, select the Remote Migration box. Then, type the IP
address for the destination system's HMC in the Remote HMC field. Type hscroot
for the Remote User and click Next. If your source and destination systems are
being managed by the same HMC, skip this step and just click Next.
__ 17. Select the destination managed system from the pull-down menu and click Next.
__ 18. If you see Paging VIOS Redundancy, make sure the Paging VIOS redundancy value
is none. This panel will only appear if a partition profile contains an AMS
configuration.
__ 19. The validation task will run at this point and will take a moment. At Validation
Errors/Warnings, you might see a window containing a message ID. The type of
message should be Errors. You should see an error message that is similar to the
following:
__ 20. When an error occurs during the live partition mobility validation process, you can
find additional details on the virtual I/O server by using the alog command.
__ 21. As the padmin user, log in to your source VIO server, and then run the following alog
command:
echo alog -t cfg -o config.log | oem_setup_env | more
Search for the physical device location specified in the error message. This should
be associated with information such as ERROR: cannot migrate reserve
type single_path.
Looking at the alog output, can you explain why we have this error during the
validation? What should you do to fix the problem?
The error message identifies the associated vscsi adapter that has a single-path
disk attached. Use the lsmap command on your VIO server to find out which
hdisk devices are attached to the VSCSI adapter specified in the error
message. Then use the lsdev command to find out which hdisk device has a
single path. You must remove the virtual target device (vtscsi0) that maps this
hdisk device to your client logical partition.
Also, at the client you can use the lspath command to identify the hdisk that
has the single path.
__ a. On the virtual I/O server, use the lsmap -all command to find out which virtual
SCSI adapter has the physical location code mentioned in the error message.
Then identify the hdisk devices that are attached to the virtual SCSI adapter.
Here is an example of the lsmap command output. The virtual SCSI server
adapter vhost0 is the adapter with the physical location code mentioned in the
error message.
$ lsmap -all
SVSA Physloc Client Partition ID
--------------- -------------------------------------------- ------------------
vhost0 U8204.E8A.65BF7D1-V1-C13 0x00000005
VTD sys046l1rvg
Status Available
LUN 0x8100000000000000
Backing device hdisk6
Physloc U7311.D20.650443C-P1-C07-T1-W500507680140581E-L6000000000000
VTD vtscsi0
Status Available
LUN 0x8200000000000000
Backing device hdisk1
Physloc U7311.D20.650443C-P1-C07-T1-W500507680140581E-L1000000000000
In the example output above, hdisk6 and hdisk1 are the hdisk devices attached
to the virtual SCSI server adapter vhost0.
__ b. Use the lsdev command to identify the hdisk device that has a reserve_policy
attribute set to single_path.
hdisk1 is a non-shared LUN and has its reserve_policy set to single_path.
$ lsdev -dev hdisk1 -attr | grep policy
reserve_policy single_path Reserve Policy True
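To check the reserve policy of every disk on the VIOS in one pass, one option is a
small loop in the root shell (a sketch using standard AIX commands):
$ oem_setup_env
# for d in $(lsdev -Cc disk -F name); do echo $d; lsattr -El $d -a reserve_policy; done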
__ c. In order to proceed with the live partition migration, you must remove the virtual
target device that maps this hdisk device to your client logical partition. Log in to
your VIO server and run the rmdev command to remove the VTD.
Use this command if the VTD device name is vtscsi0:
$ rmdev -dev vtscsi0
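Note: if the backing LUN had actually been shared SAN storage, an alternative fix
(not used in this lab) would be to clear the reservation policy on the VIOS instead
of removing the mapping, for example:
$ chdev -dev hdisk1 -attr reserve_policy=no_reserve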
__ 22. Re-run the validation (click Back to get the Destination panel, and then click Next).
At Validation Errors/Warnings, verify that the message type is a warning and not
an error. If no error messages appear, the validation process was successful and
you can proceed with the next step. You should still read the warning messages.
You can ignore any warning messages related to VIOS partitions that are not
involved in the migration.
__ 23. Verify that the correct source and destination MSPs are selected. If not, select
them (refer to the migration information table for your source and destination
MSPs) and click Next.
__ 24. View the VLAN configuration, and click Next.
__ 25. Check the virtual SCSI assignment. The automatically selected pair might not be
correct. Recall that the client partition has two paths for its hdisk0 and you want to
recreate that configuration on the destination. Select the appropriate VIOS partitions
and click Next.
The following commands and example outputs show how to determine what
paths currently exist and the location codes for the paths:
# lspath -l hdisk0
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1
# lscfg | grep vscsi
* vscsi1 U8203.E4A.06F9EC1-V4-C4-T1 Virtual SCSI Client Adapter
* vscsi0 U8203.E4A.06F9EC1-V4-C5-T1 Virtual SCSI Client Adapter
The location codes contain the virtual adapter ID after the "C". You can use the
HMC to determine which VIOS is associated with the client adapter IDs.
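The same mapping can also be listed from the HMC command line; a sketch,
assuming the managed system is named sys426:
hscroot@hmc45:~> lshwres -r virtualio --rsubtype scsi -m sys426 --level lpar -F lpar_name,slot_num,remote_lpar_name,remote_slot_num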
__ 26. Verify that Pool ID 0 is selected for the Destination Shared Processor Pool.
__ 27. Keep the Wait time default value and click Next.
__ 28. When the Partition Migration Summary panel displays, look at the Destination
partition ID value. The ID is assigned automatically and the first available value is
chosen (in our example, ID=3). Do not click Finish.
__ 29. Establish two login sessions, one to each of your source and destination MSP VIO
servers. In these sessions, start topas to monitor the CPU and network resource
utilization during the migration process.
__ 30. Click Finish on the Summary panel to start the partition migration.
__ 31. Look at the topas command outputs and observe the kernel, CPU, and network
activity.
__ 32. Observe the pop-up window and wait until the migration status shows success. Click
Close to exit.
__ 33. Using the lspartition HMC command, identify the LPAR's new partition ID and
managed system.
Example command: lspartition -dlpar
A service processor lock might occur when doing simultaneous moves that use the
same VIO servers. In case of a lock, you can use the Recover option in the Mobility
task; the HMC will perform the necessary operations to complete the migration.
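The recovery can also be driven from the HMC command line with the migrlpar
recover operation; a sketch with placeholder names:
hscroot@hmc45:~> migrlpar -o r -m source_system -p mobile_lpar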
End of exercise
Exercise 8. Suspend and resume
Estimated time
01:00
Introduction
With the Suspend/Resume feature, clients can suspend a partition for an
extended period: the partition state (memory, NVRAM, and VSP state) is
saved on persistent storage, and the server resources that the partition
was using are freed. The saved state can later be restored to server
resources, resuming the partition and its applications either on the
same server or on a different server.
Requirements
This workbook
HMC V7.7.2 or later
System Firmware v7.2.0 SP1 or higher
AIX v7.1 TL0 SP2 or higher or v6.1 TL6 SP or higher
Preface
All procedures in this exercise depend on the availability of specific equipment in
your classroom.
All hints are marked by a sign.
It should display the list of devices in the reserved storage device pool as shown
above. These are the devices (hdisk1 to hdisk5) that were earlier used as
paging devices in the AMS exercise.
__ 2. List volumes in the reserved storage pool from the HMC command line.
From the HMC command line interface, run lshwres with the --rsubtype
rsdev flag to view the reserved storage device used to save suspension data for
your partition.
hscroot@hmc45:~> lshwres -r rspool -m sys426 --rsubtype rsdev
device_name=hdisk1,vios_name=sys426_amsvios,vios_id=9,size=5120,type=phys,state=Inactive,phys_loc=U5877.001.M09H12R-P1-C10-T1-W500507680140B855-L1000000000000,is_redundant=0,lpar_id=none,device_selection_type=auto
device_name=hdisk2,vios_name=sys426_amsvios,vios_id=9,size=5120,type=phys,state=Inactive,phys_loc=U5877.001.M09H12R-P1-C10-T1-W500507680140B855-L2000000000000,is_redundant=0,lpar_id=none,device_selection_type=auto
device_name=hdisk3,vios_name=sys426_amsvios,vios_id=9,size=5120,type=phys,state=Inactive,phys_loc=U5877.001.M09H12R-P1-C10-T1-W500507680140B855-L3000000000000,is_redundant=0,lpar_id=none,device_selection_type=auto
device_name=hdisk4,vios_name=sys426_amsvios,vios_id=9,size=5120,type=phys,state=Inactive,phys_loc=U5877.001.M09H12R-P1-C10-T1-W500507680140B855-L4000000000000,is_redundant=0,lpar_id=none,device_selection_type=auto
device_name=hdisk5,vios_name=sys426_amsvios,vios_id=9,size=5120,type=phys,state=Inactive,phys_loc=U5877.001.M09H12R-P1-C10-T1-W500507680140B855-L5000000000000,is_redundant=0,lpar_id=none,device_selection_type=auto
Note: The size of the volume must be at least the same size as the maximum
memory specified in the profile for the suspending partition. In our lab environment,
we are using 5 GB LUNs for the reserved storage devices, which is more than the
maximum memory configured for the LPARs.
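To confirm the maximum memory configured for a partition from the HMC command
line, one option is lshwres with the mem resource (a sketch; the partition name is a
placeholder):
hscroot@hmc45:~> lshwres -r mem -m sys426 --level lpar --filter lpar_names=mobile_lpar -F curr_max_mem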
Copyright IBM Corp. 2010, 2013 Exercise 8. Suspend and resume 8-5
Course materials may not be reproduced in whole or in part
without the prior written permission of IBM.
Instructor Exercises Guide with hints
__ b. Now we will execute a script on the suspend-capable partition; after the partition
is resumed, we will verify that the script picked up where it left off. The
script is located in the /home/an31/ex8 directory.
Note: The script must be executed in the background with nohup.
# nohup ./S-R.sh &
[1] 4128902
# Sending nohup output to nohup.out.
# tail -f /tmp/test.log
Fri Jan 13 07:52:44 CET 2012
Fri Jan 13 07:52:45 CET 2012
Fri Jan 13 07:52:46 CET 2012
Fri Jan 13 07:52:47 CET 2012
Fri Jan 13 07:52:48 CET 2012
Fri Jan 13 07:52:49 CET 2012
Fri Jan 13 07:52:50 CET 2012
Fri Jan 13 07:52:51 CET 2012
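The contents of S-R.sh are not listed in these materials; a minimal sketch that is
consistent with the output above (one timestamp per second appended to
/tmp/test.log) might look like this:
#!/bin/sh
# Hypothetical reconstruction of S-R.sh: log a timestamp every second
while true
do
    date >> /tmp/test.log
    sleep 1
done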
__ 4. Suspend the client partition.
Select your LPAR and run the Operations > Suspend Operations > Suspend
task. Accept the defaults and click the Suspend button. A confirmation window
pops up.
__ 5. After clicking the Suspend button, the status of each activity is shown in a
separate window. Watch the status window as each activity completes. When all
steps have completed, click the Close button.
__ 7. From the HMC command line, view the reserved storage pool state once the
suspend operation has been initiated.
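Running the same lshwres command that was used in step 2 should now show the
reserved storage device that holds the suspension data with a state of Active and
your partition's ID in the lpar_id field (the exact values will vary):
hscroot@hmc45:~> lshwres -r rspool -m sys426 --rsubtype rsdev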
__ 10. In the main Hardware Management Console window, check the status of the
partition. It should now be Running.
__ 11. Log in to your LPAR and check whether the script resumed its work after the
completion of the Resume operation.
Example command and expected output:
# ps -ef | grep S-R.sh
root 4128902 1 0 07:49:29 - 0:00 /bin/sh ./S-R.sh
# tail -f /tmp/test.log
Fri Jan 13 07:54:04 CET 2012
Fri Jan 13 07:54:05 CET 2012
Fri Jan 13 07:54:06 CET 2012
Observe that the script resumed logging to the /tmp/test.log file once
the partition was resumed.
__ 12. Kill the S-R.sh process.
Example commands:
# ps -ef | grep S-R.sh
(Discover the PID.)
# kill -9 4128902
__ 13. Log in to the HMC and check the status of the volume used by the suspended
partition.
Example command (the same lshwres command that was used in step 2):
hscroot@hmc45:~> lshwres -r rspool -m sys426 --rsubtype rsdev
End of exercise
Exercise A. Using the Virtual I/O Server Performance Analysis Reporting Tool
Estimated time
00:30
Introduction
The PART tool is included in the Virtual I/O Server (VIOS) as of
version 2.2.2.0. Students will use this tool to collect configuration and
performance data on a VIOS partition. They will view the resulting
XML file in a browser.
Requirements
This workbook
One Virtual I/O Server partition at Version 2.2.2.0 or higher
One AIX 7 logical partition
A system with a web browser
The ability to copy files from the assigned lab partition to a system
with a web browser. This capability is not available in all training
labs.
Preface
All procedures in this exercise depend on the availability of specific equipment in your
classroom.
All hints are marked by a sign.
__ 3. Now that you have verified that the part tool is available, generate a CPU load in
your VIOS partition. Run oem_setup_env and then run two yes commands in the
background. Redirect the command output to /dev/null.
Example commands and expected output with PID:
# yes > /dev/null &
[1] 8978558
# yes > /dev/null &
[2] 8192090
__ 4. Run the part tool for 10 minutes and use the detailed logging level. In the command
below, -i 10 sets a 10-minute collection interval and -t 2 selects the detailed level.
Example command and its expected output:
$ part -i 10 -t 2 &
[1] 10551342
__ 5. Open a second login session to your assigned VIOS partition and run the
oem_setup_env command. Run lparstat with a 2-second interval and a count
value of 5. You should see some CPU activity.
Below is an example command. In its output you should see that the VIOS is using
1.00 physc, and the system configuration line shows that SMT is set to
four threads and there are four logical processors. This means that the LPAR is
configured with one virtual processor, so it is at its maximum processing capacity.
# lparstat 2 5
__ 7. While you are waiting for the part command to finish, explore the topas and nmon
panels that were covered in the lecture materials. Explore other available
commands from the VIOS command line that were covered in the lecture materials.
Suggested commands include: vmstat, svmon, fcstat, and entstat.
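For example (illustrative invocations; the adapter names depend on your
configuration):
$ vmstat 2 5
$ fcstat fcs0
$ entstat -all ent0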
__ 8. When you see the following message in the first login session to the VIOS, the part
utility is finished. The filename on your system will be different as it is the hostname
followed by the date and time.
part: Reports are successfully generated in lpar1_121210_12_46_16.tar
__ 9. When the part utility is finished, look for the filename that was printed to the screen
in the /home/padmin directory.
Example command and expected output:
$ ls /home/padmin/lpar1*
/home/padmin/lpar1_121210_12_46_16.tar
__ 10. Extract the tar file in the vadvisor directory using the following command, and then
list the contents. Be sure to replace the example tar file name with the name of
your tar file.
$ tar xvf vadvisor/lpar1_121210_12_46_16.tar
__ 11. This creates a directory with the same name as the tar file, minus the
.tar suffix. List the new directory with ls and explore what files are there.
lpar1_121210_12_46_16/images:
Warning_icon.png close.jpg headerLogo.png readonly.png
bg.png correct.png investigate.png red-error.png
lpar1_121210_12_46_16/logs:
ionfile logfile
__ 12. Use the ftp command to copy the entire contents of the new directory to your
local PC. If you are using Microsoft Windows, use the ftp tool in a Command
Prompt window. If you are on a Linux system, use ftp from a terminal. Copy the
files to a folder/directory. Follow this procedure to perform these actions:
__ a. Open a session on your local system. This could be a Windows PC that is in your
classroom or a personal computer running Linux. When you open the Windows
Command Prompt window, it will likely show that the current directory is
C:\Documents and Settings\Administrator or something similar. This is fine.
__ b. Create a directory named vadvisor.
Example Windows command: mkdir vadvisor
__ c. Change your current directory to the new directory.
Example Windows command: cd vadvisor
__ d. Run ftp to your assigned VIOS partition.
Example Windows command which lists the VIOS IP address as 10.10.10.10:
ftp 10.10.10.10
__ e. Log in as the padmin user and enter the password when prompted.
__ f. Run the FTP subcommand binary and press Enter.
__ g. Run the FTP subcommand prompt and press Enter.
__ h. Run the FTP subcommand mget followed by the name of the directory, followed
by a slash (/) and an asterisk. For example:
mget lpar1_121210_12_46_16/*
__ i. The command will take a moment to run. When it is finished, type the FTP bye
subcommand to close the FTP session.
__ j. List the items in the directory that you created with the dir Windows command.
You should see a filename with the .xml suffix.
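Putting steps d through i together, the complete transfer session might look like
this (illustrative; the IP address and file name are the examples used above):
C:\vadvisor> ftp 10.10.10.10
ftp> binary
ftp> prompt
ftp> mget lpar1_121210_12_46_16/*
ftp> bye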
__ 13. Use the GUI to navigate to your vadvisor folder/directory. Create a folder in the
vadvisor folder/directory called images. Put all of the files that end in .png into that
folder/directory.
__ 14. On the Windows or Linux local system, open a browser, then open the XML file in
your directory in the browser. An easy way to do this is to use the file explorer and
double-click the XML file name.
__ 15. Look at the data available and pay particular attention to the VIOS - CPU panel.
What do you observe?
Example CPU panel:
One would expect the tool to suggest a higher entitled capacity and a larger
number of virtual processors, since the CPU utilization was over 100%. At least
the tool highlighted this important area for you to investigate.
__ 16. Look at the VIOS - Memory panel. Note that the tool always complains if there is
less than 2.5 GB of memory configured in a VIOS partition.
Example memory panel:
__ 17. Left-click some of the cells in the chart to see help information.
__ 19. Remove the vadvisor folder (or directory) that you created.
__ 20. Let your instructor know that you have completed this exercise.
End of exercise
Exercise review/wrapup
This exercise provided hands-on use of the VIOS Performance Advisor tool. Students used
the part utility to capture 10 minutes of data after starting CPU-intensive processes with
the yes command. They then copied the resulting files to a local system and viewed the
report, which is in XML format, in a web browser.