AIX Perf Tuning
AIX Performance Tuning Part 1
Jaqui Lynch
lynchj@forsythe.com
Agenda
• Part 1
  • CPU
  • Memory tuning
  • Network
  • Starter Set of Tunables
  • Performance Tools
• Part 2
  • I/O
  • Volume Groups and File systems
  • AIO and CIO for Oracle
2
CPU
3
Applications and SPLPARs
• Applications do not need to be aware of Micro-Partitioning
• Not all applications benefit from SPLPARs
• Applications that may not benefit from Micro-Partitioning:
  • Applications with strong response time requirements for transactions may find Micro-Partitioning detrimental:
    • Because virtual processors can be dispatched at various times during a timeslice
    • May result in longer response times with too many virtual processors:
      • Each virtual processor with a small entitled capacity is, in effect, a slower CPU
      • Compensate with more entitled capacity (2-5% PUs over plan)
  • Applications with polling behavior
  • CPU-intensive application examples: DSS, HPC, SAS
• Applications that are good candidates for Micro-Partitioning:
  • Ones with low average CPU utilization but high peaks:
    • Examples: OLTP, web applications, mail servers, directory servers
4
Logical Processors
Logical Processors represent SMT threads
[Diagram: logical SMT threads (L) map onto virtual processors (V), which the hypervisor dispatches onto physical cores. Shown: two LPARs with 2 dedicated cores each (VPs under the covers), and two shared LPARs – PU=1.2 with two V=0.6 VPs and weight=128, and PU=0.8 with two V=0.4 VPs and weight=192 – over a pool of physical cores.]
5
Dispatching in shared pool
6
POWER6 vs POWER7 Virtual Processor Unfolding
• Scaled mode provides the highest core throughput at the expense of per-thread response time and throughput. It also provides the highest system-wide throughput per VP because tertiary thread capacity is not "left on the table."
• schedo -p -o vpm_throughput_mode=<n>
  0 = Legacy Raw mode (default)
  1 = "Enhanced Raw" mode with a higher threshold than legacy
  2 = Scaled mode, use primary and secondary SMT threads
  4 = Scaled mode, use all four SMT threads
  This is a dynamic tunable
• SMT-unfriendly workloads could see a large per-thread performance degradation in scaled mode
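As a sketch, checking and switching the folding mode might look like the following (AIX schedo, run as root; the value 2 is just an example):

```shell
# Show the current VP folding throughput mode (0 = legacy raw, the default)
schedo -o vpm_throughput_mode

# Move to scaled mode using primary and secondary SMT threads;
# -p makes it persistent across reboots, and the tunable itself is dynamic
schedo -p -o vpm_throughput_mode=2
```

Because the tunable is dynamic, a change can be tested and backed out without a reboot.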
8
Understand SMT
• SMT
  • Threads dispatch via a Virtual Processor (VP)
  • Overall more work gets done (throughput)
  • Individual threads run a little slower
• SMT1: Largest unit of execution work
• SMT2: Smaller unit of work, but provides a greater amount of execution work per cycle
• SMT4: Smallest unit of work, but provides the maximum amount of execution work per cycle
• On POWER7, a single thread cannot exceed 65% utilization
• On POWER6 or POWER5, a single thread can consume 100%
• Understand thread dispatch order: primary (thread 0), secondary (thread 1), tertiary (threads 2 and 3)
  (Diagram courtesy of IBM)

POWER7 SMT=2 70% and SMT=4 63% utilization reporting tries to show potential spare capacity
• This has escaped most people's attention
• A VM goes 100% busy at entitlement, and from there can grow up to 10x more CPU
• With SMT4 100% busy, the 1st logical CPU is now reported as 63% busy
• The 2nd, 3rd and 4th logical CPUs each report 12% idle time, which is approximate
10
More on Dispatching
How dispatching works
Example – 1 core with 6 VMs assigned to it:
• VPs for the VMs on the core get dispatched (consecutively) and their threads run
• As each VM runs, the cache is cleared for the new VM
• When entitlement is reached, or the VM runs out of work, the CPU is yielded to the next VM
• Once all VMs are done, the system determines if there is time left
  • Assume our 6 VMs take 6 ms, so 4 ms is left in the 10 ms window
  • Remaining time is assigned to still-running VMs according to their weights
  • VMs run again, and so on
Problem – if entitlement is too low, then the dispatch window for the VM can be too short
• If a VM runs multiple times in a 10 ms window, it does not run at full speed, as the cache has to be warmed up each time
• If entitlement is higher, then the dispatch window is longer and the cache stays warm longer – fewer cache misses
• This means that on POWER7 you need to pay more attention to VPs
  • You may see more cores activated at lower utilization levels
  • But you will see higher idle
  • If only primary SMT threads are in use, then you have excess VPs
• Performance may (in most cases, will) degrade when the number of Virtual Processors in an LPAR exceeds the number of physical processors
12
Useful Processor Commands
13
lparstat 30 2
lparstat 30 2 output
%user %sys %wait %idle physc %entc lbusy app vcsw phint
46.8 11.6 0.5 41.1 11.01 91.8 16.3 4.80 28646 738
48.8 10.8 0.4 40.0 11.08 92.3 16.9 4.88 26484 763
NOTE – Must set “Allow performance information collection” on the LPARs to see good values for app, etc
Required for shared pool monitoring
14
Using sar -mu -P ALL (POWER7 & SMT4)
AIX LPAR with ent=10 and 16 VPs, so the entitled physc per VP is about 0.63

cpu %usr %sys %wio %idle physc %entc
4 84 14 0 1 0.49 4.9
5 42 7 1 50 0.17 1.7
6 0 1 0 99 0.10 1.0
7 0 1 0 99 0.10 1.0    (0.86 physc across this VP's four threads)
8 88 11 0 1 0.51 5.1
9 40 11 1 48 0.18 1.8
............. (lines for logical CPUs 10-62 omitted)
63 0 1 0 99 0.11 1.1
- 55 11 0 33 12.71 127.1

You may see a U line if in the shared processor pool; it reports unused LPAR capacity (compared against entitlement)
15
mpstat -s
mpstat -s 1 1

System configuration: lcpu=64 ent=10.0 mode=Uncapped

… (earlier processors omitted)

Proc60
99.11%
cpu60    cpu61    cpu62    cpu63
62.63%   13.22%   11.63%   11.63%
16
nmon Summary
17
lparstat – bbbl tab in nmon
lparno            3
lparname          gandalf
CPU in sys        24
Virtual CPU       16
Logical CPU       64
smt threads       4
capped            0
min Virtual       8
max Virtual       20
min Logical       8
max Logical       80
min Capacity      8
max Capacity      16
Entitled Capacity 10
min Memory MB     131072

Note: compare VPs to the pool size – an LPAR should not have more VPs than the pool size
18
Entitlement and VPs from the lpar tab in nmon
19
CPU by thread from the cpu_summ tab in nmon
20
vmstat -IW
vmstat -IW 60 2

• The r column is the average number of runnable threads (ready but waiting to run, plus those running)
  • This is the global run queue – use mpstat and look at the rq field to get the run queue for each logical CPU
• The b column is the average number of threads placed in the VMM wait queue (awaiting resources or I/O)
21
vmstat -IW output
lcpu 72 Mem 319488MB Ent 12
r b p w avm fre fi fo pi po fr sr in sy cs us sy id wa pc ec
17 0 0 2 65781580 2081314 2 5 0 0 0 0 11231 146029 22172 52 13 36 0 11.91 99.2
11 0 0 5 65774009 2088879 2 136 0 0 0 0 11203 126677 20428 47 13 40 0 11.29 94.1
14 0 0 4 65780238 2082649 2 88 0 0 0 0 11463 220780 22024 50 14 36 0 12.62 105.2
20 0 0 9 65802196 2060643 51 114 0 0 0 0 9434 243080 21883 55 15 29 1 12.98 108.2
26 1 0 7 65810733 2052105 9 162 0 0 0 0 9283 293158 22779 60 16 23 1 13.18 109.8
18 1 0 5 65814822 2048011 0 0 0 0 0 0 11506 155344 27308 53 13 33 0 13.51 112.6
18 0 0 5 65798165 2064666 0 0 0 0 0 0 10868 200123 25143 53 14 32 0 13.27 110.6
17 0 0 5 65810136 2052662 4 0 0 0 0 0 12394 230802 29167 51 17 32 1 14.14 117.8
15 0 0 7 65796659 2066142 0 0 0 0 0 0 12301 217839 27142 48 16 35 0 13.42 111.9
13 0 0 8 65798332 2064469 1 20 0 0 0 0 14001 160576 30871 44 16 39 0 11.41 95.1
15 1 0 4 65795292 2067486 7 212 0 0 0 0 14263 215226 31856 51 14 35 0 13.33 111.1
19 0 0 7 65807317 2055454 0 0 0 0 0 0 11887 306416 26162 52 13 34 0 13.85 115.4
13 0 0 5 65807079 2055689 0 0 0 0 0 0 11459 196036 26782 49 14 36 0 12.49 104.1
19 0 0 5 65810475 2052293 0 0 0 0 0 0 13187 292694 28050 52 13 35 0 14 116.7
17 0 0 11 65819751 2043008 4 0 0 0 0 0 12218 225829 27516 51 14 35 0 13.14 109.5
26 0 0 10 65825374 2037373 1 35 0 0 0 0 11447 220479 23273 55 13 32 0 13.91 115.9
26 0 0 6 65820723 2042005 6 182 0 0 0 0 11652 331234 26888 63 11 25 1 14.3 119.2
18 1 0 6 65816444 2046275 4 0 0 0 0 0 11628 184413 25634 51 14 35 0 13.19 110
20 0 0 8 65820880 2041819 0 13 0 0 0 0 12332 190716 28370 51 14 36 0 13.37 111.4
17 0 0 6 65822872 2039836 0 0 0 0 0 0 13269 128353 30880 47 14 39 0 11.46 95.5
15 0 0 5 65832214 2030493 0 0 0 0 0 0 12079 207403 26319 51 13 35 0 13.24 110.4
14 0 0 8 65827065 2035639 17 14 0 0 0 0 14060 117935 32407 48 15 36 0 12.06 100.5
15 0 0 4 65824658 2037996 10 212 0 0 0 0 12690 137533 27678 44 20 36 0 13.53 112.8
18 0 0 10 65817327 2045339 0 0 0 0 0 0 12665 161261 28010 50 14 36 0 12.69 105.8
17 0 0 8 65820348 2042321 0 0 0 0 0 0 14047 228897 28475 53 13 34 0 14.44 120.4
16 0 0 6 65817053 2045609 0 0 0 0 0 0 12953 160629 26652 50 14 35 0 12.83 106.9
18 0 0 12 65813683 2048949 0 0 0 0 0 0 11766 198577 26593 53 13 33 0 13.54 112.9
18 0 0 13 65808798 2053853 18 23 0 0 0 0 12195 209122 27152 53 14 33 0 13.86 115.5
12 1 0 14 65800471 2062164 6 218 0 0 0 0 12429 182117 27787 55 13 31 1 13.38 111.5
18 2 0 8 65805624 2056998 6 72 0 0 0 0 12134 209260 25250 54 13 32 0 13.73 114.4
r b p w avm fre fi fo pi po fr sr in sy cs us sy id wa pc ec
Average 17.33 0.23 0.00 7.10 65809344 2053404 5.00 50 0.00 0.00 0.00 0.00 12134.80 203285 26688 51.53 14.10 33.93 0.17 13.14 109.48
Max 26.00 2.00 0.00 14.00 65832214 2088879 51.00 218 0.00 0.00 0.00 0.00 14263.00 331234 32407 63.00 20.00 40.00 1.00 14.44 120.40
Min 11.00 0.00 0.00 2.00 65774009 2030493 0.00 0 0.00 0.00 0.00 0.00 9283.00 117935 20428 44.00 11.00 23.00 0.00 11.29 94.10
22
MEMORY
23
Memory Types
• Persistent
  • Backed by filesystems
• Working storage
  • Dynamic
  • Includes executables and their work areas
  • Backed by page space
  • Shows as avm in vmstat -I (multiply by 4096 to get bytes instead of pages), as %comp in nmon analyser, or as the percentage of memory used for computational pages in vmstat -v
  • ALSO NOTE – if %comp is near or above 97% then you will be paging and need more memory
• AIX prefers to steal from persistent storage as it is cheap
  • minperm, maxperm, maxclient, lru_file_repage and page_steal_method all impact these decisions
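To see where these tunables stand on a given system, something like the following (AIX vmo; on levels where lru_file_repage is a restricted tunable, add -F to display it):

```shell
# Show current, default and range values for the page-replacement tunables
vmo -a | grep -E "minperm%|maxperm%|maxclient%|lru_file_repage|page_steal_method"
```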
24
Correcting Paging
From vmstat -v:
11173706 paging space I/Os blocked with no psbuf

lsps output on the above system, which was paging before changes were made to the tunables:

lsps -a
Page Space  Physical Volume  Volume Group  Size     %Used  Active  Auto  Type
paging01    hdisk3           pagingvg      16384MB  25     yes     yes   lv
paging00    hdisk2           pagingvg      16384MB  25     yes     yes   lv
hd6         hdisk0           rootvg        16384MB  25     yes     yes   lv

lsps -s
Total Paging Space  Percent Used
49152MB             1%

Can also use vmstat -I and vmstat -s

Page spaces should be balanced – NOTE the VIO server comes with 2 different-sized page spaces on one hdisk (at least until FP24)

Best practice:
• More than one paging volume
• All the same size, including hd6
• Page spaces on different disks from each other
• Do not put them on hot disks
• Mirror all page spaces that are on internal or non-RAIDed disk
• If you can't make hd6 as big as the others, then swap it off after boot
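A hedged sketch of applying the practice above (the volume group, disk names, and LP size are assumptions – substitute your own):

```shell
# Add a 16 GB page space on its own disk, active now (-n) and at restart (-a).
# -s is in logical partitions: 64 LPs x 256 MB LP size = 16 GB is an assumption
# about this volume group's LP size.
mkps -a -n -s 64 pagingvg hdisk2

# If hd6 cannot be grown to match the others, deactivate it after boot
swapoff /dev/hd6
```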
25
Memory with lru_file_repage=0
lru_file_repage=0
• minperm=3
• Always try to steal from filesystems if filesystems are using more than 3% of memory
• maxperm=90
• Soft cap on the amount of memory that filesystems or network can use
• Superset so includes things covered in maxclient as well
• maxclient=90
• Hard cap on amount of memory that JFS2 or NFS can use – SUBSET of maxperm
All AIX systems post AIX v5.3 (tl04 I think) should have these 3 set
On v6.1 and v7 they are set by default
26
page_steal_method
• Default in 5.3 is 0, in 6 and 7 it is 1
• What does 1 mean?
• lru_file_repage=0 tells LRUD to try and steal from filesystems
• Memory split across mempools
• LRUD manages a mempool and scans to free pages
• 0 – scan all pages
• 1 – scan only filesystem pages
27
page_steal_method Example
• 500GB memory
• 50% used by file systems (250GB)
• 50% used by working storage (250GB)
• mempools = 5
• So we have at least 5 LRUDs each controlling about 100GB memory
• Set to 0
• Scans all 100GB of memory in each pool
• Set to 1
• Scans only the 50GB in each pool used by filesystems
• Reduces cpu used by scanning
• When combined with CIO this can make a significant difference
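The arithmetic above can be sketched as shell arithmetic (the numbers are the slide's hypothetical example):

```shell
mem_gb=500       # total memory
fs_pct=50        # share of memory used by filesystem (persistent) pages
mempools=5       # number of memory pools, one LRUD each

per_lrud=$(( mem_gb / mempools ))         # memory each LRUD controls
scan_all=$per_lrud                        # page_steal_method=0: scan every page
scan_fs=$(( per_lrud * fs_pct / 100 ))    # page_steal_method=1: filesystem pages only

echo "per LRUD: ${per_lrud}GB, method 0 scans ${scan_all}GB, method 1 scans ${scan_fs}GB"
```

Halving the pages each LRUD must scan is where the CPU saving comes from.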
28
Looking for Problems
• lssrad -av
• mpstat -d
• topas -M
• svmon
  • Try svmon -G -O unit=auto,timestamp=on,pgsz=on,affinity=detail
  • Look at the Domain affinity section of the report
• etc.
29
Memory Problems
• Look at computational memory use
  • Shows as avm in vmstat -I (multiply by 4096 to get bytes instead of pages)
  • or as %comp in nmon analyser
  • or as the percentage of memory used for computational pages in vmstat -v
  • NOTE – if %comp is near or above 97% then you will be paging and need more memory
• Try svmon -P -Osortseg=pgsp -Ounit=MB | more
  • This shows the processes using the most page space, in MB
• You can also try:
  • svmon -P -Ofiltercat=exclusive -Ofiltertype=working -Ounit=MB | more
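Converting avm to a size, as described above (the avm value is borrowed from the earlier vmstat -IW output, purely as an example):

```shell
avm=65809344                          # avm from vmstat, in 4 KB pages
bytes=$(( avm * 4096 ))               # pages -> bytes
gb=$(( bytes / 1024 / 1024 / 1024 ))  # bytes -> GiB (integer)
echo "computational memory: ~${gb} GB"
```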
30
nmon memnew tab
31
nmon memuse tab
32
Memory Tips
[Diagram: four processor modules, each showing its cores with attached DIMM banks, some populated ("DIMMs") and some empty – illustrating memory placement relative to the cores.]
34
Starter set of tunables 1
For AIX v5.3:
No need to set memory_affinity=0 after 5.3 tl05

MEMORY
vmo -p -o minperm%=3
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90
vmo -p -o minfree=960        (we will calculate these)
vmo -p -o maxfree=1088       (we will calculate these)
vmo -p -o lru_file_repage=0
vmo -p -o lru_poll_interval=10
vmo -p -o page_steal_method=1

For AIX v6 or v7:
Memory defaults are already set correctly except minfree and maxfree
If you upgraded from a previous version of AIX using migration, then you need to check the settings afterward
35
Starter set of tunables 2
The parameters below should be reviewed and changed
(see vmstat -v and lvmo -a later)

PBUFS
Use the new way (coming up)

JFS2
ioo -p -o j2_maxPageReadAhead=128
  (the default above may need to be changed for sequential I/O) – dynamic
  The difference between minfree and maxfree should be greater than this value
ioo -p -o j2_dynamicBufferPreallocation=16
  Max is 256. 16 means 16 x 16k slabs, or 256k
  A default that may need tuning, but it is dynamic
  Replaces tuning j2_nBufferPerPagerDevice until at max

On the system shown in the vmstat -v output on the next slide: numclient=numperm, so most likely the I/O being done is JFS2, NFS or VxFS. Based on the blocked I/Os it is clearly a system using JFS2. It is also having paging problems, and the pbufs also need reviewing.
37
vmstat -v Output

uptime
02:03PM up 39 days, 3:06, 2 users, load average: 17.02, 15.35, 14.27

vmstat -v (extract):
9 memory pools
3.0 minperm percentage
90.0 maxperm percentage
14.9 numperm percentage
14.9 numclient percentage
90.0 maxclient percentage
38
Memory Pools and fre column
• The fre column in vmstat is a count of all the free pages across all the memory pools
• When you look at fre you need to divide it by the number of memory pools
• Then compare it to maxfree and minfree
• This will help you determine if you are happy, page stealing or thrashing
• You can see high values in fre but still be paging
• Example: if maxfree=2000 and we have 10 memory pools, then a fre of around 9900 means only 990 pages free in each pool on average. With minfree=960 we are page stealing and close to thrashing.
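The per-pool check above as shell arithmetic (fre is hypothetical, matching the example):

```shell
fre=9900        # fre column from vmstat (hypothetical)
mempools=10     # memory pools, from vmstat -v
minfree=960
maxfree=2000

per_pool=$(( fre / mempools ))
echo "average free pages per pool: $per_pool"
# 990 is barely above minfree=960, so the LRUDs are stealing pages and the
# system is close to thrashing even though fre itself looks comfortably high
```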
39
Calculating minfree and maxfree
vmstat -v | grep "memory pools"
3 memory pools

The calculation is:
minfree = max(960, (120 * lcpus) / memory pools)
maxfree = minfree + (max(maxpgahead, j2_maxPageReadAhead) * lcpus) / memory pools

For example, with 12 logical CPUs, 3 memory pools and j2_maxPageReadAhead=128:
minfree = max(960, (120 * 12)/3) = 960
maxfree = 960 + (128 * 12)/3 = 1472
I would probably bump maxfree to 1536 rather than using 1472 (a nicer binary-friendly number)

If you over-allocate these values it is possible that you will see high values in the "fre" column of a vmstat and yet still be paging.
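The calculation above as a small script (the input values are hypothetical; substitute your own lcpu count, pool count and read-ahead settings):

```shell
lcpus=12                 # logical CPUs (hypothetical)
mempools=3               # from vmstat -v
j2_read_ahead=128        # j2_maxPageReadAhead, from ioo -a
maxpgahead=8             # JFS maxpgahead (assumed default)

# minfree = max(960, (120 * lcpus) / memory pools)
calc=$(( (120 * lcpus) / mempools ))
if [ "$calc" -gt 960 ]; then minfree=$calc; else minfree=960; fi

# maxfree = minfree + (max(maxpgahead, j2_maxPageReadAhead) * lcpus) / memory pools
if [ "$j2_read_ahead" -gt "$maxpgahead" ]; then ra=$j2_read_ahead; else ra=$maxpgahead; fi
maxfree=$(( minfree + (ra * lcpus) / mempools ))

echo "minfree=$minfree maxfree=$maxfree"
```

These are the numbers you would then feed into vmo -p -o minfree=… -o maxfree=….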
40
svmon
# svmon -G -O unit=auto -i 2 2
Unit: auto
--------------------------------------------------------------------------------------
size inuse free pin virtual available mmode
memory 8.00G 3.14G 4.86G 2.20G 2.57G 5.18G Ded-E
pg space 4.00G 10.5M
41
svmon
# svmon -G -O unit=auto,timestamp=on,pgsz=on,affinity=detail -i 2 2
42
NETWORK
See article at:
http://www.ibmsystemsmag.com/aix/administrator/networks/network_tuning/
43
Starter set of tunables 3
NETWORK
no -p -o rfc1323=1
no -p -o tcp_sendspace=262144
no -p -o tcp_recvspace=262144
no -p -o udp_sendspace=65536
no -p -o udp_recvspace=655360
Also check the actual NIC interfaces and make sure they are set to at least these values
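One way to check both layers (the interface name en0 is an assumption; interface-specific ISNO values override the no values):

```shell
# System-wide values set by no
no -a | grep -E "rfc1323|tcp_sendspace|tcp_recvspace|udp_sendspace|udp_recvspace"

# Interface-specific overrides
lsattr -El en0 -a tcp_sendspace -a tcp_recvspace -a rfc1323

# Raise the interface values if they are lower than the no values
chdev -l en0 -a tcp_sendspace=262144 -a tcp_recvspace=262144 -a rfc1323=1
```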
44
ifconfig
ifconfig -a output:

en0:
flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
inet 10.2.0.37 netmask 0xfffffe00 broadcast 10.2.1.255
tcp_sendspace 65536 tcp_recvspace 65536 rfc1323 0
lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

On a VIO server I normally bump the transmit queues on the real (underlying) adapters for the aggregate/SEA.
Example for a 1Gb adapter (-P defers the change until the next boot):
chdev -l ent? -a txdesc_que_sz=1024 -a tx_que_sz=16384 -P
45
My VIO Server SEA
# ifconfig -a
en6:
flags=1e080863,580<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
lo0:
flags=e08084b,1c0<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,LARGESEND,CHAIN>
inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
inet6 ::1%1/0
tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1
46
Network
[Table: recommended tcp_sendspace, tcp_recvspace and rfc1323 settings by interface speed and MTU – see the AIX v6.1 Performance Management guide:]
http://publib.boulder.ibm.com/infocenter/aix/v6r1/topic/com.ibm.aix.prftungd/doc/prftungd/prftungd_pdf.pdf
47
Network Commands
• entstat -d or netstat -v (also -m and -I)
• netpmon
• iptrace (traces) and ipreport (formats the trace)
• tcpdump
• traceroute
• chdev, lsattr
• no
• ifconfig
• ping and netperf
• ftp
  • Can use ftp to measure network throughput:
    ftp to the target, then:
    ftp> put “| dd if=/dev/zero bs=32k count=100” /dev/null
  • Compare the result to the bandwidth (for 1Gbit: ~948 Mb/s if simplex and ~1470 if duplex)
  • 1 Gbit = 0.125 GB = 1000 Mb ≈ 125 MB/s, but that is 100% of the line rate
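Turning such a transfer into a throughput number (all figures here are hypothetical – a larger count than the slide's, and an invented elapsed time):

```shell
bytes=$(( 32768 * 10000 ))            # dd if=/dev/zero bs=32k count=10000
secs=3                                # elapsed time ftp reported (hypothetical)
mbits=$(( bytes * 8 / 1000000 / secs ))
echo "${mbits} Mb/s vs ~948 Mb/s achievable on 1Gbit simplex"
```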
48
Net tab in nmon
49
Other Network
• If on a 10Gb network, check out Gareth's webinar
  • https://www.ibm.com/developerworks/wikis/download/attachments/153124943/7_PowerVM_10Gbit_Ethernet.pdf?version=1
• netstat -v
  • Look for overflows and memory allocation failures:
    Max Packets on S/W Transmit Queue: 884
    S/W Transmit Queue Overflow: 9522
  • For "Software Xmit Q overflows" or "packets dropped due to memory allocation failure":
    • Increase the adapter transmit queue
    • Use lsattr -El ent? to see the current setting
  • Look for receive errors or transmit errors
  • DMA underruns or overruns
  • mbuf errors
• lparstat 2
  • Look for high vcsw – an indicator that entitlement may be too low
• tcp_nodelay (or tcp_nodelayack)
  • Disabled by default
  • There is a 200 ms delay by default, as TCP waits to piggyback acks onto response packets
  • The tradeoff is more traffic versus faster response
• Also check errpt – people often forget this
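The checks on this slide, gathered into one quick pass (the adapter name ent0 is an assumption; AIX commands):

```shell
# Surface drops, overflows, underruns and mbuf failures from adapter statistics
entstat -d ent0 | grep -iE "overflow|dropped|error|underrun|mbuf"

# Watch vcsw over a few intervals; persistently high values suggest low entitlement
lparstat 2 5

# Disable delayed TCP acks only if the extra ack traffic is acceptable
no -p -o tcp_nodelayack=1

# And do not forget the error report
errpt | head
```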
50
entstat -v
ETHERNET STATISTICS (ent18) :
Device Type: Shared Ethernet Adapter
Elapsed Time: 44 days 4 hours 21 minutes 3 seconds
Transmit Statistics: Receive Statistics:
-------------------- -------------------
Packets: 94747296468 Packets: 94747124969
Bytes: 99551035538979 Bytes: 99550991883196
Interrupts: 0 Interrupts: 22738616174
Transmit Errors: 0 Receive Errors: 0
Packets Dropped: 0 Packets Dropped: 286155
Bad Packets: 0
Max Packets on S/W Transmit Queue: 712
S/W Transmit Queue Overflow: 0
Current S/W+H/W Transmit Queue Length: 50
51
entstat –v vio
SEA
Transmit Statistics: Receive Statistics:
-------------------- -------------------
Packets: 83329901816 Packets: 83491933633
Bytes: 87482716994025 Bytes: 87620268594031
Interrupts: 0 Interrupts: 18848013287
Transmit Errors: 0 Receive Errors: 0
Packets Dropped: 0 Packets Dropped: 67836309
Bad Packets: 0
Max Packets on S/W Transmit Queue: 374
S/W Transmit Queue Overflow: 0
Current S/W+H/W Transmit Queue Length: 0
52
Buffers
Virtual Trunk Statistics
Receive Information
Receive Buffers
Buffer Type Tiny Small Medium Large Huge
Min Buffers 512 512 128 24 24
Max Buffers 2048 2048 256 64 64
Allocated 513 2042 128 24 24
Registered 511 506 128 24 24
History
Max Allocated 532 2048 128 24 24
Lowest Registered 502 354 128 24 24
53
nmon Monitoring
54
Useful Links
• Nigel on Entitlements and VPs plus 7 most frequently asked questions
• http://www.youtube.com/watch?v=1W1M114ppHQ&feature=youtu.be
• Charlie Cler Articles
• http://www.ibmsystemsmag.com/authors/Charlie-Cler/
• Andrew Goade Articles
• http://www.ibmsystemsmag.com/authors/Andrew-Goade/
• Jaqui Lynch Articles
• http://www.ibmsystemsmag.com/authors/Jaqui-Lynch/
• Jay Kruemke Twitter – chromeaix
• https://twitter.com/chromeaix
• Nigel Griffiths Twitter – mr_nmon
• https://twitter.com/mr_nmon
• Jaqui’s Upcoming Talks and Movies
• Upcoming Talks
• http://www.circle4.com/forsythetalks.html
• Movie replays
• http://www.circle4.com/movies
55
Useful Links
• Nigel Griffiths
• AIXpert Blog
• https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/?lang=en
• 10 Golden rules for rPerf Sizing
https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/size_with_rperf_if_you_must_but_don_t_forget_the_assumptions98?lang=en
• Youtube channel
• http://www.youtube.com/user/nigelargriffiths
• AIX Wiki
• https://www.ibm.com/developerworks/wikis/display/WikiPtype/AIX
• HMC Scanner
• http://www.ibm.com/developerworks/wikis/display/WikiPtype/HMC+Scanner
• Workload Estimator
• http://ibm.com/systems/support/tools/estimator
• Performance Tools Wiki
• http://www.ibm.com/developerworks/wikis/display/WikiPtype/Performance+Monitoring+Tools
• Performance Monitoring
• https://www.ibm.com/developerworks/wikis/display/WikiPtype/Performance+Monitoring+Documentation
• Other Performance Tools
• https://www.ibm.com/developerworks/wikis/display/WikiPtype/Other+Performance+Tools
• Includes new advisors for Java, VIOS, Virtualization
• VIOS Advisor
https://www.ibm.com/developerworks/wikis/display/WikiPtype/Other+Performance+Tools#OtherPerformanceTools-VIOSPA
56
References
• Simultaneous Multi-Threading on POWER7 Processors by Mark Funk
• http://www.ibm.com/systems/resources/pwrsysperf_SMT4OnP7.pdf
• Processor Utilization in AIX by Saravanan Devendran
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/Power%20Systems/page/Understanding%20CPU%20utilization%20on%20AIX
• Rosa Davidson Back to Basics Part 1 and 2 –Jan 24 and 31, 2013
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/Power%20Systems/page/AIX%20Virtual%20User%20Group%20-%20USA
• Nigel – PowerVM User Group
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home?lang=en#/wiki/Power%20Systems/page/PowerVM%20technical%20webinar%20series%20on%20Power%20Systems%20Virtualization%20from%20IBM%20web
• SG24-7940 - PowerVM Virtualization - Introduction and Configuration
• http://www.redbooks.ibm.com/redbooks/pdfs/sg247940.pdf
• SG24-7590 – PowerVM Virtualization – Managing and Monitoring
• http://www.redbooks.ibm.com/redbooks/pdfs/sg247590.pdf
• SG24-8080 – Power Systems Performance Guide – Implementing and Optimizing
• http://www.redbooks.ibm.com/redbooks/pdfs/sg248080.pdf
• SG24-8079 – POWER7 and POWER7+ Optimization and Tuning Guide
• http://www.redbooks.ibm.com/redbooks/pdfs/sg248079.pdf
• Redbook Tip on Maximizing the Value of P7 and P7+ through Tuning and Optimization
• http://www.redbooks.ibm.com/technotes/tips0956.pdf
57
Thank you for your time
58