Good Performance of Storage Systems With IBM i
Agenda
Customer considerations
Storage systems that connect to IBM i
IBM i architecture and external storage
Sizing guidelines for storage systems with IBM i
Demo of Disk Magic for IBM i
Daily workload
– Interactive transaction workload
Request: short transaction response time
Typical request: IBM i on external storage should perform as well as on internal disk
Diagram: storage systems that connect to IBM i, natively or through VIOS — SVC (with SSD), DS4000, DS5000, DS6000, XIV, SSD, ProtecTIER, and tape libraries & drives.
Note: The connections shown with VIOS refer to VIOS vSCSI
Diagram: IO flow in POWER hardware — processors and main storage (MS bus), IO bus (PCI-X), and IOA with cache (e.g. #5903, #5904), down to the storage system. Pages map disk and sector to a virtual address; requests and data flow between the server and the storage system.
Diagram: IBM i LPARs (LPAR 1, LPAR 2, LPAR 3) on POWER, connected through dual VIOS using vSCSI and virtual FC to SVC; each IBM i LUN is an SVC vdisk created from the storage pool on the background storage system, with sector conversion performed for IBM i.
Sizing Guidelines
It is good to use IBM i performance data to apply the sizing guidelines even before modelling with Disk Magic.
Chart: IO rate by server and KB per IO for both systems (series i1_ASP1 and i2_ASP1, with KB/Write on the secondary axis).
Chart: I/O rate vs. interval time, Oct/01/2009 to Oct/07/2009.
Chart: I/O rate vs. interval time (Sep/16/2011); the peak in IO/sec is marked.
Graphs from the Disk Magic spreadsheet.
Determine the peaks (continued)
Chart: reads/sec and writes/sec vs. interval start time; the peak for disk utilization is marked.
Read/write ratio is important to determine the peak, because of the RAID penalty on write operations.
Example: LUNs in DS8000. Assumed cache hits: 20% read hit, 30% write efficiency.
Example: 9800 IO/sec with a read/write ratio of 50/50 needs 9800 / 982 ≈ 10 RAID-10 ranks of 15K rpm disk drives, connected with IOP-less adapters.
The table can be found in the Redbook IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887-00:
RAID-5               58   45
RAID-1 or RAID-10    55   49
RAID-5               39   30
RAID-5               96   75
RAID-1 or RAID-10    92   82
RAID-5               64   50
Detailed calculation: (reads/sec − read cache hit %) + 2 × (writes/sec − write cache efficiency %) = disk operations/sec (on disk)
Example: 7000 IO/sec with a read/write ratio of 70/30 needs 7000 / 138 ≈ 50 disk drives (15K RPM) in RAID-10.
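A minimal sketch of this calculation in Python (the function names, and treating the cache percentages as fractions, are our own; the per-rank IO/sec value comes from the sizing table above):

    import math

    def disk_ops_per_sec(reads_per_sec, writes_per_sec,
                         read_cache_hit, write_cache_eff):
        """Disk operations/sec on the physical drives, per the formula above:
        read misses plus twice the writes not absorbed by write cache
        efficiency (the factor 2 reflects the mirrored-RAID write penalty)."""
        read_misses = reads_per_sec * (1.0 - read_cache_hit)
        effective_writes = writes_per_sec * (1.0 - write_cache_eff)
        return read_misses + 2.0 * effective_writes

    def ranks_needed(host_io_per_sec, io_per_rank):
        """Number of RAID ranks, given the per-rank IO/sec from the table."""
        return math.ceil(host_io_per_sec / io_per_rank)

    # Slide example: 9800 IO/sec at 50/50 read/write, 982 IO/sec per
    # RAID-10 rank of 15K rpm drives -> about 10 ranks.
    print(ranks_needed(9800, 982))                   # 10
    print(disk_ops_per_sec(4900, 4900, 0.20, 0.30))  # 10780.0 on-disk ops/sec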
The sizing guidelines and calculations for DDMs in storage systems connected with VIOS or VIOS_NPIV don't change.
The sizing guidelines and calculations for DDMs in storage systems connected with SVC and VIOS don't change.
Sizing guidelines for the number, or for the size, of LUNs
For a given disk capacity: the larger the number of LUNs, the smaller their size (see the sketch after this list).
To obtain the number of LUNs you may use WLE (number of disk drives).
Considerations for a very large number of LUNs:
– Many physical adapters are needed for natively connected storage
– A large number of virtual adapters in VIOS is difficult to manage and troubleshoot
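A small sketch of the capacity trade-off (the helper name is ours; the capacity figures are illustrative only):

    def lun_size_gb(total_capacity_gb, num_luns):
        """For a fixed total capacity, more LUNs means smaller LUNs."""
        return total_capacity_gb / num_luns

    # Illustrative only: 4 TB of capacity spread over 16 vs. 32 LUNs
    print(lun_size_gb(4096, 16))  # 256.0 GB per LUN
    print(lun_size_gb(4096, 32))  # 128.0 GB per LUN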
The listed guidelines for a particular storage system apply to all cases (when applicable):
– Native connection (sizing for physical FC adapters applies to natively connected storage)
– Connection with VIOS vSCSI
– Connection with VIOS_NPIV
– Connection via SVC (the size of LUNs applies to SVC vdisks)
Avg of max sequential throughput for 4KB, 256KB reads, writes: 216 MB/sec per port
Avg of min sequential throughput for 4KB, 256KB reads, writes: 132 MB/sec per port
Avg of max sequential throughput for 4KB, 256KB reads, writes: 382 MB/sec per 2 ports
Avg of min sequential throughput for 4KB, 256KB reads, writes: 208 MB/sec per 2 ports
The following table shows the recommended number of adapters per half loop:
Rule of thumb:
– 0.25 CPUs per 10,000 I/Os of the virtual SCSI client (see the sketch below)
– For lowest I/O latency, preferably use a dedicated processor for VIOS
– 1 GB main memory for VIOS
– Two or more FC adapters assigned to VIOS for multi-pathing
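A quick sketch of the CPU rule of thumb (the function name and the rounding up to hundredths of a CPU are our own choices):

    import math

    def vios_cpu_estimate(client_iops, cpus_per_10k_io=0.25):
        """Rule of thumb: 0.25 CPUs per 10,000 I/Os of the vSCSI client,
        rounded up to hundredths of a CPU here."""
        return math.ceil(client_iops / 10000 * cpus_per_10k_io * 100) / 100

    print(vios_cpu_estimate(25000))  # 0.63 CPUs for 25,000 I/Os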
Sizing IASP
Chart: workload skew — percentage of workload vs. percentage of active data, by skew level.
• If the compression ratio of the devices for remote links is known, you may apply it. If it is not known, you may assume 2:1 compression (see the sketch below).
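A small sketch of applying the compression assumption to remote-link bandwidth (the function name and the write-throughput input are illustrative; the 2:1 default follows the guideline above):

    def required_link_bandwidth(write_mb_per_sec, compression_ratio=2.0):
        """Remote-link bandwidth after device compression; if the devices'
        ratio is unknown, the guideline above assumes 2:1 (ratio = 2.0)."""
        return write_mb_per_sec / compression_ratio

    print(required_link_bandwidth(120.0))       # 60.0 MB/sec with assumed 2:1
    print(required_link_bandwidth(120.0, 3.0))  # 40.0 MB/sec with a known 3:1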
Backup charts
Estimate % read cache hit and % write cache efficiency from the present cache hits on internal disk.
Rough estimation by best practice:
– If % cache hits is below 50%, estimate the same percentage on external storage
– If % cache hits is above 50%, estimate half of this percentage on external storage
If cache hits are not known or you are in doubt, use the Disk Magic default estimation: 20% read cache hit, 30% write cache efficiency.
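A minimal sketch of this best-practice estimation (the function name is ours; exactly 50% is treated here as falling in the "half" branch):

    def external_cache_hit_estimate(internal_hit_pct):
        """Best practice above: below 50% on internal disk, keep the same
        percentage on external storage; otherwise assume half of it."""
        if internal_hit_pct < 50.0:
            return internal_hit_pct
        return internal_hit_pct / 2.0

    print(external_cache_hit_estimate(40.0))  # 40.0 -> same percentage
    print(external_cache_hit_estimate(70.0))  # 35.0 -> half of 70%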