Good day, I'm supporting an IBM Power AIX environment, which I have been working with for about a year. Along with handling the general maintenance of the systems and moving to AIX 7.2, there is a large UniVerse environment on the hardware I support. Since I didn't build these environments, I've done some reading about recommended tuning. I couldn't find anything specific, so along with what I've found, I wanted to see if there was anything else I could do or look into. We have a number of test environments, and servers I can build, to test these things on.
I have found some mention of tuning JFS2. The first suggestion was to check the type of filesystem mount and, if CIO is enabled, disable it. Then set 'ioo -p -o j2_dynamicBufferPreallocation=64'. I've also seen mention that j2_nPagesPerWriteBehindCluster should be changed from the default of 32 to zero.
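To be specific, the checks and changes I have in mind look roughly like this (the filesystem name is just an example, not one of our real mounts):
# Check the current mount options - looking for cio in the options column
mount | grep /uvdata
lsfs -q /uvdata
# Remount without CIO if it is enabled (this sets the whole options list)
chfs -a options=rw /uvdata
# Current values of the JFS2 tunables
ioo -a | grep -E 'j2_dynamicBufferPreallocation|j2_nPagesPerWriteBehindCluster'
# Apply the suggested values; -p makes them persistent across reboots
ioo -p -o j2_dynamicBufferPreallocation=64
ioo -p -o j2_nPagesPerWriteBehindCluster=0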
I have some ideas of things I might like to try to see if they make a difference, such as changing the syncd interval in '/sbin/rc.boot' from the default of 60 ("/usr/sbin/syncd 60 > /dev/null 2>&1 &") to 10.
Along with enabling I/O pacing, as this is normally turned off by default: chdev -l sys0 -a maxpout='33' -a minpout='24'. I was also interested in learning whether turning on Asynchronous I/O makes a difference; I know other databases make this recommendation, but I've no idea whether it's something that others have found useful here.
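For reference, these are the sorts of commands I was planning to use for that testing (the values are only the ones I've seen suggested, not recommendations):
# Check the current syncd interval (the number after syncd is the interval in seconds)
grep syncd /sbin/rc.boot
ps -ef | grep syncd
# Check and set the I/O pacing high/low water marks on sys0
lsattr -El sys0 -a maxpout -a minpout
chdev -l sys0 -a maxpout=33 -a minpout=24
# Check whether the AIO subsystem is active and how it is tuned (AIX 6.1+ starts it on demand)
ioo -a | grep aio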
Finally, disk/LUN and fibre configuration. With the change to most systems having high-speed, fibre-attached SSD volumes, the days of adding lots of LUNs to an environment and spreading data across them to get better write speeds seem to be gone. With that, I need to make sure that the disk, fibre and attachment tunings are the best they can be. I've done the usual checks of disk queue depths and fibre command queues, but nothing really screams out as an issue. I have just done some tests with 'num_io_queues' and 'num_sp_cmd_elem' on the fibre adapters, but so far I've seen no real difference.
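The sort of checks I've been running on the disk and adapter side look roughly like this (hdisk2 and fcs0 are placeholders for our devices, and the num_cmd_elems value depends on the adapter):
# Per-disk queue depth and service-queue statistics
lsattr -El hdisk2 -a queue_depth
iostat -D hdisk2 60 3
# Fibre Channel adapter attributes and signs of command-element exhaustion
lsattr -El fcs0
fcstat fcs0 | grep -i 'No Command Resource'
chdev -l fcs0 -a num_cmd_elems=1024 -P   # -P defers the change until reconfig/reboot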
I am unfamiliar with the sort of I/O that UniVerse generates, as my background is in DB2 and Oracle, but I understand it's mostly 4K reads/writes. Is this true?
Thanks very much.
------------------------------
Daniel Martin-Corben
Rocket Forum Shared Account
------------------------------
Hello Daniel!
We have a similar environment - UniVerse 11.3.4 on AIX 7.2, on IBM Power9 servers. We use an external, non-IBM SAN for our data, but have the OS and UniVerse installed on local disk. We don't use VIOS, as our "policy" is to not share Ethernet or Fibre Channel connections between LPARs (this may change in the future, but it requires some R&D and POC testing before we dive into that).
From what I can remember, it looks like you have most of what we ran into covered, with one exception. You noted that the days of increased performance from multiple spindles are gone. I will disagree with you there. We performed a number of tests and discovered that having multiple LUNs per filesystem increased performance measurably (though, unfortunately, I do not seem to be able to find the data at the moment). We have our SAN team carve the disk up into LUNs of about 20-25 GB for well-used filesystems, and 50-100 GB for archive or less-active filesystems. For example, one of our primary filesystems is 750 GB in size, so the SAN team presents 30 x 25 GB LUNs to the AIX server for that filesystem.
One other item I don't see mentioned is the JFS2 logs. For active filesystems, set them up with external JFS2 logs - don't have them inline.
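If it helps, the general shape of what we do is something like the following - the volume group, LV and filesystem names are only illustrative, not our exact setup:
# Create a dedicated JFS2 log LV in the data volume group and format it
mklv -t jfs2log -y uvloglv uvdatavg 1
logform /dev/uvloglv
# Switch the filesystem from its current log to the external one (unmount it first)
chfs -a log=/dev/uvloglv /uvdata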
I'll go back through my notes and see if there was anything else we encountered. Let me know if you have any additional questions.
Brian
Thanks, is there a reason that you go for external filesystem logs rather than inline?
I did wonder about the number of disk LUNs. I have done some testing on a standalone AIX server running 4K sequential writes. There was some improvement for the writes over the reads, which only seemed worthwhile for a write-heavy system. I was thinking that the increased number of disk paths would make an improvement, but it didn't seem to be a massive amount. This was a relatively simple 10-minute test, so in a real-world workload it's quite possible I'll see something very different.
The main reason for the tests is to give the team examples of the performance improvement before I look to push it out.
If there are some JFS2 improvements, I would be keen to try those.
We have a mix of VIO-attached and bare-metal systems. I prefer VIO-managed systems in case I need to move things about.
------------------------------
Daniel Martin-Corben
Rocket Forum Shared Account
------------------------------
Daniel,
For the external vs. inline filesystem logs, it's a performance thing. A single UniVerse database write can generate multiple disk-level writes, which then get multiplied when you add in the AIX filesystem log. Having the log external allows multiple paths and decreases the chances of a bottleneck.
Most of these performance improvements are relative to the system you're running and the number of users. On a small system with a small number of users, they probably wouldn't be noticeable. On a multi-functional system (like a full ERP application) with hundreds of users (or more), the little improvements can add up. Plus, it's nice to 'know' whether you can make it better or not, rather than just thinking you can or can't.
Brian
UV database files are just files as far as AIX is concerned. There is no support for AIO.
So anything that helps cache files in memory and maximises write performance is a help.
We used to run UV 11.1 and 11.2 on AIX 7.1 (we now use RHEL instead).
We made a number of attempts at engaging IBM to understand how to tune AIX to improve UV performance.
Digging out my notes uncovered the following tunings as a result of those efforts.
These might help, but they are from AIX 7.1, so assess them against your current settings and their applicability to your current version.
Buffers
lvmo -v uvvg -o pv_pbuf_count=4096
ioo -p -o j2_dynamicBufferPreallocation=128
JFS2 Sequential Write-behind algorithm & Sync Control
ioo -p -o j2_nPagesPerWriteBehindCluster=0 -o j2_syncPageCount=256 -o j2_syncPageLimit=40960
JFS2 Filesystem Logging & mount option 'noatime'
Mount UVTEMP and UV data filesystems with 'noatime' option to avoid WRITING to the filesystem each time you READ a database file.
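For example (the filesystem names will differ on your systems, and chfs sets the whole options list, so include any existing options):
chfs -a options=rw,noatime /uvtmp
chfs -a options=rw,noatime /uvdata
# Remount (or reboot) for the new option to take effect
umount /uvdata && mount /uvdata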
Storage Protection Keys
/usr/sbin/skctl -V -k'off'
Memory Tracing
raso -r -o mtrc_enabled=0
Component Tracing
ctctrl -P memtraceoff
------------------------------
Gregor Scott
Software Architect
Pentana Solutions Pty Ltd
Mount Waverley VIC AU
------------------------------
This is a great help - I have a server that I'm experimenting on. So thanks!
------------------------------
Daniel Martin-Corben
Rocket Forum Shared Account
------------------------------
Thanks for the explanation. After you confirmed my thoughts about the disk layout, I pushed my test runs out and I'm slowly seeing a more noticeable improvement. That, along with some changes to the way JFS2 handles things, is improving performance further.
------------------------------
Daniel Martin-Corben
Rocket Forum Shared Account
------------------------------
There are some workloads that j2_nPagesPerWriteBehindCluster impacts. The mechanism flushes dirty pages in the previous (next lower address) cluster to disk. Once those pages are on the write queue, you cannot access them until they complete the flush and are marked clean.
An example workload repeatedly writes records into adjacent groups. The demonstration uses a type 2 file and repeatedly writes records n and n+1, such as the record ID pair 127 and 128. When record n is at the end of one cluster and n+1 is in the next cluster, writing record n+1 forces the flush of the page containing record n, and you have to wait for it to become free again. Writing the pair of records that crosses the cluster boundary can take significantly longer elapsed time than other record pairs in the file. In addition, the forced writes defeat the benefit of writing only into the cache, so physical writes can increase drastically. Measurements also show additional CPU charged to the process because of the physical disk writes.
With this example illustrating the mechanism, where do we see this happen in an application?
- A file containing control records can exceed one cluster. Control records can contain next-available numbers, last-updated time stamps, and other frequently updated values. The j2_nPagesPerWriteBehindCluster setting can affect primary groups in the file, as well as blocks in overflow chains.
- UVTEMP contains sorting and selecting work files. These temporary files typically have a short lifespan, less than the default filesystem sync time of 60 seconds. (The ps command displays the sync daemon cycle time as part of the executed command.) Disabling j2_nPagesPerWriteBehindCluster can avoid sending those temporary blocks to disk entirely. Short-duration EXECUTE CAPTURING files seldom grow to the default 128K cluster size, so the pain incurred from this workload relates to creating and deleting inodes for the capture file.
- Index records consist of main-file IDs for an indexed value, separated by field marks. A chain of 8K blocks comprising the record can cross cluster boundaries. While not necessarily in file address sequence, a write can delay the next rewrite because dirty blocks are actively being flushed. I witnessed one nightly process writing daily activity records into a summary file. The index built on date contained a multiple-megabyte list of IDs when complete, and each addition of a main-file ID rewrote the index record, one ID longer for each record containing "today". The approximately 2.5-hour run each night dropped to under 40 minutes once the dirty blocks were only flushed at the sync daemon cycle time instead of on each write. Combined with other nightly processes interacting with j2_nPagesPerWriteBehindCluster, the physical writes became almost negligible.
- Oversized records in dynamic and static hashed files are composed of group-size blocks. An oversized write first adds the blocks to the free list, then allocates from the free list (and possibly more at the end of the overflow area) to rewrite the record. Touching those blocks more than once can affect adjacent clusters and aggravate j2_nPagesPerWriteBehindCluster dirty-page flushing.
These are examples of the workloads affected by j2_nPagesPerWriteBehindCluster.
Maybe I need to resume the Hitchhiker's Guide to the UniVerse series?
------------------------------
Mark A Baldridge
Principal Consultant
Thought Mirror
Nacogdoches, Texas United States
------------------------------
Has anyone done any testing with striping the LV? In theory it should allow multiple LUNs to be accessed in parallel, which should also help improve performance on the system.
------------------------------
Daniel Martin-Corben
Rocket Forum Shared Account
------------------------------
Daniel,
Yes, striping is a good way to gain performance and redundancy from disks. In fact, RAID 10 (striping plus mirroring) provides the extra advantage of making backups easier.
This International Spectrum article, "Wait! Backup!" by Kevin King, does a great job of explaining it.
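As a rough illustration only - the volume group, LV names, stripe size and LUN count below are placeholders, so test against your own workload:
# Striped LV across four LUNs with a 64K strip size, then a JFS2 filesystem on top
mklv -t jfs2 -y uvstripelv -S 64K uvdatavg 256 hdisk4 hdisk5 hdisk6 hdisk7
crfs -v jfs2 -d uvstripelv -m /uvdata01 -A yes
# Alternative: spread (rather than stripe) the LV's partitions across all disks in the VG
mklv -t jfs2 -y uvspreadlv -e x uvdatavg 256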
------------------------------
Mike Rajkowski
MultiValue Product Evangelist
Rocket Internal - All Brands
US
------------------------------
It's been a while since I've dipped into JFS2 - and there used to be some useful papers on optimising for random access on various IBM web pages, sadly long gone, at least from the original locations. As Mark commented, j2_nPagesPerWriteBehindCluster can have a major effect and, if set inappropriately, can effectively pause physical disk access while buffers are flushed at intervals. While on heavy batch loads a delay of 30 seconds every hour or so may be more efficient for overall workload throughput, for an interactive workload a 30-second lockup is far more significant than saving on the total number of disk IOPS in a day. It's also worth noting that when setting the value with the ioo command, if you wish the change to persist past the next reboot you need to say so explicitly to make the change permanent.
As a generalisation which seems to work, setting j2_nPagesPerWriteBehindCluster to zero seems to be the best option, as does putting the log device on a separate fast disk in the Volume Group.
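To be explicit about the ioo persistence point, a quick sketch using the tunable already discussed:
# Show the current, default and reboot values for the tunable
ioo -L j2_nPagesPerWriteBehindCluster
# -o on its own changes only the running value; add -p to update the reboot value as well
ioo -p -o j2_nPagesPerWriteBehindCluster=0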
I also recall a limit somewhere - I think it was on the number of write channels per logical volume or volume group (but please cross-check me on this). Where a very large number of random-access (i.e. hashed) files are being updated concurrently, it can be better to have multiple volumes than to adopt a one-big-volume approach.
I spent quite a lot of time working with a customer on tuning and optimising their JFS2 filesystems some years ago, down to the level of buffer sizes, policies, queues and load management, but unfortunately there is no general 'one size fits all' answer - and you can reach the point of diminishing returns fairly quickly.
Hoping this helps somewhat; unfortunately I no longer have the original IBM documents or the site-specific report I wrote at the time.
JJ
------------------------------
John Jenkins
Thame, Oxfordshire
------------------------------
Some notes on queue depth:
Queue Depth
Average queue measures the outstanding requests to a storage device. Because UniVerse is a multiple-process environment rather than a multiple-thread environment, each UniVerse process generates one I/O request at a time. Reads obviously must wait for the returning data; writes, however, go to the disk cache and are subsequently flushed to the disk in larger batches.
Increasing the queue depth parameter permits more concurrent requests to the storage. The underlying storage can then optimize the requests by minimizing head motion and rotational delays.
The value of queue depth was initially configured to 64 on each of four paths. For some reason, the value is currently only 8.
Attempt to get the average queue, avque as reported by sar -d, reasonably close to 0. For a period of time, disks 40 and 44 reported values of 100. A daily average shows that the queueing for disks 44, 47, 58, 59, 60, and 61 could improve significantly:
10:00:01 device %busy avque r+w/s blks/s avwait avserv
Average disk44 14.69 1.79 84 4268 1.87 4.40
Average disk45 0.00 0.50 0 0 0.00 0.92
Average disk46 0.00 0.50 0 0 0.00 0.65
Average disk47 7.73 8.95 101 1257 3.72 3.69
Average disk48 0.00 0.50 0 0 0.00 0.76
Average disk49 0.00 0.50 0 0 0.00 0.78
Average disk50 0.05 0.50 0 0 0.00 1.92
Average disk51 0.23 1.69 2 13 1.41 4.01
Average disk52 0.00 0.50 0 0 0.00 0.72
Average disk53 0.00 0.50 0 0 0.00 0.82
Average disk54 0.86 0.95 10 173 0.50 2.32
Average disk55 0.00 0.50 0 0 0.00 0.95
Average disk56 0.01 0.50 0 0 0.00 1.98
Average disk57 0.00 0.50 0 0 0.00 0.90
Average disk58 4.97 1.49 61 766 1.03 3.13
Average disk59 1.60 15.35 20 330 4.62 2.73
Average disk60 0.86 30.09 12 59 10.59 3.95
Average disk61 14.87 2.34 116 4895 2.10 3.82
Average disk66 0.00 0.50 0 0 0.00 1.01
Average disk30 0.01 0.50 0 0 0.00 1.59
Estimating the optimal setting for queue depth uses the following formula:
Queue depth = 256 / maximum number of LUNs
(For example, 256 / 12 LUNs ≈ 21.)
For an individual device, set the queue depth with the scsictl command:
scsictl -m queue_depth=21 /dev/rdsk/$dsksf
Use SAM to set the kernel parameter scsi_max_qdepth and make it global for all disks.
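For the AIX systems discussed earlier in this thread, the equivalent per-disk check and change would be roughly the following (hdisk4 and the value 21 are placeholders):
# Allowed range and current value of queue_depth for a disk
lsattr -R -l hdisk4 -a queue_depth
lsattr -El hdisk4 -a queue_depth
# Change it; -P defers the change until the disk is next reconfigured or the system reboots
chdev -l hdisk4 -a queue_depth=21 -P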