Veritas NetBackup™ Appliance Capacity Planning and Performance Tuning Guide
- About this Guide
- Section I. Capacity planning
- Section II. Best Practices
- Section III. Performance tuning
- Section IV. Quick reference to Capacity planning and Performance tuning
About I/O monitoring and tuning
Table: Sample I/O statistics (collected with iostat -kxdt 5) shows sample output from the iostat command. Under the Device column, the sdxx rows display SCSI device statistics, while the VxVMxx rows display VxVM virtual volume statistics. See the man page of the iostat command for a complete description of each column.
Table: Sample I/O statistics (collected with iostat -kxdt 5)
Device | rrqm/s | wrqm/s | r/s | w/s | rKB/s | wKB/s | avgrq-sz | avgqu-sz | await | svctm | %util |
---|---|---|---|---|---|---|---|---|---|---|---|
sdaw | 0 | 0.4 | 0 | 8.8 | 0 | 3552 | 807.16 | 0.01 | 1.64 | 1.36 | 1.2 |
sdax | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
sdaz | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
sdba | 0 | 0 | 0 | 8 | 0 | 2894 | 723.4 | 0.01 | 1.7 | 1.1 | 0.88 |
sdbb | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
sdbc | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
sdbd | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
sdbe | 0 | 0.2 | 0 | 17 | 0 | 6786 | 798.33 | 0.03 | 1.88 | 1.36 | 2.32 |
sdbf | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
sdbg | 0 | 0.4 | 0.2 | 14.8 | 1.6 | 5468 | 729.32 | 0.12 | 8.11 | 4.53 | 6.8 |
sdbh | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
sdbi | 0 | 0.2 | 0.2 | 8.8 | 1.6 | 3222 | 716.27 | 0.02 | 2.67 | 1.69 | 1.52 |
sdbj | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
sdag | 0 | 0 | 0 | 15.2 | 0 | 6358 | 836.63 | 0.03 | 2.05 | 1.32 | 2 |
VxVM3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
VxVM4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
VxVM5 | 0 | 45 | 0.6 | 17.8 | 4.8 | 12303 | 1337.85 | 0.04 | 2.26 | 1.61 | 2.96 |
VxVM6 | 0 | 53.8 | 0 | 18.4 | 0 | 13502 | 1467.65 | 0.05 | 2.65 | 1.87 | 3.44 |
VxVM7 | 0 | 18 | 0.4 | 8.4 | 3.2 | 5743 | 1305.86 | 0.02 | 2.18 | 1.36 | 1.2 |
VxVM8 | 0 | 38.6 | 0.6 | 16.2 | 4.8 | 11225 | 1336.87 | 0.2 | 11.62 | 7.52 | 12.64 |
VxVM9 | 0 | 60 | 0.2 | 19.2 | 1.6 | 13064 | 1346.96 | 0.04 | 2.23 | 1.65 | 3.2 |
VxVM10 | 0 | 29.8 | 0 | 10.4 | 0 | 7349 | 1413.23 | 0.02 | 2.23 | 1.62 | 1.68 |
VxVM11 | 0 | 26.8 | 0.4 | 11.8 | 3.2 | 6573 | 1077.98 | 0.03 | 2.1 | 1.64 | 2 |
VxVM12 | 0 | 30 | 0.2 | 11.2 | 1.6 | 7440 | 1305.54 | 0.02 | 2.18 | 1.68 | 1.92 |
VxVM13 | 0 | 45 | 0.2 | 15.6 | 1.6 | 11652 | 1475.11 | 0.04 | 2.43 | 1.67 | 2.64 |
VxVM14 | 0 | 45 | 0.2 | 17.4 | 1.6 | 11895 | 1351.86 | 0.04 | 2.05 | 1.45 | 2.56 |
VxVM15 | 0 | 21 | 0.4 | 11.2 | 3.2 | 6814 | 1175.38 | 0.03 | 2.76 | 1.93 | 2.24 |
VxVM16 | 0 | 36 | 0.2 | 17 | 1.6 | 13358 | 1553.44 | 0.05 | 2.84 | 1.77 | 3.04 |
Note that iostat displays the I/O statistics for both SCSI and VxVM devices. Most of the time you only need to analyze the VxVM statistics. In the above table, VxVM5 - VxVM16 are the 12 VxVM volumes that MSDP uses to store backup data. Each volume resides on a 7+2 hardware RAID 6 LUN, and a file system is created on top of each LUN, so each volume corresponds to one mounted file system.
The man page of iostat has a complete description of each column in the iostat output. The following table describes some of the columns:
Column | Description |
---|---|
wrqm/s | Write requests queued to the device that are merged per second. A high number in this column indicates that the I/O pattern is mostly sequential, which gives more opportunity for multiple I/O requests to be merged and sent to the device as a single request. In most cases, a high request-merge rate for reads and writes improves I/O subsystem performance. |
wKB/s | Kilobytes written to the device per second. |
avgrq-sz | Average size (in sectors) of the I/O requests issued to the device. |
avgqu-sz | Average number of I/O requests waiting in the device queue. |
await | Average time in milliseconds for an I/O request to be served. This includes both the time spent in the device queue and the time the device spent servicing the request (svctm). |
svctm | Average service time (in milliseconds) of the I/O requests. |
wKB/s is the number of kilobytes written to the device per second and can be used to estimate the backup throughput. To do so, add the wKB/s values from VxVM5 - VxVM16 together. In a 0% deduplication workload, this sum should be very close to the kilobytes received from the network, that is, the "in" column under bond0 in the Network statistics table. Because the above statistics were captured while running 120 concurrent 98% deduplication backup streams, the summed wKB/s should be close to 2% of the kilobytes received from the network. A simple calculation confirms this. The sum of wKB/s from VxVM5 - VxVM16 is 120,918 KB/s, while in the Network statistics table, the value under the bond0 "in" column is ~4,777,000 KB/s. Dividing the two numbers (120,918 / 4,777,000) gives approximately 2.5%. In other words, at this particular time, only about 2.5% of the backup data received from the network needed to be written to disk, because the rest of the data was a duplicate of existing data and there was no need to write it again.
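As an illustration, the sum can be computed from live iostat output with an awk one-liner like the following sketch. The VxVM5 - VxVM16 device names and the wKB/s column position are assumptions taken from the sample output above; column positions can vary between iostat versions, so verify them on your system first.

```
# Minimal sketch: sum wKB/s (7th column of iostat -kxd output) across the
# MSDP data volumes VxVM5 - VxVM16 to estimate the backup write throughput.
iostat -kxd 5 2 | awk '
    /^Device/ { sum = 0 }                          # reset so only the last interval counts
    $1 ~ /^VxVM([5-9]|1[0-6])$/ { sum += $7 }      # wKB/s column
    END { printf "MSDP write throughput: %.1f KB/s\n", sum }'
```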
In general, a high deduplication ratio backup does not generate much I/O activity, so an I/O bottleneck is unlikely. In the above table, the disk service time (svctm) is mostly below 2 ms, the disk queue (avgqu-sz) shows nearly zero outstanding I/Os, and the disk utilization (%util) is mostly below 5%. All of these statistics indicate that I/O is not the bottleneck. However, the table also shows that for VxVM8, the disk utilization, svctm, and await times are much higher than for the other volumes. The statistics for this particular volume are not a cause for concern, and you can safely ignore them. However, if you want to know which file system the volume maps to, or if performance is bad enough that you want more information about the volume, perform the following steps to find the file system that VxVM8 is mapped to:
1. Identify the device minor number by running the command ls -l /dev/vx/rdsk/nbuapp. The following is sample output from this command. The numbers in column 6 (3, 4, 5, ... 12) are the device minor numbers; prefix each number with VxVM to get VxVM3, VxVM4, ... VxVM12, which are the device names displayed in column 1 of the sample iostat output in Table: Sample I/O statistics (collected with iostat -kxdt 5). The entries in the last column (pdvol, 1pdvol, ... 9pdvol) are the VxVM virtual volume names, which you can use for further drill-down in step 2.
    crw------- 1 root root 199,  3 Sep 28 16:12 pdvol
    crw------- 1 root root 199,  4 Sep 28 16:12 1pdvol
    crw------- 1 root root 199,  5 Sep 28 16:12 2pdvol
    crw------- 1 root root 199,  6 Sep 28 16:12 3pdvol
    crw------- 1 root root 199,  7 Sep 28 16:12 4pdvol
    crw------- 1 root root 199,  8 Sep 28 16:12 5pdvol
    crw------- 1 root root 199,  9 Sep 28 16:12 6pdvol
    crw------- 1 root root 199, 10 Sep 28 16:12 7pdvol
    crw------- 1 root root 199, 11 Sep 28 16:12 8pdvol
    crw------- 1 root root 199, 12 Sep 28 16:12 9pdvol
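If you do this mapping often, a sketch like the following can print it directly. It assumes, based on the sample above, that the iostat device name is always "VxVM" followed by the minor number and that the volumes live under /dev/vx/rdsk/nbuapp.

```
# Minimal sketch: map the VxVMnn device names reported by iostat to the
# VxVM volume names by reading the minor number (column 6) of each
# character device under /dev/vx/rdsk/nbuapp.
ls -l /dev/vx/rdsk/nbuapp | awk '/^c/ { printf "VxVM%s -> %s\n", $6, $NF }'
# Sample output: VxVM3 -> pdvol, VxVM4 -> 1pdvol, ... VxVM12 -> 9pdvol
```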
2. Identify the LUN used for the volume with the command vxprint -ht. The command prints output like the following for each of the pdvols. The volume of interest in our case is VxVM8, which corresponds to 5pdvol. The output shows that 5pdvol resides on device 0_29; prefix nbu_appliance in front of 0_29 to get the full device name (nbu_appliance0_29). This device name is needed in step 4 as a parameter to the vxdisk list command.

    v  5pdvol     -          ENABLED  ACTIVE  82039901904  SELECT  -  fsgen
    pl 5pdvol-01  5pdvol     ENABLED  ACTIVE  82039901904  CONCAT  -  RW
    sd 50002974280F356058357-01  5pdvol-01  50008000011F356058357  0  82039901904  0  appl 0_29  ENA
3. Identify the file system that the volume, 5pdvol, is mounted on with the command df -hT. In the output of this command, you should find the following entry corresponding to 5pdvol. The output shows that 5pdvol is mounted on the mount point /msdp/data/dp1/5pdvol and that the size of the file system is 39 TB.

    Filesystem                 Type  Size  Used  Avail  Use%  Mounted on
    /dev/vx/dsk/nbuapp/5pdvol  vxfs  39T   1.6G  38T    1%    /msdp/data/dp1/5pdvol
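If you already know the volume name, you can query just that file system instead of scanning the full df output. The device path below is taken from this example.

```
# Minimal sketch: show type, size, and mount point for the 5pdvol file
# system only. GNU df resolves a device node argument to the file system
# mounted on that device.
df -hT /dev/vx/dsk/nbuapp/5pdvol
```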
4. Identify the SCSI device names with the command vxdisk list nbu_appliance0_29. At the end of the output from this command, you can find the following:

    sds   state=enabled  type=secondary
    sdav  state=enabled  type=primary
    sddb  state=enabled  type=primary
    sdby  state=enabled  type=secondary
The above data shows that VxVM8 has four paths configured: two active (primary) and two passive (secondary). In the iostat output, you should see that the wKB/s for VxVM8 is roughly the sum of the wKB/s from sdav and sddb. In addition, the wKB/s values from sdav and sddb should be very close to each other. This is due to the load-balancing mechanism provided by the VxVM multipathing device driver, DMP.