Veritas NetBackup™ Appliance Capacity Planning and Performance Tuning Guide
- About this Guide
- Section I. Capacity planning
- Analyzing your backup requirements
- Designing your backup system
- Section II. Best Practices
- Section III. Performance tuning
- Section IV. Quick reference to Capacity planning and Performance tuning
Validating network bandwidth
Test your network bandwidth before deployment to ensure that it does not contain any bottlenecks. Confirming that the network performs well prevents surprises when you later test replications, backups, and restores with appliances. A variety of tools can be used to test network bandwidth.
The nbperfchk command, available on all appliances, measures network and disk read/write speeds as I/O passes through NetBackup and NetBackup appliances. Use it at the pre-deployment stage to measure network speeds before initiating AIR replication between primary server domains, or to measure disk write speeds on appliance storage and check for storage I/O performance problems before demonstrating a backup and restore to the customer. Be sure to complete the appliance's network and storage configuration before running any network or storage performance tests. Nbperfchk can be run from the appliance's Shell Menu: Support->Nbperfchk
To use nbperfchk for a network bandwidth test between two appliances, run nbperfchk as a reader on one appliance and as a writer on the other. For example:
Reader:
symprimary-a.Support> Nbperfchk run
Please enter options: nbperfchk -i tcp::5000 -o null:

Writer:
symprimary-b.Support> Nbperfchk run
Please enter options: nbperfchk -i zero: -o tcp:symprimary-a:5000
In the above example, the symprimary-b appliance sends data to TCP port 5000 on symprimary-a. The commands produce the following output:
Output for the reader:
symprimary-a.Support> Nbperfchk run
Please enter options: nbperfchk -i tcp::5000 -o null:
Statistics log are recorded in nbperfchk_results.log
current rcv buff: 262144, set to 524288
current snd buff: 262144, set to 524288
final receive size 262144, send size 262144
226 MB @ 113.1 MB/sec, 226 MB @ 113.1 MB/sec, at 1369103091
566 MB @ 113.1 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103094
905 MB @ 113.1 MB/sec, 339 MB @ 113.1 MB/sec, at 1369103097
1245 MB @ 113.2 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103100
1585 MB @ 113.2 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103103
1925 MB @ 113.2 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103106
2265 MB @ 113.1 MB/sec, 339 MB @ 113.1 MB/sec, at 1369103109
2604 MB @ 113.1 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103112
2944 MB @ 113.2 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103115
3284 MB @ 113.2 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103118
3624 MB @ 113.2 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103121
3964 MB @ 113.2 MB/sec, 339 MB @ 113.1 MB/sec, at 1369103124
4303 MB @ 113.2 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103127
4643 MB @ 113.2 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103130
4983 MB @ 113.2 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103133
5322 MB @ 113.2 MB/sec, 339 MB @ 113.1 MB/sec, at 1369103136
Output for the writer:
symprimary-b.Support> Nbperfchk run
Please enter options: nbperfchk -i zero: -o tcp:symprimary-a:5000
Statistics log are recorded in nbperfchk_results.log
current rcv buff: 262144, set to 524288
current snd buff: 262144, set to 524288
final receive size 1048576, send size 262144
340 MB @ 113.3 MB/sec, 340 MB @ 113.3 MB/sec, at 1369103067
680 MB @ 113.2 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103070
1020 MB @ 113.2 MB/sec, 340 MB @ 113.2 MB/sec, at 1369103073
1360 MB @ 113.3 MB/sec, 340 MB @ 113.4 MB/sec, at 1369103076
1701 MB @ 113.4 MB/sec, 341 MB @ 113.7 MB/sec, at 1369103079
2040 MB @ 113.3 MB/sec, 339 MB @ 112.9 MB/sec, at 1369103082
2381 MB @ 113.3 MB/sec, 340 MB @ 113.3 MB/sec, at 1369103085
2721 MB @ 113.3 MB/sec, 340 MB @ 113.3 MB/sec, at 1369103088
3060 MB @ 113.2 MB/sec, 339 MB @ 112.9 MB/sec, at 1369103091
3400 MB @ 113.2 MB/sec, 340 MB @ 113.3 MB/sec, at 1369103094
3740 MB @ 113.2 MB/sec, 340 MB @ 113.3 MB/sec, at 1369103097
4080 MB @ 113.2 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103100
4420 MB @ 113.2 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103103
4759 MB @ 113.2 MB/sec, 339 MB @ 113.1 MB/sec, at 1369103106
5099 MB @ 113.2 MB/sec, 340 MB @ 113.2 MB/sec, at 1369103109
output: Connection reset by peer
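The per-interval figures above can be averaged programmatically rather than by eye. The following is a minimal sketch, assuming the statistics lines follow the format shown in the nbperfchk output above; the function name average_throughput is illustrative, not part of any NetBackup tool:

```python
import re

def average_throughput(lines):
    """Average the per-interval MB/sec rates from nbperfchk-style output lines."""
    # Each statistics line looks like:
    #   "226 MB @ 113.1 MB/sec, 226 MB @ 113.1 MB/sec, at 1369103091"
    # The rate immediately before ", at <timestamp>" is the per-interval rate.
    pattern = re.compile(r"@ ([\d.]+) MB/sec, at \d+")
    rates = [float(m.group(1)) for m in map(pattern.search, lines) if m]
    return sum(rates) / len(rates) if rates else 0.0

sample = [
    "226 MB @ 113.1 MB/sec, 226 MB @ 113.1 MB/sec, at 1369103091",
    "566 MB @ 113.1 MB/sec, 339 MB @ 113.2 MB/sec, at 1369103094",
]
print(average_throughput(sample))  # prints the average per-interval rate
```

The same approach works on the nbperfchk_results.log file: read its lines and pass any that match the statistics format.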
In the above example, the average network throughput is 113.2 MB/sec. If that throughput is insufficient for the amount of data being protected, examine the network infrastructure and add bandwidth.
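Whether a measured rate is sufficient depends on how much data must move within the backup window. As a hypothetical sanity check (the 3 TB front-end size and 8-hour window below are illustrative assumptions, not figures from this guide), the required rate can be compared against the measured one:

```python
def required_throughput_mb_per_sec(data_tb, window_hours):
    """Minimum sustained rate needed to move data_tb within window_hours."""
    data_mb = data_tb * 1024 * 1024   # TB -> MB (binary units, as nbperfchk reports MB)
    window_sec = window_hours * 3600  # hours -> seconds
    return data_mb / window_sec

# Illustrative assumptions: 3 TB to protect within an 8-hour backup window.
needed = required_throughput_mb_per_sec(3, 8)
measured = 113.2                      # MB/sec, from the nbperfchk run above
print(f"required {needed:.1f} MB/sec, measured {measured} MB/sec")
print("bandwidth is", "sufficient" if measured >= needed else "insufficient")
```

With these assumed numbers the required rate is roughly 109 MB/sec, so the measured 113.2 MB/sec would be adequate; a larger front-end size or shorter window would call for more bandwidth.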
More Information