Veritas NetBackup™ Appliance Capacity Planning and Performance Tuning Guide
- About this Guide
- Section I. Capacity planning
- Analyzing your backup requirements
- Designing your backup system
- Section II. Best Practices
- Section III. Performance tuning
- Section IV. Quick reference to Capacity planning and Performance tuning
Network bonding
Network bonding has been available in the Linux environment for a long time. It is a useful network feature that increases network scalability by allowing multiple TCP streams to be load balanced across several network ports.
The commonly used network bonding modes with NetBackup Appliances are:
802.3ad bonding mode - This mode balances TCP frames across the bonded ports in cooperation with the switch, which must support 802.3ad link aggregation.
balance-alb bonding mode - This mode balances TCP frames across the bonded ports by using the operating system itself for frame balancing, so no special switch support is required.
Both modes are active-active in the sense that all NICs or NIC ports actively participate in load balancing, and they balance both incoming and outgoing data.
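On a Linux system, the active bonding mode and the member interfaces can be confirmed through the bonding driver's /proc interface. The following is a minimal sketch, not an appliance command; it assumes a Linux host with a bond interface named bond0 (the bond name is an assumption, substitute your own):

```python
#!/usr/bin/env python3
"""Minimal sketch: report the bonding mode and member NICs of a Linux bond.

Assumes the bonding driver exposes its state under /proc/net/bonding/<bond>;
the bond name "bond0" is an assumption.
"""
from pathlib import Path

BOND_NAME = "bond0"  # assumed bond interface name


def read_bond_status(bond: str = BOND_NAME) -> dict:
    """Parse /proc/net/bonding/<bond> into a mode string and a slave list."""
    status_file = Path("/proc/net/bonding") / bond
    mode = None
    slaves = []
    for line in status_file.read_text().splitlines():
        if line.startswith("Bonding Mode:"):
            # For example "IEEE 802.3ad Dynamic link aggregation" or
            # "adaptive load balancing" (balance-alb).
            mode = line.split(":", 1)[1].strip()
        elif line.startswith("Slave Interface:"):
            slaves.append(line.split(":", 1)[1].strip())
    return {"mode": mode, "slaves": slaves}


if __name__ == "__main__":
    info = read_bond_status()
    print(f"{BOND_NAME}: mode={info['mode']}, members={', '.join(info['slaves'])}")
```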
The following guidelines can help you improve the performance of your appliance when you use network bonding:
Network bonding must be done across interfaces of the same type; for example, you cannot bond 1 Gb/s and 10 Gb/s NICs together. (A simple check of member link speeds is sketched after this list.)
Network interface cards consume CPU cycles to process data, but this workload is not significant on 1 Gb/s or even 10 Gb/s interfaces. The improved PCI communication and throughput in the NetBackup 52xx appliance hardware has a positive effect on CPU performance. These appliances process data bus interrupts faster, which benefits CPU utilization during high network loads.
When several 10 GbE NICs are bonded together and transfer data at maximum throughput, some CPU utilization results. The exact CPU utilization is hard to quantify because it depends on a number of TCP parameters, such as the MTU and network latency.
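As noted in the guideline on bonding interfaces of the same type, it can be useful to confirm that every member of a bond reports the same link speed. The following is a minimal sketch, not an appliance command; it assumes a Linux host that exposes link speed through /sys/class/net/<iface>/speed and bond membership through /sys/class/net/<bond>/bonding/slaves, and the bond name bond0 is an assumption:

```python
#!/usr/bin/env python3
"""Minimal sketch: verify that all members of a bond report the same link speed.

Assumes per-interface link speed (in Mb/s) is readable from
/sys/class/net/<iface>/speed and bond membership from
/sys/class/net/<bond>/bonding/slaves; "bond0" is an assumed name.
"""
from pathlib import Path

BOND_NAME = "bond0"  # assumed bond interface name


def slave_speeds(bond: str = BOND_NAME) -> dict:
    """Return {member interface: speed in Mb/s} for every member of the bond."""
    sysfs = Path("/sys/class/net")
    slaves = (sysfs / bond / "bonding" / "slaves").read_text().split()
    speeds = {}
    for slave in slaves:
        try:
            speeds[slave] = int((sysfs / slave / "speed").read_text().strip())
        except (OSError, ValueError):
            speeds[slave] = None  # link down or speed not reported
    return speeds


if __name__ == "__main__":
    speeds = slave_speeds()
    print(f"{BOND_NAME} member speeds (Mb/s): {speeds}")
    reported = {s for s in speeds.values() if s is not None}
    if len(reported) > 1:
        print("Warning: mixed link speeds detected; bond members should match.")
```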