NetBackup™ Backup Planning and Performance Tuning Guide
- NetBackup capacity planning
- Primary server configuration guidelines
- Media server configuration guidelines
- NetBackup hardware design and tuning considerations
- About NetBackup Media Server Deduplication (MSDP)
- MSDP tuning considerations
- MSDP sizing considerations
- Accelerator performance considerations
- Media configuration guidelines
- How to identify performance bottlenecks
- Best practices
- Best practices: NetBackup AdvancedDisk
- Best practices: NetBackup tape drive cleaning
- Best practices: Universal shares
- NetBackup for VMware sizing and best practices
- Best practices: Storage lifecycle policies (SLPs)
- Measuring performance
- Table of NetBackup All Log Entries report
- Evaluating system components
- Tuning the NetBackup data transfer path
- NetBackup network performance in the data transfer path
- NetBackup server performance in the data transfer path
- About shared memory (number and size of data buffers)
- About the communication between NetBackup client and media server
- Effect of fragment size on NetBackup restores
- Other NetBackup restore performance issues
- Tuning other NetBackup components
- How to improve NetBackup resource allocation
- How to improve FlashBackup performance
- Tuning disk I/O performance
About performance hierarchy level 2
Level 2 contains the connectivity options that enable communication with the external storage and clients. This level includes Ethernet network interface cards (NICs), Fibre Channel (FC) host bus adapters (HBAs), Serial Attached SCSI (SAS) RAID controllers, and SAS HBAs.

Typical server-level platforms use 8-lane PCIe slots for connecting HBAs and NICs. Some systems provide two 16-lane PCIe slots, which double the performance of an individual slot but limit the total number of slots available for peripherals. Experience has shown that 8-lane slots, either PCIe 3.0 at 7.877 GB/s or PCIe 4.0 at 15.754 GB/s, offer the best balance of cost and performance.

For storage attached through redundant controllers, the best solution on a dual-processor system is to route two SAS or Fibre Channel HBAs from each of the processors in the dual-CPU compute node. The complementary best practice is to route the Fibre Channel or SAS connections from the two ports on each HBA to each of the controllers, so that every HBA has a path to both controllers. See below for a diagram of this attachment.
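The per-slot bandwidth figures cited above follow directly from the PCIe line rate, the lane count, and the 128b/130b line-encoding overhead used by PCIe 3.0 and later. The following minimal Python sketch (a hypothetical helper for illustration, not part of NetBackup) reproduces those numbers:

```python
# Hypothetical calculator for usable one-direction PCIe slot throughput.
# Assumes 128b/130b line encoding (PCIe 3.0 and later); generation and
# lane count are illustrative parameters, not NetBackup settings.

PCIE_GT_PER_LANE = {3: 8.0, 4: 16.0}   # raw transfer rate per lane, GT/s
ENCODING_EFFICIENCY = 128 / 130         # 128b/130b encoding overhead

def pcie_throughput_gb_per_s(gen: int, lanes: int) -> float:
    """Usable throughput in GB/s: rate * lanes * encoding, bits -> bytes."""
    raw_gbit = PCIE_GT_PER_LANE[gen] * lanes * ENCODING_EFFICIENCY
    return raw_gbit / 8

for gen in (3, 4):
    print(f"PCIe {gen}.0 x8: {pcie_throughput_gb_per_s(gen, 8):.3f} GB/s")
# PCIe 3.0 x8: 7.877 GB/s
# PCIe 4.0 x8: 15.754 GB/s
```

As the output shows, a 16-lane slot simply doubles these per-slot figures, which is the trade-off between slot count and per-slot bandwidth described above.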