NetBackup™ Backup Planning and Performance Tuning Guide
- NetBackup capacity planning
- Primary server configuration guidelines
- Media server configuration guidelines
- NetBackup hardware design and tuning considerations
- About NetBackup Media Server Deduplication (MSDP)
- MSDP tuning considerations
- MSDP sizing considerations
- Accelerator performance considerations
- Media configuration guidelines
- How to identify performance bottlenecks
- Best practices
- Best practices: NetBackup AdvancedDisk
- Best practices: NetBackup tape drive cleaning
- Best practices: Universal shares
- NetBackup for VMware sizing and best practices
- Best practices: Storage lifecycle policies (SLPs)
- Measuring Performance
- Table of NetBackup All Log Entries report
- Evaluating system components
- Tuning the NetBackup data transfer path
- NetBackup network performance in the data transfer path
- NetBackup server performance in the data transfer path
- About shared memory (number and size of data buffers)
- About the communication between NetBackup client and media server
- Effect of fragment size on NetBackup restores
- Other NetBackup restore performance issues
- Tuning other NetBackup components
- How to improve NetBackup resource allocation
- How to improve FlashBackup performance
- Tuning disk I/O performance
About performance hierarchy level 3
The number and speed of the PCIe lanes, as noted in Figure 8-2, are what enable a single system to address a large number of clients and provide large-capacity storage. In 2021, the Intel architecture will migrate to Generation 4 of the PCIe protocol, doubling the transfer rate per PCIe lane from 0.985 GB/s to 1.969 GB/s. Current-generation systems largely use 8-lane Generation 3 slots with a bandwidth of 7.877 GB/s. This bandwidth allows up to 2 ports of 25Gb Ethernet per NIC and up to 2 ports of 32Gb Fibre Channel per HBA on Generation 3 based systems.
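As a rough illustration of why an x8 Generation 3 slot is matched to dual-port cards, the following Python sketch compares the slot bandwidth with the aggregate port throughput of a dual-port 25Gb Ethernet NIC and a dual-port 32Gb Fibre Channel HBA. The per-lane rate comes from the figures above; the ~3.2 GB/s usable rate per 32Gb FC port is an assumption used only for this comparison.

```python
# Approximate usable PCIe throughput per lane, in GB/s
# (rounded to 0.985 GB/s for Gen 3 in the text above).
GEN3_PER_LANE = 0.9846

def slot_bandwidth(per_lane_gbps, lanes=8):
    """Aggregate one-direction bandwidth of a PCIe slot, in GB/s."""
    return lanes * per_lane_gbps

# Dual-port 25Gb Ethernet NIC: 25 Gb/s per port = 3.125 GB/s per port.
dual_25gbe = 2 * 25 / 8            # 6.25 GB/s
# Dual-port 32Gb Fibre Channel HBA: assumed ~3.2 GB/s usable per port.
dual_32gfc = 2 * 3.2               # 6.4 GB/s

x8_gen3 = slot_bandwidth(GEN3_PER_LANE)       # ~7.877 GB/s
print(f"x8 Gen 3 slot: {x8_gen3:.3f} GB/s")
print(f"Dual-port 25GbE NIC fits:   {dual_25gbe <= x8_gen3}")   # True
print(f"Dual-port 32Gb FC HBA fits: {dual_32gfc <= x8_gen3}")   # True
```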
Generation 4 PCIe combines this doubling of per-lane speed with an increase in the number of PCIe lanes per processor. The previous generation of Intel Xeon processors provided 40 lanes, while the new Gen 4 processors provide 64 lanes. AMD processors, with the advent of "Rome", provide 128 Gen 4 lanes. In a dual-processor system, the interconnect between Intel processors uses two or more Ultra Path Interconnect (UPI) links, while the AMD Rome processors dedicate 64 PCIe Gen 4 lanes from each processor to the interconnect. In both dual-processor solutions, the number of available PCIe lanes is 128. Comparing the speed and number of lanes to Gen 3, this is a 3.2x PCIe bandwidth improvement for the dual Intel processor architecture.
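The 3.2x figure follows from the lane counts and per-lane rates quoted above. The sketch below is an illustrative check only, comparing a dual-socket system with 40-lane Gen 3 processors against one with 64-lane Gen 4 processors.

```python
# Per-lane PCIe throughput in GB/s
# (rounded to 0.985 and 1.969 GB/s in the text above).
GEN3_PER_LANE = 0.9846
GEN4_PER_LANE = 1.9692

# Dual-socket Intel: 40 Gen 3 lanes per socket previously,
# 64 Gen 4 lanes per socket on the new processors.
gen3_dual = 2 * 40 * GEN3_PER_LANE   # 80 lanes  -> ~78.8 GB/s
gen4_dual = 2 * 64 * GEN4_PER_LANE   # 128 lanes -> ~252 GB/s

print(f"Dual-socket, 80 Gen 3 lanes:  {gen3_dual:6.1f} GB/s")
print(f"Dual-socket, 128 Gen 4 lanes: {gen4_dual:6.1f} GB/s")
print(f"Improvement: {gen4_dual / gen3_dual:.1f}x")   # ~3.2x
```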
This increase creates a new use case for Ethernet and Fibre Channel port population. With an 8-lane Gen 4 PCIe slot, a NIC can carry 4 ports of 25Gb Ethernet and an HBA can carry 4 ports of 32Gb Fibre Channel. Coupled with the larger number of cores per processor, the number of concurrent backups that each media server can run increases. Because higher port-count NICs and HBAs can be used, the number of PCIe cards in the system can be halved, which reduces cost and improves airflow, maximizing the life of the components. PCIe Gen 5 is in development and will likely become the standard in calendar year 2024. It will again double performance, and quad-port 100Gb Ethernet NICs are expected to become the norm.
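Continuing the same illustrative arithmetic, the sketch below shows that an x8 Gen 4 slot has enough bandwidth for a quad-port 25Gb Ethernet NIC or a quad-port 32Gb Fibre Channel HBA, which is what allows the card count to be halved. The ~3.2 GB/s usable rate per 32Gb FC port is again an assumption made for this comparison.

```python
# Approximate usable PCIe Gen 4 throughput per lane, in GB/s
# (rounded to 1.969 GB/s in the text above).
GEN4_PER_LANE = 1.9692
PORT_25GBE = 25 / 8       # 3.125 GB/s per 25Gb Ethernet port
PORT_32GFC = 3.2          # assumed usable GB/s per 32Gb Fibre Channel port

x8_gen4    = 8 * GEN4_PER_LANE    # ~15.75 GB/s
quad_25gbe = 4 * PORT_25GBE       # 12.5 GB/s
quad_32gfc = 4 * PORT_32GFC       # 12.8 GB/s

print(f"x8 Gen 4 slot: {x8_gen4:.2f} GB/s")
print(f"Quad-port 25GbE NIC fits:   {quad_25gbe <= x8_gen4}")   # True
print(f"Quad-port 32Gb FC HBA fits: {quad_32gfc <= x8_gen4}")   # True
```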