NetBackup™ Backup Planning and Performance Tuning Guide
- NetBackup capacity planning
- Primary server configuration guidelines
- Media server configuration guidelines
- NetBackup hardware design and tuning considerations
- About NetBackup Media Server Deduplication (MSDP)
- MSDP tuning considerations
- MSDP sizing considerations
- Accelerator performance considerations
- Media configuration guidelines
- How to identify performance bottlenecks
- Best practices
- Best practices: NetBackup AdvancedDisk
- Best practices: NetBackup tape drive cleaning
- Best practices: Universal shares
- NetBackup for VMware sizing and best practices
- Best practices: Storage lifecycle policies (SLPs)
- Measuring Performance
- Table of NetBackup All Log Entries report
- Evaluating system components
- Tuning the NetBackup data transfer path
- NetBackup network performance in the data transfer path
- NetBackup server performance in the data transfer path
- About shared memory (number and size of data buffers)
- About the communication between NetBackup client and media server
- Effect of fragment size on NetBackup restores
- Other NetBackup restore performance issues
- Tuning other NetBackup components
- How to improve NetBackup resource allocation
- How to improve FlashBackup performance
- Tuning disk I/O performance
Hardware examples for better NetBackup performance
These examples are not intended as recommendations for your site; they illustrate various hardware factors that can affect NetBackup performance.
A general hardware configuration can have dual 16-gigabit Fibre Channel ports on a single PCIe card.
In such a case, the following is true:
Potential bandwidth is approximately 14 Gb per second (about 1.75 GB per second) per port, or roughly 3.5 GB per second for the dual-port card.
For maximum performance, the card must be plugged into at least an 8-lane PCIe 3.0 slot.
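As a rough slot-sizing check, the following minimal Python sketch compares the card's approximate throughput requirement against the usable bandwidth of common PCIe 3.0 slot widths. The per-lane bandwidth and protocol-efficiency figures are planning assumptions, not measured values.

```python
# Rough PCIe slot sizing check for a dual-port 16Gb Fibre Channel card.
# All figures are approximate planning numbers, not measured values.

FC_PORT_GBYTES_SEC = 1.75      # effective data rate of one 16Gb FC port (GB/s)
PORTS = 2                      # dual-port card
PCIE3_LANE_GBYTES_SEC = 0.985  # raw PCIe 3.0 bandwidth per lane (GB/s)
PCIE_EFFICIENCY = 0.85         # assumed usable fraction after protocol overhead

card_need = FC_PORT_GBYTES_SEC * PORTS   # ~3.5 GB/s for both ports

for lanes in (4, 8, 16):
    usable = PCIE3_LANE_GBYTES_SEC * lanes * PCIE_EFFICIENCY
    verdict = "OK" if usable >= card_need else "potential bottleneck"
    print(f"PCIe 3.0 x{lanes:<2d}: ~{usable:4.1f} GB/s usable vs "
          f"~{card_need:.1f} GB/s needed -> {verdict}")
```

With these assumptions, a 4-lane slot is marginal while an 8-lane slot leaves comfortable headroom, which is why the guidance above calls for at least an 8-lane PCIe 3.0 slot.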
The next example shows a pyramid of bandwidth potentials with aggregation capabilities at some points.
Suppose you have the following hardware:
1x quad-port 1Gb Ethernet NIC
6x 10Gb Ethernet ports
4x 16Gb Fibre Channel ports (2 dual-port HBAs)
1x 12-bay JBOD with 2x 4-lane 12Gb SAS ports, populated with 8 TB 7200 RPM SAS-3 disk drives. Up to 4 JBODs can be used to aggregate storage up to 256 TiB with the 8 TB drive population.
1x dual-socket Intel Scalable server with 6 or more PCIe slots; 64, 256, or 512 GB of DDR4 ECC DRAM; two 16-core processors; and an internal SAS-3 RAID controller with 2 internal and 2 external 4-lane 12Gb SAS ports
In this case, the following is one way to assemble the hardware so that no constraints limit throughput:
The quad-port 1Gb Ethernet NIC can do approximately 400 MB per second of aggregate throughput.
Each 10Gb Ethernet port can do approximately 1 GB per second, for a total of approximately 6 GB per second of aggregated throughput.
Each 16Gb Fibre Channel port can do approximately 1.75 GB per second for a total of 7 GB per second.
The RAID controller and disk JBOD combination can achieve approximately 1.9 GB per second with sequential data on a single JBOD.
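Taken together, these per-component figures can be totaled with a short Python sketch like the one below. The numbers simply mirror the approximate planning figures above (they are not measured values), the full four-JBOD population is assumed, and the end-to-end note assumes data enters over Ethernet and leaves over Fibre Channel or to the JBOD storage.

```python
# Approximate aggregate throughput for the example media server (GB per second).
# These mirror the planning figures above; they are not measured values.

components = {
    "quad 1Gb Ethernet NIC":       4 * 0.1,    # ~400 MB/s aggregate
    "6x 10Gb Ethernet ports":      6 * 1.0,    # ~1 GB/s per port
    "4x 16Gb Fibre Channel ports": 4 * 1.75,   # ~1.75 GB/s per port
    "4x JBOD (sequential data)":   4 * 1.9,    # ~1.9 GB/s per JBOD
}

for name, rate in components.items():
    print(f"{name:30s} ~{rate:4.1f} GB/s")

# Total across the 10Gb Ethernet and Fibre Channel cards.
card_total = (components["6x 10Gb Ethernet ports"]
              + components["4x 16Gb Fibre Channel ports"])

# Assuming data enters over Ethernet and leaves over Fibre Channel or to the
# JBODs, the slowest tier bounds end-to-end throughput.
network_in = (components["quad 1Gb Ethernet NIC"]
              + components["6x 10Gb Ethernet ports"])
fc_out = components["4x 16Gb Fibre Channel ports"]
disk = components["4x JBOD (sequential data)"]

print(f"10GbE + FC card total: ~{card_total:.0f} GB/s")
print(f"End-to-end ceiling:    ~{min(network_in, fc_out, disk):.1f} GB/s")
```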
Note the following:
Each of the 10Gb Ethernet and 16Gb Fibre Channel cards can therefore move approximately 2 to 3.5 GB per second. With five such cards, the total potential is approximately 13 GB per second.
This level of performance is typically not reached in a real backup environment.
External influences, such as competing network traffic, also affect overall performance.
The configuration described above provides a high level of performance, especially if, at a minimum, the "1 GB of RAM per 1 TB of MSDP storage" rule is followed closely.
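For MSDP memory sizing, the minimal sketch below applies that rule of thumb to a few illustrative pool sizes. The pool sizes are assumptions that roughly match the raw JBOD capacities in this example; substitute your own MSDP pool capacity.

```python
# Apply the "1 GB of RAM per 1 TB of MSDP storage" rule of thumb.
# Pool sizes below are illustrative; substitute your own MSDP pool capacity.

RAM_GB_PER_TB_MSDP = 1.0   # minimum rule of thumb used in this guide

def min_ram_gb(msdp_pool_tb: float) -> float:
    """Minimum RAM (GB) suggested for an MSDP pool of the given size (TB)."""
    return msdp_pool_tb * RAM_GB_PER_TB_MSDP

# Roughly 1, 2, or 4 JBODs of 12 x 8 TB drives (raw, before RAID overhead).
for pool_tb in (96, 192, 384):
    print(f"{pool_tb:4d} TB MSDP pool -> at least {min_ram_gb(pool_tb):.0f} GB of RAM")
```

These minimums line up with the 256 GB and 512 GB DRAM options listed for the example server; the 64 GB option would only satisfy the rule for a much smaller pool.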