NetBackup™ Backup Planning and Performance Tuning Guide
- NetBackup capacity planning
- Primary server configuration guidelines
- Media server configuration guidelines
- NetBackup hardware design and tuning considerations
- About NetBackup Media Server Deduplication (MSDP)
- MSDP tuning considerations
- MSDP sizing considerations
- Accelerator performance considerations
- Media configuration guidelines
- How to identify performance bottlenecks
- Best practices
- Best practices: NetBackup AdvancedDisk
- Best practices: NetBackup tape drive cleaning
- Best practices: Universal shares
- NetBackup for VMware sizing and best practices
- Best practices: Storage lifecycle policies (SLPs)
- Measuring Performance
- Table of NetBackup All Log Entries report
- Evaluating system components
- Tuning the NetBackup data transfer path
- NetBackup network performance in the data transfer path
- NetBackup server performance in the data transfer path
- About shared memory (number and size of data buffers)
- About the communication between NetBackup client and media server
- Effect of fragment size on NetBackup restores
- Other NetBackup restore performance issues
- Tuning other NetBackup components
- How to improve NetBackup resource allocation
- How to improve FlashBackup performance
- Tuning disk I/O performance
Methods for managing the catalog size
To manage the catalog size, consider the following:
- Why are long-running catalog backups an issue?
- Leverage differential/incremental backups
- Enable catalog compression
In general, large catalogs are the result of long-term retention (LTR) requirements or data sets with large numbers of files. (Typically, NAS filers can have millions of files.) The combination of these two situations can increase catalog size requirements significantly. In very large environments, preplanning and creating multiple NetBackup domains may be an option.
However, defining a domain based on retention is not common practice. Customers with large NetBackup environments often plan their domains around workload type groupings rather than around LTR. Additionally, more backups with LTR requirements are being directed to Access or cloud S3 storage.
NetBackup has no hard limit on catalog size. However, Veritas recommends as a best practice that you keep the catalog size under 4 TB to ensure good catalog backup and recovery performance. Depending on the size of the environment and the length of backup image retention, catalogs may grow in excess of 4 TB. This size is not an issue for NetBackup, but it can result in operational issues for the environment regarding the time it takes to perform catalog backups and recoveries. This directly impacts the ability to recover the environment in the event of a disaster.
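To gauge how close the catalog is to the 4 TB guideline, you can check the disk usage of the catalog directories on the primary server. The following is a minimal sketch that assumes a UNIX/Linux primary server with the default installation path (/usr/openv/netbackup/db); adjust the paths for your installation.

```
# Overall size of the NetBackup catalog (default UNIX/Linux location)
du -sh /usr/openv/netbackup/db

# Size of the image database, usually the largest part of the catalog,
# broken down per client to show which clients contribute the most
du -sh /usr/openv/netbackup/db/images/*
```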
Several methods can be implemented to help mitigate this issue:
- Move the catalog to flash storage.
- Implement incremental or differential schedules for daily catalog backups. This approach can reduce the time required to perform these backups. However, it can negatively affect catalog recovery times, and regular full backups are still recommended.
- Implement database compression. This method shrinks the size of the catalog and improves the performance of catalog backups.
- Implement catalog archiving. This method shrinks the size of the active catalog; however, it can increase the time required to perform restores from archived images.
- Create a separate NetBackup domain for long-term retention. In many cases, excessive catalog size is a result of long-term retention of backup images.
If catalog backups do not complete within the desired backup window, consider moving the catalog to higher performance storage. This method most directly improves catalog backup and recovery performance.
Daily differential or incremental backups can be used to ensure that regular catalog protection can be completed within the desired window. For more information on catalog schedules, refer to the NetBackup Administrator's Guide, Volume I.
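As an illustration only, a differential-incremental schedule can be added to an existing catalog backup policy from the command line. The policy name (catalog_backup), schedule label, and frequency below are assumptions for this sketch; verify the bpplsched options against the NetBackup Commands Reference Guide before use.

```
# Hypothetical example: add a daily differential-incremental schedule
# to an existing catalog backup policy named "catalog_backup".
#   -st INCR  = differential-incremental schedule type
#   -freq     = frequency in seconds (86400 = once per day)
/usr/openv/netbackup/bin/admincmd/bpplsched catalog_backup \
    -add daily_diff_incr -st INCR -freq 86400
```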
If catalog backups do not complete within the desired backup window, consider the use of catalog archiving.
Catalog archiving reduces the size of online catalog data by relocating the large catalog files to secondary storage. NetBackup administration continues to require regularly scheduled catalog backups, but without the large amount of catalog data, the backups are faster.
For more information on archiving the catalog, refer to the NetBackup Administrator's Guide, Volume I.
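The catalog archiving operation itself is driven by the bpcatlist, bpcatarc, bpcatrm, and bpcatres commands. The sketch below assumes a UNIX/Linux primary server and an example cutoff date; refer to the catalog archiving procedure in the NetBackup Administrator's Guide, Volume I for the full prerequisites, including the catarc policy that the archive operation uses.

```
# List catalog images older than an example cutoff date, archive their
# .f files to secondary storage, then remove the archived .f files
# from the online catalog.
/usr/openv/netbackup/bin/admincmd/bpcatlist -client all -before Jan 1 2023 | bpcatarc | bpcatrm

# Restore archived catalog files before browsing or restoring old images.
/usr/openv/netbackup/bin/admincmd/bpcatlist -client all -before Jan 1 2023 | bpcatres
```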
When the image database portion of the catalog becomes too large for the available disk space, you can do either of the following:
- Compress the image database.
- Move the image database.
For details, refer to "About image catalog compression" and "Moving the image catalog" in the NetBackup Administrator's Guide, Volume I.
Note that NetBackup compresses the image database after each backup session, regardless of whether any backups were successful. The compression happens immediately before the execution of the session_notify script and the backup of the image database. The actual backup session is extended until compression is complete. Compression is CPU-intensive, so make sure that the primary server has enough free CPU cycles to handle it.
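Because compression runs at the end of each backup session, it is worth confirming that the primary server has CPU headroom during that window. A simple, generic way to sample CPU utilization on a UNIX/Linux primary server is shown below; sar is part of the sysstat package, and any monitoring tool you already use serves the same purpose.

```
# Sample overall CPU utilization every 60 seconds, 30 times (~30 minutes),
# during the period when catalog compression normally runs.
sar -u 60 30

# Quick one-off check of load average and the busiest processes.
uptime
top -b -n 1 | head -20
```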
Long-term retention (LTR) of backup data for multiple years can result in very large catalogs. One method to reduce the effect of LTR is to use NetBackup replication and perform LTR in a separate, dedicated domain. This method has the advantage of keeping the catalog for the primary production domain more manageable in size. The catalog in the LTR domain can still become large; however, this is less of an operational concern because rapid recovery of LTR data is generally not required.