NetBackup™ Backup Planning and Performance Tuning Guide
- NetBackup capacity planning
- Primary server configuration guidelines
- Media server configuration guidelines
- NetBackup hardware design and tuning considerations
- About NetBackup Media Server Deduplication (MSDP)
- MSDP tuning considerations
- MSDP sizing considerations
- Accelerator performance considerations
- Media configuration guidelines
- How to identify performance bottlenecks
- Best practices
- Best practices: NetBackup AdvancedDisk
- Best practices: NetBackup tape drive cleaning
- Best practices: Universal shares
- NetBackup for VMware sizing and best practices
- Best practices: Storage lifecycle policies (SLPs)
- Measuring Performance
- Table of NetBackup All Log Entries report
- Evaluating system components
- Tuning the NetBackup data transfer path
- NetBackup network performance in the data transfer path
- NetBackup server performance in the data transfer path
- About shared memory (number and size of data buffers)
- About the communication between NetBackup client and media server
- Effect of fragment size on NetBackup restores
- Other NetBackup restore performance issues
- Tuning other NetBackup components
- How to improve NetBackup resource allocation
- How to improve FlashBackup performance
- Tuning disk I/O performance
How fragment size affects restore of a multiplexed image on tape
bptm positions to the media fragment that contains the first file to be restored. If fast_locate is available, bptm uses it for the positioning; otherwise, bptm uses MTFSF (forward space file mark). The restore cannot use "fine-tune" positioning to reach the block that contains the first file, because of the randomness with which multiplexed images are written. Instead, the restore starts to read, discarding the data that belongs to other backup images in the multiplexed group and saving the data that belongs to the image being restored. If the multiplex setting was high and many images were commingled at backup time, the restore may need to read and discard much more data than it actually restores.
The first file is then restored.
From that point, the logic is the same as for non-multiplexed restores, with one exception: if the current position and the next file position are in the same fragment, the restore cannot use positioning, for the same reason that it cannot use "fine-tune" positioning to reach the first file.
If the next file position is in a subsequent fragment (or on a different media), the restore uses positioning to reach that fragment. The restore does not read all the data in between.
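A small model makes this positioning logic concrete. The following Python sketch is illustrative only; plan_restore and its inputs are hypothetical names, not NetBackup code or APIs:

```python
# A minimal sketch (not NetBackup code) of the positioning logic
# described above. All names are illustrative.

def plan_restore(file_fragments):
    """file_fragments: fragment index of each file to restore, in the
    order the files occur on the media. Returns (action, fragment)
    steps, where 'position' means fast_locate/MTFSF can skip ahead and
    'read' means bptm must read and discard until it reaches the file."""
    steps, current = [], None
    for frag in file_fragments:
        if current is not None and frag == current:
            # Same fragment as the current position: fine-tune
            # positioning is impossible, so read (and discard data
            # belonging to other images in the multiplexed group).
            steps.append(("read", frag))
        else:
            # First file, or a later fragment (or different media):
            # position directly to that fragment's tape mark.
            steps.append(("position", frag))
        current = frag
    return steps

# Two files in fragment 2, then one in fragment 4: the second file in
# fragment 2 is reached by reading; fragment 3 is skipped entirely.
print(plan_restore([2, 2, 4]))
# [('position', 2), ('read', 2), ('position', 4)]
```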
Thus, smaller multiplexed fragments can be advantageous. The optimal fragment size depends on the site's data and situation. For multi-gigabyte images, it may be best to keep fragments to 1 gigabyte or less. The storage unit attribute that limits fragment size is based on the total amount of data in the fragment. It is not based on the total amount of data for any one client.
When multiplexed images are written, each time a client backup stream starts or ends, the result is a new fragment. A new fragment is also created when a checkpoint occurs for a backup that has checkpoint restart enabled. So not all fragments are of the maximum fragment size. End-of-media (EOM) also causes new fragment(s).
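These boundary rules can be sketched as a simple model. The following Python fragment is illustrative only; the event names and the write_fragments function are assumptions, not NetBackup terms:

```python
# A minimal model (not NetBackup code) of when fragment boundaries
# occur while a multiplexed image is written.

MAX_FRAGMENT = 1 * 2**30  # storage-unit maximum fragment size: 1 GB

def write_fragments(events):
    """events: sequence of ('data', nbytes), ('stream_start',),
    ('stream_end',), ('checkpoint',), or ('eom',).
    Returns the resulting fragment sizes in bytes."""
    fragments, current = [], 0
    for ev in events:
        if ev[0] == "data":
            n = ev[1]
            # The size limit applies to the total multiplexed data in
            # the fragment, not to any one client's share of it.
            while current + n >= MAX_FRAGMENT:
                n -= MAX_FRAGMENT - current
                fragments.append(MAX_FRAGMENT)
                current = 0
            current += n
        else:
            # A stream starting or ending, a checkpoint, or end of
            # media closes the current fragment, so many fragments
            # are smaller than the configured maximum.
            if current:
                fragments.append(current)
                current = 0
    if current:
        fragments.append(current)
    return fragments

GB = 2**30
print([f / GB for f in write_fragments([
    ("stream_start",), ("data", 3 * GB + GB // 2),
    ("checkpoint",), ("data", GB // 2), ("stream_end",),
])])
# [1.0, 1.0, 1.0, 0.5, 0.5]
```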
Some examples may help illustrate when smaller fragments do and do not help restores.
Example 1:
Assume you want to back up four streams to a multiplexed tape. Each stream is a single, 1-GB file. A default maximum fragment size of 1 TB has been specified. The resultant backup image logically looks like the following. 'TM' denotes a tape mark or file mark, which indicates the start of a fragment.
TM <4 GB data> TM
To restore one of the 1-GB files, the restore positions to the TM. It then has to read all 4 GB to get the 1-GB file.
If you set the maximum fragment size to 1 GB, the image looks like the following:
TM <1 GB data> TM <1 GB data> TM <1 GB data> TM <1 GB data> TM
This size does not help: because each 1-GB file's data is interleaved across all four fragments, the restore still has to read all four fragments to pull out the 1 GB of the file being restored.
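A few lines of arithmetic show why. The sketch below assumes the simplest interleaving, with the 1-GB file's blocks spread across the entire multiplexed group, so no fragment can be skipped:

```python
# Illustrative arithmetic for Example 1. Assumption: the 1-GB file's
# blocks are interleaved throughout the whole 4-GB multiplexed image,
# so every fragment contains some of its data.
GB = 2**30
image_size = 4 * GB

for frag_size in (4 * GB, 1 * GB):
    n_fragments = image_size // frag_size
    skippable = 0        # no fragment is free of the target's blocks
    bytes_read = (n_fragments - skippable) * frag_size
    print(f"{frag_size // GB}-GB fragments: read {bytes_read // GB} GB "
          f"to restore a 1-GB file")
# 4-GB fragments: read 4 GB to restore a 1-GB file
# 1-GB fragments: read 4 GB to restore a 1-GB file
```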
Example 2:
This example is the same as Example 1, but assume that each of the four streams backs up 1 GB of /home or C:\. With the maximum fragment size left at the default of 1 TB (and assuming that all streams perform at about the same rate), you again end up with:
TM <4 GB data> TM
To restore /home/file1 or C:\file1 and then /home/file2 or C:\file2 from one of the streams, NetBackup must read as much of the 4 GB as necessary to restore all the data. But if you set the maximum fragment size to 1 GB, the image looks like the following:
TM <1 GB data> TM <1 GB data> TM <1 GB data> TM <1 GB data> TM
In this case, /home/file1 or C:\file1 starts in the second fragment. bptm positions to the second fragment to start the restore of /home/file1 or C:\file1, which saves 1 GB of reading so far. After /home/file1 is done, if /home/file2 or C:\file2 is in the third or fourth fragment, the restore can position to the beginning of that fragment before it starts reading.
These examples illustrate that whether fragmentation benefits a restore depends on the data, on what is being restored, and on where in the image that data lies. In Example 2, reducing the fragment size from 1 GB to 512 MB further increases the chance that the restore can reach the data by positioning (skipping) rather than by reading, when small amounts of an image are restored.
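The benefit can be quantified with a rough model. The sketch below uses hypothetical offsets and counts whole fragments read; it only illustrates how shrinking the fragment size reduces the data read before a small file is reached:

```python
# A rough model (hypothetical offsets, counting whole fragments read)
# of how much data Example 2's restore reads as fragment size shrinks.
GB = 2**30
file_offset = int(1.2 * GB)  # assumed start of the file's data in the image
file_len = 64 * 2**20        # assumed 64 MB of data to restore

for frag_size in (4 * GB, 1 * GB, GB // 2):
    first = file_offset // frag_size   # position here; skip earlier fragments
    last = (file_offset + file_len - 1) // frag_size
    bytes_read = (last - first + 1) * frag_size  # upper bound: whole fragments
    print(f"{frag_size / GB:.1f}-GB fragments: skip {first}, "
          f"read about {bytes_read / GB:.1f} GB")
# 4.0-GB fragments: skip 0, read about 4.0 GB
# 1.0-GB fragments: skip 1, read about 1.0 GB
# 0.5-GB fragments: skip 2, read about 0.5 GB
```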