Veritas NetBackup™ Deduplication Guide
- Introducing the NetBackup media server deduplication option
- Planning your deployment
- Planning your MSDP deployment
- NetBackup naming conventions
- About MSDP deduplication nodes
- About the NetBackup deduplication destinations
- About MSDP storage capacity
- About MSDP storage and connectivity requirements
- About NetBackup media server deduplication
- About NetBackup Client Direct deduplication
- About MSDP remote office client deduplication
- About the NetBackup Deduplication Engine credentials
- About the network interface for MSDP
- About MSDP port usage
- About MSDP optimized synthetic backups
- About MSDP and SAN Client
- About MSDP optimized duplication and replication
- About MSDP performance
- About MSDP stream handlers
- MSDP deployment best practices
- Use fully qualified domain names
- About scaling MSDP
- Send initial full backups to the storage server
- Increase the number of MSDP jobs gradually
- Introduce MSDP load balancing servers gradually
- Implement MSDP client deduplication gradually
- Use MSDP compression and encryption
- About the optimal number of backup streams for MSDP
- About storage unit groups for MSDP
- About protecting the MSDP data
- Save the MSDP storage server configuration
- Plan for disk write caching
- Provisioning the storage
- Licensing deduplication
- Configuring deduplication
- Configuring MSDP server-side deduplication
- Configuring MSDP client-side deduplication
- About the MSDP Deduplication Multi-Threaded Agent
- Configuring the Deduplication Multi-Threaded Agent behavior
- Configuring deduplication plug-in interaction with the Multi-Threaded Agent
- About MSDP fingerprinting
- About the MSDP fingerprint cache
- Configuring the MSDP fingerprint cache behavior
- About seeding the MSDP fingerprint cache for remote client deduplication
- Configuring MSDP fingerprint cache seeding on the client
- Configuring MSDP fingerprint cache seeding on the storage server
- Enabling 250-TB support for MSDP
- About MSDP Encryption using NetBackup KMS service
- About MSDP Encryption using external KMS server
- Configuring a storage server for a Media Server Deduplication Pool
- Configuring a storage server for a PureDisk Deduplication Pool
- About disk pools for NetBackup deduplication
- Configuring a disk pool for deduplication
- Creating the data directories for 250-TB MSDP support
- Adding volumes to a 250-TB Media Server Deduplication Pool
- Configuring a Media Server Deduplication Pool storage unit
- Configuring client attributes for MSDP client-side deduplication
- Disabling MSDP client-side deduplication for a client
- About MSDP compression
- About MSDP encryption
- MSDP compression and encryption settings matrix
- Configuring encryption for MSDP backups
- Configuring encryption for MSDP optimized duplication and replication
- About the rolling data conversion mechanism for MSDP
- Modes of rolling data conversion
- MSDP encryption behavior and compatibilities
- Configuring optimized synthetic backups for MSDP
- About a separate network path for MSDP duplication and replication
- Configuring a separate network path for MSDP duplication and replication
- About MSDP optimized duplication within the same domain
- Configuring MSDP optimized duplication within the same NetBackup domain
- About MSDP replication to a different domain
- Configuring MSDP replication to a different NetBackup domain
- About NetBackup Auto Image Replication
- About trusted master servers for Auto Image Replication
- About the certificate to be used for adding a trusted master server
- Adding a trusted master server using a NetBackup CA-signed (host ID-based) certificate
- Adding a trusted master server using external CA-signed certificate
- Removing a trusted master server
- Enabling NetBackup clustered master server inter-node authentication
- Configuring NetBackup CA and NetBackup host ID-based certificate for secure communication between the source and the target MSDP storage servers
- Configuring external CA for secure communication between the source MSDP storage server and the target MSDP storage server
- Configuring a target for MSDP replication to a remote domain
- About configuring MSDP optimized duplication and replication bandwidth
- About storage lifecycle policies
- About the storage lifecycle policies required for Auto Image Replication
- Creating a storage lifecycle policy
- About MSDP backup policy configuration
- Creating a backup policy
- Resilient Network properties
- Specifying resilient connections
- Adding an MSDP load balancing server
- About variable-length deduplication on NetBackup clients
- About the MSDP pd.conf configuration file
- Editing the MSDP pd.conf file
- About the MSDP contentrouter.cfg file
- About saving the MSDP storage server configuration
- Saving the MSDP storage server configuration
- Editing an MSDP storage server configuration file
- Setting the MSDP storage server configuration
- About the MSDP host configuration file
- Deleting an MSDP host configuration file
- Resetting the MSDP registry
- About protecting the MSDP catalog
- Changing the MSDP shadow catalog path
- Changing the MSDP shadow catalog schedule
- Changing the number of MSDP catalog shadow copies
- Configuring an MSDP catalog backup
- Updating an MSDP catalog backup policy
- About MSDP FIPS compliance
- Configuring the NetBackup client-side deduplication to support multiple interfaces of MSDP
- About MSDP multi-domain support
- About MSDP multi-domain VLAN Support
- Configuring deduplication to the cloud with NetBackup Cloud Catalyst
- Using NetBackup Cloud Catalyst to upload deduplicated data to the cloud
- Cloud Catalyst requirements and limitations
- Configuring a Linux media server as a Cloud Catalyst storage server
- Configuring a Cloud Catalyst storage server for deduplication to the cloud
- How to configure a NetBackup Cloud Catalyst Appliance
- How to configure a Linux media server as a Cloud Catalyst storage server
- Configuring a Cloud Catalyst storage server as the target for the deduplications from MSDP storage servers
- Certificate validation using Online Certificate Status Protocol (OCSP)
- Managing Cloud Catalyst storage server with IAM Role or CREDS_CAPS credential broker type
- Configuring a storage lifecycle policy for NetBackup Cloud Catalyst
- About the Cloud Catalyst esfs.json configuration file
- About the Cloud Catalyst cache
- Controlling data traffic to the cloud when using Cloud Catalyst
- Configuring source control or target control optimized duplication for Cloud Catalyst
- Configuring a Cloud Catalyst storage server as the source for optimized duplication
- Decommissioning Cloud Catalyst cloud storage
- NetBackup Cloud Catalyst workflow processes
- Disaster recovery for Cloud Catalyst
- About image sharing in cloud using Cloud Catalyst
- MSDP cloud support
- About MSDP cloud support
- Creating a cloud storage unit
- Updating cloud credentials for a cloud LSU
- Updating encryption configurations for a cloud LSU
- Deleting a cloud LSU
- Backup data to cloud by using cloud LSU
- Duplicate data to cloud by using cloud LSU
- Configuring AIR to use cloud LSU
- About backward compatibility support
- About the configuration items in cloud.json, contentrouter.cfg and spa.cfg
- About the tool updates for cloud support
- About the disaster recovery for cloud LSU
- About Image Sharing using MSDP cloud
- Monitoring deduplication activity
- Monitoring the MSDP deduplication and compression rates
- Viewing MSDP job details
- About MSDP storage capacity and usage reporting
- About MSDP container files
- Viewing storage usage within MSDP container files
- Viewing MSDP disk reports
- About monitoring MSDP processes
- Reporting on Auto Image Replication jobs
- Managing deduplication
- Managing MSDP servers
- Viewing MSDP storage servers
- Determining the MSDP storage server state
- Viewing MSDP storage server attributes
- Setting MSDP storage server attributes
- Changing MSDP storage server properties
- Clearing MSDP storage server attributes
- About changing the MSDP storage server name or storage path
- Changing the MSDP storage server name or storage path
- Removing an MSDP load balancing server
- Deleting an MSDP storage server
- Deleting the MSDP storage server configuration
- Managing NetBackup Deduplication Engine credentials
- Managing Media Server Deduplication Pools
- Viewing Media Server Deduplication Pools
- Determining the Media Server Deduplication Pool state
- Changing Media Server Deduplication Pool state
- Viewing Media Server Deduplication Pool attributes
- Setting a Media Server Deduplication Pool attribute
- Changing Media Server Deduplication Pool properties
- Clearing a Media Server Deduplication Pool attribute
- Determining the MSDP disk volume state
- Changing the MSDP disk volume state
- Inventorying a NetBackup disk pool
- Deleting a Media Server Deduplication Pool
- Deleting backup images
- About MSDP queue processing
- Processing the MSDP transaction queue manually
- About MSDP data integrity checking
- Configuring MSDP data integrity checking behavior
- About managing MSDP storage read performance
- About MSDP storage rebasing
- About the MSDP data removal process
- Resizing the MSDP storage partition
- How MSDP restores work
- Configuring MSDP restores directly to a client
- About restoring files at a remote site
- About restoring from a backup at a target master domain
- Specifying the restore server
- Managing MSDP servers
- Recovering MSDP
- Replacing MSDP hosts
- Uninstalling MSDP
- Deduplication architecture
- Troubleshooting
- About unified logging
- About legacy logging
- NetBackup MSDP log files
- Troubleshooting MSDP installation issues
- Troubleshooting MSDP configuration issues
- Troubleshooting MSDP operational issues
- Verify that the MSDP server has sufficient memory
- MSDP backup or duplication job fails
- MSDP client deduplication fails
- MSDP volume state changes to DOWN when volume is unmounted
- MSDP errors, delayed response, hangs
- Cannot delete an MSDP disk pool
- MSDP media open error (83)
- MSDP media write error (84)
- MSDP no images successfully processed (191)
- MSDP storage full conditions
- Troubleshooting MSDP catalog backup
- Storage Platform Web Service (spws) does not start
- Disk volume API or command line option does not work
- Viewing MSDP disk errors and events
- MSDP event codes and messages
- Troubleshooting Cloud Catalyst issues
- Cloud Catalyst logs
- Problems encountered while using the Cloud Storage Server Configuration Wizard
- Disk pool problems
- Problems during cloud storage server configuration
- Status 191: No images were successfully processed
- Media write error (84) due to a full local cache directory
- Troubleshooting restarting ESFS after the Cloud Catalyst storage server is down
- Restarting the vxesfsd process
- Problems restarting vxesfsd
- Unable to create CloudCatalyst with a media server that has a version earlier than 8.2
- Cloud Catalyst troubleshooting tools
- Unable to obtain the administrator password to use an AWS EC2 instance that has a Windows OS
- Troubleshooting multi-domain issues
- Appendix A. Migrating to MSDP storage
- Index
About image sharing in cloud using Cloud Catalyst
Image sharing, which was earlier known as Automated disaster recovery (DR), provides a self-describing storage solution over Cloud Catalyst. With image sharing in cloud, a Cloud Catalyst storage server is self-describing; without image sharing in cloud, it is not.
Image sharing gives you an easy, visual way to manage and provision images in cloud object storage, and in certain scenarios even lets you convert backed-up VMs into AWS instances.
Consider a situation in which Cloud Catalyst backs up the deduplicated data to the cloud, but the NetBackup catalog is available only on the on-premises NetBackup server. In that case, the data cannot be restored from the cloud without the on-premises NetBackup server.
Image sharing in cloud uploads the NetBackup catalog along with the backup images and lets you restore data from the cloud without the on-premises NetBackup server.
You can launch an all-in-one NetBackup server in the cloud on demand, called the cloud recovery host, and recover the backup images from the cloud.
Image sharing discovers the backup images that are stored in AWS S3 through the REST APIs, recovers the NetBackup catalog, and restores the images.
You can use command line options or the NetBackup Web UI, which provide the same functions as the REST APIs.
Before you install NetBackup, create an instance based on RHEL 7.3 or later (up to RHEL 8.0) in AWS, or set up a computer based on RHEL 7.3 or later (up to RHEL 8.0). Veritas recommends that the instance have more than 64 GB of memory and 8 CPUs.
Ensure that HTTPS port 443 is enabled, and change the host name to the server's FQDN.
Add the following entries to the /etc/hosts file:
"External IP" "Server's FQDN"
"Internal IP" "Server's FQDN"
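For example, with a hypothetical external IP of 203.0.113.10, an internal IP of 10.0.0.5, and the FQDN recoveryhost.example.com, the entries would look like this:

```
203.0.113.10 recoveryhost.example.com
10.0.0.5 recoveryhost.example.com
```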
For a computer, add the following entry to the /etc/hosts file:
"IP address" "Server's FQDN"
For an instance in AWS, change the search domain order in the /etc/resolv.conf file to search external domains before internal domains.
NetBackup must be an all-in-one setup.
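As a sketch with hypothetical domains, an /etc/resolv.conf that searches the external domain before the internal EC2 domain might look like this (the domain names and nameserver address are placeholders):

```
search example.com ec2.internal
nameserver 10.0.0.2
```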
Refer to the NetBackup Installation Guide for more information.
After installing NetBackup, you can run the ims_system_config.py script to configure image sharing.
The path to access the command is: /usr/openv/pdde/pdag/scripts/.
Use the following command to run the ims_system_config.py script:
Amazon Web Service cloud provider:
ims_system_config.py -k <AWS_access_key> -s <AWS_secret_access_key> -b <name_S3_bucket>
If you have configured IAM role in the EC2 instance, use the following command:
python /usr/openv/pdde/pdag/scripts/ims_system_config.py -k dummy -s dummy -b <name_S3_bucket>
Microsoft Azure cloud provider:
ims_system_config.py -cp 2 -k <key_id> -s <secret_key> -b <container_name>
Other S3 compatible cloud provider (For example, Hitachi HCP):
If the cloud instance already exists in NetBackup, use the following command:
ims_system_config.py -cp 3 -t PureDisk -k <key_id> -s <secret_key> -b <bucket_name> -bs <bucket_sub_name> -c <Cloud_instance_name> [-p <mount_point>]
Or use the following command:
ims_system_config.py -cp 3 -t PureDisk -k <key_id> -s <secret_key> -b <bucket_name> -pt <cloud_provider_type> -sh <s3_hostname> -sp <s3_http_port> -sps <s3_https_port> -ssl <ssl_usage> [-p <mount_point>]
Example for HCP provider:
ims_system_config.py -cp 3 -t PureDisk -k xxx -s xxx -b emma -bs subtest -pt hitachicp -sh yyy.veritas.com -sp 80 -sps 443 -ssl 0
Description of the options that are specified to use HCP cloud:
-cp 3: Specify third-party S3 cloud provider that is used.
-pt hitachicp: Specify cloud provider type as hitachicp (HCP LAN)
-t PureDisk_hitachicp_rawd: Specify storage server type as PureDisk_hitachicp_rawd
-sh <s3_hostname>: Specify HCP storage server host name
-sp <s3_http_port>: Specify HCP storage server HTTP port (Default is 80)
-sps <s3_https_port>: Specify HCP storage server HTTPS port (Default is 443)
-ssl <ssl_usage>: Specify whether to use SSL (0 - disable SSL; 1 - enable SSL; default is 1). If SSL is disabled, the connection to <s3_hostname> uses <s3_http_port>; otherwise, it uses <s3_https_port>.
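The provider-specific invocations above differ only in their leading options. As an illustration (the key, secret, and bucket values below are hypothetical placeholders, and this helper only builds the command line rather than running it), the choices can be sketched as:

```shell
#!/bin/sh
# Build the ims_system_config.py command line for a given provider.
# "aws" uses -k/-s/-b only; "azure" adds -cp 2; "s3compat" adds -cp 3 -t PureDisk.
# Keys and bucket names passed in are placeholders, not real credentials.
ims_cmd() {
    provider="$1" key="$2" secret="$3" bucket="$4"
    script=/usr/openv/pdde/pdag/scripts/ims_system_config.py
    case "$provider" in
        aws)      echo "$script -k $key -s $secret -b $bucket" ;;
        azure)    echo "$script -cp 2 -k $key -s $secret -b $bucket" ;;
        s3compat) echo "$script -cp 3 -t PureDisk -k $key -s $secret -b $bucket" ;;
        *)        echo "unknown provider: $provider" >&2; return 1 ;;
    esac
}

# Print (not run) the command for a hypothetical Azure container named "backups":
ims_cmd azure key1 secret1 backups
```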
You can access NetBackup Web UI to use image sharing. For more information, refer to the Using image sharing from the NetBackup Web UI topic in the NetBackup Web UI Administrator's Guide.
You can also use the nbimageshare command for image sharing. Run nbimageshare to list and import the virtual machine and standard images, and then recover the virtual machines.
The path to access the command is: /usr/openv/netbackup/bin/admincmd/
For more information about the nbimageshare command, refer to the NetBackup Commands Reference Guide.
The following table lists the steps for image sharing and the command options:
Table: Steps for image sharing and the command options
| Step | Command |
|---|---|
| Log on to NetBackup | nbimageshare --login <username> <password> or nbimageshare --login -interact |
| List all the backup images that are in the cloud | nbimageshare --listimage Note: In the list of images, the increment schedule type might be differential incremental or cumulative incremental. |
| Import the backup images to NetBackup | Import a single image: nbimageshare --singleimport <client> <policy> <backupID> Import multiple images: nbimageshare --batchimport <image_list_file_path> Note: The format of image_list_file_path is the same as the output of "list images". The number of images must be equal to or less than 64. You can import an already imported image; this action does not affect the NetBackup image catalog. |
| Recover the VM as an AWS EC2 instance | nbimageshare --recovervm <client> <policy> <backupID> |
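The batch import in the table above is capped at 64 images per list file. Before calling nbimageshare --batchimport, a quick pre-check of the list file can catch oversized batches; this helper is a sketch, and the file path used in the example comment is a placeholder:

```shell
#!/bin/sh
# Refuse a batch-import list that exceeds the documented 64-image limit.
check_batch_size() {
    count=$(( $(wc -l < "$1") ))   # arithmetic expansion strips wc padding
    if [ "$count" -gt 64 ]; then
        echo "too many images: $count (limit is 64)" >&2
        return 1
    fi
    echo "ok: $count images"
}

# Hypothetical usage before importing:
#   check_batch_size /tmp/images.txt && nbimageshare --batchimport /tmp/images.txt
```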
When KMS encryption is enabled, you can share the images in S3 bucket to the cloud recovery host with manual KMS key transfer.
On-premises side:
Storage server: find the key group name for the given storage server.
Find the location of contentrouter.cfg in /etc/pdregistry.cfg.
The key group name is in contentrouter.cfg under [KMSOptions] (for example, KMSKeyGroupName=amazon.com:test1).
NetBackup master server: export the key group with a passphrase to a file:
/usr/openv/netbackup/bin/admincmd/nbkmsutil -export -key_groups <key-group-name> -path <key file path>
Cloud recovery host (cloud side):
Copy the exported key file to the cloud recovery host.
Configure the KMS server:
/usr/openv/netbackup/bin/nbkms -createemptydb
/usr/openv/netbackup/bin/nbkms
/usr/openv/netbackup/bin/nbkmscmd -discovernbkms -autodiscover
Import the keys to the KMS service:
/usr/openv/netbackup/bin/admincmd/nbkmsutil -import -path <key file path> -preserve_kgname
Configure the cloud recovery host with ims_system_config.py
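Taken together, the manual key transfer above amounts to a sequence like the following. The key group name reuses the earlier example, and the export file path is a placeholder:

```
# On the on-premises master server (a passphrase is prompted):
/usr/openv/netbackup/bin/admincmd/nbkmsutil -export -key_groups amazon.com:test1 -path /tmp/kms_keys.dat

# Copy /tmp/kms_keys.dat to the cloud recovery host, then on that host:
/usr/openv/netbackup/bin/nbkms -createemptydb
/usr/openv/netbackup/bin/nbkms
/usr/openv/netbackup/bin/nbkmscmd -discovernbkms -autodiscover
/usr/openv/netbackup/bin/admincmd/nbkmsutil -import -path /tmp/kms_keys.dat -preserve_kgname
```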
If the KMS keys for the given group change on the on-premises storage server after the cloud recovery host is set up, you must export the key file from the on-premises KMS server and import that key file on the cloud recovery host.
On-premises NetBackup master server: export the key group with a passphrase to a file:
/usr/openv/netbackup/bin/admincmd/nbkmsutil -export -key_groups <key-group-name> -path <key file path>
Cloud recovery host:
/usr/openv/netbackup/bin/admincmd/nbkmsutil -deletekg -kgname <key-group-name> -force
/usr/openv/netbackup/bin/admincmd/nbkmsutil -import -path <key file path> -preserve_kgname
If an on-premises storage server is configured to use keys from external KMS server, then make sure that the same KMS server is configured on the cloud recovery host before running ims_system_config.py. To know more about configuring an external KMS server in NetBackup, refer to NetBackup Security and Encryption Guide.
Make sure that the external KMS server is reachable from the cloud recovery host on a specific port.
Before you run ims_system_config.py to configure the cloud recovery host on RHEL 8, install Python 2 and create a soft link from Python 2 to Python. The ims_system_config.py script uses Python 2.
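The soft-link step can be sketched as follows. On a real RHEL 8 host you would install python2 (for example, with "yum install -y python2") and create the link in /usr/bin; in this sketch a scratch directory stands in for /usr/bin so the pattern can be shown without root access, and the stub "python2" script is purely hypothetical:

```shell
#!/bin/sh
# Illustrate the "soft link from Python 2 to Python" step in a scratch
# directory, using a stub python2 that only reports a version string.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho "Python 2.7.18"\n' > "$bindir/python2"
chmod +x "$bindir/python2"
ln -s "$bindir/python2" "$bindir/python"   # "python" now resolves to python2
"$bindir/python"
```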
After an image is imported, the image catalog exists on the cloud. If the image expires on the on-premises storage, restoring the image on the cloud fails even though the image catalog still exists there.
If the image expires on the cloud storage, the image catalog in the cloud is removed but the image data in the bucket is not removed.
You can restore any image that you import. With the option to recover the VM as an AWS EC2 instance, you can recover only the VM images that are full backup images or accelerator-enabled incremental backup images.
Image sharing supports many policy types in NetBackup 8.2 or later. In these scenarios, the Cloud Catalyst where the images are shared must be a new installation of NetBackup 8.2 or later.
See the NetBackup compatibility lists for the latest information on the supported policy types.
After the image sharing is configured, the storage server is in a read-only mode.
For information on the VM recovery limitations, refer to the AWS VM import information in AWS help.
You can configure the maximum number of active jobs when the images are imported to cloud storage.
Modify the /usr/openv/var/global/wsl/config/web.conf file to add the configuration item imageshare.maxActiveJobLimit. For example: imageshare.maxActiveJobLimit=16.
The default value is 16 and the configurable range is 1 to 100.
If the import request is made and the active job count exceeds the configured limit, the following message is displayed:
"Current active job count exceeded active job count limitation".
The images that are written directly to cloud storage can be shared.
In optimized deduplication or AIR cascading scenarios, only the images in a Cloud Catalyst that has optimized deduplication or an AIR target can be shared.
If Cloud Catalyst is not set up for optimized deduplication or is not an AIR target, you cannot use image sharing. If Amazon Glacier is enabled in Cloud Catalyst, you cannot use image sharing.
To disable image sharing in these scenarios:
Modify the <install_directory>/etc/puredisk/spa.cfg file and add the following configuration item:
EnableIMandTIR=false
Regarding errors about the role policy size limitation:
Errors that occur when the role policy size exceeds the maximum size are an AWS limitation. You can find the following error in a failed restore job:
"error occurred (LimitExceeded) when calling the PutRolePolicy operation: Maximum policy size of 10240 bytes exceeded for role vmimport"
Workaround:
You can change the maximum policy size limit for the vmimport role.
You can list and delete the existing policies using the following commands:
aws iam list-role-policies --role-name vmimport
aws iam delete-role-policy --role-name vmimport --policy-name <bucketname>-vmimport
The recover operation includes the AWS import process. Therefore, a VMDK image cannot be recovered by two restore jobs at the same time.
The image sharing feature can recover the virtual machines that satisfy the Amazon Web Services VM import prerequisites.
For more information about the prerequisites, refer to the following article:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html
If you cannot obtain the administrator password to use an AWS EC2 instance that has a Windows OS, the following error is displayed:
Password is not available. This instance was launched from a custom AMI, or the default password has changed. A password cannot be retrieved for this instance. If you have forgotten your password, you can reset it using the Amazon EC2 configuration service. For more information, see Passwords for a Windows Server Instance.
This error occurs after the instance is launched from an AMI that is converted using image sharing.
For more information, refer to the following articles:
You cannot cancel an import job on the cloud recovery host.
If there is data optimization done on the on-premises image, you might not be able to restore the image that you have imported on the cloud recovery host. You can expire this image, import it again on the image-sharing server, and then restore the image.
After the backup job, duplication job, or AIR import job completes, you can import the images on a cloud recovery host.