NetBackup™ Deduplication Guide
- Introducing the NetBackup media server deduplication option
- Quick start
- Planning your deployment
- About MSDP storage and connectivity requirements
- About NetBackup media server deduplication
- About NetBackup Client Direct deduplication
- About MSDP remote office client deduplication
- About MSDP performance
- About MSDP stream handlers
- MSDP deployment best practices
- Provisioning the storage
- Licensing deduplication
- Configuring deduplication
- Configuring the Deduplication Multi-Threaded Agent behavior
- Configuring the MSDP fingerprint cache behavior
- Configuring MSDP fingerprint cache seeding on the storage server
- About MSDP Encryption using NetBackup KMS service
- Configuring a storage server for a Media Server Deduplication Pool
- Configuring a disk pool for deduplication
- Configuring a Media Server Deduplication Pool storage unit
- About MSDP optimized duplication within the same domain
- Configuring MSDP optimized duplication within the same NetBackup domain
- Configuring MSDP replication to a different NetBackup domain
- About NetBackup Auto Image Replication
- Configuring a target for MSDP replication to a remote domain
- Creating a storage lifecycle policy
- Resilient Network properties
- Editing the MSDP pd.conf file
- About protecting the MSDP catalog
- Configuring an MSDP catalog backup
- About NetBackup WORM storage support for immutable and indelible data
- MSDP cloud support
- About MSDP cloud support
- Cloud space reclamation
- About the disaster recovery for cloud LSU
- About Image Sharing using MSDP cloud
- About MSDP cloud immutable (WORM) storage support
- About immutable object support for AWS S3
- About immutable object support for AWS S3 compatible platforms
- About immutable storage support for Azure blob storage
- About immutable storage support for Google Cloud Storage
- S3 Interface for MSDP
- Configuring S3 interface for MSDP on MSDP build-your-own (BYO) server
- Identity and Access Management (IAM) for S3 interface for MSDP
- S3 APIs for S3 interface for MSDP
- Monitoring deduplication activity
- Managing deduplication
- Managing MSDP servers
- Managing NetBackup Deduplication Engine credentials
- Managing Media Server Deduplication Pools
- Changing a Media Server Deduplication Pool properties
- Configuring MSDP data integrity checking behavior
- About MSDP storage rebasing
- Managing MSDP servers
- Recovering MSDP
- Replacing MSDP hosts
- Uninstalling MSDP
- Deduplication architecture
- Configuring and using universal shares
- Using the ingest mode
- Enabling a universal share with object store
- Configuring isolated recovery environment (IRE)
- Using the NetBackup Deduplication Shell
- Managing users from the deduplication shell
- Managing certificates from the deduplication shell
- Managing NetBackup services from the deduplication shell
- Monitoring and troubleshooting NetBackup services from the deduplication shell
- Managing S3 service from the deduplication shell
- Troubleshooting
- About unified logging
- About legacy logging
- Troubleshooting MSDP installation issues
- Troubleshooting MSDP configuration issues
- Troubleshooting MSDP operational issues
- Troubleshooting multi-domain issues
- Appendix A. Migrating to MSDP storage
- Appendix B. Migrating from Cloud Catalyst to MSDP direct cloud tiering
- About direct migration from Cloud Catalyst to MSDP direct cloud tiering
- Appendix C. Encryption Crawler
About Image Sharing using MSDP cloud
Use image sharing to share the images from your on-premises NetBackup server with the NetBackup server running in AWS or Azure. The NetBackup server that runs in the cloud and is configured for image sharing is called the Cloud Recovery Server (CRS). Image sharing also provides the ability to convert backed-up VMs to AWS instances or Azure VHDs in certain scenarios.
MSDP with image sharing is a self-describing storage server. When you configure image sharing, NetBackup stores all the data and metadata that is required to recover the images in the cloud.
Note:
The Cloud Recovery Server version must be the same as or later than the on-premises NetBackup version.
The following table describes the image sharing feature workflow.
Table: Image sharing workflow
Task | Description |
---|---|
Prepare the cloud recovery server. | You must have a virtual machine in your cloud environment and have NetBackup installed on it. |
Configure the NetBackup KMS server. | If KMS encryption is enabled, perform the KMS key transfer tasks that are described later in this topic. |
Configure image sharing on the cloud recovery server. | The NetBackup virtual machine in the cloud that is configured for image sharing is called the cloud recovery server. Configure image sharing as described later in this topic. |
Use image sharing. | After you configure this NetBackup virtual machine for image sharing, you can import the images from your on-premises environment to the cloud and recover them when required. You can also convert VMs to VHDs in Azure or AMIs in AWS. |
Read additional information about image sharing. | See the notes later in this topic. |
Previously, MSDP cloud backed up the deduplicated data to the cloud, but the NetBackup catalog was available only on the on-premises NetBackup server. As a result, the data could not be restored from the cloud without the on-premises NetBackup server.
Image sharing in cloud uploads the NetBackup catalog along with the backup images and lets you restore data from the cloud without the on-premises NetBackup server.
You can launch an all-in-one NetBackup server in the cloud on demand, called the cloud recovery server, and recover the backup images from the cloud.
Image sharing discovers the backup images that are stored in cloud storage through the REST APIs, command line, or Web UI, recovers the NetBackup catalog, and restores the images.
You can use the command line or the NetBackup Web UI, which provide the same functions as the REST APIs.
For the imported Standard, MS Windows, and Universal share backup images, you can instantly access them with NetBackup Instant Access APIs as the exported share is in a read-only mode. For the imported VMware images, you can instantly scan them with the VMware Malware Scan APIs as the exported share is in a read-only mode.
Before you install NetBackup, create an instance based on RHEL 7.3 or later in the cloud. You can also set up a computer based on RHEL 7.3 or later. The recommendation is that the instance has more than 64 GB of memory and 8 CPUs.
Ensure that the HTTPS port 443 is enabled.
Change the host name to the server's FQDN.
In an Azure virtual machine, you must change the internal host name, which is created automatically and cannot be derived from the IP address.
Add the following items in the /etc/hosts file:
"External IP" "Server's FQDN"
"Internal IP" "Server's FQDN"
For a computer, add the following item in the /etc/hosts file:
"IP address" "Server's FQDN"
(Optional) For an instance, change the search domain order in the /etc/resolv.conf file to search external domains before internal domains.
NetBackup should be an all-in-one setup.
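For example, the /etc/hosts entries on a cloud instance might look like the following; the IP addresses and FQDN shown are placeholders for your own values:

```
# Placeholder addresses and FQDN - substitute your instance's values
203.0.113.10    crs.example.com     # external IP, server's FQDN
10.0.1.25       crs.example.com     # internal IP, server's FQDN
```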
Refer to the NetBackup Installation Guide for more information.
You can access NetBackup Web UI to use image sharing. For more information, refer to the Create a Media Server Deduplication Pool (MSDP) storage server for image sharing topic in the NetBackup Web UI Administrator's Guide.
After installing NetBackup, you can run the ims_system_config.py script to configure image sharing.
The path to access the command is: /usr/openv/pdde/pdag/scripts/
Amazon Web Service cloud provider:
ims_system_config.py -t PureDisk -k <AWS_access_key> -s <AWS_secret_access_key> -b <name_S3_bucket> -bs <bucket_sub_name> [-r <bucket_region>] [-p <mount_point>]
If you have configured IAM role in the EC2 instance, use the following command:
ims_system_config.py -t PureDisk -k dummy -s dummy -b <bucket_name> -bs <bucket_sub_name> [-r <bucket_region>] [-p <mount_point>]
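For example, a hypothetical invocation for an S3 bucket in us-east-1; the access keys, bucket names, and mount point shown here are placeholders for your own values:

```
/usr/openv/pdde/pdag/scripts/ims_system_config.py -t PureDisk \
    -k AKIAEXAMPLEKEY -s examplesecretkey \
    -b ims-bucket -bs subtest \
    -r us-east-1 -p /space/msdp
```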
Microsoft Azure cloud provider:
ims_system_config.py -cp 2 -k <key_id> -s <secret_key> -b <container_name> -bs <bucket_sub_name> [-p <_mount_point_>]
Other S3 compatible cloud provider (For example, Hitachi HCP):
If the cloud instance already exists in NetBackup, use the following command:
ims_system_config.py -cp 3 -t PureDisk -k <key_id> -s <secret_key> -b <bucket_name> -bs <bucket_sub_name> -c <Cloud_instance_name> [-p <mount_point>]
Or use the following command:
ims_system_config.py -cp 3 -t PureDisk -k <key_id> -s <secret_key> -b <bucket_name> -pt <cloud_provider_type> -sh <s3_hostname> -sp <s3_http_port> -sps <s3_https_port> -ssl <ssl_usage> [-p <mount_point>]
Example for HCP provider:
ims_system_config.py -cp 3 -t PureDisk -k xxx -s xxx -b emma -bs subtest -pt hitachicp -sh yyy.veritas.com -sp 80 -sps 443 -ssl 0
Description (specify the following options to use the HCP cloud):
-cp 3: Specify that a third-party S3 cloud provider is used.
-pt hitachicp: Specify the cloud provider type as hitachicp (HCP LAN).
-t PureDisk_hitachicp_rawd: Specify the storage server type as PureDisk_hitachicp_rawd.
-sh <s3_hostname>: Specify the HCP storage server host name.
-sp <s3_http_port>: Specify the HCP storage server HTTP port (default is 80).
-sps <s3_https_port>: Specify the HCP storage server HTTPS port (default is 443).
-ssl <ssl_usage>: Specify whether to use SSL (0 - disable SSL; 1 - enable SSL; default is 1). If SSL is disabled, <s3_http_port> is used to connect to <s3_hostname>. Otherwise, <s3_https_port> is used.
You can access NetBackup Web UI to use image sharing. For more information, refer to the Using image sharing from the NetBackup Web UI topic in the NetBackup Web UI Administrator's Guide.
You can use the nbimageshare command to configure image sharing.
Run the nbimageshare command to list and import the virtual machine and standard images and then recover the virtual machines.
The path to access the command is: /usr/openv/netbackup/bin/admincmd/
For more information about the nbimageshare command, refer to the NetBackup Commands Reference Guide.
The following table lists the steps for image sharing and the command options:
Table: Steps for image sharing and the command options
Step | Command |
---|---|
Log on to NetBackup | nbimageshare --login <username> <password> or nbimageshare --login -interact |
List all the backup images that are in the cloud | nbimageshare --listimage Note: In the list of images, the increment schedule type might be differential incremental or cumulative incremental. |
Import the backup images to NetBackup | Import a single image: nbimageshare --singleimport <client> <policy> <backupID> Import multiple images: nbimageshare --batchimport <image_list_file_path> Note: The format of image_list_file_path is the same as the output of "list images". The number of images must be 64 or fewer. You can import an already imported image; this action does not affect the NetBackup image catalog. |
Recover the VM as an AWS EC2 AMI or an Azure VHD | nbimageshare --recovervm <client> <policy> <backupID> |
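The steps above can be chained into a single session. A minimal sketch, assuming the client name, policy name, and backup ID are taken from the --listimage output; the values shown are placeholders:

```
cd /usr/openv/netbackup/bin/admincmd

# Log on interactively to NetBackup on the cloud recovery server
./nbimageshare --login -interact

# List the backup images that are available in cloud storage
./nbimageshare --listimage

# Import one image, then recover the VM as an AMI (AWS) or a VHD (Azure)
./nbimageshare --singleimport client1.example.com vm-policy client1.example.com_1700000000
./nbimageshare --recovervm client1.example.com vm-policy client1.example.com_1700000000
```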
When KMS encryption is enabled, you can share the images in the cloud storage to the cloud recovery server with manual KMS key transfer.
On-premises side:
Storage server: Find the key group name for the given storage server:
Find the location of contentrouter.cfg in /etc/pdregistry.cfg.
The key group name is in contentrouter.cfg under [KMSOptions].
(Example: KMSKeyGroupName=amazon.com:test1)
NetBackup primary server: Export the key group with a passphrase to a file:
/usr/openv/netbackup/bin/admincmd/nbkmsutil -export -key_groups <key-group-name> -path <key file path>
Cloud recovery server (cloud side):
Copy the exported key file to the cloud recovery server.
Configure the KMS server:
/usr/openv/netbackup/bin/nbkms -createemptydb
/usr/openv/netbackup/bin/nbkms
/usr/openv/netbackup/bin/nbkmscmd -discovernbkms -autodiscover
Import the keys to the KMS service:
/usr/openv/netbackup/bin/admincmd/nbkmsutil -import -path <key file path> -preserve_kgname
Configure the cloud recovery server by using the NetBackup Web UI or the ims_system_config.py script.
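The manual key transfer above can be summarized as a single sequence; the key group name and file path shown are placeholders taken from the example in the steps:

```
# On the on-premises primary server: export the key group to a file
/usr/openv/netbackup/bin/admincmd/nbkmsutil -export \
    -key_groups amazon.com:test1 -path /tmp/kms_export.key

# Copy /tmp/kms_export.key to the cloud recovery server, then on that server:
/usr/openv/netbackup/bin/nbkms -createemptydb
/usr/openv/netbackup/bin/nbkms
/usr/openv/netbackup/bin/nbkmscmd -discovernbkms -autodiscover
/usr/openv/netbackup/bin/admincmd/nbkmsutil -import \
    -path /tmp/kms_export.key -preserve_kgname
```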
On-premises KMS key changes:
If the KMS keys for the given group change on the on-premises storage server after the cloud recovery server is set up, you must export the key file from the on-premises KMS server and import that key file on the cloud recovery server.
On-premises NetBackup primary server:
Export the key group with a passphrase to a file:
/usr/openv/netbackup/bin/admincmd/nbkmsutil -export -key_groups <key-group-name> -path <key file path>
Cloud recovery server:
/usr/openv/netbackup/bin/admincmd/nbkmsutil -deletekg -kgname <key-group-name> -force
/usr/openv/netbackup/bin/admincmd/nbkmsutil -import -path <key file path> -preserve_kgname
If an on-premises storage server is configured to use keys from an external KMS server, then make sure that the same KMS server is configured on the cloud recovery server before running ims_system_config.py. To learn more about configuring an external KMS server in NetBackup, refer to the NetBackup Security and Encryption Guide.
Make sure that the external KMS server is reachable from the cloud recovery server on a specific port.
It is recommended that you launch a cloud recovery server in the cloud on demand and do not upgrade it.
Do not use nbdevconfig to modify a cloud LSU or add a new cloud LSU on the image sharing server, as it might cause issues on the image sharing server (cloud recovery server). If KMS encryption is enabled on the on-premises side after the image sharing server is configured, the encrypted images cannot be imported by that image sharing server.
A cloud LSU requires free disk space. When you configure the image sharing server using the ims_system_config.py script, ensure that you have enough disk space in the default mount point or storage, or use the -p parameter of ims_system_config.py to specify a different mount point that meets the free disk space requirement.
After an image is imported on the image sharing server, the image catalog exists on that server. If the image is expired on the on-premises NetBackup domain, then restoring the image on the image sharing server fails even though the image catalog still exists there.
If the image expires on the image sharing server, the image catalog on that server is removed, but the image data in the cloud storage is not removed.
You can restore any image that you import to the image sharing server. Only VM images in AWS and Azure can be recovered, because they can be converted into an EC2 instance in AWS or a VHD in Azure. VM images in other cloud storage cannot be converted and can only be restored. You can recover only the VM images that are full backup images or accelerator-enabled incremental backup images.
Image sharing supports many policy types.
See the NetBackup compatibility lists for the latest information on the supported policy types.
After the image sharing is configured, the storage server is in a read-only mode. Some MSDP commands are not supported.
For information on the VM recovery limitations in AWS, refer to the AWS VM import information in AWS help.
You can configure the maximum number of active jobs when the images are imported to cloud storage.
To do so, add the imageshare.maxActiveJobLimit configuration item to the /usr/openv/var/global/wsl/config/web.conf file. For example: imageshare.maxActiveJobLimit=16.
The default value is 16 and the configurable range is 1 to 100.
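For example, to raise the limit to 32, you could append the item to web.conf; a minimal sketch, assuming the default file location:

```
CONF=/usr/openv/var/global/wsl/config/web.conf
# Add the item only if it is not already present
grep -q '^imageshare.maxActiveJobLimit=' "$CONF" || \
    echo 'imageshare.maxActiveJobLimit=32' >> "$CONF"
```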
If an import request is made and the active job count exceeds the configured limit, the following message is displayed:
"Current active job count exceeded active job count limitation".
The images in cloud storage can be shared. If Amazon Glacier, Deep Archive, or Azure Archive is enabled, you cannot use image sharing.
Regarding the errors about role policy size limitation in AWS:
Errors that occur when the role policy size exceeds the maximum size are an AWS limitation. You can find the following error in a failed restore job:
"error occurred (LimitExceeded) when calling the PutRolePolicy operation: Maximum policy size of 10240 bytes exceeded for role vmimport"
Workaround:
You can change the maximum policy size limit for the vmimport role.
You can list and delete the existing policies using the following commands:
aws iam list-role-policies --role-name vmimport
aws iam delete-role-policy --role-name vmimport --policy-name <bucketname>-vmimport
The recover operation with the AWS provider includes the AWS import process. Therefore, a VMDK image cannot be recovered concurrently in two restore jobs.
In AWS, the image sharing feature can recover the virtual machines that satisfy the Amazon Web Services VM import prerequisites.
For more information about the prerequisites, refer to the following article:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html
If you cannot obtain the administrator password to use an AWS EC2 instance that has a Windows OS, the following error is displayed:
Password is not available. This instance was launched from a custom AMI, or the default password has changed. A password cannot be retrieved for this instance. If you have forgotten your password, you can reset it using the Amazon EC2 configuration service. For more information, see Passwords for a Windows Server Instance.
This error occurs after the instance is launched from an AMI that is converted using image sharing.
For more information, refer to the following articles:
You cannot cancel an import job on the cloud recovery server.
If data optimization was performed on the on-premises image, you might not be able to restore the image that you imported on the cloud recovery server. You can expire this image, import it again on the image sharing server, and then restore the image.
After the backup job, duplication job, or AIR import job completes, you can import the images on a cloud recovery server. The images that are created by a User-Archive job cannot be imported.
If you want to convert a VM image again, you must delete the VHD from Azure blob storage.