NetBackup™ Deployment Guide for Kubernetes Clusters

Last Published:
Product(s): NetBackup & Alta Data Protection (10.4.0.1)
  1. Introduction
    1. About Cloud Scale deployment
    2. About NetBackup Snapshot Manager
    3. About MSDP Scaleout
    4. Required terminology
    5. User roles and permissions
  2. Section I. Configurations
    1. Prerequisites
      1. Preparing the environment for NetBackup installation on Kubernetes cluster
      2. Prerequisites for MSDP Scaleout and Snapshot Manager (AKS/EKS)
      3. Prerequisites for Kubernetes cluster configuration
        1. Config-Checker utility
        2. Data-Migration for AKS
        3. Webhooks validation for EKS
      4. Prerequisites for Cloud Scale configuration
        1. Cluster specific settings
        2. Cloud specific settings
      5. Prerequisites for deploying environment operators
    2. Recommendations and Limitations
      1. Recommendations of NetBackup deployment on Kubernetes cluster
      2. Limitations of NetBackup deployment on Kubernetes cluster
      3. Limitations in MSDP Scaleout
    3. Configurations
      1. Contents of the TAR file
      2. Initial configurations
      3. Configuring the environment.yaml file
      4. Loading docker images
        1. Installing the docker images for NetBackup
        2. Installing the docker images for Snapshot Manager
        3. Installing the docker images and binaries for MSDP Scaleout
      5. Configuring NetBackup IT Analytics for NetBackup deployment
      6. Configuring NetBackup
        1. Primary and media server CR
          1. After installing primary server CR
          2. After installing the media server CR
        2. Elastic media server
    4. Configuration of key parameters in Cloud Scale deployments
      1. Tuning touch files
      2. Setting maximum jobs
      3. Enabling intelligent catalog archiving
      4. Enabling security settings
      5. Configuring email server
      6. Reducing catalog storage management
      7. Configuring zone redundancy
      8. Enabling client-side deduplication capabilities
  3. Section II. Deployment
    1. Deploying operators
      1. Deploying the operators
    2. Deploying Postgres
      1. Deploying Postgres
      2. Enable request logging, update configuration, and copying files from/to PostgreSQL pod
    3. Deploying Cloud Scale
      1. Installing Cloud Scale
    4. Deploying MSDP Scaleout
      1. MSDP Scaleout configuration
        1. Initializing the MSDP operator
        2. Configuring MSDP Scaleout
        3. Configuring the MSDP cloud in MSDP Scaleout
        4. Using MSDP Scaleout as a single storage pool in NetBackup
        5. Using S3 service in MSDP Scaleout
        6. Enabling MSDP S3 service after MSDP Scaleout is deployed
      2. Deploying MSDP Scaleout
    5. Verifying Cloud Scale deployment
      1. Verifying Cloud Scale deployment
  4. Section III. Monitoring and Management
    1. Monitoring NetBackup
      1. Monitoring the application health
      2. Telemetry reporting
      3. About NetBackup operator logs
      4. Monitoring Primary/Media server CRs
      5. Expanding storage volumes
      6. Allocating static PV for Primary and Media pods
        1. Recommendation for media server volume expansion
        2. (AKS-specific) Allocating static PV for Primary and Media pods
        3. (EKS-specific) Allocating static PV for Primary and Media pods
    2. Monitoring Snapshot Manager
      1. Overview
      2. Logs of Snapshot Manager
      3. Configuration parameters
    3. Monitoring MSDP Scaleout
      1. About MSDP Scaleout status and events
      2. Monitoring with Amazon CloudWatch
      3. Monitoring with Azure Container insights
      4. The Kubernetes resources for MSDP Scaleout and MSDP operator
    4. Managing NetBackup
      1. Managing NetBackup deployment using VxUpdate
      2. Updating the Primary/Media server CRs
      3. Migrating the cloud node for primary or media servers
    5. Managing the Load Balancer service
      1. About the Load Balancer service
      2. Notes for Load Balancer service
      3. Opening the ports from the Load Balancer service
    6. Managing PostgreSQL DBaaS
      1. Changing database server password in DBaaS
      2. Updating database certificate in DBaaS
    7. Performing catalog backup and recovery
      1. Backing up a catalog
      2. Restoring a catalog
        1. Primary server corrupted
        2. MSDP-X corrupted
        3. MSDP-X and Primary server corrupted
    8. Managing MSDP Scaleout
      1. Adding MSDP engines
      2. Adding data volumes
      3. Expanding existing data or catalog volumes
        1. Manual storage expansion
      4. MSDP Scaleout scaling recommendations
      5. MSDP Cloud backup and disaster recovery
        1. About the reserved storage space
        2. Cloud LSU disaster recovery
          1. Recovering MSDP S3 IAM configurations from cloud LSU
      6. MSDP multi-domain support
      7. Configuring Auto Image Replication
      8. About MSDP Scaleout logging and troubleshooting
        1. Collecting the logs and the inspection information
  5. Section IV. Maintenance
    1. MSDP Scaleout Maintenance
      1. Pausing the MSDP Scaleout operator for maintenance
      2. Logging in to the pods
      3. Reinstalling MSDP Scaleout operator
      4. Migrating the MSDP Scaleout to another node pool
    2. PostgreSQL DBaaS Maintenance
      1. Configuring maintenance window for PostgreSQL database in AWS
      2. Setting up alarms for PostgreSQL DBaaS instance
    3. Patching mechanism for Primary and Media servers
      1. Overview
      2. Patching of containers
    4. Upgrading
      1. Upgrading Cloud Scale deployment for Postgres using Helm charts
      2. Upgrading NetBackup individual components
        1. Upgrading NetBackup operator
        2. Upgrading NetBackup application
          1. Upgrade NetBackup from previous versions
          2. Procedure to rollback when upgrade of NetBackup fails
        3. Upgrading MSDP Scaleout
        4. Upgrading Snapshot Manager
          1. Post-migration tasks
    5. Cloud Scale Disaster Recovery
      1. Cluster backup
      2. Environment backup
      3. Cluster recovery
      4. Cloud Scale recovery
      5. Environment Disaster Recovery
      6. DBaaS Disaster Recovery
    6. Uninstalling
      1. Uninstalling NetBackup environment and the operators
      2. Uninstalling Postgres using Helm charts
      3. Uninstalling Snapshot Manager from Kubernetes cluster
      4. Uninstalling MSDP Scaleout from Kubernetes cluster
        1. Cleaning up MSDP Scaleout
        2. Cleaning up the MSDP Scaleout operator
    7. Troubleshooting
      1. Troubleshooting AKS and EKS issues
        1. View the list of operator resources
        2. View the list of product resources
        3. View operator logs
        4. View primary logs
        5. Socket connection failure
        6. Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
        7. Resolving the issue where the NetBackup server pod is not scheduled for long time
        8. Resolving an issue where the Storage class does not exist
        9. Resolving an issue where the primary server or media server deployment does not proceed
        10. Resolving an issue of failed probes
        11. Resolving token issues
        12. Resolving an issue related to insufficient storage
        13. Resolving an issue related to invalid nodepool
        14. Resolving a token expiry issue
        15. Resolve an issue related to KMS database
        16. Resolve an issue related to pulling an image from the container registry
        17. Resolving an issue related to recovery of data
        18. Check primary server status
        19. Pod status field shows as pending
        20. Ensure that the container is running the patched image
        21. Getting EEB information from an image, a running container, or persistent data
        22. Resolving the certificate error issue in NetBackup operator pod logs
        23. Pod restart failure due to liveness probe time-out
        24. NetBackup messaging queue broker takes more time to start
        25. Host mapping conflict in NetBackup
        26. Issue with capacity licensing reporting which takes longer time
        27. Local connection is getting treated as insecure connection
        28. Primary pod is in pending state for a long duration
        29. Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
        30. Storage server not supporting Instant Access capability on Web UI after upgrading NetBackup
        31. Taint, Toleration, and Node affinity related issues in cpServer
        32. Operations performed on cpServer in environment.yaml file are not reflected
        33. Elastic media server related issues
        34. Failed to register Snapshot Manager with NetBackup
        35. Post Kubernetes cluster restart, flexsnap-listener pod went into CrashLoopBackoff state or pods were unable to connect to flexsnap-rabbitmq
        36. Post Kubernetes cluster restart, issues observed in case of containerized Postgres deployment
      2. Troubleshooting AKS-specific issues
        1. Data migration unsuccessful even after changing the storage class through the storage yaml file
        2. Host validation failed on the target host
        3. Primary pod goes in non-ready state
      3. Troubleshooting EKS-specific issues
        1. Resolving the primary server connection issue
        2. NetBackup Snapshot Manager deployment on EKS fails
        3. Wrong EFS ID is provided in environment.yaml file
        4. Primary pod is in ContainerCreating state
        5. Webhook displays an error for PV not found
  6. Appendix A. CR template
    1. Secret
    2. MSDP Scaleout CR
      1. MSDP Scaleout CR template for AKS
      2. MSDP Scaleout CR template for EKS

Environment Disaster Recovery

  1. Ensure that the Cloud Scale deployment has been cleaned up in the cluster.

    Perform the following to verify the cleanup process:

    • Ensure that the namespaces associated with the Cloud Scale deployment are deleted by using the following command:

      kubectl get ns

    • Confirm that the storageclass, pv, clusterroles, clusterrolebindings, and CRDs associated with the Cloud Scale deployment are deleted by using the following command:

      kubectl get sc,pv,crd,clusterrolebindings,clusterroles
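
      If any leftovers remain, they can be listed before deletion. A hedged filter, assuming the Cloud Scale resource names contain netbackup, msdp, or veritas (as in the sample file shown in step 5):

      kubectl get crd,clusterrolebindings,clusterroles | grep -iE 'netbackup|msdp|veritas'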

  2. (For EKS) If the deployment is in a different availability zone (AZ), update the subnet name in the environment_backup.yaml file.

    For example, if the earlier subnet name was subnet-az1 and the new subnet is subnet-az2, the environment_backup.yaml file contains a loadBalancerAnnotations section as follows:

    loadBalancerAnnotations:
               service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-az1

    Update the name to the new subnet name as follows:

    loadBalancerAnnotations:
               service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-az2

    Update all IP addresses used for the Primary, MSDP, Media, and Snapshot Manager servers in their respective sections.

    Note:

    Change of FQDN is not supported.

    The following example shows how to change the IP address for the Primary server:

    Old entry in environment_backup.yaml file:

    ipList:
                - ipAddr: 12.123.12.123
                  fqdn: primary.netbackup.com

    Update the above old entry as follows:

    ipList:
                - ipAddr: 34.245.34.234
                  fqdn: primary.netbackup.com

    Similarly, repeat the procedure shown in the example above (for the Primary server) for the MSDP, Media, and Snapshot Manager servers.

  3. Ensure that the IP addresses listed in the ipList entries of the Primary, Media, MSDP, and Snapshot Manager server sections of the environment_backup.yaml file that was saved during backup are free and resolvable. If the deployment is in a different AZ, the FQDN must remain the same but the IP address can change; therefore, ensure that the same FQDNs can resolve to the new IP addresses, as in the check below.
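
    For example, a quick resolution check using the example FQDN from step 2 (substitute your actual FQDNs):

    nslookup primary.netbackup.com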

  4. (For EKS) Update spec > primary > storage > catalog > storageClassName with the new EFS ID that is created for the primary server.
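
    For example, a minimal sketch of the updated section, assuming the new EFS file system ID is fs-0123456789abcdef0 (a hypothetical value):

    spec:
      primary:
        storage:
          catalog:
            storageClassName: fs-0123456789abcdef0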

  5. Search for and delete the following sections from the backed-up copy of the environment_backup.yaml file:

    annotations, creationTimestamp, generation, resourceVersion, uid

    For example:

    Sample environment_backup.yaml file before deleting the above sections:

    apiVersion: netbackup.veritas.com/v2
    kind: Environment
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"netbackup.veritas.com/v2","kind":"Environment","metadata":{"annotations":{},"name":"environment-sample","namespace":"nb-namespace"},"spec":{"configCheckMode":"skip","containerRegistry":"nbuk8sreg.azurecr.io","cpServer":[{"credential":{"secretName":"cp-creds"},"name":"cpserver-1","networkLoadBalancer":{"fqdn":"nbux-10-244-33-78.vxindia.veritas.com","ipAddr":"10.244.33.78"},"nodeSelector":{"controlPlane":{"labelKey":"agentpool","labelValue":"nbuxpool","nodepool":"nbuxpool"},"dataPlane":{"labelKey":"cp-data-pool","labelValue":"cpdata","nodepool":"cpdata"}},"storage":{"data":{"capacity":"30Gi","storageClassName":"managed-csi-hdd"},"log":{"capacity":"5Gi","storageClassName":"azurefile-csi-retain"}},"tag":"10.3-0003"}],"drInfoSecretName":"dr-info-secret","loadBalancerAnnotations":{"service.beta.kubernetes.io/azure-load-balancer-internal-subnet":"LB-RESERVED"},"mediaServers":[{"minimumReplicas":1,"name":"media1","networkLoadBalancer":{"ipList":[{"fqdn":"nbux-10-244-33-75.vxindia.veritas.com","ipAddr":"10.244.33.75"}]},"nodeSelector":{"labelKey":"agentpool","labelValue":"nbuxpool"},"replicas":1,"storage":{"data":{"capacity":"50Gi","storageClassName":"managed-csi-hdd"},"log":{"capacity":"30Gi","storageClassName":"managed-csi-hdd"}}}],"msdpScaleouts":[{"credential":{"secretName":"msdp-secret1"},"ipList":[{"fqdn":"nbux-10-244-33-76.vxindia.veritas.com","ipAddr":"10.244.33.76"}],"kms":{"keyGroup":"example-key-group","keySecret":"example-key-secret"},"loadBalancerAnnotations":{"service.beta.kubernetes.io/azure-load-balancer-internal":"true"},"name":"dedupe1","nodeSelector":{"labelKey":"agentpool","labelValue":"nbuxpool"},"replicas":1,"storage":{"dataVolumes":[{"capacity":"50Gi","storageClassName":"managed-csi-hdd"}],"log":{"capacity":"5Gi","storageClassName":"managed-csi-hdd"}},"tag":"19.0-0003"}],"primary":{"credSecretName":"primary-credential-secret","kmsDBSecret":"kms-secret","networkLoadBalancer":{"ipList":[{"fqdn":"nbux-10-244-33-74.vxindia.veritas.com","ipAddr":"10.244.33.74"}]},"nodeSelector":{"labelKey":"agentpool","labelValue":"nbuxpool"},"storage":{"catalog":{"autoVolumeExpansion":false,"capacity":"100Gi","storageClassName":"azurefile-csi-retain"},"data":{"capacity":"30Gi","storageClassName":"managed-csi-hdd"},"log":{"capacity":"30Gi","storageClassName":"managed-csi-hdd"}}},"tag":"10.3-0003"}}
      creationTimestamp: "2023-08-01T06:40:34Z"
      generation: 1
      name: environment-sample
      namespace: nb-namespace
      resourceVersion: "96785"
      uid: 7bf36bb2-2291-4a58-b72c-0bc85b60385b
    spec:
      configCheckMode: skip
      containerRegistry: nbuk8sreg.azurecr.io
      corePattern: /core/core.%e.%p.%t
    ....

    Sample environment_backup.yaml file after deleting the above sections:

    apiVersion: netbackup.veritas.com/v2
    kind: Environment
    metadata:
      name: environment-sample
      namespace: nb-namespace
    spec:
      configCheckMode: skip
      containerRegistry: nbuk8sreg.azurecr.io
      corePattern: /core/core.%e.%p.%t
    ....
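
    Alternatively, these fields can be removed in one pass. A sketch, assuming the mikefarah yq v4 tool is installed:

    yq -i 'del(.metadata.annotations) | del(.metadata.creationTimestamp) | del(.metadata.generation) | del(.metadata.resourceVersion) | del(.metadata.uid)' environment_backup.yaml
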
  6. Ensure that nodeSelector is present in the environment_backup.yaml file, and that the operators noted down during backup are present in the cluster with the required configurations.

  7. Perform the steps in the following section for deploying DBaaS:

    See DBaaS Disaster Recovery.

  8. Deploy the MSDP, NetBackup, and Snapshot Manager operators by performing the steps mentioned in the following section:

    See Deploying the operators.

  9. Run the following command to verify if the operators are running:

    $ kubectl get all --namespace netbackup-operator-system

    Verify that the STATUS of pod/netbackup-operator and pod/flexsnap-operator shows Running.
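
    Optionally, to list only the pods that have reached the Running phase (a standard kubectl field selector, shown as a convenience):

    kubectl get pods -n netbackup-operator-system --field-selector=status.phase=Running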

  10. Create the namespace that is present in the environment_backup.yaml file:

    kubectl create ns <sample-namespace>

  11. (For 10.4 and above) Deploy the unified container by performing the steps mentioned in the following section:

    See Deploying Postgres.

  12. Create secrets as follows using secret_backup.yaml file that was backed up:

    kubectl apply -f secret_backup.yaml

    Verify all secrets are created using the following command:

    kubectl get secrets -n <sample-namespace>

    Note:

    This step requires the data backed up in step 7 for the secretName (MSDP credential) and drInfoSecretName files.

  13. Create configmaps and internal configmaps as follows:

    kubectl apply -f configmap_backup.yaml

    kubectl apply -f internalconfigmap_backup.yaml

    Verify that all configmaps are created by using the following command:

    kubectl get configmaps -n <sample-namespace>

    Note:

    This step requires the data backed up in step 10 for the emailServerConfigmap file.

  14. (Required only for DBaaS deployment) Snapshot Manager restore steps:

    For AKS

    • Navigate to the snapshot resource created during backup and create a disk under the recovered cluster infra resource group (for example, MC_<clusterRG>_<cluster name>_<cluster_region>).

    • Note down the resource ID of this disk (navigate to the Properties of the disk). It can be obtained from the portal or the Azure CLI, as in the sketch below.

      Format of the resource ID: /subscriptions/<subscription id>/resourceGroups/<MC_<clusterRG>_<cluster name>_<cluster_region>>/providers/Microsoft.Compute/disks/<disk name>
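
      For example, a hedged Azure CLI sketch to fetch the resource ID (all names are placeholders):

      az disk show --resource-group <MC_<clusterRG>_<cluster name>_<cluster_region>> --name <disk name> --query id --output tsv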

    • Create a static PV using the resource ID of the backed-up disk. Copy the below yaml into the pgsql-pv.yaml file, update the PV name, size of the disk, namespace, and storage class name, and apply the yaml:

      pgsql-pv.yaml

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: <pv name>
      spec:
        capacity:
          storage: <size of the disk>
        accessModes:
          - ReadWriteOnce
        persistentVolumeReclaimPolicy: Retain
        storageClassName: <storage class name>
        claimRef:
          name: psql-pvc
          namespace: <environment namespace>
        csi:
          driver: disk.csi.azure.com
          readOnly: false
          volumeHandle: <Resource ID of the Disk>

      Example of pgsql-pv.yaml file:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: psql-pv
      spec:
        capacity:
          storage: 30Gi
        accessModes:
          - ReadWriteOnce
        persistentVolumeReclaimPolicy: Retain
        storageClassName: gp2-immediate
        claimRef:
          name: psql-pvc
          namespace: nbux
        csi:
          driver: disk.csi.azure.com
          readOnly: false
          volumeHandle: /subscriptions/a332d749-22d8-48f6-9027-ff04b314e840/resourceGroups/MC_vibha-vasantraohadule-846288_auto_aks-vibha-vasantraohadule-846288_eastus2/providers/Microsoft.Compute/disks/psql-disk
      

      Create psql-pv using the following command:

      kubectl apply -f <path_to_psql_pv.yaml> -n <environment-namespace>

    • Ensure that the newly created PV is in Available state before restoring the Snapshot Manager server as follows:

      kubectl get pv | grep psql-pvc

      >> psql-pv 30Gi RWO managed-premium-disk Available nbu/psql-pvc 50s

    For EKS

    • Navigate to EC2 > Snapshots in the AWS Console, expand the Actions drop-down, and click Create volume from snapshot for the snapshot taken in backup step 2, in the same availability zone (AZ) as the volume attached to psql-pvc (mentioned in step 1 of the backup steps). See the CLI sketch below.

      Note down the volumeID (for example, vol-xxxxxxxxxxxxxxx).
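
      A hedged AWS CLI equivalent for creating the volume (the snapshot ID and AZ are placeholders; the volume type is an assumption, shown as gp2 to match the sample storage class below):

      aws ec2 create-volume --snapshot-id <snapshot-id> --availability-zone <az-code> --volume-type gp2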

    • If the deployment is in a different availability zone (AZ), you must change the AZ for the volume and update the volumeID accordingly.

    • Create a static PV using the backed-up volumeID. Copy the below yaml into the pgsql-pv.yaml file, update the PV name, size of the disk, namespace, and storage class name, and apply the yaml:

      pgsql-pv.yaml

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: <pv name>
      spec:
        accessModes:
        - ReadWriteOnce
        awsElasticBlockStore:
          fsType: <fs type>
          volumeID: <backed up volumeID>    # prefix with aws://<az-code>/, for example: aws://us-east-2b/
        capacity:
          storage: 30Gi
        claimRef:
          apiVersion: v1
          kind: PersistentVolumeClaim
          name: psql-pvc
          namespace: <netbackup namespace>
        persistentVolumeReclaimPolicy: Retain
        storageClassName: <storage class name>
        volumeMode: Filesystem

      Sample yaml for pgsql-pv.yaml file:

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: psql-pv
      spec:
        accessModes:
        - ReadWriteOnce
        awsElasticBlockStore:
          fsType: ext4
          volumeID: aws://us-east-2b/vol-0d86d2ca38f231ede
        capacity:
          storage: 30Gi
        claimRef:
          apiVersion: v1
          kind: PersistentVolumeClaim
          name: psql-pvc
          namespace: nbu
        persistentVolumeReclaimPolicy: Retain
        storageClassName: gp2-immediate
        volumeMode: Filesystem

      Create psql-pv using the following command:

      kubectl apply -f <path_to_psql_pv.yaml> -n <netbackup-namespace>

    • Ensure that the newly created PV is in Available state before restoring the Snapshot Manager server as follows:

      kubectl get pv | grep psql-pvc

      >>> psql-pv 30Gi RWO gp2-immediate Available nbu/psql-pvc 50s

  15. Perform the following steps to recover the environment:

    • Make a copy of the environment CR yaml file (environment_backup.yaml) with the name environment_backup_copy.yaml and save it for later use.

    • Remove the cpServer section from the original environment_backup.yaml file.

    • Modify the environment by setting the paused: true field in the MSDP and Media sections (see the sketch below). Modify the following and save the file:

      spec > msdpScaleouts > paused to true

      spec > mediaServers > paused to true

      Only the primary server gets deployed in this case. Now apply the modified environment yaml file using the following command:

      kubectl apply -f <environment.yaml file name>
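
      A minimal sketch of the paused fields, assuming paused is set on each list entry (names taken from the sample file in step 5):

      spec:
        mediaServers:
          - name: media1
            paused: true
        msdpScaleouts:
          - name: dedupe1
            paused: true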

    • Once the primary server is up and running:

      • Execute the kubectl exec -it -n <namespace> <primary-pod-name> -- /bin/bash command to exec into the primary pod.

      • Increase the debug log level on the primary server.

      • Create a directory DRPackages at a persisted location using the mkdir /mnt/nbdb/usr/openv/drpackage command and set its permissions to 757.

    • Copy the earlier copied DR files back to the primary pod at the /mnt/nbdb/usr/openv/drpackage location using the following command:

      kubectl cp <Path_of_DRPackages_on_host_machine> <primary-pod-namespace>/<primary-pod-name>:/mnt/nbdb/usr/openv/drpackage
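
      For example, with a hypothetical host path and primary pod name (substitute your own values):

      kubectl cp /root/DRPackages nb-namespace/nb-primary-0:/mnt/nbdb/usr/openv/drpackage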

    • Perform the following steps after executing into the primary server pod:

      • Change the ownership of files in /mnt/nbdb/usr/openv/drpackage using the chown nbsvcusr:nbsvcusr <file-name> command.

      • Deactivate NetBackup health probes using the /opt/veritas/vxapp-manage/nb-health deactivate command.

      • Stop the NetBackup services using /usr/openv/netbackup/bin/bp.kill_all command.

      • Execute the /usr/openv/netbackup/bin/admincmd/nbhostidentity -import -infile /mnt/nbdb/usr/openv/drpackage/.drpkg command.

      • Clear the NetBackup host cache using the bpclntcmd -clear_host_cache command.

      • Start NetBackup services using the /usr/openv/netbackup/bin/bp.start_all command.

      • Refresh the certificate revocation list using the /usr/openv/netbackup/bin/nbcertcmd -getcrl command.

    • Run the primary server reconciler.

      To do this, edit the environment (using the kubectl edit environment -n <namespace> command), change the primary spec's paused field to true, and save it.

      Then, to let the reconciler run, edit the environment again and set the primary's paused field in spec back to false. An equivalent kubectl patch sketch is shown below.

      The SHA fingerprint gets updated in the primary CR's status.
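
      A hedged equivalent using kubectl patch instead of kubectl edit, assuming the Environment CR exposes the field path spec > primary > paused and using the sample names from step 5:

      kubectl patch environment environment-sample -n nb-namespace --type merge -p '{"spec":{"primary":{"paused":true}}}'

      kubectl patch environment environment-sample -n nb-namespace --type merge -p '{"spec":{"primary":{"paused":false}}}'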

    • Allow auto reissue of certificates from the primary for the MSDP, Media, and Snapshot Manager servers from the Web UI.

      In the Web UI, navigate to Security > Host Mappings, click the three dots on the right for the MSDP Storage Server, and check Allow Auto reissue Certificate. Repeat this for the media server and Snapshot Manager server entries.

    • Edit the environment using the kubectl edit environment -n <namespace> command and change the paused field to false for MSDP.

    • Redeploy MSDP Scaleout on a cluster by using the same CR parameters and NetBackup re-issue token.

    • If the LSU cloud alias does not exist, you can use the following command to add it.

      /usr/openv/netbackup/bin/admincmd/csconfig cldinstance -as -in <instance-name> -sts <storage-server-name> -lsu_name <lsu-name>

      When MSDP Scaleout is up and running, reuse the cloud LSU on the NetBackup primary server:

      /usr/openv/netbackup/bin/admincmd/nbdevconfig -setconfig -storage_server <STORAGESERVERNAME> -stype PureDisk -configlist <configuration file>

      The credentials, bucket name, and sub-bucket name must be the same as in the recovered cloud LSU configuration of the previous MSDP Scaleout deployment.

      Configuration file template:

      V7.5 "operation" "reuse-lsu-cloud" string
      V7.5 "lsuName" "LSUNAME" string
      V7.5 "cmsCredName" "XXX" string
      V7.5 "lsuCloudAlias" "<STORAGESERVERNAME_LSUNAME>" string
      V7.5 "lsuCloudBucketName" "XXX" string
      V7.5 "lsuCloudBucketSubName" "XXX" string
      V7.5 "lsuKmsServerName" "XXX" string

      Note:

      For Veritas Alta Recovery Vault Azure storage, cmsCredName is a credential name and can be any string. Add the recovery vault credential in the CMS using the NetBackup Web UI and provide the credential name for cmsCredName. For more information, see the About Veritas Alta Recovery Vault Azure topic in the NetBackup Deduplication Guide.

    • On the first MSDP Engine of MSDP Scaleout, run the following command for each cloud LSU:

      sudo -E -u msdpsvc /usr/openv/pdde/pdcr/bin/cacontrol --catalog clouddr <LSUNAME>

    • Restart the MSDP services in the MSDP Scaleout.

      Option 1: Manually delete all the MSDP engine pods.

      kubectl delete pod <sample-engine-pod> -n <sample-cr-namespace>

      Option 2: Stop the MSDP services in each MSDP engine pod. The MSDP services start automatically.

      kubectl exec <sample-engine-pod> -n <sample-cr-namespace> -c uss-engine -- /usr/openv/pdde/pdconfigure/pdde stop

  16. Edit the environment CR and change paused to false for the media server.

  17. Perform a full Catalog Recovery using either of the options listed below:

    Trigger a Catalog Recovery from the Web UI.

    Or

    Exec into the primary pod and run the bprecover -wizard command.

  18. Once recovery is completed, restart the NetBackup services:

    Stop NetBackup services using the /usr/openv/netbackup/bin/bp.kill_all command.

    Start NetBackup services using the /usr/openv/netbackup/bin/bp.start_all command.

  19. Activate NetBackup health probes using the /opt/veritas/vxapp-manage/nb-health activate command.

  20. Apply the environment_backup_copy.yaml file (saved in step 15) and install the Snapshot Manager server. Wait for the Snapshot Manager pods to come up and reach the Running state.

    Note:

    Ensure that the cpServer section in the CR is enabled.

  21. Post disaster recovery, if the on-host agent fails, run the following respective commands:

    • For Windows: From the command prompt, navigate to the agent installation directory (C:\Program Files\Veritas\CloudPoint\) and run the following command:

      #flexsnap-agent.exe --renew --token <auth_token> renew

      This command fails on the first attempt. Rerun the command for a successful attempt.

    • For Linux: Rerun the following command on Linux host:

      sudo flexsnap-agent --renew --token <auth_token>