NetBackup™ Deployment Guide for Amazon Elastic Kubernetes Services (EKS) Cluster

Last Published:
Product(s): NetBackup & Alta Data Protection (10.1)
  1. Introduction to NetBackup on EKS
    1. About NetBackup deployment on Amazon Elastic Kubernetes (EKS) cluster
    2. Required terminology
    3. User roles and permissions
    4. About MSDP Scaleout
    5. About MSDP Scaleout components
    6. Limitations in MSDP Scaleout
  2. Deployment with environment operators
    1. About deployment with the environment operator
      1. Prerequisites
      2. Contents of the TAR file
      3. Known limitations
    2. Deploying the operators manually
    3. Deploying NetBackup and MSDP Scaleout manually
    4. Configuring the environment.yaml file
    5. Uninstalling NetBackup environment and the operators
    6. Applying security patches
  3. Assessing cluster configuration before deployment
    1. How does the webhook validation work
    2. Webhooks validation execution details
    3. How does the Config-Checker utility work
    4. Config-Checker execution and status details
  4. Deploying NetBackup
    1. Preparing the environment for NetBackup installation on EKS
    2. Recommendations of NetBackup deployment on EKS
    3. Limitations of NetBackup deployment on EKS
    4. About primary server CR and media server CR
      1. After installing primary server CR
      2. After installing the media server CR
    5. Monitoring the status of the CRs
    6. Updating the CRs
    7. Deleting the CRs
    8. Configuring NetBackup IT Analytics for NetBackup deployment
    9. Managing NetBackup deployment using VxUpdate
    10. Migrating the node group for primary or media servers
  5. Upgrading NetBackup
    1. Preparing for NetBackup upgrade
    2. Upgrading NetBackup operator
    3. Upgrading NetBackup application
    4. Upgrade NetBackup during data migration
    5. Procedure to rollback when upgrade fails
  6. Deploying MSDP Scaleout
    1. Deploying MSDP Scaleout
    2. Prerequisites
    3. Installing the docker images and binaries
    4. Initializing the MSDP operator
    5. Configuring MSDP Scaleout
    6. Using MSDP Scaleout as a single storage pool in NetBackup
    7. Configuring the MSDP cloud in MSDP Scaleout
  7. Upgrading MSDP Scaleout
    1. Upgrading MSDP Scaleout
  8. Monitoring NetBackup
    1. Monitoring the application health
    2. Telemetry reporting
    3. About NetBackup operator logs
    4. Expanding storage volumes
    5. Allocating static PV for Primary and Media pods
  9. Monitoring MSDP Scaleout
    1. About MSDP Scaleout status and events
    2. Monitoring with Amazon CloudWatch
    3. The Kubernetes resources for MSDP Scaleout and MSDP operator
  10. Managing the Load Balancer service
    1. About the Load Balancer service
    2. Notes for Load Balancer service
    3. Opening the ports from the Load Balancer service
  11. Performing catalog backup and recovery
    1. Backing up a catalog
    2. Restoring a catalog
  12. Managing MSDP Scaleout
    1. Adding MSDP engines
    2. Adding data volumes
    3. Expanding existing data or catalog volumes
      1. Manual storage expansion
    4. MSDP Scaleout scaling recommendations
    5. MSDP Cloud backup and disaster recovery
      1. About the reserved storage space
      2. Cloud LSU disaster recovery
    6. MSDP multi-domain support
    7. Configuring Auto Image Replication
    8. About MSDP Scaleout logging and troubleshooting
      1. Collecting the logs and the inspection information
  13. About MSDP Scaleout maintenance
    1. Pausing the MSDP Scaleout operator for maintenance
    2. Logging in to the pods
    3. Reinstalling MSDP Scaleout operator
    4. Migrating the MSDP Scaleout to another node group
  14. Uninstalling MSDP Scaleout from EKS
    1. Cleaning up MSDP Scaleout
    2. Cleaning up the MSDP Scaleout operator
  15. Troubleshooting
    1. View the list of operator resources
    2. View the list of product resources
    3. View operator logs
    4. View primary logs
    5. Pod restart failure due to liveness probe time-out
    6. Socket connection failure
    7. Resolving an invalid license key issue
    8. Resolving an issue where external IP address is not assigned to a NetBackup server's load balancer services
    9. Resolving the issue where the NetBackup server pod is not scheduled for long time
    10. Resolving an issue where the Storage class does not exist
    11. Resolving an issue where the primary server or media server deployment does not proceed
    12. Resolving an issue of failed probes
    13. Resolving token issues
    14. Resolving an issue related to insufficient storage
    15. Resolving an issue related to invalid nodepool
    16. Resolving a token expiry issue
    17. Resolve an issue related to KMS database
    18. Resolve an issue related to pulling an image from the container registry
    19. Resolving an issue related to recovery of data
    20. Check primary server status
    21. Pod status field shows as pending
    22. Ensure that the container is running the patched image
    23. Getting EEB information from an image, a running container, or persistent data
    24. Resolving the certificate error issue in NetBackup operator pod logs
    25. Resolving the primary server connection issue
    26. Primary pod is in pending state for a long duration
    27. Host mapping conflict in NetBackup
    28. NetBackup messaging queue broker takes more time to start
    29. Local connection is getting treated as insecure connection
    30. Issue with capacity licensing reporting which takes longer time
    31. Backing up data from Primary server's /mnt/nbdata/ directory fails with primary server as a client
  16. Appendix A. CR template
    1. Secret
    2. MSDP Scaleout CR

Upgrade NetBackup during data migration

Ensure that all the steps mentioned for data migration in the following section are performed before upgrading to or installing the latest NetBackup:

See Preparing the environment for NetBackup installation on EKS.

  • You must have deployed NetBackup on AWS with EBS as its storage class.

    While upgrading to the latest NetBackup, the existing catalog data of the primary server is migrated (copied) from EBS to Amazon Elastic File System (EFS).

  • Fresh NetBackup deployment: If you are deploying NetBackup for the first time, Amazon EFS is used for the primary server's catalog volume for any backup and restore operations.

Perform the following steps to create EFS when upgrading NetBackup from a previous version:

  1. To create EFS for primary server, see Create your Amazon EFS file system.

    The EFS configuration can be as follows; update the throughput mode as required:

    Performance mode: General Purpose

    Throughput mode: Provisioned (256 MiB/s)

    Availability zone: Regional

    Note:

    The provisioned throughput can be increased at runtime depending on the size of your workloads. If you see performance issues, you can increase the throughput up to 1024 MiB/s.

  2. Install the efs-csi-controller driver on the EKS cluster.

    For more information on installing the driver, see Amazon EFS CSI driver.

  3. Note down the EFS ID for further use.
  4. Mount EFS on any EC2 instance and create two directories on EFS to store NetBackup data.

    For more information, see Mount on EC2 instance.

    For example,

    [root@sych09b03v30 ~]# mkdir /efs
    [root@sych09b03v30 ~]# mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport <fs-0bde325bc5b8d6969>.efs.us-east-2.amazonaws.com:/ /efs  # change EFS ID

After changing the existing storage class from EBS to EFS for data migration, manually create the PVC and PV with the EFS volume handle and update the yaml file as described in the following procedure:

  1. Create new PVC and PV with EFS volume handle.

    • PVC

      CatalogPVC.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: catalog
        namespace: ns-155
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: ""
        resources:
          requests:
            storage: 100Gi
        volumeName: environment-pv-primary-catalog
    • PV

      catalogPV.yaml

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: environment-pv-primary-catalog
        labels:
          topology.kubernetes.io/region: us-east-2 # Set to the region configured in your cluster
          topology.kubernetes.io/zone: us-east-2c # Set to the zone of your node instance; you can also check the subnet zone of the node instance
      spec:
        capacity:
          storage: 100Gi
        volumeMode: Filesystem
        accessModes:
          - ReadWriteMany
        storageClassName: ""
        persistentVolumeReclaimPolicy: Retain
        mountOptions:
          - iam
        csi:
          driver: efs.csi.aws.com
          volumeHandle: fs-07a82a46b4a7d87f8:/nbdata  #EFS id need to be changed as per your created EFS id
        claimRef:
          apiVersion: v1
          kind: PersistentVolumeClaim
          name: catalog  # catalog PVC name to which data is to be copied
          namespace: ns-155

    PVC for data (EBS)

    PVC

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-<Primary name>-primary-0
      namespace: ns-155
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: <Storageclass name>
      resources:
        requests:
          storage: 30Gi
  2. Edit the environment.yaml file, change the value of paused to true in the primary section, and apply the yaml.
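    This step touches only a small part of environment.yaml. The following is a minimal sketch of the affected field, using the example names from this procedure (all other fields omitted):

```yaml
# Fragment of environment.yaml - only the field changed in this step is shown
spec:
  primary:
    paused: true   # pause reconciliation of the primary server before scaling it down
```

    Apply it with kubectl apply -f environment.yaml, as elsewhere in this procedure.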

  3. Scale down the primary server using the following commands:

    • To get the statefulset name: kubectl get sts -n <namespace in environment CR (ns-155)>

    • To scale down the STS: kubectl scale sts <STS name> --replicas=0 -n <namespace>

  4. Copy the data using the migration yaml file as follows:

    catalogMigration.yaml

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: rsync-data
      namespace: ns-155
    spec:
      template:
        spec:
          volumes:
          - name: source-pvc
            persistentVolumeClaim:
              # SOURCE PVC
              claimName: <EBS PVC name of catalog> # old PVC (EBS) from which data is to be copied, e.g. catalog-environment-migrate1-primary-0
          - name: destination-pvc
            persistentVolumeClaim:
              # DESTINATION PVC
              claimName: catalog   # new PVC (EFS) to which data will be copied
          securityContext:
            runAsUser: 0
            runAsGroup: 0
          containers:
          - name: netbackup-migration
            image: OPERATOR_IMAGE:TAG  # image name with tag
            command: ["/migration", '{"VolumesList":[{"Src":"srcPvc","Dest":"destPvc","Verify":true,"StorageType":"catalog","OnlyCatalog":true}]}']
            volumeMounts:
            - name: source-pvc
              mountPath: /srcPvc
            - name: destination-pvc
              mountPath: /destPvc
          restartPolicy: Never

    dataMigration.yaml

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: rsync-data2
      namespace: ns-155
    spec:
      template:
        spec:
          volumes:
          - name: source-pvc
            persistentVolumeClaim:
              # SOURCE PVC
              claimName: <EBS PVC name of catalog> # old PVC (EBS) from which data is to be copied, e.g. catalog-environment-migrate1-primary-0
          - name: destination-pvc
            persistentVolumeClaim:
              # DESTINATION PVC
              claimName: <data PVC name>  # new data PVC (EBS) to which data will be copied
          securityContext:
            runAsUser: 0
            runAsGroup: 0
          containers:
          - name: netbackup-migration
            image: OPERATOR_IMAGE:TAG  # image name with tag
            command: ["/migration", '{"VolumesList":[{"Src":"srcPvc","Dest":"destPvc","Verify":true,"StorageType":"data","OnlyCatalog":false}]}']
            volumeMounts:
            - name: source-pvc
              mountPath: /srcPvc
            - name: destination-pvc
              mountPath: /destPvc
          restartPolicy: Never
  5. Delete the migration jobs once the pods are in the Completed state.

  6. For the primary server, delete the old PVC (EBS) of the catalog volume.

    For example, catalog-<Name_of_primary>-primary-0. Then create a new PVC with the same name as the deleted PVC that was attached to the primary server.

    • Follow the naming conventions of static PV and PVC to consume for Primary Server Deployment.

      catalog-<Name_of_primary>-primary-0
      data-<Name_of_primary>-primary-0
      Example:
      catalog-test-env-primary-0
      data-test-env-primary-0
      environment.yaml

      apiVersion: netbackup.veritas.com/v2
      kind: Environment
      metadata:
        name: test-env
        namespace: ns-155
      spec:
      ...
        primary:
          # Set name to control the name of the primary server.
          # The default value is the same as the Environment's metadata.name.
          name: test-env
    • Yaml to create new catalog PVC:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: catalog-test-env-primary-0
        namespace: ns-155
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: ""
        resources:
          requests:
            storage: 100Gi
        volumeName: environment-pv-primary-catalog
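    • A matching data PVC can be recreated the same way. The following is a hypothetical sketch that mirrors the "PVC for data (EBS)" example earlier, using the example names above; adjust the storage class and size to your deployment:

```yaml
# Hypothetical example: recreate the data PVC (EBS) with the conventional name
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-test-env-primary-0
  namespace: ns-155
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: <Storageclass name>  # EBS-backed storage class
  resources:
    requests:
      storage: 30Gi
```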
  7. Edit the PV (backed by EFS) and replace the claimRef name, resourceVersion, and uid with those of the newly created PVC to meet the naming convention.

    Get the PVs and PVCs using the following commands:

    • To get PVC details: kubectl get pvc -n <namespace>

    • To edit the new PVC (with the old name): kubectl edit pvc <new PVC name> -n <namespace>

    • To edit the PV (to which data is copied): kubectl edit pv <PV name>
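    After the edit, the claimRef on the EFS-backed PV might look like the following sketch (names taken from the examples in this procedure; the uid and resourceVersion values must be copied from the newly created PVC, for example from kubectl get pvc catalog-test-env-primary-0 -n ns-155 -o yaml):

```yaml
# Sketch of the edited claimRef on the EFS-backed PV (environment-pv-primary-catalog)
claimRef:
  apiVersion: v1
  kind: PersistentVolumeClaim
  name: catalog-test-env-primary-0  # name of the newly created PVC
  namespace: ns-155
  uid: <uid of the new PVC>  # copy from the new PVC's metadata
  resourceVersion: "<resourceVersion of the new PVC>"  # copy from the new PVC's metadata
```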

  8. Upgrade MSDP with the new build and image tag. Run the following command:

    ./kubectl-msdp init --image <Image name:Tag> --storageclass <Storage class name> --namespace <Namespace>

  9. Apply the operator from the new build, with the new image tag and node selector, using the following command:

    kubectl apply -k operator/

  10. Edit the environment.yaml file from the new build and perform the following changes:

    • Add tag: <new_tag_of_upgrade_image> separately under the primary section.

    • Change the storage class for catalog under the primary section to the catalog and log PV names respectively. Set paused: false under the primary section.

    • Change the storageClassName for data and logs, and then apply the environment.yaml file using the following command and ensure that the primary server is upgraded successfully:

      kubectl apply -f environment.yaml

    • Upgrade the MSDP Scaleout by updating the new image tag in the msdpscaleout section of the environment.yaml file.

    • Apply the environment.yaml file using the following command and ensure that MSDP is deployed successfully:

      kubectl apply -f environment.yaml

    • Edit the environment.yaml file and update the image tag for the media server in the mediaServer section.

    • Apply the environment.yaml file using the following command and ensure that the media server is deployed successfully:

      kubectl apply -f environment.yaml
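The image-tag updates in step 10 can be pictured as one evolving environment.yaml. The following is a hedged sketch only: the field names are inferred from the section names mentioned above (primary, msdpscaleout, mediaServer) and may differ in your CR schema; check your shipped environment.yaml for the exact spelling.

```yaml
# Sketch: environment.yaml fields updated during the upgrade (placeholders throughout)
spec:
  primary:
    tag: <new_tag_of_upgrade_image>    # new primary server image tag
    paused: false                      # resume reconciliation after migration
  msdpScaleouts:
    - tag: <new_tag_of_upgrade_image>  # new MSDP Scaleout image tag
  mediaServers:
    - tag: <new_tag_of_upgrade_image>  # new media server image tag
```

Each change is followed by kubectl apply -f environment.yaml, as described in the bullets above.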