NetBackup™ Deployment Guide for Amazon Elastic Kubernetes Services (EKS) Cluster
- Introduction to NetBackup on EKS
- Deployment with environment operators
- Assessing cluster configuration before deployment
- Deploying NetBackup
- About primary server CR and media server CR
- Upgrading NetBackup
- Deploying MSDP Scaleout
- Upgrading MSDP Scaleout
- Monitoring NetBackup
- Monitoring MSDP Scaleout
- Managing the Load Balancer service
- Performing catalog backup and recovery
- Managing MSDP Scaleout
- About MSDP Scaleout maintenance
- Uninstalling MSDP Scaleout from EKS
- Troubleshooting
- Appendix A. CR template
Upgrade NetBackup during data migration
Ensure that all the steps mentioned for data migration in the following section are performed before upgrading to or performing a fresh installation of the latest NetBackup:
See Preparing the environment for NetBackup installation on EKS.
You must have deployed NetBackup on AWS with EBS as its storage class. While upgrading to the latest NetBackup, the existing catalog data of the primary server is migrated (copied) from EBS to Amazon Elastic File System (Amazon EFS).
Fresh NetBackup deployment: If you are deploying NetBackup for the first time, Amazon EFS is used for the primary server's catalog volume for any backup and restore operations.
Perform the following steps to create EFS when upgrading NetBackup from a previous version
- To create EFS for primary server, see Create your Amazon EFS file system.
The EFS configuration can be as follows; you can update the Throughput mode as required:
Performance mode: General Purpose
Throughput mode: Provisioned (256 MiB/s)
Availability zone: Regional
Note:
The Throughput mode can be increased at runtime depending on the size of the workloads. If you observe performance issues, you can increase the provisioned throughput up to 1024 MiB/s.
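For reference, a minimal sketch of creating a file system with this configuration using the AWS CLI (the region and the Name tag value are illustrative assumptions):
aws efs create-file-system \
    --performance-mode generalPurpose \
    --throughput-mode provisioned \
    --provisioned-throughput-in-mibps 256 \
    --region us-east-2 \
    --tags Key=Name,Value=netbackup-primary-efs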
- Install the efs-csi-controller driver on the EKS cluster.
For more information on installing the driver, see Amazon EFS CSI driver.
- Note down the EFS ID for further use.
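If needed, you can list the EFS IDs with the AWS CLI, for example:
aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text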
- Mount EFS on any EC2 instance and create two directories on EFS to store NetBackup data.
For more information, see Mount on EC2 instance.
For example:
[root@sych09b03v30 ~]# mkdir /efs
[root@sych09b03v30 ~]# mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport <fs-0bde325bc5b8d6969>.efs.us-east-2.amazonaws.com:/ /efs # change EFS ID
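The two directory names are not fixed. For example, assuming nbdata for the catalog (matching the volumeHandle used in catalogPV.yaml below) and nblogs as an illustrative name for the second directory:
[root@sych09b03v30 ~]# mkdir /efs/nbdata /efs/nblogs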
After changing the existing storage class from EBS to EFS for data migration, manually create the PVC and PV with the EFS volume handle and update the yaml files as described in the following procedure:
Create a new PVC and PV with the EFS volume handle.
catalogPVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: catalog
  namespace: ns-155
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Gi
  volumeName: environment-pv-primary-catalog
catalogPV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: environment-pv-primary-catalog
  labels:
    topology.kubernetes.io/region: us-east-2 # Specify the region configured in your cluster
    topology.kubernetes.io/zone: us-east-2c # Specify the zone of your node instance; you can also check the subnet zone of the node instance
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - iam
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-07a82a46b4a7d87f8:/nbdata # Change the EFS ID to your created EFS ID
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: catalog # catalog PVC name to which data is to be copied
    namespace: ns-155
PVC for data (EBS)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-<Primary name>-primary-0
  namespace: ns-155
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: <Storageclass name>
  resources:
    requests:
      storage: 30Gi
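Apply the manifests and confirm that the catalog PVC binds to the static PV, for example (the file names match the labels above):
kubectl apply -f catalogPVC.yaml -f catalogPV.yaml
kubectl get pvc catalog -n ns-155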
Edit the environment.yaml file, change the value of paused to true in the primary section, and apply the yaml. Scale down the primary server using the following commands:
To get the statefulset name:
kubectl get sts -n <namespace in environment CR (ns-155)>
To scale down the STS:
kubectl scale sts --replicas=0 <STS name> -n <namespace>
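For example, assuming the ns-155 namespace used in this section and an illustrative statefulset name of test-env-primary:
kubectl get sts -n ns-155
kubectl scale sts --replicas=0 test-env-primary -n ns-155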
Copy the data using the following migration yaml files:
catalogMigration.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: rsync-data
  namespace: ns-155
spec:
  template:
    spec:
      volumes:
        - name: source-pvc
          persistentVolumeClaim: # SOURCE PVC
            claimName: <EBS PVC name of catalog> # for example, catalog-environment-migrate1-primary-0
            # old PVC (EBS) from which data is to be copied
        - name: destination-pvc
          persistentVolumeClaim: # DESTINATION PVC
            claimName: catalog # new PVC (EFS) to which data will be copied
      securityContext:
        runAsUser: 0
        runAsGroup: 0
      containers:
        - name: netbackup-migration
          image: OPERATOR_IMAGE:TAG # image name with tag
          command: ["/migration", '{"VolumesList":[{"Src":"srcPvc","Dest":"destPvc","Verify":true,"StorageType":"catalog","OnlyCatalog":true}]}']
          volumeMounts:
            - name: source-pvc
              mountPath: /srcPvc
            - name: destination-pvc
              mountPath: /destPvc
      restartPolicy: Never
dataMigration.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: rsync-data2
  namespace: ns-155
spec:
  template:
    spec:
      volumes:
        - name: source-pvc
          persistentVolumeClaim: # SOURCE PVC
            claimName: <EBS PVC name of catalog> # for example, catalog-environment-migrate1-primary-0
            # old PVC (EBS) from which data is to be copied
        - name: destination-pvc
          persistentVolumeClaim: # DESTINATION PVC
            claimName: <data (EBS) PVC name> # new data PVC to which data will be copied
      securityContext:
        runAsUser: 0
        runAsGroup: 0
      containers:
        - name: netbackup-migration
          image: OPERATOR_IMAGE:TAG # image name with tag
          command: ["/migration", '{"VolumesList":[{"Src":"srcPvc","Dest":"destPvc","Verify":true,"StorageType":"data","OnlyCatalog":false}]}']
          volumeMounts:
            - name: source-pvc
              mountPath: /srcPvc
            - name: destination-pvc
              mountPath: /destPvc
      restartPolicy: Never
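Run the two jobs and wait for their pods to reach the Completed state, for example:
kubectl apply -f catalogMigration.yaml
kubectl apply -f dataMigration.yaml
kubectl get pods -n ns-155 --watch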
Delete the migration jobs once the pods are in the Completed state.
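For example, using the job names defined in the yaml files above:
kubectl delete job rsync-data rsync-data2 -n ns-155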
For the primary server, delete the old PVC (EBS) of the catalog volume, for example catalog-<Name_of_primary>-primary-0, and create a new PVC with the same name as the deleted PVC that was attached to the primary server.
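For example, assuming test-env as the primary server name (as in the naming convention example below):
kubectl delete pvc catalog-test-env-primary-0 -n ns-155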
Follow the naming conventions of the static PV and PVC so that they are consumed by the primary server deployment.
catalog-<Name_of_primary>-primary-0
data-<Name_of_primary>-primary-0
Example:
catalog-test-env-primary-0
data-test-env-primary-0
environment.yaml
apiVersion: netbackup.veritas.com/v2
kind: Environment
metadata:
  name: test-env
  namespace: ns-155
spec:
  ...
  primary:
    # Set name to control the name of the primary server.
    # The default value is the same as the Environment's metadata.name.
    name: test-env
Yaml to create the new catalog PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: catalog-test-env-primary-0
  namespace: ns-155
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Gi
  volumeName: environment-pv-primary-catalog
Edit the PV (mounted on EFS) and replace the name, resourceVersion, and uid in its claimRef with those of the newly created PVC so that the binding meets the naming convention.
Get the PVs and PVCs using the following commands:
To get PVC details:
kubectl get pvc -n <namespace>
Use the edit command to get PVC details:
kubectl edit pvc <new PVC (old name) name> -n <namespace>
To get PV details:
kubectl edit pv <PV name (to which data was copied)>
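For reference, a minimal sketch of the updated claimRef in the PV, assuming the example names used in this section (copy resourceVersion and uid from the kubectl edit pvc output above):
claimRef:
  apiVersion: v1
  kind: PersistentVolumeClaim
  name: catalog-test-env-primary-0 # name of the newly created PVC
  namespace: ns-155
  resourceVersion: "<resourceVersion of the new PVC>"
  uid: <uid of the new PVC>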
Upgrade MSDP with the new build and image tag. Apply the following command to MSDP:
./kubectl-msdp init --image <Image name:Tag> --storageclass <Storage Class Name> --namespace <Namespace>
Apply the operator from the new build with the new image name and tag using the following command:
kubectl apply -k operator/
Edit the environment.yaml file from the new build and perform the following changes:
- Add tag: <new_tag_of_upgrade_image> separately under the primary section.
- Change the storage class for catalog under the primary section to the catalog and log PV names respectively.
- Set paused=false under the primary section.
- Change the storage class for data and logs as required.
Then apply the environment.yaml file using the following command and ensure that the primary server is upgraded successfully:
kubectl apply -f environment.yaml
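For reference, a minimal sketch of the relevant fields under the primary section after these edits (only the tag and paused fields named above are shown; the name value is illustrative):
primary:
  name: test-env
  tag: <new_tag_of_upgrade_image> # new image tag from the upgrade build
  paused: false # resume reconciliation of the primary server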
Upgrade MSDP Scaleout by updating the new image tag in the msdpscaleout section of the environment.yaml file. Apply the environment.yaml file using the following command and ensure that MSDP is deployed successfully:
kubectl apply -f environment.yaml
Edit the environment.yaml file and update the image tag for the media server in the mediaServer section. Apply the environment.yaml file using the following command and ensure that the media server is deployed successfully:
kubectl apply -f environment.yaml