NetBackup™ Deployment Guide for Kubernetes Clusters
- Introduction
- Section I. Deployment
- Prerequisites for Kubernetes cluster configuration
- Deployment with environment operators
- Deploying NetBackup
- Primary and media server CR
- Deploying NetBackup using Helm charts
- Deploying MSDP Scaleout
- Deploying Snapshot Manager
- Section II. Monitoring and Management
- Monitoring NetBackup
- Monitoring MSDP Scaleout
- Monitoring Snapshot Manager
- Managing the Load Balancer service
- Managing MSDP Scaleout
- Performing catalog backup and recovery
- Section III. Maintenance
- MSDP Scaleout Maintenance
- Upgrading
- Uninstalling
- Troubleshooting
- Troubleshooting AKS and EKS issues
- Troubleshooting AKS-specific issues
- Troubleshooting EKS-specific issues
- Appendix A. CR template
Preparing the environment for NetBackup installation on Kubernetes cluster
Ensure that the following prerequisites are met before proceeding with the deployment for AKS/EKS.
Use the following checklist to prepare the AKS cluster for installation.
Your Azure Kubernetes cluster must be created with appropriate network and configuration settings.
The supported Kubernetes cluster versions are 1.21.x through 1.24.x.
While creating the cluster, assign appropriate roles and permissions.
Use an existing Azure container registry or create a new one. Your Kubernetes cluster must be able to access this registry to pull the images from the container registry. For more information on the Azure container registry, see Azure Container Registry documentation.
It is recommended to create a separate node pool for Media server installation with autoscaling enabled. The autoscaling feature allows your node pool to scale dynamically by provisioning and de-provisioning nodes automatically as required. A dedicated node pool for the Primary server must be created in the Azure Kubernetes cluster.
The following table lists the node configuration for the primary and media servers.
Node type: D16ds v4
Disk type: P30
vCPU: 16
RAM: 64 GiB
Total disk size per node: 1 TB
Number of disks per node: 1

Cluster storage size:
Small (4 nodes): 4 TB
Medium (8 nodes): 8 TB
Large (16 nodes): 16 TB
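For reference, the following is a minimal sketch of how a dedicated media server node pool with autoscaling could be created with the Azure CLI. The node pool name, VM size, and node counts shown here are placeholders that you should adapt to your sizing requirements.

# Example only: adjust resource group, cluster, pool name, VM size, and counts
az aks nodepool add \
  --resource-group <resource_group_name> \
  --cluster-name <cluster_name> \
  --name mediapool \
  --node-vm-size Standard_D16ds_v4 \
  --node-count 1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 16 \
  --os-type Linux

A similar command can be used to create the dedicated Primary server node pool.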
Another dedicated node pool must be created for Snapshot Manager (if it has to be deployed) with auto scaling enabled.
Following is the minimum configuration required for Snapshot Manager data plane node pool:
Node type: B4ms
RAM: 8 GB
Number of nodes: Minimum 1, with autoscaling enabled
Maximum pods per node: 6 (system) + 4 (static pods) + RAM*2 (dynamic) = 26 pods or more
Following are the different scenarios for how NetBackup Snapshot Manager calculates the number of jobs that can run at a given point in time, based on the above formula:
For a node configuration with 2 CPUs and 8 GB RAM:

CPU: More than 2 CPUs
RAM: 8 GB
Maximum pods per node: 6 (system) + 4 (static pods) + 8*2 = 16 (dynamic pods) = 26 or more
Autoscaling enabled: Minimum = 1, Maximum = 3

Note:
The above configuration will run 8 jobs per node at a time.
For a node configuration with 2/4/6 CPUs and 16 GB RAM:

CPU: More than 2/4/6 CPUs
RAM: 16 GB
Maximum pods per node: 6 (system) + 4 (static pods) + 16*2 = 32 (dynamic pods) = 42 or more
Autoscaling enabled: Minimum = 1, Maximum = 3

Note:
The above configuration will run 16 jobs per node at a time.
All the nodes in the node pool must be running the Linux operating system.
Taints and tolerations allow you to mark (taint) a node so that no pods can schedule onto it unless a pod explicitly tolerates the taint. Marking nodes instead of pods (as in node affinity/anti-affinity) is particularly useful for situations where most pods in the cluster must avoid scheduling onto the node.
Taints are set on the node pool while creating the node pool in the cluster. Tolerations are set on the pods.
To use this functionality, you must create the node pool with the following details:
Add a label with a certain key and value. For example, key = nbpool, value = nbnodes.
Add a taint with the same key and value that is used for the label in the above step, with the effect NoSchedule.
For example, key = nbpool, value = nbnodes, effect = NoSchedule
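As a hedged illustration, the label and taint can be applied when the node pool is created with the Azure CLI; the pool name and key/value pairs below are only examples and should match the values you plan to use in the operator and environment YAML files.

# Example only: create a node pool that carries the nbpool=nbnodes label and a matching NoSchedule taint
az aks nodepool add \
  --resource-group <resource_group_name> \
  --cluster-name <cluster_name> \
  --name nbpool \
  --labels nbpool=nbnodes \
  --node-taints nbpool=nbnodes:NoSchedule \
  --enable-cluster-autoscaler --min-count 1 --max-count 4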
Provide these details in the operator YAML as follows. To update the toleration and node selector for the operator pod, edit the operator/patch/operator_patch.yaml file. Provide the same label key:value in the node selector and toleration sections. For example,
nodeSelector:
  nbpool: nbnodes
# Support node taints by adding pod tolerations equal to the specified nodeSelectors.
# For the toleration, NODE_SELECTOR_KEY is used as the key and NODE_SELECTOR_VALUE as the value.
tolerations:
- key: nbpool
  operator: "Equal"
  value: nbnodes
Update the same label key:value as labelKey and labelValue in the nodeselector section in the environment.yaml file.
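The exact structure of environment.yaml depends on the template shipped with your release; as a hypothetical sketch, the entry might look similar to the following, with labelKey and labelValue matching the label created on the node pool.

# Hypothetical excerpt of environment.yaml -- match the nesting used in your template
nodeselector:
  labelKey: nbpool
  labelValue: nbnodes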
If you want to use static public IPs, private IPs and fully qualified domain names for the load balancer service, the public IP addresses, private IP addresses and FQDNs must be created in AKS before deployment.
If you want to bind the load balancer service IPs to a specific subnet, the subnet must be created in AKS and its name must be updated in the corresponding key of the custom resource (CR). For more information on the network configuration for a load balancer service, refer to the How-to Guide section of the Azure documentation.
For more information on managing the load balancer service, see About the Load Balancer service.
Create a storage class with the Azure file storage type that uses the file.csi.azure.com provisioner and allows volume expansion. It must be in the LRS category with Premium SSD. It is recommended that the storage class has the Retain reclaim policy. Such a storage class can be used for the primary server, which supports Azure premium files storage only for the catalog volume. For more information on Azure premium files, see Azure Files CSI driver.
For example,
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: {{ custom-storage-class-name }}
provisioner: file.csi.azure.com
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  storageaccounttype: Premium_LRS
  protocol: nfs
Create a storage class with the Managed disk storage type, with allowVolumeExpansion = true and ReclaimPolicy=Retain. This storage class will be used for the Primary server data and log volumes. Media server storage supports Azure disks only.
The customer's Azure subscription should have the required built-in role. For more information, see Azure built-in roles.
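The following is a minimal sketch of such a storage class, assuming the Azure Disk CSI driver (disk.csi.azure.com) is used; the class name and SKU are placeholders to adjust.

# Example only: managed disk storage class for Primary server data and log volumes
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: {{ custom-data-storage-class-name }}
provisioner: disk.csi.azure.com
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  skuName: Premium_LRS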
EKS-specific requirements
- Create a Kubernetes cluster with the following guidelines:
Use Kubernetes version 1.21 onwards.
AWS default CNI is used during cluster creation.
Create a node group with only one availability zone. The instance type should be at least an m5.4xlarge configuration, and the size of the attached EBS volume for each node should be more than 100 GB.
The node group uses AWS manual scaling or the autoscaling group feature, which allows your node group to scale by provisioning and de-provisioning nodes automatically as required.
Note:
All the nodes in the node group must be running the Linux operating system.
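As a hedged sketch, a managed node group matching these guidelines could be created with eksctl along the following lines; the node group name, node counts, and volume size are placeholders, and single availability zone placement can be constrained through the availabilityZones field of an eksctl config file.

# Example only: managed node group with m5.4xlarge nodes and >100 GB EBS volumes
eksctl create nodegroup \
  --cluster <cluster_name> \
  --name nb-media \
  --node-type m5.4xlarge \
  --nodes 1 --nodes-min 1 --nodes-max 16 \
  --node-volume-size 200 \
  --managed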
Minimum required policies in IAM role:
AmazonEKSClusterPolicy
AmazonEKSWorkerNodePolicy
AmazonEC2ContainerRegistryReadOnly
AmazonEKS_CNI_Policy
AmazonEKSServicePolicy
- Use an existing AWS Elastic Container Registry or create a new one and ensure that the EKS has full access to pull images from the elastic container registry.
- It is recommended to create a separate node pool for Media server installation with the autoscaler add-on installed in the cluster. The autoscaling feature allows your node pool to scale dynamically by provisioning and de-provisioning nodes automatically as required.
- A dedicated node pool for Primary server must be created in Amazon Elastic Kubernetes Services cluster.
The following table lists the node configuration for the primary and media servers.
Node type: m5.4xlarge
vCPU: 16
RAM: 64 GiB
Total disk size per node: 1 TB
Number of disks per node: 1

Cluster storage size:
Small (4 nodes): 4 TB
Medium (8 nodes): 8 TB
Large (16 nodes): 16 TB
- Another dedicated node pool must be created for Snapshot Manager (if it has to be deployed) with auto scaling enabled.
Following is the minimum configuration required for Snapshot Manager data plane node pool:
Node type: t3.large
RAM: 8 GB
Number of nodes: Minimum 1, with autoscaling enabled
Maximum pods per node:
Number of IPs required for the Snapshot Manager data pool must be greater than: number of nodes (for the node's own IP) + (RAM size per node * 2 * number of nodes) + (number of all kube-system pods running on all nodes) + static listener pod + number of nodes (for the fluent daemonset)
Number of IPs required for the Snapshot Manager control pool must be greater than: number of nodes (for the node's own IP) + number of flexsnap pods (15) + number of flexsnap services (6) + nginx load balancer IP + number of additional off-host agents + operator + (number of all kube-system pods running on all nodes)
Following are the different scenarios for how NetBackup Snapshot Manager calculates the number of jobs that can run at a given point in time, based on the above formula:
For a node configuration with 2 CPUs and 8 GB RAM:

CPU: More than 2 CPUs
RAM: 8 GB
Maximum pods per node:
Number of IPs required for the Snapshot Manager data pool must be greater than: number of nodes (for the node's own IP) + (RAM size per node * 2 * number of nodes) + (number of all kube-system pods running on all nodes) + static listener pod + number of nodes (for the fluent daemonset)
Number of IPs required for the Snapshot Manager control pool must be greater than: number of nodes (for the node's own IP) + number of flexsnap pods (15) + number of flexsnap services (6) + nginx load balancer IP + number of additional off-host agents + operator + (number of all kube-system pods running on all nodes)
Autoscaling enabled: Minimum = 1, Maximum = 3

Note:
The above configuration will run 8 jobs per node at a time.
For a node configuration with 2/4/6 CPUs and 16 GB RAM:

CPU: More than 2/4/6 CPUs
RAM: 16 GB
Maximum pods per node: 6 (system) + 4 (static pods) + 16*2 = 32 (dynamic pods) = 42 or more
Autoscaling enabled: Minimum = 1, Maximum = 3

Note:
The above configuration will run 16 jobs per node at a time.
- Taints and tolerations allow you to mark (taint) a node so that no pods can schedule onto it unless a pod explicitly tolerates the taint. Marking nodes instead of pods (as in node affinity/anti-affinity) is particularly useful for situations where most pods in the cluster must avoid scheduling onto the node.
Taints are set on the node group while creating the node group in the cluster. Tolerations are set on the pods.
To use this functionality, you must create the node group with the following details:
Add a label with a certain key and value. For example, key = nbpool, value = nbnodes.
Add a taint with the same key and value that is used for the label in the above step, with the effect NoSchedule.
For example, key = nbpool, value = nbnodes, effect = NoSchedule
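As a hedged illustration, the label and taint can be declared in an eksctl config file when the node group is created; the names below are placeholders and should match the values you plan to use in the operator and environment YAML files.

# Example only: eksctl ClusterConfig excerpt for a labeled, tainted managed node group
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: <cluster_name>
  region: <region_name>
managedNodeGroups:
  - name: nbpool
    instanceType: m5.4xlarge
    labels:
      nbpool: nbnodes
    taints:
      - key: nbpool
        value: nbnodes
        effect: NoSchedule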
Provide these details in the operator YAML as follows. To update the toleration and node selector for the operator pod, edit the operator/patch/operator_patch.yaml file. Provide the same label key:value in the node selector and toleration sections. For example,
nodeSelector:
  nbpool: nbnodes
# Support node taints by adding pod tolerations equal to the specified nodeSelectors.
# For the toleration, NODE_SELECTOR_KEY is used as the key and NODE_SELECTOR_VALUE as the value.
tolerations:
- key: nbpool
  operator: "Equal"
  value: nbnodes
Update the same label key:value as labelKey and labelValue in the nodeselector section in the environment.yaml file.
- Deploy the AWS Load Balancer Controller add-on in the cluster.
For more information on installing the add-on, see Installing the AWS Load Balancer Controller add-on.
- Install cert-manager by using the following command:
$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
For more information, see Documentation for cert-manager installation.
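After applying the manifest, you can optionally confirm that the cert-manager pods reach the Running state before you continue:

kubectl get pods --namespace cert-manager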
- The FQDN that is provided in the primary server CR and media server CR specifications in the networkLoadBalancer section must be DNS-resolvable to the provided IP address.
- Amazon Elastic File System (Amazon EFS) is required for shared persistent storage. To create EFS for the primary server, see Create your Amazon EFS file system.
The EFS configuration can be as follows, and you can update the Throughput mode as required:
Performance mode: General Purpose
Throughput mode: Provisioned (256 MiB/s)
Availability zone: Regional
Note:
The Throughput mode can be increased at runtime depending on the size of the workloads. If you observe performance issues, you can increase the provisioned throughput up to 1024 MiB/s.
Note:
To install the add-on in the cluster, ensure that you install the Amazon EFS CSI driver. For more information on installing the Amazon EFS CSI driver, see Amazon EFS CSI driver.
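As a hedged sketch, a file system with the configuration listed above could be created with the AWS CLI as follows; the region and tag value are placeholders.

# Example only: EFS file system with provisioned throughput of 256 MiB/s
aws efs create-file-system \
  --region <region_name> \
  --performance-mode generalPurpose \
  --throughput-mode provisioned \
  --provisioned-throughput-in-mibps 256 \
  --encrypted \
  --tags Key=Name,Value=<efs_name>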
- If the NetBackup client is outside the VPC, or if you want to access the Web UI from outside the VPC, then the NetBackup client CIDR must be added with all NetBackup ports in the cluster's security group inbound rules. See About the Load Balancer service for more information on NetBackup ports.
To obtain the cluster security group, run the following command:
aws eks describe-cluster --name <my-cluster> --query cluster.resourcesVpcConfig.clusterSecurityGroupId
For information about adding an inbound rule to the security group, refer to the AWS documentation.
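As a hedged illustration, an inbound rule for a single NetBackup port (1556 is shown here; repeat for every port that your deployment requires) could be added as follows, using the client CIDR and the security group ID returned by the previous command.

# Example only: allow a NetBackup client CIDR to reach port 1556 on the cluster security group
aws ec2 authorize-security-group-ingress \
  --group-id <cluster_security_group_id> \
  --protocol tcp \
  --port 1556 \
  --cidr <netbackup_client_cidr>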
- Create a storage class with the EBS storage type, with allowVolumeExpansion = true and ReclaimPolicy=Retain. This storage class is to be used for the data and log volumes for both primary and media servers. For example,
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: ebs-csi-storage-class
parameters:
  fsType: ext4
  type: gp2
provisioner: ebs.csi.aws.com
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
Note:
Ensure that you install the Amazon EBS CSI driver to install the add-on in the cluster. For more information on installing the Amazon EBS CSI driver, see Managing the Amazon EBS CSI driver as an Amazon EKS add-on and Amazon EBS CSI driver.
- The EFS-based PV must be specified for the Primary server catalog volume with ReclaimPolicy=Retain.
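The following is a minimal sketch of such a statically provisioned PV, assuming the Amazon EFS CSI driver; the PV name, capacity, storage class name, and file system ID are placeholders.

# Example only: static PV backed by EFS for the Primary server catalog volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nb-primary-catalog-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: <efs_storage_class_name>
  csi:
    driver: efs.csi.amazonaws.com
    volumeHandle: <efs_file_system_id>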
Use the following checklist to address the prerequisites on the system that you want to use as a NetBackup host that connects to the AKS/EKS cluster.
AKS-specific
Linux operating system: For a complete list of compatible Linux operating systems, refer to the Software Compatibility List (SCL) at:
Install Docker on the host to install NetBackup container images through tar, and start the container service.
Prepare the host to manage the AKS cluster.
Install Azure CLI.
https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-linux/
Install Kubernetes CLI
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
Log in to the Azure environment to access the Kubernetes cluster by running this command on Azure CLI:
# az login --identity
# az account set --subscription <subscriptionID>
# az aks get-credentials --resource-group <resource_group_name> --name <cluster_name>
Log in to the container registry:
# az acr login -n <container-registry-name>
EKS-specific
Install AWS CLI.
For more information on installing the AWS CLI, see Installing or updating the latest version of the AWS CLI.
Install Kubectl CLI.
For more information on installing the Kubectl CLI, see Installing kubectl.
Configure docker to enable the push of the container images to the container registry.
Create the OIDC provider for the AWS EKS cluster.
For more information on creating the OIDC provider, see Create an IAM OIDC provider for your cluster.
Create an IAM service account for the AWS EKS cluster.
For more information on creating an IAM service account, see Configuring a Kubernetes service account to assume an IAM role.
If an IAM role needs an access to the EKS cluster, run the following command from the system that already has access to the EKS cluster:
kubectl edit -n kube-system configmap/aws-auth
For more information on creating an IAM role, see Enabling IAM user and role access to your cluster.
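As a hypothetical sketch, the role mapping added through that command typically looks similar to the following excerpt of the aws-auth ConfigMap; the account ID, role name, and groups are placeholders that depend on the access you want to grant.

# Hypothetical excerpt of the aws-auth ConfigMap in the kube-system namespace
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account_id>:role/<iam_role_name>
      username: <iam_role_name>
      groups:
        - system:masters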
Log in to the AWS environment to access the Kubernetes cluster by running the following command on the AWS CLI:
aws eks --region <region_name> update-kubeconfig --name <cluster_name>
Free space of approximately 8.5 GB in the location where you copy and extract the product installation TAR package file. If using Docker locally, approximately 8 GB should be available in the /var/lib/docker location so that the images can be loaded to the Docker cache before being pushed to the container registry.
The AWS EFS CSI driver should be installed for static PV/PVC creation of the primary catalog volume.