NetBackup™ Snapshot Manager Install and Upgrade Guide
- Introduction
- Section I. NetBackup Snapshot Manager installation and configuration
- Preparing for NetBackup Snapshot Manager installation
- Deploying NetBackup Snapshot Manager using container images
- Deploying NetBackup Snapshot Manager extensions
- Installing the NetBackup Snapshot Manager extension on a VM
- Installing the NetBackup Snapshot Manager extension on a managed Kubernetes cluster (AKS) in Azure
- Installing the NetBackup Snapshot Manager extension on a managed Kubernetes cluster (EKS) in AWS
- Installing the NetBackup Snapshot Manager extension on a managed Kubernetes cluster (GKE) in GCP
- NetBackup Snapshot Manager cloud providers
- Configuration for protecting assets on cloud hosts/VM
- Protecting assets with NetBackup Snapshot Manager's on-host agent feature
- Installing and configuring NetBackup Snapshot Manager agent
- Configuring the NetBackup Snapshot Manager application plug-in
- Microsoft SQL plug-in
- Oracle plug-in
- Protecting assets with NetBackup Snapshot Manager's agentless feature
- NetBackup Snapshot Manager assets protection
- Volume Encryption in NetBackup Snapshot Manager
- NetBackup Snapshot Manager security
- Section II. NetBackup Snapshot Manager maintenance
- NetBackup Snapshot Manager logging
- Upgrading NetBackup Snapshot Manager
- Migrating and upgrading NetBackup Snapshot Manager
- Post-upgrade tasks
- Uninstalling NetBackup Snapshot Manager
- Troubleshooting NetBackup Snapshot Manager
Prerequisites to install the extension on a managed Kubernetes cluster in GCP
The NetBackup Snapshot Manager cloud-based extension can be deployed on a managed Kubernetes cluster in GCP for scaling the capacity of the NetBackup Snapshot Manager host to service a large number of requests concurrently.
The GCP managed Kubernetes cluster must be already deployed with appropriate network and configuration settings. The cluster must be able to communicate with NetBackup Snapshot Manager and the filestore.
Note:
The NetBackup Snapshot Manager and all the cluster nodepools must be in the same zone.
For more information, see Google Kubernetes Engine overview.
Use an existing container registry or create a new one, and ensure that the managed Kubernetes cluster has access to pull images from the container registry.
A dedicated nodepool for NetBackup Snapshot Manager workloads must be created, with or without autoscaling enabled, in the GKE cluster. The autoscaling feature allows the nodepool to scale dynamically by automatically provisioning and de-provisioning nodes as required.
NetBackup Snapshot Manager extension images (flexsnap-core, flexsnap-datamover, flexsnap-deploy, flexsnap-fluentd) must be uploaded to the container registry.
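For reference, the following is a minimal sketch of creating a dedicated nodepool with autoscaling enabled through the gcloud CLI; the nodepool name, machine type, and node counts are placeholders and must be adjusted to your sizing requirements:
gcloud container node-pools create <nodepool-name> \
    --cluster <cluster-name> --zone <zone-name> \
    --machine-type <machine-type> --num-nodes 1 \
    --enable-autoscaling --min-nodes 1 --max-nodes 3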
Prepare the host and the managed Kubernetes cluster in GCP
Select a NetBackup Snapshot Manager image supported on an Ubuntu or RHEL system that meets the NetBackup Snapshot Manager installation requirements, and create a host.
See Creating an instance or preparing the host to install NetBackup Snapshot Manager.
Verify that port 5671 is open on the main NetBackup Snapshot Manager host.
See Verifying that specific ports are open on the instance or physical host.
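As a quick connectivity check (assuming the nc utility is available on the host from which you test), you can verify that the port is reachable, where <snapshot-manager-host> is the NetBackup Snapshot Manager host name or IP address:
nc -zv <snapshot-manager-host> 5671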
Install the Docker or Podman container platform on the host and start the container service.
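For example, a minimal sketch assuming an Ubuntu host using the distribution's Docker package, or a RHEL host using Podman (package names and repositories may differ in your environment):
On Ubuntu (Docker):
sudo apt-get install -y docker.io
sudo systemctl enable --now docker
On RHEL (Podman):
sudo dnf install -y podman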
Prepare the NetBackup Snapshot Manager host to access Kubernetes cluster within your GCP environment.
Install gcloud CLI. For more information, see Install the gcloud CLI.
Install Kubernetes CLI.
For more information, refer to Install kubectl and configure cluster access.
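If the gcloud CLI was installed from the downloaded archive, one way to obtain kubectl (a sketch, not the only supported method; package-manager installations of the gcloud CLI require installing kubectl through the package manager instead) is through gcloud components:
gcloud components install kubectl
kubectl version --client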
Create a GCR container registry, or use an existing one if available, to which the NetBackup Snapshot Manager images will be uploaded (pushed).
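As a generic illustration of pushing one of the extension images with Docker (the project name and image tag below are placeholders; repeat the tag and push steps for each flexsnap image):
gcloud auth configure-docker
docker tag flexsnap-core:<tag> gcr.io/<project-name>/flexsnap-core:<tag>
docker push gcr.io/<project-name>/flexsnap-core:<tag>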
Run gcloud init to set the account. Ensure that this account has the required permissions to configure the Kubernetes cluster.
For more information on the required permissions, see Installing the NetBackup Snapshot Manager extension on a managed Kubernetes cluster (GKE) in GCP. For more information on the gcloud command, refer to the gcloud CLI documentation.
Connect to the cluster using the following command:
gcloud container clusters get-credentials <cluster-name> --zone <zone-name> --project <project-name>
For more information, refer to Install kubectl and configure cluster access.
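To confirm that cluster access is configured correctly, you can run, for example:
kubectl config current-context
kubectl get nodes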
Create a namespace for NetBackup Snapshot Manager from the command line on the host system:
# kubectl create namespace <namespace-name>
# kubectl config set-context --current --namespace=<namespace-name>
Note:
You can provide any namespace name, for example, cloudpoint-system.
Create a persistent volume
Reuse existing filestore.
Mount the filestore and create a directory (for example, dir_for_this_cp) only to be used by NetBackup Snapshot Manager.
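For example, a minimal sketch assuming an NFS-based filestore and that the NFS client utilities are installed on the host; the mount point and file share name are placeholders:
sudo mkdir -p /mnt/filestore
sudo mount -t nfs <ip of the filestore>:/<file-share-name> /mnt/filestore
sudo mkdir /mnt/filestore/dir_for_this_cp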
Create a file (for example, PV_file.yaml) with the following content:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: <name of the pv>
spec:
  capacity:
    storage: <size in GB>
  accessModes:
    - ReadWriteMany
  nfs:
    path: <path to the dir created above>
    server: <ip of the filestore>
Run the following command to set up the Persistent Volume:
kubectl apply -f <PV_file.yaml>
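You can then confirm that the Persistent Volume was created; its status typically shows as Available until it is bound to a claim:
kubectl get pv <name of the pv>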
For more information about using Filestore with a Kubernetes cluster, refer to Accessing file shares from Google Kubernetes Engine clusters.