Veritas InfoScale™ for Kubernetes Environments 8.0.300 - Linux
- Overview
- System requirements
- Preparing to install InfoScale on Containers
- Installing Veritas InfoScale on OpenShift
- Installing InfoScale on a system with Internet connectivity
- Installing InfoScale in an air gapped system
- Installing Veritas InfoScale on Kubernetes
- Configuring KMS-based Encryption on an OpenShift cluster
- Configuring KMS-based Encryption on a Kubernetes cluster
- InfoScale CSI deployment in Container environment
- Dynamic provisioning
- Snapshot provisioning (Creating volume snapshots)
- Managing InfoScale volume snapshots with Velero
- Volume cloning
- Installing and configuring InfoScale DR Manager on OpenShift
- Installing and configuring InfoScale DR Manager on Kubernetes
- Disaster Recovery scenarios
- Configuring InfoScale
- Administering InfoScale on Containers
- Migrating applications to InfoScale
- Troubleshooting
Monitoring InfoScale
With InfoScale installed in OpenShift or Kubernetes environments, monitoring tools such as Prometheus, Grafana, and AlertManager are enabled for monitoring your InfoScale installation. These are the monitoring mechanisms that OpenShift and Kubernetes provide. Prometheus collects large volumes of data at regular intervals and presents the data as 'Metrics'. You can use this data to monitor cluster components and analyze the performance of your InfoScale installation.
Note:
For a deported diskgroup, metrics are not visible.
On an OpenShift cluster, after you install InfoScale and configure InfoScale clusters, Prometheus monitoring is automatically installed.
On a Kubernetes cluster, perform the following additional steps to install Prometheus:
- Check whether helm is available on the Kubernetes cluster. If it is available, skip to the last step.
- If helm is not available, run the following commands to install it:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
- Add the Prometheus community Helm repository and install the kube-prometheus-stack chart:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install <RELEASE_NAME> prometheus-community/kube-prometheus-stack
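For example, a minimal sketch of a concrete invocation, assuming the release name kube-prometheus, chosen so that it matches the release: kube-prometheus label referenced by podmonitor.yaml later in this section; the monitoring namespace is illustrative:
# Install the kube-prometheus-stack chart with an example release name
helm install kube-prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
# Verify that the chart deployed successfully
helm status kube-prometheus -n monitoring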
On an OpenShift cluster, the following additional step is required:
- Create or edit the cluster-monitoring-config ConfigMap in the openshift-monitoring namespace as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
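For example, assuming the ConfigMap is saved as cluster-monitoring-config.yaml (an illustrative file name), it can be applied and the user workload monitoring pods checked as follows:
# Apply the ConfigMap that enables user workload monitoring
oc apply -f cluster-monitoring-config.yaml
# The user workload Prometheus pods typically come up in this namespace
oc get pods -n openshift-user-workload-monitoring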
Run the following commands to verify the installation:
- To verify whether the InfoScale pods are created:
oc/kubectl get pods -A | grep infoscale
- To verify whether the Prometheus monitoring pods are successfully installed:
On an OpenShift cluster - oc get pods -n openshift-monitoring | grep prometheus
On a Kubernetes cluster - kubectl get pods -n {namespace} | grep prometheus
Complete the following steps to enable Prometheus.
Run the following command to find the REST server name.
oc/kubectl get svc -n {Namespace where InfoScale is installed} |grep rest | awk '{print $1}'
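As an illustration, the service name can be captured in a shell variable for use in later steps; NS is a placeholder for the namespace where InfoScale is installed:
# Capture the REST service name, expected to be of the form infoscale-sds-rest-<numerical value>
NS=<Namespace where InfoScale is installed>
REST_SVC=$(kubectl get svc -n "$NS" | grep rest | awk '{print $1}')
echo "$REST_SVC"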
Note infoscale-sds-rest-<numerical value> from the command output.
Optionally, to collect metrics with TLS verification, a client certificate is required. Copy the following content into prom-secret.yaml and apply the file:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: infoscale-prom-cert
  namespace: infoscale-vtas
spec:
  commonName: infoscale-prom
  usages:
    - client auth
  duration: 2880h0m0s
  issuerRef:
    kind: ClusterIssuer
    name: infoscale-cert-issuer
  renewBefore: 720h0m0s
  secretName: infoscale-prom-tls
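For example, assuming cert-manager and the infoscale-cert-issuer ClusterIssuer referenced above are already configured on the cluster, the file can be applied and the resulting secret verified as follows (replace kubectl with oc on OpenShift):
# Apply the Certificate and confirm cert-manager issued it
kubectl apply -f prom-secret.yaml
kubectl get certificate infoscale-prom-cert -n infoscale-vtas
# The TLS secret referenced by the PodMonitor should now exist
kubectl get secret infoscale-prom-tls -n infoscale-vtas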
Copy the following content into podmonitor.yaml and apply the file. Ensure that you update serverName:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: infoscale-metrics
  namespace: infoscale-vtas
  labels:
    release: kube-prometheus
spec:
  selector:
    matchLabels:
      cvmmaster: "true"
  podMetricsEndpoints:
    - path: /infoscale/api/2.0/metrics
      port: rest-endpoint
      interval: 2m
      scrapeTimeout: 30s
      scheme: https
      tlsConfig:
        ca:
          secret:
            key: ca.crt
            name: infoscale-prom-tls
        cert:
          secret:
            key: tls.crt
            name: infoscale-prom-tls
        keySecret:
          key: tls.key
          name: infoscale-prom-tls
        serverName: infoscale-sds-rest-<Cluster ID>
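For example, after updating serverName with the value noted earlier, the PodMonitor can be applied and verified as follows (replace kubectl with oc on OpenShift):
# Apply the PodMonitor so Prometheus starts scraping the InfoScale metrics endpoint
kubectl apply -f podmonitor.yaml
kubectl get podmonitor infoscale-metrics -n infoscale-vtas
# Once Prometheus reloads its configuration, the infoscale-metrics target
# should appear under Status > Targets in the Prometheus web console.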