Veritas InfoScale™ for Kubernetes Environments 8.0.300 - Linux
Configuring Alerts for monitoring InfoScale
You can configure various alerts for your InfoScale cluster. To configure an alert, you update and apply alertmanager.yaml. The following table lists the alerts that you can configure.
Table: Alert expressions with the messages

| Alert with its expression | Message |
|---|---|
| Volume/PVC size threshold | Volume size exceeds <threshold value> %. |
| Diskgroup-related alert | The diskgroup has <value> free bytes. |
| NodeNotReady | Node is not ready. |
| DiskFailure | Disk failed. |
| DR-related alerts | Replication failed. |
You enter the expressions for these alerts, together with their messages, in the alert manager configuration YAML file.
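For example, a rule entry that implements the Volume/PVC size threshold row of the table might pair the message with a threshold expression such as the following sketch. The metric name infoscale_volume_used_percent and the 80 % threshold are illustrative assumptions only; use the InfoScale metrics that are actually exposed in your cluster (you can browse them under Observe > Metrics).

```yaml
# Sketch only: the metric name below is a placeholder, not a documented InfoScale metric.
- alert: VolumePVCSizeThreshold
  annotations:
    message: 'Volume size exceeds 80 %.'
  expr: |
    infoscale_volume_used_percent > 80
  for: 5m
  labels:
    severity: critical
```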
Configuring alert manager on an OpenShift cluster
- Be ready with the expression and its message for the alert you want to configure. See Table: Alert expressions with the messages.
- Copy the following and save the file as alertmanager.yaml.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: dr-alert
  namespace: <Namespace you want to monitor>
spec:
  groups:
  - name: dralert
    rules:
    - alert: <Name of the Alert>
      annotations:
        message: '<Alert Message>'
      expr: |
        <Alert Expression>
      for: 5m
      labels:
        severity: critical
        prometheus: openshift-monitoring/k8s
```
Note:
Ensure that you update the alert message and expression.
- Run the following command on the bastion node.
oc apply -f alertmanager.yaml
- You can update and apply alertmanager.yaml for other alerts, as shown in the sketch after this procedure.
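After you apply the file, you can confirm that the rule object was created; the name dr-alert and the namespace come from the metadata section of the template above.

oc get prometheusrule dr-alert -n <Namespace you want to monitor>

Because rules is a list in the PrometheusRule specification, one alertmanager.yaml can also carry several alerts from the table at once. The following is only a sketch with placeholder values, not a complete template.

```yaml
spec:
  groups:
  - name: dralert
    rules:
    - alert: <Name of the first Alert>
      annotations:
        message: '<Alert Message>'
      expr: |
        <Alert Expression>
      for: 5m
      labels:
        severity: critical
    - alert: <Name of the second Alert>
      annotations:
        message: '<Alert Message>'
      expr: |
        <Alert Expression>
      for: 5m
      labels:
        severity: critical
```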
You can use the OpenShift monitoring features to configure alerts. See the following OpenShift help topics - Configuring the monitoring stack, Enabling monitoring for user-defined projects, and Enabling alert routing for user-defined projects.
Using the OpenShift console to configure alerts
- Access the OpenShift console. Select Operators > Installed Operators in the left frame.
- Click InfoScale Cluster. Ensure that the cluster is in a 'Running' state.
- Select Observe > Targets in the left frame.
- Enter InfoScale as the search string. The InfoScale deployment is displayed.
- Select Observe > Metrics in the left frame.
- In the screen that opens, you can add a new query or run saved queries.
- Click Add query to configure a new query.
- Assign a name to the query and select the Container.
- You can then select diskgroup, job, namespace, or pods. Click the parameter and select the value for which you want to configure an alert.
- Save the query. The saved query now gets listed. You can select a query in the main frame and click Run queries.
- You can also refer to Table: Alert expressions with the messages and enter a query to search; see the example query after these steps.
After you save the alert, the alert is listed in Observe > Alerts.
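For reference, a query like the ones described in these steps typically combines a metric name with label matchers for the parameters you selected (diskgroup, job, namespace, or pods). The metric name below is an illustrative assumption; pick the names that the Metrics view actually lists for your deployment.

infoscale_diskgroup_free_bytes{namespace="infoscale-vtas", diskgroup="<diskgroup name>"}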
Using the OpenShift console to silence or stop configured alerts
- Select Observe > Alerts in the left frame.
- Select the Alert you want to silence.
- Click the Actions menu (three vertical dots) at the end of the page.
Alert details are listed.
- Click Silence to stop the alert.
Configuring alert manager on a Kubernetes cluster
- Be ready with the expression and its message for the alert you want to configure. See Table: Alert expressions with the messages.
- Copy the following and save the file as alertmanager.yaml.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  annotations:
    meta.helm.sh/release-name: prometheus
    meta.helm.sh/release-namespace: default
  generation: 1
  labels:
    app: kube-prometheus-stack
    app.kubernetes.io/instance: prometheus
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: kube-prometheus-stack
    app.kubernetes.io/version: 45.31.1
    chart: kube-prometheus-stack-45.31.1
    heritage: Helm
    release: prometheus
  name: prometheus-kube-prometheus-alertmanager-infoscale1.rules
  namespace: infoscale-vtas
spec:
  groups:
  - name: dralert.rules
    rules:
    - alert: <Alert>
      annotations:
        description: <Message>
        summary: <Summary>
      expr: |-
        <Expression>
      labels:
        severity: critical
```
Note:
Ensure that you update the alert message and expression.
- Run the following command on the master node.
kubectl apply -f alertmanager.yaml
- You can update and apply alertmanager.yaml for other alerts.
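After the file is applied on the master node, you can verify that the rule was created, assuming you kept the name and namespace from the template above.

kubectl get prometheusrule -n infoscale-vtas

kubectl describe prometheusrule prometheus-kube-prometheus-alertmanager-infoscale1.rules -n infoscale-vtas

The release and chart labels in the template indicate a kube-prometheus-stack deployment, so once Prometheus reloads its configuration the new rule should also appear under Status > Rules in the Prometheus web UI.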