InfoScale™ 9.0 Virtualization Guide - Linux
- Section I. Overview of InfoScale solutions used in Linux virtualization
- Overview of supported products and technologies
- About InfoScale support for Linux virtualization environments
- About KVM technology
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- InfoScale solutions configuration options for the kernel-based virtual machines environment
- Installing and configuring VCS in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing InfoScale in an OpenStack environment
- Section IV. Implementing Linux virtualization use cases
- Application visibility and device discovery
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- Virtual machine availability
- Virtual machine availability for live migration
- Virtual to virtual clustering in a Hyper-V environment
- Virtual to virtual clustering in an OVM environment
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Section V. Reference
- Appendix A. Troubleshooting
- Appendix B. Sample configurations
- Appendix C. Where to find more information
About InfoScale deployments in OpenShift Virtualization environments
InfoScale supports deployment in Kernel-based Virtual Machine (KVM) environments, and this KVM support also forms the basis of its support for OpenShift Virtualization.
iSCSI requirement: When implementing InfoScale solutions in OpenShift Virtualization environments, iSCSI is the only supported storage protocol for accessing external storage.
Static IP mandate: For iSCSI connections to function reliably, the VMs must be configured with static IP addresses. Using dynamic IP addresses leads to connection disruptions if the addresses change during an operation, potentially causing data corruption or service outages.
The InfoScale Cluster Server (VCS) component relies on Low Latency Transport (LLT) for inter-node heartbeating and for communication within the cluster.
LLT network requirement: The stability of the VCS cluster is critically dependent on reliable, low-latency network links between the cluster nodes (VMs).
Static IP mandate: Network interfaces within the VMs dedicated to LLT traffic must be configured with static IP addresses. Using dynamic IPs for LLT links is not supported because it compromises cluster integrity.
Only jumbo frames are supported for LLT communications when InfoScale is deployed in OpenShift Virtualization environments.
When InfoScale runs within OpenShift Virtualization VMs, the following special considerations apply:
OVS overhead: Within OpenShift Virtualization VMs, approximately 10 bytes per packet are consumed by the Open vSwitch (OVS) infrastructure. Thus, when the underlying network is configured with the standard 1500-byte MTU, the effective MTU inside the VMs is reduced to 1490 bytes.
MTU considerations: Due to the OVS overhead, enabling jumbo frames on the underlying physical network is essential for optimal LLT performance within VMs.
Jumbo frame configuration: If you implement jumbo frames, you must enable them at each of the following levels (an illustrative configuration sketch follows this list):
Physical switch infrastructure
Node network interfaces
OVS bridges
VM network interfaces
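The node network interface and OVS bridge levels are typically handled through a Node Network Configuration Policy (NNCP), which is described below. The following sketch is illustrative only: the policy name, node selector, interface name (ens9), bridge name (br-llt1), and MTU value are hypothetical placeholders, and the exact nmstate schema should be verified against the OpenShift Virtualization version in use.

```yaml
# Illustrative NNCP; all names, the node selector, and the MTU value are placeholders.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: llt-jumbo-policy
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      # Node network interface level: enable jumbo frames on the dedicated uplink NIC.
      - name: ens9
        type: ethernet
        state: up
        mtu: 9000
      # OVS bridge level: attach the uplink to a bridge reserved for LLT traffic.
      - name: br-llt1
        type: ovs-bridge
        state: up
        bridge:
          options:
            stp: false
          port:
            - name: ens9
```

Where a node-level static IP assignment is required, it can be added to the same policy as an ipv4 address block on the relevant interface.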
OpenShift Virtualization uses the following mechanisms to provide additional network interfaces to VMs and to facilitate static IP configurations:
Node Network Configuration Policy (NNCP): For both iSCSI and LLT networks, NNCP must be used to configure the underlying node network interfaces with appropriate settings, including static IP assignments at the node level.
Network Attachment Definition (NAD): Secondary network interfaces for VMs, such as those for dedicated iSCSI or LLT traffic, must be provisioned by creating NADs that reference the configurations established by NNCP. An illustrative NAD sketch follows this list.
VM static IP configuration: After the NADs are attached to VM definitions, the static IP addresses must be configured within the VMs, as illustrated in the fragment that follows this list. For both iSCSI connections and LLT communications, these IPs must remain fixed throughout the VM lifecycle to maintain storage connectivity and cluster integrity.
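The following Network Attachment Definition sketch makes an OVS bridge, such as the one created by the NNCP sketch earlier in this section, available as a secondary network for VMs. It assumes the OVS CNI plugin available with OpenShift Virtualization; the NAD name, namespace, and bridge name are hypothetical, and the CNI configuration keys should be verified against the plugin version in your cluster.

```yaml
# Illustrative NAD; name, namespace, and bridge are placeholders.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: llt-net1
  namespace: infoscale-vms
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "llt-net1",
      "type": "ovs",
      "bridge": "br-llt1"
    }
```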
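The fragment below illustrates one way such a NAD might be attached to a VM definition and a static IP assigned inside the guest through cloud-init network data. It is not a complete manifest, it assumes cloud-init is available in the guest image, and all names and addresses are placeholders; static addresses can equally be configured with the guest operating system's own network tools, provided they remain fixed.

```yaml
# Illustrative fragment of a VirtualMachine spec (not a complete manifest).
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: llt-net1
              bridge: {}                  # bridge binding for the secondary LLT interface
      networks:
        - name: llt-net1
          multus:
            networkName: infoscale-vms/llt-net1
      volumes:
        - name: cloudinitdisk             # must also be listed as a disk under domain.devices.disks
          cloudInitNoCloud:
            networkData: |
              version: 2
              ethernets:
                eth1:                     # LLT interface inside the guest
                  addresses:
                    - 192.168.20.11/24    # static address; must not change during the VM lifecycle
                  mtu: 9000
```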
When implementing this configuration, remember that both iSCSI and LLT networks require careful planning to ensure that IP addresses remain consistent. Any changes to these addresses can disrupt storage access or cluster communications, potentially leading to data unavailability or cluster split-brain scenarios.