Veritas InfoScale™ 8.0 Virtualization Guide - Linux
- Section I. Overview of Veritas InfoScale Solutions used in Linux virtualization
- Overview of supported products and technologies
- About Veritas InfoScale Solutions support for Linux virtualization environments
- About Kernel-based Virtual Machine (KVM) technology
- About the RHEV environment
- Section II. Implementing a basic KVM environment
- Getting started with basic KVM
- Veritas InfoScale Solutions configuration options for the kernel-based virtual machines environment
- Installing and configuring Cluster Server in a kernel-based virtual machine (KVM) environment
- Configuring KVM resources
- Section III. Implementing Linux virtualization use cases
- Application visibility and device discovery
- Server consolidation
- Physical to virtual migration
- Simplified management
- Application availability using Cluster Server
- Virtual machine availability
- Virtual machine availability for live migration
- Virtual to virtual clustering in a Red Hat Enterprise Virtualization environment
- Virtual to virtual clustering in a Microsoft Hyper-V environment
- Virtual to virtual clustering in an Oracle Virtual Machine (OVM) environment
- Disaster recovery for virtual machines in the Red Hat Enterprise Virtualization environment
- Disaster recovery of volumes and file systems using Volume Replicator (VVR) and Veritas File Replicator (VFR)
- Multi-tier business service support
- Managing Docker containers with InfoScale Enterprise
- About the Cluster Server agents for Docker, Docker Daemon, and Docker Container
- Managing storage capacity for Docker containers
- Offline migration of Docker containers
- Disaster recovery of volumes and file systems in Docker environments
- Section IV. Reference
- Appendix A. Troubleshooting
- Appendix B. Sample configurations
- Appendix C. Where to find more information
Configuring Veritas volume plugin with Docker 1.12 Swarm mode
The Veritas volume plugin works seamlessly with Docker Swarm, which provides container orchestration. The following procedure uses a MySQL service on a two-node swarm cluster to show that a container and its data remain available when the service moves to another node.
To configure the Veritas volume plugin with Docker 1.12 Swarm mode
- Consider a Docker Swarm cluster of two nodes: docker1 and docker2.
# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
bd3ccjzm4qmo1ntil88r9q0la *  docker1   Ready   Active        Leader
d3rbrj0d4goyfckae0wozwwew    docker2   Ready   Active
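For reference, a two-node swarm like this can be formed with the standard Docker 1.12 commands shown below. This is a sketch; the advertise address and the worker join token (printed by the init command) are placeholders specific to your environment.

[root@docker1]# docker swarm init --advertise-addr <docker1-IP>
[root@docker2]# docker swarm join --token <worker-join-token> <docker1-IP>:2377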
- Create a Veritas volume.
# docker volume create -d veritas --name volume1 -o size=500m
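Optionally, confirm that the volume was created with the Veritas driver before using it. docker volume ls and docker volume inspect are standard Docker commands; volume1 should appear in the list with the veritas driver.

# docker volume ls
# docker volume inspect volume1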
- Create a MySQL service from the swarm manager by providing the source volume name.
Use volume1, the volume that was created using the Veritas driver, as the source volume name.
# docker service create --replicas 1 --name sql1 --mount type=volume,source=volume1,target=/var/lib/mysql,readonly=false -e MYSQL_ROOT_PASSWORD=root123 mysql
# docker service ps sql1
ID                         NAME    IMAGE  NODE     DESIRED STATE  CURRENT STATE           ERROR
6e2dlvx27iwrgrwdcdf43u4d9  sql1.1  mysql  docker1  Running        Running 44 seconds ago
The mysql service is scheduled on node docker1.
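To review the service definition, including the volume mount, you can optionally inspect the service; docker service inspect with the --pretty flag is a standard Docker 1.12 command.

# docker service inspect --pretty sql1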
- Write some persistent data to the MySQL database on node docker1.
# docker ps -a
CONTAINER ID  IMAGE         COMMAND                  CREATED       STATUS  PORTS     NAMES
d844dfa66f65  mysql:latest  "docker-entrypoint.sh"   A minute ago  Up      3306/tcp  sql1.1.6e2dlvx27iwrgrwdcdf43u4d9

[root@docker1]# docker exec -it d844dfa66f65 bash
root@d844dfa66f65:/# mysql -proot123

mysql> create database swarm_test;
Query OK, 1 row affected (0.02 sec)

mysql> use swarm_test;
Database changed

mysql> create table people (name text, age integer);
Query OK, 0 rows affected (0.04 sec)

mysql> insert into people values ('Person1', 29);
Query OK, 1 row affected (0.00 sec)

mysql> insert into people values ('Person2', 31);
Query OK, 1 row affected (0.01 sec)

mysql> select * from people;
+---------+------+
| name    | age  |
+---------+------+
| Person1 |   29 |
| Person2 |   31 |
+---------+------+
2 rows in set (0.00 sec)
- Simulate a node failure on docker1 by draining the node. Docker Swarm re-schedules the MySQL service on another node.
[root@docker1]# docker node update --availability drain docker1
[root@docker1]# docker service ps sql1

ID                         NAME       IMAGE  NODE     DESIRED STATE  CURRENT STATE                ERROR
8rofbg2td0i7oubzyxpv0kvik  sql1.1     mysql  docker2  Running        Running 47 seconds ago
6e2dlvx27iwrgrwdcdf43u4d9  \_ sql1.1  mysql  docker1  Shutdown       Shutdown about a minute ago
The MySQL service gets re-scheduled on node docker2 by Docker Swarm.
- On node docker2, verify that the container for the MySQL service has been created, and verify the data in the database.
[root@docker2]# docker ps -a

CONTAINER ID  IMAGE         COMMAND                  CREATED             STATUS  PORTS     NAMES
9fafb70c793b  mysql:latest  "docker-entrypoint.sh"   About a minute ago  Up      3306/tcp  sql1.1.8rofbg2td0i7oubzyxpv0kvik

[root@docker2]# docker exec -it 9fafb70c793b bash
root@9fafb70c793b:/# mysql -proot123

mysql> use swarm_test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from people;
+---------+------+
| name    | age  |
+---------+------+
| Person1 |   29 |
| Person2 |   31 |
+---------+------+
2 rows in set (0.00 sec)
In this procedure, the container migration succeeds because InfoScale storage makes the data volume available on the other node.
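When you finish testing, you can optionally return docker1 to the pool of schedulable nodes and remove the test service and volume. These are standard Docker commands, shown here as a cleanup sketch; note that the volume can be removed only after the service that uses it has been removed.

[root@docker1]# docker node update --availability active docker1
[root@docker1]# docker service rm sql1
[root@docker1]# docker volume rm volume1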