Veritas InfoScale™ 7.1 Release Notes - AIX
- About this document
- Important release information
- About the Veritas InfoScale product suite
- Licensing Veritas InfoScale
- About Veritas Services and Operations Readiness Tools
- Changes introduced in 7.1
- Changes related to Veritas Cluster Server
- Changes in the Veritas Cluster Server Engine
- Changes related to installation and upgrades
- Changes related to Veritas Volume Manager
- Changes related to Veritas File System
- Changes related to Dynamic Multi-Pathing
- Changes related to Replication
- Changes related to Operating System
- Not supported in this release
- System requirements
- Fixed Issues
- Known Issues
- Issues related to installation and upgrade
- Storage Foundation known issues
- Dynamic Multi-Pathing known issues
- Veritas Volume Manager known issues
- Veritas File System known issues
- Replication known issues
- Cluster Server known issues
- Operational issues for VCS
- Issues related to the VCS engine
- Issues related to the bundled agents
- Issues related to the VCS database agents
- Issues related to the agent framework
- Cluster Server agents for Volume Replicator known issues
- Issues related to Intelligent Monitoring Framework (IMF)
- Issues related to global clusters
- Issues related to the Cluster Manager (Java Console)
- VCS Cluster Configuration wizard issues
- LLT known issues
- I/O fencing known issues
- GAB known issues
- Storage Foundation and High Availability known issues
- Storage Foundation Cluster File System High Availability known issues
- Storage Foundation for Oracle RAC known issues
- Oracle RAC known issues
- Storage Foundation Oracle RAC issues
- Storage Foundation for Databases (SFDB) tools known issues
- Software Limitations
- Storage Foundation software limitations
- Dynamic Multi-Pathing software limitations
- Veritas Volume Manager software limitations
- Veritas File System software limitations
- SmartIO software limitations
- Replication software limitations
- Cluster Server software limitations
- Limitations related to bundled agents
- Limitations related to VCS engine
- Veritas cluster configuration wizard limitations
- Limitations related to the VCS database agents
- Virtualizing shared storage using VIO servers and client partitions
- Cluster Manager (Java console) limitations
- Limitations related to I/O fencing
- Storage Foundation Cluster File System High Availability software limitations
- Storage Foundation for Oracle RAC software limitations
- Storage Foundation for Databases (SFDB) tools software limitations
- Documentation
When the disk detach policy is local and connectivity of a DMP node on which a plex resides is restored, reads continue to be served from only (n - 1) plexes, where n is the total number of plexes in the volume (3871850)
This problem is caused by the following two issues:
The object-level connectivity information is not updated after the underlying DMP node's connectivity is restored.
The plex preference criteria for the read policy are not recomputed. The criteria are normally recomputed on every transaction, but because the detach policy is local (LDP) and connectivity is lost from only a single node, no plex is detached, so no transaction occurs to trigger the recomputation.
Workaround:
To resolve this issue, perform the following steps; a worked example follows the steps.
- Update the object-level connectivity information:
# /etc/vx/diag.d/vxcheckconn -g <diskgroup> -G -o <plex1_name>
# /etc/vx/diag.d/vxcheckconn -g <diskgroup> -G -o <plex2_name>
# /etc/vx/diag.d/vxcheckconn -g <diskgroup> -G -o <volume>
- Perform a dummy transaction so that the plex preference criteria for the read policy are recomputed, for example by growing the volume:
# /usr/sbin/vxassist -g <diskgroup> growby <volume> <size>
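For illustration only, assuming a hypothetical disk group testdg that contains a two-plex volume vol1 with plexes vol1-01 and vol1-02, the complete workaround would look like this:
# /etc/vx/diag.d/vxcheckconn -g testdg -G -o vol1-01
# /etc/vx/diag.d/vxcheckconn -g testdg -G -o vol1-02
# /etc/vx/diag.d/vxcheckconn -g testdg -G -o vol1
# /usr/sbin/vxassist -g testdg growby vol1 1m
The growby size is arbitrary; any small increment is enough to trigger a transaction.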
Note:
If all the plexes have the same media type and layout, you can explicitly set the read policy to round as follows. This ensures that reads are served from all the plexes in a round-robin fashion. Type:
# vxvol -g <diskgroup> rdpol round <volume>
After a transaction occurs, you can revert the read policy to select, which is the default read policy. Type:
# vxvol -g <diskgroup> rdpol select <volume>
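For example, using the hypothetical testdg and vol1 names from the sketch above:
# vxvol -g testdg rdpol round vol1
# vxvol -g testdg rdpol select vol1
If you want to confirm which read policy is active, the volume record printed by vxprint includes it; for example (output field names may vary by release):
# vxprint -g testdg -l vol1 | grep -i read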