Veritas InfoScale 7.3.1 Release Notes - Windows
- Release notes for Veritas InfoScale
- Limitations
- Deployment limitations
- Cluster management limitations
- Storage management limitations
- Multi-pathing limitations
- Replication limitations
- Solution configuration limitations
- Internationalization and localization limitations
- Interoperability limitations
- Known issues
- Deployment issues
- Cluster management issues
- Cluster Server (VCS) issues
- Cluster Manager (Java Console) issues
- Global service group issues
- VMware virtual environment-related issues
- Storage management issues
- Storage Foundation
- VEA console issues
- Snapshot and restore issues
- Snapshot scheduling issues
- Multi-pathing issues
- Replication issues
- Solution configuration issues
- Disaster recovery (DR) configuration issues
- Fire drill (FD) configuration issues
- Quick recovery (QR) configuration issues
- Internationalization and localization issues
- Interoperability issues
- Miscellaneous issues
- Fibre Channel adapter issues
Deployment issues
Snapback operation initiated from a Slave node is always reported as successful on that node, even while the operation is in progress or when resynchronization fails on the Master
In CVM, all operations are performed on the Master node; operations initiated on a Slave node are shipped to the Master and executed there. When you run a long-running task (for example, a snapback) from a Slave node, the command on the Slave reports success as soon as the task is submitted to the Master. Therefore, monitor the progress of such tasks on the Master, because any failure during task execution is not reported back to the Slave.
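The following minimal Python sketch is purely illustrative; the Master class, submit, and status names are hypothetical and do not correspond to any Veritas CVM interface. It models the submit-and-forget pattern described above: the Slave treats submission as success, while the real outcome is recorded only on the Master and must be polled there.

import threading
import time

# Purely illustrative sketch; names and classes here are hypothetical and do not
# represent any Veritas CVM interface. It models the submit-and-forget pattern:
# the Slave treats submission as success, while the actual outcome is tracked
# only on the Master.

class Master:
    """Runs shipped tasks asynchronously and records their actual outcome."""

    def __init__(self):
        self._status = {}            # task_id -> "running" | "done" | "failed"
        self._lock = threading.Lock()
        self._next_id = 0

    def submit(self, work):
        """Accept a shipped task, start it in the background, and return at once."""
        with self._lock:
            task_id = self._next_id
            self._next_id += 1
            self._status[task_id] = "running"
        threading.Thread(target=self._run, args=(task_id, work)).start()
        return task_id

    def _run(self, task_id, work):
        try:
            work()                   # e.g. a long resynchronization
            outcome = "done"
        except Exception:
            outcome = "failed"       # a failure is visible only on the Master
        with self._lock:
            self._status[task_id] = outcome

    def status(self, task_id):
        with self._lock:
            return self._status[task_id]


def snapback_from_slave(master, work):
    """From the Slave's point of view, submission alone is reported as success."""
    task_id = master.submit(work)
    print("Slave: snapback successful (task %d submitted to Master)" % task_id)
    return task_id


if __name__ == "__main__":
    master = Master()

    def failing_resync():            # hypothetical long task that fails on the Master
        time.sleep(1)
        raise RuntimeError("resynchronization failed")

    tid = snapback_from_slave(master, failing_resync)
    time.sleep(2)
    # Only polling the Master reveals the real outcome ("failed" here).
    print("Master: task %d status is %s" % (tid, master.status(tid)))

In this sketch, the Slave prints a success message before the work has finished; only a later status query on the Master reveals that the resynchronization failed, which mirrors why such tasks must be monitored on the Master.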
This issue occurs when you perform a snapback operation on a cluster-shared volume from a Slave node. While the operation is still in progress on the Master, the Slave reports it as successful, even if the resynchronization operation fails. (3283523)
Workaround: There is no workaround for this issue.