InfoScale™ 9.0 Release Notes - Windows

Last Published:
Product(s): InfoScale & Storage Foundation (9.0)
Platform: Windows
  1. Introduction and product requirements
    1. About this document
    2. Requirements
    3. Documentation errata
  2. Changes introduced in this release
    1. Windows Server 2025 support
    2. Upgraded OpenSSL and TLS versions for enhanced security
    3. Application monitoring on single-node clusters in VMware environments
    4. UEFI Secure Boot support
    5. Secure file system (SecureFS) support
    6. Collection and display of real-time and historical statistics using Arctera Enterprise Administrator is no longer supported
    7. Veritas High Availability Configuration wizard is no longer available
    8. Online volume encryption at rest
    9. Ability to attach regional disks in read-only mode in GCP environments
  3. Limitations
    1. Deployment limitations
      1. Log on to remote nodes before installation
      2. Silent installation does not support updating license keys after install
      3. UUID files are always installed to the default installation path
      4. SnapDrive service fails to start after InfoScale Availability or InfoScale Enterprise is uninstalled
    2. Cluster management limitations
      1. EBS Multi-Attach support in AWS cloud and InfoScale service group configuration using wizards
      2. Shared disk support in Azure cloud and InfoScale service group configuration using wizards
      3. Undocumented commands and command options
      4. Unable to restore user database using SnapManager for SQL
      5. MountV agent does not work when Volume Shadow Copy service is enabled
      6. WAN cards are not supported
      7. System names must not include periods
      8. Incorrect updates to path and name of types.cf with spaces
      9. VCW does not support configuring broadcasting for UDP
      10. Undefined behavior when using VCS wizards for modifying incorrectly configured service groups
      11. Service group dependency limitation - no failover for some instances of parent group
      12. Unable to monitor resources if Switch Independent NIC teaming mode is used
      13. Windows Safe Mode boot options not supported
      14. MountV agent does not detect file system change or corruption
      15. MirrorView agent resource faults when agent is killed
      16. Security issue when using Java GUI and default cluster admin credentials
      17. VCS Authentication Service does not support node renaming
      18. An MSMQ resource fails to come online after the virtual server name is changed
      19. Configuration wizards do not allow modifying IPv4 address to IPv6
      20. All servers in a cluster must run the same operating system
      21. Running Java Console on a non-cluster system is recommended
      22. Cluster Manager console does not update GlobalCounter
      23. Cluster address for global cluster requires resolved virtual IP
    3. Storage management limitations
      1. SFW does not support disks with unrecognized OEM partitions (3146848)
      2. Only one disk gets removed from an MSFT compatible disk group even if multiple disks are selected to be removed
      3. Cannot create MSFT compatible disk group if the host name has multibyte characters
      4. Fault detection is slower in case of Multipath I/O over Fibre Channel
      5. FlashSnap solution for EV does not support basic disks
      6. Incorrect mapping of snapshot and source LUNs causes VxSVC to stop working
      7. SFW does not support operations on disks with sector size greater than 512 bytes; VEA GUI displays incorrect size
      8. Database or log files must not be on same volume as SQL Server
      9. Operations in SFW may not be reflected in DISKPART
      10. Disk signatures of system and its mirror may switch after ASR recovery
      11. SFW does not support growing a LUN beyond 2 TB
      12. SCSI reservation conflict occurs when setting up cluster disk groups
      13. Snapshot operation fails when the Arctera VSS Provider is restarted while the Volume Shadow Copy service is running and the VSS providers are already loaded
      14. When a node is added to a cluster, existing snapshot schedules are not replicated to the new node
      15. Restore from Copy On Write (COW) snapshot of MSCS clustered shared volumes fails
      16. Dynamic Disk Groups are not imported after system reboot in a Hyper-V environment
      17. Storage Agent cannot reconnect to VDS service when restarting Storage Agent
      18. SFW does not support transportable snapshots on Windows Server
      19. vxsnapsql restores all SQL Server databases mounted on the same volume
      20. Windows Disk Management console does not display basic disk converted from SFW dynamic disk
      21. DCM or DRL log on thin provisioned disk causes all disks for volume to be treated as thin provisioned disks
      22. After import/deport operations on SFW dynamic disk group, DISKPART command or Microsoft Disk Management console do not display all volumes
      23. Restored Enterprise Vault components may appear inconsistent with other Enterprise Vault components
      24. Enterprise Vault restore operation may fail for some components
      25. Shrink volume operation may increase provisioned size of volume
      26. Reclaim operations on a volume residing on a Hitachi array may not give optimal results
      27. Storage migration of Hyper-V VM on cluster-shared volume resource is not supported from a Slave node
      28. In a CVM environment, disconnecting and reconnecting hard disks may display an error
      29. Limitations of SFW support for DMP
    4. Multi-pathing limitations
      1. DSM ownership of LUNs
    5. Replication limitations
      1. Resize Volume and Autogrow not supported in Synchronous mode
      2. Expand volume not supported if RVG is in DCM logging mode
      3. Fast failover is not supported if the RLINK is in hard synchronous mode
    6. Solution configuration limitations
      1. Virtual fire drill not supported in Windows environments
      2. Solutions wizard support in a 64-bit VMware environment
      3. Solutions wizards fail to load unless the user has administrative privileges on the system
      4. Discovery of SFW disk group and volume information sometimes fails when running Solutions wizards
      5. DR Wizard does not create or validate service group resources if a service group with the same name already exists on the secondary site
      6. Quick Recovery wizard displays only one XML file path for all databases, even if different file paths have been configured earlier
      7. Enterprise Vault Task Controller and Storage services fail to start after running the Enterprise Vault Configuration Wizard if the MSMQ service fails to start
      8. Wizard fails to discover SQL Server databases that contain certain characters
    7. Internationalization and localization limitations
      1. Systems in a cluster must have same system locale setting
    8. Interoperability limitations
      1. NBU restore changes the disk path and UUID due to which VMwareDisks resource reports an unknown state
      2. Lock by third-party monitoring tools on shared volumes
      3. SFW cannot coexist with early Symantec Anti-virus software
  4. Known issues
    1. Deployment issues
      1. Reinstallation of an InfoScale product may fail due to pending cleanup tasks
      2. Delayed installation on certain systems
      3. Installation may fail with the "Windows Installer Service could not be accessed" error
      4. Installation may fail with "Unspecified error" on a remote system
      5. The installation may fail with "The system cannot find the file specified" error
      6. Installation may fail with a fatal error for VCS msi
      7. In SFW with Microsoft failover cluster, a parl.exe error message appears when system is restarted after SFW installation if Telemetry was selected during installation
      8. Side-by-side error may appear in the Windows Event Viewer
      9. FlashSnap License error message appears in the Application Event log after installing license key
      10. Uninstallation may fail to remove certain folders
      11. Error while uninstalling the product if licensing files are missing
      12. The vxlicrep.exe may crash when the machine reboots after InfoScale Enterprise is installed
    2. Cluster management issues
      1. Cluster Server (VCS) issues
        1. Cluster reconfiguration may fail post InfoScale installation (4117528)
        2. The VCS Cluster Configuration Wizard may not be able to delete a cluster if it fails to stop HAD
        3. Deleting a node from a cluster using the VCS Cluster Configuration Wizard may not remove it from main.cf
        4. NetAppSnapDrive resource may fail with access denied error
        5. Mount resource fails to bring file share service group online
        6. Mount agent may go in unknown state on virtual machines
        7. AutoStart may violate limits and prerequisites Load Policy
        8. Enterprise Vault Configuration Wizard may fail to connect to SQL Server
        9. File Share Configuration Wizard may create dangling VMDg resources
        10. For volumes under a VMNSDg resource, capacity monitoring and automatic volume growth policies are not available to all cluster nodes
        11. For creating VMNSDg resources, the VMGetDrive command is not supported for retrieving a list of dynamic disk groups
        12. First failover attempt might fault for a NativeDisks configuration
        13. Resource fails to come online after failover on secondary
        14. Upgrading a secure cluster may require HAD restart
        15. New user does not have administrator rights in Java GUI
        16. HTC resource probe operation fails and reports an UNKNOWN state
        17. Resources in a parent service group may fail to come online if the AutoStart attribute for the resources is set to zero
        18. VCS wizards may fail to probe resources
        19. Changes to referenced attributes do not propagate
        20. ArgListValue attribute may not display updated values
        21. The Cluster Server High Availability Engine (HAD) service fails to stop
        22. Engine may hang in LEAVING state
        23. Timing issues with AutoStart policy
        24. The VCS Cluster Configuration Wizard (VCW) supports NIC teaming but the Arctera High Availability Configuration Wizard does not
        25. VCS engine HAD may not accept client connection requests even after the cluster is configured successfully
        26. Hyper-V DR attribute settings should be changed in the MonitorVM resource if a monitored VM is migrated to a new volume
        27. One or more VMNSDg resources may fail to come online during failover of a large service group
        28. VDS error reported while bringing the NativeDisks and Mount resources online after a failover
        29. SQL Server service resource does not fault even if detail monitoring fails
        30. Delay in refreshing the VCS Java Console
        31. NetBackup may fail to back up SQL Server database in VCS cluster environment
      2. Cluster Manager (Java Console) issues
        1. Cluster connection error while converting local service group to a global service group
        2. Repaint feature does not work properly when look and feel preference is set to Java
        3. Exception when selecting preferences
        4. Java Console errors in a localized environment
        5. Common system names in a global cluster setup
        6. Agent logs may not be displayed
        7. Login attempts to the Cluster Manager may fail after a product upgrade
      3. Global service group issues
        1. VCW configures a resource for GCO in a cluster without a valid GCO license
        2. Group does not go online on AutoStart node
        3. Cross-cluster switch may cause concurrency violation
        4. Declare cluster dialog may not display highest priority cluster as failover target
        5. Global group fails to come online on the DR site with a message that it is in the middle of a group operation
      4. VMware virtual environment-related issues
        1. Partial log entries for service group offline operations in case of application monitoring on single-node clusters (4188474)
        2. No corrective action is taken for a faulted service group even when its name is provided in the ServiceGroupName attribute of the AppMonHB resource (4187908)
        3. Irrelevant INFO message about corrective action for a faulted service group (4178969)
        4. VMwareDisks resource cannot go offline if VMware snapshot is taken when VMware disk is configured for monitoring
        5. Guest virtual machines fail to detect the network connectivity loss
        6. VMware vMotion fails to move a virtual machine back to an ESX host where the application was online
        7. VCS commands may fail if the snapshot of a system on which the application is configured is reverted
        8. VMWareDisks resource cannot probe (Unknown status) when virtual machine moves to a different ESX host
    3. Storage management issues
      1. Performance overhead is observed in Windows encryption for sequential write workload (4112733)
      2. Data corruption observed on snap plex when user reverts COW snapshot and creates the mirror snap (4114185)
      3. A file of 1 KB size cannot be encrypted (4104305)
      4. The file system of a replicated encrypted volume fails to change (4102345)
      5. Mirrored volume cannot be created or added if the disks are created from an array (4113434)
      6. The GCO network selection page does not show network adapter cards (4100888)
      7. Storage Foundation issues
        1. Disk fails to get added to a dynamic disk group after conversion from MBR to GPT (4055047)
        2. In Microsoft Azure environment, InfoScale Storage cannot be used in cluster configurations that require shared storage
        3. In a Microsoft Azure environment, SFW fails to auto discover SSD media type for a VHD
        4. Incorrect message appears in the Event Viewer for a hot relocation operation
        5. A volume state continues to appear as "Healthy, Resynching"
        6. After a mirror break-off operation, the volume label does not appear on the Slave node
        7. CVM cluster does not support node names with more than 15 characters
        8. For CSDG with mirrored volumes, sometimes disks incorrectly show the yellow warning icon
        9. SSD is not removed successfully from the cache pool
        10. On fast failover enabled configurations, VMDg resource takes a longer time to come online
        11. On some configurations, the VSS snapshot operation may fail to add volumes to the snapshot set
        12. Using Failover Cluster Manager, you cannot migrate volumes belonging to Hyper-V virtual machines to a location with VMDg or Volume Manager Shared Volume resource type
        13. Performance counters for Dynamic Disk and Dynamic Volume may fail to appear in the list of available counters
        14. VSS snapshot of a Hyper-V virtual machine on SFW storage does not work from NetBackup
        15. In some cases, when 80 or more mirrored volume resources fail over to another node in CVM, some volume resources fault, causing all to fault
        16. In CVM, if subdisk move operation for CSDG fails because of a cluster reconfiguration, it does not start again automatically
        17. Stale volume entries under MountedDevices and CurrentControlSet registries not removed after the volume is deleted
        18. Some arrays do not show correct Enclosure ID in VDID
        19. In some cases, EV snapshot schedules fail
        20. In CVM, Master node incorrectly displays two missing disk objects for a single disk disconnect
        21. DRL plex is detached across CVM nodes if a disk with DRL plex is disconnected locally from the node where volume is online
        22. Error while converting a Microsoft Disk Management Disk Group created using iSCSI disks to an SFW dynamic disk group
        23. Snapshot schedules intermittently fail on the Slave node after failover
        24. In VEA GUI, Tasks tab does not display task progress for the resynchronization of a cluster-shared volume if it is offline
        25. Snapback operation from a Slave node always reports being successful on that node, even when it is in progress or resynchronization fails on Master
        26. For cluster-shared volumes, only one file share per Microsoft failover cluster is supported
        27. After cluster-shared volume resize operation, free space of the volume is not updated on nodes where volume is offline
        28. Issues due to Microsoft Failover Clustering not recognizing SFW VMDg resource as storage class
        29. If fast failover is enabled for a VMDg resource, then SFW volumes are not displayed in the New Share Wizard
        30. Volume information not displayed for VMDg and RVG resources in Failover Cluster Manager
        31. Failover of VMDg resource from one node to another does not mount the volume when disk group volume is converted from LDM to dynamic
        32. vxverify command may not work if SmartMove was enabled while creating the mirrored volume
        33. After installation of SFW or SFW HA, mirrored and RAID-5 volumes and disk groups cannot be created from LDM
        34. Some operations related to shrinking or expanding a dynamic volume do not work
        35. VEA GUI displays error while creating partitions
        36. For dynamic disk group configured as VMNSDg resource, application component snapshot schedules are not replicated to other nodes in a cluster if created using VSS Snapshot Scheduler Wizard
        37. In Microsoft failover cluster, if VxSVC is attached to Windows Debugger, it may stop responding when you try to bring offline a service group with VMDg resources
        38. In some cases, updated VSS components are not displayed in VEA console
        39. Storage reclamation commands do not work when SFW is run inside Hyper-V virtual machines
        40. Unknown disk group may be seen after deleting a disk group
        41. Wrong disk information is displayed in the Veritas Enterprise Administrator (VEA) console; a single disk is displayed as two disks (a hard disk and a missing disk)
        42. SFW Configuration Utility for Hyper-V Live Migration support wizard shows the hosts as Configured even if any service fails to be configured properly
        43. System shutdown or crash of one cluster node and subsequent reboot of other nodes causes the SFW messaging for Live Migration support to fail
        44. Changing the FastFailover attribute for a VMDg resource from FALSE to TRUE throws an error message
        45. A remote partition is assumed to be on the local node due to Enterprise Vault DNS alias check
        46. After performing a restore operation on a COW snapshot, the "Allocated size" shadow storage field value is not updated on the VEA console
        47. Messaging service does not retain its credentials after upgrading SFW and SFW HA
        48. Enterprise Vault (EV) snapshots are displayed in the VEA console log as successful even when the snapshots are skipped and no snapshot is created
        49. On a clustered setup, split-brain might cause the disks to go into a fail state
        50. Takeover and Failback operation on Sun Controllers cause disk loss
        51. Microsoft failover cluster disk resource may fail to come online on failover node in case of node crash or storage disconnect if DMP DSMs are installed
        52. The Veritas Enterprise Administrator (VEA) console cannot revert the Logical Disk Management (LDM) missing disks to basic ones
        53. After breaking a Logical Disk Manager (LDM) mirror volume through the LDM GUI, LDM shows 2 volumes with the same drive letter
        54. Unable to fail over between cluster nodes; very slow volume arrival
        55. VDS errors noticed in the event viewer log
        56. An extra GUI Refresh is required to ensure that changes made to the volumes on a cluster disk group having the Volume Manager Disk Group (VMDg) resource are reflected in the Failover Cluster Manager Console
        57. DR wizard cannot create an RVG that contains more than 32 volumes
        58. For a cluster setup, configure the Veritas Scheduler Services with a domain user account
        59. If an inaccessible path is mentioned in the vxsnap create CLI, the snapshot gets created but the CLI fails
        60. If snapshot set files are stored on a Fileshare path, then they are visible and accessible by all nodes in the VCS cluster
        61. Sharing property of folders not persistent after system reboot
        62. Microsoft Disk Management console displays an error when a basic disk is encapsulated
        63. Results of a disk group split query on disks that contain a shadow storage area may not report the complete set of disks
        64. Extending a simple volume in Microsoft Disk Management Disk Group fails
        65. SFW cannot merge recovered disk back to RAID5 volume
        66. Request for format volume occurs when importing dynamic disk group
        67. Logging on to SFW as a member of the Windows Administrator group requires additional credentials
        68. Certain operations on a dynamic volume cause a warning
        69. Avoid encapsulating a disk that contains a system-critical basic volume
        70. Sharing property of folders in clustering environment is not persistent
        71. Entries under Task Tab may not be displayed with the correct name
        72. Attempting to add a gatekeeper device to a dynamic disk group can cause problems with subsequent operations on that disk group until the storage agent is restarted
        73. ASR fails to restore a disk group that has a missing disk
        74. Mirrored volume in Microsoft Disk Management Disk Group does not resynchronize
        75. Expand volume operation not supported for certain types of volumes created by Microsoft Disk Management
        76. MirrorView resource cannot be brought online because of invalid security file
        77. Known behavior with disk configuration in campus clusters
      8. VEA console issues
        1. Login to VEA on an IPv6-enabled system with the Logged On User on this computer option may cause incorrect privileges to be assigned
        2. VEA may fail to start when launched through the SCC, PowerShell, or Windows Start menu or Apps menu
        3. On Windows operating systems, non-administrator user cannot log on to VEA GUI if UAC is enabled
        4. VEA GUI sometimes does not show all the EV components
        5. VEA GUI incorrectly shows yellow caution symbol on the disk icon
        6. Reclaim storage space operation may not update progress in GUI
        7. VEA GUI fails to log on to iSCSI target
        8. VEA does not display properly when Windows color scheme is set to High Contrast Black
        9. VEA displays objects incorrectly after Online/Offline disk operations
        10. Disks displayed in Unknown disk group after system reboot
        11. Disk group creation fails on Ultradisk in Azure cloud for 4096-byte logical sector size (4101641)
      9. Snapshot and restore issues
        1. The vxsnap restore CLI command fails when specifying a full path name for a volume
        2. Restoring COW snapshots causes earlier COW snapshots to be deleted
        3. COW restore wizard does not update selected volumes
        4. Snapshot operation requires additional time
        5. Incorrect message displayed when wrong target is specified in vxsnap diffarea command
        6. Restore operation on SQL Server component with missing volume fails
        7. Snapshot of Microsoft Hyper-V virtual machine results in deported disk group on Hyper-V guest
        8. Enterprise Vault restore operation fails for remote components
        9. Persistent shadow copies are not supported for FAT and FAT32 volumes
        10. Copy On Write (COW) snapshots are automatically deleted after shrink volume operation
        11. Shadow storage settings for a Copy On Write (COW) snapshot persist after shrinking target volume
        12. Copy On Write (COW) shadow storage settings for a volume persist on newly created volume after breaking its snapshot mirror
        13. Conflict occurs when VSS snapshot schedules or VSS snapshots have identical snapshot set names
        14. VSS Writers cannot be refreshed or contacted
        15. Time-out errors may occur in Volume Shadow Copy Service (VSS) writers and result in snapshots that are not VSS compliant
        16. vxsnapsql restore may fail to restore SQL Server database
        17. VSS Snapshot of a volume fails after restarting the VSS provider service
        18. The vxsnap prepare CLI command does not create snapshot mirrors in a stripe layout
        19. After taking a snapshot of a volume, the resize option of the snapshot is disabled
        20. If the snapshot plex and original plex are of different sizes, the snapback fails
      10. Snapshot scheduling issues
        1. Snapshot schedule fails as result of reattach operation error
        2. Next run date information of snapshot schedule does not get updated automatically
        3. VEA GUI may not display correct snapshot schedule information after Veritas Scheduler Service configuration update
        4. Scheduled snapshots affected by transition to Daylight Saving Time
        5. In a cluster environment, the scheduled snapshot configuration succeeds on the active node but fails on another cluster node
        6. After a failover occurs, a snapshot operation scheduled within two minutes of the failover does not occur
        7. Unable to create or delete schedules on a Microsoft failover cluster node while another cluster node is shutting down
        8. Quick Recovery Wizard schedules are not executed if service group fails over to secondary zone in a replicated data cluster
        9. On Windows Server, a scheduled snapshot operation may fail due to mounted volumes being locked by the OS
    4. Multi-pathing issues
      1. Changes made to a multipathing policy of a LUN using the Microsoft Disk Management console do not appear on the VEA GUI
      2. The vxdmpadm deviceinfo and pathinfo commands, with a disk specified in the p#c#t#l# parameter, display information for only one path
    5. Replication issues
      1.  
        VVR replication may fail if Symantec Endpoint Protection (SEP) version 12.1 is installed
      2.  
        VVR replication fails to start on systems where Symantec Endpoint Protection (SEP) version 12.1 or 12.1 RU2 is installed
      3.  
        RVGPrimary resource fails to come online if VCS engine debug logging is enabled
      4.  
        "Invalid Arguments" error while performing the online volume shrink
      5.  
        vxassist shrinkby or vxassist querymax operation fails with "Invalid Arguments"
      6.  
        In synchronous mode of replication, file system may incorrectly report volumes as raw and show "scan and fix" dialog box for fast failover configurations
      7.  
        VxSAS configuration wizard doesn't work in NAT environments
      8.  
        File system may incorrectly report volumes as raw due to I/O failure
      9.  
        NTFS errors are displayed in Event Viewer if fast-failover DR setup is configured with VVR
      10.  
        Volume shrink fails because RLINK cannot resume due to heavy I/Os
      11.  
        Online volume shrink operation fails for data volumes with multiple Secondaries if I/Os are active
      12.  
        RLINKs cannot connect after changing the heartbeat port number
      13.  
        On a DR setup, if Replicated Data Set (RDS) components are browsed for on the secondary site, then the VEA console does not respond
      14.  
        Secondary host is getting removed and added when scheduled sync snapshots are taken
      15.  
        Replication may stop if the disks are write cache enabled
      16.  
        Discrepancy in the Replication Time Lag Displayed in VEA and CLI
      17.  
        The vxrlink updates command displays inaccurate values
      18.  
        Some VVR operations may fail to complete in a cluster environment
      19.  
        IBC IOCTL Failed Error Message
      20.  
        Pause and Resume commands take a long time to complete
      21.  
        Replication keeps switching between the pause and resume state
      22.  
        VEA GUI has problems in adding secondary if all NICs on primary are DHCP enabled
      23.  
        Pause secondary operation fails when SQLIO is used for I/Os
      24.  
        Performance counter cannot be started for VVR remote hosts in perfmon GUI
      25.  
        VVR Graphs get distorted when bandwidth value limit is set very high
      26.  
        BSOD seen on a Hyper-V setup
      27.  
        Unable to start statistics collection for VVR Memory and VVR remote hosts object in Perfmon
      28.  
        Bunker primary fails to respond when trying to perform stop replication operation on secondary
      29.  
        CLI shows the "Volume in use" error when you dismount the ReFS data volumes on the Secondary RVG
    6. Solution configuration issues
      1.  
        Oracle Enterprise Manager cannot be used for database control
      2.  
        Unexplained errors with DR wizard and QR wizard
      3.  
        VCS FD and DR wizards fail to configure application and hardware replication agent settings
      4. Disaster recovery (DR) configuration issues
        1.  
          The Disaster Recovery Configuration Wizard or the Fire Drill Wizard cannot proceed when configuring an application in an EMC SRDF replication environment
        2.  
          The DR Wizard does not provide a separate "GCO only" option for VVR-based replication
        3.  
          The Disaster Recovery Wizard fails if the primary and secondary sites are in different domains or if you run the wizard from another domain
        4.  
          The Disaster Recovery Wizard may fail to bring the RVGPrimary resources online
        5.  
          The Disaster Recovery Wizard requires that an existing storage layout for an application on a secondary site matches the primary site layout
        6.  
          The Disaster Recovery Wizard may fail to create the Secondary Replicator Log (SRL) volume
        7.  
          The Disaster Recovery Wizard may display a failed to discover NIC error on the Secondary system selection page
        8.  
          Service group cloning fails if you save and close the configuration in the Java Console while cloning is in progress
        9.  
          If RVGs are created manually with mismatched names, the DR Wizard does not recognize the RVG on the secondary site and attempts to create the secondary RVG
        10.  
          Cloned service group faults and fails over to another node during DR Wizard execution resulting in errors
        11.  
          DR wizard may display database constraint exception error after storage validation in EMC SRDF environment
        12.  
          DR wizard creation of secondary RVGs may fail due to mounted volumes being locked by the OS
        13.  
          DR wizard with VVR replication requires configuring the preferred network setting in VEA
        14.  
          DR wizard displays error message on failure to attach DCM logs for VVR replication
        15.  
          Disaster Recovery (DR) Wizard fails to automatically set the correct storage replication option in case of SRDF
        16.  
          Disaster Recovery (DR) Wizard reports an error during storage cloning operation in case of SRDF
      5. Fire drill (FD) configuration issues
        1.  
          Fire Drill Wizard may fail to recognize that a volume fits on a disk if the same disk is being used for another volume
        2.  
          Fire drill may fail if run again after a restore without exiting the wizard first
        3.  
          Fire Drill Wizard may time out before completing fire drill service group configuration
        4.  
          RegRep resource may fault while bringing the fire drill service group online during "Run Fire Drill" operation
        5.  
          Fire Drill Wizard in an HTC environment is untested in a configuration that uses the same horcm file for both regular and snapshot replication
        6.  
          FireDrill attribute is not consistently enabled or disabled
        7.  
          MountV resource state incorrectly set to UNKNOWN
      6. Quick recovery (QR) configuration issues
        1.  
          Quick Recovery Wizard allows identical names to be assigned to snapshot sets for different databases
    7. Internationalization and localization issues
      1.  
        Only US-ASCII characters are supported
      2.  
        Use only U.S. ASCII characters in the SFW or SFW HA installation directory name
      3.  
        VEA GUI cannot show double-byte characters correctly on (English) Windows operating system
      4.  
        VEA cannot connect to the remote VEA server on non-English platforms
      5.  
        SSO configuration fails if the system name contains non-English locale characters [2910613]
      6.  
        VCS cluster may display "stale admin wait" state if the virtual computer name and the VCS cluster name contains non-English locale characters
      7.  
        Issues while configuring application monitoring for a Windows service with non-English locale characters in its name
    8. Interoperability issues
      1.  
        Backup Exec 12 installation fails in a VCS environment
      2.  
        Symantec Endpoint Protection security policy may block the VCS Cluster Configuration Wizard
      3.  
        VCS services do not start on systems where SEP 12.1 or later is installed
      4.  
        Several issues occur when you configure VCS on systems where Symantec Endpoint Protection (SEP) version 12.1 is installed
    9. Miscellaneous issues
      1.  
        Cluster node may become unresponsive if you try to modify network properties of adapters assigned to the VCS private network
      2.  
        MSMQ resource fails to come online if the MSMQ directory path contains double byte characters
      3.  
        Saving large configuration results in very large file size for main.cf
      4.  
        AutoStart may violate limits and prerequisites load policy
      5.  
        Trigger not invoked in REMOTE_BUILD state
      6.  
        Some alert messages do not display correctly
      7.  
        If VCS upgrade fails on one or more nodes, HAD fails to start and cluster becomes unusable
      8.  
        Custom settings in the cluster configuration are lost after an upgrade if attribute values contain double quote characters
      9.  
        Options on the Domain Selection panel in the VCS Cluster Configuration Wizard are disabled
      10.  
        Live migration of a VM, which is part of a VCS cluster where LLT is configured over Ethernet, from one Hyper-V host to another may result in inconsistent HAD state
      11.  
        If a NIC that is configured for LLT protocol is disabled, LLT does not notify clients
      12. Fibre Channel adapter issues
        1.  
          Emulex Fibre Channel adapters
        2.  
          QLogic Fibre Channel adapters
      13.  
        Storage agent issues and limitations in VMware virtual environments

CVM cluster does not support node names with more than 15 characters

This issue occurs while configuring a CVM cluster or while adding a new node to one. CVM does not support node (host) names longer than 15 characters; if a name exceeds this limit, the node cannot be configured for the CVM cluster. (3351326)

Workaround: Ensure that the node name does not exceed 15 characters.
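
Before configuring a node, you can verify the name length programmatically. The sketch below is illustrative only (the function name and sample host names are hypothetical, not part of the product); it simply compares the local or given node name against the 15-character limit described above.

```python
import socket

# CVM node-name limit described in this release note
MAX_CVM_NODE_NAME_LEN = 15

def cvm_node_name_ok(name=None):
    """Return True if the node (host) name fits the CVM 15-character limit.

    If no name is given, check the local host name.
    """
    if name is None:
        name = socket.gethostname()
    return len(name) <= MAX_CVM_NODE_NAME_LEN

# Sample host names (hypothetical):
print(cvm_node_name_ok("WIN-NODE-01"))          # 11 characters -> True
print(cvm_node_name_ok("WIN-CLUSTER-NODE-01"))  # 19 characters -> False
```

Running such a check on each system before adding it to the cluster avoids the configuration failure described above.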