Veritas Access Appliance Release Notes

Last Published:
Product(s): Appliances (8.0)
Platform: Access Appliance OS, Veritas 3340, Veritas 3350
  1. Overview of Access Appliance
    1. About this release
    2. Changes in this release
      1. Enhancements to cluster configuration workflow
      2. Support for immutability
      3. Accessing the WORM storage server instances for management tasks
      4. Configuring MSDP-C with Access Appliance
      5. Configuring user authentication using digital certificates or smart cards
      6. Managing password policies
      7. Setting login banners
      8. New command structure for Access Appliance Shell commands
      9. Support for new hardware model
      10. Preupgrade check to determine if the appliance is ready for an upgrade
      11. Terminology changes
    3. Supported NetBackup client versions
    4. Access Appliance simple storage service (S3) APIs
  2. Fixed issues
    1. Fixed issues in this release
  3. Software limitations
    1. Limitations on using shared LUNs
    2. Limitations related to installation and upgrade
      1. If the required virtual IPs are not configured, then services like NFS, CIFS, and S3 do not function properly
      2. Underscore character is not supported for host names
    3. Limitations in the Backup mode
    4. Access Appliance IPv6 limitations
    5. FTP limitations
    6. Limitations related to commands in a non-SSH environment
    7. Limitations related to Veritas Data Deduplication
    8. Kernel-based NFS v4 limitations
    9. File system limitation
    10. Access Appliance S3 server limitation
    11. Long-term data retention (LTR) limitations
    12. Limitations related to upgrade
    13. Limitation related to replication
      1. Limitation related to episodic replication authentication
      2. Limitation related to continuous replication
  4. Known issues
    1. Access Appliance known issues
      1. Admin issues
        1. The user password gets displayed in the logs for the Admin> user add username system-admin|storage-admin|master command
      2. CIFS issues
        1. Cannot enable the quota on a file system that is appended or added to the list of homedir
        2. Deleting a CIFS share resets the default owner and group permissions for other CIFS shares on the same file system
        3. Default CIFS share has owner other than root
        4. CIFS mapuser command fails to map all the users from Active Directory (AD) to all the NIS/LDAP users
        5. CIFS share may become unavailable when the CIFS server is in normal mode
        6. CIFS share creation does not authenticate AD users
        7. If you mount or access a CIFS share using the local user without netbios or cluster name, the operation fails
        8. During upgrade, the CVM and CFS agents are not stopped
        9. Unable to access CIFS shares using the Share Open command from the Access Appliance Shell menu
        10. CIFS share may become inaccessible after an upgrade from Access Appliance version 7.4.2.400 to 8.0
        11. Upgrade from Access Appliance version 7.4.2.400 to 8.0 fails if CIFS CTBD mode is configured
      3. General issues
        1. Reimaging the appliance from the SSD device fails if a CD with the ISO image is inserted in the CD-ROM
        2. A functionality of Access Appliance works from the master node but does not work from the slave node
        3. The complete attribute list of adapters and RAIDs does not get displayed in the GET REST API output
        4. User account gets locked on a management or non-management console node
        5. Setting retention on a directory path does not work from the Access Appliance command-line interface
      4. GUI issues
        1. When provisioning the Access Appliance GUI, the option to generate S3 keys is not available after the LTR policy is activated
        2. When provisioning storage, the Access web interface or the command-line interface displays storage capacity in MB, GB, TB, or PB
        3. Restarting the server as part of the command to add and remove certificates gives an error on RHEL 7
        4. Client certificate validation using OpenSSL ocsp does not work on RHEL 7
        5. GUI does not support segregated IPv6 addresses while creating CIFS shares using the Enterprise Vault policy
        6. During a rolling upgrade the UI becomes inaccessible
        7. REST endpoint field gives an error message for valid values while registering S3-compatible as a cloud service
        8. If lockdown mode was set using CLISH, switching the lockdown mode to Normal mode using the GUI fails if you set the retention period as 0
      5. Infrastructure issues
        1. Mongo service does not start after a new node is added successfully
        2. The Access Appliance management console is not available after a node is deleted and the remaining node is restarted
        3. Unable to add an Appliance node to the cluster again after the Appliance node is turned off and removed from the Access Appliance cluster
      6. Installation and configuration issues
        1. After you restart a node that uses RDMA LLT, LLT does not work, or the gabconfig -a command shows the jeopardy state
        2. Running individual Access Appliance scripts may return inconsistent return codes
        3. Installer does not list the initialized disks immediately after initializing the disks during I/O fencing configuration
        4. If you run the Cluster> show command when a slave node is in the restart, shutdown, or crash state, the slave node throws an exception
        5. If duplicate PCI IDs are added for the PCI exclusion, the Cluster> add node name command fails
        6. Phantomgroup for the VLAN device does not come online if you create another VLAN device from the Access Appliance command-line interface after cluster configuration is done
        7. Configuring Access Appliance with a preconfigured VLAN and a preconfigured bond fails
        8. In a mixed mode Access Appliance cluster, after the execution of the Cluster> add node command, one type of unused IP does not get assigned as a physical IP to public NICs
        9. NLMGroup service goes into a FAULTED state when the private IP (x.x.x.2) is not free
        10. The cluster> show command does not detect all the nodes of the cluster
        11. When you configure Access Appliance as an iSCSI target, the initiator authentication does not work
      7. Internationalization (I18N) issues
        1. The Access Appliance command-line interface prompt disappears when characters in a foreign language are present in a command
      8. MSDP-C issues
        1. MSDP-C duplication job fails with OpenStorage WORM lock error after the file system is grown to 100%
      9. Networking issues
        1. CVM service group goes into faulted state unexpectedly
        2. In a mixed IPv4 and IPv6 VIP network setup, the IP balancing does not consider IP type
        3. The netgroup search does not continue to search in NIS if the entry is not found in LDAP
        4. The IPs hosted on an interface that is not the current IPv6 default gateway interface are not reachable outside the current IPv6 subnet
        5. After network interface swapping between two private NICs or one private NIC and one public NIC, the service groups on the slave nodes are not probed
        6. Unable to import the network module after an operating system upgrade
        7. Network load balancer does not get configured with IPv6
        8. Unable to add an IPv6-default gateway on an IPv4-installed cluster
        9. LDAP over SSL may not work in Access Appliance 8.0
        10. The network> swap command hangs if any node other than the console node is specified
        11. LDAP user fails to establish SSH connection with the cluster when FTP is configured
        12. Unable to configure primary, backup DC server as the command does not allow specifying multiple DC servers
      10. NFS issues
        1. Latest directory content of server is not visible to the client if time is not synchronized across the nodes
        2. NFS> share show command does not distinguish offline versus online shares
        3. Kernel-NFS v4 lock failover does not happen correctly in case of a node crash
        4. Kernel-NFS v4 export mount for Netgroup does not work correctly
        5. When a file system goes into the FAULTED or OFFLINE state, the NFS share groups associated with the file system do not become offline on all the nodes
        6. Add and delete NFS client operations fail from the GUI
        7. Multiple NFS shares created on a single file system from CLISH do not get listed in the RESTful API output and GUI
      11. ObjectAccess issues
        1. When trying to connect to the S3 server over SSL, the client application may give a warning
        2. File systems that are already created cannot be mapped as S3 buckets for local users using the GUI
        3. If you have upgraded to Access Appliance 8.0 from an earlier release, access to S3 server fails if the cluster name has uppercase letters
        4. If the cluster name does not follow the DNS hostname restrictions, you cannot work with the ObjectAccess service in Access Appliance
        5. Bucket creation may fail with time-out error
        6. Bucket deletion may fail with "No such bucket" or "No such key" error
        7. Group configuration does not work in ObjectAccess if the group name contains a space
        8. Self test failed for storage_s3test
      12. Replication issues
        1. When running episodic replication and deduplication on the same cluster node, the episodic replication job fails in certain scenarios
        2. The System> config import command does not import episodic replication keys and jobs
        3. The job uses the schedule on the target after episodic replication failover
        4. Episodic replication fails with error "connection reset by peer" if the target node fails over
        5. Episodic replication jobs created in Access Appliance 7.2.1.1 or earlier versions are not recognized after an upgrade
        6. Setting the bandwidth through the GUI is not enabled for episodic replication
        7. Episodic replication job with encryption fails after job remove and add link with SSL certificate error
        8. Episodic replication job status shows the entry for a link that was removed
        9. Episodic replication job modification fails
        10. If a share is created in RW mode on the target file system for episodic replication, then it may result in a different number of files and directories on the target file system compared to the source file system
        11. The promote operation may fail while performing episodic replication job failover/failback
        12. Discrepancy is observed in the outputs of the replication episodic service status and replication episodic job stats <job_name> commands
        13. Continuous replication fails when the 'had' daemon is restarted on the target manually
        14. Continuous replication is unable to go to the replicating state if the Storage Replicated Log becomes full
        15. Unplanned failover and failback in continuous replication may fail if the communication of the IPTABLE rules between the cluster nodes does not happen correctly
        16. Continuous replication configuration may fail if the continuous replication IP is not online on the master node but is online on another node
        17. If you restart any node in the primary or the secondary cluster, replication may go into a PAUSED state
        18. Unplanned failback fails if the source cluster goes down
        19. Cloud tiering cannot be configured with continuous replication
        20. After continuous replication failover/failback operations, the virtual IPs in the source may appear offline
        21. Cannot use a file system to create an RVG if it has previously been enabled as the first file system in an RVG and later disabled
      13. STIG issues
        1. When the STIG option is enabled, it enforces an account lockout for any user that enters three consecutive incorrect passwords
        2. The changed password is not synchronized across the cluster
        3. STIG status is not preserved if the system configuration is imported after reimaging the nodes
      14. Storage issues
        1. Snapshot mount can fail if the snapshot quota is set
        2. Sometimes the Storage> pool rmdisk command does not print a message
        3. The Storage> pool rmdisk command sometimes can give an error where the file system name is not printed
        4. Not able to enable quota for a file system that is newly added in the list of CIFS home directories
        5. Destroying the file system may not remove the /etc/mtab entry for the mount point
        6. The Storage> fs online command returns an error, but the file system is online after several minutes
        7. Removing disks from the pool fails if a DCO exists
        8. Rollback refresh fails when running it after running Storage> fs growby or growto commands
        9. If an exported DAS disk is in error state, it shows ERR on the local node and NOT_CONN on the remote nodes in Storage> list
        10. Inconsistent cluster state with management service down when disabling I/O fencing
        11. Storage> tier move command failover of node is not working
        12. Rollback service group goes into faulted state when the respective cache object is full and there is no way to clear the state
        13. Event messages are not generated when cache objects get full
        14. The Access Appliance command-line interface does not block uncompress and compress operations from running on the same file at the same time
        15. Storage> tier move list command fails if one of the cluster nodes is rebooted
        16. Pattern given as filter criteria to Storage> fs policy add sometimes erroneously transfers files that do not fit the criteria
        17. When a policy run completes after issuing Storage> fs policy resume, the total data and total files count might not match the moved data and files count as shown in Storage> fs policy status
        18. Storage> fs addcolumn operation fails but error notification is not sent
        19. Storage> fs-growto and Storage> fs-growby commands give error with isolated disks
        20. Unable to create space-optimized rollback when tiering is present
        21. Enabling I/O fencing on a setup with Volume Manager objects present fails to import the disk group
        22. File system creation fails when the pool contains only one disk
        23. After starting the backup service, BackupGrp goes into FAULTED state on some nodes
        24. File system creation fails with SSD pool
        25. The CVM service group goes into faulted state after you restart the management console node
        26. The Storage> fs create command does not display the output correctly if one of the nodes of the cluster is in unknown state
        27. Storage> fs growby and growto commands fail if the size of the file system or bucket is full
        28. The operating system names of fencing disks are not consistent across the Access Appliance cluster, which may lead to issues
        29. The disk group import operation fails and all the services go into failed state when fencing is enabled
        30. Error while creating a file system stating that the CVM master and management console are not on the same node
        31. When you configure disk-based fencing, the cluster does not come online after you restart the node
        32. After a node is restarted, the vxdclid process may generate core dump
        33. The cluster> shutdown command does not shut down the node
        34. Audit logging of WORM-enabled file systems does not get enabled if the file systems were offline during upgrade
      15. System issues
        1. The System> ntp sync command without any argument does not appear to work correctly
        2. Access services are up and running if the system is restarted after manually stopping services on both the nodes
        3. Phantom service group remains offline if a cluster node is restarted
      16. Upgrade issues
        1. Unable to roll back the system after an attempt to upgrade from 7.4.2.400 to 7.4.3.300 failed
        2. During the Access Appliance upgrade, I/O gets paused with an error message
        3. During rolling upgrade, Access Appliance shutdown does not complete successfully
        4. CVM is in FAULTED state after you perform a rolling upgrade
        5. If rolling upgrade is performed when NFS v4 is configured using NFS lease, the system may hang
        6. Stale file handle error is displayed during rolling upgrade
        7. The upgrade operation fails if synchronous replication is configured
        8. Rolling upgrade fails when the cluster has space-optimized rollback in online state
        9. After upgrading from version 7.4.2 to a later version, the default route entry in the ip rule table on one of the cluster nodes is missing
        10. GUI might fail to start after upgrading from version 7.4.2 to 8.0
        11. Storage provisioning might fail after upgrading the Access Appliance from version 7.4.2 to 8.0
        12. Upgrade may fail if operations such as OS reboot, cluster restart, and node stop and shutdown are used during the upgrade
        13. A wrong upgrade status might be displayed while upgrading from version 7.4.2 to a later version
        14. Disk layout version (DLV) of file systems is not upgraded if the file systems were offline during the upgrade
        15. Failed to retrieve the current password policy after upgrading to version 8.0
        16. Upgrade from version 7.4.3.200 to 8.0 fails in post-upgrade self-test
      17. Veritas Data Deduplication issues
        1. The Veritas Data Deduplication storage server does not come online on a newly added node in the cluster if the node was offline when you configured deduplication
        2. The Veritas Data Deduplication server goes offline after destroying the bond interface on which the deduplication IP was online
        3. If you grow the deduplication pool using the fs> grow command, and then try to grow it further using the dedupe> grow command, the dedupe> grow command fails
        4. The Veritas Data Deduplication server goes offline after bond creation using the interface of the deduplication IP
        5. Provisioning for Veritas Data Deduplication is displayed as failed in GUI
        6. During reconfiguration of Veritas Data Deduplication with WORM, the specified username and password are not considered
        7. WORM-enabled MSDP does not start after a switch or restart of the deduplication engine
      18. Access Appliance operational notes
        1. Access services do not restart properly after storage shelf restart
  5. Getting help
    1. Displaying the Online Help
    2. Displaying the man pages
    3. Using the Access Appliance product documentation

Continuous replication is unable to go to the replicating state if the Storage Replicated Log becomes full

While data is replicated from the source cluster to the target cluster, if the Storage Replicated Log (SRL) becomes full, replication switches to Data Change Map (DCM) mode. In DCM mode, the status is not reported as replicating, as shown in the following output:

Replication> continuous status test_fs
Name                   value
=====================  ==============================================
Replicated Data Set    rvg_test_fs
Replication Role       Primary
Replication link       link1

Primary Site Info:

Host name              10.10.2.70
RVG state              enabled for I/O

Secondary Site Info:

Host name              10.10.2.72
Configured mode        synchronous-override
Data status            inconsistent
Replication status     resync in progress (dcm resynchronization)
Current mode           asynchronous
Logging to             DCM (contains 551200 Kbytes) (SRL protection logging)

Workaround:

To resume continuous data replication, run the following command on the source cluster:

# vxrvg -g <dg_name> resync <rvg_name>

The command resynchronizes the source and target clusters. You can check the status by entering the following command:

Replication> continuous status test_fs
Name                   value
=====================  =======================
Replicated Data Set    rvg_test_fs
Replication Role       Primary
Replication link       link1

Primary Site Info:

Host name              10.10.2.70
RVG state              enabled for I/O

Secondary Site Info:

Host name              10.10.2.72
Configured mode        synchronous-override
Data status            consistent, up-to-date
Replication status     replicating (connected)
Current mode           synchronous
Logging to             SRL
Timestamp Information  behind by  0h 0m 0s
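
The DCM resynchronization can take a while for large data sets. If you prefer not to rerun the status command by hand, the following is a minimal shell sketch that polls the underlying volume replicator status from the source cluster node until logging returns to the SRL. It is not part of the documented workaround; it assumes root shell access on the source node, that the standard VVR vradmin utility is available in the PATH, and that the disk group and RVG names below are replaced with your own values.

#!/bin/sh
# Minimal polling sketch (assumptions: run as root on the source cluster node,
# standard VVR vradmin utility available in the PATH).
DG=dg_name          # replace with your disk group name
RVG=rvg_test_fs     # replace with your RVG name

while true; do
    # A "Logging to: SRL" line in the repstatus output indicates that the DCM
    # resynchronization has completed and replication is back to normal.
    # (The DCM line reads "Logging to: DCM ... (SRL protection logging)" and
    # therefore does not match this pattern.)
    if vradmin -g "$DG" repstatus "$RVG" | grep -Eq "Logging to:?[[:space:]]+SRL"; then
        echo "Replication has returned to SRL logging (replicating)."
        break
    fi
    echo "DCM resynchronization still in progress; checking again in 60 seconds."
    sleep 60
done

Alternatively, you can check the same information from the Access Appliance shell by rerunning the Replication> continuous status command shown above.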