Veritas Access Release Notes

Product(s): Access (7.4.2.400)
Platform: Linux
  1. Overview of Veritas Access
    1. About this release
    2. Important release information
    3. Changes in this release
      1. Deprecated functionality in this release
    4. Supported NetBackup client versions
    5. Veritas Access simple storage service (S3) APIs
    6. Required OS and third-party RPMs
  2. Software limitations
    1. Limitations on using shared LUNs
    2. Flexible Storage Sharing limitations
      1. If your cluster has DAS disks, you must limit the cluster name to ten characters at installation time
    3. Limitations related to installation and upgrade
      1. If the required virtual IPs are not configured, then services such as NFS, CIFS, and S3 do not function properly
      2. Rolling upgrade is not supported from the Veritas Access command-line interface
      3. Underscore character is not supported for host names
    4. Limitations in the Backup mode
    5. Veritas Access IPv6 limitations
    6. FTP limitations
    7. Intel Spectre Meltdown limitation
    8. Limitations on using InfiniBand NICs in the Veritas Access cluster
    9. Limitations related to commands in a non-SSH environment
    10. Limitation on using Veritas Access in a virtual machine environment
    11. Limitations related to Veritas Data Deduplication
    12. Kernel-based NFS v4 limitations
    13. File system limitation
    14. Veritas Access S3 server limitation
    15. Long-term data retention (LTR) limitations
    16. Limitations related to upgrade
    17. Limitations related to replication
      1. Limitation related to episodic replication authentication
      2. Limitation related to continuous replication
  3. Known issues
    1. Veritas Access known issues
      1. Admin issues
        1. The user password gets displayed in the logs for the Admin> user add username system-admin|storage-admin|master command
        2. A user is created even if double quotation marks or single quotation marks are specified in a password
      2. CIFS issues
        1. Cannot enable the quota on a file system that is appended or added to the list of homedir
        2. Deleting a CIFS share resets the default owner and group permissions for other CIFS shares on the same file system
        3. Default CIFS share has owner other than root
        4. CIFS mapuser command fails to map all the users from Active Directory (AD) to all the NIS/LDAP users
        5. CIFS share may become unavailable when the CIFS server is in normal mode
        6. CIFS share creation does not authenticate AD users
        7. If you mount or access a CIFS share using the local user without netbios or cluster name, the operation fails
        8. During upgrade, the CVM and CFS agents are not stopped
      3. FTP issues
        1. If a file system is used as homedir or anonymous_login_dir for FTP, the file system cannot be destroyed
        2. The FTP> server start command reports the FTP server to be online even when it is not online
        3. The FTP> session showdetails user=<AD username> command does not work
        4. If the security in CIFS is not set to Active Directory (AD), you cannot log on to FTP through the AD user
        5. If security is set to local, FTP does not work in case of a fresh operating system and Veritas Access installation
        6. If the LDAP and local FTP user have the same user name, then the LDAP user cannot perform PUT operations when the security is changed from local to nis-ldap
        7. FTP with LDAP as security is not accessible to a client that connects from the console node using virtual IPs
        8. The FTP server starts even if the home directory is offline, and if the security is changed to local, the FTP client writes on the root file system
      4. General issues
        1. Some Veritas Access functionality works from the master node but does not work from the slave node
      5. GUI issues
        1. When a new node is added or when a new cluster is installed and configured, the GUI may not start on the console node after a failover
        2. When an earlier version of the Veritas Access cluster is upgraded, the GUI shows stale and incomplete data
        3. Restarting the server as part of the command to add and remove certificates gives an error on RHEL 7
        4. Client certificate validation using OpenSSL ocsp does not work on RHEL 7
        5. GUI does not support segregated IPv6 addresses while creating CIFS shares using the Enterprise Vault policy
        6. Unable to select a client in the Delete NFS client wizard
        7. Setting the episodic replication link fails if the replication link is reconfigured
        8. REST endpoint field gives an error message for valid values while registering S3-compatible as a cloud service
      6. Installation and configuration issues
        1. After you restart a node that uses RDMA LLT, LLT does not work, or the gabconfig -a command shows the jeopardy state
        2. Running individual Veritas Access scripts may return inconsistent return codes
        3. Configuring Veritas Access with the installer fails when the SSH connection is lost
        4. Excluding PCIs from the configuration fails when you configure Veritas Access using a response file
        5. The installer does not list the initialized disks immediately after initializing the disks during I/O fencing configuration
        6. If the same driver node is used for two installations at the same time, then the second installation shows the progress status of the first installation
        7. If the same driver node is used for two or more installations at the same time, then the first installation session is terminated
        8. If you run the Cluster> show command when a slave node is in the restart, shutdown, or crash state, the slave node throws an exception
        9. If duplicate PCI IDs are added for the PCI exclusion, the Cluster> add node name command fails
        10. If an installation using a response file is started from the cluster node, then the installation session gets terminated after the NIC configuration section
        11. After finishing system verification checks, the installer displays a warning message about missing third-party RPMs
        12. Phantomgroup for the VLAN device does not come online if you create another VLAN device from the Veritas Access command-line interface after cluster configuration is done
        13. Veritas Access fails to install if LDAP or the autofs home directories are preconfigured on the system
        14. After the Veritas Access installation is complete, the installer does not clean the SSH keys of the driver node on the Veritas Access nodes from which the installation was triggered
        15. Veritas Access installation fails if the nodes have older yum repositories and do not have Internet connectivity to reach RHN repositories
        16. Configuring Veritas Access with a preconfigured VLAN and a preconfigured bond fails
        17. When you configure Veritas Access, the common NICs may not be listed
        18. In a mixed mode Veritas Access cluster, after the execution of the Cluster> add node command, one type of unused IP does not get assigned as a physical IP to public NICs
        19. NLMGroup service goes into a FAULTED state when the private IP (x.x.x.2) is not free
        20. The Cluster> show command does not detect all the nodes of the cluster
        21. Configuration fails during migration from a host-based NetBackup client to a container-based NetBackup client
      7. Internationalization (I18N) issues
        1. The Veritas Access command-line interface prompt disappears when characters in a foreign language are present in a command
      8. Networking issues
        1. CVM service group goes into faulted state unexpectedly
        2. In a mixed IPv4 and IPv6 VIP network setup, the IP balancing does not consider IP type
        3. The netgroup search does not continue to search in NIS if the entry is not found in LDAP
        4. The IPs hosted on an interface that is not the current IPv6 default gateway interface are not reachable outside the current IPv6 subnet
        5. After network interface swapping between two private NICs or one private NIC and one public NIC, the service groups on the slave nodes are not probed
        6. Unable to import the network module after an operating system upgrade
        7. LDAP with SSL on option does not work if you upgrade Veritas Access
        8. Network load balancer does not get configured with IPv6
        9. Unable to add an IPv6 default gateway on an IPv4-installed cluster
        10. LDAP over SSL may not work in Veritas Access 7.4.2.400
        11. The Network> swap command hangs if any node other than the console node is specified
        12. LDAP user fails to establish an SSH connection with the cluster when FTP is configured
      9. NFS issues
        1. The latest directory content of the server is not visible to the client if time is not synchronized across the nodes
        2. The NFS> share show command does not distinguish between offline and online shares
        3. Kernel-NFS v4 lock failover does not happen correctly in case of a node crash
        4. Kernel-NFS v4 export mount for Netgroup does not work correctly
        5. When a file system goes into the FAULTED or OFFLINE state, the NFS share groups associated with the file system do not become offline on all the nodes
      10. ObjectAccess issues
        1. When trying to connect to the S3 server over SSL, the client application may give a warning
        2. If you have upgraded to Veritas Access 7.4.2.400 from an earlier release, access to the S3 server fails if the cluster name has uppercase letters
        3. If the cluster name does not follow the DNS hostname restrictions, you cannot work with the ObjectAccess service in Veritas Access
        4. Bucket creation may fail with a time-out error
        5. Bucket deletion may fail with a "No such bucket" or "No such key" error
        6. Group configuration does not work in ObjectAccess if the group name contains a space
      11. OpenStack issues
        1. Cinder and Manila shares cannot be distinguished from the Veritas Access command-line interface
        2. Cinder volume creation fails after a failure occurs on the target side
        3. Cinder volume may fail to attach to the instance
        4. Bootable volume creation for an iSCSI driver fails with an I/O error when a qcow image is used
      12. Replication issues
        1. When running episodic replication and deduplication on the same cluster node, the episodic replication job fails in certain scenarios
        2. The System> config import command does not import episodic replication keys and jobs
        3. The job uses the schedule on the target after episodic replication failover
        4. Episodic replication fails with the error "connection reset by peer" if the target node fails over
        5. Episodic replication jobs created in Veritas Access 7.2.1.1 or earlier versions are not recognized after an upgrade
        6. Setting the bandwidth through the GUI is not enabled for episodic replication
        7. Episodic replication job with encryption fails with an SSL certificate error after removing the job and adding the link
        8. Episodic replication job status shows the entry for a link that was removed
        9. Episodic replication job modification fails
        10. If a share is created in RW mode on the target file system for episodic replication, it may result in a different number of files and directories on the target file system compared to the source file system
        11. Continuous replication fails when the 'had' daemon is restarted on the target manually
        12. Continuous replication is unable to go to the replicating state if the Storage Replicator Log becomes full
        13. Unplanned failover and failback in continuous replication may fail if the communication of the IPTABLE rules between the cluster nodes does not happen correctly
        14. Continuous replication configuration may fail if the continuous replication IP is not online on the master node but is online on another node
        15. If you restart any node in the primary or the secondary cluster, replication may go into a PAUSED state
        16. Unplanned failback fails if the source cluster goes down
        17. Cloud tiering cannot be configured with continuous replication
        18. After continuous replication failover and failback operations, the virtual IPs on the source may appear offline
      13. SmartIO issues
        1. SmartIO writeback cachemode for a file system changes to read mode after taking the file system offline and then online
      14. Storage issues
        1. Snapshot mount can fail if the snapshot quota is set
        2. Sometimes the Storage> pool rmdisk command does not print a message
        3. The Storage> pool rmdisk command can sometimes give an error where the file system name is not printed
        4. Unable to enable the quota for a file system that is newly added to the list of CIFS home directories
        5. Destroying the file system may not remove the /etc/mtab entry for the mount point
        6. The Storage> fs online command returns an error, but the file system comes online after several minutes
        7. Removing disks from the pool fails if a DCO exists
        8. Rollback refresh fails when it is run after the Storage> fs growby or growto commands
        9. If an exported DAS disk is in the error state, it shows ERR on the local node and NOT_CONN on the remote nodes in Storage> list
        10. Inconsistent cluster state with the management service down when disabling I/O fencing
        11. Failover of a node during a Storage> tier move operation does not work
        12. The Storage> scanbus operation hangs during an I/O fencing operation
        13. Rollback service group goes into faulted state when the respective cache object is full and there is no way to clear the state
        14. Event messages are not generated when cache objects get full
        15. The Veritas Access command-line interface does not block uncompress and compress operations from running on the same file at the same time
        16. The Storage> tier move list command fails if one of the cluster nodes is rebooted
        17. A pattern given as filter criteria to Storage> fs policy add sometimes erroneously transfers files that do not fit the criteria
        18. When a policy run completes after issuing Storage> fs policy resume, the total data and total files count might not match the moved data and files count as shown in Storage> fs policy status
        19. The Storage> fs addcolumn operation fails but an error notification is not sent
        20. The Storage> fs growto and Storage> fs growby commands give an error with isolated disks
        21. Unable to create a space-optimized rollback when tiering is present
        22. Enabling I/O fencing on a setup with Volume Manager objects present fails to import the disk group
        23. File system creation fails when the pool contains only one disk
        24. After starting the backup service, BackupGrp goes into FAULTED state on some nodes
        25. File system creation fails with an SSD pool
        26. The CVM service group goes into faulted state after you restart the management console node
        27. The Storage> fs create command does not display the output correctly if one of the nodes of the cluster is in an unknown state
        28. The Storage> fs growby and growto commands fail if the file system or bucket is full
        29. The operating system names of fencing disks are not consistent across the Veritas Access cluster, which may lead to issues
        30. The disk group import operation fails and all the services go into failed state when fencing is enabled
        31. Error while creating a file system stating that the CVM master and management console are not on the same node
        32. When you configure disk-based fencing, the cluster does not come online after you restart the node
        33. After a node is restarted, the vxdclid process may generate a core dump
        34. The Cluster> shutdown command does not shut down the node
        35. Quorum is lost and the disk group is in disabled state
      15. System issues
        1. The System> ntp sync command without any argument does not appear to work correctly
      16. Upgrade issues
        1. Some vulnerabilities are present in the python-requests RPM, which impact rolling upgrade when you try to upgrade from 7.4.x to 7.4.2
        2. During rolling upgrade, Veritas Access shutdown does not complete successfully
        3. CVM is in FAULTED state after you perform a rolling upgrade
        4. If rolling upgrade is performed when NFS v4 is configured using NFS lease, the system may hang
        5. A stale file handle error is displayed during rolling upgrade
        6. The upgrade operation fails if synchronous replication is configured
        7. Rolling upgrade fails when the cluster has a space-optimized rollback in online state
        8. Upgrade fails if VVR is configured and I/O processes are ongoing in the file systems that are configured for VVR
        9. After an upgrade from version 7.4.2, LDAP and NIS are in disabled state
        10. GUI might fail to start after upgrading from version 7.4.2.301 to 7.4.2.400
        11. Unable to list file systems after a rolling upgrade to version 7.4.2.400
        12. After upgrading from version 7.4.2.301 to version 7.4.2.400, full discovery is not triggered automatically
        13. Unable to add cloud as a tier after a rolling upgrade to version 7.4.2.400
        14. The dedupe stats command fails after performing an upgrade
        15. Upgrading from version 7.4.2.300 to 7.4.2.400 fails on a source cluster
        16. Unable to log in to a node using SSH as the master user after upgrading to 7.4.2.400
        17. Unable to increase or decrease the file system size from the UI after upgrading from version 7.4.2.301 to 7.4.2.400
        18. After migration, the System> option show tunable values may get changed
      17. Veritas Data Deduplication issues
        1. The Veritas Data Deduplication storage server does not come online on a newly added node in the cluster if the node was offline when you configured deduplication
        2. The Veritas Data Deduplication server goes offline after destroying the bond interface on which the deduplication IP was online
        3. If you grow the deduplication pool using the fs> grow command, and then try to grow it further using the dedupe> grow command, the dedupe> grow command fails
        4. The Veritas Data Deduplication server goes offline after bond creation using the interface of the deduplication IP
        5. Provisioning for Veritas Data Deduplication is displayed as failed in the GUI
  4. Getting help
    1. Displaying the Online Help
    2. Displaying the man pages
    3. Using the Veritas Access product documentation

Upgrade issues

This section describes known issues related to upgrade.