Veritas Access Administrator's Guide
- Section I. Introducing Veritas Access
- Section II. Configuring Veritas Access
- Adding users or roles
- Configuring the network
- Configuring authentication services
- Section III. Managing Veritas Access storage
- Configuring storage
- Configuring data integrity with I/O fencing
- Configuring iSCSI
- Veritas Access as an iSCSI target
- Section IV. Managing Veritas Access file access services
- Configuring the NFS server
- Setting up Kerberos authentication for NFS clients
- Using Veritas Access as a CIFS server
- About Active Directory (AD)
- About configuring CIFS for Active Directory (AD) domain mode
- About setting trusted domains
- About managing home directories
- About CIFS clustering modes
- About migrating CIFS shares and home directories
- About managing local users and groups
- Configuring an FTP server
- Using Veritas Access as an Object Store server
- Section V. Monitoring and troubleshooting
- Section VI. Provisioning and managing Veritas Access file systems
- Creating and maintaining file systems
- Considerations for creating a file system
- Modifying a file system
- Managing a file system
- Section VII. Configuring cloud storage
- Section VIII. Provisioning and managing Veritas Access shares
- Creating shares for applications
- Creating and maintaining NFS shares
- Creating and maintaining CIFS shares
- Using Veritas Access with OpenStack
- Integrating Veritas Access with Data Insight
- Section IX. Managing Veritas Access storage services
- Compressing files
- About compressing files
- Compression tasks
- Configuring SmartTier
- Configuring SmartIO
- Configuring episodic replication
- Episodic replication job failover and failback
- Configuring continuous replication
- How Veritas Access continuous replication works
- Continuous replication failover and failback
- Using snapshots
- Using instant rollbacks
- Section X. Reference
Read performance tunables for a cloud tier in a scale-out file system
When an application reads files on a cloud tier in very small chunks, read performance can degrade drastically. High network latency to the cloud tier reduces read performance further.
A scale-out file system has a read-ahead mechanism that pre-fetches data and stores it locally to improve read performance. The cloud cache stores data read from the cloud tier on the primary storage, so that future requests can be served without reading the file from the cloud tier again.
With the read-ahead mechanism, Veritas Access pre-fetches more data than the application requests. The read-ahead is performed asynchronously, and the pre-fetched data is cached on the primary tier (on-premises storage). Any node in the cluster can use the cache, so data that one node brings in from the cloud can be used by the other nodes. Because the cached files occupy space on the meta file system, a reclaim thread runs periodically and removes files that have not been accessed recently, based on their access time.
Table: Configuration tunables for cloud cache lists the configuration parameters for the cloud cache.
Table: Configuration tunables for cloud cache

| Tunable name and command | Description |
|---|---|
| tfs_cld_cache_enable Command syntax: tunefs tfs_cld_cache_enable = <0/1> <mntpt> | Enables or disables the caching feature. This parameter is enabled by default. |
| tfs_cld_cache_list Command syntax: tunefs tfs_cld_cache_list <mntpt> | Lists the cloud-cache configurations. |
| tfs_cld_cache_chksz Command syntax: tunefs tfs_cld_cache_chksz = <size in MB> <mntpt> | Specifies the chunk size, which is the size of the read request made to the cloud tier. A larger chunk size reduces the number of data movement cycles required to access data from the cloud tier. The default value is 32 MB. |
| tfs_cld_cache_nread Command syntax: tunefs tfs_cld_cache_nread = <max. number of read-ahead threads> <mntpt> | Specifies the maximum number of read-ahead threads. The read-ahead threads asynchronously issue chunk-sized read requests to the cloud tier. Total data read from the cloud = tfs_cld_cache_nread x tfs_cld_cache_chksz; with the default of 8 threads and a 32 MB chunk size, 256 MB of data is fetched from the cloud. The default value is 8 read-ahead threads. |
| tfs_cld_cache_gctime Command syntax: tunefs tfs_cld_cache_gctime = <GC time in minutes> <mntpt> | Specifies the garbage collection time after which cached files are removed. Garbage collection follows a Least Recently Used (LRU) policy: a file that has not been accessed within the specified time is deleted to make space in the cache for new chunks. The default value is 3 minutes. |
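As a sketch, the tunables above might be applied as follows. The mount point /vx/sofs1 and the chosen values are hypothetical examples; pick values that suit your workload and network latency.

```shell
# Hypothetical mount point; substitute your scale-out file system mount.
# List the current cloud-cache configuration.
tunefs tfs_cld_cache_list /vx/sofs1

# Ensure caching is enabled (it is enabled by default).
tunefs tfs_cld_cache_enable = 1 /vx/sofs1

# Use 64 MB chunks to reduce round trips for large sequential reads.
tunefs tfs_cld_cache_chksz = 64 /vx/sofs1

# Allow up to 16 read-ahead threads: 16 x 64 MB = 1 GB pre-fetched per cycle.
tunefs tfs_cld_cache_nread = 16 /vx/sofs1

# Keep unused cached files for 10 minutes before garbage collection.
tunefs tfs_cld_cache_gctime = 10 /vx/sofs1
```

Larger chunk sizes and more read-ahead threads trade primary-storage space for fewer, larger cloud requests, which generally helps when cloud-tier latency is high.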
Note:
You must take the file system offline and bring it back online for the new values to take effect.
The cloud cache consumes storage from the primary storage. Therefore, if garbage collection has not yet run, an application can receive an ENOSPC error even though the primary storage has free space that is currently occupied by the cloud cache. After the cached files are deleted based on their access time, operations continue normally.
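For example, assuming a file system named sofs1, the offline/online cycle might look like the following from the Veritas Access command-line interface (a sketch; confirm the exact command syntax for your release):

```shell
Storage> fs offline sofs1
Storage> fs online sofs1
```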