Veritas Access Administrator's Guide
- Section I. Introducing Veritas Access
- Section II. Configuring Veritas Access
- Adding users or roles
- Configuring the network
- Configuring authentication services
- Section III. Managing Veritas Access storage
- Configuring storage
- Configuring data integrity with I/O fencing
- Configuring iSCSI
- Veritas Access as an iSCSI target
- Section IV. Managing Veritas Access file access services
- Configuring the NFS server
- Setting up Kerberos authentication for NFS clients
- Using Veritas Access as a CIFS server
- About Active Directory (AD)
- About configuring CIFS for Active Directory (AD) domain mode
- About setting trusted domains
- About managing home directories
- About CIFS clustering modes
- About migrating CIFS shares and home directories
- About managing local users and groups
- Configuring an FTP server
- Using Veritas Access as an Object Store server
- Section V. Monitoring and troubleshooting
- Section VI. Provisioning and managing Veritas Access file systems
- Creating and maintaining file systems
- Considerations for creating a file system
- Modifying a file system
- Managing a file system
- Section VII. Configuring cloud storage
- Section VIII. Provisioning and managing Veritas Access shares
- Creating shares for applications
- Creating and maintaining NFS shares
- Creating and maintaining CIFS shares
- Using Veritas Access with OpenStack
- Integrating Veritas Access with Data Insight
- Section IX. Managing Veritas Access storage services
- Compressing files
- About compressing files
- Compression tasks
- Configuring SmartTier
- Configuring SmartIO
- Configuring episodic replication
- Episodic replication job failover and failback
- Configuring continuous replication
- How Veritas Access continuous replication works
- Continuous replication failover and failback
- Using snapshots
- Using instant rollbacks
- Section X. Reference
Scheduling compression jobs
Scheduling compression jobs lets you run pattern-based and age-based compression on a file system.
To schedule compression
- Create a scheduled compression:
Storage> compress schedule create new_schedule duration min \ [hour] [day_of_month] [month] [day_of_week] [node]
where new_schedule is the name of the schedule.
where duration is the duration specified in hours (1 or more).
where min is the minutes.
where hour is the hours.
where day_of_month is the day of the month.
where month is the month.
where day_of_week is the day of the week.
where node is the name of the node, or you can use "any".
- Start the schedule for a given file system:
Storage> compress schedule start fs_name schedule_name resource_level algorithm
where fs_name is the name of the file system.
where schedule_name is the name of the schedule.
where resource_level is either low, medium, or high.
where algorithm is the file compression algorithm strength [1-9]. For example, you specify strength gzip-3 compression as "3".
- Show the scheduled compression:
Storage> compress schedule show new_schedule
- (Optional) Create a pattern for the file system.
Storage> compress pattern create fs_name pattern
where pattern is the extensions of the file names, separated by ",". For example, *.arc,*.dbf,*.tmp.
- (Optional) Create a modification age rule for the file system.
Storage> compress modage create fs_name mod_age
where mod_age is the modification age (age-based), specified in days.
- If you created a pattern or a modification age rule (the two optional steps above), you can list the schedule details for the file system:
Storage> compress schedule list fs_name
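Putting the steps together, the following session is a sketch of a possible configuration; the schedule name (nightly), file system name (fs1), and all values are illustrative, not from a real deployment. It creates a 3-hour schedule that starts at minute 0 of hour 22 every day on any node, starts it on fs1 with low resource usage and gzip-3 strength, then adds an optional file-name pattern and a 30-day modification age rule:
Storage> compress schedule create nightly 3 0 22 * * * any
Storage> compress schedule start fs1 nightly low 3
Storage> compress schedule show nightly
Storage> compress pattern create fs1 *.arc,*.dbf,*.tmp
Storage> compress modage create fs1 30
Storage> compress schedule list fs1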