Veritas Data Insight Administrator's Guide
- Section I. Getting started
- Introduction to Veritas Data Insight administration
- Configuring Data Insight global settings
- About scanning and event monitoring
- About filtering certain accounts, IP addresses, and paths
- About archiving data
- About Data Insight integration with Symantec Data Loss Prevention (DLP)
- Configuring advanced analytics
- About open shares
- About user risk score
- About bulk assignment of custodians
- Configuring Metadata Framework
- Section II. Configuring Data Insight
- Configuring Data Insight product users
- Configuring Data Insight product servers
- About node templates
- About automated alerts for patches and upgrades
- Configuring saved credentials
- Configuring directory service domains
- Adding a directory service domain to Data Insight
- Configuring containers
- Section III. Configuring native file systems in Data Insight
- Configuring NetApp 7-mode file server monitoring
- Configuring clustered NetApp file server monitoring
- About configuring secure communication between Data Insight and cluster-mode NetApp devices
- Configuring EMC Celerra or VNX monitoring
- Configuring EMC Isilon monitoring
- Configuring EMC Unity VSA file servers
- Configuring Hitachi NAS file server monitoring
- Configuring Windows File Server monitoring
- Configuring Veritas File System (VxFS) file server monitoring
- Configuring monitoring of a generic device
- Managing file servers
- Adding filers
- Adding shares
- Renaming storage devices
- Section IV. Configuring SharePoint data sources
- Configuring monitoring of SharePoint web applications
- About the Data Insight web service for SharePoint
- Adding web applications
- Adding site collections
- Configuring monitoring of SharePoint Online accounts
- About SharePoint Online account monitoring
- Adding site collections to SharePoint Online accounts
- Section V. Configuring cloud data sources
- Configuring monitoring of Box accounts
- Configuring OneDrive account monitoring
- Managing cloud sources
- Section VI. Configuring Object Storage Sources
- Section VII. Health and monitoring
- Section VIII. Alerts and policies
- Section IX. Remediation
- Configuring remediation settings
- Section X. Reference
- Appendix A. Data Insight best practices
- Appendix B. Migrating Data Insight components
- Appendix C. Backing up and restoring data
- Appendix D. Data Insight health checks
- About Data Insight health checks
- Appendix E. Command File Reference
- Appendix F. Data Insight jobs
- Appendix G. Troubleshooting
- Troubleshooting FPolicy issues on NetApp devices
Understanding Data Insight best practices
To optimize the performance and efficiency of Data Insight, follow the guidelines below:
Do not use the system disk for the Data directory. Use a separate disk instead.
Set up event notifications to ensure that errors and warnings are reported. Create a separate email distribution list that includes storage administrators, product administrators, and other stakeholders.
Do not schedule scans during peak hours, because scanning can affect the user experience. Schedule scans during off-peak hours to minimize the impact on users.
Audit Exclusions - Service Account Exclusion
Exclude service accounts and application accounts from auditing.
If a third-party application that generates a large number of events resides on a volume, exclude that volume from auditing.
Scan Exclusions
Exclude specific folders and files from scanning, such as snapshot~ or other temporary files. This reduces the amount of data processed and improves overall performance.
Use high-performance disks, such as SSDs, for indexers.
General guidelines around calculating index memory:
Computation speed for all reports, including dashboard computation, can be improved by increasing the number of threads. Base the decision to increase the thread count on available resources, such as CPU usage on the Indexer and the Management Server.
To see the CPU usage and overall performance of Data Insight servers, navigate to either of the following:
- Settings > Health and Monitoring > Performance
- Settings > Inventory > Data Insight Servers > Select Node > Statistics > Performance
Retention Policy
You can define retention policies to control how long the database and log files are maintained over time. The retention policy affects sizing guidelines and disk space requirements. Retaining product logs is especially important for future troubleshooting, so account for log retention when planning disk space.
For any Windows or third-party upgrade:
Before the upgrade, ensure that:
- All Data Insight services are gracefully stopped.
- No classification request is running.
- No report or Index-Writer job is running.
After the upgrade, ensure that:
- Nothing is broken in the event logs.
- All services are up and running.
If possible, perform the activity during a maintenance window when users are comparatively less active, and check the Events log to see if anything is broken.
If you are using antivirus software, ensure that the AV scanner has exclusions for the Data Insight install folder, the Data folder, and the OS Temp folder on the Management Server and Indexers.
Create containers to logically group related objects together for administration and reporting purposes.
Use the latest available version of Data Insight to ensure that the most recent security and defect fixes are applied.
General
Use recommended system configurations for better throughput.
Use a classification server pool of multiple nodes to achieve higher throughput for large classification tasks.
Disable smart classification if it is not required. In Data Insight 6.3, the option will be disabled by default.
Smart classification requires significant resources on the Indexer and Management Server nodes to automatically generate the list of files to classify.
Update the default disk safeguard thresholds to higher values, especially when classifying PDF files, where uncompressed files can consume up to 40 GB of disk space (considering 16 threads and file sizes of around 2.5 GB). The values below safeguard against disk usage reaching its maximum limit:
- Reset at 50 GB (or higher)
- Stop at 45 GB (or higher)
As part of classification, Data Insight performs text extraction and uses the data directory to store temporary files.
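As a rough illustration of the arithmetic behind these thresholds (a minimal sketch; the thread count and per-file size are the example figures above, not fixed product values):

```python
def worst_case_temp_usage_gb(threads=16, max_uncompressed_gb=2.5):
    """Estimate worst-case temporary disk usage during classification:
    every classification thread extracting a maximally sized file at once."""
    return threads * max_uncompressed_gb

# 16 threads x 2.5 GB per file = 40 GB of temporary data, which is why the
# safeguard thresholds above should leave ample free-space headroom.
print(worst_case_temp_usage_gb())  # 40.0
```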
Maximum file size supported
Data Insight has a default maximum file size of 50 MB for classification. You can change this limit on the Classification Configuration settings page.
Text extraction during classification is bounded by the uncompressed size of a file, and this uncompressed size determines whether a file can be successfully classified. All Microsoft Office documents since Office 2007 use the Office Open XML format (.docx, .pptx, and so on), which introduced compression.
Most Office documents therefore have a degree of compression ranging from 20% to 70%, depending on the mix of text and images; pure text compresses by around 80%.
Files that contain many images compress less, because image formats such as JPEG and PNG are already compressed.
PDFs are not compressed by default, unless the 'Optimize PDF' option in Adobe Acrobat or a similar PDF authoring application has been used.
It has been observed that 16 concurrent 400 MB uncompressed .docx files can be classified without memory exhaustion.
This means that 16 concurrent requests for .docx files with logical (compressed) sizes in the range of 100 MB to 250 MB would probably work, given the average compression ratio; for example, a 250 MB .docx compressed by 40% is roughly 417 MB uncompressed, just under the tested limit of 450 MB.
Note that the compression ratio is impossible to predict unless you analyze each file or have some indication of the type of content within the corpus.
These figures do not relate to volume-level or disk-level compression, but to the compression that Microsoft Office applies to the content. A .docx file is simply a ZIP container that can be opened in a tool such as 7-Zip to assess the uncompressed size.
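Because a .docx file is a ZIP container, the check can also be scripted. The following sketch (a generic illustration using Python's standard zipfile module, not a Data Insight tool) reports the compressed and uncompressed sizes of an Office Open XML file:

```python
import sys
import zipfile

def office_open_xml_sizes(path):
    """Sum the compressed and uncompressed sizes of the entries in an
    Office Open XML file (.docx, .pptx, .xlsx), which is a ZIP container."""
    with zipfile.ZipFile(path) as zf:
        compressed = sum(info.compress_size for info in zf.infolist())
        uncompressed = sum(info.file_size for info in zf.infolist())
    saved = 1 - compressed / uncompressed if uncompressed else 0.0
    return compressed, uncompressed, saved

if __name__ == "__main__":
    c, u, s = office_open_xml_sizes(sys.argv[1])
    print(f"compressed: {c / 2**20:.1f} MB, "
          f"uncompressed: {u / 2**20:.1f} MB, "
          f"space saved: {s:.0%}")
```

Running this against a sample of the corpus gives the per-file compression ratios that the guidance above asks you to estimate.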
The table below shows the file types and sizes tested with the recommended Classification Server specification:
Recommended maximum file sizes for classification without OCR enabled:

| Document type | Extensions | Maximum compressed file size tested | Maximum uncompressed file size tested |
| --- | --- | --- | --- |
| Microsoft Word | doc, docx, docm, dotm, dotx | 200 MB | 450 MB |
| Microsoft PowerPoint | ppt, pptx, pps, potm, potx, ppsm, ppsx | 200 MB | 450 MB |
| Office Tabular | xls, xlsx, xlt, xltx, xlsb, xlam | 50 MB | 100 MB |
| Adobe PDF | pdf | 1 GB | Compressed PDFs are not yet tested. However, the maximum uncompressed size would mirror the compressed size of 1 GB. |
Server specification used (the recommended Data Insight Classification Server specification):
- 16 cores, 32 GB RAM
- 16 classification threads running in parallel
Using Optical Character Recognition (OCR)
OCR usually results in higher memory consumption, which in turn reduces classification performance.
Larger File support
It is possible that files larger than those tested could be successfully classified, but this depends on the sizes of the other files being classified at the same time. For example, if a 300 MB .docx file is 1 GB uncompressed, it could still be classified successfully if the other 15 files running in parallel are relatively small, because the total memory used by the classification process would remain within limits.
Because there is no way to ensure that a mix of small and large files is classified at the same time, we recommend that any DQL reports used to select files for classification are not ordered or segregated by file size. This ensures that files are submitted to VIC in as 'random' an order as possible (see the sketch below).
For example, do not classify all 'small .docx' files first and leave the largest ones until later. Classifying the very largest files together in one classification job increases the risk that the total uncompressed size of 16 large files leads to VIC memory exhaustion. Submitting a mix of file sizes together provides the best chance that large files, including those with large uncompressed sizes, are classified successfully.
If you use DQL to generate a report of files to classify, do not order the output of the report by size, because that would cause VIC to process the largest files together, whether they are sorted to appear at the start or at the end of the report.
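As a generic illustration of this principle (a hypothetical sketch, not a Data Insight or VIC API; the batch size of 16 mirrors the thread count above), the following interleaves candidate files randomly before they are submitted in batches:

```python
import random

def interleaved_batches(paths, batch_size=16):
    """Shuffle candidate files so that each classification batch mixes
    small and large files instead of grouping the largest together."""
    shuffled = list(paths)
    random.shuffle(shuffled)  # deliberately discard any size ordering
    for i in range(0, len(shuffled), batch_size):
        yield shuffled[i:i + batch_size]

# Example: submit files in randomized batches of roughly 16.
for batch in interleaved_batches([f"file{i}.docx" for i in range(40)]):
    print(batch)
```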
Recommendations for creating classification jobs
Use DQL reports to filter the files according to the above recommendations, and then trigger classification requests accordingly.
Enable only required policies in VIC configuration.
As the number of enabled policies and the policy complexity increase (for example, complex regular expressions or hundreds of keywords), throughput tends to decrease.
Disable OCR if not required.
Configure the content fetch pause window to reduce the potential impact on the source devices.
The content fetch job copies files from the source devices in order to classify them.
By default, the job is paused from 7 A.M. to 7 P.M., which matches normal working hours.
We recommend assessing the load on the devices during content fetch, because many customers have found that the load does not disrupt normal activities. If the job can run 24 hours a day, the classification process has a constant feed of files to classify, and throughput increases.
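A minimal sketch of the default pause-window logic (assuming a simple hour-based window; this is an illustration, not the actual Data Insight implementation):

```python
from datetime import datetime

def fetch_is_paused(now=None, start_hour=7, end_hour=19):
    """Return True when content fetch would be paused under the default
    window (7 A.M. to 7 P.M.) described above."""
    now = now or datetime.now()
    return start_hour <= now.hour < end_hour

print(fetch_is_paused(datetime(2025, 1, 6, 12)))  # True: paused at midday
print(fetch_is_paused(datetime(2025, 1, 6, 22)))  # False: fetch runs at night
```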