NetBackup™ Deduplication Guide
- Introducing the NetBackup media server deduplication option
- Quick start
- Planning your deployment
- About MSDP storage and connectivity requirements
- About NetBackup media server deduplication
- About NetBackup Client Direct deduplication
- About MSDP remote office client deduplication
- About MSDP performance
- About MSDP stream handlers
- MSDP deployment best practices
- Provisioning the storage
- Licensing deduplication
- Configuring deduplication
- About the MSDP Deduplication Multi-Threaded Agent
- About MSDP fingerprinting
- Enabling 400 TB support for MSDP
- About MSDP Encryption using NetBackup Key Management Server service
- Configuring a storage server for a Media Server Deduplication Pool
- About disk pools for NetBackup deduplication
- Configuring a Media Server Deduplication Pool storage unit
- Configuring client attributes for MSDP client-side deduplication
- About MSDP encryption
- About a separate network path for MSDP duplication and replication
- About MSDP optimized duplication within the same domain
- Configuring MSDP replication to a different NetBackup domain
- About NetBackup Auto Image Replication
- Configuring a target for MSDP replication to a remote domain
- About storage lifecycle policies
- Resilient network properties
- About variable-length deduplication on NetBackup clients
- About the MSDP pd.conf configuration file
- About saving the MSDP storage server configuration
- About protecting the MSDP catalog
- About NetBackup WORM storage support for immutable and indelible data
- Running MSDP services with the non-root user
- MSDP volume group (MVG)
- About the MSDP volume group
- Configuring the MSDP volume group
- MSDP cloud support
- About MSDP cloud support
- Cloud space reclamation
- About the disaster recovery for cloud LSU
- About Image Sharing using MSDP cloud
- About MSDP cloud immutable (WORM) storage support
- About immutable object support for AWS S3
- About bucket-level immutable storage support for Google Cloud Storage
- About object-level immutable storage support for Google Cloud Storage
- About AWS IAM Role Anywhere support
- About Azure service principal support
- About NetBackup support for AWS Snowball Edge
- About the cloud direct
- S3 Interface for MSDP
- Configuring S3 interface for MSDP on MSDP build-your-own (BYO) server
- Identity and Access Management (IAM) for S3 interface for MSDP
- S3 APIs for S3 interface for MSDP
- Disaster recovery in S3 interface for MSDP
- Monitoring deduplication activity
- Viewing MSDP job details
- Managing deduplication
- Managing MSDP servers
- Managing NetBackup Deduplication Engine credentials
- Managing Media Server Deduplication Pools
- Changing Media Server Deduplication Pool properties
- Configuring MSDP data integrity checking behavior
- About MSDP storage rebasing
- Managing MSDP servers
- Recovering MSDP
- Replacing MSDP hosts
- Uninstalling MSDP
- Deduplication architecture
- Configuring and managing universal shares
- Introduction to universal shares
- Prerequisites to configure universal shares
- Managing universal shares
- Restoring data using universal shares
- Advanced features of universal shares
- Direct universal share data to object store
- Universal share accelerator for data deduplication
- Configure a universal share accelerator
- About the universal share accelerator quota
- Load backup data to a universal share with the ingest mode
- Managing universal share services
- Troubleshooting issues related to universal shares
- Configuring isolated recovery environment (IRE)
- Configuring an isolated recovery environment using the web UI
- Configuring an isolated recovery environment using the command line
- Using the NetBackup Deduplication Shell
- Managing users from the deduplication shell
- About the external MSDP catalog backup
- Managing certificates from the deduplication shell
- Managing NetBackup services from the deduplication shell
- Monitoring and troubleshooting NetBackup services from the deduplication shell
- Managing S3 service from the deduplication shell
- Troubleshooting
- About unified logging
- About legacy logging
- Troubleshooting MSDP configuration issues
- Troubleshooting MSDP operational issues
- Troubleshooting multi-domain issues
- Appendix A. Migrating to MSDP storage
- Appendix B. Migrating from Cloud Catalyst to MSDP direct cloud tiering
- About direct migration from Cloud Catalyst to MSDP direct cloud tiering
- Appendix C. Encryption Crawler
Use Direct Network File System to improve the performance of Network Attached Storage
Direct Network File System (dNFS) provides performance improvements for Network Attached Storage (NAS) over standard NFS for Oracle Databases. Direct NFS allows Oracle software to skip the operating system's NFS client when it communicates with a storage server. Direct NFS also improves High Availability (HA) and scalability by supporting up to four parallel network paths to storage and load-balancing across these paths. These improvements result in cost savings for database storage.
NFS servers must have write size values (wtmax) of 32768 or higher.
NFS mount points must be mounted by both the operating system's NFS client and the Direct NFS client.
Set the NFS buffer size parameters rsize and wsize to at least 1048576, as in the following /etc/fstab mount entry:
nfs_server:/vol/DATA/oradata /mnt/ nfs rw,bg,hard,nointr,rsize=1048576,wsize=1048576,tcp,actimeo=0,vers=3,timeo=600
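After the file system is mounted, you can confirm the options that were actually negotiated. A minimal check, assuming the Linux nfs-utils package is installed:
nfsstat -m
The output lists each NFS mount with its effective flags; verify that rsize=1048576 and wsize=1048576 appear for the Oracle data mount.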
Ensure that the TCP network buffer size is large enough not to hinder Direct NFS performance. The following command verifies the TCP buffer size:
sysctl -a | grep -e 'net.ipv4.tcp_[rw]mem'
Sample output:
net.ipv4.tcp_rmem = 4096 87380 1056768
net.ipv4.tcp_wmem = 4096 16384 1056768
To change the buffer size, open /etc/sysctl.conf as root and modify the following values:
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
Before you run sysctl -p to apply the changes, restart the network with /etc/rc.d/init.d/network restart.
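Put together, the steps look like this (a sketch assuming the RHEL-style init script shown above):
/etc/rc.d/init.d/network restart
sysctl -p
sysctl -a | grep -e 'net.ipv4.tcp_[rw]mem'
The final command should now report the increased maximum values (4194304).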
To enable Direct NFS, run the following commands and restart the database instance:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on
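After the instance restarts, you can confirm that Direct NFS is in use. A hedged check; the alert log path below is an assumption based on the default diagnostic destination, and v$dnfs_servers is populated only after the instance opens files over Direct NFS:
# Look for the Direct NFS banner in the alert log (path is an assumption)
grep -i "Direct NFS" $ORACLE_BASE/diag/rdbms/*/*/trace/alert_*.log
# List the NFS servers that the Direct NFS client has mounted
sqlplus -s / as sysdba <<'EOF'
SELECT svrname, dirname FROM v$dnfs_servers;
EOF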
To disable Direct NFS, run the following commands and remove the oranfstab file:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_off
Direct NFS searches for the oranfstab file in the following locations, in order; the first matching entry that it finds determines the mount point. You can update the file to set up multipathing and handle other configuration details. (A quick way to check these locations is shown after the list.)
$ORACLE_HOME/dbs
/var/opt/oracle
/etc/mnttab
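A minimal sketch to see which of these locations exist on a given host; the loop and its output format are illustrative, not part of Oracle's tooling:
for f in $ORACLE_HOME/dbs/oranfstab /var/opt/oracle/oranfstab /etc/mnttab; do
    [ -f "$f" ] && echo "present: $f"
done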
Use the following parameters to create the oranfstab file for each NFS server that you want to access using Direct NFS:
Table: Parameters to create the oranfstab file
Parameter | Usage |
---|---|
server | Unique identifier for the NFS server. |
local | Network paths (up to 4) on the database host. |
path | Network paths (up to 4) on the NFS server. |
export | The exported volume on the NFS server. |
mount | The local mount point for the exported volume. |
mnt_timeout | Time in seconds to wait for the first mount. |
dontroute | Prevents the operating system from routing outgoing messages. |
management | Network path for the NFS server management interface. |
nfs_version | The NFS protocol version that the Direct NFS client uses. |
security_default | The default security mode that applies to all the exported NFS server paths for a server entry. |
security | The security level that enables Kerberos authentication with the Direct NFS client. |
community | The community string for use in SNMP queries. |
Sample contents of an oranfstab file:
server: myNFSServer1
local: 192.168.1.1
path: 192.168.1.2
local: 192.168.2.1
path: 192.168.2.2
local: 192.168.3.1
path: 192.168.3.2
local: 192.168.4.1
path: 192.168.4.2
export: /vol/oradata1
mount: /mnt/oradata1
export: /vol/oradata2
mount: /mnt/oradata2
mnt_timeout: 600
Ensure that you set up the oradism binary at the following path: $ORACLE_HOME/bin/oradism. Direct NFS uses this oradism binary to issue mounts as root. The file must be local to each node and owned by the root user. To set root ownership, run the chown root $ORACLE_HOME/bin/oradism command. Run chmod 4755 $ORACLE_HOME/bin/oradism to set the setuid bit so that the oradism file has the correct access permissions.
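To confirm the result, you can list the file:
ls -l $ORACLE_HOME/bin/oradism
The mode should read -rwsr-xr-x with root as the owner; the s in the owner's execute position is the setuid bit that chmod 4755 sets.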
Refer to the contents of the following table for client monitoring.
Table: v$ tables
Item | Description |
---|---|
v$dnfs_servers | Lists the NFS servers that the Direct NFS client has mounted. |
v$dnfs_files | Lists the files that the Direct NFS client has opened. |
v$dnfs_channels | Lists the open TCP connections that the Direct NFS client has established to the NFS server. |
v$dnfs_stats | Lists the statistics on the different NFS operations that the Oracle processes have issued. |
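For example, a quick way to inspect these views from the shell (assumes sqlplus on the PATH and SYSDBA access):
sqlplus -s / as sysdba <<'EOF'
SELECT * FROM v$dnfs_servers;
SELECT * FROM v$dnfs_channels;
EOF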
Ensure that Oracle 11g software or later is installed on the Windows server by using the Oracle installer.
Create and configure the oranfstab file. You must add the oranfstab file in the %ORACLE_HOME%\dbs directory. Ensure that no file extension (for example, .txt) is added to the file name.
Configure the oranfstab file as follows:
C:\>type %ORACLE_HOME%\dbs\oranfstab
server: lnxnfs          <--- NFS server host name
path: 10.171.52.54      <--- First path to the NFS server, that is, the NFS server NIC
local: 10.171.52.33     <--- First client-side NIC
export: /oraclenfs mount: y:\
uid:1000
gid:1000
C:\>
The Direct NFS client uses the UID and the GID values to access all NFS servers that are listed in the oranfstab file. Direct NFS ignores a UID or a GID value of 0. The UID and the GID in the earlier example are those of an Oracle user on the NFS server.
The exported path from the NFS server must be accessible for read, write, and execute operations by the Oracle user with the UID and the GID specified in the oranfstab file. If neither UID nor GID is listed, the default value of 65534 is used to access all NFS servers listed in the oranfstab file.
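To find the values to list, you can run id for the Oracle software owner on the NFS server; the user name oracle and the output shown here are illustrative assumptions:
id oracle
uid=1000(oracle) gid=1000(oinstall) groups=1000(oinstall),1001(dba)
The uid and gid fields of this output are the values to place in the oranfstab file.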