InfoScale™ Cluster Server 9.0 Bundled Agents Reference Guide - Solaris
- Introducing bundled agents
- Storage agents
- DiskGroup agent
- DiskGroupSnap agent
- Notes for DiskGroupSnap agent
- Sample configurations for DiskGroupSnap agent
- Disk agent
- Volume agent
- VolumeSet agent
- Sample configurations for VolumeSet agent
- Mount agent
- Sample configurations for Mount agent
- Zpool agent
- VMwareDisks agent
- SFCache agent
- Network agents
- About the network agents
- IP agent
- NIC agent
- About the IPMultiNICB and MultiNICB agents
- IPMultiNICB agent
- Sample configurations for IPMultiNICB agent
- MultiNICB agent
- Sample configurations for MultiNICB agent
- DNS agent
- Agent notes for DNS agent
- About using the VCS DNS agent on UNIX with a secure Windows DNS server
- Sample configurations for DNS agent
- File share agents
- NFS agent
- NFSRestart agent
- Share agent
- About the Samba agents
- NetBios agent
- Service and application agents
- AlternateIO agent
- Apache HTTP server agent
- Application agent
- Notes for Application agent
- Sample configurations for Application agent
- CoordPoint agent
- LDom agent
- Dependencies
- Process agent
- Usage notes for Process agent
- Sample configurations for Process agent
- ProcessOnOnly agent
- Project agent
- RestServer agent
- Zone agent
- Infrastructure and support agents
- Testing agents
- Replication agents
Support for cloned Application agent
The Application agent makes applications highly available when a suitable ISV agent is not available. To make multiple applications highly available in a cluster, you must create a service group for each application. InfoScale lets you clone the Application agent so that you can configure a separate service group for each application. You must then assign the appropriate operator permissions for each service group so that it functions as expected.
Note:
A cloned Application agent is also IMF-aware.
To clone the Application agent
- Stop the cluster.
# hastop -all -force
- On each node, copy the Application agent directory, and rename the agent as follows:
# cd /opt/VRTSvcs/bin
# cp -r Application newAppName
# cd newAppName
# mv ApplicationAgent newAppNameAgent
- On any one cluster node, navigate to the following directory:
# cd /etc/VRTSvcs/conf/config
Create a newAppNameAgent.cf file in this directory with the following content:
type newAppName (
    static int IMF{} = { Mode=3, MonitorFreq=1, RegisterRetryLimit=3 }
    static str IMFRegList[] = { MonitorProcesses, User, PidFiles, MonitorProgram, StartProgram, LevelTwoMonitorFreq }
    static keylist SupportedActions = { "program.vfd", "user.vfd", "cksum.vfd", getcksum, propcv }
    static int LevelTwoMonitorFreq = 1
    static str ArgList[] = { User, StartProgram, StopProgram, CleanProgram, MonitorProgram, PidFiles, MonitorProcesses, EnvFile, UseSUDash, State, IState, StartOnly }
    static int ContainerOpts{} = { RunInContainer=1, PassCInfo=0 }
    str User = root
    str StartProgram
    str StopProgram
    str CleanProgram
    str MonitorProgram
    str PidFiles[]
    str MonitorProcesses[]
    str EnvFile
    boolean UseSUDash = 0
    boolean StartOnly = 0
)
Include the newAppNameAgent.cf file in main.cf. Then, start the cluster.
# hastart
- Start the cluster on all the other nodes to propagate the addition of the cloned agent.
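The per-node copy and rename in the procedure above can be scripted. The following is a dry-run sketch that only prints the commands it would execute rather than running them; the clone name MyApplication is an example value (it matches the sample configuration below), and the agent directory is the default path from the procedure.

```shell
#!/bin/sh
# Dry-run sketch: print the commands that would clone the Application
# agent directory under a new name. The real commands must be run on
# every cluster node while the cluster is stopped.
# NEWNAME is an example; substitute your cloned agent's type name.
NEWNAME=MyApplication
AGENTDIR=/opt/VRTSvcs/bin

echo "cd $AGENTDIR"
echo "cp -r Application $NEWNAME"
echo "cd $NEWNAME"
echo "mv ApplicationAgent ${NEWNAME}Agent"
```

Removing the `echo` wrappers turns the sketch into the actual per-node copy step.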
The following sample includes an application (app1) and a cloned application (my_app1).
Application app1 (
    StartProgram = "/opt/app1/start"
    StopProgram = "/opt/app1/stop"
    CleanProgram = "/opt/app1/stop"
    MonitorProgram = "/opt/app1/monitor"
    PidFiles = { "/tmp/app1.pid" }
    )

MyApplication my_app1 (
    StartProgram = "/opt/my_app1/start"
    StopProgram = "/opt/my_app1/stop"
    CleanProgram = "/opt/my_app1/stop"
    MonitorProgram = "/opt/my_app1/monitor"
    PidFiles = { "/tmp/my_app1.pid" }
    )
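A configuration like the one above assumes that main.cf pulls in both the standard type definitions and the cloned agent's type file. A sketch of the corresponding include lines, using the newAppNameAgent.cf file name created in the procedure:

```
include "types.cf"
include "newAppNameAgent.cf"
```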