To make better use of system resources and to reduce the time it takes to complete certain maintenance tasks, AD utilities are designed to process jobs concurrently. This parallel processing makes use of managers, which direct the actions of worker processes. The manager assigns each worker a processing job and monitors its progress. When a worker completes a job, the manager assigns it another until all jobs are complete.

AD Administration and AutoPatch can be directed to distribute processing tasks across multiple remote machines in a multi-node system. This type of parallel processing operation is called Distributed AD. It further reduces the time to complete a maintenance task by utilizing the processing capabilities of all of the nodes in the system.

Because the AD workers create and update file system objects as well as database objects, Distributed AD can only be used on systems that are using a Shared Application Tier File System to ensure that files are maintained in a single, centralized location.

Using Distributed AD
====================
Start AutoPatch or AD Administration with Distributed AD worker options
On one of your shared application tier file system nodes, start your AutoPatch or AD Administration session with the following command line options:

localworkers=<number of workers on the local node>
workers=<total number of workers>

For example, to run an AutoPatch session with a total of eight workers (three workers on the local node and five workers on a remote node):

$ adpatch workers=8 localworkers=3

Start AD Controller on remote node(s)
On each of the additional shared application tier file system nodes, start an AD Controller session with the additional distributed command line option:

$ adctrl distributed=y

After providing basic information, AD Controller will prompt for the worker number(s) to be started. For example, to start workers 4 through 8 on a second node, enter "4 5 6 7 8" or "4-8".



The following is an example of starting a three-node session with a total of five workers:

Node 1:
$ adpatch localworkers=3 workers=5

Node 2:
$ adctrl distributed=y
Enter the worker range: 4

Node 3:
$ adctrl distributed=y
Enter the worker range: 5

In this example, workers 1 through 3 run on node 1, worker 4 runs on node 2, and worker 5 runs on node 3.

During execution of AutoPatch or AD Administration, you can start a normal AD Controller session (without distributed=y) from any of the nodes in the shared application tier file system environment to perform any of the standard AD Controller operations. All of the standard AD Controller options have the same effect on both local and non-local workers, with the following exception:

option 6: Tell manager to start a worker that has shutdown

This option will always result in a worker being started on the same node that the AutoPatch or AD Administration utility is running on. This means that if an AutoPatch worker exited on a distributed node, choosing this option will start the worker on the node that is currently running AutoPatch, rather than on the node that was originally running the worker.

AD Controller Log Files

AD Controller creates its log file on the node where the AD Controller session is started. This prevents file locking issues on certain platforms. It is therefore recommended that the AD Controller log file name include the name of the node from which the session is invoked, so that logs from different nodes can be distinguished.
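For example, since AD Controller prompts for its log file name at startup, the node name can be supplied as part of the name given at that prompt. The prompt text and filename below are illustrative, not verbatim output:

$ adctrl distributed=y
...
Filename [adctrl.log] : adctrl_node2.log

With one log file per node named this way, it is clear which node produced which log when reviewing a multi-node session.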
Managing Distributed Workers and Nodes

If a worker has exited on a distributed node
Make sure that the worker is not running.
Start AD Controller on the same distributed node using:
$ adctrl distributed=y force_restart=y
Enter the worker number(s) to start when prompted. AD Controller will start those workers and then exit.

Start a normal AD Controller session on any node (without distributed=y) and tell the manager that the workers failed (option 4).

If a node in the Distributed AD environment fails
If you can successfully repair the node and wish to start reusing it in the current session:
Make sure that no workers are running.
Start AD Controller with distributed=y on the same node, entering the range of workers to be started.
Start AD Controller again using:
$ adctrl distributed=y force_restart=y
Enter the worker number(s) to start when prompted. AD Controller will start those workers and then exit.

Start a normal AD Controller session on any node (without distributed=y) and tell the manager that the workers failed (option 4).

If it is not possible to repair the node during the patching session, use AD Controller to tell the manager that the workers on that node have failed (option 4); the manager will then reassign their jobs to the remaining workers.

If the database shuts down unexpectedly or the connection to the database fails
If the database connection fails during your AutoPatch session, all AutoPatch and AD Controller sessions will exit. Once the database is available again:
Restart AD Controller with distributed=y on each distributed node, entering the worker ranges again.

Start a normal AD Controller session on any node (without distributed=y) and tell the manager that the workers failed (option 4).

Set all workers to restart status (option 2).
Restart the AutoPatch session. This will cause the workers on the distributed nodes to restart.
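Putting these steps together, an illustrative recovery sequence for the earlier eight-worker example (three local workers on node 1, workers 4 through 8 on node 2) might look like the following; the menu choices are shown as parenthetical notes rather than verbatim AD Controller output:

Node 2 (distributed node):
$ adctrl distributed=y
Enter the worker range: 4-8

Any node (normal session):
$ adctrl
(choose option 4 to tell the manager that the workers failed,
then option 2 to set all workers to restart status)

Node 1:
$ adpatch workers=8 localworkers=3

Once AutoPatch is restarted on node 1, the workers on the distributed node restart and the session resumes where it left off.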