Replication Manager supports shared access to a database environment from multiple processes.
Each site in a replication group has just one network address (TCP/IP host name and port number). This means that only one process can accept incoming connections. At least one application process must invoke the DB_ENV->repmgr_start() method to initiate communications and management of the replication state.
If it is convenient, multiple processes may issue calls to the Replication Manager configuration methods, and multiple processes may call DB_ENV->repmgr_start(). Replication Manager automatically opens the TCP/IP listening socket in the first process to do so (we'll call it the "replication process" here), and ignores this step in any subsequent processes ("subordinate processes").
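For illustration, the sketch below shows the calls a process sharing the environment might make to configure the local site and start Replication Manager; the host name, port number, thread count, and start flag are placeholders, and error handling is abbreviated. The same code can run in every process: the first caller of DB_ENV->repmgr_start() becomes the "replication process", and later callers become "subordinate processes".

#include <db.h>

/* Hypothetical helper: configure this site's address and start
 * Replication Manager in the current process.  Assumes dbenv has
 * already been opened. */
int
start_replication(DB_ENV *dbenv)
{
    DB_SITE *site;
    int ret;

    /* Describe the local site; the address and port are examples. */
    if ((ret = dbenv->repmgr_site(dbenv,
        "site1.example.com", 5000, &site, 0)) != 0)
        return (ret);
    (void)site->set_config(site, DB_LOCAL_SITE, 1);
    if ((ret = site->close(site)) != 0)
        return (ret);

    /* Three message-processing threads; DB_REP_MASTER or
     * DB_REP_CLIENT could be used instead of an election. */
    return (dbenv->repmgr_start(dbenv, 3, DB_REP_ELECTION));
}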
The local site network address is stored in shared memory, and remains intact even when (all) processes close their environment handles gracefully and terminate. A process that opens an environment handle without running recovery automatically inherits the existing local site network address configuration. Such a process may not change the local site address, although it may redundantly specify a local site configuration matching the one already in effect.
In order to change the local site network address, the application must run recovery. The application can then specify a new local site address before restarting Replication Manager. The application should also remove the old local site address from the replication group if it is no longer needed.
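A sketch of this sequence follows. It assumes the DB_SITE handle interface, uses placeholder host names and port numbers, and abbreviates error handling; the point at which the old address can safely be removed from the group is application-specific, so the ordering shown is only illustrative.

int
change_local_address(DB_ENV *dbenv, const char *env_home)
{
    DB_SITE *site;
    int ret;

    /* Run recovery; only then may the local address be changed. */
    if ((ret = dbenv->open(dbenv, env_home,
        DB_CREATE | DB_RECOVER | DB_INIT_LOCK | DB_INIT_LOG |
        DB_INIT_MPOOL | DB_INIT_REP | DB_INIT_TXN | DB_THREAD, 0)) != 0)
        return (ret);

    /* Specify the new local site address. */
    if ((ret = dbenv->repmgr_site(dbenv,
        "newhost.example.com", 5001, &site, 0)) != 0)
        return (ret);
    (void)site->set_config(site, DB_LOCAL_SITE, 1);
    if ((ret = site->close(site)) != 0)
        return (ret);

    if ((ret = dbenv->repmgr_start(dbenv, 3, DB_REP_ELECTION)) != 0)
        return (ret);

    /* Drop the old address from the group if it is no longer needed;
     * the DB_SITE handle may not be used after remove(). */
    if ((ret = dbenv->repmgr_site(dbenv,
        "oldhost.example.com", 5000, &site, 0)) != 0)
        return (ret);
    return (site->remove(site));
}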
Note that Replication Manager applications must follow all the usual rules for Berkeley DB multi-threaded and/or multi-process applications, such as ensuring that the recovery operation occurs single-threaded, only once, before any other thread or process operates in the environment. Since Replication Manager creates its own background threads which operate on the environment, all environment handles must be opened with the DB_THREAD flag, even if the application is otherwise single-threaded per process.
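For example, an application might separate the two open paths as in the following sketch; the flag set and the helper function are illustrative only.

#define ENV_FLAGS (DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL | \
    DB_INIT_REP | DB_INIT_TXN | DB_THREAD)    /* DB_THREAD is required */

int
open_env(DB_ENV *dbenv, const char *home, int initial_process)
{
    u_int32_t flags = ENV_FLAGS;

    /* Recovery runs exactly once, in the first process, before any
     * other thread or process uses the environment. */
    if (initial_process)
        flags |= DB_CREATE | DB_RECOVER;

    return (dbenv->open(dbenv, home, flags, 0));
}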
At the replication master site, each Replication Manager process opens outgoing TCP/IP connections to all clients in the replication group. It uses these direct connections to send to clients any log records resulting from update transactions that the process executes. But all other replication activity (message processing, elections, and so on) takes place only in the "replication process".
Replication Manager notifies the application of certain events, using the callback function configured with the DB_ENV->set_event_notify() method. These notifications occur only in the process where the event itself occurred. Generally this means that most notifications occur only in the "replication process". Currently the only replication notification that can occur in a "subordinate process" is DB_EVENT_REP_PERM_FAILED.
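The following sketch shows the shape of such a callback; the events checked and the responses are illustrative.

#include <stdio.h>
#include <db.h>

static void
event_callback(DB_ENV *dbenv, u_int32_t which, void *info)
{
    (void)dbenv;
    (void)info;

    switch (which) {
    case DB_EVENT_REP_MASTER:
        /* Delivered in the process where the event occurred. */
        printf("this site is now the master\n");
        break;
    case DB_EVENT_REP_PERM_FAILED:
        /* The one replication event a subordinate process can see. */
        printf("transaction did not receive sufficient acknowledgements\n");
        break;
    default:
        break;
    }
}

/* Registered early in the application, for example:
 *     dbenv->set_event_notify(dbenv, event_callback);
 */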
Spawning a subprocess from a process that is running Replication Manager is not supported.
Multi-process Replication Manager applications should handle failures in a manner consistent with the rules described in Handling failure in Transactional Data Store applications. To summarize, there are two ways to handle failure of a process:
The simple way is to kill all remaining processes, run recovery, and then restart all processes from the beginning. But this can be a bit drastic.
Using the DB_ENV->failchk() method, it is sometimes possible to leave surviving processes running, and just restart the failed process.
Multi-process Replication Manager applications using this technique must start a new process when an old process fails. It is not possible for a "subordinate process" to take over the duties of a failed "replication process". If the failed process happens to be the replication process, then after a failchk() call the next process to call DB_ENV->repmgr_start() will become the new replication process.
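A sketch of this approach follows; the is_alive() liveness callback and the point at which failchk() is called are application-specific, and the code assumes that DB_ENV->set_thread_id() and DB_ENV->set_isalive() were configured before the environment was opened.

#include <sys/types.h>
#include <signal.h>
#include <db.h>

/* Application-supplied liveness check (POSIX example). */
static int
is_alive(DB_ENV *dbenv, pid_t pid, db_threadid_t tid, u_int32_t flags)
{
    (void)dbenv;
    (void)tid;
    (void)flags;
    return (kill(pid, 0) == 0);
}

/* Called by a surviving process when it learns a sibling has failed. */
int
handle_failed_process(DB_ENV *dbenv)
{
    int ret;

    ret = dbenv->failchk(dbenv, 0);
    if (ret == DB_RUNRECOVERY) {
        /* failchk() could not clean up after the failed process; fall
         * back to stopping all processes and running full recovery. */
    }
    return (ret);
}

/* Configured before DB_ENV->open(), for example:
 *     dbenv->set_isalive(dbenv, is_alive);
 */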