

File per machine—Many of the MOM products use this approach, and we will see that Microsoft's current Directory service is also based on it. Here the Directory is a host-specific file containing information on objects and their IDs, users, groups, applications, and preferences, plus other information relevant to that host. Thus each node holds the information that the host and all its clients need to operate.
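
To make this concrete, here is a minimal sketch, in Python, of what loading and querying such a host-specific Directory file might look like. The file name, section name, and record format are all invented for illustration; they are not from the book or any real product.

    import configparser

    # Hypothetical per-host Directory file, e.g. hostdir.cfg:
    #
    #   [objects]
    #   OrderServer = uuid=...; host=this-node; port=4021
    #
    # Each node carries its own copy, and the administrator must keep
    # every copy consistent by hand.

    def load_host_directory(path="hostdir.cfg"):
        """Read this host's Directory file; done once per node."""
        directory = configparser.ConfigParser()
        directory.read(path)
        return directory

    def lookup_object(directory, name):
        """Resolve an object name to its registration record (or None)."""
        return directory.get("objects", name, fallback=None)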

The administrator generally has to set up each of these files manually. Setup may be possible over the network from a single remote console, which eases the burden somewhat, although every file still has to be set up individually. The alternative is for the administrator to physically visit each node to set up its file—a tedious task.

Overall, whichever method of setup is supported, this design option is administratively complex, time-consuming, and highly error-prone. Much can change in a distributed application over time, and the administrator has to remember to make those changes on every node, every time the configuration changes. If he or she gets it wrong, the applications will fail, so this solution is not geared to high reliability.

It is also a solution that is more difficult to keep secure. With the data distributed over many nodes, restricting access to only the administrator becomes much harder. On the other hand, because each node holds the information it needs locally, availability and performance are likely to be satisfactory.

Thus, this solution provides reasonably good availability and performance, but it risks poor reliability, is less secure, and incurs a high administrative overhead.

Replicated file—In this approach, the administrator sets up a single file containing all the information needed by the entire application or applications, and the middleware itself replicates this file on each node on the network. If the administrator changes the data, he or she changes the master copy on the main node, and the middleware then propagates the changes to all the other nodes.
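
As an illustration only, the propagation step might be sketched as below. The node paths and file names are invented, and real middleware would push changes over its own network transport rather than copying whole files.

    import shutil

    # Invented replica locations; in practice the middleware learns the
    # set of nodes from the network configuration.
    REPLICA_NODES = ["/mnt/node1", "/mnt/node2", "/mnt/node3"]

    def publish_master(master_path="directory.master"):
        """Push the administrator's master Directory file to every node.
        The administrator edits only the master; this propagation is
        the part the middleware automates."""
        for node in REPLICA_NODES:
            shutil.copy(master_path, node + "/directory.replica")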

Security is usually as easy to maintain as it is with the single file approach. In effect, no one can look at or change the data in the replicas, as these are managed by the middleware itself; for all practical purposes, the replicated files are invisible to both the developer and the administrator. Restrictions on access are then applied to the master copy in much the same way as with the single file approach.

Generally speaking, the data is held in memory—loaded at system start-up—with a backup of the data on disk. This solution provides excellent performance and availability, as all the information an application needs is stored on the local node: no network access is needed to obtain the data, and contention for a shared Directory resource is reduced.
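
A minimal sketch of that load-at-startup pattern, assuming an invented JSON on-disk backup format, might look like this:

    import json

    class DirectoryCache:
        """In-memory replica of the Directory, loaded once at start-up
        and backed by a copy on local disk (format assumed here)."""

        def __init__(self, backup_path="directory.replica"):
            self.backup_path = backup_path
            with open(backup_path) as f:       # one disk read at start-up
                self.entries = json.load(f)

        def lookup(self, name):
            # Pure memory access: no network, no disk I/O.
            return self.entries.get(name)

        def apply_update(self, name, record):
            """Invoked when the middleware pushes a change from the master."""
            self.entries[name] = record
            with open(self.backup_path, "w") as f:
                json.dump(self.entries, f)     # keep the disk backup current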

Performance is further improved because lookups are served from memory rather than from disk. Furthermore, the likelihood of the data's being incorrect is reduced, as the administrator has only to create and maintain one file; he or she does not have to set up numerous files on each node manually.

There is a small price to be paid for this: duplicating the data means additional storage is needed on every node. Generally speaking, however, this is a minor overhead.

A somewhat larger problem arises if the configuration is extremely large, with thousands of nodes. In this case, the memory needed to hold the entire configuration may simply be too great for smaller machines, and alternative solutions may have to be used. These often fall back on the single file approach: a set of clients is given a configuration file that identifies its nearest Directory service node, or nodes (for availability). The clients then access this node to obtain Directory information, with the overheads we saw in the description of the single file solution.
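
That hybrid arrangement might be sketched as follows; the file name is invented, and the query parameter stands in for whatever network call the middleware actually provides.

    # Invented local file listing this client's nearest Directory nodes,
    # one per line, in order of preference (more than one, for availability).
    NEAREST_NODES_FILE = "nearest_directory_nodes.txt"

    def remote_lookup(name, query):
        """Ask the nearest reachable Directory node to resolve a name.
        `query` is a placeholder for the middleware's network call."""
        with open(NEAREST_NODES_FILE) as f:
            nodes = [line.strip() for line in f if line.strip()]
        for node in nodes:
            try:
                return query(node, name)   # network round-trip per lookup
            except ConnectionError:
                continue                   # node down: try the next one
        raise LookupError("no Directory node reachable for " + repr(name))

Note that every lookup now costs a network round-trip, which is exactly the overhead the fully replicated design avoids.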

There are also other minor problems with this solution. If the master node fails, all the other nodes can continue working from their replicated copies of the Directory, but the administrator cannot make updates until the master has been restored. Furthermore, the administrator still has to create the file and keep it up-to-date as the applications and their configuration change, incurring the slight risk that the information may be out-of-date or incorrect, affecting reliability.

