DCOM Explained
by Rosemary Rock-Evans
Digital Press
ISBN: 1555582168   Pub Date: 09/01/98



Load Balancing

Load balancing spreads requests for a server across multiple machines, each of which holds a copy of the server. Load balancing can be combined with multithreading: a middleware product can be multithreaded on a single machine and also balance load across machines.

Certain assumptions underlie load balancing and determine whether it can be used. First and foremost, the component/server must be capable of being placed on more than one machine. Clearly, if the component accesses data in a database that is reachable from only one machine, the component cannot be duplicated. If the database can be accessed from more than one machine, then the component can be duplicated.


Figure 13.3  Load balancing

Second, the Directory must support duplicating the component around the network without the programmer's having to know that the component is duplicated. This latter point is important. Many Directory systems enable you to put the server on many machines, but each time you do, you must give it a different name. The client then has to use that different name, so true automatic, transparent load balancing isn't possible.

It is worth noting that Microsoft's current Registry service does enable you to duplicate components anywhere on a network (we will see how in the chapter on the Directory services), so transparent load balancing is possible. This contrasts with the approach taken in CORBA, where the Directory/Naming service does not support the duplication of a component around the network. Apart from the obvious danger of this from a reliability viewpoint (the component becomes a single point of failure that could bring down your whole system), it also means CORBA cannot support load balancing.

Third, load balancing must be an automatic function of the middleware, not a program-controlled service. If we take a sophisticated product such as TOP END as an example, you will be able to see how load balancing works when it is automatic.

In TOP END, it is the Directory service that decides which of a number of server instances on different nodes is to receive the request. The Directory makes this decision using one of three methods; the administrator decides which of the three is to be used. The options available are:

  Random-Used where the servers and their nodes offer the same level of service and workload distribution cannot be predicted. As requests are received, they are distributed at random among the server instances. With this method it is not possible to predict which server will receive a given message, but over time each server will receive approximately the same number of requests.
  Round robin-The workload is distributed evenly across the servers in a defined order: requests go one at a time, in a predictable sequence, to each server in turn.
  Enhanced routing-The Directory chooses a server using a load-balancing algorithm.
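The first two methods are simple enough to sketch in a few lines of C++. The following is an illustrative simulation only, not TOP END code; the class and member names are invented for the example:

```cpp
#include <cstdlib>
#include <string>
#include <utility>
#include <vector>

// Hypothetical dispatcher illustrating random and round-robin routing.
// The names here are invented for illustration; they are not TOP END APIs.
class Dispatcher {
public:
    explicit Dispatcher(std::vector<std::string> servers)
        : servers_(std::move(servers)), next_(0) {}

    // Random routing: any server may receive any given request, but over
    // many requests each server receives roughly the same share.
    const std::string& pickRandom() {
        return servers_[std::rand() % servers_.size()];
    }

    // Round robin: requests go to each server in turn, in a fixed,
    // predictable order.
    const std::string& pickRoundRobin() {
        const std::string& s = servers_[next_];
        next_ = (next_ + 1) % servers_.size();
        return s;
    }

private:
    std::vector<std::string> servers_;
    std::vector<std::string>::size_type next_;
};
```

Note the trade-off the text describes: with `pickRoundRobin` the destination of every request is predictable, while with `pickRandom` only the long-run distribution is.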

The load-balancing algorithm is based on three main parameters, which can be used in combination:

  Local/Remote Ratio-A ratio assigned by the administrator to direct workload either locally or remotely, based on factors known to the administrator such as communication costs and availability. The ratio describes, within a TOP END node, what proportion of requests should be processed locally rather than remotely.
  Node Desirability-A weighted value assigned by the administrator to each node, based on its size, processor speed, geographical location on the network, and other factors. In essence, the administrator decides how desirable that node should appear to other nodes.
  Server Copies/Potential Ratio-A ratio assigned by the administrator to influence how TOP END routes messages among multiple server nodes, based on how many actual and potential server instances are available. In effect, the ratio describes the node's potential to start up further server instances if the number currently running is not enough. To make this work, TOP END periodically propagates status information to the other server nodes in the configuration; the frequency of this propagation is set by a tuneable value called the “Notification Frequency.” The administrator can use this ratio to throttle or tune the sensitivity of server instance startup and shutdown activity.
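To give a feel for how a weight such as Node Desirability might influence routing, here is a small sketch of weighted random selection: each node's chance of receiving a request is proportional to its administrator-assigned weight. This is an invented illustration of the general technique, not TOP END's actual algorithm:

```cpp
#include <cstdlib>
#include <string>
#include <vector>

// Illustrative only: weighted selection of the kind a load-balancing
// algorithm might apply to a "node desirability" value. The names are
// invented for this sketch; they are not TOP END APIs.
struct Node {
    std::string name;
    int desirability;  // weight assigned by the administrator
};

// Returns the node selected for a request: a random draw in which each
// node's probability is proportional to its desirability weight.
// A node with weight 0 is never selected.
const Node& pickByDesirability(const std::vector<Node>& nodes) {
    int total = 0;
    for (const Node& n : nodes) total += n.desirability;
    int draw = std::rand() % total;
    for (const Node& n : nodes) {
        if (draw < n.desirability) return n;
        draw -= n.desirability;
    }
    return nodes.back();  // unreachable when total > 0
}
```

In this scheme, a node weighted 3 receives on average three times as many requests as a node weighted 1, which is the effect the administrator is after when marking a large, fast machine as more desirable.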

TOP END has perhaps the most sophisticated load balancing capability of any middleware product I know of, and you can see from this description just how well developed the services in these really heavy-duty products actually are.

But what does MTS provide? Despite all this explanation, the answer is currently nothing! There are, however, plans to support load balancing via MTS, and, as we have seen, it is potentially feasible given the structure of the Registry. Load balancing is planned for version 3 and will initially be based on a random method of allocation.

Shared Property Manager

We saw in the chapter on Windows NT that the operating system does provide support for shared memory on a single host. Windows NT has a virtual memory system which enables up to 32 processes to read the shared memory area.

The Shared Property Manager service provided with MTS extends this capability and makes it available to components. The Shared Property Manager thus provides shared access to information in main memory by components on one machine, but again, distributed shared memory is not supported. Microsoft provided the Shared Property Manager so that the programmer had an alternative to the various persistent store mechanisms available via the Active Data Object (of which we will learn more later in the chapter on Data Objects). It effectively provides a mechanism for storing data that does not need to be “durable.”

The Shared Property Manager has a built-in locking mechanism, just like Windows NT, but what makes it different is that it is accessed and manipulated through COM interfaces. These interfaces organize the data hierarchically, from groups to properties and down to values. And so, for the first time in this book, I will actually give you a little bit of code from these interfaces to show you how this might look.

ISharedPropertyGroupManager   [to create the property group]
ISharedPropertyGroup          [methods:]
    CreateProperty
    CreatePropertyByPosition
    get_Property
    get_PropertyByPosition
ISharedProperty
    get_Value
    put_Value
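To show the shape of this group/property/value hierarchy in use, here is a small, self-contained C++ sketch that mimics it with a built-in lock. This is not MTS code: apart from the interface names listed above, every class and member name here is invented, and the COM plumbing (interface pointers, HRESULTs) is deliberately omitted.

```cpp
#include <map>
#include <mutex>
#include <string>

// Illustrative stand-in for the shared property hierarchy: a "group"
// holds named properties, each holding a value, with locked access.
// Not MTS code; names below are invented for the sketch.
class SharedPropertyGroup {
public:
    // Roughly analogous to creating a property and calling put_Value.
    void putValue(const std::string& name, long value) {
        std::lock_guard<std::mutex> lock(mutex_);  // built-in locking
        properties_[name] = value;
    }
    // Roughly analogous to get_Property followed by get_Value;
    // returns 0 if the property has not been set.
    long getValue(const std::string& name) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = properties_.find(name);
        return it == properties_.end() ? 0 : it->second;
    }
private:
    std::mutex mutex_;
    std::map<std::string, long> properties_;
};

// The "group manager" hands out named groups, playing the role that
// ISharedPropertyGroupManager plays in MTS: components on the same
// machine asking for the same group name share the same group.
class SharedPropertyGroupManager {
public:
    SharedPropertyGroup& createPropertyGroup(const std::string& name) {
        std::lock_guard<std::mutex> lock(mutex_);
        return groups_[name];  // creates the group on first use
    }
private:
    std::mutex mutex_;
    std::map<std::string, SharedPropertyGroup> groups_;
};
```

The key point the sketch illustrates is that two components asking the manager for the same group name end up sharing one in-memory store, with the locking handled for them, and that nothing here is durable: when the process goes away, so does the data.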

