DCOM Explained
by Rosemary Rock-Evans
Digital Press
ISBN: 1555582168   Pub Date: 09/01/98



Chapter 13
Microsoft Transaction Server

  Code-named Viper
  Runs on Windows NT and 95 only
  Provides a number of functions

  Admin utilities—monitoring, statistics
  Buffer pool
  Shared property manager
  Automatic multithreading
  Just in time activation
  Distributed Transaction Processing support

Microsoft Transaction Server is a key component within DCOM. It not only provides distributed transaction processing support, but also a number of additional services aimed at providing the sorts of support found in products like CICS (a TP monitor). MTS is a step towards making DCOM more like the heavy-duty Distributed Transaction Processing Monitors such as Tuxedo or TOP END. In this chapter we will look first at the additional services, and then I will explain in some detail what is meant by distributed transaction processing and how it works in MTS.

A Bit of Background

Work started on Microsoft Transaction Server—code-named Viper—in early 1995. An early specification of the OLETX transaction protocol was made available to ISVs and database vendors for comment in June 1995 with the final specification posted on the Internet in March 1996. Nearly 80 companies were involved in the alpha testing of Viper, and it went into beta testing in June 1996.


Figure 13.1  Overview of MTS

Microsoft Transaction Server was released in December 1996. At the time, 80 technology vendors committed to the product including test tool vendors such as SQA and NuMega; tool vendors such as PowerSoft and MicroFocus; package vendors such as Software 2000 and Marcam; and DBMS vendors such as Informix, Sybase, and IBM with DB2.

An Overview of the Services Provided with MTS

Overall, the extra services Microsoft Transaction Server provides over and above DCOM services are:

  Additional administration utilities to help monitor transactions, performance, and so on. We won’t be covering these in this chapter; they are dealt with when we look at administration as a whole in the chapter on administration services.
  Support for resource management and pooling. The resources that can be managed and pooled include threads, memory, and connections:

  Buffer pool: a Receiver listens to the network and accepts incoming calls, placing them in a memory buffer pool managed by a queue manager. The buffer is used to queue incoming calls so that they are treated on a first come, first served basis.
  The Shared Property Manager provides shared access to information in main memory by components on one machine (a short sketch of its use follows this overview).
  Automatic multithreading of the server (DCOM’s multithreading is not automatic).

  More sophisticated triggering mechanisms: “just in time” object activation.
  Support for asynchronous processing, in addition to the synchronous processing support provided by DCOM.
  Distributed transaction support: support for the update of distributed data.

MTS also maintains “environment state” for each component, covering, for example, its transaction, language, and security context. Support for load balancing is planned.
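To give a flavour of the Shared Property Manager mentioned in the list above, here is a minimal sketch of a method that keeps a shared counter in the SPM. It assumes the MTS Shared Property Manager interfaces (ISharedPropertyGroupManager, ISharedPropertyGroup, and ISharedProperty, declared in mtxspm.h in MTS 2.0) and that it runs inside an MTS-hosted component where COM is already initialized; the group name "Counters" and property name "NextOrderId" are invented for the example, and error handling is kept to a minimum.

  // Minimal sketch: a shared counter held in the Shared Property Manager.
  // The interfaces are the MTS SPM interfaces; "Counters" and "NextOrderId"
  // are illustrative names only.
  #include <windows.h>
  #include <mtxspm.h>

  HRESULT NextOrderId(long *pValue)
  {
      ISharedPropertyGroupManager *pMgr   = NULL;
      ISharedPropertyGroup        *pGroup = NULL;
      ISharedProperty             *pProp  = NULL;

      HRESULT hr = CoCreateInstance(CLSID_SharedPropertyGroupManager, NULL,
                                    CLSCTX_INPROC_SERVER,
                                    IID_ISharedPropertyGroupManager,
                                    (void **)&pMgr);
      if (FAILED(hr)) return hr;

      LONG isoMode = LockMethod;   // serialize access for the whole method call
      LONG relMode = Process;      // keep the group alive for the life of the process
      VARIANT_BOOL fExists;

      BSTR groupName = SysAllocString(L"Counters");
      hr = pMgr->CreatePropertyGroup(groupName, &isoMode, &relMode,
                                     &fExists, &pGroup);
      SysFreeString(groupName);

      if (SUCCEEDED(hr))
      {
          BSTR propName = SysAllocString(L"NextOrderId");
          hr = pGroup->CreateProperty(propName, &fExists, &pProp);
          SysFreeString(propName);
      }
      if (SUCCEEDED(hr))
      {
          VARIANT v;
          VariantInit(&v);
          hr = pProp->get_Value(&v);                 // read the shared value
          if (SUCCEEDED(hr))
          {
              *pValue = (v.vt == VT_I4) ? v.lVal + 1 : 1;
              v.vt   = VT_I4;
              v.lVal = *pValue;
              hr = pProp->put_Value(v);              // write it back for other components
          }
      }
      if (pProp)  pProp->Release();
      if (pGroup) pGroup->Release();
      if (pMgr)   pMgr->Release();
      return hr;
  }

Because the group is created with LockMethod isolation, access to its properties is serialized for the duration of each method call, which is what lets several components on the same machine share the counter safely.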

Buffer Pool Management

A transaction processing application is different from other types of application. It is characterized not just by the very obvious fact that its purpose is to handle business transactions but usually by high and often unpredictable volumes of data. These high volumes of transaction data must be handled by the system in an efficient way. Thus, one of the key tests of a transaction processing service is whether it can give good performance, even in times when volumes peak to extreme levels.

The buffer pool is there to help with input transaction handling and acts like a queue—storing and smoothing requests, keeping them in sequence so that they are executed in the correct order. The queue or buffer pool used by MTS is invisible to the programmer and certainly inaccessible. This feature is an internal service, something MTS is using to improve performance.

The way the memory pool is handled is remarkably similar to the way CICS works (for those of you familiar with CICS). In CICS, a Listener service handles the communication links on the network and collects requests received from the network software, breaking the requests down into their component parts.

MTS has a component called the Receiver, which performs an identical function: listening to the network and accepting incoming calls. In CICS, as requests are received by the Listeners, they place the requests on a queue called the Schedule queue. Again, MTS acts in the same way; the Receiver places the incoming calls into the memory-based buffer pool, which is managed by a queue manager. The buffer is used to queue incoming calls so that they are treated on a first come, first served basis. The queue contains requests from multiple clients, destined for multiple components or servers. Requests are placed on the queue in the order in which they were received by the Receiver.

Requests are dequeued from the pool in order, and MTS passes each one to the server. Clearly, a server thread must be running to handle the request, so as each request is removed from the buffer MTS ensures that a thread exists in one of three ways. It:

  activates the server using the underlying DCOM services (triggering), as we saw in a previous chapter
  passes the request to the server thread, if the server is already running and the thread is free
  automatically creates a new thread to handle the request

and this is where the automatic multithreading capability of MTS plays a part.
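
The buffer pool itself is internal to MTS and cannot be programmed against, but the first come, first served behaviour just described can be pictured with a small conceptual sketch. The queue, the dispatch loop, and the names below are purely illustrative; this is not MTS code.

  // Conceptual model only: requests leave the pool in arrival order, and
  // for each one MTS would activate the component, reuse a free thread,
  // or create a new thread. None of this is actual MTS code.
  #include <queue>
  #include <string>
  #include <iostream>

  struct Request {
      std::string client;     // which client made the call
      std::string component;  // which server component it is destined for
  };

  int main()
  {
      std::queue<Request> bufferPool;   // filled by the Receiver as calls arrive

      // Calls arrive from several clients, possibly for different components.
      bufferPool.push({"ClientA", "OrderEntry"});
      bufferPool.push({"ClientB", "Billing"});
      bufferPool.push({"ClientA", "OrderEntry"});

      // Dequeue in order: first come, first served.
      while (!bufferPool.empty()) {
          Request r = bufferPool.front();
          bufferPool.pop();
          std::cout << "dispatch " << r.component
                    << " on behalf of " << r.client << "\n";
      }
      return 0;
  }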

Automatic Multithreading

We saw in a previous chapter that when using DCOM without MTS, the programmer has to handle the creation and deletion of threads. With Transaction Server, the developer does not have to program thread management; the creation, allocation, and termination of threads is automatic. The developer still has to understand locking and apply the necessary locks to data, and so must write the program so that it is “thread-safe,” but does not have to worry about manually creating and deleting threads.

As we saw in the chapter which described threads, the purpose of multithreading is to improve performance. Threads can be processing in parallel, which means that many more clients can be handled simultaneously than would be the case if no threads were used. Microsoft uses threads because, although they are more difficult to program, they are more efficient in their use of memory than tasks and multitasking.
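
As a rough illustration of what “thread-safe” means in practice, the fragment below protects a piece of shared, per-process state with a Win32 critical section. Notice that the component never creates or destroys a thread itself; it simply assumes that MTS may call the routine from several threads at once. The function and variable names are invented for the example.

  // Sketch of a thread-safe server routine: MTS supplies and manages the
  // threads, the component only guards its own shared state.
  #include <windows.h>

  static CRITICAL_SECTION g_lock;            // protects g_requestsServed
  static long             g_requestsServed = 0;

  void InitStats()  { InitializeCriticalSection(&g_lock); }   // call at server start-up
  void CloseStats() { DeleteCriticalSection(&g_lock); }       // call at server shutdown

  // A routine MTS may invoke concurrently on threads it created itself.
  long RecordRequest()
  {
      EnterCriticalSection(&g_lock);         // serialize access to the shared counter
      long served = ++g_requestsServed;
      LeaveCriticalSection(&g_lock);
      return served;
  }

Nothing in the fragment starts or stops a thread; under plain DCOM the same server would also have had to decide when to create threads and when to tear them down.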


Figure 13.2  Automatic multithreading to handle multiple client requests

It could be, however, that you have so many clients that a single machine simply cannot handle the volume of requests. How, then, do you spread the load across machines so that client requests are handled by the same server, but on different machines? The answer is load balancing.

