Basic multithreaded servers

A multithreaded server is any server that has more than one thread. Because a transport requires its own thread, multithreaded servers also have multiple transports. The number of thread-transport pairs that a server contains defines the number of requests that the server can handle in parallel. You create the first transport within the task directly from the TServiceDefinition instance and clone additional transports from this initial transport.

When a multithreaded server starts:

  1. The first thread in the task starts up and creates a TServiceDefinition using TStandardServiceDefinition.
  2. This thread creates the first transport for the first dispatcher, directly or indirectly.
  3. The thread then creates more threads to receive multiple requests. Each thread can accommodate only one transport; multiple threads cannot share transports. You do not have to create multiple MRemoteDispatcher instances, however, because you can share MRemoteDispatcher instances between threads.

The relationship between threads and instances that derive from MRemoteDispatcher does not need to follow a pre-set model. MRemoteDispatcher is extremely lightweight, so you can base your model on the weight and semantics of the specific derivation of MRemoteDispatcher that you intend to use.
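
The following sketch illustrates this structure in standard C++ with std::thread: several threads, each owning exactly one transport, share a single dispatcher. The Transport and Dispatcher classes here are hypothetical stand-ins, not the Taligent interfaces, which this section does not show.

    #include <thread>
    #include <vector>

    // Hypothetical stand-ins: the real Taligent transport and
    // MRemoteDispatcher interfaces are not shown in this section.
    class Dispatcher {
    public:
        // Decode and handle one request arriving on the given transport.
        void DispatchOneRequest(int transportId) { (void) transportId; }
    };

    class Transport {
    public:
        explicit Transport(int id) : fId(id) {}
        int Id() const { return fId; }
    private:
        int fId;
    };

    int main()
    {
        Dispatcher dispatcher;          // one dispatcher shared by every thread
        const int kThreadCount = 4;     // one thread-transport pair per slot

        std::vector<std::thread> workers;
        for (int i = 0; i < kThreadCount; ++i) {
            // Each thread owns exactly one transport; transports are
            // never shared between threads.
            workers.emplace_back([&dispatcher, i] {
                Transport transport(i);        // stands in for a cloned transport
                for (int r = 0; r < 3; ++r)    // stands in for the receive loop
                    dispatcher.DispatchOneRequest(transport.Id());
            });
        }
        for (auto& w : workers)
            w.join();
        return 0;
    }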

Possible implementations of multithreaded servers include a fixed set of threads, a thread created for each incoming request, and a cache of reusable threads; the section "Managing concurrency" below discusses these configurations.

NOTE Dispatcher pools are not available in this release.

Protecting data

When you build a multithreaded server, you need to explicitly protect your data because MRemoteDispatcher is not multithread-safe.

A multithreaded server uses an MRemoteDispatcher for each thread. To build a server that shares data between more than one MRemoteDispatcher:

  1. Define separate classes for the data you want to share, including the protocol for accessing it. Optionally, include protocol that protects access to the data.
  2. Define your dispatcher class (derived from MRemoteDispatcher) to point at shared data, rather than to include it.
  3. Protect access to the data appropriately at the points where you use it. The granularity of locking you require varies, and you can control it much more finely by defining your dispatcher class to access the data directly rather than through a standard protocol. A sketch of these steps appears after this list.
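
A minimal sketch of these three steps, using standard C++ locking rather than any Taligent synchronization protocol; TSharedAccountTable and TAccountDispatcher are invented names, and the MRemoteDispatcher base class is left as a comment because its interface is not shown here.

    #include <mutex>
    #include <string>
    #include <unordered_map>

    // Step 1: a separate class owns the shared data, and its protocol
    // includes the locking that protects it.
    class TSharedAccountTable {
    public:
        void Deposit(const std::string& account, long amount)
        {
            std::lock_guard<std::mutex> lock(fMutex);   // coarse-grained lock
            fBalances[account] += amount;
        }
        long BalanceOf(const std::string& account)
        {
            std::lock_guard<std::mutex> lock(fMutex);
            return fBalances[account];
        }
    private:
        std::mutex fMutex;                              // protects fBalances
        std::unordered_map<std::string, long> fBalances;
    };

    // Step 2: each per-thread dispatcher points at the shared data
    // rather than containing a copy of it.
    class TAccountDispatcher /* : public MRemoteDispatcher */ {
    public:
        explicit TAccountDispatcher(TSharedAccountTable* shared) : fShared(shared) {}
        void HandleDeposit(const std::string& account, long amount)
        {
            fShared->Deposit(account, amount);  // step 3: access is protected here
        }
    private:
        TSharedAccountTable* fShared;           // shared, not owned
    };

    int main()
    {
        TSharedAccountTable table;              // one copy of the data
        TAccountDispatcher a(&table), b(&table);// one dispatcher per thread
        a.HandleDeposit("savings", 100);
        b.HandleDeposit("savings", 50);
        return table.BalanceOf("savings") == 150 ? 0 : 1;
    }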

Managing concurrency

Whatever configuration you choose for a multithreaded server, you need to design a policy for managing concurrency.

One simple way to handle concurrent requests is to choose a specific number of threads in advance and never change this number. This method is easy to design and implement, but the thread count might bear no relationship to actual client demand. At times, clients might wait because all the threads are busy; at other times, threads might sit idle, wasting system resources.
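
The fixed-count policy can be sketched as a work queue drained by an unchanging set of threads. This is standard C++ only; FixedPool and its members are invented for this illustration.

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // Fixed-size pool: the thread count is chosen once and never changes,
    // so requests queue up when every thread is busy.
    class FixedPool {
    public:
        explicit FixedPool(int threads)
        {
            for (int i = 0; i < threads; ++i)
                fWorkers.emplace_back([this] { Run(); });
        }
        ~FixedPool()
        {
            { std::lock_guard<std::mutex> lock(fMutex); fDone = true; }
            fReady.notify_all();
            for (auto& w : fWorkers)
                w.join();
        }
        void Submit(std::function<void()> request)
        {
            { std::lock_guard<std::mutex> lock(fMutex); fQueue.push(std::move(request)); }
            fReady.notify_one();
        }
    private:
        void Run()
        {
            for (;;) {
                std::function<void()> request;
                {
                    std::unique_lock<std::mutex> lock(fMutex);
                    fReady.wait(lock, [this] { return fDone || !fQueue.empty(); });
                    if (fDone && fQueue.empty())
                        return;
                    request = std::move(fQueue.front());
                    fQueue.pop();
                }
                request();                  // handle one client request
            }
        }
        std::vector<std::thread> fWorkers;
        std::queue<std::function<void()>> fQueue;
        std::mutex fMutex;
        std::condition_variable fReady;
        bool fDone = false;
    };

    int main()
    {
        FixedPool pool(4);                  // the count is fixed for the server's life
        for (int r = 0; r < 16; ++r)
            pool.Submit([r] { (void) r; /* handle request r */ });
    }                                       // destructor drains the queue and joins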

An alternative technique is to start the server with one thread and create a new thread each time the server receives a request to process. This way, the server always has one thread available for a new request. You delete each thread as its request completes processing, always retaining a minimum of one thread.

With this method, a client never needs to wait for a thread, and there are always enough threads to handle client requests. Keep in mind, however, that thread creation is relatively expensive, and in this model the server creates threads constantly. Without an upper limit on the number of threads the server can have at one time, system performance can degrade and available memory can run short. You can refine this model by limiting the total number of threads, which means that a client might have to wait for a thread to become available. You can also cache threads and avoid creating more until the server exhausts the cache.
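
The following sketch shows the capped variant of this model, assuming C++20 for std::counting_semaphore; the names kMaxThreads, gSlots, HandleRequest, and Serve are invented. A client blocks in Serve when all slots are busy, and each request runs on its own short-lived thread.

    #include <semaphore>                    // C++20
    #include <thread>

    // An upper bound on simultaneous threads: acquiring a slot blocks
    // when kMaxThreads requests are already in flight, which prevents
    // unbounded thread creation.
    constexpr int kMaxThreads = 8;
    std::counting_semaphore<kMaxThreads> gSlots(kMaxThreads);

    void HandleRequest(int /*request*/) { /* process one client request */ }

    void Serve(int request)
    {
        gSlots.acquire();                   // wait here if every slot is busy
        std::thread([request] {
            HandleRequest(request);
            gSlots.release();               // free the slot as the request completes
        }).detach();
    }

    int main()
    {
        for (int r = 0; r < 32; ++r)
            Serve(r);                       // at most kMaxThreads run at once
        for (int i = 0; i < kMaxThreads; ++i)
            gSlots.acquire();               // wait until every request has finished
        return 0;
    }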

