[omniORB] omniORB behaviour under load
Sai-Lai Lo
S.Lo@uk.research.att.com
19 Oct 1999 23:54:14 +0100
>>>>> James FitzGibbon writes:
> * Carlson, Andy (andycarlson@ipo.att.com) [991011 08:34]:
>> 1. What is omniORB::maxTcpConnectionPerServer supposed to
>> do? - I expected it to throttle incoming connections after a
>> certain number, but for some reason it doesn't do this in my
>> application. Does it only apply to Strands within a Rope?
> AFAIK, this is used for outbound connections in a client application. A
> test application that I wrote spawned 20 threads, each invoking a method on
> a remote ORB. Without modifying omniORB::maxTcpConnectionPerServer, running
> 'netstat -aan' showed no more than 5 established connections to the remote
> host. The other threads were blocked when trying to make the connection.
> As each thread finished, one blocked thread was allowed to run. When I
> raised the value, all threads made their connection concurrently.
This is correct, and it is the intended usage of omniORB::maxTcpConnectionPerServer.
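It is a plain global, so raising the limit is just an assignment, done
early on, before the first invocation:

  // raise the per-server cap on outgoing connections (default is 5)
  omniORB::maxTcpConnectionPerServer = 20;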
> I'm not sure if there is a companion variable to throttle inbound
> connections. If you just want to reject the connections if there are too
> many already open, I suppose you could write your own gatekeeper module, as
> this has high-level control of connection acceptance.
This is exactly what I would suggest Andy do. From the socket, the
gatekeeper can work out the source address. If it keeps a count of the
active connections from that address, it will be able to throttle the
client back by refusing the connection.
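For example, the gatekeeper can recover the source address with
getpeername() and consult a per-host table. A rough sketch (peerOf()
and allowConnection() are just made-up names for wherever your
accept/reject decision lives, and a real multithreaded server must
protect the table with a mutex):

  #include <map>
  #include <string>
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>

  static std::map<std::string, int> activeCount; // active connections per host
  static const int kMaxPerHost = 8;              // pick your own limit

  // Recover the dotted-quad source address of an accepted socket.
  static std::string peerOf(int sock)
  {
    sockaddr_in peer;
    socklen_t len = sizeof(peer);
    if (getpeername(sock, (sockaddr*)&peer, &len) < 0) return "";
    return inet_ntoa(peer.sin_addr);
  }

  // The accept/reject decision: refuse once this host has too many
  // connections open.
  bool allowConnection(int sock)
  {
    std::string host = peerOf(sock);
    return !host.empty() && activeCount[host] < kMaxPerHost;
  }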
The only problem is how to keep track of the number of active
connections so that the count is updated when a connection is broken.
There is no specific hook for this at the moment. However, it is
possible to implement it using some knowledge of the ORB internals:
To do so, you install a giopServerThreadWrapper (see omniORB.h for
details). The wrapper is called whenever a server thread is started, and
the argument passed to it is a tcpSocketStrand object; the handle()
method of class tcpSocketStrand recovers the socket. When the server
thread exits, i.e. when the socket is closed, your
giopServerThreadWrapper can decrement the active connection count.
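In outline it looks like this (peerOf() and activeCount come from the
previous fragment, bump() is another made-up helper, and you should
check omniORB.h for the exact wrapper signature):

  // Mutex omitted for brevity; protect activeCount in real code.
  static void bump(const std::string& host, int delta)
  { activeCount[host] += delta; }

  class countingWrapper : public omniORB::giopServerThreadWrapper {
  public:
    virtual void run(void (*fn)(void*), void* arg) {
      // As described above, arg is the tcpSocketStrand for this connection.
      tcpSocketStrand* s = (tcpSocketStrand*)arg;
      std::string host = peerOf(s->handle());  // same getpeername() trick

      bump(host, +1);                  // connection is now active
      try { fn(arg); }                 // run the normal server thread body
      catch (...) { bump(host, -1); throw; }
      bump(host, -1);                  // thread exit == socket closed
    }
  };

  // Installed once at start-up, e.g.:
  //   omniORB::giopServerThreadWrapper::setGiopServerThreadWrapper(
  //                                               new countingWrapper);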
By the way, I guess you have already increased the ulimit on the
maximum number of socket descriptors per Unix process. I believe the
default is just 64 on Sun.
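The process can also raise the soft limit itself, up to the hard limit,
with the standard getrlimit()/setrlimit() calls:

  #include <stdio.h>
  #include <sys/resource.h>

  // Same effect as the shell's ulimit -n: raise the soft descriptor
  // limit to the hard maximum before the ORB starts accepting.
  void raiseFdLimit()
  {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
      rl.rlim_cur = rl.rlim_max;
      if (setrlimit(RLIMIT_NOFILE, &rl) < 0)
        perror("setrlimit");
    }
  }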
> An extremely neat idea would be to extend the gatekeeper interface to allow
> for more than just "Accept" and "Reject" as return values. If one could
> return "Accept" "Reject" or "Defer", and leave deferred connections in a
> blocked state (these connections would then be given preference when a slot
> opened up), then some very sophisticated connection management could be
> implemented on a site-by-site basis.
Trouble is, a connection in the "Defer" state still consumes a socket,
which may not be desirable. It might become useful when we move to a
scale where the number of TCP connections we can accept greatly exceeds
the number of threads we would like to create; then a "Defer" state
helps, as long as we have not hit the TCP connection limit. While we
are on this topic, I might as well say that I believe the
thread-per-connection model works quite well in most cases, but it will
not scale to a large number of TCP connections if the OS puts a much
smaller limit on the number of threads one can create per process. I
intend to add to the ORB the ability to switch between
thread-per-connection and thread-pool on the fly, depending on the
load. It has been on my to-do list for a long time...
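For illustration, a thread pool in its simplest form is just a fixed
set of workers draining a queue of ready connections, so the thread
count no longer tracks the connection count (a plain-pthreads sketch,
nothing omniORB-specific; serveConnection() stands in for the GIOP
processing loop):

  #include <pthread.h>
  #include <queue>

  static std::queue<int> ready;                 // sockets awaiting service
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

  extern void serveConnection(int sock);        // placeholder

  static void* worker(void*)
  {
    for (;;) {
      pthread_mutex_lock(&lock);
      while (ready.empty())
        pthread_cond_wait(&cond, &lock);
      int sock = ready.front(); ready.pop();
      pthread_mutex_unlock(&lock);
      serveConnection(sock);                    // service it, then go back
    }
    return 0;
  }

  // The accept loop hands each connection to the pool...
  void enqueue(int sock)
  {
    pthread_mutex_lock(&lock);
    ready.push(sock);
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
  }

  // ...after the pool has been started with a fixed number of workers.
  void startPool(int n)
  {
    pthread_t tid;
    while (n-- > 0) pthread_create(&tid, 0, worker, 0);
  }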
Sai-Lai
--
Sai-Lai Lo S.Lo@uk.research.att.com
AT&T Laboratories Cambridge WWW: http://www.uk.research.att.com
24a Trumpington Street Tel: +44 1223 343000
Cambridge CB2 1QA Fax: +44 1223 313542
ENGLAND