[omniORB] Problem with migration from 4.0.7 to 4.1.3
Serguei Kolos
Serguei.Kolos at cern.ch
Thu Oct 23 18:41:27 BST 2008
Hello
While migrating from omniORB 4.0.7 to 4.1.3 I have noticed a significant
difference in the behavior of omniORB applications. I have a server
application which uses the following two options:
threadPerConnectionPolicy 0   // the server shall be able to process
                              // several hundreds of clients concurrently
threadPoolWatchConnection 0   // for more efficient processing
                              // of concurrent client requests
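For reference, these are set in the omniORB configuration file like this (a sketch; the file name and location are examples, as omniORB reads the file pointed to by the OMNIORB_CONFIG environment variable):

```
# omniORB.cfg (example)
threadPerConnectionPolicy = 0   # use the thread-pool model
threadPoolWatchConnection = 0   # worker threads do not watch the connection
```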
That was working fine with 4.0.7. Now with 4.1.3, if a client sends several
subsequent requests to the server, every second request gets its response
with a 50 millisecond delay. For example, when running both the client and
the server on the same machine, the request execution times look like this
(in milliseconds): 0.12 50.23 0.12 50.42 0.14 50.88 ...
This can be changed by decreasing connectionWatchPeriod to something very
small (by default it is set to 50000 microseconds, which seems to be the
cause of the issue), but in that case the CPU consumption of the server
grows significantly.
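As an illustration, that workaround can also be applied on the command line without touching the configuration file, using omniORB's -ORB prefix form for configuration options (./my_server is a placeholder for the actual server binary):

```
# Example: lower the connection watch period to 1000 microseconds
./my_server -ORBconnectionWatchPeriod 1000 \
            -ORBthreadPerConnectionPolicy 0 \
            -ORBthreadPoolWatchConnection 0
```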
With respect to that I have several questions:
1. Is this a bug or a feature of 4.1.3?
2. Can it be changed back to give the same behavior as in 4.0.7?
3. If not, is it possible to achieve with omniORB 4.1.3 the same response
   time and CPU consumption as with 4.0.7 for a server handling many
   concurrent clients?
Cheers,
Sergei
PS: I'm running on Linux with a 2.6.9 kernel, using gcc 3.4. I have also
made some tests with the omniORB echo example; its behavior is exactly
the same.