[omniORB] What about the 100ms upcall delay ?
orel
esnard at labri.fr
Tue Aug 26 16:15:00 BST 2003
Hi,
I'm having trouble explaining poor performance with omniORB 4.0.1 on
Linux... Can someone help me?
I have a classic scenario where A asks B to send data to it continually
through a callback object, so A and B are each both client and server.
After each piece of data sent from B to A (a oneway request), B waits for
an acknowledgement from A to regulate the data transfer.
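Roughly, the interfaces look like this (not my real names, just a sketch
of the shape in IDL):

  typedef sequence<octet> Data;

  // implemented by A: the callback object that receives the data
  interface Sink {
    oneway void push(in Data d);   // B -> A, one item (about 1 KB)
  };

  // implemented by B: the source that A registers its callback with
  interface Source {
    void subscribe(in Sink s);     // A hands its callback object to B
    void ack();                    // A -> B, acknowledgement after each push
  };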
Even when I test my program on a single host with a small data size
(a sequence of 1 KB), and with every possible client and server
configuration (thread pool with or without connection watching, thread
per connection), I always see a 100 ms delay for each send/ack step.
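Concretely, the configurations I mean are (in omniORB.cfg, or the
equivalent -ORB command-line options):

  # thread pool (with and without connection watching):
  threadPerConnectionPolicy = 0
  threadPoolWatchConnection = 1   # also tried 0

  # or thread per connection:
  threadPerConnectionPolicy = 1

and the delay is the same in every combination.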
At first I thought it looked like the 50-100 ms upcall delay detailed in
the wiki.
But if I wait for the outConScanPeriod to elapse (don't ask me why that
period), the performance becomes marvellous... just 2 ms for the same
experiment!
Now I've reduced the outConScanPeriod to 2 seconds and it works pretty
well...
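In omniORB.cfg that is just:

  outConScanPeriod = 2   # seconds; same as -ORBoutConScanPeriod 2

instead of the default value.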
Any idea that would help me understand what happens inside omniORB during
my experiment?
Best regards, Orel
PS: Excuse my poor English, but I'm French...
--
Aurélien ESNARD
Laboratoire Bordelais de Recherche en Informatique (LaBRI)
Universite de Bordeaux I
351, Cours de la Liberation
33405 Talence cedex
Tel: 05 56 84 24 85
E-mail: esnard at labri.fr