[omniORB] Bad performance with oneway invocation
Serguei Kolos
Serguei.Kolos at cern.ch
Fri Jul 18 17:16:21 BST 2003
Hello
Duncan Grisby wrote:
>On Monday 14 July, Serguei Kolos wrote:
>
>[...]
>
>
>>2. The asynchronous (oneway) round-trip time is from 3 to 10 times worse
>>with omniORB.
>>
>>
>
>Before I address the specific points, can I just ask what your
>application is doing? Does it reflect what this benchmark is doing?
>
To some extent, yes. My application is a kind of Information Service (IS). It
is a single CORBA server to which many other applications (~1000) publish
their information. There are also several tens of receiver applications which
have subscribed to some of the information in this server. When the
information in the server changes, the appropriate subscribers are notified.
The notification is a oneway message, which fits the IS model well: the server
does not really care whether a subscriber is working or dead, but it has to do
the notification efficiently so that slow subscribers do not affect the faster
ones. In the real configuration some receivers may get several thousand
messages per second. Of course these messages are not empty, but they are
normally short (a few tens of bytes).
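
(To make the picture concrete, the notification path looks roughly like the
sketch below. The IDL, the Subscriber/notify names and the publish() helper
are all invented here for illustration; they are not the real IS interfaces.)

  // Hypothetical IDL (names are mine, not the real IS interfaces):
  //   interface Subscriber {
  //     oneway void notify(in string name, in string value);
  //   };
  //
  // Server-side fan-out, assuming the C++ stubs omniidl would generate
  // from such an IDL ("Subscriber.hh"). Because notify() is oneway, the
  // call returns without waiting for the subscriber, so a slow or dead
  // receiver does not hold up the others.
  #include <cstddef>
  #include <vector>
  #include <omniORB4/CORBA.h>   // omniORB 4 header; adjust for your ORB
  #include "Subscriber.hh"      // generated from the hypothetical IDL

  void publish(const std::vector<Subscriber_var>& subscribers,
               const char* name, const char* value)
  {
    for (std::size_t i = 0; i < subscribers.size(); ++i) {
      try {
        subscribers[i]->notify(name, value);   // oneway: no reply expected
      }
      catch (const CORBA::SystemException&) {
        // Subscriber unreachable or broken: the IS model simply ignores it.
      }
    }
  }
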
>As you say in a later email...
>
>[...]
>
>
>> But, unfortunately there is a serious performance
>>drawback in one particular case - when a bunch of oneway
>>messages is sent to the server over a single connection. In this case
>> the server reads many of them with a single recv operation
>>(which is very good!!!), but then it puts each message into a separate
>>buffer by copying it with the memcpy function (giopStream::inputMessage
>>function in file giopStream.cc).
>>This seriously downgrades the performance in such cases and
>>noticeably (by a factor of 4) increases CPU consumption.
>>Can this code be re-worked to eliminate memory copying?
>>
>>
>
>It would be possible to avoid the copying, but to be honest I don't
>think it's worth the effort. The large amount of copying only happens
>in a very restricted set of circumstances, i.e. that a client is
>sending requests so fast that TCP batches them into single packets,
>_and_ that each request has very few arguments, so many requests fit
>in a single ORB buffer. Furthermore, the overhead of copying is only
>relevant to the overall invocation time if the operation
>implementation does so little work that the time to do a memory
>allocation and a memcpy is significant.
>
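(For anyone following the thread, the "splitting" I mean is roughly the
pattern sketched below. It is a much-simplified illustration of one recv()
returning several small GIOP messages back to back, not the real
giopStream::inputMessage code; read_and_split() and handle_message() are
invented names, and byte-order handling is omitted.)

  #include <cstdlib>
  #include <cstring>
  #include <stdint.h>
  #include <sys/types.h>
  #include <sys/socket.h>

  // Stand-in for the upper layer; here it just releases the buffer.
  static void handle_message(char* msg, size_t /*len*/) { std::free(msg); }

  void read_and_split(int sock)
  {
    char    batch[8192];
    ssize_t n   = recv(sock, batch, sizeof(batch), 0); // may hold many messages
    ssize_t off = 0;
    while (n > 0 && off + 12 <= n) {
      // GIOP header: 12 bytes, with the body size in the last 4 of them.
      // (The flags byte that gives the sender's byte order is ignored here.)
      uint32_t body;
      std::memcpy(&body, batch + off + 8, 4);
      ssize_t len = 12 + (ssize_t)body;
      if (off + len > n) break;            // partial message: real code buffers it
      char* msg = (char*)std::malloc(len);
      std::memcpy(msg, batch + off, len);  // the per-message copy
      handle_message(msg, (size_t)len);
      off += len;
    }
  }
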
You are right about the memcpy operation. I'm sorry - I overestimated its impact.
The good thing is that the impact of splitting a bunch of oneway calls is really negligible :-)
But the bad thing is that the problem with oneway calls still exists somewhere :-( (I hope it is not only in my mind.)
I repeated my tests, putting a delay on the client side between remote calls.
Now the client sends only 50 messages per second, the server always receives
one message at a time, and it does not do any splitting. But ... the thread
which processes the oneway calls consumes 8 times more CPU (user time) than
the thread which processes identical two-way calls. The system CPU consumption
is almost identical.
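
(For reference, the user/system figures above can be read per thread roughly
as sketched below. This is only an illustration of the measurement, not my
actual test harness; RUSAGE_THREAD is Linux-specific and report_thread_cpu()
is an invented name.)

  #ifndef _GNU_SOURCE
  #define _GNU_SOURCE 1        // needed for RUSAGE_THREAD on Linux
  #endif
  #include <sys/time.h>
  #include <sys/resource.h>
  #include <cstdio>

  static double tv_seconds(const timeval& tv)
  {
    return tv.tv_sec + tv.tv_usec * 1e-6;
  }

  // Call this from inside the thread that dispatches the oneway requests,
  // and again from the thread handling the two-way ones, after each has
  // processed the same fixed number of calls.
  void report_thread_cpu(const char* label)
  {
    rusage ru;
    getrusage(RUSAGE_THREAD, &ru);
    std::printf("%s: user %.3f s, system %.3f s\n",
                label, tv_seconds(ru.ru_utime), tv_seconds(ru.ru_stime));
  }
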
Do you have any idea why this is so?
Thanks,
Sergei