[omniORB] Performance comparison of OmniEvents and OmniNotify
Christopher Petrilli
petrilli at gmail.com
Wed Mar 2 09:45:44 GMT 2005
On Wed, 2 Mar 2005 11:02:59 +0000, Alex Tingle
<alex.tingle at bronermetals.com> wrote:
> > I've run Elvin with 2,500 events per second without using more than 10%
> > CPU on an iMac G5. It's pretty impressive, actually. I haven't had a
> > chance to push omniEvents anywhere near that yet, as I've only
> > recently started looking at moving to a pure CORBA environment. While
> > I generally argue against early optimization, I do have to worry about
> > performance in this scenario.
>
> How much data are you sending with each event? Do you need a low
> latency as well as high throughput?
Each event is roughly a 30-item struct. I would say, without taking
into account overhead for CDR, that we're talking about roughly
256 bytes of data. I don't know CORBA well enough to discuss how that
is affected by CDR, but my suspicion, if it's like XDR, is that the
overhead is not excessive.
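Just to sanity-check that figure, here's the back-of-envelope
arithmetic I'm using. The field mix (mostly 4- and 8-byte numerics
plus a few short strings) is an assumption on my part, not the real
struct:

# Rough CDR size estimate for a hypothetical 30-field event struct.
# Assumed field mix -- NOT the real struct definition:
#   20 x 4-byte longs, 6 x 8-byte doubles, 4 x short strings (~16 chars)
longs   = 20 * 4              # 80 bytes
doubles = 6 * 8               # 48 bytes
strings = 4 * (4 + 16 + 1)    # length prefix + chars + NUL = 84 bytes
padding = 40                  # generous allowance for 4/8-byte alignment
print(longs + doubles + strings + padding)   # ~252 bytes, on the order of 256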
Latency is actually not a problem in this specific situation.
Obviously 30 seconds would be absurd, but anything under a couple
seconds from the time the event is created to the time it exists in
the other process space at the other end of an event channel is fine.
> > Individual sources can generate hundreds of events in a second, and the
> > overall system may have to deal with 5-10 thousand events per second.
> > That necessitates dealing with things a bit differently.
>
> CORBA over TCP is always going to struggle to achieve rates like that,
> especially with the OMG's one-at-a-time delivery scheme. The latency on
> even a loopback can be ~0.1ms.
This was my reasoning for batching things: the per-call latency
overhead. Trust me, I've had people suggest SOAP was the answer for
this :-) I can't even imagine what a disaster that would be. Now
understand, each "event source" (potentially hundreds of them) will
generate 1-1000 events per second, with most in the 1-50 range. The
aggregate of 5-10 thousand per second will only be seen by a few
places.
Honestly, I'm beginning to feel I'm running so close to the "ragged
edge" of what these systems can do that I need to restructure my
design and figure out how to build a hierarchical system that reduces
the need for every event to be seen by a few central systems.
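To make that concrete, something like the following is what I have in
mind: a per-site aggregator that collects events locally and forwards
them upstream in batches. The Aggregator class and the
upstream.push_batch() call are placeholder names I've invented for the
sketch, not any real omniEvents API:

import queue, threading, time

class Aggregator:
    """Collects events from local sources and forwards them upstream in batches."""
    def __init__(self, upstream, max_batch=500, max_delay=1.0):
        self.upstream = upstream          # placeholder: anything with push_batch()
        self.max_batch = max_batch
        self.max_delay = max_delay
        self.q = queue.Queue()
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def publish(self, event):
        self.q.put(event)                 # called by local event sources

    def _flush_loop(self):
        while True:
            batch = [self.q.get()]        # block until at least one event arrives
            deadline = time.time() + self.max_delay
            while len(batch) < self.max_batch:
                remaining = deadline - time.time()
                if remaining <= 0:
                    break
                try:
                    batch.append(self.q.get(timeout=remaining))
                except queue.Empty:
                    break
            self.upstream.push_batch(batch)   # one batched call upstream per flush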
> As I say, my preference is for multicast over the network, and delivery
> via Unix pipe transport at the edges. The alternative is batching,
> which just pushes complexity into the user code.
I did some playing with Pyro on this as well, and the batching adds
pretty minimal complexity in that scenario. The way I thought about
it was basically something like this:
from time import sleep

while True:
    events = remote_end.try_pull(max_batch=500)   # pull up to 500 queued events
    if not events:
        sleep(1)          # nothing waiting; back off briefly
        continue
    deal_with_events(events)
It was more complex than that, but you get the point. That way, if
there were a "slow trickle" of events, then it wouldn't hit the system
too hard, and if the events were coming in faster, it would absorb
them as quickly as it could dispose of them.
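For comparison, my understanding (untested) is that the same pattern
against a plain CosEventComm pull supplier would have to accumulate
the batch client-side, since try_pull() hands back one event at a
time. The exact Python mapping of try_pull() (an Any plus a has_event
flag) is my assumption here, so treat this as a sketch:

from time import sleep

# proxy is assumed to be a CosEventComm::PullSupplier obtained from the
# channel's ConsumerAdmin.
def pull_batches(proxy, max_batch=500):
    while True:
        batch = []
        while len(batch) < max_batch:
            event, has_event = proxy.try_pull()   # non-blocking single pull
            if not has_event:
                break
            batch.append(event)
        if batch:
            deal_with_events(batch)               # same handler as above
        else:
            sleep(1)                              # nothing queued; back off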
> Of course you can do it programmatically. 'eventf' just gets proxies
> from two channels and tells them to talk to each other. However, you
> may not realise that the relationship persists until it is destroyed -
> it does not have to be recreated each time omniEvents is restarted.
> Unless you need to change your architecture on the fly, you should be
> able to set this up at installation time and then just forget about it.
Thanks. This helps a lot.
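For my own notes, here's how I'd expect the programmatic equivalent of
'eventf' to look. I haven't tried it, and the CosEventChannelAdmin
calls are from memory, so take it as a sketch rather than the real
recipe:

# Federate two event channels: events pushed into channel_a get forwarded
# to channel_b. channel_a / channel_b are assumed to be resolved
# CosEventChannelAdmin::EventChannel references (e.g. from the naming service).
supplier_side = channel_a.for_consumers().obtain_push_supplier()
consumer_side = channel_b.for_suppliers().obtain_push_consumer()

# Connect each proxy to the other; a ProxyPushConsumer is itself a
# PushConsumer, and a ProxyPushSupplier is itself a PushSupplier, so the
# two channels end up talking directly to each other.
consumer_side.connect_push_supplier(supplier_side)
supplier_side.connect_push_consumer(consumer_side)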
Chris
--
| Christopher Petrilli
| petrilli at gmail.com