[omniORB] OmniEvents
Paul Nader
Paul.Nader@aals27.alcatel.com.au
Fri, 28 Jan 2000 22:45:50 +1100
Ole,
The Event Service specification doesn't address any connection persistence
issues (it doesn't even mention them, as far as I know), so it is left up to
the implementation to decide. omniEvents 1.x is not persistent. omniEvents
2.x is persistent; that is, the event channel retains references to all
clients between restarts.
By the same token, a PushConsumer can be transient or persistent.
If you want a persistent push consumer (i.e. one that seamlessly
re-connects to its original ProxyPushSupplier) then you need to
re-construct it identically when it restarts. With omniORB2 you
can do this as follows:
1. Save the object key (e.g. in a file), which you can obtain using the
_key() method of its skeleton base class.
2. On restart, re-build the consumer with the same key (you pass it to the
constructor of its skeleton base class). Once re-built, the ProxyPushSupplier
will continue delivering events to the consumer. This is really where the
queue length QOS parameter comes in, as it determines how many events
are queued while the consumer is dead.
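The two steps above can be sketched as follows. This is only a sketch of the
save/restore pattern: the file handling is real, but the omniORB2-specific
calls (_key() on the skeleton, and passing the key back to the skeleton base
class constructor) are marked in comments, since their exact signatures depend
on the stubs your IDL compiler generates.

```cpp
#include <cassert>
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Opaque stand-in for the key type returned by the skeleton's _key()
// method; the real omniORB2 type differs, but it is just bytes on disk.
typedef std::vector<unsigned char> ObjectKey;

// First run: obtain the key (e.g. from consumer->_key()) and save it so
// a later incarnation of the process can re-create the same object.
void saveKey(const ObjectKey& key, const std::string& path) {
    std::ofstream out(path.c_str(), std::ios::binary);
    if (!key.empty())
        out.write(reinterpret_cast<const char*>(&key[0]),
                  static_cast<std::streamsize>(key.size()));
}

// On restart: load the saved key and hand it to the skeleton base class
// constructor, so the consumer comes back with the same identity and the
// ProxyPushSupplier resumes delivering the events it queued meanwhile.
ObjectKey loadKey(const std::string& path) {
    std::ifstream in(path.c_str(), std::ios::binary);
    return ObjectKey(std::istreambuf_iterator<char>(in),
                     std::istreambuf_iterator<char>());
}
```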
If you must use a transient consumer (e.g. because you are using an ORB that
does not support this), I guess you could have the channel release the
ProxyPushSupplier if its queue filled up (for example), on the assumption
that the client _is_ transient and will never be able to re-connect. There is
some code commented out in ProxyPushSupplier_i::_dispatch() which disconnects
a PushConsumer if it receives an exception when trying to push data to it:
// DB(20, "ProxyPushSupplier_i : Exception notifying PushConsumer!");
// DB(20, "ProxyPushSupplier_i : Disconnecting PushConsumer.");
// // Can't disconnect directly because disconnect_push_supplier
// // signals this thread and disposes of the object. Create a
// // new thread to do the disconnection instead.
// omni_thread *disconnect_thread = new ProxyPushSupplierWorker(this,
// &ProxyPushSupplier_i::disconnect_push_supplier,
// omni_thread::PRIORITY_HIGH);
// disconnect_thread->join(0);
You could uncomment it and extend it so that it also tests whether the proxy
queue is full.
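The extended release policy could look roughly like this. It is only a sketch
of the decision, not omniEvents code: QueueState and shouldDisconnect are
hypothetical names, standing in for however the proxy actually tracks its
queue against the queue length QOS parameter.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical view of a proxy's queue: how many events are buffered for
// this consumer, and the limit set by the queue length QOS parameter.
struct QueueState {
    std::size_t queued;
    std::size_t maxQueue;
};

// Returns true when the channel should spawn the disconnect thread (as in
// the commented-out code in ProxyPushSupplier_i::_dispatch()): either the
// push raised an exception, or the queue has hit its QOS limit, so we
// assume the consumer is transient and will never re-connect.
bool shouldDisconnect(bool pushRaisedException, const QueueState& q) {
    return pushRaisedException || q.queued >= q.maxQueue;
}
```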
omniEvents 2.x currently assumes that clients are persistent. In fact,
I would go so far as to say that most event channel implementations make the
same assumption...
Paul.
"Storm,Ole OLS" wrote:
> Hi Paul,
>
> > Your understanding is correct. The problem is that the event channel
> > can't assume that a client that dies without disconnecting will not
> > become active again sometime in the future. Hence, it can't just
> > discard the ProxySupplier.
>
> Ok, that seems reasonable. It would be nice, however, and could be a
> solution to our problem, if each ProxySupplier in a channel could be
> identified in some way, such that a consumer C_i that has died will have an
> opportunity to reuse the ProxySupplier when the consumer is restarted. Do
> you know if the Event Service specification addresses this problem?
>
> Regards,
> Ole.
>
> > > Hello omniORBers,
> > >
> > > We have been using the omniEvents 1.0.3 service for some months now
> > > and are quite happy with it.
> > >
> > > We have, however, a small problem explained here:
> > >
> > > The setup is as follows. The system has one supplier (S) and n
> > > consumers (C_i), that communicate through one common event channel
> > > (EC). The event channel is owned and constructed by an
> > > EventChannelFactory (ECF). The supplier, all consumers and the
> > > EventChannelFactory are separate processes. The supplier and consumers
> > > are of the following kind:
> > >
> > > S: PushSupplier,
> > > C_i: PullConsumer, using try_pull() to poll for events
> > >
> > > When a consumer is started, it connects to the event channel, listens
> > > for events, and disconnects from the channel when terminated. However,
> > > if a client goes down, i.e. is not terminated 'nicely', it never gets
> > > disconnected from the event channel! Since event channels queue all
> > > events received until they are 'delivered' to consumers, the queue
> > > associated with the consumer that died will start to grow in size
> > > continuously. As a consequence, the EventChannelFactory process will
> > > start to accumulate garbage whenever a client has a sudden death.
> > >
> > > Is there a way I can handle this problem within omniEvents 1.0.3???
> > >
> > > I have seen that with omniEvents 2.0 it is now possible to set how
> > > many events are buffered by each ProxySupplier. I think, however, that
> > > this is only half a solution to this problem, since the
> > > EventChannelFactory process will still waste the size of one buffer
> > > for every consumer that dies.
> > > Am I right, or have I misunderstood something here?
> > >
> > > Best regards,
> > >
> > > Ole.
> > >
> > > > Ole Storm, ols@kmd.dk
> > > > Udvikler, PUI
> > > > KMD A/S
> > > > Niels Bohrs Alle 185
> > > > 5220 Odense SØ
> > > > Tlf 44 60 52 83
> > > >
> > > >
> > > >
> >