[omniORB] Question about socket connection management in ORB
Craig Rodrigues
rodrigc@mediaone.net
Tue, 17 Apr 2001 00:01:35 -0400
TAO VERSION: 1.1.14
ACE VERSION: 5.1.14
OMNIORB VERSION: 3.0.3
OMNIORBpy VERSION: 1.3
HOST MACHINE and OPERATING SYSTEM: Red Hat Linux 6.2
COMPILER NAME AND VERSION (AND PATCHLEVEL): gcc 2.95.2
SYNOPSIS:
General question about how the ORB manages socket connections.
DESCRIPTION:
I am developing a simple application for testing the IDL interfaces
exposed by C++ TAO CORBA servers. The test application is written in
Python, and uses omniORBpy, the Python CORBA bindings for omniORB:
http://www.uk.research.att.com/omniORB/omniORBpy/.
The CORBA language mapping for Python is very simple (only 20 pages!),
and I have had great success using Python to develop test scripts for
C++ TAO CORBA servers.
In my Python test application, I get an object reference to the
C++ TAO CORBA server, invoke a number of the methods exposed in the IDL
on that object reference, wait for a minute or so, and then terminate
the test application.
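In outline, the test script looks something like the sketch below (the
stub module, interface, and method names are placeholders standing in
for my own IDL; the real script uses its own interfaces):

    import sys, time
    from omniORB import CORBA
    import MyTestModule   # placeholder: stubs generated from my IDL

    orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)

    # Get an object reference to the C++ TAO server, here from a
    # stringified IOR passed on the command line.
    obj = orb.string_to_object(sys.argv[1])
    server = obj._narrow(MyTestModule.MyInterface)

    # Invoke a bunch of the methods exposed in the IDL.
    server.do_something()
    server.do_something_else()

    # Wait a minute or so, then let the script exit.
    time.sleep(60)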
If I increase the TAO ORB debugging output with -ORBDebugLevel 1, I
notice that when I terminate my test application, the ORB in the C++
server appears to close a socket. This happens even if I terminate the
test application a minute or two after the last CORBA method
invocation.
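For reference, this is roughly how the two sides are run (the binary
and script names are placeholders):

    # C++ TAO server, with ORB debugging output enabled:
    ./my_server -ORBDebugLevel 1

    # Python test client; omniORB's corresponding knob is -ORBtraceLevel:
    python test_client.py -ORBtraceLevel 5 IOR:...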
When a CORBA client and server communicate via IIOP (TCP), is some kind
of connection caching/reuse going on? Is this behavior specific to TAO
or to omniORB? Can it be configured? I can see clear performance
advantages to reusing a connection versus opening a new TCP connection
each time a CORBA method is invoked on the same object reference.
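For concreteness, an explicit teardown of the client-side ORB would
look like the continuation of the sketch above (assuming omniORBpy
supports the standard shutdown()/destroy() calls from the CORBA 2.3
Python mapping). Part of my question is whether the server-side socket
close is triggered by this kind of teardown, by the TCP connection
dropping when the client process exits, or by an idle-connection timer
in one of the ORBs:

    # Continuing the sketch above: explicit ORB teardown before exit,
    # which I assume releases any connections the ORB is holding open.
    orb.shutdown(1)   # wait_for_completion = TRUE
    orb.destroy()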
Thanks for any clarifications.
--
Craig Rodrigues
http://www.gis.net/~craigr
rodrigc@mediaone.net