Hello,

We have omniORB 4.1.6 running on Linux kernel 2.6.32. We have a client-server architecture in which the client needs to detect failure of the server application when the machine hosting the server is reset. Because of a change in TCP behaviour in 2.6.32, the client waits more than 10 minutes before giving up on a send call if the server machine is reset while the send is in progress.

We do not want to use setClientCallTimeout, since that would cause a timeout even when the server simply delays its response for a while because of some system parameters.

So we want to bail out only on genuine connection issues.

So one option is to use TCP_USER_TIMEOUT (http://man7.org/linux/man-pages/man7/tcp.7.html), so that we have control over the TCP-to-TCP communication.
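For reference, once we have the sockfd, the idea is roughly the following (an untested sketch; the option constant is defined by hand in case it is missing from our older headers, and the helper name is just illustrative):

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

#ifndef TCP_USER_TIMEOUT
#define TCP_USER_TIMEOUT 18   /* value from linux/tcp.h on kernels that support it */
#endif

/* Set TCP_USER_TIMEOUT (in milliseconds) on an already-connected socket.
 * Data that stays unacknowledged longer than timeout_ms makes the
 * connection fail with ETIMEDOUT instead of waiting out the default
 * retransmission backoff. */
static int setTcpUserTimeout(int sockfd, unsigned int timeout_ms)
{
    return setsockopt(sockfd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                      &timeout_ms, sizeof(timeout_ms));
}

That is why question 1 below matters to us: everything except obtaining the sockfd from the object reference looks straightforward.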
Questions:

1/. Is there a way to get the socket descriptor value from an object reference? If so, I can get the sockfd and then do a setsockopt as in the sketch above.

2/. Does omniORB provide any mechanism to make use of TCP_USER_TIMEOUT?
Truly appreciate your help on this.

Thank you,
Thanks and regards

Ravi Kumar Kulkarni
POWER Firmware Development India
MK GF 129 Manyatha K Blk
ISTL, Bangalore
Extn: 56822
Mobile: 9731371000
rkulkarn@in.ibm.com