Oracle & tcp.nodelay parameter
Oracle Tips by Burleson Consulting
Important Note: The
default value for tcp.nodelay has been permanently changed to YES
in current Oracle releases.
The tcp.nodelay parameter
By default, Oracle*Net waits until its buffer is full before
transmitting data, so requests are not always sent to their
destinations immediately. This delay is most noticeable when
large amounts of data are streamed from one location to another,
because Oracle*Net will not transmit a packet until the buffer fills.
Adding a protocol.ora file and setting tcp.nodelay to stop these
buffer-flushing delays can sometimes remedy the problem. The
parameter can be used on both the client and the server. The
protocol.ora statement is:
tcp.nodelay = YES
When this parameter is set, TCP buffering is skipped and every
request is sent immediately. Note that this can slow the network,
because transmitting smaller and more frequent packets increases
total network traffic.
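At the operating-system level, tcp.nodelay corresponds to the standard TCP_NODELAY socket option, which disables Nagle's algorithm. A minimal sketch of that OS-level option using ordinary Python sockets (an illustration of the underlying mechanism, not Oracle*Net's own code):

```python
import socket

# Create a TCP socket and disable Nagle's algorithm, so each write is
# handed to the network immediately instead of being coalesced into
# larger segments.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Confirm the option took effect (non-zero means Nagle is disabled).
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY enabled:", bool(nodelay))
sock.close()
```

Oracle*Net sets this option on its own sockets when tcp.nodelay = YES is in effect; the snippet above simply shows what the flag means to the TCP stack.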
A post on the Oracle-l list notes this about tcp.nodelay and its
relationship to the SDU and TDU parameters:
"Setting tcp.nodelay disables the
Nagle algorithm in the TCP stack, which tries to efficiently
balance the data payload of a packet against the delay in dispatching
it. Effectively, you're saying 'to xxx with
optimizing the data payload ... send those babies now!' The
complete antithesis of what the SDU/TDU settings are trying to
do for you.
You'll end up with a larger number of
smaller packets on your WAN, and if it's latency that's your
problem, this will make matters worse, not better."
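To see why disabling coalescing produces more packets, consider a hypothetical workload of many small application writes. The write count, write size, and SDU value below are illustrative assumptions, not measurements from any real system:

```python
# Hypothetical workload: 100 application writes of 200 bytes each,
# sent over a session whose buffer (SDU) holds 8192 bytes.
writes = 100           # number of application writes (assumed)
write_size = 200       # bytes per write (assumed)
sdu = 8192             # session data unit size (assumed)

total_bytes = writes * write_size

# With buffering, writes are coalesced into SDU-sized sends
# (ceiling division: the last, partially full buffer still goes out).
packets_buffered = -(-total_bytes // sdu)

# With tcp.nodelay, each write is dispatched on its own.
packets_nodelay = writes

print(packets_buffered, packets_nodelay)
```

Under these assumed numbers, buffering sends the same 20,000 bytes in 3 packets, while tcp.nodelay sends 100. On a high-latency WAN, that larger packet count is exactly what the quoted post warns about.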
This is an excerpt from my latest book "Oracle
Tuning: The Definitive Reference".