Craig Anderson <craig(a)abstract.co.nz> wrote:
>> reliable protocol over another makes the windowing/retransmission
>> algorithms conflict in a worst-of-both-worlds kind of way.
> This can't be true. While it may hold in some instances, as a
> general statement it can't be right. TCP/IP behaves poorly in
> situations where bit error rates are high. If the overall latency
> is also high, but the high bit error rate link has low latency then
> the effect of placing a reliable protocol underneath TCP/IP (especially
> one that does forward error correction) on the link will, without doubt,
> improve real world performance.
Different situation. You're not talking about running the lower-level
reliable protocol over the entire path length, as you are with uucp over TCP.
Link layer reliability can be tuned to suit the nature of the link; fast
retries are a reasonable thing to do when you know what the latency
limit of the link is (more or less), especially if you know it to be
short. Across the Internet you don't. If you run fast retries across
the 'net, you'd be retransmitting packets that will safely arrive, just
a little later than you thought. Put TCP on top of that and you'll
still get lots of retries when you don't need them -- exactly the
circumstances TCP was designed to avoid. Performance goes down because
the link is full of unnecessary retries. Meanwhile, if doing this
introduces a lot of latency, TCP's flow control will kick in, even if
things are going full tilt at the lower levels. Worst of both worlds.
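The spurious-retry problem above can be sketched as a toy simulation (the timer values and latencies are made-up numbers, purely for illustration): a retry timer tuned for a short link fires repeatedly on a long Internet path, resending packets that were never lost.

```python
import random

random.seed(1)

def spurious_retries(n_packets, rtt_ms, retry_timeout_ms, jitter_ms):
    """Count retransmissions of packets that were never actually lost,
    just slower than the retry timer expected."""
    spurious = 0
    for _ in range(n_packets):
        # Ack arrives after one round trip plus some jitter; no real loss here.
        ack_time = rtt_ms + random.uniform(0, jitter_ms)
        # Every timer expiry before the ack arrives triggers a needless resend.
        spurious += int(ack_time // retry_timeout_ms)
    return spurious

# Retry timer tuned for a short serial link (50 ms) on a 300 ms Internet path:
print(spurious_retries(1000, rtt_ms=300, retry_timeout_ms=50, jitter_ms=100))
# The same timer on the short, predictable link it was tuned for:
print(spurious_retries(1000, rtt_ms=20, retry_timeout_ms=50, jitter_ms=10))
```

On the Internet path every single packet gets resent several times for nothing; on the short link the timer never fires early at all.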
Turn it around: take a short-packet, fast-retry protocol -- oh, say,
uucp -- and run it over a longer-retry, longer-packet protocol like TCP.
If the link experiences packet loss, TCP stops until it times out, and
that could be a while. Meanwhile, uucp keeps sending packets, filling
the TCP buffers. When the TCP connection recovers, it ends up sending a
full buffer's worth of repeated packets. Even if the link has merely a
highish latency, you can still get repeated packets, because uucp's
retry algorithm is based around serial line latencies, not Internet
latencies. So the traffic goes at best the same speed (minus the extra
overhead of course), and at worst very much slower. uucp may even give up
on the link completely, even when TCP would be perfectly capable of
recovering. Worst of both worlds.
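A back-of-envelope model of that stall scenario (all numbers are assumed for illustration, not measured): while TCP waits out its retransmission timeout, uucp's serial-line-tuned timer keeps firing, and every resend lands in the TCP send buffer to be faithfully delivered later.

```python
def duplicates_during_stall(stall_ms, uucp_retry_ms, window_pkts):
    """While TCP is stalled waiting for its own retransmission timer,
    uucp's timer keeps firing and each unacked packet in the window is
    resent into the TCP send buffer. Every copy is eventually delivered,
    reliably and in order -- TCP has no idea they're duplicates."""
    retries_per_pkt = stall_ms // uucp_retry_ms
    return retries_per_pkt * window_pkts

# A 3 s TCP retransmission stall vs a uucp timer tuned for serial lines,
# with a 7-packet window:
print(duplicates_during_stall(stall_ms=3000, uucp_retry_ms=500, window_pkts=7))  # → 42
```

Forty-two useless packets queued up by one loss event -- and uucp may well count those unanswered retries against its give-up limit while the link is, from TCP's point of view, perfectly recoverable.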
Yes, if you have two boxes connected together via a slightly lossy link,
TCP between those two boxes over a reliable link protocol will work
better than TCP over a non-reliable one. But that's a corner case; the
general case is much more complicated. TCP is designed to deal with
large networks. You wouldn't run V.42 over IP (let alone TCP), because
that's not what it was designed for.
Not that TCP is perfect, mind. For example, TCP's slow-start can bite
hard when there is packet loss, hence the effectiveness of reliable link
layer protocols for high bit-error comms. But mixing retry algorithms
over the same path does not in the general case make for good comms,
especially when one is optimised for the full path and the other isn't.
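The slow-start remark can be illustrated with a simplified window trace (this is a bare-bones Tahoe-style sketch -- no ssthresh, no congestion-avoidance phase -- just to show the shape of the problem): the window grows exponentially from one segment, but every loss knocks it back to the start.

```python
def slowstart_cwnd(rtts, loss_at):
    """Track a congestion window over a series of round trips:
    double each RTT from 1 segment, reset to 1 on loss (simplified
    slow start, omitting ssthresh and congestion avoidance)."""
    cwnd, history = 1, []
    for rtt in range(rtts):
        history.append(cwnd)
        cwnd = 1 if rtt in loss_at else cwnd * 2
    return history

# A loss every fourth round trip keeps the window pinned small:
print(slowstart_cwnd(12, loss_at={3, 7, 11}))
# → [1, 2, 4, 8, 1, 2, 4, 8, 1, 2, 4, 8]
```

On a high bit-error link the losses never stop coming, so the window never opens up -- which is exactly why a reliable link layer that hides those errors from TCP helps there.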
uucp, at least in later implementations, is tunable with longer
packets and larger windows, which could help. But my own experience was
that this is itself limited -- with big packets, any unnecessary uucp
retries could greatly drop the performance of the link -- it isn't
automatically better to increase the packet sizes. This is true even
on serial links, much more so when running uucp over TCP.
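The "bigger packets aren't automatically better" point falls out of a simple goodput model (the error rate and overhead below are assumed numbers, not measurements): with a per-byte error probability, a corrupted packet is retransmitted whole, so the expected cost of a retry grows with packet size even as the per-packet header overhead shrinks.

```python
def goodput(payload_bytes, overhead_bytes, byte_error_rate):
    """Expected fraction of useful bytes delivered per byte sent on a
    stop-and-wait link where any corrupted packet is resent whole."""
    total = payload_bytes + overhead_bytes
    p_ok = (1 - byte_error_rate) ** total   # the whole packet must survive
    return (payload_bytes / total) * p_ok

# With roughly one corrupted byte per 10,000, efficiency peaks at a
# moderate packet size and falls off again for big packets:
for size in (64, 256, 1024, 4096):
    print(size, round(goodput(size, overhead_bytes=8, byte_error_rate=1e-4), 3))
```

Under these assumptions the sweet spot is in the middle of the range: small packets waste the link on headers, big ones waste it on retransmissions.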
To unsubscribe from nznog, send email to majordomo(a)list.waikato.ac.nz
where the body of your message reads: