On 21/05/2010, at 1:42 AM, Joe Abley wrote:
On 2010-05-20, at 09:22, Nathan Ward wrote:
Who on earth produced this report? Can they come to the
next NZNOG meeting for a flogging?
There are grains of truth in the idea that increased latency between clients and
resolvers can lead to decreased performance for web applications. Many newfangled
random-looking URIs and similar techniques are deliberately designed to defeat caching,
since caching in some interactive web $buzzword.$excitement apps leads to user pain and suffering.
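To illustrate the cache-defeating pattern described above, here is a minimal sketch (the function name and query parameter are my own invention, not from any particular app): appending a random token to each URL guarantees that no DNS/HTTP cache ever sees the same request twice.

```python
import uuid

def cache_busted_url(base):
    """Append a random token so intermediate caches never
    see the same URL twice (every fetch is a cache miss)."""
    return f"{base}?nocache={uuid.uuid4().hex}"

# Two "identical" requests now look different to every cache in the path:
print(cache_busted_url("http://example.com/api/update"))
print(cache_busted_url("http://example.com/api/update"))
```

The same trick is sometimes done in the hostname rather than the query string, which is what drags the DNS resolver into the picture.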
Vixie presented some data at the recent DNS-OARC meeting in Prague describing a
trend of decreasing DNS cache hit rates, and at least in some cases found that
random-looking URIs were contributing to the effect (see
If an application like Facebook can generate a few hundred HTTP sessions per page load,
it seems possible that cache misses (both in DNS and HTTP caches, remote and local) have a
greater effect than you would imagine, and perhaps the cumulative effect of
Dunedin-Auckland DNS latency is noticeable. But I agree it seems like a
stretch (every cache miss in Auckland probably requires a trip to an authority-only server
across an ocean).
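A back-of-envelope sketch of that cumulative effect (all numbers here are illustrative assumptions, not measurements, and it pessimistically treats lookups as serial when browsers do many in parallel and cache locally):

```python
def extra_page_latency(n_lookups, cache_hit_rate, resolver_rtt_ms, upstream_rtt_ms):
    """Rough worst-case extra DNS latency per page load, in ms.

    Every lookup costs one RTT to the recursive resolver; each cache
    miss adds at least one more RTT to an authoritative server,
    which may well be across an ocean.
    """
    misses = n_lookups * (1 - cache_hit_rate)
    return n_lookups * resolver_rtt_ms + misses * upstream_rtt_ms

# Hypothetical figures: 50 lookups per page, 60% resolver hit rate,
# 15 ms Dunedin-Auckland RTT, 150 ms Auckland-to-offshore-authority RTT:
print(extra_page_latency(50, 0.6, 15, 150))  # 3750.0 ms, serial worst case
```

Even if parallelism knocks an order of magnitude off that, the per-lookup resolver RTT still multiplies across hundreds of sessions, which is the grain of truth in the report.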
It's quite common to use random hostnames to encourage web browsers to parallelize
sessions, as (from memory) most browsers will not open more than 4 connections to a single
hostname.
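The hostname-sharding trick works something like the sketch below (shard count, prefix, and domain are hypothetical): assets are spread deterministically across a few hostnames so the browser opens a separate connection pool for each.

```python
from zlib import crc32

SHARDS = 4  # hypothetical: one pool per hostname, ~4 connections each

def shard_host(path, domain="static.example.com"):
    """Map an asset path deterministically onto img0..img3.<domain>,
    so repeat views of the same asset hit the same hostname (and its
    DNS/HTTP cache entries) while the browser parallelizes across shards."""
    n = crc32(path.encode()) % SHARDS
    return f"img{n}.{domain}/{path.lstrip('/')}"

print(shard_host("/css/site.css"))
print(shard_host("/img/logo.png"))
```

Note the flip side relevant to this thread: every extra shard is an extra DNS lookup per page, so sharding trades resolver latency for connection parallelism.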
Some actual science might be nice to see, maybe.