You are missing a very important point: this is assuming your cache is
totally empty. So you pay this penalty once when the cache is cold.
During normal operation, a cache sees a 75-85% hit rate. 
I started by saying I was assuming a cold cache to show how much of the
total (worst case) is dominated by the GTLD servers compared to the
roots and the final lookup.
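As a rough sketch of why the hit rate matters, here is the expected per-query latency for a given hit rate (the millisecond figures are my own illustrative assumptions, not measurements from the thread):

```python
# Expected per-query latency for a caching resolver.
# Latency figures are illustrative assumptions only.
HIT_MS = 1      # assumed: answer already in the resolver's cache
MISS_MS = 300   # assumed: cold lookup crossing to a distant GTLD server

def mean_latency_ms(hit_rate):
    """Blend cached and uncached latency by the cache hit rate."""
    return hit_rate * HIT_MS + (1 - hit_rate) * MISS_MS

for rate in (0.75, 0.85):
    print(f"hit rate {rate:.0%}: ~{mean_latency_ms(rate):.1f} ms per query")
```

Even at an 85% hit rate, the occasional 300 ms miss dominates the average, which is why the cold-path cost is still worth attacking.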
A cache miss to the root servers, for example, is less of a concern,
because most of the root servers have instances that are close (e.g.,
within New Zealand or Australia). In comparison there are no "close"
GTLD servers: the best one appears to be at the far end of the
Southern Cross cable, and at least one of them (e.gtld) appears to be in
Europe, well over 300 ms away.
Compare with, say, Afilias's .org infrastructure, which appears to have
an instance at APE (or at least somewhere very close to me) and a handful
of instances in Australia. Their best case is local, and their worst
case is on par with the GTLD best case.
Between about 20% and 70% of that time is spent talking to the
GTLD servers. And the .com/.net NS and A glue records are cacheable for
86,400 seconds, so once a day at least one person has to wait almost an
entire extra second. If you have 86,400 users that each have to waste 1
extra second a day, you've just wasted an entire lifetime.
This assertion assumes all entries expire at the same time, and the
root zone has a 41-day TTL for the glue records.
The glue for delegations of the root zone appears to be 2 days, whereas
the glue for the root zone itself appears to be ~41 days. I was
ignoring the root zone's own glue and concentrating solely on the
delegations it makes.
The delegation/glue records in the GTLD zones appear to have a 2-day
TTL, which puts my math off by a factor of two. Sorry, my bad for not
checking and assuming 1 day. These are the lookups that I think we
should be targeting for improvement.
If 86,400 users wasted one second, that's not a lifetime, that's only a
day... unless we are talking about the lifetime of some insects.
Joking aside, to waste even that amount of time (one full day), 236
years have to pass, because you waste one second per day.
Hrm, sorry, I wasn't clear with my reasoning: assuming 86,400 users
waste 1 second each per day, in aggregate they are wasting 86,400
seconds each day. It's an amusing look at how a small, potentially
"insignificant" amount of time can easily become significant if it
happens frequently enough, and why fixing even the tiniest of delays in
a network can quickly add up to significant numbers, even if they are
spread across a lot of individual users.
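Spelled out, the aggregate arithmetic from the paragraphs above looks like this (the figures are the ones used in the thread):

```python
# Many users each losing one second per day adds up fast.
users = 86_400
seconds_lost_per_user_per_day = 1

# Aggregate: one whole day of human time wasted every day.
aggregate_seconds_per_day = users * seconds_lost_per_user_per_day
print(aggregate_seconds_per_day)  # 86400 s == 24 hours

# A single user at 1 s/day needs ~236 years to waste that same day.
years_for_one_user = (86_400 / seconds_lost_per_user_per_day) / 365.25
print(round(years_for_one_user, 1))  # ~236.5, matching the ~236 above
```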
While 86,400 users obviously aren't all going to have to wait 1 s each
for every query, the Internet has a long tail: if your users are going
to millions of unique sites every two days, at least one of them for
each site is going to have to pay the penalty of waiting for an
authoritative answer, probably both the GTLD cost *and* the end
nameserver's RTT cost in the same transaction (since if there is
nothing for that name in the cache, you're going to have to look up
both). After that it will be cached in their local browser cache and
your recursive nameserver's cache for other users to hit, but when
those caches expire, someone has to pay that penalty again.
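Under those assumptions, the cold-path penalty for one unlucky user is the sum of the round trips the resolver cannot avoid. The RTT values below are hypothetical, chosen only to illustrate the shape of the cost:

```python
# Hypothetical round-trip times (ms) for a fully cold .com lookup
# from New Zealand; all values are illustrative assumptions only.
ROOT_RTT_MS = 30    # assumed: root servers have nearby anycast instances
GTLD_RTT_MS = 300   # assumed: nearest .com/.net GTLD server is far away
AUTH_RTT_MS = 150   # assumed: the domain's own authoritative server

# The first user after the caches expire pays everything in one
# transaction; later users are served from the resolver's cache.
cold_ms = ROOT_RTT_MS + GTLD_RTT_MS + AUTH_RTT_MS
print(cold_ms)  # 480 ms for the unlucky query
```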
The other option is of course to pre-warm your cache with names you know
are resolved frequently to make sure that none of your users have to pay
these penalties. But the long tail bites you, as someone is likely to
go to a domain that you've not warmed.
If you want to improve Internet performance in New Zealand through
improving DNS infrastructure, try and get at least one GTLD server
instance hosted within New Zealand. The time it takes to go to the US
for the GTLD .COM/.NET/.EDU lookups is by far the easiest of those
delays to fix.
The gain from having an instance of each of .COM/.NET/.EDU in New
Zealand is low, because a caching resolver will hit them only when the
NS/A records expire. A caching resolver usually queries the
authoritative nameservers for the domains the users ask for far more
frequently than the GTLD servers.
True, however it's going to be hard to get an instance/server of every
single nameserver in common use within New Zealand, and as I mentioned
above, the single user who ends up paying the GTLD transaction cost is
likely to end up paying the final authoritative nameserver cost as well
in the same transaction. If you can't realistically reduce one,
reducing the other will provide some help.
I believe getting an instance of a GTLD server hosted here is a
relatively simple improvement that will give a reasonably sized win to
a large number of queries (although obviously spread over a large
number of individual users), and it can be done once for all ISPs.
Compare that with other solutions, such as all ISPs providing extra
infrastructure in the South Island to capture DNS traffic, and then
paying for extra national transit to get that traffic to their
international providers in Auckland.
In short, IMHO its cost/benefit case is likely to be simple compared to
other obvious ways of improving DNS resolution times in New Zealand.
That's not to say that other methods for improving DNS resolution times
in New Zealand aren't a good idea too, just that IMHO this is a simple
win. I could be wrong; I don't know much about how the GTLD
infrastructure is run. It appears that it isn't anycast at all, which
is what is leading to these large resolution times, and which suggests
that perhaps getting an anycast instance into NZ would be quite
difficult. I don't know; I'm just putting in my 2c.