On Sun, May 24, 2015 at 01:55:09PM +1200, Nathan Ward wrote:
> I'm still researching exactly how it works, and of course there
> doesn't seem to be much documentation publicly available and it sounds
> like it's difficult to get information by asking directly, but,
I asked a few questions back when it was new.
> empirical evidence and reading their website suggests that they're
> judging how 'good' an Internet connection is by downloading a bunch of
> files from a few servers under different conditions (i.e. some files
> avoid caches, some don't, etc.).
Benchmarking web performance is complex. I used to be of the belief that testing
mostly cached performance is a good test. But now the majority of "slow sites"
are using HTTPS, and a lot of slow sites actually turn out to be a problem with
the CDN or the site itself.
> This is bad, because there is little in common between downloading a file
> from a server, and actual user experience of say browsing the web, sending
> email, or playing games. Despite this, many networks in NZ have large
Right. I did some of my own testing years back, and from that it seemed there
were two central issues. One was that most web sites require doing a lot of
if-modified-since requests, and with high latency there are too many to do in
one round trip time. The other was that loss can make a /huge/ difference to
performance, as not only does it delay that request, but subsequent requests
too. (Also, if there's loss you could have to wait 3 seconds for a retransmit..
Linux has changed the default initial RTO to 1 second now, which should make
things better - normally retransmission is sped up by ACKs for subsequent
packets triggering fast retransmit, but small IMS requests don't generate those.)
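To make the round-trip arithmetic concrete, here's a toy model of how revalidation count, RTT, parallelism and retransmission timeouts combine. The function, its parameters and all the numbers are illustrative assumptions of mine, not anything TrueNet actually does:

```python
# Rough model: a page needs many If-Modified-Since revalidations, a browser
# only issues a few in parallel, so total time grows with RTT; a lost
# request stalls for the full retransmission timeout (RTO) instead of one
# RTT, since tiny exchanges don't generate enough ACKs for fast retransmit.
import math

def revalidation_time(n_requests, rtt, parallel, n_lost=0, rto=1.0):
    """Seconds spent revalidating n_requests cached objects.

    Each "round" issues `parallel` requests and costs one RTT; each lost
    request costs an extra RTO on top.
    """
    rounds = math.ceil(n_requests / parallel)
    return rounds * rtt + n_lost * rto

# 60 objects, 6 parallel connections, 150 ms trans-Pacific RTT:
fast = revalidation_time(60, 0.150, 6)                    # 1.5 s of pure RTTs
# Same page with two lost requests under the old 3 s initial RTO:
slow = revalidation_time(60, 0.150, 6, n_lost=2, rto=3.0)  # 7.5 s
print(fast, slow)
```

With the newer 1 second default RTO the same two losses cost 3.5 s instead of 7.5 s, which is the improvement mentioned above.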
Not only do they seem not to test enough international destinations, but they
don't parallelise requests like a browser does. And that is a bit complicated
to do properly.
> amounts of engineering work driven by TrueNet, because that's the
> current standard that ISPs judge themselves by against the competition.
It does seem to be good at testing handover congestion.
> While I'm not aware of any network having specific config in place for
> TrueNet, its results drive investment and change in areas that are perhaps
> not really that important.
Handover congestion impacts all traffic.
> One particular area of concern that I have, is that the servers they
> download these test files from are sometimes not in a particularly good
> network position. They're not testing 'Internet' performance,
> they're testing performance downloading some files from a particular
> network location. They used to have some files on TradeMe for this, and
Getting a good representative spread of network locations is hard.
> that was perhaps a little better, but that is, I understand, no longer the
> case. I should say, TradeMe was a slightly better test of user experience
> because it was a reasonable thing for networks to be optimised for. It is
> no better in terms of testing against a wider range of targets, and it
> doesn't actually test real user experience.
I think Amazon and Ebay would be better tests myself. :)
> Another area of concern is the whole 'cached' thing, which furthers
> the outdated notion that web caching is always a good idea. I'm not
> saying that it isn't, every network is different, but it should be up to
> the network operator to decide whether caching improves their customer
> experience/costs/etc. and in which parts of their network. Because TrueNet
I hate to say it, but I've always been really keen on caching of resources;
however, the vast majority of bulky content is moving towards CDNs etc., and
is marked not to be cached. Even things like Netflix, which uses HTTP, disable
caching.
> don't test customer experience, rather they test the performance of
> 'cached' data, providers who have decided that caching isn't for
> them are penalised.
Whatever you do you are going to favour some setups and disfavour others. The
biggest concern I've had is that they seem to favour burstable single-threaded
connections, when the majority of slowness is on more active connections with
more simultaneous access, where AQM can really help.
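To illustrate the AQM point: a deep FIFO buffer that fills under load adds a standing-queue delay of buffered bits divided by the drain rate, which is exactly what AQM keeps in check and exactly what an idle single-threaded test never sees. A toy calculation, with buffer and link numbers assumed for illustration:

```python
def queue_delay_ms(buffer_kbytes, link_mbps):
    """Worst-case standing-queue delay once a FIFO buffer fills:
    delay = buffered bits / link drain rate (the bufferbloat arithmetic)."""
    buffered_bits = buffer_kbytes * 1024 * 8
    return buffered_bits / (link_mbps * 1_000_000) * 1000

# A 256 KB device buffer draining at 10 Mb/s adds ~210 ms to every packet
# while the link is busy - invisible to a speed test run on an idle line:
print(round(queue_delay_ms(256, 10), 1))
```

AQM schemes like CoDel aim to keep that standing queue down to a few milliseconds, which mostly shows up on the busy, multi-flow connections described above.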
> They also *seem* to only provide data about providers who have paid them
> money, though obviously I can't back that up. If it is true, it is
> strange given that ComCom pay them as well. This is not technical though
> so isn't really appropriate for this list, but I figured I'd mention
> it. If anyone can back that up, email me.
I think it's actually just providers that they have enough probes in the
locations of. I wouldn't say TrueNet is malicious, I just think it's a bit
simplistic.
> I have more thoughts on this, but they're not yet well formed.. I intend
> to write up something along these lines in the next couple of weeks. If
> other people have data/thoughts that would be useful, email me.
In some ways I think it's best to keep this on list?
A bit of a ramble..
OK, my concerns with TrueNet started when they were comparing against
"advertised speed" for cable and "maximum attained speed" for ADSL. And so
cable, by bursting over the advertised speed, often gave results above 100%.
In my own experience with cable, there often seemed to be a lot of packet loss
on active connections. And my experience of using cable was that it was often
slower than ADSL for normal web browsing, downloads, etc. But that it went fine
with threaded downloads.
I think there were two factors in cable performance being bad. One was that the
TCP/IP stack used on the transparent proxies gave lower performance than direct
TCP connections from Linux to Linux, which seemed to hurt performance to Europe
quite noticeably. As well as that, normal TCP connections seemed to get loss
where UDP was fine. This may have been to do with the burstiness of TCP where
UDP is constant rate. But the end result was TCP would struggle to do even
20 megabit when UDP would do 20 megabit with 0% loss.
The problem with web sites is that usually there isn't enough time to ramp the
window up, even with TCP CUBIC being fast enough to give 20 megabit/sec+ on
small files. And on small files any loss can cut the ramp-up short and force
retransmits, giving a noticeable hit. That is somewhere a transparent proxy can
really help, as if the retransmits can be cut out, you can get a 25%+ speed
boost on medium-sized files, even with no caching.
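A small sketch of the ramp-up problem, using standard slow-start assumptions (initial window of 10 segments, window doubling each round trip); the parameters are assumptions for illustration, not measured values:

```python
import math

def slowstart_rounds(file_kb, mss_bytes=1460, init_cwnd=10):
    """RTT rounds needed to deliver a file while TCP is still in slow
    start (congestion window doubles each round), ignoring loss."""
    segments = math.ceil(file_kb * 1024 / mss_bytes)
    sent, cwnd, rounds = 0, init_cwnd, 0
    while sent < segments:
        sent += cwnd   # each round delivers a full window of segments
        cwnd *= 2      # slow start: window doubles per round trip
        rounds += 1
    return rounds

# A 100 KB object needs 4 round trips: ~600 ms at a 150 ms international
# RTT, versus ~40 ms of round trips via a nearby transparent proxy (~10 ms
# away) that keeps its own hot connection across the long path.
print(slowstart_rounds(100))
```

The transfer never leaves slow start, so the total time is dominated by RTT count rather than link speed, and any loss resets part of that ramp, which is where the proxy's 25%+ boost comes from.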
The second was that there often seemed to be poor DNS performance. Which is
really hard to test: if you benchmark the same destinations it will give HIT
performance, when MISS performance is what matters. It may have changed now,
but it seemed very common in the past to use 220.127.116.11, hard coded, and
for some reason or other, performance seemed to be worse than ADSL ISPs'.
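The HIT-vs-MISS trap is easy to show with a toy resolver model (the latencies below are made-up placeholders): a benchmark that re-queries the same name measures the cache, not the resolver.

```python
# Toy stub-resolver cache model; both latencies are assumed, not measured.
HIT_MS, MISS_MS = 1, 180

class ToyResolver:
    def __init__(self):
        self.cache = set()

    def lookup_ms(self, name):
        """Simulated latency of one lookup: the first query for a name
        pays the full recursive MISS cost, repeats hit the local cache."""
        if name in self.cache:
            return HIT_MS
        self.cache.add(name)
        return MISS_MS

r = ToyResolver()
times = [r.lookup_ms("example.com") for _ in range(5)]
print(times)  # [180, 1, 1, 1, 1] - only the first shows the real cost
```

Averaging those five numbers makes the resolver look ~5x faster than what a user browsing fresh sites actually experiences.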
Which brings me to: what are you trying to test? Are you more interested in
peak, worst-case, or median performance; testing popular sites, testing close
sites, a specific representation, or a representation that covers multiple
regions, or what?
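Whatever the choice, the summary statistic matters as much as the target list. A quick sketch (sample numbers invented) of how the median and a near-worst-case percentile can tell very different stories about the same connection:

```python
import math
import statistics

def p95(samples):
    """Crude 95th percentile: the value at index ceil(0.95 * n) - 1 of
    the sorted samples (good enough for this illustration)."""
    s = sorted(samples)
    return s[math.ceil(0.95 * len(s)) - 1]

# 20 page-load times in ms: mostly fine, with two bad outliers.
loads = [200] * 18 + [900, 2500]
print(statistics.median(loads), p95(loads))  # 200.0 vs 900
```

A median-only report says this connection is great; the tail says a user hits a slow load one time in ten, which is what they actually remember.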
There are a few solutions out there to test existing connections. I used to use
a benchmarking plugin, which required manual running, and it has stopped
working. There is also a tool to test DNS that can go through your browser
history.
But as far as real-world-like testing goes, there isn't a lot of automated
stuff that can do it. A good test to my mind would be to emulate browsers, run
all the time regardless of other use of the link, and measure "pass/fail"
rather than performance - say if a web site loads in 4 seconds that's a fail,
if it loads in 2 seconds a pass. Measure availability. If a connection can't
play Youtube videos 5% of the time, just call that 5% of the time a fail, as
some users will shift their viewing, some will let videos buffer, and others
will change providers.
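A minimal sketch of that pass/fail scoring; the threshold is the one suggested above and the sample load times are invented:

```python
def fail_rate(load_times_s, fail_s=4.0):
    """Fraction of page loads slower than the fail threshold: an
    availability-style score instead of an average speed."""
    return sum(1 for t in load_times_s if t > fail_s) / len(load_times_s)

# Ten simulated page loads in seconds; two are painfully slow:
loads = [1.2, 1.8, 2.5, 6.0, 1.1, 9.3, 1.4, 1.6, 1.9, 2.1]
print(fail_rate(loads))  # 0.2 -> the connection "fails" 20% of the time
```

Note the mean load time here (~2.9 s) looks almost acceptable, while the pass/fail view makes the two broken experiences impossible to hide.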
And then do you want to test large files or only medium-sized files? If I'm
downloading a few GB then I am unlikely to watch it download. But for a 10 MB
file I'd be waiting for it to download, so whether it took 2 seconds,
5 seconds or 10 seconds would matter to me. At the same time, if I upload a
file of a few GB and I can't use the connection while it uploads, then it's
much more of a problem.
I realise the reason that TrueNet want to wait for an idle connection is that
they want consistent results, but do you really care if your internet is fast
or slow while you're not using it?
And I think that since TrueNet began, handover congestion has become less of an
issue. And in general international performance isn't usually an issue. And if
a web site is going slow nowadays, I think it's often the destination, or a
congestion issue internationally. So instead of "all international is slow",
things have shifted to issues with Comcast, Verizon, AT&T, etc..
And a lot of these issues can even be region based in the US... like issues to Comcast in
San Jose, and it
can be the other end, or the connection within the US to them. And it's pretty well
known now that there are
a lot of peering issues in the US. Some places are worse than others, and for instance
Kansas can be kind of
hit and miss, but hardly any web sites are hosted there that New Zealanders would go to.
Also there are things like a lot of CDNs being hosted in Asia - and connections
to Japan can be routed via Australia or the US. Singapore is often routed via
the US. For countries like China and Singapore the latency can get pretty bad
routing via the US, but for Japan it often doesn't matter nearly as much. And
if these CDNs are pull caches and pull from the US, and you go NZ -> US ->
Singapore, then the CDN pulls from the US, there can be a significant decrease
in performance compared to hitting a US CDN.
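Back-of-envelope sums for that pull-through case; every RTT below is an assumed, illustrative figure:

```python
def first_byte_ms(client_cdn_rtt_ms, cdn_origin_rtt_ms=0):
    """Rough time-to-first-byte: one round trip to the CDN edge, plus
    (on a cache miss) one round trip from the edge back to the origin."""
    return client_cdn_rtt_ms + cdn_origin_rtt_ms

# Warm CDN node in the US, NZ -> US RTT assumed ~130 ms:
us_hit = first_byte_ms(130)
# Cold Asian node reached NZ -> US -> Singapore (~310 ms total), pulling
# its content from a US origin (~180 ms back the other way):
sg_miss = first_byte_ms(310, 180)
print(us_hit, sg_miss)  # 130 vs 490
```

So a "closer" Asian edge node can be several times slower than a plain US one until its cache is warm, which is the decrease described above.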
So yeah, TrueNet measures handover congestion fine, but to do "better" testing
would require a lot of concerted effort. And I really think it's the kind of
thing that has a global audience, and really would need test nodes in other
countries as well, that can do things like VOIP emulation between nodes.