I’m still researching exactly how it works. There doesn’t seem to be much documentation publicly available, and it sounds like it’s difficult to get information by asking directly, but empirical evidence and a read of their website suggest that they judge how “good” an Internet connection is by downloading a bunch of files from a few servers under different conditions (e.g. some files avoid caches, some don’t).
This is bad, because downloading a file from a server has little in common with the actual user experience of, say, browsing the web, sending email, or playing games. Despite this, many networks in NZ have large amounts of engineering work driven by TrueNet, because that’s the current standard by which ISPs judge themselves against the competition. While I’m not aware of any network having specific config in place for TrueNet, its results drive investment and change in areas that are perhaps not really that important.
One particular area of concern I have is that the servers they download these test files from are sometimes not in a particularly good network position. They’re not testing “Internet” performance; they’re testing the performance of downloading some files from one particular network location. They used to host some files on TradeMe for this, which was perhaps a little better, but I understand that is no longer the case. To be clear, TradeMe was a slightly better test of user experience because it was a reasonable thing for networks to be optimised for; it was no better in terms of testing against a wider range of targets, and it still didn’t test real user experience.
Another area of concern is the whole “cached” thing, which furthers the outdated notion that web caching is always a good idea. I’m not saying that it isn’t - every network is different - but it should be up to the network operator to decide whether caching improves their customer experience, costs, etc., and in which parts of their network. Because TrueNet doesn’t test customer experience, but rather the performance of “cached” data, providers who have decided that caching isn’t for them are penalised.
They also *seem* to only provide data about providers who have paid them money, though obviously I can’t back that up. If it is true, it is strange, given that ComCom pay them as well. This isn’t technical though, so isn’t really appropriate for this list, but I figured I’d mention it. If anyone can back that up, email me.
I have more thoughts on this, but they’re not yet well formed. I intend to write up something along these lines in the next couple of weeks. If other people have data or thoughts that would be useful, email me.
On a more serious note, if you think there is a problem with TrueNet results, then feel free to discuss it here. I’d be interested in that.
On Sunday, 24 May 2015, Nathan Ward <email@example.com> wrote:
Totally. Apologies to anyone who didn’t get the joke straight away, I felt like I put enough clues in the first message, but maybe not. Come on though, entroception? :-)
Lots of people have also privately linked me to the OneRNG project, which is actually doing legit RNGs, not just poking fun:
Also, they’re in NZ, so support them.
Designing RNGs for crypto use is a finicky thing and making so much as a single mistake can render the entire crypto system useless.
If you're just doing this for post hangover fun then cool :)
If you're serious about it then I'd suggest finding an existing team and looking to contribute to their efforts.
These guys are cool.
As are these guys.
Both those projects would love collaborators and have the technical ability to peer review contributions.
On Saturday, 23 May 2015, Nathan Ward <firstname.lastname@example.org> wrote:
As we all know, random numbers are an important part of cryptography, which is required for tools that are important to network operators like the widely deployed RPKI. Most systems today generate random numbers using a PRNG, rather than a dedicated hardware random number generator. A PRNG is only as good as its entropy source - if you have a small amount of entropy, an attacker can start to predict the output of your system’s PRNG, which is obviously bad.
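To make the risk concrete, here is a toy Python sketch (my own illustration, not how any particular system works) of why a PRNG seeded from a small entropy pool is trivially brute-forceable:

```python
import random

SEED_BITS = 16  # pretend the system only ever gathered 16 bits of entropy

def generate_token(seed):
    rng = random.Random(seed)   # PRNG state fully determined by the seed
    return rng.getrandbits(64)  # a "secret" 64-bit token

secret_seed = 12345                  # drawn from the weak entropy source
token = generate_token(secret_seed)

# An attacker who observes the token just tries every possible seed.
recovered = next(s for s in range(2 ** SEED_BITS)
                 if generate_token(s) == token)
assert recovered == secret_seed  # ~65,000 guesses and the output is predictable
```

The 64 bits of PRNG output look random, but with only 16 bits of entropy behind them the attacker's search space is tiny.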
There are a number of existing tools for generating entropy:
- HAVEGE implementations (e.g. http://www.issihosts.com/haveged/)
- Audio/video sources (audio_entropyd and video_entropyd)
- Some other things like mouse/keyboard input (e.g. when you wiggle your mouse over that window in whatever that app was that generated keys, puttygen maybe?)
- Network interrupts are also a common source
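For flavour, the timing-jitter intuition behind HAVEGE-style harvesters can be sketched in a few lines of Python. This is a toy of my own, not the haveged algorithm, and its raw output is biased and correlated, so it would need conditioning before being fed anywhere:

```python
import time

def jitter_samples(n=2048):
    """Collect the low byte of the elapsed time between consecutive
    clock reads. Cache misses, interrupts and scheduler activity make
    the exact deltas hard to predict - the HAVEGE intuition."""
    samples = bytearray()
    last = time.perf_counter_ns()
    for _ in range(n):
        now = time.perf_counter_ns()
        samples.append((now - last) & 0xFF)  # keep only the noisy low byte
        last = now
    return bytes(samples)
```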
A week or two back I was a little hungover and didn’t want to do any real work, so I decided to write my own entropy harvesting tool. The code is available at the below URL:
I strongly encourage you NOT to run this on production systems, until it has been certified (a good standard to shoot for would be NIST SP 800-90B perhaps), but I would like to get feedback, so please read the code and suggest improvements. Perhaps you have some additional sources of entropy data that would be useful.
I would particularly like a good way to “boil down” the entropy generated. The Linux kernel by default can take up to 4096 bits, and this provides far, far more than that, which just seems like a bit of overkill. Please feel free to submit pull requests with ideas for that - it would be a great project to learn a bit of Python.
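One possible approach to that boiling down (an assumption of mine, not something the tool does today) is to treat a cryptographic hash as a randomness extractor: hash disjoint chunks of the raw pool so each output block draws on fresh input material. A rough Python sketch, with `condense` being a hypothetical helper name:

```python
import hashlib

def condense(raw, out_len=512):
    """Reduce a large harvested pool to out_len bytes (512 bytes =
    4096 bits, matching the default Linux input pool size). raw should
    be much larger than out_len, or later blocks hash near-empty input."""
    n_blocks = out_len // 32              # SHA-256 emits 32 bytes per block
    chunk = max(1, len(raw) // n_blocks)  # disjoint slice of raw per block
    out = b"".join(hashlib.sha256(raw[i * chunk:(i + 1) * chunk]).digest()
                   for i in range(n_blocks))
    return out[:out_len]
```

Note that hashing can only preserve entropy, never manufacture it: each 32-byte block carries at most as much entropy as its input chunk actually had.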
Generally, entropy should not be sourced from a system you don’t control, or over a network you don’t control, for obvious reasons. However, the quality of the data is so good in this case that I believe it negates those concerns. The data is also fetched with HTTPS, so as long as the source doesn’t also start using this as their entropy source (known as entroception) we should be OK - see my recommendation about production systems.
(opinions all my own, blah blah blah)
NZNOG mailing list