This is sufficiently thought provoking and on-topic
that I thought it worthy of posting. RDS.
The article disses Cisco fairly thoroughly (there are
also some compliments as well). I've added some comments
at the bottom. Feel free to discard my comments as appropriate.
From: Peter Ecclesine [mailto:firstname.lastname@example.org]
Sent: Monday, 4 June 2001 9:59 PM
Subject: Comments on Australia's VOIP nets
From: Frank A. Coluccio
Thursday, May 31, 2001 12:35 PM
Glenn Turner, who apparently operates the Australian Academic and
Research Network has some very interesting IMO things to say about
running voice over the Internet. This is from the ongoing discussion
on NANOG that questions: QOS or More Bandwidth? Enjoy, FAC
<so and so> wrote:
Whenever I worked out the cost of deploying and managing QoS
and compared it with the cost of getting and managing more
capacity, it was always MUCH MUCH cheaper to get and manage
more capacity than to mess with more QoS.
We did one VoIP network deployment, and I tried each of the
different QoS services in IOS at that time (about 18 months ago)
both in the lab and in the field, and more bandwidth was the clear winner.
Interesting. We have a national VoIP network which handles
long-distance calls for the Australian universities. It's not a
trial, it's a real VoIP rollout that interconnects the PBXs of the
universities. We think that's about 300,000 handsets.
More bandwidth doesn't cut it, as the voice calls then fail during
DDOS attacks upon the network infrastructure.
It's not too hard to fill even a 2.5Gbps link when new web servers
come with gigabit interfaces and when GbE campus backbones are being
rolled out. If these DDOS attacks use protocols like UDP DNS then
traffic shaping the protocol is problematic and the source IP subnet
from where the attack is launched needs to be filtered instead. Just
finding and filtering a DDOS source can easily take more than five
minutes, which pushes the availability to less than 99.999% and
leads to legal issues with telecommunications regulators about
access to emergency services.
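The five-nines arithmetic above is easy to check. A small sketch (the five-minute figure and the yearly window come from the text; the function name is mine):

```python
# Downtime budget implied by an availability target.
def downtime_budget_minutes(availability: float,
                            period_minutes: float = 365 * 24 * 60) -> float:
    """Minutes of allowed downtime per period at the given availability."""
    return (1.0 - availability) * period_minutes

# Five nines over a year leaves roughly 5.26 minutes of downtime,
# so a single DDoS find-and-filter episode of "more than five minutes"
# already blows the whole yearly budget.
budget = downtime_budget_minutes(0.99999)
print(round(budget, 2))  # → 5.26
```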
We found the Cisco low latency queuing to be adequate. It still
has a fair amount of jitter, but not enough to matter for VoIP
calls with a diameter of 4,000Km.
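For readers who haven't used it, the LLQ setup mentioned above looks roughly like this in IOS. This is a hedged sketch, not AARNet's actual configuration: the class names, the 512 kbps figure, and the interface are all placeholders.

```
! Minimal LLQ sketch: voice (DSCP EF) gets a strict-priority queue,
! policed to an assumed 512 kbps; everything else is fair-queued.
class-map match-all VOICE
 match ip dscp ef
!
policy-map WAN-EDGE
 class VOICE
  priority 512
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output WAN-EDGE
```

The `priority` command is what bounds the jitter: voice packets jump the queue but are policed so they cannot starve the default class.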
We do have issues with Cisco's 7500: dCEF is still problematic,
but needed for the LLQ feature. The QoS features are too tied to
the hardware: you can't configure a single service-policy and have
it run on the VIP or the main CPU, whichever the hardware provides.
Despite QoS being most needed on cheap E1/T1 links, they expect you
to upgrade to a VIP4 costing many thousands of dollars to support QoS.
We police access to QoS by source IP address, mapping non-conforming
traffic to DSCP=0. As this requires an access list to be executed for
each packet, it limits the number of VoIP-speaking connecting sites to
50 or so, and requires H.323/SIP proxies at the edge of the sites.
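The per-site policing described above can be sketched in IOS roughly as follows. The subnet (a documentation range), ACL number, and class names are placeholders, not AARNet's real addressing.

```
! Only traffic from a known VoIP site keeps its EF marking;
! anything else is remarked to DSCP 0 on ingress.
access-list 110 permit udp 192.0.2.0 0.0.0.255 any range 16384 32767
!
class-map match-all VOIP-SITES
 match access-group 110
!
policy-map POLICE-QOS
 class VOIP-SITES
  set ip dscp ef
 class class-default
  set ip dscp 0
!
interface Serial0/0
 service-policy input POLICE-QOS
```

The cost is exactly the one named above: the access list runs per packet on every ingress interface, which caps how many sites you can afford to police this way.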
We don't use IntServ. When a link is down RSVP takes so long to
find an alternative path that the telephone user hangs up.
So what we are really waiting for is the implementation of the
combined IntServ/DiffServ model so that hosts on member networks
can do local authentication and bandwidth reservation and we can
police the amount of QoS traffic presented at each edge interface.
"Local authentication and bandwidth reservation" also hides a host
of issues that are yet to be fully addressed. Most universities
don't even *have* an authentication source that covers everyone
that can use IP telephony.
In practice, even with our deliberately simplistic implementation
this whole area is an IOS version nightmare. There's no excuse for
a monolithic statically-linked executable in this day and age.
Hopefully Juniper's success with a monolithic kernel plus user-space
programs will lead to Cisco looking to leapfrog the competition
and adopt on-the-fly upgradable software modules, as described
in Active Bridging (1997).
Having dissed Cisco, I should point out that their H.323/ISDN
gateway software that runs on their RAS boxes (5300, etc) was
the most solid of all the manufacturers we tested and Cisco was
the vendor most willing to fix the differing interpretations of
ISDN we encountered when we connected the North American-developed
RAS to our European-developed PABXs (Ericsson MD110, Alcatel, etc).
As far as the "engineering staff costs" argument goes, we have
found that the real engineering time goes into hardening the
existing infrastructure. When carrying voice, every engineering
shortcut you have taken comes back to haunt you -- clocking problems,
a low rate of CRC errors, ATM links with no OAM, synchronous links
with carrier nailed high.
Very little of the costs were due to the configuration of the
routers. Most of the cost was incurred at the edge of the network,
where there is a greater variety of devices, less correct
engineering, and worse configuration control.
The trans-Pacific problem is being solved. Give it
6-18 months and fibre between US/CA and ANZ or JP or SG should
drop significantly in price.
Although the price will drop, it will never drop to the levels of
trans-Atlantic prices. Even then, countries like Fiji which have
landing points would find it hard to raise the millions required
to buy an STM-1 from that landing point.
You can make a case that North America is unique. It doesn't have
2,000Km expanses with no population centers. It isn't split off
from the remainder of the world by mountain ranges so high that
to do an hour's lifting can take a week. It doesn't have a
population so huge that training people to lay fiber is in itself
a massive undertaking.
In these cases, no amount of money will buy you more bandwidth
tomorrow and the engineering costs of QoS and TE are immediately
worthwhile. VoIP and QoS let you use all of the long-haul optical
capacity for IP, whereas the slower growth of the Internet in North
America allowed the running of parallel Internet and PSTN networks.
-- end snip
======= End of Article =============================================
Comments by Rog. Please discard if you think not appropriate
to this list.
Cisco 7500, this was a pretty cool box 5 years ago. You all
bought lots of them. We acknowledge it is now struggling
to keep up, due to all the bandwidth. We'll trade it at
favourable rates to help. Current thinking within Cisco is....
7500 is a 5Gbps box (really a 4 x 1.25Gbps box). The 6509/7600-OSR
is a 256Gbps box. Sufficient horsepower for long and complex
ACL lists to counter DDOS attacks (but how do you find and load
the source of the attack into the ACL swiftly?)
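The "load the source into the ACL swiftly" step is just an extended ACL change of the kind below. This is an illustrative sketch only: the source subnet (a documentation range), ACL number, and interface are placeholders.

```
! Drop a UDP DNS flood from one identified source subnet,
! permit everything else; applied inbound at the border.
access-list 120 deny   udp 198.51.100.0 0.0.0.255 any eq domain
access-list 120 permit ip any any
!
interface GigabitEthernet1/1
 ip access-group 120 in
```

The hard part, as the question notes, is not the configuration but identifying the source subnet inside the five-minute availability budget.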
16-port GE cards mean the price of GE is slightly below US$1K/port.
(The 7600-OSR appears to me to be a 6509 with Optical Service Modules:
STM-4, STM-16, and GigE blades. No, we don't know why they did this.)
Fiji. The cost of an STM-1 card is about 5-10K. Double it
for reliable service. So the millions referred to are really about
how the Southern Cross consortium recovers its investment dosh.
Doing a sweet-heart deal for Fiji, and setting a price that
Fiji can afford, sets a very unwelcome precedent for negotiations
with other countries. So effectively Fiji misses out, and the
country most in need of low-cost communications cannot have it.
This is of interest to me as the same economic situation occurs in
rural NZ. If rural NZ had widely available, fast communications
infrastructure (10Mbps customer access minimum), then the choice of
where to run a new-economy business would be much wider.
The alternative is building more roads in Auckland. London built
the M25 as a possible solution, but it simply created more traffic.
Yeah, you have to have both, but comms networks are cheaper to
build than road networks.
\_ Roger De Salis Cisco Systems NZ Ltd
</' +64 25 481 452 L8, ASB Tower, 2 Hunter St
/) +64 4 496 9003 Wellington, New Zealand
(/ roger(a)desalis.gen.nz rdesalis(a)cisco.com
4/4/01. Mike Volpi, Cisco's chief strategy officer, announces four
key markets: VOIP, wireless LANs, content networking, streaming media.
To unsubscribe from nznog, send email to majordomo(a)list.waikato.ac.nz
where the body of your message reads: