“Our computers, he argued, have become more complex and less reliable.”
Yes, they have become more complex, but no, they haven’t become less
reliable. Consider this paper
<https://www.bell-labs.com/usr/dmr/www/retro.ps> from 1977 by Dennis
Ritchie, entitled “The UNIX Time-sharing System—A Retrospective”: it
says “the typical period between software crashes (depending somewhat
on how much tinkering with the system has been going on recently) is
well over a fortnight of continuous operation”. Today, our Linux servers
can have uptimes measured in months or even years.
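For anyone who wants to check this on their own box, here is a minimal
sketch, assuming a Linux host (where the first field of /proc/uptime is
the number of seconds since boot):

    # Read the uptime in seconds and print it in days.
    with open("/proc/uptime") as f:
        seconds = float(f.read().split()[0])
    print(f"up {seconds / 86400:.1f} days")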
I can remember that in the 1990s there was a big gulf between the
hardware reliability of commodity PCs and that of more expensive
“workstations” or “servers”. Those were the days when powerful,
multitasking, multiuser OSes with proper memory protection, like Linux,
the BSDs, the ones carrying the “Unix” brand, and Windows NT, were just
starting to be able to run on commodity PC hardware, and the result
was a lot of crashes caused by flaky hardware.
Nowadays, there is no longer this great difference in hardware
reliability. The main difference with servers is the inclusion of
management features (e.g. hot-swap bays, remote consoles), not the
quality of the hardware.