I am only a few pages in, but speaking as a Linux user in the 2020s, I am skeptical of the claim that Linux in 1999 would “never, ever break down.”
I was there Gandalf…
In comparison to the alternatives we had at the time, Linux was a fucking tank. Once it was up, you could expect to get 6 months to years of uptime unless you were installing new tools or changing hardware (no real USB/SATA yet, so hardware was a reboot situation).
If you got a Win98 machine up, it would eventually just hang. Yes, some could go a while, but if you used it for general use it would crash the kernel eventually. Same for MacOS (the OG classic MacOS).
The only real competition for stability was other UNIX systems, and few of those were available to the general public at a reasonable price point.
VAX/VMS was still around then, and as far as I recall, that was the king for uptime.
Linux back then supported much less hardware. I can remember that even in the early aughts there were whole families of popular wireless network chipsets that weren’t supported.
VAX/VMS was such a beast! The hardware wasn’t readily available to the public, though.
Oh, the wireless chipsets from the 90’s into about 2005 or so…that was a bad time for anyone trying to run wireless. Hell, MS Windows didn’t even have network drivers baked in until what, WinXP? Wiring computers together in the 90’s was such a trial, on both the hardware and software fronts.
I was lucky to score a 3Com 3c905b fast 10/100 Ethernet card from a bussy in 1996. That was well supported across the board (Linux and Windows), and the PCI bus memory-mapped I/O and IRQ settings were well documented.
Edit: buddy, not a hussy, though he kinda was… Your call in how you want to read it.
I assume that word also means something other than what I’m thinking…
NetWare was rock solid.
Do you remember the article about some university that accidentally walled in a network server? It ran for years until they needed to put hands on it for something. They had to play the “follow the Ethernet cable” game until it went through the sheetrock into a dead space.
The Register still has the article from 2001: https://www.theregister.com/2001/04/12/missing_novell_server_discovered_after/
How the f does that even happen ._.
Hell, my home server, running on low-end Xeon hardware, had uptime numbers around 3 years…then there was a power cut. The next down day was another power cut a year or so later. In total, around 8 years running with 5 outages, all but one due to power loss (the other was the Ubuntu 16.04 to 18.04 upgrade).
Just updated to Ubuntu server 20.04 so uptime is only 7 days at this point.
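For anyone wondering where those numbers come from: on Linux the kernel exposes seconds-since-boot in /proc/uptime, which is what the uptime tool reads. A quick Python sketch of the same idea, purely for illustration:

```python
# Read the kernel's uptime counter from /proc/uptime.
# First field is seconds since boot; the second is aggregate idle time.
with open("/proc/uptime") as f:
    uptime_seconds = float(f.read().split()[0])

days, rem = divmod(uptime_seconds, 86400)
hours, rem = divmod(rem, 3600)
minutes = rem // 60
print(f"up {int(days)} days, {int(hours)}:{int(minutes):02d}")
```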
Daily updates with a rolling distro may cause issues, but a stable system that wasn’t tinkered with would run and run and run. Our Linux fileserver at work had a 2-year uptime; we only broke that for some drive additions and other adjustments, otherwise it would have just kept on chugging along without interaction. My Debian ARM NAS runs without incident; the only shutdowns it sees are when I move equipment to different rooms or want to reroute power cables. Otherwise it would just keep working fine.