The briefcase was mostly for storing files that would be synced between devices. The general idea was to allow loading contacts and documents onto your PDA, editing stuff on the go, and syncing it back.
Sync primarily happened via either a hardwired serial connection or wirelessly via infrared. The infrared sync worked, but was finicky and required that the infrared sensors on your laptop and PDA be directly facing each other.
I only played with this after I'd graduated and started making enough money that I could buy used PDAs just to see what they were capable of.
Run a DOS box inside that, and you actually have a VM running inside that VM.
Win9x (and I believe it started with Windows/386) is based on a hypervisor architecture with minimal protections. The Win32 environment is essentially a protected-mode DOS application that also runs its own application format, and is itself a VM, along with each "DOS box" that gets created.
95 and prior were literally running on top of DOS. If you exited the Windows environment, you were left on a real DOS shell. 98 is when they changed to what you're describing, AFAIK
>> The Win32 environment is essentially a protected-mode DOS application
> 95 and prior were literally running on top of DOS.
Not sure how you're gathering that the GP is contradicting this. Windows 95, 98 and Me are all architecturally the same: a protected-mode DOS app.
Running DOS from within the Windows shell is not the same as exiting or skipping the boot of the Windows graphical shell; it runs in VM86 mode (which, as the GP points out, existed in Windows/386 and Windows 3.0's 386 Enhanced mode). See Virtual 8086 mode for more of an overview.
95 starts from DOS and can exit to DOS (maybe? my memory isn't reliable that far back; I seem to remember a shutdown screen, and I thought it was reboot into DOS rather than exit to DOS), but that doesn't mean DOS runs under it. When a 386-mode Windows runs, it subsumes DOS; DOS becomes virtualized on top of the Windows kernel, and when Windows exits, it puts things back the way they were.
It's messy and weird, but progress. Windows NT operates differently and I don't think ever started from DOS.
The interesting part is that "subsumes DOS" actually happens in the style of https://en.wikipedia.org/wiki/Blue_Pill_(software) in that DOS isn't aware that a hypervisor has inserted itself under it and is now running it inside a VM.
NT is very much a traditional OS architecture in comparison.
No, what they are referring to is that you could actually just exit Windows entirely, or skip booting into it, and what you would be left with is a plain real-mode DOS environment.
From your own reference: "To end-users, MS-DOS appears as an underlying component of Windows 95. For example, it is possible to prevent the loading of the graphical user interface and boot the system into a real-mode MS-DOS environment."
You can even try this in the linked VM: Shutdown and Restart in MS-DOS mode. That is a real mode DOS without any Windows running.
This was often necessary for games and other heavyweight protected mode DOS applications like CAD/CAM at the time that would not tolerate running in the VM86 environment.
It's really nothing at all like that. TTYs are the core interface of all terminals in *nix, including the one you would use in any X11, Wayland, or other graphical environment; those specific incarnations use something called pseudo-ttys, but it's all the same stuff in the kernel.
What you really mean are plain text-mode consoles, which most UNIX-likes further extend by allowing multiple "virtual" consoles to share one display. And macOS does have text consoles (see video_console.c in the Darwin source); they are just not usually used and are very hard to get to, especially on Apple Silicon Macs (older Macs can easily be booted into single-user mode with Cmd+S).
But this is still not at all analogous, since it's still the same OS underneath. The Windows 95 family was a virtual machine manager launched from DOS, so booting into DOS means not loading it at all.
The issue for compatibility wasn't so much the graphical shell part as the VMM (virtual machine manager).
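To make the pseudo-tty point above concrete, here's a minimal sketch (assuming a POSIX system such as Linux or macOS) of what a graphical terminal emulator does before it ever draws a window: it asks the kernel for a pty pair, and the slave end then goes through the same tty machinery as any other terminal.

```cpp
#include <cstdio>
#include <cstdlib>   // posix_openpt, grantpt, unlockpt, ptsname
#include <fcntl.h>   // O_RDWR, O_NOCTTY
#include <unistd.h>  // close

int main() {
    int master = posix_openpt(O_RDWR | O_NOCTTY);  // allocate a pty pair
    if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
        std::perror("pty setup");
        return 1;
    }
    // The slave side is a regular tty device node; a shell spawned with it as
    // stdin/stdout/stderr can't tell it apart from a "real" terminal.
    std::printf("slave tty: %s\n", ptsname(master));
    close(master);
    return 0;
}
```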
The thing that always strikes me with these kinds of demos is how absurdly efficient old OSs were / had to be. It's obviously not an apples to apples comparison, as modern operating systems have to handle a lot more, but even Windows XP (which handled a lot of the same basic Internet browsing tasks as I still use now!) is almost comically fast on modern hardware.
Even functionally it's very comical, because on Win98 (even 95) we were able to do almost everything we are doing today: editing documents, browsing the web (with images), playing games, watching videos, printing documents, listening to music.
The only functional improvements I can note over two decades are: Unicode support, higher resolution of everything (from hardware to content), and system reliability thanks to driver isolation.
Not true. Or it's true because of software bloat and resource waste these days.
When I was a teenager, I wanted to see what was the maximum resolution my monitor supported. It was a Samsung Syncmaster 14" entirely analog, no OSD, with analog adjustment knobs on the bottom. The video card was a Matrox Millennium II 8MB VRAM and the OS was Windows 3.10. The maximum resolution that I could get was 1600x1200 @ 41Hz interlaced. That's almost 2K (1.92K to be more precise). 41Hz interlaced hurt my eyes like hell but everything worked on the software side.
I can run a full virtual music studio on my Mac, with tens of channels, each with a collection of plugins, and multiple samples and audio files streaming off SSD at the same time.
None of that is GPU accelerated - except maybe the UI, which is split across three 4k monitors
On Windows 95 I could barely play a single WAV at once, and a dual monitor system was an exotic luxury.
Hardware has gotten a lot faster, and the software can do more without crashing. (Mostly.)
The real problem has been the move to browser+cloud for productivity applications. The OS is a front end for the browser, which is a front end for remote compute. This is hugely slow and inefficient compared to making everything work locally, and perhaps including some cloud-ish hooks for sharing.
Font rendering quality: It increases with font size, which is needed for high res displays. Besides, current font authors don't waste any time doing manual font hinting like the old fonts had. Good luck having crisp font rendering without manual hinting! It's all just a blur since antialiasing was introduced.
Huge images: How huge? Win95 without any patches can allocate 500MB to a single program (image viewer). That's 170 megapixels @ 24bit-color.
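For what it's worth, the arithmetic above roughly checks out; a quick sketch of the calculation, taking the 500MB figure from the comment as given and ignoring per-process overhead:

```cpp
// Back-of-the-envelope check: how many 24-bit pixels fit in ~500MB?
#include <cstdio>

int main() {
    const double bytes_available = 500.0 * 1024 * 1024;  // the 500MB quoted above
    const double bytes_per_pixel = 3.0;                   // 24-bit color, no alpha
    std::printf("~%.0f megapixels\n", bytes_available / bytes_per_pixel / 1e6);
    // prints roughly 175, in the same ballpark as the 170 quoted above
}
```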
I mean, I was rocking a 1920x1200 CRT for my Win2k/Win98SE dual boot system that I had until WinXP64. They were plenty capable of efficiently using the screen real estate.
I ran a 1999-era machine well into the 2000s (I think 2007?) with Windows 98SE on 1600x1200 and it worked great, and faster than the XP computer I had.
1) If you're just referring to pointer size, wouldn't it be mostly a matter of the hardware and recompiling to 64-bit exes? Functionally it's the same.
2) There's less clutter and more optimization. Why would it suffer at anything?
Sure, and on the mac side you've got Infinite Mac dot org where you can run System 1 through MacOS 9.0 in the browser. It's crazy to see how fast System 7.5 (which is more or less peak classic mac IMO) boots up versus System 9.0, and just how much attention to detail there was back then.
That's really just because the techniques we use now weren't developed then. I'd bet good money that a port of a modern spell checker would perform beautifully on an old 98 machine.
It's easy to confuse the bad performance of old software with the hardware it's on. But really a lot of the trouble is that the solutions we had to software problems back then are primitive and naive compared to the current state of the art. There's no reason that current software design practices can't massively improve older systems.
Hell, just try installing a modern Linux on an ancient PC, you'll see what I mean. Even considering that the hardware is physically slower, your experience is much, much closer to modern computing than 98 could ever hope to achieve.
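To illustrate the spell-checker point above: the core of a Norvig-style checker is just a hash-set dictionary plus edit-distance-1 candidate generation, which would fit comfortably in a Win98-era machine's RAM. A rough, self-contained sketch (the tiny word list is a placeholder; a real one would load ~100k words, deduplicate candidates, and rank suggestions by frequency):

```cpp
#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

// Generate all strings within edit distance 1 of w (deletions, insertions,
// substitutions; transpositions omitted to keep the sketch short).
std::vector<std::string> edits1(const std::string& w) {
    static const std::string alphabet = "abcdefghijklmnopqrstuvwxyz";
    std::vector<std::string> out;
    for (size_t i = 0; i <= w.size(); ++i) {
        if (i < w.size())
            out.push_back(w.substr(0, i) + w.substr(i + 1));          // deletion
        for (char c : alphabet) {
            out.push_back(w.substr(0, i) + c + w.substr(i));          // insertion
            if (i < w.size())
                out.push_back(w.substr(0, i) + c + w.substr(i + 1));  // substitution
        }
    }
    return out;
}

int main() {
    // Placeholder dictionary; a real ~100k-word list is only a few megabytes.
    std::unordered_set<std::string> dict = {"hello", "help", "held", "world"};
    for (const auto& cand : edits1("helo"))
        if (dict.count(cand))
            std::cout << "suggestion: " << cand << '\n';
}
```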
The memory footprint of workbench was incredibly low. A base alpine image with no support for any graphics is several times that size. What are we doing?
Workbench supported a very limited set of hardware. The Linux kernel supports 1000x as many pieces of hardware, from different CPU configurations all the way to the most random USB/FireWire devices.
Like, software bloat is totally a thing, but comparing Workbench to the Linux Kernel (let alone the entire GNU/Linux environment) is ridiculously naive.
An alpine docker container doesn't have the kernel, so all that hardware support, and the kernel itself are still on top of that ~8MB GUI-less image. Part of it is elf header size vs hulk. Part of it is that no one bothers stripping symbols. But the reason for both of those reasons is simply that memory isn't scarce so we are lazy/efficient.
> An alpine docker container doesn't have the kernel, so all that hardware support
Only if you run it on Linux through paravirtualization, in which case it's using the host's kernel. Potato/potato.
The fact that it supports virtualizing an entire other OS in a safe and privileged manner should just further reinforce why the kernel is larger. But ok, got me.
> 8MB GUI-less image
Sure, and you can see all of the contents of that here:
curl - a library that can handle full bidirectional HTTP communication in Unicode, including via SSL/TLS and arbitrarily manage file streams *or* utilize linux's built-in piping/redirection functionality
ssl - a full suite of cryptographic libraries and keys to allow secure communications and integration into other libraries/code (the aforementioned curl, for instance)
oniguruma - a full regular expression library for use in other programs (language VMs like Ruby, for instance)
musl - a libc runtime and its standard library
zlib - in-built compression functionality utilized by gzip, png and others
Can you point to a base install of workbench being able to do all of that? About the only thing in the alpine base layout that it is directly comparable to is BusyBox+bash.
> Part of it is elf header size vs hulk. Part of it is that no one bothers stripping symbols. But the reason for both of those reasons is simply that memory isn't scarce so we are lazy/efficient.
This is just some old-guy "bah humbug" rant/conspiracy. Your Amiga with Workbench is nowhere near comparable to modern hardware+OSes. It can do some things similarly, if you squint appropriately: at much degraded image fidelity and color quality, insecurely, with primitive multitasking, non-networked, and under heavy RAM and CPU constraints.
I fully acknowledged software bloat is a thing. But we're not comparing some half-assed Electron app to some sleek hand-coded C/C++/Rust desktop app. You're comparing base software built by decently well-educated engineers that does inordinately more than the comparison set, by so much more that it's ridiculous on its face. And then going on a rant about debug symbols and ELF headers (which bring a ton of benefits themselves).
I remember when the Linux kernel came on floppies, and a plain CD-ROM would give me basically the same thing many kids now use on their text terminals with tmux.
Making use of what is cheap (hardware (CPU/RAM) and network bandwidth) while avoiding what is expensive: programmers, or the tech guy helping you with an upgrade or a troubled Windows install.
It's a fair comparison IMO. There are Linux flavours still making my 15 year old hardware faster with every other update. And any standard flavor still works like a charm.
There is no real reason OSes get more shitty with every version
But still, during the time of Windows 95/98 I could hardly play even a 128 kbps MP3 with Winamp on my 133 MHz Pentium with 12 MB of RAM, even after I had gotten rid of lots of bloatware. At the same time I could play it smoothly in Linux with a full-blown desktop environment (Window Maker) running. So it was a bad performer in comparison even back then.
Let's not exaggerate: Winamp .mp3 playback was just fine on any Pentium running Windows 9x - though I concede that even my Pentium 75 had 16 MB RAM and I have never seen a Pentium with less than that.
On the other hand, extracting a CD and compressing its .wav files to .mp3 was a whole day of computing, and sending the files as attachments through SMTP was enough to elicit flowery, vehement objections from my university's sysadmins and my friend's small ISP alike...
Alright, it might be that my memory fades and that it was OGG I was struggling with; nevertheless, music playback worked way better in Linux. The issues I had were modelines/vertical refresh rates and getting the graphics card recognized. As it was my first computer, and as a newbie with no friends to ask, XFree86 felt rather steep to understand.
OGG would make a lot more sense. Even back in the Pentium days I believe there were optimized integer decoders that would handily outperform OGG stuff which only had a floating point decoder. Wikipedia is showing the og Pentium at 0.5 FP ops per clock cycle vs 1.88 integer ops.
Of course by the time you get to the Core architecture you're in the opposite situation where Sandy Bridge is at 16 FP ops and 6.2 integer ops per clock cycle.
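To put some numbers on why integer decoders mattered back then, here's a rough sketch of the real-time budget an MP3 decoder had on the Pentium 133 mentioned upthread (the frame size and sample rate are standard MPEG-1 Layer III values; everything else is back-of-the-envelope):

```cpp
#include <cstdio>

int main() {
    const double sample_rate   = 44100.0;  // Hz, CD-rate stereo stream
    const double frame_samples = 1152.0;   // samples per MPEG-1 Layer III frame
    const double cpu_hz        = 133e6;    // Pentium 133 clock

    const double frame_ms = frame_samples / sample_rate * 1000.0;  // ~26 ms
    const double cycles   = cpu_hz * frame_ms / 1000.0;            // ~3.5M

    std::printf("budget: %.1f ms (~%.1fM cycles) per frame for real-time decode\n",
                frame_ms, cycles / 1e6);
}
```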
Ah the times when the netiquette dictated not to attach large files to emails directed to mailing lists, but to upload them to some ftp server and only paste the link…
The whole industry paid more attention to performance and resource consumption.
Netiquette aside, a bunch of .mp3s as SMTP attachments hogged the 64 kb/s line that was my whole school's single link to the Internet, and they entirely filled the receiving side's mail server storage, resulting in an outage... Both valid bases for criticism of my obnoxious behaviour in 1996!
The first mp3 I ever played was Jun Kazama's theme from Tekken 2 on a Windows 95 box (Pentium 60) using Fraunhofer's own graphical mp3 player. I had to close all other open applications and not move the mouse around too much, lest the playback start to "crackle". This would've been about 1996, just a few scant months after Damaged Cybernetics, the famed emulation group, announced on their home page: "We are investigating the possibilities of using MPEG Layer III compression for music piracy."
My Pentium 75 definitely shipped with less than 16MB. I can't remember if we had 4 or 8, but I think we had 16MB when we junked it.
Either way, I played mp3s in winamp while on IRC and sometimes even running netscape. Took at least an hour to get a mp3 though, and I had to throw them all out when I got decent speakers and could hear how terrible they sounded.
Strange. I used a dual boot configuration back in the day and found the Windows experience a lot more efficient than the Linux desktop then.
It was less stable and secure of course, but Linux drivers were not really optimized and if you wanted a comparable desktop feel you needed something like KDE which needed a lot more memory and felt sluggish compared to Windows.
Also, there was rapid development in MP3 decoders at the time, they went from requiring about 100% of the CPU to less than 10% on about every Pentium system.
> I used a dual boot configuration back in the day and found the Windows experience a lot more efficient than the Linux desktop then.
That was often due to driver support, some hardware not performing as quickly (or sometimes not being as stable) under generic OSS drivers compared to their behaviour with the manufacturer's proprietary binaries (which were quite likely not available for Linux at all). It could be very hit-and-miss, with two otherwise very similar machines performing quite differently due to one controller on the motherboard. Sometimes it was due to the generic driver not knowing for sure that a given device supported faster modes well so erring on the side of caution, a not uncommon example being a drive/controller combination ending up running in PIO mode or an old DMA mode despite supporting something much faster, in which case you could get the performance back with a little “magic” configuration manually telling it to use that better mode.
Generic drivers in Windows often had the same problems, but manufacturers tended to make device-specific drivers readily available (usually in the box) for those OSes.
Other times it came down to differing defaults for things like cache modes (write-through or write back, etc), power-saving options, and so forth, which again could be tweaked with config (though the discoverability of these config options was typically not very high).
It was also the awful amount of indirection of X(Free86/org) which made sure you had to jump through hoops to get anything on the screen. Even with accelerated hardware X doesn't feel as fast as Windows/macOS due to the insane amount of round-trips for perceived 'network transparency' which doesn't work that great and almost nobody uses.
I find it very disappointing some people are still fighting Wayland which, while not perfect, at least tries to get Linux desktops graphics stack 'on par' with macOS versions from 20 years ago...
Most clients use client-side rendering, which eats an awful amount of bandwidth and performs badly compared to VNC/RDP; that is not my idea of 'fine'.
Especially not if it's used to justify opposing a better solution.
For distant remote access with particularly animated or graphically detailed applications, perhaps. Though I'd not argue VNC as being lower bandwidth except where you've got it tuned to massively compress to the point where things that are bad through X are bad in different ways through VNC. RDP usually does better in this respect, as do some less common protocols.
Were there even other browsers back in '98 other than I think Netscape? How else would you have been expected to download, or utilize, the Internet? If a core component is considered bloatware, something is wrong.
It got worse with Windows ME, but yes, I felt that Windows 98 had lots of stuff that could be removed. I was impressed at first glance with Windows Memphis until I realized it was the same as Windows 95, and I started to explore different shells as alternatives to explorer.exe; I believe it was LiteStep with a simple skin that gave me my best performance. But as I had a better experience in Linux (aside from graphics resolution), which I had newly explored, I booted into Windows less and less...
I remember being able to watch bootleg movies on a Windows 2000, while Windows 98 was too slow on the same hardware. I suspect that had nothing to do with bloatware, but rather some internal inefficiencies while dealing with heavy CPU/memory/IO load.
Hah! I remember watching the first Matrix as DivX on a P200MMX with a 14” Compaq CRT. I had to use a Dos movie player (without starting Windows), as in Windows 98 it was way too slow.
I call bullshit on your call. Don't you remember Active Desktop? That was as bloated as anything and crashed all the time. People having an IE crash screen as their wallpaper and being clueless as to how to get rid of it was hilarious.
To be fair, that wasn't 'on' by default, and it wasn't necessarily bloatware, as it's a small ActiveX module/component. It's actually fairly light for how well it works, from a code-base perspective! But it's been akin to malware more than anything since its inception.
Even standard Ubuntu runs very smoothly on old hardware (if it has enough memory and an SSD). I installed Windows 10 on my 15-year-old ThinkPad; it was usable, but quite slow. Switched to Ubuntu and everything felt snappy again.
I was just amazed that my super old laptop could run youtube/games for my 2 year old. It had Windows but was too slow to really use.
Fedora Silverblue was too heavy, but MX Linux made that computer run like my main. No slowdowns ever. Moments like this make me wonder who/what is leading us.
This! Oh, this! And look at the size as well. A fresh install of 11 is over 10gb.
It's something of a Parkinson's law of optimization. The more computing power there is, the more clutter and less optimization will be had until we fill the computing power. When your hardware is a big limiting factor, performance becomes a primary concern.
I think there is also something to be said about the size of the project. The Win98 team was in the hundreds; Windows 11's is in the thousands. I have found it much harder to get the fundamentals right on a larger project than a smaller one. Responsibility is diluted, architecture creates more seams and redundancies, and coordinating people is more complex. On top of that, slap on a product culture that favors form over function and cramming in the largest feature list possible, and you've got yourself a modern OS.
The main downside is that those OSes were also really quick to hack, and the smallest problem bluescreen'd the system and lost all your work.
But you can get a reasonably close experience with Linux or the BSDs and a simple GUI environment like the *box window managers. They're modern and snappy even on things like older generation RasPis.
Windows 2000 was memory protected, and still had very modest hardware requirements. IMO the pinnacle of Windows UX, everything after that was downhill.
Win2k was peak windows. Though, it did need a lot more RAM than 98 did - it was happy only at 128mb, and 256mb was optimal.
Meanwhile, 98 could run most things all day at 64meg.
98's swap and caching agent was definitely not as good as 2k's, though. You absolutely had to reboot at least daily, or the chance of hitting swap would only keep increasing.
All of my computer muscle memory is from Windows 2000. It was surprisingly accessible for its time. Modern macOS still can't use the tab key correctly to tab focus between certain buttons, inputs, and actions.
Windows Server 2003 with Interix was my daily driver for a few years. That was my "peak Windows" moment. It had a fast UI, could run software meant for XP, and I could compile a ton of POSIX-targeted software.
I always thought that this was vaporware, something that was listed in books and white papers but never seen in the wild. Good that it eventually arrived.
It is true, yet multi-language support was nearly nonexistent (you had a special version for CJK), digging through the registry was a common thing, good luck trying to read a Linux partition, and an emulation of a bash shell would be the best you'd get.
All in all I respect the nostalgia, and I see how many people would still be fine with these restrictions. I personally wouldn't want to go back to those days short of being paid a few trillion.
- Full UTF-16 support; l10n was painful on purpose for licensing / differential pricing: Microsoft simply didn't want people in rich countries to grab MX/PH licenses and swap over to English/Japanese/other G7 languages. With the far more aggressive licensing schemes of XP and later that was less of a problem.
- Full IFS support, to allow arbitrary, high-performance filesystem drivers. Win2k was just obsolete by the time those were mature enough to really use.
- A full NT kernel with support for swappable userspaces. The POSIX subsystem was deliberately crippled by MS to fulfil federal requirements without allowing real interoperability, but nothing would've stopped them from doing a WSL1-equivalent BSD/Linux subsystem. (WSL2 would've been impossible simply because hardware virtualization for x86 didn't exist yet.)
None of those features really need the full array of modern bloat, where even hitting the start button can take seconds to refresh all the adverts.
The core OS APIs still allow writing fast and snappy applications, but most apps nowadays are not written this way. Apps are using generic libraries that bring "the gorilla and the jungle" with them, browsers are relying on a JIT to draw their own UI, etc.
Even a lot of the new APIs are quite fast. I feel that slowness creeps into frameworks two ways.
1. Bad abstraction, causing the moral equivalent of N+1 queries in UI code. For example, modify one thing, causing layout, moving something else, causing layout, moving something else, causing layout, etc., etc., until it is all recalculated and re-laid out, and then allowing the paint.
2. Hidden serialization of asynchronous processes, essentially causing tiny pauses throughout the main thread.
I think this is one reason why people are so impressed with IMGUI. They imagine that the UI code must be doing an incredible amount of work to feel so slow, but then they watch a similar IMGUI app build and display the whole UI every frame, 60-120 times per second, with plenty of processing power left over. But if the other frameworks weren't wasting clock time, they could feel plenty fast.
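For anyone who hasn't seen the pattern, here is a toy, self-contained sketch of that immediate-mode shape; every name in it is invented for illustration (real libraries like Dear ImGui add hit-testing, layout, and rasterization), but the key idea is the same: no retained widget tree, the whole UI is re-emitted from application state every frame.

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

struct AppState { int counter = 0; bool quit = false; };

// A "widget" is just a function call that reads/writes state directly.
// This stub only prints; a real implementation would hit-test the mouse
// and return true on click.
bool button(const char* label) {
    std::printf("[ %s ]\n", label);
    return false;
}

// Runs every frame, top to bottom; nothing is cached between frames.
void build_ui(AppState& s) {
    std::printf("counter = %d\n", s.counter);
    if (button("increment")) s.counter++;
    if (button("quit"))      s.quit = true;
}

int main() {
    AppState state;
    for (int frame = 0; frame < 3 && !state.quit; ++frame) {
        std::printf("--- frame %d ---\n", frame);
        build_ui(state);  // rebuild and "draw" everything, every frame
        std::this_thread::sleep_for(std::chrono::milliseconds(16));  // ~60 fps pacing
    }
}
```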
You can get a good feel for what is really slow by using a very slow computer. The original Macintosh feels snappy, but it is doing one process in black and white. It's not hard to reach 30fps. When you take System 7, and put it on a 16MHz 68020 with half a memory bus (Mac LC), and run it at 16-bit color, painting those windows takes a long time. Our trick back then was to jump to black and white when we needed to get things done quickly, then back to higher bit depths when we wanted color or eye candy.
Now we have a situation where we can output full screen 24bit color in a millisecond or less, we are waiting on other things.
You can see this if you emerge (Gentoo) or build from ports (*BSD). You try to compile vim and suddenly you're in the config page for CUPS (printer support), which then wants to know whether you need Unicode support for a language you've never heard of (and if so, which font packages you want to display it with).
>but even Windows XP (which handled a lot of the same basic Internet browsing tasks as I still use now!) is almost comically fast on modern hardware.
Why is it surprising that an old OS built for late 90's hardware runs "comically fast" on 2020's hardware?
Just look at XP system requirements:
- Pentium 233-megahertz (MHz) processor or faster (300 MHz is recommended)
- At least 64 megabytes (MB) of RAM (128 MB is recommended)
- At least 1.5 gigabytes (GB) of available space on the hard disk.
How would anyone expect that not to be much faster on a modern quad+ core, 4GHz system with 8GB+ RAM?
Basic desktop interaction is the same as 22 years ago, yet modern desktop environments are orders of magnitude slower at performing those same basic tasks.
What is striking is the fact that the interactions are the same between old and new shells/applications/syscalls/whatever, but modern software on modern hardware feels so much slower. The only reason for this is that modern software is doing a lot more stuff in the nooks and crannies of those interactions that it didn't used to do. Perhaps some of that stuff is useful, but some of it has to be code that is just less efficient or was just shoveled in because the CPU was faster and the RAM was more spacious.
Latency of what? Starting the OS is definitely faster. Launching software is also orders of magnitude faster. HDDs don't require hours of defragmentation to avoid slowing to a halt. I was only a teenager in the Windows 98 era, but I remember I was just always waiting for the computer. This is not something I have experienced in a long time now.
In an application like Teams, the delay between striking a key on the keyboard and the corresponding glyph appearing on screen is comically bad - two orders of magnitude higher than performing the equivalent action on a computer from the early 1980s.
The surprise is that a modern OS runs slower in comparison without much perceptible improvement, inviting the rose-tinted conclusion that the purported improvements since "then" must be nothing more than an extreme amount of software bloat.
I think the big question everyone has in mind when they make comments about old OSes running fast on modern hardware is, "what is the juice that we are getting for the squeeze?"
It's not just that Win98 runs faster on modern hardware. It's that it does the same--or at least similar, as far as the user is concerned--tasks as Win11, but faster.
That leaves a gaping hole of "what was so important to add to the OS that it warrants slower interactions at every turn".
I opened the settings app on my work computer running Win10 the other day and it took a good 5 seconds before it showed up and painted. On Win-anything-less-than-or-equal-to-7, the settings window opened immediately and navigating to different settings was also immediate.
So what is Win10/11 doing?
And Linux isn't immune to the bloat. It is still smaller/faster than Windows, but it has managed to stay within a constant factor of Windows all this time. Modern Linux is significantly fatter than 20, even 10 year old Windows.
If the answer were "corpos gonna bloat", what's Linux's excuse? Operating systems across the board have gotten bigger and slower over time, for little visible benefit, and nobody has a coherent answer for why.
Some of the causes for bloat apply to both commercial and floss projects:
- heavier use of serialized I/O means much lower risk of pointer corruption or related threats, but adds overhead
- so do simple safety/sanity/security checks - each by itself is harmless, but over 10-20 years they add up
- more abstraction layers and more bookkeeping to handle much more complicated hardware setups, or just the passage of time (a Y2K-safe date field takes two more bytes, a 64-bit time_t takes 4 more bytes than a 32-bit one, and so on; see the size sketch after this list)
- userspace devs are as lazy as they can be. Gnome these days is mostly Javascript to make the constant pointless rewrites faster, something that would've been unheard of 20 years ago. (The *box WMs meanwhile are still as fast as before.)
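A tiny illustration of the timestamp bullet above; the struct and field names are invented for the sketch, and the sizes assume a typical 64-bit platform with natural alignment, where the 4 extra payload bytes of a 64-bit timestamp can cost more than 4 bytes of struct size once padding is counted.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical record layouts, just to show how small field growth compounds.
struct RecordOld {
    uint32_t id;
    int32_t  mtime;   // 32-bit time_t-style timestamp
    uint16_t flags;
};                    // typically 12 bytes after padding

struct RecordNew {
    uint32_t id;
    int64_t  mtime;   // 64-bit, Y2038-safe timestamp
    uint16_t flags;
};                    // typically 24 bytes: alignment padding doubles the size

int main() {
    std::printf("old: %zu bytes, new: %zu bytes\n",
                sizeof(RecordOld), sizeof(RecordNew));
}
```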
Add to that that Windows XP had no antivirus built in; heck, it didn't even ship with a bloody firewall to block incoming connections, so you got immediately pwned the moment you plugged into the internet or inserted a USB drive from school. Then you installed a third-party firewall and anti-virus solution and performance went to shit.
Yeah, it felt faster than today's equivalent at similar task, but at what cost.
If I strip out the catalyst, locks, interior and seats from my car, it will also go faster and give me better fuel economy, but that doesn't really make it usable for me now.
>But that's a plus!
>People were educated about cyber-risks in a real-life way
No it wasn't, no they weren't. That's like saying that growing up in a rough neighborhood where you can get shot, or getting drafted into war, is good for you because it builds character and teaches you to protect yourself.
A bullshit OS full of security vulnerabilities is no fun or useful for any user or business. Yeah, it indirectly helped me in my childhood better understand security, threats and troubleshooting, but overall these faults were a net negative for humanity.
>And then they installed Tiny Personal Firewall
No they didn't. I did tech support at the time. Most XP PCs had whatever bullshit solution from Zone Alarm, Kaspersky, McAfee, etc. was cheaper or more advertised at the time. If they had anything to begin with, that is; otherwise they were full of Bonzi Buddy and strip-girl animated spyware that came with your DivX player installation or via the usual 'Eminem - Lose Yourself.mp3.exe' Limewire download.
> * realtime collaborative spreadsheet editing
> * live backing up all my files
These are broadband adoption and bandwidth issues and don't explain why the OS has grown. We had real-time networked apps in 1998. We had networked file-sharing and backups.
> * 28 inch high definition monitor
> * watching movies in HD
> * playing almost lifelike games
> * Running large complex scripts on the same machine I can easily carry in one hand
> * Running an IDE like JetBrains
These are hardware scaling issues and don't explain why the OS has grown. And we had IDEs in 1998.
> * browsing an extremely hostile internet
Better sandboxing of the browser explains (though only partially) why the browser has grown, but doesn't explain why the OS has. If anything, the browser grew (in functionality and thus size) because the OS didn't (in functionality), so why did it (in size)?
>Better sandboxing of the browser explains (though only partially) why the browser has grown, but doesn't explain why the OS has
Because the same kind of sandboxing happens at the OS level now as well. In the Windows 9x days apps could access the hardware directly, but not anymore, for safety and security. Everything is now layers upon layers of sandboxing that must talk to the HW via APIs.
>because the OS didn't (in functionality)
Didn't it though? Or are we being needlessly snarky?
I don't remember Win 98 having online automatic OS updates and automatic driver updates, a firewall, heuristic anti-malware, DirectX 12 support (check the feature list compared to DX 5), support for multi-core 64-bit CPUs, PAE, NVMe SSDs, WiFi, multiple high-DPI screens, virtualization (Hyper-V), USB4 & Thunderbolt, accelerated fancy transparency in the GUI, etc.
Yes, some of those features are hardware related, but having the OS support all those new HW features and expose them reliably and securely to userspace adds bloat in the form of binary code that makes all of that work seamlessly.
All of those features and support for all that newer faster hardware, are a given now but were unheard of in consumer OS back then when PCs had the complexity of a toaster by comparison.
So let's stick to the facts, not emotionally driven rose tinted glasses.
> so why did it (in size)?
Because as HW became more powerful and cheaper, it made it less profitable for SW companies to hyper-optimize everything like they were doing in the Win9X days with assembly and stuff. It would be a needless expense that brings no ROI.
Yeah Win11 is needlessly overbloated compared to WinXP, but people said the same about WinXP when it launched, compared to Win98, and they said the same about Win95 when it launched, compared to DOS. Where does the buck stop? At the Altair 8800? Pretty sure that had no bloat.
Well, I watched HD video on my 1999 PC, and it worked fine. I had to resize it to fit 1600x1200; I don't know what all was going on behind the scenes to play a 1080p-encoded video in that situation. But it worked great. You don't need modern hardware for HD monitors at all.
> * Running an IDE like JetBrains
We had Visual Studio, which also worked great, and with WinForms was undoubtedly one of the best GUI app development tools of all time.
> * browsing an extremely hostile internet
I did this well into the 2000s, although I was an early Firefox evangelist and mostly used that when it came out, plus eventually NoScript as JS was continually used to enshittify the web.
As I watch many people using their UNIX-like computers as if time has stood still in 1980s terminals, I would say that 2023 Emacs would do just fine.
Really, dropping into some coffee shop coding sessions is hardly any different from an IBM X Windows terminal into a DG/UX session in the university computer lab; now they are using a laptop and something else instead of twm or an amber-based text terminal.
My point is that the software we run now is significantly more resource intensive, even if its name hasn't changed. Emacs is slow on 2023 hardware. It would likely be unusable on 90s hardware.
I tried XP on a PII 333 MHz laptop with 96 MB of RAM. This laptop came with Windows 95 or 98. It still works by the way.
It ran, but it was quite slow. Windows 2000 managed better, Windows 98 better still; but the internet on Windows 98 was maybe not the best thing to do. Linux did even better.
I can't see 233 MHz and 64 MB being reasonable minima, more like very bare minima as in "it boots".
(of course, this does not contradict anything essential from your comment)
> How would anyone expect that not to be much faster on a modern quad+ core, 4GHz system with 8GB+ RAM?
It's not a given.
For example, Windows XP may boot faster than 98 or 2K on the same hardware, because XP parallelizes and shortens some hardware initialization steps. (e.g. anything related to the network).
Win98 with file sharing spends almost a minute during boot (even from an SSD) just squeaking netbios frames.
It has been a long time since you last sniffed 9x Windows, right? It will speak NBFs even if you _don't have TCP/IP whatsoever_, NBTs otherwise, but they will delay the boot process by about a minute or so.
You're right, it's not an apples-to-apples comparison. The comparison is unfair to Windows 11 running on bare metal. You'd have to emulate 10 browser stacks recursively and then put Windows 98 on that to get a reasonable baseline.
I tried out the bootchess implementation on the website, and while it is very impressive to fit a chess-playing program in under 512 bytes, fair warning that it is not a complete implementation of chess. I tried castling and it didn't work, then found this comment explaining more:
https://www.pouet.net/prod.php?which=64962#c715279
Where is that? I remember that being an extra on the CD-ROM, but I don't see it mounted and can't remember the name of the file to search for it. If the CD is there somewhere, does that mean the Weezer music video should be there too?
And then I got a BSOD (Invalid VxD dynamic link call from VMM(06)+1D72 to device "C001" service E74) when I turned on the Underwater screen saver.
https://copy.sh/v86/
My highlights:
- First version of Windows (1.01)
- SerenityOS <3
- and even ReactOS