Ah, memories. Used to use BeOS as my primary OS for a year or two. I think it's the only OS I ever found to be truly intuitive and pleasant to work with.
That said, I don't think the world would be in a better place had Apple chosen Be over NeXT. The elephant in the room is security: NeXTSTEP, being Unix-based, has some amount of security baked in from the ground up. BeOS didn't; it was more akin to Windows 95 or classic Mac OS on that front. Consequently, I doubt it could have made it far into the 21st century. It would have died unceremoniously in a steaming pile of AYBABTU. Taking Apple with it, presumably.
I used to work for Be in Menlo Park in a previous life, and I can confirm that the code base quality would have made for a very bad outcome for Apple. Security was the least of the numerous serious issues. That said, BeOS somewhat still exists in spirit, as a lot of folks from Be went to build/contribute to Android.
> a lot of folks from Be went to build/contribute to Android.
Does that include the quality and security perspective as well? ;-) j/k
Having never crossed paths with a former Be employee before, __thank you so much__ for your contribution. BeOS was so instrumental to my perspective on computing and operating systems (and potentially the conception of my disdain for what Microsoft did to the world of operating systems around the turn of the century).
From a user perspective, BeOS was nearly perfect. Great UI and utilities, POSIX command line, so fast and responsive. The "install to Windows" option was amazing for trying things out. BeFS was wonderful (it's nice to see Mr. Giampaolo's work continue in macOS).
I too used to work at Be (hi!), as well as developing applications for BeOS. I also worked at Apple on various releases of OS X. NextStep was far ahead of BeOS on multiple fronts. BeOS was a lot of fun to work on, but it only scratched the surface of what was needed for a truly commercial general-purpose OS. If Apple had acquired Be instead of Next, who knows what the world would be like today. Apple ended up with a large number of former Be employees as well (some directly and others from Eazel).
I can never let a thread about BeOS go by without adding my two cents, because I also worked at Be in Menlo Park, back in the day. (I went down with the ship and got laid off as they went out of business.)
I was sore about it at the time, but I agree that Apple made the right decision by choosing NextStep over BeOS. If for no other reason, because that's what brought Jobs back. It's hard to imagine Apple making their stunning comeback without him.
It was not in the box. Back then, it was still quite difficult to get hold of an actual R3 box here in Europe. There was one official reseller here in the Netherlands and I actually bought their official demo machine: the famous first dual-processor Abit BP6 with 2x 400MHz Celeron processors. When picking it up in their office I spotted the poster and asked if I could have it. Still got a T-shirt and a hat too ;-).
Which is ironic, given that I am yet to see a GNU/Linux based hardware setup that matches the experience, which is why I went back to macOS/Windows, which offer a much closer multimedia experience.
I'm curious what sort of issues you have in mind. I was never very familiar with BeOS, but from what I understood, the issue was more that its responsiveness came from very heavy use of multi-threading, which also made it very hard to write robust apps: in effect, all app code had to be thread safe, and app devs found that too hard to handle.
Can I assume that the quality issues were somewhat related to that? BeOS devs found it no easier to write thread safe code in C++ than app devs did?
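(For context on that threading model: every BWindow ran its own message-loop thread, so any other thread that wanted to touch a window's views had to lock the window first. Below is a minimal sketch written from memory of the Be/Haiku Interface Kit, untested, so take the details as illustrative rather than authoritative.)

```cpp
// Hedged sketch from memory of the Be/Haiku Interface Kit (untested).
// Every BWindow runs its own message-loop thread, so a worker thread must
// Lock() the window before touching any of its views.
#include <Application.h>
#include <Window.h>
#include <StringView.h>
#include <OS.h>

class DemoWindow : public BWindow {
public:
	DemoWindow()
		: BWindow(BRect(100, 100, 400, 180), "Demo", B_TITLED_WINDOW, 0)
	{
		AddChild(new BStringView(BRect(10, 10, 290, 30), "status",
		                         "worker running..."));
	}
	virtual bool QuitRequested()
	{
		be_app->PostMessage(B_QUIT_REQUESTED);  // quit the app with the window
		return true;
	}
};

// Runs in its own thread; the Lock()/Unlock() pair is what keeps this safe.
static int32 worker(void *data)
{
	BWindow *win = static_cast<BWindow *>(data);
	snooze(1000000);                            // pretend to work for 1 second
	if (win->Lock()) {
		BStringView *label =
			dynamic_cast<BStringView *>(win->FindView("status"));
		if (label != NULL)
			label->SetText("worker finished");
		win->Unlock();
	}
	return 0;
}

int main()
{
	BApplication app("application/x-vnd.example-locking");
	BWindow *win = new DemoWindow();
	win->Show();                                // starts the window's thread

	thread_id tid = spawn_thread(worker, "worker", B_NORMAL_PRIORITY, win);
	resume_thread(tid);

	app.Run();
	return 0;
}
```

Forgetting that Lock()/Unlock() pair around cross-thread view access was the classic way to corrupt state or deadlock a Be app, which is a big part of why "everything must be thread safe" was such a burden on app developers.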
True, but a technology is only a means to an end, not an end itself. What sells is product.
You may have the finest tech on the planet—and that means precisely squat. What counts is putting bums on seats. Your seats. And keeping them there. Lumps of tech are just a vehicle for that; to be used, abused, chewed up, and/or discarded on the road(s) to that end.
Apple could have done better; they certainly did plenty worse (Copland, Taligent, the first Mac OS).
As it turned out, NeXTSTEP proved it was indeed “good enough” to fit a pressing need at the time; and the rest was just hammering till it looked lickable enough for consumers to bite. All that was needed was a salesman to shift it—and Steve 2.0 proved to be one of the greatest salesmen in modern marketing history.
That’s what made the difference between selling a tech to a million dyed-in-the-wirewool nerds, and selling tech to a billion everyday consumers. And then up-selling all of those customers to completely new worlds of products and services invented just for the purpose.
..
Want to create a whole new device? Hire Steve Wozniak.
Want to create a whole new world? Oh, but that is the real trick.
And Steve Jobs gave us the masterclass.
..
Had Steve started Be and Jean-Louis built NeXT, we would still be in the exact same situation today, and the only difference would be chunks of BeOS as the iPhone’s bones instead. Funny old world, eh? :)
I'm not sure I've ever encountered someone so invested in the "great man" theory of history.
Jobs was obviously talented, but assuming he would have had the same level of success no matter where he went discounts a lot of luck in how everything lined up, and who was available to help bring to market all the things Jobs is famous for. There's no guarantee the hundreds or thousands of people that were also essential to Apple's major successes would have been around Jobs had he stayed at NeXT. Those people deserve respect and recognition too.
You're forgetting that his family became the largest shareholder of Disney, and not because Steve got Apple. He was VERY successful, to the point that he gave up taking anything but a private jet. That is worth billions, of course, but that is not success. What is?
And unlike v1, v2 seems better on a human level as well. We do not need a saint; he still parked in spaces for the handicapped only, I guess. But let us admit, it is not just one for all. But all for one.
ISTR a tale of Legal keeping a large slush fund from which to pay off all the ex-Apple-employees that Steve 2.0 would straight tell to their face to fuck off. Just because that is what worked best for him†. :)
“But let us admit, it is not just one for all. But all for one.”
Damn straight. Epically focused leadership.
--
(† For all others who aspire to build their own businesses, there is HR procedure and askamanager.org—and do not for the life of you ever bypass either!)
>Epically focused leadership.
Just to support that, I remember hearing a story told by Larry Ellison (they were apparently neighbours for a while), where he would pop over to see Steve and would be subjected to the 100th viewing of Toy Story, with Jobs obsessively pointing out every tiny new improvement they'd made in the story or graphics.
“Those people deserve respect and recognition too.”
ORLY? Name them.
--
Not “great man”. Great vision.
Geeks tend massively to overrate the importance of technical aptitude, which is what they’re good at, and underrate everything else—business experience, sales skills, market savvy, and other soft skills—which they’re not.
Contrast someone like Jobs, who understood the technical side well enough to be able to surround himself with high-quality technical people and communicate effectively with them, but make no mistake: they were there to deliver his vision, not their own.
Tech-exclusive geeks are a useful resource, but they have to be kept on a zero-length leash lest they start thinking that they should be the ones in charge, since they know more about tech than anyone else. The moment they’re allowed to get away with it, you end up with the tail-wagging-the-dog internecine malfunction that plagued Sculley’s Apple in the 90s and has to some extent resurfaced under Cook.
Lots of things happened under Jobs 2.0. That was NEVER one of them.
..
Case in point: just take the endless gushing geek love for Cook-Apple’s Swift language. And then look at how little the iOS platform itself has moved forward over the 10 years it’s taken to [partly] replace ObjC with the only incrementally improved Swift. When NeXT created what is now AppKit, it was 20 years ahead of its time. Now it’s a good ten behind, and massively devalued to boot by the rotten impedance mismatch between ObjC/Cocoa’s Smalltalk-inspired model and Swift’s C++-like semantics.
Had Jobs not passed, I seriously doubt Lattner’s pet project would ever have advanced to the point of daylight. Steve would’ve looked at it and asked: How can it add to Apple’s existing investments? And then told Lattner to chuck it, and create an “Objective-C 3.0”; that is, the smallest delta between what they already had (ObjC 2.0) and the modern, safe, easy-to-use (type-inferred, no-nonsense) language they so pressingly needed.
..
Look, I don’t doubt eventually Apple will migrate all but the large legacy productivity apps like Office and CC away from AppKit and ObjC and onto Swift and SwiftUI. But whose interest does that really serve? The ten million geeks who get paid for writing and rewriting all that code, and have huge fun squandering millions of development-hours doing so? Or the billion users, who for years see minimal progress or improvement in their iOS app experience?
Not to put too fine a point on it: if Google Android is failing to capitalize on iPhone’s Swift-induced stall-out by charging ahead in that time, it’s only because it has the same geek-serving internal dysfunction undermining its own ability to innovate and advance the USER product experience.
--
TL;DR: I’ve launched a tech startup, [mis]run it, and cratered it. And that was with a genuinely unique, groundbreaking, and already working tech with the product potential to revolutionize a major chunk of a trillion-dollar global industry, saving and generating customers billions of dollars a year.
It’s an experience that has given me a whole new appreciation for what another nobody person starting out of his garage, and with his own false starts and failures, was ultimately able to build.
And I would trade 20 years of programming prowess for just one day of salesmanship from Steve Jobs’ left toe, and know I’d got the best deal by far. Like I say, this is not about a person. It is about having the larger vision and having the skills to deliver it.
Jobs was far more of a "tech guy" than either Sculley or Cook. He understood the technology very well, even if he wasn't writing code.
I would also say, Jobs had a far, far higher regard for technical talent than you do. He was absolutely obsessed with finding the absolute best engineering and technical people to work for him so he could deliver his vision. He recognized the value of Woz's talents more than Woz himself. He gathered the original Mac team. If he had, say, a random group of Microsoft or IBM developers, the Mac never would have happened. Same with Next, many of whom were still around to deliver iOS and the iPhone.
Your take is like a professional sports manager saying having good athletes isn't important, the quality of the manager's managing is the only thing that matters.
“Your take is like a professional sports manager saying having good athletes isn't important, the quality of the manager's managing is the only thing that matters.”
Postscript: You misread me. I understand where Jobs was coming from better than you think. But maybe I’m not explaining myself well.
..
When my old man retired, he was an executive manager for a national power company, overseeing the distribution network. Senior leadership. But he started out as a junior line engineer freshly qualified from EE school, and over the following three decades worked his way up from that.
(I still remember those early Christmas callouts: all the lights’d go out; and off into the night he would go, like Batman.:)
And as he later always said to engineers under him, his job was to know enough engineering to manage them effectively, and their job was to be the experts at all the details and to always keep him right. And his engineers loved him for it. Not least ’cos that was a job where mistakes don’t just upset business and shut down chunks of the country, they cause closed-coffin funerals and legal inquests too.
--
i.e. My old man was a bloody great manager because he was a damn good engineer to begin with. And while he could’ve been a happy engineer doing happy engineering things all his life he was determined to be far more, and worked his arse off to achieve it too.
And that’s the kind of geek Steve Jobs was. Someone who could’ve easily lived within comfortable geeky limitations, but utterly refused to do so.
“Jobs was far more of a "tech guy" than either Sculley or Cook.”
Very true. “Renaissance Man” is such a cliche, but Steve Jobs really was one. Having those tech skills and interests under his belt is what made him such a fabulous tech leader and tech salesman; without that mix he’d have just been one more Swiss Tony bullshit artist in an ocean of bums. (Like many here I’ve worked with that sort, and the old joke about the salesman, the developer, and the bear is frighteningly on the nose.)
But whereas someone like Woz loved and built tech for its own sake, and was perfectly happy doing that and nothing else all his life, Jobs always saw tech as just the means to his own ends: which wasn’t even inventing revolutionary new products so much as inventing revolutionary new markets to sell those products into. The idea that personal computers should be Consumer Devices that “Just Work”; that was absolutely Jobs.
And yeah, Jobs always used the very best tech talent he could find, because the man’s own standards started far above the level that most geeks declare “utterly impossible; can’t be done”, and he had ZERO tolerance for that. And of course, with the very best tools in hand, he wrangled that “impossible” right out of them; and the rest is history.
Woz made tech. Jobs made markets.
As for Sculley, he made a hash. And while Cook may be raking in cash right now, he’s really made a hash of it too: for he’s not made a single new market† in a decade, while Apple’s rivals—Amazon and Google—are stealing the long-term lead that Jobs’s pre-Cook Apple had worked so hard to build up.
--
(† And no, things like earpods and TV programming do not count, because they’re only add-ons, not standalone products, and so can only sell as well as the iPhone sells. And the moment iPhone sales drop off a cliff, Cook’s whole undiversified house of cards collapses, and they might as well shut up shop and give the money back to the shareholders.)
I hear you, I do, but here's another perspective: Jobs without Wozniak wound up being California's third-best Mercedes salesman.
And neither of them would've mattered a jot if they were born in the Democratic Republic of the Congo, or if they were medieval peasants, or if Jobs hadn't been adopted, or or or ...
Luck is enormously influential. There are thousands of Jobsalikes per Jobs. Necessity isn't sufficiency.
I think Steve Jobs The Marketing and Sales Genius is an incorrect myth.
Jobs was an outstanding product manager who sweated all the details for his products. And in contrast to Tim Cook, Jobs was a passionate user of actual desktop and laptop computers. He sweated the details of the iPhone too, but his daily driver was a mac, not an iPad. Cook is less into the product aspect, and it really really shows. Cook is a numbers and logistics guy, but not really into the product.
That's a thing I think Apple has fixed recently with some reshuffling and putting a product person (Jeff Williams) in the COO role. The COO role is also a signal that he'll be the next CEO when Tim Cook retires.
To be clear, I don't disagree that Jobs was a great marketer. But that stemmed from his own personal involvement with the product design of the mac--and later the iOS devices--rather than some weirdly prodigious knack for marketing.
NeXTSTEP appears to have first been incorporated thoroughly into the OS X codebase. Browse through the Foundation library for the Mac - https://developer.apple.com/documentation/foundation/ . Everything that starts with NS was part of NextStep.
My understanding was always that NeXTSTEP served as the foundation of OS X, and while it certainly got a new desktop environment and compatibility with MacOS's legacy Carbon APIs, it was essentially still NeXTSTEP under the hood.
Original NeXT classes were prefixed NX. Then NeXT worked with Sun to make a portable version of the GUI that could run on top of other OSes -- primarily targeting Solaris, of course, but also Windows NT.
That was called OpenStep, and it is the source of classes with the prefix NS -- standing for NeXT/Sun.
But a bunch of the methods we have for securing, say, mobile phones, grew out of user accounts.
Personally I don't know Android innards deeply, but when I was trying to backup and restore a rooted phone I did notice that every app's files have a different owner uid/gid and the apps typically won't launch without that set up correctly. So it would seem they implemented per-app separation in this instance by having a uid per app.
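You can actually see that layout with nothing more than stat() from a root shell. A hedged little POSIX sketch (the /data/data path is an assumption about a typical rooted Android layout, and reading it requires root):

```cpp
// Hedged illustration; /data/data is an assumption about a typical rooted
// Android layout. Prints the owning uid/gid of each app's data directory,
// showing the one-uid-per-app separation described above.
#include <dirent.h>
#include <sys/stat.h>
#include <cstdio>
#include <string>

int main()
{
	const char *base = "/data/data";            // per-app data root (assumed)
	DIR *dir = opendir(base);
	if (dir == nullptr) { perror("opendir"); return 1; }

	struct dirent *entry;
	while ((entry = readdir(dir)) != nullptr) {
		if (entry->d_name[0] == '.')
			continue;                           // skip . and ..
		std::string path = std::string(base) + "/" + entry->d_name;
		struct stat st;
		if (stat(path.c_str(), &st) == 0)
			std::printf("uid=%u gid=%u  %s\n",
			            (unsigned)st.st_uid, (unsigned)st.st_gid,
			            entry->d_name);
	}
	closedir(dir);
	return 0;
}
```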
Imagine a world where Google had chosen to build on a kernel that had spent many decades with no filesystem permissions at all. Perhaps they'd have to pay the same app compatibility costs that Microsoft did going from 9x to NT kernel, or changing the default filesystem to ACL'd-down NTFS.
Then you'd maybe get something like iOS, where the POSIX uid practically does not matter at all, and the strong security and separation is provided by other mechanisms like entitlements...
Someone else pointed out that BeOS allegedly had "quality and security" problems in general (I myself have no idea), so that may indeed have led to problems down the line, whereas BSD was pretty solid. But I agree with the OP and don't think POSIX security in particular is much of a factor today.
Yeah. Funny enough, if Apple had skipped OS X and gone directly to iOS, BeOS would have been a superior foundation. No uselessly mismatched security model or crusty legacy API baggage to clog up the new revolution in single-user always-online low-powered mobile devices.
Of course, that was in back in the days when an entire platform from hardware to userland could be exclusively optimized to utterly and comprehensively smash it in just one very specific and precisely targeted market. Which is, of course, exactly what the iPhone was.
Just as the first Apple Macintosh a decade earlier eschewed not just multi-user and multi-process but even a kernel, with every single bit and cycle of its being exclusively dedicated to delivering a revolutionary consumer UI experience instead!
In comparison, NeXTSTEP, which ultimately became iOS, is just one great huge glorious bodge. “Worse is Better” indeed!
..
Honestly, poor Be was just really unlucky in its timing: a few years too late to usurp SGI; a few too early to take the vast online rich-content-streaming world all for its own. Just imagine… a BeOS-based smartphone hitting the global market in 2000, complete with live streaming AV media and conferencing from launch! And oh, how the Mac OS and Windows neckbeards would’ve screamed at that! :)
On a similar note, I've often wondered what Commodore's OS would have turned into. Not out of some misplaced nostalgia, just curiosity about the Could Have Been.
My guess is that by now in 2020, it would at some point have had an OS X moment where Commodore would have had to chuck it out, since both Apple and Microsoft have effectively done exactly that since then. Still, I'd love to peek at an Amiga OS 9 descended from continual use.
I think AmigaOS 3 could be a nice kernel as it is. To make it more Unix-y, memory protection could be introduced, but only for new userland processes using more traditional syscalls.
It's a bit like what DragonFlyBSD is slowly converging toward.
Amiga OS 9 would have looked very different from the Amiga OS that we know (I am talking from a developer's point of view, not about the GUI).
Since inter-process communication in Amiga OS was based on message passing with memory-sharing, it was impossible to add MMU-based memory protection later. As far as I know, even Amiga OS 4 (which runs on PowerPC platforms) is not able to provide full memory protection.
There was also only minimal support for resource tracking (although it was originally planned for the user interface). If a process crashed, its windows etc. would stay open. And nobody prevented a process from passing pointers to allocated system resources (e.g. a window) to other processes.
The API was incomplete and tied to the hardware, especially for everything concerning graphics. This encouraged programmers to directly access the hardware and the internal data structures of the OS. This situation was greatly improved in Amiga OS 3, which of course came far too late; it was basically two or three years too late. As far as I know, Apple provided much cleaner APIs, which greatly simplified the later evolution of their OS without breaking all existing programs.
Finally, the entire OS was designed for single-core CPUs. At several places in the OS, it is assumed that only one process can run at a time. This doesn't sound like a big issue (could be fixed, right?) but so far nobody has managed to port Amiga OS to multi-core CPUs (Amiga OS4 runs on multi-core CPUs, but it can only use one core).
I have been the owner of an Amiga 500 and Amiga 1200, but to be brutally honest, I see Amiga as a one-hit wonder. After the initial design in the mid-1980s, development of the OS and the hardware basically stopped.
> Since inter-process communication in Amiga OS was based on message passing with memory-sharing, it was impossible to add MMU-based memory protection later.
Why can't you do shared memory message passing with MMU protection? There is no reason an application in a modern memory protected OS can't voluntarily share pages when the use case is appropriate. This happens today. You can mmap the same pages, you can use posix shm, X has the shm extension...
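As a hedged illustration of that point, here is roughly what "shared pages with MMU protection" looks like with plain POSIX shm + mmap (the segment name and size are arbitrary, and the reader lives in the same process only to keep the sketch short; on older glibc you'd link with -lrt):

```cpp
// Rough sketch of MMU-protected shared memory with POSIX shm + mmap.
// The "reader" is in the same process only for brevity; normally it would
// be another process opening the same name.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main()
{
	const char *name = "/demo_msg";             // arbitrary segment name
	const size_t size = 4096;

	int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
	if (fd < 0 || ftruncate(fd, size) < 0) { perror("shm"); return 1; }

	// Writer's mapping: read-write.
	char *msg = static_cast<char *>(
		mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
	std::strcpy(msg, "hello from the writer");

	// Reader's mapping of the same pages: read-only. A stray write through
	// this pointer faults instead of silently corrupting the sender.
	const char *view = static_cast<const char *>(
		mmap(nullptr, size, PROT_READ, MAP_SHARED, fd, 0));
	std::printf("reader sees: %s\n", view);

	munmap(msg, size);
	munmap(const_cast<char *>(view), size);
	close(fd);
	shm_unlink(name);
	return 0;
}
```

The point is that the receiver's mapping can be read-only, so a stray write faults instead of silently corrupting the sender, which is exactly what AmigaOS-style pass-a-raw-pointer messaging couldn't give you.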
But the predecessor to containers were features like having daemons chroot into somewhere else and drop their uid to something that can't do much. That very much grew out of the Unix solutions. If Unix daemons were written for decades assuming all processes have equal privilege maybe we wouldn't see that.
“Security” is a bit of a misnomer in this context: I think what you actually meant was “multi-user architecture” which, as remarked elsewhere, undergirds the whole notion of keeping processes from promiscuously sharing any and all resources.
The best counterexample to their point is iOS, though, where POSIX permissions don't play much of a role in securing the system and separating applications.
I do like that you have to “sudo” a program to allow it to access certain files. Even if I am the only user, it stops malicious programs from modifying certain files without me noticing.
Posting links to XKCD like this is generally considered to be a low quality post, hence the downvotes. I’m not one of the downvoters, but thought I’d share the reason as nobody else did.
Edit: gotta love HN! I try to be helpful to someone else that was downvoted to heck with an explanation of why that was the case (based on past replies I’ve seen), and now my post is the one with a negative score. Cheers y’all!
Under the hood, though, there are multiple accounts which different applications use. The user might only log in with one, but applications are isolated from each other and from the system because of it.
If more than 50% of personal computers have 1 or 0 users, then the median would be 1, assuming 0 users is less common than 1, regardless of how many users the remaining computers had.
> Used to use BeOS as my primary OS for a year or two. I think it's the only OS I ever found to be truly intuitive and pleasant to work with.
I love everything I've read about BeOS, but to be honest I must mention I couldn't understand how to use Haiku (I've never used the original BeOS) once I tried it: it didn't feel intuitive at all. And I'm not really stupid; I've been using different flavors of Linux as a primary OS for over a decade.
> That said, I don't think the world would be in a better place had Apple chosen Be over NeXT. The elephant in the room is security: NeXTSTEP, being Unix-based, has some amount of security baked in from the ground up. BeOS didn't; it was more akin to Windows 95 or classic Mac OS on that front.
Sometimes I miss the days of Windows 95 so much. I wish desktop OSes could be simpler, i.e. without multi-user and file access rights. When it's my own personal computer, all I want of it from the security perspective is to prevent others from unlocking it or recovering data from it, and to prevent any network communication except what I authorized. Sadly Linux still doesn't even have a decent implementation of the latter (Mac has LittleSnitch).
Windows 9x did pretty well for me - I've never caught a virus, never corrupted a system file and it was easy to fix for others who did.
Security, networking, multi-user, i18n, print (don't even begin to underestimate this, or Quartz and Display PostScript): BeOS was an RTOS with a neat UI. It was still fun, but there was a gigantic pile of work before it could do what System 7 did, let alone what NeXT did.
Additionally, NeXTStep had been in use in production on investment bank trading floors, in scientific institutions, and in military/intelligence agencies. It wasn't widely used, but it was used.
So while it might not have been quite ready for the median System 7 user's expectations, it was pretty solid.
Maybe so. But I have Mac OS 1.0 running on my MacBook, and it is so slow and really not that usable. Unlike Mac OS 9, it is not that smooth. Luckily he found the iPod ... even the colour one is very slow.
Also, the familial relation of MacOS and Linux made it possible to share code fairly seamlessly between both (provided we're not talking about hardware integration).
In a world where there were three separate universes, Windows, BeOS, and Linux, it's possible Linux would've become more isolated.
BeOS had a regular Unix-like (even POSIX, IIRC) dev environment.
I was able to do most of the CS coursework projects normally done on my university's Sun workstations on BeOS instead. Most of these courses were data structures, algorithms, compilers, etc. projects in C, and not things that required platform-specific APIs.
But arguably, BeOS' overall model (a single-user desktop OS built on top of, but hiding, modern OS underpinnings like memory protection and preemptive multitasking) is far more similar to what eventually became MacOSX than to Linux. Which isn't so surprising, since it was built by ex-Apple folks. Remember that consumer OSes before this point had no memory protection or preemptive multitasking.
Linux, though it had the same modern OS features, was far more closely aligned in spirit with the timeshared, modern, multi-user Unix OSes like the ones running the aforementioned Sun workstations (it's "Linus' Unix", after all).
BeOS had a POSIX-compliant layer, but under the hood it was totally different from a UNIX.
Also, let’s keep in mind that Windows 95 (released that same year) featured preemptive multitasking on a desktop user OS (albeit not a strong memory protection model), and Windows NT had been available for a couple of years by then (having first shipped in 1993, if memory serves) and was a fully ‘modern’ OS (indeed it serves as the basis for the later Windows), albeit with a comparatively large footprint.
I was an avid BeOS user (and coincidentally a NeXT user too) and I was enthralled by its capabilities, but in terms of system architecture it was a dead end.
IIRC the Unix compatibility layer had some pretty grotty warts. Porting Unix applications virtually always required fiddling to get them working, especially the network code.
Unfortunately this meant BeOS was perpetually behind the curve on stuff like the World Wide Web. I had a native FreeBSD build of Netscape long before Be managed to get a decent browser.
(A bit of research later:) It's actually a bit of a mixed bag. The "Operating System Reference Manual for the Lisa" [0] reads on pp. 1-3/1-4:
> Several processes can exist at one time, and they appear to run simultaneously because the CPU is multiplexed among them. The scheduler decides what process should use the CPU at any one time. It uses a generally non-preemptive scheduling algorithm. This means that a process will not lose the CPU unless it blocks. (…)
> A process can lose the CPU when one of the following happens:
> • The process calls an Operating System procedure or function.
> • The process references one of its code segments that is not currently in memory.
> If neither of these occur, the process will not lose the CPU.
In other words, non-preemptive, unless the OS becomes the foreground process, in which case it may block the active process in favor of another one currently in ready or blocked state.
It was ok. Back when I ran BeOS as my primary OS (2001 or so) I built half a C++ web application on BeOS, the other half on a HP-UX server logged in through an X terminal using ftp to sync between the two. Not much support in the wider *nix ecosystem though, so anything big would often fail to build.
I regretted having to move away from BeOS, it was by far the most pleasant OS I’ve used, but the lack of hardware and software support killed it.
In college I wrote a web server on BeOS and ported it back to Linux, learning pthreads along the way. A bonus achievement was making it multithreaded, which I got basically for free, since BeOS makes you think architecturally in a multithreaded-first way.
bounced between windows and os/2, never really used beos as an os, mostly just as a toy for fun. the one thing I remember is that I could play a video that for the time looked amazing without issue. I want to say I even played Quake on it, in a window!
Sure, but at the time Windows 95 was released, they already had a couple of Windows NT releases (3.1, 3.5, and 3.51). Windows NT was a different, more modern operating system than the Windows 95/98/ME line. So, they did not have to evolve Windows 95 into a modern operating system. After ME, they 'just' switched their user base to another operating system and made this possible through API/ABI compatibility (which is quite a feat by itself).
But you have to consider what else was going on at the time: Microsoft was actively moving away from the DOS lineage. OS/2 had been in development since the mid-1980s, and, while that project came to an ugly end, they had also released the first version of Windows NT in the early '90s, and, by the late '90s, they were purposefully moving toward building their next-gen consumer OS on top of it.
Apple needed to be making similarly strong moves toward a multi-user OS with concerns like security baked in deeply. BeOS had the memory protection and the pre-emptive multitasking, which were definitely steps forward, but I don't think they would have taken Apple far enough to allow them to keep up with Microsoft. Which, in turn, would have allowed Microsoft to rest on its laurels, probably to the detriment of the Windows ecosystem.
I’ve never heard anyone say Windows is a problem because it’s proprietary. I have heard that having to pay to upgrade is a pain because you (the company) have to budget for it. Even then, you would also need to budget for the downtime and time to verify that it works before deploying the update, and both those have to be done on Linux too (it’s why LTS releases are a thing).
Anyways, Windows 10 may have its problems, but Microsoft the company is doing pretty well. Their stock is up about 50% this year (200% over the past 5). And that’s not to mention the fact that they’ve open sourced .NET among many other things.
Outside HN and Reddit talks, most people I know don't even care about FOSS OSes existence, they just want something that they buy at the shopping mall and can use right away.
In fairness, I don't think most people care about the OS at all, FOSS or otherwise; they care that the UI is something they can use, and that their apps work. If you perfected WINE overnight, I'll bet you could sit 80% of the population down at a lightly-skinned FreeBSD box and they'd never know.
I don't even think you'd need that for most of the population: it's been quite some time since the median user cared about desktop software[1]. I switched my parents over to a Linux Mint install a decade ago when I went away to college, and it lowered my over-the-phone tech support burden to zero overnight.
I also had (non-CS but very smart) friends who switched to (ie dual-booted) Linux on their own after seeing how much better my system was than a Windows box. A decade later, one of them is getting her PhD in veterinary pathology and still dual boots, firing Windows up only when she feels like gaming.
[1] My impression is that committed PC gamers aren't a large portion of the desktop user population, but I may be wrong.
I know a decent number of people who have That One Program that they've been using for 20 years and can't/won't leave. It probably varies by population group.
AYBABTU = All Your Base Are Belong To Us, a mangled English translation of a Japanese phrase from the Japanese game `Zero Wing` [1]
You don't get to where Apple is (large market cap, high customer satisfaction scores, high reviews in the tech press, etc.) because of marketing. If it were that easy, companies would just copy their marketing or load up on marketing and they would be successful.
And a huge part of Apple's current success is based on the tech and expertise they got from NeXT. That work underpins not just laptops and desktops but phones, tablets, set-top boxes, and more.
Perhaps you only get to where Apple is with world-class marketing.
Apple's iPod wasn't the first mp3 player, and it for damn sure wasn't technically superior.
The iPhone was not the first smartphone, nor the first phone with a touchscreen, nor the first phone with a web browser, nor the first phone with an App Store. It arguably had a better UX than incumbents, but better UX doesn't win markets just by dint of existing.
The iMac was a cute computer that couldn't run prevalent Windows software and didn't have a floppy drive.
Recent MacBook Pros have an awful keyboard, not just aesthetically but with known hardware problems. I understand at long last they're reverting to an older, better design.
Tech and expertise don't win just because they exist.
I'm as reflexively inclined as many technical people to be dismissive of marketing, but I don't think you're right here. You can't "just copy" marketing any more than you can "just copy" anything else a company is world-class in, and good marketing can indeed build market dominance (do you think Coca-Cola is really a vastly superior technical innovation over Pepsi?)
The fact that it isn't a net good for users in most cases doesn't mean that it's trivial to do.
If people willingly exchange currency for products from a company and are satisfied with the value that they get out of it to the point that they become repeat customers, then how can you judge that no one except stockholders are benefitting?
This is very true. macOS and the iPhone, for me, went from being "obviously the very best of the best" to "the lesser of all evils".
When my 2015 rMBP finally gives up the ghost and / or when 10.13 loses compatibility with the applications I use, I have no idea what I'm going to do - probably buy another working 2015 rMBP used and pray that the Linux drivers are livable by then.
I know it's ridiculous, but it helps me fall asleep at night sometimes.
I feel like it's a huge step in the right direction, but for my own personal use:
- I still have mostly USB 2.0 peripherals. I don't see that changing anytime soon.
- I'm still hung up on the MagSafe adapter.
- I love the form factor. The 13" display is the perfect size, for me. I could've switched to a 15" 2015 rMBP with better specs, but I hated how big it was.
- I have no interest in using any version of macOS beyond 10.13, at present.
I'm really glad that they brought the Esc key back, especially as a pretty serious vim user. I don't know, maybe I'm stuck in the past. I'm certain that many, many people are really enjoying the new Macbook Pro 16; I just really, really like this laptop. It's the best computer I've ever owned.
I'm in the same boat as the sibling poster (albeit with a 15" machine) and I'll add this:
- The TouchBar is terrible
I hope they'll bring back a non-TouchBar configuration when they release the "new" keyboard on a 13" MacBook Pro. I could live with both a 13" or 15" laptop, but right now the list of drawbacks is still 1-2 items too long.
I was another former Be "power user." And I think that was probably accurate -- if you weren't in the "BeOS lifestyle" during the admittedly short window that it was possible, it's hard to understand how much promise it looked like it had. When I tell people I ran it full-time for over a year, they wonder how I managed to get anything done, but...
- Pe was a great GUI text editor, competitive with BBEdit on the Mac
- GoBe Productive was comparable to AppleWorks, but maybe a little better at being compatible with Microsoft Office
- SoundPlay was a great MP3 player that could do crazy things that I still don't see anything doing 20 years later (it had speed control for files, including playing backwards, and could mix files that were queued up for playback; it didn't have any library management, but BeOS's file system let you expose arbitrary metadata -- like MP3 song/artist/etc. tags! -- right in file windows; see the attribute sketch just after this list)
- Mail-It was the second-best email client I ever used, behind the now also sadly-defunct Mailsmith
- e-Picture was an object-based bitmapped graphics editor similar in spirit and functionality to Macromedia's Fireworks, and was something I genuinely missed for years after leaving BeOS
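To make that file-system metadata point concrete, here's a rough sketch from memory of the Be Storage Kit (untested; the attribute names, file path and artist string are just examples, and queries only match attributes that are indexed on the volume):

```cpp
// Rough sketch from memory of the Be Storage Kit (untested). BFS attributes
// are typed key/value pairs hung off a file; indexed ones can be queried live,
// which is how Tracker/SoundPlay showed MP3 tags as plain file-window columns.
#include <Node.h>
#include <Query.h>
#include <Volume.h>
#include <VolumeRoster.h>
#include <Entry.h>
#include <Path.h>
#include <String.h>
#include <stdio.h>

int main()
{
	// Tag a file with an arbitrary attribute...
	BNode node("/boot/home/song.mp3");          // hypothetical file
	BString artist("The Be Band");
	node.WriteAttrString("Audio:Artist", &artist);

	// ...then ask the file system itself for every file with that artist.
	BVolume bootVolume;
	BVolumeRoster().GetBootVolume(&bootVolume);

	BQuery query;
	query.SetVolume(&bootVolume);
	query.SetPredicate("Audio:Artist == \"The Be Band\"");
	query.Fetch();

	BEntry entry;
	BPath path;
	while (query.GetNextEntry(&entry) == B_OK) {
		entry.GetPath(&path);
		printf("%s\n", path.Path());
	}
	return 0;
}
```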
And there were other programs that were amazing, even though I didn't use them: Adamation's video editor (videoElements? something like that), their audio editor audioElements, Steinberg's Nuendo, objektSynth, and two programs which are incredibly still being sold today: Lost Marble's Moho animation program, now sold by Smith Micro for Mac and PC, and the radio automation package TuneTracker (incredibly now being sold as a turnkey bundle with Haiku). Also, for years, there was a professional-grade theatre light/audio control system, LCS CueStation, that ran on BeOS -- and it actually ran Broadway and Las Vegas productions. I remember seeing it running at the Cirque du Soleil permanent installation at Disney World in Orlando.
At the time Apple bought Next rather than Be, I thought they'd made a horrible mistake. Given Apple's trajectory afterward, of course, it's hard to say that looking back. It's very possible that if they'd bought Be, they'd have gone under, although I think that would have less to do with technology than with the management they'd have ended up with (or more accurately, stayed with). But it's still an interesting "what if."
I actually toyed with the idea of starting a radio station based on the BeOS MP3 player + file system. The thought was to have a system without human DJs that used a simple web interface to gather "votes" for songs/genres, and to use the metadata in the file system to queue up the songs. If I remember correctly, BeOS also had a macro programming interface (ie. AREXX) that could be used to glue things together.
This made BeOS (and BeBox) a great product in my mind; the ability to use it in unexpected ways.
You may have hinted at it, but I think Apple's subsequent turnaround after acquiring Next was mainly due to their founder, Steve Jobs, coming back to Apple.
Jobs helped enormously, of course, but if Apple was still trying to sell classic MacOS in 2005 I'm not sure even Steve Jobs could have kept them afloat long enough to ship an iPhone.
That's true, but most people keep forgetting that even before his comeback, Apple was very close to filing for bankruptcy, and who knows what would have happened without the intervention from Gates. Microsoft was the only juggernaut whose fate was never doubted in the 1980s - 1990s.
NeXT hardware also failed but was the rightful choice for Apple over Be due to getting NeXTSTEP and Jobs again. But even after the war is over, we're now all generals.
wow, this list is bringing back all the memories. You're right, there was a short wave of enthusiasm, and some apps which were actually very innovative for the time. I remembered that it was actually used in some pro audio and lighting stuff, but I'd forgotten most of those apps. I remember playing around with Moho.
What's the name of the 2D illustration software that modelled real wet paint brushes and textured pencils? That was unlike anything I'd seen at the time; I remember putting quite a lot of effort into finding a compatible Wacom.
Gosh. I remember the illustration program you're talking about but can't remember its name, either. :) I was surprised that it seemed to take so long for that concept to show up on other platforms, though -- other than Fractal Design Painter, it didn't seem like anyone on the Mac or Windows was really trying for that same kind of "real ink and paper" approach.
One of my favorite anecdotes about BeOS was that it had a CPU usage meter[1], and on the CPU meter there were on/off switches for each core. If you had two cores and turned one off, your computer would run at half speed. If you turned both off, your computer would crash. Someone once told me that this was filed as a bug against the OS and the response was "Works As Intended" and that it was the expected behavior.
(These are fuzzy memories from ~25 years ago. It would be nice if someone could confirm this story or tell me if it's just my imagination.)
The CPU monitor program was called Pulse and early versions allowed you to turn all the processors off and crash the machine. I think it was fixed in 3.something or 4.0.
The 8-way PIII Xeon was a Compaq someone tested BeOS on before it went into production. I remember it being posted on some BeOS news site. There should be another screenshot or two with 25 AVI files playing and a crapload of CPU-hungry programs running at once. An impressive feat circa 2000. Edit: browse the screenshot directory for the other two. Amazing they survived time, the internet, bit rot and my memory: http://birdhouse.org/beos/8way/
The BeOS scheduler prioritized the GUI and media programs so you could load the machine down to 100% and the GUI never stuttered and windows could be smoothly moved, maximized and minimized at 100% CPU. Rather, your programs would stutter. And everything was given a fair chance at CPU time.
Very nice design; the OS was built from the ground up for multimedia and for threading on SMP. It was a real nice attempt at building a next-generation desktop OS. It had no security, even though it had basic POSIX compatibility and a bash shell; the security bits meant nothing.
I remember circa 2000 being able to simultaneously compile Mozilla, transfer DV video from a camcorder into an editor, check email, and surf the web on a dual Pentium Pro system with no hint of UI stutter or dropped frames in the firewire video transfer. It was at least another decade before SSDs and kernel improvements made that possible on Linux, Windows, or OS X.
The tradeoff was the throughput of your compilation was terrible. BeOS wasn't magic, it just prioritized the UI over all else. That's not advanced, it's just one possible choice.
MacOS prior to OS X had the same property: literally nothing else could happen at the same time if the user was moving the mouse, which is why you had to take the ball out of the mouse before burning a CD-R on that operating system.
Oh, sure, it was obviously limiting the other tasks. The point was that this is almost always the right choice for a general purpose operating system: no user wants to have their music skip, UI go unresponsive, file transfers to fail, etc. because the OS devoted resources to a batch process.
You’re only partially correct about classic macOS: you could definitely hang the OS by holding down the mouse button, but this wasn’t a problem for playing music, burning CD-Rs, etc. in normal usage unless you had the cheapest of low-end setups, because a small buffer would usually suffice. I worked with a bunch of graphic designers back then and they didn’t get coasters at a significant rate, or more often than their Windows-using counterparts, and they burned a lot of them since our clients couldn’t get large files over a network connection faster than weeks.
You can downplay it all you want, but it was a really nice OS for its time. Its smooth GUI was very competitive with the other, clunky windowing systems of the day. The advanced part was that threading and SMP support were woven into the system API, making SMP development a first-class programming concept; on other operating systems, threading felt bolted on and clunky. And thanks to the SMP support, prioritizing the GUI made 100% sense. I believe there were also some soft real-time abilities in the scheduler, so processes with high priority ran reliably.
Thanks for this. I remember being at MacWorld and watching a movie play while holding down menu items. On Classic Mac, which I was used to, this would block the entire OS (almost). BeOS seemed space-age.
Reminds me of a game called NieR:Automata. You play as an android and the skill/attribute-system is designed as a couple of slots for chips. There were chips for things like the minimap and various other overlays along with general attributes, so if you decided you want to exchange your experience gauge for another chip with more attack speed, you could totally do that.
Among these chips was one called "OS chip" you had from the very beginning. If you'd try to replace that or simply exchange it for another one you "died" instantly and were greeted by the end-credits.
When I started University in 2000, I had a quad-boot system: Win98, Win2000, BeOS 5 and Slackware Linux (using the BeOS bootloader as my primary because it had the prettiest colors). I mostly used Slackware and Win98 (for games), but BeOS was really neat. It had support for the old Brooktree video capture cards, could capture video without dropping frames like VirtualDub often did, and it even had support for disabling a CPU on multi-CPU systems (I only saw videos of this; never ran BeOS on an SMP system).
I wish we had more options today. On modern x86 hardware, you pretty much just have Windows, Linux and maybe FreeBSD/OpenBSD if you replace your Wi-Fi card with an older one (or MacOS if you're feeling Hackintoshy .. or just buy Apple hardware). I guess three is kinda the limit you're going to hit when it comes to broad support.
I think BeOS was the only OS that allowed smooth video playback while you worked at the same time, something Windows managed 5 years later and Linux 10 years later :D
What technically enabled this on such limited hardware? Was it the lack of security/containerization/sandboxing that made OS calls much faster and context switches cheaper?
Other people mentioned the real preemptive scheduling — and the generally better worst-case latency — but another factor was the clean design. The other operating systems tended to be okay in the absence of I/O contention, but once you hit the capacity of your hard drive you would find out that e.g. clicking a menu in your X11 app did a bunch of small file I/O in the background which would normally be cached but had been pushed out, etc. A common mitigation in that era was having separate drives for the operating system, home directory, and data, so you could at least avoid contention for the few hundred IOPS a drive could sustain.
Yes. This always amazed me with BeOS. It would play 6 movies simultaneously, making my PC very slow but still responsive. As if the frame rate just went down.
Bear in mind that resolutions back then were much lower than now, and not all computers had 24 bit color frame buffers. Video cards ran one monitor for the most part, with no others attached.
Be had well written multi threading and preemptive multitasking implemented on a clean slate - no compatibility hacks required. That meant it worked well and was quick/responsive. There were still limits, and the OS didn't have many security protections that would get written in today.
Some people were, but it wasn't too common. Workstations had far higher resolutions long before this, but home PCs running non 3d accelerated hardware were still mostly 1024x768-ish.
The BeBox itself was vastly different hardware than a standard PC as well, so it could break a lot of rules as far as smooth concurrency and multitasking... kinda like the Amiga did.
Yup, had a 22” Mitsubishi monitor that could do that resolution in ~2002. Everyone would pick on me about the text being so small, but I’d let them sit at my desk squinting and I’d stand ten feet back and read the screen with ease as they struggled. The monitor was a beast though, around 70lbs if memory serves.
That was more the exception than the rule. Besides, 1080P is about 45% more pixels per frame than 1280, and likely at a higher frame rate. Big difference in hardware load.
I think it was their thread/process scheduler. It had a band of priorities which got hard real-time scheduling, while lower-priority stuff got more "traditional" scheduling. (Alas, I don't know too much about thread/process scheduling, so the details elude me.) That way the playback threads (and also other UI threads, such as the window system) got the timeslices they needed.
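For anyone curious what that looked like from the programmer's side, here's a small sketch from memory of the Kernel Kit's thread API (the constants and exact values are from recollection, so treat it as illustrative): playback threads sat in the real-time priority band, batch work well below it, and the real-time band was always scheduled first.

```cpp
// Sketch from memory of the BeOS Kernel Kit thread API (OS.h); illustrative.
// The real-time band is scheduled ahead of normal and low priorities, which
// is why playback kept its timeslices even under heavy background load.
#include <OS.h>
#include <stdio.h>

static int32 audio_feeder(void *)               // real-time band
{
	for (int i = 0; i < 100; i++) {
		// fill the next audio buffer here...
		snooze(10000);                          // ~10 ms per buffer
	}
	return 0;
}

static int32 batch_work(void *)                 // runs when the RT band is idle
{
	// long-running compile / encode / whatever
	return 0;
}

int main()
{
	thread_id audio = spawn_thread(audio_feeder, "audio feeder",
	                               B_REAL_TIME_PRIORITY, NULL);
	thread_id batch = spawn_thread(batch_work, "background job",
	                               B_LOW_PRIORITY, NULL);
	resume_thread(audio);
	resume_thread(batch);

	status_t result;
	wait_for_thread(audio, &result);            // wait for playback to finish
	wait_for_thread(batch, &result);
	return 0;
}
```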
Isn't giving near-real-time scheduling priority to audio/video how Windows handles things these days? I think I read that somewhere last week in a discussion of Linux kernel scheduler behaviour.
Amiga did this in 1985. It's just that for compatibility reasons Apple couldn't do this. Even funnier: the fastest hardware to run old MacOS (68k version) on: an Amiga computer.
Ah yeah I still have my PowerComputing PowerTower Pro! At the time it was a current model, its 512mb of RAM was insane and my friends & classmates were jealous! hahah :)
Check out this video[0], basically an Amiga with an accelerator card potentially makes for the fastest environment to run 68k-based Mac OS (System 7) ...
Well, it's more akin to something like Wine where it's not exactly a virtual machine, since the processor instructions are the same. Tho that's about the extent of my understanding.. haha
I sometimes used my Atari ST with an emulator called Aladin.
"Cracked" to work without Mac ROMs. But wasn't really useful to me because of lack of applications (at the time).
IIRC there were solutions like this for the Amiga too.
That depended _very_ heavily on your graphics card at the time. In 2001, I could get X to crash on my work computer if I shook my mouse too fast. At home on my matrox card, yes, it was rock stable.
High definition playback is still not as smooth as it could be in browsers on Linux (or if your CPU is fast enough, it will drain your battery more quickly), because most browsers only have experimental support for video acceleration.
Pretty much any CPU released in the past decade should be capable of decoding 1080P video as well as a GPU (though yes, will use slightly more power). The only exceptions I can think of are early generation Atom processors, which were terribly slow.
> Pretty much any CPU released in the past decade should be capable of decoding 1080P video as well as a GPU (though yes, will use slightly more power).
The point is that modern GPUs have hardware decoding for common codecs, and will use far less power than CPU decoding. But the major browsers on Linux (Firefox and Chrome) disable hardware decoding on Linux, because $PROBLEMS.
So, you end up with battery draining CPU-based 1080p decoding. And even more battery draining or choppy 4k decoding.
Linux could do that only if your system was lightly loaded. Once you started to have I/O contention, none of the available kernel schedulers could reliably avoid stuttering.
I had this experience too, my video card was so shitty that I wasn't able to watch 700mb divx videos in windows, I had to boot into linux and use mplayer.
This would be challenging with modern codecs using delta frames. The only way I can see it work is precomputing all frames from the preceeding keyframe. Doable, but decent effort for a fairly obscure feature.
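A toy model of what that precomputation looks like, purely illustrative (a "frame" here is just an int and a "delta" just an increment, nothing like a real codec):

```cpp
// Purely illustrative toy: a keyframe stores an absolute value, a delta frame
// stores an increment. To show frame N while stepping backwards, decode
// forward from the nearest keyframe at or before N.
#include <cstdio>
#include <cstddef>
#include <vector>

struct Packet {
	bool keyframe;   // true: value is absolute; false: value is a delta
	int  value;
};

int decode_frame(const std::vector<Packet> &stream, std::size_t target)
{
	std::size_t key = 0;
	for (std::size_t i = 0; i <= target; ++i)
		if (stream[i].keyframe)
			key = i;                             // last keyframe <= target

	int frame = 0;
	for (std::size_t i = key; i <= target; ++i)  // re-decode the whole run
		frame = stream[i].keyframe ? stream[i].value : frame + stream[i].value;
	return frame;
}

int main()
{
	std::vector<Packet> stream = {
		{true, 10}, {false, 1}, {false, 1}, {true, 50}, {false, 2}};
	for (std::size_t i = stream.size(); i-- > 0; )   // play backwards
		std::printf("frame %zu = %d\n", i, decode_frame(stream, i));
	return 0;
}
```

Real players mitigate the cost by caching the whole group of pictures after decoding it once, so stepping backwards only pays the forward-decode cost once per keyframe interval.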
I never saw BeOS do that with video, but I heard it do it with MP3 files. SoundPlay was a kind of crazy bananas MP3 player -- it could basically act as a mixer, letting you not only "queue up" multiple files but play each of them simultaneously at different volume levels and even different speeds. I've still never seen anything like it outside of DJ software.
> you pretty much just have Windows, Linux and maybe FreeBSD/OpenBSD [...]
That sounds just as good? Compared to quad-booting Win98/Win2000/BeOS5/Slackware, today you could quad-boot Win10/FreeBSD/OpenBSD/Ubuntu. Actually, depending on what you count as different systems and what exact hardware you have, you could have 2 laptops sitting on your desk: a pinebook running your choice of netbsd, openbsd, freebsd, or some linux (https://forum.pine64.org/forumdisplay.php?fid=107), and an x86 laptop multibooting Windows 10, Android, Ubuntu GNU/Linux, Alpine Busybox/Linux, FreeBSD, OpenBSD, NetBSD, and Redox (https://www.redox-os.org/screens/). That's 2 processor families in 2 machines running what I would count as 4 and 8 operating systems each.
There also used to be other CPU architectures--though even at the time, enough people complained about "Wintel" that maybe it was obvious that the alternatives weren't ever going to catch on.
People complained about "Wintel" because the 32-bit x86 chips were so fast and cheap they destroyed the market for RISC designs and killed existing RISC workstation and server architectures, like SPARC and HPPA and MIPS.
By the time the Pentium came around, the future looked like a completely monotonous stretch of Windows NT on x86 for ever and ever, amen. No serious hardware competition, other than Intel being smart enough to not kill AMD outright for fear of antitrust litigation, and no software competition on the desktop, with OSS OSes being barely usable then (due to an infinite backlog of shitty hardware like Winmodems and consumer-grade printers) and Apple in a permanent funk.
We were perpetually a bit afraid that Microsoft/Intel would pull something like Palladium/Trustworthy Computing [1] and lock down PC hardware but good, finally killing the Rebel Alliance of Linux/BSD, but somehow the hammer never quite fell. It did in the cell phone world, though, albeit in an inconsistent fashion.
To Microsoft's credit, the early Windows NT versions were multiplatform. I remember that my Windows NT 4.0 install CD had x86, Alpha, PowerPC, and MIPS support.
The other thing people forget, which is still a bit incomprehensible to me, is that the multiple Unix vendors were saying they'll migrate to Windows NT on IA-64.
...well, we all know what happened - but I've often thought that Microsoft hastened their demise.
Somewhere in there, of course, was also the whole SGI moving away from IRIX (SGI's unix variant) to Windows NT (IIRC, this was on the Octane platform) - there being some upset over it by the SGI community. Maybe that was part of the "last gasp"? I'm sure some here have better info about those times; I merely watched from the sidelines, because I certainly didn't have any access to SGI hardware, nor any means to purchase some myself - waaaaay out of my price range then and now.
Of course - had SGI not gone belly up, I'm not sure we'd have NVidia today...? So maybe there's a silver lining there at least?
They couldn't afford to compete with Intel on processors... they just didn't have the volumes and every generation kept getting more expensive. For Intel, it was getting relatively cheaper thanks to economies of scale since their unit volumes were exploding throughout the 90's. Also, Intel's dominance in manufacturing process kept leapfrogging their progress on the CPU architecture front.
It actually worked pretty nicely - if anything better back in those days when software expected to run on different unixes, before the linux monoculture of today.
> We were perpetually a bit afraid that Microsoft/Intel would pull something like Palladium/Trustworthy Computing [1] and lock down PC hardware but good, finally killing the Rebel Alliance of Linux/BSD, but somehow the hammer never quite fell. It did in the cell phone world, though, albeit in an inconsistent fashion.
I agree that phones are more locked down than desktops/laptops nowadays, but it's worth pointing out that neither Microsoft nor Intel are really winners in this area. They're both still doing fairly well in the desktop/laptop space in terms of market share, though.
I honestly think it was less any type of Wintel conspiracy and more that platforms have network effects. Between Palladium not working out and Microsoft actually making Windows NT for some RISC ISA's, there wasn't actually an Intel/Microsoft conspiracy to dominate the industry together. They each wanted to separately dominate their part of the industry and both largely succeeded, but MS would have been just as happy selling Windows NT for SPARC/Alpha/PowerPC workstations and Intel would have been just as happy to have Macs or BeBoxes using their chips.
> I honestly think it was less any type of Wintel conspiracy and more that platforms have network effects.
True. I've always regarded "Wintel" as more descriptive than accusatory. It's just a handy shorthand to refer to one specific monoculture.
> Between Palladium not working out and Microsoft actually making Windows NT for some RISC ISA's, there wasn't actually an Intel/Microsoft conspiracy to dominate the industry together.
Right. They both happened to rise and converge, and it's humanity's need to see patterns which turns that into a conspiracy to take over the world. They both owe IBM a huge debt, and IBM did what it did with no intention of being knocked down by the companies it did business with.
> OS X was around in the days of XP and Linux was perfectly usable on the desktop.
> A few years earlier things were a little more bleak.
I admit I was unclear on the time I was talking about, and probably inadvertently mangled a few things.
As for Linux in the XP era, I was using it, yes, but I wouldn't recommend it to others back then because it still had pretty hard sticking points with regards to what hardware it could use. As I said, Winmodems (cheap sound cards with a phone jack instead of a speaker/microphone jack, which shove all of the modem functionality onto the CPU) were one issue, and then there was WiFi on laptops, and NTFS support wasn't there yet, either. I remember USB and the move away from dial-up as being big helps in hardware compatibility.
Yeah Wifi on Linux sucked in those days. For me that was the biggest pain point about desktop Linux. In fact I seem to recall having fewer issues with WiFi on FreeBSD than I did on Linux -- that's pure anecdata of course. I remember the first time I managed to get this one laptop's WiFi working without an external dongle and to do that I had to run Windows drivers on Linux via some wrapper-tool (not WINE). To this day I have no idea how that ever worked.
> I remember the first time I managed to get this one laptop's WiFi working without an external dongle and to do that I had to run Windows drivers on Linux via some wrapper-tool (not WINE). To this day I have no idea how that ever worked.
ndiswrapper. It's almost a shibboleth among people who were using Linux on laptops Way Back When.
> NDISwrapper is a free software driver wrapper that enables the use of Windows XP network device drivers (for devices such as PCI cards, USB modems, and routers) on Linux operating systems. NDISwrapper works by implementing the Windows kernel and NDIS APIs and dynamically linking Windows network drivers to this implementation. As a result, it only works on systems based on the instruction set architectures supported by Windows, namely IA-32 and x86-64.
[snip]
> When a Linux application calls a device which is registered on Linux as an NDISwrapper device, the NDISwrapper determines which Windows driver is targeted. It then converts the Linux query into Windows parlance, it calls the Windows driver, waits for the result and translates it into Linux parlance then sends the result back to the Linux application. It's possible from a Linux driver (NDISwrapper is a Linux driver) to call a Windows driver because they both execute in the same address space (the same as the Linux kernel). If the Windows driver is composed of layered drivers (for example one for Ethernet above one for USB) it's the upper layer driver which is called, and this upper layer will create new calls (IRP in Windows parlance) by calling the "mini ntoskrnl". So the "mini ntoskrnl" must know there are other drivers, it must have registered them in its internal database a priori by reading the Windows ".inf" files.
It's kind of amazing it worked as well as it did. It wasn't exactly fun setting it up, but I never had any actual problems with it as I recall.
Yeah I know what ndiswrapper is (though admittedly I had forgotten its name). I should have been clearer in that I meant I was constantly amazed that such a tool existed in the first place and doubly amazed that it was reliable enough for day to day use.
Oh man! I first tried BeOS Personal Edition when it came on a CD with Maximum PC magazine. (Referring to the same demo CD, though the poster is not me: https://arstechnica.com/civis/viewtopic.php?f=14&t=1067159&s.... Also, how crazy is it that Ars Technica's forums have two-decade-old posts? In 2000, that would be like seeing forum posts from 1980.) I remember being so happy when we got SDSL, and I could get online from BeOS. (Before that, my computer had a winmodem.)
BeOS was very much a product of its time. (Microkernel, use of C++, etc.) What would a modern BeOS look like? My thought: use of a memory and thread safe language like Rust for the main app-level APIs. (Thread safety in BeOS applications, where every window ran in its own thread, was not trivial.) Probably more exokernel than microkernel, with direct access to GPUs and NICs and maybe even storage facilitated by hardware multiplexing. What else?
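For anyone who never wrote against the Be API, here's a rough sketch, from memory and Haiku-flavoured (the class and function names are mine), of what "every window ran in its own thread" meant in practice: Show() spawns the window's message-loop thread, and any other thread has to Lock() the window before touching its views.

    // Sketch only - treat API details as approximate.
    #include <Application.h>
    #include <Window.h>
    #include <StringView.h>
    #include <OS.h>

    class HelloWindow : public BWindow {
    public:
        HelloWindow()
            : BWindow(BRect(100, 100, 420, 180), "Hello",
                      B_TITLED_WINDOW, B_QUIT_ON_WINDOW_CLOSE)
        {
            fLabel = new BStringView(Bounds(), "label", "drawn by the window's thread");
            AddChild(fLabel);
        }
        BStringView* fLabel;
    };

    static int32 worker(void* data)
    {
        HelloWindow* win = static_cast<HelloWindow*>(data);
        snooze(1000000);                 // pretend to work for a second
        // (a real app would also guard against the window having been closed already)
        if (win->Lock()) {               // serialize with the window's own thread
            win->fLabel->SetText("updated from a worker thread");
            win->Unlock();
        }
        return 0;
    }

    int main()
    {
        BApplication app("application/x-vnd.example-hello");
        HelloWindow* win = new HelloWindow();
        win->Show();                     // this call is what spawns the window's thread

        resume_thread(spawn_thread(worker, "worker", B_NORMAL_PRIORITY, win));
        app.Run();
        return 0;
    }

The sketch also hints at why that wasn't trivial: forget one Lock()/Unlock() pair, or hold it across something slow, and two threads are poking at the same view.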
But if you change your question to "What would a modern OS look like?"
Fuchsia. (1)
The only relationship they have is that a kernel engineer called Travis Geiselbrecht designed both NewOS (the kernel Haiku's kernel is derived from) and Zircon (Fuchsia's microkernel).
There's a bit of BeOS in Android. Binder IPC is much like BMessage. And nowadays everyone puts stuff like graphics and media into separate user-space daemons, which was unusual for the time. Pervasive multithreading basically happened in the form of pervasive multiprocessing.
I installed BeOS a long time ago on a PC. It was something ahead of the times.
I still remember how incredible the rotating cube demo was, where you could drag and drop images and videos onto the cube faces... it worked without a glitch on my Pentium.
Agreed, I remember trying BeOS in the late 90s and I felt the way Tesla fans report feeling about their cars - "it just feels like the future".
The responsiveness of the UI was like nothing I'd ever seen before. Unfortunately BeOS fell by the wayside, but I have such fond memories I keep meaning to give Haiku a shot.
When Be wrote that demo the situation is that the other operating systems you might plausibly choose all have working video acceleration. Even Linux has basic capabilities in this area by that point. BeOS doesn't have that and doesn't have a road map to get it soon.
So, any of the other platforms can play full-resolution video captured from a DVD, for example - a use case actual people have - on a fairly cheap machine, and BeOS won't be able to do that without a beast of a CPU because it doesn't even have hardware colour transform acceleration or chromakey.
But - 1990s hardware video acceleration can only play one video at a time, because "I want to play three videos" isn't a top ask from actual users. So, Be's demo deliberately shows several different postage stamp videos instead of one higher resolution video, as the acceleration is no help to competitors there.
And then since you're doing it all in software, not rendering to a rectangle in hardware, the transform to have this low res video render as one side of a cube or textured onto a surface makes it only very slightly slower, rather than being impossible.
Audiences come away remembering they saw BeOS render videos on a 3D surface, and not conscious that it can't do full resolution video on the cheap hardware everybody has. Mission success.
BeOS R4.5 did have hardware-accelerated OpenGL for 3dfx Voodoo cards. I played Quake 2 in 1999 with HW OpenGL acceleration. For R5, Be Inc. wanted to redo their OpenGL stack, and the initial prototypes seeded to testers actually got more FPS on BeOS than under Windows.
Eh, multithreaded decoding could help a lot. And by the time DVD video became common on computers (and the PS2), most people had a Pentium III 450MHz at home, which was more than enough for DVD video with an ASM-optimized video player such as MPlayer and a good 2D video card.
Remember when the Amiga bouncing ball demo was impressive? Ironically 3D graphics ended up being the Amiga's specific achilles heel once Doom and co came on the scene.
That's curious to me. Doom is specifically not 3D. Was it a publishing issue (that Doom and co weren't produced for the Amiga), or a power issue, or something else?
The Amiga had planar graphics modes, while the PC/VGA cards had chunky mode (in 320x200x256 color mode).
It means that, to set the color of a single pixel on the Amiga, you had to manipulate bits at multiple locations in memory (5 of them in 32-colour mode), while for the PC each pixel was just one memory location. In chunky mode you could just do something like videomem[320*y+x]=158 to set the pixel at (x,y) to color 158, where videomem would point directly to the graphics memory (at address 0xa0000) -- it really was the simplest graphics mode to work with!
If you just copied 2D graphics (without scaling/rotating) the Amiga could do it quite well using the blitter/processor, but 3D texture mapping was more challenging because you constantly read and write individual pixels (each pixel potentially requiring 5 memory reads/writes on the Amiga vs. 1 on the PC).
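To make that concrete, here's an illustrative sketch (plain C++ writing into ordinary buffers, not real video memory) of what one pixel write costs in each scheme:

    #include <cstdint>

    // Chunky (VGA mode 13h style, 320x200, 256 colours): one byte per pixel, one write.
    void put_pixel_chunky(uint8_t* videomem, int x, int y, uint8_t color)
    {
        videomem[320 * y + x] = color;
    }

    // Planar (Amiga style, 5 bitplanes = 32 colours): each plane holds one bit of the
    // colour index, so a single pixel is five read-modify-write operations.
    void put_pixel_planar(uint8_t* planes[5], int bytes_per_row, int x, int y, uint8_t color)
    {
        int offset  = y * bytes_per_row + (x >> 3);
        uint8_t bit = 0x80 >> (x & 7);          // bitplanes are MSB-first
        for (int p = 0; p < 5; p++) {
            if (color & (1 << p))
                planes[p][offset] |= bit;       // set this plane's bit of the index
            else
                planes[p][offset] &= ~bit;      // clear it
        }
    }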
Doom's wall texture mapping was affine, which basically means scaling+rotation operations were involved. The sprites were also scaled. Both operations a problem to the Amiga.
As software-based 3D texture-mapped games became the new hot thing in 1993-1997, the Amiga was left behind. It probably wouldn't have been a problem if the Amiga had survived until the 3D accelerators of the late 90s.
This is quite well described elsewhere. Google is your friend if you want to know more! :-)
Also, the Amiga didn't have hardware floating point, whereas the DX-series PCs of the 90s did. Essential for all those tricky 3D calculations and texture maps.
Quake has software full 3D, which runs appallingly if you can't do fast FP; it's targeting the new Pentium CPUs, which all have fast FPUs onboard. It runs OK on a fast 486DX but it flies on a cheap Pentium.
Doom is just integer calculations, it's fixed point math.
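For anyone curious what "fixed point" means here: Doom's released source keeps its numbers in a 16.16 format (fixed_t / FRACUNIT / FixedMul in m_fixed.c, if memory serves), so all the arithmetic stays in plain integer ops. A minimal sketch of the idea:

    #include <cstdint>
    #include <cstdio>

    typedef int32_t fixed_t;
    const int     FRACBITS = 16;
    const fixed_t FRACUNIT = 1 << FRACBITS;      // this is "1.0"

    // Multiply two 16.16 numbers: widen to 64 bits, then shift the extra
    // fraction bits back out. No FPU involved.
    fixed_t fixed_mul(fixed_t a, fixed_t b)
    {
        return (fixed_t)(((int64_t)a * b) >> FRACBITS);
    }

    int main()
    {
        fixed_t one_and_half = FRACUNIT + FRACUNIT / 2;   // 1.5
        fixed_t three        = 3 * FRACUNIT;              // 3.0
        fixed_t r            = fixed_mul(one_and_half, three);
        printf("%f\n", r / (double)FRACUNIT);             // prints 4.500000
        return 0;
    }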
I didn't know Doom was all integer... quite a feat.
In the general sense though the lack of floating point, as well as flat video addressing seriously hampered Amiga in the 3D ahem space.
EDIT I just remembered there is definitely at least one routine I know of that performs calculations based on IEEE 754 - "fast inverse square" or something. That could be at the root [badum] of my confusion vis-a-vis Doom ...
You are still getting confused by polygons. It was a 3D space that you could move around in. The matter of how it was rendered is an implementation detail.
Doom was a 2D space that looked like a 3D space due to rendering tricks. You could never move along the Z-axis though because the engine doesn't represent, calculate, or store one. That's why you can't jump, and there are no overlapping areas of the maps.
Regardless of the "technicalities", my point was that this and other 3D games were something the Amiga could not do well - whether 3D or "simulated 3D".
It really wasn't. Doom's gameplay almost entirely took place in a 2D maze with one-way walls. It was rendered to look 3D, and as you said, that's an implementation detail.
Purely technical? You can't go above or below anything; no two objects can exist at the same X/Y; height doesn't exist in any true fashion (the attribute is used purely for rendering --- there is no axis!). How is the absence of a third axis in a supposedly 3D environment purely technical?
With only two axes, it is literally a 2D space, which gives some illusion of 3D as an implementation detail --- not the other way around.
It isn't "literally" a 2D space. It is "topologically" a 2D space in that you could represent it as a 2D space without losing information. It doesn't provide 6 degrees of freedom but it is very much experienced as a 3D game environment.
EDIT also, using the term "literally" to talk about 3Dness when it is all rendered onto a 2D screen, is fairly precarious. No matter how many degrees of freedom, or how rendered, it will never be "literally" 3D, in the literal sense of the term.
> What’s left for us now is to wonder, how different would the desktop computer ecosystem look today if all those years ago, back in 1997, Apple decided to buy Be Inc. instead of NeXT? Would Tim Berners-Lee have used a BeBox to run the world’s first web server instead?
For this hypothetical scenario to ever have been possible, BeOS would've had to time travel, as TBL wrote WorldWideWeb on a NeXT machine in 1990[0]. BeOS development began in 1991 per Wikipedia[1], and the initial public release of BeOS wasn't until 1995.
I used BeOS for most of the 2nd half of the '90s and I guess in my mind at least the regrettable, messy, and unethical end of BeOS in 2001-2002 is emblematic of the Dot Com collapse.
Crushed by Microsoft's anti-competitive business practices and sold for scrap to a failing company that was unable to actually do anything with the parts it wound up with, but which nevertheless made damn sure that no one else could either.
BeOS was really something of what the future 'could' almost have been. Too bad that it was killed by better competitors. But I think it's fair to look at the lessons from its successor, 'Haiku', that many other OSes could learn:
From what I can see from using Haiku for a bit, it has the bazaar community element of open-source culture, with its package management and ports system from Linux and BSD, whilst being conservative in the design of its apps, UI, and SDK, like macOS. Although I have tried it and it's surprisingly "usable", the driver story is still a bit lacking. But from a GUI usability point of view, compared with many Linux distros it feels very consistent, unlike the countless confusing interfaces coming from those distros.
Perhaps BeOS lives on in the Haiku project, but what's more interesting is that the real contender that learned from its failure is the OS whose kernel is named 'Zircon'.
I installed BeOS when it first came out. To me it was a cool tech demo, but it was fairly useless as it didn't have a usable browser (NetPositive was half baked at best), couldn't play a lot of video codecs and couldn't connect to a Windows network share.
I feel like if they launched a better experience for existing Windows users, it would have done much better.
That's a hell of an understatement right there. It still doesn't have any capability for accelerated video, does it?
Unfortunately that's the story for any OS these days that isn't already firmly established. Which is a huge shame since they all suck in their own ways.
> Unfortunately that's the story for any OS these days that isn't already firmly established.
Maybe because we're coming at this from the wrong perspective?
I love the theoretical idea that I could build a generic x86 box that can boot into any OS I feel like using, but has that ever truly been the case? We certainly don't pick software this way—if you're running Linux, you're not going to buy a copy of Final Cut and expect it to work.
Well-established software will of course work almost everywhere, but niche projects don't have the ability. Unless you use something based on Java or Electron, which is equivalent to using Virtualbox (or ESXi) in this comparison.
It's long been said that one of Apple's major advantages with macOS is they don't need to support any hardware under the sun. Non-coincidentally, the recommended way to make a Hackintosh is to custom build a PC and explicitly select Mac-compatible hardware.
Now, if an OS doesn't for instance have support for any model GPUs at all, cherry picking hardware won't help. But perhaps this is where projects like BeOS need to focus their resources.
> The "correct" way to go about things is to choose the OS first, and then select compatible hardware.
Yeah, wouldn't it be nice if we weren't constrained by real world requirements? If I were to write an OS today, the hardware I'm targeting may become quite rare and/or expensive tomorrow. Or it may just go out of fashion. Regardless, very few people are going to buy new hardware just to try out an OS they're not even sure they want to use yet.
> very few people are going to buy new hardware just to try out an OS
We do have VM's and emulators, but yes, the cost of switching OS's is huge. That's true with or without broad hardware compatibility.
My point is this: I don't think the idea of OS-agnostic hardware ever really existed. The fact that most Windows PC's can also run Linux is an exceptional accomplishment, and not something other projects can be expected to replicate. You might get other OS's to boot, but not with full functionality.
That's the case. I can't use Haiku til the video is sorted, and it looks like that's a long way out. I'd love to help but I don't know C++ and I don't have time to dive into something like that.
Well, it wasn't as simple as "killed off by better competitors". It was actually much better than both Windows 98 and Mac OS at the time.
But ultimately the deathblow came from Apple which, after struggling with low sales and poor-quality software, almost chose to buy Be Inc.'s tech but dropped it so they could bring in Steve Jobs. So it was more like vendor lock-in (Windows) and corporate deals (Apple), as well as failing partners (Palm).
Apple also dropped it because they couldn't come together on price, partly because BeOS was in a fairly unfinished state:
> Apple's due diligence placed the value of Be at about $50 million and in early November it responded with a cash bid "well south of $100 million," according to Gassée. Be felt that Apple desperately needed its technology and Gassée's expertise. Apple noted that only $20 million had been invested in Be so far, and its offer represented a windfall, especially in light of the fact that the BeOS still needed three years of additional expensive development before it could ship (it didn't have any printer drivers, didn't support file sharing, wasn't available in languages other than English, and didn't run existing Mac applications). Direct talks between Amelio and Gassée broke down over price just after the Fall Comdex trade show, when Apple offered $125 million. Be's investors were said to be holding out for no less than $200 million, a figure Amelio considered "outrageous."
> ...With Be playing hard to get, Apple decided to play hardball and began investigating other options.
In my hazy recollection, there was another, rather pedestrian reason Apple didn't go for BeOS: it had almost no infrastructure for printing. The Mac's niche was prepress and desktop publishing (remember that phrase?), and BeOS could barely print and had no color management.
(Though I could be totally wrong on this, and welcome a correction.)
I also read a story about how the BeOS DPx (developer preview) releases lacked decent printing support, and that this was another reason why Apple chose NeXT. The irony is that Apple had to redo the NeXT printing stack anyhow, as did Be in R4, and they both ended up using CUPS. Another reason was the lack of x86 support, which forced Be Inc. to quickly rush out an x86 port in R3.0. Intel was so impressed by the x86 performance that it ended up investing $4M in Be Inc.
Before I first saw BeOS running on a colleague's machine back in the ~mid-late 90s (same guy who introduced me to Python), I used an SGI Onyx Reality Engine machine [1] (roughly a $250K computer back in the day) for molecular mechanics simulations, and BeOS ran circles around it in perceived responsiveness. I really wish we had OSes that prioritize input/output latency over all else.
Fun vid that might bring back some memories :) https://www.youtube.com/watch?v=Bo3lUw9GUJA "SGI's $250,000 Graphics Supercomputer from 1993 - Silicon Graphics Onyx RealityEngine²"
Windows does that. The Balance Set Manager boosts the priorities of threads that receive I/O. That's why mouse movement has been so smooth since Windows NT. (Not so much on Win9x.)
I was drinking water when I read this and ended up snorting it out, I laughed so hard. It's so, so true.
I am a mobile dev and use a plethora of iOS and Android devices all the time, often on many different software versions.
It’s not unique to any platform, and seems to be most often affecting the keyboard input, but occasionally seems to affect the rest of the UI as well.
Software updates will oscillate between breaking and fixing these things on and off between different devices.
I’ve been doing app development since iOS 3 and the first version of the App Store and used an ADP1. There’s no method to the madness I can see. Especially with the keyboard input.
Lord help you if you happen to be using a 4S or 5 or iPad 2/3. I often wait 10-15 seconds for text input to catch up, only for it all to appear at once - I can type whole sentences knowing that by the time they finish rendering it'll have been the same 10-15 second wait as if I'd typed a single character.
I should have said that I really want a general-purpose OS to prioritize perceived latency. Mobile OS (and gaming consoles) of course are tuned quite differently and in my brain I view them as completely different experiences when I use them. My desktop while significantly more powerful just _feels_ slower because of the latency.
Haiku is really good. Would recommend anyone to try it out in a VM (I had it running on my actual laptop for a short time, but unfortunately my job pretty much requires me to run Linux so it couldn't stay). Haiku has a really responsive UI with a 90s look so you can actually tell what is a button.
Oh man, I really do miss the days of actual coherent UI that is clearly "readable". The trend of flat UI drives me crazy. So much wasted cognitive effort just to make sense of something on-screen.
How painful was it to get it running on a laptop? I've been interested in Haiku for a long time now, but I don't really have a place to play with it except on my laptop.
You have to have the right laptop. At the time I had a Thinkpad (X1 IIRC?) which pretty much worked out of the box, but I'm fairly sure it won't work on $random laptop. For best results and the lowest barrier to entry, try it first in a VM.
Actually these days, Haiku will probably at least boot on $random laptop, and if you have a WiFi chip that FreeBSD also supports (we reuse their network drivers), you can probably even connect wirelessly.
I run Haiku on my ThinkPad E550 (2015), I know some of the other devs run on Dell XPS 13, HPs, etc. And the same is true in towers (Ryzen is becoming a popular pick.)
GPU acceleration drivers are the one real kind of driver we lack at this point.
In this alternative universe, Objective-C would have died, Swift would never have happened, and C++ would rule across desktop and mobile OSes (thinking only of Windows, BeOS X, Symbian, Windows CE, and what might have been a BeOS-based iOS).
Also POSIX would be even less relevant, and very few would be buying BeOS X for doing Linux coding.
In my alternate universe, GNUStep or some other implementation of the NS APIs would have allowed Linux/BSD to rise in popularity along with a NeXT-ified Apple. Except that didn’t happen.
I think in your alternate universe it’s likely we’d have seen something entirely different emerge.
GNUStep still looks much the same as it did when I was using WindowMaker in the late 90's.
I regretted the time I once spent in the GNUStep room at FOSDEM, as the only subject of relevance seemed to be the theme engine, as if there weren't anything more relevant to get working properly.
What would have been interesting is if Apple had bought Be in 1993 or 1994, and incorporated BeOS tech in System 8, then wound up buying NeXT anyway at the end of 96 and OS X incorporated Be and NeXT tech.
Though it's entirely possible Jobs would have tossed the BeOS tech along with Quickdraw GX, Newton, etc.
Well, the last major revision to the Single Unix Specification was released in 2008. Ever since then, Linux has basically become the defacto POSIX standard. So while glibc may not rule the roost, most POSIX OSes bend to the will of the GNU userland and the Linux monoculture in many ways. It's not really like the old days with Sun, IBM, DEC and even Microsoft with Xenix. POSIX is being washed away a little more every day.
There is a C ABI. The "x86-64 System V ABI" (the ABI for C on everything except Windows on an x86-64, ie a typical PC) was designed by AMD working with early adopters on Linux and other platforms. Here are several extant ABI documents:
The ABI for C needs to agree on less than a C++ ABI, but it's still quite a lot of stuff; if these things don't get agreed, code won't work unless everybody uses the same (version of the) compiler.
- Chrome OS (where JavaScript and WASM is what matters)
So no, it isn't everything except Windows on x86-64, and then there are all the other OSes running on ARM, MIPS, PowerPC, SPARC, PIC and plenty of other less relevant CPUs.
I feel like the biggest missed opportunity of the “mobile revolution” ten years ago was BeOS.
It seemed clear to me that Android would be a bust for smartphone manufacturers (nobody has really made money off of Android except for Google and Samsung, the latter of whom accomplished this by dominating that market).
If Sony, for example, had gotten ahold of BeOS and tried to vertically integrate in a manner similar to Apple, they could have been a contender.
Well there's your reason why Google has the expertise to build Fuchsia. Most of the BeOS guys were hired there to work on Android and now they are doing it again with Fuchsia.
We will all come back to this comment in 10 years to find ourselves running Fuchsia on our phones, tablets and our new Pixelbooks.
Some trivia: there was an unofficial successor of BeOS called ZETA, from a German company, yellowTAB (later magnussoft).
I remember this because they tried selling it via a home-shopping channel on German TV, which was completely hilarious.
For some weird reason I still have two (?) copies of yellowTAB Zeta on my bookshelf, next to my boxed copy of Red Hat Linux 7.3. Amazingly Zeta got more use than Linux, because it was much easier to get everything working on my machine.
I just checked the discs and it's the same "Deluxe Edition RC2" as in that home shopping video. I think my copies were bought as special advance preview builds for third-party Zeta developers, though. And I don't think I paid as much as 100EUR...
I enjoy the regular reminiscing about BeOS, but for all the talk about how fast it was on hardware common at the time, I wonder why nobody remembers an even more impressive "tech demo" of an OS from that same period - the QNX 6 desktop. An ISO of the evaluation edition of 6.2 was easily downloadable for a while, and it was pretty neat.
I had a Passport for a while. A lovely phone, and the OS had an Android runtime so _some_ Android apps worked... but like other OSes that let you run non-native apps, it resulted in BB10 never attaining a critical mass of apps.
I remember seeing a BeBox under someone's desk at the video game company I was interning at in 1997. I nearly lost my shit. When Be released an Intel-compatible build while I was at Santa Clara in 1998, I installed it onto one of the lab computers. Sorry about that, IT team.
I ran BeOS back in the day (even have the developer book!) and I've been trying Haiku on and off over the years.
It's been interesting. The browser isn't quite all there yet but might be considered serviceable, and you can sort of get a working dev environment going on it (not many modern runtimes, though, nor a way to run Linux binaries that would let me do Clojure).
It's certainly worth keeping an eye on, although there were some weird decisions - for instance, I remember a thread on ARM support where whoever was tackling it was completely dissing the Raspberry Pi, and yet today, if I were to install it permanently on any machine to tinker with, it would almost certainly be a Pi...
> if I were to install it permanently on any machine to tinker with, it would almost certainly be a Pi
The pi is such a great device for this. If I were working on any niche operating system (or building one from scratch), I'd target qemu first, and the pi second. It may not be the nicest hardware out there, but it is a single platform that loads of people have, that allows trivial disk swapping (upside of no onboard flash -> everything on swappable SD card), and is dirt cheap.
Last time I tried Java was two years ago. A few months back I installed it on KVM to check out the state of WebPositive, but didn’t actually get to try coding anything since the browser couldn’t load my webmail.
The way I see it, the two are related-
Microsoft stopped it from having much of a chance as an independent OS. That meant the only way we'd get it was through a major vendor like Apple acquiring it.
Apple saw this, so it didn't want to pay as much as it might have if BeOS were selling well on its own.
So Apple ended up going with NeXT "instead of plan Be" as it were.
Yep. OEMs were not allowed to sell dual-boot systems if they wanted to keep buying cheap MS licenses. I think they lost that lawsuit, but it was long after BeOS closed up their commercial operations.
Didn't Be publicly offer a free BeOS license to the first OEM willing to ship a dual-boot system?
When there were no takers for a free way to have your hardware stand out from the pack, it was hard to imagine a reason for every single OEM in a very crowded field to back away that didn't involve antitrust shenanigans.
I love how everyone blames Microsoft for what was really OEMs racing to the bottom.
OEMs had an option: yes, they would have had to pay more for licenses, but no one pointed a gun at them or paid them mafia-style visits to forbid them from selling other systems.
Microsoft, the incumbent, offered OEMs cheaper Windows licenses as long as there was no dual boot, or a windows license was paid for every machine sold regardless of what it ran.
Under these terms, it was suicide for OEMs to offer anything except “only Windows”. I was actually working for another company whose software was killed by Microsoft by a similar “or else” threat from Microsoft at the time.
Microsoft was a bully doing (likely) illegal things, except the FTC wasn’t doing its job.
More SKUs cost more in manufacturing and QA, and choice paralysis may drive away customers. All for a relatively niche offering that might piss off the supplier of their biggest product. Little upside. It's even a bit surprising Dell sells Linux laptops, considering.
It'd be business suicide to charge more just so they could offer an obscure OS as an option; in my mind, this is the kind of clear anticompetitive situation it'd be good to use the law to avoid. Instead the DOJ went after MS for bundling a web browser. So dumb.
Was it just a strategic misstep for Be not to sue at the time, when it would have mattered? Why did the DOJ care about MS bundling a web browser with its OS but not care about MS preventing OEMs from selling any other OS than Windows? That seems wildly, clearly more anticompetitive to me!
And why does no one care about this stuff anymore? Not only does iOS bundle a browser, you cannot install any other browser (you can install alternative UI frontends for WebKit, but you can't ship your own JS/HTML etc. engine). Not to mention that they can veto any software on the platform for any reason...
Jobs was able to squeeze Microsoft. In a way I don't think any one else would or could.
Referring to Microsoft outright stealing Apple's IP and Gates' subsequent settlement. The $150m investment. The commitment to maintain Office on Mac. Forever license to all of Microsoft's API, enabling Mail.app to have pretty great integration with Exchange, for example.
BeOS was probably the better tech. But Jobs had the better strategies and the better team.
Compare this to Jonathan Schwartz's contemporaneous mismanagement of Sun's amazing IP, snatching defeat from the jaws of victory. Schwartz just wasn't a bare-knuckled brawler like Jobs.
Schwartz was an odd duck in charge of another odd duck. His writing was a good read, but it was a little odd to see such antiestablishment talk from an establishment player. And in the near term, his own establishment suffered the most.
I used the hell out of Java, but most of their other tech was in the weird quadrant of cool stuff that I won’t use. Something about their fit and finish always left me cold. Or if not that, price point.
Who knows. Maybe Be also had other talented people working there, with good vision for products, who just needed the room and market that Apple could provide. We will never know.
Maybe! It's more than just having the talent tho — you need to have a talented leader. Jony Ive was at Apple for 5 years before Jobs showed up, without too much to show for it.
If Apple had failed, I wonder what Steve would have done after Pixar. Start a new computer company? Could he (or perhaps a better question: would he) have swooped in with Mac compatibility mode?
The real win was a mobile future sans Microsoft. I doubt smart phones would be what they are today if not for the iPhone, and the iPhone required huge resources and a maniac cracking the whip. Would Jony Ive leave a failing Apple and work with Steve? If not, would we even have the iPod let alone the iPhone?
On the plus side, Nokia might have still been around.
It was supposed to prove that BeOS was a good idea. A problem they ran into with the BeBox, and again later, is that Be had lots of _fans_ rather than developers.
Fans think they're helping. If there's a hardware product then as a fan I should buy it right? Well, no, that's a subsidised development toolkit. Every unit that ships is VC sunk in the hope you'll write like Photoshop or Mosaic or 1-2-3 or something and create a market for this new system. When you instead compile two demo apps, call the support line to ask how to plug a mouse in, and tell all your friends to buy one as well, you are in fact not helping.
This is also because Be's original plan says competing with Windows is suicide (it was). They're going to build a system which is NOT a competitor for Windows, it's a whole separate environment. Maybe you have a Windows PC to do billing and surf that new Web, but in your studio you have a BeOS system and it doesn't matter that it can't print because that's not what it's for.
Be shouldn't have made it to the turn of the century, nowhere close. In 1998 the investors should have said we're sorry, there isn't money for this to continue Jean-Louis, better luck next time, and turn off the lights. BeOS gets sold off to whoever is in the market for pennies. But they got "lucky" they were in the right place at the right time. In 1999 you could raise an IPO for a dog turd so long as the offering paperwork said "Internet" on it and you were based on the US West Coast. The institutions got out, those fans who'd squandered the company's money and opportunity years earlier got in to replace them, and they all lost their shirts. In the process Be Inc. bought itself an extra couple of years runway.
Apple began to re-kick ass when they made an amazing laptop (tiBook). This was the first truly viable Unix laptop, with promise to be a cut above the competition (there was none) in terms of style and usability.
tiBook was bonkers. Suddenly, I didn't really need to sit in the room, surrounded by boxes, but rather could go to the park and access the room from under a tree.
If Be had managed to capture that, I think it would have been an amazing time. Imagine if SGI had pulled off the first titanium, unibody laptop, designed specifically for 3D. Would Alien-ware?
If there had been a BeLaptop, things might have been a lot different in 2000/2001, when things started to look very, very interesting.
I mean, the fact that Apple shipped a Unix laptop when all the other 'super-' vendors were unable to pull it off...
Agreed. One small pedantic nit. TiBook was not unibody. They did not machine the frame from one giant chunk of titanium. I had one and loved it also, but it was made from a bunch of parts. Fantastic laptop. Still have mine.
At that time, Linux on laptops was an absolutely great choice.
None of your vendors were considered viable - far, far too expensive. However, Linux on x86 laptops at the time was amazing - portable Linux.
So when Apple joined that party, and made the hardware neat, it was an easy switch. Portable, PPC-based, Unix. This was a delightful moment.
Of course, I still have a Linux laptop around, and 20 years later .. I consider moving back to it, 100%. The ride has been good with Apple, but the horizon doesn't look too great ..
Linux has always worked great if you choose your hardware wisely, and for a long time in the 90's it was perfectly viable to put a Linux machine against an SGI, Sun or DEC system as a workstation. Really, Linux had traction even before the 21st century cloud and 'droid reality came along.
For most of the 90's I was using Linux in some capacity, professionally as well as personally.
I also had a Linux laptop (Winbook, Sony, and then Sceptre..) on which I did a lot of development and for which my carefully selected hardware did in fact work, just fine - it was certainly viable as a dev platform, and for us Unix programmers at the time, a small and light Linux laptop was far more preferable to the pizzaboxes that had to be carried in the trunk .. or more, bigger iron that you would stay at the office to use, instead of having at home.
The point is Linux really did okay in the 90's, in terms of providing Unix devs a way of working away from the computer room. I think this is an underappreciated mechanism behind Linux' success over the years .. you could carry Linux with you pretty easily, but this was not the case for Solaris, Irix, etc.
I tend to agree. If they had targeted Mac or PC hardware (different at the time!) and perhaps built some nifty gizmos as add-in cards (LED blinkenlights array, GeekPort) they would have saved serious cash.
That was the time period when ordinary users could be persuaded to pay money for their operating systems and major upgrades.
I remember the Palm acquisition. At that time Palm had invested heavily in webOS [1] which wasn't as performant. When they acquired BeOS, I thought it was going to be a turn around. But that didn't happen either.
First of all, Be was purchased by Palm several years before webOS ever came out. WebOS didn't even exist in the same company. After Palm split into PalmSource (software) and palmOne (hardware), the BeOS stuff (alongside the old PalmOS stuff) went with PalmSource. Later, palmOne bought the rights to the full Palm name from PalmSource, became Palm again, and that Palm came out with webOS.
What was truly revolutionary in BeOS was the multi-threaded user interface, where you could have multiple mice connected to the same machine and they could interact with the UI at the same time. Hardly anyone paid any attention to this, but the possibilities are amazing.
We had one of the two-processor BeBoxes at my uni computer club – it was a really cool machine to play around on, and at the time I was using it (1999) it was one of our few graphical terminals with a web browser, so it was in quite a bit of demand.
We also had a NeXT tower, three SGI machines running IRIX, and a Mac on which someone had installed Mac OS X Server 1.0 (NeXT innards, with a UI that looked like classic MacOS).
I kind of miss the diversity of systems we had back then. In many ways we've gone forward – tinkering is much easier now with the preponderance of cheap, fast dev boards and systems like the Raspberry Pi, but it does feel like actual user-facing stuff is now largely locked down, without as much innovation and competition.
It's sad that Haiku won't get traction. A viable alternative in the desktop PC OS market would be a great thing to have (even on a commercial basis, not necessarily for free). Linux for the 1% of geeks, MacOS X for Apple hardware owners, and Windows for everybody else does not seem like healthy competition.
Android, iOS, Chrome, Facebook - everything is monopolized nowadays. Governments should really consider supporting alternative OSes, browsers and social networks for the sake of national security, as the monopolies enjoy too much power over the whole of humanity nowadays.
Unfortunately the way the PC market is makes it basically impossible for a new desktop OS to show up at this point. The hurdle in drivers alone would be insane, and the easy solution to that is to only target a small set of hardware which effectively makes you a hardware company (kinda like Apple), but that probably won't work either since people are a lot less likely to try out your OS if they need to buy a new computer to do so.
> ...the easy solution to that is to only target a small set of hardware which effectively makes you a hardware company (kinda like Apple)
With that approach, perhaps the only companies able to do this are Apple, Microsoft and Google. Every other OS aiming to become a new desktop OS has essentially failed, including the Linux desktop community and everyone else.
The desktop OS market only has room for two, or at best three, players, and entering it requires at least a billion-dollar (in profit) tech company with the resources to plan for years to pull it off. Hence the fierce competition in this space; some may ask themselves why bother.
> but that probably won't work either since people are a lot less likely to try out your OS if they need to buy a new computer to do so.
The exception to that is Google with ChromeOS (Laptops) and Android (Mobile) which they're probably both replacing with Fuchsia. I'd expect Fuchsia running pixelbooks and phones to arrive in this decade. Which will make the desktop market as Windows, macOS and Fuchsia OS.
Perhaps one should build their new OS around a compatibility layer to support Windows drivers. I'm hardly a systems programmer and don't know how much sense that would make, but AFAIK it is possible. E.g. I remember using Windows WiFi NIC drivers on Kubuntu a decade ago.
> Apple’s decision to go with NeXT and Jobs was doubly perilous for us. Not only would we not be the next MacOS, Jobs immediately walked third party Mac hardware makers to their graves. No more Mac clones for BeOS. With tepid BeBox sales and no future on the Mac, Valley VCs weren’t keen for another round of funding — and the 1995 round was running out.
I was really taken with BeOS when it was a live product. However, NeXTSTEP really was a much, much better basis for taking Apple into the future than BeOS, as has been proven resoundingly, first by the seamless switch from PowerPC to Intel, and then by the ongoing smoothness and general industry acceptance of OS X.
I know it's not universally loved, but OS X/NeXTSTEP for me is really everything I could have wanted from an operating system.
When I went to college in 1999, a kid down the hall from me pushed me really hard to install Be. It looked really cool when he showed it to me...
... But no programs that I wanted ran on it! As cool as BeOS was, without programs, it was little more than a demo or a hobby.
Within a year I tried Windows NT and Windows 2000, and then forgot all about BeOS. Windows 2000 did everything that BeOS did for me, didn't crash, and ran all the programs I wanted to.
It's quite a coincidence this is surfacing right now. There must be something in the air. I was an enthusiastic BeOS user back when it was a thing (ironically, I switched to it because I thought NEXTSTEP didn't have much of a future left), and I used it for a few years, quite happily. It left me with a legacy of several BeBoxes, and over the holiday period I was vaguely inspired to dust them off and wonder what to do with them. I got one of them booting perfectly; here's a link I posted to reddit showing its first POST
I remember using one at NTH in Trondheim in 1995-96 and it was awesome compared to the Silicon Graphics stations or PCs. It just felt insanely smooth and quick compared to the clunkiness of the other machines. I wish it had taken off; it would probably have gotten us to multi-core machines faster.
He was right. Without developers for a new OS, you are dead in the water. Which is why MS has fought so hard to keep compatibility.
Apple had to switch OS. That meant it had to persuade all its devs to switch OS. That meant it had to offer the devs something very special, and that something was NeXTstep and Interface Builder. NeXT's dev tools were the best in the software business and _that_ offered trad Mac devs a good enough reason to come across.
Be had nothing like that.
BeOS was wonderful, but as a replacement for MacOS it was no substitute for NeXTstep.
But there was another company out there.
BeOS was a natively multiprocessor OS, back when that was very rare. One reason it was rare is that in fast x86 computers, the x86 chip is one of the most expensive single components in the machine, and it puts out most of the heat.
Especially at the end of the 1990s and early 2000s, the era of big fat Slot 2 Pentium IIIs and worse still Pentium 4s.
But there was one company making powerful desktops with the cheapest, coolest-running CPUs in the world, where making a dual-processor machine _in 1998_ was barely more expensive than making a uniprocessor one.
That company's CPUs are the best-selling CPUs ever designed and outsell all x86 chips put together (Intel + AMD + Via etc.) by well over 10 to 1.
And it needed a lightweight, SMP-capable OS very badly, right at the time Be was porting BeOS from PowerPC to x86...
That was actually a fairly awesome OS (for its day). Many folks thought Apple would buy it. When they purchased NeXT, instead, a lot of us were disappointed.
However, all these years later, I'm very glad that BeOS wasn't selected.
Mostly because of the UNIX subsystem. Having all that horsepower under the hood has been awesome.
Also, NeXTStep became Cocoa, which knocked PowerPlant into a cocked hat. That may not be as compelling an argument. I heard that the BeOS framework was damn nice (never tried it, myself).
Besides that, it is also very likely that Apple would have remained stagnant (or worse) if Steve Jobs had not been brought into the fold. I am not into persona worshipping, but Jobs brought some well-needed focus to a sea of overpriced beige boxes.
I was going through boxes of old CDs and DVDs the other day and throwing out a lot of crap, I found my old BeOS5 disk. Didn't throw out that one. I really enjoyed using that OS back in 2000.
I remember attending MacWorld Boston in '97, at the age of 12, and seeing a BeOS demo. I was blown away by a demonstration that consisted of a video file playing on the page of a rendered book.
If you clicked the page of the book and dragged it around, it simulated the page turning and the video deforming, without skipping a frame.
I may've only been 12, but that demo has stuck with me since.
Edit: I would love to lay eyes on that demo again if anyone has an idea of where video of it may still exist.
"The features it introduced that were brand new at the time are now ubiquitous — things such as preemptive multitasking"
Without taking anything away from actual new things introduced in BeOS, preemptive multitasking and dual CPUs were not "brand new". Computer system had been doing these for a long time before 1995, or even 1991 when BeOS was initially developed. Heck, minicomputers were doing this stuff in the 80's!
It is not the OS, not even the software... Apple would still have been a failure if it had stayed a desktop-OS-only company. And there's a reason the first thing he did was send an SOS to Microsoft and ask for an Office commitment in exchange for a roughly 5% stake in Apple. That got done.
Of course, losing a battle or even a campaign does not mean you have lost the war.
The world is better with Steve, and Microsoft is better with competition...
Now we even have Microsoft owning GitHub and Linux running on Windows.
I was a paid BeOS customer and still have copies of multiple versions in the original branded shipper envelopes. I have fond memories of using it back during my college CS days. Really loved the interface, that was quite the time when I used NT4, Solaris, Irix, and BeOS all at the same time.
Crazy timing - just this week I pulled my BeBox out of storage and fired it up. It still impresses me even now; loads of nice touches. I also got a bit of a shock when I played a MIDI file and perfectly serviceable sound was produced by the little built-in speaker.
My favorite part of BeOS was that in the control panel you could turn individual processors on and off. And it happily let you shoot yourself in the foot and turn off the last processor with no warning whatsoever.
Now, I had read somewhere that Apple would have purchased BeOS, but JLG was pushing for too much money. JLG basically fucked up the deal. I don't have a source... it was years ago that I read it.
Had JLG not fucked up the deal, they would have picked up BeOS and not NeXT.
BeOS was cool and all but I loved the BeBox with the GeekPort.
When I worked at a dot-com 1.0 failed startup in 1999 one of our big wigs was a former exec at Be, and he was like the coolest dude I knew at the time. Still up there.
As someone who's interested in platform/OS interface and UX design, is using modern Haiku (in a VM) a decent parallel for what using BeOS was like, or would I need to get ahold of the real thing?
I've played with both BeOS and Haiku, and I would say Haiku is practically the same UX-wise. They have built more tooling on Haiku, like a package manager, whereas as far as I remember BeOS was more Windows-like (download file, install).
> In 1990, Jean-Louis Gassée, who replaced Jobs in Apple as the head of Macintosh development, was also fired from the company. He then also formed his own computer company with the help of another ex-Apple employee, Steve Sakoman. They called it Be Inc, and their goal was to create a more modern operating system from scratch based on the object-oriented design of C++, using proprietary hardware that could allow for greater media capabilities unseen in personal computers at the time.
Even IBM with OS/2 couldn't surmount the juggernaut network effect of Windows. By 1990, this was apparent to many. It's odd that Gassée and company thought they could succeed where IBM had failed.
I saw BeOS as a spiritual successor to the Amiga OS, where it might be used as a turnkey media creation system. I think Apple was the Microsoft of this market, and had enough user/developer inertia to make inroads impossible. They sold a depressing 1800 BeBoxes.
IBM also failed to lock down the IBM PC hardware standard. They tried, with the PS/2 and the Micro Channel Architecture, but it turns out clone makers were more interested in standardizing and improving the existing non-standard (retconned into being the Industry Standard Architecture, ISA) into EISA than in signing on to perpetually license MCA from IBM.
And Jobs was celebrated for cracking 10% market share. It’s hindsight now for sure but it turns out you can do pretty well without a majority stake.
The problem I think was that Microsoft competed hardest in the enterprise space, which was supposed to be IBM’s roost. Apple went after creatives. It’s not clear what was left for a hypothetical third place contender.
It's one of Microsoft's greatest business achievements to have won against the OS/2 and BeOS competition, and one of its biggest business failures to have not won against Google's Android.
Unlike Erlang which copies messages, and Pony which uses reference capabilities, BeAPI used a pragmatic approach where BMessages shared a kernel memory pool. Add the ability to filter messages and dynamically retarget messages, and you get a PRACTICAL (vs academic) Actor model. I wouldn't say it was half-hearted and primitive, on the contrary.
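For readers who never used the Be API, here's a rough sketch (Haiku-flavoured; GreeterLooper and kMsgGreet are my own names) of the message-passing style being described: a BLooper owns a thread and a message queue, and other threads talk to it by posting BMessages identified by a 'what' code, rather than by sharing state directly.

    // Sketch only - treat API details as approximate.
    #include <Looper.h>
    #include <Message.h>
    #include <OS.h>
    #include <cstdio>

    const uint32 kMsgGreet = 'gret';    // 'what' codes are conventionally four-char constants

    class GreeterLooper : public BLooper {
    public:
        virtual void MessageReceived(BMessage* msg)
        {
            switch (msg->what) {
                case kMsgGreet: {
                    const char* who;
                    if (msg->FindString("who", &who) == B_OK)
                        printf("hello, %s (handled in the looper's own thread)\n", who);
                    break;
                }
                default:
                    BLooper::MessageReceived(msg);
            }
        }
    };

    int main()
    {
        GreeterLooper* looper = new GreeterLooper();
        looper->Run();                   // spawns the looper's thread

        BMessage msg(kMsgGreet);
        msg.AddString("who", "BeOS");    // payload rides along with the message
        looper->PostMessage(&msg);       // asynchronous; no state shared directly

        snooze(100000);                  // crude: give the looper a moment to handle it
        looper->Lock();
        looper->Quit();                  // Quit() tears the looper down and deletes it
        return 0;
    }

Filtering and retargeting were layered on top of this (BMessageFilter, BMessenger), which is what the comment above means by a practical actor model.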
Erlang isn't exactly academic, or impractical. And the BMessages themselves did not share the kernel memory pool; it was the thread system (or BLooper, if you will), but only some of the primitives. In fact, BeOS was kind of split-brained: you could send a message to a thread either through the kernel send command (which had a limited queue length) or through one which used local shared memory (which effectively did not). That is, of course, rather dangerous, since there are no protections on that memory (it being C++).
I don't miss the segfaults that I experienced time after time and strange stateful consistency errors from when I was programming in the BeOS, and I don't miss biting my knuckles when having to recast data as (void*) in order to send it to another thread, now that I'm programming in ERTS. I do however strongly appreciate the experience, because it was a hell of a lot of fun, and, importantly, I learned that you can't use a buffered matrix mult to save on memory allocations, because you don't know that your preemptive kernel won't kick you out midway through the matrix mult and start filling the bins with data from the other thread.
I would say that half of the reason why I am really adept at identifying race conditions in erlang is that I've seen them all, and worse, in BeOS.
And the BeOS runtime does not have half of the resilience and fault isolation properties that Erlang has, which is what I mean by "half hearted and primitive".
Ah the memories! Had no idea Haiku was an open source implementation of BeOS. Port Electron to it, and I commit to writing tools for this OS in my spare time!