Will Unix become the next MS-DOS? (1985) [video] (youtube.com)
68 points by nfriedly on Aug 16, 2015 | hide | past | favorite | 53 comments



Everyone is saying "Nope". A couple of events happened in the US in 2013/2014: the number of cell phones surpassed the number of computers, and the number of smartphones surpassed the number of basic phones. Both iOS and Android are Unix derivatives, so I'd say Unix is winning the post-PC era.


Except that both lack a UNIX userland, aren't POSIX compliant, and have a sandboxed design.


... and ridiculous hard-coded limits, like 128 shared objects per process:

http://androidxref.com/4.2_r1/xref/bionic/linker/linker.cpp#...

This hilarious memset(3) bug was shipping for years:

https://android-review.googlesource.com/#patch,sidebyside,14...

Well, that's what you get when you NIH your libc...


A similar memset bug happened in glibc too.


Android is at least partially POSIX compliant, to a larger degree than Linux+glibc was for its first decade.


The major theme of the episode is that the important part of Unix adoption is the applications users actually use. Of course, in the context of 1985, POSIX compliance is an anachronism anyway; the video was recorded just before the Unix Wars [1].

[1]: http://catb.org/~esr/writings/taoup/html/ch02s01.html#id2880...


You can install your own glibc and run a full Linux userland on top of an Android kernel.
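A rough sketch of how that's typically done, assuming a rooted device with busybox and a rootfs image prepared elsewhere (e.g. with debootstrap); the paths and image name here are just examples:

  # mount the rootfs image and the pseudo-filesystems, then chroot into it
  mount -o loop /sdcard/debian.img /data/local/debian
  mount -t proc proc /data/local/debian/proc
  mount -o bind /dev /data/local/debian/dev
  chroot /data/local/debian /bin/bash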


I can also run a full UNIX userland on Windows, z/OS or OS/400; that doesn't make them UNIX.

And I doubt IPC will work anyway, given that the Android kernel doesn't do UNIX IPC.


Well, and this is not even mentioning that the 'cloud' is overwhelmingly Unix (well, Linux, so Unix with a small u).

In terms of installed operating systems and the applications people interact with, Unix or Unix-alike systems win. I don't care if the desktops they use are Windows, or if the phones they use (which usually have a Unix/Linux kernel) don't expose a shell prompt -- the applications they run get the largest part of their functionality from server daemons running on *nix systems.


Which is quite sad, because now we are kind of stuck in OS design, with only Microsoft, Apple and a few universities pushing OS research further.


I haven't seen Apple do anything interesting w.r.t. OS research. Microsoft Research had Singularity, WinFS and some other efforts.

There are definitely some Unixes still trying to do something different, like DragonFly BSD, MINIX 3, Genode and so forth.

Most of it is academic efforts, as always.


The container model introduced with XPC and the app sandbox model.

Being a hybrid kernel, unlike the other UNIXes.

Introducing a super root model, where the traditional root becomes a mere power user.

Being a UNIX where Objective-C and now Swift have the spotlight (with a little space for C++ in IOKit), not C.


XPC is just a layer on top of Mach IPC. Not very familiar with the app sandbox model, but what exactly would be truly innovative about it?

Being a hybrid kernel barely has any significance when you don't exploit that fact, which XNU doesn't. Also, it's a hybrid of Mach, which is heavily anachronistic and clunky. DragonFly BSD, on the other hand, actually does exploit it, through the vkernel mechanism.

I have no idea what you mean by the third one. Are you saying that the root user's privileges are segmented into multiple subsystems? POSIX has had capabilities for that. They suck and they're not actual capability-based security, but they do achieve logical privilege segmentation. Plan 9 and Inferno are the only ones from the Unix-ish lineage that successfully got rid of root entirely.

You can write applications targeting any subsystem provided there are bindings, which there are. For instance, Python and Vala are heavily used in contemporary Unix-like DEs for application programming.


> XPC is just a layer on top of Mach IPC. Not very familiar with the app sandbox model, but what exactly would be truly innovative about it?

It is not part of any POSIX compliant UNIX.

It also pushes developers into a micro-kernel style of application development, which isn't common in UNIX.

> Also, it's a hybrid of Mach, which is heavily anachronistic and clunky. DragonFly BSD, on the other hand, actually does through the vkernel mechanism.

Besides FreeBSD, I admit I hardly know any other BSD.

> I have no idea what you mean by the third one. Are you saying that the root user's privileges are segmented into multiple subsystems?

In El Capitan, you can only be real root in safe mode. In normal mode there are paths (e.g. /bin, /usr/bin,...) and APIs that aren't available even to processes running as root.

> You can write applications targeting any subsystem provided there are bindings, which there are. For instance, Python and Vala are heavily used in contemporary Unix-like DEs for application programming.

There is a big difference between being a first-class language used in the vendor's SDK and IDE, and a third-party one used by some developers.


Honest question: When was the last time MS actually did something interesting in regards to OS features/architecture? I read genuinely neat little tidbits about new features in OS X, Linux, and the BSDs on a fairly regular basis, but I can't remember the last time I read about something like that in Windows. The impression that it leaves is that releases of Windows newer than, say, Vista or 7 bring little outside of general cleanup and tweaks to the user-facing parts.


If you want to see internal changes in Windows, look at the server versions, not the desktop ones. Over that time frame where you said nothing happened, Hyper-V has gone from a research project to a central part of the Windows server space. Powershell has gone from a tech preview to the recommended way to manage servers. Windows Server 2012 introduced ReFS and Storage Spaces. I'm sure Server 2016 will have a bunch of cool stuff too. The server-side Windows experience is radically different than it was when Windows 7 came out.

Why aren't you hearing about these features? Lots of reasons. Windows is closed-source, so the internal changes you hear about are the ones Microsoft wants you to hear about. Most of the shiny new toys are to support server features that you won't notice unless you're a Windows sysadmin. Most desktop users don't care about internal changes unless they break things, and Windows hasn't broken much on the internal side since Vista. Microsoft's own messaging about major changes has been targeted almost exclusively at Windows server admins.

Edit: That's not even getting into the new app development platform that shipped with 8/10, DirectX 12 adding Mantle-like APIs, first-class support for touch input, hybrid boot, booting from VHDs, etc. Microsoft has been busy over the past few years.


The research OSes: Singularity, Midori and Drawbridge.

The attempts to follow in OS/400's footsteps with a managed userspace in Windows Phone 7.

The OS architecture of Windows Phone 8.

The way they are transitioning Windows into an App Container model with the WinRT execution model.


And not to forget OS X, which still has significant market share (>5% of desktops/laptops) and is based on Unix (albeit XNU stands for "X is Not Unix"). Looking at Wikipedia, it still seems like a majority of web clients are Windows based, though [1].

[1] = https://en.wikipedia.org/wiki/Usage_share_of_operating_syste...


OSX is not "based on" Unix. OSX IS Unix: http://www.opengroup.org/openbrand/register/brand3607.htm


Is the certification from "The Open Group" relevant to anyone?

OS X 10.10 ships with a version of bash from 2007. Without something like Homebrew or MacPorts to install an updated userland, the "Unix" core of OS X is rotting away.
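For what it's worth, the usual workaround is something along these lines (a sketch assuming Homebrew is installed; MacPorts is analogous):

  $ /bin/bash --version           # the bundled bash, still a 3.2.x release
  $ brew install bash             # installs a current bash under /usr/local
  $ echo /usr/local/bin/bash | sudo tee -a /etc/shells
  $ chsh -s /usr/local/bin/bash   # make it the login shell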


It just means that the operating system ships with the properly-licensed UNIX software binaries and libraries, rather than the GNU free software replacements for said software. If you look in the man pages for any shell command OS X ships with, you'll see it's from the "BSD General Commands Manual"; on Linux systems, this originates from GNU. Your example of bash is actually not part of that distribution; it's something additional OS X ships with just for userland purposes. I suppose the point of being a "certified UNIX distribution" is just so they can put the trademark on their website because it looks pretty.
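For example (output abridged; the exact header text may differ between releases):

  $ man ls | head -n 1
  LS(1)                     BSD General Commands Manual                    LS(1)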


Right.. and the actual BSD stuff is frozen in time as well:

  $ strings `which cp`|grep src
  $FreeBSD: src/bin/cp/utils.c,v 1.46 2005/09/05 04:36:08 csjp Exp $
  $FreeBSD: src/bin/cp/cp.c,v 1.52 2005/09/05 04:36:08 csjp Exp $

  $ strings `which ls`|grep src
  $FreeBSD: src/bin/ls/cmp.c,v 1.12 2002/06/30 05:13:54 obrien Exp $
  $FreeBSD: src/bin/ls/ls.c,v 1.66 2002/09/21 01:28:36 wollman Exp $
  $FreeBSD: src/bin/ls/print.c,v 1.57 2002/08/29 14:29:09 keramida Exp $
  $FreeBSD: src/bin/ls/util.c,v 1.38 2005/06/03 11:05:58 dd Exp $

https://svnweb.freebsd.org/base/head/bin/cp/cp.c?view=log and https://svnweb.freebsd.org/base/head/bin/ls/ls.c?view=log show various fixes and new features added since then.

  $ strings /bin/* /sbin/* /usr/bin/* /usr/sbin/*|grep ,v
paints a pretty dismal picture.

One interesting way to look at this: the first commit from FreeBSD for ls.c is:

  Added Thu May 26 06:18:55 1994 UTC (21 years, 2 months ago) by rgrimes
  Original Path: vendor/CSRG/dist/bin/ls/ls.c
  File length: 13099 byte(s)
  BSD 4.4 Lite bin Sources

The last commit Apple has is:

  Modified Fri Jun 3 11:05:58 2005 UTC (10 years, 2 months ago) by dd 
So Apple's version is almost as close in time to the original 4.4 sources as it is to the current version.


strings `which ls`|grep src shows CVS strings. FreeBSD switched over to svn a long time ago and CVS id strings are not updated anymore. But svn log ls.c on the head branch shows the last change was on 20-July-2015.
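For instance, a quick check (assuming anonymous access to the FreeBSD svn server; -l 1 limits the output to the latest revision):

  $ svn log -l 1 https://svn.freebsd.org/base/head/bin/ls/ls.c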

In any case, why keep fiddling with the source code if the program does what it is intended to do?


Here's a good example. Came up just the other day.

    $ du -hs big.log;time cat big.log  > /dev/null
    199M	big.log

    real	0m0.045s
    user	0m0.002s
    sys	0m0.043s
    $ time gtr a b < big.log > /dev/null

    real	0m0.334s
    user	0m0.182s
    sys	0m0.142s
    $ time tr a b < big.log > /dev/null

    real	0m33.105s
    user	0m31.757s
    sys	0m0.488s
    $
I don't have a FreeBSD machine around to see if it's just Apple's tr that is broken.

I'm sure it's related to Unicode bullshit, but the usual env var tricks don't seem to help.
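(For reference, the usual trick meant here is forcing the C locale so tr skips multibyte handling, e.g. the following, though as noted above it didn't seem to help in this case:)

    $ time LC_ALL=C tr a b < big.log > /dev/null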


UNIX is a spec. The point of being a "certified UNIX" is that the system conforms to SUS/POSIX, and software developed targeting those specifications and their APIs is expected to run.

One might question the usefulness of SUS/POSIX in a world where most of the *nix ecosystem targets GNU, but saying it has no purpose beyond advertising is disingenuous.


Oh, I think its purpose is to make money for The Open Group.

I tried to download and run their test suite, but it leads to a page that says "For Purchase Enquiries please contact:"

They don't care about standards, they care about selling things.


Nothing could be further from the truth. Do you think these tests were created and maintained for free? Do you think the certification process happens on its own? Do you think there is no value in any of this, that the whole Unix standard is stagnant, non-changing, and never reviewed or updated? And it all happens without cost? Do you think no one pays any attention to that?


Don't really understand your reply. Linux and GNU and {free,net,open}bsd are created and maintained "for free". Can things only have value if they cost money?

Is the Unix certification only for large companies who can afford to throw money at The Open Group in order to get certified?

According to the wikipedia page:

> By decree of The Open Group, the term "UNIX" refers more to a class of operating systems than to a specific implementation of an operating system; those operating systems which meet The Open Group's Single UNIX Specification should be able to bear the UNIX 98 or UNIX 03 trademarks today, after the operating system's vendor pays a substantial certification fee and annual trademark royalties

Right... Please tell me how this is not a meaningless certification. FreeBSD is more Unix than OS X, and would likely have no problems running the test suite, but is not certified as "real Unix" because that certification is about money.


Have you ever used a proper commercial UNIX (AIX, HP-UX, ...)?

That bash version is actually very modern compared with their out-of-the-box experience.


I used to use some Solaris boxes; that was always painful.

I knew some sysadmins liked it. There definitely was a mindset among some admins that Solaris boxes were great. You did a full install of Solaris that took up something like 8G of disk space. Then you installed Oracle. Then you never touched the machine again. As long as you never actually needed to use the machine for anything, Solaris was great.

Solid kernel... terrible userland. Still some very smart people working on ZFS/DTrace/KVM; it will be interesting to see what the next few years bring.


I always find it interesting to see these Computer Chronicles episodes with Gary Kildall (yay!) hosting. I haven't watched the full episode yet, but it's interesting since at the time his company was writing operating systems and tools that competed with both Unix and Microsoft. I watched an episode where they were comparing the Atari ST and Amiga, and I was impressed with how he never showed bias despite the operating system on the ST having been written by his company.


Gary had himself to blame for failing to see the future of the PC market. He built his company on selling tens to hundreds of copies per week of the $700 CP/M OS, used to run $700 dBase II on $10K S-100 systems. When IBM entered the market with the PC/XT he insisted on pricing himself out of it at a six-times premium over DOS, as if he didn't believe the industry would move in just a few years from hundreds to almost a hundred thousand boxes per week (1986).

As for him being impartial, I wouldn't go so far. I also remember that episode and how they showed the bouncing ball on the Atari with a stupid "see, we can do it too" comment, totally ignoring the fact that the Atari 520ST was burning >80% of its CPU just to blit that ball across the screen, while the Amiga did it all in hardware and the CPU was free to run other programs in a fully preemptive multitasking OS :)


Who cares that the ST was burning CPU to do it? In the grand scheme of things, at that point in time all the fancy custom circuitry of the Amiga got them... what... 3-4 years ahead of the curve, at the cost of a massive investment in R&D and a price premium...? By 1990 stock commodity PC graphics cards on a stock x86 machine could outcompete the Amiga, and Commodore was totally unable to pull another Jay Miner with its next chip set.

And multitasking on an 8MHz 68000 -- and with no memory protection in the slightest -- was not much of a win. We had multitasking extensions for the ST, too, and after fiddling with them for a while, the use cases were... unclear. Without an MMU it was dubious.

In reality the ST was cheap competition for the Macintosh, not the Amiga. And paired with a hi-res monitor, Atari's cheap laser printer and a copy of Calamus, it was a cutthroat-priced, quality DTP workstation. Or in a studio as a sequencer hooked up to a bank of synthesizers.

The Amiga found its niche in video applications... and games.

Anyways, this is all off topic to Gary Kildall :-)


I find myself reminded of how the PC has hosted a number of hardware battles. The "custom vs commodity" thing reminds me of when 3dfx (remember them?) started shipping. In a sense special chips, except without a specific computer wrapped around them.

Makes one wonder where the Amiga would have been had those chips been sitting at the end of a Zorro bus rather than on the mainboard.


> Commodore was totally unable to pull another Jay Miner with its next chip set.

Indeed, bearing in mind that Jay Miner had designed his previous chips for Atari 8-bit computers, when he worked there. And that Commodore actually bought the Amiga from Amiga Corp ;-)


Just imagine the alternate history where Atari hadn't disappeared up its own ass in the early 80s and all the amazing tech they were sitting on could have shipped. Terrible internal management, amazing engineering. They had dual CPU BSD workstation systems in development, the dual x86/6502 1450XL, the 128 oscillator AMY sound chip, Jay Miner/Amiga on contract for something new... Then they went tits up, Commodore had internal fighting with Tramiel, Tramiel and sons (and some ex-Commodore engineers) took the Atari name and some of its assets, and Commodore mismanaged the Amiga (and the remaining 8-bit stuff) for the next 5 years.

If all that engineering talent could have just gotten their shit together instead of fighting over the table scraps... the PC industry would have been far more interesting.


This comment got me wondering! 80% seems like something of an overestimate. But is my memory rusty?

The ball looks like it's about 128 pixels wide and 100 pixels high, so 8 words x 100 rows. Note how the screen appears to be laid out in a bitplane-minded fashion: you have a 1 plane backdrop, a 1 plane shadow, and a 2-plane bouncing ball. So, you don't have to mask in your background when writing. Load source, write dest.

You'd pre-shift your ball lots of times to make this go quickly. (The shifted copies would have the ball's rotation applied, so you'd get a two-for-one with this.) One ball = 8 words * 2 bitplanes * 100 rows = 3,200 bytes. Plenty of room, even with 512K.

So your loop for one row will go something like this, I guess. Assume A0 points to your 2-plane sprite data, arranged bitplane 0, bitplane 1, bitplane 0, bitplane 1, and so on, for 8 words, then your 1-plane shadow data, again 8 words. Assume A1 points to bitplane 0.

First draw the ball:

    MOVEM.L (A0)+,D0-D7 ; 76
    MOVE.L D0,(A1)+     ; 12
    MOVE.L D1,1*8-4(A1) ; 16
    MOVE.L D2,2*8-4(A1) ; 16
    MOVE.L D3,3*8-4(A1) ; 16
    MOVE.L D4,4*8-4(A1) ; 16
    MOVE.L D5,5*8-4(A1) ; 16
    MOVE.L D6,6*8-4(A1) ; 16
    MOVE.L D7,7*8-4(A1) ; 16
So, 200 cycles per line.

Then draw the shadow in the same sort of way into bitplane 2 (this is the reason for the (A1)+ and the -4s everywhere above - it lets you start off with (A1) here, rather than 4(A1), saving you 4 cycles).

    MOVEM.W (A0)+,D0-D7 ; 44
    MOVE.W D0,(A1)      ; 8
    MOVE.W D1,1*8(A1)   ; 12
    MOVE.W D2,2*8(A1)   ; 12
    (and so on, as above)
So, for that, 136 cycles per line.

Cycles per frame = 8,000,000/50 = 160,000; cycles per ball = (200+136)*100 = 33,600.

Taking into account that you'd have to erase the old ball and its shadow, and you'd have to advance your pointer from row to row, and whatnot - maybe 25% in total?


I have no clue about planar graphics modes, but wouldn't you need to modify all of the bitplanes when moving a sprite (the background is a grid, not a solid color)?

Right, forgot the ST can't do 640×512 4-bit, so it would have to fake it with a 4x smaller picture; that shrinks the requirements considerably. What is the ST's memory bandwidth? Something like almost 40KB per frame?

Sure, you can use pre-rendered screens; at that point you could argue the C64 could do raytracing because of this: https://www.youtube.com/watch?v=yxZ7Idi2Bi4 displayed on a real C64 with a 16MB RAM expansion, where every frame of animation is 2 frame buffers in 320x200 ~16-color NUFLI mode.


Pre-shifting and pre-rendering are two rather different things. If you could pre-render all of this, there'd be no question - you'd get 50Hz out of the ST, no problem! You'd use the hardware scrolling, and CPU usage would be something like 0.02%. But you can only fit 16 screen buffers into 512K, and that wouldn't be enough, so you need to do things properly.

Regarding the planes: you can modify each independently. Just set up your palette in the right way! For 4 planes, something like:

    0000 - light grey (background)
    0001 - purple (lines)
    0010 - dark grey (shadow)
    0011 - dark purple (shadowed line)
    0100 - white (ball)
    0101 - white
    0110 - white
    0111 - white
    1000 - red (ball)
    1001 - red
    1010 - red
    1011 - red
    1100 - pink (ball)
    1101 - pink
    1110 - pink
    1111 - pink
Now you can manipulate each plane separately, as appropriate, ignoring the rest, and you'll get the right result. Of course, this limits the number of colours you can have at once, the limit being 1 layer per plane, and then N+1 colours in total for N planes.

With the ball demo, you have 2 x 1-bit layers (2 colours each, 1 shared), and 1 x 2-bit layer (4 colours, 1 shared), for a total of 7.

All of this applies to the Amiga just as it does to the ST, because both operate using the same principles in this respect. (The Amiga's dual-playfield mode existed, I suspect, only because it was cheaper than having more palette entries! If you had 64 palette entries, there'd be no need for it.)

(As for the resolution, I think the ball demo was 320x256 pixels? - certainly seems to have been the case judging by the screen grabs I could find. So I think my calculations stand. Don't ask me why they didn't go for 640x256x4bpp, because I have no idea.)


So sad it didn't :( Mostly out of vendor greed; anything with Unix in the name was treated as $premium$.

On that note, here is Venix/86, an IBM XT/AT Unix variant, ready to run on emulators:

http://virtuallyfun.superglobalmegacorp.com/2015/08/14/ventu...


I loved seeing the name "winchester" for hard drives in those screenshots. I've been renaming the HDD on every computer I've owned since the 80's to that for kicks.


Along a similar line, here's a very interesting show from 1978 about the "rise of the microprocessor".

http://www.bbc.co.uk/iplayer/episode/p01z4rrj/horizon-197719...

Edit: Here's the show on YouTube for those that can't get iPlayer or don't want to install Flash: https://www.youtube.com/watch?v=HW5Fvk8FNOQ


A lot of its basic ideas won out, even in OSes that aren't Unix or Unix based. Everything is multiuser, multitasking and network based now.

OS X, along with iOS, which controls a large part of the mobile/tablet space. Android has its Linux roots too.

Windows NT set MS up to finally switch to a better OS and discard DOS-based Windows.

The server space today continues to be run by Unix-based OSes (Linux, BSD, etc.).

What's really interesting is that in 1985, when this video came out, NeXT was just being formed and they were building NeXTSTEP.


Unix is a user-hostile operating system! https://youtu.be/L8G1qg99Kl4?t=22m36s


Funnily enough, as it turned out, VMS became the new MS-DOS (well, kind of: http://windowsitpro.com/windows-client/windows-nt-and-vms-re... )


I like to think Windows actually was intended as a VMS parody. ;-)


VMS + 1 = WNT


I kinda like using a DOS-ish Linux install.

A "minimal" boot system that is highly user maintainable etc.

Gives a feeling of control that I kinda missed during the Windows years, where everything was hidden behind cryptic registry entries and services.


It is unfortunate MS turned OS/2 2.0 into such a fiasco. There is a reason why it is my favorite topic.


I believe that the fiasco started with IBM using LOC as a metric to pay MS. It's like telling contractors that you will pay them more the more inefficient they get.


I know. I am talking about the ending, of course.


I guess not.


Nope.



