My company sent me to Boca Raton in 1983 to learn how to use the PC. It was a week-long class with hands-on experience configuring PCs and programming BAT files using "EDLIN".
I had PC serial #7 in the class and serial #1 was at the front of the classroom.
The instructors were very excited that they could use BAT files and were bragging that the whole class roster was entered and managed on the PC using only the tools that came with PC-DOS.
The course set me on the path of systems and programming and kept me fat and happy for the next 35 years.
First time I used an IBM PC I thought it was awful. I was used to 8-bit micros that had: colour, OS in ROM, BASIC in ROM, and sound.
You couldn't do anything with it without using at least two disks: one for DOS and the other with an application.
I thought business people liked them because they wanted boring machines on purpose.
And then came the 16-bit machines: Amiga, ST, etc. And then the Acorn Archimedes. And IBM PCs were still awful. They started to make sense to me when 386s came about.
The 386s weren't even that great for games/graphics. Most games were still 286-compatible, and if you had a 386SX you might as well have had a 286. I remember being blown away by a friend's Tandy because of the graphics and sound card it had, even though it was basically an IBM PC underneath.
The CGA graphics options on the IBM really didn't do them any favors. EGA wasn't much better, but at least you could do games like Commander Keen. I remember playing games like Prince of Persia and BattleChess, and thinking "Finally. My friends with Atari and Amiga (and even Apple) computers have had these games for years."
It really took a 486 and a VGA card to let the PC start to shine. And then once the 3D games started coming out (Doom, Descent), you didn't even need a console anymore.
> if you had a 386SX you might as well have had a 286
Agreed. My family bought a 386sx/16 at my convincing, but based on the way we wound up using the machine, one of the contemporary 286/20s would have been a better choice. Not only was our 386sx relatively slow, the 386-specific features didn't really benefit our use case all that much and added overhead. (We bought it with the idea of running multiple DOS apps via DesqView/386, but never did... then Windows 3.0 breathed new life into 286 protected mode.)
> The CGA graphics options on the IBM really didn't do them any favors.
CGA was a terrible, terrible set of trade-offs. The high-resolution mode wasn't in color, and the color graphics mode was restricted to two fixed palettes of marginal utility. The Tandy 1000's 32K of video memory and 16-color 320x200 graphics were a huge improvement. (So was Hercules.)
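For a sense of why that 32K mattered, here's a quick back-of-the-envelope calculation (the byte counts are mine, not from the comment above):

    # 320x200 at CGA's 2 bits per pixel fits in CGA's 16 KB of video RAM,
    # while 16 colors (4 bits per pixel) at the same resolution needs the
    # Tandy 1000's 32 KB.
    pixels = 320 * 200
    cga_bytes = pixels * 2 // 8      # 16,000 bytes -> fits in 16 KB
    tandy_bytes = pixels * 4 // 8    # 32,000 bytes -> needs 32 KB
    print(cga_bytes, tandy_bytes)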
Even worse were things like the Cyrix 486SLC(?): they were cheaply made systems, and I think some of them even ran on 286 motherboards. A friend had one and it was every bit as slow as a 16 MHz 386.
Yeah... the thing I didn't mention about my family's old 386sx/16 was that it was similar to what you describe.
That machine was an ALR PowerFlex, which came out of the box as a 286/12. However, it also had a special CPU upgrade slot that allowed installation of higher-end CPUs on daughterboards. Our machine had a 386sx/16 on a daughterboard, but ALR let you install CPUs as fast as a 486DX/25 into what was fundamentally a 286 box. If you didn't mind your state-of-the-art 486 being limited to 5MB of 16-bit memory, it wasn't a bad choice. (This machine started my general thinking that upgradable hardware is overrated as a long-term plan to keep equipment viable.)
Earlier than that, Cheetah made something reminiscent of this called the Adapter/386. It was a cheap way to install a 386 into a 286 machine. This gave you the ability to run 80386-specific software, but actually wound up making the machine a little slower. My guess is that they didn't sell many, and the primary takers were developers who wanted a cheap way to develop 80386-specific software without buying a new machine.
Correction: a 386 with an SVGA card.
I played Doom, Descent, Wolfenstein 3D, Dune, Dune 2, Indiana Jones and the Fate of Atlantis, Warcraft II, SimCity 2000 and Hexen on a simple AMD 386@40 with 4 MiB of RAM and a 1 MiB OAK SVGA card that my father built himself.
The IBM PC came with BASIC in ROM as well as the ability to load and save programs to tape. It was also available with color graphics.
...Of course, most were sold with monochrome graphics, and the clones cut out the built-in BASIC (because it was IBM's code) as well as the tape routines (because floppies were so prevalent).
80 column support was key. More or less CP/M compatibility was also key.
One thing the 8088 PC had compared with the 8-bits was rudimentary memory management in the form of the segment registers. This decoupled the OS and BIOS from application code in a way that was not possible on previous 8-bits. People forget that in the world before memory management, if you wanted to change the OS your only option was to do something like a "sysgen", a horribly slow process where you re-linked the OS and all application code. IBM's OS/360 and DEC's RSX-11 had this concept.
People hated segment registers, but the only other alternative was position-independent code, like on the 6809. I would argue that this is worse, because it slows execution.
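For anyone who never wrote real-mode code, a small Python sketch of the segment:offset arithmetic being described (the segment values below are arbitrary, just for illustration):

    # 8086 real mode: physical address = segment * 16 + offset. A program's
    # internal offsets never change; DOS just picks a free segment at load
    # time, so no relocation or position-independent code is needed.
    def physical(segment: int, offset: int) -> int:
        return (segment << 4) + offset

    # The same offset (say, a .COM program's entry point at 0x100) lands at
    # different physical addresses depending on where DOS loads it.
    for segment in (0x0800, 0x1234, 0x9000):
        print(f"segment {segment:04X}: offset 0100 -> {physical(segment, 0x100):05X}")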
> First time I used an IBM PC I thought it was awful.
It was a great business machine, though obviously you needed two floppy drives or, in my case, the massive 10 megabyte hard drive in the PC/XT.
You got a really great keyboard and a superb green screen, it was fast (for its day), and you had your pick of software. Lotus 1-2-3 was obviously the key program, though you needed a separate monitor for graphics. Word Perfect soon became the key word processor. Borland's Sidekick was a great utility.
You could add loads of expansion cards, and I added an extra 512K (which came as 18 chips in two plastic tubes) for only £999. This enabled the PC/XT to run Microsoft Xenix, which was the most popular Unix at the time.
I'd had an Apple II before that, with two external floppies and Apple's really crappy green screen monitor (no lower case). The PC/XT was a massive upgrade, as a business machine.
Of course, you couldn't play Choplifter with two paddles, but that wasn't a job requirement. However, you could get Microsoft Flight Simulator and, later on, DOOM and Civilization etc.
However, there was a period between the XT and/or AT when the alternatives were better. I had an Atari 720ST and a Mega 4, an Amiga 500 and a 1500, and an Acorn Archimedes. In fact, I still have some of those. It was the arrival of the 386, Windows 3 and CD-ROMs that started to finish those off.
If you had a system with only one floppy, you still had an A: and a B: drive. Any access to (virtual) B: would prompt you to insert the other floppy, and I/O would continue until the disk in A: was needed, and you'd be prompted to swap floppies again.
> Rather than talking of an open architecture, we might do better to talk of a modular architecture. The IBM would be a sort of computer erector set, a set of interchangeable components that the purchaser could snap together in whatever combination suited her needs and her pocketbook.
Has this been achieved with software components in any niche? Outside of the early IETF protocols, software tends to mutate in ways that break interoperability. Containers are making progress in this direction.
On the hardware front, http://www.opencompute.org has increased choices for those who can buy servers in large quantities. But small businesses have less choice these days. Even Dell, pioneer of build-to-order PCs, now favors pre-configured PCs.
On a positive note, it is now possible to buy some PCs without a Windows tax. Dell offers Linux and HP offers Linux/FreeDOS on select models. Is there room for PC OEM/ODMs to innovate in (free, non-bloatware) software components to increase the value of their mix-and-match hardware components?
> Has this been achieved with software components in any niche?
Isn't this what "programs" and "files" do? There used to be a time when computers were really one (or few) trick machines, but since the time when anyone could install any program on their computer and have programs work with any file (regardless of which program originally made it), software on the whole is as modular as (if not more modular than) hardware.
(which is also why I dislike the trend that iOS created with software being isolated both in terms of execution and file access, except a few preapproved holes here and there)
> There used to be a time when computers were really one (or few) trick machines
You mean the time when programming was done with plug boards?
Computers have been more or less universal since the time RAM was used to execute the programs; the fact that files are a convenient way to organize data does not detract from the fact that the internal structure of a file needs to be known if you want another program to make sense out of it.
In that sense the text file is the most enabling element here, and binary files with unknown structure the least.
The ability to load a program quickly (rather than through elaborate means such as plugboards, switches, punched cards, paper tape or magnetic reel-to-reel tape) definitely has had a lot of effect on the universal application of computers, but that doesn't mean they weren't universal machines.
And even in the time of plug boards there were tricks that would keep multiple programs active by means of switching out an entire bank or segment of configured boards for another.
Software is as modular as the hardware it runs on permits, but it can also very effectively restrict or expand that modularity; a nice example of this is virtual machines, which present a machine entirely different from the one that the original hardware embodies.
> You mean the time when programming was done with plug boards?
Yes :-), my point was that we have had this sort of open architecture for decades through the ability to run any program we like on our computers, with these programs being able to use any file they want. Of course they need to know the files' internal structure (at least for almost all binary files and many text files), but this is a protocol issue - the files enable the creation of interfaces, but what those interfaces look like is up to the programs. FWIW, over the years people have moved towards common formats - both binary and text based - that allow different programs to work together (from plain text files, to rich text, HTML, images like PNG and JPG, to Postscript and PDF).
> Computers have been more or less universal since the time RAM was used to execute the programs; the fact that files are a convenient way to organize data does not detract from the fact that the internal structure of a file needs to be known if you want another program to make sense out of it.
Have you seen AmigaOS Datatypes? It was an OS-provided interface that allowed applications to support new file formats that did not exist at the time the application was developed. New file filters could be added or swapped at the OS level, adding functionality to all apps.
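Roughly, the idea is a shared registry of format handlers that the OS owns, so a newly installed handler benefits every application at once. A loose Python sketch of that idea as an analogy - the names here are made up, not the real AmigaOS API:

    from typing import Callable, Dict, Optional, Tuple

    # System-wide table of installed "datatypes" (analogy only).
    _handlers: Dict[str, Callable[[bytes], Optional[str]]] = {}

    def register_datatype(magic: bytes, name: str, decode: Callable[[bytes], str]) -> None:
        """Install a handler, keyed by the file's magic bytes."""
        _handlers[name] = lambda data: decode(data) if data.startswith(magic) else None

    def load_any(data: bytes) -> Tuple[str, str]:
        """Applications call this without knowing the format in advance."""
        for name, handler in _handlers.items():
            result = handler(data)
            if result is not None:
                return name, result
        raise ValueError("no installed datatype understands this file")

    # Installing a handler later extends every app that uses load_any():
    register_datatype(b"%PDF", "pdf-ish", lambda d: f"{len(d)} bytes of PDF-like data")
    print(load_any(b"%PDF-1.4 ..."))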
> Computers have been more or less universal since the time RAM was used to execute the programs; the fact that files are a convenient way to organize data does not detract from the fact that the internal structure of a file needs to be known if you want another program to make sense out of it.
> In that sense the text file is the most enabling element here, and binary files with unknown structure the least.
Text files haven't always been universal. There used to be different text encodings (ASCII hasn't always been universal) and even different sizes of byte (e.g. some systems would have 6 or 7 bits to a byte). And even if two machines you wanted to share data between were both 8-bit ASCII systems, there's no guarantee that they would share the same dialect of BASIC, LISP, Pascal or C.
ASCII isn't universal even today (the page you are reading is UNICODE), but all text was much more readable and in general easier to process with ad-hoc filtering or other manipulation than binary formats.
This is what underpins most of the power of UNIX, but at the same time it is something of a weak point: good and error-free text processing is actually quite hard.
> ASCII isn't universal even today (the page you are reading is UNICODE)
That's not really a fair point because Unicode and nearly all of the other extended character sets (if not all of them) still follow ASCII for the lower ranges. This also includes Windows Code Pages, ISO-8859 (which itself contains more than a dozen different character sets) and all of the Unicode character sets too.
> but all text was much more readable and in general easier to process with ad-hoc filtering or other manipulation than binary formats.
Text is still a binary format. If you have a different byte size or a significantly different base character set then you will still end up with gibberish. This is something we've come to take for granted in the "modern" era of ASCII, but back in the day "copying text files" between incompatible systems would produce more garbage than just a few badly rendered characters or the carriage-return issues you get when switching between UNIX and Windows.
So, essentially you are trying to make the point that even ASCII has its problems and that all data has to be encoded somehow before processing can be done on it. The latter seems to be self-evident and UNICODE is a response to the former.
That's not what I'm saying at all. ASCII is just a standard for how text is encoded into binary. However it hasn't been around forever, and before it there were lots of different - incompatible - standards. It was so bad that some computers would have their own proprietary character set. Some computers also didn't even have 8 bits to a byte, and since a byte was the unit for each character (albeit ASCII is technically 7-bit, but let's not get into that here), it meant systems with 6 or 7 bits to a byte would be massively incompatible as you're off by one or two bits on each character, which multiplies up with each subsequent character. This meant that text files were often fundamentally incompatible across different systems (I'm not talking about weird character rendering; I'm talking about the file looking like random binary noise).
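To make that concrete, here's what the same bytes look like under an EBCDIC code page versus an ASCII-compatible encoding (the code page choice is mine, just for illustration):

    # 'cp037' is an EBCDIC code page. Read the same bytes back under an
    # ASCII-compatible encoding and the text turns into noise, not just a
    # few oddly rendered characters.
    data = "HELLO, WORLD 1983".encode("cp037")   # EBCDIC bytes
    print(data.decode("cp037"))     # -> HELLO, WORLD 1983
    print(data.decode("latin-1"))   # -> unreadable gibberish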
ASCII changed a lot of that and did so with what was, in my opinion at least, a beautiful piece of design.
ASCII wasn't nearly as universal as you think it was.
And a byte never meant 8 bits, that's an octet.
ASCII definitely was - and is - a very useful standard, but it does not have the place in history that you assign to it.
In the world of micro-computing it was generally the standard (UNIX, the PC and the various minicomputers also really helped). But its limitations were apparent very soon after its introduction and almost every manufacturer had their own uses for higher order and some control characters.
Systems with 6 or 7 bits to a byte would not be 'massively incompatible': they functioned quite well with their own software and data encodings. That those were non-standard didn't matter much until you tried to import data from another computer or export data to another computer made by a different manufacturer.
Initially, manufacturers would use this as a kind of lock-in mechanism, but eventually they realized standardization was useful.
Even today such lock-in is still very much present in the world of text processing: in spite of all the attempts at making characters portable across programs running on the same system and between various systems, formatting and special characters are easily lost in translation if you're not extra careful.
Ironically, the only thing you can - even today - rely on is ASCII7.
Finally we've reached the point where we can drop ASCII with all its warts and move to UNICODE. As much as ASCII was a 'beautiful piece of design', it was also very much English-centric, to the exclusion of much of the rest of the world (a neat reflection of both the balance of power and the location of the vast majority of computing infrastructure for a long time). If you lived in a non-English-speaking country, ASCII was something you had to work with, but probably not something that you thought of as elegant or beautiful.
With the greatest of respect, you don't seem to be paying much attention to the points I'm trying to raise. I don't know if that is down to a language barrier, myself explaining things poorly, or if you're just out to argue for the hell of it. But I'll bite...
> ASCII wasn't nearly as universal as you think it was.
I didn't say it was universal. It is now, obviously, but I was talking about _before_ it was even commonplace.
> And a byte never meant 8 bits, that's an octet.
I know. I was the one who raised the point about the differing sizes of byte. ;)
> ASCII definitely was - and is - a very useful standard, but it does not have the place in history that you assign to it.
On that we'll have to agree to disagree. But from what I do remember of early computing systems, it was a bitch working with systems that weren't ASCII compatible. So I'm immensely grateful regardless of its place in history. However your experience might differ.
> In the world of micro-computing it was generally the standard (UNIX, the PC and the various minicomputers also really helped). But its limitations were apparent very soon after its introduction and almost every manufacturer had their own uses for higher order and some control characters.
Indeed, but most of them were still ASCII-compatible. Without ASCII there wouldn't even have been a compatible way to share text.
> Systems with 6 or 7 bits to a byte would not be 'massively incompatible': they functioned quite well with their own software and data encodings. That those were non-standard didn't matter much until you tried to import data from another computer or export data to another computer made by a different manufacturer.
That's oxymoronic. You literally just argued that differing bits wouldn't make systems incompatible with each other because they work fine on their own; they just wouldn't be compatible with other systems. The latter is literally the definition of "incompatible".
> Initially, manufacturers would use this as a kind of lock-in mechanism, but eventually they realized standardization was useful.
It wasn't really much to do with lock-in mechanisms - or at least not on the systems I used. It was just that the whole industry was pretty young, so there was a lot of experimentation going on and different engineers with differing ideas about how to build hardware / write software. Plus the internet didn't even exist back then - not even ARPANET. So sharing data wasn't something that needed to happen commonly. From what I recall, the biggest issues with character encodings back then were hardware-related (e.g. teletypes), but the longevity of some of those computers is what led to my exposure to them.
> Finally we've reached the point where we can drop ASCII with all its warts and move to UNICODE. As much as ASCII was a 'beautiful piece of design', it was also very much English-centric, to the exclusion of much of the rest of the world (a neat reflection of both the balance of power and the location of the vast majority of computing infrastructure for a long time). If you lived in a non-English-speaking country, ASCII was something you had to work with, but probably not something that you thought of as elegant or beautiful.
I use Unicode exclusively these days. With Unicode you have the best of both worlds - ASCII support for interacting with any legacy systems (ASCII character codes are still used heavily on Linux, by the way, since the terminal is just a pseudo-teletype) while having the extended characters for international support. Though I don't agree with all of the characters that have been added to Unicode, I do agree with your point that ASCII wasn't nearly enough to meet the needs of non-English users. Given the era, though, it was still an impressive and much-needed standard.
A side question: is there a reason you capitalise Unicode?
> (which is also why I dislike the trend that iOS created with software being isolated both in terms of execution and file access, except a few preapproved holes here and there)
It seems to be the way consumer software is going though. It's a bizarre truism that we still aren't very good at sending files from one computer to another, even as we've got a lot better at specific cases like sharing photos or collaboratively editing documents. Maybe the lowest-common-denominator file model isn't good enough?
> Frankly all that is needed is a defined IPC protocol that the market sticks to.
We had one, and then we botched it. So now we've gone from HTML -> HTML+JS -> JS -> WebAssembly. These are all steps back, not forward. Soon we'll simply be using HTTP to ship application binaries that you no longer have any rights to at all; it is in many ways a situation worse than the one where everybody bought shrink-wrapped software.
And everything will be coded in free software, hidden away behind virtual server walls, without a dime or any other kind of compensation being given back to the original developers.
It's probably easier today to buy a bunch of components and snap them together to build a desktop tower than it is to put together an erector set. If I recall, the trickiest part of the whole process was putting the right amount of thermal paste on the CPU before attaching the cooler.
Software, unfortunately, seems to be going hard in the other direction. Half the time I can't even get copy/paste to work between apps on my smartphone. Just think of all the trillions of dollars that have been spent over the years trying to glue different enterprise systems together into Dr Moreauvian monstrosities.
On the software side, I'd argue that an SOA/microservice model accomplishes this: boundaries between services are defined by APIs based on open protocols and formats (HTTP, JSON, XML, GraphQL...); new functionality can be achieved by writing a new microservice that mixes and matches functionality from existing services.
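As an illustration of how small that boundary can be, here's a minimal sketch of a service speaking only open formats - the service name, port and payload are placeholders, not anything from the article:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class GreetingService(BaseHTTPRequestHandler):
        def do_GET(self):
            # The whole contract is open: HTTP for transport, JSON for data.
            payload = json.dumps({"service": "greeting", "message": "hello",
                                  "path": self.path}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        # Any other service, in any language, can now call
        # http://127.0.0.1:8000/ and compose this into its own functionality.
        HTTPServer(("127.0.0.1", 8000), GreetingService).serve_forever()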
Regarding hardware: There are still plenty of companies selling build-to-order PCs, and sourcing parts is extremely easy. In fact, I'd argue that in some segments (such as gaming), DIY builds are much more common than they were 20 years ago. Of course, these machines aren't truly "open" for every definition of the word, but that has never been the case for the modern PC market.
>"In September of 1975 the company introduced the IBM 5100, their first “portable” computer. (“Portable” meant that it weighed just 55 pounds and you could buy a special travel case to lug it around in.)"
Six years later, the "luggable" from Compaq became a real thorn in IBM's side. If you enjoy this type of computer history I highly recommend watching "Silicon Cowboys", a documentary about the rise of Compaq but also a bit about the history of the IBM PC and "clones". It's pretty fascinating and it's available on Netflix as well as free on YouTube:
How far back do you want to go with that? I remember assembling computers out of video boards, motherboards, etc.... but there are some people who remember soldering chips together... or transistors... or tubes... or relays. The trend towards higher levels of integration in computing extends back to the '50s (or maybe the '40s).
> Computing was more fun
Fun is where you choose to find it. Modern machines are so good that they can be fun too.
Interesting - the 8-inch floppies look similar to the ones used in the IBM Displaywriter: hard-sectored, and they made a distinctive "graunching" sound when the drives were accessed.