With the greatest of respect, you don't seem to be paying much attention to the points I'm trying to raise. I don't know if that's down to a language barrier, me explaining things poorly, or you just being out to argue for the hell of it. But I'll bite...
> ASCII wasn't nearly as universal as you think it was.
I didn't say it was universal. It is now, obviously, but I was talking about _before_ it was even commonplace.
> And a byte never meant 8 bits, that's an octet.
I know. I was the one who raised the point about the differing sizes of byte. ;)
> ASCII definitely was - and is - a very useful standard, but it does not have the place in history that you assign to it.
On that we'll have to agree to disagree. But from what I do remember of early computing systems, it was a bitch working with systems that weren't ASCII compatible. So I'm immensely grateful regardless of its place in history. However, your experience might differ.
> In the world of micro-computing it was generally the standard (UNIX, the PC and the various minicomputers also really helped). But its limitations were apparent very soon after its introduction and almost every manufacturer had their own uses for higher order and some control characters.
Indeed, but most of them were still ASCII compatible. Without ASCII there wouldn't even have been a compatible way to share text.
> Systems with 6 or 7 bits to a byte would not be 'massively incompatible' they functioned quite well with their own software and data encodings. That those were non-standard didn't matter much until you tried to import data from another computer or export data to another computer made by a different manufacturer.
That's self-contradictory. You just argued that differing byte sizes wouldn't make systems incompatible because they worked fine on their own, only that they wouldn't be compatible with other systems. The latter is literally the definition of "incompatible".
> Initially, manufacturers would use this as a kind of lock-in mechanism, but eventually they realized standardization was useful.
It wasn't really much to do with lock-in mechanisms - or at least not on the systems I used. It was just that the whole industry was pretty young, so there was a lot of experimentation going on and different engineers had differing ideas about how to build hardware / write software. Plus the internet didn't even exist back then - not even ARPANET. So sharing data wasn't something that needed to happen often. From what I recall, the biggest issues with character encodings back then were hardware related (e.g. teletypes), but the longevity of some of those computers is what led to my exposure to them.
> Finally we've reached the point where we can drop ASCII with all its warts and move to UNICODE, as much as ASCII was a 'beautiful piece of design' it was also very much English centric to the exclusion of much of the rest of the world (a neat reflection of both the balance of power and the location of the vast majority of computing infrastructure for a long time). If you lived in a non-English speaking country ASCII was something you had to work with, but probably not something that you thought of as elegant or beautiful.
I use Unicode exclusively these days. With Unicode you have the best of both worlds - ASCII support for interacting with any legacy systems (ASCII character codes are still used heavily on Linux, by the way, since the terminal is just a pseudo-teletype) while having the extended characters for international support. Though I don't agree with all of the characters that have been added to Unicode, I do agree with your point that ASCII wasn't nearly enough to meet the needs of non-English users. Given the era, though, it was still an impressive and much-needed standard.
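To make that "best of both worlds" point concrete, here's a minimal Python sketch (standard library only, and it assumes a UTF-8 locale on the terminal): ASCII bytes pass through UTF-8 unchanged, non-ASCII characters become multi-byte sequences, and the old ASCII control codes still drive the terminal.

```python
# Minimal sketch: UTF-8 is a strict superset of ASCII.

ascii_text = "Hello, world!"
unicode_text = "Héllo, wörld! Здравствуйте"

# Every ASCII character encodes to the same single byte in UTF-8,
# so a legacy ASCII consumer can read plain-English UTF-8 unchanged.
assert ascii_text.encode("utf-8") == ascii_text.encode("ascii")

# Non-ASCII characters become 2-4 byte sequences, which is where
# the international support comes from.
print(unicode_text.encode("utf-8"))

# The ASCII control codes I mentioned still do real work on Linux:
# 0x1B (ESC) starts the ANSI escape sequences the terminal understands.
print("\x1b[1mbold via an ESC sequence\x1b[0m")
```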
A side question: is there a reason you capitalise Unicode?