The funny part, and perhaps the part that really damns the critics, is that IF..GOTO, GOTO, and line numbers are useful baseline concepts because they have a direct correspondence to what a computer actually does. People who went from BASIC to assembly language did not experience much cognitive dissonance compared to your average JS programmer. And decades later, computers still work roughly this way under the covers, complete with tests, jumps and memory addresses. :-)
That's an interesting observation that had never occurred to me. As a teen I got a 4k 8-bit home computer in 1981, connected it to the family TV and taught myself BASIC programming from the manuals that came with it and by typing in simple game program listings from hobbyist magazines. As I got the hang of it I started customizing, expanding and combining different games just as you did.
When I got tired of BASIC and wanted to do more, the only option was this thing called assembly language. The three-letter mnemonics were kind of cryptic at first, but I somehow got the idea of calling up Motorola, and some kind sales rep took pity on me and sent me the reference book for the 6809 CPU. The book was too advanced for me but fortunately, it came with a folding quick-reference card with a simple chart showing the registers, a listing of all the mnemonics and a sentence or two about each one. That card was my constant companion as I taught myself assembler by writing simple programs that would put graphics on the screen.
Just as you said, I never really had any conceptual problem with ideas like holding a numeric value in a register, an index pointer storing the address of a string or array in memory, or conditionally branching. All of these had fairly direct analogs in BASIC like LET, IF, GOTO, GOSUB, ARRAY, PEEK and POKE. Even primordial 8-bit ROM-based BASIC wasn't that awful. The biggest challenges were the lack of editing tools, being locked into line numbers and the cryptic two-letter error codes. I think a modern BASIC dialect with full-screen editing (no line numbers), named function calling with parameter passing and a decent debugger would still provide a reasonable introduction to computer programming.
This was enough to launch me on a successful lifelong career in high tech as a programmer, product manager, entrepreneur and eventually senior executive. Every bit of it self-taught with no formal computer education at all.
I would add the computed goto, as in something like:
GOSUB (X * 100)
That's really fast; a switch, a case statement, or a chain of if/then/else statements can work, but not as well.
There are times when a numeric value works for branching, and having this in BASIC maps directly to assembly language, where that sort of thing gets done all the time.
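To make the mapping concrete, here's a rough C sketch of what a computed GOSUB effectively turns into at the machine level: an index into a table of routine addresses rather than a chain of comparisons. The handler names and the dispatch function are mine, purely for illustration.

    #include <stdio.h>

    /* Hypothetical handlers standing in for the BASIC subroutines
       that would live at lines 100, 200, 300, ... */
    static void handler0(void) { puts("routine at 'line 100'"); }
    static void handler1(void) { puts("routine at 'line 200'"); }
    static void handler2(void) { puts("routine at 'line 300'"); }

    /* The jump table: an array of routine addresses, indexed directly. */
    static void (*const table[])(void) = { handler0, handler1, handler2 };

    static void dispatch(unsigned x)
    {
        /* One bounds check plus one indexed, indirect call -- the same
           shape as an indexed indirect jump in assembly -- instead of a
           cascade of if/else comparisons. */
        if (x < sizeof table / sizeof table[0])
            table[x]();
    }

    int main(void)
    {
        dispatch(1);  /* roughly analogous to GOSUB (X * 100) with X = 2 */
        return 0;
    }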
That's not something Amstrad's Locomotive BASIC or the C64's BASIC supports; it seems at least these two require a literal line number (probably also because otherwise how would `RENUM` work?).
To emulate that feature in a development tool for the Parallax Propeller (FlexSpin), a sequence of labels was used:
on X goto a, b, c, d, e, 10000, Fred
While not quite the same, it addresses the core need and compiles to a fast jump table.
Interestingly, that tool allows one to build programs in SPIN, assembly, C and BASIC. The developer can mix and match at will and it all compiles into an executable image.
The Propeller chip, especially the Propeller 2, is a literal embedded playground. Lots of fun.
My first exposure to the innards of a computer, from a software perspective, was a 6809-based system. It gave me the first assembly language I learned and my first “real” operating system with multitasking (Microware’s OS-9). I suppose it made an impression -- I stuck with Motorola CPUs on my personal computers up to the 68040. :-)
I got a 64k CoCo2 when I was 10 or so. The Apple II was the only other computer I had ever used, and so my worldview was that BASIC was how you interacted with a computer. My Radio Shack carried Rainbow magazine and had lots of back issues available, and they were just absolutely delicious to me as a kid with all the program listings and ads for new hardware.
What totally confused me at the time though were program listings in assembly (I couldn't figure out how you were meant to type those in) and especially the discussions of OS-9. I didn't know what it was, and even in the cases where I found a Radio Shack with the Tandy OS-9 distro, it was like $100 and didn't have any games as near as I could tell, so I couldn't figure out why you would pay so much for it. Also, I lived in a rural location where even the nearest 6809-oriented BBS would have meant expensive toll calls, so I missed the opportunity to learn that way.
Anyway, skip forward: I started college in 1993, immediately found Usenet, then Linux, and spent the next 30 years or so steeped in that world. Every so often I would go back and do a little reading on the 6809 world, but because I had never really understood most of what was going on, I didn't have a great deal of nostalgia.
Finally though, a few weeks ago I came across a link (maybe here?) to a release of the VCC emulator, and although I had played with a couple of 6809 emulators before, I hadn't really gone down the rathole of finding the MultiPak ROMs, or hard disk controller paks, etc. I found a couple of hard drive images, one from the NitrOS9 Ease of Use project, and another random NitrOS9 image packed with old software. What was particularly fascinating to me was the NitrOS9 source itself -- since my introduction to Unix was in the 486-MMU-having era, seeing what people were able to do with a 6809 and an assembler was just a joy to read and understand.
I feel like everything that has ever needed doing on a 6809 has probably been done, and I've done enough 8-bit assembly stuff in school that I don't have a powerful urge to go back and make something myself, but boy, what a feeling of having come full circle when I cd'd into the NitrOS9 source directory and found a makefile of all things! I feel so fortunate to have been able to live through a time of such explosive growth and change, and hope to get the chance to do a little OS-9 hacking when I retire some day :)
Nice. I had gotten started with a CoCo 1 with just 16KiB of RAM. That eventually got upgraded to 64KiB and Extended Color BASIC. That made it easier to copy ROM cartridges and save them to cassette tape. One trick there was to cover over or cut the trace to a pin on the cartridge that prevented it from automatically starting. That was handy for switching between BASIC and the EDTASM+ ROM.
One use of all that was to fix the CoCo's clone of the arcade video game Galaxian, called Galactic Attack[1]. The CoCo had analog joysticks, and the writer of Galactic Attack thought it would be neat to have the ship you control track the X axis of the joystick. Except it would be unfair to be able to whip the joystick from one side of the screen to the other while avoiding the enemy shots. So in Galactic Attack, the player ship lazily tracked the position of the joystick, moving slowly toward the stick. In practice this made the player ship hard (for me) to control and felt unresponsive. And it made it hard to hold still when an enemy bomb was close by.
I had already modified one of my Atari 2600 joysticks to work on the CoCo (probably a Rainbow magazine article). So what I did next was to modify the joystick routine to just create three zones (move left, dead zone, move right) for the X axis values. The game may or may not have been written with all relative branches (instead of absolute), so I might have had to fix that too.
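Roughly speaking (the cutoff values and names below are mine, not the ones from the actual patch), the replacement logic amounts to collapsing the analog reading into three zones:

    #include <stdio.h>

    /* Hypothetical sketch of a three-zone joystick mapping.
       The CoCo's analog X axis reads roughly 0..63. */
    enum move { MOVE_LEFT = -1, HOLD = 0, MOVE_RIGHT = 1 };

    static enum move map_x(unsigned raw_x)
    {
        if (raw_x < 20)
            return MOVE_LEFT;   /* stick pushed left  */
        if (raw_x > 43)
            return MOVE_RIGHT;  /* stick pushed right */
        return HOLD;            /* dead zone: hold the ship still */
    }

    int main(void)
    {
        printf("%d %d %d\n", map_x(5), map_x(32), map_x(60));  /* -1 0 1 */
        return 0;
    }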
The CoCo had a UK clone called the Dragon 32, which under the hood actually shipped with 64K of RAM. Some careful tweaking and copying of the ROM to its own spot allowed you to switch that upper 32K on, and suddenly your ROM was RAM, which meant you could extend the BASIC interpreter. Oh, and you could also double the clock with a single POKE 65495,0.
Yep, we did both of those back in the day. The bank switching was needed for ROM copies that didn't relocate as easily, or if you were modifying the BASIC interpreter itself.
Towards the end of the 1980s I upgraded to a CoCo3, with 512KiB of RAM (wow! so much), a disk drive (156KiB) and OS-9 Level 2. I also got the C compiler for OS-9 when it was on sale (discontinued).
My final setup would have the entire operating system loaded into RAM, with a RAM disk onto which I'd copy the C compiler, all displayed in a glorious 80 columns on a monochrome monitor. It was with all that that I started writing my own vi clone, but I didn't get too far.
Oh wow that was a nice bit of kit for those days. For me the route was: TRS-80 at work -> TRS-80 pocket computer -> KIM-1 -> Dragon 32 -> BBC Model B -> Atari ST -> 286 -> 386
And a bunch of homebrew in between. But a 6809 with 512K RAM would have been very nice :)
The CoCo3 ran fairly well, though it did run hot. The higher-speed variant of the 6809 in the CoCo3 was still fundamentally an 8/16-bit processor. The main upgrade compared to the older versions was a bank-switching chip. You could apparently map any 8KiB of the 512KiB to any of 8 positions in the 64KiB address space of the 6809. It was sort of like segment registers... except less flexible.
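As a rough sketch of what that implies (the names here are invented, not the real register interface): each of the eight 8 KiB slots in the 64 KiB address space is backed by one of the 64 physical 8 KiB banks, so translating a CPU address looks something like this:

    #include <stdint.h>
    #include <stdio.h>

    /* Rough model of the scheme: eight 8 KiB slots in the 6809's 64 KiB
       address space, each backed by one of the 64 physical 8 KiB banks
       in 512 KiB of RAM. Names are made up for illustration. */
    static uint8_t slot_bank[8];   /* physical bank (0..63) mapped into each slot */

    static uint32_t physical_address(uint16_t cpu_addr)
    {
        unsigned slot   = cpu_addr >> 13;       /* top 3 bits select the slot  */
        unsigned offset = cpu_addr & 0x1FFFu;   /* low 13 bits: offset in bank */
        return (uint32_t)slot_bank[slot] * 8192u + offset;
    }

    int main(void)
    {
        slot_bank[7] = 63;   /* map the last physical bank into the top slot */
        printf("0x%05X\n", (unsigned)physical_address(0xE000));  /* -> 0x7E000 */
        return 0;
    }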
I sorely wanted an Atari 520ST when they first came out, but it was more than I could afford at the time.
After graduating from university, I eventually got a Gateway 386 with a whopping 4MiB of RAM, in part to play Wing Commander and Castle Wolfenstein. I've mostly been a PC guy since, though I've dabbled in RISC-V more recently. Not ready yet to make that my main desktop though.
I've never even seen a CoCo3 in the wild here in Europe, and that's in spite of having worked for Tandy/RS. Maybe that was after the pull-out? That Dragon had a fantastic keyboard compared to the CoCo by the way, and that was one of the main reasons why I picked it.
The ST was my first 'serious' computer; with a massive amount of memory and an optional hard drive it really unlocked a whole bunch of capabilities, such as compiled languages and more RAM to work with, and I used it for all kinds of commercial projects. In many ways x86 felt like a step back after working with the ST, especially after I figured out how to add more RAM to it. It also had a whole slew of useful ports, including MIDI.
Today I use an old Thinkpad as my daily driver and it's funny: it's probably the oldest piece of hardware that I have here (a W540, 9 years old, $300 second hand including the 32G of RAM in it), but it performs admirably and uses very little power (everything is on solar here, so that matters a lot).
But I've been eyeing that RISC-V stuff as well and like you I'm still in hold mode. But it's getting closer.
Same here for 6809 and OS-9. I remember talking to friends writing 6502 assembly and comparing notes and it made me pretty happy I was working on the 6809 due to various operations and addressing modes.
The 6809 (and 6800) helped me pay for my computer habit back in the late 70s/early 80s. I wrote a program for those CPUs, "Dynamite Disassembler", which I originally wrote to reverse engineer a bunch of code in the Flex and OS-9 OSes. Cleaned it up and managed to sell enough copies, at something like $150 a pop, that I could afford to buy more computer gear while I was a mostly-broke college student.
It feels like WWCYO is still a good question to ask, abstractly, even in cases where you don't think there's a good contrary argument, or don't have the knowledge, or the belief feels more value-based.
For example, if I apply your example question to my beliefs, I don't necessarily come up with any specific answer, because I'm not super well versed in philosophy. It highlights that my belief that I exist might be on shaky ground, or it just might not be testable at all, but I'm open to being convinced otherwise.
And so that's the more abstract answer, which is: if I saw an argument that seemed rational to me, I might be convinced that I don't actually exist, but I'm sticking with what I believe for now. That's all you need -- WWCYO doesn't mean that there is a valid contrary argument, but that you're open to hearing one and changing your views. If the belief is more about experience, of course, you can get more specific about your null hypothesis.
I didn't want to put down the OP's effort, because even they admit it's an experiment, not something they recommend that anyone use, and I do think it's an interesting experiment. But I have exactly this feeling in general.
I like the idea of Rust, but I feel it has, unfortunately, been taken over by the seductive idea that abstraction is the purpose of a programming language (e.g. the C++ crowd, among others). I can say from long experience that, while it probably won't impede its popularity, this isn't real progress in programming language design. These abstractions just create their own problems to solve on top of the original problem you wanted to solve by writing a program in the first place.
Inevitably, that means that you end up having to limit yourself to some particular "idiom" or subset of the language in production, so that you can get anything constructive done without the code being inscrutable or unmaintainable or just overblown for the task that's being performed.
I knew it was over when they started debating adding more and more meta-language type system features, and then added async/await -- which is the very definition of creating a problem to solve a problem.
So, as much as I appreciate Rust, I am looking forward to a newer systems language with more discipline in its design and direction.
Go isn't perfect, but it definitely trends to the right flavor of simplicity and design discipline.
I like Rust a lot and enjoyed working with it for a year; I just wish the borrow checker were easier to deal with for structs that hold data from different sources. So I also didn't want to put down the effort, and quite frankly I might take a look.
"been taken over by the seductive idea that abstraction is the purpose of a programming language"
Most of the article is devoted to complications related to handling arrays that sit inline at the tail of existing structs, which are usually layout- and size-sensitive. Hence the two related restrictions: the count variable keeps the same name and location in the struct, and there is no explicit array head pointer (just the implicit location at which the array starts at the end of the struct).
The reasoning is most likely that a bunch of annotations on structs, and perhaps some changes to calls to kmalloc(), would be less destructive and much simpler than breaking the kernel ABI, altering the base size of any struct that used that idiom, and changing every for loop or whatnot in the kernel that uses an explicit counter member name.
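For readers who haven't run into it, the idiom in question looks roughly like this -- a generic sketch, not actual kernel code; the names and the allocation helper are made up for illustration:

    #include <stdlib.h>
    #include <string.h>

    /* Classic trailing-array idiom: the array has no explicit head pointer,
       it simply starts where the fixed part of the struct ends, and `count`
       records how many elements were allocated. The hardening work attaches
       a bounds annotation to the flexible member that refers to `count`. */
    struct item_list {
        unsigned int count;     /* number of valid elements in items[] */
        /* ... other fixed fields ... */
        int items[];            /* flexible array member at the tail */
    };

    static struct item_list *item_list_alloc(unsigned int n)
    {
        /* One allocation covers the header plus n trailing elements. */
        size_t size = sizeof(struct item_list) + n * sizeof(int);
        struct item_list *list = malloc(size);
        if (list) {
            memset(list, 0, size);
            list->count = n;
        }
        return list;
    }

    int main(void)
    {
        struct item_list *l = item_list_alloc(4);
        if (l) {
            l->items[0] = 42;   /* valid: index 0 is within count */
            free(l);
        }
        return 0;
    }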
This is, of course, an argument against a strawman. I wish the author had not mentioned certain folks by name, because the analysis is interesting on its own and makes some good points about apparent simplicity. By mentioning a specific person, and then restating that person's opinion poorly, it does a disservice to the material and drags the whole thing down.
My understanding of Jon Blow's argument is not that he is against certain classes of "safe" languages, or even formal verification. It is that software, self-evidently, does not work well -- or at least not as well as it should. And a big reason for that is indeed layers of unnecessary complexity that allow people to pretend they are being thoughtful, but serve no useful purpose in the end. The meta-reason being that there is a distinct lack of care in the industry -- that the kind of meticulousness one would associate with something like formal verification (or more visibly, UI design and performance) isn't present in most software. It is, in fact, this kind of care and dedication that he is arguing for.
His language is an attempt to express that. That said, I'm not so sure it will be successful at it. I have some reservations that are sort of similar to those of the authors of this piece -- but I do appreciate that it makes an attempt, and I think it succeeds in certain parts that I hope others borrow from (and I think some already have).
I endorse your post, which is much more thoughtful and well-argued than my knee-jerk response down in the gray-colored section below.
John isn’t right about everything: he criticizes LSP in the cited talk, and I think the jury is in that we’re living in a golden age of rich language support (largely thanks to the huge success of VSCode). I think he was wrong on that.
But the guy takes his craft seriously, he demonstrably builds high-quality software that many, many people happily pay money for, and generally knows his stuff.
Even Rust gives up some developer affordances for performance, and while it’s quite fast used properly, there are still places where you want to go to a less-safe language because you’re counting every clock. Rust strikes a good balance, and some of my favorite software is written in it, but C++ isn’t obsolete.
I think Jai is looking like kind of what golang is advertised as: a modern C, benefiting from decades of experience and a new hardware landscape. I have no idea if it’s going to work out, but it bothers me when people dismiss ambitious projects from what sounds like a fairly uninformed perspective.
HN can’t make up its mind right now: is the hero the founder of the big YC-funded company that cuts some corners? Or is it the lone contrarian self-funding ambitious software long after he stopped needing to work?
> But the guy takes his craft seriously, he demonstrably builds high-quality software that many, many people happily pay money for, and generally knows his stuff.
He has made a few good games, but how has he done anything that would paint him as a competent language designer? Frankly, Blow has done very little (up to and including being a non-asshole) that would make me terribly interested in what he's up to.
Paul Graham is on record that the best languages are built by people who intend to use them, not for others to use. FWIW, I agree.
The jury is out on Jai, but it’s clearly not a toy. John emphasizes important stuff: build times, SoA/AoS as a first-class primitive, cache-friendliness in both the L1i and L1d. And he makes pragmatic engineering trade-offs informed by modern hardware: you can afford to over-parse a bit if your grammar isn’t insane, and this helps a lot in practice on multi-core. The list goes on.
And “he made a few good games” is really dismissive. He doesn’t launch a new game every year, but every project on his admittedly short list has been a wild commercial and critical success. On budgets that a FAANG spends changing the color of some UI elements.
And that’s kind of the point right? Doing better work takes time, and there is in fact a lucrative market for high-quality stuff.
As for him being an asshole? He’s aspy and curt and convinced he’s right, which is a bad look on the rare occasions when he’s wrong.
But Brian Armstrong is on the front page doubling down on such bad treatment of his employees and shareholders that they are in public, written revolt. This may have changed since I looked, but no one is calling him an asshole.
A world in which a passionate craftsman who misses on diplomacy while discussing serious technical subject matter is an asshole, but a well-connected CEO who revokes employment offers after people have already quit their old jobs is “making the hard calls”, is basically the opposite of everything the word “hacker” stands for.
> And “he made a few good games” is really dismissive. He doesn’t launch a new game every year, but every project on his admittedly short list has been a wild commercial and critical success.
How is that dismissive? He has indeed made a few good games, but making good games doesn't certify you as a language designer any more than it makes you a good plumber or equinologist. Hollow Knight is my favorite game of all time, immensely successful both critically and commercially, and yet if Team Cherry were to release a programming language I reserve the right not to be terribly excited about it.
> But Brian Armstrong is on the front page
OK, Brian Armstrong is an asshole. I can call two people assholes. I can call more people than that assholes too, if it comes to that.
> Asshole
Because I'm dismissive of Jonathan Blow? Listen, if you want to fanboy/girl your brains out over the guy, be my guest. He just doesn't impress me all that much and I don't think "aspy and curt and convinced he's right" is anything remotely approaching an excuse for poor behavior. I've been told I'm on the autism spectrum, too, yet I manage not to act like an asshole. Though clearly you disagree.
Cherry-picking quotes from the parent and refuting them is the laziest form of argument on HN.
In this instance, it lets you blow past the few concrete examples (among the many I cited) where Jai is trying new things in the language space. It’s not hard work to learn a little about Jai. Jai may be an utter failure, but it’s not a toy or a hobby; it’s being co-designed with an interesting game engine that looks pretty hot. It’s at least as expressive as C99, compiles way faster on modern gear, and targets LLVM and x86 -- at a minimum it’s interesting.
Calling someone who does their homework a “fanboi” is A-ok, but someone else is looking for an excuse for poor behavior?
> He has made a few good games, but how has he done anything that would paint him as a competent language designer?
You can watch his Twitch streams and see what he does and how he uses the language.
He's developing at least two games using it (he also shows the development of one of them on stream), and so far it has proven to be a very strong contender for a C-like language suitable for game development. Just the fact that his entire 3-D game builds in a few seconds is definitely something to aspire to.
Not necessarily. Speed is actually very easy to come by if we push quality down to the level of "wrong answers infinitely fast", which trivially allows you to achieve as much computational performance as a broken clock. Likewise, if you write code solo or in a small team, you will almost always get a more consistent, higher-quality result than if it's written at corporate scale, because that eliminates incidental communication overheads that get reflected in the software's dependencies (e.g. Windows Terminal's low performance is mostly an artifact of Microsoft's processes).
Jon and the authors of uxn commit a common fallacy in that they're chasing a brass ring of in-the-small performance metrics, getting it in the form of particular demonstrations, and then gradually accreting features to it until, most likely, they end up in a similar position to the old tech. Many software projects start off as the "light and simple alternative" and then develop into something not light and simple. This isn't necessarily an issue for any particular project, because if you know the goal of your tech, you don't need all the features and so can omit some things to claim a definite advantage for the application; but it's not in and of itself a solution to the general issue of making computing better, because it entails bespoke effort from expert practitioners, while the general trend in computing tech is the same as most industrial automation: it's quality-first. Quality comes first when you automate, because a superhuman level of quality can redefine what's possible, and it can compensate for the downsides of not being a bespoke, artisanal result.
The actual problem faced by language authors is the difficulty of defining quality while also generalizing the problem space. New languages are mostly "old concepts, new syntax and libraries" -- still giving improvements in UX and therefore quality, but with most of the features carried over from previous languages.
> Jon and the authors of uxn commit a common fallacy in that they're chasing a brass ring of in-the-small performance metrics, getting it in the form of particular demonstrations, and then gradually accreting features to it until, most likely, they end up in a similar position to the old tech.
The authors of this post note the same tendency with Gemini [1] and demonstrate that the clients are actually quite fat, much fatter than their rationale documents claim they should be.
> Jon and the authors of uxn commit a common fallacy in that they're chasing a brass ring of in-the-small performance metrics, getting it in the form of particular demonstrations, and then gradually accreting features to it until, most likely, they end up in a similar position to the old tech
If creating full games counts as in-the-small performance for a particular demonstration, I can't even imagine what in-the-big would be.
> The actual problem faced by language authors is the difficulty of defining quality while also generalizing the problem space
As far as I can tell, Jai is exactly that. It's already dropped a few features that looked fine in theory but didn't work for actual development.
> You can watch his Twitch streams and see what he does and how he uses the language.
I can watch coworkers blaze through Common Lisp in emacs, that doesn't mean it's the Next Big Thing in developer experience and performance.
The privacy of development has made Jai less than compelling, for a lot of us I think. I'd be personally more excited if I could use it and poke around at it, rather than see someone tinker on video streams. I get why he's doing things that way, but it's just hard to feel like it's going to be meaningful for anyone but him so long as it's kept hidden away.
I appreciate that the Jai closed beta is atypical for compilers these days, and plucks some strings about the bad old days of proprietary build chains.
With that said, it’s on record that an open source release is planned, and whether or not it works, it’s not insane to run something past 100 early adopters before putting it on GitHub.
With regard to language design, Blow is a guy with a series of YouTube videos.
The common thing in PL is to publish something written, or code. So don’t be surprised when some people don’t feel like they have the time to go through an unconventional format.
> [...] counting every clock. Rust strikes a good balance, and some of my favorite software is written in it, but C++ isn’t obsolete.
This isn't a good argument for C++. If you can't get where you need to go in Rust because you are "counting every clock", you need to go down, which means writing assembler -- not sideways to C++. Once you're counting every clock, none of the high-level languages can help you, because you're at the mercy of their compiler's internal representation of your program and their optimisers. If you care whether you use CPU instruction A or CPU instruction B, you need the assembler, which likewise cares, and not a high-level language.
Both C++ and Rust provide inline assembler if that's what you resort to.
There are things to like about C++ but "counting every clock" isn't one of them.
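To make that last point concrete: GCC-style extended inline assembly (accepted by both C and C++ compilers; Rust has its own asm! macro) lets you pin down the exact instruction when it matters. A minimal sketch, assuming an x86-64 target and a GCC/Clang toolchain:

    #include <stdint.h>
    #include <stdio.h>

    /* Read the CPU's time-stamp counter with the RDTSC instruction, ensuring
       exactly that instruction is emitted instead of whatever the compiler
       would otherwise choose. GCC/Clang extended asm, x86-64 only. */
    static inline uint64_t read_tsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        uint64_t start = read_tsc();
        uint64_t end   = read_tsc();
        printf("delta: %llu cycles\n", (unsigned long long)(end - start));
        return 0;
    }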
> redefining "=" to mean linear move is JavaScript level why the fuck did we do that.
IMNSHO this is another place where C++ has the defaults wrong. If you have both copy and move assignment, then move is the correct default. C++ didn't start out having move semantics at all, so this wasn't practical, but too bad.
From a pedagogic point of view the Rust choice is much easier to teach. Once you've taught move assignment, Copy is just an optimisation in Rust. Whereas in C++ you need to teach both separately, and it's understandable when people don't "get it".
Eh, fair enough. Frankly it would probably be better to do new operators anyways.
The "=" operator meant copy for a long time, and everything from Java to Python technically kept those semantics by calling pointers "handles" or whatever.
I write a lot of C++ and type “std::move” too much, for some kinds of code it is in fact the default you want.
But there are plenty of punctuation characters in ASCII alone. Hell, Pascal has been dead long enough that we could bring ":=" back.
"=" meaning move is the worst kind of pun: it violates 30+ years of intuition, masks that there is still often some code-gen involved, and generally flexes the "Rust vibe" that whatever you knew before is irrelevant because we fixed computing.
Rust is a cool language in some ways. The Rust attitude is: "we brigade HN, and you will do nothing, because you can do nothing." I got into Rust despite the community.
I don't think it's as much of a strawman as you're making it out to be. In his talk, Blow says that higher-level abstractions haven't made programmers more productive than they used to be, and appears to use this as an argument against abstraction. He doesn't say (as far as I recall) that we shouldn't use something like formal verification, but he does put the blame for bad software at the feet of abstraction rather than "unnecessary complexity". Or at least if that were his point he wasn't particularly clear about it.
Particularly since the article spends such a big chunk of its text talking about bounds checks and criticising Blow for being against them, when in fact in his language Jai, as far as I know, bounds checks are enabled by default, although you can disable them selectively, e.g. in hot loops. So it's no different than, say, Rust in this regard. It's the strawman of strawmen.
Blow's talk does raise a few valid questions, but it is so full of factually incorrect statements, cherry-picking and contradictions[0] that I'm surprised anyone can take it seriously.
It's also very hard, for me at least, to interpret the sum of his arguments in the talk as anything except "If you're not managing memory, you're not a real programmer."
Yeah, I think a lot of programmers mistake the map for the territory. It's not only the data, but the program itself.
Almost no one actually cares how a particular program was written or how it understands its input and output -- we care that it works with some level of quality. How one gets that result is irrelevant in the end. It could be written directly in twisty assembly code. Do not care[1]
Parts of these paradigms have useful tools for building working programs, but a great majority of the contents of these paradigms are simply about organization. This shows up most clearly in OO, and of course, functions are a way to organize code. This isn't a bad thing -- certainly helpful for humans working on a code base -- but it isn't actually relevant to the task the program itself performs or how fast or well it performs it.
So, of course, the input and output of a program aren't really conformant to any paradigm, because the paradigms are about organizing programs, not about performing a particular task.
[1] (it might even be more reliable, in some cases, because you would be forced to be careful and pay attention and all those little details you want to ignore are right there in your face (see: async) :-))
I think you might be missing the audience here... this is talking /to/ programmers, after all -- who decidedly do care about how the program is written, organized, etc.
I don't think anyone is making a "more correct" or even "more performant" argument here; maybe, a "more reliable" argument - but only by extension of "better organized, so less likely to include certain classes of bugs".
The elephant in the room is that Apple must already know this is happening. It's not like they need a viral blog article or a user flag uprising to tell them this -- it's been obvious to most technologically savvy users who own iPhones -- and it must be 10x as obvious to someone who works at Apple on the actual App Store. They're just choosing to do nothing about it because it's advantageous to them. Period. But what else is new when large sums of money are involved...
Yes, but what pushes the knife further into the gut is how Apple claims it has a strict app review process to protect iOS users when they have obviously done a poor job here. They were quick to catch Epic Games bypassing their payment system, but they can't catch a very obvious scam that made it to their top 10 grossing app list.
That is not a new "Amiga", that's just a nostalgic upgrade to an old Amiga computer.
A new "Amiga" would be something completely different than the old Amiga (which has long been surpassed by PCs with high-end graphics cards) or the current software/hardware model of a PC (which could be improved in many ways if you decided not to be a slave to current hardware and software standards).
I'm not sure what that means, really, because creating something new and useful is a hard (but not impossible) task. It really does take commitment in these days of software that is POSIX everywhere and (graphics) hardware that is sometimes interesting and high performance, but very buggy and effectively, probably intentionally, undocumented.
The main trick behind Watson is to take the various systems (parsers, search, et al.) and hacks (constraints imposed by the rules of Jeopardy) needed by a Jeopardy-playing bot and put them all together.
So, in some sense, you could say UIMA is what Watson did -- because it allowed a lot of flexibility for researchers to combine their efforts. Ranking and filtering becomes of ultimate importance in a system like that because at some point you have to make a decision. However, it is terribly reliant on the other modules at least getting somewhere in the ballpark -- and the ranking is also not, by itself, anything impressive.
So, it's an interesting case of how far you can get by just setting a single goal and slamming everything together -- but as it turns out, for every new domain you wish to apply something like that to, that magic ballpark is hard to reach without a significant amount of engineering & research effort to come up with new systems/hacks, combined with a lot of relevant data. In other words, it's just like any other ad hoc AI-ish system with a particular goal. Change the goal, change the system.
So, of course Watson was oversold; it was a PR and sales effort from the beginning. Sort of like AlphaGo or Deep Blue -- you might be able to find one or two interesting ideas in the bowels of such a system, but the system itself is not a generic one.