In the context of my life right now, this makes me pretty sad: seeing effort, ingenuity and engineering leap the many hurdles of that era and build something that still seems remarkable to me, largely because of prolonged exposure to the lack of talent, knowledge, skill, vision and perseverance in my corner of the industry (the overlap of "web development" and "enterprise IT").
If the footage here had been taken with modern camera equipment so I could pass it off as being from today, I feel like I could show it to colleagues and make up a story about how it uses servos designed by Elon Musk, a quadcopter with a 4K camera that hovers in place above the maze (for whatever reason), footage uploaded (with no regard for latency) to a "Deep Learning" algorithm invented by Google, and a motion-control algorithm designed by Boston Dynamics that "only" uses 8 GPUs and "only" costs a few dollars an hour to run on Amazon Web Services, without having many questions to answer about any part of it.
A selected quote from the latter:
"""
Real Programmers write in FORTRAN.
Maybe they do now,
in this decadent era of
Lite beer, hand calculators, and "user-friendly" software
"""
I'd consider NNs total overkill for this problem (though not if it were a more general problem of which this was just a test instance). Then I'd make my own hand-wavy assumptions about using any array of sensors (of which CCD, CMOS etc are all sophisticated forms) to first build a map + ideal path without the ball, then once the ball is in place to have a piece of software grab the lowest potential cell in each refresh window (smoothed by a low-weight decaying memory of past potentials) as the location of the ball, the potentials of the neighboring four cells as a heuristic for location within the cell, and adjust tilt accordingly to proceed along path.
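(For concreteness, here's a minimal sketch of that hand-wavy scheme in Python/NumPy, assuming the sensor array has already been reduced to a small grid of per-cell "potentials" where lower means "more likely to contain the ball". The smoothing weight, gain and the tanh squash are my own made-up choices, not anything from the original setup:)

    import numpy as np

    ALPHA = 0.2    # weight of the new frame in the decaying memory (made-up value)
    GAIN = 0.05    # proportional tilt gain (made-up value, actuator units unspecified)

    smoothed = None  # low-weight decaying memory of past potentials


    def locate_ball(potentials):
        """potentials: 2D array of per-cell readings, lower = ball more likely there."""
        global smoothed
        smoothed = potentials if smoothed is None else \
            ALPHA * potentials + (1 - ALPHA) * smoothed

        # The lowest-potential cell is taken as the ball's cell for this window.
        r, c = np.unravel_index(np.argmin(smoothed), smoothed.shape)

        # The four neighbors give a crude sub-cell offset: lean toward whichever
        # side is also "low". tanh is just a squash here; real code would
        # normalize by the sensor's units.
        padded = np.pad(smoothed, 1, mode='edge')
        left, right = padded[r + 1, c], padded[r + 1, c + 2]
        up, down = padded[r, c + 1], padded[r + 2, c + 1]
        row = r + 0.5 * np.tanh(up - down)
        col = c + 0.5 * np.tanh(left - right)
        return row, col


    def tilt_toward(ball_rc, waypoint_rc):
        """Proportional tilt toward the next waypoint on the precomputed ideal path."""
        return (GAIN * (waypoint_rc[0] - ball_rc[0]),
                GAIN * (waypoint_rc[1] - ball_rc[1]))

Each refresh window you'd call locate_ball on the fresh potential grid and feed tilt_toward the next waypoint from the pre-built ideal path.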
edit: One of the points lotyrin touched on that I wanted to amplify: one side-effect of technology advancing is that small problems, when we must address them as users, will likely get increasingly inefficient solutions, because our omnitool is so much more powerful than needed.
If I were doing this today, I'd probably use OpenCV and load a package that manages the de-warping of the camera's image, the grabbing of features from the maze, and the smoothed ball center location. Much like in olden times I'd use that to adjust the angle of the board for this refresh window, then proceed to the next one. In that sense the biggest difference between the modern approach and one in 1982 is how incredibly much has been encapsulated by folks over the last 30+ years. And in another 5 years or so, I wouldn't be at all surprised to find people implementing these simple tasks with NNs because NNs have by then become _so easy_ to throw at any given problem.
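(Here's roughly what I mean, as a hedged OpenCV sketch: it assumes a camera calibration you've already computed, leans on HoughCircles to grab the ball, and all the constants are guesses rather than tuned values:)

    import cv2
    import numpy as np

    ALPHA = 0.3            # smoothing weight for the ball center (made-up value)
    smoothed_center = None


    def ball_center(frame, camera_matrix, dist_coeffs):
        """De-warp one frame and return a smoothed estimate of the ball center."""
        global smoothed_center

        # Undo lens distortion using a calibration assumed to exist already
        # (e.g. from cv2.calibrateCamera against a checkerboard).
        undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)

        gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (9, 9), 2)

        # HoughCircles is one off-the-shelf way to grab a round ball; the radius
        # bounds and thresholds below are guesses, not tuned values.
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                                   param1=100, param2=30, minRadius=5, maxRadius=40)
        if circles is None:
            return smoothed_center              # keep the last estimate on a miss

        x, y, _r = circles[0][0]
        center = np.array([x, y], dtype=float)
        smoothed_center = center if smoothed_center is None else \
            ALPHA * center + (1 - ALPHA) * smoothed_center
        return smoothed_center

From there it's the same as the olden-times version: compare the smoothed center to the next waypoint and nudge the board angle for this refresh window.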
That's where the money and expertise is, I'm guessing.
And it's one of those fields where you have to invest a truckload of time and energy up front learning about everything, but once you've done that, keeping up is (provided you're on a focused team) not that hard and it's easy to be productive. But webdev and enterprise IT are so thoroughly vertically integrated (especially when developing LoB apps) that moving away basically means facing a nontrivial period of downtime.
If you're valued enough and don't have a primarily liquid lifestyle (relative to cost of living etc) then living off savings could probably work for long enough to study and line up a job somewhere else, but that doesn't discount the added cost/toll of the extra stress it will inevitably impose.
> But some enterprise IT jobs can be soul crushing.
Mmm, yeah. Good work-life balance that provides for some sense of distance is critical with such positions. Something that helps you see beyond the job.
If the job makes that difficult or impossible and it's not because you're in the middle of a sprint or some other temporary situation, burnout is a matter of when, not if. If you can't find a way to mix things up and get a change of scenery within your current environment (e.g., working on a different project, or as design lead on some totally unrelated team), it might be a very, very good idea to start mentally preparing yourself to redo your CV, I think.
> How do you non-financially weigh the benefit of pursuing a lifetime of rewarding work at say, one half the salary you could otherwise earn?
Obvious answers are quality of life, general mood and health, etc.
Money is after all a tool, not an end in and of itself. Sure, money will get you lots of "friends" and social status, but that's only because you look shiny and might drop breadcrumbs. I'm not aware of any other reasons money is useful; indeed, the more you have the more vulnerable you are, beyond certain thresholds.
TL;DR: If you share the sentiment of the parent commentator you may find this wall of text interesting.
You might be interested in Alan Kay's "Doing with Images Makes Symbols Pt 1", https://archive.org/details/AlanKeyD1987 (46:29). This video features excerpts from Ivan Sutherland's Sketchpad demo (https://www.youtube.com/watch?v=USyoT_Ha_bA, 10:34; https://www.youtube.com/watch?v=6orsmFndx_o, 19:19) and Douglas Engelbart's seminal demonstration (https://archive.org/details/XD300-23_68HighlightsAResearchCn..., 34:45). I recommend watching all the videos in full to get a proper sense of context (Alan Kay's excerpts from Engelbart's demonstration in particular don't include the bits that explain how the workstation's camera system works, which is important; also, the right audio channel in the Alan Kay video unfortunately seems to be dead, while the other videos are much better).
Highlights from Alan Kay's video include (in descending order of interest): info about the Sketchpad's CPU, which you might want to listen to twice so you don't mishear it (8:58), some really depressing acknowledgements (from Alan Kay himself) that Sketchpad was a never-reproduced gem even in the 80s (8:33), Ivan's (quoted) remarks on the features of the Sketchpad system (7:58), and an acknowledgement of how awkward light pens were to use (7:12).
I consider the above videos to be my own personal pinnacle, what I look up to when I think about good computer design. Some may find them depressing for the reasons you've outlined.
It's sad that this obscure niche of computing doesn't lend itself to being very actionable in this day and age. There's nothing comparable in the mainstream today, and nowhere you can wedge yourself in sideways and find yourself working on problems of a similar caliber.
The short answer to what happened might be that there came a point at which enough people started asking for fast answers that all the technicians collectively stopped tinkering and switched to specializing in finding the quickest solutions to problems.
Here is a fuller theory as to why everything's worked out the way it has, predicated on the idea that this problem is fundamentally human and social.
We're wanderers, so to counter that we have a fundamental, innate compulsion to focus on things and study them until studying and exploring further becomes mentally and physically all but impossible. This scales terribly. It produces groups that have studied their subjects so deeply that they reach a kind of "critical mass" and become difficult to communicate with (which inhibits the flow of ideas). They go from making headline-grabbing leaps and bounds that keep their research in the collective consciousness to making inconsistent, halting progress that's infrequent enough to be boring, and on top of that hard to relate to without an understanding of the group's iterative history of failure and success. This gives way to a kind of inbreeding, which further impedes the free flow of new and novel ideas.
Unfortunately, computers are still entirely human-driven (in terms of identifying desires and translating them into goals) and only do what we tell them to (which isn't surprising, since we built them for exactly that; they are just tools in the end), so their effectiveness is only as great as our ability to mentally model them and translate what we want them to do into binary.
This is ridiculously hard because our brains are innately vertical (in terms of conceptual isolation, interconnection of similar domains - as well as rapidly-shrinking attention spans, a concerning incidental observation), and it is very, very hard for us to be horizontal - to take in "the bigger picture" of an entire domain in the hopes of producing an effective solution. This is especially the case today.
I think the cutting edge of computer architecture reached the point where we couldn't fit the designs in our heads around '75-80 (because we got so good at designing them).
Alan Kay asked Ivan Sutherland about the Sketchpad and Ivan said what he said because he was using a computer that he was able to reason his way through.
This also goes some way to explaining https://www.cs.utah.edu/~elb/folklore/mel.html, which is an awesome story. Let me be clear, I believe this is because of the accessibility and understandability of the architecture, not just because of mental genius.
I also think the amazing developments of the 90s can be partially credited to the fact that the home microcomputers of the 70s and 80s had processing power similar to the mainframes and minis of the 60s, so in a really awesome collective coincidence, what hit the "amazing" 12-year-olds of the 80s were bonsai minicomputers: downscaled machines big enough to accomplish real tasks on, but small enough to fit into your head. And so the people who had the aptitude and could access a computer got an incredible head start into computer science that made everything a whole lot easier to comprehend.
> Learning JS can be overwhelming. I know it can feel like there is an ocean of stuff you don’t know. Trying to soak it all up is like trying to soak up the real ocean with a beach towel.
> It’s never going to happen. From this point going forward, no single human being is ever going to have a completely full grasp of every corner of JavaScript, CSS, and Web APIs. Nobody is ever going to know everything there is to know about modern web architecture, Node, GraphQL, SQL, NoSQL, async control flows, functional programming, build pipeline tools, debuggers, memory profilers, paint profilers, flame graphs, React, Angular 2, TypeScript, Redux, ngrx/store, RxJS, Axios, Webpack, Browserify, Elm, Clojure, and every other exciting, scary, new, hipster Haskell thing that exists in the web dev world today.
> It’s never going to happen. I can’t keep up. Dan Abramov isn’t keeping up. Brendan Eich isn’t keeping up. Don’t stress out because you can’t, either. We’re all on the same bullet train here, and no matter what seat you’re sitting in, the world outside the windows is all a blur.
Oh hey, Déjà vu. The Web is the new microcontroller phenomenon. I think literally every advanced technology develops beyond the point where it can be individually comprehended, after which success or failure depends on how well it can be comprehended at scale. That is categorically hard to plan for in advance on a small scale.
So what's actionable about all of this? For my part - I've only just pieced this bigger picture together - I'm heading in the general direction of OS development, FPGAs, tiny microcontrollers, etc. (I'm hesitant to explore retro chip designs like the KISS-68030 (http://sowerbutts.com/retro/#kiss) because they use silicon that's not being made anymore, and one day I want to make some kind of handheld, self-contained device that lets you tinker with this order of hardware, so I want to stick to supply-chainable stuff.)
On a broader note, I noticed these threads recently:
To me this is one of the purest expressions of the hacker ethos. I work at Google, and our ethos focuses on doing things the right way, e.g. well engineered, scalable, etc. This works, but it's often expensive and slow to execute; in exchange, the result may last longer without having to be rebuilt from scratch.
But when you're hacking, you use what's available, often outside of the intended purpose of the components. You make the impossible possible, today, rather than years from now.
So Cromemco was selling a totally non-consumer-friendly item, but they made prosumer digital photography available 12 years before anyone else. And while this didn't benefit a huge number of people, this kind of early exploratory product creation can influence lots of other people to do things with it, eventually spawning whole industries.
I think the first real AI breakthrough, or what we think of as SciFi-style AI, might come not from researchers or large companies, but from some hackers mashing up a ton of techniques and approaches that make no logical sense to combine and can't be fully explained, but somehow "work".
Yeah I agree it's a really cool hack! I wonder if they could have even used multiple chips to increase the resolution, assuming they could re-package the silicon to minimise gaps between them.
One of the books I really love is "Hackers: Heroes of the Computer Revolution" by Steven Levy, which shows the inventiveness of those guys!
With respect to AI, that sounds really interesting. I'd not have thought a lone researcher could create something so complex, but that would be really awesome. One of the projects I find really fascinating is OpenWorm; if I recall correctly, they have a bunch of videos where the neurons of the C. elegans worm have been recorded optically, and they're building their model from those recordings.
Interesting to consider how, according to my recent reading of some machine vision review literature, the industry's standard process for process-oriented vision tasks has apparently changed so little in 35 years. Already here we see high-contrast binarization of the image (reduction to black and white), a feedback loop, ring lighting, and task-specific object classification. Of course there are newer techniques, but for many linear processes incorporating machine vision the same general approach is still taken.
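(In modern OpenCV terms, that 35-year-old recipe still looks something like the sketch below; the threshold and area bounds are placeholders, and the "classification" step is deliberately reduced to a bare area check:)

    import cv2

    def inspect(frame):
        """One refresh of the classic recipe: binarize, segment, classify blobs."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Reduction to black and white: a fixed threshold is viable because ring
        # lighting keeps the illumination even (128 is a placeholder value).
        _ret, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

        # Segment the binary image into blobs (OpenCV 4.x return signature).
        contours, _hier = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)

        # Task-specific "classification": here just a pass/fail on blob area,
        # the kind of per-part check a linear inspection process would feed
        # back into its control loop.
        return ["pass" if 500 < cv2.contourArea(c) < 5000 else "fail"
                for c in contours]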
I was thinking that as well; I had a project that needed computer vision, and I figured I'd hack a prototype to hand over to the real experts. I used OpenCV, but with techniques I had learnt in the early 90s at university, on Sun SPARCstations, in C. When I was done I handed it over for the "real" implementation, and that team told me it was done just as they would have done it. Of course they knew many tricks to make it more optimal, but like you say, the basics didn't change much. Do the modern neural nets need this level of preprocessing too?
I used OpenCV for the first time around 2 years ago, and I asked one of my professors if he had a good CV text I could use to quickly learn the basic techniques. The book was from the late 90s, but the algorithms it covered were basically the same as those in the OpenCV API. It was quite surprising to me that not much had been added to the state-of-the-art since the publication of the textbook.
For those curious, I was writing some code for 2D object distance estimation for use in an undergrad robotics competition. My team ended up losing, but I think we could have done better had I not started reading up on CV and writing code only a week before the deadline. I'm surprised we were able to even get past the first round!
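(For anyone curious, the non-NN version of that distance estimation is usually just the pinhole-camera relation; the focal length and marker width below are made-up illustration numbers, not anything from the actual competition code:)

    def distance_to_object(focal_length_px, real_width_m, pixel_width_px):
        """Pinhole-camera approximation: distance = f * W / w."""
        return focal_length_px * real_width_m / pixel_width_px

    # Made-up numbers: a 0.2 m wide marker that appears 80 px wide through a
    # lens with a 600 px focal length is roughly 1.5 m away.
    print(distance_to_object(600, 0.2, 80))   # -> 1.5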