I don’t know who said it, but an amazing quote I love is: “they call it AI until it starts working, see autocomplete”
I love this because when a company tells me (a software engineer) that they do AI, they're tacitly saying they have little to no idea of where they want to go or what services they'll be offering with that AI.
As someone who works in the field and works with LLMs on the daily - I feel like there are two camps at play. The field is bimodally distributed:
- AI as understandable tools that power concrete products. There's already tons of this on the market - autocorrect, car crash detection, heart arrhythmia identification, driving a car, searching inside photos, etc. This crowd tends to be much quieter and occupies little of the public imagination.
- AI as religion. These are the Singularity folks, the Roko's Basilisk folks. This camp regards the current/imminent practical applications of AI as almost a distraction from the true goal: the birth of a Machine-God. Opinions are mixed about whether the Machine-God is Good or Bad, but they share the belief that its birth is imminent.
I'm being a bit uncharitable here since as someone who firmly belongs in the first camp I have so little patience for people in the second camp. Especially because half of the second camp was hawking monkey JPEGs 18 months ago.
> AI as understandable tools that power concrete products.
This is exactly why I'm wary.
Contemporary AI stands upon mechanical turks.
In contrast, spellcheckers, checkers engines, and A* were built solely by people with employer-provided health insurance.
In the old days, the hard work for professional pay was the justified means.
Today, taking advantage of the economically desperate is the justified means.
There’s no career path from mechanical turk to Amazon management because mechanical turk is not an Amazon position. It’s not even employment. No minimum wage. No benefits. No due process.
There's a blur between the two camps once you get to the so-called "AGI" thing.
People think creating super-human intelligence is a technological challenge, but given that we aren't able to consistently rank human-level intelligence, the *recognition* that some AI has attained "super-human" levels is going to be a religious undertaking rather than a technological one.
And we're kind of close to the edges of that already. That's why discussions feel a bit more religious-y than in the past.
tl;dr: until there's a religion that worships some AI as a "god", there won't be any "AGI".
> tl;dr: until there's a religion that worships some AI as a "god", there won't be any "AGI".
I fear you may be correct. Though now I'm thinking of how AI have been gods in fiction, and hoping that this will be more of a Culture (or Bob) scenario than a I Have No Mouth scenario.
(And if the AI learns what humans are like and how to behave around them from reading All The Fiction, which may well be the case… hmm. Depends what role the AI chooses for itself: I hear romance is the biggest genre, so we may well be fine…)
There's a good breakdown and cliche-by-cliche comparison in there, but I find the penultimate paragraph both memorable and quotable:
> It’s also interesting to think about what would happen if we applied “Rapture of the Nerds” reasoning more widely. Can we ignore nuclear warfare because it’s the Armageddon of the Nerds? Can we ignore climate change because it’s the Tribulation of the Nerds? Can we ignore modern medicine because it’s the Jesus healing miracle of the Nerds? It’s been very common throughout history for technology to give us capabilities that were once dreamt of only in wishful religious ideologies: consider flight or artificial limbs. Why couldn’t it happen for increased intelligence and all the many things that would flow from it?
We cannot ignore those other things you list, because they are here already.
AGI is not, and there is no evidence that it is even possible. So we can safely ignore it for now. Once some evidence exists that it may actually be achievable, we'll need to pay attention.
People in 1000 CE could (and did) safely ignore all those things, for this exact reason.
> AGI is not, and there is no evidence that it is even possible.
We ourselves are an existence proof that minds like ours can be made, that these minds can be made from matter, and that the design for the organisation of that matter can come from a simple process of repeatedly making imprecise copies and then picking the best at surviving long enough to make more imprecise copies.
> People in 1000 CE could (and did) safely ignore all those things
Whereas the people, and specifically the leadership, of Japan unsafely ignored one of them on the 6th of August 1945. Some of the leadership were still saying it couldn't possibly have been a real atomic bomb as late as the 7th, which is ultimately why the second bomb fell on the 9th.
>> AGI is not, and there is no evidence that it is even possible.
> We ourselves are an existence proof that minds like ours can be made, that these minds can be made from matter, and that the design for the organisation of that matter can come from a simple process of repeatedly making imprecise copies and then picking the best at surviving long enough to make more imprecise copies.
I think that "proof" rests on an as-yet circular assumption. Even if that assumption is accepted, there is no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability.
> I think that "proof" rests on an as-yet circular assumption. Even if that assumption is accepted, there is no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability.
I don't know what you mean by "as-yet circular assumption". (Though in the philosophy of knowledge, the Münchhausen trilemma says that everything is ultimately either circular, infinite regression, or dogmatic).
> there is no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability.
Sounds like you're arguing against ASI not AGI: G = General like us; S = Super-, exceeding us.
That said, there's evidence that ASI is also possible: All the different ways in which we've made new minds that do in fact greatly exceed ours in capability.
When I was a kid, "intelligent" was the way we described people who were good at maths, skilled chess players, good memories, having large vocabularies, knowing many languages, etc. Even ignoring the arithmetical component of maths (where a Pi Zero exceeds all of humanity combined even if each of us were operating at the standard of the current world record holder), we have had programs solving symbolic maths for a long time; Chess (and Go, Starcraft, Poker,…) have superhuman AI; even before GPT, Google Translate already knew (even if you filter the list to only those where it was of a higher standard than my second language) more languages than I can remember the names of (and a few of them even with augmented reality image-to-image translations).
And of course, for all the flaws the current LLMs have in peak skill, most absolutely have superhuman breadth of knowledge: I can beat GPT-3.5 as a software engineer, at maths and logic puzzles, or when writing stories, but that's basically it.
What we have not made is anything that is both human (or superhuman) in skill level and human-level in generality — but saying that having the two parts separately isn't evidence that it can be done is analogous to looking at 1 gram of enriched uranium and a video of a 50 kg sphere of natural uranium being forced to implode spherically, and saying "there is no evidence that humans are capable of designing an atom bomb or that it's possible to make an atom bomb that greatly exceeds chemical bombs in yield."
You won't get a proof until the deed is done. But that's the same with nuclear armageddon - you can't be sure it'll happen until after the planet's already glassed. Until then, evidence for probability of the event is all you have.
> there is no evidence that humans are capable of designing a new mind or that it's possible to make a mind that greatly exceeds ours in capability
There's plenty of good reasons to assume it's possible, all while there's no evidence suggesting it's not.
"good reasons" sounds like another way of saying "no actual evidence, but a lot of hope". There is no actual evidence that it's possible, certainly not anytime soon. People pushing this narrative that AGI is anywhere close are never people working in the space, it's just the tech equivalent of the ancient aliens guys.
> People pushing this narrative that AGI is anywhere close are never people working in the space
Apart from the most famous AI developer group since near the beginning of this year, on the back of releasing an AI that's upset a lot of teachers and interview-question writers because it can pass so many of their existing quizzes without the student/candidate needing to understand anything.
I suppose you could argue that they are only saying "AGI could happen soon or far in the future" rather than "it will definitely be soon"…
Yes, the people selling the hammer want you to believe it's a sonic screwdriver. What else is new? You sort of prove my point when your evidence of who is making those claims are the people with a vested interest, not the actual scientists and non-equity developers who do the actual coding.
"But a company said the tech in their space might be ground-breaking earth-shattering life-changing stuff any minute now! What, you think people would just go on the internet and lie!?"
I haven't set up a No True Scotsman proposition, I made a very clear and straightforward assertion, that I've challenged others to disprove.
Show me one scientific paper on Machine Learning that suggests it's similar in mechanism to the human brain's method of learning.
It's not a lack of logical or rhetorical means to disprove that's stopping you (i.e. I'm not moving any goalposts), it's the lack of evidence existing, and that's not a No True Scotsman fallacy, it's just the thing legitimately not existing.
This is a myth; Japan was not in denial that the US had atomic bombs. It had its own atomic bomb program (though an extremely rudimentary one), and was aware of Germany's program as well. It just didn't care.
What caused Japan to surrender was not the a-bombs, it was the USSR declaring war on them.
That aside, that still supports my point, which is that they should not ignore things that exist, while they can ignore things that don't. Like AGI.
I could've phrased it better, it sounds like you're criticising something other than what I meant.
One single plane flies over Hiroshima, ignored because "that can't possibly be a threat". The air raid warning had been cleared at 07:31, and many people were outside, going about their activities.
> it had its own atomic bomb program
Two programs; it was because they were not good enough that they thought the US couldn't have had the weapons:
--
The Japanese Army and Navy had their own independent atomic-bomb programs and therefore the Japanese understood enough to know how very difficult building it would be. Therefore, many Japanese and in particular the military members of the government refused to believe the United States had built an atomic bomb, and the Japanese military ordered their own independent tests to determine the cause of Hiroshima's destruction.[0] Admiral Soemu Toyoda, the Chief of the Naval General Staff, argued that even if the United States had made one, they could not have many more. American strategists, having anticipated a reaction like Toyoda's, planned to drop a second bomb shortly after the first, to convince the Japanese that the U.S. had a large supply.[1]
[0] Frank, Richard B. (1999). Downfall: the End of the Imperial Japanese Empire. New York: Penguin. ISBN 978-0-14-100146-3
[1] Hasegawa, Tsuyoshi (2005). Racing the Enemy: Stalin, Truman, and the Surrender of Japan. Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-01693-4
--
> AGI
You personally are a General Intelligence; we have Artificial Intelligence. Is GPT-4 a "general" "intelligence"? That depends on the standards for the words "general" and "intelligence". (Someone is probably arguing that anything trained by an evolutionary algorithm isn't necessarily "artificial", not that I know how it was trained, nor even care given I don't use that standard).
My college textbook on AI from 20 years back considered a large enough set of if-else statements (i.e., an expert system) as rudimentary AI. Now we'd call it a bunch of hard-coded if-else statements, but 40 years ago it was state-of-the-art AI and 20 years ago it was worth including in a textbook.
Norvig and Russell's textbook (one of the current go-to AI books) calls the big "if-else AI" a "simple reflex agent". It observes the environment in a rudimentary way and then goes through the if-then chain. One of the first things students (should) learn is how inefficient this is for more challenging problems.
My students were just given an assignment where they build AI to play Connect 4. Some will try to make a simple reflex solution because they want to avoid recursion, then come to office hours asking how to make it work better. It... can't. There really is an observable upper-bound on if-then performance.
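For what it's worth, here's a minimal sketch (in Python, with a made-up board representation, not the actual assignment) of what such a simple reflex Connect 4 player amounts to: a fixed chain of if-then rules reacting to the current board, with zero lookahead.

```python
# A "simple reflex agent" for Connect 4: a fixed chain of if-then rules, no search.
# Board: 7 columns, each a list of pieces ('X' or 'O') stacked from the bottom up.
# (Made-up representation, for illustration only.)

def legal_moves(board):
    return [c for c in range(7) if len(board[c]) < 6]

def drop(board, col, piece):
    new = [list(column) for column in board]
    new[col].append(piece)
    return new

def is_win(board, piece):
    def at(c, r):
        return board[c][r] if 0 <= c < 7 and 0 <= r < len(board[c]) else None
    # Check every horizontal, vertical and diagonal run of four.
    return any(all(at(c + i * dc, r + i * dr) == piece for i in range(4))
               for c in range(7) for r in range(6)
               for dc, dr in ((1, 0), (0, 1), (1, 1), (1, -1)))

def reflex_move(board, me='X', opponent='O'):
    moves = legal_moves(board)
    # Rule 1: take an immediate win if one exists.
    for col in moves:
        if is_win(drop(board, col, me), me):
            return col
    # Rule 2: block an immediate opposing win.
    for col in moves:
        if is_win(drop(board, col, opponent), opponent):
            return col
    # Rule 3: otherwise prefer the centre-most column.
    return min(moves, key=lambda col: abs(col - 3))
```

However many rules you bolt on, it never considers what happens two moves later, which is exactly the ceiling students run into.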
> There really is an observable upper-bound on if-then performance.
Only on if-else chains you can code by hand.
There's a lot of machine learning methods that can be seen as using data to generate large networks of if-else decision points. There are methods that perform (a discrete simulation of) continuous generalization of if-else chains. And fundamentally, if-else chains with a loop thrown in the mix is a Turing-complete system, so it can do anything.
The problem here is that if-else chains are a really inefficient way for humans to model reality with. We can do much better with different approaches and abstractions, but since they all are equivalent to a finite sequence of if-else branches, it's not the if-else where the problem is - it's our own cognitive capacity.
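To make that concrete, here's a tiny, hedged illustration (assuming scikit-learn is installed; the data is invented for the example): a learned decision tree is literally a nested if-else chain that nobody wrote by hand.

```python
# A decision tree is an if-else chain generated from data rather than written by hand.
# Assumes scikit-learn is installed; toy data for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy task: label points by whether x + y > 1.
X = [[0.1, 0.2], [0.9, 0.8], [0.4, 0.3], [0.7, 0.9], [0.2, 0.6], [0.8, 0.4]]
y = [0, 1, 0, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["x", "y"]))
# The printout is a nested series of "if feature <= threshold" branches --
# the same shape as a hand-written expert system, except the thresholds
# were chosen by the learning algorithm instead of a programmer.
```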
It's a quality zinger, but ironically the product may have been a subset of the feature: I'd argue the product is the fact that Dropbox doesn't belong to a platform vendor and therefore can't be leveraged for anticompetitive purposes / lock-in.
Dropbox laughing in that $10B market cap and $2B+ ARR. Jobs was right about the concept (insert meme about Apple ecosystem devs realizing their product was killed by an Apple feature release), but wrong in that specific instance.
If I gave you $250,000,000 to grow a company, and then next year I saw you had $250,050,000 in the bank, then $250,102,000 next year, and so on, I'd be pretty annoyed that I backed you. You have so much money you could be spending on hiring, development, and marketing, and you're instead just slowly chugging along, padding the corporate bank account? What am I paying you for?! Give me my money back.
VC-backed companies that spend more than they earn aren't duds. It's the nature of VC-backed corporations.
They spent it all on storage and on their new spammy-looking marketing emails that pester free users to upgrade, I guess. I don't recall anything really new from Dropbox since they were established.
I am paying Dropbox for storage and will pay them until I die. Rock solid sync and object durability, API access to my storage for my apps, no complaints whatsoever. I don't want new, I want storage I don't have to think about.
Until they die, relatively soon. 100 years from now Dropbox will be a distant memory, but locally mounted FTP directories under version control will be alive and well.
> but locally mounted FTP directories under version control will be alive and well. [1] [2]
This might matter to you, but it does not matter to me. In the meantime, my life will have been better and my time saved between now and death (certainly less than 100 years from now). That's what the money is for. Time is non renewable. If you have more time and ideology than money, I admit your solution is a better fit for your life and use case(s). Self host if you want, I have better things to do personally vs cobbling together technology that I can buy polished for the cost of two coffees a month. There's a product lesson in this subthread. No business lasts forever, the benefit is the value delivered during its lifecycle. Provide value, and I will happily let you ding my credit card monthly or annually (please support annual plans B2C!) forever. "Build something people want" or something like that [3].
Self hosted and versioned FTP drives represent true power, like the stone buildings that stand for centuries. A dropbox subscription is the shitty McMansion that falls apart after 10 years.
Unfortunately, longevity doesn't matter to an economy whose participants surf the cash flow. The shitty McMansion may fall apart after 10 years, but if it lets you earn more than it costs to replace, it's good enough. Sad as it is, a lot of the economy relies on the churn.
So was almost every unicorn startup. They purposefully aim for growth until it's unsustainable, then switch over to exploiting their market position. We may not like it, but the business model is far from novel or unexpected.
This quote always struck me as a weird anti competitive flex, in the "we can crush you anytime" way.
And Apple later released the whole iCloud suite that made Dropbox a second class citizen in the OS, even as to this day Dropbox works better than iCloud in many ways. We more and more hear the "services revenue" drum at every Apple earning call, so Jobs was not wrong either.
I've had to deal with this kind of company in FOMO mode. They start from a solution (AI) in search of a problem to solve, while the ideal approach would be the inverse.
Pretty much a guarantee that a lot of money will be wasted while frantically iterating through pointless approaches. I figure this happens every time a new fundamental technology comes out; the dot-com bubble probably saw many such companies.
Starting from a problem and making a solution requires that you understand both your problem domain and the domain of the solution very well. Much harder than taking a hammer and hitting everything that kind of looks like a nail
More money is being printed than ever before, so some people literally have to find something to do with it; the waste in AI marketing is one result.
AI in the digital age is uniquely disruptive, however, since it connects directly to the way we communicate, so there is some reason to be wound up by this, whatever role you are in.
> “they call it AI until it starts working, see autocomplete”
No disrespect, but this is a pretty bad quote. Are we really at the point where a lazy tweet not-so-hot-take deserves to be stored in the annals of history?
> In the run up to Uber’s IPO in 2019, venture capital funds were flooded with pitches from startups offering “Uber for X”. Uber for parking spaces.
Ugh. The current French president famously proposed to "uberize" the economy, by which he meant less secure jobs that cost less to the employer. The C-- people in my workplace are already talking non stop about generative AI and the like. I don't look forward to hearing more marketing mumbo jumbo about AI-izing everything in the near future.
What I find funny is how it seems what we called "AI" shifted from what we actually mean when we speak of a human as "intelligent" to what we definitely do not consider a part of intelligence.
Take the robots from Boston Dynamics as an example. We do not consider a human intelligent because they can distinguish a cat from a chair, or walk without tripping or do a backflip. But we would consider them intelligent if they were able to make creative use of their physical ability, based on the data they have, to solve a problem. But when we marvel at the "AI" in things like robots, my understanding is that the "AI" is mostly in the perception (going from raw sensor data to a labelled model of the world) and the actuation (going from an instruction as "move one meter forward" or "do a backflip" to an actual sequence of action of the actuators). But the actual "intelligence" (the decision to open the door in order to go fetch the peanut butter and bread in the other room, and bring it back to complete the task "make me a peanut butter sandwich") is mostly classic if-then logic, or given by a human operator.
I see a similar pattern with things like ChatGPT: one might be impressed by the capacity of some students to learn textbooks by heart and parrot them perfectly, but those are not the students we call "intelligent". An intelligent student is one who might have forgotten the proof of a theorem or the exact definition of some term, but is able to creatively come to a correct and creative solution when presented a new problem (to which "parrot students" tend to fail miserably).
Arguing what is and isn't AI is kinda futile, since there's no solid definition. I remember some old footage I once found on YouTube of a very primitive automated system for an airport control tower (I think it was). I don't remember the year, but probably the 70's or so. The guy was calling it AI. Then went into detail about what the software was doing, even showing the code (maybe it was just the interface). It literally was just a whole lot of if/else statements. Once we get used to ChatGPT and the like, it will either stop being called AI or we will start talking more broadly about "General artificial intelligence" for the next thing to come.
Until recently I was on the side of not calling anything AI if it wasn't GAI, but honestly... who cares? Words are just tools to communicate ideas; if most people want to call the things we are doing today AI, that's ok, and it isn't even wrong. It isn't sentient, but it is arguably intelligent by some standards, and it is obviously artificial. If we understand each other, that's good enough.
> but honestly... who cares? Words are just tools to communicate ideas
That's the problem. If definitions constantly shift, then the signal-to-noise ratio drops sharply. It means that we no longer have an efficient way of conveying meaning. In engineering and science, terminology has to have very specific meanings, or the words become unusable for reasoning.
The idea that "pffft... whatever... words... am-I-right?" seems to come from the angle of post-facto justification of using incorrect terminology, or as a mechanism for gas-lighting people into believing something that has little foundation in the real world. This is something advertisers and politicians are constantly striving to do. I really hate that this has leaked into technology discussions, but here we are.
> An intelligent student is one who might have forgotten the proof of a theorem or the exact definition of some term, but is able to creatively come to a correct and creative solution when presented a new problem
The brain needs to be able to retain some basic information to build upon, though.
I did not say that they are easy problems, nor that seeing a robot do a backflip isn't impressive. It was just an amused observation that what we call "intelligent" in a computer changed based on whatever the current hot tech can do.
I owned a vacuum cleaner back in the 90s with "fuzzy logic", the cool tech buzzword of the time. Who knows what it actually did. I suspect it just meant that the thing has a medium power setting between on and off.
I wish my 2015(16?) car had fuzzy logic for the climate control. It seems there may be a small difference between slightly hot and full hot, but there is a distinct switch between slightly cool and slightly hot. Sometimes I just want the fan to move ambient temperature air, not hot or cold.
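For what it's worth, the core of fuzzy control is just smooth blending between rules instead of a hard switch at the setpoint. A toy sketch with made-up temperatures and rules, purely illustrative of the idea rather than any real HVAC controller:

```python
# Toy fuzzy climate control: blend "heat", "cool" and "neutral" rules smoothly
# instead of flipping a hard switch at the setpoint. Numbers are invented.

def membership_cold(error):   # error = setpoint - cabin temperature
    return max(0.0, min(1.0, error / 5.0))    # fully "cold" at 5 degrees under

def membership_hot(error):
    return max(0.0, min(1.0, -error / 5.0))   # fully "hot" at 5 degrees over

def vent_temperature(error, heat_air=45.0, cool_air=10.0, ambient=22.0):
    cold, hot = membership_cold(error), membership_hot(error)
    neutral = 1.0 - max(cold, hot)
    # "Defuzzification": a weighted blend of the rule outputs.
    total = cold + hot + neutral
    return (cold * heat_air + hot * cool_air + neutral * ambient) / total

for err in (-6, -2, 0, 2, 6):
    print(err, round(vent_temperature(err), 1))
# Output drifts gradually from cool air through ambient to hot air,
# with no abrupt jump between "slightly cool" and "slightly hot".
```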
Yeah, these arguments are reinventing history. People are happy calling basically anything AI[1], up until AI starts getting really good at stuff like natural language, and only then is language reinvented to kick it back down. AI has referred to any software that emulates aspects of humanlike decisionmaking to literally any nonzero degree of fidelity for as long as I've been alive.
And, like, what's the point anyway? Refusing to call it AI doesn't change its capabilities, for good or ill. It doesn't change that all of the leading capabilities labs bar FAIR really are aiming to build bona fide general intelligence, and even FAIR is named the Fundamental AI Research lab for a reason.
[1] eg. Pacman's Ghost AI — a totally cogent term for what sums up to a few lines of code and runs on a Z80. Wikipedia: “Each of the four ghosts has its own unique artificial intelligence (A.I.), or "personality": ...”
Funnily enough, game AI is often smoke and mirrors.
Devs often have to really dumb down the game AI to be fair to the players, and they balance out the dumbing down by giving it bonuses.
Example: game AIs often have unlimited ammo, but their bullets don't hurt the player much.
First-person shooter? You're certainly correct. No reason AI can't get headshots 100% of the time and have zero reaction time to the player coming into view.
Strategy games like Civilization? On hard difficulties, it's well-known that AI cheats. Probably because developing AI for a strategy game is pretty damn hard.
Yeah, these games are usually complex enough that creating a bullseye expert system AI for it is very difficult if not impossible. Chris Crawford was already talking about artificially boosting an AI player because of the inevitable skill gap between a fair AI player vs. an actual human in 1982, I remember reading an article about the AI of Eastern Front 1941 for the Atari 800, I'm sure it's not this one from Byte Magazine but it shows similar ideas: https://archive.org/details/byte-magazine-1982-12/page/n97/m...
That's describing faked intelligence, not artificially created (actual) intelligence. It's using cheats that kinda look like intelligence if you don't know what's really going on.
no, see, that's my point: you wouldn't call genetically modified grass 'artificial grass'. As you did, you have to qualify it as 'artificially created'. Because apart from being 'artificially created' it's real grass. Likewise your clone is a real human.
So 'artificially created' things are not automatically 'artificial'. They're only 'artificial' if the artifice is still apparent in the end result. If they aren't real.
So my point: if we're describing something as 'artificial' intelligence, we should expect it to be a fake substitute for real intelligence, that has been manufactured to serve that purpose.
If you said you had 'artificially created intelligence', that is a different - and much bigger - claim than that you have 'created artificial intelligence'. Much like if you said you had 'artificially created life' that is a much bigger claim than that you have 'created artificial life'.
> So my point: if we're describing something as 'artificial' intelligence, we should expect it to be a fake substitute for real intelligence, that has been manufactured to serve that purpose.
And my point is that I don't think most people are reading it that way. We're in a hype cycle where people are treating it like true general AI, like you'd see in sci-fi, instead of what it is: fake/simulated intelligence.
I think you're putting too much weight on the components of a term, instead of seeing the term as a whole.
I like to point to the TV industry. They're pretty good at jumping on any hype train they can to make the TV sound more advanced than last year's models.
I have no idea what it does but my Sony has some AI sprinkled in it somewhere. Meanwhile, I just need a big dumb display
Right! And what it 'really' is will push the philosophical 'hard problem' of consciousness. Will we be able to penetrate something human-like to know if it is conscious?
Inserting tech buzzwords into pitch decks is as old as the Valley itself. Not long ago "powered by blockchain" was all the rage and some of these companies were funded regardless of whether there was any working blockchain behind the curtain.
How many have pivoted to "powered by AI," I wonder?
So I get passing off if-then statements, mechanical turk/human intervention, or ChatGPT API calls as "AI".
But is there a problem saying that a product that incorporates machine learning techniques is powered by AI? I ask because I'm starting to see people separate "ML" from "AI", where ML is basically everything pre-generative AI (recommendation systems, computer vision, machine translation) and AI is generative AI. Whereas I've always viewed ML as a subcategory of "soft" AI. Is there a generally accepted difference between ML and AI?
From a scientific perspective, AI is the name of the general field that studies systems that try to exhibit intelligent behavior, and ML is a strict subset of that - it's the one that's currently more popular, but the study of, for example, planning algorithms or logic reasoning algorithms are other subfields of AI that have nothing to do with ML.
In my classes, I tell students that "AI" is the general-population term for machine learning, data science, and "traditional" AI (pathfinding, planning, logic proofs, etc). They all touch on a part of AI, but the differences are hard to explain to non-tech people. People will interchange ML with AI and still mean the umbrella "all of it" concept. That's one of the reasons the article talks about getting more concrete about the tools that are used in the workflow. Laypeople might think a switch statement is AI, and their potential customers may also think it's AI, but that doesn't mean it's really AI.
I'm actually booked to give an AI talk for business managers later this month and this is something I'm planning to cover. The article does a good job at outlining what to look for when you get those "AI for X" sales pitches.
Haven't sci-fi and computer science been using "AI" since the early days?
They weren't using it as a marketing term, and this was before PCs, so there wasn't much of a market.
When science fiction mentions artificial intelligence it is almost always in reference to what we now call general artificial intelligence. Which is very different than the machine learning we now call AI.
I think that's a good way to look at it. Math is a way to stats, stats and math are a way to do data science, data science is a foundation of machine learning, and machine learning is a tool to create AI.
The worst case of AI marketing that I have seen recently was an interview where the interviewee was describing ChatGPT 4’s capabilities. He was describing the model as having an IQ of 180 and comparing it to Einstein’s alleged IQ as well as ChatGPT 3, which had a lower IQ.
The subjectivity of IQ combined with the leading premise of being able to quantify a model’s performance with it is extremely disingenuous.
I can’t find a link, but I’ll share one if I do. I believe it was with someone in the C-suite at OpenAI.
1. My data is gobbled up without any real choice. And, when a org is being sly about using data for training, you won't even know until they come out with "Super AI Cloud" garbage.
2. My data is forever part of some black box ELIZA that I can never purge. The only way to do that now is to delete the training data and retrain. (lolnope, they won't do that per deletion request)
3. This ELIZA black box will at some later date spit out my personal information. And again, I can't even interrogate it to see my data and how it uses my data.
4. There is no real consent on acquisition, ingestion, or processing.
5. The damned thing isn't deterministic. Rerunning gives different results, which is exactly the opposite of what I want from a computer, and it destroys any sort of troubleshooting.
6. I don't trust the creators of these AI black boxes. We don't know the rules that are put in. Only way to really test is to do a massive probabilistic test of thousands or millions of things, and you MIGHT find the problems.
I don’t get what people are trying to say when they say these kinds of things about AI. That human-level writing is as simple as a linear regression? That we could’ve had computer programs capable of human-level writing decades ago? Have they not used these AIs enough to see how powerful they are? Are they seeing the bad outputs and thinking that AIs are always doing that poorly?
Like seriously, if you’re telling me that it was obvious that a “linear regression” could pass the LSAT I’ve got a macvlan to sell you.
Formally it's a generalized linear model with a constructed feature set.
A "kitchen sink" regression with enough polynomial terms (x^2, x^3, etc.) and interaction terms (ab, (ab)^2, etc.) will be a function approximator the same way a neural net is.
The computational mechanics are different (there's a reason we don't use it), but in the land of infinite computational power it can be made equivalent.
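A minimal numpy sketch of that "kitchen sink" idea (toy target function and a hand-picked polynomial degree, purely illustrative): an ordinary least-squares fit over constructed polynomial features approximates a nonlinear function, which is the function-approximation point being made above.

```python
# "Kitchen sink" regression: a model that is linear in its parameters, fitted
# over constructed polynomial features, acts as a function approximator.
# Toy example, numpy only.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
y = np.sin(3 * x) + 0.05 * rng.standard_normal(x.size)   # nonlinear target + noise

# Constructed feature set: 1, x, x^2, ..., x^9.
X = np.vander(x, N=10, increasing=True)

# Ordinary least squares over those features.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ coef

print("max abs error vs sin(3x):", float(np.max(np.abs(y_hat - np.sin(3 * x)))))
# The fit tracks sin(3x) closely even though the model is "just" linear in its
# coefficients; all of the nonlinearity lives in the feature construction.
```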
>The computational mechanics are different (there's a reason we don't use it), but in the land of infinite computational power it can be made equivalent.
In the land of infinite computational power every computation is just a series of 1s and 0s added and subtracted. You can implement everything with just few more operations. But we don't live in a land of infinite computational power and it took us (as humanity) quite a while to discover things like transformer models. If we had the same hardware 10 years ago would we have discovered them back then? I very much doubt it. We didn't just need the hardware, we needed the labelled data sets, prior art in smaller models etc.
Personally I think current AI/ML (LLMs, ESRGANs, and diffusion models) has huge potential to increase people's productivity, but it will not happen overnight and not for everyone. People have to learn to use AI/ML.
This brings me to the "dangers of AI". I laugh at all these ideas that "AI will become sentient and it will take over the world", but I'm genuinely fearful of a world where we've become so used to AI delivered by a few "cloud providers" that we cannot do certain jobs without it. Just like you can't be a modern architect without CAD software, there may come a time when you'll not be able to do any job without your "AI assistant". Now, what happens when there is essentially a monopoly on the market of "AI assistants"? They will start raising prices, to the point where paying your "AI assistant" bill may one day cost more than your taxes, and you'll have a choice of paying or not working at all.
This is why we have to run these models locally and advance local use of them. Yes, (not at all)OpenAI will give you access to a huge model for a fraction of the cost, but it's like the proverbial drug dealer who gives you the first hit for free: you'll more than make up for the cost once you're hooked. The "danger of AI" is that it becomes too centralised, not that it becomes "uncontrollable".
No, it's not a linear regression, but it is at its heart optimization.
And yes, that is indeed like saying a computer is just a bunch of zeroes and ones and logic gates. It's true and beautiful and profound but nearly useless from a practical perspective. And the wonder, like computers when you scale the numbers of transistors to the billions, is that when you scale the number of parameters to the billions, you end up with something amazing.
Yeah but the people making these comments aren’t trying to point to the wonder of how simple mathematics can underpin large complex systems, they’re doing the opposite: Trying to trivialize the system and its immense potential for good and bad by pointing out that it uses simple mathematics under the hood (and thus can’t be that amazing).
No, they're not trying to deny the system is amazing. They're saying that it's not magic, and that we will soon be able to create comparably amazing systems ourselves.
The problem is that the "intelligence" word causes more harm than good. It makes the whole world afraid of what is in fact just matrix multiplications.
We don't know what "intelligence" means, but we know it isn't matrix multiplication, or brute force algorithms, otherwise gears could be called intelligent.
I understand the importance of selling. But selling shouldn't be confused with deceiving consumers. It's hard to accept our work is used to create general panic for the sake of money.
>We don't know what "intelligence" means, but we know it isn't matrix multiplication, or brute force algorithms, otherwise gears could be called intelligent.
We know no such thing. The simplicity of the basic operations does not necessarily constrain the complexity of the whole system composed of those operations. We compose simple operations into complex units all the time.
That "it isn't matrix multiplication" argument is completely equivalent to "no computer can do it", and to "nobody can ever understand it". And is practically equivalent to "you need a soul to have intelligence".
The same applies to "brute force algorithms" and "otherwise gears could do it".
It's very likely that the current crop of LLMs do the wrong set of matrix multiplications. (If you ask me, it's a certainty.) But that doesn't change the fact that matrix multiplications can do anything.
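To make the "matrix multiplications can do it" point concrete, here is a toy example (hand-picked weights, numpy assumed available): two matrix multiplications plus a max(0, x) compute XOR, something no single linear map can do.

```python
# XOR computed with two matrix multiplications and a max(0, x).
# Hand-picked weights; numpy assumed available.
import numpy as np

W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0], [-2.0]])

def xor_net(x):
    hidden = np.maximum(0.0, x @ W1 + b1)   # matrix multiply, then ReLU
    return hidden @ W2                      # second matrix multiply

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, xor_net(np.array(x, dtype=float))[0])
# Prints 0, 1, 1, 0 -- XOR, which no single linear map can produce.
```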
>We don't know what "intelligence" means, but we know it isn't matrix multiplication
I don't, so how do you know? And to be honest, many LLMs seem to show intelligence to some degree, at least when they solve complex and novel problems I give them in a random language and using some library N; that feels pretty intelligent to me, and if that's not intelligence then humans aren't intelligent either.
At least in the neuroscience that we know today, most [0] of the communication between neurons is done in the Fourier domain. In that, it's the frequency of firing events that matters, not that a neuron fired at all.
[0] by no means is it exclusive. The brain is really complicated and there are edge cases all over the place.
The Fourier transform converts between the time domain and the frequency domain. The Fourier transform is linear, which means it can be implemented by multiplying your time-domain signal by a matrix of complex Fourier coefficients. Usually we don't do it this way, because the FFT is faster than matrix multiplication, but the result is the same.
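A quick sanity check of that claim, assuming numpy is available: build the DFT matrix explicitly, multiply it against a signal, and compare with the FFT.

```python
# The Fourier transform as a matrix multiplication: same numbers as the FFT,
# just computed the slow O(n^2) way instead of O(n log n). numpy assumed.
import numpy as np

n = 8
signal = np.random.default_rng(1).standard_normal(n)

# DFT matrix: F[k, t] = exp(-2*pi*i*k*t / n)
k = np.arange(n).reshape(-1, 1)
t = np.arange(n).reshape(1, -1)
F = np.exp(-2j * np.pi * k * t / n)

by_matrix = F @ signal
by_fft = np.fft.fft(signal)

print(np.allclose(by_matrix, by_fft))   # True: identical result, different cost
```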
There has been a long discussion in the neuroscience community how neuronal firing encodes information. The two extremes of the proposed solutions are temporal coding and rate coding, but there are intermediate positions such as held by the parent poster (and myself, and nowadays most other people that care about this niche).
Wulfram Gerstner's book has a nice discussion of the theory and limitations of different proposed coding schemes, which might interest the technical audience here. In particular, I think the parent poster alluded to the ideas presented in the last section here: https://neuronaldynamics.epfl.ch/online/Ch7.S6.html
Really and truly this "discussion" should have been put to rest a long time ago.
Neurons encode information with the temporal precision needed to encode that information. Since most objects are somewhat permanent, the temporal precision can be very low, yielding effectively a rate code. For tasks for which the (relative) timing is critical, e.g. the encoding of auditory sequences, the temporal precision of the neural code is high, yielding effectively a temporal code. Some problems are neither here nor there, in which case the neuron will encode as much information as possible at a low rate / temporal resolution, and then encode diminishing amounts of information at successively higher temporal resolutions (the Fourier code alluded to by the parent poster). Higher temporal precision typically requires more control, larger circuits, and hence more energy expenditure.
Is there a certain way to encode typically "continuous" information, like a certain tilt of a line in an image?
I ran into this over ten years ago in an experiment on temporary plasticity of sensory input and I didn't find a quick answer before I had to leave the topic (ok, it left me)
> The problem is the "intelligence" word causes more harm than value.
By this measure "learning" could be inaccurate too. Does a human learn if they just commit something to memory? Programming is chock-full of people who know the code but haven't "learned" to program. Intrinsically we all know there's something deeper to learning than just memorizing or retaining information.
I think GP has a point. Marketing terms exist to be relatable to the user, whether accurate or not. And we can, and to my knowledge historically do, retain the story of how a marketing term maps onto the actual technology.
I think they can be right but also missing the point. Inaccurate but relatable statements can miscommunicate risk. If a marketer calls something “self driving intelligence” when an engineer prefers the more precise term “driver assistance software” it can lead to bad consequences when the consumer doesn’t understand the nuance.
First, "artificial intelligence" is the name of the field of study that tries to build systems that act intelligently, not a term for referring to the produced systems. Even if some outcome isn't (or wasn't) yet particularly intelligent, the name of the field is entirely appropriate since that's the goal which it is striving to approach.
Second, the general consensus of what "intelligence" means in the AI field is approximately the Legg&Hutter "Intelligence measures an agent’s ability to achieve goals in a wide range of environments." from https://arxiv.org/abs/0706.3639 .
It's worth noting that this definition focuses solely on the capabilities of the system and intentionally completely disregards how it's built. Gears cannot be called intelligent iff it is impossible to build a system that acts intelligently solely from gears, and if it turns out that it is possible to build a gear arrangement that soundly demonstrates intelligent behavior, then those gears definitely could be called intelligent.
And it's also a clearly nonbinary definition - i.e., it's not about whether something is or isn't intelligent, but a scale to measure how intelligent it is; and an observation that a particular system (or human!) demonstrates poor intelligence doesn't prevent us from discussing that it's more intelligent than something or someone who is even worse.
I do see where you are coming from, but perhaps it is good that people think of it as proper AI (with all the inherent concerns) so that we enact rules and regulations before there is a runaway ML arms race that actually does give us AGI. Will we be ready then otherwise?
If you built a complex series of gears that took input, revolved through different sets of millions of gears, and produced meaningful output, I would consider that a form of intelligence.
Agreed. My understanding of NNs is that, more than matrix multiplication, they're general-purpose solvers. You could write the same thing yourself; it would just take you ages.
So with unlimited budget and time, can you write something complex enough to seem intelligent? I think so. Is what you wrote actually intelligent? No idea, and I think that’s more philosophy than I’m interested in.
General purpose solving functions will only get better with time and already solve more than we can write solvers for by hand. I don’t suspect there’s a limit here, assuming we can keep improving in ways to scale its compute and scale the function goals.
You are basically describing an automaton; some of them were very elaborate and able to write with a pen on paper. People probably mistook that for intelligence. Basically, things have improved, but not changed.
Whether you build it with gears, transistors on silicon wafer, or biological neurons, it doesn't matter. If complex enough arrangement of enough neurons can give rise to intelligence, then so can enough transistors or enough gears.
"It makes the whole world afraid of what is in fact just matrix multiplications" is such a reductionist view that doesn't really capture the reality of AI taking people's jobs and reshaping the economy. That's like saying electricity is just "electrons moving through a wire"
You and the comment you're referring to are both making good comments, but may not be talking apples to apples.
I believe the post you're replying to is just saying that matrix multiplications (as useful as they are), aren't going to become Skynet.
Your post is pointing out that various AI techniques are replacing loads of jobs for folks that still need to make ends meet, but likely don't have the skill sets to magically become a web developer overnight. As a result, AI is pretty dangerous like many disruptions throughout history. Only in the past, there was usually still plenty of need for labor.
These matrix multiplications won't become Skynet, but that has nothing to do with the fact that they are matrix multiplications.
The better we get at understanding how various cognitive functions that once seemed mysterious can be accomplished by matrix multiplications, the easier it becomes to eventually create Skynet.
“Just a bunch of matrix multiplications” is also a bit odd because lots of jobs have been automated out of existence by tools way less complicated than matrix multiplications.
The weird thing, I think, about these matrix multiplications, seems to be that they might be coming for the jobs of people who are generally in the same field that invented them (programmers) and also they might be coming for the jobs of reporters, creatives, and hot-take authors. People with bigger platforms than factory workers.
> We don't know what "intelligence" means, but we know it isn't matrix multiplication
What makes you believe “intelligence” can’t emerge from lots of matrix multiplications? Unless you believe in some more mystical explanation, human intelligence is just electrochemical processes not that unlike computers.
Nothing discussed in this article is related to the concerns people have with the forefront of AI research. No one is afraid that the first computer that can convince >50% of the world that it's sentient will be a SaaS app, or at least I hope not!
I think the disgust can be justified in some circumstances, when marketing is used to oversell or just outright deceive, in this case and others which involves taking advantage of common ignorance.
Autopilot is a particularly egregious case, though. Naming an experimental, very limited car feature after the device that flies the plane most of the time, and has for many years, really ought to have been illegal. It sets wildly inappropriate expectations.
Most of the other names are just random nice sounding names that they made up.
It is more like if they called DayQuil “Tracheotomax,” because you know, it helps you breathe!
I'm not an Elon fan, but the autopilot in a plane does basically the same thing (if we are saying that planes and cars do the same thing, which is transport people) as the "autopilot" from Tesla and many other manufacturers does on highways. Not an aviation expert, but I don't think AP is controlling take-offs and landings. It controls cruise speed, altitude and direction. A car AP on a highway does the same (minus altitude).
Also, a plane's autopilot has situations where it will yell at you to take over, and situations where it's expected that the pilot recognizes problems and overrides the autopilot.
It's a good analogy from a technical standpoint, with the "minor" difference that in most situations pilots have a lot more time to react than the driver of a car. Which makes it very different from a consumer standpoint
> I'm not an Elon fan, but the autopilot in a plane does basically the same thing (if we are saying that planes and cars do the same thing, which is transport people) as the "autopilot" from Tesla and many other manufacturers does on highways.
Even if true, when you talk about consumer marketing, none of that matters.
I don't have name for it, but there's a whole class of bad-faith, deliberately misleading statements that exploit the difference between common and technical understandings (e.g. say something you know most people will inaccurately interpret as X, then fall back to the much narrower Y when challenged).
I think it is pretty well understood, for example if you look at something like the IEEE code of ethics, that technical professionals have an obligation to honesty beyond just not lying; a requirement to communicate in a way that helps the general public clear up likely misunderstandings.
But there is a very important difference, and that is that airplane autopilots are certified with extremely expensive, years-long tests to demonstrate failure rates of 10^-9 per hour (once in a billion hours) or even stricter. Whereas a computer vision model is considered "good enough" by the car industry after just a few hundred hours of "self driving" without major accidents, and this is in spite of the fact that roads are full of elements that are definitely a lot more unpredictable (e.g. other drivers) than what airplanes usually encounter during landing (that is, a mostly empty runway).
If a plane's autopilot steers the plane into a cliff (and such cases have happened many times, 'controlled flight into terrain' is a thing, and in quite a few of those cases the autopilot was involved - for example, in both cases here https://www.boldmethod.com/learn-to-fly/aeromedical-factors/... it seems it was turned on during impact), we don't consider it a fault of the autopilot; it's working as intended, as its effective job is to keep the plane straight and level, not to make smart decisions about how to fly - that's up to the pilots.
In a similar manner, if some computer system in a car holds the steering wheel straight and the speed constant, it's working just as well as a plane's autopilot even if it crashes into a parked car at full speed.
True, but that's where the difference in intended usage is problematic. In the sky, autopilot can't accidentally hit another plane or barrier because it got confused about the road paint or construction signs. The stakes are a lot higher on the ground, even though it technically controls less of the vehicle than autopilot in a plane does (acceleration, braking, and steering compared to elevator, trim, roll, pitch, throttle, vector) since cars don't have to worry about 3 dimensional movement much while airplanes do.
Also not an aviation expert, but it looks like airplane autopilots can do everything except taxiing and taking off (they can do landings now I guess, in fact, just looking on Wikipedia it sounds like they are preferred in low visibility situations for some airports, because they have more sensors and the airports have maps/beacons to help them out).
Apparently it also must be engaged above 28000 ft. Imagine if autonomous vehicles were so good that they were required to be used while going at speed on the highway.
While that’s true (on a plane, I’ve seen simply keeping the wings level labelled “Autopilot” without it even maintaining altitude), it’s still a travesty.
a) Pilots have certification and training which includes proper use of whatever ‘autopilot’ that plane has.
b) Even so, the name still “over-promises” in an arena where doing so risks lives. So it should never have been called that even on a plane. Let alone on a car sold to consumers with little regulation.
What you say is true, however in this case, there's no communication of benefits. AI is not a benefit, it's an attention-grabbing buzzword.
I lament that people fall for buzzwords, hollow words that all mean the same thing to a fool - "Oh it's got <BUZZWORD>, that means GOOD! Just take my money!".
But I don't lament for long, and when I'm done lamenting, that's when I start selling.
“The characteristic feature of the loser is to bemoan, in general terms, mankind’s flaws, biases, contradictions, and irrationality—without exploiting them for fun and profit.”
I love the implicit admission here that marketing is BS. Either that or you didn't read the blog post - I don't think his point is that we're calling it the wrong thing, but rather that it opens the door for a lot of people to sell something they aren't actually doing.
I think this one annoys people because when engineers hear AI they think of a bunch of techniques that mostly didn't work and caused an AI winter. When they hear ML they think of the latest and greatest techniques that have moved the needle on some of the hardest problems in the space.
Machine learning itself used to be called "data mining" which is perhaps a more honest description of what it's all about. It amounts to a non-rigorous application of methods derived from statistics, e.g. 'deep' neural networks can be understood as a hierarchical (hence 'deep') version of regression models.
Yep. A lot of engineers don't like the idea that most people aren't engineers, don't think like them, and don't appreciate things the way they do. You can't sell "machine learning solutions" unless your target audience is developers building ML systems.
I have to disagree. "Machine learning" could just as easily be a marketing buzzword. Artificial Intelligence is just sexier because it's misleading (while being broad enough to be acceptable).
If machine learning were the broadly accepted term to refer to these techniques in society, the people who currently complain that "AI is misleading because they're not intelligent" would instead be complaining that "ML is misleading because they're not learning". I know this because I have already seen people complaining that ML is misleading because "they're not learning".
The reality is that no matter what structure the software takes, or what outputs it achieves, it can't falsify a fundamentally unfalsifiable belief that machines cannot be like people in ways that could imply any sort of social recognition of that status.
AI is misleading because it's too broad, and consumers confuse it with AGI, which is far more powerful (and not yet possible). From a marketing perspective this is a feature, not a bug, since it gives off the appearance of being a much bigger deal than it is.
Are we really going to pretend that marketing departments / companies aren't fully aware of and taking advantage of this misunderstanding? This just seems like common sense to me.
Intelligence covers everything from the behaviors of a single celled organism, to the meta organism of human society. There is no misunderstanding by marketing. Intelligence covers everything they are doing.
What we need is new words for behaviors that are much more focused to what the capabilities we want to achieve are.
On the contrary, the casual lies inherent to marketing - to capitalism - are absolutely something which should disgust you. That casual dishonesty is endemic, or that it elides complexity which may escape the average intelligence - these are not reasons to accept it. Marketing is disgusting.
> I’ve been told that a product was “driven by AI” only to find out it was driven by “if-then” statements.
At best, a system like that is an "expert system." It's not artificially-intelligent in any way.
This, BTW, is why many developed countries have strict labelling laws for food and trademarks. Otherwise, people will call something whatever they can get away with in order to sell it, even if it's not what they claim they're selling.
> At best, a system like that is an “expert system.” It’s not artificially-intelligent in any way.
“Expert system” was adopted for that particular form of AI (which it was also considered when it was developed) because “expertise” is a combination of both intelligence and knowledge.
So, if “AI” is misleading for it, “expert system” is more misleading.
Chemical names aren't the best examples to use, as even within the scientific community it's extremely rare to use full IUPAC systematic names for well-known organic molecules. The fancy name for caffeine would be 1,3,7-trimethylxanthine, not 1,3,7-trimethyl-3,7-dihydro-1H-purine-2,6-dione.
Half the ordinary people I talk to are spooked by "AI", combined with the general deterioration of services by big tech, invasive agreements, and a slow, growing awareness of what surveillance might look like.
Not the parent, but yes, with my consumer hat on, ML is the method - how it learns is an implementation detail I shouldn't care about - while AI is the benefit - it applies something resembling intelligence to help address my needs.
The word "AI" sells consumers an abstract image of themselves as having something "intelligent" and "smart" at their service + edgy feeling of having almost person at your complete command but without(?) the moral issues of slavery.
If you take away buzzwords and apply good product design, when ML-based stuff works it's invisible powering features like "autocomplete" or "voice control" or "internet search".
But "autocomplete", "voice control" and "internet search" as we know them are terms that appeared relatively recently for capabilities that people even 30 years ago would have said are in the realm of sci-fi. It sounds to me like just moving-the-goalposts such that when something is proven to work well enough to have a name, it becomes "plain old tech" rather than AI. Is there any computer capability that when widely released you'd be ok with calling AI?
>Sorry, which is the methods and which is the benefits? In your mind is "AI" a benefit?
I think you're demonstrating the issue by focusing on the name again: "AI" and "Machine Learning" are the same thing. Engineers care about the method, so ML is more appropriate; it describes what they are doing. Consumers care about outcomes, so "AI" is used because it's familiar.
> I think you're demonstrating the issue by focusing on the name again: "AI" and "Machine Learning" are the same thing. Engineers care about the method, so ML is more appropriate; it describes what they are doing. Consumers care about outcomes, so "AI" is used because it's familiar.
So how is "AI" an outcome benefitting me as customer?
That's my point. They are the same thing. But the person I'm responding to implies that one is a benefit and the other one is a method. I was just asking which is which, seeing as I don't see the difference. They're both different names for the same set of tools, in my opinion.
I'd also add that, as much as many engineers hate the fact, marketing is very necessary to sell things to the general market. It's also a real skill set to figure out how to market things well. Even more so when trying to sell technical capabilities.
ML is the intersection of the set of successful things and the set of things we call AI today.
There are things we used to call AI, like inference engines, that were and are phenomenally successful (not to mention easier to implement). Type inference in modern programming languages, for instance, still uses the GOFAI technique of unification to solve for unspecified types of variables in a program.
That's why I found it funny the article said "There are companies claiming their products are powered by AI, when they're really powered by IF statements." Back in the day, AI was itself powered by IF statements.
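For the curious, the unification step at the heart of that kind of type inference fits in a few lines. This is a toy sketch with a made-up type representation (strings for type variables, tuples for constructors), not any particular compiler's implementation:

```python
# Toy unification, the GOFAI workhorse behind Hindley-Milner style type inference.
# Representation (invented for this sketch): a string like "t0" is a type
# variable; a tuple like ("int",) or ("fn", arg, ret) is a type constructor.

def resolve(ty, subst):
    while isinstance(ty, str) and ty in subst:
        ty = subst[ty]
    return ty

def unify(a, b, subst):
    a, b = resolve(a, subst), resolve(b, subst)
    if a == b:
        return subst
    if isinstance(a, str):                    # unbound type variable: bind it
        return {**subst, a: b}                # (occurs check omitted for brevity)
    if isinstance(b, str):
        return {**subst, b: a}
    if a[0] == b[0] and len(a) == len(b):     # same constructor: unify children
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
        return subst
    raise TypeError(f"cannot unify {a} with {b}")

# An unknown function ("fn", "t_arg", "t_ret") is used where an int -> bool
# function is required; unification solves for both unknowns.
print(unify(("fn", "t_arg", "t_ret"), ("fn", ("int",), ("bool",)), {}))
# {'t_arg': ('int',), 't_ret': ('bool',)}
```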
> There are things we used to call AI, like inference engines, that were and are phenomenally successful (not to mention easier to implement). Type inference in modern programming languages, for instance, still uses the GOFAI technique of unification to solve for unspecified types of variables in a program.
Okay. I don't see how these application domains have anything to do with “AI” in the sense of something that resembles human reasoning in any given subarea (like pattern recognition, language).
Do you have any examples? I'm charged with figuring out how my organization can benefit from AI, and hearing that there are non-ML options is very relieving.
I assume it's not quite so simple as to include any algorithm, right? TFA even sort of refutes that idea, saying: "I've been told that a product was 'driven by AI' only to find out it was driven by 'if-then' statements."
There is also a large set of algorithms for adversarial search problems, such as playing board games, that traditionally don't use machine learning. Minimax is the simplest one, and then there are negamax, alpha-beta pruning, negascout, etc. I believe Stockfish (the world's strongest chess program, or close to it) has actually improved its evaluation function using machine learning, but this is a fairly recent development.
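A bare-bones sketch of the simplest of those, plain minimax (the game interface here - is_over, score, moves, apply - is assumed for illustration; a real engine would add alpha-beta pruning and a tuned evaluation function):

```python
# Bare-bones minimax for a two-player, zero-sum game. The `game` object is an
# assumed interface for this sketch: is_over(state), score(state) (from the
# maximizing player's point of view), moves(state), and apply(state, move).

def minimax(game, state, depth, maximizing):
    if depth == 0 or game.is_over(state):
        return game.score(state)
    values = (minimax(game, game.apply(state, m), depth - 1, not maximizing)
              for m in game.moves(state))
    return max(values) if maximizing else min(values)

def best_move(game, state, depth):
    # Pick the move whose resulting position minimax rates highest;
    # after we move, it is the opponent's (minimizing) turn.
    return max(game.moves(state),
               key=lambda m: minimax(game, game.apply(state, m), depth - 1, False))
```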
Any form of constraint solving tech. SAT solvers (used in hardware-synthesis, software verification, math proofs, etc.), Mixed Integer Solvers (usually sold for tens of thousands of dollars) that are used for hardcore optimization problems, Google's Operations Research toolkit (OR-tools), etc.
Unlike most algorithms, these things are general purpose, they can solve any* NP-complete problem (*usually in a useful amount of time).
My manager refers to these things as "Machine Reasoning" in contrast with "Machine Learning", since they start from the rules instead of from examples.
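To give a flavor of that "reasoning from rules" style, here is a toy backtracking SAT solver in plain Python. Real solvers (including the ones behind tools like OR-tools) add unit propagation, clause learning and clever heuristics, so treat this purely as an illustration:

```python
# Toy SAT solver: a CNF formula is a list of clauses, each clause a list of
# literals (positive int = variable, negative int = its negation).
# Plain backtracking; real solvers add unit propagation, clause learning, etc.

def violated(clause, assignment):
    # A clause is violated only when every one of its literals is assigned and false.
    return all(abs(lit) in assignment and assignment[abs(lit)] != (lit > 0)
               for lit in clause)

def solve(clauses, assignment=None):
    assignment = dict(assignment or {})
    unassigned = {abs(lit) for clause in clauses for lit in clause} - set(assignment)
    if not unassigned:
        # Full assignment: satisfiable iff no clause is violated.
        return assignment if not any(violated(c, assignment) for c in clauses) else None
    var = min(unassigned)
    for value in (True, False):
        assignment[var] = value
        if not any(violated(clause, assignment) for clause in clauses):
            result = solve(clauses, assignment)
            if result is not None:
                return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(solve([[1, 2], [-1, 3], [-2, -3]]))   # e.g. {1: True, 2: False, 3: True}
```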
No personal experience here, but my understanding is that expert systems were considered a type of AI and were in the early days mostly implemented as a collection of "if-then" statements.
If you've used a GPS navigator, you've used AI; pathfinding is a type of AI. Sawmills use planning algorithms to extract the maximum number of useful planks from lumber; that's AI too.
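A small sketch of that pathfinding kind of AI, on a toy grid map invented for the example: A* search, essentially what a GPS navigator runs over a road graph.

```python
# A* pathfinding on a toy grid -- the kind of "AI" inside a GPS navigator,
# just with grid cells instead of road segments. Map invented for the example.
import heapq

GRID = ["....#...",
        ".##.#.#.",
        "....#.#.",
        ".##...#.",
        "........"]          # '#' marks a blocked cell

def neighbors(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != '#':
            yield nr, nc

def a_star(start, goal):
    def h(p):                 # admissible heuristic: Manhattan distance to goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        for nxt in neighbors(pos):
            heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None               # no route exists

print(a_star((0, 0), (4, 7)))   # a shortest route around the walls
```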
AI in games (well at least NPC style AI) is also kind of unique in that it's actually good to try to trick the end users into thinking it's more intelligent than it is.