Well, given the specific way you asked that question, I confirm your self-assertion - and am quite certain that your level of Artificiality converges to zero, which would make you a GI without A...
- You stated that you "feel" generally intelligent (A's don't feel and don't have an "I" that can feel)
- Your nuanced, subtly ironic and self-referential way of formulating clearly suggests that you are not a purely algorithmic entity
A "précis" as you wished:
Artificial — in the sense used here (beyond the usual "deliberately built/programmed system" etc.) — means algorithmic, formal, symbol-bound.
Humans as "cognitive systems" share some of these traits, of course - but obviously, there seems to be more to us than that.
I don't see how that's obvious. I'm not trying to be argumentative here, but it seems like these arguments always come down to qualia, or the insistence that humans have some sort of 'spark' that machines don't have, therefore AGI is not possible since machines don't have it.
I also don't understand the argument that "Your nuanced, subtly ironic and self-referential way of formulating clearly suggests that you are not a purely algorithmic entity". How does that follow?
What scientific evidence is there that we are anything other than a biochemical machine? And if we are a biochemical machine, how is that inherently capable of more than a silicon based machine is capable of?
> I also don't understand the argument that "Your nuanced, subtly ironic and self-referential way of formulating clearly suggests that you are not a purely algorithmic entity". How does that follow?
It doesn't follow.
Trivially demonstrated by the early LLM that got Blake Lemoine to break his NDA: it, too, emitted words which suggested to Lemoine that the LLM had an inner life.
Or, indeed, the output device y'all are using to read or listen to my words, which is also successfully emitting these words despite very much only following an algorithm that simply recreates what it was told to recreate. "Ceci n'est pas une pipe", etc. https://en.wikipedia.org/wiki/The_Treachery_of_Images
Oh no, I am not at all trying to find an explanation of why this is (qualia etc.). There is simply no necessity for that. It is interesting, but not part of the scientific problem that I tried to find an answer to.
The proofs (all three of them) hold without any explanatory effort concerning causalities around human frame-jumping etc.
For this paper, it is absolutely sufficient to prove that a) this cannot be reached algorithmically and that b) evidence clearly shows that humans can (somehow) do this, as they have already done this (quite often).
Alternatively, in order to not be contradictory doesn't it require the assumption that humans are not "algorithmic"? But does that not then presuppose (as the above commenter brought up) that we are not a biochemical machine? Is a machine not inherently algorithmic in nature?
Or at minimum presupposes that humans are more than just a biochemical machine. But then the question comes up again, where is the scientific evidence for this? In my view it's perfectly acceptable if the answer is something to the effect of "we don't currently have evidence for that, but this hints that we ought to look for it".
All that said, does "algorithmically" here perhaps exclude heuristics? Many times something can be shown to be unsolvable in the absolute sense yet readily solvable in practice, with an extremely high success rate, using some heuristic.
OP seems to have a very confused idea of what an algorithmic process means... they think the process of humans determining what is truthful "cannot possibly be something algorithmic".
Which is certainly an opinion.
> whatever it is: it cannot possibly be something algorithmic
> Maybe OP should have looked at a dictionary for what certain words actually mean before defining them to be something nonsensical.
Making non-standard definitions of words isn't necessarily bad, and can be useful in certain texts. But if you do so, you need to make these definitions front-and-centre instead of just casually assuming your readers will share your non-standard meaning.
And where possible, I would still use the standard meanings and use newly made up terms to carry new concepts.
The model I am using is the conventional understanding of physics. What model are you using?
> language meaning is not immutable physics.
Our understanding of physics is not complete, so why would our model of it be final? No one is saying it is.
Everything we currently know about physics, all the experiments we've conducted, suggests the physical Church-Turing thesis is true.
If you want to claim that the last x% of our missing knowledge will overturn everything and reality is in fact not computable, you are free to do so, and this may well even be true.
But so far the evidence is not in your favor and you'd do well to acknowledge that.
> Alternatively, in order to not be contradictory doesn't it require the assumption that humans are not "algorithmic"? But does that not then presuppose (as the above commenter brought up) that we are not a biochemical machine? Is a machine not inherently algorithmic in nature?
No, computation is algorithmic, real machines are not necessarily (of course, even if algorithmic intelligence is ruled out, AGI still can't be; only AGI that does not incorporate some component with noncomputable behavior would be.)
> computation is algorithmic, real machines are not necessarily
The author seems to assume the latter condition is definitive, i.e. that real machines are not algorithmic, and then derives extrapolations from that unproven assumption.
> No, computation is algorithmic, real machines are not necessarily
As the adjacent comment touches on, are the laws of physics (as understood to date) not possible to simulate? Can't all possible machines be simulated, at least in theory? I'm guessing my knowledge of the term "algorithmic" is lacking here.
As far as we can tell, all the known laws of nature are computable. And I think most of them are even efficiently computable, especially if you have a quantum computer.
Quantum mechanics is even linear!
Fun fact, quantum mechanics is also deterministic, if you stay away from bonkers interpretations like Copenhagen and stick to just the theory itself or saner interpretations.
Using computation/algorithmic methods we can simulate nonalgorithmic systems. So the world within a computer program can behave in a nonalgorithmic way.
Also, one might argue that universe/laws of physics are computational.
> Also, one might argue that universe/laws of physics are computational.
Maybe we need to define "computational" before moving on. To me this echoes the clockwork universe of the Enlightenment. Insights of quantum physics have shattered this idea.
> Insights of quantum physics have shattered this idea.
Not at all. Quantum mechanics is fully deterministic, if you stay away from bonkers interpretations like Copenhagen.
And, of course, you can simulate random processes just fine even on a deterministic system: use a pseudo-random number generator, or just connect a physical hardware random number generator to your otherwise deterministic system. Compared to all the hardware used in our LLMs so far, random number cards are cheap kit.
Though I doubt a hardware random number generator will make the difference between dumb and intelligent systems: pseudo-random number generators are just too good, and generalising a bit, you'd need P=NP to be true for your system to behave differently with a good PRNG vs real random numbers.
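As a minimal sketch of why the swap is cheap (Python, purely illustrative): from the consuming code's point of view the interface is identical whether the bits come from a seeded, deterministic PRNG or from the OS entropy pool, which is typically fed by hardware noise sources:

```python
import random   # deterministic Mersenne Twister PRNG
import secrets  # OS entropy pool, fed by hardware noise sources where available

# Seeded PRNG: fully deterministic, identical output on every run.
prng = random.Random(42)
pseudo_bits = [prng.getrandbits(1) for _ in range(16)]

# OS randomness: not reproducible, draws on interrupt timing, hardware RNGs, etc.
true_bits = [secrets.randbits(1) for _ in range(16)]

# Downstream code sees plain bits either way; distinguishing a good PRNG
# from the real thing by its output alone is the hard part.
print(pseudo_bits)
print(true_bits)
```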
You can simulate a nondeterministic process. There's just no way to consistently get a matching outcome. It's no different than running the process itself multiple times and getting different outputs for the same inputs.
> For this paper, it is absolutely sufficient to prove that a) this cannot be reached algorithmically and that b) evidence clearly shows that humans can (somehow) do this, as they have already done this (quite often).
The problem with these kinds of arguments is always that they conflate two possibly related but non-equivalent kinds of computational problem solving.
In computability theory, an uncomputability result essentially only proves that it's impossible to have an algorithm that will in all cases produce the correct result to a given problem. Such an impossibility result is valuable as a purely mathematical result, but also because what computer science generally wants is a provably correct algorithm: one that will, when performed exactly, always produce the correct answer.
However, similarly to any mathematical proof, a single counter-example is enough to invalidate a proof of correctness. Showing that an algorithm fails in a single corner case makes the algorithm not correct in a classical algorithmic sense. Similarly, for a computational problem, showing that any purported algorithm will inevitably fail even in a single case is enough to prove the problem uncomputable -- again, in the classical computability theory sense.
If you cannot have an exact algorithm, for either theoretical or practical reasons, and you still want a computational method for solving the problem in practice, you then turn to heuristics or something else that doesn't guarantee correctness but which might produce workable results often enough to be useful.
Even though something like the halting problem is uncomputable in the classical, always-inevitably-produces-correct-answer-in-finite-time sense, that does not necessarily stop it from being solved in a subset of cases, or from being solved often enough by some kind of heuristic or non-exact algorithm to be useful.
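To make the distinction concrete, here is a minimal, purely illustrative Python sketch: a timeout-based "halting check" is worthless as an algorithm in the computability-theory sense, yet variants of it (test timeouts, watchdogs) are useful in practice every day:

```python
import multiprocessing

def halts_heuristic(fn, arg, timeout_s=1.0):
    """Guess whether fn(arg) halts by running it with a time budget.
    Not a decision procedure for the halting problem: it will misclassify
    slow-but-terminating programs. It is merely a heuristic that is right
    often enough to be useful."""
    p = multiprocessing.Process(target=fn, args=(arg,))
    p.start()
    p.join(timeout_s)
    if p.is_alive():
        p.terminate()
        p.join()
        return False  # guess: probably loops forever
    return True       # it halted within the budget

def terminates(n):
    while n > 1:
        n //= 2

def loops(_):
    while True:
        pass

if __name__ == "__main__":
    print(halts_heuristic(terminates, 10**6))  # True
    print(halts_heuristic(loops, 0))           # False -- a guess, not a proof
```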
When you say that something cannot be reached algorithmically, you're saying it's impossible to have an algorithm that would inevitably, systematically, always reach that solution in finite time. And you would in many cases be correct. Symbolic AI research ran into this problem due to the uncomputability of reasoning in predicate logic. (Uncomputability was not the main problem that symbolic AI ran into, but it was one of its problems.)
The problem is that when you say that humans can somehow do this computationally impossible thing, you're not holding human cognition or problem solving to the same standard of computational correctness. We do find solutions to problems, answers to questions, and logical chains of reasoning, but we aren't guaranteed to.
You do seem to be aware of this, of course.
But you then run into the inevitable question of what you mean by AGI. If you hold AGI to the standard of classical computational correctness, to which you don't hold humans, you're correct that it's impossible. But you have also proven nothing new.
A more typical understanding of AGI would be something similar to human cognition -- not having formal guarantees, but working well enough for operating in, understanding, or producing useful results in the real world. (Human brains do that well in the real world -- thanks to having evolved in it!)
In the latter case, uncomputability results do not prove that kind of AGI to be impossible.
Indeed. And it's fairly trivial to see that computability isn't the right lens to view intelligence through:
The classic Turing test takes place over a finite amount of time. Normally less than an hour, but we can arbitrarily give the interlocutor, say, up to a week. If you don't like the Turing test, then just about any other test interaction we can make the system undergo will conclude below some fixed finite time. After all, humans are generally intelligent, even if they only get a handful of decades to prove it.
During that finite time interaction, only a finite amount of interaction will be exchanged.
Now in principle a system could have a big old lookup table with all prefixes of all possible interactions as keys, and values are probability distributions for what to send back next (and how long to wait before sending the reply). That table would be finite. And thus following it would be computable.
Of course, the table would be more than astronomical in size, and utterly impossible to manifest in our physical universe. But computability is too blunt an instrument to formalise this with.
In the real universe, you would need to _compress_ that table somehow, eg in a human brain or perhaps in an LLM or so. And then you need to be able to efficiently uncompress the parts of the table you need to produce the replies. Whether that's possible and how are all questions of complexity theory, not computability.
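A toy rendering of that lookup-table construction (purely illustrative -- the real table over all possible finite interactions could never be materialised in our universe, which is exactly the point):

```python
import random

# Keys: conversation prefixes. Values: probability distributions over replies.
# The full table would be finite but more than astronomical; the interesting
# question is whether it can be compressed and queried efficiently (a brain,
# an LLM, ...), which is complexity theory rather than computability.
lookup_table = {
    ("Hello",): {"Hi there!": 0.9, "Go away.": 0.1},
    ("Hello", "Hi there!", "Are you conscious?"): {"Wouldn't you like to know.": 1.0},
}

def reply(prefix):
    dist = lookup_table.get(tuple(prefix), {"...": 1.0})
    options, weights = zip(*dist.items())
    return random.choices(options, weights=weights)[0]

print(reply(["Hello"]))
```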
See Scott Aaronson's excellent 'Why Philosophers Should Care About Computational Complexity': https://arxiv.org/abs/1108.1791
Consciousness is an issue. If you write a program to add 2+2, you probably do not believe some entity poofs into existence, perceives itself as independently adding 2+2, and then poofs out of existence. Yet somehow, the idea of an emergent consciousness is that if you instead get it to do 100 basic operations, or perhaps 2^100, then suddenly this becomes true? The reason one might believe this is not because it's logical or reasonable - or even supported in any way - but because people assume their own conclusion. In particular, if one takes a physicalist view of the universe then consciousness must be a physical process, and so it simply must emerge at some sufficient degree of complexity.
But if you don't simply assume physicalism then this logic falls flat. And the more we discover about the universe, the weirder things become. How insane would you sound not that long ago to suggest that time itself would move at different rates for different people at the same "time", just to maintain a perceived constancy of the speed of light? It's nonsense, but it's real. So I'm quite reluctant to assume my own conclusion on anything with regards to the nature of the universe. Even relatively 'simple' things like quantum entanglement are already posing very difficult issues for a physicalist view of the universe.
>Yet somehow, the idea of an emergent consciousness is that if you instead get it to do 100 basic operations, or perhaps 2^100 then suddenly this becomes true
Why not? You can do a simple add with assembly language in a few operations. But if you put millions and millions of operations together you can get a video game with emergent behaviors. If you're just looking at the additions, where does the game come from? Is it still a game if it's not output to a monitor but an internal screen buffer?
You're not speaking of a behavior but of a "thing." Your consciousness sits idly inside your body, feeling as though it's driving all actions of its own free will. There's no necessity, reason, or logical explanation for this thing to exist, let alone why or where it comes from.
No matter how many instructions you might use to create the most compelling simulation of a dragon in a video game, neither that dragon nor any part of it is going to poof into existence. I'm sure this is something everybody would agree with. Yet with consciousness you want to claim 'well, except it's consciousness, yeah, that'll poof into existence.' The assumption of physicalism ends up requiring people to make statements that they themselves would certainly call absurd if not for the fact that they are forced to make such statements because of said assumption!
And what is the justification for said assumption? There is none! As mentioned already quantum entanglement is posing major issues for physicalism, and I suspect we're really only just beginning to delve into the bizarro nature of our universe. So people embrace physicalism purely on faith.
>There's no necessity, reason, or logical explanation for this thing
I mean, I disagree. It's an internal virtual 'playground' you can bounce ideas off of and reason against. Obviously it imparts some survival benefits to creatures that have one at this point in evolution.
This gets to the issue. What is bouncing ideas off of yourself and reasoning against such? Well, it's nothing particularly complex. A conditional is its most fundamental incarnation - add some variables and weights and you have just what you described in a few lines of code. Of course you don't think this poofs a consciousness into existence.
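For concreteness, something along these lines is presumably what "a few lines of code" means here (a purely illustrative sketch, not anyone's actual model of cognition): candidate ideas, weighted criteria, a conditional at the end.

```python
# A deliberately trivial 'internal playground': bounce a few candidate ideas
# against weighted criteria and act on the best one. Nobody would claim these
# lines poof a consciousness into existence -- which is the point being made.
ideas = ["climb the tree", "freeze", "run"]
weights = {"speed": 0.6, "safety": 0.4}
scores = {
    "climb the tree": {"speed": 0.3, "safety": 0.8},
    "freeze":         {"speed": 0.1, "safety": 0.5},
    "run":            {"speed": 0.9, "safety": 0.4},
}

def evaluate(idea):
    return sum(weights[c] * scores[idea][c] for c in weights)

best = max(ideas, key=evaluate)
if evaluate(best) > 0.5:   # the conditional in question
    print("act:", best)
else:
    print("keep thinking")
```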
For consciousness to be emergent at some point there has to be wild hand-waving of 'well you see, it just needs to be more complex.' But any program is fundamentally nothing more than a simple set of instructions, so it all comes down to this issue. And if I hit a breakpoint and pause, and then start stepping through the assembly - ADD, MUL, CMP. Is the consciousness still imagining itself doing those things? Or does it just somehow disappear when I start stepping through instructions?
For even the most complex visual or behavior, you can stair-step, quite rapidly, down to a very simple set of instructions. And nowhere in these steps is there any logical room for a consciousness to just suddenly appear.
My issue is that from a scientific point of view, physicalism is all we have. Everything else is belief, or some form of faith.
Your example about relativity is good. It might have sounded insane at some point, but it turns out, it is physics, which nicely falls into the physicalism concept.
If there is a falsifiable scientific theory that there is something other than a physical mechanism behind consciousness and intelligence, I haven't seen it.
I don't think science and consciousness go together quite well at this point. I'll claim consciousness doesn't exist. Try to prove me wrong. Of course I know I'm wrong because I am conscious, but that's literally impossible to prove, and it may very well be that way forever. You have no way of knowing I'm conscious - you could very well be the only conscious entity in existence. This is not the case because I can strongly assure you I'm conscious as well, but a philosophical zombie would say the same thing, so that assurance means nothing.
There is more than one theory, as well as some evidence, suggesting that consciousness may not exist in the way we'd like to think.
It may be a trick our mind plays on us. The Global Workspace Theory addresses this, and some of the predictions this theory made have been supported by multiple experiments. If GWT is correct, it's very plausible, likely even, that an artificial intelligence could have the same type of consciousness.
That again requires assuming your own conclusion. Once again I have no way of knowing you are conscious. In order for any of this to not be nonsense I have to make a large number of assumptions including that you are conscious, that it is a physical process, that is an emergent process, and so on.
I am unwilling to accept any of the required assumptions because they are essentially based on faith.
Boltzmann brains and A. J. Ayer's "There is a thought now".
Ages ago, it occurred to me that the only thing that seemed to exist without needing a creator, was maths. That 2+2 was always 4, and it still would be even if there were not 4 things to count.
(From the quotation's date stamp, 2007, I had only finished university 6 months earlier, so don't expect anything good).
But as you'll see from my final paragraph, I no longer take this idea seriously, because anything that leads to most minds being free to believe untruths, is cognitively unstable by the same argument that applies to Boltzmann brains.
MUH (the Mathematical Universe Hypothesis) leads to an aleph-1 infinity of brains*. I'd need a reason for the probability distribution over minds to be zero almost everywhere in order for it to avoid the cognitive instability argument.
* if there is a bigger infinity, then more; but I have only basic knowledge of transfinites and am unclear if the "bigger" ones I've heard about are considered "real" or more along the lines of "if there was an infinite sequence of infinities, then…"
Human minds are fairly free to believe untruths. At least to a certain extent: it's rather hard to _really_ believe things that contradict your lived experience.
You can _say_ that you believe them, but you won't behave as if you believe them.
The problem with Boltzmann brains is that, by construction, they're going to have incorrect beliefs about almost everything.
Like, imagine watching a TV tuned to a dead station and somehow the random background noise looked and sounded like someone telling you the history of the world, and it really was just random noise doing this — that level of being wrong about almost everything.
Not even just errors like believing 1+1=3: believing that would be just as likely as believing incoherent statements like 1+^Ω[fox emoji].
> What scientific evidence is there that we are anything other than a biochemical machine? And if we are a biochemical machine, how is that inherently capable of more than a silicon based machine is capable of
Iron and copper are both metals but only one can be hardened into steel
There is no reason why we should assume a silicon machine must have the same capabilities as a carbon machine
Unless you can show - even a single example would do - that we can compute a function that is outside the Turing computable set, there is a very strong reason to assume that a silicon machine has the same computational capabilities as a carbon machine.
> There is no reason why we should assume a silicon machine must have the same capabilities as a carbon machine
Then make your computer out of carbon.
While the broader point stands - that we don't know what we're doing, and AI as it currently exists is a bit cargo-culty - this is a critique of the SOTA and is insufficient to be generalised: we can reasonably say "we probably have not", we can't say "we definitely cannot ever".
Who knows, perhaps our brains do somehow manage to do whacky quantum stuff despite seeming to be far too warm and messy for that. But even that is just an implementation detail.
> Who knows, perhaps our brains do somehow manage to do whacky quantum stuff despite seeming to be far too warm and messy for that. But even that is just an implementation detail.
Yes. And we are pretty close to building practical quantum computers. Though so far, we haven't really found much they would be good for. The most promising application seems to be for simulating quantum systems for material science.
> You stated that you "feel" generally intelligent (A's don't feel and don't have an "I" that can feel) - Your nuanced, subtly ironic and self-referential way of formulating clearly suggests that you are not a purely algorithmic entity
This is completely unrelated to the proof in the link. You have to clearly explain how the reasoning in your argument for "AGI is impossible" still leaves human intelligence possible. You can't just jump to the conclusion "you sound human, therefore intelligence is possible".
It's simple: Either your proof holds for NGI as much as for AGI, or neither, or you can clearly define what differentiates them that makes it work for one and not the other.
Agreed. I thought my followup qs were fair. I'd like to understand the argument, but the first response makes me think it's not worth wading too deeply in.
So, in a word: a) there is no ghost in the machine when the machine is a formal symbol-bound machine. And b) to be “G” there must be a ghost in the machine.
Is that a fair summary of your summary?
If so do you spend time on both a and b in your papers? Both are statements that seem to generate vigorous emotional debate.