
Oh no, I am not at all trying to find an explanation of why this is (qualia etc.). There is simply no necessity for that. It is interesting, but not part of the scientific problem that I tried to find an answer to.

The proofs (all three of them) hold without any explanatory effort concerning causalities around human frame-jumping etc.

For this paper, it is absolutely sufficient to prove a) that this cannot be reached algorithmically, and b) that evidence clearly shows humans can (somehow) do this, as they have already done it (quite often).



> this cannot be reached algorithmically

> humans can (somehow) do this

Is this not contradictory?

Alternatively, in order to not be contradictory doesn't it require the assumption that humans are not "algorithmic"? But does that not then presuppose (as the above commenter brought up) that we are not a biochemical machine? Is a machine not inherently algorithmic in nature?

Or, at minimum, it presupposes that humans are more than just a biochemical machine. But then the question comes up again: where is the scientific evidence for this? In my view it's perfectly acceptable if the answer is something to the effect of "we don't currently have evidence for that, but this hints that we ought to look for it".

All that said, does "algorithmically" here perhaps exclude heuristics? Many times something can be shown to be unsolvable in the absolute sense yet readily solvable with extremely high success rate in practice using some heuristic.


OP seems to have a very confused idea of what an algorithmic process means... they think the process of humans determining what is truthful "cannot possibly be something algorithmic".

Which is certainly an opinion.

> whatever it is: it cannot possibly be something algorithmic

https://news.ycombinator.com/item?id=44349299

Maybe OP should have looked at a dictionary for what certain words actually mean before defining them to be something nonsensical.


> Maybe OP should have looked at a dictionary for what certain words actually mean before defining them to be something nonsensical.

Making non-standard definitions of words isn't necessarily bad, and can be useful in certain texts. But if you do so, you need to make these definitions front-and-centre instead of just casually assuming your readers will share your non-standard meaning.

And where possible, I would still use the standard meanings and use newly made up terms to carry new concepts.


Maybe you need to update an outdated model?

Nothing in physics requires us to use your prior experience as some special epoch.

Meaning is a mutable social relationship, since the meaning of language is not immutable physics.


The model I am using is the conventional understanding of physics. What model are you using?

> language meaning is not immutable physics.

Our understanding of physics is not complete, so why would our model of it be final? No one is saying it is.

Everything we currently know about physics, all the experiments we've conducted, suggests the physical Church-Turing thesis is true.

If you want to claim that the last x% of our missing knowledge will overturn everything and reality is in fact not computable, you are free to do so, and this may well even be true.

But so far the evidence is not in your favor and you'd do well to acknowledge that.


> Alternatively, in order to not be contradictory doesn't it require the assumption that humans are not "algorithmic"? But does that not then presuppose (as the above commenter brought up) that we are not a biochemical machine? Is a machine not inherently algorithmic in nature?

No, computation is algorithmic; real machines are not necessarily. (Of course, AGI still can't be ruled out even if algorithmic intelligence is; what would be ruled out is only AGI that does not incorporate some component with noncomputable behavior.)


> computation is algorithmic, real machines are not necessarily

Author seems to assume the latter condition is definitive, i.e. that real machines are not, and then derive extrapolations from that unproven assumption.


> No, computation is algorithmic, real machines are not necessarily

As the adjacent comment touches on are the laws of physics (as understood to date) not possible to simulate? Can't all possible machines be simulated at least in theory? I'm guessing my knowledge of the term "algorithmic" is lacking here.


As far as we can tell, all the known laws of nature are computable. And I think most of them are even efficiently computable, especially if you have a quantum computer.

Quantum mechanics is even linear!

Fun fact, quantum mechanics is also deterministic, if you stay away from bonkers interpretations like Copenhagen and stick to just the theory itself or saner interpretations.
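
If it helps, here's a minimal sketch of what "linear" and "deterministic" mean here, in Python (assuming numpy is available; the gate choice is mine, purely for illustration): a quantum state is a vector, a time step is a unitary matrix, and evolving the state is plain matrix multiplication.

    import numpy as np

    # A qubit state is a vector; one step of Schroedinger evolution is
    # a unitary matrix; evolving the state is matrix multiplication.
    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)    # Hadamard gate, a simple unitary

    ket0 = np.array([1, 0], dtype=complex)  # |0>
    ket1 = np.array([0, 1], dtype=complex)  # |1>

    # Deterministic: the same input state always evolves the same way.
    assert np.allclose(H @ ket0, H @ ket0)

    # Linear: evolving a superposition equals superposing the evolved parts.
    a, b = 0.6, 0.8
    assert np.allclose(H @ (a * ket0 + b * ket1),
                       a * (H @ ket0) + b * (H @ ket1))

All of this runs in finite time on an ordinary computer, which is the "computable" part; the interpretation fights are about measurement, not about this evolution.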


Using computational/algorithmic methods, we can simulate nonalgorithmic systems. So the world within a computer program can behave in a nonalgorithmic way.

Also, one might argue that universe/laws of physics are computational.


> Also, one might argue that universe/laws of physics are computational.

Maybe we need to define "computational" before moving on. To me this echoes the clockwork universe of the Enlightenment. Insights from quantum physics have shattered that idea.


> Insights of quantum physics have shattered this idea.

Not at all. Quantum mechanics is fully deterministic, if you stay away from bonkers interpretations like Copenhagen.

And, of course, you can simulate random processes just fine even on a deterministic system using a pseudo-random number generator, or you can just connect a physical hardware random number generator to your otherwise deterministic system. Compared to all the hardware used in our LLMs so far, random number cards are cheap kit.

Though I doubt a hardware random number generator will make the difference between dumb and intelligent systems: pseudo-random number generators are just too good, and, generalising a bit, you'd need P=NP to be true for your system to behave differently with a good PRNG vs. real random numbers.
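
Concretely, something like this (Python; the coin-flip process is an arbitrary stand-in of mine):

    import random

    def run(rng, n=5):
        """One run of a toy 'random' process, drawing bits from rng."""
        return [rng.random() < 0.5 for _ in range(n)]

    # Seeded PRNG: the whole run is deterministic and exactly replayable.
    assert run(random.Random(42)) == run(random.Random(42))

    # OS/hardware entropy (via os.urandom): same code, non-reproducible runs.
    print(run(random.SystemRandom()))

The process downstream of rng can't tell which source it's wired to, which is the point about good PRNGs.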


You can simulate a nondeterministic process. There's just no way to consistently get a matching outcome. It's no different than running the process itself multiple times and getting different outputs for the same inputs.


> For this paper, it is absolutely sufficient to prove a) that this cannot be reached algorithmically, and b) that evidence clearly shows humans can (somehow) do this, as they have already done it (quite often).

The problem with these kinds of arguments is always that they conflate two possibly related but non-equivalent kinds of computational problem solving.

In computability theory, an uncomputability result essentially only proves that it's impossible to have an algorithm that will in all cases produce the correct result to a given problem. Such an impossibility result is valuable as a purely mathematical result, but also because what computer science generally wants is a provably correct algorithm: one that will, when performed exactly, always produce the correct answer.

However, similarly to any mathematical proof, a single counter-example is enough to invalidate a proof of correctness. Showing that an algorithm fails in a single corner case makes the algorithm not correct in a classical algorithmic sense. Similarly, for a computational problem, showing that any purported algorithm will inevitably fail even in a single case is enough to prove the problem uncomputable -- again, in the classical computability theory sense.

If you cannot have an exact algorithm, for either theoretical or practical reasons, and you still want a computational method for solving the problem in practice, you then turn to heuristics or something else that doesn't guarantee correctness but which might produce workable results often enough to be useful.

Even though something like the halting problem is uncomputable in the classical, always-inevitably-produces-correct-answer-in-finite-time sense, that does not necessarily stop it from being solved in a subset of cases, or from being solved often enough by some kind of heuristic or non-exact algorithm to be useful.
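
To make that concrete, here is a toy sketch (Python; the step budget and the examples are invented for illustration): a checker that answers "halts" for the cases it can verify within a budget and honestly answers "unknown" otherwise. Uncomputability says the "unknown" branch can never be eliminated for all inputs; it doesn't say that branch can't be made rare in practice.

    def halts_within(state, step, done, max_steps=10_000):
        """Heuristic halting check for a toy state machine.

        Returns "halts" if the machine reaches a done state within the
        step budget, and "unknown" otherwise. It is never wrong; it is
        just not guaranteed to give an answer."""
        for _ in range(max_steps):
            if done(state):
                return "halts"
            state = step(state)
        return "unknown"

    # The Collatz map starting from 27 reaches 1 well within the budget...
    print(halts_within(27, lambda n: n // 2 if n % 2 == 0 else 3 * n + 1,
                       lambda n: n == 1))                  # -> halts
    # ...while a genuine infinite loop just exhausts it.
    print(halts_within(2, lambda n: n, lambda n: n == 1))  # -> unknown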

When you say that something cannot be reached algorithmically, you're saying it's impossible to have an algorithm that would inevitably, systematically, always reach that solution in finite time. And you would in many cases be correct. Symbolic AI research ran into this problem due to the uncomputability of reasoning in predicate logic. (Uncomputability was not the main problem symbolic AI ran into, but it was one of its problems.)

The problem is that when you say that humans can somehow do this computationally impossible thing, you're not holding human cognition or problem solving to the same standard of computational correctness. We do find solutions to problems, answers to questions, and logical chains of reasoning, but we aren't guaranteed to.

You do seem to be aware of this, of course.

But you then run into the inevitable question of what you mean by AGI. If you hold AGI to the standard of classical computational correctness, to which you don't hold humans, you're correct that it's impossible. But you have also proven nothing new.

A more typical understanding of AGI would be something similar to human cognition -- not having formal guarantees, but working well enough for operating in, understanding, and producing useful results in the real world. (Human brains do that well in the real world -- thanks to having evolved in it!)

In the latter case, uncomputability results do not prove that kind of AGI to be impossible.


Indeed. And it's fairly trivial to see that computability isn't the right lens to view intelligence through:

The classic Turing test takes place over a finite amount of time. Normally less than an hour, but we can arbitrarily give the interlocutor, say, up to a week. If you don't like the Turing test, then just about any other test interaction we can make the system undergo will conclude below some fixed finite time. After all, humans are generally intelligent, even if they only get a handful of decades to prove it.

During that finite time interaction, only a finite amount of interaction will be exchanged.

Now in principle a system could have a big old lookup table with all prefixes of all possible interactions as keys, and values are probability distributions for what to send back next (and how long to wait before sending the reply). That table would be finite. And thus following it would be computable.

Of course, the table would be more than astronomical in size, and utterly impossible to manifest in our physical universe. But computability is too blunt an instrument to formalise this with.
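
As a toy illustration of why the table construction goes through in principle (Python; the vocabulary and cutoff are invented and absurdly small): with a finite alphabet and a finite interaction length there are only finitely many prefixes, so a plain dict covers every possible conversation, and sampling a reply from it is trivially computable.

    import itertools
    import random

    ALPHABET = ["yes", "no"]   # toy message vocabulary
    MAX_TURNS = 3              # toy stand-in for "up to a week of chat"

    # One entry per possible interaction prefix: finite alphabet times
    # finite length means a finite (here 15-entry, in reality
    # hyper-astronomical) table.
    table = {
        prefix: {"yes": 0.5, "no": 0.5}   # any reply distribution will do
        for n in range(MAX_TURNS + 1)
        for prefix in itertools.product(ALPHABET, repeat=n)
    }

    def reply(history):
        """Look up the conversation so far and sample the next message."""
        msgs, probs = zip(*table[tuple(history)].items())
        return random.choices(msgs, weights=probs)[0]

    print(reply([]))             # opening move
    print(reply(["yes", "no"]))  # reply after a two-message prefix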

In the real universe, you would need to _compress_ that table somehow, e.g. in a human brain or perhaps in an LLM or so. And then you need to be able to efficiently uncompress the parts of the table you need to produce the replies. Whether that's possible, and how, are questions of complexity theory, not computability.

See Scott Aaronson's excellent 'Why Philosophers Should Care About Computational Complexity': https://arxiv.org/abs/1108.1791



