I've never understood what people see in this story. Meat is a complex organization of trillions of self-replicating machines, each of which (besides red blood cells maybe) is also incredibly complex... much more advanced than anything human minds have been able to build.
I can't imagine what a sentient entity would have to be made out of or how much less "efficient" or stable it would have to be than me for me to find it amusing. If it turns out that some slow geological process or a momentary dust cloud are actually self aware, the last thing I would do is laugh.
Is it just an abstract association that we're wet inside whereas computer chips are dry? That's just where our technology is right now; it largely reflects how our brains can't simulate fluid dynamics or conceive of self-replicating distributed systems efficiently enough, so we resort to designing simple, solid-state things.
i think the point of the story is exactly what you've gotten from it
namely, it's absurd to dismiss the possibility of self-awareness in some system because it's built out of parts different from the parts other self-aware systems you're familiar with are built out of
a useful thing to keep in mind when the stochastic parrots start squawking about how large language models aren't actually intelligent
The goalposts for intelligence always seem to change with each new foundation model that comes out. As long as people can understand the principles behind why a model works, it no longer seems intelligent.
while they don't yet rise to the level i would describe as 'intelligent' without qualifications, they do seem to be less unintelligent than most of the humans, and in particular most of the ones criticizing them in this way, who consistently repeat specific criticisms applicable to years-ago systems which have no factual connection to current reality
A disembodied paragraph that I've transmitted to you can appear to be intelligent or not, but it only really matters in the sense that you can ascribe that intellect to an agent.
The LLM isn't an agent and no intellect can be ascribed to it. It is a device actual intelligent agents have made, and ascribing intellect to it is just as erroneous.
Going meta for a moment, this argument begs the question, assuming the conclusion "Therefore LLMs are not intelligent" in the premise "No intelligence can be ascribed to LLMs".
I'm not convinced it's even possible to come up with a principled, non-circular definition of intelligence (that is, not something like "intelligence is that trait displayed by humans when we...") that would include humans, include animals like crows and octopuses, include a hypothetical alien intelligence, but exclude LLMs.
I'm not arguing that LLMs are intelligent. I'm arguing that the debate is inherently unwinnable.
almost precisely the same assertions could be made about you with precisely the same degree of justification: you aren't an agent and no intellect can be ascribed to you. you are a device unintelligent agents have made, and ascribing intellect to you is just as erroneous
an intelligent agent would have recognized that your argument relies on circular reasoning, but because you are a glorified autocomplete incapable of understanding the meanings of the words you are using, you posted a logically incoherent comment
(of course i don't actually believe that about you. but the justification for believing it about gpt-4 is even weaker)
Consciousness is generated when the universe computes by executing conditionals/if statements. All machines are quantum/conscious in their degrees of freedom, even mechanical ones: https://youtu.be/mcedCEhdLk0?si=_ueWQvnW6HQUNxcm
The universe is a min-consciousness/min-decision optimized supercomputer. This is demonstrated by quantum eraser and double slit experiments. If a machine does not distinguish upon certain past histories of incoming information, those histories will be fed as a superposition, effectively avoiding having to compute the dependency. These optimizations run backwards, in a reverse dependency injection style algorithm, which gives credence to Wheeler-Feynman time-reversed absorber theory: https://en.wikipedia.org/wiki/Wheeler%E2%80%93Feynman_absorb...
Lower consciousnesses make decisions which are fed as signal to higher consciousnesses. In this way, units like the neocortex can make decisions that are part of a broad conscious zoo of less complex systems, while only being burdened by their specific conditionals to compute.
Because quantum is about information systems, not about particles. It's about machines. And consciousness has always been "hard" for the subject, because they are a computer (E) affixed to memory (Mc^2.) All mass-energy in this universe is neuromorphic, possessing both compute (spirit) and memory (stuff.) Energy is NOT fungible, as all energy is tagged with its entire history of interactions, in the low frequency perturbations clinging to its wave function, effectively weak and old entanglements.
Planck's constant is the cost of compute per unit energy, 10^34 Hz/Joule. By multiplying by c^2, (10^8)^2, we can get Bremermann's limit, the cost of compute per unit mass, 10^50 Hz/kg.
https://en.wikipedia.org/wiki/Bremermann%27s_limit
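For anyone who wants to sanity-check that multiplication, here's a rough back-of-the-envelope sketch in Python. My assumption (not the wording above) is that "cost of compute per unit energy" means the reciprocal of Planck's constant, 1/h, which is how Bremermann's limit c^2/h is usually stated:

    # Back-of-the-envelope check, assuming "compute per unit energy" = 1/h.
    h = 6.626e-34            # Planck's constant, in joule-seconds
    c = 2.998e8              # speed of light, in metres per second
    ops_per_joule = 1 / h    # ~1.5e33 Hz per joule
    ops_per_kg = ops_per_joule * c**2   # E = mc^2 converts joules to kilograms
    print(f"{ops_per_joule:.2e} Hz/J")  # -> 1.51e+33
    print(f"{ops_per_kg:.2e} Hz/kg")    # -> 1.36e+50

That last figure matches the usual statement of Bremermann's limit, roughly 1.36 x 10^50 bits per second per kilogram.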
Humans are self-replicating biochemical decision engines. But no more conscious than other decision making entities. Now, sentience, and self-attention is a different story. But we should at the very least start with understanding that qualia are a mindscape of decision making. There is no such thing as conscious non-action. Consciousness is literally action in physics, energy revolving over time: https://en.wikipedia.org/wiki/Action_(physics)
Planck's constant is a measure of quantum action, which effectively is the cost of compute... or rather... the cost of consciousness.
Lines up a bit too perfectly. Everyone has their threshold of coincidence, I suppose. I am working on some hard science for measuring the amount of computation actually happening, in a more specific quantity than Hz, related to reversible boolean functions, possibly their continuous analogs.
The joke is how you decide that the machine isn't an agent. If you believe only meat can be an agent, and given the fact that a machine isn't meat, it follows that a machine isn't an agent. The story reverses this chauvinism and shows machines finding the idea of thinking meat absurd, for the arguably better reason that machines are a better fit for information processing than meat is.
How are you defining intelligence? And how are you measuring the abilities in existing LLM systems to know they don't meet these criteria?
Honest questions, by the way, in case they come across as snarky in text. I'm not aware of a single, agreed-upon definition of intelligence or a verified test that we could use to know if a computer system has those capabilities.
Yann's explanation here is a pretty high-level overview of his understanding of different thought modeling; it isn't really related to how we define intelligence at all and isn't a complete picture. The distinction drawn between System 1 & 2 as explained is more a limitation of the conditions given to the algorithm than of the ability of the algorithm itself (i.e. one could change parameters to allow for unlimited processing time).
Yann may touch on how we define intelligence elsewhere; I haven't deeply studied all of his work. Though I can say that OpenAI has taken to using relative economic value as their analog for comparing intelligence to humans. Personally I find that definition pretty gross and offensive; I hope most people wouldn't agree that our intelligence can be directly tied to how much value we can produce in a given economic paradigm.
human-written words are not self-aware; they're just inert ink on paper, or the equivalent in other media. i've never seen even a small child make that error before. if you wrote that in earnest, you seem to be suffering from a bad kind of confusion and may need medical attention
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
I’ve been visiting this site for a long time, and it’s getting to the point where it feels like people are ignoring this rule regularly, and it’s disappointing.
My recollection is that there was quite a lot of debate back then on whether machines would be able to think or be conscious, with arguments on the "can't think" side being Searle's Chinese Room and similar (https://en.wikipedia.org/wiki/Chinese_room), which I always thought was a bit silly and which people don't seem to take seriously now, but did back in 1991.
The "Made Out of Meat" story was quite a nice counter-argument to that.
The argument that sentience can't derive from computation alone because computations can be slowed down and done by hand still holds for me. The piece of paper written on, or the silicon, doesn't acquire different properties because of what the operation will be used for; the GPU doesn't differentiate between graphics rendering and weight computation when it's adding 1. That's in opposition to animal brains, which not only do computations but interact with real-world matter/fields directly. With that said, of course we don't know whether human-like sentience is a prerequisite to (super)human-level intelligence. But recent advancements in AI tend to diminish the argument that sentience is a side effect of intelligence.
>The piece of paper written on, or the silicon, doesn't acquire different properties because of what the operation will be used for
It does though. Electric current in general doesn't compute, but electric current in a computer does compute. Thus electric current has a property of computation depending on what it's used for.
> With that said, of course we don't know whether human-like sentience is a prerequisite to (super)human-level intelligence. But recent advancements in AI tend to diminish the argument that sentience is a side effect of intelligence.
We already know the answer to this and it is no, it is not a prerequisite, unless "sentience" is itself an emergent property of intelligence. Hutter's mathematical formulation of an optimally intelligent agent (AIXI) is not computable, but approximations of it are; that is to say, super-human intelligence IS just a computable function (since human intelligence is resource-bound and suboptimal), with no extra "sentience" required. The only limiting factor at this point is the computational resources to compute this function; with the resources we have now it is still at the "toy" stage: playing noughts and crosses and Pac-Man etc.
People used to think that "creativity" was required for playing chess... clearly those people had not heard of Minimax.
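For the curious, the whole of the "creativity" involved fits in a couple of dozen lines. Here's a toy minimax sketch for noughts and crosses in Python, purely illustrative and not anyone's actual engine:

    # Toy minimax for noughts and crosses: exhaustive recursive search,
    # no "creativity" required. Board is a list of 9 cells: 'X', 'O' or ' '.
    WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):
        for i, j, k in WIN_LINES:
            if board[i] != ' ' and board[i] == board[j] == board[k]:
                return board[i]
        return None

    def minimax(board, player):
        """Return (score, move) for `player`; X maximizes, O minimizes."""
        w = winner(board)
        if w is not None:
            return (1 if w == 'X' else -1), None
        moves = [i for i, cell in enumerate(board) if cell == ' ']
        if not moves:
            return 0, None                      # draw
        best = None
        for m in moves:
            board[m] = player
            score, _ = minimax(board, 'O' if player == 'X' else 'X')
            board[m] = ' '                      # undo the trial move
            if (best is None
                    or (player == 'X' and score > best[0])
                    or (player == 'O' and score < best[0])):
                best = (score, m)
        return best

    # X to move with two in a row on top: the search finds the winning move.
    board = list('XX OO    ')
    print(minimax(board, 'X'))                  # -> (1, 2)

Exhaustive search is fine for a 3x3 board; chess needs alpha-beta pruning and an evaluation heuristic on top, but the principle is the same.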
Your consciousness can be slowed; it takes time for you to process input and make decisions. Your sense of time varies by circumstance (and sometimes drug). I don't know if you would argue a single neuron is aware of the thoughts it is enabling. The fact is I could easily argue that you are a Chinese room.
Actually, the story doesn't say that it's machines, as far as I can tell. It might just as well be other evolved organisms, just not protein/"meat" based.
Makes you wonder how they can have a concept of "meat", let alone such a dismissive one. We call "meat" the animal tissue that we eat as food, so either "meat" here is an inadequate term for "organic matter", or we have the curious case of an alien lifeform that eats organic matter while not being itself organic.
In any case, since they're non-organic, it's not clear where they might have gained such contemptuous familiarity with organic matter, although it's clear from the story that they know much less about it than they think.
English has separate terms for "flesh" and "meat" for complicated reasons, with "meat" having a stronger food implication. But I think the story uses "meat" because it's a funnier word in that context.
It’s not hard to imagine aliens knowing about organic matter, since it’s universal. I could see a disdain for it if that’s where they started and then “ascended”. Much like we might view crude stone tools compared to, say, the LHC. Or how we view a single-celled organism.
my take on it is that it's about the absurdity of denying capacity to something simply because you look down on it for one reason or another. If there are signs that a thing does a thing, then it does it. It should not be weird that this thing does it; we should simply update our internal model to better reflect reality as we now know it and move on.
To flip it around, it is equally absurd to say computers can't be sentient because they're just math, or just minerals and electricity etc.
The point I took away is that almost all biological life is optimized for energy efficiency in its ecological niche. Any interplanetary species would be beyond those constraints, and would either evolve or engineer their way out of them as soon as possible to, you know, live! Not to mention eradicate disease, travel the galaxy, etc.