I actually think major components supporting consciousness are already present in LLMs, and some of the "requirements", like "perceiving time fluidly", are anthropomorphism: our perception of time is built from streams of discrete signals, and an LLM also processes streams of discrete signals -- just not as high-resolution or analog as the ones our brains process.
There are certainly big missing pieces too, though -- like the physical grounding the article talks about; to me, this should probably also include emotion and other neurochemical mechanisms. But I think we have a moral duty to look very critically at whatever "criteria" (doubtless these will keep changing as machine intelligence advances) society and the AI labs end up developing to "define machine consciousness". Personally, I think we're headed in a straight line back to widespread institutionalised slavery.
> There are certainly big missing pieces too, though -- like the physical grounding the article talks about
I think it may be possible to view consciousness as the combination of three things:
(1) A generalizable predictive function, capable of broad abstraction. (2) A sense of being in space. (3) A sense of being in time. (#2 and #3 can be combined into a "spatiotemporal sense.")
Animals have #2 and #3 in spades, but lack #1. LLMs possess #1 to a degree that can defeat any human, but lack #2 and #3. Without a sense of being in space and time, it's not clear that they are capable of possessing consciousness as we understand it.