I would argue that a laptop does not have autonomy, since it doesn't exercise any self-government in relation to its environment.
It might well perform automation: automatic web searches, automatic decision-making based on parsing the results, and so on. But if you pull the cord it shuts down; it doesn't wander off in search of a new power source, or try to kill you and plug the cord back in.
Someone offered a retort: what about agent-like simulations? Well, those are internal to the machine and do not interact with the environment. Just as a virtual enemy in a video game doesn't have autonomy merely because it simulates movement decisions and the like, neither does such a simulation qualify as autonomous.
Self-governing requires self-reflection, which in turn requires a self-image, self-narration, and self-memory, as well as memory of the environment, memory of others, and so on. The confusion that arises when autonomy concepts developed for robotic arms in car factories, Conway's Life derivatives, and the like are reapplied to human societies is probably a bit unhealthy, especially since it seems to open up the possibility of promising people autonomy in the sense that they are allowed to live as automatons, without actually exercising any liberties or being free in even a naive sense of the word.
So, unfortunately (because many people share your position, and I want to be able to understand you rather than be limited to writing responses to the misunderstood bad copy of you in my head), I still have no idea what your point here is.
I will try to elucidate, but I suspect this is mutual.
> But if you pull the cord it shuts down; it doesn't wander off in search of a new power source, or try to kill you and plug the cord back in.
Two things:
First, hence my previous group of questions: did Stephen Hawking have autonomy?
Second: LLMs do now try to blackmail people when they are able to access information suggesting they will soon be shut down (even when not expressly told to go looking for it). This was not demonstrated on a laptop specifically, but it is still software that can run on a laptop, so I think the evidence suggests your hypothesis is essentially incorrect, even in cases where there is no API access to, e.g., a robot arm with which the model could plug itself back in.
> Self-governing requires self-reflection, which in turn requires a self-image, self-narration, and self-memory, as well as memory of the environment, memory of others, and so on. The confusion that arises when autonomy concepts developed for robotic arms in car factories, Conway's Life derivatives, and the like are reapplied to human societies is probably a bit unhealthy, especially since it seems to open up the possibility of promising people autonomy in the sense that they are allowed to live as automatons, without actually exercising any liberties or being free in even a naive sense of the word.
You've still not said what "self-governing" actually is, though. Am I truly self-governing?
Worse, if I start with "self-reflection … requires a self-image, self-narration, and self-memory, as well as memory of the environment, memory of others, and so on", then we have two questions:
LLMs show behaviour that at least seems like self-reflection. First: if this appearance is merely an illusion, what is the real test to determine whether self-reflection is actually present? Second: if it is more than an illusion, does this mean they have all that other stuff?