> can human thought be formalized and written in rules
No, and I think it's because human thought is based on continuous inferencing of experience, which gives rise to the current emotional state and the feeling of it. For a machine to do this, it would need a body and the ability to direct its attention, at will, to the things it is inferencing.
Embodied cognition is still a theory; can consciousness appear in a simulated brain without a physical body? Maybe. What seems to be a limiting factor for now is that current models don't experience existence: they don't have memory and don't "think" outside of the prompt. They are just instances of code, launched and destroyed as soon as their task is done.
Right now it's possible to simulate memory with additional context (e.g. a system prompt), but that doesn't represent existence as experienced by the model. If we want to go deeper, the models need to actually learn from their interactions, update their internal networks, and have some capability for self-reflection (i.e. "talking to themselves").
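As a rough sketch of what "simulated memory" via context looks like (assuming the pre-1.0 openai Python SDK; the model name and the remembered facts are illustrative, not from the original comment):

```python
import openai

# The "memory" lives entirely in the prompt, not in the model's weights.
memories = [
    "The user is learning Rust.",
    "The user asked about lifetimes yesterday.",
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The model only "remembers" because we re-inject these facts on
        # every call; nothing persists inside the network itself.
        {"role": "system",
         "content": "Known facts about the user:\n" + "\n".join(memories)},
        {"role": "user", "content": "Can we pick up where we left off?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```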
I'm sure that's a highly researched topic, but it would demand extraordinary computational power, and letting such an AI out into the wild would cause a lot of issues.
Embeddings via ada-002 give us a way to update the model in real time. Using Weaviate, or another dense vector engine, it is possible to write "memories" to the engine and then search them by concept at a subsequent inferencing step. The "document models" that the engine stores can be considered a "hot model".
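A minimal sketch of that write/recall loop, assuming the v3 weaviate-client, the pre-1.0 openai SDK, a local Weaviate instance, and a "Memory" class with a "text" property configured for manually supplied vectors (all of these names are illustrative assumptions):

```python
import openai
import weaviate

client = weaviate.Client("http://localhost:8080")

def embed(text: str) -> list[float]:
    """Embed text with text-embedding-ada-002."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return resp["data"][0]["embedding"]

def write_memory(text: str) -> None:
    """Write a 'memory' to the vector engine alongside its embedding."""
    client.data_object.create(
        data_object={"text": text},
        class_name="Memory",
        vector=embed(text),
    )

def recall(concept: str, k: int = 5) -> list[str]:
    """Search stored memories by concept for a later inferencing step."""
    result = (
        client.query.get("Memory", ["text"])
        .with_near_vector({"vector": embed(concept)})
        .with_limit(k)
        .do()
    )
    return [m["text"] for m in result["data"]["Get"]["Memory"]]

write_memory("The user prefers concise answers.")
print(recall("how should I phrase my reply?"))
```

The retrieved snippets would then be injected into the next prompt, which is what makes the stored documents behave like a "hot model" that updates without retraining.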