Recursive insight is possible with a model that self-trains, but right now that would result in a detour into unreality. Perhaps it could work with the right system of vetting new data before incorporating it into the retraining set.
Right now they just get stupider if you train them on their own output, which suggests that, as a general rule, the quality of the data in the training set is higher than the quality of the output the model produces. The fidelity is < 1.0. Apparently it is possible to achieve fidelity > 1 (the growth of human knowledge), but our algorithms are not that good yet, it seems.
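In toy form (the 0.9 fidelity factor is invented purely for illustration):

```python
# Toy model of training-set quality across self-training generations.
# The fidelity factor is invented purely for illustration.

def quality_after_generations(q0: float, fidelity: float, generations: int) -> float:
    """Quality after repeatedly retraining on the model's own output."""
    q = q0
    for _ in range(generations):
        q *= fidelity  # fidelity < 1.0: each round loses a little
    return q

print(quality_after_generations(1.0, 0.9, 5))  # ~0.59: degradation
print(quality_after_generations(1.0, 1.1, 5))  # ~1.61: fidelity > 1 would mean growth
```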
Not necessarily. For example, Anthropic's Constitutional AI (CAI) uses the model itself to stand in for human judgments in RLHF, effectively making it RLAIF. The CAI feedback is used to fine-tune the Claude model.
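Very roughly, the CAI self-critique loop looks something like this sketch; `llm` is a hypothetical stand-in for a model call, and the principle is a paraphrase, not Anthropic's actual constitution:

```python
# Sketch of the CAI supervised phase: the model critiques and revises its own
# output against a written principle, and the revised answers become
# fine-tuning data. `llm` is a hypothetical stand-in for a real model call.

def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real model call")

PRINCIPLE = "Choose the response that is most helpful, honest, and harmless."

def constitutional_revision(user_prompt: str) -> str:
    draft = llm(user_prompt)
    critique = llm(f"Critique this response against the principle:\n"
                   f"{PRINCIPLE}\n\nResponse: {draft}")
    revised = llm(f"Rewrite the response to address the critique.\n"
                  f"Response: {draft}\n\nCritique: {critique}")
    return revised  # (user_prompt, revised) pairs form the fine-tuning set
```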
Broadly speaking, you need a supervision signal from level N+1 to improve a model at level N. We can amplify models by giving them more time, adding self-reflection, demanding step-by-step planning, allowing external tools, tuning on human preferences, or feeding back the results of executing code or of a robot's actions.
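For example, the code-execution channel can be a simple generate-run-retry loop; this is only a sketch, with `llm` a hypothetical stand-in for a model call:

```python
import subprocess
import sys

def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real model call")

def solve_with_execution_feedback(task: str, attempts: int = 3) -> str:
    """Ask the model for code, run it, and feed any error back until it passes."""
    prompt = task
    for _ in range(attempts):
        code = llm(prompt)
        result = subprocess.run([sys.executable, "-c", code],
                                capture_output=True, text=True, timeout=10)
        if result.returncode == 0:
            return code  # grounded feedback: the code actually ran
        prompt = f"{task}\nYour previous code failed with:\n{result.stderr}\nFix it."
    raise RuntimeError("no working solution within the attempt budget")
```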
Yeah, it makes some sense that you could use more intense introspection to train weaker models… I wonder what the human analogue of that looks like.
Maybe working up a proof and then quizzing yourself on it?
As long as we get level-N+1 supervision and the gap is larger than the model's regression, it seems that could work. But there seems to be a definite limit to that: the N+1 − N difference will only stay above the regression delta up to a point.
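In toy numbers (all invented), the gain per round shrinks toward zero as the student closes on a fixed teacher signal:

```python
# Toy numbers, all invented: the student learns half of the remaining gap to
# the teacher signal each round, but pays a fixed regression cost. The net
# gain shrinks every round and approaches zero.

student, teacher, regression = 1.0, 2.0, 0.05
for step in range(8):
    gain = (teacher - student) * 0.5 - regression
    student += max(gain, 0.0)
    print(step, round(student, 3), round(gain, 3))
```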
The model would learn from feedback rather than just regurgitate the training set, as long as it is part of a system that can generate that feedback. AlphaGo Zero had self-play for feedback. Robots can check whether a task executed successfully. Even chatting with us generates feedback for the model.
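A minimal self-play sketch, assuming a toy Nim game and a crude reinforce-the-winner update (not the actual AlphaGo Zero algorithm):

```python
import random
from collections import defaultdict

# Self-play feedback on a tiny Nim game (take 1 or 2 stones; whoever takes
# the last stone wins). The only training signal is the game outcome the
# system generates for itself.

scores = defaultdict(float)  # (stones_left, move) -> learned preference

def pick_move(stones: int, explore: float = 0.2) -> int:
    moves = [m for m in (1, 2) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: scores[(stones, m)])

def self_play_game():
    stones, player = 7, 0
    history = {0: [], 1: []}
    while stones > 0:
        move = pick_move(stones)
        history[player].append((stones, move))
        stones -= move
        winner = player          # whoever moved last took the final stone
        player = 1 - player
    return winner, history

for _ in range(5000):
    winner, history = self_play_game()
    for state_move in history[winner]:
        scores[state_move] += 1.0   # reinforce the winner's moves
    for state_move in history[1 - winner]:
        scores[state_move] -= 1.0   # penalise the loser's moves

# From 7 stones, taking 1 (leaving a multiple of 3) is the winning move,
# and the scores table usually converges to it.
print(max((1, 2), key=lambda m: scores[(7, m)]))
```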
I think a discussion of induction is best done by splitting the resulting models into two kinds: models based on statistics, and models based on (abstract/iconic) simulation. (E.g., "all swans are white" vs "all swans lay eggs".)
Since our reality is "atoms and void", and since the sun and earth are huge configurations of atoms locked together in a stable pattern, the sun coming up tomorrow has nothing to do with statistics, and Bayesian reasoning plays no role in our predictions or certainty. At least not directly. It plays a role indirectly, by asking: what perturbation, what intervention, could stop this from happening? And how likely are such events?
Knowing by experience, knowing by abstraction, and knowing empirically are three distinct modes of knowing. Experiences are directly known and always true. (Experiences might reference other, potentially false, things, and so might be false indirectly.)
That resolves the whole Mary knowledge problem. Books cannot inject that kind of direct knowledge. Thus the claim "Mary knows everything" is either false, or only true for a smaller domain.
One can think of analogies, like tamper-resistant logs, or the unique CPU states that occur when actually running a program versus statically analyzing it.
All in all, the non-physicalist conclusions are wildly overdrawn. Moreover, for what it is worth, Jackson himself no longer thinks this argument is a good one.
It is a virtual time machine. You can run time forwards, explore various possible futures, then pick the future most aligned with your preferences, capabilities, and taste for risk.
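In planning terms that is just rollout-and-select; a sketch with made-up `step` dynamics and a `utility` function that collapses preferences and risk appetite into one score:

```python
import random

# "Virtual time machine" as rollout planning: simulate candidate futures from
# the current state, score each against a preference function, act toward the
# best. `step` and `utility` are made-up placeholders, not a real world model.

def step(state: float, action: float) -> float:
    return state + action + random.gauss(0, 0.1)  # noisy toy dynamics

def utility(state: float) -> float:
    return -abs(state - 10.0)                     # prefer futures near 10

def plan(state: float, horizon: int = 5, rollouts: int = 100) -> float:
    best_action, best_score = 0.0, float("-inf")
    for _ in range(rollouts):
        first = random.uniform(-1, 1)
        s = step(state, first)
        for _ in range(horizon - 1):              # run time forwards
            s = step(s, random.uniform(-1, 1))
        if utility(s) > best_score:               # pick the preferred future
            best_action, best_score = first, utility(s)
    return best_action

print(plan(0.0))
```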
> But in this world, consciousness is, at root, a physical phenomenon, not a purely computational phenomenon.
That is entirely unknown. If consciousness were a purely computational phenomenon, it would explain a lot, and nothing in this article argues against it, except, perhaps, the iron bar mapping. But that argument seems to miss that a computer program basically defines a causal network, and recognizing and comparing causal networks is much more objective than the thought experiment acknowledges.
This might be a good thing. There are a few possible responses:
1. Do nothing; the public gets angry.
2. Make bitcoin, or its mining, illegal. However, what is the criterion? All blockchains? Should Google be illegal too? Both are just digital products that the public wants.
3. Tax CO2 production by data centers. That might have a chance of being coordinated broadly between nations: there is enough money there to tax, and bitcoin adds some urgency.
While it is doubtful that CO2 can be taxed globally, without coordination big companies will just play governments against each other.
"Should google be illegal too?" is a pretty ridiculous extrapolation. Nobody would go that far, a regulation like that doesn't achieve any climate goals and would have massive economic impact. You can probably provide an example that isn't wholly implausible like online banking? Even then, it is possible to draw a line.
Who draws that line? Corrupt politicians? Million-dollar lobbying? Lines are a smokescreen in front of the real problem; they mostly serve to protect the establishment. OK, no more bitcoin, but what about the heavily subsidized dairy industry, the oil industry, and all the others that pollute heavily?
If a solution requires a "line" or makes no sense at the extreme, it's most likely not a solution.
The only thing that graph shows is that China was dirt poor in 1995 and is still only at 25-35% of US levels today.