Indeed. But a point that is often omitted from comparisons with organic brains is how much "compute equivalent" was spent over the course of evolution. The brain is not a blank slate; it has clear prior structure that is genetically encoded. You can see this as a form of pretraining through an RL process wherein reward ~= surviving and procreating. Seen this way, data-efficiency comparisons are more appropriate in the context of learning a new task or piece of information, and foundation models tend to do this quite well.
Additionally, most of the energy cost comes from pretraining; once we have the resulting weights, downstream fine-tuning and inference are comparatively cheap. So even if the energy cost is high, it may be worth it if we get powerful generalist models that we can specialize in many different ways.
> This is why we invented math and physics studies, to be able to accurately calculate, predict and reproduce those events.
We won't do away with those, but an intuitive understanding of the world can go a long way toward knowing when and how to use precise quantitative methods.