> First, we evaluate, for each voxel, subject and narrative independently, whether the fMRI responses can be predicted from a linear combination of GPT-2’s activations (Fig. 1A). We summarize the precision of this mapping with a brain score M: i.e. the correlation between the true fMRI responses and the fMRI responses linearly predicted, with cross-validation, from GPT-2’s responses to the same narratives (cf. Methods).
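For anyone trying to picture what that brain score M actually is, here's a minimal sketch under my own assumptions: ridge regression standing in for whatever regularized linear model the authors used, `X` as GPT-2 activations aligned to the scan samples, `Y` as the voxel responses. All names are mine, not the paper's.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def brain_score(X, Y, n_splits=5, alpha=1.0):
    """Cross-validated voxelwise correlation between true and predicted fMRI.

    X: (n_samples, n_features) GPT-2 activations, one row per fMRI sample.
    Y: (n_samples, n_voxels) measured BOLD responses.
    Returns one correlation (the 'brain score') per voxel.
    """
    preds = np.zeros_like(Y, dtype=float)
    for train, test in KFold(n_splits=n_splits).split(X):
        # Fit the linear map on training folds, predict the held-out fold
        model = Ridge(alpha=alpha).fit(X[train], Y[train])
        preds[test] = model.predict(X[test])
    # Pearson r per voxel between held-out truth and prediction
    Yc = Y - Y.mean(axis=0)
    Pc = preds - preds.mean(axis=0)
    return (Yc * Pc).sum(axis=0) / (
        np.linalg.norm(Yc, axis=0) * np.linalg.norm(Pc, axis=0))
```

Note that the fitted map here has only `n_features` coefficients per voxel, which is what makes the next question matter.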
Was this cross-checked against arbitrary inputs to GPT-2? I gather that, with 1.5 billion parameters, you can find a representative linear combination for anything.
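Concretely, a cheap version of that check would look like the sketch below: rerun the same pipeline with the activation/response alignment destroyed and see what scores you get by chance. (The stronger version would feed GPT-2 genuinely unrelated text; this row-permutation null is just the minimal sanity check, reusing the `brain_score` sketched above. Again, names and parameters are my assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)

def null_brain_scores(X, Y, n_perm=100):
    """Distribution of mean brain scores when GPT-2 activation rows are
    shuffled, breaking their alignment with the fMRI responses."""
    return np.array([
        brain_score(X[rng.permutation(len(X))], Y).mean()
        for _ in range(n_perm)
    ])
```

If the real scores don't clearly beat that null distribution, the "correlation" is telling you about the regression, not about the brain.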
They assume linearity. They map their choice of GPT-2 properties to their choice of (brain blood flow!) properties. They then claim correlations, based on a handful of fMRI datasets.
If something serious were on the line, this type of analysis would get you fired.
Reading this, it feels like we might as well give up on there being any science any more, tbh. For this to appear in Nature -- it feels like the Rubicon has been crossed.
How can we expect the public not to be "anti-vax" (etc.), or to be otherwise competent in the basic tenets of modern science (experiment, refutation, peer review) -- if Nature isn't?
It's not Nature, it's Scientific Reports. The bar to publication in the two couldn't be more different. Nature is one of the premier high-impact journals; Sci. Rep. is a pretty middle-of-the-road, relatively new open-access journal.
> Scientific Reports is an online peer-reviewed open access scientific mega journal published by Nature Portfolio, covering all areas of the natural sciences. The journal was launched in 2011.[1] The journal has announced that their aim is to assess solely the scientific validity of a submitted paper, rather than its perceived importance, significance or impact.[2]
As for that last line, this paper is quite literally the opposite. The only grounds for accepting it is how on-trend the topic is. The "scientific validity" of correlating floating-point averages over historical text documents with brain blood flow is roughly 0%.
This is just a crystallization of all the pseudoscience trends of the last decade and more: associative statistical analysis; assuming linearity; the reification fallacy; failure to construct relevant hypotheses to test; no counterfactual analysis; no serious attempt at falsification; trivial sample sizes; a profound failure to provide a plausible mechanism; a profound failure to understand the basic theory in the relevant domains; "AI"; "neural"; "fMRI"; etc. The paper participates in a system of financial incentives largely benefiting industrial companies with investments in the relevant tech, and it is designed to be a press release for those companies.
If I were to design and teach a lecture series on contemporary pseudoscience, I'd be half-inclined to spend it all on this paper alone. It's a spectacular confluence of these trends.
I work in neuroscience and pharmacology. My impression of my own field is far different from what you state here. You've made a statement about all scientific exploration, but you seem to read about only a few limited areas.
I happen to be BS-facing, it must be said. I ought to calm myself with the vast amount of "normal science".
But likewise, we're in an era when "the man on the street" feels comfortable appealing to "the latest paper", delivered to him via an aside in a newspaper.
And at the same time, the "scientific" industry that produces these papers seems not merely to have taken the on-trend funding, but to have sacrificed its own methods to capture it.
In other words, "the man on the street" seems to have become the target demographic for a vast amount of science. From pop-psych to this, it's all designed to dazzle the lay reader.
Once confined to the pop-sci bookshelves; now, everywhere in Nature!
> Was this cross-checked against arbitrary inputs to GPT-2? I gather that, with 1.5 billion parameters, you can find a representative linear combination for anything.
The Bible Code comes to mind (https://en.wikipedia.org/wiki/Bible_code).