The "Ladder of Causation" proposed by Judea Pearl covers similar ground - "Rung 1” reasoning is the purely predictive work of ML models, "Rung 2" is the interactive optimization of reinforcement learning, and "Rung 3" is the counterfactual and casual reasoning / DGP construction and work of science. LLMs can parrot Rung 3 understanding from ingested texts but it can't generate it.
Of course, the hypothesis that they are sufficiently aware of everything -- existing and being nurtured solely to be slaughtered, having already seen many friends die, if female having had many children stolen from them -- to be traumatized to the point of utter emotional breakdown and numbness is also consistent with said observation. Humans who have survived war crimes and horrific battles tend to go pretty dead-eyed too. Death might even be welcomed, as a deliverance.
I know these pigs hadn't seen death or had that kind of loss. Your idea here is important, but overplayed. It's important because we have not given animals enough credit, but it's overplayed because you're attributing far too many human characteristics to them. For starters, they don't have language, so their depth of thought and feeling is not comparable. Imagine what they think about the origins of food. Hint: basically zero idea. Their thoughts on death and loss will be similarly weak.
That you seem to consider yourself sufficiently expert and authoritative to declare the idea "overplayed" -- and that you consider this declaration something that will further readers' understanding (which I at least view as the goal here) -- is... let's say, interesting.
In any case, no, I certainly would not attribute to pigs human-level cognition. But certainly, the notion that language is a prerequisite for thought (at least up through the level of "solving a Sudoku puzzle", per some research) has been largely discarded [0]. And certainly, as omnivores (who will kill and/or consume members of their own species, occasionally including even their own offspring, as well as members of other species), I think it's reasonable to suppose that pigs would have an inkling that one being's death is another's meal. And fundamentally, there is ample reason to suppose that pigs are closer to human-like cognition than the vast majority of species on earth. (Anecdata: they make for difficult pets owing to their need for stimulation, they can play video games [1], and -- as any reputable animal scientist will tell you -- once you get past certain bare-minimum things like ending the use of gestation crates, the most important things for pig welfare in industry are not group housing or (beyond a certain point) extra space, but rather giving them toys and a sense of cleanliness via clean bedding.)
But feel free to provide evidence that attests that "[t]heir thoughts on death and loss will be similarly weak." I would certainly take it into consideration.
> BNNs bring the following advantages over GPs: First, training large GPs is computationally expensive, and traditional training algorithms scale as the cube of the number of data points in the time series. In contrast, for a fixed width, training a BNN will often be approximately linear in the number of data points. Second, BNNs lend themselves better to GPU and TPU hardware acceleration than GP training operations.
If I'm not mistaken Hilbert Space Gaussian Processes (HSGPs) are O(mn+m) (where m is the number of basis functions, often something like m=30, m=60, or m=100), which is also a huge improvement over conventional GPs' O(n^3). I know that there are some constraints on HSGPs (e.g. they work best with stationary time series, and they're not quite as accurate, flexible, or readily interpretable or tunable as conventional GPs), but what would be the argument for an AutoBNN over an HSGP? Is it mainly about the lack of a need for domain expert input?
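For anyone curious where the O(mn) figure comes from, here's a minimal numpy sketch of the 1-D Hilbert-space approximation (following Solin & Särkkä's reduced-rank construction; the lengthscale, variance, boundary L, and m below are placeholder values, not recommendations):

    import numpy as np

    # 1-D HSGP basis sketch (Solin & Sarkka reduced-rank GP); values are illustrative.
    def hsgp_features(x, m=30, L=5.0):
        """Laplacian eigenfunctions on [-L, L]; returns an (n, m) design matrix."""
        j = np.arange(1, m + 1)
        sqrt_lam = j * np.pi / (2 * L)                # square roots of the eigenvalues
        phi = np.sqrt(1.0 / L) * np.sin(sqrt_lam * (x[:, None] + L))
        return phi, sqrt_lam

    def rbf_spectral_density(omega, sigma=1.0, ell=1.0):
        """Spectral density of the 1-D squared-exponential kernel."""
        return sigma**2 * np.sqrt(2.0 * np.pi) * ell * np.exp(-0.5 * (ell * omega) ** 2)

    rng = np.random.default_rng(0)
    x = np.linspace(-4.0, 4.0, 2_000)
    phi, sqrt_lam = hsgp_features(x, m=30)

    # One prior draw: f ~ Phi @ (sqrt(S(sqrt(lambda_j))) * beta), with beta ~ N(0, I).
    # Cost per evaluation is O(n*m) rather than the O(n^3) of an exact GP.
    beta = rng.normal(size=phi.shape[1])
    f_draw = phi @ (np.sqrt(rbf_spectral_density(sqrt_lam)) * beta)
    print(f_draw.shape)  # (2000,)

The stationarity constraint mentioned above falls out of this construction: the whole trick rests on the kernel having a spectral density, which only stationary kernels do.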
As an outsider to the "tech" scene, who did a PhD generals field on modern European intellectual history (which included a dollop of recent continental philosophy), it's endlessly fascinating to me how people in tech have fixated upon Girard. While a few of his ilk do come up in standard reading lists, he (generally) doesn't -- he is far more prominent vis-a-vis his peers in this discourse than in his "native" one. I suspect that this owes to path dependency and his metaphysics' compatibility with the industry's participants' socio-intellectual priors (so to speak).
I'd guess it's because anoraks/botanics/intellos/Geeks/geeks are pretty familiar with low-subculture-on-the-totem-pole becoming a unifying scapegoat, so it's an appealing idée fixe. (or is that merely recapitulating what you just said?)
On the other hand, I've been in the tech scene for nearly 4 decades, and this discussion is the first time I've encountered Girard (at least that I remember).
I think one further bit of horror is that it has the potential to erode your memory of the lost person, piece by piece regressing your recollection of them toward the mean (so to speak) of the model's training set. At some point, will the bereaved -- to use the example given here -- just passively accept that the lost loved one actually was the sort of person to write about quilting, even when they never did in life?
(I think the author of the book discussed above, Osvaldo Martin, is the primary or sole contributor for the Rethinking implementations, in fact -- he had a full implementation in his own repo (https://github.com/aloctavodia/Statistical-Rethinking-with-P...) before deprecating it in favor of the above-linked one.)
The author was also a lead author on a CRC red-series book, Bayesian Modeling and Computation in Python, published a few years ago (short review here):
Intractable and impossible are very different claims.
NP-hardness is a statement about the asymptotic difficulty of solving a problem at ever larger scales. Roughly, it says 'any method you have for solving this problem will (assuming P ≠ NP) take time that grows faster than any polynomial in n as n grows'.
That might place limits on how big an n your approach can practically handle. But if your approach works in practice for an n big enough to make AGI, then your approach works - NP-hardness doesn't matter.
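To put rough numbers on that (the 2**n cost and the 1e9-steps-per-second rate below are made-up illustrative figures, not properties of any particular algorithm), an exponential-time procedure can be perfectly usable at a fixed small n and hopeless a little further out:

    # Illustrative only: a brute-force 2**n-step procedure at ~1e9 steps/second.
    for n in (30, 50, 80):
        seconds = 2 ** n / 1e9
        print(f"n={n}: ~{seconds:.3g} s (~{seconds / 3.15e7:.3g} years)")
    # n=30: about a second; n=50: about two weeks; n=80: tens of millions of years.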
And since we know that finite lumps of finitely many gray cells are capable of GI, we have a reasonable expectation that there is some n for which AGI is possible.
The "Ladder of Causation" proposed by Judea Pearl covers similar ground - "Rung 1” reasoning is the purely predictive work of ML models, "Rung 2" is the interactive optimization of reinforcement learning, and "Rung 3" is the counterfactual and casual reasoning / DGP construction and work of science. LLMs can parrot Rung 3 understanding from ingested texts but it can't generate it.