"You can prove the presence of bugs, but not their absence"
—A Programmer
The systems employ concurrency at multiple levels, yet the system design lacks a unified paradigm for establishing its correctness.
So concurrency "errors" should be expected.
I always start wondering why such bugs aren't more common. But then I realize a dilemma.
With a concurrency error, you can know that an error occurred but have no way to work backwards to the specific conditions of its cause (the incorrect design). All you can do is perturb the system into not exhibiting the error.
Lacking a formal way to establish correctness, we are left to an engineering of attrition, in which the author is engaged.
If the failure becomes common, system parameters will be adjusted to perturb behavior back into obscurity with a black art called "debugging" that approximates correctness.
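For concreteness, here is a minimal sketch (my illustration, not the author's code) of the kind of bug being described: a plain data race whose failures are rare and nondeterministic, and which a timing tweak can hide from view without fixing. The names (bump, run_once) and iteration counts are arbitrary choices for the example.

```python
import threading

counter = 0  # shared state, deliberately unprotected by any lock


def bump(n):
    """Increment the shared counter n times with a non-atomic read-modify-write."""
    global counter
    for _ in range(n):
        v = counter        # read
        counter = v + 1    # write; another thread may have written in between,
                           # silently losing that thread's increment


def run_once(n=100_000):
    """Run two racing threads and return the final count (should be 2 * n)."""
    global counter
    counter = 0
    threads = [threading.Thread(target=bump, args=(n,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter  # any value below 2 * n means updates were lost


if __name__ == "__main__":
    # Whether and how often the loss shows up depends on interpreter, load,
    # and timing; shrinking n or inserting a sleep tends to make the failure
    # vanish from observation ("debugging" by perturbation) while the race
    # itself remains in the design.
    print(sorted({run_once() for _ in range(20)}))
```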
The contravening property of the system is that the underlying logic runs so fast relative to human attention that failure modes are pressed into a high likelihood of being observed and therefore "corrected" (approximately). Bugs common enough to merit attention are perturbed out of "existence" by "version changes."
This seems to imply that such failures should be both expected, as unavoidable, and rare, given the limits of human attention under engineering by attrition.
Welcome to the author's world. We are all facing this sunk cost.
In the bug picture (haha) we simply abide systems that "work" according to a distribution of our tolerance for the nuisance of their inevitable failure in modes that are too rare to be "corrected."
As the author explains, he's working with newer versions of a machine. Huzzah!
Meanwhile, the number of deployments is scaling towards infinity, implying that a small clique of individual humans can expect to be driven mad by faults that appear demonic, while the horde of humanity lumbers on, enduring the costs of "good enough" design.
Except maybe for the contingency that the strategic nuclear deterrent is placed under the control of an AI.
Luckily for the individual human there is death.
Unluckily for humanity, someone is likely trying to place the strategic nuclear deterrent under the control of an AI.
"Buridan's ass is an illustration of a paradox in philosophy in the conception of free will. It refers to a hypothetical situation wherein an ass (donkey) that is equally hungry and thirsty is placed precisely midway between a stack of hay and a pail of water. Since the paradox assumes the donkey will always go to whichever is closer, it dies of both hunger and thirst since it cannot make any rational decision between the hay and water.
A common variant of the paradox substitutes the hay and water for two identical piles of hay; the ass, unable to choose between the two, dies of hunger.
The paradox is named after the 14th-century French philosopher Jean Buridan, whose philosophy of moral determinism it satirizes.
Although the illustration is named after Buridan, philosophers have discussed the concept before him, notably Aristotle, who put forward the example of a man equally hungry and thirsty, and Al-Ghazali, who used a man faced with the choice of equally good dates.
A version of this situation appears as metastability in digital electronics, when a circuit must decide between two states based on an input that is in itself undefined (neither zero nor one).
Metastability becomes a problem if the circuit spends more time than it should in this "undecided" state, which is usually set by the speed of the clock the system is using.
Interesting"
I dislike the figure of speech "A lot to unpack," but the Catt dialog on "the glitch" and electric current has a lot going on and I found it well worth the listen.
In interview 2/2 on electric current, the "Demystifying Science" pair strangely fall into the orthodoxy of wanting to block and control dialog in order to manage the apparent controversy illuminated by Catt's perspective. They fight back even as they clearly express curiosity about, and sympathy with, Catt's views and laments. I came away noting that there's a profound natural block in the discourse of science towards a common sense which is obviously insufficient to accommodate the world as we now find it.
It so happens that N. Chomsky gives a lucid presentation on the history of science that pertains directly to Ivor Catt's laments on blocking of science by an engineering orthodoxy.
The talk begins with a proposal for 3 problems — "Plato's, Orwell's and Descartes'" — supported by a review of modern scientific thought, then segues into a supporting illustration of modern effects of these problems in the orthodoxy of NATO policy in Serbia. It's the tightest package of criticism of contemporary thought I've come across and will not be a waste of time for viewers at any level of interest and familiarity with the history of thought.
There are two versions of this presentation on YouTube, given to different audiences on different dates. I prefer the clarity of this one.