_Unyielding_ describes the issues that arise when the author of the code doesn't have control over the transactionality/atomicity of their operations at the level they need: https://glyph.twistedmatrix.com/2014/02/unyielding.html
To my mind it's very much a balancing act between "low power to the developer, high power to the language" and "high power to the developer, low power to the language" all up and down the stack from software / hardware to "consumer / framework".
TL;DR - he says that "doParallel" and "doConcurrently" are separate operations with distinct semantics that program designers must care about. Conflating the two (especially the common "doConcurrently-and-often-but-not-always-in-parallel") is one of the most common causes of bugs in programs that need to make progress on multiple threads of execution.
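A small Python sketch of that distinction (the names `do_concurrently` and `do_in_parallel` are mine, echoing the article's terminology, not an actual API): asyncio interleaves IO-bound work on a single thread, while a process pool runs CPU-bound work simultaneously on separate cores.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def cpu_work(n: int) -> int:
    # CPU-bound work: only true parallelism (extra cores) speeds this up.
    return sum(i * i for i in range(n))

async def io_work(delay: float) -> float:
    # IO-bound work: interleaving on one thread is enough.
    await asyncio.sleep(delay)
    return delay

async def do_concurrently() -> list[float]:
    # Concurrency: both operations make progress by interleaving,
    # with no guarantee they ever execute simultaneously.
    return await asyncio.gather(io_work(0.1), io_work(0.2))

def do_in_parallel() -> list[int]:
    # Parallelism: the work genuinely runs at the same time,
    # in separate processes on separate cores.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(cpu_work, [10**6, 10**6]))

if __name__ == "__main__":
    print(asyncio.run(do_concurrently()))
    print(do_in_parallel())
```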
A common problem with writing like this article is that it confidently declares futures/promises good for parallelism. That's wrong: they are flawed because they introduce priority inversions.
To schedule tasks properly, you need to know who is waiting on them as early as possible. With a promise, you only know when you get to the "wait()" call, which is too late.
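A sketch of that timing problem using Python's `concurrent.futures` (the stdlib pool has no task priorities, so the inversion itself is only described in the comments):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def work() -> int:
    # Stand-in for some background computation.
    time.sleep(0.1)
    return 42

pool = ThreadPoolExecutor(max_workers=1)

# Promise-style: the work is scheduled the moment it is submitted.
# The executor commits to an ordering before anyone has declared
# a dependency on the result.
fut = pool.submit(work)

# ... an arbitrary amount of the program runs here ...

# Only at this call does the runtime learn who is waiting on `fut`.
# If the waiter were high-priority and the pool busy with lower
# priority work, the inversion already happened; .result() is too
# late to fix the schedule.
print(fut.result())
pool.shutdown()
```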
The correct solution is called structured concurrency.
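For a concrete flavor of it, here's a minimal sketch using `asyncio.TaskGroup` (Python 3.11+), one stdlib realization of structured concurrency:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> None:
    # Structured concurrency: every child task is owned by a lexical
    # scope. The `async with` block cannot be exited until all children
    # finish, and an exception in one child cancels its siblings.
    async with asyncio.TaskGroup() as tg:  # Python 3.11+
        a = tg.create_task(fetch("a", 0.1))
        b = tg.create_task(fetch("b", 0.2))
    # Past this point both results are guaranteed to be available.
    print(a.result(), b.result())

asyncio.run(main())
```

The key design point is that task lifetimes follow block structure: nothing can outlive the scope that spawned it, so the "who waits on whom" relationship is known up front.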
Depends on whether you're using Monix/Haskell-style tasks (lazy, run when depended on) or JS-style promises (E Promise / C# Task / Java Future: run when instantiated, so waiting happens after submission). Agreed completely though, structured concurrency is fantastic.
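Both styles actually coexist in Python's asyncio, which makes for a compact contrast: `asyncio.create_task` behaves like an eager promise, while a bare coroutine object behaves like a lazy task.

```python
import asyncio

async def child() -> str:
    await asyncio.sleep(0.1)
    return "done"

async def main() -> None:
    # JS/C#-style eager promise: create_task schedules `child` to run
    # immediately, before anyone has awaited it.
    eager = asyncio.create_task(child())

    # Monix/Haskell-style lazy task: a bare coroutine object is just a
    # description of work; nothing executes until it is awaited, so the
    # runtime knows its dependent before the first instruction runs.
    lazy = child()

    print(await eager, await lazy)

asyncio.run(main())
```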