Performance depends not only on the tool, but also on the model, the codebase you are working in (context), and the task given (prompt).
And these factors are not independent: some combinations work better than others. For example:
- Claude Sonnet 4 might work well for feature implementation on backend Python code using Claude Code.
- Gemini 2.5 Pro might work better for bug fixes on frontend React codebases.
...
So you can't just test the tools in isolation while holding everything else constant. Instead you get a combinatorial explosion of tool × model × context × prompt to test.
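To get a feel for how quickly this grows, here is a minimal sketch of the evaluation grid. The specific tool, model, context, and task names are illustrative placeholders, not a real benchmark suite:

```python
# Hypothetical sketch of the tool x model x context x prompt grid.
# All names below are illustrative assumptions, not a real benchmark.
from itertools import product

tools = ["Claude Code", "Cursor", "Aider"]
models = ["Claude Sonnet 4", "Gemini 2.5 Pro", "GPT-4.1"]
contexts = ["python-backend", "react-frontend"]
tasks = ["feature implementation", "bug fix"]

# Every combination is a separate evaluation run.
combos = list(product(tools, models, contexts, tasks))
print(len(combos))  # 3 * 3 * 2 * 2 = 36 runs for even this tiny grid
```

Each new option in any dimension multiplies the total, which is why testing tools alone tells you so little.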
16x Eval can tackle parts of the problem, but it doesn't cover factors like tools yet.
I think it's just written by someone who reads a lot of LLM output - lots of lists with bolded prefixes. Maybe there was some AI-assistance (or a lot), but I didn't get the impression that it was AI-generated as a whole.
The thing that sucks about it is maybe his English is bad (not his native language), so he relies on LLM output for his posts. I'm inclined to cut people slack for this. But the rub is that it's indistinguishable from spam/slop generated for marketing/ads/whatever.
Or it's possible that he's one of those people who _really_ adopted LLMs into _all_ of their workflow, I guess, and he thinks the output is good enough as is, because it captured his general points?
LLMs have certainly damaged trust in general internet reading now, that's for sure.
I don't know why you do. I found the article interesting and derived value from it. I don't care whether it was an LLM or a human that gave me that value. I don't see why it should matter.
It matters to me for so many reasons that I can't go over them all here. Maybe we have different priorities, and that's fine.
One reason why LLM generated text bothers me is because there's no conscious, coherent mind behind it. There's no communicative intent because language models are inherently incapable of it. When I read a blog post, I subconsciously create a mental model of the author, deduce what kind of common ground we might have and use this understanding to interpret the text. When I learn that an LLM generated a text I've read, that mental model shatters and I feel like I was lied to. It was just a machine pretending to be a human, and my time and attention could've been used to read something written by a living being.
I read blogs to learn about the thoughts of other humans. If I wanted to know what an LLM thought about the state of vibe coding, I could just ask one at any time.
Things “happen” in human history only because humans make them happen. If enough humans do or don’t want something to happen, then they can muster the collective power to achieve it.
The unstated corollary in this essay is that venture capital and oligarchs do not get to define our future simply because they have more money.
I refer you again to the essay; it's not inevitable that those with substantially more money than us should get to dominate us and define our future. They are but a tiny minority, and if/when enough of us see that future as not going our way, we can and will collectively withdraw our consent for the social and economic rules and structures which enable those oligarchs.
Would you say the industrial revolution could have been stopped by enough humans not wanting it to happen?
>The unstated premise of this essay is that venture capital and oligarchs do not get to define our future simply because they have more money.
AI would progress without them. Not as fast, but it would.
In my mind, the inevitability of technological progress comes from our competition with each other and our general desire to do work more easily and effectively. The rate of change will increase with more resources dedicated to innovation, but people will always innovate.
Currently, AI is improved through concerted human effort and energy-intensive investments. Without that human interest and effort, progress in the field would slow.
But even if AI development continues unabated, nothing is forcing us to deploy AI in ways that reduce our quality of life. We have a choice over how it's used in our society because we are the ones who are building that society.
>Would you say the industrial revolution would have been able to be stopped by enough humans not wanting to achieve it?
Yes, let's start in early 1800s England: subsistence farmers were pushed off the land by the enclosure acts and, upon becoming landless, flocked to urban areas to work in factories. The resulting commodified market of mobile laborers enabled the rise of capitalism.
So let's say these pre-industrial subsistence farmers had instead chosen to identify with the working class Chartism movement of the mid-1800s and joined in a general strike against the landed classes who controlled parliament. In that case, the industrial revolution, lacking a sufficiently pliable workforce, might have been halted, or at least occurred in a more controlled way that minimized human suffering.