My favourite one was debugging a crash in an Electron app deployed to iOS. It turned out that throwing an exception from a pointer event callback (deep in our app's code) was bubbling up into the device's kernel code.
I agree with the article, but that's not how the vibe coders see themselves. From their perspective, the gap between programming and product is invisible, and in my experience they're pretty hostile to feedback from real software engineers.
I realise your answer wasn't assertive, but if I heard this from someone actively defending AI, it would be a cop-out. If the selling point is that you can ask these AIs anything, then one can't retroactively go "oh, but not that" when a particular query doesn't pan out.
My point is the opposite of this point of view. I believe generative AI is the most significant advance since hypertext and the overlay of inferred semantic relationships via PageRank and the like. In fact, it was the creation of hypertext and the toolchains around it that made this point possible at all: neural networks were already understood by then, and transformer attention is just one more innovation on top. It’s the collective human assembly of interconnected linguistic and visual knowledge, at a pan-cultural and global scale, that enabled the current state.
The ability of LLMs alone to do natural language processing beyond anything prior, passing the Turing test by unthinkable miles, is astounding. The fact that they can reason abductively, which computing techniques to date have been unable to do, is amazing. The fact that you can mix them with multimodal regimes (images, motion, virtually anything that can be semantically linked via language) is breathtaking. The fact that they can be augmented with prior computing techniques, with IR, optimization, deductive solvers, and literally everything we’ve achieved to date, should give anyone knowledgeable of such things shivers for what the future holds.
But I would never hold that generative AI techniques are replacements for known optimal techniques; the ensemble, though, is probably the solution to nearly every challenge we face. When we hit the limits of LLMs today, I think: well, at least we already have grandmaster-beating chess solvers, and it’s irrelevant that the LLM can’t do that directly. LLMs and other generative AI techniques are, in my mind, like gases that fill, through learned approximation, the spaces we’ve not been able to solve directly, including the ad hoc assembly of those solutions. This is why, since BERT first came along, I knew agent-based techniques were the future.
Right now we live at a time like early hypertext with respect to AI. Toolchains suck, and LLMs are basically GeoCities pages with “under construction” signs. We will go through an explosive exploration, some stunning insights that’ll change the basic nature of our shared reality (some wonderful, some insidious), and then, if we aren’t careful (and we rarely are), enshittification at a scale unseen before.
This is a bit of a strawman. There are certainly people who claim that you can ask AIs anything but I don't think the parent commenter ever made that claim.
"AI is making incredible progress but still struggles with certain subsets of tasks" is self-consistent position.
Just use a flat C-style function API instead of a Singleton object. Singletons are only really needed in languages that enforce the 'everything is an object' folly.
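A minimal sketch of the flat-function-API approach, in Python rather than C (the counter name and functions are hypothetical illustrations): the module itself holds the state, and callers only get the functions you choose to expose.

```python
# Hypothetical sketch: a flat function API instead of a singleton class.
# The module plays the role of the single instance; _count is private
# by convention and only reachable through these two functions.

_count = 0  # module-private state, not meant to be imported directly

def increment() -> int:
    """The only way callers can modify the counter."""
    global _count
    _count += 1
    return _count

def current() -> int:
    """Read-only access to the counter."""
    return _count
```

Callers just do `increment()` and `current()`; there is no object to construct, pass around, or accidentally duplicate.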
Not exactly. For example, you can have a singleton object that maintains a persistent database connection for writing logs. No one's going to inject the "ElasticsearchLogger" object into their method/class by accident, and even then, they'll only have access to whatever singleton state the class exposes. So now your private Counter variable lives inside a global singleton without being accessible to anyone, even someone disregarding all of OP's rules.
A singleton object can encapsulate the global state, converting global variables to private fields. How would this be different? Because a counter singleton can, for example, disallow directly setting the count field, only allowing the count to be incremented through a method.
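A minimal sketch of the counter singleton described above (the `Counter` class and its members are hypothetical illustrations): the count field is name-mangled, readable through a property, and can only change via `increment()`; there is deliberately no setter.

```python
# Hypothetical sketch: a singleton whose count field cannot be set
# directly, only incremented through a method.

class Counter:
    _instance = None  # the one shared instance

    def __new__(cls):
        # Lazily create the single instance on first construction.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.__count = 0  # name-mangled: effectively private
        return cls._instance

    @property
    def count(self) -> int:
        """Read-only view of the count; no setter is defined."""
        return self.__count

    def increment(self) -> int:
        """The only operation that mutates the count."""
        self.__count += 1
        return self.__count
```

Every `Counter()` call returns the same instance, so `Counter().increment()` anywhere in the program bumps the one shared count, while `Counter().count = 5` raises `AttributeError` because the property has no setter.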
Yes good point. The module/unit acts as the singleton instance, in a sense, though that might be the incorrect way to put it.
In any case, I think variables that are “global” but encapsulated in this way lose most of the potential for harm we associate with a global variable that the whole program can directly read and write.
This reeks of backend engineers not caring, of UX designers who don't understand the problem, and of the UI designers who do understand being barred from attending meetings for bad behavior. I'm not throwing shade.
I don't agree. There's no technical reason why the different API endpoints can't return the same ordering. The current top comment here[0] is from someone who has implemented this (IMO) correctly in a different homeserver implementation.
I read some of your other replies and I can't quite get a read on your line of reasoning.
The issue is that we would give less attention to these things if it weren't for the social credit humans gave the vomit. So we engaged in good faith, it turns out it was effectively a prank, and we have no choice but to value requests from those people less now, because it's clear they didn't care about our response.
You ever watched a reviewbrah video? He doesn't get to the "without any further ado" moment until after the halfway point of the video. The prank is the wasted time. But the joke is that every other YouTuber does it more subversively, without you getting any laughs out of it. It proves we give way more attention to slop than we dare to calculate.
Also pretty funny to compare it to peanut butter & jelly; it goes to show American self-centredness, and the same bubble that surrounds AI tooling.