
I find the very popular response of "you're just not using it right" to be a big copout for LLMs, especially at the scale we see today. It's hard to think of any other major tech product where it's acceptable to shift so much blame onto the user. Typically, if a user doesn't find value in the product, we agree that the product is poorly designed or implemented, not that the user is bad. But AI seems somehow exempt from this sentiment.


> It's hard to think of any other major tech product where it's acceptable to shift so much blame on the user.

It's completely normal in development. How many years of programming experience do you need for almost any language? How many days or weeks do you need to use debuggers effectively? How long does it take from first contact with version control until you really get git?

I think it's the opposite actually - it's common that new classes of tools in tech need experience to use well. Much less if you're moving to something different within the same class.


> LLMs, especially at the scale we see today

The OP points out that the marketing cycle for this product is beyond extreme, and in a category of its own.

Normal people are being told to worry about AI ending the world, or all jobs disappearing.

Simply saying “the problem is the user”, without acknowledging the degree of hype and expectation-setting, is irresponsible.


AI marketing isn't extreme - not on the LLM vendor side, at least; the hype is generated downstream of it, for various reasons. And it's not the marketing that's saying "you're using it wrong" - it's other users. So, unless you believe everyone reporting good experience with LLMs is a paid shill, there might actually be some merit to it.


It is extreme, and on the vendor side. The OpenAI non-profit vs. for-profit saga was about profit-seeking vs. the future of humanity. People are talking about programming 3.0.

I can appreciate that it’s other users who are saying it’s wrong, but that doesn’t escape the point about ignoring the context.

Moreover, it’s unhelpful communication. It gives up on acknowledging a mutually shared context: the natural confusion that would arise from the ambiguous, high-level hype versus the actual down-to-earth reality.

Even if you have found a way to make it work, having someone understand your workflow can’t happen without connecting the dots between their frame of reference and yours.


It really is. For example, here is a quote from AI 2027:

> By early 2030, the robot economy has filled up the old SEZs, the new SEZs, and large parts of the ocean. The only place left to go is the human-controlled areas. [...]

> The new decade dawns with Consensus-1’s robot servitors spreading throughout the solar system. By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun. The surface of the Earth has been reshaped into Agent-4’s version of utopia: datacenters, laboratories, particle colliders, and many other wondrous constructions doing enormously successful and impressive research.

This scenario prediction, co-authored by a former OpenAI researcher (now at the Future of Humanity Institute), received almost a thousand upvotes here on HN and the attention of the NYT and other large media outlets.

If you read that and still don't believe the AI hype is _extreme_ then I really don't know what else to tell you.

--

https://news.ycombinator.com/item?id=43571851


I think the relentless podcast blitz by the OpenAI and Anthropic founders suggests otherwise. They're both keen to confirm that yes, in 5-10 years, no one will have any jobs any more. They're literally out there discussing a post-employment world like it's an inevitability.

That's pretty extreme.


This was present (in a positive way, though) even in Soviet films for children.

    Позабыты хлопоты,
    Остановлен бег,
    Вкалывают роботы,
    Счастлив человек!

    Worries forgotten,
    The treadmill doesn't run,
    Robots are working,
    Humans have fun!


Those billions won't raise themselves, you know.

More generally, these execs are talking their book, as they're in a low-margin, capital-intensive business whose future is entirely dependent on raising a bunch more money, so hype and insane claims are necessary for funding.

Now, maybe they do sort of believe it, but if so, why do they keep hiring software engineers and other staff?


You have to be pretty naive to think VCs don’t astroturf forums and let random mobs steer discussions about their investments. Even dinosaurs like Microsoft have been caught doing exactly that many times, including fake “letters to the editor” campaigns back when newspapers were a thing.


My experience with web forums has been: everything a poster disagrees with is astroturf and bots, everything a poster agrees with is brave people speaking truth to power. I don't doubt that LLM companies are astroturfing comments just like I don't doubt that anti LLM people are sharing threads in their internal Discords and asking their friends to brigade a thread. Trying to infer conspiracy to invalidate an opinion on the Internet is fraught.


> And it's not the marketing that's saying "you're using it wrong" - it's other users.

No, it's the non-coding managers who vibe-coded a half-working prototype, not other users. And here, the Dunning-Kruger effect is at play - those non-coding types do not understand that AI is not working for them either.

Full disclosure: I do rely on vibe-coded jq lines in one-off scripts that will definitely not process more data after the single intended use, and this is where AI saves my time.


It's called grassroots marketing. It works particularly well in the context of GenAI because it is fed with esoteric and ideological fragments that overlap with common beliefs and political trends. https://en.wikipedia.org/wiki/TESCREAL

Therefore, classical marketing is less dominant, although more present among downstream sellers.


Right. Let's take a bunch of semi-related groups I don't like, and make up an acronym for them so any of my criticism can be applied to some subset of those groups in some form, thus making it seem legitimate and not just a bunch of half-assed strawman arguments.

Also, I guess you're saying I'm a paid shill, or have otherwise been brainwashed by marketing of the vendors, and therefore my positive experiences with LLMs are a lie? :).

I mean, you probably didn't mean that, but part of my point is that you see those positive reports here on HN too, from real people who've been in this community for a while and are not anonymous Internet users - you can't just dismiss that as "grassroots marketing".


> I mean, you probably didn't mean that

Correct, I think you've read too much into it. Grassroots marketing is not a pejorative term, either. Its strategy is indeed to trigger positive reviews of your product, ideally from independent, credible community members.

That implies that those community members have motivations other than being paid. Ideologies and shared beliefs can be some of them. Being happy about the product is a prerequisite, whatever that means for the individual user.


It is completely typical, but at the same time abnormal to have tools with such poor usability.

A good debugger is very easy to use. I remember the Visual Studio debugger or the C++ debugger on Windows were a piece of cake 20 years ago, while gdb is still painful today. Java and .NET had excellent integrated debuggers while golang had a crap debugging story for so long that I don’t even use a debugger with it. In fact I almost never use debuggers any more.

Version control - same story. CVS for all its problems I had learned to use almost immediately and it had a GUI that was straightforward. git I still have to look up commands for in some cases. Literally all the good git UIs cost a non-trivial amount of money.

Programming languages are notoriously full of unnecessary complexity. Personal pet peeve: Rust lifetime management. If this is what it takes, just use GC (and I am - golang).


> git I still have to look up commands for in some cases

I believe that this is okay. One does not need to know the details about every specific git command in order to be able to use it efficiently most of the time.

It is the same with a programming language. Most people are unfamiliar with every peculiarity of every standard library function that the language offers. And that is okay. It does not prevent them from using the language effectively most of the time.

It is the same in other aspects of life: it is unnecessary to know everything from memory. For example, one does not need to remember how to replace the blade on a lawn mower. And that is okay. It does not prevent them from using it effectively most of the time.

The point is that if something is done less often, it is unnecessary to remember the specifics of it. It is fine to look it up when needed.


> It is completely typical, but at the same time abnormal to have tools with such poor usability.

The main difference I see is that LLMs are flaky, getting better over time, but still more so than traditional tooling like debuggers.

> Programming languages are notoriously full of unnecessary complexity. Personal pet peeve: Rust lifetime management. If this is what it takes, just use GC (and I am - golang).

Lifetime management is an inherently hard problem, especially if you need to be able to reason about it at compile time. I think there are some arguments to be made about tooling or syntax making reasoning about lifetimes easier, but not trivial. And in certain contexts (e.g., microcontrollers) garbage collectors are out of the question.


Nitpick: magit for emacs is good enough that everyone I’ve seen talk about it describes it as “the best git client”, and it is completely free.


Linus did not show up in front of congress talking about how dangerously powerful unregulated version control was to the entirety of human civilization a year before he debuted Git and charged thousands a year to use it.


This seems like a non sequitur. What does this have to do with this thread?


It is completely reasonable to hold Cursor/Claude to a different standard than gdb or git.


What standard would that be?


Ok. You seem to be talking about a completely different issue of regulation.


Hmmm, I don't see it? Are debuggers hard to use? Sometimes. But the debugger allows you to do something you couldn't actually do before, i.e. set breakpoints and step through your code. So, while tricky to use, you are still in a better position than not having it. Just because you can get better at using something doesn't automatically mean that using it as a beginner makes you worse off.

Same can be said for version control and programming.


I guarantee you there were millions of people that needed to be forced to use Excel because they thought they could do the calculations faster by hand.

We retroactively assume that everyone just obviously adopts new technology, yet I'm sure there were tons and tons of people who retired rather than learn how computers worked when the PC revolution was happening.


> How many days/weeks you need to use debuggers effectively

I understand your point, but would counter with: gdb isn't marketed as a cuddly tool that can let anyone do anything.


>It's hard to think of any other major tech product where it's acceptable to shift so much blame on the user.

Is that perhaps because of the nature of the category of 'tech product'? In other domains, this certainly isn't the case, especially if the goal is to get the best result instead of the optimum output/effort balance.

Musical instruments are a clear case where the best results are down to the user. Most crafts are similar. There is the proverb "A bad craftsman blames his tools" that highlights that there are entire fields where the skill of the user is considered to be the most important thing.

When a product is aimed at as many people as the marketers can find, that focus on individual ability is lost and the product targets the lowest common denominator.

They are easier to use, but less capable at their peak. I think of the state of LLMs as analogous to home computing at a stage of development somewhere around the Altair to TRS-80 level. These are the first ones on the scene, people are exploring what they are good for, how they work, and sometimes putting them to effective use in new and interesting ways. It's not unreasonable to expect a degree of expertise at this stage.

The LLM equivalent of a Mac will come, plenty of people will attempt to make one before it's ready. There will be a few Apple Newtons along the way that will lead people to say the entire notion was foolhardy. Then someone will make it work. That's when you can expect to use something without expertise. We're not there yet.


> It's hard to think of any other major tech product where it's acceptable to shift so much blame on the user.

Maybe, but it isn't hard to think of developer tools where this is the case. This is the entire history of editor and IDE wars.

Imagine running this same study design with vim. How well would you expect the not-previously-experienced developers to perform in such a study?


No one is claiming 10x perf gains in vim.

It’s just a fun geeky thing to use with a lot of zany customizations. And after two hellish years of memory muscling enough keyboard bindings to finally be productive, you earned it! It’s a badge of pride!

But we all know you’re still fat fingering ggdG on occasion and silently cursing to yourself.


> No one is claiming 10x perf gains in vim.

Sure they are - or at least were, until the last couple of years. Same thing with Emacs.

It's hard to claim this now, because the entire industry shifted towards webshit and cloud-based practices across the board, and the classical editors just can't keep up with VS Code. Despite the latter introducing LSP, which leveled the playing field wrt. code intelligence itself, the surrounding development process and ecosystem increasingly demand that you use web-based or web-derived tools and practices, which all see a browser engine as a basic building block. Classical editors can't match the UX/DX on that, plus the whole thing breaks basic assumptions about UI that were the source of the "10x perf gains" in vim and Emacs.

Ironically, a lot of the perf gains from AI come from letting you avoid dealing with the brokenness of the current tools and processes, that vim and Emacs are not equipped to handle.


Yeah I’m in my 40s and have been using vim for decades. Sure there was an occasional rando stirring up the forums about made-up productivity gains to get some traffic to their blog, but that was it. There has always been push back from many of the strongest vim advocates that the appeal is not about typing speed or whatever it was they were claiming. It’s just ergonomics and power.

It’s just not comparable to the LLM crazy hype train.

And to belabor your other point, I have treesitter, lsp, and GitHub Copilot agent all working flawlessly in neovim. Ts and lsp are neovim builtins now. And it’s custom built for exactly how I want it to be, and none of that blinking shit or nagging dialog boxes all over VSCode.

I have VScode and vim open to the same files all day quite literally side by side, because I work at Microsoft, share my screen often, and there are still people that have violent allergic reactions to a terminal and vim. Vim can do everything VSCode does and it’s not dogshit slow.


I am really curious what your thoughts on Zed are, given that it has a lot of features and is still mostly vim compatible (from what I know), so you get the same ergonomics and power, and it has some sane defaults / I don't need to tinker as much with Zed as I would have to with nvim.

It's not that I don't like tinkering. I really enjoy tinkering with config files, but I never could understand nvim personally, since I usually want an LSP / good-enough experience that nvim or any LunarVim etc. couldn't provide without me installing additional software.


I haven’t tried zed and I’m getting old and set in my ways. If it ain’t broke don’t fix it and all that.

So if the claim is that I can get everything I have out of vim, most importantly being unbeatably fast text buffers, and I don’t need a suitcase full of config files, that’s very compelling.

Is that the promise of zed?


I use most of the best vim features in VS Code with their vim bindings.

You'd be hard-pressed to find a popular editor without vim bindings.


> vim and Emacs are not equipped to handle.

You clearly don't have the slightest idea of what you're talking about.

Emacs is actually still amazing in the LLM era. Language is all about plain text. Plain text remains crucial and will remain important because it's human-readable, machine-parsable, version-control friendly, lightweight and fast, platform-independent, and resistant to obsolescence. Even when analyzing huge amounts of complex data - images, videos, audio-recordings, etc., we often have to reduce it to text representation.

And there's simply no tool better than Emacs today that is well-suited for dealing with plain text. Nothing even comes close to what you can do with text in Emacs.

Like, check this out - I am right now transcribing my audio notes into .srt (subtitle) files. There's subed-mode where you can read through subtitles, and even play the audio, karaoke style, while following the text. I can do so many different things from here - extract the summaries, search through things, gather analytics - e.g., how often have I said 'fuck' on Wednesdays, etc.
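
For the analytics part, something along the lines of this rough Emacs Lisp sketch would do it (built-ins only; the word, the directory, and the use of file modification time as the "day" are illustrative assumptions, not my actual setup):

    ;; Count occurrences of WORD in .srt files under DIR whose
    ;; modification time falls on a Wednesday (ISO weekday 3).
    (defun my/count-word-on-wednesdays (word dir)
      (let ((total 0))
        (dolist (file (directory-files dir t "\\.srt\\'"))
          (when (string= "3" (format-time-string
                              "%u" (file-attribute-modification-time
                                    (file-attributes file))))
            (with-temp-buffer
              (insert-file-contents file)
              (goto-char (point-min))
              (while (re-search-forward (regexp-quote word) nil t)
                (setq total (1+ total))))))
        total))

    ;; e.g. (my/count-word-on-wednesdays "fuck" "~/notes/")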

I can similarly play YouTube videos in mpv, while controlling the playback, volume, speed, etc. from Emacs; I can extract subtitles for a given video and search through them, play the vid from the exact place in the subs.

I very often grab a selected region of screen during Zoom sessions to OCR and extract text within it and put it in my notes - yes, I do it in Emacs.

I can probably examine images, analyze their elements, create comprehensive summaries, and formulate expert artistic evaluation and critique and even ask Emacs to read it aloud back to me - the possibilities are virtually limitless.

It allows you to engage with a vast array of LLM models from anywhere. I can ask a question in the midst of typing a Slack reply or reading HN comments or when composing a git commit; I can fact-check my own assumptions. I can also use tools to analyze and refactor existing codebases and vibe-code new stuff.
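
The "ask a question from anywhere" bit can be as small as something like this sketch - assuming the gptel package and its gptel-request entry point are installed and configured; the function name and keybinding are purely illustrative:

    ;; Prompt for a question anywhere in Emacs and show the model's
    ;; reply in the echo area (assumes a configured gptel backend).
    (require 'gptel)

    (defun my/quick-llm-ask (prompt)
      (interactive "sAsk the model: ")
      (gptel-request prompt
        :callback (lambda (response _info)
                    (message "%s" (or response "No response")))))

    (global-set-key (kbd "C-c q") #'my/quick-llm-ask)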

Anything like that even five years ago seemed like a dream; today it is possible. We can now reduce any complex digital data to plain text. And that feels miraculous.

If anything, the LLM era has made Emacs an extremely compelling choice. To be honest, for me it's not even a choice; it's the only seriously viable option I have, despite all its drawbacks. Everything else doesn't even come close - other options either lack critical features or have merely promising ones. Emacs is absolutely, hands-down, one of the best tools we humans have ever produced for dealing with plain text. Anyone who thinks that's an opinion and not a fact simply hasn't grokked Emacs or has no clue what you can do with it.


At first I thought you were replying to me and this was a revival of the old vim + emacs wars.

I’m so glad we’re past that now and can join forces against a common enemy.

Thank you brother.


There weren't any true "wars" to begin with. The entire thing is just absurd. These ideas are not even in competition, it's like arguing whether a piano or sheet music is "better".

Emacs veterans simply rejected the entire concept of modality, without even trying to understand what it is about. Emacs is inherently a modal editor. Key-chords are stateful, transient menus (e.g., Magit) are modal, completion is modal, isearch, dired, calc, C-u (universal argument), recursive editing — these are all modal. What the idea of vim-motions offers is a universal, simplified, structured language to deal with modality, that's all.

Vim users on the other hand keep saying "there's no such thing as vim-mode". And to a certain degree they are right — no vim plugin outside of vim/neovim implements all the features — IdeaVim, VSCode vim plugins, Sublime, etc. - all of them are full of holes and glaring deficiencies. With one notable exception — Evil-mode in Emacs. It is so wonderfully implemented, you wouldn't even notice that it is a plugin, an afterthought. It really does feel like a baked-in, native feature of the editor.

There are no "wars" in our industry — pretty much only misunderstanding, misinterpretation and misuse of certain ideas. It's not even technological — who knows, maybe it's not even sociotechnological. People simply like talking past each other, defending different values without acknowledging they're optimizing for different things.

It's not Vim's, Emacs' or VSCode's fault that we suffer from identity investment - we spend hundreds of hours using one so it becomes our identity. We suffer from simplification impulse — we just love binary choices, we constantly have the nagging "which is better?" question, even when it makes little sense. We're predisposed to tribal belonging — having a common enemy creates in-group cohesion.

But real, experienced craftspeople... they just use whatever works best for them in a given context. That's what we all should strive for — discover old and new ideas, study them, identify good ones, borrow them, shelve the bad ones (who knows, maybe in a different context they may still prove useful). Most importantly, use whatever makes you and your teammates happy. It's far more important than being more productive or being decisively right. If the stupid thing works, perhaps it ain't that stupid?


Huh? Most people use tools like vim for productivity...

I agree with you that AI dev tools are overhyped at the moment. But IDEs were, in fact, overhyped (to a lesser degree) in the past.


What I like about the IDE wars is that they remained a dispute between engineers. Some engineers like fancy-pants IDEs and use them, some are good with vim and stick with that. No one ever assumed that JetBrains autocomplete was going to replace me, or that I am outdated for not using it - even if there might be a productivity cost associated with that choice.


Excellent point. But I do think that forcing people to use IDEs for productivity was a thing for a while. Still, I agree that the current moment is a difference in kind, not just in scale.


New technologies that require new ways of thinking are always this way. "Google-fu" was literally a hirable career skill in 2004 because nobody knew how to search to get optimal outcomes. They've done alright improving things since then - let's see how good Cursor is in 10 years.


>It's hard to think of any other major tech product where it's acceptable to shift so much blame on the user.

Apple's Response to iPhone 4 Antenna Problem: You're Holding It Wrong https://www.wired.com/2010/06/iphone-4-holding-it-wrong/


I don't see how Antennagate can be qualified as "acceptable", since it caused a big public uproar and Apple had to settle a class action lawsuit.

https://www.businessinsider.com/apple-antennagate-scandal-ti...


It didn't end the iPhone as a brand, or end smartphones altogether, though.

How much did that uproar and settlement matter?


Mobile phone manufacturers were telling users this long before the iPhone was ever invented.

e.g., Nokia 1600 user guide from 2005 (page 16) [0]

[0] https://www.instructionsmanuals.com/sites/default/files/2019...


The important difference is that in your example, it was the manufacturer telling customers they're holding it wrong. With LLMs, the vendors say no such things - it's the actual users that are saying this to their peers.


I think the reason for that is maybe that you're comparing it to traditional products that are deterministic or have specific features that add value?

If my phone keeps crashing or if the browser is slow or clunky then yes, it’s not on me, it’s the phone, but an LLM is a lot more open ended in what it can do. Unlike the phone example above where I expect it to work from a simple input (turning it on) or action (open browser, punch in a url), what an LLM does is more complex and nuanced.

Even the same prompt from different users might result in different output - so there is more onus on the user to craft the right input.

Perhaps that’s why AI is exempt for now.


It's a specialist tool. You wouldn't be surprised that it took a while for someone to get good at typed programming, parallel programming, Docker, IaC, etc. either.

We have 2 sibling teams, one the genAI devs and the other the regular GPU product devs. It is entirely unsurprising to me that the genAI developers are successfully using coding agents with long-running plans, while the GPU developers are still more at the level of chat-style back-and-forth.

At the same time, everyone sees the potential and, just like in other automation movements, is investing in themselves and the code base.


>It's hard to think of any other major tech product where it's acceptable to shift so much blame on the user.

I have the opposite impression! I find it's hard to think of any other tech product where users expect to master it with no training at all. I think people get tricked into believing they need no training because the tool uses natural language as the UI.

You learn how to use a spreadsheet or a word processor, how to drive a car, sail a boat, play a guitar. In the 90s there were courses that spent hours teaching users how to work a mouse and keyboard!

Of course you need to learn how to use a coding assistant as well, it just makes sense.

There have already been a million words written about how to use LLMs, from people who don't really know how to use LLMs. Everyone is learning, there is a boom, and you can make a fortune selling knowledge about LLMs whether you have that knowledge or not.


Stay tuned, a new study is coming with another revelation: you aren't getting faster by using Vim when you are learning it.

My previous employer didn't even allow me to use Vim until I learned it properly, so it wouldn't affect my productivity. Why would using Cursor automatically make you better at something if it's just new to you and you are already an elite programmer, according to this study?


How did you measure this? Was the conclusion of your studies that typing/editing speed was the real bottleneck for a SWE becoming 10x?


Not every tool can be figured out in a day (or a week or more). That doesn't mean that the tool is useless, or that the user is incapable.


There are plenty of examples of other tech where "you're just not using it right" is perfectly acceptable, and many people find that it provides a high level of value. Rust and Vim are two that come immediately to mind. Both have sharp edges and steep learning curves, yet are wildly popular with some populations while not being right for everyone.

It's also possible for the user to be not using it right and that not be a value judgement on the user. We all suck at using new tools, that's part of learning.


I've spent the last 2 months trying to figure out how to utilize AI properly, and only in the last week do I feel that I've hit upon a workflow that's actually a force multiplier (vs divisor).


Cool! Congratulations on the anecdotal feelings of productivity. Super important input to the discussion. Now I can at least confidently say that investors will definitely get the hundreds of billions, trillions spent back, with a fat ROI and profits on top!

Thanks again!


On the other hand if you don't use vim, emacs, and other spawns from hell, you get labeled a noob and nothing can ever be said about their terrible UX.

I think we can be more open-minded about the idea that an absolutely brand-new technology (it literally did not exist 3 years ago) might require some amount of learning and adjusting, even for people who see themselves as an Einstein if only they wished to apply themselves.


> you get labeled a noob

No one would call one a noob for not using Vim or Emacs. But they might for a different reason.

If someone blindly rejects even the notion of these tools without attempting to understand the underlying ideas behind them, that certainly suggests the dilettante nature of the person making the argument.

The idea of vim-motions is a beautiful, elegant, pragmatic model. Thinking that it is somehow outdated is a misapprehension. It is timeless just like musical notation - similarly it provides compositional grammar and universal language, and leads to developing muscle memory; and just like it, it can be intimidating but rewarding.

Emacs is grounded on another amazing idea - one of the greatest ideas in computer science, the idea of Lisp. And Lisp is just as everlasting, like math notation or molecular formulas — it has rigid structural rules and uniform syntax, there's compositional clarity, meta-reasoning and universal readability.

These tools remain in use today despite the abundance of "brand new technology" because time and again these concepts have proven to be highly practical. Nothing prevents vim from being integrated into new tools, and the flexibility of Lisp allows for seamless integration of new tools within the old-school engine.


One could try to be poetic with LLMs in order to make their point stronger and still convince absolutely no one who wasn't already convinced.

I'm sure nobody really rejects the notion of LLMs, but they sure as hell do like to moan if the new technology doesn't perfectly fit their own way of working. Does that make them any different from people wanting an editor which is intuitive to use? Nobody will ever know.


> still convince absolutely no one who wasn't already convinced.

I don't know, people change their opinions all the time. I wasn't convinced about many ideas throughout my career, but I'm glad I found convincing arguments for some of them later.

> wanting an editor which is intuitive to use

Are you implying that Vim and Emacs are not?

Intuitive != Familiar. What feels unintuitive is often just unfamiliar. Vim's model actually feels pretty intuitive after the initial introduction. Emacs is pretty intuitive for someone who grokked Lisp basics - structural editing and REPL-driven development. The point is also subjective, for some people "intuitive editor" means "works like MS Word", but that's just one design philosophy, not an objective standard.

Tools that survive 30+ years and maintain passionate user bases must be doing something right, no?

> the new technology doesn't absolutely perfect fit their own way of working.

Emacs is extremely flexible, and thanks to that, I've rarely complained about new things not fitting my ways. I bend tools to fit my workflow if they don't align naturally — that's just the normal approach for a programmer.


> It's hard to think of any other major tech product where it's acceptable to shift so much blame on the user.

Sorry to be pedantic, but this is really common in tech products: vim, emacs, any second-brain app, the effectiveness of IDEs depending on learning their features, git, and more.


Well, surely vim is easy to use - I started it and haven't stopped using it yet (one day I'll learn how to exit).


Just a few examples: Bicycle. Car (driving). Airplane (piloting). Welder. CNC machine. CAD.

All take quite an effort to master; until then, they might slow one down or outright kill you.


Kubernetes. AWS. React. All have high learning curves to use effectively, hordes of footguns, etc. But if you get over that curve, they can be fantastic tools with tons of value. LLM-assisted dev tooling is similar, in my view.



