
This morning I was using an LLM to develop some SQL queries against a database it had never seen before. I gave it a starting point, and outlined what I wanted to do. It proposed a solution, which was a bit wrong, mostly because I hadn't given it the full schema to work with. Small nudges and corrections, and we had something that worked. From there, I iterated and added more features to the outputs.

At many points, the code would have an error; to deal with this, I just supply the error message, as-is, to the LLM, and it proposes a fix. Sometimes the fix works, and sometimes I have to intervene to push the fix in the right direction. It's OK - the whole process took a couple of hours, and probably would have been a whole day if I were doing it on my own, since I usually only need to remember anything about SQL syntax once every year or three.

A key part of the workflow, imo, was that we were working in the medium of the actual code. If the code is broken, we get an error, and can iterate. Asking for opinions doesn't really help...



I often wonder if people who report that LLMs are useless for code haven't cracked the fact that you need to have a conversation with it - expecting a perfect result after your first prompt is setting it up for failure; the real test is whether you can get to a working solution after iterating with it for a few rounds.


As someone who has finally found a way to increase productivity by adding some AI, my lesson has sort of been the opposite. If the initial response after you've provided the relevant context isn't obviously useful: give up. Maybe start over with slightly different context. A conversation after a bad result won't provide any signal you can do anything with, there is no understanding you can help improve.

It will happily spin forever responding in whatever tone is most directly relevant to your last message: provide an error and it will suggest you change something (it may even be correct every once in a while!), suggest a change and it'll tell you you're obviously right, suggest the opposite and you will be right again, ask if you've hit a dead end and yeah, here's why. You will not learn anything or get anywhere.

A conversation will only be useful if the response you got just needs tweaks. If you can't tell what it needs, feel free to let it spin a few times, but expect to be disappointed. Use it for code you can fully test without much effort; actual test code often works well. Then a brief conversation will be useful.


Why would I do this, when I can just write it from scratch in less time than it takes you to have this conversation with the LLM?


Because once you get good at using LLMs you can write it with 5 rounds with an LLM in way less time than it would have taken you to type out the whole thing yourself, even if you got it exactly right first time coding it by hand.


I suspect this is only true if you are lousy at writing code or have a very slow typing speed


I suspect the opposite is only true if you haven't taken the time to learn how to productively use LLMs for coding.

(I've written a fair bit about this: https://simonwillison.net/tags/ai-assisted-programming/ and https://simonwillison.net/2025/Mar/11/using-llms-for-code/ and 80+ examples of tools I've built mostly with LLMs on https://tools.simonwillison.net/colophon )


Maybe I've missed it, but what did you use to perform the actual code changes on the repo?


You mean for https://tools.simonwillison.net/colophon ?

I've used a whole bunch of techniques.

Most of the code in there is directly copied and pasted in from https://claude.ai or https://chatgpt.com - often using Claude Artifacts to try it out first.

Some changes are made in VS Code using GitHub Copilot

I've used Claude Code for a few of them https://docs.anthropic.com/en/docs/agents-and-tools/claude-c...

Some were built with my own https://llm.datasette.io tool - I can run a prompt through that and save the result straight to a file

The commit messages usually link to either a "share" transcript or my own Gist showing the prompts that I used to build the tool in question.


So the main advantage is that LLMs can type faster than you?


Yes, exactly.


Burning down the rainforests so I don’t have to wait for my fingers.


The environmental impact of running prompts through (most) of these models is massively over-stated.

(I say "most" because GPT-4.5 is 1000x the price of GPT-4o-mini, which implies to me that it burns a whole lot more energy.)


If you do a basic query to GPT-4o every ten seconds it uses a blistering... hundred watts or so. More for long inputs, less when you're not using it that rapidly.
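
(Rough arithmetic behind that, assuming the commonly cited estimate of roughly 0.3 Wh per GPT-4o query: one query every ten seconds is 360 queries per hour, and 360 × 0.3 Wh = 108 Wh per hour, i.e. roughly 100 W of continuous draw.)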


This is honestly really unimpressive

Typing speed is not usually the constraint for programming, for a programmer that knows what they are doing

Creating the solution is the hard work, typing it out is just a small portion of it


I know. That's why I've consistently said that LLMs give me a 2-5x productivity boost on the portion of my job which involves typing code into a computer... which is only about 10% of what I do. (One recent example: https://simonwillison.net/2024/Sep/10/software-misadventures... )

(I get boosts from LLMs to a bunch of activities too, like researching and planning, but those are less obvious than the coding acceleration.)


> That's why I've consistently said that LLMs give me a 2-5x productivity boost on the portion of my job which involves typing code into a computer... which is only about 10% of what I do

This explains it then. You aren't a software developer

You get a productivity boost from LLMs when writing code because it's not something you actually do very much

That makes sense

I write code for probably 50-80% of any given week, which is pretty typical for any software dev I've ever worked with at any company I've ever worked at.

So we're not really the same. It's no wonder LLMs help you, you code so little that you're constantly rusty


I'm a software developer: https://github.com/simonw

I very much doubt you spend 80% of your working time actively typing code into a computer.

My other activities include:

- Researching code. This is a LOT of my time - reading my own code, reading other code, reading through documentation, searching for useful libraries to use, evaluating if those libraries are any good.

- Exploratory coding in things like Jupyter notebooks, Firefox developer tools etc. I guess you could call this "coding time", but I don't consider it part of that 10% I mentioned earlier.

- Talking to people about the code I'm about to write (or the code I've just written).

- Filing issues, or updating issues with comments.

- Writing documentation for my code.

- Straight up thinking about code. I do a lot of that while walking the dog.

- Staying up-to-date on what's new in my industry.

- Arguing with people about whether or not LLMs are useful on Hacker News.


"typing code is a small portion of programming"

"I agree, only 10% of what I do is typing code"

"that explains it, you aren't a software developer"

What the hell?


You should check out Simon’s wikipedia and github pages, when you have time between your coding sprints.


You must not be learning very many new things then if you can't see a benefit to using an LLM. Sure, for the normal crud day-to-day type stuff, there is no need for an LLM. But when you are thrown into a new project, with new tools, new code, maybe a new language, new libraries, etc., then having an LLM is a huge benefit. In this situation, there is no way that you are going to be faster than an LLM.

Sure, it often spits out incomplete, non-ideal, or plain wrong answers, but that's where having SWE experience comes into play to recognize it.


> But when you are thrown into a new project, with new tools, new code, maybe a new language, new libraries, etc., then having an LLM is a huge benefit. In this situation, there is no way that you are going to be faster than an LLM.

In the middle of this thought, you changed the context from "learning new things" to "not being faster than an LLM"

It's easy to guess why. When you use the LLM you may be productive quicker, but I don't think you can argue that you are really learning anything

But yes, you're right. I don't learn new things from scratch very often, because I'm not changing contexts that frequently.

I want to be someone who had 10 years of experience in my domain, not 1 year of experience repeated 10 times, which means I cannot be starting over with new frameworks, new languages and such over and over


"When you use the LLM you may be productive quicker, but I don't think you can argue that you are really learning anything"

Here's some code I threw together yesterday without even looking at it: https://github.com/simonw/tools/blob/main/incomplete-json-pr... (notes here: https://simonwillison.net/2025/Mar/28/incomplete-json-pretty... )

Reading it now, here are the things it can teach me:

    :root {
      --primary-color: #3498db;
      --secondary-color: #2980b9;
      --background-color: #f9f9f9;
      --card-background: #ffffff;
      --text-color: #333333;
      --border-color: #e0e0e0;
    }
    body {
      font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
      line-height: 1.6;
      color: var(--text-color);
      background-color: var(--background-color);
      padding: 20px;
That's a very clean example of CSS variables, which I've not used before in my own projects. I'll probably use that pattern myself in the future.

    textarea:focus {
      outline: none;
      border-color: var(--primary-color);
      box-shadow: 0 0 0 2px rgba(52, 152, 219, 0.2);
    }
Really nice focus box shadow effect there, another one for me to tuck away for later.

        <button id="clearButton">
          <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
            <rect x="3" y="3" width="18" height="18" rx="2" ry="2"></rect>
            <line x1="9" y1="9" x2="15" y2="15"></line>
            <line x1="15" y1="9" x2="9" y2="15"></line>
          </svg>
          Clear
        </button>
It honestly wouldn't have crossed my mind that embedding a tiny SVG inline inside a button could work that well for simple icons.

      // Copy to clipboard functionality
      copyButton.addEventListener('click', function() {
        const textToCopy = outputJson.textContent;
        
        navigator.clipboard.writeText(textToCopy).then(function() {
          // Success feedback
          copyButton.classList.add('copy-success');
          copyButton.textContent = ' Copied!';
          
          setTimeout(function() {
            copyButton.classList.remove('copy-success');
            copyButton.innerHTML = '<svg xmlns="http://www.w3.org/2000/svg" width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><rect x="9" y="9" width="13" height="13" rx="2" ry="2"></rect><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"></path></svg> Copy to Clipboard';
          }, 2000);
        });
      });
Very clean example of clipboard interaction using navigator.clipboard.writeText

And the final chunk of code on the page is a very pleasing implementation of a simple character-by-character non-validating JSON parser which indents as it goes: https://github.com/simonw/tools/blob/1b9ce52d23c1335777cfedf...
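
To give a rough sense of the shape of that parser (this is a sketch in the same spirit, not the actual code from the repo), an indent-as-you-go, non-validating pretty-printer only needs to walk the input one character at a time and track nesting depth and whether it is inside a string:

    // Sketch only, not the repo's implementation: a character-by-character,
    // non-validating pretty-printer that indents as it goes, so even
    // truncated/incomplete JSON prints as far as it can.
    function prettyPrintIncompleteJson(input, indentUnit = '  ') {
      let out = '';
      let depth = 0;
      let inString = false;
      let escaped = false;
      const newline = () => '\n' + indentUnit.repeat(depth);
      for (const ch of input) {
        if (inString) {
          out += ch;
          if (escaped) escaped = false;
          else if (ch === '\\') escaped = true;
          else if (ch === '"') inString = false;
          continue;
        }
        if (ch === '"') { inString = true; out += ch; }
        else if (ch === '{' || ch === '[') { depth++; out += ch + newline(); }
        else if (ch === '}' || ch === ']') { depth = Math.max(0, depth - 1); out += newline() + ch; }
        else if (ch === ',') { out += ch + newline(); }
        else if (ch === ':') { out += ': '; }
        else if (!/\s/.test(ch)) { out += ch; } // drop pre-existing whitespace outside strings
      }
      return out;
    }

Because nothing is validated, truncated input simply pretty-prints as far as it goes, which is exactly what you want from an "incomplete JSON" tool.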

That's half a dozen little tricks I've learned from just one tiny LLM project which I only spent a few minutes on.

My point here is that if you actively want to learn things, LLMs are an extraordinary gift.


Exactly! I learn all kinds of things besides coding-related things, so I don't see how it's any different. ChatGPT 4o does an especially good job of walking thru the generated code to explain what it is doing. And, you can always ask for further clarification. If a coder is generating code but not learning anything, they are either doing something very mundane or they are being lazy and just copy/pasting without any thought--which is also a little dangerous, honestly.


It really depends on what you're trying to achieve.

I was trying to prototype a system and created a one-pager describing the main features, objectives, and restrictions. This took me about 45 minutes.

Then I fed it into Claude and asked it to develop said system. It spent the next 15 minutes outputting file after file.

Then I ran "npm install" followed by "npm run" and got a "fully" (API was mocked) functional, mobile-friendly, and well documented system in just an hour of my time.

It'd have taken me an entire day of work to reach the same point.


Yeah nah. The endless loop of useless suggestions or “solutions” is very easily achievable and common, at least in my use cases, no matter how much you iterate with it. Iterating gets counter-productive pretty fast, imo. (Using 4o).


When I use Claude to iterate/troubleshoot I do it in a project and in multiple chats. So if I test something and it throws an error or gives an unexpected result, I’ll start a new chat to deal with that problem, correct the code, update that in the project, then go back to my main thread and say “I’ve updated this” and provide it the file, “now let’s do this”. When I started doing this it massively reduced the LLM getting lost or going off on weird quests. Iteration in side chats, regroup in the main thread. And then possibly another overarching “this is what I want to achieve” thread where I update it on the progress and ask what we should do next.


I have been thinking about this a lot recently. I have a colleague who simply can’t use LLMs for this reason - he expects them to work like a logical and precise machine, and finds interacting with them frustrating, weird and uncomfortable.

However, he has a very black and white approach to things and he also finds interacting with a lot of humans frustrating, weird and uncomfortable.

The more conversations I see about LLMs the more I’m beginning to feel that “LLM-whispering” is a soft skill that some people find very natural and can excel at, while others find it completely foreign, confusing and frustrating.


It really requires self-discipline to ignore the enthusiasm of the LLM as a signal for whether you are moving in the direction of a solution. I blame myself for lazy prompting, but I have a hard time not just jumping in with a quick project and hoping the LLM can get somewhere with it, rather than first checking that I'm not attempting something impossible.


> OK - the whole process took a couple hours, and probably would have been a whole day if I were doing it on my own, since I usually only need to remember anything about SQL syntax once every year or three

If you have any reasonable understanding of SQL, I guarantee you could brush up on it and write it yourself in less than a couple of hours unless you're trying to do something very complex

SQL is absolutely trivial to write by hand


Obviously to a mega super genius like yourself an LLM is useless. But perhaps you can consider that others may actually benefit from LLMs, even if you’re way too talented to ever see a benefit?

You might also consider that you may be over-indexing on your own capabilities rather than evaluating the LLM’s capabilities.

Let’s say an LLM is only 25% as good as you but is 10% of the cost. Surely you’d acknowledge there may be tasks that are better outsourced to the LLM than to you, strictly from an ROI perspective?

It seems like your claim is that since you’re better than LLMs, LLMs are useless. But I think you need to consider the broader market for LLMs, even if you aren’t the target customer.


Knowing SQL isn't being a "mega super genius" or "way talented". SQL is flawed, but being hard to learn is not among its flaws. It's designed for untalented COBOL mainframe programmers on the theory that Codd's relational algebra and relational calculus would be too hard for them and prevent the adoption of relational databases.

However, whether SQL is "trivial to write by hand" very much depends on exactly what you are trying to do with it.


Sure, I could do that. But I would learn where to put my join statements relative to the where statements, and then forget it again in a month because I have lots of other things that I actually need to know on a daily basis. I can easily outsource the boilerplate to the LLM and get to a reasonable starting place for free.

Think of it as managing cognitive load. Wandering off to relearn SQL boilerplate is a distraction from my medium-term goal.

edit: I also believe I'm less likely to get a really dumb 'gotcha' if I start from the LLM rather than cobbling together knowledge from some random docs.


If you don’t take care to understand what the LLM outputs, how can you be confident that it works in the general case, edge cases and all? Most of the time that I spend as a software engineer is reasoning about the code and its logic to convince myself it will do the right thing in all states and for all inputs. That’s not something that can be offloaded to an LLM. In the SQL case, that means actually understanding the semantics and nuances of the specific SQL dialect.


That makes sense, and from what I’ve heard this sort of simple quick prototyping is where LLM coding works well. The problem in my case was that I was working with multiple large code bases and couldn’t pinpoint the problem to a specific line, or even file. So I wasn’t gonna just copy multiple git repos into the chat.

(The details: I was working with running a Bayesian sampler across multiple compute nodes with MPI. There seemed to be a pathological interaction between the code and MPI where things looked like they were working, but never actually progressed.)


I wonder if it breaks like this: people who don't know how to code find LLMs very helpful and don't realize where they are wrong. People who do know immediately see all the things they get wrong and they just give up and say "I'll do it myself".


> Small nudges and corrections, and we had something that worked. From there, I iterated and added more features to the outputs.

FWIW, I've seen people online refer to this as "vibe coding".



