
There's a lot of work around UX and how you interact with the LLM. For example, given an entire React app + a user prompt to update it, which code snippet do you feed to the LLM? The LLM cannot read your mind. In a way it feels like the application layer's job to help it read your mind.
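To make that concrete, here's a rough sketch of the selection problem (everything below is made up for illustration, not how any particular product does it): rank the app's files by overlap with the user's prompt and pack the best ones into a context budget. Real systems use much richer signals (import graphs, embeddings, edit history), but the shape is the same.

    // Hypothetical sketch: rank files by naive keyword overlap with the
    // user's prompt, then pack the best matches into a fixed context budget.
    interface SourceFile {
      path: string;
      contents: string;
    }

    function tokenize(text: string): Set<string> {
      return new Set(text.toLowerCase().match(/[a-z0-9_]+/g) ?? []);
    }

    // Score a file by how many prompt tokens it mentions.
    function scoreFile(promptTokens: Set<string>, file: SourceFile): number {
      const fileTokens = tokenize(file.contents);
      let hits = 0;
      for (const t of promptTokens) if (fileTokens.has(t)) hits++;
      return hits;
    }

    // Pick the highest-scoring files that fit within a rough character budget.
    function selectContext(prompt: string, files: SourceFile[], budgetChars = 12_000): SourceFile[] {
      const promptTokens = tokenize(prompt);
      const ranked = [...files].sort(
        (a, b) => scoreFile(promptTokens, b) - scoreFile(promptTokens, a)
      );
      const selected: SourceFile[] = [];
      let used = 0;
      for (const file of ranked) {
        if (used + file.contents.length > budgetChars) continue;
        selected.push(file);
        used += file.contents.length;
      }
      return selected;
    }

    // With a small budget, only the most relevant file survives the cut.
    const context = selectContext(
      "make the login button full width",
      [
        { path: "src/LoginForm.tsx", contents: "export function LoginForm() { /* button styles */ }" },
        { path: "src/Chart.tsx", contents: "export function Chart() { /* d3 plotting */ }" },
      ],
      60
    );
    console.log(context.map((f) => f.path)); // -> ["src/LoginForm.tsx"]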


You probably know this, but just want to say, founder to founder: don't listen to this argument at all.

People are so fond of saying "just wait and the new model will do this." And very smart people I know say it (especially when they work for OpenAI or Anthropic!).

It might be partly true (of course it's situational). But it's a glib and irrelevant thing to say. Model capabilities do not advance like some continuous exponential across all domains and skills. Not even close.

Product design is exploring the solution to human problems in a way that you can bundle and sell. Novel solutions to human problems tend to come from humans, applying effort over time (with the help of models of course) to understand the problem and separate out what's essential and what's irrelevant to the solution.

(A related comment on the adjacent thread.)


I love what you wrote in the other thread too:

> "The hard part of being an engineer is not writing JavaScript. It is building a solution that addresses the essential complexity of a problem without adding accidental complexity."

I have been oversimplifying it as "LLMs cannot read your mind."


Do you not think that’ll get solved by a future generation of cleverer LLMs though? As someone pointed out in another comment, they get better results with Gemini 2.5 already.

People already seem quite annoyed with Cursor based on that thread the other day with the hallucinated customer support.

Interested in anyone’s opinion


> Interested in anyone’s opinion

Okay, I'll bite. My issue w/ statements like "you're building something an LLM will do in the future" is that they're a constant goal-post-moving argument.

It seems to equate to "how is this getting funded when AGI is going to do it eventually anyways". That applies to literally everything. "Why bother building a social media platform, soon an LLM will be able to build an entire one in a day!", "Why bother becoming a plumber, soon an LLM will be able to control manufacturing equipment to build a robot that can do it better than any human", "Who needs architects, LLMs will soon be able to design perfect buildings for whatever use case!".

If your point is only that some companies currently getting funded are a weekend project away from getting Apple-d out of existence, then I would definitely agree that some companies are like that (just like some app companies were 6 years ago). Some companies are just super basic wrappers around someone else's LLM, but the expectation (from investors, at least) is that there's a bigger goal, and the "easy weekend project" approach is for validation and for building some sort of user base now.

However, I also disagree that this is the case here. Building good UX around LLM usage is not just "using LLMs", and figuring out the use cases people actually want is also not just "using LLMs".


100% agree. There is a bigger point too: People assume LLM capabilities are like FLOPs or something, as if they are a single number.

In reality, building products is an exploration of a complex state space of _human_ needs and possible solutions. This complexity doesn't go away. The hard part of being an engineer is not writing JavaScript. It is building a solution that addresses the essential complexity of a problem without adding accidental complexity.

The reason this is relevant is that it's just the same for LLMs! Like human engineers, they get tripped up into adding accidental complexity when they don't understand the problem well enough, and then they don't solve the real problem. So saying "just wait, LLMs will do that in the future" is not much different from saying "just wait, some smarter human engineer might come along and solve that problem better than you". It's possibly true, possibly false. And certainly not helpful.

If you work on a problem over time, sometimes you'll do much better than a smarter person who has the wrong tools or doesn't understand the problem. And that's no different for LLMs.


> Do you not think that’ll get solved by a future generation of cleverer LLMs though?

I don't. But obviously I'm biased + have spent perhaps too much time at the application layer. I think there will still be a large amount of tooling + feeding of context needed to get the best result, and I don't see a world in which we let LLMs run hog wild on our computers any time soon, especially for prototyping workflows at the enterprise level.

And for the sake of discussion: let's say these cleverer, future generation LLMs do exist... then I think the entire workflow will be very different. Hard to say how. Perhaps knowledge work as we know it will be unrecognizable.

Re: the Gemini 2.5 comment, I would love to compare it prompt for prompt. Looks like the prompt they are comparing it to didn't include the requirements for the Rubik's cube scrambler/timer/solver. That said, I wouldn't be surprised if one LLM — Gemini 2.5 in this case — is better at creating a Rubik's cube than Sonnet 3.7/3.5 with our system prompt. (Not a lot of product teams are prompting our platform to build Rubik's cubes in three.js, lol.) But if it is better, what's great is we can easily swap it out and start using Gemini.
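(For what it's worth, the reason the swap is cheap: the app depends only on a narrow model interface, with a thin adapter per provider. Rough sketch below with made-up names and stubbed calls, not real SDK code.)

    // Hypothetical sketch: the product talks to a narrow interface, and each
    // provider gets a thin adapter behind it. Bodies are stubs, not real SDKs.
    interface CodeGenModel {
      name: string;
      generate(systemPrompt: string, userPrompt: string): Promise<string>;
    }

    // In a real system these adapters would call the vendors' actual APIs.
    const sonnet: CodeGenModel = {
      name: "sonnet-3.7",
      generate: async (_system, user) => `/* Sonnet's three.js for: ${user} */`,
    };

    const gemini: CodeGenModel = {
      name: "gemini-2.5",
      generate: async (_system, user) => `/* Gemini's three.js for: ${user} */`,
    };

    // The rest of the product depends only on the interface, so an A/B
    // comparison (or a full swap) is a one-line change at the call site.
    async function buildPrototype(model: CodeGenModel, userPrompt: string) {
      const systemPrompt = "You generate three.js prototypes."; // shared system prompt
      return model.generate(systemPrompt, userPrompt);
    }

    // Compare the two prompt for prompt, or swap entirely, without touching
    // anything else in the app.
    const prompt = "a Rubik's cube scrambler/timer/solver";
    for (const model of [sonnet, gemini]) {
      buildPrototype(model, prompt).then((code) => console.log(model.name, code));
    }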


Cool. Thanks for the insight. Good luck with it all! It seems like lots of people DO see the value in it!


Claude Code?



