
I’m exhausted by these types of posts.

I am a developer with >25 years of professional experience.

I am unable to get these things to do anything useful.

I’ve tried: different models, limiting my scope, breaking it down to small tasks, prompt “engineering”; and am still getting less than useless results.

I say less than useless, because I will then additionally waste time debugging or slamming my head against the wall the llm built before I abandon it and go to the official docs and find out the llm is suggesting an API access paradigm that became deprecated in the last major version update.

People on this site love to talk about “muh productivity!”, but always stop short of saying what they got from this productivity boost: a pay raise, less time working; or what they built, what level of employment they're at, or who they work for.

Are all of these posts just astroturfed?! I ask that sincerely.

Do you all just make “todo SPAs” at your employers?



> I am a developer with >25 years of professional experience. I am unable to get these things to do anything useful.

I am a developer with >25 years of professional experience. I was able to do useful things with these tools from day one of trying.

This puzzles me so much every time I read something like this. Either I am more stupid than average and just think the results are more useful than what I could come up with myself, or maybe I have a knack for figuring out how these tools can work for me?

When I read such comments I ask myself, do you even use stackoverflow or are you smarter than those results?


LLMs have dramatically different results depending on the domain. Getting LLMs to help me learn TypeScript is a joy; getting them to help me fix distributed consensus problems in my fully bespoke codebase makes them look worse than useless.

Some people will find them amazing, some will find them a net negative.

Although finding truly zero use for them makes it hard for me to believe that this person really tried with creativity and an open mind


Very much this. I have >25 years of programming experience, but not with TypeScript and React, and it's helping me with my current project. I ignore probably 2/3 of its auto suggestions, but increasingly I now highlight some code and ask it to just do x for me, rather than having to go google the right function / CSS magic.


Does that feel as rewarding as doing it yourself?


I'm fine with it. Over the years I've forgotten so many frameworks and libs for now-dead devices/services/OSes that it's largely pointless memorising these things. I'm very happy for a machine to help me get to where I want to be, and the less time faffing about with google/stackoverflow the better. Like I said, the failure rate is still fairly high, but it's still useful enough.


> getting them to help me fix distributed consensus problems in my fully bespoke codebase makes them look worse than useless.

Often the complex context of such a problem is clearer in your head than what you actually write down. No wonder the LLM cannot solve it: it doesn't have the right info about the problem. But if you then suggest to it, "what if it has to do with this or that race condition, since service A does not know the end time of service Z", it can often come up with different search strategies to find that out.


> or are you smarter than those results?

“Smarter than” are your words. I’ve just yet to get any real utility from them.

> When I read such comments I ask myself, do you even use stackoverflow

I find this a strange comparison.

I’ve yet to see people replacing employees with “stack overflow” or M$FT shoehorning “stack overflow” into their flagship product or former NSA members joining the board of “stack overflow” or billions in funding pouring into “stack overflow” or constant posts to HN asking “how do you use ‘stack overflow’ to improve productivity?”.


To me the SO comparison makes sense.

5 years ago (and up to this day), when I stumbled into a non-obvious problem, I resorted to a search engine to read more about the context of the problem. Often a very similar question had been posed on SO, where a problem/solution pair very similar to mine was presented. I did the translation from the proposed solution into mine. This all made perfect sense to me, and dozens of colleagues did the same.

Today you can do the same with an LLM, with the difference that the LLM often does the translation, sometimes in very complex ways, from a set of similar problem/solution pairs into my particular problem/solution pair. It can also work out whether a given pair applies to my particular problem or not, and I can ask it more questions to find out if/why the solution is good.

So that alone is a very big timesaver. But in fact what I described is just the tip of the iceberg in ways the LLM helps me.

So now my question is, do you use such SO problem/solution pairs for help, or do you simply find things out by a lot of thinking combined with reading and experimenting?


Your last sentence was me earlier this year ... seeing headlines about productivity, etc. and then trying out the tools and not finding that.

I have however found it to be very helpful, completely replacing usage of StackOverflow for instance. Instead of googling for your problem, ask AI and provide very specific information like version numbers of libraries, etc. and it usually comes back with something helpful.

As for all those headlines and the nonsense: like most content online these days, it looks like marketing content, not journalism. AI tools are helpful and in some ways feel like an evolution of search engine technology from ~25 years ago. Treat the output like that of a junior developer or intern; it does require some effort, like coaching a junior dev or intern. You can also ask it stupid questions, about things like tech you haven't worked on in a while but "should know". It's helpful for getting back up to speed on things like that.


Have you tried making a "todo SPA" of your own with the help of these AI tools? I think it is useful for folks to take a step back and try working on something simpler/easier as an intro to these AI tools. And then ramp up the complexity/difficulty from there. When the tools don't work, it can be extremely frustrating. But when they do work, they really do enhance productivity. But it takes a little bit of time to figure out where that boundary is, and it also takes a little time to figure out how much effort to put into using the tool when you are near that boundary. i.e. sometimes I know the AI tools can help me, but the amount of effort I need to put into writing the prompt is not worth the help that I will get. And other times, I know that no amount of prompting is going to get me back something useful.


What is the point of “ramping up” though? You don’t learn much about how to prompt in the process, you just get worse results. So now I have my useless todo and ramp up to my code and it falls face first in the mud in the next sentence. I can chew up everything for it and explain where it’s wrong or re-prompt with clarifications, but the problem is, I write code faster than that and with less frustration, cause it’s at least deterministic. And I’m neither a rockstar developer nor too smart.

What I would like from LLMs is a developer’s buddy. A chat that sits aside and ai-lints my sources and the current location, suggesting ideas, reminding of potential issues, etc. So I could dismiss or take note of its tips in a few clicks. I don’t think anyone built that yet.


I'm always stumped by comments like this. I'm at a point where ~70% of my code is AI-written, and the majority of the remaining is mostly because it would take too much time to provide enough context to the tool/LLM of choice for it to be able to produce the code I need.

Given the right context and the right choice of model/tools, I think ~90-95% of the code I write could be generated. And this is not for doing trivial CRUD; I work on a production app with 8 other people.

I'm really curious if you could give examples of problems that you tried and failed to use these tools for?


Two recent examples.

Please link to your history when you get one of these things to build my example so I can see how you managed to do it.

First, a friend without technical knowledge wanted to query an API using SQL. (At a previous firm he learned how to write “SELECT * FROM … WHERE …” statements.)

He asked one of these LLMs, one he pays a premium for, to do this, and it suggested installing VSCode and writing an extension to ingest the API and query it with Python.

I am unfamiliar with VSCode so I'm unsure if this is even feasible, but after 3 days of him trying to get it to work, he asked me, and I set up a Python environment for him and wrote a 5-line script that ingested the data from the API into SQLite so he could query it.
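
Roughly, the script was this shape (the endpoint, table, and column names here are placeholders, not the ones he actually used):

  import requests, sqlite3
  import pandas as pd
  # hypothetical endpoint; assumes it returns a JSON list of records
  rows = requests.get("https://api.example.com/records").json()
  db = sqlite3.connect("data.db")
  pd.DataFrame(rows).to_sql("records", db, if_exists="replace", index=False)
  print(pd.read_sql("SELECT * FROM records LIMIT 5", db))

With the data sitting in SQLite he could then write his own SELECT ... FROM records WHERE ... queries against it.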

For me, the last time I tried, I asked one to write me a container solution using Docker that allowed for live coding of a Clojure project. I wanted it to give me solutions for each: deps.edn and lein.

I wasted hours trying to get it to output anything of use for either paradigm, because it always felt "just around the corner". Then, when I abandoned the LLM, I quickly found via a web search a blog post someone wrote that did exactly what I asked, for a lein project of their own. I just changed it to work for my project, and then again for the deps.edn version on my own.


This isn't a knock on you. The prompt (including additional context like specific examples/docs) has an incredible influence on the output.

Do you know what your friend asked? "Query an API with SQL" sounds like you're sending SQL with POST api[.]com/query. What you built is more like

"Make a request to this endpoint, using these parameters, and store it in sqlite. This is the shape of the data coming back and this is what I want the table(s) to look like."

Gpt4o or Claude could easily write that five line script, if given the right information.

I find writing prompts and working with LLMs to be an entirely new way of thinking. My struggle has been to learn how to write out my tacit knowledge and be comfortable iterating.

Do you still have your conversation where you tried to build the Docker project?


Let me share my experience: I tried 3-4 different LLMs, and one of them is outstanding.

For code samples, popular programming languages fare much better than languages like Clojure.

Two examples: About a week ago, I came across Myers' string diff algorithm and asked it to write some code; initially it spat out Python. I asked it to write Common Lisp code instead, and it generated code that was about 90% complete. I rewrote it, and the whole thing took less than a day. It was my first time seeing 'quality' from machine-generated code.

I experimented further. I found Automerge-Java and wanted to write a Clojure wrapper, so I asked it how to parse Java source files and it showed me Python code. I ran it and gave some feedback, and then I could get almost perfect output, which is easy to process from the Clojure side. After three days, I had written the interface generator. In my experience this type of work is a time-consuming process, so three days is pretty good I think. I fed it concrete patterns and pointed out mistakes fewer than ten times.

Overall, it still lacks 'creativity', but for concrete examples with the 'right' patterns it saves a huge amount of time.


Hold on..

In my experience ChatGPT writes awesome Clojure code. So much so that most of my Clojure code in the last few months was written by ChatGPT. Sure, it gets some stuff wrong, but overall it knows more Clojure functions than I do.

My prompts start by asking it questions about appropriate functions to use, and only then asking it to write the code itself. The prompts have to be a bit verbose and give it instructions to evaluate and think before generating output.


As the other commenter said, prompting is everything, and most LLMs are sycophants and will try to do anything you tell them without pausing to tell you "why the hell are you trying to query an API with SQL? That's not what SQL is for". While it's possible to build stuff with llms with little to no technical knowledge, it's still very hit and miss.

With that said, the space is moving incredibly fast and the latest Claude/GPT-o1 are far ahead of anything that was available 3-6 months ago. Unfortunately Claude doesn't allow sharing publicly like ChatGPT, but here is a gist of Claude's answer for +- the same question your friend asked:

https://gist.github.com/ldorigo/1a243218e00d75dd2baaf0634640...

I'm on mobile, so it wasn't handy to quickly paste an example API request/documentation for the LLM to follow, so there's a chance it might have hallucinated some of the API parameters. But if I had included that, in my experience the code would work on the first shot 90% of the time.

Regarding your second query, I'm too unfamiliar with clojure and the two solutions you mentioned to really understand what you were trying to achieve, but if you explain just a little bit more, I'm happy to record a screencast of me figuring it out with llms/genai tools from the ground up. What do you mean with "a container solution that allows for live coding"?


Not OP, but I believe Clojure has a REPL that lets you run and edit code, persisting the changes from the REPL.

Off the top of my head you would want a Dockerfile with the version of Clojure you're working in, plus a mounted volume for sharing data between host and container. My guess is the two different things they mentioned (deps.edn and lein) are the two dependency/build tools.

LLMs require a -fu, similar to the Googlefu of old, to get what you want out of them.


Not OP, but I've been trying to use Copilot to help me determine how to force Android's ConnectivityService to select the network I've created as its default.

The network shows up as the default network in netd. It shows up as a network when I dumpsys connectivity, but I cannot get it to be what ConnectivityService considers default.

I'm open to changing the code within AOSP as this is a research project, but even then Copilot just has no idea what to do and it keeps recommending the same non-working "solutions" over and over again.

FWIW, I'm using Copilot Pro.


It's a niche thing you're trying to do, and the model has likely not seen code that does that, thus it can't help … it can't actually think its way around it.


Has it not ingested the entire AOSP codebase? I was under the impression that OpenAI had trained GPT-* on just about everything available to the public.

It's not necessarily niche either. The codebase already does this all the time. I just can't figure out why it won't do it for me.

FWIW, at least 80% of my time in writing software for an R&D laboratory is devoted to solving problems like this.


Ya they are either astroturfed or... I think there's just a lot of like young junior JavaScript developers who really haven't built like a full program with multiple features by themselves.

I think they do some sort of online tutorial, then go through some sort of course, and then they get a job where they're only doing small pieces of code writing themselves, and I guess that's where these editors help them some more.

You see more and more YC startups these days using TypeScript and Node as their stack, which is so strange to me.

But I agree with you. The AI stuff hasn't worked for me at all apart from some smarter autocomplete.


I've been programming for about 30 years and I get a lot of benefit from these tools.

My day job is mainly data science and data forensics and these LLM tools are fantastic for both as they excel at writing scripts and data processing tools. SQL queries, R plots, Pandas data frame manipulation, etc.

They also work well for non-trivial applications like these that I made with Claude Projects writing 90% - 95% of the code:

https://github.com/williamcotton/guish

https://github.com/williamcotton/search-query-parser-scratch...


No need to be exhausted by the post. If AI doesn't help you, move on.

Probably you're really smarter and faster than the average developer.

The post is about finding out what things can help make it work for others. :)


I suspect it’s not a smarter developer thing, but a stupider code thing.

Programming for a client is making their processes easier, ideally as few clicks as possible. Programming for a programmer does the same to programming.

The thing with our “industry” is that it doesn’t automate programming at all, so smart people “build” random bs all day that should have been made a part of some generic library decades ago and made available off the shelf by all decent runtimes.

Making a form with validation and data objects, and a backend with ORM/SQL connections and migrations and auth, etc etc. It has all been solved millions of times, and no one bats an eye at why tf they reimplement multiple klocs of it over and over again.

That’s where AI shines. It builds you this damn stupid form that takes two days of work otherwise.

Very nice.

But it's not programming. If anything, it's a shame. A spit in the face of programming that somehow got normalized by… not sure whom. We take a bare, raw runtime like node/python/go and a browser and call it "a platform". What platform? It's as much a platform as INT 13h is an RDBMS.

I think the AI-usefulness divide clearly shows us that right now, but most are blind to it out of inertia.


We re-trained or let go everyone who did not want to or could not be a client-facing consultant; with our in-house builder, we don't need full-time devs anymore. It's 'productive' as in much higher profit margins with fewer people.

We are in the custom ERP space.


I suspect this will be the future for many devs who develop boring CRUD apps, like ERP. There is no point having developers who only convert requirements crafted by others into code if LLMs can speed up that part enough. Such a basic developer role will largely merge with the business person / product owner / project manager role.

Ultimately, I think it's easier to teach business skills to a developer than to teach a business person enough code fluency to produce and deploy working code with the help of LLMs.


Thanks for the reply.

I would like to explore this more if you are willing.

Do you consider yourself a “developer”? What is your title at said company?

Do you write code for yourself or for this business?

Who determined and what criteria define who “cannot be a client facing consultant”?

What is an “in house builder”?

What were these “fulltime devs” that you said “you don’t need anymore” doing before these llms?

Do your customers know you swapped from human workers to llms? Are they comfortable with this transition?

How did this change result in “much higher profit margins”?

When you say “with less people” did you just give multiple peoples’ workloads to a single dev or did the devs you retained ask for more work?

What do you use an llm for in the ERP space?

Why would clients use you if they could just use the llm?


> Do you consider yourself a “developer”? What is your title at said company?

Yes, for the past 40 years. And CTO/co-founder.

> Do you write code for yourself or for this business?

I have been writing DSLs, code generators and other tooling for this company for around the past 20 years. Before that I did the same thing for educational software (also my company).

> Who determined and what criteria define who “cannot be a client facing consultant”?

They did; some people just don't like sitting with clients noting down very dry formulae and business rules.

> What is an “in house builder”?

Our in-house tooling which uses AI to create the software.

> What were these “fulltime devs” that you said “you don’t need anymore” doing before these llms?

Building LoB apps, bugfixing, maintaining, translating Excel or business rules to (Java) code.

> Do your customers know you swapped from human workers to llms? Are they comfortable with this transition?

Yes, they like it; faster (sometimes immediate results) and easier to track; no black box; just people sitting next to you.

> How did this change result in “much higher profit margins”?

Very high fees for these consultants, but now they do 'all the work'; in total they bill more hours than they did before, though much less than they did as programmers. But the fees are such a multiple that the end result is larger profits.

> When you say “with less people” did you just give multiple peoples’ workloads to a single dev or did the devs you retained ask for more work?

Yes, 1 consultant now does that work and can manage more.

> What do you use an llm for in the ERP space?

We feed it specs which get translated to software. This is not the 'hey mate, get me a logistics system in German' type of thing; the specs are detailed and in the technical format we have also used to write code ourselves for the past 20+ years.

> Why would clients use you if they could just use the llm?

See above; we have a lot of know-how and code built in. That's also why we cannot really sell this product: no-one will get useful stuff out of it without training.


Thanks for taking the time to answer.

It sounds like you already had 20+ years of human-made tooling built, and you use the LLM for orchestration and onboarding(?)

I’m glad you found a solution that works for you.

I could see that use case.

When I did consulting work, the initial onboarding of new clients to my tooling was a lot of drudge work, but I definitely felt my job was more about the work after that phase: satisfying requests for additional features, and updating out-of-date methods and technologies.

I wonder what your plans are for when your tools fall out of date or fail to satisfy some new normal?

Hire "seasonal" programmers again? Or have an LLM try to build analogues of what your developers built over those precious 20+ years?

(‘precious’ was a typo of ‘previous’ but I left it in because I thought it was funny)


Well, it's one of my businesses, so I will probably sell it. I have others which I like a lot more and which have more staying power (and are less bothered by AI; it helps, but not enough yet). My favorite is a business that does very urgent emergency software repairs: the current LLMs are way too hallucinatory for that, they waste too much time, and I haven't managed to build really solid tooling around them. You cannot imagine how terrible, and therefore unique/diverse, software around the world is.


Start small and you might see the value. I find AI useless for producing code for me, but if I'm stuck on naming a variable or a complex object it's a great brainstorming tool. Almost like a very complex thesaurus.

And if I need to write a shell script that's good enough for a single one-off job, well, shell script is far from my native programming language, so it'll do a better job than I will


I get tremendous value, but only when using APIs that have 'always' been more or less stable.

I agree, systems with a rapidly evolving feature set are painful.

Successes: Git, any bash script, misc linear algebra recipes. Random debug tools in javascript (js and plain old html is stable enough). C++. C#. Sometimes Python.

Biggest value currently, I guess, is the data debug tool I wrote myself specifically for an ongoing project.

Now, the 'value' to me here means I don't have to toil at some tedious, trivial task which is burdensome mainly because everybody uses different names for similar concepts, and modern computing systems are a mishmash of a dozen things that work slightly differently.

So, to me ChatGPT is the idiot savant assistant I can use for a bunch of tedious and boring things that are still valuable doing.

I get paid for some niche very specific C++ stuff I’m fairly good at and like doing. But it’s the 85% of the rest of the things (like git or CMake or bash) I can’t stand.


I'm working on a Next.js project. Next.js made a bunch of breaking changes and doesn't document things consistently or comprehensively, so I get a lot of grief using LLMs on this framework.

This is something frameworks/libs/APIs should factor in going forward: how can you make your project LLM-friendly in order to make it dev-friendly?


> I am unable to get these things to do anything useful.

My experience is widely different than yours. I use Copilot extensively to explain aspects of codebases, generate documentation, and even fill in boilerplate code for automated tests. You need to provide the right context with the right system prompts, which needs some effort from your end, and you cannot expect perfect outputs.

In the end it's like any software developer tool: you need to learn how to use it, and when it makes sense to do so. That needs some continuous effort from your end to work your way up to a proficient level.

> People on this site love to talk about “muh productivity!”, but always stop short of saying what they got from this productivity boost: (...)

I don't understand what you're trying to say. I mean, have you ever asked that type of loaded question on discussions on googling for answers, going to Stack Overflow, or even posting questions on customer support pages?

But to answer your question, I spend far less time troubleshooting, reading code to build up context, and even googling for topics or browsing stack overflow. I was able to gather requirements, design whole systems, and put together proofs of concept requiring far fewer iterations than what I would otherwise have to go through. This means less drudge work, with all the benefits to quality of life that this brings.


These services retain historical records of interactions.

Can you show me an example of successfully doing what you claim you do?


This is a WIP, but here's the test suite for a recursive-descent-powered search DSL:

https://github.com/williamcotton/search-query-parser-scratch...

Claude, using Projects, wrote perhaps 90% of this project with my detailed guidance.

It does a double pass: a first recursive-descent pass to get strings as leaf nodes, and then another pass so that multiple errors can be reported at once.

There's also a React component for a search input box powered by Monaco and complete with completions, error underlines and messaging, and syntax highlighting:

https://github.com/williamcotton/search-query-parser-scratch...

Feel free to browse the commit history to get an idea of how much time this saved me. Spoiler alert: it saved a lot of time. Frankly, I wouldn't have bothered with this project without offloading most of the work to an LLM.

There's a lot more than this and if you want a demo you can:

  git clone git@github.com:williamcotton/search-query-parser-scratchpad.git
  cd search-input-query-react
  npm install
  npm run dev
Put something like this into the input:

  -status:out price:<130 (sneakers or shoes)
And then play around with valid and invalid syntax.

It has Sqlite WASM running in the browser with demo data so you'll get some actual results.

If you want a guided video chat tour of how I used the tool, I'd be happy to arrange that; it takes too much work to get the transcripts out of Claude otherwise.


> These services retain historical records of interactions.

That's not universally true; for example, AWS hosts its own version of Claude specifically for non-retention and guarantees that your data and requests are not used for training. This is legally backed up, and governments and banks use this version to guarantee that submitted queries are not retained.

I'm a developer with about the same amount of experience as you (22 years), and LLMs are incredibly useful to me, but only really as advanced tab completion (I use the paid version of Cursor with the latest Claude model), and it easily 5x's my productivity. The most benefit comes from refactoring code: I change one line, the LLM detects what I'm doing and then updates all the other lines in the file. Could I do this manually? Yes, absolutely, but it just turned a 2-minute activity into (literally) a 2-second activity.

These micro speed-ups save time for sure, but there's a WAY, WAAAY larger benefit: my momentum stays up because I'm not getting cognitively fatigued doing trivialities.

Do I read and check what the llm writes? Of course.

Does it make mistakes? Sometimes, but until I have access to the all-knowing perfect god machine I’m doing cost benefit on the imperfect one, and it’s still worth it A LOT.

And no, I don't write SPA TODO apps; I am the founder of a quantum chemistry startup. LLMs write a lot of our helpers and deployment code, review our scientific experiments, help us brainstorm, and write our documentation, tests and much more. The whole company uses them and everyone is more productive by doing so.

How do we know it works? We just hit experimental parity, and labs have verified that our simulations match predictions with a negligible margin of error. Could we have built this without LLMs? Yes, sure, but we did it in 4.5 months; I estimate it would have taken at least 12 without them.

Again - do they make mistakes? Yes, but who doesn’t? The benefits FAR outweigh the negatives.


> Can you show me an example of successfully doing what you claim you do?

In theory technically nothing prevents me from doing that, but I use it for professional work. Do you understand what you're asking?


If you were sincere you’d share a single transcript where the AI was completely useless for solving your problem.

“This new search engine sucks it can’t find anything”.

“Share what you searched for”

“No”


There's an element of "I'm not going to do your homework for you" I find sometimes.

I've also never asked it to spit out more than an HTML boilerplate or two, but it is useful for asking best options when given a choice between two programming patterns.


I’m exhausted by these types of posts.

They never include concrete details on what they're trying to do: what languages they're using, what frameworks, which LLM. They occasionally state which tool, but then don't go into detail about how they're using it. There are never any links to chats/sessions showing the prompts they're giving it and the answers they're finding so unacceptable.

Imagine if you got bug reports from customers with that little detail.

Actual in-depth details would go a long way to debugging why people are reporting such different experiences.

It takes a back-and-forth exchange with the LLM for it to make progress. Expertise in using an LLM is not just knowing what to prompt it with, but more importantly, when to stop, fix the code yourself, and keep going, without throwing the baby out with the bathwater just because you still had to do something by hand (where the baby is "using an LLM in the first place").

If I had to guess though, I think that's where people differ. Just like with every skill, there's a beginner's plateau where you hit a wall and have to push through (fatigue/boredom/disillusionment/etc). If the way you're using the LLM means you haven't gotten a hallucination by then, and you've seen how wildly more productive it makes you and how it's able to take away some of the bullshit in programming (if no bad stuff has hit yet and you take to it like a fish to water), you can push through some of the dumber errors it makes.

If, however, you are doing something esoteric (aka not using JavaScript/Python), are met with hallucinations, and scrutinize every line of code it produces, even going into it with an open mind, it's easier to give up and just stop there. That may not even be the wrong thing to do! Different programmers deliver value in different ways. You don't want Gilfoyle when you need Richard Hendriks, or vice versa; a company needs both of them.

So: show us the non-functional wall on GitHub the LLM built, or even just name the language used and the library it hallucinated.

But again, getting perfect code out of the LLM is a non-goal; don't get distracted by it. LLM-assisted or not, you get graded on value derived from code that actually gets committed, sent for review, and put into production. So if the LLM is being dumb, go read and fix the code, give it your fixed code, and move on with your life, or at least on to the next TODO/ticket.


No one includes complete detail when saying it's useful and life-changing either, so that's fair. It might turn out that what works for those for whom it works is trivial "code" not worth the SSD blocks it occupies. This is actually my current theory, cause LLMs (all of them, yes, we tried all of them) are only capable of what I tend to think of not as programming but as industry nonsense which should have been automated/abstracted/libraried away ages ago.

Maybe show us the successful code it built and we'll see what type it is, cause recording failures is only useful in hindsight. I have no logs of my lengthy struggles with LLM stupidity.

> getting perfect code out of the LLM is a non-goal

It stops being a goal after just a few tries, naturally. The problem is usually not that the code isn't perfect; the problem is that it doesn't understand the problem at all and drifts toward something resembling mediocrity instead. You can't just fix that and move on.


"No one includes complete detail at saying it’s useful and life-changing too"

There are at least 3 posts in this very discussion sharing details and github repos with code written mostly by an LLM.


I see a weather.com class, a parser, and a react gui boilerplate tsx folder. All three are textbook and trivial areas. We are only missing an ad hoc CRUD ORM here.

In my opinion that is not code, it's not business logic. What is presented is (not to offend anyone, I look at code, not people) the kind of useless github-code carcass that github contains in abundance. Real code solves problems; this code solves nothing, it just exists. A parser, a UI, an HTTP query: it's boilerplate, boilerplate, boilerplate. You aren't coding when writing it. It's "my arduino is blinking LEDs" level of programming.

I think that's the difference in our perception. I won't share my current code, purely for technical reasons, but as an overview: it fuzzy-detects elements on virtual displays and plays simple games with dynamic objects on screen, behaving completely human input-wise, all based on a complex statistical schedule. It uses a stack of tech and ideas that LLMs fail at miserably. LLMs are completely useless at anything in there, because there's basically no boilerplate and no "prior art". I could probably offload around 15% to an LLM, but only through the pain of explaining what it's supposed to assist with.

Maybe it's me, but I think that most of the jobs that involve trivial things like "show fields with status" or "read/write a string format" are not programming jobs, but an artefact of a stupid industry that created them out of the mud-level baseline it allowed to persist. These should have been removed long ago, regardless of AI. People just had way too much money (for a while) to paycheck all that nonsense.

Edit: I mean not just removed, but replaced with instruments to free these jobs from existing. AI is an utterly sarcastic answer to this problem, as it automates and creates more of that absurdity rather than less.


That’s funny — I have the opposite opinion, and think people like you might be poor engineers or problem solvers. These tools are amazing for productivity.


Sorry.

Hope the other posts here help you.


I started with BASICA and GWBASIC in the 80s, and though I've had some diversions, I would say there haven't been many days since where I haven't thought about solving problems with code. I still don't feel particularly qualified to answer, but I guess I probably am.

> Are all of these posts just astroturfed?! I ask that sincerely.

This is amusing to me - since GPT 4 or so, I've been wondering if the real fake grass is actually folks saying this shit is useless.

I think I'd need a bit more insight into how you are trying to use it to really help, but one thing you wrote did stand out to me:

> the llm is suggesting an API access paradigm that became deprecated

Don't trust its specific knowledge any more than you absolutely have to. For example, I was recently working with the Harvest API, and even though it can easily recite a fair bit about such a well-known API on its own, I would never trust it to.

Go find the relevant bits from API/library docs and share in your prompt. Enclose each bit of text using unambiguous delimiters (triple-backtick works well), and let it know what's up at the start. Here's a slightly contrived example prompt I might use:

---

Harvest / ERP integration - currently we have code to create time entries with a duration, but I'd like to be able to also create entries without a duration (effectively starting a timer). Please update providers/harvest.py, services/time_entries.py, endpoints/time_entries.api accordingly. I've included project structure and relevant API docs after my code. Please output all changed files complete with no omissions.

providers/harvest.py: " contents "

services/time_entries.py: " contents "

endpoints/time_entries.api: " contents "

project structure: "
  /app
    /endpoints
      time_entries.py
      project_reports.py
      user_analytics.py
    /services
      time_entries.py
      project_reports.py
      user_analytics.py
    /providers
      harvest.py
"

harvest docs: " relevant object definitions, endpoint request/response details, pagination parameters, etc "

---

I have a couple simple scripts that assist with sanitization / reversal (using keyword db), concatenation of files with titles and backticks > clipboard buffer, and updating files from clipboard either with a single diff confirmation or hunk-based (like `git add -p`). It isn't perfect, but I am absolutely certain it saves me so much time.
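
The concatenation one is roughly this shape; a minimal sketch that leaves out the sanitization and clipboard handling (file paths passed on the command line, output piped to your clipboard tool):

  # concat_for_prompt.py - illustrative sketch, not the actual helper
  # prints each file's path followed by its contents in a triple-backtick fence
  import sys, pathlib
  for name in sys.argv[1:]:
      text = pathlib.Path(name).read_text()
      print(f"{name}:\n```\n{text}\n```\n")

e.g. `python concat_for_prompt.py providers/harvest.py services/time_entries.py | pbcopy` (or xclip), then paste into the prompt.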

Also, I never let it choose libraries for me. If I am starting something from scratch, I generally always tell it exactly what I want to use, and as above, any time I think it might get confused, I provide reference material. If I'm not sure what I want to use, I will first ask it about options and research on my own.


>> Are all of these posts just astroturfed?! I ask that sincerely.

>This is amusing to me - since GPT 4 or so, I've been wondering if the real fake grass is actually folks saying this shit is useless.

Heh. If you really wanna conspiracy-theory it: if you wanted positive marketing copy to sell people on using an LLM, would you just use plain ChatGPT, or, because you're in the industry, would you post earnest-sounding anti-LLM messages, then use the responses as additional training data for fine-tuning LLMtoolMarketerGPT, and iterate from there?


I suspect there may have been a few levels of conspiracy that you had to work through before you got to where you are today.



