Hacker News

OK, so when people say that they use ChatGPT for coding, they use it as a code snippet library? Here, a function that has probably been written a million times before in some similar form, and a template for a simple web app.



I am really struggling to make ChatGPT write anything near functioning for the projects I am working on. Then I read the code people share and claim is made by AI, and I understand why. It's really basic code that, instead of being copy-pasted from Stack Overflow, is copied from a text generator. Perhaps it may evolve as a tech, but right now it's at the stage of pareidolia: instead of seeing faces, people see intelligence where there is none.


Had the same experience with ChatGPT on GPT-3.5, but GPT-4 is way better.

Also, GitHub Copilot has been good for a long time.


This isn't an example of a coding problem, but just yesterday we had a very difficult technical problem come up that one of our partners, a very large IT company, couldn't solve. None of our engineers knew how to solve it after trying for a few days. I would say I'm an expert in the industry and have been doing this for over 20 years as a developer. I thought I knew the best way to solve it, but I wasn't sure exactly. I asked ChatGPT (GPT-4) and the answer it gave, although not perfect in every way, was pretty much exactly the method we needed to solve the issue.

It's a bit scary because knowing this stuff is sort-of our secret sauce and GPT-4 was able to give an even better answer than I was able to give. It helped us out a lot. We are now taking the solution back to the customer and will be implementing it.

A few additional thoughts:

1. I knew exactly what type of question to ask it to get the right answer (i.e. if someone used a different prompt, maybe they would get a different answer).

2. I knew immediately that the answer it gave was what we needed to implement. Some parts of the answer were not helpful or were misleading, and I was able to disregard those parts. Maybe someone else would have to take more time figuring it out.

I imagine future versions of GPT will be better at both points.


This is exactly the kind of post that folks who say "oh, it won't take our jobs - our jobs are safe (way too advanced for silly AI)" need to be reading. You're an expert, and it answered an expert question, and your colleagues couldn't do it either... after a few days... and this is just early days.


I don't think this is the takeaway. I'm in a similar situation to the GP, but the crux is that we still need people to make the decisions on what needs to happen - the computer 'just' helps with the 'how'. You need to be a domain expert to be able to ask the right questions and pick out the useful parts from the results.

I'm also not so worried about 'oh but the machines will keep getting better'. I mean, they will, but the above will still remain true, at least until we get to the point where the machines start to make and drive decisions on their own. Which we'll get to, but by that point, I/we will have bigger problems than computers being better at programming than I am.


I look at it differently. If what we've been writing can be replaced by a machine, that leaves us with coming up with more creative solutions. We can now spend our time more usefully, I think!


I use GPT-4, it's like pairing up with a junior developer who read all the documentation.

I follow the same steps as always to design the code, but when it's time to implement something, I ask the bot to do it, then I review it and move on to the next function.


"pairing up with a junior developer who read all the documentation" is a really good description.


Exactly. You still have to command it.


To give an example, I had 3 things on my hobby to-do list that fit together: learning some data compression algos, learning Golang, and testing out ChatGPT/Copilot coding efficiency. So I downloaded slides and exercises from a course at my old uni, and started implementing the exercises in VSCode with Copilot. Whenever I had a question, I would consult ChatGPT. Whenever my Copilot code didn't work, I'd go through it line by line to see what was wrong and paste it into ChatGPT to see if it knew what was wrong.

My experience:

- Copilot did a decent job at suggesting functions when I typed out comments. It got progressively worse as the compression algorithms got less common (e.g. Huffman vs. entropy coding). The smaller the functions, the more manageable it was. <- You need to write good tests, and you want to step carefully through every line of code it writes.

- ChatGPT got most things right, and a few things really wrong. It was always super confident. <- Super dangerous if you don't double-check; basically only usable as a refresher on a topic you are already good at.

- If given a piece of code and asked "How would you suggest to refactor this?", ChatGPT gave mostly very useful ideas. <- This is something that I will keep in my workflow.

- The bigger the project got, the worse the overall codebase looked. It became a mix of styles, pretty inconsistent, more subtle bugs were introduced that took me a while to figure out. (The nice thing about lossless compression is that you know if the code is right by using it)

- My "productivity", measured as how many lines of code "I" wrote, was way beyond anything I could usually reach. I do also feel like I got a very quick start into Golang with this, much faster and broader than reading documentation or doing a getting-started tutorial. That said, that knowledge now definitely needs to be refined in order to use the appropriate concept at the right time. Comparing my code with more professional Golang code bases, I spot a lot of things that need to be improved.
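As a touchstone for the Huffman part of those exercises, the core code-length computation is compact enough to test line by line, as the comment above recommends. A sketch (in Python for brevity, although the exercises were in Go; the function name is mine, not from the course):

```python
import heapq
from collections import Counter

def huffman_code_lengths(data: str) -> dict:
    """Return each symbol's Huffman code length for `data`.
    (A full coder must also emit the actual codes and the tree.)"""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate input: a single symbol still needs 1 bit
        return {next(iter(freq)): 1}
    # Heap entries: (weight, unique tiebreaker, {symbol: depth so far}).
    # The tiebreaker keeps tuple comparison from ever reaching the dict.
    heap = [(w, i, {sym: 0}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)  # merge the two lightest subtrees;
        w2, _, d2 = heapq.heappop(heap)  # their symbols get one level deeper
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

Code lengths are easy to assert on (a more frequent symbol never gets a longer code than a rarer one), which makes this the kind of small, verifiable unit where Copilot-style suggestions are safest.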


ChatGPT with GPT-3.5 or GPT-4? Big difference.


Posting some recent prompts as examples as I've started to use it more for coding:

- I want to create a Google Docs add-on using Apps Script which has an interface sidebar. Please describe the necessary steps to create a sidebar interface. The sidebar interface should have a title with the text "Google Docs JSON Styles", it should have a large textbox which can contain multiple lines of text (fill it with lorem ipsum), and it should have 4 buttons placed horizontally underneath the text box, with the captions "Save", "Apply", "Copy", "Paste".

This produced the necessary JavaScript + HTML + external steps to set up the project.

- I'm building a website using a React frontend hosted on example.com with a Django REST API backend hosted on api.example.com - Authentication is handled via Firebase Auth, currently using an Authorization header provided for API calls. Now I want to have authenticated links such that the user can click a link to access an authenticated download via the API. What mechanism can I use to create a link to an API endpoint which will require the user to be authenticated?

This gave me the frontend and backend code necessary to set up authenticated links, with several alternate approaches following further prompting.
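For context, one standard mechanism for this problem (a plain `<a href>` link can't carry an Authorization header) is a short-lived signed query string. A minimal HMAC sketch, in Python for illustration; the function names and secret below are hypothetical, not from the thread:

```python
import hashlib, hmac, time
from urllib.parse import urlencode

SECRET = b"server-side-secret"  # hypothetical; never commit a real secret

def signed_download_url(path: str, user_id: str, ttl: int = 300) -> str:
    """Build a time-limited link the backend can verify without headers."""
    expires = int(time.time()) + ttl
    msg = f"{path}|{user_id}|{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    query = urlencode({"user": user_id, "expires": expires, "sig": sig})
    return f"https://api.example.com{path}?{query}"

def verify(path: str, user_id: str, expires: int, sig: str) -> bool:
    """Recompute the signature server-side and reject expired/tampered links."""
    if time.time() > expires:
        return False
    msg = f"{path}|{user_id}|{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

In the setup described above, the Django view serving the download would do the `verify` step; alternatives include one-time tokens minted per download, or passing a short-lived Firebase ID token as a query parameter.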

- Given an owner ID, repository ID, commit ID, and private access token, give me a URL that lets me download a ZIP archive of a GitHub repository.

In all of these examples I received the desired result with helpful descriptions.
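For reference, that last prompt maps onto GitHub's REST endpoint for downloading a repository archive, `GET /repos/{owner}/{repo}/zipball/{ref}`, with the token sent as a header rather than embedded in the URL. A small sketch (the helper name is mine):

```python
import urllib.request

def github_zip_request(owner: str, repo: str, ref: str, token: str) -> urllib.request.Request:
    """Build a request for GitHub's 'download repository archive (zip)' endpoint."""
    url = f"https://api.github.com/repos/{owner}/{repo}/zipball/{ref}"
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {token}",  # required for private repos
            "Accept": "application/vnd.github+json",
        },
    )
```

Since the documented approach puts the token in the `Authorization` header, "give me a URL" is slightly ill-posed for a private repo, and a good answer has to point that out.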

As a counter-example:

- Write a program in Brainfuck that outputs the string "CHATGPT".

In this case, the result was the Brainfuck program for "Hello World", with a step-by-step breakdown confidently explaining why it would output "CHATGPT", while being entirely wrong.
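For contrast, a correct (if wildly inefficient) Brainfuck program for any fixed string can be generated mechanically, which makes this failure easy to verify. A generator plus just-enough interpreter, sketched in Python (all names mine):

```python
def bf_print(text: str) -> str:
    """Generate naive Brainfuck that prints `text`: for each character,
    increment a fresh cell up to its code point, output it, move right."""
    return "".join("+" * ord(ch) + ".>" for ch in text)

def bf_run(program: str) -> str:
    """Interpreter for the subset the generator emits (no loops, no input)."""
    tape, ptr, out = [0] * 30000, 0, []
    for op in program:
        if op == "+":
            tape[ptr] = (tape[ptr] + 1) % 256  # cells wrap at one byte
        elif op == ">":
            ptr += 1
        elif op == ".":
            out.append(chr(tape[ptr]))
    return "".join(out)
```

Hand-written Brainfuck instead uses loops to multiply cell values up, which is presumably the "Hello World" pattern ChatGPT parroted without adapting.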


For me it’s pasting a snippet of code I’m struggling to debug and asking it to help. Sometimes it works, sometimes it comes up with an answer that is also buggy or doesn’t work.

I find it much more useful for soft things like writing a complaint to a vendor for me, dealing with customer service, or cover letters from a job ad that I only have to slightly tweak. Takes me five minutes instead of 30+ and I got a few compliments about my “outstanding” cover letter.


Wrong, gpt4 is very capable of writing “new” and complex code.

Also gpt4 >> gpt3.5 for coding.


> gpt4 >> gpt3.5

Leftwise bitshift would make gpt4 worse though :P


"Also gpt4 >> gpt3.5 for coding."

This is not my subjective experience. It's better, sometimes somewhat, but also much slower. I'm not sure which one is the better tradeoff for real work. I have it on 4 by default now, don't want to think about this for every question I ask, but for my work, 3.5 was fine.


Do you have a non-trivial example of this?

I tried to make GPT-4 create an algorithm for an enhanced tic-tac-toe game (think Wordle vs Quordle to somewhat visualize the difference) and it's failing miserably.


There are a lot of examples in this paper: https://arxiv.org/abs/2303.12712

Also it’s not that it writes perfect code straight away and you never have to re-prompt or change anything manually. But it’s still an insane speed-up.


Do you know if there are any papers published by people who don't work at Microsoft, so they may actually be more objective?


Just try it yourself, it’s not that hard…


Welcome to the age of the unbenchmarkable product. You say one thing, someone says another, and here we are...


There are programming benchmarks though (data contamination issues left aside)


Yes, which is funny, because there was an article on here with some pretty hard data which showed not much difference, or maybe even worse performance, for GPT-4 compared to 3.5-turbo.

You'll likely refute that as your mind is already made up, but there you go, another conflicting and confusing data point.


What are you talking about? Just compare the output of 3.5 vs. 4 yourself for a problem you are interested in; it's a single click in the interface. Do you always need a study or an "expert" to make up your mind?


Benchmarks are good. You may be a less experienced software engineer than others (or maybe more experienced?), and then you will tell me "ChatGPT x is insane bro", but that's only a matter of perspective. A benchmark gives us facts outside of our own experience, not opinions.

I'm sure ChatGPT4 would likely agree :)



