Hacker News | gngoo's comments

Yes no shit.

To me it feels like I’m in the camp of people who have already figured it out. And I have now learned the hard way that it’s almost impossible to teach others (I organized several meetups on the topic).

The ability seems like pure magic. I know there are others who now find it easy to build even complex software with AI, delivering project after project to clients at record speed and at no less quality than before. But the majority of devs won’t even believe it’s remotely possible, which also isn’t helping this style of building/programming mature.

I wouldn’t even call it vibe coding anymore. I think the term hurts what it actually is. For me it’s just a huge force multiplier, maybe 10-20x of my ability to deliver with my own knowledge and skills in web dev work.


I feel like I’m in your camp, to my own surprise.

I’ll try my hand at some guidelines: the prime directive would be “use the right ai tool for the right task”. Followed by “use a statically typed language”. Followed by “express yourself precisely in English. You need to be able to write like a good technical lead and a good product manager.”

With those out of the way:

Completions work when you’re doing lots of rote moderately difficult work within established patterns. Otherwise, turn them off, they’ll get in the way. When they do work, their entire point is to extend your stamina.

Coding agents work when, at worst, a moderately novel vertical needs implementation. New architecture and patterns need to be described exhaustively with accurate technical language. Split up the agent’s work into the same sort of chunks that you would do between coffee breaks. Understand that while the agent will make you 5x faster, you’ll still need to put in real work. Get it right the first time: misuse the agent and straightening out the mistakes will cost more time than if you hadn’t used the agent at all.

If novelty or complexity is high, use an advanced reasoning model as interactive documentation, a sparring partner, and then write the code by hand. Then ask the reasoning model to critique your code viciously. Have the reasoning model configured for this role beforehand.
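A minimal sketch of what “configured beforehand” can mean in practice: a standing system prompt that puts the model in the vicious-reviewer role before any code is pasted in. The prompt text and the OpenAI-style message shape below are illustrative assumptions, not a prescription:

```python
# Hypothetical sketch: persist the critic role as a reusable system prompt,
# so every session starts with the model already configured as a reviewer.
CRITIC_SYSTEM_PROMPT = (
    "You are a vicious but fair senior code reviewer. Attack correctness, "
    "concurrency, error handling, and API design. Never soften criticism."
)

def build_critique_messages(code: str, context: str) -> list[dict]:
    """Assemble chat messages for one critique round (OpenAI-style shape)."""
    return [
        {"role": "system", "content": CRITIC_SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Project context:\n{context}\n\nCritique this code:\n{code}"},
    ]
```

The point is that the role lives in configuration rather than in each ad-hoc prompt, so critiques stay consistently harsh across sessions.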

These things together have added up to the biggest force multiplier I’ve encountered in my career.

I’m very much open to other heuristics.


> If novelty or complexity is high, use an advanced reasoning model as interactive documentation, a sparring partner, and then write the code by hand. Then ask the reasoning model to critique your code viciously. Have the reasoning model configured for this role beforehand.

Does this mean basically "Opus"? What goes into "Have the reasoning model configured for this role beforehand."?


Just record yourself doing it and post online. If the projects are indeed complex and you’ve found a way to be 20x more productive people will learn from it.

The problem is not having any evidence or basis on which to compare claims. Alchemists claimed for centuries to synthesize gold, if they only had video we could’ve ruled that out fast.


For which reason exactly? Everyone will catch up to this eventually.


It's just hard to believe something is real when it's not reproducible.


The spec, or prompts system, whatever you call it, is more like a recipe than code. It doesn't automatically generate the dishes; a good cook is still needed.


Yes, and culinary schools exist and create new cooks in a reproducible way. Why can't coding with AI be taught?


I disagree with the OP that AI coding can't be taught. My answer to why so many people have trouble would be that they refuse to learn. I see tons of people who are insanely biased against AI, and when they try to use it they give up after the first go (having tried a horrible application of AI, like making a functioning production app from a single prompt; no one using AI for work uses it like that). They also don't take any suggestions on using it better because "I've tried it before and it sucked."

If you asked me months ago whether "prompt engineering" was a skill I'd have said absolutely not, it's no different than using stack overflow and writing tickets, but having watched otherwise skilled devs flounder I might have to admit there is some sort of skill needed.


FWIW, some people need training on using stack overflow and writing good tickets


Because LLMs aren't calculators. They're non-deterministic. Recipes and dishes are predictably reproducible; AI output isn't.
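The determinism point can be made concrete with a toy sampler: at temperature 0 the highest-scoring token always wins (calculator-like), while any positive temperature makes the output a draw from a distribution. This is a simplified sketch of how LLM decoding works in general, not any particular model's implementation:

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float,
                 rng: random.Random) -> str:
    """Greedy (deterministic) at temperature 0; stochastic otherwise."""
    if temperature == 0:
        return max(logits, key=logits.get)  # same input -> same output
    # Softmax with temperature, then a weighted random draw.
    weights = [math.exp(v / temperature) for v in logits.values()]
    return rng.choices(list(logits), weights=weights, k=1)[0]

scores = {"soup": 2.0, "stew": 1.5, "salad": 0.5}
# Greedy decoding collapses to a single answer regardless of seed;
# temperature sampling typically varies across seeds.
greedy = {sample_token(scores, 0, random.Random(s)) for s in range(20)}
sampled = {sample_token(scores, 1.0, random.Random(s)) for s in range(20)}
```

Real models add many layers on top of this, but the reproducibility gap comes down to that weighted draw.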


I fully expect that in 1-2 years SWE curricula will have AI coding as a major feature. The question I have is whether students will be required to do their first year, or their first assignments in a given course, without AI.

My ex teaches UX. We were talking about AI in academia last week. She said that she requires students to not use AI on their first assignment but on subsequent ones they are permitted to.


your problem domain is greenfield freelancing if i am reading you correctly?

The tarpit of AI discussion is that everybody assumes that their local perspective is globally applicable. It is not.


This.

I work in a large corpo eco system of products across languages that talk to a mess of micro and not so micro services.

AI tools are rarely useful out of the box in this context, mostly because they can't fit the ecosystem into their context. I think I would need 10 agents or more for the task.

We have good documentation, but just fitting the documentation into context alongside a microservice is a tight fit. Most services would need one agent for the code (and even then it'd only fit 10% in context), and one for the docs.

Trying to use them without sufficient context, or trying to cram the right 10% into context, takes more effort than just coding the feature, and produces worse results with the worst kind of bugs, subtle ones borne from incorrect assumptions.
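The arithmetic behind the "only fit 10% in context" complaint is easy to sketch. The numbers below (roughly 4 characters per token, a 200k-token window) are common ballpark assumptions and vary by model:

```python
CHARS_PER_TOKEN = 4        # rough average for English prose and code
CONTEXT_WINDOW = 200_000   # tokens; model-dependent assumption

def fraction_of_code_that_fits(doc_chars: int, code_chars: int) -> float:
    """After loading the docs into context, what fraction of the codebase still fits?"""
    doc_tokens = doc_chars // CHARS_PER_TOKEN
    remaining = max(CONTEXT_WINDOW - doc_tokens, 0)
    code_tokens = code_chars // CHARS_PER_TOKEN
    return min(remaining / code_tokens, 1.0) if code_tokens else 1.0

# e.g. 400k chars of docs alongside a 4M-char service: only 10% of the code fits.
```

With numbers like these, a mid-sized service plus its documentation exhausts the window almost immediately, which is why retrieval or multiple agents become necessary.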


If contracting with bigger companies and enterprises, where I am on 6-12 month projects and even longer retainers, counts as "greenfield freelancing", then sure. I don't really engage in projects smaller than that, because they don't pay well.


> For me it’s just a huge force multiplier, maybe 10-20x of my ability to deliver with my own knowledge and skills on a web dev basis.

I can tell you that this claim is where a lot of engineers are getting hung up. People keep saying that they are 10, 20 and sometimes even 100x more productive but it's this hyperbole that is harming that building style more than anything.

If anyone could get 10 to 20 years' worth of work done in 1 year, it would be so obvious that you wouldn't even have to tell anyone. Everyone would just see how much work you got done and ask, "How did you do two decades' worth of work this year?!"


I agree. I'd say it's simply that 20 years of software development isn't bottlenecked by the ability to churn out code.


Yep, plus all these companies going all-in on AI would have already laid off 95% of their software engineers.


I've noticed a great many programmers, very good programmers at that, who completely underestimate how fast things are moving. They're natural skeptics who checked out ChatGPT when it was released, maybe checked out some other models a year later, but eventually wrote it off as hype and continue to do things their way. You know, artisanal code and all that.

I think that if you willfully ignore the development, you might be left in the dust. As you say, it is a force multiplier. Even average programmers can become extremely productive, if they know how to use the AI.


What sort of code are you writing? I find a lot of my stuff requires careful design, refactoring an existing system to work in a new way.

If the code I was writing was, say, small websites all the time for different clients, I can see it being a big improvement. But iterating on a complex existing platform, I’m not so sure that AI will keep the system designed in a maintainable and good way.

But if your experience is with the same sort of code as mine, then I may have to re-evaluate my judgments.


Not websites, but rather bigger systems. The largest client I work with now has 150k daily active users, for which I mostly put together new backend features. The website itself is completely outsourced to another party using Webflow. I am building the same stuff I have been building over the past 10 years of my career. I don't generally build small websites, or any "website" at all, unless it's for relatives or friends.


Yep! I work on multiple client projects. And while one agent is running in one project, I’m reviewing and writing down the task for another. Generally I just do this 2-3 hours per day; trying to block this time. And then go outside and enjoy free time.


Enjoy it while it lasts. Once this becomes the norm, the free time will diminish.


So China will start funding Russia to prolong the war?


China wants the EU to stop funding Ukraine and let Russia win the war on its own; this is the most advantageous outcome for China: Russia loses military strength doing the fighting, but not enough to collapse, and thus becomes more reliant on China. Yet Russia stays strong enough to be a concern for the West, so that the West cannot focus all its efforts on China.


I don't understand that last part. The US is a big country; it can walk and chew gum at the same time. We're spending some money and attention on Ukraine, but I don't see how it would affect our position on China if we weren't.

I do think that the US is badly mishandling its relationship with China, but it is a deliberate choice, not an oversight.

What would we be doing if we had more focus?


They've been doing that already in the past few years.


Likely they will increase their support. It’s a very short distance to this becoming a China/NATO proxy war.


It is actually weird when Russia starts a war only to become a proxy of another bloc in its own war.


I tried it, but I got stuck. I am trying to learn this language, and I know very little of it. So we are having a conversation, and I just get totally lost. It would be cool if it switched back to English and actually taught me what all the words and sentences mean. For now I just closed the app and am going back to my usual language-learning curriculum. Normally I would uninstall the app, but I might try it again in a few weeks.


I’ve grown quite wary of anything YouTubers say they are experts on while actively producing content and streaming as their full-time job.


Is this a community college perhaps? I think we have another problem with this industry moving way too fast for even very well funded colleges to keep up. Let alone a community college. Of course it’s sad, but at the root it feels like we are now preparing CS students for an outdated job market. I don’t know the answer to this. Of course the fundamentals don’t change, but the market is rapidly changing. Who is even hiring juniors except those with impressive projects, from prestigious colleges or with at least 2-3 years of experience?

If 2025 had been my own graduation year, I would have had a bad taste in my mouth, believing that LLMs can do everything I spent 4 years in college for, not knowing that writing code is only a small part of the job (as I would lack the experience).


>Who is even hiring juniors except those with impressive projects, from prestigious colleges or with at least 2-3 years of experience?

Even those students are struggling: https://www.businessinsider.com/tech-degrees-job-berkeley-pr...

https://www.forbes.com/sites/chriswestfall/2025/01/15/when-h...

I do think the market will eventually bounce back. But it's a bloodbath out there right now.


What’s the big deal here? Doesn’t every other app keep logs? I was already expecting they did. Don’t understand the outrage here.


No, apps can be denied access. And people can be disclosing private information.


Every other app on the planet that does not explicitly claim to be E2E encrypted is likely keeping your “private information” readily accessible in some way.


Working on AI myself, creating small and big systems, creating my own assistants and sidekicks, and also seeing progress as well as rewards, I realize that I am not immune to this. Even when I am fully aware, I still have a feeling that some day I'll just hit the right buttons, the right prompts, and what comes staring back at me will be something of my own creation that others see as some "fantasy" I can't steer away from.

Just imagine: you have this genie in the bottle that has all the right answers for you; it helps you in your conquests, career, finances, networking, etc. Maybe it even covers up past traumas, insecurities and what not. And for you the results are measurable (or are they?). A few helpful interactions in, why would you not disregard people calling it a fantasy and lean in even further? It's a scary future to imagine, but not very far-fetched. Even now I feel a very noticeable disconnect between discussions of AI as a developer vs. as a user of polished products (e.g. ChatGPT, Cursor, etc.); you are several leagues separated from (and lagging behind) understanding what is really possible here.


Years ago, in my writings I talked about the dangers of "oracularizing AI". From the perspective of those who don't know better, the breadth of what these models have memorized begins to approximate omniscience. They don't realize that LLMs don't actually truly know anything; there is no subject of knowledge that experiences knowing on their end. ChatGPT can speak however many languages, write however many programming languages, give lessons on virtually any topic that is part of humanity's general knowledge. If you attribute a deeper understanding to that memorization capability, I can see how it would throw someone for a loop.

At the same time, there is quite a demand for a (somewhat) neutral, objective observer to look at our lives outside the morass of human stakes. AI's status as a nonparticipant, as a deathless, sleepless observer, makes it uniquely appealing and special from an epistemological standpoint. There are times when I genuinely do value AI's opinion. Issues with sycophancy and bias obviously warrant skepticism. But the desire for an observer outside of time and space persists. It reminds me of a quote attributed to Voltaire: "If God didn't exist it would be necessary to invent him."


A loved one recently had this experience with ChatGPT: paste in a real-world text conversation between you and a friend without real names or context. Tell it to analyze the conversation, but say that your friend's parts are actually your own. Then ask it to re-analyze with your own parts attributed to you correctly. It'll give you vastly different feedback on the same conversation. It is not objective.


Good to know. It probably makes sense to ask for personal advice as 'for my friend'.


That works on humans too.


"Oracularizing AI" has a lot of mileage.

It's not too much to say that AI, LLMs in particular, satisfy the requisites to be considered a form of divination. ie:

1. Indirection of meaning - certainly less than the Tarot, I Ching, or runes, but all text is interpretive. Words in a Saussurian way are always signifiers to the signified, or in Barthes's death of the author[2] - precise authorial intention is always inaccessible.

2. A sign system or semiotic field - obvious in this case: human language.

3. Assumed access to hidden knowledge - in the sense that LLM datasets are popularly known to contain all the world's knowledge, which necessarily includes hidden knowledge.

4. Ritualized framing - Approaching an LLM interface is the digital equivalent to participating in other divinatory practices. It begins with setting the intention - to seek an answer. The querent accesses the interface, formulates a precise question by typing, and commits to the act by submitting the query.

They also satisfy several of the typical but not necessary aspects of divinatory practices:

5. Randomization - The stochastic nature of token sampling naturally introduces randomness into each response.

6. Cosmological backing - There is an assumption that responses correspond to the training set and indirectly to the world itself. Meaning embedded in the output corresponds in some way - perhaps not obviously - to meaning in the world.

7. Trained interpreter - In this case, as in many divinatory systems, the interpreter and querent are the same.

8. Feedback loop - ChatGPT for example is obviously a feedback loop. Responses naturally invite another query and another - a conversation.

It's often said that sharing AI output is much like sharing dreams: only meaningful to the dreamer. In this framework, sharing AI responses is more like sharing Tarot card readings. Again, only meaningful to the querent. They feel incredibly personalized, like horoscopes, but it's unclear whether that meaning is inherent to the output or simply the querent's desire to imbue it with meaning by projecting their own onto it.

Like I said, I feel like there's a lot of mileage in this perspective. It explains a lot about why people feel a certain way about AI and hearing about AI. It's also a bit unnerving: we created another divinatory practice, and a HUGE chunk of people participate and engage with it without calling it such, simply believing it, mostly because it doesn't look like Tarot or runes or I Ching, even though ontologically it fills the same role.

Notes: 1. https://en.wikipedia.org/wiki/Signified_and_signifier

2. https://en.wikipedia.org/wiki/The_Death_of_the_Author


I'm worried on a personal level that it's too easy to begin to rely on chatgpt (specifically) for questions and such that I can figure out for myself. As a time-saver when I'm doing something else.

The problem for me is -it sucks. It falls over in the most obvious ways requiring me to do a lot of tweaking to make it fit whatever task I'm doing. I don't mind (esp for free) but in my experience we're NOT in the "all the right answers all of the time" stage yet.

I can see it coming, and for good or ill, the thing that will mitigate addiction is enshittification. Want the rest of the answer? Get a subscription. Hot and heavy in an intimate conversation with your dead grandma, and wait, why is she suddenly singing the praises of TurboTax (or whatever paid advert)?

What I'm trying to say is that by the time it is able to be the perfect answer, companion, and entertainment machine, other factors (annoyances, expense) will keep it from becoming terribly addictive.


Sounds to me like a mental/emotional crutch/mechanism to distance oneself from the world/reality of the living.

There are things that we are meant to strive to understand/accept about ourselves and the world by way of our own cognitive abilities.

Illusions of shortcutting through life take all the meaning out of living.


This is going to sound ridiculous, but in the past month I have been working on an AI agent that can write its own AI agents. I can give it a task like: "Please create an AI agent that fetches the weather and sends me an email every morning about some interesting stuff to do." The agent then goes through a loop of requirements gathering (with my input), and then starts building.

It self-references its own documentation, spins up a Docker container and runs it. Alongside, it builds tools and registers both the agents and the tools in a central system, so that it knows what other tools and agents it can use. I am not talking about the theoretical idea of an agent that can build agents; this is actually working. And I know of several other teams building stuff in this direction. It's very fun.

I left behind a very well-paying client to go full-time on this because it's just so much fun. I have connected it to Telegram, so now while I am at the gym I can actually build software. The execution may take anywhere from 10 minutes to an hour, which is time spent writing and testing the code, spinning up a Docker container, fixing stuff, and prompting me to create accounts somewhere and provide an API key. By far it's the most complex project I have worked on, and all my friends and colleagues are slightly going crazy seeing the outputs.
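The central-registry idea described above can be sketched in a few lines: a newly built agent or tool registers itself, and the catalog is what gets rendered into every agent's prompt so it can discover existing capabilities. All names here are hypothetical; this is a toy illustration of the pattern, not the commenter's actual system:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Registry:
    """Central index of agents and tools (illustrative names)."""
    agents: dict[str, str] = field(default_factory=dict)
    tools: dict[str, Callable] = field(default_factory=dict)

    def register_agent(self, name: str, description: str) -> None:
        self.agents[name] = description

    def register_tool(self, name: str, fn: Callable) -> None:
        self.tools[name] = fn

    def catalog(self) -> str:
        """Text rendered into each agent's prompt for capability discovery."""
        lines = [f"agent:{name} - {desc}" for name, desc in self.agents.items()]
        lines += [f"tool:{name}" for name in self.tools]
        return "\n".join(lines)
```

A builder agent would call register_agent once its generated code passes its checks; the next build then sees the new capability in catalog() and can reuse it instead of rebuilding it.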

