
Every time I’ve ever read a {CLAUDE|GEMINI|QWEN}.md I’ve thought all this information could just be in CONTRIBUTING.md instead.


If I'm writing for a human contributor, I'm gonna have a pretty high bar for the quality of that writing.

An agent on the other hand, one who is in that sweet spot where they're no longer ignorant, and not yet confused... It's nice to have them dump their understanding to agent_primers/subsystem_foo.md for consumption by the next agent that touches that subsystem. I don't usually even read these until I suspect a problem in one. They're just nuggets of context transfer.


Yes! I want an option to always add README.md to the context; it would force me to have a useful, up-to-date document about how to build, run, and edit my projects.


You can include an instruction in your prompt for it to read the README!
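If you drive the model yourself rather than hoping the agent stumbles onto the file, you can splice the README into every prompt. A minimal sketch, assuming a hypothetical `build_prompt` helper of your own (the agent CLI itself is not shown):

```python
from pathlib import Path

def build_prompt(task: str, readme_path: str = "README.md") -> str:
    """Prepend the project README to a task so it is always in context.

    Hypothetical helper: the surrounding agent/API call is up to you.
    """
    readme = Path(readme_path).read_text(encoding="utf-8")
    return f"Project README:\n\n{readme}\n\nTask: {task}"
```

Whether the model then actually follows the README is a separate question, but at least it has seen it.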


Ultimately, if this stuff is actually intelligent, it should be using the same sources of information that we intelligent beings use. It feels silly to have to jump through all these hoops to make it work today.


It’s a crapshoot whether it actually does.


> It would force me to have a useful, up to date document about how to build, run, and edit my projects.

Not really: our AI agents are probably smart enough to make sense of even somewhat bad instructions.


They’re definitely not: Claude and all the other agents frequently forget the build and test commands present in CLAUDE/etc.md for my various repos (even though most of them were initialized by the AI).


Whether Claude and co understand is probably not a great proxy for whether your docs are good for humans.


Hmmm, in my experience, if something like documentation confuses the current SOTA LLMs, then it will confuse the average developer for sure.


I had the other direction in mind: you can put together some text that the average LLM will figure out, but that will be really annoying for your average developer.

E.g. if you write your instructions in a mixture of base64, traditional Chinese, Morse code, and Polish, the LLM will still figure it out.
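To make the point concrete, here is a sketch of one such machine-readable-but-human-hostile instruction: a Polish sentence (meaning roughly "Run the tests with 'make test'") wrapped in base64. A decoder recovers it trivially; a human skimming the docs would just see noise. The sentence itself is an invented example:

```python
import base64

# Polish for: "Run the tests with the command 'make test'".
instruction_pl = "Uruchom testy poleceniem 'make test'"

# Encode it the way the obfuscated docs would ship it...
encoded = base64.b64encode(instruction_pl.encode("utf-8")).decode("ascii")

# ...and show that decoding is lossless.
decoded = base64.b64decode(encoded).decode("utf-8")
print(encoded)
print(decoded)
```

An LLM (or any tool) round-trips this without effort, which is exactly why "the LLM can parse it" is a weak bar for documentation quality.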


Not the case at all. AI agents will happily turn your bad ideas into code.


Non sequitur?

I am talking about LLMs figuring out how to build your project with some bad and incomplete instructions plus educated guessing.



