Hacker News | zmj's comments

Your claude.md (or equivalent) is the best way to teach them. At the end of any non-trivial coding session, I'll ask for it to propose edits/additions to that file based on both the functional changes and the process we followed to get there.


How do I distill 30 years of experience/knowledge into a Claude.md file? People learn, LLMs don't - end of story.


> People learn, LLMs don't - end of story.

That's not the end of the story, though. LLMs don't learn, but you can provide them with a "handbook" that they read in every time you start a new conversation with them. While it might take a human months or years to learn what's in that handbook, the LLM digests it in seconds. Yes, you have to keep feeding it the handbook every time you start from a clean slate, and it might have taken you months to get that handbook into the complete state it's in. But maybe that's not so bad.


The good thing about this process is that such a handbook functions as documentation for humans too, if properly written.

Claude is actually quite good at reading project documentation and code comments and acting on them. So it's also useful for encouraging project authors to write such documentation.

I'm now old enough that I need such breadcrumbs around the code to get context anyways. I won't remember why I did things without them.


The same way you program...

Break your knowledge into relevant chunks for Claude, so that only what's useful ends up in its context window.
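One way to do that chunking is to keep the top-level file short and point at focused docs that get pulled in only when relevant. A purely hypothetical sketch (all file names and sections invented for illustration):

```markdown
# CLAUDE.md

## Conventions
- Run `make test` before proposing a commit.
- Prefer small, reviewable diffs.

## Deeper context (read only when the task touches it)
- Payment flows and invariants: docs/payments.md
- Schema migration process: docs/migrations.md
- Deployment and rollback: docs/deploys.md
```

The point is that the 30 years of experience live in the linked docs, while the always-loaded file stays small.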


That's because f2's result could depend on whether f1 has executed.
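The original thread isn't shown here, but the point generalizes: if f1 has side effects that f2 reads, evaluation order is observable. A minimal hypothetical example:

```python
# Shared state mutated by f1 and read by f2
state = {"initialized": False}

def f1():
    # side effect: mutates shared state
    state["initialized"] = True

def f2():
    # f2's result depends on whether f1 has already executed
    return "ready" if state["initialized"] else "not ready"

assert f2() == "not ready"   # before f1 runs
f1()
assert f2() == "ready"       # after f1 runs
```

Because of this, the two calls can't be reordered or treated as independent.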


I see many people learning to use chatbots as practical tools without understanding the process that produces their output. They don't anticipate the way that output will be shaped by every detail of their request. This is an attempt to bridge that conceptual gap.


This article is talking about single-writer, single-reader storage. I think it's correct in that context. Most of the hairy problems with caches don't come up until you're multi-writer, multi-reader.
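A rough illustration of why the single-writer case stays simple: with exactly one writer, every write can update the cache in-line, so the cache can never serve data the writer has since changed. A hypothetical sketch (not from the article):

```python
class SingleWriterCache:
    """Write-through cache that is safe only under a single writer:
    every write updates the cache before returning, so no other
    writer can make the cached copy stale."""

    def __init__(self, store):
        self.store = store   # backing storage, e.g. a dict or DB wrapper
        self.cache = {}

    def write(self, key, value):
        self.store[key] = value   # durable write first
        self.cache[key] = value   # cache updated in-line; cannot go stale

    def read(self, key):
        if key not in self.cache:
            self.cache[key] = self.store[key]   # cold read falls through
        return self.cache[key]
```

With multiple writers, the in-line update no longer protects readers, and you're into invalidation protocols, which is where the hairy problems start.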


I recently reencoded my photography archive to webp. It's a static site hosted from S3. I was pretty happy with the size reduction.


There's already some experimental evidence that LLMs can be more persuasive than humans in the same context: https://www.science.org/content/article/unethical-ai-researc...

I don't think anyone can confidently make assertions about the upper bound on persuasiveness.


I don't think there's a confident upper bound. I just don't see why it's self-evident that the upper bound is beyond anything we've ever seen in human history.


> Someone at AWS probably thought about this, easy to provision serverless Postgres, and they just didn’t build it.

AWS is working on this as well: https://aws.amazon.com/blogs/database/introducing-amazon-aur...


DSQL is genuinely serverless (much more so than "Aurora Serverless"), but it's a very long way from vanilla Postgres. Think of it more like a SQL version of DynamoDB.


I implemented rsync's binary diff/patch in .NET several years ago: https://github.com/zmj/rsync-delta

It's a decent protocol, but it has shortcomings. I'd expect most future use cases for that kind of thing to reach for a content-defined chunking algorithm tuned towards their common file formats and sizes.


If you described today's AI capabilities to someone from 3 years ago, that would also sound like science fiction. Extrapolate.


Pretty similar story in .NET. Make sure your inner loops are allocation-free, then ensure allocations are short-lived, then clean up the long tail of large allocations.
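The first step of that progression applies in any GC'd language. A language-agnostic sketch in Python of removing per-iteration allocations from an inner loop (function names invented for illustration):

```python
def dot_alloc(rows, weights):
    # allocates a fresh temporary list on every iteration
    total = 0.0
    for row in rows:
        products = [x * w for x, w in zip(row, weights)]  # new list per row
        total += sum(products)
    return total

def dot_reuse(rows, weights):
    # one buffer, allocated once before the loop and reused throughout
    # (assumes every row has the same length as weights)
    buf = [0.0] * len(weights)
    total = 0.0
    for row in rows:
        for i in range(len(weights)):
            buf[i] = row[i] * weights[i]
        total += sum(buf)
    return total
```

Both compute the same result; the second just never gives the GC per-iteration garbage to collect.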


.NET is far more tolerant to high allocation traffic since its GC is generational and overall more sophisticated (even if at the cost of tail latency, although that is workload-dependent).

Doing huge allocations which go to LOH is quite punishing, but even substantial inter-generational traffic won't kill it.

