If Hacker News had existed during the dawn of personal computing, you bet your ass that every single CPU release by Intel or IBM or whoever would have been front-page news, and everyone would have been talking about how they were going to use computers to automate all their paperwork, etc. etc.
You're projecting a deficiency of the human brain onto computers. Computers have advantages that our brains don't (vast, near-perfect memory); there's no reason to think we should try to recreate how humans do things.
Why would you bother with all these summaries if you could just read and remember the code perfectly?
Because the context window of the LLM is limited, similar to a human's. That's the entire point of the article. If the LLM has limitations similar to humans', then we give it similar workarounds.
Sure, you can say that LLMs have unlimited context, but then what are you doing in this thread? The title on this page is saying that context is a bottleneck.
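To make the "similar workarounds" point concrete, here's a toy sketch of summarizing-as-a-workaround: when a codebase won't fit in the context budget, keep short per-file summaries in place of full text. The token counter and summarizer below are crude stand-ins I made up for illustration, not anything from the article.

```python
def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: count whitespace-separated words.
    return len(text.split())

def summarize(name: str, text: str) -> str:
    # Stand-in for an LLM-generated summary: name plus the first line.
    first_line = text.strip().splitlines()[0] if text.strip() else ""
    return f"{name}: {first_line[:60]}..."

def pack_context(files: dict[str, str], budget: int) -> list[str]:
    """Include full files while they fit the budget; fall back to summaries."""
    context, used = [], 0
    for name, text in files.items():
        cost = count_tokens(text)
        if used + cost <= budget:
            context.append(text)
            used += cost
        else:
            summary = summarize(name, text)
            context.append(summary)
            used += count_tokens(summary)
    return context
```

Same move a human makes: read what fits, skim notes for the rest.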
Yeah, once again, you need the right context to override what's in the weights. The model may not know how to use the Responses API, so you need to provide examples in context (or tools to fetch them).
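A minimal sketch of that idea: prepend current API examples to the prompt so the in-context information overrides stale knowledge in the weights. The documentation snippet and prompt format here are illustrative placeholders, not real API docs.

```python
# Hypothetical doc snippet showing a newer API the model may not know about.
DOC_SNIPPET = """\
Example:
    response = client.responses.create(model="gpt-x", input="hello")
    print(response.output_text)
"""

def build_prompt(task: str, docs: str = DOC_SNIPPET) -> str:
    """Combine up-to-date examples with the user's task into one prompt."""
    return (
        "Use ONLY the API shown in the documentation below, "
        "even if it differs from what you remember.\n\n"
        f"--- documentation ---\n{docs}--- end documentation ---\n\n"
        f"Task: {task}"
    )

prompt = build_prompt("Write a function that calls the API and returns the text.")
```

The alternative mentioned above (tools to fetch docs) just moves this same step behind a tool call instead of hardcoding the snippet.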
This is just an issue with people who expect AI to solve all of life's problems before they get out of bed. They don't realise they have no idea how AI works or what it produces, and decide "it stops working because it sucks" instead of "it stops working because I don't know what I'm doing."
I mean, in a way Xcode is an incredibly valuable product, considering anyone who wants to publish to the App Store (where they collect insane rent for zero effort) needs to interact with it.
I can confirm this. I have a relative who works for Apple. Made the mistake of complaining in front of her about their take of sales (they make six figures annually from my app). She went off on how much they give us (including Xcode) for the privilege of having an app on their phones.
Still don't have a contact there. Would have thought I'd at least get someone there to talk to if issues come up.
The average person doesn't need to do that. The benchmark for "is this response accurate and personable enough" on any basic chat app has been saturated for at least a year at this point.
one of you uses more test-time compute before giving your answer :)
Is the first person actually incapable of thinking deeply, or do they just prefer the other way / have a lower bar for what's worth saying?
I find I like to sit back and think in a group conversation and chime in at the right moment with something insightful, whereas other people just do next-token-prediction, stream-of-consciousness blabbering.
They can think deeply, it just takes active effort, whereas the broad-and-quick approach comes naturally. Sometimes sitting back and listening can be fun for them.
You can try getting into improv comedy to develop this sort of skill. I'm also generally a slow thinker, but I don't actually think we think slower; I think we set too high a bar for what we allow ourselves to say. We're afraid of making a mistake or saying something stupid, whereas most people just blurt out the first thought that percolates up from their subconscious.
You just need some perspective.