
> Gemini's 1M token context window is really unbeatable.

How does that work in practice? Swallowing a full 1M token context window would take on the order of minutes, no? Is it possible to do this for, say, an entire codebase and then cache the results?



In my experience with Gemini it definitely does not take minutes. I think that's a big difference between Claude and Gemini. I don't know exactly what Google is doing under the hood there; I don't think it's just quantization, but it's definitely much faster than Claude.

Caching a code base is tricky, because whenever you modify the code base you invalidate parts of the cache: since the model is autoregressive, any changed token invalidates the cached state for everything that comes after it.
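A rough way to picture it (a minimal sketch with made-up token IDs, not tied to any particular provider's caching API): only the prefix up to the first changed token can be reused.

  # Sketch: with an autoregressive model, a prompt cache keyed on token
  # prefixes only helps up to the first edited token; everything after an
  # edit gets different keys/values and must be recomputed.

  def reusable_prefix_len(old_tokens: list[int], new_tokens: list[int]) -> int:
      """Length of the cached prefix that survives an edit to the prompt."""
      n = 0
      for a, b in zip(old_tokens, new_tokens):
          if a != b:
              break
          n += 1
      return n

  old = [101, 57, 2203, 88, 911, 34]   # tokens of the original file
  new = [101, 57, 2203, 99, 911, 34]   # one token changed mid-file
  print(reusable_prefix_len(old, new))  # 3 -> everything after position 3 is recomputed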


Right now this is just in the AI Studio web UI. I have a few command-line scripts to put together a file or two and drop those in. So far I've put in about 450k tokens of stuff there, and then over a very long conversation and iterations on a bunch of things I built up another 350k tokens in that window.
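The kind of script I mean is nothing fancy; something along these lines (the paths and header format here are hypothetical, just for illustration) concatenates a few files into one pasteable blob:

  # Minimal sketch of a "bundle files for pasting into the web UI" script.
  from pathlib import Path

  def bundle(paths: list[str]) -> str:
      parts = []
      for p in paths:
          text = Path(p).read_text(encoding="utf-8")
          parts.append(f"===== {p} =====\n{text}")
      return "\n\n".join(parts)

  if __name__ == "__main__":
      # Hypothetical file list; adjust to whatever slice of the codebase
      # you want in the context window.
      print(bundle(["src/main.py", "src/utils.py"]))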

Then I start over again to clean things out. It's not flawless, but it is surprising what it'll remember from a while back in the conversation.

I've been meaning to pick up some of the more automated tooling and editors, but for the phase of the project I'm in right now it's unnecessary, and the web UI or the Claude app is good enough for what I'm doing.


I’m curious about this as well, especially since all coding assistants I’ve used truncate long before 1M tokens.



