Stack Overflow is now a proprietary database too. Given the choice between that and a proprietary robot offering 10x the clarity and quality, I'd choose the robot. Besides, not all LLMs are proprietary in the same way Claude is: many, like Gemma, have publicly available weights. I can understand if a file of floating-point numbers feels like a de facto proprietary tool. But if you're smart, you'll look at this instead as an opportunity to invent the tools that will make this knowledge accessible.

I've been working with Mozilla to build a "fantasy mode" feature for Firefox. It works similarly to incognito mode, except a local LLM generates a synthetic version of the world wide web on the fly. This lets you explore the knowledge contained in LLM weights through an intuitive, familiar browser-based interface (a rough sketch of the idea is below). So far it's about as fast as 56k dialup was in the 1990s, but as microprocessors become faster, I believe we'll be able to generate artificial realities of useful information we can't live without, superior to Stack Overflow today.
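To make that concrete, here is a minimal sketch of the proxy idea, assuming llama-cpp-python and some local GGUF model. The model path, prompt wording, and port are placeholders, it handles plain HTTP only, and it is not the actual Firefox implementation:

    # A local HTTP proxy that, instead of fetching the real page,
    # asks a local LLM to hallucinate one for the requested URL.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from llama_cpp import Llama

    llm = Llama(model_path="./models/gemma-2b.gguf")  # placeholder model path

    def dream_page(url: str) -> str:
        """Ask the local model to invent a plausible page for this URL."""
        prompt = f"Write a short, plausible HTML page for the URL {url}:\n<html>"
        out = llm(prompt, max_tokens=512, stop=["</html>"])
        return "<html>" + out["choices"][0]["text"] + "</html>"

    class FantasyProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            # When the browser uses this as its HTTP proxy, self.path
            # carries the full requested URL (plain HTTP only; HTTPS
            # would need CONNECT handling, omitted here).
            body = dream_page(self.path).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), FantasyProxy).serve_forever()

Point the browser's HTTP proxy at 127.0.0.1:8080 and every page you "visit" is dreamed up on the fly from the model's weights.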
Forgive me if I have missed something, but how is a synthetic version of the web (which sounds interesting and impressive in its own right) in any way comparable to a vast, indexed repository of curated and organized technical knowledge shared by experts with nuanced experiences and insights?
> but as microprocessors become faster, I believe we'll be able to generate artificial realities of useful information we can't live without, superior to Stack Overflow today.
This sounds great! There isn't enough slop on the web as it is, so this seems like a good way to experience browsing without any non-AI-generated nonsense getting in the way!