I’m guilty of this. I’m trying to be more mindful when using LLM-generated code. It’s mostly a personal issue: I tend to procrastinate and hope the code “just works.”
We need to stay vigilant, otherwise we will pay the cost later by fixing LLM bugs.
First time I've ever heard someone admit this; I've only ever heard people accuse their coworkers of it. This is honestly a very sad thing to hear a professional dev say.
It's also easy to run the 120b on CPU if you have the resources. I had 120b running on my home LLM CPU inference box in about as long as it took to download the GGUFs, git pull, and rebuild llama-server.
I had it running at 40 t/s with zero effort and 50 t/s with a bit of tweaking.
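A minimal sketch of the same idea in Python, using the llama-cpp-python bindings instead of llama-server (it's the same llama.cpp backend underneath); the model filename, context size, and thread count are placeholders you'd adjust for your own box:

    # CPU-only inference on a local GGUF via llama-cpp-python
    # (same llama.cpp backend that llama-server uses).
    from llama_cpp import Llama

    llm = Llama(
        model_path="gpt-oss-120b.Q4_K_M.gguf",  # placeholder filename
        n_ctx=8192,        # context window
        n_threads=32,      # roughly your physical core count
        n_gpu_layers=0,    # 0 = pure CPU; raise this to offload layers to a GPU
    )

    out = llm("Explain what a GGUF file is in one sentence.", max_tokens=64)
    print(out["choices"][0]["text"])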
It's just too bad that even the 120b isn't really worth running compared to the other models that are out there.
It really is amazing what ggerganov and the llama.cpp team have done to democratize LLMs for individuals who can't afford a massive GPU farm worth more than the average annual salary.
2xEPYC Genoa w/768GB of DDR5-4800 and an A5000 24GB card.
I built it in January 2024 for about $6k and have thoroughly enjoyed running every new model as it gets released. Some of the best money I’ve ever spent.
I've seen some mentions of pure-CPU setups being successful for large models using old EPYC/Xeon workstations off eBay with 40+ cores. Interesting approach!
Wow, that's not bad. It's strange; for me it is much, much slower on a Radeon Pro VII (also 16GB, with a memory bandwidth of 1TB/s!) and a Ryzen 5 5600, also with 64GB. It's basically unworkably slow. Also, I only see 100% CPU when I check ollama ps; the GPU is not being used at all :( It's also counterproductive because the model is just too large for 64GB.
I wonder what makes it work so well on yours! My CPU isn't much slower, and my GPU is probably faster.
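For what it's worth, one thing to try: Ollama exposes a num_gpu option (the number of layers to offload), so you can force partial offload through the local API and then check ollama ps again. This is just a sketch; the model tag and layer count are placeholders, and if ROCm doesn't see the card at all it won't help:

    # Ask Ollama explicitly to put some layers on the GPU.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "gpt-oss:120b",        # placeholder tag
            "prompt": "Say hello.",
            "stream": False,
            "options": {"num_gpu": 20},     # layers to offload; tune to fit 16GB VRAM
        },
        timeout=600,
    )
    print(resp.json()["response"])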
AMD basically decided they wanted to focus on HPC and data center customers rather than consumers, so GPGPU driver support for consumer cards has been non-existent or terrible[1].
The Radeon Pro VII is not a consumer card though, and it works well with ROCm. It even has datacenter-"grade" HBM2 memory that most Nvidia cards don't have. Official support has since been dropped, but ROCm of course still works fine. It's nearly as fast in Ollama as my 4090 (which I don't use for AI regularly, but I play with it sometimes).
Why is it hard to set up LLMs? You can just ask an LLM to do it for you, no? If this relatively simple task is already too much for LLMs, then what good are they?
In the case of the GPT-OSS models, the worst (most time-consuming) part of supporting them is the new format they've been trained with, "OpenAI harmony". In my own clients I couldn't just swap in the model and call it a day; I'm still working on getting them to work correctly with tool calling...
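For anyone curious, this is roughly what the harmony format looks like as I understand it; the exact token names below are from my reading of the spec and may be off, so treat it as a sketch, not a reference:

    # Sketch of a harmony-style prompt: every turn is wrapped in special
    # tokens, and assistant output is split into channels (analysis /
    # commentary / final), which is why existing clients need changes.
    prompt = (
        "<|start|>system<|message|>You are a helpful assistant.<|end|>"
        "<|start|>user<|message|>What's the weather in Paris?<|end|>"
        "<|start|>assistant"
    )
    # A tool call comes back on its own channel, something like:
    #   <|start|>assistant<|channel|>commentary to=functions.get_weather
    #   <|message|>{"location": "Paris"}<|call|>
    # so a client that only reads plain assistant text never sees it.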
If you are trying to get facts out of an LLM, you are using it wrong. If you want a fact, it should use a tool (e.g. web search, RAG, etc.) to retrieve the information that contains the fact (a Wikipedia page, documentation, etc.), then parse that document for the fact and return it to you.
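A minimal sketch of that pattern, assuming an OpenAI-compatible endpoint (llama-server and Ollama both expose one) and a made-up wiki_search tool; the base URL, model name, and the tool's stub body are placeholders:

    # Facts come from a tool, not from the model's memory: the model is
    # instructed to call wiki_search before answering anything factual.
    import json
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

    def wiki_search(query: str) -> str:
        # Placeholder: fetch the relevant Wikipedia page / docs here.
        return "Mount Everest is 8,849 m tall (2020 survey)."

    tools = [{
        "type": "function",
        "function": {
            "name": "wiki_search",
            "description": "Look up a fact before answering.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    messages = [
        {"role": "system", "content": "Never answer factual questions from memory; call wiki_search first."},
        {"role": "user", "content": "How tall is Mount Everest?"},
    ]

    resp = client.chat.completions.create(model="local-model", messages=messages, tools=tools)
    msg = resp.choices[0].message

    if msg.tool_calls:  # the model asked for a lookup instead of guessing
        call = msg.tool_calls[0]
        result = wiki_search(**json.loads(call.function.arguments))
        messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": result}]
        final = client.chat.completions.create(model="local-model", messages=messages, tools=tools)
        print(final.choices[0].message.content)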
These tools are literally being marketed as AI, yet they present false information as fact. "Using it wrong" can't be an argument here. I would rather the tool were honest about its confidence level and offered mechanisms to research further, and then feed that result back into the "AI" for the next step.
I’ve used Firefox for years and really wanted to stick with it, but too many sites keep breaking. I originally ditched Chrome because it chewed through my RAM, but on the new M4 MacBook I’ve got headroom, so I’ve reluctantly gone back to Chrome. Painful switch, but I don’t have much choice right now.
It's somewhat of a taboo around here, and every time I have mentioned this there have been a bunch of responses insisting that Firefox works perfectly for them.
I genuinely can't think of any sites I come across that are broken, at least visibly enough for me to notice. I think that speaks more to the variety in browsing habits than anything else. I'm sure they exist, and I don't think it's a taboo. People who don't share that impression probably just don't visit any of those broken sites, myself included.
Some forms just break in Firefox for me. I’ve been applying to a lot of tech companies, and roughly 10% of their application forms fail in Firefox but work fine in Chrome. I can’t figure out why it’s inconsistent. Even some CAPTCHA and payment pop‑ups won’t load.