This is a good test - the salient point is that it is fine if the LLM is confused, or even gets it wrong! But what I suspect would happen is that it would confabulate details which aren't in the photo to justify the incorrect EXIF answer. This is not fine.
Every time I ask Claude Code to please fix this CSV import, it starts adding several hundred lines of random modules, byzantine error handling, logging bullshit... the pinnacle being a 1240-line CRUD API when I asked it to add a CLI :/
I'm back to copying and pasting stuff into a chat window, so I have a bit more control over what those deranged, expensive busy beavers want to cook up.
That's 12.9 tokens per line when given a 16k output context, which seems borderline doable, I'll grant you that... but mind you, these agentic code assistants don't need a single pass to accomplish their acts of verbosity.
They can just plan, stew for minutes on end, derail themselves, stew some more, do more edits, eat up $5 in API calls and there you are. An entirely new 1000+ line file, believe it or not.
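For what it's worth, the 12.9 figure is just the budget arithmetic, assuming a 16,000-token output window and the 1240-line file from the comment above:

    # back-of-the-envelope tokens-per-line budget (assumed numbers from the thread)
    output_tokens = 16_000   # "16k output context"
    lines = 1_240            # the CRUD API mentioned above
    print(output_tokens / lines)  # -> ~12.9 tokens per line, in a single pass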
The advantage of a local LLM is that you can find many models that have no cloud equivalent. Someone may have made a fine-tune that meets your needs. And if you can't find a generic model that fits, you can pick an appropriately sized model you can run, build or obtain a dataset, train in the cloud, then use the model locally.
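A minimal sketch of that last step (running the fine-tune locally), assuming the training happened elsewhere and the result was saved to a local directory; the path and prompt are placeholders, and the Hugging Face transformers library is assumed:

    # load a locally stored fine-tuned model and generate from it
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_dir = "./my-finetuned-model"  # hypothetical output of your cloud training run
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir)

    inputs = tokenizer("Summarize this CSV import error:", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))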
Have we not gone all in on soil? Your CPU and GPU come from the soil. The batteries powering all your electronic devices come from the soil. Take a hard look around you: almost everything "artificial" you can see and touch came from the ground.
Lots of things come from the earth for sure, but I think soil is worth distinguishing from the sources of silicon wafers and lithium batteries.
Soil is a living, breathing, hospitable community of earth, fungi, insects, water, and countless other organisms. You can’t make silicon wafers from it, but it’s the cornerstone of entire ecosystems. It might be one of the most precious yet overlooked natural resources.
libgen has been getting taken down, especially after it came out that "AI companies" downloaded the entire archive for their training. Is it even still up? Furthermore, you can go to arXiv and see papers that were released yesterday or today. You can't find those on libgen or scihub.
Why not? If we line up to race, you can't ask why we're comparing a V8 to a V6 turbo or an electric engine. It's a race; the drivetrain doesn't matter. Who gets to the finish line first?
No one is shopping for a GPU by fp8, fp16, fp32, or fp64. It's all about the cost/performance ratio. 8 bits is as good as 32 bits, and great performance has even been pulled out of 4 bits...
I think it's more like saying I ran a mile in 8 minutes whereas it took you 15 minutes to run the same distance, but you weigh twice what I do and also can squat 600 lbs. Like, that's impressive, but it's sure not helping your running time.
Dropping the analogy: f64 multiplication is a lot harder than f8 multiplication, but for ML tasks it's just not needed. f8 multiplication hardware is the right tool for the job.
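A toy illustration of the "lower precision is good enough" point (numpy doesn't expose fp8, so this uses float16 against float64 as a stand-in; the exact error will vary):

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal(1024)
    b = rng.standard_normal(1024)

    exact = np.dot(a.astype(np.float64), b.astype(np.float64))
    lowp  = np.dot(a.astype(np.float16), b.astype(np.float16))

    # relative error of the low-precision dot product vs the float64 one
    print(abs(lowp - exact) / abs(exact))

The point being that the result barely moves, while the low-precision multiply units are far cheaper to build and run.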