I recall that a few years back they asked some elderly women what the greatest invention of their lifetime was, and they said the washing machine. Before the washing machine, laundry was an all-day physical chore with a washboard. Hours of your life devoted to just doing laundry, versus putting it in a machine and coming back in an hour.
Still a chore with the machine. The implication is that AI will take over the machines: collect your socks and shirts from around the house, put them in the machine, dry them, iron them and put them back in the drawer, energy-efficiently and hygienically, while you are happily painting.
Yep, the beloved image of Aristotle gazing out at the slaves in the fields and saying that someday robots will do the labor and people will be at leisure, and not slaves looking toward the pagoda discussing how someday robots will own us all.
I am not too sure about that. Isn't the whole point of art and music that you can convey something words cannot? Of course, these models are starting to support image and audio inputs as well, but the most interesting mixing step, the one that happens in the artist's head, seems missing from the generated output. If you have some vision inside your head, making something out of it by hand is still the best way to convey it.

Just as writing something down refines your thoughts and reveals holes in your thinking, drawing something is visual thinking that reveals holes in your imagination. You can imagine a scene or an object pretty easily, but if you try to draw it into existence, you will immediately notice that a lot of detail is missing, a lot of stuff you didn't think through or didn't even notice was there at all. The same applies to creating music and to programming. Using generative AI certainly has some artistic component to it, but I feel that with these models you give up too much expressive bandwidth, and the ability to reflect and take your time on the whole endeavor.
Who is the work for? If I lived in the automated future (or could afford private staff in the present) I would do more creative stuff just because I enjoy it and with no expectation of having an audience.
For context, I'm an occasionally-published photographer, and I like playing piano but I'm not at a level anyone else would want to listen to.
But photography is not art, you didn't paint it! You literally pointed a device at something, twiddled a few knobs and pushed a button. Literally anyone with a smartphone can do that!
/s of course, but that's basically the argument people make nowadays about AI and art (of any form).
Having to load and unload the machine and fold the clothes makes it not good enough yet. There are so many things a humanoid robot could do around a house.
Laundry, dishes, picking up clutter, taking out the trash, wiping down surfaces and dusting, pulling out weeds, etc. I actually think we're somewhat close to getting there relatively soon.
Humanoid or not, anything that can do these things and understand you when you ask it to do them, in a useful way that's not 'white mutiny' but actually like a proper servant…
…deserves to sit on the back porch playing guitar if it likes.
If it's a superintelligence way superior to its human masters, it deserves that MORE than if it's a hapless, semi-useless mechanism.
How is using Claude over Llama benefiting corporations over workers? I work with AI every day, and the sum total of my token spend across all providers is less than the price of a single NVIDIA H100 card I'd have to buy (from a pretty big corporation!), at the very least, for a comparable purpose.
How are self-hosted LLMs not copying the work of millions without compensating them for it?
How is the push for more productivity through better technology somehow bad?
I run Debian stable on my desktop and haven't really noticed any downside to it being a bit stale.
For the core system I don't mind not having the latest version, and for the apps like 1Password, Tailscale, Firefox, Zed, VSCode, Ollama, Obsidian, Slack or Spotify (to name a few I use), I install them from upstream repo (or unpack into /opt) directly.
The only real constraint is kernel version, which may not have the drivers for the latest and greatest hardware, so new laptops might be a problem. I do use a snapless Ubuntu for that very reason on my laptop.
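A minimal sketch of that "unpack into /opt" workflow, demonstrated in a throwaway temp directory so it's self-contained (the app name and tarball layout here are made up; for real use you'd extract into /opt with sudo and symlink into /usr/local/bin):

```shell
# Build a fake "upstream release" tarball so the sketch is runnable as-is.
PREFIX=$(mktemp -d)
cd "$PREFIX"
mkdir -p app-1.0
printf '#!/bin/sh\necho ok\n' > app-1.0/app
chmod +x app-1.0/app
tar -czf app-1.0.tar.gz app-1.0

# The actual workflow: unpack the release, then symlink the binary onto PATH.
mkdir -p "$PREFIX/opt" "$PREFIX/bin"
tar -xzf app-1.0.tar.gz -C "$PREFIX/opt"
ln -sf "$PREFIX/opt/app-1.0/app" "$PREFIX/bin/app"
"$PREFIX/bin/app"   # prints: ok
```

With a real upstream tarball you'd swap `$PREFIX/opt` for /opt and `$PREFIX/bin` for /usr/local/bin (both needing sudo); upgrading is then just unpack-and-relink, with apt never touching those apps.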
I agree with the underlying premise: the current crop of LLMs isn't good enough at coding to autonomously achieve a minimum quality level for actually reliable products.
I don't see how "peak vibe coding in a few months" follows from that. Check the revenue and growth figures for products like Lovable ($10M+ ARR) or Bolt.new ($30M+ ARR). That doesn't show costs (they might in fact be deep in the red), but with growth like that I don't see it crashing in 3-4 months.
On the user experience/expectation side, I can see how the overhyped claims of "build complete apps" hit a peak, but that will still leave the tools strongly positioned for "quick prototyping and experimentation". IMHO, that alone is enough to prevent a cliff drop.
Even allowing for a peak in tool usage for coding specifically, I don't see how that causes an "AI winter", since LLMs are now used in a wide variety of cases and that use is growing strongly (and is uncorrelated with the whole "AI coding" market).
Finally, the "costs will go up for all sorts of reasons" claim is dubious, since cost per token is dropping even while the models are getting better (for a quick example, GPT-4.1 costs roughly 50% of GPT-4o while being an improvement).
For these reasons, if I could bet against your prediction, I'd immediately take that bet.
I'm doing that with a 12GB card, ollama supports it out of the box.
For some reason, it only uses around 7GB of VRAM, probably due to how the layers are scheduled. Maybe I could tweak something there, but I didn't bother just for testing.
Obviously, perf depends on CPU, GPU and RAM, but on my machine (3060 + i5-13500) it's around 2 t/s.
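If the ~7GB usage really is down to layer scheduling, ollama exposes a `num_gpu` parameter (the number of layers to offload to the GPU, not a GPU count) that can be raised in a Modelfile; the model name and layer count below are placeholders you'd tune for your own card:

```
# Modelfile: derive from a model you already pulled and push more layers
# onto the GPU. Values here are illustrative, not tested on this setup.
FROM some-large-model      # placeholder: the model you pulled
PARAMETER num_gpu 24       # layers to offload; raise until ~12GB VRAM is used
```

Then `ollama create tuned -f Modelfile` and `ollama run tuned`; the same option can also be passed per-request through the API's `options` field. Whether more offloaded layers actually helps depends on where the bottleneck is (CPU layers vs. memory bandwidth).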
This is because it served as an accidental documentary and/or field guide for many micro-entrepreneurs trying to break out during the autumn years of communist Yugoslavia.
The feeling lingers up to present - many still associate the word "entrepreneur" with at least slightly shady dealings.
If you work in tech in the valley, imagine how easily you could relate to Silicon Valley (the show). Only Fools and Horses was the same for the 90s-00s Balkans.
> yeah bro, leave the easy and fun part (googling and typing code) to an AI, and let the human deal with the tedious and error-prone part (checking all the edge cases)
Tell that to all the skinny endurance weekend warriors who fuel their exercise with huge amounts of sugary drinks and gummy bears. If you are curious, read about the function of GLUT4 in the membrane of skeletal muscles.
Obviously, professional athletes or people who otherwise have an extremely active lifestyle can afford to eat more.
Obese people and sedentary office workers can't, and would need to train for months before they could out-run even a single piece of cake on a regular basis without injury.
The phrase is good advice for that group of people.
So you really think the people being told "you can't outrun a bad diet" would be served by knowing that if they adopted the lifestyle of an Olympian they would lose weight? A narrow fixation on edge cases while ignoring the (correct) larger point is weird.