I've largely given up using tmux in favor of wezterm: it can connect remotely to a persistent terminal session running on another machine, and it has native window objects, so things like mouse support and copy/paste work out of the box. It also has a kind of mosh-like mode, though it's not quite as good as mosh+tmux at persisting connections over bad networks or disconnections.
The downside is that it's really sensitive to versions; right now I'm struggling to get NixOS to run the same version I have on my Ubuntu dev box and macOS laptop.
I'm traveling for the next couple of weeks and don't want to take my home or work laptop, so I'm setting up a nice old Chromebook with NixOS. I've dabbled with NixOS before, but this time I've been using Claude Code to set it up, and it's really good at it. It makes the process painless, even without much NixOS experience.
Anyone care to compare the current Aider with Claude Code? I tried Aider 6+ months ago and liked it but haven't tried it more recently because Claude Code is working so well for me. But I keep feeling like I should try Aider again.
Aider is good at one-shotting Git commits, but requires a human in the loop for a lot of iteration. Claude Code is better at iterating on problems that take multiple tries to get right (which is most problems IMO). I was really impressed by Aider until I started using CC.
~20 years ago I was helping someone with the storage server for their mail server; it had 8x 10K disks in a RAID-1. It was always struggling to keep up, and often couldn't. Intel had just come out with their SSDs, and I had a friend with access to one that was larger than the array. I shipped this little laptop drive off to get installed in the storage server.
Once the SSD was in the server, I did a "pvmove" to migrate the data from the spinning array to the SSD while the system was up and running. Over the next several hours the load on the server dropped. It was comical that these 8 hard drives could be replaced by something smaller than my wallet.
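For anyone who hasn't done a live LVM migration, it looks roughly like the sketch below. Illustrative only: "/dev/md0", "/dev/sdb" and "mailvg" are placeholder names, it assumes the old array and the new SSD are both physical volumes in the same volume group, and it's wrapped in Python just so each step is commented.

    # Rough sketch of a live LVM migration; all device/VG names are made up.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["pvcreate", "/dev/sdb"])            # initialize the new SSD as a physical volume
    run(["vgextend", "mailvg", "/dev/sdb"])  # add it to the existing volume group
    run(["pvmove", "/dev/md0", "/dev/sdb"])  # move allocated extents off the old array, live
    run(["vgreduce", "mailvg", "/dev/md0"])  # finally drop the old array from the VG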
If those drives were short-stroked, they probably could have kept up with that SSD, though at a reduced capacity. The SSD would probably have a lower power bill over its life, though. I did some calculations for an array of short-stroked 15K SAS disks to replace a consumer 4GB SSD for a write-intensive app that chews through SSDs, and its performance would have been within spitting distance of the SSD. I ended up not doing it because 15K SAS drives likely won't have any parts availability in the not-too-distant future.
Except that we would have needed a lot more than those 8 drives to keep the same capacity. I think it was 1TB of storage, back around 1999. For mail storage it unfortunately needed both capacity and low seek latency.
A trick I've been doing since I got the Combustion thermometer, if I'm trying for a 205°F target: I run it until the surface temp reaches 205, then turn down the heat so the ambient temp sits around 205 for the rest of the cook, until the interior gets up to 205. I've tried it a couple times and had really great results.
Note the article explained: "leading to an overall cooling even as the center warmed". That's what I think you are missing above. If the internal temp comes up to 132, but the exterior drops to 95...
Don't get me wrong, I've been a subscriber for a very long time, and I get a lot of great content there. But going there to watch something specific, or watching a TV series, really sucks.
I recently realized a few studios (IIRC Warner Bros and Paramount) had put a lot of content there, including movies and TV shows. I decided to watch Dick Van Dyke, because I'm a Carl Reiner fan. You can't really "Watch Next" a TV show and then go in to watch the next episode. In fact, sometimes it just wants to show you the episodes in a non-linear order. "I want to watch the next Dick Van Dyke" is not something that YouTube makes easy. Another example: a friend recently sent me The Chit Show. I opened the playlist of the shows, and it played them in reverse order (which I didn't really understand until the end, when I realized I was on the first episode).
Also, the YouTube algorithm for suggesting things for you to watch is really bad. It gets stuck in ruts and it's hard to get out of them.
YouTube is amazing for learning DIY things, which is a large part of why I have subscribed for so long. But for watching entertainment the whole UI really just doesn't work.
> You can't really "Watch Next" a TV show and then go in to watch the next episode.
From the article:
A coming YouTube feature, called “shows,” can automatically queue the next episode on a channel, rather than serving whatever the recommendation algorithm thinks you’ll like best from billions of options.
And the Watch Next always insists on showing me some crazed antisemitic, anti-vax, misogynistic conspiracy horseshit after three episodes of Dick Van Dyke. You can't even block creators on YouTube to make sure their content is not autoplayed or recommended to you.
I haven't tried that recently (~3 years). Does that work with concurrency, or do you need to ensure only one backup is running at a time? Back when I tried it, I got the sense that it wasn't really meant to have many machines accessing the repo at once, and decided it was probably worth wasting some space in exchange for potentially more robust backups, especially for my home use case where I only have a couple machines to back up. But it'd be pretty cool if I could replace my main backup servers (using rsync --inplace and zfs snapshots) with restic and get deduplication.
It works. In general, multiple clients can back up to/restore from the same repository at the same time and do writes/reads in parallel. However, restic does have a concept of exclusive and non-exclusive locks and I would recommend reading the manual/reference section on locks. It has some smart logic to detect and clean up stale locks by itself.
Locks are created e.g. when you want to forget/prune data or when doing a check. The way I handle this is that I use systemd timers for my backup jobs. Before I run e.g. a check command, I use an Ansible ad-hoc command to pause the systemd units on all hosts and then wait until their in-flight operations are done. After doing my modifications to the repos, I enable the units again.
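Concretely, the maintenance flow looks something like the sketch below. This is an illustration, not my actual scripts: the inventory group "backup-hosts", the unit name "restic-backup.timer" and the retention flags are made up, and it assumes the repository and password are already set via the usual restic environment variables.

    # Pause the timers everywhere, run the exclusive restic operations, resume.
    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    # Stop the backup timers so no new job grabs a lock mid-maintenance
    # (a backup that is already running still has to finish first).
    run(["ansible", "backup-hosts", "-m", "ansible.builtin.systemd",
         "-a", "name=restic-backup.timer state=stopped"])

    # Exclusive operations: verify repository integrity, expire old snapshots.
    run(["restic", "check"])
    run(["restic", "forget", "--keep-daily", "7", "--keep-weekly", "4", "--prune"])

    # Re-enable the timers.
    run(["ansible", "backup-hosts", "-m", "ansible.builtin.systemd",
         "-a", "name=restic-backup.timer state=started"])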
Another tip is that you can create individual keys for your hosts for the same repository. Each host gets its own key, so a host compromise only leads to that key being compromised, and it can then be revoked after the breach. And as I said, I use rest-servers in append-only mode, so a hacker can only "waste storage" in case of a breach. I also back up to multiple different locations (sequentially), so if a backup location is compromised I could recover from that.
I don't back up the full hosts, mainly application data. I use tags to tag by application, backup type, etc. One pain point is, as I mentioned, that the snapshot IDs in the different repositories/locations are different. Also, because I back up sequentially, data may have already changed between writing to the different locations. But this is still better than syncing them with another tool as that would be bad in case one of the backup locations was compromised. The tag combinations help me deal with this issue.
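In restic terms, that setup looks roughly like the sketch below (illustrative only: the repository URL, backup path and tag names are invented, and it assumes the repo password comes from the usual environment variables):

    # Per-host key plus tagged, per-application backups; all names are placeholders.
    import socket
    import subprocess

    REPO = "rest:https://backup.example.com/myhost"  # a rest-server in append-only mode

    def restic(*args):
        subprocess.run(["restic", "-r", REPO, *args], check=True)

    # One-time, per host: add a dedicated key ("restic key add" prompts for the
    # new password), so a compromised host only exposes a key you can revoke.
    # restic("key", "add")

    # Tag snapshots by application/host/type so they can be matched up across
    # the different backup locations, since snapshot IDs differ per repository.
    host = socket.gethostname()
    restic("backup", "/var/lib/myapp",
           "--tag", "app:myapp", "--tag", f"host:{host}", "--tag", "type:data")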
Restic really is an insanely powerful tool and can do almost everything other backup tools can!
The only major downside to me is that it is not available in library form to be used in a Go program. But that may change in the future.
Also, what would be even cooler for the multiple backup locations is if the encrypted data could be distributed using something like Shamir secret sharing, where you'd need access to k of n backup locations to recreate the secret data. That would also mean that you wouldn't have to trust whatever provider you use to back up to (e.g. if it's Amazon S3 or something).
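For anyone unfamiliar with the k-of-n idea, here is a bare-bones illustration of Shamir secret sharing over a prime field. It's a toy, not something restic supports, and not suitable for real secrets.

    # Toy Shamir secret sharing: split an integer secret into n shares,
    # any k of which are enough to reconstruct it.
    import secrets

    PRIME = 2**127 - 1  # a Mersenne prime; the secret must be smaller than this

    def split(secret, k, n):
        # Random polynomial of degree k-1 with the secret as constant term.
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        def f(x):
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, f(x)) for x in range(1, n + 1)]

    def combine(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj == xi:
                    continue
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = split(123456789, k=3, n=5)
    assert combine(shares[:3]) == 123456789  # any 3 of the 5 shares suffice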
The issue with this is that if someone hacks one of the hosts, they now have access to the backups of all your other hosts. That's the case with Borg at least, with the standard setup; it would be cool if I were wrong, though.
Backups are append-only and each host gets its own key; the keys can be individually revoked.
Edit: I have to correct myself. After further research, it seems that append-only != write-only. Thus you are correct in that a single host could possibly access/read data backed up by another host. I suppose it depends on use-case whether that is a problem.
It would be nice if one of the backup systems supported public key crypto for the bulk of the data, so that the keys used for recovering data would be different from the keys used for backing up. I know there is an open ticket for one of restic/borg, because I subscribed to it a few years ago and periodically get updates on it, but nobody has come up with a solution to it yet.
Our local drive-in movie theater (remember those?) offers various meal options, including burgers, and I've taken to ordering the Impossible there because several times I've gotten significant bone chunks in their beef burgers, to the extent that I was surprised I didn't break a tooth on them.
Here's my current rule of thumb: if you have successfully built a couple projects using agentic tooling and Claude 4 or similar models, you are doing a fine job of keeping up. Otherwise, you are at least a generation behind.
> Isn't the whole promise of AI tools that they just work?
No, not at all. As with pretty much any development tool, you need to get proficient with it.
> what skill am I missing out on
At this point, it seems like pretty much all of the skills related to generative AI. But the most recent ones I'll point at are tooling, tooling, tooling, and prompting. The specific answer (to answer your "exactly") is going to depend on you and what problems you are solving. That's why one tries not to fall behind: so you can see how to use tooling in a rapidly evolving landscape, for your exact circumstances.
> I think I'm a reasonably good communicator in both voice and text, so what skill am I failing to train by not using LLMs right now?
You know how, when you're trying to achieve something, you will use different words with different people? You don't talk to your spouse the same way you talk to your parents or your children or your friends or your coworkers, right? You understand that if you are familiar with someone, you speak to them differently when you want to achieve something, yes?
This is just ridiculous. You can get up to speed with SOTA tooling in a few hours. A system prompt is just a prompt that runs every time. Tool calls are just patterns that are fine-tuned into place so that we can parse specific types of LLM output with traditional software. Agents are just an LLM REPL with a context-specific system prompt and a limited ability to execute commands.
None of this stuff is complicated, and the models themselves have been basically the same since GPT-2 was released years ago.
> A system prompt is just a prompt that runs every time. Tool calls are just patterns that are fine-tuned into place so that we can parse specific types of LLM output with traditional software. Agents are just an LLM REPL with a context-specific system prompt and a limited ability to execute commands.
Pulling the covers back so hard and so fast is going to be shocking for some.
To make it more concrete, you can try to build something yourself. Grab a small model off of Hugging Face that you can run locally. Then put a REST API in front of it so you can make a request with curl, send in some text, and get back in the response what the LLM returned. Now, in the API, prepend some text to what came in on the request (this is your system prompt), like "you are an expert programmer, be brief and concise when answering the following". Now add a session to your API and include the past 5 requests from the same user along with the new one when passing them to the LLM. Update your prepended text (the system prompt) with "consider the previous 5 requests/responses when formulating your response to the question". You can see where this is going: all of the tools and agents are some combination of the above, and/or even adding more than one model.
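A minimal sketch of that toy setup, assuming the transformers, fastapi and uvicorn packages are installed, and using "distilgpt2" as a stand-in for whatever small model you grab from Hugging Face:

    # Tiny "LLM behind a REST API" sketch, illustration only.
    from collections import defaultdict, deque

    from fastapi import FastAPI
    from pydantic import BaseModel
    from transformers import pipeline

    SYSTEM_PROMPT = ("You are an expert programmer, be brief and concise when "
                     "answering the following. Consider the previous "
                     "requests/responses when formulating your response.")

    generator = pipeline("text-generation", model="distilgpt2")  # any small model
    app = FastAPI()

    # The "session": the last 5 request/response pairs per user, kept in memory.
    history = defaultdict(lambda: deque(maxlen=5))

    class Query(BaseModel):
        user_id: str
        text: str

    @app.post("/generate")
    def generate(q: Query):
        # Prepend the system prompt and the recent history to the new request.
        past = "\n".join(f"Q: {req}\nA: {resp}" for req, resp in history[q.user_id])
        prompt = f"{SYSTEM_PROMPT}\n{past}\nQ: {q.text}\nA:"
        out = generator(prompt, max_new_tokens=100,
                        return_full_text=False)[0]["generated_text"]
        history[q.user_id].append((q.text, out))
        return {"response": out}

    # Run with:  uvicorn app:app
    # curl -X POST localhost:8000/generate -H 'Content-Type: application/json' \
    #      -d '{"user_id": "me", "text": "How do I reverse a list in Python?"}'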
At the end of the day, every one of these tools has an LLM at its core, predicting and outputting the next most likely string of characters that would follow from an input string of characters.
It may not be as ridiculous as you think. I have 25 years of experience with Python, and the generative AI tooling is teaching me useful things in Python when I work with it.
Right now the only thing being left behind is my actual work to be done, since I am spending more and more time fighting to keep Cursor-written degenerate slop code from creeping into the codebase from "pioneer" developers who are slowly forgetting how to program.
Building projects is valuable, but "keeping up" is contextual: someone using AI effectively in their specific domain is ahead regardless of which generation of tools they're using.