I'd like it if you addressed the main point of his post: being left-handed.
I've never liked fountain pens because most languages are written left-to-right, which means that as a left-hander you smudge the ink much more easily than a right-hander would.
The seemingly best advice I've seen is to learn to be an "underwriter", i.e., to position your hand below the line you're writing instead of beside it. I say seemingly because I'm not willing to spend that much effort when I can write fine with regular pens.
The person you're answering is also using a REPL while coding, just not accessing it directly (i.e., manually typing into the REPL's stdin).
Instead he interacts with it via his editor's tooling: you stay in a normal file and use a shortcut that sends a sexp/function/region/etc. to the running REPL and displays the result.
So, just to be clear, you are using the REPL directly?
The article mentions SBCL, which is a well-regarded open-source Common Lisp implementation.
Lisp invented REPLs, and the way most people use them now answers your question with "both".
A REPL usually runs locally in a subprocess, or remotely through a REPL server; your editor then talks to that running Lisp image. You can send code for compilation and recompilation while the image is running, and you can also interact with it directly through the REPL, which includes the debugger.
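For the remote case, a minimal sketch (assuming SBCL with Quicklisp installed; port 4005 is just the conventional SLIME default, pick whatever you like):

    ;; In the running Lisp image: load Swank and start a server
    ;; that the editor can attach to.
    (ql:quickload :swank)
    (swank:create-server :port 4005 :dont-close t)

    ;; Then, from Emacs: M-x slime-connect RET 127.0.0.1 RET 4005 RET

From there everything in the image (compilation, inspection, the debugger) is reachable from the editor.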
The GP you are referencing uses the common SLIME[0] package for anything of consequence, which works exactly as described above.
So what's left is to answer GP's question, which nobody has done:
What are the use cases for using the REPL directly rather than through something like SLIME?
You answered "both", which I'm sure is correct, but I'm curious specifically which uses you find better directly through the REPL. The only reason I can see is when you can't be bothered to (or are unable to) start SLIME; otherwise, even to evaluate small expressions, I'd rather have them written in a file so I can easily keep a history of them or edit them.
I also know people who never use tools like SLIME and prefer just using the REPL for simplicity.
This was also my impression when reading the article, as someone who uses Sly heavily, every day. I can't imagine not having in-editor access to functionality like recompiling the function at point, or live evaluation of test forms directly from the buffer. As Stu (the Clojure guy) pointed out in a video a number of years ago, nobody should be typing anything raw into the in-editor REPL prompt; you should be sending forms directly from the code buffer.
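To make that concrete, here's roughly what that looks like (the file name is made up, and the keybindings are the SLIME/Sly defaults, so treat them as assumptions if you've rebound anything):

    ;; scratch.lisp -- edited as a normal file, never typed at the prompt.
    (defun greet (name)
      (format nil "Hello, ~a!" name))
    ;; With point inside the defun, C-c C-c recompiles it in the
    ;; running image.

    ;; A throwaway test form, kept in the file as history:
    (greet "world")
    ;; C-x C-e right after the closing paren evaluates it and echoes
    ;; the result, no REPL prompt involved.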
How do I maintain that workflow if I'm to use native REPLs?
It's "both" because if I want to interact with the SBCL REPL in SLIME I swap to the buffer for it and type in whatever I want which includes reader macros and such mentioned in the article.
I don't understand your logic unless you're implicitly saying the camera needs to go away, but then you should say so (so that I can properly disagree with you).
The notch makes for a smaller menu bar, but without the notch there would be no menu bar there; it would take up the space underneath instead.
> I don't understand your logic unless you're implicitly saying the camera needs to go away
Yup. We never had a camera before the notch.
> The notch makes for a smaller menu bar but without the notch there would be no menu bar there
Yup. Before the notch we had neither a menu bar that could comfortably fit most menu items even in professional apps, nor a camera.
BTW, you are literally saying "The notch makes for a smaller menu bar". Imagine if I had written that as the first sentence of my comment; there would have been no misunderstanding.
The screen is a 16:10 screen with some extra pixels added next to the notch.
By default, the system uses a resolution of 1512x982 (14"), which you can change to 1512x945 (16:10) to move the menu bar below the notch and end up with black pixels next to the notch.
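A quick sanity check on those numbers: 1512 / 945 = 1.6, i.e. exactly 16:10, and 982 - 945 = 37, so the default resolution is just the 16:10 area plus 37 extra rows of pixels beside the notch.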
"If you go make weird contortions and workarounds you might just find a semi-working non-solution to a problem that didn't exist until Apple introduced it".
I didn't think this would be so hard to understand. It's not a false dichotomy: without the notch, the screen would have to be smaller to fit the camera there. You didn't lose any pixels.
If that's the reason people dunk on AI-assisted programming, fine.
That's not the impression I had, though; the criticism I usually see is about laziness, low-effort submissions, etc., which are not inherent issues of using LLMs.
But they are exacerbated by them, so the criticism still stands. No one visits HN for low-quality, same-looking submissions. It's like frequenting r/toolgifs and suddenly almost every post is about one specific hammer. That'd be understandably annoying, and while it wouldn't be the hammer's inherent fault, it would be an issue only possible because the hammer exists.
I don't disagree; it's annoying. But what's the solution here? Bashing quality submissions because they use AI?
Even if LLMs don't succeed in what seems to be their ultimate goal of replacing humans, and I don't think they will, there's no future where we completely stop using them.
I guess either we find a way to filter out AI slop, or we wait until people are tired of rehashing the same low-effort criticisms.
Now, you seem like one of the few people who are concerned about environmental issues, and I respect that. If that's why people are against them, it's a whole other discussion and we can disregard anything I said here.
Now you've lost me. Expecting one server per person in a household is unrealistic. Even if the software becomes perfect, what about the hardware side? Expecting a family of 5 to have 5 servers, all available and reachable from anywhere, sounds like a nightmare, and just a waste of electricity.
Your whole premise is that self-hosting software can become a one-click deploy; if that can be achieved, I'm sure per-user settings are possible too. If who is legally responsible for what your brother does with the family server is really such a big question, then let's just accept that wide adoption of self-hosting is not going to happen.
A server could be a $30 silent soap-sized box hanging off the router, consuming 5 watts; you plug it in and it sets up services and domains ready to access. Why would this be a nightmare? It is already feasible on all levels. Assuming the house has fiber, reliability shouldn't be much of an issue.
“Hello, Metropolitan Police here. We have a warrant to seize… docker container ab38asdf8765jk on your home server. Go ahead and export its attached volumes and the image. We’ll wait.”
A 30-dollar box will replace cloud storage, photo storage, video streaming, and all the other services people expect to have? I don't think you have thought through what exactly you're trying to replace; we're not talking about tech people wanting to host their static blog here.
A Raspberry Pi 5 already does all of the examples you mentioned. Add a hard disk to it, and a somewhat reliable power supply, and you're looking at a hundred bucks. We are not that far away from that.
Does it replicate Netflix? No. But honestly, most people do not host their videos on Netflix.
Now add RAID storage, because nobody should keep their photos and other important documents on a single drive. Then at least double the price, because you're not going to sell a Raspberry Pi kit to the general public but a polished product that needs almost no install steps. Or triple it, because you're also going to need tech support and updates.
And how are you going to reach your personal server? More and more people don't even get a public IP on their router anymore, and having every non-tech person punch a hole in their firewall to access their photos... I'm sure that's going to go well.
And if you somehow manage to do that, how are you going to share the photos on your personal server with your friends? Because that's pretty high on the list of people's needs.
If the servers are all in the same house, then the police are not going to ask whose server they can take; they are just going to take all of them. So if that is a concern, that protection would be lost. But GP is not talking about people living together, but about sharing with relatives who live elsewhere.
Because they want to control as much of the market as possible; everyone and their dog is using LLMs for work and email and groceries.
That doesn't change their usefulness; if tomorrow they all increased their prices 10x, they would remain useful for many use cases. Not to mention that in a year or two the costs might go down an order of magnitude for the same accuracy.
> "Because they all have slight pros and cons, and you may want to program some functionality in 1.0 or 2.0, or 3.0, or you're going to train in LLM, or you're going to just run from LLM"
He doesn't say they will fully replace each other (or have fully replaced each other, since his definition of 2.0 is quite old by now).