I don't think you succeeded. The product name in the article title is 'Claude Code Checkpoints', and the url is 'claude-checkpoints.com'. Nowhere do you note that Claude is a trademark of Anthropic, or disclaim association with Anthropic, or describe that you obtained permission to use their trademarked name in your product. I personally was confused about whether this was an official product at first. I'd be surprised if Anthropic didn't get peeved about this.
I like Zig, but if we look at the numbers, the difference probably has more to do with funding than anything:
Zig (programming language) - First appeared 8 February 2016; 9 years ago
Rust (programming language) - First appeared January 19, 2012; 13 years ago
Also, Zig at this point isn't really a brand new language anymore. I have comments on their issues dating back to 2018, so it's been a very active language since at least then.
Those are not comparable dates. The Zig "first appeared" date is a few months into development by Andrew in his spare time. The Rust "first appeared" date is after 3 years of development by Graydon in his spare time, followed by 3 years of development by a Mozilla-sponsored team of engineers.
So they're gonna just finish up their standard lib and THEN spend a year doing nothing but docs for everything they made?
Just getting started is an even bigger reason to have good docs to clearly communicate how the libraries and APIs work!
I wouldn't even read a pull request containing a new function if the creator didn't bother writing a short description and usage clarification.
Getting started is a good excuse for limited libraries or support (same situation with Rust). But a lack of even basic docs is not acceptable if you want user adoption.
This would be true if the source of funding were the standard kind of corporate funding. But there’s reason to believe that the backing money behind this corporation does not care in the slightest and regards this sort of paltry fine as merely the cost of doing its particular business, which is also not a standard type of corporate business.
well, he was arguing that it's not worse than 99% of the human slop that gets posted, so where do you draw the line?
* Well crafted, human only?
* Well crafted, whether human or AI?
* Poorly crafted, human
* Well crafted, AI only
* Poorly crafted, AI only
* Just junk?
etc.
I think people will intuitively get a feel for when content is only AI generated. If people spend time writing a prompt so the output isn't so wordy, has personality, and is OK, then fine.
Also, a big opportunity is going to be out there for detecting AI-generated content, whether in forums, coming into email inboxes, on your corp file share, etc...
Paraphrasing the late great Joe Armstrong, the great thing about Erlang as opposed to just about any other language is that every year the same program gets twice as fast as last year.
Manycore hasn't succeeded because, frankly, the programming model of essentially every other language is stuck in 1950: I, the program, am the entire and sole thing running on this computer, and must manually manage resources to match its capabilities. Hence async/await, mutable memory, race checkers, function coloring, all that nonsense. If half the effort spent straining to get the ghost PDP-11 ruling all the programming languages had been spent on cleaning up the (several) warts in the actor model and its few implementations, we'd all be driving Waymos on Jupiter by now.
I'm curious, which actor model warts are you referring to exactly?
[The obvious candidates from my point of view are (1) it's an abstract mathematical model with dispersed application/implementations, most of which introduce additional constraints (in other words, there is no central theory of the actor model implementation space), and (2) the message transport semantics are fixed: the model assumes eventual out-of-order delivery of an unbounded stream of messages. I think they should have enumerated the space of transport capabilities including ordered/unordered, reliable/unreliable within the core model. Treatment of bounded queuing in the core model would also be nice, but you can model that as an unreliable intermediate actor that drops messages or implements a backpressure handshake when the queue is full.]
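The bounded-queuing idea above (an unreliable intermediate actor that drops messages when its queue is full) can be sketched quickly. This is a minimal Go analogy, not Erlang, and all the names here are illustrative, not from any actor-model specification:

```go
package main

import "fmt"

// boundedMailbox models an intermediate actor with a fixed-capacity queue.
// When the queue is full, new messages are dropped, which is one way to
// recover bounded queuing inside a model that assumes unbounded mailboxes.
type boundedMailbox struct {
	queue chan string
}

func newBoundedMailbox(capacity int) *boundedMailbox {
	return &boundedMailbox{queue: make(chan string, capacity)}
}

// send attempts a non-blocking enqueue; it returns false if the message
// was dropped, as an "unreliable" intermediary actor would drop it.
func (m *boundedMailbox) send(msg string) bool {
	select {
	case m.queue <- msg:
		return true
	default:
		return false // queue full: drop the message
	}
}

func main() {
	mb := newBoundedMailbox(2)
	fmt.Println(mb.send("a")) // true
	fmt.Println(mb.send("b")) // true
	fmt.Println(mb.send("c")) // false: dropped, queue is full
}
```

A backpressure variant would instead block (or reply with a "busy" message) in the `default` branch rather than silently dropping.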
I don't think either of those are particularly problematic. The actor model as implemented by Erlang is concrete and robust enough. The big problems with the actor model are, in my opinion, around (1) speed optimizations for immutable memory and message passing (currently, there's a great deal of copying and pointer chasing involved, which can be slow and is a ripe area for optimization), (2) (for Erlang) speed and QOL improvements for math and strings (Erlang historically is not about fast math or string handling, but both of those do comprise a great deal of general-purpose programming), (3) (for Erlang) miscellaneous operational QOL improvements (e.g. the existing distribution, ETS, Mnesia, failover, hot upgrades, node deployment, and build process range from arcane (Mnesia, hot upgrades, etc.) all the way up to covered-in-terrifying-spiders (e.g. debugging queuing issues, rebar3)).
There is no lineage between The Actor Model and Erlang. The creators of Erlang are on record as having never heard of the Actor Model (as developed by Hewitt, Agha and colleagues at MIT). None of the points you make (including the first one) are a part of any formal definition or elaboration of the Actor Model that I have seen, which was one of my points: there is no unified theory of the Actor Model that addresses all of the practical issues.
With respect to your point (1), you might be interested in Pony, which has been discussed here from time to time, most recently: https://news.ycombinator.com/item?id=44719413 Of course there are other actor-based systems in wide use such as Akka.
Erlang's runtime system, the BEAM, automatically takes care of scheduling the execution of lightweight Erlang processes across many CPUs/cores. So a well-written Erlang program can be sped up almost linearly by adding more CPUs/cores. And since more and more cores are being crammed into CPUs each year, what Joe meant is that by deploying your code on the latest CPU, you've doubled the performance without touching your code.
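The same "more cores, same code, more speed" property can be sketched outside Erlang. A rough Go analogy (goroutines are scheduled across cores by the Go runtime, loosely like BEAM processes; `parallelSum` is an illustrative name, not a real API):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelSum splits the range [0, n) across one worker per CPU core.
// The code never hard-codes a core count: the runtime schedules the
// goroutines onto however many cores exist, so the same program scales
// up on a machine with more cores.
func parallelSum(n int) int {
	workers := runtime.NumCPU()
	var wg sync.WaitGroup
	partial := make([]int, workers) // one slot per worker: no shared writes
	chunk := (n + workers - 1) / workers
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			for i := w * chunk; i < (w+1)*chunk && i < n; i++ {
				partial[w] += i
			}
		}(w)
	}
	wg.Wait()
	total := 0
	for _, p := range partial {
		total += p
	}
	return total
}

func main() {
	fmt.Println(parallelSum(1000)) // 499500, the sum of 0..999
}
```

The analogy is loose: BEAM processes also give you isolation and preemption, which goroutines with shared memory do not.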
I did find some actual tutorials in the end but … I’ll be honest it seems intimidating. I thought it would be more geared towards the everyday person in 2025.
frequently your session/context may drop (e.g. claude crashes, or your internet dies, or your computer restarts, etc.). Claude does best when it can recover the context and understand the current situation from clear documentation, rather than trying to reverse engineer intent and structure from an existing code base. Also, the human frequently does read the code docs as there may be places where Claude gets stuck or doesn't do what you want, but a human can reason their way into success and unstick the obstacle.
With claude -r you can resume any conversation at any previous point, so there isn't a way to lose context that way. As opposed to compact, which I find makes it act brain-dead for a while afterwards.
Not sure if intentional, but you trimmed the context. The question was about whether he'd abuse power to get retribution. Here's the link to the start of the video: https://youtu.be/dQkrWL7YuGk?t=1
What you trimmed off:
> Hannity: Under no circumstances, you're promising America tonight, you would never abuse power as retribution against anybody
> Trump: Except for Day One... look he's going crazy... Except day one.
> Hannity: Meaning?
> Trump: I'm going to close the border and we're going to drill, drill, drill
Update from day 199:
The border is closed (partially illegally) [1], US oil rigs are down [2], and he is in fact abusing power as retribution against dozens of people and institutions.
The false narrative that there's a 'mainstream media' which is lies and propaganda, as compared to plucky unafraid truth-telling upstarts like Fox and Sinclair and Joe Rogan.