> Some 9/11 attackers were CIA assets and protected from FBI/police scrutiny as such
Citation seriously needed. This is untrue unless the words you are using are not to be understood in any sense common to English speakers. The most generous fact-based interpretation I can give it is that Saudi financiers were under-scrutinized for political reasons, resulting in missed opportunities to stop the attacks. The actual attackers were neither CIA nor FBI assets.
> The most generous fact based interpretation I can give it is that Saudi financiers were underscrutinized for political reasons, resulting in missed opportunities to stop the attacks.
We more or less agree. "Asset" does not mean card-carrying CIA agent.
My wife and I made something very similar (https://nativi.sh/). I find it's a really good use case for LLMs: actually helping you learn rather than doing everything for you.
Also, not sure if you're getting hugged to death, but I'm getting this in the interface while not seeing any network failures.
At least one Florida man is out there plinking Walmart drones at 400 feet with a 9mm. Saw another who took one down from a boat, also with a pistol (probably a 9mm too), but I can't find the video now.
The Venn diagram of "shoots at drones" and "concerned with other people's safety" is two separate circles.
> I consider [having a big benefit at 100% vs an 80/20 rule] a characteristic of type systems in general; a type system that you can rely on is vastly more useful than a type system you can almost rely on, and it doesn’t take much “almost” to greatly diminish the utility of a given type system.
This! This is why I don't particularly care for gradual typing in languages like Python. It's a lot of extra overhead, but you still can't really rely on it for much. TypeScript types are just barely over the hump in terms of being "always" reliable enough to really lean on them.
I agree with the 100% rule. The problem with TypeScript is how many teams allow `any`. They’ll say, “We’re using TypeScript! The autocomplete is great!” And on the surface, it feels safe. You get some compiler errors when you make a breaking change. But the `any`s run through the codebase like holes in Swiss cheese, and you never know when you’ll hit one until you’ve caused a bug in production. And then they try to deal with it by writing more tests. Having 100% type coverage is far more important.
In my rather small codebase I’ve been quite happy with `unknown` instead of `any`. It makes me use it less because of the extra checks, and catches the occasional bug, while still having an escape hatch in cases of extensive type wrangling.
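For anyone who hasn't tried it, a minimal sketch of the difference (the function names are just made up for illustration):

```typescript
// Minimal sketch: `any` vs `unknown` as an escape hatch.

function shoutAny(value: any): string {
  // `any` switches the checker off: this compiles even though it will
  // throw at runtime if `value` has no `.toUpperCase` method.
  return value.toUpperCase();
}

function shoutUnknown(value: unknown): string {
  // `unknown` forces a narrowing check before the value can be used.
  if (typeof value === "string") {
    return value.toUpperCase(); // OK: narrowed to string
  }
  throw new TypeError("expected a string");
}

shoutAny(42);     // type-checks, crashes at runtime
shoutUnknown(42); // type-checks, throws a clear TypeError instead
```

The `any` version compiles no matter what you pass in; the `unknown` version makes you prove the type before you can touch it.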
The other approach, having an absolutist view of types, can be very constraining and complex even for relatively simple domain problems. Rust, for instance, is IMO in diminishing-returns territory. Enums? Everyone loves them, uses them daily, and even writes their own out of joy. OTOH, it took years of debate to get GATs implemented (are they now? I haven’t checked), and not because people like and need them, but because they are a necessary technicality to do fundamental things (especially with async).
TypeScript's --strict is sometimes a very different ballgame from the default. I appreciate why a brownfield project starts with the default, but I don't understand how any project starts greenfield work without strict in 2025. (But also, I've fought to get brownfield projects to --strict as fast as possible. An explicit `any` is at least code-searchable with the most basic grep skills, and gives you a TODO burndown chart for after the fastest possible conversion to --strict.)
TypeScript's --strict still isn't technically sound, in the functional programming sense, but that gets back to the pragmatism mentioned in the article: trying to get the 80/20 benefit of enough FP purity to reap as many benefits as possible without insisting on the investment to reach 100% purity. (Arguably why TypeScript beat Flow in the marketplace.)
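For concreteness, here's one of the well-known holes that survives --strict (the interfaces are just illustrative): mutable arrays are treated covariantly.

```typescript
interface Animal { name: string }
interface Dog extends Animal { bark(): void }

const dogs: Dog[] = [{ name: "Rex", bark: () => console.log("woof") }];
const animals: Animal[] = dogs;  // allowed: Dog[] is assignable to Animal[]

animals.push({ name: "Felix" }); // also allowed: it's just an Animal

dogs[dogs.length - 1].bark();    // type-checks under --strict, but crashes:
                                 // bark is not a function
```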
And yet, type annotations in Python are a tremendous improvement and they catch a lot of bugs before they ever appear. Even if I could rely on the type system for nothing it would still catch the bugs that it catches. In fact, there are places where I rely on the type system because I know it does a good job: pure functions on immutable data. And this leads to a secondary benefit: because the type checker is so good at finding errors in pure functions on immutable data, you end up pushing more of your code into those functions.
It may be the exact opposite. You can't express all the desired constraints of your problem domain using just the type system (at least you shouldn't try, to avoid Turing-tarpit issues); you need a readable general-purpose programming language for that.
If you think your type system is both readable and powerful, then why would you need yet another programming language? (Haskell comes to mind as an example of such a language, though I don't know how true that is.) The opposite (the runtime language used at compile time) may also be successful, e.g. Zig.
Gradual typing in Python provides the best of both worlds: things that are easy to express as types, you express as types. On the other hand, you don't need to bend over backwards and refactor half your code just to satisfy your compiler (Rust comes to mind). You can choose the trade-off suitable for your project and be dynamic where it is beneficial. Different projects may require a different boundary. There is no one size fits all.
P.S. As I understand it, the article itself is about "pragmatism beats purity."
On the other hand, if you think of a programming language as a specialized tool then you choose the tool for the job and don’t reach for your swiss army knife to chop down a tree.
The problem with gradually typed languages is that there are few such trees that should be chopped by their blunt blades. At least Rust is the best for a number of things instead of mediocre at all of them.
One counterpoint to this is local, self-directed exploratory programming. For that, a Swiss Army knife is ideal, but in those cases who cares about functional programming or abstractions?
I wonder if lunar space elevators might be the fix here. If I understand correctly, such an elevator would not be as subject to the perturbations, since the tension would keep its orbit stable (is it still an orbit if it's tethered?).
Another option might be a LORAN-style system put up on towers. With lower gravity and no atmosphere, I imagine we could stick transmitters up very high without super complex construction, maybe even just a giant carbon-fiber tube with a transmitter at the top.
> Which means if you actually edited those files, you might fill up your HD much more quickly than you expected.
I'm not sure if this is what you intended, but just to be sure: writing changes to a cloned file doesn't immediately duplicate the entire file again in order to write those changes — they're actually written out-of-line, and the identical blocks are only stored once. From [the docs](^1) posted in a sibling comment:
> Modifications to the data are written elsewhere, and both files continue to share the unmodified blocks. You can use this behavior, for example, to reduce storage space required for document revisions and copies. The figure below shows a file named “My file” and its copy “My file copy” that have two blocks in common and one block that varies between them. On file systems like HFS Plus, they’d each need three on-disk blocks, but on an Apple File System volume, the two common blocks are shared.
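If you want to see the clone behavior from code rather than from Finder, one way (a sketch; the file names are made up) is Node's `fs.copyFile` with the flag that requests a copy-on-write clone:

```typescript
import * as fs from "node:fs";

// Ask for a copy-on-write clone (clonefile on APFS, reflink on btrfs/XFS).
// With COPYFILE_FICLONE, Node quietly falls back to a regular copy when the
// filesystem can't clone; COPYFILE_FICLONE_FORCE would error out instead.
fs.copyFileSync("original.dat", "clone.dat", fs.constants.COPYFILE_FICLONE);

// Both files now share the same on-disk blocks; writing to either one only
// allocates new blocks for the ranges that actually change.
```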
The key is “unmodified,” and how APFS knows (or doesn’t know) whether the blocks are modified. How many apps write on block boundaries, or mutate only the on-disk data that has changed, versus overwriting or replacing the file atomically? For most applications there is no benefit and a significant risk of corruption.
So APFS supports it, but there is no way to control what an app is going to do, and after it’s done, no way to know what APFS has done.
For apps which write a new file and replace atomically, the CoW mechanism doesn't come into play at all. The new file is a new file.
I don't understand what makes you think there's a significant risk of corruption. Are you talking about the risk of something modifying a file while the dedupe is happening? Or do you think there's risk associated with just having deduplicated files on disk?
The vast majority of apps use structured data, not block-oriented data formats. A major exception is databases, but the common file formats most people work with (images, text, etc.) usually aren't best mutated directly on disk; they're rewritten, either to the same file or to a new file. Without some transactional capability, mutating a file directly on disk can corrupt it if the writer fails in the middle of the write. More than a few text editors save by rewriting and replacing precisely to ensure that there is never an inconsistent state of the file on disk.
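A sketch of that save-by-replace pattern (the paths and function name are made up), since the atomicity comes from renaming within the same volume:

```typescript
import * as fs from "node:fs";

// Save by writing a temp file and atomically renaming it over the original.
// Readers see either the old contents or the new contents, never a
// half-written file.
function saveAtomically(path: string, contents: string): void {
  const tmp = `${path}.tmp-${process.pid}`;
  fs.writeFileSync(tmp, contents); // if this fails partway, the original is untouched
  fs.renameSync(tmp, path);        // atomic replace on the same volume
}

saveAtomically("notes.txt", "new revision of the document");
```

And as noted above, the renamed file is an entirely new file, so any block sharing with an earlier clone is simply gone.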
What happens when the original file is deleted? Often this is handled by block reference counters, which are simply decremented. How does APFS handle this? Is there a master/copy concept, or just block references?