> Allows me to move directories around, without having to depend on an IDE to keep namespaces in sync.
So... without an IDE, you'd move code around, let the namespace change because the code now sits in a different directory structure, and then fix the namespaces at use sites manually?
> I want all of C#, with terseness and not forcing OOP.
C# does not force OOP on you. You can have a single namespace and a single static partial class, and spread its members across as many files as you want. So the "ceremony" consists of the following snippet _per file_:
    namespace NS;

    static partial class C {
        // Your methods here
    }
I agree with the first example. I wish C# could do "free" floating functions which just live in the namespace itself. Your second example could do without the extra namespace indentation by using a file-scoped namespace declaration:
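Something like this, I assume (file-scoped namespaces, C# 10+); just a sketch:

    // File-scoped namespace: no braces around the namespace,
    // so nothing in the file gains an extra indentation level.
    namespace NS;

    static partial class C {
        // Your methods here
    }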
I was arguing that the hierarchy is already in your path. The explicit namespace (and even the class name) is redundant if what you want are mostly functions.
Currently, you're forced to define the hierarchy in the directory structure, and then again with namespaces. There should be a way to opt out of this.
> [...] how do you find the code that will run when they do fail? You would have to traverse [...]
I work in the .NET world, and there many developers have this bad habit of "interface everything", even if it has just one concrete implementation; some even do it for DTOs. "Go to implementation" of a method, and you end up at the interface's declaration, so you have to jump through additional hoops to get to the implementation. And you're out of luck when the implementation is in another assembly: the IDE _could_ decompile it if it were a direct reference, but it can't find it for you. When you're out of luck, you have to debug and step into it.
But this brings me to dependency injection containers. More powerful ones (e.g., Autofac) can establish hierarchical scopes, where new scopes can (re)define registrations; similar to LISP's dynamically scoped variables. What a service resolves to at run-time depends on the current DI scope hierarchy.
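A rough sketch of what I mean with Autofac (the `INotifier` service and its implementations here are made up for illustration):

    using System;
    using Autofac;

    var builder = new ContainerBuilder();
    builder.RegisterType<EmailNotifier>().As<INotifier>();
    using var container = builder.Build();

    // The root scope sees the "global" registration.
    Console.WriteLine(container.Resolve<INotifier>());   // EmailNotifier

    // A child scope can override the registration; everything resolved
    // inside it (and its children) sees the override -- much like a
    // dynamically scoped variable.
    using (var child = container.BeginLifetimeScope(
               b => b.RegisterType<ConsoleNotifier>().As<INotifier>()))
    {
        Console.WriteLine(child.Resolve<INotifier>());   // ConsoleNotifier
    }

    // Hypothetical service + implementations, just for the example.
    public interface INotifier { void Notify(string message); }
    public class EmailNotifier : INotifier { public void Notify(string m) { /* send e-mail */ } }
    public class ConsoleNotifier : INotifier { public void Notify(string m) => Console.WriteLine(m); }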
Which brings me to the point: I've realized that effects can be simulated to some degree by injecting an instance of `ISomeEffectHandler` into a class/method and invoking methods on it to cause the effect. How the effect is handled is determined by the current DI registration of `ISomeEffectHandler`, which can be varied dynamically throughout the program.
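Something roughly like this (a sketch; `IErrorConditions` and its members are just illustrative names):

    // The "effect" is declared as an interface. The code that raises the
    // condition doesn't know, or care, how it will be handled.
    public interface IErrorConditions
    {
        void Report(string condition);
    }

    public class Parser
    {
        // Injected here as a method parameter.
        public int ParseDigit(string input, IErrorConditions errors)
        {
            if (input.Length == 1 && char.IsDigit(input[0]))
                return input[0] - '0';

            errors.Report($"not a digit: '{input}'");  // "perform" the effect
            return -1;                                 // only reached if the handler didn't throw
        }
    }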
(Alternatively, inject it as a class member.) Now, the currently installed implementation of `IErrorConditions` can throw, log, or whatever. I haven't fully pursued this line of thought with stuff like `yield`.
The annoyance is that the .NET standard library already does this precise thing, but haphazardly and in far fewer places than ideal.
`ILogger` and `IProgress<T>` come to mind immediately, but `IMemoryCache` too if you squint at it: it literally just "sets" and "gets" a dictionary of values, which makes it a "state" effect. `TimeProvider` might be considered an algebraic effect as well.
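For instance, `IProgress<T>` already has that shape: the callee performs the "report progress" effect and the caller decides what handling it actually means. A quick sketch:

    using System;

    // The callee just raises the effect; it has no idea how (or whether)
    // the report is handled.
    static void CopyFiles(IProgress<int> progress)
    {
        for (var percent = 0; percent <= 100; percent += 25)
            progress.Report(percent);
    }

    // The caller picks the handler: print it, feed a UI, aggregate, or ignore it.
    CopyFiles(new ConsoleProgress());

    // A trivial synchronous handler. (The built-in Progress<T> also works,
    // but it posts callbacks via the captured SynchronizationContext.)
    sealed class ConsoleProgress : IProgress<int>
    {
        public void Report(int value) => Console.WriteLine($"{value}%");
    }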
> I work in a .NET world and there many developers have this bad habit of "interface everything", even if it has just 1 concrete implementation
I work on a Java backend that is similar to what you're describing, but IntelliJ IDEA is smart enough to notice there is exactly one non-test implementation and bring me to its source code.
Not that familiar with Java, but in .NET, when you do this, it is very common for the implementation to be in a separate assembly, part of a different project.
Doesn’t that imply an interface is necessary though, so you can compile (and potentially release) the components separately? I don’t use .net but this sounds quite similar to pulling things into separate crates in Rust or different compilation units in C, which is frequently good practice.
Definitely, that could imply the necessity of an interface, but often it's simply done because everyone working on a project blindly follows an already established poor convention.
by "it's common in the .Net world" I mean that it seems to be an antipattern that's blindly followed. If there's only ever one implementation, it is the interface, imo
> you end up in the interface's declaration so you have to jump through additional hoops to get to it
Bit of a tangent, but this really annoys me when I work on TypeScript (which isn't all that often, so maybe there's some trick I'm missing)—clicking through to check out the definition of a library function very often just takes me to a .d.ts file full of type definitions, even if the library is written in TypeScript to begin with.
In an ideal world I probably shouldn't really need to care how a library function is implemented, but the world is far from ideal.
Recently I wished for "VB for Web": something that'd make it easy for a tech-competent but non-programmer person to prototype a functional web application.
I was a co-founder of a Norwegian startup, Quine AS, that attempted to automate workflows in media productions (as in movies, series, commercials). Ultimately, we failed; the company was dissolved in July 2024. I used a couple of weeks of vacation to clean up and document the reusable parts of the code, and to write about (parts of) our history.
For those wondering: we (the co-founders) bought the IP back from the liquidation assets and agreed to open-source it.
Yep, if you have in one hand "Inside Windows NT" by Helen Custer (or one of its successors, "Windows Internals"[0]) and on the other "VMS Internals and Data Structures"[1]...
DEC cancelled Cutler's pet projects, Prism and Mica: a 64-bit RISC chip and a portable, multi-personality next-gen OS to run on it.
He was upset. Ripe for head-hunting. Microsoft head-hunted him, but he insisted on bringing his core team.
MS did not know what to do with him and put the team to work on some corner of LAN Manager.
Then Windows 3.0 was a hit, which led to IBM and Microsoft divorcing. IBM kept OS/2 1.x (16-bit, for the '286) and OS/2 2.x (32-bit, for the '386).
Microsoft got OS/2 3.x, a planned CPU-independent portable OS.
Microsoft was developing it on Intel's RISC chip, the i860, codenamed N-Ten. The OS was named after it: OS/2 NT.
Then, with Windows suddenly a big deal, getting a next-gen Windows working became a big, urgent issue.
Cutler got the job: finish OS/2 NT and make it work.
OS/2 NT was renamed Windows NT, and as it was barely a skeleton of an OS, Cutler took a lot of the design of Mica -- the cancelled next-gen VMS -- and built NT around that.
DEC found out and sued, and got very sweet deals on NT and Exchange and other things as a result. (Compaq totally blew this and died as a result. It deserved worse.)
DEC also salvaged the Prism project and it was launched as DEC Alpha, the first 64-bit RISC chip.
Alpha was the first 64-bit platform NT ran on, and it was also the first non-x86 platform Linux ran on.
Interestingly enough, NT used the Alpha as a 32-bit processor[0], although NT on Alpha was used to port Windows to 64 bits in lieu of using the (very slow) Itanium simulator[1].
Actually, this was discussed on HN earlier[2], and I believe you wrote an article for The Register[3] about it :)
It provides the opposite: a mutable CoW data structure that is extremely cheap to "fork" so that all subsequent updates occur only on the new "fork" and are invisible to the old "fork".
Depends on how you define "persistent data structure". In most definitions that I've encountered, a new version is made after each update. This code makes a new version only when you explicitly request it with Fork(). This allows you to
- Use the data structure as a "standard" tree, sharing one instance across threads and use locking for thread-safety
- Use it as a fully-persistent structure by calling "Fork" before every modification
- Or anything in between
I needed the 3rd case: a cache manager gives out a forked view of the actual cache contents to its clients. Thus the cache is always in control of the "master" copy, and if a client modifies its own fork, the cache and other clients are not affected.
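To make the pattern concrete, here's a minimal sketch of the fork semantics (the `CowMap` type is made up, and I'm faking the structural sharing with `ImmutableDictionary` purely for brevity; the real thing is a mutable CoW tree, which is what makes `Fork` cheap):

    using System;
    using System.Collections.Immutable;

    var master = new CowMap<string, int>();
    master["hits"] = 1;                     // ordinary in-place updates...

    var clientView = master.Fork();         // ...until a client asks for a fork
    clientView["hits"] = 99;                // client-local change

    Console.WriteLine(master["hits"]);      // 1  -- the master copy is unaffected
    Console.WriteLine(clientView["hits"]);  // 99

    // Sketch of the fork-on-demand semantics only; a real implementation
    // would be a mutable CoW tree rather than a wrapper over an immutable map.
    sealed class CowMap<TKey, TValue> where TKey : notnull
    {
        private ImmutableDictionary<TKey, TValue> _map;

        public CowMap() : this(ImmutableDictionary<TKey, TValue>.Empty) { }
        private CowMap(ImmutableDictionary<TKey, TValue> map) => _map = map;

        // Mutations are visible to everyone holding *this* instance.
        public TValue this[TKey key]
        {
            get => _map[key];
            set => _map = _map.SetItem(key, value);
        }

        // A fork and its original share structure but evolve independently.
        public CowMap<TKey, TValue> Fork() => new(_map);
    }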
Inside a black hole? Consider a neutron star with mass 1 gram less than needed for it to become a black hole. Since the neutron star is not (yet) a black hole, we can 'see' it. Send in the 1 gram and watch while the neutron star converts to a black hole, where, as usually proposed, the mass that was the neutron star suddenly shrinks to the "singularity" at the center of the black hole.
Now, it appears that there is a huge change -- neutron star to black hole -- from a small input, the 1 gram; that is, in math terms, a jump discontinuity.
There was something about the physics of the neutrons that kept the neutron star from shrinking to a singularity. Well, maybe that something also keeps that mass plus the 1 gram from shrinking to a singularity. That is, if there is no jump discontinuity, the inside of that black hole is essentially just like that neutron star.
> watch while the neutron star converts to a black hole where, as usually proposed, the mass that was the neutron star suddenly shrinks to the "singularity" at the center of the black hole.
This "conversion" doesn't imply matter transitioning from one state to another. The main thing happening during transition to a black hole is that the light can't escape anymore - you see the star in one moment, and can't see it in another moment. Not necessarily because of some matter transition, but because it stops radiating light.
The singularity is a mathematical artifact; we don't know what happens to matter inside the black hole, and we don't really care, since it has no effect on the outside world.
We don’t know that an object with 1 gram less than needed to become a black hole will be a neutron star. There may be other, denser states of matter in between, like quark stars or strange stars, that are still not dense enough to become black holes.
Under GR it doesn’t matter, since once the mass increases beyond a critical point an event horizon will form and all that matter will be compressed into a singularity regardless.