This feels like the same pattern as Dark leaving OCaml for F#: https://blog.darklang.com/leaving-ocaml//. Ecosystem matters a lot these days. Outside of these two specific cases, I wonder if we're, as an industry, too afraid of writing this kind of stuff now. I feel like it was done a lot before, and not at all these days. Sure, NIH syndrome is a fallacy, but having to write one library may not be so bad. I would be glad to hear about any experience with that.
That is why when comparing languages we should always look beyond grammar and semantics.
This is nothing new; it is also one of the reasons why languages like C and C++ won the systems programming wars of the 1990s.
After a while one gets tired of writing wrapper libraries, or of having to pay extra for an additional compiler that isn't part of the platform SDKs.
Hence why successful languages always need some kind of killer feature, or a company with deep enough pockets willing to push it into the mainstream no matter what.
Same applies to new OS architecture ideas as well.
Why wouldn’t Zig be applicable anywhere C is applicable? Afaik it can also compile to C as a target, besides the many architectures it supports.
That's not released or finished, last I checked a few weeks ago.
Edit: Oh, looks like it was released just a few days after I last checked; ha. Although, it's not clear to me whether it's intended for end-user use yet.
Nim is elegant, relatively safe, and not interpreted; its performance is within a stone's throw of C and roughly on par with Zig, but with better safety guarantees.
As I've said somewhere else, don't think in terms of languages but in terms of use cases. Nim can make a lot of sense for some cases where C is used, and not much for others. The same is true for Zig. There is no "C replacement", and there never will be; just other options depending on what you are doing.
My two cents as a non-professional programmer: i've found hacking on someone else's codebase to be very hard, with or without a debugger, in dynamic languages like JS/Python where most things are untyped and you get runtime exceptions upon e.g. trying to call a method on a nil object.
BUT back on the thread's topic, since i started programming in Rust, the only time i've felt it was hard to wrap my head around the compiler's output was with complex async programming. Otherwise, every single time i felt like a simple text editor was more than enough with rustc's output to understand what's going on, because in Rust everything is very explicit and statically typed with clear lifetimes, and the compiler has very helpful error messages often including suggestions on how to fix the error.
For me, everything (non-async) Rust is a clear win in terms of understandability/hackability, compared to all other languages i've tried before (admittedly not a lot of them). I think complex IDE tooling can ease the pain, but proper language design can help prevent the disease in the first place.
EDIT: i should add, since i started programming in Rust, i've only once or twice seen a runtime panic (due to an array indexing error on my side). Otherwise, literally all of the bugs i had written were caught by the compiler before the program ran. For me it was a huge win to spend one more hour pleasing the compiler in exchange for spending days less debugging stuff afterwards.
While I use scripting languages when needed, my main languages have always been compiled, with static typing. And I did not need a debugger for hacking code. I need a debugger mostly for tracing my own code when I have bugs related to algorithmic errors, not for the program blowing up on me.
I have not programmed in Rust so I can not really judge the language, but I doubt that it is so much nicer and more expressive than modern C++ that suddenly the types of bugs I am hunting will magically disappear.
> I need a debugger mostly for tracing my own code when I have bugs related to algorithmic errors, not for the program blowing up on me.
Then you're a much better programmer than i am! :)
For algo debugging i just use pen and paper. For more surprising results, print statements are usually all i need.
> suddenly the types of bugs I am hunting will magically disappear
Maybe not, but i'd recommend giving it a try, if only to offer a different perspective. For me personally, strict and expressive enums, mandatory error handling and Option/Result types as language core features (among others) have definitely eliminated most bugs i write. Well, i still write those bugs, but at least the compiler doesn't just compile as if everything were fine, and instead lets me know why my program is flawed.
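As a rough sketch of the kind of thing i mean (a toy example, the Payment/parse_amount names are invented, not from any real codebase): forgetting an enum variant or ignoring a possible failure simply doesn't compile.

    // Toy example: the compiler insists every case and every error path
    // is handled before this builds.
    enum Payment {
        Cash,
        Card { last4: String },
    }

    fn describe(p: &Payment) -> String {
        // Dropping a variant from this match is a compile error,
        // not a runtime surprise.
        match p {
            Payment::Cash => "cash".to_string(),
            Payment::Card { last4 } => format!("card ending in {}", last4),
        }
    }

    fn parse_amount(s: &str) -> Result<u32, std::num::ParseIntError> {
        // Callers are forced to look at the Result; they can't just
        // pretend parsing always succeeds.
        s.trim().parse::<u32>()
    }

    fn main() {
        println!("{}", describe(&Payment::Cash));
        println!("{}", describe(&Payment::Card { last4: "4242".into() }));
        match parse_amount("42x") {
            Ok(n) => println!("amount: {}", n),
            Err(e) => println!("rejected: {}", e),
        }
    }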
Oh, you mean target architectures. But stuff like SuperH is used only for embedded these days, where even C is often rather idiosyncratic. For most coders, Zig is comparable to other mainstream languages in terms of supporting mainstream platforms.
Anyway, this is really a matter of implementation, not a language issue. There's nothing about Zig that makes it inherently impossible to support SH2 or any other platform - indeed, as others have noted, they already have a C backend in the works, so the endgame is to support everything that has a C compiler.
Also, as far as C interop goes, if I remember correctly, Nim can't just take a C header and expose everything declared in it - you still have to manually redeclare the functions and types that you want in Nim, no? You can use c2nim, of course, but that's not really any different from binding generators for other languages, and it requires extra build steps etc. Zig handles it all transparently.
> Anyway, this is really a matter of implementation, not a language issue.
I think separating the two is a bit artificial. Python being slow is partially an implementation issue but the fast implementations can't run everything. When you compare languages, you have to compare implementations, otherwise it's meaningless.
You have to compare ecosystems, but when doing so, you still have to compare PL design and PL implementation separately, because they have different implications. A quality-of-implementation issue means that something can be done, but isn't done by this particular implementation. A language limitation applies to all implementations.
That's not really true; you can work around language limitations. Go has codegen for generics, JavaScript has TS for static types and Babel for """macros""". Lots of proposals that are not in JS yet can be used with Babel. Python has C extensions.
TypeScript is a different language from JavaScript, and C is a different language from Python, so I don't think those are good examples. Similarly, various macro languages that sit on top of something else are also languages in their own right.
And sure, you can always "fix" a language by designing a derivative higher-level language that transpiles into the old one. In fact, this is a time-honored tradition - C++ was originally just such a transpiler (to C). But the very fact that you have to do this points at the original design deficiencies.
> Anyway, this is really a matter of implementation, not a language issue.
That's kind of what people are getting at in this whole conversation though isn't it, ecosystems around languages matter. They can't be an afterthought.
Of course - which is exactly why Zig is not ignoring this. But we still have to compare apples to apples, and oranges to oranges. My original comment was about languages - specifically the ability of the language to consume libraries from another language with minimal hassle, and the response was that Nim somehow does it better.
I'm not even sure why arch support was brought up in this thread, to be honest, because it's not relevant at all? If your problem is unsupported architecture, it's a blocker long before you need to use any libraries...
> Hence why successful languages always need some kind of killer feature, or a company with deep enough pockets willing to push it into the mainstream no matter what.
There's a third strategy: hitching your wagon to an already-successful ecosystem, like languages such as Kotlin do.
That strategy always falls apart when the ecosystem goes in a direction that the guest languages did not foresee, or when they have already created incompatible concepts, and then they face the dilemma of what to expose from the underlying ecosystem.
Using Kotlin as an example, its initial selling point was Java compatibility; now it wants to be everywhere, and its Java compatibility story is also constrained by what Android's Java is capable of.
So the tooling attrition increases, with KMM, wrapping FFI stuff like coroutines to be called from Java, and everything that is coming with Loom, Panama and Valhalla.
I used to work at an OCaml company and it wasn't nearly as much of an issue as one might predict. You can (it turns out) build a very successful business even if there aren't a lot of existing libraries, or if the language lacks certain basic features like native multithreading (same with Python of course). I don't have a great model for why this isn't devastatingly expensive, but it's probably some combination of
* Most existing libraries are kind of bad anyway so you're not missing out much by not using them
* If you write everything yourself you get system expertise "for free", and gaining expertise in something that already exists is hard
* You can tailor everything to your business requirements
* Writing libraries is "fun work" so it's easier to be productive while doing it
I think Jane Street is a big exception. It's like when PG espouses lisp. Back in the 90's[1] language ecosystems were very sparse. An ecosystem was a few big libraries and a syntax highlighter. Now stuff like IDEs, linting, packages, etc. have made people's standards quite high for ecosystems. On the flip side, back in the day languages like OCaml and Lisp had stuff other languages could only dream about. Functions as arguments! Macros! Type inference! But now, barring macros, these features are all in mainstream languages.
If you were to do a similar company now, you'd have to recruit people who still write code like in the 90's: emacs/vim hackers who can write their own libraries and don't need no stinking IDE. Except you now have a significantly smaller advantage because a lot of the languages have caught up and while your god tier programmers can write their own custom serialization library, that's still more developer time than using serde.
Which is why a lot of people are moving to Rust I suppose. You still get the hip factor but responsibly. It's the Foo Fighters of languages. Cool, but not irresponsible.
The big difference between Rust and OCaml is that a company the size of Jane Street can influence OCaml development, while it takes one the size of Amazon (according to the recent accusations) to do the same with Rust. I think OCaml has one of those "ancient" communities that seem to value independence more than consensus. Rust is very hard to build without cargo, OCaml works fine with make or dune. I'm not sure if focusing on independence is the right tradeoff for most companies, but I can see some cases where it might be.
> If you were to do a similar company now, you'd have to recruit people who still write code like in the 90's: emacs/vim hackers who can write their own libraries and don't need no stinking IDE.
IDE support is getting there with OCaml. In VSCode, it's not as good as TypeScript but it's usable.
> Except you now have a significantly smaller advantage because a lot of the languages have caught up and while your god tier programmers can write their own custom serialization library, that's still more developer time than using serde.
There are a few libraries that you can use. Serde also tends to make the already long compilation time blow up.
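For context, the derive-based workflow being weighed against a hand-rolled serializer looks roughly like this (a sketch, assuming the usual serde + serde_json crates with the derive feature enabled; the Order type is invented for illustration). The proc-macro expansion behind the derives is also a big part of why compile times blow up.

    // Sketch of the derive-based approach; the macro expansion behind
    // these derives is part of what inflates compile times.
    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize, Debug)]
    struct Order {
        id: u64,
        symbol: String,
        qty: u32,
    }

    fn main() -> Result<(), serde_json::Error> {
        let order = Order { id: 1, symbol: "ABC".into(), qty: 10 };
        let json = serde_json::to_string(&order)?;       // serialize
        let back: Order = serde_json::from_str(&json)?;  // deserialize
        println!("{} -> {:?}", json, back);
        Ok(())
    }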
I was writing code in the 90's, and my first IDE was Turbo Basic in 1990 precisely, followed by Turbo Pascal alongside Turbo Vision and Object Windows Library.
Eventually I also got into Turbo C, Turbo C++, and then upgraded myself into Borland C++, used Visual Basic 3.0 in 1994, and a couple of years later Visual C++ 5.0 was my first MS C++ IDE.
Mac OS MPW was an IDE and stuff like AMOS and DevPac were IDEs as well.
Java IDEs started to show up around 1998, like Visual Cafe, and later the first Eclipse, which was ported from IBM's Visual Age.
Visual Age, which were the IDEs for Smalltalk and C++ from IBM for OS/2 and AIX.
The only group that wasn't using IDEs were the UNIX folks; thankfully XEmacs was around to bring back some sanity when I had to code on UNIX.
I'm curious about these early IDEs. My knowledge of 90's programming is solely from secondary sources. What features did they have? Did they do stuff like automatic renaming or goto definition? Were those features done syntactically or semantically? How fast were they? A common complaint I've read is that people could type faster than an IDE could keep up, which is something I rarely encounter these days.
Clojure has access to the Java library ecosystem and works beautifully in IntelliJ. That may be one of the best ratios of language properties to tooling quality.
These days, you don’t need to build an IDE from scratch - you can just build some language server support for your language and plug into existing IDEs. It’s much less work!
Also, as an aside that’s not really germane to the argument, it’s possible (and IMO preferable) to write code without using an IDE. It forces you to write code that’s broken up into contexts small enough to fit in human working memory. This pays off in the long run. However, once people in your company start writing code with an IDE, it requires more context and becomes almost impossible to edit without an IDE.
Haskell is another language besides OCaml that doesn’t have a ton of MEGACORP backing but nonetheless forms the basis for several very successful companies and groups within MEGACORPs, and where many developers prefer the experience of using it despite not having a $10M IDE like you would for Java. And speaking of that, all the ludicrously expensive and complicated IDEs mostly suck anyway!
> you'd have to recruit people who still write code like in the 90's: emacs/vim hackers who can write their own libraries and don't need no stinking IDE
I wasn’t writing code in the 90s, but I’ve worked at places like this and I would take it any day over “people who copy/paste from stack overflow and get lost without autocomplete” - unless the novel alternative you have in mind is something better than that?
> Which is why a lot of people are moving to Rust I suppose. You still get the hip factor but responsibly
Rust does seem to be in the Schelling point of “better enough than C++ to get us off our asses, but not so much better as to scare the devs”. Not sure I’d say it’s especially “responsible” though.
Language servers are certainly a big improvement. However, there's a difference between "there exists" and "this is a community priority". In some language communities the developers use IDEs, they like IDEs and they make IDEs a priority. In other communities there are one or two people who like them, kinda, and maintain a plugin. Let's put it this way: I don't see OCaml moving to a query-based compiler anytime soon.
I'm not sure I agree with the no-IDE part. It feels very "Plato complaining about the invention of writing". Human working memory is quite narrow and quite fickle. If you step away from a codebase for a while, or you're learning it for the first time, an IDE can really help with your bearings. I agree that code should be broken up into contexts and well organized, but I don't think the editor should be the forcing function here.
And IDEs are great! Goto definition that works even in complicated situations unlike ctags; inline type inference; generating imports. I don't begrudge someone using emacs or vim (2nd generation emacs user) but I gotta say, IDEs work wonders.
As for who I'd recruit, I think it's a false dichotomy to say that the alternative is "people who copy/paste from stack overflow and get lost without autocomplete". There's plenty of great, legit developers who can write you a custom serialization library in nano, but choose to use IntelliJ and packages because it gets the job done.
I don't mean to denigrate the OCaml, Haskell or Lisp communities. I wish more people wrote them! But I also recognize that these languages went from secret weapons to, well, a valid option in a trade off matrix. I'd still love to work at Jane Street, although between this comment and my application record, that may be a pipe dream.
> Also, as an aside that’s not really germane to the argument, it’s possible (and IMO preferable) to write code without using an IDE. It forces you to write code that’s broken up into contexts small enough to fit in human working memory
No, complex programs by definition don’t fit into human working memory. Even with best practices, FP, whatever, function composition alone can’t always raise the abstraction to the level of the requirements, so in the end you will end up with larger chunks of code for which you will have to use code navigation features; for that I personally prefer an IDE, but that is subjective.
> No, complex programs by definition don’t fit into human working memory.
If you write your code in the right way they don’t have to. That’s the point.
You shouldn’t need to comprehend your entire program at once to work with it.
> Even with best practices, FP, whatever, function composition alone can’t always raise the abstraction to the level of the requirements
Function composition isn’t the pinnacle of abstraction. We have many other abstraction techniques (like rich types, lawful typeclasses, parametric programming, etc.) which allow for any given subcomponent of a program to be conceptualized in its entirety.
> If you write your code in the right way they don’t have to. That’s the point.
I recommend you try working through some equality proofs in Coq, first with and then without coqtop / Proof General. I think you may change your mind about this rather rapidly. And many proofs get much more complex than that.
I’ve used (and developed) plenty of proof assistants. Proofs are one very narrow domain where automation is basically a no-brainer. You don’t really lose out from the proof having high semantic arity. With normal code, you do lose out.
I dunno. I think the line between "normal code" and proofs is a lot blurrier than many people make it out to be, and I don't think there's a huge advantage to asking people to run a type checker or inference algorithm in their heads even if you have managed to encode all your program's invariants in the type system (which is impossible or impractical in many languages). I say that as someone who doesn't use an IDE except when I'm forced to: I know I lose out on a lot of productivity because of it, and I don't find that removing the IDE forces me to develop better or more concise code.
I can work without autocomplete, but I find my productivity is about 20% when I have to go back and forth with the compiler on syntax errors and symbol names, like back in college. Working professionally with an IDE that just doesn't happen.
PG seems to be the only person who has built a successful business using Lisp. While thousands of successful companies are using C++/Java/.. etc. why do you think so few companies have succeeded with Lisp?
While it's true that there are lots of bad libraries and many libraries are easy to write yourself, you really isolate yourself from the broader ecosystem by doing this. Vendors and 3rd party solutions are now much harder to use, and when you do use them you'll probably only use them at a surface level instead of using all their features.
And some things are so mature and vast you don't have a chance of building them yourself. If what you are doing can be done well in very mature ecosystems like React or PyTorch, the effort to recreate them will dwarf the time spent on important work.
> If what you are doing can be done well in very mature ecosystems like React or PyTorch, the effort to recreate them will dwarf the time spent on important work.
Sometimes that's effort that's not really necessary. At work we had a team build a dashboard with React and a few graphs recently. It clocks in at 2000 unique dependencies. That's not a typo: it's two thousand dependencies for a few graphs. Reimplementing all of that would take many man-years of work, but I think it wouldn't be necessary in the first place. Chart.js uses no runtime dependencies and could probably fill most of our needs. Chart.js is 50k of JavaScript, which is a lot, and probably more than we need. I don't know how much time it would take to write a reimplementation with just the API we need, but I think it's doable.

Why would we want to do that? Because those 2000 dependencies are a liability. Last time I checked, 165 of them were looking for funding. It would be easy to imagine a malicious actor paying a few people to contribute to low-profile but widely used JS libraries, and take over the maintenance when the original maintainer becomes tired. I don't know if this is a worse liability than developing our own chart library; I don't know much about security and the tradeoffs involved.
All of that to say, isolation from the broader ecosystem may be a good thing.
Yes, it's not free in the sense that this is paid developer time, and also a delay before actual production deployment.
As long as learning the basics of a 3rd-party library takes a relatively short time, those who use it have an advantage: they ship faster. Certainly mastering it may take as long as writing one's own. But you can do that while writing more production code and shipping features. Also, you get improvements made by other people for free (because likely you're using an OSS library anyway).
That's interesting, thanks for sharing your experience. Sometimes I wonder how much interesting experience I miss by mostly using things people already built. Sure, if you're just trying to get something done it's probably faster, but on the other hand spending more time gives you experience.
My experience as well. I worked in industries (Games development) where no open-source or proprietary solutions existed for what we needed. So we built it ourselves. It wasn’t that hard and we never had problems with it because it did 100% exactly what we needed and nothing else. If any new feature was needed we simply added it. I would routinely get an ask from a programmer, artist, musician or game designer and would have it implemented and a new version ready for them within a day or two. The productivity gains were immense.
Multi-threading can often be handed off to the OS in the form of just running more processes. So in most cases there is really no need for the language to handle it.
It's still nice to have shared memory, especially in a functional language where, with a lot more immutability, it isn't as big a price to pay (i.e. it's more concurrency-safe). Especially if you're sharing in-memory caches and the like.
I've seen this save tons of dollars in cloud setups (hundreds of thousands) in my career, in more than one place. Safe multi-threading was marketed as part of the advantage of functional programming to many people, which IMO is why I find it strange that OCaml didn't have it for so long. Lightweight threads (e.g. Tasks, Channels, etc) also use fewer system resources than processes.
You can do anything with anything but you usually pay some price of inefficiency to do so. That may be small, but sometimes it is large enough to matter.
Agreed; especially since it didn't have it originally. I'm sure some compromises were made to do it in a way that fits into the execution model and doesn't cause regressions in older code. They are both good languages for sure, which makes sense because one is derived from the other.
F#, through itself and .NET, has had the equivalent functionality for many years however (Threads, Tasks, Asyncs, Channel libraries, etc), being one of the first languages (before C#) to have an async construct. I imagine it will take some time for OCaml's ecosystem to catch up API-wise with this where it matters for application development. Looking at OCaml's recent release notes I see work to slowly migrate multicore into OCaml, but the feature itself hasn't landed yet? Think it comes in 5.0.
I have to say though, F# performance is quite good these days too, from working/dabbling with both languages, plus the ecosystem is bigger. I would like to see/understand where the OCaml perf advantage is, if there is any, for evaluation. The CLR has really good performance these days; it has features to optimise some things (e.g. value types make a big difference to some data structures and hot loops vs the JVM) from my experience, especially if you don't allow unsafe code, for many algos/data structures. For example I just looked at the benchmarks game (I know it has its problems, including badly contributed implementations for niche ecosystems) but it shows .NET Core (and therefore F#) performance is on the same scale as OCaml, at times beating it (https://benchmarksgame-team.pages.debian.net/benchmarksgame/...).
If you don't need to share any data between processes, sure. Otherwise, you will find yourself forced into a pattern that is:
* extremely costly
* hard to make portable cross-platform
* hard to make secure (since data external to the process can't be trusted like shared-memory data can).
* requires constantly validating, serializing, and deserializing data, wasting developer time on something irrelevant to their problem domain
* adds a bunch of new failure modes due to the aforementioned plus having to worry about independent processes crashing
* fails to integrate into the type system of your language (relevant for something like Rust that can track things like uniqueness / immutability for you)
* cannot handle proper sharing of non-memory resources, except on a finicky case-by-case basis that is even more limited and hard to make cross-platform than the original communication (so now you need to stream events to specific "main" processes etc., which is a giant headache).
* can cause significant additional performance problems due to things like extra context switching, scheduler stupidity, etc.
* can result in huge memory bloat due to needing redundant copies of data structures and resources, even if they're not actually supposed to be different per thread
Make no mistake here: when you're not just running multiple copies of something that don't communicate with each other, "just use processes" is a massive pain in the ass and a performance and productivity loss for the developers using it, as well as hurting the user experience with things like memory bloat. The only reason to go multiprocess when you have threads is to create a security boundary for defense in depth, like Chromium does - and even they are starting to reach the limits of how far you can push the multiprocess model before things become unusably slow / unworkable for the developers.
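To make that concrete, here's a minimal, hypothetical Rust sketch (cache contents and worker count invented): threads can share one read-only structure directly, with the compiler's Send/Sync checks covering the "type system integration" point above, while a multiprocess design would instead have to serialize the same data and ship a validated copy to every worker.

    use std::collections::HashMap;
    use std::sync::Arc;
    use std::thread;

    fn main() {
        // One read-only cache, built once.
        let mut cache = HashMap::new();
        cache.insert("EURUSD", 1.08_f64);
        cache.insert("GBPUSD", 1.27_f64);
        let cache = Arc::new(cache); // shared and immutable, no per-worker copies

        let handles: Vec<_> = (0..4)
            .map(|i| {
                let cache = Arc::clone(&cache);
                // Send/Sync checks are how the compiler confirms this
                // sharing is safe.
                thread::spawn(move || {
                    let rate = cache.get("EURUSD").copied().unwrap_or_default();
                    println!("worker {} sees rate {}", i, rate);
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }
        // With separate processes, this map would instead have to be
        // serialized, sent (or duplicated) per worker, and re-validated
        // on the other side.
    }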
There isn't really a need for anything, which is kind of the point. You can use a random language that's missing a ton of functionality and you can probably make it work.
Kind of, but OCaml-to-F# is like "Dark goes from the Prius of languages nobody uses, to the Tesla of the .NET ecosystem." The aims (ecological in the car case) of the purchaser are similar, but the car [at least in its ecosystem] is sleeker and now the roads are a bit different.
On the other hand this is like "Wallaroo moves from the Cadillac of languages nobody uses, to the Chevy Equinox of languages nobody uses." Like, totally fine, you grew older and had kids and needed an SUV to keep up with home life, no shame in that... but there is a wistful "ah when we were young" to the transition, no?
I know a little about cars and I'm still confused - I think I just have a very different view of the relative "it factors" between each pair of languages. De gustibus...
Yeah I mean that's fair too. My impressions are poorly formed, but the analogy is that both F# and OCaml are based on functional programming, which to my mind takes a step back from the "imperative programming -> OOP -> shared state multithreading" history into an alternative history; I'm phrasing this as them being "electric cars" for the first half. OCaml is not really the swankiest of swank in the alt-languages community, so I chose a Prius to be like "what's a car that folks know as eco-friendly but that's not very prestigious?" ... meanwhile F# is like "we are THE functional programming language of the .NET world, come over here, we're cool and slick and all-electric", hence the Tesla comparison.
Pony is like the far-off "ah maybe someday I'll be able to use that at work, maybe for some little experiment" thing and it reminded me of going to a dealer and being like "let me drive the Caddy, you know I'm not gonna buy it, I know I'm not gonna buy it, but I just wanted to live a little today." I don't have any particular experience with Chevy SUVs so I just chose one at random, the point was that Rust is like a "look we're just trying to be C with explicit contracts between parts that allow for memory safety" type of language, very practical and chunky and like people love it don't get me wrong... just, it's an SUV. It's less opinionated and more just "let's get from point A to point B safely."
I think it's one of those cases where using metaphors doesn't help clarify the thought, and instead obscures it. Rust shares a lot with OCaml, and so with F#. F# is "the" functional programming language of the .NET world, but it's also because it's the only one, and it's a second class citizen.
I will also add that Rust is not trying to be C (and neither trying to "replace C"). It's here to offer an alternative, that in some cases makes more sense than sticking with C. C code means a lot of things. For example, some people code in C89 because they find some kind of purity in it. You're never going to get that from Rust. For some people, it means fast and secure code, like with Python's cryptography. That's a place where Rust can be used. For some other people, it's C because that's the only thing that's allowed by some authority. Again, Rust isn't going to fit here until/if it's allowed. I think in general, trying to reason in terms of use case leads to better comprehension than trying to think in languages.
But outside of that, the move was basically the same. They found another language that's very similar, but that has a way bigger ecosystem.
> Rust shares a lot with OCaml, and so with F#. F# is "the" functional programming language of the .NET world, but it's also because it's the only one, and it's a second class citizen.
No, F# has a lot more OO than OCaml, and there’s a significant difference in features (e.g. active patterns in F#, functors in OCaml). I liked what I used of F#, but for any serious program it’s more multi-paradigm than functional, since you’ll end up doing a lot of OO.
In my experience that isn't quite true. You usually use OO for IO/interop if a C# library is being used, then it's module code for the most part all the way down (e.g. in ASP.NET Core, define a class for the controller, then have it interop with F# FP code for the most part). With some newer F# frameworks you don't even have to do that these days.
Having some experience with large-scale F# codebases, it's rare that you define a class compared to records, unions and functions. 100's of functions, 50-100 small types, and 1-2 classes is roughly the ratio I've seen for a typical microservice (YMMV).
I have a very different perception of OCaml compared to you (compared to most people?)
When I think of OCaml, the concepts that come to mind are brilliant French computer scientists and hedge funds. A practical Haskell. When I think of F#, I think of... .NET, bright enterprise programmers who want to work with a tolerable language, and that's about it. If forced to name a user of it, I'd say "uhh, no idea... Maybe Stack Overflow?"
(That's entirely aside from the relative merits of both languages, which are leaps and bounds ahead of most OOP and functional languages alike.)
I've seen both being used by hedge funds and finance/banks actually. A lot of F# use is anecdotally closed-source finance (this has changed now, I think), which is why IMO it didn't have as much open source visibility or as many people showing its use. OCaml is probably in a similar boat. Having hidden use cases, however, means breaking changes in the language are harder to judge.
OCaml's object system is better than what .NET offers, IMO. On one hand, it enforces clear interface/implementation separation ("classes aren't types"), while structural typing for objects makes this arrangement easy to use in practice. But then there are also powerful features such as multiple inheritance.
The biggest quirk coming from something like Java or C# is that you can't downcast. But classes can still opt into this ability (by using virtual methods + extensible variants) where it makes sense; and in most cases, the presence of downcasts means that a discriminated union is probably a better fit to model something than a class.
It's things like that that make OCaml what it is. It supports OO even if you don't use it all the time, and it supports imperative constructs. I remember reading in "Le langage Caml", by Xavier Leroy, that you should use an imperative loop over recursion if the loop is simple, and keep recursion for complex use cases, where it makes sense. That's not something you often hear from functional programmers, probably because the ones we hear from are more obsessed with purity than practicality. But it's a great way to show OCaml's values.
Was Pony really a Cadillac? Cadillacs are supposed to be comfortable and large (and not fast). Part of the design (if you believe Robert Virding) of Erlang's process system is that it makes error handling (an important part of being a developer) very comfortable, because it just does the sane thing with little or no effort. Pony, by contrast, obsessively required you to handle errors and shoehorned you into a theory-inspired actor system, missing the point that Erlang processes are practical units of composable failure domains, not theoretical actor concurrency units.
Yeah I was worried about that part giving offense, but the sentence became a lot longer when I shifted it from “language that nobody uses” to “language that I can't convince any employers to let me use because they are worried about hiring problems” lol
It also makes me think of CircleCI where they stayed in Clojure for quite some time - it really didn't have that much need for libraries (and the ones it did need, such as AWS, were provided by Java).
When evaluating whether to use a non-mainstream language, the rule I use now is:
- will I need to interact significantly with the outside world in a way that can only be done with libraries?
- if not, do I gain a lot with this non-mainstream language?
That contrasts against how I used to do it, where I viewed it as a trade-off between the ecosystem and the advantage of the non-mainstream language.
It's not just the libraries, it's the tools, and those are a much bigger lift. I noticed it with Scala dropping Eclipse support and some users shifting to Kotlin; you couldn't have a clearer example of a strictly worse language, but JetBrains and Google are supporting it, and the difference between a good IDE and not is huge. And when I tried to step up and fix the Scala Eclipse plugin myself and saw what kind of byzantine tower of confusion goes into making an IDE I started to have a bit more sympathy for that kind of decision.
> you couldn't have a clearer example of a strictly worse language...
A language that does NOT have as many features and that limits what you can do more is NOT a strictly worse language. You can never say a language is worse than another in general, anyway: it's always relative to what usage you have in mind. Your apparent disdain for a language just on the basis of its language features shows that you have a lot to learn about language economics, mentioned in other threads here.
On the contrary, it takes zero knowledge or experience to say "hurr durr use the right tool for the job"; anyone who has real knowledge and experience should have actual views on which things are good and bad overall.
I’m not one who values languages based on the number of features alone (otherwise C++ or C# would be my go-to), but on the synergy between them. I think in this case, Scala is a really elegant language with many features that all come from a few simple-to-understand primitives, for example "everything is an object", which creates a highly coherent language.
This is in contrast with Kotlin, which tries to gain popularity by including many features, but always feels like "just syntactic sugar over Java" to me.
> Kotlin is strictly worse if you value language features above all else. In that, there are several (several!) features it doesn't have.
Could you please elaborate on that? It was my understanding that Kotlin did everything that Java did (or any JVM-based language) but actually added first-class support for basic features missing from Java that required magic sauce like Lombok to fill in the gaps.
It's never had working error highlighting for Scala. I filed a bug where using a parameterized member type was incorrectly highlighted as an error; in the next version, using a parameterized member type was never highlighted as an error. I filed a bug where a case that should have been an error wasn't flagged; in the next version, my original bug was back. I gave up at that point.
Heh, that's a good reference I hadn't read before. I feel like maybe Clojure and OCaml were strikes one and two, followed by a home run with the F# and dotnet call. Prob feeling real smug right now with AOT, WASM, and hot reload being first-class citizens haha.
How depressing. Probably true. But depressing nevertheless. Bigger frameworks, more complicated libraries, deeper multi-tiered tooling: all of these things that we call the ecosystem reduce access to general-purpose programming and creativity. We've created a bureaucracy of execution so complicated that we need vast amounts of funding to keep us at the tiller, doing the bidding of e-commerce apps. It's like the founding fathers of programming have been reborn as Sir Humphreys.
This is how it is in every applied field. It's not as if 2x4s fall from trees and we use them in our construction. The 2x4 is a specific manufacturing output that's used as an input for lots and lots of other things. It's the same way when it comes to software. Just like when doing cabinetry nothing is stopping you from processing wood yourself, likewise nothing is stopping you from rolling your own frameworks. But then you'll find you're closer to selling a Morton Chair [1] than regular furniture. (And FWIW that might be fine for your problem domain, especially if you're in the market of beautiful, high-value, handmade chairs with long lead times.)
Where this fails for me is that the 2x4 is a standard. And it's simple. And it's universally acceptable. I can send my wife and my kids alike to buy one from who knows where with little to zero instruction. They can work with it with ease. The "ecosystem" fades into the background. But modern software ecosystems are the exact opposite. You spend more time trying to learn/master/navigate the ecosystem than you do working with the metaphorical 2x4. To make matters worse, getting a 2x4 has been temporally stable for a long time. Not much has changed since my grandfather could send me to the store to buy a 2x4 on my own. But software ecosystems evolve and migrate weekly. Your argument demonstrates that all developed fields create ecosystems. But it does not address the issue that not all ecosystems are equal. Some are good. Some are bad. It's my opinion that modern software ecosystems look more like the British Civil Service than the ecosystem that produces 2x4s.
Let me refer back to Brooks's famous paper: there are no silver bullets. There has been no order-of-magnitude increase in productivity since the appearance of the first managed languages, which is multiple decades ago now. According to him, the only way we can somehow "cheat" our way into more productivity is through ecosystems, that is, standing on the shoulders of giants.
Like, the only reason our computers are even remotely working fine is that great deal of abstraction.
If you're doing things on servers that you manage yourself and not using lots of saas, you can probably still do things in an obscure programming language.