Hacker News
Why we built Vade Studio in Clojure (vadelabs.com)
131 points by puredanger 15 hours ago | 60 comments





Maybe this architectural approach would be challenging in Java or Go, but the style of immutable data and not wrapping everything in classes is very doable in most languages. We enforce "no mutation of data you did not just instantiate" at Notion, and use TypeScript's powerful type system with tagged union types to ensure exhaustive handling of new variants, which I really miss in languages that don't have it (Go).
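For readers who haven't used them, the tagged-union style described above can be sketched like this (the types and names are illustrative, not Notion's actual code):

```typescript
// Tagged (discriminated) union: each variant carries a literal `kind` tag.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "rect":
      return s.width * s.height;
    default: {
      // Exhaustiveness check: if a new variant is added to Shape,
      // this assignment fails to compile until it is handled above.
      const _exhaustive: never = s;
      return _exhaustive;
    }
  }
}
```

The `never` trick is what gives "exhaustive handling of new variants": adding a third variant to `Shape` turns every unhandled `switch` into a compile error.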

I guess the major advantage for Clojure with this style is that the "persistent" data structures end up sharing some bytes behind the scenes - it's nice that the language is explicitly designed around this style, rather than TypeScript's chaotic Wild West kitchen-sink design. What I don't understand is the advantage for "state management". Like, you build a new state object, and then mutate some pointer from prevState to nextState... that's what everyone else is doing too.

There are times though when it’s nice to switch gears from function-and-data to an OO approach when you need to maintain a lot of invariants, interior mutability has substantial performance advantages, or you really want to make sure callers are interpreting the data’s semantics correctly. So our style has ended up being “functional/immutable business logic and user data” w/ “zero inheritance OO for data structures”.
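A hypothetical sketch of the second half of that style - a zero-inheritance class used only to guard an invariant behind a small mutable interior (the `Account` example is invented for illustration):

```typescript
// Zero-inheritance OO: the class exists solely to ensure callers can
// never observe a violated invariant (balance never goes negative).
class Account {
  private balance = 0; // interior mutability, never exposed directly

  deposit(amount: number): void {
    if (amount <= 0) throw new Error("deposit must be positive");
    this.balance += amount;
  }

  withdraw(amount: number): void {
    if (amount <= 0 || amount > this.balance) {
      throw new Error("invalid withdrawal"); // invariant preserved
    }
    this.balance -= amount;
  }

  getBalance(): number {
    return this.balance;
  }
}
```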

Whenever I read some open source TypeScript code that uses the language like it's Java, with `class implements ISomething` ruining cmd-click go-to-method, or with an elaborate inheritance hierarchy, it makes me sad.


> What I don’t understand the advantage for “state management”. Like, you build a new state object, and then mutate some pointer from prevState to nextState… that’s what everyone else is doing too.

Clojure's real superpower is its reference types (in particular the atom). Rich does an excellent job explaining them in this video: https://www.youtube.com/watch?v=wASCH_gPnDw&t=2278s
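For those unfamiliar with atoms, the model can be sketched in TypeScript (a simplified, single-threaded illustration; Clojure's real atom does an atomic compare-and-swap and may retry the function under contention):

```typescript
// A toy "atom": one mutable reference cell over immutable values.
// State transitions happen only via pure functions passed to swap.
class Atom<T> {
  constructor(private value: T) {}

  deref(): T {
    return this.value;
  }

  swap(f: (v: T) => T): T {
    // In single-threaded JS a plain assignment suffices to illustrate;
    // the point is that mutation is confined to this one reference.
    this.value = f(this.value);
    return this.value;
  }
}

const state = new Atom<{ count: number }>({ count: 0 });
state.swap((s) => ({ ...s, count: s.count + 1 })); // prevState -> nextState
```

The difference from "everyone mutates a pointer" is that the *only* sanctioned transition is a pure function from old value to new value, so observers always see a consistent immutable snapshot.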


Clojure is a lot of fun to tinker with, but man… I love my static types. I think I’d hate to work on a large codebase in Clojure and constantly be wondering what exactly “m” is.

One of the myriad reasons why Common Lisp is far superior to Clojure is the ability to

  (declare (type Integer m))

Typed Clojure is available as a library.

You wouldn't, because of the REPL. You would jack in and know exactly what m is.

With static types, I don't have to use the REPL at all; I can simply hover over it in my editor.

With a REPL-connected editor (and most have a way to do this), you can simply hover over it in your editor as well. Even though most languages can have a REPL today, few integrate it in the development experience the way lisps do.

Interesting story. I am not entirely convinced that all credit should go to the programming language here, though.

My theory is that communicating abstractions is hard. If you work on your own, or in a (very) small team, you can come up with powerful abstractions that allow you to build amazing systems, quickly. However, sharing the underlying ideas and philosophy with new team members can be daunting. As systems grow, and mistakes are made, it becomes more and more likely that you run into serious problems.

This may also be why Java and similar object oriented programming languages are so successful for systems that have to be maintained for ages, by large teams of developers. There are but few abstractions and patterns, and it does not allow you to shoot yourself in the foot, nor to blow your whole leg off. Conversely, this may also be why complex frameworks, such as Spring, are not always so nice, because they introduce (too?) powerful abstractions, for example through annotations. It may also clarify why more powerful languages such as Scala, Common Lisp, Smalltalk, Haskell, etc, consistently fail to pick up steam.

Another theory is that not every developer is comfortable with abstract concepts, and that it simply takes a team of smart people to handle those.


Another theory is that C inspired languages are very mechanistic and easier to visualize. Same goes for OOP with the Animal->{Cat,Dog} explanation. But that's just surface level and once you get to the difficult part (memory management in C and software design in Java) where the ability to grasp abstractions is required, we're back to square one.

I believe once you've got to some point, dealing with abstractions is a way of life. It's either in the language, the technical requirements, or the software design.


"Objects are the way we think" is one of the largest design traps ever laid in software development. Because if you design your program like it, unless in certain special circumstances, it will be shit.

> It may also clarify why more powerful languages such as Scala, Common Lisp, Smalltalk, Haskell, etc, consistently fail to pick up steam.

Languages need a window of opportunity, and many of those squandered it.

Clojure won over Scala because, at the time people were looking for an alternative JVM language, Clojure was more of a departure from Java and seemed to have better tooling (compile times and syntax support) than Scala.

Smalltalk and Common Lisp wasted their moment by not being cheap/free to people using micros in the 1980s.

Lisp, especially, very much wasted its moment with micros. The fact that no Lisper had the vision to dump a Lisp onto the bank switched micros (which makes GC really easy and useful) of the mid to late 1980s is a self-inflicted bullet wound. Lots of us hated doing assembly language programming but had no real alternative. This was a loss born of pure arrogance of Lispers who looked down on those micros as not being "real machines".

I weep for all the hours I wasted doing assembly language as a teenager that I could have been writing Lisp. How much software could have been written that would have been <100 lines of Lisp if only someone had written that tool?


Lisps really only come into their own above a certain size/amount of resources. For early Lisp the PDP-11 with 2-4 MB RAM was considered to be nice. There were some Lisp implementations for the PCs but they suffered from the need for compatibility with older hardware.

...in what sense has Clojure actually won over Scala?

I've seen way more Scala in companies over the last ~5 years, and have the impression that its ecosystem is more robust. Not uncommon for greenfields. It's been longer than that since I last encountered an active Clojure codebase. This is from a data engineer's perspective.

Clojure may be more popular for some niche of app startups perhaps? We are in different "bubbles" I suppose.

EDIT: Data disagrees with you also.

https://www.tiobe.com/tiobe-index/

https://redmonk.com/sogrady/2024/09/12/language-rankings-6-2...

https://survey.stackoverflow.co/2024/technology#1-programmin...


I can't really speak to modern stuff, and it is certainly possible my memory is faulty. Scala was a PITA in the early 2000s and you were generally better served with something else if you could move off the JVM. Clojure came in about mid 2000s and seemed to be what a bunch of people stuck on the JVM but doing data processing were desperate to find.

My feeling was that a lot of Clojure folks moved on as the data processing stuff moved on from Java/JVM.

My impression has been that JVM-based languages have effectively been on a steady general decline for a while now. Java has fixed a lot of its issues; Kotlin gave the Java expats somewhere to go. And Javascript/Node along with Go drained out the general masses who didn't really want to be on the JVM anyhow.

However, it is interesting that Clojure has effectively disappeared in those rankings.


> Lots of us hated doing assembly language programming but had no real alternative.

I kind of fail to see Lisp as an alternative to assembler on mid 80s micros.

Though, there were several cheap Lisps for PCs...


The bank switched memory architectures were basically unused in mid 80s micros (C128, CoCo3, etc.).

Lots of utility software like spell checkers and the like still existed. These would be trivial to implement in Lisp but are really annoying in assembler.

Lisp would have been really good relative to BASIC interpreters at the time--especially since you could have tokenized the atoms. It also would have freed people from line numbers. Linked lists work well on these kinds of machines. 64K is solid for a Lisp if you own the whole machine. You can run over a bank of 16K of memory for GC in about 50 milliseconds or so on those architectures.

Had one of the Lisperati evangelized Lisp on micros, the world would look very different. Alas, they were off charging a gazillion bucks to government contracts.

However, to be fair, only Hejlsberg had the correct insights from putting Pascal on the Nascom.


> Lisp would have been really good relative to BASIC interpreters at the time

I see no evidence for that. Lisp was a pain on tiny machines with bad user interface.

> 64K is solid for a Lisp if you own the whole machine.

I had a Lisp on an Apple II. It was a useless toy. I was using UCSD Pascal and Modula 2 on it. Much better.

I had Cambridge Lisp on an Atari with 68k CPU. It was next to unusable due to frequent crashes on calling FFI functions.

The first good Lisp implementation I got was MacScheme on the Mac and then the breakthrough was Macintosh Common Lisp from Coral Software.

> Had one of the Lisperati evangelized Lisp on micros

There were articles for example in the Byte magazine. Lisp simply was a bad fit to tiny machines. Lisp wasn't very efficient for small memory. Maybe with lots of work implementing a tiny Lisp in assembler. But who would have paid for it? People need to eat. The tiny Lisp for the Apple II was not usable, due to the lack of useful programming environment.

> Alas, they were off charging a gazillion bucks to government contracts.

At least there were people willing to pay for it.


> There were articles for example in the Byte magazine.

And they were stupid. Even "good" Lisp references didn't cover the important things like hashes and arrays. Everybody covered the recursive crap over and over and over ad nauseam while people who actually used Lisp almost always sidestepped those parts of the language.

> I had a Lisp on an Apple II. It was a useless toy. I was using UCSD Pascal and Modula 2 on it. Much better.

And yet UCSD Pascal was using a P-machine. So, the problem was the implementation and not the concept. Which was exactly my point.

> At least there were people willing to pay for it.

Temporarily. But then it died when the big money went away and left Lisp all but dead. All the while all the people using languages on those "toys" kept right on going.


Powerful abstractions tend to come back and bite you a few years later when the industry trends shift and everyone else starts using a different set of abstractions. Now that small team is stuck maintaining those custom abstractions forever and is unable to take advantage of new abstractions from vendors or open source projects. So their progress stagnates while competitors race ahead. I've been on the wrong side of that before.

Clojure has some interesting advantages - which doesn't mean other languages might not have their own.

Rapid application technologies, methodologies, or frameworks are not unusual.

I know some wonderfully productive polyglot developers who by their own choice end up at Clojure. It doesn't have to be for everyone.

I wouldn't rule out that Clojure deserves the credit. And I don't think it's a good idea to discredit Clojure without having tried it myself.

I do hope someone with extensive Clojure experience can weigh in on the advantages.

How easy something remains as a codebase grows is something to really consider.

This product regardless of how it's built is pretty impressive. I'd be open to learning advantages and comparisons without denying it.


I'm curious if Elixir could provide a similar development environment?

Seems like many similar capabilities, like a focus on immutable data structures, pure functions, being able to patch and update running systems without a restart, etc.


For the most part, yes.

CIDER and nREPL are better tech than IEx, though. I live in both and Clojure is much more enjoyable.


I came to the opposite conclusion for the following reasons:

1. IEx provides a robust and interactive debugging environment that allows me to dig into whatever I want, even when running in production. I've never lost state in IEx, but that happens fairly often in CIDER and nREPL.

2. IEx uses Elixir's compilation model, which is a lot faster than CIDER and nREPL, leading to faster debugging cycles.

3. IEx is tightly integrated with Elixir whereas Clojure's tools are more fragmented.

4. IEx doesn't carry the overhead of additional middleware that CIDER and nREPL do.

I'm also not a fan of JVM deployments, so I've migrated all my code away from Clojure to Elixir during the past 10 years.


> Today, we're building Vade Studio with just three developers – myself and two developers who joined as interns when in college. (...) Here's what we've accomplished: (...)

In how many person-hours/days? It's hard to know whether the list is long or short knowing only that calendar time should be multiplied by three to calculate the people-time spent...


> Because Clojure treats data as first-class citizens, we could build our own lightweight conflict resolution system using pure functions that operate on these transactions.

What does it mean to say Clojure "treats data as a first-class citizen"? I understand that FP treats functions as first-class citizens, but the statement seems to mean something different.

OOP generally "hides" data as internal state of class instances. Everything is private unless expressed as a method on an object.

The two sentences around the one you quoted should answer the question as well:

> With Clojure, we modeled the entire collaboration system as a stream of immutable data transformations. Each user action becomes a transaction in our system.

And:

> When conflicts occur, our system can merge changes intelligently because we're working with pure data structures rather than complex objects.

Whereas OOP languages combine behavior and data into a single thing (classes with methods model behavior and hide state, i.e. data), functional languages separate them: functions model behavior, and data is treated more like input and output rather than "state".

In particular with Clojure, data structures tend to be immutable and functions tend not to have side effects. This gives rise to the benefits the article talks about, though it is not without its own drawbacks.
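As a rough illustration of what "pure functions over plain data" buys you for merging, here is a hypothetical sketch (the transaction shape and last-write-wins policy are invented for the example, not Vade Studio's actual format):

```typescript
// Each user action is a plain-data transaction, not an object with behavior.
type Txn = { id: number; path: string; value: unknown };

// A pure merge: later transactions win per path; no hidden state involved.
// Because inputs are plain immutable data, this is trivially testable.
function merge(base: Txn[], incoming: Txn[]): Txn[] {
  const byPath = new Map<string, Txn>();
  for (const t of [...base, ...incoming]) byPath.set(t.path, t);
  return [...byPath.values()].sort((a, b) => a.id - b.id);
}
```

Contrast with objects that hide state: there, a merge has to reach into each object's internals or rely on every class implementing its own merge protocol.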


They most likely refer to homoiconicity [1], as Clojure is a dialect of Lisp. However, it's hard to say for sure, and maybe they were simply referring to the built-in syntax for maps, lists, etc.

[1]: https://en.wikipedia.org/wiki/Homoiconicity


Is there a technical reason I can't sign into Studio with email? I really try to avoid signing in via other platforms, but I'll consider GitHub if there's some reason it has to be. I'll never sign into a service with Google.

Agreed. If a google account goes away, so does the access to all your google authenticated stuff.

I only use google for email logins for services I don't take seriously and am willing to lose.


@OP "Model our domain as a graph of attributes and relationships" and "generate resolvers". I'm curious what your model looks like so that you are able to "generate resolvers"? I had looked into using Malli as the model, but curious what route you took.

I think these words will make more sense in the context of Pathom.

https://pathom3.wsscode.com/


Uncle Bob approved this article!!

Incredible story, I feel like Clojure makes magic. What I like about functional programming is that it brings other perspectives on how things CAN work!!

Congratulations on the life change.


I've built similar systems using Apache Airflow and Temporal, but the complexity was overwhelming. Using simple maps with enter/leave phases for workflow steps is much cleaner than dealing with DAG frameworks.
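The enter/leave shape mentioned here resembles the interceptor pattern popularized by Clojure's Pedestal. A hypothetical TypeScript sketch of the idea (names and context shape are illustrative):

```typescript
// A workflow step is just a map of optional enter/leave functions over a
// context value; execution runs enter in order, then leave in reverse,
// like middleware but as plain data you can inspect and rearrange.
type Ctx = Record<string, unknown>;
type Step = { enter?: (c: Ctx) => Ctx; leave?: (c: Ctx) => Ctx };

function execute(steps: Step[], ctx: Ctx): Ctx {
  for (const s of steps) if (s.enter) ctx = s.enter(ctx);
  for (const s of [...steps].reverse()) if (s.leave) ctx = s.leave(ctx);
  return ctx;
}
```

Because steps are data, a "DAG" degenerates into an ordinary array you can build, filter, and test without a framework.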

I can't find pricing for it. Though it is no-code, there must be a way for me to work with code directly if I wish to do so. No mobile apps. It would be great if you could generate both web apps and mobile apps.

Ultimately, it all comes down to building what you're comfortable with. Additionally, when you're managing large organizations and teams, build with what you can hire for quickly and scale with easily.

Quick (and cheap?) hires are not necessarily good hires. In my experience (and my theory), developer productivity can range from 0.5x to 5x and more, and the developers in the upper range tend to look for programming languages they enjoy, like Rust, Go, Elixir, Scala, and Clojure. They are hard to get if you are on a "boring" stack like Java, NodeJS, or PHP. So you might need to invest some time and money to find the right people, but in the end you get a better deal: even if the salary is twice as much, the productivity gain is even more. Additionally, fewer people means less communication overhead, which is another advantage.

I'm not in the business of cheap. I do care about resource availability though.

I find the opposite to be true, that best and most productive developers tend to be more language agnostic than average, although I'm not saying they don't have their preferences.

Specifically, I find language evangelists particularly likely to be closer to .5x than 5x. And that's before you even account for their tendency to push for rewriting stuff that already works, because "<insert language du jour here> is the future, it's going to be great and bug free," often instead of solving the highest impact problems.


Oddly, I think both are true, at the same time.

I've worked with language zealots and it's awful. Especially the ones with the hardcore purely-functional obsession. But that can apply to almost anything: folks who refuse to use anything but $TECH (K8s, FreeBSD, etc). Zealots like this generally care less about delivering and more about what they get to play with.

Then you have the folks that care about delivering. They're not language agnostic, they have strong opinions. But also: they communicate and collaborate, they actually CARE: they have real empathy for their users and their co-workers, they're pragmatic. Some of these folks have a lot of experience in pushing hard to make things work, and they've learned some valuable lessons in what (not) to do again. Sometimes that can manifest as preferences for languages / frameworks / etc.

It's a messy industry, and it can be extremely hard to separate the wheat from the chaff. But a small team with a few of those can do truly game-changing work. And there are many local optima to be had. Get a highly motivated and gelled team using any of Elixir / TypeScript / Zig / Rust / Ada / OCaml / Scala / Python / etc., and you'll see magic. Yes, you don't need fancy tech to achieve that. There's more than a few such teams writing C, for example, but you're unlikely to see these folks writing COBOL.


Yeah, this has been my experience too. The mentality seems similar to "productivity hackers" who spend more time figuring out the quickest, most optimal way to do a thing than people who just do the thing.

One of the things I've noticed is that people who just do the thing, take note of what's annoying, and fix the most annoying things about a process later on tend to make the most impressive dents in a system or process, especially since they spend time mulling over the idea in their head and so by the time they implement, they aren't "zero-shotting" a solution to what's generally a complex issue.

I agree with you, but also agree with the above: if you're stuck permanently in some tangled codebase with a boring language/style, the really good programmers tend to find something more fun to work on, unless they can bring their new skills/experience to bear. Personally, I'll only go back to doing boring stuff if I can't find a job doing the fun stuff.

hear! hear!

100% agree. You have hit the nail on the head. I went from Common Lisp to Go to now Rust and find that Rust devs are the best so far on average.

There are fewer of them and they ask for more money, but they really are exceptional. Especially Rust devs right now: because there are not a lot of jobs, you only find the most passionate and most brilliant in that space. It's a short window, though, which will close as Rust becomes more popular with startups, so take advantage of it now.


In my case, it was definitely worth becoming uncomfortable for a bit to learn Clojure because I was very uncomfortable with the experience of many of the other languages. It’s also great to have endless backwards compatibility and little reliance on changing external libraries baked in.

Never opposed to sacrificing some comfort for learning.

And for superpowers. :-)

This. And end users rarely care what the solution is coded in if it's a tool they use and don't modify or script at the code level.

Anyone else unable to login with github to studio?

Yes, I was unable; just bumped me back to the login

same just redirects me to login every time

Same here

> Each new layer of complexity fed my developer ego.

I'm unable to understand this mindset. All the time I read things like "developers love complexity because it feeds their egos", but I've never encountered a situation in which added complexity made me more proud of the work. Just the opposite: being able to do more made me more proud of the work I put in, and complexity was the price I paid for that ability. The greatest hacks, the ones that etch people's names into history, are the ones -- like Unix and the Doom engine -- that achieve phenomenal feats with very little code and/or extreme parsimony of design. Nowhere is this more true than in that famous ego-stroking/dick-measuring contest of programming, the demoscene. My favorite example being the 4k demo Omniscent: https://www.youtube.com/watch?v=G1Q9LtnnE4w

Being able to stand up a 100-node K8s cluster to provide a basic web service, connected to a React SPA front end to provide all the functionality of a Delphi program from the 90s doesn't stroke the ego of any programmer I know of; but it might stroke their manager's ego because it gives them an opportunity to empire-build and requisition a larger budget next year.


Indeed, I often tell people that one of the “hardest” things to do in software development is actually managing complexity (on any significant sized code base that is, on smaller ones it’s probably not going to be an issue).

Big long lived code bases are all about this battle against complexity and the speed at which you can add new or update features largely comes down to how well you’re doing at management of complexity.


I was extremely puzzled by that statement too. I would hate to work with someone like that.

Look these folks can do whatever the heck they want, use whatever language they want.

However my criteria for selecting a language for use in a professional context:

0: fit to task - obviously the language has to be able to do the job - to take this seriously you must define the job and what its requirements are and map those against the candidate languages

1: hiring and recruiting - there must be a mainstream sized talent pool - talent shortages are not acceptable - and I don't buy the argument that "smart people are attracted to non mainstream languages which is how we find smart people", it is simply not true that "most smart people program with Scala/Haskell/Elixir/whatever" - there's smart and smarter working on the mainstream languages.

2: size of programming community, size of knowledge base, size of open source community - don't end up with a code base stuck in an obscure corner of the Internet where few people know what is going on

3: AI - how well can AI program in this language? The size of the training set counts here - all the mainstream languages have had vast amounts of knowledge ingested and thus Claude can write decent code or at least has a shot at it. And in future this will likely get better again based on volume of training data. AI counts for a huge amount - if you are using a language that the AI knows little about then there's little productivity related benefits coming to your development team.

4: tools, IDE support, linters, compilers, build tools etc. It's a real obstacle to fire up your IDE and find that the IDE knows nothing about the language you are using, or that the language plugin was written by some guy who did it for the love and its not complete or professional or updated or something.

5: hiring and recruiting - it's the top priority, the bottom priority, and every priority in between. If you can't find the people, then you are in big trouble. I have seen this play out over and over, where the CTO's favorite non-mainstream language is used in a professional context, and for years - maybe decades - afterward the company suffers trying to find people. And decades after, the CTO has moved on to a new company and a new favorite language.

So what is a mainstream language? Arguable, but personally it looks like Python, Java, JavaScript/TypeScript, C#, Golang. To a lesser extent Ruby, only because Ruby developers have always been hard to find even though there is lots of community and knowledge and tooling. Rust seems to have remained somewhat niche while its peer Golang has grown rapidly. Probably C and C++ depending on context. Maybe Kotlin? Who cares what I think anyway; it's up to you. My main point is: in a professional context, the language should be chosen to serve the needs of the business. Be systematic and professional, and don't bring your hobbies into it, because the business needs come first.

And for home/hobbies/fun? Do whatever the heck you like.


The signal to noise ratio is way better if you take some eccentric language.

The number of knuckleheads that I've had to interview just to get a single coherent developer is mind-boggling (remote-first).


I think in order of average dev quality (highest to lowest) I recently found:

Rust, Common Lisp, Go, Ruby/Elixir, C++, Python, C#, TypeScript, Java, JavaScript


> talent shortages are not acceptable - and I don't buy the argument that "smart people are attracted to non mainstream languages which is how we find smart people", it is simply not true that "most smart people program with Scala/Haskell/Elixir/whatever" - there's smart and smarter working on the mainstream languages.

Smart people can be trained in any language and become effective in a reasonably short period of time. I remember one company I worked at, we hired a couple of fresh grads who'd only worked with Java at school based on how promising they seemed; they were contributing meaningfully to our C++ code base within months. If you work in Lisp or Haskell or Smalltalk or maybe even Ruby, chances are pretty good you've an interesting enough code base to attract and retain this kind of programmer. Smart people paired with the right language can be effective in far smaller numbers as well.

The major drawback, however, is that programmers who are this intelligent and this interested in the work itself (rather than the money or career advancement opportunities) are likely to be prickly individualists who have cultivated within themselves Larry Wall's three programmer virtues: Laziness, Impatience, and Hubris. So either you know how to support the needs of such a programmer, or you want to hire from a slightly less intelligent and insightful, though still capable, segment of the talent pool which means no, you're not going to be targeting those powerful languages off the beaten track. (But you are going to have to do a bit more chucklehead filtering.)

> if you are using a language that the AI knows little about then there's little productivity related benefits coming to your development team.

This is vacuously true because the consequent is always true. The wheels are kind of falling off "Dissociated Press on steroids" as a massive productivity booster over the long haul. I think that by the time you have an AI capable of making decisions and crystallizing intent the way a human programmer can, then you really have to consider whether to give that AI the kind of rights we currently only afford humans.



