
In 2021, I find it hard to justify using a dynamically typed language for any project that exceeds a few hundred lines. It's not a trade-off, it's a net loss.

The current crop of statically typed languages (from the oldest ones, e.g. C#, to the more recent ones, e.g. Kotlin and Rust) is basically doing everything that dynamically typed languages used to have a monopoly on, but on top of that, they offer performance, automatic refactorings (pretty much impossible to achieve in dynamically typed languages without human supervision), fantastic IDEs and debuggability, stellar package management (still a nightmare in dynamic land), etc.



Yeah, I had a fairly large (about a year of solo dev work) app that I maintained both Clojure and F# ports of, doing a compare and contrast of the various language strengths. One day I refactored the F# to be async, a change that affected like half the codebase, but was completed pretty mechanically via changing the core lines, then following the red squigglies until everything compiled again, and it basically worked the first time. I then looked at doing the same to the Clojure code, poked at it a couple times, and that was pretty much the end of the Clojure port.


Hey, so my career path has been C# (many years) -> F# (couple years) -> Clojure (3 months). I understand multithreading primarily through the lens of async/await, and have been having trouble fully grokking Clojure's multithreading. One of the commandments of async/await is don't block: https://blog.stephencleary.com/2012/07/dont-block-on-async-c...

Which is why the async monad tends to infect everything. Clojure, as far as I can tell so far, doesn't support anything similar to computation expressions. So I'm guessing your "poked at it a couple times" was something like calling `pmap` and/or blocking a future? All my multithreaded Clojure code quickly blocks the thread... and I can't tell if this is idiomatic or if there's a better way.


Not even. It was opening it, looking, realizing it would take a couple weeks, and going back to F#. I did this a couple times before fully giving up.

IIRC/IIUC, Clojure's async support is closer to Go's (I've never used Go), in the form of explicit channels. Though you can wrap that in a monad pretty easily, which I did for fun one day (https://gist.github.com/daxfohl/5ca4da331901596ae376). But neither option was easy to port, AFAICT, before giving up.

Note it's possible that porting async functionality to Clojure may have been easier than I thought at the time. Maybe adding some channels and having them do their thing could have "just worked". I was used to async requiring everything above it to be async too. But maybe channels don't require that, and you can just plop them into the low-level code and it all magically works. A very brief venture into Go since then has made me wonder about that.
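To make the contrast concrete, here's a minimal core.async sketch (mine, not from the original project; `slow-io-call` is a hypothetical blocking operation):

    (require '[clojure.core.async :as a :refer [go <! <!!]])

    ;; a/thread runs its body on a separate thread and returns a
    ;; channel that will receive the result -- suitable for blocking IO.
    (defn fetch-async []
      (a/thread (slow-io-call)))

    ;; Inside a go block, <! parks the logical process without
    ;; consuming a real thread...
    (go (println "got" (<! (fetch-async))))

    ;; ...while plain synchronous code can simply block on the channel
    ;; with <!!, so callers don't have to become "async" all the way up.
    (defn plain-caller []
      (<!! (fetch-async)))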


Sounds more like you ran into a conflict of mental model and language feature, not necessarily that the language couldn’t achieve your goal simply.


Yeah, quite possible. I haven't worked on the project in ~six years and have lost all context, but if any of it were still current I'd revisit it and see if perhaps there was a simple solution.


Well, I think I stumbled on this article way back when I was originally porting, and it looks like it still holds: https://martintrojer.github.io/clojure/2014/03/09/working-wi....

While core.async pays homage to Go, it's simply not Go: it's harder to work with, and generally the changes are going to be more invasive. Looking around at modern resources, I don't see anything that indicates much has changed. So while I might have been more efficient if I'd had the Go mental model, that was definitely not the only problem. Migrating was too much to do in my fairly large project, hunting and pecking at each place I made an async call. Whereas with F# it was truly mechanical and hard to mess up, as I described above.


Multi-threaded code is normally not implemented in an async style; instead, each thread of execution is synchronous.

Async style comes into play generally for languages that lack real threads, or as a way to manage callbacks (even if single threaded), or in order to wait for blocking IO without the need for a real thread.

So ya, it's idiomatic to use blocking to coordinate between different threads in Clojure, same as Java.
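A minimal sketch of that style (`compute-part-1` and `compute-part-2` are hypothetical workloads): each future runs on its own real thread, and deref simply blocks the caller until the value is ready.

    ;; Kick off two computations on separate threads, then block
    ;; (with deref/@) until both results are available.
    (let [a (future (compute-part-1))
          b (future (compute-part-2))]
      (+ @a @b))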

Java decided to work on making stackful coroutines instead of stackless ones like C#'s. That requires a lot more work, but it should be coming to Java eventually. At that point, your "blocking" code in Clojure will no longer block a real thread, but a lightweight fiber instead. But patience is needed for it.

In the meantime, if you're dealing with non-blocking IO that operates with callback semantics or other callback-style code, what you can do in Clojure to make working with that easier is use one of the following (a small sketch follows the list):

> core.async - https://github.com/clojure/core.async

> Promesa - https://github.com/funcool/promesa

> Missionary - https://github.com/leonoel/missionary

> Missionary's lower level coroutine lib - https://github.com/leonoel/cloroutine/blob/master/doc/02-asy...
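As a rough illustration of the core.async option (`callback-based-io` stands in for whatever callback API you've been handed), you can adapt a callback into a channel like this:

    (require '[clojure.core.async :as a])

    ;; Adapt a callback-style API to a channel: the callback delivers
    ;; its result onto the channel, and consumers simply take from it.
    (defn io->chan []
      (let [ch (a/chan 1)]
        (callback-based-io (fn [result]
                             (a/put! ch result)
                             (a/close! ch)))
        ch))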


Thanks for the reply - what you say makes a lot of sense. I watched Rich's talk on Async and was like... "cool so `core.async` follows this pattern right?!" ...not quite.

I'll check out your other links though, much appreciated. Also, hearing that I should just be okay with blocking is, well, good to hear explicitly.


> In 2021, I find it hard to justify using a dynamically typed language for any project that exceeds a few hundred lines. It's not a trade-off, it's a net loss.

Only if you are skimping on tests. There's a tradeoff here - "dynamically typed" languages generally are way easier to write tests for. The expectation is that you will have plenty of them.

Given that most languages' type systems are horrible (Java and C# included), I don't really think it's automatically a net gain. Haskell IS definitely a net gain, despite the friction. I'd argue that Rust is very positive too.

Performance is not dependent on the type system; it's more about the language specification (some specs paint compilers into a corner) and compiler maturity. Heck, JavaScript will smoke many statically typed languages and can even approach some C implementations (depending on the problem), due to the sheer amount of resources that have been poured into JS VMs.

Some implementations allow you to specify type hints, which accomplish much of the same thing. That's something you can do in Clojure, by the way.
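For example (a minimal sketch): in Clojure, an unhinted interop call falls back to reflection, while a type hint lets the compiler emit a direct method call.

    (set! *warn-on-reflection* true)

    (defn len-reflective [s] (.length s))      ; reflection warning, slow
    (defn len-hinted [^String s] (.length s))  ; direct virtual call, fast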

Automatic 'refactorings' are also very language dependent. I'd argue that any Lisp-like language is way easier for machines to process than most "statically typed" languages. IDEs and debuggability... have you ever used Common Lisp? I'll take a condition system over some IDE UI any day. Not to mention, there's less 'refactoring' needed.

Package management is completely unrelated to type systems.

Rust's robust package management has more to do with it being a modern implementation than with its type system. They have learned from others' mistakes.

Sure, in a _corporate_ setting, where you have little control over a project that spans hundreds of people, I think the trade-off is skewed towards the most strict implementation you can possibly think of. Not only type systems, but everything else, down to code standards (one of the reasons why I think Golang got popular).

In 2021, I would expect people to keep the distinction between languages and their implementations.


Here's what I've noticed with my tests and dynamic languages. I'll get type errors that static typing would have caught. However, those errors occur in places where I was missing tests of the actual functionality. Had I had the functionality tests, the type error would have been picked up by my tests. And had I just had static typing, the type system would not have been enough to prove the code actually works, so I would have needed tests anyways.

Point being, I don't really buy that a static type system saves me any time writing and maintaining tests, because type systems are totally unable to express algorithms. And with a working test suite (which you will need regardless of static vs dynamic) large refactors become just as mechanical in dynamic languages as they are in static languages.


> type systems are totally unable to express algorithms

You don't know much about types if you think that.

As for dynamic typing "helping" you to find code that you need to write tests for: There are already far more sophisticated static analysis tools to measure code coverage.


  doubler :: Num a => [a] -> [a]
  doubler xs = take 2 xs
Passes the type checker, thanks type system! /s

I like static typing, but static typing advocates seriously overstate how much protection the type system gives you. Hickey really said it best: "We used to say 'If it compiles it works' and that's as true now as it was then."

As for dynamic typing "helping find code to write tests", that's not a feature, it's a huge downside. Neither side is perfect, but in my experience the benefits of the static checker are overblown since I need to write tests anyways. And also like you say, there's a variety of great static analysis tools you should be using as well.


Proving that your program is consistent is only one of the many benefits that a static type system brings you.

I'd say the main one is that it enables automatic refactorings, which are mathematically impossible to achieve when you don't have type annotations.

Thanks to automatic refactorings, code bases are easier to maintain and evolve, as opposed to dynamically typed languages where developers are often afraid to refactor, and usually end up letting the code rot.

It's also a great way to document your code so that new hires can easily jump on board. It enables great IDE support, and very often, unlocks performance that dynamically typed languages can never match.


> I'd say the main one is that it enables automatic refactorings, which are mathematically impossible to achieve when you don't have type annotations.

Yeah, it's really not impossible. Maybe in theory it's "mathematically impossible", but in practice, doing a search on your local codebase and understanding the code you find makes refactors easy too. Dynamic languages can also make some refactoring obsolete: you can create data structures where it doesn't matter whether they are a User or a Person; as long as the value has a name, you can print the name (or whatever, just a simple example), whereas with a static type system you'd have to change all the Users to Persons. You're basically locking your program together, making it coupled and harder to change.
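A quick sketch of what I mean (hypothetical maps, not from a real codebase):

    ;; This function only cares that the map has a :name key,
    ;; not whether the entity "is" a User or a Person.
    (defn print-name [{:keys [name]}]
      (println name))

    (print-name {:type :user   :name "Ada"})   ; works
    (print-name {:type :person :name "Alan"})  ; also works, no rename refactor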

> It enables great IDE support

Is there anything specific that IDEs support for static typing that you can't have for dynamic languages? I mostly work with Clojure and have everything my peers have when using TypeScript or any other typed language.

> unlocks performance that dynamically typed languages can never match

I think there is a lot more to performance than just types. Now, the TechEmpower benchmarks aren't perfect, but a JS framework is in 2nd place in the composite benchmark, beating every Rust framework. How do you explain this if we accept your argument that types will for sure make everything faster and more efficient?

https://www.techempower.com/benchmarks/#section=data-r20&hw=...


I'm not sure when this became about type systems promising you'll never write tests. IMO nobody says that.

Let me give you one example.

When I'm coding in Rust and I forget to match on one of my sum type's variants, the compiler will immediately yell at me.

When I'm coding in Elixir, the compiler doesn't care if I do exhaustive pattern matching, because it doesn't know all possible return values. In these conditions it's extremely easy to end up with code that doesn't deal with a return value that appears rarely.
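The same thing happens in Clojure, to sketch it in this thread's language (hypothetical statuses): nothing at compile time tells you a clause is missing; you find out at runtime, and only if something actually hits it.

    (defn describe [status]
      (case status
        :paid    "paid in full"
        :pending "awaiting payment"))

    ;; Compiles and loads fine; the missing clause only surfaces here:
    (describe :refunded) ; => IllegalArgumentException: No matching clause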

That's one of the values of static typing for me.


That was meant as a response to tsss apparently overvaluing his type checks.

The pattern matching example is one that often comes up when talking about typing. Yes, it's great that the type checker finds all the places you didn't deal with your new sum type variant... except here's the rub: all that code was working just fine before. Your static type checker is forcing a bunch of code that never needed to know or care about certain values onto all the places where you used pattern matching. I don't think this speaks to the value of static typing; I think it suggests that pattern matching is a bad idea that leads to overly coupled code, where parts of the system that really shouldn't need to know about each other are now forced to deal with situations they don't care about.


> Your static type checker is forcing a bunch of code that never needed to know or care about certain values onto all places where you used pattern matching.

It's not "forcing" anything, you are evolving your program and the compiler is helping you not play a whack-a-mole by actually telling you every place that must be corrected in order to account for the change.

Wasn't aware that evolving a project is called forcing. :P

> I think it suggests that pattern matching is a bad idea that leads to overly coupled code where parts of the system that really shouldn't need to know about each other are now forced to deal with situations they don't care about.

That's a super random statement, dude. If a sum type change makes 7 places in your code not compile then obviously those pieces of code do care about it -- you wouldn't have written it that way if they didn't. Nobody put a gun to your head forcing you to include the sum type in those places in the code just because, right?

Overall I am not following your train of thought. You seem to be negatively biased. In my practice I've seen only benefits from enforcing exhaustive pattern matching. Many times I've facepalmed after getting a compiler error in Rust, saying "gods, I absolutely would've missed that if I'd written it in a dynamic language".


You can very easily write a "safe" version of that function that will not type check and in this case you don't even need dependent types. So: bad example on your part.


So you're saying that without dependent types you can express, in a type, a function that doubles each element in the list, thus removing the need to test the function?

If you can do that, that's awesome, but I'm not seeing how.


I meant that you can write a version of your function with the same definition that will not type check since `take 2` is illegal for lists of length less than 2.

As for a function that will "double" a list, i.e. turn [1,2] into [1,1,2,2]: That is definitely possible with dependent types as they can express arbitrary statements. I'm not sure if you can do it without dependent types, but I'm inclined to say yes: something like an HList should work. Universal quantification over the HList parameters will ensure that the only way to create new values of the parameter types is to copy the old ones, as long as you disallow any form of `forall a. () -> a` in the type system.

Something like this, which is just the `double` function lifted into the universe of types, _might_ work, though its utility is questionable:

    type family DoubleList (xs :: HList) :: HList where
        DoubleList ('HCons x xs) = 'HCons x ('HCons x (DoubleList xs))
        DoubleList 'HNil = 'HNil


So, totally different function - by "double" I meant turning [1,2] into [2,4] - but that's a really neat example. I hadn't seen the type families extension in Haskell before. You're right, there is a ton more to type systems than I was aware of. I was following some links in this thread and found this as well: https://www.parsonsmatt.org/2017/10/11/type_safety_back_and_....

I was less than impressed with type systems because, like the blog post says, they tend to just kick the can down the road. The blog post uses a technique like you did in your example: rather than emitting a more complicated type, they use the type system to protect the inputs of the function, thus moving the handling of the problem to the edges of the system, which seems like a huge win. Between your example and that post I'm starting to see what people mean when they talk about programming in types, as it's almost like the type system becomes a DSL with its own built-in test suite with which to program, rather than a full programming language.

Either way very thought provoking, thank you for your responses.


> I meant by double to turn [1,2] into [2,4]

Hmm, I think you can do that too, but you'd have to assign each int value its own singleton type, which would be ridiculous and not gain you anything since you're just moving the logic up one level in the hierarchy of universes.

> what people mean when they talk about programming in types

If the type system is powerful enough then you can express any function at the type level. Some languages with universal polymorphism make no distinction between types and terms. Any function can also be used at the type level, kind level, and so on. Though usually just defining a simple wrapper type with a smart constructor will get you 80% of the way in a business application with 2% of the effort of real type-level programming.


We can debate this forever, but all I can say is that at my work we have equal parts Java and Clojure, and we have some Kotlin and some Scala as well. Out of all of them, Clojure does not cause us any more issues, it doesn't take us any longer to add features, it doesn't perform any worse, and it doesn't have any more defects than the others.

My conclusion is that it's a matter of personal preference, honestly. Those are all really good languages. Personally I have more fun and enjoy using Clojure more. I would say I tend to find I'm more productive in it, but I believe that's more a result of me finding it more enjoyable than anything else.


I find immutability way more important.

I don't pick Clojure for its dynamic typing, I pick it for other reasons. I've tried Haskell but it really doesn't seem to mesh with the way I tend to develop a program. But I would love to have more static languages with the pervasive immutability of Clojure.


I really like F# for this, it's like Haskell-lite


> automatic refactorings (pretty much impossible to achieve on dynamically typed languages without human supervision)

...are we talking about the thing pioneered by Smalltalk's Refactoring Browser?


You are forgetting that Smalltalk, with its image, has visibility into the AST of the whole world, so its dynamism has some helping metadata to go along with the OS features.

Also, it wasn't perfect, hence why Strongtalk was born, the remains of which now live on in HotSpot.


My question is: how does that work in a dynamically typed language? In a statically typed language we know the scope and type of a variable, and not much can change at runtime.


In Clojure you know the number of arguments to a function and the names of functions and variables, and the code is all very well structured as an AST (being a Lisp).

So you can do a lot of refactorings with that such as:

Rename function, rename variable, rename namespace, extract constant, extract function, extract local variable, extract global variable, convert to thread-first, convert to thread-last, auto-import, clean imports, find all use, inline function, move function/variable to a different namespace, and some more.

The only thing is you can't change the "type" of something and statically know what broke.


All these refactorings can only be done automatically and safely if you have type annotations (i.e. core).

Without them, all these refactorings can break your code (as in, not even compiling, let alone running).


I believe you're mistaken, but please explain otherwise?

None of those seems to require type information, as far as I can reason (and they are also all available in Emacs for Clojure).

For example, moving a function from one namespace to another: you know where this function is being used from the require declarations, and you know where you've been told to move it to and where it currently resides. So you can simply change the old require pointing to its old namespace to point to the new namespace, and cut/paste the function from the old to the new. Nothing requires knowing the types of the arguments or the return value of the function.
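In other words (a schematic sketch with hypothetical namespaces), the automated edit is just:

    ;; Before: f lives in app.old-ns
    (ns app.core
      (:require [app.old-ns :refer [f]]))

    ;; After the move: only the require changes at each call site,
    ;; and the defn is cut/pasted from app.old-ns to app.new-ns.
    (ns app.core
      (:require [app.new-ns :refer [f]]))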

See a gif of it in action: https://raw.githubusercontent.com/clojure-emacs/clj-refactor...


Sure: https://www.beust.com/weblog/2021/06/20/refactoring-a-dynami...

Even Smalltalk's refactoring browser made mistakes which humans had to fix by hand. Which is not surprising, because in the absence of type annotation, the IDE doesn't have enough knowledge to perform safe refactorings.


That blog is talking about refactoring a method, not a function.

In Clojure, I'm talking about renaming a function, which can be done without types.

See the difference is that with a method:

x.f()

You have to know the type of `x` to find the right `f`, but with a function in Clojure:

    (ns foo
      (:require [a :refer [f]]))

    (f x)
The location of `f` is not dependent on the type of `x`; you know statically that this `f` is inside the namespace `a`, because the require clause says that in `foo`, `f` refers to the `f` inside of `a`.

And this is unambiguous in Clojure because there cannot be more than one `f` inside `a`.

If you had two `f` this would be the code in Clojure:

    (ns a)
    (defn f [] "I'm in a")

    (ns b)
    (defn f [] "I'm in b")

    (ns foo
      (:require [a :refer [f]]
                [b :refer [f] 
                   :rename {f bf}]))

    (f x)
    (bf x)
You're forced to rename the other f, and now it's clear statically again that `bf` is the `f` from `b` and `f` is the one from `a`, no need to know the type of `x` for it.


You are pushing FUD about not having typing systems at all... they are valuable to automated systems for introspection, to some degree.

However, you are talking about typing as if all typing is static. Static typing has little value beyond warm and fuzzies on the developer's part; dynamically typed systems are able to perform just as well. At which point, the dynamic part can allow you to mostly drop the types.

Statically typed systems, you will note, tend to come with ecosystems dedicated to using the static bits as little as possible. And they provide no guarantee of correctness.

Yes, new devs might be able to latch onto some specific typing a bit better, but I don't care if you have all the automated refactors and a hundred new employees; if your codebase sucks and is incorrect, your static analysis is worth diddly squat.


It's your opinion though; there's nothing scientific about what you're saying. Take mocking, for example: in Ruby/Rails it's a breeze. In Java you need to invent a dependency injection framework (Spring) to do it.


The best response from the statically typed world is functional programming and explicit dependencies (Haskell, OCaml, F#), which make mocking unnecessary most of the time. OOP (Java, C#) is not the true standard for static typing, just the most common one.


I think you are mistaken. Mocking and DI frameworks are two unrelated concepts. There is nothing in Java that forces you to use a DI framework (e.g., Spring) if you want to use mocks during testing.


In theory, I agree, but I don't think that holds terribly true in practice.

One of the ideas behind IoC frameworks (which build on top of DI) is that you could swap out implementation classes. For a great deal of software (and especially in cloud-hosted, SaaS style microservice architecture) the test stubs are the only other implementations that ever get injected.

Most code bases could ditch IoC if Java provided a language-level construct, even if that construct were only for the test harness.


Java has a mechanism: just pass alternate implementations in constructors, or, if you must, a setter method. For most code you don't need to bring in the overhead of Spring, and @Autowired isn't really more convenient typing-wise. Plus your unit tests become trivial; they're just POJOs with @Test annotations.

Spring is great when you need that dynamic control at runtime (especially when code dependencies are separated by modules), but you're just aping what good dynamic languages like Clojure or Common Lisp give you for free. But I can't complain too much; developing modern Java with its popular frameworks and with JRebel is getting closer to the Lisp experience every year. I'd rather have that than for Java to remain stagnant like in its 1.6/1.7 days.


Let's say I have a class called User and in it a method that says the current time. So User#say_current_time which simply accesses the Date class (it takes no arguments).

Can you show me how you would mock the current time of that method in Java?

It's one line of Ruby/Javascript code to do that.


Without using a mock framework, assuming User#say_current_time isn't a private or static method then:

    final Date testDate = someFixedDate;
    User testUser = new User() {
        @Override
        Date say_current_time() {
            return testDate;
        }
    };
If it is private and/or static, you can get around it without having to change the code, but if you own the code, you should just do that... Often the change will be as simple as replacing some method's raw usage of Date.now() with a local say_current_time() method that uses it, or some injected dependency, just so you can mock Date.now() without hassle.

But your point further down that in Java you have to think about your code structure more to accommodate tests is valid. I think it's easy to drink the kool-aid and start believing that many code structuring styles that enable easier testing in Java are actually very often just better styles regardless of language, but you're not going to really see the point if you do nothing but Ruby/JS where you can get away with not doing such things for longer. Mostly it has to do with dynamic languages offering looser and later and dynamic binding than static languages (which also frequently makes them easier to refactor even if you don't have automated tools). One big exception is if your language supports multiple dispatch, a lot of super ugly Java-isms go away and you shouldn't emulate them. The book Working Effectively with Legacy Code is a good reference for what works well in Java and C++ (and similar situations in other languages), it's mostly about techniques for breaking dependencies.


I am assuming this is easier in Ruby because you can monkey patch classes?

Mockito in Java has a nifty way of doing this with Mockito.mockStatic:

  @Test
  public void mockTime() throws InterruptedException {
    LocalDateTime fake = LocalDateTime.of(2021, 7, 2, 19, 0, 0);

    try (MockedStatic<LocalDateTime> call = Mockito.mockStatic(LocalDateTime.class)) {
      call.when(LocalDateTime::now).thenReturn(fake);

      assertThat(LocalDateTime.now()).isEqualTo(fake);
      Thread.sleep(2_000);
      assertThat(LocalDateTime.now()).isEqualTo(fake);
    }

    LocalDateTime now = LocalDateTime.now();
    assertThat(now).isAfter(fake);
    assertThat(now).isNotEqualTo(fake);
  }
Or you can pass a Clock instance and use .now(clock). That Clock then can be either a system clock or a fixed value.


> I am assuming this is easier in Ruby because you can monkey patch classes?

Yes, that was my point. I see it's possible in Java though, hurts my eyes a bit but possible :)


I'll take clean contractual interfaces (aka actual principle of least surprise) over "I can globally change what time means with one line of code!" on large projects every time.


If you want to use DI, in Java 8 you could inject a java.time.Clock instance in the constructor and provide a fixed instance at the required time in your test, e.g.

    Instant testNow = ...
    User u = new User(Clock.fixed(testNow, ZoneOffset.UTC));
    u.sayCurrentTime();
although it would be better design to have sayCurrentTime take a date parameter instead of depending on an external dependency.


Yes that was my point. You don't need DI or to structure your code any differently in Ruby/JS/Python. You just mock a method.


In my experience the need to mock out individual methods like this is an indication that the code is badly structured in the first place. The time source is effectively a global variable so in this example you'd want to pass the time as a parameter to `sayCurrentTime` and avoid the need to mock anything in the first place. A lot of C#/java codebases do seem to make excessive use of mocks and DI in this way though.


    User mock = mock(User.java)
    when(mock.say_current_time()).thenReturn(someDate)


OK, first: I could be ignorant about Java since I haven't touched it in more than a decade. Which library is doing that? And also, what is mock(User.java) returning - is it an actual User instance or a stub? I want a real User instance (nothing mocked in it) with just the one method mocked.

And again if this is possible I will admit ignorance and tip my hat at the Java guys.


It's Mockito [1], which has been a standard for a while. There are other libraries, and they use different strategies to provide this kind of functionality (dynamic proxies, bytecode weaving, annotation processing, etc.).

[1] https://site.mockito.org/


And ... is the whole user being mocked or just the method?


It creates a stub, but you can also configure it to pass any method calls through to the original implementation. You should be tipping your hat, I think.

https://javadoc.io/static/org.mockito/mockito-core/3.11.2/or...

User mock = mock(User.java)

Should have been

User mock = mock(User.class)


Ah oops, I've been writing exclusively Kotlin for several years, my Java is becoming rusty (no pun intended).


I think what you want is a "spy" (partial mock), not a full "mock", but yes, both are possible. You can partially mock classes, i.e., specific methods only. Syntax is almost the same, instead of mock(User.class) you write spy(User.class).


The fact that such libraries exist means that there is no pain associated with this particular activity. Not only do you get great mocking frameworks, they are actually very robust and benefit from static types.

Mocking in dynamically typed languages is monkey patching, something the industry has been moving away from for more than a decade. And for good reasons.


> The fact that there are such libraries in existence means that there is no pain associated to this particular activity

I can say the same about Rails + RSpec. It exists therefore it's good.

> Mocking dynamically typed languages is monkey patching, something that the industry has been moving away

That's a reach. There are millions of javascript/python/php/ruby/elixir devs that don't use types or annotations. They mock. "The industry" isn't one cohesive thing.


Not only this, but the programming style where you pass around dictionaries / maps for everything yet have expectations about what keys they contain works just as easily in JS, and with TypeScript or Flow you get a lot more help from the compiler than you do using spec (as I understand it).


Although you are right, the Clojure community probably by and large agrees with you. That is why everyone is excited about spec - it looks a lot like a type system for Clojure.
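For instance, a minimal spec sketch (using the standard clojure.spec.alpha namespace) declares the keys a map should contain and validates values at runtime:

    (require '[clojure.spec.alpha :as s])

    (s/def ::name string?)
    (s/def ::person (s/keys :req-un [::name]))

    (s/valid? ::person {:name "Rich"})  ;=> true
    (s/valid? ::person {:nmae "Rich"})  ;=> false, the typo is caught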


I must respectfully disagree with the points you've brought up.


Can you elaborate why? To be honest, I don't have experience with large-scale Clojure codebases, but I have my fair share working on fairly hefty Python and Perl projects, and I tend to think that the parent commenter is mostly right. What makes you think they are incorrect?


Not who you are responding to, but the common idea that static types are all win and no cost has become very popular these days, and it isn't true; it's just that the benefits of static typing are immediately apparent and obvious, while their costs are more diffuse and less obvious. I thought this was a pretty good write-up on the subject that gets at a few of the benefits: https://lispcast.com/clojure-and-types/

Just to name some of the costs of static types briefly:

* They are very blunt -- they will forbid many perfectly valid programs just on the basis that you haven't fit your program into the type system's view of how to encode invariants. So in a statically typed language you are always, to a greater or lesser extent, modifying your code away from how you could have naturally expressed the functionality and towards helping the compiler understand it.

* Sometimes this is not such a big change from how you'd otherwise write, but other times the challenge of writing some code can consist almost entirely of figuring out how to express your invariants within the type system, and it becomes an obsession/game. I've seen this run rampant in the Scala world, where the complexity of code reaches the level of satire.

* Everything you encode via static types is something that you would actually have to change your code to allow to change. Maybe this seems obvious, but it has big implications for how coupled and fragile your code is. Consider in Scala you're parsing a document into a static type like:

    case class Record(
      id: Long,
      name: String,
      createTs: Instant,
      tags: Tags,
    )
    
    case class Tags(
      maker: Option[String],
      category: Option[Category],
      source: Option[Source],
    )
    // ...

In this example, what happens if there are new fields on Record or Tags? Our program can't "pass through" this data from one end to the other without knowing about it and updating the code to reflect these changes. What if there's a new Tag added? That's a refactor+redeploy. What if the Category tag adds a new field? Refactor+redeploy. In a language as open and flexible as Clojure, this information can pass through your application without issue. Clojure programs are able to be less fragile and coupled because of this.

* Using dynamic maps to represent data allows you to program generically and allows for better code reuse, again in a less coupled way than you could easily achieve with static types. Consider for instance how you would do something like `(select-keys record [:id :create-ts])` in Scala. You'd have to hand-code that implementation for every kind of object you want to use it on. What about something like updating all updatable fields of an object? Again you'll have to hardcode that for all objects in Scala, like:

    case class UpdatableRecordFields(name: Option[String], tags: Option[Tags])

    def update(r: Record, updatableFields: UpdatableRecordFields): Record = {
      var result = r
      updatableFields.name.foreach(n => result = result.copy(name = n))
      updatableFields.tags.foreach(t => result = result.copy(tags = t))
      result
    }
All this is specific code and not reusable! In Clojure, you can solve this once and for all:

    (defn update [{:keys [prev-obj new-obj updatable-fields]}]
      (merge prev-obj (select-keys new-obj updatable-fields)))
    
    (update 
      {:prev-obj {:id 1 :name "ross" :createTs (now) :tags {:category "Toys"}} 
       :new-obj {:name "rachel"} 
       :updatable-fields [:name :tags]})
      => {:id 1 :name "rachel" :createTs (now) :tags {:category "Toys"}}  

I think Rich Hickey made this point really well in this funny rant https://youtu.be/aSEQfqNYNAc.

Anyways I could go on but have to get back to work, cheers!


Your third point, about having to encode everything, isn't quite true. Your example is just brittle in that it doesn't allow additional values to show up, causing it to break when they do. That's not a feature of static type systems but of how you wrote the code.

This blog post[1] has a good explanation of it, if you can forgive the occasional snarkiness that the author employs.

In a dynamic system you’re still encoding the type of the data, just less explicitly than you would in a static system and without all the aid the compiler would give you to make sure you do it right.

[1]: https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-typ...


It's important to note that this article talks about something that is missing from most statically typed languages.

It's best to refrain from debating static vs dynamic as a generic stereotype and catch-all.

You need to look at Clojure vs X, where if X is Haskell, Java, Kotlin, or C#, what the article talks about doesn't apply and Clojure has the edge. If it's OCaml or F#, then in some scenarios they don't suffer from that issue like the others do, and they equal Clojure. But then there are other aspects to consider if you were to do a full comparison.

In that way, one needs to understand the full scope of Clojure's trade-offs as a whole. It was not made "dynamic" for fun.

Overall, most programming languages are quite well balanced with regards to each other and their trade-offs. What matters more is which one fits your playing style best.


I think many people's experience is that most real-world data models aren't as perfect as made-up toy examples in blog posts. Requirements and individuals change over time. You can make an argument that in a perfect world, with infinite time and money, static typing may be better because you can always model things precisely, but whether you can do that practically over longer periods of time should be a debatable question.


I've seen this article and I applaud it for addressing the issue thoroughly, but I still am not convinced that static typing as we know it is as flexible and generic as dynamic typing. Let's go at this from another angle, with a thought experiment. I hope you won't find it sarcastic or patronizing; just trying to draw an analogy here.

So, in statically typed languages, it is not idiomatic to pass around heterogeneous dynamic maps, at least in application code, like it is in Ruby/Clojure/etc. But one analogy we can draw which could drive some intuition for static typing enthusiasts is to forget about objects and consider lists. It is perfectly familiar to Scala/Java/C# programmers to pass around Lists, even though they're highly dynamic. So now think about what programming would be like if we didn't have dynamic lists, and instead whenever you wanted to build a collection, you had to go through the same rigamarole that you have to when defining a new User/Record/Tags object.

So instead of being able to use fully general `List` objects, when you want to create a list, that will be its own custom type. So instead of

  val list = List(1,2,3,4)
you'll have to do:

    case class List4(_0: Int, _1: Int, _2: Int, _3: Int)
    val list = List4(1,2,3,4)
This represents what we're trying to do much more accurately and type-safely than with dynamic Lists, but what is the cost? We can't append to the list, we can't `.map(...)` the list, we can't take the sum of the list. Well, actually we can!

    case class List5(_0: Int, _1: Int, _2: Int, _3: Int, _4: Int)
    def append(list4: List4, elem: Int): List5 = List5(list4._0, list4._1, list4._2, list4._3, elem)
    def map(list4: List4, f: Int => Int): List4 = List4(f(list4._0), f(list4._1), f(list4._2), f(list4._3))
    def sum(list4: List4): Int = list4._0 + list4._1 + list4._2 + list4._3
So what's the problem? I've shown that the statically defined list can handle the cases that I initially thought were missing. In fact, for any such operation you find missing from the dynamic list implementation, I can come up with a static version which will be much more type safe and more explicit about what it expects and what it returns.

I think it's obvious what is missing, it's that all this code is way too specific, you can't reuse any code from List4 in List5, and just a whole host of other problems. Well, this is pretty much exactly the same kinds of problems that you run into with static typing when you're applying it to domain objects like User/Record/Car. It's just that we're very used to these limitations, so it never really occurs to us what kind of cost we're paying for the guarantees we're getting.

That's not to say dynamic typing is right and static typing is wrong, but I do think that there really are significant costs to static typing and people don't think about it.


I’m not sure I follow your analogy. I think the dynamism of a list is separate from the type system. I can say I have a list of integers but that doesn’t limit its size.

I can think of instances where that might be useful and I think there’s even work being done in that direction in things like Idris that I really know very little about.

There are trade-offs in everything. I'm definitely a fan of dynamic type systems, especially things like Lisp and Smalltalk where I can interact with the running system as I go; not having to specify types up front helps with that. Type inference will get you close to that in a more static system, but it can only do so much.

The value I see in static type systems comes from being able to rely on the tooling to help me reason about what I'm trying to build, especially as it gets larger. I think of this as being something like what Doug Engelbart was pointing at when he talked about augmented intelligence.

I use Python at work and while there are tools that can do some pretty decent static analysis of it, I find myself longing for something like Rust more and more.

Another example I would point to, beyond the blog post I previously mentioned, is Rust's serde library. It totally allows you to round-trip data while only specifying the parts you care about. I don't think static type systems are as static as most like to think. It's more about knowns and unknowns and being explicit about them.


I believe your comments provided a good insight into your approach to programming. I may be wrong in my understanding, but let me elaborate.

You expect your programming language to be a continuation of your thoughts; it should be flexible and ductile to your improvisations. You see static typing as a cumbersome, restricting bureaucracy you have to obey.

Whereas I see a type system as a tool that helps to structure my thoughts and define the rules and interfaces between the building blocks of my program. It is scaffolding for a growing body of code. I have found that in many cases, well-defined data structures and declarations of functions are enough to clearly describe how some piece of code is meant to work.

It seems we have developed different preferred ways of writing code, maybe influenced by our primary languages, character traits, or the types of software we create. I used Scala for several years, but recently I regularly use Python. Shaping my code with dataclasses and empty functions is my preferred way to begin.


It is absolutely possible to have the same type for values that have the same shape.

You can have a `Map k v` that is the same kind of record dynamic languages pass around and call an object/map (make k/v Object or Dynamic if you want).

You don't need to create a new type with precise information if you just want that (no, you don't need to instantiate type params everywhere). There are definitely limitations in type systems (requiring advanced acrobatics), but most programs don't run into them, and the HM type system (https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner_type_sy...) has stood the test of time.

For a great introduction to the idea of a type system, see: https://www.youtube.com/watch?v=brE_dyedGm0 .


Let me address your criticism from Scala's point of view

> they are very blunt

I'm usually more blunt than the compiler. I really want 'clever' programs to be rejected. In rare situations when I'm sure I know something the compiler doesn't, there are escape hatches like type casting or the @uncheckedVariance annotation.

> the problem of how to express your invariants within the type system

The decision of where to stop encoding invariants using the type system depends entirely on the programmer. Experience matters here.

> Our program can't "pass through" this data from one end to an other

It's a valid point, but it can be addressed by passing data as a tuple (parsedData, originalData).

> What if there's a new Tag added? What if the Category tag adds a new field?

If it doesn't require changes in your code, you've modelled your domain wrong - tags should be just a Map[String, String]. If it does, you have to refactor+redeploy anyway.

> What about something like updating all updatable fields of an object

I'm not sure what exactly you meant here, but if you want to transform objects in a boilerplate-free way, macros are the answer. There is even a library for this exact purpose: https://scalalandio.github.io/chimney/! C# and Java have to resort to reflection, unfortunately.


> In a language as open and flexible as Clojure, this information can pass through your application without issue. Clojure programs are able to be less fragile and coupled because of this.

Or this can wreak havoc :) Nothing stops you from writing Map<Object, Object> or Map[Any, Any], right?


That's true! But now we'll get into what is possible vs what is idiomatic, common, and supported by the language/stdlib/tooling/libraries/community. If I remember correctly, Rich Hickey did actually do some development for the US census, programming sort of in a Clojure way but in C#, before creating Clojure. But it just looked so alien and was so high-friction that he ended up just creating Clojure. As the article I linked to points out, "at some point, you're just re-implementing Clojure". That being said, it's definitely possible, I just have almost never seen anyone program like that in Java/Scala.



