From the blurb you quoted, it sounds like Lua doesn't check function argument arity natively, and Fennel can. This means Fennel applies a runtime check to verify that a function was called with the expected arity and, if not, propagates an error (however Lua does that, I do not know), which is why arity-checked functions are not as performant.
In OO languages (Java, C#, C++, etc.) and functional ones (F#, Haskell, OCaml, etc.), types do not validate the correctness of logic; they evaluate the correctness of data structures and their access/mutation as they are manifested in the language.
OO languages treat data correctness as access patterns over encapsulated primitives (or other data structures), mediated through behaviors defined on a "class".
Functional languages treat data correctness as access patterns that preserve previous versions of data which a caller, or another part of the system, may still need to reference. Functional languages (in most cases) disallow direct mutation, relying instead on some form of immutability or persistence.
Types have little (or nothing) to do with program correctness, or with data correctness as it relates to external storage engines and their concurrency guarantees (e.g. FoundationDB, PostgreSQL, MySQL).
Therefore, in this scenario, what matters most is the correctness of the underlying storage (including the rigor of validating expected behavior) and the event sourcing/messaging systems, not the internal data structures and the idioms conveyed by the language.
> In OO languages (Java, C#, C++, etc.) and functional ones (F#, Haskell, OCaml, etc.), types do not validate the correctness of logic; they evaluate the correctness of data structures and their access/mutation as they are manifested in the language.
Nonsense. Types validate whatever you use them to validate, which can certainly include what we usually call "logic".
> Types have little (or nothing) to do with program correctness
On the contrary, they're still the most effective technique we've found for improving program correctness at low cost.
> Types validate whatever you use them to validate, which can certainly include what we usually call "logic".
I'd love an example of this! I concede that I could be wrong on the points about the ML/Haskell families; however, it relies on the practitioner correctly using the type system to the extreme (at least, that is my impression). In C++ and other similar OO languages, the type system isn't as compelling as a correctness measure.
> On the contrary, they're still the most effective technique we've found for improving program correctness
Which domain are you working in where this has been the case? It may just be my experience, but types as I have seen them used in industry have been more like "data containers with some behaviors".
I'd appreciate some examples of where you think I may be getting types wrong or missing the point.
> I concede that I could be wrong on the points about the ML/Haskell families; however, it relies on the practitioner correctly using the type system to the extreme (at least, that is my impression).
I wouldn't say it's "extreme", it's very normal and natural. You just stick everything in the types and it works.
> In C++ and other similar OO languages, the type system isn't as compelling as a correctness measure.
> Which domain are you working in where this has been the case? It may just be my experience, but types as I have seen them used in industry have been more like "data containers with some behaviors".
> I'd appreciate some examples of where you think I may be getting types wrong or missing the point.
I've worked in "regular industry" and found types to be very effective. Let me turn it around: what kind of logic errors are you seeing that you think wouldn't be eliminated by using types? Types can't help you avoid errors in the specification itself, and there are a few domains where they may not yet be practical (mainly math-heavy things like linear algebra, where there's centuries' worth of mathematics that's applicable except where it isn't, and we just don't capture all of that in practical type systems yet), but the vast majority of the time you can construct your types in ways that force your logic to be correct because you just don't offer the ability to do the wrong thing.
> I wouldn't say it's "extreme", it's very normal and natural. You just stick everything in the types and it works.
Ah, then I seem to be missing the point/intention. Thanks for illuminating that it should feel "natural". I think I need to spend more time with ML/Haskell families.
> ... the vast majority of the time you can construct your types in ways that force your logic to be correct because you just don't offer the ability to do the wrong thing
I think that is where I haven't had much experience or exposure: "static typing" in the true sense/intention of making a program correct. More often (which I think I loosely alluded to), the type systems I've seen were being used to "model objects in the real world".
> what kind of logic errors are you seeing that you think wouldn't be eliminated by using types?
Classic "logic" stuff. Forgetting to modify and return a map given some other information. Accidentally returning the inverse of a boolean (i.e. !isSomething(x) vs isSomething(x)), incorrect adds or bit shifts, concurrent access to shared data structures (shared memory may be a more apt term), reconciling two different pieces of data into a shared one, algorithmic implementations (though I think you touched on this being less likely when using a type system designed to encourage correct algorithms by applying type theory, so it may be moot).
To maybe give some clarity on what I mean by correctness: does the program match the expected behavior of the programmer? In playing around with F#, I've written buggy F#, but the types all matched up. I had a guarantee that my program would run in a non-faulty manner (bar any system fault), but the program was not correct.
> and found types to be very effective
I'm very curious where you've seen this be very effective (if you're able to share), and with what language/technology.
---
Also, as a last quip, I am enjoying learning about your perspective on type systems. Some points you have brought up have caused me to think harder and in more depth about the concept and its application, and I want to say that I appreciate you putting in the time to have this exchange. Want to wrap this up before I forget, since it is getting late in my neck of the woods.
> Classic "logic" stuff. Forgetting to modify and return a map given some other information. Accidentally returning the inverse of a boolean (i.e. !isSomething(x) vs isSomething(x)), incorrect adds or bit shifts
If something really is just a map or a boolean or an integer then you can't avoid this kind of thing. But usually it isn't, it's something meaningful in your domain, and then you can make and enforce the right distinctions. CachedResults is not the same as FullyPopulatedResults (and neither is just a Map); rather than x and a flag for whether x isSomething, how about an Either XThatIsSomething XThatIsNotSomething? Adds and bitshifts are a wide range of operations, and if you're doing deep numerical work then that's one place where I've found that existing type systems often can't keep up, but a lot of the time you're in a domain where you don't need to do that - e.g. if you only need to add values then you can use a type that wouldn't even allow you to subtract them (indeed most of the time I see people using integers they're opaque IDs that it never makes sense to do anything mathematical with - it would be nonsense to multiply a user ID by a permission group ID, so why expose them as integers?).
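To sketch that last point (hypothetical types, not from any particular codebase): wrap the raw integers in newtypes and simply don't give them any arithmetic:

// distinct newtypes over the same underlying integer
struct UserId(u64);
struct PermissionGroupId(u64);

fn grant(user: UserId, group: PermissionGroupId) { /* ... */ }

// grant(UserId(42), PermissionGroupId(7)) compiles; swapping the two
// arguments, or multiplying the IDs together, is a compile error, because
// neither type implements Mul or converts implicitly to an integer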
> I'm very curious where you've seen this be very effective (if you're able to share), and with what language/technology.
Various industries (finance, adtech, messaging), largely "backend" whether direct API backends or more batchy/offline "big data" processing, but also even web frontends. Mostly Scala.
> If something really is just a map or a boolean or an integer then you can't avoid this kind of thing. But usually it isn't,
No matter how much you abstract out and dress up a boolean, at its heart it's still a boolean value, and I don't see how making a custom type based on a boolean would prevent the bug of returning the inverse of a boolean value.
Most of the time a "boolean" isn't a boolean - it's a flag to indicate one of two different things, which is something you can represent more directly.
You brought up the examples of doing the wrong thing with booleans. With a good type system you can cut down on needing to use booleans in the first place for a lot of things. Your isSomething(x) example is a good one: presumably some code after this is implicitly relying on this predicate being true about x. If you forget to do the check, or you invert the check, then that's a bug.
But another way to do this is to encode the predicate into the type system, so the compiler makes you get it right. Concretely, suppose x is a string and you need to check whether x is a valid username before invoking username-related code on x. Then you can have a function like:
fn as_username(x: String) -> Option<Username>
A Username here is a thin wrapper (a newtype) around a String, i.e. the runtime representation is essentially the same, with no overhead, but the compiler treats it as a distinct type. You put the parsing/validation logic inside that function. Then code expecting a username will take a value of type Username rather than String. If as_username is the only function that can construct a Username, then having a value of type Username is proof that the as_username function was already called at some point previously, and gave its blessing to the underlying string so that the Option could be unpacked.
match as_username(raw_string) {
    // the compiler forces us to handle both cases, i.e. we can't
    // forget to check validity
    Some(username) => {
        // code in here can assume we have a proper Username
    }
    None => {
        // handle what to do otherwise
    }
}
Sure, you have to write the as_username function correctly; there's no getting around that. But you only have to get it right once.
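To make that concrete, a minimal sketch of what that one function (and the Username wrapper) might look like in Rust; the validation rule is just a hypothetical stand-in:

// newtype: same runtime representation as String, but a distinct type
pub struct Username(String);

// the only public way to get a Username, so holding one is proof
// that the check already ran
pub fn as_username(x: String) -> Option<Username> {
    // hypothetical rule, standing in for whatever "valid username" means here
    let ok = !x.is_empty() && x.chars().all(|c| c.is_ascii_alphanumeric() || c == '_');
    if ok { Some(Username(x)) } else { None }
}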
> Are you familiar with the idea of "making illegal states unrepresentable", and "parse, don't validate"?
I haven't heard of those concepts/ideas before. Thanks for linking the article defining them. With your example, and the article mentioning that "parsing should take place at the boundaries" (paraphrasing), I can see how types (a la the ML families) can be defined and composed to give internal coherence once an external input has been parsed and hence validated.
Really interesting approach which I haven't considered before!
Agree that you can use types to express and prove logical properties via the compiler; it can be a fun way to solve a problem, though too much of it tends to frustrate coworkers in JavaLand. It's also not exactly "low cost"; here's an old quip I have in my quotes file:
"With Scala you feel smart having just got something to work in a beautiful way but when you look around the room to tell your clojure colleague how clever you are, you notice he left 3 hours ago and there is a post-it saying use a Map." --Daniel Worthington-Bodart
> On the contrary, they're still the most effective technique we've found for improving program correctness at low cost.
This is not borne out by research, such as there is of any quality: https://danluu.com/empirical-pl/ The best intervention for improving correctness, if it's not already being done, is code review: https://twitter.com/hillelogram/status/1120495752969641986 This doesn't necessarily mean dynamic types are better, just that if static types are better, the advantage isn't big enough to show up clearly in studies, unlike the studies on the benefits of code review.
My own bias is in favor of dynamic types, though I think the way Common Lisp does it is a lot better than Python (plus Lisp is flexible enough in other ways to let static type enthusiasts have their cake and eat it too https://github.com/coalton-lang/coalton), and Python better than PHP, and PHP better than JS. And I prefer Java to PHP and JS. Just like not all static type systems are C, not all dynamic type systems are JS. Untyped langs like assembly or Forth are interesting but I don't have enough experience.
I don't find the argument that valuable though, since I think just focusing on dynamic vs static is one of the least interesting division points when comparing languages or practices, and may be a matter of preferred style, like baseball pitching, more than anything. If we're trading experience takes I think Clojure's immutable-by-default prevents more bugs than any statically typed language that is mutable by default; that is, default mutable or default immutable (and this goes for collections too) is a much more important property than static types or dynamic types. It's not exactly a low cost intervention though, and when you really need to optimize you'll be encouraged by the profiler to replace some things with Java native arrays and so on. I don't think changing to static types would make a quality difference (especially when things like spec exist to get many of the same or more benefits) and would also not be a low cost intervention.
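(To illustrate the mechanism for anyone unfamiliar, a trivial sketch using nothing but the core collections:)

(def order {:id 1 :status :pending})

;; "updating" returns a new map; the original is untouched, so nothing
;; else holding a reference to order can be surprised by the change
(def shipped (assoc order :status :shipped))

order   ;=> {:id 1, :status :pending}
shipped ;=> {:id 1, :status :shipped}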
Some domains also demand tools beyond type proofs, e.g. things like TLA+. I think adding static types on top of that isn't very valuable, similar to how adding static types on top of immutable-by-default isn't very valuable.
Last quip to reflect on. "What's true of every bug found in the field? ... It passed the type checker. ... It passed all the tests. Okay. So now what do you do? Right? I think we're in this world I'd like to call guardrail programming. Right? It's really sad. We're like: I can make change because I have tests. Who does that? Who drives their car around banging against the guardrail saying, "Whoa! I'm glad I've got these guardrails because I'd never make it to the show on time."" --Rich Hickey (https://www.infoq.com/presentations/Simple-Made-Easy/)
Your point about underlying data structures is spot on, in my view.
> Last quip to reflect on. "What's true of every bug found in the field? ... It passed the type checker. ... It passed all the tests. Okay. So now what do you do? Right? I think we're in this world I'd like to call guardrail programming. Right? It's really sad. We're like: I can make change because I have tests. Who does that? Who drives their car around banging against the guardrail saying, "Whoa! I'm glad I've got these guardrails because I'd never make it to the show on time."
I've been avoiding bringing up this point/example, but this is what prompted me to start thinking more deeply about types and their tradeoffs. Great talk, IMO.
Quips are not a good basis for decision-making. Immutability is valuable but there is no conflict between immutability and types; you should use both. As for "bugs found in the field", having switched to Scala I pretty much don't encounter them.
To the OP (if you read this), I appreciate your candor, and you're not alone. Heal well. Heal with your family and loved ones. Lean into them. I sincerely hope you move onto greener pastures.
After a quick glance, it seems like htmx would complement thymeleaf, if the web page/app you're writing doesn't need any sort of eager client (eager as in, treat an interaction with a remote service as "successful" and resolve any error in the background somehow).
W/ htmx + clojure, you can define your ui like so:
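(Rough sketch, assuming ring + hiccup on the server and the htmx script loaded on the page; the handler wiring is made up.)

(ns example.app
  (:require [hiccup.page :as page]))

(defonce counter (atom 0))

(defn home-page []
  (page/html5
   [:head
    ;; htmx itself is just a script tag
    [:script {:src "https://unpkg.com/htmx.org"}]]
   [:body
    [:div#count (str @counter)]
    ;; clicking POSTs to /increment and swaps the response into #count
    [:button {:hx-post "/increment"
              :hx-target "#count"
              :hx-swap "innerHTML"}
     "Increment"]]))

(defn handler [{:keys [request-method uri]}]
  (case [request-method uri]
    [:get "/"]           {:status 200
                          :headers {"Content-Type" "text/html"}
                          :body (home-page)}
    [:post "/increment"] {:status 200
                          :headers {"Content-Type" "text/html"}
                          :body (str (swap! counter inc))}
    {:status 404 :body "not found"}))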
And like that you have a page with a button that tracks a counter and updates the ui (I haven't tested this, YMMV).
Also, if it isn't clear, you can keep all your markup as Clojure data structures, which means you can write an `html` function that includes the common styles/scripts/etc. you need, so you get a ton of re-use with an already-familiar syntax vs needing to learn a new templating syntax w/ its own conventions.
W/ clojurescript, to get the same behavior you need:
- clojurescript toolchain w/ some configuration of how you'll bundle/package it
- an idea of how you'll distribute your application (serve spa from same API service? S3/Object store? another web service? How to reconcile state?)
- an idea of how you'll reconcile state between local/server, if you want to go that route. If only local, nbd. If server, you add a handler and fetch data once SPA or cljs has loaded.
- an idea of what format you want to consume (JSON, HTML, XML, text), and then writing the translation between that format and your markup.
etc...
I hope the above answers your question, or at the very least offers another perspective.
As a quick postscript, I think it is underestimated how convoluted templates/templating engines are, given they have a tendency to implement their own language/semantics outside of the PL you're using. I have much respect for the authors of template engines/specs; the engineering that goes into them is impressive. However, most I have come across tend to be a leaky abstraction. Once I experienced writing markup as Clojure data structures, it ruined me for templates permanently. I don't want to go back to writing templates, and doing an assessment of "what language does this template engine implement, and is it simple/easy to learn?" is an exercise I do not miss.
Note: I've been developing Clojure/ClojureScript professionally for almost two years now and have debated most of the above internally during that time. I've done some templating w/ JS and Go.
Hey, thanks so much! Truly appreciate the detailed answer.
I have one doubt and you've probably answered it... but I still can't see it.
I'm not able to figure out what it is that htmx is saving you in the example above. With ClojureScript, wouldn't you have written very similar code? I mean, all you are doing is calling an API.
Is it automatically doing conversion of JSON to your DTO/business object? That can't be right, can it?
My mind is telling me that it is some kind of lifecycle management, like before/after hooks: make sure that the html loads after the server is loaded, etc. Is that what it is?
Taking Clojure out of the equation and only looking at htmx, I would say what it buys you is simplicity, and for most use cases that is the difference between a shipped thing and a dead one ("thing" in this case being a project, product, etc.). Granted, htmx isn't a silver bullet, as you have to learn some of the idioms of the library. But it may be worth it to some (it is to me) to not have to bring in an entire JS toolchain to get a thing bootstrapped.
> I'm not able to figure out what it is that htmx is saving you in the example above. With ClojureScript, wouldn't you have written very similar code? I mean, all you are doing is calling an API. Is it automatically doing conversion of JSON to your DTO/business object? That can't be right, can it?
It can kind of be what you need it to be (if I am interpreting the API provided by htmx correctly). The way I have been using it is to partially update my DOM based on some user interaction (either a POST, PUT, DELETE), by returning html. This makes updates html -> html vs json -> frontend framework -> html.
As far as ClojureScript code goes, the above example would resemble the equivalent ClojureScript code almost 100% (with some slight differences). The example above was a little contrived, in that it is so simple. However, if you think about a larger use case (e.g. hundreds of elements rendered from a db), it no longer feels contrived.
With a Clojure backend + htmx, all I do is write my business logic, define my html via hiccup (Clojure vectors w/ a convention similar to html), and return the html to the client, which I can ship without needing to coordinate different pieces. In addition, the hiccup I've defined is hopefully broken up into functions that'll allow me to partially update the DOM for different user CRUD operations.
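To make "broken up into functions" concrete, a rough sketch (the route and field names are made up); the same hiccup function serves the initial render and the fragment returned from the PUT handler:

(defn todo-item [{:keys [id title done?]}]
  [:li {:id (str "todo-" id)}
   [:input {:type "checkbox"
            :checked done?
            ;; toggling PUTs to the server and swaps this <li> with the fresh one
            :hx-put (str "/todos/" id "/toggle")
            :hx-target (str "#todo-" id)
            :hx-swap "outerHTML"}]
   title])

;; initial page render:      [:ul (map todo-item todos)]
;; PUT /todos/:id/toggle =>  return just (todo-item updated-todo), rendered to html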
With a Clojure backend + ClojureScript SPA, it's the same-ish business logic on the backend to return results from a JSON API, some more logic on the front end to validate said JSON, some additional logic to add said data to a global store, and then finally my view can update. Then, in order to get your app out there, you'll need to figure out how to get your application deployed. There has been a lot of work in ClojureScript land as far as build/packaging tools go, but they can still be rough around the edges, which sometimes results in headaches between dev/prod. After you've gone through that and put your app in an S3 bucket/object store (for the sake of example), you'll need to get your API deployed (which should be the same steps as the backend + htmx above).
So while it may seem the same, there is a lot more complexity that comes with a ClojureScript SPA.
Also, as a final thought for this section, even if you went with vanilla ClojureScript + html, you won't get much for free. You still need to compile your ClojureScript, which still needs to call the DOM APIs to update the DOM, which still means calling AJAX/fetch, which still means resolving promises to get your JSON data (or html data), which still means translating your interchange format (in this instance JSON) into html. At this point, are the layers of abstraction worth it?
> My mind is telling me that it is some kind of lifecycle management, like before/after hooks: make sure that the html loads after the server is loaded, etc. Is that what it is?
Sort of. I like to think of htmx as filling the role of sync-ing my web view/page/app with backend state without all the ceremony and fuss of a frontend framework. I can handle all the logic and UI definition on the server and defer http requests (GET, POST, PUT, PATCH, DELETE) and DOM patching to htmx. Kind of like how a user of a frontend framework defers that same patching to the underlying virtual dom implementation.
Personally, when I first saw htmx I thought "well... that doesn't seem useful. I can get a React (or Vue, or Solid, or Svelte, or X) app spun up w/ a cli command and do all my things there." But then I used it and couldn't have been more wrong about its usefulness. I encourage you to play around with htmx, try implementing the counter example in your preferred language, and see if you come to the same conclusion you had previously.
There are cases where a SPA is appropriate and merits the investment, but I think they should be the exception not the rule, due to the complexity brought about by a frontend framework.
In conclusion, I hope I answered your questions. I am happy to continue this discourse if you still want to chat about it. At the end of the day though, all we're trying to do is render web pages, so do whatever makes sense to you to accomplish that.
> The way I have been using it is to partially update my DOM based on some user interaction (either a POST, PUT, DELETE), by returning html. This makes updates html -> html vs json -> frontend framework -> html.
Oh, this clicked. Now I get what htmx is doing. It's basically doing a React-ish way of UI changes without a full page reload (which thymeleaf would have done).
So what happens if I click something that needs to go to another "page"? Do you "swap" the html out with the html of the other page? Or do you trigger a full page reload?
For SEO reasons, I'm kinda putting in the requirement that every interaction must have a unique URL.
> It's basically doing a React-ish way of UI changes without a full page reload (which thymeleaf would have done).
Interesting, I missed that in my scan of the thymeleaf docs. I haven't used that library at all (and tend to shy away from Java land in general, unless it's Clojure).
And for my purposes, yes. I may be missing the point of the library, but that is how I've tended to use it.
> So what happens if I click something that needs to go to another "page"? Do you "swap" the html out with the html of the other page? Or do you trigger a full page reload? For SEO reasons, I'm kinda putting in the requirement that every interaction must have a unique URL.
I'd say pick your poison. You could go either route and it would most likely be roughly the same code.
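For example (hiccup sketch, made-up route): a plain link gives you the full reload, while hx-get/hx-target plus hx-push-url swaps in the new content and still gives the interaction its own URL:

;; full page reload: just a normal link
[:a {:href "/articles/42"} "Read article"]

;; htmx swap: GET /articles/42, swap the response into #main,
;; and push the URL into browser history so each interaction has its own URL
[:a {:href "/articles/42"
     :hx-get "/articles/42"
     :hx-target "#main"
     :hx-push-url "true"}
 "Read article"]

;; (the server should still render a full page at /articles/42 for
;;  direct visits and crawlers)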