There are a lot of comments pointing out pros and cons of Scala, comparing it to other languages using superficial proxies.
I'd like to share a different perspective based on my own work with OOP-centric and FP-centric languages.
If you take OOP to its logical conclusion, OOP is a slippery slope that eventually leads to Gang-Of-Four centric designs.
If you take FP to its logical conclusion, FP is a slippery slope that eventually leads to Monad-Transformer centric designs.
Both are equally valid ways to design complex applications, so it all comes down to individual taste.
Personally I have a taste for mathematical abstractions, so Scala is better suited for my way of thinking.
It's obvious that Scala was specifically designed to be a solid foundation on which one can implement the concepts of a little-known branch of mathematics called category theory, and almost every design choice in the language flows from there.
If we use this as the basis for comparison to other languages, it's very easy to understand that Scala has carved out a very powerful niche for itself and is here to stay.
> It's obvious that Scala was specifically designed to be a solid foundation on which one can implement the concepts of a little-known branch of mathematics called category theory, and almost every design choice in the language flows from there.
Nonsense. Scala was designed to make programming easier and safer than Java - XML literals and pattern matching certainly have nothing to do with category theory. Scala's for/yield has surprising behaviour when e.g. mixing lists and sets, precisely because it doesn't have a categorical grounding and was implemented in terms of what working programmers wanted to do with it rather than enforcing that you've formed a valid semigroupoid in the category of endofunctors.
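A minimal example of the kind of surprise I mean, since the result type follows the first generator:

    // Starting from a List: four results, duplicates kept.
    for (x <- List(1, 2); y <- List(0, 0)) yield x * y  // List(0, 0, 0, 0)

    // The same body starting from a Set: the four results collapse to one.
    for (x <- Set(1, 2); y <- List(0, 0)) yield x * y   // Set(0)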
Over time it's emerged that some categorical constructs provide useful ways of thinking about code and solving programming problems, but you're putting the cart before the horse if you think the language was designed for category theory.
> Scala's for/yield has surprising behaviour ... because it doesn't have a categorical grounding
Not sure what you're talking about, but for/yield is syntactic sugar for map/flatMap, just like the "do notation" is in Haskell. Of course it has theoretical grounding, because it wouldn't work without flatMap being the monadic bind.
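To make the desugaring concrete (a minimal sketch; the actual rewrite also handles guards and pattern definitions):

    // This comprehension...
    val pairs = for {
      x <- List(1, 2)
      y <- List("a", "b")
    } yield (x, y)

    // ...is rewritten by the compiler into nested flatMap/map calls:
    val pairs2 = List(1, 2).flatMap(x => List("a", "b").map(y => (x, y)))

    // Both yield List((1,"a"), (1,"b"), (2,"a"), (2,"b")).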
> mixing lists and sets
Again, not entirely sure what you're talking about, but if true, it's entirely unrelated to for/yield.
> If you take FP to its logical conclusion, FP is a slippery slope that eventually leads to Monad-Transformer centric designs.
I don't understand what you mean by this, and I write production Haskell for a living. We don't have teetering towers of transformers, and the best advice I've seen is often "put away the shiny tools and just use functions" (https://lukepalmer.wordpress.com/2010/01/24/haskell-antipatt...). Similarly, http://www.parsonsmatt.org/2018/03/22/three_layer_haskell_ca... describes what amounts to one real transformer layer plus a way of claiming only the capabilities you need in your impure code. His "invert your mocks" article talks about this, too.
Some are fixed (Either, inference for many of the type lambda cases). Some are being fixed in the next version (container overloads, CanBuildFrom). Some are not really issues at all (free theorems are fine; an extra map on a monad is not a problem and is sometimes more efficient; specific compiler bugs have been fixed and never said anything about the language in general; Haskell as used in the wild, with orphan instances, doesn't guarantee typeclass coherence either). Some are real but exaggerated issues (subtyping and implicits, type inference for tricky recursion). And a few are genuine issues that remain and probably always will (having to trampoline your monads, no kind system).
It's been a while since I lurked Haskell forums/haunts (was into it way back when), but at that time Lens was a big deal...if that's not a highfalutin library made for the most entrenched monad geeks (and for the purpose of making setters and getters, no less), I don't know what is. I guess I'm really shocked to hear that Haskell is all pragmatic and simple now with slim abstractions...but I'd love to be corrected.
I'd suggest that you're giving Lens pretty short shrift. It's not "just" getters and setters, as it works on immutable data. It also has a composition story that's better than normal getters and setters and _much_ better than other immutable-data update stories.
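To make that concrete, here is the core idea in a minimal Scala sketch (hypothetical names, nowhere near the real library's scope): a lens pairs a getter with an immutable setter, and lenses compose, which is what makes deep updates of nested immutable data pleasant.

    // A lens focuses on an A inside an S, without mutation.
    case class Lens[S, A](get: S => A, set: (S, A) => S) {
      def andThen[B](other: Lens[A, B]): Lens[S, B] =
        Lens(
          s => other.get(get(s)),
          (s, b) => set(s, other.set(get(s), b))
        )
    }

    case class Address(street: String)
    case class Person(name: String, address: Address)

    val address = Lens[Person, Address](_.address, (p, a) => p.copy(address = a))
    val street  = Lens[Address, String](_.street, (a, s) => a.copy(street = s))

    // Composition replaces nested copy(...) boilerplate:
    val personStreet = address.andThen(street)
    personStreet.set(Person("Ada", Address("Main St")), "Side St")
    // => Person("Ada", Address("Side St"))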
I disagree with that. Haskell is category theory as a language. The Scala community got invaded by Haskellites trying to turn Scala into Haskell, but there’s a particular style of OOP/FP hybrid that requires a language like Scala to use/teach/explore. The problem is that in creating the tool to enable that, we ended up with a language that was too big to have a coherent style, which led to an incredibly fractured community.
> I disagree with that. Haskell is category theory as a language
No it is really not. Haskell only really has one category of any import, typically denoted 'Hask'. There is no builtin way to embed or express arbitrary categories in Haskell. It's not even clear what this would mean, given that Haskell only has one conception of arrow (the function type '->'). In fact, Haskell is so far removed from true category theory that we have to use a compiler plugin (CCC by Conal Elliott) to get anything approaching compilation to arbitrary categories.
Haskell is based on the polymorphically typed lambda calculus. Some decades after its design, some people realized they could use abstract algebra concepts to help structure programs. You can use these concepts in any language with a sufficiently expressive type system; it is not specific to Haskell.
Designing good multi-paradigm languages is very hard. I'm a big fan of Mozart-Oz and Common Lisp, and I think Odersky did a really good job with Scala. In fact, given that Common Lisp is languishing and Mozart was never a serious real-world contender, Scala fills a niche where there are not many competitors: C++ for some use cases, Julia for others, and .NET.
The fact that many organizations working on massive datasets have embraced Scala proves there are not many alternatives around, sadly.
I don't think CL is attracting a lot of new developers, or that a lot of new libraries are getting developed. But I would be glad to be proven wrong. Perhaps Racket will eventually become a CL replacement, now that it's adopting a lot of Chez low-level stuff and its multiparadigm efforts keep growing.
My guess would be the JVM. Racket is an academic language, designed for academia. At least it markets itself as such. Made to explore the design space of programming languages.
Clojure was designed and marketed for the enterprise. Builds on the JVM, full Java interop, emphasis on pragmatism.
I disagree with this. Haskell's creation [1, 2] predates the realisation (by the FP community at large) of the close connection between parts of category theory and parts of pure functional programming, which happened in the 1990s, perhaps driven by Moggi's realisation that monads are a fundamental abstraction in computation that can reconcile effects with pure functional computation. More importantly, there is no category "Hask" of Haskell programs, and there isn't even a candidate category that comes close.
One of Odersky's motives in creating Scala was bringing the power of Haskell into the JVM world, although this is not the only motive (tight integration of OO and FP being another).
Haskell's key innovation over its predecessors (Miranda and ML) was the addition of higher-kinded types, which in turn made ad-hoc polymorphism digestible in a typed world (in the form of type classes). The combination of HKTs and type classes enables the monadic abstractions that contemporary Haskell programming is widely known for today.
Note that Scala has HKTs and type classes (via implicits), so Haskell programs can typically be transliterated into idiomatic Scala without major complications.
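For instance, Haskell's Functor class carries over almost mechanically; a minimal sketch using an HKT together with an implicit-based type class:

    // Haskell: class Functor f where fmap :: (a -> b) -> f a -> f b
    trait Functor[F[_]] {
      def fmap[A, B](fa: F[A])(f: A => B): F[B]
    }

    implicit val optionFunctor: Functor[Option] = new Functor[Option] {
      def fmap[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
    }

    // A function polymorphic over any functor, as in Haskell:
    def double[F[_]](fx: F[Int])(implicit F: Functor[F]): F[Int] =
      F.fmap(fx)(_ * 2)

    double(Option(21))  // Some(42)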
> One of Odersky's motives in creating Scala was bringing the power of Haskell into the JVM world, although this is not the only motive (tight integration of OO and FP being another).
I'm not sure why you would think that. Odersky has long claimed ML as his primary functional programming influence, not Haskell. It does have HKT, so there's that...but that's about it. Typeclasses aren't really a part of the language. They were made possible by implicits. They have since had some language level support (context bounds), but that is more of a nice to have than a primary influence.
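For reference, a context bound is just sugar over an implicit parameter; a small hypothetical example:

    // `A: Ordering` desugars to an implicit Ordering[A] argument.
    def largest[A: Ordering](xs: List[A]): A = xs.max

    // Equivalent desugared form:
    def largestExplicit[A](xs: List[A])(implicit ord: Ordering[A]): A = xs.max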
Just because Scala can "do" Haskell doesn't mean odersky was inspired by it any more than ML or Java.
From an abstract PL theory POV, Haskell's key innovation over ML was HKTs.
It was an experiment at the time, but one that was successful beyond all expectation: I cannot imagine designing a new PL without HKTs. (Note that Rust is also trying to add HKTs, but has been running into difficulties with type-based lifetime tracking, IIRC.)
Implicits are a generalisation of default arguments (I don't know where they were pioneered, maybe C++). Implicits were first tried in Haskell [1]. Scala refined them in several steps, the last being [2], which represents implicits at the type level. Mimicking type classes via implicits is a well-established Scala idiom [3].
In fact, ML modules have had higher-kinded types [1] since before Haskell even existed. I guess Haskell's main innovation in this domain is really its very convenient higher-kinded parametric polymorphism with type classes.
[1] MacQueen, D. B. (1984). Modules for Standard ML. In Conference Record of the 1984 ACM Symposium on LISP and Functional Programming, 198-207.
Thanks. That's really interesting. I need to read MacQueen's paper. Is this full HKT, in the sense that all constructions with HKTs can be encoded in MacQueen's system?
If by "all constructions wiht HKTs", you mean what can be done with HKTs in Haskell, then I'd say yes. It is well-known that ML modules provide a very advanced level of expressiveness, especially since OCaml's introduction of first-class modules. Also, Scala took this idea further and provides principled recursive first-class modules (which Scala calls dependent object types).
The problem is that modules in ML have a verbose syntax and are clunky to use compared to type classes. OCaml's modular implicits aim to make this better (see https://arxiv.org/abs/1512.01895), taking inspiration from Scala's implicits.
I understand that making modules first-class gives them a lot of expressivity. But they are a fairly recent development: at least in OCaml they come much after Haskell. But are modules in MacQueen's sense first-class?
> Implicits are a generalisation of default arguments.
There is an extreme semantic difference between an implicit parameter and a default argument. Have you spent any time at all using them? That's like claiming HKT is just a generalization of type constructors. Would you claim that HKT isn't anything new or novel because constructors have been around forever?
Default arguments and implicits both have the same key idea: you can omit arguments to functions, and the compiler, guided by type information, synthesises the missing arguments during compilation. In order to understand the difference between the two, it is crucial to realise that this compile-time synthesis of missing arguments has two related but different dimensions:

- Declaration that an argument is allowed to be omitted (and hence synthesised automatically).

- Declaration of the missing argument that is used in this synthesis.
Default arguments merge these two into one, e.g. with

    def f(x: Int, b: Boolean = false) = ...

    f(2)
    f(2)

all calls f(2) become f(2, false). The problem with this is that the default value to be used in synthesis cannot be context dependent.
Implicit arguments separate these two, enabling the programmer to make the default context dependent, e.g.

    def f(x: Int)(implicit b: Boolean) = ...

    {
      implicit val c: Boolean = true
      f(2)
    }

    {
      implicit val c: Boolean = false
      f(2)
    }

Now the first call f(2) is rewritten to f(2)(true), while the second becomes f(2)(false).
> Would you claim that HKT isn't anything new or novel because constructors have been around forever?
I'm not sure I see the connection: constructors are program constructs, while HKTs are "types for types".
> Default arguments and implicits both have the same key idea: you can omit arguments to functions, and the compiler, guided by type information, synthesises the missing arguments during compilation.
Saying implicits are a generalization of default arguments because they're both synthesized during compilation is like saying steam trains are a generalization of pipes because they're both made of metal. It elides too big a difference to be helpful, to the point that it's actually misleading.
I'm not saying implicits and default parameters are the same thing. I'm saying the key idea behind implicits is the realisation that default arguments merge two ideas that should be kept separate, if we want context dependent defaults. Some of implicits' other features can be (and have been) added to default arguments, see my other reply.
> I'm saying the key idea behind implicits is the realisation that default arguments merge two ideas that should be kept separate, if we want context dependent defaults.
This isn't true though - that's not what implicits are for and not how they're used. Indeed the clearest proof is that Scala still has default arguments, and they still have the behaviour given in your example above. They're different things.
I did not say implicits and default arguments are the same thing. I said that implicits improve upon default arguments in various ways, the key realisation being the split between providing elided values in a context-dependent way, and permitting the elision of values.
I should not have said the key idea, but a key idea.
Default arguments are convenient if you don't need context dependence of elided arguments. I'd probably have removed defaults in order to have a smaller and simpler language.
Maybe a disagreement here is fueled by the difference in power between these two features. Beyond the superficial injection examples, Scala implicits also support injecting values constructed dynamically, as well as recursive/derived implicits.
Your example of nested functions with default arguments does not correspond to what derived implicits do. As a result of implicit resolution, an expression with an arbitrary number of subexpressions may be synthesized, the shape of which depends on the types involved. Default parameters simply cannot do that.
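A sketch of what I mean, with a hypothetical Show type class: given one base instance and one derivation rule, implicit resolution synthesizes an arbitrarily nested expression.

    trait Show[A] { def show(a: A): String }

    implicit val showInt: Show[Int] =
      new Show[Int] { def show(a: Int) = a.toString }

    // Derivation rule: Show[List[A]] exists whenever Show[A] does.
    implicit def showList[A](implicit s: Show[A]): Show[List[A]] =
      new Show[List[A]] {
        def show(as: List[A]) = as.map(s.show).mkString("[", ", ", "]")
      }

    def display[A](a: A)(implicit s: Show[A]): String = s.show(a)

    // The compiler synthesizes showList(showList(showInt)) here:
    display(List(List(1, 2), List(3)))  // "[[1, 2], [3]]"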
The example you give to justify "recursion is quite restricted" is not a restriction on recursion at all, it's an ambiguity problem. Define `a` as `y` to shadow the function parameter, and it compiles.
Thanks for the Simplicity reference, hadn't seen that yet. It seems to build on the implicit calculus [1], which applies more generally, to any lambda calculus rather than just the Scala setting.
Would love to see a citation re: Odersky (or if he'd chime in here).
My general assumption is that if Scala had been intended as category theory implemented, something along the lines of scalaz or cats would have been built into the language - the language's philosophy is not the JS-esque one of a small stdlib and a big library ecosystem.
What does this have to do with the original assertion that "one of Odersky's motives in creating Scala was bringing the power of Haskell into the JVM world"?
AFAIK, Odersky doesn't particularly like people trying to replicate Haskell patterns in Scala. For example, he thinks using monads for most effects is inappropriate.