I've seen uses of implicits fall into three categories:
(1) Dependency injection, e.g. ExecutionContext
(2) DSLs/syntax sugar, e.g. custom string interpolation, specs2
(3) Typeclasses and type-level programming, e.g. Play JSON Reads/Writes, cats, shapeless
The more Scala I have seen, the more I realize that #1 is a cancer.
Implicits should not be used to reduce the number of characters typed. If used at all, they should be used for a better reason, e.g. as a necessary part of a typesafe pattern, like typeclasses.
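For contrast, here is a minimal sketch of the typeclass use (#3), where the implicit parameter is a compile-time constraint rather than saved keystrokes (Show and render are illustrative names, not from any particular library):

    // A minimal typeclass: the implicit is a typed capability, not hidden plumbing.
    trait Show[A] {
      def show(a: A): String
    }

    object Show {
      // Instances are resolved from the implicit scope at compile time.
      implicit val intShow: Show[Int] = new Show[Int] {
        def show(a: Int): String = a.toString
      }
      implicit def listShow[A](implicit s: Show[A]): Show[List[A]] =
        new Show[List[A]] {
          def show(as: List[A]): String = as.map(s.show).mkString("[", ", ", "]")
        }
    }

    // The implicit parameter expresses a constraint on A; it is not there
    // to save the caller from typing an argument.
    def render[A](a: A)(implicit s: Show[A]): String = s.show(a)

    // render(List(1, 2, 3)) == "[1, 2, 3]"
    // render(new Object)    // does not compile: no Show[Object] instance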
People unfortunately go to great lengths to avoid typing function parameters, so they will do all kinds of shit to reduce the number of arguments in function calls. This includes all kinds of insane dependency injection frameworks that depend on runtime introspection or bytecode manipulation, or stateful objects being passed around instead of plain values.
There's even this best practice floating around that says "have functions with 3 params or less", which is complete bullshit. I mean, there are actually company-wide policies with enforced lint rules on the maximum number of params, and people end up passing around giant records (maps, objects) that then get unwrapped.
I cannot blame implicits. If anything, if people are going to go to such great lengths to avoid passing arguments to functions, I'd rather see implicits than Spring or Guice, because at the very least implicits are resolved at compile time.
The actual cancer is the best practices that drive people to do this.
I wouldn't say that best practices drive people to do stupid stuff, at least not directly. And they usually do have merit.
The issue is that people apply them without understanding the motivations behind them.
E.g. the "too many method or constructor args" warning signals that a class or a method is probably doing too much, so please decompose. A person who doesn't understand that motivation will treat it as a sort of game and wrap some params into a map or move some of them into globals :-\
I find it insane that someone would use an actual lint rule to enforce a best practice about having few function parameters (loosely defended by this thin complexity argument).
Functions of appropriate complexity routinely have more than 3 arguments, even in standard library functions.
Why not just use common sense and code review? If a certain function actually needs 10 arguments for customization (for example, this is common in plotting libraries), so be it. If during code review the writer can give a reasonable account of why the complexity is useful, then just move on.
I cannot see any way to defend putting this in a linting rule.
I'm not a fan of those either (and 3 arguments is insanely low indeed), but I see a way it can work: you always have a choice to suppress a linter finding if your case is "legitimate".
E.g. in our case the limit is 7. I think most people would agree that a 7+ arg constructor or method is worrying. But there are cases where it can be OK (with or without some stretch), e.g. if it's some sort of glue code that does a very mechanical thing, where breaking it down would be nothing short of gold-plating.
Edit: In general I find the signal that the linter gives useful. At the same time, I agree that mature teams can do without it just fine.
Yes, but then you just encourage people to litter the code with noisy linter-suppressing directives or comments.
The point of directives that suppress specific linter rules is that they should be used in rare, exceptional cases, when an otherwise sound lint rule needs to be violated.
But having a large number of function parameters is not rare or exceptional: you have that for legitimate reasons all the time.
If a lint rule causes lint overrides to become a daily experience, I’d say that clearly tells you that the lint rule itself is wrong, not the software practices.
To me, someone who would support a lint rule limiting the number of function parameters is almost certainly someone with a deeply unpragmatic and unrealistic obsession with certain kinds of design patterns and refactoring patterns.
You code professionally long enough and you start to realize that most design patterns, especially OO cookie cutter patterns, are pretty crap and only have a few narrow use cases.
Introducing some type of abstraction purely to allow refactoring into function signatures with fewer parameters would be a nasty bad code smell to me.
> But having a large number of function parameters is not rare or exceptional: you have that for legitimate reasons all the time.
That doesn't match my experience. IME, in virtually every case where I've run into signatures with more than about 3 parameters, there has been one or more groups of parameters that should have been passed to another function, whose result should then have been passed to the present function.
But that may be context dependent.
> If a lint rule causes lint overrides to become a daily experience, I’d say that clearly tells you that the lint rule itself is wrong
Sure, so in a context where you need to do that daily, the lint rule is bad for that context.
> But having a large number of function parameters is not rare or exceptional: you have that for legitimate reasons all the time.
I think here we can only speak from our own experiences. In my team's experience that linter rule has been triggered only 2-3 times in a 2-year history. Far from being a "daily experience". And in all cases it was warranted: it was a poor choice of abstraction that resulted in the accumulation of an ever-growing number of dependencies in a class.
> Introducing some type of abstraction purely to allow refactoring into function signatures with fewer parameters would be a nasty bad code smell to me.
Referring to my earlier comment, the point is not to blindly refactor something to satisfy the linter, but to actually understand:
* is there really a problem (in _your_ terms) behind a given linter signal?
* if there is a problem - fix it
* if not - well, suppress it for this instance, adjust or get rid of this linting rule.
> You code professionally long enough and you start to realize that most design patterns, especially OO cookie cutter patterns, are pretty crap and only have a few narrow use cases.
When it comes to OO patterns I tend to agree: many are quite narrow, and the way they are pitched ends with people looking for nails with a bunch of hammers, i.e. people keep "applying patterns" instead of fixing problems in their code. But those kinds of linting rules are not about design patterns.
I've felt similarly after seeing the rise of RuboCop once Sandi Metz did some talks and wrote a book about object-oriented design in Ruby.
Her advice is totally sound; if you hear her out, she does not advocate the dogmatism that has since infected the community (and the linting tools...).
Yet still, you will work for a company that insists on a hard and fast rule that a method cannot be more than 10 lines long, or that a class cannot contain over 100 lines.
It's not on the same level as implicits and everything else Scala lets you do, but still...it makes code worse.
Your 11-line function that is perfectly readable has to become a shorter function that calls other methods that wrap the same logic but are otherwise not useful in any other context. Or you have to fuck with the syntax to reduce the line count.
You end up with code full of indirection because the community decided to quantify readable code and build policy around it.
And even if you really do want to have "X params or less", Scala makes it really easy to achieve with case classes. Group your parameters into data classes of X parameters or fewer, and voilà, you have a reasonably succinct, readable solution.
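A quick sketch of that grouping (the chart-drawing names here are made up for illustration):

    // Instead of: drawChart(title, width, height, xLabel, yLabel, legend, ...)
    // group related parameters into small case classes:
    case class Dimensions(width: Int, height: Int)
    case class Labels(title: String, xLabel: String, yLabel: String)

    def drawChart(dims: Dimensions, labels: Labels, legend: Boolean): Unit = {
      // ... rendering logic elided ...
    }

    // Call sites stay readable, and the linter sees three parameters:
    drawChart(Dimensions(800, 600), Labels("Sales", "Month", "Revenue"), legend = true)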
Eh, it's not that clear. All uses of implicits reduce the number of characters typed. There's no functionality they unlock that you would not gain with another explicit parameter (or two) and a helper class/type(s). By reducing characters typed, they enable a different syntax that you wouldn't normally use, because without implicits it's cumbersome. But then, if code is unwieldy because it's got too many letters, there's probably a good argument for a feature that reduces letters.
I agree about ExecutionContext. I have done a lot of Scala and really find implicits useful for reducing lines, but the cost is so high. It's to the point where the Scala chat room at work is people posting compiler messages, and the answer a lot of the time is: you need this magic import.
The takeaway I have is: if functional programming and monads and referential transparency are some top tier of programming, is this import Implicits._; MoreImplicits._ and params of A[B[_], D] really the way things should be? Is this what people are espousing when they say functional is the way to go? It just feels so overkill, but you have to use it all over your stack to make the map/flatMap chain work. And once every method is just some long for-comprehension, you are finally "winning".
In Haskell, typeclasses are a language-level construct, not needing any implicits, and monadic code is easy to write using do-notation, which is syntactic sugar over bind (what Scala expresses as for-comprehensions).
In Scala, typeclasses are a pattern, something achieved by stringing a bunch of language features together, and implicits are likely an inevitable part of that mix. For-comprehensions are what "sequential computation" really is if you want to represent it as a function and not forget to check intermediate results. That has numerous advantages, but again, at the language level it's a pattern, a contraption that makes monadic style possible, but not necessarily easy on the eyes.
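To make the "it's a pattern" point concrete, here is a small sketch of what the compiler does with a for-comprehension (parsePort and endpoint are made-up names):

    def parsePort(s: String): Option[Int] =
      scala.util.Try(s.toInt).toOption.filter(p => p > 0 && p < 65536)

    // "Sequential computation" written as a for-comprehension...
    def endpoint(host: Option[String], portStr: String): Option[String] =
      for {
        h <- host
        p <- parsePort(portStr)
      } yield s"$h:$p"

    // ...is just sugar for nested flatMap/map calls, which is why any type
    // with those methods can participate:
    def endpointDesugared(host: Option[String], portStr: String): Option[String] =
      host.flatMap(h => parsePort(portStr).map(p => s"$h:$p"))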
My team recently decided to avoid #1 at all costs. The prime example was Twitter's Timer class, which for some reason is an implicit argument in a lot of their library code. We just always pass in timers explicitly now. We generally only use implicits for (de)serialization, whether it be JSON, bytes, encoded strings, etc. The one exception is our DAG state machine code, which uses implicits to resolve which data is available at each step. I didn't work on the base code, but it makes writing DAGs really straightforward and mostly painless.
Also, for anyone who sees this: IntelliJ has a shortcut for viewing which implicits are being used in statements (Cmd+Shift+P on macOS). It's really helpful if you're trying to figure out implicit resolution.
I feel that it's a bit more than syntax sugar. I'll use this to normalize APIs across different third-party libraries (including the Scala standard library), so that I don't have to remember, e.g., whether Slick implemented recover() for DBIOs (hint: it didn't, which breaks the mental abstraction of "this is like a Future, but different") or whether Maps have a filterValues() method to match their filterKeys() (again, no).
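For the Map case, the fix is a small sketch like this (MapSyntax is a made-up name; at least through Scala 2.12, Map has filterKeys but no filterValues):

    object MapSyntax {
      implicit class MapOps[K, V](val self: Map[K, V]) extends AnyVal {
        // The missing counterpart to filterKeys:
        def filterValues(p: V => Boolean): Map[K, V] =
          self.filter { case (_, v) => p(v) }
      }
    }

    import MapSyntax._
    Map("a" -> 1, "b" -> 2).filterValues(_ > 1) // Map("b" -> 2)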
In your example, not really, but in general, assuming the implicit is in scope most of the time, `x.method` is much more discoverable through autocomplete than `method(x)`.
Incidentally, the fact that Haskell's function calls look like `f x` instead of `x.f` probably makes building a pleasant Haskell IDE quite difficult.
> assuming the implicit is in scope most of the time,
And that's really the rub.
I write
import com.example.MagicThings._
and one of the things imported is an implicit class or method which makes some method visible on a value. The name I see in the code is not even one of the names I am importing, which IMO significantly hurts discoverability.
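A sketch of what that looks like (all names hypothetical):

    package com.example

    case class Widget(id: Int)

    object MagicThings {
      // The importable name is MagicThings (or the wildcard), the class is
      // RichWidget, but the name that appears at call sites is frobnicate:
      implicit class RichWidget(val w: Widget) extends AnyVal {
        def frobnicate: Widget = w.copy(id = w.id + 1)
      }
    }

    // At the use site, none of the imported names appear:
    //   import com.example.MagicThings._
    //   Widget(1).frobnicate  // grepping for "frobnicate" finds no import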
Without an IDE, discoverability is not great even in implicitless code. In `x.y.z()`, you don't need to import the type of `y`, much less the method `z`. And an IDE's "go to definition" works fine on extension methods.
IMO the greater discoverability problem is knowing and remembering what to import in the first place.
Still, even with an IDE, extension methods do add complexity and should only be considered when they'd be called very frequently. In my current (Unity/C#) codebase, the ratio of static helper methods to extension methods is probably around 10:1 or more, and that seems alright.
Although it sounds very controversial, it might actually be good if libraries could by default add their implicits to the prelude. SBT (which is horrible in many other ways) lets its plugins do something similar, and that seems to be ok.
Which of those categories contains CanBuildFrom? It's #1, right? But instead of the number of characters typed, I think its primary value is in removing (from the call site) implementation details that the reader of the code need not be concerned with.
I heartily recommend [splain][0] to anyone debugging non-trivial implicits. It is a scalac compiler plugin that, among other things, will swap out the horribly unhelpful "implicit not found" or "diverging implicit expansion" messages for an indented and annotated representation of the search tree that the compiler went through before giving up.
`-Xlog-implicits` is good to use every now and then, but it quickly becomes unreadable for any decent sized project.
Well-grown libraries use the @implicitNotFound annotation to generate better, library-controlled compiler errors, with advice on what to do. It has its limitations of course, but it has proved very useful in many situations.
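A minimal sketch of how that annotation is used (JsonEncoder is an illustrative name):

    import scala.annotation.implicitNotFound

    @implicitNotFound("No JsonEncoder found for ${A}. Define or import an implicit JsonEncoder[${A}].")
    trait JsonEncoder[A] {
      def encode(a: A): String
    }

    def toJson[A](a: A)(implicit enc: JsonEncoder[A]): String = enc.encode(a)

    // toJson(new Object) now fails with the library-authored message above,
    // instead of the generic "could not find implicit value for parameter enc".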
A search tree is cool, but it's not going to explain how to get a value when it has restrictions (dependencies).
`@implicitNotFound` still only gives you a top-level failure message. Also, my comment was aimed more at developers of libraries with complex implicit derivations not the consumers of such libraries.
I wish I knew about this when I wrote Scala (albeit briefly). It's one of those "ill-posed problems" in debugging: what is the implicit from a third-party library, not currently in scope, that would make this work?
I... mostly agree, but I think that the type-class use case allows for stronger DSLs that let you separate code from effects in a pretty clean way. A lot of functionally pure stuff works off of this, and building a DSL like this without type classes or implicits might be a bit messy: https://www.chrisstucchio.com/blog/2015/free_monads_in_scala...
Implicit parameters in Scala are solved at compile time and lexically scoped so do not qualify for "magic".
> why bother
Because software development is hard, we're solving hard problems and it's good to have better and better tools to do that.
In particular, Scala's implicit parameters, along with its support for higher-kinded types and functional programming in general, allow for modeling your problem domain via tools imported from Haskell and other FP languages, because implicit parameters are in fact a generalization of type classes. And type classes are often a superior alternative to OOP for building polymorphic code.
Plus, the landscape is very competitive and the market is global. I remember in 2000, when the bubble burst, the jobs of thousands of developers disappeared overnight. To stay competitive, a "why bother" attitude is not going to cut it when the current bubble bursts, just saying.
> solved at compile time and lexically scoped so do not qualify for "magic".
That seems like too limiting a definition of "magic" to me.
> implicit parameters are in fact a generalization of type classes.
Well, not exactly, as far as I can tell. Type classes (in Haskell at least) are significantly more ergonomic. Since they semantically describe the user's types, they allow you to perform their operations directly with the user's values. Implicit parameters have to be pulled out of the implicit environment and used explicitly themselves, as far as I can tell.
It's true that the underlying mechanism is (roughly) the same, but the surface-level behavior is what really matters, and there they are very different.
> "That seems like too limiting a definition of "magic" to me"
What is your definition of "magic" then? Things that are unfamiliar?
Please don't mention "explicit vs implicit", because Scala's implicits are in fact explicit, plus we can always talk of assembly language and how it lacks any magic.
> "Well, not exactly, as far as I can tell. Type classes (in Haskell at least) are significantly more ergonomic."
Ergonomics has nothing to do with whether a concept is a generalization of another.
> "Since they semantically describe the user's types, they allow you to perform their operations directly with the user's values. Implicit parameters have to be pulled out of the implicit environment and used explicitly themselves, as far as I can tell."
Nope, both type classes and implicit parameters allow for describing functions that turn type names into values. That's all there is to it. Or in other words, both type classes and Scala's implicits are about return-type polymorphism.
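A sketch of that shared core (Default is an illustrative name):

    // Return-type polymorphism: the type you ask for selects the value you get.
    trait Default[A] { def value: A }

    object Default {
      implicit val defaultInt: Default[Int]       = new Default[Int]    { def value = 0  }
      implicit val defaultString: Default[String] = new Default[String] { def value = "" }
    }

    // Like a Haskell signature defaultValue :: Default a => a:
    // the type name drives instance resolution.
    def default[A](implicit d: Default[A]): A = d.value

    // default[Int]    == 0
    // default[String] == ""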
The only actual differences are that:
1. type classes in Haskell are supposedly coherent, i.e. the compiler is supposed to enforce a single instance per type across the whole project; however, GHC does not in fact enforce that, so you can end up with multiple instances for the same type in the same project
2. Scala's implicits are more flexible in what they allow; for example, there are priority rules for resolving conflicts, which allow implicits to work in the presence of OOP subtyping, or with multiple type params for describing things that are difficult to describe via type classes (but not necessarily desirable, e.g. the CanBuildFrom pattern)
3. working with Haskell's type classes is nicer, because for hierarchies in Scala you end up dealing with OOP issues (i.e. in modeling Haskell's type class hierarchies, we ended up debating inheritance vs composition), plus we do a lot of manual plumbing in libraries like Typelevel Cats
But trust me when I say this, fundamentally they are the same ;-)
I don't think you understand my point: the implementation doesn't matter; what matters is the user's mental model. The mental model of typeclasses is focused on a specific, clearly defined use.
> 1. type classes in Haskell are supposedly coherent, i.e. the compiler is supposed to enforce a single instance per type across the whole project; however, GHC does not in fact enforce that, so you can end up with multiple instances for the same type in the same project
Could not agree more. I hate when developers try to be "clever" at the cost of legibility and simplicity. Scala is full of this "cleverness" built into the language itself.
Well, having just bailed on a new job run by a person who pushed implicits, I came to realize that the use of implicits essentially kept him one step ahead of others in understanding what, explicitly, his code was meant to do. Like a primal nerd turf war, for no group benefit.
There is also some interesting work (for Dotty only, at the moment) on doing the type checking in two phases, which would allow the second phase to be parallelized. This could be paired with the Rsc typechecker for the first pass, for even bigger improvements.
As to technical interviews I've always believed the rule was: ask a silly question, get a silly (but still enlightening) answer. That series of blog posts is the height of this precept.
If you keep an eye on Indeed.com and Angel.co you'll notice a decline in Scala adoption over the last few years. I can't help thinking implicits are partly responsible for this. They're certainly the main reason I ditched Scala. There seem to be more examples with Scala than with any other language of a team trying it and then switching to something simpler.
I think implicits are fine as long as they're used primarily for the typeclass pattern. The extension-method and conversion aspects of implicits should be confined to library DSLs.
I wonder if implicits could present a security risk, insofar as the compiler may grab some (perhaps mistakenly in-scope) identifier which has sensitive data on it.
I commonly see this assertion, but it never made much sense to me: Rust is not that complex, and it's certainly nowhere near the complexity of Scala or C++, let alone worse (to me, the sole topic of C++ constructors & assignment feels more complex than the entirety of Rust).
What it is is highly front-loaded, especially through the early learning cliff of integrating the borrow checker.
Meh, I would say Rust is just as complex as Scala. I actually don't think Scala is all that complex: it has few features, they're just quite powerful and general ones, which take some getting used to, I think.
I believe a big part of the famous JavaScript semantics can be emulated in Scala with a library.
Once upon a time I wrote a toy Android app with Scaloid and tried to use some on-device print-debugging. toast("foo") worked just fine. toast(42) crashed at runtime. It turns out Scaloid offers an implicit conversion from Int to CharSequence, and it doesn't even turn 42 into anything like "42". I'm not kidding [1]. In Android you normally have "resource identifiers" as integers with no compile-time type information, and then at runtime you never use them as actual numbers; instead, every single time you pass them to some function like getText or getDrawable. Sounds like the perfect case for an implicit conversion, right?
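A sketch of the hazard (Scaloid's actual conversion goes through Android's resource lookup; this is a simplified stand-in):

    import scala.language.implicitConversions

    object ResourceConversions {
      // Stand-in for Android's getText(id): looks the Int up as a resource id.
      def lookupResource(id: Int): CharSequence =
        sys.error(s"no resource with id $id")

      // A deliberately broad conversion in the spirit of "Int is a resource id":
      implicit def intToCharSequence(id: Int): CharSequence = lookupResource(id)
    }

    def toast(msg: CharSequence): Unit = println(msg)

    import ResourceConversions._
    toast("foo") // fine
    toast(42)    // compiles, because 42 is silently converted; crashes at runtime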