
It can, using pure or return. Or, if you're working with Maybe specifically, Maybe is defined like so:

data Maybe a = Just a | Nothing

So to make an X a Maybe X, you'd put a Just before a value of type X.

For example:

    one :: Int
    one = 1

    mOne :: Maybe Int
    mOne = Just one -- alternatively, pure one, since our type signature tells us what pure should resolve to.

The reason we can do this is that Maybe is also an Applicative and a Monad, so it implements pure and return, which take an ordinary value and wrap it up in the structure we want.
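
A quick GHCi illustration (the type annotation is what pins pure and return down to Maybe here):

    ghci> pure 1 :: Maybe Int
    Just 1
    ghci> return 1 :: Maybe Int
    Just 1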



Sounds similar to how you need to write Some(x) when passing x to something expecting an Option in Rust.

Swift, interestingly, doesn't require this, but only because Optionals are granted lots of extra syntactic sugar in the language. It's really wrapping it in .some(x) for you behind the scenes; the compiler can just figure this out on its own.

This means that in Swift, changing a function from f(T) to f(T?) (i.e. f(Optional<T>)) is a source-compatible change, albeit not an ABI-compatible one.


Isn't that explicit casting? Implicit casting would be automatically performed by the compiler without the need to (re)write any explicit code.


> mOne = Just one

I'd call that explicit casting. Implicit casting would be

    mOne = one
The compiler already knows what "one" is; it could insert the "Just" itself, no? Possibly via an operator defined on Maybe that does this transformation?
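
For reference, here's roughly what GHC says about that implicit version today (error message paraphrased):

    one :: Int
    one = 1

    mOne :: Maybe Int
    mOne = one
    -- error: Couldn't match expected type 'Maybe Int' with actual type 'Int'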

That is, are there some technical reasons it doesn't?

Or is it just (no pun intended) a language choice?


Why would this be useful? Why do you want the types to change underneath you?


Better question: Why would you want your call site code to break when your type signature gets changed in a way that doesn't necessitate breaking anything?
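
To make the kind of change under discussion concrete, here's a minimal sketch (the function name is made up, and the "after" version gets a new name only so both fit in one snippet):

    -- Before the signature change:
    describe :: Int -> String
    describe n = "got " ++ show n
    -- call site: describe 3

    -- After widening the argument to Maybe Int:
    describe' :: Maybe Int -> String
    describe' (Just n) = "got " ++ show n
    describe' Nothing  = "got nothing"
    -- the old call shape, describe' 3, no longer type checks;
    -- every caller has to become describe' (Just 3)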


Because what you're asking for precludes the concept of mathematical guarantees. I'm not taking your question at face value, because you could be asking why call site code should break when the type signature generalises (which is a useful thing), but that's not what you're asking.

It seems you're asking for code to be both null safe and not null safe simultaneously.

Having a language just decide that it would like to change the types of the values flowing through a system is wild. It's one of the reasons that JavaScript is a trash fire.


You are misunderstanding things.


I’m certainly misunderstanding why so many people in this thread insist on speaking authoritatively on a topic they clearly know very little about.


Because it otherwise forces the caller to have an extra explicit step that doesn't really contribute to anything. It's a trivial transform, and as such just gets in the way of what the code actually does.

Of course with great power comes great responsibility, so it's a tool that should be used sparingly and deliberately.

Now, as mentioned, I don't use Haskell, but that's why I like it in other languages.

I asked because I was curious whether there was something that prevented this in Haskell, beyond a design choice.


A good reason to use Haskell is that it generally guides the programmer away from doing things like this.

If you make a breaking change to your API, then you should want your tools to tell you loud and clear that it’s a breaking change.

I also don’t agree that keeping the structure around values internally logically consistent “doesn’t really contribute to anything”. On the contrary, I think this idea is hugely important. How would your idea generalise? The compiler should just know that my `Int` should be a `Maybe Int` and cast it for me. Should the compiler also know that my `[a]` should be cast to a `(a, b)` because incidentally we’re fairly confident that list should always have two elements in it?

I think if this way of thinking is unfamiliar, then it’s a good reason to learn Haskell (or Elm, which is at least as good, or maybe better, for driving this point home).


> The compiler should just know that my `Int` should be a `Maybe Int` and cast it for me.

The compiler should not "just" know it. It would know it because we told it how.

Consider a function that takes a float and returns a complex number. I then change the function to take a complex number ("Complex Float" in Haskell, if I read the docs right) and return a float.

I could then tell the compiler, by implementing an implicit cast operator, how to cast float to complex. The implicit part is then that the compiler tries it without me telling it to use the cast explicitly.

Then any code that worked with the old function should work perfectly fine with the modified function, without modification, since by definition the reals are contained in the complex numbers.

This is how I do it in several languages I've used.
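
In Haskell terms, the conversion itself is one line to write explicitly; what the language leaves out is any mechanism for applying it implicitly. A minimal sketch (toComplex is a name I made up, not a library function):

    import Data.Complex

    -- Explicit embedding of the reals into the complex numbers.
    toComplex :: Float -> Complex Float
    toComplex x = x :+ 0

    -- A caller has to spell the conversion out; GHC won't insert it.
    example :: Float
    example = magnitude (toComplex 3.0)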


But now you’re talking about something else, aren’t you? Now you’re talking about generalising. You can generalise, for example, from Float to Floating a => a. But I don’t understand how the original Int to Maybe Int change could be sensible. How does that work?
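
A sketch of that kind of generalisation, with a made-up function name:

    -- Before: only works on Float.
    circleArea :: Float -> Float
    circleArea r = pi * r * r

    -- After: generalised. Every existing call site, e.g. circleArea 2.0,
    -- still compiles, because Float is an instance of Floating.
    circleArea' :: Floating a => a -> a
    circleArea' r = pi * r * r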


The exact same way? Or perhaps you could tell me: in which cases can an Int not be turned into a Maybe Int?

Again, I don't know Haskell, so from the outside it looks much the same as the float -> complex conversion.


It sounds like you need to learn about parametricity.

https://www.well-typed.com/blog/2015/05/parametricity/


It doesn't sound like that to me. Could you not just answer his question?


To whom shall I send the invoice?



