Having recently worked with a Ruby codebase where anything could be nil at any time, I consider even references to nil to be a code smell. Developers are inherently prone to ignoring error handling, so they'll happily ignore the fact that Order#customer really means Order#customer_or_nil. And when they see errors in production, they'll slap an andand on it and call it fixed. The code becomes extremely difficult to reason about.
I prefer that functions throw an exception when unable to do what they promised, rather than return nil or a null object. The try-catch block serves as documentation that the function might fail. If someone forgets to catch the exception, it will shut everything down rather than leave the program in an unanticipated state that could lead to an error elsewhere.
Nils are a very bad code smell. They come from C's null, which is a billion-dollar mistake[1], according to its creator, Tony Hoare. Especially now that we have monads[2, 3].
Scala's standard library provides very helpful information on how to replace null with its Maybe class (called Option in Scala). Just take a peek into their collections library[4], and search for Option.
Nils are a very bad code smell. They come from C's null, which is a billion-dollar mistake[1], according to its creator, Tony Hoare. Especially now that we have monads[2, 3].
Do you have a source for this? I was under the impression nil was directly taken from Smalltalk, which derived it from Lisp.
The null reference was invented by C.A.R. Hoare in 1965 as part of the Algol W language. Hoare later (2009) described his invention as a "billion-dollar mistake".
Nil was a part of Lisp 1, which is why I was confused. It predates Algol W by seven years. As null references and nil are somewhat different beasts, I'm not sure the criticism "billion dollar mistake" applies fully.
This is something that's nice about ML-family languages. Data is not "nullable" by default. If you want to introduce that possibility, you need to wrap it in some form of "option" type - and you're forced to deal with the consequences of that explicitly.
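A rough Ruby approximation of that explicitness is an Option-like wrapper, so a caller can't use a possibly-absent value without deciding what to do when it's missing. This is a sketch, not a real library; the class and method names are illustrative:

```ruby
# Hypothetical Option-like wrapper: the caller must supply a
# fallback instead of silently receiving nil.
class Maybe
  def initialize(value)
    @value = value
  end

  # Return the wrapped value, or the supplied default when absent.
  def get_or_else(default)
    @value.nil? ? default : @value
  end
end

Maybe.new(nil).get_or_else("fallback")  # => "fallback"
Maybe.new(42).get_or_else(0)            # => 42
```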
So now you can call nil.whatever.you.like and get nil. It's also still falsy, but now every nil value in your app silently works and doesn't throw an error.
It's both impressive and scary that Ruby lets you (or that new contractor you're not sure about) change the fundamentals of the language in one line of code.
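For reference, the kind of one-line patch being described might look like this. It is a sketch of the danger under discussion, not a recommendation:

```ruby
# Dangerous sketch: after this patch, any message sent to nil
# silently answers nil instead of raising NoMethodError.
class NilClass
  def method_missing(*args)
    nil
  end
end

nil.whatever.you.like  # => nil, and no error is raised
```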
I've said this in a comment at rosania.org, but I think it bears repeating here: this is why we need a #blank? or #null? protocol that user classes can participate in as part of core. #nil? and an inextensible FalseClass just aren't enough to do the sorts of thing that Avdi was trying to do in his original post. My tentative proposal is languishing here: http://redmine.ruby-lang.org/issues/5372
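A rough sketch of what such a protocol could look like, close in spirit to ActiveSupport's #blank? but shown here as a plain monkey-patch (the fallback logic is an assumption, not the proposal's exact semantics):

```ruby
# Sketch: every object answers #blank?; user classes can override it.
class Object
  def blank?
    respond_to?(:empty?) ? !!empty? : !self
  end
end

nil.blank?    # => true
"".blank?     # => true
0.blank?      # => false (0 is truthy, hence not blank here)
[1, 2].blank? # => false
```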
As an application creator, it is only bad when/if you open-source part of it. Otherwise, you can always run the gems' tests against the compilation of your env.
Conceptually, nil != false. You shouldn't use 'nil' to mean 'false'. You should always test explicitly for nil-ness using #nil?. If you write
if finished?
  do_something
end
then finished? is not expected to, and may not, return nil.
If you expect some_value to be able to be nil (which, for boolean values, is hardly ever), you should write
if !some_value.nil? && some_value
  do_something
end
In the end, these 'convenient' coercions that allow you to use any value as a boolean only come back to bite you, because unexpected stuff happens when stuff is unexpectedly nil. If it makes my coworkers cry that an object is nil and true at the same time, then they are at fault, not me.
Edit: and more importantly (I forgot to point this out explicitly) a non-nil value != true. That any string or integer is 'true' can lead to very hard-to-find bugs.
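Concretely, Ruby treats every object except nil and false as truthy, which is what makes these bugs hard to spot:

```ruby
# Only nil and false are falsy in Ruby; everything else is truthy,
# including values that read like "false" or "nothing".
["", 0, "false", []].each do |value|
  puts "#{value.inspect} is truthy" if value
end
```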
Such code is everywhere in the Ruby community and explicitly endorsed by many. I have never seen code like your second example in the wild, and it's far more complicated than the idiomatic way of writing it:
if some_value
  do_something
end
It's a core expectation of the language that !!nil_thing == false, so why violate it?
You often don't actually want do_something to be executed when some_value is nil instead of false. And when it is, you go scratching your head and searching for where some_value came from, only to discover some silly typo.
In some ways, the reverse is even worse: when you write
if some_value
  do_something
end
and you change some_value from nil to some sensible default like '', you will forget to update this clause and you will not understand why the code is being executed. If you use
if !some_value.nil?
  do_something
end
you don't have that problem, because the nil-test sticks out like a sore thumb.
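A minimal sketch of that failure mode, using an illustrative helper that relies on bare truthiness:

```ruby
# Illustrative helper: depends on bare truthiness of its argument.
def label(value)
  value ? "present" : "absent"
end

label(nil)  # => "absent"  -- nil doubles as the 'no value' default
label("")   # => "present" -- after a refactor to a "" default,
            #    the branch flips without any error being raised
```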
I disagree. I find it most natural to assume nil and false are negatively charged because that's how the language is defined. If I want to differentiate between nil and false, I use .nil? or == false.
Somehow this has never been a problem for me in eight years and hundreds of thousands of Ruby lines, but every programmer is different, and I'd like to hear other views.
change the base case of some_value from nil to 0, nil to NullObject
change some_value from nil to some sensible default like ''
What, what, what are you doing? Who uses an empty string to mean "uninitialized"? Nil and (0, "", etc.) have very different semantic meanings. How would you discriminate between the default "" and a user-submitted ""? If you mean to check whether it has changed from the default, compare it to a default constant. If you mean to check emptiness, use #empty? or #blank?. I read these kinds of explicit .nil? checks as fighting the language, and in some ways, a violation of the cultural contract in Ruby and Lisp around truth charge.
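One way to keep those cases apart is a unique sentinel object as the default constant; all names here are illustrative:

```ruby
# A unique sentinel object can never collide with user-submitted data.
DEFAULT = Object.new

def describe(name)
  if name.equal?(DEFAULT)
    "unchanged from the default"
  elsif name.empty?
    "user submitted an empty string"
  else
    "user submitted #{name.inspect}"
  end
end

describe(DEFAULT)  # => "unchanged from the default"
describe("")       # => "user submitted an empty string"
```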
I agree with aphyr's response to you. This has never been a problem for me. I think your issue is that you consider an empty string to be a sensible default representing false or nil.
I fully agree that those sane defaults indeed aren't right for representing false or nil. What I'm arguing is rather the reverse: nil is often initially used as an (insane?) default and that is later changed without updating all related checks (because they don't stick out and demand attention, which makes them easy to overlook or 'overthink'), which causes bugs.
This could lead to tri-state ifs (with true/false/nil branches).
Or you can just consider that 2-branch 'if' runs the first branch if the test expr is true and the second 'otherwise'. (So that doesn't make nil false, it just makes it non-true).
And then your interpretation and everyone else's co-exist and we can all code together.
You probably shouldn't use 'nil' to mean 'false', but since pretty much every compiler / interpreter out there will take the same branch when given a condition that's either nil or false, I'd say it's much safer to stick w/ what's existed for decades rather than accommodate some sort of tri-state boolean.
See my response to necubi. The main problem is the reverse:
if some_value
  something with some_value
end
where you first use nil to signal false, but later change the base case of some_value from nil to 0, nil to NullObject or nil to some other base value. Since you think of these base cases as 'nothing', you will forget to change this related conditional the first time around, because it seems it would still do the right thing.
Isn't this a flaw in Ruby, though? It also means that you can't create a delegate/decorator/proxy object for a false object and have it be false as well, which goes against the general "everything is determined by sending messages" vibe Ruby has going on.
I know the reasoning (it's much faster to do math on object IDs than it is to call a method), but there are workarounds for this (e.g. only allowing frozen objects to be boolean-false, and having the object's truth-ness/false-ness represented by the return value of its #false? method at the time of freezing—thus allowing the interpreter to locate it in a semantically-meaningful part of object-ID space that can later be masked for in a TEST instruction.)
With infinite spare time, what I'd try would be to make NilClass usable as a base class. The way that would work would be to have all instances of classes inheriting from NilClass live in the 0bXXX...XXX100 object ID space. Nil is currently 0b100, which is consistent, and this doesn't clash with any of the other immediates - it just translates to a different memory alignment rule in MRI.
The ruby people will tell you that ruby's x, where x is power, flexibility, etc., comes from the ability to shoot yourself in the foot so just don't shoot yourself in the foot.
Speaking as a Ruby user, I think nil and the entire behaviour around truthiness and falseness are wrong. I should be able to create my own nil objects, and I especially would love to be able to create false objects that carry more information.
For example (and there are code smells in this, but it gets the idea across):
if account = Account.create(params[:account])
  ...
else
  complain_about account.errors
end
Truthiness should not be reserved for whether an object exists; it could also be used for whether it is valid, complete, or anything else.
My particular example might not be a great one, but I think framework and library developers ought to be able to work a consistent use for truthiness and falseness into their creations, e.g. a true object has been saved, a true object is valid, a true object is complete, a true object represents the current state of the world instead of the past or future or a wrong state, and so forth.
This is the next step for my #blank? proposal over here: http://redmine.ruby-lang.org/issues/5372 - to make #blank? or #null? objects evaluate as false in boolean expressions.
To me this isn't a problem with nil. This is a problem with the create method. create either returns you the object or something ambiguous (nil). It COULD return you the object or something useful, like an error class, or an error code, or whatever...
If you care enough about the state of what's being returned then it would be trivial to test if the returned object was of the class you were expecting (Account) and if not handle what it did return (some error indication) as appropriate.
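That check is easy to sketch, assuming a create that returns either the record or an error object; the Struct stand-ins below are purely illustrative:

```ruby
# Illustrative stand-ins for the result types under discussion.
Account = Struct.new(:name)
CreateError = Struct.new(:message)

# Dispatch on the class of the result instead of its truthiness.
def handle(result)
  case result
  when Account then "created #{result.name}"
  else "failed: #{result.message}"
  end
end

handle(Account.new("alice"))       # => "created alice"
handle(CreateError.new("invalid")) # => "failed: invalid"
```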
I'm ok with that. BUT if that's how we want to go, why not ditch truthiness for non-nil objects altogether? My complaint is that truthiness is baked into the language in such an inflexible way.
I'm not sure how I feel about that. It just seems so useful to have it, BUT I think there's definitely a strong argument to be made for your proposition.
Alternately, why not just switch to a functional language where, it seems, ambiguity and other related problems rarely make it past the bouncer at the front door.
I think this is a design issue that is orthogonal to functional languages or static typing. We could easily create a language where writing "if foo" causes an error when foo doesn't resolve to an instance of Boolean. If we like, we could add coercion to boolean through the #to_b method (although I have issues with implicit coercion).
At a deep level, I wonder if my issue is with if statements and boolean operators being magic outside of the object system. Smalltalk gets this right. Scheme gets this right. If "if" is defined in the standard library rather than being a magic syntactic construct, we can write our own control primitives:
provided account = Account.create(params[:account])
  ...
end
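Since Ruby blocks already allow method-based control flow, a rough approximation of such a construct is possible today. The name provided and the #valid? protocol here are assumptions for illustration:

```ruby
# Sketch: run the block only when the value answers #valid? truthily,
# falling back to plain truthiness for objects without that protocol.
def provided(value)
  ok = value.respond_to?(:valid?) ? value.valid? : value
  yield value if ok
end

# Illustrative record type carrying its own validity.
Record = Struct.new(:valid_flag) do
  def valid?
    valid_flag
  end
end

provided(Record.new(true))  { |r| puts "saved" }  # block runs
provided(Record.new(false)) { |r| puts "saved" }  # block skipped
```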
That added flexibility comes at the cost of standardization.
There are practical reasons that we're not all coding in Io. Allowing for the redefinition of control primitives is a slippery slope. In short order, we may have apps that read well to us but aren't clear to others.
I think truthiness and validity/completeness are different things. Overloading truthiness sounds to me like a recipe for confusion and less expressiveness.
That said, I'm all for experimentation and would love to see this idea tried.