
As someone who spent a good chunk of their career teaching TDD, I agree and disagree. Yes, tests are crucial. However, which tests to write varies, among other things, by language. When I write Ruby, I benefit from testing even the simplest things. When I write Rust, or even TypeScript or Swift, I'll focus much more on tests for complex logic and on integration and acceptance tests. Static typing eliminates an entire, large category of issues I would otherwise need to address with tests.



When I write Ruby, I test expected behaviours, and if there are type errors in there those tend to fall out of tests I needed anyway. If you need to test specifically for errors due to types, that generally suggests either that your application does not normally exercise those code paths at all, or that you are failing to test your application's behaviours.
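
As a rough illustration (hypothetical Cart class, using Minitest), a plain behaviour test will surface a type mistake on its own, without any type-specific test:

    require "minitest/autorun"

    # Hypothetical class under test
    class Cart
      def initialize
        @items = []
      end

      def add(price)
        @items << price
      end

      def total
        @items.sum
      end
    end

    class CartTest < Minitest::Test
      def test_total_sums_prices
        cart = Cart.new
        cart.add(3)
        cart.add("4")              # a type mistake slipped in...
        assert_equal 7, cart.total # ...and surfaces here as a TypeError from #total
      end
    end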


100% on testing behavior. That said, I still end up writing more, and lower-level, tests in Ruby to get fast feedback on silly errors, both in terms of the locality of the failing test and in terms of not executing the entire stack to test variations of behavior somewhere deep inside.


Does this include writing libraries for use by third-parties?


To an extent. In libraries for third parties you do need to test the contracts specified by your docs, but a well-written Ruby library should intentionally avoid over-testing typing (both in separate tests and in code) and focus on testing behaviours.

Sure, test for sane failure modes in line with the documented contract, and that may include the occasional test that is de facto a type test. But testing for types, especially in languages with poor type systems (though also in Ruby, where we have alternatives), often ends up as tests for classes, which is frequently the wrong thing.

E.g. in Ruby, never, ever check for somearg.kind_of?(IO) if all you ever do is somearg.read. If you absolutely must typecheck somearg, the Ruby way is to check for the presence of "#read", e.g. somearg.respond_to?(:read), or to try and fail responsibly (and often just allowing the NoMethodError to bubble up is the right way to fail).
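
A minimal sketch of that duck-typing check (hypothetical read_config helper):

    require "stringio"

    # Hypothetical helper: accept anything that can be read, not just IO
    def read_config(source)
      raise ArgumentError, "expected something responding to #read" unless source.respond_to?(:read)
      source.read
    end

    read_config(StringIO.new("key: value"))  # => "key: value" -- not an IO, but that's fine
    # read_config(42)                        # would raise ArgumentError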

Also, I think it's just fine for people to add and ship Sorbet type declarations for gems etc. to signal those contracts, so people can verify them if they choose. There's no reason not to offer that when it's reasonable to do so.


I have little to no experience coding in Ruby, so I cannot comment on that case.

In Python or JavaScript/TypeScript, though, my experience is that failing to validate types at the borders (i.e. every single function/method/constructor/generator/... exposed in your API) is pretty much guaranteed to end up, months later, with developers attempting to sherlock out surprising breakages in production from logs that make no sense and traces that do not show anything remotely close to the actual culprit.

I have the scars to show it :) Of course, YMMV.


The "Ruby way" in this respect tends to be to avoid being over-prescriptive. That doesn't mean "do no checks", but "do only the checks actually needed" with a very different expectation of "what is needed" than in many other languages.

Hence don't check for an IO object when what you care about is the presence of a "#read" method.

Or don't check if something is an Array if what matters is that it supports "#map". Instead, either somearg.respond_to?(:map), or if it needs more than map, somearg.kind_of?(Enumerable) (this seems broad, but supporting Enumerable only requires implementing "#each" and including the module, so a caller "worst case" can reopen a class or wrap their object), or call one of Array(somearg) (tries "#to_ary", then "#to_a", then falls back to returning [somearg]) or Array.try_convert(somearg) (tries "#to_ary", then falls back to nil).
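
A small sketch of those options (hypothetical first_three and pluck_names helpers):

    # Hypothetical helper that only needs something convertible to an Array
    def first_three(collection)
      Array(collection).take(3)    # tries #to_ary, then #to_a, then wraps in [collection]
    end

    first_three([1, 2, 3, 4])      # => [1, 2, 3]
    first_three(1..10)             # => [1, 2, 3]    (Range supports #to_a)
    first_three("just one")        # => ["just one"] (falls back to wrapping)

    # Or check capability rather than a specific class:
    def pluck_names(records)
      raise ArgumentError, "expected something enumerable" unless records.kind_of?(Enumerable)
      records.map { |r| r.fetch(:name) }
    end

    pluck_names([{ name: "a" }, { name: "b" }])  # => ["a", "b"]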

Consistently picking the most generic applicable options when faced with choices like that still enforces the contract on the boundary, but also tends to lead to code with much less ceremony. E.g. far fewer [something]Adapter classes, or glue code to convert data before calling methods that are being overly prescriptive.

But for most statically typed languages, when people talk about static typing they still treat a class and a type as interchangeable (there are exceptions, and it's getting better; with increased type inference, coupled with growing support for type annotations and analysis of dynamic languages, I expect an increasing convergence, though).


This makes sense, although there are downsides to not naming types, as these serve as documentation. Python documentation has a strong tendency to use descriptions to avoid giving names to types, which imho makes the documentation needlessly confusing.

Note that I'm writing "types" and not "classes". Interfaces do just as well, if not better!

For what it's worth, in the static realm, OCaml introduced features that let code check for the presence of a method (instead of requiring an interface to be implemented) ~25 years ago. Unfortunately, this proved rather unwieldy and, to the best of my knowledge, nobody uses that feature.


The problem to me of naming types is that a lot of the time there is no meaningful name to give, and so it becomes noise.

When there is a meaningful name to give, in Ruby you might define a module, and include it and use that as an interface:

    module HelloWorld
      def hello
        raise "Not implemented"
      end
    end

    class Foo
      include HelloWorld
    end

    Foo.new.hello # Raises an exception

    Foo.new.kind_of?(HelloWorld) # Returns true

(And anytime you want something that behaves somewhat like an ordered collection, you typically just want object.kind_of?(Enumerable)).

There are plenty of contract-checking frameworks for Ruby [1] but they see little use, because most of our contracts tend to be extremely simple. Along the lines of "implements method X" or "includes module Y".

Sorbet with inference tools, and optionally storing the types in separate files (it supports inline too), might slowly change that (and effectively let you use modules as "proper" interfaces). The biggest opposition to types in Ruby tends to be the visual noise and the forced refactoring, and if the annotations can be turned on/off in your IDE and regenerated, a lot of the objections fall away. It's not that we (well, many of us) don't want the extra type checks, but that we don't want to pay the cost of making the code less readable.
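
For illustration, a minimal inline Sorbet signature might look like this (hypothetical Greeter class):

    require "sorbet-runtime"

    class Greeter
      extend T::Sig

      # The sig documents the contract; srb tc checks it statically and
      # sorbet-runtime checks it when the method is called.
      sig { params(name: String).returns(String) }
      def hello(name)
        "Hello, #{name}!"
      end
    end

    Greeter.new.hello("world")  # => "Hello, world!"
    Greeter.new.hello(42)       # flagged by srb tc; raises a TypeError at runtime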

[1] Here's one: http://egonschiele.github.io/contracts.ruby/ and of course there's Sorbet: https://sorbet.org/blog/2020/07/30/ruby-3-rbs-sorbet


> The problem to me of naming types is that a lot of the time there is no meaningful name to give, and so it becomes noise.

Do you have examples? That's not my experience.

Of course, we may simply be working on very different types of code :)


It's hard to give a neat example, because it really changes how you write code and it's been a long time since I had to suffer through much static code.

But consider, e.g., almost any case with abstract member functions / methods. While that happens in Ruby (or the equivalent: leaving a method which raises an exception unless overridden), it's far less common.

Typically you'll instead implement defaults in a module (if there is a lot of functionality that a client might want for itself). These are generally fine to name, and to type-check by the module name.
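
A sketch of that pattern with the standard Enumerable module (hypothetical Playlist class): the class supplies only "#each", the module supplies the defaults, and callers can check by the module name.

    # Hypothetical class: only #each is implemented; Enumerable supplies the rest
    class Playlist
      include Enumerable

      def initialize(*tracks)
        @tracks = tracks
      end

      def each(&block)
        @tracks.each(&block)
      end
    end

    playlist = Playlist.new("intro", "verse", "chorus")
    playlist.map(&:upcase)         # => ["INTRO", "VERSE", "CHORUS"] (default from Enumerable)
    playlist.kind_of?(Enumerable)  # => true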

But when that is *not* the case (any class consisting mostly of abstract methods), a Ruby implementation would be more likely to instead expect the presence of a single method and embed the default functionality in the calling class. Those are the kinds of scenarios where you otherwise end up with the long, contrived class names people tend to mock Java for in particular.

But going further, a Ruby implementation will instead often take an argument that is expected to support "to_proc" or a block argument, so that you can provide a lambda/closure to do what you'd otherwise put in an "adapter" or "shim" class.
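
A small sketch of that style (hypothetical format_all method): the caller passes a block, or anything supporting "#to_proc", instead of wrapping their logic in an adapter class.

    # Hypothetical method: takes a block instead of demanding a Formatter object
    def format_all(items, &formatter)
      items.map { |item| formatter.call(item) }
    end

    format_all([1, 2, 3]) { |n| "##{n}" }  # => ["#1", "#2", "#3"]
    format_all(%w[a b c], &:upcase)        # => ["A", "B", "C"] -- Symbol#to_proc, no adapter class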

Conversely, any time you demand (whether in a static language via type annotations, or in a dynamic one by checking for the class) an object of a very specific class and only call one or a few methods on it, you're sidestepping this problem by avoiding the creation of a more precise interface that'd clutter up the code. You don't create and name interfaces that encompass the specific operations needed by each method, because it'd be a total mess.


Fair enough.

A few common examples for which I rely upon static typing:

- I have a proof that operation X is complete, because I have a protocol to follow;

- this string is a Matrix room id (and not a Matrix user id or a Matrix server id, also strings);

- this time is in milliseconds.
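
Purely as an illustration of the second point in the thread's Ruby terms, a rough Sorbet sketch (hypothetical RoomId/UserId/Client names) that keeps the two kinds of id from being swapped:

    require "sorbet-runtime"

    # Hypothetical wrapper types: both hold a String, but are not interchangeable
    class RoomId < T::Struct
      const :value, String
    end

    class UserId < T::Struct
      const :value, String
    end

    class Client
      extend T::Sig

      sig { params(room: RoomId, user: UserId).void }
      def invite(room, user)
        # passing (user, room) in the wrong order is rejected by srb tc and at runtime
      end
    end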

Sounds like very different use cases :)


Sure! But you need to be a testing practitioner anyway. I agree that perhaps the number of tests is different, but the team's processes and culture should already be there.

But to counter myself, the perceived lack of developer speed and flexibility in a strongly typed language (with Rust, I always get the feeling that I'm fighting the compiler!) is also solved in the long run with practice and tooling.



