To an extent. In libraries for third parties you do need to test the contracts specified by your docs, but a well-written Ruby library should intentionally avoid over-testing types (both in separate tests and in code) and focus on testing behaviours.
Sure, test for sane failure modes in line with the documented contract, and that may include the occasional test that is de facto a type test. But testing for types, especially in languages with poor type systems, but also in Ruby where we have alternatives, often ends up as tests for specific classes, which is frequently the wrong thing.
E.g. in Ruby, never, ever check for somearg.kind_of?(IO) if all you ever do is somearg.read. If you absolutely must type-check somearg, the Ruby way is to check for the presence of "#read", e.g. somearg.respond_to?(:read), or to try and fail responsibly (and often just letting the NoMethodError bubble up is the right way to fail).
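Concretely, that looks something like this (the helper and argument names are made up purely for illustration):

require 'stringio'

# Hypothetical helper: all it needs from its argument is #read
def read_config(source)
  raise ArgumentError, "expected something that responds to #read" unless source.respond_to?(:read)
  source.read
end

read_config(File.open(__FILE__))        # a real IO works
read_config(StringIO.new("key=value"))  # so does anything else with #read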
Also, I think it's just fine for people to add and ship Sorbet type declarations for gems etc. to signal those contracts, so people can verify them if they choose. There's no reason not to offer that when it's reasonable to do so.
I have essentially zero experience coding in Ruby, so I can't comment on that case.
In Python or JavaScript/TypeScript, though, my experience is that failing to validate types at the borders (i.e. every single function/method/constructor/generator/... exposed in your API) is pretty much guaranteed to end up, months later, with developers attempting to sherlock out surprising breakages in production from logs that make no sense and traces that do not show anything remotely close to the actual culprit.
The "Ruby way" in this respect tends to be to avoid being over-prescriptive. That doesn't mean "do no checks", but "do only the checks actually needed" with a very different expectation of "what is needed" than in many other languages.
Hence don't check for an IO object when what you care about is the presence of a "#read" method.
Or don't check whether something is an Array if what matters is that it supports "#map". Instead, either check somearg.respond_to?(:map), or, if it needs more than map, somearg.kind_of?(Enumerable) (this seems broad, but supporting Enumerable only requires implementing "#each" and including the module, so in the worst case a caller can reopen a class or wrap their object), or call one of Array(somearg) (tries "#to_ary", then "#to_a", then falls back to returning [somearg]) or Array.try_convert(somearg) (tries "#to_ary", then falls back to nil).
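A rough sketch of those options in practice (the "total" helper is invented for illustration):

# Kernel#Array tries #to_ary, then #to_a, then falls back to wrapping: [items]
def total(items)
  Array(items).sum
end

total([1, 2, 3])   # => 6
total(1..3)        # => 6 (Range has #to_a)
total(5)           # => 5 (no conversion method, so it becomes [5])
total(nil)         # => 0 (Array(nil) is [])

Array.try_convert("not a list")   # => nil (only tries #to_ary)
(1..3).kind_of?(Enumerable)       # => true, so the broader check covers ranges too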
Consistently picking the most generic applicable option when faced with choices like that still enforces the contract at the boundary, but also tends to lead to code with much less ceremony: e.g. far fewer [something]Adapter classes, or glue code to convert data before calling methods that are overly prescriptive.
But in most statically typed languages, when people talk about static typing they still treat a class and a type as interchangeable (there are exceptions, and it's getting better; with increased type inference, coupled with increased support for type annotations and analysis of dynamic languages, I expect increasing convergence, though).
This makes sense, although there are downsides to not naming types, as names serve as documentation. Python documentation has a strong tendency to use descriptions to avoid giving names to types, which imho makes the documentation needlessly confusing.
Note that I'm writing "types" and not "classes". Interfaces do just as well, if not better!
For what it's worth, in the static realm, OCaml introduced features that let code check for the presence of a method (instead of implementing an interface) ~25 years ago. Unfortunately, this proved rather unwieldy and, to the best of my knowledge, nobody uses that feature.
The problem to me of naming types is that a lot of the time there is no meaningful name to give, and so it becomes noise.
When there is a meaningful name to give, in Ruby you might define a module, and include it and use that as an interface:
module HelloWorld
  def hello
    raise "Not implemented"
  end
end

class Foo
  include HelloWorld
end

Foo.new.hello                  # Raises an exception
Foo.new.kind_of?(HelloWorld)   # Returns true
(And anytime you want something that behaves somewhat like an ordered collection, you typically just want object.kind_of?(Enumerable)).
There are plenty of contract-checking frameworks for Ruby [1], but they see little use, because most of our contracts tend to be extremely simple: along the lines of "implements method X" or "includes module Y".
Sorbet, with inference tools and the option of storing types in separate files (it supports inline too), might slowly change that (and effectively let you use modules as "proper" interfaces), since the biggest objections to types in Ruby tend to be the visual noise and the forced refactoring; if the annotations can be turned on/off in your IDE and regenerated, a lot of those objections fall away. It's not that we (well, many of us) don't want the extra type checks, but that we don't want to pay the cost of making the code less readable.
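For a sense of what that looks like, here's a minimal sketch using Sorbet's inline sigs (the Greeter class is invented; the sig could equally live in a separate, generated .rbi file):

# typed: true
require 'sorbet-runtime'

class Greeter
  extend T::Sig

  # Inline signature; tooling can also keep this in an .rbi file instead,
  # so the .rb source stays free of annotation noise
  sig { params(name: String).returns(String) }
  def greet(name)
    "Hello, #{name}!"
  end
end

Greeter.new.greet("world")   # => "Hello, world!"
Greeter.new.greet(42)        # TypeError at runtime; srb tc flags it statically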
It's hard to give a neat example, because it really changes how you write code and it's been a long time since I had to suffer through much static code.
But e.g. consider basically any case with abstract member functions/methods. While that happens in Ruby (or the equivalent: leaving a method which raises an exception unless overridden), it's far less common.
Typically you'll instead implement defaults in a module, if there is a lot of functionality that a client might want for itself. These are generally fine to name, and to type-check by the module name.
But when that is *not* the case (any class consisting mostly of abstract methods), a Ruby implementation would be more likely to instead expect the presence of a single method and embed the default functionality in the calling class. Those are the kinds of scenarios where you end up with the long, contrived class names people tend to mock Java for in particular.
But going further, a Ruby implementation will often instead take an argument that is expected to support "#to_proc", or a block argument, so that you can provide a lambda/closure to do what you'd otherwise put in an "adapter" or "shim" class.
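For example, something along these lines instead of an adapter class (the names are invented for illustration):

# Rather than demanding a Formatter adapter class, accept a block
# (or anything that can be turned into one via #to_proc)
def each_formatted(items, &formatter)
  items.map { |item| formatter.call(item) }
end

each_formatted([1, 2, 3]) { |n| "##{n}" }   # => ["#1", "#2", "#3"]
each_formatted(%w[a b], &:upcase)           # => ["A", "B"] via Symbol#to_proc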
Conversely, any time you demand (whether in a static language by type annotations, or in a dynamic one by checking the class) an object of a very specific class and only call one or a few methods on it, you're sidestepping this problem by avoiding the creation of a more precise interface that'd clutter up the code. You don't create and name interfaces that encompass the specific operations needed by each method, because it'd be a total mess.
A few common examples for which I rely upon static typing (see the sketch after this list):
- I have a proof that operation X is complete, because I have a protocol to follow;
- this string is a Matrix room id (and not a Matrix user id or a Matrix server id, also strings);
- this time is in milliseconds.
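In Ruby terms, a rough sketch of the room-id example using Sorbet wrapper structs (all the class names here are hypothetical):

# typed: true
require 'sorbet-runtime'

# Wrapper types so a room id can't be confused with a user id,
# even though both are backed by plain strings
class RoomId < T::Struct
  const :value, String
end

class UserId < T::Struct
  const :value, String
end

class Client
  extend T::Sig

  sig { params(room: RoomId).void }
  def join(room)
    # ...
  end
end

client = Client.new
client.join(RoomId.new(value: "!abc:example.org"))   # fine
client.join(UserId.new(value: "@bob:example.org"))   # srb tc flags this; sorbet-runtime raises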