Hacker News

Honestly I don't get why some people want to move to static types. Ruby is a dynamic language, that's the point of it... Giant orgs can just use Java or something. We need some languages to stay productive for those of us who work solo or in small groups. If I wanted static types I'd use Java, Go or something (probably Haskell).



This is a set of statements that strike me as pretty unreflective of the state of things these days. I have slung a lot of Ruby in my life and I literally-not-figuratively stopped the second I laid my hands on TypeScript because we've hit the point where gradual static typing is both easily available and super easy to work with. (And there's also Rust, which can scratch a whole different set of itches that I don't happen to personally have.)

If your idea of static typing is Java or Go or "something", if your idea of it is that it's for "giant orgs" and not improving your own correctness and throughput of code (I write better code, faster, in TypeScript than I ever did in Ruby!), yeah, it might not make sense. But that's a pretty out-of-date take, I think, and the wins the suitably plastic individual can get, even on a solo or small-team project, are significant.


The static languages I've used most are actually Haskell and recently Pony (mostly for things that involve a bunch of maths and processing in parallel). Just brought Go/Java up because it seems everyone is trying to turn every language into those.

I don't want Ruby to become another TS because I'd use a typed language if I wanted one. The problems I use Ruby for don't need the speed of a statically typed language, and it's nice using a dynamic one for them.


I think what the parent comment is saying (that I agree with) is that it's easy to over-estimate the cost and under-estimate the value. You don't _have_ to do anything with types, but it's another tool to use where it makes sense. Or put another way, provided it's not mandatory what's the downside to having another tool in the toolbox?


The downside is the ecosystem turning into the JS/TS ecosystem, where the zealots have pushed TS to the point that it all just looks like C# and what makes JS nice is being lost.


Do you have any specific examples of things that make JS nice that have been lost because of the popularity of TS? As with types in Ruby, I was under the impression nothing was really lost as TS could be added incrementally as developers saw value.

I don't think I've talked to anyone who has gotten "over the hump" with TS and felt like they were missing something from JS - I may be in a bubble though. It's almost frustratingly beloved in my experience.


I’ve been writing TS for a year now and I find TS annoying. Especially for react components with state management of some kind, the types get so complex you almost need unit tests to assert they are what you think they are. Additionally, TS being a structural type system with no access to nominal types at all eliminates a whole class of “ghosts of departed proofs” modeling techniques. (And, I know you can work around this, but those workarounds are ugly.)

My view is that I’d use TS if I have to but I’d pick either plain JS / CLJS or something like purescript if I really wanted types.


The year is 2021.

People argue for a toolchain that takes a static language, compiles it to a dynamic one, then runs it with an interpreter on the server side.

Humanity ended shortly thereafter.


I haven’t touched the ecosystem really for a number of years, but my last JS project was in TS and I hated it for the same reason: the overhead of bringing in yet more npm modules and types and build steps for questionable levels of type safety. Languages with much better type systems exist.


Such as?


The dynamic nature of JavaScript is lost when using TypeScript.

Although TypeScript can be added incrementally, in practice I've only seen it totally replace JavaScript.

While I do believe more people prefer TypeScript over JavaScript, I think it's because those people never deep dived JavaScript or bothered to learn it enough to see how powerful it really is. There are also people that just prefer typed languages and will shun languages without static types.

This really is a religious war and boils down to one's opinion. I like JavaScript without typing.


Once you have more than one team in the same codebase the opinions start to line up as people start to think the types will save them from stepping on each others' toes. Which it might! Publishing types is also a poor-man's contract testing, so people who like that sort of thing will like that sort of thing.


I have implemented a JavaScript virtual machine. I feel confident in my knowledge of JavaScript as a language and as an operating environment.

What power am I no longer able to leverage using TypeScript when it's appropriate to do so?


Why the downvoting? You somehow upset the typed mob I guess...


1. What's wrong with C#?

2. What made JS uniquely nice?

I wanted to leave these questions entirely open to answer, but I'll add my own opinion on (2) because I feel compelled: nothing, JS is quite possibly the worst language ever designed. Certainly the worst in widespread use.


Why is JS one of the worst languages ever designed? I've used many languages and JS is one of the better designed ones in my opinion.


>Why is JS one of the worst languages ever designed?

Implicit type conversions.

Function scoping.

Null and undefined, what is the difference?

Accessing an undefined property doesn't throw an exception; it silently yields undefined.

Assigning to an undeclared variable without var puts it in the global scope.

No integer type.

I could go on and on. The only reason it has become successful is because it has a monopoly in the browser.
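A few of these are easy to demonstrate in any JS engine (a quick sketch, not exhaustive):

```javascript
// Implicit type conversions: '+' and '*' coerce operands differently
console.log(1 + "2");   // "12" -- the number is silently turned into a string
console.log("3" * "4"); // 12   -- the strings are silently turned into numbers

// null vs. undefined: two distinct "no value" values
console.log(typeof null);        // "object" (a long-standing spec quirk)
console.log(typeof undefined);   // "undefined"
console.log(null == undefined);  // true  -- loose equality conflates them
console.log(null === undefined); // false -- strict equality does not

// No integer type: every number is an IEEE 754 double
console.log(0.1 + 0.2 === 0.3);    // false
console.log(9007199254740992 + 1); // 9007199254740992 -- silently loses precision
```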


My favorite recent discovery is calling a js function with too many or too few parameters. It'll just go ahead and do it.
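For example (a minimal sketch):

```javascript
function add(a, b) {
  return a + b;
}

console.log(add(1, 2, 3, 4)); // 3   -- the extra arguments are silently ignored
console.log(add(1));          // NaN -- the missing argument becomes undefined,
                              //        and 1 + undefined is NaN
```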


There’s an implicit “arguments” object available inside every function through which you can access the extra parameters. It’s JS’s way of doing function overloading.


Which looks like an array but isn't.
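Right; a quick sketch of the quirk:

```javascript
function demo() {
  console.log(arguments.length);         // 3
  console.log(arguments[2]);             // "c"
  console.log(Array.isArray(arguments)); // false -- array-like, but not an Array
  // No .map/.filter/etc. until you convert it:
  console.log(Array.from(arguments).map(s => s.toUpperCase())); // [ 'A', 'B', 'C' ]
}
demo("a", "b", "c");
```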


There’s a lot of that kind of thing in JS, isn’t there? :(


I'd add

- Actually using any of the distinguishing features of prototypical inheritance is nearly always a bad idea, which makes the use of that model in the first place very questionable. One of the cornerstones of the language is pretty much one big foot-gun, to be wholly avoided.


What is [] + [] in JavaScript?
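(Spoiler, for anyone following along: an empty string, because both operands are coerced to strings first. A quick sketch:)

```javascript
console.log([] + []);         // ""  -- both arrays stringify to ""
console.log([] + {});         // "[object Object]"
console.log({} + []);         // "[object Object]" when evaluated as an expression
console.log([1, 2] + [3, 4]); // "1,23,4"
```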


There is the kernel of a nice idea in JS but so many bad early design decisions can never be walked back.


I would argue that TypeScript does yeoman's work in doing much of that walking back.

I plumb forgot that assigning a variable without `var` or `let` puts it in the global scope, because TypeScript yells at you for it.
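(For what it's worth, plain JS's strict mode also refuses the implicit global; a sketch, with a made-up variable name:)

```javascript
"use strict";
try {
  undeclaredVariable = 42; // no var/let/const
} catch (e) {
  console.log(e instanceof ReferenceError); // true -- strict mode refuses the implicit global
}
```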


> I'd use a typed language if I wanted one.

I think the general point the above commenter is trying to make is that not all typed languages are equivalent: i.e. if you're avoiding TS/typed ruby/etc. because you don't want something like Haskell/Java, then that's not a well-informed decision as they're (all) radically different approaches to static typing, each with their own unique benefits and drawbacks.

TS is nothing like Java nor Haskell. Nor Rust.

(I can't personally speak to Ruby/Pony yet)


Wait, Pony as in https://www.ponylang.io/ ? Since this is the first time I've seen someone bring it up, can I ask what your experience using it has been like? And also what you were using it for?


That's the one. Experience is that it's very good at what it does: multithreaded/distributed programs. Good C interop. It forces you into its paradigm, but it's a good one for creating performant code across cores.

Tooling isn't great; it's missing things like a language server (the compiler gives good error messages at least). But the language is also quite simple (the whole syntax fits on a smallish page) and the documentation is decent.

Playing around with it for doing economic simulations. Actors + good multithreaded performance = pretty much perfect for the domain. Makes sense too, the creator and half the core members came from the financial field.

Ruby is my go-to for scripting, one off programs, and anything web-related (blog, putting together a small crud site). For things where I want performance, I just use a compiled/static language.


You don't use static types for speed. The first time I wrote some code in TS (which compiles to JS), I typed something against web storage and the editor told me the function expected a string and I was passing a number; with plain JS I'd have had to run the program to find that out. Ultra-expressive typing (Haskell) is cool, but you don't need anything that fancy to get huge gains in productivity. Ruby and Python are great languages for writing scripts and Rails does a lot for you, but having the compiler take your hand slaps down bugs like nothing else.


> But that's a pretty out-of-date take

Gradual/optional static typing is not a new idea. It’s just that it is fashionable now.

It used to be that not having to deal with types at all was the cool place to be in. Our computers were getting so much faster every year, why would performance be a concern? Programmers are more productive in dynamic languages and computer time is cheap, etc, etc.

The “correctness” pitch, the claim that static typing is required for larger projects, compile-time vs. runtime errors: these are all debatable, even though they are often thrown into the conversation as irrefutable advantages of static typing.

The performance angle, not so much. Static will almost always be faster than dynamic typing, even with all the crazy tricks we’ve developed over the decades.


It’s not that they’re fashionable, it’s that their ergonomics have improved significantly over the past decade. I saw dynamic types as a response to cumbersome static type systems, but increasingly type systems are becoming more expressive and less cumbersome.


Yeah, this is how I see it, too. Sort of a gradual convergence.

On the static side, Java and especially C# are simply _better_ than they used to be. Type inference is great - I can just write var x = new List<Thing>() rather than having to stupidly repeat the type e.g. List<Thing> x = new List<Thing>(). Add in generics, and lambdas, and about a dozen other things that have now become widespread, and it's really a different game than 10 years ago.

Coming the other way, I never expected to see Intellisense-like autocompletion for languages like Ruby or Python. It honestly feels magical. Refactoring has gotten much better to where I can pull methods up and down inheritance chains using PyCharm or other mainstream IDEs.

I actually think it's mobile that's pushed people back toward static. Swift, ObjC, Java, and Kotlin are all static and there's less emphasis on this "one language" idea that drove a lot of people toward trying to use the same language (JS/TS) for their React front-ends and node backends.


But C# has had everything you've listed for over 10 years. Along with a great IDE that has provided very useful intellisense since pretty much the beginning.


You couldn't (easily) use C# to write the frontend for a web application that you wanted to use as either a developer or a user, though (which is to say sit down, ASP.NET Web Pages, nobody ever liked you).

That's the lock that TypeScript turns, IMO.


There aren't that many languages that do this even today, much less 10 years ago. You've basically turned what was a discussion about static typing in general into Javascript vs Typescript.


Not only C# and Java, but C++ has auto, Typescript has let, Rust has let and I think many other languages have type inference.


When it comes to code, I'm fairly certain "correctness" is in fact an irrefutable advantage.

The argument has always been whether or not the price of that correctness is too high.

But at this point if you're using a modern IDE / code editor (e.g. VSCode), it's actually easier to write statically typed code because the inference / auto-completion / etc is so much better when you do.

At least with TypeScript in VSCode.


> The argument has always been whether or not the price of that correctness is too high.

No. I mean, that's been part of the argument, but so has whether static typing actually gave you useful correctness. Many static languages have had type systems that are more designed around convenience of compilation (and performance of compiled code) than correctness; the type systems of the optional typecheckers of modern dynamic languages are leaps and bounds better than static type systems of most popular static languages a couple decades ago, and have been for some time.

> But at this point if you're using a modern IDE / code editor (e.g. VSCode), it's actually easier to write statically typed code because the inference / auto-completion / etc is so much better when you do.

If you have good inference, statically typed and dynamically typed code are virtually indistinguishable. TypeProf-IDE for Ruby, discussed in TFA, generates type signatures by inference from plain Ruby code.


Strong disagree with the idea that dynamic inference measures up to static declarations, particularly for Ruby, and especially if you're using anything that expects you to write code inside of its DSL. The inference tends to fall off, and because it isn't expected of you, nobody's doing anything to help there.

Maybe Rails is special-cased enough to get away with it, but I still maintain a project in Grape and none of the autocomplete solutions I have found for Ruby help significantly at all. Meanwhile over in TypeScript, I literally can't remember the last time I used `any` or `unknown` except at a module's edge during validation of untrusted input, and autocomplete is awesome.


> it's actually easier to write statically typed code because the inference / auto-completion / etc is so much better when you do.

Ruby has a pretty nice language server, a nice linter, a REPL (Pry), and many other tools.

The best development experience IMO is still stuff like SLIME or Smalltalk environments.

"Intellisense" just makes using statically typed languages bearable. It's not a unique feature.


Intellisense will allow you to do explorative programming, finding which methods are callable on a given object. It’s much easier to look through a list of methods than to try to guess the whole state of the program, where it may accept this and that; even in the best case you might get a fraction of what a statically typed language’s autocompletion allows for.

Like, just seriously try IntelliJ with Java and then PyCharm with Python code.


Ruby has (always had) a REPL and .methods gives you all possible methods you can call on an object. Editors with completion have been around forever too.

"Intellisense" is just what MS calls it. It's not unique to statically typed languages. Completion has existed for dynamic languages probably longer than I've been alive. In fact, the reason tooling for Java/C# is so good is because the VM can feed it info as it's running, since even though the language is static the runtime is dynamic.


I have used that REPL extensively. I've written plugins for Pry for my personal usage. There are absolutely cases I've run into where it is nice to break into code, write it inline, and save the buffer. This is true. But I've run into more cases where having to do that to know what I'm working with in the first place is a tiresome slog.


Please try one out, the difference in quality is staggering. Also, .methods is a runtime thing, that’s hardly useful. Of course at runtime you must know the available methods.


I do write code in statically typed languages (C++, Haskell, have done Java before, etc...).

> Also, .methods is a runtime thing, that’s hardly useful

I think you missed the whole point of a REPL and dynamic languages... Developing while program is running = runtime things are definitely useful, they provide instant feedback (including completions, linting, that sort of thing).


Or try RubyMine for Ruby.

The challenge of providing a list of available methods in a dynamic environment is real, for sure, but there are great tools out there doing it right now.


That’s why I used correctness in quotes, because it’s no guarantee.


It's not a guarantee, but an environment/language that yells at you when you try to pass a boolean to a function that should only ever accept a string is going to result in a more correct program than one that just... doesn't say anything and lets you happily do things that make zero sense.


With Visual Studio also. Intellisense works wonders.


> Gradual/optional static typing are not new ideias. It’s just that they are fashionable now.

I'd add that Common Lisp has had optional type hints (as both documentation and performance improvement) for getting close to 40 years now.


You might not care about types, but the computer you are running your code on cares very much.


Are they releasing processors with typed registers now?

AFAIK registers and assembly are untyped.


That's true, however I think you're misunderstanding how languages work. The Ruby interpreter knows the "type" of everything. How could it not? It was there when you defined a variable, and it was there when you added data to it. It's there when you define interfaces and other things that might affect the type of something. It's perfectly capable of holding all of that information. And there's no meaningful difference between parsing "int x = 3;" and "var x = 3;" in a program.


I've gotten a lot of value from integrating Sorbet into a non-trivial 15 year old Rails app. The gradual nature of the typing is very nice.

I've never written TypeScript, and I suspect the tooling around Sorbet is pretty far behind at this point, but it's still worth it. For example, there is a whole class of unit tests that no longer need to be written. In addition to the gradual typing, having access to interfaces, typed structs and enums is all nice too.


Yeah, Sorbet and the progress being made in Ruby-land makes me want to go back and give it a good look. I really like Ruby. I just really like writing silly in/out tests less.


    I just really like writing silly in/out tests less.
Were you writing a bunch of tests to make sure that A is passing arguments of the right type to B?

I'm not sure that's a great use of time. If A is passing the wrong thing to B, B will throw a `NoMethodError` anyway once it tries to do anything with the arguments, which will make the spec test fail anyway.

But maybe I'm misunderstanding what you mean by "silly in/out tests"...


You can get away with that if you're writing something for immediate delivery, though I think it leads to a lot of not-my-problem thinking in teams that have to scale. On the other hand, I write a lot of libraries, both for internal and external consumption, and there are a lot of operations, in that context, where you won't get a `NoMethodError`--serialization, for example. You can happily serialize an integer if it's passed instead of a string. That doesn't mean it makes sense, or whatever is consuming it is going to be able to make heads or tails of it, and having to run code in order to know whether you've made trivial mistakes is shitty and demoralizing to an end user when they have to divine what a `NoMethodError` means when it's thrown deep inside of a library.
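To sketch the serialization point (the `zipCode` field here is made up):

```javascript
// JSON.stringify happily serializes whichever type it is handed; no error, just
// output a downstream consumer may not be able to make sense of:
console.log(JSON.stringify({ zipCode: "02134" })); // {"zipCode":"02134"}
console.log(JSON.stringify({ zipCode: 2134 }));    // {"zipCode":2134} -- the leading zero is gone
```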

It can also then create security issues around untrusted input; you should be sanitizing at module boundaries any time something might be sensibly used with rando input, IMO, rather than relying on web developers who may or may not be competent enough to duplicate other consumers' effort to do it.

Having to do less work to enforce sanity at module boundaries, and having it tied in with the type system when you do have to do it, is a powerful force-multiplier. For example: I use `runtypes` to create validators in TypeScript when I must handle untrusted input and it's smart enough to take the validator I specify and create a type of the same shape for use at compile-time, so users who aren't dealing with untrusted input just have the nice computer cross the T's and dot the I's for them.


Those are really good counterexamples and I love your points.

However... just kidding, there's no "however." Great reply.

    You can get away with that if you're writing 
    something for immediate delivery, 
I've worked on some big Rails monoliths and this just wasn't an issue very often.

But as you say, I think library code is another story.

Incidentally, whether library or application code, I've found that keyword arguments with well-chosen names have reduced this problem even further. It's not clear that `do_something("blue")` is incorrect, but `do_something(user_age: "blue")` is self-evidently bad.


Out of date how? Tons of people write pure JavaScript and not TypeScript. Are they all not as productive as you?


I'm saying that I am significantly faster and more correct in TypeScript than in JavaScript, and I know literally no people who are starting greenfield projects in JavaScript today.

People choosing to write JavaScript in 2021 might be plenty productive. I trust their code much less, though, unless they're writing a battery of tests to establish sanity at their module boundaries and making the extra effort to ensure correctness that you get very cheaply with TypeScript.


YOU are faster with Typescript. Can you accept the fact that certain people are not and do not like Typescript? You live in a Typescript bubble so you conclude everyone is doing that. Just open a jobs board and see for yourself not everyone is doing Typescript.


Ruby isn't forcing you to use static types. That being said, I actually think something like Sorbet makes even solo/small teams more productive because it trades some additional time writing boilerplate for dramatically reducing the number of bugs you write.


A good test suite is better at keeping the code bug free.

Not needing types is another benefit of good testing.

That's my experience, at least.


I don't think that static typing obviates the need for tests. However:

* Static typing catches many kinds of bugs earlier by simply not allowing you to write incorrect code in the first place.

* No matter how good your test suite is, you're still putting the burden on the human to always remember to write tests for corner cases.

* Static typing allows you to write many fewer tests by making invalid data unrepresentable. You don't have to write millions of unit tests of the type "what if this list is empty" if the function literally can't accept an empty list.


> Static typing allows you to write many fewer tests by making invalid data unrepresentable. You don't have to write millions of unit tests of the type "what if this list is empty" if the function literally can't accept an empty list.

We just don't write these kinds of tests in our Rails codebases and we are fine. The world hasn't exploded yet anyway. If some piece of code is super tricky and sensitive then sure, maybe (though I have yet to see such a test case, I think), but as a rule? No.


I'm surprised that you never have bugs of the type "this thing is nil and I didn't expect it to be". This is by far the most common type of bug I see in production Ruby applications and it simply can't happen with something like Sorbet.


Null exceptions can even happen in Java. I wasn't referring to those.


But most types (besides primitives) in Java are nullable, right? Sorbet literally wouldn't let you write this code:

https://sorbet.run/#%23%20typed%3A%20true%0Aextend%20T%3A%3A...

Null is simply the most frequent example of this issue. Getting an integer rather than a string is super common, for example. Or a string instead of a date.


What does super common mean? Do I see these bugs once a month? No, I do not. Null exceptions are common, I agree with that. For me it's not enough to make me appreciate something like Java or Go, but I understand the argument.


If you do TDD, you catch bugs before the code is even written.

I don't write a million tests. There are much better strategies for testing.

Simple tests that just execute the code will catch the vast majority of type mistakes.


If you're just testing pieces of code in isolation, then you're either writing lots of tests or making assumptions about the kinds of input that your function could reasonably receive or return. If you're testing end-to-end, you're either writing lots of (much more expensive) tests or you're missing corner cases. I don't think that's problematic per se. Testing is just a bit of a mismatch for the problems that type systems solve.


Do you think it's possible for well used types to displace the need for some tests?


I'm not the person you're responding to, but I would say: "yeah, but it'd probably be bad practice."

What would you be testing for, exactly? That `SomeClass#some_method` raises a `NoMethodError` when you pass it the wrong thing?

You could test for that, but I don't think that would be a good use of code or your time. If A is passing the wrong things to B, your specs will fail anyway on that `NoMethodError` once B tries to actually do something with those improper argument(s).

I suppose it could be useful in cases where you'd stubbed out B in your specs, but ehhhh.


My experience with a medium-size but long timespan project was the two things you mentioned a bit dismissively are actually pretty massive time sucks. People are terrible at stubbing, and refactoring without types is painful. The vast majority of my Ruby experience is within Rails, so maybe that's a contributing factor?

Having types not line up at various boundaries (DB/API/reading from a file) is already a pain, but Ruby made it worse by having that bad data pass through many layers until it actually blows up somewhere far removed from the issue. I worked on very real bugs where e.g. a corner case led to a date being deserialized as a string, and then, because the last few characters were numeric, interpreted as a number, so when treated as a date it resolved as millis since epoch (or something similarly crazy). It took ~1 of those bugs for me to be convinced that I had no interest in dealing with those kinds of problems, and adding a 'Date' type means it fails in exactly the right place immediately and is a 2 second fix.

I agree with you though - it'd be a silly test to write, so how do you get the correctness/robustness without either types or tests that look like they're effectively validating types?


> Having types not line up at various boundaries (DB/API/reading from a file) is already a pain

That's not really accurate - Postgres (for instance) types ARE mapped to Ruby types whenever you read something from the database, just like they are being mapped to Java types or any other language. I guess you mean you can have inadvertent type coercion where a Ruby string is saved into a PG numeric column or something?

Anyway for super sensitive code (let's say payments/prices) you can work with DRY types or Sorbet or any other solution. As a rule it's quite rare that I actually see these problems. Can they occur? Sure. Do they happen so often I wish I was using C++? No. In fact I can't even say it happens more than once a year that I see this type of bug.


> I suppose it could be useful in cases where you'd stubbed out B in your specs, but ehhhh.

You're talking like that's rare and not something people constantly do when writing tests in Ruby.


It is of course super common. Unit tests should be stubbing out as much as possible/reasonable.

The presumption is that we're talking about a well rounded test suite with both unit tests and integration tests, and the integration tests would indeed be catching the fact that A is passing entirely the wrong thing to B.

Of course, this does mean we're leaning heavily on the test suite here. But, in any nontrivial application I hope that we would have a robust test suite, right? Otherwise we're going to have some problems whether we're statically or dynamically typed.


Over time I have come to view mock- and stub-heavy tests as next to worthless.


In my experience this here (usually some method unexpectedly getting passed `nil`) is the single biggest class of bugs in production Ruby applications.

Also, what happens when you don't get a NoMethodError? Duck typing is an extremely common practice in Ruby, which means that you can easily run into situations where code "runs" but the output is nonsensical.


    In my experience this here (usually some 
    method unexpectedly getting passed `nil`) 
    is the single biggest class of bugs in 
    production Ruby applications.
Right, but how do tests help you here?

Your specs aren't passing those unexpected nils, and if they were, your specs would fail on the resulting NoMethodError.

Static analysis can find these potential problems in a statically typed language - I miss the days when Resharper would let me know about potential nulls/nils. That's a strong argument for static typing. But, this particular comment thread is about addressing this class of bug in Ruby via tests, and my vote would be generally no.

    Duck typing is an extremely common practice 
    in Ruby, which means that you can easily run 
    into situations where code "runs" but the output 
    is nonsensical.
I've been working with Ruby full time for years (admittedly, mostly boring CRUD Rails apps) and I've just never found this to be a problem. It obviously can happen, but I just don't see those name collisions ever happening.


> But, this particular comment thread is about addressing this class of bug in Ruby via tests, and my vote would be generally no.

I'm pretty sure I started the comment thread ;).

I think there's a couple of scenarios when a function gets some rogue input (e.g., a string instead of an integer) which then triggers a NoMethodError. I agree that tests don't really help you here - what are you testing, that your method correctly throws a NoMethodError? That being said, to combat this issue you often see dynamically typed codebases littered with scattershot validation, i.e. fail gracefully if you somehow wind up with an unexpected input. And then you often write tests for that. Static typing often obviates these tests, because presumably the rogue input has already been handled further up the call stack. Stubbing is the other issue; I think it happens much more frequently than you suggest that A is tested (stubbing B) and B is tested in isolation; or perhaps there's only one test of A which doesn't stub B, and that test doesn't happen to trigger the NoMethodError.

So I think we're mostly agreeing, but I do think you get to write fewer tests overall.

> It obviously can happen, but I just don't see those name collisions ever happening.

It has happened to me (I think in the Rails context less often) but I agree it's not exactly common. It is deadly when it does, though.


I came the other way. As in, tests displaced types.

Did Java for 13 years. Then moved to Ruby and lost the type system I had leaned so hard on.

After an adjustment period I got into the new groove of just testing all code, and I don't miss my hard typed days at all.


I think people remembering to test all the typing-related corner cases is never going to come close to being as effective as something enforced by static checking -- and even if it did, it seems like I've lost all the time I saved and then some, if that's the price of not having to write the declarations.


This has been my experience using Sorbet.

I sometimes get annoyed when I get stuck screwing around with the RBI files. Then I get in the flow and remember how fast Sorbet allows me to move.


> dramatically reducing the number of bugs you write.

I would say that's a big overstatement.


Have you tried writing a Ruby app with Sorbet vs without? I have, within the last year, and that's been my experience.


It's not, because "bugs you write" includes bugs that don't make it to production. I doubt there's a programmer who's worked in a language without static types who's not familiar with the "write, run, read error, find silly bug, fix silly bug, repeat" development loop. Usually it becomes second nature.


> Honestly I don't get why some people want to move to static types

I don't understand how anyone who has experience with dynamically typed languages - and the insane runtime errors that can result from them - would ever consider using one. It's terrible, and it actually provides little to no benefit in development speed. People always say development is faster in a dynamically typed language; this is not my experience. You need to actually run the program and step into it with a debugger to determine the type of anything at runtime.

Given the popularity of TypeScript, and how nearly all major internet companies have moved to typed versions of their dynamically typed languages, it's clear to me the whole dynamic typing experiment has failed absolutely miserably.


> People always say development is faster in a dynamically typed language, this is not my experience. You need to actually run the program and step into it with a debugger in order to determine the type of anything at run time.

You just develop on the running program... If you just write it in your editor, run, check for errors, stop, edit more, run, stop, etc... then yes, you don't gain anything.


> You just develop on the running program...

Ah yes, don't worry about correctness at all, just wing it. I take it you haven't had to debug some of the stuff you've written?


You asked about the productivity boost... Well it's in being able to code, debug, etc... a running program.

Did you miss a decade or two of CS history? Lisp and Smalltalk have been around long enough that the value of programming in dynamic languages is well known... I mean, there are spaceships and whatnot running on Lisp.

Or did CS simply start and end with Java?


Why are you bringing up Java? Java is terrible in its own way. Of course you will be more productive without types if the alternative is only Java (which it isn't). Typescript has an extremely expressive type system.


>I take it you haven't had to debug some of the stuff you've written?

Most Ruby devs work as contractors. They deliver the software and go to the next paying contract. They don't have to live with their software. :)


>Given the popularity of typescript and how nearly all major internet companies have moved to typed versions of their dynamically typed languages it's clear to me the whole dynamic typing experiment has failed absolutely miserably.

Python is one of the most popular programming languages.


Because there is a huge amount of non-programmers using it for single-use code.


“No true Scotsman…”


And they added types to it


Static typing is like an automatic test for a specific category of errors: type errors. That's why it's useful.

Ruby's choice of making it optional means folks like yourself who want to move fast and break things are welcome to do so, but those of us working on more mature systems that need the reliability and lack of bugs can add this on and get the safety.
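To illustrate the "automatic test" framing with a toy sketch (method name made up): the runtime failure below is exactly the class of bug a checker flags before the program runs.

```ruby
# The category of error static typing catches automatically: Ruby only
# discovers the bad argument when the method actually executes.
def shout(name)
  name.upcase + "!"
end

shout("hello")   # => "HELLO!"

begin
  shout(42)      # Integer has no #upcase
rescue NoMethodError => e
  puts "runtime type error: #{e.class}"
end
```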


Ruby is strongly typed anyway... The runtime complains if you mis-type something just as a compiler would complain. In fact, linters just catch it as you type (plus any other typos).

Mixing it up with JS (which is weakly typed)?
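A quick irb-style illustration of the distinction: Ruby refuses to mix types at runtime where JS silently coerces.

```ruby
# Strong typing: Ruby raises instead of coercing.
begin
  1 + "2"
rescue TypeError => e
  puts e.message   # e.g. "String can't be coerced into Integer" (wording varies by version)
end
# In JavaScript, by contrast, 1 + "2" silently yields the string "12".
```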


I don't want a move to enforced static typing everywhere in Ruby. But rbs is relatively non-intrusive. Having the standard library covered by it, and allowing people to tighten up things where they make sense makes it less cumbersome to avoid elsewhere. And it allows those people who feel they need to have type hints everywhere in their IDE to still use Ruby. I find it clutters things up more than it helps, but if it helps others then that's great.

I'm slightly concerned that people will overdo it. E.g. I've more than once wanted to pass an input to something that enforced stricter typing than necessary via guards - say, checking its input with #kind_of? when it otherwise only needed a class that implemented a sensible #read. But Rubyists are pragmatic - I think after a period of overzealous annotations (the way people went totally overboard with monkey patching for a while) most people will keep the type declarations just loose enough.
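For example (a contrived sketch, method names made up): the strict guard below rejects a StringIO that would have worked fine, while the duck-typed version only asks for #read.

```ruby
require "stringio"

# Overly strict: insists on a File even though only #read is needed.
def strict_head(io)
  raise TypeError, "expected a File" unless io.kind_of?(File)
  io.read(5)
end

# Duck-typed: accepts anything that responds to #read.
def loose_head(io)
  io.read(5)
end

io = StringIO.new("hello world")
loose_head(io)    # => "hello"
# strict_head(io) # raises TypeError, even though StringIO#read works fine
```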


I really feel the same about JavaScript with TypeScript. It feels a heck of a lot like new grads or old C#/Java devs trying to keep up with the times by switching to JavaScript, but then resisting the code structure and loose typing.


It's interesting, though, how much more robust the library ecosystem has become after TypeScript attracted a new crowd that wouldn't otherwise have gone near JavaScript with a ten foot pole.

Maybe with the new developments in Ruby ecosystem somebody will fix Net::Http so an absolute request timeout can be set to protect against slow client/server and maybe oversized payloads as well. Maybe 2030?


In what way is the library ecosystem more robust? If anything, library use is down due to centralization around specific frameworks and toolchains.


That seems to be more in line with "what's the benefit of static types?" than "what benefit does JS get from static types?". JavaScript is not only a very different language than it was 15+ years ago, but also takes up a significantly higher percentage of an app's codebase. The web is a drastically different place than when JS was created.


It taking up more of an app's code doesn't mean it suddenly needs typing. But even then, that assertion is incorrect: JavaScript has made up a large majority of the codebase for quite some time now. Node.js is 12 years old and jQuery 15. That argument could have been made nine years ago, but even then, multiple projects tried adding typing and the JavaScript community strongly rejected it (heck, TypeScript was pushing for it in 2014). Anecdotally, I've met considerably more developers coming from C# projects who are being made to do frontend work, or are finding work in the JavaScript world, and who seem very intent on pushing for TypeScript in their orgs.


> but even then, that assertion is incorrect, javascript has been making up a large majority of the codebase for quite some time now. Nodejs is 12 years old and jquery 15. That argument is one that could've been made like 9 years ago,

Typescript is 9 years old. React was still a project then and Angular was brand new. SPAs were still a new-ish concept then. JS was not used as widely 10 years ago as it is now, thanks to SPAs. The web was a drastically different place back then.

I am not at all saying that TypeScript should always be used, far from that. I just think it has a valid spot in the world of web development.


This is exactly how I feel as well. Though, maybe I'll come around once I learn TS better? I recently had to use it on a new project. TS was, quite literally, a nightmare. I was amazed at how much less efficient we were and how much longer things took. We probably spent 20% of our time writing JS and 80% trying to figure out how to get TS to stop complaining.


Using a tool that you don't understand is always like that, and is frankly foolish. I don't know why we as a group are so resistant to sitting down and properly learning something, and only forming an opinion after we've done that.


I started using dynamic languages since around 2008 (Python and Javascript). Before that I was more into C/C++.

Granted, I've only written C in University settings where I'm writing small programs. I had no idea how to write "real" programs. But with Python and Javascript it felt like I could more easily write "real" programs.

What I found out is that I quickly burned out. Around 2011 I felt like I don't know how to start making programs. Programming basically became dreadful. I taught myself to wade through it anyway, convinced I would find the joy again when I get more proficient.

What eventually made me rediscover the joy of programming was switching to procedural programming and static typing.

First it was with Typescript. Now with Go.

Writing programs with just structs and functions is really enjoyable.

Programming in dynamically typed languages is dreadful. There's no joy in it.


This is my experience as well. Working with dynamic languages soured my experience with programming so much that I stopped doing projects of my own while I was in jobs requiring them. Sounds like hyperbole, but it's something I didn't realise until several years later, after having recovered the joy of programming (which happened because I moved jobs and started using a static language again).

I guess that in part it's a matter of mentality or personal style. I know that I think mostly in terms of guarantees, invariants and so on, which is why types are so useful to me; I feel like other people think more in terms of operations when they program, and maybe dynamic programming suits them better.

But there are some huge advantages as well. The tooling is far better (like the incremental compilation in Eclipse, also available in IntelliJ but turned off by default IIRC, which highlights compilation errors while you are still writing the code). Fewer unit tests are required, because fewer things can go wrong when a lot of them are ruled out by the type system. Most important of all, reading someone else's code is far easier, because mandatory documentation, in the form of type names, is all over the place (yep, not a fan of type inference either). And there are more advantages besides.

I just don't see myself working on even a moderately sized code base in a dynamic language. The pain is too real; I have been there.


Have you wrangled with JSON using Go? Absolutely dreadful.

Have you written multiple microservices in Go? The lack of an opinionated framework often means that each microservice contains code that is organized in its own unique ad-hoc way with lots of repeated boiler-plate code. The learning-curve to understand how each service's code is organized gets old fast. With Ruby and with RoR I never have to waste time with this and I can get straight to the business logic.

When writing one-off scripts, I can see the advantages of dealing with just structs and functions.


> Have you wrangled with JSON using Go? Absolutely dreadful.

That was my first semi-serious project, around 2012. Yes, it was dreadful. The problem is I was still programming with the mentality of someone who wants to use dynamic typing.

I would read JSON from an HTTP endpoint, then just make assumptions about what keys are available and read them off like this:

    data['key1']['key2']
Just like with dynamic typing, when the keys don't exist for whatever reason, your program crashes.
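For what it's worth, the same failure mode is easy to reproduce in Ruby, where `Hash#dig` is the usual defensive fix:

```ruby
require "json"

data = JSON.parse('{"key1": {"key2": 1}}')

data["key1"]["key2"]          # => 1
data.dig("missing", "key2")   # => nil (safe navigation through absent keys)

begin
  data["missing"]["key2"]     # data["missing"] is nil, and nil has no #[]
rescue NoMethodError => e
  puts "crash: #{e.class}"
end
```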

I don't do that anymore. When programming in Go, JSON is just a serialization format for a struct. You have a concrete struct type and you just use JSON to fill it with data. It's easy and trivial.

> Have you written multiple microservices in Go?

God no! Why would I do that? I hate microservices. I like Go because I can make a self contained application.

> The lack of an opinionated framework often means that each microservice contains code that is organized in its own unique ad-hoc way with lots of repeated boiler-plate code.

So you're talking about working within an organization with multiple insulated teams, each writing services that are supposed to communicate with each other, but the teams themselves don't follow any standard.

Well, that's just one of the awful things about microservices. I don't think the language matters.

> With Ruby and with RoR I never have to waste time with this and I can get straight to the business logic.

Having to debug a RoR codebase was the worst experience in my programming life. It's all magic. Trying to read the code helps you with nothing. You can't read the code to understand the program. You have to read the RoR documentation to understand all the magic. Worse, you can't just read a part of it: you have to read all of it. Because nothing in the code will give you any hint as to _which_ magical aspects of RoR this codebase is using. So unless you know all the magic that RoR does, you have no idea what's going on.

> When writing one-off scripts, I can see the advantages of dealing with just structs and functions.

It's the exact opposite. When writing one off scripts, I can see why someone would not want to bother with structs. You usually are dealing with strings any way (filenames, paths, keys in a json/yaml file, etc).


> I don't do that anymore. When programin in Go, JSON is just a serialization format for a struct. You have a concrete struct type and you just use json to fill it with data. It's so easy and trivial.

This does mean you should know and define your schema in advance, which is not always doable. An alternative is to use a sum type and pattern matching to safely deconstruct `interface{}`, but Go has notably poor support for sum types. Static typing is not really the culprit here, but Go makes a bad example for anonymous JSON parsing.


>Have you wrangled with JSON using Go? Absolutely dreadful.

What's dreadful about it? I use it all the time and find it easy to use. I love static typing though and prefer my JSON to have a proper schema (to automatically generate transport layers in both Go and JS from the same source)

>The lack of an opinionated framework often means that each microservice contains code that is organized in its own unique ad-hoc way with lots of repeated boiler-plate code.

Our organization has a microservice generator tool which creates a new microservice for you so that you could start writing business logic immediately and didn't have to think about how to organize your code. It doesn't really do much - it defines a source generator for the transport layer (from protobuf descriptions so you don't have to deal with JSON), creates a bunch of folders ("put domain logic here" etc.), and adds some infrastructure code to connect to DB, RabbitMQ etc. while also adding imports to our org-wide common utils Go library. It took like a few days to create this tool, but the author already had a lot of experience writing microservices so he knew how to structure code and what dependencies to use.


I'm a different guy, but here is my opinion:

>Have you wrangled with JSON using Go? Absolutely dreadful.

No, it is not. You just take a JSON object and write it down as a Go struct. Yes, it takes more time than in JS or Ruby, but it makes the code much more readable. You can open a project and see what kind of JSON it expects as input. All the contracts are there. No need for any kind of schemas or YAML definitions (though you can generate one if you need to).

>Have you written multiple microservices in Go?

We currently have more than a hundred of those. The lack of an opinionated framework is a bad thing only from a management point of view. Once you and your team are past that, there is really no difference from using a framework.

Again - yes, it requires more time and is most likely not a great option for a small company without an established background (i.e. no preferred ways of doing different kinds of things), but it's not a problem for a relatively big company. We have a number of teams solving different problems and thus using different ways of building their services (micro or not).

>With Ruby and with RoR I never have to waste time with this and I can get straight to the business logic.

Yes, until you face a problem where your favourite gem is not enough to do the job and you have to go the hacky route.

PS: But honestly this whole topic is getting old. I've built apps in both Ruby (mostly not Rails, though - we had Roda) and Go, and while RoR/Ruby/etc. are great for one set of tasks, I'd never use them for some others - for example, systems integration, which has been my main job for the last ~5 years.

People often forget that web dev is not just your clients' browser-to-server communication.


> Have you wrangled with JSON using Go? Absolutely dreadful.

Yes. Of course it's dreadful. It's JSON. It's way better in Go than Ruby or Javascript, though.


:) I love typed languages, and my current favorite is Rust. I got my most recent position because of how I answered when asked whether I preferred procedural-style programming or OOP. I told them I tend to lean on simple single-inheritance schemes rather than complicated, hard-to-trace architectures and over-engineered design patterns, with the occasional interface class where OOP makes sense, and mostly procedural code concentrating on bending to the data rather than bending the data to your needs. It was the only thing that separated me from the other C++ candidates, as we all had similar levels of experience and coding skill.


Dynamic languages start becoming problematic as the size of the codebase increases. They become very hard to maintain.


Use whatever language you want; there is no large codebase that's easy to maintain.


I agree, but maintaining a large codebase in a statically typed language as compared to a dynamic language is magnitudes easier.

As the codebase grows larger, if people don't have the discipline to write proper code, it gets more and more complicated, since you will not know exactly what you will get. Dynamic languages are mostly easy to get into (which is a strength), but this becomes a weakness as the project grows larger.


By the time they get giant with [interpreted language], the giant org cannot pivot to Java without sacrificing a year of forward momentum on product development.

It's far easier for them to adopt a static analysis tool that mimics the type safety that Java provides than to migrate their codebase.


Ruby is NOT becoming static. It will just have tools like Sorbet or rbs files. The community will for sure stay mostly in dynamic programming. The huge orgs like Shopify or Stripe who want more strictness will now be able to do it in their Ruby codebases.


Probably they got sick of having to read a function body (and maybe a couple levels of indirection from there) to figure out what the hell a method returns while doing maintenance programming on a hoary old Ruby codebase.


Ruby is and will be the same language even after static types. A Ruby programmer will need to opt in to writing type annotations in an external file and run a separate type checker (which is not part of the MRI binary). The benefit is that if libraries ship with type definitions, type inference can detect type errors early in the IDE. If no type definitions are available, you don't get ahead of runtime type errors.
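As a sketch of what that looks like (hypothetical class, RBS syntax): the annotations live in a separate `.rbs` file, so the implementation file stays plain, untyped Ruby.

```rbs
# sig/greeter.rbs - type definitions for a hypothetical Greeter class;
# greeter.rb itself contains no annotations at all.
class Greeter
  def greet: (String name, ?loud: bool) -> String
end
```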


It is possible to be very productive in a language with a static type system. In my view, it is actually much easier.


Mostly, I really want Rails in a typed language.


There's a reason it didn't come from a typed language... It uses a lot of dynamic features and metaprogramming.


That would be nice. There have been so many attempts to clone it, and the clones are always missing some important piece of what makes Rails Rails. Lack of good, heavily integrated libraries is a universal failure of all the ones I've seen.


This is the most frustrating part - having worked in an enterprise Rails project, I wouldn't wish that on anyone valuing their own time. At the same time, it is the most complete offering I've used by a long shot for getting started.

I was writing a web app in Rust (I want to be cool!) and only after a couple hours of implementing CRUD did I realize I'd effectively made an inconsistently implemented rails scaffolding. The default opinionated round trip for DB -> application -> UI for CRUD is still really slick.


Redwood does this in TypeScript



