
Ruby the language may be fast, but the whole ecosystem is painfully slow. Try writing a server that serves 1 MB of JSON per request out of a DB query and some calls to other services. I get 100 requests per second in Rails; the same service rewritten in Go serves 100k requests/s.



Why Rails, instead of a lighter-weight framework, if performance is such a priority? Obviously that wouldn't get you anywhere near the performance of compiled Go code, but Rails has a lot of overhead.

What database is on the backend, and is that DB serving cached content? What happens if you cache it with e.g. Redis to avoid the heavyweight Rails ORM stuff?
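
To make that concrete, here's a minimal sketch of the Redis idea, assuming the plain redis gem and a hypothetical heavy_query_as_json helper standing in for the slow ORM path:

    require "redis"

    REDIS = Redis.new

    # Serve the cached JSON string on a hit; otherwise run the expensive
    # query once, cache the serialized result, and skip the ORM next time.
    def cached_payload(cache_key)
      cached = REDIS.get(cache_key)
      return cached if cached

      json = heavy_query_as_json # hypothetical: the slow DB/ORM path
      REDIS.set(cache_key, json, ex: 60) # 60s TTL; tune to your staleness budget
      json
    end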

Do you have granular benchmarks for the db query, requests to other services, and the web processing itself (using artificially pre-cached responses from db and those other services)?


First implementation was in Rails because the company is a Rails shop and the monolith was the easiest place to get something that works.

If you rewrite it anyway, might as well use something other than Ruby.

The DB (MySQL) is not the problem in the scenario I described.


> If you rewrite it anyway, might as well use something other than Ruby.

You just said the company is a Rails shop; why would you force a new language on everyone who has already invested in understanding Ruby?


Because people know Ruby well, including the use cases it's not a good fit for. We use Go in other places, too.


Why not something like Sinatra, so it's easy to prototype and move on from there if it's still not good enough? What you're describing isn't that complex, and the base Ruby language is so much better than Go.
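
For scale, a Sinatra endpoint of that shape is only a few lines; this sketch assumes a hypothetical fetch_page helper standing in for the DB query and the calls to other services:

    require "sinatra"
    require "json"

    # Minimal JSON endpoint: no Rails middleware stack or ORM on the hot
    # path, just the query and the serialization.
    get "/items" do
      content_type :json
      page = (params["page"] || 1).to_i
      fetch_page(page).to_json # hypothetical DB/service-call helper
    end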


Why does that matter?

You're likely in one of two situations as a business:

* You're a struggling startup. Development velocity trumps literally everything. Your server costs are trivial; just turn them up.

* You're a successful business (maybe in part because you moved fast with Ruby), so you can pay down the debt on that absurdly large response. Chunk it, paginate it, remove unnecessary attributes (see the sketch below).
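
A hedged sketch of that paring down in a Rails controller; the Item model, the column list, and the page size are assumptions for illustration:

    class ItemsController < ApplicationController
      PAGE_SIZE = 100

      # Return only the attributes the caller needs, one page at a time,
      # instead of one huge response.
      def index
        page = params.fetch(:page, 1).to_i
        items = Item.select(:id, :name, :updated_at) # trim unneeded attributes
                    .order(:id)
                    .offset((page - 1) * PAGE_SIZE)
                    .limit(PAGE_SIZE)
        render json: items
      end
    end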


1MB is "absurdly large"? This is not the Todo app industry sorry.

This is paginated (page size of 1000) and the caller chooses only the attribute they need already, thanks.

Even successful companies care whether they need to run 1000 servers or one for something.


You're pushing 100 GB/s of JSON (1 MB × 100k/s)? AND you're calling other services plus a DB per request, on a single server? I'm skeptical.


The test was local, i.e. using the loopback interface on a large server.


Are you actually going to the DB, or is that JSON synthetically generated? Is it the same JSON? What exactly are we testing here?


Sorry, I was oversimplifying. Most of the data for the response comes from the DB, with some API calls for authorization and some auxiliary data. The benchmark was actually hitting the DB. Quite a bit of the 1 MB response size is redundant (JSON:API format).


In my experience, you can easily get 10x to 100x gains by switching to something lighter than Rails, like Roda.
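
As a rough illustration, a minimal Roda endpoint of the shape described upthread stays tiny (again assuming a hypothetical fetch_page helper for the actual DB and service work):

    require "roda"

    class App < Roda
      plugin :json # serializes the block's return value as JSON

      route do |r|
        # GET /items?page=N
        r.get "items" do
          page = (r.params["page"] || 1).to_i
          fetch_page(page) # hypothetical DB/service-call helper
        end
      end
    end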


But, like, why do you need a 1 MB JSON response? That's probably either a bad design or a use case Rails is not designed for.


It's a paginated list of 1,000 objects of 1 KB each. Any nontrivial API will have responses like that.

My entire point was that Rails is not designed for this.


But you could return the 1000 objects (or fewer? 1000 records sounds like a lot for any UI to show at once) of 1 KB each and let the clients request specific pages with a request parameter. There may be applications where you need to ship the full 1M records, I guess, but that seems like very much an edge case as far as web apps go.


True, you would not return 1000 objects at once to the frontend.

I first thought it was just a backend use case, where processing 1000 records in a paginated result is common, but the parent mentions "Rails", so it sounds like a frontend use case.


It's a backend use case.


1,000 records is absolutely not a lot for modern computers or connections. On a business LAN, this request should take well under a second end to end.

On an average mobile connection, it’s maybe a second or so.


You're right, it's not a lot for a machine. But the point isn't the speed capability; it's why. What UI has 1,000 rows in, e.g., a table all at once (much less 1M)?


Many plots contain thousands of data points, e.g. a 10 × 100 heatmap that supports sorting by various metadata. This is a common visualization for biological data, where your data matrix is samples × proteins and so potentially much larger than 1000 data points.


Your assumption that humans consume this data is wrong. It's actually machines that need it.


The better question is:

"Why would I pointlessly accept this clear case of massive technical debt for literally no reason whatsoever?"

Rails does not offer any promise that Go does not also offer, so just saying "yeah, I'll handcuff my app like this because I feel like using Ruby" is, frankly, absurd.

When you ask the right questions, you never land on Ruby, and that's why Ruby continues to decline.


It depends on the metrics you care about.

Ruby is concise compared to Go, though. I like Go, but when I use it I have to accept that I'll write (and debug) twice as much code as I would in Ruby.

If you're mostly just loading data into a large fast cache, lines of code may be a more critical dimension than execution speed.

That's how well-designed Rails projects work, and you get most of what you need straight out of the box.



