I made a Phoenix webapp as a veteran Rails dev (reddit.com)
83 points by stanislavb on May 21, 2017 | 57 comments


What did he gain? Some performance? Did he need it, and was there no simpler way to get it?

I get the premise, just trying things and having fun. But I get the impression that in a lot of (most?) scenarios this would be a lateral move at best. And his last comment about going with Phoenix instead of Rails for new projects, if he's not just being kind, seems like a major case of throwing out the baby with the bathwater.

Goodbye 10 years of experience, knowledge of pitfalls, etc...hello 1 month of promise and years of gotchas and relearnings for small benefit?


I used to be very actively involved in the ColdFusion community, and heard the same criticism when I'd encourage others to expand their skillset.


Being aware and trying things is one thing, I meant more a seemingly cavalier attitude of taking all your eggs out of an old technology basket and putting them in a new one, with slim justification.

And then there is survivorship bias. What if by some strange turn of events, ColdFusion had continued to evolve and eaten its respective niche? While I wouldn't condone an attitude of not being open to new things, the ColdFusion stalwarts would look like sages, while people who invested in an upstart technology could well have wasted a lot of time.

The move "backwards" to older, more mature technologies happens often enough, too. Startup uses latest NoSQL thing, and then realizes an old timey SQL database would have saved them a lot of trouble, etc. It cuts both ways.


I used my ~9 years of experience in Ruby/Rails and invested 100% in Elixir/Phoenix. I've found Elixir way simpler and more elegant for my tasks, to the point that I now start my new projects in Phoenix (and have ported some Rails apps to it too).

I still respect and use ruby/rails, but it's not my first choice anymore.


I 100% respect that. It sounds like you didn't jump to that conclusion in 1 Phoenix project over a month, though...and when you say "my tasks" the situation may have been that you weren't churning out "normal" CRUD apps that Rails is so good at?

For sure, it's a lack of careful thinking and consideration of trade-offs that I'd take issue with...it sounds like you didn't come to those conclusions out of emotion or a deep yearning for something new.


Yes, I approached Elixir/Phoenix gradually (well, my very first project on it was a sort of e-commerce library), and it's true that I didn't choose it in only one month. Moreover, I tried other languages/techs over the years (Go and NodeJS/Express, for example).

And yes, sometimes my tasks go beyond simple CRUD, but as of today, I'd choose Phoenix anyway because I find myself more comfortable with it than with Rails.

Even the fact that I can consider using a 512MB DigitalOcean droplet without fear is a plus, but not the main reason :-P


The performance question is a good one. Erlang has never been a performance wonder, so I'd be surprised to hear that Elixir is noticeably faster than Ruby.


What are you talking about? It's like an order of magnitude faster than Ruby. It might not be as fast as Go or Java or whatever, but it's not even a comparison to Ruby, and particularly Rails.


Is it really that much faster? I mean, Rails is a separate problem, but I would have put Erlang in the same perf bracket as Ruby/Perl/Python.


Perhaps in raw computation it's more similar, but based on the context of your comment (and its parent), it seems more like you're comparing it to Ruby+Rails in a real-life scenario (say, a web app). (The original comment was about "what did he gain?")

In that case, given the multithreaded nature of Elixir and its VM, there's no comparison.

I tried searching for raw Ruby+Erlang benchmarks but couldn't find much in the short time I looked.


Can confirm, elixir+phoenix outperforms ruby+rails by ~10x.

If we're talking about pure Ruby and Elixir, things get more complicated: you can do some stuff a bit faster in Ruby sometimes (where Ruby is backed by optimized native C code).


Remember that the starting point for your comparison is Ruby. It's a great language, but not one you reach for if performance is a primary consideration.

DHH even acknowledges this: "But sure, I'd like free CPU cycles too. I just happen to care much more about free developer cycles and am willing to trade the former for the latter."


Is it merely coincidence that every time I hear about Phoenix it's from the Rails community? Is it popular with people who haven't come to it from Rails?


A never-Rails dev here using Elixir/Phoenix, easily the best choice I've made after hearing positive things here on HN.

The Elixir community may be an amalgamation of Rails folks looking for better runtime performance and cool new tech, and Erlang folks looking for syntactic sugar and a growing developer community. (I'm neither.)

Elixir's author was one of Rails' core contributors, which probably also helps in bringing the Rails crowd over. One could think of Phoenix as a "better" version of Rails rewritten on top of the Erlang stack, even if Rails and Phoenix are apples and oranges under the hood.

See also: the inverse of OP's article, the creator of Erlang trying out Elixir (2013) [1]

[1] http://joearms.github.io/2013/05/31/a-week-with-elixir.html#...


Phoenix is heavily inspired by Rails, and I also think some of the core contributors are the same. No reason people unfamiliar with Rails can't use/learn it, but if they are previously familiar with Rails it can be easier.


As a Django dev who has done bits of Rails, I felt it suffered from some of the same problems as Rails that cause me to prefer Django. I'd love to see what something inspired by the Django/Python community could be.


Python/Django dev here who made the switch, and I love it. I would also like to hear what your specific concerns were.



Would you mind elaborating? I'm curious to hear which Rails problems you felt translated also to Phoenix.


I am currently learning Elixir. Weirdly, the thing I'm most blocked on is the functional aspect. I don't fully get why `Enum.at(list, 1)` is superior to `list[1]`.

(I know it seems a pretty obvious question, but I can't find satisfactory answers that make sense to me at least.)


Because with more expressive constructs, you may find you don't have to write list[ix] very often. For example, a traditional "for" loop may be replaced more succinctly with an operation such as "Enum.map" or "Enum.filter" [1]. Data transformations are made more obvious.

Coming from imperative languages especially (speaking for myself), you have to re-think/re-learn how you write code, but it does end up being a worthwhile investment.
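
For example, a minimal sketch of what replaces a typical index-based loop:

   # Double every element, then keep only the results greater than 4.
   # No index variable and no manual accumulation needed.
   [1, 2, 3, 4]
   |> Enum.map(fn x -> x * 2 end)
   |> Enum.filter(fn x -> x > 4 end)
   # => [6, 8]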

[1] https://hexdocs.pm/elixir/Enum.html


I want to believe that, but I don't know. For example, in Elixir, you would write something like this to square a list: 'Enum.map [1,2,3], &(&1 * &1)'. In Ruby: '[1,2,3].map {|e| e * e}'. The Ruby seems more natural to me, but maybe it's the years of OOP speaking and not reason.


Btw, there are a number of ways to write that expression, depending on the complexity of the function.

Short syntax:

   Enum.map [1,2,3], &(&1 * &1)
Long syntax:

   Enum.map [1,2,3], (fn num ->
     num * num
   end)
Or even:

   square_me = &(&1 * &1)
   Enum.map [1,2,3], square_me

   square_me_too = fn(num) -> num * num end
   Enum.map [1,2,3], square_me_too
Defining a function separately (e.g. even in another module) and referencing it:

   def square(num) do
      num * num
   end

   def doing_something() do
      ...

      Enum.map [1,2,3], &square/1
      Enum.map [1,2,3], &SomeOtherModule.square/1

      ...
   end


yeah and in Ruby, you can do:

    class Array
      def square
        map {|e| e * e}
      end
    end
Why is storing code in a separate module superior to storing it in its class?


You've now altered a global object which other code may rely on. Indeed, you may overwrite someone else's Array#square method (or have yours overwritten). As a separate module, the additional code is logically distinct. You can find many discussions about monkey-patching elsewhere. It's not my intent here to argue one way or the other, just to answer your question as to why one may choose to package the code as a separate module.


If it is to be reused by other modules, or if it is complex (and likely itself needs to be split up into smaller functions).

Think of:

   items = ["file_a", "file_b", "file_c"]
   Enum.map items, &HelpfulDownloaderModule.download/1
download/1 is probably pretty complex and/or can be of use in other pieces of code.


I've been studying functional languages for most of 2017, and the first example makes more sense to me.


Because it's not superior, not on its own. You need context to tell which one is better. Not to mention that by looking too closely, you miss the higher-level design that makes the difference between functional and imperative languages.


Can you provide a minimal example where functional clearly overtakes OOP?

Bad OOP is clearly inferior to good functional, e.g. React. But if OOP is able to do functional as well, shouldn't OOP win in the long run?


I think it's also about the ecosystem. With Elixir, all third-party packages etc. can be relied on to have no side effects and to respect immutability.

If you try to code functionally in an OOP language, you have to make sure that no third-party modules (or coworkers, accidentally) write code that "misbehaves" in these respects. Just like you currently have to make sure that the Ruby code you use is threadsafe.

With immutability you gain all those features baked into Erlang/OTP (scalability, reliability).
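
A minimal sketch of what that immutability looks like in practice:

   # "Updating" a map returns a new map; the original is untouched,
   # so no other process can ever observe a half-modified value.
   user = %{name: "Ann", visits: 1}
   updated = Map.put(user, :visits, 2)
   user.visits      # => 1 (unchanged)
   updated.visits   # => 2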


When coding moves from science to art.


Elixir lists aren't index-based arrays. They're linked lists. If you Google around, there are some good forum posts explaining the difference.
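
A minimal sketch of the difference:

   list = [1, 2, 3]       # structurally [1 | [2 | [3 | []]]]
   [head | tail] = list   # head = 1, tail = [2, 3], an O(1) match
   Enum.at(list, 2)       # => 3, but this walks the list: O(n)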


Besides pattern matching, which I agree is awesome, is there anything else that makes the Elixir approach superior to the Ruby one?


I am a functional newbie too, but with `Enum.at` you can write a higher-order function, e.g. `get_nth_elem`, which takes `Enum.at` itself as an argument.
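
A sketch of that idea (the function names are hypothetical):

   # The accessor is itself an argument, so any "get an element"
   # strategy can be plugged in.
   get_nth_elem = fn accessor, list, n -> accessor.(list, n) end
   get_nth_elem.(&Enum.at/2, [10, 20, 30], 1)  # => 20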


A Rails veteran who still heavily relies on ActiveRecord's callbacks sounds a bit weird to me...


Yeah... that was immediately a red flag for inexperience. Not to mention when he's "not sure" if things were cleaned up. How have you been programming for 10 years without knowing how to properly clean up records, whether it's through the database, the ORM, or manually in a transaction?

It sounds like this guy has sucked on the Rails teat for far too long, instead of challenging himself with other approaches and paradigms.


Do you have any resources to expand on the issues with AR callbacks? I'm interested to read up on this.


AR callbacks can also make it harder to write tests.


In short: you lose control of what is happening, especially when you extend/inherit other classes. These kinds of problems arise after a few months of development, when the codebase has grown enough and starts becoming ever more difficult to change or extend.

Check out Trailblazer for a different approach.


What would you use instead?


Commands (or Service Objects) are a better approach. Isolating business logic and actions lets you keep control of what's going on.


I'd like to see him try Crystal next and see how that compares.


Crystal? What was your experience?


I originally started as a Ruby/Rails dev, and while I still do that, Crystal is phenomenal to work with.

https://crystal-lang.org/

It's still a smaller community, so frameworks and such are still being developed. Currently there's a Sinatra-inspired library, Kemal:

https://github.com/kemalcr/kemal

and then there's Kemalyst, which handles MVC and the like:

https://github.com/kemalyst/kemalyst


> Just the fact that app can utilize all cpu cores without you doing anything special is pretty damn awesome.

Rails apps can utilize all CPU cores without you doing anything special as well (since Rails 5, its default web server is Puma, which is multi-process and multi-threaded). I agree this fact is pretty damn awesome, but what (if any) are the significant differences between Phoenix and Rails in this respect?


No it can't. It's technically impossible for a Rails app to utilize all CPU cores because of the GIL. You're confusing two different things: the web server and the actual Ruby app. The Rails app is still limited by the GIL and, therefore, runs within a single process.


So you're saying that when the OP talks about 'app can utilize all cpu cores', he's specifically referring to individual requests spawning multiple concurrent threads and therefore utilizing all CPU cores in isolation, as opposed to the web server utilizing all CPU cores across multiple concurrent requests by serving them on separate processes/threads? I would be interested if the former was the case (and would be interested in practical examples), but I highly doubt it.


Yes, the former. He's saying that his Elixir app should scale linearly to N times faster on N-core processors (up to ~20 cores).


Rails can scale in this way too, by serving concurrent requests across multiple forked worker-processes (note that the GIL doesn't prevent multiple _processes_ from running Ruby code concurrently across multiple cores, only multiple threads in a single process).

At first, it sounded like you were implying (by distinguishing between the web server and 'the actual Ruby app') that Phoenix automatically parallelizes portions of an _individual_ request across multiple threads/cores using some special asynchronous dataflow-paradigm primitives or something. It would be unique and truly awesome if that were true, but I still highly doubt it.

I don't doubt that Phoenix probably delivers orders-of-magnitude better performance than Rails in many other respects, but 'utilizing all cpu cores without you doing anything special' is nothing unique to Phoenix over Rails as I understand it.


Well, by its nature a web request is sequential, so in 95% of cases it's just:

  parse request -> deserialize -> put/get something from the DB -> send response

However, let's say you need to send off many requests to multiple external services in order to respond to a particular request. These would be trivially parallelized in Erlang.
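
A minimal sketch of that fan-out, assuming a hypothetical fetch/1 wrapping a single external call:

   # fetch/1 is a hypothetical wrapper around one external request.
   # Each Task.async runs its call in a separate lightweight process;
   # Task.await collects the results, so total latency is roughly the
   # slowest single call rather than the sum of all of them.
   results =
     ["service_a", "service_b", "service_c"]
     |> Enum.map(fn svc -> Task.async(fn -> fetch(svc) end) end)
     |> Enum.map(&Task.await/1)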

Another thing to note: let's say Puma is running 8 workers and a bug crashes the app. Every client connected to that worker is going to lose its connection while the worker restarts. In Erlang, you have one process per client, so only one client notices the crash.

And finally, the overhead of starting multiple Ruby processes (OS processes, each with its own interpreter) is way, way larger than starting multiple Erlang processes. It's a difference of ~300MB vs ~2KB.


The entire basis of Erlang (well, its VM, BEAM) is that everything is built from asynchronous primitives. From what I know, Phoenix works well with it.

So yes, Phoenix does "automatically parallelize portions of an _individual_ request across multiple threads/cores using some special asynchronous dataflow-paradigm primitives or something."

Edit: quoting you is not meant to be condescending... I'm not sure if it comes off that way or not.

Edit 2: Not an expert, and it may not be parallelized, but portions of the request can be doled out to other threads, and when there's any downtime from IO, other requests can use the thread...


I think that's not true. If you have e.g. 4 cores and you start Puma with 4 workers (i.e. processes), it will utilize all cores.

I guess you're mixing up processes with Ruby threads.

You normally start as many Puma processes as your CPU/core count, probably with additional threads per worker.


Not sure why you are being voted down. This is exactly how you would scale to multicore with Puma.


Because of the GIL, Ruby threads will run sequentially within a single process, even though they appear to be parallel. So, if a part of a program blocks the process, your CPU is being under-utilised.

Erlang, on the other hand, has a preemptive scheduler that is capable of keeping all cores hot.


That's true for threads (although most of your threads should be blocked on IO, e.g. DB queries, which means the GIL isn't a big issue).

But the beauty of HTTP is that you can scale simply by adding more "endpoints". That means you can add as many processes (workers) as necessary to utilize all CPUs, or even add as many servers as you can afford.

I'm not saying that Elixir does not do a better job with its lean processes and built-in OTP stuff.

But it's not true that you can't utilize all CPUs with a Rails app.


> The rails app is still limited by the GIL and, therefore, runs within a single process.

Rails app-servers fork multiple OS processes to utilize all CPU cores despite the GIL.

If you define a 'Rails app' as a single un-forked OS process (a definition I've never heard before) then yes, tautologically, a 'Rails app' could not utilize all CPU cores because of the GIL.


Elixir/Erlang and apps written in them parallelize embarrassingly well. Think the native asynchronous nature of node, but using all cores.
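
A minimal sketch, using Task.async_stream from the standard library:

   # Task.async_stream fans the work out over a pool of processes,
   # defaulting to one per scheduler (i.e. one per core).
   1..100
   |> Task.async_stream(fn n -> n * n end)
   |> Enum.map(fn {:ok, result} -> result end)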


We are using Phoenix and Elixir in a small service. Loving it so far. Super fast and nicely organized.



