markov_twain's comments

Github keeps the refs for pull requests separate from branches and tags. For example, you can fetch pull request #123 into a local branch called pull-123 with: `git fetch origin refs/pull/123/head:pull-123`.
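If you find yourself doing this a lot, you can also add a fetch refspec so every pull request head comes down automatically. Something like this in `.git/config` (the URL is a placeholder, and the `pr/*` local naming is just a convention I like, not anything GitHub mandates):

```
[remote "origin"]
    url = https://github.com/example/example.git
    fetch = +refs/heads/*:refs/remotes/origin/*
    fetch = +refs/pull/*/head:refs/remotes/origin/pr/*
```

After a `git fetch origin`, pull request #123 shows up as the remote-tracking branch `origin/pr/123`.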

When you open a pull request from a branch, Github creates the separate pull request refs to track changes to that branch. I'm assuming they have some kind of post-receive hook that updates the ref whenever you push.

However, if you close the pull request (or delete the branch, which automatically closes the pull request), then rebase the branch locally and force push your branch, the pull request will not be updated, and you still see all the comments and historical data.

You can then open a new pull request from that branch, and go through the code review process again. If you mention #123 in the description of the new pull request, it'll create a link at the bottom of the discussion on the original, closed one. This helps keep the separate discussions tied together if you're coming back later and want to see all revisions of this branch.


Here's an entirely working example I just created: https://gist.github.com/benolee/ca49aaa9a0363c18904b


Thanks for this--I mentioned finagle to jcoglan on twitter yesterday after I read his blog post, and I don't think he was aware of the similarities.

I actually didn't know about futures until I learned about scala and finagle. I watched a talk on twitter's service stack given by marius eriksen and was blown away. My coworkers heard me rambling on about futures for weeks afterwards, and I found that it was difficult to explain what was so great about them. So I'm not surprised at the negative reactions in the comments here (although jcoglan did a much better job of explaining them than I ever did).


Even worse, he lists "brainfuck" as a technology that's easier to use outside of the big-company matrix. There is no such thing as "brainfuck". There are various dialects and derivative languages. There is the object-oriented programming language which solves its problem excellently and whose limitations are well-known and sometimes very painful, especially when dealing with huge amounts (multi-node) of data. There are other models of computation with different trade-offs. There is no such thing as "brainfuck". It is not a technology or skill or anything else. There's Toadskin and Smallfuck and Doublefuck and various others, but there isn't a "BrainFuck". I can't type "brainfuck start" at the command line and get anything useful to happen.

Sorry, I had to do that.


The concept of "brainfuck" is one that is constructive, whereas "nosql" is deconstructive: one defines something from nothing, and the other defines something only by what it isn't. When you look at the ecosystem of NoSQL solutions, you don't really find much commonality... there are patterns, but they are largely defined by prototypical examples that are insanely disparate (BigTable, Dynamo, memcached). Your past experience working with HBase (other than in the general "problem solving skills transfer" way) doesn't help one later work with fundamentally different systems like Riak.

Your sarcasm thereby definitely hits the exact statement made, but seems to entirely miss the underlying point: you are correct that even for SQL you have to use a concrete implementation, not the abstract concept, but to an important extent which database you end up using doesn't really matter... they are all pretty much the same (and yes, this is coming from someone who in different contexts will implore people to not judge "SQL" poorly due to problems inherent in "MySQL"). That is just not true of "NoSQL": if you want to be a little more honest in the comparison, you could try something like "not-Java" (which one also imagines is difficult to use inside of a big-company matrix).


Brainfuck does exist. It's an esoteric programming language. http://en.wikipedia.org/wiki/Brainfuck
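It's small enough that a toy interpreter fits in a few lines of Ruby. A sketch (no input command, no error handling for unbalanced brackets; byte cells wrap at 256):

```ruby
def brainfuck(src)
  # pre-compute matching bracket positions for the loop commands
  jumps, stack = {}, []
  src.each_char.with_index do |c, i|
    stack.push(i) if c == "["
    if c == "]"
      open_pos = stack.pop
      jumps[open_pos], jumps[i] = i, open_pos
    end
  end

  tape = Hash.new(0)  # sparse, infinitely-growable tape
  out, ptr, pc = "", 0, 0
  while pc < src.length
    case src[pc]
    when ">" then ptr += 1
    when "<" then ptr -= 1
    when "+" then tape[ptr] = (tape[ptr] + 1) % 256
    when "-" then tape[ptr] = (tape[ptr] - 1) % 256
    when "." then out << tape[ptr].chr
    when "[" then pc = jumps[pc] if tape[ptr].zero?
    when "]" then pc = jumps[pc] unless tape[ptr].zero?
    end
    pc += 1
  end
  out
end

puts brainfuck("++++++++[>++++++++<-]>+.") # prints "A"
```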


NoSQL also exists. Carlo Strozzi first used the term NoSQL in 1998 as a name for his open source relational database that did not offer a SQL interface. http://publications.lib.chalmers.se/records/fulltext/123839....


Pretty sure markov_twain was being sarcastic.


Ah. Didn't quite catch that. My apologies.


I like that chrome can read .bf files


I randomly stumbled across this gist by tenderlove https://gist.github.com/tenderlove/4576780 that displays references as a tree view using d3.js. I think that this gist in particular is pointing out a bug in that the "references" for Fixnums (also Symbols) in an array aren't returned by ObjectSpace.find_references.
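For comparison, the stock `objspace` extension in Ruby 2.0 has `ObjectSpace.reachable_objects_from`, which shows the same gap for immediate values (a sketch; `find_references` in the gist may come from a patched build, so behavior could differ):

```ruby
require 'objspace'

arr = ["a string", :a_symbol, 42]
refs = ObjectSpace.reachable_objects_from(arr)

# the array's class and its heap-allocated string element are reported...
puts refs.include?(Array)               # true
puts refs.any? { |o| o == "a string" }  # true
# ...but the immediate Fixnum is not, since it's not a heap object
puts refs.include?(42)                  # false
```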


Heroku apps at the very least sit behind the heroku router, which I would assume is in the same datacenter. If the information in this quora answer http://www.quora.com/Scalability/How-does-Heroku-work is still valid, your app is also behind a front-facing nginx reverse-proxy.

I'm not sure that actually means it's handling slow-clients, though.


I think the important part is that the reverse-proxy should buffer the request, but I can't find anything about whether or not the Heroku router does this.


Unfortunately, Ruby 1.9.x is not copy-on-write friendly, mainly due to the garbage collection strategy. I would venture to guess that most Rails apps these days are running on some version of Ruby in the 1.9 series. So siong1987 is generally correct in the assumption that forking a Rails process n times will consume about n times the amount of memory as a single process. However, Ruby 2.0 has a new garbage collector (called bitmap-marking) that promises to be copy-on-write friendly, so as adoption of that increases your suggestions will become more important.
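To make the forking part concrete, here's a toy preforking sketch (illustrative, not Unicorn's actual internals): the parent loads state, then forks n workers that each start with a copy-on-write view of it. Without a CoW-friendly GC, the first full GC run in each child dirties the shared heap pages, which is where the roughly n-times memory cost comes from.

```ruby
# stands in for a loaded Rails app's memory footprint
app_state = Array.new(10_000) { |i| "record-#{i}" }

worker_pids = 3.times.map do
  fork do
    # the child sees app_state without copying it... until GC writes to
    # the mark bits stored in the objects themselves (pre-2.0 behavior)
    exit!(app_state.size == 10_000 ? 0 : 1)
  end
end

statuses = worker_pids.map { |pid| Process.wait2(pid).last }
puts statuses.all?(&:success?)
```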


Brightbox has Ruby 1.9 packages with the GC copy-on-write-friendly patches included. We run it in production with good results.


AFAIK, to use the brightbox packages on heroku you'd have to write a buildpack that either downloads all the dependencies and runs through the same build process that brightbox uses, or use something like heroku-buildpack-fakesu to create a fakeroot type of environment where you can install debs.

Another issue with using the brightbox packages is that if you happen to run into a bug, you'll have to figure out if the bug was caused by something non-standard in your ruby installation or if it's actually a bug in ruby.

One last thing to note is that it looks like the latest brightbox release targets patchlevel 327, while ruby core is at 392 (not counting 2.0.0-p0), so you're missing a lot of bug fixes until the brightbox team gets around to building against the latest release.


We don't deploy on Heroku, so I didn't know about that process.

Your other two points are true though. We consider the trade-off worth it for the memory gains.


This bit of ruby should take care of it

    HashMap = HashTable = Map = Table = Dictionary = Hash
And if you're feeling adventurous,

    Object.send :remove_const, :Hash


Please never, ever do this


If doing this is such a bad idea then why is it so easy?


If doing:

  #define BEGIN {
  #define END }
were such a bad idea, then why is it so easy?


Maybe the language is poorly designed.


Or maybe the connection between being able to do something and it being a good idea to do something is just in your head.


In my experience, making obviously bad things difficult or impossible improves reliability. This idea certainly resides within my cranial cavity, but that doesn't necessarily make it wrong.


How could:

  HashMap = HashTable = Map = Table = Dictionary = Hash
possibly not qualify as "obviously bad"? The only reason you've offered up is because it is easy...


I think this is fine. The obviously bad part is being able to remove/change constants, especially as these changes are global.


The obviously bad part is that you pollute the global namespace for no reason other than laziness. When someone comes across code that uses a "Table" object interchangeably with "Dictionary" and "Hash", then he's going to have to look through the source code to find this bizarre line only to find out that you renamed a built-in container for no good reason.


Yes, I suppose that's also true.


I approve of this. Don't listen to that other guy. This should be standard.

One addition though:

    Cocktionary = HashMap
"Dictionary" never really made sense.


Literally reinventing the wheel


fwiw, I find the syntax definition in the ruby_parser gem at https://github.com/seattlerb/ruby_parser/blob/master/lib/rub... to be much easier reading, as it's written for racc (the ruby equivalent of yacc), although as the readme warns, it's not perfect.

