What is a y-combinator? (stackoverflow.com)
128 points by lenmod on July 15, 2011 | 36 comments



My favorite explanation remains the one I came up with at http://www.mail-archive.com/boston-pm@mail.pm.org/msg02716.h....


Let's just pretend we are binding values. There are ways to make Perl look like you are doing parameter binding rather than assignment.

    use feature 'say';  # say() needs Perl 5.10+

    sub {
        my $builder = shift;
        return sub {
            my $n = shift;
            return $builder->($builder)->($n);
        };
    }->(
        sub {
            my $recurse = shift;
            return sub {
                my $n = shift;
                say "$n";  # Just a way to test
                if (0 == $n) {
                    return 1;
                }
                else {
                    return $n * $recurse->($recurse)->($n - 1);
                }
            };
        }
    )->(5);
I was confused about the "perl can't do this" part.


Pretending you are binding values instead of assigning them means you can pretend to achieve the goal in Perl.

But when you drop the pretenses, you're still doing assignment. (Though otherwise you've perfectly duplicated the pattern.)


I say I say I say...

  use strict; use warnings; use signatures;

  sub ($builder) {
      return sub ($n) {
          return $builder->($builder)->($n);
      };
  }->(
      sub ($recurse) {
          return sub ($n) {
              if (0 == $n) {
                  return 1;
              } else {
                  return $n * $recurse->($recurse)->($n - 1);
              }
          };
      }
  )->(5);


Cute. But I wrote the email under discussion in 2005, and the first version of signatures was not released until 2008.

I still think that my email was accurate as of 2005.


You should definitely add that as an answer on SO. The current answers are not very useful.



I just worked through this for myself. Maybe someone will find another approach useful:

http://stackoverflow.com/questions/93526/what-is-a-y-combina...


Aristotle also gives a great explanation in this use.perl.org post: http://use.perl.org/~Aristotle/journal/30896


Has anyone on HN ever actually used one of these in code they've written?

I've never understood why the Y combinator is useful or impressive. Interesting, maybe, but not really useful.


I use them in the Fexl language http://fexl.com .

Although the Fexl language itself has syntax for recursive definitions, it translates everything to combinators internally, so there are no "symbol tables" or "environments" at run time. Therefore it uses the Y combinator to implement those recursive definitions.

A while ago I wrote this detailed example: http://news.ycombinator.com/item?id=2719635

tl;dr here is a non-recursive definition of the append function for two lists:

  (Y \append \x\y x y \h\t cons h; append t y)
If you need more parentheses for clarity, here you go:

  (Y (\append \x\y x y \h\t cons h (append t y)))


I would say that the Y combinator is interesting to such a degree that it becomes useful. Sure, you won't and shouldn't use the Y combinator in any real-world application. But it demonstrates the true power of the lambda calculus to represent computation, and, more importantly for the practical programmer, understanding it provides deep insight into the nature of computation, which is profoundly useful if you ask me.


It's a stretch to say that you shouldn't use them in a real application. Combinators in general can be useful for implementing functional programming languages, and those languages can be useful for applications.

That may sound like bickering over a fine point, but I'm just saying that combinators can be more than a theoretical tool.

It seems reasonably fast for my purposes so far, and the expansion rule is just a one-liner: https://github.com/chkoreff/Fexl/blob/master/src/Y.c
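
For reference, the standard expansion rule for a built-in fixed-point combinator, which is presumably what that one-liner encodes, is just:

  Y f  ->  f (Y f)

which is what lets a function receive itself as an argument without any named recursion.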


"Y in Practical Programs" by Bruce McAdam:

http://www.dsi.uniroma1.it/~labella/absMcAdam.ps (or search with Google for an HTML translation).


It's useful for lambda functions that recursively call themselves, but you don't need that in practice, because you wouldn't use a lambda; you would use a named function. That makes it an almost purely theoretical aspect of the lambda calculus. I can't think of any programming language that is purely lambda calculus.
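
For concreteness, here is a minimal sketch in Perl of that use case (the names $Y and $fact are my own, not from the thread): an anonymous sub given a way to call itself via the applicative-order ("Z") variant of the combinator.

  use strict; use warnings; use feature 'say';

  # Applicative-order fixed-point combinator, built only from
  # anonymous subs and application.
  my $Y = sub {
      my $f = shift;
      my $self_apply = sub {
          my $x = shift;
          return $f->(sub { $x->($x)->(@_) });
      };
      return $self_apply->($self_apply);
  };

  # An anonymous factorial: nothing here ever refers to itself by name.
  my $fact = $Y->(sub {
      my $recurse = shift;
      return sub {
          my $n = shift;
          return 0 == $n ? 1 : $n * $recurse->($n - 1);
      };
  });

  say $fact->(5);  # 120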


Fexl http://fexl.com is essentially nothing more than a thin layer of lambda calculus on top of C, implemented using combinators.



It's all about removing dependencies, and the Y combinator has the fewest dependencies for expressing the broadest range of functions.
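
For concreteness, the textbook definition is built from nothing but abstraction and application (written here in the same backslash style as the Fexl snippets above; strict languages need an eta-expanded variant):

  Y = \f (\x f (x x)) (\x f (x x))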


After you wrap your head around that barely useful, rarely used technique, the next question is: why did pg name his company after it?


He started a business, and now his business is starting other businesses. Recursive...


Y Combinator is not simply recursion, and a function that creates other functions isn't necessarily recursive anyway. He should have called it Factory Pattern.


Paul Graham isn't a Java hacker, and isn't a fan of complex OO design patterns. Naming it Factory Pattern would therefore be unlikely to appeal to him.

See http://www.paulgraham.com/noop.html for evidence that he isn't a big fan of OO.


I guess the computer science-y name would be Higher-order Function.


Just a small remark: Factory pattern is anything but complex. Y combinator actually seems much more complex to me (though, I'm not proficient in functional programming, so that might be why I find it complicated).


> (though, I'm not proficient in functional programming, so that might be why I find it complicated).

Hint.


Thanks for posting that ... I'm happy to see I'm not the only one who eschews OOP and sticks to plain old routines. I've been programming for over 30 years, and I went through an OOP phase, but I got over it.


Do you really stick to plain old routines? If you have a data structure, and a module full of functions that manipulate it, that is an object. Just with a polluted global namespace and extra typing.
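
A rough sketch of that equivalence in Perl (a toy counter of my own, not code from this thread): the same thing as a plain hash plus functions, and as a blessed object.

  use strict; use warnings;

  # Plain data structure plus a "module" of functions that manipulate it.
  my %counter = (count => 0);
  sub counter_inc   { my ($c) = @_; $c->{count}++ }
  sub counter_value { my ($c) = @_; $c->{count} }

  counter_inc(\%counter);
  print counter_value(\%counter), "\n";   # prints 1

  # The same counter spelled as an object.
  package Counter;
  sub new   { my ($class) = @_; bless { count => 0 }, $class }
  sub inc   { my ($self) = @_; $self->{count}++ }
  sub value { my ($self) = @_; $self->{count} }

  package main;
  my $obj = Counter->new;
  $obj->inc;
  print $obj->value, "\n";                # prints 1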


You don't even need the data structure, since you can simulate it with functions too and build a basic object system. Consider a simple example like this:

   (define (init-counter)
     (local
         ((define counter 0)
          (define (inc) (set! counter (add1 counter)))
          (define (dec) (set! counter (sub1 counter)))
          (define (current-count) counter))
       (values inc dec current-count)))
Then you can just import this code into wherever you want to use it and do something like:

   (define-values (counter++ counter-- current-count) (init-counter))
Plenty of people have said data structures are a poor man's functions. I think it's in that famous list of programmer quotes collected by Alan Perlis that everyone inevitably comes across at some point.

In many cases where you want object orientation something like this will be sufficient. No need to go the whole way with inheritance and whatnot.


You know, even when I did use OOP, I avoided inheritance altogether and stuck with containment instead. The semantics seemed clearer to me, and I later read some essays from excellent programmers who shared my sentiments.

Nice trick with your little module-in-a-box there. I think it's roughly analogous to the technique I used in my "handy_module" example here: http://news.ycombinator.com/item?id=2580717 .


That's right, we are both using the same technique.

It's interesting to note, though, that the difference between this and a data structure is not that great. I haven't looked into it, but I'm certain I've read somewhere that data structures as implemented in Racket (which is what my example is written in) actually desugar into something like my example code.

I still like to use this technique here and there, especially when there will only be one copy of the structure in use in my program. This is because it's simply more convenient to write something like (counter++) rather than declaring a global COUNTER and typing in (set! COUNTER (add1 COUNTER)) etc.


Thanks for asking. Yeah, I actually do. Certainly when I'm programming in C, I use plain old routines, 'cause that's all I got.

But recently I even "de-objectified" a body of Perl code I wrote (a standalone server called "loom" https://loom.cc/source).

I systematically refactored it to get rid of all $object->function constructs. I ended up with a flat name space with names like trans_get, trans_put, span_encrypt, diceware_passphrase, file_update, dttm_as_cookie, api_grid_move, valid_id, trimblanks, token_get, etc. etc.

I even have a little script in there called "show_subs" which lists all the routine names in alphabetical order. There are 410 of them.

You might call that a "polluted" name space, but to me it smells like clarity. I can now look at any snippet of code completely out of context and know exactly what it does.

You might say that the downside is that each piece of code is not "configurable", meaning that it can't operate on different types of things depending on context. If and when that ever became necessary, I could easily manage it. But I never found it to be necessary.

By the way, this discussion reminds me of this article posted earlier: http://erlang.org/pipermail/erlang-questions/2011-May/058769... . I look at the list of functions he exports there and I think yeah, I bet I could use those in a heartbeat without a second thought.


I see it more as being a generative nucleus that makes all computation possible.


He actually mentions why in a Mixergy interview. I believe it's one that's still free to view, but to paraphrase:

People who see it and recognize it are the type of people whose attention he wants to catch (hackers). Suits would see it and ignore it.


It's an opportunity for HN'ers to hold forth on programming.


FORTH ?KNOW IF HONK ELSE FORTH LEARN THEN


Fixed.



