Strangest language feature (stackoverflow.com)
121 points by tszming on Aug 13, 2010 | hide | past | favorite | 77 comments



I'm a bit disappointed in this discussion. "strangest" language feature sounds like a crowd pointing and laughing. Boring! What are some novel, underappreciated, unexpectedly useful language features? (And this emphatically isn't about "my favorite language vs yours".)

Here are a couple, to get things rolling.

Pattern matching. This is a favorite of mine - stating what you want (or expect), possibly with variables, and letting the language generate the tree of if/case statements. I find pattern-matching code extremely easy to read, easy to extend, almost always right the first time, etc. It works with both static (ML-style variant types, reminding you to check exhaustively) and dynamic (Erlang-style, "handle what you can or throw an exception") typing. awk is based around a weaker form of pattern matching (only matching against ints and regexes, rather than whole trees), and it's still great for quick hacks. PM seems to mesh poorly with conventional OO, though.
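
To make that concrete, here is a rough sketch of the tree-matching idea, written with the match/case syntax of newer Pythons (3.10+); the tuple encoding of the expression tree is invented purely for illustration:

    # A hypothetical expression tree encoded as nested tuples, matched by shape.
    def simplify(expr):
        match expr:
            case ("add", 0, x) | ("add", x, 0):     # x + 0  ->  x
                return simplify(x)
            case ("mul", 1, x) | ("mul", x, 1):     # x * 1  ->  x
                return simplify(x)
            case ("mul", 0, _) | ("mul", _, 0):     # x * 0  ->  0
                return 0
            case (op, a, b):                        # any other node: recurse
                return (op, simplify(a), simplify(b))
            case _:                                 # leaf: a number or a name
                return expr

    print(simplify(("add", ("mul", 1, "y"), 0)))    # prints: y

The if/elif ladder you would otherwise write by hand falls out of the patterns, which is most of the readability win described above.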

Compile-time metaprogramming. This is HN, you've probably heard plenty about Lisp macros, so I'll leave it at that.

Incremental development / hot code loading. Erlang has this (with soft real-time guarantees); Lisp, Smalltalk, and basically anything image-based has had this for ages. It's often easier to grow a system piece by piece than to build the pieces upfront and plug them together afterward.

Iterators/generators. Without needing a fully lazy-by-default language such as Haskell, it's often handy to have a handle into a (possibly infinite) stream that you can pull from as necessary. Prolog combines pattern matching (unification really, a step beyond pattern matching) with backtracking, so you can write "generate and test" code: "here's what I want, this generates a stream of all possible candidates, here's how to narrow it down". It's sometimes too slow for use in production (due to combinatorial blowups), but it's excellent for prototyping, and can usually be made good enough with some tweaks.
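
As a small sketch of that pull-as-needed style (Python generators here, names purely illustrative), with a "generate and test" flavor thrown in:

    def naturals(n=2):                    # an infinite stream: nothing runs
        while True:                       # until a consumer asks for a value
            yield n
            n += 1

    def primes():                         # generate candidates, test, narrow down
        for n in naturals():
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                yield n

    import itertools
    print(list(itertools.islice(primes(), 10)))   # first ten primes, on demand

The consumer controls how much of the (conceptually infinite) stream ever gets computed.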

Coroutines and actors. Like iterators and generators, but breaking the flow of execution into lots of little autonomous pieces. Communication makes dataflow explicit, most things can be easily understood in isolation, and they can often be run in parallel. (Coroutines are cooperative.)
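
A minimal sketch of the coroutine style, again in Python (the pipeline stages are made up): each stage is its own little loop, and the sends make the dataflow explicit:

    def printer():                          # sink: just report what arrives
        while True:
            item = yield
            print("got", item)

    def doubler(target):                    # stage: transform, then pass along
        while True:
            target.send((yield) * 2)

    sink = printer()
    next(sink)                              # prime each coroutine to its first yield
    stage = doubler(sink)
    next(stage)
    for n in (1, 2, 3):
        stage.send(n)                       # prints: got 2 / got 4 / got 6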


To this excellent list I'd add:

First Class Continuations like in Scheme.


What other languages have first class continuations? Scheme does, some SML implementations do (smlnj has call/cc, IIRC), Ruby has call/cc* , anything else?

* But I've heard its implementation isn't geared toward performance, and it probably doesn't get used that much. I'm not really into Ruby, so... anyone?

Of course, you can do CPS yourself, though realistically you need first-class functions, tail-call optimization, and garbage collection or the extra bookkeeping will make it overwhelmingly complicated.
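
As a rough illustration of what that looks like (Python, purely illustrative): the continuation k is just an extra "what to do with the result" argument, and because Python lacks tail-call optimization, every step still piles up a stack frame, which is exactly the bookkeeping problem mentioned above:

    def fact_cps(n, k):                     # k is the continuation
        if n == 0:
            return k(1)
        return fact_cps(n - 1, lambda r: k(n * r))

    print(fact_cps(10, lambda r: r))        # prints: 3628800
    # With no TCO, a large n blows the recursion limit even though every
    # call is syntactically in tail position.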


Gotta read through a bit, but there are some excellent ones in there. Personal favorite with C++ templates:

  assert( ( o-------------o
            |L             \
            | L             \
            |  L             \
            |   o-------------o
            |   !             !
            !   !             !
            o   |             !
             L  |             !
              L |             !
               L|             !
                o-------------o ).volume == ( o-------------o
                                              |             !
                                              !             !
                                              !             !
                                              o-------------o ).area * int(I-------------I) );
http://stackoverflow.com/questions/1995113/strangest-languag...


I was disappointed after clicking the link, because from the title, it sounded like it would be an insightful list of unusual useful features, but instead it's mostly a boring list of language design bugs or quirks.


The video of implementing Conway's Game of Life in APL is awesome:

http://www.youtube.com/watch?v=a9xAKttWgP4


I'm curious. How come no one claims APL is expressive?


To be usefully expressive, a language must not only be compact, but easy to read.

From what I've seen, APL is pretty much a write-only language. Code can't be read, only deciphered.


There's a trick to it. I was reading Ken Iverson's Notation as a Tool of Thought paper (look it up, it's worth a read) which is basically a whistlestop introduction to APL. I realised that it's very difficult (for me) to reason about when I read it left to right, but it's almost trivial read right to left.

That's right, it's a language that makes more sense backwards than it does forwards.



I heard somebody describe APL as associating "left of right", rather than "left to right", and it suddenly made sense. Reading it right to left is like reading it from the inside outward.


People who know APL can read it. I've been dabbling in J and K off and on, and it makes sense to me, though I still need the dictionary. APL code looks dense and intimidating, sure, but so does a page of Chinese. Both use ideograms rather than words made of letters, but the APL vocabulary really isn't that large. You probably already know what + does.

Also, I can't read Bosnian, but that says more about me than Bosnian. It doesn't make it a "write-only language".


But Chinese isn't any more expressive, just shorter. The fact that you type "國" instead of "nation" makes it lexically compact, not semantically. Studies show that proficient readers of Chinese do have marginally higher speed and comprehension than readers of phonetic scripts, but that doesn't translate into the language being more expressive.

At the risk of being slightly Euro-centric: in fact, from what little I know, Western literature and tradition seem to place more emphasis on semantic content and the conveyance of literal ideas than most Chinese writing does.


APL is also semantically compact, because every verb implicitly maps over a whole collection.* This interests me more than the whole "programming with wingdings" bit, but that's the part people get hung up on. Like all those damn parentheses in Lisp, or the whitespace in Python. (Shrug.) I'm more interested in J, which limits itself to ASCII.

* Joy also does this, but seems much less practical.

Chinese is also not a great comparison, because the vocabulary is so large. I originally said Turkish, then Bosnian, but changed it to Chinese because the "page with a grid of Chinese characters" is probably how most people think APL looks.


> Also, I can't read Chinese, but that says more about me than Chinese. It doesn't make it a "write-only language".

On the other hand, the fact that the Chinese have put their language through several reforms because of systematic problems with literacy and reading comprehension does show that, on a continuum between "readable" and "write-only", the Chinese language leans more toward the latter.


True, but I think you're focusing on the wrong aspect. (I originally said Turkish, then Bosnian, then decided that the ideogram-rather-than-letters aspect was worth drawing a parallel to Chinese, and have since changed it back to Bosnian.)

The parallels to actual human languages also break down here - reading APL is intimidating like reading Chinese, but it also has fewer than a hundred symbols (fifty-ish, IIRC), so it's really not comparable in terms of "systematic problems with literacy and reading comprehension". I think the APL-family languages are more like vi. Do vi(m) users think "9kdd2jp"? In writing, it looks like noise.


I don't think that follows. The causes could be primarily historical, for example. One would have to have detailed knowledge to make a case here.


The reason people can't read APL easily is that it's one of many languages programmers know. If APL were the ONLY programming language anyone ever used anywhere, then all programmers would be able to read it easily after a little programming experience. Natural languages have the tersest syntax around, and people understand them easily because their native language is the ONLY language they learn, or one of only a handful if they're multilingual. It would be the same with a programming language.


Not really... The syntax of natural languages isn't really comparable with programming languages. Completely different modes of understanding. Calling programming languages "languages" is only an analogy. Humans have highly evolved linguistic capabilities that don't really apply to programming languages (though if you could invent one that leveraged natural language capability, you might have something very interesting on your hands.)


Vorg is simply saying -- and is quite right -- that APL is regarded as unreadable only because it's unfamiliar. People who know APL-based languages are amazingly proficient with them. The "write-only" slur is a canard. (Actually, these languages constitute one of the truly great contributions to programming. APL belongs on a short list with Lisp, Smalltalk, C, Forth, and very few others.)

It's depressing how the computing world is overrun with majoritarian prejudice. Even here. I suppose it's just human nature, but an open mind is so rare.


I do hope I don't fall into that camp. I learned programming in BASIC, moved on to game development in C/C++ and DirectX, programmed professionally in Java for 6 years, learned Scheme, learned Clojure, wrote a book on Clojure, am now learning Haskell...

But I was motivated to do all of that because of people I respected who said "This contains some genuinely new ideas worth learning." So far, I haven't heard anyone say that about APL, and so I guess I assumed that either its differentiation was only in a hyper-terse syntax, or that it was not generally suitable for a generalized programming language (similar to regex).

If that's an invalid assumption, then please point me towards an article elucidating what's great about APL, and I'll read it sincerely.


APL contains some genuinely new ideas worth learning. I'd suggest looking into J, as a more accessible and modern APL dialect. (K / Q is also really, really cool, but J is less commercially restricted.)

I was just curious because it was so different, but the thing that really convinced me was going through some of the J "labs" (a bunch of math lessons included with J, that incidentally teach it along the way) and spending a couple hours picking apart this code (http://nsl.com/papers/origins.htm) to get into the APL frame of mind.

I would suggest not getting too hung up on the syntax, and instead focusing on the pros and cons of doing everything over collections by default, with implicit looping. "Thinking big", as Henry Rich calls it in _J for C Programmers_ (also a good resource). The ideogram-based syntax is an historical thing, and getting fixated on that is like thinking the only novel thing about Lisp is all the parens.
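
For a rough feel of that whole-collection style without the symbols, here is a sketch in Python with numpy rather than APL/J (wraparound edges assumed): one Game of Life step as a handful of whole-array operations, with no explicit loop over cells:

    import numpy as np

    def life_step(board):
        # Count the eight neighbors by summing shifted copies of the whole board.
        neighbors = sum(np.roll(np.roll(board, dy, 0), dx, 1)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))
        alive = board.astype(bool)
        return ((neighbors == 3) | (alive & (neighbors == 2))).astype(int)

    glider = np.zeros((8, 8), dtype=int)
    glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
    print(life_step(glider))

It's the same "operate on the whole array at once" habit the J labs teach, just spelled with function names instead of glyphs.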

See also: "The World's Most Mind-Bending Language Has the Best Development Environment" (http://prog21.dadgum.com/48.html)


Oh, then I think you'd find a lot to like about APL/J/K. It's not just that they contain genuinely new ideas worth learning; they're mindblowing in the way that Lisp and Forth are. I agree with silentbicycle that your best bet is probably the tutorials that ship with J (http://www.jsoftware.com/start.htm). The closest thing to a manifesto is Iverson's Turing award lecture Notation as a Tool of Thought (http://www.jsoftware.com/papers/tot.htm). But the really hardcore, beyond mindblowing stuff is Arthur Whitney's work on K, Q, and kdb. (Edit: the Q book is available online at https://code.kx.com/trac/wiki/QforMortals2/contents, user and pwd "anonymous").


I second what gruseom says: K (and Q, the newer version of K) is amazing. J seems more theoretically satisfying and mathy to me, but K is a straight-up hacking language. It's really too bad that it's closed and oh-so expensive (but since it was made for real-time stock trading, and it works, I'm not surprised).

Here's a good intro, by Arthur Whitney himself: http://www.vector.org.uk/archive/v101/whitney101_74.htm


Interesting. Downloading the non-commercial Q implementation and checking it out.


Hey, let us know if you find anything that piques your interest.


Just because you understand a language does not mean you understand or recognize all algorithms that can be expressed in that language.


Yeah, but that's hardly limited to APL.


That's why I said 'language' and not 'APL'.


Ah, I saw that and came here to post it, too. It's fascinating.

I learned a bit of Haskell on project euler, and when you solve a level you get to see the other solutions; the APL ones just blew my mind. What a strange little language.


Dynamic scope. Not really very strange but very unusual; not supported by many languages. AFAIK Perl and Lisp(ish) languages are the only ones. For example:

  sub foo {my $x='fish'; local $y='chips'; &bar;}

  sub bar {print "\$x is >$x< and \$y is >$y<\n";}
This will output

  $x is >< and $y is >chips<
It's quite handy - saves passing around a "context" object, for example. The remarkable thing is how few languages implement some form of it.


Dynamic scope is supported in older Lisps (Emacs Lisp is the only widely used variant with it), and a variety of scripting languages including PHP, Perl, Tcl and Vimscript (Vim's internal scripting language). It is rare-ish because it has been recognized as a bad design, but it still turns up because it is easy to implement.


It is often recognized as bad design - though I don't think that's always fair. I thought of this because I was just reading some Java code that passed around a context object with all kinds of random things stuffed into it. It was, in effect, a way of simulating dynamic scope. There's a time and a place for everything. I'm sure dynamic scope can be horribly abused, but it's also nice to have in the toolbox for when you actually need to do what it does.
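
For what it's worth, here is a rough sketch of that simulation in Python (the names are purely illustrative): a module-level stack of bindings plus a context manager gives you most of the "implicit context" behaviour without threading an object through every call:

    import contextlib

    _stack = [{"user": "anonymous"}]            # default bindings

    def lookup(name):
        return _stack[-1][name]                 # whatever is bound right now

    @contextlib.contextmanager
    def binding(**vals):
        _stack.append({**_stack[-1], **vals})   # shadow for the dynamic extent
        try:
            yield
        finally:
            _stack.pop()                        # restore on the way out

    def deep_in_the_call_tree():
        print(lookup("user"))

    deep_in_the_call_tree()                     # prints: anonymous
    with binding(user="alice"):
        deep_in_the_call_tree()                 # prints: alice
    deep_in_the_call_tree()                     # prints: anonymous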


> eLisp is the only widely used variant with it

I think you mean it's the only widely used variant where it's the default. It's available in CL and Clojure when you want (need?) it.


And Haskell:

    import Control.Monad.Reader

    type Action = ReaderT String IO ()

    foo :: Action
    foo = do
      bar
      local (const "new state") bar
      bar

    bar :: Action
    bar = liftIO . putStrLn =<< ask

    > runReaderT foo "foo"
    foo
    new state
    foo


As well as Scheme.


How do Tcl, PHP, and Vimscript support dynamic scoping? Examples, please.

IIRC Common Lisp supports dynamic scoping too -- it defaults to lexical scoping, though.


If you've ever tried implementing a language, you'll see that dynamic scope is actually much easier. So it's not that language x doesn't implement dynamic scope. It's that language x doesn't actually implement lexical scope, so it's left with dynamic scope :)


I'm not sure I understand your sentence (did you mean language x does implement dynamic scope?), but I feel obliged to mention Perl offers lexical scoping as well--within each file and each block.


If you're a lazy language designer, dynamic scope sort of emerges from the design process, while you really have to do extra work to support lexical scoping in a language.
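
A toy-interpreter sketch (hypothetical AST, in Python) of why that is: with one environment threaded through every call, dynamic scope is what you get for free, and lexical scope is the part that takes extra work:

    def eval_dyn(node, env):
        kind = node[0]
        if kind == "lit":
            return node[1]
        if kind == "var":                        # looked up in whatever env the
            return env[node[1]]                  #   *caller* happened to pass in
        if kind == "lam":
            return (node[1], node[2])            # no environment captured at all
        if kind == "call":
            param, body = eval_dyn(node[1], env)
            arg = eval_dyn(node[2], env)
            return eval_dyn(body, {**env, param: arg})   # extend the caller's env

    print(eval_dyn(("call", ("lam", "x", ("var", "x")), ("lit", 42)), {}))  # 42

    # Lexical scope would require the "lam" case to return a closure that
    # captures the environment where it was *defined* -- that's the extra work.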


Perl 5 has dynamic scoping for backward compatibility with Perl 4, which had only dynamic scoping, and not only that, it botched it so that this code didn't work right:

    sub greet {
        local ($who) = @_;
        print "hello, $who\n";
    }

    $who = "world";
    &greet($who);
This would print "hello, " followed by a newline, due to a design bug called "variable suicide", which has been fixed in newer Perls.

Needless to say, this made the extract-subroutine refactoring quite a bit more trouble in Perl 4.


It doesn't just have it for backwards compatibility, it's still a very useful feature in some cases:

    sub foo {
        local $ENV{HOME} = "/foo/bar";
        Code::I::Don't::Own();
    }
    foo();
Having dynamic scope here ensures that the foreign code will use the environment I want without me having to alter it.


Perl 6 also has dynamic scoping. But I don't think they kept it just for backward compatibility with Perl 4 & 5 :)

Here is another example where dynamic scoping can indeed be very useful:

    sub bar { 'bar' }
    sub baz { bar() }

    sub foo {
        no warnings 'redefine';
        local *bar = sub { 'My Bar' };
        baz();
    }

    say foo();   # => 'My Bar'
    say baz();   # => 'bar'


I can't say I'm surprised that three of the top 5 features on the post are Javascript :|


It is because the ECMAScript type system is a real WTF.

I always found the type system of PHP questionable, especially the useless and bug-provoking conflation of arrays and hash tables. ECMAScript, however, bakes that kind of mis-design into an even lower level of the language. Nevertheless, I still like working with ECMAScript because at least it has proper closures.


The strangest language 'feature' I've come across must be 'ticks' in PHP:

> A tick is an event that occurs for every N low-level tickable statements executed by the parser within the declare block. The value for N is specified using ticks=N within the declare block's directive section.

( http://www.php.net/manual/en/control-structures.declare.php )

PHP is a hodgepodge of inconsistent APIs and general weirdness already, but I really did sit back and have a good WTF moment when I discovered this 'feature'. I have yet to encounter many people who have ever heard of ticks, let alone anyone who has actually implemented functionality that depended on them. I've never seen anything like that in Python or Ruby.


It's really not that strange - it's probably intended for debugging & profiling. It sounds like Lua's debugging hook (http://www.lua.org/manual/5.1/manual.html#lua_sethook), which can run a callback every N vm instructions.

I made a simple profiler by setting the hook to run every X ticks (typically 10,000), recording the call stack, and then seeing where most of the execution happens. If every other time you look at the vm, it's in the same place, then it's clearly a hotspot.

I don't know of an equivalent in Python or Ruby, off the top of my head, but I'd be surprised if they weren't available.


My favorite examples come from Perl. The reset function. The dump function. And the fact that pos($x) = pos($x) is NOT a no-op. (It clears an internal flag allowing a zero-length regular expression to match again. Yes, I've used this.)

Another fun one is that in Perl, local can operate on data structures. As in local $foo{bar}. (I use this every so often in recursion sanity checks. But it is still an odd feature.)


Awesome comment on trigraphs in C:

These let you use the "WTF operator":

  (foo() != ERROR)??!??! cerr << "Error occurred" << endl;


[Language pedant: although it's legal C, the use of cerr and endl and << is more likely to occur in C++!]


<< is C++ only.


<< is the left-shift operator in C, so the OP's code is syntactically valid C.

It ought not to compile though, since it's trying to shift cerr by a string!


Thanks. Sorry. I'm stupid...


"Goddamn operator overloading" -- Rev Wright


Strangest language feature...


It amazes me how often this Python feature gets overlooked:

    >>> x=5
    >>> 1<x<10
    True
    >>> 1<x<3
    False
EDIT: fixed spaces, thanks epochwolf for the indent tip.


If anyone is interested, here's how you could implement it in Ruby: http://judofyr.net/posts/chained-comparisons.html


    >>> x=5 
    >>> 1<x<10 
    True 
    >>> 1<x<3 
    False
(Indent 4 spaces to show code)


that's good-strange, though, not bad-strange.

i think in the OA they were mostly talking about bad-strange.


How is call/cc not on there...don't tell me your mind wasn't blown when you learned about that one.


I'd have to say INTERCAL's COMEFROM: http://en.wikipedia.org/wiki/COME_FROM


Perl will convert from base 26 to base 10 if you know how to ask:

  $ perl -E 'say lc (@a="a".."asdf")'
  30530


Base 26 or 36? Is 'a' 0 or 10? Does 's' overflow to the next column, like 09 in octal?


It's interesting how many of these that aren't outright misfeatures are recognisable as half-implemented Common Lisp features if you squint at them sideways: Python's comparison chains, bc's quit/halt difference, VB7's Loop, JS's multiple-value return, SFINAE in C++... I'm sure there are others.


I mostly agree but in the case of comparison chains it is lisp that has the half-implemented version: of course a<b<c is (< a b c), but a<=b<c, which is actually more common in my code, becomes (and (<= a b) (< b c)).

Mathematica, which has much in common with lisp, does something analogous to (inequalities a '<= b '< c), using symbols for the comparisons.


Icon conditions were pretty interesting.


I am happy to not see Ruby represented here.


Just off the top of my head:

1. The flip-flop

    >> s = true
    >> (1..10).reject { true if (s = !s) .. (s) }
    => [1, 3, 5, 7, 9]
2. A snake in my Ruby?!?

    # Works in both Python and Ruby
    print """Hello World"""
3. END vs at_exit

    END {
      puts "Ruby has END ..."
    }

    at_exit do
      puts "... and at_exit"
    end

    puts "Do you know the difference?"
4. BEGIN/END wtf?

    # Have you ever seen these in production code?
    BEGIN {}
    END {}
5. The wonders of a single space

    # Returns the position where the regexp matches "foobar",
    # or nil if no match.
    def get(regexp = /bar/)
      regexp =~ "foobar"
    end

    get /1#/   # => nil
    get / 1#/  # => 3    <- WTF?
6. Local variable tables; don't we all love them?

    def get(regexp = /bar/)
      regexp =~ "foobar"
    end

    get /1#/   # => nil
    get = 5
    get /1#/   # => 5
    get / 1#/  # => 5


Just guessing that your #4, BEGIN and END, are for using Ruby like awk (that is, from the shell—you should never see them in a script.) Try this (and cringe):

    cat /usr/share/dict/words | ruby -lne 'puts $_.reverse; n = (n || 0) + 1; END{ puts "total: #{ n }" }'


For me it was strange that Ruby used "end" as the block ending delimiter in a world and era where so many other languages were using "}" quite effectively, thank you very much. And Python was showing you didn't even need that much. It felt like a throwback to an earlier era of programming, like COBOL or Pascal.


There's a lot of stranger stuff hiding in Ruby.

Flip-flops. Do I need to elaborate?


Ruby got its flip-flop operator from Perl, which in turn got it from sed and awk. It can be really useful for keeping simple state, e.g.:

    # parse mail messages
    while (<$email>) {
        $in_header =   1  .. /^$/;
        $in_body   = /^$/ .. eof;
        if ($in_header) {
            # do something
        } else { # in body
            # do something else
        }
    }

And other examples at http://perldoc.perl.org/perlop.html#Range-Operators


You can use {} instead of do/end if you'd like.


Great! I wonder why most people don't use it, then? I keep seeing "end" in almost every chunk of Ruby code I see on the web.


The general convention is that {} is for one line blocks, and do/end for multiline.

    [1,2,3].each{|n| puts n }
versus

    [1,2,3].each do |n|
      m = n + 1
      puts m
    end
The reason? do/end is much prettier than {}, but {} is really nice for chaining...

    [1,3,2].collect{|n| n + 2 }.sort


I find it easier to type "end" than "}".


but it's 3 characters versus 1. is it easier because you don't have to hit SHIFT?



