
Yet another fine example of why ESR seriously needs to stop pretending to be the arbiter of "hacker culture" as if he speaks for all of us.


I'm very happy to see Rust stabilize, about time we get a systems(ish) programming language with a half decent type system. With that said... I need to get some bikeshedding off my chest:

I hate to let such a triviality lower my enthusiasm for a language so much, but I just can't get over that awful, inconsistent closure syntax :/

I don't get it. Almost everything else has a nice, unique keyword syntax: fn uses (args, in, parentheses), the proc syntax made consistent sense, but then lambdas are this crazy || line-noise thing that doesn't fit in at all. The "borrow the good ideas from other languages" approach has resulted in a great language, but "cram in random syntax from other languages that doesn't fit" doesn't work out so well.


The natural thing to want in a C-like syntax is the "arrow function" closure syntax (like ES6 or C#), but that required too much lookahead to parse. Having a keyword discourages functional style, which would be a shame in a language with a powerful iterator library. So Rust went with the Ruby/Smalltalk-style bars, which are nice, concise, and easy to parse.
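
For instance, in a typical iterator chain (a throwaway example of my own) the bars stay pretty unobtrusive:

    let doubled: Vec<i32> = (1..4).map(|x| x * 2).collect(); // [2, 4, 6]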


It seems like the human parser should be given priority over the computer parser, when considering what is easy and what is hard. The machines work for us!


As far as I know, Rust now has an LL(1) grammar, which means that parsers for it can be written by hand (or with the more powerful LALR(1) and LR(1) parser generators). This is very important for humans too, because it means more people are likely to write tools to process Rust code. If you hope to have automatic indentation, auto-completion, refactoring, formatting tools, etc., keeping the syntax simple is really important.


Can't they just expose the parser as a library? Actually, it looks like they did, with the rustc crate.

Hand-writing a parser for some other language leads to madness - just ask the folks who've done SWIG, GDB, or most IDE syntax-checkers. You'll inevitably get some corner-cases wrong, or the language definition will change underneath you long after you've ceased to maintain the tool. Instead, the language should just expose its compiler front-end as a library, and then you can either serialize the AST to some common format for analysis outside the language or build your tools directly on top of that library.


You missed the whole point.

By making the language simple you can easily implement your own parser. This opens up the ability to write native parsers in other languages, say vimscript. By keeping it super simple there -are- no corner-cases.

There are many benefits to this (like the formatters etc. that others have alluded to): IDE integration (imagine lifetime-elision visualisation, invalid-move notifications, etc.), static analysis tools, and more. None of these tools then need to be written in Rust. It also means it's easier to implement support in pre-existing multi-language tools.

Don't underestimate the necessity of a simple, parseable grammar. Besides, people have endured much worse slights in syntax (see, e.g., Erlang).


The vim formatters/syntax checkers I've used that actually try to parse the language - other than Lisp, which is the limiting case - are generally terrible. They all miss some corner case that makes them useless for daily work, since they generate too many false-positives on real code.

The ones I actually use all call out to the actual compiler - Python, Go, or Clang for C++.

Just because people write their own parsers doesn't make it a good idea. It may've been necessary when most compilers were proprietary and people didn't know how to make a good API for a parser. But now - just don't do it. You'll save both yourself and your users a lot of pain.


We are not quite ready to support users of libsyntax and librustc, as we are very serious about keeping our stable API stable, and freezing those APIs would really impact the future development of the compiler, so they will not be exposed in Rust 1.0. We want to eventually get something for this purpose, though.


That's what D does; it has infinite look-ahead in some cases, and there are still multiple parsers available as libraries.


This human parser happens to prefer Rust's closure syntax to any other language's. :) Well, for usage, anyway... the written-out type of a closure for use in function signatures is not nearly as concise.


How much worse is it? Can parsing actually be so expensive that it dictates the syntax of the language?


Parsing a context-free grammar in general is O(n^3), but parsing an LL(1) grammar is O(n). That's a pretty huge difference if you're not careful; imagine you've got a million lines of code to parse.


As a Rubyist, I found the closure syntax quite comfortable, as they're almost identical. Passing a closure or lambda to some function foo:

        x = foo { |x| x + 1 }       # Ruby
    let x = foo(|x| { x + 1 });     // Rust
    let x = foo(|x| x + 1);         // single expressions don't need {}s
That said, I'm not sure what the exact reason was for choosing the syntax, as that was before my time.


> That said, I'm not sure what the exact reason was for choosing the syntax, as that was before my time.

I think the reason is just that it's very concise, and lightweight closure syntax makes things like `Option::map` feel like first-class parts of the language. The closure you pass just sort of seamlessly "blends in".
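
For example (a trivial snippet of my own, not from the post):

    let name: Option<&str> = Some("rust");
    let len = name.map(|s| s.len()); // Some(4)

The |s| barely registers when reading the chain.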

Note that having especially sugary closure syntax here is not so uncommon; for example, Haskell has `\x -> blah` for Rust's `|x| blah`.

I'm personally very happy that the closure syntax is as concise as it is.


I am not a Rust user and maybe I am reading the post wrong but it seems that syntax has been deprecated:

> Closures: Rust now supports full capture-clause inference and has deprecated the temporary |:| notation, making closures much more ergonomic to use.


That's talking about something different.

For a brief period, closures had to be annotated in certain cases like |&: args| or |&mut: args| or |: args| to determine whether they captured their environment by (mutable) reference or by value/move.

Now that this is inferred, closure arguments can just be written as |args| in all cases, just as they were before the current Fn* traits were introduced.


> For a brief period, closures had to be annotated in certain cases like |&: args| or |&mut: args| or |: args| to determine whether they captured their environment by (mutable) reference or by value/move.

That particular annotation controlled the access a closure has to its environment, not how it's captured. |&:|, |&mut: |, and |:| corresponded to the Fn, FnMut, and FnOnce traits, respectively. If you look at the signatures of those traits, you'll see that Fn's method takes self by reference, FnMut takes it by mutable reference, and FnOnce takes it by value. In particular, this means that the body of an FnOnce closure can move values out from the closure (that's why it can only be called once), whereas Fn and FnMut can only access values in the closure by reference and mutable reference, respectively.
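
Roughly (eliding the unstable "rust-call" ABI details), the three traits look like this:

    pub trait FnOnce<Args> {
        type Output;
        fn call_once(self, args: Args) -> Self::Output;     // self by value
    }

    pub trait FnMut<Args>: FnOnce<Args> {
        fn call_mut(&mut self, args: Args) -> Self::Output; // &mut self
    }

    pub trait Fn<Args>: FnMut<Args> {
        fn call(&self, args: Args) -> Self::Output;         // &self
    }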

The way variables are captured from the environment into the closure is controlled by the "move" keyword. If the "move" keyword precedes a closure expression, then variables from the environment are moved into the closure, which takes ownership of them. "move" is usually associated with FnOnce closures, but it's also needed when returning a boxed Fn or FnMut closure from a function, as you can see below:

    fn make_appender(x: String) -> Box<Fn(&str) -> String + 'static> {
        Box::new(move |y| {
            // The closure only needs & access to its captured variable,
            // but `x` has been moved into the closure (thanks to the
            // `move` keyword), so it outlives the body of make_appender.
            x.clone() + y
        })
    }

    fn main() {
        let x = "foo".to_string();
        let appender = make_appender(x);
        println!("{} {}", appender("bar"), appender("baz"));
    }


> The way variables are captured from the environment into the closure is controlled by the "move" keyword. If the "move" keyword precedes a closure expression, then variables from the environment are moved into the closure, which takes ownership of them.

Note: if you don't specify `move`, then the captures are determined in the usual way:

`|| v.len()` captures `v` via an immutable borrow

`|| v.push(0)` captures `v` via a mutable borrow

`|| v.into_iter()` captures `v` by moving it
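
Putting those together in one contrived snippet of my own, just to show the inference:

    let v1 = vec![1, 2, 3];
    let len = || v1.len();           // Fn: immutable borrow of v1
    println!("{}", len());

    let mut v2 = vec![1, 2, 3];
    let mut push = || v2.push(0);    // FnMut: mutable borrow of v2
    push();

    let v3 = vec![1, 2, 3];
    let consume = || v3.into_iter(); // FnOnce: v3 is moved into the closure
    println!("{}", consume().count());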


Thanks for the correction.


Fortunately with the annotations gone, I think there will be a lot less confusion! The "move" keyword on its own is pretty straightforward, at least once you've learned Rust's ownership model.


I wish they also had the option of a pure lambda that isn't a closure. You can't pass a pure lambda to an argument expecting an ordinary function.


What was deprecated is the explicit closure type specification, which is only the ":" part.


Right. IIRC, now your choices are, roughly:

    |args| expr // upvars captured by reference, can't be called after function has gone out of scope

    move |args| expr // upvars moved from function to the closure context (or copied if trivially copyable)
This is simple and good enough for most use cases. If you want more complex schemes, you have to implement them manually; e.g. to reference-count the upvars (as Apple blocks do by default), wrap them in Rc and capture that.
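
Something like this, off the top of my head:

    use std::rc::Rc;

    let data = Rc::new(vec![1, 2, 3]);
    let handle = data.clone();                // bump the refcount
    let closure = move || handle.len();       // closure owns its own Rc handle
    println!("{} {}", closure(), data.len()); // the original handle is still usable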

Accepting closures is a bit more complicated though.


That's not quite right. In the non-move case, the capture is inferred per upvar. See my other comment for details (https://news.ycombinator.com/item?id=9047766)


I'll take "fighting with syntax" over fighting with a GUI any day.


I will never understand the tendency, especially in the web community, to consider complicated, ambiguous, inconsistent, and difficult to parse syntaxes as "friendly".

It might seem like a good idea at first, but over time it just results in a culture that publishes horrifically broken markup, and increasingly baroque implementations that try to work around every possible screw-up instead of simply flagging errors. This is, in my opinion, the biggest screw-up of the web: if early browsers had simply flagged errors and clearly pointed out where and what they were, the web would be a much nicer place. Instead, actually writing a parser for web markup is a nearly impossible task, and not the weekend hack it should be.

One of the, if not the, great thing(s) about JSON is its beautifully simple syntax. 5 images on json.org precisely explain what it is, and that's that. You can write a parser for it in basically any language without any fancy libraries or frameworks in a few hours. That is a good thing, regardless of whether you personally would ever have to do such a thing, because it keeps the barrier of entry low and trickles through the ecosystem in positive ways.
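
To illustrate: the whole JSON data model fits in a handful of lines of, say, Rust (my sketch, roughly):

    enum Json {
        Null,
        Bool(bool),
        Number(f64),
        String(String),
        Array(Vec<Json>),
        Object(Vec<(String, Json)>),
    }

and a recursive-descent parser over that is exactly the weekend hack it should be.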

Missing the point of JSON? Yes, but missing much more, including lessons that should have been learned.

That said, a few of the ideas are good ones (comments and such), but the bad ones (making crucial delimiters optional) are so bad they more than outweigh the benefits.


This guy is so biased he can't even see straight.


Not quite as easily solvable with software as you might think. Most such scheduling problems are NP-hard.


I'm convinced that part of it is easy. To do it optimally is NP-hard, but one doesn't need to find a perfect solution.

The problem is that solutions are always a little bit ill-defined and won't be 100% captured by any generic software model, and that they need to be amenable to further semi-manual tweaking.


I have been a middle school/high school teacher for 15 years, and I have watched countless hours spent on inefficient approaches to scheduling. I have also recognized that it is not a trivial problem to solve. I started writing a program to help with the problem years ago, but didn't make the time to see it through, as I wasn't the one doing scheduling.

I think an overall approach would be to present a number of schedules that could work, and expect the user to do some manual adjustments. Perhaps present a schedule that could work, user checks off which classes to keep, and program shuffles other classes. Repeat until done. I think that's basically the approach people use when they do it manually.
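
In Rust-ish pseudocode (all names made up, and the "generator" is just a stub), the loop I have in mind is roughly:

    // Rough sketch of the "lock and reshuffle" idea, not a real scheduler.
    type Class = &'static str;
    type Period = u8;

    // Stand-in for the real generator: assign every unlocked class to period 0.
    fn propose(locked: &[(Class, Period)], all: &[Class]) -> Vec<(Class, Period)> {
        let mut schedule = locked.to_vec();
        for &c in all {
            if !locked.iter().any(|&(lc, _)| lc == c) {
                schedule.push((c, 0)); // real version: heuristic/random placement
            }
        }
        schedule
    }

    fn main() {
        let all = ["Algebra", "Biology", "History"];
        // round 1: nothing locked yet
        let round1 = propose(&[], &all);
        // the user "checks off" Algebra at period 0; everything else reshuffles
        let round2 = propose(&[("Algebra", 0)], &all);
        println!("{:?}\n{:?}", round1, round2);
    }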


Engineers generally already have such things.

I don't think "programmer" is a well-defined enough concept to be associated with an oath. You can't go around calling yourself a Medical Doctor or an Engineer if you aren't, but pretty much anybody who codes at all can reasonably call themselves a programmer.

Even with all that aside, such oaths only really work for things like causing blatant harm. Unqualified "freedom" and "liberty" (especially in the USA) are so nebulous they mean almost nothing at all. The overwhelming majority of programmers work for organisations whose goals (e.g. profit, power) are not - and are often directly opposed to - "freedom" by any reasonable definition.

In short, I don't think the answer to your question is "no, there shouldn't" so much as "no, there couldn't".


The Littler Guide to HTML Email:

Don't.


> I could either have another expensive piece of paper, or add more code to my github account for future employment...

You could always do both. Academics who can actually write good code are a pretty rare commodity.


I'm a developer at a state university: Free Tuition.

I feel guilty about not up and doing it already, but I'm only 1 year out of college, and don't want to go back to that grind just yet.

I really have no excuse aside from "I don't wanna" right now, and that's a pretty shit reason.


Someone needs to write a "why getting a job in industry is a waste of time" rebuttal to this tired argument to illustrate how ridiculous it is to evaluate everything with a single (and shallow) goal structure. After all, it typically doesn't contribute anything new or meaningful to scientific knowledge, therefore it is a waste of time (...).

I guess I shouldn't expect any better from The Economist, but not everyone in the world just does whatever they can to make the most money possible, ignoring all other considerations.

Assuming I'm making enough money to survive comfortably (which is essentially guaranteed with computer anything, including grad school itself), I don't care. What I get to work on, and what environment I get to work on it in, is far, far, more important. Overridingly important.

This mentality (though usually not so extreme) is pretty common among math and computing people, which is probably why industry is finally starting to notice that workplace perks are very important. There are plenty of extremely talented people who'd gladly take a 50% pay cut to work in a less shitty environment...


Very true.

Most programmers in NYC could get a raise going to Wall Street. They don't do it because they don't like the abuse.

Most Physics professors could get a raise going to Wall Street. They don't do it for a variety of reasons.

It's about more than money.

But... the core of the story is true. There's a systemic problem with the PhD market. Every school and professor has an incentive to churn out more PhDs than the market needs. Like any career choice, people need to go into this with open eyes.


There are programmer jobs in finance that don't entail abuse.


Finance, yes. Finance jobs close enough to the trading to warrant good money - less so.


> Most programmers in NYC could get a raise going to Wall Street. They don't do it because they don't like the abuse.

Wall Street is, on the whole, less abusive these days than VC-istan. However, it is more selective. Also, the chances for rapid career progress aren't there. You can make as much money, but you certainly won't be leading a team at 27 in a hedge fund; whereas if you are 27 and not at least a "tech lead" in VC-istan, you've lost.

Don't rule out finance based on its reputation, because it's not as bad as it's made out to be, especially relative to the other high-paying options. I've seen both and engineers are treated worse, on average, in the VC-funded world than in the hedge funds.


I am planning on doing a PhD because of financial benefits in the long run. Here's why:

- Immediate respect. It is harder to question the authority of a (technical) PhD. This is much more pronounced in third-world economies. Also Germany. Basically anywhere besides the super-meritocratic US. This is a huge benefit: my opinions are more likely to be listened to, my consulting fees are likely to be higher and more credible, and career breaks (maternity) will not set me back by much, because a PhD proves technical mastery of an area as well as tenacity, responsibility, and independence - all of which non-PhDs will continuously have to prove to new coworkers and new management.

- Universities are forever open to you, but not so with just a Masters. That gives immense job security as a private university lecturer for the rest of your life, in any country (flexibility to travel and find a job even in countries whose language you don't speak). Private universities in rich countries pay a lot to be able to say they have a, say, MIT PhD on their faculty. I knew a person who had a data-entry job and saw the faculty salaries in Saudi Arabia. The profs were making $396k/year, with housing and living costs fully covered by the university.

- Flexibility to find jobs forever (especially because of the university thing). Normally I imagine it would be harder to be a salaried employee after 65 years of age. Nobody will hire you because it is so weird to have an employee your dad's age. Ageism is real. But because you can always lecture, you are pretty much unretirable.

The 4-5 year salary cut is nothing compared to a lifetime of these benefits and zero anxiety about job prospects after 50. I personally want to work and earn money in a respectable job well into my 80s.


You have a very hard awakening in store. Of course I can only speak from my direct experience, in Mexico, but here it goes:

1. Business people tend to mistrust highly educated people without at least equally impressive industry credentials. The meme of the crazy scientist with his head in the clouds and no concern for practical matters runs deep and wide. Once you have both the schooling and provable hands-on experience it starts to pay off, but getting there is not trivial. Also, to make it pay off you must go into consulting, since no employer will think they can afford you full time past some point. And to make it as a consultant you need to pick a specialty that provides hard, measurable value.

2. Scratch that immense job security at a private university. As a matter of fact, they tend to hire a lot of adjunct professors and post-doc lecturers precisely to avoid granting the sinecure of a full professorship. Public universities and research centers are still OK, but you will have a hard time at any education center whose bottom line depends on undergrad tuition.

3. Don't count on the flexibility thing either. Universities may be happy to hire part-time lecturers with lots of industry experience, but not the other way around. I had a very hard time crawling out of this particular hole, and have known others who were never able to make it back after a "short stint as a teacher".

4. Job prospects after 50 might be right, but you have to know how to play your cards really well. It is not a given, and in any case you are probably better off knowing your way around industry than relying on academia.


What you are talking about when you talk about those large salaries is something of a pipe dream for most people pursuing degrees. I too was lured into more education with high prospects but the truth is that it's really difficult to get in on the awesomeness. If you want to make a statement about long-term benefits you better talk about the average situation. Some PhD's do provide quite a bit more money on average, but a lot of PhD's (math, for example) provide only a marginal benefit over a Master's degree. I don't have the data in front of me but when I checked the situation for math, the difference in the mean was about 2-3 thousand dollars per year. If you take into account the financial opportunity costs of doing a PhD (things like low wages, tuition, loans), it will take you about 40-50 years AFTER finishing your degree to be on par, on average.


Secure tenured positions are very competitive globally. There are only sufficient such jobs for a small percentage of PhD graduates.

Further, the university sector may be on the verge of massive disruption - look at startups such as Coursera and Udacity. It's difficult to be sure it'll provide reliable employment by the time you reach 65.

I've just finished a PhD program in an Irish university, which I got a lot out of; but I wouldn't do it for job security, or earning power.

But maybe you are talking about doing a PhD in a best-in-world institution; if so, maybe the picture is different.


Even if you have a PhD it isn't necessarily easy to find a job as a lecturer at a university. Often the main criterion for hiring a lecturer is their publication record, and if you leave academia for industry you probably won't be writing any papers. Without publications, getting a lecturer position probably won't happen. This is the case in most European universities that I know of. Also, the average time a student takes to complete a PhD, especially if you are planning to work while you do it, is well over 5 years.


Totally on point. I don't think it's useful to compare the value of a successful PhD experience to some amount of money. This is especially true for the people who would be considering a PhD in the first place.

I agree that people value the environment they work in, but I think that to the people you outline, working in a culture that values original ideas and open discourse is more important than having workplace perks like free sodas.


Did you really just shit on the Economist on the whole?

