This article is mostly whining that evidence-free speculation about how to write good software is no longer publishable in top conferences. And the major evidence cited is that there's a specific citation style required, a standard feature of every kind of publishing since forever. I promise (having reviewed many times for the specific conference under discussion) that no one's paper is rejected (or even denigrated) for failing to use the appropriate citation style; people comment on it the same way they would comment on any other style issue.
I think that's a pretty uncharitable take; I thought there were several interesting questions raised by the author:
1. Should conference "service" be something we expect of postdocs (and even PhD candidates) rather than established experts?
> Often, as a result, the PC is staffed by junior, ambitious academics intent on filling their résumés. Note that it does not matter for these résumés whether the person did a good or bad job as a referee! [...] I very much doubt that the submissions of Einstein, Curie, Planck, and such to the Solvay conferences were assessed by postdocs. Top conferences should be the responsibility of the established leaders in the field.
2. Should programme chairs strive to maintain exclusivity of their conference track, or look for important ideas that deserve to be communicated?
> As a simple example, consider a paper that introduces a new concept, but does not completely work out its implications and has a number of imperfections. In the careerist view, it is normal to reject it as not ready for full endorsement. In the scientific view, the question for the program committee (PC) becomes: is the idea important enough to warrant publication even if it still has rough edges? The answer may well be yes. [...] Since top conferences boast of their high rejection rates, typically 80% to 90%, referees must look for reasons to reject the papers in their pile rather than arguments for accepting them.
3. Is computer science suffering from a focus on orthopraxy rather than the scientific method?
> What threatens to make conferences irrelevant is a specific case of the general phenomenon of bureaucratization of science. Some of the bureaucratization process is inevitable: research no longer involves a few thousand elite members in a dozen countries (as it did before the mid-1900s), but is a global academic and industry business drawing in enormous amounts of money and millions of players for whom a publication is not just an opportunity to share their latest results, but a career step.
I think it's not holding up that well outside of predictions about AI research itself. In particular, he makes a lot of predictions about AI's impact on persuasion, propaganda, the information environment, etc., that have not happened.
Could you give some specific examples of things you feel definitely did not come to pass? Because I see a lot of people here talking about how the article missed the mark on propaganda; meanwhile I can tab over to twitter and see a substantial portion of the comment section of every high-engagement tweet being accused of being Russia-run LLM propaganda bots.
Agree. The base claims about LLMs getting bigger, more popular, and capturing people's imagination are right. Those claims are as easy as it gets, though.
Look into the specific claims and it's not as amazing. Like the claim that models will require an entire year to train, when in reality it's on the order of weeks.
The societal claims also fall apart quickly:
> Censorship is widespread and increasing, as it has for the last decade or two. Big neural nets read posts and view memes, scanning for toxicity and hate speech and a few other things. (More things keep getting added to the list.) Someone had the bright idea of making the newsfeed recommendation algorithm gently ‘nudge’ people towards spewing less hate speech; now a component of its reward function is minimizing the probability that the user will say something worthy of censorship in the next 48 hours.
This is a common trend among rationalist and "X-risk" writers: write a big article with mostly safe claims (LLMs will get bigger and perform better!) and a lot of hedging, and people will always see the article as primarily correct. When you set aside the easy claims and look at the specifics, it's not as impressive.
This article also shows some major signs that the author is deeply embedded in specific online bubbles, like this:
> Most of America gets their news from Twitter, Reddit, etc.
Sites like Reddit and Twitter feel like the entire universe when you're embedded in them, but when you step back and look at the numbers, only a fraction of the US population are active users.
This doesn’t seem like a great way to reason about the predictions.
For something like this, saying “There is no evidence showing it” is a good enough refutation.
Countering with “Well, there could be a lot of this going on, but it is in secret” could be a justification for any kooky theory out there: Bigfoot, UFOs, ghosts. Maybe AI has already replaced all of us and we’re Cylons. Something we couldn’t know.
The predictions are specific enough that they are falsifiable, so they should stand or fall based on the clear material evidence supporting or contradicting them.
Notably, the definition given in the Wikipedia entry referencing TRAC means that "homoiconic" is a property of an _implementation_, not of a language. This would mean that Lisp, a programming language, could not properly be described as homoiconic, since it admits multiple implementations, including some that do not have this property (e.g., SBCL rather clearly doesn't).
History is of course valuable to learn, but as a criticism of the work this is almost precisely the "turn to the camera and say that he's the same kind of communist I am" tweet made flesh.
> Black Panther was a fine movie but its politics were a bit iffy. wouldve been way better if at the end the Black Panther turned to the camera & said "i am communist now" & then specified hes the exact kind of communist i am
I really appreciate that you were able to recognize the tweet and give some context. I'm a bit slow, especially on Sunday mornings :) -- I still don't understand what OP means.
Do you have any ideas? Maybe it's commie talk to say these aren't related to WWII?
Or maybe they find the article political?
Seems pretty straightforward to me: a guy from the country says people from other countries turned something complex into something simple for clickbait, and documents it.
We considered that syntax, but `from` was not a reserved keyword already, whereas `import` was, so the parsing situation with the actual syntax was much better.
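To make the parsing point concrete, here is a minimal sketch. It assumes this thread is about JavaScript/TypeScript module syntax (which matches the keyword situation described: `import` was already reserved, `from` was not), and the Python-style alternative shown is purely hypothetical:

```typescript
// Assumption: the "considered" syntax was a Python-style `from ... import ...`.
// Because `from` is an ordinary identifier, a statement starting with `from`
// could be either that import form or a plain expression:
const from = (path: string): number => path.length;
console.log(from("./utils"));     // legal today: just a function call

// from "./utils" import { x };   // hypothetical syntax: the parser can't tell
//                                // which case it's in from the first token

// `import`, on the other hand, was already a reserved word, so it can never
// be an identifier, and a statement starting with `import` is unambiguous:
//   import { x } from "./utils";
// const import = 1;              // SyntaxError: `import` is reserved
```

The general point is that when a statement's first token is a reserved word, the parser can classify it immediately, without extra lookahead or backtracking.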
It is funny to see a person who clearly takes all their knowledge from books tell another person that their actual experience couldn't have happened, because that is what some book says. I'm not the person you replied to, but just from reading their comment I immediately knew they were talking from experience, because I partly saw, and partly know from the stories of my parents and grandparents, exactly the same kind of lifestyle. Should I now say that your source is "entirely false"?
Nothing about it is false. In fact it's lived experience.
Sorry, we cooked with wood most of the time, kept animals, and even washed clothes and such by hand (no washing machines until much later). It really wasn't that big of a deal. And we weren't even that rural (next to a decent-sized city).
I'm sure they had it harder in Texas or Utah or Alaska. Not a universal thing.
Very few people I know doing real work in Racket have a SLIME-like workflow. In general, Racket discouraging this style is related to us being a bunch of professors, but not really in the way you say. Instead, it's because it's not possible to give a sensible semantics to the top-level, especially in the presence of macros. We care a lot about macros, and we also care a lot about sensible semantics, and thus the attitude towards the REPL. The slogan in the Racket community that "the top level is hopeless" expresses this sentiment.