Collusion rings threaten the integrity of computer science research (acm.org)
707 points by djoldman on May 26, 2021 | 395 comments


This is a bit surprising, but hardly the biggest problem. I’ve posted about this before, but academia has to get away from the infantilism of using KPIs to judge creative intellectual work.

Part of the problem is that people seem to want an objective evaluation of a piece of research, and measure the value for taxpayer money. Well, you can’t get an objective evaluation. It’s all subjective. And “impact” as a quantifiable entity in general is nonsense, for one thing the timescales prohibit its measurement at the point where it may be useful.

The solution is to use management. Lots of people object here and say “but nepotism, favouritism” and yep, that’s a problem, but it is less of a problem than the decline of western universities. You can circumvent it somewhat by rotation, by involving external figures, by a hierarchy that ensures people are not colluding, but ultimately you just have to trust people and accept some wastage.

People aren’t in academia for the money. It’s a vocation. You’re not going to have many people milking the system. Things went pretty well before the metric culture invaded the academy. They can go well again.


> People aren’t in academia for the money. It’s a vocation. You’re not going to have many people milking the system.

Before introducing the KPIs, a majority of Polish science was basically people milking the system and doing barely any (valuable) research. It was seen as an easy, safe, and OK-paying job where the only major hassle is having to teach the students. You often needed connections to get in. It was partially like that because of the communist legacy, where playing ball with the communist party was the most important merit for promotion, which, over the course of 45 years (the span of communism in Poland), filled the academia management ranks with conformist mediocrities.

Now, after a series of major reforms, there's a ton of KPIs, and people are doing plenty of makework research to collect the required points, but still little valuable work gets done. Also, people interested in doing genuine science who would have joined under the old system are now discouraged from joining academia, because in the new system they're expected to game the points system and not to do real work.

What is the lesson from this? Creating institutionalized science is hard? It requires a long tradition and scientific cultural standards and can't be just wished into place by bureaucrats? Also, perhaps it's good to be doing the science for some purpose, which in the US case is often DoD grants, where the military expects some practical application. This application may be extremely distant, vague, and uncertain (they fund pure math research!), but still, they're the client and they expect results. Whereas the (unstated) goal of science in Poland seems to be just to increase the prestige of Polish science and its universities by getting papers into prestigious journals, whereas the actual science being done doesn't matter at all - basically state-level navel gazing.


This is a good cautionary lesson. Problems like collusion rings in computer science are more serious than they appear. If left unchecked for a decade or two, the cheaters will rise to the top and take the positions of power in the field, and then it's going to be almost impossible to fix the system.

Relying on altruistic tendencies of people in academia is not adequate. Everyone starts out in academia as schoolchildren, and gets filtered out or filters themselves out pursuing other things. Those who remain will be the ones who love to learn and teach, those who just cannot accept loss/failure, and, sadly, those who are afraid of change. The more competitive the field becomes, the harder it is to succeed, and the more we select for the hyper-competitive or fearful over the altruistic.


I looked at doing my PhD. I was really interested in startup failure and wanted to research that.

The system I was presented with would have meant my supervisors got more say in what I spent my time on than I did. They heavily skewed to supporting existing psychological models of entrepreneurial failure which I wasn't interested in.

The bureaucracy, authoritarianism and endless hoop-jumping around it was a total red flag for me. I opted out.

My conclusions:

PhD grads aren't smarter than others. They're just more willing to put up with bullshit, conform to meaningless rules, and jump through hoops.

Academic research is rarely about the things it says that it's about, and seems to be more about maintaining/improving the career prospects of the academics.

Academic research is heavily affected by political spats with other academics that have nothing to do with the actual subject, but more to do with ego, pride and interpersonal dislike.


My brother abandoned his PhD at CMU just before his defense (which would have gone fine). He was intensely passionate about his topic area, logical frameworks, which is very academic stuff indeed. Not really something you could pursue in industry. As he was wrapping up his PhD he also got his top choice for a postdoc position at Inria, but the amount of political maneuvering he had to do to get it made him so angry he abandoned his dream of staying in academia. He just couldn't stomach the idea of having to do that politicking over and over again all his life. So he worked at a hedge fund for a little more than a decade, got his magic number of millions, and is a full-time dad now.

Humanity as a whole is losing out on some potentially amazing long term research and researchers because of this dynamic.

I have no idea how to fix it systematically.


You're not wrong, but you're attributing the conclusions (e.g., grads aren't smarter) to the wrong reasons (willing to put up with bullshit).

PhD grads aren't smarter, they're playing a different game. There's bullshit everywhere, you just chose a different pile to call home -- probably because it smelled better to your reward/bullshit tradeoff.

Why would PhDs choose that pile of bullshit of yours? ...

As you say, "Academic research is rarely about the things it says it's about" is a good observation. The reason is that most of the time grants are awarded to professors to apply a method (their specialty) to a problem (the thing the grant is about). This looks like the professor is padding his career, but honestly, that's the point.

We would ideally have a huge set of professors with perfect specializations such that a combination of professors could solve any problem. Science funding is ensuring that deep, old expertise is preserved in case it is useful. Grants are a way to simultaneously test that usefulness against modern problems and expand it a little towards modern problems by producing new PhDs with slightly mutated expertise. This is why PhDs endure their own type of bullshit: they want to be part of this particular kind of knowledge legacy. There are other ways of doing this. A problem-first (vs. solution-first) approach is better suited to a different kind of venue, like business, NASA, etc.

Political spats? Absolutely. No contest there. Turf wars are a thing.


I was more addressing the "you've got to be really smart to complete a PhD" trope. From what I saw, the primary qualification is the ability to jump through pointless hoops because an authority says you should.

You're totally right that academia doesn't have a monopoly on this kind of bullshit. But few other places get away with it so much, and manage to maintain a (rapidly crumbling) reputation of not being 90% bullshit.


Yes, the "PhDs are smart" trope doesn't give enough air to smart non-PhDs. But I think, at least around here, equivalent experience is just as useful as a PhD, but PhDs have the benefit of being extremely vocal / public about their experience in the form of state-sponsored projects and publications.

I would suggest that your own rapidly crumbling opinion of academia is probably not widely shared. The top universities still produce top-tier R&D, and top-tier candidates that go on to do top-tier work for large and small companies.

It's definitely ok and understandable that some (or even many) people would feel differently, but my own opinion (coming from a mid-tier school) is that my training was absolutely of inestimable benefit to my job prospects, and was 100% enabling of my current career in robotics R&D. And I'm guessing that getting admitted to MIT would, without fail, make anyone in the entire world happy. (Though I didn't go to MIT obv)


I've long wondered if the answer for this and other issues like tax evasion wouldn't be to change the metrics often.

If you start cheating the metrics, or optimizing heavily for them, it becomes counter-productive when they change. As such, the most efficient way forward would be to work without trying to optimize for a temporary metric. On the flip side, it would be troublesome to convince people to adopt different, complex metrics each time.

What first gave me the idea was the concept of "greasing" headers (submitting some with random values) in future HTTP protocols, to combat "ossification", where middleboxes start to meddle with headers and, when they don't recognize new fields, break them instead of transmitting them.
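That greasing idea can be sketched in a few lines. This is purely illustrative - the header name and value are made up, and real greasing (e.g. RFC 8701 for TLS) reserves specific codepoints rather than arbitrary names:

```python
import random

def grease_header():
    """Generate a nonsense header a client could inject so that
    middleboxes learn to pass unknown fields through unchanged,
    instead of breaking on them. Name and value are hypothetical."""
    name = f"x-grease-{random.randrange(0x10000):04x}"
    value = "ignore-me"
    return name, value
```

A client would sprinkle one of these into each request; any middlebox that chokes on an unrecognized field gets flushed out early, before a real new field depends on passing through.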


I always thought having a pool of 20 metrics and randomly picking 4 every quarter/year for bonuses/promotions/whatever might get people to stop optimizing for single thing.
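A minimal sketch of that rotation idea (all names hypothetical - a pool of 20 placeholder metrics, with the draw seeded per quarter so it stays reproducible and auditable):

```python
import random

# Hypothetical pool of 20 metric names; real metrics would come from the org.
POOL = [f"metric_{i:02d}" for i in range(20)]

def pick_quarterly_metrics(quarter_seed):
    """Draw 4 distinct metrics at random for the period. Seeding by
    quarter means the same draw can be re-derived later for audit."""
    rng = random.Random(quarter_seed)
    return rng.sample(POOL, 4)
```

Since nobody knows next quarter's draw in advance, the only robust strategy is to do the job well across the board rather than game any single number.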


If you have 20 independent metrics, this could work. Typically though, the metrics aren't independent as they all try to measure "doing your job well".

In that case, if you gamed the "wrong" metrics, you still come out ahead. Not as much as for gaming the right metrics, but still.


This is fine until someone sues because they had similar or better metrics than someone else but the other person got in in one year and they were refused. A lot of the reliance on metrics is CYA by administrators.


A more deterministic alternative: a weighted sum over the pool of metrics, where the weights are gradually adjusted over time.
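That weighted-sum variant might look like this (illustrative only; metric names and deltas are made up, and metric values are assumed pre-normalized to [0, 1]):

```python
def score(metrics, weights):
    """Weighted sum of normalized metric values."""
    return sum(weights[name] * metrics[name] for name in weights)

def drift_weights(weights, deltas):
    """Nudge the weights a little each cycle, then renormalize so they
    still sum to 1 - gradual adjustment rather than a wholesale swap."""
    nudged = {k: max(0.0, w + deltas.get(k, 0.0)) for k, w in weights.items()}
    total = sum(nudged.values())
    return {k: w / total for k, w in nudged.items()}
```

Because the weights drift slowly, overfitting to today's mix pays off less each cycle, but people aren't blindsided by an abrupt change in what counts.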


The Joint Commission (healthcare field) seems to do this. There is at least some churn in metrics for each triennial inspection of healthcare facilities and operations. Aside from reducing gaming, it's a good way to make sure BI teams always have work to do.


A slightly different take from me would be to have multiple metrics that counterbalance each other, making the tradeoffs explicit instead of implicit. It's fine to temporarily optimise towards any one of them, but at least we know which trade-off was made.


I recognize the symptoms you describe, but I find the picture you painted a bit too pessimistic, given that in the field that I'm familiar with (theoretical CS), groups in Poland have been among the strongest and most visible in Europe in the past 10 years.


Because it's fresh blood, and there is a skill-set filter which reduces nepotism and other forms of corruption.

Looking at other departments, say, sociology* it's a dumpster. Full of cronies, retired politicians and relatives.

*Just an example which doesn't require "hard" skills like math.


The old system had too much Slack. The new system has too much Moloch. https://slatestarcodex.com/2020/05/12/studies-on-slack/


Wow, I couldn't disagree more. I can assure you that if you mean by "management" subjective evaluation by local administrative staff or "higher ups" like full professors, then it's the worst system you could possibly have. It invariably leads to corruption, favouritism, and brown nosing. Funding authorities have been fighting these rotten structures for the past 20 years where I work (Southern Europe), but they cannot do too much because the university system is not under their control, only the funding for national research projects.

There is only one reasonable way to evaluate research and researchers, that's to evaluate the content of their work and publications by external evaluation panels and tell these panels explicitly that they should not base their assessments solely on indicator counting, but on the overall merits, originality, and prospects of the research according to their subjective opinion. Metrics shouldn't even be used as a tie-breaker, they should only ever be used as weak indicators, and this must be explicitly mentioned in the guidelines.

In addition, you need a few other guidelines and laws. For example, it must be prohibited for someone to become a postdoc at the same place where they obtained their Ph.D. We have people who study at university X, do their exams at university X, do their Ph.D. at university X under the same professors they always knew, then become postdocs working for their former supervisors and being exploited by them (teaching the same bad seminar their former supervisor has taught for the past 20 years), and then get tenure in a rigged call. And the worst thing about it is that they feel entitled to all of this.

You've got to break this vicious cycle, but with your suggestion of using a methodology that worked in the 1950s (with an order of magnitude fewer candidates) this could never be achieved.


I couldn't agree more with the former and less with the latter.

I think external evaluation panels (and in particular, from a third country) are the way to go. We already have good examples, for example, in ERC grant panels. The ERC uses exactly the strategy you mention and it has an impeccable reputation, I know many people who applied with or without success but I know no one who felt treated unfairly.

But I'm against blanket rules prohibiting postdocs or positions at the same place as the PhD, at least from my Southern European (Spanish) point of view. This is often touted in Southern European countries on the logic that since no one in the US does it, it must be evil, so clearly we should ban it to be more like the Americans and see if that brings our scientific productivity closer to theirs. But European (especially Southern European) culture is not US culture. People want to settle down and be near their loved ones, and there's nothing intrinsically bad about that that should be punished. Plus, the job market is much less dynamic, so even for those who don't mind bumping around, it can be hard to reconcile with a spouse who has a career too. And finally, if you push people in this region to move, most of the time they will end up in a Northern European country (or the US, Canada, etc.) where they make 3x or 4x more, and never come back - once you have experienced a much better salary, it's hard to justify returning. I have seen it plenty of times.

Bring on the external evaluation panels, and then there will be no need for any measure forcing people to move, which would reduce inclusivity and thus the talent pool.


But optimizing for originality is one of the reasons for the current mess, so why is this an appropriate metric? Reproducing research is very valuable and a way to combat fake results.


But many less well-off EU countries have only one good university, and many people don't like moving away. That would mean forcing people to move out of their homeland just for tenure. I'd rather we stop having lifetime professors and limit tenure to 10-15 years.


Sorry, but I don't understand how forcing people to find a new job after 10-15 years ensures they're not forced out of their homeland.


The comment you reply to didn't mention forcing people to find a new job. As far as I understand, they could apply to the same job. But they would have to apply and earn it again, which means that (at least, if the process is fair) they wouldn't be able to stop caring about doing a good job due to having a guaranteed job for life.


> [T]hey could apply to the same job. But they would have to apply and earn it again, which means that (at least, if the process is fair) they wouldn't be able to stop caring about doing a good job due to having a guaranteed job for life.

But that means that a tenured professor will start acting in non-academically-independent ways at some point before their tenure is up, to avoid messing up their re-application.

It is also quite likely that events such as departmental re-orgs could be timed around tenure expiration to eliminate specific job descriptions in order to make re-applying more difficult.

You might be able to achieve the same goals through some combination of making tenure transferable between cooperating institutions, mandatory sabbaticals, requiring review committees to be partly or wholly staffed from outside the institution, etc.


> We have people who study at university X, do their exams at university X, do their Ph.D. at university X under the same professors they always knew, then become postdocs working for their former supervisors and being exploited by them

Independent of every other point, this is, from my point of view, a real problem. It happens so often, and I don't understand why there is often no law requiring that at least one postdoc be done at another institution.


I'd say the problem is there are simply too many people wanting a position compared to how many open positions there are. In my college, PhDs are practically handed out in return for working on projects which generate revenue for the university, plus doing the teaching-assistant work. After that everyone goes back to industry because it's practically impossible to get a professor title. And no one ever loses their job, so there are more professors barely doing anything than professors investing their lives into it. They should make it so that you can only be a professor for a maximum of 15 years, then you have to go. Use that expertise in industry and let someone else try being a professor.


"The solution is to use management."

Not sure what you mean by that, but KPIs are generally put in place by high-level management. Or do you want more micromanagement?

Either way, I think the solution is not more control, quite the opposite. I think the solution is just to remove the extrinsic incentives.

Some people say UBI will cause people to do nothing, and that is probably true for some, but the flip side is that the output of the remaining people will likely be far higher in quality (with the total volume much lower), since their energy won't be completely destroyed by all the busywork necessary to show they are working.


This is a laudable opinion BUT.

(Speaking about Spain).

The KPIs are there because OFFICIALLY (and this is strictly so) you are not even allowed to get a tenured position without an absurd number of (in Maths) JCR papers IN THE FIRST QUARTILE.

This is so stupid it is not even funny but how can you fight that when your PhD students depend on those metrics?


With respect, you're saying metrics are used because they are used. Officially or semi-officially, doesn't matter. There needs to be a collective and individual process of disowning the metrics.


Personally I think we need to do away with tenure. Academic institutions should just hire good researchers as employees and do good research.

There was a time in history when tenure made sense, but today the tenure track process forces a lot of people to go after low-hanging fruit that has a high probability of being accepted for publication, instead of trying things that are meaningful to try but may fail.


That is precisely why tenure is important: you can think without time pressures.


Having tenure does not mean you do not have time pressure. You still have students and postdocs under you and you still need to win grants to do research and pay salaries.

You are constantly under evaluation whether you have tenure or not.


> You are constantly under evaluation whether you have tenure or not.

And what happens if you're poorly evaluated? They can't fire you anyway. Will they move you to a boiler room as a punishment?


Presumably this depends on the country and the institution, but you certainly could be "overlooked" and end up with work that is tedious and unlikely to garner plaudits. I'm sure that after a while this could lead to something approximating firing, being "asked to move on" or similar.


Ahah what. All my tenured professors had constant time pressures.


Presumably this depends on country and institution - probably on discipline too - but every tenured academic I know works horrendously hard. Nothing lasts longer than the cash from the last funding cycle. The job becomes more and more bureaucratic and mired in red-tape. The time-pressures might often be measured in years, but they are definitely present.


Or without political pressures. Provided that you get tenure in the first place.


> Things went pretty well before the metric culture invaded the academy. They can go well again.

That won't happen, mostly because employers have outsourced education and vetting (in the form of requiring bachelor/master degrees) to universities (and the associated costs to governments and/or students who pay tuitions) instead of the old style vocational training/apprenticeship system where the employers had to pay.

Want to restore academia to only those actually interested in science? Make employers pay a substantial tax on jobs requiring academic degrees.


> The solution is to use management.

Your comment does not seem to contain any explanation as to why 'using management' would solve the problem you allude to. Can you elaborate?


The issue is that everyone's subjective measure varies a lot. The NIPS experiment showed that which papers get selected for publication by peer review is largely random. Now imagine that much randomness applied to your career progression. And it would be even worse: peer review at least draws on reviewers who work on the same niche topics, whereas a university panel judging the work may not, so their review would be even more subjective and changing with time.

In short, I think it is clear that neither citations nor journal brand is the best proxy for worthiness, but the system you are proposing is worse while still relying on subjective judgement.


The problem is not about judging creative intellectual work. We already know how to do that, and academia usually manages to do it just fine.

It's not really about judging people either. When it comes to choosing which people to reward with jobs, promotions, grants, and prizes, we already know the solution: expert panels that spend nontrivial time with each application. Sometimes there are political or administrative reasons that override academic excellence, but in general academia has figured out how to evaluate shortlisted people.

The real problem is shortlisting people. For example, when a reputable university has an open faculty position, it typically gets tens of great applications. Because the people evaluating the applications are busy professors, they need a way of throwing away most of the applications with minimal effort. From the applicant's perspective, this means you only get a chance if the first impression makes you stand out among other great applicants. And that's why it matters that you publish in prestigious conferences/journals, come from a prestigious university, and have prestigious PhD/postdoc supervisors.


> academia has to get away from the infantilism of using KPIs

Well, science doesn't care that much for KPIs per se. It's more that the managers want numbers to steer by.

In academia, getting promoted means more management tasks. So higher-up academics have been indoctrinated to want numbers. It's the scourge of management.

As such: not a big fan of your solution.


> People aren't in academia for the money.

I agree overall with what you wrote, but have to comment here, because I think this is already not the case in many settings. I can only speak for the US, but in my experience with some other places overseas similar issues are developing.

There are many legitimate hypotheses for why this is the case, but in general at many universities, as far as climbing the academic ladder is concerned, publication metrics are no longer relevant. That is, some baseline is required, but beyond that, most of the focus is on money and grant sizes. I've been in promotion meetings discussing junior faculty that are not publishing and this is brushed aside because they have large grants. I've also repeatedly heard sentiments to the effect of "papers are a dime a dozen, grant dollars distinguish good from bad research."

Again, there are lots of reasonable opinions about this, but I've come to a place where I've decided this is incentivizing corruption. Good research is only weakly correlated with its grant reimbursement, and regardless, it's led to a focus on something only weakly associated with research quality. Discussions with university staff where you're openly pressured to artificially inflate costs to bring in more indirect funds should raise questions. Just as incentivizing (relatively) superficial bibliometric indices like publication count or h-factors leads to superficial science, incentivizing research through grant money has the same effect, just in a different way.

So yes, going into academics is not the way to make money if that's what you want. However, I think nowadays in the US, it's very much all about the money for large segments, who are milking the system right now at this moment.

Also, in theory, yes, management is the solution, but really management is how we got into this mess. Good management, yes; bad management, no. But how do you ensure the former?

Fixing the mess academia has slid into (in my perception; maybe everything really is fine) will require a lot of changes that will be controversial and painful to many, and I don't think there's a single magic-bullet cure. Eliminating indirect funds is probably one thing; funding research through different mechanisms is another, maybe lotteries, probably opening up grant review processes to the general public. Maybe dissociating scientific research from the university system even more than has been the case is also necessary. Maybe incentivizing a change in university administration structures. Probably all of the above, plus a lot else.

Getting things to go well again is achievable in theory, but the path there is less clear given the amount of change involved.


One way forward could be to lower the bar for publications.

Once it's no longer about being in the esteemed and scarce "10%", they won't bother because they don't need to. Imagine a process where the only criteria are technical soundness and novelty, and as long as minimal standards are met, it's a "go". Call it the "ArXiv + quality check" model.

Neither formal acceptance to publish nor citation numbers truly mark scientific excellence; perhaps winning a "test of time" award does, or appearing in a textbook 10 years later.

I've been reviewing occasionally since ~1995, regularly since ~2004, and I've never heard of collusion rings happening in my sub-area of CS (ML, IR, NLP). I have caught people submitting to multiple conferences without disclosing it. Ignoring relevant past work is common, more often out of blissful ignorance, and occasionally likely with full intent. I'm not saying I doubt the report, but I suspect the bigger problem CS has is a large percentage of poor-quality work that couldn't be replicated.

BTW, the most blatant thing I've heard of (from a contact complaining about it on LinkedIn) is someone having their very own core paper from their PhD thesis plagiarised - submitted to another conference (again) but with different author names on it... and they even cited the real author's PhD thesis!


You can't help but be entertained by the amount of plagiarism that exists. A while back I found out that one of my papers [1] was being sold as a final-year thesis via... a video ad on YouTube [2].

[1] https://ieeexplore.ieee.org/abstract/document/6746236 [2] https://www.youtube.com/watch?v=MFNFScqN47o


Don't students have to defend a thesis any more?

I mean I could buy your paper but I would have to know it by heart and understand it in order to defend it.

At least that's how it went when I was studying applied physics (1974-1977).

If I had just bought a paper I would have had a really hard time in the viva voce.

Edit: by -> buy


Depending on the country, the PhD defense is more or less ceremonial. If you have made it to the defense, you can pretty much not fail anymore.


When I defended my PhD in 2013 at Georgia Tech Chemistry, the first half was the public defense, including Q&A. The second half was closed-door, consisting of just the committee and the PhD student. This second half was known to be more combative and it was common for the committee to request additional work. It was very rare for any one to flat out fail, since the student’s advisor wouldn’t allow them to defend until they had sufficient novel research.

In my program, many of us would strategize for the 30-60 minutes of the closed door grilling. We sought to give our committee members obvious things to criticize with the PhD student having prepared arguments to defend against these criticisms. E.g., I ashamedly included quite a few spelling and grammar errors in the first few pages of the summary section of the thesis (the only part anyone would actually read) and we spent at least 15 minutes on my horrible writing ability.

In general, the main outcome of the closed door portion of the defense was requests for additional work. It was common for committee members to suggest additional things that could “improve” the thesis work. Not surprisingly, many of these suggestions involved applying a committee member's methods, even if not plausibly applicable, so that one would publish another paper citing the committee member’s work. Some students, including myself, would have a job lined up before the defense to timebox the amount of additional work that could be requested.


hahaha I had a PhD defence in 2018 in the UK. Criticisms included:

"I don't trust your maths" "I don't feel this analysis is right, but I can't describe in what way" "you are clearly not very knowledgeable" and many other similar things.

Asking me a question and then, before I could open my mouth, answering it themselves, then insulting me for not answering it: that was the start of the viva defence, and it set the tone for the rest.

I was also heavily criticised for not having cited a paper that came out between submitting my thesis and the defence, despite this being literally impossible. And, despite having already gotten a job in that time, I was given limited time to do additional experiments, write whole new chapters, write new code, etc. I ended up adding 90 pages of material to the thesis.

In the end I had to quit the job I had just got, because it would have been impossible to keep the job and not fail my PhD program.

Afterwards, in behind-closed-doors discussions, it was revealed that one guy had deliberately pushed for almost all of the required extra work to try to make me fail, because I had done something he could not.


These people should be getting treatment at least on par with what they got with #metoo, why is that not the case?

I mean, personally, I don't really care about reputation of academia, but given that you described a horrible story where somebody basically tried to ruin your career, and given that you decided not to name specific people or institutions, it seems that the whole (UK) academia will have to take the reputation hit for the alleged scandal.


Given that they succeeded in completing the poisoning of my opinion of academia and scientific research, there is no career there for me at all. I say "completing" because PhDs can feel unpleasant anyway - mostly as a result of high stress, difficult work, no security, and a complete lack of a social life and interaction with other people for 4 years, rather than the science itself - but that does set an emotional impression of what research is like that is somewhat difficult to overcome.


I defended my PhD in 2009 at George Mason University (GMU). It had two halves, but they were reversed: the first half was private, the second half was public. I think that's a better system.

In the first half I was privately grilled by my committee. They wouldn't let the dissertation go to a public defense until they'd satisfied themselves that it was fine.

As far as I can tell, public defenses at GMU (at least in its engineering department) usually look ceremonial, but they aren't: anyone from the public can ask questions. However, the goal of the private first half was to make sure that the defender is ready for arbitrary questions (because he understands the material), so it's unusual for the public to ask questions that the defender isn't able to answer. I got some questions I hadn't heard before in my public defense, but I was able to handle them.

You can see my public defense here: https://www.youtube.com/watch?v=QYH18NpsRu8


That's really not the case here in the UK, I know several people who failed their vivas and had to make substantial changes to their thesis. It's not common, because there are intermediate level mini-theses and vivas, and people who do badly in those often don't finish at all, but it does happen.


Yes, in the UK mine was terrifying. In all likelihood a candidate would get destroyed trying that tactic.


I second that statement. I know at least two people who failed their viva in the UK and ended up getting an MPhil instead of a PhD. In both cases the candidates were told by their advisor not to go ahead with the defense, but they decided to go for it anyway. I know several more who got "major corrections", which meant a lot of extra work and another review six months later.


My supervisor had a student who apparently just said "no" as her answer to each critique, 38 times. She got the level of fail which is "never ever come back" - which apparently does exist.


Nobody likes a public humiliation, so perhaps the defense event is almost ceremonial, and the real defense took place before that, in order to get the event itself scheduled?

So that even if people don't fail their defenses often, there might be many that didn't get theirs scheduled at all? This is just speculation on my part.


Still, the effort to come up with something original is order(s) of magnitude greater than for copying. It's likely that the potential buyers have enough grounding to understand the papers.


What you propose is actually how it used to function until around the mid-20th century. Journals used to be very permissive when accepting papers with the editor only doing a cursory check to make sure the paper wasn't total garbage.

More info here: https://michaelnielsen.org/blog/three-myths-about-scientific...


That’s how it used to be. The reason behind the shift is the rapid increase in the amount of research output in the past decades. The reason that “arXiv + quality check” won’t work is that the amount of research funding and permanent positions (related) did not increase accordingly. (Think of all the people producing science for low wages as Phds and postdocs and then quitting.) Right now the burden of the (unfair?) selection is mostly on the publishers and their prestigious journals/conferences, which then funding agencies/institutions take into account when funding research/hiring people. If we switch “arXiv + quality check”, that burden will just move to the funding agencies/institutions, but it won’t solve the underlying fundamental problem.


In the US, both academic hiring and proposal reviews are already laborious jobs where committee members are supposed to investigate the candidate/proposal deeply on their merits and potential. Being unable to take shortcuts shouldn't be a new burden for them.

I think the recent wave of low-impact submissions and co-authorship rings is the result of developing countries trying to simplify that process and tying hiring/promotion/pay directly to publication count and related easy metrics.


They are supposed to, but doing that would require much more work than is usually put into it now. Especially in the first rounds, it would be impossible to review all candidates/proposals thoroughly without having staff dedicated to just that. (IMO institutions should have such highly-skilled staff.)

It may be true that the SNR in research output from developing countries is lower, but there is still lots of good science. But essentially no publishers with good reputation. So even with the same SNR, the increased pool of countries producing science would add to the publication pressure.


> Imagine a process where the only criteria are technical soundness and novelty, and as long as minimal standards are met, it's a "go". Call it the "ArXiv + quality check" model.

One possible issue is that researchers usually need to justify their research to somebody who's not in their field. Conferences are one way to do this. So are citation counts. Both are highly imperfect, but outsiders typically want some signal that doesn't require being an expert in a person's chosen field. The "Arxiv + quality check" model doesn't seem to provide this.

> I suspect the bigger problem that CS has is a large percentage of poor-quality work that couldn't be replicated.

As a sort of ML researcher for several years, I agree.


As a fellow ML researcher, I want to add that the lack of code accompanying the publication makes the problem worse. $BIGGROUP gets a paper whose core contribution is a library published, and yet they haven't released the code 6 months after the conference, effectively claiming credit for something unverifiable.


I guess this can be different depending on your specific field, but in NLP it really changed for the better in the last few years.

I don't have data, but from subjective experience, 5-6 years ago most papers in major NLP conferences didn't have an associated code repository. Now, the overwhelming majority do.

There are still many other problems, for example a big one is reporting of spurious improvements that can vanish if you get a less lucky random seed. But at least including code is now common practice.
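The seed-sensitivity problem described above is one reason multi-seed reporting is encouraged. Here is a minimal sketch of that protocol; the `evaluate` function and all numbers below are hypothetical stand-ins for a real train/evaluate run:

```python
import random
import statistics

def evaluate(model_seed: int) -> float:
    """Stand-in for training and evaluating a model with a given seed.

    Hypothetical: real code would train the model and return test accuracy.
    Here we just simulate a noisy score around a true mean of 0.80.
    """
    rng = random.Random(model_seed)
    return 0.80 + rng.gauss(0, 0.01)

# Evaluate over several seeds instead of reporting a single lucky run.
scores = [evaluate(seed) for seed in range(10)]
mean = statistics.mean(scores)
std = statistics.stdev(scores)
print(f"accuracy: {mean:.3f} +/- {std:.3f} over {len(scores)} seeds")

# A claimed improvement much smaller than the run-to-run standard
# deviation could easily be seed luck rather than a real gain.
```

The point is only the reporting discipline: a single-seed number hides the variance that this sketch makes explicit.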


Back when I did a stint at something NLP-ish for my master's, one of the problems seemed to be that, apart from the lack of code, the data was also often non-public and specific to the study. That made it impossible to compare different algorithms even on the results reported in the publications themselves, because the testing methodology was all over the place and the datasets used for testing might all have been different. You couldn't really make much out of the reported results even if you believed the authors reported honestly and had their methodology more or less straight.

I suppose the situation regarding common datasets might vary between subfields and NLP tasks, so maybe I just saw a weird corner of it.

Of course the code was also nowhere to be seen.

Availability of code would of course be even more important, both because of replicability and general verifiability, and also because that would allow you to do a comparison with any number of datasets yourself.

Glad to hear that code availability has been improving.

> There are still many other problems, for example a big one is reporting of spurious improvements that can vanish if you get a less lucky random seed.

Considering that a lot of NLP is at least somewhat based on machine learning, don't people do cross-validation or something?


For NLP, sharing data is a bit of a problem though.

You do a paper showing that problem X can be solved slightly better by downloading and training on a billion tweets.

But you don’t have the copyright to those tweets, so you can’t share data.

> don't people do cross-validation or something

A lot of stable problems come with a dataset already split into train and test.


> You do a paper showing that problem X can be solved slightly better by downloading and training on a billion tweets.

That's true. Sometimes you might try to tweak the algorithm itself rather than the data, though, or experiment with different kinds of preprocessing or something, and in those cases it would be helpful to be able to do different experiments with shared datasets.

My limited experiences were from around the time deep learning was only about to become a big thing, so it might have been different then. Maybe you nowadays just throw more tweets and GPUs at the problem.


Even novelty is a dubious standard that is often used to discard replications and meta-analyses (not novel), empirical work (no novel theory), reformulations and simplifications of existing research (not novel) and null findings (nothing was found so not novel).


Lolwhat, empirical work is the only thing that can confirm (or rather, refute) the worthiness of a theory!?


Well, then you need something else to use as a basis for tenure decisions.

Also, a "best 10%" conference/journal is valuable -- I have only so much time in the day. There are a few conferences for which it is always a good use of my time to read all the abstracts, plus one or two of the papers that seem most interesting. I can't do that for every conference, or even most conferences, in my area.

So the "best 10%" conference/journal is valuable to the consumers, and the prestiege is valuable to the producers. Therefore I think such a thing would simply re-emerge if you somehow killed it.


I mean, there are several journals that publish based only on "technical soundness"; Nature's Scientific Reports is probably the best known one. I don't think it helps; in fact, IMO it's detrimental, because journals would be even more flooded with publications. Also, in the article they talk about the conference getting 10,000 submissions; that quickly becomes unmanageable for a conference to handle.


In deep learning a lot of papers end up only on arXiv, and authors don't bother (or need) to send them anywhere. Even more, some seminal papers end up only on the author's page (e.g. the one introducing GPT-2: https://openai.com/blog/better-language-models/). I am not a big fan of the latter, as it creates problems with long-term availability.

Moreover, the correlation between acceptance and impact exists, but is not that high: https://medium.com/ai2-blog/what-open-data-tells-us-about-re...

Of course, only corporate researchers can rely on not publishing in established journals - as their salary and position does not come from "publish or perish" metrics. Ironically, it means that there is more academic freedom in private companies than in academia.


Bell Labs in its heyday was still reputed to be publish or perish, or patent or perish, without any tenure system either. At my university, we had a previous Bell Labs researcher that said he had been pushed out. As a corporate researcher, there is still high pressure to output something.


At its best, that's what Scientific Reports is (in a number of subfields): a place to put research that is reviewed for technical correctness. There may be debate about how well it succeeds, but I think it's useful. I would prefer to have more papers out where people worked on something, found it wasn't necessarily exciting, but it was solid work, and it saves other people time.


It has long seemed like there was soft collusion in academics anyway. There is plenty of complaining and suspicion about how bad and rigged reviews can sometimes seem to be. There are plenty of rumors of PIs who influence what gets published through back channels. But even in the absence of outright nefarious activity, there's a reinforcement cycle where bigger labs influence what papers get in, receive more grant money, and get better at influencing what papers get in. Even gently guiding what topics are acceptable shuts out newcomers in the long term. Just about any graduate student, by the time they graduate, has reviewed papers in a "double blind" system where they could reliably identify the authors; there are always tell-tale markers and styles. It's really hard to find true anonymity.

I've been thinking on and off about a review system that might improve on things. I'm imagining, perhaps: reviewers and authors both get to see who each other are, and conflicts of interest can be called out by other people after review and before publication; reviewers are chosen at random, not allowed to bid; reviewers are sent a series of pairs of papers and asked to choose which one they'd rather see, with scores and ultimately publication decided by rank-choice vote rather than reviewer assignment; and comments on paper improvement would be completely optional. Would this be better or worse than what we have? Would it deter explicit collusion?
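A toy sketch of the pairwise rank-choice idea, assuming a Copeland-style aggregation (ranking by number of pairwise wins); the paper names and votes are entirely made up:

```python
from collections import Counter

# Hypothetical pairwise votes: each tuple (winner, loser) records one
# reviewer preferring the first paper over the second.
votes = [
    ("paper_A", "paper_B"),
    ("paper_A", "paper_C"),
    ("paper_B", "paper_C"),
    ("paper_C", "paper_B"),
    ("paper_A", "paper_B"),
]

# Count pairwise wins per paper. Because each vote is a single binary
# comparison, every reviewer implicitly works in the same "units";
# there is no per-reviewer score scale to calibrate.
wins = Counter(winner for winner, _ in votes)
papers = sorted({p for pair in votes for p in pair})

# Rank by wins, breaking ties alphabetically for determinism.
ranking = sorted(papers, key=lambda p: (-wins[p], p))
print(ranking)  # papers ordered by number of pairwise wins
```

A real system would also need to handle Condorcet cycles and unequal numbers of comparisons per paper, which this sketch ignores.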


You can’t be as candid in your criticisms if the identity of the reviewers is made available to the authors. This is the main reason they’re made anonymous. If you think there can be a conflict of interest, the main mechanism in most fields is to point who shouldn’t be a reviewer for your paper. If you forget someone and they are asked, it’s also their responsibility to recuse themselves. Of course it’s based on trust. Trust is still the foundation of academia.


I’d agree with all those thoughts with respect to the system we have today. Yes, the goal of de-anonymizing reviews would be to eliminate trust in reviewers as a requirement for achieving impartial and fair reviewing (given the many ways we’ve already seen trust fail), and hopefully the side effect would be to increase overall trust in the system. Making reviewers part of the public record would also allow exposure of bad behavior and explicit collusion, without having to go to great lengths to prove it like they did in the article. I’m not entirely sure that the level of candidness that anonymity invites is necessary or helpful for a healthy and robust review system. But note that my suggestion above doesn’t depend on critical feedback at all.

One of the things I’ve found problematic in today’s review system is that reviewers assign absolute scores to a paper, but every reviewer has their own notion of what is good and bad. It’s common for an entire group of reviewers in one sub-topic to give lower average scores than a group in another sub-topic, meaning that different sections of a given journal or conference have different acceptance rates. My suggestion of rank-choice voting is partly to ensure that all reviewers are working in the same units, and are all weighted equally. (But I assume there is a possibility of unintended consequences and/or opportunities to game the system with what I suggested, so I’m curious if people see ways that might happen.)


If reviewers are de-anonymized, the opposite of what you think will happen. Nobody other than the most socially stunted would be willing to write a harsh review and make an enemy for life. People are petty. Most fields in academia are incredibly small, with everyone knowing each other. Reviews would become way more positive and useless. Collusion for positive review is pretty rare and wouldn’t really change with real names anyway because it’s pretty subjective to claim a positive judgement is obviously the result of collusion. It’s also already the job of the editors to collate the reviews and make his own judgment based on them. If one review is overly positive and adds no useful information, it can and will be discarded. The editor can even call for another reviewer if he/she is still undecided. I’ve reviewed papers and recommended acceptance, yet the paper was still rejected; the opposite has happened too. As a reviewer you are only advising the editor.


> Reviews would become way more positive and useless.

In my corner of CS, there are plenty reviews of debatable use that are very negative. In fact, I have a pet hypothesis that the average score of the lower scoring but accepted papers is negative (scale from -3 to 3). And I wouldn't be surprised if the median paper's score was negative.

Positive uselessness doesn't seem like that much worse.


Are you sure about that? I’m not sure you are hearing and understanding my suggestion. A review would always consist of one paper being ranked above another one. The tone of the review comments would have no bearing on the usefulness of the review. The reason we even have critiques in reviews is to justify the score that the reviewer assigns. Reviewers are specifically asked to produce reasons. What I’m suggesting is different because reviewers would rank two papers against each other and not be expected to justify their decision, nor would they be asked to provide a magnitude for their opinion. Comments and critiques would be allowed, but there’s no need for harsh reviews.

I’ve seen a lot of overly and unnecessarily harsh reviews. Anonymizing enables over-stating criticism, it happens routinely. I don’t think I agree that harsh critique is necessary for a healthy review system. It is already the case that good reviews are not extremely harsh, they focus on the facts and are willing to stand by their statements. I don’t personally know that many researchers who have trouble being direct in person and face to face, or of offering constructive criticism.

> As a reviewer you are only advising the editor

This completely depends on the journal or conference. Quite a few of them, especially the larger ones, do not override reviews casually nor often. And what I’m suggesting is a system where this idea can change, where editors can more easily trust the review results, and won’t need to override the decision.

> People are petty.

This might well be true. And so I’m not entirely understanding your argument. It seems to be simultaneously suggesting we have a problem, and defending the status quo as the way it needs to be. What would you suggest as a way to improve the review system so that pettiness has less influence than it does today?


Ranking only makes sense in a venue where the number of papers or presentations is limited. There are many online journals of good reputation that will publish anything over a certain (obviously not perfectly well-defined) bar. The number of papers tends to increase over the years as the number of researchers also increases, and this is considered acceptable. Other journals are so selective that they won’t publish anything if nothing is deemed worthy.

You’re also misunderstanding the point of reviews. They also serve as comments to the authors to modify their manuscript and make it acceptable for publication.


The part I found interesting was that the author notes it is well known that the quality of the work has little to do with its odds of acceptance even when the system is 'functioning' as designed. Is it cheating to game a system that is already fundamentally broken? If the quality of your work is inadequate to secure publication and your career depends on it, it wouldn't be very hard to convince yourself that you can't cheat a rigged game. What 'integrity' is actually being threatened? Perhaps some of the energy devoted to identifying these collusion rings would be better spent developing a review process that is at least somewhat biased in favour of good research rather than the density of the authors' professional network. Or at least in mitigating the poisonous 'publish or perish' rules that lead to this sort of phenomenon.


> the author notes that it is well-known that the quality of the work has little to do with its odds of acceptance even when the system is 'functioning' as designed

I think this is a little more pessimistic than what the piece says. The NeurIPS (then-NIPS) experiment found that about 60% of papers accepted by one PC got rejected by the other. That doesn't actually mean "the quality of the work has little to do with its odds of acceptance". It may just be that a paper has to cross a quality threshold, and once it's past, the outcome has a lot more variation.

My personal take on NeurIPS specifically is that there's a fraction of bad papers, maybe 40%, that probably shouldn't and won't get in. Then there's a minority of very nice papers that probably should and will get in, maybe 5-10%. And then there are a bunch of middling papers where a lot of it is luck and drawing friendly reviewers. But these aren't bad papers, and you can't really just churn them out, they're just not very good papers.
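A back-of-the-envelope check of this three-tier reading, with all fractions purely hypothetical: even if quality fully determines the clear accepts and clear rejects, independent coin flips on the middling tier alone reproduce a disagreement rate in the same ballpark as the experiment.

```python
# Hypothetical three-tier model: clear rejects and clear accepts are
# decided by quality; each PC accepts a "middling" paper independently
# with probability p.
clear_reject, clear_accept = 0.40, 0.08
middle = 1.0 - clear_reject - clear_accept      # 0.52
p = 0.25                                        # per-PC chance for a middling paper

accept_rate = clear_accept + middle * p         # overall acceptance rate per PC
# Among papers accepted by PC1, the fraction PC2 would reject:
disagreement = (middle * p * (1 - p)) / accept_rate

print(f"acceptance rate: {accept_rate:.0%}")               # 21%
print(f"PC2 rejects {disagreement:.0%} of PC1's accepts")  # 46%
```

So a high inter-committee disagreement rate among accepted papers is consistent with quality mattering a great deal at the extremes, with randomness only in the middle band.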


Seems it's also rational for an opportunistic player of the publications game to split their work into the maximal number of marginally acceptable publications, which would result in a high rejection %.


I wouldn't be surprised if that's already happening and has become the preferred strategy for the last decade at least.



Don't forget there can be papers recognised universally as acceptable.

From memory, it was 25% rejected by both PCs, 15% accepted by both PCs, and the middle 60% random.


Then why not review projects in a double-blind fashion? Those which are accepted twice will get published, and voilà. But that's twice the work.


Double-blind usually means something else in reviewing: the reviewer does not know who the author of the paper is, and the author does not learn who the reviewer is. Single-blind reviewing is when the reviewer knows who the author of the paper is, but the author still does not learn who the reviewer is.

As you mention, the problem with having two sets of reviewers is that it's hard enough for conferences like NeurIPS to find one set of qualified reviewers. Usually at least 1/3 of reviewers on any given paper produce a poor review, either because they don't care or because they really lack expertise. Complaining about this is so widespread that it even doubles as a sort of icebreaker for researchers, but nobody has a good solution.


I agree with one of the other posters: the result of the study does not show that acceptance of the work is unrelated to its quality. It's more a consequence of the distribution of the "quality".

Essentially, if you look at review scores for a conference with, say, a 40% accept rate (and this is quite similar across fields, I'd imagine), you find 10-20% (depending on the conference) of papers that are clear rejects for all reviewers, then probably around 10-15% which are very clear accepts. The rest of the papers have very similar scores, so the cut-off becomes quite arbitrary (and depends on luck as well). This is actually well known for grant applications, and a sign that there is likely not enough money in the system.


The bigger issue here, and what is threatening the integrity of the research, is blind reliance on conference publication count as a proxy for research quality.

Maybe it's time to move on from some of these conferences, and focus on interactions that maximize sharing research findings. I know that is unrealistic, but like every other metric, conference acceptance ceases to have value once the metric itself is what people care about.


I often go back to this talk from Michael Stonebraker about, in his terms, the diarrhea of papers [1]. It's difficult to justify time towards anything that doesn't lead to a near-term publication or another line on your CV.

[1] https://youtu.be/DJFKl_5JTnA?t=853


A really awesome talk. It's nowhere close to my field of research, but it is oh so relevant anyway. Thank you for linking it; I'd recommend everyone interested in this larger topic to listen to what Stonebraker has to say.

And one of the suggested solutions (at ~22:00) seems like it would work if it could be adopted: essentially, have top university administrators evaluate each candidate's top x papers for hiring/tenure decisions and ignore everything beyond that number. What you measure is what you get: if you measure count, you get a deluge of 'least publishable units'; if you measure the top 3 papers, then everyone will focus on quality instead of quantity. A counterargument is probably that it's easier for administrators to measure quantity in a way that seems objective and resistant to arguments or appeals, and far harder to objectively compare quality, especially if the candidates are from different subfields of research.


This is what the UK REF system does effectively. You get evaluated on at most your top 5 papers.


The top 5 in the last 4 years, which is pretty lunatic when you think about it. A PhD, which is meant to extend human knowledge by one iota, takes 3 years to write. So what does 5 papers indicate?

The REF is evaluated by humans. I take 2-3 weeks to read a paper; for good papers I might work on them for 6 weeks or more to really get into the technique. How can a REF reviewer consume 5 papers to evaluate an academic? How can they consume 200 papers from 40 academics?

The right thing would be to have the department submit its top 5 papers; 3 reviewers could then really see what is going on in a department.


REF is 7 years I believe. 2-3 weeks per paper: I'm not sure what stage of your career you are at, but I can't see how this would be necessary to evaluate the quality of a paper (which presumably has already passed peer review and been published in a top-tier venue). Also, a PhD student is in effect a trainee researcher. I don't think it's unreasonable to expect a more experienced academic to have a higher output rate.


> which presumably has already passed peer review and been published in a top tier venue

So - how many IEEE conferences are there!? Also, things change; it was much easier to get into NeurIPS 7 years ago... but it's not necessarily the case that the papers from this year will have as much impact or be as good as the papers from then. And as NeurIPS itself showed, the peer review process is somewhat random (the same papers get different decisions from different panels), meaning that getting in is likely an achievement but also possibly a bit lucky. I don't think that evaluating papers based on where they were published is a good way to allocate public money.

As a strong concrete example, paper 13 from Ferguson's group at Imperial is probably one of the most important documents for the last 20 years (if you live in the UK or France), it was "published" on a website...


No comment on the quality of many IEEE conferences :) I think most people active in a field know what the best venues are. I certainly don't disagree that there is a lot of randomness to the review process. And frankly picking which papers are likely to be most influential in ten years time is super difficult (e.g. see how often best papers have minimal subsequent impact). Regarding the Ferguson document, I haven't served on a REF review panel, but I think if you can justify it as being influential/high quality through other means that should be acceptable?


OFC you are right - it would be fairly career limiting to try to talk it down I would think! But I am concerned that the REF should work by the reviewers actually looking at the material, rather than looking at where the material was published...


This is a great lecture. While he is specifically talking about database research, a lot of what he says is relevant to academia.


"When a measure becomes a target, it ceases to be a good measure." - Goodhart's Law

It's not even just about academia; it's a universal problem caused by lack of personal accountability through the globalization of talent. The exact same dynamic was shown multiple times in HBO's The Wire: gaming the metrics. LPUs are no different than 10-minute YouTube videos.


I made a similar comment in another thread here, but the problem is that there will always be non-experts who want to assess the quality of research being done in different fields. These might be deans, department heads, VPs of R+D labs, and so on. Ideally, all research would have clear measurable impact, but a lot of good research doesn't, at least not immediately, so there needs to be some way for non-experts to measure quality. Conference/journal publications and citation counts are highly imperfect solutions, but I'm not sure what the better candidates are.


Assessment can never be done by non-experts. They do not understand the subject matter and cannot even judge which journals are good. It just makes no sense.

When someone is hired (whether for tenure or time-limited), their research as a whole has to be evaluated by external, independent committees who take into account the content of the research and do not base their judgment on indicators only. There is no shortcut around that.

The biggest annoyance nowadays is the decision-makers' insistence on "excellence", though. You cannot have only excellent people everywhere, as per the definition of "excellent", yet this demand is in every fucking guideline for postdocs and tenure-track position. It's absolutely ridiculous.


> Assessment can never be done by non-experts. They do not understand the subject matter and cannot even judge which journals are good. It just makes no sense.

The problem with this assertion is: non-experts are paying for all of this.

Imagine you are an ordinary taxpayer. You are feeling the pinch yourself, you look around you and see infrastructure crumbling, every day the press says healthcare and this and that is underfunded. Now along comes some scientist, he or she wants a few billion for a new particle collider that will make no difference whatsoever to your life, and their only justification for it is "well other scientists say we should get all this money, and they're cleverer than you, shut up".

Can you see why funding something with no accountability might be considered problematic?

There's also the fact that an expert who can't explain something to a non-expert probably doesn't understand it very well themselves...


If non-experts are in charge of individual funding, then you're virtually guaranteed to get nonsensical decisions. It couldn't possibly work, and even if you got it to work, the results would be abysmal.

> Can you see why funding something with no accountability might be considered problematic?

Maybe you're confusing budget decisions with funding and hiring decisions. These are fundamentally different. Universities and research institutions, as well as national funding authorities, get budgets that are decided politically, i.e., by elected representatives. These can have broad categories and guidelines or preferred research areas (e.g. "excellence initiatives"). Budgets are usually allocated well in advance, for instance our national funding authority gets budget security for 4 year periods (if I'm not mistaken). How they spend it is dictated by political guidelines for the respective period and plenty of complicated national and international laws.

In contrast, I was talking about hiring decisions and decisions about individual funding. How can it not be obvious to you that these decisions need to be made by experts on the basis of CVs and scientific project proposals, not by politicians or other laymen?


Only experts can make this judgement in the short term. As time goes by, the reception of the work accumulates and it becomes increasingly possible for non-experts to judge.

It took over 20 years after the Standard Model reached broad acceptance (i.e., the experts thought the theory was probably right) for the first supercollider powerful enough to observe the Higgs boson to be financed. This was enough time for policy makers to reach high confidence that the experts had not gone badly wrong, and for there to be reasonably well-informed public opinion on the merits of the search.


The better candidate is spending more on the evaluation.

E.g., for my company's twice-yearly evaluation, everybody writes up a short report on the most impactful stuff they've done, including evidence; this is evaluated by their manager to give a score; and then there's a series of group meetings between managers to make sure that the scores are calibrated, including looking at all the metrics that can be dug up and comparing against our written role descriptions for different levels. It takes a lot of time but creates fair scores.

This is extremely labor intensive, but that's the thing: To create anything resembling fair evaluation of a large group of people that do a large set of different things, you need to do things that are labor intensive. Using a simple set of metrics don't cut it.


The department head at least should be something of an expert, and be able to consult with more specialized experts. Then that department head can pass on an evaluation to management.


Most experts (who understand what a department head is) don't want to be department heads. In the UK there used to be a tradition of making someone head of department for two years or so and then letting them become a professor again so that the next junior could do their stint as head! This has faded because of much nastier politics (everyone hates you after you've sacked their friends) and a rise in admin due to the industrialization of the bigger departments (it's not a job for an amateur any more).


Looks like it could work if you had to do it at a different university? (Also, less cronyism.)


The playing field where one can share research findings is far more uneven, more tilted in favor of prestigious labs and individuals, than the current system of conferences and blind review.

Findings and papers from well-known individuals (read: Twitter accounts) get far more attention and more citations. Of course, one can argue that, broadly, well-known labs and individuals are well known because of their tendency to do great work and write better papers. And that's true. However, the above still holds, in my experience as a PhD student in ML. Anecdotally, I have seen instances where a less interesting paper from a renowned lab got more attention, and eventually more citations, than a better paper on the same topic accepted at the same venue by a less renowned lab.


I completely agree with you and this is a significant issue, because it makes it hard for someone not from one of the established places to get research attention.

I would also argue that with the increased importance of the (mostly) commercial "high-impact" journals this has become worse. I know that some of the professional (non-expert) editors of these journals specifically look at the citation counts of authors before accepting to send them out to review, because their main aim is to get people reading the articles, not necessarily good science.


This is objectively worse. Commercial journals need to become a thing of the past. I feel that the publishing venues in the field of machine learning are a great example of non-commercial, (relatively) transparent, open-access publication. Look at ICLR, ICML, ACL, NeurIPS, for instance. All of their proceedings are open access. Most papers there are available on arXiv as well (which has both pros and cons that I'd rather not get into here). Many of them also have an open review process wherein the reviewers post their reviews for all to see in a forum-like interface, allowing frequent back and forth with the authors. In the best case, it's more akin to assisting the authors in refining their potentially sound findings, a dynamic process where the paper may be revised a couple of times before acceptance.

I believe journals need to adopt the OpenReview process. There are obvious parallels between the lifecycle of a journal submission (from submission to acceptance) and the OpenReview process, except that the latter is accelerated (for better or for worse).


Yes, the clout that the large commercial journals hold over the academic process is really worrying. A publication in Nature can mean the difference between tenure and no tenure, or between getting a grant and not.

I have to say that I find the review process of the Copernicus journals very interesting. You can see a description here: [1]. Unfortunately I don't work in a related field, otherwise I would have published there already.

[1] https://www.atmospheric-chemistry-and-physics.net/peer_revie...


Yeah, I was wondering: has someone tried to compare citations of papers by researchers who don't tweet/blog to those who do?


Yep.

> We find considerable evidence that, overall, article citations are positively correlated with tweets about the article, and we find little evidence to suggest that author gender affects the transmission of research in this new media

https://journals.plos.org/plosone/article?id=10.1371/journal...

(I've only skimmed the paper, a few months ago).


It's worth noting some context:

- Conferences and participants have increased exponentially over the last few years.

- Students in AI/ML graduate programs have similarly increased.

- Huge numbers of companies are hiring AI/ML graduates.

- What constitutes a true advance in AI/ML is difficult to determine. Deep learning is a fairly ad hoc method; a tweak to an existing method that lets you exceed SOTA (state of the art) is the simplest way to get attention. But that's not a method, and so you're competing with many others pushing similar tweaks.

This environment in particular seems like it would exacerbate all the ordinary pressures to cheat found in the academic environment. It has something of the quality of the last blow-out of a bubble. And the thing is that even with deep learning being real, the dynamics seem fated to push things to the point that expectations are sufficiently far past reality that the whole thing collapses, for a bit.


I'm not in academia, but in the grand tradition of "why don't you just..." solutions crossed with "technical solutions to people problems":

Would it help at all if rather than participants reviewing 3 papers, each reviewed 2 papers and validated the review of 3 more papers?

This is computer science here, with things like the set NP whose defining characteristic is that it's easier to check a solution than generate it.

I'm imagining having some standard that reviews are held to in order to make them validatable. When validating a review, you are just confirming that the issues brought up are reasonable. Same for the compliments.

Sure, it's not perfect because the validators wouldn't dive in as deep or have as much context as the reviewers, but sitting here in my obsidian tower of industry, it seems like it would at least make collusion attacks more difficult. Hopefully without increasing the already heavy load on reviewers.

(It very much seems like an incomplete solution -- we only have to look at politics and regulatory capture to see how far wrong things can go, in ways immune to straightforward interventions. Really, you need to tear down as many of the obstacles to a culture of trust as you can. Taping over the holes in a leaking bucket doesn't work for long.)


That would only work if the review decisions could be expected to be reasonably consistent from person to person. But, from the article:

> In a well-publicized case in 2014, organizers of the Neural Information Processing Systems Conference formed two independent program committees and had 10% of submissions reviewed by both. The result was that almost 60% of papers accepted by one program committee were rejected by the other, suggesting that the fate of many papers is determined by the specifics of the reviewers selected and not just the inherent value of the work itself.

With this much demonstrated discrepancy between two sets of reviewers, it’s hard to believe that adding a validation step would produce a consistent improvement. How can people be expected to find improperly accepted papers when they have less than 50% agreement on the acceptance of good-faith submissions?

Honestly I think this seeming randomness in acceptance is at the heart of why people might think cheating is acceptable. If the process is not reliable, why bother submitting to it?


The described result is not in conflict with decisions being reasonably consistent.

Suppose that two reviewers independently rank papers 80% on quality and 20% on chance factors. With good odds, the two reviewers will agree with each other on the relative rankings of any given pair of papers. But their lists of the top 10% of papers will largely not be in agreement with each other.
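That intuition is easy to check with a toy Monte Carlo simulation (the numbers below are illustrative, not taken from the NeurIPS experiment): score each paper as 80% shared "true quality" plus 20% committee-specific noise, then compare pairwise ranking agreement against the overlap of the accepted sets.

```python
import random

random.seed(0)

N = 1000   # hypothetical submissions
K = 230    # papers each committee accepts (~23%)

# Each paper has a shared "true quality"; each committee's score is
# 80% quality and 20% committee-specific noise.
quality = [random.random() for _ in range(N)]

def committee_scores():
    return [0.8 * q + 0.2 * random.random() for q in quality]

sa, sb = committee_scores(), committee_scores()

def top_k(scores):
    return set(sorted(range(N), key=scores.__getitem__, reverse=True)[:K])

overlap = len(top_k(sa) & top_k(sb)) / K

# Pairwise agreement: do the committees rank a random pair the same way?
pairs = [(random.randrange(N), random.randrange(N)) for _ in range(20000)]
pairs = [(i, j) for i, j in pairs if i != j]
agree = sum((sa[i] > sa[j]) == (sb[i] > sb[j]) for i, j in pairs) / len(pairs)

print(f"pairwise ranking agreement: {agree:.0%}")
print(f"accepted-set overlap:       {overlap:.0%}")
```

With these parameters the two committees agree on most pairwise comparisons, yet their accepted sets overlap far less, which is roughly the pattern the NeurIPS experiment reported.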


>The result was that almost 60% of papers accepted by one program committee were rejected by the other, suggesting that the fate of many papers is determined by the specifics of the reviewers selected and not just the inherent value of the work itself.

I referenced Kahneman's latest book, Noise, above, but this is exactly the problem he focuses on. There are solutions.


Wouldn’t a ‘solution’ only work if there actually is some underlying objective quality to a paper? It might be the case that different reviewers disagree because there is no right answer, because there is no such thing as objective quality.


Well, the wisdom of crowds suggests that there is a truth... and a conference is charged with representing the best, so I think that randomization and meta-moderation plus other strategies are better in the long run.


This is in aggregate. If you look at great papers, they are almost surely accepted. Bad papers are almost surely rejected. But those in the middle are a coin toss.


The problem with Academic collusion rings is that eventually the ring can hold such influence that all "major" research comes from the ring. As ring members benefit there is no reason for them to switch to an alternate system. If one wants to progress in their chosen field then there is no benefit to publishing in an unread source.


I think this is a really interesting idea. It does seem more effective for validating that reviewers are not torching the work of others — is there a nice way to validate that reviewers are not giving their friends a pass, short of reviewing the manuscript and the review?


Sometimes the collusion is blatantly obvious.

Back in grad school a colleague of mine spent nine months on an experiment in a new field and submitted it as a paper to a quality journal. Six months later, the paper was rejected for lack of novelty: One of the reviewers had found a paper with a figure-by-figure duplication of the same experiment -- published on arXiv a week before the rejection decision. Both the managing editor and the author on the arXiv paper were from Chinese universities.

We wrote a rebuttal and submitted a complaint to the journal editor, but no justice was forthcoming. My colleague switched research directions to avoid the collusion and now takes pains not to submit papers without a coauthor who has enough clout in the field to deter blatant research theft. He also avoids dealing with editors from institutions in China.

He ended up graduating two years later than planned.


Reading about that makes me very angry. I long to see some justice for your colleague. Lots of good suggestions to guard with arXiv, I hope many young academics coming through read this post and learn from it.

I wish I had something more substantive to add, but, all I can say is I hope your colleague knows that even if the swindlers and plagiarizers take our research, years of our lives, etc, be they in China or wherever the whole wide world else ain't able to take away the heart of a real one. They know the research ain't theirs, and they know they depend on real producers to be able to commit their crimes, or do anything actually useful and it's not the other way around. These plagiarists are parasites, and one day we'll be free of them.


Sadly, I cannot share your optimism.

As long as metrics like impact factor and citation count determine the trajectories of academic careers, plagiarism, collusion, and other forms of academic dishonesty will not go a way.

The fact of the matter is that most academics do not have time to dig into whether someone's work is intellectually dishonest. Being dishonest has a huge payoff as long as you don't cause a scandal.

Anyone who plans to stay in academia should understand that the metagame has changed over the last 50 years, largely because human attention has not scaled with the rate at which academics are exposed to and expected to assimilate new information. There is a larger payoff to exploiting the lack of attention than to earnestly carrying out some meaningful research program. At the very least, do both. By no means do only the latter.


Any metric that becomes gamed ceases to be useful. If we all know the game is to optimize for citation count, then the jig is up. The system will eventually cease to be useful and fail its ultimate goal. Institutions can handle some amount of gaming, but left unchecked, an institution will fail when quality falls enough to prevent any further value creation. Certain journals or periodicals will lose their prestige and, with it, bring down the credibility of the entire system in which they exist.


The folks who have benefited from the system are in power. They are too heavily invested in extending its existence in its current, increasingly corrupt form.

I used to be enamored of academia and its promise. I am now glad there are increasing avenues for success for researchers who choose to leave it, given the limits of their capability to express their ideas honestly with the hope of being recognized for valuable work.


"By no means do not only carry out meaningful research"? Happily, I cannot share your cynicism.

There are metrics and citations being taken, of our dishonest acts. They determine the trajectories of virtually everything relevant to our lives, and can have momentous impacts on the lives of others. I cannot judge any other man or woman, but I know I would be a coward, a fool, and a fraud to diversify any kind of portfolio of mine in this life, by consciously choosing to sprinkle in some exploitation or lies.


Actually completely agree with you about not making yourself a coward, a fool and a fraud.

I should have been clearer. If you want to stay in academia, not debasing yourself comes at a high cost in terms of your career. It is better to leave than to debase yourself.

Academia is a system worth destroying. Participate in its destruction. Leave. There are so many exciting opportunities to continue doing meaningful work outside of universities and research institutes.


Thank you for clarifying to an ideologue-dog. If I'd chosen to remain at university, this post would certainly be a wake-up call to me. Respect.


There certainly are aspects of academia that are overdue for improvement. However, for folks with a CS PhD degree, my guess is that moving out of academia has a high chance of you ending up contributing to surveillance capitalism.

Now there's a system worth destroying.


The other way to mitigate this is to put your work on ArXiv before submitting it to conferences/journals, which is becoming more and more acceptable in at least some CS sub-fields.


This is becoming more popular in another field (non-CS) I follow, but for different reasons:

Some have been posting their papers to pre-print servers and then skipping straight to commercialization attempts. This is especially concerning in the health and fitness world, where some supplement makers and fitness gurus are uploading documents to pre-print servers to give the illusion of being published authors. Casual observers may not be able to tell the difference between published, peer-reviewed papers and some random document uploaded that has a DOI on a pre-print server.

This doesn’t carry much weight in academia, but it can fool non-academic observers. I’m not sure if or how it will translate to CS papers, but I wouldn’t be surprised if skipping peer review becomes more common as the pace of publishing increases.


> I wouldn’t be surprised if skipping peer review becomes more common as the pace of publishing increases.

This is probably a controversial opinion, but I think this makes sense anyway. Peer review & editing from journals made a lot more sense in the world where physically publishing, printing & distributing papers made out of atoms was expensive and difficult. And where retractions and corrections were near impossible, and where we didn't have systems for tracking reputation.

More and more I imagine research becoming like blogging - where "publication" happens by first putting your work online, and then getting feedback in the public domain. And "journals" are replaced by sites like HN or Mastodon which aggregate content and form focal points for a given community.

It won't be perfect, but neither is the current system. And speaking as a mostly independent researcher, the idea of signing over copyright of my work for the privilege of putting my work on their website is preposterous.

Sunlight is the best disinfectant for the kind of corruption described here.


> More and more I imagine research becoming like blogging - where "publication" happens by first putting your work online, and then getting feedback in the public domain. And "journals" are replaced by sites like HN or Mastodon which aggregate content and form focal points for a given community.

That honestly sounds like it either would be a step backwards, or it would end up re-inventing peer review. I think one of the basic functions of peer review is to provide a basic authoritative quality filter to help make the fire-hose manageable. As someone who lacks the infinite time needed to check everything myself, I find those kinds of filters valuable.


> That honestly sounds like it either would be a step backwards, or it would end up re-inventing peer review.

Peer review is good. Pre publication peer review is bad. Bring back the pre WW2 system. End the enormous waste of reviewer time and ludicrous delay. If it was good enough for Einstein it’s good enough. The only person deciding if it’s good enough to publish should be one editor.


> Peer review is good. Pre publication peer review is bad. Bring back the pre WW2 system. End the enormous waste of reviewer time and ludicrous delay. If it was good enough for Einstein it’s good enough. The only person deciding if it’s good enough to publish should be one editor.

IIRC, the current peer review system was created because academic specialization and the quantity of papers increased mid-century to a point where the pre-WWII system became unworkable. Specialization and volume have continued to increase, and it's hard to see how that makes the old system workable again.


Is it possible to upload to arXiv but keep it private, with the upload date recorded? Then if something like this happens, you can make it public and show that yours was uploaded first. Obviously arXiv itself would need to be a trusted middleman and guarantee that the system isn't being cheated.


Or upload an encrypted version of the file, or just a hash of the figures, ahead of time? Then release the decryption key later, or, when the figure is published, point to the hash that was uploaded publicly long before.


This is exactly what we built at https://assembl.net with blockchain timestamping (and it really worked/works)!!! We had many researchers using it but sadly failed to commercialize.

We called it Assembl Chronos. It’s now available here at https://provenance.cerebrum.com. Please give it a try, I’d love to hear your thoughts :)

(More info: https://www.prnewswire.com/news-releases/assembl-chronos-a-b...)


This would be the ultimate proof, but it would also require the system of justice to care enough to accept the proof. It sounds like they just don't care.


I don't think the system of justice would ever go after that random Chinese scientist, but it may be able to take down the fake arXiv entry, as well as help confirm to journals than your idea came first.


So make them care - tell your representatives, vote, etc. let’s make noise about it and it will happen.


The journal already had documented evidence that the legitimate paper was submitted to the journal before the plagiarism was submitted to arXiv. I don't know why they would trust an arXiv timestamp more than their own.


The ArXiv timestamp/paper is public, so even if the conference doesn't do the right thing, you should still be able to submit it elsewhere.


Can you give some more details? What field was that in? Can you maybe point to the arxiv paper? What was the journal your colleague submitted to?

I'm trying to understand your story because many details are very different from the things I know about scientific publishing.

Your colleague was a grad student who researched and conducted an experiment in a new field without a supervisor (you say there wasn't a coauthor with enough research clout)? This never happens in my field, or in pretty much any technical field I'm aware of.

It then went through the review process and was rejected six months later because a reviewer found (a "copy" of) the paper on arXiv? And the rejection reason was lack of novelty? In many fields arXiv does not count as "already published", in particular if the submission date of the article predates the article showing up on arXiv.

Also, you say the article on arXiv was very obviously copied. So did this result in a plagiarism investigation? While there are many things wrong with the current review process, accusations of plagiarism are typically dealt with very quickly and, in my experience, pretty much always result in the editors-in-chief getting involved. So did this happen? Also, in my experience there is generally much more scepticism toward Chinese authors than Western authors when this happens. The quality of research and publications from China has dramatically increased in the last 5-10 years, though.


The comment you're replying to was quite detailed. Any additional data would identify the paper and therefore the person, transforming OP's comment from an anecdote into a use of HN for griefing purposes. Let's not do that.


No there is clearly not enough detail to assess the story. Apart from the question if it is justified to name the Arxiv paper (I believe pointing out scientific dishonesty publicly is very important), he could have named the field and journal without revealing anything.

I clearly pointed out some gaps in the story that everyone vaguely familiar with scientific publishing would find, that's why I'm asking for clarification.


I don't see any gaps in their story, to be honest.

A supervisor can easily not have "enough clout in the field to deter blatant research theft". If you're a tenured professor from a random university in a random country and are not among the top few names in your field, you probably can and will supervise theses, but good luck convincing an editor from an elite institution to care about you.

And in your last two paragraphs, I think you're being a bit naive... what if the editor in chief is directly involved, or is a friend of the plagiarizer? Whom do you turn to? How do you prove that you did the experiment first if there is no proof but the submission, which is in control of the editor in chief?

I fortunately have never been the victim of such a despicable ploy, but I have seen all sorts of malpractice in journals, including editors that didn't care about blatant plagiarism. I don't think it's a predominant thing, fortunately there are plenty of honest journals and honest editors. But it happens. I'm quite familiar with scientific publishing, having authored a good number of papers, and I find the story perfectly believable.


While I've never had an experience that bad, I've certainly peer reviewed utter garbage from one Chinese professor, who was publishing something like 200 papers a year.

I don't care how many graduate students you have, there is no way all of that is original research. And in my case, it was an obfuscated version of a very well known theory that any freshman would know.


Stated another way: Chinese university journal editors stole a little over two percent of his life.


This is the fundamental flaw with open research and open source. You can't have both open access and working intellectual property protections, especially when there are jurisdictions that don't care about your laws, or are even actively encouraging theft.

It's easy to complain about China, but that's what the mainland Chinese system is designed to do. If they're able to break ours, then that's just survival of the fittest. Adapt or perish. Who says you have to roll over and simply let them get away with it? Why are our universities collaborating with these "researchers"? Name, shame, and blacklist them.


[flagged]


Any evidence that it came from the lab?


https://www.cnn.com/2021/05/25/politics/wuhan-lab-covid-orig...

US intelligence reports that researchers from the virology lab in Wuhan were hospitalized with an unknown illness in November of 2019, most notably before the "patient #0" was infected.


It seems recently they uncovered evidence that patient 0 never visited any wet markets. Among some other head scratchers. Fauci says it's a coin flip, 50/50 odds it came from a lab.

More research is necessary, but it's worth keeping your mind open to new evidence.


To answer that we need to know what happened to / where is Shi Zhengli?

Are her social media accounts now empty shells run by the state to pretend she is still around?

Her history is spooky. And there is no evidence she is still alive.


Hm, this took a turn. As someone who hasn't heard anything about this, where should I start digging?

Her wikipedia page seems sterile, and searching her name doesn't bring up anything about possible death.


She didn't necessarily die. I wish we knew more about her actions since the start of the pandemic because her role in all this is central.


Cite sources for these claims. Otherwise, this is just unsubstantiated conspiracy mongering.


Here is my favorite roundup on the subject: https://project-evidence.github.io/

In my own amateur opinion, either natural crossover or laboratory escape theories are plausible but we'll almost certainly never be able to prove it either way.



Some say it couldn’t have been designed: https://www.livescience.com/coronavirus-not-human-made-in-la...


Reading between the lines, they started out by “designing” i.e. choosing genetic sequences. But they found out it was more efficient to generate more virulent viruses simply by natural selection in mouse and human respiratory cell cultures.


Similar to how Monsanto chooses herbicide resistant strains of seeds compatible with glyphosate


That wasn’t the question.


The US health industry funded it


The only appropriate response to that is to go scorched earth and try to get not just those who published the copied paper fired, but also the reviewer who leaked it. And then make it very clear to others in the field what the journal let happen.

Also, a DMCA request to arXiv would likely work.


It’s unclear that stealing someone’s scientific ideas is a copyright violation. Though it may depend on exactly what was copied and how.

> In no case does copyright protection for an original work of authorship extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in such work


The ideas themselves aren't but it appears the paper was a blatant copy, image by image, and this IS a copyright violation.


"regardless of the form in which it is described, explained, illustrated, or embodied in such work" - didn't know this either, but apparently even image-by-image copies are allowed.


Scientific ideas are one thing, however if scientific papers don’t have copyright protection then Sci-Hub should be legally fine. So, that doesn’t seem to be the case.


The EU has had the eIDAS law on qualified signatures and timestamps for years, with an official open-source implementation from the EU. You can add such a timestamp to the PDF and keep it on your machine. It's verifiable by any court in the EU because it's "unbreakable". We use it for every document when communicating with officials, and I think more people should do that. You don't need cool blockchains and stuff when you have cryptography.


> Back in grad school a colleague of mine spent nine months on an experiment in a new field and submitted it as a paper to a quality journal. Six months later, the paper was rejected for lack of novelty: One of the reviewers had found a paper with a figure-by-figure duplication of the same experiment -- published on arXiv a week before the rejection decision. Both the managing editor and the author on the arXiv paper were from Chinese universities.

Could you clarify whether the duplicate paper was also submitted to the same journal or elsewhere, and if submitted whether it was accepted?

I'm trying to figure out whether the point was to steal credit, or to spike your colleague's submission.

> We wrote a rebuttal and submitted a complaint to the journal editor, but no justice was forthcoming.

Yeah this is the sort of thing that seems to only ever get resolved if a public stink is made, often on social media these days, because the integrity of both the editor and the journal is being implicated, which ends up with the can of worms[0] being swept under the rug if at all possible.

Of course people are going to keep tripping over the bump in the rug, and your colleague might not even have been the first victim.

[0] Having to check everything else the editor has ever done for similar misconduct, potentially retractions galore, making victims whole, process improvements, etc.


So your colleague submitted their paper before the other paper was posted to arXiv and yet it was still rejected? Crazy!


Yup. That was my reaction at first, too. Definitely taught me a lot about informal trust systems in academia.


But these things can affect real credibility if not superficial credibility.

I wonder if it's worth starting a website where friendly scientists put papers side by side and ask "Was this a rip-off?", whereby at least the scammers get some form of public shaming for as long as the paper exists.

I can just imagine that site coming up in web searches ... how could people not click it?


I wonder if there's already a tool that clusters papers by similarity; otherwise, how do reviewers reject a paper on such grounds? How do they find it? Do they know them all by heart? Maybe we (the community) can help by creating an open platform that scrapes openly available papers and automatically finds and sorts similar ones.
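For a sense of how such a platform might surface near-duplicates, here is a minimal bag-of-words cosine similarity sketch (real systems would use TF-IDF weighting or document embeddings; the abstracts below are made up):

```python
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    """Lowercased word counts for a document."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical abstracts: A and B describe the same experiment, C is unrelated.
papers = {
    "A": "we train a deep network to classify images of cats",
    "B": "a deep network is trained for image classification of cats",
    "C": "survey of medieval trade routes in the baltic region",
}

pairs = [(cosine(tokens(papers[x]), tokens(papers[y])), x, y)
         for x in papers for y in papers if x < y]
for score, x, y in sorted(pairs, reverse=True):
    print(f"{x}-{y}: {score:.2f}")
```

Sorting every pair by similarity and flagging the top few would be the naive core of such a duplicate-finder.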


I’m an academic computer scientist. I submit the things I review through my institution’s cheating-detection system. I have found plagiarized submissions this way.


I was not aware of it, thanks for the info. I followed up on the topic and found the famous ones.


Why can’t academics insert some sort of cryptographic proof of ownership or publication date? Ownership could be done with something like Keybase, and for dates, it could be as simple as a private Git repo that you later make public, or some fancy blockchain solution. But I suppose even then, people could look the other way.


> Back in grad school a colleague of mine spent nine months on an experiment in a new field and submitted it as a paper to a quality journal. Six months later, the paper was rejected for lack of novelty: One of the reviewers had found a paper with a figure-by-figure duplication of the same experiment -- published on arXiv a week before the rejection decision. Both the managing editor and the author on the arXiv paper were from Chinese universities.

I have no experience in this area, and I am probably going to ask a dumb question: why not always submit the paper to arXiv before submitting it to a journal to protect against this kind of theft? Wouldn't that clearly and indisputably establish priority?


Or use other ways to prove you had the work before anybody had the chance to copy it.


That's a good reason to put papers on arXiv before you submit them.


That's why you upload to arXiv the day you submit it: to prevent such blatant research theft.


What happens if people start colluding behind the scenes of arXiv to change their records? Even if the people running it now are totally trustworthy, they won't be in control forever.


Publicly tweet the hash of your PDF, email the hash of your PDF to yourself, upload your PDF to google drive, dropbox, github, and s3. Essentially the whole world would have to collude against you for someone to claim they wrote it before you.
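
For illustration, a minimal Python sketch of the fingerprinting step using only the standard library (the function name is my own, not from any existing tool):

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, suitable for
    publicly timestamping a PDF without revealing its contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large PDFs don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Anyone can later verify priority by rehashing the released PDF and checking it against the digest you posted; the digest itself reveals nothing about the paper.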


Archive.org


> He ended up graduating two years later than planned.

This is so unfair: some cheaters from China got a free research paper to their name, and the legitimate author had to waste two years of his life.


That story doesn't sound very likely to me. If something like that actually happened, for sure your colleague didn't submit to a "quality" journal.


It wasn't Nature Materials, but it was one of the top journals for that branch of materials science. I think we were shooting for IF 5 to 10, but I wasn't involved in the work beyond editing the rebuttal.


IF is a ridiculous measure in the first place. It says nothing about quality. It was once an interesting number from a purely observational point of view, but once journals are actually rated by it, it no longer has any meaning.


Preprint servers for the win!


We actually have a paper at ICML this year on exactly defending against these collusion rings: https://arxiv.org/abs/2102.06020

One critical vulnerability in the current reviewing pipeline is that the reviewer assignment algorithm places too much weight on the bids. Imagine if you bid on only your friend's paper. The assignment system, if they assign you to any paper at all, is highly likely to assign you to your friend's paper. If you register duplicate accounts or if there are enough colluders, the chance of being assigned to that paper is extremely high.

Fortunately, this is also easy to detect because your bid should reflect your expertise, and in this case it doesn't. What we showed in our paper is that you can reliably remove these abnormal bids. It's not a perfect solution, but it helps.
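
This is not the algorithm from the paper, just a toy sketch of the underlying idea it describes: a positive bid that isn't backed by topical similarity between the bidder's own work and the paper is suspicious and can be dropped before assignment.

```python
def flag_abnormal_bids(bids, similarity, threshold=0.2):
    """Flag (reviewer, paper) pairs where a positive bid is not backed
    by expertise. `bids` and `similarity` are dicts keyed by
    (reviewer, paper); similarity is in [0, 1], with the threshold
    chosen here purely for illustration."""
    flagged = []
    for pair, bid in bids.items():
        if bid > 0 and similarity.get(pair, 0.0) < threshold:
            flagged.append(pair)
    return flagged
```

A real system would of course estimate similarity from the reviewer's publication record (e.g. text similarity against the submission), not take it as given.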


Coming from a natural science field we have a very different review process (but we also "only" review up to 100 papers per committee). Essentially we have topical sub-committees with ~10-15 members. The authors choose which sub-committee they submit to and there is typically a rearranging process by the program chairs and subcommittee chairs to check if there are some very obvious wrong-category submissions. I should note that it's typically a disadvantage to submit to the wrong subcommittee, because if members don't really understand the paper they are much more likely to reject. Every committee member reads all the papers (and indicates conflicts if necessary). We then have a committee meeting where all papers are discussed and accept/reject is voted on. In these meetings it does happen that one reviewer picks up a subtle point (or finds e.g. a previous publication) that others have missed, and this can lead to the rejection of even highly scored papers. Having this many eyes and a discussion about the papers definitely helps IMO. The big difference here is that we don't get 10,000 submissions (more like 1,000).

I was actually very surprised that it is possible to register duplicate accounts at those CSE conferences. We get sent a single invite to our work address and need to log into the system using that email. And we are nominated to get onto the committee.


I mentioned elsewhere in the thread, but in other areas of computer science it works in a very similar way to yours.


Coming from the natural sciences, I was really surprised that reviewers get to pick what they review! Despite the deeper problems discussed above, removing this glaring bug in the system would be quite easy. If editors have to choose, the reviewer assignment may be less efficient, but at least editors have to think more about who would be a good reviewer, and mostly they also know the players in their field and who may have conflicts of interest.


Except if you and your friend have the same area of expertise. I'd actually think that's more likely. These fields are super specialized.


Yeah, I think this is likely a pretty big issue with highly specialized fields - there can be an issue with new participants breaking into the field (due to an entrenched old boys' club), along with a difficulty getting enough sample data specific to that corner of academia.

I wonder if we could randomly assign reviewers but allow the reviewers to self-report a level of familiarity on the subject matter in general (ideally in advance) and on the paper topic in particular.
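
One hypothetical way to sketch that (the names and the weighting scheme are mine): draw reviewers at random, weighted by self-reported familiarity, so that no reviewer can deterministically seek out a specific paper.

```python
import random

def assign_reviewers(papers, reviewers, familiarity, per_paper=3, seed=None):
    """Randomly assign reviewers to papers, weighting the draw by
    self-reported familiarity (0 = none, 1 = expert). A small floor
    weight keeps every reviewer eligible, so bids alone can never
    guarantee an assignment."""
    rng = random.Random(seed)
    assignment = {}
    for paper in papers:
        weights = [max(familiarity.get((r, paper), 0.0), 0.01) for r in reviewers]
        chosen = set()
        while len(chosen) < min(per_paper, len(reviewers)):
            chosen.add(rng.choices(reviewers, weights=weights)[0])
        assignment[paper] = sorted(chosen)
    return assignment
```

The point of the randomness is that even a colluder who reports maximal familiarity only raises their odds; they can't force the match the way a deterministic bid-driven assignment can.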


People can also cheat on this self-reported familiarity, right? In specialized areas I don't see a solution; the colluders may well be the only experts in the field, so you have to enlist them no matter what. But from the description this doesn't seem like what's happening.

"The colluders hide conflicts of interest, then bid to review these papers, sometimes from duplicate accounts, in an attempt to be assigned to these papers as reviewers."


I think that by involving randomness before any active self-selection you wouldn't necessarily involve specialty-wide clusters but you'd make it a lot harder to actively seek out the documents you want to review.

It might also help to attack the ability to create duplicate accounts. Given how relatively few professors exist in the world, I'd assume you could put a lot more effort into duplicate-account detection than is being done right now.


Collusion is one of two major problems with modern research in CS. The other one, perhaps even bigger, is its lack of substance and relevance. Most research is meant to fill out resumes with long lists of papers on impressive-sounding venues and bump institutional numbers in order to get more money and promotions. Never mind how naive, irrelevant, inapplicable or unrepresentative that research is.

It's, of course, a very hard problem to solve. It takes a lot of effort to evaluate the real impact of research.


Agree with first paragraph, less with second. I’m pretty experienced in CS. You do some research for six months and we’ll have a line meeting, I’ll know if you’re actually working on things that matter rather than boosting your CV.


Oooh.. this sounds like a great computer science problem.

"How to get an objective rating in the presence of adversaries"

It is probably extensible to generic reviews as well... so things like the Amazon scam. But in contrast to Amazon, conference participants are motivated to review.

I honestly don't see why all participants can't be considered part of the peer review pool and everybody votes. I'd guess you run a risk of being scooped, but maybe a conference should consist of all papers, with the top N being considered worthy of publication. Maybe the remaining could be considered pre-publication... I mean, everything is on arXiv anyway.

So instead of bids you have randomization. Kahneman's latest book talks about this, and it's been making the rounds on NPR, the NYTimes, etc...

https://www.amazon.com/Noise-Human-Judgment-Daniel-Kahneman/...


In many such events all participants are required to be part of the peer review pool.

However, they review a limited number of papers (e.g. 3) - "everybody votes" presumes that everybody has an opinion on the rating of every paper. That does not scale - getting a reasonable opinion about a random paper, i.e. reviewing it, takes significant effort; an event may have 1,000 or 10,000 papers, having every participant review 3 papers is already a significant amount of work, and getting many more "votes" than that for every paper is impractical.

It's unfeasible and even undesirable for everyone to even skim all the submitted papers in their subfield - one big purpose of peer review is to filter papers so that everyone else can focus on reading only a smaller selection of the best papers instead of sifting through everything submitted. The deluge of papers (even a "diarrhea of papers", as it was called in a lecture linked in another comment) is a real problem; I'm a full-time researcher and I still barely have time to read only a fraction of what's getting written.


In theory you could probably do something like have three runoff rounds, such that low-scoring papers are eliminated before people do their second review.


I disagree plus meta-moderation might help. Then again we see voting rings in HN... but a conference has an entrance fee so maybe that would limit it.


Reinforcement Learning with Game Theory - precisely what Littman, the author of the article, specializes in.


All of those solutions assume that an objective rating exists. There might just not be one.


Bingo...we have a winner.

This issue reeks of the rank smell of base politics and in-/out-group dynamics, and humans have been fighting these issues, in the abstract, since the time the Egyptians were building the pyramids.

How can there possibly be an "objective rating" when career advancement, peer respect, and big money all are in the mix depending upon results?


It's not just personal interest here, it's that we can't tell what is good research at the outset, it might take years to be able to appreciate it. It's like climbing a mountain when you can't see the path ahead and have no map. It might lead to the top, or might lead to a lesser peak, or you have to pass a chasm.

In other words objectives are deceiving and rating is based on objectives.

Example: the mRNA pioneer being sidelined at her university before her method was famous.

Example 2: Schmidhuber inventing things and being forgotten because data and compute were just too small back then.

It's all about building a diverse collection of stepping stones. Any new discovery might seem useless and we can't tell which are going to matter years later, but we need the diversity to hedge against the unknown.


Indeed. There is a lot of talk in my field equating large variance in reviews with bad reviewing, but sometimes it's just because we are humans.

Take for example a paper that presents a very innovative method, but with subpar results; and another one that presents an incremental improvement on some existing method, but with results that advance the state of the art. Which is better?

Even if you ask knowledgeable, careful and honest reviewers, you will get contradictory responses, because it's highly subjective whether you rate originality as more important than results or vice versa (and other factors, like whether you think the first method can eventually be improved to be useful or not, which is often just an educated guess). I see this happening all the time, and I don't think it's something that can be "fixed", it's just how humans work.


This is not a CS problem (unless everything is a CS problem) but a very well known market design problem

https://en.m.wikipedia.org/wiki/Collusion


I believe they were thinking of it as a consensus problem, where parties need to agree on an objective evaluation in the presence of adversaries eg. authors, people with very similar/identical publications, plagiarists.


Too many papers and reviewing is a thankless job (there is no personal upside from review), and there would be a conflict of interest.


It is very interesting to me that people can look at this, the replication crisis, and Sokal Squared, and not see that there is a fundamental flaw in the current model of academia as a for-profit publish-or-perish warzone, and instead declare some disciplines are somehow more bloody.


In CS basically everything is on arXiv, so the ideas and information are out there. The conference publication gives exposure and some credentialism, but the idea is already out there to evaluate.


The article mentions a student taking his own life. I remember this was huge news a couple years ago at ISCA '19 and something that really shook my decision to pursue academia.

After that event, SIGARCH launched an investigation. After a couple years, here were the results of that investigation.

https://www.sigarch.org/other-announcements/isca-19-joint-in...

Worth noting is that the investigation initially found __no__ misconduct. Imagine that: a student kills himself, and you conclude it was the victim's fault, not the environment that drove him to it.

It was only after this post [1] emerged that they relaunched the investigation.

[1] https://huixiangvoice.medium.com/evidence-put-doubts-on-the-...


That is very good work from the ACM. They don't whitewash anything, in fact they even keep the option open to re-assess their position should further details come to light. Impressed.


From the medium article

> It should have been unnecessary that we expose these evidence and challenge the result of the investigation, if the committee can drive a responsible, transparent and thorough investigation

An interesting question: did the ACM require the follow-up Medium article before updating their position? I don't know the details of the case. However, merely updating positions when situations are black and white is one of the easiest scenarios. I wouldn't be impressed if black-and-white situations are assessed as black and white. This doesn't mean that one shouldn't do so; I'd expect those scenarios to be a bare minimum requirement.


Yes, but you can't really fault them for that because without any evidence to go on it would have been a fishing expedition. So compared to some of these other investigations that I'm familiar with I think they did it by the book.


It’s hardly reasonable to ignore the evidence of a student's suicide note making a specific and detailed accusation of academic misconduct. It’s not a fishing expedition when you’re looking for something that you have reason to believe exists.


> They don't whitewash anything, in fact they even keep the option open to re-assess their position should further details come to light.

Did I miss something? All I can gather from the announcement PDF is that “several individuals” have been disciplined to varying degrees, the least severe being just a warning letter. No names named, no other details. Most of the announcement was just reiterating they took the investigation seriously. Kind of hard to determine from the announcement what details have and have not been considered, and whether certain individuals have been punished too lightly, no?

The announcement does mention a confidential report has been submitted for further review. Did anything concrete ever come out?

(I suppose it would at least be relatively obvious after a while which individuals are subject to a 15-year ban.)


FYI the author is one of the founding professors of the main ML course at Georgia Tech's online CS masters program:

https://omscs.gatech.edu/cs-7641-machine-learning


I took an online course instructed by those guys and it was great. Worth watching the videos just for the banter between them.


Yeah, they had one of the best remote talks I've ever seen at last year's NeurIPS: https://nips.cc/virtual/2020/public/invited_16166.html


I long for the long-lost time when research was less of an industry. If you read 19th- and early 20th-century research, it comes off as from an alien world. You had to be curious about your subject and smart back then.


I was amazed to read about W.E.B. Du Bois and his scientific work documenting systemic racism back in the 1900s in Philadelphia and showing that it led to disparate health outcomes. There was a show on PBS that referenced it with respect to covid disparities among minorities, showing that Dr. Du Bois was way ahead of his time. In fact, we, as a society, still haven't learned anything more than 100 years later!


Agreed.

You almost forget nowadays that "getting a PhD" or "getting a tenured position" has nothing at all to do with the process of scientific research.


> You had to be curious about your subject and smart back then.

And being landed gentry probably went a long way as well.


You also had to have substantial personal resources to do research, at least in the 19th century. Going back to that system would probably reduce the number of people doing science for the wrong reasons, but it would also strongly reduce the number of people doing science at all.


I mean, it's well within possibility for you to spend 20 years as SWE earning six figures, and then retire early to be an independent researcher.

As long as you are happy to explore a field that doesn't require millions of dollars of equipment and aren't competing for fame, there's nothing stopping you.


I'm looking forward to it.


One of solutions is to let individuals choose who to trust. I can pick to not trust the certain person, or certain publication or conference, and have my personalized scientist ranking recounted.

And of course, I could choose to delegate the trust, and to “follow” someone, which would mean to incorporate their rankings, especially in areas where I don’t orient that much.

Do you think this would work?
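
A toy sketch of what such personalized, delegated rankings could look like (all names and the combination rule are made up for illustration): your own explicit trust scores always win, and the scores of people you follow are blended in, weighted by how much you trust each of them.

```python
def personal_ranking(my_scores, follows, global_scores):
    """Combine my explicit trust scores with those of people I follow.
    my_scores: {scientist: score}; follows: {person: weight};
    global_scores: {person: {scientist: score}}.
    Direct opinions override delegated ones."""
    combined = {}
    for person, weight in follows.items():
        for sci, score in global_scores.get(person, {}).items():
            total, w = combined.get(sci, (0.0, 0.0))
            combined[sci] = (total + weight * score, w + weight)
    ranking = {sci: total / w for sci, (total, w) in combined.items() if w > 0}
    ranking.update(my_scores)  # my own scores take precedence
    return ranking
```

A real system would have to handle transitive delegation (the people you follow also follow others), which is where the hard questions about Sybil attacks and formalized collusion come in.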


I think the delegation of trust is already what is kind of happening in terms of academics trusting journals.

I do agree that there's probably some cleverer solution on a personal level, but I think the journal system exists as a kind of guard against untrusted actors, and yet fails.


Or, just have real repercussions for unethical behavior such as this. If they've identified these people, revoke reviewer status for them. If it was egregious and they have good evidence, publicly announce the revocation. If getting published has a positive benefit on a career, public information about past unethical behavior with regard to publishing and research should hopefully have a negative one.


> One of solutions is to let individuals choose who to trust. I can pick to not trust the certain person, or certain publication or conference, and have my personalized scientist ranking recounted.

Are you asserting a 'trust' whitelist, a 'distrust' grey or blacklist, or some combination?

Are assertions meant to be linked to or backed by evidence?

Are these assertions publicly visible?

Is there a limit to the number of assertions that can be made?

I think you can see how such a system might devolve into formalized collusion.

> And of course, I could choose to delegate the trust, and to “follow” someone

In the absence of explicit delegation or following, is trust/distrust intended to be transitive at all (ala PageRank or Advogato WoT)?


Isn’t this already how it works? The current state represents everyone doing what you describe... enough people chose to delegate trust to the publications that everyone is fighting to get into, and here we are.


Yes, I think this would work, and am working on it. Message me if interested.


Hey Mike, I'd be interested in being involved too. Ping me, you have my email.


While this practice is certainly bad, it seems like an inevitable consequence of rating professional development based on publication in specific journals. Perhaps if they were evaluated based on the feedback of their manager and coworkers, as in other fields, this would be less of an issue.


I'm looking forward to when academia actually tumbles down. People like James Burke have been awaiting this for ages.


> People like James Burke have been awaiting this for ages.

I would like to know more about this. Can you point to any interviews or other materials that shed light on Burke's thoughts on academia?


The conclusion of Connections season 1 is pretty much that knowledge needs to be made more freely available. He has also given a dozen long-winded talks on the topic, e.g. https://www.youtube.com/watch?v=gvIy52kX-uU There is also this more concise interview: https://www.youtube.com/watch?v=q3XoLycc9io


It's sad to hear that. Other research areas like medicine, pharmacy or history probably have the same problem, but nobody is looking for it yet. My guess is, the more money there is to be made or raised, the higher the chances of nefarious practices.


One side comment: It's true that the focus on publications has made the # of papers go through the roof. On the other hand, I can quickly sift through papers using google scholar, whereas I never could before. For any fact I'm looking into, reams have been written about it. This is, to my experience, a huge boon to writing papers and doing research quickly.

This has nothing to do with collusion, but some in this thread are saying that just the number of papers is indicative of a problem.


Very sad to hear about this. The top theory conferences are much smaller. Reviewers are usually invited to participate by a committee member who has met them personally or at least knows their body of work. If a low-quality paper was accepted, many people would notice. There are downsides, but the upside is that these kinds of collusion tactics wouldn't have a chance.


One way forward could be to lower the bar for publications.

Once it's no longer about being in the esteemed and scarce "10%", they won't bother because they don't need to. Imagine a process where the only criteria are technical soundness and novelty, and as long as minimal standards are met, it's a "go". Call it the "ArXiv + quality check" model.

Neither formal acceptance for publication nor citation numbers truly mark scientific excellence; perhaps winning a "test of time" award does, or appearing in a textbook 10 years later.


The same cartel in each sub field follows you from conference to conference too. I’ve seen often that these people bid for the papers in a particular subfield and get assigned to the paper that they rejected in the previous conference. I’ve seen two reviewers give the exact same review at two different conferences even though the paper had incorporated the changes from the previous one. Thankfully the AC was receptive to the concerns raised by the author and the paper was accepted.


This isn't surprising, and it isn't limited to CS research. I've known people who got papers in different fields because of a similar approach.

It's also noteworthy that Chinese researchers seem to engage in this quite a bit, either due to a culture that does not forbid cheating (e.g. see how they pass the GRE or other standardized tests) or simply because there are a lot of them.


Why would exposing the names of the reviewers/conferences do more harm than good? We want to discourage such behavior, don't we?


Because of the strong tendency to scapegoat the specific people named, drive them out of academia, and then celebrate victory while things continue in exactly the same way. (Ok, not exactly -- it improves for a while, people get sneakier, and then it continues in exactly the same way.)

Chipping off the tip of an iceberg isn't a good long term strategy.


Why wouldn't it be a good thing for academia to drive out unethical actors?


Too much focus on the individuals and not enough focus on the systematic problem, at least according to the author.


The topic is mechanisms being abused. Can the compensating mechanism you're proposing also be abused?


HN is so funny - here is a thread where I tried to articulate how incredibly broken the academic system is, and tons of HN academics claimed it was perfect and downvoted me https://news.ycombinator.com/item?id=27242576

I guess the system is inherently broken after all?


Academic cliques are so old that they’ve spawned their own subfields of meta-science just to analyze them.


https://en.wikipedia.org/wiki/Sayre%27s_law

"Academic politics is the most vicious and bitter form of politics, because the stakes are so low."


I wonder how many other conferences have this same collusion going on. I remember reading a similar story years ago for anthropology/sociology/pedagogy (cannot remember which)


Serious question: why are paper authors allowed to be reviewers at all? That seems like the main logistical issue in this review process.


Why are paper authors able to also act as reviewers? This seems to be a big part of the problem. Is there a lack of topical experts?


Yes. A large fraction of the experts are submitting to the same most-prestigious conferences every year.



Also known as the Lance Armstrong principle.


I just got a 500 response from this link.



We need a blockchain for peer reviews!


There's a 'hot' subculture in our society that says 'if you ain't cheating, you ain't trying', that refers to lies as 'hustle', that rewards and embraces deception as mere aggressiveness and boldness, as a norm for life and business and even a celebration of human nature - as if the worst elements of human nature define us any more than the best, as if we don't have that choice (at least, that's how I am trying to articulate it).

It has predictable results. Where are we going to get reliable research, or anything else, if we can't trust each other? Trust is an incredible business tool - highly efficient when you can take risks, be vulnerable, and not have to worry about the other person. Trust is an incredible tool for personal relationships too, for the same reasons, and because if you can't trust someone and can't be vulnerable, you have a very limited relationship.


> 'if you ain't cheating, you ain't trying'

I'm a US expat, and escaping this culture is one of the things that's made me happiest - I tend to call it the bullshit culture, because my favorite example is: writing a good paper for class is admired, but what's really praised is writing a paper that gets good marks without ever having read the subject matter. Being able to spin lies about a topic you have no understanding of, and turn that into a marketable skill, is a dark portent for the future of America. I think it's always been somewhat present, but since emerging strongly out of the business world in the eighties it's gained a lot of steam.

We are a society that can benefit from cooperation where everyone gets a fair slice of the pie, but that society is eroded if we praise, rather than shame, those people who betray societal trust and cheat the system.


Is this really an American thing?

At least in academia, American universities (and Western universities in general) have had a good track record overall with academic integrity, and it has set them apart. This seems to have degraded recently, perhaps because of the increasing pressure of the “publish or perish” system (or dozens of other potential causes).

We do celebrate people who excel academically seemingly effortlessly. But we don’t celebrate bullshit artists so much in school. In business, and particularly tech, it’s another story.


Are you in academia? I am, and the state of publishing is quite horrific. I simply don't read any papers, at least in my field, unless I'm specifically interested in some detail during my research, am handed something by a colleague, or am asked to do a peer review.

This is a bit of an open secret, but quite widely researchers don't really trust articles anymore, if they ever did. Maybe some plot or dataset may give some insight, and maybe some discussion has worthy information to ponder. But mostly they're just ads to put in yet another funding application.

Most articles are just churned out to get some lines on a CV or to look good on some metric. Publish or perish has turned into full-on bullshit or perish. The whole peer-review system (which is only around 50 years old anyway) is on the verge of grinding to a halt due to the stupendous volume of hastily hacked-together manuscripts.

I think many are still sort of hoping that this will somehow sort itself out. But the collapse in quality after the explosion of electronic journals, the consolidation of publishing houses, and an overall structure that doesn't really care what is actually in the papers doesn't give much realistic hope.

It should be noted that there's sort of a "parallel reality" in academia behind the publication show. The ethos for academic integrity is still quite strong, teaching tends to be valued by the community (but not by the system) and face-to-face discussions can be of very high quality. But the signal-to-noise is so low in publishing that it's not really worth following.

We really need some new arrangement so that we don't drown in all this bullshit. Word of mouth, open data repos, conferences, and just blogging and pushing stuff to git repos are probably most of what's needed. The publishing structure is becoming just plain unnecessary bureaucracy.


Is scale part of the issue? A lot of academic/publishing culture and norms were established when it was considerably smaller.. number of people, not just number of papers.

SEO is a kind of explicit analogy. Google pagerank was modelled on academic publishing, and it worked until it went live. From that point, links started to decrease as a quality signal.. spam. Publish or perish is a similar sort of dynamic.

Honestly, I think most legible systems for determining merit have these sorts of issues. If advancement, accolades, grants or suchlike are determined by a formal system, whatever that system uses as a signal or metric becomes corrupted. Hence why word of mouth, open data repos, conferences, and just blogging and pushing stuff to git repos do work. It's informal.


Yes, I'm becoming quite convinced that in general it's stupid to meter almost anything. Why can't we just start giving social repercussions to bullshit, so we can trust that people aren't scamming everybody all the time? In general, competition is mostly just a waste of everybody's time, and in the end it's usually easiest to win by cheating.

A sort of reputation system is in place in almost all peer-to-peer societies, it tends to form automatically. I don't think we really need any of this weird mess of a system.

We have Wikipedia, we have open source, we have OSM, we have all sort of things that should be "impossible" given the dismal perception people have of other people. This perception is just plain wrong and really harmful.


The alternative to "metering" is tolerating "waste." One reason tenure declined, for example, was checked-out professors. Perhaps that's a price worth paying. The other half of that coin is brilliant people with full freedom to pursue science unencumbered by bullshit.

It's a hard sell though. The cost of metering is subtle. The do-nothing tenured professor is visible.

Wikipedia, OSS, etc. really are the shining beacons - existence proofs for something better. Someone needs to write The Cathedral and the Bazaar, but in non-geekish.


This is what annoys me as well, and I find it stems from trying to force the profit/salary motive into academia. Gladly, it seems that even if the structure is put there, most people in academia don't care much about the money per se. Some care for status and prestige, but salaries don't get you those in this community.

Perhaps surprisingly to some, many in academia would just like to research and teach on a quite modest salary and not have to think about money at all. E.g. I would gladly, with no hesitation, take a €2000/month tenure and keep on doing what I'm doing, just more efficiently for everybody. I've been trying to pitch this idea to the funders here in Finland, but to no avail; they simply don't care whether the funding system is useful for the academic community or humanity. They're focused on playing the same old (maybe 10 years or so here) application lottery that's not only a waste of time, but corrupts the whole community and even the very content of thinking in academia.

"Money" in academia is really abstract as well, and when discussed its not salary, but funding for projects or students or such. And because the funding structure is so bizarre and convoluted you just see big numbers with currency signs flowing everywhere, but this doesn't seem to have much to do with anything concrete happening around.

If academia becomes a place where you can get rich, the system will be in just years corrupted into some bizarre thing where advertisers advertise to each other for the sake of advertising.

Luckily cats can't be herded.


>> Luckily cats can't be herded

Some of it is intentional "motive hacking." As you say, prestige, research funding and the like are as operative as salary, or more so.

Some of it is unintentional. Before publish or perish, publishing volume probably was a signal for something. I doubt it was ever a signal for high quality research, but low (or no) volume may have been a signal for low quality. Also, formal decision making bodies (like grant makers or tenure committees) tend to gravitate to quantitative, legible metrics.

Whatever the reason initially, publishing volume became a hugely important thing with impacts on many aspects of research.

At the same time, in CS especially, the number of researchers has also ballooned. That's a whole other strain on a system of, at core, knowledge dissemination.


Yes, I worked in academia up until recently, and keep up with the literature.

My comment was on the historical status of academia in the US as a whole (think last 120 years), not just the current state globally.

You’re lamenting the quality of academic publishing in particular. The US now publishes less than 17% of science and engineering papers, but its papers are often the most highly cited. So yes, there has been a huge increase in the number of papers, and number of low-quality papers, but this isn’t necessarily being driven by the US, as the original comment would have implied.

You claim researchers don’t really trust articles anymore, and I agree that it takes a lot more work to filter out the noise now; I’m also less optimistic that authors are presenting an honest, objective appraisal of their results. But significant research is still happening, and academic publishing is still the primary way that information is disseminated. People seem to rely more on name recognition (author, school, journal) now. It probably varies field by field. I’m in a field where results are often proof-based, and that tends to be harder to fake.


I don't consider countries much. There are definitely some differences per country, but I think the variation even within the same university is so large that it's hard to infer much from country of origin. But I acknowledge that I ignored that part of your comment in my reply and maybe left a wrong impression. I meant no implication that this was somehow US-driven.

Internationalism is so ingrained in the academic culture (at least in fields I'm familiar with) that it doesn't even really register what country somebody's from or is working in. There are definitely some differences, especially in the more "overt" parts of the culture (hats and robes and different titles etc.), but these are of very little significance for anything but some ceremonies.

My working experience is from Finland, Sweden and UK, but in academia people come and go between countries very frequently so colleagues tend to be from all over.

There are at least some stereotypes that some countries are more prone to e.g. citation rings, but I don't find that very relevant, as I think the whole system is quite broken and the publishing forums (at least the English-language ones) are typically not country-specific at all. Probably something like this happens in more or less any country.


> This is a bit of a public secret, but quite widely researchers don't really trust articles anymore, if they ever did. Maybe some plot or dataset may give some insight and maybe some discussion has worthy information to ponder on. But mostly they're just some ads to put in a yet another funding application.

> It should be noted that there's sort of a "parallel reality" in academia behind the publication show.

So what should be the guidelines for someone who is not a researcher but an engineer, and who hopes to stay informed by reading relevant papers from a specific field? (You know, the folks who should apply some of that in practice.)


Depends on the field and purpose. First of all, academic papers tend to be quite hard to approach if you aren't in the field, even if the quality of the paper is good. Articles almost necessarily cater to a very specific audience, and lots of background is assumed almost by necessity. Also, papers are not usually read linearly; researchers learn to get the gist of a paper in just a few glances if it's close to their own field, and sort of hop around to see if there's something "unexpected".

Individual papers also tend to focus on one very specific problem at a time. This is typically related to some larger ongoing "debate" and can be difficult to see if one's not familiar with the larger issue. Conclusions especially tend to carry quite heavy implied assumptions that are just generally accepted in the field.

I "stay informed" mostly by face-to-face discussions and emails and such. I don't read many papers myself, but many of my colleagues do and I just hear from them, or ask them if there is new stuff around related to something I'm pondering.

To get an overall view of the "state of the art" I'd recommend starting with master's or doctoral theses. These typically require a more elaborate presentation of the background, and it's typically put in more readable terms with fewer assumptions about the reader's background knowledge.

In some fields review articles are a good starting point as well, and they tend to briefly sum up the required background, but my understanding is that some fields don't do those much.

If you read "random" articles, I'd do a quick smell test before digging in. See if code is available; ignore papers with clear hype in the abstract off-hand. You can also "navigate" the field by following citations, although this can be technically annoying, as the publishing format is still tailored towards print even though very few journals are actually printed anymore. If you hit a paywall, try sci-hub or just move on to the next one, unless you're looking for something really specific.

If you have something more specific in mind, just email or call or go talk to some researcher who looks to be doing something related to what you are looking for. Researchers tend to be quite eager to answer questions from the public about their work, and it's seen as sort of a public-service duty as well. This depends on the researcher quite a bit, though. A good starting point might be somebody a bit "lower on the ladder", maybe a postdoc or a PhD student (this depends on the country as well). Professors tend to be busier and actually may not be that up to date with their field (especially at a technically detailed level), as they spend most of their time on administration and the funding rat race.

Depending on the country, you can just attend lectures too. At least in Finland, university lectures are public by law (with some restrictions on e.g. practical lab stuff). You can see if the lecturer doesn't seem too busy after the lecture and just go and ask.

You can also just try going to conferences. They usually have a fee in theory, but I don't think you'll be turned away if you just browse around the posters, especially at a smaller one. The fees are just sort of a scam (a long and sad story), and the researchers organizing the thing usually don't care about the fees at all.

For some fields there are good YouTube channels that provide summaries to get you started. E.g. Two Minute Papers is good for machine learning/machine vision/"AI"/etc. related stuff: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg For computer graphics, the SIGGRAPH "video papers" are really nice and even entertaining: https://kesen.realtimerendering.com/sig2021.html

It's hard to give more specific tips for such a broad question. If you have a field or topic in mind, I could maybe give something more concrete.


I assume this varies field by field. In my field (computer security and cryptography) we don’t have such an antagonistic view of research papers by our colleagues.


I'm not familiar with those fields, but yes, this varies a lot by field. I'd assume cryptography at least is more math-based (at least the theoretical stuff), and the math scene is quite different from empirical sciences or engineering.

The antagonistic view is not just towards my colleagues; I'm not particularly proud of my own papers either. I find publishing more of a nuisance to "pay the bills", and a lot of my research goes unpublished (at least in journals) due to all the, IMHO, unnecessary hassle involved. Just a blog or something would be a lot nicer and would probably communicate the work better, and would ease the pretension of objectivity, which I find mostly causes wrong impressions and makes writing really a chore.


> At least in academia, American universities (and Western universities in general) have had a good track record overall with academic integrity, and it has set them apart.

The difference is that these collusion rings or bogus studies are discussed and exposed publicly. It's not the case in Asia.


> we don’t celebrate bullshit artists so much in school.

This was not my experience of school in the US. Just as one example, two words: Cliff's Notes.


American here. We actually hate this shit.


I don't think it's nearly as prevalent in Academia. I agree that that sort of deception is quickly punished in Academia if it's discovered and publicized, but I think it still is viewed positively in private contexts. I think it's more of a cultural issue in general.

You can read a bunch of articles on Bernie Madoff, Martin Shkreli and Billy McFarland that romanticize the cunning with which these folks exploited others. Most pieces on them present an overall negative tone but often feature some pretty glowing admiration of them. Let's also not forget that tax evasion by Trump was repeatedly praised as him beating the system; from what I've observed, that's a more common viewpoint (especially when it comes to taxes) than the view that those individuals are failing to pay their fair share. Trump's a complicated example due to all the political baggage around him, so maybe just look at companies like Apple, Google and Facebook: they regularly offshore large portions of their profits, and I really doubt the people working to those ends feel any shame; it's more likely a "beating the system" motivation.


> I tend to call it the bullshit culture because my favorite example is... writing a good paper for class is admired - but what's really praised is writing a paper that gets good marks without ever having read the subject matter.

I don't think this is an American thing. In fact, I've never seen this praised anywhere outside of maybe a few people back in High School.

I worked for a company that expanded rapidly with distributed offices all over the world. One of the growing pains we had was that the managers from certain countries, America included, were very trusting by default. This opened the door to a lot of manipulation from employees in certain other countries (which I'm deliberately not going to name) where getting away with a lie was more or less considered acceptable as long as you weren't caught.


In my time in academia, 20 years ago, cheating specifically was much more common among students (and researchers) from what I would guess were your certain other countries. An example of general corruption, I think.


> I'm a US expat and escaping this culture is one of the things that's made me happiest...

Really? Cause I'm an American who has lived, studied, and worked overseas. Let's just say it's not American (or generally Western) coworkers and classmates who are NOTORIOUS for cheating.

And I think many of us here who've attended "diverse" universities or work for companies with "multicultural" staff have a pretty damn good idea which cultures and nationalities are more likely to be cheating.


Funny that you say that, because in my country there is also somewhat of a culture of praising rogues and cheaters, and I had a professor who would say (when talking about cheating by copying in exams, I think) that he spent some years in America, and there no one would ever do such things, because there was an honor system and no one would ever think about doing such a thing.

Not that I ever believed him much about that (I do think he believed what he said...).


"Being able to spin lies about a topic you've no understanding of and turn that into a marketable skill is a dark portent for the future of America."

At once, such a great, and terrible, sentence.


Where did you move to?


Oh, Canada - the take-home is lower but the healthcare is nice. I'm now even a dual citizen!


It’s just a general breakdown of the rule of law and trust all over our society. When everyone around you is breaking the rules and receiving no repercussions, then following the rules yourself is equivalent to choosing to lose.

We used to have strong institutions that were supposed to help push groups away from choosing the bad corner of the prisoner's dilemma, but they all seem to have degraded. I suppose they could have always been like this and the curtain has just been removed, but I'd argue that the perception that following the rules was the best personal choice is almost as valuable as it being true.


Yes. Here are some stats on the breakdown of trust in society: https://medium.com/@slowerdawn/the-decline-of-trust-in-the-u...

This animated gif makes it clear: https://invisible.college/reputation/declining-trust.gif


Lack of trust is a symptom not the cause. I hate it when I hear politicians and the like bemoan a "lack of trust". I don't want to trust you! I want to hold you to account!


Deliberately conflating cause & effect is a classic politician’s move. It doesn’t even have to be a reversal of the two: it’s probably even more frequent when they cite one part of a positive feedback loop as the unambiguous cause and the other as the clear effect.

The classic example of the latter is the politicians who rant, “if only these people would get married and stay married, they wouldn’t be so poor”, and neglect to consider that impoverished communities create the conditions for rampant single motherhood. The desire to raise children does not magically vanish merely because there are exactly zero worthwhile men in your community who aren’t your father’s age.


Actually trust is both a symptom and a cause. The causality goes both ways.

(1) If you lose trust in someone, you'll be less likely to try to find truth in their statements. Finding truth in someone's statements requires time and attention, and we'll invest that attention in people we trust.

(2) If you find falsehoods in their statements, you will lose trust in them.

Overall in Psychology, trust is a primary indicator of a relationship, marriage, partnership, or organization succeeding or falling apart. It's both a symptom and a cause of all other factors.


Can be trusted with what? Given how complicated life is, I'm actually impressed with how morally people behave. In that sense, I trust that most people are doing what they can, and that they are trying to do no harm to others. But that doesn't mean I trust people to be very competent. Trusted with what, exactly?


Do you disagree with either the specific operationalization of trust in the article or even the abstract concept of operationalization of trust in general?


I think he is just saying that trusting someone with your wallet and trusting someone with your life are different benchmarks.


In republics without absolute rulers, individuals trust each other to maintain the state out of a sense of civic duty, and not to declare themselves king.

This trust is greatly eroded by calls to eliminate direct taxes on land holders. In America the local property tax is the most important thing holding society together. It provides residents with assurances that regardless of how corrupt the public process for distributing legal tender becomes, that the richest cannot simply buy the entire country and turn the continent into a private estate which their descendants will inherit in perpetuity without paying enormous taxes to everyone else.

When James Madison organized the assessment of property taxes at the national level during his presidency, it resulted in the 'Era of Good Feelings' and a relative low point of political polarization. In contrast, when state governments introduced sales taxes for the first time to reduce property taxes, it prolonged the Great Depression; and when NYC gave the largest property tax abatements in the country to Donald Trump, it led to the Trump presidency, which increased political polarization.


> It’s just a general breakdown of the rule of law and trust all over our society. When everyone around you is breaking the rules and receiving no repercussions, then following the rules yourself is equivalent to choosing to lose.

More specifically, it's the perception of a breakdown that drives this behavior.

When it comes to cheating, there's a growing perception that "everyone else is doing it" and therefore it's not wrong to play the same games as everyone else.

The current political and social media discourse revolves around ideas that "the system is rigged" combined with a die-hard notion that anyone who disagrees with you is wrong and/or evil. When people are bombarded with these ideas every day on their social media feeds, cheating a little bit to get yourself ahead doesn't feel like cheating. It just feels like leveling the playing field.


> When everyone around you is breaking the rules and receiving no repercussions, then following the rules yourself is equivalent to choosing to lose.

Depends on how you define "lose". And also on what you think the "rules" are.

For example: most drivers in the US routinely exceed the posted speed limit on roads. Is that "violating the rules"? In a legal sense it is, since if a cop catches you he can give you a ticket, you pay a fine, and points go on your driving record. But nobody considers you a bad person for doing it, and I would argue that doing it, if you don't cause an accident, is not harming anyone. However, obeying the speed limit is also not considered a bad thing; all it really means is you get where you're going a bit slower. The tradeoff is yours to make; you don't "lose" by choosing to obey the posted rule.

Now consider an example at the other end of the spectrum: all of the shenanigans with mortgages and the financial system that caused the crash of 2008. Those who "followed the rules" leading up to the crash--for example, those, like my wife and me, who limited our mortgage and the size of house we bought to what we could comfortably afford--did not "lose". Sure, the value of our home went down, but we had no need to sell it then. It was still a house and we could still live in it just fine. Sure, we didn't have a bigger house with more bells and whistles, but we also didn't have to worry about what might happen if the housing market crashed. In other words, we made a tradeoff not much different from the one made by the person who obeys the speed limit and just gets where they're going a bit slower--but still gets there.


Indeed. I wonder if the internet also has something to do with this feeling. In a way, people are looking for justifications for doing things the wrong/illegal way. In the past, the information you got about how the world works came from the people around you. Now you can look up people in the same boat and apply whatever logic they did.


Engagement-optimizing media definitely are a part of it. That means both social media sites and classical news media. Both feed us a heavily warped view of the world - one in which nothing works, and everyone tries to cheat. That's because people being good, following the rules, helping each other and accomplishing things together is not newsworthy, and stands no chance against outrage-inducing stories.


trust emerges in situations with iterated interaction; defecting is most effective in anonymous or discrete situations, since there is less opportunity for punishment.

america has urbanized rapidly in the last half century, at the same time that family formation has broken down and life-long jobs have become a thing of the past. we are atomized and thrust into constant competition. i don't mean to idealize a past that i did not even experience, but there is something to be said for having roots and knowing your neighbors. we arguably have more opportunity at the cost of stable identity -- reputation and trust naturally accrete around the kind of stability we lack.
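
this is just the textbook iterated prisoner's dilemma; here's a toy sketch (my own, with the standard payoff numbers; nothing here is from the thread) of why repetition sustains cooperation while one-shot interaction rewards defection:

```python
# standard prisoner's dilemma payoffs: (my move, their move) -> my payoff
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds):
    """Total payoffs for two strategies over repeated rounds.
    A strategy maps the opponent's previous move (None on round 1) to a move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

# tit-for-tat cooperates first, then mirrors; it punishes defection next round
tit_for_tat = lambda prev: "C" if prev in (None, "C") else "D"
always_defect = lambda prev: "D"

print(play(always_defect, tit_for_tat, 1))    # (5, 0): one shot, cheating wins
print(play(tit_for_tat, tit_for_tat, 100))    # (300, 300): sustained cooperation
print(play(always_defect, tit_for_tat, 100))  # (104, 99): cheating stops paying
```

in a single anonymous encounter the defector walks away with the surplus; once the interaction repeats and punishment is possible, mutual cooperation dominates.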

if you talk to older people, people around my grandparents' age or thereabouts, you will hear that they no longer recognize america, that it is fundamentally different than the culture they grew up in, in terms of values. i find myself thinking about this a lot.


> there is something to be said for having roots and knowing your neighbors

I think knowing your neighbors is overvalued. My evidence is Tokyo and living in transient, largely ethnically homogenous sharehouses — in ethnically homogenous areas — for long periods of time.


There is a huge short vs long term bias here. Constantly burning everyone around you requires a long line of new suckers. But, within a community trust can have great long term benefits.


Probably why we need more and more trustless systems. Hopefully crypto works.


In this case, crypto would mainly serve to hide collusion more effectively.


Sure, but in general, decentralization will make it harder to effectively steer institutions and behavior with collusion.


Academic communities are often quite small; back when I was publishing and serving on committees it often wasn't hard to figure out who the reviewers were, because a paper proposing a revolutionary new way to do X would be sent to top world experts on X. In those cases, the fact that you could sometimes figure out who was who limited possible bad behavior, because clearly unfair reviews would damage the reviewer's reputation.


Not sure I follow. It sounds like the experts in your example are able to identify fair reviews based on their merit. This would need to be the case for people to pick out unfair reviews (and therefore deter reviewers from writing bad ones). But if that's the case, anonymity shouldn't incentivize anyone to write bad reviews (since bad reviews would be identifiable and dismissible on their own). If anything, I would think that de-anonymization (implicit in this case, since reviewers are presumably nominally anonymous) would be more likely to empower cult-of-personality effects (or in general, would incentivize looking beyond the merits of a review to get into an influential person's good books), making bad reviews more influential than they otherwise would be.


This has always existed and always will.

In the old days you had to know your place. WASPs smoked cigars and ran things. Those old guys drinking sherry and wearing tweed helped each other out. The Irish were cops, Italians firemen.

In tech it’s pretty obvious to see various constituencies doing dishonest shit help others out.


Refreshing to see someone on the inside who notices.

From the outside, this degradation of values and "social lawlessness" has been apparent for years now, especially since 2016. Really hope this doesn't spill over.

Oh, and good luck mending this.


Someone on the inside of what? I don’t think I have a particularly privileged view or extra knowledge. Rule of law is like poli sci 101, and I can observe the lack of trust just from the general trend of messaging in mainstream media and discussion boards like this.


I interpreted "our society" as "US" and meant inside/outside relative to that. Sorry for the lack of clarity.


YCombinator seems to encourage this. Here's what they say about their dinner events: "Talks are strictly off the record to encourage candor, because the inside story of most startups is more colorful than the one presented later to the public. Because YC has been around so long and we have personal relationships with most of the speakers, they trust that what they say won’t get out and tell us a lot of medium-secret stuff."

Anecdotally, I've heard stories about Zuckerberg confessing/bragging about all sorts of nasty things at these dinners.

Really, this stuff should just be shamed. Sadly, too often calling out bad behavior just gets you called a "hater"...


I've actually just encountered this first fucking hand and it disgusted me. I'm an aspiring founder working on something and met a coworker who had been through YC W17. I wanted to pick his brain on his experience, especially since I've been applying to YC myself.

This coworker told me he and his "startup":

- routinely lied to potential customers on the size of their client list

- misled clients on the depth and completeness of their product

- blatantly broke CA laws to cut cost corners

All of this was done to secure contracts in order to secure more funding. "Always be selling", he said.

He literally fucking said to me that he learned to "be dubious, not deceitful" which is probably one of the most deceitful things I've ever heard.

Made me sick to my stomach and pretty much validated (1) why I never moved to SF in the first place and instead moved to NYC, and (2) how much of a fraud YC has become. Absolute fucking madness.


Lol, they're not even trying to hide it. Do you see what people like Paul Graham tweet? They're pretty open about these nasty things and then they get all surprised when people call them out on it.


Could you please give an example for the public audience? Or what is the worst, ie most appalling, you’ve seen?


Normalization plays a huge role in creating this. There are far too many rules in our society, at all levels, that make no sense and exist seemingly to benefit the rich and powerful. Highly demanded goods and services are prohibited (drugs, gambling, prostitution). Bureaucratic nonsense at every turn. So much schoolwork that it's practically required to cheat to succeed. Testing with barely any resemblance to real-world conditions. Abstract methods that bore students to death.


The role of bureaucracy is particularly relevant and interesting. Joseph Tainter argues in The Collapse of Complex Societies that the diminishing returns of increasing layers of bureaucracy are an important reason for the collapse of societies:

https://en.wikipedia.org/wiki/Joseph_Tainter

Quoting from the book:

“Sociopolitical organizations constantly encounter problems that require increased investment merely to preserve the status quo. This investment comes in such forms as increasing size of bureaucracies, increasing specialization of bureaucracies, cumulative organizational solutions, increasing costs of legitimizing activities, and increasing costs of internal control and external defense. All of these must be borne by levying greater costs on the support population, often to no increased advantage. As the number and costliness of organizational investments increases, the proportion of a society's budget available for investment in future economic growth must decline.”


Bureaucracy means no individual is to blame... but you still have to have a way to blow the whistle, or to vote people out and shake things up.


I think cheating culture requires more than just a lot of arbitrary rules. You also need friends who can help, role models you can see getting ahead while cheating, and so forth.


Those aren't reasons to act dishonestly and abuse other people. In fact, those are reasons to do the opposite - if you find the world so terrible, do something to make it better.

It's up to you and me. Nobody else is coming to save us.


When people are surrounded by rules that make no sense, many start to question all rules. It's not a justification for acting dishonestly, but it is the cause.

One way we can make the world better is by fixing the rules. Getting rid of ones that are unjust, and making the rest more consistent.


>"One way we can make the world better is by fixing the rules. Getting rid of ones that are unjust, and making the rest more consistent."

I once saw a sign on a street: do not do something (I don't remember what), with a reference to a City Bylaw numbered 37-thousand-and-something. That is just one city. Good luck changing this sheer insanity.


I don't think the quantity of laws is really an issue as long as they are being applied fairly and (this is the vague part) society views them as generally just.


>"I don't think the quantity of laws is really an issue"

And I think it is. At this quantity, the quality will definitely suffer. Besides, I did read some bylaws at some point out of curiosity, and without going into details, many of them are outright unjust/deficient/etc. (in my opinion, of course).


"If a law is unjust, a man is not only right to disobey it, he is obligated to do so."

The question, of course, is that people's interpretations of which laws are just and unjust are subject to bias and individual incentives.


> "If a law is unjust, a man is not only right to disobey it, he is obligated to do so."

Incidentally, while I recognize the popularity of this quote, it's fairly ridiculous taken literally for laws which are prohibitory rather than obligatory.

Viewing a prohibition as unjust does not obligate me to violate the prohibition; believing people should be free from government constraint to do something doesn’t require me to do that thing.

“Disregard” or “discount” in place of “disobey” would be more generally valid.


Laws which are inconsistent with promoting the extension of human life can be ruled to be unjust.


> Laws which are inconsistent with promoting the extension of human life can be ruled to be unjust.

So outlawing theft, rape, fraud, kidnapping, false imprisonment, etc. - that's all unjust?


Don’t those laws promote human life? And by life, we can define it to be a “self-moving power”, as Plato defines it in his Laws.


Anytime I see a job description that mentions the word "hustle" (and trust me, there are quite a few in tech), I immediately discard it... but this description of the culture puts it more accurately than I could have put into words. I think I shall bring it up next time I see a job description like this and let them know that is the impression they are giving.


Even though I share your cynicism, 'a hustle' is very different from 'having hustle', i.e. the noun means something different from the attribute.

Having 'hustle' is actually important at a startup; it's an essential aspect of it. I'd argue a 'hacker' has a kind of hustle.

I'm pretty suspicious of these things as well, but I've also come to believe in my many years that a bit of koolaid is fine as long as it has self awareness.

I wish Elon would stop posting all the stupid things he does, but I think that's what you get with 'all the other stuff'.

If Elon were fully polite, conscientious, a 'good listener', I'm not sure Tesla would exist. So we pay for the existence of Tesla by accepting that he's going to do dumb tweets about cryptocoins.


> If Elon were fully polite, conscientious, a 'good listener' I'm not sure Tesla would exist.

I think that's a BS excuse for bad behavior. How would being polite and conscientious harm Tesla? It's simply that he has power and low standards for his behavior.

I know plenty of successful people who do what Musk fails to do.


"I know plenty of successful people who do what Musk fails to do. "

Except you don't know any successful people who've remotely done what he's done.

Talking about 'Dogecoin' is a little unseemly, but it's nowhere in the realm of 'toxic' or 'bad acting'.

Without massive public support and sympathy, Tesla wouldn't exist; it's a movement as much as anything, and so you need a kind of showman.

His appearances on SNL etc. are part of that public drama that keeps Tesla stock going with enough support to keep the legitimacy of the dream alive.

I'm seriously doubtful that a quiet, unassuming person would have been able to do most of that.

Expressive, bombastic characters will, by virtue of the volume of their actions, sometimes creep up to the line. It's normal. There's nothing wrong with Elon; he's just a little cheesy and pushes too hard on some things.


Unfortunately it is not even a subculture. It is often openly touted in mainstream culture. Steve Jobs is often brought up as a hero of that type of thinking.

You are absolutely right that trust and reliability are very valuable. Societies with high trust tend to be richer and much more productive than societies overridden by cheating and corruption.


>Steve Jobs is often brought up as a hero of that type of thinking.

Like the anecdote that he leased a new car every six months so he could avoid registering it with the state.


This is also often necessitated by brutal evaluation systems that incorporate stack ranking and up or out in their various forms.

This almost necessitates some cheating to survive and the only people left are those who survived this system and hence the culture slowly rots.


Don't forget the big one, "no one knows what they're doing anyway": a veteran structural engineer has some gaps in his understanding, so that's no different from me, so I should have no doubts about launching a skyscraper startup.


In case others wonder or worry, as I do, that few care about the issues in my parent comment, I'll take the unusual step of reporting that it has more upvotes than much of the front page - far more than I've ever seen.

(I'm trying to respect the HN tradition/guideline of not talking about votes. I don't really care about Internet points; I think the implied interest in the issue is worthwhile and applicable in this case.)


> There's a 'hot' subculture in our society that says 'if you ain't cheating, you ain't trying', that refers to lies as 'hustle', that rewards and embraces deception as just aggressiveness and boldness, as a norm for life and business and even a celebration of human nature - as if the worst elements of human nature define us any more than the best, as if we don't have that choice (at least, that's how I am trying to articulate it).

I'm not sure that you intended it this way, but this reads as very oblique (i.e., "wink and nudge"). Which subculture are you referring to, and what particular relationship do you think they have to research in Computer Science?


> I'm not sure that you intended it this way, but this reads as very oblique (i.e., "wink and nudge"). Which subculture are you referring to, and what particular relationship do you think they have to research in Computer Science?

I am referring to no particular subculture. Lots of people around me embrace it, including from all over the political spectrum (if that's what you are thinking).

I think the broader society sets the norms for computer science, as with everything else. For example, when star athletes like Barry Bonds, or entire teams like the Houston Astros, or much of college sports, cheat with few repercussions (and in the past, that wasn't the case - players were banned and school sports programs were basically shut down, etc.), that affects computer science research.


Yeah, honestly, go to any conference in the last decade and you'll see some people who are just... out of place in an academic setting - like, when they were 19 they listened to a podcast that claimed PhDs made XX% more money, so they decided to do that. These people don't care about research, don't care to understand research; they just want to publish, get their degree, and get paychecks from Google/Facebook/Apple. Luckily I've seen a number of these types of people fail to find any high profile jobs after they graduate, so I guess something is still working.


Could apply to research, application development, web development, credential management...


Lou Pai is the greatest of all time and every man (or breadwinner) would be a fool to blindly follow any other script in society that is presented to them

Getting a divorce court judge to force you to sell your Enron shares at the top, nuking the regulator’s ability to charge you with insider trading, while you elope with your younger hotter high libido stripper nymph to the mountain you bought?

These are our role models


All the myths and stories of pretty much every culture reinforce these beliefs though. Stories are never about passive people putting in 12 hours a day, day after day, building upon honest work. It's about slick instant gratification, getting ahead, winning through cunning, etc. The only way to really do that in the real world is through luck or cheating.


What myths and stories are you referring to? Hesiod’s Works and Days is opposed to exactly what you suggest. And the Myth of Eden in Judaism - the received tradition of its origin - is a commentary on the need to toil in life.

Both “Athens and Jerusalem” are principled according to honest toil. And look at the results!


George Washington, Gandhi, MLK, Churchill, Lincoln, Jobs, Torvalds, every religious figure, sports figures, artists, etc. ... they all are about "slick instant gratification, getting ahead, winning through cunning"?


Reminds me of Jugaad [0] or Chabuduo mindset in mainland China.

Basically if all it takes is getting citations, then forming a citation ring is "Chabuduo" and you'll only lose face if you are caught (not good enough).

[0] https://en.wikipedia.org/wiki/Jugaad


This kind of collusion is driven far more by perverse incentives than some alleged cultural phenomenon of half-assery people like to attach an incorrect but exotic-sounding foreign phrase to. I can't speak for Jugaad, but 差不多 ("Chabuduo") is not at all appropriate here [1]. Just call it what it is: cheating, collusion and conspiracy.

[1] https://news.ycombinator.com/item?id=27052249. This reminds me of the Japanese buzzword bingo of earlier decades.


Jugaad is very different from this sort of collusion.


From the wiki article it sounds like it just refers to extremely improvised engineering, something akin to "bubble gum and baling wire" or "MacGyvering" something. Something like a kludge, but a bit less pejorative.


like, uh, hacking—hacking together something?


Yes, but "hack" is so overloaded and has so many definitions that using it as a definition itself is supremely unhelpful.


Reap what you sow. We spend our whole lives being told it's perfectly OK that every corporation and person in power behaving like a sociopath is totally fine and this is the culture you get in return.


Academia WAS designed to be an insulated castle from all that, that's why Newton lived in a shitty apartment while getting money from his mom. The institution of research was supposed to be a rich man's game, people who didn't give a shit about practicality, just one upping each other. Once people realized academics could be leveraged to do cool shit like build A-Bombs, it was over.


> Once people realized academics could be leveraged to do cool shit like build A-Bombs, it was over.

Arguably, it's the opposite. Once people realized academics could do this kind of cool shit, they got showered with money and told to do whatever they want. That's how we got the incredible scientific and engineering advances of the second half of the 20th century.

Then the beancounters started asking questions about what the money actually buys, and research quickly turned into another short-term, self-contained, profit-chasing game, starved for resources and only occasionally producing something actually useful.


Nyquist, Bode, Shannon, etc. didn’t need an A bomb to advance their research which is the bedrock of the digital control systems that microcomputers are.


Yes. They also didn't need to spend 90% of their time chasing grant money and publishing papers.

As it is, if our researchers are spending almost all their time thinking about and doing things other than research, what do we expect?

On a tangent, software industry has a bit of similar problem, with the best developers being forced to enter management roles[0] instead of solving technical problems. That is, a developer progresses from doing shoddy work to doing mediocre work and then, just as they start doing high-quality work, they get told to manage a new cohort of juniors doing shoddy work instead. I wonder if that's why so much software is hot garbage these days.

--

[0] - Whether proper ones on management path, or "fake" ones like principal developer, where you get all the managerial responsibilities with none of the authority.


>The institution of research was supposed to be a rich man’s game, people who didn’t give a shit about practicality, just one upping each other.

Thaaaaannnkkk you. It has historically been reserved for a Christian aristocracy, which is practically a world where there is no desperation for calories. Instead, many of the greats had anxiety about their immortality (see Fourier).


Doesn't this make the case that we should be building institutions and systems resistant to this type of cheating?

Why should it be possible at all to game Journals in this way? Particularly in Computer Science journals where people think about edge cases for a living...


> Doesn't this make the case that we should be building institutions and systems resistant to this type of cheating?

Absolutely. I think people are intimidated, demoralized (de-moralized) and where once they believed anything was possible, any social problem could be solved (even those old as history, such as women's rights, human rights, etc.), now they've somehow drunk the wrong Kool Aid, some stuff distributed by Jim Jones.

Time to get to work.


I don't think we can actually solve this on a case by case basis. Other threads in this discussion have highlighted cultural normalization of greed in one of its various forms and I would tend to agree with that angle.

Those systems that can be built to be resistant to greed would definitely benefit from it - but I think it's more of an issue with society at large.


We gotta start somewhere, and certainly the place to start isn't naysaying. What should we do?


Heh. We need to set up a research project that redteam/blueteams various journals and conferences. And bug bounties for disclosing zero-days in peer review processes/practices.

(The redteam/blueteam thing is kinda unethical, so maybe the University of Minnesota should do it...)


2 years ago, a Prof at my old school had a student commit suicide over being pressured to go along with this. You can Google your way to figuring out where and who. Last month the Prof finally resigned. No other repercussions. People think academia is some priesthood; it's not. It's a business like any other, with a law-of-averages-determined number of bad actors.


That incident is the same one referred to in the CACM article and elsewhere throughout this thread.


Ya coming back to this post I saw the other comments. Guilty as charged of not reading the article.


I think the people in Academia think it's a 'Priesthood' and it seems more like a 'Police Union'.


I'm glad he finally lost his position, at least. I remember reading about that last year when the university was still defending him.


Exactly.


I fully support the concerns that the author brings up. It's a big problem and we need to figure out how to address it. At the same time, I can't help but notice that this is coming from the ACM itself. The credibility of ACM has really tanked in my eyes in the last 2 - 3 years. Communications of the ACM is full of articles that belong in a "social engineering magazine" and not in a computer science publication. Last time I was renewing my membership, I had to sign some kind of a pledge "not to harass people". Are you kidding me? What are we, 12? Because of that, while it's a very important issue, I am very distracted thinking about ACM itself. I guess this is a lesson that reputation and credibility are important. Once you lose them, everything you say, even if it's truly good, gets colored in a certain way.


I've discovered that the software industry is made up of some of the most dishonest, insecure, power-hungry people on the face of this godless earth.

Only a tiny percentage of developers seem to actually enjoy coding - Most of them have no interest in it and only see it as a mechanism to acquire money, power and influence.

Disinformation is rampant because contrarians are punished and conformists are rewarded. The rot is deep in the guts of the industry. Those who have the most power and the loudest voices hoard all the attention for themselves and are unwilling to give exposure to any alternative views - Their deep insecurity drives them to surround themselves only with yes-people and to block out all critics; avoiding disagreement at all costs... Downvote, suppress, censor...

Powerful people in this industry need to put aside their insecurity by embracing disagreement, allow themselves to change their minds, and give a voice to contrarian views and ideas; even when it risks hurting their current interests.

Powerful people should seek the truth and try to promote the narratives which make the most sense; not the narratives which happen to benefit them the most. Everyone is free to move their money to match the changing narratives, so why do powerful people invest so much effort in keeping the focus on narratives which only maintain the status quo? To protect their friends? To protect the system? That is immoral - Capitalism was not designed for this kind of arbitrary altruism. For every person you 'help', you hurt 100 others.

As much as people love to bash Elon Musk right now, he should be applauded for constantly trying to adapt to the narratives which make the most sense as opposed to rotting in his own filth and succumbing to tribalism like everyone else.


> the software industry

How many industries have you worked in? This is fairly common in many walks of life. As I get older I'm getting better at identifying the lunatics in charge. Often they are nice people, but also cause massive amounts of chaos, because they have no clue what is going on. No wonder they feel the need to exert micro-control.

Anyway, thanks for the rant. Always interesting to look at the bottom of the barrel for the HN rejects :-)


Apologies, but are you using a bot? This sounds like it was generated using GPT-3.

I realize you might be a human with feelings, but your screed above has a curious structure that looks... off somehow.

Anyway, if you are genuinely angry and wrote the above, I acknowledge your emotions, and don't have anything else to add.


No way. Any decent AI trained on HN would have figured out how to mention climate change, Stephen Wolfram, and urbanism.

Definitely written by either a human or a really bad AI.


The primary target audience for most of my comments is AI for training purposes (unsupervised learning). I've given up on humans.

Most humans don't have enough background knowledge or sufficiently diverse life experience to make sense of this.

I'd have to write a whole book to explain my reasoning behind this statement. I got a lot of my knowledge from HN articles and comments so it should tie in nicely.


Many people are proposing "fixes" for this kind of behavior in the comments.

Fundamentally, there is no way to fix it, because the system is self-disregulating.

In other words, there is no mechanism to bring the academic system back to honesty over time; there is a mechanism to make it more dishonest over time.

This is inherent to government solutions. In modern times, democracy was supposed to be the regulating mechanism of the bureaucracy, but obviously isn't working. The bureaucracy itself certainly is disregulatory. Modern academia is simply an organ of the larger government bureaucracy.


It is probably too late to save Computer Science Research. Efforts in that direction are likely wasted. More important is to keep the contagion from spreading to allied fields. Grants probably should stop immediately. People doing serious work will need to move to another area where they might be able to contribute. People evaluating work in these other areas will need to guard against allowing theirs to be overtaken by the same downward spiral.

In perhaps a generation, a similar specialty might be bootstrapped and begin to take on problems that had been of interest in the old one. What to call the new specialty will be its smallest problem.


Why shouldn't we assume that similar collusion exists in every scientific field?


Oh it does. I'm interdisciplinary. I've been offered the "opportunity" to be part of two citation rings in Economics, and that's from attending a handful of conferences.

Which if one looks at the state of economics right now, is an object lesson on this kind of stuff never ending well.
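
Tangentially, the mutual-citation pattern being described is detectable in principle: a citation ring tends to show up as a small, fully reciprocal clique in the citation graph. Here is a minimal sketch of that idea (the data, names, and size threshold are made up for illustration; real detection would need citation counts, venue data, and statistical baselines):

```python
from itertools import combinations

# Toy citation graph (made-up data): author -> set of authors they cite.
citations = {
    "A": {"B", "C", "X"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "X": {"Y"},
    "Y": {"Z"},
    "Z": {"X"},
}

def mutual_pairs(cites):
    """Author pairs that cite each other (reciprocal edges)."""
    return {
        (a, b)
        for a, b in combinations(sorted(cites), 2)
        if b in cites[a] and a in cites[b]
    }

def suspicious_groups(cites, min_size=3):
    """Greedily grow each reciprocal pair into a group in which every
    member cites every other member. A fully reciprocal group of three
    or more authors is a crude ring signal, nothing more."""
    groups = []
    for a, b in mutual_pairs(cites):
        group = {a, b}
        for c in cites:
            if c not in group and all(
                c in cites[m] and m in cites[c] for m in group
            ):
                group.add(c)
        if len(group) >= min_size and group not in groups:
            groups.append(group)
    return groups

print([sorted(g) for g in suspicious_groups(citations)])  # [['A', 'B', 'C']]
```

Note that A citing X, and the one-way cycle X -> Y -> Z -> X, are left alone: only full pairwise reciprocity is flagged, which is exactly why real rings rotate citations asymmetrically to stay under the radar.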


I guess economists can say that they're "just responding to the incentives of the market".


Without any personal context, this stance appears very "baby meets bathwater". In particular, I'm not sure I see anything about this particular situation that renders it purely a problem of CS research. What makes other fields (presumably) immune or less predisposed to these kinds of issues?

Your position also paints CS in very broad strokes; in my experience, the only commonality between some subfields of computer science is that they use computers. Graphics, hardware architecture, programming languages, networks, and so on are all essentially loosely coupled, with their own organizing communities and directions. Some of these subfields are more closely tied to mathematics or electrical engineering than strictly to other parts of computer science. If there is an incurable "contagion" that afflicts all of these, I must admit I find it hard to believe that this contagion would not prove (if not already be proven) effective beyond the artificial confines of the term "computer science".


> It is probably too late to save Computer Science Research.

Why do you say that? Do you have any experience in the field?



