The real reason (climate) scientists don't want to release their code (jgc.org)
76 points by jgrahamc on Nov 4, 2010 | 101 comments



They fear what would happen to their reputations if it were obvious how the sausage was being made. (Not unique to climate change: my academic AI research produced decent results on some data sets with code of truly abominable quality. Then again, I wasn't asking anybody to bet the economy on my results.)

Plus, if people actually provided code, someone might actually get it into their heads to run it. That can't happen. No, literally, it can't happen. The level of professionalism with regard to source control, documentation, distribution, etc. in most academic labs is insufficient to allow the code to be executed outside of the environment that it actually executes on. If you put a tarball up somewhere, somebody who tries to run it is going to get a compile error because they're missing library foo or running on an architecture which doesn't match the byte lengths hardcoded into the assembly file, and then they're going to email you, and that is going to suck up your time doing "customer support" when you should be doing what academics actually get paid to do: write grant proposals.

This, by the way, means that peer review by necessity consists of checking that you cited the right people, scratched the right backs, and wrote your paper in the style currently in fashion in your discipline, because reproducing calculations or data sets is virtually impossible.


If your process isn't sufficiently documented so that your professional peers could reproduce your results with your data, then your process is unreliable. We berate academics all the time for improper or incomplete documentation of manual experiments. I can't see any reason why source code should be held to a lesser standard.


Well, really it's the algorithm that needs to be published so others can reproduce the results. In principle, publishing code is similar to publishing the assembly instructions for your experimental apparatus, which isn't done either.


I think pretty much everything looks more impressive before you see how it actually works.


Seriously. Relatedly, it is seriously impressive that systems this comprehensively screwed up still seem to converge on producing acceptable work much of the time. Big companies manage to get us all fed and fly us around the world. That scares the bejeesus out of me. I have put my life in the hands of someone who was selected by an HR department (assisted by, even worse, a union).


Convergence of scientific results could be validation of the theory. But it could also be because people are anxious to publish contradictory results.

From Richard Feynman's 1974 Caltech commencement address:

"We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It's a little bit off because he had the incorrect value for the viscosity of air. It's interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher.

Why didn't they discover the new number was higher right away? It's a thing that scientists are ashamed of--this history--because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong--and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We've learned those tricks nowadays, and now we don't have that kind of a disease.

But this long history of learning how to not fool ourselves--of having utter scientific integrity--is, I'm sorry to say, something that we haven't specifically included in any particular course that I know of. We just hope you've caught on by osmosis."


anxious to publish

Should be "anxious not to publish", or "anxious about publishing", I think.


I get your point, but those results were still diverging from the original wrong answer and converging on the right one, just not di/converging as fast as would be ideal.


I don't know about that - I've seen a lot of industrial/manufacturing processes over the years that were damned impressive and had lots of complexities that I had no idea existed.


The only exception that I can think of is GPS (not the receivers, the system). Most people simply know that it involves satellites somehow. That system delights me in its cleverness, effectiveness, and scalability.


Nuclear magnetic resonance, as used to make MRI scans, is pretty damn impressive if you ask me.


The first time I looked at the code for a large 3D game engine, I felt this. It was almost like having someone explain a magic trick to you: you go "oh, so it all just worked because I was looking from this direction, and that detail is something most people don't notice". Very disappointing, and it actually ruined it a bit for me.


I don't buy the point that once source code is published, you have to do all the "customer support" stuff. You don't have any obligation to do customer support, so you can decide when you'd like to reply to a request, or whether to reply at all.

Source code is a very precise way to exhibit the work you have done. It is so precise that even a machine can understand it. I am really happy to see that recently at NIPS more and more people have been publishing their MATLAB code along with their papers.


Oh, you must do "customer support"! Otherwise, they will use your code and report 1) that your code does not work or that 2) their code is better because they did not use your code well.

And for code that is not easy to install (e.g., requires many libraries, frameworks, dev tools, etc.), you need to spend time on documentation, and even then you often need to answer questions like "how can I create an Eclipse plug-in to use your code?" or "lxml failed to install"...


"Otherwise, they will use your code and report 1) that your code does not work or that 2) their code is better because they did not use your code well."

This x 1000! This is a topic near to my heart, as I've actually had people use models & software we provided to them without making an effort to understand the fundamental concepts and theory behind them (or just being incapable of understanding), and then publish papers saying our models are crap (not even proposing better approaches, nor comparing to anything else, just saying 'I put in some data and the results were unrealistic, so this model is bad').


I agree. I've released my code, and there are several examples of people taking it and using it to do stuff that is totally outside its range of reliability because they just ran the code and didn't bother to learn enough about the problem. Then they publish crap and the crap comes back to me because MY code gets a bad rep for creating incorrect results.


Mostly you only need to worry about criticism from within the scientific community. Since your result is published, people with the right background will generally spend the time to figure out how to use your code by themselves. And their criticism is only relevant when it gets published; at that point, you are surely obligated to respond to it. Most of the time, it really isn't relevant that some random person sends you email with an "eclipse plug-in" problem.


"some random people send you email and have such "eclipse plug-in" problem" -> They are not random people, they are Ph.D. students working on their thesis and you can be assured that they will find a way to publish their comparison one way or the other. Some Ph.D. students will go at great length to figure out other's code before asking a question, but some will suck your energy big time and still do a poor job at representing your work. Now compare this to those who do not release their code: they can only be evaluated on their published results and they can spend their time on their next study/research work.

My point is that there is not a lot of incentive to release code: it requires a lot of your time, it is not always clear that granting agencies/employers consider this as being more positive than publications, and it exposes your work to all sorts of unfair replication/comparison.

Now, if research work without released code were never cited, that would be a terrific incentive to release your code :-)

P.S. I speak from experience and I release the code of my most important research work.


"Mostly you only concern about the criticism from within scientific community"

Ha. That doesn't apply to climate scientists. And there was the case of Andy Schlafly of Conservapedia (a creationist lawyer) demanding all sorts of nonsense from a scientist who documented evolution through many, many generations of bacteria.

At its worst, you get ideological cranks ganging up and demanding information, possibly through FOIA requests, pretty much for the sole purpose of using up all the scientist's time responding. Then when they don't respond, and/or it turns out they aren't subject to FOIA requests, the cranks freak out, break into a server, steal emails, and declare a conspiracy.


People with the right background usually don't have the time to really figure out how it works just to get it running. They'll just ask.


"Then again, I wasn't asking anybody to bet the economy on my results."

To be fair, no climate scientist is asking anyone to bet the economy on their results; it's the various interest groups that pressure scientists into giving an answer that is less nuanced than they would have liked, and then blame them when things go wrong. (At least that's true for the 10 or so I know personally, but I'm pretty sure no others do, either.) Actually, this is true for all scientists I know in fields that have political relevance.


Publicly available statements and evidence of past behavior have led me to reject my default charitable supposition that climate scientists and interest groups are disjoint sets.


I've attended climate science conferences and worked for years writing code for and with climate scientists. Down in the trenches, pretty much the only thing I saw was scientists interested in accurate results. I don't, however, claim to have worked with nearly everyone in the field. But I do on occasion feel compelled to defend my former coworkers (I work for myself now).


Well, I would nuance that, in that there are many interest groups who try to sway public opinion by posing as scientists. Like ANH, I write code for and have worked with climate scientists, actual ones who have no real political or financial motivation and just really try to model the world around us as accurately as possible. So I guess it boils down to the definition of 'scientist'. I don't call every quack who gets a book on Lulu or marketer who is paid to write scientific-sounding 'position papers' a scientist.


How is that necessarily a charitable supposition? If a scientist believes in his or her research, it would be positively inhuman of them NOT to advocate for the application of it.


This. Especially in a field where, if the questions are to be taken at face value, the results can be incredibly socially important.


Can you link some examples?


I am optimistic that more codes will become available soon. This subject is dear to the hearts of many scientists and is often discussed -- see this recent HN item:

http://news.ycombinator.com/item?id=1789134


It's a simple matter of incentives.

We all want scientists to share their code because that's the positive-sum action. But individual scientists aren't paid based on how well the scientific community is doing. They're awarded positions, grants and prestige based on their individual performance against other scientists. So scientists worried about their career think in zero-sum terms: "If I publish this source code, will I be pipped to the next paper? Well, I'll publish this other piece to make myself look good, since I'm not following it up, and then I'll collect the citations too."

We can wring our hands about scientists acting in bad faith all we like, but it's obvious we just have to change the incentives. Funding agencies just need to award higher weight to journals that demand source releases, transitioning to only weighting those journals.


In concept you're right, but I want to clear up some terminology. This isn't a "zero-sum game" issue: it's a "prisoner's dilemma" [1]

In the prisoner's dilemma, the parties can work together to yield a common, greater result. But they might not do so because the common solution requires trust; any individual might go for the easy answer that brings himself a return while screwing the others.

[1] https://secure.wikimedia.org/wikipedia/en/wiki/Prisoner%27s_...
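
To make the dilemma concrete, here is a tiny illustrative sketch in Python; the payoff numbers are invented for the example and only their ordering matters:

    # Illustrative prisoner's-dilemma payoffs for "share code" vs "hoard code".
    # The numbers are made up for this sketch; only their ordering matters.
    PAYOFF = {  # (my_choice, their_choice) -> my payoff
        ("share", "share"): 3,  # healthy field, everyone builds on everyone
        ("share", "hoard"): 0,  # I get scooped with my own released code
        ("hoard", "share"): 5,  # I free-ride on their code and keep my edge
        ("hoard", "hoard"): 1,  # the status quo
    }

    for theirs in ("share", "hoard"):
        best = max(("share", "hoard"), key=lambda mine: PAYOFF[(mine, theirs)])
        print(f"if they {theirs}, my best response is to {best}")
    # Prints "hoard" both times: defection dominates individually, even though
    # (share, share) leaves everyone better off than (hoard, hoard).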


I considered this, but if you assume that the scientists don't benefit from science functioning properly, then they're playing a zero sum game with other scientists --- they're simply competing with each other for a fixed pool of resources. That was what I meant.

You could argue that a prisoner's dilemma view of it is more realistic. In this view, the scientists all desperately do want to get it right, but they know they can't because they'll be punished for doing it by their peers, who'll seize the opportunity to get ahead at their expense. This model is plausible too, but it's different from the one I suggested. So it's not actually a terminological difference.


> It's a simple matter of incentives.

Incentives are difficult to get right.


Actually, I have heard one good argument against open-sourcing scientific code [1]. It's not bullet-proof, and it won't apply in all situations, but I think there's a nugget of truth in there.

If I write a paper describing an algorithm (or process, or simulation, etc) and open source my code, someone attempting to reproduce and confirm my work is likely to take my code, run it, and (obviously) find that it agrees with my published results. No confirmation has actually taken place - errors in the code will only confirm errors in the results. Further work may then be based on this code, which will result in compound errors.

If, however, I carefully describe my algorithm and its purpose in my paper, but don't open source the code, anyone who wishes to reproduce my results will have to re-implement my code, based on my description. This is vastly more likely to highlight any bugs in my implementation and will therefore be more effective in confirming or disconfirming my findings.

I'm not sure yet what I think about this argument. It seems to only apply in certain domains and within a limited scope (what if the bug exists in my operating system? Or my math library?) but in relatively simple simulation models, it may have some validity.

What do you think?

[1] From Adrian Thompson, if you're interested: http://www.informatics.sussex.ac.uk/users/adrianth/ade.html


Your argument doesn't seem to apply to most code in the natural sciences. The difference is that their theoretical models are, in most cases, only approximated by code. Yet, in papers, they claim to prove or test their theoretical model by experiments on the approximation.

Note that this is nothing terribly new. Sometimes experiments testing a hypothesis give false positives because something went wrong in the experiment. In that sense there is no real difference between a faulty thermometer and a bug in your code.

Having written this, I think your argument in fact does apply after all. In the natural sciences one would argue that if your buggy code confirms an invalid hypothesis, someone redoing your experiment with the same code would not uncover the problem. Publishing your code invites people to use your faulty thermometer.

Of course I'm assuming that the published paper contains all the necessary details to reproduce the experiment. In the case of, e.g., climate models or modelling the formation of galaxies, that might be a problem because the code _is_ the experiment. Describing what code does is very hard, and it would be easier to just publish it instead.


" Describing what code does is very hard, and it would be easier to just publish it in stead.'

Depends on what level you're describing it at. "It does an FFT" or "it sorts" are pretty clear. It gets hairy if you describe the specific details of the implementation. But the implementation is likely irrelevant, because other scientists can choose any implementation. Even with complex models you ought to be able to piece much of them together out of descriptions at that higher level.
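
As a sketch of what checking at that higher level can look like (Python with numpy, purely illustrative and not from any paper discussed here): if the description just says "this step takes the DFT of the series", anyone can verify their chosen implementation against one built straight from the mathematical definition.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=64)  # stand-in for whatever series the paper analyses

    # The implementation the original authors happened to use.
    X_lib = np.fft.fft(x)

    # An independent implementation written only from the description "take the DFT".
    n = np.arange(len(x))
    W = np.exp(-2j * np.pi * np.outer(n, n) / len(x))
    X_naive = W @ x

    assert np.allclose(X_lib, X_naive)  # the described step, reproduced independently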


We, as taxpayers, are being asked to bet over a trillion dollars on this research. It not only affects the first world economies, but it will be a serious source of problems for third world countries ("you know how we got our standard of living so high? well, you can't do that too"). The refusal to hand over code just leads to more suspicion. This is too political already and secrecy doesn't help. It is bad enough that the basic measurement methods are in dispute.


This can only end if peer-reviewed journals require source code (and if possible datasets) to be made available as well. High-impact journals have the weight to enforce such policies.

It's true that third parties can easily apply the methods to new data. But that is a testament to the method, and references will help build the reputation of the original inventor.

Another concern only addressed in the comments on this blog post is that most scientists do not produce beautiful programs. The reasons are twofold:

- Programs are hacked together as quickly as possible to produce results. Scientists are mostly concerned with testing their theories, and not so much in producing software for public consumption.

- Most scientists are not great programmers.

Consequently, scientists usually do not want to make their source code available.

This situation sucks, given that in many countries taxpayers fund science.


I have an MSc in CS and then did (am still doing) a degree in Physics. I can report that you are right: most scientists are not great programmers. Most scientists just don't care about readability, maintainability, warnings, library dependencies, cross-platform issues, versioning, testing, etc.

I think it'd be perfectly reasonable for society and the gov't to demand that scientists who produce results based on data and data processing make the data and the programs available to their peers and the general public, and also demand that they satisfy some reasonable software engineering expectations. (These will be set by their own peers anyway.)

It is accepted by scientists that it is important to communicate research findings in the form of well-written, human-readable papers. Over time the community must accept that including well-written, machine-parsable data and code is just as important a part of the scientific communication framework.

Finally, a personal anecdote. I was contacted by a senior researcher to work on a cutting-edge numeric simulation project, and he sent me the source code, but made it clear that it was "top secret", even though it is gov't funded, because it is much better than its competitors, which are also gov't funded. He estimated that this advantage would last for several years, meaning several papers.


Deeply disturbing fact. The main researcher in our group works on cutting-edge numeric simulations, and although he does not publish his code (mostly because "you can code it yourself! It is really easy, listen" and then he goes on for 3 hours), if you were to ask, he would give it to you without any problems.

But according to what you say, this means that he (who still programs in Fortran) would need to start caring about versioning, cross-platform stuff (Fortran, remember), readability (well, it's Fortran; for me it is always unreadable) and maintainability. In addition to his usual duties of "doing research", giving lectures, answering students, advising PhDs and applying for grants.

The problem is most likely to be solved when everyone in the field can code the algorithm in the paper. If the peer reviewer can write a program following the algorithm, apply it to the data, and validate the results, this ends up working well. I doubt (very profoundly) that giving the source will result in better peer reviews. The peer reviewer may have the code, but he will most likely just compile it, run it, and check that the numbers are the same.


I doubt (very profoundly) that giving the source will result in better peer reviews.

I strongly agree with you on this point. The peer reviewer obviously won't have the time to check the code. But the very act of releasing the code will force the original author to be more thorough. In this sense, it will result in better, less buggy code, better results and better science.

Also, going one step beyond the peer reviewer, I think an interested party has the right to check the results for software and data bugs, given that we're in agreement that the peer reviewer probably won't have the time?

Another anecdote. We found a really interesting signal in a popular astro dataset, which would have put us on the front page of a few magazines for sure. After many sleepless nights, it turned out to be a bug in the processing pipeline software. It took me an unreasonable amount of time to find the bug, because the software wasn't released, so I had to go hunting around for clues in the appropriate papers. (This was a very large survey, so there were tons of papers.) If we had higher software engineering standards, maybe this bug would not have made it into the production pipeline and polluted the dataset.


In my group, we had two guys (me and another) who had a strong CS background, so we could advise on such issues.

So, in general, the professor's strategy could be to simply make sure he always has such a person (PhD student or postdoc) as part of his team, who advises on coding standards, versioning, cross-platform issues, etc.


But then money is a huge problem (at least here in Spain). At most, you can have a mathematician who knows about this and can help, and even that is pretty hard (I could help on a basic level, but I know only two other people in our department using version control, for example).


Yes, taxpayers fund science, but commenting, beautifying and documenting code is not what a physicist/mathematician/climate researcher wants to spend his time on. Usually you want to be doing research, whether by coding directly or by doing something else related. Doing this kind of stuff is far worse than filling out grant proposals or doing other bureaucratic stuff.

On the other hand, I disagree with "most scientists are not great programmers". What is a "great programmer"? In my definition, it is someone who can write a program to solve a problem without too much hassle. And a lot of scientists I know satisfy this to a terrific level. Of course, they use no orthogonality, nor source code control, nor do they do extreme programming, and they usually don't write test cases. They just do what is asked as quickly as possible to keep on doing what they need to do.


I disagree strongly about your definition of "great programmer". Your definition is of a "somebody who can program".


"Somebody who can program" have written probably 99% of used applications in the world today. From Bill Joy to Donald Knuth to David Cutler to Linus Torvalds to Guy Steele to Jamie Zawinsky to Guido van Rossum to John Carmack, and almost everyone in between.

Great programmers, it seems, only appear in order to write books and give pristine examples of how you build infinitely extensible architectures.


John Carmack produces the most maintainable and readable codebases. I happen to have the Quake3 source code up on my GitHub, so you can see for yourself:

https://github.com/mtrencseni/quake3

Donald Knuth is the author of literate programming, which is a framework for writing human-readable programs:

http://www-cs-faculty.stanford.edu/~uno/lp.html


Have you read how each of them actually writes their code? Both write it precisely the way the other poster noted.

And I've read the Quake 3 code extensively. Great code, but certainly not pristine, and I'm sure if you handed it to virtually anyone you know for code review, they'd find a whole bunch of stylistic and architectural issues with it. Take a look at the playerDie code. You're telling me you wouldn't have said, "Rewrite this?" if a colleague handed it to you?

And yes, Knuth is the author of literate programming, but that's not how the code started out. Read his letters on computer science.

Tarjan wrote a similar thing, I think in his ACM Turing Award lecture.

I picked those names because they are the best our industry has. But even with that, they all pretty much write code the way the previous post noted.


I know a lot of people who "can program" and who, when confronted with a problem, would spend a whole afternoon deciding whether they can do it in the main function or need to code separate pieces, and then a week to actually do it. A great programmer just picks up the problem and writes the code that solves it.


You've rather obviously never been exposed to real great programmers. Your "can program" definition is actually describing a really bad programmer, and your "great programmer" definition describes most kids who make it past one semester of high school computer science class.


Probably not to 'real' great programmers. But I've given classes on numerical algorithms for the CS program in our university. None of these guys (and gals) except for one or two out of 70 or 80 will be better than "someone who can program".


As a side note, your 'obviously' probably means that most mathematicians I know are therefore not 'great programmers'. Ok, maybe my definition is flawed. But when they have to write a program to solve something, they'll find a way to do it.


Is it possible that in the midst of their no-comments, no-test-cases spaghetti they introduce a few bugs that affect the results? If so, they need to release their code to be scrutinized along with their findings.


I don't see where I said it was spaghetti code. Sometimes a goto in C can save quite a few running cycles. Write a quick Runge-Kutta-Fehlberg integrator without it.


> Yes, taxpayers fund science, but commenting, beautifying and documenting code is not what a physicist/mathematician/climate researcher wants to spend his time. Usually you want to be doing research

Hunting down all the right citations, putting results in good prose, formatting the paper to the standard of where you are trying to get it published, responding to peer reviews (as silly as they can get), doing peer reviews for others... all things that are not "doing research" (strictly speaking) and to some are not enjoyable.

But they have to be done anyway, it's a quality standard that has to be met, everyone suffers for it and everyone benefits from it. It's just that the scientific community has let itself set really lousy standards for code sharing. It should be rectified.


You'll fix source code quality problems the day that research grants can include money to pay for scientific programming, meaning people who know their shit about modeling and running simulations on distributed systems.

Do bear in mind that those people are not cheap, so it's doubtful that anyone would want to incur the expense.


Why are we singling out climate scientists here? The only article of the three that were linked that was solely about climate scientists was the one from RealClimate; the other two make it more than clear that these issues span the full scientific spectrum.

And why dismiss so casually the argument that running the code used to generate a paper's result provides no actual independent verification of that result? How does running the same buggy code and getting the same buggy result help anyone? As long as a paper describes its methods in enough detail that someone else can write their own verification code, I would actually argue that it's better for science for the accompanying code to not be released, lest a single codebase's bugs propagate through a field.

The real problem, if there is one here, is the idea that a scientist's career could go anywhere if their results aren't being independently validated. A person with a result that only they (or their code) can produce just isn't a scientist, and their results should never get paraded around until they're independently verified.


Why are we singling out climate scientists here?

Because this recent rash of articles is a result of "ClimateGate". Clearly the issues raised are more general.

And why dismiss so casually the argument that running the code used to generate a paper's result provides no actual independent verification of that result? How does running the same buggy code and getting the same buggy result help anyone?

I think it's a bogus argument because it's one scientist deciding to protect another scientist from doing something silly. I like your argument about the codebase's bugs propagating, but I don't buy it. If you look at CRUTEM3 you'll see that hidden, buggy code from the Met Office has resulted in erroneous _data_ propagating through the field even though there was a detailed description of the algorithm available (http://blog.jgc.org/2010/04/met-office-confirms-that-station...). It would have been far easier to fix that problem had the source code been available. It was only when an enthusiastic amateur (myself) reproduced the algorithm in the paper that the bug was discovered.


It was only when an enthusiastic amateur (myself) reproduced the algorithm in the paper that the bug was discovered.

But that's the actual problem, that nobody else tried to verify the data themselves before accepting it into the field. If you could reproduce the algorithm in the paper without the source code, why couldn't they?

And while it may have meant that the Met Office's code would itself have been fixed faster, I don't buy the idea that having the code available necessarily would have meant the errors in the resulting data would have been discovered faster. That would imply that people would have actually dived into the code looking for bugs, but we've already established that the people in the field are bad programmers who feel they have more interesting things to do. Why isn't it just as plausible that they would have run the code, seen the same buggy result, and labored under the impression they had verified something?


I'm torn on this issue, but I certainly don't think that the question of whether giving out the code will decrease the chances of independent verification is a "bogus argument", and it's not about "protecting" anyone from anything.

Writing your own code for anything but trivial analysis is a huge time sink. If I can take someone else's code instead of writing my own, I'll do so. There is a very real chance that making all codes public will seriously increase overall consolidation and decrease independent verifications. (Independent verifications are a problem anyway because funding agencies are unlikely to fund redoing the same experiment and journals are less likely to publish them.)


What I have seen so far is that very bright and capable scientists (physicists, for instance) who are non-programmers[1] are usually extremely ashamed of their code. I'm talking even CS professors, who spend most of their time proving theorems. Structuring code well and making sure it's correct is hard, and they know it.

[1] Programmer = somebody who spends 8 hours a day at it.


If "Programmer = somebody who spends 8 hours a day at it." then my students of Numerical Analysis are programmers, and not mathematicians. They are currently coding an assignment on continuation of zeros and (at least looks like) they are spending a ton of hours each day on it (and making me loose a lot of time answering email questions, by the way)


I can readily believe that they're (currently working as) programmers, but where do you get "not mathematicians" from? If you're spending 8 hours a day writing mathematical code and understand the mathematics, then in my book you're being both a programmer and a mathematician.

Incidentally, my experience is that plenty of people who are programmers are ashamed of a lot of their code too, at least in the sense that they wouldn't want anyone else reading it and judging them. Writing code that looks good as well as getting the job done is hard, whoever's doing it, and it's by no means always worth the effort.


Because they have real trouble understanding the mathematics, but they devote all their time to coding something they don't understand. I have tried my best to get them to understand it, or to convince them to understand first and code later, to no avail.


And conversely, software developers are not programmers, because they're doing a whole lot of other things in addition to coding.


Taxpayers and scientists have a deal: we provide some support for your education and research, and in return you show us how to do stuff.

If you don't like that deal, governments have an even better one: we give you patent rights on what you invent -- as long as you show us how it is done.

These deals aren't altruism on the part of the public. Nobody thinks science is a charity. It's vital to the interests of the particular nations and the species as a whole.

In my opinion, no institution of higher learning that is supported by taxpayers should be giving out credentials to people who are so insecure and unprofessional as to be unable or unwilling to completely describe how they reached whatever conclusions they have. And that's not even getting into the issue of taking research and making political arguments out of it. That raises the bar even higher.

It's a scandal. And the only reason it's coming out is because some people -- for whatever reason -- have a bug in their shorts about climate science.

It's time to set some ethical standards for all scientific research. Open data, open programming on standardized platforms, and elimination of scientist-as-activist. There's just too much dirt and conflict of interest in certain areas of science. Not all, by any means. But enough to leave a bad taste in the average citizen's mouth. I love science. We deserve better than this. Something needs fixing.


I agree that code should be made public, but I couldn't disagree more with the last paragraph.

Even worse, I take offence to the sentiments expressed in it. It's time to set ethical standards?! Really? Are they so unethical? I would claim that scientists have, by and large, very high ethical standards. Considering that they themselves are being pushed more and more to market their research to get funding, the highest standard for our work remains truth, and nothing but the truth.

Elimination of scientists as activists is a non-statement. Assuming that scientists believe in their own work --a very common affliction-- what does it mean to be an activist? Just that they try to convince others of the validity of their work? Isn't it unethical _not_ to warn the world of the impending doom your research has uncovered? And to be somewhat insistent if people do not want to hear it?

Don't confuse politicians, crooks and businessmen in the climate change debate with the scientists. While all pretend to use scientific arguments, very few do. The political question should be: does the risk of climate change justify the costs that might prevent it.


The risk of climate change shouldn't be political. That needs to be studied, rigorously. Al Gore may have closed the debate on whether climate change is happening, but to what extent is far from settled. Whether X costs are worth preventing Y damage should be the political debate. We don't know Y yet.


"And the only reason it's coming out is because some people -- for whatever reason -- have a bug in their shorts about climate science"

On the other hand, scientists have spent decades fully aware that they'll have to build their own labs, obtain their own equipment, get their own animals if the work requires them, and follow the described procedures.

They can't use the other guy's mice or zebrafish or monkey. They probably can't use the other guy's lab. They might be able to use the same telescope for astronomy, but it's better to use a different telescope to control for some fluke of the original.

So given all that, it hardly seems a huge deal to not be able to use someone else's code, as long as you know what the code was supposed to be doing.


"So given all that, it hardly seems a huge deal to not be able to use someone else's code, as long as you know what the code was supposed to be doing."

But that's a vacuous statement. The only description of what the code is doing is the code. The description you get in a paper is the goal of the code, but goals aren't results. (Mistaking stated goals for results is a surprisingly common systematic error; programmers make it all the time when they believe the hype a project puts out before the project actually has any results. Once you start looking for it, it's hard to see a day go by without someone doing this.) I can make a simple information-theoretic argument that makes it blindingly obvious: any significant codebase will have many more bits in it than any journal article could, so it is literally mathematically impossible for a journal article to accurately describe the code. If goals were the same as results, all of our jobs would look very different.

Moreover, the reason why you can't use the other guy's monkeys is purely physical. If they could use the other guy's monkeys, they would. They try as it is; the entire purpose of controlled breeding gene lines is to try to erase variations. If they could use guaranteed-atomically-identical lab hardware, they would. (The story of cold fusion would probably be very different if this were possible, for instance; instead of an effect that nobody could reliably replicate, there would be one set of results that everybody could replicate and probably rapidly explain. Note I'm not making a claim about cold fusion itself, just the way history turned out.) Don't elevate an accidental physical limitation to an essential component of science. Very few things hit the true ideal of science, and in general the closer you can get, the closer you should get.


"If they could use the other guy's monkeys, they would. "

No, they wouldn't, any more than a scientist would use only one monkey. Because animals vary. Hell, my Mom had two uteruses. If you don't look at different specimens, you don't know if what you observe is peculiar to the one specimen, or generally applicable.

Using atomically identical hardware would be a bad idea, because then you wouldn't know for sure if your results are replicated because the experiment was good, or because of some quirk in the apparatus.


I only say it is desirable to be able to replicate results. Nowhere do I say that science must consist ONLY of replicating results. It actually is a bit weird to me that so many people leap to this conclusion when it is obviously falsified by other existing sciences where results absolutely can be exactly replicated and yet scientists do not deliberately discard that ability.

Replication of results is merely a step in the process. Once you establish replicated results, you proceed from there. Maybe you observe that the results are irrelevant because your monkey is weird in some way, and you test it on another monkey to establish that point; others can then examine their copies and decide whether you've got a point. Maybe you build on the experiment with a standard test bed. Maybe you take apart the apparatus to demonstrate how a bit of impurity corrupted the results, and then everybody else can do the same disassembly. Maybe you shuffle in some new monkeys and hardware and run the test again to be sure. Not being able to precisely replicate results is a handicap that the various sciences proceed through anyhow because they have no choice, not a desirable part of the process. Sure, there's a small amount of danger that you might overfit your results, but there are no guarantees anywhere; it's less than the danger that you face from non-replicable results. Not having replicability only throws away options, options that smart people could use to further their science, options that, when missing, can only slow progress down. That dumb people might misuse it really isn't a very interesting point.

And again, I reiterate that, to the extent possible, animal researchers do in fact try their best to get animals as identical as possible, so I give you not only the theory I outline above, but the practice of biology as well, where carefully controlled, standardized gene lines are used.


I work in a lab that does monkey research. What you say is more true about smaller animals like mice, where there are lots of variants, knockouts, etc.

Macaques don't really offer those kinds of options, at least not that I've ever heard of. You just hope they're healthy, disease-free, relatively smart, and are easy to get along with.


on standardized platforms

This would be horrifying. Maybe it could work in some fields, but not in mine (neuroscience). What, I'm going to use Java v.<whatever> on Ubuntu v.<whatever> because a funding agency tells me I have to?

I agree with much of your post, but not this bit.

edit: I think much of the professional insecurity of my field -- which feeds into the stuff you're complaining about -- is driven by the fact that way more grad student positions are funded than there are academic jobs. I wrote about this in more detail here: http://news.ycombinator.com/item?id=470181


I think lutthorn's point above is on the money. Publish the algorithms, data, assumptions, and any other implementation-agnostic information required to reimplement, but don't bother with the actual code.

Successfully reimplementing the experiment on disparate platforms would likely serve to support the findings even more. It might be more work up front for researchers to have to do the complete implementation in their preferred platform and then work out the kinks, but it might improve the actual science being done.


> I think lutthorn's point above is on the money. Publish the algorithms, data, assumptions, and any other implementation-agnostic information required to reimplement, but don't bother with the actual code.

If you don't publish the code and I come up with a different result while supposedly using the same algorithm, data, and assumptions, how do we know where a discrepancy between my results and yours comes from?

Note that if you're taking public money, the code isn't yours. It's the public's. Don't like that? Don't take public money. (If you work for Google, Facebook, etc., they own what you do on their dime - same deal here.)


You do realize that the way the incentives are laid out, it's exactly the people who do what you ask who will not get permanent jobs, so you'll still be left with those who don't. You can't accomplish what you want by bitching. As someone said in the discussion about Google hiring yesterday, "incentives matter". I agree with you that more emphasis needs to be put on careful work instead of cranking out flashy publications, but until people are rewarded for that, things are not going to change much.


Very well said - I find the current state of publishing in academia appalling. Don't forget that most research is behind a pay wall just for the damn PDF! I think you nailed it with "insecure". The little PhDs need to know their work isn't designed to just get them tenure...


This is an incredibly ignorant and insulting comment. Nobody goes into science to get rich. And job security at 40 is hardly the primary goal either.

The majority of scientists go into science to add to the common pool of knowledge. For many, seeing their work being used in a positive way is extremely gratifying.

The pay walls are not the choice of most scientists. There is a push towards open access, but it's not easy. To publish a single open-access article a scientist must pay thousands of dollars. For example, a single article in BMC Bioinformatics costs US$1805. This must come out of already hard-fought-for and limited grant money, money that must also pay the salaries of many younger scientists still in training.


Good intentions don't counteract the outcome of bad incentives. Obviously tenure isn't about being rich - but it is about being independent in your work. The system, with its "publish or perish" style, produces behaviors that are centered on building your metrics for getting tenure / status recognition.

How many conference intros have we all heard that say, "X person has been published X times and in Y journals"? Are you saying you've never heard advisors talk about splitting articles up and the like? How about the decline of repeating published experiments?

These aren't isolated incidents at all. Status seeking is in all of us, and I don't consider it a bad thing. The metrics just need to be aligned with the goals of the endeavor - furthering knowledge.

Currently, we have a system where most grad students are chattel slaves seeking PhDs for tenure-track positions that 95% won't be able to get. So competitive is the tenure system that people are doing the above things to have a chance.

It's like the idealist-activist who becomes a pragmatic-politician and learns along the way the cruel facts of life and the system. If that's an ignorant or insulting point of view, then welcome to humanity... ;-)


As if we choose to have our PDFs behind pay walls. I don't even have printed copies of my paper, because the publisher does not want to make the expenditure. And if I were to lose my password, I would have no access to my own paper for download (and I don't have access to the rest of the papers in the same issue, of course).


So you're saying that you can't release due to some licensing / copyright issue, or that it's just something that takes extra time? I definitely understand the former - not that I like it, but the latter is inexcusable.


It depends heavily on the journal. Most journals have a "final draft policy": what they print is only theirs to publish, but you can self-post whatever previous versions you have. In my case, I think there are one or two minor spelling mistakes in the versions I have posted on arXiv and my homepage. It does not take extra time (at least not a lot) to self-post it or publish on arXiv (just a little hassle with image conversion problems, YMMV).


Would you say that you're the exception in posting them for the public? If so, would it be worthwhile for someone to try to get at these non-final but still perfectly useful papers?


I really can't tell. As far as I know, all the people in my department publish their documents freely: either on arXiv or on the department page for submitted papers. Also, arXiv has a huge number of articles in mathematics; the growing trend is to submit there. I guess that most mathematicians (or at least young ones) provide at least some draft version of their published manuscripts online, freely available.


Yeah, arXiv is a great resource, and I actually hadn't realized it had grown this much. Your department is one of the good ones - I salute you! Here's to more doing the same.


I also hope everyone starts doing it. There is no point in making research unavailable to the public just for the sake of keeping up the journal's "level". The future is open content, but most publishers are still blind to it.


Couldn't you have emailed it to yourself or copied it to a USB drive when you originally wrote it?


I hope you are joking. What I mean is that access to the journal PDF is blocked. I have, of course, the sources and several copies and PDFs hanging around, as does my co-author.


In computer science academia, I have not heard of someone refusing to release their code. This seems quite bizarre to me. Of course it is not usually very polished code, but still there is no justification for hiding it.


It happens all the time in astrophysics. Codes are competitive edges, and the support burden from people asking questions about your code that you did release is also a very real issue.


This article seems to have three goals.

1. Spread FUD (Fear, Uncertainty, and Doubt) about the scientific results used to create evidence for global warming.

2. Observe that the training and skills of scientists processing data, building models, and drawing conclusions from data need to be improved.

3. Promote a very limited view of the scientific method where "replicating a result" means "accessing another scientist's data and computer programs and duplicating the processing that was performed". Independent verification usually means that a totally independent experiment is run to test the same hypothesis, new data is gathered and processed and a result produced which is compared with previous results (and those predicted by current theories). Verification means that the same phenomenon is observed at the same level modulo the statistics of measurement.


1. That's not right. I'm not interested in FUD, I am interested in the debate about releasing source code that's come about because of the so-called "ClimateGate" thing.

3. Also not correct. I simply don't believe that not releasing source code is the right answer. It's one group of scientists claiming to save another group from themselves. The argument appears to be that if they released the code others would run it and be satisfied with the result. So? That's just bad science and tells you something about the people who run the code. The solution isn't to protect idiots from themselves.


Other people have thought about, and at least started to solve, the problem of academic source code being extremely proof-of-concept rather than production- or resume-ready pieces of software engineering art:

http://matt.might.net/articles/crapl/


One of the big problems is that there is no incentive for repeating or verifying previous results/findings. So even if someone is doubtful of an assertion that's been made, there is generally no incentive to follow up and verify it. I don't think sharing code or secret data-cleaning methods is going to bring much change unless someone is rewarded for repeating the results.


I have a question, after so much reading and commenting in this thread (and the original post). How many of the people here (programmers and non-programmers) have peer-reviewed a paper, or written a paper (mind you, not in CS) that has been peer-reviewed?


I've worked/studied computational physics for a few years. My experience has not been good.

First, I don't think that we've learned how to make complex models yet. But in fairness, it's a really hard problem. If my numerical code is wrong, I won't get a segfault. Rather, I may notice "unusual" patterns in my model output, which could be:

- A genuine physical effect

- An artifact of the assumptions we used (because models are simplifications)

- A numerical method that hasn't converged, or whose accuracy is insufficient

- A bug

Untangling this is nigh impossible, unless you rely on very, very careful testing of independent parts. That's how NASA does it [1], but it's simply not within the realm of what the typical physicist can/will do (and understandably so; numerics is hard).

The solution would be to have tried and tested libraries, built by numerical specialists, so that physicists would only have to specify the equations to solve. That's what Mathematica does, and it's the only sane way I know of making complex models.
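
A minimal sketch of that "specify the equations, let a tested library do the numerics" style, in Python with SciPy (an illustrative example, not code from any of the models discussed here):

    # A damped harmonic oscillator, written as nothing but its equations of motion.
    # The integration scheme, step-size control and error estimation all belong
    # to the (tested) library, not to the scientist.
    from scipy.integrate import solve_ivp

    def rhs(t, y, gamma=0.1, omega=2.0):
        x, v = y                      # y = [position, velocity]
        return [v, -2 * gamma * v - omega**2 * x]

    sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
    print(sol.y[0, -1])               # position at t = 20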

But Mathematica is slow, so physicists use Fortran instead, and code their own numerical routines in the name of efficiency. Tragedy ensues. Fortran's abstraction capabilities are below C's [2]. Modularity is out of the window.

I spent a summer working on one particularly huge model, that had been developed and tweaked over twenty years. At some point I encountered a strange 1/2 factor in a variable assignment, and questioned my advisor about it.

"Oh, is that still in there? That's a fudge factor, we should remove it."

A fudge factor. No comment, no variable name, just 1/2.

Another scientist told me: "No one really knows anymore what equations are solved in there.", to which my advisor replied "Ha, if we gathered all the scientists for an afternoon, we could probably figure it out."

But I agree with the other posters and jgrahamc: the incentives for producing quality code and models are just not there. And sadly, I don't see them changing anytime soon.

[1] http://www.fastcompany.com/node/28121/print?

[2] (At least, the subset of Fortran used by the physicists I've met. Modern Fortran is a bit different.)


For a non-American, this is one of the most typical patterns in the HN worldview. I call it the "libertarian-style conspiracy theory".


What conspiracy theory?


I don't think you can call yourself an academic if you're unwilling to share and describe your methodology in sufficient detail that others can follow it. That's the major difference between academia and industry. Also it should obviously be mandatory for taxpayer-funded research.


This sounds plausible, but is quite naive really. We are talking about huge amounts of money which are at risk if the cat gets out of the bag. Please remember that these sleazebags also do everything they can to prevent raw data from being available. They just want us to accept their "findings" and pocket the next multi-million-dollar check.


Everything they can to prevent raw data being available, up to and including posting it freely on their own websites:

http://www.realclimate.org/index.php/data-sources/



