
I think in astronomy they generate tens of terabytes per night, and an experiment may involve automatically searching the data for instances of something rare: one star passing almost exactly behind another, an imminent supernova, and so on. To test the program that does the searching you need the raw data, which until recently, at least, was stored on magnetic tape because random access isn't needed. The archive is read through in full once per month (say), applying all current experiments in a single pass, so whenever you submit a new experiment you get the results back a month later.
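The batch model described above can be sketched roughly like this: experiments accumulate as filter predicates, and one sequential scan over the archive (mimicking a single tape read) applies all of them at once. Everything here (the record shape, function names, and the toy predicates) is hypothetical illustration, not any observatory's actual pipeline.

```python
from typing import Callable, Dict, Iterable, List

Record = dict  # a stand-in for one archived observation

def run_batch(archive: Iterable[Record],
              experiments: Dict[str, Callable[[Record], bool]]) -> Dict[str, List[Record]]:
    """One sequential pass over the archive; every pending experiment
    inspects each record as it streams by, so N experiments cost one scan."""
    results: Dict[str, List[Record]] = {name: [] for name in experiments}
    for record in archive:
        for name, matches in experiments.items():
            if matches(record):
                results[name].append(record)
    return results

# Toy archive and two toy "experiments" (made-up thresholds).
archive = [
    {"id": 1, "brightness": 5.0},
    {"id": 2, "brightness": 950.0},  # e.g. a candidate transient
    {"id": 3, "brightness": 12.0},
]
experiments = {
    "bright_transients": lambda r: r["brightness"] > 100,
    "faint_sources": lambda r: r["brightness"] < 10,
}
print(run_batch(archive, experiments))
```

The point of the design is that tape (or any append-only archive) is cheap per byte but only fast sequentially, so you amortize one expensive scan across every query submitted since the last pass, at the cost of latency for each individual experiment.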

I like the idea of publishing the data with the paper but it's not feasible in every case.



