
But I find such "adjustments" distasteful.

Well, you have to deal with it somehow. If you know the source of the noise, there's no excuse for not modelling it.




Sure, but the point is that if there's something wrong with your model of the noise, the result could be completely invalid.

Look at the history of Millikan's oil-drop experiment: http://en.wikipedia.org/wiki/Oil_drop_experiment#Millikan.27...

He got the wrong answer, and subsequent researchers who got different results tended to smooth their own results toward the "correct" (actually wrong) expected value.

I'm not saying that happened here, but it's always a possibility.

Essentially, how do you know when you've modelled the noise accurately enough? When you get the expected right answer? Why not continue to refine the noise model, or consider other ones?


The role that psychology and social dynamics play in science ought to be studied more; to ignore it is foolish. One person I knew in commutative algebra said that he always adds references to the famous people in the field to his papers, purely for ass-kissing purposes. It's just one anecdote, but I suspect this sort of thing is far more common than people realize.



I like Eliezer Yudkowsky's definition of humility on Less Wrong: "To be humble is to take specific actions in anticipation of your own errors."

Given Dr. Everitt's commitment to this experiment, I'd wager he was humble enough by that definition ... though perhaps that assumption of mine is just an instance of the "halo effect"?


What you are describing is a very real danger, but if you are modeling actual noise, it's less of a risk than if you are modeling some small systematic shift that you claim must be subtracted from the signal. With noise, it's unlikely that a wrong model would, just by chance, introduce a systematic shift into the data; more likely, it simply won't reduce the noise level.


It was actually a systematic shift, but a periodic one. It wasn't random noise.


Well, true. If the noise was truly random, you couldn't model it, of course. I just meant "noise" in the sense of "apparently random variations that actually aren't random".
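To make that concrete, here's a minimal sketch with made-up numbers (Python, nothing to do with the actual GP-B analysis): if the "noise" is really a periodic disturbance at a frequency you think you know, you can fit and subtract it by least squares. And if the assumed frequency is wrong, the fit typically absorbs almost nothing, so the scatter stays large instead of the answer being quietly shifted.

    # Minimal sketch, hypothetical numbers: a small constant "signal" buried
    # under a periodic disturbance plus genuinely random noise.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 100.0, 2000)
    true_signal = 0.05                   # the small constant we want to estimate
    f_dist = 0.13                        # disturbance frequency, assumed known
    y = (true_signal
         + 0.5 * np.sin(2 * np.pi * f_dist * t + 0.7)  # periodic "noise"
         + 0.1 * rng.standard_normal(t.size))          # truly random noise

    def fit_constant_plus_sine(t, y, f):
        """Least squares for y ~ c + a*sin(2*pi*f*t) + b*cos(2*pi*f*t)."""
        X = np.column_stack([np.ones_like(t),
                             np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef[0], (y - X @ coef).std()

    # Right disturbance model: recovers the signal and shrinks the scatter.
    c_good, s_good = fit_constant_plus_sine(t, y, f_dist)
    # Wrong disturbance model: the sine terms absorb ~nothing; the scatter
    # stays large, but the constant is not systematically shifted.
    c_bad, s_bad = fit_constant_plus_sine(t, y, 0.31)
    print("right f:", round(c_good, 3), "scatter", round(s_good, 3))
    print("wrong f:", round(c_bad, 3), "scatter", round(s_bad, 3))

The real danger the grandparent describes shows up when the subtracted term can mimic the signal itself (a fitted drift or offset, say) rather than a zero-mean oscillation like this one.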


That's not quite true, since a small signal plus random noise will certainly give you a testable prediction.
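For what it's worth, the arithmetic behind that is easy to check (again a toy sketch with made-up numbers): averaging N samples shrinks random noise like sigma/sqrt(N), so a constant signal much smaller than the noise still yields a testable prediction once you have enough data.

    # Toy sketch: a small constant signal plus purely random noise is still
    # testable, because averaging beats the noise down as 1/sqrt(N).
    import numpy as np

    rng = np.random.default_rng(1)
    signal, sigma = 0.05, 1.0            # signal twenty times smaller than noise

    for n in (100, 10_000, 1_000_000):
        samples = signal + sigma * rng.standard_normal(n)
        estimate = samples.mean()
        stderr = samples.std(ddof=1) / np.sqrt(n)
        print(f"N={n:>9}: estimate {estimate:+.4f} +/- {stderr:.4f}")
    # Once the +/- sigma/sqrt(N) band excludes zero, "signal" vs "no signal"
    # is a testable prediction despite the random noise.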



