
Oh no, I love Sci-Hub, but I second-guess linking it. Thanks for the mirror!

So, reading the paper proper, I was wrong about it being the authors' own talks. They picked existing engineering talks and interviews. It'd be nice to actually see the clips to get a sense of how much they changed the quality. They say they put the clips on YouTube for the test but don't take the opportunity to link them.

In the Science Friday graph all the error bars overlap, so meh. For the combined technical talks the error bars don't overlap and they all point in the same direction, which is way more encouraging, though I really wish they'd tried it with a larger pool of talks to remove that systematic effect. I also worry that random Mechanical Turk workers may differ from the usual audience for a technical talk in important ways (e.g. not knowing the subject and so having nothing to go on besides audio quality). They say in a note at the end of the paper that they tried filtering for respondents who correctly answered a question at the end, and that the effect was still there, but they don't show the data for it or say whether it changed the effect's strength.

Overall I feel the methodology is reasonable and the results are certainly suggestive, if a bit lacking in exact methodological details and in the number of talks they tried it on. And it's not like I have a prior that audio quality doesn't matter...

Yeah, that was my overall take: reasonably convincing, neither a dumpster fire nor a slam dunk.

These days I don't believe these sorts of psychology effects until I see a Registered Report, but this also isn't the hot steaming garbage you'd expect from the OP summary. It's more or less what you'd expect from a small (but not super-small) sample investigating a moderate real effect across the board, which does seem pretty plausible a priori.
