Hacker News

What you say is all true, and I actually completely agree with you (and like how you articulate those points – great to read them distilled that way), but at the same time it's probably not a good idea at all in most circumstances.

It is alright to decide that in certain cases you can act with imperfect information.

But to be clear, I actually think there may be situations where pouring a lot of effort into really understanding confusion is itself confusion. It's just very context dependent. (And I think you consistently underrate the progress you can make in understanding confusion – or anything else impacting conversion and use – by using qualitative methods.)



Regarding underestimating qualitative methods: I'm actually all for them. It may turn out they're all you need. (Maybe a quantitative test will be required to prove your point, but it will probably not contribute much to a solution.) It's really that I think A/B testing is somewhat overrated. Especially since, without appropriate preparation, you will probably not know what you're actually measuring – and that preparation will have done the heavy lifting already. A/B testing should really just be about whether you can generalize on a solution and the assumptions behind it, or not. Using it as a tool for optimization, on the other hand, may be rather dangerous, as it doesn't suggest any relations between your various variables, or between the various layers of fixes you apply.
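To make the "can you generalize?" question concrete: the statistical core of a typical A/B test is a two-proportion z-test on conversion counts. The sketch below is illustrative only – the function name and all numbers are assumptions, not anything from the thread – and it shows exactly what such a test does and does not tell you.

```python
# Minimal sketch of the generalization check behind an A/B test:
# a two-sided, two-proportion z-test on conversion counts.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 200/4000 conversions for A, 260/4000 for B.
z, p = two_proportion_z(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
```

A small p-value here says only that B's lift is unlikely to be noise, i.e. that the effect generalizes; it says nothing about why B works or how its layered fixes interact – which is the point about optimization above.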



