The qualitative comparison suggests that item2vec may produce _more_ homogeneous / boring results, which is a bit unfortunate; the interesting question in recommendations is how to find "aspirational" recommendations (things the shopper would not have looked for on their own).
I would really love to see an analysis that did an A/B test using more traditional CF and this, and see what the revenue lift was, because "accuracy" as measured here doesn't necessarily map onto the objective that you care about in the real world.
On the other hand, I played with using collaborative filtering to improve the personalization of language models for speech recognition in shopping, and in that context this approach sounds like it could have been very useful: it was surprisingly hard to get broad enough coverage of the full item catalog from a small number of purchases for language-modeling purposes. Having good embeddings would have helped a lot.
"I would really love to see an analysis that did an A/B test using more traditional CF and this, and see what the revenue lift was, because "accuracy" as measured here doesn't necessarily map onto the objective that you care about in the real world."
It may be an urban myth, but somebody told me Amazon tweaked their recommendation algorithm to occasionally provide random items, the thinking being that people might be persuaded to buy something on the mere suggestion that they would like it.
A multi-armed bandit will occasionally provide 'random' items as part of the exploration phase. Perhaps that's what's going on, and not any sort of diabolical self-fulfilling prophecy.
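The exploration behavior described above is easy to see in the simplest bandit policy, epsilon-greedy. This is a minimal illustrative sketch, not any particular recommender's implementation; the function name, the `scores` dict, and the epsilon value are all assumptions for the example:

```python
import random

def epsilon_greedy_recommend(scores, epsilon=0.1, rng=random):
    """Pick an item to recommend.

    With probability `epsilon`, explore: show a uniformly random item
    (this is where the occasional 'random' recommendation comes from).
    Otherwise, exploit: show the item with the highest estimated
    reward (e.g. a predicted click or purchase rate).

    `scores` maps item -> current reward estimate; both the name and
    the structure are hypothetical, chosen just for this sketch.
    """
    items = list(scores)
    if rng.random() < epsilon:
        return rng.choice(items)       # exploration step
    return max(items, key=scores.get)  # exploitation step
```

With `epsilon=0.1`, roughly one recommendation in ten is drawn at random, which from the shopper's side can look exactly like the deliberate "random suggestions" described above, even though it is just the policy gathering data on under-explored items.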
Thorsten Joachims gave a talk at Amazon Machine Learning Conference 2015, about doing specifically that. That may be what someone was talking about. I've been trying to find the paper related to the work, but am struggling to find it.