
Recommendation algorithms really need to come with extensive UIs for tweaking, tuning and cleanup (less like this, more like this, don't include this, you're seeing this because of..., etc.), and maybe completely new 'exploration UIs' to actively find new stuff at the edge of my own bubble and beyond.

For instance, having to open random YouTube links in private browsing mode just to prevent the home page from being forever "poisoned" with this specific type of crap is kinda bizarre.

IMHO recommendation algorithms could be really useful if they allowed the user to play a more active role. Give us a "power user UI".



> Recommendation algorithms really need to come with extensive UIs for tweaking, tuning and cleanup (less like this, more like this, don't include this, you're seeing this because of..., etc.), and maybe completely new 'exploration UIs' to actively find new stuff at the edge of my own bubble and beyond.

> IMHO recommendation algorithms could be really useful if they allowed the user to play a more active role. Give us a "power user UI".

I think you are fundamentally misunderstanding the optimization behind “the algorithm”.

They’re not optimizing for usefulness or, god forbid, life-enhancing stuff. They’re optimizing for engagement. Sugar for our brains. If users could control what they see on the platform, we could self-optimize for usefulness. That would go directly against the platforms' interests.

1h/week of life-enhancing videos = $10

10h/week of clickbait or rage-inducing videos = $100

Yep: user control is never gonna happen.


How about a recommendation algorithm that works like this:

- When you upvote an item, the algorithm connects you to other people who upvoted it and shows you what else they upvoted ("more like this")

- When you downvote an item, the algorithm disconnects you from those who upvoted it, and you see less content from them ("less like this")

Some properties:

- You have an easy way to expand your connections - just upvote/submit content you liked.

- Because it is so simple, the algorithm can explain to you exactly why you are seeing a given recommendation - because it was liked by people who liked X, Y and Z (where X, Y and Z are things that you liked before).

- Since your connections are the result of your explicit content ratings, you don't have to worry about poisoning your recommendations with content you merely viewed. You have active control over your recommendations.
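
For concreteness, here is a minimal sketch of that mechanism in Python. This is only an illustration of the idea as described above, not LinkLonk's actual implementation, and all the names are made up:

    from collections import defaultdict

    class Recommender:
        def __init__(self):
            self.upvoters = defaultdict(set)     # item -> users who upvoted it
            self.upvotes = defaultdict(set)      # user -> items they upvoted
            self.connections = defaultdict(set)  # user -> users they are connected to

        def upvote(self, user, item):
            # "More like this": connect to everyone who also upvoted this item.
            self.upvotes[user].add(item)
            self.connections[user] |= self.upvoters[item]
            self.connections[user].discard(user)
            self.upvoters[item].add(user)

        def downvote(self, user, item):
            # "Less like this": disconnect from everyone who upvoted this item.
            self.connections[user] -= self.upvoters[item]

        def recommend(self, user):
            # Rank unseen items by how many of the user's connections upvoted them.
            scores = defaultdict(int)
            for other in self.connections[user]:
                for item in self.upvotes[other]:
                    if item not in self.upvotes[user]:
                        scores[item] += 1
            return sorted(scores, key=scores.get, reverse=True)

        def explain(self, user, item):
            # "You're seeing this because": the connections who upvoted it.
            return self.upvoters[item] & self.connections[user]

    r = Recommender()
    r.upvote("alice", "article-1")
    r.upvote("bob", "article-1")          # bob is now connected to alice
    r.upvote("alice", "article-2")
    print(r.recommend("bob"))             # ['article-2']
    print(r.explain("bob", "article-2"))  # {'alice'}

Because the state is this simple, the explanation falls out for free: it is literally the intersection of an item's upvoters with your connections.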

If that sounds interesting, check out my hobby project that works exactly this way: https://linklonk.com - it's free, no ads, no tracking.


That sounds like it would be great at creating echo chambers.


Yes, that is a natural conclusion to draw. We have all seen people upvote content to promote their opinions, even when the content is inaccurate or an outright lie. This happens on Facebook, Twitter and Reddit. If people behaved this way on LinkLonk, then we can imagine how it would connect people with the same opinions into echo chambers.

But I would argue that this behaviour is the result of the incentive systems. Our likes/retweets/upvotes don't have much effect on us; the recipients of these actions are other people - they affect what other people see. And when we are given tools that can influence other people, it should be no surprise when we use them to do just that. The content that is good at influencing (we are clearly right and they are clearly wrong) is often sensationalized and misleading. The ends justify the means. And the other side lies too.

On LinkLonk the incentives are different, and therefore the behaviour would be different. Your upvotes have limited power to affect what other people see. Instead, your upvotes primarily determine the type of content your future self will see. I believe you would be more likely to upvote content that informed you than content that simply says how right your side is (which usually contains little new information for you).

I commented some more on this topic here: https://news.ycombinator.com/item?id=28834278


I feel as though companies want their “smart” things (algorithms, recommendations, devices) to do stuff better than us - maybe they think that’s what we want - but I’d just like them to serve us better instead: let me tell them what to do, and have them do exactly that.



