
If the social contract says "delete things when you see a delete message", then that's useful in a federated environment.

Only bad actors will disobey and they will have to modify their software in order to do so. This doesn't provide absolute protection against bad actors, but since most bad actors don't own a time machine, it reduces the scope of the harm they can enact.

Consider the adversary "angry ex-boyfriend." Let's assume he wasn't always angry and isn't a sociopath. By the time he has become angry, the sensitive posts have already been deleted. This makes a difference in real life to real people.

There is indeed a user-interface concern about not over-promising to the end user. But that doesn't justify leaving such a useful thing out at the protocol level.
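For concreteness, here is a minimal sketch of what honouring that contract might look like on the receiving server, assuming the protocol in question is ActivityPub (the Delete and Tombstone types come from the ActivityStreams vocabulary; the local_store dict and handle_activity function are hypothetical stand-ins for a real server's persistence layer):

    # Minimal sketch: honour an incoming ActivityPub Delete activity.
    # `local_store` stands in for whatever persistence layer a server uses.
    local_store = {
        "https://example.social/users/alice/notes/123": {
            "type": "Note",
            "content": "a sensitive post",
        },
    }

    def handle_activity(activity: dict) -> None:
        if activity.get("type") != "Delete":
            return
        # The deleted object may be given as an id string or an embedded object.
        obj = activity.get("object")
        obj_id = obj if isinstance(obj, str) else obj.get("id")
        # The social contract: drop the cached copy, keep at most a tombstone.
        if obj_id in local_store:
            local_store[obj_id] = {"type": "Tombstone", "id": obj_id}

    handle_activity({
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Delete",
        "actor": "https://example.social/users/alice",
        "object": "https://example.social/users/alice/notes/123",
    })

Nothing in this forces compliance; the point is that a conforming server does the deletion by default, and a non-conforming one has to be deliberately written not to.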



> Only bad actors will disobey and they will have to modify their software in order to do so.

I have an hourly bup backup for each of my servers that goes to a backup host. I am not a bad actor and have not modified any software, yet if I run an ActivityPub server I will end up with a complete timeline at 1-hour granularity; in other words, I will retain basically anything that isn't deleted within minutes of posting.

(Considering a switch to borg backup when I have the time to really evaluate it)
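For illustration, roughly the shape of such an hourly job (a sketch only: the data directory, backup host, and snapshot name are placeholders, though `bup index` and `bup save -r` are the usual bup subcommands for saving to a remote repository):

    # Hourly backup job, e.g. invoked from cron on each server.
    import subprocess

    SOURCE = "/var/lib/fediverse-server"   # hypothetical data directory
    REMOTE = "backup@backuphost:"          # hypothetical backup host
    SNAPSHOT = "social-server"             # hypothetical snapshot name

    def hourly_backup() -> None:
        # Index the source tree, then push a snapshot to the remote repository.
        # Each snapshot keeps whatever posts existed at that hour, even if
        # they are deleted upstream later.
        subprocess.run(["bup", "index", SOURCE], check=True)
        subprocess.run(["bup", "save", "-r", REMOTE, "-n", SNAPSHOT, SOURCE], check=True)

    if __name__ == "__main__":
        hourly_backup()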


>> most bad actors don't own a time machine

You mean besides me, of course. XD

EDIT: It's a joke, people, for Pete's sake.


Actually, this is a real issue. For one, there is the Wayback Machine, which could very well see increased usage and mindshare as legally mandated content takedowns increase. For another, if, say, Facebook wanted to harvest data from this network to create shadow profiles and flesh out missing patterns in their analytics, then they could easily follow everything, keep the raw data/content internal, and never develop the ability to retroactively un-analyze that data when a delete request comes in.


This is a legitimate concern.

I would not put it past Facebook (or other businesses which are addicted to harvesting user data) to behave badly against networks like this. However, for the sake of their reputations they'd still probably do it quietly, which means many of the person-to-person attacks that deletion protects against would still be thwarted.

This is a problem the same way Facebook's privacy controls are a problem. Facebook themselves are not bound by them, but they're still useful if you want to protect your data from other users of the platform.

And FWIW, I think most of the big crawlers respect robots.txt - this is the same sort of thing.
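The robots.txt analogy can be made concrete with Python's standard library: a well-behaved crawler checks before fetching, even though nothing compels it to (the user agent string and URLs here are hypothetical):

    # Sketch: a polite crawler consulting robots.txt before fetching a page.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.social/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt

    # Compliance is voluntary; the crawler simply chooses to honour the answer,
    # just as a federated server chooses to honour delete messages.
    if rp.can_fetch("ExampleCrawler/1.0", "https://example.social/users/alice/notes/123"):
        print("allowed to fetch")
    else:
        print("disallowed; skipping")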



