>We could, trivially, outright stop all anonymous trolling and disinformation. Purveyors of social media don't authenticate their users because they don't want to. Their business models require that they remain willfully ignorant.
Yes, well, what argument would you make, if you could or would, to change the current landscape?
Social (popular) media platforms all eventually implode. Lifecycle maturity models and all that. Seems to me that Facebook and Twitter are well into the tops of their S-curves. (I can't speak to TikTok.)
If I wanted to accelerate their demise, I'd attack their revenue. Like pop the digital advertising bubble. Congressional and criminal investigations into digital ad fraud would mosdef do the trick.
--
If we could go back in time (or perhaps as lessons for whatever comes next), I'd advocate three general categories of reforms.
1) All the "well duh" stuff that Sen. Mark Warner et al advocate. Here's the press release for the SAFE TECH Act and Warner's white paper.
I particularly like clearly identifying bots. Some are authentic, legit activity, so not an outright ban on bots.
"Media literacy" is quixotic; I guess they want to say they tried.
I want to know more about "information fiduciaries"; see #3 below.
2) Nerf the algorithms, squelching viral content instead of boosting it (see the sketch after this list). This is addressed by section 1.4 of this commission's recommendations. (Which also has a lot of "well duh" general-purpose civil society stuff.)
3) Most radically: Individual property rights over personal data. My data is me. If someone is using my data in some economic way, I want my cut. This nicely dovetails with (and arguably necessitates) the various proposals to treat aggregated data as a liability instead of an asset, which would totally flip the current script for investors, regulators, insurers, etc.
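To make point 2 concrete, here's a toy Python sketch. It's purely illustrative -- not any platform's actual ranking code -- of what "squelching instead of boosting" could mean if a hypothetical virality signal (say, share velocity) fed into a feed-ranking score.

```python
# Toy illustration only: a hypothetical feed-ranking score where a
# "virality" signal (share velocity here) damps rank instead of boosting it.
from dataclasses import dataclass
import math

@dataclass
class Post:
    relevance: float       # how relevant the post is to this user (0..1)
    share_velocity: float  # shares per hour, a stand-in virality signal

def boosted_score(p: Post) -> float:
    # Status quo (caricature): the more viral, the higher it ranks.
    return p.relevance * (1 + math.log1p(p.share_velocity))

def squelched_score(p: Post) -> float:
    # "Nerfed" version: the same signal divides the score instead.
    return p.relevance / (1 + math.log1p(p.share_velocity))

if __name__ == "__main__":
    calm = Post(relevance=0.8, share_velocity=2)
    viral = Post(relevance=0.8, share_velocity=500)
    print(boosted_score(calm), boosted_score(viral))      # viral post wins
    print(squelched_score(calm), squelched_score(viral))  # viral post loses
```

The point isn't the particular formula; it's that the same engagement signals platforms already collect could just as easily be used to slow the spread of runaway content as to accelerate it.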
I've tried to understand the opposition to "personal data sovereignty" -- I just came up with that term, clever!, because I don't know what else to call it. I dimly recall some "privacy experts" in California concern trolling that state's initiatives. I think their reasoning was something like "we can't put a price on personal data because that'd encourage more collection." Um. Okay. Felt very Cassandra, unattached to our reality. So a philosophical rather than a practical opposition, I suppose.
FWIW, talking about this stuff is really hard. My "pay me for using my data" proposal doesn't make sense unless the audience already understands the current ecosystem.
As I've said elsewhere, I worked on electronic medical records information exchanges. Our startup was bought by a national clinical laboratory company (Quest Diagnostics). I sat in various meetings and calls, with PHBs, lawyers, and other goons, brainstorming ways to further monetize medical records.
Back in the mid-aughts, every single participant (doctors, hospitals, labs, scripts, insurers, pharma) absolutely considered patient data as "theirs". And our potential partners like Google Health and Microsoft HealthVault and Cerner and EPIC were all hellbent on figuring out how to monetize it.