"Instagram is actually pretty hostile to security researchers studying their platform. ...
Which is unfortunate because Instagram is one of the platforms most useful to bad actors."
Hmm, perhaps those two facts are more related than Belotti's wording suggests.
It won't happen until we have proper laws against that, and since politicians themselves use these platforms to further their own agendas (if not to spread outright misinformation), such laws are unlikely to materialize. Plus, with the control these platforms now have over public discourse, any attempt at such a law would likely have public opinion swayed against it immediately.
I wonder if their thought process is "if we don't, someone else will". But the fact that Gab, Parler, et al. haven't gotten significant traction is, I think, evidence against that.
Gab preaches to the choir - everyone there is already gone. Parler seems to be something between a catastrophe and a fraud against its investors.
They don't differentiate themselves from the dominant players enough to be considered categories of their own, their audiences already hold the desired views, and they aggressively moderate against conflicting discourse.
Parler was incredibly useful for uncovering the January 6th attack, however.
Eventually some service will take over Facebook and Twitter, but I doubt that it’ll be something that’s so targeted to the far-right.
Back when I was still on Twitter, I remember folks in the office playing a game to see who had the most bots following them. I believe there was some online tool that evaluated the accounts following you and estimated whether each was real or not.
I'm sure bots have gotten much better at avoiding detection, but it would be interesting to see whether those tools could still ferret out the fakes. Likewise, to your point, it would be interesting to see whether the users themselves could tell the difference.
Maybe related: I've also found Instagram to be the easiest service to create extra accounts on. Facebook, Google, Twitter, etc. all demand phone numbers to create an account, if not immediately then within minutes of opening one. They're all set up to make it as hard as possible to use any kind of fake phone number, and they're all quick on the trigger to ban new accounts if anything looks the least bit odd, or to require even more elaborate confirmation and security steps.
Instagram, meanwhile, seems to be: give it an email address on any service and a password, and poof, you've got a new account. Post anything you want, follow and DM anyone, and there are no bans, locks, or requests for more info. Maybe they'll lock you out if you misbehave enough, but it seems genuinely hard to hit any limits like that without doing something they don't like.
There is a well funded and well organized COVID-19 misinformation and smear campaign targeted at citizens of the small country I live in. The same is probably true for pretty much any country right now, but in this case we're talking about a target audience of single digit millions with a language that isn't spoken anywhere else.
To make matters worse, these do not seem to be bots but sock puppet accounts, and the tweets they put out seem to be written by humans. The campaign also has ties to certain political parties.
What are some means that I could, as a technical person, assist my journalist friends in digging out some info about these actors and their connections? This article gave some pointers, but did not go into specifics.
I am mostly interested in Twitter activity. The situation is similar on Facebook but I am less interested in that side. Any tools for digging out some Twitter statistics, given a handful of accounts and suspicious tweets to start with?
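One low-tech starting point - a sketch, not a vetted methodology - is to pull basic metadata for the suspect accounts (via the Twitter API or a data export) and score each one against a few crude red flags: very young accounts, extreme tweet rates, lopsided follower ratios, default profile images. The field names below are my own assumption of a simple per-account dict; you'd adapt them to whatever your API client actually returns.

```python
from datetime import datetime, timezone

def suspicion_score(account, now=None):
    """Crude heuristic: count red flags in one account's metadata.

    `account` is assumed (hypothetically) to look like:
      {"created_at": datetime, "tweet_count": int,
       "followers": int, "following": int, "default_profile_image": bool}
    Returns an integer 0-4; higher means more worth a manual look.
    """
    now = now or datetime.now(timezone.utc)
    score = 0
    age_days = max((now - account["created_at"]).days, 1)
    # Very new accounts pushing a coordinated narrative are a common tell.
    if age_days < 90:
        score += 1
    # A sustained high tweet rate suggests full-time operation or automation.
    if account["tweet_count"] / age_days > 50:
        score += 1
    # Following far more accounts than follow back is typical of sock puppets.
    if account["following"] > 10 * max(account["followers"], 1):
        score += 1
    # Never setting an avatar is a weak but cheap signal.
    if account.get("default_profile_image"):
        score += 1
    return score
```

Single-account signals like these are weak on their own; what tends to be more telling for journalists is cross-account structure - clusters of accounts created on the same day, identical posting schedules, or the same handful of accounts amplifying each seed tweet.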
I wouldn't count that as misinformation; it is sincere advocacy. A misinformation campaign would be organized by an intelligence agency for the purpose of weakening another country.
A misinformation campaign is a campaign (=concerted communication effort) to spread misinformation.
Whether the people who spread it know it is in fact misinformation is a different discussion. Whether someone just fed into existing grievances in a clever way another one altogether.
Not exactly a bot misinformation campaign, but interesting, thanks.
I think of QAnon types as being a bit muddle-headed. I doubt they're capable of creating a bot campaign. Russia could, but they don't need to: we do it to ourselves.
An Occam/Hanlon's Razor corollary: don't attribute to bots or shills that which idiocy can fully explain.
There are some useful idiots in the mix, but some of the individuals are obviously doing it as a full-time job. They are also more skilled at it than the average nutcase. It seems to be their job to put words into the mouths of these conspiracy nuts.
I have a suspicion that some of them are funded by an entity backed by a nation-state actor. These individuals have connections to earlier ops whose backing has been revealed.