Second, what's the purpose of this? I can imagine it'll be quite useful to BBG properties like Voice of America, RadioFreeEurope and generally US government PR/propaganda, but how am I supposed to not worry that this will be used for censorship via US-based companies and some future law combating "fake news" and/or "hate speech"?
> how am I supposed to not worry that this will be used for censorship via US-based companies and some future law combating "fake news" and/or "hate speech"?
That's exactly what it is. When a government's filled with extremists, bet on the extreme.
3. You can't. As you build more capabilities to process the data, it becomes more valuable. Incidentally, that is why I think Facebook is so valuable. Cambridge Analytica managed to affect U.S. elections with a small subset of this data.
Billions were spent on ads, so the effect is pretty questionable given how small a piece of that spending they were, and the overall tactics have been in use for at least a decade now.
That said, there's plenty to be mad about, but it's more in how Facebook was sharing everything about you (part of the reason I never made an account there) and less in the wrangling over what a ToS violation means on top of that.
I am pretty sure that they tried and had some effect. It is hard to estimate the magnitude of the effect and probably impossible to prove if they managed to sway the elections.
I am as incensed over Cambridge Analytica's effort to microtarget as I was over OFA's similar effort, using similar data from similar sources ... though Cambridge's source was, from what has been reported, an allegedly dishonest broker, while OFA went to the source and got assistance.
The headline isn't and shouldn't be that Cambridge impacted the election. This is obviously not true. There are other similar claims about other entities impacting the election with positively minuscule ad spends that are even less true than this, yet some seem to take them as gospel.
The headline should be that we all collectively give FB, Twitter, etc. as much information as we wish, are encouraged to provide more, and thus increase our own value as the commodity being sold.
I resisted FB for many years, until I saw it could be used as a communication system for my family (who were not responding to emails/calls). It has some value to me there.
But the sheer scale of the humint gathering and the analytics they are putting in place boggles the mind.
Cambridge used a bad data broker, and did exactly what OFA did 4 years earlier. Mebbe we should focus our anger on this, and demand no microtargeting using socially derived data for elections. I didn't see anyone protesting that in 2012. Why now in 2016?
That's part of an important question to answer ... as we cannot excuse violations of privacy when it goes in a direction we like, versus a direction we don't.
> Mebbe we should focus our anger on this, and demand no microtargeting using socially derived data for elections
Just one out of the many links for the lazy, written just after the 2012 elections: "How Data and Micro-Targeting Won the 2012 Election for Obama - Antony Young-Mindshare North America" (https://www.mediavillage.com/article/how-data-and-micro-targ...) .
Reading it now is just, I don't know how to say it because English is not my mother tongue, but maybe "ghoulish" is the word? That feeling when you watch a B-grade horror movie and you can see the monster is in the house, right in the room next to the victim, but you can't warn the victim because, well, you have no psychic powers. Just copy-pasting some paragraphs from that article (which I had found after a quick Google search) shows that we should have known about this monster for at least six years now; we should have seen that it was in the room right next to us, but we did nothing, we only made it worse:
> How did Obama win? (...) At the heart of these two strategies, was micro-targeting.
> Micro-targeting is the ability to dissect in this case, the voter population in to narrow segments and customize messaging to them, both in on-the-ground activities and in the media. (...) But it was the sophistication and the scale of how they executed this strategy that in the end, proved the knock-out punch for the Democrats.
and especially
> The Obama camp in preparing for this election, established a huge Analytics group that comprised of behavioral scientists, data technologists and mathematicians. They worked tirelessly to gather and interpret data to inform every part of the campaign. They built up a voter file that included voter history, demographic profiles, but also collected numerous other data points around interests … for example, did they give to charitable organizations or which magazines did they read to help them better understand who they were and better identify the group of 'persuadables' to target.
and
> That data was able to be drilled down to zip codes, individual households and in many cases individuals within those households.
and then it gets WTF-y (pardon my French):
> Volunteers canvassing door to door or calling constituents were able to access these profiles via an app accessed on an iPad, iPhone or Android mobile device to provide an instant transcript to help them steer their conversations. They were also able to input new data from their conversation back into the database real time.
> The profiles informed their direct and email fundraising efforts. They used issues such Obama's support for gay marriage or Romney's missteps in his portrayal of women to directly target more liberal and professional women on their database, with messages that "Obama is for women," using that opportunity to solicit contributions to his campaign
Answering that question may be the purpose of this request - since no one knows the state of discourse prior to or after any identified propaganda/ads, it's rather hard to assign any impact. By monitoring the discourse, DHS can then measure changes in sentiment on salient features in order to measure the impact of various propaganda/ad campaigns.
That being said, I think DHS might be going a bit overboard as far as the monitoring goes.
Cambridge Analytica served ads to people. This is something that all candidates, campaigns and their affiliates do. Sure it collected data on people through sketchy means, but the essential part of its work was to serve advertisements.
Similar companies have been employed by virtually every other presidential candidate in the last decade.
We can't be mad at CA without being mad at the entirety of industrial political advertising.
This reminds me of the time when I lost my sense of self for a moment: suddenly the world was just kind of happening and I had no precedence over the cars passing, everything being quite equal. I guess it was a bit of ego-death without any drugs or meditation.
Also interesting that the author mentioned Emptiness. I couldn't help but be reminded of Zen throughout.
Category theory is a formal semantics for (or an alternative to, depending on one's perspective) type theory and thus functional programming. It's been widely used (for example, in Haskell, ML, Agda, Coq, Idris, etc.) not only for formal foundations but to derive many "smaller" practical applications, and many of the creations from that domain have proved useful in other languages as well. Are you unaware of this and asking something else?
It's an extremely good fit and highly productive, to answer your question directly.
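For a concrete taste (a minimal sketch of my own, not anything from a specific library): Haskell's Functor class is a functor on the category of Haskell types, and any polymorphic function between two functors is a natural transformation, whose naturality law holds for free by parametricity.

    import Data.Maybe (listToMaybe)

    -- | A natural transformation from the list functor to the Maybe functor.
    safeHead :: [a] -> Maybe a
    safeHead = listToMaybe

    -- The naturality square commutes automatically (a consequence of parametricity):
    --   fmap f (safeHead xs) == safeHead (fmap f xs)
    main :: IO ()
    main = print (fmap (+1) (safeHead [1,2,3]) == safeHead (fmap (+1) [1,2,3]))

That sort of "free theorem" is the kind of small, practical payoff the categorical view keeps delivering.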
I am aware of these applications. I am not aware of any "smaller" practical applications derived thanks to the categorical interpretation; can you give any examples?
In this paper we describe a functorial data migration scenario about the manufacturing service capability of a distributed supply chain. The scenario is a category-theoretic analog of an OWL ontology-based semantic enrichment scenario developed at the National Institute of Standards and Technology (NIST). The scenario is presented using, and is included with, the open-source FQL tool, available for download at categoricaldata.net/fql.html.
This is part of a series of work on applying category theory to databases. The initial work cast database concepts into categorical ones, which clarified a number of things, such as many kinds of SQL query being instances of limits and colimits. The theory was then used to extrapolate, via category-theoretic concepts, to new database-manipulation concepts.
This is a general recipe for applying category theory, though there are other approaches.
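As a toy illustration of the limits/colimits point (my own sketch in plain Haskell, not the actual machinery of the FQL tool): an inner join is a pullback, i.e. the limit of the diagram a -> k <- b formed by the two key projections.

    -- Toy "tables" joined by computing the pullback of their key projections.
    data Employee = Employee { empName :: String, empDept :: Int } deriving Show
    data Dept     = Dept     { deptId :: Int, deptName :: String } deriving Show

    -- | Pullback of two finite sets of rows along key-projection functions:
    --   the limit of  a --f--> k <--g-- b,
    --   which is exactly  SELECT * FROM a JOIN b ON f(a) = g(b).
    pullback :: Eq k => (a -> k) -> (b -> k) -> [a] -> [b] -> [(a, b)]
    pullback f g as bs = [ (a, b) | a <- as, b <- bs, f a == g b ]

    main :: IO ()
    main = mapM_ print (pullback empDept deptId
                          [Employee "Ada" 1, Employee "Grace" 2]
                          [Dept 1 "Research", Dept 2 "Engineering"])

Once joins are seen as limits, the categorical machinery (functors between schemas, adjoint data migrations) suggests new operations rather than just restating old ones.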
I don't have any papers on this hard drive, but think databases, grammar formalisms, and generally many data structures (you might know about monads, comonads, arrows, zippers, etc.). If you start digging on Hackage, you'll find many interesting data structures, many of which were derived categorically or at least algebraically (the packages often cite the papers that inspired their implementation).
Obviously all the work can be done without category theory, but since the mid-2000s I gather that many insights have been gained by exploring various categories and their relations.
Edit: I think one of the main benefits is efficiency. Even if you don't start from categorical formalisms, you can later use them to pare things down to what's absolutely necessary.
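To make the zipper example concrete, here's a minimal sketch of my own (the names aren't from any particular Hackage package): the list zipper is what you get by taking the "derivative" of the list type's polynomial functor, per McBride, and it buys you O(1) edits at a movable focus.

    -- A list zipper: elements to the left of the focus (stored reversed),
    -- the focused element, and elements to the right.
    data Zipper a = Zipper [a] a [a] deriving Show

    -- Move the focus one step left or right, if possible.
    left, right :: Zipper a -> Maybe (Zipper a)
    left  (Zipper (l:ls) x rs) = Just (Zipper ls l (x:rs))
    left  _                    = Nothing
    right (Zipper ls x (r:rs)) = Just (Zipper (x:ls) r rs)
    right _                    = Nothing

    -- Replace the focused element in O(1), which a plain list traversal can't do.
    modify :: (a -> a) -> Zipper a -> Zipper a
    modify f (Zipper ls x rs) = Zipper ls (f x) rs

    main :: IO ()
    main = print (modify (* 10) <$> right (Zipper [] 1 [2, 3]))

The same differentiation recipe mechanically yields zippers for trees and other algebraic data types.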
People already pointed out that a) Sweden wasn't very poor in the first place; b) it didn't participate much in conflict during that time; and, most importantly, c) technology was extremely influential in the sense that there was simply more to go around.
I want to point out that I think (c) is by far the most important factor, yet the post ignores it; it also ignores lesser factors such as the widespread participation in, and the effects of, labor unions.
I see the post as grossly oversimplifying (by conveniently ignoring many other historical factors), while, the last time I checked, this is something still very much debated by modern economists.