Network structure, fake news, and why nice people don’t go Nazi

Note, 15th January 2019: A similar study to the one quoted here has been retracted after someone attempting to replicate the result found a bug in the authors’ code. This one has not, but it is probably worth bearing in mind. Retraction Watch coverage.

So how might one of those Cambridge Analytica (CA) campaigns work? Here are some links.

This writeup of a new paper tells us a couple of things. The first is about network structure. The MIT researchers quoted found that the high-degree nodes – the most important users – were much more likely to forward real news than fake.

This is interesting. A common way to think about the unusually important nodes in a scale-free network (approximately scale-free, anyway) is that they are signal-boosters: if a message hits one of them, it goes viral. In this view, the main service they provide to the community is as an aggregator of connections. This tends to suggest that they either impose their views on the community in normal times, or else end up being the vectors of viral change.
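Here is a minimal sketch of that intuition, using networkx to build a Barabási–Albert (scale-free) graph and comparing how far a message spreads when seeded at a hub versus at a random node. The forwarding probability and graph parameters are my own illustrative assumptions, not figures from the paper.

```python
# Illustrative only: compare cascade reach from hub vs random seeds
# in a scale-free (Barabási–Albert) network. All parameters invented.
import random
import networkx as nx

def cascade_size(g, seed, p_forward=0.05):
    """Independent-cascade spread: each exposed node forwards once,
    reaching each neighbour with probability p_forward."""
    seen, frontier = {seed}, [seed]
    while frontier:
        node = frontier.pop()
        for nb in g.neighbors(node):
            if nb not in seen and random.random() < p_forward:
                seen.add(nb)
                frontier.append(nb)
    return len(seen)

g = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)
hubs = sorted(g.nodes, key=g.degree, reverse=True)[:10]
randoms = random.sample(list(g.nodes), 10)

print("hub seeds:   ", sum(cascade_size(g, s) for s in hubs) / 10)
print("random seeds:", sum(cascade_size(g, s) for s in randoms) / 10)
```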

The result quoted suggests, though, that filtering is at least as important, and maybe more so. On Twitter, a significant fraction of the messages coming into an important node will never be seen at all, and the choice of which ones are seen is biased towards whatever draws a response. Key users, on this evidence, tend to forward the good stuff and drop the crap. This implies they are stabilising rather than destabilising actors.
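One way to picture that filtering role is a toy model in which the node sees only the most engaging slice of its inbox and forwards only what passes its own quality judgment. The capacity and threshold numbers below are assumptions of mine, not the study’s.

```python
# Toy model of a key node as a filter: it sees only the top-k most
# engaging incoming messages and forwards only those it judges good.
import random

def node_forwards(inbox, attention_k=20, quality_bar=0.6):
    # Attention limit: only the most engaging messages get seen at all.
    seen = sorted(inbox, key=lambda m: m["engagement"], reverse=True)[:attention_k]
    # Editorial judgment: forward the good stuff, drop the crap.
    return [m for m in seen if m["quality"] >= quality_bar]

inbox = [{"quality": random.random(), "engagement": random.random()}
         for _ in range(500)]
forwarded = node_forwards(inbox)
print(f"{len(inbox)} messages in, {len(forwarded)} forwarded")
```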

So, our message is getting filtered out at key nodes. What to do? One approach would be classic influencer-management – identify them, reach out to them, and convince them. Or bribe or suborn them. But these options have their problems. There is another way. Let’s have a look at that second point.

The second point is that fake news, in general, had a higher probability of being forwarded than real news – in the study, falsehoods were something like 70% more likely to be retweeted than the truth. In other words, it was a cat blindfold: precisely because it was fake, it could be optimised for response better than anything that needed to respect external constraints. This is Harry Frankfurt’s criterion of bullshit – indifference to the truth is exactly what frees you to optimise for effect. It’s also, according to Stick It Up Your Punter, the classic way to win at News International, and why so many Murdoch editors were features writers rather than reporters.
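Back-of-envelope, a 70% per-hop advantage matters more than it sounds, because spread compounds. Treat a cascade as a branching process: if each exposure of a true story yields R reshares on average and fake runs at 1.7R, a subcritical truth can sit right next to a supercritical fake. The R values in this sketch are invented for illustration.

```python
# Branching-process sketch: a 70% per-hop reshare advantage can push
# a cascade from fizzling to exploding. R values are illustrative.
def expected_cascade(r, max_gens=50):
    """Expected total exposures over max_gens generations,
    starting from one exposure, with mean r reshares per exposure."""
    total, gen = 0.0, 1.0
    for _ in range(max_gens):
        total += gen
        gen *= r
    return total

r_true = 0.8              # subcritical: the cascade dies out
r_fake = 1.7 * r_true     # = 1.36, supercritical: the cascade grows
print(f"true news, R={r_true}: ~{expected_cascade(r_true):.0f} exposures")
print(f"fake news, R={r_fake:.2f}: ~{expected_cascade(r_fake):.2e} exposures")
```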

Filtering has limited capacity. In this gloss on Daniel Kahneman, I summarised a lesson from his work as “target the depleted and deplete the targeted”. One way to defeat the filter nodes is to overwhelm them with sheer quantity: only so many messages ever get seen, and flooding is one way to get yours in there. Another option is to drain the human being behind the node by escalating drama and conflict. The two are complementary and synergistic – dramatic conflict generates volume, and dramatic conflict at scale is all the more depleting.
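A sketch of the flooding half (again with toy numbers of my own): if the filter only ever sees the top k messages in its inbox, padding the inbox with engagement-optimised junk shrinks the odds that any given organic message makes the cut at all.

```python
# Toy flooding model: the filter node still only sees top-k messages,
# so engagement-tuned junk crowds organic messages out of view.
import random

def organic_seen_fraction(n_organic, n_junk, attention_k=20,
                          junk_boost=0.3, trials=2000):
    seen_organic = 0
    for _ in range(trials):
        organic = [("organic", random.random()) for _ in range(n_organic)]
        # Junk is optimised for response, so it scores systematically higher.
        junk = [("junk", random.random() + junk_boost) for _ in range(n_junk)]
        top = sorted(organic + junk, key=lambda m: m[1], reverse=True)[:attention_k]
        seen_organic += sum(1 for kind, _ in top if kind == "organic")
    return seen_organic / (trials * n_organic)

print("no flood:", organic_seen_fraction(100, 0))
print("flooded: ", organic_seen_fraction(100, 400))
```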

How to deliver this? Well, it turns out that 0.1% of the accounts push 80% of the fake news and 1% push 100% of it. In an important way, a community is a fine balance between amplification and filtering. If we could somehow coordinate that 1%, or create more of them by manufacturing bots, we could make life a misery for the well-known nodes in the network core and…
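That kind of concentration is roughly what you would expect when posting activity is heavy-tailed. A quick sketch – the Pareto exponent here is my assumption, and the realised fraction varies from run to run:

```python
# Sketch: heavy-tailed sharing concentrates output in a tiny core.
# The Pareto shape parameter is an assumption, not a measured value.
import random

shares = sorted((random.paretovariate(1.1) for _ in range(100_000)),
                reverse=True)
top = shares[:100]  # the top 0.1% of accounts
print(f"top 0.1% of accounts produce {sum(top) / sum(shares):.0%} of shares")
```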

So how do we do that? Well, there’s this. Creepy love scientist Dr Spectre’s dank app implemented screening for a range of factors: basic demographics and the Big Five psychological traits, but also the so-called dark triad and, fascinatingly, a list of “sensational interests”.

It’s fairly obvious that “militarism – guns, shooting, crossbows, knives”, “violent occultism – drugs, black magic, paganism”, and “credulousness – the paranormal, flying saucers” are going to be positively weighted in our model. Presumably “intellectual activities – singing and making music, foreign travel, the environment” and “wholesome activities – camping, gardening, hillwalking” are the opposite. Also, he was interested in who believed in star signs.
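As a sketch of how such screening might be wired up, here is a logistic score over the factors named above. Every weight is my own guess, purely for illustration; this is not Kogan’s actual model.

```python
# Hypothetical susceptibility score over the screening factors named
# above. Every weight is an illustrative assumption, not the real model.
import math

WEIGHTS = {
    "militarism": 0.9,             # guns, shooting, crossbows, knives
    "violent_occultism": 0.8,      # drugs, black magic, paganism
    "credulousness": 0.7,          # the paranormal, flying saucers
    "believes_star_signs": 0.5,
    "dark_triad": 0.6,             # narcissism, Machiavellianism, psychopathy
    "intellectual_activities": -0.6,  # music, foreign travel, the environment
    "wholesome_activities": -0.7,     # camping, gardening, hillwalking
}

def susceptibility(features, bias=-1.0):
    """Logistic score: a probability-like susceptibility in [0, 1]."""
    z = bias + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

print(susceptibility({"militarism": 1, "credulousness": 1,
                      "believes_star_signs": 1}))
print(susceptibility({"wholesome_activities": 1,
                      "intellectual_activities": 1}))
```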

I guess Dorothy Thompson was right: nice people don’t go Nazi.

Now, this just leaves us with the choice of an aim – the heart of strategy. I am fascinated to see that most of the CA campaigns we know anything about were intended to discourage people from voting. Much of the conversation about this whole issue gets stuck on the point that nothing they produced was likely to win anyone over from D to R or vice versa, but that is just a statement of how wedded so many people are to the median-voter theorem, and to the associated model of advertising in which you aim to reach as many qualified leads as possible.
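There is also some simple arithmetic behind preferring demobilisation. Flipping a voter moves the margin by two votes where a stay-home moves it by one, but if demoralising content lands much more often than persuasion aimed across the spectrum, suppression wins per impression. All the rates below are invented for the sake of the comparison.

```python
# Illustrative margin arithmetic: persuasion vs demobilisation.
# Both per-impression rates are invented assumptions.
impressions = 1_000_000
p_flip = 0.0001        # persuading across the spectrum is hard
p_demobilise = 0.0008  # demoralising the other side's voters is easier

margin_from_flips = impressions * p_flip * 2            # a flip swings by 2
margin_from_stay_homes = impressions * p_demobilise * 1  # a stay-home by 1

print(f"persuasion:     {margin_from_flips:.0f} votes of margin")
print(f"demobilisation: {margin_from_stay_homes:.0f} votes of margin")
```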

If you aren’t interested in convincing people in the middle of the spectrum, though, a lot of constraints are relaxed. The Cultural Cognition Blog makes a really interesting point here.

“Its Facebook postings are not about the facts relating to one or another policy but about the facts of social proof—who is fighting whom over what.”

Advertising and campaigning aim to convince by advancing their version of the facts. Propaganda aims to mobilise by advancing its version of social consensus. Disinformation aims to demobilise and disorient by advancing its version of social conflict. If it’s really that bad and idiotic out there, maybe I should just nope out and stay at home? Here’s a case study of just that.
