Category: hacker

There is no such thing as a “UK national security number”

What are national security numbers? Why would you think TalkTalk would have some? Why would you believe anyone who claims to have them is the real deal, when:

  • The UK doesn’t have a thing called a Social Security Number
  • Nor does it have a thing called a National Security Number
  • It does have a National Insurance Number
  • NINos aren’t used like SSNs – we don’t use them as general-purpose identifiers

Clearly someone here thinks the UK has something like a US social security number and uses it in the same way, but vaguely realises it’s not called that, although not enough to ctrl+K and look it up. Or they know they don’t have a trove of TalkTalk users’ NINos, they’re bullshitting, and our “former cybercrime detective” didn’t notice the difference.

Update: Boy, 15, arrested in Ulster.

if you can’t spell this you might be a troll

So I got round to reading the original paper about automatically predicting who’s likely to be a troll. This was always likely to be fun:


Defining trolls as those who get banned for trolling, a pragmatic solution if nothing else, they obtained a large corpus of comments from three high-volume sources, CNN, a gamer news site, and Breitbart. (Clearly they weren’t about to risk not finding enough trolls.) They paid people to classify the comments on various metrics, and also derived a lot of algorithmic metrics, and used this to train a machine learning model to guess which users were likely to be banned down the line.

The results are pretty fascinating. For a start, there are two kinds of troll – ones who troll-out fast, explode, and get banned, and ones whose trollness develops gradually. But it always develops, getting worse over time.

In general, we can conclude that trolls of all kinds post too much, they obsess about relatively few topics, they are often off topic, and their prose is unreadable as measured by an automated index of readability. Readability was one of the strongest predictors they found. They also generate lots of replies and monopolise attention.
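The readability metric is worth making concrete. Here is a minimal sketch of the Automated Readability Index – my choice of automated readability measure, not necessarily the exact one the authors used:

```python
import re

def automated_readability_index(text):
    """Automated Readability Index: roughly the US school grade level
    needed to read the text; higher scores mean harder going."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    if not words or not sentences:
        return 0.0
    # Count letters/digits only, stripping trailing punctuation
    chars = sum(len(w.strip(".,!?;:'\"")) for w in words)
    return (4.71 * chars / len(words)
            + 0.5 * len(words) / len(sentences)
            - 21.43)
```

Long words and endless run-on sentences push the score up, which is exactly the kind of signal a classifier can pick up cheaply.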


Not surprisingly, predictions are harder the further the moment of the ban is into the future. However, the classifier was most effective looking at the last 5 to 10 posts – it actually lost forecasting skill if you gave it more data. Fortunately, because trolling is a progressive condition that tends to get worse, scoring the last 10 comments on a rolling basis is a valid strategy.
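Scoring the last ten comments on a rolling basis is also trivially cheap to implement. A sketch, assuming you already have a per-comment trollishness score from some classifier – the function name and the scores are hypothetical:

```python
from collections import deque

def rolling_troll_score(comment_scores, window=10):
    """For each position in a user's comment stream, return the mean of
    the last `window` per-comment trollishness scores. A deque with
    maxlen handles the window eviction for us."""
    buf = deque(maxlen=window)
    out = []
    for score in comment_scores:
        buf.append(score)
        out.append(sum(buf) / len(buf))
    return out
```

Because the paper found the classifier works best on the most recent 5–10 posts, the fixed window isn’t just a performance hack – it’s where the forecasting skill actually lives.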

Their algorithm, in the end, identified trolls with about 80% reliability. Very interestingly indeed, its performance didn’t suffer much if it was trained against normal below-the-line noise and then used on gamergate, or if it was trained against gamers and then used on libertarians (perhaps less of a surprise), or whatever. The authors argue that this is an indication that it’s picking up some kind of pondlife tao, an invariant essence of disruptive windbag.


The really interesting bit, though, was when they got to the feedback-loop between potential trolls, moderators, and the civilian population. You might think that being able to identify potential trolls within the first 5-10 comments presents the possibility of an early-intervention strategy. My own experience with Fistful of Euros back when it had 150-comment threads about the Middle East was that explicit early warnings – yellow cards – often worked. They found, however, that earlier and more aggressive intervention from moderators and other users was correlated with faster escalation. Specifically, those who had posts deleted early saw their readability index scores worsen rapidly, one of the strongest markers of trollness.

Now, you might say this doesn’t matter. Just stick the OTO Melara 76mm Super Rapid in automatic close-in defence mode and let the machines do the work!

But there’s a serious issue here and it’s our old pal, the Terroriser algorithm. They make the excellent point that 80% is pretty good but it’s a lot of false-positive results. Given that the principal components we mentioned above are basically conventional norms of discursive civility, there’s also the problem that our filter might be both racist and snobbish. The fact it worked well across dissimilar communities, though, is encouraging.

The distinction between fast and slow trolls – Hi-FBUs and Lo-FBUs in the paper – also suggests that there’s something going on here about different strategies of anti-social behaviour. Perhaps trolls with more cultural capital adopt strategies of disruption that allow them to persist longer and do more damage? More research, as they say, is needed. That said, I wouldn’t write off early intervention completely, and neither do the authors – the question may just be one of optimisation.

Obviously what it needs now is an implementation.


I am really impressed both by this OpenNews post about how to tackle a huge pile of documents, and also by the tools recommended. After all:

What I received a month later from Nash County, N.C., were two boxes filled with thousands of printed pages of emails. Double-sided.

One of the problems it solves is that your filesystem is usually very, very good at finding files, on all kinds of criteria, and fast – just look at any unix/linux find examples page – but that presupposes that the information you have is broken out into files whose boundaries map roughly to a logical structure within the underlying data.

One of the best things is also the simplest: Overview has a feature that pulls a randomly selected sample of documents.
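Random sampling gives you an unbiased, manageable slice of a huge pile before you commit to reading any of it. That’s all Overview’s feature does conceptually; this standalone sketch is my own, not Overview’s code:

```python
import random

def sample_documents(doc_ids, k=20, seed=None):
    """Pull a random sample of up to k documents from a collection.
    A seeded Random instance makes the sample reproducible, which
    matters if you want a colleague to read the same slice."""
    rng = random.Random(seed)
    return rng.sample(doc_ids, min(k, len(doc_ids)))
```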

The blog is crazy good, too. Interestingly, I remember IBM announcing their big investment in big data the other year and giving “Computational Journalism” as one of the use cases.

Did I say the blog was good? The blog is good.


Look what our reader Dan O’Huiginn is up to!

I’ve always thought a great extension for DocumentCloud would be a plugin that generates a concordance of the documents, as it still strikes me as a big heavy way of just dumping out a lot of scanned-jpeg PDFs, which is what most people do with it.

Project Lobster user stories

OK, a bit of Lobster. Two things have happened recently to up my tuit on the project. First, I learned that Drew Conway of the Zero Intelligence Agents blog has given NetworkX the ability to generate a force-directed graph in d3.js, which you can stick right in a web page. Second, I’ve been reading the Flask docs and falling in love. So now, it’s got a github repo and structure and I have a pretty good idea of how to build it, and I took some decisions.
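The NetworkX-to-d3 step boils down to serialising the graph as the node-link JSON structure d3’s force layout consumes – NetworkX ships `json_graph.node_link_data()` for exactly this. Here’s a dependency-free sketch of the same transformation, with the function name my own:

```python
def to_node_link(adjacency):
    """Convert an adjacency mapping {node: [neighbours]} into the
    {"nodes": [...], "links": [...]} structure that d3.js force-directed
    layouts consume (the same shape NetworkX's node_link_data emits)."""
    nodes = sorted(adjacency)
    index = {name: i for i, name in enumerate(nodes)}
    links, seen = [], set()
    for src, neighbours in adjacency.items():
        for dst in neighbours:
            edge = tuple(sorted((src, dst)))
            if edge not in seen:           # emit each undirected edge once
                seen.add(edge)
                links.append({"source": index[edge[0]],
                              "target": index[edge[1]]})
    return {"nodes": [{"name": n} for n in nodes], "links": links}
```

Dump the result with `json.dumps()` and d3 can bind it straight into a force layout in the page.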

Lobster has basically 3 user stories. These are:

“Explore and investigate”

I ooh and aah over the network graph. I search for ministers I hate. I look up issues, lobbies, or subjects I am interested in. I drill down to more detail. Then I get angry, and lobby back via WriteToThem etc.

“Rouse the mob”

I notice something outrageous. I customise the data presentation in order to make my point and to see the issue more clearly. I get a URI to this view, and spread it to everyone I know.
for who in whoville.whos:
    tweet(my_point)

“Enduring stare/fix in place by surveillance”

I pick lobbies, ministers, or subjects I am interested in. I customise the data presentation to understand the problem better and identify significant events. I register alerts to tell me when something happens.

Obviously, these three elements make up a larger message. Once the mob has been roused, it’s important to monitor the results (the Health & Social Care Bill “listening pause” being a case in point). Further, an alert going off is a cue to investigate further, that leads to a call to action.

Anyway, the next to-do is to rework a pile of ugly code from the analytics scrapers. Also, this is a pretty sweet way of plotting data and close to what I want.

Lib Dems: not quite useless

So, Wired writes up three West Point professors and their algorithm to decide which members of a terrorist network to zap. Apparently they implemented it in 30 lines of Python. The paper is here, with some pseudocode and the tantalising hint that they used NetworkX, but no Python. However, even the Wired piece tells us enough to reverse engineer it.

The key idea here is that whacking terrorist leaders is often stupid, because it causes the enemy to adopt a flatter, more decentralised, and therefore less vulnerable network structure. Also, they point out, the leaders are often forces for restraint and points of contact for negotiation.

Being who they were, they decided that they could fix this with a better optimisation. They looked at the network-wide degree centrality, a measurement of the centralisation or otherwise of the whole network which is defined as the fraction of total nodes in the network an average node is connected to. They then asked how this changed when they removed a node from the network. And they reasoned that increasing it was desirable, as it rendered the network overall more fragile and unstable.
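The idea is easy to sketch without any graph library. Under the paper’s logic you want the node whose removal most increases network-wide degree centrality; this toy version uses plain adjacency dicts, and the names are mine rather than theirs:

```python
def mean_degree_centrality(adjacency):
    """Network-wide degree centrality: the average fraction of the
    other nodes that each node is directly connected to."""
    n = len(adjacency)
    if n < 2:
        return 0.0
    return sum(len(nbrs) / (n - 1) for nbrs in adjacency.values()) / n

def centrality_gain_on_removal(adjacency, node):
    """How much network-wide centrality rises if `node` is removed.
    Positive means the network gets more centralised - more fragile -
    without that node; negative means the node was holding the
    centralisation down."""
    before = mean_degree_centrality(adjacency)
    reduced = {k: [v for v in nbrs if v != node]
               for k, nbrs in adjacency.items() if k != node}
    return mean_degree_centrality(reduced) - before
```

On a star-shaped network, removing a leaf pushes centrality up (desirable, by their lights), while removing the hub collapses it – which is the formal version of “don’t whack the leader”.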

Now, the Lobster Project uses weighted betweenness centrality – the fraction of the shortest routes through the network that pass through a given node, with more important nodes being accounted for as such – as its centrality metric. There is no particular reason to think that this would work differently.

So I thought I’d implement it. Their implementation used 30 lines, but I presume that includes the test harness to generate or load a specimen network as well as the analysis. Here goes:

def greedy_fragile(mgraph, month, mini, nodes):
    # Mean degree centrality across the whole network, minister included
    network_wide_centrality = float(sum(nodes.values())) / len(nodes)
    # Work on a copy: remove_node() also drops the node's edges,
    # which add_node() alone wouldn't restore
    reduced = mgraph.copy()
    reduced.remove_node(mini[0])
    n = centrality_nodes(reduced)
    nwc = float(sum(n.values())) / len(n)
    return {'Minister': mini[0], 'Title': mini[1]['Title'],
            'Department': mini[1]['Department'], 'Date': month,
            'Greedy_Fragile': network_wide_centrality - nwc}

mgraph is the NetworkX graph object, month is the month, mini is the minister (or lobby), nodes is the precomputed list of nodes and their centrality values. Obviously, if it wasn’t for the weird datastore thing I’d have done this recursively and made it return the values for the whole network rather than calling it for each node.

And it works. The first result was that one particular minister was slightly reducing the overall centralisation (and therefore fragility/instability) of the system as a whole. And he’s Ed Davey. As the point of having Lib Dems is meant to be reducing the centrality of Dave from PR and paddock-boy in the system, this suggests that we shouldn’t get rid of him yet.

Theme: Cyber-Oddness

Well, this is a story. Who hacked the French presidency? The original source of the story is Le Telegramme de Brest, a bit of a surprise but not the first time a really crazy news story got out in the regional press first. It suggests the attack took place at some point during the transition from President Sarkozy to President Hollande, between the 6th and the 15th of May, and the presidential transition was used as a cover story for the clean-up operation.

This piece in L’Express is mostly boilerplate “cyberwar”, but it does give some details of the exploit and points the finger…at the United States. Now, I’ve no idea how they can be so sure, but there is some actual information in there.

Apparently, the exploit consisted of three steps. The first was a version of the now-classic spearphishing attack. Several officials were sent a message on Facebook, presumably crafted for them, inviting them to follow a link, which led to a fake version of their intranet’s login page. This harvested their login credentials. The second step used the logins to deploy the Flame worm to the Élysée’s network. Flame would compromise some of the computers, which could then be searched for interesting information.

The reasoning is, apparently, that Flame was based on Stuxnet and everyone knows Stuxnet was the Israelis and therefore that’s the same as the Americans. I paraphrase a bit. I would argue that, based on what we actually know, it’s a best-of-breed solution, with one element (the spear-phishing) that is stereotypically associated with the Chinese (like so), and another (the code from Stuxnet) that originates with someone who doesn’t like the Iranians, and further work (the development from Stuxnet to Flame) from a third party.

This is completely normal for malware development, as it is for real viruses (how long before we start talking about “genetic” viruses to force the distinction?), and this is why “attribution” is difficult. Oh yes, and don’t distribute links to documents inside the firewall on Facebook!

Meanwhile, it seems someone nicked the entire Greek ID card database, near enough, and then there was the whole crazy-weird GPS timing/NTP bug incident, where the stratum 1 time sources run by the US Naval Observatory (yeah, where Dick Cheney used to live) stopped working, as did NIST’s time source, NTP servers reacted in weirdly different ways from how they’re meant to, and for a while the NIST GPS archive didn’t show any data.

A Project Lobster progress report!

So I completely forgot I needed to register for OKFN’s Open Interests Europe hackathon last weekend, which even had a lobbying track, and just round the corner from the office, too.

I decided to have my own lobbying hackathon by eating pizza and caffeine pills and being misogynistic, spending my weekend finishing the Lobster Project’s analytics scrapers for ministers and lobbies respectively. I abandoned the plan of generating NetworkX objects and storing them in the database for later use in favour of directly generating them and reading out the metrics, and dealing with the performance hit by writing slightly less horrible code.

Specifically, I decided to optimise for fewer calls to the database API. Memoising the rankings function cuts its usage from two calls a meeting to 82 for the first month, plus any future changes, and storing the cache itself means that only new combinations of ministers and titles generate a query in future runs. Getting all the lobbies for the month in one query, and then processing them in Python using itertools, replaces one query for each meeting with one admittedly complex query per month and a small function.
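The memoisation itself is nothing exotic: a dict keyed on (minister, title), persisted between runs, so only unseen combinations hit the datastore. A toy sketch with the datastore query faked out – all the names here are hypothetical, not the actual scraper code:

```python
# Stand-in for the real rankings lookup, which hits the ScraperWiki
# datastore; the counter lets us see how often the "query" really runs.
query_count = {"n": 0}
_rank_cache = {}

def expensive_rank_lookup(minister, title):
    query_count["n"] += 1          # pretend this is a datastore query
    return hash((minister, title)) % 100

def ranking(minister, title):
    """Memoised wrapper: only new (minister, title) combinations ever
    trigger the expensive lookup."""
    key = (minister, title)
    if key not in _rank_cache:
        _rank_cache[key] = expensive_rank_lookup(minister, title)
    return _rank_cache[key]
```

Pickle `_rank_cache` at the end of a run and reload it at the start of the next, and repeat runs cost almost nothing.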

This still took far longer than I expected to run, but then I realised there was more data.

Anyway, they work and they are generating results by month, so we will be able to draw nice time series charts, up to September 2011. Unfortunately, the ScraperWiki datastore is doing something quite weird – replacing float values with nulls or zeroes – and although I thought I might have fucked up type declarations, pragma tells me that the column types are what they ought to be. So I’ve got a query outstanding with the ScraperWiki folk.

Netroots UK catchup

Other stuff from Netroots UK.

Having chugged through my official Brown Bag Lunch (which actually included Ribena, in a disturbingly infantilising touch), I went to the open space group on the Leveson inquiry. This ended up merging with the one on the LIBOR scandal. I was able to contribute by knowing how the LIBOR panel was meant to work, although we couldn’t get away from the point that separating investment and retail/commercial banking wouldn’t have helped because BarCap was big enough in its own right to be on the panel.

One point which everyone thought would resonate was that the scandal represented an attack on an institution that had relied on its members’ fair dealing. Exactly what to do with it, though, was harder. Could this support the Co-operative’s claim to buy the branches demerged out of Lloyds? Or a Leveson inquiry, but with banks? Of course there have already been inquiries, but then, the original ideal type of this kind of inquiry, the Pecora Committee, wasn’t the first inquiry or even the second into Wall Street in the 1920s.

What else? I went to one of the more tech-centric workshops, run by Blue State Digital. This was pretty good; I liked the point that Facebook advertising was usually a “hopeless waste of £2.50”, but it did have its uses. Those weren’t anything Facebook would want, though. Specifically, the ad-targeting tool lets you get a quick estimate of the size of a potential audience – input the demographics, locations, and search strings you’re interested in, and it spits out an estimate of your audience.

The other one was using it to bait your enemies. If you had a reasonable amount of information, you could place an ad that your target would have to read every time they logged in. This amused me more than a little.

Everyone, but everyone, loves ScraperWiki.

What else? WhoFundsYou scored thinktanks by the degree to which they are forthcoming about their funding. Astonishingly enough, Respublica, the “Not the Other” TaxPayers’ Alliance, and the Adam Smith Institute (no less) got an E. The very, very serious Centre for Policy Studies and Institute for Economic Affairs, and the somewhat less serious but certainly influential Policy Exchange and Centre for Social Justice got a D. You could have mistaken the score-card for a left-right political spectrum, as IPPR, Progress, Resolution Foundation, NEF, SMF, and Compass all got As, while Demos, Reform, the Fabians, and Policy Network got Bs. CentreForum was, superbly, right in the centre with Civitas and the Smith Institute.

It is telling that the distinction between wanktanks like Respublica and TPA and the Very, Very Serious ASI disappears on this scale.

Owen Jones has a lot of good laugh lines. The BSD people are good but self-satisfied. Clifford Singer is funny. I really regret missing the workshop on shooting better video on smartphones as I have zero video skills (even if their live demo was the traditional fiasco). You can’t hear anyone speaking anywhere in Congress House without using a loud hailer.