The theory of the eccentric billionaire, and why politicians got so awful

Chris Dillow asks why politicians seem to be so hopeless today. I tend to think of this as a consequence of the great divergence, the gapping-out of inequality across the developed economies since the 1980s. Why? Consider this guy. That’s right – an eccentric billionaire, giving away fivers!

This used to be a stock cartoon-strip trope, almost certainly an atavism from the Edwardian first age of globalisation via the institutional history of DC Thomson. But by the time the Thomson comics popularised him in the 1930s, his days were already numbered. WW1 saw the UK Gini coefficient radically cut back. It recovered in the 1920s, but never quite hit the same heights, and fell steadily thereafter. Interestingly, the Chamberlain managed economy seems to have done a surprising amount of the work of the great compression.

The chart is from this excellent post. That was then, though. Now the monster is baaack. One of the most interesting, and damning, features of the great divergence is its weird fractal quality. As well as the 1% streaking ahead of the 99%, the 0.1% rocketed ahead of the 1%, and the 0.01% even more so. Payoff structures in the FIRE sector, board-level executive compensation in Forbes 2000 companies, and a whole lot of other ways rich people make money displayed the same fractal hockey-stick pattern.

I think our explanation might be in here. Marx wrote that the state was nothing but an executive committee for managing the common affairs of the bourgeoisie. For many years you could operationalise this by saying that conservative parties’ material basis rested on their ability to get enough company directors and top managers to contribute their time and money to the cause. Conservative politicians therefore faced a mechanism of responsibility with multiple triggers. They needed to keep the directorate, in general, happy rather than suck up to one single individual actor. This is harder.

Extreme right-tail inequality has essentially destroyed this mechanism of responsibility. There are now quite a lot of private fortunes around whose discretionary spending power is large compared with the cost of political campaigning. Although US politics is notoriously expensive, the unit-size is still less than a billion dollars. Also, as Donald Trump demonstrated, it turns out to be possible to save on some of the biggest line items, like TV advertising and political consultancy. Rather than needing to make credible commitments to a significant fraction of the directorate, political entrepreneurs can now concentrate on finding themselves a couple of big individual donors who share their special interests or particular obsessions – eccentric billionaires, in a word. It is easier to suck up to one man’s ego than it is to please a conference centre full of subtly different economic interests, and the best way to do so will often be to flatter his eccentricities. (His? Very often, but ask an Australian about Gina Rinehart.) This is a structural reduction in political accountability. Worryingly, countries like the UK where politics has been kept cheap by regulatory intervention are more rather than less vulnerable to this.

It may even be the case that the set of potential donors is more likely than the wider population to be crackpots. Very large financial wins are associated with either contrarian choices – short when everyone else is long, or hopping onto Apple when everyone else wanted Nokia – or making the same choice as everyone else, but in size. Both amount to a greater appetite for risk. A lot of people who like taking risks are like that because they are crazy. Another large group of the risk-loving are like that because they are so soaked in privilege they don’t need to care. Neither group are likely to know when they are wrong, one out of craziness and the other out of complacency or mediocrity.

Over the long term, risk-loving individuals ought to crash and burn, but in a high-inequality economy, there is another option – you can cash in your chips…and try your crazy theories in politics. The representatives of Mill’s “stupid capital of the country” have been replaced by the representatives of its crazy – or rather, unaccountable or autocratic – capital. Interestingly, Dwight D. Eisenhower noticed a similar phenomenon with regard to Texan oil investors, maybe not dissimilar in their payoff structure or politics to today’s creepy VCs.

Demand-determined productivity

A quick thought after this twitter conversation:

There is quite a large set of firms for which productivity is fundamentally demand-driven. This is a distinct point from Verdoorn’s law, the argument that productivity improvement is most likely to happen when firms are producing close to capacity. Consider me writing a research note about productivity, which is apparently what I do instead of writing blog posts about productivity, because society considers the first activity to have measurable productivity and therefore pays my gas bill.

Productivity is defined as outputs divided by inputs. The most common form of the metric is labour productivity: output per hour worked. So the production cost of that report is just what it cost to employ me for the time I spent working on it. That’s the input. The output is going to be the price Ovum charges for it, multiplied by the number of copies we sell. Actually, most copies are part of a wider subscription to our research and some are free as part of consulting projects, but we can get around that by saying it’s the weighted average price multiplied by the volume. (If you are enough of a monopolist to set your prices where you want, there’s a whole other ball of wax here.)

It’s therefore obvious that the more copies of the report that go out, the higher productivity will be. Measured productivity is exactly proportional to demand. This is true whether Ovum responds to rising demand by selling more copies or by raising its prices. This seems silly, but it’s inherent in the definition of productivity. It’s also something you logically have to believe if you believe demand and supply are meaningful concepts and you are trying to be consistent. If people want this research, that probably says something good about it, and any productivity metric needs to take account of that.
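To make the arithmetic concrete, here’s a toy sketch in Python – all the numbers are invented, not Ovum’s actual prices or my actual hours – showing that with a fixed labour input, measured labour productivity moves one-for-one with demand:

```python
# Toy model (all numbers invented): one research report, fixed labour input.
HOURS_WORKED = 80   # time spent writing the report
PRICE = 500.0       # weighted average price per copy

def labour_productivity(copies_sold: int) -> float:
    """Output (revenue) per hour worked."""
    output = PRICE * copies_sold
    return output / HOURS_WORKED

# Double the demand, double the measured productivity:
assert labour_productivity(200) == 2 * labour_productivity(100)
print(labour_productivity(100))  # 625.0
```

The input never changes; only the numerator does, and the numerator is demand.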

This sticks out a lot for information goods, because it usually costs very little to reproduce them. Issuing one more PDF costs next to nothing. The point is stronger than that, though. Information goods are a special case of a much bigger general case. To say that it cost x hours of my time to prepare this report, and it costs basically nothing to reproduce it, is to say that the process of preparing it has high fixed costs and negligible variable costs.

This is a little counter-intuitive, because we usually think of labour as a variable cost. Obviously they could sack me, but just as obviously, if you want to sell research on telecoms companies you’ll have to employ telecoms analysts. The relevant distinction is between costs that are fixed or decreasing with respect to volume, and costs that increase with volume.

The whole project of industrialisation, after all, is about using more capital, a fixed cost, to reduce the marginal cost of production and take advantage of economies of scale. The set of firms whose cost structures are dominated by fixed costs rather than volume-variable ones is therefore large. A chemical plant that takes in cheap ingredients and produces a much more valuable compound might spend a lot of capital to set up its process, which is then a fixed cost. It will not spend dramatically more running flat out because its inputs are cheap. Therefore, its productivity is determined by demand.

The same would be true of capital-intensive manufacturing, of software, a lot of IT services, anything that sells intellectual property, and anything whose output changes in variety more than it changes in volume. Also, anything whose cost schedule, aka supply curve, is a step function will show this behaviour in the short run. If you are at step x, you can increase output up to step y without spending more money, and therefore both productivity and profitability will increase with demand until you hit the next step.
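A quick sketch of the step-function case, with an entirely hypothetical cost schedule: inside a step, productivity climbs with demand; buying the next tranche of capacity knocks it back.

```python
import math

# Hypothetical stepped cost schedule: capacity comes in lumps of fixed
# cost, and within a step extra output costs nothing at the margin.
STEP_SIZE = 1000        # units of output one step of capacity supports
COST_PER_STEP = 50_000  # fixed cost of each step of capacity
PRICE = 100.0           # revenue per unit sold

def productivity(units_demanded: int) -> float:
    """Revenue per unit of cost, with capacity bought in whole steps."""
    steps_needed = max(1, math.ceil(units_demanded / STEP_SIZE))
    return (PRICE * units_demanded) / (steps_needed * COST_PER_STEP)

# Productivity rises with demand inside a step…
assert productivity(900) > productivity(500)
# …and falls back the moment the next step has to be bought.
assert productivity(1001) < productivity(1000)
```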

We can also restate this point as a distinction between processes with decreasing or constant returns to scale, and those with increasing returns to scale. If returns to scale are constant at the current scale, productivity is indifferent to demand (and if they are decreasing, it actually falls as volume rises). If they are increasing, productivity is demand-determined.

The year the journalistic sewer changed hands

Everyone’s talking about this, and I agree with the point in the kicker:

Still, it is less often we think about Bannon simply as a media executive in charge of a private company. Any successful media executive produces content to expand audience size

One thing the Buzzfeed article shows, although it doesn’t call it out explicitly, is that the Breitbannon show took on a very specific function in the media ecosystem – it was where you could leak stuff you couldn’t actually run under your own byline, whether because it was too libellous or offensive to take responsibility for, because it was of such poor sourcing quality or stupendous triviality that your editor wouldn’t want it, or because your motivation for publishing it was to insult somebody for your own reasons. They mention case after wretched case.

There’s always an outlet that takes on this role, the main sewer of the journosphere. You can tell it because it’s the worst-sourced outlet that any mass circulation outlet will follow up. Within recent memory, the Drudge Report played this role for years, tellingly, right up until Breitbart took over from it. In the UK, Paul Staines made a career of sorts out of it, but in the pre-Internet era, Private Eye did (and does) far more of this stuff than it would ever admit. In France, Le Canard Enchaîné can be very like that – as well as genuinely important investigations and document leaks, a lot of its word count for a typical week is trivial-but-vicious political gossip and poor-quality hit pieces someone sent it.

It can be hard to tell whether you’re taking on the stories no-one else has the balls to cover, or picking up the trash nobody else will admit to dropping in the street. Also, there’s a gift economy at work – I’ll let you see this doc if you plant this quote. This is definitely the problem for the Eye and Le Canard, but you’d struggle to make a case for Drudge or Bannonbart. The cut-over from Drudge to Breitbart, though, is interesting. It implies that one way to look at the 2016 experience was that the group of fairly horrible politicians and journalists whose association with Matt Drudge had defined his standards as being the worst acceptable ones lost their grip, and a new group of even more horrible actors were able to set up a new, worse standard of depravity, not least because they had a new downstream distribution network.

Sometimes outlets like this get to drive the wider news, and it is almost invariably horrible, no matter what their pretensions to quality are. I guess my point is that treating this lot as a firm operating in the news industry points up how much of their awfulness is driven by the awfulness of journalist subculture.

Fake news and the Afghan war

I have just been re-reading Dr Mike Martin’s An Intimate War, his anthropological study of the conflict in Helmand the British army was inserted into. Something that struck me this time round: Martin’s account of the role of rumours and conspiracy theories, what we might now call fake news.

Martin reports encountering many people who claimed to believe that the British were secretly supporting the Taliban, including people in the diaspora as far away as London. Some of them had developed sophisticated theories of politics around this core belief.

The Taliban, the word here meaning the ideological movement that seized Kabul in 1996, were supported and heavily influenced by Pakistan. Everyone knew that and they were right. Pakistani officers affected the tropes of British regimental culture, used British-style titles and organisational forms, and Pakistan received British development aid. Therefore, some of them reasoned, Britain had never actually decolonised from Pakistan and was still in charge behind the scenes. Its motive in this was to pursue a proxy war with the United States, and Helmand just happened to be the unfortunate theatre in which this conflict was playing out.

Martin points out that this exercise in cultural bricolage served to explain a variety of phenomena that were otherwise baffling. Why did the British want to make contact with someone everyone knew was a leader of the Taliban, or alternatively, the “Taliban”? Why did they think the Americans were too keen to unleash airpower? Why did the Americans think the British weren’t aggressive enough? Why did they support this or that politician who was also a drug smuggler and the brother of a jihadi chief? Why were the British and the Afghan government supporting this guy, while the US Special Forces in their separate chain of command supplied and encouraged his worst enemies?

Like Evelyn Waugh’s Colonel Grace-Groundling-Marchpole, the MI5 officer who hoped to eventually link everyone into one great conspiracy so there would be no more war, people created theories that explained a chaotic world and gave them at least an illusion of control. And of course this was true of us. Axis of Evil thinking defined the good guys and the bad guys. Population-centric counterinsurgency defined an insurgency, a government, and a population of civilians caught between them. Martin demonstrates that this made about as much sense as believing the Taliban was secretly manipulated by an undead British Raj. It wasn’t even that the guerrillas merged back into the population. The concepts of “guerrillas” and “population” were mistaken, although David Kilcullen’s notion of “survival-oriented civilians” was more to the point. The same people might be Talibs, policemen, farmers, opium smugglers, and logistics contractors within the course of a day, whatever happened to be expedient in that moment of desperate micro-politics.

But there’s something missing from his argument on this point. He also notes that essentially everyone he interviewed listened regularly to the BBC World Service for news of the outside world they could rely on. Who could possibly do this if they believed the British were secretly still ruling Pakistan and sending droves of Talibs to fight a third world war with America in their own houses? Who, listening to the BBC news and acting on it, could believe that?

Martin doesn’t say, but the whole thrust of the book implies, that of course they didn’t. When it helped to feel like you were in the know – as when you were interviewed by an anthropologist, or speaking to someone you needed to impress – you did it. When you needed to know about, say, Iranian politics and their interest in the water supply – something of critical importance to everybody involved – you listened to the BBC. Even if the belief had important internal consequences, it was also performative, and it was performed for a practical purpose.

This is close to the notion of “identity-protective cognition”, but I have never been convinced by it. The famous “Kentucky Farmer”’s decision to believe in climate change only when it suits him is by definition of no practical use, as there is no wall he can build that will fix his problem locally. And what is the practical purpose of retweeting yet more @RealFKNNews?

One thing that might carry over is my favourite obsession, social trust. Martin notes someone who wished he would one day know who his friends were, and that the local word for “hostility” or “enmity” translates as “cousinness”.

Weak sauce

Stephen Bush in the New Statesman makes an argument I’ve heard from a few people. The young’uns are furious and therefore Corbyn, but they’re also “Thatcher’s children”. So the Tories can solve all their problems by offering something about “getting on the housing ladder”. It looks like they’re going to implement this.

I am not impressed. Here’s why. Hold on to your hat. First-time buyer gimmes are already Government policy. Help to Buy is a thing. The trailed announcement is apparently that it won’t stop in 2021 or whenever. More broadly, it’s been policy all the way back to 2010 that there should be gimmes for first-time buyers and tax breaks for property developers, and that section 106 planning requirements for affordable housing should be satisfied at 80% of market rents. The gimmes-and-breaks policy is in full effect.

As Molotov said to Ribbentrop: if the British are finished, why are we in this air raid shelter? If this was going to be incredibly popular, wouldn’t it have happened already? It hasn’t, probably because HTB looks after the really rather well off.

What would a real “offer” look like? Well, you could build an enormous amount of council housing. You’d have to end the right-to-buy or you’d just be filling a bucket without a bottom. And you’d have had to repudiate the one thing about Thatcherism anybody actually liked. You could impose rent control and clamp down on buy-to-let lending. And you’d have comprehensively pissed on your voters’ and donors’ chips. You could cap utility prices, or launch a major economic stimulus to drive up wages….of course you won’t. And then, there’s Brexit. Maybe really big HTB, plus dumping Brexit, plus cutting off the culture war bullshit…that might make some progress?

All these things would have one thing in common. You’d have to give up being the Tories. As I see it, they pretty much are the Assured Shorthold Tenancy Party, fighting for anything that looks after the real estate business, with the Brexit guys as an extra internal veto actor. What is it they could give up on that wouldn’t be vetoed?

This reminds me a little of left-wing people doing the Very Real Concerns thing. You can’t go full racist because you wouldn’t be you, and in any case the party has plenty of potential coalitions to veto and disavow you. So you come up with a weak-sauce sop, but nobody wants to hear about nudge bills if renationalising the utilities is available.

Update: If you want a worked example of this post, here’s Theresa May with a full-throated defence of railway privatisation of all things. Chained to the legacy.

Linguistic prescriptivism sucks

Following up on the previous post, here’s something fascinating. The developers of an AI project that is meant to provide a vast base of conceptual associations to help computers process text in English are trying to purge it of racism.

As I understand it, ConceptNet is meant to help your application parse incoming natural-language speech. By definition, this will be a dip out of the pool of living English, for good or ill. And if you are training a machine-learning algorithm to understand it better, the weights in the association graph are going to trend towards whatever the incoming speech corpus implies. Inasmuch as the project is meant to comprehend English, it must be a descriptive one, and that means dealing with the language warts and all.
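You can see the mechanism with nothing fancier than co-occurrence counts. A toy sketch – the corpus is invented, and real systems like ConceptNet use far richer structures than this – but the point carries: whatever skew is in the text ends up in the association weights.

```python
from collections import Counter
from itertools import combinations

# Tiny invented corpus: whatever associations are in the text end up
# in the counts, warts and all.
corpus = [
    "the nurse was kind and gentle",
    "the nurse said she would return",
    "the engineer said he fixed it",
    "the engineer was clever",
]

pairs = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        pairs[(a, b)] += 1

# The skew of the corpus is now the skew of the model:
print(pairs[("nurse", "she")], pairs[("he", "nurse")])        # 1 0
print(pairs[("engineer", "he")], pairs[("engineer", "she")])  # 1 0
```

Nothing in the counting code is prejudiced; the numbers simply are what the corpus is.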

The important question is what the application does then. If your app is making judgments on the basis of word associations, it’s likely to end up being seriously prejudiced in some way or other. The purpose of the system is what it does, as Stafford Beer said; the problem with the system is also what it does. The D-word rules.

It was called a perceptron for a reason, damn it

This Technology Review piece about the inventor of backpropagation in neural networks crystallises some important issues about the current AI boom. Advances (or supposed advances) in AI are often held to put our view of ourselves in question. Its failures are held, by anyone who pays attention to them, to put our view of intelligence in question. This has historically been more useful. Now, though, I think the lesson is about our view of machines.

What do you expect of a machine? Well, you expect it to do some well-defined task, without making mistakes of its own, faster or stronger or for longer than human or animal bodies could do it. We have machines that do this for elementary arithmetic operations – they’re called computers – and it turns out you can do all kinds of stuff with them. As long, of course, as you can break down the task into elementary logical primitives that act on inputs and outputs that you can define unambiguously.

And that’s what programming is. You try to define whatever it is in terms a computer can understand, and organise those terms in ways that let it use its heroic athletic stupidity. Increment, decrement, compare, conditional branch, jump. That’s why it’s hard.

Art tried to illustrate artificial intelligence with this distinction. The intelligent computer would be terrifyingly smart, but could be persuaded or confounded or subverted by human empathy or original creativity. The computer artists imagined was a big version (sometimes, a literally huge one, missing the microprocessor future) of the mainframes or #5 crossbar switches or IBM 360s of the day; a machine that manufactured thought in the same way a steam engine revolves a wheel.

Around about this time, the first neural networks were developed. It’s important to remember this was already a retreat from the idea of building a reasoning machine directly – creating an analogue to a network of nerves is an effort to copy natural intelligence. Interestingly, the name “perceptron” was coined for them – not a thinking machine, but a perceiving machine.
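The original perceptron learning rule is simple enough to fit in a few lines. A sketch – not Rosenblatt’s actual hardware, obviously – of the idea: nudge the weights towards each misclassified example until a linearly separable rule, such as logical OR, has been learned.

```python
# Rosenblatt-style perceptron: weights are nudged towards each
# misclassified example until the (linearly separable) data is fit.
def train_perceptron(samples, epochs=10, lr=1.0):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = target - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Logical OR is linearly separable, so the rule converges on it.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
for x, target in data:
    assert (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == target
```

Note what it does: it doesn’t reason about OR, it learns to perceive which side of a line an input falls on.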

Much later we learned how to make use of neural nets and how to build computers powerful enough to make that practical. What we do with them is invariably to perceive stuff. Is there a horse in this photograph? Are these search terms relevant to this document? Recently we’ve taken to calling this artificial intelligence or machine learning, but when it was in the research stage we used to call it machine vision.

If you imagine an intelligent machine you probably imagine that it would think, but it would be a machine – it would do it fast, and perfectly, within the limits of what it could do. It wouldn’t get the wrong day or make trivial errors of arithmetic or logic. It would be something like the reasoning, deliberative self Daniel Kahneman calls System 2, just without its tiresome flaws. It would reason as you might run, but without tiring so long as it had electric power. Awesome.

This is exactly not what deep learning systems deliver. Instead, they automate part of System 1’s tireless, swift, intuitive functions. They just don’t do it very well.

Machines don’t get tired; but then neither does human pattern-recognition. You have to be in dire straits indeed to mistake the man opposite you on the Tube for a dog, or your spouse for a hat. It is true that a machine will not daydream or sneak off for a cigarette, but this is really a statement about work discipline. Airport security screeners zone out, miss stuff, and know that nobody really takes it seriously. Air traffic controllers work shorter shifts, are paid much more, and are well aware that it matters. They seem far more attentive.

Notoriously, people are easily fooled in System 1 mode. There are whole books listing the standard cognitive biases. One of the most fascinating things about deep learning systems, though, is that there is a whole general class of inputs that fools them, and it just looks like…analogue TV interference or bad photo processing. The failure is radical. It’s not that the confidence of detection falls from 99% to 95% or 60%, it’s that they detect something completely, wonderfully, spectacularly wrong with very high confidence.
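You can get the flavour of why the failure is so radical from a toy linear “detector” (the weights are invented): a perturbation that is tiny per input, but aligned with the weights – the fast-gradient-sign trick in miniature – swings it from indifference to near-certainty.

```python
import math

# Toy linear "detector" (weights invented): confidence = sigmoid(w . x).
w = [0.5, -0.3, 0.8, -0.6] * 25   # 100 "pixels" worth of weights
x = [0.0] * 100                   # an utterly blank input

def confidence(x):
    score = sum(wi * xi for wi, xi in zip(w, x))
    return 1 / (1 + math.exp(-score))

# Nudge every input by just ±0.1, in the direction of the weights -
# the fast-gradient-sign idea in miniature.
eps = 0.1
x_adv = [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(round(confidence(x), 2))      # 0.5   - no idea either way
print(round(confidence(x_adv), 3))  # 0.996 - spectacularly confident
```

Each individual input moved by an amount that would be invisible noise in a photograph, but because the noise lines up with the weights, the detector is now nearly certain about nothing at all.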

You might think that this is rather like one of the classic optical illusions, but it’s worse than that. If you look at something this way, and then that way, and it looks different, you’ll notice something is odd. This is not something our deep learner will do. Nor is it able to identify any bias that might exist in the corpus of data it was trained on…or maybe it is.

If there is any property of the training data set that is strongly predictive of the training criterion, it will zero in on that property with the ferocious clarity of Darwinism. In the 1980s, an early backpropagating neural network was set to find Soviet tanks in a pile of reconnaissance photographs. It worked, until someone noticed that the Red Army usually trained when the weather was good, and in any case the satellite could only see them when the sky was clear. The medical school at St George’s Hospital in London found theirs had learned that their successful students were usually white.

The success of deep learning systems has given us better machine perception. This is really useful. What it does well is matching or identifying patterns, very fast, for longer than you can reasonably expect people to do. It automates a small part of the glorious wonder of intuition. It also automates everything terrible about it, and adds brilliantly creative mistakes of its own. There is something wonderful about the idea of a machine that gets it completely, hopelessly wrong.

Unfortunately, we have convinced ourselves that it is like System 2 but faster and stronger. This is nonsense.

A really interesting question, meanwhile, would be what a third approach to AI might be like. The high postwar era imagined intelligence in terms of reasoning. Later, we tried perception. That leaves action. Research into natural – you might say real – cognition emphasises this more and more. Action, though, implies responsibility.

You should be reading Matt Black’s fantastic NHS blog

The NHS is in the news, so it’s probably time to promote the hell out of this awesome blog I found!

Point the first: There is no relationship between “rising demand” for A&E treatment and waiting times. Also, a large majority of people waiting in A&E need to be admitted, so there is no point badgering people to see their GP/go to a pharmacist/call a number instead.

Point the second: demand isn’t actually rising much, instead, we started counting people who go to walk-in clinics as A&E attenders. As a consequence, the constant initiatives badgering people to go to walk-in clinics, minor injury units, GPs, pharmacists, or just fuck off and die and don’t bother us already actually make the problem worse.

Point the third: the 4-hour wait target is part of the problem, not part of the solution, because in effect it rewards those A&Es who either maximise the number of patients waiting 3 hours, 59 minutes, or else palm off as many patients as possible on some other hospital. (The histogram is spookily like the distribution of house prices when stamp duty was levied on a similar basis.) Fascinatingly, the best-performing hospitals show less of this effect. Also, for some reason, hospital discharge processes slow right down every morning around 8am, causing queues to propagate back through the system.

Point the fourth, from a different blog it links to: the biggest cause of discharge waits and hence of queues in A&E turns out to be just handing out medicines from the pharmacy, and this could be dramatically improved by not letting the doctors touch it, because unlike the pharmacists they can’t be trusted not to screw it up.

Point the fifth: there is no shortage of A&E docs, but the Royal College of Emergency Medicine understandably likes the idea of more of them.

Point the sixth: picking fights with the docs about working weekends is stupid, because the driver of queuing is discharge, not admissions.

Point the seventh: hospitals typically discharge about 20% of their patients a day, but they do it mostly just before knocking off at 5pm, while emergency admissions seem to follow the sun and peak in the middle of the day, so a queue must mathematically exist until arrivals drop off during the night.
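The arithmetic of point the seventh can be sketched in a few lines (the hourly figures are invented, not the blog’s data): midday-peaking admissions plus a single discharge lump just before 5pm mathematically force a queue that peaks right before the lump.

```python
# Toy day (all numbers invented): admissions peak around midday, but the
# wards only discharge in one lump just before 5pm, so patients queue.
admissions = [0]*8 + [4, 6, 8, 10, 10, 8, 6, 4, 2] + [0]*7  # per hour
discharges = [0]*16 + [58] + [0]*7                          # one 5pm lump

waiting, peak_queue = 0, 0
for hour in range(24):
    waiting += admissions[hour] - discharges[hour]
    waiting = max(waiting, 0)       # can't discharge an empty queue
    peak_queue = max(peak_queue, waiting)

print(peak_queue)  # 56 - the queue that must exist before the 5pm lump
```

Smear the same 58 discharges across the morning and the peak queue collapses; nothing else about the day has to change.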

Point the eighth: one overriding theme in all of these is that the NHS’s tradespeople are really important and we should trust them much more relative to the docs. Ironically, though, it’s by using the tools of statistical process control and scientific management that this socially radical conclusion becomes apparent.

Point the ninth: Jeremy Hunt is still health secretary. Why?


It turns out that 10% of the world’s surviving pagers are in the NHS and apparently we could save some money (not very much money, but maybe more meaningful in the context of an NHS trust telecoms budget) by replacing them with an app. I’m really not so sure about that. As Geoff Hall of the Leeds Cancer Centre says:

“Pagers seem like old technology, but they still exist purely for their inherent high levels of resilience. They are simple to use, i.e. calls can be pushed out by ringing one number, there is an audit trail, the device is easy to carry, and the battery lasts months, not hours. They do only one task, but they do it well. They provide a last line of defence”.

One thing I would also say is that they operate in low frequency bands that cover the broad acres and penetrate through walls deep into buildings. Moving the application onto a cellular network would probably require quite a few trusts to get their MNO to install in-building infrastructure, which is a serious pain in a heavily serviced medical environment full of stuff that you’re not meant to use radio transmitters around. Alternatively they could use WiFi or something funky like MulteFire, but that all costs money and involves complicated installs.

I do wonder if this might be a use case for one of the new low-power wide area network (LPWAN) technologies. These were invented for Internet of Things applications, but reliably delivering small and even small-ish data payloads with stringent coverage and in-building requirements and minimal power draw is a job right up their street. It would also be an interesting exercise to decide what you’d actually want in a pager now.

Here’s a brutal and harrowing account of, among other things, the huge gap between “having an app” and “having an emergency service”.

2017: The ultimate development of the modern British campaign

That Tory after-action report (one, two, three) is quite the thing. Something that sticks out for me is that the 2017 election might have been the moment when the shrinking Tory membership finally caught up with them. This is something that has been promised for getting on for decades, but if it can’t go on like this, it won’t. Long-term trends have their effect through contingent moments in history.

For example:

For example, after the 2010 election, the Conservative Party held about 500,000 email addresses, which had shrunk to about 300,000 which were still usable by the time planning began again in 2013. By the end of the 2015 General Election, that list had grown to around 1.4 million through proactive gathering of addresses through online campaigns – but, by April 2017, almost two years of leakage had again drastically reduced the list.

The same went for data gathered through canvassing. CCHQ’s Voter ID and Get Out The Vote operation in 2015 had worked well, particularly in its ruthless targeting of voters in what were then Lib Dem seats. But that data was now out of date, and even the proportion of it which was still usable was now far less relevant. The 2017 battlegrounds largely weren’t in those former Lib Dem seats, and the potential swing vote mostly wasn’t Lib-Con. Data on the disillusioned former Labour and UKIP voters in Midlands and Northern seats was in short supply. The Conservative Party had played no formal role in the EU referendum, and so had no Leave/Remain canvassing data of its own. And the intervening local and mayoral elections had not yielded enough data to overcome either problem. The advent of Brexit had certainly fractured old loyalties, but the Conservative Party only had a limited idea of where this had happened, and who it had happened to.

Politics is very often a question of lists, lists that are generated by activism. The Messina/Crosby, Facebook-bomber campaign model was no different. In fact, because of its heavy reliance on targeted ad drops from CCHQ, it was much more so. Of course, a major reason for the Tories’ love of online advertising was precisely that it was a substitute for mobilisation. From the lists sprang the strategy, notably the Leave = UKIP = Tory concept, the effort to flip the North, and the obsessive focus on May herself.

What they came up with – principally via work by Messina and Textor – was a calculation that focused on winning over two groups of voters: former UKIP supporters, and direct switchers from Labour, particularly those who had voted Leave and were often part of Labour’s traditional working class vote, many of whom were to be found in seats in the Midlands and the North. Their plan, dubbed “Targeted Voter Turnout” (TVT), sat at the centre of everything that was to follow with the ground campaign (of which, more tomorrow). It dictated who the potential switchers were, it extrapolated from the consumer data to determine where they probably lived, down to individuals within households, and by looking at how many such people lived in each constituency, it drove the choice of which seats to target. While ordinarily, other specialised aspects of the campaign – such as digital – would develop their own models, the lack of time meant the whole operation rested on this analysis.
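The logic of a TVT-style calculation can be sketched in a few lines of code. Everything below is invented for illustration – the report doesn't publish Messina and Textor's actual model, seat figures, or thresholds – but it shows the shape of the method: score individuals from consumer data, count likely switchers per constituency, and rank seats by switchers relative to the majority to be overturned.

```python
# Hypothetical sketch of a TVT-style targeting calculation.
# All names, probabilities, and majorities are invented for
# illustration; the after-action report does not describe the
# actual model.

from collections import defaultdict

# Modelled voters: (constituency, estimated probability of switching
# to the Conservatives, inferred from consumer data, not canvassing).
modelled_voters = [
    ("Mansfield", 0.72), ("Mansfield", 0.65), ("Mansfield", 0.58),
    ("Bishop Auckland", 0.70), ("Bishop Auckland", 0.40),
    ("Hartlepool", 0.30),
]

# Labour majorities at the previous election (illustrative figures).
majorities = {"Mansfield": 5315, "Bishop Auckland": 3508, "Hartlepool": 3024}

SWITCH_THRESHOLD = 0.5  # treat p > 0.5 as a likely switcher

# Count likely switchers per constituency.
switchers = defaultdict(int)
for seat, p in modelled_voters:
    if p > SWITCH_THRESHOLD:
        switchers[seat] += 1

# Rank seats by modelled switchers relative to the majority to
# overturn. Note the fragility: the ranking rests entirely on the
# model, with no doorstep data to correct it.
ranked = sorted(majorities,
                key=lambda s: switchers[s] / majorities[s],
                reverse=True)
```

The weakness the report describes falls straight out of this structure: if the per-voter probabilities are wrong, every downstream decision – where people "probably lived", which seats to target, where to send activists – inherits the error, and nothing in the pipeline feeds doorstep reality back in.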

At the same time, messages were being developed and tested to sharpen the campaign’s appeal to those groups. One fateful development at this early stage was the decision to shift from a more traditional, team-based model of campaigning, to a highly personalised obsession with May herself as the figurehead – an approach we now know was recommended by Crosby and Textor in April, shortly before the campaign. We all remember the banners proclaiming “Theresa May’s team”, and anyone involved in the canvassing operation will recall the scripts that spoke endlessly of “Theresa May’s candidate”. The die had to be cast quickly, on the basis of less than ideal information – so it was.

The campaign operated largely blind once the whistle blew, both because it didn’t get doorstep feedback and because somebody – possibly Lynton Crosby – had bought into the fashionable idea that the polls are always wrong.

At this point, we need to consider how a campaign in this situation might learn that it has such a problem. It could get it from the polls – but with a time lag, with lots of noise, and with the caveat that the Conservative experience in 2015 and 2016 taught them not to trust pollsters. It could hear it from its own campaign data – but if there are movements taking place among people whom you aren’t canvassing, then you won’t hear it. Or it could hear it from its candidates and activists – but CCHQ, as we will see in tomorrow’s article, has a somewhat sceptical view of its own colleagues on the ground.

Tory target selection was also driven by a basically untried model. Even when there was feedback, it’s possible it was so patchy that it created an availability bias:

Around the country, as the results came in, numerous experienced campaigners in Tory seats with large majorities realised to their horror that while they had been travelling often long distances to give mutual aid to supposed target seats where Labour won convincingly, Tory-held seats far closer to them had been lost. In one instance, a well-resourced association saw the Labour majority in their allotted target seat increase, while a Conservative seat which they drove through regularly to get to the target was lost.

This mismatch got worse as time went on, too. Positive early canvassing returns (pre-manifesto) and the encouraging local election results led CCHQ’s strategists to start not only treating Tory-held marginals as safe, but to divert resources away from the more marginal Labour-held target seats and towards target seats further down the list, ie those with bigger majorities.

Although the highly centralised campaign model was a consequence of the need for speed and the lack of numbers, it also suited its architects down to the ground. After all, it was their decision to call the election in the first place. It meant that the Conservative HQ got to pick literally all the candidates:

The immediate problem of needing to select hundreds of candidates in a very tight window was answered with special rules to override the usual, more drawn-out, process. These divided seats into two groups, with a distinct process for each. The first, Conservative-held seats where the sitting MP was standing down and Opposition-held target seats, would be presented with a shortlist of three candidates, decided by CCHQ, from which the Association had to select. The second, non-target seats, would simply have a candidate imposed on them.

Write the leaflets:

Blueprint, the online ordering system for Tory leaflets, offered a limited range of templates based on the type of seat – Tory-held, target, non-target – and the bulk of the content for each leaflet was pre-written centrally on the national message. All the target seat candidates were rushed to London to take part in a marathon photoshoot to ensure each had a picture with the Prime Minister, but there was normally only a small space for local messaging or even information about who the Conservative candidate “standing with Theresa May” was. “We were only allowed about 20 words per leaflet about local issues, rest taken up with Theresa May, Brexit, and Corbyn,” complains an officer in a target seat who used the system.

And post to Facebook:

The same centralised approach applied online. "CCHQ took admin rights to our Facebook pages, but everything they posted was 'we're better than Jeremy Corbyn/Jeremy Corbyn will lead to chaos'," as one candidate puts it.

In many ways, Theresa May’s 2017 campaign was the ultimate development of the way British political campaigns had been going for years, a genuine heir to Blair if you will. It wrote the leaflets, picked the candidates, posted to hundreds of Facebook groups over the local chairman’s name. As you might expect from that, it also didn’t pick up on some important developments. The effort to “get the band back together” and repeat 2015 missed that 2015 wasn’t all that. Despite politicising the Treasury to an unheard-of degree, ramping up rhetorical aggression, and benefiting from the uncovenanted blessings of the Lib Dem collapse and SNP triumph, the Tories of 2015 still only just squeaked over the line by pushing the campaigning rules to the limit and a bit beyond. Having turned the amplifier up to 11, there wasn’t anywhere left to go.

CCHQ’s advice on electoral law led them to believe they had found a way around that problem: regional battle buses touring multiple seats would, they believed, count as national spending, avoiding the danger that a bus could wipe out a large chunk of a candidate’s campaign budget in a single day.

As the Electoral Commission and CCHQ have now discovered, that belief was wrong. The investigation that ensued into allegations that MPs had failed to properly declare their local spending as a result of the tactic proved damaging. Individuals and the party as a whole were accused of committing electoral fraud (mostly wrongly, it turned out) and MPs found themselves subjected to police interviews and negative media coverage. CCHQ was reluctant, and therefore shamefully slow, to put its hands up and clearly admit that candidates were innocent of any intentional offence precisely because it had issued them with incorrect advice.

In 2015, the polls were wrong and the modelling seemed to work. As a result, there was near religious faith that the same thing would happen. This is really astonishing:

Despite the inaccuracies which had by then become obvious, the decision was taken not to confine the GOTV knocking up to Conservative pledges – those who had confirmed their intention to vote Tory. Instead, troops in a variety of target seats were sent to knock on doors based on the inaccurate TVT data. A flawed hope still seemed to be lingering that the model might turn up voters who hadn’t been contacted yet.

“On the day, we were not using canvass data, but going to doors extrapolated from the demographics,” a long-serving campaigner tells me. Another found that “inexperienced centrally appointed campaign managers who don’t get elections, activists or campaigning…[were] telling activists not to call on pledges.” This appears to have been a central instruction, enforced by the Party’s local employees.

They didn’t knock up voters who they’d ID’d as Tories, because the computer said no. Instead they went to addresses where they hoped a potential voter might dwell – in other words, they didn’t actually do any knocking-up, just continued canvassing. Even if you have good lists, you still find people have moved right up until polling day, so the sheer waste of time involved in working from a model based on data from 2015 is awe-inspiring.

And apparently a whole bulk e-mail drop only happened the morning afterwards. I would have paid cash money to watch that.

Anyway, their conclusions are here. The point that sticks out a mile, because it is hardly mentioned, is that they need to find some activists from somewhere. How they will do that is far from obvious. On the other hand, I worry that Tory bungling played a huge role in the election and they can surely not be this hopeless again. At the very least, surely next time they will bother to knock their own likelies up.