Category: Internet

Scale and scalability

I’ve been thinking a lot about scaling and the economics of the cloud recently after reading this. Specifically, this quote:

The costs for most SaaS products tend to find economies of scale early. If you are just selling software, distribution is essentially free, and you can support millions of users after the initial development. But the cost for infrastructure-as-a-service products (like Segment) tends to grow linearly with adoption. Not sub-linearly.

To unpack this a little, most Infrastructure-as-a-Service providers make “linear scaling” a big part of their sell. That is to say, their pricing is based on pure volume billing. You use more RAM, or CPU cores, or networking bandwidth, or transactions on the database service, or whatever, and your bill goes up proportionately. This speaks to the desire for predictability, and also to a couple of déformations professionnelles.

First of all, if you program computers you learn pretty quickly that it’s easy to come up with solutions that work, but that become disastrously inefficient when you apply them on a large scale. Because computers do things quickly, this isn’t obvious until you try it – it might be fast because it’s right, or fast because the test case is trivial. Identifying the scaling properties of programs is both something that is heavily emphasised in teaching, and something that will invariably bite you in the arse in practice. One way or the other, everyone eventually learns the lesson.

Second, it’s quite common in the economics of the Internet to meet a supply curve with a step function. Back in the day, BT Wholesale used to sell its main backhaul product to independent ISPs in increments of 155Mbps. As traffic increased, the marginal cost of more traffic was zero right up to the point where the 95th percentile of capacity utilisation hit the increment, and then either the service ground to a halt or you sent thousands of pounds to BTw for each one of your backhaul links. When the BBC iPlayer launched, all the UK’s independent ISPs were repeatedly thrashed by this huge BT-branded cashbat until several of them went bust.
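That 95th-percentile-plus-increment billing is easy to sketch. Every figure below (the increment price, the traffic samples) is invented for illustration, not BT Wholesale’s actual tariff:

```python
import math

INCREMENT_MBPS = 155
PRICE_PER_INCREMENT = 10_000  # pounds/month, purely illustrative

def percentile_95(samples):
    """Nearest-rank 95th percentile of a list of Mbps samples."""
    ordered = sorted(samples)
    rank = math.ceil(0.95 * len(ordered)) - 1
    return ordered[rank]

def monthly_bill(samples):
    """Bill enough 155Mbps increments to cover the 95th percentile."""
    peak = percentile_95(samples)
    increments = math.ceil(peak / INCREMENT_MBPS)
    return increments, increments * PRICE_PER_INCREMENT

# a month of 5-minute samples: marginal cost is zero until the
# percentile crosses an increment boundary, then the cashbat swings
quiet = [100 + i % 40 for i in range(8640)]   # peaks well under 155Mbps
iplayer = [s + 30 for s in quiet]             # a little more traffic...

print(monthly_bill(quiet))    # (1, 10000)
print(monthly_bill(iplayer))  # (2, 20000): double the bill
```

The point of the sketch is the cliff edge: a 30Mbps rise in traffic, a tiny change in cost of production, doubles the invoice.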

So everyone is primed to look out for a) nasty scaling properties and b) cliff-edge pricing structures. The IaaS pricing model speaks to this. But it has a dark side.

When we talk about “economies of scale” we mean that something becomes cheaper to produce, per unit, as we produce more of it. This is, in some sense, the tao of the Industrial Revolution. To put it in the terms we’ve used so far, something that has economies of scale exhibits sub-linear scaling. The dark side of the cloud is that its users have got rid of the risk of pathological scaling, but they’ve done it by giving up their right to exploit sub-linear scaling. Instead, the IaaS industry has captured the benefits of scale. Its costs are sub-linear, but its pricing is linear, so as it scales up, it captures more and more of its customers’ potential profitability into its own margins.
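A toy model of that margin capture, with entirely made-up numbers: suppose the provider’s unit cost falls gently with scale while the unit price it charges stays flat.

```python
PRICE_PER_UNIT = 1.00  # what the customer pays per unit, flat

def unit_cost(n_units, base=0.80, alpha=0.1):
    """Provider's cost per unit at scale n: sub-linear total cost,
    so the per-unit cost falls as volume grows. Parameters invented."""
    return base * n_units ** -alpha

for n in (1_000, 100_000, 10_000_000):
    cost = unit_cost(n)
    margin = 1 - cost / PRICE_PER_UNIT
    # margin climbs from roughly 60% to roughly 84% as scale grows
    print(f"{n:>10,} units: unit cost {cost:.3f}, provider margin {margin:.0%}")
```

Linear pricing over sub-linear costs: the bigger the customer gets, the larger the slice of its spend that turns into the provider’s margin rather than its own.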

There’s a twist to this. Really big IaaS customers can usually negotiate a much better deal. At the other end of the curve, there’s usually a free tier. So the effective pricing curve – the supply curve – is actually super-linear for a large proportion of the customer base. And, if we go back to the original blog post, there’s also a dynamic element.

Because outsourcing infrastructure is so damn easy (RDS, Redshift, S3, etc), it’s easy to fall into a cycle where the first response to any problem is to spend more money.

One of the ways sub-linear scaling happens in IT is, of course, optimisation. But here you’re being encouraged not to bother, to do the equivalent of throwing hardware at the problem. Technical improvement is also being concentrated into the cloud provider. And, of course, there’s a huge helping of confusopoly at work here too. The exact details of what you’re paying for end up being…hazy?

In our case, this meant digging through the bill line-by-line and scrutinizing every single resource. To do this, we enabled AWS Detailed billing. It dumps the full raw logs of instance-hours, provisioned databases, and all other resources into S3. In turn, we then imported that data into Redshift using Heroku’s AWSBilling worker for further analysis.

I heard you liked cloud, so I built you a cloud to analyse the bills from your cloud provider. In the cloud.

So here’s a thought. Want to know how employees at Apple and Google are more productive? The answer may just be “monopoly” or more accurately, “oligopoly”. And as Dietrich Vollrath likes to point out, under standard assumptions, more monopoly means lower measured productivity economy-wide.

The menace of correlated hype cycles

In case you wonder, there’s not been much activity around here due to a project over at the Pol. Think Project Lobster, but Eurocratic. Expect an uptick on the blog from now on.

First of all, something that has struck me at work and by following Maciej Ceglowski‘s talks. We all remember the crack from the 2008 financial crisis – when it matters, all correlations go to unity. Let’s apply this to the celebrated Gartner hype cycle. If anyone’s not seen it before, here’s Wikipedia’s CC-BY-SA licensed version of the chart, by user NeedCokeNow.


What interests me is that this is basically OK from wider society’s perspective so long as the hype-cycles are uncorrelated. In 2000, the .com crash wasn’t so bad because most of the wider economy – even the IT sector – didn’t care that much, even though there was substantial correlation between the .coms and the fixed-line telecoms industry. (Mobile was an important source of decorrelation.)

But the more they become correlated, the more additional risk accumulates just because of the correlation. Correlation is itself risky, see 2008. We might call it Bacon Meteor risk, in honour of Maciej’s twitter handle, because I like the image of the bacon meteor slamming into the atmosphere, ushering in an impact winter that kills off all the unicorns.

I see the risk accumulating due to the correlation of three Valley subsegments: Advertising, Big Data, and the Internet of Things. These segments all share the lack of an obvious end-user revenue model; a high degree of regulatory, political, and security risk that isn’t currently accounted for; a highly aggressive VC-led finance model that renders their finances even more opaque than publishing them as untagged PDFs on a nine-month cycle would; and, most importantly, a cross-dependency that brings about tight coupling between the hype cycles.

Ads are the catch-all business case, the justification for all this stuff, the classic case of Maciej’s notion of investor storytime or the world’s most targeted ad (here’s a fine example). Online advertising is itself a business in massive crisis – prices are plummeting, volumes soaring in an effort to keep up, the ad networks have become the world’s premier malware vector, and not surprisingly, everyone’s using ad blockers.

In order to process all the data and deliver the ads, you need the armamentarium of the Big Data sector. Therefore, the ad sector is dependent on the big data guys for technology and the big data guys on the ads for revenue. Collecting all this stuff also means collecting security, regulatory, and political risk.

It also seems that there are diminishing returns to ad targeting data. This ought to be obvious, because advertising aspires to create new customers. That’s the point. Perfect targeting would return only those people who are already certain to buy your product. This is useless, rather like Borges’ map the size of the country. As a result, we’re in a Red Queen’s race; more and more volume, and more and more categories, of data are needed to win each additional clickthrough.

Hence the Internet of Things and the way every startup in this sector also wants to monetise the data. IoT devices create data, which can be fed into the big data sector, and used to target ads. The ads are meant to validate the investors’ valuations and therefore make the next VC round possible, which incidentally these days usually permits key insiders to cash out, like the IPO used to back in the day.

And you know what? I wouldn’t mind seeing the whole smug, creepy, not-as-smart-as-it-makes-out mess with its startlingly poor software dry up and blow away. I am seriously disappointed in the Internet; Google Images can’t find me a pile of dead unicorns. The only problem is that correlation risk. If it didn’t exist I’d be wholeheartedly cheering for a classic rich man’s panic that would hit pretty much exclusively people who can afford to lose the money and richly deserve it. But it does.

The transmission mechanism that worries me is, of course, real estate.

Culture-bound syndromes

Swinging off something I discussed in another place, the Wikipedia list of culture-bound syndromes is fascinatingly odd, although several of them seem to reduce to depression and several more to sexism. I wonder if different Wikipedias have different ones?

But what interests me is this: what with globalisation an’ all, will these get smoothed out by the invisible hand like so many obscure languages, until we’re all crazy according to world-class best practice and international standards?

Or will we get something different? Weird jarring mashups from the grab-bag of available symptoms are a possibility. Try a combination of ufufuyane, tanning addiction, and scrupulosity, or perhaps boufée délirante, smilorexia, and puppy pregnancy syndrome.

That’s if nothing entirely novel emerges.

Perhaps it already has and “Troll (Internet)” should be in the list. Perhaps I should put it there.

#rugbyleague tries streaming on the web. it doesn’t go well

Oh Rugby League, must it always be so? The answer is always yes. The FFR XIII, the French governing body, had the great idea of streaming their match with Wales today on the web, presumably because TV wasn’t interested and there are plenty of weirdos who would get up for the England/Samoa and Australia/New Zealand games and would also watch the French one.

But tell me, having made the momentous decision, did they do a good job? Did they ask people who knew how to do a good job? You know the answer.

It ended up on Dailymotion, in really terrible quality, with no score, but not before they’d also knocked over their own WordPress site by putting the embedded video on the front page and handing out the link, so the thundering herd hit whatever VPS they bought for their website first rather than Dailymotion’s CDN infrastructure. Not surprisingly the database got its knickers in a twist. Why involve a database when what you really need is a cache?

So a good idea that our amateurish execution turned into a humiliating fiasco. Where have we heard that one before?

My correct views on net neutrality

OK, the net neutrality. Just to set down how I think about this. The fundamental issue here is the termination fee regime on the Internet, or rather the lack of one.

So what is termination? When a phone call (remember them?) goes from network A to network B, network B charges network A for “terminating” the call, i.e. accepting it for final delivery to one of their subscribers. This practice originates with the way 19th century German postal administrations handled cross-boundary mail. There were other options, but the Prussians insisted on being paid for accepting inbound mail, and what they said went. This practice was taken over by the Universal Postal Union as a worldwide standard, so you can blame Anthony Trollope.

When international telephone interconnection became a thing, the ITU took over the postmen’s practices. Later we got privatisation, and with the emergence of GSM, a vast increase in the importance of interconnection through the issue of roaming.

But on the Internet, there is no termination regime, or rather there is one, but the termination fee is in principle zero. It’s possible, of course, to charge termination on IP traffic, and in fact mobile operators do this to each other via the GRX/IPX system, but there isn’t very much mobile-to-mobile traffic because everything interesting is on the proper Internet.

The first big, important point about termination is that it’s a purely regulatory construct. It feels aesthetically right to think in terms of “importing” or “exporting” traffic, but it doesn’t actually describe an economic reality. For two internetworks to interconnect, they have to run a cable down to a meet-me room somewhere, perhaps place some equipment, and do some configuration. These costs are all incurred at the time of setting up the link.

This takes us to the second big, important point, which is that it’s a purely regulatory construct. Very importantly, the direction that the traffic is flowing doesn’t affect the costs at all. The glass fibres don’t know which way the packets are moving. Whether the up/down ratio is 50:50, 80:20, or something else doesn’t change the costs of production. How much total traffic there is certainly does change the total cost, but that’s a separate issue.

This has important consequences.

Mobile carriers love termination; this is why they whine so much about Neelie Kroes and why they tend to report their numbers as “ex-MTR” and also as “including regulatory MTR changes”. (The technical term for the second metric is “the truth”.) They have two reasons for this. The first is that, although termination is charged to other operators, that doesn’t mean it’s a wash. Even if all the operators had roughly balanced termination accounts, they would still be able to pass the cost on to the customer, i.e. you.

The second is that termination fees flow towards the big battalions. If I have 20 million subscribers and you have 1 million subscribers, your subs are much more likely to call mine and hence generate termination revenue than the other way around. And because termination has nothing in principle to do with the underlying costs of production, it’s pure economic rent and hence, margin. Mmm, margin!
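The lopsidedness is easy to check with the post’s own numbers. Under the simplest assumption, that calls land uniformly at random across all subscribers, the share of outbound minutes each network pays termination on is wildly asymmetric:

```python
def offnet_share(own_subs, other_subs):
    """Share of a network's outbound minutes that terminate on the
    other network, assuming calls land uniformly at random."""
    return other_subs / (own_subs + other_subs)

big, small = 20_000_000, 1_000_000  # the post's example networks

print(f"big network pays termination on {offnet_share(big, small):.1%} of its minutes")
print(f"small network pays termination on {offnet_share(small, big):.1%} of its minutes")
# roughly 4.8% versus 95.2%: the burden lands on the small operator
```

Under these toy assumptions the gross flows between the two happen to balance, but the small operator’s cost base per subscriber is twenty times more exposed to the MTR, which is exactly how the rent flows to the big battalions.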

There is of course no reason to treat 1870s Prussian postal practice as sacrosanct, to say the least. There might have been more reason back when telecoms operators were public services, but that was then, and anyway there were plenty of problems with that set-up. Now that the big battalions are private interests, it’s much harder to defend.

By contrast, a system with no termination regime, known variously as bill-and-keep, settlement-free interconnection, etc has the property that big operators implicitly subsidise small ones, and specifically, access operators implicitly subsidise hosting operators. I don’t have to pay Deutsche Telekom or whoever to let its subscribers read my blog. This is a very important part of the Internet’s special nature.

The telephony ecosystem provides for universal interconnection – anyone on the phone can call any other number – but it provides only very poorly for applications, rather than access, operators. It sometimes claims to provide more thoroughly for public service, but it usually only does so when forced to by regulatory action. The Internet termination regime provides for universal interconnection, but also provides much more thoroughly for the existence of stuff you might want to interconnect with.

It is also true that, because termination is a regulatory construct, major access operators who want to charge something like a termination fee are usurping the powers of the regulator.

Universal interconnection is very valuable. Fans of markets in everything tend to think that it wouldn’t be so bad if you needed to subscribe to multiple ISPs to get the whole of the Internet. This is just a reflection of their general pollyannaism. We can see this because the market has spoken; nobody wants not-quite the Internet. When something like that is offered, customers invariably demand the real thing. Similarly, when filtering is offered as a commercial product, nobody ever buys it. They may think it’s good for everyone else, but they don’t want it for themselves. Everyone would like 1Gbps symmetric, thanks, and get out of the way. They would also want a full BGP routeview if they only knew what one was.

This is also why it is a much less important issue in the EU than in the US. In the EU, structural separation and wholesale requirements mean that the whole of the Internet and shut up is always likely to be on offer. In the US, not so much.

As the former chief engineer at Akamai, Patrick Gilmore, said on NANOG recently, having seen the size of the routing table pass 500,000 routes: why don’t we make an effort to tidy up and push it back underneath? Especially as this would give all the world’s Cisco Sup720 routers a new lease of life.

In general, brilliant schemes to reorganise the Internet aim for incremental efficiency gains at the cost of the weirdness, slack, and flexibility that makes it special. The slack and quirks are a source of antifragility and strength. It is crazy talk to spoil the ship for three months’ deferred investment, especially when tweaking and better practice often deliver more.

Many password. So changing. Much heartbleed

So #Heartbleed, perhaps the best software bug ever. I spent much of today checking websites and changing passwords. Fortunately, I use the Firefox password manager to store mine and sync them with the browser on my mobile phone, so I could open it, search for “https://”, and work through the list. I eventually changed 30 or so, replacing each with a new random sequence, starting with anything that had money attached. It was an advance on my plan, over a decade old, of using the names of Australian cattle stations.

That was fair enough, but I kept running into the same problem – I had to log in, root around in some e-commerce site to find the “change password” link, and then futz around still more to persuade Firefox to save the new password. The champion was probably a ticketabc site where I had to feign interest in a Pharcyde gig to change my password.

The problem that you can’t explicitly edit the passwords is solved with this extension, which also helps with some web sites that don’t flag the password fields properly. PayPal even stops you copying and pasting, to make absolutely sure you can’t use it without passing a typing test.

But this is all kludge. The main problem with passwords is that if they are any good, you can’t remember them. The other main problem with passwords is that if you can think them up, they probably aren’t any good. The other other main problem with them is that the whole life-cycle is so almost.

What I want is this: my Web browser generates a genuinely long and random password whenever I need one, and stores it. It fetches it whenever I want to log in. When I don’t want it any more, it deletes it. If there is some reason to think it’s been compromised, I press a button and the password is revoked and a new one generated.

Seems simple enough, and I was thinking about getting the JavaScript book out and making a browser extension…until I started changing the passwords. The problem is that there are so, so many daft, broken, almost ways of implementing simple password schemes. And wouldn’t it just be that bloody horrible Verified by Visa mess that neither clearly passes nor fails the test for Heartbleed, when it is supposedly all that stands between my money and the scum of the Internet? I’m looking at you, Visa.

What I want, then, is a simple standard that allows a Web site (or if you like, anything else using it) to trigger the creation of a password by the password manager, which then stores it for later use, and that provides for the password to later be changed. This must allow for an external device to generate the password if desired, for a master credential, and for the password store to be sync’d between machines if desired. It must also allow for a big REVOKE ALL THE THINGS button that causes all (or a subset) of the stored passwords to be expired and regenerated.

That’s basically an API with five calls:

>makePassword(site, username)

>login(site, username, password)

>logout(site, username)

>deletePassword(site, username, password)

>revokePassword(site, username, password)

and the fifth is really just a delete followed immediately by a make.
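A sketch of what those calls might look like behind a browser extension, purely as an in-memory toy: the names are the post’s API transliterated into Python, and a real implementation would encrypt the store under a master credential and sync it between machines.

```python
import secrets
import string

# character set for generated passwords
ALPHABET = string.ascii_letters + string.digits + string.punctuation

class PasswordManager:
    def __init__(self):
        self._store = {}  # (site, username) -> password

    def make_password(self, site, username, length=32):
        """Generate and store a genuinely random password."""
        password = "".join(secrets.choice(ALPHABET) for _ in range(length))
        self._store[(site, username)] = password
        return password

    def login(self, site, username):
        """Fetch the stored password to hand to the site."""
        return self._store[(site, username)]

    def delete_password(self, site, username):
        del self._store[(site, username)]

    def revoke_password(self, site, username):
        # as the post says: a delete followed immediately by a make
        self.delete_password(site, username)
        return self.make_password(site, username)

    def revoke_all(self):
        """The big REVOKE ALL THE THINGS button."""
        for site, username in list(self._store):
            self.revoke_password(site, username)
```

The missing piece, of course, is the other half of the standard: a way for the site itself to trigger `make_password` and accept the result, which is exactly what doesn’t exist.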

Why the hell hasn’t W3C done anything like this? It seems such a basic and useful project compared to the vast effort poured into the semantic web black hole.

Update: Naadir Jeewa objects.

I think he is wrong. Not only is OAuth in the sense of “sign in with Facebook”, i.e. the sense in which it gets used, a bad case of pre-Snowden thinking, it’s also true that it works for me about 25% of the time.

Ladies and gentlemen, we are floating in money

Something we’ve needed for a while: a good hard stomp on the knuckles of all this MAGIC FACEBOOK DRONEZ FOR AFRICA nonsense. Provided. I especially like the point that in fact, mobile operators are building 3G coverage in these places right now using the exciting new technology of sticking the antenna on a pole. A case in point: Vodafone’s M-PESA mobile payments platform is moving into hosting in Kenya this spring, having so far been based in a Vodafone data centre in Germany. That’s a huge vote of confidence in Kenyan infrastructure.

I would only add that a typical national cellular network is between 3,000 and 13,000 Node Bs, and that’s a lot of flying robots, especially when you think that they will need to rotate home for downtime. It’s also a hell of a lot of aerial activity for countries that don’t have much in the way of air traffic control. And typical monthly blended ARPU in these areas is around $5. If you want to attach a flying robot to each cell, how’s that going to add up?
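It doesn’t add up, even on generous assumptions. Every number below except the $5 ARPU from the post is invented purely to make the arithmetic concrete:

```python
# revenue side: taken from the post's blended ARPU figure
subs_per_cell = 1_000          # assumed subscribers served per cell
arpu = 5.0                     # dollars/month, from the post
monthly_revenue_per_cell = subs_per_cell * arpu

# cost side: all assumptions, and generous ones at that
drone_capex = 300_000          # assumed cost of one aerial platform
fleet_factor = 2               # spares for rotating home, assumed
amortisation_months = 36       # assumed useful life

monthly_cost_per_cell = drone_capex * fleet_factor / amortisation_months

print(monthly_revenue_per_cell)        # 5000.0 dollars of revenue
print(round(monthly_cost_per_cell))    # several times that, before opex
```

Even before fuel, spectrum, air traffic headaches, and backhaul, the platform costs a multiple of the entire cell-site revenue. The pole, by contrast, is cheap and does not need to rotate home.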

It’s basically the equivalent to all the people who were going to cover this or that with free WiFi back in about 2004 and we wouldn’t need boring carriers with all their boring regulation and boring unions and boring universal service and boring and why you so boring, Sven Radioplanner?

Speaking of which, I saw a Bell Labs presentation in about 2005 of research into mobile base stations that would actually be mobile themselves, chugging about airports on their wheels to optimise the network design. I note that I’ve yet to find a Node B chasing me into a tube station, like the infrastructure for the Direct Line phone. I suspect that the problem of designing such a highly dynamic radio network might be quite complicated. Presumably the drones talk to each other, so it’s a mesh network, and one thing we know about those is that they don’t scale particularly well.

It is actually true that the bellhead/nethead divide persists after all these years. At MWC the other week, I was amazed at the big deal people on the main site made about having an app or a Web site, while over at the developer event people would start up a RESTful service during their own presentations. Similarly, at IETF this week, I mentioned BCP38 to someone and they had no idea what it was – the stereotype of being a bit unworldly and not really interested in user or operator problems has a grain of truth.

But this sort of stupid cap-badge politics divide is just that – stupid, and misleading. It also acts as camouflage for all sorts of ugly prejudices and assumptions, in this case that Africans need saving by DRONEZ, that Facebook is the first of their concerns, that everyone who works for a telco or worse, a government, is an idiot, and that only idiots get involved with infrastructure.

Meanwhile in the UK, we still haven’t fixed the thing where you get to not pay rates on new fibre until it’s sold and profitable, but only if you’re BT, and Cory Doctorow is worrying about the renewed London property boom eating start-ups so they can be replaced by oligarch units.

Here’s a really nice group profile of Xavier Niel, Stéphane Richard, and Martin Bouygues from Le Monde. It’s a pity the reporter doesn’t sound able to assess anything technical they say critically – it’s certainly not true that Free doesn’t do engineering – but it does point up the way they seem to come from three different versions of France. Richard, the super-elite but entirely general purpose technocrat; Bouygues, the Neuilly heir to a fortune built on selling construction projects to the government; Niel, the guy from post-1983 who ran away to the Internet and thinks everyone should learn programming.

daft IP addressing choices

This is only one of the reasons why squatting in other people’s netblocks is a bad idea. To understand the point, you’ve got to go back to the BT 21CN project, which was one of those “the Internet is just another service over our private network” ideas telcos tend to love. Although a lot of it didn’t work, like the weird ethernet-level multiservice router, they did build a huge MPLS core network that carries all the other stuff – i.e. mostly the Internet – as encapsulated traffic.

Because they did it this way, they also didn’t do IPv6, which left them with a problem. One of the advantages of doing it the way they did was that they could trivially have a parallel management network. But that meant finding at least two addresses per device for the whole of the UK. So they had the bright idea of picking a big netblock that doesn’t appear in the Internet routing table, and “borrowing” that.

Sensibly, they looked for one that would be very unlikely to ever be announced. Some organisations who got huge IP allocations back in the day, like MIT with its 3 /8 blocks, have been prevailed on to give at least some of them back for public use. The classic case is the trade show Interop, which used to own 45/8 and only use it one week a year.

The US Department of Defense, however, has a hell of a lot of address space, and usually doesn’t route it publicly, for fairly obvious reasons. And if they don’t want to give it up, who’s going to make them? So they peeked into the DODNIC allocation and picked 30/8. This is quite common; one day somebody will audit it all and there will be surprises.
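The audit, at least, is nearly a one-liner these days: the standard library will tell you whether an address you’re seeing sits inside a “borrowed” block like the DoD’s 30.0.0.0/8.

```python
import ipaddress

# the DoD block the post describes BT "borrowing"
borrowed = ipaddress.ip_network("30.0.0.0/8")

def is_squatting(addr):
    """True if addr sits inside the borrowed netblock."""
    return ipaddress.ip_address(addr) in borrowed

print(is_squatting("30.1.2.3"))    # True: inside the DoD /8
print(is_squatting("192.0.2.1"))   # False: documentation space
```

Run that over your internal config dumps and the surprises turn up fast.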

PRISM. Sometimes it’s easier to solve these things in Lawyer

I think it is probably important to direct attention to this post, which contains the only convincing explanation of PRISM I’ve yet seen, including the tiny budget (if it only cost $20m to process everything in Apple, Google, Facebook etc, what do they need all those data centres for), the overt denials, and the denial of any technical backdoor.

Basically, the argument is that PRISM is an innovation in the technology of law rather than the technology of computing, some sort of expedited court order programmed in Lawyer requiring the disclosure of specified data, and perhaps providing for enduring or repeated collection. This would avoid the need to duplicate vast amounts of infrastructure or trawl every damn thing, would stick to the letter of the law, and would help engineers sleep, as it wouldn’t imply creating a vulnerability that could be used by both the NSA and God-knows-who. It would also permit the President and such folk to deny that everyone was being monitored, as of course they are not.

That said, data could be requested on anybody who the court could be convinced was of interest. As the legalities seem quite permissive and anyway the court is a bit of a flexible friend, this means a lot of people. And in an important sense it doesn’t matter. The fact that surveillance is possible is important in itself. Bentham’s panopticon was based on the combination of overt surveillance – the prisoners knew that there was a guard watching them – and covert surveillance – the fact that the prisoners didn’t know at any given moment who the guard might be watching and therefore could not be certain they were not being observed.

The degree to which this was an aim of PRISM must be limited, because it was after all meant to be secret. But it is hard to avoid the conclusion that it’s there.

Something else. I’ve occasionally said that the Great Firewall of China should be seen as a protectionist trade-barrier as much as an instrument of censorship. Huge Chinese Internet companies exist that probably wouldn’t if everyone there used Facebook, Google, etc. Here you see another benefit of it – the Public Security Bureau gets to spy on QQ, but it’s harder for the Americans (or anyone else) to poke around. This may explain why the NSA seems to pick up lots of data from India and much less from KSA or China; you can PRISM for terrorists trying to affect the Indo-Pak nuclear balance and you can’t for Chinese targets.

Borders are always interesting, and this is today’s version.

Iran, of course, does another twist on this. It has a vigorous internal ISP industry, but monopolises international interconnection through a nationalised telco, DCI, that practices serious censorship. However, the same company also sells unfiltered, real Internet connectivity to actors outside Iran, notably in Oman, Pakistan, Iraq, and Afghanistan, almost certainly following Iranian foreign policy goals. DCI has even gone so far as to invest heavily in a new Europe-Middle East submarine cable to add capacity and improve quality (notably by taking a shorter route to Europe, and adding path-diversity against Cap’n Bubba and his anchor). Back in 2006, supposedly, the best Internet service in Kabul was in the cybercafe they installed in the Iranian embassy’s cultural centre.

(A starter-for-ten. Has anyone else noticed that the major cloud computing providers, Amazon Web Services, Salesforce/Heroku, Rackspace et al, aren’t mentioned?)


Yahoo! has not joined any program in which we volunteer to share user data with the U.S. government. We do not voluntarily disclose user information. The only disclosures that occur are in response to specific demands. And, when the government does request user data from Yahoo!, we protect our users. We demand that such requests be made through lawful means and for lawful purposes. We fight any requests that we deem unclear, improper, overbroad, or unlawful. We carefully scrutinize each request, respond only when required to do so, and provide the least amount of data possible consistent with the law.

The notion that Yahoo! gives any federal agency vast or unfettered access to our users’ records is categorically false. Of the hundreds of millions of users we serve, an infinitesimal percentage will ever be the subject of a government data collection directive. Where a request for data is received, we require the government to identify in each instance specific users and a specific lawful purpose for which their information is requested. Then, and only then, do our employees evaluate the request and legal requirements in order to respond—or deny—the request.

Yahoo!’s top lawyer, spinning like a top, but basically confirming the notion of PRISM as a surveillance technology implemented in Lawyer.

internal chaos, exported

A case of China exporting its internal chaos, as Jamie Kenny would say; I was recently talking to someone who had installed a wireless broadband network in China, and they mentioned that they’d had an exciting experience with a Huawei router. Politicians whose constituents include Huawei’s competitors are endlessly insinuating that their equipment is always secretly talking back to the Chinese, but no-one has ever caught them at it.

So our chap was suitably fascinated when they turned the thing up and they immediately started to see traffic heading for an apparently inexplicable address within China Telecom’s provincial network in Guangdong. Now, they weren’t in the province, but of course Huawei HQ is. Of course they fired up a monitoring tool to capture the traffic and see what it was.

It turned out to be the router’s internal inter-chassis traffic, which should have been going to its own loopback interface, but was instead leaking onto the Internet. It seemed that someone at Huawei had borrowed some public IP addresses for use in their lab, rather than either using Huawei address space privately or using the designated private address space, had baked one of them into the router firmware, and had then forgotten about it. (Rather like that time all the D-Link Wi-Fi boxes in the world started asking some guy in Denmark for a time signal, in case you think it’s just the Chinese who do these things.)
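For what it’s worth, checking whether an address is safe to bake into firmware is also nearly a one-liner against the designated private ranges (RFC 1918 plus loopback). The last address below is just an illustrative public one, not the actual culprit from the story:

```python
import ipaddress

def should_stay_internal(addr):
    """True for addresses that never belong on the public Internet."""
    ip = ipaddress.ip_address(addr)
    return ip.is_private or ip.is_loopback

print(should_stay_internal("10.11.12.13"))    # True: RFC 1918 space
print(should_stay_internal("127.0.0.1"))      # True: loopback
print(should_stay_internal("202.96.128.86"))  # False: public address
```

Had the lab address been drawn from 10.0.0.0/8 in the first place, the traffic would have died quietly inside the customer’s network instead of setting off for Guangdong.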

Obviously, routing via China would have been…suboptimal, and would have involved passing through the Great Firewall. But it would have worked in Huawei’s lab, or locally in Guangdong. No conspiracy, just internal chaos leaking across the border.