Kibitzing SAGE

Even more virus blogging. So, the SAGE 58 meeting on the 21st September supported going back into lockdown. Among much else it said this:

An effective test, trace and isolate (TTI) system is important to reduce the incidence of infections in the community. Estimates of the effectiveness of this system on R are difficult to ascertain. The relatively low levels of engagement with the system (comparing ONS incidence estimates with NHS Test and Trace numbers) coupled with testing delays and likely poor rates of adherence with self-isolation suggests that this system is having a marginal impact on transmission at the moment

This is interesting. The main reason why local outbreak investigations do better than the call centre is that they deal with outbreaks with a defined setting, where there might be a list of people involved. Meanwhile, a lot of people who should be self-isolating aren’t, because they don’t know what the rules are. SAGE, though, is driving at something completely different.

Let’s close-read the paragraph I quoted. SAGE is apparently comparing the ONS incidence estimates with the count of cases issued by the NHS. SAGE believes that this shows relatively low levels of engagement with the system. To put it another way, SAGE is arguing that the problem is not the NHS failing to test cases or to trace their contacts; rather, the ONS survey shows there are a lot of cases who aren’t even getting tested or speaking to a doctor. Implicitly, SAGE is also arguing that the ONS survey is right. At a higher level, SAGE is also arguing implicitly that this unobserved element of the epidemic is important.

If true, this might explain a lot. If you don’t know who was infected at the party, but you know somebody was, it’s possible to warn everyone there and also to reason about who it might be by a process of elimination. You ask Colleen. If you don’t know they were ever there, logically enough, you can’t. On the other hand, this is a big claim needing big evidence.

The ONS publishes a lot of detail on how their survey works, and their partners at Oxford University publish the full specifications here.

Specifically, the ONS recruited a randomly selected panel of respondents out of the huge sample it maintains for the Labour Force Survey, the gold standard measurement of things like unemployment, hours, and wages. It’s unlikely that any one random sample will exactly match the demographic profile of the population, especially as not everyone in the LFS panel who was asked will have chosen to take part, so the panel is weighted to make it match. In the first phase of the survey, back in May, they recruited 10,000 households. Since then, they’ve been approaching additional panellists at a rate of 3,000 households a week, before scaling this up to a target of 26,000 a week from the end of July. The aim of this scale-up is to generate a much bigger panel for the colds and flu season. It may be important that the recruitment is going to taper off starting at the end of this month.
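
To make the weighting idea concrete, here’s a toy sketch of how cell weighting works: under-represented groups count for more, over-represented ones for less. The cells and numbers below are invented for illustration; the ONS’s actual weighting scheme is considerably more elaborate.

```python
# Toy illustration of weighting a sample to match the population.
# Each respondent gets a weight so the weighted sample matches known
# population shares. All figures here are made up for illustration.

# population share of each age band (invented numbers)
population_share = {"16-34": 0.30, "35-64": 0.50, "65+": 0.20}

# share of the achieved sample in each band (invented; older people over-respond)
sample_share = {"16-34": 0.20, "35-64": 0.45, "65+": 0.35}

# weight = population share / sample share, so under-represented groups count more
weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}

for cell, w in weights.items():
    print(f"{cell}: weight {w:.2f}")
# 16-34: weight 1.50, 35-64: weight 1.11, 65+: weight 0.57
```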

Random samples of the panellists are tested weekly, some on an antibody test (to see if they have ever been exposed) and some on an RT-PCR test (to see if they are currently infected). The percentage of panellists testing positive should be within a calculated margin of error of the percentage of the nation who would test positive if they were tested, and the change from week to week should give us a measure of the growth rate. ONS uses the now-famous multilevel regression with post-stratification (MRP) technique to provide estimates for regional and demographic subsets of the survey, down to the 9 former government office regions.
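
For a flavour of where that margin of error comes from, here’s a back-of-envelope version using a simple binomial standard error. The swab count and number of positives are invented, and the ONS derives its intervals from a model rather than this textbook formula, but the arithmetic gives the right order of magnitude.

```python
import math

# Toy margin-of-error calculation for a prevalence estimate.
# n and positives are invented; the ONS's published intervals come from
# its model, not this simple formula, but the idea is the same.
n = 10000          # swabs tested in a week (hypothetical)
positives = 80     # positive results (hypothetical)

p_hat = positives / n                      # estimated prevalence: 0.8%
se = math.sqrt(p_hat * (1 - p_hat) / n)    # standard error of a proportion
margin = 1.96 * se                         # 95% confidence half-width

print(f"prevalence {p_hat:.2%} +/- {margin:.2%}")
# prevalence 0.80% +/- 0.17%
```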

Here’s the difficult bit. A survey like the ONS’s is often likened to tasting a soup to tell if there’s enough salt in it. You don’t need to eat all the soup or even a lot of the soup, you just need to stir it enough to get a well-mixed spoonful, a representative sample. Fair enough, if we’re tasting a soup that’s been through a blender. A really basic fact about the pandemic is that it has happened in clusters. The famous R value is the average number of transmissions per case. If the distribution of transmissions across cases were flat, or if it followed a normal bell-curve distribution, that average would be enough, but it doesn’t. Instead, the distribution has a long tail to the right, described by the slightly less famous dispersion parameter k. Something like 10 per cent of the cases cause 80 per cent of the secondary transmissions. So your chance of catching it is determined, first of all, by whether or not you’re in a cluster.
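
If you want to see how a small k produces that 80/20 pattern, here’s a quick simulation drawing each case’s secondary infections from a negative binomial. The values of R and k are illustrative assumptions, not figures from the SAGE paper.

```python
import numpy as np

# Sketch of overdispersed transmission: secondary cases per infected person
# drawn from a negative binomial with mean R and dispersion k. R and k are
# illustrative values, not estimates taken from SAGE.
rng = np.random.default_rng(0)
R, k, cases = 1.3, 0.1, 100_000

# numpy parameterises the negative binomial by (n, p); with n = k and
# p = k / (k + R) the mean is R and the dispersion parameter is k.
secondary = rng.negative_binomial(n=k, p=k / (k + R), size=cases)

share_infecting_nobody = np.mean(secondary == 0)
top10 = np.sort(secondary)[-(cases // 10):]        # the top 10% of spreaders
share_from_top10 = top10.sum() / secondary.sum()

print(f"{share_infecting_nobody:.0%} of cases infect nobody")
print(f"top 10% of cases cause {share_from_top10:.0%} of transmissions")
```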

The ONS survey has a problem here. If a respondent finds herself by chance in a cluster and catches the virus, it’s unlikely that many other respondents will be in the same cluster, as the whole point of the sampling process is to choose respondents in an evenly distributed manner and clusters by definition aren’t evenly distributed. This effect will tend to under-count the epidemic. On the other hand, if multiple respondents do happen to be in the same cluster, the weighting process might over-count them, especially if the respondents involved are demographically rare or difficult to contact and therefore need to be upweighted to achieve a representative sample. This problem is hard. Clusters are only identifiable in hindsight, and trying to get respondents from known clusters would obviously introduce a massive selection bias. Further, if a cluster occurs and no respondent is in it, the survey is entirely blind to its existence. The ONS’s specification says:
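
A toy simulation makes the point. Everything here is invented, and the “settings” of 100 people are a caricature rather than the ONS’s actual household design, but it shows what happens when the same number of infections is either spread evenly or packed into a handful of clusters: the clustered version produces a flood of sample draws that simply read zero, balanced by the odd one that reads absurdly high.

```python
import numpy as np

# Toy simulation of the clustering problem. All numbers are invented, and
# the "settings" are a caricature, not the ONS's household design. The same
# 2% of people are infected in both worlds; in one they are spread evenly,
# in the other they are packed into 16 wholly-infected settings.
rng = np.random.default_rng(1)
n_settings, setting_size, sampled_settings, n_samples = 800, 100, 20, 10_000

even = rng.random((n_settings, setting_size)) < 0.02        # evenly spread
clustered = np.zeros((n_settings, setting_size), dtype=bool)
clustered[:16, :] = True                                    # 16 settings wholly infected

for name, world in [("even", even), ("clustered", clustered)]:
    estimates = []
    for _ in range(n_samples):
        picked = rng.choice(n_settings, size=sampled_settings, replace=False)
        estimates.append(world[picked].mean())              # prevalence in this draw
    estimates = np.array(estimates)
    print(f"{name:9s}: true {world.mean():.1%}, "
          f"median sample {np.median(estimates):.1%}, "
          f"samples reading zero {np.mean(estimates == 0):.0%}")
```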

It is not anticipated that sufficient information will be available from the original Phase I cross-sectional survey to construct (Bayesian and non-Bayesian) spatiotemporal models (for example, mixed generalised additive models can also incorporate geographical effects). Ultimately however, these cross-sectional models will be included to estimate the impact of both calendar time and geographical proximity on seroprevalence as measured by the different immunity assays. We anticipate that two analyses will be performed. The first model will be a mixed effects model that accounts for the survey design using survey weights but ignores any spatial correlation. The second model will be a (Bayesian) spatiotemporal model, accounting for both spatial correlation and the complex survey design. In this model, individual and area-specific factors will be used in the model to predict the seroprevalence in areas from which no seroprevalence data has been collected. Additional models will consider weighting for non-response.

To put it another way, they’re trying to predict how many cases there are in locations where there’s no data, on the assumption that the correlates of infection in locations with data also hold for the ones without. A major point here is that the autumn scale-up is deliberately designed to increase the survey’s geographical coverage and improve the regional estimates.
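
The flavour of that, stripped of the spatial correlation and the covariates, is partial pooling: a region’s estimate is a blend of its own data and the pooled national picture, and a region with no data at all falls back entirely on the prediction. The sketch below is a toy under those assumptions, not the ONS’s actual model.

```python
import numpy as np

# Toy sketch of partial pooling. Each region's estimate is a blend of its own
# sample and the pooled national estimate, weighted by how much data it has;
# a region with no data falls back on the national figure. Every number here
# is invented, and this is only the flavour of a multilevel model, not the
# ONS's actual Bayesian spatiotemporal specification.
rng = np.random.default_rng(2)

swabs = {"A": 1200, "B": 300, "C": 40, "D": 0}               # swabs per region (invented)
true_prev = {"A": 0.010, "B": 0.020, "C": 0.015, "D": 0.025}

positives = {r: rng.binomial(n, true_prev[r]) for r, n in swabs.items()}

national = sum(positives.values()) / sum(swabs.values())     # pooled estimate
m = 200  # prior weight in pseudo-observations (an arbitrary choice here)

for r, n in swabs.items():
    raw = positives[r] / n if n else float("nan")
    pooled = (positives[r] + m * national) / (n + m)         # shrink towards national
    print(f"region {r}: swabs={n:4d} raw={raw:.3f} pooled={pooled:.3f}")
```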

The case count from the medical system is a very different measurement. It’s the number of people diagnosed on a given day, minus the number lost in that damn .xls file. This logically excludes anyone who was not diagnosed, for whatever reason, so it is almost certain to be an undercount and very unlikely to be an overcount. Although it misses a lot of people who haven’t sought medical care, it might do a lot better on clusters, as a cluster is by definition an event when a lot of people get ill or test positive, and clusters are followed up by local public health teams whose results go into this count.

For me, the interesting question is which number is closest to what we want to observe. The doctors have a useful distinction – there is SARS-CoV-2, the virus, and COVID-19, the disease. The social phenomenon of the pandemic is driven by the disease, and I would argue that the rate of people getting ill, seeking care, and needing to isolate is probably the interesting one as a result.

This leaves us with the question of why the SAGE team made the choice they did. Unfortunately they’re not saying. The whole document is extremely thin on evidence or indeed arguments, and doesn’t really contain numbers. It gives the strong impression that evidence and reasons are part of the detail that a certain kind of big shot insists on being protected from. It has, in a nutshell, been dumbed down.

I can think of a few options:

Firstly, they strongly believe the ONS modelling. There are good reasons to do that. It’s the ONS and they’re not stupid. Their count is going up, like the testing count, and going up in the places the testing count is going up, so it measures something similar. However, you have to wonder about the estimate of the serological epidemic over and above the social epidemic.

Secondly, they think true asymptomatic spread is a thing. As far as anyone knows, people who get COVID-19 take about four days to show symptoms and are seriously contagious in a window of roughly four days spanning the appearance of the symptoms, with the chance of transmission declining beyond that. If they believe the driver of the pandemic really is people who never notice anything, this is new. Also, more virus is worse, less is better; it would be very surprising if viral load, and therefore symptoms, weren’t a driver of transmission. And this is not what the SAGE document says – they say it’s low levels of engagement with the system.

Thirdly, taking SAGE’s words at their face value, people are not engaging with the system. That is to say, a lot of people are coughing and sweating around without doing or saying anything. This is big news. Explanations might include the absence of support for self-isolators, denial, belief that nothing is available (thanks in part to our wonderful media), fear of the virus at testing sites or GPs, or just general chaos and lack of social trust. This, of course, would be a political statement, and perhaps there are a few things there you wouldn’t put in your SAGE report.

4 Comments on "Kibitzing SAGE"


  1. ONS ask for symptoms as part of the survey. Presumably they know how much of their numbers come from symptomatic cases and how much from asymptomatic ones.


  2. Towards the end of your post you say: “This leaves us with the question of why the SAGE team made the choice they did.”

    I had to go back to the start to try to work out what choice SAGE has made (in your view) and I’m not sure that I fully understand. Is it possible to explain in the form “SAGE chose X rather than Y”.

    SAGE appear to be saying that there is an unobserved element of the epidemic that is important. Are you saying that this is incorrect, or are you saying that it might be correct but the evidence presented by SAGE is weak?

    Observation of the Test and Trace system suggests that it may be true that there is a very important unobserved element of the pandemic. The system is too fragmented. The information gets to local tracers too late and with too many errors. It is focused on forward tracing – i.e. tracing people who may have been in contact with a positive case after they got sick, and thus asking large numbers of people with a small chance of being infected to self-isolate (which means losing income and locking themselves in their garden shed if they were really to follow the guidelines). It isn’t focused enough on backward tracing – i.e. tracing back to where positive cases got infected, which might find the super-spreader events and thus make it possible to observe how the pandemic is spreading.

    So SAGE may be partly right but for the wrong reasons. Possibly some people do not engage with the system, but also it is possible that people do get tested but the system isn’t being used to observe how the pandemic is spreading.


    1. the choice of which data series to believe.

      It isn’t focused enough on backward tracing – i.e. tracing back to where positive cases got infected, which might find the super-spreader events and thus make it possible to observe how the pandemic is spreading.

      This is what the PHE outbreak investigation teams are doing.

