The UK NHSX “contact tracing” app is being deployed today, in one small place, to test whether this approach might help get us out of lockdown. Unfortunately, the launch is beset by published arguments back and forth about whether the app is technically sound, meets privacy expectations, or will even work.
As everyone knows, Apple and Google are coming up with their own OS-specific APIs that will allow contact tracing. Fundamentally, there is a big difference between their approach and the NHSX approach. It has been described as “centralised” versus “decentralised”; this doesn’t really help people understand what’s going on.
What do they do?
The core difference is what is reported when someone is unwell. In the Apple and Google (AAG) approach, each user gets a series of identifiers that change frequently, and devices swap these identifiers at each contact. If a user becomes unwell, they publish their list of identifiers, and everyone else can check that central list to see whether anyone they swapped an identifier with later became unwell. This is still kind-of centralised, but an unwell user publishes only their own IDs, and those who came into contact then make their own decision about the risk.
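The decentralised flow above can be sketched as a toy simulation. This is purely illustrative: the class and method names are invented, and real implementations derive rotating identifiers cryptographically from a daily key rather than generating random tokens.

```python
import secrets

class Device:
    """Toy model of the decentralised (AAG-style) flow. Names are
    illustrative, not the real API."""

    def __init__(self):
        self.my_ids = []     # identifiers this device has broadcast
        self.heard_ids = []  # identifiers heard from nearby devices

    def rotate_id(self):
        # Real systems derive rotating IDs from a daily key; a random
        # token is enough for this sketch.
        new_id = secrets.token_hex(8)
        self.my_ids.append(new_id)
        return new_id

    def record_contact(self, other_id):
        self.heard_ids.append(other_id)

    def publish_on_diagnosis(self):
        # An unwell user uploads only their OWN identifiers.
        return list(self.my_ids)

    def check_exposure(self, published):
        # Matching happens on-device: the server never learns who met whom.
        published = set(published)
        return any(i in published for i in self.heard_ids)

alice, bob, carol = Device(), Device(), Device()
bob.record_contact(alice.rotate_id())      # Alice and Bob meet
published = alice.publish_on_diagnosis()   # Alice later becomes unwell
assert bob.check_exposure(published)       # Bob learns of the exposure locally
assert not carol.check_exposure(published) # Carol never met Alice
```

The key property is that the central list contains only the identifiers of unwell users, and the matching against contact history happens entirely on each person's own device.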
The NHSX approach is different. Users exchange encrypted static identifiers rather than frequently changing ones. Users who become unwell don’t publish their own identifiers; instead, they upload the encrypted identifiers of those they came into contact with. The central system then decides whether to inform any of those contacts.
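The centralised flow can be sketched in the same toy style. Again, all names are invented, and the real system encrypts identifiers so that only the central server can resolve them; here a server-issued token stands in for that.

```python
import secrets

class CentralServer:
    """Toy model of the centralised (NHSX-style) flow; names are
    illustrative. A server-issued token stands in for the encrypted
    static identifier of the real design."""

    def __init__(self):
        self.registered = {}   # app identifier -> user (server can resolve IDs)
        self.to_notify = set()

    def register(self, user):
        app_id = secrets.token_hex(8)  # static identifier per install
        self.registered[app_id] = user
        return app_id

    def report_symptoms(self, contact_ids):
        # The unwell user uploads the identifiers of the people they MET.
        # The server, not the user, decides who gets warned.
        for cid in contact_ids:
            if cid in self.registered:
                self.to_notify.add(self.registered[cid])

server = CentralServer()
alice_id = server.register("alice")
bob_id = server.register("bob")
alice_contacts = [bob_id]            # Alice meets Bob and logs his identifier
server.report_symptoms(alice_contacts)  # Alice becomes unwell and reports
assert server.to_notify == {"bob"}
```

The contrast with the sketch above is the direction of the upload: here the server sees the contact graph (who met whom), which is what enables central risk assessment, and also what creates the re-identification concerns discussed below.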
What don’t they do?
In other respects, the applications appear to function similarly. Neither asks for GPS permission, so the app doesn’t know where you are and isn’t (directly) tracking your location.
Neither app uploads data about contacts automatically. The AAG approach only ever uploads your own data; the NHSX one uploads other people’s identifiers but only if you are symptomatic.
The Credibility Gap
Somewhat unfortunately, this type of “big data” analysis is rather out of vogue. The politics have been pored over: who has access to the data, what might be done with it afterwards, and so on.
I think it’s sad that a tool which is potentially very powerful - which network analysis undoubtedly is - could be neutered because there is insufficient trust and goodwill amongst the public.
While an awful lot of thought has clearly gone into the NHSX design, it is much more susceptible to “triangulation” attacks: that is, while the data being traded is anonymous, there are avenues to re-identify people by correlating it with other datasets.
As an obvious example, if you were able to gain access to a commercial parcel delivery dataset, it would be straightforward to match a driver’s route to their NHSX app identifier based simply on postal outcodes and timestamps. Once you’d done that, you could work out the people they’d contacted based on the delivery co-ordinates. The only defence against this would be the hope that such brief contact would not register on the app.
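The correlation step in the example above is mechanically simple, which is rather the point. The sketch below uses entirely invented data and a crude overlap count; a real attack would be fuzzier about times and places, but no more conceptually difficult.

```python
from collections import Counter

# Invented data: anonymous contact events as (app_id, outcode, hour),
# and a delivery manifest as (driver, outcode, hour).
contact_events = [
    ("id42", "SW1A", 9), ("id42", "E1", 11), ("id42", "N7", 14),
    ("id99", "SW1A", 9),
]
delivery_manifest = [
    ("driver_jane", "SW1A", 9), ("driver_jane", "E1", 11),
    ("driver_jane", "N7", 14),
]

def reidentify(events, manifest):
    """Count how many (outcode, hour) slots each (app_id, driver) pair
    shares; a high overlap links the anonymous ID to a named driver.
    (If two drivers shared a slot, the last would win -- fine for a sketch.)"""
    slots = {(outcode, hour): driver for driver, outcode, hour in manifest}
    scores = Counter()
    for app_id, outcode, hour in events:
        if (outcode, hour) in slots:
            scores[(app_id, slots[(outcode, hour)])] += 1
    return scores.most_common(1)[0][0] if scores else None

assert reidentify(contact_events, delivery_manifest) == ("id42", "driver_jane")
```

Once an anonymous identifier is pinned to a named individual, every contact logged against that identifier is partially de-anonymised too.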
The NCSC Technical Report is a good document, and demonstrates that there is awareness of this credibility gap among the developers. However, I suspect it’s probably not going to help matters much: those who are against it will remain against it.
The Policy Gap
What I think is interesting about the AAG approach is that they’ve effectively encoded public policy into their design. The concept of being “ill” is binary: you either publish your identifiers, or you don’t. As we know with self-diagnosis, there will be lots of false positives. If people only upload their identifiers after a genuine positive PCR test result (for example), that may simply be too late from a containment standpoint.
Worryingly, asymptomatic people pretty much never appear in the AAG model. It doesn’t really implement “track and trace”: what it does is allow people to be warned if their contacts are later diagnosed.
Additionally, the AAG model is banking on the idea that no information within the contact network is needed to detect illness. Should we worry that someone with symptoms has had a high level of contact with other people? If someone has minor symptoms but has contacted very few others, do we warn those others? The AAG model is silent on questions like these. In the NHSX model, the data is at least partially there and available for analysis.
The NHSX model will be better than the AAG model if some of the following are true:
- it’s important to discover symptomatic people early, before they have the chance to learn a test result.
- it’s crucial to risk-assess the contacts based on length, frequency, or cumulative events.
- it later becomes necessary to ask asymptomatic people to upload their contact logs, in order to operate fully.
The NHSX model is much closer to track and trace. The AAG model will be better if the following are true:
- there are technical reasons to require lower data capture and to allow the operating system to simply broadcast beacons.
- there is no need to judge severity or likelihood of disease, only capture positivity (or not).
- usage at the scale needed requires trust, which only the original software vendors can command.
I personally think it is wholly wrong to look at this as simply a privacy battle: it’s not. These apps appear destined to function in very different ways, offering different capability.
I’m not an epidemiologist, obviously, so I can’t comment on what capability is necessary. However, it’s not clear to me that the NHSX ask is wrong: a lot of it makes sense.
There are some geeky concerns I have, as well:
- is it really going to be possible for NHSX to roll out an app doing relatively complex stuff like this across multiple hardware devices and succeed? This is a huge risk.
- is their encryption up to scratch? I’ve read the overview in the technical paper, and it looks ok - but the devil is always in the detail.
- is their approach to operating bluetooth actually going to work? Low-power apps usually need significant operating system co-operation. While the developers must be reasonably confident their approach works, the wild west of the Android landscape in particular is extremely complex.
I think there are privacy risks; in particular, I think they’ve downplayed the risk of re-identification. However, I have to say that the AAG model doesn’t really seem to me to be true track and trace. If we (as in the UK) want to track, test, and isolate ourselves out of this disaster, the AAG solution would be better than nothing, but there’s a clear risk that it captures insufficient data. I would personally err on the side of collecting more data.
To get over the credibility gap, I think NHSX need to state early and clearly that the data will be ring-fenced for this one project and destroyed entirely at the end. They shouldn’t risk the success of the project on adherence.