Smarter contact tracing

Not just another cell-phone idea

Can Contact Tracing Work At COVID Scale? By Amit Kaushal and Russ B. Altman, July 8, 2020. DOI: 10.1377/hblog20200630.746159

You cannot trace everybody, so be smart about who you trace. This article points out the impracticality of massive contact tracing, and how to build a learning system to make it useful anyway. Contact tracing is hard, and when there are too many cases it starts to break down. But we need to figure it out, especially in high-priority settings and in places with limited outbreaks. There are also many idiosyncrasies in Covid infection patterns. A well-executed learning system can gradually make smarter judgments about where to look for cases, who to test, who to quarantine, and when to lift the quarantine.

As we build our nation’s tracing operations, we need to ensure that they are effective at identifying contacts while attempting to quarantine as few people as possible, for as short a duration as possible. To ensure contact tracing remains viable at scale, we must develop data-driven metrics to evaluate and adapt our contact tracing efforts. Historically, successful contact tracing has been measured by its sensitivity [based on more is better]. However, at scale “more is better” breaks down. We must have corresponding metrics for specificity, to … exclude from quarantine those people who have not themselves become carriers of the virus.

Can Contact Tracing Work At COVID Scale? | Health Affairs
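The sensitivity/specificity framing can be made concrete with a small sketch. The contact list below is made up purely for illustration (it is not data from the article); the point is that a learning tracing system should score its quarantine decisions on both metrics, not just on how many infected contacts it catches:

```python
# Toy sketch of scoring a quarantine policy on sensitivity AND specificity.
# Each contact is (was_actually_infected, was_quarantined) -- hypothetical data.
contacts = [
    (True, True), (True, True), (True, False),                       # infected
    (False, True), (False, False), (False, False), (False, False),   # uninfected
]

true_pos  = sum(1 for inf, q in contacts if inf and q)
false_neg = sum(1 for inf, q in contacts if inf and not q)
true_neg  = sum(1 for inf, q in contacts if not inf and not q)
false_pos = sum(1 for inf, q in contacts if not inf and q)

sensitivity = true_pos / (true_pos + false_neg)  # infected contacts we caught
specificity = true_neg / (true_neg + false_pos)  # uninfected contacts we spared

print(f"sensitivity = {sensitivity:.2f}")  # 2 of 3 infected quarantined
print(f"specificity = {specificity:.2f}")  # 3 of 4 uninfected left free
```

"More is better" maximizes only the first number; at scale, a program that ignores the second number quarantines far too many healthy people for far too long.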

But will America’s current political decision-making paralysis, chaos, and suspicion allow the systematic tracing program that would be required? At the national level it seems unlikely. But this approach can be done by states or smaller units. There are probably some states with enough leadership and public willingness to be serious about suppressing Covid before it wipes out another 6 months of jobs and education!

The First Smart Air Quality Tracker?

The first microprocessor is almost 50 years old, but microprocessors (MPUs) continue to revolutionize new areas. (The first MPU was the Intel 4004, in 1971, which Intel designed for a calculator company!) In concert with Moore’s Law and now-ubiquitous two-way wireless data transmission (thanks, Qualcomm!), smartphones have become a basic building block of many products.

A companion to explain what’s in your air, anywhere. Flow is the intelligent device that fits into your daily life and helps you make the best air quality choices for yourself, your family, your community.

Source: Flow, by Plume Labs | The First Smart Air Quality Tracker

Here is a quick review I wrote of the “Flow” pollution meter after using it for a few months. I wrote it as a comment on a blog post by Meredith Fowlie about monitoring the effects of fires in Northern California.

Memorial Sloan Kettering’s Season of Turmoil – The New York Times

America’s health care research system has many problems. The overall result is poor return on the money spent. The lure of big $ is a factor in many of them. Two specific problems:

  • What gets research $ (including from Federal $) is heavily driven by profit potential, not medical potential. Ideas that can’t be patented get little research.
  • Academic career incentives distort both topics of research (what will corporate sponsors pay for?) and publication. The “replicability crisis” is not just in social sciences.

This NYT article illustrates one way that drug companies indirectly manipulate research agendas: huge payments to influential researchers. In this case, the payments were board-of-directors fees. Large speaking fees for nominal work are another common mechanism. Here are some others:

Flacking for Big Pharma

Drugmakers don’t just compromise doctors; they also undermine top medical journals and skew medical research. By Harriet A. Washington | June 3, 2011

I could go on and on about this problem, partly because I live in a biotech town and work at a biotech university. I have posted about this elsewhere in this blog. But since it’s not an area where I am doing research, I will restrain myself.

Rescuing a medical treatment from failure in a clinical trial by using post hoc Bayesian analysis

How can researchers maximize learning from experiments, especially from very expensive experiments such as clinical trials? This article shows how a Bayesian analysis of the data would have been much more informative, and likely would have saved a useful new technique for dealing with ARDS.

I am a big supporter of Bayesian methods, which will become even more important/useful with machine learning. But a colleague, Dr. Nick Eubank, pointed out that the data could also have been re-analyzed using frequentist statistics. The problem with the original analysis was not primarily that they used frequentist statistics. Rather, it was that they set a fixed (and rather large) threshold for defining success. This threshold was probably unattainable. But the clinical trial could still have been “saved,” even by conventional statistics.

Source: Extracorporeal Membrane Oxygenation for Severe Acute Respiratory Distress Syndrome and Posterior Probability of Mortality Benefit in a Post Hoc Bayesian Analysis of a Randomized Clinical Trial. | Critical Care Medicine | JAMA | JAMA Network

Here is a draft of a letter to the editor on this subject. Apologies for the very academic tone – that’s what we do for academic journals!

The study analyzed in their article was shut down prematurely because it appeared unlikely to attain the target level of performance. Their paper shows that this might have been avoided, and the technique shown to have benefit, if their analysis had been performed before terminating the trial. A related analysis could usefully have been done within the frequentist statistical framework. According to their Table 2, a frequentist analysis (equivalent to an uninformative prior) would have suggested a 96% chance that the treatment was beneficial, and an 85% chance that it had RR < 0.9.

The reason the original study appeared to be failing was not solely that it was analyzed with frequentist methods. It also failed because the target threshold for “success” was set at a demanding level, namely RR < 0.67. Thus, although the full Bayesian analysis of the article was more informative, even frequentist statistics can be useful to investigate the implications of different definitions of success.

Credit for this observation goes to Nick. I will ask him for permission to include one of his emails to me on this subject.
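For readers who want to see the mechanics, here is a minimal sketch of this kind of re-analysis. The event counts below are illustrative placeholders, not the trial’s actual data. Under a flat (uninformative) prior, a normal approximation to the posterior of the log relative risk yields the flavor of the probabilities quoted above:

```python
# Sketch: posterior probability that treatment relative risk (RR) falls below
# several thresholds, using a flat prior and a normal approximation on log RR.
# Event counts are HYPOTHETICAL, chosen only to illustrate the method.
from math import log, sqrt, erf

def normal_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# deaths / patients in each arm (illustrative numbers)
d_treat, n_treat = 44, 124
d_ctrl,  n_ctrl  = 57, 125

log_rr = log((d_treat / n_treat) / (d_ctrl / n_ctrl))
# delta-method standard error of log RR
se = sqrt(1/d_treat - 1/n_treat + 1/d_ctrl - 1/n_ctrl)

for threshold in (1.0, 0.9, 0.67):
    p = normal_cdf((log(threshold) - log_rr) / se)
    print(f"P(RR < {threshold}) = {p:.2f}")
```

Because the prior is flat, these posterior probabilities carry essentially the same information as a conventional confidence interval. That is the point: with numbers like these, the probability of any benefit (RR < 1) is high while the probability of clearing an ambitious bar like RR < 0.67 is low, so the choice of success threshold matters more than the choice of statistical school.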

Why does the Harvard Business School, Michael Porter, teach the essence of business strategy is the elimination of competition, by regulation if possible? Is this legal? Is this basically socialism or communism? – Quora

Original question on Quora: Why does the Harvard Business School, Michael Porter, teach the essence of business strategy is the elimination of competition, by regulation if possible. Is this legal? Is this basically socialism or communism?

My response: Trying to pin this on Michael Porter is ridiculous. He says no such thing. Based on the way the question is phrased, I wonder if there is an ideological purpose in asking it.

But in any case, there is a serious issue behind the question, namely an increasing level of oligopoly (decreasing levels of competition) among companies in many US industries. See, for example, “Big Companies Are Getting a Chokehold on the Economy: Even Goldman Sachs is worried that they’re stifling competition, holding down wages and weighing on growth,” or “America Has a Monopoly Problem—and It’s Huge.”

One theory about this trend is that it is partly due to growing power of corporations in Washington. That, in turn, may be traced partly to the increasing role of money in elections, largely as a result of the infamous Supreme Court “Citizens United” decision. For example, the way Trump’s massive tax cuts were put together without any hearings and in a VERY short period of time, and the amount of “goodies” for many industries in the resulting package, would never have happened with previous massive changes in taxes.

An effective strategy in some highly concentrated industries is to persuade the government to selectively regulate your industry, in ways that favor large and established companies. That is, all companies may experience higher costs because of a regulation, but if your company can respond more cheaply than anyone else, it is still a net win for you. Pharmaceuticals are a prime example: pharma companies increasingly use the legal system, regulations, and side deals to keep generic drugs off the market for years after drug patents expire. The industry has also been very effective at keeping foreign competitors out – e.g. blocking imports by individual citizens from Canada.

(I buy one medication at $1 per pill from abroad, when it costs $30/pill at the local Rite-Aid. But it takes a lot of research and effort.)

Source: Why does the Harvard Business School, Michael Porter, teach the essence of business strategy is the elimination of competition, by regulation if possible. Is this legal? Is this basically socialism or communism? – Quora

450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” 

Disclosure in the United Kingdom has sparked a heated debate about the health impacts of an errant algorithm
By Robert N. Charette

Source: 450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” – IEEE Spectrum

It sounds like what we used to call a “bug” to me. I guess bugs are now promoted to “algorithm failures”. 

Nearly half a million elderly women in the United Kingdom missed mammography exams because of a scheduling error caused by one incorrect computer algorithm, and several hundred of those women may have died early as a result. Last week, U.K. Health Minister Jeremy Hunt announced that an independent inquiry had been launched to determine how a “computer algorithm failure” stretching back to 2009 caused some 450,000 patients in England between the ages of 68 and 71 to not be invited for their final breast cancer screenings.

The errant algorithm was in the National Health Service’s (NHS) breast cancer screening scheduling software, and remained undiscovered for nine years.

“Tragically, there are likely to be some people in this group who would have been alive today if the failure had not happened,” Hunt went on to tell Parliament. He added that based on statistical modeling, the number who may have died prematurely as a result was estimated to be between 135 and 270 women.

What snakes are growing in the Gardens of Technological Eden?

Two emerging technologies are revolutionizing industries, and will soon have big impacts on our health, jobs, entertainment, and entire lives. They are Artificial Intelligence and Big Data. Of course, these have already had big effects in certain applications, but I expect that they will become even more important as they improve. My colleague Dr. James Short is putting together a conference called Data West at the San Diego Supercomputer Center, and I came up with a list of fears that might disrupt their emergence.

1) If we continue to learn that ALL large data repositories will be hacked from time to time (Experian; National Security Agency), what blowback will that create against data collection? Perhaps none in the US, but in some other countries, it will cause less willingness to allow companies to collect consumer data.

2) Consensual reality is unraveling, mainly as a result of deliberate, sophisticated, distributed, attacks. That should concern all of us as citizens. Should it also worry us as data users, or will chaos in public venues not leak over into formal data? For example, if information portals (YouTube, Facebook, etc.) are forced to take a more active role in censoring content, will advertisers care? Again, Europe may be very different. We can presume that any countermeasures will only be partly effective – the problem probably does not have a good technical solution.

3) Malware, extortion, etc. aimed at companies. Will this “poison the well” in general?

4) Malware, extortion, doxing, etc. aimed at Internet of Things users, such as household thermostats, security cameras, cars. Will this cause a backlash against sellers of these systems, or will people accept it as the “new normal”? So far, people have seemed willing to bet that it won’t affect them personally, but will that change? For example, what will happen when auto accidents are caused by deliberate but unknown parties who advertise their success? Or when someone records all conversations within reach of the Alexa box in the living room?

Each of these scenarios has at least a 20% chance of becoming common. At a minimum, they will require more spending on defenses. Will any become large enough to suppress entire applications of these new technologies?

I have not said anything about employment and income distribution. They may change for the worse over the next 20 years, but the causes and solutions won’t be simple, and I doubt that political pressure will become strong enough to alter technology evolution.