Smarter contact tracing

Not just another cell-phone idea

Can Contact Tracing Work At COVID Scale? by Amit Kaushal and Russ B. Altman, July 8, 2020. DOI: 10.1377/hblog20200630.746159

You cannot trace everybody, so be smart about who you trace. This article points out the impracticality of massive contact tracing, and how to build a learning system to make it useful anyway. Contact tracing is hard, and when there are too many cases it starts to break down. But we need to figure it out, especially in high-priority settings and in places with limited outbreaks. There are also many idiosyncrasies in Covid infection patterns. A well-executed learning system can gradually make smarter judgments about where to look for cases, who to test, who to quarantine, and when to lift the quarantine.

As we build our nation’s tracing operations, we need to ensure that they are effective at identifying contacts while attempting to quarantine as few people as possible, for as short a duration as possible. To ensure contact tracing remains viable at scale, we must develop data-driven metrics to evaluate and adapt our contact tracing efforts. Historically, successful contact tracing has been measured by its sensitivity [where more is better]. However, at scale “more is better” breaks down. We must have corresponding metrics for specificity, to … exclude from quarantine those people who have not themselves become carriers of the virus.
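To make the authors’ point concrete, here is a minimal sketch of the two metrics, computed from an invented confusion matrix (all counts are made up for illustration):

```python
# Hypothetical sketch: scoring a tracing program on both metrics.
# All counts are invented for illustration.
quarantined_infected = 80    # true positives: quarantined, later tested positive
missed_infected = 20         # false negatives: not quarantined, became infected
quarantined_healthy = 900    # false positives: quarantined, never infected
free_healthy = 9000          # true negatives: left free, never infected

sensitivity = quarantined_infected / (quarantined_infected + missed_infected)
specificity = free_healthy / (free_healthy + quarantined_healthy)

print(f"sensitivity: {sensitivity:.0%}")  # share of carriers we caught
print(f"specificity: {specificity:.0%}")  # share of non-carriers we left free
```

A program tuned only for sensitivity can hit 100 percent by quarantining everyone; the specificity number is what keeps the quarantine burden small.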

Source: Can Contact Tracing Work At COVID Scale? | Health Affairs

But will America’s current political decision-making paralysis, chaos, and suspicion allow the systematic tracing program that would be required? At the national level it seems unlikely. But this approach can be done by states or smaller units. There are probably some states with enough leadership and public willingness to be serious about suppressing Covid before it wipes out another 6 months of jobs and education!

Don’t expect Level 5 autonomous cars for decades

Why I don’t expect fully autonomous city driving in my lifetime (approx 25 years).

Paraphrase: The strange and crazy things that people do: a ball bouncing in front of your car, a child falling down, a car running a red light, a head-down pedestrian. A Level 5 car has to handle all of these cases, reliably.

These situations require (1) a giant set of training data, (2) very rapid computing, and (3) severe braking. Autonomous cars today drive very slowly and cautiously to allow more time for decisions and for braking.
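Some rough arithmetic (my assumed reaction times and decelerations, not figures from the article) shows why slow speeds buy safety: stopping distance grows roughly with the square of speed.

```python
# Stopping distance = reaction distance + braking distance.
# Assumed numbers, for illustration only: a computer reaction time of
# ~0.5 s (a human needs ~1.5 s) and a comfortable deceleration of 3 m/s^2.

def stopping_distance(speed_kmh: float, reaction_s: float, decel_ms2: float) -> float:
    """Total distance in meters needed to stop from speed_kmh."""
    v = speed_kmh / 3.6                      # km/h -> m/s
    return v * reaction_s + v ** 2 / (2 * decel_ms2)

for speed in (25, 40, 60):
    d = stopping_distance(speed, reaction_s=0.5, decel_ms2=3.0)
    print(f"{speed} km/h: {d:5.1f} m to stop")
```

Going from 25 km/h to 60 km/h multiplies the stopping distance almost fivefold, which is why today’s cautious autonomous cars crawl.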

My view:

There is no magic bullet for these three problems, short of keeping autonomous cars off city streets. And all three get worse in bad weather: in fog, and even more so in snow.

Also, there are lots of behavioral issues, such as “knowing” the behavior of pedestrians in different cities. Uber discovered that frequent braking/accelerating makes riders carsick – so they re-tuned their safety margins, and their car killed a pedestrian.

A counter-argument (partly from Don Norman, jnd1er): human drivers are not good at these situations either, and occasionally hit people. Therefore, we should not wait for perfection, but instead deploy systems that are, on balance, better than humans. As distracted driving gets worse, the tradeoff will shift further in favor of autonomous cars.

But there is another approach to distracted driving. Treat it like drunk driving: make it socially and legally unacceptable. Drunk driving used to be treated as just an accident, with very light penalties even in fatal cases.

Finally, I’m not sure that any amount of real-life driving will be enough to build training datasets for the rarest edge cases. Developers will need supplemental methods to handle them, including simulated accidents and some causal modeling. For example, the probabilities of different events change by location and time of day. Good drivers know this and adjust. Perhaps cars will need adjustable parameters that shift their algorithm tuning in different circumstances, as in the sketch below.
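Here is a hypothetical sketch of what such adjustable parameters might look like. The contexts, parameter names, and numbers are all invented; the point is only that the same perception-and-braking stack could run under different priors in different places and at different times.

```python
# Hypothetical sketch: context-dependent tuning of a driving policy.
# Every context, field, and value below is invented for illustration.
from dataclasses import dataclass

@dataclass
class DrivingParams:
    max_speed_kmh: float     # speed cap in this context
    pedestrian_prior: float  # prior probability of someone stepping out
    brake_margin_s: float    # extra headway reserved for braking

# Priors shift by location and time of day, as a good human driver's do.
CONTEXT_TABLE = {
    ("school_zone", "afternoon"): DrivingParams(25, 0.08, 2.0),
    ("school_zone", "night"):     DrivingParams(40, 0.01, 1.2),
    ("highway",     "afternoon"): DrivingParams(100, 0.001, 0.8),
}

def params_for(location: str, time_of_day: str) -> DrivingParams:
    # Fall back to the most conservative setting when the context is unknown.
    return CONTEXT_TABLE.get((location, time_of_day),
                             DrivingParams(25, 0.08, 2.0))

print(params_for("school_zone", "afternoon"))
```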

Source of the quotation: Experts at the Table: The challenges to build a single chip to handle future autonomous functions of a vehicle span many areas across the design process.

Source: Semiconductor Engineering – Challenges To Building Level 5 Automotive Chips

The First Smart Air Quality Tracker?

The first microprocessor is almost 50 years old, but microprocessors (MPUs) continue to revolutionize new areas. (The first MPU was the Intel 4004, in 1971, which Intel designed for a calculator company!) In concert with Moore’s Law and now-ubiquitous two-way wireless data transmission (thanks, Qualcomm!), smartphones have become a basic building block of many products.

A companion to explain what’s in your air, anywhere. Flow is the intelligent device that fits into your daily life and helps you make the best air quality choices for yourself, your family, your community.

Source: Flow, by Plume Labs | The First Smart Air Quality Tracker

Here is a quick review I wrote of the “Flow” pollution meter, after using it for a few months. I wrote it as a comment on a blog post by Meredith Fowlie about monitoring the effects of fires in Northern California.

Some U.S. police departments dump body-camera programs amid high costs – The Washington Post

Smaller departments that struggle with the cost of equipment and storage of data are ending or suspending programs aimed at transparency and accountability.

Source: Some U.S. police departments dump body-camera programs amid high costs – The Washington Post

My comment: this was predictable. Video data gets big very quickly. See my discussion from three years ago.
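A back-of-the-envelope calculation shows why. With assumed (but plausible) numbers for bitrate, shift length, and retention, even a small department ends up holding tens of terabytes:

```python
# Back-of-the-envelope storage estimate for body-camera video.
# All numbers below are my assumptions, not figures from the article.
BITRATE_MBPS = 5        # ~1080p H.264
SHIFT_HOURS = 8
OFFICERS = 50           # a small department
RETENTION_DAYS = 90     # a typical evidence-retention window

gb_per_officer_shift = BITRATE_MBPS / 8 * 3600 * SHIFT_HOURS / 1000
total_tb = gb_per_officer_shift * OFFICERS * RETENTION_DAYS / 1000

print(f"{gb_per_officer_shift:.0f} GB per officer per shift")  # ~18 GB
print(f"{total_tb:.0f} TB on hand at any given time")          # ~81 TB
```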

Would ‘explainable AI’ force companies to give away too much? Not really.

Here is an argument for allowing companies to maintain a lot of secrecy about how their data-mining (AI) models work. The claim is that revealing information would put companies at a competitive disadvantage. Sorry, that is not a good enough reason. And it’s not actually true, as far as I can tell.

The first consideration when discussing transparency in AI should be data, the fuel that powers the algorithms. Because data is the foundation for all AI, it is valid to want to know where the data…

Source: The problem with ‘explainable AI’ | TechCrunch

Here is my response.

Your questions are good ones. But you seem to think that explainability cannot be achieved except by giving away all the work that led to the AI system. That is a straw man. Take deep learning systems, for example. The IP includes:
1) The training set of data.
2) The core architecture of the network (number of layers, etc.).
3) The training procedures over time, including all the testing and tuning that went on.
4) The resulting system (weights, filters, transforms, etc.).
5) Higher-level “explanations,” whatever those may be. (For me, these might be a reduced-form model that is approximately linear and can be interpreted; a sketch of that idea follows below.)
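To illustrate item 5, here is a minimal sketch of a reduced-form surrogate: fit an interpretable linear model to the black box’s predictions, so you can publish an explanation without revealing items 1 through 4. The models and data are stand-ins, and the sketch assumes scikit-learn is available.

```python
# Hypothetical sketch: a linear surrogate as a higher-level "explanation".
# The black box below is a stand-in for any proprietary trained model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                       # five input features
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)   # hidden "true" rule

black_box = GradientBoostingClassifier().fit(X, y)   # the proprietary model

# Fit the interpretable surrogate to the black box's *predictions*,
# not to the proprietary training data (item 1 stays secret).
surrogate = LogisticRegression().fit(X, black_box.predict(X))

print("surrogate coefficients:", surrogate.coef_.round(2))
print("agreement with black box:",
      (surrogate.predict(X) == black_box.predict(X)).mean())
```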

Revealing even #4 would be somewhat useful to competitors, but not decisive. The original developers will be able to update and refine their model, while people with only #4 will not. The same goes for the other elements.

I suspect the main fear about revealing this, at least among for-profit companies, is that it opens them up to second-guessing. For example, what do you want to bet that the systems now being used to predict recidivism have bugs? Someone with enough expertise and money might be able to make intelligent guesses about those bugs, although I don’t see how they could prove them.

Sure, such criticism would make companies more cautious, and it would cost them money. And big companies might be better able to hide behind layers of lawyers and obfuscation. But those hypothetical problems are quite a distance in the future. Society deserves to know where these systems have problems, and should do more to find out. Let’s allow some experiments, and even different laws in different jurisdictions, to go forward for a few years. To prevent this is just trusting the self-appointed experts to do what is in everyone else’s best interests. We know that works poorly!

450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” 

Disclosure in the United Kingdom has sparked a heated debate about the health impacts of an errant algorithm
By Robert N. Charette


It sounds like what we used to call a “bug” to me. I guess bugs have now been promoted to “algorithm failures.”

Nearly half a million elderly women in the United Kingdom missed mammography exams because of a scheduling error caused by one incorrect computer algorithm, and several hundred of those women may have died early as a result. Last week, the U.K. Health Minister Jeremy Hunt announced that an independent inquiry had been launched to determine how a “computer algorithm failure” stretching back to 2009 caused some 450,000 patients in England between the ages of 68 and 71 to not be invited for their final breast cancer screenings.

The errant algorithm was in the National Health Service’s (NHS) breast cancer screening scheduling software, and remained undiscovered for nine years.

“Tragically, there are likely to be some people in this group who would have been alive today if the failure had not happened,” Hunt went on to tell Parliament. He added that based on statistical modeling, the number who may have died prematurely as a result was estimated to be between 135 and 270 women.

Source: 450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” – IEEE Spectrum
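The inquiry had not published technical details when this was written, so the following is purely a hypothetical illustration of how easily such a failure hides: an off-by-one in an age-eligibility cutoff silently drops one cohort, and nothing ever crashes.

```python
# Hypothetical illustration only: the actual NHS scheduling code was
# never published. An off-by-one in an age cutoff is one classic way
# a "bug" like this can go unnoticed for nine years.

FINAL_SCREEN_MAX_AGE = 70  # suppose invitations are due up to and including 70

def eligible_buggy(age: int) -> bool:
    return age < FINAL_SCREEN_MAX_AGE    # BUG: drops women aged exactly 70

def eligible_fixed(age: int) -> bool:
    return age <= FINAL_SCREEN_MAX_AGE   # inclusive, as the policy intends

for age in (68, 69, 70):
    print(age, eligible_buggy(age), eligible_fixed(age))
```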