The First Smart Air Quality Tracker?

It is now almost 50 years since the first microprocessor, but it continues to revolutionize new areas. (The first MPU was the Intel 4004, in 1971, which Intel designed for a calculator company!) In concert with Moore’s Law and now-ubiquitous two-way wireless data transmission (thanks, Qualcomm!), smartphones have become a basic building block of many products.

A companion to explain what’s in your air, anywhere. Flow is the intelligent device that fits into your daily life and helps you make the best air quality choices for yourself, your family, your community.

Source: Flow, by Plume Labs | The First Smart Air Quality Tracker

Here is a quick review I wrote of the “Flow” pollution meter, after using it for a few months. I wrote it as a comment on a blog post by Meredith Fowlie about monitoring the effects of the fires in Northern California.

I started with a particulate meter (a handheld model, not PurpleAir). Now I also have a Plume Labs unit running full time. It measures PM2.5, but also PM10, NO2 and Volatile Organic Compounds (smog components). https://plumelabs.com/en/flow/
After a few months of use, I am impressed by the hardware. It shows very sharp peaks when we are cooking or something else disturbs indoor air. Sensitivity and consistency are both high.
Another advantage is that it is very portable. It’s actually designed to be worn on your belt while commuting, to discover local hot spots. All data is GPS flagged if you turn that feature on. I think their hope is to build time/location history for many major cities, using crowdsourced data.

Accuracy is harder to assess. The PM2.5 readings are much lower than on my other meter, and are usually below 5. We keep it in our bedroom, and since we run a Roomba frequently (which stirs up dust), I am skeptical about such low numbers. Readings above 20 happen less than once a week. But, as usual with these devices, there is no way to calibrate it against other information: outside meters (as discussed in the article) vary too much to serve as a reference.
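For what it’s worth, the correction itself would be easy to estimate if a trustworthy reference meter were available. Here is a minimal sketch, with made-up numbers and assuming the sensor’s error is roughly linear, of calibrating one meter against a co-located reference:

```python
import numpy as np

# Hypothetical co-located PM2.5 readings (µg/m³), sampled at the same
# times: the Flow unit vs. a trusted reference meter. Values are made up.
flow      = np.array([3.0, 4.5, 5.0, 8.0, 12.0, 20.0])
reference = np.array([8.0, 11.0, 12.5, 19.0, 27.0, 44.0])

# Fit reference ≈ a * flow + b by least squares (assumes a linear bias).
a, b = np.polyfit(flow, reference, deg=1)
print(f"calibration: reference ≈ {a:.2f} * flow + {b:.2f}")

# Apply the correction to future Flow readings.
corrected = a * flow + b
```

The math is not the obstacle; the obstacle is that no trustworthy co-located reference exists.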

The software that goes on your phone is “slick,” but it presents the information in a very limited format. It is optimized for use by commuters/runners. If you want to look at your data differently, such as over multiple days, you are out of luck.
Price is about $180. I compared alternatives for quite a while before selecting this one. It is considerably less expensive than other sensors that go beyond particulates.

Modern smartphones now allow revolutionary advances in portable measurements and in citizen science. They have huge computational power with highly standardized interfaces for application-specific hardware, such as pollution monitors, to link to. Instrument makers now need nothing more than a Bluetooth radio to give their devices graphical displays, real-time tracking and alerting, location flagging, months of data storage, and many other features that used to add hundreds or thousands of dollars to instrument prices.
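As a rough illustration of how low the barrier has become, here is a sketch of reading a hypothetical Bluetooth Low Energy sensor using the cross-platform `bleak` Python library. The device address, characteristic UUID, and byte layout are all invented for the example, since every real device defines its own:

```python
import asyncio
from bleak import BleakClient

# Hypothetical device address and GATT characteristic for sensor readings.
# Real instruments publish these in their firmware documentation.
DEVICE_ADDRESS   = "AA:BB:CC:DD:EE:FF"
SENSOR_CHAR_UUID = "0000abcd-0000-1000-8000-00805f9b34fb"

async def main():
    async with BleakClient(DEVICE_ADDRESS) as client:
        raw = await client.read_gatt_char(SENSOR_CHAR_UUID)
        # Decoding is device-specific; assume two little-endian uint16s.
        pm25 = int.from_bytes(raw[0:2], "little")
        pm10 = int.from_bytes(raw[2:4], "little")
        print(f"PM2.5 = {pm25} µg/m³, PM10 = {pm10} µg/m³")

asyncio.run(main())
```

Everything else (plotting, mapping, alerting, storage) comes essentially free from the phone’s software stack.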

Pollution measured over the course of a day as the owner travels. This is the display shown on my phone.

Memorial Sloan Kettering’s Season of Turmoil – The New York Times

America’s health care research system has many problems, and the lure of big money is a factor in many of them. The overall result is a poor return on the money spent. Two specific problems:

  • What gets research dollars (including federal dollars) is heavily driven by profit potential, not medical potential. Ideas that can’t be patented get little research.
  • Academic career incentives distort both topics of research (what will corporate sponsors pay for?) and publication. The “replicability crisis” is not just in social sciences.

This NYT article illustrates one way that drug companies indirectly manipulate research agendas: huge payments to influential researchers, in this case board-of-directors fees. Large speaking fees for nominal work are another common mechanism. Here are some others:

Flacking for Big Pharma

Drugmakers don’t just compromise doctors; they also undermine top medical journals and skew medical research. By Harriet A. Washington | June 3, 2011

I could go on and on about this problem, partly because I live in a biotech town and work at a biotech university. I have posted about this elsewhere in this blog. But since it’s not an area where I am doing research, I will restrain myself.

Rescuing a medical treatment from failure in a clinical trial by using post hoc Bayesian analysis

How can researchers maximize learning from experiments, especially from very expensive experiments such as clinical trials? This article shows how a Bayesian analysis of the data would have been much more informative, and likely would have saved a useful new technique for dealing with ARDS (acute respiratory distress syndrome).

I am a big supporter of Bayesian methods, which will become even more important and useful with machine learning. But a colleague, Dr. Nick Eubank, pointed out that the data could also have been re-analyzed using frequentist statistics. The problem with the original analysis was not primarily that it used frequentist statistics; rather, it was that it set a fixed (and quite ambitious) threshold for defining success, one that was probably unattainable. But the clinical trial could still have been “saved,” even by conventional statistics.

Source: Extracorporeal Membrane Oxygenation for Severe Acute Respiratory Distress Syndrome and Posterior Probability of Mortality Benefit in a Post Hoc Bayesian Analysis of a Randomized Clinical Trial | Critical Care Medicine | JAMA | JAMA Network

Here is a draft of a letter to the editor on this subject. Apologies for the very academic tone – that’s what we do for academic journals!

The study analyzed in their article was shut down prematurely because it appeared unlikely to attain the target level of performance. Their paper shows that this might have been avoided, and the technique shown to have benefit, if their analysis had been performed before terminating the trial. A related analysis could usefully have been done within the frequentist statistical framework. According to their Table 2, a frequentist analysis (equivalent to an uninformative prior) would have suggested a 96% chance that the treatment was beneficial, and an 85% chance that it had RR < 0.9.

The reason the original study appeared to be failing was not solely that it was analyzed with frequentist methods. It was also that the bar for “success” was set very high, namely RR < 0.67. Thus, although the full Bayesian analysis of the article was more informative, even frequentist statistics can be used to investigate the implications of different definitions of success.

Credit for this observation goes to Nick. I will ask him for permission to include one of his emails to me on this subject.
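To make Nick’s point concrete, here is a minimal sketch of the flat-prior (equivalently, frequentist) calculation, using a normal approximation on log(RR). The point estimate and confidence interval are illustrative values in the ballpark of the trial’s published result, not numbers taken from the article’s Table 2:

```python
import numpy as np
from scipy.stats import norm

# Illustrative summary of the trial: observed relative risk and 95% CI.
# (Assumed for this sketch; see the article's Table 2 for exact figures.)
rr_hat, ci_low, ci_high = 0.76, 0.55, 1.04

# Normal approximation on log(RR): recover the standard error from the
# 95% CI, then treat the flat-prior posterior as Normal(log(rr_hat), se^2).
mu = np.log(rr_hat)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)

print(f"P(RR < 1)    = {norm.cdf((np.log(1.0)  - mu) / se):.0%}")  # any benefit
print(f"P(RR < 0.9)  = {norm.cdf((np.log(0.9)  - mu) / se):.0%}")
print(f"P(RR < 0.67) = {norm.cdf((np.log(0.67) - mu) / se):.0%}")  # the trial's bar
```

With these inputs, the probability of any benefit comes out near 95% and P(RR < 0.9) near 85%, close to the figures quoted above, while the trial’s own bar of RR < 0.67 gets only about a 20% probability. That is exactly how a beneficial treatment can end up looking like a failure.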

Why does the Harvard Business School (Michael Porter) teach that the essence of business strategy is the elimination of competition, by regulation if possible? Is this legal? Is this basically socialism or communism? – Quora

Original question on Quora: Why does the Harvard Business School, Michael Porter, teach the essence of business strategy is the elimination of competition, by regulation if possible. Is this legal? Is this basically socialism or communism?

My response: Trying to pin this on Michael Porter is ridiculous. He says no such thing. Based on the way the question is phrased, I wonder if there is an ideological purpose in asking it.

But in any case, there is a serious issue behind the question, namely an increasing level of oligopoly (decreasing levels of competition) in many US industries. See, for example, “Big Companies Are Getting a Chokehold on the Economy” (“Even Goldman Sachs is worried that they’re stifling competition, holding down wages and weighing on growth”), or “America Has a Monopoly Problem—and It’s Huge.”

One theory about this trend is that it is partly due to the growing power of corporations in Washington. That, in turn, may be traced partly to the increasing role of money in elections, largely as a result of the infamous Supreme Court “Citizens United” decision. For example, Trump’s massive tax cuts were put together without any hearings and in a VERY short period of time, and the resulting package was full of “goodies” for many industries; that would never have happened with previous massive changes in taxes.

An effective strategy in some highly concentrated industries is to persuade the government to selectively regulate your industry, in ways that favor large and established companies. That is, all companies may experience higher costs because of a regulation, but if your company can respond more cheaply than anyone else, it is still a net win for you. Pharmaceuticals are an example: pharma companies increasingly use the legal system, regulations, and side deals to keep generic drugs off the market for years after drug patents expire. The industry has also been very effective at keeping foreign competitors out, e.g. by blocking imports by individual citizens from Canada.

(I buy one medication at $1 per pill from abroad, when it costs $30/pill at the local Rite-Aid. But it takes a lot of research and effort.)

Source: Why does the Harvard Business School, Michael Porter, teach the essence of business strategy is the elimination of competition, by regulation if possible. Is this legal? Is this basically socialism or communism? – Quora

450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” 

Disclosure in the United Kingdom has sparked a heated debate about the health impacts of an errant algorithm
By Robert N. Charette

Source: 450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” – IEEE Spectrum

It sounds like what we used to call a “bug” to me. I guess bugs are now promoted to “algorithm failures”. 

Nearly half a million elderly women in the United Kingdom missed mammography exams because of a scheduling error caused by one incorrect computer algorithm, and several hundred of those women may have died early as a result. Last week, the U.K. Health Minister Jeremy Hunt announced that an independent inquiry had been launched to determine how a “computer algorithm failure” stretching back to 2009 caused some 450,000 patients in England between the ages of 68 and 71 to not be invited for their final breast cancer screenings.

The errant algorithm was in the National Health Service’s (NHS) breast cancer screening scheduling software, and remained undiscovered for nine years.

“Tragically, there are likely to be some people in this group who would have been alive today if the failure had not happened,” Hunt went on to tell Parliament. He added that based on statistical modeling, the number who may have died prematurely as a result was estimated to be between 135 and 270 women.

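The inquiry’s findings were not public when I wrote this, so purely as illustration: a “failure” like this can be as small as a one-character boundary error in an eligibility check. A hypothetical sketch (not the NHS’s actual code or rules):

```python
from datetime import date

SCREENING_MAX_AGE = 70  # hypothetical policy: invite women through age 70

def is_eligible(birth_date: date, today: date) -> bool:
    # Age in whole years as of today.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    # BUG: "<" should be "<=". Women who are exactly 70 are owed a final
    # invitation, but this check silently drops them. Nothing crashes,
    # nothing is logged; the cohort just never appears on invitation lists.
    return age < SCREENING_MAX_AGE
```

Nothing exotic here: it is exactly the kind of defect that code review, and reconciliation of the output against the written policy, are supposed to catch.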

What snakes are growing in the Gardens of Technological Eden?

Two emerging technologies are revolutionizing industries, and will soon have big impacts on our health, jobs, entertainment, and entire lives. They are Artificial Intelligence and Big Data. Of course, these have already had big effects in certain applications, but I expect that they will become even more important as they improve. My colleague Dr. James Short is putting together a conference called Data West at the San Diego Supercomputer Center, and for it I came up with a list of fears that might disrupt the emergence of these technologies.

1) If we continue to learn that ALL large data repositories will be hacked from time to time (Experian; National Security Agency), what blowback will that create against data collection? Perhaps none in the US, but in some other countries, it will cause less willingness to allow companies to collect consumer data.

2) Consensual reality is unraveling, mainly as a result of deliberate, sophisticated, distributed attacks. That should concern all of us as citizens. Should it also worry us as data users, or will chaos in public venues not leak over into formal data? For example, if information portals (YouTube, Facebook, etc.) are forced to take a more active role in censoring content, will advertisers care? Again, Europe may be very different. We can presume that any countermeasures will be only partly effective; the problem probably does not have a good technical solution.

3) Malware, extortion, etc. aimed at companies. Will this “poison the well” in general?

4) Malware, extortion, doxing, etc. aimed at Internet of Things users, such as owners of household thermostats, security cameras, and cars. Will this cause a backlash against sellers of these systems, or will people accept it as the “new normal”? So far, people have seemed willing to bet that it won’t affect them personally, but will that change? For example, what will happen when auto accidents are caused by deliberate but unknown parties who advertise their success? Or when someone records all conversations within reach of the Alexa box in the living room?

Each of these scenarios has at least a 20% chance of becoming common. At a minimum, they will require more spending on defenses. Will any become large enough to suppress entire applications of these new technologies?
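A back-of-envelope check on what those odds imply, assuming (heroically) that the four scenarios are independent:

```python
# If each of the 4 scenarios independently has a 20% chance of becoming
# common, the chance that at least one does is better than even.
p_each = 0.20        # assumed per-scenario probability (my guess above)
n_scenarios = 4
p_at_least_one = 1 - (1 - p_each) ** n_scenarios
print(f"P(at least one) = {p_at_least_one:.0%}")  # -> 59%
```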

I have not said anything about employment and income distribution. They may change for the worse over the next 20 years, but the causes and solutions won’t be simple, and I doubt that political pressure will become strong enough to alter technology evolution.

Reality versus belief, and the American right

Warning: this post is entirely opinion about American politics.

Bret Stephens had an interesting op-ed in the NY Times recently. On first reading, it was great. Then I went through the comments, and realized it was quite one-sided. (He is a conservative, recently arrived from the Wall Street Journal.) So I wrote the following letter to the editor.

In his column of Sept. 24, Mr. Stephens’s sharp eye noticed, and his sharp tongue castigated, only the left’s fundamental error in today’s discussions: judging arguments based on the speaker’s identity. But even more destructive is the fundamental error found primarily on the right: judging arguments based on the desire to believe them. That Congressman R believes something, no matter how strongly, does not make it true, nor a valid basis for setting policy.

I am at a university that emphasizes science and engineering, and teaches little about Mr. Stephens’s Great Books. But we teach our students that objective reality exists, and that it matters. We base our arguments on empirical evidence. And if evidence is insufficient, we look for more.

Here are a few examples of facts that are somehow viewed as controversial: making contraception and information more available to teenagers reduces unwanted pregnancies and abortions (see Colorado for a large-scale proof). Vaccinations reduce disease. Cutting income taxes on the rich will do little to stimulate the economy when the economy is near full employment. Pumping gases into the atmosphere creates a “greenhouse effect.” There is room to disagree about what actions to take as a result of these facts, but not about the facts themselves.

I have argued elsewhere that America (and other parts of the world) is retreating from Reason back to Faith, reversing the Enlightenment of the 1600s. If this continues, the consequences for our country will be dire. But that is a longer discussion.