The First Smart Air Quality Tracker?

It is now almost 50 years since the first microprocessor, but it continues to revolutionize new areas. (The first MPU was the Intel 4004, in 1971, which Intel designed for a calculator company!) In concert with Moore’s Law and now-ubiquitous two-way wireless data transmission (thanks, Qualcomm!), smartphones have become a basic building block of many products.

A companion to explain what’s in your air, anywhere. Flow is the intelligent device that fits into your daily life and helps you make the best air quality choices for yourself, your family, your community.

Source: Flow, by Plume Labs | The First Smart Air Quality Tracker

Here is a quick review I wrote of the “Flow” pollution meter after using it for a few months. I wrote it as a comment on a blog post by Meredith Fowlie about monitoring the effects of fires in Northern California.

I started with a particulate meter (a handheld model, not PurpleAir). Now I also have a Plume Labs unit running full time. It measures PM2.5, as well as PM10, NO2, and volatile organic compounds (smog components). https://plumelabs.com/en/flow/
After a few months of use, I am impressed by the hardware. It shows very sharp peaks when we are cooking or something else disturbs indoor air. Sensitivity and consistency are both high.
Another advantage is that it is very portable. It’s actually designed to be worn on your belt while commuting, to discover local hot spots. All data is GPS flagged if you turn that feature on. I think their hope is to build time/location history for many major cities, using crowdsourced data.

Accuracy is harder to assess. The PM2.5 readings are much lower than on my other meter, and are usually below 5. We keep it in our bedroom, and although we run a Roomba frequently, readings above 20 happen less than once a week; I am skeptical of such low numbers. But as usual with these devices, there is no way to calibrate it against other information, because outside meters (as discussed in the article) vary so much.
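If I could co-locate the two meters and log paired readings, even a simple linear fit would at least put them on a common scale. Here is a minimal sketch in Python; the readings below are made up for illustration, not my actual data.

```python
import numpy as np

# Paired PM2.5 readings from two co-located meters (made-up values, µg/m³).
flow = np.array([2.0, 3.5, 4.0, 6.5, 9.0, 14.0])        # Plume Labs Flow
handheld = np.array([5.0, 8.0, 9.5, 15.0, 21.0, 33.0])  # handheld meter

# Least-squares linear fit: handheld ≈ slope * flow + intercept.
slope, intercept = np.polyfit(flow, handheld, 1)
print(f"handheld ≈ {slope:.2f} * flow + {intercept:.2f}")

def to_handheld_scale(flow_reading):
    """Rescale a Flow reading onto the handheld meter's scale."""
    return slope * flow_reading + intercept

print(to_handheld_scale(5.0))
```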

The software that goes on your phone is “slick,” but it presents the information in a very limited format. It is optimized for use by commuters/runners. If you want to look at your data differently, such as over multiple days, you are out of luck.
Price is about $180. I compared alternatives for quite a while before selecting this one. It is considerably less expensive than other sensors that go beyond particulates.

Modern smartphones now allow revolutionary advances in portable measurements and in citizen science. They have huge computational power with highly standardized interfaces for application-specific hardware, such as pollution monitors, to link to. Instrument makers now need nothing more than a Bluetooth radio to give their devices graphical displays, real-time tracking and alerting, location flagging, months of data storage, and many other features that used to add hundreds or thousands of dollars to instrument prices.
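To illustrate how little the instrument itself has to do, here is a minimal sketch of the kind of payload decoding a companion app might perform once readings arrive over Bluetooth. The packet layout, field order, and units are my assumptions for illustration, not Plume Labs’ actual protocol.

```python
import struct
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AirQualitySample:
    timestamp: datetime
    pm2_5: float  # µg/m³
    pm10: float   # µg/m³
    no2: float    # ppb
    voc: float    # ppb
    lat: Optional[float] = None  # filled in by the phone's GPS, if enabled
    lon: Optional[float] = None

def parse_packet(payload: bytes) -> AirQualitySample:
    """Decode a hypothetical 16-byte packet: four little-endian float32
    values in the order PM2.5, PM10, NO2, VOC. The phone supplies the
    timestamp (and, optionally, the location) itself."""
    pm2_5, pm10, no2, voc = struct.unpack("<4f", payload)
    return AirQualitySample(datetime.now(timezone.utc), pm2_5, pm10, no2, voc)

# Example: a packet as it might arrive from the Bluetooth stack.
raw = struct.pack("<4f", 4.2, 7.9, 12.0, 88.0)
print(parse_packet(raw))
```

Everything else on the list above, including graphing, storage, and alerting, is ordinary phone software.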

Pollution measured over the course of a day as the owner travels. This is the display shown on my phone.

Some U.S. police departments dump body-camera programs amid high costs – The Washington Post

Smaller departments that struggle with the cost of equipment and storage of data are ending or suspending programs aimed at transparency and accountability.

Source: Some U.S. police departments dump body-camera programs amid high costs – The Washington Post

My comment: this was predictable. Video data gets big very quickly, as the back-of-envelope calculation below shows. See my discussion from three years ago.
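The numbers here are round assumptions of mine, not figures from the article, but they show how fast the storage bill grows for even a small department.

```python
# Back-of-envelope storage estimate for a small department's body cameras.
# Every figure below is an assumption for illustration only.
officers = 50
hours_per_shift = 8
shifts_per_year = 250
gb_per_hour = 1.5                 # rough figure for typical body-camera video
retention_years = 1
storage_cost_per_gb_month = 0.02  # assumed cloud storage price, $/GB-month

total_gb = officers * hours_per_shift * shifts_per_year * gb_per_hour * retention_years
annual_cost = total_gb * storage_cost_per_gb_month * 12

print(f"Video retained: {total_gb / 1000:.0f} TB")
print(f"Storage alone: ${annual_cost:,.0f} per year, before staff time and software")
```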

Would ‘explainable AI’ force companies to give away too much? Not really.

Here is an argument for allowing companies to maintain a lot of secrecy about how their data mining (AI) models work. The claim is that revealing information will put companies at a competitive disadvantage. Sorry, that is not enough of a reason. And it’s not actually true, as far as I can tell.

The first consideration when discussing transparency in AI should be data, the fuel that powers the algorithms. Because data is the foundation for all AI, it is valid to want to know where the data…

Source: The problem with ‘explainable AI’ | TechCrunch

Here is my response.

Your questions are good ones. But you seem to think that explainability cannot be achieved except by giving away all the work that led to the AI system. That is a straw man. Take deep learning systems, for example. The IP includes:
1) The training set of data
2) The core architecture of the network (number of layers, etc.)
3) The training procedures over time, including all the testing and tuning that went on.
4) The resulting system (weights, filters, transforms, etc.).
5) Higher-level “explanations,” whatever those may be. (For me, these might be a reduced-form model that is approximately linear and can be interpreted; see the sketch after this list.)
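To show what I mean by a reduced-form explanation in item 5, here is a minimal sketch: fit a black-box model, then fit an interpretable linear surrogate to the black box’s predictions. The synthetic data and model choices are arbitrary stand-ins, not anyone’s production system.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Arbitrary synthetic data standing in for a proprietary training set.
X, y = make_regression(n_samples=2000, n_features=6, noise=10.0, random_state=0)

# The "black box": a nonlinear model whose internals are hard to read.
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# The reduced-form explanation: a linear model fit to the black box's
# *predictions*, not to the original labels.
surrogate = LinearRegression().fit(X, black_box.predict(X))

print("Share of the black box's behavior the linear surrogate captures:",
      round(r2_score(black_box.predict(X), surrogate.predict(X)), 3))
print("Interpretable coefficients:", surrogate.coef_.round(2))
```

Revealing only something like the surrogate explains behavior without handing over items 1 through 4.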

Revealing even #4 would be somewhat useful to competitors, but not decisive. The original developers will be able to update and refine their model, while people with only #4 will not. The same for any of the other elements.

I suspect the main fear about revealing this, at least among for-profit companies, is that it opens them up to second-guessing. For example, what do you want to bet that the systems now being used to predict recidivism have bugs? Someone with enough expertise and $ might be able to make intelligent guesses about bugs, although I don’t see how they could prove them.
Sure, such criticism would make companies more cautious, and cost them money. And big companies might be better able to hide behind layers of lawyers and obfuscation. But those hypothetical problems are quite a distance in the future. Society should, and deserves to, do more to figure out where these systems have problems. Let’s allow some experiments, and even some different laws in different jurisdictions, to go forward for a few years. Preventing this just means trusting the self-appointed experts to do what is in everyone else’s best interests. We know that works poorly!

450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” 

Disclosure in the United Kingdom has sparked a heated debate about the health impacts of an errant algorithm
By Robert N. Charette

Source: 450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” – IEEE Spectrum

It sounds like what we used to call a “bug” to me. I guess bugs are now promoted to “algorithm failures”. 

Nearly half a million elderly women in the United Kingdom missed mammography exams because of a scheduling error caused by one incorrect computer algorithm, and several hundred of those women may have died early as a result. Last week, the U.K. Health Minister Jeremy Hunt announced that an independent inquiry had been launched to determine how a “computer algorithm failure” stretching back to 2009 caused some 450,000 patients in England between the ages of 68 to 71 to not be invited for their final breast cancer screenings.

The errant algorithm was in the National Health Service’s (NHS) breast cancer screening scheduling software, and remained undiscovered for nine years.

“Tragically, there are likely to be some people in this group who would have been alive today if the failure had not happened,” Hunt went on to tell Parliament. He added that based on statistical modeling, the number who may have died prematurely as a result was estimated to be between 135 and 270 women.

Source: 450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” – IEEE Spectrum
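To make “bug” concrete: off-by-one errors in eligibility cutoffs are a classic way an age band silently falls through the cracks. The sketch below is purely hypothetical; it is not the actual NHS scheduling logic, and the cutoff values are invented for illustration.

```python
def due_final_screening(age: int, upper_cutoff: int = 71) -> bool:
    """Hypothetical eligibility check for a final screening invitation.

    Intended rule (invented for illustration): invite women up to and
    including the upper cutoff age. Using '<' instead of '<=' silently
    drops the oldest eligible band -- the classic off-by-one.
    """
    return 68 <= age < upper_cutoff  # BUG: should be 'age <= upper_cutoff'

# Ages 68-70 are invited; 71-year-olds are silently skipped.
for age in range(67, 73):
    print(age, due_final_screening(age))
```

A mistake like this can sit in production for years because nothing crashes; people simply never receive a letter.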

Car repossession: Big Data + AI tools are not value-neutral

Does recent technology inherently favor capitalists over workers?

There is a lot of concern about AI potentially causing massive unemployment. The question of whether “this time will be different” is still open. But another insidious effect is gaining speed: putting tools in the hands of large companies that make it more expensive and more oppressive to run into financial trouble. In essence, it is becoming harder to live on the edges of “The System.”

  • Cars with even one late payment can be spotted, and repossessed, faster. “Business has more than doubled since 2014….”  This is during a period of ostensible economic growth.
  • “Even with the rising deployment of remote engine cutoffs and GPS locators in cars, repo agencies remain dominant. … Agents are finding repos they never would have a few years ago.”
  • “So much of America is just a heartbeat away from a repossession — even good people, decent people who aren’t deadbeats,” said Patrick Altes, a veteran agent in Daytona Beach, Fla. “It seems like a different environment than it’s ever been.”
  • “The company’s goal is to capture every plate in Ohio and use that information to reveal patterns. A plate shot outside an apartment at 5 a.m. tells you that’s probably where the driver spends the night, no matter their listed home address. So when a repo order comes in for a car, the agent already knows where to look.”
Source: The surprising return of the repo man – The Washington Post
