Semiconductors get old, and eventually die. It’s getting worse.

How do semiconductor companies plan for aging? There has never been a truly efficient solution, and according to the article quoted below, the problems are getting worse. For example, electronics in cars continue to get more complex (and more safety-critical). But cars are used in very different ways after being sold, and in very different climates.

Electromigration

Electromigration is one form of aging. Image credit: JoupYoup, CC BY-SA 4.0.

When a device runs constantly under heavy load, particular stress patterns accelerate aging. An Uber-like vehicle, whether fully automated or not, has a completely different use model from the standard family car, which actually stays parked much of the time (even though the electronics are always somewhat alive). The two cases have completely different aging models, and you can’t guard-band both of them correctly.

Aging is dealt with by heuristics, which typically add a “safety margin” to designs. But these heuristics are not accurate, and they leave money (chip area = cost per chip) on the table.

Moreover, margin typically isn’t just one thing. It’s actually a stack. “The foundry, with the models that they give us, includes a little bit of padding to cover themselves,” said ANSYS’ Geada. “And then the library vendor adds a little bit of padding, and nobody talks about what that is, but everybody adds up this stack of margin along the way.”

Source: Circuit Aging Becoming A Critical Consideration 
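To make the “stack of margin” concrete, here is a minimal sketch in Python. The padding percentages are invented for illustration; the article does not quantify them.

```python
# Illustrative only: the padding numbers are made up, not from the article.
# Each player in the design flow adds its own margin "to cover themselves."
margins = {
    "foundry device models": 0.03,  # padding inside the transistor models
    "library vendor":        0.02,  # padding in cell characterization
    "IP vendor":             0.02,  # padding on delivered blocks
    "design team (aging)":   0.05,  # guard-band for worst-case aging
}

derate = 1.0
for stage, pad in margins.items():
    derate *= 1.0 + pad  # each margin is applied on top of the others

print(f"Combined derate: {derate:.3f}")  # ~1.125, i.e. ~12.5% total margin
```

In this made-up example the chip must be designed about 12.5% faster (or larger) than the nominal requirement, even though no single party added more than 5%. That compounding is the money left on the table.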

But of course, the semiconductor industry has been dealing with emerging challenges like this for its entire existence. Each new problem starts at a low stage of knowledge, beginning with Stage 0 (nobody knows the problem exists) and usually ending at about Stage 6.

The First Smart Air Quality Tracker?

It is now almost 50 years since the first microprocessor, but microprocessors continue to revolutionize new areas. (The first MPU was the Intel 4004, in 1971, which Intel designed for a calculator company!) In concert with Moore’s Law and now-ubiquitous two-way wireless data transmission (thanks, Qualcomm!), smartphones have become a basic building block of many products.

A companion to explain what’s in your air, anywhere. Flow is the intelligent device that fits into your daily life and helps you make the best air quality choices for yourself, your family, your community.

Source: Flow, by Plume Labs | The First Smart Air Quality Tracker

Here is a quick review I wrote of the “Flow” pollution meter, after using it for a few months. I wrote it as a comment on a blog post by Meredith Fowlie about monitoring the effects of fires in Northern California.

I started with a particulate meter (a handheld model, not PurpleAir). Now I also have a Plume Labs unit running full time. It measures PM2.5, but also PM10, NO2, and volatile organic compounds (smog components). https://plumelabs.com/en/flow/
After a few months of use, I am impressed by the hardware. It shows very sharp peaks when we are cooking or something else disturbs indoor air. Sensitivity and consistency are both high.
Another advantage is that it is very portable. It’s actually designed to be worn on your belt while commuting, to discover local hot spots. All data is GPS flagged if you turn that feature on. I think their hope is to build time/location history for many major cities, using crowdsourced data.

Accuracy is harder to assess. The PM2.5 readings are much lower than on my other meter, and are usually below 5. We keep it in our bedroom, and since we run a Roomba frequently (which stirs up dust), I am skeptical of such low numbers. Readings above 20 happen less than once a week. But as usual with these devices, there is no way to calibrate it against other information, because outside meters (as discussed in the article) vary so much.

The software that goes on your phone is “slick,” but it presents the information in a very limited format. It is optimized for use by commuters/runners. If you want to look at your data differently, such as over multiple days, you are out of luck.
Price is about $180. I compared alternatives for quite a while before selecting this one. It is considerably less expensive than other sensors that go beyond particulates.

Modern smartphones now allow revolutionary advances in portable measurements and in citizen science. They have huge computational power and highly standardized interfaces that application-specific hardware, such as pollution monitors, can link to. Instrument makers now need nothing more than a Bluetooth radio to give their devices graphical displays, real-time tracking and alerting, location flagging, months of data storage, and many other features that used to add hundreds or thousands of dollars to instrument prices.
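As a sketch of how little an instrument maker now has to build, here is roughly what reading such a device over Bluetooth Low Energy looks like from the computer side, using the Python bleak library. The device address, characteristic UUID, and data format are hypothetical placeholders; Flow’s actual protocol is its own.

```python
# Minimal BLE sensor-reading sketch (Python + bleak).
# The address, UUID, and 2-byte data format are hypothetical placeholders.
import asyncio
from bleak import BleakClient

SENSOR_ADDRESS = "AA:BB:CC:DD:EE:FF"                     # hypothetical
PM25_CHAR_UUID = "0000abcd-0000-1000-8000-00805f9b34fb"  # hypothetical

def on_reading(_sender, data: bytearray):
    # Assume the device streams PM2.5 in micrograms/m^3 as a 2-byte field.
    pm25 = int.from_bytes(data[:2], "little")
    print(f"PM2.5: {pm25} µg/m³")

async def main():
    async with BleakClient(SENSOR_ADDRESS) as client:
        await client.start_notify(PM25_CHAR_UUID, on_reading)
        await asyncio.sleep(60)  # collect one minute of readings
        await client.stop_notify(PM25_CHAR_UUID)

asyncio.run(main())
```

Everything else in the product (graphing, GPS tagging, storage, alerts) comes essentially free from the phone.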

Pollution measured over the course of a day as the owner travels. This is the display shown on my phone.

Memorial Sloan Kettering’s Season of Turmoil – The New York Times

America’s health care research system has many problems. The overall result is poor return on the money spent. The lure of big $ is a factor in many of them. Two specific problems:

  • What gets research $ (including from Federal $) is heavily driven by profit potential, not medical potential. Ideas that can’t be patented get little research.
  • Academic career incentives distort both topics of research (what will corporate sponsors pay for?) and publication. The “replicability crisis” is not just in social sciences.

This NYT article illustrates one way that drug companies indirectly manipulate research agendas: huge payments to influential researchers (in this article’s case, board of directors fees). Large speaking fees for nominal work are another common mechanism. Here are some others:

Flacking for Big Pharma

Drugmakers don’t just compromise doctors; they also undermine top medical journals and skew medical research. By Harriet A. Washington | June 3, 2011

I could go on and on about this problem, partly because I live in a biotech town and work at a biotech university. I have posted about this elsewhere in this blog. But since it’s not an area where I am doing research, I will restrain myself.

Scrivener 3 for Academic Writing: An In-Depth Review

I use a variety of specialized software for note taking, managing academic papers, etc. Rather than write my own review of Scrivener, I link to someone else’s here. I added a comment to her post, about using it with bibliography software.

Feel free to add links to your own favorite Scrivener reviews in the comments. There is a fair amount of overhead in learning Scrivener, but for longer projects (e.g., more than 10,000 words) it saves writers from “multiple version hell.”

In this in-depth Scrivener 3 review, I show you why Scrivener is the best word processor for academic writing. Unlike Word, Scrivener 3 keeps all your research and writing in one place. Its best features? The ability to drag and drop to reorganize your draft, split screen mode, word targets, and linguistic focus.

Source: Scrivener 3 for Academic Writing: An In-Depth Review

Rescuing a medical treatment from failure in a clinical trial by using post hoc Bayesian analysis

How can researchers maximize learning from experiments, especially from very expensive experiments such as clinical trials? This article shows how a Bayesian analysis of the data would have been much more informative, and likely would have saved a useful new technique for dealing with acute respiratory distress syndrome (ARDS).

I am a big supporter of Bayesian methods, which will become even more important/useful with machine learning. But a colleague, Dr. Nick Eubank, pointed out that the data could also have been re-analyzed using frequentist statistics. The problem with the original analysis was not primarily that they used frequentist statistics. Rather, it was that they set a fixed (and rather large) threshold for defining success. This threshold was probably unattainable. But the clinical trial could still have been “saved,” even by conventional statistics.

Source: Extracorporeal Membrane Oxygenation for Severe Acute Respiratory Distress Syndrome and Posterior Probability of Mortality Benefit in a Post Hoc Bayesian Analysis of a Randomized Clinical Trial | JAMA

Here is a draft of a letter to the editor on this subject. Apologies for the very academic tone – that’s what we do for academic journals!

The study analyzed in their article was shut down prematurely because it was unlikely to attain the target level of performance. Their paper shows that this might have been avoided, and the technique shown to have benefit, if their analysis had been performed before terminating the trial. A related analysis could usefully have been done within the frequentist statistical framework. According to their Table 2, a frequentist analysis (equivalent to a Bayesian analysis with an uninformative prior) would have suggested a 96% chance that the treatment was beneficial, and an 85% chance that it had RR < 0.9.

The reason the original study appeared to be failing was not solely that it was analyzed with frequentist methods. It was also that the target threshold for “success” was set very high, namely RR < 0.67. Thus, although the full Bayesian analysis of the article was more informative, even frequentist statistics can be used to investigate the implications of different definitions of success.

Credit for this observation goes to Nick. I will ask him for permission to include one of his emails to me on this subject.
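For readers who want to check the arithmetic behind those percentages, here is a short sketch of the normal-approximation calculation. The inputs (RR 0.76, 95% CI 0.55 to 1.04) are the trial’s reported values as I recall them; treat them as illustrative.

```python
# Sketch: probability that the true RR is below a threshold, under a flat
# prior (equivalently, a one-sided frequentist normal approximation).
from math import log
from scipy.stats import norm

rr_hat, ci_low, ci_high = 0.76, 0.55, 1.04  # reported RR and 95% CI

mu = log(rr_hat)                                # estimate on the log scale
se = (log(ci_high) - log(ci_low)) / (2 * 1.96)  # back out the standard error

for threshold in (1.0, 0.9, 0.67):
    p = norm.cdf((log(threshold) - mu) / se)
    print(f"P(RR < {threshold}): {p:.2f}")
# -> P(RR < 1.0) ~ 0.95, P(RR < 0.9) ~ 0.85, P(RR < 0.67) ~ 0.22
```

The last line is the heart of the problem: demanding RR < 0.67 gives the trial only about a one-in-five chance of “success,” even though the same data imply good odds of a real benefit.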

Some U.S. police departments dump body-camera programs amid high costs – The Washington Post

Smaller departments that struggle with the cost of equipment and storage of data are ending or suspending programs aimed at transparency and accountability.

Source: Some U.S. police departments dump body-camera programs amid high costs – The Washington Post

My comment: this was predictable. Video data gets big very quickly; see my discussion from three years ago, and the back-of-the-envelope calculation below.
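A rough sketch of why the costs bite; the bit rate, shift length, retention period, and storage price below are my assumptions, not figures from the article.

```python
# Back-of-envelope body-camera storage cost. All inputs are assumptions.
BITRATE_MBPS      = 5     # compressed 720p/1080p video, megabits per second
HOURS_PER_SHIFT   = 8
SHIFTS_PER_YEAR   = 250
RETENTION_MONTHS  = 12    # many evidence policies require a year or more
COST_PER_GB_MONTH = 0.10  # managed, evidence-grade storage, $/GB/month

gb_per_shift = BITRATE_MBPS / 8 * 3600 * HOURS_PER_SHIFT / 1000
gb_per_officer_year = gb_per_shift * SHIFTS_PER_YEAR
storage_cost = gb_per_officer_year * RETENTION_MONTHS * COST_PER_GB_MONTH

print(f"{gb_per_shift:.0f} GB per shift, "
      f"{gb_per_officer_year:,.0f} GB per officer per year, "
      f"~${storage_cost:,.0f} per officer per year to store")
# -> 18 GB per shift, 4,500 GB per officer-year, ~$5,400/officer/year
```

Under assumptions like these, storage alone quickly dwarfs the price of the camera, which is exactly the squeeze the article describes.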

SDG&E adopts residential Time of Use pricing – only 30 years late!

This picture looks exciting, doesn’t it? But the vertical axis is not to scale. In fact, the price changes are so small that, drawn to scale, they would be barely visible; see the next figure. For seven months a year, my prices will vary by only $0.02/kWh over the course of a week!

https://www.sdge.com/sites/default/files/TOU_summer.jpg

Source: When you use energy matters | San Diego Gas & Electric

Our local utility, San Diego Gas & Electric, just sent us a notice that we will be switching to Time of Use (TOU) pricing. I have no objections, BUT:

  1. TOU was innovative in the 1980s. But for any house with a smart meter, which we all have now, it has been dominated by real-time pricing and related dynamic-pricing schemes for at least 20 years.
  2. The price differentials are negligible – 1 cent per kWh, or about 3 percent! In the winter almost nobody will adjust their usage, or even keep track of it. At least the differences in the summer are substantially larger – as much as 30¢ per kWh. (See the quick calculation after this list.)
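A quick sketch of what those differentials are worth to one household; the amount of shiftable load is my assumption, not SDG&E’s.

```python
# Value of shifting load off-peak under the new TOU rates.
# The shiftable load is an assumption; the differentials are from the notice.
SHIFTABLE_KWH_PER_DAY = 5     # e.g., dishwasher, laundry, some EV charging
WINTER_DIFF = 0.01            # $/kWh, on-peak vs. off-peak (winter)
SUMMER_DIFF = 0.30            # $/kWh (summer)
WINTER_DAYS, SUMMER_DAYS = 212, 153  # roughly 7 and 5 months

winter = SHIFTABLE_KWH_PER_DAY * WINTER_DIFF * WINTER_DAYS
summer = SHIFTABLE_KWH_PER_DAY * SUMMER_DIFF * SUMMER_DAYS
print(f"Winter savings: ~${winter:.0f}/yr; summer savings: ~${summer:.0f}/yr")
# -> roughly $11/yr in winter vs. $230/yr in summer
```

Eleven dollars a year will not change anyone’s behavior; the summer differential at least might.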
