Semiconductors get old, and eventually die. It’s getting worse.

I once assumed that semiconductors lasted effectively forever. But even electronic devices wear out. How do semiconductor companies plan for aging?

There has never been a really good solution, and according to this article, the problems are getting worse. For example, electronics in cars continue to get more complex (and more safety critical). But cars are used in very different ways after being sold, and in very different climates. This makes it impossible to predict how fast a particular car will age.

Electromigration
Electromigration is one form of aging. Credit: JoupYoup, own work, CC BY-SA 4.0.

When a device runs constantly under heavy load, particular stress patterns exaggerate aging. An Uber-like vehicle, whether fully automated or not, has a completely different use model than the standard family car, which stays parked much of the time even though the electronics remain somewhat alive. The two aging models are completely different, and you can’t guard-band both cases correctly.
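A rough way to see why one guard-band can’t cover both use models is Black’s equation for electromigration lifetime, MTTF = A · J⁻ⁿ · exp(Ea/kT). The sketch below compares two duty cycles; the constants, temperatures, and current density are illustrative assumptions, not data from any real process node or vehicle:

```python
import math

# Black's equation for electromigration mean time to failure:
#   MTTF = A * J**(-n) * exp(Ea / (k*T))
# A, n, Ea, and J below are made-up illustrative values.
K_BOLTZMANN = 8.617e-5  # Boltzmann constant, eV/K

def mttf(j_amps_per_cm2, temp_k, a=1e10, n=2.0, ea_ev=0.9):
    """Mean time to failure under constant current density and temperature."""
    return a * j_amps_per_cm2 ** (-n) * math.exp(ea_ev / (K_BOLTZMANN * temp_k))

# A ride-share vehicle running hot nearly all day vs. a family car
# that mostly sits parked: same chip, very different stress.
mttf_taxi   = mttf(1e6, 358.0)  # ~85 C junction, heavy utilization
mttf_family = mttf(1e6, 318.0)  # ~45 C junction, mostly idle

print(f"lifetime ratio, family car vs. taxi: {mttf_family / mttf_taxi:.0f}x")
```

Even this toy model shows an order-of-magnitude lifetime gap between the two use models, which is the guard-banding dilemma: margin sized for the taxi is wasteful for the family car, and margin sized for the family car fails the taxi.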

Memorial Sloan Kettering’s Season of Turmoil – The New York Times

America’s health care research system has many problems. The overall result is poor return on the money spent. The lure of big $ is a factor in many of them. Two specific problems:

  • What gets research $ (including from Federal $) is heavily driven by profit potential, not medical potential. Ideas that can’t be patented get little research.
  • Academic career incentives distort both topics of research (what will corporate sponsors pay for?) and publication. The “replicability crisis” is not just in social sciences.

This NYT article illustrates one way that drug companies indirectly manipulate research agendas: huge payments to influential researchers, in this case Board of Directors fees. Large speaking fees for nominal work are another common mechanism. Here are some others:

Flacking for Big Pharma

Drugmakers don’t just compromise doctors; they also undermine top medical journals and skew medical research. By Harriet A. Washington | June 3, 2011

I could go on and on about this problem, partly because I live in a biotech town and work at a biotech university. I have posted about this elsewhere in this blog. But since it’s not an area where I am doing research, I will restrain myself.

Rescuing a medical treatment from failure in a clinical trial by using Post Hoc Bayesian Analysis

How can researchers maximize learning from experiments, especially from very expensive experiments such as clinical trials? This article shows how a Bayesian analysis of the data would have been much more informative, and likely would have saved a useful new technique for dealing with ARDS.

I am a big supporter of Bayesian methods, which will become even more important/useful with machine learning. But a colleague, Dr. Nick Eubank, pointed out that the data could also have been re-analyzed using frequentist statistics. The problem with the original analysis was not primarily that it used frequentist statistics. Rather, it was that it set a fixed (and rather ambitious) threshold for defining success. That threshold was probably unattainable. But the clinical trial could still have been “saved,” even by conventional statistics.

Source: Extracorporeal Membrane Oxygenation for Severe Acute Respiratory Distress Syndrome and Posterior Probability of Mortality Benefit in a Post Hoc Bayesian Analysis of a Randomized Clinical Trial | JAMA

Here is a draft of a letter to the editor on this subject. Apologies for the very academic tone – that’s what we do for academic journals!

The study analyzed in their article was shut down prematurely because it was unlikely to attain the target level of performance. Their paper shows that this might have been avoided, and the technique shown to have benefit, if their analysis had been performed before terminating the trial. A related analysis could usefully have been done within the frequentist statistical framework. According to their Table 2, a frequentist analysis (equivalent to an uninformative prior) would have suggested a 96% chance that the treatment was beneficial, and an 85% chance that it had RR < 0.9.

The original study appeared to be failing not solely because it was analyzed with frequentist methods, but also because the target for “success” was set at a demanding threshold, namely RR < 0.67. Thus, although the full Bayesian analysis of the article was more informative, even frequentist statistics can be useful for investigating the implications of different definitions of success.

Credit for this observation goes to Nick. I will ask him for permission to include one of his emails to me on this subject.

450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” 

Disclosure in the United Kingdom has sparked a heated debate about the health impacts of an errant algorithm
By Robert N. Charette

Source: 450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” – IEEE Spectrum

It sounds like what we used to call a “bug” to me. I guess bugs are now promoted to “algorithm failures”. 

Nearly half a million elderly women in the United Kingdom missed mammography exams because of a scheduling error caused by one incorrect computer algorithm, and several hundred of those women may have died early as a result. Last week, the U.K. Health Minister Jeremy Hunt announced that an independent inquiry had been launched to determine how a “computer algorithm failure” stretching back to 2009 caused some 450,000 patients in England between the ages of 68 to 71 to not be invited for their final breast cancer screenings.

The errant algorithm was in the National Health Service’s (NHS) breast cancer screening scheduling software, and remained undiscovered for nine years.

“Tragically, there are likely to be some people in this group who would have been alive today if the failure had not happened,” Hunt went on to tell Parliament. He added that based on statistical modeling, the number who may have died prematurely as a result was estimated to be between 135 and 270 women.


Elon Musk keeps making the same mistakes at Tesla

My friend at NYU, Prof. Melissa Schilling (thanks, Oscar), and I have a running debate about Tesla. She emphasizes how smart and genuinely innovative Musk is. I emphasize how he seems to treat Tesla like another R&D-driven company – but it is making a very different product. Melissa is quoted in this article:

Tesla risks a blowout as problems mount, but fans keep the hype machine in overdrive

Case in point: Tesla sent workers home, with no pay, for the production shutdown last week. My discussion is after the break.

During the pause, workers can choose to use vacation days or stay home without pay. This is the second such temporary shutdown in three months for a vehicle that’s already significantly behind schedule.

Source: Tesla Is Temporarily Shutting Down Model 3 Production. Again.

Continue reading

Tesla employees say Gigafactory problems worse than known

By now, Tesla’s manufacturing problems are completely predictable. See my explanation after the break. At least Wall St. is starting to catch on.
Also in this article: Tesla’s gigafactory for batteries has very similar problems. That surprises me; I thought they had competent allies helping with batteries.

But one engineer who works there cautioned that the automated lines still can’t run at full capacity. “There’s no redundancy, so when one thing goes wrong, everything shuts down. And what’s really concerning are the quality issues.”

Source: Tesla employees say Gigafactory problems worse than known

Continue reading