I have long argued that the FDA has an incentive to delay the introduction of new drugs because approving a bad drug (Type I error) has more severe consequences for the FDA than does failing to approve a good drug (Type II […]
Source: Is the FDA Too Conservative or Too Aggressive?
My take: this paper by Vahid Montazerhodjat and Andrew Lo is interesting, but it only looks at one issue, and there are many other problems that make overapproval more likely. There are many biases in the drug pipeline and FDA approval process, most of which are heavily in favor of approving drugs that do nothing (and yet, still have side effects). To mention one of many, the population used to test drugs is younger, healthier, more homogeneous, and more compliant than the population that ends up actually taking the drug. A second bias is that the testing process screens out people who have major side effects – they stop taking the drug, and are dropped from the sample (and from the statistical analysis at the end). So we only see the people with moderate or no side effects. Both of these problems lead to biases, which better statistical methods cannot remove.
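The dropout bias is easy to see in a toy simulation. The numbers below (a 30% true side-effect rate, 60% of affected patients dropping out) are made up for illustration, not taken from any real trial:

```python
import random

random.seed(0)

N = 10_000
TRUE_SIDE_EFFECT_RATE = 0.30   # assumed: 30% of patients get the side effect
DROPOUT_IF_AFFECTED = 0.60     # assumed: 60% of affected patients quit the trial

completers = 0
completers_affected = 0
for _ in range(N):
    affected = random.random() < TRUE_SIDE_EFFECT_RATE
    dropped = affected and random.random() < DROPOUT_IF_AFFECTED
    if not dropped:                      # only completers reach the final analysis
        completers += 1
        completers_affected += affected

observed_rate = completers_affected / completers
print(f"true rate:     {TRUE_SIDE_EFFECT_RATE:.2f}")
print(f"observed rate: {observed_rate:.2f}")   # well below the true rate
```

With these assumptions the analysis sees a side-effect rate of roughly 15% instead of 30%, and no amount of statistical sophistication applied to the completers alone can recover the missing half.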
The paper is interesting, but it is working from an idealized model of the drug research process, and I would not take its quantitative results seriously. The basic logic seems sound, though: there should be different approval standards for different diseases.
When the doctor’s away, the patient is more likely to survive | Ars Technica.
Very surprising. When cardiologists are away from the hospital, deaths after heart failure or cardiac arrest decline. I’ll probably use this in my course this Spring. (Or perhaps in both courses: Big Data, and Operations Quality in Healthcare.)
No, a study did not link GM crops to 22 diseases.
And a candidate for worst graph of the year, appearing to show that deaths from a certain class of diseases grew in parallel with some farming trends (Figure 16 in the article, which is at http://www.organic-systems.org/journal/92/JOS_Volume-9_Number-2_Nov_2014-Swanson-et-al.pdf ). Any two steadily increasing time series can be plotted so that they lie approximately on top of each other, if you distort the scales enough. Other “causes” they could have plotted, with approximately the same results: cell phones per capita, the percentage of cars on the road with ABS brakes, and (for all I know) average campaign spending per Congressional race.
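Here is the trick in miniature: two series that have nothing to do with each other, each just drifting upward over time, end up almost perfectly correlated. The series names and numbers below are invented for the demonstration:

```python
import random
from statistics import mean, stdev

random.seed(1)

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

years = range(1990, 2015)
# Two causally unrelated series that both trend upward, plus some noise
herbicide_use = [10 + 3.0 * t + random.gauss(0, 5) for t in range(len(years))]
cell_phones   = [2 + 0.5 * t + random.gauss(0, 1) for t in range(len(years))]

r = pearson(herbicide_use, cell_phones)
print(f"correlation: {r:.3f}")   # close to 1, with no causal link at all
```

Rescale the two axes so the lines overlap and you have "Figure 16" for any pair of trends you like.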
Under-reporting of clinical trials has been a problem for decades (if not longer). Only in the last few years has the medical community realized the pernicious effects this has on our knowledge about “what works” in medicine. If “bad” results don’t get published, all kinds of problems ensue, such as overly optimistic views of new drugs, repetition of expensive and potentially dangerous research, and general waste of money. Since the NIH is such a big funder of medical research, this affects taxpayers too!
In any case, the NIH continues its slow (but steady?) crackdown on this issue. They are even threatening to cut off funding for researchers who don’t make their results available! (Of course a lot of research is funded by pharmaceutical companies, so this is hardly a comprehensive threat.)
I track this kind of thing because of my interest in “How societies learn” about technology. Forgetting and ignoring are powerful forces in retarding learning.
JAMA. Published online November 19, 2014. doi:10.1001/jama.2014.10716
Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts – IEEE Spectrum.
I agree 100% with the following discussion of big data learning methods, which is excerpted from an interview. Big Data is still in the ascending phase of the hype cycle, and its abilities are being way over-promised. In addition, there is a great shortage of expertise. Even people who take my course on the subject are only learning “enough to be dangerous.” It will take them months more of applied work to begin to develop reasonable instincts, and appropriate skepticism.
As we are now realizing, standard econometrics/regression analysis has many of the same problems, such as publication biases and excess re-use of data. And one can argue that its effects, e.g. in health care, have also been overblown to the point of being dangerous. (In particular, the randomized controlled trials approach to evaluating pharmaceuticals is much too optimistic about evaluating side effects. I’ve posted messages about this before.) The important difference is that now the popular press has adopted Big Data as its miracle du jour.
One result is excess credulity. On the NPR Marketplace program recently, they had a breathless story about The Weather Channel, and its ability to forecast amazing things using big data. The specific example was that certain weather conditions in Miami in January predict raspberry sales. What nonsense. How many Januaries of raspberry sales can they be basing that relationship on? 3? 10?
Why Big Data Could Be a Big Fail [this is the headline that the interviewee objected to – see below]
Spectrum: If we could turn now to the subject of big data, a theme that runs through your remarks is that there is a certain fool’s gold element to our current obsession with it. For example, you’ve predicted that society is about to experience an epidemic of false positives coming out of big-data projects.
Michael Jordan: When you have large amounts of data, your appetite for hypotheses tends to get even larger. And if it’s growing faster than the statistical strength of the data, then many of your inferences are likely to be false. They are likely to be white noise.
Spectrum: How so?
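Jordan’s “epidemic of false positives” is easy to demonstrate. Test enough hypotheses on pure noise and a predictable fraction will look significant; the counts below are illustrative assumptions, not data from any real project:

```python
import math
import random

random.seed(2)

N_HYPOTHESES = 1000   # assumed: 1000 candidate "patterns" mined from a dataset
N_SAMPLES = 50        # observations behind each pattern
Z_CUTOFF = 1.96       # "significant at the 5% level"

false_positives = 0
for _ in range(N_HYPOTHESES):
    # Every hypothesis is null: the data is pure noise with mean 0
    xs = [random.gauss(0, 1) for _ in range(N_SAMPLES)]
    z = (sum(xs) / N_SAMPLES) * math.sqrt(N_SAMPLES)   # z-score of the sample mean
    if abs(z) > Z_CUTOFF:
        false_positives += 1

print(f"{false_positives} of {N_HYPOTHESES} null hypotheses look significant")
```

Roughly 5% of them, around 50 “discoveries,” survive the significance filter even though there is nothing to discover. Mine more hypotheses than the data can support and the discoveries are mostly white noise, exactly as Jordan says.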
Time to debunk another widely covered press story about wonderful new inventions coming from a tech giant. Ars Technica had one of many articles about Google’s “announcement” of a blood glucose sensor in a contact lens. The discussion after the article is good, as often happens with Ars. Here’s my quick explanation of why the concept will fail. Unfortunately.
Non-invasive glucose testing is the perennial “pot of gold at the end of the rainbow.” Google is not the first to try using tears; the others have failed, and Google will too. They say it is “5 years away,” which is equivalent to saying “We have not yet tested it on real diabetics.”
The problem is basically that tears won’t track blood glucose levels closely. Tears are secreted by the lacrimal gland. I’ve never studied it, but the composition of its secretion is sure to depend on a multitude of variables. (Think: sweat, saliva, etc.) Even if a relationship exists and can be quantified “on average,” there will be lags.
It’s possible that a device like this could supplement other measurement systems. But nothing will be as good as actual blood measurements. Therefore finger sticks will always be needed for calibration. The best realistic case is that a contact lens device could serve as an early warning; but finger sticks will still be needed for validation before taking any action.
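The lag problem alone is enough to sink an early-warning use. Here is a deterministic sketch; the 15-minute lag and the rate of the glucose drop are assumptions for illustration, not measured properties of tear glucose:

```python
HYPO_THRESHOLD = 70   # mg/dL, common hypoglycemia alarm level
LAG_MINUTES = 15      # assumed: tear glucose trails blood glucose by 15 minutes

# Blood glucose falling steadily from 120 mg/dL at 2 mg/dL per minute
blood = [120 - 2 * t for t in range(46)]                       # minutes 0..45
tears = [blood[max(0, t - LAG_MINUTES)] for t in range(46)]    # lagged proxy

blood_alarm = next(t for t, g in enumerate(blood) if g < HYPO_THRESHOLD)
tear_alarm = next(t for t, g in enumerate(tears) if g < HYPO_THRESHOLD)
print(f"blood crosses {HYPO_THRESHOLD} mg/dL at minute {blood_alarm}")
print(f"tear sensor alarms at minute {tear_alarm} ({tear_alarm - blood_alarm} min late)")
```

Under these assumptions the lens alarms a full 15 minutes after the patient is already hypoglycemic; add secretion variability and noise on top of the lag and it gets worse, which is why the finger stick stays in the picture.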
via Google introduces smart contact lens project to measure glucose levels | Ars Technica.
This column in Scientific American from a 30-year veteran of science journalism has some good perspective on the ongoing controversy about non-replicability of so many scientific results. I wish I knew a systemic solution.
Discussing his findings in Scientific American two years ago, Ioannidis writes: “False positives and exaggerated results in peer-reviewed scientific studies have reached epidemic proportions in recent years. The problem is rampant in economics, the social sciences and even the natural sciences, but it is particularly egregious in biomedicine.”
A Dig Through Old Files Reminds Me Why I’m So Critical of Science — blogs.scientificamerican.com