Ridiculous: “Why Oxfam is getting it wrong about poverty” – CapX

Bilge from a right-wing pseudo-intellectual. I’ve never heard of this guy before, but he seems to be an expert in deception rather than analysis.

As it’s Davos time, Oxfam has issued its traditional demand for a handout. Their wealth report this year informs us that a mere eight people have more wealth than the bottom 50 percent of the world’s population. This is entirely true of course. But Oxfam’s solution is that we should take it from the rich and […]

Source: Why Oxfam is getting it wrong about poverty – CapX

This is an example of deceptive reasoning. Here’s my quick analysis:

Worstall writes:

> The result is that entrepreneurs get to keep some 3 per cent of the value of their creations. The other 97 per cent of the value flows to us consumers out here.
>
> …
>
> Poverty exists and obviously we’d prefer that it didn’t. That’s why we need more rich people not fewer: because we need someone to create value for the rest of us to consume.

So he is equating “rich people” with “entrepreneurs” with “creators of value.” If only that were true. Although a small number of tech entrepreneurs get most of the publicity (Steve Jobs, Bill Gates, etc.), most giant corporate profits come from increasing market power and decreasing competition in many markets. For example, few outside the industry think that the “financial services” industry (e.g. investment banking) creates value comparable to the huge profits it makes.

He is also using misdirection to imply that Sam Walton’s heirs were the entrepreneurs who created Walmart’s economic value!

Finally, he keeps using a “3 percent” number to imply that “the masses” get 97 percent of increasing economic value, and the ultra-rich get only 3 percent. In fact, median income has been nearly stagnant for several decades. While overall GDP has doubled in the last 30 years, the extra income has gone almost entirely to the upper ten percent: median household income rose by only 8% over the same period.
Sources: http://www.multpl.com/us-gdp-inflation-adjusted/table
https://fred.stlouisfed.org/series/MEHOINUSA672N

A slightly different way of measuring. Compare black and red lines.


So the blog post is a dishonest piece of fallacious reasoning. Is this typical of the Adam Smith Institute, where he is apparently based? Is this the average reasoning level of right-wing intellectuals today?

By the way, I’m sure there are problems with Oxfam’s report – just not the ones he claims.

Why Elon Musk’s New Strategy Makes Sense — Really??

Claim: “The history of architectural innovation is on his side.”

Source: “Why Elon Musk’s New Strategy Makes Sense” by Joshua Gans

I’ve seen many people encouraging Musk’s integration of SolarCity with Tesla, but it strikes me as a weak move. There is some synergy between electric cars and home PV, but electric energy is mostly fungible. Bypassing the local utility with your own solar power would only be useful to an EV owner if the utility uses a really dumb pricing scheme for solar. (Admittedly, many utilities do exactly that.)

Second, his closing argument contradicts a lot of other analysis. I have not read Gans’s book, but he writes that:

> As I outline in my book, The Disruption Dilemma, the companies that have thrived in the face of architectural disruption of this kind are those that have kept all the parts close and in control rather than spread them out.

But “keeping all the parts under your control” rules out 99% of startups. It also seems historically incorrect. When IBM started the IBM PC revolution, it did so by surrendering control of almost everything, including the OS, processor, hard drive, and applications. IBM made all of these things for its mainframes, but it revolutionized the industry by NOT controlling them for personal computers. And this was certainly architectural disruption: the shift from a closed to an open architecture.

I’ll have to look at his book. Or ask my friend Liz Lyons down the hall, who was his student.

Can Motorola establish a new smartphone platform?

Every electronics company dreams of starting a new platform that other firms adopt and build on. It’s one of the few paths to riches in electronics (think: iPhone, Android, Blu-ray, CDMA, Steam, PlayStation). Check out the extensive writing by my friend Michael Cusumano and his colleague Annabelle Gawer, such as this article in Sloan Management Review (may be behind a paywall). Even if successful, though, the originator may have to make so many deals that it does not capture much rent. (Think: Android again, Blu-ray again, Wi-Fi, 4G, HDTV, etc.) And doing it successfully is very hard, even for large companies.

A related dream is modularity without sacrificing performance. This has been discussed for cell phones for many years, although in the past I have been skeptical. This article, though, sounds as if Motorola has a chance at doing both. Technically, it sounds like a good concept, if they can pull it off as well as the article suggests. Of course, technical excellence is never sufficient to become a standard. And Motorola, with all its ownership turmoil in recent years, is not very credible. But I’m heartened to think that the goal of a modular smartphone may be technically realistic, which would be great for consumers. (It’s important that Moto is not talking about creating a new operating system or app platform. Just look at Nokia and Microsoft to see how hard that is.)

Video version of the Wired article.


Econometrics versus statistical analysis

I teach a course on data mining, called Big Data Analytics. (See here for the course web site.) As I learned the field’s culture and methods, clear differences from econometrics emerged. Since my students are well trained in standard econometrics, these distinctions are important for guiding them.

One important difference, at least where I teach, is that econometrics formulates statistical problems as hypothesis tests. Students do not learn other tools, and therefore they have trouble recognizing problems where hypothesis tests are not the right approach. Example: given satellite images, distinguish urban from non-urban areas. This classification problem cannot be solved well in a hypothesis-testing framework.
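To make the contrast concrete, here is a toy sketch of urban-vs-rural as a classification problem. The numbers are entirely made up (not real satellite data), and the "mean patch brightness" feature is a stand-in for whatever image features one would really use. The point is the shape of the problem: a decision rule learned from labeled examples, with no null hypothesis or p-value anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature: mean brightness of an image patch.
# In this toy setup, urban patches tend to be brighter.
urban = rng.normal(loc=0.7, scale=0.1, size=200)
rural = rng.normal(loc=0.4, scale=0.1, size=200)

X = np.concatenate([urban, rural])
y = np.array([1] * 200 + [0] * 200)  # 1 = urban, 0 = rural

# Nearest-centroid classifier: learn one number per class,
# then assign each patch to the nearer class mean.
c_urban = X[y == 1].mean()
c_rural = X[y == 0].mean()

def classify(x):
    return 1 if abs(x - c_urban) < abs(x - c_rural) else 0

preds = np.array([classify(x) for x in X])
accuracy = (preds == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The output of a classifier is a label per image patch and an error rate, not a test statistic: a fundamentally different deliverable than rejecting or failing to reject a hypothesis.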

Another difference is less fundamental, but also important in practice: using out-of-sample methods to validate and test estimators is a religious practice in data mining, but is almost never taught in standard econometrics. (Again, I’m sure PhD courses at UCSD are an exception, but it is still rare to see economics papers that use out-of-sample tests.) In theory, econometric formulas give good error bounds on fitted equations (I still remember the matrix formulas that Jerry Hausman and others drilled into us in the first year of grad school). But the theory assumes that there are no omitted variables and no measurement errors! Of course all real models have many omitted variables, doubly so since “omitted variables” include all nonlinear transforms of the included variables.
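A tiny simulation (with invented data, purely for illustration) shows why the out-of-sample habit matters. An overparameterized polynomial always fits the training sample at least as well as a straight line, so in-sample fit alone cannot tell you whether the extra flexibility is capturing signal or noise; only the held-out points can.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: the truth is linear, plus noise.
x = rng.uniform(0, 1, 100)
y = 2 * x + rng.normal(0, 0.3, 100)

# Hold out the last 30 points as a test set -- routine in
# data mining, still rare in applied econometrics.
train, test = slice(0, 70), slice(70, 100)

results = {}
for deg in (1, 9):
    coef = np.polyfit(x[train], y[train], deg)
    in_mse = np.mean((np.polyval(coef, x[train]) - y[train]) ** 2)
    out_mse = np.mean((np.polyval(coef, x[test]) - y[test]) ** 2)
    results[deg] = (in_mse, out_mse)
    print(f"degree {deg}: in-sample MSE {in_mse:.3f}, "
          f"out-of-sample MSE {out_mse:.3f}")
```

The degree-9 fit mechanically beats the straight line in-sample; whether it actually predicts better is a question only the test set can answer.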

Here are two recent columns on other differences between economists’ and statisticians’ approaches to problem solving.

“I am not an econometrician” by Rob Hyndman

and

“Differences between econometrics and statistics: From varying treatment effects to utilities, economists seem to like models that are fixed in stone, while statisticians tend to be more comfortable with variation” by Andrew Gelman.

Good data mining reference books

The students in my Big Data Analytics course asked for a list of books on the subject they should have in their library. UCSD has an excellent library, including digital versions of many technical books, so my list consists entirely of books that can be downloaded on our campus. Many are from Springer. There are several other books that I have purchased, generally from O’Reilly, that are not listed here because they are not available on campus.

These are intended as reference books for people who have taken one course in R and data mining. Some of them are “cookbooks” for R. Others discuss various machine learning techniques.

BDA16 reference book suggestions

If you have other suggestions, please add them in the comments with a brief description of what is covered.

Zika virus: Forbes columnist can’t bear to say “market failure”

Here’s a column by a Forbes blogger about Zika saying that “we should not wait so long to develop vaccines against tropical diseases.” He concludes:

> Many pharmaceutical companies don’t focus on a disease until it becomes common enough to be highly profitable. The trouble is the vaccine world has become a bit like the plot line for “She’s All That” or “Cinderella.” Attention towards a person or thing does not occur until a cool person notices he or she or it. But when it comes to disease and stock market opportunities, as the saying goes, once your grandmother knows about it, it is usually too late.

Source: Zika Vaccine: Another Example Of Waiting Until It’s Too Late? – Forbes

This is not news. It’s a classic situation where market forces alone do not produce socially desirable behavior. Developing a vaccine for a disease that is absent from rich countries has low expected profitability. Even if the disease becomes epidemic, pharma companies will have to sell at a price near marginal cost.

The only solution is to fund development a different way. Contests, grants (the Gates Foundation), and purchase guarantees (used by the US DoD) all work. But waiting for the traditional patent system plus the pharma profit motive won’t lead to timely development of medications for poor-country diseases.

I guess a Forbes columnist is not allowed to point this out.

Is the FDA Too Conservative or Too Aggressive?

I have long argued that the FDA has an incentive to delay the introduction of new drugs because approving a bad drug (Type I error) has more severe consequences for the FDA than does failing to approve a good drug (Type II […]

Source: Is the FDA Too Conservative or Too Aggressive?

My take: this paper by Vahid Montazerhodjat and Andrew Lo is interesting, but it only looks at one issue, and there are many other problems that make over-approval more likely. There are many biases in the drug pipeline and FDA approval process, most of which favor approving drugs that do nothing (and yet still have side effects). To mention one of many: the population used to test drugs is younger, healthier, more homogeneous, and more compliant than the population that ends up actually taking the drug. A second bias is that the testing process screens out people who have major side effects: they stop taking the drug and are dropped from the sample (and from the statistical analysis at the end). So we only see the people with moderate or no side effects. Both of these problems lead to biases that better statistical methods cannot remove.
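The dropout bias is easy to see in a toy simulation. The numbers here (a 20% side-effect rate, 90% dropout among those affected) are made up purely for illustration, but the mechanism is the one described above: patients who suffer side effects leave the sample, so the rate observed among trial completers badly understates the true rate.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Made-up assumption: 20% of patients would suffer a major
# side effect if they kept taking the drug.
has_side_effect = rng.random(n) < 0.20

# Made-up assumption: 90% of those patients stop taking the
# drug and are dropped from the sample and the final analysis.
drops_out = has_side_effect & (rng.random(n) < 0.90)

completers = ~drops_out
true_rate = has_side_effect.mean()
observed_rate = has_side_effect[completers].mean()

print(f"true side-effect rate:      {true_rate:.1%}")
print(f"rate seen among completers: {observed_rate:.1%}")
```

No statistical correction applied after the fact can recover the true rate from the completers alone; the information left the dataset when the patients did.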

The paper is interesting, but it is working from an idealized model of the drug research process, and I would not take its quantitative results seriously. The basic logic seems sound, though: there should be different approval standards for different diseases.