Is technical knowledge fractal?

My analysis of technical knowledge in manufacturing, aviation, and elsewhere suggests that it is fractal, i.e. that any portion of a knowledge graph can be further decomposed into a detailed knowledge graph in its own right. Limits on human knowledge mean that the frontiers of current graphs are always “fuzzy,” i.e. at low stages of knowledge. Further technology development will clarify the current periphery of a graph, but will reveal new fuzzy portions.
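
To make the fractal claim concrete, here is a toy sketch in code, with invented node names and a deliberately simplified notion of “stage of knowledge” (higher = better understood). It is an illustration of the hypothesis, not a model anyone actually runs:

```python
# Toy illustration: a knowledge graph in which any node can be expanded
# into a more detailed subgraph, and every expansion exposes new
# low-stage ("fuzzy") frontier nodes. Names and stages are invented.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    stage: int = 1                  # 1 = fuzzy frontier .. 4 = well understood
    subgraph: list["Node"] = field(default_factory=list)

    def expand(self, children: list["Node"]) -> None:
        """Decompose this node into a detailed subgraph of its own.
        Expanding raises this node's stage, but the new children start
        out fuzzy -- the fractal property."""
        self.stage += 1
        self.subgraph = children

def frontier(node: Node) -> list["Node"]:
    """Collect the fuzzy (low-stage, unexpanded) nodes of the graph."""
    if not node.subgraph:
        return [node] if node.stage <= 1 else []
    found = []
    for child in node.subgraph:
        found.extend(frontier(child))
    return found

etch = Node("plasma etch", stage=3)
etch.expand([Node("sheath physics"), Node("polymer redeposition")])
print([n.name for n in frontier(etch)])
# -> ['sheath physics', 'polymer redeposition']: clarifying one region
#    of the graph revealed a new fuzzy periphery.
```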

To the extent this hypothesis is true, i.e. that knowledge is fractal, it has many implications. For example, high-tech industries must operate in frontier regions where much is known, but some important issues are not well understood. People are better than machines at dealing with ambiguity, so the faster the rate of technological progress, the more an industry needs people and the less it can automate its activities.


How useful is data mining without human judgment?

A recent exchange on Mathbabe’s blog about the meaning of Big Data led me to some insights about where decisions need human judgment and analysis, and where we can turn decisions over to automated data mining. For example, serving up “you might also like X” in a web store will work a lot better than estimating how many people have flu. Why?

Here’s what I wrote. (Not clear if her WordPress interface picked it up.)

Cathy, big data in your sense does not work in every domain. Saying “no human judgment is needed” is approximately equivalent to saying “the relationships do not need to be supported by causal theory, just by raw correlation.” This works great in certain domains. But the underlying correlations have to be changing relatively slowly, compared to the amount of data that is available. With enough data for “this month,” an empirical relationship which holds for multiple months can be data mined (discovered) and used to make decisions, without human judgment.
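
As a hedged sketch of that stability requirement (synthetic numbers, my own illustration): mine a purely empirical linear relationship from one month of data and reuse it the next, with no causal theory anywhere in the loop.

```python
# Pure "correlation mining," no causal model: fit on last month, apply
# this month. It works exactly because the relationship drifts slowly.
# All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def month_of_data(slope, n=1000):
    x = rng.uniform(0, 10, n)              # observed signal (e.g. searches)
    y = slope * x + rng.normal(0, 1.0, n)  # outcome we want to estimate
    return x, y

x_jan, y_jan = month_of_data(slope=2.0)    # stable world:
x_feb, y_feb = month_of_data(slope=2.05)   # the slope barely moves

slope_hat = np.polyfit(x_jan, y_jan, 1)[0]             # "mined" from January
feb_error = np.mean(np.abs(slope_hat * x_feb - y_feb))
print(f"mined slope {slope_hat:.2f}, February mean error {feb_error:.2f}")
# With slow drift the mined relationship transfers; if the slope jumped
# between months, the same procedure would quietly fail.
```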

But many of the world’s important problems don’t have that much stability. For example, trying to use searches to track the spread of an annual flu at the state-by-state level won’t be very reliable without human judgment. The correlation between search terms and flu incidence in 2012 is not likely to be the same in 2013. One reason is that news cycles vary from year to year, so in some years people are more frightened of the flu than in other years, and do more searches. Consider the following experiment: use the “big data relationships” from 2010 to track the incidence of flu in 2014. It won’t work very well, will it?
On the other hand, if you could get accurate weekly data about flu incidence, the same methods might work much better. Using the correlations between search terms and flu in November might give reasonably accurate estimates in December.
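
To illustrate the difference between those two cases (my own sketch, with synthetic data rather than real flu statistics): a model frozen in the first period degrades as the search-to-flu relationship drifts, while a model refit on the previous period keeps tracking.

```python
# Frozen model vs. periodically recalibrated model on a drifting
# relationship. Synthetic data; "year" could just as well be "week".
import numpy as np

rng = np.random.default_rng(1)
true_slopes = 2.0 + np.cumsum(rng.normal(0, 0.4, 6))   # relationship drifts

def year_data(slope, n=500):
    x = rng.uniform(0, 10, n)                          # search volume
    return x, slope * x + rng.normal(0, 1.0, n)        # flu incidence

fits = []
for year, slope in enumerate(true_slopes):
    x, flu = year_data(slope)
    fits.append(np.polyfit(x, flu, 1)[0])
    if year == 0:
        continue
    stale = fits[0]    # frozen in year 0 ("use 2010 relationships in 2014")
    fresh = fits[-2]   # refit on the previous period ("November for December")
    print(f"year {year}: stale error {np.mean(np.abs(stale*x - flu)):.2f}, "
          f"fresh error {np.mean(np.abs(fresh*x - flu)):.2f}")
# The stale model's error grows with the drift; the recalibrated one stays low.
```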

Automated systems based on data mining are a form of closed-loop decision system. (Closed loop basically means “no human in the loop.”) Closed-loop feedback works great under certain conditions, and very poorly under others. A key difference is whether the system designer has sufficient (accurate) knowledge about the system’s true behavior.
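
A minimal sketch of that condition (invented gains, my own illustration): a proportional feedback loop settles on its target when the designer’s assumed model of the system is roughly right, and diverges when it is badly wrong.

```python
# Closed-loop control with no human in the loop: the controller acts on
# the designer's model of the system. Gains here are made up.
def run_loop(true_gain, assumed_gain, target=10.0, steps=20):
    state = 0.0
    for _ in range(steps):
        error = target - state
        u = error / assumed_gain       # action computed from the designer's model
        state += true_gain * u         # what the real system actually does
    return state

print(run_loop(true_gain=1.0, assumed_gain=1.0))  # accurate knowledge: settles at 10
print(run_loop(true_gain=3.0, assumed_gain=1.0))  # poor knowledge: the loop diverges
```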

Once again “it all comes back to knowledge.”

The Economist praises a dangerous and obsolete management concept

The Economist just published a short article in praise of the experience curve. Even the first sentence is wrong. Here’s their lead-in:

The more experience a firm has in producing a particular product, the lower its costs

The experience curve is an idea developed by the Boston Consulting Group (BCG) in the mid-1960s.

Actually, no. The experience curve, also known as the learning curve, goes back to the aircraft industry before World War II. (An excellent review of the history, and of its application to management up to 1980, is J. M. Dutton, A. Thomas, and J. E. Butler, “The history of progress functions as a managerial technology,” Business History Review 58(2), 1984.)
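
For concreteness, the standard quantitative form of the curve (the textbook “Wright’s law” statement, not anything the Economist printed) is:

```latex
% Wright's law: C_n is unit cost after cumulative output n;
% r is the "progress ratio" (r = 0.8 for an "80% curve").
C_n = C_1 \, n^{\log_2 r},
\qquad
\frac{C_{2n}}{C_n} = 2^{\log_2 r} = r
```

That is, each doubling of cumulative output multiplies unit cost by r, e.g. a 20% reduction per doubling on an 80% curve.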

Here’s my comment on the Economist article:

It’s sad to see such an obsolete and downright dangerous theory get this favorable write-up. BCG (and later Bain) ruined numerous businesses by persuading them to blindly follow “the experience curve.”

The danger in the Experience Curve concept is that it claims that improvement is _inevitable_ and _the same for everyone in an industry_. Neither of these is remotely correct. If it were correct, the biggest firm would be able to reduce its costs faster than everyone else, and would become unassailable. This was exactly the theory behind BCG’s growth-share matrix, and it’s WRONG. General Motors was bigger than Toyota until 2008, but Toyota had lower costs, and faster-declining costs, since at least 1965 or 1970. For decades GM claimed this was due to lower labor costs, but that was refuted in the book The Machine That Changed the World, which showed that Toyota (and others) were much more efficient per labor hour than US automakers.
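
To see why the theory, taken literally, implies unassailability, put two firms on the same 80% curve (a back-of-envelope calculation with invented volumes, not real GM or Toyota data):

```python
# If everyone really sat on the same experience curve, relative cost
# would depend only on cumulative volume, so the volume leader would be
# permanently cheaper. Volumes and the base cost below are invented.
from math import log2

def unit_cost(cum_volume, c1=100.0, progress_ratio=0.8):
    return c1 * cum_volume ** log2(progress_ratio)

leader, rival = 2_000_000, 1_000_000         # leader has 2x cumulative volume
print(unit_cost(leader) / unit_cost(rival))  # -> 0.8: permanently 20% cheaper
# History refused to cooperate: the smaller firm (Toyota) had lower and
# faster-falling costs than the bigger one (GM).
```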

It’s certainly true that, properly managed, experience can facilitate improvement. But there are now 25 years of research showing that improvement requires deliberate effort, and that the improvement process takes careful management. Toyota, through JIT and the Toyota Production System, essentially invented a system for making more rapid improvement – hence it surpassed GM and everyone else while still a fraction of their size. The semiconductor industry had its own epiphany about the folly of the experience curve, when a major research project run out of Berkeley surveyed a variety of fabs and found vastly different performance that had little to do with scale or cumulative experience.

Even BCG no longer claims the experience curve is valid, as far as I know. (I’d be happy to hear from others who have experienced BCG’s views in the last 5 years.)

I could go on and on (and I did, in stuff I wrote 20 years ago on this topic)! We need to drive a stake through the heart of this idea. It’s not that it’s totally and utterly wrong, because the learning curve has some ex post validity. But it has little predictive power, and even less value as a normative theory of how to manage learning!