Is high-speed Internet pointless? No.

A contributor to Dave Farber’s IP (“Interesting People”) list recently stated that 1 Megabit per second (Mbps) is adequate bandwidth for consumers. Compare this with “high-speed Internet,” which in the US means 20 Mbps or higher, and with Korea, where speeds over 50 Mbps are common.

My response: 1 Mbps is woefully low for any estimate of “useful bandwidth” to an individual, much less to a home. It’s risky to give regulators any excuse to further ignore consumer desires for faster connections. 1 Mbps is too low by at least one order of magnitude, quite likely by three orders of magnitude, and conceivably by even more. I have written this note in an effort to squash the 1 Mbps idea in case it gets “out into the world.”

The claim that 1 Megabit per second is adequate:

>From: Brett Glass <brett@lariat.net>
>Date: Sun, Dec 31, 2017 at 2:14 PM

> The fact is that, according to neurophysiologists, the entire bandwidth of
> all of the human senses combined is about 1 Mbps. (Some place it slightly
> higher, at 1.25 Mbps.) Thus, to completely saturate all inputs to the human
> nervous system, one does not even need a T1 line – much less tens of megabits.
> And therefore, a typical household needs nowhere near 25 Mbps – even if they
> were all simultaneously immersed in high quality virtual reality. Even the

My response:

First, I don’t know where the 1 Mbps number comes from, but a commonly cited figure is the bandwidth of the optic nerve, which is generally assessed at around 10 Mbps. See the reference below.
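
As a rough sanity check on that figure, here is a back-of-envelope calculation. The axon count and per-axon information rate below are order-of-magnitude assumptions I am supplying for illustration, not numbers taken from the article cited below.

```python
# Back-of-envelope estimate of optic nerve bandwidth (one eye).
# Assumed figures, rough and for illustration only:
#   ~1,000,000 retinal ganglion cell axons per optic nerve
#   ~10 bits/s of information carried per axon (order of magnitude)

axons_per_optic_nerve = 1_000_000      # rough anatomical figure
bits_per_second_per_axon = 10          # rough information-rate assumption

one_eye_bps = axons_per_optic_nerve * bits_per_second_per_axon
both_eyes_bps = 2 * one_eye_bps

print(f"One eye:   ~{one_eye_bps / 1e6:.0f} Mbit/s")    # ~10 Mbit/s
print(f"Both eyes: ~{both_eyes_bps / 1e6:.0f} Mbit/s")  # ~20 Mbit/s
```

Even this crude arithmetic puts vision alone at roughly ten times the 1 Mbps claim, before counting the other senses or multiple people in a home.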

 

[Figure: diagram of the retina]

An American Scientist article on “How the Retina Works” is available here.

Second, a considerable amount of pre-processing occurs in the retina and in the layer beneath it, before signals ever reach the optic nerve. These layers act as the first stages of a neural network, handling tasks such as edge detection.
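
To make “first layers of a neural network that handle edge detection” concrete, here is a minimal sketch of edge detection with a Sobel filter. It illustrates the kind of convolution involved; it is not a model of the retina’s actual circuitry.

```python
import numpy as np
from scipy.signal import convolve2d

# Toy "retinal pre-processing": convolve an image with Sobel kernels
# to detect edges, analogous to the first layers of a neural network.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
sobel_y = sobel_x.T

def edge_map(image: np.ndarray) -> np.ndarray:
    """Return the gradient magnitude (edge strength) of a grayscale image."""
    gx = convolve2d(image, sobel_x, mode="same", boundary="symm")
    gy = convolve2d(image, sobel_y, mode="same", boundary="symm")
    return np.hypot(gx, gy)

# Tiny synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(edge_map(img).round(1))
```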


What snakes are growing in the Gardens of Technological Eden?

Two emerging technologies are revolutionizing industries, and will soon have big impacts on our health, jobs, entertainment, and entire lives. They are Artificial Intelligence and Big Data. Of course, these have already had big effects in certain applications, but I expect that they will become even more important as they improve. My colleague Dr. James Short is putting together a conference called Data West at the San Diego Supercomputer Center, and for it I came up with a list of fears that might disrupt their emergence.

1) If we continue to learn that ALL large data repositories will be hacked from time to time (Experian; National Security Agency), what blowback will that create against data collection? Perhaps none in the US, but in some other countries, it will cause less willingness to allow companies to collect consumer data.

2) Consensual reality is unraveling, mainly as a result of deliberate, sophisticated, distributed attacks. That should concern all of us as citizens. Should it also worry us as data users, or will chaos in public venues not leak over into formal data? For example, if information portals (YouTube, Facebook, etc.) are forced to take a more active role in censoring content, will advertisers care? Again, Europe may be very different. We can presume that any countermeasures will be only partly effective – the problem probably does not have a good technical solution.

3) Malware, extortion, etc. aimed at companies. Will this “poison the well” in general?

4) Malware, extortion, doxing, etc. aimed at Internet of Things users, such as owners of household thermostats, security cameras, and cars. Will this cause a backlash against sellers of these systems, or will people accept it as the “new normal”? So far, people have seemed willing to bet that it won’t affect them personally, but will that change? For example, what will happen when auto accidents are caused by deliberate but unknown parties who advertise their success? Or when someone records all conversations within reach of the Alexa box in the living room?

Each of these scenarios has at least a 20% chance of becoming common. At a minimum, they will require more spending on defenses. Will any become large enough to suppress entire applications of these new technologies?

I have not said anything about employment and income distribution. They may change for the worse over the next 20 years, but the causes and solutions won’t be simple, and I doubt that political pressure will become strong enough to alter technology evolution.

Automation and the Future of Work – Lecture Notes 2017

One of my students reported that he was having trouble finding my lecture notes from this course, so I am putting them in one place. I will update this for the last few classes.

Topic | Date of class | File name + link
Final projects; diffusion of innovation; financial evaluation; technology life cycles | May 15, 17 | A+W 2017 May 17 Bohn adoption models
3 cases of service automation | May 8 | Internet of things
Human expertise & AI in medicine | April 17 | Q+W week 3 medicine
Trends in employment | April 4 | A+W17 Bohn April 4

Some of the aviation discussions are not yet here.

Schumpeter: The University of Chicago worries about a lack of competition | The Economist

Its economists used to champion big firms, but the mood has shifted

Source: Schumpeter: The University of Chicago worries about a lack of competition | The Economist

There is an emerging consensus among economists that competition in the economy has weakened significantly. That is bad news: it means that incumbent firms may not need to innovate as much, and that inequality may increase if companies can hoard profits and spend less on investment and wages.

Yes, I certainly see this in tech fields. Both consequences are scary.

Thanks to colleague Prof. Liz Lyons for suggesting this.

Art to science in moderating internet content

This article describes the efforts of Facebook, YouTube, and similar hosts of user-generated content to screen unacceptable material (both speech and images). It’s apparently a grim task, because of the depravity of some material. For the first decade, moderation methods were heavily ad hoc, but they gradually grew more complex and formalized in response to questions such as when to allow violent images as news. In aviation terms, moderation was at Stage 2: Rules + Instruments. Now, some companies are developing Stage 3 (standard procedures) and Stage 4 (automated) methods.
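
To picture what the move from Stage 2 toward Stages 3 and 4 might look like in code, here is a minimal hypothetical sketch, not any company’s actual pipeline: explicit rules catch the clear-cut cases, a classifier score handles the rest, and uncertain items are routed to a human reviewer.

```python
# Hypothetical staged moderation pipeline (illustration only).
BANNED_TERMS = {"example_slur", "example_threat"}   # placeholder rule list

def classifier_score(text: str) -> float:
    """Stand-in for a trained model returning P(unacceptable)."""
    # Hypothetical heuristic: fraction of words that are all-caps shouting.
    words = text.split()
    return sum(w.isupper() for w in words) / max(len(words), 1)

def moderate(text: str) -> str:
    # Stage 2/3: explicit rules and standard procedures.
    if any(term in text.lower() for term in BANNED_TERMS):
        return "remove"
    # Stage 4: automated decision, with a band of uncertainty kept for humans.
    score = classifier_score(text)
    if score > 0.8:
        return "remove"
    if score > 0.4:
        return "send to human reviewer"
    return "allow"

print(moderate("THIS IS ALL CAPS SHOUTING AT EVERYONE"))  # "remove"
print(moderate("a perfectly ordinary comment"))           # "allow"
```

The hard part in practice is the middle band: how wide to make it, and who reviews what falls into it.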


Decrypting the iPhone – some speculation

The NY Times says nobody knows how the FBI decrypted the infamous iPhone. That is certainly true, but there is speculation about physically opening up one of its chips and reading its crypto key. http://www.nytimes.com/aponline/2016/03/29/us/politics/ap-us-apple-encryption.html Years ago, I looked at reverse engineering of chip designs by physically disassembling them. Here are some comments on how difficult that is, although it certainly may be possible.

Physically attacking a chip is an old, but difficult, method of breaking into a system that you control. In 2008, Ed Felten and others read DRAM chips that had been turned off, by freezing them in liquid nitrogen. But they were reading the outside pins of the chip package. http://www.nytimes.com/2008/02/22/technology/22chip.html Partly to prevent that, but mostly for speed and cost reasons, processors like those inside a smartphone now include modules like graphics, cache, and security on the same die and chip. So there is no way to read such data from outside the package, unless the design has a bug.

To read signals from inside a chip, you need to figure out the logical and physical layouts of the chip, which are proprietary and, with up to 100 million logic gates, very complex. Then you need to be able to inject and read signals with a physical separation of 100 nanometers (nm) or less. By comparison, the wavelength of visible light is 400 nm or greater. And the chip designers knew you might try, and perhaps did their best to make it impossible. Of course, companies still attempt to reverse engineer their competitors’ chips, so some expertise does exist.

[Figure: labeled photo of a chip die]

Finally, if you are physically slicing up a unique device, I would guess that one slip and you may not be able to recover. You can’t just shut off power and start over the way you can with software attacks.

Here is one example of successfully dissecting a security chip, back in 2010. It was not easy!

 

Using data mining to ban trolls on League of Legends

Something I just found for my Big Data class.

Riot rolls out automated, instant bans for League of Legends trolls

Machine learning system aims to remove problem players “within 15 minutes.”

An interesting thread of player comments has a good discussion of potential problems with automated bans. Only time will tell how well the company develops the system to get around these issues.
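
For readers who want a concrete picture of what a “machine learning system” for this job might look like, below is a minimal sketch of a toxic-chat classifier. The training lines, labels, and model choice are invented for illustration and say nothing about Riot’s actual system.

```python
# Toy toxic-chat classifier: TF-IDF features plus logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

chat_lines = [
    "good game everyone, well played",
    "nice save mid, thanks for the help",
    "uninstall the game you are worthless",
    "report this idiot, worst player ever",
]
labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = toxic (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(chat_lines, labels)

new_line = ["you are the worst, just quit"]
print(model.predict_proba(new_line))  # [P(acceptable), P(toxic)]
```

A real deployment would need far more data, careful handling of game slang and false positives, and an appeals process, which is presumably where the player complaints in that thread come from.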

This company also took an experimental approach to banning players, and hired three PhDs in Cognitive Science to develop it. (Just to be clear, their experiments did not appear to be automated A/B-style experiments.) Below is a screen shot from that system.

[Screen shot: League of Legends]

But I’m not tempted to play League of Legends to study player behavior and experiment with getting banned! (I don’t think I’ve ever tried an MMO beyond some prototypes 15 years ago.) If any players want to post their observations here, great.