I recently audited some lectures by friend and China expert Prof. Susan Shirk. She bans computers in her lectures. But one student sitting near me had his machine out and was “busy” with the usual distractions. (Didn’t he know the Associate Dean was a few seats away?) I asked Susan about him after class. “He told me he can’t take notes without a computer.” Obviously he doesn’t think the computer is the big issue in his note taking. Actually, it probably IS the issue, but in a negative way.
Not one of the open computers was mirroring the overheads.
James Kwak has beaten the distraction of his cell phone by removing most apps, including browsers.
I know that its enormous powers of distraction also make me lose focus on work, tune out in meetings, stay up too late at night, and, worst of all, ignore people in the same room with me. We all know this. We’re addicted to the dopamine hit we get when we look at our email and there’s actually something good in there, so we keep checking our email hoping to feel it again.
via How I Achieved Peace by Crippling My Phone — Bull Market — Medium.
Clay Shirky, an Internet sociologist, has a good discussion of why he recently banned computers in his classrooms. Excerpt:
I came late and reluctantly to this decision — I have been teaching classes about the internet since 1998, and I’ve generally had a laissez-faire attitude towards technology use in the classroom. This was partly because the subject of my classes made technology use feel organic, …. And finally, there’s not wanting to infantilize my students, who are adults, even if young ones — time management is their job, not mine.
Despite these rationales, the practical effects of my decision to allow technology use in class grew worse over time. The level of distraction in my classes seemed to grow, even though it was the same professor and largely the same set of topics, …
Over the years, I’ve noticed that when I do have a specific reason to ask everyone to set aside their devices (‘Lids down’, in the parlance of my department), it’s as if someone has let fresh air into the room. The conversation brightens, and more recently, there is a sense of relief from many of the students. Multi-tasking is cognitively exhausting — when we do it by choice, being asked to stop can come as a welcome change.
So this year, I moved from recommending setting aside laptops and phones to requiring it, adding this to the class rules: “Stay focused. (No devices in class, unless the assignment requires it.)” …
The rules for flying radio controlled aircraft are under tremendous debate and change, mainly because of two new technologies that have together created a new business. The technologies are tiny flight management systems costing about $100, and excellent lightweight cameras like the GoPro (invented by a UCSD grad). The new business is using drones for low-altitude photography (and eventually for other applications, although IMO not for package delivery).
Congress put the Federal Aviation Administration in charge of figuring out what rule changes are needed. So far it has done a slow and weak job. (One result is that the U.S. has lost leadership of the industry, and may even become a backwater. That is a topic for another day.)
Pilots are instinctively concerned about the risks that unmanned aircraft pose to manned aircraft. Much argument back and forth has ensued, but there is little or no modeling or investigation. (What happens when a 2-pound quadcopter collides with a small plane at 140 knots? Apparently there have been zero experiments on the issue.) Here is an interesting blog post on this issue.
Why See and Avoid Doesn’t Work – AVweb Insider Article.
My take on this issue is that the likelihood of serious air-to-air collisions is tiny; they would be far rarer than bird strikes, for example. A much bigger source of injuries will be untrained idiots flying drones over crowds of people.
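Still, the collision question at least invites a back-of-envelope energy estimate. The sketch below is just unit conversion plus (1/2)mv²; it is not a crash model, and the bullet comparison is approximate:

```python
# Back-of-envelope: kinetic energy of a 2-pound quadcopter struck at a
# 140-knot closing speed. Pure unit conversion plus (1/2) m v^2 --
# it says nothing about how that energy would actually be absorbed.
LB_TO_KG = 0.4536
KNOT_TO_MS = 0.5144

mass_kg = 2 * LB_TO_KG        # about 0.91 kg
speed_ms = 140 * KNOT_TO_MS   # about 72 m/s
energy_j = 0.5 * mass_kg * speed_ms ** 2
print(round(energy_j))        # roughly 2,400 joules
```

That is on the order of a rifle bullet's muzzle energy, which suggests the question deserves actual experiments rather than argument.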
This short interview has some good explanations.
LeCun: Actually, I think the basics of machine learning are quite simple to understand….
A pattern recognition system is like a black box with a camera at one end, a green light and a red light on top, and a whole bunch of knobs on the front. The learning algorithm tries to adjust the knobs so that when, say, a dog is in front of the camera, the red light turns on, and when a car is put in front of the camera, the green light turns on. You show a dog to the machine. If the red light is bright, don’t do anything. If it’s dim, tweak the knobs so that the light gets brighter. If the green light turns on, tweak the knobs so that it gets dimmer. Then show a car, and tweak the knobs so that the red light gets dimmer and the green light gets brighter. If you show many examples of cars and dogs, and you keep adjusting the knobs just a little bit each time, eventually the machine will get the right answer every time.
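The knob-tweaking procedure LeCun describes is essentially a perceptron update rule. Here is a minimal sketch; the toy feature vectors, learning rate, and function names are my own, not from the interview:

```python
# The "knobs" are the weights w and bias b. After each example, if the
# wrong light came on, every knob is nudged a little in the direction
# that makes the right light brighter.

def train(examples, epochs=50, lr=0.1):
    """examples: list of (feature_vector, label) with label in {+1, -1}."""
    w = [0.0] * len(examples[0][0])  # the knobs, all starting at zero
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if score >= 0 else -1
            if pred != y:  # wrong light lit up: tweak the knobs slightly
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
```

On linearly separable “dog vs. car” feature vectors, this loop converges to knob settings that light the right lamp every time; deep learning replaces the two lamps with millions of knobs adjusted by gradient descent.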
Why unsupervised learning is critical in the long run, but does not yet work:
The type of learning that we use in actual Deep Learning systems is very restricted. What works in practice in Deep Learning is “supervised” learning. You show a picture to the system, and you tell it it’s a car, and it adjusts its parameters to say “car” next time around. Then you show it a chair. Then a person. And after a few million examples, and after several days or weeks of computing time, depending on the size of the system, it figures it out.
Now, humans and animals don’t learn this way. You’re not told the name of every object you look at when you’re a baby. And yet the notion of objects, the notion that the world is three-dimensional, the notion that when I put an object behind another one, the object is still there—you actually learn those. You’re not born with these concepts; you learn them. We call that type of learning “unsupervised” learning.
Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning and Make Machines Smarter – IEEE Spectrum.
When the doctor’s away, the patient is more likely to survive | Ars Technica.
Very surprising. When cardiologists were away from the hospital, deaths after heart failure or cardiac arrest declined. I’ll probably use this in my course this spring. (Or perhaps in both courses: Big Data, and Operations Quality in Healthcare.)
No, a study did not link GM crops to 22 diseases.
And a candidate for worst graph of the year: it appears to show that deaths from a certain class of diseases grew in parallel with some farming trends. (Figure 16 in the article, which is at http://www.organic-systems.org/journal/92/JOS_Volume-9_Number-2_Nov_2014-Swanson-et-al.pdf ). Any two steadily increasing time series can be plotted so that they lie approximately on top of each other, if you distort the scales enough. Other “causes” they could have plotted, with approximately the same results: cell phones per capita, percentage of cars on the road with ABS brakes, and (for all I know) average campaign spending per Congressional race.
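The trick is easy to demonstrate: any two monotonically increasing series correlate strongly, and rescaling each to [0, 1] makes them overlap on a chart. A sketch with made-up numbers (neither series is real data):

```python
# Two unrelated, steadily increasing series -- the values are invented
# for this sketch, standing in for "deaths" and any rising trend.
deaths = [10, 14, 19, 27, 40, 61, 95]
phones = [0.1, 0.2, 0.35, 0.5, 0.65, 0.8, 0.9]

def rescale(series):
    """Map a series onto [0, 1] -- the 'distort the scales' move."""
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) for v in series]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Plot `rescale(deaths)` against `rescale(phones)` and they lie nearly on top of each other, and their correlation is high, even though one has nothing to do with the other.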
Under-reporting of clinical trials has been a problem for decades (if not longer). Only in the last few years has the medical community realized the pernicious effects this has on our knowledge about “what works” in medicine. If “bad” results don’t get published, all kinds of problems ensue, such as overly optimistic views of new drugs, repetition of expensive and potentially dangerous research, and general waste of money. Since the NIH is such a big funder of medical research, this affects taxpayers too!
In any case, the NIH continues its slow (but steady?) crackdown on this issue. They are even threatening to cut off funding for researchers who don’t make their results available! (Of course a lot of research is funded by pharmaceutical companies, so this is hardly a comprehensive threat.)
I track this kind of thing because of my interest in “How societies learn” about technology. Forgetting and ignoring are powerful forces in retarding learning.
JAMA. Published online November 19, 2014. doi:10.1001/jama.2014.10716
Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts – IEEE Spectrum.
I agree 100% with the following discussion of big data learning methods, which is excerpted from an interview. Big Data is still in the ascending phase of the hype cycle, and its abilities are being way over-promised. In addition, there is a great shortage of expertise. Even people who take my course on the subject are only learning “enough to be dangerous.” It will take them months more of applied work to begin to develop reasonable instincts, and appropriate skepticism.
As we are now realizing, standard econometrics/regression analysis has many of the same problems, such as publication biases and excess re-use of data. And one can argue that its effects, e.g. in health care, have also been overblown to the point of being dangerous. (In particular, the randomized controlled trial approach to evaluating pharmaceuticals is much too optimistic about evaluating side effects. I’ve posted messages about this before.) The important difference is that now the popular press has adopted Big Data as its miracle du jour.
One result is excess credulity. On the NPR Marketplace program recently, they had a breathless story about The Weather Channel, and its ability to forecast amazing things using big data. The specific example was that certain weather conditions in Miami in January predict raspberry sales. What nonsense. How many Januaries of raspberry sales can they be basing that relationship on? 3? 10?
Why Big Data Could Be a Big Fail [this is the headline that the interviewee objected to – see below]
Spectrum: If we could turn now to the subject of big data, a theme that runs through your remarks is that there is a certain fool’s gold element to our current obsession with it. For example, you’ve predicted that society is about to experience an epidemic of false positives coming out of big-data projects.
Michael Jordan: When you have large amounts of data, your appetite for hypotheses tends to get even larger. And if it’s growing faster than the statistical strength of the data, then many of your inferences are likely to be false. They are likely to be white noise.
Spectrum: How so?
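Jordan’s point about an epidemic of false positives is easy to simulate: test enough hypotheses against pure noise and a predictable fraction come back “significant.” A sketch (the z-test setup, sample sizes, and seed are my own choices):

```python
import random

def count_false_positives(n_hypotheses=1000, n_samples=30, seed=0):
    """Run many two-sided z-tests (sigma = 1, known) on pure-noise data.
    Every null hypothesis is true by construction, yet about 5% of the
    tests clear the usual p < 0.05 bar (|z| > 1.96) by chance alone."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_hypotheses):
        sample = [rng.gauss(0, 1) for _ in range(n_samples)]
        # z = sample mean divided by its standard error, 1/sqrt(n)
        z = (sum(sample) / n_samples) * n_samples ** 0.5
        if abs(z) > 1.96:
            hits += 1
    return hits

print(count_false_positives())  # typically around 50 of 1000, i.e. ~5%
```

White noise in, “discoveries” out: with thousands of hypotheses and no correction for multiple comparisons, dozens of spurious findings are guaranteed, which is exactly the raspberry-forecast problem above.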