Don’t expect Level 5 Autonomous cars for decades

Why I don’t expect fully autonomous city driving in my lifetime (approx 25 years).

Paraphrase: The strange and crazy things that people do: a ball bouncing in front of your car, a child falling down, a car running a red light, a head-down pedestrian. A Level 5 car has to handle all of these cases, reliably.

These situations require 1) a giant set of training data, 2) very rapid computing, and 3) severe braking. Autonomous cars today are very slow and very cautious in order to allow more time for decisions and for braking.
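(A back-of-the-envelope stopping-distance calculation makes the braking point concrete. The reaction time and deceleration below are my own assumed values, not measurements from any real vehicle.)

```python
def stopping_distance_m(speed_kmh, reaction_s=0.5, decel_ms2=7.0):
    """Distance covered while the system decides, plus braking distance
    at constant deceleration (about 0.7 g, already a hard stop)."""
    v = speed_kmh / 3.6                      # km/h -> m/s
    reaction_dist = v * reaction_s           # travelled before the brakes engage
    braking_dist = v ** 2 / (2 * decel_ms2)  # v^2 / (2a)
    return reaction_dist + braking_dist

for kmh in (25, 40, 60):
    print(f"{kmh} km/h -> {stopping_distance_m(kmh):.1f} m to stop")
# Roughly 7 m, 14 m, and 28 m: stopping distance grows with the square of speed,
# which is why today's cautious autonomous cars simply drive slowly.
```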

My view:

There is no magic bullet that can solve these three problems, except keeping autonomous cars off of city streets. And all three get worse in bad weather, even in fog, let alone snow.

Also, there are lots of behavioral issues, such as “knowing” the behavior of pedestrians in different cities. Uber discovered that frequent braking/accelerating makes riders carsick – so they re-tuned their safety margins, and their car killed a pedestrian.

A counter-argument (partly from Don Norman, jnd1er): Human drivers are not good at these situations either, and occasionally hit people. Therefore, we should not wait for perfection, but instead deploy systems that are, on balance, better than humans. As distracted driving gets worse, the tradeoff will shift further in favor of autonomous cars.

But there is another approach to distracted driving. Treat it like drunk driving. Make it socially and legally unacceptable. Drunk driving used to be treated as just an accident, with very light penalties even in fatal cases.

Finally, I’m not sure that any amount of real-life driving will be enough to develop training datasets for the rarest edge cases. Developers will need supplemental methods to handle them, including simulated accidents and some causal modeling. For example, the probabilities of different events change by location and time of day. Good drivers know this, and adjust. Perhaps cars will need adjustable parameters that shift their algorithm tuning in different circumstances.
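(To make “adjustable parameters” concrete, here is a minimal sketch of context-dependent tuning. The contexts, numbers, and names are purely my own illustration; no vendor publishes anything like this.)

```python
# Hypothetical context-dependent tuning table; all values are illustrative assumptions.
# (max speed in km/h, prior probability of a pedestrian conflict, extra braking margin in m)
DRIVING_PROFILES = {
    ("school_zone", "clear"): (25, 0.30, 8.0),
    ("school_zone", "fog"):   (15, 0.30, 12.0),
    ("downtown", "clear"):    (40, 0.15, 6.0),
    ("downtown", "snow"):     (25, 0.15, 15.0),
    ("highway", "clear"):     (100, 0.01, 40.0),
}

def tuning_for(area, weather):
    """Unknown contexts fall back to the most conservative profile."""
    return DRIVING_PROFILES.get((area, weather), (15, 0.30, 15.0))

print(tuning_for("downtown", "snow"))  # (25, 0.15, 15.0)
print(tuning_for("suburb", "rain"))    # conservative fallback: (15, 0.3, 15.0)
```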

Source of the quotation: Experts at the Table: The challenges to build a single chip to handle future autonomous functions of a vehicle span many areas across the design process.

Source: Semiconductor Engineering – Challenges To Building Level 5 Automotive Chips

The First Smart Air Quality Tracker?

The first microprocessor is almost 50 years old, but microprocessors (MPUs) continue to revolutionize new areas. (The first MPU was the Intel 4004, in 1971, which Intel designed for a calculator company!) In concert with Moore’s Law and now-ubiquitous two-way wireless data transmission (thanks, Qualcomm!), smartphones have become a basic building block of many products.

A companion to explain what’s in your air, anywhere. Flow is the intelligent device that fits into your daily life and helps you make the best air quality choices for yourself, your family, your community.

Source: Flow, by Plume Labs | The First Smart Air Quality Tracker

Here is a quick review I wrote of the “Flow” pollution meter, after using it for a few months.  I wrote it as a comment on a blog post by Meredith Fowlie about monitoring the effects of fires in N. California.

Some U.S. police departments dump body-camera programs amid high costs – The Washington Post

Smaller departments that struggle with the cost of equipment and storage of data are ending or suspending programs aimed at transparency and accountability.

Source: Some U.S. police departments dump body-camera programs amid high costs – The Washington Post

My comment: this was predictable. Video data gets big very quickly. See my discussion 3 years ago.
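(How big, roughly? Here is a quick estimate for a small department; every number below is an assumption I made up for illustration.)

```python
# Back-of-the-envelope body-camera storage estimate; all inputs are assumed values.
officers        = 50     # small department
hours_per_shift = 4      # hours of footage actually recorded per officer per shift
shifts_per_year = 250
mbps            = 5      # rough 1080p bitrate, in megabits per second

gb_per_hour = mbps * 3600 / 8 / 1000                   # Mb/s -> GB per hour (about 2.25 GB)
tb_per_year = officers * hours_per_shift * shifts_per_year * gb_per_hour / 1000
print(f"{gb_per_hour:.2f} GB per recorded hour")
print(f"~{tb_per_year:.0f} TB of new video per year, before retention policy even starts")
```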

Would ‘explainable AI’ force companies to give away too much? Not really.

Here is an argument for allowing companies to maintain a lot of secrecy about how their data mining (AI) models work. The claim is that revealing information will put companies at a competitive disadvantage. Sorry, that is not enough of a reason. And it’s not actually true, as far as I can tell.

The first consideration when discussing transparency in AI should be data, the fuel that powers the algorithms. Because data is the foundation for all AI, it is valid to want to know where the data…

Source: The problem with ‘explainable AI’ | TechCrunch

Here is my response.

Your questions are good ones. But you seem to think that explainability cannot be achieved except by giving away all the work that led to the AI system. That is a straw man. Take deep learning systems, for example. The IP includes:
1) The training set of data
2) The core architecture of the network (number of layers, etc.)
3) The training procedures over time, including all the testing and tuning that went on.
4) The resulting system (weights, filters, transforms, etc.).
5) Higher-level “explanations,” whatever those may be. (For me, these might be a reduced-form model that is approximately linear and can be interpreted; a sketch of what I mean follows this list.)
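(To show what I mean by #5 without exposing #1 through #4, here is a minimal sketch: fit a simple linear surrogate to the black-box model’s own predictions and read off the coefficients. The models and data are placeholders, not anyone’s production system.)

```python
# Surrogate-model sketch: approximate a black-box classifier with an
# interpretable linear model trained on the black box's own predictions.
# Entirely illustrative; no real product is claimed to work this way.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not on the original labels.
surrogate = LogisticRegression(max_iter=1000).fit(X, black_box.predict(X))

agreement = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate matches the black box on {agreement:.0%} of inputs")
print("per-feature weights:", np.round(surrogate.coef_[0], 2))
```

Releasing that kind of reduced-form summary explains broad behavior without handing over the training data, the architecture, or the weights.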

Revealing even #4 would be somewhat useful to competitors, but not decisive. The original developers will be able to update and refine their model, while people with only #4 will not. The same goes for any of the other elements.

I suspect the main fear about revealing this, at least among for-profit companies, is that it opens them up to second-guessing. For example, what do you want to bet that the systems now being used to predict recidivism have bugs? Someone with enough expertise and $ might be able to make intelligent guesses about those bugs, although I don’t see how they could prove them.
Sure, such criticism would make companies more cautious, and cost them money. And big companies might be better able to hide behind layers of lawyers and obfuscation. But those hypothetical problems are quite a distance in the future. Society deserves to, and should, do more to figure out where these systems have problems. Let’s allow some experiments, and even some different laws in different jurisdictions, to go forward for a few years. Preventing this just means trusting the self-appointed experts to do what is in everyone else’s best interests. We know that works poorly!

450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” 

Disclosure in the United Kingdom has sparked a heated debate about the health impacts of an errant algorithm
By Robert N. Charette

Source: 450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” – IEEE Spectrum

It sounds like what we used to call a “bug” to me. I guess bugs are now promoted to “algorithm failures”. 

Nearly half a million elderly women in the United Kingdom missed mammography exams because of a scheduling error caused by one incorrect computer algorithm, and several hundred of those women may have died early as a result. Last week, the U.K. Health Minister Jeremy Hunt announced that an independent inquiry had been launched to determine how a “computer algorithm failure” stretching back to 2009 caused some 450,000 patients in England between the ages of 68 to 71 to not be invited for their final breast cancer screenings.

The errant algorithm was in the National Health Service’s (NHS) breast cancer screening scheduling software, and remained undiscovered for nine years.

“Tragically, there are likely to be some people in this group who would have been alive today if the failure had not happened,” Hunt went on to tell Parliament. He added that based on statistical modeling, the number who may have died prematurely as a result was estimated to be between 135 and 270 women.

Source: 450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” – IEEE Spectrum
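(Purely as an illustration of how small such a “failure” can be, here is a hypothetical eligibility check with a classic boundary mistake. I have no idea what the actual NHS code looked like; this is only the kind of bug that produces exactly this symptom.)

```python
# Hypothetical illustration only; not the actual NHS code.
# Suppose women are due a final screening invitation up to and including age 70.
def should_invite(age):
    return 50 <= age < 70    # BUG: "<" silently drops the final invitation at 70

def should_invite_fixed(age):
    return 50 <= age <= 70   # inclusive upper bound restores it

print(should_invite(70), should_invite_fixed(70))  # False True
```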

Car repossession: Big Data + AI tools are not value-neutral

Does recent technology inherently favor capitalists over workers?

There is a lot of concern about AI potentially causing massive unemployment. The question of whether “this time will be different” is still open. But another insidious effect is gaining speed: putting tools in the hands of large companies that make it more expensive and more oppressive to run into financial trouble. In essence, it is becoming harder to live on the edges of “The System.”

  • Cars with even one late payment can be spotted, and repossessed, faster. “Business has more than doubled since 2014….” This is during a period of ostensible economic growth.
  • “Even with the rising deployment of remote engine cutoffs and GPS locators in cars, repo agencies remain dominant. … Agents are finding repos they never would have a few years ago.”
  • “So much of America is just a heartbeat away from a repossession — even good people, decent people who aren’t deadbeats,” said Patrick Altes, a veteran agent in Daytona Beach, Fla. “It seems like a different environment than it’s ever been.”
  • “The company’s goal is to capture every plate in Ohio and use that information to reveal patterns. A plate shot outside an apartment at 5 a.m. tells you that’s probably where the driver spends the night, no matter their listed home address. So when a repo order comes in for a car, the agent already knows where to look.” (A sketch of this kind of inference appears after this list.)
  • Source: The surprising return of the repo man – The Washington Post
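(A minimal sketch of the kind of inference the article describes; the data, field names, and night-hour threshold are invented, since the real databases are proprietary.)

```python
# Hypothetical sketch: infer a likely overnight location from plate-scan sightings.
from collections import Counter
from datetime import datetime

sightings = [  # (plate, timestamp, location); made-up records
    ("ABC1234", "2018-11-02 05:05", "Oak St apartments"),
    ("ABC1234", "2018-11-05 04:48", "Oak St apartments"),
    ("ABC1234", "2018-11-07 13:20", "Riverside Mall"),
    ("ABC1234", "2018-11-09 05:12", "Oak St apartments"),
]

def likely_overnight_spot(plate, rows, night_hours=range(0, 6)):
    """Most frequent location where this plate turns up in the small hours."""
    night = [loc for p, ts, loc in rows
             if p == plate and datetime.strptime(ts, "%Y-%m-%d %H:%M").hour in night_hours]
    return Counter(night).most_common(1)[0][0] if night else None

print(likely_overnight_spot("ABC1234", sightings))  # Oak St apartments
```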
