Self-driving cars may eventually work together to create nearly real-time maps. But we’re nowhere close to that now.
Source: The Key to Autonomous Driving? An Impossibly Perfect Map – WSJ
Here is an argument for allowing companies to maintain a lot of secrecy about how their data-mining (AI) models work. The claim is that revealing this information would put companies at a competitive disadvantage. Sorry, that is not enough of a reason. And as far as I can tell, it is not actually true.
The first consideration when discussing transparency in AI should be data, the fuel that powers the algorithms. Because data is the foundation for all AI, it is valid to want to know where the data…
Here is my response.
Your questions are good ones. But you seem to think that explainability cannot be achieved except by giving away all the work that led to the AI system. That is a straw man. Take deep systems, for example. The IP includes:
1) The training set of data
2) The core architecture of the network (number of layers etc)
3) The training procedures over time, including all the testing and tuning that went on.
4) The resulting system (weights, filters, transforms, etc).
5) Higher-level "explanations," whatever those may be. (For me, these might be a reduced-form model that is approximately linear and can be interpreted.)
Revealing even #4 would be somewhat useful to competitors, but not decisive. The original developers will be able to update and refine their model, while people with only #4 will not. The same goes for any of the other elements.
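The reduced-form idea in #5 can be sketched in a few lines. This is a minimal, hypothetical illustration (the `black_box` function below is a stand-in I invented, not any company's actual model): probe the opaque system on sample inputs, then fit a linear approximation whose coefficients can be read off and interpreted.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Hypothetical stand-in for a proprietary model:
    # mostly linear, with a small interaction term.
    return 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * X[:, 0] * X[:, 1]

# Probe the model on sample inputs.
X = rng.normal(size=(1000, 2))
y = black_box(X)

# Least-squares fit of an interpretable surrogate: y ~ X @ w + b
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = coef[:-1], coef[-1]

# Each entry of w is the approximate marginal effect of one input,
# which is exactly the kind of explanation a regulator could inspect.
print(w, b)
```

Note that publishing such a surrogate reveals far less than #1 through #4: it describes average behavior without exposing the training data, architecture, or weights.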
I suspect the main fear about revealing this, at least among for-profit companies, is that it opens them up to second-guessing. For example, what do you want to bet that the systems now being used to predict recidivism have bugs? Someone with enough expertise and money might be able to make intelligent guesses about those bugs, although I don't see how they could prove them.
Sure, such criticism would make companies more cautious, and cost them money. And big companies might be better able to hide behind layers of lawyers and obfuscation. But those hypothetical problems are quite a distance in the future. Society deserves, and should do, more to figure out where these systems have problems. Let's allow some experiments, and even different laws in different jurisdictions, to go forward for a few years. To prevent this is just trusting the self-appointed experts to do what is in everyone else's best interests. We know how poorly that works!
Disclosure in the United Kingdom has sparked a heated debate about the health impacts of an errant algorithm
By Robert N. Charette
Source: 450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” – IEEE Spectrum
It sounds like what we used to call a "bug" to me. I guess bugs have now been promoted to "algorithm failures."
Nearly half a million elderly women in the United Kingdom missed mammography exams because of a scheduling error caused by one incorrect computer algorithm, and several hundred of those women may have died early as a result. Last week, the U.K. Health Minister Jeremy Hunt announced that an independent inquiry had been launched to determine how a “computer algorithm failure” stretching back to 2009 caused some 450,000 patients in England between the ages of 68 to 71 to not be invited for their final breast cancer screenings.
The errant algorithm was in the National Health Service's (NHS) breast cancer screening scheduling software, and remained undiscovered for nine years.
“Tragically, there are likely to be some people in this group who would have been alive today if the failure had not happened,” Hunt went on to tell Parliament. He added that based on statistical modeling, the number who may have died prematurely as a result was estimated to be between 135 and 270 women.
Source: 450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” – IEEE Spectrum
There is a lot of concern about AI potentially causing massive unemployment. The question of whether “this time will be different” is still open. But another insidious effect is gaining speed: putting tools in the hands of large companies that make it more expensive and more oppressive to run into financial trouble. In essence, harder to live on the edges of “The System.”
A nice graphical illustration of what happened when NYC subway rules were changed in seemingly small ways. The time/distance buffers that used to exist between consecutive trains shrank, to the point that a small "blip" in one train's schedule causes cascading delays in the trains behind it. TOM once more. (Thanks to Arpita Verghese.)
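The cascade mechanism is simple enough to sketch. In this toy model (my own simplification, with made-up numbers, not the MTA's actual rules), trains run on a fixed headway and each train must keep a minimum gap behind the one ahead; each train absorbs the scheduled buffer (headway minus minimum gap) before inheriting any remaining delay.

```python
def cascade(n_trains, headway, min_gap, initial_delay):
    """Per-train delays (seconds) after one perturbation to the first train."""
    delays = [initial_delay]
    for _ in range(1, n_trains):
        # Each following train inherits only the delay left over
        # after the scheduled buffer has been absorbed.
        inherited = delays[-1] - (headway - min_gap)
        delays.append(max(0.0, inherited))
    return delays

# Generous buffer (30 s): a 60 s blip dies out after two trains.
big_buffer = cascade(6, headway=120, min_gap=90, initial_delay=60)
print(big_buffer)    # [60, 30.0, 0.0, 0.0, 0.0, 0.0]

# Tight buffer (5 s): the same 60 s blip ripples down the whole line.
small_buffer = cascade(6, headway=120, min_gap=115, initial_delay=60)
print(small_buffer)  # [60, 55.0, 50.0, 45.0, 40.0, 35.0]
```

The point the graphic makes falls out of the arithmetic: shrinking the buffer does not change the first delay at all, it changes how far the delay propagates.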
My friend at NYU, Prof. Melissa Schilling (thanks, Oscar), and I have a running debate about Tesla. She emphasizes how smart and genuinely innovative Musk is. I emphasize that he seems to treat Tesla like just another R&D-driven company, even though it is making a very different kind of product. Melissa is quoted in this article:
Case in point: Tesla sent workers home, with no pay, for the production shutdown last week. My discussion is after the break.
During the pause, workers can choose to use vacation days or stay home without pay. This is the second such temporary shutdown in three months for a vehicle that’s already significantly behind schedule.
Source: Tesla Is Temporarily Shutting Down Model 3 Production. Again.
By now, Tesla’s manufacturing problems are completely predictable. See my explanation, after the break. At least Wall St. is starting to catch on.
Also in this article: Tesla’s gigafactory for batteries has very similar problems. That surprises me; I thought they had competent allies helping with batteries.
But one engineer who works there cautioned that the automated lines still can’t run at full capacity. “There’s no redundancy, so when one thing goes wrong, everything shuts down. And what’s really concerning are the quality issues.”
Source: Tesla employees say Gigafactory problems worse than known