

National Algorithm Safety Board
'The risks from algorithms are big enough that it’s time for the government to get involved.'
This post was originally published on June 18, 2017.
Katherine Ellen Foley of Quartz reports that Ben Shneiderman, a computer scientist from the University of Maryland, “thinks the risks from algorithms are big enough that it’s time for the government to get involved.”
In the lecture at the Alan Turing Institute in London shown above, Shneiderman points out that there are numerous examples of how algorithms can fail.
Those failures are typically not life-and-death affairs, as in the case of Beauty.ai.
BoingBoing reported on the joint Microsoft/Nvidia/Youth Laboratories project that “applied machine learning techniques to rank 600,000 user-submitted selfies for their beauty, and picked 44 finalists: six Asians, one dark-skinned person, and thirty-seven white people.”
The European-based project disproportionately sourced its selfies from other white Europeans, a glaring case of sample bias.
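To see how a skewed training set can produce a skewed shortlist, here is a minimal sketch in Python. It is not the Beauty.ai code; the single feature, the group labels, and the scoring rule are all invented for illustration. The mechanism, though, is the same: a model that learns what “typical” looks like from one group will score everyone else poorly.

```python
# Minimal sketch of sample bias (hypothetical data, not the Beauty.ai system):
# a model calibrated on a skewed training set rewards closeness to that set.
import random

random.seed(0)

def make_population(n, share_group_a):
    """Synthetic 'photos': one numeric feature per person plus a group label."""
    people = []
    for _ in range(n):
        group = "A" if random.random() < share_group_a else "B"
        # Hypothetical feature whose distribution differs by group.
        feature = random.gauss(0.8, 0.05) if group == "A" else random.gauss(0.4, 0.05)
        people.append((group, feature))
    return people

# Training set is ~95% group A -- the sample bias.
train = make_population(10_000, share_group_a=0.95)
train_mean = sum(f for _, f in train) / len(train)

def score(feature):
    # "Beauty" here is just closeness to what the model saw during training.
    return -abs(feature - train_mean)

# Contest entrants are a 50/50 mix, yet the finalists skew heavily toward group A.
entrants = make_population(600, share_group_a=0.5)
finalists = sorted(entrants, key=lambda p: score(p[1]), reverse=True)[:44]
print(sum(1 for g, _ in finalists if g == "A"), "of 44 finalists come from group A")
```

Running this toy version, nearly every finalist comes from the overrepresented group, even though the entrants were evenly split.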
In other cases, however, the consequences are much more dire.
Criminal sentencing algorithms, for example, risk recommending harsher sentences for African Americans unless safeguards are built into their development.
In 2016, the National Highway Traffic Safety Administration investigated the performance of Autopilot in a fatal crash of a Tesla Model S. The example Shneiderman cited in his lecture was the autopilot failure that brought down Air France flight 447 in 2009.
Additionally, thanks to machine learning techniques, algorithms are increasingly writing themselves, essentially creating black boxes that no human can decipher.
Since we are depending on algorithms in more and more aspects of our everyday lives, Shneiderman has called for the creation of a “National Algorithm Safety Board” that would function much as the National Transportation Safety Board does for transportation accidents.
Such a board would provide both ongoing and retroactive oversight of high-stakes algorithms, investigate instances where algorithms go awry, and serve as an independent third party to review and disclose just how these programs work.
It is not hard to imagine how algorithm failures could damage reputations, but they could also create massive societal disruptions (as in the case of fake news) and even prove fatal.