
Safety

Category: Social and Cultural Issues

Is AI safe? That may seem like an odd question, but it comes to the fore in the case of automated vehicles, and as analytics and AI are used in more and more systems - everything from construction to mechanics to avionics - the question becomes increasingly relevant.

Beyond the question of whether we can trust AI is the broader question of whether producers of AI and analytics-based systems are actually concerned about safety. For example, in the U.S. the National Transportation Safety Board (NTSB) said that Uber's "inadequate safety culture" contributed to a fatal collision between an Uber automated test vehicle and a pedestrian, noting that "the vehicle's factory-installed forward collision warning and automatic emergency braking systems were deactivated during the operation of the automated system" (NTSB, 2019).

In general, there is a concern about the technology industry's disregard for the potential impact and consequences of its work. The impact on safety could be direct, as in the Uber case, or indirect, as in the case of misleading content (Metz and Blumenthal, 2019) that could, say, lead people into dangerous patterns of behaviour, such as failing to vaccinate (Hoffman, 2019), or violent behaviour, such as vigilante attacks on innocent civilians in India (McLaughlin, 2018).

Analytics can be hacked in ways that are difficult to detect. For example, by making subtle alterations to an image of a panda, "engineers were able to fool the world's top image classification systems into labeling the animal as a gibbon with 99% certainty, despite the fact that these alterations are utterly indiscernible to the human eye... The same technique was later used to fool the neural networks that guide autonomous vehicles into misclassifying a stop sign as a merge sign, also with high certainty" (Danzig, 2020). Even more concerning, in 2018 researchers used a 3D printer "to create a turtle that, regardless of the angle from which it was viewed by leading object recognition systems, was always classified as a rifle" (Ibid.).
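
The attack described above is an adversarial example: an input nudged just enough to change a model's output while looking unchanged to a human. The sketch below illustrates the general idea with the fast gradient sign method (FGSM), one common way such perturbations are generated; the quoted study does not name its exact technique, and the tiny linear "classifier" here is a hypothetical stand-in for the real systems involved. It assumes PyTorch is available.

    # A minimal FGSM sketch: perturb each input value slightly in the
    # direction that increases the classifier's loss.
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Hypothetical stand-in for an image classifier: one linear layer
    # over a flattened 8x8 "image", with 3 output classes.
    model = torch.nn.Linear(64, 3)

    image = torch.rand(1, 64)       # a made-up input "image"
    true_label = torch.tensor([0])  # its assumed correct class

    # Compute the loss gradient with respect to the input, not the weights.
    image.requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()

    # Step each pixel by a tiny epsilon in the sign of the gradient, so the
    # change is imperceptible, then clamp back to the valid pixel range.
    epsilon = 0.05
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

    print("original prediction:   ", model(image).argmax().item())
    print("adversarial prediction:", model(adversarial).argmax().item())

Against this untrained toy model the label may or may not flip, but against a real trained network, perturbations of roughly this size can reliably change the prediction with high confidence, which is what makes such attacks hard to detect.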

Examples and Articles

A Detroit community college professor is fighting Silicon Valley’s surveillance machine. People are listening.
"Far from academia’s elite institutions, Gilliard, 51, has emerged as an influential thinker on the relationship between trendy tech tools, privacy and race. From “digital redlining” to “luxury surveillance,” he has helped coin concepts that are reframing the debate around technology’s impacts and awakening recognition that seemingly apolitical products can harm marginalized groups. While some scholars confine their work to peer-reviewed journals, Gilliard posts prolifically on Twitter, wryly skewering consumer tech launches and flagging the latest example of what he sees as blinkered techno-optimism or surveillance creep. (Among his aphorisms: “Automating that racist thing is not going to make it less racist.”) It’s an irony of the world Silicon Valley has constructed that an otherwise obscure rhetoric and composition teacher with a Twitter habit could emerge as one of its sharpest foils. Among a growing chorus of critics taking on an industry that’s remolding the world in its image, Gilliard is not the most prominent or credentialed. Yet his outsider status is integral to a worldview that is finding an audience not only on social media but in the halls of academia, journalism and Washington." Direct Link


Ensure AI safety before worrying about the singularity
"With AI research and development progressing at an unprecedented rate, artificial superintelligence seems closer than projected by most. It is more imperative to secure AI safety now as any later might be too late." Direct Link


