
Racial Profiling

Category: When It Is Fundamentally Dubious

It is arguable that there is no ethical application of analytics that singles out specific races for special treatment. As one commenter on a BoingBoing article suggested, "Imagine a billboard that alternated between advertising Cabernet Sauvignon or Malt Liquor depending on the skin tone of the person looking at it." The article itself described an 'ethnicity-detection camera' that could be used to identify Uyghurs (Beschizza, 2019).

Analytics can also erroneously attribute to race outcomes that have other causes. For example, an analytics engine may find that "a city's crime data reflect the historical policing and surveillance in minority and low-income communities"; that is, the data may record where police chose to look rather than where crime actually occurred. In such cases the outcome may reflect the racism of police officers rather than any property of the citizens. In one study, "Researchers looked at over 10 years of Charlotte's data to find patterns of abuses. They found that the most significant predictor of inappropriate interactions were the officers themselves" (Arthur, 2016; Ekowo and Palmer, 2016). A ProPublica study of recidivism risk scores reached a comparable conclusion: "Blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend. It makes the opposite mistake among whites" (Angwin et al., 2016).
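The ProPublica finding is, in essence, a gap in false positive rates: among people who did not re-offend, a larger share of one group than the other was labeled high risk. A minimal sketch of that calculation, using made-up records rather than the actual COMPAS data (the group names and numbers below are purely hypothetical), might look like this:

    # Illustrative sketch only, not the actual ProPublica analysis:
    # compute the false positive rate per group, i.e. the share of people
    # who did NOT re-offend but were nonetheless labeled high risk.
    # All group names and records below are hypothetical.

    def false_positive_rate(records, group):
        """Share of non-reoffenders in `group` labeled high risk."""
        non_reoffenders = [r for r in records
                           if r["group"] == group and not r["reoffended"]]
        if not non_reoffenders:
            return 0.0
        flagged = sum(1 for r in non_reoffenders if r["high_risk"])
        return flagged / len(non_reoffenders)

    # Hypothetical scored records: group, risk label, actual outcome.
    records = [
        {"group": "A", "high_risk": True,  "reoffended": False},
        {"group": "A", "high_risk": True,  "reoffended": False},
        {"group": "A", "high_risk": False, "reoffended": False},
        {"group": "A", "high_risk": True,  "reoffended": True},
        {"group": "B", "high_risk": True,  "reoffended": False},
        {"group": "B", "high_risk": False, "reoffended": False},
        {"group": "B", "high_risk": False, "reoffended": False},
        {"group": "B", "high_risk": False, "reoffended": True},
    ]

    for g in ("A", "B"):
        print(f"group {g}: false positive rate = {false_positive_rate(records, g):.2f}")
    # Prints: group A: false positive rate = 0.67
    #         group B: false positive rate = 0.33

The point of the example is that a risk score can look reasonable in the aggregate while distributing its errors unevenly across groups, which is exactly the kind of pattern an audit of this sort surfaces.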

It is also arguable that using race as a label is itself ethically questionable: first, because it fosters and promotes racial discrimination; second, because the categories defined by race do not mark principled distinctions (for example, there is no principled distinction between 'black' and 'white', especially in a population that may combine elements of both); and third, because the usual categories of race (white, black, Hispanic, Asian) reflect a colonial perspective.

Examples and Articles

A Detroit community college professor is fighting Silicon Valley’s surveillance machine. People are listening.
"Far from academia’s elite institutions, Gilliard, 51, has emerged as an influential thinker on the relationship between trendy tech tools, privacy and race. From “digital redlining” to “luxury surveillance,” he has helped coin concepts that are reframing the debate around technology’s impacts and awakening recognition that seemingly apolitical products can harm marginalized groups. While some scholars confine their work to peer-reviewed journals, Gilliard posts prolifically on Twitter, wryly skewering consumer tech launches and flagging the latest example of what he sees as blinkered techno-optimism or surveillance creep. (Among his aphorisms: “Automating that racist thing is not going to make it less racist.”) It’s an irony of the world Silicon Valley has constructed that an otherwise obscure rhetoric and composition teacher with a Twitter habit could emerge as one of its sharpest foils. Among a growing chorus of critics taking on an industry that’s remolding the world in its image, Gilliard is not the most prominent or credentialed. Yet his outsider status is integral to a worldview that is finding an audience not only on social media but in the halls of academia, journalism and Washington." Direct Link


Racial Profiling Goes High Tech With Facial Recognition
"Biased facial recognition system disproportionately labels minority UCLA students and faculty as criminals" Willie Jones, IEEE Spectrum, 24 Feb 2020 Direct Link


