Applications of Analytics

Moderating Discourse

Category: Deontic Analytics

Moderation is a difficult task, requiring continual judgement and interaction. Human moderation is expensive and requires training, but AI moderation faces challenges of its own. For example, St. John's professor Kate Klonick argues, "One of the things that's really, really hard — and has always been hard — is when people post bad content that's removable, but they post it in protest or to raise awareness. Generally, the biggest threat is going to be over-censorship rather than under-censorship." (Field & Lapowsky, 2020) Twitter found similar issues: "We want to be clear: While we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring." (Gadde & Derella, 2020)

During the Covid-19 outbreak, companies began relying more heavily on automated moderation. YouTube announced in a blog post, "We will temporarily start relying more on technology to help with some of the work normally done by reviewers" (YouTube, 2020). The technologies available for this include, for example, a neural net that identifies rage on a Twitch stream (Yan, 2020).
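The trade-off the platforms describe — automation handling the clear cases while humans supply context — can be sketched as a simple routing policy. The keyword weights and thresholds below are invented for illustration; real systems use trained classifiers rather than keyword lists.

```python
# Minimal sketch of threshold-based automated moderation (illustrative only).
# FLAGGED_TERMS and the thresholds are made up for this example.

FLAGGED_TERMS = {"idiot": 0.4, "hate": 0.5, "kill": 0.7}

def toxicity_score(text: str) -> float:
    """Crude score: sum of weights for flagged terms, capped at 1.0."""
    words = text.lower().split()
    return min(1.0, sum(FLAGGED_TERMS.get(w, 0.0) for w in words))

def moderate(text: str, remove_at: float = 0.8, review_at: float = 0.4) -> str:
    """Route a post: auto-remove, queue for human review, or allow.
    The middle band models the 'lack of context' problem: borderline
    posts go to humans instead of being removed automatically."""
    score = toxicity_score(text)
    if score >= remove_at:
        return "remove"
    if score >= review_at:
        return "human_review"
    return "allow"

print(moderate("have a nice day"))    # allow
print(moderate("I hate this idiot"))  # remove (score 0.9)
print(moderate("do not hate"))        # human_review (score 0.5)
```

The middle "human_review" band is where the over-censorship worry lives: widening it sends more borderline posts to people, narrowing it lets the machine decide more often.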

Over time, moderation will become more interactive. As chatbots evolve, they have the potential to intervene in and mediate human conversation, encouraging engagement and tempering over-reaction. Current chatbots (Sennaar, 2019) are far from this ideal. However, an early example of the approach is suggested by marketing for EdTech Foundry's Differ, "a class communication app that uses chatbots and artificial intelligence to increase student engagement, performance and retention," which is part of a larger research program (Nilsen, 2019).
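The mediating role described above can be sketched as a bot that watches a rolling window of messages and interjects when the exchange turns heated. The heat heuristic, word list, and interjection text are all invented for illustration; a system of the kind Differ's marketing describes would use trained models instead.

```python
# Hedged sketch of a mediating chatbot (illustrative only).
from collections import deque

HEATED = {"wrong", "stupid", "never", "always"}  # toy indicator words

class MediatorBot:
    def __init__(self, window: int = 3, threshold: int = 2):
        self.recent = deque(maxlen=window)  # rolling window of messages
        self.threshold = threshold

    def observe(self, message: str):
        """Return an interjection if the window looks heated, else None."""
        self.recent.append(message)
        heat = sum(
            1 for m in self.recent
            if any(w in m.lower().split() for w in HEATED)
        )
        if heat >= self.threshold:
            self.recent.clear()  # avoid re-triggering on the same exchange
            return "Let's pause: can each side restate the other's point?"
        return None

bot = MediatorBot()
print(bot.observe("I think option A is better"))       # None
print(bot.observe("you are wrong"))                    # None (heat 1)
print(bot.observe("no, you always miss the point"))    # interjection (heat 2)
```

The point of the sketch is the interaction pattern: the bot participates in the conversation rather than silently removing content.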

Additionally, moderation will focus on more than just text. For example, in 2019 Facebook announced Whole Post Integrity Embeddings, “which allows its AI systems to simultaneously analyze the entirety of a post — the images and the text — for signs of a violation.” (Field & Lapowsky, 2020)

Examples and Articles

A Moderator ChatBot for Civic Discourse
"The Stanford Online Deliberation Platform has so far been used in classroom settings and is being more thoroughly tested in a controlled experiment. And in the Spring of 2020, it will go live to a random sample of 200 people in Japan." Feb 6, 2020 | Katharine Miller


Content moderation, AI, and the question of scale
"AI seems like the perfect response to the growing challenges of content moderation on social media platforms: the immense scale of the data, the relentlessness of the violations, and the need for human judgments without wanting humans to have to make them."


Do you have another example of Moderating Discourse? Suggest it here
