According to Penn State researchers, social media users may place as much trust in artificial intelligence (AI) as in human editors when it comes to flagging hate speech and harmful content.
The researchers found that people put more faith in AI when they think about the accuracy and objectivity of machines, and less faith in it when they are reminded that machines cannot make subjective judgments.
S. Shyam Sundar, co-director of the Media Effects Research Laboratory and James P. Jimirro Professor of Media Effects at the Donald P. Bellisario College of Communications, said the findings could help developers build better AI-powered content curation systems capable of handling the enormous volume of information now being generated.
Sundar, who is also a member of Penn State’s Institute for Computational and Data Sciences, said there is an urgent need for content moderation on social media and, more broadly, online media. “We have news editors who act as gatekeepers in traditional media. However, because the gates are so open online, gatekeeping may not be feasible for humans, especially given the amount of information being produced,” he said. “As the industry moves more and more toward automated solutions, this study examines the differences between human and automated content moderators in terms of how people react to them.”
Each approach has advantages and disadvantages. According to Maria D. Molina, assistant professor of advertising and public relations at Michigan State University and the study’s first author, humans tend to judge more accurately whether content is harmful, such as when it is racist or could encourage self-harm. But the sheer volume of content being created and shared online is more than humans can handle.
AI editors can analyze content quickly, but people often distrust their ability to make accurate recommendations and worry that information could be censored.
“The question of whether artificial intelligence editors are limiting a person’s freedom of expression arises when we think about automated content moderation,” said Molina. “So there is a conflict between the idea that we need content moderation, because people are sharing so much problematic content, and the concern that AI won’t be able to do it effectively. Our ultimate goal is to figure out how to create AI content moderators that people can trust without limiting their ability to express themselves freely.”
Transparency and interactive transparency
Molina said one way to build a trustworthy moderation system is to combine humans and AI in the moderation process. She added that transparency, or signaling to users that a machine is involved in moderation, is one strategy for boosting user confidence in AI. However, the researchers found that “interactive transparency,” which lets users offer suggestions to the AI, appears to increase that trust even more.
To study transparency and interactive transparency, among other factors, the researchers recruited 676 participants to interact with a content classification system. Participants were randomly assigned to one of 18 experimental conditions, designed to test how the source of moderation (AI, human, or both) and transparency (regular, interactive, or no transparency) affect trust in AI content editors. The researchers tested classification decisions, that is, whether content was labeled “flagged” or “not flagged” for being harmful or hateful. The “harmful” test content dealt with suicidal ideation, while the “hateful” test content dealt with hate speech.
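The article does not spell out how the 18 conditions were composed. One plausible reading, assumed here purely for illustration, is a crossing of moderation source (3 levels), transparency (3 levels), and content type (2 levels). The sketch below shows random assignment under that assumption; the factor names and the seeded generator are illustrative, not taken from the study.

```python
# Hypothetical reconstruction of the condition grid and random assignment.
# The 3 x 3 x 2 crossing is an assumption for illustration only; the article
# states just that there were 18 conditions varying moderation source and
# transparency.
import itertools
import random

sources = ["AI", "human", "both"]
transparency_levels = ["none", "regular", "interactive"]
content_types = ["harmful (suicidal ideation)", "hateful (hate speech)"]

# Crossing the three factors yields 18 conditions (3 x 3 x 2).
conditions = list(itertools.product(sources, transparency_levels, content_types))
assert len(conditions) == 18

def assign_condition(rng: random.Random) -> tuple:
    """Randomly assign one participant to one of the 18 conditions."""
    return rng.choice(conditions)

rng = random.Random(0)  # seeded so the sketch is reproducible
assignments = [assign_condition(rng) for _ in range(676)]  # 676 participants
print(assignments[0])
```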
Among other findings, the researchers showed that whether users trust an AI content moderator depends on whether the positive attributes of machines, such as their accuracy and objectivity, or their negative attributes, such as their inability to make subjective judgments about nuances in human language, are brought to mind.
Giving users the chance to weigh in on whether online information is harmful may also increase their trust. The researchers found that study participants who added their own terms to an AI-selected list of words used to classify posts trusted the AI editor just as much as they trusted a human editor.
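The study does not publish its classifier, but the mechanism described above, an AI-proposed word list that users can extend with their own terms before posts are labeled, can be sketched roughly as follows. The function name, word lists, and example posts are hypothetical and stand in for whatever the researchers’ actual system used.

```python
# Illustrative sketch of "interactive transparency": the AI proposes a word
# list used to flag posts, and the user may add terms of their own before
# classification. All names and word lists here are hypothetical.

AI_SELECTED_WORDS = {"worthless", "no reason to live", "kill yourself"}

def flag_post(post: str, ai_words: set, user_words: set) -> str:
    """Label a post 'flagged' or 'not flagged' using the combined word list."""
    combined = {w.lower() for w in ai_words | user_words}
    text = post.lower()
    return "flagged" if any(word in text for word in combined) else "not flagged"

# A participant reviews the AI's list and contributes an extra term of their own.
user_added = {"end it all"}

print(flag_post("I feel like there's no reason to live", AI_SELECTED_WORDS, user_added))
# -> flagged
print(flag_post("Great game last night!", AI_SELECTED_WORDS, user_added))
# -> not flagged
```

The design point is that the user’s contribution is visible in the final decision rule, which, per the study, appears to make the automated editor feel as trustworthy as a human one.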
Ethics issues
According to Sundar, relieving humans of the task of reviewing content does more than give workers a break from a tedious chore. Human editors, he said, are exposed for hours on end to violent and hateful images and content.
“There is an ethical need for automated content moderation,” said Sundar, who is also the director of Penn State’s Center for Socially Responsible Artificial Intelligence. “Human content moderators, who are performing a public service, should not have to be exposed to harmful content day in and day out.”
Molina said future research could examine how to help people not only trust AI but also understand it, and that interactive transparency may be key to that understanding.
According to Molina, what matters is not only getting people to have faith in these systems, but also involving them so that they actually understand AI. How can interactive transparency and other methods be used to improve people’s understanding of AI? How can AI best be presented so that it elicits the right balance of appreciation for machine capability and skepticism about its weaknesses? These, she said, are questions worth investigating.