
Content Moderation In Social Networks: Should AI Be Controlled?

Content Moderation: Unfortunately, social media is also a marketplace for problematic content, with negative consequences ranging from discrimination to illegal activity. AI-controlled content moderation is supposed to prevent that – so far with only moderate success. A research group has now proposed that the AI used on social media be checked regularly.

Some hate them, others cannot do without them: social networks. But no matter how you feel about Facebook, TikTok & Co., the world exchanges ideas on social media platforms – from dance instructions and fitness tips to political discussions. The dimensions are enormous: in 2018, a new user joined social media every eleven seconds. Today, 3.8 billion people worldwide are active on at least one social network.

The number of posts is correspondingly high – and human moderators can hardly keep up with reviewing them. That is the dark side of social media: offensive, discriminatory, or simply illegal content is widespread.

Automated Content Moderation Is Designed To Filter Out Unwanted Content

Many networks use artificial intelligence (AI) to ensure that standards and laws are observed on their platforms. Self-learning algorithms are trained to recognize dangerous or prohibited content and remove it accordingly.

But automated content moderation is also problematic. It can handle large amounts of content, but the decisions the algorithms make are not always fair and are rarely transparent to users or legislators.

Why was a post deleted? What was problematic about the photo? How does an AI decide which content to filter out? These are questions about which the social media platforms keep the public in the dark – sometimes even deliberately. Many large platforms fear that revealing their algorithms would give away trade secrets.

The Initiative Calls For More Transparency And Openness

This is why the “Ethics of Digitalization” initiative has come together to make suggestions for fairer and more transparent automated content moderation. Behind the project is the Global Network of Internet and Society Research Centres (NoC), under the patronage of German Federal President Frank-Walter Steinmeier. The Mercator Foundation finances the project.

The initiative has published its first suggestions on how AI-controlled moderation on social media can be made more open. In general, the researchers believe that AI systems in social networks have not yet worked particularly reliably and can themselves be problematic.

Content Moderation With AI Is Not As Intelligent As Many Believe

Depending on the platform, automated content moderation may delete far too much or far too little content, because AI systems still have significant weaknesses when it comes to correctly classifying human posts. According to the initiative, the algorithms often only pay attention to specific keywords and do not necessarily recognize the context of a post. This can lead, for example, to an algorithm flagging a slang word or regionalism as an insult even though it was not meant as one – as the sketch below illustrates.
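To make the problem concrete, here is a minimal, hypothetical Python sketch of a purely keyword-based filter (the word list and function name are invented for illustration and do not describe any platform’s actual system). Because it only matches terms and ignores context, it removes a harmless, self-deprecating post just as readily as a genuine insult:

```python
# Hypothetical, minimal keyword-based moderation sketch (not any platform's real system).
# It shows why matching terms without context produces false positives.

BLOCKED_TERMS = {"idiot", "scum"}  # invented example word list

def keyword_filter(post: str) -> bool:
    """Return True if the post should be removed (naive keyword match)."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BLOCKED_TERMS)

# A genuine insult and a harmless, self-deprecating use of the same word:
print(keyword_filter("You are an idiot."))                              # True - intended hit
print(keyword_filter("I felt like an idiot for forgetting my keys."))   # True - false positive, context ignored
```

Real moderation systems are far more sophisticated than this, but the underlying weakness the researchers describe is the same: without understanding context, the system cannot tell an attack from a quotation, a joke, or a regional turn of phrase.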

In addition, users can deliberately avoid flagged terms and still post illegal or offensive content. It is also difficult for the algorithms to interpret the content of memes or GIFs, so moderation there remains patchy.

At the same time, it is sometimes simply difficult to draw the line between hate speech and freedom of expression. In its first three briefings, the research initiative has put together proposals that would make platform operators more responsible for how they design their automated content moderation.

More Transparency For Users

Anyone who spends much time on social media has probably experienced it: posts sometimes disappear for no apparent reason. Who removed them – and why? Was it a human moderator or an automated process?

Affected users often do not even know what the problem is. That has to change, say the researchers in their briefing. When a post is removed, users should be informed about the reasons and told whether it was an automated deletion. The better users understand the algorithms, the easier it is for them to adhere to the platform’s standards.
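What such a notification could contain is sketched below as a hypothetical data structure. The field names are assumptions for illustration, not a proposal from the briefing: the removed post, the rule that was applied, whether the decision was automated, and where to appeal.

```python
# Hypothetical takedown notice a platform could send to an affected user.
# Field names are illustrative assumptions, not part of the initiative's briefing.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TakedownNotice:
    post_id: str
    removed_at: datetime
    rule_violated: str   # which community standard or law was applied
    automated: bool      # True if an algorithm made the decision
    appeal_url: str      # where the user can object to the decision

notice = TakedownNotice(
    post_id="12345",
    removed_at=datetime(2021, 6, 1, 12, 0),
    rule_violated="Hate speech policy, section 3",
    automated=True,
    appeal_url="https://example.com/appeals/12345",
)
print(f"Automated removal: {notice.automated}; reason: {notice.rule_violated}")
```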

Users Must Be Able To Complain

Given the high error rate of algorithms when moderating content, there must be a quick and, above all, practical contact point where users can submit queries or complaints. Some social networks do offer a contact option, but responses are often slow, or users get no answer at all. The researchers note that this, too, is not in the interests of transparency.

Companies Should Be More Open

The researchers also criticise the fact that the mechanisms behind the algorithms are very opaque. Some organisations, such as Microsoft or the city of Amsterdam, share their data with the public in open data programs, but such exchanges have so far been voluntary. Accordingly, few companies openly disclose the criteria according to which their algorithms moderate content.

At the same time, many companies refrain from giving scientists any insight into how their algorithms work. The quarterly transparency reports are often too general, and there is a lack of common standards, so the data are not necessarily comparable.

Check Algorithms From Outside

For the researchers, it is therefore clear: the algorithms not only have to be more transparent, they should also be checked regularly by an independent, scientific body. Similar to data protection audits under the GDPR, audits of automated content moderation should also be possible. This could be done, for example, by an external international examination commission, which would ensure that companies comply with the law when moderating content and that they have not (unintentionally) programmed prejudices into their AI systems.

In their view, it is possible to give scientists and authorities access to the algorithms without revealing trade secrets. Companies could then no longer hide behind the supposed uncontrollability of AI but would have to take responsibility for how their algorithms function.

Such an examination committee would, of course, have to be commissioned and financed independently of the companies and would need to bring the relevant technical know-how with it. Since very few companies would voluntarily submit to such audits, they would have to be required by law. The researchers hope this could make automated content moderation more transparent, easier to understand, and more ethically responsible.
