Moderators on the Front Lines of Internet Security
Digital first responders screen malicious content online so we don’t have to.
It’s no secret that digital predators are lurking online in record numbers, exposing others to harmful language, images, videos and actions. With 300 hours of user-generated content uploaded to the Internet every minute, protecting unsuspecting users has become a major challenge. According to Variety, user-generated content accounts for 39% of all time spent with media. So how can companies protect online spaces and maintain brand integrity with so much independently generated content?

Enter resilient human moderators (also called digital first responders) who readily accept the difficult task of ensuring the safety of our digital experiences.
One might ask, what exactly does a content moderator do? To answer that question, let’s start from the beginning.
What is content moderation?
Although the term moderation is often misunderstood, its central purpose is clear. In the context of online content, moderation is the act of preventing extreme or malicious behavior, such as offensive language, exposure to graphic images or videos, and user fraud or exploitation.
There are six common types of content moderation (see the sketch after this list):
- No moderation: no content control or intervention, leaving bad actors free to harm others
- Pre-moderation: content is reviewed against predetermined guidelines before it goes live
- Post-moderation: content is published as soon as it is posted and removed if deemed inappropriate
- Reactive moderation: content is reviewed only if other users report it
- Automated moderation: content is filtered and removed using AI-powered automation
- Distributed moderation: inappropriate content is removed based on the votes of many community members
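To make the distinctions concrete, here is a minimal, purely illustrative Python sketch of these six approaches; the names below are hypothetical and not tied to any real moderation platform.

```python
from enum import Enum, auto

class ModerationStrategy(Enum):
    """Hypothetical labels for the six approaches listed above."""
    NONE = auto()         # no review or intervention at all
    PRE = auto()          # content is reviewed before it goes live
    POST = auto()         # content goes live immediately, removed later if flagged
    REACTIVE = auto()     # content is reviewed only after users report it
    AUTOMATED = auto()    # AI filters and removes content without human review
    DISTRIBUTED = auto()  # community votes decide what is removed

def publishes_before_review(strategy: ModerationStrategy) -> bool:
    """Pre-moderation is the only approach that holds content back
    until it has been checked against the guidelines."""
    return strategy is not ModerationStrategy.PRE
```

In practice, platforms tend to combine several of these strategies rather than relying on a single one.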
Why is content moderation important for companies?
Malicious and illegal behavior by bad actors puts companies at significant risk in the following ways:
- Loss of credibility and brand reputation
- Exposing harmful content to vulnerable audiences, such as children
- Failure to protect customers from fraudulent activity
- Loss of customers to competitors who can offer more secure experiences
- Enabling fake or fraudulent accounts
The importance of content moderation goes beyond business protection, however. Managing and removing sensitive and objectionable content is important for every age group.
As many third-party trust and safety experts can attest, mitigating the broadest range of risks requires a multi-pronged approach. Content moderators should use both preventative and proactive measures to maximize user safety and protect brand trust. In today’s highly politically and socially charged online environment, a wait-and-see “no moderation” approach is no longer an option.
“The virtue of justice consists in moderation regulated by wisdom.” – Aristotle
Why are human content moderators so critical?
Many types of content moderation involve human intervention at some point. However, reactive moderation and distributed moderation are not ideal approaches, because malicious content is only addressed after it has been exposed to users. Post-moderation offers an alternative approach in which AI-powered algorithms monitor content for certain risk factors and then alert a human moderator to check whether flagged posts, images or videos are actually harmful and should be removed. Thanks to machine learning, the accuracy of these algorithms improves over time.
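As a rough sketch of how that post-moderation loop might look in code (the threshold, the toy scoring function, and all names below are hypothetical stand-ins, not any vendor’s actual system):

```python
from dataclasses import dataclass

# Hypothetical threshold; a real system would tune this against its own error costs.
REVIEW_THRESHOLD = 0.5

@dataclass
class Post:
    post_id: str
    text: str

def risk_score(post: Post) -> float:
    """Toy stand-in for a machine-learning classifier that estimates
    how likely a post is to violate policy (0.0 = safe, 1.0 = harmful)."""
    flagged_terms = {"scam", "graphic"}  # placeholder signal only
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)

def queue_for_human_review(posts: list[Post]) -> list[Post]:
    """Post-moderation triage: content stays live, but high-risk items
    are routed to a human moderator for the final keep-or-remove decision."""
    return [p for p in posts if risk_score(p) >= REVIEW_THRESHOLD]

# Example: only the risky post ends up in the moderators' queue.
posts = [Post("1", "Check out my vacation photos"), Post("2", "This graphic video...")]
print([p.post_id for p in queue_for_human_review(posts)])  # -> ['2']
```

The decisions moderators make in that queue can then be fed back as training labels, which is how the accuracy improvements described above accumulate over time.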
While it would be ideal to eliminate the need for human content moderators, given the nature of the content they are exposed to (including child sexual abuse material, graphic violence, and other harmful online behavior), it is unlikely that this will ever be possible. Human understanding, comprehension, interpretation and empathy simply cannot be duplicated by artificial means. These human qualities are essential for maintaining integrity and authenticity in communication. In fact, 90% of consumers say authenticity is important when deciding which brands they love and support (up from 86% in 2017).
While the digital age has given us the advanced, intelligent tools (such as automation and artificial intelligence) needed to prevent or mitigate the lion’s share of today’s risks, human content moderators are still needed to act as intermediaries, knowingly putting themselves at risk to protect both users and brands.
Making the digital world a safer place
While the role of content moderator makes the digital world a safer place for others, it exposes moderators themselves to disturbing content. They are essentially digital first responders, protecting innocent, unsuspecting users from emotionally disturbing content, especially more vulnerable users such as children.
Some providers of trust and safety services believe that a more thoughtful, user-centered way to approach moderation is to treat the issue as a parent would when protecting their child. That mindset can (and perhaps should) be a foundation for all brands, and it certainly motivates brave moderators around the world to keep fighting today’s online evils.
Next time you’re scrolling through your social media feed with carefree abandon, take a moment to think about more than just the content you’re seeing; consider the unwanted content you never have to see, and silently thank the frontline moderators for the personal sacrifices they make every day.
This content was produced by Teleperformance. It was not written by the editors of MIT Technology Review.