Building a safer internet through content moderation 


Andy Lulham at Verifymy explains how content moderation technology calls for AI and humans to work together to protect children online

 

As of 2024, some 95 million photos and videos are shared on Instagram each day, and the equivalent of 500 hours of video is uploaded to YouTube every minute. Even this is just a small proportion of the estimated 328.77 million terabytes of data created online every single day.

 

While the majority may be positive and entertaining, this firehose also contains a growing volume of inappropriate, harmful and illegal content. From violence and self-harm to extremist content and, now, deepfakes, tackling this deluge of harmful material will be a defining challenge for platforms, regulators and society over the coming years.

 

Content moderation has therefore become vital to maintaining the integrity and safety of online platforms, particularly when it comes to safeguarding minors. Moderating content at this volume is clearly an enormous challenge. However, the emergence of artificial intelligence (AI) is transforming the content moderation landscape, enabling enhanced automation, accuracy and scalability.

 

AI: a helping hand 

AI has drastically transformed content moderation solutions by enabling the automated scanning of images, pre-recorded videos, live streams, and other types of content in an instant. It can identify issues such as underage activity in adult entertainment, nudity, sexual activity, extreme violence, self-harm, and hate symbols within user-generated content platforms, including social media.

 

This automated approach not only accelerates the moderation process but also provides scalability: a vital feature in an era when the sheer volume of online content makes exclusively human moderation impossible.

 

AI for content moderation is trained on huge volumes of "ground truth data" – archives of tagged images and videos, ranging from weapons to explicit content. The accuracy and efficacy of AI systems directly correlate to the quality and quantity of this data.

 

Once trained, AI can effectively detect various forms of harmful content, enabling content moderation, especially in live streaming scenarios, to be viable across diverse platforms with varying legal and community standards.
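To make this concrete, the sketch below shows how a trained model's output might be checked against per-platform policies. It is a minimal illustration only: the labels, platform names and threshold values are all hypothetical, not taken from any real moderation system.

```python
# Illustrative sketch only: the label set, platforms and thresholds are
# hypothetical stand-ins for whatever a real moderation stack would use.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # e.g. "nudity", "violence", "hate_symbol"
    score: float  # model confidence in [0, 1]

# Per-platform policy: the same model output is judged against different
# thresholds, reflecting varying legal and community standards.
PLATFORM_POLICIES = {
    "social_video":  {"nudity": 0.70, "violence": 0.80, "hate_symbol": 0.60},
    "adult_content": {"nudity": 1.01, "violence": 0.80, "hate_symbol": 0.60},
}   # a threshold above 1.0 means the category never triggers: nudity is
    # permitted on the adult platform, but violence is still blocked

def violates_policy(detections: list[Detection], platform: str) -> list[str]:
    """Return the policy categories that a piece of content breaches."""
    thresholds = PLATFORM_POLICIES[platform]
    return [d.label for d in detections
            if d.score >= thresholds.get(d.label, 1.01)]

# The same detections trip different rules on different platforms.
frame = [Detection("nudity", 0.92), Detection("violence", 0.15)]
print(violates_policy(frame, "social_video"))   # ['nudity']
print(violates_policy(frame, "adult_content"))  # []
```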

 

The human touch

Despite its significant advancements, AI is not perfect on its own. Specific nuances and contextual cues can confuse systems and generate inaccurate outcomes. For example, AI might be unable to differentiate between a kitchen knife used in a cooking video and a weapon used in an act of violence or distinguish a toy gun in a children’s commercial from an actual firearm. 

 

Although AI automation allows organisations to moderate at scale and reduce costs by eliminating the need for a large team of moderators, even the most advanced technology requires human judgement to accompany it. When AI flags content as potentially harmful or in violation of guidelines, human moderators can step in to review and make the final call.

 

This hybrid approach ensures that while AI extends the scope of content moderation and streamlines the process, humans retain the ultimate authority, especially in complex cases. 
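A common way to implement this hybrid division of labour is confidence-band triage: clear violations are actioned automatically, clear passes are published, and everything in between is queued for human review. The sketch below illustrates the idea; the threshold values are assumptions for the example, not a description of any particular vendor's pipeline.

```python
from enum import Enum

class Route(Enum):
    AUTO_REMOVE = "auto_remove"    # AI is confident the content violates policy
    HUMAN_REVIEW = "human_review"  # borderline: a moderator makes the final call
    AUTO_APPROVE = "auto_approve"  # AI is confident the content is safe

# Illustrative thresholds; real systems tune these per category
# against measured precision and recall.
REMOVE_ABOVE = 0.95
REVIEW_ABOVE = 0.40

def triage(violation_score: float) -> Route:
    """Route content based on the model's violation confidence."""
    if violation_score >= REMOVE_ABOVE:
        return Route.AUTO_REMOVE
    if violation_score >= REVIEW_ABOVE:
        return Route.HUMAN_REVIEW  # humans retain the ultimate authority
    return Route.AUTO_APPROVE

print(triage(0.98))  # Route.AUTO_REMOVE
print(triage(0.60))  # Route.HUMAN_REVIEW, e.g. kitchen knife vs. weapon
print(triage(0.05))  # Route.AUTO_APPROVE
```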

 

AI and humans working hand in hand  

Looking ahead, the sophistication of AI identification and verification techniques will continue to increase. This includes improving the accuracy of matching individuals featured in various types of content with their identity documents: a next step in ensuring consent and mitigating unauthorised content distribution.

 

Additionally, as AI systems continue to learn from new data, their accuracy and efficiency will keep improving, potentially reducing the need for human intervention as the technology evolves.

 

However, the human element will continue to be necessary, especially in the process of appeals and dispute resolution related to content moderation decisions. Humans not only provide the nuanced perspective and understanding that current AI technologies lack, but also serve as a check against potential algorithmic biases or errors. 

 

The successful implementation of AI in content moderation requires a strong commitment to continuous improvement and adherence to ethical standards. As AI systems become increasingly advanced, it is crucial to audit them regularly to mitigate potential biases, ensure fairness and safeguard user privacy.
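As one illustration of what such an audit might involve, the sketch below compares false-positive rates across hypothetical content cohorts, using records that in practice would come from human-reviewed moderation logs. The cohorts and figures are invented for the example; a persistent gap between groups would be a signal to investigate the model for bias.

```python
from collections import defaultdict

# Hypothetical audit records: (cohort, model_flagged, actually_violating).
# In practice these would come from human-reviewed moderation decisions.
audit_log = [
    ("cohort_a", True,  False), ("cohort_a", False, False),
    ("cohort_a", True,  True),
    ("cohort_b", True,  False), ("cohort_b", True,  False),
]

def false_positive_rates(records):
    """False-positive rate per cohort: wrongly flagged / all benign items."""
    flagged = defaultdict(int)
    benign = defaultdict(int)
    for cohort, model_flagged, violating in records:
        if not violating:
            benign[cohort] += 1
            if model_flagged:
                flagged[cohort] += 1
    return {c: flagged[c] / benign[c] for c in benign}

print(false_positive_rates(audit_log))
# {'cohort_a': 0.5, 'cohort_b': 1.0} -> a large gap warrants investigation
```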

 

Digital platforms must also invest in ongoing training for human moderators to effectively handle the nuanced cases flagged by AI for review.  

 

Equally importantly, given the psychologically taxing nature of content moderation work, platforms and solution providers have a responsibility to prioritise the mental health of their human moderators. This includes providing robust psychological support, wellness resources and strategies to limit prolonged exposure to disturbing content.

 

By adopting a proactive and responsible approach to AI-powered content moderation, online platforms can cultivate digital spaces that promote creativity, connection, and constructive dialogue while also protecting users from harm. 

 

A joined-up approach for a safer internet  

Ultimately, AI-powered content moderation solutions offer platforms a comprehensive toolkit to tackle the challenges of the digital age. This technology enables real-time monitoring and filtering of massive volumes of user-generated content (UGC), which not only helps platforms maintain a safe and compliant online environment but also allows them to scale their moderation efforts efficiently as they grow.

 

Looking ahead, the partnership between humans and AI will be crucial in navigating the complexities of digital content, ensuring that online spaces remain safe, inclusive, and respectful of diverse views and legal frameworks. 

 


 

Andy Lulham is COO at Verifymy 

 

Main image courtesy of iStockPhoto.com and fasphotographic
