TikTok has announced the removal of over 2.1 million videos in Nigeria during the second quarter of 2024 for breaching its community guidelines.
This was revealed in the platform’s Community Guidelines Enforcement Report released on Tuesday.
According to the report, 99.1 percent of the videos were taken down proactively—before any users had reported them—and 90.7 percent of these removals occurred within 24 hours of their upload.
“Key findings show that 99.1 percent of these videos were proactively removed before users reported them, with 90.7 percent taken down within 24 hours. These figures highlight TikTok’s commitment to staying ahead of harmful content, ensuring a safer platform for Nigerian users,” the report stated.
The affected videos represented less than 1 percent of all content uploaded to TikTok by Nigerian users during the reporting period, a small share of violations set against the massive volume of content shared on the platform daily.
TikTok’s aggressive approach to content moderation is consistent with its global efforts. In June 2024 alone, the company removed over 178 million videos worldwide. Of these, 144 million were removed via automated detection systems designed to identify content that violates community guidelines without the need for human intervention.
Globally, TikTok reported a proactive detection rate of 98.2 percent, reflecting its continued investment in systems for identifying harmful or inappropriate content. As its moderation practices grow more efficient, the platform now addresses most harmful content before it reaches a broad audience.
“With a proactive detection rate now at 98.2 percent globally, TikTok is more efficient than ever at addressing harmful content before users encounter it,” the short-form mobile video platform noted in the report.
TikTok assured users that it remains committed to building a safer online experience, saying it would continue to invest heavily in content moderation technology, including more sophisticated tools to identify and understand potential risks in real time.