TikTok disclosed the figures in its Q1 2025 Community Guidelines Enforcement Report, attributing the takedowns to its continued push to keep its digital environment “safe and trustworthy”.

The report showed that 92.1 per cent of the removed content had not been viewed at the time of takedown, with several posts flagged and deleted within just 24 hours of being uploaded.

The company credited its swift response to a combination of automated moderation systems and trained human moderators.

“This approach is vital in mitigating the damaging effects of misinformation, hate speech, and violent material on the platform,” the company said.

It further reported that its global systems are identifying and removing harmful material more efficiently than ever before.

“With a proactive detection rate now at 99 per cent globally, TikTok is more efficient than ever at addressing harmful content before users encounter it,” TikTok added.

The clampdown did not stop at video content. During the same period, 43,000 user accounts in Kenya were also banned for violating TikTok’s guidelines.

The platform’s LIVE feature also saw increased enforcement, with TikTok confirming that 19 million live sessions were terminated globally during the quarter.

“This shows how effective TikTok's prioritisation of moderation accuracy has been, as the number of appeals remains steady amid the increase in automated moderation,” the company said.

Although TikTok LIVE is designed to help users connect and build communities in real time, the company has tightened its LIVE Monetisation Guidelines to more clearly define what content cannot be monetised.

As part of broader measures aimed at protecting vulnerable users, especially minors, TikTok announced a partnership with Childline Kenya.

The collaboration is intended to offer expert assistance to young users who encounter or report distressing content related to suicide, self-harm, hate, or harassment.

Kenyans using the app have been urged to take an active role in community safety by reporting videos, comments, or accounts they believe breach the platform’s policies.

Reports can be submitted through the TikTok Help Centre.

The enforcement report underlines TikTok’s increasingly aggressive stance on content control, part of a wider industry trend where tech companies are under growing pressure to combat harmful online material quickly and decisively.