Harris Zainul was quoted by BenarNews on 11 October 2024

Media and policy analysts questioned whether artificial intelligence can moderate content as effectively as human staff.

By Iman Muttaqin Yusof

Chinese-owned social media platform TikTok said it laid off nearly 500 employees in Malaysia who were tasked with deleting inappropriate content and would replace them with cheaper artificial intelligence tools, but analysts questioned whether machines could do a better job.

The layoffs by TikTok owner ByteDance come after the Malaysian government pressed it and other social media platforms to enhance systems and safeguards to moderate content in response to a rise in online abuse, misinformation and other harmful content.

Last year, Malaysia singled out ByteDance as not abiding by the country’s laws. The Southeast Asian nation has now drawn up strict regulations that take effect on Jan. 1, 2025. Among them is a mandate that social media firms must obtain annual operating licenses.

The layoffs, announced via internal emails on Wednesday, primarily targeted employees in TikTok’s Trust and Safety Regional Operations who were responsible for content moderation, quality analysis and team management.

“They’ve been training AI [artificial intelligence] systems to take over content moderation tasks for a while now. We used to tag videos that violated content policies, and the AI would learn from that,” a ByteDance employee, who was not authorized to speak publicly, told BenarNews.

“Now they’ve decided AI can handle it better – both cost-wise and in terms of efficiency.”

In a statement to BenarNews on Friday, a TikTok spokesperson said the changes were part of an effort to improve efficiency.

“We’re making these changes as part of our ongoing efforts to further strengthen our global operating model for content moderation,” the unnamed spokesperson said.

“We expect to invest U.S. $2 billion globally in trust and safety in 2024 alone and are continuing to improve the efficacy of our efforts, with 80% of violative content now removed by automated technologies.”

The job of content moderators is to review user-generated posts, including texts, images and videos, to determine if they should be deleted, restricted or remain unchanged, based on platform policies. In May, TikTok said it had more than 40,000 people working as content moderators across the globe.

Some of the Malaysian workers said they moderated content not only for Southeast Asia but also for other regions.

Earlier this year, TikTok reported that Malaysia topped the list of countries asking it to remove content deemed offensive in 2023. Its report noted that it had received 2,002 government requests covering more than 6,000 pieces of content, and that its removal rate was about 90%.

‘Not about performance’

One of the affected employees said the layoffs were sudden and shocking.

“Late on Wednesday, we received an email saying our roles were impacted. They said they will minimize staff, and most tasks will now be outsourced to external partners,” said the employee who asked not to be identified, citing fear of reprisals. “It’s not about performance – even team leaders were hit.”

The source said the workers losing their jobs would receive compensation based on years of service, but those terms were not released.

ByteDance, like other global tech firms, faces increasing global regulatory scrutiny.

Earlier this year, Malaysia's internet regulator, the Malaysian Communications and Multimedia Commission (MCMC), reported an increase in harmful social media content.

In August, the government reiterated that the licensing regulations for social media platforms would take effect at the beginning of 2025. The regulations are intended to expand oversight and protect users from scams, cyberbullying, misinformation and sexual crimes against children.

“The Malaysian government remains steadfast in implementing a regulatory framework to ensure a safer internet for the people of Malaysia, especially for children and families,” Communications Minister Fahmi Fadzil said at the time.

In the United States in April, President Joe Biden signed a measure into law that would ban TikTok from app stores unless ByteDance divested the platform's U.S. business, Radio Free Asia, a news service affiliated with BenarNews, reported.

The Associated Press news agency reported that TikTok had argued in court last month that an American law potentially banning the platform in a few months was unconstitutional, while the U.S. Justice Department said it was needed to address national security risks posed by the app.

China’s Ministry of Commerce has said it would oppose a forced sale of TikTok.

Effectiveness in question

Despite TikTok’s move toward AI-based content moderation, analysts remain skeptical about its effectiveness in fully replacing human moderators.

Policy analyst Harris Zainul, deputy director of research at the Institute of Strategic & International Studies Malaysia, said that while AI could improve efficiency, it may not be as effective as human oversight.

“AI can be more efficient than human moderators, especially at scale as these platforms do with user-generated content, but efficiency is no guarantee for effectiveness,” Harris told BenarNews.

“There are still questions about the accuracy of these algorithms, and unfortunately, there are no third-party assessments of the content moderation systems these platforms implement.”

Benjamin Loh, a senior lecturer in media studies at Taylor’s University in Malaysia, pointed out that human moderation, while more expensive, is necessary for platforms operating in Malaysia and other multilingual and diverse countries.

“At this point, AI is definitely not reliable enough, but I understand the rationale behind it, as human moderation is far more expensive,” Loh told BenarNews.
