Harris Zainul was quoted by Nikkei Asia on 11 October 2024
Moderator jobs affected as platform says 80% of content removals automated
By Norman Goh, Nikkei staff writer
KUALA LUMPUR — ByteDance, the parent company of social media platform TikTok, confirmed on Friday that it will be laying off hundreds of staff in Malaysia as more content moderation is handled by technology.
A spokesperson from TikTok in Malaysia told Nikkei Asia that the job cuts are part of an ongoing plan to “further strengthen the global operating model for content moderation.”
“We expect to invest $2 billion globally in trust and safety in 2024 alone and are continuing to improve the efficacy of our efforts, with 80% of violative content now removed by automated technologies,” the spokesperson said.
The job cuts were first reported by local business portal The Malaysian Reserve.
The company did not confirm the exact number of jobs affected, but sources familiar with the matter told Nikkei Asia that over 700 staff involved in moderating content on TikTok in Malaysia will be affected by the decision. In June, TikTok cut 450 jobs at its Indonesian unit after a deal to acquire a majority stake in local e-commerce company Tokopedia.
TikTok employs over 110,000 people across more than 30 countries and moderates content in more than 70 languages. It uses a combination of human reviewers and automated moderation systems based on machine learning models to monitor and deal with problematic posts.
In May, tech portal The Information reported that TikTok planned to lay off around 1,000 people working in global user operations, content and marketing.
According to government data, there were about 30 million active social media users in Malaysia in 2023 and 28.68 million accounts on TikTok.
In its biannual Government Removal Request Report for 2023, TikTok said Malaysia led Southeast Asia in the number of requests for content removal, with 1,862 cases in the second half of 2023.
Malaysia introduced licensing regulations for social media platforms in August as part of efforts to expand its powers of oversight. The licensing regime will come into effect in 2025 and is aimed at protecting online users amid a surge in scams, cyberbullying, misinformation and sexual crimes against children.
Human moderators play an important role in the larger content moderation process, as their decisions are fed into algorithms used to train AI models, according to Harris Zainul, deputy director of research at the Institute of Strategic and International Studies Malaysia, a local think tank.
Now, Harris said, TikTok’s automated system is at a level where the company is more comfortable scaling back the number of human moderators.
He said AI-based social media moderation will continue to improve, with “most technologies on a constant curve of improvement,” but added that concerns remain.
“Is that adequate, considering the kind of online harms that we are seeing today? Because if we expect high safety standards from other industries — for example, pharmaceuticals, energy or automobiles — why are we not expecting similar standards from social media platforms? Over the past decade, whatever happens on social media platforms can have real life consequences,” Harris said.
This article was first published in Nikkei Asia on 11 October 2024