Key Takeaways:
- Hundreds of UK online safety workers at TikTok have signed agreements to leave the company, despite the firm claiming the cuts were "still proposals only".
- Over 400 online safety workers have agreed to leave the social media company, with only five left in consultation.
- TikTok plans to increase the role of AI in its moderation, while maintaining some human safety workers, but whistleblowers have raised concerns that the AI is not ready to handle the nuances of online safety.
- The company has been accused of obscuring the reality of the job cuts from MPs and has been asked to provide evidence that its safety rates will not worsen after the cuts.
- TikTok has been criticized for offshoring jobs to agencies in other countries and for using AI as a "fig leaf" to justify job cuts.
Introduction to the Issue
TikTok’s recent announcement of mass layoffs across its Trust and Safety teams has sparked controversy and concern among online safety workers and MPs. Despite the company’s claims that the cuts were "still proposals only", hundreds of UK online safety workers have already signed agreements to leave. This has raised questions about the impact of the cuts on online safety and about the role of AI in moderation.
The Reality of the Job Cuts
According to whistleblowers, over 400 online safety workers have agreed to leave the social media company, with only five left in consultation. The workers were given a deadline of 31 October to sign mutual termination agreements, and those who signed by that date were offered a better deal. Despite the company’s insistence that the cuts were still proposals, the departing workers were asked to hand in their laptops and had their access to work systems revoked. They were put on gardening leave until 30 December, and many have expressed concerns about the impact of the cuts on online safety.
Concerns About Online Safety
The whistleblowers have raised concerns that the cuts will put users at risk, particularly children and teenagers. They argue that AI is not ready to handle the nuances of online safety, and that human moderators are essential to identifying and removing harmful content. One whistleblower, Lucy, said: "There are a lot of nuances in the language. AI cannot understand all the nuances. AI cannot differentiate an ironic comment versus a real threat or bullying, or a lot of things that have to do with user safety, mainly of children and teenagers." Another whistleblower, Anna, said: "People are getting new ideas and new trends are coming. AI cannot get this. Even now, with the things that it’s supposed to be ready to do, I don’t think it’s ready."
TikTok’s Response
TikTok has responded to the concerns by saying that it will increase the role of AI in its moderation while maintaining some human safety workers. The company’s director of public policy and government affairs for northern Europe, Ali Law, said: "Our focus is on making sure the platform is as safe as possible. And we will make deployments of the most advanced technology in order to achieve that, working with the many thousands of trust and safety professionals that we will have at TikTok around the world on an ongoing basis." However, the company has been accused of obscuring the reality of the job cuts from MPs and has been asked to provide evidence that its safety rates will not worsen after the cuts.
The Role of AI in Moderation
TikTok’s use of AI in moderation has been a subject of controversy. The company has said that it will use a combination of technology and human teams to keep its users safe, and that over 85% of the content removed for violating its rules is identified and taken down by automated technologies. The whistleblowers, however, maintain that the technology cannot yet handle the nuances that human moderators catch. John Chadfield, national officer for the Communication Workers’ Union, said: "AI is a fantastic fig leaf. It’s a fig leaf for greed. In TikTok’s case, there’s a fundamental wish to not be an employer of a significant amount of staff."
Offshoring Jobs
TikTok has also been criticized for offshoring work. The company has been accused of using third-party agencies to hire moderators in countries such as Portugal, and of advertising equivalent roles abroad. This has raised concerns about the impact of the job cuts on UK workers and about the potential risks to online safety. The Communication Workers’ Union has said the offshoring reflects the company’s "fundamental wish" not to employ a significant number of staff directly.
Conclusion
The controversy surrounding TikTok’s job cuts and its use of AI in moderation has raised important questions about the impact of these changes on online safety. While the company has claimed that the cuts were "still proposals only", hundreds of UK online safety workers have already signed agreements to leave. Whistleblowers warn that the cuts will put users, particularly children and teenagers, at risk, and that AI is not ready to handle the nuances of online safety. As the company continues to expand its use of AI in moderation, it is essential that it prioritizes online safety and transparency, and provides evidence that its safety rates will not worsen after the cuts.