
Tech layoffs shrink ‘trust and safety’ teams, raising fears of a backslide in efforts to curb online abuse

“Fewer people means less work is being done in a lot of different spaces,” said one of Twitter’s remaining content moderation staffers.
The Twitter headquarters in San Francisco, on Dec. 8, 2022. Jeff Chiu / AP file

Social media companies have slashed hundreds of content moderation jobs during the ongoing wave of tech layoffs, stoking fears among industry workers and online safety advocates that major platforms are less capable of curbing abuse than they were just months ago.

Tech companies have announced more than 101,000 job cuts this year alone, on top of the nearly 160,000 over the course of 2022, according to tracker Layoffs.fyi. Among the wide range of job functions affected by those reductions are “trust and safety” teams — the units within major platform operators and at the contracting firms they hire that enforce content policies and counter hate speech and disinformation.

Earlier this month, Alphabet reportedly cut at least a third of the staff at Jigsaw, a Google unit that builds content moderation tools and describes itself as tracking “threats to open societies,” such as civilian surveillance. Meta’s main subcontractor for content moderation in Africa said in January that it was cutting 200 employees as it shifted away from content review services. In November, Twitter’s mass layoffs affected many staffers charged with curbing prohibited content like hate speech and targeted harassment, and the company disbanded its Trust and Safety Council the following month.

Postings on Indeed with “trust and safety” in their job titles were down 70% last month from January 2022 among employers in all sectors, the job board told NBC News. And within the tech sector, ZipRecruiter said job postings on its platform related to “people safety” outside of cybersecurity roles fell by roughly half between October and January.

While tech recruiting has pulled back across the board as the industry contracts from its pandemic hiring spree, advocates said the worldwide need for content moderation remains acute.

“The markets are going up and down, but the need for trust and safety practices is constant or, if anything, increases over time,” said Charlotte Willner, executive director of the Trust & Safety Professional Association, a global organization for workers who develop and enforce digital platforms’ policies around online behavior.

A Twitter employee who still works on the company’s trust and safety operations, and who asked not to be identified for fear of retribution, described feeling worried and overwhelmed since the department’s reductions last fall.

“We were already underrepresented globally. The U.S. had much more staffing than outside the U.S.,” the employee said. “In places like India, which are really fraught with complicated religious and ethnic divisions, that hateful conduct and potentially violent conduct has really increased. Fewer people means less work is being done in a lot of different spaces.”

Twitter accounts offering to trade or sell material featuring child sexual abuse remained on the platform for months after CEO Elon Musk vowed in November to crack down on child exploitation, NBC News reported in January. “We definitely know we still have work to do in the space, and certainly believe we have been improving rapidly,” Twitter said at the time in response to the findings.

Twitter didn’t respond to requests for comment. A spokesperson for Alphabet didn’t comment on Jigsaw.

A Meta spokesperson said the company “respect[s] Sama’s decision to exit the content review services it provides to social media platforms. We are working with our partners during this transition to ensure there’s no impact on our ability to review content.” Meta has more than 40,000 people “working on safety and security,” including 15,000 content reviewers, the spokesperson said.

Concerns about trust and safety reductions coincide with growing interest in Washington in tightening regulation of Big Tech on multiple fronts.

In his State of the Union address on Tuesday, President Biden urged Congress to “pass bipartisan legislation to strengthen antitrust enforcement and prevent big online platforms from giving their own products an unfair advantage,” and to “impose stricter limits on the personal data the companies collect on all of us.” Biden and lawmakers in both parties have also signaled openness to reforming Section 230, a measure that has long shielded tech companies from liability for the speech and activity on their platforms.

“Various governments are seeking to force large tech companies and social media platforms [to become more] responsible for ‘harmful’ content,” said Alan Woodward, a cybersecurity expert and professor at the University of Surrey in the U.K.

In addition to putting tech firms at greater risk of regulation, any backsliding on content moderation “should worry everyone,” he said. “This is not just about weeding out inappropriate child abuse material but covers subtle areas of misinformation that we know are aimed at influencing our democracy.”