Social media companies have cut hundreds of content moderation jobs during the current wave of tech layoffs, stoking fears among industry workers and online safety advocates that the major platforms are less able to rein in the spread of abuse than they were a few months ago.

Tech companies have announced more than 101,000 job cuts this year alone, on top of nearly 160,000 over the course of 2022, according to the tracker Layoffs.fyi. Among the wide range of job functions affected by those reductions are “trust and safety” teams, the units within major platform operators and at the contractor companies they hire that enforce content policies and counter hate speech and misinformation.

Earlier this month, it was reported that Alphabet had cut Jigsaw, a Google unit that builds content moderation tools and describes itself as tracking “threats to open societies,” such as civilian surveillance, by at least a third in recent weeks. Meta’s main content moderation subcontractor in Africa, Sama, said in January that it was cutting 200 employees as it moved away from content review services. Mass layoffs at Twitter in November affected many employees tasked with curbing banned content, such as hate speech and targeted harassment, and the company dissolved its Trust and Safety Council the following month.

Postings on Indeed with “trust and safety” in their job titles were down 70% last month from January 2022 among employers across all industries, the job board told NBC News. While tech recruiting in particular has slipped across the board as the industry pulls back from its pandemic hiring spree, advocates said the global need for content moderation remains acute.

“Markets go up and down, but the need for trust and safety practices is constant or, if anything, increases over time,” said Charlotte Willner, executive director of the Trust & Safety Professional Association, a global organization for workers who develop and enforce the digital platforms’ policies on online behavior.

A Twitter employee who still works in the company’s trust and safety operations, and who asked not to be named for fear of retaliation, described feeling worried and overwhelmed since the department’s cutbacks last fall.

“We were already understaffed globally. The United States had much more staff than regions outside the United States,” the employee said. “In places like India, which is really plagued by complicated religious and ethnic divisions, hateful conduct and potentially violent conduct has really increased. Fewer people means less work is being done in many different spaces.”

Twitter accounts offering to trade or sell material depicting child sexual abuse remained on the platform for months after CEO Elon Musk vowed in November to crack down on child exploitation, NBC News reported in January. “We definitely know we still have work to do in this space, and we certainly think we’ve been improving rapidly,” Twitter said at the time in response to the findings.

A representative for Alphabet had no comment. Twitter did not respond to requests for comment.

A Meta spokesperson said the company “respect[s] Sama’s decision to exit the content review services it provides to social media platforms. We are working with our partners during this transition to ensure there is no impact on our ability to review content.” Meta has more than 40,000 people “working on safety and security,” including 15,000 content reviewers, the spokesperson said.

Concerns about the reductions in trust and safety coincide with growing interest in Washington in tightening regulation of Big Tech on multiple fronts.

In his State of the Union address on Tuesday, President Biden urged Congress to “pass bipartisan legislation to strengthen antitrust enforcement and prevent large online platforms from giving their own products an unfair advantage,” as well as to “impose stricter limits on the personal data companies collect on all of us.” Biden and lawmakers from both parties have also signaled their openness to reforming Section 230, a measure that has long shielded tech companies from liability for speech and activity on their platforms.

“Several governments are seeking to force big tech companies and social media platforms [to become more] responsible for ‘harmful’ content,” said Alan Woodward, a cybersecurity expert and professor at the University of Surrey in the U.K.

In addition to putting tech companies at greater risk of regulation, any rollback of content moderation “should concern everyone,” he said. “This is not just about removing inappropriate child abuse material, but covering subtle areas of misinformation that we know are aimed at influencing our democracy.”