Twitter accounts offering to trade or sell child sexual abuse material under thinly veiled terms and hashtags have remained online for months, even after CEO Elon Musk said he would combat child exploitation on the platform.

Musk declared that removing child exploitation was “priority 1” in a November 20 tweet. He also criticized Twitter’s former leaders, saying they had done little to address child sexual exploitation and that he was intent on turning things around.

But since that statement, at least dozens of accounts have continued to collectively post hundreds of tweets using terms, abbreviations and hashtags that indicate the sale of what Twitter calls child sexual exploitation material, based on a count of tweets from a single day. The signs and signals are well known among experts and law enforcement officials working to stop the spread of such material.

The tweets reviewed by NBC News offer to sell or trade content commonly referred to as child pornography or child sexual abuse material (CSAM). The tweets themselves do not contain CSAM, and NBC News did not view any CSAM in the course of reporting this article.

Some tweets and accounts have been active for months and predate the Musk acquisition. They remained live on the platform as of Friday morning.

During Musk’s tenure, many more tweets reviewed by NBC News were published over a period of weeks. Some users who tweeted CSAM offers appeared to delete the tweets shortly after posting them, apparently to avoid detection, and then posted similar offers from the same accounts. Some accounts offering CSAM said Twitter had shut down their previous accounts but that they had been able to create new ones.

Twitter did not respond to a request for comment. According to Twitter rules posted in October 2020, “Twitter has zero tolerance for any material that depicts or promotes child sexual exploitation, one of the most serious violations of the Twitter Rules. This may include media, text, illustrated or computer-generated images.”

It’s unclear how many people remain at Twitter to address CSAM after Musk enacted several rounds of layoffs and issued an ultimatum that led to a wave of resignations. Musk has brought in outside help, and the company said in December that its suspensions of accounts for child sexual exploitation had increased considerably. A representative for the National Center for Missing & Exploited Children, the US clearinghouse for reports of child sexual exploitation, said the number of CSAM reports detected and flagged by the company had not changed since Musk’s acquisition.

Twitter also dissolved the company’s Trust and Safety Council, which included nonprofit organizations focused on tackling CSAM.

Twitter’s annual report to the Securities and Exchange Commission said the company employed more than 7,500 people as of the end of 2021. According to internal records obtained by NBC News, Twitter’s overall headcount had dwindled to about 1,340 active employees as of early January, with about 20 people working in the company’s Trust and Safety organization. That is less than half of Trust and Safety’s previous headcount.

A former employee who worked on child safety issues, a specialization that fell under the larger Trust & Safety group, said that many of the product managers and engineers on the team that enforced rules against CSAM and related violations before Musk’s purchase had left the company. The employee asked to remain anonymous because they had signed a confidentiality agreement. It is not known precisely how many people Musk has now assigned to those tasks.

Since Musk took over the platform, Twitter has cut the number of engineers at the company in half, according to internal records and people familiar with the situation.

CSAM has been a perpetual problem for social media platforms. And while some technology has been developed to automate the detection and removal of CSAM and related content, the problem remains one that requires human intervention as it develops and changes, according to Victoria Baines, an expert in child exploitation crimes who has worked with the UK Home Office, the National Crime Agency, Europol, the European Cybercrime Centre and Facebook.

“If you fire most of the trust and safety staff, the humans who understand these things, and rely entirely on algorithms and automated means of detection and reporting, you will only be scratching the surface of the CSAM phenomenon on Twitter,” Baines said. “We really, really need those humans to pick up on the cues of what doesn’t look and sound right.”

The accounts seen by NBC News promoting the sale of CSAM follow a familiar pattern. NBC News found tweets posted since October promoting CSAM trading that are still active, apparently having evaded Twitter’s detection, as well as hashtags that have become rallying points where users share information on how to connect on other internet platforms to trade, buy and sell the exploitation material.

In tweets seen by NBC News, users claiming to sell CSAM were able to avoid moderation with terms, hashtags and thinly disguised codes that can be easily deciphered.

Some of the tweets are brazen, and their intent was clearly identifiable. (NBC News is not publishing details about those tweets and hashtags so as not to further expand their reach.) While the common abbreviation “CP,” a ubiquitous online shorthand for “child pornography,” isn’t searchable on Twitter, one user who had posted 20 tweets promoting their material used another searchable hashtag, writing “Sell the entire CP collection” in a tweet posted on December 28. The tweet remained live for a week until the account appeared to be suspended following NBC News’ contact with Twitter. A search on Friday found similar tweets still on the platform. Other users employed keywords associated with children, replacing certain letters with punctuation marks such as asterisks, and instructed users to send direct messages to their accounts. Some accounts even included prices in account bios and tweets.

None of the accounts reviewed by NBC News posted explicit or nude photos or videos of abuse on Twitter, but some posted images of fully clothed or semi-nude youth along with messages offering to sell “leaked” or “taunted” images.

Many of the accounts using Twitter to promote the harmful content mentioned using virtual storage accounts on MEGA, a New Zealand-based encrypted file-sharing service. The accounts posted videos of themselves scrolling through MEGA, displaying folder names that suggested child abuse and incest.

In a statement, MEGA Chief Executive Stephen Hall said the company has a “zero tolerance” policy toward CSAM on the service. “If a public link is reported to contain CSAM, we immediately disable the link, permanently close the user’s account and provide full details to the New Zealand authorities and any relevant international authorities,” Hall said. “We encourage other platforms to provide us with any signals they are aware of so we can take action on Mega. Similarly, we provide others with the information we receive.”

The overlap of CSAM activity on MEGA and Twitter has led to at least one prosecution in the US.

A June 2022 Department of Justice press release announcing the sentencing of a person convicted of “transporting and possessing thousands of images depicting child sexual abuse” described how the person used Twitter.

“In late 2019, as part of an ongoing investigation, officers identified a Twitter user who posted two MEGA links to child pornography,” the press release reads. The release said the individual “admitted to viewing child pornography online and provided investigators with his MEGA account information. The account was later found to contain thousands of files containing child pornography.”

Almost all of the tweets seen by NBC News that advertised or promoted CSAM used hashtags referencing MEGA or a similar service, allowing users to search for and locate the tweets. Even though the hashtags have been active for months, they remain searchable on the platform.

The problem has been widespread enough to draw the attention of some Twitter users. In 25 tweets, users tagged Musk along with at least one of the top hashtags to alert him to the content. The first tweet to tag Musk’s username alongside the hashtag read: “@elonmusk I doubt you’ll see this but it has come to my attention that [this] hashtag has quite a few accounts asking/selling cp. I was going to report them all but there are too many, even more in the replies. Just a heads up.”

Historically, Twitter has cracked down on some similar hashtags, such as one related to the cloud storage service Dropbox, which now appears to be restricted in Twitter search. In a statement, a Dropbox representative said: “Child sexual exploitation and abuse has no place on Dropbox and violates our Terms of Service and Acceptable Use Policy. Dropbox uses a variety of tools, including industry-standard automatic detection technology and human review, to find potentially infringing content and act accordingly.”

The automated systems used by many social media platforms were originally created to detect images and prevent their continued distribution online.

Facebook has used a technology called PhotoDNA, along with human content moderators, for a decade to detect and prevent the distribution of CSAM.
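PhotoDNA itself is proprietary, but the general approach it represents, matching a perceptual fingerprint of an uploaded image against a database of fingerprints of previously identified abuse imagery, can be sketched roughly as follows. This is a hypothetical illustration, not PhotoDNA’s actual algorithm: it uses the open-source imagehash library as a stand-in, and the stored hash and distance threshold are invented for the example.

# A minimal, hypothetical sketch of hash-based image matching, the general
# approach behind tools like PhotoDNA (which is proprietary). It uses the
# open-source "imagehash" library as a stand-in; the stored hash and the
# distance threshold below are invented for illustration only.
from PIL import Image
import imagehash

# In a real system this would be a large database of hashes of previously
# identified abuse images, supplied by clearinghouses such as NCMEC.
KNOWN_HASHES = [imagehash.hex_to_hash("d1c1b1a191817161")]

MAX_DISTANCE = 5  # how many differing bits still count as a match (assumed)

def matches_known_image(path: str) -> bool:
    """Return True if the image's perceptual hash is close to a known hash."""
    uploaded_hash = imagehash.phash(Image.open(path))
    # Perceptual hashes tolerate small edits (resizing, re-encoding, crops),
    # so matching uses Hamming distance rather than exact equality.
    return any(uploaded_hash - known <= MAX_DISTANCE for known in KNOWN_HASHES)

The point of the fingerprinting step is that re-encoded or lightly edited copies of an already-identified image still match, which is what lets platforms block re-uploads automatically.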

Automated technologies have also been developed at various companies to scan for and detect text that could be associated with CSAM. WhatsApp, which is owned by Meta, says it uses machine learning to scan the text of new profiles and groups for such language.

The former Twitter employee said the company had been working to improve automated technology to block problematic hashtags. But they stressed that human involvement would still be needed to flag new hashtags and to enforce the rules.

“Once you know the hashtags you’re looking for, hashtag detection for moderation is an automated process. Identifying hashtags that are potentially against policy requires human input,” they told NBC News. “In general, machines today aren’t taught to automatically infer whether a hashtag that hasn’t been seen before is possibly in violation or is being used by people searching for or sharing CSAM. It’s possible, but it’s usually faster to use expert judgment to add a hashtag that’s being misused to detection tools than to wait for a model to learn it.”
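The division of labor the former employee describes, automatic matching against a list of known hashtags plus human review to grow that list, can be illustrated with a rough sketch. The blocklist entries, the co-occurrence heuristic and the function below are hypothetical and do not reflect Twitter’s actual systems.

import re

# Hypothetical blocklist of hashtags already identified by human reviewers.
# The entries, names and heuristic here are invented for illustration.
BLOCKED_HASHTAGS = {"#exampleblockedtag1", "#exampleblockedtag2"}

HASHTAG_PATTERN = re.compile(r"#\w+")

def moderate(tweet_text: str) -> tuple[bool, set[str]]:
    """Return (take_action, hashtags_to_queue_for_human_review)."""
    tags = {t.lower() for t in HASHTAG_PATTERN.findall(tweet_text)}
    hits = tags & BLOCKED_HASHTAGS
    # Matching known hashtags is fully automatic once the list exists.
    take_action = bool(hits)
    # Unknown hashtags that co-occur with blocked ones are only candidates;
    # deciding whether they actually signal abuse still takes a human.
    review_queue = (tags - BLOCKED_HASHTAGS) if hits else set()
    return take_action, review_queue

The lookup itself is trivial to automate; the hard part is deciding which never-before-seen hashtags belong on the list in the first place, which is the gap the former employee and Baines say requires people.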

Just as important, Baines and the former employee said, is the fact that text-based detection could overcorrect or raise potential free-speech issues. MEGA, for example, is used for many types of content besides CSAM, so the question of how to moderate hashtags that refer to the service is not a straightforward one.

“You need humans, is the short answer,” Baines said. “And I don’t know if there’s anyone left doing these things.”