Graphic videos of animal abuse have circulated widely on Twitter in recent weeks, sparking outrage and renewed concern over the platform’s moderation practices.

One such video, in which a kitten appears to be placed in a blender and killed, has become so notorious that reactions to it have become their own genre of internet content.

Laura Clemens, 46, said her 11-year-old son came home from school in London two weeks ago and asked if she had seen the video.

“There’s something about a cat in a blender,” Clemens recalled her son saying.

Clemens said she went on Twitter and searched for "cat," and the search box suggested searching for "cat in a blender."

Clemens said she clicked on the suggested search term and was instantly shown a gruesome video of what appeared to be a kitten being killed inside a blender. For users who have not manually disabled autoplay, the video begins playing immediately. NBC News was able to replicate the same process and surface the video on Wednesday.

Clemens said she’s grateful her son asked her about the video instead of just going on Twitter and typing the word "cat" himself.

“I’m glad my son talked to me, but there must be a lot of parents whose kids just look it up,” she said.

The spread of the video, as well as its presence in suggested Twitter searches, is part of a worrying trend of animal cruelty videos that have flooded the platform since Elon Musk’s takeover, which brought mass layoffs and deep cuts to the company’s content moderation and safety teams.

Last weekend, gory videos of two violent events in Texas spread on Twitter, with some users saying the images had been inserted into the platform’s algorithmic "For You" feed.

The animal abuse videos appear to predate those videos. Several users have tweeted that they have seen the cat video, some trying to get Musk’s attention on the issue, with posts dating back to the beginning of May. Clemens said she flagged the video on May 3 to the Twitter support account and to Ella Irwin, Twitter’s vice president of trust and safety and one of Musk’s closest advisers.

“Young kids know this has been trending on their site. My little one hasn’t seen it but he knows it. It should not be an autocomplete suggestion,” Clemens wrote.

Neither Irwin nor the Twitter support account responded to the tweet, Clemens said.

Yoel Roth, Twitter’s former head of trust and safety, told NBC News that he believes the company likely rolled back a number of safeguards meant to stop these kinds of autocomplete issues.

Roth explained that Twitter’s autocomplete search results were known internally as "type-ahead search" and that the company had built a system to prevent illegal, illicit, and dangerous content from appearing as autocomplete suggestions.

“There is an extensive, well-constructed and maintained list of things that were excluded from type-ahead search, and much of it was built with wildcards and regular expressions,” Roth said.

Roth said there was a multi-step process to prevent videos of gore and death from appearing in autocomplete search suggestions. The process combined automated and human moderation, flagging animal cruelty and violent videos before they could surface automatically in search results.

“Type-ahead search really wasn’t easy to break. These are long-standing systems with multiple layers of redundancy,” Roth said. "If it just stops working, it almost defies probability."

Autocomplete suggestions in search bars are a common feature on many social media platforms and can surface disturbing content. On Thursday, when NBC News contacted the company for comment, searches for "dog" and "cat" in Twitter’s search box still autocompleted to terms associated with viral animal cruelty videos. The Twitter press account automatically responded with a poop emoji, the company’s standard response for the past month.

As of Friday afternoon, Twitter appeared to have disabled its autocomplete suggestions in its search bar.