Zijiao Chen can read your mind, with a little help from powerful artificial intelligence and an fMRI machine.

Chen, a doctoral student at the National University of Singapore, is part of a team of researchers who have shown they can decode human brain scans to determine what a person is picturing in their mind, according to a paper published in November.

Chen's team, made up of researchers from the National University of Singapore, the Chinese University of Hong Kong and Stanford University, did this using brain scans of participants as they looked at more than 1,000 images (a red fire truck, a gray building, a giraffe eating leaves) while inside a functional magnetic resonance imaging, or fMRI, machine, which recorded the resulting brain signals over time. The researchers then fed those signals into an AI model, training it to associate certain brain patterns with certain images.
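In broad strokes, that training step amounts to fitting a mapping from voxel activity to a numerical description of each viewed image. The sketch below illustrates the idea with a simple ridge regression on synthetic data; the array sizes, the penalty term and the use of image embeddings are illustrative assumptions, not the team's published method.

```python
# Minimal sketch of the training idea: learn a linear map from fMRI voxel
# patterns to image-embedding vectors. All sizes and data are stand-ins.
import numpy as np

n_scans, n_voxels, embed_dim = 1000, 4500, 512  # hypothetical dimensions

rng = np.random.default_rng(0)
X = rng.standard_normal((n_scans, n_voxels))   # one row of voxel responses per viewed image
Y = rng.standard_normal((n_scans, embed_dim))  # embeddings of the images that were shown

# Closed-form ridge regression: W = (X^T X + lambda * I)^(-1) X^T Y
lam = 1e3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

# Decoding a new scan is then a single matrix product.
new_scan = rng.standard_normal(n_voxels)
predicted_embedding = new_scan @ W  # handed to an image generator downstream
```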

Later, when the subjects were shown new images in the fMRI machine, the system detected the participant's brain activity, generated an abbreviated description of what it believed that activity corresponded to, and used an AI image generator to produce its best-guess facsimile of the image the participant had seen.
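A rough way to picture the "abbreviated description" step: score the brain-derived vector against a pool of candidate descriptions and keep the closest one. The snippet below is a toy version of that retrieval idea; the captions, the embeddings and the similarity measure are all assumptions for illustration, not the team's pipeline.

```python
# Toy version of picking an abbreviated description for a decoded scan:
# choose the candidate caption whose embedding best matches the predicted one.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
captions = ["a red fire truck", "a gray building", "a giraffe eating leaves"]
caption_vecs = rng.standard_normal((len(captions), 512))  # stand-ins for real text embeddings

predicted = rng.standard_normal(512)  # brain-derived vector from the trained decoder
best = max(range(len(captions)), key=lambda i: cosine(predicted, caption_vecs[i]))
print(captions[best])  # the description passed on to the image generator
```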

The results are surprising and dreamlike. An image of a house and a driveway resulted in a similarly colored amalgamation of a bedroom and a living room. An ornate stone tower shown to a study participant generated images of a similar tower, with windows placed at unrealistic angles. A bear turned into a strange, furry, dog-like creature.

The resulting generated image matched the attributes (color, shape, etc.) and semantic meaning of the original image approximately 84% of the time.
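The paper defines the exact protocol; one common way such accuracy figures are computed in decoding studies, sketched below as an assumption rather than as the team's metric, is a two-way test: a reconstruction counts as a match if its embedding sits closer to the true image than to a randomly drawn distractor.

```python
# Two-way identification test on synthetic embeddings: a reconstruction
# "matches" if it is nearer the true image than a random distractor.
import numpy as np

rng = np.random.default_rng(2)

def is_match(reconstruction, truth, distractor):
    return np.linalg.norm(reconstruction - truth) < np.linalg.norm(reconstruction - distractor)

truths = rng.standard_normal((200, 512))
reconstructions = truths + 0.8 * rng.standard_normal((200, 512))  # noisy copies of the truth
distractors = rng.standard_normal((200, 512))

hits = [is_match(r, t, d) for r, t, d in zip(reconstructions, truths, distractors)]
print(f"match rate: {np.mean(hits):.0%}")
```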

Researchers are working to convert brain activity into images in an AI brain-scanning study at the National University of Singapore. (NBC News)

While the experiment requires training the model on each individual participant's brain activity over the course of about 20 hours before it can deduce images from the fMRI data, the researchers believe that in just a decade the technology could be used on anyone, anywhere.

“It could help disabled patients to recover what they see, what they think,” Chen said. In the ideal case, Chen added, humans won't even have to use cell phones to communicate. “We can just think.”

The results involved only a handful of study subjects, but the findings suggest that the team’s non-invasive brain recordings could be a first step toward more accurately and efficiently decoding images from inside the brain.

Researchers have been working on technology to decode brain activity for more than a decade. And many AI researchers are currently working on various neuroscience-related applications of AI, including similar projects by Meta and the University of Texas at Austin to decode speech and language.

University of California, Berkeley scientist Jack Gallant began studying brain decoding more than a decade ago using a different algorithm. He said the pace at which this technology develops depends not only on the model used to decode the brain, in this case AI, but also on brain imaging devices and the amount of data available to researchers. Both the development of fMRI machines and the collection of data pose hurdles for anyone studying brain decoding.

“It’s the same as going to Xerox PARC in the 1970s and saying, ‘Look, we’re all going to have PCs on our desks,’” Gallant said.

While he could see brain decoding used in the medical field within the next decade, he said it is still several decades away from use with the general public.

Still, it’s the latest in a boom in AI technology that has captured the public’s imagination. AI-generated media, from images and voices to Shakespearean sonnets and term papers, have demonstrated some of the leaps the technology has made in recent years, especially since so-called transformer models made it possible to feed vast amounts of data into AI systems so that they can learn patterns quickly.

The team at the National University of Singapore used image-generating artificial intelligence software called Stable Diffusion, which has been adopted around the world to produce stylized images of cats, friends, spaceships and just about anything else a person could ask for.

The software allows Associate Professor Helen Zhao and her colleagues to summarize an image using a vocabulary of color, shape and other variables, and have Stable Diffusion produce an image almost instantly.
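For readers curious what driving Stable Diffusion looks like in practice, the snippet below uses Hugging Face's diffusers library with an ordinary text prompt standing in for the brain-derived summary; the checkpoint name and prompt are illustrative, and the team's actual conditioning interface is not shown here.

```python
# Text-to-image with Stable Diffusion via the diffusers library. A plain
# text prompt stands in here for a decoded, brain-derived description.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint name
    torch_dtype=torch.float16,
).to("cuda")

prompt = "an ornate stone tower with arched windows"  # stand-in for a decoded summary
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("decoded_guess.png")
```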

The images the system produces are thematically faithful to the original image, but not a photographic match, perhaps because each person’s perception of reality is different, she said.

“When you look at the grass, maybe I will think of the mountains and then you will think of the flowers and other people will think of the river,” Zhao said.

Human imagination, she explained, can cause differences in the image output. But the differences can also come from the AI, which can return different images from the same set of inputs.

The AI model feeds on visual “tokens” to produce images from a person’s brain signals. So instead of a vocabulary of words, it is given a vocabulary of colors and shapes that come together to create the image.
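As a loose illustration of what a token vocabulary of colors and shapes means in practice (an assumption about the general technique, not the team's model), the sketch below quantizes brain-derived feature vectors to their nearest entries in a learned codebook, the way VQ-style image generators consume discrete visual tokens instead of words.

```python
# Toy visual-token lookup: snap each brain-derived feature vector to the
# nearest entry in a codebook, yielding a discrete "sentence" of tokens.
import numpy as np

rng = np.random.default_rng(3)
codebook = rng.standard_normal((1024, 64))   # 1024 visual tokens, 64 dims each

features = rng.standard_normal((16, 64))     # 16 feature vectors decoded from a scan
distances = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
token_ids = distances.argmin(axis=1)
print(token_ids)  # the token sequence an image generator would consume
```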

Images generated from AI. (Courtesy of the National University of Singapore)

But the system must be extensively trained on the brain waves of a specific person, so it is a long way from widespread deployment.

“The truth is that there is still a lot of room for improvement,” Zhao said. “Basically, you have to go into a scanner and look at thousands of images; then we can make the prediction for you.”

It’s not yet possible to bring in strangers off the street to read their minds, “but we are trying to generalize across subjects in the future,” she said.

Like many recent AI developments, brain-reading technology raises ethical and legal concerns. Some experts say that in the wrong hands, the AI model could be used for interrogation or surveillance.

“I think the line is very fine between what could be empowering and what could be oppressive,” said Nita Farahany, a professor of law and the ethics of new technologies at Duke University. “Unless we get ahead of it, I think we are more likely to see the oppressive implications of the technology.”

She worries that AI brain-decoding could lead to such information being commodified by companies or abused by governments, and she pointed to brain-sensing products already on or about to hit the market that could create a world where we are not just sharing our brain readings, but being judged by them.

“This is a world where not only is their brain activity collected and their brain state monitored, from attention to focus,” she said, “but people are hired, fired and promoted based on what their brain metrics show.”

“It is already going mainstream, and we need governance and rights in place right now, before it becomes something that is really part of everyday life for everyone,” she said.

Researchers in Singapore continue to develop their technology, hoping first to decrease the number of hours a subject needs to spend in an fMRI machine. Then they plan to scale up the number of subjects they test.

“We think it is possible in the future,” Zhao said. “And with [a larger] amount of data available, a machine learning model will achieve even better performance.”