Scientists at the University of Texas at Austin have developed an AI-based decoder that translates brain activity into a continuous stream of text, the first time continuous language has been decoded non-invasively, without surgical implants. Using fMRI scan data, the decoder can reconstruct the gist of speech while people listen to a story or even imagine one silently. The work sidesteps a fundamental limitation of fMRI: the blood-flow signal it measures has an inherent time lag, making word-by-word tracking of brain activity impossible, so the system decodes meaning rather than exact words. The decoder was trained to match brain activity to meaning using a large language model, GPT-1, an early precursor to the technology behind OpenAI's ChatGPT.
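To make the idea concrete, here is a minimal toy sketch of how such a decoder can work in principle: a language model proposes candidate next words, a forward "encoding model" predicts the brain response each candidate would evoke, and a beam search keeps the candidates whose predicted responses best match the measured scan. Everything below is invented for illustration (toy vocabulary, made-up response vectors, a bigram table standing in for GPT-1); it is not the authors' actual system.

```python
import math

# Toy encoding model: each word maps to a simulated 3-"voxel" brain
# response. All vectors are invented for this illustration.
ENCODING = {
    "the": (0.1, 0.0, 0.2),
    "dog": (0.9, 0.1, 0.0),
    "ran": (0.2, 0.8, 0.1),
    "cat": (0.8, 0.3, 0.4),
    "sat": (0.1, 0.6, 0.5),
}

# Toy bigram log-probabilities standing in for a large language model.
LM = {
    ("<s>", "the"): math.log(0.8), ("<s>", "dog"): math.log(0.1),
    ("the", "dog"): math.log(0.5), ("the", "cat"): math.log(0.4),
    ("dog", "ran"): math.log(0.6), ("dog", "sat"): math.log(0.3),
    ("cat", "ran"): math.log(0.3), ("cat", "sat"): math.log(0.6),
}

def lm_logprob(prev, word):
    # Unseen bigrams get a small floor probability.
    return LM.get((prev, word), math.log(1e-3))

def fit(predicted, observed):
    # Negative squared error: higher means the predicted brain
    # response better matches the measured one.
    return -sum((p - o) ** 2 for p, o in zip(predicted, observed))

def decode(observed_responses, beam_width=3):
    # Beam search: combine language-model plausibility with how well
    # each candidate word's predicted response matches the scan.
    beam = [(["<s>"], 0.0)]
    for obs in observed_responses:
        candidates = []
        for seq, score in beam:
            for word, pred in ENCODING.items():
                s = score + lm_logprob(seq[-1], word) + fit(pred, obs)
                candidates.append((seq + [word], s))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beam = candidates[:beam_width]
    return beam[0][0][1:]  # best sequence, start token dropped

# Simulated scan of someone processing "the dog ran".
scan = [ENCODING["the"], ENCODING["dog"], ENCODING["ran"]]
print(decode(scan))  # → ['the', 'dog', 'ran']
```

The key design point this sketch captures is that the language model constrains the search to plausible phrases, which is what lets a decoder recover meaning from a slow, noisy signal rather than transcribing one word per measurement.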
Participants also watched four short, silent videos in the scanner, and the decoder could use their brain activity to describe some of the content accurately. According to The Guardian's article, however, the decoder struggled with certain aspects of language, including pronouns. The decoder was also personalized: when a model trained on one person was tested on another, the readout was unintelligible. There are concerns about potential misuse of the technology, and the team has worked to guard against such scenarios. The team now hopes to assess whether the technique could be applied to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).