
New MRI tool can ‘read minds’ by translating brain activity into speech


Understanding thoughts could have huge implications – both positive and negative (Picture: Getty)

The ability to read minds may have just moved a step closer.

Researchers have developed a non-invasive speech decoder that can translate brain activity into words – something that could one day help those who have lost the ability to speak.

Using functional magnetic resonance imaging (fMRI), a team from the University of Texas at Austin has created a brain-computer interface that can generate full sentences based on what people are thinking.

Previous language decoders have often required invasive brain surgery, while a common stumbling block for external decoders is the speed of spoken English compared with the rate at which fMRI can detect brain activity: a single brain image can reflect the influence of more than 20 words.

To overcome this, the team recorded the brain activity of three subjects as each listened to 16 hours of narrative stories, training the model to map the brain activity associated not only with specific words but also with phrases. An algorithm then searches for the word sequences that best explain the ‘relatively slow’ brain signals.
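Based on the study’s description, that search resembles a beam search: a language model proposes candidate word sequences, an ‘encoding model’ predicts the brain activity each candidate should evoke, and the candidates whose predictions best match the measured scan are kept. The sketch below illustrates only that loop; propose_continuations and predict_brain_response are hypothetical stand-ins for the team’s language and encoding models, not their actual code.

import numpy as np

def propose_continuations(prefix):
    # Hypothetical stand-in for a language model: a real system would
    # return likely next words given the words decoded so far.
    return ["the", "she", "hasn't", "started", "learning", "yet"]

def predict_brain_response(words):
    # Hypothetical stand-in for an encoding model: maps a word sequence
    # to a predicted fMRI response (one value per voxel). Seeding keeps
    # this toy version stable for a given sequence within a run.
    seed = abs(hash(" ".join(words))) % (2**32)
    return np.random.default_rng(seed).standard_normal(100)

def score(candidate, measured):
    # How well the predicted response correlates with the measured scan.
    predicted = predict_brain_response(candidate)
    return float(np.corrcoef(predicted, measured)[0, 1])

def decode(measured, beam_width=3, length=6):
    # Beam search: extend each kept sequence by every proposed word and
    # retain the beam_width sequences that best explain the measurement.
    beams = [[]]
    for _ in range(length):
        candidates = [b + [w] for b in beams for w in propose_continuations(b)]
        candidates.sort(key=lambda c: score(c, measured), reverse=True)
        beams = candidates[:beam_width]
    return " ".join(beams[0])

measured_scan = np.random.default_rng(0).standard_normal(100)  # stand-in scan data
print(decode(measured_scan))

Because a language model steers the search toward fluent English, a decoder built this way tends to produce paraphrases of what was heard rather than exact transcripts, which matches how the researchers describe their results below.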

‘The goal is to take recordings of users’ brain activity and predict the words the user was hearing, saying or imagining,’ said lead author Jerry Tang. ‘Eventually we hope it can help people who have lost the ability to speak, like those who have suffered brain injuries such as strokes, or illnesses like ALS [a form of motor neurone disease].’

Co-author Alex Huth, of the university’s HuthLab, said a primary difference between their decoder and other non-invasive models was the area of the brain it mapped.

The fMRI monitors brain activity when the subject is hearing or thinking words (Picture: Jerry Tang/Alexander Huth)

‘Most existing systems work by looking at the last stage of speech output,’ said Mr Huth. ‘They record from the motor areas of the brain, the areas that control the mouth, the larynx and the tongue. What they decode is how the person is trying to move their mouth to say something, which can be effective and is used in people with locked-in syndrome – which is extremely cool.

‘Our system works on a very different level. It works at the level of ideas, semantics and meaning, which is why what we get out aren’t the exact words someone heard or spoke – it’s the same idea, but sometimes expressed in different words.’

For example, when a user heard the words ‘I don’t have my driver’s licence yet’, the decoder predicted the words ‘she hasn’t even started learning to drive yet’.

The model was also able to translate people’s thoughts while they were watching videos with no dialogue.

‘The subjects were instructed not to do anything while watching them,’ said Mr Huth. ‘But when we put that data into the decoder it spat out a description of what was happening in the video.

‘It’s really getting at the idea of what’s happening, which is exciting.’

However, Mr Tang stressed the importance of mental privacy.

‘Nobody’s brain should be decoded without their cooperation,’ he said. ‘Our study found you need someone’s cooperation to both train and run the decoder.’

Decoders trained on one person could not be used on another, while users were also able to ‘sabotage’ readings.


‘People can sabotage a decoder by performing simple tasks like naming animals or telling a story,’ said Mr Tang.

‘Of course this could all change as technology gets better, so we believe it’s important to keep researching and enact policy that protects each person’s mental privacy.’






