After years of anticipation, Apple finally unveiled its first AR/VR headset, the Vision Pro, this week.
Amid its eye-catching mixed reality features, one thing may have slipped under the radar: mind reading.
The Vision Pro might be a soft launch of the tech giant’s brain-computer interface (BCI) technology.
Essentially, a BCI allows a person to control an external device using brain signals. The technology is primarily used to aid people with disabilities, but Apple’s headset could make it far more widely accessible.
Neurotechnology professional Sterling Crispin, who worked on the headset, said in a tweet that it could predict when ‘a user was going to click on something before they actually did’.
‘Your pupil reacts before you click in part because you expect something will happen after you click. So you can create biofeedback with a user’s brain by monitoring their eye behavior, and redesigning the UI in real time to create more of this anticipatory pupil response,’ he explained.
‘It’s a crude brain computer interface via the eyes, but very cool. And I’d take that over invasive brain surgery any day.’
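In engineering terms, what Crispin describes is a feedback loop: track pupil diameter against a rolling baseline and treat a sudden dilation as a hint that a selection is imminent. Here is a minimal sketch of that idea in Python, assuming a 120Hz eye tracker and an arbitrary dilation threshold; these values, and the function itself, are illustrative assumptions rather than Apple’s actual implementation.

```python
from collections import deque

# Illustrative sketch only: flags a sudden pupil dilation relative to a
# rolling baseline and treats it as an anticipatory "about to click" signal.
# The sampling rate and threshold below are assumed values, not Apple's.

WINDOW = 120               # ~1 second of samples at an assumed 120 Hz tracker
DILATION_THRESHOLD = 1.08  # 8% above baseline counts as anticipatory dilation

baseline = deque(maxlen=WINDOW)

def predict_click(pupil_diameter_mm: float) -> bool:
    """Return True if the pupil dilates sharply above its recent baseline."""
    if len(baseline) < WINDOW:
        baseline.append(pupil_diameter_mm)
        return False  # not enough history yet to establish a baseline
    avg = sum(baseline) / len(baseline)
    baseline.append(pupil_diameter_mm)  # oldest sample is evicted automatically
    return pupil_diameter_mm > avg * DILATION_THRESHOLD
```

In a real headset, a signal like this could be used to pre-load whatever the user is looking at, so the interface feels as though it responds the instant they click.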
Brain chips developed by Elon Musk’s Neuralink are an example of invasive BCIs, which require surgery to place electrodes directly into the brain.
Typically, non-invasive BCIs use electrodes placed on the scalp to measure brain activity, or sensors around the eyes to track eye movements.
While invasive BCIs are more accurate than non-invasive ones, they carry a higher risk of complications.
Crispin, who has worked in VR and AR technology for over a decade, said he worked at Apple as a Neurotechnology Prototyping Researcher in the Technology Development Group.
He called Apple’s Vision Pro a ‘culmination of the whole industry into a single product’.
Crispin went on to explain that he was involved in work detecting users’ mental states based on data from their bodies and brains during immersive experiences.
When users are immersed in mixed reality experiences, ‘AI models are trying to predict if you are feeling curious, mind wandering, scared, paying attention, remembering a past experience, or some other cognitive state’, he said.
These predictions draw on measurements such as eye tracking, electrical activity in the brain, heart rate and rhythm, muscle activity, blood density in the brain, blood pressure and skin conductance.
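In machine-learning terms, each of those measurements becomes one feature in a vector that a trained model maps to a cognitive state. The sketch below shows the rough shape of such a pipeline; the feature names, the random stand-in training data and the choice of a scikit-learn random forest are all assumptions for illustration, not details Crispin disclosed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy example: each row is one time window of biosignal features.
# Feature names, labels and training data are invented for illustration.
FEATURES = ["pupil_diameter", "eeg_alpha_power", "heart_rate",
            "muscle_activity", "skin_conductance"]
STATES = ["curious", "mind_wandering", "scared", "attentive"]

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, len(FEATURES)))   # fake sensor windows
y_train = rng.integers(0, len(STATES), size=200)  # fake state labels

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

window = rng.normal(size=(1, len(FEATURES)))      # one live sensor reading
print("Predicted state:", STATES[model.predict(window)[0]])
```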
He also described a patent covering the use of machine learning and signals from the body and brain to adapt immersive environments, changing what you see and hear in the background.
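The adaptation side can be pictured as a simple mapping from the predicted state to changes in the scene. This hypothetical example continues the sketch above; the specific adjustments are invented for illustration and are not taken from the patent.

```python
# Illustrative adaptation step: the environment reacts to the predicted
# cognitive state. State names match the earlier sketch; the adjustments
# are invented examples, not anything described in the patent itself.

def adapt_environment(state: str) -> dict:
    """Map a predicted cognitive state to background audio/visual changes."""
    if state == "mind_wandering":
        return {"background_audio": "prominent", "visual_motion": "increase"}
    if state == "scared":
        return {"background_audio": "calming", "visual_motion": "reduce"}
    return {"background_audio": "unchanged", "visual_motion": "unchanged"}

print(adapt_environment("mind_wandering"))
```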
Crispin added that it will take until the end of this decade for the industry to fully catch up to the tech’s grand vision.