Researchers at Stanford University have developed a brain-computer interface (BCI) that converts neural signals into spoken words. The BCI, detailed in a study published in Cell, uses sensors implanted in the motor cortex to detect brain activity linked to inner speech, and machine-learning models interpret those signals to predict intended words in real time.

Until now, individuals with severe paralysis have communicated largely through devices that track eye movements or subtle muscle twitches, letting users select words on screens. This advance gives them another option: using their inner speech to communicate.

The study involved four participants: three with amyotrophic lateral sclerosis (ALS) and one with a brainstem stroke, all of whom had pre-existing brain sensor implants. As participants imagined sentences, the decoded words, drawn from a 125,000-word vocabulary, appeared on screens. This…
