A Prelude to Speech: How the Brain Forms Words

Summary: Researchers have made a groundbreaking discovery about how the human brain forms words before speaking. Using Neuropixels probes, they mapped how neurons represent speech sounds and assemble them into language.

This study not only sheds light on the complex cognitive steps involved in speech production but also opens up possibilities for treating speech and language disorders. The technology could lead to prosthetics that produce synthetic speech, benefiting those with neurological disorders.

Key Facts:

  1. The study used advanced Neuropixels probes to record the activity of individual neurons in the brain, revealing how we think of and produce words.
  2. The researchers found separate groups of neurons dedicated to speaking and to listening, pointing to distinct brain processes for language production and comprehension.
  3. The findings could help develop treatments for speech and language disorders and lead to brain-machine interfaces for synthetic speech.

Source: Harvard

Using advanced brain-recording techniques, researchers at Harvard-affiliated Massachusetts General Hospital have shown in a new study how neurons in the human brain work together to allow people to think about the words they want to say and then produce them aloud through speech.

The findings provide a detailed map of how speech sounds such as consonants and vowels are represented in the brain well before they are even spoken and how they are strung together during language production.

The work, which is published in Nature, could lead to improvements in the understanding and treatment of speech and language disorders.

“Although speaking usually seems easy, our brains perform many complex cognitive steps in the production of natural speech — including coming up with the words we want to say, planning the articulatory movements, and producing our intended vocalizations,” says senior author Ziv Williams, an associate professor in neurosurgery at MGH and Harvard Medical School.

“Our brains perform these feats surprisingly fast — about three words per second in natural speech — with remarkably few errors. Yet how we precisely achieve this feat has remained a mystery.”

Using a cutting-edge technology called Neuropixels probes to record the activity of single neurons in the prefrontal cortex, a frontal region of the human brain, Williams and his colleagues identified cells that are involved in language production and that may underlie the ability to speak. They also found that separate groups of neurons in the brain are dedicated to speaking and listening.

“The use of Neuropixels probes in humans was first pioneered at MGH,” said Williams. “These probes are remarkable — they are smaller than the width of a human hair, yet they also have hundreds of channels that are capable of simultaneously recording the activity of dozens or even hundreds of individual neurons.”

Williams worked to develop the recording techniques with Sydney Cash, a professor in neurology at MGH and Harvard Medical School, who also helped lead the study.

The research shows how neurons represent some of the most basic elements involved in constructing spoken words — from simple speech sounds called phonemes to their assembly into more complex strings such as syllables.

For example, the consonant sound /d/, produced by touching the tongue to the ridge just behind the upper front teeth, is needed to produce the word "dog." By recording individual neurons, the researchers found that certain neurons become active before this phoneme is spoken aloud. Other neurons reflected more complex aspects of word construction, such as the specific assembly of phonemes into syllables.
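Pre-articulatory activity of this kind is conventionally quantified with a peri-event time histogram (PETH), which averages a neuron's firing rate in a window around each phoneme onset. Below is a minimal Python sketch of that analysis; the spike and onset arrays are synthetic toy data and the function is illustrative, not the study's actual pipeline.

```python
import numpy as np

def peth(spike_times, event_times, window=(-0.5, 0.5), bin_size=0.01):
    """Peri-event time histogram: average firing rate around each event.

    spike_times : sorted 1-D array of one neuron's spike times (s)
    event_times : 1-D array of phoneme-onset times (s)
    Returns bin centers (s relative to onset) and firing rate (Hz).
    """
    edges = np.arange(window[0], window[1] + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1)
    for t in event_times:
        rel = spike_times - t                      # spike times relative to onset
        in_win = rel[(rel >= window[0]) & (rel < window[1])]
        counts += np.histogram(in_win, bins=edges)[0]
    rate = counts / (len(event_times) * bin_size)  # trial-averaged spikes/s
    return edges[:-1] + bin_size / 2, rate

# Toy data: a neuron that fires a burst ~100 ms before each /d/ onset.
rng = np.random.default_rng(0)
onsets = np.arange(1.0, 61.0)                      # one utterance per second
spikes = np.sort(np.concatenate(
    [t - 0.1 + 0.02 * rng.standard_normal(8) for t in onsets]))
centers, rate = peth(spikes, onsets)
print(f"peak rate {rate.max():.0f} Hz at t = {centers[rate.argmax()]:+.2f} s")
```

A firing rate that climbs before t = 0 is the signature described above: the cell's activity reflects the phoneme before it is voiced.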

With their technology, the investigators showed that it is possible to reliably determine the speech sounds individuals will utter before they articulate them. In other words, scientists can predict which combination of consonants and vowels will be produced before the words are actually spoken. This capability could be leveraged to build neural prosthetics or brain-machine interfaces capable of producing synthetic speech, which could benefit a range of patients.
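As a concrete illustration of this kind of prediction, the sketch below trains an off-the-shelf linear classifier to decode an upcoming phoneme class from pre-speech spike counts. The data are simulated and the window and class labels are hypothetical; this is not the authors' decoder, only the general shape of such an analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical setup: spike counts from 50 neurons in a pre-articulatory
# window (say, the 500 ms before voice onset), one row per spoken word.
rng = np.random.default_rng(1)
n_trials, n_neurons = 400, 50
X = rng.poisson(3.0, size=(n_trials, n_neurons)).astype(float)
y = rng.integers(0, 4, size=n_trials)  # 4 phoneme classes (e.g. /d/, /g/, /a/, /o/)
X[np.arange(n_trials), y] += 4.0       # inject class-dependent firing for the demo

# Linear decoder: predict the upcoming phoneme class from pre-speech activity.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance is 0.25)")
```

In a real experiment the labels would come from audio aligned to the recordings, and accuracy above chance before voice onset is what constitutes evidence of pre-articulatory phonetic coding.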

“Disruptions in the speech and language networks are observed in a wide variety of neurological disorders — including stroke, traumatic brain injury, tumors, neurodegenerative disorders, neurodevelopmental disorders, and more,” said Arjun Khanna, a postdoctoral fellow in the Williams Lab and a co-author on the study.

“Our hope is that a better understanding of the basic neural circuitry that enables speech and language will pave the way for the development of treatments for these disorders.”

The researchers hope to expand on this work by studying more complex language processes, investigating how people choose the words they intend to say and how the brain assembles words into sentences that convey an individual's thoughts and feelings to others.

About this language and speech research news

Author: MGH Communications
Source: Harvard
Contact: MGH Communications – Harvard
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Single-neuronal elements of speech production in humans” by Ziv Williams et al. Nature


Abstract

Single-neuronal elements of speech production in humans

Humans are capable of generating extraordinarily diverse articulatory movement combinations to produce meaningful speech. This ability to orchestrate specific phonetic sequences, and their syllabification and inflection over subsecond timescales, allows us to produce thousands of word sounds and is a core component of language. The fundamental cellular units and constructs by which we plan and produce words during speech, however, remain largely unknown.

Here, using acute ultrahigh-density Neuropixels recordings capable of sampling across the cortical column in humans, we discovered neurons in the language-dominant prefrontal cortex that encoded detailed information about the phonetic arrangement and composition of planned words during the production of natural speech.

These neurons represented the specific order and structure of articulatory events before utterance and reflected the segmentation of phonetic sequences into distinct syllables. They also accurately predicted the phonetic, syllabic and morphological components of upcoming words and showed a temporally ordered dynamic.

Collectively, we show how these mixtures of cells are broadly organized along the cortical column and how their activity patterns transition from articulation planning to production. We also demonstrate how these cells reliably track the detailed composition of consonant and vowel sounds during perception and how they distinguish processes specifically related to speaking from those related to listening.

Together, these findings reveal a remarkably structured organization and encoding cascade of phonetic representations by prefrontal neurons in humans and demonstrate a cellular process that can support the production of speech.
