    Brains time-stamp sounds to process words

    According to a recent study by a group of psychologists and linguists, our brains “time-stamp” the order of incoming sounds, enabling us to accurately understand the words that we hear. Its findings, which are published in the journal Nature Communications, provide fresh perspectives on the complex workings of the nervous system.

    The paper’s lead author, Laura Gwilliams, who was an NYU doctoral student at the time of the research and is now a postdoctoral fellow at the University of California, San Francisco, explains that to recognize the words being said, your brain must accurately interpret both the identity of the speech sounds and the order in which they were uttered. “We show how the brain accomplishes this feat: different sounds are processed by different neuronal populations, and each sound is time-stamped with how much time has passed since it entered the ear,” she says. This lets the listener follow both which sounds the speaker is making and the order in which they were spoken.

    The role of the brain in processing individual sounds has been studied extensively, but little is known about how we handle the rapid sequences of sounds that make up speech. A clearer picture of these dynamics may eventually help in treating neurological conditions that impair speech comprehension.

    Given how quickly speech sounds unfold, the scientists behind the Nature Communications study wanted to know how the brain interprets both the identity and the sequence of those sounds. This matters because, to correctly understand the words being spoken, your brain must correctly interpret both the identity of the speech sounds (e.g., l-e-m-o-n) and the order in which they were uttered (e.g., 1-2-3-4-5), so that you hear “lemon” and not “melon”.
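    As a toy illustration (ours, not the study’s), the short Python sketch below shows why order matters: “lemon” and “melon” contain exactly the same sounds, so only their sequence tells them apart.

    ```python
    # Toy illustration: identical sound inventories, different orders, different words.
    lemon = ["l", "e", "m", "o", "n"]
    melon = ["m", "e", "l", "o", "n"]

    # The two words contain exactly the same sounds...
    assert sorted(lemon) == sorted(melon)

    # ...so only the order of the sounds distinguishes them.
    assert lemon != melon
    print("".join(lemon), "vs.", "".join(melon))
    ```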

    To do this, they monitored the brain activity of more than 20 people, all native English speakers, as each listened to two hours of an audiobook. The researchers then correlated the participants’ brain activity with the phonetic features that distinguish one speech sound from another (e.g., “m” vs. “n”).
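    The paper’s actual analysis pipeline is not reproduced here, but as a rough, hypothetical sketch of what relating brain responses to a binary phonetic feature can look like, the example below decodes a simulated “m”-vs-“n” contrast from synthetic data. The trial counts, sensor counts, injected signal, and logistic-regression model are illustrative assumptions, not the study’s methods.

    ```python
    """Illustrative sketch only: decode a binary phonetic feature (e.g. "m" vs. "n")
    from simulated brain responses. The data are random stand-ins, not study data."""
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    n_trials, n_sensors = 200, 50          # hypothetical epochs and recording channels
    labels = rng.integers(0, 2, n_trials)  # 0 = "m", 1 = "n" (binary phonetic contrast)

    # Simulated sensor data: noise plus a small label-dependent signal.
    responses = rng.normal(size=(n_trials, n_sensors))
    responses[:, 0] += 0.5 * labels        # inject a weak "neural" correlate of the feature

    # Ask how well the feature can be decoded from the responses.
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, responses, labels, cv=5)
    print(f"mean decoding accuracy: {scores.mean():.2f}")  # chance level is 0.50
    ```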

    The study’s findings revealed that the brain processes speech using a buffer, maintaining a running representation of the three most recent speech sounds. They also showed that neurons in the auditory cortex pass information to one another so that several sounds can be analyzed at the same time without their identities being confused.
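    The following is a minimal sketch, assuming nothing beyond a fixed-length first-in-first-out buffer, of how a running representation of the last three sounds could be kept as new sounds arrive. It illustrates the idea of such a buffer, not the brain’s actual mechanism.

    ```python
    from collections import deque

    # A fixed-capacity buffer that holds only the three most recent sounds,
    # each tagged with its arrival time (in arbitrary units).
    buffer = deque(maxlen=3)

    sounds = ["l", "e", "m", "o", "n"]     # the sounds of "lemon", arriving one by one
    for t, sound in enumerate(sounds):
        buffer.append((sound, t))          # once full, the newest sound displaces the oldest
        print(f"t={t}: buffer holds {list(buffer)}")
    # The stored arrival times preserve the relative order of the buffered sounds.
    ```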

    Gwilliams, who will return to NYU’s Department of Psychology as an assistant professor in 2023, says, “We found that each spoken sound generates a cascade of neurons firing in multiple areas of the auditory cortex.” This implies that information about each individual sound in a word such as “cat” (k-a-t) is passed in an orderly way between different neural populations, helping to time-stamp each sound with its relative order.

    The study’s other authors included David Poeppel, a professor in NYU’s Department of Psychology and managing director of the Ernst Struengmann Institute for Neuroscience in Frankfurt, Germany; Jean-Remi King, a professor in the Department of Linguistics at NYU and the NYU Abu Dhabi Institute; and Alec Marantz, a professor in the Department of Linguistics at NYU.
