Infants Hear More Speech Than Music At Home



Summary: A new study compared the amount of music and speech infants hear at home. Researchers found that infants are exposed to more speech than music, with the gap increasing as they grow.

Most music in infants’ environments comes from electronic sources, unlike speech, which is mostly heard in person. Using daylong audio recordings, the study aims to understand the impact of music on infant development compared with that of speech.

Key Facts:

  1. Infants are exposed to more spoken language than music, with this trend increasing over time.
  2. Most music heard by infants comes from electronic devices, while speech is primarily from in-person interactions.
  3. The study used daylong audio recordings to provide a detailed snapshot of infants’ auditory environments.

Source: University of Washington

Speech and music are the dominant elements of an infant’s auditory environment. While past research has shown that speech plays a critical role in children’s language development, less is known about the music that infants hear.

A new University of Washington study, published May 21 in Developmental Science, is the first to compare the amount of music and speech that children hear in infancy. Results showed that infants hear more spoken language than music, with the gap widening as the babies get older.


“We wanted to get a snapshot of what’s happening in infants’ home environments,” said corresponding author Christina Zhao, a UW research assistant professor of speech and hearing sciences.

“Quite a few studies have looked at how many words babies hear at home, and they’ve shown that it’s the amount of infant-directed speech that’s important in language development. We realized we don’t know anything about what type of music babies are hearing and how it compares to speech.”

Researchers analyzed a dataset of daylong audio recordings collected in English-learning infants’ home environments at ages 6, 10, 14, 18 and 24 months. At every age, infants were exposed to more music from an electronic device than an in-person source. This pattern was reversed for speech. While the percentage of speech intended for infants significantly increased with time, it stayed the same for music.

“We’re shocked at how little music is in these recordings,” said Zhao, who is also the director of the Lab for Early Auditory Perception (LEAP), housed in the Institute for Learning & Brain Sciences (I-LABS).

“The majority of music is not intended for babies. We can imagine these are songs streaming in the background or on the radio in the car. A lot of it is just ambient.”

This differs from the highly engaging, multi-sensory, movement-oriented music intervention that Zhao and her team had previously implemented in lab settings. During those sessions, music played while infants were given instruments, and researchers taught caregivers how to synchronize their babies’ movements with the music. A control group of babies came to the lab just to play.

“We did that twice,” Zhao said. “Both times, we saw the same result: that music intervention was enhancing infants’ neural responses to speech sounds. That got us thinking about what would happen in the real world. This study is the first step into that bigger question.”

Past studies have largely relied on qualitative and quantitative parental reports to examine musical input in infants’ environments, but parents tend to overestimate how much they talk or sing to their children.

This study addresses that gap by analyzing daylong auditory recordings made with Language Environment Analysis (LENA) recording devices. The recordings, originally created for a separate study, documented infants’ natural sound environment for up to 16 hours per day for two days at each recording age.

Researchers then crowdsourced the annotation of the LENA data through Zooniverse, an online citizen-science platform. Volunteers listened to randomly sampled 10-second clips and determined whether each contained speech or music.

When speech or music was identified, listeners were then asked whether it came from an in-person or electronic source. Finally, they judged whether the speech or music was intended for a baby.
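To make this three-step annotation scheme concrete, here is a minimal sketch in Python of how each clip’s labels and the per-age speech and music proportions might be represented and tallied. The names (Annotation, summarize) and data layout are illustrative assumptions, not the study’s actual Zooniverse pipeline.

```python
# Illustrative sketch only: hypothetical names, not the study's code.
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class Annotation:
    """One volunteer's labels for a single 10-second clip."""
    age_months: int                  # recording age: 6, 10, 14, 18, or 24
    category: str                    # "speech", "music", or "neither"
    source: Optional[str]            # "in_person" or "electronic"; None if "neither"
    infant_directed: Optional[bool]  # was the input intended for the baby?

def summarize(annotations: list, age: int) -> dict:
    """Proportion of clips at one age point labeled speech vs. music."""
    clips = [a for a in annotations if a.age_months == age]
    counts = Counter(a.category for a in clips)
    total = len(clips) or 1  # avoid division by zero on an empty age point
    return {cat: counts[cat] / total for cat in ("speech", "music")}

# A few hand-made annotations for one age point:
sample = [
    Annotation(6, "speech", "in_person", True),
    Annotation(6, "speech", "in_person", False),
    Annotation(6, "music", "electronic", False),
    Annotation(6, "neither", None, None),
]
print(summarize(sample, age=6))  # -> {'speech': 0.5, 'music': 0.25}
```

Comparing such proportions across the five age points is one simple way the reported pattern (more speech than music, with the gap widening over age) could be quantified.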

Since this research featured a limited sample, researchers are now interested in expanding their dataset to determine if the result can be generalized to different cultures and populations. A follow-up study will examine the same type of LENA recordings from infants in Latinx families. Since audio recordings lack context, researchers are also interested in when music moments are happening in infants’ lives.

“We’re curious to see whether music input is correlated with any developmental milestones later on for these babies,” Zhao said.

“We know speech input is highly correlated with later language skills. In our data, we see that speech and music input are not correlated—so it’s not like a family who tends to talk more will also have more music. We’re trying to see if music contributes more independently to certain aspects of development.”

About this music, language, and neurodevelopment research news

Author: Lauren Kirschman
Source: University of Washington
Contact: Lauren Kirschman – University of Washington
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“Comparison of speech and music input in North American infants’ home environment over the first 2 years of life” by Christina Zhao et al. Developmental Science


Abstract

Comparison of speech and music input in North American infants’ home environment over the first 2 years of life

Infants are immersed in a world of sounds from the moment their auditory system becomes functional, and experience with the auditory world shapes how their brain processes sounds in their environment. Across cultures, speech and music are two dominant auditory signals in infants’ daily lives.

Decades of research have repeatedly shown that both quantity and quality of speech input play critical roles in infant language development. Less is known about the music input infants receive in their environment.

This study is the first to compare music input to speech input across infancy by analyzing a longitudinal dataset of daylong audio recordings collected in English-learning infants’ home environments, at 6, 10, 14, 18, and 24 months of age.

Using a crowdsourcing approach, 643 naïve listeners annotated 12,000 short snippets (10 s) randomly sampled from the recordings using Zooniverse, an online citizen-science platform. Results show that infants overall receive significantly more speech input than music input and the gap widens as the infants get older.

At every age point, infants were exposed to more music from an electronic device than an in-person source; this pattern was reversed for speech. The percentage of input intended for infants remained the same over time for music while that percentage significantly increased for speech.

We propose possible explanations for the limited music input compared to speech input observed in the present (North American) dataset and discuss future directions.

We also discuss the opportunities and caveats in using a crowdsourcing approach to analyze large audio datasets. 
