Children’s cortical speech tracking in child-adult and child-robot interactions
Fatih Sivridag and Nivedita Mani
Developmental Psychology
Synthesized speech technology holds potential for enabling natural conversations between humans and machines, particularly in social robotics. However, synthesized speech produced by social robots still lacks some qualities of natural speech, which are crucial for human-robot interaction, especially with children. In this study, we used EEG to record the neural activity of typically developing 5-year-old children from middle- to high-socio-economic-status households while they listened to stories narrated by either an adult or a social robot, specifically Furhat. We measured cortical speech tracking to compare how well children's brains tracked the robot's synthesized speech relative to the adult's natural speech. Our results suggest that children show cortical speech tracking in both scenarios. They also suggest that cortical speech tracking peaks at longer delays between the speech and the neural response in child-robot interaction than in child-adult interaction. Possible sources of these differences, along with their implications, are discussed.