It can be hard to understand children when they talk. There are many reasons for this. I like to illustrate this with a story about a time I met a young child and her dad in an elevator. The little girl said to me, “da-da day!” and to her father’s surprise, I responded, “Cool! I like Thomas the Train, too!”
Why did this child say “da-da day” for Thomas the Train? As a toddler, she was still learning to use her mouth muscles to make the sounds of her native language, in this case, English. She omitted sounds from the ends of words and replaced the remaining consonants with the same sound, which made the phrase easier to say. She did not yet have the grammar skills to make a whole sentence about Thomas the Train that might have helped a listener understand her. And to top it off, she was still learning the conversational rule that mature communicators don’t start an interaction by simply saying the word for something they like. Her still-developing abilities to produce sounds, use grammar, and follow conversational rules all made her harder to understand.
Luckily for this little girl, I was able to use several strategies to understand her anyway. As a speech-language pathologist, I spend a lot of time listening to children speak, particularly children with speech disorders, so I have had plenty of practice understanding child speech. I was also familiar with popular children’s toys and television shows, which gave me a “word bank” to help me guess the kinds of things a child might say. And she was wearing a Thomas the Tank Engine sweatshirt, so I could use my general knowledge that children like to talk about things in their immediate environment, and wear clothes that depict their interests, to narrow down the options and make an educated guess. Experience and background knowledge are just two of the strategies adults use to understand child speech.
As scientists who study child speech and language development, we want to learn more about the strategies adults use to understand child speech. That’s why we created the “Say What?” game in KidTalk. In this game, parents listen to a recording of a child talking and guess what was said. By learning which characteristics of child speech make it easier or harder to understand, we can figure out what strategies adults use when they listen to child speech. By including recordings of your own child (if you consent to this in settings), we can also study differences in how parents understand their own children compared to unfamiliar children. Knowing more about how adults understand child speech will help us teach computers to understand child speech better — and any parent who has watched a child attempt to use “Hey, Siri” can attest that there is lots of room for improvement here!
The KidTalk project will also help scientists better understand the process of child speech development. We already know the average age at which children learn to produce different sounds and the average order in which they are learned. However, there is a lot of variability from child to child, and we do not know why. A child who produces /s/ correctly at age 3 and one who does not learn this sound until age 7 are both completely normal. Some children, though, do not develop adult-like pronunciations on their own and need speech therapy. Many scientists have proposed explanations for why some children do not learn all the sounds of their native language, but we still aren’t entirely sure why this happens.
Our current methods for addressing these questions about child speech development have been limited by what we can study in a laboratory. In a lab-based study, we might ask a child to repeat single words and mark which sounds they said correctly. KidTalk, by contrast, lets us study speech sound production in realistic contexts through recordings of children’s everyday talk. When parents transcribe these recordings, researchers know what the child was saying (or attempting to say). With frequent transcribed recordings of the same child, we can study the small, gradual, acoustic changes in their pronunciation as their cute errors morph into adult-like speech.