-
Previous research has shown that English infants are sensitive to mispronunciations of vowels in familiar words by as early as 15 months of age. These results suggest that infants are sensitive not only to large mispronunciations of the vowels in words but also to smaller mispronunciations involving changes to only one dimension of the vowel. The current study broadens this research by comparing infants' sensitivity to the different types of changes involved in the mispronunciations. These included changes to the backness, height, and roundedness of the vowel. Our results confirm that 18-month-olds are sensitive to small changes to the vowels in familiar words. Our results also indicate differential sensitivity across vocalic features, with infants being more sensitive to changes in vowel height and vowel backness than to changes in vowel roundedness. Taken together, the results provide clear evidence for the specificity of vowels and vocalic features such as vowel height and backness in infants' lexical representations.
-
14-month-olds are sensitive to mispronunciations of the vowels and consonants in familiar words. To examine the development of this sensitivity further, the current study tests 12-month-olds' sensitivity to different kinds of vowel and consonant mispronunciations of familiar words. The results reveal that vocalic changes influence word recognition, irrespective of the kinds of vocalic changes made. While consonant changes influenced word recognition in a similar manner, this was restricted to place and manner of articulation changes. Infants did not display sensitivity to voicing changes. Infants' sensitivity to vowel mispronunciations, but not consonant mispronunciations, was influenced by their vocabulary size - infants with larger vocabularies were more sensitive to vowel mispronunciations than infants with smaller vocabularies. The results are discussed in terms of different models attempting to chart the development of acoustically or phonologically specified representations of words during infancy.
-
Do infants implicitly name visually fixated objects whose names are known, and does this information influence their preference for looking at other objects? We presented 18-month-old infants with a picture-based phonological priming task and examined their recognition of named targets in primed (e.g., dog-door) and unrelated (e.g., dog-boat) trials. Infants showed better recognition of the target object in primed than in unrelated trials across three measures. As the prime image was never explicitly named during the experiment, the only explanation for the systematic influence of the prime image on target recognition is that infants, like adults, can implicitly name visually fixated images and that these implicitly generated names can prime infants' subsequent responses in a paired visual-object spoken-word-recognition task.
-
Investigated the cognitive processes involved in 24-month-old toddlers' word recognition in 2 experiments by examining how words are represented in a toddler's mind, focusing on whether the phonological properties of words are important for their organization in the toddler lexicon. In Experiment 1, a digital video scoring system was used to assess visual events while 32 toddlers (aged 23-24 months) were presented with the same experiment used by N. Mani and K. Plunkett (2010), in which phonologically related and unrelated primes were used to test whether the phonological relations between prime-target pairs would influence children's target recognition. Despite showing target recognition in both conditions, the toddlers looked longer at the target following unrelated prime trials than following related prime trials. The authors suggest that this pattern of responding is indicative of lexical-level interference effects influencing target responding in 24-month-olds. In Experiment 2, the only procedural difference was the inclusion of unprimed baseline trials, in which 28 toddlers (aged 22-25 months) were presented with a cross in the middle of the screen in place of a prime image, followed by the simultaneous presentation of target-distractor images and subsequent naming of the target image. While the results added support to the findings of Experiment 1, large cohort trials also resulted in reduced target looking compared to small cohort trials, indicating that phonological priming is not a necessary condition for the observed lexical-level cohort effects. It is concluded that by 24 months of age, children's responding in word recognition tasks approximates adult-like performance in that words begin to cluster together in the toddler lexicon based on their phonological properties, so that word recognition involves the activation and processing of phonologically related words.
-
Children look longer at a familiar object when presented with either correct pronunciations or small mispronunciations of consonants in the object's label, but not following larger mispronunciations. The current article examines whether children display a similar graded sensitivity to different degrees of mispronunciation of the vowels in familiar words by testing children's sensitivity to 1-feature, 2-feature and 3-feature mispronunciations of the vowels of familiar labels. Children aged 1;6 did not show a graded sensitivity to vowel mispronunciations, even when the trial length was increased to allow them more time to form a response. Two-year-olds displayed a robust sensitivity to increases in vowel mispronunciation size, differentiating between small and large mispronunciations. While this suggests that early lexical representations contain information about the features contributing to vocalic identity, we present evidence that this graded sensitivity is better explained by the acoustic characteristics of the different mispronunciation types presented to children.
-
This paper investigated how foreign-accented stress cues affect on-line speech comprehension in British speakers of English. While unstressed English vowels are usually reduced to /ə/, Dutch speakers of English only slightly centralize them. Speakers of both languages differentiate stress by suprasegmentals (duration and intensity). In a cross-modal priming experiment, English listeners heard sentences ending in monosyllabic prime fragments (produced by either an English or a Dutch speaker of English) and performed lexical decisions on visual targets. Primes were either stress-matching (“ab” excised from absurd), stress-mismatching (“ab” from absence), or unrelated (“pro” from profound) with respect to the target (e.g., ABSURD). Results showed a priming effect for stress-matching primes only when produced by the English speaker, suggesting that vowel quality is a more important cue to word stress than suprasegmental information. Furthermore, for visual targets with word-initial secondary stress that do not require vowel reduction (e.g., CAMPAIGN), resembling the Dutch way of realizing stress, there was a priming effect for both speakers. Hence, our data suggest that Dutch-accented English is not harder to understand in general, but it is in instances where the language-specific implementation of lexical stress differs across languages.
-
Previous behavioural research suggests that infants possess phonologically detailed representations of the vowels and consonants in familiar words. These behavioural tasks examine infants’ sensitivity to mispronunciations of a target label in the presence of a target and distracter image. Sensitivity to the mispronunciation may, therefore, be contaminated by the degree of mismatch between the distracter label and the heard mispronounced label. Event-related potential (ERP) studies allow investigation of infants’ sensitivity to the relationship between a heard label (correct or mispronounced) and the referent alone, using single picture trials. ERPs also provide information about the timing of lexico-phonological activation in infant word recognition. The current study examined 14-month-olds’ sensitivity to vowel mispronunciations of familiar words using ERP data from single picture trials. Infants were presented with a familiar image followed by a correct pronunciation of its label, a vowel mispronunciation, or a phonologically unrelated non-word. The results from this ERP task support and extend previous behavioural findings that 14-month-olds are sensitive to mispronunciations of the vowels in familiar words. We suggest that the presence of pictorial context reinforces infants’ sensitivity to mispronunciations of words, and that mispronunciation sensitivity may rely on infants accessing the cross-modal associations between word forms and their meanings.
-
Are there individual differences in children's prediction of upcoming linguistic input and what do these differences reflect? Using a variant of the preferential looking paradigm (Golinkoff, Hirsh-Pasek, Cauley, & Gordon, 1987), we found that, upon hearing a sentence like “The boy eats a big cake,” 2-year-olds fixate edible objects in a visual scene (a cake) soon after they hear the semantically constraining verb eats and prior to hearing the word cake. Importantly, children's prediction skills were significantly correlated with their productive vocabulary size: skilled producers (i.e., children with large production vocabularies) showed evidence of predicting upcoming linguistic input, while low producers did not. Furthermore, we found that children's prediction ability is tied specifically to their production skills and not to their comprehension skills. Prediction is really a piece of cake, but only for skilled producers.
-
What are the processes underlying word recognition in the toddler lexicon? Work with adults suggests that, by 5 years of age, hearing a word leads to cascaded activation of other phonologically, semantically and phono-semantically related words (Huang & Snedeker, 2010; Marslen-Wilson & Zwitserlood, 1989). Given substantial differences in children’s sensitivity to phonological and semantic relationships between words in the first few years of life (Arias-Trejo & Plunkett, 2010; Newman, Samuelson, & Gupta, 2009; Storkel & Hoover, 2012), the current set of experiments investigated whether children younger than five also show such phono-semantic priming. Using a picture-priming task, Experiments 1 and 2 presented 2-year-olds with phono-semantically related prime-target pairs, where the label for the prime image is phonologically related (Experiment 1 – onset CV overlap, Experiment 2 – rhyme VC overlap) to a semantic associate of the target label. Across both experiments, toddlers recognised a word faster when this was preceded by a phono-semantically related prime relative to an unrelated prime. Overall, the results provide strong evidence that word recognition involves cascaded processing of phono-semantically related words by 2 years of age.
-
We examined how words from bilingual toddlers’ second language (L2) primed recognition of related target words in their first language (L1). On critical trials, prime–target word pairs were either (a) phonologically related, with L2 primes overlapping phonologically with L1 target words [e.g., slide (L2 prime)–Kleid (L1 target, “dress”)], or (b) phonologically related through translation, with L1 translations of L2 primes rhyming with the L1 target words [e.g., leg (L2 prime, L1 translation “Bein”)–Stein (L1 target, “stone”)]. Evidence of facilitated target recognition in the phonological priming condition suggests language nonselective access but not necessarily lexical access. However, a late interference effect on target recognition in the phonological priming through translation condition provides evidence for language nonselective lexical access: the L2 prime (leg) could influence L1 target recognition (Stein) in this condition only if both the L2 prime (leg) and its L1 translation (“Bein”) were concurrently activated. In addition, age- and gender-matched monolingual toddler controls showed no difference between conditions, providing further evidence that the results with bilingual toddlers were driven by cross-language activation. The current study, therefore, presents the first-ever evidence of cross-talk between the two languages of bilinguals even as they begin to acquire fluency in their second language.
-
Investigated 24-month-olds' word recognition in sentence-medial positions in two experiments using an intermodal preferential-looking paradigm. In Experiment 1, 33 French toddlers detected word-final voicing mispronunciations and compensated for native voicing assimilations in the middle of sentences. In Experiment 2, 31 English toddlers detected word-final voicing mispronunciations but did not compensate for illicit voicing assimilations. In summary, French and English 24-month-olds can take into account fine phonetic detail even if words are presented in the middle of sentences. In addition, French toddlers show language-specific compensation abilities for pronunciation variation caused by native voicing assimilation.
-
Using a picture pointing task, this study examines toddlers' processing of phonological alternations that trigger sound changes in connected speech. Three experiments investigate whether 2 1/2- to 3-year-old children take into account assimilations - processes by which phonological features of one sound spread to adjacent sounds - for the purpose of word recognition (e.g., in English, ten pounds can be produced as te[mp]ounds). English toddlers (n = 18) show sensitivity to native place assimilations during lexical access in Experiment 1. Likewise, French toddlers (n = 27) compensate for French voicing assimilations in Experiment 2. However, French toddlers (n = 27) do not take into account a hypothetical non-native place assimilation rule in Experiment 3, suggesting that compensation for assimilation is already language specific.
-
Visual cues from the speaker's face, such as the discriminable mouth movements used to produce speech sounds, improve discrimination of these sounds by adults. The speaker's face, however, provides more information than just the mouth movements used to produce speech; it also provides a visual indexical cue of the identity of the speaker. The current article examines the extent to which there is separable encoding of speaker identity in speech processing and asks whether speech discrimination is influenced by speaker identity. Does consistent pairing of different speakers' faces with different sounds (that is, hearing one speaker saying one sound and a second speaker saying the second sound) influence the brain's discrimination of the sounds? ERP data from participants previously exposed to consistent speaker-sound pairing indicated improved detection of the phoneme change relative to participants previously exposed to inconsistent speaker-sound pairing (that is, hearing both speakers say both sounds). The results strongly suggest an influence of visual speaker identity in speech processing.
-
What is the relative salience of different aspects of word meaning in the developing lexicon? The current study examines the time-course of retrieval of semantic and color knowledge associated with words during toddler word recognition: At what point do toddlers orient toward an image of a yellow cup upon hearing color-matching words such as 'banana' (typically yellow) relative to unrelated words (e.g., 'house')? Do children orient faster to semantically matching images relative to color-matching images, for example, orient faster to an image of a cookie relative to a yellow cup upon hearing the word 'banana'? The results strongly suggest a prioritization of semantic information over color information in children's word-referent mappings. This indicates that even for natural objects (e.g., food, animals that are more likely to have a prototypical color), semantic knowledge is a more salient aspect of toddlers' word meaning than color knowledge. For 24-month-old Dutch toddlers, bananas are thus more edible than they are yellow.
-
The current study investigated the interaction of implicit grammatical gender and semantic category knowledge during object identification. German-learning toddlers (24-month-olds) were presented with picture pairs and heard a noun (without a preceding article) labeling one of the pictures. Labels for target and distracter images either matched or mismatched in grammatical gender and either matched or mismatched in semantic category. When target and distracter overlapped in both semantic and gender information, target recognition was impaired compared with when target and distracter overlapped on only one dimension. Results suggest that by 24 months of age, German-learning toddlers are already forming not only semantic but also grammatical gender categories and that these sources of information are activated, and interact, during object identification.
-
At about 7 months of age, infants listen longer to sentences containing familiar words – but not deviant pronunciations of familiar words (Jusczyk & Aslin, 1995). This finding suggests that infants are able to segment familiar words from fluent speech and that they store words in sufficient phonological detail to recognize deviations from a familiar word. It does not, however, tell us whether it is nevertheless easier for infants to segment words from sentences when these words sound similar to familiar words. Across three experiments, the present study investigates whether familiarity with a word helps infants segment similar-sounding words from fluent speech and if they are able to discriminate these similar-sounding words from other words later on. Results suggest that word-form familiarity may be a powerful tool bootstrapping further lexical acquisition.
-
While the specificity of infants’ early lexical representations has been studied extensively, researchers have only recently begun to investigate how words are organized in the developing lexicon and what mental representations are activated during processing of a word. Integrating these two lines of research, the current study asks how specific the phonological match between a perceived word and its stored form has to be in order to lead to (cascaded) lexical activation of related words during infant lexical processing. We presented German 24-month-olds with a cross-modal semantic priming task where the prime word was either correctly or incorrectly pronounced. Results indicate that correct pronunciations and mispronunciations both elicit similar semantic priming effects, suggesting that the infant word recognition system is flexible enough to handle deviations from the correct form. This might be an important prerequisite to children’s ability to cope with imperfect input and to recognize words under more challenging circumstances.
-
Examined whether bilinguals implicitly generate picture labels in both of their languages when tested in their first language (L1) with a cross-modal event-related potential (ERP) priming paradigm. The results extended previous findings by showing that not only do bilinguals implicitly generate the labels for visually fixated images in both of their languages when immersed in their L1, but also that these implicitly generated labels in one language could prime recognition of subsequently presented auditory targets across languages (i.e., L2-L1). Thus, support is provided for cascaded models of lexical access during speech production as well as a new priming paradigm for the study of bilingual language processing.