Table 1. Test Words, Nonwords, and Part-Words in Experiments 1 and 2

Source publication
Article
Full-text available
The present experiments investigated how the process of statistically segmenting words from fluent speech is linked to the process of mapping meanings to words. Seventeen-month-old infants first participated in a statistical word segmentation task, which was immediately followed by an object-label-learning task. Infants presented with labels that w...

Contexts in source publication

Context 1
... Segmentation Task: To control for possible arbitrary listening preferences, we created two counterbalanced versions of the artificial language (Language 1: timay, dobu, piga, mano; Language 2: nomay, mati, gabu, pido). Words from Language 1 were nonwords for Language 2, and vice versa (see Table 1). A trained female speaker unfamiliar with the stimuli read sequences of approximately 20 syllables from each language; each sequence included 1 or 2 extra syllables at the beginning and end that were cut from the final recording. ...
Context 2
... middle syllables were spliced into a fluent speech stream with no pauses or other reliable acoustic cues to word boundaries (98 syllables/min; F0 = 224 Hz). As in Experiment 1, we generated two counterbalanced languages (Language 1 words: timay, dobu, gapi, moku; Language 2 words: pimo, kuga, buti, maydo; see Table 1). ...
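The cue that makes such streams segmentable is the transitional probability (TP) between adjacent syllables, TP(B|A) = frequency(AB) / frequency(A), which stays high inside a word and drops where two words abut. The Python sketch below illustrates that design logic on the Language 1 words of Experiment 1; it is not the authors' stimulus-generation code, and the syllabification, the no-immediate-repeat constraint, and the stream length are assumptions made for the example.

import random
from collections import Counter

# Language 1 words from Experiment 1; the two-syllable split is assumed for illustration.
WORDS = {"timay": ["ti", "may"], "dobu": ["do", "bu"],
         "piga": ["pi", "ga"], "mano": ["ma", "no"]}

def build_stream(n_tokens=180, seed=1):
    # Concatenate randomly ordered word tokens into a pause-free syllable stream,
    # avoiding immediate repetition of the same word (an assumed constraint).
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_tokens):
        word = rng.choice([w for w in WORDS if w != prev])
        stream.extend(WORDS[word])
        prev = word
    return stream

def transitional_probabilities(stream):
    # TP(B|A) = frequency(AB) / frequency(A), over adjacent syllable pairs.
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {pair: count / first_counts[pair[0]] for pair, count in pair_counts.items()}

tps = transitional_probabilities(build_stream())
print("within word    ti -> may:", round(tps[("ti", "may")], 2))             # 1.0
print("across boundary may -> do:", round(tps.get(("may", "do"), 0.0), 2))   # about 0.33

Running the sketch prints a within-word TP of 1.0 and across-boundary TPs near .33, the contrast that the segmentation task relies on in place of pauses or acoustic cues.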

Similar publications

Article
Full-text available
Our long-term objective is to develop an auditory training program that will enhance speech recognition in those situations where patients most want improvement. As a first step, the current investigation trained participants using either a single talker or multiple talkers to determine if auditory training leads to transfer-appropriate gains. The...
Article
Full-text available
To recognize phonemes across variation in talkers, listeners can use information about vocal characteristics, a process referred to as "talker normalization." The present study investigates the cortical mechanisms underlying talker normalization using fMRI. Listeners recognized target words presented in either a spoken list produced by a single tal...
Article
Full-text available
To determine whether a clinically obtainable measure of audibility, the aided Speech Intelligibility Index (SII; American National Standards Institute, 2007), is more sensitive than the pure-tone average (PTA) at predicting the lexical abilities of children who wear hearing aids (CHA). School-age CHA and age-matched children with normal hearing (CN...
Article
Full-text available
Cochlear implants provide users with limited spectral and temporal information. In this study, the amount of spectral and temporal information was systematically varied through simulations of cochlear implant processors using a noise-excited vocoder. Spectral information was controlled by varying the number of channels between 1 and 16, and tempora...
Article
Full-text available
This study aimed to determine the relative processing cost associated with comprehension of an unfamiliar native accent under adverse listening conditions. Two sentence verification experiments were conducted in which listeners heard sentences at various signal-to-noise ratios. In Experiment 1, these sentences were spoken in a familiar or an unfami...

Citations

... Previous perceptual narrowing research has relied on online computer mouse or key presses by the experimenter to calculate habituation and record infant looking times (for example, see Estes et al., 2007; Fennell & Waxman, 2010; Graf Estes & Bowen, 2013; Oakes et al., 2019; Polka et al., 2014; Singh et al., 2017; Sundara et al., 2008; for review, see Oakes, 2010). There are inherent disadvantages of button-press measurements for infant looking time. ...
... Not only does tracking statistical distributions help infants to find word forms in fluent speech, but high-TP sequences are also learned better as object labels than their equally frequent low-TP counterparts at 17 months of age (Graf Estes, Evans, Alibali, & Saffran, 2007; Hay, Pelucchi, Estes, & Saffran, 2011). Hay et al. (2011) showed this by presenting 17-month-old monolingual English-learning infants with naturally spoken Italian sentences that contained four target word forms (Hay et al., 2011, Experiment 3). ...
... Immediately following familiarization, infants were trained on mappings between novel objects and either HTP or LTP words. Infants readily learned mappings between HTP words and objects, but they failed to learn LTP mappings (see Graf Estes et al., 2007, for a similar finding using artificial language materials). These results suggest that statistical learning may contribute to infants' ability to solve a fundamental problem in language learning (identifying word forms within continuous speech) and that the outcome of this process may play a role in solving yet another challenging task (mapping word forms to referents). ...
... Infants' sensitivity to statistical distributions in their linguistic environment influences both segmenting potential word forms (e.g., Saffran et al., 1996; Pelucchi et al., 2009a) and mapping those word forms to referents (e.g., Graf Estes et al., 2007; Hay et al., 2011). Although word forms with strong statistics (e.g., high TPs between syllables) are better learned as labels at 17 months (Graf Estes et al., 2007; Hay et al., 2011), by 21-23 months, HTP word forms are no longer advantaged in mapping tasks, at least when there is no need to remember word forms over time (Shoaib et al., 2018). Nevertheless, in Experiment 1, we found that 23-month-old infants are influenced by word forms' TPs when there is a delay between encountering the word forms in speech and opportunities to map them to referents. ...
Article
Infants are sensitive to statistics in spoken language that aid word‐form segmentation and immediate mapping to referents. However, it is not clear whether this sensitivity influences the formation and retention of word‐referent mappings across a delay, two real‐world challenges that learners must overcome. We tested how the timing of referent training, relative to familiarization with transitional probabilities (TPs) in speech, impacts English‐learning 23‐month‐olds’ ability to form and retain word‐referent mappings. In Experiment 1, we tested infants’ ability to retain TP information across a 10‐min delay and use it in the service of word learning. Infants successfully mapped high‐TP but not low‐TP words to referents. In Experiment 2, infants readily mapped the same words even when they were unfamiliar. In Experiment 3, high‐ and low‐TP word‐referent mappings were trained immediately after familiarization, and infants readily remembered these associations 10 min later. In sum, although 23‐month‐old infants do not need strong statistics to map word forms to referents immediately, or to remember those mappings across a delay, infants are nevertheless sensitive to these statistics in the speech stream, and they influence mapping after a delay. These findings suggest that, by 23 months of age, sensitivity to statistics in speech may impact infants’ language development by leading word forms with low coherence to be poorly mapped following even a short period of consolidation.
... TPs tend to dip at word boundaries, and thus provide information about which syllable sequences form a word and which span a word boundary (Saksida et al., 2017; Swingley, 1999, 2005). There is also evidence that TPs play a facilitative role in learning word meanings, at least in 17-month-olds (Graf Estes et al., 2007; Hay et al., 2011), such that high-TP (HTP) words are more readily mapped to referents than words with lower statistical coherence, that is, low-TP (LTP) words. However, infants' sensitivity to statistical structure is not likely to play a static role in language learning (Forest et al., 2023). ...
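As a toy illustration of that dip heuristic (not code from any of the cited studies), the Python sketch below places a word boundary wherever the TP between two adjacent syllables is lower than the TPs on either side; the syllables and TP values are invented, roughly matching a four-word artificial language with within-word TPs of 1.0 and across-boundary TPs near .33.

def segment_at_tp_dips(syllables, tps):
    # tps[i] is the TP between syllables[i] and syllables[i + 1].
    # Posit a boundary before syllable i when the TP into it is a local minimum.
    words, current = [], [syllables[0]]
    for i in range(1, len(syllables)):
        into = tps[i - 1]
        before = tps[i - 2] if i >= 2 else float("inf")
        after = tps[i] if i < len(tps) else float("inf")
        if into < before and into < after:
            words.append("".join(current))
            current = []
        current.append(syllables[i])
    words.append("".join(current))
    return words

# Invented example: high TPs inside words, a dip at each word boundary.
syllables = ["ti", "may", "do", "bu", "pi", "ga"]
tps = [1.0, 0.33, 1.0, 0.33, 1.0]
print(segment_at_tp_dips(syllables, tps))  # ['timay', 'dobu', 'piga']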
... In addition to correlational evidence that tracking TPs in speech supports lexical development, there is evidence from experimental tasks that TPs influence how readily infants learn speech sequences as labels in mapping tasks (Graf Estes et al., 2007; Hay et al., 2011). For example, in Hay et al. (2011), 17-month-olds were familiarized with Italian speech containing two HTP and two LTP words (see Table 1). ...
... If the "HTP advantage" were actually an "LTP disadvantage," then LTP words should have been learned worse than both HTP words and unfamiliar words. In an artificial language paradigm, Graf Estes et al. (2007) also found that infants are better able to map HTP sequences than LTP part-sequences or unfamiliar words, consistent with the existence of a HTP advantage at this age. ...
Article
Full-text available
Infants’ sensitivity to transitional probabilities (TPs) supports language development by facilitating mapping high-TP (HTP) words to meaning, at least up to 18 months of age. Here we tested whether this HTP advantage holds as lexical development progresses, and infants become better at forming word–referent mappings. Two groups of 24-month-olds (N = 64 and all White, tested in the United States) first listened to Italian sentences containing HTP and low-TP (LTP) words. We then used HTP and LTP words, and sequences that violated these statistics, in a mapping task. Infants learned HTP and LTP words equally well. They also learned LTP violations as well as LTP words, but learned HTP words better than HTP violations. Thus, by 2 years of age sensitivity to TPs does not lead to an HTP advantage but rather to poor mapping of violations of HTP word forms.
... Assigning meaning to words is another challenge for language learners. There is evidence that, early in development, recently segmented words (with stronger TPs) are treated as better candidate labels on subsequent mapping tasks (Graf Estes et al., 2007; Hay et al., 2011). While the benefit of high-TP sequences during word learning appears to diminish across development (Karaman et al., 2022; Mirman et al., 2008; Shoaib et al., 2018), learners continue to be remarkably successful both at segmenting speech using TP information (Saffran et al., 1996; but see Black & Bergmann, 2017) and at making one-to-one mappings between labels and referents (Graf Estes, 2009; Graf Estes et al., 2007; Lany & Saffran, 2010). Furthermore, across the lifespan, language learners rely on phonotactics from their natural languages when learning novel words, with words with stronger PPs being learned faster and more accurately than words with weaker PPs (Storkel et al., 2013; but see Cristia, 2018). ...
Article
Full-text available
Language learners track conditional probabilities to find words in continuous speech and to map words and objects across ambiguous contexts. It remains unclear, however, whether learners can leverage the structure of the linguistic input to do both tasks at the same time. To explore this question, we combined speech segmentation and cross-situational word learning into a single task. In Experiment 1, when adults (N = 60) simultaneously segmented continuous speech and mapped the newly segmented words to objects, they demonstrated better performance than when either task was performed alone. However, when the speech stream had conflicting statistics, participants were able to correctly map words to objects, but were at chance level on speech segmentation. In Experiment 2, we used a more sensitive speech segmentation measure to find that adults (N = 35), exposed to the same conflicting speech stream, correctly identified non-words as such, but were still unable to discriminate between words and part-words. Again, mapping was above chance. Our study suggests that learners can track multiple sources of statistical information to find and map words to objects in noisy environments. It also prompts questions on how to effectively measure the knowledge arising from these learning experiences.
... They can pass implicit ToM tasks because they are excellent learners of statistical regularities in behavioural patterns, and they have biased attention towards human faces (especially eyes) and human motion, allowing them to predict how events unfold. Statistical learning in infants has been demonstrated for word segmentation (Saffran, Aslin & Newport, 1996), word-object pairings (Estes et al., 2007; Hay et al., 2011; Smith & Yu, 2008) and word categorisation (Erickson, Thiessen & Graf Estes, 2014; Saffran & Kirkham, 2018). Furthermore, it is not limited to language acquisition but is domain general, as evidenced by infants learning statistical patterns related to visually presented object sequences (Kirkham, Slemmer & Johnson, 2002). ...
Article
Full-text available
Understanding the origins of human social cognition is a central challenge in contemporary science. In recent decades, the idea of a 'Theory of Mind' (ToM) has emerged as the most popular way of explaining unique features of human social cognition. This default view has been progressively undermined by research on 'implicit' ToM, which suggests that relevant precursor abilities may already be present in preverbal human infants and great apes. However, this area of research suffers from conceptual difficulties and empirical limitations, including explanatory circularity, over-intellectualisation, and inconsistent empirical replication. Our article breaks new ground by adapting 'script theory' for application to both linguistic and non-linguistic agents. It thereby provides a new theoretical framework able to resolve the aforementioned issues, generate novel predictions, and provide a plausible account of how individuals make sense of the behaviour of others. Script theory is based on the premise that pre-verbal infants and great apes are capable of basic forms of agency-detection and non-mentalistic goal understanding, allowing individuals to form event-schemata that are then used to make sense of the behaviour of others. We show how script theory circumvents fundamental problems created by ToM-based frameworks, explains patterns of inconsistent replication, and offers important novel predictions regarding how humans and other animals understand and predict the behaviour of others.
... For instance, one of the most common manipulations in PAL is related to word familiarity, and the degree to which long-term knowledge contributes to short-term encoding and retrieval processes (Ellis & Beaton, 1993; Papagno et al., 1991; Papagno & Vallar, 1992; Service & Craik, 1993), but this topic has barely been broached in CSWL (see Escudero et al., 2013, for an exception). Similarly, while many studies have examined the role of verbal working memory as the system underlying PAL (e.g., Baddeley et al., 1998, 2017; Ellis & Beaton, 1993; Freedman & Martin, 2001; Gupta, 2003; Kazanas et al., 2020; Litt et al., 2019; Papagno et al., 1991; Papagno & Vallar, 1992; Rothkopf, 1957; Steinel et al., 2007; Ylinen et al., 2020), studies on CSWL have developed from statistical learning theories applied to language (e.g., Aslin, 2017; Conway et al., 2010; Frost et al., 2015; Graf Estes et al., 2007; Mirman et al., 2008; Saffran et al., 1996a, b), and thus have paid very little attention to the memory systems that might underlie CSWL. Yet, although PAL and CSWL differ in the level of ambiguity at word-referent exposure, both paradigms involve associating a word form with a referent, and maintaining this mapping over time. ...
Article
Word learning is one of the first steps into language, and vocabulary knowledge predicts reading, speaking, and writing ability. There are several pathways to word learning and little is known about how they differ. Previous research has investigated paired-associate (PAL) and cross-situational word learning (CSWL) separately, limiting the understanding of how the learning process compares across the two. In PAL, the roles of word familiarity and working memory have been thoroughly examined, but these same factors have received very little attention in CSWL. We randomly assigned 126 monolingual adults to PAL or CSWL. In each task, names of 12 novel objects were learned (six familiar words, six unfamiliar words). Logistic mixed-effects models examined whether word-learning paradigm, word type and working memory (measured with a backward digit-span task) predicted learning. Results suggest better learning performance in PAL and on familiar words. Working memory predicted word learning across paradigms, but no interactions were found between any of the predictors. This suggests that PAL is easier than CSWL, likely because of reduced ambiguity between the word and the referent, but that learning across both paradigms is equally enhanced by word familiarity, and similarly supported by working memory.
... By extracting correlations, our brains determine a "best guess" about future sensory inputs and thus perform better (de Lange, Heilbron, & Kok, 2018). The statistical learning effect has been observed repeatedly in the literature using a variety of stimulus sequences, such as word segmentation (Graf Estes, Evans, Alibali, & Saffran, 2007), actions (Baldwin, Andersson, Saffran, & Meyer, 2008), and abstract shapes (Turk-Browne et al., 2005), even when the task involves multiple streams presented simultaneously (Jimenez & Vazquez, 2005; Salet, Kruijne, & van Rijn, 2021; Weiermann, Cock, & Meier, 2010). For example, participants have detected and tracked the underlying statistics of two streams of artificial languages, suggesting an ability to conceptualize multiple linguistic representations (Benitez, Bulgarelli, Byers-Heinlein, Saffran, & Weiss, 2020). ...
Article
Numerous joint action studies have demonstrated that certain low-level aspects (e.g., stimuli and responses) of a co-actor's task can be automatically and implicitly represented by us as actors, biasing our own task performance in a joint action setup. However, it remains unclear whether individuals also represent more abstract, high-level aspects of a co-actor's task, such as regularity. In the first five experiments, participants performed the task alongside their co-actors and responded to a mixed shape sequence generated by randomly interleaving two fixed-order sequences of shapes in both the pre- and post-test sessions. However, participants underwent different intermediate practice sessions across experiments. When practicing their own fixed-order sequences in a mixed shape sequence, either together with another person (Experiment 1) or alone but informed that their partner was performing the same practice task in a different room (Experiment 4), participants exhibited a learning effect on their co-actors' practiced sequences. This indirect learning effect was absent when one of the co-actors did not participate due to either being removed from the practice (Experiment 2) or sitting still without offering responses (Experiment 3), as well as when the two co-actors practiced together but responded to two distinct properties of stimuli (e.g., colour and shape, respectively), with one having regularity and the other not. Finally, participants exhibited comparable direct learning effects on their own practiced sequences for Experiments 1–5 as when performing the pre-test, practice, and post-test sessions alone for Experiment 6. These results demonstrate that, when practicing together, or even when believing that they are acting together with a partner, co-actors do represent the task regularity of one another through social statistical learning and transfer this learned regularity to subsequent task performances. The present study extends our understanding of co-representation in the joint action context in terms of the more abstract and high-level task features people co-represent, such as a co-actor's task regularity.
... At the babbling stage toward the end of the first year, language-specific phonological knowledge becomes manifest in the infant's output (Vihman et al. 1985; Werker and Tees 1999; Werker and Hensch 2015) and, in close temporal vicinity, phonetic discrimination and categorization can be demonstrated in speech perception (Werker and Tees 1999; Tsao et al. 2004; Kuhl et al. 2005; Werker and Hensch 2015). Further studies have shown that infants can store whole word forms as early as 8 months of age (Jusczyk and Hohne 1997), and such word storage occurs in accordance with statistical properties of the lexicon (Estes et al. 2007). Furthermore, mathematical modeling of vocabulary exposure in infants suggests that by 18 months children may have as many as a few thousand whole words in their lexicon, without necessarily yet associating them with any meaning (Swingley 2007). ...
Article
Full-text available
Although teaching animals a few meaningful signs is usually time-consuming, children acquire words easily after only a few exposures, a phenomenon termed “fast-mapping.” Meanwhile, most neural network learning algorithms fail to achieve reliable information storage quickly, raising the question of whether a mechanistic explanation of fast-mapping is possible. Here, we applied brain-constrained neural models mimicking fronto-temporal-occipital regions to simulate key features of semantic associative learning. We compared networks (i) with prior encounters with phonological and conceptual knowledge, as claimed by fast-mapping theory, and (ii) without such prior knowledge. Fast-mapping simulations showed word-specific representations to emerge quickly after 1–10 learning events, whereas direct word learning showed word-meaning mappings only after 40–100 events. Furthermore, hub regions appeared to be essential for fast-mapping, and attention facilitated it, but was not strictly necessary. These findings provide a better understanding of the critical mechanisms underlying the human brain’s unique ability to acquire new words rapidly.
... For example, dog was understood by a higher proportion of children at 13 months of age than was deer, and this should be reflected by a more accurate segmentation of dog than deer (i.e., dog is correctly segmented on more occasions). Theoretically, the reasoning behind using word learning as a proxy for segmentation performance is that vocabulary knowledge (word-meaning mapping) is facilitated by word segmentation (e.g., Estes et al., 2007; Hay et al., 2011). For example, in Estes et al.'s (2007) study, infants were able to extract, store, and recognize word forms previously presented in fluent speech to successfully perform a label-object association task. ...
Article
Word segmentation is a crucial step in children's vocabulary learning. While computational models of word segmentation can capture infants' performance in small-scale artificial tasks, the examination of early word segmentation in naturalistic settings has been limited by the lack of measures that can relate models' performance to developmental data. Here, we extended CLASSIC (Chunking Lexical and Sublexical Sequences in Children; Jones et al., 2021), a corpus-trained chunking model that can simulate several memory, phonological, and vocabulary learning phenomena, to allow it to perform word segmentation using utterance-boundary information, and we have named this extended version CLASSIC utterance boundary (CLASSIC-UB). Further, we compared our model to the performance of children on a wide range of new measures, capitalizing on the link between word segmentation and vocabulary learning abilities. We showed that the combination of chunking and utterance-boundary information used by CLASSIC-UB allowed a better prediction of English-learning children's output vocabulary than did other models. A one-page Accessible Summary of this article in non-technical language is freely available in the Supporting Information online and at https://oasis-database.org
... One promising direction for identifying predictors of persisting delay in late talkers is to look at the processes that support language growth in typically developing children. A wide range of research shows that infants and toddlers are adept at finding and using regularities to learn words, including statistical differences in transition probabilities (Graf Estes et al., 2007), patterns of word-object co-occurrence (Smith & Yu, 2008), and the relation between the syntactic context of a naming event (e.g. "this is a…") and attention to specific object features (Landau et al., 1992). ...
Article
Full-text available
Children with delays in expressive language (late talkers) have heterogeneous developmental trajectories. Some are late bloomers who eventually "catch up," but others have persisting delays or are later diagnosed with developmental language disorder (DLD). Early in development it is unclear which children will belong to which group. We compare the toddler vocabulary composition of late talkers with different long-term outcomes. The literature suggests most children with typical development (TD) have vocabularies dominated by names for categories organized by similarity in shape (e.g., cup), which supports a bias to attend to shape when generalizing names of novel nouns, a bias associated with accelerated vocabulary development. Previous work has shown that as a group, late talkers tend to say fewer names for categories organized by shape and are less likely to show a "shape bias" than TD children. Here, in a retrospective analysis of 850 children, we compared the vocabulary composition of groups of toddlers who were late bloomers or persisting late talkers. At Time 1 (13-27 months), the persisting late talkers said a smaller proportion of shape-based nouns than both TD children and late bloomers who "caught up" to typically sized vocabularies months later (18-38 months). Additionally, children who received a DLD diagnosis between 4 and 7 years said a significantly smaller proportion of shape-based nouns in year two than TD children and children with other diagnoses (e.g., dyslexia). These findings bring new insight into sources of heterogeneity amongst late talkers and offer a new metric for assessing risk.
Research Highlights:
Toddler vocabulary composition, including the proportion of names for categories organized by shape, like spoon, was used to retrospectively compare outcomes of late-talking children.
Persisting late talkers said a smaller proportion of shape-based nouns during toddlerhood relative to late bloomers (late talkers who later caught up to have typically sized vocabularies).
Children with later DLD diagnoses said a smaller proportion of shape-based nouns during toddlerhood relative to children without a DLD diagnosis.
The data illustrate the cascading effects of vocabulary composition on subsequent language development and suggest vocabulary composition may be one important marker of persisting delays.