Article

Do Phonological Representations Specify Variables? Evidence from the Obligatory Contour Principle


Abstract

Mental variables are central to symbolic accounts of cognition. Conversely, according to the pattern associator hypothesis, variables are obsolete. We examine the representation of variables by investigating the Obligatory Contour Principle (OCP, McCarthy, 1986) in Hebrew. The OCP constrains gemination in Hebrew roots. Gemination is well formed at the root's end (e.g., SMM), but not at its beginning (e.g., SSM). Roots and geminates, however, are variables; hence, according to the pattern associator view, the OCP is unrepresentable. Three experiments demonstrate that speakers are sensitive to the presence of root gemination and constrain its location. In forming words from novel biconsonantal roots, speakers prefer to reduplicate the root's final radical over its initial one, and they rate such outputs as more acceptable. The avoidance or rejection of root-initial gemination is independent of its position in the word and is inexplicable by the statistical frequency of root tokens. Our results suggest that linguistic representations specify variables. Speakers' competence, however, is governed by violable constraints.
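The constraint at issue can be sketched as an operation over variables rather than stored instances. The toy checker below (function names and labels are illustrative, not from the paper) encodes *XXY: root-initial gemination is banned, while root-final gemination is tolerated.

```python
# Hypothetical sketch of the OCP as a constraint over variables.
# X, Y, Z stand for ANY consonants; the constraint inspects only
# whether the variables are identical, never which consonants they are.

def violates_ocp(root: str) -> bool:
    """True if a triconsonantal root begins with a geminate (XXY)."""
    x, y, z = root  # bind the three radicals to variables
    return x == y and y != z

def classify(root: str) -> str:
    """Illustrative labels for the three root types discussed above."""
    if violates_ocp(root):
        return "ill-formed (*XXY)"
    if root[1] == root[2]:
        return "well-formed geminate (XYY)"
    return "well-formed (XYZ)"

assert classify("ssm") == "ill-formed (*XXY)"
assert classify("smm") == "well-formed geminate (XYY)"
assert classify("sml") == "well-formed (XYZ)"
```

Because the check compares variables for identity rather than listing specific consonants, it applies to any triconsonantal root, including roots never encountered before; this is the sense in which the OCP is unrepresentable for a model lacking variables.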


... Psycholinguistic research over the last 20 years has closely investigated whether the unique root-and-pattern structure described in root-based linguistic theories is reflected in the mental processes of speakers of Hebrew and of Arabic (a Semitic language with a similar root-and-pattern structure) during complex word recognition and production. Overall, robust empirical evidence was found for the existence of the consonantal root as a mental constituent, using various techniques such as masked priming (Hebrew: Deutsch, Frost, & Forster, 1998; Feldman & Bentin, 1994; Frost, Forster, & Deutsch, 1997; Arabic: Boudelaa & Marslen-Wilson, 2005), cross-modal priming (Hebrew: Frost, Deutsch, Gilboa, Tannenbaum, & Marslen-Wilson, 2000; Arabic: Boudelaa & Marslen-Wilson, 2011), the picture-word interference paradigm (Deutsch, 2016; Deutsch & Meir, 2011; Kolan, Leikin, & Zwitserlood, 2011), the segment-switching task (Feldman, Frost, & Pnini, 1995), the semantic judgement task (Prior & Markus, 2014), examination of pseudowords in a lexical decision task (Yablonski & Ben-Shachar, 2016), elicited production and acceptability judgement of novel words (Berent, Everett, & Shimron, 2001; Berent & Shimron, 1997), eye tracking (Deutsch, Frost, Pelleg, Pollatsek, & Rayner, 2003; Deutsch, Frost, Pollatsek, & Rayner, 2005), and online measures such as MEG (Gwilliams & Marantz, 2015; Kastner, Pylkkänen, & Marantz, 2018), fMRI (lexical relatedness judgement: Bick, Goelman, & Frost, 2008; masked priming: Bick, Frost, & Goelman, 2010) and EEG (Boudelaa, Pulvermüller, Hauk, Shtyrov, & Marslen-Wilson, 2010). ...
... Berent and colleagues assumed that if speakers apply this constraint to novel roots, it would support the existence of roots as mental symbolic variables. In acceptability judgement tasks, speakers have rated novel words with novel initial-root gemination as very unnatural compared to novel words with novel roots of three different consonants or with final-root gemination (Berent et al., 2001;Berent & Shimron, 1997). In elicited production tasks, they were presented with a biconsonantal novel root (e.g., SM) and an existing target vowel pattern (e.g., CaCaCti), and were required to produce a novel word with the root and pattern presented to them. ...
... In elicited production tasks, they were presented with a biconsonantal novel root (e.g., SM) and an existing target vowel pattern (e.g., CaCaCti), and were required to produce a novel word combining the root and pattern presented to them. The results showed that novel words were produced with final-root gemination (doubling the consonant in the final position of the root, such as samamti) in 46% of the responses, while fewer than 0.5% of the responses included novel words with initial-root gemination (sasamti; Berent, 2002; Berent et al., 2001). ...
Thesis
For many years, psycholinguistic evidence has been predominantly based on findings from native speakers of Indo-European languages, primarily English, thus providing a rather limited perspective into the human language system. In recent years, a growing body of experimental research has been devoted to broadening this picture, testing a wide range of speakers and languages and aiming to understand the factors that lead to variability in linguistic performance. The present dissertation investigates sources of variability within the morphological domain, examining how and to what extent morphological processes and representations are shaped by specific properties of languages and speakers. Firstly, the present work focuses on a less explored language, Hebrew, to investigate how the unique non-concatenative morphological structure of Hebrew, namely a non-linear combination of consonantal roots and vowel patterns to form lexical entries (L-M-D + CiCeC = limed ‘teach’), affects morphological processes and representations in the Hebrew lexicon. Secondly, a less investigated population was tested: late learners of a second language. We directly compare native (L1) and non-native (L2) speakers, specifically highly proficient and immersed late learners of Hebrew. Throughout all publications, we have focused on the morphological phenomenon of inflectional classes (called binyanim; singular: binyan), comparing productive (class Piel, e.g., limed ‘teach’) and unproductive (class Paal, e.g., lamad ‘learn’) verbal inflectional classes.
By using this test case, two psycholinguistic aspects of morphology were examined: (i) how morphological structure affects online recognition of complex words, using masked priming (Publications I and II) and cross-modal priming (Publication III) techniques, and (ii) what type of cues are used when extending morpho-phonological patterns to novel complex forms, a process referred to as morphological generalization, using an elicited production task (Publication IV). The findings obtained in the four manuscripts, either published or under review, provide significant insights into the role of productivity in Hebrew morphological processing and generalization in L1 and L2 speakers. Firstly, the present L1 data revealed a close relationship between productivity of Hebrew verbal classes and recognition process, as revealed in both priming techniques. The consonantal root was accessed only in the productive class (Piel) but not the unproductive class (Paal). Another dissociation between the two classes was revealed in the cross-modal priming, yielding a semantic relatedness effect only for Paal but not Piel primes. These findings are taken to reflect that the Hebrew mental representations display a balance between stored undecomposable unstructured stems (Paal) and decomposed structured stems (Piel), in a similar manner to a typical dual-route architecture, showing that the Hebrew mental lexicon is less unique than previously claimed in psycholinguistic research. The results of the generalization study, however, indicate that there are still substantial differences between inflectional classes of Hebrew and other Indo-European classes, particularly in the type of information they rely on in generalization to novel forms. Hebrew binyan generalization relies more on cues of argument structure and less on phonological cues. 
Secondly, clear L1/L2 differences were observed in the sensitivity to abstract morphological and morpho-syntactic information during complex word recognition and generalization. While L1 Hebrew speakers were sensitive to the binyan information during recognition, expressed by the contrast in root priming, L2 speakers showed similar root priming effects for both classes, but only when the primes were presented in an infinitive form. A root priming effect was not obtained for primes in a finite form. These patterns are interpreted as evidence for a reduced sensitivity of L2 speakers to morphological information, such as information about inflectional classes, and for processing costs in the recognition of forms carrying complex morpho-syntactic information. Reduced reliance on structural information cues was also found in the production of novel verbal forms, with the L2 group displaying a weaker effect of argument structure for Piel responses in comparison to the L1 group. Given the L2 results, we suggest that morphological and morpho-syntactic information remains challenging for late bilinguals, even at high proficiency levels.
... There is a growing body of literature showing that phonological grammar influences phonological performance. We know that grammar plays a role in phoneme identification (Coetzee, 2005;Massaro and Cohen, 1983;Moreton, 2002), the segmentation of speech into words (Kirk, 2001;Suomi et al., 1997), lexical decision (Berent, Shimron and Vaknin, 2001), word-likeness ratings (Berent, Everett and Shimron, 2001;Frisch and Zawaydeh, 2001), etc. Once we accept that performance reflects the influence of grammar, we can use performance data as a window on what grammar looks like. ...
... This paper is structured as follows: I start out with a general discussion of the relationship between grammar and word-likeness judgments. The next section discusses the results of word-likeness experiments performed by Berent and colleagues (Berent and Shimron, 1997;Berent, Everett and Shimron, 2001;Berent, Shimron and Vaknin, 2001) with Hebrew speakers. These experiments show that Hebrew speakers categorically distinguish between grammatical and ungrammatical forms in some task conditions, but that they also make gradient well-formedness distinctions in other task conditions. ...
... Berent and colleagues (Berent, Everett and Shimron, 2001;Berent, Shimron and Vaknin, 2001;Berent and Shimron, 1997) conducted a series of experiments in which they tested whether this restriction plays a role in how Hebrew speakers rate nonce words. In word-likeness rating tasks, they found that Hebrew speakers rated the two kinds of possible words, [X-Y-Z] and [X-Y-Y], equally good and both better than the ungrammatical *[X-X-Y]-forms. ...
Article
In this paper, I discuss the results of word-likeness rating experiments with Hebrew and English speakers that show that language users use their grammar in a categorical and a gradient manner. In word-likeness rating tasks, subjects make the categorical distinction between grammatical and ungrammatical – they assign all grammatical forms equally high ratings and all ungrammatical forms equally low ratings. However, in comparative word-likeness tasks, subjects are forced to make distinctions between different grammatical or ungrammatical forms. In these experiments, they make finer gradient well-formedness distinctions. This poses a challenge on the one hand to standard derivational models of generative grammar, which can easily account for the categorical distinction between grammatical and ungrammatical, but have more difficulty with the gradient well-formedness distinctions. It also challenges models in which the categorical distinction between grammatical and ungrammatical does not exist, but in which an ungrammatical form is simply a form with very low probability. I show that the inherent comparative character of an OT grammar enables it to model both kinds of behaviors in a straightforward manner.
... ssm) are rated less acceptable than roots with adjacent identical consonants at their end (e.g. smm; Berent & Shimron 1997; Berent, Everett & Shimron 2001(a)). The sensitivity to the location of identical root radicals is also observed online. ...
... Specifically, Berent et al. (2002) observed the asymmetry in the location of identical consonants for roots including novel geminate phonemes; hence ssm- and smm-type roots were equally (un)familiar. Likewise, speakers discriminate between novel smm-type roots and no-gemination controls where the materials are equated for the co-occurrence of root radicals (Berent et al. 2001(a)). These results demonstrate that speakers are sensitive to root structure, and that they can discriminate between various root types depending on the presence of identical consonants and their location in the root. ...
... (c) Identity is not entirely desirable. Replicating previous results (Berent & Shimron 1997; Berent et al. 2001(a)), roots with identical consonants at their end were rated lower than controls. This last result is unexpected under the view that root-final identity does not violate the OCP (McCarthy 1986). ...
Article
Full-text available
It is well known that Semitic languages restrict the co-occurrence of identical and homorganic consonants in the root. The identity hypothesis attributes this pattern to distinct constraints on identical and nonidentical homorganic consonants (e.g. McCarthy 1986, 1994). Conversely, the similarity hypothesis captures these restrictions in terms of a single monotonic ban on perceived similarity (Pierrehumbert 1993; Frisch, Broe & Pierrehumbert 1997). We compare these accounts by examining the acceptability of roots with identical and homorganic consonants at their end. If well-formedness is an inverse, monotonic function of similarity, then roots with identical (fully similar) consonants should be less acceptable than roots with homorganic (partially similar) consonants. Contrary to this prediction, Hebrew speakers prefer root-final identity to homorganicity. Our results suggest that speakers encode long-distance identity among root radicals in a manner that is distinct from feature similarity.
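The contrast between the two accounts can be made concrete with a toy similarity metric. The feature table below is a simplified placeholder, not a real phonological feature system:

```python
# Toy illustration of the two hypotheses compared above. The feature
# values are invented simplifications for three consonants.

FEATURES = {
    "s": {"place": "coronal", "manner": "fricative", "voice": False},
    "z": {"place": "coronal", "manner": "fricative", "voice": True},
    "m": {"place": "labial",  "manner": "nasal",     "voice": True},
}

def similarity(c1: str, c2: str) -> float:
    """Proportion of shared features: the similarity-hypothesis metric."""
    f1, f2 = FEATURES[c1], FEATURES[c2]
    return sum(f1[k] == f2[k] for k in f1) / len(f1)

# Similarity hypothesis: acceptability falls monotonically as similarity
# rises, so identical pairs (similarity 1.0) should be the worst case.
assert similarity("s", "s") == 1.0                  # full identity
assert similarity("s", "z") > similarity("s", "m")  # homorganic > unrelated

# Identity hypothesis: full identity is a distinct relation, evaluated
# separately from graded feature overlap.
def is_identical(c1: str, c2: str) -> bool:
    return c1 == c2
```

Under the similarity hypothesis, identical pairs (similarity 1.0) should be the least acceptable; the finding that speakers prefer root-final identity to homorganicity is what motivates treating identity as a distinct relation rather than the endpoint of a similarity scale.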
... Speakers' sensitivity to root structure. One source of evidence for speakers' knowledge of root structure 14 comes from rating tasks (Berent, Everett, & Shimron, 2001;Berent & Shimron, 1997). In these tasks, participants rate the acceptability of novel words as potential Hebrew words. ...
... Converging evidence for the representation of root structure is provided by a production task (Berent, Everett, & Shimron, 2001). In this task, participants were given a novel test root consisting of only two consonants (e.g., sm). ...
... On this view, the formation of geminates (e.g., sm → smm) is indistinguishable from the formation of nongeminate responses (e.g., sm → sml); the rate of geminate responses and their location are merely a by-product of the statistical structure of the Hebrew language. To examine this explanation, we (Berent, Everett, & Shimron, 2001) calculated the expected rate of geminate versus nongeminate responses from the distribution of existing Hebrew roots. The predicted rate at which the addition of a segment results in a root-final geminate versus a nonidentical segment for our materials is 4.9%. ...
Article
Full-text available
Connectionist models have gained considerable success as accounts of how printed words are named. Their success challenges the view of grapheme-to-phoneme correspondences (GPCs) as rules. By extension, however, this challenge is sometimes interpreted also as evidence against linguistic rules and variables. This inference tacitly assumes that the generalizations inherent in reading (specifically, GPCs) are similar in their scope to linguistic generalizations and that they are each reducible to token associations. I examine this assumption by comparing the scope of generalizations required for mapping graphemes to phonemes and several linguistic phonological generalizations. Marcus (1998b) distinguishes between two types of generalizations: those that fall within a model's training space and those that exceed it. The scope of generalizations is determined by the model's representational choices--specifically, the implementation of operations over mental variables. An analysis of GPCs suggests that such generalizations do not appeal to variables; hence, they may not exceed the training space. Likewise, certain phonological regularities, such as syllable phonotactic constraints and place assimilation, may be captured by an associative process. In contrast, other phonological processes appeal to variables; hence, such generalizations potentially exceed the training space. I discuss one such case, the obligatory contour principle. I demonstrate that speakers conform to this constraint and that their behavior is inexplicable by the statistical structure of the language. This analysis suggests that, unlike GPCs, phonological generalizations may exceed the training space. Thus, despite their success in modeling GPCs, eliminative connectionist models of phonology assembly may be unable to provide a complete account for phonology. To the extent that reading is subject to phonological constraints, its modeling may require implementing operations over variables.
... In contrast, generalizations predicted by the associative statistical account should be sensitive to the properties of specific root instances. Our experimental investigation of OCP effects in Hebrew demonstrates that speakers extend the constraint on root structure to novel roots (Berent, Bibi, & Tzelgov, 2000; Berent, Everett, & Shimron, 2001; Berent & Shimron, 1997; Vaknin, 2001). The existing findings, however, are inconsistent with a statistical explanation. ...
... sm → sml) even though the expected frequency of geminate responses was far lower than the expected frequency of addition responses (Berent, Everett, & Shimron, 2001). A second challenge to a statistical account is the sensitivity of speakers to the formal structure of geminates, i.e. their identity. ...
... This distributional fact is attributed to an active phonological constraint, the OCP. Previous experiments (Berent et al., 2000; Berent, Everett, & Shimron, 2001; Berent & Shimron, 1997; Berent, Shimron, & Vaknin, 2001) have demonstrated that the constraint on root structure generalizes to novel roots within the phonemic inventory of Hebrew. The present experiments examined whether this constraint generalizes to roots with foreign phonemes. ...
Article
Does the productive use of language stem from the manipulation of mental variables (e.g. "noun", "any consonant")? If linguistic constraints appeal to variables, rather than instances (e.g. "dog", "m"), then they should generalize to any representable novel instance, including instances that fall beyond the phonological space of a language. We test this prediction by investigating a constraint on the structure of Hebrew roots. Hebrew frequently exhibits geminates (e.g. ss) in its roots, but it strictly constrains their location: geminates are frequent at the end of the root (e.g. mss), but rare at its beginning (e.g. ssm). Symbolic accounts capture the ban on root-initial geminates as *XXY, where X and Y are variables that stand for any two distinct consonants. If the constraint on root structure appeals to the identity of abstract variables, then speakers should be able to extend it to root geminates with foreign phonemes, including phonemes with foreign feature values. We present findings from three experiments supporting this prediction. These results suggest that a complete account of linguistic processing must incorporate mechanisms for generalization outside the representational space of trained items. Mentally represented variables would allow speakers to make such generalizations.
... I review two sources of experimental evidence for the OCP: a production task and lexical decision experiments. An extensive discussion of these findings may be found in Berent, Everett, and Shimron (2001a) and Berent, Shimron, and Vaknin (2001b). ...
... The perception of gemination as indicating "wordhood" can also explain the absence of significant differences between root-initial gemination and no gemination. Rating experiments consistently show that root-initial gemination is considered significantly less acceptable than roots with no gemination (Berent & Shimron, 1997; Berent et al., 2001a). Why does the ill-formedness of such roots not facilitate their rejection compared to no-gemination roots? ...
... All things being equal, the frequency of geminate responses should be lower than nongeminate responses. To be more specific in this prediction, Berent et al. (2001a) calculated the expected frequency of geminate vs addition responses to their materials by summing the bigram frequency of all possible triliteral responses. The expected frequency of geminate responses compared to the total possible responses was 4%. ...
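The baseline computation described in this snippet can be sketched as follows. The consonant inventory and bigram counts below are invented for illustration; they are not the actual Hebrew root counts used by Berent et al.:

```python
# Sketch of the statistical baseline: given a biconsonantal input
# (e.g. "sm"), estimate how often chance alone would yield a root-final
# geminate by weighting every triliteral completion c1-c2-X by bigram
# frequency. BIGRAMS holds invented counts; unseen pairs get a
# smoothed default of 1.

CONSONANTS = "bgdklmnprst"  # simplified inventory, for illustration only

BIGRAMS = {("s", "m"): 40, ("m", "m"): 2, ("m", "l"): 30, ("m", "r"): 25}

def bigram(c1: str, c2: str) -> int:
    return BIGRAMS.get((c1, c2), 1)

def expected_geminate_rate(c1: str, c2: str) -> float:
    """Share of bigram-weighted completions c1-c2-X in which X == c2."""
    weights = {c3: bigram(c1, c2) * bigram(c2, c3) for c3 in CONSONANTS}
    return weights[c2] / sum(weights.values())

rate = expected_geminate_rate("s", "m")  # a few percent with these counts
```

Comparing such a chance baseline (a few percent) against the observed rate of geminate responses (46% in the production studies cited above) is what licenses the conclusion that the gemination pattern is not a by-product of the statistical structure of the language.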
Article
Hebrew frequently exhibits geminates in the root but strictly constrains their location: Root-initial gemination is rare (e.g., ssm), whereas root-final gemination (e.g., smm) is frequent. Four experiments demonstrate that Hebrew speakers generalize this constraint to novel roots. When speakers are encouraged to form a triliteral root from a biconsonantal input (e.g., sm), they frequently reduplicate the root's final radical (e.g., smm), but not its initial radical (e.g., ssm). Likewise, the rejection of novel root foils with root initial geminates is easier than roots with final geminates. In both cases, speakers' performance is inexplicable by the statistical structure of the Hebrew language. Speakers' ability to freely generalize the constraint on root structure suggests that their linguistic competence appeals to mental variables.
... in Semitic languages, identical consonants are frequent at the end of the putative root (e.g., svv), but rare in its beginning (e.g., ssv, Greenberg, 1950). Our earlier work has shown that Hebrew speakers generalize this restriction to novel forms, irrespective of the location of the (putative) root's radicals (i.e., consonants) in the word (Berent, Everett, & Shimron, 2001a;Berent, Marcus, Shimron, & Gafos, 2002;Berent, Vaknin, & Shimron, 2004). Those findings suggest that the Semitic grammar restricts the structure of atomic lexical representations, but leave open the question that is key for present purposes: is the domain of that restriction the root (consistent with the hypothesis that the structure of lexical representations varies across languages) or the stem's consonants (consistent with the hypothesis of Universally fixed lexical representations)? ...
... Although such a result would not falsify root-based approaches, it would call into question the need for a special morphological unit for Semitic. Indeed, linguistic (McCarthy, 1986) and psycholinguistic evidence (e.g., Berent & Shimron, 1997; Berent et al., 2001a) suggests that surface forms in which identical consonants are truly adjacent (e.g., massik, from the root ssk) are far less acceptable than forms in which they are separated by vowels (e.g., sisek). Such restrictions appear to be unrelated to the structure of lexical representations, as they apply even when the root lacks identical consonants altogether (e.g., the verb shalatti, 'I ruled', from the root shlt; note that the identical consonants in the surface form are strictly due to the concatenation of the stem and the suffix). ...
... The triplet members with identical consonants were matched on their segments. We also attempted to match root members for the frequency of their consonant combinations (C1C2, C2C3, and C1C3), determined by the co-occurrence of these radicals in a database including all productive Hebrew roots listed in the Even-Shoshan Hebrew dictionary (for further discussion, see Berent et al., 2001a). Because ssk roots are extremely rare, it was impossible to control for their radical co-occurrence. ...
Article
Is the structure of lexical representations universal, or do languages vary in the fundamental ways in which they represent lexical information? Here, we consider a touchstone case: whether Semitic languages require a special morpheme, the consonantal root. In so doing, we explore a well-known constraint on the location of identical consonants that has often been used as motivation for root representations in Semitic languages: Identical consonants frequently occur at the end of putative roots (e.g., skk), but rarely occur in their beginning (e.g., ssk). Although this restriction has traditionally been stated over roots, an alternative account could be stated over stems, a representational entity that is found more widely across the world's languages. To test this possibility, we investigate the acceptability of a single set of roots, manifesting identity initially, finally or not at all (e.g., ssk versus skk versus rmk) across two nominal paradigms: CéCeC (a paradigm in which identical consonants are rare) and CiCúC (a paradigm in which identical consonants are frequent). If Semitic lexical representations consist of roots only, then similar restrictions on consonant co-occurrence should be observed in the two paradigms. Conversely, if speakers store stems, then the restriction on consonant co-occurrence might be modulated by the properties of the nominal paradigm (be it by means of statistical properties or their grammatical sources). Findings from rating and lexical decision experiments with both visual and auditory stimuli support the stem hypothesis: compared to controls (e.g., rmk), forms with identical consonants (e.g., ssk, skk) are less acceptable in the CéCeC than in the CiCúC paradigm. Although our results do not falsify root-based accounts, they strongly raise the possibility that stems could account for the observed restriction on consonantal identity. 
As such, our results raise a fresh challenge to the notion that different languages require distinct sets of representational resources.
... The nature of phonological knowledge and representations is, once again, the topic of years of lively discussions and controversies. The "phonological mind," to quote the term introduced by Berent (2013), can either be described as an algebraic device (Berent et al., 2001;Berent, 2013), operating on discrete variables through algebraic rules (Marcus, 2001), or it can be conceived as a statistical machine, operating on continuous sensorimotor phonetic or discrete phonemic units, exploiting the machinery of connectionism and probabilistic computations (McClelland, 2009). In recent years, exemplar-based models (e.g., Pierrehumbert, 2001;Coleman, 2002), together with "usage-based phonology" (Bybee, 2001), have offered a new framework proposing to derive the whole set of phonetic categorization and phonological computational processes from the emergent properties of statistical distributions of produced and perceived items stored in memory. ...
... Over the years, this was followed by a large number of studies exploiting quantitative ratings on nonce words for assessing phonotactic knowledge (e.g. Treiman et al., 1996;Vitevitch et al., 1997;Treiman & Barry, 2000;Bailey & Hahn, 2001;Albright, 2007) or the status of phonological rules, e.g. the obligatory contour principle (Berent & Shimron, 1997;Berent et al., 2001) or rule organization in optimality theory (Coleman & Pierrehumbert, 1997;Hammond, 2004). ...
Article
Phonological regularities in a given language can be described as a set of formal rules applied to logical expressions (e.g., the value of a distinctive feature) or alternatively as distributional properties emerging from the phonetic substance. An indirect way to assess how phonology is represented in a speaker’s mind consists in testing how phonological regularities are transferred to non-words. This is the objective of this study, focusing on Coratino, a dialect from southern Italy spoken in the Apulia region. In Coratino, a complex process of vowel reduction operates, transforming the /i e ɛ u o ɔ a/ system for stressed vowels into a system with a smaller number of vowels for unstressed configurations, characterized by four major properties: (1) all word-initial vowels are maintained, even unstressed; (2) /a/ is never reduced, even unstressed; (3) unstressed vowels /i e ɛ u o ɔ/ are protected against reduction when they are adjacent to a consonant that shares articulation (labiality and velarity for /u o ɔ/ and palatality for /i e ɛ/); (4) when they are reduced, high vowels are reduced to /ɨ/ and mid vowels to /ə/. A production experiment was carried out on 19 speakers of Coratino to test whether these properties were displayed with non-words. The production data display a complex pattern which seems to imply both explicit/formal rules and distributional properties transferred statistically to non-words. Furthermore, the speakers appear to vary considerably in how they perform this task. Altogether, this suggests that both formal rules and distributional principles contribute to the encoding of Coratino phonology in the speaker’s mind.
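The four properties listed in the abstract lend themselves to a rule-based formulation. The sketch below hard-codes them with an invented consonant feature table, purely as an illustration of the formal-rule reading of the Coratino pattern:

```python
# Rule-based sketch of the four Coratino reduction properties.
# The consonant feature assignments are illustrative placeholders.

HIGH = {"i", "u"}
PALATAL_V = {"i", "e", "ɛ"}
LABIAL_VELAR_V = {"u", "o", "ɔ"}
CONS_FEATURES = {"p": {"labial"}, "k": {"velar"}, "j": {"palatal"},
                 "t": set(), "r": set()}   # invented assignments

def reduce_vowel(v, stressed, word_initial, neighbors):
    """Apply properties (1)-(4) to a single vowel token."""
    if stressed or word_initial or v == "a":          # (1), (2): no reduction
        return v
    shared = set().union(*(CONS_FEATURES.get(c, set()) for c in neighbors))
    if v in PALATAL_V and "palatal" in shared:        # (3): protected by a
        return v                                      #      sharing consonant
    if v in LABIAL_VELAR_V and shared & {"labial", "velar"}:
        return v
    return "ɨ" if v in HIGH else "ə"                  # (4): high -> ɨ, mid -> ə

assert reduce_vowel("a", False, False, ["t"]) == "a"  # /a/ never reduces
assert reduce_vowel("u", False, False, ["p"]) == "u"  # protected by labial
assert reduce_vowel("u", False, False, ["t"]) == "ɨ"  # unprotected high vowel
assert reduce_vowel("e", False, False, ["t"]) == "ə"  # unprotected mid vowel
```

The abstract's finding that speakers also transfer distributional properties to non-words suggests that such categorical rules would capture only part of the attested behavior.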
... Taking phonotactics as an example, the specific legal combinations of phonemes can be considered language-specific parameters. However, universal preferences for specific sonority profiles of phoneme sequencing have been suggested to represent universal phonological well-formedness (Berent, 2013; Berent, Everett, & Shimron, 2001; Clements, 1990). In order to identify brain areas supporting the processing of universal and language-specific constraints, patients with a chronic left-hemispheric brain lesion were investigated and compared to age-, sex-, and education-matched controls. ...
... Universal constraints also contain universal preferences such as the sonority profiles of phoneme sequencing. These were assumed to reflect universal phonological well-formedness (Berent, 2013; Berent et al., 2001; Clements, 1990). Guided by these assumptions, we wanted to compare universal to language-specific constraints during speech comprehension. ...
Thesis
Full-text available
Word learning accompanies our lives from childhood into old age. Infants learn their native language(s), but adults also learn new words, for example when acquiring a foreign language. Under certain circumstances a language must be relearned, for example after a brain lesion. How does our brain master these challenging word-learning situations? To investigate the neuroplasticity of word learning, different neuroscientific methods (electroencephalography, functional near-infrared spectroscopy, voxel-based lesion-behaviour/EEG mapping), partly in combination, were applied to infants, children, and adults, as well as to patients with a brain lesion compared to older control participants. Five experiments investigated the neural processing of pseudowords that followed native-language or foreign phonotactic rules (i.e., the combination of different phonemes) in different learning settings with monolingual participants. Healthy adults, but also 6-month-olds, older participants, and patients could differentiate these rules. The brain areas involved comprised a left-hemispheric fronto-temporal network. The processing of universal language properties, by contrast, was reflected in parietal regions. Whereas adults showed a clear dominance of the left hemisphere, 6-month-olds still used both hemispheres. Different language trainings (semantic training or passive listening) on three consecutive days also altered the brain activity of infants and adults, pointing to increased learning flexibility. In the sixth experiment, 5-year-old bilingual children learned new adjectives on the basis of pragmatic cues and showed more efficient neural mechanisms than monolinguals. The results underline the importance of multi-methodological approaches for gaining more precise insights into the complex mechanisms of neuroplasticity.
... For example, in studies with native listeners of Hebrew (Berent & Shimron, 1997), Arabic (Frisch & Zawaydeh, 2001), and English (Coetzee, 2008), nonwords that violate OCP-PLACE are judged to be less well-formed than nonwords that do not. Furthermore, OCP-PLACE affects performance in lexical decision tasks: nonwords that violate OCP-PLACE are rejected faster than nonwords composed of consonants that do not share [place] by native listeners of Hebrew (Berent, Everett, & Shimron, 2001) and Dutch. Also, OCP-PLACE biases phoneme identification such that forms containing sequences that violate OCP-PLACE tend to be perceived as sequences of non-harmonic consonants by native listeners of English (Coetzee, 2005). ...
... Yet it is not clear how the inhibitory process might stop at word boundaries when listening to continuous speech. Moreover, if pre-lexical perception of homorganic consonant pairs were generally inhibited, the results of lexical decision experiments would be left unexplained: nonwords that violate OCP-PLACE are rejected faster than nonwords composed of consonants that do not share [place] (Berent et al., 2001). This raises the question of how the functional effect of the OCP, inhibiting the encoding of the second consonant in a homorganic pair, could be stopped at word boundaries. ...
Article
Full-text available
OCP-PLACE, a cross-linguistically well-attested constraint against pairs of consonants with shared [place], is psychologically real. Studies have shown that the processing of words violating OCP-PLACE is inhibited. Functionalists assume that OCP arises as a consequence of low-level perception: a consonant following another with the same [place] cannot be faithfully perceived as an independent unit. If functionalist theories were correct, then lexical access would be inhibited if two homorganic consonants conjoin at word boundaries—a problem that can only be solved with lexical feedback. Here, we experimentally challenge the functional account by showing that OCP-PLACE can be used as a speech segmentation cue during pre-lexical processing without lexical feedback, and that the use relates to distributions in the input. In Experiment 1, native listeners of Dutch located word boundaries between two labials when segmenting an artificial language. This indicates a use of OCP-LABIAL as a segmentation cue, implying a full perception of both labials. Experiment 2 shows that segmentation performance cannot solely be explained by well-formedness intuitions. Experiment 3 shows that knowledge of OCP-PLACE depends on language-specific input: in Dutch, co-occurrences of labials are under-represented, but co-occurrences of coronals are not. Accordingly, Dutch listeners fail to use OCP-CORONAL for segmentation.
... Identity restrictions are common in phonology [30], and the *AAB rule is prevalent in Semitic languages; these languages ban identical consonants from the left edge of stems (e.g., sisum) but allow them at their right ends (e.g., simum) [31]. Remarkably, humans generalize this rule not only to novel stems [32][33][34], but even to ones with nonnative consonants that comprise features that are unattested in the language [12,13] (Box 1). Computational modeling [13,29,35] suggests that such generalizations cannot be attained by mechanisms that lack algebraic rules; others [6,7] believe that phonology is a weak associative system that can be reduced to the auditory and motor interfaces. ...
... The critical evidence is presented by the scope of phonological generalizations (Figure IC). If humans do, in fact, encode an algebraic rule of the form *AAB, then they should be able to generalize their knowledge across the board, not only to the native consonants of their language (e.g., the Hebrew phoneme b) [32][33][34], but also to nonnative consonants (e.g., th, a phoneme that does not occur in Hebrew and whose place of articulation feature, the wide value of the tongue-tip-constriction-area feature, is likewise nonnative). For example, Hebrew speakers should consider thithuk (an illicit AAB form with the nonnative reduplicant th) as worse formed than kithuth (a licit ABB form). ...
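The algebraic character of the *AAB restriction can be made concrete with a small sketch. The code below is illustrative only (the function name and segment encoding are our own, not the cited authors'); its point is that a rule stated over variables mentions no particular phoneme, so it applies equally to segments never encountered in training, such as the nonnative th:

```python
# Hypothetical sketch: an identity constraint (*AAB) stated over variables.
# Because the rule checks a relation (C1 == C2) rather than specific
# phonemes, it extends to any segment, including nonnative ones.

def violates_aab(stem):
    """Return True if the first two segments of a triconsonantal stem
    are identical (the banned AAB pattern)."""
    c1, c2, c3 = stem
    return c1 == c2  # identity relation over variables, not phoneme lists

# Native Hebrew segments:
assert violates_aab(("s", "s", "m"))       # sisum-type: ill-formed
assert not violates_aab(("s", "m", "m"))   # simum-type: well-formed
# Nonnative segment "th" (absent from the Hebrew inventory):
assert violates_aab(("th", "th", "k"))     # thithuk: still ill-formed
assert not violates_aab(("k", "th", "th")) # kithuth: well-formed
```

A pattern associator trained only on native segments has no stored association for th at all, which is why across-the-board generalization of this kind is taken as evidence for variables.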
Article
Full-text available
Humans weave phonological patterns instinctively. We form phonological patterns at birth, we spontaneously generate them de novo, and we impose phonological design on both our linguistic communication and cultural technologies: reading and writing. Why are humans compelled to generate phonological patterns? Why are phonological patterns intimately grounded in their sensorimotor channels (speech or gesture) while remaining partly amodal and fully productive? And why does phonology shape natural communication and cultural inventions alike? Here, I suggest these properties emanate from the architecture of the phonological mind, an algebraic system of core knowledge. I evaluate this hypothesis in light of linguistic evidence, behavioral studies, and comparative animal research that gauges the design of the phonological mind and its productivity.
... Studies with native listeners of Hebrew (Berent & Shimron, 1997), Arabic (Frisch & Zawaydeh, 2001), and English (Coetzee, 2005, 2008) have consistently reported that nonwords containing OCP-violating sequences are judged to be less well-formed than nonwords that conform to the OCP. In lexical decision tasks, OCP-violating nonwords are rejected faster than control items by native listeners of Hebrew (Berent et al., 2001), Arabic (Frisch & Zawaydeh, 2001) and Dutch (Shatzman & Kager, 2007). Dutch listeners may use their language-specific restriction on the co-occurrence of multiple labial consonants (OCP-Labial, e.g., */spVp/) as a cue for speech segmentation (Boll-Avetisyan & Kager, 2014). ...
Preprint
What makes Mandarin tones challenging for second language (L2) learners? Several recent studies suggest that two phonological universals, the Obligatory Contour Principle and the Tonal Markedness Scale, may constrain L2 tonal acquisition, regardless of learners' first language. We assessed the role of these universals in the L2 acquisition of Mandarin tones with a perceptual testing protocol that implemented a number of methodological improvements over previous studies. Bayesian mixed-effects analyses supported the null effects of both universals. Instead, a clear determinant of tonal identification accuracy was participants’ pitch discrimination ability. We explain the discrepancy between prior research and the current findings in terms of the representational levels that are targeted by perceptual and production tasks. All materials, data, and code are publicly available in the OSF repository at https://osf.io/ezadw/.
... But there is also evidence that some (other) aspects of phonology are abstract and shared across languages. Indeed, when people hear phonological patterns such as bogugu and milolo, they automatically extract abstract algebraic rules (here ABB) which they readily generalize to new forms (e.g., wofefe). Adults do so for their native language (e.g., in Semitic languages, where this rule applies; Berent, Everett, & Shimron, 2001; Berent & Shimron, 1997; Frisch, Pierrehumbert, & Broe, 2004), and so do infants (Marcus, Vijayan, Bandi Rao, & Vishton, 1999), even newborns. ...
Article
A large literature has gauged the linguistic knowledge of signers by comparing sign processing by signers and non-signers. Underlying this approach is the assumption that non-signers are devoid of any relevant linguistic knowledge, and as such, they present appropriate non-linguistic controls; a recent paper by Meade et al. (2022) articulates this view explicitly. Our commentary revisits this position. Informed by recent findings from adults and infants, we argue that the phonological system is partly amodal. We show that hearing infants use a shared brain network to extract phonological rules from speech and sign. Moreover, adult speakers who are sign-naïve demonstrably project knowledge of their spoken L1 to signs. So, when it comes to sign-language phonology, speakers are not linguistic blank slates. Disregarding this possibility could systematically underestimate the linguistic knowledge of signers and obscure the nature of the language faculty.
... We ran 1000 permutations under the null hypothesis. Additionally, we conducted an analysis of variance (ANOVA) to directly compare the two experiments to each other, as well as with newborns' responses to reduplication (ABB) and no-reduplication (ABC) in speech (data from Gervain et al. (16)). The ANOVA comprised the between-subjects factor Stimulus Type (Sign/Visual Analog/Speech) and the within-subject factor Hemisphere (LH/RH), computed on the difference score between the responses to reduplicated and non-reduplicated stimuli, normalized to their zero baselines, as is appropriate for between-subject NIRS comparisons, in the significant clusters revealed by the permutation tests (a permutation test was also conducted for the newborn data from (9)). ...
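The permutation logic invoked above can be sketched generically. This is a minimal two-sample permutation test on a mean difference, not the authors' cluster-based NIRS pipeline; the function name, data, and fixed seed are our own for illustration:

```python
import random

def permutation_test(group_a, group_b, n_perm=1000, seed=0):
    """Two-sample permutation test: p-value for the observed mean
    difference under random relabeling of the pooled observations."""
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # one relabeling under the null hypothesis
        diff = sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / (len(pooled) - n_a)
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm

# Clearly separated groups yield a small p; identical groups yield p = 1.0.
assert permutation_test([1, 2, 3, 4, 5], [11, 12, 13, 14, 15]) < 0.05
assert permutation_test([1, 2, 3], [1, 2, 3]) == 1.0
```

The cluster-based variant used in such neuroimaging studies applies the same relabeling idea to clusters of adjacent channels/time points rather than to a single summary statistic.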
Article
Full-text available
Infants readily extract linguistic rules from speech. Here, we ask whether this advantage extends to linguistic stimuli that do not rely on the spoken modality. To address this question, we first examine whether infants can differentially learn rules from linguistic signs. We show that, despite having no previous experience with a sign language, six-month-old infants can extract the reduplicative rule (AA) from dynamic linguistic signs, and the neural response to reduplicative linguistic signs differs from reduplicative visual controls, matched for the dynamic spatiotemporal properties of signs. We next demonstrate that the brain response for reduplicative signs is similar to the response to reduplicative speech stimuli. Rule learning, then, apparently depends on the linguistic status of the stimulus, not its sensory modality. These results suggest that infants are language-ready. They possess a powerful rule system that is differentially engaged by all linguistic stimuli, speech or sign.
... Moving to a more linguistic level, universal preferences for specific sonority profiles of syllables (Clements, 1990; Berent, 2013) and regularities regarding non-adjacent phoneme sequencing (e.g. obligatory contour principle; McCarthy, 1986; Berent et al., 2001) have been suggested to represent universals for phonological 'well-formedness'. Note that universals of phoneme sequencing represent 'preferences' across languages; as opposed to the more basic exclusive constraints (limitation of frequency-spectrum perception and anatomical constraints of the human vocal tract), universally dispreferred phoneme sequences may be attested in existing natural languages (e.g. ...
Article
See Cappa (doi: 10.1093/brain/aww090) for a scientific commentary on this article. The phonological structure of speech supports the automatic mapping of sound to meaning. Using a novel combined EEG-lesion approach in patients with chronic left hemisphere lesions and healthy controls, Obrig et al. disentangle contributions of universal and language-specific constraints. They show that different processing steps recruit separable brain regions.
... However, in their study, items were not controlled for lexical statistics, which, as has been noted, could also explain the results. Controlling for this confound, later studies still found effects of identity (e.g., Berent, Everett, et al., 2001), suggesting the effects may result from abstract knowledge. Moreover, Berent, Marcus, Shimron, and Gafos (2002) showed that native speakers of Hebrew transfer their well-formedness intuitions to novel roots made up of phonemes that are not part of the Hebrew phoneme inventory, which indicates that they generalize over the restriction against root-initial geminates. ...
Article
Full-text available
Highlights:
• If OCP-Labial holds as a gradient constraint, specific labial pairs can be exempt.
• Dutch listeners know of such exceptions, which affect their processing of speech.
• Phonotactic knowledge influenced their segmentation of artificial languages.
• Detailed phonotactic knowledge affects processing when task demands are simple.
• Abstract phonotactic knowledge may affect processing when task demands are complex.
Abstract: Many languages restrict their lexicons by OCP-Place, a phonotactic constraint against co-occurrences of consonants with shared [place] (e.g., McCarthy, 1986). While many previous studies have suggested that listeners have knowledge of OCP-Place and use this for speech processing, it is less clear whether they make reference to an abstract representation of this constraint. In Dutch, OCP-Place gradiently restricts non-adjacent consonant co-occurrences in the lexicon. Focusing on labial-vowel-labial co-occurrences, we found that there are, however, exceptions from the general effect of OCP-Labial: (A) co-occurrences of identical labials are systematically less restricted than co-occurrences of homorganic labials, and (B) some specific pairs (e.g., /pVp/, /bVv/) occur more often than expected. Setting out to study whether exceptions such as (A) and (B) had an effect on processing, the current study presents an artificial language learning experiment and a reanalysis of Boll-Avetisyan and Kager's (2014) speech segmentation data. Results indicate that Dutch listeners can use both knowledge of phonotactic detail and an abstract constraint OCP-Labial as a cue for speech segmentation. We suggest that whether detailed or abstract representations are drawn on depends on the complexity of processing demands.
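Claims of under- and over-representation like those above are standardly quantified by an observed/expected (O/E) ratio computed over the consonant pairs of a lexicon: the expected count assumes the two positions combine independently, and O/E < 1 signals an OCP-like restriction. The sketch below uses invented toy counts, not the Dutch lexicon data:

```python
from collections import Counter

def oe_ratios(pairs):
    """Observed/expected co-occurrence ratios for (C1, C2) pairs.
    Expected counts assume C1 and C2 combine independently.
    O/E < 1 marks an under-represented (restricted) pair."""
    total = len(pairs)
    c1_counts = Counter(c1 for c1, _ in pairs)
    c2_counts = Counter(c2 for _, c2 in pairs)
    observed = Counter(pairs)
    return {
        pair: observed[pair] / (c1_counts[pair[0]] * c2_counts[pair[1]] / total)
        for pair in observed
    }

# Toy lexicon in which the homorganic pair (p, b) is avoided:
toy = [("p", "t")] * 4 + [("k", "b")] * 4 + [("p", "b")] + [("k", "t")]
ratios = oe_ratios(toy)
assert ratios[("p", "b")] < 1 < ratios[("p", "t")]
```

On these counts each pair's expected frequency is 2.5, so ("p", "b") comes out at O/E = 0.4 (under-represented) and ("p", "t") at 1.6 (over-represented).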
... The morphological processes of de-composition, and specifically root extraction, were also seen in the 'segment shifting task,' where the ease with which participants were able to create a nonword derived from a three-consonantal pseudo-root + a word-pattern of a source word depended on the prominence and productivity of the source word's root morpheme (Feldman, Frost, & Pnini, 1995). Additional evidence that the root is a structural unit of words' mental representation in Hebrew is derived from a set of studies which demonstrated that Hebrew readers are sensitive to the root morpheme's phonological structure (Berent, Everett, & Shimron, 2001; Berent & Shimron, 1997; Berent, Shimron, & Vaknin, 2001). In particular, these studies showed that subjects' responses to nonwords were affected by whether the nonword obeyed or violated a specific phonological constraint that bans duplication of the first two consonants of the Hebrew root. ...
Article
Complex words in Hebrew are composed of two non-concatenated interwoven units: (1) a consonantal root morpheme usually comprising three consonants, embedded within (2) a word-pattern morpho-phonological unit made up of vowels or vowels + consonants. The word-pattern unit provides segmental, vocalic and metrical structure information about the word. Using the picture-word interference paradigm with auditorily presented distractors, we investigated the role of the word-patterns within the nominal system, i.e. the nominal-patterns, during word production, using 4 different SOAs (ranging from −200 ms to 300 ms). Compared to an unrelated distractor, the results revealed a facilitatory nominal-pattern effect in the time window of SOAs from −200 ms to 300 ms. This effect (1) had a different time-course than a pure phonological effect, and (2) was not conditioned by semantic similarity. The effect of the nominal-pattern is ascribed to the lexical word-form level, where the patterns, together with the roots, mediate the mapping of the lemma onto phonological words. It is suggested that Hebrew speakers attain a word's phonological form by identifying these patterns, which combine rich phonological information from the segmental and the supra-segmental structure.
... Our previous research investigated this prediction. One set of experiments examined the acceptability of novel words formed from novel roots including geminates (Berent & Shimron, 1998; Berent, Everett, & Shimron, 2001). Our results demonstrated that speakers constrain the location of geminates in the root. ...
Article
Hebrew frequently manifests gemination in its roots, but strictly constrains its position: root-final gemination is extremely frequent (e.g., bdd), whereas root-initial gemination is rare (e.g., bbd). This asymmetry is explained by a universal constraint on phonological representations, the Obligatory Contour Principle (McCarthy, 1986). Three experiments examined whether this phonological constraint affects performance in a lexical decision task. The rejection of nonwords generated from novel roots with root-initial gemination (e.g., Ki-KuS) was significantly faster than that of nonwords from root-final gemination controls (e.g., Si-KuK). The emergence of this asymmetry regardless of the position of geminates in the word implicates a constraint on root structure, rather than simply word structure. Our results further indicate that speakers are sensitive to the structure of geminate bigrams, i.e., their identity: nonwords formed from roots with final gemination (e.g., Si-KuK) were significantly more difficult to reject than foils generated from frequency-matched no-gemination controls (e.g., Ni-KuS). Speakers are thus sensitive to the identity of geminates and constrain their location in the root. These findings suggest that the representations assembled in reading a deep orthography are structured linguistic entities, constrained by phonological competence.
... The second type of response has been to provide experimental evidence that phonotactic knowledge is encoded in terms of the constructs of phonological theory, and cannot be attributed solely to measures of word-likeness or probability calculated over segmental strings (e.g. Berent, Everett and Shimron 2001, Frisch and Zawaydeh 2001, Coetzee 2008, Albright 2009). ...
Article
Full-text available
The Dutch lexicon contains very few sequences of a long vowel followed by a consonant cluster, where the second member of the cluster is a non-coronal. We provide experimental evidence that Dutch speakers have implicit knowledge of this gap, which cannot be reduced to the probability of segmental sequences or to word-likeness as measured by neighborhood density. The experiment also shows that the ill-formedness of this sequence is mediated by syllable structure: it has a weaker effect on judgments when the last consonant begins a new syllable. We provide an account in terms of Hayes and Wilson's Maximum Entropy model of phonotactics, using constraints that go beyond the complexity permitted by their model of constraint induction.
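The scoring side of a Hayes and Wilson-style Maximum Entropy phonotactic grammar can be illustrated with a toy computation: a form's harmony is the weighted sum of its constraint violations, and its maxent score falls off exponentially in that harmony. The constraint names and weights below are invented for illustration; they are not the fitted grammar of the study:

```python
import math

def maxent_score(violations, weights):
    """Maxent phonotactic score: harmony is the weighted sum of
    constraint violations; the score exp(-harmony) shrinks as a
    form violates more, or more heavily weighted, constraints."""
    harmony = sum(weights[c] * n for c, n in violations.items())
    return math.exp(-harmony)

# Hypothetical weights (illustrative, not fitted values):
weights = {"*LongV+NonCorCluster": 3.0, "*ComplexCoda": 0.5}

# A short-vowel + cluster form violates only the weak constraint;
# a long-vowel + non-coronal cluster form also violates the strong one.
good = maxent_score({"*ComplexCoda": 1}, weights)
bad = maxent_score({"*LongV+NonCorCluster": 1, "*ComplexCoda": 1}, weights)
assert bad < good  # the gapped sequence type is scored as worse-formed
```

Gradient judgments fall out directly: forms are not just "in" or "out" of the language but ordered by their harmony scores.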
... The patterns of consonant-vowel interleaving, characteristic of non-concatenative morphology, derive from independently necessary means such as segmental copy, as opposed to autosegmental spreading, and by the requirements of prosody, best expressed by the formal notion of template in the theory of Prosodic Morphology (McCarthy and Prince 1995a). 30 See also McCarthy (1995 [2000]) for a critique of V/C planar segregation and Rose (1997, 2000), Kenstowicz and Banksira (1999), and Berent, Everett and Shimron (2001) for related analysis of spreading as segmental copy. ...
... A large body of experimental research shows that Hebrew speakers generalize this restriction to novel forms (Berent and Shimron, 1997, 2003; Berent et al., 2001a,b, 2002, 2004, 2006, 2011, 2012a), a conclusion that converges with artificial language experiments with adults (Endress et al., 2005; Toro et al., 2008) and infants (Marcus et al., 1999, 2007; Gervain et al., 2008, 2012). Such results demonstrate that the reduplication function is productive, but they do not attest to the scope of the generalization, and consequently, they do not distinguish between rule-based and associative explanations. ...
Article
Full-text available
Productivity—the hallmark of linguistic competence—is typically attributed to algebraic rules that support broad generalizations. Past research on spoken language has documented such generalizations in both adults and infants. But whether algebraic rules form part of the linguistic competence of signers remains unknown. To address this question, here we gauge the generalization afforded by American Sign Language (ASL). As a case study, we examine reduplication (X→XX)—a rule that, inter alia, generates ASL nouns from verbs. If signers encode this rule, then they should freely extend it to novel syllables, including ones with features that are unattested in ASL. And since reduplicated disyllables are preferred in ASL, such a rule should favor novel reduplicated signs. Novel reduplicated signs should thus be preferred to nonreduplicative controls (in rating), and consequently, such stimuli should also be harder to classify as nonsigns (in the lexical decision task). The results of four experiments support this prediction. These findings suggest that the phonological knowledge of signers includes powerful algebraic rules. The convergence between these conclusions and previous evidence for phonological rules in spoken language suggests that the architecture of the phonological mind is partly amodal.
... A common measure of phonological well-formedness is the rating of nonce words for how word-like they are, and several studies have used this methodology to study speakers' knowledge of restrictions on non-adjacent consonants. Berent and colleagues (Berent and Shimron 1997; Berent et al. 2001a, b) examined the processing of identical consonants by Hebrew speakers. As in Arabic, identical consonants are permitted in the second two positions of a root, but not in the first two (in C2/C3 but not C1/C2 of C1VC2VC3). ...
Article
In this paper, we analyze the consonant co-occurrence restrictions in the Austronesian language Muna. As in Arabic and other languages, homorganic segments are underrepresented, particularly ones that are also similar in other ways. However, in Muna [voice] plays an unusually central role in this pattern. We analyze the Muna restrictions within Optimality Theory, using OCP-PLACE constraints relativized to [voice], [continuant], and [sonorant]. We claim that these constraints are ranked according to the frequency with which they are violated in the lexicon. Interspersed amongst these OCP-PLACE constraints are lexically specific faithfulness constraints. We show how such a grammar can be used to explain gradient phonological well-formedness judgments; a nonce word is assigned a well-formedness score based on how often it would be parsed faithfully, given its indexation to each of the lexically specific constraints. We also show how a grammar that is sensitive to lexical frequency can be learned using a slightly augmented version of the Biased Constraint Demotion algorithm (Prince and Tesar 2004). Finally, we discuss the similarity avoidance model of Frisch et al. (2004); we find that the Muna data are consistent with some of its claims, but problematic for its basic premise that similarity is mediated by inventory structure. Like many other languages, Muna (van den Berg 1989) has a restriction on the co-occurrence of homorganic consonants within a word, which is observed in the statistical underrepresentation of homorganic consonant pairs in the lexicon. And as in other languages, the strength of this restriction differs according to place of articulation, and according to how similar the consonants are in other respects. Muna is unique, however, in the degree to which [voice] agreement correlates with the underrepresentation of homorganic pairs.
We advance an analysis of the Muna data in terms of OCP-PLACE constraints (McCarthy 1988) that are relativized to place of articulation and other features, including [voice] (cf. Padgett 1995). We propose a ranking of these constraints that corresponds to the degree to which they are obeyed in the lexicon. We also analyze the Muna data in terms of Frisch, Pierrehumbert and Broe's (2004) similarity metric. The Muna data are consistent with some aspects of their proposal, but pose a challenge for the central claim that differences in inventory structure are responsible for differences in similarity between consonant pairs that have the same number of features in common. Differences in inventories can explain neither the differences across place of articulation in Muna, nor the differences in the importance of [voice] agreement between Muna and Arabic. The rest of this paper is structured as follows: In section 1, we discuss the details of the consonant co-occurrence restrictions in Muna. In section 2, we develop an account of these restrictions using OCP-PLACE constraints, and in section 3, we show how this grammar can be learned within a version of the Biased Constraint Demotion Algorithm (Prince and Tesar 2004). Finally, in section 4, we consider the relevance of the Muna data for Frisch et al.'s (2004) similarity-based account of consonant co-occurrence patterns.
... While typical instances of a morpheme (e.g., dog, the noun-base of dogs) correspond to form-meaning pairings (e.g., dog = /dog/-[CANINE]), morphemes are defined by formal restrictions. Phonological co-occurrence restrictions offer one criterion for the individuation of morphemes, and speakers demonstrably extend such restrictions to novel words [61][62][63][64]. We likewise used phonological restrictions to define the morphological structure of novel signs. ...
Article
Full-text available
[This corrects the article on p. e60617 in vol. 8.].
... While typical instances of a morpheme (e.g., dog, the noun-base of dogs) correspond to form-meaning pairings (e.g., dog = /dog/-[CANINE]), morphemes are defined by formal restrictions. Phonological co-occurrence restrictions offer one criterion for the individuation of morphemes, and speakers demonstrably extend such restrictions to novel words [61-64]. We likewise used phonological restrictions to define the morphological structure of novel signs. ...
Article
Full-text available
All spoken languages encode syllables and constrain their internal structure. But whether these restrictions concern the design of the language system, broadly, or speech, specifically, remains unknown. To address this question, here, we gauge the structure of signed syllables in American Sign Language (ASL). Like spoken languages, signed syllables must exhibit a single sonority/energy peak (i.e., movement). Four experiments examine whether this restriction is enforced by signers and nonsigners. We first show that Deaf ASL signers selectively apply sonority restrictions to syllables (but not morphemes) in novel ASL signs. We next examine whether this principle might further shape the representation of signed syllables by nonsigners. Absent any experience with ASL, nonsigners used movement to define syllable-like units. Moreover, the restriction on syllable structure constrained the capacity of nonsigners to learn from experience. Given brief practice that implicitly paired syllables with sonority peaks (i.e., movement)-a natural phonological constraint attested in every human language-nonsigners rapidly learned to selectively rely on movement to define syllables and they also learned to partly ignore it in the identification of morpheme-like units. Remarkably, nonsigners failed to learn an unnatural rule that defines syllables by handshape, suggesting they were unable to ignore movement in identifying syllables. These findings indicate that signed and spoken syllables are subject to a shared phonological restriction that constrains phonological learning in a new modality. These conclusions suggest the design of the phonological system is partly amodal.
... It allows identical consonants to occur at the right edge of the stem (ABB, e.g., simum), but bans them at the left edge [64] (AAB, e.g., sisum). Although Hebrew speakers are not consciously aware of this regularity [65], they nonetheless freely generalize this tacit restriction to novel stems, including stems with novel phonemes [40,65-69]. Moreover, computational simulations demonstrate that such generalizations are unattainable by various computational mechanisms that lack algebraic rules [70,71]. ...
Article
Full-text available
Dyslexia is associated with numerous deficits to speech processing. Accordingly, a large literature asserts that dyslexics manifest a phonological deficit. Few studies, however, have assessed the phonological grammar of dyslexics, and none has distinguished a phonological deficit from a phonetic impairment. Here, we show that these two sources can be dissociated. Three experiments demonstrate that a group of adult dyslexics studied here is impaired in phonetic discrimination (e.g., ba vs. pa), and their deficit compromises even the basic ability to identify acoustic stimuli as human speech. Remarkably, the ability of these individuals to generalize grammatical phonological rules is intact. Like typical readers, these Hebrew-speaking dyslexics identified ill-formed AAB stems (e.g., titug) as less wordlike than well-formed ABB controls (e.g., gitut), and both groups automatically extended this rule to nonspeech stimuli, irrespective of reading ability. The contrast between the phonetic and phonological capacities of these individuals demonstrates that the algebraic engine that generates phonological patterns is distinct from the phonetic interface that implements them. While dyslexia compromises the phonetic system, certain core aspects of the phonological grammar can be spared.
... For instance, roots of the form C1C2C1, while allowed by a simple OCP-based formulation of root phonotactics, are statistically less frequent in occurrence and are consistently judged less natural by native speakers. Additionally, work by Berent and colleagues (Berent et al. 2001) has demonstrated that the OCP is not just a simple redundancy constraint on representation, but is synchronically active in the grammars of native Hebrew speakers. These results are hard to reconcile with any approach which denies the existence of a consonantal root qua phonological domain. ...
Article
This article argues from data motivating the existence of the consonantal root in Nonconcatenative Templatic Morphologies (NTM) and the derivational verbal system of Iraqi Arabic for an approach to such root-and-pattern behavior called the "Root-and-Prosody" model. Based upon work in Kramer (2007), this model claims that root-and-pattern behavior arises from the necessary satisfaction of prosodic markedness constraints at the expense of the faithfulness constraints Contiguity and Integrity. Additionally, this article shows that a solution exists to the problem of NTM languages within Generalized Template Theory (McCarthy and Prince, 1995) which does not need Output-Output Correspondence (Benua, 2000; Ussishkin, 2005). In doing so, this work also argues for the extension of indexed markedness constraints (Pater, to appear) to prosodic alternations. Prosodic augmentation is shown to follow from particular rankings of such indexed prosodic markedness constraints, eliminating the need for prosodic material in the input. Finally, discussion of difficulties faced by the Fixed-Prosodic analyses of such systems (Ussishkin, 2000; Buckley, 2003; Ussishkin, 2005) motivates the necessity of the Root-and-Prosody approach.
... Thus, we can add two further degrees to the English onset ill-formedness scale. As the Muna and Arabic place restrictions are instances of gradient attestedness, it is important to note that there have been several experimental studies that show that speakers do distinguish between the acceptability of existing sequence types in their language. Berent and Shimron (1997) and Berent et al. (2001) asked Hebrew speakers for word-likeness judgments of nonce words that contained three consonants. Their stimuli contained one group of words with identical consonants in the first two positions of the consonantal root (SSM). ...
Article
This paper documents a restriction against the co-occurrence of homorganic consonants in the root morphemes of Muna, a western Austronesian language, and compares the Muna pattern with the much-studied similar pattern in Arabic. As in Arabic, the restriction applies gradiently: its force depends on the place of articulation of the consonants involved, and on whether the homorganic consonants are similar in terms of other features. Muna differs from Arabic in the relative strengths of these other features in affecting co-occurrence rates of homorganic consonants. Along with the descriptions of these patterns, this paper presents phonological analyses in terms of weighted constraints, as in Harmonic Grammar. This account uses a gradual learning algorithm that acquires weights that reflect the relative frequency of different sequence types in the two languages. The resulting grammars assign the sequences acceptability scores that correlate with a measure of their attestedness in the lexicon. This application of Harmonic Grammar illustrates its ability to capture both gradient and categorical patterns.
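The weighted-constraint scoring at the heart of Harmonic Grammar can be sketched in a few lines. The constraint names, weights, and place classes below are invented for illustration and are not the paper's actual analysis of Muna or Arabic:

```python
# Toy Harmonic Grammar scorer: harmony = -(weighted sum of violations).
# WEIGHTS and PLACE are illustrative assumptions, not values from the paper.

WEIGHTS = {"OCP-PLACE": 3.0, "OCP-IDENT-INITIAL": 2.0}

PLACE = {"b": "lab", "p": "lab", "m": "lab",
         "d": "cor", "t": "cor", "s": "cor",
         "g": "dor", "k": "dor"}

def violations(root):
    v = {"OCP-PLACE": 0, "OCP-IDENT-INITIAL": 0}
    # adjacent homorganic consonants violate OCP-PLACE
    for a, b in zip(root, root[1:]):
        if PLACE.get(a) == PLACE.get(b):
            v["OCP-PLACE"] += 1
    # identical root-initial consonants violate a stronger restriction
    if root[0] == root[1]:
        v["OCP-IDENT-INITIAL"] += 1
    return v

def harmony(root):
    v = violations(root)
    return -sum(WEIGHTS[c] * v[c] for c in WEIGHTS)

# Higher harmony = higher predicted acceptability:
# dissimilar root > homorganic pair > root-initial geminate.
assert harmony("gdm") > harmony("tsg") > harmony("ttg")
```

Acceptability scores derived this way can then be correlated with attestedness in the lexicon, as the paper does with a gradual learning algorithm that sets the weights from frequency counts.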
... In the first stage, we trained the model on a database of tri-consonantal Hebrew roots. Next, we tested the model for its ability to generalize the restriction on segment identity to novel roots: the same set of materials previously tested with human participants (Berent et al. 2001a; Berent et al. 2002). In one condition, novel test roots included phonemes that are all attested in Hebrew. ...
Article
Full-text available
A recent computational model by Hayes and Wilson (2008) seemingly captures a diverse range of phonotactic phenomena without variables, contrasting with the presumptions of many formal theories. Here, we examine the plausibility of this approach by comparing generalizations of identity restrictions by this architecture and human learners. Whereas humans generalize identity restrictions broadly, to both native and non-native phonemes, the original model and several related variants failed to generalize to non-native phonemes. In contrast, a revised model equipped with variables more closely matches human behavior. These findings suggest that, like syntax, phonological grammars are endowed with algebraic relations among variables that support across-the-board generalizations.
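The contrast between the two architectures can be made concrete with a toy sketch. The inventory, training roots, and scoring functions below are invented for illustration; they only mimic the logic of the comparison, not the actual Hayes and Wilson (2008) model:

```python
from collections import Counter

# Toy training data: well-formed ABB/ABC roots built from "native" segments.
training = ["gtt", "mgs", "skk", "ldd"]

# Segment-based learner: scores a root by the frequency of its attested bigrams.
bigrams = Counter(r[i:i + 2] for r in training for i in range(len(r) - 1))

def ngram_score(root):
    return sum(bigrams[root[i:i + 2]] for i in range(len(root) - 1))

# Variable-based rule *XXY: penalize identity of the first two radicals,
# whatever segments instantiate X.
def violates_identity(root):
    return root[0] == root[1]

# Novel roots built from a non-native phoneme (never seen in training):
novel_aab, novel_abb = "θθf", "fθθ"

# The algebraic rule distinguishes AAB from ABB even for novel segments...
assert violates_identity(novel_aab) and not violates_identity(novel_abb)

# ...whereas the bigram model gives both the same zero score: with no
# attested bigrams over these segments, it cannot tell them apart.
assert ngram_score(novel_aab) == ngram_score(novel_abb) == 0
```

This is the sense in which a model "equipped with variables" supports across-the-board generalization: the rule quantifies over segments rather than listing them.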
... A large body of literature shows that Hebrew speakers are highly sensitive to this restriction and they freely generalize it to novel forms. Specifically, novel AAB forms are rated as less acceptable than ABB counterparts (Berent and Shimron, 1997; Berent et al., 2001a), and because novel AAB stems (e.g., titug) are ill-formed, people classify them as non-words more rapidly than ABB/ABC controls (e.g., gitut, migus) in the lexical decision task (Berent et al., 2001b, 2002, 2007b) and they ignore them more readily in Stroop-like conditions (Berent et al., 2005). Given that AAB Hebrew stems are clearly ill-formed, we can now turn to examine whether their structure might affect the classification of non-speech stimuli. ...
Article
Full-text available
It has long been known that the identification of aural stimuli as speech is context-dependent (Remez et al., 1981). Here, we demonstrate that the discrimination of speech stimuli from their non-speech transforms is further modulated by their linguistic structure. We gauge the effect of phonological structure on discrimination across different manifestations of well-formedness in two distinct languages. One case examines the restrictions on English syllables (e.g., the well-formed melif vs. ill-formed mlif); another investigates the constraints on Hebrew stems by comparing ill-formed AAB stems (e.g., TiTuG) with well-formed ABB and ABC controls (e.g., GiTuT, MiGuS). In both cases, non-speech stimuli that conform to well-formed structures are harder to discriminate from speech than stimuli that conform to ill-formed structures. Auxiliary experiments rule out alternative acoustic explanations for this phenomenon. In English, we show that acoustic manipulations that mimic the mlif–melif contrast do not impair the classification of non-speech stimuli whose structure is well-formed (i.e., disyllables with phonetically short vs. long tonic vowels). Similarly, non-speech stimuli that are ill-formed in Hebrew present no difficulties to English speakers. Thus, non-speech stimuli are harder to classify only when they are well-formed in the participants’ native language. We conclude that the classification of non-speech stimuli is modulated by their linguistic structure: inputs that support well-formed outputs are more readily classified as speech.
... Previous psycholinguistic research suggests that the OCP is a psychologically real part of grammar: Native speakers of Arabic judge novel words violating OCP-PLACE (e.g., tasaba) as significantly less word-like than well-formed novel words (e.g., tahafa) (Frisch & Zawaydeh 2001); and native speakers of Hebrew identify ill-formed novel words faster than well-formed novel words (Berent et al., 1997; Berent, Shimron, & Vaknin, 2001; Berent, Everett, & Shimron, 2001). As for English, Coetzee (2003) observes that the OCP exerts a bias on the perception of novel words with phonetically ambiguous final consonants that can either be perceived as spape or spake and skake or skape. ...
Article
Full-text available
How are violations of phonological constraints processed in word comprehension? The present article reports the results of an event-related potentials (ERP) study on a phonological constraint of German that disallows identical segments within a syllable or word (CC(i)VC(i)). We examined three types of monosyllabic CCVC words: (a) existing words [see text], (b) well-formed novel words [see text], and (c) ill-formed novel words [see text] as instances of Obligatory Contour Principle (OCP) violations. Well-formed and ill-formed novel words evoked an N400 effect in comparison to existing words. In addition, ill-formed words produced an enhanced late posterior positivity effect compared to well-formed novel words. Our findings support the well-known observation that novel words evoke higher costs in lexical integration (reflected by N400 effects). Crucially, modulations of a late positive component (LPC) show that violations of phonotactic constraints influence later stages of cognitive processing even when stimuli have already been detected as non-existing. Thus, the comparison of electrophysiological effects evoked by the two types of non-existing words reveals the stages at which phonologically based structural well-formedness comes into play during word processing.
... Such a discrimination could be based either on the detection of an unfamiliar token association (e.g., brother-mother) or on the detection of a change in ordinal relations (e.g., father now occurs in first position). To distinguish between these two representational schemes, one could investigate generalization to sequences that include novel elements (Berent, Everett, & Shimron, 2001; Berent, Marcus, Shimron, & Gafos, 2002; Berent & Shimron, 1997; Marcus, 2001). If infants represent ordinal information, then they should recognize the invariant ordinal position of a familiar sequence element in a novel context. ...
Article
This study investigated how 4-month-old infants represent sequences: Do they track the statistical relations among specific sequence elements (e.g., AB, BC) or do they encode abstract ordinal positions (i.e., B is second)? Infants were habituated to sequences of 4 moving and sounding elements (3 of the elements varied in their ordinal position while the position of 1 target element remained invariant; e.g., ABCD, CBDA) and then were tested for the detection of changes in the target's position. Infants detected an ordinal change only when it disrupted the statistical co-occurrence of elements but not when statistical information was controlled. It is concluded that 4-month-olds learn the order of sequence elements by tracking their statistical associations but not their invariant ordinal position.
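The two representational schemes the study pits against each other can be sketched as follows; the habituation sequences and test item are toy stand-ins for the experimental stimuli:

```python
# Habituation: target 'B' is always in second position, other elements vary.
habituation = ["ABCD", "CBDA", "DBAC"]

# Statistical scheme: remember which adjacent pairs co-occurred.
pairs_seen = {s[i:i + 2] for s in habituation for i in range(3)}

def novel_pairs(test):
    """Count adjacent pairs in the test item that were never seen."""
    return sum(test[i:i + 2] not in pairs_seen for i in range(3))

def ordinal_change(test, target="B", position=1):
    """Ordinal scheme: has the target left its invariant serial position?"""
    return test.index(target) != position

# A test item that moves 'B' to first position while reusing only familiar
# pairs: an ordinal learner notices the change, a statistical learner does not.
test = "BACD"
assert ordinal_change(test)
assert novel_pairs(test) == 0
```

The infants patterned with the statistical function rather than the ordinal one: they detected the positional change only when it also introduced unfamiliar element pairs.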
... Of interest is that native speakers are not only able to make gradient distinctions among the attested phonotactic structures, but also demonstrate sensitivity to certain structures which are absent in their native language (this sensitivity is referred to as hidden phonotactic knowledge in this thesis). The competence of showing gradient phonotactic knowledge has been shown in production studies (e.g., Davidson 2000, 2006; Bailey & Hahn 2001; Albright & Hayes 2003; Hammond 2004; Coetzee 2004; Albright 2006, 2007) and in perception studies as well (e.g., Coleman & Pierrehumbert 1997; Berent & Shimron 1997; Berent, Everett, & Shimron 2001; Berent, Shimron, & Vaknin 2001; Berent, Marcus, Shimron, & Gafos 2002; Moreton 2002; Hay, Pierrehumbert, & Beckman 2004; Berent et al. 2006). The production study of Davidson (2000, 2006) will be presented in section 2.5.1.3 ...
Article
Numerous studies have shown that speakers are sensitive to phonotactic structures which are absent in their native language (Berent & Shimron 1997; Davidson 2000; Moreton 2002; Coetzee 2004; Berent et al. 2006; Albright 2006, 2007). For instance, first language speakers have sonority-related preferences for consonant-consonant onset clusters in nonce words despite the lack of lexical evidence (Davidson 2000; Berent et al. 2006). With respect to the second language acquisition of consonant clusters, it is shown that not all new clusters are equally difficult for second language learners (Broselow & Finer 1991; Eckman & Iverson 1993; Carlisle 1997, 1998; Hancin-Bhatt & Bhatt 1997). However, apart from Berent et al. (2006), most of the studies on phonotactics are based on production rather than perception. It is not clear whether the preferences for certain phonotactic structures are due to articulatory limitations or to phonological grammars. Therefore, perception tasks were conducted in this study. Following the minimal violation model of Pater (2004), according to which the role of the phonological grammar in perception is to regulate the markedness of representations based on the acoustic signal, marked structures will be regulated to a larger extent than less marked ones. Given the fact that Mandarin Chinese does not allow any complex onsets while Dutch does so, the current study examines the sensitivity of Mandarin Chinese listeners of Dutch and Dutch native listeners to the markedness of illegal Dutch onset clusters (tl-, dl-; pm-, km-; fm-, xm-). Experiment 1 tested whether Mandarin Chinese listeners of Dutch and Dutch native listeners had accurate perceptions of Dutch /l/ and /r/. Experiment 2, a syllable number judgment task, examined the effect of vowel epenthesis. Experiments 3 and 4 were comparative wordlikeness judgment tasks, in which the listeners were asked to select a form which was more Dutch-like within a pair of nonce words.
... For instance, speakers consider the novel form didel (whose root, ddl, manifests root-initial identity) as less acceptable than a novel form with root-final identity (e.g. lided, from the root ldd; see Berent, Everett, & Shimron, 2001a; Berent & Shimron, 1997). How is one to account for such generalizations? ...
Article
Hebrew constrains the occurrence of identical consonants in its roots: Identical consonants are acceptable root finally (e.g., skk), but not root initially (e.g., kks). Speakers' ability to freely generalize this constraint to novel phonemes (Berent, Marcus, Shimron, & Gafos, 2002) suggests that they represent segment identity, a relation among mental variables. An alternative account attributes the restriction on identical phonemes to their feature similarity, captured by either the number of shared features or their statistical frequency. The similarity account predicts that roots with partially similar consonants (e.g., sgk) should be at least as acceptable as roots with fully identical consonants (e.g., skk), and each of these roots should be less acceptable than dissimilar controls (e.g., gdn). Contrary to these predictions, three lexical decision experiments demonstrate that full identity is more acceptable than partial similarity and (in some cases) controls. Speakers' sensitivity to consonant identity suggests that linguistic competence, in general, and phonology, in particular, encompass a computational mechanism that operates over variables. This conclusion is consistent with linguistic accounts that postulate a symbolic grammatical component that is irreducible to the statistical properties of the lexicon.
Chapter
At or near the top of the job description of the phonologist is the goal of finding, describing, and explaining patterns of behavior among features, sounds, and other phonological elements in languages of the world. For theories stemming from the generative approach that began most conspicuously with Chomsky and Halle (1968, henceforth SPE), the most highly valued explanations are rooted in Universal Grammar (UG). SPE-based phonology centered on alternations in phonological patterns, and one of its primary contributions to linguistic theory was to shift the burden of the explanatory basis for such patterns away from the lexicon and onto the grammar (the hypothesized computational system responsible for the relationship and mapping between the hypothesized lexical representations and observed surface forms). Therefore, all evidence for such impoverishment of lexical representations is indirect evidence adduced from patterns and alternations observed on the surface. The cost of reliance on such indirect evidence was nonetheless thought by generative phonologists to be outweighed by the explanatory benefit that resulted from the ability to pare down lexical representations and concomitantly endow the grammar with a rich array of rules (or constraints, depending on one's theoretical commitment) that provided structure necessary for the mapping from underlying to surface form.
Article
Does knowledge of language transfer spontaneously across language modalities? For example, do English speakers, who have had no command of a sign language, spontaneously project grammatical constraints from English to linguistic signs? Here, we address this question by examining the constraints on doubling. We first demonstrate that doubling (e.g. panana; generally: ABB) is amenable to two conflicting parses (identity vs. reduplication), depending on the level of analysis (phonology vs. morphology). We next show that speakers with no command of a sign language spontaneously project these two parses to novel ABB signs in American Sign Language. Moreover, the chosen parse (for signs) is constrained by the morphology of spoken language. Hebrew speakers can project the morphological parse when doubling indicates diminution, but English speakers only do so when doubling indicates plurality, in line with the distinct morphological properties of their spoken languages. These observations suggest that doubling in speech and signs is constrained by a common set of linguistic principles that are algebraic, amodal and abstract.
Article
Full-text available
Statistical learning (SL) is involved in a wide range of basic and higher-order cognitive functions and is taken to be an important building block of virtually all current theories of information processing. In the last 2 decades, a large and continuously growing research community has therefore focused on the ability to extract embedded patterns of regularity in time and space. This work has mostly focused on transitional probabilities, in vision, audition, by newborns, children, adults, in normal developing and clinical populations. Here we appraise this research approach and we critically assess what it has achieved, what it has not, and why it is so. We then center on present SL research to examine whether it has adopted novel perspectives. These discussions lead us to outline possible blueprints for a novel research agenda. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
Book
Over the last two decades, the study of languages and writing systems and their relationship to literacy acquisition has begun to spread beyond studies based mostly on English language learners. As the worldwide demand for literacy continues to grow, researchers from different countries with different language backgrounds have begun examining the connection between their language and writing system and literacy acquisition. This volume is part of this new, emerging field of research. In addition to reviewing psychological research on reading (the author's specialty), the reader is introduced to the Hebrew language: its structure, its history, its writing system, and the issues involved in being fluently literate in Hebrew. Chapters 1-4 introduce the reader to the Hebrew language and word structure and focuses on aspects of Hebrew that have been specifically researched by experimental cognitive psychologists. The reader whose only interest is in the psychological mechanisms of reading Hebrew may be satisfied with these chapters. Chapters 5-8 briefly surveys the history of the Hebrew language and its writing system, the origin of literacy in Hebrew as one of the first alphabetic systems, and then raises questions about the viability (or possibility) of having full-scale literacy in Hebrew. Together, the two sets of chapters present the necessary background for studying the psychology of reading Hebrew and literacy in Hebrew. This volume is appropriate for anyone interested in comparative reading and writing systems or in the Hebrew language in particular. This includes linguists, researchers, and graduate students in such diverse fields as cognitive psychology, psycholinguistics, literacy education, English as a second language, and communication disorders. © 2006 by Lawrence Erlbaum Associates, Inc. All rights reserved.
Article
Some of the things that adults learn about language, and about the world, are very specific, whereas others are more abstract or rulelike. This article reviews evidence showing that infants, too, can very rapidly acquire both specific and abstract information, and considers the mechanisms that infants might use in doing so.
Article
Theories of spoken production have not specifically addressed whether the phonemes of a word compete with each other for selection during phonological encoding (e.g., whether /t/ competes with /k/ in cat). Spoken production theories were evaluated and found to fall into three classes, theories positing (1) no competition, (2) competition among phonemes within the same syllable position, and (3) competition among all phonemes in a word. These predictions were tested by examining the effect of within-word phoneme similarity on oral reading reaction times using mixed-effects regression. Subjects took longer to begin uttering words containing similar phonemes than words containing dissimilar phonemes. This was true for consonant pairs in the onset, in the onset and coda, and in the onset and suffix. The results are most compatible with theories allowing all phonemes in a word to compete with each other. The possible relationship between these results and cross-linguistic patterns is also discussed.
Article
Full-text available
The article addresses two issues regarding Hebrew reduplication: (a) the distinction between reduplicated and nonreduplicated stems with identical consonants (e.g., minen 'to apportion' vs. mimen 'to finance'), and (b) the patterns of reduplication (C1VC2VC2, C1VC2C3VC3, C1VC2C1VC2, and C1C2VC3C2VC3). These issues are studied from a surface point of view, accounting for speakers' capacity to parse forms with identical consonants regardless of their base. It is argued that the grammar constructed by the learner on the basis of structural relations (base-output) can also serve for parsing surface forms without reference to a base.
Book
Full-text available
One of the most challenging tasks for language-learning infants and second language (L2)-learning adults is to segment the continuous stream of speech that surrounds them, and, following this, to acquire a lexicon. Both speech segmentation and lexical acquisition are known to be facilitated by phonotactics, i.e., language-specific restrictions on how phonemes may combine. This dissertation addresses questions regarding the representation and acquisition of such phonotactic knowledge in a native language and an L2. Five experimental studies are presented. The first three studies, using the artificial language learning paradigm, reveal that segmentation is influenced by structural phonotactic knowledge of OCP-PLACE, a restriction against pairs of consonants sharing the feature [Place]. It is shown that this knowledge is used only by native listeners or advanced L2 learners of a language restricted by the constraint. This suggests a language-specific acquisition from the input. The third study, with infants, shows that this input is continuous speech rather than the lexicon. The remaining two studies demonstrate that abstract phonotactic knowledge of syllable structure is represented separately from specific probabilistic knowledge, as the two have separate effects on lexical acquisition in a short-term memory recall task. Moreover, results from L2 learners suggest that probabilistic knowledge can be acquired independently of structural knowledge of the L2. While most studies have looked at the influence of specific representations of phonotactic probability, here it is shown that representations of abstract structural constraints also influence processing. Moreover, it is demonstrated that both types of phonotactic representations are acquired from the input.
Article
This paper argues that rather than just select the best candidate, EVAL imposes a harmonic rank-ordering on the full candidate set. Language users have access to this enriched information, and it shapes their performance. This paper applies this idea to variation. The claim is that language users can access the full candidate set via the rank-ordering imposed by EVAL. In variation, more than one candidate is well-formed enough to count as grammatical. Consequently, language users will access more than just the best candidate from the rank-ordering. However, the accessibility of a candidate depends on its position on the rank-ordering. The higher the position a candidate occupies, the more likely it is to be selected. In a variable process, variants that appear higher on the rank-ordering (i.e. are more well-formed) will therefore also be the more frequent variants. This model is applied to variation in the phonology of Faialense Portuguese and Ilokano.
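A minimal sketch of this rank-ordering idea follows; the candidate forms, violation counts, and decay function are all invented for illustration:

```python
# Candidates paired with toy violation counts; fewer violations = better formed.
candidates = [("katu", 1), ("kato", 0), ("kat", 3)]

# EVAL imposes a harmonic rank-ordering on the full candidate set.
ranked = sorted(candidates, key=lambda c: c[1])

# Accessibility decays with rank (here: geometrically), so better-formed
# variants are predicted to surface more frequently in variation.
raw = [2.0 ** -rank for rank in range(len(ranked))]
total = sum(raw)
probs = {form: w / total for (form, _), w in zip(ranked, raw)}

assert probs["kato"] > probs["katu"] > probs["kat"]
```

The specific decay function is not part of the proposal; the claim is only that a variant's selection probability is monotone in its position on the rank-ordering.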
Article
Full-text available
Humans routinely generalize universal relationships to unfamiliar instances. If we are told "if glork then frum," and "glork," we can infer "frum"; any name that serves as the subject of a sentence can appear as the object of a sentence. These universals are pervasive in language and reasoning. One account of how they are generalized holds that humans possess mechanisms that manipulate symbols and variables; an alternative account holds that symbol-manipulation can be eliminated from scientific theories in favor of descriptions couched in terms of networks of interconnected nodes. Can these "eliminative" connectionist models offer a genuine alternative? This article shows that eliminative connectionist models cannot account for how we extend universals to arbitrary items. The argument runs as follows. First, if these models, as currently conceived, were to extend universals to arbitrary instances, they would have to generalize outside the space of training examples. Next, it is shown that the class of eliminative connectionist models that is currently popular cannot learn to extend universals outside the training space. This limitation might be avoided through the use of an architecture that implements symbol manipulation.
Article
According to the 'word/rule' account, regular inflection is computed by a default, symbolic process, whereas irregular inflection is achieved by associative memory. Conversely, pattern-associator accounts attribute both regular and irregular inflection to an associative process. The acquisition of the default is ascribed to the asymmetry in the distribution of regular and irregular tokens. Irregular tokens tend to form tight, well-defined phonological clusters (e.g. sing-sang, ring-rang), whereas regular forms are diffusely distributed throughout the phonological space. This distributional asymmetry is necessary and sufficient for the acquisition of a regular default. Hebrew nominal inflection challenges this account. We demonstrate that Hebrew speakers use the regular masculine inflection as a default despite the overlap in the distribution of regular and irregular Hebrew masculine nouns. Specifically, Experiment 1 demonstrates that regular inflection is productively applied to novel nouns regardless of their similarity to existing regular nouns. In contrast, the inflection of irregular sounding nouns is strongly sensitive to their similarity to stored irregular tokens. Experiment 2 establishes the generality of the regular default for novel words that are phonologically idiosyncratic. Experiment 3 demonstrates that Hebrew speakers assign the default regular inflection to borrowings and names that are identical to existing irregular nouns. The existence of default inflection in Hebrew is incompatible with the distributional asymmetry hypothesis. Our findings also lend no support for a type-frequency account. The convergence of the circumstances triggering default inflection in Hebrew, German and English suggests that the capacity for default inflection may be general.
Article
“Does grammar start where statistics stop?”, ask M. S. Seidenberg et al. in the title of their Perspective (18 Oct., p. 553), arguing against a “reconcilist” position in which complex cognitive functions would depend on a mixture of statistical and algebraic (rule) mechanisms…
Article
This article reports the results of speech error elicitation experiments investigating the role of two consonant co-occurrence restrictions in the productive grammar of speakers of two Ethiopian Semitic languages, Amharic and Chaha. Higher error rates were found with consonant combinations that violated co-occurrence constraints than with those that had only a high degree of shared phonological similarity or low frequency of co-occurrence. Sequences that violated two constraints had the highest error rates. The results indicate that violations of consonant co-occurrence restrictions significantly increase error rates in the productions of native speakers, thereby supporting the psychological reality of the constraints.
Article
Introduction: Increased similarity between consonants correlates with increased susceptibility to speech errors (Nooteboom 1967; MacKay 1970; Fromkin 1971; Shattuck-Hufnagel & Klatt 1979; van den Broecke & Goldstein 1980; Levitt & Healy 1985; Wilshire 1999). It follows that combinations of similar consonants may be dispreferred in languages due to production processing problems, and this dispreference may be phonologized as grammatical constraints (Hansson 2001a,b; Rose & Walker 2001). It is therefore interesting to establish a parallel between the kinds of consonant combinations that result in speech errors and those that are attested as grammatical co-occurrence constraints. Central question: Does the presence of a phonological constraint on consonant co-occurrence correlate with marked production problems with those consonants? Secondary questions: Do similar consonants result in more production errors than more dissimilar consonants? Do less frequent consonant combinations result in more production errors?
Article
Full-text available
A connectionist approach to processing in quasi-regular domains, as exemplified by English word reading, is developed. Networks using appropriately structured orthographic and phonological representations were trained to read both regular and exception words, and yet were also able to read pronounceable nonwords as well as skilled readers. A mathematical analysis of a simplified system clarifies the close relationship of word frequency and spelling-sound consistency in influencing naming latencies. These insights were verified in subsequent simulations, including an attractor network that accounted for latency data directly in its time to settle on a response. Further analyses of the ability of networks to reproduce data on acquired surface dyslexia support a view of the reading system that incorporates a graded division of labor between semantic and phonological processes, and contrasts in important ways with the standard dual-route account.
Article
Full-text available
The authors propose a model of phonological assembly that postulates a multilinear representation that segregates consonants and vowels in different planes. This representation determines the on-line process of assembly: Consonants and vowels are derived in 2 consecutive cycles that differ in their automaticity. The model’s temporal properties resolve critical contradictions in the phonological processing literature. Its claims are further supported by a series of English-masking and English-priming experiments demonstrating that the contributions of consonants and vowels depend on target exposure duration and differ in their susceptibility to digit load. One methodological implication of the model is that regularity effects are not necessary evidence for assembly. This claim is supported by naming studies showing that vowel assembly requires long target durations, but short target durations permit consonant assembly despite null evidence for vowels.
Article
Full-text available
A parallel distributed processing model of visual word recognition and pronunciation is described. The model consists of sets of orthographic and phonological units and an interlevel of hidden units. Weights on connections between units were modified during a training phase using the back-propagation learning algorithm. The model simulates many aspects of human performance, including (a) differences between words in terms of processing difficulty, (b) pronunciation of novel items, (c) differences between readers in terms of word recognition skill, (d) transitions from beginning to skilled reading, and (e) differences in performance on lexical decision and naming tasks. The model's behavior early in the learning phase corresponds to that of children acquiring word recognition skills. Training with a smaller number of hidden units produces output characteristic of many dyslexic readers. Naming is simulated without pronunciation rules, and lexical decisions are simulated without accessing word-level representations. The performance of the model is largely determined by three factors: the nature of the input, a significant fragment of written English; the learning rule, which encodes the implicit structure of the orthography in the weights on connections; and the architecture of the system, which influences the scope of what can be learned.
Article
Full-text available
Investigated the lexical entry for morphologically complex words in English. Six experiments, using a cross-modal repetition priming task, asked whether the lexical entry for derivationally suffixed and prefixed words is morphologically structured and how this relates to the semantic and phonological transparency of the surface relationship between stem and affix. There was clear evidence for morphological decomposition of semantically transparent forms. This was independent of phonological transparency, suggesting that morphemic representations are phonologically abstract. Semantically opaque forms, in contrast, behave like monomorphemic words. Overall, suffixed and prefixed derived words and their stems prime each other through shared morphemes in the lexical entry, except for pairs of suffixed forms, which show a cohort-based interference effect. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
The present studies used feature integration errors to examine the perceptual groupings of letters in visual word recognition.
Article
Full-text available
This article describes an integrated theory of analogical access and mapping, instantiated in a computational model called LISA (Learning and Inference with Schemas and Analogies). LISA represents predicates and objects as distributed patterns of activation that are dynamically bound into propositional structures, thereby achieving both the flexibility of a connectionist system and the structure sensitivity of a symbolic system. The model treats access and mapping as types of guided pattern classification, differing only in that mapping is augmented by a capacity to learn new correspondences. The resulting model simulates a wide range of empirical findings concerning human analogical access and mapping. LISA also has a number of inherent limitations, including capacity limits, that arise in human reasoning, and it suggests a specific computational account of these limitations. Extensions of this approach also account for analogical inference and schema induction.
Article
Full-text available
Marcus et al. (1) familiarized 7-month-old infants with sequences of syllables generated by an artificial grammar; the infants were then able to discriminate between sequences generated both by that grammar and another, even though sequences in the familiarization and test phases employed different syllables. Marcus et al. stated that their infants were representing, extracting, and generalizing abstract algebraic rules. This conclusion was motivated also by their statement that the infants' discrimination could not be performed by a popular class of simple neural network model. Marcus et al. make a number of statements regarding the supposed inability of statistical learning mechanisms, including neural networks, to account for their data. One model they describe (2) was developed, however, to model precisely the kinds of abstract generalizations exhibited by the infants in the report. Marcus et al. state that this model cannot account for their data because, unlike the infants, it relies on being supplied with attested examples of sentences that are acceptable in the artificial language used during the test phase. This is not the case. Our model used a simple recurrent network (3) with an extra encoding layer between the input and hidden layers of nodes. During training, the network was presented with grammatical sequences of syllables (each input node corresponded to a particular syllable). The network's task was to predict the next syllable. Weights on connections between the nodes were adjusted with the use of back-propagation. At test, the weights on connections to the hidden layer were frozen (simulating an adaptive learning procedure). Both grammatical and ungrammatical sequences of hitherto unseen syllables were then presented to the input layer, using input nodes that had not been used for training.
The network's classification of these sequences [determined by an equivalent of the Luce choice rule (4)] did not differ from human participants' above-chance classification of the same stimuli. This was achieved, contrary to the description given by Marcus et al., without pre-test exposure to any test sequences, without feedback at test on which sequences were grammatical and which ungrammatical, and when the input nodes corresponded to individual words in the language (contrary to a further statement in their report concerning limitations on generalization). Subsequent simulations using the same network (5) demonstrated that above-chance discrimination at test between "grammatical" and "ungrammatical" sequences can be achieved (within certain parameters) even when the only basis for discrimination is the difference in "repetition structure" (for example, the ABA and ABB manipulation in the report). The statement by Marcus et al. that the ability to represent repetition patterns such as ABB or AAB is outside the scope of most neural network models of language is unlikely to be correct in light of our findings and demonstrations of recurrent networks' ability to learn context-free grammars generating AB, AABB, AAABBB . . . (6). We have also simulated the findings in the report (1) directly, using their own stimuli. We trained eight versions of our network on the ABA grammar and eight on the ABB grammar. At test, each network was presented with ABA and ABB test sentences in random order, using input nodes that had not been used in training (thereby simulating a change in vocabulary). As each network "saw" each successive test sequence, we correlated the network's prediction of what it would see next with the next input, and calculated also the Euclidean distance between the two.
With learning rate and momentum set at 0.5 and 0.01, respectively, and 10 iterations around each test item, we found significantly higher correlations for congruent sequences than for incongruent ones [F(1,15) = 20.8, P < 0.0004], and a significantly smaller Euclidean distance between prediction and target for congruent targets than for incongruent ones [F(1,15) = 23.1, P < 0.0002]. Like the infants studied by Marcus et al., our networks successfully discriminated between the test stimuli. The conclusions by Marcus et al. stated in the report are premature; a popular class of neural network can model aspects of their own data, as well as substantially more complex data than those in the report. The cognitive processes of 7-month-old infants may not be so different from statistical learning mechanisms after all.
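The two test measures invoked above can be stated compactly. The sketch below is an assumed, generic form of each (the Luce choice rule converting response strengths into choice probabilities, and Euclidean distance scoring how far a network's prediction lies from the actual next input); it is not the authors' code, and the example vectors are invented.

```python
import math

def luce_choice(strengths):
    """Luce choice rule: P(choose i) = strength_i / sum of strengths."""
    total = sum(strengths)
    return [s / total for s in strengths]

def euclidean(prediction, target):
    """Distance between a prediction vector and the actual next input."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(prediction, target)))

# Invented example: a prediction scored against a congruent target
# should lie closer than against an incongruent one.
pred = [0.9, 0.1, 0.8]
congruent, incongruent = [1, 0, 1], [0, 1, 1]
assert euclidean(pred, congruent) < euclidean(pred, incongruent)
```

Comparing these scores for congruent versus incongruent test sequences is what yields the discrimination statistics reported above.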
Article
Full-text available
Past theoretical analyses have claimed that some languages employ a special type of phonological spreading of a consonant over a vowel, long-distance consonantal spreading. I argue that this type of spreading can and must be eliminated from the theory, by reducing it to segmental copying as in reduplication. This elimination is first motivated from a number of perspectives, including considerations of locality and theoretical redundancy. The reduction to reduplication is then developed in detail for Temiar, one of the main indigenous languages of Malaysia, notorious for the complexity of its copying patterns. Crucial to this reduction is the notion of gradient violation of constraints in Optimality Theory (Prince and Smolensky 1993), and the notion of correspondence, with its particular application to reduplication (McCarthy and Prince 1995a). The proposal extends to other languages (e.g., Arabic, Chaha, Modern Hebrew, and Yoruba), where the putative spreading had been thought necessary. The elimination of long-distance consonantal spreading is argued to further obviate two other special mechanisms, also thought to apply on a language-particular basis: (a) the representation that segregates vowels and consonants on different planes, known as V/C planar segregation, and (b) the distinct mode of word formation consisting of mapping segments to templates.
Article
Full-text available
Several experiments examined repetition priming among morphologically related words as a tool to study lexical organization. The first experiment replicated a finding by Stanners, Neiser, Hernon, and Hall (Journal of Verbal Learning and Verbal Behavior, 1979, 18, 399-412), that whereas inflected words prime their unaffixed morphological relatives as effectively as do the unaffixed forms themselves, derived words are effective, but weaker, primes. The experiment also suggested, however, that this difference in priming may have an episodic origin relating to the less formal similarity of derived than of inflected words to unaffixed morphological relatives. A second experiment reduced episodic contributions to priming and found equally effective priming of unaffixed words by themselves, by inflected relatives, and by derived relatives. Two additional experiments found strong priming among relatives sharing the spelling and pronunciation of the unaffixed stem morpheme, sharing spelling alone, or sharing neither formal property exactly. Overall, results with auditory and visual presentations were similar. Interpretations that repetition priming reflects either repeated access to a common lexical entry or associative semantic priming are both rejected in favor of a lexical organization in which components of a word (e.g., a stem morpheme) may be shared among distinct words without the words themselves, in any sense, sharing a “lexical entry.”
Book
In The Algebraic Mind, Gary Marcus attempts to integrate two theories about how the mind works, one that says that the mind is a computer-like manipulator of symbols, and another that says that the mind is a large network of neurons working together in parallel. Resisting the conventional wisdom that says that if the mind is a large neural network it cannot simultaneously be a manipulator of symbols, Marcus outlines a variety of ways in which neural systems could be organized so as to manipulate symbols, and he shows why such systems are more likely to provide an adequate substrate for language and cognition than neural systems that are inconsistent with the manipulation of symbols. Concluding with a discussion of how a neurally realized system of symbol-manipulation could have evolved and how such a system could unfold developmentally within the womb, Marcus helps to set the future agenda of cognitive neuroscience. Bradford Books imprint
Article
A masked priming paradigm was used to examine the role of the root and verbal-pattern morphemes in lexical access within the verbal system of Hebrew. Previous research within the nominal system had shown facilitatory effects from masked primes that shared the same root as the target word, but not when the primes shared the word pattern (R. Frost, K. I. Forster, & A. Deutsch, 1997). In contrast to these findings, facilitatory effects were obtained for both roots and word patterns in the verbal system. In addition, verbal pattern facilitation was obtained even when the primes were pseudoverbs consisting of illegal combinations of roots and verbal patterns. Significant priming was also found when the primes and the targets contained the same root. The results are discussed with reference to the factors that may determine the lexical status of morphological units in lexical organization. A model of morphological processing of Hebrew words is proposed.
Article
The Obligatory Contour Principle (OCP) forbids representations in which identical elements are adjacent. A sequence of two high tones, for example, is avoided in a variety of ways: one of the tones is deleted or retracted away from the other, or the two are fused into a single high tone. Processes that would create such a sequence are blocked. The problem is how to derive all these different ways of avoiding this configuration from a single principle. It is argued here that Optimality Theory (OT) provides the means to derive the full range of dissimilatory effects from the OCP, through the ranking of the OCP with Faithfulness constraints. Examples of tonal dissimilation in three Bantu languages are examined: Shona, Rimi, and Kishambaa. The analysis supports the OT interpretation of constraints as violable and ranked.
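The core Optimality Theory mechanism described above, ranked and violable constraints selecting among candidate outputs, can be illustrated with a deliberately tiny evaluator. This is a toy sketch, not the paper's analysis: the schematic OCP constraint (no adjacent identical tones), the Faithfulness constraint (no deletion of input tones), and the candidate strings are all invented for illustration.

```python
def ocp(candidate):
    """Schematic OCP: count adjacent identical tones (e.g. 'HH')."""
    return sum(1 for a, b in zip(candidate, candidate[1:]) if a == b)

def faith(candidate, underlying="HH"):
    """Schematic Faithfulness: count tones deleted from the input."""
    return max(0, len(underlying) - len(candidate))

def evaluate(candidates, ranking):
    # Constraint ranking as lexicographic comparison of violation
    # profiles: a higher-ranked constraint decides before lower ones.
    return min(candidates, key=lambda c: [con(c) for con in ranking])

# Under OCP >> Faith, input /HH/ surfaces as [H]: deleting one tone
# (one Faith violation) beats a fatal OCP violation.
assert evaluate(["HH", "H"], [ocp, faith]) == "H"
```

Reversing the ranking, `evaluate(["HH", "H"], [faith, ocp])`, selects "HH" instead, which is the sense in which ranking the OCP against Faithfulness derives different dissimilatory repairs across languages.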
Article
The question, "What is Cognitive Science?" is often asked but seldom answered to anyone's satisfaction. Until now, most of the answers have come from the new breed of philosophers of mind. This book, however, is written by a distinguished psychologist and computer scientist who is well-known for his work on the conceptual foundations of cognitive science, and especially for his research on mental imagery, representation, and perception. In Computation and Cognition, Pylyshyn argues that computation must not be viewed as just a convenient metaphor for mental activity, but as a literal empirical hypothesis. Such a view must face a number of serious challenges. For example, it must address the question of "strong equivalence" of processes, and must empirically distinguish between phenomena which reveal what knowledge the organism has and phenomena which reveal properties of the biologically determined "functional architecture" of the mind. The principles and ideas Pylyshyn develops are applied to a number of contentious areas of cognitive science, including theories of vision and mental imagery. In illuminating such timely theoretical problems, he draws on insights from psychology, theoretical computer science, artificial intelligence, and philosophy of mind. A Bradford Book
Article
A fundamental task of language acquisition is to extract abstract algebraic rules. Three experiments show that 7-month-old infants attend longer to sentences with unfamiliar structures than to sentences with familiar structures. The design of the artificial language task used in these experiments ensured that this discrimination could not be performed by counting, by a system that is sensitive only to transitional probabilities, or by a popular class of simple neural network models. Instead, these results suggest that infants can represent, extract, and generalize abstract algebraic rules.
Article
Hebrew frequently manifests gemination in its roots, but strictly constrains its position: Root-final gemination is extremely frequent (e.g., bdd), whereas root-initial gemination is rare (e.g., bbd). This asymmetry is explained by a universal constraint on phonological representations, the Obligatory Contour Principle (McCarthy, 1986). Three experiments examined whether this phonological constraint affects performance in a lexical decision task. The rejection of nonwords generated from novel roots with root-initial gemination (e.g., Ki-KuS) was significantly faster than that of final-gemination controls (e.g., Si-KuK). The emergence of this asymmetry regardless of the position of geminates in the word implicates a constraint on root, rather than simply word structure. Our results further indicate that speakers are sensitive to the structure of geminate bigrams, i.e., their identity. Nonwords formed from roots with final gemination (e.g., Si-KuK) were significantly more difficult to reject than foils generated from frequency-matched no-gemination controls (e.g., Ni-KuS). Speakers are thus sensitive to the identity of geminates and constrain their location in the root. These findings suggest that the representations assembled in reading a deep orthography are structured linguistic entities, constrained by phonological competence.
Article
"On The Definition of Word" develops a consistent and coherent approach to central questions about morphology and its relation to syntax. In sorting out the various senses in which the word "word" is used, it asserts that three concepts which have often been identified with each other are in fact distinct and not coextensive: listemes (linguistic objects permanently stored by the speaker); morphological objects (objects whose shape can be characterized in morphological terms of affixation and compounding); and syntactic atoms (objects that are unanalyzable units with respect to syntax). The first chapter defends the idea that listemes are distinct from the other two notions, and that all one can and should say about them is that they exist. A theory of morphological objects is developed in chapter two. Chapter three defends the claim that the morphological objects are a proper subset of the syntactic atoms, presenting the authors' reconstruction of the important and much-debated Lexical Integrity Hypothesis. A final chapter shows that there are syntactic atoms which are not morphological objects. Anne Marie Di Sciullo is in the Department of Linguistics at the University of Quebec. Edwin Williams is in the Department of Linguistics at the University of Massachusetts. "On The Definition of Word" is Linguistic Inquiry Monograph 14.
Article
Reply by the current author to the comments made by J. L. McClelland and D. C. Plaut (see record 2006-00293-002) on the original article (199900199-002). It is not altogether surprising that commentators, researchers with longstanding interests in providing alternatives to rules, find our recent experiments unconvincing. But advocates of their cognition-without-rules view might want to look elsewhere to bolster their case, as none of McClelland and Plaut's objections turns out to be plausible. McClelland and Plaut worry that they 'don't really see how experiments' like ours can tell us 'whether (infants) use rules' - without suggesting any alternative. We find such a view to be unduly pessimistic, casting questions about models as unanswerable. While we acknowledge the fact that it is impossible to test the broad framework of connectionism - which encompasses both systems that use rules and those that do not - it is possible to use empirical data to choose between classes of models, and we believe that our experiments do so.
Article
The book from which these sections are excerpted (N. Chomsky, Rules and Representations, Columbia University Press, 1980) is concerned with the prospects for assimilating the study of human intelligence and its products to the natural sciences through the investigation of cognitive structures, understood as systems of rules and representations that can be regarded as “mental organs.” These mental structures serve as the vehicles for the exercise of various capacities. They develop in the mind on the basis of an innate endowment that permits the growth of rich and highly articulated structures along an intrinsically determined course under the triggering and partially shaping effect of experience, which fixes parameters in an intricate system of predetermined form. It is argued that the mind is modular in character, with a diversity of cognitive structures, each with its specific properties and principles. Knowledge of language, of the behavior of objects, and much else crucially involves these mental structures, and is thus not characterizable in terms of capacities, dispositions, or practical abilities, nor is it necessarily grounded in experience in the standard sense of this term. Various types of knowledge and modes of knowledge acquisition are discussed in these terms. Some of the properties of the language faculty are investigated. The basic cognitive relation is “knowing a grammar”; knowledge of language is derivative and, correspondingly, raises further problems.
Language as commonly understood is not a unitary phenomenon but involves a number of interacting systems: the “computational” system of grammar, which provides the representations of sound and meaning that permit a rich range of expressive potential, is distinct from a conceptual system with its own properties; knowledge of language must be distinguished from knowledge of how to use a language; and the various systems that enter into the knowledge and use of language must be further analyzed into their specific subcomponents.
Article
This paper deals with finite size networks which consist of interconnections of synchronously evolving processors. Each processor updates its state by applying a "sigmoidal" function to a linear combination of the previous states of all units. We prove that one may simulate all Turing machines by such nets. In particular, one can simulate any multi-stack Turing machine in real time, and there is a net made up of 886 processors which computes a universal partial-recursive function. Products (high order nets) are not required, contrary to what had been stated in the literature. Non-deterministic Turing machines can be simulated by non-deterministic rational nets, also in real time. The simulation result has many consequences regarding the decidability, or more generally the complexity, of questions about recursive nets.
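The key idea behind such simulations, an unbounded stack held in the bounded, rational-valued state of a single unit, can be illustrated with the saturated-linear activation sigma(x) = min(1, max(0, x)) typical of these networks. The base-4 digit encoding below is an assumed simplification for illustration, not the paper's exact construction, and the affine push/pop maps are invented to match it.

```python
from fractions import Fraction

def sigma(x):
    """Saturated-linear ("sigmoidal") activation: clamp to [0, 1]."""
    return min(Fraction(1), max(Fraction(0), x))

def push(q, bit):
    # Prepend a base-4 digit to the stack value: digit 1 encodes bit 0,
    # digit 3 encodes bit 1 (assumed encoding for this sketch).
    return sigma(Fraction(q, 4) + Fraction(2 * bit + 1, 4))

def top(q):
    # Top bit is 1 iff the leading base-4 digit is 3, i.e. q >= 3/4.
    return 1 if q >= Fraction(3, 4) else 0

def pop(q):
    # Strip the leading base-4 digit with an affine map + saturation.
    return sigma(4 * q - (2 * top(q) + 1))

q = Fraction(0)
for bit in [1, 0, 1]:   # push 1, then 0, then 1
    q = push(q, bit)
assert top(q) == 1       # last pushed bit is on top
```

Because push, top, and pop are affine maps composed with the clamp, a fixed finite network of such units can drive them, which is the sense in which rational-state sigmoidal nets can track unbounded Turing machine storage.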
Article
The influence of morphological structure was investigated in two types of word recognition tasks with Serbian materials. Morphological structure included both inflectional and derivational formations and comparisons were controlled for word class and the orthographic and phonological similarity of forms. In Experiments 1, 2, and 3, the pattern of facilitation to target decision latencies was examined following morphologically-related primes in a repetition priming task. Although all morphologically related primes facilitated targets relative to an unprimed condition, inflectionally related primes produced significantly greater effects than did derivationally related primes. In Experiments 4, 5, and 6 subjects were required to segment and shift an underlined portion from one word onto a second word and to name the result aloud. The shifted letter sequence was sometimes morphemic (e.g., the equivalent of ER in DRUMMER) and sometimes not (e.g., the equivalent of ER in SUMMER). Morphemic letter sequences were segmented and shifted more rapidly than their nonmorphemic controls when they were inflectional affixes but not when they were derivational affixes. These results indicate that (a) morphological effects cannot be ascribed to orthographic and phonological structure, (b) the constituent morphemic structure of a word contributes to word recognition and (c) morphemic structure is more transparent for inflectional than for derivational formations.
Article
Both regular inflectional patterns (walk-walked) and irregular ones (swing-swung) can be applied productively to novel words (e.g. wug-wugged; spling-splung). Theories of generative phonology attribute both generalisations to rules; connectionist theories attribute both to analogies in a pattern associator network; hybrid theories attribute regular (fully predictable default) generalisations to a rule and irregular generalisations to a rote memory with pattern-associator properties. In three experiments and three simulations, we observe the process of generalising morphological patterns in humans and two-layer connectionist networks. Replicating Bybee and Moder (1983), we find that people's willingness to generalise from existing irregular verbs to novel ones depends on the global similarity between them (e.g. spling is readily inflectable as splung, but nist is not inflectable as nust). In contrast, generalisability of the regular suffix does not appear to depend on similarity to existing regular verbs. Regularly suffixed versions of both common-sounding plip and odd-sounding ploamph were reliably produced and highly rated, and the odd-sounding verbs were not rated as having worse past-tense forms, relative to the naturalness of their stems, than common-sounding ones. In contrast, Rumelhart and McClelland's connectionist past-tense model was found to vary strongly in its tendency to supply both irregular and regular inflections to these novel items as a function of their similarity to forms it was trained on, and for the dissimilar forms, successful regular inflection rarely occurred. We suggest that rule-only theories have trouble explaining patterns of irregular generalisations, whereas single-network theories have trouble explaining regular ones; the computational demands of the two kinds of verbs are different, so a modular system is optimal for handling both simultaneously.
Evidence from linguistics and psycholinguistics independently calls for such a hybrid, where irregular pairs are stored in a memory system that superimposes phonological forms, fostering generalisation by analogy, and regulars are generated by a default suffix concatenation process capable of operating on any verb, regardless of its sound.
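The hybrid (dual-route) proposal just described can be caricatured in a few lines: an analogical route generalises an irregular pattern only when a novel stem is similar enough to a stored irregular family, while a default rule suffixes any stem regardless of similarity. Everything below is an invented toy, the mini-lexicon, the crude shared-final-substring similarity measure, and the threshold, chosen only to mirror the spling/ploamph contrast in the abstract.

```python
# Invented mini-lexicon of one irregular family (assumption).
IRREGULARS = {"cling": "clung", "fling": "flung", "sling": "slung"}

def similarity(a, b):
    """Crude similarity: length of the shared final substring (assumption)."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def past_tense(stem, threshold=3):
    # Analogical (associative-memory) route: apply the irregular
    # -ing -> -ung pattern if a close enough neighbour exists.
    best = max(IRREGULARS, key=lambda w: similarity(stem, w))
    if similarity(stem, best) >= threshold:
        return stem[: len(stem) - len("ing")] + "ung"
    # Default rule route: concatenate the regular suffix, with no
    # dependence on similarity to stored forms.
    return stem + "ed"
```

On this sketch, `past_tense("spling")` follows the analogical route (a close irregular neighbour exists) while `past_tense("ploamph")` and `past_tense("nist")` fall through to the default rule, which is the modular division of labor the abstract argues for.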
Article
Long-term morphological priming is a form of repetition priming in which the identification of a word is primed by the prior presentation of a morphologically related word. We investigated morphological priming using a variant of the fragment completion task in which a word is briefly presented with one of its letters replaced by a pattern mask and subjects attempt to identify the letter “hidden” by the mask. In Experiment 1 a levels-of-processing manipulation at study was found to affect free recall but not masked fragment completion, suggesting that repetition priming in the latter task is not the result of explicit memory processes. Subsequent experiments revealed that both masked and standard fragment completion are influenced by morphological priming and that, although this effect cannot be attributed to the orthographic and phonological similarity of morphologically related words, it does vary in magnitude as a function of orthographic similarity. These results are consistent with a connectionist account of morphological priming in which morphological effects arise from the activation dynamics of a connectionist network even though morphological relationships are not explicitly represented in this network.
Article
An evaluation metric in Universal Grammar provides a means of selecting between possible grammars for a particular language. The evaluation metric as conceived in Chomsky & Halle (1968; henceforth SPE) prefers the grammar in which only the idiosyncratic properties are lexically listed and predictable properties are derived. The essence of underspecification theory is to supply such predictable distinctive features or feature specifications by rule. Viewed in this way, the general idea of underspecification has always been a part of any theory of phonology that includes such an evaluation metric.