Annual Review of Linguistics - Volume 5, 2019
The Impossibility of Language Acquisition (and How They Do It)
Vol. 5 (2019), pp. 1–24. This autobiographical article, which began as an interview, reports some reflections by Lila Gleitman on the development of her thinking and her research—in concert with a host of esteemed collaborators over the years—on issues of language and mind, focusing on how language is acquired. Gleitman entered the field of linguistics as a student of Zellig Harris, and learned firsthand of Noam Chomsky's early work. She chose the psychological perspective, later helping to found the field of cognitive science; and with her husband and long-term collaborator, Henry Gleitman, for over 50 years fostered a continuing research community aimed at answering questions such as: When language input to the child is restricted, what is left to explain language acquisition? The studies reported here find that argument structure encoded in the syntax is key (syntactic bootstrapping) and that children learn word meaning in epiphanies (propose but verify).
How Consonants and Vowels Shape Spoken-Language Recognition
Thierry Nazzi and Anne Cutler. Vol. 5 (2019), pp. 25–47. All languages instantiate a consonant/vowel contrast. This contrast has processing consequences at different levels of spoken-language recognition throughout the lifespan. In adulthood, lexical processing is more strongly associated with consonant than with vowel processing; this has been demonstrated across 13 languages from seven language families and in a variety of auditory lexical-level tasks (deciding whether a spoken input is a word, spotting a real word embedded in a minimal context, reconstructing a word minimally altered into a pseudoword, learning new words or the “words” of a made-up language), as well as in written-word tasks involving phonological processing. In infancy, a consonant advantage in word learning and recognition is found to emerge during development in some languages, though possibly not in others, revealing that the stronger lexicon–consonant association found in adulthood is learned. Current research is evaluating the relative contribution of the early acquisition of the acoustic/phonetic and lexical properties of the native language in the emergence of this association.
Cross-Modal Effects in Speech Perception
Vol. 5 (2019), pp. 49–66. Speech research during recent years has moved progressively away from its traditional focus on audition toward a more multisensory approach. In addition to audition and vision, many somatosenses including proprioception, pressure, vibration, and aerotactile sensation are all highly relevant modalities for experiencing and/or conveying speech. In this article, we review both long-standing cross-modal effects stemming from decades of audiovisual speech research and new findings related to somatosensory effects. Cross-modal effects in speech perception to date have been found to be constrained by temporal congruence and signal relevance, but appear to be unconstrained by spatial congruence. The literature reveals that, far from taking place in a one-, two-, or even three-dimensional space, speech occupies a highly multidimensional sensory space. We argue that future research in cross-modal effects should expand to consider each of these modalities both separately and in combination with other modalities in speech.
Computational Modeling of Phonological Learning
Vol. 5 (2019), pp. 67–90. Recent advances in computational modeling have led to significant discoveries about the representation and acquisition of phonological knowledge and the limits on language learning and variation. These discoveries are the result of applying computational learning models to increasingly rich and complex natural language data while making increasingly realistic assumptions about the learning task. This article reviews the recent developments in computational modeling that have made possible connections between fully explicit theories of learning, naturally occurring corpus data, and the richness of psycholinguistic and typological data. These advances fall into two broad research areas: (a) the development of models capable of learning the quantitative, noisy, and inconsistent patterns that are characteristic of naturalistic data and (b) the development of models with the capacity to learn hidden phonological structure from unlabeled data. After reviewing these advances, the article summarizes some of the most significant consequent discoveries.
Corpus Phonetics
Vol. 5 (2019), pp. 91–107. Semiautomatic analysis of digital speech collections is transforming the science of phonetics. Convenient search and analysis of large published bodies of recordings, transcripts, metadata, and annotations—up to three or four orders of magnitude larger than a few decades ago—have created a trend towards “corpus phonetics,” whose benefits include greatly increased researcher productivity, better coverage of variation in speech patterns, and crucial support for reproducibility. The results of this work include insights into theoretical questions at all levels of linguistic analysis, along with applications in fields as diverse as psychology, medicine, and poetics, as well as within phonetics itself. Remaining challenges include still-limited access to the necessary skills and a lack of consistent standards. These changes coincide with the broader Open Data movement, but future solutions will also need to include more constrained forms of publication motivated by valid concerns for privacy, confidentiality, and intellectual property.
Relations Between Reading and Speech Manifest Universal Phonological Principle
Vol. 5 (2019), pp. 109–129. All writing systems represent speech, providing a means for recording each word of a message. This is achieved by symbolizing the phonological forms of spoken words as well as information conveying grammar and meaning. Alphabetic systems represent the segmental phonology by providing symbols for individual consonants and vowels; some also convey morphological units. Other systems represent syllables (typically CVs) or morphosyllables. In all cases, learning to read requires a learner to discover the forms of language that writing encodes, drawing on metalinguistic abilities that are not needed for the acquisition of speech. Therefore, learning to read is harder and rarer than acquiring speech. Research reveals that skilled readers of every studied orthography access phonological language forms automatically and early in word reading. Although reading processes differ according to the cognitive demands of specific orthographic forms, the differences are subservient to the universal phonological principle that all readers access phonological language forms.
Individual Differences in Language Processing: Phonology
Vol. 5 (2019), pp. 131–150. Individual variation is ubiquitous and empirically observable in most phonological behaviors, yet relatively few studies aim to capture the heterogeneity of language processing among individuals, as opposed to those focusing primarily on group-level patterns. The study of individual differences can shed light on the nature of the cognitive representations and mechanisms involved in phonological processing. To guide our review of individual variation in the processing of phonological information, we consider studies that can illuminate broader issues in the field, such as the nature of linguistic representations and processes. We also consider how the study of individual differences can provide insight into long-standing issues in linguistic variation and change. Since linguistic communities are made up of individuals, the questions raised by examining individual differences in linguistic processing are relevant to those who study all aspects of language.
The Syntax–Prosody Interface
Ryan Bennett and Emily Elfner. Vol. 5 (2019), pp. 151–171. This article provides an overview of current and historically important issues in the study of the syntax–prosody interface, the point of interaction between syntactic structure and phrase-level phonology. We take a broad view of the syntax–prosody interface, surveying both direct and indirect reference theories, with a focus on evaluating the continuing prominent role of prosodic hierarchy theory in shaping our understanding of this area of linguistics. Specific topics discussed in detail include the identification of prosodic domains, the universality of prosodic categories, the recent resurgence of interest in the role of recursion in prosodic structure, crosslinguistic variation in syntax–prosody mapping, prosodic influences on syntax and word order, and the influence of sentence processing in the planning and shaping of prosodic domains. We consider criticisms of prosodic hierarchy theory in particular, and provide an assessment of the future of prosodic hierarchy theory in research on the syntax–prosody interface.
Western Austronesian Voice
Vol. 5 (2019), pp. 173–195. Over the past four decades, the nature of western Austronesian voice—typically subcategorized as Philippine-type and Indonesian-type—has triggered considerable debate in the typological and syntactic literature. Central questions in these debates have been concerned with how voice alternations in western Austronesian languages interact with grammatical relations, transitivity, and syntactic alignment. In this review, we reassess the syntactic properties of voice alternations in western Austronesian languages, in some cases focusing on more controversial alternations, including the putative antipassive and applicative constructions in Philippine-type languages and the passive constructions in Indonesian-type languages. We discuss reasons that favor a valency-neutral approach to western Austronesian voice and evidence against a valency-changing and/or ergative approach to the analysis of these languages.
Dependency Grammar
Vol. 5 (2019), pp. 197–218. Dependency grammar is a descriptive and theoretical tradition in linguistics that can be traced back to antiquity. It has long been influential in the European linguistics tradition and has more recently become a mainstream approach to representing syntactic and semantic structure in natural language processing. In this review, we introduce the basic theoretical assumptions of dependency grammar and review some key aspects in which different dependency frameworks agree or disagree. We also discuss advantages and disadvantages of dependency representations and introduce Universal Dependencies, a framework for multilingual dependency-based morphosyntactic annotation that has been applied to more than 60 languages.
Closest Conjunct Agreement
Vol. 5 (2019), pp. 219–241. Closest conjunct agreement is of great theoretical interest in terms of what it reveals about the structure of coordination; the locality of agreement relations; and the interaction between syntax, semantics, and morphology in the expression of agreement. We highlight recent approaches to the phenomenon, including typologically diverse case studies and experimentally elicited results, and point out crystallized generalizations as well as directions for future research, including the absence of last conjunct agreement, the absence of closest conjunct case, differences between conjunction and disjunction, and the role of linear adjacency in morphological realization.
Three Mathematical Foundations for Syntax
Vol. 5 (2019), pp. 243–260. Three different foundational ideas can be identified in recent syntactic theory: structure from substitution classes, structure from dependencies among heads, and structure as the result of optimizing preferences. As formulated in this review, it is easy to see that these three ideas are completely independent. Each has a different mathematical foundation, each suggests a different natural connection to meaning, and each implies something different about how language acquisition could work. Since they are all well supported by the evidence, these three ideas are found in various mixtures in the prominent syntactic traditions. From this perspective, if syntax springs fundamentally from a single basic human ability, it is an ability that exploits a coincidence of a number of very different things.
Response Systems: The Syntax and Semantics of Fragment Answers and Response Particles
Vol. 5 (2019), pp. 261–287. This article critically reviews the main research issues raised in the study of response systems in natural languages by addressing the syntax and semantics of fragment answers and yes/no response particles. Fragment answers include replies that do not have a sentential form, whereas response particles consist solely of an affirmative or a negative adverb. While the main research question in the syntax of fragments and response particles has been whether these contain more syntactic structure than what is actually pronounced, the key issues in the study of their semantics are question–answer congruence, the anaphoric potential of response particles, and the meaning of fragments in relation to positive and negative questions. In connection to these issues, this review suggests some interesting avenues for further research: (a) providing an analysis of particles other than yes/no, (b) choosing between echoic versus nonechoic forms as answers to polar questions, and (c) deciding whether some non-lexically-based or nonverbal responses are systematically used in combination with polar particles to express (dis)agreement.
Distributivity in Formal Semantics
Vol. 5 (2019), pp. 289–308. Distributivity in natural language occurs in sentences such as John and Mary (each) took a deep breath, when a predicate that is combined with a plurality-denoting expression is understood as holding of each of the members of that plurality. Language provides ways to express distributivity overtly, with words such as English each, but also covertly, when no one word can be regarded as contributing it. Both overt and covert distributivity occur in a wide variety of constructions. This article reviews and synthesizes influential approaches to distributivity in formal semantics and includes pointers to some more recent approaches. Theories of distributivity can be distinguished on the basis of how they answer a number of interrelated questions: To what extent can distributivity be attributed to what we know about the world, as opposed to the meanings of words or silent operators? What is the relationship between distributivity and plurality? Does distributivity always reach down to the singular individuals in a plurality? If not, under what circumstances is distributivity over subgroups possible, and what is its relation to distributivity over individuals?
The Syntax and Semantics of Nonfinite Forms
Vol. 5 (2019), pp. 309–328. The syntactic and semantic properties of nonfinite verb categories can best be understood in relation to and distinction from the corresponding properties of finite verb categories. In order to explore these issues, it is necessary to provide a crosslinguistically valid characterization of finiteness. Finiteness is a prototypical notion, understood in relation to a language-specific finite verb prototype; nonfiniteness is therefore understood in terms of degrees of deviation from this prototype. The syntactic properties of nonfinite verb categories, so defined, can be considered from two perspectives: the functions of nonfinite clauses within superordinate clauses (e.g., argument and adjunct functions) and the internal structure of nonfinite verb phrases. Typical of the second aspect is that nonfinite phrases tend to be defective in one or another respect, relative to finite phrases, which may be understood in terms of lacking functional projections or features that are an obligatory part of finite phrases. This defectiveness relative to the finite prototype plays out also in the semantics; typically, certain aspects of the meaning of nonfinite phrases are not independently specified, but must be derived from semantic properties of a superordinate finite clause.
Semantic Anomaly, Pragmatic Infelicity, and Ungrammaticality
Vol. 5 (2019), pp. 329–351. A major goal of modern syntax has been to find principles that rule out sentences that seem ungrammatical. To achieve this goal, it has been proposed that syntactically odd (or ungrammatical) sentences can be distinguished empirically and theoretically from semantically odd (or semantically anomalous) sentences. However, sometimes it is not clear why a sentence is “weird,” which has repercussions for our syntactic and semantic theories. According to a number of proposals, semantic and pragmatic processes can lead to weirdness that empirically feels more like ungrammaticality than semantic oddness. But if this is so, then a question arises: What explains the intuitive difference between sentences that feel ungrammatical and those that merely feel semantically (or pragmatically) anomalous? This article addresses this question by describing and comparing various semantic and pragmatic proposals for explaining different types of weirdness: ungrammaticality, semantic anomaly, and pragmatic infelicity.
Artificial Language Learning in Children
Vol. 5 (2019), pp. 353–373. Artificial language learning methods—in which learners are taught miniature constructed languages in a controlled laboratory setting—have become a valuable experimental tool for research on language development. These methods offer a complement to natural language acquisition data, allowing researchers to control both the input to learning and the learning environment. A large proportion of artificial language learning studies have aimed to understand the mechanisms of learning in infants. This review focuses instead on investigations into the nature of early linguistic representations and how they are influenced by both the structure of the input and the cognitive features of the learner. Looking not only at young infants but also at children beyond infancy, we discuss evidence for early abstraction, conditions on generalization, the acquisition of grammatical categories and dependencies, and recent work connecting the cognitive biases of learners to language typology. We end by outlining important areas for future research.
What Defines Language Dominance in Bilinguals?
Vol. 5 (2019), pp. 375–393. This article focuses on the construct of language dominance in bilinguals and the ways in which this construct has been operationalized. Language dominance is often seen as relative proficiency in two languages, but it can also be analyzed in terms of language use—that is, how frequently bilinguals use their languages and how these are divided across domains. Assessing language dominance is important because it has become clear that the level of bilinguals’ proficiency in each language and the relative strength of each language affect performance on tasks. A key distinction is made between direct measures of language dominance, which assess an aspect of language proficiency (e.g., vocabulary or grammar), and indirect ones, which measure variability in exposure to different languages and bilinguals’ use of them. The article includes an evaluation of the extent to which the latter can be interpreted as a proxy for the former.
The Advantages of Bilingualism Debate
Vol. 5 (2019), pp. 395–415. Bilingualism was once thought to result in cognitive disadvantages, but research in recent decades has demonstrated that experience with two (or more) languages confers a bilingual advantage in executive functions and may delay the incidence of Alzheimer's disease. However, conflicting evidence has emerged, leading to questions concerning the robustness of the bilingual advantage for both executive functions and dementia incidence. Some investigators have failed to find evidence of a bilingual advantage; others have suggested that bilingual advantages may be entirely spurious, while proponents of the advantage case have continued to defend it. A heated debate has ensued, and the field has now reached an impasse. This review critically examines evidence for and against the bilingual advantage in executive functions, cognitive aging, and brain plasticity, before outlining how future research could shed light on this debate and advance knowledge of how experience with multiple languages affects cognition and the brain.