Annual Review of Linguistics - Volume 8, 2022
Reverse Engineering Language Acquisition with Child-Centered Long-Form Recordings
Vol. 8 (2022), pp. 389–407

Language use in everyday life can be studied using lightweight, wearable recorders that collect long-form recordings—that is, audio (including speech) over whole days. The hardware and software underlying this technique are increasingly accessible and inexpensive, and these data are revolutionizing the language acquisition field. We first situate this technique within current methods for studying both the input children receive and children's own language production, laying out the main advantages and drawbacks of long-form recordings. We then argue that a unique advantage of long-form recordings is that they can fuel realistic models of early language acquisition that use speech to represent children's input and/or to establish production benchmarks. To enable the field to make the most of this unique empirical and conceptual contribution, we outline what this reverse engineering approach from long-form recordings entails, why it is useful, and how to evaluate success.
Stance and Stancetaking
Vol. 8 (2022), pp. 409–426

Stance and stancetaking are considered here as related concepts that help to explain the patterning of language and the motivations for the use of lexical items, constructions, and discourse markers. I begin by discussing how stance can be used in variation analysis to help explain the patterning of variables and directions of change, and how stance is central to any understanding of the indexicality of sociolinguistic variables. I then discuss several approaches to theorizing stance and explicate a stance model that combines a number of these approaches, arguing that such a model should include three dimensions: evaluation, alignment, and investment. Finally, I outline several ways that stance has been operationalized in quantitative analyses, including analyses based on the model outlined here.
Neurocomputational Models of Language Processing
Vol. 8 (2022), pp. 427–446

Efforts to understand the brain bases of language face the Mapping Problem: At what level do linguistic computations and representations connect to human neurobiology? We review one approach to this problem that relies on rigorously defined computational models to specify the links between linguistic features and neural signals. Such tools can be used to estimate linguistic predictions, model linguistic features, and specify a sequence of processing steps that may be quantitatively fit to neural signals collected while participants use language. Progress has been helped by advances in machine learning, attention to linguistically interpretable models, and openly shared data sets that allow researchers to compare and contrast a variety of models. We describe one such data set in detail in the Supplemental Appendix.
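As a concrete illustration of the kind of model-to-signal linking this abstract describes, here is a minimal sketch of a linear encoding analysis: word-level linguistic features are regressed onto a neural response with ridge regression, and rival feature sets are compared by held-out prediction accuracy. The feature names, dimensions, and synthetic data are illustrative assumptions, not the specific models or data set reviewed in the article.

```python
# Minimal sketch of a linear "encoding model": regress a neural signal
# onto word-level linguistic features. All data here are synthetic
# placeholders; the features and dimensions are illustrative assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_words = 2000
# Hypothetical per-word features: log frequency, word length, and
# surprisal from some language model. Real analyses also align
# features to the latency of the neural response.
X = rng.normal(size=(n_words, 3))

# Synthetic neural response: a weighted sum of the features plus noise,
# standing in for an averaged evoked response per word.
true_weights = np.array([0.2, -0.1, 0.6])
y = X @ true_weights + rng.normal(scale=1.0, size=n_words)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Ridge regression with cross-validated regularization, a common choice
# because linguistic feature sets are often correlated.
model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X_train, y_train)

# Held-out prediction accuracy is the usual yardstick for comparing
# rival feature sets, i.e., rival linguistic models.
print("held-out R^2:", model.score(X_test, y_test))
print("feature weights:", model.coef_)
```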
Semantic Structure in Deep Learning
Vol. 8 (2022), pp. 447–471

Deep learning has recently come to dominate computational linguistics, leading to claims of human-level performance in a range of language processing tasks. Like much previous computational work, deep learning–based linguistic representations adhere to the distributional meaning-in-use hypothesis, deriving semantic representations from word co-occurrence statistics. However, current deep learning methods entail fundamentally new models of lexical and compositional meaning that are ripe for theoretical analysis. Whereas traditional distributional semantics models take a bottom-up approach in which sentence meaning is characterized by explicit composition functions applied to word meanings, new approaches take a top-down approach in which sentence representations are treated as primary and representations of words and syntax are viewed as emergent. This article summarizes our current understanding of how well such representations capture lexical semantics, world knowledge, and composition. The goal is to foster increased collaboration on testing the implications of such representations as general-purpose models of semantics.
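As a reference point for the bottom-up view described above, here is a minimal sketch of a traditional distributional pipeline: word vectors are derived from co-occurrence counts reweighted by positive pointwise mutual information (PPMI), and sentence meaning is obtained by an explicit composition function (here, vector addition). The toy corpus and the additive composition function are illustrative assumptions; the deep learning models the article discusses instead produce sentence representations directly, with word and syntax representations emerging inside the network.

```python
# Minimal sketch of "bottom-up" distributional semantics: word vectors
# from co-occurrence statistics (PPMI), composed into sentence vectors
# by explicit addition. The toy corpus is an illustrative assumption.
import numpy as np
from itertools import combinations

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
    "the dog ate the bone",
]

sentences = [s.split() for s in corpus]
vocab = sorted({w for s in sentences for w in s})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a sentence-sized window.
counts = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for w1, w2 in combinations(s, 2):
        counts[idx[w1], idx[w2]] += 1
        counts[idx[w2], idx[w1]] += 1

# Positive pointwise mutual information (PPMI) reweighting.
total = counts.sum()
row = counts.sum(axis=1, keepdims=True)
col = counts.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore"):
    pmi = np.log((counts * total) / (row * col))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

def compose(sentence):
    """Explicit bottom-up composition: sum the word vectors."""
    return sum(ppmi[idx[w]] for w in sentence.split())

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

v1 = compose("the cat chased the mouse")
v2 = compose("the dog chased the cat")
print("sentence similarity:", round(float(cosine(v1, v2)), 3))
```

Note that additive composition is order-insensitive: "the cat chased the mouse" and "the mouse chased the cat" receive identical vectors, one of the limitations that motivates the top-down alternatives the article examines.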
Deriving the Wug-Shaped Curve: A Criterion for Assessing Formal Theories of Linguistic Variation
Vol. 8 (2022), pp. 473–494

In this review, I assess a variety of constraint-based formal frameworks that can treat variable phenomena, such as well-formedness intuitions, outputs in free variation, and lexical frequency-matching. The idea behind this assessment is that data in gradient linguistics fall into natural mathematical patterns, which I call quantitative signatures. The key signatures treated here are the sigmoid curve, going from zero to one probability, and the wug-shaped curve, which combines two or more sigmoids. I argue that these signatures appear repeatedly in linguistics, and I adduce examples from phonology, syntax, semantics, sociolinguistics, phonetics, and language change. I suggest that the ability to generate these signatures is a trait that can help us choose between rival frameworks.
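For readers who want the sigmoid signature in concrete form, here is a minimal sketch: in a logistic (MaxEnt-style) model, output probability runs from zero to one as a scalar predictor varies, and evaluating two sigmoids that differ in their intercepts gives one illustrative construal of a curve family built from multiple sigmoids. The weights, biases, and the particular way the curves are combined are assumptions for illustration, not the article's specific analysis.

```python
# Minimal sketch of the sigmoid quantitative signature: in a logistic
# (MaxEnt-style) grammar, P(output) goes from 0 to 1 as a scalar
# predictor (e.g., a weighted constraint penalty) varies. The weights
# and biases below are illustrative assumptions.
import numpy as np

def sigmoid(x, weight=1.0, bias=0.0):
    """P(output) = 1 / (1 + exp(-(weight * x + bias)))."""
    return 1.0 / (1.0 + np.exp(-(weight * x + bias)))

x = np.linspace(-6, 6, 13)

# Two conditions sharing a slope but differing in baseline preference,
# e.g., two lexical classes subject to the same variable process.
p_class_a = sigmoid(x, weight=1.0, bias=-2.0)
p_class_b = sigmoid(x, weight=1.0, bias=+2.0)

for xi, pa, pb in zip(x, p_class_a, p_class_b):
    print(f"x={xi:+.1f}  P(A)={pa:.3f}  P(B)={pb:.3f}")
```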
Structural, Functional, and Processing Perspectives on Linguistic Island Effects
Vol. 8 (2022), pp. 495–525

Ross (1967) observed that "island" structures like "Who do you think [NP the gift from__] prompted the rumor?" or "Who did you hear [NP the statement [S that the CEO promoted__]]?" are not acceptable, despite having what seem to be plausible meanings in some contexts. Ross (1967) and Chomsky (1973) hypothesized that the source of the unacceptability is in the syntax. Here, we summarize how theories of discourse, frequency, and memory from the literature might account for such effects. We suggest that there is only one island structure—a class of coordination islands—that is best explained by a syntactic/semantic constraint. We speculate that all other island structures are likely to be explained in terms of discourse, frequency, and memory.