
Abstract

Neural models greatly outperform grammar-based models across many tasks in modern computational linguistics. This raises the question of whether linguistic principles, such as the Principle of Compositionality, still have value as modeling tools. We review the recent literature and find that while an overly strict interpretation of compositionality makes it hard to achieve broad coverage in semantic parsing tasks, compositionality is still necessary for a model to learn the correct linguistic generalizations from limited data. Reconciling both of these qualities requires the careful exploration of a novel design space; we also review some recent results that may help in this exploration.
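The Principle of Compositionality discussed above says that the meaning of a complex expression is determined by the meanings of its parts and the rules used to combine them. As a purely illustrative sketch (not from this article), a minimal compositional interpreter can be written in a few lines: a lexicon assigns each word a meaning, and a single combination rule (function application) computes the meaning of each phrase bottom-up. All names and the string-based "logical forms" here are hypothetical.

```python
# Toy illustration of the Principle of Compositionality:
# the meaning of a phrase is computed from the meanings of its
# immediate parts plus the rule (function application) combining them.

# Lexicon: each word maps to a meaning. Entities are constants;
# verbs are (curried) functions building a logical-form string.
LEXICON = {
    "Mary": "mary",
    "John": "john",
    "sleeps": lambda subj: f"sleep({subj})",
    "sees": lambda obj: (lambda subj: f"see({subj}, {obj})"),
}

def interpret(tree):
    """Compute the meaning of a binary-branching tree bottom-up.

    A tree is either a word (str) or a pair (function_child, argument_child).
    The only combination rule is function application.
    """
    if isinstance(tree, str):
        return LEXICON[tree]
    func_child, arg_child = tree
    return interpret(func_child)(interpret(arg_child))

print(interpret(("sleeps", "Mary")))            # sleep(mary)
print(interpret((("sees", "John"), "Mary")))    # see(mary, john)
```

Because every node's meaning depends only on its children, such a system generalizes by construction to novel combinations of known words (e.g., "John sleeps") without ever having seen them; the tension the article describes is that real language data rarely fits so strict a scheme.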


  • Article Type: Review Article