New preprint by Daoxin Li and Kathryn Schuler: Acquiring recursive structures through distributional learning.

Preprint citation: Li, D., & Schuler, K. (2023, February 20). Acquiring recursive structures through distributional learning.

Abstract: Languages differ in the depth, structure, and syntactic domains of their recursive structures. Even within a single language, some structures allow infinite self-embedding while others are more restricted. For example, when expressing an ownership relation, English allows infinite embedding of the prenominal genitive -s, whereas the postnominal genitive of is much more restricted. How do speakers learn which specific structures allow infinite embedding and which do not? The distributional learning proposal suggests that the recursion of a structure (e.g., X1’s-X2) is licensed if the X1 position and the X2 position are productively substitutable in non-recursive input. The present study tests this proposal with an artificial language learning experiment. We exposed adult participants to X1-ka-X2 strings. In the productive condition, almost all words attested in the X1 position were also attested in the X2 position; in the unproductive condition, only some were. We found that, as predicted, participants in the productive condition were more likely than participants in the unproductive condition to accept unattested strings at both one- and two-embedding levels. Our results suggest that speakers can use distributional information at the one-embedding level to learn whether or not a structure is recursive.
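
To make the substitutability idea concrete, here is a minimal Python sketch of the distributional criterion described in the abstract: measure how many words attested in the X1 slot of an X1-ka-X2 frame also appear in the X2 slot, and treat the frame as recursive when that overlap is (nearly) complete. The overlap measure, the 0.9 threshold, the function names, and the nonce words are all illustrative assumptions, not the authors' actual stimuli or analysis.

```python
def slot_overlap(strings):
    """Given (x1, x2) word pairs from non-recursive input, return the
    fraction of X1-position words that are also attested in X2 position."""
    x1_words = {x1 for x1, _ in strings}
    x2_words = {x2 for _, x2 in strings}
    return len(x1_words & x2_words) / len(x1_words)

def licenses_recursion(strings, threshold=0.9):
    """Hypothetical decision rule: license self-embedding of the frame
    if the two slots are (almost) fully substitutable."""
    return slot_overlap(strings) >= threshold

# Toy input loosely mimicking the two experimental conditions
# (nonce words are placeholders, not the study's materials):
productive = [("blick", "dax"), ("dax", "blick"), ("wug", "dax"), ("dax", "wug")]
unproductive = [("blick", "dax"), ("wug", "dax"), ("fep", "dax"), ("tiv", "dax")]

print(licenses_recursion(productive))    # True: every X1 word also occurs in X2
print(licenses_recursion(unproductive))  # False: X1 words never occur in X2
```

On this toy input, the productive condition yields full overlap and so licenses unattested embeddings, while the unproductive condition yields zero overlap and does not; the actual paper's productivity criterion may of course differ from this simple threshold rule.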