Event

Christine Soh Yue will be defending their dissertation proposal, titled "Learning Variation and Systematicity in Language," on Thursday, September 12, at 9:30am. The defense will take place in person in the Linguistics Department Library and over Zoom. All are welcome to attend!

The abstract is included below.

------

Title: Learning Variation and Systematicity in Language

Supervisor: Charles Yang

Proposal committee: Anna Papafragou (chair), Marlyse Baptista, Kathryn Schuler

Time: Thursday, September 12, 9:30am - 11:00am

Place: Linguistics Department Library, 3401C Walnut Street, Suite 300, and on Zoom

Abstract:

The successful acquisition of a language is a highly complex process in which learners must extract systematic information from a sample of input utterances. The input is filled with variability: some of it is noise that must be filtered out, and some of it reflects systematic features of the language. Learners must acquire early words across contexts in which the abstracted meaning of each word is ambiguous, and they must determine the existence and scope of linguistic rules, as well as whether those rules are deterministic or probabilistically variable. This dissertation explores the mechanisms and representations of language acquisition by proposing a two-part cognitive model of learning: the first part learns by rote association, and the second part learns by generalization.
   
Language has often been described as "making infinite use of finite means" (Chomsky, 1965, quoting von Humboldt, 1836). Successful users of a language must therefore master both the finite and the infinite, learning both the rote associations and the grammars that generate the possible utterances of the language (Chomsky, 1958). Beginning with the case study of early word learning (Chapter 3), the mechanism for learning the "finite" is validated through simulations of existing experimental work as well as two novel cross-situational word learning experiments targeting memory. Next, the dissertation explores two processes of learning the "infinite": regularization and the acquisition of variable rules. Both processes are key to developing native knowledge of a language, but the relationship between them remains unclear. The model proposes that regularization occurs when a single form is generalized, and that variable rules are acquired when multiple distinct forms are generalized. Chapter 4 proposes a pair of artificial language learning experiments to test this model of learning via generalization. Chapter 5 examines a variable rule in natural language through a corpus study of differential object marking (DOM) across dialects of Spanish, where DOM usage is variable.