Event

Mingyang, Muhammed, and Enrico will be presenting this Friday, October 20, at 1:30pm in 401B, 3401 Walnut St. Please find the titles and abstracts of their mini-talks below.


Mingyang Bian 

Title: A Commitment-Based Analysis of 'Believe' as an Implicature Trigger

Abstract: The distribution of believe as a propositional attitude verb presents a complicated picture, which calls for an adequate characterization of the verb's strength. The observation that believe is typically used as a hedging device but on other occasions does not tolerate unsureness motivates my proposal that the hedging sense of believe is realized by a generalized conversational implicature. Upon further examination, I argue that the implicature in question originates from the scalar pair <believe, know>, which is brought about by a scale of levels of commitment to the truth of the complement. Using Lauer’s (2017) propositional language, I compositionally derive that know induces full assertoric commitment to the truth of its complement, while believe induces a level of commitment lower than full assertoric commitment. This commitment-based analysis provides a plausible characterization of the strength of believe, as it delivers the desired predictions about the distribution of the predicate.


Muhammed Ileri

Title: A Paradigm Gap in Turkish

Abstract: Based on the results of an acceptability judgment experiment and a corpus search, I report that desiderative constructions formed with the -AsI suffix in Turkish lack a grammatical form when inflected for third person plural (3PL) agreement. To explain why Turkish speakers cannot reliably produce a form for 3PL desideratives, even though they can produce other inflected forms they have never seen or heard before, I argue that, in the absence of a 3PL form in the input, speakers cannot reliably select a base in the desiderative paradigm; the resulting uncertainty eventually induces a gap in the paradigm of -AsI desideratives in Turkish.


Enrico Micali

Title: Black Hole Reinforcement Learning

Abstract: We introduce the Black Hole Reinforcement Learning problem, a previously unexplored variant of reinforcement learning in which we lose all turn information and all reward from trajectories that visit a particular subset of states. We assume awareness of the trajectory-loss events, making this a censored-data problem but not a truncated-data problem. We describe a memory-efficient stochastic policy gradient algorithm that guarantees non-asymptotic convergence to a policy with finitely suboptimal expected trajectory reward. We hope this work will inspire further development of algorithms for “bias-aware” data science.
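
For readers who want a concrete picture of the data regime, below is a minimal sketch, not the speaker’s algorithm: a toy tabular MDP in which trajectories that enter a hypothetical “black hole” state set are censored (we learn only that the loss occurred), followed by a naive REINFORCE update on the surviving trajectories, which the abstract’s framing suggests is biased. The dynamics, the censoring set, and all hyperparameters here are invented for illustration.

# Toy illustration of the censored-data regime described in the abstract.
# NOT the speaker's algorithm: trajectories that enter a "black hole" state
# yield no per-step information or reward, only the knowledge that censoring
# occurred; a naive REINFORCE update then runs on the survivors.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS, HORIZON = 5, 2, 10
BLACK_HOLES = {4}  # hypothetical censoring states, chosen arbitrarily

# Random tabular MDP (hypothetical dynamics and rewards, for illustration).
P = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))
R = rng.uniform(0.0, 1.0, size=(N_STATES, N_ACTIONS))

theta = np.zeros((N_STATES, N_ACTIONS))  # softmax policy parameters

def policy(s):
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

def rollout():
    """Run one episode; return (log-prob gradient, return) or None if censored."""
    s, ret = 0, 0.0
    grad = np.zeros_like(theta)
    for _ in range(HORIZON):
        probs = policy(s)
        a = rng.choice(N_ACTIONS, p=probs)
        # Score-function gradient of log pi(a|s) for a softmax policy.
        grad[s] -= probs
        grad[s, a] += 1.0
        ret += R[s, a]
        s = rng.choice(N_STATES, p=P[s, a])
        if s in BLACK_HOLES:
            return None  # censored: all turn information and reward are lost
    return grad, ret

censored = 0
for _ in range(2000):
    out = rollout()
    if out is None:
        censored += 1  # we only observe THAT the loss event happened
        continue
    grad, ret = out
    theta += 0.01 * ret * grad  # naive (biased) REINFORCE update on survivors

print(f"censored {censored} of 2000 episodes")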