Speech Perception

Central to language comprehension is how long-term memory representations of linguistic information guide, or otherwise impinge on, perception. I have previously approached this question in the domain of vision and sign language (Almeida, Poeppel & Corina, 2016), but my current focus is on speech perception (Schluter, Politzer-Ahles & Almeida, 2016; Schluter, Politzer-Ahles, Al Kaabi & Almeida, 2017; Politzer-Ahles, Schluter, Wu & Almeida, 2016). In particular, I am interested in the extent to which the representations of speech sounds in long-term memory are tied to sensory processing: do speech sound categories in long-term memory retain fine-grained acoustic/phonetic information, or are they fairly abstract and/or optimally sparse?

Research from my lab on this topic has exploited an automatic difference-detection brain response in the auditory domain, the mismatch negativity (MMN). The results have consistently pointed to long-term memory representations for speech sounds that are fairly abstract with respect to sensory information, and potentially sparse in their featural content.
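For readers unfamiliar with the paradigm: the MMN is typically quantified as a difference wave, that is, the averaged response to a rare (deviant) sound minus the averaged response to a frequent (standard) sound, measured in a window roughly 100-250 ms after the point of deviance. The sketch below illustrates that computation on synthetic epoched data; the array shapes, sampling rate, and time window are illustrative assumptions, not parameters taken from the studies listed here.

```python
import numpy as np

# Illustrative epoched EEG data (single channel): trials x time samples,
# 1000 Hz sampling, epochs spanning -100 ms to 500 ms around stimulus onset.
rng = np.random.default_rng(0)
sfreq = 1000
times = np.arange(-0.1, 0.5, 1 / sfreq)             # seconds
standard = rng.normal(0, 1e-6, (300, times.size))   # frequent stimulus
deviant = rng.normal(0, 1e-6, (50, times.size))     # rare stimulus

# ERPs: average across trials within each condition.
erp_standard = standard.mean(axis=0)
erp_deviant = deviant.mean(axis=0)

# MMN difference wave: deviant minus standard.
mmn = erp_deviant - erp_standard

# Mean amplitude in a typical MMN window (roughly 100-250 ms post-onset).
window = (times >= 0.10) & (times <= 0.25)
mmn_amplitude = mmn[window].mean()
print(f"Mean MMN amplitude, 100-250 ms: {mmn_amplitude:.3e} V")
```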

Publications

(2016). Asymmetries in the perception of Mandarin tones: Evidence from mismatch negativity. Journal of Experimental Psychology: Human Perception and Performance. doi:10.1037/xhp0000242

(2016). No place for /h/: an ERP investigation of English fricative place features. Language, Cognition and Neuroscience. doi:10.1080/23273798.2016.1151058

(2015). Asymmetries in the perception of Mandarin tones: Evidence from mismatch negativity. Proceedings of the International Congress of Phonetic Sciences (ICPhS 2015).