
Time and Space in Speech and Signed Languages

Thursday, Feb. 19, 2026, 3:30-4:45 p.m.

SN-4073


William J. Idsardi, University of Maryland


One question that has been reinvigorated by the Substance Free Phonology program (SFP; Reiss 2017, Chabot 2024) is the extent to which phonological representations and operations are abstract and amodal (Berent 2013, Berent et al. 2021, Chabot 2026), with the most prevalent point of comparison being signed and spoken languages. In my view, signed and spoken languages share a common core of phonological mental representations (Idsardi 2022, 2025). These representations are a perspicuous way to formalize the principal insights of autosegmental phonology (Goldsmith 1976). The main idea is that phonological representations consist of EVENTS (points in time and/or space), FEATURES (monadic properties of events, mostly modality-specific), and PRECEDENCE (a dyadic relation of temporal order between events). In addition, signed languages include dyadic SPATIAL RELATIONS between events (i.e. points in space and time). Illustrations of the EFPS model for signed phonology are drawn from some simple ASL signs, and an ongoing diachronic change in ASL MOTORCYCLE is analyzed using parallel events, partial reduplication, and underspecification. I will also briefly compare the EFPS model with other phonological models for sign language (Brentari 2019).
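The abstract's description of the EFPS model — events as points, features as monadic properties of events, precedence as a dyadic temporal relation, and spatial relations as additional dyadic relations for signed languages — can be sketched as a simple data structure. The sketch below is illustrative only and is not drawn from Idsardi's papers; all class names, feature labels, and the example sign are hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative sketch of an EFPS-style representation (all names hypothetical).
# EVENTS: points in time and/or space.
# FEATURES: monadic (one-place) properties attached to events.
# PRECEDENCE: a dyadic relation of temporal order between events.
# SPATIAL: dyadic spatial relations between events (signed languages only).

@dataclass(frozen=True)
class Event:
    label: str  # identifier for a point in time/space

@dataclass
class Representation:
    events: set = field(default_factory=set)
    features: dict = field(default_factory=dict)  # Event -> set of feature names
    precedence: set = field(default_factory=set)  # (e1, e2): e1 temporally precedes e2
    spatial: set = field(default_factory=set)     # (relation, e1, e2) triples

    def add_event(self, e, *feats):
        """Register an event and attach zero or more monadic features to it."""
        self.events.add(e)
        self.features.setdefault(e, set()).update(feats)

    def precedes(self, e1, e2):
        """Record that e1 temporally precedes e2."""
        self.precedence.add((e1, e2))

# Hypothetical two-event sign: a handshape feature on e1, a movement
# feature on e2, with e1 preceding e2 and a spatial relation between them.
e1, e2 = Event("e1"), Event("e2")
rep = Representation()
rep.add_event(e1, "handshape:S")
rep.add_event(e2, "movement:twist")
rep.precedes(e1, e2)
rep.spatial.add(("contact", e1, e2))
```

Note that precedence and spatial relations are kept as separate relation sets over the same events, mirroring the abstract's point that the temporal core is shared across modalities while spatial relations are an addition specific to signed languages.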


Presented by Department of Linguistics
