LINGUIST List 31.800

Tue Feb 25 2020

Calls: Comp Ling, Phonetics, Phonology/Canada

Editor for this issue: Lauren Perkins <lauren@linguistlist.org>



Date: 25-Feb-2020
From: Tomas Lentz <lentz@uva.nl>
Subject: Neural network models for articulatory gestures

Full Title: Neural network models for articulatory gestures
Short Title: NNArt

Date: 09-Jul-2020 - 09-Jul-2020
Location: Vancouver, BC, Canada
Contact Person: Tomas Lentz
Web Site: https://staff.science.uva.nl/t.o.lentz/nnart/

Linguistic Field(s): Computational Linguistics; Phonetics; Phonology

Call Deadline: 15-Mar-2020

Meeting Description:

This workshop (a satellite to LabPhon 17, held the day after the main conference: 9 July 2020, 13:30-17:00) aims to bring together researchers interested in articulation and computational modelling, especially neural networks.

Articulation has been formalised in terms of dynamic articulatory gestures, i.e. target-driven patterns of articulator movements (e.g., Browman & Goldstein, 1986). Such a pattern unfolds in time and space and can therefore also be seen as a sequence of analytically relevant articulatory landmarks, such as the timepoints of peak velocity and target achievement. Treating such sequences as sequences of vectors (of spatial coordinates) makes them potentially learnable with algorithms for sequence modelling.
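As a concrete illustration of this vector-sequence view, the sketch below represents a single articulator trajectory as a sequence of (x, y) coordinate vectors and extracts one landmark, the timepoint of peak velocity. The trajectory, sampling rate, and sensor are illustrative stand-ins, not tied to any particular corpus or recording setup:

```python
import numpy as np

# Hypothetical EMA-like trajectory of one articulator sensor: a sequence of
# (x, y) positions sampled at 200 Hz (all names and values are illustrative).
fs = 200.0
t = np.arange(0, 0.5, 1 / fs)
trajectory = np.stack([np.sin(2 * np.pi * 2 * t),        # x coordinate
                       0.5 * np.cos(2 * np.pi * 2 * t)], # y coordinate
                      axis=1)                            # shape: (100, 2)

# Tangential velocity of the articulator at each sample.
velocity = np.linalg.norm(np.diff(trajectory, axis=0), axis=1) * fs

# One analytically relevant landmark: the timepoint of peak velocity.
peak_velocity_time = t[1:][np.argmax(velocity)]
```

The resulting (timesteps x coordinates) array is exactly the kind of vector sequence that sequence-modelling algorithms consume.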

Current developments in machine learning offer greatly improved power for sequence learning and prediction. Recurrent Neural Networks (RNNs) and their extension, Long Short-Term Memory networks (LSTMs; Hochreiter & Schmidhuber, 1997), allow efficient training over short and even long time intervals (Gers, Schraudolph & Schmidhuber, 2002). Such networks have been used for acoustic modelling, but their application in articulation research has mainly been limited to ultrasound data, and less often to the classification of two-dimensional articulator movement curves as obtained from EMA or from ROI analyses of MRI data.
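A minimal sketch of the LSTM machinery referred to above: a single from-scratch LSTM cell applied step by step to a two-dimensional movement sequence. The weights here are random stand-ins for trained parameters, and all dimensions are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step (Hochreiter & Schmidhuber, 1997): input, forget, and
    output gates, computed from input x and previous hidden state h,
    control how the cell state c is updated."""
    n = h.shape[0]
    z = W @ x + U @ h + b                 # stacked pre-activations, shape (4*n,)
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2 * n]))     # forget gate
    o = 1 / (1 + np.exp(-z[2 * n:3 * n])) # output gate
    g = np.tanh(z[3 * n:])                # candidate cell update
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Toy setup: 2-dimensional input (e.g. x/y coordinates of one EMA sensor),
# hidden size 8; weights are random stand-ins for trained parameters.
d, n = 2, 8
W = rng.normal(0, 0.1, (4 * n, d))
U = rng.normal(0, 0.1, (4 * n, n))
b = np.zeros(4 * n)

h, c = np.zeros(n), np.zeros(n)
sequence = rng.normal(size=(50, d))  # 50 timesteps of 2-D movement data
for x in sequence:
    h, c = lstm_step(x, h, c, W, U, b)
# h now summarises the whole movement curve and could feed a classifier.
```

In practice one would use a library implementation (e.g. an off-the-shelf LSTM layer) rather than this hand-rolled cell, but the gating structure shown here is what lets such networks retain information over long articulatory time spans.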

However, promising approaches to acoustics-to-EMA mapping tentatively suggest that articulatory movements allow meaningful modelling with deep neural networks (e.g., Liu et al., 2005; Chartier et al., 2018).

Call for Papers:

We call for abstracts that bring together articulation data and computational modelling, especially neural network modelling. We welcome any abstract, including tentative work, on the possibility of using neural and/or deep computational modelling for articulatory data. Suggested topics include:

- Whether it is possible to capture invariants, i.e. language-independent, predictable patterns that apply to all articulation
- Whether transfer learning is possible, i.e. whether a network trained on the articulatory features of one speaker (and, ultimately, one language) can be mapped onto the patterns of another speaker (or language)
- Whether the annotation of gestures can be aided by generating the most likely gesture structures, analogous to the derivation of articulation from acoustics (e.g., Mitra et al., 2010)
- Whether diagnostic classification is possible on networks that model articulation, analogous to, e.g., the detection of counterparts to compositionality in a model of arithmetic grammar by Hupkes & Zuidema (2017)

Please submit abstracts via EasyChair (https://easychair.org/my/conference?conf=nnart2020). Tentative work is more than welcome! As for the main conference, abstracts should be written in English and not exceed one page of text; references, examples and/or figures can optionally be included on a second page. Submitted abstracts must be in PDF format, in 12-point Times New Roman, with 1-inch margins and single spacing. We do not require anonymous abstracts.

Website for more details: https://staff.science.uva.nl/t.o.lentz/nnart/




Page Updated: 25-Feb-2020