The Phonology Lab is equipped to record and analyze acoustic and physiological
speech data. In addition, we collect behavioral speech perception
data as people listen to speech that has been edited or synthesized.
Dwinelle 52 Articulatory Phonetics.
- A double-walled sound booth
- Ultrasound imaging of the tongue during speech
- Multichannel digital recording
- Oral air pressure and nasal airflow
- Audio and video recording
- Dental cameras and dental impression materials for static palatography
Dwinelle 51/53 Acoustic and Auditory Phonetics.
- Three small sound booths/listening stations
- Linux, PC, and Mac workstations
Dwinelle 49 IT Architect (Ronald Sprouse).
Dwinelle 50 Graduate Student Researchers.
Dwinelle 55 Lab Library - visiting scholars.
Dwinelle 57 Research space - Language Documentation research office.
We have been developing a Phonetics Machine as a way to distribute our software: a Linux virtual machine that runs under Oracle VirtualBox. It is still under development, but we would be happy to share the current version.
XWaves. We continue to use some of the speech signal processing routines that were part of the ESPS/XWaves package.
wxKLSYN is a Python-enabled version of the KLSYN-88 speech synthesizer (see Klatt and Klatt, 1990).
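This is not the wxKLSYN code itself, but the basic building block of Klatt-style synthesis is well documented: a second-order digital resonator whose coefficients are set from a formant's center frequency and bandwidth (Klatt, 1980). A minimal sketch in Python, assuming NumPy:

```python
import numpy as np

def klatt_resonator(x, sr, freq, bw):
    """Second-order digital resonator of the kind used in Klatt synthesizers:
    y[n] = A*x[n] + B*y[n-1] + C*y[n-2], with coefficients derived from the
    formant center frequency `freq` and bandwidth `bw` (both in Hz)."""
    T = 1.0 / sr
    C = -np.exp(-2.0 * np.pi * bw * T)
    B = 2.0 * np.exp(-np.pi * bw * T) * np.cos(2.0 * np.pi * freq * T)
    A = 1.0 - B - C                      # scales the gain at 0 Hz to 1
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = A * x[n]
        if n >= 1:
            y[n] += B * y[n - 1]
        if n >= 2:
            y[n] += C * y[n - 2]
    return y

# Exciting the resonator with an impulse yields a damped sinusoid whose
# spectrum peaks near the formant frequency.
x = np.zeros(2048)
x[0] = 1.0
y = klatt_resonator(x, sr=10000, freq=500.0, bw=60.0)
```

Cascading several such resonators (one per formant) over a glottal source is the core of the KLSYN architecture; the function name and interface above are illustrative, not the wxKLSYN API.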
EAR is a slight modification of
Malcolm Slaney's (1988) implementation of Lyon's cochlear model. This
is the model mentioned in Johnson's Acoustic and Auditory
Phonetics, and we use it mainly for the cochleagram, a display
of speech that combines features of auditory spectra and spectrograms.
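The following is not Lyon's model or the EAR code, just a sketch of the general idea behind an auditory-frequency display: pool short-time spectra with filters spaced on an auditory (ERB-rate) scale rather than a linear one. The function name, parameters, and the use of simple triangular filters are illustrative assumptions; NumPy is assumed.

```python
import numpy as np

def cochleagram(signal, sr, n_channels=32, frame=512, hop=128):
    """Crude cochleagram-style display (a sketch, not Lyon's cochlear
    model): STFT magnitudes pooled by triangular filters whose centers
    are spaced on the ERB-rate scale (Glasberg & Moore constants)."""
    ear_q, min_bw = 9.26449, 24.7
    lo, hi = 50.0, sr / 2.0
    qb = ear_q * min_bw
    # n_channels + 2 ERB-spaced frequencies; the outer ones act as band edges
    i = np.arange(1, n_channels + 3)
    edges = np.sort(-qb + np.exp(i * np.log((lo + qb) / (hi + qb))
                                 / (n_channels + 2)) * (hi + qb))
    # magnitude spectrogram
    win = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    spec = np.abs(np.stack([np.fft.rfft(win * signal[t * hop:t * hop + frame])
                            for t in range(n_frames)]))
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    # triangular weighting of FFT bins around each center frequency
    fb = np.zeros((n_channels, len(freqs)))
    for c in range(n_channels):
        l, m, r = edges[c], edges[c + 1], edges[c + 2]
        fb[c] = np.clip(np.minimum((freqs - l) / (m - l),
                                   (r - freqs) / (r - m)), 0.0, None)
    # rows are time frames, columns are auditory channels
    return spec @ fb.T, edges[1:-1]
```

A real cochlear model adds a cascade of filters, half-wave rectification, and adaptation stages; this sketch only captures the auditory frequency axis that makes a cochleagram differ from an ordinary spectrogram.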
We use experiment scheduling software, originally developed at Ohio State, to keep track of experiments and subjects. (Requires Apache + MySQL + PHP 5.)