Our research spans speech production disorders and deficits, dysphagia in patients with impaired facial movement, oromotor skill development for early speech and feeding, and the quantification of speech motor performance.

Current and Past Research Projects

To date, over 30 facial transplantation surgeries have been performed worldwide. While facial mobility appears to be slowly improving in these patients, mild to moderate communication and swallowing deficits persist. Because the procedure is new and has been performed on only a small number of patients, there is much to learn about the course of facial motor recovery following transplantation and about the best surgical and post-surgical interventions to maximize recovery.

Now that the feasibility of this surgery has been established, improving long-term outcomes requires developing better assessment tools and testing promising new therapies.

The SFDL is working with the facial transplantation team directed by Dr. Bohdan Pomahac at Brigham and Women’s Hospital to better understand the course of facial motor recovery following surgery using 3D facial motion analysis and to test the benefits of lip strength exercises on the recovery of speech, facial expression, swallowing, and quality of life.

Approximately 30,000 people in the US are living with Amyotrophic Lateral Sclerosis (ALS), also known as Lou Gehrig's Disease, with about 5,600 new diagnoses each year. The disease results in the selective deterioration of both upper and lower motor neurons. A sudden impairment of speech and swallowing is frequently the first indicator of the disease; this bulbar (head and neck) onset occurs in approximately 25% of all ALS cases.

Bulbar symptoms associated with ALS have a devastating effect on quality of life and significantly shorten survival. Despite advances in research and supportive therapy, the diagnosis of ALS remains elusive and options for treatment are limited. The goal of this research is to identify sensitive, quantitative indicators of disease progression that can improve the detection of bulbar onset, the accuracy of predictions of disease progression in bulbar ALS, and the identification of bulbar subtypes of ALS.

In this project, Dr. Green and his collaborators, Dr. Jun Wang (University of Texas at Dallas) and Dr. Ashok Samal (University of Nebraska-Lincoln), are developing a speech device that is controlled in real time by the movements of the lips and tongue. The device generates the words or phrases that the user articulates silently. This technology is most frequently called a “silent speech interface.”

The purpose of the technology is to provide an alternative mode of oral communication for persons with speech impairments or for persons who cannot use their voice due to, for example, throat cancer. Because the device can be tuned to the movements of an individual speaker, it can generate clear speech even when the user’s articulation is imprecise. Although the device is still in the experimental phase, the research team has published numerous articles demonstrating the feasibility of the approach.
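To make the idea concrete, here is a minimal Matlab sketch of one way such a recognizer could work: compare an unknown lip/tongue movement trajectory against stored per-word templates and pick the closest match. This is purely illustrative; the function name, data layout, and use of dynamic time warping are assumptions, not the team’s published method.

```matlab
% Toy nearest-template word recognizer for articulatory trajectories.
% trajectory : channels-by-time matrix of lip/tongue sensor positions
% templates  : cell array of channels-by-time matrices, one per known word
% labels     : cell array of word strings, one per template
% Requires the Signal Processing Toolbox for dtw().
function word = classify_silent_word(trajectory, templates, labels)
    dists = zeros(numel(templates), 1);
    for k = 1:numel(templates)
        % dtw() aligns the two trajectories in time before measuring
        % distance, so productions at different speaking rates compare fairly
        dists(k) = dtw(trajectory, templates{k});
    end
    [~, best] = min(dists);   % smallest distance wins
    word = labels{best};
end
```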

The SFDL also advises and consults with pharmaceutical companies developing new therapeutic drugs for diseases impacting speech and swallowing. Objective clinical endpoints of speech and swallowing are needed for use in clinical trials, but existing measurements are often subjective and too coarse to detect intervention effects. Using the techniques and tools developed by our lab, we are able to provide scientifically validated speech and swallowing outcome measures for behavioral, surgical, and pharmacological interventions.

Tools for Researchers

Advances in 3D motion capture technologies, including optical motion capture and electromagnetic articulography, have provided speech researchers with a wealth of new data on speech movements, but have presented unique challenges in data reduction.

To help overcome these challenges, we’ve developed SMASH – Speech Movement Analysis for Speech and Hearing Research – a Matlab-based software tool that can automate many data analysis tasks on speech movement data. The goal of SMASH is to advance research on speech production by improving the efficiency and reliability of speech movement analyses.

Green, J. R., Wang, J., & Wilson, D. L. (2013). SMASH: A tool for articulatory data processing and analysis. Interspeech 2013, 1331-1335.
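As an example of the kind of data-reduction step such a tool automates, the Matlab sketch below derives 3D movement speed from a marker’s position trace and extracts the peak speed. The file name, variable names, and sampling rate are placeholders, not SMASH’s actual interface.

```matlab
% Derive 3D speed from a marker position trace and report its peak.
fs  = 250;                            % sampling rate in Hz (assumed)
S   = load('marker_xyz.mat');         % hypothetical file with N-by-3 field 'xyz'
pos = S.xyz;                          % marker positions in mm, one row per sample
vel = [diff(pos); zeros(1, 3)] * fs;  % first difference -> velocity (mm/s)
speed = sqrt(sum(vel.^2, 2));         % Euclidean 3D speed at each sample
speed = movmean(speed, 11);           % light smoothing before peak picking
fprintf('Peak movement speed: %.1f mm/s\n', max(speed));
```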

Speech Pause Analysis (SPA) is a Matlab-based software tool for automatically segmenting previously recorded acoustic files into “speech events” and “pause events” based on an analysis of the waveform’s amplitude.

The program automatically calculates statistics on the pausing patterns in a given speech audio file, and the pause threshold can be adjusted to detect pauses between words within a phrase or only between phrases.

Green, J.R., Beukelman, D.R., & Ball, L. J. (2004). Algorithmic estimation of pauses in extended speech samples of dysarthric and typical speech. Journal of Medical Speech-Language Pathology, 12, 149-154. PMID: 20628555.
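For readers curious how amplitude-based segmentation works in principle, the following Matlab sketch labels samples as speech or pause by thresholding a smoothed amplitude envelope and then summarizes pause durations. It is a simplified illustration, not SPA’s published algorithm; the file name and threshold value are placeholders.

```matlab
% Label samples as speech/pause by thresholding the amplitude envelope.
[x, fs]  = audioread('sample.wav');                 % hypothetical input file
env      = movmean(abs(x(:, 1)), round(0.02 * fs)); % ~20-ms amplitude envelope
thresh   = 0.05 * max(env);                         % adjustable pause threshold
isSpeech = env > thresh;
edges    = diff([0; ~isSpeech; 0]);                 % pause onsets/offsets
onsets   = find(edges == 1);
offsets  = find(edges == -1);
pauseDur = (offsets - onsets) / fs;                 % pause durations in seconds
fprintf('%d pauses, mean duration %.2f s\n', numel(pauseDur), mean(pauseDur));
```

In this simplified form, raising the threshold catches the brief amplitude dips between words, while lowering it leaves only the longer silences between phrases, mirroring the adjustable threshold described above.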

The Bamboo Passage is a 60-word paragraph that was developed to improve the precision of automatic pause boundary detection (such as with SPA) during a read-speech task. Specifically, voiced consonants were strategically positioned at word and phrase boundaries to minimize the possibility of voiceless consonants being misidentified as parts of pause events.

Green, J.R., Beukelman, D.R., & Ball, L. J. (2004). Algorithmic estimation of pauses in extended speech samples of dysarthric and typical speech. Journal of Medical Speech-Language Pathology, 12, 149-154. PMID: 20628555.

Beets, Bats, and Boots is a story and associated picture book designed for use in a story-retell task. It elicits four target words: beet, bat, boot, and Bobby. These words were chosen because their medial vowels have well-defined acoustic and visual targets that circumscribe the boundaries of the vowel space.

Green, J.R., Nip, I.S.B, Mefferd, A.S., Wilson, E.M., & Yunusova, Y. (2010). Lip movement exaggerations during infant-directed speech. Journal of Speech, Language, and Hearing Research, 53, 1529-1542. PMID: 20699342.
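As a sketch of how those vowel targets translate into a quantitative measure, the Matlab snippet below treats each target word’s medial-vowel formants as a corner of a quadrilateral in F2-F1 space and computes its area. The formant values are illustrative placeholders, not measured data.

```matlab
% Vowel space area from the four corner vowels, ordered around the polygon:
%        /i/ beet   /ae/ bat   /a/ Bobby   /u/ boot
F1 = [   300,       800,       750,        350 ];   % first formant (Hz)
F2 = [  2300,      1900,      1100,        900 ];   % second formant (Hz)
vsa = polyarea(F2, F1);                             % quadrilateral area (Hz^2)
fprintf('Vowel space area: %.0f Hz^2\n', vsa);
```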