The main interests of my research group lie in the area of auditory perception: discovering how the ear and brain interact to allow us to make sense of the acoustic environment. Applications of this research include auditory prostheses, such as hearing aids and cochlear implants, and robust automatic speech recognition.
One major topic of research in the group is pitch perception. Pitch is important for understanding speech, for listening to music, and for hearing out one sound in the presence of other competing sounds. How pitch is coded in the auditory system is still not fully understood, and we believe that a better understanding could lead to substantial improvements in cochlear-implant systems, most of which currently provide very limited pitch information to implant users.
Another ongoing topic involves developing behavioral methods that allow us to investigate different aspects of inner-ear (or cochlear) function, such as the non-linear amplification that occurs in a healthy ear. Hearing impairment can result from many different types of cochlear damage. Understanding more about the perceptual consequences of different types of hearing loss should lead to improvements in our ability to treat hearing loss on an individual basis.
We also study how different auditory cues, such as pitch and spatial location, are combined within the auditory system to enable us to attend to some sounds while ignoring others. This classic "cocktail party problem" has still not been solved, and it remains a major limiting factor in the ability of automatic speech recognition systems to function in everyday situations, where interfering sounds must be filtered out. Our work combines behavioral tests in humans with, in collaboration with others, functional imaging studies (fMRI and MEG) to elucidate the underlying principles of auditory scene analysis.
Our work is funded primarily by the National Institute on Deafness and Other Communication Disorders.