The research in our lab focuses on the ability of humans to function in complex auditory environments. Humans are routinely faced with the challenge of interpreting sounds as they reach the ears, learning to ignore echoes and other irrelevant, distracting signals. Common examples are classrooms, restaurants, playgrounds and “cocktail parties”. To understand how the brain determines the location and the content of important sounds, we study hearing in adults and children with normal hearing, as well as in individuals with impaired hearing. In particular, we focus on a unique population of people who are deaf and use cochlear implants. Our interest is in the potential benefits that arise when bilateral cochlear implants are provided.

How can humans understand speech in noisy environments?

In the laboratory we simulate aspects of “real world” listening situations, in which target stimuli with speech content are presented along with competing/masking sounds that listeners are instructed to ignore. Our studies identify conditions under which listeners are able to ignore the maskers and attend to the target speech. For example, we vary the spatial locations of the maskers and measure the extent to which spatial separation between target and maskers leads to improved performance. Through headphones we can simulate locations in space using virtual space technology, in which tight control can be exerted over which ears are activated and monaural deafness can be simulated. This approach enables us to study and understand the role that binaural hearing plays in these listening situations.
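As a simple illustration of the kind of binaural cue control that headphone presentation affords, the sketch below imposes an interaural time difference (ITD) on a sound by delaying one ear's copy; over headphones, the percept shifts toward the leading ear. This is a minimal, hypothetical example (the function name and parameters are ours, not the lab's actual stimulus code):

```python
import numpy as np

def apply_itd(signal, itd_s, fs):
    """Delay the right-ear copy by itd_s seconds (positive ITD -> left ear leads)."""
    delay = int(round(itd_s * fs))
    left = np.concatenate([signal, np.zeros(delay)])   # pad end to keep lengths equal
    right = np.concatenate([np.zeros(delay), signal])  # delayed copy
    return np.stack([left, right], axis=0)             # shape: (2 ears, n samples)

fs = 44100
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 500 * t)      # 500 Hz tone, 0.5 s
stereo = apply_itd(tone, 500e-6, fs)    # 500 microsecond ITD, left leading
```

Because each ear's signal is generated independently, the same approach can silence one channel entirely to simulate monaural deafness.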

Development of binaural hearing in typically-developing children

Children spend many hours a day in enclosed spaces, where they attempt to understand their teachers and peers, and to do so they must be able to ignore distracting sounds around them. Our studies are aimed at understanding what scenarios allow children to perform best, and what challenges they face in simulated complex environments. The tests in the lab utilize a “listening game” platform, whereby a child interacts with a computer to identify the content and locations of words presented from loudspeakers, either in quiet or in the presence of masking sounds.

One of our goals is to benchmark the development of these abilities in typically-developing children in order to compare their abilities with those of cochlear implant users. In addition to studying auditory abilities, we are now also examining non-auditory measures, such as speech production, phonological awareness, vocabulary acquisition, non-verbal IQ, and word learning.

Adults who are deaf and use bilateral cochlear implants

The impetus to provide cochlear implant (CI) users with bilateral devices arose in adults in the 1990s, in an effort to improve functional abilities in realistic complex environments. Studies in adults, conducted in controlled laboratory environments using loudspeaker arrays, have demonstrated significant performance benefits from the use of two implants compared with one. Studies with adults have also shown that bilateral CIs lead to improved speech understanding in the presence of competing speech. However, the magnitude and type of advantage varies greatly across patients.

Reduced benefits may be due to hardware limitations. Bilateral CI users are essentially fitted with two separate monaural systems that are not synchronized. Furthermore, the incoming signal is filtered into numerous frequency bands; the envelope of the signal is extracted from the output of each band and is used to set stimulation levels. However, the low-frequency information used by the binaural system, known as fine structure, is discarded by the processors. Binaural benefits may also be reduced because many CI users undergo a period of auditory deprivation prior to activation, which likely results in degeneration of auditory neurons.
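The band-filtering and envelope-extraction steps described above can be sketched for a single frequency band. This is an illustrative toy example only, not any manufacturer's actual processing strategy; the filter method, band edges, and smoothing window are arbitrary assumptions:

```python
import numpy as np

def band_envelope(signal, fs, lo, hi, smooth_ms=10.0):
    """Isolate one frequency band, then extract its slow amplitude envelope.
    The temporal fine structure within the band is discarded in the process."""
    # Bandpass by zeroing FFT bins outside [lo, hi] Hz
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0
    band = np.fft.irfft(spec, n=len(signal))
    # Envelope: half-wave rectify, then moving-average smoothing
    rectified = np.maximum(band, 0.0)
    win = int(fs * smooth_ms / 1e3)
    return np.convolve(rectified, np.ones(win) / win, mode="same")

fs = 16000
t = np.arange(0, 0.2, 1 / fs)
# 1 kHz carrier, amplitude-modulated at 8 Hz
sig = (1 + np.sin(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 1000 * t)
env = band_envelope(sig, fs, 500, 2000)
```

The returned envelope tracks the slow 8 Hz modulation while the 1 kHz fine structure is lost, which is the information gap the text refers to.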

We study the importance of binaural synchrony by using customized research processors that bypass the clinical processors. These devices are engineered to deliver binaural cues directly to pairs of electrodes in the right and left ears, with precise synchronization. This experimental approach enables us to measure the extent to which bilateral CI users are sensitive to auditory cues known to be important for, and relied on by, normal-hearing listeners. This work has also enabled us to examine auditory plasticity and the age at which adults become deaf as a predictor of performance. With further research we hope to better understand the conditions under which binaural hearing can be restored to bilateral CI users.

Emergence of spatial hearing skills and non-auditory skills in children who use cochlear implants

Despite hardware limitations, a growing number of children worldwide have been implanted bilaterally. These children represent a different population of deaf individuals than their adult counterparts because, unlike the majority of adults, who were exposed to acoustic hearing early in life, most of the children are deaf from birth or from a very young age.

Auditory measures: Our studies implement “listening game” approaches whereby we simulate aspects of real-world environments in a quiet, sound-proof booth by positioning arrays of loudspeakers at various locations in the horizontal plane and engaging the children in “looking” and “pointing” games or verbal-response computer-interactive games. We measure how well the children can discriminate sounds from the right vs. left, and also how well they achieve more complex sound localization. We have found that factors such as age at implantation, amount of bilateral experience and chronological age at the time of testing may be significant. With changing clinical criteria, children in many clinics are implanted bilaterally at younger ages. Thus, the notion that “earlier is best,” which took root in pediatric implantation over a decade ago, has now also been embraced by many clinicians as a reason to activate both ears early. Our research on this population implements different methods that are appropriate for younger listeners.

Non-Auditory measures: In collaboration with colleagues at the Waisman Center, including Jenny Saffran, Jan Edwards and Susan Ellis-Weismer, we are investigating the relationships between auditory measures described above and language comprehension, speech perception and production, non-verbal IQ, and other measures.