Research

Oral presentation at IERASG Symposium, Sydney, Australia

The International Evoked Response Audiometry Study Group (IERASG) symposium is held every two years and was held this year in Sydney, Australia, from June 30th to July 4th.  The conference provides a forum for discussion of the physiological signals generated within the auditory system, including EEG, cortical auditory evoked potentials (CAEPs), otoacoustic emissions (as used in newborn screening), and the auditory brainstem response (ABR).

Dr Lendra Friesen, Director of the Cochlear Implant Brain and Behavior Lab, presented the latest results from a CAEP study aimed at better understanding the neural mechanisms underlying low-frequency hearing preservation in cochlear implant users who have received new electrode technology and “soft” surgery.

For more details see our conference abstract.

Poster presentation at CIAP, Lake Tahoe, CA

The Conference on Implantable Auditory Prostheses (CIAP) is held every two years at the Granlibakken Conference Center in Lake Tahoe, California.  The beautiful location hosts scientific researchers from around the world and is one of the leading forums worldwide for the presentation and discussion of cochlear implant research.  This year’s conference, held from July 14th to 19th, was the 19th in a series that originated in 1983, in the very early days of cochlear implant research.  The conference is almost unique in bringing together the wide range of disciplines that underlie cochlear implant research, uniting surgeons, audiologists and engineers.

We were delighted to be able to present results from an ongoing study on “Rate discrimination, sentence and prosody recognition in young and elderly normal hearing adults using vocoders”.  The work was presented in collaboration with Dr Monita Chatterjee from Boys Town National Research Hospital and Dr Robert Morse from the UK, who wrote the Matlab programs for controlling the experiments.  In the study we are investigating the ability of younger and older listeners to use prosodic speech cues, which are essential for determining whether a speaker is female or male, whether an utterance is a statement or a question, and what the speaker’s tone of voice conveys, for example whether they are happy or sad.  We used a noise vocoder to simulate the way that a cochlear implant codes speech.  We related the ability to use prosodic information to performance on real-world listening tasks, such as speech understanding (using IEEE sentences) and the ability to identify the emotion expressed by a speaker (using the House Ear Institute emotional speech database).
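The noise-vocoding technique mentioned above can be sketched in a few lines of signal processing: the signal is split into frequency bands, each band’s temporal envelope is extracted, and the envelopes are used to modulate band-limited noise carriers, which are then summed.  The Python sketch below is only an illustration of the general method; it is not the Matlab code used in the study, and the channel count and band edges are assumptions, not the study’s parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=5000.0):
    """Noise-vocode a signal: split it into log-spaced frequency bands,
    extract each band's envelope, and use the envelope to modulate
    band-limited noise (illustrative parameters, not the study's)."""
    rng = np.random.default_rng(0)
    # Logarithmically spaced band edges across the analysis range
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, signal)        # analysis band of the input
        env = np.abs(hilbert(band))          # temporal envelope of the band
        noise = rng.standard_normal(len(signal))
        carrier = filtfilt(b, a, noise)      # band-limited noise carrier
        out += env * carrier                 # envelope-modulated noise
    return out

# Example: vocode one second of an amplitude-modulated tone
fs = 16000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(sig, fs)
```

Because only the slowly varying envelopes survive this processing while the fine spectral detail is replaced by noise, the output retains much of the information needed for speech recognition but degrades pitch-based prosodic cues, which is what makes it a useful simulation of cochlear implant listening for normal-hearing participants.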

In brief, we found that older listeners had more difficulty using prosodic speech cues and, correspondingly, found it harder to identify the emotion expressed in an utterance.  We did not find an effect of age on listeners’ ability to recognise speech.  Whilst speech recognition is perhaps the most essential part of good communication, these results show that older listeners may be missing essential cues that give additional meaning to speech.  Much as it is difficult to detect emotion in emails and text messages, missing the emotional context of speech could lead to misunderstanding and reduce empathy between speakers.  It remains to be seen whether the effects of age can be mitigated by changes to cochlear implant design.

For more details see our CIAP poster.