Welcome to our new website!

The Cochlear Implant Brain and Behavior Lab at UConn has launched its new website! 

Whether you are interested in our research, looking for resources for people with hearing loss, or would like to participate in one of our studies, we hope that you will follow this site to keep up with the latest news coming out of our lab.

Older listeners with normal hearing wanted for research study

Use of speech cues by older and younger listeners to understand speech and emotion

We are conducting a research study to determine how older and younger listeners use cues in speech to aid their understanding of speech and of the emotions expressed by a speaker: are they, for example, happy, sad, angry, or anxious?  This knowledge will help us and others design better speech coding strategies for cochlear implants.

We are looking for adults between 65 and 78 years old who have normal hearing for their age.

You will be asked to do the following:

  • Participate in tests where you will be played sounds that have been processed to simulate a cochlear implant.
  • In one test, you will be played sounds similar to musical notes, and we will find the smallest difference in frequency that lets you tell that one sound is different from another.
  • In another test, you will be played sentences and you will be asked to repeat what you hear.
  • In a final test, you will be played sentences and asked whether you think the speaker was angry, sad, happy, anxious, or speaking in a neutral tone.

This study will be supervised by Dr Lendra Friesen.  You may have the results of any testing provided to you.  There are no other direct benefits, but you will be helping us to learn about the ability of older adults to use speech cues, which could lead to the development of better cochlear implants.  Participants will be paid $15/hour for any testing and $10/day for travel to the UConn clinic.

For more information or to sign up, please contact Lendra Friesen.
Email: lendra.friesen@uconn.edu

UConn IRB protocol approved.

Oral presentation at IERASG Symposium, Sydney, Australia

[Photo: Sydney Opera House]

The International Evoked Response Audiometry Study Group (IERASG) symposium is held every two years and was held this year in Sydney, Australia, from June 30th to July 4th.  The conference provides a forum for discussion of the physiologic signals generated within the auditory system, including EEG, cortical auditory evoked potentials (CAEP), otoacoustic emissions (as used in newborn screening), and the auditory brainstem response (ABR).

Dr Lendra Friesen, the Director of the Cochlear Implant Brain and Behavior Lab, presented the latest results from a CAEP study aimed at better understanding the neural mechanisms underlying low-frequency hearing preservation in cochlear implant users with new electrode technology who have undergone “soft” surgery.

For more details see our conference abstract.

Poster presentation at CIAP, Lake Tahoe, CA

[Photo: Lake Tahoe]

The Conference on Implantable Auditory Prostheses (CIAP) is held every two years at the Granlibakken Conference Center in Lake Tahoe, California.  The beautiful location hosts scientific researchers from around the world and is one of the leading forums worldwide for the presentation and discussion of cochlear implant research.  This year’s conference, held from July 14-19, was the 19th in a series that originated in 1983, the very early days of cochlear implant research.  The conference is almost unique in bringing together the wide range of disciplines that underlie cochlear implant research, as well as the surgeons, audiologists, and engineers who work in the field.

We were delighted to present results from an ongoing study on “Rate discrimination, sentence and prosody recognition in young and elderly normal hearing adults using vocoders”.  The work was presented in collaboration with Dr Monita Chatterjee from Boys Town National Research Hospital and Dr Robert Morse from the UK, who wrote the Matlab programs for controlling the experiments.  In the study we are investigating the ability of younger and older listeners to use prosodic speech cues, which are essential for determining whether the speaker is female or male, whether an utterance is a statement or a question, and the speaker’s tone of voice (for example, whether they are happy or sad).  We used a noise vocoder to simulate the way that a cochlear implant codes speech.  We related the ability to use prosodic information to performance on real-world listening tasks, such as speech understanding (using IEEE sentences) and the ability to identify the emotion expressed by a speaker (using the House Ear Institute emotional speech database).
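For readers curious about what a noise vocoder does, the sketch below gives a minimal illustration of the general idea: the speech signal is split into a small number of frequency bands, the slowly varying envelope of each band is extracted, and each envelope is used to modulate band-limited noise before the bands are summed back together.  This is only an illustrative sketch, not the Matlab code used in the study; the parameter choices (eight channels, Butterworth filters, a 50 Hz envelope cutoff) are assumptions made for the example, and it relies on Python’s NumPy and SciPy libraries.

# Minimal noise-vocoder sketch (illustrative only; not the study's code).
# Assumed parameters: 8 log-spaced channels spanning 100-8000 Hz,
# 4th-order Butterworth filters, envelope smoothing at 50 Hz.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, n_channels=8, f_lo=100.0, f_hi=8000.0, env_cutoff=50.0):
    """Return a noise-vocoded version of `speech` (1-D array at sample rate `fs`)."""
    speech = np.asarray(speech, dtype=float)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    carrier = np.random.randn(len(speech))              # broadband noise carrier
    env_lp = butter(4, env_cutoff / (fs / 2), btype="low", output="sos")
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
        band = sosfiltfilt(band_sos, speech)             # analysis band of the speech
        envelope = sosfiltfilt(env_lp, np.abs(band))     # smoothed temporal envelope
        noise_band = sosfiltfilt(band_sos, carrier)      # matching band of the noise
        out += np.clip(envelope, 0, None) * noise_band   # envelope-modulated noise band
    return out / (np.max(np.abs(out)) + 1e-12)           # normalise to avoid clipping

Listening to the output of a simulation like this gives normal-hearing listeners a rough impression of the spectrally degraded signal that a cochlear implant delivers, which is why vocoders are so widely used in this line of research.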

In brief, we found that it was harder for older listeners to use prosodic speech cues, and correspondingly older listeners found it harder to identify the emotion expressed in an utterance.  We did not find an effect of age on the ability of listeners to recognise speech.  Whilst speech recognition is perhaps the most essential part of good communication, these results show that older listeners may be missing important cues that give additional meaning to speech.  Much like the difficulty of detecting emotion in emails and text messages, missing the emotional context of speech could lead to misunderstanding and reduce empathy between speakers.  It remains to be seen whether the effects of age can be mitigated by changes to cochlear implant design.

For more details see our CIAP poster.