Publisher

Trevor Cox

Posted

1 year ago

Hearing loss and deep learning technology, University of Sheffield, United Kingdom

Degree Level

PhD

Field of study

Neuroscience

Funding

Full funding available

Deadline

December 31, 2026
Country

United Kingdom

University

University of Sheffield
Keywords

Neuroscience
Computer Science
Audiology
Biomedical Engineering
Signal Processing
Deep Learning
Artificial Intelligence
Speech Recognition
Virtual Reality
Artificial Neural Network
Acoustical Engineering

About this position

We have a new fully funded PhD advertised through the CDT, "Better Personalization of Deep Learning-Enhanced Hearing Devices". See the University of Sheffield list: https://lnkd.in/e2CkWTRG

Hearing loss affects over 5% of the world’s population, making it a major public health concern. Hearing aids are the most commonly prescribed treatment, but many users report they do not perform well for listening to speech in noisy situations. Breakthroughs in deep learning and low-power chip design are driving the next generation of hearing devices and wearables, with the potential to revolutionize speech understanding in challenging listening environments. For example, Apple’s AirPods Pro have gained FDA approval as hearing aids for mild to moderate hearing loss, and Phonak has introduced deep neural network-equipped devices that dynamically enhance speech clarity in noisy environments. However, training these approaches to work in general settings and to suit individual preferences remains a critical challenge.

To improve deep learning-enhanced hearing aids, we require metrics that predict how well a given hearing aid algorithm will perform for a specific user in a particular acoustic environment. Existing approaches often rely on oversimplified assumptions about listener preferences, which are captured using basic metrics. For example, it is often assumed there is a well-defined target speaker and that processing should maximise noise suppression while preserving quality. These simple metrics do little to capture users’ needs in more complex settings, such as trying to engage in multiparty conversations in a busy restaurant.
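To make the critique concrete, the kind of "basic metric" described above can be illustrated with a simple signal-to-noise ratio computed against a clean reference. This is a minimal sketch, not part of the project; the `snr_db` helper and the toy signals are assumptions for illustration only.

```python
import numpy as np

def snr_db(clean, processed):
    """Signal-to-noise ratio in dB of a processed signal against a
    clean reference: a simple metric of the kind often used to score
    hearing aid enhancement algorithms (hypothetical example)."""
    noise = processed - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

# Toy example: a sine tone standing in for "speech", with additive noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 16000)          # 1 s at a 16 kHz sample rate
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.1 * rng.standard_normal(t.size)

print(f"SNR of noisy signal: {snr_db(clean, noisy):.1f} dB")
```

A metric like this captures how much residual noise remains, but, as the paragraph above notes, it says nothing about which of several talkers a listener actually wants to attend to in a multiparty conversation.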

The project will explore a variety of methods to understand hearing device user preferences in more complex settings, including leveraging virtual reality (VR) to simulate diverse acoustic environments and hearing aid algorithms. VR offers the advantage of creating immersive and controlled scenarios where users can directly experience and evaluate different algorithmic configurations. This approach allows the systematic measurement of user preferences across a wide range of conditions, ensuring both ecological validity and experimental rigour. From this understanding, new algorithm quality metrics will be derived for optimising existing deep learning enhancement approaches in a more user-dependent manner.

The project will be based at the University of Sheffield and co-supervised by experts from both Sheffield and the University of Salford, collaborators on the ongoing EPSRC-funded Clarity Project. The Clarity Project focuses on improving speech-in-noise understanding, making it a natural foundation for this work. The Royal National Institute for Deaf People (RNID) will act as a key partner, offering additional expertise and a crucial end-user perspective.

Funding details

Full funding including tuition fees and living expenses is available for this position. The scholarship covers all educational costs and provides a monthly stipend.

How to apply

Please submit your application including a cover letter, CV, academic transcripts, and contact information for two references. Applications should be sent via the online portal before the deadline.
