Keynote: From Sensing to Hearing

Advances in mixed reality and discoveries in perception science offer exciting possibilities for matching human perceptual strengths with vast distributed sensing, AI, and memory resources. In this sphere, audio-based techniques stand to lead the way, driven by advances in non-occluding, head-tracking headphones and hearables, and supported by clear evidence that ambient and peripheral stimuli influence human mood, attention, and cognition. In many ways, this future is already here, exemplified by the growing convergence between assistive devices and consumer wearables around advanced sensing, connectivity, and interaction design. I will focus on two projects: first, HearThere, a bone-conduction headphone I developed for investigating user experiences of audio augmented reality (AR); and second, ongoing work that uses deep audio embeddings and sensor feedback to generate personalized audio streams for people falling asleep. Using my own and others' applications of these technologies as case studies, I will show how sensing and audio AR can transform how we perceive ourselves and access the world around us.