Amber Smith's voice is a symptom of illness and an alarm for looming danger, even if she doesn't always hear it herself.
Amber has bipolar disorder, and her mood swings are a risk: high highs can lead to massive spending sprees, and low lows have dipped into suicidal territory. She's managing it now with medication. She's also testing a new technology that tries to catch a mood swing before it starts by using her cell phone to analyze the acoustics of her voice. Tiny variations in how she speaks, or how you speak, can be clues to shifting mental states.
"Speech is incredibly rich it encodes so much of our behavior, it encodes information about gender, about our age, about our identity, and in this case about mood," explains computer engineering professor Emily Mower Provost of the University of Michigan. She and her colleague psychiatrist Melvin McInnis are testing out how to plumb the hidden signals and codes of a human voice to enable early action and better care for people with mental health issues.
It gets touching, it gets ambitious, and it's all pretty hopeful. Have a listen.
This is Part 1 of a two-part series on voices and how computers and new technology can hear hidden meaning in how we speak. Next week: how this is being used to make products and profits. Subscribe to New Tech City here to make sure you don't miss it.