“If voice is the future, tech companies need to prioritize developing software that is inclusive of all speech. In the United States, 7.5 million people have trouble using their voice and more than 3 million people stutter, which can make it difficult for them to fully realize voice-enabled technology.

Speech disabilities can stem from a wide variety of causes, including Parkinson’s disease, cerebral palsy, traumatic brain injuries, even age. In many cases, those with speech disabilities also have limited mobility and motor skills. This makes voice-enabled technology especially beneficial for them, as it doesn’t involve pushing buttons or tapping a screen. For disabled people, this technology can provide independence, making speech recognition that works for everyone all the more important. Yet voice-enabled tech developers struggle to meet their needs.

People with speech disabilities use the same language and grammar that others do. But their speech musculature—things like their tongue and jaw—is affected, resulting in consonants becoming slurred and vowels blending together, says Frank Rudzicz, an associate professor of computer science at the University of Toronto who studies speech and machine learning. These differences present a challenge in developing voice-enabled technologies.

Tech companies rely on user input to fine-tune their algorithms. The machine learning that makes voice-enabled tech possible requires massive amounts of data, which comes from the commands you give and the questions you ask devices. Most of these data points come from younger abled users, says Rudzicz. This means that it can be challenging to use machine-learning techniques to develop inclusive voice-enabled technology that works consistently for populations whose speech varies widely, such as children, the elderly, and the disabled.”
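The data-imbalance problem the quote describes can be illustrated with a toy sketch. This is not how any real speech recognizer works; it is a minimal nearest-centroid classifier on one made-up acoustic feature, with all numbers invented for illustration. The point it shows is the quoted one: when training data comes overwhelmingly from one group of speakers, the learned model can score perfectly on that group while failing on speakers whose vowels blend together.

```python
# Toy sketch (hypothetical numbers): a nearest-centroid vowel classifier
# trained almost entirely on "typical" speech performs worse on speakers
# whose vowels blend toward each other.

# One acoustic feature per sample (loosely, a first-formant-like value in Hz).
# Typical speakers: /a/ near 750, /o/ near 450 (well separated).
# Speakers with blended vowels: /a/ near 585, /o/ near 550 (overlapping).
typical = [("a", 760), ("a", 740), ("a", 750),
           ("o", 440), ("o", 460), ("o", 450)]
atypical = [("a", 590), ("a", 580), ("o", 560), ("o", 540)]

# Training set dominated by typical speakers, with a single atypical sample,
# mirroring how most voice data comes from younger abled users.
train = typical * 10 + atypical[:1]

def centroid(data, label):
    vals = [x for (lab, x) in data if lab == label]
    return sum(vals) / len(vals)

c_a = centroid(train, "a")
c_o = centroid(train, "o")

def predict(x):
    # Assign the vowel whose training centroid is closest.
    return "a" if abs(x - c_a) < abs(x - c_o) else "o"

def accuracy(samples):
    return sum(predict(x) == lab for (lab, x) in samples) / len(samples)

print(f"typical-speaker accuracy:  {accuracy(typical):.2f}")
print(f"atypical-speaker accuracy: {accuracy(atypical):.2f}")
```

With these made-up numbers the classifier gets every typical sample right but misclassifies the atypical /a/ samples, because the decision boundary learned from the imbalanced training set sits where only typical vowels are separable. Adding more atypical speech to the training data is exactly the kind of fix the article implies.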