Google’s latest app is a machine learning-based research tool designed to make communication easier for people with speech impairments. The company is looking for beta testers to try out and improve the app starting today.
As a product manager for Google Research said in a video, “standard speech recognition doesn’t always work as well for people with atypical speech because the algorithms have not been trained on samples of their speech.” Project Relate instead uses custom models trained on each individual user’s speech patterns. When someone first launches the app, it asks them to repeat a series of phrases to create a base model and learn the way they speak.
- In the video demonstration shared by Google, a user with a speech impairment talks to the app, which then speaks her request aloud to another person.
- The user can also talk to Google Assistant from the app, which carries out her request. “If after 250 phrases, we realise that the accuracy of the model is good enough, we release the model early, but it might not be the case for every user. It depends on the severity of their speech impairment,” the product manager added.
- The app then uses these phrases to automatically learn how to better understand the user’s unique speech patterns and unlock its three main features: Listen, Repeat, and Assistant.
- “With the Listen feature, which we demoed in the app, we start with the baseline speech recognition model that is also being used in Gboard.”
- Users will also be able to create custom phrases that they would like to say to the Assistant. This, Google says, will help the app’s model become more robust and personalised for each user.
- Google says it also worked with Aubrie Lee, a brand manager at the company whose speech is affected by muscular dystrophy.