Apple and Google have suspended human review of recordings of users interacting with their voice assistants

The decision follows reports that Apple contractors tasked with reviewing the recordings regularly heard confidential information and private conversations.

The temporary pause on the practice of listening to and transcribing audio recorded by their platforms comes after a spate of stories revealed that private conversations and intimate moments have been inadvertently recorded by voice assistants and reviewed by the companies as part of ongoing quality assurance. Notably, each of these stories revealed that the reviewers include third-party contractors, not just employees of the large tech companies.

AMAZON TAKES THE OPT-OUT APPROACH

Amazon has taken a different approach with Alexa. Like Google and Apple, it uses human reviewers for some recordings, but the company stresses that it also offers users the ability to opt out of having their recordings heard by any human. Given that opt-out option, Amazon apparently does not think it necessary to halt or radically change its current procedures.

An Amazon spokesperson commented in an email to Voicebot, “We take customer privacy seriously and continuously review our practices and procedures. For Alexa, we already offer customers the ability to opt-out of having their voice recordings used to help develop new Alexa features. The voice recordings from customers who use this opt-out are also excluded from our supervised learning workflows that involve manual review of an extremely small sample of Alexa requests. We’ll also be updating information we provide to customers to make our practices more clear.”
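To illustrate how an opt-out like the one Amazon describes can interact with a manual-review workflow, here is a minimal Python sketch. The record schema, the sampling fraction, and the function name are hypothetical, chosen only to show the idea of excluding opted-out users before sampling; they are not Amazon's actual system.

```python
import random

# Hypothetical recording records; the schema and opt-out flag are illustrative.
recordings = [
    {"user": "u1", "audio": b"...", "opted_out": False},
    {"user": "u2", "audio": b"...", "opted_out": True},   # excluded from review
    {"user": "u3", "audio": b"...", "opted_out": False},
]

REVIEW_FRACTION = 0.01  # "an extremely small sample", per Amazon's statement

def sample_for_manual_review(records, fraction=REVIEW_FRACTION, seed=None):
    """Drop opted-out users first, then draw a small random sample for humans."""
    eligible = [r for r in records if not r["opted_out"]]
    k = max(1, round(len(eligible) * fraction)) if eligible else 0
    return random.Random(seed).sample(eligible, k)

queue = sample_for_manual_review(recordings, seed=7)
assert all(not r["opted_out"] for r in queue)  # u2 never reaches a reviewer
```

The key design point is that the opt-out filter runs before sampling, so an opted-out recording can never land in the review queue regardless of how the sample is drawn.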

VOICE ASSISTANTS RECORD MORE THAN THEY SHOULD

Voice assistant recordings are transcribed and analyzed by employees and contractors in order to correct errors and improve the assistants. This is part of a process called supervised learning, which is common in machine learning systems: humans annotate a subset of interactions to confirm the assistant responded appropriately, and many practitioners consider the approach critical to making assistants better at recognizing uncommon requests and accents. This quality control and testing is covered by the standard licensing terms people agree to when they buy a phone or smart speaker. According to the iOS license agreement,

“By using Siri or Dictation, you agree and consent to Apple’s and its subsidiaries’ and agents’ transmission, collection, maintenance, processing, and use of this information, including your voice input and User Data, to provide and improve Siri, Dictation, and dictation functionality in other Apple products and services.”
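To make the supervised-learning step described above concrete, here is a minimal, hypothetical Python sketch of the annotation flow: a human corrects what the assistant thought it heard, and only human-verified clips become training labels. The data structure and function names are illustrative, not any vendor's actual pipeline.

```python
from dataclasses import dataclass

# Hypothetical data model for one assistant interaction.
@dataclass
class Interaction:
    audio_id: str
    asr_transcript: str         # what the assistant thought it heard
    human_transcript: str = ""  # ground truth supplied by a reviewer
    reviewed: bool = False

def annotate(interaction: Interaction, corrected_text: str) -> None:
    """A human reviewer listens to the clip and supplies the true transcription."""
    interaction.human_transcript = corrected_text
    interaction.reviewed = True

def training_pairs(interactions: list[Interaction]) -> list[tuple[str, str]]:
    """Only human-verified clips become (audio, label) pairs for retraining,
    e.g. to improve recognition of uncommon requests and accents."""
    return [(i.audio_id, i.human_transcript) for i in interactions if i.reviewed]

# A reviewer corrects one misheard request:
clip = Interaction("a1", asr_transcript="play despite oh")
annotate(clip, "play Despacito")
assert training_pairs([clip]) == [("a1", "play Despacito")]
```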

Voice assistants also sometimes activate without hearing their wake word. A Voicebot survey in 2018 found that 28.5 percent of smart speaker users experienced these false wake-ups at least daily, with another 43.7 percent reporting the incidents happened at least monthly. Whether you notice the false wake-up or not, the voice assistant records nearby audio for a few seconds, and a portion of those recordings is later reviewed by a human to determine whether the activation was a false wake-up or a missed user command. That information is then used to determine how, and whether, the issue can be avoided in the future. Apple’s method is outlined in a security white paper, which notes that the company ingests voice recordings, strips them of identifiable information, assigns a random device identifier, and saves the data for six months, during which time the system can tap into the information for learning purposes. After the six-month period, the identifier is erased and the clip is saved “for use by Apple in improving and developing Siri for up to two years.”
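Here is a minimal Python sketch of the retention scheme the white paper describes: a random identifier at ingest, a six-month window in which the clip remains tagged, then de-identified storage until deletion after two years. The function names and the exact day counts are assumptions for illustration, not Apple's implementation.

```python
import uuid
from datetime import datetime, timedelta

# Day counts approximate "six months" and "two years" from the white paper.
SIX_MONTHS = timedelta(days=182)
TWO_YEARS = timedelta(days=730)

def ingest(audio: bytes, received_at: datetime) -> dict:
    """Tag a clip (identifiable info assumed already stripped upstream)
    with a random device identifier."""
    return {"audio": audio, "device_id": uuid.uuid4().hex, "received_at": received_at}

def apply_retention(clip: dict, now: datetime):
    """Return the clip as it should be stored at `now`, or None once expired."""
    age = now - clip["received_at"]
    if age > TWO_YEARS:
        return None                        # deleted entirely after two years
    if age > SIX_MONTHS:
        return dict(clip, device_id=None)  # identifier erased after six months
    return clip                            # identified copy, usable for learning

clip = ingest(b"...", datetime(2019, 1, 1))
aged = apply_retention(clip, datetime(2019, 9, 1))
assert aged["device_id"] is None           # still stored, but de-identified
```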

Apple does not explicitly mention the possibility of manual review by human contractors or employees, nor does it currently offer an option for Siri users to opt out of the program. The company will address the latter issue in a future software update.
