Google Assistant has once again beaten Siri and Alexa. In an annual comparison carried out by Loup Ventures, Google's assistant understood the most questions and gave the most correct answers.
The test consisted of 800 questions posed to each of the three assistants. For the second year in a row, Google Assistant understood 100% of the questions. It also improved its rate of correct answers, going from 85.5% in 2018 to 92.9%.
Siri, in turn, understood 99.8% of what was said, up from 99% in 2018. The Apple assistant also answered more questions correctly, rising from 78.5% to 83.1%. Alexa understood 99.9% of the questions – up from 98% in 2018 – and answered 79.8% of them correctly – up from 61.4%.
The Google Assistant test was run on a Pixel XL, Siri's on an iPhone with iOS 12.4, and Alexa's through its iOS app. The comparison covered five categories of questions: location, commerce, navigation, information, and command.
Google's assistant led in four categories and had its widest margin in commerce, which involves requests like "order me more paper towels". It answered 92% of the questions correctly, while Alexa had a 71% hit rate and Siri 68%.
"Common sense suggests that Alexa would be better suited for commerce questions," says Loup Ventures. "However, Google Assistant correctly answers more questions about product and service information and about where to buy certain items."
The only category Google Assistant did not lead was command, which includes requests like "remind me to call Jerome at 2pm today". Siri did best, answering 93% of the questions correctly; Google Assistant's hit rate was 86% and Alexa's 69%.
"Both Siri and Google Assistant, which are integrated into the phone's operating system, far surpassed Alexa in the command section," says Loup Ventures. "Alexa lives in a third-party app which, although it can send voice messages and call other Alexa devices, cannot send text messages or emails, or initiate a phone call."
This year, the benchmark did not evaluate Cortana, owing to Microsoft's decision to change the assistant's "strategic positioning". When it was last evaluated, in 2018, Cortana understood 99.4% of the questions but answered only 52.4% of them correctly.
Source: VentureBeat.