Google improves voice recognition in noisy places
Google has announced improvements to its voice recognition system that make voice searches more accurate in noisy conditions.
Google’s voice search has long been one of the better voice recognition systems out there, and it is now stronger than ever.
Over on the Google Research blog, the company has outlined some ways in which it has improved its systems.
This has involved moving from deep neural network technology to recurrent neural networks. The blog goes on to provide some fairly technical explanations of what this means, but essentially Google’s voice recognition system can now consider the context in which vocal sounds are made, inferring each part of a word from the sounds around it.
The example provided is the word “museum,” a flowing word that can prove difficult for systems to separate into its constituent sounds. Google’s new system processes all parts of the word together, in real time.
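To make the idea concrete, here is a minimal sketch (not Google’s actual model, which the blog describes as far more sophisticated) contrasting a frame-by-frame feedforward layer with a simple recurrent cell whose hidden state carries context from earlier audio frames. The feature dimensions and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME_DIM = 13   # assumed: MFCC-style features per short audio frame
HIDDEN_DIM = 8   # assumed: size of the recurrent hidden state

W_in = rng.normal(size=(HIDDEN_DIM, FRAME_DIM)) * 0.1
W_rec = rng.normal(size=(HIDDEN_DIM, HIDDEN_DIM)) * 0.1

def rnn_states(frames):
    """Run a simple tanh RNN over a sequence of audio frames.

    Each state depends on the current frame AND all earlier frames,
    so the sounds of a flowing word like "museum" are scored in context.
    """
    h = np.zeros(HIDDEN_DIM)
    states = []
    for x in frames:
        h = np.tanh(W_in @ x + W_rec @ h)  # previous h carries the context
        states.append(h)
    return np.array(states)

def feedforward_states(frames):
    """Score each frame independently, with no memory of earlier frames."""
    return np.array([np.tanh(W_in @ x) for x in frames])

# Two utterances that end in an identical final frame but start differently.
shared_frame = rng.normal(size=FRAME_DIM)
utt_a = [rng.normal(size=FRAME_DIM), shared_frame]
utt_b = [rng.normal(size=FRAME_DIM), shared_frame]

# The feedforward model scores the shared frame identically in both
# utterances; the RNN scores it differently because the context differs.
ff_same = np.allclose(feedforward_states(utt_a)[-1], feedforward_states(utt_b)[-1])
rnn_same = np.allclose(rnn_states(utt_a)[-1], rnn_states(utt_b)[-1])
print(ff_same, rnn_same)  # True False
```

The difference in that final comparison is the whole point: a recurrent state lets the system interpret an ambiguous sound differently depending on what came before it, which is what helps in noisy environments.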
The result is that Google’s voice recognition software remains accurate even in loud environments, where every part of a spoken word may not be clearly audible.
In addition, the system’s ability to consume larger chunks of audio with fewer computations means that Google voice searches are much faster than before.
Google’s new acoustic models are now being applied to its voice searches on the Google app for Android and iOS, as well as for dictation on Android.