Improving Vision-inspired Keyword Spotting Using a Streaming Conformer Encoder With Input-dependent Dynamic Depth
Authors: Alexandre Bittar, Paul Dixon, Mohammad Samragh, Kumari Nishu, Devang Naik
Using a vision-inspired keyword spotting framework, we propose an architecture with input-dependent dynamic depth capable of processing streaming audio. Specifically, we extend a Conformer encoder with trainable binary gates that allow network modules to be skipped dynamically according to the input audio. Our approach improves detection and localization accuracy on continuous speech using the 1,000 most frequent words of LibriSpeech while maintaining a small memory footprint. The inclusion of gates also allows the average amount of processing to be reduced without affecting overall performance. These benefits are shown to be even more pronounced on the Google Speech Commands dataset placed over background noise, where up to 97% of the processing is skipped on non-speech inputs, making our method particularly interesting for an always-on keyword spotter.
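To make the gating idea concrete, below is a minimal PyTorch sketch of an input-dependent binary gate wrapped around one encoder module. It is an illustrative assumption, not the paper's exact recipe: the gating-network layout, the mean-pooling over time, and the straight-through estimator are all placeholder choices, and a real system would add a sparsity penalty on the gate activations to encourage skipping.

```python
import torch
import torch.nn as nn


class GatedBlock(nn.Module):
    """Wrap a module with a trainable, input-dependent binary gate.

    A small gating network predicts, from the time-pooled input features,
    whether to execute the wrapped module. Training uses a straight-through
    estimator so the hard 0/1 decision stays differentiable; at inference,
    a closed gate lets the module be skipped entirely.
    (Hypothetical sketch: layout and pooling are assumptions.)
    """

    def __init__(self, module: nn.Module, dim: int):
        super().__init__()
        self.module = module
        self.gate = nn.Sequential(
            nn.Linear(dim, dim // 4), nn.ReLU(), nn.Linear(dim // 4, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim); pool over time to summarize the input chunk.
        prob = torch.sigmoid(self.gate(x.mean(dim=1)))  # (batch, 1)
        hard = (prob > 0.5).float()                     # binary decision
        g = hard + prob - prob.detach()                 # straight-through grad
        if not self.training and hard.max() == 0:
            return x                                    # skip all computation
        return x + g.unsqueeze(-1) * self.module(x)    # gated residual path


if __name__ == "__main__":
    # Stand-in for one Conformer sub-module (here, a feed-forward block).
    ff = nn.Sequential(nn.Linear(144, 576), nn.SiLU(), nn.Linear(576, 144))
    block = GatedBlock(ff, dim=144)
    out = block(torch.randn(4, 100, 144))  # (batch, frames, features)
    print(out.shape)                       # torch.Size([4, 100, 144])
```

In this sketch a closed gate reduces the block to its residual identity path, so skipped modules cost nothing at inference, which is the property that enables the large processing savings reported on non-speech inputs.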