Who Calls the Shots? Rethinking Few-Shot Learning for Audio
Document Type
Conference Proceeding
Publication Date
1-1-2021
Abstract
Few-shot learning aims to train models that can recognize novel classes given just a handful of labeled examples, known as the support set. While the field has seen notable advances in recent years, these advances have largely focused on multi-class image classification. Audio, in contrast, is often multi-label due to overlapping sounds, resulting in unique properties such as polyphony and signal-to-noise ratios (SNR). This raises unanswered questions concerning the impact such audio properties may have on few-shot learning system design, performance, and human-computer interaction, as it is typically up to the user to collect and provide inference-time support set examples. We address these questions through a series of targeted experiments. We introduce two novel datasets, FSD-MIX-CLIPS and FSD-MIX-SED, whose programmatic generation allows us to explore these questions systematically. Our experiments lead to audio-specific insights on few-shot learning, some of which are at odds with recent findings in the image domain: there is no one-size-fits-all best model, method, and support set selection criterion. Rather, it depends on the expected application scenario. Our code and data are available at https://github.com/wangyu/rethink-audio-fsl.
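To make the abstract's key terms concrete, the following is a minimal, hypothetical sketch (not the authors' released code or exact method) of prototype-based few-shot inference adapted to the multi-label audio setting: per-class prototypes are averaged from the user-provided support set, and each query clip is scored against every class with an independent sigmoid, so overlapping (polyphonic) sounds can be detected simultaneously. All names, the cosine-similarity logit, and the 5-shot setup below are illustrative assumptions.

```python
import numpy as np

def class_prototype(support_embeddings: np.ndarray) -> np.ndarray:
    """Average the n-shot support embeddings of one novel class
    into a single prototype vector of shape (embed_dim,)."""
    return support_embeddings.mean(axis=0)

def multilabel_few_shot_scores(query: np.ndarray,
                               prototypes: dict[str, np.ndarray],
                               temperature: float = 0.1) -> dict[str, float]:
    """Score one query embedding against every class prototype
    independently. Unlike a multi-class softmax, each class gets its
    own sigmoid score, so several classes can be active at once --
    matching the multi-label, polyphonic nature of audio described
    in the abstract."""
    scores = {}
    for label, proto in prototypes.items():
        # Cosine similarity as the logit (an assumed choice here;
        # other few-shot methods use Euclidean distance instead).
        sim = np.dot(query, proto) / (
            np.linalg.norm(query) * np.linalg.norm(proto) + 1e-8)
        scores[label] = float(1.0 / (1.0 + np.exp(-sim / temperature)))
    return scores

# Hypothetical usage: a 5-shot support set for two novel sound classes,
# with random vectors standing in for real audio embeddings.
rng = np.random.default_rng(0)
embed_dim = 128
prototypes = {
    "dog_bark": class_prototype(rng.normal(size=(5, embed_dim))),
    "siren": class_prototype(rng.normal(size=(5, embed_dim))),
}
query_embedding = rng.normal(size=embed_dim)
print(multilabel_few_shot_scores(query_embedding, prototypes))
```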
Identifier
85123418497 (Scopus)
ISBN
978-1-6654-4870-3
Publication Title
IEEE Workshop on Applications of Signal Processing to Audio and Acoustics
External Full Text Location
https://doi.org/10.1109/WASPAA52581.2021.9632677
e-ISSN
1947-1629
ISSN
1931-1168
First Page
36
Last Page
40
Volume
2021-October
Grant
1544753
Fund Ref
National Science Foundation
Recommended Citation
Wang, Yu; Bryan, Nicholas J.; Salamon, Justin; Cartwright, Mark; and Bello, Juan Pablo, "Who Calls the Shots? Rethinking Few-Shot Learning for Audio" (2021). Faculty Publications. 4600.
https://digitalcommons.njit.edu/fac_pubs/4600