Abstract:
Language-queried audio source separation (LASS) enables on-demand extraction of sound sources using natural language queries, overcoming limitations of traditional audio source separation systems. In this work, we propose a LASS architecture integrating two major innovations: (i) a cross-attention-driven ResUNet++ with multi-scale receptive fields (via Atrous Spatial Pyramid Pooling), channel-wise attention (Squeeze-and-Excitation blocks), and residual connections to fuse FLAN-T5 text embeddings with audio features; and (ii) cosine similarity filtering to suppress overly similar mixture-target pairs that can hinder training. We trained our model on mixtures derived from the Clotho dataset and evaluated it on the Clotho test set using standard metrics. Our system achieves good separation quality, with an SDR of 2.41 and an SDRi of 8.37. Compared with current state-of-the-art models, this work presents a lightweight, efficient framework for language-queried audio source separation.
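The cosine similarity filtering step can be illustrated with a minimal sketch. The abstract does not specify the representation on which similarity is computed, so this example assumes flattened spectrogram (or embedding) vectors for the mixture and the target, and a hypothetical threshold of 0.9; pairs whose similarity exceeds the threshold are discarded before training.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two arrays, flattened to vectors."""
    a, b = a.ravel(), b.ravel()
    # Small epsilon guards against division by zero for silent signals.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def filter_pairs(pairs, threshold=0.9):
    """Keep only mixture-target pairs that are sufficiently dissimilar.

    `pairs` is a list of (mixture, target) array tuples; the threshold
    value here is an illustrative assumption, not the paper's setting.
    """
    return [(m, t) for m, t in pairs if cosine_similarity(m, t) < threshold]

# Example: an identical pair is filtered out, a dissimilar pair is kept.
target = np.array([1.0, 0.0, 0.0])
pairs = [(target.copy(), target), (np.array([0.0, 1.0, 0.0]), target)]
kept = filter_pairs(pairs)
```

In this sketch, the near-duplicate pair (similarity 1.0) is removed so that training is not dominated by examples where the mixture already equals the target.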
Keywords: Language-queried audio source separation, cross-modal attention, ResUNet++, cosine similarity filtering, phase-aware reconstruction, computational efficiency.