Please use this identifier to cite or link to this item: https://di.univ-blida.dz/jspui/handle/123456789/41130

| Title: | Language-queried audio source separation |
| Authors: | Benlaoubi, Chaima Nour el Houda; Khettal, Mounia; Ykhlef, Hadjer (Supervisor) |
| Keywords: | Language-queried audio source separation; Cross-Modal Attention; ResUNet++; Computational efficiency; Cosine similarity filtering; Phase-aware reconstruction |
| Issue Date: | 2025 |
| Publisher: | Université Blida 1 |
| Abstract: | Language-queried audio source separation (LASS) enables on-demand extraction of sound sources using natural language queries, overcoming limitations of traditional audio source separation systems. In this work, we propose a LASS architecture integrating two major innovations: (1) a cross-attention-driven ResUNet++ with multi-scale receptive fields (via Atrous Spatial Pyramid Pooling), channel-wise attention (Squeeze-and-Excitation blocks), and residual connections to integrate FLAN-T5 text embeddings with audio features; and (2) cosine similarity filtering to suppress overly similar mixture–target pairs that might hinder training. We trained our model on mixtures derived from the Clotho dataset and evaluated it on the Clotho test set using state-of-the-art metrics. Our system achieves good separation quality, with an SDR of 2.41 and an SDRi of 8.37. This work presents a lightweight, efficient framework for language-queried audio source separation compared to current state-of-the-art models. |
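The abstract's cosine similarity filtering step, which discards mixture–target pairs that are too alike to be useful training examples, can be sketched as follows. This is a minimal illustration only: the similarity threshold (0.9 here), the use of flattened signal vectors, and the function names are assumptions not specified in the record.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened signals or embeddings."""
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def filter_pairs(pairs, threshold=0.9):
    """Keep only mixture/target pairs whose similarity is below `threshold`.

    When the mixture is nearly identical to the target, separation is
    trivial and the pair contributes little to training, so it is dropped.
    The threshold value is a hypothetical choice for illustration.
    """
    return [(mix, tgt) for mix, tgt in pairs
            if cosine_similarity(mix, tgt) < threshold]
```

For example, a pair whose mixture equals its target (similarity ≈ 1.0) would be discarded, while a pair with an orthogonal mixture and target (similarity 0) would be kept.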
| Description: | ill., Bibliogr. Cote: MA-004-1055 |
| URI: | https://di.univ-blida.dz/jspui/handle/123456789/41130 |
| Appears in Collections: | Mémoires de Master |
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| BENLAOUBI Chaima Nour el Houda & KHETTAL Mounia.pdf | | 3.93 MB | Adobe PDF | View/Open |