Please use this address to cite this item: https://di.univ-blida.dz/jspui/handle/123456789/19960
Title: ENCODER-DECODER NEURAL NETWORK ARCHITECTURES FOR AUTOMATIC AUDIO CAPTIONING
Author(s): Bouchelaram, Ishrak
Chita, Ramzi
Kameche, A. (Supervisor)
Keywords: Audio Captioning
Machine Learning
Encoder Decoder Models
Signal Processing
Natural Language Processing
Publication date: 25-Sep-2022
Publisher: Université Blida 1
Abstract: The main purpose of this project is to design a system that describes general environmental audio content in text: it accepts an audio signal as input and outputs a textual description of that signal. This task has drawn considerable attention over the past several years, owing to the rapid development of methods that can generate captions for general audio recordings. To accomplish the automatic audio captioning task, we performed multiple experiments on the Clotho dataset. Two deep neural networks were employed in the construction of our systems, the Recurrent Neural Network and the Gated Recurrent Unit, arranged in an encoder-decoder architecture, together with feature representations based on audio processing techniques such as the Mel spectrogram, and on text processing techniques used to decode captions from word embeddings such as one-hot encoding and BERT.
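
As a rough illustration of the architecture the abstract describes, the sketch below implements a minimal GRU-based encoder-decoder for audio captioning in PyTorch. It is not the authors' implementation: all layer sizes, the vocabulary size, and the assumption of log-Mel spectrogram input (e.g. features extracted with librosa.feature.melspectrogram) are illustrative choices, not values taken from the thesis.

    # Minimal sketch (not the thesis code) of a GRU encoder-decoder
    # for audio captioning. Sizes and names below are assumptions.
    import torch
    import torch.nn as nn

    class CaptionModel(nn.Module):
        def __init__(self, n_mels=64, hidden=256, vocab_size=5000, embed=128):
            super().__init__()
            # Encoder: GRU that summarizes the Mel-spectrogram frame sequence.
            self.encoder = nn.GRU(n_mels, hidden, batch_first=True)
            # Decoder: GRU that emits the caption one token at a time,
            # initialized from the encoder's final hidden state.
            self.embed = nn.Embedding(vocab_size, embed)
            self.decoder = nn.GRU(embed, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, mel, tokens):
            # mel:    (batch, frames, n_mels) log-Mel spectrogram
            # tokens: (batch, seq_len) caption token ids (teacher forcing)
            _, h = self.encoder(mel)           # h: (1, batch, hidden)
            dec_out, _ = self.decoder(self.embed(tokens), h)
            return self.out(dec_out)           # (batch, seq_len, vocab_size)

    # Example forward pass with random stand-in data; in practice the
    # Mel features would come from an audio front end such as librosa.
    model = CaptionModel()
    mel = torch.randn(2, 400, 64)              # 2 clips, 400 frames, 64 Mel bands
    tokens = torch.randint(0, 5000, (2, 20))   # 2 captions of 20 tokens
    logits = model(mel, tokens)
    print(logits.shape)                        # torch.Size([2, 20, 5000])

At inference time, such a model would decode greedily or with beam search, feeding each predicted token back into the decoder; the one-hot or BERT embeddings the abstract mentions would replace the learned nn.Embedding lookup shown here.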
Description: ill., bibliogr. Call number: ma-004-869
URI/URL: https://di.univ-blida.dz/jspui/handle/123456789/19960
Collection(s): Mémoires de Master

Files in this item:
Bouchelaram Ishrak et Chita Ramzi.pdf (2.66 MB, Adobe PDF)

