dc.contributor.author | Amirouche, Bouchra |
dc.contributor.author | Moussa, Ilhem |
dc.date.accessioned | 2020-10-06T09:03:07Z |
dc.date.available | 2020-10-06T09:03:07Z |
dc.date.issued | 2020 |
dc.identifier.uri | http://di.univ-blida.dz:8080/jspui/handle/123456789/6186 |
dc.description | ill., Bibliogr. | fr_FR
dc.description.abstract |
The goal of general-purpose audio tagging is to create systems capable of recognizing a variety of sounds, including musical instruments, vehicles, animals, and sounds generated by human activity. The motivation for research in artificial sound understanding lies in potential applications such as security, healthcare (hearing impairment), smarter devices, and various music-related tasks. The main contribution of this work is an extensive study and comparison of audio tagging systems using a large dataset of 11,073 audio recordings. In this thesis, we carried out two sets of experiments. First, we examined deep Convolutional Neural Networks (CNN) and three of their variants (the Convolutional Recurrent Neural Network (CRNN), the Gated Convolutional Recurrent Neural Network (GCRNN), and the Gated Convolutional Neural Network (GCNN)) using log-mel spectrogram features. We supported our analysis and discussion with numerous statistical tests to compare the effect of the above-mentioned features and models on tagging performance. Our experimental findings indicate that our systems capture a diverse set of sound events with varying confidence. Moreover, the Convolutional Recurrent Neural Network (CRNN) significantly outperforms the other models. Second, motivated by the fact that the individual models produce diverse predictions, we investigated the effect of ensemble learning using a technique known as stacking. Our analysis shows that stacking provides a proper amalgamation of the individual learners, resulting in better handling of the diverse nature of the events.
Keywords: Audio Tagging, Deep Learning, Machine Learning, Ensemble Learning, Stacking, Feature Extraction, Statistical Tests. |
fr_FR
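The abstract names log-mel spectrograms as the input representation for the CNN variants. Below is a minimal sketch of how such features are commonly extracted with librosa; the parameter values (sampling rate, FFT size, hop length, number of mel bands) are illustrative assumptions, not the thesis's actual configuration.

```python
# Hypothetical log-mel spectrogram extraction sketch (librosa).
# Parameter values are illustrative assumptions, not the thesis's settings.
import librosa
import numpy as np

def log_mel_spectrogram(path, sr=32000, n_fft=1024, hop_length=512, n_mels=64):
    """Load an audio clip and return a (n_mels, time) log-mel spectrogram."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop_length, n_mels=n_mels)
    # Convert the power spectrogram to the decibel (log) scale.
    return librosa.power_to_db(mel, ref=np.max)
```

The second set of experiments combines the individual networks with stacking. A minimal sketch, assuming the base models' per-class probabilities are already available as NumPy arrays and using a logistic-regression meta-learner (the thesis's actual meta-learner is not specified here):

```python
# Hypothetical stacking sketch: base-model probabilities become the
# meta-learner's input features. Array names and the choice of logistic
# regression are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_stacker(base_probs_val, y_val):
    """base_probs_val: list of (n_samples, n_classes) validation predictions,
    one array per base model (e.g., CNN, CRNN, GCRNN, GCNN)."""
    X_meta = np.hstack(base_probs_val)   # concatenate per-model probabilities
    meta = LogisticRegression(max_iter=1000)
    meta.fit(X_meta, y_val)              # y_val: integer class labels
    return meta

def stack_predict(meta, base_probs_test):
    X_meta = np.hstack(base_probs_test)
    return meta.predict_proba(X_meta)
```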
dc.language.iso | en | fr_FR
dc.publisher | Université Blida 1 | fr_FR
dc.subject | Audio Tagging | fr_FR
dc.subject | Deep Learning | fr_FR
dc.subject | Machine Learning | fr_FR
dc.subject | Ensemble Learning | fr_FR
dc.subject | Stacking | fr_FR
dc.subject | Feature Extraction | fr_FR
dc.subject | Statistical Tests | fr_FR
dc.title | Experimental design and analysis of audio tagging systems | fr_FR
dc.title.alternative | Case studies | fr_FR
dc.type | Thesis | fr_FR