Please use this identifier to cite or link to this item: https://di.univ-blida.dz/jspui/handle/123456789/25278
Full metadata record
DC Field: Value [Language]
dc.contributor.author: KRAMOU, Rime
dc.contributor.author: DJADI, Aya
dc.date.accessioned: 2023-10-05T08:51:27Z
dc.date.available: 2023-10-05T08:51:27Z
dc.date.issued: 2023
dc.identifier.uri: https://di.univ-blida.dz/jspui/handle/123456789/25278
dc.description: 4.621.1.1261/p65 [fr_FR]
dc.description.abstract: Voice activity detection (VAD) is considered one of the most important techniques for many speech applications. It is a key step in speech processing, as it detects the presence or absence of speech in a signal. VAD previously relied on signal-processing methods, which did not perform satisfactorily in high-noise environments, so deep learning has become an alternative. In the experimental study we adopted deep-learning architectures, namely Convolutional Neural Networks (CNN) and a DenseNet network, and used three speech and noise databases: LibriSpeech, TIDIGITS and CHiME-5. We measured accuracy in low-noise environments at various sensitivities and achieved 100% accuracy. [fr_FR]
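The abstract describes frame-level VAD with convolutional networks. The following is a minimal numpy sketch of that idea, not the authors' actual model: it frames a waveform, applies a single 1-D convolution with ReLU and max pooling, and maps the result through a sigmoid to a per-frame speech probability. All function names, frame sizes, and the kernel are illustrative assumptions.

```python
import numpy as np

def frame_signal(x, frame_len=160, hop=80):
    # Split a 1-D waveform into overlapping frames (rows of the result).
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def conv1d_valid(frame, kernel):
    # Cross-correlation of one frame with a learned kernel ('valid' mode),
    # i.e. a single 1-D convolutional filter without padding.
    return np.convolve(frame, kernel[::-1], mode="valid")

def vad_scores(x, kernel, bias=0.0):
    # Hypothetical forward pass: conv -> ReLU -> global max pool -> sigmoid.
    frames = frame_signal(x)
    feats = np.maximum(0.0, np.apply_along_axis(conv1d_valid, 1, frames, kernel))
    logits = feats.max(axis=1) + bias
    return 1.0 / (1.0 + np.exp(-logits))  # per-frame speech probability
```

With an untrained random kernel and zero bias, silent (all-zero) frames score exactly 0.5, while energetic frames score above 0.5; a trained kernel and a thresholded output would turn this into an actual speech/non-speech decision.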
dc.language.iso: fr [fr_FR]
dc.publisher: Blida 1 [fr_FR]
dc.subject: Convolutional Neural Networks, deep learning, database [fr_FR]
dc.title: Détection d’activité vocale basée sur l’apprentissage profond (Deep-learning-based voice activity detection) [fr_FR]
dc.type: Other [fr_FR]
Appears in Collections:Mémoires de Master

Files in This Item:
File: format mémoire ryma & Aya (1).pdf (2,17 MB, Adobe PDF)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.