Please use this address to cite this document:
https://di.univ-blida.dz/jspui/handle/123456789/25174
Full item record
Dublin Core field | Value | Language |
---|---|---|
dc.contributor.author | AMEUR, El Hachemi | - |
dc.contributor.author | HAOUI, Hamza | - |
dc.contributor.author | Hireche (Supervisor) | - |
dc.date.accessioned | 2023-10-03T13:36:27Z | - |
dc.date.available | 2023-10-03T13:36:27Z | - |
dc.date.issued | 2023-06-24 | - |
dc.identifier.uri | https://di.univ-blida.dz/jspui/handle/123456789/25174 | - |
dc.description | ill., Bibliogr. Cote: ma-004-939 | fr_FR |
dc.description.abstract | The goal of this master's thesis is to design, develop, and implement a comprehensive system that can effectively classify images based on their context. To achieve this objective, we employed two multimodal learning approaches, which enable us to capture and analyze long-term dependencies and contextual information more effectively. To demonstrate the performance of the proposed methods, experiments were conducted on a custom dataset. The evaluation of the chosen method yielded a classification accuracy of 80%. Keywords: Artificial intelligence, image classification, deep learning, contextual image classification, multimodal learning | fr_FR |
dc.language.iso | en | fr_FR |
dc.publisher | Université Blida 1 | fr_FR |
dc.subject | Artificial intelligence | fr_FR |
dc.subject | image classification | fr_FR |
dc.subject | deep learning | fr_FR |
dc.subject | contextual image classification | fr_FR |
dc.subject | multimodal learning | fr_FR |
dc.title | Application for contextual images classification | fr_FR |
dc.type | Thesis | fr_FR |
Collection(s): | Master's Theses |
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
Ameur El Hachemi et Haoui Hamza.pdf | | 17.07 MB | Adobe PDF | View/Open |
All documents in DSpace are protected by copyright, with all rights reserved.