Network and Information Technologies Doctoral Programme
10/10/2024

Author: Gereziher Weldegebriel Adhane
Programme: Doctoral Programme in Network and Information Technologies
Language: English
Supervision: Dr David Masip Rodó and Dr Mohammad Mahdi Dehshibi

Faculty / Institute: Doctoral School UOC
Subjects: Computer Science
Key words: explainable AI, transparency, model uncertainty, sample selection, visual explainability

Area of knowledge: Network and Information Technologies


Summary

In this work, we propose techniques to enhance the performance and transparency of convolutional neural networks (CNNs). We introduce novel methods for informative sample selection (ISS), uncertainty quantification, and visual explanation. The two ISS methods use reinforcement learning to filter out samples that could lead to overfitting and bias, and Monte Carlo dropout to estimate model uncertainty during training and inference. In addition, we present two visual explainability techniques: ADVISE, which generates detailed visual explanations and quantifies the relevance of feature-map units, and UniCAM, which explains the opaque nature of knowledge distillation. These methods aim to improve model accuracy, robustness, fairness, and explainability, contributing to both academic research and the transparency of CNNs in computer vision applications.
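To illustrate the Monte Carlo dropout idea mentioned above, the sketch below keeps dropout active at inference time and runs several stochastic forward passes, using the spread of the predictions as an uncertainty estimate. This is a minimal NumPy illustration of the general technique, not the thesis's actual implementation; the network, weights, and shapes are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fixed weights standing in for a trained network layer (illustrative only).
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout kept active (MC dropout)."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # sample a fresh dropout mask
    h = h * mask / (1.0 - p_drop)            # inverted-dropout scaling
    return h @ W2

x = rng.normal(size=(1, 4))
T = 200
samples = np.array([forward(x) for _ in range(T)])  # T stochastic passes

pred_mean = samples.mean(axis=0)  # predictive mean
pred_std = samples.std(axis=0)    # spread across passes ~ model uncertainty
print(float(pred_mean), float(pred_std))
```

Samples with a large `pred_std` are those the model is least certain about, which is the signal such an ISS criterion could use to weight or select training examples.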