Abstract of Dissertation and Ph.D. Defense - Minh Vu
Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy
EXPLAINING THE BOX: A GLIMPSE INSIDE DECISIONS OF MODERN AI
By
Minh Nhat Vu
September 2023
Chair: My T. Thai
Major: Computer and Information Science and Engineering
Recent years have seen the swift adoption of modern Artificial Intelligence (AI) models in real-world applications, especially in critical and sensitive domains. It has become increasingly important to explain the predictions generated by these complex models. This dissertation presents heuristic and theoretical findings centered on that emerging problem:
· Developing a metric and algorithm to evaluate local explanations. While many explanation methods for deep learning models have been introduced, evaluating them remains challenging. In response, we propose the c-Eval metric and its corresponding framework to quantify the correctness of local explanations. Given a prediction of a deep neural network and its explanation, c-Eval is the minimum-distortion perturbation that successfully alters the prediction while keeping the explanatory features unchanged (formalized in the first sketch after this list).
· Improving explanation methods with topological perturbations. We introduce a novel perturbation scheme that yields more faithful and robust explanations from perturbation-based explanation methods. In particular, the study focuses on how the direction of perturbation affects the data topology. We show that perturbing along directions orthogonal to the input manifold better preserves the data topology, both in a worst-case analysis via the discrete Gromov-Hausdorff distance and in an average-case analysis via persistent homology (see the second sketch after this list).
· Providing more in-depth explanations for deep neural networks. We propose NeuCEPT, a method that identifies critical neurons, i.e., the neurons important to the model's local predictions, and learns their underlying mechanisms. We also provide a theoretical analysis showing how to solve for critical neurons efficiently while keeping the precision under control (an illustrative proxy appears in the third sketch after this list).
· Explaining predictions of modern neural networks on graph data. Although many explanation methods have been developed for deep models operating on grid-like data, e.g., time series, text, and images, counterparts for graph data are lacking. In response, we introduce PGM-Explainer, a Probabilistic Graphical Model (PGM) explainer for Graph Neural Networks (GNNs). We theoretically show that the resulting explanation is guaranteed to include all statistical information regarding the target of the explanation (the fourth sketch after this list gives a simplified illustration).
· Investigating the limitations of the explanatory information generated by modern explainers. To circumvent the lack of internal information about the explained model, black-box explainers rely on the model's responses to perturbations of the input data. We theoretically show that this lack of internal access prevents perturbation-based methods from uncovering certain crucial information about the predictions generated by Temporal Graph Neural Networks (TGNNs), a class of modern AI models.
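Sketch 1. One way to formalize the c-Eval quantity described in the first bullet, directly from the stated definition (the notation f, x, E is ours, not necessarily the dissertation's):

\[
\text{c-Eval}(x, E) \;=\; \min_{x'} \,\lVert x' - x \rVert
\quad \text{s.t.} \quad
\arg\max_c f_c(x') \neq \arg\max_c f_c(x)
\;\;\text{and}\;\; x'_i = x_i \;\; \forall i \in E,
\]

where f is the network, x the explained input, and E the set of features the explanation marks as important. Intuitively, the larger the minimum distortion needed to flip the prediction without touching E, the more of the decision-relevant evidence the explanation has captured.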
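Sketch 2. A minimal sketch of the second bullet's idea of perturbing only along directions orthogonal to the input manifold. The local-PCA tangent estimation, the Gaussian noise model, and all function and parameter names are illustrative assumptions, not the dissertation's implementation:

    import numpy as np

    def orthogonal_perturbation(X, x, k=10, d=2, sigma=0.1, rng=None):
        """Perturb x only along directions orthogonal to the tangent space
        of the data manifold at x, where the tangent space is estimated by
        local PCA over the k nearest neighbors (d = assumed manifold dim)."""
        rng = np.random.default_rng() if rng is None else rng
        nbrs = X[np.argsort(np.linalg.norm(X - x, axis=1))[:k]]
        centered = nbrs - nbrs.mean(axis=0)
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        T = Vt[:d]                              # rows span the tangent space
        noise = rng.normal(scale=sigma, size=x.shape)
        noise_orth = noise - T.T @ (T @ noise)  # drop the tangential component
        return x + noise_orth

Per the abstract, the claim is that perturbations confined to these orthogonal directions distort the data topology less than arbitrary ones.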
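Sketch 3. The abstract does not spell out how NeuCEPT identifies critical neurons, so the following is only a generic ablation proxy for neuron-level importance to a local prediction, not NeuCEPT's actual criterion. It assumes a PyTorch classifier with a linear layer of interest; all names are hypothetical:

    import torch
    import torch.nn as nn

    @torch.no_grad()
    def ablation_scores(model, layer: nn.Linear, x):
        """Score each neuron of `layer` by the drop in the locally
        predicted class score when that neuron's activation is zeroed.
        A crude proxy for 'critical' neurons, not NeuCEPT's method."""
        x = x.unsqueeze(0)
        ablate = {"i": None}

        def hook(_module, _inputs, out):
            if ablate["i"] is not None:
                out = out.clone()
                out[:, ablate["i"]] = 0.0   # zero out a single neuron
            return out

        handle = layer.register_forward_hook(hook)
        pred = model(x).argmax(dim=1).item()    # locally predicted class
        base = model(x)[0, pred].item()
        scores = []
        for i in range(layer.out_features):
            ablate["i"] = i
            scores.append(base - model(x)[0, pred].item())
        handle.remove()
        return scores  # larger drop => more critical for this prediction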
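Sketch 4. PGM-Explainer's full pipeline learns a probabilistic graphical model; as a hedged illustration of the perturbation-sampling idea behind it, the sketch below perturbs random subsets of nodes, records whether the target node's prediction flips, and keeps the nodes whose perturbation is statistically dependent on that event. The callable gnn_predict and all parameters are hypothetical:

    import numpy as np
    from scipy.stats import chi2_contingency

    def candidate_nodes(gnn_predict, features, target,
                        n_samples=500, p=0.5, alpha=0.05, rng=None):
        """Keep nodes whose perturbation is statistically dependent on a
        change in the target node's prediction; a simplified stand-in for
        the statistics a PGM explanation is built from."""
        rng = np.random.default_rng() if rng is None else rng
        n = features.shape[0]
        base = gnn_predict(features)[target]
        perturbed = np.zeros((n_samples, n), dtype=bool)
        changed = np.zeros(n_samples, dtype=bool)
        for s in range(n_samples):
            mask = rng.random(n) < p
            f = features.copy()
            f[mask] = features.mean(axis=0)   # replace features with the mean
            perturbed[s] = mask
            changed[s] = gnn_predict(f)[target] != base
        keep = []
        for v in range(n):
            table = np.array([[(perturbed[:, v] & changed).sum(),
                               (perturbed[:, v] & ~changed).sum()],
                              [(~perturbed[:, v] & changed).sum(),
                               (~perturbed[:, v] & ~changed).sum()]]) + 1  # smoothing
            if chi2_contingency(table)[1] < alpha:  # p-value of dependence test
                keep.append(v)
        return keep  # candidate explanatory nodes for the target's prediction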
ANNOUNCEMENT OF Ph.D. Defense

FROM: Computer and Information Science and Engineering Department
Date Posted: September 21, 2023

TO: The Members of the Supervisory Committee for:

First Name: Minh    Last Name: Vu    M.I.: Nhat
SUPERVISORY COMMITTEE MEMBERS

My Thai         Chair
Kejun Huang     Member
Kevin Butler    Member
Sandip Ray      External Member
1. Name of Examination: Ph.D. Defense
2. Degree Sought: Doctor of Philosophy
3. Area of Specialization:
4. Thesis/Dissertation Title: EXPLAINING THE BOX: A GLIMPSE INSIDE DECISIONS OF MODERN AI
5. Examination: Date, Time, Place: October 25, 2023, 9:00 a.m., Room CSE 404