Graph neural networks have proven to be a key tool for problems across many domains, such as chemistry, natural language processing, and social networks. Although the structure of their layers is simple, it is difficult to identify the patterns a graph neural network learns. Several works propose post-hoc methods to explain graph predictions, but few attempt to build models that are interpretable by design. Conversely, interpretable models are heavily investigated in image recognition. Given the similarities between the image and graph domains, we analyze the adaptability of prototype-based neural networks to graph and node classification. In particular, we investigate the use of two interpretable networks, ProtoPNet and TesNet, in the graph domain. We show that the adapted networks reach accuracy scores similar to or higher than those of their respective black-box models, and performance comparable to state-of-the-art self-explainable models. After showing how to extract ProtoPNet and TesNet explanations from graph neural networks, we further study how to obtain global and local explanations for the trained models. We then evaluate these explanations by comparing them with post-hoc approaches and self-explainable models. Our findings show that applying TesNet and ProtoPNet to the graph domain yields high-quality predictions while improving reliability and transparency.
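To make the architecture described above concrete, the following is a minimal sketch of a ProtoPNet-style prototype layer placed on top of a GNN encoder, assuming PyTorch and PyTorch Geometric. The class and parameter names (ProtoGNN, num_prototypes, hidden_dim) are illustrative assumptions, not the paper's actual implementation; the similarity function is the standard log-activation over squared distances used by ProtoPNet.

```python
# Hypothetical sketch: prototype layer over pooled GNN embeddings.
# Assumes PyTorch and PyTorch Geometric; names are illustrative.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool


class ProtoGNN(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_prototypes, num_classes):
        super().__init__()
        # GNN backbone producing node embeddings, pooled into graph embeddings.
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        # Learnable prototype vectors living in the same embedding space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, hidden_dim))
        # Linear readout mapping prototype similarities to class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        z = global_mean_pool(h, batch)  # one embedding per graph
        # ProtoPNet-style similarity: log((d^2 + 1) / (d^2 + eps)),
        # large when a graph embedding is close to a prototype.
        d2 = torch.cdist(z, self.prototypes) ** 2
        sims = torch.log((d2 + 1.0) / (d2 + 1e-4))
        return self.classifier(sims), sims
```

In ProtoPNet-style training, the prototypes are periodically projected onto their nearest training embeddings; the returned similarity scores then support the two explanation levels the abstract mentions: globally, each prototype corresponds to a representative training example, and locally, the scores indicate which prototypes drive an individual prediction.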
Publication details
2022, IEEE Transactions on Artificial Intelligence, pp. 1-11
Prototype-based Interpretable Graph Neural Networks (01a Journal article)
Ragno Alessio, La Rosa Biagio, Capobianco Roberto
Research group: Artificial Intelligence and Robotics