Abstract—Deep Learning achieves impressive performance on many real-world tasks. However, it is often applied as a black box, without a strong critical understanding of its properties. In this paper, we review current methodologies and techniques for improving the interpretability of Deep Learning from several research directions. Some works analyze the learning process, some emphasize interpretable network architectures, and others aim to design self-interpretable Deep Learning models. This article analyzes popular and advanced works in these fields and provides an outlook for Deep Learning researchers.
Index Terms—Interpretability, proxy model, salience map, separate representation method, multimodality.
Zhenlin Huang is with the Department of Artificial Intelligence, Central China Normal University, Wuhan, China (e-mail: huangzhenlin_666@163.com).
Fan Li is with the Department of Computer Science, Tongji University, Shanghai, China (e-mail: 1950670@tongji.edu.cn).
Zhanliang Wang is with the Department of Mathematics, New York University, New York, USA (corresponding author; e-mail: zw3342@nyu.edu).
Zhiyuan Wang is with the Department of Mathematics, Ohio State University, Columbus, USA (e-mail: wang.11193@osu.edu).
Cite: Zhenlin Huang, Fan Li, Zhanliang Wang, and Zhiyuan Wang, "Interpretability of Deep Learning," International Journal of Future Computer and Communication, vol. 11, no. 2, pp. 34-39, 2022.
Copyright © 2022 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.