From AI 1.0, AI 2.0, to XAI 3.0
2019-06-17

  Time: 10:00–11:00 a.m., Monday, June 24, 2019

  Venue: Conference Room 446, 4th Floor, ICT

  Speaker: Prof. Sun-Yuan Kung, Princeton University

  Abstract:

  Deep Learning (NN/AI 2.0) depends solely on Back-propagation (BP), a now-classic learning paradigm in which supervision is accessed exclusively via the external interfacing nodes (i.e. the input/output neurons). Hampered by BP's external learning paradigm, Deep Learning has been limited to learning the parameters of neural networks (NNs); finding an optimal structure is often left to trial and error. It is naturally desirable for the next generation of NN/AI technology to fully address the issue of simultaneously training both the parameters and the structure of NNs. To this end, we propose an internal learning paradigm, which represents a processing model for internal neurons' explainability, championed by DARPA's XAI (or AI 3.0). Practically, in order to evaluate and train hidden layers/nodes directly, we propose Explainable Neural Networks (XNN) based on an internal learning paradigm comprising (1) internal teacher labels (ITL) and (2) internal optimization metrics (IOM). Furthermore, combining external and internal learning leads to a joint parameter/structure design for Deep Learning/Compression. Pursuant to our simulation studies, the proposed XNN facilitates simultaneously trimming hidden nodes and raising accuracy. Moreover, it appears to outperform several prominent pruning methods based on the existing Deep Learning paradigms. Furthermore, it opens up a new research possibility of supporting inter-machine mutual learning.
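The ITL/IOM idea above can be illustrated with a minimal sketch (not the speaker's actual formulation): the class labels are reused as internal teacher labels, a Fisher-style discriminant ratio over each hidden node's activations serves as one possible internal optimization metric, and low-scoring nodes become pruning candidates. The function names, the specific ratio, and the `keep_ratio` parameter are illustrative assumptions.

```python
import numpy as np

def internal_node_scores(hidden_acts, labels):
    """Score each hidden node by a Fisher-style discriminant ratio:
    between-class variance of its activations over within-class variance.
    The class labels play the role of internal teacher labels (ITL);
    the ratio is one possible internal optimization metric (IOM)."""
    classes = np.unique(labels)
    overall_mean = hidden_acts.mean(axis=0)
    between = np.zeros(hidden_acts.shape[1])
    within = np.zeros(hidden_acts.shape[1])
    for c in classes:
        acts_c = hidden_acts[labels == c]
        mean_c = acts_c.mean(axis=0)
        between += len(acts_c) * (mean_c - overall_mean) ** 2
        within += ((acts_c - mean_c) ** 2).sum(axis=0)
    # Higher score = more class-discriminative hidden node.
    return between / (within + 1e-12)

def prune_mask(scores, keep_ratio=0.5):
    """Keep only the top fraction of hidden nodes by internal score."""
    k = max(1, int(len(scores) * keep_ratio))
    threshold = np.sort(scores)[-k]
    return scores >= threshold
```

Under this sketch, a hidden node whose activations separate the classes well receives a high score, so structure (which nodes to keep) can be evaluated directly from hidden activations rather than only through the external input/output interface.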

  About the Speaker:

  S.Y. Kung, Life Fellow of IEEE, is a Professor in the Department of Electrical Engineering at Princeton University. His research areas include machine learning, data mining, systematic design of (deep-learning) neural networks, statistical estimation, VLSI array processors, signal and multimedia information processing, and, most recently, compressive privacy. He was a founding member of several Technical Committees (TCs) of the IEEE Signal Processing Society. He was elected an IEEE Fellow in 1988 and served as a Member of the Board of Governors of the IEEE Signal Processing Society (1989-1991). He received the IEEE Signal Processing Society's Technical Achievement Award for his contributions to "parallel processing and neural network algorithms for signal processing" (1992); was a Distinguished Lecturer of the IEEE Signal Processing Society (1994); received the IEEE Signal Processing Society's Best Paper Award for his publication on principal component neural networks (1996); and received the IEEE Third Millennium Medal (2000). Since 1990, he has been the Editor-in-Chief of the Journal of VLSI Signal Processing Systems. He served as the first Associate Editor in the VLSI area (1984) and the first Associate Editor in neural networks (1991) for the IEEE Transactions on Signal Processing. He has authored or co-authored more than 500 technical publications and numerous textbooks, including "VLSI Array Processors" (Prentice-Hall, 1988); "Digital Neural Networks" (Prentice-Hall, 1993); "Principal Component Neural Networks" (John Wiley, 1996); "Biometric Authentication: A Machine Learning Approach" (Prentice-Hall, 2004); and "Kernel Methods and Machine Learning" (Cambridge University Press, 2014).

 