At our invitation, Dr. Zhe Li of the University of Iowa (USA) will visit our university on August 24, 2017, and give the following academic talk during his visit.
Title: SEP-Nets: Small and Effective Pattern Networks
Time: August 24, 2017, 14:30-16:30
Venue: Room 407, Science Building
Abstract: While going deeper has been witnessed to improve the performance of convolutional neural networks (CNNs), going smaller has received increasing attention recently due to its attractiveness for mobile/embedded applications. How to design a small network while retaining the performance of large and deep CNNs (e.g., Inception Nets, ResNets) remains an active and important topic. Although there are already intensive studies on compressing the size of CNNs, a considerable drop in performance is still a key concern in many designs. This paper addresses this concern with several new contributions. First, we propose a simple yet powerful method for compressing the size of deep CNNs based on parameter binarization. The striking difference from most previous work on parameter binarization/quantization lies in the different treatment of 1×1 convolutions and k×k convolutions (k>1): we only binarize k×k convolutions into binary patterns. The resulting networks are referred to as pattern networks. By doing this, we show that previous deep CNNs such as GoogLeNet and Inception-type Nets can be compressed dramatically with a marginal drop in performance. Second, in light of the different functionalities of 1×1 convolutions (data projection/transformation) and k×k convolutions (pattern extraction), we propose a new block structure, codenamed the pattern residual block, that adds transformed feature maps generated by 1×1 convolutions to the pattern feature maps generated by k×k convolutions; based on this block we design a small network with ~1 million parameters. Combined with our parameter binarization, we achieve better performance on ImageNet than similarly sized networks, including the recently released Google MobileNets.
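The abstract condenses two ideas: binarize only the k×k "pattern" convolutions while keeping 1×1 projections full-precision, and sum the two branches in a pattern residual block. The PyTorch-style sketch below is a minimal illustration under our own assumptions (module names, per-layer scaling factor, and placement of batch normalization are chosen for clarity), not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizedKxKConv(nn.Module):
    """k x k convolution whose weights are binarized to {-alpha, +alpha};
    alpha is taken as the mean absolute weight (an assumption for this sketch)."""
    def __init__(self, in_ch, out_ch, k=3, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, stride=stride,
                              padding=k // 2, bias=False)

    def forward(self, x):
        w = self.conv.weight
        alpha = w.abs().mean()          # per-layer scaling factor
        w_bin = alpha * torch.sign(w)   # binary "pattern" weights
        return F.conv2d(x, w_bin,
                        stride=self.conv.stride, padding=self.conv.padding)

class PatternResidualBlock(nn.Module):
    """Pattern residual block: feature maps from the (binarized) k x k pattern
    convolution are added to feature maps from a full-precision 1 x 1 projection."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.pattern = BinarizedKxKConv(in_ch, out_ch, k)
        self.project = nn.Conv2d(in_ch, out_ch, 1, bias=False)  # kept full-precision
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.pattern(x) + self.project(x)))

# Quick shape check of the sketch.
x = torch.randn(1, 64, 32, 32)
y = PatternResidualBlock(64, 128)(x)
print(y.shape)  # torch.Size([1, 128, 32, 32])
```

Because only the k×k kernels are reduced to sign patterns, most of the parameter savings come from the spatial convolutions, while the 1×1 projections preserve representational precision; the training-time gradient handling for the binarized weights is omitted here.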
Speaker Bio: Zhe Li is a fourth-year PhD student in the Computer Science Department at the University of Iowa. His research interests lie in designing and analyzing optimization algorithms for machine learning and deep neural networks, and their applications to computer vision. He obtained his bachelor's degree in computer science from Xi'an Jiaotong University in 2010 and his master's degree in computer science from South Dakota State University in 2013. He has published papers at NIPS, AAAI, IJCAI, and other conferences. In 2017, he received the University of Iowa Post-Comprehensive Research Award for his outstanding research. In addition to his academic experience, he completed three summer research internships, at the General Motors research lab, Yahoo Research, and Snap Research, in 2015, 2016, and 2017, respectively.