
2019-06-26

Associate Professor Yanzhi Wang of Northeastern University (USA) Visits the Department of Micro-Nano Electronics

On June 20, 2019, Associate Professor Yanzhi Wang of Northeastern University (USA) visited the Department of Micro-Nano Electronics at the department's invitation and gave an academic talk titled "5,000X model compression in DNNs; But, is it truly desirable?" in Conference Room 205 of the Micro-Nano Electronics Building. Faculty members, graduate students, and undergraduates of the department attended the talk and engaged in a lively discussion with the speaker.


5,000X model compression in DNNs; But, is it truly desirable?

Hardware implementation of deep neural networks (DNNs), with an emphasis on performance and energy efficiency, has been the focus of extensive ongoing investigation. When large DNNs are mapped to hardware as an inference engine, the resulting hardware suffers significant performance and energy overheads. To overcome this hurdle, we develop ADMM-NN, an algorithm-hardware co-optimization framework that greatly reduces DNN computation and storage requirements by incorporating the Alternating Direction Method of Multipliers (ADMM) and exploiting all sources of redundancy in DNNs. Our preliminary results show that ADMM-NN achieves the highest degree of model compression on representative DNNs. For example, we achieve 348X, 63X, 34X, and 17X weight reduction on LeNet-5, AlexNet, VGGNet, and ResNet-50, respectively, with (almost) no accuracy loss. Combining weight pruning and weight quantization, we achieve up to 4,438X weight data storage reduction while maintaining accuracy.
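For readers unfamiliar with the method, the sketch below illustrates the core idea behind ADMM-based weight pruning that the abstract refers to: the non-convex sparsity constraint is split off into an auxiliary variable Z, so training alternates between ordinary SGD on a smoothly regularized loss and a cheap closed-form projection. This is a minimal sketch assuming PyTorch, not the speaker's ADMM-NN implementation; the function names (project_sparse, admm_prune) and all hyperparameter values are illustrative assumptions, and quantization and hardware co-optimization are omitted.

```python
# Minimal sketch of ADMM-based weight pruning (assumed PyTorch setup;
# names and hyperparameters are illustrative, not from ADMM-NN itself).
import torch


def project_sparse(w, keep_ratio):
    """Euclidean projection onto tensors with ~keep_ratio * numel
    non-zeros: keep the largest-magnitude entries, zero the rest.
    (Ties at the threshold may keep slightly more than k entries.)"""
    flat = w.flatten().abs()
    k = max(1, int(keep_ratio * flat.numel()))
    # k-th largest magnitude == (N - k + 1)-th smallest.
    threshold = flat.kthvalue(flat.numel() - k + 1).values
    return torch.where(w.abs() >= threshold, w, torch.zeros_like(w))


def admm_prune(model, loss_fn, data_loader, keep_ratio=0.1,
               rho=1e-3, admm_steps=10, sgd_steps=100, lr=1e-2):
    # Auxiliary variable Z (sparsity-constrained copy of W) and
    # scaled dual U, one per prunable weight tensor (skip biases).
    params = [p for p in model.parameters() if p.dim() > 1]
    Z = [project_sparse(p.detach().clone(), keep_ratio) for p in params]
    U = [torch.zeros_like(p) for p in params]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    data_iter = iter(data_loader)
    for _ in range(admm_steps):
        # W-step: minimize loss + (rho/2) * ||W - Z + U||^2 by SGD.
        for _ in range(sgd_steps):
            try:
                x, y = next(data_iter)
            except StopIteration:
                data_iter = iter(data_loader)
                x, y = next(data_iter)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            for p, z, u in zip(params, Z, U):
                loss = loss + (rho / 2) * (p - z + u).pow(2).sum()
            loss.backward()
            opt.step()
        # Z-step: project W + U onto the sparsity set (closed form).
        # U-step: scaled dual ascent.
        with torch.no_grad():
            for i, p in enumerate(params):
                Z[i] = project_sparse(p + U[i], keep_ratio)
                U[i] = U[i] + p - Z[i]
    # Final hard prune: fix each weight tensor to its projection.
    with torch.no_grad():
        for p in params:
            p.copy_(project_sparse(p, keep_ratio))
    return model
```

The design point this illustrates is why ADMM helps: the combinatorial "at most k non-zeros" constraint never enters the gradient step. SGD only sees a quadratic regularizer pulling W toward Z, while enforcing sparsity reduces to a magnitude-based projection, and the dual variable U reconciles the two over successive ADMM iterations.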