


    On December 20, 2018, from 9:00 to 17:30, the second session of the "Future Computing Forum" series was successfully held in Room 103 of the Micro/Nano Electronics Building at Peking University. The forum invited Prof. Themis Prodromakis of the University of Southampton (UK), Prof. Patrick Fay of the University of Notre Dame (USA), Prof. Dmitri B. Strukov of the University of California, Santa Barbara (USA), and Prof. Yanzhi Wang of Northeastern University (USA), who gave lectures entitled "Harnessing the power of the brain with metal-oxide memristors", "High-Performance III-N Devices and Integration Technologies for Advanced System Applications", "Alternative Computing with Memristors", and "Towards 1,000X model compression in Deep Neural Networks", respectively.



Lecture #1

Harnessing the power of the brain with metal-oxide memristors

Prof. Themis Prodromakis

University of Southampton, UK

In the not-so-far future, electronic devices will be everywhere – embedded within our physical world and even in our bodies – empowering modern societies with unprecedented capabilities. Yet the technological progress that brought us the mobile revolution is no longer sustainable enough to carry us to that point. Until now, the processing of data in electronics has relied on assemblies of vast numbers of transistors – microscopic switches that control the flow of electrical current by turning it on or off. Transistors have become smaller and smaller to meet the increasing demands of technology, but they have now reached their physical limit: the processing chips that power smartphones, for example, contain an average of five billion transistors that are only a few atoms wide.


A novel nano-electronic technology, known as the memristor, promises to hold the key to a new era in electronics: memristors are smaller and simpler in form than transistors, consume little energy, and can retain data by 'remembering' the amount of charge that has passed through them – akin to the behaviour of synaptic connections in the human brain. In his lecture, Themis Prodromakis will present examples of how memristive technologies can be exploited in practical applications ranging from neuromorphic systems to charge-based computing, and even in enabling bioelectronic medicines.
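The charge-'remembering' behaviour described above can be illustrated with the classic linear ion-drift memristor model. The sketch below is a minimal simulation; the device parameters (R_ON, R_OFF, D, MU_V) are illustrative assumptions, not values fitted to any real device:

```python
import numpy as np

# Illustrative parameters (assumed, not from a measured device).
R_ON, R_OFF = 100.0, 16e3   # bounding resistances (ohms)
D = 10e-9                   # device thickness (m)
MU_V = 1e-14                # dopant mobility (m^2 s^-1 V^-1)

def simulate(voltage, dt, w0=0.1 * D):
    """Linear ion-drift memristor: state w is the doped-region width."""
    w, current = w0, []
    for v in voltage:
        r = R_ON * (w / D) + R_OFF * (1 - w / D)   # doped/undoped regions in series
        i = v / r
        w += MU_V * (R_ON / D) * i * dt            # state drifts with passed charge
        w = min(max(w, 0.0), D)                    # clamp to physical bounds
        current.append(i)
    return np.array(current)

# A sinusoidal drive traces the characteristic pinched hysteresis loop:
# the resistance at any instant depends on the history of the current.
t = np.linspace(0, 2, 2000)
i = simulate(np.sin(2 * np.pi * t), dt=1e-3)
```

Because the state update depends on the integrated current, the same voltage produces different currents on the up- and down-sweeps, which is precisely the 'memory' exploited for synapse-like behaviour.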


Lecture #2

High-Performance III-N Devices and Integration Technologies for Advanced System Applications

Prof. Patrick Fay

University of Notre Dame

Gallium Nitride (GaN) and related III-N materials offer exceptional performance for RF, microwave, and mm-wave applications, as well as for power conversion and control. This stems from the combination of a large band gap with high critical electric field, mobility, and saturation velocity, together with spontaneous and piezoelectric polarization that enables high sheet carrier concentrations without extrinsic doping.  These features have led to remarkable device performance, ranging from ultra-scaled HEMTs with ft/fmax near 500 GHz to high-power, high-voltage rectifiers and transistors capable of handling well above 1500 V and 10 A using vertical device architectures on emerging bulk GaN substrates.  For these extremely promising device results to have maximum system-level impact, however, heterogeneous integration with Si-based electronics and compatibility with advanced packaging platforms are needed.  This talk will describe recent developments in high-performance GaN-based devices for RF through mm-wave applications as well as for power conversion and control, and will discuss novel processing techniques that enable these devices to be heterogeneously integrated with Si and advanced packages while retaining the unsurpassed performance possible with GaN.


Lecture #3

Alternative Computing with Memristors

Prof. Dmitri B. Strukov

UC Santa Barbara, USA

By now there have been many demonstrations of resistive switching ("memristive") devices based on organic, chalcogenide, transition-metal-oxide, and silicon materials, involving different physical switching mechanisms such as phase-change transitions and modulation of ionic concentration profiles by electrical and/or thermal forces.  The conductance of a properly engineered memristor can be continuously tuned with a relatively large electrical bias and retained effectively indefinitely under small applied stress. Such analog nonvolatile memory functionality, together with the extremely high device density achieved by lateral scaling and/or monolithic vertical integration, enables many exciting new applications of memristors in neuromorphic and other alternative types of computing.
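One concrete way such analog nonvolatile conductances are used for computing is the crossbar vector-matrix multiply: Ohm's law and Kirchhoff's current law perform all the multiply-accumulates in a single analog step. A minimal numerical sketch, with the conductance range and read voltages assumed for illustration:

```python
import numpy as np

# A memristor crossbar stores a matrix as conductances G (siemens).
# Applying a voltage vector V on the rows makes each column's summed
# current I = G^T V -- an analog dot product per column (KCL).
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # programmed conductances (assumed range)
V = np.array([0.1, 0.2, -0.1, 0.05])      # read voltages on the rows

I = G.T @ V   # column currents: one "analog" vector-matrix multiply
```

In hardware, the whole multiply happens in parallel in the array, which is why crossbars are attractive for the matrix-heavy workloads of neuromorphic computing.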

In my talk, I will review several such applications based on metal-oxide memristors that have been a recent focus of my group. I will start with stateful material implication logic, originally suggested by the HP Labs group. In our work, we showed that a 3D version of this logic makes it possible to address Feynman's grand challenge of implementing an 8-bit adder in a volume smaller than a 50-nm cube. I will then review our experimental work on memristor-based security primitives, in which we exploited device variations and nonlinear I-V characteristics to demonstrate functionality and physical performance superior to those of conventional approaches. Finally, I will discuss applications of memristors in neuromorphic computing, outlining some of my group's recent experimental work on mixed-signal firing-rate neural networks, which have the potential to greatly exceed conventional implementations in energy efficiency, speed, and density, and on spiking neural networks, which allow compact implementation of training algorithms in hardware.
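Stateful material implication (IMPLY) logic computes p → q directly in the resistance state of a memristor; together with a FALSE (reset) operation it is functionally complete. The sketch below models only the Boolean semantics, not the device physics, showing how NAND – and hence any circuit, such as the 8-bit adder of the Feynman challenge – can be composed from IMPLY and FALSE:

```python
def imply(p, q):
    """Material implication p -> q; in stateful logic the result
    overwrites the resistance state of the device holding q."""
    return (not p) or q

def nand(p, q):
    r = False            # FALSE operation: reset the working device
    r = imply(p, r)      # r = NOT p
    r = imply(q, r)      # r = (NOT q) OR (NOT p) = NAND(p, q)
    return r

# NAND is functionally complete, so any Boolean circuit can be
# compiled into a sequence of IMPLY/FALSE operations on the array.
table = {(p, q): nand(p, q) for p in (False, True) for q in (False, True)}
```

Two IMPLY steps plus one reset per NAND is what makes dense, in-memory logic – and its 3D extension – attractive for volume-constrained computing.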


Lecture #4

Towards 1,000X model compression in Deep Neural Networks

Prof. Yanzhi Wang

Northeastern University, USA

Hardware implementation of deep neural networks (DNNs), with an emphasis on performance and energy efficiency, has been the focus of extensive ongoing investigation. Many applications require large DNNs to achieve high-quality results. When such a large DNN is mapped to hardware as an inference engine, the resulting design suffers from expensive computations and frequent accesses to off-chip DRAM, which in turn cause significant performance and energy overheads. To overcome this hurdle, prior research has proposed model compression techniques for DNN inference engines along two separate lines: weight pruning and weight quantization. We start from this prior work, but make the additional observation that the sources of redundancy in DNNs go beyond the weight count and the bit-level representation of weights. There are two further sources of redundancy: redundancy in the number of intermediate results of a DNN, and redundancy in the bit-level representation of those intermediate results. We therefore set out to develop a rigorous and holistic optimization framework that achieves ultra-high DNN compression by exploiting redundancies in both the counts and the bit-level representations of weights and intermediate results.
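The two classic compression approaches named here, weight pruning and weight quantization, can be sketched in a few lines; the keep ratio and bit width below are illustrative assumptions, not values from the talk:

```python
import numpy as np

def prune_by_magnitude(w, keep_ratio):
    """Unstructured pruning: zero out all but the largest-magnitude weights."""
    k = max(1, int(keep_ratio * w.size))
    thresh = np.sort(np.abs(w), axis=None)[-k]   # k-th largest magnitude
    return np.where(np.abs(w) >= thresh, w, 0.0)

def quantize_uniform(w, bits):
    """Uniform symmetric quantization to 2**bits signed levels."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale if scale > 0 else w

rng = np.random.default_rng(1)
w = rng.normal(size=(64, 64))                    # stand-in for one layer's weights
w_c = quantize_uniform(prune_by_magnitude(w, keep_ratio=0.10), bits=4)

# Rough compression: 10x from pruning times 8x from 32-bit -> 4-bit,
# ignoring the index overhead of storing the sparse pattern.
sparsity = np.mean(w_c == 0)
```

Applying the two independently multiplies their gains, which is why joint frameworks that optimize pruning and quantization together can reach much higher ratios than either alone.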

More precisely, we develop ADMM-NN, an algorithm-hardware co-optimization framework that greatly reduces DNN computation and storage requirements by incorporating the Alternating Direction Method of Multipliers (ADMM) and exploiting all sources of redundancy in a DNN. ADMM-NN includes (i) a framework for joint pruning and quantization of DNN weights, and (ii) a unified framework for exploiting all sources of redundancy in DNN inference engines, enabling joint pruning and quantization of both weights and intermediate results. Our preliminary results show that ADMM-NN achieves the highest degree of model compression on representative DNNs. For example, we achieve 167X, 32X, 34X, and 17X weight reduction on LeNet-5, AlexNet, VGGNet, and ResNet-50, respectively, with (almost) no accuracy loss, and a maximum of 1,910X reduction in weight data storage when combining weight pruning and weight quantization, while maintaining accuracy.
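The core ADMM pattern behind such frameworks is to alternate between a smooth, unconstrained subproblem and a Euclidean projection onto the combinatorial constraint set (e.g. a sparsity budget), with a dual variable tying the two together. The toy sketch below applies this pattern to a sparsity-constrained least-squares problem rather than to an actual DNN:

```python
import numpy as np

def project_topk(v, k):
    """Euclidean projection onto {x : ||x||_0 <= k}: keep the k largest entries."""
    z = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    z[idx] = v[idx]
    return z

def admm_sparse_lsq(A, b, k, rho=1.0, iters=50):
    """ADMM for min ||Ax - b||^2 subject to at most k nonzeros in x."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    lhs = A.T @ A + rho * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(lhs, A.T @ b + rho * (z - u))  # smooth subproblem
        z = project_topk(x + u, k)                         # constraint projection
        u = u + x - z                                      # dual update
    return z

# Noiseless toy recovery: a 3-sparse signal observed through a random matrix.
rng = np.random.default_rng(2)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10); x_true[[1, 4, 7]] = [1.0, -2.0, 0.5]
x_hat = admm_sparse_lsq(A, A @ x_true, k=3)
```

In ADMM-NN-style weight compression, the smooth subproblem would be ordinary gradient training of the network and the projection would enforce the per-layer pruning or quantization constraint; the toy above keeps only the alternating structure.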