Date(s) - 16 Feb 2022
9:50 AM - 11:30 AM
3043 ECpE Building Addition
Speaker: Cheng Wang, Research Scientist at the Center for Brain-inspired Computing Enabling Autonomous Intelligence (C-BRIC) at Purdue University
Title: Enabling Efficient Machine Learning with Device-to-Algorithm Co-Design: Opportunities and Challenges
Abstract: Advances in machine learning (ML) and artificial intelligence (AI) on various cognitive tasks, including computer vision and natural language processing, have been accompanied by a surge in hardware development to meet the high computational requirements. However, implementations of neural network algorithms in conventional von Neumann architectures are still orders of magnitude less energy-efficient than the biological brain. Moreover, building energy-efficient ML hardware accelerators for edge applications faces further challenges due to constraints on power budget and on-chip memory. Hence, fundamentally new approaches are needed to sustain continuous growth in computer performance beyond the end of the CMOS technology roadmap. In order to achieve a better match between hardware primitives and computational models, exploring new paradigms of computing necessitates a multi-disciplinary endeavor across the stack, spanning devices, circuits, hardware architectures, and learning algorithms. Specifically, such a holistic endeavor involves the creation of nanoscale emerging devices that can efficiently mimic neuronal/synaptic operations in biological brains, the design of hardware architectures best suited for data-intensive ML models, and the exploration of novel learning algorithms inspired by bio-plausible principles. In this talk, I will discuss our recent exploration of emerging computing paradigms, such as analog in-memory computing and neuromorphic computing, in pursuit of robust and efficient ML hardware based on novel nano-electronic devices. First, I will present sparsity-aware device-circuit co-design of spin-orbit-torque magnetic random-access memory (SOT-MRAM) for robust ML inference acceleration based on crossbar in-memory computing. Significant energy improvement with near-software accuracy is demonstrated by leveraging robust crossbar arrays with low-precision analog-to-digital conversion.
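As background for the crossbar in-memory computing mentioned above, here is a minimal illustrative sketch (not from the talk; the function name and parameters are hypothetical) of how an analog crossbar performs a matrix-vector multiply — column currents sum input voltages weighted by cell conductances, and a low-precision ADC then quantizes the analog result:

```python
import numpy as np

def crossbar_mvm(weights, inputs, adc_bits=4):
    """Idealized analog crossbar matrix-vector multiply.

    Each column current is the dot product of the input voltages and the
    column's conductances (Ohm's and Kirchhoff's laws); a low-precision
    ADC then quantizes the accumulated analog current.
    """
    analog = inputs @ weights                 # analog accumulation along columns
    levels = 2 ** adc_bits - 1                # quantization levels of the ADC
    scale = float(np.max(np.abs(analog))) or 1.0
    return np.round(analog / scale * levels) / levels * scale

# Example: identity weights, 4-bit ADC -> output close to the input vector
out = crossbar_mvm(np.eye(3), np.array([1.0, 0.5, 0.25]), adc_bits=4)
```

Lowering `adc_bits` coarsens the quantization, which is the accuracy/energy trade-off the abstract alludes to.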
Second, I will introduce a spin-based device that can mimic a leaky integrate-and-fire spiking neuron with a compact footprint and high energy efficiency. Incorporating such neuronal devices into the training of a deep convolutional spiking neural network for image classification demonstrates improved robustness against various types of noise injection. I will conclude my talk with a brief discussion of potential opportunities and directions for future work.
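For readers unfamiliar with the leaky integrate-and-fire model named above, a minimal sketch of its dynamics follows (illustrative only, not the speaker's device model; the function name and constants are assumptions): the membrane potential leaks each time step, accumulates input, and emits a spike and resets when it crosses a threshold.

```python
def lif_neuron(input_currents, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron over a sequence of input currents.

    Returns a binary spike train: 1 when the membrane potential crosses
    the threshold (after which it resets to zero), else 0.
    """
    v = 0.0
    spikes = []
    for i in input_currents:
        v = leak * v + i          # leaky integration of input current
        if v >= threshold:
            spikes.append(1)      # fire
            v = 0.0               # reset membrane potential
        else:
            spikes.append(0)
    return spikes

# Example: a constant sub-threshold input fires only once charge accumulates
train = lif_neuron([0.6, 0.6, 0.6])  # → [0, 1, 0]
```

The spin-based device discussed in the talk realizes comparable threshold-and-reset behavior in hardware rather than in software.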
Bio: Cheng Wang is currently a Research Scientist at the Center for Brain-inspired Computing Enabling Autonomous Intelligence (C-BRIC) at Purdue University. Cheng received his B.S. degree in physics from Peking University in 2009 and completed his Ph.D. at The University of Texas at Austin in 2015, with a dissertation on exploring non-volatile spintronic and memristive emerging devices for efficient computing hardware. Prior to joining Purdue, Cheng worked as a Staff R&D Engineer at the Seagate Research Center (Fremont) from 2016 to 2019, developing high-density data storage hardware and magnetoelectronic memories, and received the Seagate FRC Technical Award in 2018. His current research interests include machine learning hardware acceleration and energy-efficient neuromorphic computing with emerging technologies such as non-volatile memories. Cheng has authored or co-authored more than 20 journal and conference publications and has served as a technical program committee member for the IEEE International Magnetic Conference.