Date: 21 Jan 2022
Time: 9:50 AM - 11:30 AM
Title: Elastic Processing and Hardware Architectures for Machine Learning
Abstract: Machine Learning (ML) models, especially Deep Neural Networks (DNNs), have been driving innovations in many application domains. These breakthroughs are powered by computational improvements in processor technology driven by Moore’s Law. However, the need for computational resources is insatiable when applying ML to large-scale real-world problems. Energy efficiency is another major concern of large-scale ML. The enormous energy consumption of ML models not only increases costs in data centers and decreases the battery life of mobile devices but also has a severe environmental impact. Entering the post-Moore’s Law era, sustaining performance and energy efficiency as ML continues to scale remains challenging. My approach to these performance and energy-efficiency challenges can be encapsulated in a few questions. Do we need all the computations and data movements involved in conventional ML processing? Does redundancy exist at the hardware level? How can we better approach large-scale ML problems with new computing paradigms? What are the implications for architectures with a greater focus on privacy, robustness, and security? In this talk, I will present how to exploit the elasticity in ML processing and hardware architectures: from the algorithm perspective, I propose redundancy-aware processing in DNN training and inference, as well as in large-scale classification problems and long-range Transformers; from the architecture perspective, I explore balanced, specialized, and flexible designs to improve efficiency.
Bio: Liu Liu is a Ph.D. candidate in the Department of Computer Science at UC Santa Barbara. His research interests lie at the intersection of computer architecture and machine learning, working toward high-performance, energy-efficient, and robust machine intelligence. He leads research on elastic processing and hardware architectures, with publications in top-tier conferences on machine learning (ICML/ICLR) and computer architecture (MICRO/ASPLOS). He is a recipient of the Peter J. Frenkel Fellowship from the Institute for Energy Efficiency at UCSB.