Date(s) - 20 Feb 2017
10:00 AM - 10:50 AM
3043 ECpE Building Addition
Title: Ultra-low-power Computing in the IoT Era
Abstract: Wearables, sensors, and the Internet of Things (IoT) arguably represent the next frontier of computing. By many accounts, we will have over 20 billion such devices in five years, representing $7 trillion in revenue. That corresponds to roughly three devices for every human and a value higher than the 2016 GDP of every country in the world except the US and China. Many of these systems will be characterized by extremely low power and area requirements. As an example, even with a state-of-the-art 90 mm^2 lithium-polymer battery, an IoT system is constrained to sub-milliwatt power budgets for a one-day lifetime. As another example, one square centimeter of solar cell can power systems only in the 100 microwatt to 100 milliwatt range. In our research, we ask: are there opportunities for power and area reduction that are unique to these emerging computing platforms? Stated differently, are there specific characteristics of these emerging systems that can be exploited to meet their power and area constraints? We answered the question in the affirmative and developed several techniques that appear to be very effective. In this talk, I will discuss one such technique, application-specific power management, that is applicable to a large class of IoT applications. Application-specific power management recognizes that cost constraints require many IoT systems to use general-purpose microprocessors and microcontrollers even in settings where only one or a small number of applications is targeted. We have developed a novel hardware-software co-analysis technique that enables novel power optimizations in these situations. I will discuss two optimizations in detail: determining application-specific peak power to reduce system size, weight, and cost; and bespoke processors, i.e., processors that consist of only those gates that can be exercised by the application, with all other gates eliminated.
Finally, we also studied aggressively reducing the voltage of on-chip memories, since the power of many of these IoT platforms may be dominated by memory. I will discuss one technique, correction prediction, that enables a 16-21% reduction in the energy consumed by on-chip memories.
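To make the battery constraint in the abstract concrete, the sub-milliwatt claim follows from a quick back-of-the-envelope calculation. The cell capacity and voltage below are illustrative assumptions (the talk only specifies the ~90 mm^2 footprint), chosen in the plausible range for a cell that small:

```python
# Back-of-the-envelope power budget for a coin-sized IoT node.
# Assumed values (not from the talk): a ~90 mm^2 lithium-polymer
# cell holding roughly 5 mAh at a nominal 3.7 V.
capacity_mah = 5.0   # assumed cell capacity, mAh
voltage_v = 3.7      # nominal Li-poly cell voltage
lifetime_h = 24.0    # one-day lifetime target

energy_mwh = capacity_mah * voltage_v   # ~18.5 mWh of stored energy
avg_power_mw = energy_mwh / lifetime_h  # average power the system may draw

print(f"Average power budget: {avg_power_mw:.2f} mW")
# prints "Average power budget: 0.77 mW"
```

Even under these generous assumptions, the average budget lands well below one milliwatt, which is the constraint the abstract cites.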
Bio: Henry Duwe received his B.S. degree in computer engineering and computer science from the University of Wisconsin–Madison and his M.S. in electrical and computer engineering (ECE) from the University of Illinois at Urbana-Champaign (UIUC). He is currently finishing a Ph.D. in electrical and computer engineering at UIUC. His research interests include computer architecture and the design of ultra-low-power computer systems suitable for Internet of Things applications. Henry's research has been recognized with two Best Paper in Session Awards at SRC TECHCON (2014 and 2015), a Best of SELSE award (2016), and a Best of IEEE Computer Architecture Letters selection (2013). His other honors include the Mavis Future Faculty Fellowship and the E.A. Reid Fellowship Award. When not working on saving IoT power, Henry is busy playing with his active one-year-old daughter, Adelaide.