ML@GT Seminar Series | Techniques for TinyML: from Some Classical Pruning Methods to a New Neuron Paradigm for DNNs

Featuring Gianluca Setti, King Abdullah University of Science and Technology

Abstract: The growing interest in Internet of Things (IoT) and mobile Artificial Intelligence applications is driving the investigation of Deep Neural Networks (DNNs) that can operate at the edge by running on low-resource, low-energy platforms. This has led to the development of a plethora of machine learning techniques at the hardware, algorithm, and software levels capable of performing on-device sensor data analytics at extremely low power (typically in the mW range and below), which broadly defines the field of Tiny Machine Learning (TinyML). TinyML offers several notable advantages: low latency, since algorithms run on edge devices, reducing the need for data transfer to the cloud; minimal bandwidth requirements, as little to no connectivity is necessary for inference; and enhanced data privacy, as data is not stored on external servers but processed directly at the edge.

To ensure that the DNNs resulting from the training phase fit low-resource platforms, several pruning techniques have been proposed in the literature. They aim to reduce the number of interconnections (and consequently the size and the corresponding computing and storage requirements) of a DNN relying on classic Multiply-and-ACcumulate (MAC) neurons. In this talk, we first review some pruning techniques, highlighting their pros and cons. Then, we introduce a novel neuron structure based on a Multiply-And-Max/min (MAM) map-reduce paradigm and show that, by exploiting this new paradigm, it is possible to build naturally and aggressively prunable DNN layers with a negligible loss in performance. In fact, this novel structure allows greater interconnection sparsity compared to classic MAC-based DNN layers. Moreover, most existing state-of-the-art pruning techniques can be used with MAM layers with little to no changes. We present results for AlexNet, VGG16, and ViT-B/16 on CIFAR-10, CIFAR-100, and ImageNet-1K, and show the clear advantages attainable when MAM neurons are exploited.
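For illustration only, the minimal sketch below contrasts the two reduce steps on a single neuron. It is a reading of the abstract under stated assumptions, not the speaker's implementation: it takes the MAM reduce to be the sum of the maximum and minimum products (as the "Max/min" naming suggests), and the helper names mac_neuron and mam_neuron are hypothetical.

    import numpy as np

    def mac_neuron(x, w, b=0.0):
        # Classic Multiply-and-ACcumulate: map = elementwise products,
        # reduce = sum over all products.
        return np.sum(w * x) + b

    def mam_neuron(x, w, b=0.0):
        # Assumed Multiply-And-Max/min reduce: same map step, but only the
        # two extreme products survive (max + min). Connections that never
        # win the max or min contribute nothing to the output.
        p = w * x
        return np.max(p) + np.min(p) + b

    rng = np.random.default_rng(0)
    x, w = rng.standard_normal(8), rng.standard_normal(8)
    print(f"MAC output: {mac_neuron(x, w):+.3f}")
    print(f"MAM output: {mam_neuron(x, w):+.3f}")

Intuitively, because only the winning connections shape a MAM neuron's output, most weights can be zeroed after training; this is the intuition behind the natural prunability claimed in the abstract.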

Bio: Gianluca Setti received a Dr. Eng. degree (with honors) and a Ph.D. degree in Electronic Engineering from the University of Bologna in 1992 and 1997, respectively. From 1997 to 2017 he was with the Department of Engineering, University of Ferrara, Italy, as an Assistant Professor (1998-2000), Associate Professor (2001-2008), and Professor (2009-2017) of Circuit Theory and Analog Electronics. From 2017 to 2022, he was Professor of Electronics, Signal and Data Processing at the Department of Electronics and Telecommunications (DET) of Politecnico di Torino, Italy. He is currently Dean of the Computer, Electrical, Mathematical Science and Engineering (CEMSE) Division at KAUST, Saudi Arabia, where he is also a Professor of Electrical and Computer Engineering.

Dr. Setti has held various visiting positions, most recently at the University of Washington, at IBM T. J. Watson Laboratories, and at EPFL (Lausanne).

His research interests include nonlinear circuits, recurrent neural networks, statistical signal processing, electromagnetic compatibility, compressive sensing, biomedical circuits and systems, power electronics, design and implementation of IoT nodes, and circuits and systems for machine learning.

He is the recipient of numerous awards, including the 2004 IEEE Circuits and Systems (CAS) Society Darlington Award, the 2013 IEEE CASS Guillemin-Cauer Award, the 2013 IEEE CASS Meritorious Service Award, and the 2019 IEEE Transactions on Biomedical Circuits and Systems Best Paper Award.

He was a Distinguished Lecturer of the IEEE CAS Society (2004-2005 and 2013-2014) and a member of the CASS Board of Governors (2005-2008), and served as the 2010 CAS Society President as well as the 2018 General Chair of the IEEE International Symposium on Circuits and Systems (ISCAS) in Florence, Italy.

Event Details

Date/Time:

  • Wednesday, February 14, 2024
    12:00 pm - 1:00 pm
Location: CODA 9th Floor Atrium

For More Information Contact

Shelli Hatcher, Program and Operations Manager
