EdgeAI refers to running AI applications directly on edge devices, thereby minimizing or even eliminating reliance on the cloud. Given its potential to enable new opportunities for IoT applications (e.g., image classification, object detection, autonomous driving, and language processing), edge/IoT computing is currently one of the most active research areas. Our research focuses primarily on developing new energy-aware machine learning techniques and hardware prototypes that leverage network and system characteristics to enable edge/IoT computing.
Selected Publications
SUGAR: Efficient Subgraph-level Training via Resource-aware Graph Partitioning Journal Article
In: IEEE Transactions on Computers, 2023, ISSN: 0018-9340.
MobileViG: Graph-Based Sparse Attention for Mobile Vision Applications Conference
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023.
Proceedings of the 8th ACM/IEEE Conference on Internet of Things Design and Implementation (IoTDI), 2023.
Efficient On-device Training via Gradient Filtering Conference
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
ZiCo: Zero-shot NAS via Inverse Coefficient of Variation on Gradients Conference
International Conference on Learning Representations (ICLR), 2023.
Internet of Things
The Internet of Things (IoT) represents a paradigm shift from the traditional Internet and cloud computing to a new reality where all “things” are connected to the Internet. Indeed, the number of connected IoT devices has been estimated to reach one trillion by 2035. Such explosive growth necessitates new breakthroughs in AI research that can help efficiently deploy intelligence at the edge. Given that IoT devices are extremely resource-constrained (e.g., small memory, low operating frequencies for energy efficiency), we focus primarily on the challenges of enabling deep learning models at the edge.
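To make this constraint concrete, the short sketch below (using PyTorch and torchvision; the 512 KB SRAM budget is a hypothetical microcontroller-class figure, not a measurement from our work) checks whether even a compact mobile network's weights alone fit on such a device:

```python
import torch
from torchvision import models

# Hypothetical budget: a microcontroller-class IoT device with 512 KB of SRAM.
DEVICE_MEMORY_BYTES = 512 * 1024

model = models.mobilenet_v2(weights=None)  # already a "mobile-friendly" network

# Count fp32 parameter memory alone (activations and buffers add even more).
param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

print(f"MobileNetV2 weights: {param_bytes / 2**20:.1f} MiB")        # ~13.4 MiB
print(f"Fits in 512 KB SRAM: {param_bytes <= DEVICE_MEMORY_BYTES}")  # False
```

Even this "mobile" model exceeds the budget by more than an order of magnitude, which motivates the compression and distributed-inference work described next.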
Model Compression & Optimization
Model compression has emerged as an important research area for deploying deep learning models on IoT devices. However, compression alone is often not sufficient to fit a model within the memory of a single device; as a result, the model must be distributed across multiple devices. This leads to a distributed inference paradigm in which communication costs become another major bottleneck. To this end, we focus on knowledge distillation and teacher-student architectures for distributed model compression, as well as on data-independent model compression.
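As one illustration of the teacher-student idea, a classic distillation loss in the style of Hinton et al. (2015) can be written as below. This is a minimal PyTorch sketch, not our specific method; the function name and hyperparameters are illustrative:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Blend soft teacher targets with the hard-label cross-entropy loss."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    # The KL term is scaled by T^2 so its gradients stay comparable in
    # magnitude to the cross-entropy term.
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# In a training step, the large teacher runs inference only, while the
# small on-device student is the model being optimized:
#   with torch.no_grad():
#       teacher_logits = teacher(x)
#   loss = distillation_loss(student(x), teacher_logits, y)
```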
Federated Learning
Large amounts of data are generated nowadays on edge devices such as phones, tablets, and wearables. However, since the data on such personal devices is highly sensitive, training ML models by sending users’ local data to a centralized server involves significant privacy risks. Hence, to enable intelligence for these privacy-critical applications, Federated Learning (FL) has become the de facto paradigm for training ML models on local devices without sending data to the cloud. Our research in this direction focuses on developing new FL approaches that exploit data and device heterogeneity.
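For concreteness, here is a minimal sketch of the standard FedAvg baseline (McMahan et al., 2017), which our heterogeneity-aware approaches build upon. It assumes PyTorch; the function and variable names are hypothetical, and real deployments add client sampling, secure aggregation, and compression of the exchanged updates:

```python
import copy
import torch
import torch.nn.functional as F

def federated_averaging(global_model, client_loaders,
                        rounds=10, local_epochs=1, lr=0.01):
    """Minimal FedAvg: clients train on their private data locally, and
    only the resulting weights (never the raw data) are aggregated."""
    for _ in range(rounds):
        states, sizes = [], []
        for loader in client_loaders:
            local = copy.deepcopy(global_model)  # start from the global weights
            opt = torch.optim.SGD(local.parameters(), lr=lr)
            local.train()
            for _ in range(local_epochs):
                for x, y in loader:
                    opt.zero_grad()
                    F.cross_entropy(local(x), y).backward()
                    opt.step()
            states.append(local.state_dict())
            sizes.append(len(loader.dataset))
        # Average weighted by client dataset size: a simple first step
        # toward accounting for data heterogeneity across devices.
        total = sum(sizes)
        avg = {k: sum(s[k].float() * (n / total) for s, n in zip(states, sizes))
               for k in states[0]}
        global_model.load_state_dict(avg)
    return global_model
```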