EdgeAI refers to running AI applications directly on edge devices, thereby minimizing or even eliminating reliance on the cloud. Given its potential to enable new IoT applications (e.g., image classification, object detection, autonomous driving, and language processing), edge/IoT computing is currently among the most active research areas. Our research focuses primarily on developing energy-aware machine learning techniques and hardware prototypes that leverage network and system characteristics to enable edge/IoT computing.
Selected Publications
Anytime Depth Estimation with Limited Sensing and Computation Capabilities on Mobile Devices Proceedings Article
In: The Conference on Robot Learning, 2021.
FLASH: Fast Neural Architecture Search with Hardware Optimization Journal Article
In: ACM Transactions on Embedded Computing Systems, vol. 20, no. 63, pp. 1-26, 2021.
FedMAX: Mitigating Activation Divergence for Accurate and Communication-Efficient Federated Learning Journal Article
In: The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2020.
A Hardware Prototype Targeting Distributed Deep Learning for On-Device Inference Proceedings Article
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 398–399, 2020.
Model Personalization for Human Activity Recognition Proceedings Article
In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), pp. 1–7, IEEE 2020.
Edge AI: Systems Design and ML for IoT Data Analytics Proceedings Article
In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 3565–3566, 2020.
Internet of Things
The Internet of Things (IoT) represents a paradigm shift from the traditional Internet and cloud computing to a new reality in which all “things” are connected to the Internet. Indeed, the number of connected IoT devices has been estimated to reach one trillion by 2035. Such explosive growth necessitates new breakthroughs in AI research to efficiently deploy intelligence at the edge. Because IoT devices are extremely resource-constrained (e.g., small memory, low operating frequencies for energy efficiency), we focus primarily on the challenges of enabling deep learning models at the edge.
Model Compression & Optimization
Model compression has emerged as an important research area for deploying deep learning models on IoT devices. However, compression alone is often insufficient to fit a model within the memory of a single device; as a result, the model must be distributed across multiple devices. This leads to a distributed inference paradigm in which communication costs become another major bottleneck. To this end, we focus on knowledge distillation and teacher–student architectures for distributed model compression, as well as data-independent model compression.
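To make the teacher–student idea concrete, the sketch below computes the classic Hinton-style distillation loss: the student is trained to match the teacher's temperature-softened output distribution. This is a minimal NumPy illustration of the general technique, not our actual compression pipeline; the temperature value and logit shapes are illustrative assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T produces a softer distribution
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between the teacher's softened outputs (soft targets)
    and the student's softened outputs, scaled by T^2 so gradients keep
    a comparable magnitude across temperatures."""
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl) * T * T)
```

In practice this term is combined with the ordinary cross-entropy on hard labels; minimizing it pushes a small student model toward the teacher's full output distribution, which carries more information than one-hot labels alone.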
Federated Learning
Vast amounts of data are now generated on edge devices such as phones, tablets, and wearables. However, since data on such personal devices is highly sensitive, training ML models by sending users' local data to a centralized server poses significant privacy risks. To enable intelligence for these privacy-critical applications, Federated Learning (FL) has become the de facto paradigm for training ML models on local devices without sending raw data to the cloud. Our research in this direction focuses on developing new FL approaches that exploit data and device heterogeneity.
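The canonical FL training loop (FedAvg) behind this paradigm can be sketched as follows: each client runs a few steps of local training on its private data, and the server averages the resulting models weighted by client data size. The linear least-squares model, learning rate, and client data here are hypothetical, chosen only to keep the round structure visible in a few lines.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    # One client's local gradient descent on a linear least-squares model;
    # the raw data (X, y) never leaves the device, only the weights do.
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    # clients: list of (X, y) pairs, one per device.
    # The server averages the returned weights, weighted by dataset size.
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))
```

Heterogeneity enters exactly here: when clients' data distributions differ, their local updates drift apart, and naive averaging degrades; approaches such as FedMAX modify the local objective to mitigate that divergence.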