EdgeAI refers to running AI applications directly on edge devices, thereby minimizing or even eliminating the need to rely on the cloud. Given its huge potential to enable new IoT applications (e.g., image classification, object detection, autonomous driving, and language processing), edge/IoT computing is currently one of the most active research areas. Our research focuses primarily on developing new energy-aware machine learning techniques and hardware prototypes that leverage network and system characteristics to enable edge/IoT computing.
Contact
Prof. Radu Marculescu
System Level Design Group
Electrical and Computer Engineering
The University of Texas at Austin
radum@utexas.edu
Join Us
We are actively looking for smart and passionate students like you!
Join the team and be at the forefront of machine learning, network science, and systems design.
Internet of Things
The Internet of Things (IoT) represents a paradigm shift from the traditional Internet and cloud computing to a new reality where all “things” are connected to the Internet. Indeed, it has been estimated that the number of connected IoT devices will reach one trillion by 2035. Such explosive growth necessitates new breakthroughs in AI research that can help deploy intelligence efficiently at the edge. Given that IoT devices are extremely resource-constrained (e.g., small memories and low operating frequencies for energy efficiency), we focus primarily on the challenges of enabling deep learning models at the edge.
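To make the memory constraint concrete, here is a minimal sketch that shrinks a model's stored weights using PyTorch's post-training dynamic quantization (float32 to int8). The toy two-layer network and its sizes are illustrative assumptions, not a model from our work.

```python
import io
import torch

# Hypothetical toy model standing in for a network destined for an edge device.
model = torch.nn.Sequential(
    torch.nn.Linear(784, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# Post-training dynamic quantization: replace Linear layers with int8-weight versions.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m: torch.nn.Module) -> float:
    # Serialize the state dict to an in-memory buffer to measure stored size.
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"fp32: {size_mb(model):.2f} MB, int8: {size_mb(quantized):.2f} MB")
```

On this toy network the int8 version is roughly 4x smaller, which illustrates why quantization is one standard first step before a model can fit on a memory-constrained IoT device.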
Model Compression & Optimization
Model compression has emerged as an important area of research for deploying deep learning models on IoT devices. However, compression alone is often insufficient to fit a model within the memory of a single device; consequently, the model must be distributed across multiple devices. This leads to a distributed inference paradigm in which communication costs become another major bottleneck. To this end, we focus on knowledge distillation and teacher-student architectures for distributed model compression, as well as on data-independent model compression.
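As a rough illustration of the teacher-student idea, the sketch below shows the standard knowledge-distillation loss (in the style of Hinton et al.), where a small student is trained to match the softened outputs of a larger teacher. The temperature and weighting values are illustrative defaults, not settings from our papers.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Knowledge-distillation objective: a weighted sum of the soft-target
    KL term (student mimics the teacher) and the usual cross-entropy on
    the hard labels. Values of `temperature` and `alpha` are illustrative."""
    # Soften both output distributions with the temperature, then match them.
    soft_teacher = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, log_target=True,
                  reduction="batchmean") * temperature ** 2
    # Hard-label cross-entropy keeps the student grounded in the original task.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

The temperature scaling exposes the teacher's "dark knowledge" (the relative probabilities it assigns to wrong classes), which is what lets a much smaller student recover most of the teacher's accuracy.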
Federated Learning
Vast amounts of data are now generated on edge devices such as phones, tablets, and wearables. However, since the data on such personal devices is highly sensitive, training ML models by sending users’ local data to a centralized server involves significant privacy risks. Hence, to enable intelligence for these privacy-critical applications, Federated Learning (FL) has become the de facto paradigm for training ML models on local devices without sending data to the cloud. Our research in this direction focuses on developing new FL approaches that exploit data and device heterogeneity.
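For intuition, here is a minimal sketch of one round of Federated Averaging (FedAvg), the baseline FL algorithm: each client trains a copy of the global model on its own data, and the server averages the resulting weights, so raw data never leaves the device. The function name, learning rate, and single-epoch schedule are assumptions for illustration; heterogeneity-aware FL methods modify this basic loop.

```python
import copy
import torch
import torch.nn.functional as F

def fedavg_round(global_model, client_loaders, local_epochs=1, lr=0.01):
    """One communication round of FedAvg. Assumes each loader yields
    (input, label) batches and the model has float parameters."""
    client_states, client_sizes = [], []
    for loader in client_loaders:
        # Each client starts from the current global model.
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        local.train()
        for _ in range(local_epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = F.cross_entropy(local(x), y)
                loss.backward()
                opt.step()
        # Only model weights (never raw data) are sent back to the server.
        client_states.append(local.state_dict())
        client_sizes.append(len(loader.dataset))
    # Server step: average client weights, weighted by local dataset size.
    total = sum(client_sizes)
    avg = {k: sum(s[k].float() * (n / total)
                  for s, n in zip(client_states, client_sizes))
           for k in client_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```

In practice clients hold non-IID data and run on devices with very different compute budgets, which is exactly the data and device heterogeneity that motivates going beyond this plain averaging step.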