System Level Design

Edge AI

Edge AI refers to running AI applications directly on edge devices, thus minimizing or even eliminating the need to rely on the cloud. Given its potential to enable new IoT applications (e.g., image classification, object detection, autonomous driving, and language processing), edge/IoT computing is currently one of the most active research areas. Our research focuses primarily on developing new energy-aware machine learning techniques and hardware prototypes that exploit network and system characteristics to enable edge/IoT computing.



Internet of Things

The Internet of Things (IoT) represents a paradigm shift from the traditional Internet and cloud computing to a new reality where all “things” are connected to the Internet. Indeed, the number of connected IoT devices has been estimated to reach one trillion by 2035. Such explosive growth necessitates new breakthroughs in AI research that can efficiently deploy intelligence at the edge. Given that IoT devices are extremely resource-constrained (e.g., small memories and low operating frequencies for energy efficiency), we focus primarily on the challenges of enabling deep-learning models at the edge.
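To make this resource gap concrete, the short Python sketch below compares the weight-storage footprint of a standard vision model against the memory budget of a microcontroller-class IoT device; the specific figures (ResNet-18 parameter count, MCU SRAM size) are approximate assumptions used purely for illustration.

# Back-of-the-envelope comparison of model size vs. IoT device memory.
# All numbers are rough assumptions for illustration only.

def param_memory_mb(num_params: int, bytes_per_param: int = 4) -> float:
    """Memory needed just to store the weights (default: 32-bit floats)."""
    return num_params * bytes_per_param / (1024 ** 2)

resnet18_params = 11_700_000   # ~11.7M parameters (approximate)
mcu_sram_mb = 0.5              # assumed SRAM budget of an MCU-class device

print(f"ResNet-18 weights, FP32: ~{param_memory_mb(resnet18_params):.1f} MB")
print(f"ResNet-18 weights, INT8: ~{param_memory_mb(resnet18_params, 1):.1f} MB")
print(f"Assumed MCU memory budget: {mcu_sram_mb} MB")
# Even an 8-bit model exceeds this budget by roughly 20x, which is why
# aggressive compression and/or distributing the model across devices
# becomes necessary.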


Model Compression & Optimization

Model compression has emerged as an important research area for deploying deep-learning models on IoT devices. However, compression alone is often not enough to fit a model within the memory of a single device, so the model must be distributed across multiple devices. This leads to a distributed inference paradigm in which communication costs become another major bottleneck. To address these challenges, we focus on knowledge distillation and teacher–student architectures for distributed model compression, as well as data-independent model compression.
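As a concrete illustration of the teacher–student idea, the PyTorch sketch below implements a standard knowledge-distillation loss (in the style of Hinton et al.), where the student matches the teacher's softened output distribution in addition to the ground-truth labels; this is a minimal generic example, not the exact objective used in our distributed or data-independent compression work.

# Minimal knowledge-distillation loss: the student learns from both the
# teacher's softened predictions and the hard labels. Generic illustration only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.7):
    # Soften both distributions with the temperature, then match them via KL.
    soft_teacher = F.log_softmax(teacher_logits / temperature, dim=1)
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    kd_term = F.kl_div(soft_student, soft_teacher, reduction="batchmean",
                       log_target=True) * (temperature ** 2)
    # Standard cross-entropy against the ground-truth labels.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1 - alpha) * ce_term

# Example usage with random tensors standing in for a real batch.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(f"distillation loss: {loss.item():.3f}")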


Federated Learning

Large amounts of data are now generated on edge devices such as phones, tablets, and wearables. Since the data on such personal devices is highly sensitive, training ML models by sending users’ local data to a centralized server involves significant privacy risks. To enable intelligence for these privacy-critical applications, Federated Learning (FL) has become the de facto paradigm for training ML models on local devices without sending raw data to the cloud. Our research in this direction focuses on developing new FL approaches that exploit data and device heterogeneity.
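For context, the sketch below shows the server-side aggregation step of standard Federated Averaging (FedAvg), where client updates are weighted by their local sample counts; this is a generic baseline shown for illustration, not our heterogeneity-aware FL approach.

# Sketch of the FedAvg server step: average client model parameters,
# weighted by how much local data each client holds. Baseline illustration only.
import torch

def fedavg_aggregate(client_states, client_num_samples):
    """client_states: list of model state_dicts with identical keys/shapes."""
    total = float(sum(client_num_samples))
    aggregated = {}
    for key in client_states[0]:
        aggregated[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_num_samples)
        )
    return aggregated

# Example with two tiny "clients" sharing the same architecture.
client_a = torch.nn.Linear(4, 2)
client_b = torch.nn.Linear(4, 2)
global_model = torch.nn.Linear(4, 2)
new_state = fedavg_aggregate(
    [client_a.state_dict(), client_b.state_dict()],
    client_num_samples=[100, 300],   # client_b's update counts 3x more
)
global_model.load_state_dict(new_state)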


Selected Publications


Farcas, Allen-Jasmin; Lee, Myungjin; Kompella, Ramana Rao; Latapie, Hugo; de Veciana, Gustavo; Marculescu, Radu

MOHAWK: Mobility and Heterogeneity-Aware Dynamic Community Selection for Hierarchical Federated Learning (Forthcoming)

In: International Conference on Internet-of-Things Design and Implementation (IoTDI), 2023.

Yang, Yuedong; Li, Guihong; Marculescu, Radu

Efficient On-device Training via Gradient Filtering

In: The IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
https://arxiv.org/pdf/2301.00330.pdf

Li, Guihong; Yang, Yuedong; Bhardwaj, Kartikeya; Marculescu, Radu

ZiCo: Zero-shot NAS via Inverse Coefficient of Variation on Gradients

In: International Conference on Learning Representations, 2023.
https://arxiv.org/pdf/2301.11300.pdf

Farcas, Allen-Jasmin; Chen, Xiaohan; Wang, Zhangyang; Marculescu, Radu

Model Elasticity for Hardware Heterogeneity in Federated Learning Systems

In: Proceedings of the 1st ACM Workshop on Data Privacy and Federated Learning Technologies for Mobile Edge Network (FedEdge), pp. 19-24, 2022.
https://dl.acm.org/doi/abs/10.1145/3556557.3557954

Krishnakumar, Anish; Ogras, Umit Y.; Marculescu, Radu; Kishinevsky, Michael; Mudge, Trevor

Domain-Specific Architectures (DSAs): Research Problems and Promising Approaches

In: ACM Transactions on Embedded Computing Systems (TECS), 2022.
https://dl.acm.org/doi/full/10.1145/3563946

Yang, Yuedong; Xue, Zihui; Marculescu, Radu

Anytime Depth Estimation with Limited Sensing and Computation Capabilities on Mobile Devices

In: The Conference on Robot Learning, 2021.
https://openreview.net/pdf?id=I6DLxqk9J0A



Contact

Prof. Radu Marculescu
System Level Design Group
Electrical and Computer Engineering
The University of Texas at Austin
radum@utexas.edu

Join Us

We are actively looking for smart and passionate students like you!
Join the team and be at the forefront of machine learning, network science, and systems design.

© Copyright System Level Design Group. Site by Academic Web Pages