System Level Design

Edge AI

Edge AI refers to the ability to run AI applications directly on edge devices, thereby minimizing or even eliminating reliance on the cloud. Given its potential to enable new opportunities for a wide range of IoT applications (e.g., image classification, object detection, autonomous driving, language processing), edge/IoT computing is currently one of the most active research areas. Our research focuses primarily on developing new energy-aware machine learning techniques and hardware prototypes that leverage network and system characteristics to enable edge/IoT computing.

Internet of Things

The Internet of Things (IoT) represents a paradigm shift from the traditional Internet and cloud computing to a new reality where all “things” are connected to the Internet. Indeed, it has been estimated that the number of connected IoT devices will reach one trillion by 2035. Such explosive growth in IoT devices necessitates new breakthroughs in AI research that can help efficiently deploy intelligence at the edge. Given that IoT devices are extremely resource-constrained (e.g., small memory, low operating frequencies for energy efficiency), we focus primarily on the challenges of enabling deep learning models at the edge.


Model Compression & Optimization

Model compression has emerged as an important area of research for deploying deep learning models on IoT devices. However, compression alone is often not sufficient to fit a model within the memory of a single device; as a result, models must be distributed across multiple devices. This leads to a distributed inference paradigm in which communication costs become another major bottleneck. To this end, we focus on knowledge distillation and teacher-student architectures for distributed model compression, as well as data-independent model compression.
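To illustrate the teacher-student idea mentioned above, here is a minimal sketch of the standard (Hinton-style) knowledge distillation loss in plain Python; the logits and temperature below are purely illustrative, not taken from our work:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between the softened teacher and student
    # distributions, scaled by T^2 (standard KD formulation).
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl

# Illustrative logits (hypothetical values):
teacher = [4.0, 1.0, 0.2]
student = [3.5, 1.2, 0.3]
loss = distillation_loss(student, teacher)
```

In practice this term is combined with the usual cross-entropy loss on the true labels, so the student learns from both the ground truth and the teacher's softened predictions.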


Federated Learning

Large amounts of data are generated nowadays on edge devices such as phones, tablets, and wearables. However, since data on such personal devices is highly sensitive, training ML models by sending users’ local data to a centralized server involves significant privacy risks. Hence, to enable intelligence for these privacy-critical applications, Federated Learning (FL) has become the de facto paradigm for training ML models on local devices without sending data to the cloud. Our research in this direction focuses on developing new FL approaches that exploit data and device heterogeneity.
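For intuition, the basic FL aggregation step can be sketched as FedAvg-style weighted averaging, shown here in plain Python with toy parameter vectors; this is the textbook baseline, not our heterogeneity-aware approach:

```python
def fedavg(client_weights, client_sizes):
    # Aggregate client model parameters as a weighted average,
    # with weights proportional to each client's local dataset size.
    # Only model updates leave the device; raw data never does.
    total = sum(client_sizes)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            agg[i] += (n / total) * w[i]
    return agg

# Toy example: two clients with 2-parameter models (hypothetical values).
clients = [[1.0, 1.0], [3.0, 3.0]]
sizes = [1, 3]
global_w = fedavg(clients, sizes)  # -> [2.5, 2.5]
```

Data and device heterogeneity complicate this picture: clients may hold non-IID data and differ widely in compute and connectivity, which is exactly where plain averaging falls short.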


Selected Publications

28 entries (page 1 of 5)

Farcas, Allen-Jasmin; Marculescu, Radu

Demo Abstract: Lightweight Training and Inference for Self-Supervised Depth Estimation on Edge Devices (Conference, Forthcoming)

Proceedings of the 23rd ACM Conference on Embedded Networked Sensor Systems (SenSys 2025), Forthcoming.

@conference{liti,
title = {Demo Abstract: Lightweight Training and Inference for Self-Supervised Depth Estimation on Edge Devices},
author = {Allen-Jasmin Farcas and Radu Marculescu},
year = {2025},
date = {2025-05-07},
urldate = {2025-05-07},
booktitle = {Proceedings of the 23rd ACM Conference on Embedded Networked Sensor Systems (SenSys 2025)},
keywords = {},
pubstate = {forthcoming},
tppubtype = {conference}
}

Munir, Mustafa; Rahman, Md Mostafijur; Marculescu, Radu

RapidNet: Multi-Level Dilated Convolution Based Mobile Backbone (Conference)

Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2025), 2025.

@conference{RapidNet_WACV_2025,
title = {RapidNet: Multi-Level Dilated Convolution Based Mobile Backbone},
author = {Mustafa Munir and Md Mostafijur Rahman and Radu Marculescu},
year = {2025},
date = {2025-03-03},
urldate = {2025-03-03},
booktitle = {2025 Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2025)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}

Munir, Mustafa; Zhang, Alex; Marculescu, Radu

Multi-Scale High-Resolution Logarithmic Grapher Module for Efficient Vision GNNs (Conference)

The Third Learning on Graphs Conference (LOG 2024), 2024.

@conference{LogViG_LOG_2024,
title = {Multi-Scale High-Resolution Logarithmic Grapher Module for Efficient Vision GNNs},
author = {Mustafa Munir and Alex Zhang and Radu Marculescu},
year = {2024},
date = {2024-11-26},
urldate = {2024-11-26},
booktitle = {The Third Learning on Graphs Conference (LOG 2024)},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}

Munir, Mustafa; Avery, William; Rahman, Md Mostafijur; Marculescu, Radu

GreedyViG: Dynamic Axial Graph Construction for Efficient Vision GNNs (Conference)

Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.

@conference{GreedyViG_CVPR_2024,
title = {GreedyViG: Dynamic Axial Graph Construction for Efficient Vision GNNs},
author = {Mustafa Munir and William Avery and Md Mostafijur Rahman and Radu Marculescu},
url = {https://openaccess.thecvf.com/content/CVPR2024/papers/Munir_GreedyViG_Dynamic_Axial_Graph_Construction_for_Efficient_Vision_GNNs_CVPR_2024_paper.pdf},
year = {2024},
date = {2024-06-19},
urldate = {2024-06-19},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
abstract = {Vision graph neural networks (ViG) offer a new avenue for exploration in computer vision. A major bottleneck in ViGs is the inefficient k-nearest neighbor (KNN) operation used for graph construction. To solve this issue, we propose a new method for designing ViGs, Dynamic Axial Graph Construction (DAGC), which is more efficient than KNN as it limits the number of considered graph connections made within an image. Additionally, we propose a novel CNN-GNN architecture, GreedyViG, which uses DAGC. Extensive experiments show that GreedyViG beats existing ViG, CNN, and ViT architectures in terms of accuracy, GMACs, and parameters on image classification, object detection, instance segmentation, and semantic segmentation tasks. Our smallest model, GreedyViG-S, achieves 81.1% top-1 accuracy on ImageNet-1K, 2.9% higher than Vision GNN and 2.2% higher than Vision HyperGraph Neural Network (ViHGNN), with less GMACs and a similar number of parameters. Our largest model, GreedyViG-B obtains 83.9% top-1 accuracy, 0.2% higher than Vision GNN, with a 66.6% decrease in parameters and a 69% decrease in GMACs. GreedyViG-B also obtains the same accuracy as ViHGNN with a 67.3% decrease in parameters and a 71.3% decrease in GMACs. Our work shows that hybrid CNN-GNN architectures not only provide a new avenue for designing efficient models, but that they can also exceed the performance of current state-of-the-art models.},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}


Avery, William; Munir, Mustafa; Marculescu, Radu

Scaling Graph Convolutions for Mobile Vision (Conference)

Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024.

@conference{MobileViGv2,
title = {Scaling Graph Convolutions for Mobile Vision},
author = {William Avery and Mustafa Munir and Radu Marculescu},
url = {https://openaccess.thecvf.com/content/CVPR2024W/MAI/papers/Avery_Scaling_Graph_Convolutions_for_Mobile_Vision_CVPRW_2024_paper.pdf},
year = {2024},
date = {2024-06-17},
urldate = {2024-06-17},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}

Farcas, Allen-Jasmin; Cooper, Geffen; Song, Hyun Joon; Mir, Afnan; Liew, Vincent; Tang, Chloe; Senthilkumar, Prithvi; Chen-Troester, Tiani; Marculescu, Radu

Demo Abstract: Online Training and Inference for On-Device Monocular Depth Estimation (Conference)

Proceedings of the 9th ACM/IEEE Conference on Internet of Things Design and Implementation, 2024.

@conference{nokey,
title = {Demo Abstract: Online Training and Inference for On-Device Monocular Depth Estimation},
author = {Allen-Jasmin Farcas and Geffen Cooper and Hyun Joon Song and Afnan Mir and Vincent Liew and Chloe Tang and Prithvi Senthilkumar and Tiani Chen-Troester and Radu Marculescu},
url = {https://ieeexplore.ieee.org/abstract/document/10562188},
year = {2024},
date = {2024-05-13},
booktitle = {Proceedings of the 9th ACM/IEEE Conference on Internet of Things Design and Implementation},
keywords = {},
pubstate = {published},
tppubtype = {conference}
}


Contact

Prof. Radu Marculescu
System Level Design Group
Electrical and Computer Engineering
The University of Texas at Austin
radum@utexas.edu

Join Us

We are actively looking for smart and passionate students like you!
Join the team and be at the forefront of machine learning, network science, and systems design.

© Copyright System Level Design Group. Site by Academic Web Pages