Model Compression & Optimization

Model compression has emerged as an important research area for deploying deep learning models on IoT devices. However, compression alone is often not sufficient to fit a model within the memory of a single device; the model must instead be distributed across multiple devices. This leads to a distributed inference paradigm in which communication cost becomes another major bottleneck. To this end, we focus on knowledge distillation and teacher-student architectures for distributed model compression, as well as on data-independent model compression.
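To make the teacher-student idea concrete, below is a minimal sketch of a knowledge distillation objective in plain Python, assuming the standard soft-target formulation (temperature-scaled softmax plus KL divergence, as introduced by Hinton et al.); the function and variable names are illustrative, not part of our published systems.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: higher T produces softer distributions,
    # exposing the teacher's "dark knowledge" about non-target classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between the teacher's soft targets and the student's
    # softened predictions, scaled by T^2 as in the standard formulation.
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student's softened predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (T ** 2) * kl
```

When the student matches the teacher exactly the loss is zero, and it grows as the student's distribution diverges from the teacher's; in a distributed setting, transmitting only these soft targets (rather than raw activations) is one way to reduce the communication cost noted above.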


Selected Publications

13 entries « 3 of 3 »

Sartor, Anderson Luiz; Becker, Pedro Henrique Exenberger; Wong, Stephan; Marculescu, Radu; Beck, Antonio Carlos Schneider: Machine Learning-Based Processor Adaptability Targeting Energy, Performance, and Reliability. In: Proceedings of the 2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), pp. 158–163, IEEE, 2019.

