Within this conference, benchmarking was addressed in the “Driving Next-Gen Edge AI Technologies” workshop, co-organized by the Chips JU EdgeAI, ANDANTE, REBECCA, CLEVER, and NEUROKIT2E projects and the Horizon Europe dAIEDGE project.
Accurately assessing the performance of Edge AI technologies in real-world conditions has become increasingly complex. The inherent heterogeneity of edge devices, varying workloads due to the diversity of applications, and the lack of standardized evaluation methodologies have given rise to a critical need for comprehensive benchmarking solutions.
This 3rd Workshop on Benchmarking for Edge AI was dedicated to advancing the field of smart embedded devices through the systematic evaluation and comparison of Edge AI technologies. This unique event confronted the challenge of heterogeneous system benchmarking by bringing together a diverse community of experts, researchers, and industry leaders to collectively address the complexities of assessing Edge AI.
The main objective was to identify the requirements and barriers hindering the benchmarking of Edge AI systems, in order to develop methodologies and tools that enable fair and reliable performance assessment across use cases and domains. A second objective was to define Edge AI benchmarking standards, built on key metrics, test scenarios, and evaluation methodologies, that the community needs to make informed choices about adopting Edge AI technologies and, ultimately, to accelerate the deployment of Edge AI in real-world applications.
Participants shared their insights, research findings, and best practices, fostering a collaborative environment that drives innovation in Edge AI.
Link to the workshop: https://www.hipeac.net/2024/munich/#/program/
Topics addressed during this workshop:
- Emerging edge AI technologies.
- Fundamental aspects of edge AI verification, validation, and testing.
- Challenges for edge AI benchmarking.
- Methodologies and tools for edge AI technologies.
- Software stack for benchmarking.
- Edge AI hardware platforms benchmarking.
- Neuromorphic systems benchmarking.
- Benchmarking of edge AI use-cases.
- Pre-normative and standardisation initiatives on edge AI benchmarking.
Session 1: Edge AI Concepts and Challenges
- Edge AI trends and engineering principles applied to micro, meta, and end-to-end AI system verification, validation, and testing. Ovidiu Vermesan / SINTEF
- The road to edge AI system benchmarking: from requirements and robust modelling for verification to system validation and testing. Mario Diaz Nava / STMicroelectronics
- Benchmarking neuromorphic computing systems. Andrea L. Dunbar / CSEM
Session 2: Metrics and Tools
- Advancing neuromorphic computing with NeuroBench. Bernhard Vogginger / TUD
- NeurIO: A Python library for deployment on edge devices. Simon Narduzzi / CSEM
- SENECA: Lessons learned from architecting and building a fully digital neuromorphic processor. Manolis Sifalakis / imec-NL
Session 3: Algorithms and Application
- Ultra-efficient on-device object detection on AI-integrated smart glasses with TinyissimoYOLO. Michele Magno / ETHZ
- Evaluating federated learning for malware detection at the edge. G. Xenos & D. Serpanos / CTI and University of Patras
- Self-powered vibro-acoustic micro-edge condition monitoring: journey and roadmap. Clemens Saur / Neurocontrols
Session 4: System-level Benchmarking
- Automated software-in-the-loop (SIL) and hardware-in-the-loop (HIL) benchmarking. Tim Llewellynn / Bonseyes
- Comparing implementations of a small CNN on commodity hardware. Frédéric Pétrot / Université Grenoble Alpes
- At-the-edge AI acceleration on FPGAs, from CNNs to SNNs. Paolo Meloni / Università degli Studi di Cagliari
- Accelerating intelligent threat detection at the edge: CPU, GPU, or FPGA? Abdelghani Bourenane / Scuola Superiore Sant’Anna
- REBECCA: Full-stack HW and SW for RISC-V with tightly coupled hardware accelerators on the edge. Ioannis Papaefstathiou / Exapsys
Link to the presentations: https://edge-ai-tech.eu/workshops/