Over the last few years there has been a significant shift from cloud-level to device-level AI processing. As a result, the ability to run AI/ML tasks on-device has become a must-have when selecting an SoC or MCU for IoT and IIoT applications.
Embedded devices are typically resource-constrained, which makes running AI algorithms on them difficult. This paper looks at what can make it easier, from both a software and a hardware point of view, and at how Codasip tools and IP can help.
This paper focuses on:
- How TensorFlow Lite for Microcontrollers (TFLite-Micro), as a dedicated AI framework, supports domain-specific optimization, aligning well with Codasip design tools.
- Examples based on the Codasip L31 processor core (which we announced in this press release) with both standard and custom extensions.
- The benefits of custom instructions for neural networks.