Over the last few years there has been an important shift from cloud-level to device-level AI processing. The ability to run AI/ML workloads has become a must-have when selecting an SoC or MCU for IoT and IIoT applications.
Embedded devices are typically resource-constrained, which makes running AI algorithms on them difficult. This paper looks at how software and hardware choices can make it easier, and how Codasip tools and IP help.
This paper focuses on:
- How TensorFlow Lite for Microcontrollers (TFLite-Micro), as a dedicated AI framework, supports domain-specific optimization that aligns closely with Codasip design tools (see the first sketch after this list).
- Examples based on the Codasip L31 processor core (which we announced in this press release) with both standard and custom extensions.
- The benefits of custom instructions for neural networks (see the second sketch after this list).
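
To give a taste of the software side, below is a minimal sketch of how a quantized model typically runs under TFLite-Micro on a small core such as the L31. The exact API varies between TFLM releases, and `g_model_data`, the operator list, and the arena size are placeholders chosen for illustration rather than values taken from the paper.

```cpp
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Placeholder: flatbuffer produced by the TFLite converter and linked into firmware.
extern const unsigned char g_model_data[];

// Static scratch memory for tensors; size it to the model at hand.
constexpr int kArenaSize = 16 * 1024;
alignas(16) static uint8_t tensor_arena[kArenaSize];

int run_inference(const int8_t* input_data, int input_len) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the operators the model actually uses to keep code size down.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddReshape();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return -1;
  }

  // Copy the quantized input into the model's input tensor.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < input_len; ++i) {
    input->data.int8[i] = input_data[i];
  }

  if (interpreter.Invoke() != kTfLiteOk) {
    return -1;
  }

  // Return the index of the highest-scoring class.
  TfLiteTensor* output = interpreter.output(0);
  const int num_classes = output->dims->data[output->dims->size - 1];
  int best = 0;
  for (int i = 1; i < num_classes; ++i) {
    if (output->data.int8[i] > output->data.int8[best]) {
      best = i;
    }
  }
  return best;
}
```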

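On the hardware side, the idea of custom instructions for neural networks can be illustrated with a hypothetical packed multiply-accumulate. The `mac4` name, the custom-0 encoding written with the GNU `.insn` directive, and the surrounding loop are illustrative assumptions, not the actual extensions described in the paper; with Codasip Studio such an instruction would typically be described in CodAL and then reached from C/C++ through the generated toolchain.

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical "mac4" custom instruction: multiplies four packed int8 pairs
// and returns the sum of the four products. Encoded here on the RISC-V
// custom-0 opcode purely for illustration.
static inline int32_t mac4(uint32_t a_packed, uint32_t b_packed) {
  int32_t sum;
  asm volatile(".insn r 0x0B, 0x0, 0x00, %0, %1, %2"
               : "=r"(sum)
               : "r"(a_packed), "r"(b_packed));
  return sum;
}

// int8 dot product: the inner loop of quantized convolution and
// fully-connected layers. Four MACs per custom instruction instead of
// four multiplies and four adds.
int32_t dot_product_int8(const int8_t* a, const int8_t* b, int n) {
  int32_t acc = 0;
  int i = 0;
  for (; i + 4 <= n; i += 4) {
    uint32_t a4, b4;
    std::memcpy(&a4, a + i, 4);  // pack four int8 values into one word
    std::memcpy(&b4, b + i, 4);
    acc += mac4(a4, b4);
  }
  for (; i < n; ++i) {           // scalar tail
    acc += static_cast<int32_t>(a[i]) * b[i];
  }
  return acc;
}
```

Because TFLite-Micro allows reference kernels to be swapped for optimized implementations, an instruction along these lines could plausibly be plugged into the inner loops of Conv2D and FullyConnected kernels; the paper itself quantifies the benefits on the L31.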