SPRADB0 May 2023 AM62A3, AM62A3-Q1, AM62A7, AM62A7-Q1, AM68A, AM69A

 

Table of Contents

  Abstract
  Trademarks
  1 Introduction
    1.1 Intended Audience
    1.2 Host Machine Information
  2 Creating the Dataset
    2.1 Collecting Images
    2.2 Labelling Images
    2.3 Augmenting the Dataset (Optional)
  3 Selecting a Model
  4 Training the Model
    4.1 Input Optimization (Optional)
  5 Compiling the Model
  6 Using the Model
  7 Building the End Application
    7.1 Optimizing the Application With TI’s Gstreamer Plugins
    7.2 Using a Raw MIPI-CSI2 Camera
  8 Summary
  9 References

Introduction

This document describes the tools and process for creating an Edge AI vision application for Texas Instruments AM6xA microprocessors. The example application uses the AM62A as an automated retail scanner, or checkout system, in which a customer’s tray of food items is quickly and automatically recognized by a deep learning neural network, accelerated by TI’s C7xMMA deep learning accelerator.

This application note follows the overall development flow shown in Figure 1-1, starting from a custom dataset and a pretrained model from the TI Model Zoo [1]. The neural network model is trained with TI-supported tools and compiled for the target device’s C7xMMA architecture. The compiled model is then used with GStreamer to develop an end-to-end media pipeline, including camera capture, preprocessing, deep learning inference, post-processing, and display to a monitor. The end application is written in Python3, and the open-source code is available on the Texas Instruments GitHub under the retail-shopping directory [2]. For instructions on how to run the demo, see the README.md.
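The capture-to-display pipeline described above can be sketched as a gst-launch-style string assembled in Python. This is a minimal illustration, not the demo’s actual pipeline: the TI element names (tiovxdlpreproc, tidlinferer, tidlpostproc) come from TI’s edgeai-gst-plugins, but the properties, camera device, and model path shown here are assumptions; consult the retail-shopping repository for the real pipeline.

```python
# Hypothetical sketch of the end-to-end GStreamer pipeline:
# capture -> preprocess -> inference -> post-process -> display.
# Element properties and paths are illustrative, not taken from the demo.

def build_pipeline(device="/dev/video2", model_dir="/opt/model_zoo/model"):
    """Assemble a gst-launch-style pipeline description string."""
    stages = [
        f"v4l2src device={device}",           # camera capture
        "videoconvert",                       # convert to a compatible pixel format
        f"tiovxdlpreproc model={model_dir}",  # TI plugin: scale/normalize input for the network
        f"tidlinferer model={model_dir}",     # TI plugin: C7xMMA-accelerated inference
        "tidlpostproc",                       # overlay detection results on the video
        "kmssink",                            # display to a monitor
    ]
    return " ! ".join(stages)

print(build_pipeline())
```

A string like this can be handed to gst-launch-1.0 on the target, or parsed programmatically from Python via GStreamer’s parse-launch API.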

This application was developed for AM62A, a quad-core microprocessor from Texas Instruments with 2 TOPS of deep learning acceleration. The model is compiled for this architecture and must be recompiled to run on another TI processor to take full advantage of the accelerator. To learn more about this compilation process, see the related documentation in TI's Edge AI repo [4] and edgeai-tidl-tools [5]. This application was taken to Embedded World 2023 as a demo to showcase the AM62A.
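As a rough sketch of what that recompilation involves, edgeai-tidl-tools compiles an ONNX model by running it through ONNX Runtime with the TIDL compilation provider. The option names below (artifacts_folder, tidl_tools_path, accuracy_level) follow the edgeai-tidl-tools examples, but the paths and values here are placeholders; the full set of compile options is model- and device-specific, so verify against the repository [5].

```python
# Sketch of TIDL model compilation via ONNX Runtime, based on the
# edgeai-tidl-tools flow. Paths and option values are illustrative.

def make_compile_options(artifacts_dir, tidl_tools_path):
    """Build the core options consumed by the TIDL compilation provider.
    Additional options (calibration frames, tensor bits, etc.) exist and
    are model/device specific."""
    return {
        "artifacts_folder": artifacts_dir,   # where compiled artifacts are written
        "tidl_tools_path": tidl_tools_path,  # per-device TIDL tools from edgeai-tidl-tools
        "accuracy_level": 1,                 # enable calibration during quantization
    }

def compile_model(model_path, options):
    """Create a compilation session; running inference on calibration data
    through this session performs offload analysis, quantization, and
    artifact generation for the C7xMMA."""
    # Imported lazily so the options helper is usable without onnxruntime installed.
    import onnxruntime as rt

    return rt.InferenceSession(
        model_path,
        providers=["TIDLCompilationProvider", "CPUExecutionProvider"],
        provider_options=[options, {}],
        sess_options=rt.SessionOptions(),
    )
```

The generated artifacts folder is then copied to the target, where the same model is loaded with the runtime (inference) provider instead of the compilation provider.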

At the time of implementation, TI’s Model Composer suite of software within the Edge AI Cloud [6] was not yet available for this processor. Those tools simplify neural network model development, covering the stages in Figure 1-1 from data capture through model selection, training, compilation, and evaluation (up through Model Evaluation). This document instead follows a more programmatic design flow, which offers greater flexibility that experienced developers may prefer.

The concept video for the demo is available on YouTube at the following link: https://www.youtube.com/watch?v=jYJvtoPAW6E.

Figure 1-1. Development Flow Diagram