SPRADB0 May 2023 AM62A3, AM62A3-Q1, AM62A7, AM62A7-Q1, AM68A, AM69A

 

  Abstract
  Trademarks
  1 Introduction
    1.1 Intended Audience
    1.2 Host Machine Information
  2 Creating the Dataset
    2.1 Collecting Images
    2.2 Labelling Images
    2.3 Augmenting the Dataset (Optional)
  3 Selecting a Model
  4 Training the Model
    4.1 Input Optimization (Optional)
  5 Compiling the Model
  6 Using the Model
  7 Building the End Application
    7.1 Optimizing the Application With TI’s Gstreamer Plugins
    7.2 Using a Raw MIPI-CSI2 Camera
  8 Summary
  9 References

6 Using the Model

The next step is to use the model in practice.

The accuracy reported during training helps gauge the effectiveness of the model, but visualizing its predictions on realistic input is crucial for confidence that the model performs as expected.

The fastest way to evaluate a model on new input on the target device is to run it within edgeai-gst-apps. This is a valuable proof of concept for gauging more practical accuracy without writing new code. Copy the new directory from “compiled-artifacts” onto the target device and modify a config file such as object_detection.yaml (shown in Figure 6-1) to point to this model directory. Ensure that the model is referenced in a flow at the bottom of the config file. The input can be either a live source such as a USB camera or a premade video or directory of image files.

Figure 6-1 Example of edgeai-gst-apps Config File
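As a sketch, a config file in the style of object_detection.yaml might look like the following. The overall shape (inputs, models, outputs, flows) follows the edgeai-gst-apps config schema, but the specific paths and values here are illustrative assumptions; check them against the example configs shipped with edgeai-gst-apps on the device.

```yaml
title: "Object Detection"
inputs:
    input0:
        source: /dev/video2        # assumed USB camera node; a video file or image directory also works
        format: jpeg
        width: 1280
        height: 720
        framerate: 30
models:
    model0:
        model_path: /opt/model_zoo/my-compiled-model   # illustrative: directory copied from compiled-artifacts
        viz_threshold: 0.6                             # confidence threshold for drawing detections
outputs:
    output0:
        sink: kmssink              # connected display; a file path saves the output video instead
        width: 1920
        height: 1080
flows:
    flow0: [input0, model0, output0]   # the model must appear in a flow to be used
```

Pointing model_path at the copied artifacts directory and listing the model in a flow is what causes the demo application to load and run the newly trained model.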

The output on a connected display or in a saved video file looks like Figure 6-2 (performance information may be overlaid, depending on how the output is configured):

Figure 6-2 Gstreamer Display When Using edgeai-gst-apps for a Newly Trained Model

An additional benefit of running the model this way is that the complete GStreamer pipeline string is printed to the terminal, which is a useful starting point for application development.
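For illustration only, the printed pipeline resembles the following. The TI-specific element names (such as tiovxmultiscaler and tidlinferer) come from TI's edgeai-gst-plugins, but the exact string depends on the device, model, and configured input and output, so treat this as a sketch rather than a pipeline to copy verbatim:

```shell
# Illustrative shape of the printed pipeline; element names, caps, and
# properties are elided or assumed, and differ per device and config.
gst-launch-1.0 v4l2src device=/dev/video2 ! jpegdec ! \
    tiovxmultiscaler ! ... ! \
    tidlinferer model=/opt/model_zoo/my-compiled-model ! \
    ... ! kmssink
```

Capturing this string and editing it incrementally is often faster than composing a TI-accelerated pipeline from scratch, which is why the next section builds the end application on top of it.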