SPRADC9 – July 2023 – AM62A3, AM62A7

 

Table of Contents

  Abstract
  Trademarks
  1 Introduction
    1.1 Defect Detection Demo Summary
    1.2 AM62A Processor
    1.3 Defect Detection Systems
    1.4 Conventional Machine Vision vs Deep Learning
  2 Data Set Preparation
    2.1 Test Samples
    2.2 Data Collection
    2.3 Data Annotation
    2.4 Data Augmentation
  3 Model Selection and Training
    3.1 Model Selection
    3.2 Model Training and Compilation
  4 Application Development
    4.1 System Flow
    4.2 Object Tracker
    4.3 Dashboard and Bounding Boxes Drawing
    4.4 Physical Demo Setup
  5 Performance Analysis
    5.1 System Accuracy
    5.2 Frame Rate
    5.3 Cores Utilization
    5.4 Power Consumption
  6 Summary
  7 References

1.4 Conventional Machine Vision vs Deep Learning

Defect detection in conventional machine vision uses rules-based algorithms. Such systems require the direct engagement of image-processing experts to define a set of rules and develop application-specific algorithms. These algorithms usually consist of multiple classical feature detectors followed by a series of conditional decisions. Examples of rules include the existence of a specific shape or a dimensional relation between certain features. Embedded system engineers then have to program these algorithms for the target system, a process that can take months of work. Deep learning models, on the other hand, can be trained with an appropriate dataset with no need to specify features or rules, and the trained models can be easily ported to the desired embedded system. TI provides a suite of tools to train, compile, and benchmark deep learning models, such as Edge AI Studio.
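To make the contrast concrete, the following is a minimal sketch of what a single rules-based inspection step might look like using OpenCV. The threshold value, expected contour area, and circularity limit are hypothetical constants, not values from this demo; they stand in for the hand-tuned rules that an image-processing expert has to derive for each product, camera, and lighting setup.

```python
import cv2
import numpy as np

# Hypothetical rules-based check: a part passes only if one round feature
# of an expected size is found. Every constant below is a hand-tuned rule.
MIN_AREA = 1500          # expected feature area in pixels (assumption)
MAX_AREA = 2500
MIN_CIRCULARITY = 0.85   # 1.0 is a perfect circle

def inspect(image_bgr: np.ndarray) -> bool:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Fixed global threshold: brittle against lighting changes
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        area = cv2.contourArea(contour)
        perimeter = cv2.arcLength(contour, closed=True)
        if perimeter == 0:
            continue
        circularity = 4 * np.pi * area / (perimeter ** 2)
        # Conditional decision chain: the feature must satisfy every rule
        if MIN_AREA <= area <= MAX_AREA and circularity >= MIN_CIRCULARITY:
            return True   # matching feature found: part passes
    return False          # no matching feature: flag as defective
```

Each new defect type, fixture change, or lighting change typically means revisiting rules like these, which is the main source of the longer development time noted in the comparison below.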

Table 1-1. A Comparison Between Conventional Rules-Based Machine Vision Systems and Deep Learning Using TI Edge AI

| Conventional Rules-Based Systems | Deep Learning Using TI Edge AI |
| --- | --- |
| Requires image-processing expertise | Models can be trained using Edge AI Studio with little to no previous deep learning experience |
| Requires hardware-specific algorithm programming expertise | Models can be directly imported to AM62A |
| Algorithms are application specific | TI EdgeAI-ModelZoo provides hundreds of models that can be easily retrained for different applications |
| Longer development time | Shorter development time |
| Usually requires a general-purpose processor | Models can be offloaded to the C7x/MMA deep learning accelerator |
| Requires fewer computation resources than deep learning | Requires more computation resources than rules-based systems |
| Requires a smaller dataset than deep learning | Requires a larger dataset to train the model |
| Generally used for simpler tasks such as object tracking | Used for more complex tasks such as object detection and semantic segmentation |
| Less robust to environmental changes such as lighting conditions and camera angle | More robust to environmental changes |
| Hard to update and tune after development | The model can be easily retrained using Edge AI Studio |
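As an illustration of how little target-side code the deep learning path requires, the sketch below loads a model compiled with the TI Edge AI tools and dispatches inference to the C7x/MMA accelerator through the TIDL TensorFlow Lite delegate. The model path, artifacts folder, and delegate option name follow the style of the TI edgeai-tidl-tools examples and are assumptions for illustration, not values specified in this document.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Hypothetical paths: produced when the model is compiled with the
# TI Edge AI tools (Edge AI Studio / edgeai-tidl-tools).
MODEL_PATH = "model/defect_detect.tflite"
ARTIFACTS_DIR = "model/artifacts"   # compiled TIDL artifacts (assumption)

# The TIDL delegate routes supported layers to the C7x/MMA accelerator;
# the option name here follows the edgeai-tidl-tools examples (assumption).
tidl_delegate = tflite.load_delegate(
    "libtidl_tfl_delegate.so",
    {"artifacts_folder": ARTIFACTS_DIR})

interpreter = tflite.Interpreter(model_path=MODEL_PATH,
                                 experimental_delegates=[tidl_delegate])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy frame with the model's expected input shape and dtype
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

detections = interpreter.get_tensor(output_details[0]["index"])
print("raw detections:", detections.shape)
```

The same model can also run entirely on the Arm cores by omitting the delegate, which is a convenient way to verify accuracy before offloading to the accelerator.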