Application overview
Automation in factories and assembly lines is rapidly progressing to bring more intelligence to the manufacturing process. Defects can occur at many stages of the process, and inspection is a crucial step for quality assurance. Machine vision cameras are a common automated inspection method, helping ensure products come off the line free of defects.
Edge AI and neural network models can recognize and classify even minute defects in parts without the strict positioning and orientation requirements common in conventional computer vision approaches. Neural networks offer a robust, data-driven approach to visual inspection, lowering the barrier to entry.
Microprocessors with the C7™ NPU accelerate neural networks for vision tasks, running models at low enough latency to keep up with the camera framerate (> 60 FPS) and the assembly line. Additional on-chip accelerators like the Vision Preprocessing Accelerator (VPAC), which performs image signal processor (ISP) functions, enable high-quality, high-throughput image preprocessing to reduce noise and prepare data for AI-based analysis.
Starting evaluation
Data collection
Training data will be images collected from a camera that closely matches the one used on the factory line. Lighting should be consistent with the factory conditions, including any additional illuminators like ring lights. The camera lens can also impact image quality, so it should match the lens (or lenses, if multiple options will be available) used in the final system.
Ground truth for the dataset can take the form of bounding boxes over defective parts (better for small, simple parts like screws) or image masks that isolate the pixels of specific regions showing defects (better for large, complex objects like PCBs). Select the model type before labeling data, as the model architecture may influence the type of annotations required.
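The two annotation styles can be sketched as simple data records. This is a minimal illustration, not a required schema: the field names (`image`, `defects`, `bbox`, `mask`) and file names are hypothetical, and real labeling tools export richer formats such as COCO JSON.

```python
# Hypothetical bounding-box label: one box per defective part,
# given as [x, y, width, height] in pixels (illustrative schema).
bbox_label = {
    "image": "screw_0042.png",
    "defects": [
        {"class": "thread_damage", "bbox": [120, 64, 32, 18]},
    ],
}

# Hypothetical mask label: a per-pixel map the same size as the image;
# nonzero pixels mark defective regions (tiny 4x4 example for brevity).
mask_label = {
    "image": "pcb_0007.png",
    "mask": [
        [0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
    ],
}

def defect_pixels(mask):
    """Count nonzero (defective) pixels in a mask."""
    return sum(v != 0 for row in mask for v in row)

print(defect_pixels(mask_label["mask"]))  # 4
```

Boxes are cheap to draw and suit detection models; masks cost more labeling effort but give segmentation models per-pixel supervision.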
Data quality assessment
The dataset must include realistic defect types and the orientations in which objects are expected to appear on the factory line. Including expected lighting variations, even partial failure of illuminators, can improve model robustness to minor equipment failures.
The dataset should contain both defective and defect-free samples; not every sample should show a defect. Defects must be visible within the image: if the human eye cannot see a defect, the trained model will not learn to see it either.
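A quick sanity check like the sketch below can confirm the class mix before training. The `labels` structure (image name mapped to a count of labeled defects) is hypothetical; adapt it to whatever format your labeling tool exports.

```python
# Hypothetical label summary: image name -> number of labeled defects.
labels = {
    "part_001.png": 0,   # defect-free sample
    "part_002.png": 2,   # two labeled defects
    "part_003.png": 0,
    "part_004.png": 1,
}

defective = sum(1 for n in labels.values() if n > 0)
clean = len(labels) - defective
print(f"{defective} defective, {clean} clean")  # 2 defective, 2 clean

# Fail fast if either class is missing entirely.
assert defective > 0 and clean > 0, "dataset needs both defective and clean samples"
```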
Build and train your model
Models may be trained with CCStudio™ Edge AI Studio or edgeai-modelmaker for TI-supported neural networks. More advanced developers may approach this task using autoencoder models for anomaly detection.
Find the right model for your needs
Select the best model type for your defects:
- Object detection models (like YOLOX): Best for spotting distinct, separate defects such as cracks, missing components, or other large defects.
- Semantic segmentation models (like DeepLabv3): Ideal for detecting surface-level issues like discoloration or texture problems.
Balance resolution and speed:
Choose a resolution high enough to catch important small defects, but not so detailed that it slows down your inspection process. Most systems improve efficiency by downscaling the original captured images to reduce the amount of data to process.
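One way to reason about this trade-off is to work backward from the smallest defect you must catch. The numbers below (camera resolution, defect size, minimum on-target pixel size) are hypothetical placeholders; substitute measurements from your own setup.

```python
# Back-of-envelope resolution check, assuming a hypothetical setup.
CAM_W, CAM_H = 1920, 1080   # native capture resolution
MIN_DEFECT_PX = 24          # smallest defect diameter at full resolution
MIN_DEFECT_TARGET = 8       # pixels the defect must still span after downscaling

# Largest downscale factor before the defect shrinks below the target size.
max_factor = MIN_DEFECT_PX / MIN_DEFECT_TARGET   # 3.0
model_w = int(CAM_W / max_factor)
model_h = int(CAM_H / max_factor)
print(model_w, model_h)  # 640 360
```

Anything at or above the computed size preserves the defect; going much higher only adds compute without adding useful signal.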
Consider anomaly detection for unique cases:
For situations where defects are unpredictable or hard to define, anomaly detection models (using autoencoders) can offer a simpler approach. These models learn what "normal" products look like and flag anything different as a potential defect, with no need to train on defective examples. However, these model types are not supported in Edge AI Studio or edgeai-modelmaker, so developers must implement an autoencoder neural network themselves and export it to a supported format like ONNX.
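The core idea, reconstruction error against "normal" data plus a threshold, can be shown without a neural network. The sketch below uses the per-pixel mean of normal samples as a trivial stand-in for the autoencoder's reconstruction; the data (tiny 1-D "images") and the 3-sigma threshold are illustrative assumptions, and a real system would train an autoencoder and export it to ONNX.

```python
import statistics

# Hypothetical "normal" samples (tiny 1-D images for brevity).
normal = [[10, 10, 12, 11], [11, 9, 12, 10], [10, 11, 11, 10]]

# "Train": the reconstruction here is just the per-pixel mean of normal
# samples (an autoencoder would learn this reconstruction instead).
mean_img = [statistics.mean(px) for px in zip(*normal)]

def score(img):
    """Anomaly score = mean squared error against the reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(img, mean_img)) / len(img)

# Threshold from scores on normal data (mean + 3 sigma, a common choice).
scores = [score(s) for s in normal]
threshold = statistics.mean(scores) + 3 * statistics.stdev(scores)

defective = [10, 10, 40, 11]   # one pixel far outside the normal range
print(score(defective) > threshold)  # True: flagged as an anomaly
```

The same thresholding logic applies unchanged when the reconstruction comes from a trained autoencoder running under a supported runtime.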
Deploying your model
Model deployment requires the model to be compiled beforehand for the target hardware accelerator. With tools like Edge AI Studio and edgeai-modelmaker, compilation is automatic. Otherwise, compilation requires a separate step using software packages like edgeai-tidl-tools from the TI GitHub as part of a Bring Your Own Model flow.
Model artifacts are deployed through runtimes like ONNX Runtime, LiteRT (formerly TensorFlow Lite) and TVM, using TI Deep Learning (TIDL) as the backend software for hardware acceleration.
To deploy the model into an end-to-end vision application, start with edgeai-gst-apps, which composes the pipeline with multiple stages of hardware acceleration for pre-processing and post-processing the image, in addition to accelerating the AI model itself.
In addition to edgeai-gst-apps, we have built a demo for defect detection on small objects (ring terminals) using the YOLOX-Nano object detection network with edgeai-gst-apps as a baseline. The model was trained with Edge AI Studio to recognize good parts vs. multiple types of defects. Additional postprocessing and visualization show statistics for the number of good and defective parts.
Choosing the right device for you
Device selection will depend on the level of AI performance required and the camera throughput (resolution and framerate).
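A simple way to compare devices is to convert the line rate into a required inference framerate and check it against benchmarked FPS. The line rate, images per part, and model FPS below are hypothetical placeholders; use your own line specification and the benchmark figures for your candidate device.

```python
# Hypothetical line requirements.
parts_per_minute = 600   # line rate
images_per_part = 2      # e.g., two camera angles per part

required_fps = parts_per_minute * images_per_part / 60
print(required_fps)      # 20.0 inferences per second needed

# Hypothetical benchmarked inference rate on a candidate device.
model_fps = 60
headroom = model_fps / required_fps
print(headroom)          # 3.0x headroom for preprocessing and timing jitter
```

Leaving headroom above 1.0x accounts for preprocessing, postprocessing, and variation in part arrival timing.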
The benchmarks in the table below were produced using SDK version 10.1.
| Product number | Processing core | NPU available | YOLOX-Nano (416x416) performance (FPS¹) | SSD-Mobilenetv2 (512x512) performance (FPS¹) | DeepLabv3 segmentation (512x512) performance (FPS¹) |
|---|---|---|---|---|---|
| AM62A7 | 4x Arm® | 2 TOPS | | | |
| TDA4VE-Q1 | 4x Arm® | 8 TOPS | | | |

¹ FPS = frames per second
All the hardware, software and resources you’ll need to get started
Hardware
SK-AM62A-LP
The AM62A is the lowest-cost AI-accelerated device in the AM6xA family and is best suited for evaluation. A generic USB camera or webcam can be used for image capture and model evaluation on live data.
Software & development tools
PROCESSOR-SDK-LINUX-AM62A
The Edge AI processor SDK is Linux-based and includes the software components necessary to run a compiled model with hardware acceleration. Other edge AI accelerated processors may be substituted for the AM62A.
CCStudio™ Edge AI Studio
Edge AI Studio provides tools for training, compiling and deploying models to TI edge AI processors. A model selection tool is available to view pre-generated benchmarks of popular models.
Command-Line tools
Tools for microprocessor devices with Linux and TIDL support. TI's edge AI solution simplifies the whole product life cycle of DNN development and deployment by providing a rich set of tools and optimized libraries.
Supporting resources
Demo application for building and deploying a defect detection model using an object detection neural network trained with Edge AI Studio.
Industrial | Vision
Detect fine-grained obstacles and pathways from imagery in real time using C7™ NPU-equipped processors