TI Deep Learning (TIDL) software
TIDL software brings deep learning inference to the edge by enabling applications to leverage TI’s proprietary, highly optimized Convolutional Neural Network (CNN)/Deep Neural Network (DNN) implementation on TI’s Sitara AM57x processors. TIDL is a set of open-source Linux software packages and tools that enable offloading of deep learning inference to the Embedded Vision Engine (EVE) subsystems, the C66x DSPs, or both.
Benefits of TIDL
- Low power deep learning inference for embedded applications at the edge
- Hardware acceleration for deep learning: runs on a scalable platform leveraging C66x DSP cores and/or EVE subsystems on AM57x Sitara™ Processors
- Can run multiple DNNs simultaneously on different cores
- Improved performance by leveraging embedded techniques such as sparsity, efficient CNN configurations, and dynamic quantization
- Supports importing trained models from Caffe and TensorFlow
- Provides the Caffe-Jacinto framework, optimized for embedded applications
- Optimized example CNN models that demonstrate real-time processing of applications involving image classification, object detection and pixel-level semantic segmentation
TIDL software is available as part of TI’s free Processor SDK for Linux. The initial release of TIDL supports CNN layer types covering most of the popular CNN layers present in frameworks such as Caffe and TensorFlow. See the Processor SDK Linux TIDL documentation for the latest list of TIDL-supported layers.
New to deep learning?
- Watch the video: Introduction to Deep Learning
- Watch the video: TI Deep Learning (TIDL) Overview for Sitara Processors
- Read the white paper: Bringing deep learning to embedded systems.
Want to get started right away?
- Download the TI Design deep learning inference for embedded applications reference design.
- Download the free Processor SDK Linux for the AM57x
- Read the PSDK Getting Started Guide (GSG) on deep learning
AM57x Sitara Processors run the deep learning inference. Inference is the part that is deployed in an end application to perform a specific task, e.g., inspecting a bottle on an assembly line, counting and tracking people within a room, or determining whether a bill is counterfeit. The neural network must still be trained on a desktop or cloud environment, and the trained network must then be imported into the AM57x Sitara Processors using TI’s device translation tool.
See the figure below for the TIDL development flow and components of TIDL.

TIDL development flow
The TIDL development flow consists of three main parts.
- Training - Teaches the network model to perform a specific task. Training usually occurs offline on desktops or in the cloud and entails feeding large labeled data sets into a DNN; this is where the ‘learning’ part of deep learning happens. The result of the training phase is a trained network model.
- Import (Format Conversion) - Consists of using the TIDL device translator tool to convert trained network models into an internal format best suited for use inside the TIDL library running on an AM57x Processor. Command-line parameters specify whether the inference will run on the C66x DSP cores, the EVE subsystems, or both (an illustrative import configuration is sketched after this list).
- Deployment (Inference) - Run the converted network model on an AM57x Processor using TIDL APIs.
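To make the import step more concrete, the sketch below shows the general shape of a translator invocation and its configuration file. It is illustrative only: the tool name, field names, and file names are assumptions based on a typical Processor SDK Linux installation and may differ between SDK versions, so treat the Processor SDK Linux TIDL documentation as the authoritative reference.

```
# Illustrative only -- tool and field names are assumptions; check the
# Processor SDK Linux TIDL documentation for your SDK version.
#
# Typical invocation (on the development host):
#   ./tidl_model_import.out import_config.txt
#
# import_config.txt:
modelType          = 0                       # 0: Caffe, 1: TensorFlow
inputNetFile       = "deploy.prototxt"       # trained network definition
inputParamsFile    = "weights.caffemodel"    # trained weights
outputNetFile      = "tidl_net_model.bin"    # converted network for TIDL
outputParamsFile   = "tidl_param_model.bin"  # converted parameters for TIDL
```

The converted output files are what the TIDL library loads on the target during the deployment step.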
More details on each of these steps are provided in the TI Design guide for the deep learning inference for embedded applications reference design and in the Getting Started Guide (GSG) user’s guide section on deep learning.
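As a rough sketch of the deployment step, the C++ fragment below follows the structure of the TIDL API (Configuration, Executor, and ExecutionObject objects) described in the Processor SDK Linux documentation. The exact class and method names here are recalled from that documentation and should be treated as assumptions; the examples shipped with the SDK are the authoritative reference.

```cpp
// Minimal TIDL deployment sketch (class/method names are assumptions
// recalled from the TIDL API documentation; verify against your SDK).
#include <vector>
#include "configuration.h"   // tidl::Configuration
#include "executor.h"        // tidl::Executor, tidl::ExecutionObject

using namespace tidl;

int main()
{
    // Runtime configuration file referencing the network/parameter
    // binaries produced by the import step (path is hypothetical).
    Configuration config;
    if (!config.ReadFromFile("tidl_runtime_config.txt"))
        return 1;

    // Offload to one EVE subsystem; DeviceType::DSP would select a C66x
    // core, and a second Executor can drive the other accelerator so that
    // multiple networks run in parallel.
    Executor executor(DeviceType::EVE, {DeviceId::ID0}, config);
    ExecutionObject* eo = executor[0];

    // Input/output buffers sized for the imported network.
    std::vector<char> in(eo->GetInputBufferSizeInBytes());
    std::vector<char> out(eo->GetOutputBufferSizeInBytes());
    eo->SetInputOutputBuffer(ArgInfo(in.data(), in.size()),
                             ArgInfo(out.data(), out.size()));

    // ... fill 'in' with a preprocessed input frame here ...

    eo->ProcessFrameStartAsync();   // launch inference on the accelerator
    eo->ProcessFrameWait();         // block until the result is ready

    // 'out' now holds the network output (e.g. per-class scores).
    return 0;
}
```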
TIDL example applications
TIDL example applications cover both spatial 2D data (image, video, ToF, radar/mmWave, etc.) and time series data (speech, audio, machine data).
Example applications | Role of deep learning |
---|---|
Vision computers, optical inspection | Classify products as good or defective; deep learning can improve accuracy and flexibility |
Building automation | Track, identify and count people and objects |
Automated sorting equipment, industrial | Recognize objects and guide movement/placement |
ATMs, currency counters | Determine valid/counterfeit currency |
Vacuum robots, robotic lawn mowers | Improve ability to recognize obstacles and things like power cords/wires |
Smart appliances | Recognize objects in a refrigerator, identify food and cook it automatically |
Building automation/security | Classify and recognize specific sounds/audio patterns |
Industrial equipment | Identify wear and expected lifetime of equipment |
Examples
Processor SDK includes several examples that are provided in source form, including image classification, pixel-level segmentation and a single-shot multibox detection (SSD) demo.
There is also a Matrix-GUI example that is based on ImageNet. This demo can recognize up to 1,000 different classes of objects and display the label on the screen along with the image. It can run from either a live camera or a pre-recorded video clip. A subset of “acceptable” classes is used for classification filtering to tune demo performance; the subset is listed in the source file “findclasses.cpp.”
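The filtering idea is straightforward: the demo only reports a result when the top-scoring ImageNet label belongs to an allow-list. The sketch below is a generic illustration of that idea, not the actual code in “findclasses.cpp”; the names and structure are hypothetical.

```cpp
// Generic sketch of "acceptable class" filtering (hypothetical names;
// not the actual findclasses.cpp implementation).
#include <algorithm>
#include <cstddef>
#include <string>
#include <unordered_set>
#include <vector>

// Subset of ImageNet labels the demo is allowed to report.
static const std::unordered_set<std::string> kAcceptableClasses = {
    "coffee_mug", "water_bottle", "computer_keyboard", "remote_control"
};

// Given per-class scores and the matching label table, return the
// top-scoring label if it is acceptable, or an empty string otherwise.
std::string FilterTopClass(const std::vector<float>& scores,
                           const std::vector<std::string>& labels)
{
    if (scores.empty() || scores.size() != labels.size())
        return std::string();

    std::size_t top = std::distance(
        scores.begin(), std::max_element(scores.begin(), scores.end()));
    const std::string& label = labels[top];
    return kAcceptableClasses.count(label) ? label : std::string();
}
```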

More information on these demos can be found here.