Preprocessing Options

  1. Preprocessing generic time-series data generally enables neural networks to classify signals more accurately. This process is referred to as feature extraction, since the goal of the FFT and other operations is to isolate features unique to the specific signals of interest. In this simple example, the figures below show that the sawtooth wave has an STFT spectrogram distinct from that of the sine wave. More complex, non-periodic signals (such as the vacuum cleaner and arc fault) have unique time-varying frequency spectrograms. Concatenating frames of FFT data allows the time-varying frequency signature of a signal to be more easily identified by a neural network (see the STFT sketch after this list).
     Figure 4-10. Saw Wave STFT
     Figure 4-11. Sine Wave STFT
     Figure 4-12. Vacuum Cleaner and Arcing STFT
  2. In the “Train” tab, configure the preprocessing parameters before training. There are nine preset configurations and a custom option. All parameter fields are updated when a preset is selected, and the parameters can be further adjusted to improve performance (see the feature-extraction sketch after this list). The following drop-down options are given:
    1. Preprocessing preset: Choose from a variety of preset configurations.
    2. Transform: This is the transformation applied to the raw time series data.
      • Fast Fourier Transform (FFT): used to extract frequency information. Supported FFT sizes are powers of 2, and the output size is equal to frame size / 2. If this option is chosen, Frames to Concatenate must be 1.
      • FFT BIN: takes the FFT output and combines it into a number of bins equal to the Feature size per frame parameter.
      • RAW: raw time-series data.
    3. Frame Size: number of input samples per frame.
    4. Feature size per frame: number of bins for each frame of data; more bins give higher accuracy at the cost of a larger model.
    5. Frames to Concatenate: concatenates the bins of past input frames in a buffer. This allows the model to detect time-varying changes across the frequency spectrum. More frames give the model more context at the cost of a larger model.
    6. Channels: number of sensor channels.
  3. Once the desired options are selected, click Train. The selected model is trained and validated on the dataset.
  4. The training results are displayed once model optimization is complete.
     Figure 4-13. Training Results
  5. A confusion matrix is also generated from the test dataset to identify which classes the neural network confuses. In this simple example, 100% of the data files were correctly identified.
     Figure 4-14. Confusion Matrix
  6. TI uses quantization-aware training, which first trains the model in full precision, then quantizes the model parameters and re-trains the model. This approach maintains high accuracy while significantly reducing model size (a conceptual sketch follows this list).
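
The spectral contrast shown in Figure 4-10 through Figure 4-12 can be reproduced with a few lines of NumPy/SciPy. The sketch below is not part of Edge AI Studio; the sample rate, fundamental frequency, and STFT frame length are illustrative assumptions.

# Sketch (not TI's tooling): compare STFT spectrograms of a sawtooth and a sine
# wave to see why the time-frequency representation separates the two classes.
import numpy as np
from scipy import signal

fs = 8000                                   # sample rate in Hz (assumed for illustration)
f0 = 100                                    # fundamental frequency of both test signals
t = np.arange(0, 1.0, 1 / fs)

saw = signal.sawtooth(2 * np.pi * f0 * t)   # rich in harmonics
sine = np.sin(2 * np.pi * f0 * t)           # energy at a single frequency

# STFT: frame the signal, window each frame, and take the FFT of every frame
_, _, Z_saw = signal.stft(saw, fs=fs, nperseg=256)
_, _, Z_sine = signal.stft(sine, fs=fs, nperseg=256)

# The sawtooth spreads energy across many frequency bins while the sine
# concentrates in one; this difference is what the classifier keys on.
print("saw  bins with significant energy:", int((np.abs(Z_saw).max(axis=1) > 0.01).sum()))
print("sine bins with significant energy:", int((np.abs(Z_sine).max(axis=1) > 0.01).sum()))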
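
The FFT BIN transform and the framing parameters described in step 2 can be summarized in a short, self-contained sketch. The parameter values and the extract_features helper below are hypothetical stand-ins that only mirror the GUI fields; the actual preprocessing code is generated by Edge AI Studio.

# Minimal sketch of FFT BIN feature extraction. Parameter names mirror the GUI
# fields; the values are illustrative, not one of the nine presets.
import numpy as np

frame_size = 256             # Frame Size: input samples per frame (power of 2)
feature_size_per_frame = 16  # Feature size per frame: bins kept per frame
frames_to_concatenate = 4    # Frames to Concatenate: frames of context per feature vector

def extract_features(frames):
    """frames: array of shape (frames_to_concatenate, frame_size) of raw samples."""
    feats = []
    for frame in frames:
        # FFT of one frame; keep frame_size / 2 magnitude bins
        spectrum = np.abs(np.fft.rfft(frame))[:frame_size // 2]
        # FFT BIN: average adjacent FFT bins down to feature_size_per_frame values
        binned = spectrum.reshape(feature_size_per_frame, -1).mean(axis=1)
        feats.append(binned)
    # Concatenate past frames so the model sees time-varying spectral changes
    return np.concatenate(feats)  # length = feature_size_per_frame * frames_to_concatenate

# Example: random data standing in for one window of single-channel sensor samples
rng = np.random.default_rng(0)
window = rng.standard_normal((frames_to_concatenate, frame_size))
print(extract_features(window).shape)  # -> (64,)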
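
The quantization step behind quantization-aware training can be illustrated with a minimal quantize/dequantize (fake-quantization) example. The fake_quantize_int8 helper below is a conceptual stand-in, not TI's implementation; it only shows why int8 parameters shrink the model while the re-training pass lets accuracy recover.

# Conceptual sketch: symmetric int8 fake-quantization of a weight tensor.
import numpy as np

def fake_quantize_int8(w):
    """Round weights to the int8 grid and return them as floats (quantize-dequantize)."""
    scale = np.abs(w).max() / 127.0               # per-tensor symmetric scale
    q = np.clip(np.round(w / scale), -127, 127)   # int8 codes
    return q * scale, scale

# Full-precision weights from the first training phase (random stand-in here)
rng = np.random.default_rng(1)
w_fp32 = rng.standard_normal((8, 16)).astype(np.float32)

w_q, scale = fake_quantize_int8(w_fp32)
print("max quantization error:", float(np.abs(w_fp32 - w_q).max()))
# int8 storage is 4x smaller than float32 for the same tensor.
# In the re-training pass, the forward path uses the quantized weights while
# updates are applied to the full-precision copy, so the network adapts to the
# reduced precision before deployment.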