SWRA774 - May 2023 - IWRL6432

 

Table of Contents

  Trademarks
  1. Introduction
  2. Machine Learning in mmWave Sensing
  3. Development Process Flow
  4. Case-Study-1: Motion Classification
  5. Case-Study-2: Gesture Recognition
  6. References

Machine Learning in mmWave Sensing

Figure 2-1. A Representative Radar-Based Classification Chain

Figure 2-1 depicts the typical processing flow of an algorithmic chain that incorporates an ML-based classifier. First, basic physical-layer processing is performed on the raw ADC data received from the radar front end. This block involves a series of FFTs that separate the signal by range, Doppler, and angle of arrival [2]. In some cases (e.g. Case-Study-1: Motion Classification), this can additionally be followed by detection and tracking algorithms.
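As a rough illustration of these FFT steps, the following is a minimal NumPy sketch rather than the on-device implementation; the data-cube shape, the window choice, and the function name are assumptions made for illustration (angle processing and detection are omitted).

```python
import numpy as np

def range_doppler_map(adc_cube):
    """adc_cube: complex ADC samples with shape (num_chirps, num_rx, num_samples)."""
    # Range FFT across the fast-time samples of each chirp (window reduces sidelobes)
    win = np.hanning(adc_cube.shape[-1])
    range_fft = np.fft.fft(adc_cube * win, axis=-1)

    # Doppler FFT across chirps (slow time), shifted so zero Doppler sits in the center
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

    # Non-coherently combine the RX channels into a 2-D range-Doppler magnitude map
    return np.abs(doppler_fft).sum(axis=1)
```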

The next step is feature extraction, where the output of the previous block is processed into inputs for an ML-based classifier. The choice of feature extraction and the classifier that follows it are closely coupled. Some popular options found in the literature [3] are listed below:

  1. Range-Doppler heat map: A 2-dimensional Range-Doppler heat map is created for each frame of data and sent to a Convolutional Neural Network (CNN). In some cases, the output of the CNN can be fed to another network (such as a Recurrent Neural Network (RNN) or a Temporal Convolutional Network (TCN)) to exploit temporal signatures in the data.
  2. Micro-Doppler vs. time: A Micro-Doppler spectrum is created from each frame of data. These spectra are then concatenated across frames to create a 2-dimensional Doppler vs. time image, which can then be processed through a CNN model for classification (see the first sketch after this list).
  3. Hand-crafted feature extraction: The 2-dimensional images (such as the Range-Doppler map in (1) or the Doppler vs. time image in (2)) can be pre-processed to create hand-crafted features (a small set of parameters that capture the essence of the image). A single frame yields a single value for each feature, while a sequence of frames generates a corresponding time series. These are then sent to a classifier such as an Artificial Neural Network (ANN); see the second sketch after this list.
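The first sketch below illustrates option (2): it takes a Doppler spectrum from each frame's range-Doppler map and stacks the spectra into a Doppler-vs-time image. The array shapes, the strongest-range-bin selection (a stand-in for detection/tracking), and the function name are assumptions, not the processing chain used in the case studies.

```python
import numpy as np

def doppler_vs_time(frames):
    """frames: list of 2-D range-Doppler magnitude maps, shape (num_doppler, num_range)."""
    columns = []
    for rd_map in frames:
        # Pick the range bin with the strongest return (stand-in for detection/tracking)
        target_bin = np.argmax(rd_map.sum(axis=0))
        # The Doppler spectrum at that range bin becomes one column of the image
        columns.append(rd_map[:, target_bin])
    # Stack the columns: rows = Doppler bins, columns = frames (time)
    return np.stack(columns, axis=1)
```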
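The second sketch illustrates option (3) by reducing one range-Doppler map to a small feature vector. The particular features shown (Doppler centroid, Doppler spread, range of the strongest reflector) are illustrative assumptions and not necessarily the features used in the case studies.

```python
import numpy as np

def extract_features(rd_map, doppler_bins, range_bins):
    """Reduce one range-Doppler magnitude map to a few scalar features.
    rd_map: shape (num_doppler, num_range).
    doppler_bins, range_bins: physical value (m/s, m) of each bin."""
    power = rd_map / rd_map.sum()

    # Doppler centroid: power-weighted mean velocity
    doppler_profile = power.sum(axis=1)
    centroid = np.dot(doppler_bins, doppler_profile)

    # Doppler spread: power-weighted standard deviation around the centroid
    spread = np.sqrt(np.dot((doppler_bins - centroid) ** 2, doppler_profile))

    # Range of the strongest reflector
    peak_range = range_bins[np.argmax(power.sum(axis=0))]

    # One feature vector per frame; a sequence of frames yields a time series
    return np.array([centroid, spread, peak_range])
```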

There is a tradeoff between the intelligence of the feature extraction block and the complexity of the classifier; approach (3) typically results in the lightest-weight classifier. The selection of the feature extraction block and the classifier architecture depends on the complexity of the classification task as well as on requirements such as classifier performance, frame rate, and available memory and computing power.

In a typical implementation on the IWRL6432, the radar physical-layer (PHY) processing is handled by the hardware accelerator (HWA), while the classifier runs on the MCU (Arm Cortex-M4F). Hand-crafted feature extraction typically runs on the M4F with some assistance from the HWA.
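As a conceptual sketch of the classifier stage that runs on the MCU, a small fully connected network over a window of feature vectors could look like the following. This is not the code that ships on the device; the two-layer structure, the ReLU/softmax choices, and all names are assumptions for illustration (an embedded port would use fixed-point C rather than NumPy).

```python
import numpy as np

def ann_classify(feature_window, w1, b1, w2, b2):
    """feature_window: flattened features from a window of frames, shape (input_dim,).
    w1/b1, w2/b2: trained weights and biases of a two-layer fully connected network."""
    hidden = np.maximum(0.0, w1 @ feature_window + b1)   # ReLU hidden layer
    logits = w2 @ hidden + b2

    # Softmax over the class labels (e.g., motion classes or gesture labels)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()
```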