
Low-Level Processing

An example of a processing chain for people counting is implemented on the IWR6843 EVM.

The processing chain is implemented on the DSP and the Cortex-R4F together. Table 3 lists the physical memory resources used by the processing chain.

Table 3. Memory Configuration

SECTION NAME | SIZE AS CONFIGURED (KB) | MEMORY USED (KB) | DESCRIPTION
L1D SRAM | 16 | 16 | Layer-one data static RAM is the fastest data-access memory for the DSP and is used for the most time-critical DSP processing data that fits in this section.
L1D cache | 16 | 16 | Layer-one data cache caches data accesses to any other section configured as cacheable. LL2, L3, and HSRAM are configured as cacheable.
L1P SRAM | 28 | 28 | Layer-one program static RAM is the fastest program-access memory for the DSP and is used for the most time-critical DSP program code that fits in this section.
L1P cache | 4 | 4 | Layer-one program cache caches program accesses to any other section configured as cacheable. LL2, L3, and HSRAM are configured as cacheable.
L2 | 256 | 239 | Local layer-two memory has lower access latency than layer three and is visible only from the DSP. It holds most of the program and data for the signal-processing chain.
L3 | 768 | 720 | Higher-latency memory for DSP accesses; primarily stores the radar cube and the range-azimuth heat map. It also stores system code that does not need to execute at high speed.
HSRAM | 32 | 20 | Shared memory buffer used to store slow, non-runtime code.
Figure 5. Processing Chain Flow: Detection, Tracking, Visualization

As shown in Figure 5, the people-counting signal-processing chain is implemented across both the DSP and the Cortex-R4F. The following sections break the processing into these smaller blocks:

  1. Range FFT through Range Azimuth Heatmap with Capon BF
  2. Object Detection with CFAR and Elevation Estimation
  3. Doppler Estimation

  1. Range FFT through Range Azimuth Heatmap with Capon BF
    • As shown in the block diagram, raw ADC data is processed with a 1D FFT (range processing), and static clutter removal is applied to the result. Capon beamforming is then used to generate a range-azimuth heatmap. These steps are explained in depth below, and illustrative sketches of them follow this block.
    • Figure 6. Range FFT through Range-Azimuth Heatmap
    • Range processing
      • For each antenna, EDMA is used to move samples from the ADC output buffer to the FFT hardware accelerator (HWA), controlled by the Cortex-R4F. A 16-bit, fixed-point 1D windowing operation and a 16-bit, fixed-point 1D FFT are performed. EDMA then moves the output from HWA local memory to the radar-cube storage in layer-three (L3) memory. Range processing is interleaved with the active chirp time of the frame. Except where noted, all other processing occurs once per frame, during the idle time between the active chirp time and the end of the frame.
    • Static Clutter Removal
      • Once the active chirp time of the frame is complete, the interframe processing can begin, starting with static clutter removal. The 1D FFT data is averaged across all chirps for a single virtual RX antenna, and this average is then subtracted from each chirp of that virtual RX antenna. This cleanly removes the static information from the signal, leaving only the returns from moving objects. The formula is
      • Equation 1. $X_n^r = \frac{1}{N_c}\sum_{c=1}^{N_c} X_{nc}^r, \qquad \hat{X}_{nc}^r = X_{nc}^r - X_n^r$
      • with $N_c$ = number of chirps, $N_r$ = number of receive antennas, $X_n^r$ = average over all chirps for a single receive antenna $r$, and $X_{nc}^r$ = samples of a single chirp $c$ from receive antenna $r$
    • Capon beamforming
      • The Capon BF algorithm is split into two components: 1) spatial covariance matrix computation and 2) range-azimuth heatmap generation. The final output is the range-azimuth heatmap with beam weights, which is passed to the CFAR algorithm.
      • The spatial covariance matrix is calculated as follows:
        • First, the spatial covariance matrix is estimated as an average over the chirps in the frame as $R_{xx,n}$, which is 8 × 8 for ISK and 4 × 4 for ODS:
        • Equation 2. $R_{xx,n} = \frac{1}{N_c}\sum_{c=1}^{N_c} X_{nc} X_{nc}^H, \qquad X_{nc} = [X_{nc}^1, \ldots, X_{nc}^{N_r}]^T$
        • Second, diagonal loading is applied to the R matrix to ensure stability
        • Equation 3. $R_{xx,n} = R_{xx,n} + \alpha \frac{\mathrm{tr}(R_{xx,n})}{N_r} I_{N_r}$
      • Range-Azimuth Heatmap Generation
        • First, the range-azimuth heatmap $P_{na}$ is calculated using the following equations
        • Subscript a indicates values across azimuth bins
        • Equation 4. $P_{na} = \frac{1}{a_a^H R_{xx,n}^{-1} a_a}, \qquad a_a = [1, e^{j\mu_a}, \ldots, e^{j(N_r-1)\mu_a}]^T, \qquad \mu_a = \frac{2\pi d}{\lambda}\sin(\theta_a)$
        • Then, the beamforming weights are calculated as
        • Equation 5. $w_a = \frac{R_{xx,n}^{-1} a_a}{a_a^H R_{xx,n}^{-1} a_a}$
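The range-processing and static-clutter-removal steps above can be illustrated with a short NumPy sketch. This is an illustration only, not the fixed-point HWA/DSP implementation; the dimensions (num_chirps, num_rx, num_adc_samples), the synthetic data, and the Hanning window are assumed example values, not the EVM chirp configuration.

    import numpy as np

    # Example dimensions (illustrative, not the EVM chirp configuration)
    num_chirps = 64        # chirps per frame (Nc)
    num_rx = 8             # virtual receive antennas (Nr)
    num_adc_samples = 256  # ADC samples per chirp

    # Synthetic raw ADC data cube: [chirp][rx][sample]
    adc_data = (np.random.randn(num_chirps, num_rx, num_adc_samples)
                + 1j * np.random.randn(num_chirps, num_rx, num_adc_samples))

    # Range processing: window + 1D FFT along the ADC-sample axis.
    # (On the device this runs chirp by chirp in the HWA in 16-bit fixed point.)
    window = np.hanning(num_adc_samples)
    radar_cube = np.fft.fft(adc_data * window, axis=-1)   # [chirp][rx][range bin]

    # Static clutter removal (Equation 1): average each rx/range bin over all
    # chirps and subtract, so only returns from moving objects remain.
    mean_over_chirps = radar_cube.mean(axis=0, keepdims=True)   # X_n^r
    radar_cube_dynamic = radar_cube - mean_over_chirps          # X_nc^r - X_n^r

Because the mean over chirps estimates the zero-Doppler component at each range bin, subtracting it suppresses returns from stationary objects while leaving moving targets intact.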
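Equations 2 through 5 can likewise be sketched in floating point. In the sketch below, the diagonal-loading factor alpha, the half-wavelength antenna spacing, the azimuth grid, and the synthetic clutter-removed cube are assumptions chosen for illustration; the firmware uses its own configured values and a fixed-point implementation.

    import numpy as np

    num_chirps, num_rx, num_range_bins = 64, 8, 256
    alpha = 0.03                                   # assumed diagonal-loading factor
    theta = np.deg2rad(np.arange(-60, 61, 1.0))    # assumed azimuth grid
    mu = 2 * np.pi * 0.5 * np.sin(theta)           # mu_a = 2*pi*(d/lambda)*sin(theta), d = lambda/2 assumed
    steering = np.exp(1j * np.outer(np.arange(num_rx), mu))   # a_a, shape [num_rx][num_angles]

    # Clutter-removed radar cube: [chirp][rx][range bin] (synthetic here)
    cube = (np.random.randn(num_chirps, num_rx, num_range_bins)
            + 1j * np.random.randn(num_chirps, num_rx, num_range_bins))

    num_angles = steering.shape[1]
    heatmap = np.zeros((num_range_bins, num_angles))                        # P_na
    beam_weights = np.zeros((num_range_bins, num_angles, num_rx), dtype=complex)

    for n in range(num_range_bins):
        X = cube[:, :, n]                                        # snapshots X_nc, [chirp][rx]
        Rxx = np.einsum('ci,cj->ij', X, X.conj()) / num_chirps   # Equation 2: sample covariance
        Rxx += alpha * np.trace(Rxx).real / num_rx * np.eye(num_rx)   # Equation 3: diagonal loading
        Rinv = np.linalg.inv(Rxx)
        denom = np.einsum('ia,ij,ja->a', steering.conj(), Rinv, steering).real  # a^H R^-1 a
        heatmap[n] = 1.0 / denom                                                 # Equation 4
        beam_weights[n] = (Rinv @ steering).T / denom[:, None]                   # Equation 5

The diagonal loading in Equation 3 keeps the estimated covariance well conditioned when the number of chirps is small relative to the array size, so the inverse in Equations 4 and 5 remains stable.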
  2. Object Detection with CFAR and Elevation Estimation
    • Using the heatmap generated in the previous steps, a two-pass CFAR is used to generate detected points in the range-azimuth spectrum. For each detected point, Capon beamforming is then applied to generate a 1D elevation angular spectrum, which is used to determine the elevation angle of the point. Sketches of both steps follow this block.
    • Figure 7. CFAR and Elevation Estimation
    • Object detection
      • A two-pass CFAR algorithm using the CFAR "smallest of" (CFAR-SO) method is run on the range-azimuth heat map to perform object detection. The first pass is done per angle bin along the range domain. The second pass, in the angle domain, confirms the detections from the first pass. The output detected-point list is stored in L2 memory.
    • Elevation Estimation with Capon BF
      • Full 2D, 12-antenna Capon beamforming is performed at the azimuth of each detected point. This follows the same steps used to generate the range-azimuth heatmap: 1) generate the spatial covariance matrix and 2) generate a 1D elevation angle spectrum (similar to the heatmap).
      • Then a single peak search is performed to find the elevation angle of each point. This step does not generate new detection points.
      • The spatial covariance matrix is computed as before, with the input taken at the range bin of each detection:
      • Equation 6. $R_{xx,m} = \frac{1}{N_c}\sum_{c=1}^{N_c} X_{kc} X_{kc}^H, \qquad k = r_{det,m}$
      • With diagonal loading applied as before:
      • Equation 7. $R_{xx,m} = R_{xx,m} + \alpha_2 \frac{\mathrm{tr}(R_{xx,m})}{N_r} I_{N_r}$
      • 1D Elevation Spectrum is as follows
      • Equation 8. $P_m = \frac{1}{a_m^H R_{xx,m}^{-1} a_m}, \qquad a_m = a(\mu_m, \nu_m) = a(\mu_m) \otimes a(\nu_m), \qquad a(\mu_m) = [1, e^{j\mu_m}, \ldots, e^{j(N_r-1)\mu_m}]^T, \quad a(\nu_m) = [1, e^{j\nu_m}, \ldots, e^{j(N_r-1)\nu_m}]^T$
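The two-pass CFAR-SO detection described above can be sketched as follows. The training and guard window lengths and the threshold are placeholder values, and the edge handling is simplified; the firmware's CFAR parameters are configurable and not reproduced here.

    import numpy as np

    def cfar_so_1d(x, num_train=8, num_guard=2, threshold_db=12.0):
        """1D CA-CFAR, 'smallest of' variant: the noise estimate for each cell is
        the smaller of the leading and lagging training-window averages."""
        n = len(x)
        detections = np.zeros(n, dtype=bool)
        thr = 10.0 ** (threshold_db / 10.0)
        for i in range(n):
            lead = x[max(0, i - num_guard - num_train): max(0, i - num_guard)]
            lag = x[i + num_guard + 1: i + num_guard + 1 + num_train]
            if len(lead) == 0 or len(lag) == 0:
                continue                       # skip edge cells in this simple sketch
            noise = min(lead.mean(), lag.mean())
            detections[i] = x[i] > thr * noise
        return detections

    def two_pass_cfar(heatmap):
        """Pass 1: CFAR-SO per angle bin along range. Pass 2: confirm each pass-1
        detection with CFAR-SO along the angle dimension at that range bin."""
        num_range, num_angle = heatmap.shape
        pass1 = np.zeros_like(heatmap, dtype=bool)
        for a in range(num_angle):
            pass1[:, a] = cfar_so_1d(heatmap[:, a])
        detections = []
        for r, a in zip(*np.nonzero(pass1)):
            if cfar_so_1d(heatmap[r, :])[a]:   # pass-2 confirmation in angle
                detections.append((r, a))
        return detections                       # list of (range bin, azimuth bin)

    # Example: run on a synthetic range-azimuth heatmap with one strong target
    heatmap = np.abs(np.random.randn(128, 64)) + 0.1
    heatmap[40, 20] += 50.0
    points = two_pass_cfar(heatmap)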
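Elevation estimation (Equations 6 through 8) reuses the same Capon machinery on the full virtual array at the range bin of each detected point. In the sketch below, the 4 × 3 split of the 12 virtual antennas, the loading factor alpha2, the detected azimuth phase mu_det, the elevation grid, and the synthetic snapshots are all assumptions for illustration; the 2D steering vector is formed as the Kronecker product in Equation 8.

    import numpy as np

    num_chirps = 64
    num_az, num_el = 4, 3            # assumed 4 x 3 factoring of the 12 virtual antennas
    num_virt = num_az * num_el
    alpha2 = 0.03                    # assumed diagonal-loading factor

    # Snapshots at the range bin of one detected point: [chirp][virtual antenna]
    X = (np.random.randn(num_chirps, num_virt)
         + 1j * np.random.randn(num_chirps, num_virt))

    # Equations 6 and 7: covariance at the detected range bin, with diagonal loading
    Rxx = np.einsum('ci,cj->ij', X, X.conj()) / num_chirps
    Rxx += alpha2 * np.trace(Rxx).real / num_virt * np.eye(num_virt)
    Rinv = np.linalg.inv(Rxx)

    mu_det = 0.7                                  # azimuth phase of the detected point (example)
    elev_grid = np.deg2rad(np.arange(-30, 31, 1.0))
    nu = 2 * np.pi * 0.5 * np.sin(elev_grid)      # d = lambda/2 assumed

    a_mu = np.exp(1j * mu_det * np.arange(num_az))              # a(mu_m)
    spectrum = np.zeros(len(nu))
    for k, nu_k in enumerate(nu):
        a_nu = np.exp(1j * nu_k * np.arange(num_el))            # a(nu_m)
        a_2d = np.kron(a_mu, a_nu)                              # a_m = a(mu_m) (x) a(nu_m)
        spectrum[k] = 1.0 / np.real(a_2d.conj() @ Rinv @ a_2d)  # Equation 8

    elev_peak = elev_grid[np.argmax(spectrum)]    # single peak search -> elevation angle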
  3. Doppler Estimation
    • For each detected point in range-azimuth (angle) space, Doppler is estimated using the Capon beam weights and a Doppler FFT (a sketch follows this block). The output is stored in L2 memory and is combined with the point cloud produced during CFAR and elevation estimation, resulting in the following output for each point:
      • Range
      • Azimuth
      • Elevation
      • Doppler
      • SNR
    • Figure 8. Doppler Estimation and Combination of Results
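A plausible reading of the Doppler step, sketched below: the stored Capon beam weights for the detected azimuth bin combine the per-chirp antenna samples at the detected range bin into one beamformed signal, and a windowed FFT across chirps followed by a peak search gives the Doppler estimate. The window choice, the dimensions, and the synthetic inputs are assumptions for illustration, not the firmware implementation.

    import numpy as np

    num_chirps, num_rx = 64, 8

    # Inputs for one detected point (synthetic here):
    #   chirp_samples: clutter-removed samples at the point's range bin, [chirp][rx]
    #   w: Capon beam weights for the point's azimuth bin (Equation 5), [rx]
    chirp_samples = (np.random.randn(num_chirps, num_rx)
                     + 1j * np.random.randn(num_chirps, num_rx))
    w = np.random.randn(num_rx) + 1j * np.random.randn(num_rx)

    # Beamform each chirp toward the detected azimuth, then FFT across chirps.
    beamformed = chirp_samples @ w.conj()               # w^H x_c for each chirp c
    doppler_spectrum = np.abs(np.fft.fft(beamformed * np.hanning(num_chirps)))
    doppler_bin = np.argmax(doppler_spectrum)           # peak -> Doppler estimate

    # Signed Doppler index relative to zero velocity (FFT bins wrap at num_chirps/2)
    doppler_index = doppler_bin if doppler_bin < num_chirps // 2 else doppler_bin - num_chirps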
  4. All of the above processing, except range processing, happens during the interframe time. After the DSP finishes frame processing, the results are written to shared memory (L3/HSRAM) as input for the group tracker on the Cortex-R4F.

  5. Group tracker
    • The tracking algorithm implements the localization processing. The tracker operates on the point-cloud data from the DSP and provides localization information. Its input is the point cloud: range, azimuth, elevation, Doppler, and SNR for each point. Its output is a list of tracked objects, each with position, velocity, and acceleration in 3D Cartesian (X, Y, Z) space. A minimal interface sketch follows.
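The group-tracker algorithm itself (gating, association, and Kalman-style filtering) is not reproduced here. The fragment below only illustrates the shape of the interface described above, point-cloud measurements in (range, azimuth, elevation, Doppler, SNR) going in and a Cartesian track state (position, velocity, acceleration) coming out; the coordinate convention, the constant-acceleration model, and the fixed blending gain are assumptions for illustration, not the tracker's actual design.

    import numpy as np

    def spherical_to_cartesian(rng, azimuth, elevation):
        """Convert a detected point to Cartesian coordinates (assumed convention:
        azimuth measured from the Y axis in the X-Y plane, elevation from that plane)."""
        x = rng * np.cos(elevation) * np.sin(azimuth)
        y = rng * np.cos(elevation) * np.cos(azimuth)
        z = rng * np.sin(elevation)
        return np.array([x, y, z])

    class Track:
        """Nine-state (position, velocity, acceleration) constant-acceleration model."""
        def __init__(self, position, dt=0.05):               # dt: 50-ms frame period
            self.s = np.concatenate([position, np.zeros(6)])  # [x y z vx vy vz ax ay az]
            # State-transition matrix for a constant-acceleration motion model
            self.F = np.eye(9)
            for i in range(3):
                self.F[i, 3 + i] = dt
                self.F[i, 6 + i] = 0.5 * dt * dt
                self.F[3 + i, 6 + i] = dt
            self.gain = 0.4                                   # fixed blending gain (placeholder)

        def predict(self):
            self.s = self.F @ self.s

        def update(self, measured_position):
            # Simple blend of prediction and the associated measurement centroid;
            # the real tracker instead performs gating, association, and filtering.
            self.s[:3] += self.gain * (measured_position - self.s[:3])

    # One frame: associate the whole point cloud with a single track (illustration only)
    points = [(2.0, 0.10, 0.05, 0.30, 15.0),    # (range, azimuth, elevation, doppler, snr)
              (2.1, 0.12, 0.04, 0.28, 12.0)]
    centroid = np.mean([spherical_to_cartesian(r, az, el) for r, az, el, *_ in points], axis=0)
    track = Track(centroid)
    track.predict()
    track.update(centroid)
    position, velocity, acceleration = track.s[:3], track.s[3:6], track.s[6:9]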

Table 4 lists benchmark results measuring the overall MIPS consumption of the signal-processing chain running on the DSP. The loading percentages assume a 50-ms total frame time.

Table 4. MIPS Use Summary

PARAMETER | TIME USED (ms) | LOADING (assuming 50-ms frame time)
Active frame time | 13.9 | 27.8%
Range-azimuth heatmap generation | 9.42 | 18.84%
2-pass CFAR | 1 | 2%
Elevation estimation | 11.7 | 23.4%
Doppler estimation | 2.15 | 4.3%
Total processing time | 22.335 | 44.67%
Total time | 36.235 | 72.47%