SPRADD2, August 2023
AM62A3, AM62A3-Q1, AM62A7, AM62A7-Q1

 

  1. Abstract
  2. Trademarks
  3. Introduction
  4. AM62A Processor
  5. System Block Diagram
  6. Driver and Occupancy Mirror System Data Flow
  7. Deep Learning Acceleration
  8. Functional Safety in DMS/OMS Applications Using AM62A
     1. Overview of Functional Safety Features on AM62A
  9. Functional Safety Targets and Assumptions of Use
  10. Functional Safety in DMS/OMS Data Flow
  11. LED Driver Illumination Use Case
  12. Summary
  13. References

Driver and Occupancy Mirror System Data Flow

Figure 4-1 illustrates the full data flow for processing a 5MP, 60fps RGB-IR sensor stream in a driver and occupancy monitoring system.

Figure 4-1. DMS/OMS Data Flow With AM62A

Upon reception via CSI2-RX, the RGB-IR data is fed into the newly developed RGB-IR pre-processing hardware integrated into the AM62A. This pre-processing hardware performs real-time separation of the RGB and IR components. The output can be either alternating RGB Bayer data and IR data at 30 frames per second for daytime mode, or IR data at 60 frames per second for night mode.
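The separation step can be modeled in software to make the idea concrete. The sketch below is illustrative only: the AM62A performs this in dedicated hardware, and the simple 2x2 CFA layout assumed here (with one IR pixel per quad and nearest-neighbor fill-in) is a hypothetical pattern, not the device's actual mosaic.

```python
# Illustrative software model of RGB-IR separation. The AM62A does this in
# hardware; the 2x2 pattern below (IR on odd rows, even columns) is an
# assumption for demonstration purposes.
def split_rgb_ir(frame, width, height):
    """Split a flat RGB-IR mosaic into a Bayer-like plane and an IR plane."""
    bayer = list(frame)
    ir = []
    for y in range(1, height, 2):            # assumed IR pixel positions
        for x in range(0, width, 2):
            idx = y * width + x
            ir.append(frame[idx])
            bayer[idx] = frame[idx - width]  # simple nearest-neighbor fill
    return bayer, ir

# Example: 4x4 frame where IR pixels carry the value 99, others 10
w, h = 4, 4
frame = [99 if (y % 2 == 1 and x % 2 == 0) else 10
         for y in range(h) for x in range(w)]
bayer, ir = split_rgb_ir(frame, w, h)
```

After the split, `ir` holds only the IR samples, and every IR position in `bayer` has been replaced by a neighboring visible-light sample, mirroring the two output streams described above.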

The received 5MP@30fps Bayer data is processed by the Vision Imaging Subsystem (VISS) module within the VPAC. The VISS module generates either RGB data or YUV data, depending on the desired image data format. This choice considers factors such as overall memory bandwidth requirements, image quality considerations, or input format compatibility with subsequent image processing modules. In this particular example, the NV12 format is utilized.
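The memory-bandwidth consideration can be made concrete with rough arithmetic: NV12 stores a full-resolution luma (Y) plane plus a half-size interleaved chroma (UV) plane, so it costs 1.5 bytes per pixel versus 3 bytes per pixel for interleaved RGB888. The 2592x1944 geometry used below is one common 5MP layout, assumed here for illustration.

```python
def frame_bytes_nv12(width, height):
    # Y plane: width*height bytes; UV plane: width*height/2 bytes (4:2:0)
    return width * height * 3 // 2

def frame_bytes_rgb888(width, height):
    # 3 bytes per pixel for interleaved RGB
    return width * height * 3

# Assumed 5MP geometry at 30fps
w, h, fps = 2592, 1944, 30
nv12_mbps = frame_bytes_nv12(w, h) * fps / 1e6    # ~227 MB/s
rgb_mbps = frame_bytes_rgb888(w, h) * fps / 1e6   # ~453 MB/s
```

Choosing NV12 roughly halves the DDR traffic for this stream, which is why it is the format used in this example.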

The output from the VISS module is then scaled down to the desired image resolution for video streaming or recording purposes using the Multi-Scaler (MSC) module, which can generate multiple pyramid scales in a single pass. To correct the lens distortion introduced by the wide-angle lenses used in Driver Monitoring Systems (DMS) and Occupant Monitoring Systems (OMS), the scaled-down image is processed by the Lens Distortion Correction (LDC) module.
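The multi-scale output can be pictured as a resolution pyramid. The power-of-two halving below is a minimal sketch; the actual MSC hardware supports arbitrary scale ratios.

```python
def pyramid_resolutions(width, height, levels):
    """Return the (width, height) of each pyramid level, halving each time.
    Illustrative model only; real MSC scale ratios are configurable."""
    out = []
    for _ in range(levels):
        out.append((width, height))
        width, height = max(1, width // 2), max(1, height // 2)
    return out

# Example: scale a 1920x1080 stream down for lower-resolution analytics
scales = pyramid_resolutions(1920, 1080, 3)
```

Generating all levels in one pass avoids re-reading the full-resolution frame from memory for each output scale.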

For video streaming purposes, the H.264/H.265 video encode and decode hardware is employed to encode the 2MP@30fps video stream into the corresponding H.264/H.265 format. The encoded video stream can then be transmitted over Ethernet.
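The benefit of encoding before Ethernet transmission can be shown with rough numbers. The 8 Mbit/s H.264 target bitrate below is an assumed figure for illustration, not a TI specification.

```python
# Rough bandwidth comparison for a 1920x1080 (~2MP) 30fps stream.
w, h, fps = 1920, 1080, 30
raw_bits_per_s = w * h * 12 * fps        # NV12 = 12 bits per pixel
encoded_bits_per_s = 8_000_000           # assumed H.264 target bitrate
compression_ratio = raw_bits_per_s / encoded_bits_per_s
```

At these assumed settings the raw stream (~746 Mbit/s) would saturate a 100 Mbit Ethernet link many times over, while the encoded stream fits comfortably.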

The IR data acquired from the sensor is primarily utilized for eye tracking (eyelid and gaze) and head tracking of the driver. At least 30fps of processing is required to detect eyelid opening and closing, a critical metric for drowsiness detection. Either classical computer vision techniques or deep learning methods are used to detect the eyes and key points around the eyes. Classical computer vision algorithms can be implemented on the Arm Cortex-A53 cores, while deep learning methods based on convolutional neural networks are implemented on the C7x/MMA. Whereas DMS tasks typically need to be processed at a minimum of 30fps, the OMS tasks need only be performed at 1-5fps.
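As an illustration of the classical key-point approach, a widely used heuristic is the eye aspect ratio (EAR): the ratio of vertical to horizontal distances between six eye landmarks, which drops sharply when the eyelid closes. This sketch is not TI code; the 6-point landmark layout and the 0.2 threshold are common conventions assumed here for demonstration.

```python
import math

def eye_aspect_ratio(landmarks):
    """landmarks: six (x, y) points around one eye, ordered p1..p6 as in
    the common 6-point eye model (p1/p4 horizontal corners, p2/p3 upper
    lid, p6/p5 lower lid)."""
    p1, p2, p3, p4, p5, p6 = landmarks
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def is_eye_closed(landmarks, threshold=0.2):  # threshold is an assumption
    return eye_aspect_ratio(landmarks) < threshold

# Open eye: large vertical gaps; closed eye: landmarks nearly collinear
open_eye = [(0, 0), (1, -1), (2, -1), (3, 0), (2, 1), (1, 1)]
closed_eye = [(0, 0), (1, -0.05), (2, -0.05), (3, 0), (2, 0.05), (1, 0.05)]
```

Running a per-frame check like this at 30fps, and counting how long the EAR stays below the threshold, is one simple way a blink/drowsiness statistic can be derived from the detected key points.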

AM62A is a heterogeneous processor with different computing cores dedicated to running DMS/OMS algorithms. The C7x/MMA is a hardware deep learning engine capable of delivering up to 2 Tera Operations Per Second (TOPS) of compute. This deep learning engine is optimized for low power consumption, enabling high-performance analytics within compact enclosures, such as rearview mirrors, without requiring additional cooling mechanisms.
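To put the 2 TOPS figure in perspective, the compute cost of a convolution layer can be estimated as roughly 2 * H * W * Cin * Cout * K * K operations (counting a multiply-accumulate as two operations). The two-layer network below is hypothetical, and real sustained throughput is lower than the peak rating; this is illustrative arithmetic only.

```python
def conv_ops(h, w, c_in, c_out, k):
    # Operations for one KxK convolution layer; one MAC counted as 2 ops.
    return 2 * h * w * c_in * c_out * k * k

# Hypothetical two-layer network: total ops per inference
ops_per_frame = (conv_ops(224, 224, 3, 64, 3)
                 + conv_ops(112, 112, 64, 128, 3))
tops_budget = 2e12  # C7x/MMA peak rating per the text

# Upper bound on frames per second at 100% utilization (never achieved)
fps_at_peak = tops_budget / ops_per_frame
```

Even with utilization well below peak, this kind of budget comfortably covers the 30fps DMS and 1-5fps OMS rates discussed above.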

Furthermore, the Cortex-A53 cores serve as an additional hardware resource for running DMS algorithms that have already been validated and proven on AM62x devices in DMS products. These cores provide supplementary performance when extra signal processing capability is required beyond the C7x/MMA deep learning accelerator.