
Surroundings Perception

The mobile robot must comprehend dynamic changes in the surroundings for safe navigation. Dynamic obstacles in the path of the mobile robot must be detected and avoided as quickly as possible. The mobile robot can also detect obstacles during localization, since the features coming from obstacles do not exist in the map. Identifying these features is not always successful, though: sometimes features belonging to obstacles are wrongly matched to features in the map, which corrupts the pose estimate. Therefore, it is important to identify and exclude the features belonging to obstacles for reliable localization.
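As a minimal sketch of this filtering step (in Python, with illustrative names and an assumed matching threshold, not taken from any TI SDK), observed features without a nearby counterpart in the map can be treated as obstacle features and excluded from localization:

import numpy as np

def reject_dynamic_features(features, map_features, max_match_dist=0.2):
    """Split observed features into static (map-consistent) and dynamic.

    features:     (N, 2) observed feature positions in the map frame,
                  already transformed by the current pose estimate.
    map_features: (M, 2) feature positions stored in the map.
    A feature with no map feature within max_match_dist (meters, an
    assumed tuning value) is treated as belonging to an obstacle.
    """
    static, dynamic = [], []
    for f in features:
        # Distance to the nearest map feature (brute force here; a k-d
        # tree is the usual choice for large maps).
        nearest = np.min(np.linalg.norm(map_features - f, axis=1))
        (static if nearest < max_match_dist else dynamic).append(f)
    return np.array(static), np.array(dynamic)

# Example: two map-consistent features and one obstacle feature.
map_feats = np.array([[1.0, 0.0], [2.0, 1.0]])
observed = np.array([[1.05, 0.02], [2.0, 0.98], [0.3, 0.7]])
static_f, dynamic_f = reject_dynamic_features(observed, map_feats)
print(len(static_f), "static,", len(dynamic_f), "dynamic")

Only the static features are then passed to the pose estimator, while the dynamic ones can be forwarded to obstacle avoidance.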

This is where AI and sensor fusion with other sensor modalities, such as millimeter-wave (mmWave) radar, come into the picture. Both AI and sensor fusion help the mobile robot perceive dynamic objects accurately and therefore navigate more safely and intelligently. Vision-based deep-learning networks can detect and classify obstacles and measure their distances to the mobile robot. TI mmWave radar is unique in that the radar provides range, velocity, and angle-of-arrival information for obstacles, with which the robot can navigate better without collision.
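A simple way to combine the two modalities is to project each radar return into the camera image and attach the radar range and velocity to any detection box the return falls into. The Python sketch below assumes a pinhole camera model with illustrative intrinsics and a shared camera/radar origin; the data layouts and helper name are hypothetical:

import numpy as np

def fuse_radar_with_detections(radar_points, boxes, fx=600.0, cx=320.0):
    """Attach radar range/velocity to camera bounding boxes.

    radar_points: list of (range_m, azimuth_rad, velocity_mps) tuples
                  from the mmWave radar, assumed to share the camera's
                  origin.
    boxes:        list of (x_min, x_max) pixel spans of detected objects.
    fx, cx:       assumed pinhole intrinsics (focal length, principal
                  point) in pixels.
    """
    fused = []
    for r, az, v in radar_points:
        # Radar point in camera coordinates: x lateral, z forward.
        x, z = r * np.sin(az), r * np.cos(az)
        if z <= 0:
            continue  # behind the camera, cannot project
        u = fx * x / z + cx  # project onto the image's horizontal axis
        for i, (x_min, x_max) in enumerate(boxes):
            if x_min <= u <= x_max:
                fused.append({"box": i, "range_m": r, "velocity_mps": v})
    return fused

# One detected object spanning pixels 300..360; one radar return ahead.
print(fuse_radar_with_detections([(4.0, 0.05, -0.8)], [(300, 360)]))

A fused detection of this kind tells the planner not only where an obstacle is, but also how fast it is approaching.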

The AM69A processor has four 512-bit C7x DSPs running at 1 GHz, each tightly coupled with one of four MMAs capable of 4,096 8-bit fixed-point multiply-accumulate (MAC) operations per cycle. Together, the four MMAs provide 32 dense Trillion Operations per Second (TOPS), which can support various deep-learning networks with multiple sensors simultaneously. In addition, AI application development on AM69A is made simpler and faster with the Processor SDK Linux for AM69A. This Software Development Kit (SDK) enables an interplay of multiple open-source components and deep-learning runtimes such as TFLite, ONNX, and TVM on top of the foundational Linux® component and the firmware packages for remote cores and HWAs. TI has converted and exported 100+ models from their original training frameworks in PyTorch, TensorFlow, and MXNet into a format friendly to the C7xMMA architecture and hosts them in the Edge AI Model Zoo. TI also provides Edge AI Studio, a collection of tools that accelerates the development of edge AI applications on TI's embedded processors, including AM69A. Edge AI Studio allows building, evaluating, and deploying deep-learning models. The TI E2E™ forums article How to simplify your embedded edge AI application development has more about free tools and software from TI designed to help with the development process.
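As an illustration, the following Python sketch runs one inference with the TFLite runtime, one of the runtimes named above. The model path is hypothetical, and the commented-out delegate lines, which would offload supported operators to the C7x/MMA through TI's TIDL delegate, use an assumed library name and option set; consult the Processor SDK Linux documentation for the exact values.

import numpy as np
import tflite_runtime.interpreter as tflite

# Optional offload to the C7x/MMA via the TIDL delegate. The shared-object
# name and options below are assumptions, not verified values:
# delegate = tflite.load_delegate("libtidl_tfl_delegate.so",
#                                 {"artifacts_folder": "./model-artifacts"})
# interpreter = tflite.Interpreter(model_path="detector.tflite",
#                                  experimental_delegates=[delegate])
interpreter = tflite.Interpreter(model_path="detector.tflite")  # CPU fallback
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy frame matching the model's expected input shape and dtype; a real
# application would feed preprocessed camera frames here.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
detections = interpreter.get_tensor(out["index"])
print(detections.shape)

The same application code runs unchanged whether the model executes on the Arm cores or is accelerated on the C7xMMA, which is what keeps development on AM69A fast.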