
Table of Contents

Abstract
1 Introduction
2 Localization and Mapping
    2.1 Simultaneous Localization and Mapping
    2.2 Graph SLAM
    2.3 Localization
3 Surroundings Perception
4 Path Planning
5 Summary

1 Introduction

Autonomous Mobile Robots (AMRs) help improve productivity and operational efficiency in manufacturing, warehousing, logistics, and other industries. For example, AMRs can carry packages in warehouses and logistics centers, vacuum-clean floors, and serve food and drinks in restaurants. Early AMRs operated in workspaces restricted from humans and navigated along predetermined paths guided by lanes or AprilTags on the ground. Therefore, early AMRs did not require many sensors or stringent functional safety features. Such robots that follow a predefined path are also called automated guided vehicles (AGVs). In contrast, recent AMRs are equipped with advanced sensors to operate in workspaces shared with humans and to navigate freely but safely through the environment, performing assigned tasks at designated locations with as little human intervention as possible.

As shown in Figure 1-1, an AMR must perform three main tasks to navigate safely on its own: localization, perception, and planning. First, the mobile robot must know its own location in the workspace; accurate localization is the minimum requirement for autonomous navigation. Once localized, the mobile robot must perceive the dynamic environment, including moving objects such as humans and other robots in operation. Finally, the robot must plan a path to the destination and control itself accordingly to avoid situations that raise safety concerns. This paper discusses how these tasks work and their inherent challenges, focusing mainly on localization and mapping.
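In practice, the three tasks typically run as stages of one repeating control loop: each cycle the robot estimates its pose, updates its view of nearby obstacles, and replans before issuing motion commands. The Python sketch below shows only that structure; every function in it is a hypothetical placeholder, not part of any TI software stack, and a real AMR would back them with SLAM, sensor fusion, and a motion planner.

```python
import math
import time

# Hypothetical placeholder implementations of the three tasks in
# Figure 1-1. A real AMR would replace these with SLAM, sensor
# fusion, and a motion planner.

def localize():
    """Return the current (x, y) pose estimate in the map frame."""
    return (0.0, 0.0)

def perceive(pose):
    """Return a list of (x, y) obstacle positions near the robot."""
    return []

def plan_path(pose, goal, obstacles):
    """Return a list of waypoints to the goal, or None if blocked."""
    return None if goal in obstacles else [goal]

def navigation_loop(goal, rate_hz=10.0, tolerance=0.1):
    """Repeat localize -> perceive -> plan until the goal is reached."""
    while True:
        pose = localize()                        # 1. where am I?
        if math.dist(pose, goal) < tolerance:    # arrived at destination
            break
        obstacles = perceive(pose)               # 2. what is around me?
        path = plan_path(pose, goal, obstacles)  # 3. how do I get there?
        if path is None:
            print("no safe path this cycle; holding position")
        time.sleep(1.0 / rate_hz)                # run at a fixed rate

navigation_loop(goal=(0.0, 0.0))
```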

Figure 1-1 Three Main Tasks for Autonomous Navigation

The AM69A processor is a heterogeneous microprocessor built for high-performance computing applications that combine traditional analytics and AI algorithms. Key components include eight Arm® Cortex®-A72 cores, a Vision Processing Accelerator (VPAC), four C7x Digital Signal Processor (DSP) cores with Matrix Multiplication Accelerator (MMA), a Graphics Processing Unit (GPU), a video codec, an isolated MCU island, and so forth. The VPAC integrates multiple accelerators, including the Vision Imaging Subsystem (VISS), that is, an Image Signal Processor (ISP), as well as Lens Distortion Correction (LDC) and a Multi-Scaler (MSC). Figure 1-2 illustrates the simplified AM69A block diagram. More details can be found in the AM69x Processors, Silicon Revision 1.0 data sheet. Multi-camera AI use cases on AM69A are introduced in the Advanced AI Vision Processing Using AM69A for Smart Camera Applications technical white paper. This paper explains why AM69A is the best processor to run all three tasks simultaneously for autonomous navigation.
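To make the VPAC's role concrete, the sketch below performs the same two preprocessing steps in software with OpenCV: lens distortion correction (the job of the LDC block) and multi-scale resizing (the job of the MSC block). The camera matrix and distortion coefficients are made-up illustrative values, and on AM69A these stages run in dedicated hardware rather than on the CPU, so this is an analogy, not TI code.

```python
import cv2
import numpy as np

# Illustrative (made-up) pinhole intrinsics and distortion coefficients.
# A real system would obtain these from camera calibration.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in camera frame

# Lens distortion correction: the software analog of the VPAC LDC block.
undistorted = cv2.undistort(frame, K, dist)

# Multi-scale outputs: the software analog of the VPAC MSC block, for
# example a full-resolution stream plus downscaled copies for AI inference.
scales = [1.0, 0.5, 0.25]
pyramid = [cv2.resize(undistorted, None, fx=s, fy=s) for s in scales]
print([img.shape for img in pyramid])
```

Offloading these per-frame pixel operations to the VPAC is what frees the Cortex-A72 cores and the C7x/MMA DSPs to run the localization, perception, and planning workloads concurrently.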

Figure 1-2 AM69A Simplified Block Diagram