SPRY344A January 2022 – March 2023 TDA4VM, TDA4VM-Q1


  1.   At a glance
  2.   Authors
  3.   Introduction
  4.   Defining AI at the edge
  5.   What is an efficient edge AI system?
    1.     Selecting an SoC architecture
    2.     Programmable core types and accelerators
  6.   Designing edge AI systems with TI vision processors
    1.     Deep learning accelerator
    2.     Imaging and computer vision hardware accelerators
    3.     Smart internal bus and memory architecture
    4.     Optimized system BOM
    5.     Easy-to-use software development environment
  7.   Conclusion

Designing edge AI systems with TI vision processors

TI's vision processor portfolio was created to enable efficient, scalable AI processing in applications where size and power constraints are key design challenges.

These processors, which include the AM6xA and TDA4 processor families, feature an SoC architecture with extensive integration for vision systems, including Arm® Cortex®-A72 or Cortex-A53 CPUs, internal memory, interfaces, and hardware accelerators that deliver 2 to 32 tera-operations per second (TOPS) of AI processing for deep learning.
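To see where a given design lands in that 2- to 32-TOPS range, it helps to translate a model's workload into TOPS. The sketch below is illustrative sizing arithmetic, not TI tooling; the model size, frame rate, and utilization figures are hypothetical placeholders.

```python
# Illustrative sizing arithmetic (not TI tooling): estimate the AI compute
# a model needs, then check it against a processor's rated TOPS.
def required_tops(macs_per_inference: float, fps: float,
                  utilization: float = 0.5) -> float:
    """TOPS needed: 1 MAC = 2 ops; derate by achievable utilization."""
    ops_per_second = macs_per_inference * 2 * fps
    return ops_per_second / (utilization * 1e12)

# e.g., a ~5 GMAC detection network at 30 fps, assuming 50% utilization
needed = required_tops(5e9, 30)
print(f"{needed:.2f} TOPS required")  # 0.60 TOPS

# Compare against the portfolio's 2- to 32-TOPS range
print("fits a 2-TOPS device:", needed <= 2.0)
```

Multiplying out more cameras, larger models, or higher frame rates in the same way indicates when a higher-TOPS device in the portfolio is needed.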

The AM6xA family uses the Arm Cortex-A MPU as the host processor, offloading computationally intensive tasks such as deep learning inference, imaging, vision, video and graphics processing to specialized hardware accelerators and programmable cores, as shown in Figure 2. Integrating these advanced system components into the processors helps edge AI designers streamline the system bill of materials. The portfolio offers scalable processing options, from AM62A processors for low-power applications with one or two cameras up to the AM68A (up to eight cameras) and AM69A (up to 12 cameras).

Figure 2. TI vision processor edge AI system partitioning.
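The partitioning idea can be summarized as a mapping of pipeline stages onto the SoC block best suited to each, leaving the Arm Cortex-A cores free for application logic. The stage and block names below are illustrative placeholders for the kind of assignment Figure 2 depicts, not a TI API.

```python
# Hypothetical sketch of heterogeneous partitioning: each vision-pipeline
# stage is assigned to the SoC block best suited to it. Names are
# illustrative, not a TI software interface.
PIPELINE = [
    ("camera capture",          "CSI-2 receiver"),
    ("image processing",        "ISP (hardware)"),
    ("lens correction",         "vision accelerator"),
    ("deep learning inference", "deep learning accelerator"),
    ("post-processing",         "DSP"),
    ("application logic",       "Arm Cortex-A CPU"),
]

def stages_on(block_substr: str) -> list[str]:
    """Return the pipeline stages assigned to blocks matching a name."""
    return [stage for stage, block in PIPELINE if block_substr in block]

# Stages offloaded from the CPU to dedicated accelerators:
print(stages_on("accelerator"))  # ['lens correction', 'deep learning inference']
```

Because only the final stage runs on the CPU in this partitioning, the Cortex-A cores retain headroom for the application itself, which is the efficiency argument the section makes.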