SPRY344A January 2022 – March 2023 TDA4VM, TDA4VM-Q1


  1.   At a glance
  2.   Authors
  3.   Introduction
  4.   Defining AI at the edge
  5.   What is an efficient edge AI system?
    1.     Selecting an SoC architecture
    2.     Programmable core types and accelerators
  6.   Designing edge AI systems with TI vision processors
    1.     Deep learning accelerator
    2.     Imaging and computer vision hardware accelerators
    3.     Smart internal bus and memory architecture
    4.     Optimized system BOM
    5.     Easy-to-use software development environment
  7.   Conclusion

Smart internal bus and memory architecture

Careful management of data movement and of the processor's memory architecture, so that cores do not block or delay one another when running concurrently, helps maximize overall system performance.

TI vision AI processors have a high-bandwidth bus interconnect with a nonblocking infrastructure and large internal memory. Multiple dedicated programmable DMA engines automate data movement at very high speeds. This design enables high utilization of the hardware accelerators while saving substantial double data rate (DDR) memory bandwidth. Reducing the number of DDR accesses lowers the power spent on external memory traffic, and thus the overall system power consumption.