Advancing intelligence at the edge
Scalable and efficient vision processors bring real-time intelligence to smart camera systems
One platform for single- and multi-camera systems
Our vision processors let you execute facial recognition, object detection, pose estimation and other artificial intelligence (AI) features in real time using the same software. With scalable performance for up to 12 cameras, you can build smart security cameras, autonomous mobile robots and everything in between.
Battery-powered systems such as security cameras and vacuum robots need low-cost vision processors that deliver optimized performance and low-power operation.
Machine vision and professional surveillance systems need vision processors with more performance, additional features and power efficiency.
Autonomous systems have complex designs that often include sensor fusion and functional safety features, demanding high performance at an affordable cost.
Why choose TI for edge AI vision applications?
Build and deploy affordable edge AI applications at scale with a portfolio of devices built for low, mid and high performance.
Energy efficient processing
Reduce system power, latency and cost with deep learning and vision accelerators that enable industry-leading energy efficiency.
Faster time to market
Reduce development time with end-to-end software and tools, including free hardware evaluation, image signal processing (ISP) tuning and AI model training.
Free evaluation and development tools
Edge AI Studio lets you build, evaluate and deploy deep learning models on an embedded device. It's free to use, and you can log in today.
- No installation is required because the tool resides in the cloud.
- Free access to hundreds of optimized, pre-trained models.
- Remote access to real evaluation hardware.
- Bring your own data (coming 2Q 2023), bring your own model.
Industry-standard APIs and frameworks
We support open-source frameworks from TensorFlow, PyTorch, ONNX, TVM and more to simplify your edge AI application development workflow.
Scalable vision processors
Our portfolio of vision processors is suitable for a wide range of embedded vision applications and offers performance scalability that supports systems with up to 12 cameras.
| | AM62A | AM68x | AM69x |
| --- | --- | --- | --- |
| Description | Targeting battery-powered systems such as security cameras, vacuum robots and lawn mowers | Optimized for multi-inference, real-time systems found in retail and factory automation | Built for high-performance sensor-fusion systems such as autonomous mobile robots |
| Performance | 2 TOPS | 8 TOPS | 32 TOPS |
| Power | As low as 1 W | As low as 6 W | As low as 15 W |
| Cameras | Up to 2 RGB-IR cameras | Up to 8 RGB cameras | Up to 12 RGB cameras |
| Maximum resolution | 5 MP | 12 MP | 12 MP |
| 4K frame rate | 30 fps | 60 fps | 60 fps (two simultaneous streams) |
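The camera counts, resolutions and frame rates above translate into an aggregate pixel-throughput budget for the ISP. A quick back-of-envelope check (megapixels per frame × fps × camera count; per-camera resolutions chosen here are illustrative, not specified configurations) shows why the high-end tier needs a much faster ISP:

```python
# Back-of-envelope pixel throughput: MP per frame x fps x camera count.
# The numbers below mirror the table above; per-camera resolutions are
# illustrative choices within each tier's stated maximum.
def throughput_mps(megapixels, fps, cameras=1):
    """Aggregate ISP load in megapixels per second."""
    return megapixels * fps * cameras

# One 4K (~8.3 MP) stream at 30 fps on the entry tier:
print(throughput_mps(8.3, 30))    # roughly 249 MP/s

# Twelve 2 MP cameras at 60 fps on the high-end tier:
print(throughput_mps(2, 60, 12))  # 1440 MP/s
```

The 1440 MP/s result in the second case lines up with the ISP throughput quoted for the AM69x starter kit below.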
AM62A starter kit for low-power Sitara™ processors
The SK-AM62A-LP starter kit (SK) evaluation module (EVM) is built around our AM62A AI vision processor, which includes an image signal processor (ISP) supporting up to 5 MP at 60 fps, a 2 tera-operations-per-second (TOPS) AI accelerator, a quad-core 64-bit Arm® Cortex®-A53 microprocessor, a (...)
AM68x starter kit for Sitara™ processors
The SK-AM68 starter kit evaluation module (EVM) is based on the AM68x vision SoC, which includes an image signal processor (ISP) supporting up to 480 MP/s, an 8 tera-operations-per-second (TOPS) AI accelerator, two 64-bit Arm® Cortex®-A72 CPUs, and support for H.264/H.265 video encode/decode. (...)
AM69 starter kit for Sitara™ processors
The SK-AM69 starter kit evaluation module (EVM) is based on the AM69x AI vision processor, which includes an image signal processor (ISP) supporting up to 1440 MP/s, a 32 tera-operations-per-second (TOPS) AI accelerator, eight 64-bit Arm® Cortex®-A72 cores, and H.264/H.265 video (...)
| Partner | Expertise |
| --- | --- |
| Phytec | Systems on modules |
| Allied Vision | Cameras and sensors |
| D3 Engineering | Camera, radar, sensor fusion, hardware, drivers and firmware |
| Ignitarium | AI services and robotics |
| RidgeRun | Linux development, GStreamer plug-ins and AI applications |
| Kudan | Simultaneous localization and mapping |
| Amazon Web Services | Cloud services for machine learning, model management and Internet of Things |
| e-con Systems | Camera design and services, imaging solutions, camera and ISP tuning, and computer vision |
| Scuttle Robotics LLC | Open-source, payload-ready mobile platform |
| FRAMOS | Camera and ISP tuning, 3D cameras, imaging solutions, and computer vision |
Training & resources
Explore a selection of self-paced lessons for edge AI and robotics development. Follow along with free cloud tools without having to purchase an evaluation module.
Edge AI Academy
Edge AI Academy is a great way to learn how to develop a smart application. Follow along using free cloud tools and progress at your own pace.
The fundamentals of edge AI development include:
- Hands-on coding projects
- Special topics for advanced users
Robotics Academy
Using open-source community platforms and free cloud tools, Robotics Academy teaches you how to build a robot that is smart, safe and energy efficient.
The fundamentals of robotics development include:
- Object tracking
- Collision avoidance
- Autonomous navigation
Edge AI demos
Browse our demos and those created by our ecosystem partners to see a sample of things you can build with our edge AI technology.
- Smart cameras
- Video analytics
- Autonomous machines
- Autonomous mobile robots