SCEA141 May 2024    LSF0102, PCA9306, SN74AXC8T245, TXU0304, TXV0106, TXV0106-Q1, TXV0108, TXV0108-Q1



Many of today’s electronic end equipment applications are adding artificial intelligence (AI) capabilities to bring new functionality and user experiences to end applications, enabling users to harness the power of AI in their day-to-day workflows. To bring AI capabilities to an end application, electronic system designers need to leverage large language models (LLMs) such as Generative Pre-trained Transformers (GPTs), which require high-performance compute capabilities on both the cloud side and the client side of an application. Enabling AI capabilities on device, or through cloud-based compute infrastructure, requires client system and cloud infrastructure designers to use the latest processor technologies.

GPT-based AI implementations require system designers to use not only the latest high-performance processors and FPGAs (CPU-based devices) but also the latest high-performance graphics processing units (GPUs), which are better suited for AI given their ability to process large amounts of data in parallel and the higher memory bandwidths needed for high-speed data transfer. Using the latest CPUs and GPUs to support AI functionality presents system designers with multiple design challenges.

One of these design challenges is overcoming the control and low-speed data I/O level mismatches that result from operating CPUs and GPUs at very low core voltages. Operating high-performance CPUs and GPUs at low core voltages is often an absolute requirement for achieving target performance levels given the thermal and power limitations of a specific processor. However, operating at low core voltages also limits the I/O voltage levels that these processors can support.

System designers often need a simple, efficient, and scalable way to connect the numerous I/O and control buses of their CPUs and GPUs to peripheral devices and subsystems. One design approach that lets system designers maintain the CPU’s or GPU’s lower core voltage while still resolving I/O level mismatches is to use simple voltage level translator devices. Level translation devices provide an easy and cost-effective solution for resolving a system’s I/O level mismatch challenges without compromising on performance, power, or size. See Figure 1.
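The mismatch check behind this design decision can be sketched in code: a driver can directly connect to a receiver only if its output-high and output-low levels fall on the correct sides of the receiver’s input thresholds; otherwise a level translator is needed. The threshold values used below are illustrative examples, not figures from this document or any datasheet.

```python
# Illustrative logic-level compatibility check. A level translator is needed
# when the driver's output levels cannot reliably cross the receiver's
# input thresholds. All voltages in volts; values are hypothetical examples.

def needs_level_translator(driver_voh, driver_vol, receiver_vih, receiver_vil):
    """Return True if the driver cannot reliably switch the receiver.

    driver_voh:   minimum output-high voltage of the driver
    driver_vol:   maximum output-low voltage of the driver
    receiver_vih: minimum voltage the receiver recognizes as logic high
    receiver_vil: maximum voltage the receiver recognizes as logic low
    """
    high_ok = driver_voh >= receiver_vih  # logic high is recognized
    low_ok = driver_vol <= receiver_vil   # logic low is recognized
    return not (high_ok and low_ok)

# Example: a low-voltage GPU I/O rail driving a 3.3 V CMOS peripheral input
# (CMOS VIH is commonly around 0.7 * VDD, roughly 2.3 V here).
print(needs_level_translator(driver_voh=1.0, driver_vol=0.2,
                             receiver_vih=2.3, receiver_vil=0.8))  # True
```

The same check run in the other direction (a 3.3 V driver into a low-voltage input) would flag an overvoltage problem instead, which is why bidirectional interfaces typically call for a dedicated translator rather than a resistor divider.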

Figure 1. AI Accelerator Card Block Diagram

Integrated level-shifting devices are available in a wide assortment of I/O types, bit widths, data-rate ranges, current-drive capabilities, and package options. Texas Instruments’ portfolio of level shifter devices contains many different types of level translation functions that collectively address almost any interface requirement likely needed for high-performance compute use cases in AI applications. TI’s level translation portfolio includes auto-directional, direction-controlled, and fixed-direction level translators in industrial, automotive, and enhanced ratings. Table 1 shows common control interfaces found on high-performance CPU and GPU families and recommended level translation devices for each interface, supporting voltage ranges from < 0.8 V to 5.5 V. For more information on all of TI’s level translation devices, visit TI’s Level Translation Landing Page.

Table 1. Recommended Translator by Interface

Interface                | Up to 3.6 V                  | Up to 5.5 V
-------------------------|------------------------------|------------------------------
FET Replacement          | 2N7001T                      | SN74LXC1T45 / TXU0101
1-Bit GPIO/Clock Signal  | SN74AXC1T45                  | SN74LXC1T45 / TXU0101
2-Bit GPIO               | SN74AXC2T245                 | SN74LXC2T45 / TXU0x02
I2C/MDIO/SMBus           | TXS0102 / LSF0102 / PCA9306  | TXS0102 / LSF0102 / PCA9306
4-Bit GPIO               | SN74AXC4T245                 | TXB0104 / TXU0104
UART                     | SN74AXC4T245                 | TXB0104 / TXU0204
SPI                      | SN74AXC4T774 / TXB0104       | TXB0104 / TXU0304
JTAG                     | SN74AXC4T774 / TXB0104       | TXB0104 / TXU0204
I2S/PCM                  | SN74AXC4T774 / TXB0104       | TXB0104 / TXU0204
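Table 1 is essentially a two-key lookup (interface and upper voltage rail), which can be encoded directly. The device groupings below come from Table 1; the `recommend` helper itself is a hypothetical convenience for illustration, not a TI tool.

```python
# Table 1 encoded as: interface -> (pick for rails up to 3.6 V,
#                                   pick for rails up to 5.5 V).
RECOMMENDED_TRANSLATORS = {
    "FET Replacement":  ("2N7001T",                    "SN74LXC1T45 / TXU0101"),
    "1-Bit GPIO/Clock": ("SN74AXC1T45",                "SN74LXC1T45 / TXU0101"),
    "2-Bit GPIO":       ("SN74AXC2T245",               "SN74LXC2T45 / TXU0x02"),
    "I2C/MDIO/SMBus":   ("TXS0102 / LSF0102 / PCA9306",
                         "TXS0102 / LSF0102 / PCA9306"),
    "4-Bit GPIO":       ("SN74AXC4T245",               "TXB0104 / TXU0104"),
    "UART":             ("SN74AXC4T245",               "TXB0104 / TXU0204"),
    "SPI":              ("SN74AXC4T774 / TXB0104",     "TXB0104 / TXU0304"),
    "JTAG":             ("SN74AXC4T774 / TXB0104",     "TXB0104 / TXU0204"),
    "I2S/PCM":          ("SN74AXC4T774 / TXB0104",     "TXB0104 / TXU0204"),
}

def recommend(interface, max_rail_voltage):
    """Return the Table 1 device grouping for an interface and its
    highest-voltage I/O rail (3.6 V column vs. 5.5 V column)."""
    up_to_3v6, up_to_5v5 = RECOMMENDED_TRANSLATORS[interface]
    return up_to_3v6 if max_rail_voltage <= 3.6 else up_to_5v5

print(recommend("SPI", 3.3))   # SN74AXC4T774 / TXB0104
print(recommend("UART", 5.0))  # TXB0104 / TXU0204
```

As with any selection table, the final choice still depends on datasheet parameters (data rate, drive strength, direction control) for the specific part.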