Data Group Topologies and Routing Guidance

Regardless of the number of DDR4 devices implemented, the data line topology is always point-to-point. Minimize layer transitions during routing. If a layer transition is necessary, transition to a layer that uses the same reference plane. If this cannot be accommodated, make sure there are ground vias near the signal vias so that the return currents can transition between reference planes. The goal is to provide a low-inductance path for the return current. Also, to optimize length matching, TI recommends routing all nets within a single data routing group on the same layer so that all nets have the same number of vias and the same via barrel length.
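To make the single-layer, same-via guideline easy to audit, the following Python sketch checks whether every net in a data routing group shares the same routing layer, via count, and via barrel length. The CSV file name (byte0_nets.csv) and column names (layer, via_count, via_barrel_mils) are assumptions for illustration; substitute the net-properties report exported from your PCB tool.

import csv
from collections import defaultdict

def check_group_uniformity(csv_path):
    # Read one data routing group (for example, Byte 0) exported from the PCB tool.
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    # Collect the distinct values seen for each routing property across the group.
    seen = defaultdict(set)
    for row in rows:
        for prop in ("layer", "via_count", "via_barrel_mils"):
            seen[prop].add(row[prop])
    # Any property with more than one distinct value violates the guideline above.
    for prop, values in seen.items():
        if len(values) > 1:
            print(f"WARNING: {prop} differs across the group: {sorted(values)}")
        else:
            print(f"OK: {prop} is uniform ({next(iter(values))})")

check_group_uniformity("byte0_nets.csv")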

DQSP and DQSN lines are point-to-point signals routed as a differential pair. Figure 2-9 shows the DQS connection topology.

Figure 2-9 DDR4 DQS Topology

DQ and DM lines are point-to-point signals routed single-ended. Figure 2-10 shows the DQ and DM connection topology.

Figure 2-10 DDR4 DQ/DM Topology

Similar to the figures above for the CK and ADDR_CTRL routes, Figure 2-11 and Figure 2-12 show an example of the PCB routes for a DQS routing group and the associated data routing group nets.

The routing example in Figure 2-11 shows DQS0P and DQS0N, which are routed as a differential pair from the processor to the SDRAM that contains Byte 0. The pair is routed point-to-point without any board terminations. No stubs of any kind are allowed on these nets; any test access probes must be in line, without branches or stubs. Similar DQS pair routing exists from the processor to each SDRAM for every byte lane implemented.

Figure 2-12 shows a routing example for a single net in the Byte 0 routing group. The DQ and DM nets are routed single-ended and are also point-to-point without any stubs or board terminations. Point-to-point routes exist for each of the DQ and DM nets implemented.

The DQ and DM nets are routed along the same path as the DQSP and DQSN pair for that byte lane, so that the nets can be length matched to the DQS pair.
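To illustrate this matching step, the short sketch below reports each DQ/DM route length against the average length of the DQS pair for the byte lane and flags nets that fall outside an assumed match window. The net lengths and the 50-mil window are placeholders for illustration only; the actual limits are specified in Section 2.17.2, Data Group Routing Limits.

# Reference length for the byte lane is the average of the DQSP/DQSN routes.
dqs_pair_mils = {"DQS0P": 1250.0, "DQS0N": 1252.0}   # assumed example lengths
byte0_mils = {"DQ0": 1248.0, "DQ1": 1261.0, "DQ2": 1239.0, "DM0": 1302.0}   # assumed example lengths
match_window_mils = 50.0   # illustrative window, not a value from this document

dqs_ref = sum(dqs_pair_mils.values()) / len(dqs_pair_mils)

for net, length in sorted(byte0_mils.items()):
    skew = length - dqs_ref
    status = "OK" if abs(skew) <= match_window_mils else "ADJUST"
    print(f"{net}: {length:.1f} mils, skew to DQS = {skew:+.1f} mils -> {status}")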

Figure 2-11 DQS Routing to Two DDR4 SDRAM Devices
Figure 2-12 DQ/DM Routing to Two DDR4 SDRAM Devices