JAJSDP7 August 2017 AM5718-HIREL
Information in the following Applications section is not part of the TI component specification, and TI does not warrant its accuracy or completeness. TI's customers are responsible for determining suitability of components for their purposes. Customers should validate and test their design implementation to confirm system functionality.
It is possible that some voltage domains on the device are unused in some systems. In such cases, to ensure device reliability, the supply pins for those voltage domains must still be connected to a core power supply output.
These unused supplies, however, can be combined with any of the core supplies that are active in the system. For example, if the IVA and GPU domains are not used, they can be combined with the CORE domain, so that a single power supply drives the combined CORE, IVA, and GPU domains.
For the combined rail, the following relaxations apply:
Table 8-1 illustrates the approved and validated power supply connections to the Device for the SMPS outputs of the TPS659037 PMIC.
|TPS659037 POWER SUPPLY||VALID COMBINATION 1||VALID COMBINATION 2|
|SMPS4/5||vdd_dsp, vdd_gpu, vdd_iva||vdd_dsp|
|SMPS7||SW configuration after boot||vdd|
|SMPS9||SW configuration after boot 3.3V||vddshvx|
|LDO3||vdda_usb1, vdda_usb2, vdda_csi, vdda_sata||vdda_usb1, vdda_usb2, vdda_csi, vdda_sata|
|LDO4||vdda_hdmi, vdda_pcie, vdda_pcie0, vdda_usb3||vdda_hdmi, vdda_pcie, vdda_pcie0, vdda_usb3|
|LDOLN||1.8V PLLs||1.8V PLLs|
Table 8-2 illustrates the approved and validated power supply connections to the Device for the SMPS outputs of the TPS65916 PMIC.
|TPS65916 POWER SUPPLY||VALID COMBINATION 1|
|SMPS3||vdd_dsp, vdd_gpu, vdd_iva|
TI only supports board designs using DDR3 memory that follow the guidelines in this document. The switching characteristics and timing diagram for the DDR3 memory controller are shown in Table 8-3 and Figure 8-1.
|1||tc(DDR_CLK)||Cycle time, DDR_CLK||1.5||2.5(1)||ns|
The processor contains one DDR3 EMIF.
Because there are several possible combinations of device counts and single- or dual-side mounting, Table 8-4 summarizes the supported device configurations.
|NUMBER OF DDR3 DEVICES||DDR3 DEVICE WIDTH (BITS)||MIRRORED?||DDR3 EMIF WIDTH (BITS)|
The DDR3 interface schematic varies, depending upon the width of the DDR3 devices used and the width of the bus used (16 or 32 bits). General connectivity is straightforward and very similar. 16-bit DDR devices look like two 8-bit devices. Figure 8-2 and Figure 8-3 show the schematic connections for 32-bit interfaces using x16 devices.
Note that the 16-bit wide interface schematic is practically identical to the 32-bit interface (see Figure 8-2 and Figure 8-3); only the high-word DDR memories are removed and the unused DQS inputs are tied off.
When all or part of a DDR interface is unused, the proper method of handling the unused pins is to tie off the ddrx_dqsi pins to ground through a 1-kΩ resistor and to tie off the ddrx_dqsni pins to the corresponding vdds_ddrx supply through a 1-kΩ resistor. This must be done for each unused byte. Although these signals have internal pullups and pulldowns, external pullups and pulldowns provide additional protection against external electrical noise causing activity on the signals.
The vdds_ddrx and ddrx_vref0 power supply pins need to be connected to their respective power supplies even if ddrx is not being used. All other DDR interface pins can be left unconnected. Note that the supported modes of use for the DDR EMIF are 32 bits wide, 16 bits wide, or not used.
Table 8-5 shows the parameters of the JEDEC DDR3 devices that are compatible with this interface. Generally, the DDR3 interface is compatible with DDR3-1333 devices in the x8 or x16 widths.
|1||JEDEC DDR3 device speed grade(1)||DDR clock rate ≤ 400 MHz||DDR3-800||DDR3-1600|
|400 MHz < DDR clock rate ≤ 533 MHz||DDR3-1066||DDR3-1600|
|533 MHz < DDR clock rate ≤ 667 MHz||DDR3-1333||DDR3-1600|
|2||JEDEC DDR3 device bit width||x8||x16||Bits|
|3||JEDEC DDR3 device count(2)||2||4||Devices|
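As a sketch, the speed-grade selection in Table 8-5 can be expressed as a lookup on the DDR clock rate. The function name is illustrative, and the thresholds are taken directly from the rows above:

```python
def ddr3_min_speed_grade(ddr_clock_mhz):
    """Return the minimum JEDEC DDR3 speed grade for a given DDR clock
    rate, following the thresholds in Table 8-5 (sketch, not normative).
    Any grade up to DDR3-1600 is also acceptable per the table's
    maximum column."""
    if ddr_clock_mhz <= 400:
        return "DDR3-800"
    elif ddr_clock_mhz <= 533:
        return "DDR3-1066"
    elif ddr_clock_mhz <= 667:
        return "DDR3-1333"
    else:
        raise ValueError("DDR clock rate above supported maximum (667 MHz)")
```

For example, a 532-MHz DDR clock falls in the second row and requires at least a DDR3-1066 device.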
The minimum stackup for routing the DDR3 interface is a six-layer stackup as shown in Table 8-6. Additional layers may be added to the PCB stackup to accommodate other circuitry, enhance SI/EMI performance, or reduce the size of the PCB footprint. Complete stackup specifications are provided in Table 8-7.
|1||Signal||Top routing mostly vertical|
|3||Plane||Split power plane|
|4||Plane||Split power plane or Internal routing|
|6||Signal||Bottom routing mostly horizontal|
|PS1||PCB routing/plane layers||6|
|PS2||Signal routing layers||3|
|PS3||Full ground reference layers under DDR3 routing region(1)||1|
|PS4||Full 1.5-V power reference layers under the DDR3 routing region(1)||1|
|PS5||Number of reference plane cuts allowed within DDR routing region(2)||0|
|PS6||Number of layers between DDR3 routing layer and reference plane(3)||0|
|PS7||PCB routing feature size||4||Mils|
|PS8||PCB trace width, w||4||Mils|
|PS9||Single-ended impedance, Zo||50||75||Ω|
Figure 8-4 shows the required placement for the processor as well as the DDR3 devices. The dimensions for this figure are defined in Table 8-8. The placement does not restrict the side of the PCB on which the devices are mounted. The ultimate purpose of the placement is to limit the maximum trace lengths and allow for proper routing space. For a 16-bit DDR memory system, the high-word DDR3 devices are omitted from the placement.
|KOD36||DDR3 keepout region (1)|
|KOD37||Clearance from non-DDR3 signal to DDR3 keepout region (2) (3)||4||W|
The region of the PCB used for DDR3 circuitry must be isolated from other signals. The DDR3 keepout region is defined for this purpose and is shown in Figure 8-5. The size of this region varies with the placement and DDR routing. Additional clearances required for the keepout region are shown in Table 8-8. Non-DDR3 signals should not be routed on the DDR signal layers within the DDR3 keepout region. Non-DDR3 signals may be routed in the region, provided they are routed on layers separated from the DDR signal layers by a ground layer. No breaks should be allowed in the reference ground layers in this region. In addition, the 1.5-V DDR3 power plane should cover the entire keepout region. Also note that the two signals from the DDR3 controller should be separated from each other by the specification in Table 8-8 (see KOD37).
Bulk bypass capacitors are required for moderate speed bypassing of the DDR3 and other circuitry. Table 8-9 contains the minimum numbers and capacitance required for the bulk bypass capacitors. Note that this table only covers the bypass needs of the DDR3 controllers and DDR3 devices. Additional bulk bypass capacitance may be needed for other circuitry.
|1||vdds_ddrx bulk bypass capacitor count(1)||1||Devices|
|2||vdds_ddrx bulk bypass total capacitance||22||μF|
High-speed (HS) bypass capacitors are critical for proper DDR3 interface operation. It is particularly important to minimize the parasitic series inductance of the HS bypass capacitors, processor/DDR power, and processor/DDR ground connections. Table 8-10 contains the specifications for the HS bypass capacitors as well as for the power connections on the PCB. Generally speaking, it is good to:
|1||HS bypass capacitor package size(1)||0201||0402||10 Mils|
|2||Distance, HS bypass capacitor to processor being bypassed(2)(3)(4)||400||Mils|
|3||Processor HS bypass capacitor count per vdds_ddrx rail||See Section 8.4 and (11)||Devices|
|4||Processor HS bypass capacitor total capacitance per vdds_ddrx rail||See Section 8.4 and (11)||μF|
|5||Number of connection vias for each device power/ground ball(5)||Vias|
|6||Trace length from device power/ground ball to connection via(2)||35||70||Mils|
|7||Distance, HS bypass capacitor to DDR device being bypassed(6)||150||Mils|
|8||DDR3 device HS bypass capacitor count(7)||12||Devices|
|9||DDR3 device HS bypass capacitor total capacitance(7)||0.85||μF|
|10||Number of connection vias for each HS capacitor(8)(9)||2||Vias|
|11||Trace length from bypass capacitor connect to connection via(2)(9)||35||100||Mils|
|12||Number of connection vias for each DDR3 device power/ground ball(10)||1||Vias|
|13||Trace length from DDR3 device power/ground ball to connection via(2)(8)||35||60||Mils|
Use additional bypass capacitors if the return-current reference plane changes because DDR3 signals hop from one signal layer to another. The bypass capacitor here provides a path for the return current to hop planes along with the signal. Use as many of these return-current bypass capacitors as possible. Because these carry return currents for the signals, the signal via size may be used for these capacitors.
Table 8-11 lists the clock net classes for the DDR3 interface. Table 8-12 lists the signal net classes, and associated clock net classes, for signals in the DDR3 interface. These net classes are used for the termination and routing rules that follow.
|CLOCK NET CLASS||Processor PIN NAMES|
|DQS0||ddrx_dqs0 / ddrx_dqsn0|
|DQS1||ddrx_dqs1 / ddrx_dqsn1|
|DQS2(1)||ddrx_dqs2 / ddrx_dqsn2|
|DQS3(1)||ddrx_dqs3 / ddrx_dqsn3|
|SIGNAL NET CLASS||ASSOCIATED CLOCK NET CLASS||Processor PIN NAMES|
|ADDR_CTRL||CK||ddrx_ba[2:0], ddrx_a[14:0], ddrx_csnj, ddrx_casn, ddrx_rasn, ddrx_wen, ddrx_cke, ddrx_odti|
Signal terminators are required for the CK and ADDR_CTRL net classes. The data lines are terminated by ODT and, thus, the PCB traces should be unterminated. Detailed termination specifications are covered in the routing rules in the following sections.
ddrx_vref0 (VREF) is used as a reference by the input buffers of the DDR3 memories as well as the processor. VREF is intended to be half the DDR3 power supply voltage and is typically generated with the DDR3 VDDS and VTT power supply. It should be routed as a nominal 20-mil wide trace with 0.1 µF bypass capacitors near each device connection. Narrowing of VREF is allowed to accommodate routing congestion.
Like VREF, the nominal value of the VTT supply is half the DDR3 supply voltage. Unlike VREF, VTT is expected to source and sink current, specifically the termination current for the ADDR_CTRL net class Thévenin terminators. VTT is needed at the end of the address bus, and it should be routed as a power sub-plane. VTT should be bypassed near the terminator resistors.
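As a quick sanity check on terminator design, a Thévenin pair (a pull-up to the DDR3 supply and a pull-down to ground) is equivalent to a single resistor terminated to a VTT rail at the divider voltage. The sketch below computes that equivalent; the 1.5-V supply and 100-Ω values in the usage example are illustrative, not values taken from this document:

```python
def thevenin_equivalent(vdds, r_up, r_down):
    """Thevenin equivalent of a pull-up resistor to VDDS and a pull-down
    resistor to ground. Returns (v_tt, r_tt): the equivalent termination
    voltage and the equivalent single termination resistance to VTT."""
    v_tt = vdds * r_down / (r_up + r_down)   # divider voltage
    r_tt = (r_up * r_down) / (r_up + r_down)  # parallel resistance
    return v_tt, r_tt
```

With equal 100-Ω resistors on a 1.5-V rail, this yields VTT = 0.75 V (half the supply, matching VREF) and a 50-Ω equivalent termination, which is why the VTT rail must both source and sink current as the address lines switch.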
The CK and ADDR_CTRL net classes are routed similarly and are length matched to minimize skew between them. CK is a bit more complicated because it runs at a higher transition rate and is differential. The following subsections show the topology and routing for various DDR3 configurations for CK and ADDR_CTRL. The figures in the following subsections define the terms for the routing specification detailed in Table 8-13.
Four DDR3 devices are supported on the DDR EMIF consisting of four x8 DDR3 devices arranged as one bank (CS). These four devices may be mounted on a single side of the PCB, or may be mirrored in two pairs to save board space at a cost of increased routing complexity and parts on the backside of the PCB.
To save PCB space, the four DDR3 memories may be mounted as two mirrored pairs at a cost of increased routing and assembly complexity. Figure 8-10 and Figure 8-11 show the routing for CK and ADDR_CTRL, respectively, for four DDR3 devices mirrored in a two-pair configuration.
Two DDR3 devices are supported on the DDR EMIF consisting of two x8 DDR3 devices arranged as one bank (CS), 16 bits wide, or two x16 DDR3 devices arranged as one bank (CS), 32 bits wide. These two devices may be mounted on a single side of the PCB, or may be mirrored in a pair to save board space at a cost of increased routing complexity and parts on the backside of the PCB.
To save PCB space, the two DDR3 memories may be mounted as a mirrored pair at a cost of increased routing and assembly complexity. Figure 8-16 and Figure 8-17 show the routing for CK and ADDR_CTRL, respectively, for two DDR3 devices mirrored in a single-pair configuration.
A single DDR3 device is supported on the DDR EMIF consisting of one x16 DDR3 device arranged as one bank (CS), 16 bits wide.
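The supported configurations described above (and summarized in Table 8-4) can be captured in a small lookup table. This sketch ignores the mirrored versus single-sided mounting distinction, which affects routing but not the EMIF width:

```python
# Supported DDR3 device configurations per the text above; keys are
# (device count, device width in bits), values are DDR3 EMIF width in bits.
SUPPORTED_CONFIGS = {
    (4, 8): 32,   # four x8 devices, one bank (CS), single-sided or two mirrored pairs
    (2, 8): 16,   # two x8 devices, one bank (CS), single-sided or one mirrored pair
    (2, 16): 32,  # two x16 devices, one bank (CS)
    (1, 16): 16,  # single x16 device, one bank (CS)
}

def emif_width(count, width):
    """Return the EMIF width for a device configuration, or raise if the
    combination is not one of the validated configurations."""
    try:
        return SUPPORTED_CONFIGS[(count, width)]
    except KeyError:
        raise ValueError(
            f"{count} x{width} devices is not a supported configuration")
```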
No matter the number of DDR3 devices used, the data line topology is always point-to-point, so its definition is simple.
Care should be taken to minimize layer transitions during routing. If a layer transition is necessary, it is better to transition to a layer using the same reference plane. If this cannot be accommodated, ensure there are nearby ground vias to allow the return currents to transition between reference planes if both reference planes are ground or vdds_ddr. Ensure there are nearby bypass capacitors to allow the return currents to transition between reference planes if one of the reference planes is ground. The goal is to minimize the size of the return current loops.
Skew within the CK and ADDR_CTRL net classes directly reduces setup and hold margin and, thus, must be controlled. The only practical way to match lengths on a PCB is to lengthen the shorter traces up to the length of the longest net in the net class and its associated clock. A metric to establish this maximum length is the Manhattan distance: the length between two points on a PCB when connecting them only with horizontal and vertical segments. A reasonable target is to route each trace to within a small percentage of its Manhattan distance. CACLM is defined as the Clock Address Control Longest Manhattan distance.
Given the clock and address pin locations on the processor and the DDR3 memories, the maximum possible Manhattan distance can be determined given the placement. Figure 8-26 and Figure 8-27 show this distance for four loads and two loads, respectively. It is from this distance that the specifications on the lengths of the transmission lines for the address bus are determined. CACLM is determined similarly for other address bus configurations; that is, it is based on the longest net of the CK/ADDR_CTRL net class. For CK and ADDR_CTRL routing, these specifications are contained in Table 8-13.
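The Manhattan-distance metric and the matching target can be sketched as simple checks. The 5% window below is an assumed placeholder for illustration; the actual routed-length limits come from Table 8-13:

```python
def manhattan(p1, p2):
    """Manhattan distance between two PCB points (x, y): the sum of the
    horizontal and vertical separations, i.e. the length of a route made
    only of horizontal and vertical segments."""
    return abs(p1[0] - p2[0]) + abs(p1[1] - p2[1])

def within_matching_window(trace_len, caclm, tolerance=0.05):
    """Check that a routed CK/ADDR_CTRL trace length falls within a
    percentage window of CACLM (the Clock Address Control Longest
    Manhattan distance). The 5% default is illustrative only; use the
    limits specified in Table 8-13."""
    return caclm * (1 - tolerance) <= trace_len <= caclm * (1 + tolerance)
```

For example, a point 300 mils right and 400 mils down from the driver has a Manhattan distance of 700 mils, even though the straight-line distance is 500 mils; routed lengths are compared against the Manhattan figure.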
|CARS315||CK/ADDR_CTRL trace length||1020||ps|
|CARS316||Vias per trace||3(1)||vias|
|CARS317||Via count difference||1(15)||vias|
|CARS318||Center-to-center CK to other DDR3 trace spacing(9)||4w|
|CARS319||Center-to-center ADDR_CTRL to other DDR3 trace spacing(9)(10)||4w|
|CARS320||Center-to-center ADDR_CTRL to other ADDR_CTRL trace spacing(9)||3w|
|CARS321||CK center-to-center spacing(11) (12)|
|CARS322||CK spacing to other net(9)||4w|
Skew within the DQS and DQ/DM net classes directly reduces setup and hold margin and thus this skew must be controlled. The only way to practically match lengths on a PCB is to lengthen the shorter traces up to the length of the longest net in the net class and its associated clock. As with CK and ADDR_CTRL, a reasonable trace route length is to within a percentage of its Manhattan distance. DQLMn is defined as DQ Longest Manhattan distance n, where n is the byte number. For a 32-bit interface, there are four DQLMs, DQLM0-DQLM3. Likewise, for a 16-bit interface, there are two DQLMs, DQLM0-DQLM1.
It is not required, nor is it recommended, to match the lengths across all bytes. Length matching is only required within each byte.
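This per-byte matching rule can be sketched as a simple check: skew is evaluated within each byte lane independently, never across lanes. The skew limit would come from Table 8-14; the value used in the example is illustrative:

```python
def byte_lane_skew_ok(lengths_by_byte, max_skew):
    """Verify DQS/DQ/DM length matching within each byte lane only.
    lengths_by_byte maps a byte number to the list of routed lengths
    (or delays) for that lane's nets; matching across different bytes
    is intentionally not checked, per the guideline above."""
    return all(max(lens) - min(lens) <= max_skew
               for lens in lengths_by_byte.values())
```

Note that two lanes may have very different absolute lengths and still pass, because only the spread within each lane matters.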
Given the DQS and DQ/DM pin locations on the processor and the DDR3 memories, the maximum possible Manhattan distance can be determined given the placement. Figure 8-28 shows this distance for four loads. It is from this distance that the specifications on the lengths of the transmission lines for the data bus are determined. For DQS and DQ/DM routing, these specifications are contained in Table 8-14.
|DRS36||DQSn+ to DQSn- skew||1||ps|
|DRS37||DQSn to DBn skew(3)(4)||5(10)||ps|
|DRS38||Vias per trace||2(1)||vias|
|DRS39||Via count difference||0(10)||vias|
|DRS310||Center-to-center DBn to other DDR3 trace spacing(6)||4||w(5)|
|DRS311||Center-to-center DBn to other DBn trace spacing(7)||3||w(5)|
|DRS312||DQSn center-to-center spacing(8) (9)|
|DRS313||DQSn center-to-center spacing to other net||4||w(5)|
The High-Speed Interface Layout Guidelines Application Report (SPRAAR7) available from http://www.ti.com/lit/pdf/spraar7 provides guidance for successful routing of the high speed differential signals. This includes PCB stackup and materials guidance as well as routing skew, length and spacing limits. TI supports only designs that follow the board design guidelines contained in the application report.
The Power Distribution Network Implementation Guidelines Application Report (SPRABY8) available from http://www.ti.com/lit/pdf/spraby8 provides guidance for successful implementation of the power distribution network. This includes PCB stackup guidance as well as guidance for optimizing the selection and placement of the decoupling capacitors. TI supports only designs that follow the board design guidelines contained in the application report.
The following section details the routing guidelines that must be observed when routing the QSPI interfaces.
*A 0-Ω resistor (R1), located as close as possible to the qspi1_sclk pin, is a placeholder for fine-tuning if needed.
Although the impedance of a ground plane is low, it is, of course, not zero. Therefore, any noise current in the ground plane causes a voltage drop across the ground. Figure 8-32 shows the grounding scheme for a slow (low-frequency) clock generated from the internal oscillator.
Figure 8-33 shows the grounding scheme for a high-frequency clock.