SBAA572 February 2023 ADS9218, ADS9217

 


Introduction

A digital control loop is a type of closed-loop control system that uses digital signals and algorithms to regulate the output of lab instruments such as power supplies, source measure units, and electronic loads. The main goal of a digital control loop in an instrument is to maintain a stable output voltage or current despite changes in the load conditions or input voltage. The control algorithm can be tuned according to the load to minimize settling time at the output of the instrument.

Digital Control Loop

A typical digital control loop in an instrument includes the following components, as shown in Figure 1-1:

  1. Measurement unit: This component measures the output voltage, current, or both, of the instrument and converts the measurement into a digital signal that can be processed by the controller.
  2. Controller: The controller receives the digital measurement of the output of the instrument and calculates an error signal, which is the difference between the measured output and the reference signal. The error signal is then processed by the control algorithm to determine the control inputs that need to be applied to the instrument.
  3. Control algorithm: This component processes the error signal to determine the control inputs that are applied to the instrument. The control algorithm can be based on a simple proportional-integral-derivative (PID) controller, or a more complex control algorithm such as a linear or nonlinear controller.
  4. Digital-to-analog converter (DAC): The DAC is responsible for converting the digital control signals into analog signals that can be applied to the instrument.

In a digital control loop, the output of the instrument is fed back to the measurement unit, and the control inputs are applied to the instrument to correct any deviations from the desired output. This closed-loop configuration allows the instrument to continuously adjust the output to maintain a stable voltage or current, despite changes in the load conditions or input voltage.

Figure 1-1 Digital Control Loop Block Diagram
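
The control algorithm in the loop above can be as simple as a discrete-time PID update. The following sketch illustrates one loop iteration under that assumption; the structure, gain names, and units are illustrative placeholders rather than a specific device driver.

/* Gains and state for a minimal discrete-time PID controller.
 * The gain values are tuned to the load; the names here are placeholders. */
typedef struct {
    float kp, ki, kd;   /* proportional, integral, derivative gains */
    float integral;     /* accumulated error */
    float prev_error;   /* error from the previous iteration */
} pid_ctrl_t;

/* One control-loop iteration: error signal -> PID terms -> control output.
 * setpoint and measured share the same units (for example, volts);
 * dt is the loop period in seconds; the caller scales the return value
 * to the DAC input range. */
static float pid_update(pid_ctrl_t *c, float setpoint, float measured, float dt)
{
    float error = setpoint - measured;                 /* error signal */
    c->integral += error * dt;                         /* integral term */
    float derivative = (error - c->prev_error) / dt;   /* derivative term */
    c->prev_error = error;
    return c->kp * error + c->ki * c->integral + c->kd * derivative;
}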

The settling time or response time depends on the speed at which the control algorithm can adjust the DAC input to compensate for changes in the output voltage. The total delay in adjusting the output voltage includes the following components:

  1. Time required for measuring the output signal in the measurement unit
  2. Time required for the control algorithm to generate a new setting for the DAC
  3. Time required for the DAC output to settle to required accuracy

A low-latency analog-to-digital converter (ADC) is used in digital control loops to measure the system output signal quickly and accurately. By converting the measured analog signal to a digital value at high speed, the ADC minimizes the delay between the input signal and the response of the controller, allowing the loop to react quickly to changes in the input. This is critical in applications where fast and accurate sensing is important, such as high-speed data communication systems and precision control of mechanical systems.

An ADC based on the successive approximation register (SAR) architecture is used to achieve low latency in the measurement unit. The ADS9218 is an 18-bit, 10-MSPS ADC that converts an analog signal to a digital value in 100 ns. The controller must then read the digital value from the ADC, and this communication adds another 100 ns. Including both conversion and communication, the controller can obtain a digital value from an ADS9218-based measurement unit in 200 ns.
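
As a rough illustration of the latency budget, the snippet below sums the three delay components. The 200-ns measurement time comes from the ADS9218 figures above; the algorithm and DAC settling times are placeholder assumptions for a hypothetical system.

#include <stdio.h>

int main(void)
{
    /* Measurement unit: 100-ns conversion + 100-ns readout (ADS9218, from text) */
    double t_measure_ns = 100.0 + 100.0;
    /* Control algorithm execution time: placeholder assumption */
    double t_algorithm_ns = 300.0;
    /* DAC settling to the required accuracy: placeholder assumption */
    double t_dac_ns = 500.0;

    double t_loop_ns = t_measure_ns + t_algorithm_ns + t_dac_ns;
    printf("Loop delay: %.0f ns (max update rate ~%.2f MHz)\n",
           t_loop_ns, 1000.0 / t_loop_ns);
    return 0;
}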

A fast settling DAC is often used in digital control loops to minimize settling time and achieve a more accurate control response. By quickly converting digital control signals to analog output signals, the DAC helps to reduce the delay between the control input and the output response of the controller, allowing the controller to react more quickly to changes in the target output voltage and feedback from the measurement unit.

The accuracy of the measurement unit affects the accuracy of the output of the instrument. The measurement accuracy depends on the thermal drift of the errors in the measurement unit and on the operating temperature range. The offset and gain errors in the measurement can be calibrated using a calibration circuit after the instrument powers up to increase accuracy. The 18-bit resolution of the ADS9218 enables high accuracy, as shown in Table 1-1.
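
A common form of this power-up calibration is a two-point correction: measure two known reference levels through the calibration circuit, solve for gain and offset correction factors, and apply them to every subsequent reading. The sketch below shows only that arithmetic and is not a device-specific procedure.

/* Correction coefficients derived from a two-point calibration:
 * measure two known reference levels (for example, 0 V and a precision
 * full-scale reference), then solve for gain and offset. */
typedef struct {
    float gain;    /* ideal span divided by measured span */
    float offset;  /* residual error at the zero reference, same units as the reading */
} cal_t;

/* Derive correction factors from the two calibration points. */
static cal_t cal_from_points(float meas_zero, float meas_full,
                             float ideal_zero, float ideal_full)
{
    cal_t c;
    c.gain = (ideal_full - ideal_zero) / (meas_full - meas_zero);
    c.offset = ideal_zero - c.gain * meas_zero;
    return c;
}

/* Apply the correction to a raw measurement. */
static float cal_apply(const cal_t *c, float raw)
{
    return c->gain * raw + c->offset;
}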

Table 1-1 Measurement Accuracy of ADS9218 Under Various Operating Conditions
Condition                            | INL (ppm) | Offset Error (ppm) | Gain Error (ppm) | TUE (ppm) | Accuracy
TUE at 25°C                          | 3.81      | 7.63               | 1                | 8.58      | 0.0009%
TUE at 25°C after calibration        | 3.81      | 0                  | 0                | 3.81      | 0.0003%
TUE at 25°C ±5°C after calibration   | 3.81      | 5                  | 5                | 8.03      | 0.0008%
TUE at 25°C ±25°C after calibration  | 3.81      | 25                 | 25               | 35.56     | 0.0036%
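
The TUE column appears to combine INL, offset error, and gain error as a root-sum-square of uncorrelated terms. Assuming that method, the first row of the table can be reproduced as follows.

#include <math.h>
#include <stdio.h>

/* Total unadjusted error (TUE) as the root-sum-square of the
 * uncorrelated error terms, all expressed in ppm of full scale. */
static double tue_ppm(double inl, double offset, double gain)
{
    return sqrt(inl * inl + offset * offset + gain * gain);
}

int main(void)
{
    /* First table row: INL = 3.81 ppm, offset = 7.63 ppm, gain = 1 ppm */
    double tue = tue_ppm(3.81, 7.63, 1.0);
    printf("TUE = %.2f ppm = %.4f%%\n", tue, tue * 1e-4);
    /* Prints about 8.59 ppm (0.0009%), matching the 8.58-ppm table entry
     * to within rounding. */
    return 0;
}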

The output voltage of the instrument can change rapidly depending on the external load. Hence, the measurement unit must have wide bandwidth to accurately capture the fast-changing signals. The sampling rate and analog input bandwidth of the measurement unit need to be high enough to avoid undersampling or attenuating the signal, which can result in significant errors in the measurement. The ADS9218 features 90-MHz analog input bandwidth to capture fast-changing transients with low distortion. The circuit in Figure 1-2 shows a high-performance, wide-bandwidth circuit using the THS4541 and ADS9218 for simultaneously measuring two inputs. Simultaneous measurement of two inputs is required when the control algorithm needs both voltage and current values.
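
As a rough sanity check on these requirements, the snippet below models the analog front end as a single-pole response (an assumption made only to show the scale of the effect) and estimates the gain error on a 1-MHz input with 90-MHz bandwidth, along with the Nyquist margin at 10 MSPS.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double f_in_hz = 1.0e6;    /* full-scale signal frequency from the text */
    double f_bw_hz = 90.0e6;   /* ADS9218 analog input bandwidth */
    double f_s_hz  = 10.0e6;   /* sampling rate per channel */

    /* First-order (single-pole) model of the input bandwidth: an
     * assumption used only to estimate the order of the gain error. */
    double gain = 1.0 / sqrt(1.0 + pow(f_in_hz / f_bw_hz, 2.0));
    printf("Attenuation at %.1f MHz: %.1f ppm (%.5f dB)\n",
           f_in_hz / 1e6, (1.0 - gain) * 1e6, 20.0 * log10(gain));
    printf("Nyquist margin: fs/2 = %.1f MHz vs. fin = %.1f MHz\n",
           f_s_hz / 2e6, f_in_hz / 1e6);
    return 0;
}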

Figure 1-2 Low-Latency Wide-Bandwidth Signal-Chain for Full-Scale Inputs up to 1 MHz

The circuit in Figure 1-2 achieves –104-dB distortion with a full-scale 1-MHz input signal. The low-distortion measurement provides a linear output from the measurement unit over a wide range of input signal frequencies, as shown in Figure 1-3 and Figure 1-4.

fIN = 1 MHz, SNR = 90.6 dB, THD = –104 dB
Figure 1-3 Typical FFT at 10 MSPS per Channel: ADS9218
fIN = 1 MHz, SNR = 90.5 dB, THD = –104.2 dB
Figure 1-4 Typical FFT at 5 MSPS per Channel: ADS9217

The circuit in Figure 1-5 shows a low-power signal chain for a low-latency measurement unit with full-scale signal bandwidth up to 300 kHz. The low-distortion, low-noise measurements with a full-scale 100-kHz sine-wave input are shown in Figure 1-6 and Figure 1-7.

Figure 1-5 Low-Latency Low-Power Signal-Chain for Full-Scale Inputs up to 300 kHz
fIN = 100 kHz, SNR = 92 dB, THD = –117 dB
Figure 1-6 Typical FFT at 10 MSPS per Channel: ADS9218
fIN = 100 kHz, SNR = 92 dB, THD = –117 dB
Figure 1-7 Typical FFT at 5 MSPS per Channel: ADS9217