STM32 ADC Tutorial Complete Guide With Examples
In this tutorial, we’ll discuss the STM32 ADC (Analog-To-Digital Converter) module.
Starting with an introduction for the ADC as a digital circuit and then shifting the
attention to the STM32 ADC hardware and its features. We’ll get into the functional
description for the ADC in STM32 microcontrollers, how it works, and how to configure
it and make the best use of it. And let’s get right into it!
The ADC performs the inverse operation of a DAC: while an ADC (A/D) converts an analog voltage into digital data, the DAC (D/A) converts digital numbers into an analog voltage on its output pin.
The ADC is one of the most expensive electronic components, especially when it has a high sampling rate and high resolution. Therefore, it's a valuable resource in microcontrollers, and different manufacturers provide us (the firmware engineers) with various features so as to make the best use of it. They also give us the flexibility to make a lot of decisions, like sacrificing resolution in exchange for a higher sampling rate, or having the ADC trigger on an internal timer signal to periodically sample the analog channels, and much more, as we'll see in this tutorial.
For those who'd like a solid introduction to ADCs, how they work at the low level, the different types of ADCs, ADC errors, equations, and all other details, the ADC tutorial down below is a complete introductory guide for this topic and is highly recommended.
The analog watchdog feature allows the application to detect if the input voltage goes
outside the user-defined high or low thresholds. The ADC input clock is generated from
the PCLK2 clock divided by a Prescaler and it must not exceed 14 MHz.
ADC Features
12-bit resolution
Interrupt generation at End of Conversion, End of Injected conversion and Analog
watchdog event
Single and continuous conversion modes
Scan mode for automatic conversion of channel 0 to channel ‘n’
Self-calibration
Data alignment with in-built data coherency
Channel by channel programmable sampling time
External trigger option for both regular and injected conversion
Discontinuous mode
Dual-mode (on devices with 2 ADCs or more)
ADC conversion time: 1 µs at 56 MHz (1.17 µs at 72 MHz)
ADC supply requirement: 2.4 V to 3.6 V
ADC input range: VREF– ≤ VIN ≤ VREF+
DMA request generation during regular channel conversion
The ADC Clock
The ADCCLK clock provided by the Clock Controller is synchronous with the PCLK2
(APB2 clock). The RCC controller has a dedicated programmable Prescaler for the ADC
clock, and it must not exceed 14 MHz.
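For example, here's a minimal sketch with the STM32F1 HAL that keeps ADCCLK within the limit (assuming PCLK2 = 72 MHz):

/* Set the RCC ADC prescaler so ADCCLK = PCLK2 / 6 = 12 MHz (below the 14 MHz limit). */
__HAL_RCC_ADC_CONFIG(RCC_ADCPCLK2_DIV6);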
Channel Selection
The ADC needs a stabilization time of tSTAB before it starts converting accurately. After the start of the ADC conversion and after 14 clock cycles, the EOC flag is set and the 16-bit ADC data register contains the result of the conversion.
The AWD analog watchdog status bit is set if the analog voltage converted by the ADC is below a low threshold or above a high threshold. These thresholds are programmed in
the 12 least significant bits of the ADC_HTR and ADC_LTR 16-bit registers. An
interrupt can be enabled by using the AWDIE bit in the ADC_CR1 register. The
threshold value is independent of the alignment selected. And the analog watchdog can
be enabled on one or more channels.
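As a sketch of how this maps to the STM32F1 HAL (assuming an initialized handle hadc1; the channel and thresholds below are arbitrary example values):

/* Analog watchdog on a single regular channel. */
ADC_AnalogWDGConfTypeDef awdConfig = {0};
awdConfig.WatchdogMode  = ADC_ANALOGWATCHDOG_SINGLE_REG; /* guard one regular channel */
awdConfig.Channel       = ADC_CHANNEL_1;   /* example channel to monitor */
awdConfig.HighThreshold = 3000;            /* 12-bit value written to ADC_HTR */
awdConfig.LowThreshold  = 1000;            /* 12-bit value written to ADC_LTR */
awdConfig.ITMode        = ENABLE;          /* sets AWDIE for an interrupt on the AWD event */
HAL_ADC_AnalogWDGConfig(&hadc1, &awdConfig);

When the input leaves the window, the HAL invokes HAL_ADC_LevelOutOfWindowCallback(), which you can override in your code.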
The ALIGN bit in the ADC_CR2 register selects the alignment of the data stored after conversion. Data can be left- or right-aligned as shown in the diagram below.
The converted data value of the injected group channels is decreased by the user-defined offset written in the ADC_JOFRx registers, so the result can be a negative value. The SEXT bit is the extended sign value. For regular group channels, no offset is subtracted, so only twelve bits are significant.
Scan Mode
This mode is used to scan a group of analog channels. A single conversion is performed
for each channel of the group. After each end of conversion, the next channel of the
group is converted automatically. If the CONT bit is set, conversion does not stop at the
last selected group channel but continues again from the first selected group channel.
When using scan mode, the DMA bit must be set and the direct memory access controller is
used to transfer the converted data of regular group channels to SRAM after each
update of the ADC_DR register. The injected channel converted data is always stored in
the ADC_JDRx registers.
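A minimal sketch of a two-channel scan-mode setup with the STM32F1 HAL (assuming an initialized handle hadc1; channels and sampling times are arbitrary example choices):

/* Regular group of 2 channels converted automatically in scan mode. */
hadc1.Init.ScanConvMode       = ADC_SCAN_ENABLE; /* convert the whole group, channel by channel */
hadc1.Init.ContinuousConvMode = ENABLE;          /* CONT bit: restart from the first channel */
hadc1.Init.NbrOfConversion    = 2;               /* sequence length (L[3:0]) */
HAL_ADC_Init(&hadc1);

ADC_ChannelConfTypeDef sConfig = {0};
sConfig.Channel      = ADC_CHANNEL_0;            /* first rank in the sequence */
sConfig.Rank         = ADC_REGULAR_RANK_1;
sConfig.SamplingTime = ADC_SAMPLETIME_55CYCLES_5;
HAL_ADC_ConfigChannel(&hadc1, &sConfig);

sConfig.Channel = ADC_CHANNEL_1;                 /* second rank in the sequence */
sConfig.Rank    = ADC_REGULAR_RANK_2;
HAL_ADC_ConfigChannel(&hadc1, &sConfig);

Conversions would then typically be kicked off with HAL_ADC_Start_DMA(), as discussed in the ADC & DMA section below.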
Discontinuous Mode
This mode is enabled by setting the DISCEN bit in the ADC_CR1 register. It can be used
to convert a short sequence of n conversions (n ≤ 8) which is a part of the sequence of
conversions selected in the ADC_SQRx registers. The value of n is specified by writing
to the DISCNUM[2:0] bits in the ADC_CR1 register.
When an external trigger occurs, it starts the next n conversions selected in the
ADC_SQRx registers until all the conversions in the sequence are done. The total
sequence length is defined by the L[3:0] bits in the ADC_SQR1 register.
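In HAL terms, discontinuous mode maps to two fields of the init structure; here's a sketch with arbitrary numbers (3 conversions per trigger out of a sequence of 8, assuming an initialized handle hadc1):

/* Convert the 8-channel regular sequence in bursts of 3 per trigger (DISCEN + DISCNUM). */
hadc1.Init.ScanConvMode          = ADC_SCAN_ENABLE;
hadc1.Init.ContinuousConvMode    = DISABLE;
hadc1.Init.NbrOfConversion       = 8;      /* total sequence length (L[3:0]) */
hadc1.Init.DiscontinuousConvMode = ENABLE; /* sets DISCEN in ADC_CR1 */
hadc1.Init.NbrOfDiscConversion   = 3;      /* n <= 8, written to DISCNUM[2:0] */
HAL_ADC_Init(&hadc1);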
When an external trigger is selected for ADC regular or injected conversion, only the
rising edge of the signal can start the conversion. Down below is a table for the possible
options for ADC1 & 2 external trigger inputs to start a conversion on regular group
channels.
Calibration is started by setting the CAL bit in the ADC_CR2 register. Once calibration is over, the CAL bit is reset by hardware and normal conversion can be performed. The calibration codes are stored in the ADC_DR register as soon as the calibration phase ends. It is recommended to perform a calibration once after each power-up.
The STM32 HAL does provide a function within the ADC APIs dedicated to starting the calibration process, and as said before, it's a recommended step after initializing the ADC hardware at system power-up.
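On the STM32F1 series, for example, that function is HAL_ADCEx_Calibration_Start(). A typical power-up sequence looks like this (assuming an initialized handle hadc1):

/* Run the ADC self-calibration once after power-up, before the first conversion. */
HAL_ADC_Init(&hadc1);                /* configure the ADC first */
HAL_ADCEx_Calibration_Start(&hadc1); /* returns once hardware clears the CAL bit */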
Example:
With an ADCCLK = 14 MHz and a sampling time of 1.5 cycles:
Tconv = 1.5 + 12.5 = 14 cycles = 1 µs
SamplingRate = 1 / Tconv
For the previous example where Tconv = 1 µs, the SamplingRate = 1 / 1 µs = 1,000,000 samples/sec = 1 MS/s
The STM32 ADC has a resolution of 12-bit, which results in a total conversion time of SamplingTime + 12.5 clock cycles. However, higher sampling rates can be achieved by sacrificing resolution: it can be dropped down to 10-bit, 8-bit, or 6-bit, which makes the conversion time much shorter and the sampling rate higher. This can be configured in software by the programmer, and the STM32 HAL does provide APIs to set all the ADC parameters, including its resolution.
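On STM32 families with a selectable resolution (e.g. STM32F4/L4; the F1 ADC is fixed at 12-bit), this is a single field in the init structure. A minimal sketch:

/* Trade resolution for a shorter conversion time (STM32F4/L4-style HAL). */
hadc1.Init.Resolution = ADC_RESOLUTION_10B; /* options: 12B, 10B, 8B, or 6B */
HAL_ADC_Init(&hadc1);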
The ADC reference voltage pins are defined in the datasheet and assumed to be connected to a voltage level in a certain range, as shown in the table below. You can choose to set the reference voltage to its maximum allowable level for a wider conversion range but less voltage measurement resolution. Or, alternatively, you can set the reference voltage to the minimum allowable value for better voltage reading resolution. If you're using a development board, you may need to check out its schematic diagram, as it may not connect the ADC VREF at all, or it may connect it to 2.5 V for example. In that case, the ADC will saturate and give you 4095 before the input analog voltage reaches 3.3 V, and if you're wondering why, it may be because the reference voltage is set to a value less than 3.3 V. So it's something to consider.
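This is also why converting a raw reading back to volts must use the actual reference voltage on your board, not an assumed 3.3 V. A minimal sketch, assuming a 12-bit result; ADC_VREF is a hypothetical macro you'd set to your board's real VREF+:

#include <stdint.h>

#define ADC_VREF        3.3f    /* volts; e.g. 2.5f if the board ties VREF+ to 2.5 V */
#define ADC_FULL_SCALE  4095.0f /* maximum 12-bit code */

/* Convert a 12-bit raw ADC reading to volts. */
static float adc_raw_to_volts(uint16_t raw)
{
    return ((float)raw / ADC_FULL_SCALE) * ADC_VREF;
}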
ADC & DMA
Since the converted regular channel values are stored in a unique data register, it is necessary to use DMA for the conversion of more than one regular channel. This avoids the loss of the data already stored in the ADC_DR register.
Only the end of conversion of a regular channel generates a DMA request, which allows
the transfer of its converted data from the ADC_DR register to the destination location
selected by the user.
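A minimal sketch of this regular-group-to-SRAM flow with the HAL (assuming hadc1 is already configured for two scanned channels with circular DMA; start_adc_sampling is a hypothetical helper):

#define NUM_CHANNELS 2
static uint16_t adcBuffer[NUM_CHANNELS]; /* DMA destination in SRAM */

void start_adc_sampling(void)
{
    /* Each regular end of conversion raises a DMA request that moves ADC_DR into the buffer. */
    HAL_ADC_Start_DMA(&hadc1, (uint32_t *)adcBuffer, NUM_CHANNELS);
}

/* Invoked by the HAL when the whole buffer has been filled. */
void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    /* adcBuffer[] now holds the latest result of each regular channel. */
}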
Polling is the easiest way in code to perform an analog-to-digital conversion using the ADC on an analog input channel. However, it's not an efficient way in all cases, as it's considered a blocking way of using the ADC: we start the A/D conversion and wait for the ADC until it completes the conversion before the CPU can resume processing the main code.
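For reference, that blocking (polling) flow looks like the following with the HAL (a sketch assuming a configured handle hadc1; read_adc_blocking is a hypothetical helper):

/* Start a single conversion and busy-wait for the EOC flag. */
uint16_t read_adc_blocking(void)
{
    uint16_t result = 0;
    HAL_ADC_Start(&hadc1);                                /* start the A/D conversion */
    if (HAL_ADC_PollForConversion(&hadc1, 100) == HAL_OK) /* block until EOC (100 ms timeout) */
    {
        result = (uint16_t)HAL_ADC_GetValue(&hadc1);      /* read ADC_DR */
    }
    HAL_ADC_Stop(&hadc1);
    return result;
}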
However, when you're dealing with multiple channels in a circular mode or so, you'll have periodic interrupts from the ADC that are too much for the CPU to handle. This will introduce jitter, interrupt latency, and all sorts of timing issues to the system. This can be avoided by using DMA.
The offset error is the deviation between the first actual transition and the first ideal transition. The first transition occurs when the digital ADC output changes from 0 to 1. Ideally, when the analog input ranges between 0.5 LSB and 1.5 LSB, the digital output should be 1; that is, ideally, the first transition occurs at 0.5 LSB. The offset error is denoted by EO. The offset error can easily be calibrated out by the application firmware.
The gain error is the deviation between the last actual transition and the last ideal transition. It is denoted by EG. The last actual transition is the transition from 0xFFE to 0xFFF. Ideally, there should be a transition from 0xFFE to 0xFFF when the analog input is equal to VREF+ – 0.5 LSB. So for VREF+ = 3.3 V, the last ideal transition should occur at 3.299597 V. If the ADC provides the 0xFFF reading for VAIN < VREF+ – 0.5 LSB, then a negative gain error is obtained. The gain error is obtained by the formula below:
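Following the definitions above (the deviation between the last actual transition and the last ideal transition, expressed in LSB), the gain error can be written as:

EG = ( V(last actual transition) – (VREF+ – 0.5 LSB) ) / 1 LSB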
The integral linearity error is the maximum deviation between any actual transition and
the endpoint correlation line. The ILE is denoted by EL. The endpoint correlation line
can be defined as the line on the A/D transfer curve that connects the first actual
transition with the last actual transition.
EL is the deviation from this line for each transition. The endpoint correlation line thus
corresponds to the actual transfer curve and has no relation to the ideal transfer curve.
The ILE is also known as the integral non-linearity error (INL). The ILE is the integral of the DLE (differential linearity error) over the whole range.
As the ADC output is the ratio between the analog signal voltage and the reference voltage, any noise on the analog reference causes a change in the converted digital value. The VDDA analog power supply is used on some packages as the reference voltage (VREF+), so the quality of the VDDA power supply has an influence on the ADC error.
A small but high-frequency signal variation can result in big conversion errors during the sampling time. This noise is generated by electrical devices such as motors, engine ignition systems, and power lines. It affects the source signal (such as a sensor's output) by adding an unwanted signal. As a consequence, the ADC conversion results are not accurate.
To obtain the maximum ADC conversion precision, it is very important that the ADC
dynamic range matches the maximum amplitude of the signal to be converted. Let us
assume that the signal to be converted varies between 0 V and 2.5 V and that VREF+ is
equal to 3.3 V. The maximum signal value converted by the ADC is 3102 (2.5 V) as
shown in the diagram down below. In this case, there are 993 unused transitions (4095
– 3102 = 993). This implies a loss in the converted signal accuracy.
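As a quick check of those numbers: the output code is Code = (VAIN / VREF+) × 4095, so 2.5 V maps to (2.5 / 3.3) × 4095 ≈ 3102, leaving the codes from 3103 to 4095 (993 transitions) unused.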
The impedance of the analog signal source, or series resistance (RAIN), between the source and the pin, causes a voltage drop across it because of the current flowing into the pin. The charging of the internal sampling capacitor (CADC) is controlled by switches with a resistance RADC. With the addition of the source resistance (with RADC), the time required to fully charge the hold capacitor increases. If the sampling time is less than the time required to fully charge CADC through RADC + RAIN (ts < tc), the digital value converted by the ADC is less than the actual value.
This error can be reduced or completely eliminated by setting the sampling time of the analog channel in such a way that it guarantees an appropriate voltage level is present on the input pin before the ADC starts the conversion.
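With the HAL, the sampling time is chosen per channel at configuration time. A sketch (assuming an initialized handle hadc1; the 239.5-cycle option is the longest available on STM32F1, suited to high-impedance sources):

/* Give a high-impedance source the longest sampling time so CADC fully charges. */
ADC_ChannelConfTypeDef sConfig = {0};
sConfig.Channel      = ADC_CHANNEL_4;              /* example channel */
sConfig.Rank         = ADC_REGULAR_RANK_1;
sConfig.SamplingTime = ADC_SAMPLETIME_239CYCLES_5; /* lets CADC charge through RAIN + RADC */
HAL_ADC_ConfigChannel(&hadc1, &sConfig);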
When converting analog signals, it is necessary to account for the capacitance at the
source and the parasitic capacitance seen on the analog input pin. The source resistance
and capacitance form an RC network. In addition, the ADC conversion results may not
be accurate unless the external capacitor (CAIN + Cp) is fully charged to the level of the
input voltage.
The greater the value of (CAIN + Cp), the more limited the source frequency. The external capacitance at the source and the parasitic capacitance are denoted by CAIN and Cp, respectively.
Switching the I/Os may induce some noise in the analog input of the ADC due to
capacitive coupling between I/Os. Crosstalk may be introduced by PCB tracks that run
close to each other or that cross each other.
Internally, switching digital signals and I/Os introduces high-frequency noise. Switching high-sink I/Os may induce voltage dips in the power supply caused by current surges. A digital track that crosses an analog input track on the PCB may affect the analog signal.
Did you find this helpful? If yes, please consider supporting this work and
sharing these tutorials!
Stay tuned for the upcoming tutorials and don’t forget to SHARE these
tutorials. And consider SUPPORTING this work to keep publishing free
content just like this!