MicroPython ESP8266 Documentation
Release 1.9.4
3.2.1 REPL over the serial port
3.2.2 WebREPL - a prompt over WiFi
3.2.3 Using the REPL
3.3 The internal filesystem
3.3.1 Creating and reading files
3.3.2 Listing file and more
3.3.3 Start up scripts
3.3.4 Accessing the filesystem via WebREPL
3.4 Network basics
3.4.1 Configuration of the WiFi
3.4.2 Sockets
3.5 Network - TCP sockets
3.5.1 Star Wars Asciimation
3.5.2 HTTP GET request
3.5.3 Simple HTTP server
3.6 GPIO Pins
3.6.1 External interrupts
3.7 Pulse Width Modulation
3.7.1 Fading an LED
3.7.2 Control a hobby servo
3.8 Analog to Digital Conversion
3.9 Power control
3.9.1 Changing the CPU frequency
3.9.2 Deep-sleep mode
3.10 Controlling 1-wire devices
3.11 Controlling NeoPixels
3.12 Temperature and Humidity
3.13 Next steps
4 MicroPython libraries
4.1 Python standard libraries and micro-libraries
4.1.1 Builtin functions and exceptions
4.1.2 array – arrays of numeric data
4.1.3 gc – control the garbage collector
4.1.4 math – mathematical functions
4.1.5 sys – system specific functions
4.1.6 ubinascii – binary/ASCII conversions
4.1.7 ucollections – collection and container types
4.1.8 uerrno – system error codes
4.1.9 uhashlib – hashing algorithms
4.1.10 uheapq – heap queue algorithm
4.1.11 uio – input/output streams
4.1.12 ujson – JSON encoding and decoding
4.1.13 uos – basic “operating system” services
4.1.14 ure – simple regular expressions
4.1.15 uselect – wait for events on a set of streams
4.1.16 usocket – socket module
4.1.17 ussl – SSL/TLS module
4.1.18 ustruct – pack and unpack primitive data types
4.1.19 utime – time related functions
4.1.20 uzlib – zlib decompression
4.2 MicroPython-specific libraries
4.2.1 btree – simple BTree database
4.2.2 framebuf — Frame buffer manipulation
4.2.3 machine — functions related to the hardware
4.2.4 micropython – access and control MicroPython internals
4.2.5 network — network configuration
4.2.6 uctypes – access binary data in a structured way
4.3 Libraries specific to the ESP8266
4.3.1 esp — functions related to the ESP8266
6.2.5 import
6.3 Builtin Types
6.3.1 Exception
6.3.2 bytearray
6.3.3 bytes
6.3.4 float
6.3.5 int
6.3.6 list
6.3.7 str
6.3.8 tuple
6.4 Modules
6.4.1 array
6.4.2 builtins
6.4.3 deque
6.4.4 json
6.4.5 struct
6.4.6 sys
Index
CHAPTER
ONE
QUICK REFERENCE FOR THE ESP8266
See the corresponding section of the tutorial: Getting started with MicroPython on the ESP8266. It also includes a
troubleshooting subsection.
The MicroPython REPL is on UART0 (GPIO1=TX, GPIO3=RX) at baudrate 115200. Tab-completion is useful to
find out what methods an object has. Paste mode (ctrl-E) is useful to paste a large slab of Python code into the REPL.
The machine module:
import machine
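Typical calls look like this (the frequency value is just an example):

machine.freq()          # get the current frequency of the CPU
machine.freq(160000000) # set the CPU frequency to 160 MHz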
The esp module:

import esp
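Two commonly used calls, shown here as a minimal sketch:

esp.osdebug(None)       # turn off vendor O/S debugging messages
esp.osdebug(0)          # redirect vendor O/S debugging messages to UART(0)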
1.3 Networking
import network
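A sketch of the usual station and access-point interface calls (the ESSID, password and AP name are placeholders):

wlan = network.WLAN(network.STA_IF) # create station interface
wlan.active(True)       # activate the interface
wlan.scan()             # scan for access points
wlan.isconnected()      # check if the station is connected to an AP
wlan.connect('essid', 'password') # connect to an AP
wlan.ifconfig()         # get the interface's IP/netmask/gw/DNS addresses

ap = network.WLAN(network.AP_IF) # create access-point interface
ap.active(True)         # activate the interface
ap.config(essid='ESP-AP') # set the ESSID of the access point

A useful function for connecting to your local WiFi network is: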
def do_connect():
    import network
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    if not wlan.isconnected():
        print('connecting to network...')
        wlan.connect('essid', 'password')
        while not wlan.isconnected():
            pass
    print('network config:', wlan.ifconfig())
Once the network is established the socket module can be used to create and use TCP/UDP sockets as usual.
import time
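Some typical delay and timing calls (the values are illustrative):

time.sleep(1)           # sleep for 1 second
time.sleep_ms(500)      # sleep for 500 milliseconds
time.sleep_us(10)       # sleep for 10 microseconds
start = time.ticks_ms() # get a millisecond counter
delta = time.ticks_diff(time.ticks_ms(), start) # compute time difference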
1.5 Timers
Virtual (RTOS-based) timers are supported. Use the machine.Timer class with timer ID of -1:
from machine import Timer

tim = Timer(-1)
tim.init(period=5000, mode=Timer.ONE_SHOT, callback=lambda t:print(1))
tim.init(period=2000, mode=Timer.PERIODIC, callback=lambda t:print(2))
Available pins are: 0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, which correspond to the actual GPIO pin numbers of ESP8266
chip. Note that many end-user boards use their own ad-hoc pin numbering (marked e.g. D0, D1, ...). As MicroPython
supports different boards and modules, physical pin numbering was chosen as the lowest common denominator. For
mapping between board logical pins and physical chip pins, consult your board documentation.
Note that Pin(1) and Pin(3) are REPL UART TX and RX respectively. Also note that Pin(16) is a special pin (used for
wakeup from deepsleep mode) and may be not available for use with higher-level classes like Neopixel.
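A sketch of typical pin usage with the machine.Pin class (the pin numbers are illustrative):

from machine import Pin

p0 = Pin(0, Pin.OUT)    # create output pin on GPIO0
p0.on()                 # set pin to "on" (high) level
p0.off()                # set pin to "off" (low) level
p0.value(1)             # set pin to on/high

p2 = Pin(2, Pin.IN)     # create input pin on GPIO2
print(p2.value())       # get value, 0 or 1

p4 = Pin(4, Pin.IN, Pin.PULL_UP) # enable internal pull-up resistor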
PWM can be enabled on all pins except Pin(16). There is a single frequency for all channels, with range between 1
and 1000 (measured in Hz). The duty cycle is between 0 and 1023 inclusive.
Use the machine.PWM class:
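A minimal sketch (the pin number and values are illustrative):

from machine import Pin, PWM

pwm0 = PWM(Pin(0))      # create PWM object from a pin
pwm0.freq()             # get the current frequency
pwm0.freq(1000)         # set the frequency
pwm0.duty()             # get the current duty cycle
pwm0.duty(200)          # set the duty cycle
pwm0.deinit()           # turn off PWM on the pin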
ADC is available on a dedicated pin. Note that input voltages on the ADC pin must be between 0v and 1.0v.
Use the machine.ADC class:
from machine import ADC
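For example (the reading depends on the voltage applied to the pin):

adc = ADC(0)            # create an ADC object on the dedicated ADC pin
adc.read()              # read a value in the range 0-1024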
There are two SPI drivers. One is implemented in software (bit-banging) and works on all pins, and is accessed via
the machine.SPI class:
from machine import Pin, SPI
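A construction sketch; the sck/mosi/miso pin assignments here are arbitrary examples:

spi = SPI(-1, baudrate=100000, polarity=1, phase=0, sck=Pin(0), mosi=Pin(2), miso=Pin(4))
spi.init(baudrate=200000) # set the baudrate
spi.read(10)              # read 10 bytes on MISO
spi.write(b'12345')       # write 5 bytes on MOSI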
The hardware SPI is faster (up to 80Mhz), but only works on following pins: MISO is GPIO12, MOSI is GPIO13,
and SCK is GPIO14. It has the same methods as the bitbanging SPI class above, except for the pin parameters for the
constructor and init (as those are fixed):
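For example, a hardware SPI bus might be set up like this (the baudrate is illustrative):

from machine import Pin, SPI

hspi = SPI(1, baudrate=80000000, polarity=0, phase=0)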
The I2C driver is implemented in software and works on all pins, and is accessed via the machine.I2C class:
from machine import Pin, I2C
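A typical construction and transfer sketch (the pins and the device address 0x3a are illustrative):

i2c = I2C(scl=Pin(5), sda=Pin(4), freq=100000) # construct an I2C bus

i2c.readfrom(0x3a, 4)   # read 4 bytes from slave device with address 0x3a
i2c.writeto(0x3a, '12') # write '12' to slave device with address 0x3a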
See machine.RTC
from machine import RTC
rtc = RTC()
rtc.datetime((2017, 8, 23, 1, 12, 48, 0, 0)) # set a specific date and time
rtc.datetime() # get date and time
Connect GPIO16 to the reset pin (RST on HUZZAH). Then the following code can be used to sleep, wake and check
the reset cause:
import machine
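A sketch of the sleep/wake/check sequence (the 10 second alarm is just an example):

# configure RTC.ALARM0 to be able to wake the device
rtc = machine.RTC()
rtc.irq(trigger=rtc.ALARM0, wake=machine.DEEPSLEEP)

# check if the device woke from a deep sleep
if machine.reset_cause() == machine.DEEPSLEEP_RESET:
    print('woke from a deep sleep')

# set RTC.ALARM0 to fire after 10 seconds (waking the device)
rtc.alarm(rtc.ALARM0, 10000)

# put the device to sleep
machine.deepsleep()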
Be sure to put a 4.7k pull-up resistor on the data line. Note that the convert_temp() method must be called each
time you want to sample the temperature.
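A complete temperature read might look like the following sketch (GPIO12 for the data line is an assumption):

from machine import Pin
import onewire, ds18x20, time

ds = ds18x20.DS18X20(onewire.OneWire(Pin(12)))
roms = ds.scan()        # scan for DS18x20 devices on the bus
ds.convert_temp()       # start a conversion on all devices
time.sleep_ms(750)      # wait for the conversion to finish
for rom in roms:
    print(ds.read_temp(rom))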
import esp
esp.neopixel_write(pin, grb_buf, is800khz)
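The next fragment sets a pixel on an APA102 strip and assumes an apa driver object already exists; a possible setup, with illustrative clock/data pins, is:

from machine import Pin
from apa102 import APA102

clock = Pin(14, Pin.OUT)      # clock pin of the APA102
data = Pin(13, Pin.OUT)       # data pin of the APA102
apa = APA102(clock, data, 8)  # create a driver for 8 pixels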
apa[0] = (255, 255, 255, 31) # set the first pixel to white with a maximum brightness of 31
import esp
esp.apa102_write(clock_pin, data_pin, rgbi_buf)
import dht
import machine
d = dht.DHT11(machine.Pin(4))
d.measure()
d.temperature() # eg. 23 (°C)
d.humidity() # eg. 41 (% RH)
d = dht.DHT22(machine.Pin(4))
d.measure()
d.temperature() # eg. 23.6 (°C)
d.humidity() # eg. 41.3 (% RH)
WebREPL (REPL over WebSockets, accessible via a web browser) is an experimental feature available in
ESP8266 port. Download web client from https://github.com/micropython/webrepl (hosted version available at
http://micropython.org/webrepl), and configure it by executing:
import webrepl_setup
and following on-screen instructions. After reboot, it will be available for connection. If you disabled automatic
start-up on boot, you may run configured daemon on demand using:
import webrepl
webrepl.start()
The supported way to use WebREPL is by connecting to ESP8266 access point, but the daemon is also started on STA
interface if it is active, so if your router is set up and works correctly, you may also use WebREPL while connected to
your normal Internet access point (use the ESP8266 AP connection method if you face any issues).
Besides terminal/command prompt access, WebREPL also has provision for file transfer (both upload and download).
Web client has buttons for the corresponding functions, or you can use command-line client webrepl_cli.py from
the repository above.
See the MicroPython forum for other community-supported alternatives to transfer files to ESP8266.
CHAPTER
TWO
GENERAL INFORMATION ABOUT THE ESP8266 PORT
There is a multitude of modules and boards from different sources which carry the ESP8266 chip. MicroPython tries to
provide a generic port which would run on as many boards/modules as possible, but there may be limitations. Adafruit
Feather HUZZAH board is taken as a reference board for the port (for example, testing is performed on it). If you
have another board, please make sure you have a datasheet, schematics and other reference materials for your board
handy to look up various aspects of your board functioning.
To make a generic ESP8266 port and support as many boards as possible, the following design and implementation
decisions were made:
• GPIO pin numbering is based on ESP8266 chip numbering, not some “logical” numbering of a particular board.
Please have the manual/pin diagram of your board at hand to find correspondence between your board pins and
actual ESP8266 pins. We also encourage users of various boards to share this mapping via MicroPython forum,
with the idea to collect community-maintained reference materials eventually.
• All pins which make sense to support are supported by MicroPython (for example, pins which are used to
connect the SPI flash are not exposed, as they're unlikely to be useful for anything else, and operating on them will lead
to board lock-up). However, any particular board may expose only a subset of pins. Consult your board reference
manual.
• Some boards may lack external pins/internal connectivity to support ESP8266 deepsleep mode.
The datasheets and other reference material for the ESP8266 chip are available from the vendor site:
http://bbs.espressif.com/viewtopic.php?f=67&t=225 . They are the primary reference for the chip technical specifications, capabilities,
operating modes, internal functioning, etc.
For your convenience, some of the technical specifications are provided below:
• Architecture: Xtensa lx106
• CPU frequency: 80MHz overclockable to 160MHz
• Total RAM available: 96KB (part of it reserved for system)
• BootROM: 64KB
• Internal FlashROM: None
• External FlashROM: code and data, via SPI Flash. Normal sizes 512KB-4MB.
• GPIO: 16 + 1 (GPIOs are multiplexed with other functions, including external FlashROM, UART, deep sleep
wake-up, etc.)
• UART: One RX/TX UART (no hardware handshaking), one TX-only UART.
• SPI: 2 SPI interfaces (one used for FlashROM).
• I2C: No native external I2C (bitbang implementation available on any pins).
• I2S: 1.
• Programming: using BootROM bootloader from UART. Due to external FlashROM and always-available
BootROM bootloader, ESP8266 is not brickable.
The ESP8266 has very modest resources (first of all, RAM). So, please avoid allocating overly large container objects
(lists, dictionaries) and buffers. There is also no full-fledged OS to keep track of resources and automatically clean
them up, so that’s the task of a user/user application: please be sure to close open files, sockets, etc. as soon as possible
after use.
On boot, the MicroPython ESP8266 port executes the _boot.py script from internal frozen modules. It mounts the filesystem
in FlashROM, or if it’s not available, performs first-time setup of the module and creates the filesystem. This part
of the boot process is considered fixed, and not available for customization for end users (even if you build from
source, please refrain from changes to it; customization of early boot process is available only to advanced users and
developers, who can diagnose themselves any issues arising from modifying the standard process).
Once the filesystem is mounted, boot.py is executed from it. The standard version of this file is created during
first-time module set up and has commands to start a WebREPL daemon (disabled by default, configurable with
webrepl_setup module), etc. This file is customizable by end users (for example, you may want to set some
parameters or add other services which should be run on a module start-up). But keep in mind that incorrect modifica-
tions to boot.py may still lead to boot loops or lock-ups, requiring you to reflash the module from scratch. (In particular, it's
recommended that you use either webrepl_setup module or manual editing to configure WebREPL, but not both).
As a final step of the boot procedure, main.py is executed from the filesystem, if it exists. This file is a hook to start up a user
application each time on boot (instead of going to REPL). For small test applications, you may name them directly as
main.py, and upload to module, but instead it’s recommended to keep your application(s) in separate files, and have
just the following in main.py:
import my_app
my_app.main()
This will allow you to keep the structure of your application clear, as well as to install multiple applications on a
board, and switch among them.
The RTC in the ESP8266 has very poor accuracy; drift may be seconds per minute. As a workaround, to measure short enough
intervals you can use utime.time(), etc. functions, and for wall clock time, synchronize from the net using
included ntptime.py module.
Due to limitations of the ESP8266 chip the internal real-time clock (RTC) will overflow every 7:45h. If a long-
term working RTC time is required then time() or localtime() must be called at least once within 7 hours.
MicroPython will then handle the overflow.
Socket instances remain active until they are explicitly closed. This has two consequences. Firstly they occupy RAM,
so an application which opens sockets without closing them may eventually run out of memory. Secondly, a socket that is not properly
closed can cause the low-level part of the vendor WiFi stack to emit Lmac errors. This occurs if data comes
in for a socket and is not processed in a timely manner. This can overflow the WiFi stack input queue and lead to a
deadlock. The only recovery is by a hard reset.
The above may also happen after an application terminates and quits to the REPL for any reason including an ex-
ception. Subsequent arrival of data provokes the failure with the above error message repeatedly issued. So, sockets
should be closed in any case, regardless of whether an application terminates successfully or by an exception, for example
using try/finally:
sock = socket(...)
try:
    # Use sock
finally:
    sock.close()
The ESP8266 uses the axTLS library, which is one of the smallest TLS libraries with compatible licensing. However, it
also has some known issues/limitations:
1. No support for Diffie-Hellman (DH) key exchange and Elliptic-curve cryptography (ECC). This means it can’t
work with sites which force the use of these features (it works OK with classic RSA certificates).
2. Half-duplex communication nature. axTLS uses a single buffer for both sending and receiving, which leads
to considerable memory saving and works well with protocols like HTTP. But there may be problems with
protocols which don’t follow classic request-response model.
Besides axTLS own limitations, the configuration used for MicroPython is highly optimized for code size, which leads
to additional limitations (these may be lifted in the future):
3. Optimized RSA algorithms are not enabled, which may lead to slow SSL handshakes.
4. Stored sessions are not supported (may allow faster repeated connections to the same site in some circum-
stances).
Besides axTLS specific limitations described above, there’s another generic limitation with usage of TLS on the low-
memory devices:
5. The TLS standard specifies the maximum length of the TLS record (unit of TLS communication, the entire
record must be buffered before it can be processed) as 16KB. That’s almost half of the available ESP8266
memory, and inside a more or less advanced application would be hard to allocate due to memory fragmentation
issues. As a compromise, a smaller buffer is used, with the idea that the most interesting usage for SSL would
be accessing various REST APIs, which usually require much smaller messages. The buffer size is on the
order of 5KB, and is adjusted from time to time, taking as a reference being able to access https://google.com .
The smaller buffer however means that some sites can't be accessed using it, and it's not possible to stream large
amounts of data.
There are also some not implemented features specifically in MicroPython’s ussl module based on axTLS:
6. Certificates are not validated (this may make connections susceptible to man-in-the-middle attacks).
7. There is no support for client certificates (scheduled to be fixed in 1.9.4 release).
CHAPTER
THREE
MICROPYTHON TUTORIAL FOR ESP8266
This tutorial is intended to get you started using MicroPython on the ESP8266 system-on-a-chip. If it is your first time
it is recommended to follow the tutorial through in the order below. Otherwise the sections are mostly self contained,
so feel free to skip to those that interest you.
The tutorial does not assume that you know Python, but it also does not attempt to explain any of the details of the
Python language. Instead it provides you with commands that are ready to run, and hopes that you will gain a bit of
Python knowledge along the way. To learn more about Python itself please refer to https://www.python.org.
Using MicroPython is a great way to get the most out of your ESP8266 board. And vice versa, the ESP8266 chip is a
great platform for using MicroPython. This tutorial will guide you through setting up MicroPython, getting a prompt,
using WebREPL, connecting to the network and communicating with the Internet, using the hardware peripherals, and
controlling some external components.
Let’s get started!
3.1.1 Requirements
The first thing you need is a board with an ESP8266 chip. The MicroPython software supports the ESP8266 chip
itself and any board should work. The main characteristics of a board are how much flash it has, how the GPIO pins are
connected to the outside world, and whether it includes a built-in USB-serial convertor to make the UART available to
your PC.
The minimum requirement for flash size is 1Mbyte. There is also a special build for boards with 512KB, but it is
highly limited compared to the normal build: there is no support for a filesystem, and thus features which depend on it
won't work (WebREPL, upip, etc.). As such, the 512KB build will mostly be of interest to users who build from source
and fine-tune parameters for their particular application.
Names of pins will be given in this tutorial using the chip names (eg GPIO0) and it should be straightforward to find
which pin this corresponds to on your particular board.
If your board has a USB connector on it then most likely it is powered through this when connected to your PC.
Otherwise you will need to power it directly. Please refer to the documentation for your board for further details.
The first thing you need to do is download the most recent MicroPython firmware .bin file to load onto your ESP8266
device. You can download it from the MicroPython downloads page. From here, you have 3 main choices
• Stable firmware builds for 1024kb modules and above.
• Daily firmware builds for 1024kb modules and above.
• Daily firmware builds for 512kb modules.
If you are just starting with MicroPython, the best bet is to go for the Stable firmware builds. If you are an advanced,
experienced MicroPython ESP8266 user who would like to follow development closely and help with testing new
features, there are daily builds (note: you actually may need some development experience, e.g. being ready to follow
git history to know what new changes and features were introduced).
Support for 512kb modules is provided on a feature preview basis. For end users, it’s recommended to use modules
with flash of 1024kb or more. As such, only daily builds for 512kb modules are provided.
Once you have the MicroPython firmware (compiled code), you need to load it onto your ESP8266 device. There are
two main steps to do this: first you need to put your device in boot-loader mode, and second you need to copy across
the firmware. The exact procedure for these steps is highly dependent on the particular board and you will need to
refer to its documentation for details.
If you have a board that has a USB connector, a USB-serial convertor, and has the DTR and RTS pins wired in a
special way then deploying the firmware should be easy as all steps can be done automatically. Boards that have such
features include the Adafruit Feather HUZZAH and NodeMCU boards.
For best results it is recommended to first erase the entire flash of your device before putting on new MicroPython
firmware.
Currently we only support esptool.py to copy across the firmware. You can find this tool here: https://github.com/
espressif/esptool/, or install it using pip:
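For example:

pip install esptool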
Versions starting with 1.3 support both Python 2.7 and Python 3.4 (or newer). An older version (at least 1.2.1 is
needed) works fine but will require Python 2.7.
Any other flashing program should work, so feel free to try them out or refer to the documentation for your board to
see its recommendations.
Using esptool.py you can erase the flash with the command:
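Assuming the serial port is /dev/ttyUSB0 (adjust for your PC), erasing and then deploying look roughly like this; the firmware filename here is a placeholder for whatever file you downloaded:

esptool.py --port /dev/ttyUSB0 erase_flash
esptool.py --port /dev/ttyUSB0 --baud 460800 write_flash --flash_size=detect 0 esp8266-firmware.bin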
You might need to change the “port” setting to something else relevant for your PC. You may also need to reduce the
baudrate if you get errors when flashing (eg down to 115200). The filename of the firmware should also match the file
that you have.
For some boards with a particular FlashROM configuration (e.g. some variants of a NodeMCU board) you may need
to use the following command to deploy the firmware (note the -fm dio option):
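For example (port and firmware filename as above):

esptool.py --port /dev/ttyUSB0 --baud 460800 write_flash --flash_size=detect -fm dio 0 esp8266-firmware.bin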
If the above commands run without error then MicroPython should be installed on your board!
Once you have the firmware on the device you can access the REPL (Python prompt) over UART0 (GPIO1=TX,
GPIO3=RX), which might be connected to a USB-serial convertor, depending on your board. The baudrate is 115200.
The next part of the tutorial will discuss the prompt in more detail.
3.1.6 WiFi
After a fresh install and boot the device configures itself as a WiFi access point (AP) that you can connect to. The
ESSID is of the form MicroPython-xxxxxx where the x’s are replaced with part of the MAC address of your device
(so it will be the same every time, and most likely different for all ESP8266 chips). The password for the WiFi is
micropythoN (note the upper-case N). Its IP address will be 192.168.4.1 once you connect to its network. WiFi
configuration will be discussed in more detail later in the tutorial.
If you experience problems during flashing or with running firmware immediately after it, here are troubleshooting
recommendations:
• Be aware of and try to exclude hardware problems. There are 2 common problems: bad power source quality
and worn-out/defective FlashROM. Speaking of power source, not just raw amperage is important, but also low
ripple and noise/EMI in general. If you experience issues with a self-made or wall-wart style power supply, try
USB power from a computer. Unearthed power supplies are also known to cause problems, as they are a source of
increased EMI (electromagnetic interference) at the very least, and may lead to electrical device breakdown.
So, you are advised to avoid using unearthed power connections when working with ESP8266 and other boards.
In regard to FlashROM hardware problems, there are independent (not related to MicroPython in any way)
reports (e.g.) that on some ESP8266 modules, FlashROM can be programmed as little as 20 times before
programming errors occur. This is much less than 100,000 programming cycles cited for FlashROM chips of a
type used with ESP8266 by reputable vendors, which points to either production rejects, or second-hand worn-
out flash chips to be used on some (apparently cheap) modules/boards. You may want to use your best judgement
about source, price, documentation, warranty, post-sales support for the modules/boards you purchase.
• The flashing instructions above use a flashing speed of 460800 baud, which is a good compromise between speed
and stability. However, depending on your module/board, USB-UART convertor, cables, host OS, etc., the
above baud rate may be too high and lead to errors. Try a more common 115200 baud rate instead in such cases.
• If a lower baud rate didn't help, you may want to try an older version of esptool.py, which had a different program-
ming algorithm:
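For example, pinning the 1.2.1 version mentioned earlier (treat the exact version as an assumption):

pip install esptool==1.2.1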
This version doesn’t support --flash_size=detect option, so you will need to specify FlashROM size
explicitly (in megabits). It also requires Python 2.7, so you may need to use pip2 instead of pip in the
command above.
• The --flash_size option in the commands above is mandatory. Omitting it will lead to a corrupted
firmware.
• To catch incorrect flash content (e.g. from a defective sector on a chip), add --verify switch to the commands
above.
• Additionally, you can check the firmware integrity from a MicroPython REPL prompt (assuming you were able
to flash it and --verify option doesn’t report errors):
import esp
esp.check_fw()
If the last output value is True, the firmware is OK. Otherwise, it's corrupted and needs to be reflashed correctly.
• If you experience any issues with another flashing application (not esptool.py), try esptool.py; it is the generally
accepted flashing application in the ESP8266 community.
• If you still experience problems with even flashing the firmware, please refer to the esptool.py project page,
https://github.com/espressif/esptool , for additional documentation and the bug tracker where you can report problems.
• If you are able to flash firmware, but --verify option or esp.check_fw() return errors even after multiple
retries, you may have a defective FlashROM chip, as explained above.
REPL stands for Read Evaluate Print Loop, and is the name given to the interactive MicroPython prompt that you can
access on the ESP8266. Using the REPL is by far the easiest way to test out your code and run commands.
There are two ways to access the REPL: either via a wired connection through the UART serial port, or via WiFi.
The REPL is always available on the UART0 serial peripheral, which is connected to the pins GPIO1 for TX and
GPIO3 for RX. The baudrate of the REPL is 115200. If your board has a USB-serial convertor on it then you should
be able to access the REPL directly from your PC. Otherwise you will need to have a way of communicating with the
UART.
To access the prompt over USB-serial you need to use a terminal emulator program. On Windows TeraTerm is a good
choice, on Mac you can use the built-in screen program, and Linux has picocom and minicom. Of course, there are
many other terminal programs that will work, so pick your favourite!
For example, on Linux you can try running:
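For instance, with picocom (the device name /dev/ttyUSB0 is an example; yours may differ):

picocom /dev/ttyUSB0 -b115200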
Once you have made the connection over the serial port you can test if it is working by hitting enter a few times. You
should see the Python REPL prompt, indicated by >>>.
WebREPL allows you to use the Python prompt over WiFi, connecting through a browser. The latest versions of
Firefox and Chrome are supported.
For your convenience, WebREPL client is hosted at http://micropython.org/webrepl . Alternatively, you can install it
locally from the GitHub repository https://github.com/micropython/webrepl .
Before connecting to WebREPL, you should set a password and enable it via a normal serial connection. Initial
versions of MicroPython for ESP8266 came with WebREPL automatically enabled on the boot and with the ability to
set a password via WiFi on the first connection, but as WebREPL was becoming more widely known and popular, the
initial setup has switched to a wired connection for improved security:
import webrepl_setup
Follow the on-screen instructions and prompts. To make any changes active, you will need to reboot your device.
To use WebREPL connect your computer to the ESP8266’s access point (MicroPython-xxxxxx, see the previous
section about this). If you have already reconfigured your ESP8266 to connect to a router then you can skip this part.
Once you are on the same network as the ESP8266 you click the “Connect” button (if you are connecting via a router
then you may need to change the IP address, by default the IP address is correct when connected to the ESP8266’s
access point). If the connection succeeds then you should see a password prompt.
Once you type the password configured at the setup step above, press Enter once more and you should get a prompt
looking like >>>. You can now start typing Python commands!
Once you have a prompt you can start experimenting! Anything you type at the prompt will be executed after you
press the Enter key. MicroPython will run the code that you enter and print the result (if there is one). If there is an
error with the text that you enter then an error message is printed.
Try typing the following at the prompt:
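For example (this is the command the following paragraph refers to):

>>> print('hello esp8266!')
hello esp8266!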
Note that you shouldn’t type the >>> arrows, they are there to indicate that you should type the text after it at the
prompt. And then the line following is what the device should respond with. In the end, once you have entered the
text print("hello esp8266!") and pressed the Enter key, the output on your screen should look exactly like
it does above.
If you already know some Python you can now try some basic commands here. For example:
>>> 1 + 2
3
>>> 1 / 2
0.5
>>> 12**34
4922235242952026704037113243122008064
If your board has an LED attached to GPIO2 (the ESP-12 modules do) then you can turn it on and off using the
following code:
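A minimal sketch, assuming the LED is on GPIO2 as described:

>>> import machine
>>> pin = machine.Pin(2, machine.Pin.OUT)
>>> pin.on()
>>> pin.off()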
Note that the on() method of a Pin might turn the LED off, and off() might turn it on (or vice versa), depending on how the
LED is wired on your board. To resolve this, the machine.Signal class is provided.
Line editing
You can edit the current line that you are entering using the left and right arrow keys to move the cursor, as well as the
delete and backspace keys. Also, pressing Home or ctrl-A moves the cursor to the start of the line, and pressing End or ctrl-E moves it to the end of the line.
Input history
The REPL remembers a certain number of previous lines of text that you entered (up to 8 on the ESP8266). To recall
previous lines use the up and down arrow keys.
Tab completion
Pressing the Tab key will do an auto-completion of the current word that you are entering. This can be very useful to
find out functions and methods that a module or object has. Try it out by typing “ma” and then pressing Tab. It should
complete to “machine” (assuming you imported machine in the above example). Then type “.” and press Tab again to
see a list of all the functions that the machine module has.
Certain things that you type will need “continuing”, that is, will need more lines of text to make a proper Python
statement. In this case the prompt will change to ... and the cursor will auto-indent the correct amount so you can
start typing the next line straight away. Try this by defining the following function:
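A toggle function like the following will do (the name and body are a simple sketch):

>>> def toggle(p):
...     p.value(not p.value())
...
...
...
>>>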
In the above, you needed to press the Enter key three times in a row to finish the compound statement (that’s the three
lines with just dots on them). The other way to finish a compound statement is to press backspace to get to the start of
the line, then press the Enter key. (If you did something wrong and want to escape the continuation mode then press
ctrl-C; all lines will be ignored.)
The function you just defined allows you to toggle a pin. The pin object you created earlier should still exist (recreate
it if it doesn’t) and you can toggle the LED using:
>>> toggle(pin)
Let’s now toggle the LED in a loop (if you don’t have an LED then you can just print some text instead of calling
toggle, to see the effect):
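A sketch of such a loop, assuming the toggle function and pin object from above:

>>> import time
>>> while True:
...     toggle(pin)
...     time.sleep_ms(500)
...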
This will toggle the LED at 1Hz (half a second on, half a second off). To stop the toggling press ctrl-C, which will
raise a KeyboardInterrupt exception and break out of the loop.
The time module provides some useful functions for making delays and doing timing. Use tab completion to find out
what they are and play around with them!
Paste mode
Pressing ctrl-E will enter a special paste mode. This allows you to copy and paste a chunk of text into the REPL. If
you press ctrl-E you will see the paste-mode prompt:
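It looks roughly like this (the exact wording may vary between firmware versions):

paste mode; Ctrl-C to cancel, Ctrl-D to finish
===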
You can then paste (or type) your text in. Note that none of the special keys or commands work in paste mode (eg Tab
or backspace), they are just accepted as-is. Press ctrl-D to finish entering the text and execute it.
If your device has 1Mbyte or more of storage then it will be set up (upon first boot) to contain a filesystem. This
filesystem uses the FAT format and is stored in the flash after the MicroPython firmware.
MicroPython on the ESP8266 supports the standard way of accessing files in Python, using the built-in open()
function.
To create a file try:
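For example (the filename and contents are arbitrary):

>>> f = open('data.txt', 'w')
>>> f.write('some data')
9
>>> f.close()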
The “9” is the number of bytes that were written with the write() method. Then you can read back the contents of
this new file using:
>>> f = open('data.txt')
>>> f.read()
'some data'
>>> f.close()
Note that the default mode when opening a file is to open it in read-only mode, and as a text file. Specify 'wb' as the
second argument to open() to open for writing in binary mode, and 'rb' to open for reading in binary mode.
The os module can be used for further control over the filesystem. First import the module:
>>> import os
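Then, for example, list and remove files (the exact listing depends on what is stored on your device):

>>> os.listdir()
['boot.py', 'data.txt']
>>> os.remove('data.txt')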
There are two files that are treated specially by the ESP8266 when it starts up: boot.py and main.py. The boot.py
script is executed first (if it exists) and then once it completes the main.py script is executed. You can create these files
yourself and populate them with the code that you want to run when the device starts up.
You can access the filesystem over WebREPL using the web client in a browser or via the command-line tool. Please
refer to Quick Reference and Tutorial sections for more information about WebREPL.
The network module is used to configure the WiFi connection. There are two WiFi interfaces, one for the station
(when the ESP8266 connects to a router) and one for the access point (for other devices to connect to the ESP8266).
Create instances of these objects using:
>>> import network
>>> sta_if = network.WLAN(network.STA_IF)
>>> ap_if = network.WLAN(network.AP_IF)
You can also check the network settings of the interface by:
>>> ap_if.ifconfig()
('192.168.4.1', '255.255.255.0', '192.168.4.1', '8.8.8.8')
Upon a fresh install the ESP8266 is configured in access point mode, so the AP_IF interface is active and the STA_IF
interface is inactive. You can configure the module to connect to your own network using the STA_IF interface.
First activate the station interface:
>>> sta_if.active(True)

Then connect to your WiFi network:

>>> sta_if.connect('<essid>', '<password>')

Then check if it is connected, and view the network settings:

>>> sta_if.isconnected()
True
>>> sta_if.ifconfig()
('192.168.0.2', '255.255.255.0', '192.168.0.1', '8.8.8.8')
You can then disable the access-point interface if you no longer need it:
>>> ap_if.active(False)
Here is a function you can run (or put in your boot.py file) to automatically connect to your WiFi network:
def do_connect():
    import network
    sta_if = network.WLAN(network.STA_IF)
    if not sta_if.isconnected():
        print('connecting to network...')
        sta_if.active(True)
        sta_if.connect('<essid>', '<password>')
        while not sta_if.isconnected():
            pass
    print('network config:', sta_if.ifconfig())
3.4.2 Sockets
Once the WiFi is set up the way to access the network is by using sockets. A socket represents an endpoint on a
network device, and when two sockets are connected together communication can proceed. Internet protocols are built
on top of sockets, such as email (SMTP), the web (HTTP), telnet, ssh, among many others. Each of these protocols is
assigned a specific port, which is just an integer. Given an IP address and a port number you can connect to a remote
device and start talking with it.
The next part of the tutorial discusses how to use sockets to do some common and useful network tasks.
The building block of most of the internet is the TCP socket. These sockets provide a reliable stream of bytes between
the connected network devices. This part of the tutorial will show how to use TCP sockets in a few different cases.
The simplest thing to do is to download data from the internet. In this case we will use the Star Wars Asciimation
service provided by the blinkenlights.nl website. It uses the telnet protocol on port 23 to stream data to anyone that
connects. It’s very simple to use because it doesn’t require you to authenticate (give a username or password), you can
just start downloading data straight away.
The first thing to do is make sure we have the socket module available:
>>> import socket
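Then look up the address of the server; the host below is the blinkenlights.nl telnet service mentioned above:

>>> addr_info = socket.getaddrinfo('towel.blinkenlights.nl', 23)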
The getaddrinfo function actually returns a list of addresses, and each address has more information than we need.
We want to get just the first valid address, and then just the IP address and port of the server. To do this use:
>>> addr = addr_info[0][-1]
If you type addr_info and addr at the prompt you will see exactly what information they hold.
Using the IP address we can make a socket and connect to the server:
>>> s = socket.socket()
>>> s.connect(addr)
Now that we are connected we can download and display the data:
>>> while True:
... data = s.recv(500)
... print(str(data, 'utf8'), end='')
...
When this loop executes it should start showing the animation (use ctrl-C to interrupt it).
You should also be able to run this same code on your PC using normal Python if you want to try it out there.
The next example shows how to download a webpage. HTTP uses port 80 and you first need to send a “GET” request
before you can download anything. As part of the request you need to specify the page to retrieve.
Let’s define a function that can download and print a URL:
def http_get(url):
    _, _, host, path = url.split('/', 3)
    addr = socket.getaddrinfo(host, 80)[0][-1]
    s = socket.socket()
    s.connect(addr)
    s.send(bytes('GET /%s HTTP/1.0\r\nHost: %s\r\n\r\n' % (path, host), 'utf8'))
    while True:
        data = s.recv(100)
        if data:
            print(str(data, 'utf8'), end='')
        else:
            break
    s.close()
Make sure that you import the socket module before running this function. Then you can try:
>>> http_get('http://micropython.org/ks/test.html')
This should retrieve the webpage and print the HTML to the console.
The following code creates a simple HTTP server which serves a single webpage that contains a table with the state
of all the GPIO pins:
import machine
pins = [machine.Pin(i, machine.Pin.IN) for i in (0, 2, 4, 5, 12, 13, 14, 15)]

# HTML page template; the %s placeholder is filled with the table rows built below
html = """<!DOCTYPE html>
<html>
    <head> <title>ESP8266 Pins</title> </head>
    <body> <h1>ESP8266 Pins</h1>
        <table border="1"> <tr><th>Pin</th><th>Value</th></tr> %s </table>
    </body>
</html>
"""

import socket
addr = socket.getaddrinfo('0.0.0.0', 80)[0][-1]

s = socket.socket()
s.bind(addr)
s.listen(1)

while True:
    cl, addr = s.accept()
    print('client connected from', addr)
    cl_file = cl.makefile('rwb', 0)
    while True:
        line = cl_file.readline()
        if not line or line == b'\r\n':
            break
    rows = ['<tr><td>%s</td><td>%d</td></tr>' % (str(p), p.value()) for p in pins]
    response = html % '\n'.join(rows)
    cl.send(response)
    cl.close()
The way to connect your board to the external world, and control other components, is through the GPIO pins. Not all
pins are available to use, in most cases only pins 0, 2, 4, 5, 12, 13, 14, 15, and 16 can be used.
The pins are available in the machine module, so make sure you import that first. Then you can create a pin using:
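For example (GPIO0 is used here purely as an illustration):

>>> import machine
>>> pin = machine.Pin(0)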
Here, the “0” is the pin that you want to access. Usually you want to configure the pin to be input or output, and you
do this when constructing it. To make an input pin use:
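For example, with the internal pull-up enabled:

>>> pin = machine.Pin(0, machine.Pin.IN, machine.Pin.PULL_UP)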
You can either use PULL_UP or None for the input pull-mode. If it’s not specified then it defaults to None, which is
no pull resistor. GPIO16 has no pull-up mode. You can read the value on the pin using:
>>> pin.value()
0
The pin on your board may return 0 or 1 here, depending on what it’s connected to. To make an output pin use:
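For example:

>>> pin = machine.Pin(0, machine.Pin.OUT)

Then set its value using: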
>>> pin.value(0)
>>> pin.value(1)
Or:
>>> pin.off()
>>> pin.on()
All pins except number 16 can be configured to trigger a hard interrupt if their input changes. You can set code (a
callback function) to be executed on the trigger.
Let’s first define a callback function, which must take a single argument, being the pin that triggered the function. We
will make the function just print the pin:
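A minimal callback sketch:

>>> def callback(p):
...     print('pin change', p)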
And finally we need to tell the pins when to trigger, and the function to call when they detect an event:
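A sketch matching the description below (pins 0 and 2 as discussed):

>>> from machine import Pin
>>> p0 = Pin(0, Pin.IN)
>>> p0.irq(trigger=Pin.IRQ_FALLING, handler=callback)
>>> p2 = Pin(2, Pin.IN)
>>> p2.irq(trigger=Pin.IRQ_RISING | Pin.IRQ_FALLING, handler=callback)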
We set pin 0 to trigger only on a falling edge of the input (when it goes from high to low), and set pin 2 to trigger on
both a rising and falling edge. After entering this code you can apply high and low voltages to pins 0 and 2 to see the
interrupt being executed.
A hard interrupt will trigger as soon as the event occurs and will interrupt any running code, including Python code. As
such your callback functions are limited in what they can do (they cannot allocate memory, for example) and should
be as short and simple as possible.
Pulse width modulation (PWM) is a way to get an artificial analog output on a digital pin. It achieves this by rapidly
toggling the pin from low to high. There are two parameters associated with this: the frequency of the toggling, and
the duty cycle. The duty cycle is defined to be how long the pin is high compared with the length of a single period
(low plus high time). Maximum duty cycle is when the pin is high all of the time, and minimum is when it is low all
of the time.
On the ESP8266 the pins 0, 2, 4, 5, 12, 13, 14 and 15 all support PWM. The limitation is that they must all be at the
same frequency, and the frequency must be between 1Hz and 1kHz.
To use PWM on a pin you must first create the pin object, for example:
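For example, using pin 12 (any PWM-capable pin will do):

>>> import machine
>>> p12 = machine.Pin(12)

Then create the PWM object using:

>>> pwm12 = machine.PWM(p12)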
>>> pwm12.freq(500)
>>> pwm12.duty(512)
Note that the duty cycle is between 0 (all off) and 1023 (all on), with 512 being a 50% duty. Values beyond this
min/max will be clipped. If you print the PWM object then it will tell you its current configuration:
>>> pwm12
PWM(12, freq=500, duty=512)
You can also call the freq() and duty() methods with no arguments to get their current values.
The pin will continue to be in PWM mode until you deinitialise it using:
>>> pwm12.deinit()
Let’s use the PWM feature to fade an LED. Assuming your board has an LED connected to pin 2 (ESP-12 modules
do) we can create an LED-PWM object using:
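A minimal sketch, assuming the LED is on pin 2; the pulse helper below is just one way to sweep the duty cycle:

>>> led = machine.PWM(machine.Pin(2), freq=1000)
>>> import time, math
>>> def pulse(l, t):
...     for i in range(20):
...         l.duty(int(math.sin(i / 10 * math.pi) * 500 + 500))
...         time.sleep_ms(t)
>>> pulse(led, 50)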
Hobby servo motors can be controlled using PWM. They require a frequency of 50Hz and then a duty between about
40 and 115, with 77 being the centre value. If you connect a servo to the power and ground pins, and then the signal
line to pin 12 (other pins will work just as well), you can control the motor using:
>>> servo = machine.PWM(machine.Pin(12), freq=50)
>>> servo.duty(40)
>>> servo.duty(115)
>>> servo.duty(77)
The ESP8266 has a single pin (separate to the GPIO pins) which can be used to read analog voltages and convert them
to a digital value. You can construct such an ADC pin object using:
>>> import machine
>>> adc = machine.ADC(0)
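Then read its value with (the number shown is just an example reading):

>>> adc.read()
58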
The values returned from the read() function are between 0 (for 0.0 volts) and 1024 (for 1.0 volts). Please note
that this input can only tolerate a maximum of 1.0 volts and you must use a voltage divider circuit to measure larger
voltages.
The ESP8266 provides the ability to change the CPU frequency on the fly, and enter a deep-sleep state. Both can be
used to manage power consumption.
The machine module has a function to get and set the CPU frequency. To get the current frequency use:
>>> import machine
>>> machine.freq()
80000000
By default the CPU runs at 80MHz. It can be changed to 160MHz if you need more processing power, at the expense
of current consumption:
>>> machine.freq(160000000)
>>> machine.freq()
160000000
You can change to the higher frequency just while your code does the heavy processing and then change back when
it’s finished.
The deep-sleep mode will shut down the ESP8266 and all its peripherals, including the WiFi (but not including the
real-time-clock, which is used to wake the chip). This drastically reduces current consumption and is a good way to
make devices that can run for a while on a battery.
To be able to use the deep-sleep feature you must connect GPIO16 to the reset pin (RST on the Adafruit Feather
HUZZAH board). Then the following code can be used to sleep and wake the device:
import machine
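A sketch of that sequence (the 10 second alarm is illustrative):

# configure RTC.ALARM0 to be able to wake the device
rtc = machine.RTC()
rtc.irq(trigger=rtc.ALARM0, wake=machine.DEEPSLEEP)

# set RTC.ALARM0 to fire after 10 seconds (waking the device)
rtc.alarm(rtc.ALARM0, 10000)

# put the device to sleep
machine.deepsleep()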
Note that when the chip wakes from a deep-sleep it is completely reset, including all of the memory. The boot scripts
will run as usual and you can put code in them to check the reset cause to perhaps do something different if the device
just woke from a deep-sleep. For example, to print the reset cause you can use:
if machine.reset_cause() == machine.DEEPSLEEP_RESET:
print('woke from a deep sleep')
else:
print('power on or hard reset')
The 1-wire bus is a serial bus that uses just a single wire for communication (in addition to wires for ground and
power). The DS18B20 temperature sensor is a very popular 1-wire device, and here we show how to use the onewire
module to read from such a device.
For the following code to work you need to have at least one DS18S20 or DS18B20 temperature sensor with its data
line connected to GPIO12. You must also power the sensors and connect a 4.7k Ohm resistor between the data pin and
the power pin.
import time
import machine
import onewire, ds18x20
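A sketch of a complete reading, assuming the data line is on GPIO12 as described:

ds_pin = machine.Pin(12)
ds_sensor = ds18x20.DS18X20(onewire.OneWire(ds_pin))
roms = ds_sensor.scan()         # find DS18x20 devices on the bus
ds_sensor.convert_temp()        # start a temperature conversion
time.sleep_ms(750)              # wait for the conversion to complete
for rom in roms:
    print(ds_sensor.read_temp(rom))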
Note that you must execute the convert_temp() function to initiate a temperature reading, then wait at least 750ms
before reading the value.
NeoPixels, also known as WS2812 LEDs, are full-colour LEDs that are connected in serial, are individually address-
able, and can have their red, green and blue components set between 0 and 255. They require precise timing to control
them and there is a special neopixel module to do just this.
To create a NeoPixel object do the following:
>>> import machine, neopixel
>>> np = neopixel.NeoPixel(machine.Pin(4), 8)
This configures a NeoPixel strip on GPIO4 with 8 pixels. You can adjust the “4” (pin number) and the “8” (number of
pixels) to suit your set up.
To set the colour of pixels use:
>>> np[0] = (255, 0, 0) # set to red, full brightness
>>> np[1] = (0, 128, 0) # set to green, half brightness
>>> np[2] = (0, 0, 64) # set to blue, quarter brightness
For LEDs with more than 3 colours, such as RGBW pixels or RGBY pixels, the NeoPixel class takes a bpp parameter.
To setup a NeoPixel object for an RGBW Pixel, do the following:
>>> import machine, neopixel
>>> np = neopixel.NeoPixel(machine.Pin(4), 8, bpp=4)
In a 4-bpp mode, remember to use 4-tuples instead of 3-tuples to set the colour. For example to set the first three pixels
use:
>>> np[0] = (255, 0, 0, 128) # Orange in an RGBY Setup
>>> np[1] = (0, 255, 0, 128) # Yellow-green in an RGBY Setup
>>> np[2] = (0, 0, 255, 128) # Green-blue in an RGBY Setup
Then use the write() method to output the colours to the LEDs:
>>> np.write()
import time

def demo(np):
    n = np.n

    # cycle
    for i in range(4 * n):
        for j in range(n):
            np[j] = (0, 0, 0)
        np[i % n] = (255, 255, 255)
        np.write()
        time.sleep_ms(25)

    # bounce
    for i in range(4 * n):
        for j in range(n):
            np[j] = (0, 0, 128)
        if (i // n) % 2 == 0:
            np[i % n] = (0, 0, 0)
        else:
            np[n - 1 - (i % n)] = (0, 0, 0)
        np.write()
        time.sleep_ms(60)

    # fade in/out
    for i in range(0, 4 * 256, 8):
        for j in range(n):
            if (i // 256) % 2 == 0:
                val = i & 0xff
            else:
                val = 255 - (i & 0xff)
            np[j] = (val, 0, 0)
        np.write()

    # clear
    for i in range(n):
        np[i] = (0, 0, 0)
    np.write()
Execute it using:
>>> demo(np)
DHT (Digital Humidity & Temperature) sensors are low cost digital sensors with capacitive humidity sensors and
thermistors to measure the surrounding air. They feature a chip that handles analog to digital conversion and provide
a 1-wire interface. Newer sensors additionally provide an I2C interface.
The DHT11 (blue) and DHT22 (white) sensors provide the same 1-wire interface; however, the DHT22 requires a
separate object as it has a more complex calculation. DHT22 readings have 1 decimal place of resolution for both humidity
and temperature, while DHT11 readings are whole numbers for both.
A custom 1-wire protocol, which is different to Dallas 1-wire, is used to get the measurements from the sensor. The
payload consists of a humidity value, a temperature value and a checksum.
To use the 1-wire interface, construct the objects referring to their data pin:
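For example, for a DHT11 on GPIO4 (use dht.DHT22 instead for a DHT22):

>>> import dht
>>> import machine
>>> d = dht.DHT11(machine.Pin(4))

Then measure and read the values with: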
>>> d.measure()
>>> d.temperature()
>>> d.humidity()
Values returned from temperature() are in degrees Celsius and values returned from humidity() are a per-
centage of relative humidity.
The DHT11 can be called no more than once per second and the DHT22 once every two seconds for most accurate
results. Sensor accuracy will degrade over time. Each sensor supports a different operating range. Refer to the product
datasheets for specifics.
In 1-wire mode, only three of the four pins are used and in I2C mode, all four pins are used. Older sensors may still
have 4 pins even though they do not support I2C. The 3rd pin is simply not connected.
Pin configurations:
Sensor without I2C in 1-wire mode (eg. DHT11, DHT22, AM2301, AM2302):
1=VDD, 2=Data, 3=NC, 4=GND
Sensor with I2C in 1-wire mode (eg. DHT12, AM2320, AM2321, AM2322):
1=VDD, 2=Data, 3=GND, 4=GND
Sensor with I2C in I2C mode (eg. DHT12, AM2320, AM2321, AM2322):
1=VDD, 2=SDA, 3=GND, 4=SCL
You should use pull-up resistors for the Data, SDA and SCL pins.
To make newer I2C sensors work in backwards compatible 1-wire mode, you must connect both pins 3 and 4 to GND.
This disables the I2C interface.
DHT22 sensors are now sold under the name AM2302 and are otherwise identical.
That brings us to the end of the tutorial! Hopefully by now you have a good feel for the capabilities of MicroPython
on the ESP8266 and understand how to control both the WiFi and IO aspects of the chip.
There are many features that were not covered in this tutorial. The best way to learn about them is to read the full
documentation of the modules, and to experiment!
Good luck creating your Internet of Things devices!
CHAPTER
FOUR
MICROPYTHON LIBRARIES
This chapter describes modules (function and class libraries) which are built into MicroPython. There are a few
categories of such modules:
• Modules which implement a subset of standard Python functionality and are not intended to be extended by the
user.
• Modules which implement a subset of Python functionality, with a provision for extension by the user (via
Python code).
• Modules which implement MicroPython extensions to the Python standard libraries.
• Modules specific to a particular MicroPython port and thus not portable.
Note about the availability of the modules and their contents: This documentation in general aspires to describe
all modules and functions/classes which are implemented in MicroPython project. However, MicroPython is highly
configurable, and each port to a particular board/embedded system makes available only a subset of MicroPython
libraries. For officially supported ports, there is an effort to either filter out non-applicable items, or mark individual
descriptions with “Availability:” clauses describing which ports provide a given feature.
With that in mind, please still be warned that some functions/classes in a module (or even the entire module) described
in this documentation may be unavailable in a particular build of MicroPython on a particular system. The best place
to find general information of the availability/non-availability of a particular feature is the “General Information”
section which contains information pertaining to a specific MicroPython port.
Beyond the built-in libraries described in this documentation, many more modules from the Python standard library,
as well as further MicroPython extensions to it, can be found in micropython-lib.
The following standard Python libraries have been “micro-ified” to fit in with the philosophy of MicroPython. They
provide the core functionality of that module and are intended to be a drop-in replacement for the standard Python
library. Some modules below use a standard Python name, but prefixed with “u”, e.g. ujson instead of json. This
is to signify that such a module is a micro-library, i.e. it implements only a subset of CPython module functionality.
By naming them differently, a user has a choice to write a Python-level module to extend functionality for better
compatibility with CPython (indeed, this is what is done by the micropython-lib project mentioned above).
On some embedded platforms, where it may be cumbersome to add Python-level wrapper modules to achieve naming
compatibility with CPython, micro-modules are available both by their u-name, and also by their non-u-name. The
non-u-name can be overridden by a file of that name in your library path (sys.path). For example, import json
will first search for a file json.py (or package directory json) and load that module if it is found. If nothing is
found, it will fallback to loading the built-in ujson module.
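A common idiom for code that should run both on MicroPython and CPython is therefore to try one name and fall back to the other, for example:

try:
    import ujson as json
except ImportError:
    # running on CPython, or on a port without the u-module
    import json

print(json.dumps({'value': 42}))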
All builtin functions and exceptions are described here. They are also available via builtins module.
abs()
all()
any()
bin()
class bool
class bytearray
class bytes
See CPython documentation: bytes.
callable()
chr()
classmethod()
compile()
class complex
delattr(obj, name)
The argument name should be a string, and this function deletes the named attribute from the object given by
obj.
class dict
dir()
divmod()
enumerate()
eval()
exec()
filter()
class float
class frozenset
getattr()
globals()
hasattr()
hash()
hex()
id()
input()
class int
class slice
The slice builtin is the type that slice objects have.
sorted()
staticmethod()
class str
sum()
super()
class tuple
type()
zip()
Exceptions
exception AssertionError
exception AttributeError
exception Exception
exception ImportError
exception IndexError
exception KeyboardInterrupt
exception KeyError
exception MemoryError
exception NameError
exception NotImplementedError
exception OSError
See CPython documentation: OSError. MicroPython doesn’t implement errno attribute, instead use the
standard way to access exception arguments: exc.args[0].
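For example (a sketch; the uos call and file name are only for illustration):

import uos

try:
    uos.remove('no_such_file.txt')
except OSError as exc:
    print(exc.args[0])   # numeric error code, comparable against uerrno constants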
exception RuntimeError
exception StopIteration
exception SyntaxError
exception SystemExit
See CPython documentation: SystemExit.
exception TypeError
See CPython documentation: TypeError.
exception ValueError
exception ZeroDivisionError
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: array.
Supported format codes: b, B, h, H, i, I, l, L, q, Q, f, d (the latter 2 depending on the floating-point support).
Classes
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: gc.
Functions
gc.enable()
Enable automatic garbage collection.
gc.disable()
Disable automatic garbage collection. Heap memory can still be allocated, and garbage collection can still be
initiated manually using gc.collect().
gc.collect()
Run a garbage collection.
gc.mem_alloc()
Return the number of bytes of heap RAM that are allocated.
Difference to CPython
This function is a MicroPython extension.
gc.mem_free()
Return the number of bytes of available heap RAM, or -1 if this amount is not known.
Difference to CPython
This function is a MicroPython extension.
gc.threshold([amount ])
Set or query the additional GC allocation threshold. Normally, a collection is triggered only when a new allo-
cation cannot be satisfied, i.e. on an out-of-memory (OOM) condition. If this function is called, in addition to
OOM, a collection will be triggered each time after amount bytes have been allocated (in total, since the pre-
vious time such an amount of bytes have been allocated). amount is usually specified as less than the full heap
size, with the intention to trigger a collection earlier than when the heap becomes exhausted, and in the hope
that an early collection will prevent excessive memory fragmentation. This is a heuristic measure, the effect of
which will vary from application to application, as well as the optimal value of the amount parameter.
Calling the function without argument will return the current value of the threshold. A value of -1 means a
disabled allocation threshold.
Difference to CPython
This function is a MicroPython extension. CPython has a similar function - set_threshold(), but due to
different GC implementations, its signature and semantics are different.
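As an illustration (a heuristic sketch, not a recommended value), one might arrange for a collection to be triggered whenever roughly a quarter of the currently free heap has been allocated:

import gc

gc.collect()
gc.threshold(gc.mem_free() // 4 + gc.mem_alloc())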
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: math.
The math module provides some basic mathematical functions for working with floating-point numbers.
Note: On the pyboard, floating-point numbers have 32-bit precision.
Availability: not available on WiPy. Floating point support required for this module.
Functions
math.acos(x)
Return the inverse cosine of x.
math.acosh(x)
Return the inverse hyperbolic cosine of x.
math.asin(x)
Return the inverse sine of x.
math.asinh(x)
Return the inverse hyperbolic sine of x.
math.atan(x)
Return the inverse tangent of x.
math.atan2(y, x)
Return the principal value of the inverse tangent of y/x.
math.atanh(x)
Return the inverse hyperbolic tangent of x.
math.ceil(x)
Return an integer, being x rounded towards positive infinity.
math.copysign(x, y)
Return x with the sign of y.
math.cos(x)
Return the cosine of x.
math.cosh(x)
Return the hyperbolic cosine of x.
math.degrees(x)
Return radians x converted to degrees.
math.erf(x)
Return the error function of x.
math.erfc(x)
Return the complementary error function of x.
math.exp(x)
Return the exponential of x.
math.expm1(x)
Return exp(x) - 1.
math.fabs(x)
Return the absolute value of x.
math.floor(x)
Return an integer, being x rounded towards negative infinity.
math.fmod(x, y)
Return the remainder of x/y.
math.frexp(x)
Decomposes a floating-point number into its mantissa and exponent. The returned value is the tuple (m, e)
such that x == m * 2**e exactly. If x == 0 then the function returns (0.0, 0), otherwise the relation
0.5 <= abs(m) < 1 holds.
math.gamma(x)
Return the gamma function of x.
math.isfinite(x)
Return True if x is finite.
math.isinf(x)
Return True if x is infinite.
math.isnan(x)
Return True if x is not-a-number (NaN).
math.ldexp(x, exp)
Return x * (2**exp).
math.lgamma(x)
Return the natural logarithm of the gamma function of x.
math.log(x)
Return the natural logarithm of x.
math.log10(x)
Return the base-10 logarithm of x.
math.log2(x)
Return the base-2 logarithm of x.
math.modf(x)
Return a tuple of two floats, being the fractional and integral parts of x. Both return values have the same sign
as x.
math.pow(x, y)
Returns x to the power of y.
math.radians(x)
Return degrees x converted to radians.
math.sin(x)
Return the sine of x.
math.sinh(x)
Return the hyperbolic sine of x.
math.sqrt(x)
Return the square root of x.
math.tan(x)
Return the tangent of x.
math.tanh(x)
Return the hyperbolic tangent of x.
math.trunc(x)
Return an integer, being x rounded towards 0.
Constants
math.e
base of the natural logarithm
math.pi
the ratio of a circle’s circumference to its diameter
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: sys.
Functions
sys.exit(retval=0)
Terminate the current program with a given exit code. Underlyingly, this function raises a SystemExit exception. If an argument is given, its value is given as an argument to SystemExit.
sys.print_exception(exc, file=sys.stdout)
Print exception with a traceback to a file-like object file (or sys.stdout by default).
Difference to CPython
This is a simplified version of a function which appears in the traceback module in CPython. Unlike traceback.print_exception(), this function takes just the exception value instead of exception type, exception value, and traceback object; the file argument should be positional; further arguments are not supported.
CPython-compatible traceback module can be found in micropython-lib.
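For example, printing a traceback for a caught exception:

import sys

try:
    1 / 0
except Exception as exc:
    sys.print_exception(exc)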
Constants
sys.argv
A mutable list of arguments the current program was started with.
sys.byteorder
The byte order of the system ("little" or "big").
sys.implementation
Object with information about the current Python implementation. For MicroPython, it has the following attributes:
• name - string “micropython”
• version - tuple (major, minor, micro), e.g. (1, 7, 0)
This object is the recommended way to distinguish MicroPython from other Python implementations (note that
it still may not exist in the very minimal ports).
Difference to CPython
CPython mandates more attributes for this object, but the actual useful bare minimum is implemented in Mi-
croPython.
sys.maxsize
Maximum value which a native integer type can hold on the current platform, or maximum value representable
by MicroPython integer type, if it’s smaller than platform max value (that is the case for MicroPython ports
without long int support).
This attribute is useful for detecting the “bitness” of a platform (32-bit vs 64-bit, etc.). It is recommended not to compare this attribute to some value directly, but instead to count the number of bits in it:
bits = 0
v = sys.maxsize
while v:
    bits += 1
    v >>= 1
if bits > 32:
    # 64-bit (or more) platform
    ...
else:
    # 32-bit (or less) platform
    # Note that on 32-bit platform, value of bits may be less than 32
    # (e.g. 31) due to peculiarities described above, so use "> 16",
    # "> 32", "> 64" style of comparisons.
sys.modules
Dictionary of loaded modules. On some ports, it may not include builtin modules.
sys.path
A mutable list of directories to search for imported modules.
sys.platform
The platform that MicroPython is running on. For OS/RTOS ports, this is usually an identifier of the OS, e.g.
"linux". For baremetal ports it is an identifier of a board, e.g. "pyboard" for the original MicroPython
reference board. It thus can be used to distinguish one board from another. If you need to check whether your
program runs on MicroPython (vs other Python implementation), use sys.implementation instead.
sys.stderr
Standard error stream.
sys.stdin
Standard input stream.
sys.stdout
Standard output stream.
sys.version
Python language version that this implementation conforms to, as a string.
sys.version_info
Python language version that this implementation conforms to, as a tuple of ints.
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: binascii.
This module implements conversions between binary data and various encodings of it in ASCII form (in both direc-
tions).
Functions
ubinascii.hexlify(data[, sep ])
Convert binary data to hexadecimal representation. Returns bytes string.
Difference to CPython
If the additional argument sep is supplied, it is used as a separator between hexadecimal values.
ubinascii.unhexlify(data)
Convert hexadecimal data to binary representation. Returns a bytes string (i.e. the inverse of hexlify).
ubinascii.a2b_base64(data)
Decode base64-encoded data, ignoring invalid characters in the input. Conforms to RFC 2045 s.6.8. Returns a
bytes object.
ubinascii.b2a_base64(data)
Encode binary data in base64 format, as in RFC 3548. Returns the encoded data followed by a newline character,
as a bytes object.
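A short illustration of these conversions:

import ubinascii

data = b'\x01\x02\xff'
print(ubinascii.hexlify(data))          # b'0102ff'
print(ubinascii.unhexlify(b'0102ff'))   # b'\x01\x02\xff'
print(ubinascii.b2a_base64(data))       # b'AQL/\n'
print(ubinascii.a2b_base64(b'AQL/\n'))  # b'\x01\x02\xff'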
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: collections.
This module implements advanced collection and container types to hold/accumulate various objects.
Classes
ucollections.namedtuple(name, fields)
This is a factory function to create a new namedtuple type with a specific name and set of fields. A namedtuple is a subclass of tuple which allows access to its fields not just by numeric index, but also with an attribute access syntax using symbolic field names. fields is a sequence of strings specifying field names. For compatibility with CPython it can also be a string with space-separated field names (but this is less efficient). Example of use:
from ucollections import namedtuple
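A minimal sketch of typical use, continuing the import above (the type and field names are illustrative):

MyTuple = namedtuple("MyTuple", ("id", "name"))
t1 = MyTuple(1, "foo")
t2 = MyTuple(2, "bar")
print(t1.name)            # foo
assert t2.name == t2[1]   # attribute and index access refer to the same field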
ucollections.OrderedDict(...)
dict type subclass which remembers and preserves the order of keys added. When an ordered dict is iterated over, keys/items are returned in the order they were added:
from ucollections import OrderedDict
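A sketch of code that would produce the output shown below, continuing the import above:

# To benefit from ordered keys, an OrderedDict should be initialised
# from a sequence of (key, value) pairs
d = OrderedDict([("z", 1), ("a", 2)])
# More items can be added as usual
d["w"] = 5
d["b"] = 3
for k, v in d.items():
    print(k, v)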
Output:
z 1
a 2
w 5
b 3
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: errno.
This module provides access to symbolic error codes for the OSError exception. The particular inventory of codes depends on the MicroPython port.
Constants
uerrno.errorcode
Dictionary mapping numeric error codes to strings with symbolic error code (see above):
>>> print(uerrno.errorcode[uerrno.EEXIST])
EEXIST
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: hashlib.
This module implements binary data hashing algorithms. The exact inventory of available algorithms depends on the board. Among the algorithms which may be implemented:
• SHA256 - The current generation, modern hashing algorithm (of the SHA2 series). It is suitable for cryptographically-secure purposes. It is included in the MicroPython core and any board is recommended to provide this, unless it has particular code size constraints.
• SHA1 - A previous generation algorithm. Not recommended for new usages, but SHA1 is a part of a number of Internet standards and existing applications, so boards targeting network connectivity and interoperability will try to provide this.
• MD5 - A legacy algorithm, not considered cryptographically secure. Only selected boards, targeting interoperability with legacy applications, will offer this.
Constructors
class uhashlib.sha256([data ])
Create an SHA256 hasher object and optionally feed data into it.
class uhashlib.sha1([data ])
Create an SHA1 hasher object and optionally feed data into it.
class uhashlib.md5([data ])
Create an MD5 hasher object and optionally feed data into it.
Methods
hash.update(data)
Feed more binary data into hash.
hash.digest()
Return hash for all data passed through hash, as a bytes object. After this method is called, more data cannot be
fed into the hash any longer.
hash.hexdigest()
This method is NOT implemented. Use ubinascii.hexlify(hash.digest()) to achieve a similar
effect.
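For example, hashing data fed in several chunks and printing the digest in hex:

import uhashlib
import ubinascii

h = uhashlib.sha256()
h.update(b'hello ')
h.update(b'world')
print(ubinascii.hexlify(h.digest()))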
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: heapq.
This module implements the heap queue algorithm.
A heap queue is simply a list that has its elements stored in a certain way.
Functions
uheapq.heappush(heap, item)
Push the item onto the heap.
uheapq.heappop(heap)
Pop the first item from the heap, and return it. Raises IndexError if heap is empty.
uheapq.heapify(x)
Convert the list x into a heap. This is an in-place operation.
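A small illustration:

import uheapq

heap = [3, 1, 2]
uheapq.heapify(heap)          # turn the list into a heap, in-place
uheapq.heappush(heap, 0)
print(uheapq.heappop(heap))   # 0, the smallest item
print(uheapq.heappop(heap))   # 1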
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: io.
This module contains additional types of stream (file-like) objects and helper functions.
Conceptual hierarchy
Difference to CPython
Conceptual hierarchy of stream base classes is simplified in MicroPython, as described in this section.
(Abstract) base stream classes, which serve as a foundation for the behavior of all the concrete classes, adhere to a few dichotomies (pair-wise classifications) in CPython. In MicroPython, they are somewhat simplified and made implicit to achieve higher efficiency and save resources.
An important dichotomy in CPython is unbuffered vs buffered streams. In MicroPython, all streams are currently unbuffered. This is because all modern OSes, and even many RTOSes and filesystem drivers, already perform buffering on their side. Adding another layer of buffering is counter-productive (an issue known as “bufferbloat”) and takes precious memory. Note that there are still cases where buffering may be useful, so we may introduce optional buffering support at a later time.
But in CPython, another important dichotomy is tied to “bufferedness”: whether a stream may incur short reads/writes or not. A short read is when a user asks for e.g. 10 bytes from a stream, but gets less; similarly for writes. In CPython, unbuffered streams are automatically susceptible to short operations, while buffered ones guarantee against them. The absence of short reads/writes is an important trait, as it allows developing more concise and efficient programs - something which is highly desirable for MicroPython. So, while MicroPython doesn’t support buffered streams, it still provides for no-short-operations streams. Whether there will be short operations or not depends on each particular class’ needs, but developers are strongly advised to favor no-short-operations behavior for the reasons stated above. For example, MicroPython sockets are guaranteed to avoid short reads/writes. Actually, at this time, there is no example of a short-operations stream class in the core, and one would be a port-specific class, where such a need is governed by hardware peculiarities.
The no-short-operations behavior gets tricky in the case of non-blocking streams, blocking vs non-blocking behavior being another CPython dichotomy, fully supported by MicroPython. Non-blocking streams never wait for data either to arrive or be written - they read/write whatever is possible, or signal lack of data (or of the ability to write data). Clearly, this conflicts with the “no-short-operations” policy, and indeed, the case of non-blocking buffered (and thus no-short-ops) streams is convoluted in CPython - in some places such a combination is prohibited, in some it is undefined or just not documented, and in some cases it raises verbose exceptions. The matter is much simpler in MicroPython: non-blocking streams are important for efficient asynchronous operations, so this property prevails over the “no-short-ops” one. So, while blocking streams will avoid short reads/writes whenever possible (the only case to get a short read is if end of file is reached, or in case of error - but errors don’t return short data, they raise exceptions), non-blocking streams may produce short data to avoid blocking the operation.
The final dichotomy is binary vs text streams. MicroPython of course supports these, but while in CPython text
streams are inherently buffered, they aren’t in MicroPython. (Indeed, that’s one of the cases for which we may
introduce buffering support.)
Note that for efficiency, MicroPython doesn’t provide abstract base classes corresponding to the hierarchy above, and
it’s not possible to implement, or subclass, a stream class in pure Python.
Functions
Classes
class uio.FileIO(...)
This is the type of a file opened in binary mode, e.g. using open(name, "rb"). You should not instantiate this class directly.
class uio.TextIOWrapper(...)
This is the type of a file opened in text mode, e.g. using open(name, "rt"). You should not instantiate this class directly.
class uio.StringIO([string ])
class uio.BytesIO([string ])
In-memory file-like objects for input/output. StringIO is used for text-mode I/O (similar to a normal file
opened with “t” modifier). BytesIO is used for binary-mode I/O (similar to a normal file opened with “b”
modifier). The initial contents of these file-like objects can be specified with the string parameter (which should be a normal string for StringIO or a bytes object for BytesIO). All the usual file methods like read(), write(), seek(), flush(), close() are available on these objects, and additionally, the following method:
getvalue()
Get the current contents of the underlying buffer which holds data.
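For example, using a StringIO object as an in-memory text file:

import uio

buf = uio.StringIO()
buf.write('hello')
buf.write(' world')
print(buf.getvalue())   # hello world
buf.seek(0)
print(buf.read(5))      # hello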
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: json.
This module allows conversion between Python objects and the JSON data format.
Functions
ujson.dump(obj, stream)
Serialise obj to a JSON string, writing it to the given stream.
ujson.dumps(obj)
Return obj represented as a JSON string.
ujson.load(stream)
Parse the given stream, interpreting it as a JSON string and deserialising the data to a Python object. The
resulting object is returned.
Parsing continues until end-of-file is encountered. A ValueError is raised if the data in stream is not correctly
formed.
ujson.loads(str)
Parse the JSON str and return an object. Raises ValueError if the string is not correctly formed.
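A short example of a round trip between a Python object and JSON:

import ujson

s = ujson.dumps({'pin': 2, 'value': 1.5})
print(s)              # e.g. {"pin": 2, "value": 1.5}
obj = ujson.loads(s)
print(obj['value'])   # 1.5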
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: os.
The uos module contains functions for filesystem access and mounting, terminal redirection and duplication, and the
uname and urandom functions.
General functions
uos.uname()
Return a tuple (possibly a named tuple) containing information about the underlying machine and/or its operat-
ing system. The tuple has five fields in the following order, each of them being a string:
• sysname – the name of the underlying system
• nodename – the network name (can be the same as sysname)
• release – the version of the underlying system
• version – the MicroPython version and build date
• machine – an identifier for the underlying hardware (eg board, CPU)
uos.urandom(n)
Return a bytes object with n random bytes. Whenever possible, it is generated by the hardware random number
generator.
Filesystem access
uos.chdir(path)
Change current directory.
uos.getcwd()
Get the current directory.
uos.ilistdir([dir ])
This function returns an iterator which then yields tuples corresponding to the entries in the directory that it is
listing. With no argument it lists the current directory, otherwise it lists the directory given by dir.
The tuples have the form (name, type, inode[, size]):
• name is a string (or bytes if dir is a bytes object) and is the name of the entry;
• type is an integer that specifies the type of the entry, with 0x4000 for directories and 0x8000 for regular
files;
• inode is an integer corresponding to the inode of the file, and may be 0 for filesystems that don’t have such
a notion.
• Some platforms may return a 4-tuple that includes the entry’s size. For file entries, size is an integer
representing the size of the file or -1 if unknown. Its meaning is currently undefined for directory entries.
uos.listdir([dir ])
With no argument, list the current directory. Otherwise list the given directory.
uos.mkdir(path)
Create a new directory.
uos.remove(path)
Remove a file.
uos.rmdir(path)
Remove a directory.
uos.rename(old_path, new_path)
Rename a file.
uos.stat(path)
Get the status of a file or directory.
uos.statvfs(path)
Get the status of a filesystem.
Returns a tuple with the filesystem information in the following order:
• f_bsize – file system block size
• f_frsize – fragment size
• f_blocks – size of fs in f_frsize units
• f_bfree – number of free blocks
• f_bavail – number of free blocks for unprivileged users
• f_files – number of inodes
• f_ffree – number of free inodes
• f_favail – number of free inodes for unprivileged users
• f_flag – mount flags
• f_namemax – maximum filename length
Parameters related to inodes (f_files, f_ffree, f_favail) and the f_flag parameter may return 0 as they can be unavailable in a port-specific implementation.
uos.sync()
Sync all filesystems.
uos.dupterm(stream_object, index=0)
Duplicate or switch the MicroPython terminal (the REPL) on the given stream-like object. The stream_object
argument must implement the readinto() and write() methods. The stream should be in non-blocking
mode and readinto() should return None if there is no data available for reading.
After calling this function all terminal output is repeated on this stream, and any input that is available on the
stream is passed on to the terminal input.
The index parameter should be a non-negative integer and specifies which duplication slot is set. A given port
may implement more than one slot (slot 0 will always be available) and in that case terminal input and output is
duplicated on all the slots that are set.
If None is passed as the stream_object then duplication is cancelled on the slot given by index.
The function returns the previous stream-like object in the given slot.
Filesystem mounting
Some ports provide a Virtual Filesystem (VFS) and the ability to mount multiple “real” filesystems within this VFS.
Filesystem objects can be mounted at either the root of the VFS, or at a subdirectory that lives in the root. This allows
dynamic and flexible configuration of the filesystem that is seen by Python programs. Ports that have this functionality
provide the mount() and umount() functions, and possibly various filesystem implementations represented by
VFS classes.
uos.mount(fsobj, mount_point, *, readonly)
Mount the filesystem object fsobj at the location in the VFS given by the mount_point string. fsobj can be a VFS object that has a mount() method, or a block device. If it’s a block device then the filesystem type is automatically detected (an exception is raised if no filesystem was recognised). mount_point may be '/' to mount fsobj at the root, or '/<name>' to mount it at a subdirectory under the root.
If readonly is True then the filesystem is mounted read-only.
During the mount process the method mount() is called on the filesystem object.
Will raise OSError(EPERM) if mount_point is already mounted.
uos.umount(mount_point)
Unmount a filesystem. mount_point can be a string naming the mount location, or a previously-mounted filesys-
tem object. During the unmount process the method umount() is called on the filesystem object.
Will raise OSError(EINVAL) if mount_point is not found.
class uos.VfsFat(block_dev)
Create a filesystem object that uses the FAT filesystem format. Storage of the FAT filesystem is provided by
block_dev. Objects created by this constructor can be mounted using mount().
static mkfs(block_dev)
Build a FAT filesystem on block_dev.
Block devices
A block device is an object which implements the block protocol, which is a set of methods described below by the AbstractBlockDev class. A concrete implementation of this class will usually allow access to the memory-like functionality of a piece of hardware (like flash memory). A block device can be used by a particular filesystem driver to store the data for its filesystem.
class uos.AbstractBlockDev(...)
Construct a block device object. The parameters to the constructor are dependent on the specific block device.
readblocks(block_num, buf )
Starting at block_num, read blocks from the device into buf (an array of bytes). The number of blocks to
read is given by the length of buf, which will be a multiple of the block size.
writeblocks(block_num, buf )
Starting at block_num, write blocks from buf (an array of bytes) to the device. The number of blocks to
write is given by the length of buf, which will be a multiple of the block size.
ioctl(op, arg)
Control the block device and query its parameters. The operation to perform is given by op which is one
of the following integers:
• 1 – initialise the device (arg is unused)
• 2 – shutdown the device (arg is unused)
• 3 – sync the device (arg is unused)
• 4 – get a count of the number of blocks, should return an integer (arg is unused)
• 5 – get the number of bytes in a block, should return an integer, or None in which case the default
value of 512 is used (arg is unused)
By way of example, the following class will implement a block device that stores its data in RAM using a
bytearray:
class RAMBlockDev:
    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        self.data = bytearray(block_size * num_blocks)

import uos
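A working block device also needs the readblocks(), writeblocks() and ioctl() methods described above; with those in place, the uos module imported above can format and mount the device. A minimal, illustrative continuation of the class (the block size, block count and '/ramdisk' mount point are assumptions):

    # continuing the RAMBlockDev class defined above
    def readblocks(self, block_num, buf):
        # copy the requested blocks out of the backing bytearray
        for i in range(len(buf)):
            buf[i] = self.data[block_num * self.block_size + i]

    def writeblocks(self, block_num, buf):
        # copy the given data into the backing bytearray
        for i in range(len(buf)):
            self.data[block_num * self.block_size + i] = buf[i]

    def ioctl(self, op, arg):
        if op == 4:  # number of blocks
            return len(self.data) // self.block_size
        if op == 5:  # bytes per block
            return self.block_size

bdev = RAMBlockDev(512, 50)
uos.VfsFat.mkfs(bdev)                     # build a FAT filesystem on the device
uos.mount(uos.VfsFat(bdev), '/ramdisk')   # make it visible to Python programs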
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: re.
This module implements regular expression operations. The regular expression syntax supported is a subset of the CPython re module (and is actually a subset of POSIX extended regular expressions).
Supported operators are:
'.' Match any character.
'[...]' Match set of characters. Individual characters and ranges are supported, including negated sets (e.g.
[^a-c]).
'^'
'$'
'?'
'*'
'+'
'??'
'*?'
'+?'
'|'
'(...)' Grouping. Each group is capturing (a substring it captures can be accessed with match.group()
method).
NOT SUPPORTED: Counted repetitions ({m,n}), more advanced assertions (\b, \B), named groups ((?P<name>...)), non-capturing groups ((?:...)), etc.
Functions
ure.compile(regex_str[, flags ])
Compile regular expression, return regex object.
ure.match(regex_str, string)
Compile regex_str and match against string. Match always happens from starting position in a string.
ure.search(regex_str, string)
Compile regex_str and search it in a string. Unlike match, this will search string for first position which
matches regex (which still may be 0 if regex is anchored).
ure.DEBUG
Flag value, display debug information about compiled expression. (Availability depends on MicroPython
port.)
Regex objects
Compiled regular expression. Instances of this class are created using ure.compile().
regex.match(string)
regex.search(string)
Similar to the module-level functions match() and search(). Using methods is (much) more efficient if
the same regex is applied to multiple strings.
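For example (the pattern and input string here are illustrative):

import ure

regex = ure.compile('([a-z]+)([0-9]+)')
m = regex.match('gpio12')
print(m.group(0))   # 'gpio12' - the whole match
print(m.group(1))   # 'gpio'
print(m.group(2))   # '12'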
regex.split(string, max_split=-1)
Split a string using regex. If max_split is given, it specifies maximum number of splits to perform. Returns list
of strings (there may be up to max_split+1 elements if it’s specified).
Match objects
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: select.
This module provides functions to efficiently wait for events on multiple streams (select streams which are ready
for operations).
Functions
uselect.poll()
Create an instance of the Poll class.
uselect.select(rlist, wlist, xlist[, timeout ])
Wait for activity on a set of objects.
This function is provided by some MicroPython ports for compatibility and is not efficient. Usage of Poll is
recommended instead.
class Poll
Methods
poll.register(obj[, eventmask ])
Register stream obj for polling. eventmask is logical OR of:
• uselect.POLLIN - data available for reading
• uselect.POLLOUT - more data can be written
Note that flags like uselect.POLLHUP and uselect.POLLERR are not valid as input eventmask (these
are unsolicited events which will be returned from poll() regardless of whether they are asked for). This
semantics is per POSIX.
eventmask defaults to uselect.POLLIN | uselect.POLLOUT.
poll.unregister(obj)
Unregister obj from polling.
poll.modify(obj, eventmask)
Modify the eventmask for obj.
poll.poll(timeout=-1)
Wait for at least one of the registered objects to become ready or have an exceptional condition, with optional
timeout in milliseconds (if timeout arg is not specified or -1, there is no timeout).
Returns a list of (obj, event, ...) tuples. There may be other elements in the tuple, depending on platform and version, so don’t assume that its size is 2. The event element specifies which events happened with a stream and is a combination of the uselect.POLL* constants described above. Note that the flags uselect.POLLHUP and uselect.POLLERR can be returned at any time (even if they were not asked for), and must be acted on accordingly (the corresponding stream unregistered from poll and likely closed), because otherwise all further invocations of poll() may return immediately with these flags set for this stream again.
In case of timeout, an empty list is returned.
Difference to CPython
poll.ipoll(timeout=-1, flags=0)
Like poll.poll(), but instead returns an iterator which yields a callee-owned tuple. This function
provides an efficient, allocation-free way to poll on streams.
If flags is 1, one-shot behavior for events is employed: streams for which events happened will have their event
masks automatically reset (equivalent to poll.modify(obj, 0)), so new events for such a stream won’t
be processed until new mask is set with poll.modify(). This behavior is useful for asynchronous I/O
schedulers.
Difference to CPython
This function is a MicroPython extension.
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: socket.
This module provides access to the BSD socket interface.
Difference to CPython
For efficiency and consistency, socket objects in MicroPython implement a stream (file-like) interface directly. In
CPython, you need to convert a socket to a file-like object using makefile() method. This method is still supported
by MicroPython (but is a no-op), so where compatibility with CPython matters, be sure to use it.
The native socket address format of the usocket module is an opaque data type returned by getaddrinfo func-
tion, which must be used to resolve textual address (including numeric addresses):
# Create a TCP socket, then resolve the server address
sock = usocket.socket()
sockaddr = usocket.getaddrinfo('www.micropython.org', 80)[0][-1]
# You must use getaddrinfo() even for numeric addresses
sockaddr = usocket.getaddrinfo('127.0.0.1', 80)[0][-1]
# Now you can use that address
sock.connect(sockaddr)
Using getaddrinfo is the most efficient (both in terms of memory and processing power) and portable way to work
with addresses.
However, the socket module (note the difference with the native MicroPython usocket module described here) provides a CPython-compatible way to specify addresses using tuples, as described below. Note that depending on the MicroPython port, the socket module may be builtin or may need to be installed from micropython-lib (as in the case of the MicroPython Unix port), and some ports still accept only numeric addresses in the tuple format, and require the use of the getaddrinfo function to resolve domain names.
Summing up:
• Always use getaddrinfo when writing portable applications.
• Tuple addresses described below can be used as a shortcut for quick hacks and interactive use, if your port
supports them.
Functions
usocket.getaddrinfo(host, port)
Translate the host/port argument into a sequence of 5-tuples that contain all the necessary arguments for creating a socket connected to that service. The list of 5-tuples has the following structure: (family, type, proto, canonname, sockaddr). For example, to connect to a host resolved by name:
s = usocket.socket()
s.connect(usocket.getaddrinfo('www.micropython.org', 80)[0][-1])
Difference to CPython
CPython raises a socket.gaierror exception (OSError subclass) in case of error in this function.
MicroPython doesn’t have socket.gaierror and raises OSError directly. Note that error numbers of
getaddrinfo() form a separate namespace and may not match error numbers from the uerrno module. To
distinguish getaddrinfo() errors, they are represented by negative numbers, whereas standard system er-
rors are positive numbers (error numbers are accessible using e.args[0] property from an exception object).
The use of negative values is a provisional detail which may change in the future.
usocket.inet_ntop(af, bin_addr)
Convert a binary network address bin_addr of the given address family af to a textual representation:
usocket.inet_pton(af, txt_addr)
Convert a textual network address txt_addr of the given address family af to a binary representation:
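For example (assuming the port provides AF_INET):

>>> usocket.inet_ntop(usocket.AF_INET, b"\x7f\0\0\1")
'127.0.0.1'
>>> usocket.inet_pton(usocket.AF_INET, "1.2.3.4")
b'\x01\x02\x03\x04'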
Constants
usocket.AF_INET
usocket.AF_INET6
Address family types. Availability depends on a particular MicroPython port.
usocket.SOCK_STREAM
usocket.SOCK_DGRAM
Socket types.
usocket.IPPROTO_UDP
usocket.IPPROTO_TCP
IP protocol numbers. Availability depends on a particular MicroPython port. Note that you don’t need
to specify these in a call to usocket.socket(), because SOCK_STREAM socket type automatically selects
IPPROTO_TCP, and SOCK_DGRAM - IPPROTO_UDP. Thus, the only real use of these constants is as an
argument to setsockopt().
usocket.SOL_*
Socket option levels (an argument to setsockopt()). The exact inventory depends on a MicroPython
port.
usocket.SO_*
Socket options (an argument to setsockopt()). The exact inventory depends on a MicroPython port.
Constants specific to WiPy:
usocket.IPPROTO_SEC
Special protocol value to create SSL-compatible socket.
class socket
Methods
socket.close()
Mark the socket closed and release all resources. Once that happens, all future operations on the socket object
will fail. The remote end will receive EOF indication if supported by protocol.
Sockets are automatically closed when they are garbage-collected, but it is recommended to close() them explicitly as soon as you have finished working with them.
socket.bind(address)
Bind the socket to address. The socket must not already be bound.
socket.listen([backlog ])
Enable a server to accept connections. If backlog is specified, it must be at least 0 (if it’s lower, it will be set to 0);
and specifies the number of unaccepted connections that the system will allow before refusing new connections.
If not specified, a default reasonable value is chosen.
socket.accept()
Accept a connection. The socket must be bound to an address and listening for connections. The return value is
a pair (conn, address) where conn is a new socket object usable to send and receive data on the connection, and
address is the address bound to the socket on the other end of the connection.
socket.connect(address)
Connect to a remote socket at address.
socket.send(bytes)
Send data to the socket. The socket must be connected to a remote socket. Returns number of bytes sent, which
may be smaller than the length of data (“short write”).
socket.sendall(bytes)
Send all data to the socket. The socket must be connected to a remote socket. Unlike send(), this method will
try to send all of data, by sending data chunk by chunk consecutively.
The behavior of this method on non-blocking sockets is undefined. Due to this, on MicroPython, it’s recom-
mended to use write() method instead, which has the same “no short writes” policy for blocking sockets, and
will return number of bytes sent on non-blocking sockets.
socket.recv(bufsize)
Receive data from the socket. The return value is a bytes object representing the data received. The maximum
amount of data to be received at once is specified by bufsize.
socket.sendto(bytes, address)
Send data to the socket. The socket should not be connected to a remote socket, since the destination socket is
specified by address.
socket.recvfrom(bufsize)
Receive data from the socket. The return value is a pair (bytes, address) where bytes is a bytes object representing
the data received and address is the address of the socket sending the data.
socket.setsockopt(level, optname, value)
Set the value of the given socket option. The needed symbolic constants are defined in the socket module (SO_*
etc.). The value can be an integer or a bytes-like object representing a buffer.
socket.settimeout(value)
Note: Not every port supports this method, see below.
Set a timeout on blocking socket operations. The value argument can be a nonnegative floating point number
expressing seconds, or None. If a non-zero value is given, subsequent socket operations will raise an OSError
exception if the timeout period value has elapsed before the operation has completed. If zero is given, the socket
is put in non-blocking mode. If None is given, the socket is put in blocking mode.
Not every MicroPython port supports this method. A more portable and generic solution is to use
uselect.poll object. This allows to wait on multiple objects at the same time (and not just on sockets,
but on generic stream objects which support polling). Example:
# Instead of:
s.settimeout(1.0)  # time in seconds
s.read(10)  # may timeout

# Use:
poller = uselect.poll()
poller.register(s, uselect.POLLIN)
res = poller.poll(1000)  # time in milliseconds
if not res:
    # s is still not ready for input, i.e. operation timed out
    ...
Difference to CPython
CPython raises a socket.timeout exception in case of timeout, which is an OSError subclass. MicroPy-
thon raises an OSError directly instead. If you use except OSError: to catch the exception, your code will
work both in MicroPython and CPython.
socket.setblocking(flag)
Set blocking or non-blocking mode of the socket: if flag is false, the socket is set to non-blocking, else to
blocking mode.
This method is a shorthand for certain settimeout() calls:
• socket.setblocking(True) is equivalent to socket.settimeout(None)
• socket.setblocking(False) is equivalent to socket.settimeout(0)
socket.makefile(mode='rb', buffering=0)
Return a file object associated with the socket.
Difference to CPython
As MicroPython doesn’t support buffered streams, the value of the buffering parameter is ignored and treated as if it were 0 (unbuffered).
Difference to CPython
Closing the file object returned by makefile() WILL close the original socket as well.
socket.read([size ])
Read up to size bytes from the socket. Return a bytes object. If size is not given, it reads all data available from the socket until EOF; as such the method will not return until the socket is closed. This function tries to read as much data as requested (no “short reads”). This may not be possible with a non-blocking socket though, in which case less data will be returned.
socket.readinto(buf [, nbytes ])
Read bytes into the buf. If nbytes is specified then read at most that many bytes. Otherwise, read at most len(buf)
bytes. Just as read(), this method follows “no short reads” policy.
Return value: number of bytes read and stored into buf.
socket.readline()
Read a line, ending in a newline character.
Return value: the line read.
socket.write(buf )
Write the buffer of bytes to the socket. This function will try to write all data to the socket (no “short writes”). This may not be possible with a non-blocking socket though, in which case the returned value will be less than the length of buf.
Return value: number of bytes written.
exception usocket.error
MicroPython does NOT have this exception.
Difference to CPython
CPython used to have a socket.error exception which is now deprecated, and is an alias of OSError. In
MicroPython, use OSError directly.
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: ssl.
This module provides access to Transport Layer Security (previously and widely known as “Secure Sockets Layer”)
encryption and peer authentication facilities for network sockets, both client-side and server-side.
Functions
Warning: Some implementations of the ussl module do NOT validate server certificates, which makes an established SSL connection prone to man-in-the-middle attacks.
Exceptions
ssl.SSLError
This exception does NOT exist. Instead its base class, OSError, is used.
Constants
ussl.CERT_NONE
ussl.CERT_OPTIONAL
ussl.CERT_REQUIRED
Supported values for cert_reqs parameter.
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: struct.
Supported size/byte order prefixes: @, <, >, !.
Supported format codes: b, B, h, H, i, I, l, L, q, Q, s, P, f, d (the latter 2 depending on the floating-point support).
Functions
ustruct.calcsize(fmt)
Return the number of bytes needed to store the given fmt.
ustruct.pack(fmt, v1, v2, ...)
Pack the values v1, v2, . . . according to the format string fmt. The return value is a bytes object encoding the
values.
ustruct.pack_into(fmt, buffer, offset, v1, v2, ...)
Pack the values v1, v2, . . . according to the format string fmt into a buffer starting at offset. offset may be
negative to count from the end of buffer.
ustruct.unpack(fmt, data)
Unpack from the data according to the format string fmt. The return value is a tuple of the unpacked values.
ustruct.unpack_from(fmt, data, offset=0)
Unpack from the data starting at offset according to the format string fmt. offset may be negative to count from
the end of buffer. The return value is a tuple of the unpacked values.
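A short example of packing and unpacking two little-endian 16-bit values:

import ustruct

data = ustruct.pack('<HH', 1, 1000)
print(data)                         # b'\x01\x00\xe8\x03'
print(ustruct.unpack('<HH', data))  # (1, 1000)
print(ustruct.calcsize('<HH'))      # 4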
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: time.
The utime module provides functions for getting the current time and date, measuring time intervals, and for delays.
Time Epoch: The Unix port uses the standard POSIX epoch of 1970-01-01 00:00:00 UTC. However, embedded ports use an epoch of 2000-01-01 00:00:00 UTC.
Maintaining actual calendar date/time: This requires a Real Time Clock (RTC). On systems with an underlying OS (including some RTOS), an RTC may be implicit. Setting and maintaining actual calendar time is the responsibility of the OS/RTOS and is done outside of MicroPython; it just uses the OS API to query date/time. On baremetal ports, however, system time depends on a machine.RTC() object. The current calendar time may be set using the machine.RTC().datetime(tuple) function, and maintained by the following means:
• By a backup battery (which may be an additional, optional component for a particular board).
• Using networked time protocol (requires setup by a port/user).
• Set manually by a user on each power-up (many boards then maintain RTC time across hard resets, though some may require setting it again in such case).
If actual calendar time is not maintained with a system/MicroPython RTC, functions below which require reference to current absolute time may not behave as expected.
Functions
utime.localtime([secs ])
Convert a time expressed in seconds since the Epoch (see above) into an 8-tuple which contains: (year, month, mday, hour, minute, second, weekday, yearday). If secs is not provided or None, then the current time from the RTC is used.
• year includes the century (for example 2014).
• month is 1-12
• mday is 1-31
• hour is 0-23
• minute is 0-59
• second is 0-59
• weekday is 0-6 for Mon-Sun
• yearday is 1-366
utime.mktime()
This is the inverse function of localtime. Its argument is a full 8-tuple which expresses a time as per localtime. It returns an integer which is the number of seconds since Jan 1, 2000.
utime.sleep(seconds)
Sleep for the given number of seconds. Some boards may accept seconds as a floating-point number to sleep for a
fractional number of seconds. Note that other boards may not accept a floating-point argument, for compatibility
with them use sleep_ms() and sleep_us() functions.
utime.sleep_ms(ms)
Delay for given number of milliseconds, should be positive or 0.
utime.sleep_us(us)
Delay for given number of microseconds, should be positive or 0.
utime.ticks_ms()
Returns an increasing millisecond counter with an arbitrary reference point, that wraps around after some value.
The wrap-around value is not explicitly exposed, but we will refer to it as TICKS_MAX to simplify discussion.
Period of the values is TICKS_PERIOD = TICKS_MAX + 1. TICKS_PERIOD is guaranteed to be a power
of two, but otherwise may differ from port to port. The same period value is used for all of ticks_ms(),
ticks_us(), ticks_cpu() functions (for simplicity). Thus, these functions will return a value in range [0
.. TICKS_MAX], inclusive, total TICKS_PERIOD values. Note that only non-negative values are used. For the
most part, you should treat values returned by these functions as opaque. The only operations available for them
are ticks_diff() and ticks_add() functions described below.
Note: Performing standard mathematical operations (+, -) or relational operators (<, <=, >, >=) directly on these values will lead to invalid results. Performing mathematical operations and then passing their results as arguments to ticks_diff() or ticks_add() will also lead to invalid results from the latter functions.
utime.ticks_us()
Just like ticks_ms() above, but in microseconds.
utime.ticks_cpu()
Similar to ticks_ms() and ticks_us(), but with the highest possible resolution in the system. This is
usually CPU clocks, and that’s why the function is named that way. But it doesn’t have to be a CPU clock, some
other timing source available in a system (e.g. high-resolution timer) can be used instead. The exact timing unit
(resolution) of this function is not specified on utime module level, but documentation for a specific port may
provide more specific information. This function is intended for very fine benchmarking or very tight real-time
loops. Avoid using it in portable code.
Availability: Not every port implements this function.
utime.ticks_add(ticks, delta)
Offset the ticks value by a given number, which can be either positive or negative. Given a ticks value, this function allows calculating a ticks value delta ticks before or after it, following the modular-arithmetic definition of tick values (see ticks_ms() above). The ticks parameter must be a direct result of a call to the ticks_ms(), ticks_us(), or ticks_cpu() functions (or from a previous call to ticks_add()). However, delta can be an arbitrary integer number or numeric expression. ticks_add() is useful for calculating deadlines for events/tasks. (Note: you must use the ticks_diff() function to work with deadlines.)
Examples:
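(A sketch; the delays are arbitrary and do_a_little_of_something() is a placeholder for real work.)

import utime

# find out what the ticks value will be in 100 ms
print(utime.ticks_add(utime.ticks_ms(), 100))

# calculate a deadline 200 ms ahead and loop until it is reached
deadline = utime.ticks_add(utime.ticks_ms(), 200)
while utime.ticks_diff(deadline, utime.ticks_ms()) > 0:
    do_a_little_of_something()   # placeholder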
utime.ticks_diff(ticks1, ticks2)
Measure ticks difference between values returned from ticks_ms(), ticks_us(), or ticks_cpu()
functions, as a signed value which may wrap around.
The argument order is the same as for the subtraction operator: ticks_diff(ticks1, ticks2) has the same meaning as ticks1 - ticks2. However, values returned by ticks_ms(), etc. functions may wrap around, so directly using subtraction on them will produce an incorrect result. That is why ticks_diff() is needed: it implements modular (or more specifically, ring) arithmetic to produce a correct result even for wrap-around values (as long as they are not too far apart; see below). The function returns a signed value in the range [-TICKS_PERIOD/2 .. TICKS_PERIOD/2-1] (that’s the typical range definition for two’s-complement signed binary integers). If the result is negative, it means that ticks1 occurred earlier in time than ticks2. Otherwise, it means that ticks1 occurred after ticks2. This holds only if ticks1 and ticks2 are apart from each other for no more than TICKS_PERIOD/2-1 ticks. If that does not hold, an incorrect result will be returned. Specifically, if two tick values are apart by TICKS_PERIOD/2-1 ticks, that value will be returned by the function. However, if TICKS_PERIOD/2 of real-time ticks has passed between them, the function will return -TICKS_PERIOD/2 instead, i.e. the result value will wrap around to the negative range of possible values.
Informal rationale of the constraints above: Suppose you are locked in a room with no means to monitor the passing of time except a standard 12-notch clock. Then if you look at the dial-plate now, and don’t look again for another 13 hours (e.g., if you fall into a long sleep), then once you finally look again, it may seem to you that only 1 hour has passed. To avoid this mistake, just look at the clock regularly. Your application should do the same. The “too long sleep” metaphor also maps directly to application behavior: don’t let your application run any single task for too long. Run tasks in steps, and do time-keeping in between.
ticks_diff() is designed to accommodate various usage patterns, among them:
• Polling with timeout. In this case, the order of events is known, and you will deal only with positive results
of ticks_diff():
• Scheduling events. In this case, ticks_diff() result may be negative if an event is overdue:
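Illustrative sketches of these two patterns (the pin object, timeout value and scheduled_time variable are placeholders):

import utime

# Polling with timeout: wait at most 500 ms for a pin to go high
start = utime.ticks_ms()
while pin.value() == 0:
    if utime.ticks_diff(utime.ticks_ms(), start) > 500:
        raise OSError('timeout waiting for pin')

# Scheduling events: a negative difference means the event is overdue
if utime.ticks_diff(scheduled_time, utime.ticks_ms()) < 0:
    print('overdue, run the task now')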
Note: Do not pass time() values to ticks_diff(); you should use normal mathematical operations on them. But note that time() may (and will) also overflow. This is known as the Year 2038 problem (https://en.wikipedia.org/wiki/Year_2038_problem).
utime.time()
Returns the number of seconds, as an integer, since the Epoch, assuming that the underlying RTC is set and maintained as described above. If an RTC is not set, this function returns the number of seconds since a port-specific reference point in time (for embedded boards without a battery-backed RTC, usually since power up or reset). If you want to develop a portable MicroPython application, you should not rely on this function to provide higher than second precision. If you need higher precision, use the ticks_ms() and ticks_us() functions; if you need calendar time, localtime() without an argument is a better choice.
Difference to CPython
In CPython, this function returns the number of seconds since the Unix epoch, 1970-01-01 00:00 UTC, as a floating-point value, usually with microsecond precision. With MicroPython, only the Unix port uses the same Epoch and, if floating-point precision allows, returns sub-second precision. Embedded hardware usually doesn’t have floating-point precision to represent both long time ranges and subsecond precision, so it uses an integer value with second precision. Some embedded hardware also lacks a battery-powered RTC, so it returns the number of seconds since last power-up or from another relative, hardware-specific point (e.g. reset).
This module implements a subset of the corresponding CPython module, as described below. For more information,
refer to the original CPython documentation: zlib.
This module allows decompressing binary data compressed with the DEFLATE algorithm (commonly used in the zlib library and gzip archiver). Compression is not yet implemented.
Functions
Difference to CPython
This class is a MicroPython extension. It’s included on a provisional basis and may be changed considerably or removed in later versions.
The btree module implements a simple key-value database using external storage (disk files, or in the general case, a random-access stream). Keys are stored sorted in the database, and besides efficient retrieval by key value, a database also supports efficient ordered range scans (retrieval of values with the keys in a given range). On the application interface side, a BTree database works as closely as possible to the way the standard dict type works; one notable difference is that both keys and values must be bytes objects (so, if you want to store objects of other types, you need to serialise them to bytes first).
The module is based on the well-known BerkeleyDB library, version 1.xx.
Example:
import btree
# Open a file to hold the database (the name is illustrative), then open the database on it
f = open("mydb", "w+b")
db = btree.open(f)
# Keys and values must be bytes objects; keys are kept sorted
db[b"1"] = b"one"
db[b"2"] = b"two"
db[b"3"] = b"three"
# Prints b'two'
print(db[b"2"])
del db[b"2"]
# Prints:
#   b"1"
#   b"3"
for key in db:
    print(key)
db.close()
# Close the underlying stream separately
f.close()
Functions
btree.open(stream, ...)
Open a database from a random-access stream (like an open file). All further parameters are optional and keyword-only, and allow tweaking advanced aspects of the database operation (most users will not need them).
Methods
btree.close()
Close the database. It’s mandatory to close the database at the end of processing, as some unwritten data may still be in the cache. Note that this does not close the underlying stream with which the database was opened - it should be closed separately (which is also mandatory to make sure that data is flushed from buffer to the underlying storage).
btree.flush()
Flush any data in cache to the underlying stream.
btree.__getitem__(key)
btree.get(key, default=None)
btree.__setitem__(key, val)
btree.__delitem__(key)
btree.__contains__(key)
Standard dictionary methods.
btree.__iter__()
A BTree object can be iterated over directly (similar to a dictionary) to get access to all keys in order.
btree.keys([start_key[, end_key[, flags ]]])
btree.values([start_key[, end_key[, flags ]]])
btree.items([start_key[, end_key[, flags ]]])
These methods are similar to standard dictionary methods, but can also take optional parameters to iterate over a key sub-range, instead of the entire database. Note that for all 3 methods, the start_key and end_key arguments represent key values. For example, the values() method will iterate over values corresponding to the key range given. A None value for start_key means “from the first key”; no end_key or a value of None means “until the end of the database”. By default, the range is inclusive of start_key and exclusive of end_key; you can include end_key in the iteration by passing flags of btree.INCL. You can iterate in descending key direction by passing flags of btree.DESC. The flags values can be ORed together.
Constants
btree.INCL
A flag for keys(), values(), items() methods to specify that scanning should be inclusive of the end
key.
btree.DESC
A flag for keys(), values(), items() methods to specify that scanning should be in descending direction
of keys.
This module provides a general frame buffer which can be used to create bitmap images, which can then be sent to a
display.
class FrameBuffer
The FrameBuffer class provides a pixel buffer which can be drawn upon with pixels, lines, rectangles, text and even other FrameBuffers. It is useful when generating output for displays.
For example:
import framebuf

# Construct a frame buffer first; RGB565 needs 2 bytes per pixel (the 100x16 size is illustrative)
fbuf = framebuf.FrameBuffer(bytearray(100 * 16 * 2), 100, 16, framebuf.RGB565)
fbuf.fill(0)
fbuf.text('MicroPython!', 0, 0, 0xffff)
fbuf.hline(0, 10, 96, 0xffff)
Constructors
class framebuf.FrameBuffer(buffer, width, height, format, stride=width)
Construct a FrameBuffer object. One must specify a valid buffer, width, height, format and optionally stride. An invalid buffer size or dimensions may lead to unexpected errors.
Drawing text
FrameBuffer.text(s, x, y[, c ])
Write text to the FrameBuffer using the coordinates as the upper-left corner of the text. The color of the text can be defined by the optional argument but is otherwise a default value of 1. All characters have dimensions of 8x8 pixels and there is currently no way to change the font.
Other methods
FrameBuffer.scroll(xstep, ystep)
Shift the contents of the FrameBuffer by the given vector. This may leave a footprint of the previous colors in
the FrameBuffer.
FrameBuffer.blit(fbuf, x, y[, key ])
Draw another FrameBuffer on top of the current one at the given coordinates. If key is specified then it should
be a color integer and the corresponding color will be considered transparent: all pixels with that color value
will not be drawn.
This method works between FrameBuffer instances utilising different formats, but the resulting colors may be
unexpected due to the mismatch in color formats.
Constants
framebuf.MONO_VLSB
Monochrome (1-bit) color format. This defines a mapping where the bits in a byte are vertically mapped with
bit 0 being nearest the top of the screen. Consequently each byte occupies 8 vertical pixels. Subsequent bytes
appear at successive horizontal locations until the rightmost edge is reached. Further bytes are rendered at
locations starting at the leftmost edge, 8 pixels lower.
framebuf.MONO_HLSB
Monochrome (1-bit) color format. This defines a mapping where the bits in a byte are horizontally mapped. Each
byte occupies 8 horizontal pixels with bit 0 being the leftmost. Subsequent bytes appear at successive horizontal
locations until the rightmost edge is reached. Further bytes are rendered on the next row, one pixel lower.
framebuf.MONO_HMSB
Monochrome (1-bit) color format. This defines a mapping where the bits in a byte are horizontally mapped. Each
byte occupies 8 horizontal pixels with bit 7 being the leftmost. Subsequent bytes appear at successive horizontal
locations until the rightmost edge is reached. Further bytes are rendered on the next row, one pixel lower.
framebuf.RGB565
Red Green Blue (16-bit, 5+6+5) color format
framebuf.GS2_HMSB
Grayscale (2-bit) color format
framebuf.GS4_HMSB
Grayscale (4-bit) color format
framebuf.GS8
Grayscale (8-bit) color format
The machine module contains specific functions related to the hardware on a particular board. Most functions in
this module allow direct and unrestricted access to and control of hardware blocks on a system (like CPU,
timers, buses, etc.). Used incorrectly, this can lead to malfunction, lockups, crashes of your board, and in extreme
cases, hardware damage. A note on callbacks used by functions and class methods of the machine module: all these
callbacks should be considered as executing in an interrupt context. This is true for both physical devices with IDs >=
0 and “virtual” devices with negative IDs like -1 (these “virtual” devices are still thin shims on top of real hardware
and real hardware interrupts). See Writing interrupt handlers.
machine.reset()
Resets the device in a manner similar to pushing the external RESET button.
machine.reset_cause()
Get the reset cause. See constants for the possible return values.
machine.disable_irq()
Disable interrupt requests. Returns the previous IRQ state which should be considered an opaque value. This
return value should be passed to the enable_irq() function to restore interrupts to their original state, before
disable_irq() was called.
machine.enable_irq(state)
Re-enable interrupt requests. The state parameter should be the value that was returned from the most recent
call to the disable_irq() function.
machine.freq()
Returns CPU frequency in hertz.
machine.idle()
Gates the clock to the CPU, which is useful to reduce power consumption at any time during short or long periods.
Peripherals continue working and execution resumes as soon as any interrupt is triggered (on many ports this
includes the system timer interrupt occurring at regular intervals on the order of a millisecond).
machine.sleep()
Stops the CPU and disables all peripherals except for WLAN. Execution is resumed from the point where the
sleep was requested. For wake up to actually happen, wake sources should be configured first.
machine.deepsleep()
Stops the CPU and all peripherals (including networking interfaces, if any). Execution is resumed from the
main script, just as with a reset. The reset cause can be checked to know that we are coming from
machine.DEEPSLEEP. For wake up to actually happen, wake sources should be configured first, like a Pin change
or RTC timeout.
Miscellaneous functions
machine.unique_id()
Returns a byte string with a unique identifier of a board/SoC. It will vary from one board/SoC instance to another,
if the underlying hardware allows it. The length varies by hardware (so use a substring of the full value if you
expect a short ID). In some MicroPython ports, the ID corresponds to the network MAC address.
machine.time_pulse_us(pin, pulse_level, timeout_us=1000000)
Time a pulse on the given pin, and return the duration of the pulse in microseconds. The pulse_level argument
should be 0 to time a low pulse or 1 to time a high pulse.
If the current input value of the pin is different to pulse_level, the function first (*) waits until the pin input
becomes equal to pulse_level, then (**) times the duration that the pin is equal to pulse_level. If the pin is
already equal to pulse_level then timing starts straight away.
The function will return -2 if there was timeout waiting for condition marked (*) above, and -1 if there was
timeout during the main measurement, marked (**) above. The timeout is the same for both cases and given by
timeout_us (which is in microseconds).
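For illustration, a hedged sketch that times a high pulse on an input pin (the pin number is arbitrary):
from machine import Pin, time_pulse_us

echo = Pin(14, Pin.IN)                    # GPIO14 chosen only for this example
width = time_pulse_us(echo, 1, 30000)     # wait up to 30 ms for a high pulse
if width < 0:
    print('timeout:', width)              # -2: pulse never started, -1: pulse never ended
else:
    print('pulse width (us):', width)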
Constants
machine.IDLE
machine.SLEEP
machine.DEEPSLEEP
IRQ wake values.
machine.PWRON_RESET
machine.HARD_RESET
machine.WDT_RESET
machine.DEEPSLEEP_RESET
machine.SOFT_RESET
Reset causes.
machine.WLAN_WAKE
machine.PIN_WAKE
machine.RTC_WAKE
Wake-up reasons.
Classes
A pin object is used to control I/O pins (also known as GPIO - general-purpose input/output). Pin objects are com-
monly associated with a physical pin that can drive an output voltage and read input voltages. The pin class has
methods to set the mode of the pin (IN, OUT, etc) and methods to get and set the digital logic level. For analog control
of a pin, see the ADC class.
A pin object is constructed by using an identifier which unambiguously specifies a certain I/O pin. The allowed forms
of the identifier and the physical pin that the identifier maps to are port-specific. Possibilities for the identifier are an
integer, a string or a tuple with port and pin number.
Usage Model:
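A minimal usage sketch (the pin numbers are examples only; valid identifiers are port-specific):
from machine import Pin

p0 = Pin(0, Pin.OUT)               # create an output pin on GPIO0
p0.value(1)                        # drive it high
p0.value(0)                        # drive it low

p2 = Pin(2, Pin.IN, Pin.PULL_UP)   # input pin with the pull-up resistor enabled
print(p2.value())                  # read the current level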
Constructors
– Pin.ALT - Pin is configured to perform an alternative function, which is port specific. For a pin
configured in such a way any other Pin methods (except Pin.init()) are not applicable (calling
them will lead to undefined, or a hardware-specific, result). Not all ports implement this mode.
– Pin.ALT_OPEN_DRAIN - The same as Pin.ALT, but the pin is configured as open-drain. Not all
ports implement this mode.
• pull specifies if the pin has a (weak) pull resistor attached, and can be one of:
– None - No pull up or down resistor.
– Pin.PULL_UP - Pull up resistor enabled.
– Pin.PULL_DOWN - Pull down resistor enabled.
• value is valid only for Pin.OUT and Pin.OPEN_DRAIN modes and specifies initial output pin value if
given, otherwise the state of the pin peripheral remains unchanged.
• drive specifies the output power of the pin and can be one of: Pin.LOW_POWER, Pin.MED_POWER or
Pin.HIGH_POWER. The actual current driving capabilities are port dependent. Not all ports implement
this argument.
• alt specifies an alternate function for the pin and the values it can take are port dependent. This argument
is valid only for Pin.ALT and Pin.ALT_OPEN_DRAIN modes. It may be used when a pin supports
more than one alternate function. If only one pin alternate function is supported then this argument is not
required. Not all ports implement this argument.
As specified above, the Pin class allows an alternate function to be set for a particular pin, but it does not specify any
further operations on such a pin. Pins configured in alternate-function mode are usually not used as GPIO but
are instead driven by other hardware peripherals. The only operation supported on such a pin is re-initialising,
by calling the constructor or the Pin.init() method. If a pin that is configured in alternate-function mode is
re-initialised with Pin.IN, Pin.OUT, or Pin.OPEN_DRAIN, the alternate function will be removed from
the pin.
Methods
• Pin.IN - The value is stored in the output buffer for the pin. The pin state does not change, it remains
in the high-impedance state. The stored value will become active on the pin as soon as it is changed to
Pin.OUT or Pin.OPEN_DRAIN mode.
• Pin.OUT - The output buffer is set to the given value immediately.
• Pin.OPEN_DRAIN - If the value is ‘0’ the pin is set to a low voltage state. Otherwise the pin is set to
high-impedance state.
When setting the value this method returns None.
Pin.__call__([x ])
Pin objects are callable. The call method provides a (fast) shortcut to set and get the value of the pin. It is
equivalent to Pin.value([x]). See Pin.value() for more details.
Pin.on()
Set pin to “1” output level.
Pin.off()
Set pin to “0” output level.
Pin.mode([mode ])
Get or set the pin mode. See the constructor documentation for details of the mode argument.
Pin.pull([pull ])
Get or set the pin pull state. See the constructor documentation for details of the pull argument.
Pin.drive([drive ])
Get or set the pin drive strength. See the constructor documentation for details of the drive argument.
Not all ports implement this method.
Availability: WiPy.
Pin.irq(handler=None, trigger=(Pin.IRQ_FALLING | Pin.IRQ_RISING), *, priority=1, wake=None)
Configure an interrupt handler to be called when the trigger source of the pin is active. If the pin mode is Pin.
IN then the trigger source is the external value on the pin. If the pin mode is Pin.OUT then the trigger source
is the output buffer of the pin. Otherwise, if the pin mode is Pin.OPEN_DRAIN then the trigger source is the
output buffer for state ‘0’ and the external pin value for state ‘1’.
The arguments are:
• handler is an optional function to be called when the interrupt triggers.
• trigger configures the event which can generate an interrupt. Possible values are:
– Pin.IRQ_FALLING interrupt on falling edge.
– Pin.IRQ_RISING interrupt on rising edge.
– Pin.IRQ_LOW_LEVEL interrupt on low level.
– Pin.IRQ_HIGH_LEVEL interrupt on high level.
These values can be OR’ed together to trigger on multiple events.
• priority sets the priority level of the interrupt. The values it can take are port-specific, but higher values
always represent higher priorities.
• wake selects the power mode in which this interrupt can wake up the system. It can be machine.IDLE,
machine.SLEEP or machine.DEEPSLEEP. These values can also be OR’ed together to make a pin
generate interrupts in more than one power mode.
This method returns a callback object.
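A hedged example of attaching a falling-edge handler (the pin number is arbitrary; keep in mind the interrupt-context restrictions on handler code mentioned above):
from machine import Pin

button = Pin(0, Pin.IN, Pin.PULL_UP)
# the handler receives the Pin object that triggered the interrupt
button.irq(trigger=Pin.IRQ_FALLING, handler=lambda pin: print('falling edge on', pin))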
Constants
The following constants are used to configure the pin objects. Note that not all constants are available on all ports.
Pin.IN
Pin.OUT
Pin.OPEN_DRAIN
Pin.ALT
Pin.ALT_OPEN_DRAIN
Selects the pin mode.
Pin.PULL_UP
Pin.PULL_DOWN
Selects whether there is a pull up/down resistor. Use the value None for no pull.
Pin.LOW_POWER
Pin.MED_POWER
Pin.HIGH_POWER
Selects the pin drive strength.
Pin.IRQ_FALLING
Pin.IRQ_RISING
Pin.IRQ_LOW_LEVEL
Pin.IRQ_HIGH_LEVEL
Selects the IRQ trigger type.
The Signal class is a simple extension of the Pin class. Unlike Pin, which can be only in “absolute” 0 and 1 states, a
Signal can be in “asserted” (on) or “deasserted” (off) states, while being inverted (active-low) or not. In other words, it
adds logical inversion support to Pin functionality. While this may seem a simple addition, it is exactly what is needed
to support a wide array of simple digital devices in a way portable across different boards, which is one of the major
MicroPython goals. Regardless of whether different users have an active-high or active-low LED, a normally open
or normally closed relay - you can develop a single, nice-looking application which works with each of them, and
capture the hardware configuration differences in a few lines in the config file of your app.
Example:
from machine import Pin, Signal
# Suppose an active-high LED on pin 0 and an active-low LED on pin 1
led1_pin = Pin(0, Pin.OUT)
led2_pin = Pin(1, Pin.OUT)
# Now to light up both of them using Pin class, you'll need to set
# them to different values
led1_pin.value(1)
led2_pin.value(0)
# Signal abstracts away the active-high/active-low difference
led1 = Signal(led1_pin, invert=False)
led2 = Signal(led2_pin, invert=True)
# Now lighting them up looks the same:
led1.value(1)
led2.value(1)
# Even better:
led1.on()
led2.on()
Constructors
Methods
Signal.value([x ])
This method allows the signal value to be set or retrieved, depending on whether the argument x is supplied or
not.
If the argument is omitted then this method gets the signal level, 1 meaning signal is asserted (active) and 0 -
signal inactive.
If the argument is supplied then this method sets the signal level. The argument x can be anything that converts
to a boolean. If it converts to True, the signal is active, otherwise it is inactive.
Correspondence between signal being active and actual logic level on the underlying pin depends on whether
signal is inverted (active-low) or not. For non-inverted signal, active status corresponds to logical 1, inactive -
to logical 0. For inverted/active-low signal, active status corresponds to logical 0, while inactive - to logical 1.
Signal.on()
Activate signal.
Signal.off()
Deactivate signal.
UART implements the standard UART/USART duplex serial communications protocol. At the physical level it con-
sists of 2 lines: RX and TX. The unit of communication is a character (not to be confused with a string character)
which can be 8 or 9 bits wide.
UART objects can be created and initialised using:
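A sketch of typical usage (which bus ids, pins and parameters are available is port-dependent):
from machine import UART

uart = UART(1, 9600)                          # init with a given baudrate
uart.init(9600, bits=8, parity=None, stop=1)  # init with given parameters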
Constructors
Methods
UART.deinit()
Turn off the UART bus.
UART.any()
Returns an integer counting the number of characters that can be read without blocking. It will return 0 if there
are no characters available and a positive number if there are characters. The method may return 1 even if there
is more than one character available for reading.
For more sophisticated querying of available characters use select.poll:
import select

poll = select.poll()
poll.register(uart, select.POLLIN)
poll.poll(timeout)    # timeout in milliseconds
UART.read([nbytes ])
Read characters. If nbytes is specified then read at most that many bytes, otherwise read as much data as
possible.
Return value: a bytes object containing the bytes read in. Returns None on timeout.
UART.readinto(buf [, nbytes ])
Read bytes into the buf. If nbytes is specified then read at most that many bytes. Otherwise, read at most
len(buf) bytes.
Return value: number of bytes read and stored into buf or None on timeout.
UART.readline()
Read a line, ending in a newline character.
Return value: the line read or None on timeout.
UART.write(buf )
Write the buffer of bytes to the bus.
Return value: number of bytes written or None on timeout.
UART.sendbreak()
Send a break condition on the bus. This drives the bus low for a duration longer than required for a normal
transmission of a character.
SPI is a synchronous serial protocol that is driven by a master. At the physical level, a bus consists of 3 lines: SCK,
MOSI, MISO. Multiple devices can share the same bus. Each device should have a separate, 4th signal, SS (Slave
Select), to select a particular device on a bus with which communication takes place. Management of an SS signal
should happen in user code (via machine.Pin class).
Constructors
Methods
• polarity can be 0 or 1, and is the level the idle clock line sits at.
• phase can be 0 or 1 to sample data on the first or second clock edge respectively.
• bits is the width in bits of each transfer. Only 8 is guaranteed to be supported by all hardware.
• firstbit can be SPI.MSB or SPI.LSB.
• sck, mosi, miso are pins (machine.Pin) objects to use for bus signals. For most hardware SPI blocks
(as selected by id parameter to the constructor), pins are fixed and cannot be changed. In some cases,
hardware blocks allow 2-3 alternative pin sets for a hardware SPI block. Arbitrary pin assignments are
possible only for a bitbanging SPI driver (id = -1).
• pins - the WiPy port doesn’t support the sck, mosi, miso arguments, and instead allows them to be specified
as a tuple in the pins parameter (a construction sketch follows this list).
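A hedged construction sketch using the software (bitbanging) driver, which permits arbitrary pins (the pin numbers are examples only):
from machine import Pin, SPI

# id = -1 selects a software SPI implementation on arbitrary pins
spi = SPI(-1, baudrate=100000, polarity=0, phase=0,
          sck=Pin(14), mosi=Pin(13), miso=Pin(12))
spi.write(b'\x01\x02')       # write two bytes
data = spi.read(2)           # read two bytes while writing 0x00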
SPI.deinit()
Turn off the SPI bus.
SPI.read(nbytes, write=0x00)
Read a number of bytes specified by nbytes while continuously writing the single byte given by write.
Returns a bytes object with the data that was read.
SPI.readinto(buf, write=0x00)
Read into the buffer specified by buf while continuously writing the single byte given by write. Returns
None.
Note: on WiPy this function returns the number of bytes read.
SPI.write(buf )
Write the bytes contained in buf. Returns None.
Note: on WiPy this function returns the number of bytes written.
SPI.write_readinto(write_buf, read_buf )
Write the bytes from write_buf while reading into read_buf. The buffers can be the same or different, but
both buffers must have the same length. Returns None.
Note: on WiPy this function returns the number of bytes written.
Constants
SPI.MASTER
for initialising the SPI bus to master; this is only used for the WiPy
SPI.MSB
set the first bit to be the most significant bit
SPI.LSB
set the first bit to be the least significant bit
I2C is a two-wire protocol for communicating between devices. At the physical level it consists of 2 wires: SCL and
SDA, the clock and data lines respectively.
I2C objects are created attached to a specific bus. They can be initialised when created, or initialised later on.
Printing the I2C object gives you information about its configuration.
Example usage:
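A hedged sketch for the ESP8266, where the bus is implemented in software so the scl/sda pins are chosen by the user (the pins and the slave address 0x3c are examples only):
from machine import Pin, I2C

i2c = I2C(scl=Pin(5), sda=Pin(4), freq=100000)  # software I2C on GPIO5/GPIO4
print(i2c.scan())                               # addresses of responding slaves
i2c.writeto(0x3c, b'\x00')                      # write one byte to slave 0x3c
data = i2c.readfrom(0x3c, 2)                    # read two bytes from slave 0x3c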
Constructors
General Methods
The following methods implement the primitive I2C master bus operations and can be combined to make any I2C
transaction. They are provided if you need more control over the bus, otherwise the standard methods (see below) can
be used.
I2C.start()
Generate a START condition on the bus (SDA transitions to low while SCL is high).
Availability: ESP8266.
I2C.stop()
Generate a STOP condition on the bus (SDA transitions to high while SCL is high).
Availability: ESP8266.
I2C.readinto(buf, nack=True)
Reads bytes from the bus and stores them into buf. The number of bytes read is the length of buf. An ACK will
be sent on the bus after receiving all but the last byte. After the last byte is received, if nack is true then a NACK
will be sent, otherwise an ACK will be sent (and in this case the slave assumes more bytes are going to be read
in a later call).
Availability: ESP8266.
I2C.write(buf )
Write the bytes from buf to the bus. Checks that an ACK is received after each byte and stops transmitting the
remaining bytes if a NACK is received. The function returns the number of ACKs that were received.
Availability: ESP8266.
The following methods implement the standard I2C master read and write operations that target a given slave device.
I2C.readfrom(addr, nbytes, stop=True)
Read nbytes from the slave specified by addr. If stop is true then a STOP condition is generated at the end of
the transfer. Returns a bytes object with the data read.
I2C.readfrom_into(addr, buf, stop=True)
Read into buf from the slave specified by addr. The number of bytes read will be the length of buf. If stop is
true then a STOP condition is generated at the end of the transfer.
The method returns None.
I2C.writeto(addr, buf, stop=True)
Write the bytes from buf to the slave specified by addr. If a NACK is received following the write of a byte
from buf then the remaining bytes are not sent. If stop is true then a STOP condition is generated at the end of
the transfer, even if a NACK is received. The function returns the number of ACKs that were received.
Memory operations
Some I2C devices act as a memory device (or set of registers) that can be read from and written to. In this case there are
two addresses associated with an I2C transaction: the slave address and the memory address. The following methods
are convenience functions to communicate with such devices.
I2C.readfrom_mem(addr, memaddr, nbytes, *, addrsize=8)
Read nbytes from the slave specified by addr starting from the memory address specified by memaddr. The
argument addrsize specifies the address size in bits. Returns a bytes object with the data read.
The RTC is an independent clock that keeps track of the date and time.
Example usage:
rtc = machine.RTC()
rtc.init((2014, 5, 1, 4, 13, 0, 0, 0))
print(rtc.now())
Constructors
Methods
RTC.init(datetime)
Initialise the RTC. datetime is a tuple of the form:
(year, month, day[, hour[, minute[, second[, microsecond[, tzinfo]]]]])
RTC.now()
Get the current datetime tuple.
RTC.deinit()
Resets the RTC to the time of January 1, 2015 and starts running it again.
RTC.alarm(id, time, *, repeat=False)
Set the RTC alarm. time might be either a millisecond value to program the alarm to fire at the current time +
time_in_ms in the future, or a datetime tuple. If the time passed is in milliseconds, repeat can be set to True to
make the alarm periodic.
RTC.alarm_left(alarm_id=0)
Get the number of milliseconds left before the alarm expires.
RTC.cancel(alarm_id=0)
Cancel a running alarm.
RTC.irq(*, trigger, handler=None, wake=machine.IDLE)
Create an irq object triggered by a real time clock alarm.
Constants
RTC.ALARM0
irq trigger source
Hardware timers deal with the timing of periods and events. Timers are perhaps the most flexible and heterogeneous
kind of hardware in MCUs and SoCs, differing greatly from model to model. MicroPython’s Timer class defines a
baseline operation of executing a callback with a given period (or once after some delay), and allows specific boards
to define more non-standard behavior (which thus won’t be portable to other boards).
See discussion of important constraints on Timer callbacks.
Note: Memory can’t be allocated inside irq handlers (an interrupt) and so exceptions raised within a handler don’t
give much information. See micropython.alloc_emergency_exception_buf() for how to get around
this limitation.
Constructors
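A sketch of typical usage on the ESP8266, which provides virtual timers with id -1 (constructor arguments differ between ports):
from machine import Timer

tim = Timer(-1)    # virtual timer on ESP8266; other ports use hardware timer ids
tim.init(period=1000, mode=Timer.PERIODIC, callback=lambda t: print('tick'))
# ... later, stop and free the timer
tim.deinit()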
Methods
Timer.deinit()
Deinitialises the timer. Stops the timer, and disables the timer peripheral.
Constants
Timer.ONE_SHOT
Timer.PERIODIC
Timer operating mode.
The WDT is used to restart the system when the application crashes and ends up in a non-recoverable state. Once
started it cannot be stopped or reconfigured in any way. After enabling, the application must “feed” the watchdog
periodically to prevent it from expiring and resetting the system.
Example usage:
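A hedged sketch (whether the constructor accepts a timeout argument is port-dependent; on some ports the WDT period is fixed):
from machine import WDT

wdt = WDT(timeout=2000)   # enable with a 2 s timeout, where supported
wdt.feed()                # call this periodically from the main loop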
Constructors
Methods
wdt.feed()
Feed the WDT to prevent it from resetting the system. The application should place this call in a sensible place
ensuring that the WDT is only fed after verifying that everything is functioning correctly.
Functions
micropython.const(expr)
Used to declare that the expression is a constant so that the compiler can optimise it. The use of this function
should be as follows:
from micropython import const

CONST_X = const(123)
CONST_Y = const(2 * CONST_X + 1)
Constants declared this way are still accessible as global variables from outside the module they are declared
in. On the other hand, if a constant begins with an underscore then it is hidden, it is not available as a global
variable, and does not take up any memory during execution.
This const function is recognised directly by the MicroPython parser and is provided as part of the
micropython module mainly so that scripts can be written which run under both CPython and MicroPy-
thon, by following the above pattern.
micropython.opt_level([level ])
If level is given then this function sets the optimisation level for subsequent compilation of scripts, and returns
None. Otherwise it returns the current optimisation level.
The optimisation level controls the following compilation features:
• Assertions: at level 0 assertion statements are enabled and compiled into the bytecode; at levels 1 and
higher assertions are not compiled.
• Built-in __debug__ variable: at level 0 this variable expands to True; at levels 1 and higher it expands
to False.
• Source-code line numbers: at levels 0, 1 and 2 source-code line numbers are stored along with the bytecode
so that exceptions can report the line number they occurred at; at levels 3 and higher line numbers are not
stored.
A use for this function is to schedule a callback from a preempting IRQ. Such an IRQ puts restrictions on the
code that runs in the IRQ (for example the heap may be locked) and scheduling a function to call later will lift
those restrictions.
Note: If schedule() is called from a preempting IRQ, when memory allocation is not allowed and the
callback to be passed to schedule() is a bound method, passing this directly will fail. This is because
creating a reference to a bound method causes memory allocation. A solution is to create a reference to the
method in the class constructor and to pass that reference to schedule(). This is discussed in detail in the
reference documentation under “Creation of Python objects”.
There is a finite stack to hold the scheduled functions and schedule() will raise a RuntimeError if the
stack is full.
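A minimal sketch of deferring work from an interrupt handler via micropython.schedule() (the pin interrupt is only one example of a preempting IRQ source):
import micropython
from machine import Pin

micropython.alloc_emergency_exception_buf(100)

def process(arg):                    # runs later, outside the interrupt context
    print('handled event', arg)

def isr(pin):                        # keep the handler itself minimal
    micropython.schedule(process, 42)

button = Pin(0, Pin.IN, Pin.PULL_UP)
button.irq(trigger=Pin.IRQ_FALLING, handler=isr)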
This module provides network drivers and routing configuration. To use this module, a MicroPython variant/build
with network capabilities must be installed. Network drivers for specific hardware are available within this module
and are used to configure hardware network interface(s). Network services provided by configured interfaces are then
available for use via the usocket module.
For example:
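A hedged illustration using the ESP8266 WLAN driver (other ports provide different NIC classes; the ssid and password are placeholders):
import network
import usocket as socket

nic = network.WLAN(network.STA_IF)
nic.active(True)
if not nic.isconnected():
    nic.connect('your-ssid', 'your-password')
    while not nic.isconnected():
        pass
# network services are now reachable via the usocket module
s = socket.socket()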
This section describes an (implied) abstract base class for all network interface classes implemented by
MicroPython ports for different hardware. This means that MicroPython does not actually provide
AbstractNIC class, but any actual NIC class, as described in the following sections, implements methods as de-
scribed here.
class network.AbstractNIC(id=None, ...)
Instantiate a network interface object. Parameters are network interface dependent. If there is more than one interface
of the same type, the first parameter should be id.
network.active([is_active ])
Activate (“up”) or deactivate (“down”) the network interface, if a boolean argument is passed. Other-
wise, query current state if no argument is provided. Most other methods require an active interface
(behavior of calling them on inactive interface is undefined).
network.connect([service_id, key=None, *, ... ])
Connect the interface to a network. This method is optional, and available only for interfaces which
are not “always connected”. If no parameters are given, connect to the default (or the only) ser-
vice. If a single parameter is given, it is the primary identifier of a service to connect to. It may
be accompanied by a key (password) required to access said service. There can be further arbi-
trary keyword-only parameters, depending on the networking medium type and/or particular device.
Parameters can be used to: a) specify alternative service identifier types; b) provide additional con-
nection parameters. For various medium types, there are different sets of predefined/recommended
parameters, among them:
• WiFi: bssid keyword to connect to a specific BSSID (MAC address)
network.disconnect()
Disconnect from network.
network.isconnected()
Returns True if connected to network, otherwise returns False.
network.scan(*, ...)
Scan for the available network services/connections. Returns a list of tuples with discovered service
parameters. For various network media, there are different variants of predefined/ recommended
tuple formats, among them:
• WiFi: (ssid, bssid, channel, RSSI, authmode, hidden). There may be further fields, specific to a
particular device.
The function may accept additional keyword arguments to filter scan results (e.g. scan for a particu-
lar service, on a particular channel, for services of a particular set, etc.), and to affect scan duration
and other parameters. Where possible, parameter names should match those in connect().
network.status([param ])
Query dynamic status information of the interface. When called with no argument the return value
describes the network link status. Otherwise param should be a string naming the particular status
parameter to retrieve.
The return types and values are dependent on the network medium/technology. Some of the param-
eters that may be supported are:
• WiFi STA: use 'rssi' to retrieve the RSSI of the AP signal
• WiFi AP: use 'stations' to retrieve a list of all the STAs connected to the AP. The list
contains tuples of the form (MAC, RSSI).
network.ifconfig([(ip, subnet, gateway, dns) ])
Get/set IP-level network interface parameters: IP address, subnet mask, gateway and DNS server.
When called with no arguments, this method returns a 4-tuple with the above information. To set
the above values, pass a 4-tuple with the required information. For example:
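A sketch with placeholder addresses (nic stands for the interface object created earlier):
nic.ifconfig(('192.168.0.4', '255.255.255.0', '192.168.0.1', '8.8.8.8'))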
network.config('param')
network.config(param=value, ...)
Get or set general network interface parameters. These methods allow working with additional pa-
rameters beyond the standard IP configuration (as dealt with by ifconfig()). These include network-
specific and hardware-specific parameters. For setting parameters, the keyword argument syntax
should be used, and multiple parameters can be set at once. For querying, a parameter name should
be quoted as a string, and only one parameter can be queried at a time:
# Set WiFi access point name (formally known as ESSID) and WiFi channel
ap.config(essid='My AP', channel=11)
# Query params one by one
print(ap.config('essid'))
print(ap.config('channel'))
Functions
network.phy_mode([mode ])
Get or set the PHY mode.
If the mode parameter is provided, sets the mode to its value. If the function is called without parameters, returns
the current mode.
The possible modes are defined as constants:
• MODE_11B – IEEE 802.11b,
• MODE_11G – IEEE 802.11g,
• MODE_11N – IEEE 802.11n.
class WLAN
This class provides a driver for the WiFi network processor in the ESP8266. Example usage:
import network
# enable station interface and connect to WiFi access point
nic = network.WLAN(network.STA_IF)
nic.active(True)
nic.connect('your-ssid', 'your-password')
# now use sockets as usual
Constructors
class network.WLAN(interface_id)
Create a WLAN network interface object. Supported interfaces are network.STA_IF (station aka client, connects
to upstream WiFi access points) and network.AP_IF (access point, allows other WiFi clients to connect). Avail-
ability of the methods below depends on interface type. For example, only STA interface may connect() to an
access point.
Methods
wlan.active([is_active ])
Activate (“up”) or deactivate (“down”) the network interface, if a boolean argument is passed. Otherwise, query
the current state if no argument is provided. Most other methods require an active interface.
wlan.connect(ssid=None, password=None, *, bssid=None)
Connect to the specified wireless network, using the specified password. If bssid is given then the connection
will be restricted to the access-point with that MAC address (the ssid must also be specified in this case).
wlan.disconnect()
Disconnect from the currently connected wireless network.
wlan.scan()
Scan for the available wireless networks.
Scanning is only possible on the STA interface. Returns a list of tuples with information about WiFi access points:
(ssid, bssid, channel, RSSI, authmode, hidden)
bssid is the hardware address of an access point, in binary form, returned as a bytes object. You can use
ubinascii.hexlify() to convert it to ASCII form.
There are five values for authmode:
• 0 – open
• 1 – WEP
• 2 – WPA-PSK
• 3 – WPA2-PSK
• 4 – WPA/WPA2-PSK
and two for hidden:
• 0 – visible
• 1 – hidden
wlan.status([param ])
Return the current status of the wireless connection.
When called with no argument the return value describes the network link status. The possible statuses are
defined as constants:
• STAT_IDLE – no connection and no activity,
• STAT_CONNECTING – connecting in progress,
• STAT_WRONG_PASSWORD – failed due to incorrect password,
• STAT_NO_AP_FOUND – failed because no access point replied,
• STAT_CONNECT_FAIL – failed due to other problems,
• STAT_GOT_IP – connection successful.
When called with one argument, param should be a string naming the status parameter to retrieve. Supported
parameters in WiFi STA mode are: 'rssi'.
wlan.isconnected()
In case of STA mode, returns True if connected to a WiFi access point and has a valid IP address. In AP mode
returns True when a station is connected. Returns False otherwise.
wlan.ifconfig([(ip, subnet, gateway, dns) ])
Get/set IP-level network interface parameters: IP address, subnet mask, gateway and DNS server. When called
with no arguments, this method returns a 4-tuple with the above information. To set the above values, pass a
4-tuple with the required information. For example:
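A sketch with placeholder addresses:
wlan.ifconfig(('192.168.0.4', '255.255.255.0', '192.168.0.1', '8.8.8.8'))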
wlan.config('param')
wlan.config(param=value, ...)
Get or set general network interface parameters. These methods allow working with additional parameters
beyond the standard IP configuration (as dealt with by wlan.ifconfig()). These include network-specific
and hardware-specific parameters. For setting parameters, the keyword argument syntax should be used, and
multiple parameters can be set at once. For querying, a parameter name should be quoted as a string, and only
one parameter can be queried at a time:
# Set WiFi access point name (formally known as ESSID) and WiFi channel
ap.config(essid='My AP', channel=11)
# Query params one by one
print(ap.config('essid'))
print(ap.config('channel'))
Following are commonly supported parameters (availability of a specific parameter depends on network tech-
nology type, driver, and MicroPython port).
Parameter – Description
mac – MAC address (bytes)
essid – WiFi access point name (string)
channel – WiFi channel (integer)
hidden – Whether ESSID is hidden (boolean)
authmode – Authentication mode supported (enumeration, see module constants)
password – Access password (string)
dhcp_hostname – The DHCP hostname to use
This module implements a “foreign data interface” for MicroPython. The idea behind it is similar to CPython’s ctypes
module, but the actual API is different, streamlined and optimized for small size. The basic idea of the module is
to define a data structure layout with about the same power as the C language allows, and then access it using familiar
dot-syntax to reference sub-fields.
See also:
Module ustruct Standard Python way to access binary data structures (doesn’t scale well to large and complex
structures).
Structure layout is defined by a “descriptor” - a Python dictionary which encodes field names as keys and other
properties required to access them as associated values. Currently, uctypes requires explicit specification of offsets for
each field. Offsets are given in bytes from the start of the structure.
Following are encoding examples for various field types:
• Scalar types:
in other words, value is scalar type identifier ORed with field offset (in bytes) from the start of the structure.
• Recursive structures:
"sub": (offset, {
"b0": 0 | uctypes.UINT8,
"b1": 1 | uctypes.UINT8,
})
i.e. value is a 2-tuple, first element of which is offset, and second is a structure descriptor dictionary (note:
offsets in recursive descriptors are relative to the structure it defines).
• Arrays of primitive types:
i.e. the value is a 2-tuple, the first element of which is the ARRAY flag ORed with the offset, and the second is the
scalar element type ORed with the number of elements in the array.
• Arrays of aggregate types:
i.e. value is a 3-tuple, first element of which is ARRAY flag ORed with offset, second is a number of elements
in array, and third is descriptor of element type.
• Pointer to a primitive type:
i.e. value is a 2-tuple, first element of which is PTR flag ORed with offset, and second is scalar element type.
• Pointer to an aggregate type:
i.e. value is a 2-tuple, first element of which is PTR flag ORed with offset, second is descriptor of type pointed
to.
• Bitfields:
i.e. the value is the type of the scalar value containing the given bitfield (typenames are similar to scalar types, but
prefixed with “BF”), ORed with the offset of the scalar value containing the bitfield, and further ORed with the
values for the bit offset and bit length of the bitfield within the scalar value, shifted by BF_POS and BF_LEN
positions, respectively. The bitfield position is counted from the least significant bit, and is the number of the
right-most bit of the field (in other words, it is the number of bits a scalar needs to be shifted right to extract the bitfield).
In the example above, first a UINT16 value will be extracted at offset 0 (this detail may be important when
accessing hardware registers, where particular access size and alignment are required), and then bitfield whose
rightmost bit is lsbit bit of this UINT16, and length is bitsize bits, will be extracted. For example, if lsbit is 0
and bitsize is 8, then effectively it will access least-significant byte of UINT16.
Note that bitfield operations are independent of target byte endianness, in particular, example above will access
least-significant byte of UINT16 in both little- and big-endian structures. But it depends on the least significant
bit being numbered 0. Some targets may use different numbering in their native ABI, but uctypes always uses
the normalized numbering described above.
Module contents
Given a structure descriptor dictionary and its layout type, you can instantiate a specific structure instance at a given
memory address using the uctypes.struct() constructor. The memory address usually comes from the following sources:
• A predefined address, when accessing hardware registers on a baremetal system. Look up these addresses in the
datasheet for a particular MCU/SoC.
• As a return value from a call to some FFI (Foreign Function Interface) function.
• From uctypes.addressof(), when you want to pass arguments to an FFI function, or alternatively, to access some
data for I/O (for example, data read from a file or network socket).
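Putting the pieces together, a hedged sketch of a descriptor covering the encodings described above, instantiated over a bytearray via uctypes.addressof() (all field names and offsets are illustrative only):
import uctypes

desc = {
    "f32": 0 | uctypes.UINT32,                       # scalar at offset 0
    "sub": (4, {                                     # recursive structure at offset 4
        "b0": 0 | uctypes.UINT8,
        "b1": 1 | uctypes.UINT8,
    }),
    "arr": (uctypes.ARRAY | 6, uctypes.UINT8 | 2),   # array of 2 UINT8 at offset 6
    "ptr": (uctypes.PTR | 8, uctypes.UINT8),         # pointer to UINT8 at offset 8
    # bitfield: 8 bits starting at bit 0 of a UINT16 stored at offset 12
    "bitf": 12 | uctypes.BFUINT16 | 0 << uctypes.BF_POS | 8 << uctypes.BF_LEN,
}

data = bytearray(16)
s = uctypes.struct(uctypes.addressof(data), desc, uctypes.NATIVE)
s.f32 = 0x12345678             # scalar fields can be read and assigned directly
print(s.sub.b0, s.arr[1], s.bitf)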
Structure objects
Structure objects allow accessing individual fields using standard dot notation: my_struct.substruct1.
field1. If a field is of scalar type, getting it will produce a primitive value (Python integer or float) corresponding
to the value contained in a field. A scalar field can also be assigned to.
If a field is an array, its individual elements can be accessed with the standard subscript operator [] - both read and
assigned to.
If a field is a pointer, it can be dereferenced using the [0] syntax (corresponding to the C * operator, though [0] works
in C too). Subscripting a pointer with integer values other than 0 is supported too, with the same semantics as in C.
Summing up, accessing structure fields generally follows C syntax, except for pointer dereference, when you need to
use [0] operator instead of *.
Limitations
Accessing non-scalar fields leads to allocation of intermediate objects to represent them. This means that special care
should be taken to layout a structure which needs to be accessed when memory allocation is disabled (e.g. from an
interrupt). The recommendations are:
• Avoid nested structures. For example, instead of mcu_registers.peripheral_a.register1, define
separate layout descriptors for each peripheral, to be accessed as peripheral_a.register1.
• Avoid other non-scalar data, like array. For example, instead of peripheral_a.register[0] use
peripheral_a.register0.
Note that these recommendations will lead to decreased readability and conciseness of layouts, so they should be used
only if the need to access structure fields without allocation is anticipated (it’s even possible to define 2 parallel layouts
- one for normal usage, and a restricted one to use when memory allocation is prohibited).
The esp module contains specific functions related to the ESP8266 module.
Functions
esp.sleep_type([sleep_type ])
Get or set the sleep type.
If the sleep_type parameter is provided, sets the sleep type to its value. If the function is called without parame-
ters, returns the current sleep type.
The possible sleep types are defined as constants:
• SLEEP_NONE – all functions enabled,
• SLEEP_MODEM – modem sleep, shuts down the WiFi Modem circuit.
• SLEEP_LIGHT – light sleep, shuts down the WiFi Modem circuit and suspends the processor periodically.
The system enters the set sleep mode automatically when possible.
esp.deepsleep(time=0)
Enter deep sleep.
The whole module powers down, except for the RTC clock circuit, which can be used to restart the module after
the specified time if pin 16 is connected to the reset pin. Otherwise the module will sleep until manually
reset.
esp.flash_id()
Read the device ID of the flash memory.
esp.flash_read(byte_offset, length_or_buffer)
esp.flash_write(byte_offset, bytes)
esp.flash_erase(sector_no)
esp.set_native_code_location(start, length)
Set the location that native code will be placed for execution after it is compiled. Native code is emitted when
the @micropython.native, @micropython.viper and @micropython.asm_xtensa decorators
are applied to a function. The ESP8266 must execute code from either iRAM or the lower 1MByte of flash
(which is memory mapped), and this function controls the location.
If start and length are both None then the native code location is set to the unused portion of memory at the
end of the iRAM1 region. The size of this unused portion depends on the firmware and is typically quite small
(around 500 bytes), and is enough to store a few very small functions. The advantage of using this iRAM1
region is that it does not get worn out by writing to it.
If neither start nor length are None then they should be integers. start should specify the byte offset from the
beginning of the flash at which native code should be stored. length specifies how many bytes of flash from start
can be used to store native code. start and length should be multiples of the sector size (being 4096 bytes). The
flash will be automatically erased before writing to it so be sure to use a region of flash that is not otherwise
used, for example by the firmware or the filesystem.
When using the flash to store native code start+length must be less than or equal to 1MByte. Note that the flash
can be worn out if repeated erasures (and writes) are made so use this feature sparingly. In particular, native
code needs to be recompiled and rewritten to flash on each boot (including wake from deepsleep).
In both cases above, using iRAM1 or flash, if there is no more room left in the specified region then the use of
a native decorator on a function will lead to MemoryError exception being raised during compilation of that
function.
MicroPython aims to implement the Python 3.4 standard (with selected features from later versions) with respect to
language syntax, and most of the features of MicroPython are identical to those described by the “Language Reference”
documentation at docs.python.org.
The MicroPython standard library is described in the corresponding chapter. The MicroPython differences from
CPython chapter describes differences between MicroPython and CPython (which mostly concern standard library
and types, but also some language-level features).
This chapter describes features and peculiarities of MicroPython implementation and the best practices to use them.
5.1 Glossary
baremetal A system without a (full-fledged) OS, for example an MCU-based system. When running on a baremetal
system, MicroPython effectively becomes its user-facing OS with a command interpreter (REPL).
board A printed circuit board (PCB). Oftentimes, the term is used to denote a particular model of an MCU system.
Sometimes, it is used to actually refer to the MicroPython port to a particular board (and then may also refer to
“boardless” ports like the Unix port).
callee-owned tuple A tuple returned by some builtin function/method, containing data which is valid for a limited
time, usually until the next call to the same function (or a group of related functions). After the next call, data in the
tuple may be changed. This leads to the following restriction on the usage of callee-owned tuples - references to them
cannot be stored. The only valid operation is extracting values from them (including making a copy). Callee-
owned tuples are a MicroPython-specific construct (not available in the general Python language), introduced for
memory allocation optimization. The idea is that a callee-owned tuple is allocated once and stored on the callee
side. Subsequent calls don’t require allocation, allowing multiple values to be returned when allocation is not possible
(e.g. in interrupt context) or not desirable (because allocation inherently leads to memory fragmentation). Note
that callee-owned tuples are effectively mutable tuples, making an exception to Python’s rule that tuples are
immutable. (It may be interesting to ask why tuples were used for such a purpose then, instead of mutable lists - the
reason is that lists are mutable from the user application side too, so a user could do things to a callee-owned
list which the callee doesn’t expect, and that could lead to problems; a tuple is protected from this.)
CPython CPython is the reference implementation of the Python programming language, and the most well-known one,
which most people run. It is however one of many implementations (among which are Jython, IronPython,
PyPy, and many more, including MicroPython). As there is no formal specification of the Python language, only
the CPython documentation, it is not always easy to draw a line between Python the language and CPython, its partic-
ular implementation. This however leaves more freedom for other implementations. For example, MicroPython
does a lot of things differently than CPython, while still aspiring to be a Python language implementation.
GPIO General-purpose input/output. The simplest means to control electrical signals. With GPIO, a user can configure
a hardware signal pin to be either input or output, and set or get its digital signal value (logical “0” or “1”).
MicroPython abstracts GPIO access using the machine.Pin and machine.Signal classes.
GPIO port A group of GPIO pins, usually based on hardware properties of these pins (e.g. controllable by the same
register).
interned string A string referenced by its (unique) identity rather than its address. Interned strings can thus be
quickly compared just by their identifiers, instead of comparing by content. The drawbacks of interned strings
are that the interning operation takes time (proportional to the number of existing interned strings, i.e. becoming
slower and slower over time) and that the space used for interned strings is not reclaimable. String interning is
done automatically by the MicroPython compiler and runtime when it’s either required by the implementation (e.g.
function keyword arguments are represented by interned string ids) or deemed beneficial (e.g. for short enough
strings, which have a chance of being repeated, and thus interning them would save memory on copies). Most
string and I/O operations don’t produce interned strings due to the drawbacks described above.
MCU Microcontroller. Microcontrollers usually have far fewer resources than a full-fledged computing system, but
are smaller, cheaper and require much less power. MicroPython is designed to be small and optimized enough to
run on an average modern microcontroller.
micropython-lib MicroPython is (usually) distributed as a single executable/binary file with just a few builtin modules.
There is no extensive standard library comparable with CPython’s. Instead, there is a related, but separate project
micropython-lib which provides implementations for many modules from CPython’s standard library. However,
a large subset of these modules requires a POSIX-like environment (Linux, FreeBSD, MacOS, etc.; Windows may
be partially supported), and thus would work or make sense only with the MicroPython Unix port. Some
subset of modules is however usable for baremetal ports too.
Unlike the monolithic CPython stdlib, micropython-lib modules are intended to be installed individually - either
using manual copying or using upip.
MicroPython port MicroPython supports different boards, RTOSes, and OSes, and can be relatively easily adapted
to new systems. MicroPython with support for a particular system is called a “port” to that system. Different
ports may have widely different functionality. This documentation is intended to be a reference of the generic
APIs available across different ports (“MicroPython core”). Note that some ports may still omit some APIs
described here (e.g. due to resource constraints). Any such differences, and port-specific extensions beyond
MicroPython core functionality, would be described in the separate port-specific documentation.
MicroPython Unix port The Unix port is one of the major MicroPython ports. It is intended to run on POSIX-compatible
operating systems, like Linux, MacOS, FreeBSD, Solaris, etc. It also serves as the basis of the Windows port. The
importance of the Unix port lies in the fact that while there are many different boards, so two random users are unlikely
to have the same board, almost all modern OSes have some level of POSIX compatibility, so the Unix port serves as
a kind of “common ground” to which any user can have access. So, the Unix port is used for initial prototyping,
different kinds of testing, development of machine-independent features, etc. All users of MicroPython, even
those which are interested only in running MicroPython on MCU systems, are recommended to be familiar with
the Unix (or Windows) port, as it is an important productivity helper and a part of the normal MicroPython workflow.
port Either MicroPython port or GPIO port. If not clear from context, it’s recommended to use full specification like
one of the above.
stream Also known as a “file-like object”. An object which provides sequential read-write access to the under-
lying data. A stream object implements a corresponding interface, which consists of methods like read(),
write(), readinto(), seek(), flush(), close(), etc. A stream is an important concept in MicroPy-
thon, many I/O objects implement the stream interface, and thus can be used consistently and interchangeably
in different contexts. For more information on streams in MicroPython, see uio module.
upip (Literally, “micro pip”). A package manager for MicroPython, inspired by CPython’s pip, but much smaller and
with reduced functionality. upip runs both on the Unix port and on baremetal ports (those which offer filesystem
and networking support).
This section covers some characteristics of the MicroPython Interactive Interpreter Mode. A commonly used term for
this is REPL (read-eval-print-loop) which will be used to refer to this interactive prompt.
5.2.1 Auto-indent
When typing python statements which end in a colon (for example if, for, while) then the prompt will change to three
dots (...) and the cursor will be indented by 4 spaces. When you press return, the next line will continue at the same
level of indentation for regular statements or an additional level of indentation where appropriate. If you press the
backspace key then it will undo one level of indentation.
If your cursor is all the way back at the beginning, pressing RETURN will then execute the code that you’ve entered.
The following shows what you’d see after entering a for statement (the underscore shows where the cursor winds up):
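A sketch of roughly what the prompt looks like at this point (the loop shown is only an illustration):
>>> for i in range(30):
...     _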
Finally type print(i), press RETURN, press BACKSPACE and press RETURN again:
Auto-indent won’t be applied if the previous two lines were all spaces. This means that you can finish entering a
compound statement by pressing RETURN twice, and then a third press will finish and execute.
5.2.2 Auto-completion
While typing a command at the REPL, if the line typed so far corresponds to the beginning of the name of something,
then pressing TAB will show possible things that could be entered. For example, first import the machine module by
entering import machine and pressing RETURN. Then type m and press TAB and it should expand to machine.
Enter a dot . and press TAB again. You should see something like:
>>> machine.
__name__ info unique_id reset
bootloader freq rng idle
sleep deepsleep disable_irq enable_irq
Pin
The word will be expanded as much as possible until multiple possibilities exist. For example, type machine.Pin.
AF3 and press TAB and it will expand to machine.Pin.AF3_TIM. Pressing TAB a second time will show the
possible expansions:
>>> machine.Pin.AF3_TIM
AF3_TIM10 AF3_TIM11 AF3_TIM8 AF3_TIM9
>>> machine.Pin.AF3_TIM
You can interrupt a running program by pressing Ctrl-C. This will raise a KeyboardInterrupt which will bring you back
to the REPL, providing your program doesn’t intercept the KeyboardInterrupt exception.
For example:
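A hedged illustration of interrupting a busy loop (the exact traceback text varies between builds):
>>> while True:
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
KeyboardInterrupt:
>>>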
If you want to paste some code into your terminal window, the auto-indent feature will mess things up. For example,
if you had the following python code:
def foo():
print('This is a test to show paste mode')
print('Here is a second line')
foo()
and you try to paste this into the normal REPL, then you will see something like this:
If you press Ctrl-E, then you will enter paste mode, which essentially turns off the auto-indent feature, and changes
the prompt from >>> to ===. For example:
>>>
paste mode; Ctrl-C to cancel, Ctrl-D to finish
=== def foo():
=== print('This is a test to show paste mode')
=== print('Here is a second line')
=== foo()
===
This is a test to show paste mode
Here is a second line
>>>
Paste Mode allows blank lines to be pasted. The pasted text is compiled as if it were a file. Pressing Ctrl-D exits paste
mode and initiates the compilation.
A soft reset will reset the python interpreter, but tries not to reset the method by which you’re connected to the
MicroPython board (USB-serial, or Wifi).
You can perform a soft reset from the REPL by pressing Ctrl-D, or from your python code by executing:
machine.soft_reset()
For example, if you reset your MicroPython board, and you execute a dir() command, you’d see something like this:
>>> dir()
['__name__', 'pyb']
>>> i = 1
>>> j = 23
>>> x = 'abc'
>>> dir()
['j', 'x', '__name__', 'pyb', 'i']
>>>
Now if you enter Ctrl-D, and repeat the dir() command, you’ll see that your variables no longer exist:
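Consistent with the earlier output, the result would look something like this:
>>> dir()
['__name__', 'pyb']
>>>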
When you use the REPL, you may perform computations and see the results. MicroPython stores the results of the
previous statement in the variable _ (underscore). So you can use the underscore to save the result in a variable. For
example:
>>> 1 + 2 + 3 + 4 + 5
15
>>> x = _
>>> x
15
>>>
Raw mode is not something that a person would normally use. It is intended for programmatic use. It essentially
behaves like paste mode with echo turned off.
Raw mode is entered using Ctrl-A. You then send your python code, followed by a Ctrl-D. The Ctrl-D will be ac-
knowledged by ‘OK’ and then the python code will be compiled and executed. Any output (or errors) will be sent
back. Entering Ctrl-B will leave raw mode and return to the regular (aka friendly) REPL.
The tools/pyboard.py program uses the raw REPL to execute python files on the MicroPython board.
On suitable hardware MicroPython offers the ability to write interrupt handlers in Python. Interrupt handlers - also
known as interrupt service routines (ISR’s) - are defined as callback functions. These are executed in response to
an event such as a timer trigger or a voltage change on a pin. Such events can occur at any point in the execution
of the program code. This carries significant consequences, some specific to the MicroPython language. Others are
common to all systems capable of responding to real time events. This document covers the language specific issues
first, followed by a brief introduction to real time programming for those new to it.
This introduction uses vague terms like “slow” or “as fast as possible”. This is deliberate, as speeds are application
dependent. Acceptable durations for an ISR are dependent on the rate at which interrupts occur, the nature of the main
program, and the presence of other concurrent events.
This summarises the points detailed below and lists the principal recommendations for interrupt handler code.
• Keep the code as short and simple as possible.
• Avoid memory allocation: no appending to lists or insertion into dictionaries, no floating point.
• Consider using micropython.schedule to work around the above constraint.
• Where an ISR returns multiple bytes use a pre-allocated bytearray. If multiple integers are to be shared
between an ISR and the main program consider an array (array.array).
• Where data is shared between the main program and an ISR, consider disabling interrupts prior to accessing the
data in the main program and re-enabling them immediately afterwards (see Critical Sections).
• Allocate an emergency exception buffer (see below).
If an error occurs in an ISR, MicroPython is unable to produce an error report unless a special buffer is created for the
purpose. Debugging is simplified if the following code is included in any program using interrupts.
import micropython
micropython.alloc_emergency_exception_buf(100)
Simplicity
For a variety of reasons it is important to keep ISR code as short and simple as possible. It should do only what has
to be done immediately after the event which caused it: operations which can be deferred should be delegated to the
main program loop. Typically an ISR will deal with the hardware device which caused the interrupt, making it ready
for the next interrupt to occur. It will communicate with the main loop by updating shared data to indicate that the
interrupt has occurred, and it will return. An ISR should return control to the main loop as quickly as possible. This is
not a specific MicroPython issue so is covered in more detail below.
Normally an ISR needs to communicate with the main program. The simplest means of doing this is via one or more
shared data objects, either declared as global or shared via a class (see below). There are various restrictions and
hazards around doing this, which are covered in more detail below. Integers, bytes and bytearray objects are
commonly used for this purpose along with arrays (from the array module) which can store various data types.
MicroPython supports the use of bound methods as callbacks, a powerful technique which enables an ISR to share instance variables with the underlying code. It also enables a class implementing a device driver to support multiple device instances. The following example causes two LEDs to flash at different rates.
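A minimal sketch of such a class, assuming the Pyboard's pyb.Timer and pyb.LED (the frequencies are arbitrary):

import pyb

class Foo():
    def __init__(self, timer, led):
        self.led = led
        timer.callback(self.cb)   # bound method used as the interrupt callback
    def cb(self, tim):
        self.led.toggle()

red = Foo(pyb.Timer(4, freq=1), pyb.LED(1))
green = Foo(pyb.Timer(2, freq=0.8), pyb.LED(2))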
In this example the red instance associates timer 4 with LED 1: when a timer 4 interrupt occurs red.cb() is called
causing LED 1 to change state. The green instance operates similarly: a timer 2 interrupt results in the execution of
green.cb() and toggles LED 2. The use of instance methods confers two benefits. Firstly a single class enables
code to be shared between multiple hardware instances. Secondly, as a bound method the callback function’s first
argument is self. This enables the callback to access instance data and to save state between successive calls. For
example, if the class above had a variable self.count set to zero in the constructor, cb() could increment the
counter. The red and green instances would then maintain independent counts of the number of times each LED
had changed state.
ISR’s cannot create instances of Python objects. This is because MicroPython needs to allocate memory for the object
from a store of free memory blocks known as the heap. This is not permitted in an interrupt handler because heap allocation
is not re-entrant. In other words the interrupt might occur when the main program is part way through performing an
allocation - to maintain the integrity of the heap the interpreter disallows memory allocations in ISR code.
A consequence of this is that ISR’s can’t use floating point arithmetic; this is because floats are Python objects.
Similarly an ISR can’t append an item to a list. In practice it can be hard to determine exactly which code constructs
will attempt to perform memory allocation and provoke an error message: another reason for keeping ISR code short
and simple.
One way to avoid this issue is for the ISR to use pre-allocated buffers. For example a class constructor creates a
bytearray instance and a boolean flag. The ISR method assigns data to locations in the buffer and sets the flag.
The memory allocation occurs in the main program code when the object is instantiated rather than in the ISR.
The MicroPython library I/O methods usually provide an option to use a pre-allocated buffer. For example pyb.i2c.recv() can accept a mutable buffer as its first argument: this enables its use in an ISR.
A means of creating an object without employing a class or globals is as follows:
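One possible sketch of this idiom (the values written into the buffer are purely illustrative):

def handler(t, buf=bytearray(3)):   # buf is created once, when the def statement runs
    buf[0] = 0xA5                   # the ISR only writes into the pre-allocated buffer
    buf[1] = t & 0xff
    buf[2] = 0x5A
    return buf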
The compiler instantiates the default buf argument when the function is loaded for the first time (usually when the
module it’s in is imported).
An instance of object creation occurs when a reference to a bound method is created. This means that an ISR cannot
pass a bound method to a function. One solution is to create a reference to the bound method in the class constructor
and to pass that reference in the ISR. For example:
class Foo():
    def __init__(self):
        self.bar_ref = self.bar  # Allocation occurs here
        self.x = 0.1
        tim = pyb.Timer(4)
        tim.init(freq=2)
        tim.callback(self.cb)
    def bar(self, _):
        self.x *= 1.2
        print(self.x)
    def cb(self, t):
        # Passing self.bar here would cause allocation; pass the stored reference instead
        micropython.schedule(self.bar_ref, 0)
Other techniques are to define and instantiate the method in the constructor or to pass Foo.bar() with the argument
self.
A further restriction on objects arises because of the way Python works. When an import statement is executed
the Python code is compiled to bytecode, with one line of code typically mapping to multiple bytecodes. When the
code runs the interpreter reads each bytecode and executes it as a series of machine code instructions. Given that
an interrupt can occur at any time between machine code instructions, the original line of Python code may be only
partially executed. Consequently a Python object such as a set, list or dictionary modified in the main loop may lack
internal consistency at the moment the interrupt occurs.
A typical outcome is as follows. On rare occasions the ISR will run at the precise moment in time when the object
is partially updated. When the ISR tries to read the object, a crash results. Because such problems typically occur on
rare, random occasions they can be hard to diagnose. There are ways to circumvent this issue, described in Critical
Sections below.
It is important to be clear about what constitutes the modification of an object. An alteration to a built-in type such as
a dictionary is problematic. Altering the contents of an array or bytearray is not. This is because bytes or words are
written as a single machine code instruction which is not interruptible: in the parlance of real time programming the
write is atomic. A user defined object might instantiate an integer, array or bytearray. It is valid for both the main loop
and the ISR to alter the contents of these.
MicroPython supports integers of arbitrary precision. Values between 2**30 -1 and -2**30 will be stored in a single
machine word. Larger values are stored as Python objects. Consequently changes to long integers cannot be considered
atomic. The use of long integers in ISR’s is unsafe because memory allocation may be attempted as the variable’s value
changes.
In general it is best to avoid using floats in ISR code: hardware devices normally handle integers and conversion to
floats is normally done in the main loop. However there are a few DSP algorithms which require floating point. On
platforms with hardware floating point (such as the Pyboard) the inline ARM Thumb assembler can be used to work
round this limitation. This is because the processor stores float values in a machine word; values can be shared between
the ISR and main program code via an array of floats.
Using micropython.schedule
This function enables an ISR to schedule a callback for execution “very soon”. The callback is queued for execution
which will take place at a time when the heap is not locked. Hence it can create Python objects and use floats. The
callback is also guaranteed to run at a time when the main program has completed any update of Python objects, so
the callback will not encounter partially updated objects.
Typical usage is to handle sensor hardware. The ISR acquires data from the hardware and enables it to issue a further
interrupt. It then schedules a callback to process the data.
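A minimal sketch of this pattern, assuming a push-button on GPIO0 of an ESP8266-style board (the sensor read itself is left as a comment):

import micropython
from machine import Pin

micropython.alloc_emergency_exception_buf(100)

data = bytearray(4)                  # pre-allocated, shared with the main program

def process(_):                      # runs via schedule(): the heap is available here
    print('received', bytes(data))

def isr(pin):                        # keep this short: no allocation
    # acquire data from the (hypothetical) sensor into 'data' here
    micropython.schedule(process, 0)

button = Pin(0, Pin.IN, Pin.PULL_UP)
button.irq(trigger=Pin.IRQ_FALLING, handler=isr)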
Scheduled callbacks should comply with the principles of interrupt handler design outlined below. This is to avoid
problems resulting from I/O activity and the modification of shared data which can arise in any code which pre-empts
the main program loop.
Execution time needs to be considered in relation to the frequency with which interrupts can occur. If an interrupt
occurs while the previous callback is executing, a further instance of the callback will be queued for execution; this
will run after the current instance has completed. A sustained high interrupt repetition rate therefore carries a risk of
unconstrained queue growth and eventual failure with a RuntimeError.
If the callback to be passed to schedule() is a bound method, consider the note in “Creation of Python objects”.
5.3.3 Exceptions
If an ISR raises an exception it will not propagate to the main loop. The interrupt will be disabled unless the exception
is handled by the ISR code.
This is merely a brief introduction to the subject of real time programming. Beginners should note that design errors
in real time programs can lead to faults which are particularly hard to diagnose. This is because they can occur rarely
and at intervals which are essentially random. It is crucial to get the initial design right and to anticipate issues before
they arise. Both interrupt handlers and the main program need to be designed with an appreciation of the following
issues.
As mentioned above, ISR’s should be designed to be as simple as possible. They should always return in a short,
predictable period of time. This is important because when the ISR is running, the main loop is not: inevitably the
main loop experiences pauses in its execution at random points in the code. Such pauses can be a source of hard to
diagnose bugs particularly if their duration is long or variable. In order to understand the implications of ISR run time,
a basic grasp of interrupt priorities is required.
Interrupts are organised according to a priority scheme. ISR code may itself be interrupted by a higher priority
interrupt. This has implications if the two interrupts share data (see Critical Sections below). If such an interrupt
occurs it interposes a delay into the ISR code. If a lower priority interrupt occurs while the ISR is running, it will be
delayed until the ISR is complete: if the delay is too long, the lower priority interrupt may fail. A further issue with
slow ISR’s is the case where a second interrupt of the same type occurs during its execution. The second interrupt will
be handled on termination of the first. However if the rate of incoming interrupts consistently exceeds the capacity of
the ISR to service them the outcome will not be a happy one.
Consequently looping constructs should be avoided or minimised. I/O to devices other than to the interrupting device
should normally be avoided: I/O such as disk access, print statements and UART access is relatively slow, and its
duration may vary. A further issue here is that filesystem functions are not reentrant: using filesystem I/O in an ISR
and the main program would be hazardous. Crucially ISR code should not wait on an event. I/O is acceptable if the
code can be guaranteed to return in a predictable period, for example toggling a pin or LED. Accessing the interrupting
device via I2C or SPI may be necessary but the time taken for such accesses should be calculated or measured and its
impact on the application assessed.
There is usually a need to share data between the ISR and the main loop. This may be done either through global
variables or via class or instance variables. Variables are typically integer or boolean types, or integer or byte arrays
(a pre-allocated integer array offers faster access than a list). Where multiple values are modified by the ISR it is
necessary to consider the case where the interrupt occurs at a time when the main program has accessed some, but not
all, of the values. This can lead to inconsistencies.
Consider the following design. An ISR stores incoming data in a bytearray, then adds the number of bytes received to
an integer representing total bytes ready for processing. The main program reads the number of bytes, processes the
bytes, then clears down the number of bytes ready. This will work until an interrupt occurs just after the main program
has read the number of bytes. The ISR puts the added data into the buffer and updates the number received, but the
main program has already read the number, so processes the data originally received. The newly arrived bytes are lost.
There are various ways of avoiding this hazard, the simplest being to use a circular buffer. If it is not possible to use a
structure with inherent thread safety other ways are described below.
Reentrancy
A potential hazard may occur if a function or method is shared between the main program and one or more ISR’s or
between multiple ISR’s. The issue here is that the function may itself be interrupted and a further instance of that
function run. If this is to occur, the function must be designed to be reentrant. How this is done is an advanced topic
beyond the scope of this tutorial.
Critical Sections
An example of a critical section of code is one which accesses more than one variable which can be affected by an ISR.
If the interrupt happens to occur between accesses to the individual variables, their values will be inconsistent. This
is an instance of a hazard known as a race condition: the ISR and the main program loop race to alter the variables.
To avoid inconsistency a means must be employed to ensure that the ISR does not alter the values for the duration of
the critical section. One way to achieve this is to issue pyb.disable_irq() before the start of the section, and
pyb.enable_irq() at the end. Here is an example of this approach:
import array, micropython, pyb
from micropython import const
micropython.alloc_emergency_exception_buf(100)

class BoundsException(Exception):
    pass

ARRAYSIZE = const(20)
index = 0
data = array.array('i', 0 for x in range(ARRAYSIZE))

def callback1(t):                        # timer interrupt callback
    global data, index
    for x in range(5):
        data[index] = pyb.rng()          # simulate input
        index += 1
        if index >= ARRAYSIZE:
            tim4.callback(None)          # stop the timer before raising
            raise BoundsException('Array bounds exceeded')

tim4 = pyb.Timer(4, freq=100, callback=callback1)   # rate chosen arbitrarily
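A sketch of the corresponding main loop, which processes the shared data inside a critical section (it assumes the timer and globals defined above):

for loop in range(1000):
    if index > 0:
        irq_state = pyb.disable_irq()    # start of critical section
        for x in range(index):
            print(data[x])
        index = 0
        pyb.enable_irq(irq_state)        # end of critical section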
A critical section can comprise a single line of code and a single variable. Consider the following code fragment.
count = 0

def cb():  # An interrupt callback
    global count
    count += 1

def main():
    global count
    # Code to set up the interrupt callback omitted
    while True:
        count += 1
This example illustrates a subtle source of bugs. The line count += 1 in the main loop carries a specific race
condition hazard known as a read-modify-write. This is a classic cause of bugs in real time systems. In the main loop
MicroPython reads the value of count, adds 1 to it, and writes it back. On rare occasions the interrupt occurs
after the read and before the write. The interrupt modifies count but its change is overwritten by the main loop
when the ISR returns. In a real system this could lead to rare, unpredictable failures.
As mentioned above, care should be taken if an instance of a Python built in type is modified in the main code and that
instance is accessed in an ISR. The code performing the modification should be regarded as a critical section to ensure
that the instance is in a valid state when the ISR runs.
Particular care needs to be taken if a dataset is shared between different ISR’s. The hazard here is that the higher
priority interrupt may occur when the lower priority one has partially updated the shared data. Dealing with this
situation is an advanced topic beyond the scope of this introduction other than to note that mutex objects described
below can sometimes be used.
Disabling interrupts for the duration of a critical section is the usual and simplest way to proceed, but it disables all
interrupts rather than merely the one with the potential to cause problems. It is generally undesirable to disable an
interrupt for long. In the case of timer interrupts it introduces variability to the time when a callback occurs. In the
case of device interrupts, it can lead to the device being serviced too late with possible loss of data or overrun errors
in the device hardware. Like ISR’s, a critical section in the main code should have a short, predictable duration.
An approach to dealing with critical sections which radically reduces the time for which interrupts are disabled is to
use an object termed a mutex (name derived from the notion of mutual exclusion). The main program locks the mutex
before running the critical section and unlocks it at the end. The ISR tests whether the mutex is locked. If it is, it avoids
the critical section and returns. The design challenge is defining what the ISR should do in the event that access to the
critical variables is denied. A simple example of a mutex may be found here. Note that the mutex code does disable
interrupts, but only for the duration of eight machine instructions: the benefit of this approach is that other interrupts
are virtually unaffected.
Interrupt handlers, such as those associated with timers, can continue to run after a program terminates. This may
produce unexpected results where you might have expected the object raising the callback to have gone out of scope.
For example on the Pyboard:
def bar():
    foo = pyb.Timer(2, freq=4, callback=lambda t: print('.', end=''))

bar()
This continues to run until the timer is explicitly disabled or the board is reset with ctrl D.
5.4 Maximising MicroPython speed
This tutorial describes ways of improving the performance of MicroPython code. Optimisations involving other lan-
guages are covered elsewhere, namely the use of modules written in C and the MicroPython inline assembler.
The process of developing high performance code comprises the following stages which should be performed in the
order listed.
• Design for speed.
• Code and debug.
Optimisation steps:
• Identify the slowest section of code.
• Improve the efficiency of the Python code.
• Use the native code emitter.
• Use the viper code emitter.
• Use hardware-specific optimisations.
Performance issues should be considered at the outset. This involves taking a view on the sections of code which are
most performance critical and devoting particular attention to their design. The process of optimisation begins when
the code has been tested: if the design is correct at the outset optimisation will be straightforward and may actually be
unnecessary.
Algorithms
The most important aspect of designing any routine for performance is ensuring that the best algorithm is employed.
This is a topic for textbooks rather than for a MicroPython guide but spectacular performance gains can sometimes be
achieved by adopting algorithms known for their efficiency.
RAM Allocation
To design efficient MicroPython code it is necessary to have an understanding of the way the interpreter allocates
RAM. When an object is created or grows in size (for example where an item is appended to a list) the necessary
RAM is allocated from a block known as the heap. This takes a significant amount of time; further it will on occasion
trigger a process known as garbage collection which can take several milliseconds.
Consequently the performance of a function or method can be improved if an object is created once only and not
permitted to grow in size. This implies that the object persists for the duration of its use: typically it will be instantiated
in a class constructor and used in various methods.
This is covered in further detail in Controlling garbage collection below.
Buffers
An example of the above is the common case where a buffer is required, such as one used for communication with
a device. A typical driver will create the buffer in the constructor and use it in its I/O methods which will be called
repeatedly.
The MicroPython libraries typically provide support for pre-allocated buffers. For example, objects which support the
stream interface (e.g. file or UART) provide a read() method which allocates a new buffer for the read data, but also a
readinto() method to read data into an existing buffer.
Floating Point
Some MicroPython ports allocate floating point numbers on the heap. Other ports may lack a dedicated floating-point
coprocessor and perform arithmetic operations on floats in software, at considerably lower speed than on integers.
Where performance is important, use integer operations and restrict the use of floating point to sections of the code
where performance is not paramount. For example, capture ADC readings as integer values into an array in one quick
go, and only then convert them to floating-point numbers for signal processing.
Arrays
Consider the use of the various types of array classes as an alternative to lists. The array module supports various
element types with 8-bit elements supported by Python’s built in bytes and bytearray classes. These data struc-
tures all store elements in contiguous memory locations. Once again to avoid memory allocation in critical code these
should be pre-allocated and passed as arguments or as bound objects.
When passing slices of objects such as bytearray instances, Python creates a copy which involves allocation of
memory proportional to the size of the slice. This can be alleviated by using a memoryview object. A memoryview is
itself allocated on the heap, but it is a small, fixed-size object regardless of the size of the slice it points to.
A memoryview can only be applied to objects supporting the buffer protocol - this includes arrays but not lists.
A small caveat is that while a memoryview object is alive, it also keeps the original buffer object alive. So a memoryview
isn't a universal panacea. For instance, if you are done with a 10K buffer and only need bytes 30:2000 from it, it may
be better to make a slice and let the 10K buffer go (become ready for garbage collection), instead of keeping a
long-lived memoryview which blocks the whole 10K from being reclaimed.
Nonetheless, memoryview is indispensable for advanced preallocated buffer management. The readinto() method
discussed above puts data at the beginning of the buffer and fills the entire buffer. What if you need to put data in the
middle of an existing buffer? Just create a memoryview into the needed section of the buffer and pass it to readinto().
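A sketch of that idiom (uart stands in for any already-configured stream device providing readinto()):

buf = bytearray(100)
mv = memoryview(buf)
# uart is a hypothetical, previously configured UART (or any stream object)
uart.readinto(mv[20:50])    # up to 30 bytes land in the middle of buf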
Identifying the slowest section of code is a process known as profiling, which is covered in textbooks and (for standard
Python) supported by various software tools. For the type of smaller embedded application likely to be running on
MicroPython platforms the slowest function or method can usually be established by judicious use of the timing ticks
group of functions documented in utime. Code execution time can be measured in ms, us, or CPU cycles.
The following enables any function or method to be timed by adding an @timed_function decorator:
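The decorator itself did not survive conversion; one possible implementation, using the utime ticks functions, is:

import utime

def timed_function(f, *args, **kwargs):
    myname = str(f).split(' ')[1]
    def new_func(*args, **kwargs):
        t = utime.ticks_us()
        result = f(*args, **kwargs)
        delta = utime.ticks_diff(utime.ticks_us(), t)
        print('Function {} Time = {:6.3f}ms'.format(myname, delta / 1000))
        return result
    return new_func

It would then be applied as @timed_function above the definition of the function to be measured.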
MicroPython provides a const() declaration. This works in a similar way to #define in C in that when the code
is compiled to bytecode the compiler substitutes the numeric value for the identifier. This avoids a dictionary lookup
at runtime. The argument to const() may be anything which, at compile time, evaluates to an integer e.g. 0x100
or 1 << 8.
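For example (a small sketch; the names are arbitrary):

from micropython import const

_BUFSIZE = const(0x100)   # substituted at compile time: no runtime dictionary lookup
MASK = const(1 << 8)      # any expression evaluating to an integer is allowed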
Where a function or method repeatedly accesses objects performance is improved by caching the object in a local
variable:
class foo(object):
    def __init__(self):
        self.ba = bytearray(100)
    def bar(self, obj_display):
        ba_ref = self.ba
        fb = obj_display.framebuffer
        # iterative code using these two objects
This avoids the need repeatedly to look up self.ba and obj_display.framebuffer in the body of the method
bar().
When memory allocation is required, MicroPython attempts to locate an adequately sized block on the heap. This may
fail, usually because the heap is cluttered with objects which are no longer referenced by code. If a failure occurs, the
process known as garbage collection reclaims the memory used by these redundant objects and the allocation is then
tried again - a process which can take several milliseconds.
There may be benefits in pre-empting this by periodically issuing gc.collect(). Firstly doing a collection before
it is actually required is quicker - typically on the order of 1ms if done frequently. Secondly you can determine the
point in code where this time is used rather than have a longer delay occur at random points, possibly in a speed critical
section. Finally performing collections regularly can reduce fragmentation in the heap. Severe fragmentation can lead
to non-recoverable allocation failures.
This causes the MicroPython compiler to emit native CPU opcodes rather than bytecode. It covers the bulk of the
MicroPython functionality, so most functions will require no adaptation (but see below). It is invoked by means of a
function decorator:
@micropython.native
def foo(self, arg):
    buf = self.linebuf  # Cached object
    # code
There are certain limitations in the current implementation of the native code emitter.
• Context managers are not supported (the with statement).
• Generators are not supported.
• If raise is used an argument must be supplied.
The trade-off for the improved performance (roughly twice as fast as bytecode) is an increase in compiled code size.
The optimisations discussed above involve standards-compliant Python code. The Viper code emitter is not fully
compliant. It supports special Viper native data types in pursuit of performance. Integer processing is non-compliant
because it uses machine words: arithmetic on 32 bit hardware is performed modulo 2**32.
Like the Native emitter Viper produces machine instructions but further optimisations are performed, substantially
increasing performance especially for integer arithmetic and bit manipulations. It is invoked using a decorator:
@micropython.viper
def foo(self, arg: int) -> int:
    # code
As the above fragment illustrates it is beneficial to use Python type hints to assist the Viper optimiser. Type hints
provide information on the data types of arguments and of the return value; these are a standard Python language
feature, formally defined in PEP 484. Viper supports its own set of types, namely int, uint (unsigned integer),
ptr, ptr8, ptr16 and ptr32. The ptrX types are discussed below. Currently the uint type serves a single
purpose: as a type hint for a function return value. If such a function returns 0xffffffff Python will interpret the
result as 2**32 -1 rather than as -1.
In addition to the restrictions imposed by the native emitter the following constraints apply:
• Functions may have up to four arguments.
• Default argument values are not permitted.
• Floating point may be used but is not optimised.
Viper provides pointer types to assist the optimiser. These comprise
• ptr Pointer to an object.
• ptr8 Points to a byte.
• ptr16 Points to a 16 bit half-word.
• ptr32 Points to a 32 bit machine word.
The concept of a pointer may be unfamiliar to Python programmers. It has similarities to a Python memoryview
object in that it provides direct access to data stored in memory. Items are accessed using subscript notation, but slices
are not supported: a pointer can return a single item only. Its purpose is to provide fast random access to data stored in
contiguous memory locations - such as data stored in objects which support the buffer protocol, and memory-mapped
peripheral registers in a microcontroller. It should be noted that programming using pointers is hazardous: bounds
checking is not performed and the compiler does nothing to prevent buffer overrun errors.
@micropython.viper
def foo(self, arg: int) -> int:
    buf = ptr8(self.linebuf)  # self.linebuf is a bytearray or bytes object
    for x in range(20, 30):
        bar = buf[x]          # Access a data item through the pointer
        # code omitted
In this instance the compiler “knows” that buf is the address of an array of bytes; it can emit code to rapidly com-
pute the address of buf[x] at runtime. Where casts are used to convert objects to Viper native types these should
be performed at the start of the function rather than in critical timing loops as the cast operation can take several
microseconds. The rules for casting are as follows:
• Casting operators are currently: int, bool, uint, ptr, ptr8, ptr16 and ptr32.
• The result of a cast will be a native Viper variable.
• Arguments to a cast can be a Python object or a native Viper variable.
• If argument is a native Viper variable, then cast is a no-op (i.e. costs nothing at runtime) that just changes the
type (e.g. from uint to ptr8) so that you can then store/load using this pointer.
• If the argument is a Python object and the cast is int or uint, then the Python object must be of integral type
and the value of that integral object is returned.
• The argument to a bool cast must be integral type (boolean or integer); when used as a return type the viper
function will return True or False objects.
• If the argument is a Python object and the cast is ptr, ptr8, ptr16 or ptr32, then the Python object must
either have the buffer protocol with read-write capabilities (in which case a pointer to the start of the buffer is
returned) or it must be of integral type (in which case the value of that integral object is returned).
The following example illustrates the use of a ptr16 cast to toggle pin X1 n times:
import micropython
import stm
from micropython import const

BIT0 = const(1)

@micropython.viper
def toggle_n(n: int):
    odr = ptr16(stm.GPIOA + stm.GPIO_ODR)
    for _ in range(n):
        odr[0] ^= BIT0
A detailed technical description of the three code emitters may be found on Kickstarter here Note 1 and here Note 2
Note: Code examples in this section are given for the Pyboard. The techniques described however may be applied to
other MicroPython ports too.
This comes into the category of more advanced programming and involves some knowledge of the target MCU.
Consider the example of toggling an output pin on the Pyboard. The standard approach would be to write
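something like the following, where mypin stands for a previously created machine.Pin or pyb.Pin instance:

mypin.value(0)
mypin.value(1)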
This involves the overhead of two calls to the Pin instance’s value() method. This overhead can be eliminated by
performing a read/write to the relevant bit of the chip’s GPIO port output data register (odr). To facilitate this the stm
module provides a set of constants providing the addresses of the relevant registers. A fast toggle of pin P4 (CPU pin
A14) - corresponding to the green LED - can be performed as follows:
import machine
import stm
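from micropython import const

# A sketch of the remainder of the listing: as noted above, the green LED is driven
# by CPU pin A14, i.e. bit 14 of GPIOA's output data register.
BIT14 = const(1 << 14)
machine.mem16[stm.GPIOA + stm.GPIO_ODR] ^= BIT14   # one read-modify-write toggles the pin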
MicroPython is designed to be capable of running on microcontrollers. These have hardware limitations which may
be unfamiliar to programmers more familiar with conventional computers. In particular the amount of RAM and non-
volatile “disk” (flash memory) storage is limited. This tutorial offers ways to make the most of the limited resources.
Because MicroPython runs on controllers based on a variety of architectures, the methods presented are generic: in
some cases it will be necessary to obtain detailed information from platform specific documentation.
On the Pyboard the simple way to address the limited capacity is to fit a micro SD card. In some cases this is
impractical, either because the device does not have an SD card slot or for reasons of cost or power consumption;
hence the on-chip flash must be used. The firmware including the MicroPython subsystem is stored in the onboard
flash. The remaining capacity is available for use. For reasons connected with the physical architecture of the flash
memory part of this capacity may be inaccessible as a filesystem. In such cases this space may be employed by
incorporating user modules into a firmware build which is then flashed to the device.
There are two ways to achieve this: frozen modules and frozen bytecode. Frozen modules store the Python source
with the firmware. Frozen bytecode uses the cross compiler to convert the source to bytecode which is then stored
with the firmware. In either case the module may be accessed with an import statement:
import mymodule
The procedure for producing frozen modules and bytecode is platform dependent; instructions for building the
firmware can be found in the README files in the relevant part of the source tree.
In general terms the steps are as follows:
• Clone the MicroPython repository.
• Acquire the (platform specific) toolchain to build the firmware.
• Build the cross compiler.
• Place the modules to be frozen in a specified directory (dependent on whether the module is to be frozen as
source or as bytecode).
• Build the firmware. A specific command may be required to build frozen code of either type - see the platform
documentation.
• Flash the firmware to the device.
5.5.2 RAM
When reducing RAM usage there are two phases to consider: compilation and execution. In addition to memory
consumption, there is also an issue known as heap fragmentation. In general terms it is best to minimise the repeated
creation and destruction of objects. The reason for this is covered in the section covering the heap.
Compilation Phase
When a module is imported, MicroPython compiles the code to bytecode which is then executed by the MicroPython
virtual machine (VM). The bytecode is stored in RAM. The compiler itself requires RAM, but this becomes available
for use when the compilation has completed.
If a number of modules have already been imported the situation can arise where there is insufficient RAM to run the
compiler. In this case the import statement will produce a memory exception.
If a module instantiates global objects on import it will consume RAM at the time of import, which is then unavailable
for the compiler to use on subsequent imports. In general it is best to avoid code which runs on import; a better
approach is to have initialisation code which is run by the application after all modules have been imported. This
maximises the RAM available to the compiler.
If RAM is still insufficient to compile all modules one solution is to precompile modules. MicroPython has a cross
compiler capable of compiling Python modules to bytecode (see the README in the mpy-cross directory). The result-
ing bytecode file has a .mpy extension; it may be copied to the filesystem and imported in the usual way. Alternatively
some or all modules may be implemented as frozen bytecode: on most platforms this saves even more RAM as the
bytecode is run directly from flash rather than being stored in RAM.
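For example, once the cross compiler has been built (the exact invocation depends on your build environment):

./mpy-cross mymodule.py    # produces mymodule.mpy, which can be copied to the board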
Execution Phase
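The discussion below refers to a fragment along these lines (a sketch; ROWS is visible to other modules while _COLS is module-private):

from micropython import const

ROWS = const(33)
_COLS = const(0x10)
a = ROWS
b = _COLS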
In both instances where the constant is assigned to a variable the compiler will avoid coding a lookup to the name
of the constant by substituting its literal value. This saves bytecode and hence RAM. However the ROWS value will
occupy at least two machine words, one each for the key and value in the globals dictionary. The presence in the
dictionary is necessary because another module might import or use it. This RAM can be saved by prepending the
name with an underscore as in _COLS: this symbol is not visible outside the module so will not occupy RAM.
The argument to const() may be anything which, at compile time, evaluates to an integer e.g. 0x100 or 1 << 8.
It can even include other const symbols that have already been defined, e.g. 1 << BIT.
Constant data structures
Where there is a substantial volume of constant data and the platform supports execution from Flash, RAM may be
saved as follows. The data should be located in Python modules and frozen as bytecode. The data must be defined as
bytes objects. The compiler ‘knows’ that bytes objects are immutable and ensures that the objects remain in flash
memory rather than being copied to RAM. The ustruct module can assist in converting between bytes types and
other Python built-in types.
When considering the implications of frozen bytecode, note that in Python strings, floats, bytes, integers and complex
numbers are immutable. Accordingly these will be frozen into flash. Thus, in the line

mystring = "The quick brown fox"

the actual string "The quick brown fox" will reside in flash. At runtime a reference to the string is assigned to the
variable mystring. The reference occupies a single machine word. In principle a long integer could be used to store
constant data:
bar = 0xDEADBEEF0000DEADBEEF
As in the string example, at runtime a reference to the arbitrarily large integer is assigned to the variable bar. That
reference occupies a single machine word.
It might be expected that tuples of integers could be employed for the purpose of storing constant data with minimal
RAM use. With the current compiler this is ineffective (the code works, but RAM is not saved).
At runtime the tuple will be located in RAM. This may be subject to future improvement.
Needless object creation
There are a number of situations where objects may unwittingly be created and destroyed. This can reduce the usability
of RAM through fragmentation. The following sections discuss instances of this.
String concatenation
Consider the following code fragments which aim to produce constant strings:
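For instance (a sketch; the string contents are arbitrary):

var = "foo" + "bar"     # concatenation performed at runtime
var1 = "foo" "bar"      # adjacent literals are concatenated by the compiler
var2 = "foo" \
       "bar"            # likewise, spread over two lines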
Each produces the same outcome, however the first needlessly creates two string objects at runtime, allocates more
RAM for concatenation before producing the third. The others perform the concatenation at compile time which is
more efficient, reducing fragmentation.
Where strings must be dynamically created before being fed to a stream such as a file it will save RAM if this is done
in a piecemeal fashion. Rather than creating a large string object, create a substring and feed it to the stream before
dealing with the next.
The best way to create dynamic strings is by means of the string format() method:
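For example (temp and press being hypothetical values computed earlier):

var = "Temperature {:5.1f} Pressure {:06d}\n".format(temp, press)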
Buffers
When accessing devices such as instances of UART, I2C and SPI interfaces, using pre-allocated buffers avoids the
creation of needless objects. Consider these two loops:
while True:
    var = spi.read(100)
    # process data

buf = bytearray(100)
while True:
    spi.readinto(buf)
    # process data in buf
The first creates a buffer on each pass whereas the second re-uses a pre-allocated buffer; this is both faster and more
efficient in terms of memory fragmentation.
Bytes are smaller than ints
On most platforms an integer consumes four bytes. Consider the two calls to the function foo():
def foo(bar):
    for x in bar:
        print(x)

foo((1, 2, 0xff))
foo(b'\1\2\xff')
In the first call a tuple of integers is created in RAM. The second efficiently creates a bytes object consuming the
minimum amount of RAM. If the module were frozen as bytecode, the bytes object would reside in flash.
Strings Versus Bytes
Python3 introduced Unicode support. This introduced a distinction between a string and an array of bytes. MicroPy-
thon ensures that Unicode strings take no additional space so long as all characters in the string are ASCII (i.e. have
a value < 126). If values in the full 8-bit range are required bytes and bytearray objects can be used to ensure
that no additional space will be required. Note that most string methods (e.g. str.strip()) apply also to bytes
instances so the process of eliminating Unicode can be painless.
Where it is necessary to convert between strings and bytes the str.encode() and the bytes.decode() methods
can be used. Note that both strings and bytes are immutable. Any operation which takes as input such an object and
produces another implies at least one RAM allocation to produce the result. In the second line below a new bytes
object is allocated. This would also occur if foo were a string.
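A sketch of such a pair of lines:

foo = b'   leading whitespace'
foo = foo.lstrip()      # the second line allocates a new bytes object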
To establish which strings in your modules are interned at runtime (and hence occupy RAM), import the modules concerned and issue:

micropython.qstr_info(1)
Then copy and paste all the Q(xxx) lines into a text editor. Check for and remove lines which are obviously invalid.
Open the file qstrdefsport.h which will be found in ports/stm32 (or the equivalent directory for the architecture in use).
Copy and paste the corrected lines at the end of the file. Save the file, rebuild and flash the firmware. The outcome can
be checked by importing the modules and again issuing:
micropython.qstr_info(1)
When a running program instantiates an object the necessary RAM is allocated from a fixed size pool known as the
heap. When the object goes out of scope (in other words becomes inaccessible to code) the redundant object is known
as “garbage”. A process known as “garbage collection” (GC) reclaims that memory, returning it to the free heap. This
process runs automatically, however it can be invoked directly by issuing gc.collect().
The discourse on this is somewhat involved. For a ‘quick fix’ issue the following periodically:
gc.collect()
gc.threshold(gc.mem_free() // 4 + gc.mem_alloc())
Fragmentation
Say a program creates an object foo, then an object bar. Subsequently foo goes out of scope but bar remains. The
RAM used by foo will be reclaimed by GC. However if bar was allocated to a higher address, the RAM reclaimed
from foo will only be of use for objects no bigger than foo. In a complex or long running program the heap can
become fragmented: despite there being a substantial amount of RAM available, there is insufficient contiguous space
to allocate a particular object, and the program fails with a memory error.
The techniques outlined above aim to minimise this. Where large permanent buffers or other objects are required it is
best to instantiate these early in the process of program execution before fragmentation can occur. Further improve-
ments may be made by monitoring the state of the heap and by controlling GC; these are outlined below.
Reporting
A number of library functions are available to report on memory allocation and to control GC. These are to be found
in the gc and micropython modules. The following example may be pasted at the REPL (ctrl e to enter paste
mode, ctrl d to run it).
import gc
import micropython
gc.collect()
micropython.mem_info()
print('-----------------------------')
print('Initial free: {} allocated: {}'.format(gc.mem_free(), gc.mem_alloc()))
def func():
    a = bytearray(10000)
gc.collect()
print('Func definition: {} allocated: {}'.format(gc.mem_free(), gc.mem_alloc()))
func()
print('Func run free: {} allocated: {}'.format(gc.mem_free(), gc.mem_alloc()))
gc.collect()
print('Garbage collect free: {} allocated: {}'.format(gc.mem_free(), gc.mem_alloc()))
print('-----------------------------')
micropython.mem_info(1)
Symbol Meaning
. free block
h head block
= tail block
m marked head block
T tuple
L list
D dict
F float
B byte code
M module
Each letter represents a single block of memory, a block being 16 bytes. So each line of the heap dump represents
0x400 bytes or 1KiB of RAM.
A GC can be demanded at any time by issuing gc.collect(). It is advantageous to do this at intervals, firstly to
pre-empt fragmentation and secondly for performance. A GC can take several milliseconds but is quicker when there
is little work to do (about 1ms on the Pyboard). An explicit call can minimise that delay while ensuring it occurs at
points in the program when it is acceptable.
Automatic GC is provoked under the following circumstances. When an attempt at allocation fails, a GC is performed
and the allocation re-tried. Only if this fails is an exception raised. Secondly an automatic GC will be triggered if the
amount of free RAM falls below a threshold. This threshold can be adapted as execution progresses:
gc.collect()
gc.threshold(gc.mem_free() // 4 + gc.mem_alloc())
This will provoke a GC when more than 25% of the currently free heap becomes occupied.
In general modules should instantiate data objects at runtime using constructors or other initialisation functions. The
reason is that if this occurs on initialisation the compiler may be starved of RAM when subsequent modules are
imported. If modules do instantiate data on import then gc.collect() issued after the import will ameliorate the
problem.
MicroPython handles strings in an efficient manner and understanding this can help in designing applications to run
on microcontrollers. When a module is compiled, strings which occur multiple times are stored once only, a process
known as string interning. In MicroPython an interned string is known as a qstr. In a module imported normally
that single instance will be located in RAM, but as described above, in modules frozen as bytecode it will be located
in flash.
String comparisons are also performed efficiently using hashing rather than character by character. The penalty for
using strings rather than integers may hence be small both in terms of performance and RAM usage - a fact which may
come as a surprise to C programmers.
5.5.5 Postscript
MicroPython passes, returns and (by default) copies objects by reference. A reference occupies a single machine word
so these processes are efficient in RAM usage and speed.
Where variables are required whose size is neither a byte nor a machine word there are standard libraries which can
assist in storing these efficiently and in performing conversions. See the array, ustruct and uctypes modules.
On Unix and Windows platforms the gc.collect() method returns an integer which signifies the number of
distinct memory regions that were reclaimed in the collection (more precisely, the number of heads that were turned
into frees). For efficiency reasons bare metal ports do not return this value.
Just as with "big" Python, MicroPython supports the creation of "third party" packages, distributing them, and easily
installing them in each user's environment. This chapter discusses how these actions are achieved. Some familiarity
with Python packaging is recommended.
5.6.1 Overview
The steps below represent a high-level workflow when creating and consuming packages:
1. Python modules and packages are turned into distribution package archives, and published at the Python Package
Index (PyPI).
2. upip package manager can be used to install a distribution package on a MicroPython port with network-
ing capabilities (for example, on the Unix port).
3. For ports without networking capabilities, an “installation image” can be prepared on the Unix port, and trans-
ferred to a device by suitable means.
4. For low-memory ports, the installation image can be frozen as the bytecode into MicroPython executable, thus
minimizing the memory storage overheads.
The sections below describe this process in detail.
Python modules and packages can be packaged into archives suitable for transfer between systems, storing at the
well-known location (PyPI), and downloading on demand for deployment. These archives are known as distribution
packages (to differentiate them from Python packages, which are a means of organizing Python source code).
The MicroPython distribution package format is the well-known tar.gz format, with some adaptations however. The
Gzip compressor, used as an external wrapper for TAR archives, by default uses a 32KB dictionary size, which means
that to uncompress a compressed stream, 32KB of contiguous memory needs to be allocated. This requirement may
not be satisfiable on low-memory devices, which may have less total memory available than that, and even if they do,
a contiguous block of that size may be hard to allocate due to memory fragmentation. To accommodate these constraints,
MicroPython distribution packages use Gzip compression with a dictionary size of 4K, which is a suitable
compromise: it still achieves some compression while being decompressible even by the smallest devices.
Besides the small compression dictionary size, MicroPython distribution packages also have other optimizations, like
removing any files from the archive which aren’t used by the installation process. In particular, upip package manager
doesn’t execute setup.py during installation (see below), and thus that file is not included in the archive.
At the same time, these optimizations make MicroPython distribution packages not compatible with CPython’s
package manager, pip. This isn’t considered a big problem, because:
1. Packages can be installed with upip, and then can be used with CPython (if they are compatible with it).
2. In the other direction, the majority of CPython packages would be incompatible with MicroPython for various
reasons, first of all the reliance on features not implemented by MicroPython.
Summing up, the MicroPython distribution package archives are highly optimized for MicroPython’s target environ-
ments, which are highly resource constrained devices.
MicroPython distribution packages are intended to be installed using the upip package manager. upip is a Python
application which is usually distributed (as frozen bytecode) with network-enabled MicroPython ports. At the
very least, upip is available in the MicroPython Unix port.
On any MicroPython port providing upip, it can be accessed as follows:
import upip
upip.help()
upip.install(package_or_package_list, [path])
Where package_or_package_list is the name of a distribution package to install, or a list of such names to install
multiple packages. The optional path parameter specifies the filesystem location to install under and defaults to the
standard library location (see below).
An example of installing a specific package and then using it:
>>> import upip
>>> upip.install("micropython-pystone_lowmem")
[...]
>>> import pystone_lowmem
>>> pystone_lowmem.main()
Note that the name of Python package and the name of distribution package for it in general don’t have to match, and
oftentimes they don’t. This is because PyPI provides a central package repository for all different Python implementa-
tions and versions, and thus distribution package names may need to be namespaced for a particular implementation.
For example, all packages from micropython-lib follow this naming convention: for a Python module or package
named foo, the distribution package name is micropython-foo.
For the ports which run the MicroPython executable from an OS command prompt (like the Unix port), upip can
be (and indeed, usually is) run from the command line instead of MicroPython's own REPL. The commands which
correspond to the example above are:
micropython -m upip -h
micropython -m upip install [-p <path>] <packages>...
micropython -m upip install micropython-pystone_lowmem
For MicroPython ports without native networking capabilities, the recommended process is "cross-installing" packages
into a "directory image" using the MicroPython Unix port, and then transferring this image to the device by suitable
means.
Installing to a directory image involves using the -p switch to upip:
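For example, using the package from the earlier example (install_dir being the image directory referred to below):

micropython -m upip install -p install_dir micropython-pystone_lowmem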
After this command, the package content (and the contents of every dependency package) will be available in the
install_dir/ subdirectory. You would need to transfer the contents of this directory (without the install_dir/
prefix) to the device, at the suitable location, where it can be found by the Python import statement (see discussion
of the upip installation path above).
For the low-memory MicroPython ports, the process described in the previous section does not provide the most
efficient resource usage, because the packages are installed in source form and so need to be compiled to bytecode
on each import. This compilation requires RAM, and the resulting bytecode is also stored in RAM, reducing the
amount available for storing application data. Moreover, the process above requires the presence of a filesystem on a
device, and the most resource-constrained devices may not even have one.
Bytecode freezing is a process which resolves all the issues mentioned above:
• The source code is pre-compiled into bytecode and stored as such.
• The bytecode is stored in ROM, not RAM.
• A filesystem is not required for frozen packages.
Using frozen bytecode requires building the executable (firmware) for a given MicroPython port from the C
source code. Consequently, the process is:
1. Follow the instructions for a particular port on setting up a toolchain and building the port. For example, for
ESP8266 port, study instructions in ports/esp8266/README.md and follow them. Make sure you can
build the port and deploy the resulting executable/firmware successfully before proceeding to the next steps.
2. Build MicroPython Unix port and make sure it is in your PATH and you can execute micropython.
3. Change to port’s directory (e.g. ports/esp8266/ for ESP8266).
4. Run make clean-frozen. This step cleans up any previous modules which were installed for freezing
(consequently, you need to skip this step to add additional modules, instead of starting from scratch).
5. Run micropython -m upip install -p modules <packages>... to install packages you want
to freeze.
6. Run make clean.
7. Run make.
After this, you should have the executable/firmware with modules as the bytecode inside, which you can deploy the
usual way.
Few notes:
1. Step 5 in the sequence above assumes that the distribution package is available from PyPI. If that is not the case,
you would need to copy Python source files manually to the modules/ subdirectory of the port directory.
(Note that upip does not support installing from e.g. version control repositories).
2. The firmware for baremetal devices usually has size restrictions, so adding too many frozen modules may
overflow it. Usually, you would get a linking error if this happens. However, in some cases, an image may be
produced, which is not runnable on a device. Such cases are in general bugs, and should be reported and further
investigated. If you face such a situation, as an initial step, you may want to decrease the amount of frozen
modules included.
Distribution packages for MicroPython are created in the same manner as for CPython or any other Python imple-
mentation, see references at the end of the chapter. Setuptools (instead of distutils) should be used, because distutils
does not support dependencies and other features. The "source distribution" (sdist) format is used for packaging. The
post-processing discussed above, (and pre-processing discussed in the following section) is achieved by using custom
sdist command for setuptools. Thus, packaging steps remain the same as for the standard setuptools, the user just
needs to override sdist command implementation by passing the appropriate argument to setup() call:
setup(
    ...,
    cmdclass={'sdist': sdist_upip.sdist}
)
A complete application, besides the source code, oftentimes also consists of data files, e.g. web page templates, game
images, etc. It's clear how to deal with these when an application is installed manually: you just put the data files in
the filesystem at some location and use the normal file access functions.
The situation is different when deploying applications from packages - this is a more advanced, streamlined and flexible
way, but it also requires a more advanced approach to accessing data files. That approach is to treat the data files as
"resources" and to abstract away access to them.
Python supports resource access using its "setuptools" library, via the pkg_resources module. MicroPython, following
its usual approach, implements a subset of the functionality of that module, specifically the
pkg_resources.resource_stream(package, resource) function. The idea is that an application calls this function, passing
a resource identifier, which is a relative path to a data file within the specified package (usually the top-level application
package). It returns a stream object which can be used to access the resource contents. Thus, resource_stream()
emulates the interface of the standard open() function.
Implementation-wise, resource_stream() uses file operations underneath if the distribution package is installed in
the filesystem. However, it also supports operation without an underlying filesystem, e.g. if the package is frozen
as bytecode. This however requires an extra intermediate step when packaging the application - creation of a "Python
resource module".
The idea of this module is to convert binary data to a Python bytes object, and put it into the dictionary, indexed by the
resource name. This conversion is done automatically using overridden sdist command described in the previous
section.
Let’s trace the complete process using the following example. Suppose your application has the following structure:
my_app/
    __main__.py
    utils.py
    data/
        page.html
        image.png
__main__.py and utils.py should access resources using the following calls:
import pkg_resources
pkg_resources.resource_stream(__name__, "data/page.html")
pkg_resources.resource_stream(__name__, "data/image.png")
You can develop and debug using the MicroPython Unix port as usual. When time comes to make a distribu-
tion package out of it, just use overridden “sdist” command from sdist_upip.py module as described in the previous
section.
This will create a Python resource module named R.py, based on the files declared in MANIFEST or MANIFEST.in
(any non-.py file will be considered a resource and added to R.py) - before proceeding with the normal
packaging steps.
Prepared like this, your application will work both when deployed to filesystem and as frozen bytecode.
If you would like to debug R.py creation, you can run:
Alternatively, you can use the tools/mpy_bin2res.py script from the MicroPython distribution, in which case you will
need to pass the paths to all resource files:
5.6.8 References
MicroPython differences from CPython
The operations listed in this section produce conflicting results in MicroPython when compared to standard Python.
6.1 Syntax
6.1.1 Spaces
uPy requires spaces between literal numbers and keywords, CPy doesn’t
Sample code:
try:
    print(eval('1and 0'))
except SyntaxError:
    print('Should have worked')
try:
    print(eval('1or 0'))
except SyntaxError:
    print('Should have worked')
try:
    print(eval('1if 1else 0'))
except SyntaxError:
    print('Should have worked')
6.1.2 Unicode
6.2.1 Classes
Sample code:
import gc

class Foo():
    def __del__(self):
        print('__del__')

f = Foo()
del f
gc.collect()

CPy output:
__del__
(uPy produces no output: __del__ is not called for user-defined classes.)
class C(tuple):
    def __str__(self):
        return "Foo"

t = C((1, 2, 3))
print(t)

CPy output:
Foo
uPy output:
(1, 2, 3)
When inheriting from multiple classes super() only calls one class
Cause: See Method Resolution Order (MRO) is not compliant with CPython
Workaround: See Method Resolution Order (MRO) is not compliant with CPython
Sample code:
class A:
    def __init__(self):
        print("A.__init__")

class B(A):
    def __init__(self):
        print("B.__init__")
        super().__init__()

class C(A):
    def __init__(self):
        print("C.__init__")
        super().__init__()

class D(B,C):
    def __init__(self):
        print("D.__init__")
        super().__init__()

D()
CPy output:
D.__init__
B.__init__
C.__init__
A.__init__

uPy output:
D.__init__
B.__init__
A.__init__
Calling super() getter property in subclass will return a property object, not the value
Sample code:
class A:
    @property
    def p(self):
        return {"a": 10}

class AA(A):
    @property
    def p(self):
        return super().p

a = AA()
print(a.p)
6.2.2 Functions
try:
    [].append()
except Exception as e:
    print(e)

def f():
    pass

f.x = 0
print(f.x)
6.2.3 Generator
Context manager __exit__() not called in a generator which does not run to completion
Sample code:
class foo(object):
    def __enter__(self):
        print('Enter')
    def __exit__(self, *args):
        print('Exit')

def bar(x):
    with foo():
        while True:
            x += 1
            yield x

def func():
    g = bar(0)
    for _ in range(3):
        print(next(g))

func()
CPy output:
Enter
1
2
3
Exit
uPy output:
Enter
1
2
3
6.2.4 Runtime
Local variables aren't included in locals() result
Cause: MicroPython doesn't maintain a symbolic local environment; it is optimized to an array of slots. Thus, local variables can't be accessed by name.
Sample code:
def test():
    val = 2
    print(locals())

test()
Code run with eval() doesn't have access to local variables
Cause: MicroPython doesn't maintain a symbolic local environment; it is optimized to an array of slots. Thus, local variables can't be accessed by name. Effectively, eval(expr) in MicroPython is equivalent to eval(expr, globals(), globals()).
Sample code:
val = 1

def test():
    val = 2
    print(val)
    eval("print(val)")

test()
CPy output:
2
2
uPy output:
2
1
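One way to side-step this difference (a sketch, not from the docs) is to hand eval() an explicit namespace containing the values it should see, which behaves the same on both implementations:

def test():
    val = 2
    ns = {"val": val}            # expose the local value explicitly
    print(eval("val + 1", ns))   # 3 on both MicroPython and CPython

test()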
6.2.5 import
The __path__ attribute of a package has a different type (single string instead of list of strings) in MicroPython
Cause: MicroPython doesn't support namespace packages split across the filesystem. Beyond that, MicroPython's import system is highly optimized for minimal memory usage.
Workaround: Details of import handling are inherently implementation dependent. Don't rely on such details in portable applications.
Sample code:
import modules
print(modules.__path__)
CPy output:
['/home/micropython/micropython-docs/tests/cpydiff/modules']
uPy output:
../tests/cpydiff//modules
Failed to load modules are still registered as loaded
Cause: To make module handling more efficient, it is not wrapped in exception handling.
Workaround: Test modules before production use; during development, use del sys.modules["name"], or just soft or hard reset the board.
Sample code:
import sys
try:
    from modules import foo
except NameError as e:
    print(e)
try:
    from modules import foo
    print('Should not get here')
except NameError as e:
    print(e)
CPy output:
foo
name 'xxx' is not defined
foo
name 'xxx' is not defined
uPy output:
foo
name 'xxx' is not defined
Should not get here
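The workaround mentioned above can look like the following sketch during development ('modules.foo' is the example module from the sample code):

import sys

# Drop the half-initialised entry so the next import re-executes the module
# instead of returning the cached, broken one.
if "modules.foo" in sys.modules:
    del sys.modules["modules.foo"]

from modules import foo  # the import is attempted afresh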
Cause: MicroPython's import system is highly optimized for simplicity, minimal memory usage, and minimal filesystem search overhead.
Workaround: Don’t install modules belonging to the same namespace package in different directories. For MicroPy-
thon, it’s recommended to have at most 3-component module search paths: for your current application, per-user
(writable), system-wide (non-writable).
Sample code:
import sys
sys.path.append(sys.path[1] + "/modules")
sys.path.append(sys.path[1] + "/modules2")
import subpkg.foo
import subpkg.bar
CPy output:
Two modules of a split namespace package imported
uPy output:
Traceback (most recent call last):
  File "<stdin>", line 12, in <module>
ImportError: no module named 'subpkg.bar'
6.3 Builtin types
6.3.1 Exception
Exception chaining is not supported
Sample code:
try:
    raise TypeError
except TypeError:
    raise ValueError
CPy output:
Traceback (most recent call last):
  File "<stdin>", line 8, in <module>
TypeError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 10, in <module>
ValueError
uPy output:
Traceback (most recent call last):
  File "<stdin>", line 10, in <module>
ValueError:
User-defined attributes for builtin exceptions are not supported
Sample code:
e = Exception()
e.x = 0
print(e.x)
Cause: Condition checks are optimized to happen at the end of the loop body, and that line number is reported.
Sample code:
l = ["-foo", "-bar"]
i = 0
while l[i][0] == "-":
print("iter")
i += 1
CPy output:
iter
iter
Traceback (most recent call last):
  File "<stdin>", line 10, in <module>
IndexError: list index out of range
uPy output:
iter
iter
Traceback (most recent call last):
  File "<stdin>", line 12, in <module>
IndexError: list index out of range
Exception.__init__ method does not exist
Workaround: call the parent constructor via super() instead:
class A(Exception):
    def __init__(self):
        super().__init__()
Sample code:
class A(Exception):
    def __init__(self):
        Exception.__init__(self)

a = A()
6.3.2 bytearray
Sample code:
b = bytearray(4)
b[0:1] = [1, 2]
print(b)
6.3.3 bytes
Sample code:
print(bytes('abc', encoding='utf8'))
Sample code:
print(b'123'[0:3:2])
6.3.4 float
Sample code:
print('%.1g' % -9.9)
CPy output:
-1e+01
uPy output:
-10
6.3.5 int
Workaround: Avoid subclassing builtin types unless really needed. Prefer composition over inheritance (https://en.wikipedia.org/wiki/Composition_over_inheritance).
Sample code:
class A(int):
    __add__ = lambda self, other: A(int(self) + other)
a = A(42)
print(a+a)
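A sketch of the composition-based workaround recommended above (the wrapper class and its method are illustrative, not part of any API):

class Count:
    # Wrap an int instead of subclassing it.
    def __init__(self, value):
        self.value = int(value)

    def add(self, other):
        other_value = other.value if isinstance(other, Count) else int(other)
        return Count(self.value + other_value)

a = Count(42)
print(a.add(42).value)   # 84 on both MicroPython and CPython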
6.3.6 list
l = [1, 2, 3, 4]
del l[0:4:2]
print(l)
l = [10, 20]
l[0:1] = range(4)
print(l)
l = [1, 2, 3, 4]
l[0:4:2] = [5, 6]
print(l)
6.3.7 str
Sample code:
print('abc'.endswith('c', 1))
Sample code:
print('{a[0]}'.format(a=[1, 2]))
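A portable alternative (a sketch, not from the docs) is to do the subscripting outside the format string:

a = [1, 2]
print('{}'.format(a[0]))   # works the same on MicroPython and CPython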
Sample code:
print(str(b'abc', encoding='utf8'))
str.ljust() and str.rjust() are not implemented
Cause: MicroPython is highly optimized for memory usage. Easy workarounds are available.
Workaround: Instead of s.ljust(10) use "%-10s" % s, instead of s.rjust(10) use "% 10s" % s.
Alternatively, "{:<10}".format(s) or "{:>10}".format(s).
Sample code:
print('abc'.ljust(10))
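The workarounds named above, in a short sketch:

s = "abc"
print("%-10s" % s)          # left-justify to width 10, like s.ljust(10)
print("{:<10}".format(s))   # same, using str.format()
print("%10s" % s)           # right-justify to width 10, like s.rjust(10)
print("{:>10}".format(s))   # same, using str.format()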
Instance of a subclass of str cannot be compared for equality with an instance of a str
Sample code:
class S(str):
    pass
s = S('hello')
print(s == 'hello')
CPy output:
True
uPy output:
False
Sample code:
print('abcdefghi'[0:9:2])
6.3.8 tuple
Sample code:
print((1, 2, 3, 4)[0:4:2])
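Where a stepped slice is needed portably, building the selection explicitly is a simple workaround (a sketch; the same idea applies to str, bytes and array objects that lack step slicing):

t = (1, 2, 3, 4)
print(tuple(t[i] for i in range(0, len(t), 2)))   # (1, 3)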
6.4 Modules
6.4.1 array
Sample code:
import array
print(1 in array.array('B', b'12'))
Sample code:
import array
a = array.array('b', (1, 2, 3))
del a[1]
print(a)
Sample code:
import array
a = array.array('b', (1, 2, 3))
print(a[3:2:2])
6.4.2 builtins
Sample code:
print(next(iter(range(0)), 42))
6.4.3 deque
import collections
D = collections.deque()
print(D)
6.4.4 json
JSON module does not throw exception when object is not serialisable
Sample code:
import json
a = bytes(x for x in range(256))
try:
    z = json.dumps(a)
    x = json.loads(z)
    print('Should not get here')
except TypeError:
    print('TypeError')
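If the CPython behaviour is wanted, a small guard can be added in application code (an illustrative, top-level-only sketch; dumps_checked is not part of any library):

import json

def dumps_checked(obj):
    # Refuse types JSON cannot represent instead of letting dumps()
    # silently produce an unusable result.
    if not isinstance(obj, (dict, list, tuple, str, int, float, bool, type(None))):
        raise TypeError("object is not JSON serialisable")
    return json.dumps(obj)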
6.4.5 struct
Struct pack with too few arguments is not checked by uPy
Sample code:
import struct
try:
    print(struct.pack('bb', 1))
    print('Should not get here')
except:
    print('struct.error')
CPy output:
struct.error
uPy output:
b'\x01\x00'
Should not get here
Struct pack with too many arguments is not checked by uPy
Sample code:
import struct
try:
    print(struct.pack('bb', 1, 2, 3))
    print('Should not get here')
except:
    print('struct.error')
CPy output:
struct.error
uPy output:
b'\x01\x02'
Should not get here
6.4.6 sys
import sys
sys.stdin = None
print(sys.stdin)
Python Module Index
a
array, 35
b
btree, 60
e
esp, 88
f
framebuf, 63
g
gc, 35
m
machine, 65
math, 36
micropython, 79
n
network, 81
s
sys, 38
u
ubinascii, 40
ucollections, 40
uctypes, 85
uerrno, 41
uhashlib, 42
uheapq, 42
uio, 43
ujson, 44
uos, 45
ure, 48
uselect, 50
usocket, 51
ussl, 55
ustruct, 56
utime, 57
uzlib, 60
Index

Symbols
__call__() (machine.Pin method), 69
__contains__() (btree.btree method), 62
__delitem__() (btree.btree method), 62
__getitem__() (btree.btree method), 62
__iter__() (btree.btree method), 62
__setitem__() (btree.btree method), 62

A
a2b_base64() (in module ubinascii), 40
abs() (built-in function), 32
AbstractBlockDev (class in uos), 47
AbstractNIC (class in network), 81
accept() (usocket.socket method), 53
acos() (in module math), 36
acosh() (in module math), 36
active() (in module network), 82
active() (network.wlan method), 83
addressof() (in module uctypes), 87
AF_INET (in module usocket), 53
AF_INET6 (in module usocket), 53
alarm() (machine.RTC method), 77
alarm_left() (machine.RTC method), 77
all() (built-in function), 32
alloc_emergency_exception_buf() (in module micropython), 80
any() (built-in function), 32
any() (machine.UART method), 72
append() (array.array.array method), 35
argv (in module sys), 39
array (module), 35
array.array (class in array), 35
asin() (in module math), 36
asinh() (in module math), 36
AssertionError, 34
atan() (in module math), 36
atan2() (in module math), 36
atanh() (in module math), 36
AttributeError, 34

B
b2a_base64() (in module ubinascii), 40
baremetal, 91
BIG_ENDIAN (in module uctypes), 87
bin() (built-in function), 32
bind() (usocket.socket method), 53
blit() (framebuf.FrameBuffer method), 64
board, 91
bool (built-in class), 32
btree (module), 60
bytearray (built-in class), 32
bytearray_at() (in module uctypes), 87
byteorder (in module sys), 39
bytes (built-in class), 32
bytes_at() (in module uctypes), 87
BytesIO (class in uio), 44

C
calcsize() (in module ustruct), 56
callable() (built-in function), 32
callee-owned tuple, 91
cancel() (machine.RTC method), 77
ceil() (in module math), 36
chdir() (in module uos), 45
chr() (built-in function), 32
classmethod() (built-in function), 32
close() (btree.btree method), 62
close() (usocket.socket method), 53
collect() (in module gc), 35
compile() (built-in function), 32
compile() (in module ure), 49
complex (built-in class), 32
config() (in module network), 82
config() (network.wlan method), 84
connect() (in module network), 82
connect() (network.wlan method), 83
connect() (usocket.socket method), 53
const() (in module micropython), 79
copysign() (in module math), 36
cos() (in module math), 36
cosh() (in module math), 36
CPython, 91

Z
ZeroDivisionError, 34
zip() (built-in function), 34