Trezor Firmware documentation

This documentation can also be found online, in an HTML version compiled using mdBook.

Welcome to the Trezor Firmware repository. This repository is a so-called monorepo: it contains several different yet closely related projects that together form the Trezor Firmware ecosystem.

Repository Structure

  • ci: GitLab CI configuration files
  • common/defs: JSON coin definitions and support tables
  • common/protob: Common protobuf definitions for the Trezor protocol
  • common/tools: Tools for managing coin definitions and related data
  • core: Trezor Core, firmware implementation for Trezor T
  • crypto: Stand-alone cryptography library used by both Trezor Core and the Trezor One firmware
  • docs: Assorted documentation
  • legacy: Trezor One firmware implementation
  • python: Python client library and the trezorctl command
  • storage: NORCOW storage implementation used by both Trezor Core and the Trezor One firmware
  • tests: Firmware unit test suite
  • tools: Miscellaneous build and helper scripts
  • vendor: Submodules for external dependencies



Also, please have a look at the docs, either in the docs folder or in the HTML version, before contributing. The misc chapter in particular should be read, because it contains useful assorted knowledge.

Security vulnerability disclosure

Please report suspected security vulnerabilities in private to [email protected]; see also the disclosure section on the website. Please do NOT create publicly viewable issues for suspected security vulnerabilities.

Note on terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

Trezor Core

Trezor Core is the second-generation firmware running on Trezor devices. It currently runs on Trezor T only, but it will probably be used on Trezor One in the future as well (see issue #24).

Trezor Core is part of the trezor-firmware monorepo to be found on GitHub, in the core subdirectory.


New Project

Run the following to check out the project:

git clone --recurse-submodules
cd trezor-firmware
poetry install
cd core

After this you will need to install some software dependencies based on which flavor of Core you want to build. You can either build the Emulator or the actual firmware running on ARM devices. The Emulator (also called the unix port) is a Unix version that can run on your computer. See Emulator for more information.

Existing Project

If you are building from an existing checkout, do not forget to refresh the submodules and the poetry environment:

git submodule update --init --recursive --force
poetry install


We use Poetry to install and track Python dependencies. You need to install it, sync the packages, and then either use poetry run for every command or enter poetry shell before typing any commands. The commands in this section assume you are in a poetry shell environment!

sudo pip3 install poetry
poetry install
poetry shell

Build instructions for Embedded (ARM port)

First clone, initialize submodules and install Poetry as defined here. Do not forget you need to be in a poetry shell environment!


You will need the GCC ARM toolchain for building and OpenOCD for flashing to a device. You will also need Python dependencies for signing.


sudo apt-get install scons gcc-arm-none-eabi libnewlib-arm-none-eabi


On NixOS, there is a shell.nix file in the root of the project. Just run the following before entering the core directory:

nix-shell

On macOS, consider using Nix. With Nix all you need to do is run nix-shell.

For other macOS users:

  1. Download gcc-arm-none-eabi
  2. Follow the install instructions
  3. To install OpenOCD, run brew install open-ocd
  4. Run make vendor build_boardloader build_bootloader build_firmware


To build the firmware, run:

make vendor build_boardloader build_bootloader build_firmware


Use make upload to upload the firmware to a production device. Do not forget to enter the bootloader on the device beforehand.


For flashing firmware to a blank device (without a bootloader), use make flash, or make flash STLINK_VER=v2-1 if using an ST-LINK/V2.1 interface. You need to have OpenOCD installed.

Building in debug mode

You can also build firmware in debug mode to see log output or run tests.

PYOPT=0 make build_firmware

You can then use screen to enter the device's console. Do not forget to add your user to the dialout group or use sudo. Note that both the group and the tty name can differ; use ls -l to find out the proper names on your machine.

screen /dev/ttyACM0

The debug console via the serial port is enabled only for the Bitcoin-only firmware. If you need the console to debug non-Bitcoin features, please edit src/, disable the WebAuthn USB interface, and enable the VCP USB interface.

Build instructions for Emulator (Unix port)

First clone, initialize submodules, install Poetry and enter the Poetry shell as defined here. Do not forget you need to be in a poetry shell environment!


Install the required packages, depending on your operating system.

  • Debian/Ubuntu:
sudo apt-get install scons libsdl2-dev libsdl2-image-dev
  • Fedora:
sudo yum install scons SDL2-devel SDL2_image-devel
  • OpenSUSE:
sudo zypper install scons libSDL2-devel libSDL2_image-devel
  • Arch:
sudo pacman -S scons sdl2 sdl2_image
  • NixOS:

There is a shell.nix file in the root of the project. Just run the following before entering the core directory:

nix-shell

  • Mac OS X:

Consider using Nix. With Nix all you need to do is nix-shell.

For other users:

brew install scons sdl2 sdl2_image pkg-config
  • Windows: not supported yet, sorry.


Run the build with:

make build_unix


Now you can start the emulator:


The emulator has a number of interesting features all documented in the Emulator section.

Building for debugging and hacking in Emulator (Unix port)

Build the debuggable unix binary so you can attach gdb or lldb. This removes optimizations and reduces address space randomization. Beware that this will significantly bloat the final binary, and the firmware runtime memory limit HEAPSIZE may have to be increased.

DEBUG_BUILD=1 make build_unix


The Emulator is a Unix version of the Core firmware that runs on your computer.


There is neither boardloader nor bootloader, and there are no firmware uploads. The emulator runs the current code as it is; if you want to run a specific firmware version, you need to use git for that (simply check out the right branch/tag). Strictly speaking, it might better be called a simulator, because it does not emulate the device in its completeness; it just runs the firmware on your host.

Emulator significantly speeds up development and has several features to help you along the way.

How to run

  1. Build the emulator.
  2. Run it inside the poetry environment:
    • either enter poetry shell first, and then use ./
    • or always use poetry run ./
  3. To use the Bridge with emulator support, start it with trezord -e 21324.

Now you can use the emulator the same way as you use the device; for example, you can visit our Wallet, use our Python CLI tool (trezorctl), etc. Simply click to emulate screen touches.


Run ./ --help to see all supported command line options and shortcuts. The sections below only list long option names and most notable features.

Debug and production mode

By default the emulator runs in debug mode. The debuglink is available (on port 21325 by default), and exceptions and log output go to the console. To indicate debug mode, there is a red square in the upper right corner of the Trezor screen.


To enable production mode, run ./ --production, or set environment variable PYOPT=1.

Initialize with mnemonic words

In debug mode, the emulator can be pre-configured with a mnemonic phrase.

To use a specific mnemonic phrase:

./ --mnemonic "such deposit very security much theme..."

When using Shamir shares, repeat the --mnemonic option:

./ --mnemonic "your first share" --mnemonic "your second share" ...

To use the "all all all" seed defined in SLIP-14:

./ --slip0014

Storage and Profiles

Trezor's internal storage is emulated and stored in the /var/tmp/trezor.flash file by default. Deleting this file is similar to calling wipe device. There is also /var/tmp/trezor.sdcard for the SD card.

You can specify a different location for the storage and log files via the -p / --profile option:

./ -p foobar

This will create a profile directory in your home directory, ~/.trezoremu/foobar, containing emulator run files. Alternatively, you can set a full path like so:

./ -p /var/tmp/foobar

You can also set a full profile path via the TREZOR_PROFILE_DIR environment variable.

Specifying -t / --temporary-profile will start the emulator in a clean temporary profile that will be erased when the emulator stops. This is useful, e.g., for tests.


By default, emulator output goes to stdout. When silenced with --quiet, it is redirected to ${TREZOR_PROFILE_DIR}/trezor.log. You can specify an alternate output file with --output.

Running subcommands with the emulator

In scripts, it is often necessary to start the emulator, run a command while it is available, and then stop it. The following command runs the device test suite using the emulator:

./ --command pytest ../tests/device_tests

Profiling support

Run ./ --profiling, or set environment variable TREZOR_PROFILING=1, to run the emulator with a profiling wrapper that generates statistics of executed lines.

Memory statistics

Run ./ --log-memory, or set environment variable TREZOR_LOG_MEMORY=1, to dump memory usage information after each workflow task is finished.

Run in gdb

Running ./ --debugger runs the emulator inside gdb/lldb.

Watch for file changes

Running ./ --watch watches for file changes and reloads the emulator when any occur. Note that this does not rebuild anything; it works for MicroPython code (which is interpreted), but if you make C changes, you need to rebuild manually.

Print screen

Press p on your keyboard to capture the emulator's screen. You will find a PNG screenshot in the src directory.

Disable animation

Run ./ --disable-animation, or set environment variable TREZOR_DISABLE_ANIMATION=1 to disable all animations.

Trezor Core event loop

The event loop is implemented in src/trezor/ and forms the core of the processing. At boot time, default tasks are started and inserted into an event queue. Such a task will usually run in an endless loop: wait for an event, process it, loop back.

Application code is written with async/await constructs. The low-level part of the event loop processes the running coroutines via coroutine.send() and coroutine.throw() calls.
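At this level a task behaves like a plain Python generator that the loop drives by hand. A standalone sketch of the send()/throw() mechanics (illustrative only, not Trezor code):

```python
def task():
    # the task yields to the "event loop" and is resumed via send()
    event = yield "waiting for event"
    return "processed %s" % event

coro = task()
print(coro.send(None))    # starts the task; prints "waiting for event"
try:
    coro.send("touch")    # resume with a value; the task finishes
except StopIteration as stop:
    print(stop.value)     # prints "processed touch"

# exceptions are delivered into a task with throw()
coro = task()
coro.send(None)
try:
    coro.throw(ValueError("cancelled"))
except ValueError:
    pass  # the task did not catch it, so it propagated out
```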

MicroPython details

MicroPython does not distinguish between coroutines, awaitables, and generators. Some low-level constructs use yield and yield from directly.

An async def definition marks the function as a generator, even if it contains no await or yield expressions. It is thus possible to see async def __iter__, which indicates that the function is a generator.

For type-checking purposes, objects usually define an __await__ method that delegates to __iter__. The __await__ method is never executed, however.

Low-level API

loop.run() starts the event loop. The call only returns when there are no further waiting tasks -- so, in usual conditions, never.

loop.schedule(task, value, deadline, finalizer, reschedule) schedules an awaitable to be run either as soon as possible, or at a specified time (given as a deadline in microseconds since system bootup.)

In addition, when the task finishes processing or is closed externally, the finalizer callback will be executed, with the task and the return value (or the raised exception) as a parameter.

If reschedule is true, the task is first cleared from the scheduled queue -- in effect, it is rescheduled to run at a different time.

loop.close(task) removes a previously scheduled task from the list of waiting tasks and calls its finalizer.

loop.pause(task, interface) sets the task as waiting for a particular interface: either reading from or writing to one of the USB interfaces, or waiting for a touch event.

Implementation details

Trezor Core runs coroutine-based cooperative multitasking, i.e., there is no preemption.

Every task is a coroutine, which means that it runs uninterrupted until it yields a value (or, in async terms, until it awaits something). In every processing step, the currently selected coroutine is resumed by sending a value to it (which is returned as a result of the yield/await, or raised as an exception if it is an instance of BaseException). The task then runs uninterrupted again, until it yields or exits.

The loop spins for as long as any tasks are waiting. Two lists of waiting tasks exist:

  • _queue is a priority queue where the ordering is defined by real-time deadlines. In most cases, tasks are scheduled for "now", which makes them run one after another in FIFO order. It is also possible to schedule a task to run in the future.

  • _paused is a collection of tasks grouped by the interface for which they are waiting.

In each run of the loop, io.poll is called to query I/O events. If an event arrives on an interface, all tasks waiting on that interface are resumed one after another. No scheduled tasks in _queue can execute until the waiting tasks yield again.

At most one I/O event is processed in this phase.

When the I/O phase is done, a task with the highest priority is popped from _queue and resumed.
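The ordering rules above can be sketched with a heapq-based priority queue (an illustrative model only, not the actual trezor.loop code; the real _queue stores deadlines in microseconds since bootup):

```python
import heapq
import itertools

_queue = []
_counter = itertools.count()  # tiebreaker: equal deadlines stay FIFO

def schedule(task, deadline):
    # lower deadline runs first; the counter preserves insertion order
    heapq.heappush(_queue, (deadline, next(_counter), task))

def pop_next():
    deadline, _, task = heapq.heappop(_queue)
    return task

schedule("a", deadline=0)        # scheduled for "now"
schedule("b", deadline=0)        # "now", after a
schedule("later", deadline=1000) # scheduled for the future
schedule("c", deadline=0)        # "now", after b
print([pop_next() for _ in range(4)])  # ['a', 'b', 'c', 'later']
```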

I/O wait

When no tasks are paused on a given interface, events on that interface remain in the queue.

When multiple tasks are paused on the same interface, all of them receive every event. However, a waiting task receives at most one event. To receive more, it must pause itself again. Event processing is usually done in an endless loop with a pause call.

If two tasks are attempting to read from the same interface, and one of them re-pauses itself immediately while the other doesn't (possibly due to use of loop.race, which introduces scheduling gaps), the other task might lose some events.

For this reason, you should avoid waiting on the same interface from multiple tasks.


Syscalls bridge the gap between await-based application code and the coroutine-based low-level implementation.

Every sequence of awaits will at some point boil down to yielding a Syscall instance. (Yielding anything else is an error.) When that happens, control returns to the event loop.

The handle(task) method is called on the result. This way the syscall gets hold of the task object, and can schedule() or pause() it as appropriate.

As an example, consider pausing on an input event. A running task has no way to call pause() on itself. It would need to pass a separate function as a callback.

The wait syscall can be implemented as a simple wrapper around the pause() low-level call:

class wait(Syscall):
    def __init__(self, msg_iface: int) -> None:
        self.msg_iface = msg_iface

    def handle(self, task: Task) -> None:
        pause(task, self.msg_iface)

The __init__() method takes all the arguments of the "call", and handle() pauses the task on the given interface.

Calling code will look like this:

event = await loop.wait(io.TOUCH)

The loop.wait(io.TOUCH) expression instantiates a new Syscall object. The argument is passed to the constructor, and stored on the instance. The rest boils down to

event = await some_syscall_instance

which is equivalent to

event = yield from some_syscall_instance.__iter__()

The Syscall.__iter__() method yields self, returning control to the event loop. The event loop invokes some_syscall_instance.handle(task_object). The task_object is then set to resume when a touch event arrives.
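The whole round trip can be modeled in a few lines of plain Python (a simplified sketch of the mechanism, not the actual trezor.loop code):

```python
class Syscall:
    def __iter__(self):
        # hand control (and this object) back to the event loop;
        # the value sent back in becomes the result of the await
        result = yield self
        return result
    __await__ = __iter__  # satisfies the await protocol via the generator

class wait(Syscall):
    def __init__(self, msg_iface):
        self.msg_iface = msg_iface
    def handle(self, task):
        # a real loop would call pause(task, self.msg_iface);
        # here we only remember the task so the "event" can resume it
        self.paused = task

async def app():
    event = await wait("TOUCH")
    return event

task = app()
syscall = task.send(None)   # the task runs until it yields a Syscall
syscall.handle(task)        # the loop hands the task object to the syscall
try:
    syscall.paused.send((1, 120, 80))  # a "touch event" resumes the task
except StopIteration as stop:
    print(stop.value)       # prints (1, 120, 80)
```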

A side-effect of this design is that it is possible to store and reuse syscall instances. That can be advantageous for avoiding unnecessary allocations.

while True:
    # every run of the loop allocates a new object
    event = await loop.wait(io.TOUCH)

touch_source = loop.wait(io.TOUCH)
while True:
    # same instance is reused
    event = await touch_source

High-level API

Application code should not be using any of the above low-level functions. Awaiting syscalls is the preferred method of writing code.

The following syscalls and constructs are available:

loop.sleep(delay_ms: int): Suspend execution until the given delay (in milliseconds) elapses. Return value is the planned deadline in milliseconds since system start.

Calling await loop.sleep(0) yields execution to other tasks, and schedules the current task for the next tick.

loop.wait(interface): Wait indefinitely for an event on the given interface. Return value is the event.

An upcoming code modification adds a timeout parameter to loop.wait.

loop.race(*children): Schedule each argument to run, and suspend execution until the first of them finishes.

It is possible to specify wait timeout for loop.wait by using loop.race:

result = await loop.race(loop.wait(io.TOUCH), loop.sleep(1000))

This introduces scheduling gaps: every child is treated as a task and scheduled to run. This means that if the child is a syscall, as in the above example, its action is not done immediately. Instead, the wait begins on the next tick (or whenever the newly created coroutine runs) and the sleep in the tick afterwards. When nesting multiple races, the child races also run later.

Also, when a child task is done, another scheduling gap happens, and the parent task is scheduled to run on the next tick.

Upcoming changes may solve this in relevant cases, by inlining syscall operations.

loop.spawn(task): Start the task asynchronously. Return an object that allows the caller to await its result, or shut the task down.

Example usage:

task = loop.spawn(some_background_task())
await do_something_here()
result = await task

Unlike other syscalls, loop.spawn starts the task at instantiation time. Awaiting the same loop.spawn instance a second time will immediately return the result of the original run.

If the task is cancelled (usually by calling task.close()), the awaiter receives a loop.TaskClosed exception.

It is also possible to register a synchronous finalizer callback via task.set_finalizer. This is used internally to implement workflow management.

loop.chan() is a unidirectional communication channel that actually implements two syscalls:

  • chan.put() sends a value to the channel, and waits until it is picked up by a taker task.
  • chan.take() waits until a value is sent to the channel and then returns it.

It is possible to put in a value without waiting for a taker, by calling chan.publish(). It is not possible to take a value without waiting.
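The semantics can be modeled with callbacks instead of awaits (a toy, synchronous model of chan; the real implementation suspends tasks rather than calling back):

```python
from collections import deque

class Chan:
    def __init__(self):
        self.values = deque()   # published values with no taker yet
        self.takers = deque()   # callbacks waiting for a value

    def publish(self, value):
        # put in a value without waiting for a taker
        if self.takers:
            self.takers.popleft()(value)  # deliver to the oldest taker
        else:
            self.values.append(value)

    def take(self, callback):
        # receive a value: immediately if one is buffered, else wait
        if self.values:
            callback(self.values.popleft())
        else:
            self.takers.append(callback)

received = []
ch = Chan()
ch.take(received.append)   # taker registered first: it waits
ch.publish("hello")        # delivered straight to the waiting taker
ch.publish("world")        # no taker waiting: buffered
ch.take(received.append)   # picks up the buffered value
print(received)            # ['hello', 'world']
```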


The folder src/apps/ is the place where all the user-facing features are implemented.

Each app has a boot() function in the module's __init__ file. This function registers which handler should be called when a specific message is received. In other words, it is the link between the MicroPython functions and the protobuf messages.


The following binds the message GetAddress to the function get_address inside the apps.bitcoin module:

from trezor import wire
from trezor.messages import MessageType

wire.add(MessageType.GetAddress, apps.bitcoin, "get_address")
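The effect of such a registration can be modeled with a plain dispatch table (a standalone sketch with hypothetical names; the real wire module also handles sessions and protobuf decoding):

```python
# hypothetical stand-ins for MessageType and the wire registry
GET_ADDRESS = 29
handlers = {}

def add(message_type, module_name, function_name):
    # remember which module/function handles which message type;
    # a real implementation would import the module lazily on demand
    handlers[message_type] = (module_name, function_name)

def dispatch(message_type):
    module_name, function_name = handlers[message_type]
    return "%s.%s" % (module_name, function_name)

add(GET_ADDRESS, "apps.bitcoin", "get_address")
print(dispatch(GET_ADDRESS))  # apps.bitcoin.get_address
```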


We have two types of tests in Core:

  1. Unit tests that are specific to Trezor Core.
  2. Common tests, which are common to both Trezor Core (Model T) and Legacy (Model One). Device tests belong to this category.

Core tests

See the core/tests/ directory.

Common tests

See the tests section.


Topics that do not fit elsewhere:

Use of SLIP-39 in trezor-core

SLIP-39 describes a way to securely back up a secret value using Shamir's Secret Sharing scheme.

The secret value, called a Master Secret (MS) in SLIP-39 terminology, is first encrypted by a passphrase, producing an Encrypted Master Secret (EMS). The EMS is then split into a number of shares, which are encoded as a set of mnemonic words. Afterwards, it is possible to recombine some or all of the shares to obtain back the EMS, and when the correct passphrase is provided, decrypt the original Master Secret.

This does not quite match Trezor's use of the "passphrase protection" feature, namely that any passphrase is valid, and using any passphrase will yield a working wallet.

SLIP-39 enables this usage by specifying that passphrases are not validated in any way. Decrypting an EMS with any passphrase will produce data usable as the Master Secret, regardless of whether it is the original data or not.

Seed handling in Trezor

Trezor stores a mnemonic secret in a storage field _MNEMONIC_SECRET. This is the input for the root node derivation process: mnemonic.get_seed(passphrase) takes the user-provided passphrase as an argument, and derives the appropriate root node from the mnemonic secret.

With BIP-39, the recovery phrase itself is the mnemonic secret. During device initialization, the raw recovery phrase is given to the user, and also directly stored in the _MNEMONIC_SECRET field. Whenever the root node is required, it is derived by applying PBKDF2 to the mnemonic secret plus passphrase.
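That derivation step can be sketched with the standard library alone; per BIP-39, the seed is PBKDF2-HMAC-SHA512 over the NFKD-normalized mnemonic, salted with "mnemonic" + passphrase, 2048 iterations:

```python
import hashlib
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    mnemonic = unicodedata.normalize("NFKD", mnemonic)
    salt = unicodedata.normalize("NFKD", "mnemonic" + passphrase)
    # PBKDF2-HMAC-SHA512 with 2048 iterations yields the 64-byte root seed
    return hashlib.pbkdf2_hmac("sha512", mnemonic.encode(), salt.encode(), 2048)

seed = bip39_seed("all all all all all all all all all all all all")
print(len(seed))  # 64
# any passphrase is accepted and yields a different, valid wallet
assert bip39_seed("all all all all all all all all all all all all", "x") != seed
```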

For SLIP-39 it is not practical to store the raw data of the recovery shares. During device initialization, a random Encrypted Master Secret is generated and stored as _MNEMONIC_SECRET. SLIP-39 encryption parameters (a random identifier and an iteration exponent) are stored alongside the mnemonic secret in their own storage fields. Whenever the root node is required, it is derived by "decrypting" the stored mnemonic secret with the provided passphrase.

SLIP-39 implementation

The reference implementation of SLIP-39 provides the following high-level API:

  • generate_mnemonics(group parameters, master_secret, passphrase): Encrypt Master Secret with the provided passphrase, and split into a number of shares defined via the group parameters. Implemented using the following:
    • encrypt(master_secret, passphrase, iteration_exponent, identifier): Encrypt the Master Secret with the given passphrase and parameters.
    • split_ems(group parameters, identifier, iteration_exponent, encrypted_master_secret): Split the encrypted secret and encode the metadata into a set of shares defined via the group parameters.
  • combine_mnemonics(set of shares, passphrase): Combine the given set of shares to reconstruct the secret, then decrypt it with the provided passphrase. Implemented using the following:
    • recover_ems(set of shares): Combine the given set of shares to obtain the encrypted master secret, identifier and iteration exponent.
    • decrypt(encrypted_master_secret, passphrase, iteration_exponent, identifier): Decrypt the secret with the given passphrase and parameters, to obtain the original Master Secret.

Only split_ems, recover_ems and decrypt are implemented in trezor-core. Recovery shares are generated with split_ems and combined with recover_ems. Passphrase decryption is done with decrypt. There is never an original "master secret" to be encrypted, so the encrypt function is omitted.


Device initialization

This process does not use passphrase.

  1. Generate the required number of random bits (128 or 256), and store as _MNEMONIC_SECRET.
  2. Generate a random identifier and store as _SLIP39_IDENTIFIER.
  3. Store the default iteration exponent 1 as _SLIP39_ITERATION_EXPONENT.
  4. The storage now contains all parameters required for seed derivation.

Seed derivation

This is the only process that uses passphrase.

  1. If passphrase is enabled, prompt user for passphrase. Otherwise use empty string.
  2. Use slip39.decrypt(_MNEMONIC_SECRET, passphrase, _SLIP39_ITERATION_EXPONENT, _SLIP39_IDENTIFIER) to "decrypt" the root node that matches the provided passphrase.

Seed backup

This process does not use passphrase.

  1. Prompt user for group parameters (number of groups, number of shares per group, etc.).
  2. Use slip39.split_ems(group parameters, _SLIP39_IDENTIFIER, _SLIP39_ITERATION_EXPONENT, _MNEMONIC_SECRET) to split the secret into the given number of shares.

Seed recovery

This process does not use passphrase.

  1. Prompt the user to enter enough shares.
  2. Use slip39.recover_ems(shares) to combine the shares and get metadata.
  3. Store the Encrypted Master Secret as _MNEMONIC_SECRET.
  4. Store the identifier as _SLIP39_IDENTIFIER.
  5. Store the iteration exponent as _SLIP39_ITERATION_EXPONENT.
  6. The storage now contains all parameters required for seed derivation.

Exceptions in Core

From version 2.3.0 we try to follow a few rules about how we use exceptions. All new code MUST follow them.


You MAY use any exceptions in Core's logic. Exceptions from wire.errors SHOULD be the final exceptions that are thrown and SHOULD NOT be caught. Note that wire.Error is a type of exception that is intended to be sent out over the wire. It should only be used in contexts where that behavior is appropriate.

Custom exception type hierarchies SHOULD always be derived directly from Exception. They SHOULD NOT be derived from other built-in exceptions (such as ValueError, TypeError, etc.).

Deriving a custom exception type signals an intention to catch and handle it somewhere in the code. For this reason, custom exception types SHOULD NOT be derived from wire.Error and subclasses.

Exception strings, including in internal exceptions, SHOULD only be used in cases where the text is intended to be shown on the host. Exception strings MUST NOT contain any sensitive information. An explanation of an internal exception MAY be placed as a comment on the raise statement, to aid debugging. If an exception is thrown with no arguments, the exception class SHOULD be thrown instead of a new object, i.e., raise CustomError instead of raise CustomError().


  • Do not use wire.errors in try-except statements; use other exceptions.
  • Use wire.errors solely as a way to communicate errors to the host; do not raise them somewhere deep in the stack.
  • Do not put sensitive information in an exception's message. If you are not sure, do not add any message and provide a comment next to the raise statement.
  • Use raise CustomError instead of raise CustomError() if you are omitting the exception message.
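Put together, the rules look like this (an illustrative module with hypothetical names; WireError stands in for a wire.Error subclass):

```python
class WireError(Exception):
    """Stand-in for wire.Error: its message is meant to be sent to the host."""

class DataError(Exception):
    # custom type derived directly from Exception -> intended to be caught
    pass

def parse_field(data: bytes) -> int:
    if not data:
        raise DataError  # no message; raise the class, not an instance
    return data[0]

def handle_message(data: bytes) -> int:
    try:
        return parse_field(data)
    except DataError:
        # translate the internal error into a wire error at the boundary only
        raise WireError("Invalid data")

print(handle_message(b"\x2a"))  # 42
```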

Trezor One Bootloader and Firmware

Building with Docker

Ensure that you have Docker installed. You can follow Docker's installation instructions.

Clone this repository, then use the build script to build all images:

git clone
cd trezor-firmware

When the build is done, you will find the current firmware in build/legacy/firmware/trezor.bin.

Running with sudo

It is possible to run the build without sudo if either your Docker is configured in rootless mode, or your user is a member of the docker group; see the Docker documentation for details.

If you don't satisfy the above conditions and run the script with sudo, you might receive a Permission denied error. To work around it, make sure that the directory hierarchy in the build/ directory is world-writable, e.g., by running chmod -R a+w build/.

Building older versions

For firmware versions 1.8.1 and newer, you can check out the respective tag locally. To build firmware 1.8.2, for example, run git checkout legacy/v1.8.2 and then follow the instructions below.

Note that the unified Docker build was added after version 1.8.3, so it is not available for older versions.

For firmware versions older than 1.8.1, please clone the archived trezor-mcu repository and follow the instructions in its README.

Local development build

Make sure you have Python 3.6 or later and Poetry installed.

If you want to build device firmware, also make sure that you have the GNU ARM Embedded toolchain installed. See the Dockerfile for the up-to-date version of the toolchain.

The build process is configured via environment variables:

  • EMULATOR=1 specifies that an emulator should be built, instead of the device firmware.
  • DEBUG_LINK=1 specifies that DebugLink should be available in the built image.
  • MEMORY_PROTECT=0 disables memory protection. This is necessary for installing unofficial firmware.
  • DEBUG_LOG=1 enables debug messages to be printed on device screen.
  • BITCOIN_ONLY=1 specifies Bitcoin-only version of the firmware.

To run the build process, execute the following commands:

# enter the legacy subdirectory
cd legacy
# set up poetry
poetry install
# set up environment variables; for example, to build the emulator with debuglink:
export EMULATOR=1 DEBUG_LINK=1
# clear build artifacts
poetry run ./script/setup
# run build process
poetry run ./script/cibuild

A built device firmware will be located in legacy/firmware/trezor.bin. A built emulator will be located in legacy/firmware/trezor.elf.

Common errors

  • "Exception: bootloader has to be smaller than 32736 bytes": if you didn't modify the bootloader source code, simply make sure you always run ./script/setup before running ./script/cibuild

  • "error adding symbols: File in wrong format": This happens when building emulator after building the firmware, or vice-versa. Execute the following command to fix the problem:

    find -L vendor -name "*.o" -delete

You can launch the emulator using ./firmware/trezor.elf. To use trezorctl with the emulator, use trezorctl -p udp (for example, trezorctl -p udp get_features).

You can use TREZOR_OLED_SCALE environment variable to make emulator screen bigger.

How to get fingerprint of firmware signed and distributed by SatoshiLabs?

  1. Pick a version of the firmware binary listed on the firmware releases page.
  2. Download it: wget -O trezor.signed.bin
  3. Use trezorctl dry-run mode to get the firmware fingerprint:
    trezorctl firmware-update -n -f trezor.signed.bin

Step 3 should produce the same fingerprint as your local build (for the same version tag).

How to install custom built firmware?

WARNING: This will erase the recovery seed stored on the device! You should never do this on a Trezor that contains coins!

Build with MEMORY_PROTECT=0 or you will get a hard fault on your device.

Switch your device to bootloader mode, then execute:

trezorctl firmware-update -f build/legacy/firmware/trezor.bin

Combining bootloader and firmware with various MEMORY_PROTECT settings, signed/unsigned

Not all combinations of bootloader and firmware will work. This depends on three variables: the MEMORY_PROTECT setting of the bootloader, the MEMORY_PROTECT setting of the firmware, and whether the firmware is signed.

This table shows the result for bootloader 1.8.0+ and 1.9.1+:

| Bootloader MEMORY_PROTECT | Firmware MEMORY_PROTECT | Is firmware officially signed? | Result |
|---|---|---|---|
| 1 | 1 | yes | works, official configuration |
| 1 | 1 | no | hard fault in header.S when setting VTOR and stack |
| 0 | 1 | no | works, but don't forget to comment out check_bootloader, otherwise it'll get overwritten |
| 0 | 0 | no | hard fault because header.S doesn't set VTOR and stack right |

The other three possibilities with signed firmware and MEMORY_PROTECT!=0 for bootloader/firmware don't exist.


To be documented by @matejcik, see #229.

In the meantime see python/docs and README for the PyPI package.



Python library and command-line client for communicating with Trezor Hardware Wallet.

See the documentation for more information.


Python Trezor tools require Python 3.5 or higher, and libusb 1.0. The easiest way to install it is with pip. The rest of this guide assumes you have a working pip; if not, you can refer to this guide.

On a typical system, you already have all you need. Install trezor with:

pip3 install trezor

On Windows, you also need to either install Trezor Bridge, or libusb and the appropriate drivers.

Firmware version requirements

Current trezorlib version supports Trezor One version 1.8.0 and up, and Trezor T version 2.1.0 and up.

For firmware versions below 1.8.0 and 2.1.0 respectively, the only supported operation is "upgrade firmware".

Trezor One with firmware older than 1.7.0 (including firmware-less out-of-the-box units) will not be recognized, unless you install HIDAPI support (see below).

Installation options

  • Firmware-less Trezor One: If you are setting up a brand new Trezor One without firmware, you will need HIDAPI support. On Linux, you will need the following packages (or their equivalents) as prerequisites: python3-dev, cython3, libusb-1.0-0-dev, libudev-dev.

    Install with:

    pip3 install trezor[hidapi]
  • Ethereum: To support Ethereum signing from command line, additional packages are needed. Install with:

    pip3 install trezor[ethereum]

To install both, use pip3 install trezor[hidapi,ethereum].

Distro packages

Check out Repology to see if your operating system has an up-to-date python-trezor package.

Installing latest version from GitHub

pip3 install "git+"

Running from source

Install the Poetry tool, checkout trezor-firmware from git, and enter the poetry shell:

pip3 install poetry
git clone
cd trezor-firmware
poetry install
poetry shell

In this environment, trezorlib and the trezorctl tool run from the live sources, so your changes are immediately effective.

Command line client (trezorctl)

The included trezorctl Python script can perform various tasks such as changing settings on the Trezor, signing transactions, and retrieving account info and addresses. See the docs/ subfolder for detailed examples and options.

NOTE: An older version of the trezorctl command is available for Debian Stretch (and comes pre-installed on Tails OS).

Python Library

You can use this python library to interact with a Trezor and use its capabilities in your application. See examples here in the tools/ sub folder.

PIN Entering

When you are asked for a PIN, you have to enter the scrambled PIN. Follow the numbers shown on the Trezor display and enter their positions using the numeric keyboard mapping:


Example: your PIN is 1234 and Trezor is displaying the following:


You have to enter: 3795
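The scrambling can be expressed as a small lookup. The sketch below is illustrative only (it is not part of trezorlib): given which digit Trezor displays at each numeric-keypad position, it computes the positions the user must type for a given PIN. The `displayed` mapping is a hypothetical example arranged to match the 1234 → 3795 case above.

```python
def scramble_pin(real_pin: str, displayed: dict) -> str:
    """Return the keypad positions to type for `real_pin`.

    `displayed` maps keypad position -> digit currently shown on the Trezor
    screen at that position; positions follow the numeric-keypad layout
    7 8 9 / 4 5 6 / 1 2 3.
    """
    # Invert the mapping: digit shown -> keypad position to press
    position_of = {digit: pos for pos, digit in displayed.items()}
    return "".join(str(position_of[int(d)]) for d in real_pin)

# Hypothetical display matching the example above: digit 1 is shown at
# position 3, digit 2 at position 7, digit 3 at position 9, digit 4 at 5.
displayed = {3: 1, 7: 2, 9: 3, 5: 4, 1: 5, 2: 6, 4: 7, 6: 8, 8: 9}
```

For the example PIN, `scramble_pin("1234", displayed)` yields `"3795"`.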


If you want to change protobuf or coin definitions, you will need to regenerate definitions in the python/ subdirectory.

First, make sure your submodules are up-to-date with:

git submodule update --init --recursive

Then, rebuild the protobuf messages by running, from the trezor-firmware top-level directory:

make gen

To get support for BTC-like coins, these steps are enough and no further changes to the library are necessary.


Trezor Model T Open Source Hardware Reference Documentation


Photo Front

Photo of assembled board (top)

Assembled Board Top

Photo of assembled board (bottom)

Assembled Board Bottom

Photo of assembled TFT LCD display + capacitive touch panel module (top)

Display Module Top

Photo of assembled TFT LCD display + capacitive touch panel module (bottom)

Display Module Bottom

Photo of disassembled TFT LCD display + capacitive touch panel module (top) (CTPM on left) (TFT LCD broken glass removed)

Display Module Disassembled Top

Photo of disassembled TFT LCD display + capacitive touch panel module (bottom) (CTPM on left) (TFT LCD broken glass removed)

Display Module Disassembled Bottom

Bill of Materials / BOM




Developer Kit


Display

  • Resolution: 240px x 240px -OR- 240px x 320px
  • Driver IC: ST7789V, GC9307, or ILI9341V (on-chip display data RAM of 240x320x18 bits)
  • 18-bit (262,144) RGB color graphic type TFT-LCD
  • Bus/Interface: 8080-I 8-bit parallel with 16-bit/pixel (RGB 5-6-5)


| Description | MCU Pin | Notes |
| ----------- | ------- | ----- |
| LCD_RST | PC14 | display module pin 21. benign conflict with unpopulated OSC32_IN on dev board. |
| LCD_FMARK | PD12 | tearing effect input; display module pin 22 |
| LCD_PWM | PA7 | backlight control (brightness); display module pin 29. benign conflict with I2C_EXT_RST on dev board. |
| LCD_CS | PD7 | display module pin 23 |
| LCD_RS | PD11 | register select aka command/data; display module pin 24 |
| LCD_RD | PD4 | display module pin 26 |
| LCD_WR | PD5 | display module pin 25 |
| LCD_D0 | PD14 | display module pin 3 |
| LCD_D1 | PD15 | display module pin 4 |
| LCD_D2 | PD0 | display module pin 5 |
| LCD_D3 | PD1 | display module pin 6 |
| LCD_D4 | PE7 | display module pin 7 |
| LCD_D5 | PE8 | display module pin 8 |
| LCD_D6 | PE9 | display module pin 9 |
| LCD_D7 | PE10 | display module pin 10 |
| LCD_D8 | PE11 | not currently used |

Capacitive Touch Panel / Sensor

  • Bus/Interface: I2C
  • Driver IC: FT6236 or FT6206
  • single touch


| Description | MCU Pin | Notes |
| ----------- | ------- | ----- |
| TOUCH_ON | PB10 | no mapped pin on display module |
| I2C1_SCL | PB6 | display module pin 30 |
| I2C1_SDA | PB7 | display module pin 31 |
| EINT | PC4 | not currently used. display module pin 39. conflict with USB OTG FS PSO on dev board. |
| REST | PC5 | benign conflict with USB OTG FS OC on dev board. no mapped pin on display module. |

microSD Socket

  • Bus/Interface: 4-bit


| Description | MCU Pin |
| ----------- | ------- |

USB Socket

  • USB HS (high-speed) peripheral in FS (full-speed) mode


| Description | MCU Pin | Notes |
| ----------- | ------- | ----- |
| SBU1 | PA2 | not currently used. conflict with L3GD20 Gyroscope MEMS on dev board. |
| SBU2 | PA3 | not currently used |

Dev Board

  • STM32F429ZIT6
  • HSE / High-Speed External Crystal: 8 MHz
  • Integrated STMicroelectronics ST-LINK/V2.1 debugger

Note: There are many conflicts between how the software maps GPIO pins and how the dev board maps them to its functions. Many of the conflicts are resolved by removing the external SDRAM chip and the TFT LCD display + resistive touch panel that come attached to the dev board. The unresolved conflicts are noted in the pinout descriptions above. Currently, testing has shown that it is not necessary to remove either the SDRAM or the TFT LCD display + resistive touch panel.

If you choose to remove them, our experience is that the easiest way to remove the SDRAM chip is by first cutting the leads on one side of the chip (e.g. with an X-Acto knife) and then lifting the chip and rocking it until the leads on the other side break. Be sure that no broken leads short to another pin, as this can cause the dev board and/or display to malfunction. If some do, just clean them up so that they are separate again. This method reduces the amount of knife work, and the chance of slicing other things on the board (or yourself).

To remove the TFT LCD display + resistive touch panel module, lift the module away from the metal tray, bend the metal tray out of the way, then cleanly pull/tear the flex PCB away from the solder connections to the main board (the connections usually break without much force). The metal tray is attached to the board with double-stick tape; just pull that up.

Photo of dev board before modifications (top)

Dev Board Top Before

Photo of dev board before modifications (bottom)

Dev Board Bottom Before

Minimum MCU requirements:

  • STM32F4 family STM32F427VIT6
  • 168 MHz, 8 MHz HSE
  • 2048 KB Flash memory
  • 192 KB SRAM
  • 64 KB CCMRAM
  • FMC controller
  • TRNG

Clock Tree

Clock Tree

Trezor Core Boot Stages

Trezor T initialization is split into two stages. See Memory Layout for information about the sectors in which each stage is stored.

The first stage (boardloader) is stored in a write-protected area, which means it is non-upgradable. Only the second stage (bootloader) can be updated.

First Stage - Boardloader

First stage checks the integrity and signatures of the second stage and runs it if everything is OK.

If first stage boardloader finds a valid second stage bootloader image on the SD card (in raw format, no filesystem), it will replace the internal second stage, allowing a second stage update via SD card.

The boardloader is special in that it is the device's write protected embedded code. The primary purpose for write protecting the boardloader is to make it the immutable portion that can defend against code-based attacks (e.g.- BadUSB) and bugs that would reprogram any/all of the embedded code. It assures that only verified signed embedded code is run on the device (and that the intended code is run, and not skipped). The write protection also provides some defense against attacks where the attacker has physical control of the device.

The boardloader must include an update mechanism for later stage code because if it did not, then a corruption/erasure of later stage flash memory would leave the device unusable (only the boardloader could run and it would not pass execution to a later stage that fails signature validation).

Developer note:

A microSD card can be prepared with the following. Note that the bootloader is allocated 128 KiB.

WARNING: Ensure that you want to overwrite and destroy the contents of /dev/mmcblk0 before running these commands. Likewise, /dev/mmcblk0 may be replaced by your own specific destination.

  1. sudo dd if=/dev/zero of=/dev/mmcblk0 bs=512 count=256 conv=fsync

  2. sudo dd if=build/bootloader/bootloader.bin of=/dev/mmcblk0 bs=512 conv=fsync

Second Stage - Bootloader

Second stage checks the integrity and signatures of the firmware and runs it if everything is OK.

If second stage bootloader detects a pressed finger on the display or there is no firmware loaded in the device, it will start in a firmware update mode, allowing a firmware update via USB.

Common notes

  • Hash function used for computing data digest for signatures is BLAKE2s.
  • Signature system is Ed25519 (allows combining signatures by multiple keys into one).
  • All multibyte integer values are little endian.
  • There is a tool which checks the validity of bootloader/firmware images, including their headers.

Bootloader Format

Trezor Core (second stage) bootloader consists of 2 parts:

  1. bootloader header
  2. bootloader code

Bootloader Header

Total length of bootloader header is always 1024 bytes.

| offset | length | name | description |
| ------ | ------ | ---- | ----------- |
| 0x0000 | 4 | magic | firmware magic TRZB |
| 0x0004 | 4 | hdrlen | length of the bootloader header |
| 0x0008 | 4 | expiry | valid until timestamp (0=infinity) |
| 0x000C | 4 | codelen | length of the bootloader code (without the header) |
| 0x0010 | 1 | vmajor | version (major) |
| 0x0011 | 1 | vminor | version (minor) |
| 0x0012 | 1 | vpatch | version (patch) |
| 0x0013 | 1 | vbuild | version (build) |
| 0x0014 | 1 | fix_vmajor | version of last critical bugfix (major) |
| 0x0015 | 1 | fix_vminor | version of last critical bugfix (minor) |
| 0x0016 | 1 | fix_vpatch | version of last critical bugfix (patch) |
| 0x0017 | 1 | fix_vbuild | version of last critical bugfix (build) |
| 0x0018 | 8 | reserved | not used yet (zeroed) |
| 0x0020 | 32 | hash1 | hash of the first code chunk (128 - 1 KiB), this excludes the header |
| 0x0040 | 32 | hash2 | hash of the second code chunk (128 KiB), zeroed if unused |
| ... | ... | ... | ... |
| 0x0200 | 32 | hash16 | hash of the last possible code chunk (128 KiB), zeroed if unused |
| 0x0220 | 415 | reserved | not used yet (zeroed) |
| 0x03BF | 1 | sigmask | SatoshiLabs signature indexes (bitmap) |
| 0x03C0 | 64 | sig | SatoshiLabs aggregated signature of the bootloader header |
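As an illustration of the layout above, the fixed-offset header fields can be parsed with Python's struct module (all multibyte integers are little endian, per the common notes). This is a reader's sketch, not the canonical parser used by the project tooling.

```python
import struct

def parse_bootloader_header(header: bytes) -> dict:
    """Parse the fixed-offset fields of a 1024-byte bootloader header."""
    assert len(header) == 1024, "bootloader header is always 1024 bytes"
    # offsets 0x00-0x0F: magic, hdrlen, expiry, codelen (little endian)
    magic, hdrlen, expiry, codelen = struct.unpack_from("<4sIII", header, 0x00)
    assert magic == b"TRZB", "bootloader magic must be TRZB"
    # offsets 0x10-0x13: version bytes
    vmajor, vminor, vpatch, vbuild = struct.unpack_from("<4B", header, 0x10)
    # offsets 0x20-0x21F: 16 code-chunk hashes of 32 bytes each
    hashes = [bytes(header[0x20 + 32 * i : 0x20 + 32 * (i + 1)]) for i in range(16)]
    sigmask = header[0x3BF]
    sig = bytes(header[0x3C0:0x400])
    return {
        "hdrlen": hdrlen, "expiry": expiry, "codelen": codelen,
        "version": (vmajor, vminor, vpatch, vbuild),
        "hashes": hashes, "sigmask": sigmask, "sig": sig,
    }
```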

Firmware Format

Trezor Core firmware consists of 3 parts:

  1. vendor header
  2. firmware header
  3. firmware code

Vendor Header

Total length of vendor header is 84 + 32 * (number of pubkeys) + (length of vendor string rounded up to multiple of 4) + (length of vendor image) bytes rounded up to the closest multiple of 512 bytes.

| offset | length | name | description |
| ------ | ------ | ---- | ----------- |
| 0x0000 | 4 | magic | firmware magic TRZV |
| 0x0004 | 4 | hdrlen | length of the vendor header (multiple of 512) |
| 0x0008 | 4 | expiry | valid until timestamp (0=infinity) |
| 0x000C | 1 | vmajor | version (major) |
| 0x000D | 1 | vminor | version (minor) |
| 0x000E | 1 | vsig_m | number of signatures needed to run the firmware from this vendor |
| 0x000F | 1 | vsig_n | number of different pubkeys vendor provides for signing |
| 0x0010 | 2 | vtrust | level of vendor trust (bitmap) |
| 0x0012 | 14 | reserved | not used yet (zeroed) |
| 0x0020 | 32 | vpub1 | vendor pubkey 1 |
| ... | ... | ... | ... |
| ? | 32 | vpubn | vendor pubkey n |
| ? | 1 | vstr_len | vendor string length |
| ? | ? | vstr | vendor string |
| ? | ? | vstrpad | padding to a multiple of 4 bytes |
| ? | ? | vimg | vendor image (120x120 pixels in TOIf format) |
| ? | ? | reserved | padding to an address that is -65 modulo 512 (zeroed) |
| ? | 1 | sigmask | SatoshiLabs signature indexes (bitmap) |
| ? | 64 | sig | SatoshiLabs aggregated signature of the vendor header |
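The total-length formula given above can be sketched as a small helper. This is illustrative only, not project code; it simply implements the stated arithmetic.

```python
def vendor_header_length(num_pubkeys: int, vstr: bytes, vimg: bytes) -> int:
    """Total vendor header length per the formula above:
    84 fixed bytes + 32 per pubkey + vendor string padded to a multiple
    of 4 + vendor image, all rounded up to the closest multiple of 512."""
    vstr_padded = -(-len(vstr) // 4) * 4      # round up to multiple of 4
    raw = 84 + 32 * num_pubkeys + vstr_padded + len(vimg)
    return -(-raw // 512) * 512               # round up to multiple of 512
```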

Vendor Trust

Vendor trust is stored as a bitmap, where an unset bit means the feature is active.

| bit | hex | meaning |
| --- | --- | ------- |
| 0 | 0x0001 | wait 1 second |
| 1 | 0x0002 | wait 2 seconds |
| 2 | 0x0004 | wait 4 seconds |
| 3 | 0x0008 | wait 8 seconds |
| 4 | 0x0010 | use red background instead of black one |
| 5 | 0x0020 | require user click |
| 6 | 0x0040 | show vendor string (not just the logo) |
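Decoding the bitmap can be sketched as follows. This is an illustrative helper, not firmware code; note the inverted sense, where a cleared bit activates the feature.

```python
# Feature bits per the vtrust table above.
VTRUST_FLAGS = {
    0x0001: "wait 1 second",
    0x0002: "wait 2 seconds",
    0x0004: "wait 4 seconds",
    0x0008: "wait 8 seconds",
    0x0010: "use red background",
    0x0020: "require user click",
    0x0040: "show vendor string",
}

def active_vtrust_features(vtrust: int) -> list:
    # An UNSET bit means the corresponding feature is active.
    return [name for bit, name in VTRUST_FLAGS.items() if not (vtrust & bit)]
```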

Firmware Header

Total length of firmware header is always 1024 bytes.

| offset | length | name | description |
| ------ | ------ | ---- | ----------- |
| 0x0000 | 4 | magic | firmware magic TRZF |
| 0x0004 | 4 | hdrlen | length of the firmware header |
| 0x0008 | 4 | expiry | valid until timestamp (0=infinity) |
| 0x000C | 4 | codelen | length of the firmware code (without the header) |
| 0x0010 | 1 | vmajor | version (major) |
| 0x0011 | 1 | vminor | version (minor) |
| 0x0012 | 1 | vpatch | version (patch) |
| 0x0013 | 1 | vbuild | version (build) |
| 0x0014 | 1 | fix_vmajor | version of last critical bugfix (major) |
| 0x0015 | 1 | fix_vminor | version of last critical bugfix (minor) |
| 0x0016 | 1 | fix_vpatch | version of last critical bugfix (patch) |
| 0x0017 | 1 | fix_vbuild | version of last critical bugfix (build) |
| 0x0018 | 8 | reserved | not used yet (zeroed) |
| 0x0020 | 32 | hash1 | hash of the first code chunk excluding both the firmware and the vendor header (128 - 1 - [vendor header length] KiB) |
| 0x0040 | 32 | hash2 | hash of the second code chunk (128 KiB), zeroed if unused |
| ... | ... | ... | ... |
| 0x0200 | 32 | hash16 | hash of the last possible code chunk (128 KiB), zeroed if unused |
| 0x0220 | 415 | reserved | not used yet (zeroed) |
| 0x03BF | 1 | sigmask | vendor signature indexes (bitmap) |
| 0x03C0 | 64 | sig | vendor aggregated signature of the firmware header |

Trezor T Memory Layout


| sector | range | size | content |
| ------ | ----- | ---- | ------- |
| Sector 0 | 0x08000000 - 0x08003FFF | 16 KiB | boardloader (1st stage) (write-protected) |
| Sector 1 | 0x08004000 - 0x08007FFF | 16 KiB | boardloader (1st stage) (write-protected) |
| Sector 2 | 0x08008000 - 0x0800BFFF | 16 KiB | boardloader (1st stage) (write-protected) |
| Sector 3 | 0x0800C000 - 0x0800FFFF | 16 KiB | unused |
| Sector 4 | 0x08010000 - 0x0801FFFF | 64 KiB | storage area #1 |
| Sector 5 | 0x08020000 - 0x0803FFFF | 128 KiB | bootloader (2nd stage) |
| Sector 6 | 0x08040000 - 0x0805FFFF | 128 KiB | firmware |
| Sector 7 | 0x08060000 - 0x0807FFFF | 128 KiB | firmware |
| Sector 8 | 0x08080000 - 0x0809FFFF | 128 KiB | firmware |
| Sector 9 | 0x080A0000 - 0x080BFFFF | 128 KiB | firmware |
| Sector 10 | 0x080C0000 - 0x080DFFFF | 128 KiB | firmware |
| Sector 11 | 0x080E0000 - 0x080FFFFF | 128 KiB | firmware |
| Sector 12 | 0x08100000 - 0x08103FFF | 16 KiB | unused |
| Sector 13 | 0x08104000 - 0x08107FFF | 16 KiB | unused |
| Sector 14 | 0x08108000 - 0x0810BFFF | 16 KiB | unused |
| Sector 15 | 0x0810C000 - 0x0810FFFF | 16 KiB | unused |
| Sector 16 | 0x08110000 - 0x0811FFFF | 64 KiB | storage area #2 |
| Sector 17 | 0x08120000 - 0x0813FFFF | 128 KiB | firmware extra |
| Sector 18 | 0x08140000 - 0x0815FFFF | 128 KiB | firmware extra |
| Sector 19 | 0x08160000 - 0x0817FFFF | 128 KiB | firmware extra |
| Sector 20 | 0x08180000 - 0x0819FFFF | 128 KiB | firmware extra |
| Sector 21 | 0x081A0000 - 0x081BFFFF | 128 KiB | firmware extra |
| Sector 22 | 0x081C0000 - 0x081DFFFF | 128 KiB | firmware extra |
| Sector 23 | 0x081E0000 - 0x081FFFFF | 128 KiB | firmware extra |
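The sector layout above follows the STM32F42x dual-bank flash geometry (two 1 MiB banks, each with four 16 KiB sectors, one 64 KiB sector, and seven 128 KiB sectors) and can be reproduced programmatically. This is an illustrative sketch, not project code:

```python
FLASH_BASE = 0x08000000
BANK_SIZE = 0x100000  # 1 MiB per flash bank

def sector_bounds(sector: int):
    """Start address and size in bytes of flash sectors 0-23 (table above)."""
    sizes_kib = [16] * 4 + [64] + [128] * 7       # sectors 0..11 within a bank
    bank, s = divmod(sector, 12)                   # sectors 12..23 mirror in bank 2
    start = FLASH_BASE + bank * BANK_SIZE + sum(sizes_kib[:s]) * 1024
    return start, sizes_kib[s] * 1024
```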


| block | range | size | content |
| ----- | ----- | ---- | ------- |
| block 0 | 0x1FFF7800 - 0x1FFF781F | 32 B | device batch (week of manufacture) |
| block 1 | 0x1FFF7820 - 0x1FFF783F | 32 B | bootloader downgrade protection |
| block 2 | 0x1FFF7840 - 0x1FFF785F | 32 B | vendor keys lock |
| block 3 | 0x1FFF7860 - 0x1FFF787F | 32 B | entropy/randomness |
| block 4 | 0x1FFF7880 - 0x1FFF789F | 32 B | unused |
| block 5 | 0x1FFF78A0 - 0x1FFF78BF | 32 B | unused |
| block 6 | 0x1FFF78C0 - 0x1FFF78DF | 32 B | unused |
| block 7 | 0x1FFF78E0 - 0x1FFF78FF | 32 B | unused |
| block 8 | 0x1FFF7900 - 0x1FFF791F | 32 B | unused |
| block 9 | 0x1FFF7920 - 0x1FFF793F | 32 B | unused |
| block 10 | 0x1FFF7940 - 0x1FFF795F | 32 B | unused |
| block 11 | 0x1FFF7960 - 0x1FFF797F | 32 B | unused |
| block 12 | 0x1FFF7980 - 0x1FFF799F | 32 B | unused |
| block 13 | 0x1FFF79A0 - 0x1FFF79BF | 32 B | unused |
| block 14 | 0x1FFF79C0 - 0x1FFF79DF | 32 B | unused |
| block 15 | 0x1FFF79E0 - 0x1FFF79FF | 32 B | unused |


| region | range | size | content |
| ------ | ----- | ---- | ------- |
| CCMRAM | 0x10000000 - 0x1000FFFF | 64 KiB | Core Coupled Memory |
| SRAM1 | 0x20000000 - 0x2001BFFF | 112 KiB | General Purpose SRAM |
| SRAM2 | 0x2001C000 - 0x2001FFFF | 16 KiB | General Purpose SRAM |
| SRAM3 | 0x20020000 - 0x2002FFFF | 64 KiB | General Purpose SRAM |

Model One

To be documented.

In the meantime check out these great write-ups from @mcudev:

Trezor Common

Common contains files shared among Trezor projects.

Coin Definitions

JSON coin definitions and support tables.

Protobuf Definitions

Common Protobuf definitions for the Trezor protocol. Also see Communication.


Tools for managing coin definitions and related data.


Note: In this section we describe the internal functioning of the communication protocol. If you wish to implement Trezor support you should use Connect or python-trezor, which will do all this hard work for you.

We use Protobuf v2 for host-device communication. The communication cycle is very simple: Trezor receives a message (request), acts on it, and responds with another one (response). Trezor on its own is incapable of initiating the communication.


Protobuf messages are defined in the Common project, which is part of this monorepo. The definitions are also exported to trezor/trezor-common for use by third parties who prefer not to include the whole monorepo. That copy is a read-only mirror; all changes happen in this monorepo.

Notable topics


Trezor has limited support for logical sessions. The main purpose is to enable seamless operation with multiple passphrases.

Warning: Session isolation does not exist. Host software is responsible for maintaining isolation. Running multiple host-side applications at the same time is not recommended.

See "Caveats" section below for details.

Support for isolated sessions is in the works, see #79.

Session lifecycle

After Trezor starts up, no session exists. Any attempt to use session data (i.e., the seed) will be rejected with an InvalidSession error code.

New session is started by calling Initialize with no arguments. The response is a Features message, which contains a 32-byte session_id. All subsequent commands happen within the given session.

To resume a previous session (after creating a new one), call Initialize with a stored session_id as an argument.

An attempt to resume an unknown session ID will transparently allocate a new session ID.

Since firmware versions 1.9.4 / 2.3.4, it is possible to destroy the current session via the EndSession call. The session and all its associated data are wiped from Trezor memory, and it is impossible to resume the session. Trezor returns to the initial state and all requests will return InvalidSession.

There is no explicit way to leave a session and keep its data for later resumption. Instead, you can switch to a new session via Initialize with no arguments.

At the moment, both T1 and TT allow for 10 sessions to exist at the same time. When a new session needs to be allocated and there is no space in the cache, the least recently used session is evicted.

Sessions only exist in RAM and are lost when Trezor is disconnected.

All commands are performed in the context of the current session, until one of the following happens:

  • Host calls EndSession. The current session is destroyed and Trezor returns to the initial state.
  • Host calls Initialize with no arguments, or with an unknown session_id. A new session is allocated and its id returned in the Features message.
  • Host calls Initialize with a known session_id. The specified session is resumed and its session_id is returned in the Features message.
  • Trezor is disconnected.
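The cache behavior described above (up to 10 sessions, least-recently-used eviction, transparent allocation for unknown IDs) can be modeled in a few lines of Python. This is a hypothetical model for illustration, not the actual firmware data structure:

```python
from collections import OrderedDict
import os

MAX_SESSIONS = 10  # both T1 and TT currently allow 10 concurrent sessions

class SessionCache:
    """Toy model of the device-side session cache."""
    def __init__(self):
        self._sessions = OrderedDict()   # session_id -> cached session data

    def resume_or_create(self, session_id=None):
        if session_id in self._sessions:
            self._sessions.move_to_end(session_id)   # mark most recently used
            return session_id
        # Unknown or missing ID: transparently allocate a new session
        new_id = os.urandom(32)                      # fresh 32-byte session id
        if len(self._sessions) >= MAX_SESSIONS:
            self._sessions.popitem(last=False)       # evict least recently used
        self._sessions[new_id] = {}
        return new_id

    def end(self, session_id):
        self._sessions.pop(session_id, None)         # EndSession wipes the data
```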


Caveats

  • Sessions only exist on the protobuf message level. There is no proper isolation. Multiple host applications can insert commands into each other's sessions.

    It is recommended to send Initialize to resume a session immediately before each flow. However, even this does not guarantee that another application doesn't insert its own Initialize in the time it takes you to send the next command.

    The reverse is also true: session management does not prevent other applications from inserting commands under the currently active session (and therefore passphrase), without knowledge of the session ID or the passphrase.

  • It is impossible to run complex flows concurrently. If an application is in the middle of Bitcoin signing, sending Initialize will cancel the signing flow. Resuming the appropriate session later will not continue where it left off.


Allocate a new session, perform a command, and end the session:

Initialize()                  --------->          Features(..., session_id=AAAA)
    ---<now in session AAAA>---
Request                       --------->          Response
EndSession()                  --------->          Success()
    ---<now in no session>---

Allocate two new sessions, resume the first one later:

Initialize()                  --------->          Features(..., session_id=AAAA)
    ---<now in session AAAA>---
Request                       --------->          Response

Initialize()                  --------->          Features(..., session_id=BBBB)
    ---<now in session BBBB>---
Request                       --------->          Response

Initialize(session_id=AAAA)   --------->          Features(..., session_id=AAAA)
    ---<now in session AAAA>---
Request                       --------->          Response

Attempt to resume session that is not in the cache:

Initialize()                  --------->          Features(..., session_id=AAAA)
    ---<now in session AAAA>---
EndSession()                  --------->          Success()
    ---<now in no session>---
Initialize(session_id=AAAA)   --------->          Features(..., session_id=BBBB)
    ---<now in session BBBB>---


As of 1.9.0 / 2.3.0 we have changed how passphrase is communicated between the host and the device. For migration information for existing Hosts communicating with Trezor please see this document.

Passphrase is very tightly coupled with sessions. The reader is encouraged to read on that topic first in the Sessions section.


As soon as Trezor needs the passphrase to do BIP-39/SLIP-39 derivations, it prompts the user for the passphrase.

GetAddress(...)                                --------->          PassphraseRequest()
PassphraseAck(str passphrase, bool on_device)  --------->          Address(...)

In the default Trezor setting, the passphrase is obtained from the Host. Trezor sends a PassphraseRequest message and awaits PassphraseAck as a response. This message contains either the passphrase field, transmitting the passphrase itself, or the on_device boolean flag, indicating that the user wishes to enter the passphrase on Trezor instead. Setting both passphrase and on_device is forbidden.

Note that this has changed as of 2.3.0. In previous firmware versions the on_device flag was in the PassphraseRequest message, since this decision has been made on Trezor. We also had two additional messages PassphraseStateRequest and PassphraseStateAck which were removed.


On an initialized device with passphrase enabled a common communication starts like this:

Initialize()                  --------->          Features(..., session_id)
GetAddress(...)               --------->          PassphraseRequest()
PassphraseAck(...)            --------->          Address(...)

The device requested the passphrase since the BIP-39/SLIP-39 seed is not yet cached. After this workflow the seed is cached and the passphrase will therefore never be requested again unless the session is cleared*.

Since sessions are not isolated, the Host can not be sure that someone else has not used the device and switched to another session id (e.g. changed the passphrase). To work around this, the Host sends the session id again before each subsequent workflow. See more on that in the Sessions section.

Initialize(session_id)        --------->          Features(..., session_id)
GetPublicKey(...)             --------->          PublicKey(...)

As long as the session_id in Initialize is the same as the one Trezor stores internally, Trezor guarantees the same passphrase is being used.

* There is one exception, and that is Cardano. Because Cardano has a different BIP-39/SLIP-39 derivation scheme for the passphrase, we can not use the cached seed. As a workaround, we prompt for the passphrase again in that case and cache the Cardano seed in the Cardano app directly.

Passphrase always on device

A user might want to enforce passphrase entry on the device every time, without the hassle of instructing the Host to do so.

For such cases the user may apply the Passphrase always on device setting. As the name suggests, with this setting the passphrase is prompted on the device right away and no PassphraseRequest/PassphraseAck messages are exchanged. Note that the passphrase is prompted only once for given session id. If the user wishes to enter another passphrase they need to either send Initialize(session_id=None) or replug the device.

Passphrase Redesign In 1.9.0 / 2.3.0

On the T1, passphrase must be entered on the host PC and sent to Trezor. On the TT, the user can choose whether to enter the passphrase on host or on Trezor's touch screen.

In versions 1.9.0 and 2.3.0 we have redesigned how the passphrase is communicated between the Host and the Device. The new design is documented thoroughly in the Passphrase section, and this document should help with the transition from the old design.

New features

  • Passphrase flow is now identical for T1 and TT.
  • By keeping track of sessions, it is possible to avoid having to send passphrase repeatedly.
  • The choice between entering on Host or Device for TT has been moved from Device to Host.
  • Multiple passphrases are cached simultaneously.

Backwards compatibility

T1 1.9.0 is fully backwards-compatible and works with existing Host code.

TT 2.3.0 communicating with old Host software degrades to T1-level features: entering passphrase on device will not be available, and on-device passphrase caching via the state field will not be available.

As a workaround, it is possible to use the "passphrase always on device" feature on the new TT firmware. When enabled, the passphrase flow is completely hidden from the Host software, and the Device itself prompts the user to enter the passphrase.

Implementation guide

Protobuf changes

Protobuf has built-in backwards compatibility mechanisms, so a conforming implementation should continue to work with old protobuf definitions.

To restore support for TT on-device passphrase entry, and to make use of the new features, you will need to update to newer protobuf definitions from trezor-common (TODO: link to commit in trezor-common).

The gist of the changes is:

  • PassphraseRequest.on_device was deprecated, and renamed to _on_device. New Devices will never send this field.
  • Corresponding field PassphraseAck.on_device was added.
  • PassphraseAck.state was deprecated, and renamed to _state. It is retained for code compatibility, but the field should never be set.
  • PassphraseStateRequest/PassphraseStateAck messages were deprecated, and renamed with a Deprecated_ prefix. New Devices will not send or accept these messages.
  • Initialize.state was renamed to Initialize.session_id.
  • Corresponding field Features.session_id was added. New Devices will always send this field in response to Initialize call.
  • A new value Capability_PassphraseEntry was added to the Features.Capability enum. This capability will be sent from a Device that supports on-device passphrase entry (currently only TT).

Restoring on-device entry for TT

The Host software reacts to a PassphraseRequest message by prompting the user for a passphrase and sending it in the PassphraseAck.passphrase field.

A new UI element should be added: when the passphrase prompt is displayed on Host, there should be an option to "enter passphrase on device". When the user selects this option, the Host must send a PassphraseAck(passphrase=null, on_device=true).

The "enter passphrase on device" option should be displayed when Features.capabilities contain the Capability_PassphraseEntry value, regardless of reported Trezor version or model. Firmwares older than 2.3.0 or 1.9.0 never set this value, so this ensures forwards and backwards compatibility.

Cross-version compatibility for TT

TT version < 2.3.0 will send PassphraseRequest(_on_device=true) if the user selected on-device entry. Neither T1 nor TT >= 2.3.0 will ever set this field to true.

If the Host receives PassphraseRequest(_on_device=true), it should immediately respond with PassphraseAck() with no fields set.

TT version < 2.3.0 will send Deprecated_PassphraseStateRequest(state=[bytes]) after receiving PassphraseAck. The Host should immediately respond with Deprecated_PassphraseStateAck() with no fields set. If the Host does session management, it should store the value of state as the session ID.

Triggering passphrase prompt

Use GetAddress(coin_name="Testnet", address_n=[44'/1'/0'/0/0]) (the first address of the first account of Testnet) to ensure that the Device asks for a passphrase if needed, and caches it for future use.

Validating passphrases

You can store the result of the above call, and in the future, compare it to a newly received address. This is a good way to check if the user is using the same passphrase as last time.

Do not store user-entered passphrases for the purpose of validation, even in hashed, encrypted, or otherwise obfuscated format.

Session support

A call to Initialize can include a session_id field. When starting a new user session, this field should be left empty.

The response Features message will always include a session_id field. The value of this field should be stored. When calling Initialize again, the stored value should be sent as session_id. If the received Features.session_id is the same, it means that session was resumed successfully and the user will not be prompted for passphrase.

--> Initialize()
<-- Features(session_id=0xABCDEF, ...)

--> Initialize(session_id=0xABCDEF)
<-- Features(session_id=0xABCDEF)
# (session resumed successfully)

--> Initialize(session_id=0xABCDEF)
<-- Features(session_id=0x123456)
# (session was not resumed, user will be prompted for passphrase again)

Session support is identical on T1 and TT, and both models support multiple sessions, i.e., it is possible to seamlessly switch between using multiple passphrases.

Cross-version compatible algorithm summary

The following algorithm will ensure that your Host application works properly with both T1 and TT with both older and newer firmwares.

  1. If you have a session ID stored, call Initialize(session_id=stored_session_id)
  2. Check the value of Features.session_id. If it is identical to stored_session_id, the session was resumed and user will not need to be prompted for a passphrase.
    1. If Features.session_id is not set, you are communicating with an older Device. Do not store the null value as session ID.
    2. Otherwise store the value as stored_session_id.
  3. When you receive a PassphraseRequest(_on_device=true), respond with PassphraseAck() with no fields set.
  4. When you receive a PassphraseRequest, prompt the user for passphrase.
    1. If Features.capabilities contains value Capability_PassphraseEntry, display a UI element that allows the user to enter passphrase on-device.
    2. If the user chooses this option, send PassphraseAck(passphrase=null, on_device=true)
    3. If the user enters the passphrase in your application, send PassphraseAck(passphrase="user entered passphrase", on_device=false)
  5. When you receive a Deprecated_PassphraseStateRequest(state=...), store the value of state as stored_session_id, and respond with Deprecated_PassphraseStateAck with no fields set.

Note: up to 64 bytes may be required to store the session ID. Firmwares < 2.3.0 use a 64-byte value, newer firmwares use a 32-byte value.
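Steps 1 and 2 of the algorithm can be sketched as follows. The Initialize/Features classes here are simplified stand-ins for the protobuf messages, and `call` is a placeholder for your transport layer; this is not trezorlib API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Initialize:                        # hypothetical stand-in message
    session_id: Optional[bytes] = None

@dataclass
class Features:                          # hypothetical stand-in message
    session_id: Optional[bytes] = None
    capabilities: List[str] = field(default_factory=list)

def negotiate_session(call, stored_session_id=None):
    """Try to resume a session; return (id_to_store, resumed).

    `call` sends a message to the device and returns its response.
    """
    features = call(Initialize(session_id=stored_session_id))
    if features.session_id is None:
        # Older device: no session support; do not store the null value.
        return None, False
    # The returned ID may be a fresh one if the resume failed.
    resumed = features.session_id == stored_session_id
    return features.session_id, resumed
```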

Bitcoin signing flow

The Bitcoin signing process is one of the most complicated workflows in the Trezor codebase. This is because Trezor cannot store arbitrarily large transactions in memory, so both the input data and the results must be sent in chunks. The Protobuf messages cannot fully encode the data pertaining to a single transaction; instead, the data is spread out over multiple messages.


The signing flow is initiated by a SignTx command. The message contains the name of the coin, number of inputs and outputs, and transaction metadata: version, lock time, and others that are required for some coins.

In response, Trezor will send a number of TxRequest messages, asking for additional data from the host. The host is supposed to respond with a TxAck providing the requested data.

Trezor can request the following kinds of items:

  • input of the transaction being signed
  • output of the transaction being signed
  • metadata of a previous transaction, i.e., the transaction whose UTXO is being spent
  • metadata of an original transaction, i.e., a transaction that is being replaced by the current transaction
  • input of a previous or original transaction
  • output of a previous or original transaction
  • additional trailing data of a previous transaction

As part of each TxRequest message, Trezor can also send a chunk of the resulting serialized transaction, and/or a signature of one of the inputs.

The flow ends when Trezor sends a TxRequest with request_type of TXFINISHED.
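The request/response cycle can be sketched as a host-side loop. The tuples below are simplified stand-ins for the TxRequest/TxAck protobuf messages; the real implementation lives in trezorlib.

```python
def signing_loop(call, provide):
    """Drive the TxRequest/TxAck cycle.

    `call` sends a message to the device and returns the next request as a
    (request_type, details, serialized_chunk, signature) tuple; `provide`
    looks up the input/output/previous-transaction data the device asked for.
    """
    serialized = bytearray()
    signatures = []
    request = call(("SignTx",))                     # initiate the flow
    while True:
        kind, details, chunk, sig = request
        if chunk:
            serialized += chunk                     # collect serialized tx pieces
        if sig:
            signatures.append(sig)                  # collect per-input signatures
        if kind == "TXFINISHED":
            return bytes(serialized), signatures
        request = call(("TxAck", provide(kind, details)))
```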

Signing phases

The following content is for reference only, and details might change in the future. Host code should make no assumptions about the order of the phases.

The list of phases here also does not necessarily correspond to internal phase numbering.

Gathering info about current transaction

Trezor will request all inputs and outputs of the transaction to be signed, and set up data structures that allow it to verify that the same data is sent in the following phases.

In this phase, Trezor will also ask the user to confirm destination addresses, transaction fee, metadata, and the total being sent. If the user confirms, we continue to the next phase.

Validation of input data

Trezor must verify that the host does not lie about input amounts, i.e., that the transaction total in the first phase was calculated correctly.

For this reason, Trezor will ask the host to send each input again. It will then request data about the referenced previous transaction: metadata, all inputs, all outputs, and possible trailing data. This allows Trezor to reconstruct the previous transaction and calculate its hash. If this hash matches the provided one, and the amount of the selected UTXO matches the input amount, the given input is considered valid.

Trezor also supports pre-signed inputs for multi-party signing. If an input has an EXTERNAL type, and provides a signature, Trezor will validate the signature against the previous transaction in this step.


Signing

Satisfied that all provided data is valid, Trezor will ask once again about every input and generate a signature for it.

For every legacy (non-segwit) input, it is necessary to stream the full set of inputs and outputs again, so that Trezor can build the serialization which is being signed. For segwit inputs, this is not necessary.

Finally, when all inputs are streamed, Trezor will ask for every output, so that it can serialize it, fill out change addresses, and return the full transaction.

Old versus new data structures

Originally, the TxAck message contained one field of type TransactionType. This, in turn, contained all the possible fields that Trezor could request:

  • TransactionType itself contains fields for all necessary metadata, plus a field extra_data for trailing data.
  • TransactionType.inputs is an array of TxInputType objects, each of which can describe either the current input, or an input of a previous transaction.
  • TransactionType.outputs is an array of TxOutputType objects, each of which can describe an output of the current transaction.
  • TransactionType.bin_outputs is an array of TxOutputBinType objects, each of which can describe an output of a previous transaction.

This organization makes it practical to use the TransactionType for host-side storage: a transaction can be fully stored in one object, and in order to send a TxAck response, you only need to extract the appropriate data.

The cost of this organization is that it is extremely unclear which data should be extracted at which point.

To make the constraints more visible, a new set of data types was designed. There is a TxAck<Kind> message for every Kind of data. These only define the fields that are appropriate for that kind of request.

It is possible to use both representations, as they are wire-compatible. However, we recommend using the new definitions for new applications.

Request types

The TxRequest message always contains a request_type field, indicating which kind of data it wants. In addition, request_details specify the particular piece of data requested.

If request_details.tx_hash is set, Trezor is requesting data about a specified previous transaction. If it is unset, Trezor wants data about the current transaction.

Transaction input

Trezor sets request_type to TXINPUT, and request_details.tx_hash is unset.

request_details.request_index is the index of the input in the transaction: 0 is the first input, 1 is the second, etc.

Old style: Host must respond with a TxAck message. The field tx.inputs must be set to an array of one element, which describes the requested input. All other fields should be left unset.

New style: Host must respond with a TxAckInput message. All relevant data must be set on tx.input.

Normal (internal) inputs

Usually, the user owns, and wants to sign, all inputs of the transaction. For that, the host must specify a derivation path for the key, and script type SPENDADDRESS (legacy), SPENDP2SHWITNESS (P2SH segwit) or SPENDWITNESS (native segwit).

Multisig inputs

For multisig inputs, the XPUBs of all signers (including the current user) must be provided in the multisig structure. Legacy multisig uses type SPENDMULTISIG, P2SH segwit and native segwit multisig use the same type as non-multisig inputs, i.e. SPENDP2SHWITNESS or SPENDWITNESS.

Full documentation for multisig is TBD.

External inputs

Trezor can include inputs that it will not sign, typically because they are owned by another party. Such inputs are of type EXTERNAL and the host does not specify a derivation path for the key. Instead, these inputs must either already have a valid signature or they must come with an ownership proof. If the input already has a valid signature, then the host provides the script_sig and/or witness fields. If the other signing party hasn't signed their input yet (i.e., with two Trezors, one must sign first so that the other can include a pre-signed input), they can instead provide a SLIP-19 ownership proof in the ownership_proof field, with optional commitment data in commitment_data.

Transaction output

Trezor sets request_type to TXOUTPUT, and request_details.tx_hash is unset.

request_details.request_index is the index of the output in the transaction: 0 is the first output, 1 is the second, etc.

Old style: Host must respond with a TxAck message. The field tx.outputs must be set to an array of one element, which describes the requested output. All other fields should be left unset.

New style: Host must respond with a TxAckOutput message. All relevant data must be set on tx.output.

External outputs

Outputs that send coins to a particular address are always of type PAYTOADDRESS. The address is sent as a string in the field address.

Change outputs

Outputs that send coins back to the same owner must specify a derivation path and the appropriate script type. If the derivation path has the same prefix as all inputs, and a matching script type (legacy, p2sh segwit, native segwit), it is considered to be a change output, and its amount is subtracted from the total.

address must not be specified in this case. It is instead derived internally from the provided derivation path.

OP_RETURN outputs

Outputs of type PAYTOOPRETURN must specify neither address nor address_n, and the amount must be zero. The OP_RETURN data is sent in the op_return_data field.

Previous transaction metadata

Trezor sets request_type to TXMETA. request_details.tx_hash is a transaction hash, matching one of the current transaction inputs.

Old style: Host must respond with a TxAck message. The structure tx must be filled out with relevant data, in particular, inputs_cnt and outputs_cnt must be set to the number of transaction inputs and outputs. Arrays inputs, outputs, bin_outputs and extra_data should be empty.

New style: Host must respond with a TxAckPrevMeta message. All relevant data must be set on tx.

Extra data

Some coins (e.g., Zcash) contain data at the end of transaction serialization that Trezor does not understand. The host must indicate the length of this extra data in the field extra_data_len.

To figure out which is the extra data, the host must parse the serialized previous transaction up to the last field understood by Trezor. In case of Zcash, that is:

  • version + version group ID
  • number of inputs, and every input
  • number of outputs, and every output
  • lock time
  • expiry

All data after the expiry field is considered "extra data".

Previous transaction input

Trezor sets request_type to TXINPUT. request_details.tx_hash is a transaction hash, matching one of the current transaction inputs.

Old style: Host must respond with a TxAck message. The field tx.inputs must be set to an array of one element, which describes the requested input of the specified previous transaction. All other fields should be left unset.

New style: Host must respond with a TxAckPrevInput message. All relevant data must be set on tx.input.

Previous transaction output

Trezor sets request_type to TXOUTPUT. request_details.tx_hash is a transaction hash, matching one of the current transaction inputs.

Old style: Host must respond with a TxAck message. The field tx.bin_outputs must be set to an array of one element, which describes the requested output of the specified previous transaction. All other fields should be left unset.

New style: Host must respond with a TxAckPrevOutput message. All relevant data must be set on tx.output.

Previous transaction trailing data

On some coins, such as Zcash, the transaction serialization can contain data not understood by Trezor. This data is not relevant for validation, but it must be included so that Trezor can correctly compute the previous transaction hash.

Trezor sets request_type to TXEXTRADATA. request_details.tx_hash is a transaction hash, matching one of the current transaction inputs.

request_details.extra_data_offset specifies the offset of the requested data from the start of the extra data. request_details.extra_data_length specifies the length of the requested chunk.

Old style: Host must respond with a TxAck message. The field tx.extra_data must contain the specified chunk, starting at the given offset and of exactly the given length. All other fields should be unset.

New style: Host must respond with a TxAckPrevExtraData message. The chunk must be set to tx.extra_data_chunk.

Original transaction input

Trezor sets request_type to TXORIGINPUT. request_details.tx_hash is the transaction hash of the original transaction.

Host must respond with a TxAckInput message. All relevant data must be set in tx.input. The derivation path and script_type are mandatory for all original internal inputs. For each original transaction, one of its original internal inputs must be accompanied with a valid signature in the script_sig and/or witness fields.

Original transaction output

Trezor sets request_type to TXORIGOUTPUT. request_details.tx_hash is the transaction hash of the original transaction.

Host must respond with a TxAckOutput message. All relevant data must be set in tx.output. The derivation path and script type are mandatory for all original change outputs.

Replacement transactions

A replacement transaction is a transaction that uses the same inputs as one or more transactions which have already been signed (the original transactions). Replacement transactions can be used to securely bump the fee of an already signed transaction (BIP-125) or to participate as a sender in PayJoin (BIP-78). Trezor only supports signing of replacement transactions which do not increase the amount that the user is spending on external outputs. Thus, when signing a replacement transaction, the user only needs to confirm the fee modification and the original TXIDs without being shown any outputs, since the original external outputs must have already been confirmed by the user and any new external outputs can only be paid for by new external inputs.

The host signals that a transaction is a replacement transaction by setting the orig_hash and orig_index fields for at least one TxInput. Trezor will then automatically request metadata about the original transaction and verify the original signatures.

A replacement transaction in Trezor must satisfy the following requirements:

  • All inputs of the original transactions must be inputs of the replacement transaction.
  • All external outputs of the original transactions must be outputs of the replacement transaction and none of their output amounts may be decreased.
  • The replacement transaction must not increase the amount that the user is spending on external outputs.
  • Original transactions must have the same effective nLockTime as the replacement transaction.
  • The inputs and outputs of the original transactions must not be permuted in the replacement transaction, but they can be interleaved with new inputs or with inputs from another original transaction.
  • New OP_RETURN outputs cannot be added in the replacement transaction.

So the replacement transaction is, for example, allowed to:

  • Increase the user's contribution to the mining fee by adding new inputs or decreasing or removing change outputs.
  • Decrease the user's contribution to the mining fee by increasing or adding change outputs.
  • Add external inputs (PayJoin) and use them to introduce new outputs, increase the original external outputs or even to increase the user's change outputs so as to decrease the amount that the user is spending.

Implementation notes


The following is a rough outline of host-side implementation. See above for detailed info.

transaction_bytes = b""
signatures = [b""] * len(INPUTS)

def sign_tx():
    # send initial message
    send_message(SignTx(
        coin_name=COIN_NAME,
        inputs_count=len(INPUTS),
        outputs_count=len(OUTPUTS),
        # ...fill individual metadata fields
    ))

    # wait for TxRequest forever, until Trezor indicates we are finished
    while True:
        msg = receive_message()

        # extract serialized data and signatures first
        extract_streamed_data(msg.serialized)

        if msg.request_type == TXFINISHED:
            # we are done
            break

        if msg.details.tx_hash is not None:
            # Trezor requires data about some previous transaction
            send_response_prev(msg.request_type, msg.details)
        else:
            # Trezor requires data about this transaction
            send_response_current(msg.request_type, msg.details)

def extract_streamed_data(ser: TxRequestSerializedType):
    global transaction_bytes, signatures
    # append serialized data to what we got so far
    transaction_bytes += ser.serialized_tx
    if ser.signature_index is not None:
        # read the signature
        signatures[ser.signature_index] = ser.signature

# the send_* and get_prev_tx helpers below stand in for host-specific code

def send_response_prev(request_type: RequestType, details: TxRequestDetailsType):
    prev_tx = get_prev_tx(details.tx_hash)
    if request_type == TXINPUT:
        send_prev_input(prev_tx.inputs[details.request_index])
    elif request_type == TXOUTPUT:
        send_prev_output(prev_tx.bin_outputs[details.request_index])
    elif request_type == TXMETA:
        send_prev_metadata(prev_tx)
    elif request_type == TXEXTRADATA:
        offset = details.extra_data_offset
        length = details.extra_data_length
        send_prev_extra_data(prev_tx.extra_data[offset : offset + length])

def send_response_current(request_type: RequestType, details: TxRequestDetailsType):
    if request_type == TXINPUT:
        send_input(INPUTS[details.request_index])
    elif request_type == TXOUTPUT:
        send_output(OUTPUTS[details.request_index])

Wire compatibility

The new definitions are structured so that the Protobuf binary encoded form can be decoded into both representations. This means that the host can encode data in the old representation, and Trezor will successfully and correctly decode it into the new one.

This is done by reusing field IDs as appropriate, and taking advantage of the fact that Protobuf encodes arrays as a sequence of the same field repeated a number of times.

For example, here is a part of the TxAck definition:

message TxAck {
    optional TransactionType tx = 1;

    message TransactionType {
        // ... some fields omitted ...
        repeated TxInputType inputs = 2;
        // ... some fields omitted ...

        message TxInputType {
            repeated uint32 address_n = 1;
            // ... some fields omitted ...
            optional uint64 amount = 8;
            // ... some fields omitted ...

A message carrying these fields would look like this:

FIELD 1 (type NESTED):
    FIELD 2 (type NESTED):
        FIELD 1 (type int): 0x8000002c
        FIELD 1 (type int): 0x80000000
        FIELD 1 (type int): 0x80000000
        FIELD 1 (type int): 0
        FIELD 1 (type int): 0
        FIELD 8 (type int): 1234567

We can see that this is identical to the encoding we would get if the type definition looked as follows; indeed, we only renamed the types, removed some fields, and changed some fields from optional or repeated to required.

message TxAckInput {
    required TxAckInputWrapper tx = 1;

    message TxAckInputWrapper {
        required TxInput input = 2; // the field is now required instead of repeated

        message TxInput {
            repeated uint32 address_n = 1;
            required uint64 amount = 8;

A caveat of this approach is that this introduces invisible dependencies: TxInput and PrevInput fold into the same old-style TxInputType, so adding new fields must be done carefully.

We expect to gradually deprecate the TransactionType. At that point, the new-style types will be fully independent.
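The wire-compatibility argument can be checked concretely with a minimal hand-rolled Protobuf encoder. This is a sketch only (it is not trezorlib or any real API); it reproduces the byte dump shown above and illustrates why one element of a repeated field encodes identically to a required field.

```python
# A minimal hand-rolled Protobuf encoder, illustrating why the old and new
# TxAck schemas are wire-compatible: both produce the same byte stream.

def encode_varint(value: int) -> bytes:
    # Protobuf base-128 varint, least significant group first.
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_field(number: int, wire_type: int, payload: bytes) -> bytes:
    # Tag: (field_number << 3) | wire_type; 0 = varint, 2 = length-delimited.
    return encode_varint((number << 3) | wire_type) + payload

# TxInputType / TxInput body: five address_n components (field 1, varint)
# followed by amount (field 8, varint), as in the dump above.
address_n = [0x8000002C, 0x80000000, 0x80000000, 0, 0]
input_body = b"".join(encode_field(1, 0, encode_varint(n)) for n in address_n)
input_body += encode_field(8, 0, encode_varint(1234567))

# Old style: TransactionType.inputs is `repeated` and carries one element;
# new style: TxAckInputWrapper.input is `required`. Both emit exactly one
# field 2, so the resulting bytes are identical.
wrapper = encode_field(2, 2, encode_varint(len(input_body)) + input_body)
message = encode_field(1, 2, encode_varint(len(wrapper)) + wrapper)
```

Because the array is just the same field repeated, the one-element old-style encoding and the required new-style encoding are byte-for-byte the same.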

Trezor Storage

The storage folder contains the implementation of Trezor's internal storage, which is common for both Legacy (Trezor One) and Core (Trezor T). This README also contains a detailed description of the cryptographic design.

All tests are located in the tests subdirectory, which also contains a Python implementation of the storage, used to run the tests against both this production C version and the Python one.


The PIN is no longer stored in the flash storage. A new entry is added to the flash storage consisting of a 256-bit encrypted data encryption key (EDEK) followed by a 128-bit encrypted storage authentication key (ESAK) and a 64-bit PIN verification code (PVC). The PIN is used to decrypt the EDEK and ESAK and the PVC is used to verify that the correct PIN was used. The resulting data encryption key (DEK) is then used to encrypt/decrypt protected entries in the flash storage. We use Chacha20Poly1305 as defined in RFC 7539 to encrypt the EDEK and the protected entries. The storage authentication key (SAK) is used to authenticate the list of (APP, KEY) values for all protected entries that have been set in the storage. This prevents an attacker from erasing or adding entries to the storage.

Storage format

Entries fall into three categories:

Category    APP value          Read access          Write access
Private     APP = 0            Never                Never
Protected   1 ≤ APP ≤ 127      Only when unlocked   Only when unlocked
Public      128 ≤ APP ≤ 255    Always               Only when unlocked

The format of public entries has remained unchanged, that is:

Field            KEY   APP   LEN   DATA
Length (bytes)   1     1     2     LEN

Private values are used to store storage-specific information and cannot be directly accessed through the storage interface. Protected entries have the following new format:

Field            KEY   APP   LEN   IV    TAG   ENCRDATA
Length (bytes)   1     1     2     12    16    LEN - 28

The LEN value thus indicates the total length of IV, TAG and ENCRDATA.

The random salt (32 bits), EDEK (256 bits), ESAK (128 bits) and PVC (64 bits) is stored in a single entry under APP=0, KEY=2:

Field            KEY   APP   LEN     SALT   EDEK   ESAK   PVC
Length (bytes)   1     1     2       4      32     16     8
Value            02    00    3C 00

The storage authentication tag (128 bits) is stored in a single entry under APP=0, KEY=5:

Field            KEY   APP   LEN     TAG
Length (bytes)   1     1     2       16
Value            05    00    20 00

Furthermore, if any entry is overwritten, the old entry is erased, i.e., overwritten with 0. We are also using APP=0, KEY=0 as a marker that the entry is erased (this was formerly used for the PIN entry, which is no longer needed).
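For illustration, the public entry layout from the first table can be sketched as follows. The helper name public_entry is hypothetical and not part of the storage API; the little-endian encoding of LEN is an assumption consistent with the "3C 00" (= 60) example above.

```python
import struct

def public_entry(app: int, key: int, data: bytes) -> bytes:
    # Sketch of the public entry layout from the table above:
    # KEY (1 byte), APP (1 byte), LEN (2 bytes), DATA (LEN bytes).
    # LEN is assumed little endian, matching the "3C 00" == 60 example.
    assert 128 <= app <= 255, "public entries use 128 <= APP <= 255"
    return struct.pack("<BBH", key, app, len(data)) + data
```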

PIN verification and decryption of protected entries in flash storage

  1. From the flash storage read the entry containing the random salt, EDEK and PVC.

  2. Gather constant data from various system resources such as the ProcessorID (aka Unique device ID) and any hardware serial numbers that are available. The concatenation of this data with the random salt will be referred to as salt.

  3. Prompt the user to enter the PIN. Prefix the entered PIN with a "1" digit in base 10 and convert the integer to 4 bytes in little endian byte order. Then compute:

    PBKDF2(PRF = HMAC-SHA256, Password = pin, Salt = salt, iterations = 10000, dkLen = 352 bits)

    The first 256 bits of the output will be used as the key encryption key (KEK) and the remaining 96 bits will be used as the key encryption initialization vector (KEIV).

    Note: Since two blocks of output need to be produced in PBKDF2 the total number of iterations is 20000.

  4. Compute:

    (dek, tag) = ChaCha20Poly1305Decrypt(kek, keiv, edek)

  5. Compare the PVC read from the flash storage with the first 64 bits of the computed tag value. If there is a mismatch, then fail. Otherwise store the DEK in a global variable.

  6. When a protected entry needs to be decrypted, load the IV, ENCRDATA and TAG of the entry and compute:

    (data, tag) = ChaCha20Poly1305Decrypt(dek, iv, (key || app), encrdata)

    where the APP and KEY of the entry are used as two bytes of associated data. Compare the TAG read from the flash storage with the computed tag value. If there is a mismatch, then fail.
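Step 3 of the list above can be sketched in Python using only the standard library. The helper name derive_kek_keiv is hypothetical, and salt stands for the device-specific concatenation described in step 2.

```python
import hashlib
import struct

def derive_kek_keiv(pin: str, salt: bytes):
    # Sketch of step 3: derive the key encryption key (KEK) and the
    # key encryption IV (KEIV) from the entered PIN.
    # Prefix the PIN with a "1" digit in base 10 and encode the integer
    # as 4 bytes in little endian byte order.
    password = struct.pack("<I", int("1" + pin))
    # 352 bits = 44 bytes of output; PBKDF2 internally produces two
    # SHA-256 blocks, hence 20000 iterations in total.
    dk = hashlib.pbkdf2_hmac("sha256", password, salt, 10000, dklen=44)
    return dk[:32], dk[32:]  # (KEK, KEIV)
```

Note that the "1" prefix makes the empty PIN (used before a PIN is set) encode as the integer 1 rather than an empty string.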


Initializing the EDEK

  1. When the storage is initialized, generate the 32 bit random salt and 256 bit DEK using a cryptographically secure random number generator.

  2. Set a boolean value in the storage denoting that the PIN has not been set. Use an empty PIN to derive the KEK and KEIV as described above.

  3. Encrypt the DEK using the derived KEK and KEIV:

    (edek, tag) = ChaCha20Poly1305Encrypt(kek, keiv, dek)

  4. Store the random salt, EDEK value and the first 64 bits of the tag as the PVC.

Setting a new PIN

  1. If the PIN has already been set, then prompt the user to enter the old PIN value, check the PVC and compute the DEK as described above in steps 1-4.

  2. Generate a new 32 bit random salt and prompt the user to enter the new PIN value. Use these values to derive the new KEK and KEIV as described above.

  3. Encrypt the DEK using the new KEK and KEIV:

    (edek, tag) = ChaCha20Poly1305Encrypt(kek, keiv, dek)

  4. Store the new EDEK value and the first 64 bits of the tag as the new PVC. This operation should be atomic, i.e. either both values should be stored or neither. Overwrite the old values of the EDEK and PVC with zeros.

Encryption of protected entries in flash storage

Whenever the value of an entry needs to be updated, a fresh IV is generated using a cryptographically secure random number generator and the data is encrypted as (encrdata, tag) = ChaCha20Poly1305Encrypt(dek, iv, (key || app), data).

Storage authentication

The storage authentication key (SAK) will be used to generate a storage authentication tag (SAT) for the list of all (APP, KEY) values of protected entries (1 ≤ APP ≤ 127) that have been set in the storage. The SAT will be checked during every get operation. When a new protected entry is added to the storage or when a protected entry is deleted from the storage, the value of the SAT will be updated. The value of the SAT is defined as the first 16 bytes of

HMAC-SHA-256(SAK, ⨁_i HMAC-SHA-256(SAK, KEY_i || APP_i))

where ⨁_i denotes the n-ary bitwise XOR operation over all i and KEY_i || APP_i is a two-byte encoding of the value of the i-th (APP, KEY) pair such that 1 ≤ APP ≤ 127.
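The SAT definition can be sketched as follows. The helper name storage_auth_tag is hypothetical; entries is any iterable of (APP, KEY) pairs.

```python
import hashlib
import hmac

def storage_auth_tag(sak: bytes, entries) -> bytes:
    # Sketch of the SAT: first 16 bytes of
    # HMAC-SHA-256(SAK, XOR_i HMAC-SHA-256(SAK, KEY_i || APP_i)).
    acc = bytes(32)
    for app, key in entries:
        if not 1 <= app <= 127:
            continue  # only protected entries are authenticated
        digest = hmac.new(sak, bytes([key, app]), hashlib.sha256).digest()
        acc = bytes(a ^ b for a, b in zip(acc, digest))
    return hmac.new(sak, acc, hashlib.sha256).digest()[:16]
```

Because XOR is commutative, the tag does not depend on the order in which the entries are enumerated.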

Design rationale

  • The purpose of the PBKDF2 function is to thwart brute-force attacks in case the attacker is able to circumvent the PIN entry counter mechanism but does not have full access to the contents of the flash storage of the device, e.g. fault injection attacks. For an attacker that would be able to read the flash storage and obtain the salt, the PBKDF2 with 20000 iterations and a 4- to 9-digit PIN would not pose an obstacle.

  • The reason why we use a separate data encryption key rather than using the output of PBKDF2 directly to encrypt the sensitive entries is so that when the user decides to change their PIN, only the EDEK needs to be reencrypted, but the remaining entries do not need to be updated.

  • We use ChaCha20 for encryption, because as a stream cipher it has no padding overhead and its implementation is readily available in trezor-crypto. A possible alternative to using ChaCha20Poly1305 for DEK encryption is to use AES-CTR with HMAC in an encrypt-then-MAC scheme. A possible alternative to using ChaCha20 for encryption of other data entries is to use AES-XTS (XEX-based tweaked-codebook mode with ciphertext stealing), which was designed specifically for disk-encryption. The APP || KEY value would be used as the tweak.

    • Advantages of AES-XTS:
      • Does not require an initialization vector.
      • Ensures better diffusion than a stream cipher, which eliminates the above concerns about malleability and fault injection attacks.
    • Disadvantages of AES-XTS:
      • Not implemented in trezor-crypto.
      • Requires two keys of length at least 128 bits.
  • A 32-bit PVC would be sufficient to verify the PIN value, since there would be less than a 1 in 4 chance that there exists a false PIN, which has the same PVC as the correct PIN. Nevertheless, we decided to go with a 64-bit PVC to achieve a larger security margin. The chance that there exists a false PIN, which has the same PVC as the correct PIN, then drops below 1 in 10^10. The existence of a false PIN does not appear to pose a security weakness, since the false PIN cannot be used to decrypt the protected entries.

  • Instead of using separate IVs for each entry we considered using a single IV for the entire sector. Upon sector compaction a new IV would have to be generated and the encrypted data would have to be reencrypted under the new IV. A possible issue with this approach is that compaction cannot happen without the DEK, i.e. generally data could not be written to the flash storage without knowing the PIN. This property might not always be desirable.

New measures for PIN entry counter protection

The former implementation of the PIN entry counter was vulnerable to fault injection attacks.

Under the former implementation the PIN counter storage entry consisted of 32 words initialized to 0xFFFFFFFF. The first non-zero word in this area was the current PIN failure counter. Before verifying the PIN the lowest bit with value 1 was set to 0, i.e. a value of FFFFFFFC indicated two PIN entries. Upon successful PIN entry, the word was set to 0x00000000, indicating that the next word was the PIN failure counter. Allegedly, by manipulating the voltage on the USB input an attacker could convince the device to read the PIN entry counter as 0xFFFFFFFF even if some of the bits had been set to 0.

Design goals

  • Make it easy to decrement the counter by changing a 1 bit to 0.
  • Make it hard to reset the counter by a fault injection, i.e. counter values should not have an overly simple binary representation like 0xFFFFFFFF.
  • If possible, use two or more different methods of checking the counter value so that an attacker has to mount different fault injection attacks to succeed.
  • Optimize the format for successful PIN entry.
  • Minimize the number of branching operations. Avoid loops, instead utilize bitwise and arithmetic operations when processing the PIN counter data.

Proposal summary

Under the former implementation, for every unsuccessful PIN entry we discarded one bit from the counter, while for every successful PIN entry we discarded an entire word. In the new implementation we optimize the counter operations for successful PIN entry.

The basic idea is that there are two binary logs stored in the flash storage, e.g.:

...0001111111111111... pin_success_log
...0000001111111111... pin_entry_log

Before every PIN verification the highest 1-bit in the pin_entry_log is set to 0. If the verification succeeds, then the corresponding bit in the pin_success_log is also set to 0. The example above shows the status of the logs when the last three PIN entries were not successful.

In actual fact the logs are not written to the flash storage exactly as shown above, but in a form that should protect them against fault injection attacks. Only half of the stored bits carry information, the other half act as "guard bits". So a stored value ...001110... could look like ...0g0gg1g11g0g..., where g denotes a guard bit. The positions and the values of the guard bits are determined by a guard key.

The guard_key is a randomly generated uint32 value stored as an entry in the flash memory in cleartext. The assumption behind this is that an attacker attempting to reset or decrement the PIN counter by a fault injection is not able to read the flash storage.

However, the value of guard_key also needs to be protected against fault injection, so the set of valid guard_key values should be limited by some condition which is easy to verify, such as guard_key mod M == C, where M and C are suitably chosen constants. The constants should be chosen so that the binary representation of any valid guard_key value has a Hamming weight between 8 and 24. These conditions are discussed below.

Storage format

The PIN log has APP = 0 and KEY = 1. The DATA part of the entry consists of 33 words (132 bytes, assuming 32-bit words):

  • guard_key (1 word)
  • pin_success_log (16 words)
  • pin_entry_log (16 words)

Each log is stored in big-endian word order. The byte order of each word is platform dependent.

Guard key validation

The guard_key is said to be valid if the following three conditions hold true:

  1. Each byte of the binary representation of the guard_key has a balanced number of zeros and ones at the positions corresponding to the guard values (that is those bits in the mask 0xAAAAAAAA).
  2. The guard_key binary representation does not contain a run of 5 (or more) zeros or ones.
  3. The guard_key integer representation is congruent to 15 modulo 6311.

Key validity can be checked with this function:

int key_validity(uint32_t guard_key) {
  uint32_t count = (guard_key & 0x22222222) + ((guard_key >> 2) & 0x22222222);
  count = count + (count >> 4);

  uint32_t zero_runs = ~guard_key;
  zero_runs = zero_runs & (zero_runs >> 2);
  zero_runs = zero_runs & (zero_runs >> 1);
  zero_runs = zero_runs & (zero_runs >> 1);
  uint32_t one_runs = guard_key;
  one_runs = one_runs & (one_runs >> 2);
  one_runs = one_runs & (one_runs >> 1);
  one_runs = one_runs & (one_runs >> 1);

  return ((count & 0x0e0e0e0e) == 0x04040404) & (one_runs == 0) &
         (zero_runs == 0) & (guard_key % 6311 == 15);
}

Key generation

The guard_key may be generated in the following way:

  1. Generate a random integer r such that 0 ≤ r ≤ 680552 with uniform probability.
  2. Set r = r * 6311 + 15.
  3. If key_validity(r) is not true, go back to step 1.

Note that on average steps 1 to 3 are repeated about one hundred times.
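For reference, the generation loop can be sketched in Python as a direct port of the C validity check above (the C version is authoritative; note the explicit 32-bit mask needed for Python's unbounded integers):

```python
import random

def key_validity(guard_key: int) -> bool:
    # Python port of the C key_validity() above (sketch).
    count = (guard_key & 0x22222222) + ((guard_key >> 2) & 0x22222222)
    count = count + (count >> 4)

    zero_runs = ~guard_key & 0xFFFFFFFF  # emulate uint32_t complement
    zero_runs &= zero_runs >> 2
    zero_runs &= zero_runs >> 1
    zero_runs &= zero_runs >> 1
    one_runs = guard_key
    one_runs &= one_runs >> 2
    one_runs &= one_runs >> 1
    one_runs &= one_runs >> 1

    return ((count & 0x0E0E0E0E) == 0x04040404 and one_runs == 0
            and zero_runs == 0 and guard_key % 6311 == 15)

def generate_guard_key() -> int:
    # Steps 1-3 above.
    while True:
        r = random.randrange(680553)  # 0 <= r <= 680552
        r = r * 6311 + 15
        if key_validity(r):
            return r
```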

Key expansion

The guard_key is read from storage, its value is checked for validity and used to compute the guard_mask (indicating the positions of the guard bits) and guard value (indicating the values of the guard bits on their actual positions):

LOW_MASK = 0x55555555
guard_mask = ((guard_key & LOW_MASK) << 1) |
             ((~guard_key) & LOW_MASK)
guard = (((guard_key & LOW_MASK) << 1) & guard_key) |
        (((~guard_key) & LOW_MASK) & (guard_key >> 1))


The guard_key contains two pieces of information: the positions of the guard bits and their corresponding values. The bitwise format of the guard_key is vpvpvp...vp. The bits labelled p indicate the position of each guard bit and the bits labelled v indicate its value.

The guard_mask is derived from the guard_key and has the form xyxyxy...xy where x+y = 1 (in other words, there is exactly one 1 bit in each pair xy). First, we set the x bits:

(guard_key & LOW_MASK) << 1

and the y bits to its corresponding complement:

(~guard_key) & LOW_MASK

That ensures that only one 1 bit is present in each pair xy. The guard value is equal to the bits labelled v in the guard_key but only at the positions indicated by the guard_mask. The guard value is therefore equal to:

        -------- x bits mask --------- & -- guard_key --
guard = (((guard_key & LOW_MASK) << 1) & guard_key) |
        ----- y bits mask ---- & - guard_key shifted to v bits
        (((~guard_key) & LOW_MASK) & (guard_key >> 1))

Log initialization

Each log is stored as 16 consecutive words each initialized to:

guard | ~guard_mask

Removing and adding guard bits

After reading a word from the flash storage we verify the format by checking the condition:

(word & guard_mask) == guard

and then remove the guard bits as follows:

word = word & ~guard_mask
word = ((word  >> 1) | word ) & LOW_MASK
word = word | (word << 1)

This operation replaces each guard bit with the value of its neighbouring bit, e.g. …0g0gg1g11g0g… is converted to …000011111100… Thus each non-guard bit is duplicated.

The guard bits can be added back as follows:

word = (word & ~guard_mask) | guard
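The expansion and guard-bit operations above can be exercised with a short sketch. The helper names are hypothetical; any 32-bit guard_key works for the bit manipulation itself (validity is a separate check).

```python
LOW_MASK = 0x55555555
MASK32 = 0xFFFFFFFF

def expand_guard_key(guard_key: int):
    # Key expansion as defined above: positions and values of guard bits.
    guard_mask = ((((guard_key & LOW_MASK) << 1) |
                   (~guard_key & LOW_MASK)) & MASK32)
    guard = (((((guard_key & LOW_MASK) << 1) & guard_key) |
              ((~guard_key & LOW_MASK) & (guard_key >> 1))) & MASK32)
    return guard_mask, guard

def remove_guard_bits(word: int, guard_mask: int) -> int:
    # Replace each guard bit with the value of its neighbouring bit,
    # so each non-guard bit ends up duplicated.
    word &= ~guard_mask & MASK32
    word = ((word >> 1) | word) & LOW_MASK
    return (word | (word << 1)) & MASK32

def add_guard_bits(word: int, guard_mask: int, guard: int) -> int:
    return ((word & ~guard_mask) | guard) & MASK32
```

A freshly initialized log word (guard | ~guard_mask) passes the format check and de-guards to all ones, since every non-guard bit starts out set.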

Determining the number of PIN failures

Remove the guard bits from the words of the pin_entry_log using the operations described above and verify that the result has the form 0*1* by checking the condition:

word & (word + 1) == 0

Then verify that the pin_entry_log and pin_success_log are in sync by checking the condition:

pin_entry_log & pin_success_log == pin_entry_log

Finally, determine the current number of PIN failures by counting the number of set bits in the evaluation of the following expression:

pin_success_log xor pin_entry_log

Note that the number of set bits in a word can be counted using bitwise and arithmetic operations. For a 32-bit word the following can be used:

count = word - ((word >> 1) & 0x55555555)
count = (count & 0x33333333) + ((count >> 2) & 0x33333333)
count = (count + (count >> 4)) & 0x0F0F0F0F
count = count + (count >> 8)
count = (count + (count >> 16)) & 0x3F
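
Transcribed to Python, the same population count can be cross-checked against bin(word).count("1"); the number of PIN failures is then popcount32(pin_success_log ^ pin_entry_log):

```python
def popcount32(word: int) -> int:
    """Count the set bits in a 32-bit word using only bitwise/arithmetic ops."""
    count = word - ((word >> 1) & 0x55555555)                 # per-pair counts
    count = (count & 0x33333333) + ((count >> 2) & 0x33333333)  # per-nibble counts
    count = (count + (count >> 4)) & 0x0F0F0F0F               # per-byte counts
    count = count + (count >> 8)
    count = (count + (count >> 16)) & 0x3F                    # final sum (max 32)
    return count
```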


Burn tests

These tests perform simple read/write operations on the device to see whether the hardware can endure a high number of flash writes. They are meant to be run directly on the device for a long period of time.

Device tests

Device tests are integration tests that can be run against either the emulator or an actual device. You are responsible for providing either an emulator or a device with debug mode enabled.

Device tests

The original version of the device tests. These tests can be run against both Model One and Model T.

See for instructions on how to run them.

UI tests

UI tests use device tests and take screenshots of every screen change, comparing them against fixtures. Currently for Model T only.

See for more info.

Click tests

Click tests are the next generation of the device tests. The tests are quite similar, but they are capable of imitating the user's interaction with the screen.

Fido tests

Implement U2F/FIDO2 tests.

Upgrade tests

These tests exercise the upgrade from one firmware version to another. They initialize an emulator on a specific version and then pass its storage to another version to see if the firmware operates as expected. They use fixtures which can be downloaded using the script.

See the instructions on how to run them.

Persistence tests

These tests exercise the persistence mode, which is currently used in device recovery. They launch the emulator themselves and are capable of restarting or stopping it, simulating the user plugging the device in or out.

Running device tests

1. Running the full test suite

Note: You need Poetry, as mentioned in the core's documentation section.

In the trezor-firmware checkout, in the root of the monorepo, install the environment:

poetry install

And run the automated tests:

poetry run make -C core test_emu

2. Running tests manually

Install the poetry environment as outlined above. Then switch to a shell inside the environment:

poetry shell

If you want to test against the emulator, run it in a separate terminal:


Now you can run the test suite with pytest from the root directory:

pytest tests/device_tests

Useful Tips

The tests are randomized using the pytest-random-order plugin. The random seed is printed in the header of the tests output, in case you need to run the tests in the same order.

If you only want to run a particular test, pick it with -k <keyword> or -m <marker>:

pytest -k nem      # only runs tests that have "nem" in the name
pytest -m stellar  # only runs tests marked with @pytest.mark.stellar

If you want to see debugging information and protocol dumps, run with -v.

If you would like to interact with the device (i.e. press the buttons yourself), just prefix pytest with INTERACT=1:

INTERACT=1 pytest tests/device_tests

3. Using markers

When you're developing a new currency, you should mark all tests that belong to that currency. For example, if your currency is called NewCoin, your device tests should have the following marker:

@pytest.mark.newcoin


This marker must be registered in the REGISTERED_MARKERS file.

If you wish to run a test only on TT, mark it with @pytest.mark.skip_t1. If the test should only run on T1, mark it with @pytest.mark.skip_t2. You must not use both on the same test.

Extended testing and debugging

Building for debugging (Emulator only)

Build the debuggable unix binary so you can attach gdb or lldb. This removes optimizations and reduces address space randomization.

make build_unix_debug

The final executable is significantly slower due to ASAN (Address Sanitizer) integration. If you want to catch memory errors, use this:

time ASAN_OPTIONS=verbosity=1:detect_invalid_pointer_pairs=1:strict_init_order=true:strict_string_checks=true TREZOR_PROFILE="" poetry run make test_emu

Coverage (Emulator only)

Get the Python code coverage report.

If you want to get HTML/console summary output you need to install the coverage tool.

pip3 install coverage

Run the tests with coverage output.

make build_unix && make coverage

Running Upgrade Tests

  1. As always, use the poetry environment:
poetry shell
  2. Download the emulators, if you have not already.
  3. Run the tests using pytest:
pytest tests/upgrade_tests

You can use TREZOR_UPGRADE_TEST environment variable if you would like to run core or legacy upgrade tests exclusively. This will run core only:

TREZOR_UPGRADE_TEST="core" pytest tests/upgrade_tests

Running UI tests

1. Running the full test suite

Note: You need Poetry, as mentioned in the core's documentation section.

In the trezor-firmware checkout, in the root of the monorepo, install the environment:

poetry install

And run the tests:

poetry run make -C core test_emu_ui

2. Running tests manually

Install the poetry environment as outlined above. Then switch to a shell inside the environment:

poetry shell

If you want to test against the emulator, run it in a separate terminal:


Now you can run the test suite with pytest from the root directory:

pytest tests/device_tests --ui=test

If you wish to check that all test cases in fixtures.json were used, set the --ui-check-missing flag. Of course, this is meaningful only if you run the tests on the whole device_tests folder.

pytest tests/device_tests --ui=test --ui-check-missing

You can also skip tests marked as skip_ui.

pytest tests/device_tests --ui=test -m "not skip_ui"

Updating Fixtures ("Recording")

Short version:

poetry run make -C core test_emu_ui_record

Long version:

The --ui pytest argument has two options:

  • record: Create screenshots and calculate their hash for each test. The screenshots are gitignored, but the hash is included in git.
  • test: Create screenshots, calculate their hash and test the hash against the one stored in git.

If you want to make a change in the UI, you simply run --ui=record. An easy way to proceed is to run --ui=test first, see which tests fail (see the Reports section below), decide whether those changes are the ones you expected, and then finally run --ui=record and commit the new hashes.

Here we also provide an option to check the fixtures.json file. Use the --ui-check-missing flag again to make sure there are no extra fixtures in the file:

pytest tests/device_tests --ui=record --ui-check-missing



Each --ui=test run creates a clear report showing which tests passed and which failed. The index file is stored in tests/ui_tests/reporting/reports/test/index.html, but for ease of use you will find a link at the end of the pytest summary.

On CI this report is published as an artifact. You can see the latest master report here.

Master diff

In the ui tests folder you will also find a Python script which creates a report showing which tests were altered, added, or removed relative to master. This is useful for Pull Requests.

This report is available as an artifact on CI as well. You can find it by visiting the "core unix ui changes" job in your pipeline - browse the artifacts and open master_diff/index.html.

Click Tests

This set of tests is intended for cases where USB communication must be decoupled from the input stream. They are mainly based on sending simulated clicks and reading screen contents. Unlike device tests, which use the client fixture, click tests generally use the device_handler fixture. (TODO: fixture documentation.) The important point is that device_handler runs trezorlib calls in the background and leaves the main thread free to interact with the device from the user's perspective.

Running the full test suite

Note: You need Poetry, as mentioned in the core's documentation section.

In the trezor-firmware checkout, in the root of the monorepo, install the environment:

poetry install

Switch to a shell inside the environment:

poetry shell

If you want to test against the emulator, run it in a separate terminal:


Now you can run the test suite with pytest from the root directory:

pytest tests/click_tests

Click test recorder

The repository now includes a tool for automatically generating testcases from user interaction. The resulting test cases must still be tweaked manually, but they can provide a solid starting point for a complex interaction pattern.

Caveat: The testcase recorder is in alpha-stage, both in terms of functionality and code quality. Your mileage may vary.

Run the tool with:

python tests/click_tests/

The tool accepts the same arguments as trezorctl. For example, to record yourself getting an address, use:

python tests/click_tests/ btc get-address -n m/44h/0h/0h/0/0 -d

Instead of clicking buttons on the emulator, type commands in the terminal that ran the tool. A list of possible button clicks will be shown in your terminal. These will be sent to the emulator over debuglink.

(Note that if a particular click does not react through the tool, there is a good chance that it won't work in the testcase either. Please file an issue.)

After the session is over (when you type stop), the tool will collect all layout changes and output a testcase in pytest format. Copy-paste that into your test file and tweak as appropriate.


The complete test suite runs on a public GitLab CI. If you are an external contributor, we also have a Travis instance where a small subset of tests runs as well - mostly style and other quick checks, which commonly fail for new contributors.

See this list of CI jobs descriptions for more info.

The ci folder contains all the .yml GitLab files that are included in the main .gitlab-ci.yml to provide some basic structure. All GitLab CI jobs run inside a docker image, which is built using the Dockerfile present in this folder. This image is stored in the GitLab registry.

List of GitLab CI Jobs



The environment job builds ci/Dockerfile and pushes the built docker image into our GitLab registry. Since modifications of this Dockerfile are very rare, this is a manual job which needs to be triggered on GitLab.

Almost all CI jobs run inside this docker image.


All builds are published as artifacts so you can download and use them.

core fw btconly build

Build of Core into firmware. Bitcoin-only version.

core fw regular build

Build of Core into firmware. Regular version. Are you looking for Trezor T firmware build? This is most likely it.

core fw regular debug build

Build of Core into firmware with enabled debug mode. In debug mode you can upload mnemonic seed, use debug link etc. which enables device tests. Storage on the device gets wiped on every start in this firmware.

core unix frozen btconly debug build

Build of Core into UNIX emulator. Something you can run on your laptop.

Frozen version. That means you do not need any other files to run it, it is just a single binary file that you can execute directly.

See Emulator for more info.

Debug mode enabled, Bitcoin-only version.

core unix frozen debug build

Same as above but regular version (not only Bitcoin). Are you looking for a Trezor T emulator? This is most likely it.

core unix frozen regular build

Same as above but regular version (not only Bitcoin) without debug mode enabled.

core unix frozen regular darwin

Same as above, but for macOS.

core unix regular build

Non-frozen emulator build. This means you still need Python files present which get interpreted.

crypto build

Build of our cryptographic library, which is then incorporated into the other builds.

legacy emu btconly build

Build of Legacy into UNIX emulator. Use keyboard arrows to emulate button presses.

Bitcoin-only version.

legacy emu regular build

Regular version (not only Bitcoin) of above. Are you looking for a Trezor One emulator? This is most likely it.

legacy fw btconly build

Build of Legacy into firmware. Bitcoin only.

legacy fw debug build

Build of Legacy into firmware. Debug mode on. Storage on the device gets wiped on every start in this firmware.

legacy fw regular build

Build of Legacy into firmware. Regular version. Are you looking for Trezor One firmware build? This is most likely it.


core device ui test

UI tests for Core. See artifacts for a comprehensive report of UI. See tests/ui-tests for more info.

TODO: document others if needed

Assorted knowledge

This file serves as a dumping ground for important knowledge tidbits that do not clearly fit in any particular location. Please add any information that you think should be written down.

At any time, information stored here might be restructured or moved to a different location, so as to ensure that the documentation is well structured overall.

List of third parties

These need to be notified when a protocol-breaking change occurs.

Using trezorlib:

This usually requires some code changes in the affected software.

  • Electrum
  • HWI
  • Trezor Agent
  • Shadowlands

Using HWI

Updating HWI to the latest version should be enough.

  • BTCPay
  • Wasabi

Using no Trezor libraries

  • Monero
  • Mycelium Android
  • Mycelium iOS
  • Blockstream Green Android
  • Blockstream Green iOS

Using Connect:

See for a full list of projects depending on Connect.

Connect dependencies introduction

Javascript projects that have Connect as a dependency use the Connect NPM package at the version specified in their yarn.lock (or similar). This NPM package is not the complete Connect library; it is a simple layer that deals with opening an iframe and loading the newest Connect from

Such projects must use the newest MAJOR version of this NPM package (v8 at the moment). But the main logic library (dealing with devices etc.) is then fetched from and is therefore under our control and can be updated easily.

So in a nutshell:

  • If there is a new MAJOR version of Connect, we indeed want to notify the parties below.
  • In other cases we do not; we just need to deploy the updated Connect before releasing firmware.

Notable third-parties

  • Trezor Password Manager
  • Exodus (closed source)
  • MagnumWallet (closed source)
  • CoinMate (closed source)
  • MyEtherWallet
  • MyCrypto
  • MetaMask
  • SimpleStaking
  • AdaLite
  • Stellarterm
  • frame
  • lisk-desktop
  • Liskish Wallet
  • web3-react
  • KyberSwap
  • Balance Manager

BIP-44 derivation paths

Each coin uses the BIP-44 derivation path scheme. If the coin is UTXO-based, the path should have all five parts, precisely as defined in BIP-44. If it is account-based, we follow Stellar's SEP-0005 - paths have only three parts: 44'/c'/a'. Unfortunately, a lot of exceptions occur due to compatibility reasons.

Keys are derived according to SLIP-10, which is a superset of the BIP-32 derivation algorithm, extended to work on other curves.

List of used derivation paths

coin | curve | path | public node | note

c stands for the SLIP-44 id of the currency when multiple currencies are handled by the same code. a is an account number, y is the change address indicator (must be 0 or 1), and i is the address index.

Paths that do not conform to this table are allowed, but the user needs to confirm a warning on Trezor.

Public nodes

Some currencies allow exporting a public node, which lets the client derive all non-hardened paths below it. In that case, the conforming path is equal to the hardened prefix.

I.e., for Bitcoin's path 44'/c'/a'/y/i, the allowed public node path is 44'/c'/a'.

Trezor does not check if the path is followed by other non-hardened items (anyone can derive those anyway). This is beneficial for Ethereum and its MEW compatibility, which sends 44'/60'/0'/0 for getPublicKey.
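
As a sketch of the prefix rule above, the following hypothetical helper (not firmware code) checks that a requested path equals the hardened prefix of a conforming path, optionally followed by non-hardened components:

```python
HARDENED = 0x8000_0000  # BIP-32 hardened-index flag

def is_valid_public_node_path(path: list[int], conforming: list[int]) -> bool:
    """A public-node path must equal the hardened prefix of the conforming path,
    optionally followed by extra non-hardened components."""
    hardened_prefix = [i for i in conforming if i & HARDENED]
    if path[:len(hardened_prefix)] != hardened_prefix:
        return False
    # anything past the hardened prefix must be non-hardened
    return all(not (i & HARDENED) for i in path[len(hardened_prefix):])
```

For example, with the Bitcoin conforming path 44'/c'/a'/y/i, both 44'/c'/a' and MEW's 44'/60'/0'/0 pass this check.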


  1. For Bitcoin and its derivatives it is a little bit more complicated. p is decided based on the following table:

    p  | type            | input script type
    48 | legacy multisig | SPENDMULTISIG
    49 | p2sh segwit     | SPENDP2SHWITNESS
    84 | native segwit   | SPENDWITNESS

    Other values of p are disallowed.

  2. We believe this should be 44'/c'/a', because Ethereum is account-based rather than UTXO-based. Unfortunately, a lot of Ethereum tools (MEW, Metamask) do not use such a scheme; they set a = 0 and then iterate the address index i. Therefore, for compatibility reasons, we use the same scheme.

  3. Similar to Ethereum this should be 44'/c'/a'. But for compatibility with other HW vendors we use 44'/c'/a'/0/0.

  4. Cardano has a custom derivation algorithm that allows non-hardened derivation on ed25519.

  5. NEM's path should be 44'/43'/a' as per SEP-0005, but we allow 44'/43'/a'/0'/0' as well for compatibility reasons with NanoWallet.

  6. Tezos supports multiple curves, but Trezor currently supports ed25519 only.

Sign message paths are validated in the same way as the sign tx paths are.

Allowed values

For UTXO-based currencies, the account number a needs to be in the interval [0, 20] and the address index i in the interval [0, 1 000 000].

For account-based currencies (i.e., those that do not use address indexes), the account number a needs to be in the interval [0, 1 000 000].

Contribute to Trezor Firmware

Please read the general instructions you can find on our wiki.

Your Pull Request should follow these criteria:

  • The code is properly tested.
  • Tests must pass on CI.
  • The code is properly formatted. Use make style_check to check the format and make style to do the required changes.
  • The generated files are up-to-date. Use make gen in repository root to make it happen.
  • Commits must have concise commit messages, we endorse Conventional Commits.

Please read and follow our review procedure.

Firmware update and device wipe

This document describes under which circumstances the device gets wiped during a firmware update.

Trezor 1

The device gets wiped:

  • If the firmware to be installed is unsigned.
  • If the present firmware is unsigned.
  • If the firmware to be installed has a lower version than the current firmware's fix_version [1].

The device gets wiped on every reboot:

  • If the firmware's debug mode is turned on.

Trezor T

In Trezor T this works a bit differently; we have introduced so-called vendor headers. Each firmware has its vendor header, and this vendor header is signed by SatoshiLabs. The actual firmware is signed by the vendor header's key. That means that every firmware able to run on Trezor T is signed by someone.

We currently have two vendors:

  1. SatoshiLabs
  2. UNSAFE, DO NOT USE

As the names suggest, the first one is the official SatoshiLabs vendor header and all public firmwares are signed with it. The second one is meant for the generic audience; if you build firmware yourself, this vendor header is automatically applied and the firmware is signed with it (see tools/).

The device gets wiped:

  • If the firmware to be installed is from different vendor than the present firmware [2].
  • If the firmware to be installed has a lower version than the current firmware's fix_version [1].

The device gets wiped on every reboot:

  • If the firmware's debug mode is turned on.

[1] Firmware contains a fix_version, which is the lowest version to which that particular firmware can be downgraded without wiping storage. This is typically used in case the internal storage format is changed. For example, in version 2.2.0, we have introduced Wipe Code, which introduced some changes to storage that the older firmwares (e.g. 2.1.8) would not understand. It can also be used to enforce security fixes.

[2] The most common example is if you have a device with the official firmware (SatoshiLabs) and you install the unofficial (UNSIGNED) firmware -> the device gets wiped. Same thing vice versa.

Generated files

Certain files in the repository are auto-generated from other sources, but the generated content is stored in Git. The command make gen_check, run from CI, ensures that the generated content matches its sources. The command make gen regenerates all relevant files.

In general, generated files are not compatible between branches. After rebasing or merging a different branch, you should immediately run make gen and make sure the result is committed.

Do not fix merge conflicts in generated files. Instead, run make gen and commit the result.

The following is a (possibly incomplete) list of files regenerated by make gen:

  • core/mocks/generated: mock Python stubs for C modules (modtrezor*). Generated from special comments in embed/extmod/modtrezor*.
  •,, and in their respective subdirectories of core/src/apps. In general, any file matching *.py.mako has a corresponding *.py file generated from the Mako template. These files are based on coin data from common/defs.
  • Protobuf class definitions in core/src/trezor/messages and python/src/trezorlib/messages. Generated from common/protob/*.proto.

Git Hooks


Copy the docs/git/hooks/ files to .git/hooks/ to activate them. From the repository root, run:

cp docs/git/hooks/* .git/hooks/

Monorepo notes


Use the create_monorepo script to regenerate from current master(s).


This repository is the result of a Git merge of several unrelated histories, each of which was moved to its own subdirectory during the merge.

That means that this repository actually contains all the original repos at the same time. You can check out any historical commit hash or any historical tag.

All tags from the previous histories still exist, and in addition, each has a version namespaced by its directory. I.e., for trezor-mcu tag v1.6.3, you can also check out legacy/v1.6.3.

Merging pre-existing branches

Because the repository shares all the histories, merging a branch or PR can be done with a simple git merge. It is often necessary to give git hints by specifying a merge strategy, especially when some commits add new files.

Use the following options: -s subtree -X subtree=<destdir>.

Example for your local checkout:

$ git remote add core-local ~/git/trezor-core
$ git fetch core-local
$ git merge core-local/wip -s subtree -X subtree=core

Same options should be used for git rebase of a pre-existing branch.


The monorepo has two subdirectories that can be exported to separate repos:

  • common exports to
  • crypto exports to

These exports are managed with git-subrepo tool. To export all commits that touch one of these directories, run the following command:

$ git subrepo push <dirname>

You will need commit access to the respective GitHub repository.

For installation instructions and detailed usage info, refer to the git-subrepo README.

Sketch of further details:

What git-subrepo does under the hood is create and fetch a remote for the export, check out the parent revision, and replay all commits since that commit using something along the lines of git filter-branch --subdirectory-filter.

So basically a nicely tuned git-subtree.

This can all be done manually if need be (or if you need more advanced use cases like importing changes from the repo commit-by-commit, because git-subrepo will squash on import). See this nice article for hints.

Purpose48 derivation scheme

Per BIP-43, the first level of the derivation path is used as a "purpose". The purpose number is usually selected to match the BIP number: e.g., BIP-49 uses purpose 49'.

There is no officially proposed BIP-48 standard. Despite that, a de-facto standard for the purpose 48' exists in the wild and is implemented by several HD wallets, most notably Electrum. This standard was never before formally specified, and this document aims to rectify the situation.


Purpose48 is intended for multisig scenarios. It allows using multiple script types from a single logical account or root key, while keeping multisig keys separate from single-sig keys.


The following BIP-32 path levels are defined:

m / 48' / coin_type' / account' / script_type' / change / address_index

The meaning of all fields except script_type is defined in BIP-44.

script_type can have the following values:

  • 0: raw BIP-11 (p2ms) multisig
  • 1: p2sh-wrapped segwit multisig (p2wsh-p2sh)
  • 2: native segwit multisig (p2wsh)

The path derivation is hardened up to and including the script_type field.
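
For illustration, such a path can be assembled as raw BIP-32 child indices (hardened index = index + 0x80000000); the helper below is a hypothetical sketch, not part of any Trezor library:

```python
HARDENED = 0x8000_0000  # BIP-32 hardened-index flag

def bip48_path(coin_type: int, account: int, script_type: int,
               change: int, address_index: int) -> list[int]:
    """m / 48' / coin_type' / account' / script_type' / change / address_index"""
    assert script_type in (0, 1, 2)  # p2ms, p2wsh-p2sh, p2wsh
    return [48 | HARDENED, coin_type | HARDENED, account | HARDENED,
            script_type | HARDENED, change, address_index]
```

For example, bip48_path(0, 0, 2, 0, 0) corresponds to m/48'/0'/0'/2'/0/0, the first native-segwit multisig address of the first Bitcoin account.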

Trezor implementation

script_type value 0 corresponds to SPENDMULTISIG/PAYTOMULTISIG.




Electrum implementation:

Trezor implementation: TBD

Review Process

  • File a Pull Request (PR) with a number of well-defined clearly described commits. Multiple commits per PR are allowed, but please do not include revert commits, etc. Use rebase.
  • Do not use merge (e.g. merge trezor/master into ...). Again, use rebase.
  • The general review workflow goes as follows:
    1. The author creates a PR. They should make sure it passes lints and anything that can be run quickly on their computer. When creating the PR, the author should also add links to any resources which might be helpful to the reviewer, namely the issue that the PR resolves or a design document, if there is one.
    2. The author assigns a reviewer. In some cases two or more reviewers can be assigned. One reason to do this is if the code under review is security-critical and deserves two pairs of eyes. Another reason is if the PR touches two very distinct areas of the codebase and the author wants a different specialist to review each one. In the latter case the author should clearly state who should review which part.
    3. The reviewer reviews the PR. In case they find something, they create a comment using the Github review system.
    4. The author implements the required changes and pushes the new commits. The author should never force-push during code review. If an earlier commit needs to be fixed, then instead of fixing it and force-pushing, it should be fixed by adding a so called fixup commit. This can be done by git commit --fixup [commithash] which creates a new commit with the message "fixup! [orig_message]", where orig_message is the commit message of the commithash it "fixes". If the fixes are across multiple earlier commits, then they need to be split into multiple fixup commits.
    5. The author informs the reviewer with a simple comment "done" or similar to tell the reviewer their comment was implemented. Bonus points for including a revision of the fixup commit.
    6. The reviewer reviews the modifications and when they are finally satisfied they resolve the Github comment.
    7. The reviewer finally approves the PR with the green tick.
    8. The author runs git rebase -i [main branch] --autosquash which squashes the fixup commits into their respective parents and then force-pushes the rebase branch.
    9. The author makes a final check and merges the PR. If the rebase involved resolving some complicated merge conflicts, then the author may ask the reviewer for a final check.


If you find the description too difficult, then here is an example to make it more clear.

Andrew adds a number of commits, well structured and with nice and consistent commit messages. These will not be squashed together.

Matějčík starts to review and finds something he would like to improve:

Andrew responds with a commit hash 55d883b informing that he has accepted and implemented the comment.

This commit is a fixup commit. Since it is a new commit he does not have to force-push. In the following image he is fixing the "test: Add device tests..." commit.

This way we can end up with a number of fixup commits at the end of the review. Note that there is one commit in the following image that is not a fixup commit. That's totally okay if it makes sense and the author indeed wants it as a separate commit.

Matějčík is happy and approves the PR. After that Andrew squashes his commits via git rebase -i [main branch] --autosquash. This command will squash the fixup commits into their respective places modifying the original commits. After this he force-pushes. As you can see the history we end up with is very nice.

We merge the PR and that's it!

Notes & Rationale

  • If you want to fixup the latest commit, just use git commit --fixup HEAD as written above. If you want to fixup some older commit, use git commit --fixup [commithash]. Or any other reference for that matter.
  • More good git rebase tips can be found at this Atlassian website.
  • Some rationale why we avoid force pushing during code review, i.e. during the period starting with the creation of the PR until the last approval is given:
    1. Force pushing often makes it impossible to see the changes made by the author, so the reviewer has to go through the entire PR again. If it's just an amendment, then GitHub can show the differences, but in more complicated situations it's unable to untangle what happened. Especially if you rebase over master, which adds lots of new changes.
    2. A fixup commit can be easily referenced in the response to the reviewer's comment.
    3. Force pushing often breaks hyperlinks in GitHub comments, which is a real nuisance when somebody is referencing some code in their comment and you have no clue what they are talking about.
    4. It has led to code review comments being lost on multiple occasions. This seems to happen especially if you comment on a particular commit.
  • What to do if you really need to rebase over master during an ongoing code review? This happens rarely, but if it's really necessary in order to implement the requested revisions, then:
    1. Try to resolve as many reviewer comments as possible before rebasing.
    2. Ask the reviewers for approval to go ahead with the rebase, i.e. give them time to confirm that the comments have been well resolved, avoiding as many of the problems mentioned above as possible.
    3. Rebase, do the stuff you need, force-push for a second round of review.

Trezor Optimized Image Format (TOIF)

All multibyte integer values are little endian!


offset | length | name     | description
0x0000 | 3      | magic    | magic bytes TOI
0x0003 | 1      | fmt      | data format: f or g (see below)
0x0004 | 2      | width    | width of the image
0x0006 | 2      | height   | height of the image
0x0008 | 4      | datasize | length of the compressed data
0x000A | ?      | data     | compressed data (see below)


TOI currently supports 2 variants:

  • f: full-color
  • g: gray-scale


For each pixel a 16-bit value is used. The first 5 bits are used for the red component, the next 6 bits for green, and the final 5 bits for blue.
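
A minimal sketch of this 5-6-5 packing in Python (the helper name is illustrative); the multibyte value is serialized little-endian, per the note above:

```python
def pack_rgb565(r: int, g: int, b: int) -> bytes:
    """Pack 8-bit R, G, B components into a little-endian 16-bit 5-6-5 value."""
    value = ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
    return value.to_bytes(2, "little")
```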



Each pixel is encoded using a 4-bit value. Each byte contains the color of two pixels.


Here Po is the odd pixel and Pe is the even pixel.
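
A sketch of this packing in Python, assuming the odd pixel occupies the high nibble of each byte; the original layout diagram is not reproduced here, so treat the nibble order as an assumption:

```python
def pack_gray4(pixels: list[int]) -> bytes:
    """Pack 4-bit gray values two per byte (assumed: odd pixel in the high nibble)."""
    assert len(pixels) % 2 == 0
    out = bytearray()
    for even, odd in zip(pixels[0::2], pixels[1::2]):
        out.append(((odd & 0x0F) << 4) | (even & 0x0F))
    return bytes(out)
```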


Pixel data is compressed using the DEFLATE algorithm with a 10-bit sliding window and no header. This can be achieved with the zlib library as follows:

import zlib
z = zlib.compressobj(level=9, wbits=-10)
zdata = z.compress(pixeldata) + z.flush()
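
The matching decompression uses the same raw-DEFLATE parameters (this sketch assumes the whole stream fits in memory):

```python
import zlib

def toif_decompress(zdata: bytes) -> bytes:
    # raw DEFLATE stream, 10-bit window, no zlib header (wbits=-10)
    d = zlib.decompressobj(wbits=-10)
    return d.decompress(zdata) + d.flush()
```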


  • toif_convert - tool for converting PNGs into TOI format and back