
Secure Boot & Hardware Security

April 1, 2026 Wasil Zafar 55 min read

Secure embedded and server platforms from boot to runtime — chain of trust, TPM 2.0, secure elements (ATECC608), hardware security modules, code signing, measured boot, encrypted storage, anti-tamper design, and compliance with NIST and Common Criteria frameworks.

Table of Contents

  1. History of Boot Security
  2. Chain of Trust
  3. Secure Boot Implementation
  4. TPM 2.0
  5. Secure Elements
  6. Hardware Security Modules
  7. Code Signing & Verification
  8. Encrypted Storage
  9. Anti-Tamper Design
  10. Compliance Frameworks
  11. Case Studies
  12. Exercises
  13. Secure Boot Audit Generator
  14. Conclusion & Resources

History of Boot Security

Key Insight: The boot process is the most critical attack surface in any computing system. If an attacker can modify the code that runs before the operating system loads, they achieve persistence that survives OS reinstallation, hard drive replacement, and even factory resets. Every major nation-state cyberattack toolkit includes bootkit capabilities.

The history of boot security is a history of escalating attacks and defenses. For decades, the PC BIOS — a 16-bit firmware interface dating to 1981 — had no security mechanisms whatsoever. Any software with sufficient privileges could modify BIOS code stored in flash memory, creating a class of malware called bootkits that load before and underneath the operating system.

The first widely documented BIOS rootkit was IceLord (2006), a proof-of-concept that modified the BIOS to persist malicious code across reboots. In 2007, researchers John Heasman and Joanna Rutkowska demonstrated practical BIOS-level rootkits at Black Hat, proving the threat was real. The Mebromi rootkit (2011) was the first BIOS-infecting malware found in the wild, targeting Award BIOS and Chinese-market antivirus software.

The most famous firmware-level attack is Stuxnet (discovered 2010), the joint US-Israeli cyberweapon that targeted Iranian nuclear centrifuges. While Stuxnet primarily attacked Siemens PLCs via Windows, its success demonstrated that firmware-level attacks on embedded systems could have physical-world consequences. This galvanized the industry to take boot security seriously.

The Rise of UEFI Secure Boot

The Unified Extensible Firmware Interface (UEFI) specification, developed by Intel starting in the late 1990s as EFI and later managed by the UEFI Forum, included a security framework called Secure Boot. UEFI Secure Boot was formally specified in UEFI 2.3.1 (2011) and became a requirement for Windows 8 certification in 2012, meaning virtually every certified PC shipped from 2012 onward had the capability.

In the embedded world, ARM introduced TrustZone technology in 2003 with the ARM1176JZ-S processor, creating a hardware-enforced separation between a "Normal World" and a "Secure World." TrustZone provides the hardware foundation for secure boot on billions of ARM-based phones, tablets, and embedded devices. Qualcomm's QSEE (Qualcomm Secure Execution Environment) and Samsung's Knox platform both build on TrustZone.

Year | Event | Impact
2003 | ARM TrustZone introduced | Hardware-based secure world for ARM processors
2006 | IceLord BIOS rootkit PoC | First public demonstration of BIOS persistence attack
2007 | TPM 1.2 widely deployed | Trusted Platform Module provides measured boot on PCs
2010 | Stuxnet discovered | Nation-state firmware attack on industrial systems
2011 | Mebromi BIOS rootkit in the wild | First real-world BIOS-infecting malware
2012 | UEFI Secure Boot (2.3.1) | Cryptographic boot verification becomes standard on PCs
2014 | TPM 2.0 specification published | Modern TPM with SHA-256, ECC, policy-based authorization
2018 | Spectre/Meltdown | CPU speculative-execution flaws undermine software isolation
2018 | NIST SP 800-193 published | Federal standard for platform firmware resilience
2022 | BlackLotus UEFI bootkit | First in-the-wild bootkit bypassing UEFI Secure Boot

Chain of Trust

The chain of trust is the fundamental concept in boot security. It is a sequence of verification steps where each stage authenticates the next stage before transferring control. The chain begins at a root of trust — a small, immutable piece of code or hardware that is inherently trusted because it cannot be modified.

Root of Trust Types

  • Hardware Root of Trust (HRoT) — Code burned into ROM (mask ROM or OTP fuses) during manufacturing. Cannot be modified after fabrication. Examples: Intel Boot Guard ACM, ARM TrustZone secure ROM, Google Titan chip.
  • Immutable Bootloader — First-stage bootloader stored in write-protected flash or OTP memory. Can be verified but not changed. Examples: STM32 system bootloader, NXP HABv4 ROM.
  • TPM-Based Root of Trust — A discrete security chip that measures (hashes) each boot stage and stores measurements in Platform Configuration Registers (PCRs). Does not prevent boot but can prove what booted.

Boot Chain Verification Flow

# Typical embedded secure boot chain:
#
# Stage 0: ROM Bootloader (immutable, in silicon)
#   |-- Verifies signature of Stage 1
#   v
# Stage 1: Second-Stage Bootloader (SPL / BL2)
#   |-- Verifies signature of Stage 2
#   v
# Stage 2: U-Boot / UEFI Firmware
#   |-- Verifies signature of kernel + device tree
#   v
# Stage 3: Linux Kernel
#   |-- Verifies signature of kernel modules (CONFIG_MODULE_SIG)
#   v
# Stage 4: Root Filesystem
#   |-- dm-verity integrity verification of every block
#   v
# Application Layer

# Each arrow represents: verify(hash(next_stage), public_key, signature)
# If ANY verification fails, boot halts (or falls back to recovery)
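The per-arrow check can be sketched in a few lines of Python. As a simplification, each stage here pins the SHA-256 hash of the next stage instead of verifying an ECDSA/RSA signature with a baked-in public key (which would need a crypto library); the stage names and image contents are illustrative, but the halt-on-first-failure control flow is the same.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical stage images - in a real chain these are flash partitions
stages = {
    "spl": b"second-stage bootloader image",
    "u-boot": b"u-boot image",
    "kernel": b"kernel image",
}

# Trust anchors: the hash each stage expects for the NEXT stage.
# A real chain verifies a signature over this hash instead of pinning it.
expected = {
    "spl": sha256(stages["u-boot"]),
    "u-boot": sha256(stages["kernel"]),
}

def boot_chain(order):
    """Run the chain; halt at the first stage whose check fails."""
    for current, nxt in zip(order, order[1:]):
        if sha256(stages[nxt]) != expected[current]:
            return f"HALT: {current} rejected {nxt}"
    return "boot OK"

print(boot_chain(["spl", "u-boot", "kernel"]))  # boot OK
stages["kernel"] = b"tampered kernel image"
print(boot_chain(["spl", "u-boot", "kernel"]))  # HALT: u-boot rejected kernel
```

Note how tampering with the kernel is caught by U-Boot, not by the ROM: each stage is responsible only for the stage it loads next.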
Key Insight: The chain of trust is only as strong as its weakest link. If the root of trust (ROM code) has a vulnerability, the entire chain collapses. This is why chip manufacturers invest heavily in formal verification of ROM bootloader code — it can never be patched after the chip is fabricated.

Measured Boot vs. Verified Boot

Property | Verified Boot | Measured Boot
What it does | Checks signature before running code | Hashes code into TPM PCRs, allows boot regardless
Failure behavior | Halts boot or falls back to recovery | Boot continues, but attestation report reveals tampering
Use case | Embedded devices, phones, game consoles | Enterprise PCs, servers, cloud workloads
Key management | Device holds public key, OEM holds private key | TPM holds measurements, remote verifier checks
Examples | Android Verified Boot, UEFI Secure Boot | Intel TXT, TCG Measured Boot, Keylime attestation
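The failure-behavior row is the essential difference, and a toy model makes it concrete. In this sketch a pinned hash stands in for signature verification and a single variable stands in for a TPM PCR; the image contents are illustrative.

```python
import hashlib

GOLDEN = hashlib.sha256(b"good kernel").hexdigest()

def verified_boot(image: bytes) -> str:
    # Verified boot: refuse to run anything that fails the check
    if hashlib.sha256(image).hexdigest() != GOLDEN:
        return "HALTED"
    return "BOOTED"

def measured_boot(image: bytes):
    # Measured boot: always boots, but extends the measurement into a
    # PCR so a remote verifier can detect the tampering afterwards
    pcr = b"\x00" * 32  # PCR starts at all zeros on power-up
    pcr = hashlib.sha256(pcr + hashlib.sha256(image).digest()).digest()
    return "BOOTED", pcr.hex()

_, good_pcr = measured_boot(b"good kernel")
status, bad_pcr = measured_boot(b"evil kernel")

print(verified_boot(b"evil kernel"))   # HALTED - device refuses to run
print(status, bad_pcr == good_pcr)     # BOOTED False - attestation catches it
```

Verified boot stops the attack on the device; measured boot lets it run but leaves cryptographic evidence that a remote verifier can check.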

Secure Boot Implementation

UEFI Secure Boot on x86

UEFI Secure Boot uses a hierarchy of cryptographic keys stored in UEFI firmware variables:

  • PK (Platform Key) — The root key, typically owned by the OEM. Only one PK exists. Controls who can modify KEK.
  • KEK (Key Exchange Key) — Keys authorized to update the db/dbx. Typically includes the OEM's key and Microsoft's key.
  • db (Signature Database) — Contains hashes or certificates of authorized boot loaders and drivers.
  • dbx (Forbidden Signature Database) — Contains hashes or certificates of known-compromised or revoked boot loaders.
# Check UEFI Secure Boot status on Linux
mokutil --sb-state
# Output: SecureBoot enabled / disabled

# List enrolled keys
mokutil --pk        # Platform Key
mokutil --kek       # Key Exchange Keys
mokutil --db        # Authorized signatures
mokutil --dbx       # Revoked signatures

# Enroll a custom key (for self-signed kernels)
# 1. Generate a key pair (sbsign/sbverify need the certificate in PEM,
#    mokutil needs it in DER)
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=My Secure Boot Key/" \
    -keyout MOK.priv -outform DER -out MOK.der -days 36500 -nodes
openssl x509 -inform DER -in MOK.der -out MOK.pem

# 2. Sign the kernel
sbsign --key MOK.priv --cert MOK.pem --output vmlinuz.signed vmlinuz

# 3. Enroll the MOK (Machine Owner Key)
mokutil --import MOK.der
# Reboot -> MokManager -> Enter password -> Enroll MOK

# 4. Verify signature
sbverify --cert MOK.pem vmlinuz.signed

STM32 Secure Boot (SBSFU)

STMicroelectronics provides the Secure Boot and Secure Firmware Update (SBSFU) reference implementation for STM32 microcontrollers. It uses the STM32's flash read/write protection (RDP), option bytes, and hardware crypto accelerator to implement a complete secure boot chain.

/* STM32 Secure Boot verification flow (simplified)
 * Based on STM32 SBSFU architecture
 */

#include "se_key.h"
#include "se_crypto.h"   /* provides sha256(), ecdsa_verify_p256() */
#include <stdint.h>
#include <string.h>

/* Platform helpers assumed but not shown: read_current_version_from_otp(),
 * write_version_to_otp(), enter_recovery_mode(), __set_MSP(),
 * and the FW_HEADER_ADDRESS link-time constant. */

/* Firmware header structure (prepended to binary).
 * The signature field is placed last so that the signed region is
 * simply the first sizeof(fw_header_t) - 64 bytes. */
typedef struct {
    uint32_t magic;           /* 0x53424655 "SBFU" */
    uint32_t version;         /* Firmware version (anti-rollback) */
    uint32_t fw_size;         /* Firmware size in bytes */
    uint32_t min_version;     /* Minimum allowed version */
    uint8_t  fw_hash[32];     /* SHA-256 hash of firmware */
    uint8_t  signature[64];   /* ECDSA-P256 signature over preceding fields */
} fw_header_t;

/* Stored in OTP / option bytes - NEVER in flash */
static const uint8_t ecdsa_public_key[64] = { /* ... */ };

typedef enum {
    BOOT_OK = 0,
    BOOT_ERR_MAGIC,
    BOOT_ERR_VERSION,
    BOOT_ERR_SIGNATURE,
    BOOT_ERR_HASH,
} boot_status_t;

boot_status_t verify_firmware(const fw_header_t *header,
                               const uint8_t *fw_data)
{
    /* Step 1: Check magic number */
    if (header->magic != 0x53424655) {
        return BOOT_ERR_MAGIC;
    }

    /* Step 2: Anti-rollback check */
    uint32_t current_version = read_current_version_from_otp();
    if (header->version < current_version) {
        return BOOT_ERR_VERSION;
    }

    /* Step 3: Verify ECDSA signature of header */
    if (!ecdsa_verify_p256(ecdsa_public_key,
                            (uint8_t *)header,
                            sizeof(fw_header_t) - 64, /* Exclude sig */
                            header->signature)) {
        return BOOT_ERR_SIGNATURE;
    }

    /* Step 4: Verify firmware hash */
    uint8_t computed_hash[32];
    sha256(fw_data, header->fw_size, computed_hash);
    if (memcmp(computed_hash, header->fw_hash, 32) != 0) {
        return BOOT_ERR_HASH;
    }

    /* Step 5: Update anti-rollback counter if new version */
    if (header->version > current_version) {
        write_version_to_otp(header->version);
    }

    return BOOT_OK;
}

void secure_boot_main(void)
{
    const fw_header_t *header = (fw_header_t *)FW_HEADER_ADDRESS;
    const uint8_t *fw_data = (uint8_t *)(FW_HEADER_ADDRESS + sizeof(fw_header_t));

    boot_status_t status = verify_firmware(header, fw_data);
    if (status != BOOT_OK) {
        /* Boot verification failed - enter recovery mode */
        enter_recovery_mode(status);
        /* Never returns */
    }

    /* Verification passed - jump to application */
    typedef void (*app_entry_t)(void);
    uint32_t app_sp = *(uint32_t *)fw_data;
    uint32_t app_entry = *(uint32_t *)(fw_data + 4);

    __set_MSP(app_sp);
    ((app_entry_t)app_entry)();
}
Warning: Anti-rollback protection is essential. Without it, an attacker who has a copy of an older, vulnerable firmware version can downgrade the device to that version and exploit the known vulnerability. Use one-time-programmable (OTP) fuses or monotonic counters to enforce minimum firmware versions.

TPM 2.0

The Trusted Platform Module (TPM) is a dedicated security chip (or firmware module) that provides hardware-based security functions. The TPM 2.0 specification, published by the Trusted Computing Group (TCG) in 2014, is now required for Windows 11 and increasingly mandated in enterprise and government systems.

TPM 2.0 Architecture

  • Platform Configuration Registers (PCRs) — 24 hash registers that record measurements of boot components. PCR values can only be extended (new_PCR = hash(old_PCR || new_measurement)), never directly written. This creates a tamper-evident log of what software ran during boot.
  • Key Hierarchy — The TPM generates and stores keys internally. The Endorsement Key (EK) is unique per chip and never leaves the TPM. Storage Root Key (SRK) is the parent of all user keys.
  • Sealing/Unsealing — Encrypts data to specific PCR values. The data can only be decrypted when the platform is in the exact same boot state. Used for disk encryption keys (BitLocker, LUKS with TPM).
  • Remote Attestation — The TPM can sign a quote (PCR values + nonce) with its Attestation Key, allowing a remote verifier to confirm the platform's boot integrity.
# TPM 2.0 tools on Linux (tpm2-tools package)

# Check TPM presence and version
tpm2_getcap properties-fixed | grep -A1 TPM2_PT_FAMILY_INDICATOR

# Read PCR values (SHA-256 bank)
tpm2_pcrread sha256:0,1,2,3,4,5,6,7

# Extend PCR 16 (resettable, used for testing)
DIGEST=$(echo -n "test measurement" | sha256sum | cut -d' ' -f1)
tpm2_pcrextend "16:sha256=$DIGEST"

# Create a primary key under the storage hierarchy
tpm2_createprimary -C o -g sha256 -G rsa -c primary.ctx

# Create a signing key under the primary
tpm2_create -C primary.ctx -g sha256 -G rsa \
    -u signing_key.pub -r signing_key.priv

# Load the signing key
tpm2_load -C primary.ctx -u signing_key.pub -r signing_key.priv -c signing.ctx

# Sign data with TPM-resident key
echo "data to sign" > message.dat
tpm2_sign -c signing.ctx -g sha256 -o signature.dat message.dat

# Verify signature
tpm2_verifysignature -c signing.ctx -g sha256 -s signature.dat -m message.dat

# Seal data to PCR values (e.g., seal a disk encryption key)
# Build a PCR policy first, then create the sealed object under it
echo "my_secret_key_material" > secret.dat
tpm2_startauthsession -S session.ctx
tpm2_policypcr -S session.ctx -l sha256:0,1,2,3,7 -L pcr.policy
tpm2_flushcontext session.ctx
tpm2_create -C primary.ctx -i secret.dat \
    -u sealed.pub -r sealed.priv \
    -L pcr.policy \
    -a "fixedtpm|fixedparent"

# Unseal (only works if current PCR values match sealed policy)
tpm2_load -C primary.ctx -u sealed.pub -r sealed.priv -c sealed.ctx
tpm2_unseal -c sealed.ctx -p pcr:sha256:0,1,2,3,7
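The extend rule from the architecture bullets (new_PCR = hash(old_PCR || measurement)) can be modeled in a few lines; this sketch shows why the final PCR value commits to every boot event and to its order, which is what makes the log tamper-evident. Event names are illustrative.

```python
import hashlib

def pcr_extend(pcr: bytes, event: bytes) -> bytes:
    # TPM 2.0 extend rule: new_PCR = hash(old_PCR || hash(event)).
    # The register can only absorb new measurements, never be set directly.
    return hashlib.sha256(pcr + hashlib.sha256(event).digest()).digest()

def final_pcr(events) -> str:
    pcr = b"\x00" * 32  # PCRs reset to all zeros at power-on
    for e in events:
        pcr = pcr_extend(pcr, e)
    return pcr.hex()

a = final_pcr([b"rom", b"bootloader", b"kernel"])
b = final_pcr([b"rom", b"kernel", b"bootloader"])   # same events, swapped order
c = final_pcr([b"rom", b"bootloader", b"evil kernel"])
print(a == b, a == c)   # False False
```

An attacker who modifies any stage cannot "un-extend" the PCR back to the expected value, because that would require inverting SHA-256; the only way to reproduce the golden value is to boot exactly the expected software in exactly the expected order.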

Measured Boot with Remote Attestation

#!/usr/bin/env python3
"""Remote attestation verifier using TPM 2.0 quotes.
Verifies that a remote machine booted with expected PCR values.
"""

import hashlib
import json
import subprocess

class AttestationVerifier:
    """Verifies TPM quotes from remote devices."""

    def __init__(self, expected_pcrs):
        """
        Args:
            expected_pcrs: Dict mapping PCR index to expected SHA-256 hash.
                           e.g., {0: "abc123...", 7: "def456..."}
        """
        self.expected_pcrs = expected_pcrs

    def verify_quote(self, quote_data, signature, ak_public, nonce):
        """Verify a TPM quote from a remote device.

        Returns:
            dict with 'valid' bool and 'details' list
        """
        results = {'valid': True, 'details': []}

        # Step 1: Verify the quote signature using the AK public key
        # (In production, use tpm2_checkquote or a TPM library)
        sig_valid = self._verify_signature(quote_data, signature, ak_public)
        if not sig_valid:
            results['valid'] = False
            results['details'].append("Quote signature verification FAILED")
            return results
        results['details'].append("Quote signature: VALID")

        # Step 2: Verify nonce freshness (prevents replay attacks)
        if nonce not in quote_data:
            results['valid'] = False
            results['details'].append("Nonce mismatch - possible replay attack")
            return results
        results['details'].append("Nonce freshness: VALID")

        # Step 3: Compare PCR values against expected golden values
        actual_pcrs = self._extract_pcrs(quote_data)
        for pcr_idx, expected_hash in self.expected_pcrs.items():
            actual = actual_pcrs.get(pcr_idx, "MISSING")
            if actual != expected_hash:
                results['valid'] = False
                results['details'].append(
                    f"PCR[{pcr_idx}] MISMATCH: "
                    f"expected={expected_hash[:16]}... "
                    f"actual={actual[:16]}..."
                )
            else:
                results['details'].append(f"PCR[{pcr_idx}]: MATCH")

        return results

    def _verify_signature(self, data, signature, public_key):
        """Verify ECDSA or RSA signature (simplified)."""
        # In production: use cryptography library or tpm2_checkquote
        return True  # Placeholder

    def _extract_pcrs(self, quote_data):
        """Extract PCR values from quote structure (simplified)."""
        return {}  # Placeholder - parse TPM2B_ATTEST structure

# Usage
verifier = AttestationVerifier({
    0: "expected_sha256_of_firmware...",
    7: "expected_sha256_of_secureboot_policy...",
})
result = verifier.verify_quote(quote_data={}, signature=b"", ak_public=b"", nonce="random123")
print(json.dumps(result, indent=2))

Secure Elements

A secure element (SE) is a tamper-resistant microcontroller designed specifically for cryptographic operations and key storage. Unlike a TPM (which is designed for platform integrity measurement), a secure element focuses on protecting cryptographic keys and performing operations like ECDH key agreement, ECDSA signing, and AES encryption without ever exposing private keys to the host processor.

Secure Element Comparison

Feature | ATECC608A/B | SE050 | STSAFE-A110
Manufacturer | Microchip | NXP | STMicroelectronics
Interface | I2C (up to 1 MHz) | I2C | I2C
Key Slots | 16 | Flexible (dozens) | 8
Algorithms | ECDSA P-256, SHA-256, AES-128, HMAC | RSA up to 4096, ECC up to P-521, AES, 3DES | ECDSA P-256/P-384, SHA-256, AES
Certifications | CC JIL High rated | Common Criteria EAL6+ | Common Criteria EAL5+
Price | ~$0.55 | ~$1.50 | ~$0.80
Best For | IoT authentication, AWS IoT, small devices | Complex PKI, multi-tenancy, rich crypto | STM32 ecosystem integration

ATECC608B with ESP32 (I2C)

/* ATECC608B secure element - key generation and ECDSA signing
 * Requires: Microchip CryptoAuthLib library
 */

#include "cryptoauthlib.h"
#include "esp_log.h"

static const char *TAG = "atecc608";

/* I2C configuration for ATECC608B */
static ATCAIfaceCfg atecc608_cfg = {
    .iface_type = ATCA_I2C_IFACE,
    .devtype = ATECC608B,
    .atcai2c.address = 0x6A,  /* 7-bit 0x35 << 1; Trust&Go default (base parts use 0xC0) */
    .atcai2c.bus = 0,
    .atcai2c.baud = 400000,
    .wake_delay = 1500,
    .rx_retries = 20,
};

esp_err_t atecc608_init(void)
{
    ATCA_STATUS status = atcab_init(&atecc608_cfg);
    if (status != ATCA_SUCCESS) {
        ESP_LOGE(TAG, "ATECC608 init failed: 0x%02x", status);
        return ESP_FAIL;
    }

    /* Read serial number (unique per chip) */
    uint8_t serial[9];
    status = atcab_read_serial_number(serial);
    if (status == ATCA_SUCCESS) {
        ESP_LOGI(TAG, "ATECC608 serial: %02X%02X%02X%02X%02X%02X%02X%02X%02X",
                 serial[0], serial[1], serial[2], serial[3], serial[4],
                 serial[5], serial[6], serial[7], serial[8]);
    }
    return ESP_OK;
}

esp_err_t atecc608_generate_keypair(uint8_t slot, uint8_t *public_key)
{
    /* Generate ECDSA P-256 key pair. Private key stays in SE, never exposed. */
    ATCA_STATUS status = atcab_genkey(slot, public_key);
    if (status != ATCA_SUCCESS) {
        ESP_LOGE(TAG, "Key generation failed: 0x%02x", status);
        return ESP_FAIL;
    }
    ESP_LOGI(TAG, "Generated P-256 key pair in slot %d", slot);
    ESP_LOGI(TAG, "Public key (first 8 bytes): %02X%02X%02X%02X%02X%02X%02X%02X",
             public_key[0], public_key[1], public_key[2], public_key[3],
             public_key[4], public_key[5], public_key[6], public_key[7]);
    return ESP_OK;
}

esp_err_t atecc608_sign(uint8_t slot, const uint8_t *digest,
                         uint8_t *signature)
{
    /* Sign a SHA-256 digest using private key in specified slot.
     * The private key NEVER leaves the secure element. */
    ATCA_STATUS status = atcab_sign(slot, digest, signature);
    if (status != ATCA_SUCCESS) {
        ESP_LOGE(TAG, "Signing failed: 0x%02x", status);
        return ESP_FAIL;
    }
    ESP_LOGI(TAG, "Signed with key in slot %d", slot);
    return ESP_OK;
}

esp_err_t atecc608_verify(const uint8_t *digest, const uint8_t *signature,
                           const uint8_t *public_key)
{
    bool verified = false;
    ATCA_STATUS status = atcab_verify_extern(digest, signature,
                                              public_key, &verified);
    if (status != ATCA_SUCCESS) {
        ESP_LOGE(TAG, "Verify failed: 0x%02x", status);
        return ESP_FAIL;
    }
    ESP_LOGI(TAG, "Signature verification: %s",
             verified ? "VALID" : "INVALID");
    return verified ? ESP_OK : ESP_FAIL;
}
Key Insight: The ATECC608B at roughly $0.55 brings tamper-resistant key storage, rated Common Criteria JIL High, to devices that could never justify an enterprise HSM. For IoT devices that need to authenticate to AWS IoT Core, Azure IoT Hub, or Google Cloud, the ATECC608B provides per-device TLS mutual authentication with zero-touch provisioning through Microchip's Trust Platform.

Hardware Security Modules

A Hardware Security Module (HSM) is a dedicated, tamper-resistant computing device for managing cryptographic keys and performing cryptographic operations. HSMs are the gold standard for key protection in enterprise environments — protecting root CA keys, code signing keys, payment processing keys, and database encryption keys.

HSM Types and Use Cases

Type | Examples | FIPS Level | Use Cases
Network HSM | Thales Luna, Entrust nShield | FIPS 140-3 Level 3 | PKI, code signing, TLS offload
Cloud HSM | AWS CloudHSM, Azure Dedicated HSM | FIPS 140-2 Level 3 | Cloud key management, BYOK
PCIe HSM | Thales Luna PCIe, Utimaco CryptoServer | FIPS 140-3 Level 3 | High-throughput signing, payment processing
USB Token | YubiHSM 2, Nitrokey HSM 2 | FIPS 140-2 Level 2 | Small-scale signing, development, air-gapped CA
Managed KMS | AWS KMS, GCP Cloud KMS, Azure Key Vault | FIPS 140-2 Level 2-3 | Application encryption, envelope encryption

PKCS#11 Interface

# PKCS#11 is the standard API for HSM access
# Install p11-kit and opensc tools

# List tokens (HSMs/smart cards) visible via PKCS#11
pkcs11-tool --list-tokens

# Generate a key pair on the HSM
pkcs11-tool --module /usr/lib/softhsm/libsofthsm2.so \
    --keypairgen --key-type EC:prime256v1 \
    --label "code-signing-key" --id 01 \
    --login --pin 1234

# Sign a file using HSM-resident key
pkcs11-tool --module /usr/lib/softhsm/libsofthsm2.so \
    --sign --mechanism ECDSA-SHA256 \
    --id 01 --input-file firmware.bin \
    --output-file firmware.sig \
    --login --pin 1234

# Use with OpenSSL via engine
openssl dgst -sha256 -engine pkcs11 \
    -keyform engine -sign "pkcs11:id=%01" \
    -out firmware.sig firmware.bin

Code Signing & Verification

Code signing is the process of attaching a cryptographic signature to firmware, software, or configuration files to prove authenticity (it came from a trusted source) and integrity (it has not been modified). In embedded systems, code signing is the mechanism that makes secure boot possible — the bootloader verifies the signature before executing the firmware.

Signing Pipeline for Embedded Firmware

#!/bin/bash
# firmware_sign.sh - Complete firmware signing pipeline
# Requires: openssl, python3

set -euo pipefail

FIRMWARE_BIN="$1"
PRIVATE_KEY="signing_key.pem"
CERT="signing_cert.pem"
OUTPUT_DIR="signed_firmware"

mkdir -p "$OUTPUT_DIR"

echo "=== Firmware Signing Pipeline ==="
echo "Input: $FIRMWARE_BIN"
echo "Size:  $(stat -c%s "$FIRMWARE_BIN") bytes"

# Step 1: Compute SHA-256 hash
SHA256=$(sha256sum "$FIRMWARE_BIN" | cut -d' ' -f1)
echo "SHA-256: $SHA256"

# Step 2: Sign the hash with ECDSA P-256
openssl dgst -sha256 -sign "$PRIVATE_KEY" \
    -out "$OUTPUT_DIR/firmware.sig" "$FIRMWARE_BIN"
echo "Signature generated: firmware.sig"

# Step 3: Verify the signature (sanity check)
openssl dgst -sha256 -verify <(openssl x509 -in "$CERT" -pubkey -noout) \
    -signature "$OUTPUT_DIR/firmware.sig" "$FIRMWARE_BIN"
echo "Signature verification: OK"

# Step 4: Create firmware header (version + size + hash + sig)
# Note: OpenSSL emits a DER-encoded ECDSA signature of variable length
# (70-72 bytes for P-256), so reserve 72 bytes and zero-pad - never truncate
VERSION="0x00020005"  # v2.5
python3 -c "
import struct, sys
with open('$FIRMWARE_BIN', 'rb') as f:
    fw = f.read()
with open('$OUTPUT_DIR/firmware.sig', 'rb') as f:
    sig = f.read()
assert len(sig) <= 72, 'unexpected signature length'
header = struct.pack('<II', $VERSION, len(fw))
header += bytes.fromhex('$SHA256')
header += sig.ljust(72, b'\\x00')
with open('$OUTPUT_DIR/firmware_header.bin', 'wb') as f:
    f.write(header)
print(f'Header: {len(header)} bytes')
"

# Step 5: Concatenate header + firmware
cat "$OUTPUT_DIR/firmware_header.bin" "$FIRMWARE_BIN" > \
    "$OUTPUT_DIR/firmware_signed.bin"
echo "Signed firmware: $OUTPUT_DIR/firmware_signed.bin"
echo "Total size: $(stat -c%s "$OUTPUT_DIR/firmware_signed.bin") bytes"

# Step 6: Generate build manifest (JSON)
cat > "$OUTPUT_DIR/manifest.json" <<MANIFEST
{
    "version": "2.5.0",
    "build_date": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
    "sha256": "$SHA256",
    "size": $(stat -c%s "$FIRMWARE_BIN"),
    "signed_size": $(stat -c%s "$OUTPUT_DIR/firmware_signed.bin"),
    "signer": "build-server-01",
    "certificate": "$(openssl x509 -in "$CERT" -fingerprint -noout | cut -d= -f2)"
}
MANIFEST
echo "Manifest: $OUTPUT_DIR/manifest.json"
echo "=== Signing complete ==="
Key Insight: In a CI/CD pipeline, the code signing key should never be stored in the build server. Use an HSM (hardware or cloud) via PKCS#11 so the private key never leaves the HSM. The build server sends the firmware hash to the HSM, which returns the signature. This way, even if the build server is compromised, the attacker cannot sign arbitrary firmware.

Encrypted Storage

Encrypted storage protects data at rest — ensuring that an attacker who gains physical access to a device's storage media (SD card, eMMC, NVMe, or flash chip) cannot read the contents without the encryption key.

dm-crypt/LUKS on Linux

# Full-disk encryption with LUKS2 on Linux (e.g., Raspberry Pi)

# Create an encrypted partition
sudo cryptsetup luksFormat --type luks2 --cipher aes-xts-plain64 \
    --key-size 512 --hash sha256 --iter-time 2000 /dev/sda2

# Open (decrypt) the partition
sudo cryptsetup luksOpen /dev/sda2 encrypted_data

# Create filesystem on the decrypted device
sudo mkfs.ext4 /dev/mapper/encrypted_data
sudo mount /dev/mapper/encrypted_data /mnt/secure

# Add a key file for automatic unlocking (protected by TPM)
dd if=/dev/urandom of=/root/luks_keyfile bs=32 count=1
sudo cryptsetup luksAddKey /dev/sda2 /root/luks_keyfile

# TPM-sealed LUKS unlocking (using clevis + tang or tpm2)
sudo clevis luks bind -d /dev/sda2 tpm2 '{"pcr_ids":"0,1,2,3,7"}'
# On boot, clevis automatically unseals the key if PCRs match

# Verify encryption status
sudo cryptsetup luksDump /dev/sda2

# Close encrypted volume
sudo umount /mnt/secure
sudo cryptsetup luksClose encrypted_data

Flash Encryption on ESP32

# ESP32 Flash Encryption (ESP-IDF)
# Encrypts all flash contents with AES-256
# (original ESP32 uses a proprietary AES-256 mode; ESP32-S2/S3/C3 use XTS-AES)

# Enable in menuconfig:
# Security features -> Enable flash encryption on boot
# Security features -> Flash encryption mode -> Development (for testing)

# The encryption key is stored in eFuse Block 1 (256 bits)
# In Development mode: key is readable (for debugging)
# In Release mode: key is write-only (PERMANENT - cannot be read back)

# Check eFuse status
espefuse.py --port /dev/ttyUSB0 summary

# Flash encrypted firmware
idf.py -p /dev/ttyUSB0 encrypted-flash monitor

# Encrypt a binary manually (for OTA)
espsecure.py encrypt_flash_data --keyfile flash_encryption_key.bin \
    --address 0x20000 --output encrypted_app.bin build/app.bin

# IMPORTANT: In Release mode, these eFuses are permanently burned:
# - FLASH_CRYPT_CNT set to 0xFF (encryption always active)
# - DISABLE_DL_ENCRYPT (cannot encrypt via UART)
# - DISABLE_DL_DECRYPT (cannot decrypt via UART)
# - JTAG_DISABLE (hardware debugging disabled)
Warning: Flash encryption in Release mode is a one-way operation that permanently burns eFuses. If you lose the encryption key and have not properly set up OTA, the device is permanently bricked. Always maintain a secure backup of encryption keys in an HSM, and always test the full OTA update flow in Development mode before switching to Release.

Key Derivation and Key Wrapping

Encryption keys should never be stored in plaintext — even in "secure" storage. Instead, use key derivation functions (KDFs) to derive encryption keys from a master secret combined with device-unique data, and key wrapping to encrypt keys at rest.

#!/usr/bin/env python3
"""Key derivation for per-device unique encryption keys.
Uses HKDF (HMAC-based Key Derivation Function) per RFC 5869.
"""

import hashlib
import hmac
import os

def hkdf_extract(salt, input_key_material):
    """HKDF-Extract: PRK = HMAC-Hash(salt, IKM)"""
    if salt is None:
        salt = b'\x00' * 32  # Default salt = HashLen zeros
    return hmac.new(salt, input_key_material, hashlib.sha256).digest()

def hkdf_expand(prk, info, length):
    """HKDF-Expand: derive output keying material of desired length."""
    hash_len = 32  # SHA-256 output length
    n = (length + hash_len - 1) // hash_len
    okm = b''
    t = b''
    for i in range(1, n + 1):
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
    return okm[:length]

def derive_device_key(master_secret, device_serial, purpose):
    """Derive a unique key for a specific device and purpose.

    Args:
        master_secret: Factory-provisioned master key (32 bytes)
        device_serial: Unique device identifier (e.g., chip serial number)
        purpose: Key purpose string (e.g., "firmware-encryption", "tls-auth")

    Returns:
        32-byte derived key unique to this device and purpose
    """
    # Extract: combine master secret with device identity
    salt = hashlib.sha256(device_serial.encode()).digest()
    prk = hkdf_extract(salt, master_secret)

    # Expand: derive purpose-specific key
    info = f"device-key-v1:{purpose}:{device_serial}".encode()
    return hkdf_expand(prk, info, 32)

# Example usage
master = os.urandom(32)  # In production: from HSM during manufacturing
serial = "ATECC608B-01234567"

fw_key = derive_device_key(master, serial, "firmware-encryption")
tls_key = derive_device_key(master, serial, "tls-auth")
storage_key = derive_device_key(master, serial, "storage-encryption")

print(f"Firmware key:  {fw_key.hex()[:32]}...")
print(f"TLS key:       {tls_key.hex()[:32]}...")
print(f"Storage key:   {storage_key.hex()[:32]}...")
# Each key is unique even though derived from same master + serial
/* AES-256 key wrapping (RFC 3394) for protecting keys at rest.
 * The Key Encryption Key (KEK) is stored in the secure element.
 * Wrapped keys can be safely stored in flash or NVS.
 */

#include "mbedtls/nist_kw.h"
#include <string.h>
#include <stdio.h>

/* Wrap a key using AES Key Wrap (RFC 3394) */
int wrap_key(const uint8_t *kek, size_t kek_len,
             const uint8_t *key_to_wrap, size_t key_len,
             uint8_t *wrapped_key, size_t *wrapped_len)
{
    mbedtls_nist_kw_context ctx;
    mbedtls_nist_kw_init(&ctx);

    int ret = mbedtls_nist_kw_setkey(&ctx, MBEDTLS_CIPHER_ID_AES,
                                      kek, kek_len * 8, 1); /* 1 = wrap */
    if (ret != 0) return ret;

    ret = mbedtls_nist_kw_wrap(&ctx, MBEDTLS_KW_MODE_KW,
                                key_to_wrap, key_len,
                                wrapped_key, wrapped_len,
                                key_len + 8); /* Wrapped = key + 8 bytes */
    mbedtls_nist_kw_free(&ctx);
    return ret;
}

/* Unwrap a key */
int unwrap_key(const uint8_t *kek, size_t kek_len,
               const uint8_t *wrapped_key, size_t wrapped_len,
               uint8_t *unwrapped_key, size_t *unwrapped_len)
{
    mbedtls_nist_kw_context ctx;
    mbedtls_nist_kw_init(&ctx);

    int ret = mbedtls_nist_kw_setkey(&ctx, MBEDTLS_CIPHER_ID_AES,
                                      kek, kek_len * 8, 0); /* 0 = unwrap */
    if (ret != 0) return ret;

    ret = mbedtls_nist_kw_unwrap(&ctx, MBEDTLS_KW_MODE_KW,
                                  wrapped_key, wrapped_len,
                                  unwrapped_key, unwrapped_len,
                                  wrapped_len - 8);
    mbedtls_nist_kw_free(&ctx);
    return ret;  /* Returns MBEDTLS_ERR_CIPHER_AUTH_FAILED if tampered */
}
Key Insight: The key hierarchy pattern is: HSM holds the root key -> root key derives per-device master keys during manufacturing -> master key is provisioned into the device's secure element -> device derives purpose-specific keys from its master using HKDF. This way, compromising one device reveals only that device's keys, not the keys of any other device in the fleet.

Anti-Tamper Design

Anti-tamper mechanisms protect a device against physical attacks — attempts to extract keys, read memory, probe bus signals, or modify hardware. The level of anti-tamper protection needed depends on the threat model: consumer IoT devices need basic protections, while payment terminals, military equipment, and DRM systems require extensive physical security.

Anti-Tamper Hierarchy

Level | Technique | Protects Against | Cost
1 - Basic | Tamper-evident seals, custom screws | Casual inspection, consumer tampering | $0.10-1
2 - Software | Disable JTAG, read-out protection, secure boot | Firmware extraction via debug ports | $0 (firmware config)
3 - Enclosure | Tamper switches (micro switches, light sensors) | Case opening; triggers key zeroization | $1-5
4 - PCB | Conformal coating, BGA packages, inner-layer routing | Bus probing, chip desoldering | $2-10
5 - Active Mesh | Conductive mesh over sensitive areas, battery-backed | Physical probing with FIB or microprobes | $10-50
6 - Environmental | Temperature/voltage/frequency monitors | Glitching attacks, extreme condition exploits | $5-20
7 - Potting | Epoxy encapsulation of entire PCB | All physical access (destructive removal required) | $5-30

Side-Channel Attack Mitigations

Side-channel attacks extract secret information by observing physical characteristics of computation rather than attacking the algorithm directly:

  • Power Analysis (SPA/DPA) — Measuring power consumption during cryptographic operations reveals key bits. Mitigation: constant-time implementations, random delays, power line filtering.
  • Electromagnetic (EM) Analysis — EM emissions from the chip leak information about operations. Mitigation: EM shielding, randomized execution order.
  • Timing Attacks — Execution time differences reveal secret-dependent branches. Mitigation: constant-time comparison functions (e.g., CRYPTO_memcmp instead of memcmp).
  • Fault Injection (Glitching) — Voltage or clock glitches cause the processor to skip instructions or produce faulty outputs. Mitigation: redundant checks, voltage/clock monitoring, instruction duplication.
/* Constant-time comparison to prevent timing attacks.
 * Standard memcmp() returns early on first difference — leaking info.
 * This version always compares all bytes.
 */

#include <stdint.h>
#include <stddef.h>

int secure_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    volatile uint8_t result = 0;
    for (size_t i = 0; i < len; i++) {
        result |= a[i] ^ b[i];
    }
    return (int)result;  /* 0 = match, non-zero = mismatch */
}

/* Redundant verification to mitigate fault injection.
 * ecdsa_verify() and get_random_u32() are assumed platform-provided
 * primitives: ecdsa_verify() returns 1 on a valid signature, and
 * get_random_u32() returns a hardware random word.
 */
extern int ecdsa_verify(const uint8_t *digest, const uint8_t *signature,
                        const uint8_t *public_key);
extern uint32_t get_random_u32(void);

int verify_signature_hardened(const uint8_t *digest,
                               const uint8_t *signature,
                               const uint8_t *public_key)
{
    /* First verification */
    int result1 = ecdsa_verify(digest, signature, public_key);

    /* Introduce random delay to prevent synchronization */
    volatile int delay = get_random_u32() & 0xFFFF;
    while (delay--) { __asm__ volatile("nop"); }

    /* Second verification (independent computation) */
    int result2 = ecdsa_verify(digest, signature, public_key);

    /* Both must agree — a single glitch cannot bypass both */
    if (result1 == 1 && result2 == 1) {
        return 1;  /* Signature valid */
    }
    return 0;  /* Invalid or glitch detected */
}

Compliance Frameworks

Security compliance frameworks provide structured requirements and evaluation methodologies for assessing the security of products and systems. Understanding these frameworks is essential for products sold to government, defense, healthcare, financial, or industrial customers.

Framework Comparison

Framework | Scope | Key Requirements | Industries
NIST SP 800-193 | Platform firmware resilience | Protection, detection, recovery of firmware | US Federal, enterprise
Common Criteria (ISO 15408) | IT security evaluation | EAL1-EAL7 assurance levels, security targets | Government, defense, financial
FIPS 140-3 | Cryptographic modules | 4 security levels, physical security, key management | US/Canada government
IEC 62443 | Industrial automation security | Security levels SL1-SL4, zones and conduits | Industrial, OT, SCADA
ETSI EN 303 645 | Consumer IoT security | 13 provisions including unique passwords, secure updates | EU consumer electronics
PSA Certified | IoT device security | 3 levels, threat modeling, secure boot, key storage | Arm-based IoT products

NIST SP 800-193: Platform Firmware Resilience

NIST SP 800-193 (published May 2018) defines three pillars for firmware security:

  • Protection — Firmware update mechanisms must verify the authenticity and integrity of updates before applying them. The root of trust for update verification must be immutable.
  • Detection — The platform must be able to detect when firmware has been corrupted or tampered with, either during boot or at runtime via periodic integrity checks.
  • Recovery — When corruption is detected, the platform must be able to recover to a known-good firmware state. Recovery images must be protected with the same rigor as the primary firmware.
Key Insight: NIST SP 800-193 compliance is increasingly required for products sold to US federal agencies. The three pillars — protect, detect, recover — map directly to secure boot (protect), measured boot with attestation (detect), and dual-bank OTA with rollback (recover). If you implement these three features, you are well on your way to 800-193 compliance.

FIPS 140-3 Security Levels

FIPS 140-3 (Federal Information Processing Standard 140-3) is the US/Canada standard for cryptographic module validation. Approved in 2019 as the successor to FIPS 140-2, it is administered by NIST's Cryptographic Module Validation Program (CMVP). Products sold to US federal agencies for cryptographic use must be FIPS 140-3 validated.

Level | Physical Security | Key Management | Example Products
Level 1 | Production-grade components, no tamper mechanisms | Approved algorithms, basic key management | OpenSSL (software-only crypto module)
Level 2 | Tamper-evident coatings or seals, role-based authentication | Operator authentication required | AWS KMS, Azure Key Vault
Level 3 | Tamper-responsive (active zeroization), identity-based auth | Keys zeroized on tamper, split knowledge/dual control | Thales Luna HSM, AWS CloudHSM
Level 4 | Environmental failure protection, penetration-resistant envelope | Multi-factor authentication, complete envelope detection | Specialized military/payment HSMs

PSA Certified for IoT

PSA Certified (Platform Security Architecture), created by Arm together with partners including Cypress, Microchip, Nordic, Nuvoton, NXP, and STMicroelectronics, is a security framework designed specifically for IoT devices. It defines three certification levels:

  • PSA Certified Level 1 — Threat model and security analysis completed. Manufacturer demonstrates awareness of security threats and describes mitigations. No lab testing required.
  • PSA Certified Level 2 — Lab-evaluated. A third-party lab tests the device against PSA-RoT (Root of Trust) protection profile. Requires secure boot, secure storage, attestation, and secure update capabilities.
  • PSA Certified Level 3 — Substantial lab evaluation against physical and logical attacks. Requires defense against side-channel attacks, fault injection, and physical probing. Closest to Common Criteria EAL4+.
# PSA Certified security requirements checklist:

# Level 1 (self-assessment):
# [ ] Threat model document covering top 10 IoT threats
# [ ] Unique per-device identity (serial number or certificate)
# [ ] Secure boot chain documented
# [ ] Secure update mechanism documented
# [ ] Secure communication (TLS 1.2+) documented
# [ ] Secure storage for credentials documented

# Level 2 (lab-evaluated):
# [ ] Immutable Root of Trust in hardware (ROM or eFuse)
# [ ] Secure boot with cryptographic verification at every stage
# [ ] Secure storage (encrypted, tamper-detected)
# [ ] Attestation (device can prove its identity and boot state)
# [ ] Secure lifecycle management (provisioning, decommissioning)
# [ ] Debug port disabled or authenticated in production
# [ ] Anti-rollback for firmware updates

# Level 3 (substantial evaluation):
# [ ] All Level 2 requirements
# [ ] Resistance to side-channel attacks (power analysis, EM)
# [ ] Resistance to fault injection (glitching)
# [ ] Secure key storage in dedicated hardware (SE or HSM)
# [ ] Isolation between secure and non-secure processing

Case Studies

Case Study 1: Google Titan Security Chip

Google's Titan chip is a custom-designed hardware root of trust deployed in every Google server, Chromebook, and Pixel phone. In Google's data centers, the Titan chip verifies the first instruction executed by the server's CPU at every boot. It contains an immutable boot ROM, a cryptographic identity (unique per chip), and an application processor that runs Google's secure boot verification code.

The data center Titan chip sits on the server motherboard and intercepts the boot process before the main CPU begins execution. It verifies the BIOS/UEFI firmware signature, measures every boot component into its internal registers, and provides remote attestation to Google's fleet management infrastructure. Any server that fails attestation is automatically quarantined and investigated.

The consumer version, Titan M2 (used in Pixel phones since Pixel 6), provides secure boot, hardware-backed keystore for Android, insider attack resistance (the chip cannot be re-provisioned even by Google), and tamper-resistant storage for biometric templates. Google open-sourced the Titan firmware verification logic as part of the OpenTitan project (partnering with lowRISC, ETH Zurich, and others) to create an open reference design for hardware roots of trust.

Case Study 2: Apple Secure Enclave

Apple's Secure Enclave is a hardware security subsystem integrated into Apple silicon (A-series, M-series, S-series, T-series chips). First introduced with the A7 chip in iPhone 5s (2013) alongside Touch ID, it has evolved into one of the most sophisticated hardware security architectures in consumer electronics.

The Secure Enclave is a separate processor with its own secure boot ROM, encrypted memory (using a per-boot ephemeral key), and dedicated AES engine. It handles all biometric data (Face ID, Touch ID), device passcode verification, and Apple Pay transaction signing. The Secure Enclave's memory is encrypted with a key that changes on every boot — even if an attacker could freeze and read the RAM, the data from a previous session would be unrecoverable.

Apple's implementation demonstrates several best practices: the Secure Enclave has its own independent boot chain verified by an immutable hardware root of trust; all communication with the application processor uses a mailbox interface (no shared memory); the AES engine uses hardware key wrapping so keys are never exposed in software; and the system enforces rate limiting on passcode attempts (escalating delays + data destruction after 10 failed attempts) in tamper-resistant hardware rather than software.

Case Study 3: Automotive HSM (SHE/EVITA)

Modern vehicles use hardware security modules embedded within automotive microcontrollers to protect encryption keys, authenticate ECU-to-ECU communication, and verify firmware updates. The SHE (Secure Hardware Extension) standard, defined by the HIS (Herstellerinitiative Software) consortium, provides a minimum set of security functions: AES-128 encryption, CMAC authentication, secure boot, and secure key storage.

The EVITA (E-safety Vehicle Intrusion Protected Applications) project, funded by the European Commission, extended SHE with asymmetric cryptography (ECC) and defined three levels of automotive HSM: EVITA Light (for sensor ECUs), EVITA Medium (for body/comfort ECUs), and EVITA Full (for gateway and powertrain ECUs requiring maximum security). Today, chips like the Infineon AURIX TC3xx and NXP S32K3 integrate EVITA Full-equivalent HSMs, providing hardware-accelerated secure boot, authenticated CAN/Ethernet communication (SecOC per AUTOSAR), and secure firmware update for every ECU in the vehicle.

Exercises

Exercise 1 (Beginner)

UEFI Secure Boot Key Management

On a test Linux machine (VM is fine), generate a custom UEFI Secure Boot key pair (RSA-2048 or ECDSA P-256), sign a Linux kernel with sbsign, enroll the key using mokutil, and verify that the signed kernel boots successfully with Secure Boot enabled. Then, create an unsigned kernel and confirm that Secure Boot rejects it. Document the entire process, including the key generation commands, signing commands, and the boot log entries showing verification success/failure.

Tags: UEFI Secure Boot, Code Signing, MOK, sbsign

Exercise 2 (Intermediate)

TPM-Sealed Disk Encryption

Set up LUKS full-disk encryption on a Linux system with the encryption key sealed to TPM 2.0 PCR values using clevis and tpm2-tools. Verify that the system boots and auto-unlocks when PCR values match. Then, modify the GRUB configuration (changing a kernel parameter) and verify that the TPM refuses to unseal the key because PCR values changed. Implement a recovery mechanism using a passphrase fallback. Finally, write a script that reads and logs TPM PCR values at each boot stage, creating an audit trail of boot integrity.

Tags: TPM 2.0, LUKS, Clevis, PCR Sealing, Measured Boot

Exercise 3 (Advanced)

End-to-End Secure Boot for ESP32 with ATECC608B

Implement a complete secure boot and secure update system for an ESP32-S3 with an ATECC608B secure element. Requirements: (1) Enable ESP-IDF Secure Boot v2 with a signing key stored on a YubiHSM 2 or SoftHSM via PKCS#11, (2) Enable flash encryption in Development mode, (3) Use the ATECC608B for mutual TLS authentication to an MQTT broker, (4) Implement HTTPS OTA with firmware signature verification using a key stored in the ATECC608B, (5) Implement anti-rollback using monotonic counters in eFuse, (6) Write a CI/CD pipeline (GitHub Actions) that builds, signs (via HSM), and publishes firmware to an S3 bucket. Document the key hierarchy, threat model, and recovery procedures.

Tags: ESP32-S3, ATECC608B, Secure Boot v2, Flash Encryption, mTLS, PKCS#11, CI/CD

Secure Boot Audit Generator

Use this tool to document your secure boot architecture and security posture — platform, boot chain, key storage, signing algorithm, tamper detection, and compliance requirements. Download as Word, Excel, PDF, or PowerPoint for security audits, compliance documentation, or design review.


Document your secure boot and hardware security configuration for audit and export. All data stays in your browser — nothing is sent to any server.


Conclusion & Resources

Hardware security is no longer optional — it is a fundamental requirement for any connected device. From the boot ROM to the application layer, every stage must be verified, measured, and protected. The tools and techniques covered in this article — secure boot chains, TPM 2.0, secure elements, code signing, encrypted storage, and anti-tamper design — form the building blocks of trustworthy computing.

Key Takeaways:
  • The chain of trust starts at an immutable root — usually ROM or eFuse-based — and each stage verifies the next
  • TPM 2.0 provides measured boot and remote attestation for enterprise platforms
  • Secure elements (ATECC608B) provide cost-effective key storage for IoT devices at $0.55/unit
  • Code signing keys must never be stored on build servers — use HSMs (hardware or cloud)
  • Anti-rollback protection is as important as signature verification
  • Flash encryption and debug port disabling are essential for production deployments
  • Compliance frameworks (NIST 800-193, FIPS 140-3, Common Criteria) provide structured evaluation criteria

Further Resources
