# Secure Boot in Windows: The Chain From Sector Zero to Userinit, and Every Place It Has Broken

> How Windows verifies and measures itself from CPU reset to logon, every rung of the boot chain, every public break, and what Pluton is being built to fix.

*Published: 2026-05-09*
*Canonical: https://paragmali.com/blog/secure-boot-in-windows-the-chain-from-sector-zero-to-userini*
*License: CC BY 4.0 - https://creativecommons.org/licenses/by/4.0/*

---
<TLDR>
Windows boots through a chain of verifications and measurements that runs from CPU reset to your desktop. UEFI Secure Boot signs the boot manager; Trusted Boot extends the signature check to every kernel-mode component; Measured Boot extends a parallel hash of every step into the TPM's PCR 0-7 and PCR 11, with DRTM later seeding PCR 17-22 from a CPU-vendor-signed late-launch anchor. After fifteen years of BIOS rootkits, MBR bootkits, and ESP-resident bootkits, that chain holds -- but every public Secure Boot break since 2022 (BlackLotus, Bitpixie, Bootkitty, LogoFAIL) has exploited the same gap: between patching a vulnerable Microsoft-signed binary and revoking it in dbx. Pluton-rooted firmware on Microsoft's update cadence is the planned escape.
</TLDR>

## 1. Eight seconds in 2010, and everything that could already be wrong

Picture a small business owner in December 2010. She unplugs her three-year-old Dell, drives it home, and powers it on. The fan spins. The BIOS chimes. The Windows 7 logo appears. By the time she types her password and the desktop loads, eight seconds have passed.

In those eight seconds, a TDL-4 bootkit that has been on disk for two weeks has already done its work. The infected master boot record patched the operating system loader in memory before the kernel finished initialising. Driver Signature Enforcement, the policy that was supposed to keep unsigned kernel drivers out, was disabled before the kernel checked for it. A ring-0 rootkit is now staged inside `ntoskrnl.exe`. Kaspersky's June 2011 analysis counted 4,524,488 infected machines in the first three months of 2011 alone [@kaspersky-tdl4]. The owner notices nothing. By the time she authenticates, the operating system that authenticates her is loading code the operating system never agreed to load.

The structural question raised by that scene is the question this article exists to answer: *what would it take for Windows to know, by the time the user types a password, that the machine has not been tampered with since power-on?*

The answer Microsoft began designing in 2011-2012 is a chain. UEFI Platform Initialization brings up the firmware. UEFI Secure Boot verifies the boot manager. Trusted Boot extends the signature check through `winload.efi`, the kernel, and every boot-start driver. Early Launch Anti-Malware classifies subsequent drivers. The Secure Kernel comes up in a hardware-isolated execution mode. Through every one of those rungs, a parallel rail -- Measured Boot -- writes a tamper-evident hash log into the TPM's Platform Configuration Registers, so that what was loaded can be proven later, even if the verifier itself was bypassed.

That chain is the spine of this article. We will walk it rung by rung. We will see where it has been broken in the wild. And we will see why every successful break since 2022 has exploited the same operational invariant -- the gap between *patched* and *revoked* -- rather than any flaw in the cryptography.

<Mermaid caption="The end-to-end Windows boot chain. Each rung is verified by the rung above it, and a parallel measurement is extended into the TPM's PCRs. The article walks each rung in order.">
flowchart TD
    SEC["SEC -- CPU reset, immutable ROM"] --> PEI["PEI -- platform init"]
    PEI --> DXE["DXE -- Secure Boot verifier lives here"]
    DXE --> BDS["BDS -- pick boot variable"]
    BDS --> BMGR["bootmgfw.efi (Microsoft-signed)"]
    BMGR --> WLOAD["winload.efi (Microsoft-signed)"]
    WLOAD --> NT["ntoskrnl.exe + boot-start drivers"]
    NT --> ELAM["ELAM (Defender, signed AM)"]
    NT --> SK["securekernel.exe (VTL1) + Trustlets"]
    ELAM --> SMSS["smss.exe -> wininit -> winlogon"]
    SK --> SMSS
    SMSS --> USR["userinit.exe -> explorer.exe"]
    TPM[("TPM PCR 0-7, PCR 11")]
    DXE -. extend .-> TPM
    BMGR -. extend .-> TPM
    WLOAD -. extend .-> TPM
    NT -. extend .-> TPM
    ELAM -. extend .-> TPM
</Mermaid>

Before there was a chain to walk, there was no chain at all.

## 2. Before Secure Boot: sector zero and the fiction of OS-level security

Ask what was actually verified during a 2011 PC boot, and the answer is: one byte pair. The `0x55AA` magic at the end of the 512-byte master boot record. That is a format check, not an authenticity check. The 16-bit BIOS power-on self test loaded sector zero of the boot device into memory at `0000:7C00` and jumped [@wp-mbr]. No signature. No measurement. Whatever was at sector zero, ran.

That architectural fact had been the structural lesson of computer-security history for a quarter century. Stoned, the boot sector virus written by an unknown student in Wellington, New Zealand in 1987, demonstrated it without malicious intent: the virus was a prank that displayed "Your PC is now Stoned!" and propagated by writing itself to the boot sector of every disk a victim machine touched [@wp-stoned]. Brain (Pakistan, 1986) [@wp-brain] and Michelangelo (1991) [@wp-michelangelo] were the same lesson at scale. The lesson was not that those particular authors were dangerous. It was that any code reaching sector zero ran with implicit privilege.

<Definition term="Bootkit">
A class of malware that survives operating-system reinstallation and antivirus scanning by infecting code that runs *before* the operating system loads -- traditionally the master boot record or the partition's volume boot record, more recently the EFI System Partition or the firmware itself. A bootkit's defining property is that the operating system it boots is one the bootkit itself chooses to load.
</Definition>

The modern bootkit family arrived in 2005 and ran undefended for the next seven years. Derek Soeder and Ryan Permeh of eEye published *BootRoot* at Black Hat USA 2005 [@bhusa05-bootroot], a proof of concept that hooked the BIOS interrupt 13h disk-read service before any operating system loaded and intercepted Windows kernel images on the way to memory. Vbootkit (Vipin and Nitin Kumar) followed in 2007, demonstrating the same primitive on Vista [@vbootkit-archive]. Mebroot -- the bootkit the Sinowal/Torpig malware family used, first compiled in November 2007 according to early infection telemetry -- weaponised the technique against real victim populations [@wp-mebroot]. By 2011, TDL-3 and TDL-4 had pushed the lineage into the millions of infected hosts [@kaspersky-tdl4].

The category took its final structural step on 13 September 2011, when Marco Giuliani at Webroot's threat lab disclosed *Mebromi*, the first BIOS rootkit found in the wild [@wayback-mebromi, @malpedia-giuliani]. Mebromi targeted Award BIOS firmware. It used the legitimate Phoenix `CBROM.EXE` utility -- a tool the BIOS vendor itself shipped for assembling firmware images -- to splice malicious code into the firmware ROM image, then flashed the modified ROM through System Management Mode. On every subsequent boot, the firmware itself reinstalled the rootkit's MBR before any operating system existed to scan for it.<Sidenote>The Mebromi reuse of Phoenix's own `CBROM.EXE` is the canonical illustration of the architectural problem. The defender's tools and the attacker's tools were the same tools. The firmware-update path had no signature, no measurement, and no policy gate; CBROM was just an executable that knew the Award ROM image format. The fix was not better antivirus. The fix was a hardware root that the OS itself could not rewrite.</Sidenote>

The structural argument that Mebromi made unanswerable: there was no measurement endpoint and no signature verifier *anywhere below* the operating system. Every operating-system-level defence was rhetorical against this layer. Kernel-Mode Code Signing, the policy Windows Vista x64 had introduced in 2006 [@app-identity-sibling], was enforced by code that the bootkit could rewrite before the kernel started checking. Driver Signature Enforcement was a setting the operating system wrote into a memory location the operating system could not yet defend.

Trust must be rooted in something the operating system cannot rewrite. That means the chain has to start before the operating system exists. The next rung is firmware itself.

## 3. UEFI Platform Initialization: SEC, PEI, DXE, BDS, and where Secure Boot actually lives

If Secure Boot starts at the operating-system loader, which exact piece of firmware decides whether the operating-system loader is allowed to run, and what verifies *that* piece? The answer is a four-phase pipeline that almost no Windows engineer ever writes about. It is also where every modern firmware attack lands.

<Definition term="UEFI Platform Initialization (PI)">
The Unified Extensible Firmware Interface Platform Initialization specification defines the internal architecture firmware uses to bring a system up. It splits boot into four phases: Security (SEC), Pre-EFI Initialization (PEI), Driver Execution Environment (DXE), and Boot Device Selection (BDS). Standard Windows usage of "UEFI" almost always means the externally-visible behaviour exposed by BDS and the EFI runtime services, not the multi-phase internal pipeline the firmware uses to get there.
</Definition>

The four phases, per the TianoCore reference flow [@tianocore-pi-flow]:

- **SEC.** The Security phase begins at processor reset. Code runs from immutable on-die ROM or a locked region of SPI flash before main memory is even initialised. SEC's job is to establish the root of trust in the firmware -- before any flexible code path can be taken, the firmware has committed to an instruction stream the operating system cannot influence.
- **PEI.** Pre-EFI Initialization brings up DRAM, configures the memory controller, populates Hand-Off Blocks (HOBs) the later phases consume, and dispatches the small drivers needed to reach a state where general firmware code can run. SEC and PEI together are the part of firmware that fits in the few hundred kilobytes of cache-as-RAM the CPU offers before main memory is up.
- **DXE.** The Driver Execution Environment hosts most of what we think of as firmware: the disk drivers, the network stack, the human-interface drivers, the USB stack, and Secure Boot's image verifier. *This is where `LoadImage()` runs db/dbx checks against incoming PE/COFF binaries.* DXE phase code is several megabytes on a modern x86 platform.
- **BDS.** Boot Device Selection reads the `BootOrder` UEFI variable, picks a boot entry, hands the platform off to the operating system loader, and -- in normal operation -- never runs again until the next reboot.

<Mermaid caption="UEFI Platform Initialization phases. SEC and PEI run from immutable code; DXE hosts the Secure Boot verifier; BDS picks the boot variable. Boot Guard runs one rung below SEC, in microcode that verifies the firmware itself before the firmware can issue useful instructions.">
flowchart LR
    BG["Boot Guard / AMD PSB<br/>(microcode, OTP fuses)"] --> SEC["SEC<br/>immutable ROM"]
    SEC --> PEI["PEI<br/>DRAM init"]
    PEI --> DXE["DXE<br/>Secure Boot LoadImage()"]
    DXE --> BDS["BDS<br/>read BootOrder"]
    BDS --> OS["bootmgfw.efi"]
</Mermaid>

There is one rung *below* SEC. Intel Boot Guard verifies the firmware via a CPU-microcode-loaded Authenticated Code Module signed by Intel [@wp-txt]; AMD Platform Secure Boot performs the same role from the AMD Platform Security Processor (PSP), an ARM-based co-processor embedded on the SoC [@ioactive-psb, @wp-amd-psp]. Both run before SEC can begin. Intel introduced Boot Guard on platforms based on the Haswell processor family (4th-generation Core, Lynx Point PCH) in 2013 [@eset-lojax, @wp-txt]; the actual root-of-trust fuses live in the PCH [@eset-lojax], and the OEM commits the verification key at provisioning, so Boot Guard support is a chipset-and-OEM property rather than a bare CPU-model property [@eset-lojax, @ioactive-psb]. AMD's PSB followed on EPYC server parts and was rolled out to Ryzen Pro platforms over the next several years; the PSP itself has been present on AMD client and server parts since around 2013 [@wp-amd-psp], but PSB is a distinct firmware-signing flow that uses it [@ioactive-psb].<MarginNote>The Windows Hardware Compatibility Program codified UEFI 2.3.1 as the firmware floor for Windows 10 security features [@ms-oem-uefi]. Anything below 2.3.1 cannot host a Secure Boot configuration that Microsoft will certify.</MarginNote> The keys that anchor those verifications are burned into one-time-programmable fuses inside the package, so the OEM commits to a public key when the part ships and cannot rotate it later [@eset-lojax, @ioactive-psb]. ESET's 2018 LoJax disclosure recommended Boot Guard explicitly: "if possible, have a processor with a hardware root of trust as is the case with Intel processors supporting Intel Boot Guard (from the Haswell family of Intel processors onwards)" [@eset-lojax].<Sidenote>Boot Guard's OTP fuses are the canonical example of why hardware-rooted verification cannot have a software-only escape hatch [@eset-lojax, @ioactive-psb]. 
If the OEM's signing key leaks, the fuses cannot be reprogrammed in the field; an attacker with the leaked key can produce firmware that the silicon will accept. This is the structural argument behind moving the root one more rung down -- into Pluton, where Microsoft, not the OEM, owns the update cadence.</Sidenote>

The conclusion is the part most engineers skip. By the time `bootmgfw.efi` is verified, several megabytes of DXE-phase code have already executed. Anything that compromises the DXE compromises Secure Boot from below: the verifier itself is now the attacker's code. That is the precondition that LogoFAIL exploits, and it is the reason "Secure Boot starts at the OS loader" is the wrong mental model.

NIST recognised the structural problem early. NIST Special Publication 800-147 *BIOS Protection Guidelines* (April 2011) [@nist-sp-800-147] articulated the BIOS-update-signing baseline two years before Boot Guard shipped a hardware-rooted answer. SP 800-147 said only that firmware updates must be signed; it did not say *who* must verify the signing key. Boot Guard and PSB were the hardware-rooted answer to that gap, with the OEM holding the verification key in OTP fuses.

Now we have a place to put a verifier. The next question is *what* it verifies, and *who* signed the allowlist.

## 4. Secure Boot itself: PK, KEK, db, dbx, and the Microsoft monoculture

Secure Boot is four UEFI variables, one Authenticode hash per binary, and one centralised root of trust. The technical content of this section is not the hard part. The social and operational content -- *who* holds which key, and *what happens when a signed binary becomes vulnerable* -- is the rest of the article.

The four authenticated UEFI variables, defined in UEFI 2.3.1 (April 2011) and refined through the current 2.11 specification (December 16, 2024) [@uefi-spec, @wp-uefi, @oem-secure-boot]:

- **PK** -- the Platform Key. The OEM holds the private half. Whoever holds PK can rewrite KEK, db, and dbx; whoever holds PK can also effectively *turn Secure Boot off*, because clearing or replacing PK drops the platform out of its enrolled state and into setup mode, where enforcement is disabled.
- **KEK** -- the Key Exchange Key. Both the OEM and Microsoft hold KEKs. KEK is the trust anchor for db and dbx updates. A KEK-signed update can add or remove entries in db and dbx without touching PK.
- **db** -- the signature database. This is the allowlist: hashes the firmware will accept, plus certificates whose signers it will accept. db typically contains a small handful of entries.
- **dbx** -- the forbidden signatures database. The denylist: hashes and certs the firmware must refuse, even if they would otherwise pass db.

<Definition term="Authenticated UEFI Variables (PK, KEK, db, dbx)">
Four EFI variables defined by the UEFI specification that together form Secure Boot's trust hierarchy. Each variable is *authenticated*: any update must be signed by a key one rung up the hierarchy. PK signs updates to KEK, db, and dbx; KEK signs updates to db and dbx. Microsoft requires the Microsoft Corporation KEK CA to be present in KEK on every Windows-certified PC, so that Microsoft can push db and dbx updates without OEM cooperation per device.
</Definition>

The verification algorithm runs every time UEFI calls `LoadImage()` on a PE/COFF binary, in this order:

1. Hash the PE/COFF image. The Authenticode digest excludes the signature directory and the checksum field, so the hash is computed over the parts of the image that should not change between signing and loading [@ms-pe-format].
2. If the hash matches a hash in dbx, reject.
3. Else if the signer's certificate chains to a certificate in dbx, reject.
4. Else if the hash matches an entry in db, accept. Else if the signer chains to a certificate in db, accept.
5. Else, reject.

Microsoft's WHCP requires firmware components to be signed with at least RSA-2048 and SHA-256 [@oem-secure-boot]. That floor is modest by 2026 standards but has held without serious controversy since the original UEFI 2.3.1 release.

<Mermaid caption="LoadImage() decision tree as Secure Boot's image verifier walks each incoming PE/COFF binary against db/dbx. The dbx check is unconditional and runs first, so a hash that lands in dbx cannot be saved by being also present in db.">
flowchart TD
    L["LoadImage(image)"] --> H["Compute Authenticode hash"]
    H --> D1&#123;"Hash in dbx?"&#125;
    D1 -- yes --> R["REJECT"]
    D1 -- no --> D2&#123;"Signer chains to dbx cert?"&#125;
    D2 -- yes --> R
    D2 -- no --> D3&#123;"Hash in db, OR signer chains to db cert?"&#125;
    D3 -- yes --> A["ACCEPT (load image)"]
    D3 -- no --> R
</Mermaid>

The de facto roots for x86 PCs are *two* Microsoft-rooted certificate authorities, both pre-trusted in db on essentially every certified Windows-class system: the **Microsoft Windows Production PCA 2011**, which signs Microsoft's own Windows boot binaries (`bootmgfw.efi`, `bootmgr.efi`, `winload.efi`), and the **Microsoft Corporation UEFI CA 2011**, which signs third-party UEFI binaries -- Linux's `shim`, option ROMs, and third-party firmware drivers [@sbat-shim, @oem-secure-boot]. The rhboot/shim project documents the arrangement: every certified PC is "typically configured to trust 2 authorities for signing UEFI boot code, the Microsoft UEFI Certificate Authority (CA) and Windows CA" [@sbat-shim]. The fact that *both* are Microsoft-rooted is the reason Secure Boot, as deployed, and "Microsoft is the gatekeeper of which operating systems may boot" are operationally the same thing. The UEFI Forum's specification did not require that monoculture. The economics did. There are exactly two certificate authorities every OEM is willing to trust by default, and both belong to the operating-system vendor whose installer media every OEM ships.

<Definition term="Microsoft 2011 CAs / Windows UEFI CA 2023">
The X.509 certificate authorities Microsoft uses for Secure Boot. Two CAs from the 2011 family ship pre-installed in db on essentially every Windows-certified PC: the **Microsoft Windows Production PCA 2011** signs Microsoft's own Windows boot binaries, and the **Microsoft Corporation UEFI CA 2011** signs third-party UEFI binaries (Linux's `shim`, option ROMs, third-party firmware drivers). Both 2011 certificates begin expiring in late June 2026. The **Windows UEFI CA 2023** is their successor; its industry-wide enrolment began in May 2023 with the KB5025885 program responding to CVE-2023-24932 and is still rolling out under phased automatic enrolment via monthly Windows Updates as of 2026.
</Definition>

<Aside label="The shim escape hatch">
Linux's path through Secure Boot runs through `shim.efi`, a small bootloader Matthew Garrett released on November 30, 2012 -- his last day at Red Hat [@garrett-shim-2012, @phoronix-shim]. The trick is structural: Microsoft signs `shim` itself; `shim` is shipped on the install media of every major Linux distribution; once running, `shim` validates a distribution-signed `grubx64.efi` (or kernel) using a key the distribution embeds, *or* a Machine Owner Key (MOK) the user has enrolled at install time. Garrett credits the MOK design to engineers at SUSE. The arrangement is the open-source community's pressure valve against the Microsoft monoculture: Linux still boots on Secure Boot hardware because Microsoft signs one bootloader that delegates trust to a community-managed key store. It also explains why Linux dual-boot installs began breaking after May 2023 -- the certificates that signed older copies of `shim` are being rotated out.
</Aside>

The dbx variable carries the operational weight of the system. If a signed bootloader is found to be vulnerable, the only blocking remedy is to add its hash to dbx. dbx lives in NV-RAM; on commodity Windows PCs the storage budget is roughly 32 KB total [@sbat-shim].<Sidenote>The 32 KB figure comes from the rhboot/shim project's SBAT documentation, which notes that the BootHole disclosure of July 2020 -- a single GRUB vulnerability requiring revocation of three certificates and roughly 150 image hashes -- consumed approximately 10 KB of dbx in one event. That is one third of the available capacity, used up by one CVE.</Sidenote> Linux distributions and Windows share the same dbx region. A botched update can refuse to validate a bootloader that the platform actually needs, and there is no remote rollback for a brick-on-write to dbx. Section 9 will show what happens when dbx revocation lags behind a CVE.
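The BootHole arithmetic is easy to reproduce. A back-of-envelope sketch, assuming the UEFI `EFI_SIGNATURE_LIST` layout (a 28-byte list header, then 48 bytes per SHA-256 entry: a 16-byte owner GUID plus a 32-byte digest); the ~1 KB per DER certificate is a rough assumption, not a spec number:

```javascript
// Rough dbx budget arithmetic, assuming the UEFI EFI_SIGNATURE_LIST layout:
// 28-byte list header, then 48 bytes per SHA-256 entry
// (16-byte SignatureOwner GUID + 32-byte digest).
const LIST_HEADER = 28;
const SHA256_ENTRY = 16 + 32;

function hashListBytes(nHashes) {
  return LIST_HEADER + nHashes * SHA256_ENTRY;
}

// BootHole-scale event: ~150 image hashes plus 3 certificates.
// ~1 KB per DER-encoded certificate is an assumption for illustration.
const bootHole = hashListBytes(150) + 3 * (LIST_HEADER + 16 + 1024);

const budget = 32 * 1024; // ~32 KB NV-RAM budget on commodity PCs
console.log(bootHole);    // 10432 bytes, roughly 10 KB
console.log((bootHole / budget * 100).toFixed(0) + "% of dbx in one event");
```

One CVE, one third of the global revocation budget: that is the constraint every dbx decision since 2020 has been made under.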

The CA-2023 transition is therefore not a routine certificate rotation. The original 2011 certificates begin expiring in late June 2026. Microsoft's industry-wide Windows UEFI CA 2023 rollout started May 2023 with KB5025885, the patch advisory that paired with CVE-2023-24932, and is on track to be, in Microsoft's own framing, one of the largest coordinated security maintenance efforts the Windows install base has ever seen [@ms-windows-blog-2026, @kb5025885]. The phasing, as published: enrol the new CA in db; sign new bootloaders with it; enrol new dbx entries to revoke older signed-but-vulnerable binaries; finally, revoke the 2011 CA. The published cautionary text is unambiguous: once the irreversible mitigation step is enabled on a device, "it cannot be reverted if you continue to use Secure Boot on that device. Even reformatting of the disk will not remove the revocations if they have already been applied" [@kb5025885].

<RunnableCode lang="js" title="Conceptual LoadImage decision">{`
// Sketch of what UEFI does for every PE/COFF binary it loads.
// The helpers are stand-ins for real PE/Authenticode parsing: here the
// "image" object simply carries its own hash and signer.
const authenticodeHash = (image) => image.hash;
const parseSignerCert = (image) => image.signer;
const chainsTo = (cert, certs) => certs.includes(cert);

function loadImage(image, db, dbx) {
  const hash = authenticodeHash(image);
  const signerCert = parseSignerCert(image);

  if (dbx.hashes.includes(hash)) return { ok: false, reason: "dbx hash" };
  if (signerCert && chainsTo(signerCert, dbx.certs)) {
    return { ok: false, reason: "dbx cert" };
  }
  if (db.hashes.includes(hash)) return { ok: true, reason: "db hash" };
  if (signerCert && chainsTo(signerCert, db.certs)) {
    return { ok: true, reason: "db cert" };
  }
  return { ok: false, reason: "not in db" };
}

const decision = loadImage(
  { hash: "abc", signer: "Microsoft Windows Production PCA 2011" },
  { hashes: [], certs: ["Microsoft Windows Production PCA 2011", "Microsoft Corporation UEFI CA 2011"] },
  { hashes: [], certs: [] }
);
console.log(decision); // { ok: true, reason: "db cert" }
`}</RunnableCode>

Here is the thing the patch-cadence narrative fails to convey: *patched is not revoked*. Microsoft can ship a fixed `bootmgfw.efi` next month. It cannot delete the old, vulnerable, validly-signed copy from every machine in the world; as long as the old binary's hash is not in dbx, Secure Boot will load it. Verification, moreover, is a one-shot signature check at a firmware boundary. The chain still has to extend all the way to userland, and Microsoft's name for what comes next is *Trusted Boot*.

## 5. Trusted Boot: bootmgfw.efi, winload.efi, and the Windows-specific chain

Secure Boot can answer "is this `.efi` file in our allowlist?" It cannot answer "is every kernel-mode driver loaded after this `.efi` file in our allowlist?" That second question is what Trusted Boot exists to answer.

<Definition term="Trusted Boot">
Microsoft's term for the post-firmware portion of the verified boot chain. UEFI Secure Boot validates `bootmgfw.efi`. `bootmgfw.efi` validates `winload.efi`. `winload.efi` validates `ntoskrnl.exe`, the Hardware Abstraction Layer, every boot-start driver, and the ELAM driver. `ntoskrnl.exe` validates every driver loaded thereafter against the active code-integrity policy. Trusted Boot is therefore the Microsoft policy enforcement chain layered *on top of* Secure Boot's firmware-side verifier; it is what extends the signature check past the operating-system loader into kernel mode.
</Definition>

The mechanics, after the firmware hands control to `bootmgfw.efi`: the boot manager reads the Boot Configuration Data store, locates `winload.efi` (or `winresume.efi` for resuming from hibernation), and enforces the boot-time integrity policy on every component it loads [@ms-trusted-boot]. The verifier handoff, however, is more interesting than the Microsoft Learn paragraph suggests. It runs in three stages.

**Stage A: `winload`'s in-image `bootlib` verifier.** `winload.efi` does not call kernel-mode `ci.dll` to validate boot images. It carries its own boot-time code-integrity verifier inside the `bootlib` boot library shared with `bootmgr`. Reverse-engineering work on the Elysium bootkit research framework reconstructed the call chain inside `winload.efi`: `OslLoadDrivers` -> `OslLoadImage` -> `LdrpLoadImage` -> `BlImgLoadPEImageEx` -> `ImgpLoadPEImage`, with `ImgpValidateImageHash` performing the Authenticode digest check against the trusted boot policy embedded in `winload` itself [@elysium-bootkit]. Boot-start drivers, `ntoskrnl.exe`, the Hardware Abstraction Layer, and the ELAM driver all flow through this chain before kernel mode is alive to do anything about it.

**Stage B: handoff via `LOADER_PARAMETER_EXTENSION`.** When `winload.efi` is done validating, it has to hand the validated state across the loader-kernel boundary. The mechanism is `LOADER_PARAMETER_EXTENSION` (LPE), the under-documented structure that hangs off the `LOADER_PARAMETER_BLOCK` whose address the loader passes to the kernel.<Sidenote>The LPE structure has been Microsoft-internal in every shipping Windows release; the public reference Geoff Chappell maintains is the canonical third-party reverse-engineering of its layout across Windows builds. New fields are added at the tail of the structure when shipping features need to communicate state across the loader/kernel boundary. The fact that Smart App Control's CI state needed two new LPE fields is a small but telling indicator of how much policy state Trusted Boot now carries.</Sidenote> Geoff Chappell's reference describes the LPE as "part of the mechanism through which the kernel and HAL learn the initialisation data that was gathered by the loader" [@geoffchappell-lpe]. The structure has grown across Windows builds; with Smart App Control on Windows 11 22H2, two new fields -- `CodeIntegrityData` and `CodeIntegrityDataSize` -- were added so that the loader-validated CI state, including the active SiPolicy and the pre-validated boot-start driver list, would survive the handoff intact [@n4r1b-sac].

**Stage C: kernel-mode `ci.dll` continuation.** Only after `ntoskrnl.exe` is itself running does the kernel-mode `ci.dll` come into play. It picks up the SiPolicy state from the LPE and continues the same code-integrity policy enforcement on every kernel-mode image loaded after the loader's window closes -- principally via the `Se`-prefixed validation routines that the kernel's image-load notification routines call into. From that point, every subsequent driver load goes through the same code-integrity gate. The `bootlib` -> LPE -> kernel-mode `ci.dll` decomposition is the underlying mechanism Microsoft's high-level documentation collapses into a single sentence:

<PullQuote>
"The Windows bootloader verifies the digital signature of the Windows kernel before loading it. The Windows kernel, in turn, verifies every other component of the Windows startup process, including boot drivers, startup files, and your anti-malware product's early-launch anti-malware (ELAM) driver." -- Microsoft Learn [@ms-trusted-boot]
</PullQuote>

Trusted Boot is therefore the *Windows-specific* extension of the verifier into kernel mode. UEFI Secure Boot is platform-agnostic; it ships in db on every certified PC. Trusted Boot is the policy engine that reuses the firmware-side trust anchor and walks it forward into `ntoskrnl.exe`. The mechanism for *how* SiPolicy is parsed, how publisher rules are evaluated, and how the kernel's code-integrity state machine handles attempts to load binaries outside policy, lives in this article's App Identity sibling and is not redefined here [@app-identity-sibling].
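The three-stage handoff can be modelled as a toy pipeline. Everything below is a conceptual stand-in -- the real `LOADER_PARAMETER_EXTENSION` is an undocumented C structure and none of these function names are real APIs -- but the shape of the trust transfer is the point: the loader validates, packs its conclusions into the handoff structure, and the kernel-side verifier continues from that state rather than re-deriving it.

```javascript
// Conceptual model of the Trusted Boot verifier handoff. All names are
// illustrative stand-ins for undocumented internals, not real APIs.

// Stage A: winload's in-image bootlib verifier validates each boot image
// against the trusted-boot policy embedded in the loader itself.
function bootlibValidate(image, policy) {
  return policy.trustedSigners.includes(image.signer);
}

// Stage B: the loader packs validated state into an LPE-like object that
// crosses the loader/kernel boundary intact.
function buildLoaderParameterExtension(images, policy) {
  const validated = images.filter((i) => bootlibValidate(i, policy));
  if (validated.length !== images.length) {
    throw new Error("boot-start image failed validation");
  }
  return {
    codeIntegrityData: { policy, preValidated: validated.map((i) => i.name) },
  };
}

// Stage C: kernel-mode ci.dll picks up the policy from the LPE and gates
// every later driver load with the same check.
function ciValidateLateDriver(driver, lpe) {
  return lpe.codeIntegrityData.policy.trustedSigners.includes(driver.signer);
}

const policy = { trustedSigners: ["Microsoft Windows Production PCA 2011"] };
const lpe = buildLoaderParameterExtension(
  [{ name: "ntoskrnl.exe", signer: "Microsoft Windows Production PCA 2011" }],
  policy
);
console.log(ciValidateLateDriver({ signer: "EvilCorp" }, lpe)); // false
```

The model also shows where the single point of failure sits: if an attacker can corrupt the LPE-equivalent state between stages B and C, the kernel continues enforcing a policy the loader never validated.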

There is a failure mode you can see coming. If the trusted boot manager itself is signed but vulnerable, the chain still validates, the policy still enforces, and the entire defence is bypassed. The signature is correct; the code path is what is wrong. Section 9 will show what happens when an older `bootmgfw.efi` revision contains a memory-map manipulation flaw that lets attacker-controlled data flow before the SiPolicy enforcement engine is up. That is the BlackLotus failure. For now, hold the framing: Trusted Boot's guarantee is "every kernel-mode component has a valid Microsoft signature." It is not "every Microsoft signature in this chain corresponds to a binary that is itself secure."

Verification can stop loading bad code. It cannot prove that good code was loaded. For that we need a parallel rail.

## 6. Measured Boot: SRTM, the TPM event log, and PCR 0-7+11 in order

Verification stops bad code from running. *Measurement* makes sure you can prove, after the fact, what code did run. The two rails do not protect against the same thing. This is the article's mechanism-densest section, and the place a few key terms have to be exactly right.

<Definition term="Static Root of Trust for Measurement (SRTM)">
A boot-time chain of cryptographic measurements anchored in a Core Root of Trust for Measurement (CRTM): a code segment in the platform's flash that is implicitly trusted because it is immutable and is *measured by construction* into the TPM before any flexible code runs. SRTM extends one PCR per component as the chain unfolds, producing a tamper-evident log of exactly which firmware, boot manager, and kernel the platform launched. The measurement does not stop bad code; it records what code ran so a verifier can decide later.
</Definition>

The TPM extend primitive is the cryptographic core. The TPM never overwrites a PCR. When the platform asks the TPM to extend PCR `N` with a measurement `m`, the TPM does:

$$\mathrm{PCR}[N] := H\bigl(\mathrm{PCR}[N] \,\Vert\, m\bigr)$$

where `H` is the bank's hash algorithm (SHA-1 on TPM 1.2; SHA-1 and SHA-256 banks both required by the TCG PC Client Platform Firmware Profile on TPM 2.0; SHA-384 and SHA3 banks optional and present on some newer parts) and `||` is byte concatenation [@syss-bitpixie]. The TPM 2.0 specification was finalised by the Trusted Computing Group on 9 April 2014 [@wp-tpm]. The mechanism guarantees that any later PCR value is a function of every prior measurement in the order it was extended -- you cannot rewind, and you cannot reorder. The TPM 2.0 PC Client profile specifies at least 24 PCRs, the first 16 of which are append-only and non-resettable until the platform itself is reset [@syss-bitpixie]. The full TPM `extend` mechanics are covered in this article's TPM sibling; we do not redefine them here [@tpm-sibling].

The PCR allocation, per the TCG PC Client Platform Firmware Profile, corroborated against the SySS Bitpixie writeup [@syss-bitpixie] and Microsoft Learn [@ms-secure-boot-process]:

| PCR | Extended by | What it measures |
|-----|-------------|------------------|
| 0 | CRTM, SEC, PEI | SRTM core firmware code (BIOS/UEFI) |
| 1 | PEI / DXE | Host platform configuration (CPU microcode, NVRAM settings) |
| 2 | DXE | UEFI driver and application code (option ROMs) |
| 3 | DXE | UEFI driver and application configuration / data |
| 4 | DXE / BDS | Hashes of all boot managers in the boot path; `bootmgfw.efi` lands here |
| 5 | BDS | Boot manager configuration and data; GPT partition table; boot attempts |
| 6 | DXE / OEM | Host platform manufacturer specific |
| 7 | DXE | State of Secure Boot: PK, KEK, db, dbx hashes; the `SecureBoot` variable; signing certificate of every loaded image |
| 11 | `bootmgfw.efi` | BitLocker access control: locked after VMK is obtained |

<Mermaid caption="The SRTM extend chain. The CRTM measures itself implicitly by being immutable; firmware measures itself into PCR[0] and the boot manager into PCR[4]; the boot manager measures the kernel; the loader measures Secure Boot policy into PCR[7]; the boot manager seals BitLocker into PCR[11].">
sequenceDiagram
    participant CRTM
    participant SEC
    participant DXE
    participant BMGR as bootmgfw.efi
    participant TPM as TPM PCRs
    CRTM->>TPM: extend PCR[0] with SRTM hash
    SEC->>TPM: extend PCR[1] with platform config
    DXE->>TPM: extend PCR[2] with option-ROM code
    DXE->>TPM: extend PCR[7] with Secure Boot state
    DXE->>TPM: extend PCR[4] with bootmgfw.efi hash
    BMGR->>TPM: extend PCR[4] with winload.efi hash
    BMGR->>TPM: extend PCR[7] with signer cert of winload
    BMGR->>TPM: extend PCR[11] with BitLocker access flag
</Mermaid>

PCR[7] deserves a section of its own. On modern Windows, *PCR[7] is the canonical seal target* for BitLocker. A protector sealed to PCR[7] unwraps cleanly across firmware updates, microcode revisions, and option-ROM changes, because PCR[7] reflects only the Secure Boot state -- the keys in PK, KEK, db, dbx, the `SecureBoot` variable, and the signing certificates of loaded images. PCR[0..4] are too volatile for sealing on a real fleet because every BIOS update changes them. PCR[7] changes only when Secure Boot policy itself changes [@syss-bitpixie, @ms-system-guard]. The full BitLocker key hierarchy is covered in this article's BitLocker sibling [@bitlocker-sibling]; here we are placing PCR[7] in the chain.

> **Key idea:** Verification stops bad code. Measurement records what code ran. Neither rail is sufficient alone. Modern Windows boot integrity needs both rails reaching the same place -- the kernel and the Secure Kernel -- before user-mode runtime defences take over.

The TCG event log makes the measurement chain useful for more than sealing. Every `extend` is logged through the TCG2 EFI Protocol with the hash, the algorithm, and a description of what was measured. A verifier (BitLocker locally; an attestation service remotely) can replay the log to recover *which binary hashed to which PCR value*, and -- if the replay does not match the live PCRs -- detect tampering. Microsoft Learn describes exactly that path: "the PC's firmware logs the boot process, and Windows can send it to a trusted server that can objectively assess the PC's health" [@ms-secure-boot-process].

There is a second root of measurement that sidesteps the firmware-trust regress entirely. DRTM -- Dynamic Root of Trust for Measurement -- is late-launched after firmware boot, via Intel TXT's `GETSEC[SENTER]` instruction or AMD's `SKINIT`. It resets PCR[17..22] at locality 4 and re-anchors a measurement chain in a vendor-controlled allowlistable module that does not depend on the DXE phase having been clean [@wp-txt, @ms-system-guard]. Microsoft documents the motivation in plain language:

<PullQuote>
"There are thousands of PC vendors that produce many models with different UEFI BIOS versions. This creates an incredibly large number of SRTM measurements upon bootup." [@ms-system-guard]
</PullQuote>

The argument: SRTM measurements are platform-specific. An attestation service that wants to know whether a given device booted clean must hold an allowlist of SRTM measurements covering N OEMs * M models * K firmware revisions. The allowlist explodes; the blocklist is asymmetric in the attacker's favour. DRTM collapses the allowlist by defining one small, well-known late-launched measurement chain that the attestation service can recognise across every Secured-core PC.

<Definition term="Dynamic Root of Trust for Measurement (DRTM)">
A late-launched measurement chain that re-anchors trust *after* firmware boot, by using a CPU instruction (`GETSEC[SENTER]` on Intel, `SKINIT` on AMD) to reset a designated set of PCRs and execute a small, vendor-controlled measured launch module. DRTM is Microsoft's answer to the SRTM allowlist explosion. It powers System Guard Secure Launch, which Windows 10 1809 introduced; on supported hardware, the late-launched module brings up the hypervisor and Secure Kernel from a trust anchor that the firmware cannot influence.
</Definition>

The DRTM PCR allocation is parallel to SRTM but lives in a separate range, PCR[17..22], reset only by the late-launch event. Per the TCG PC Client Platform Firmware Profile (corroborated against the Wikipedia Trusted Execution Technology mirror, since TCG returns HTTP 403 to non-browser fetches) and Microsoft's System Guard documentation [@wp-txt, @ms-system-guard]:

| PCR | Reset by | What it measures |
|-----|----------|------------------|
| 17 | `GETSEC[SENTER]` / `SKINIT` at locality 4 | DRTM-event measurement and Launch Control Policy hash extended by the SINIT ACM (Intel TXT) or the Secure Loader block hash (`SKINIT` on AMD) |
| 18 | locality 4 | Trusted-OS start-up code (the Measured Launch Environment itself) |
| 19 | locality 4 | Trusted-OS measurement, e.g., OS configuration |
| 20 | locality 4 | Trusted-OS measurement, e.g., OS kernel and other code |
| 21 | locality 4 | Reserved for and defined by the Trusted OS |
| 22 | locality 4 | Reserved for and defined by the Trusted OS |

The reset semantics are the load-bearing detail. PCR[0..16] are append-only after platform reset; they cannot be cleared without rebooting the box. PCR[17..22] are different: they can be reset *during runtime*, but only by an atomic late-launch event. That asymmetry is what makes DRTM's anchor verifiable [@wp-txt, @syss-bitpixie].

The mechanism that enforces it is *TPM locality*. Locality is a side-band attribute on every TPM command identifying which entity issued the request. Locality 0 is general OS and application traffic. **Locality 4 is assertable only by the CPU itself**, during the atomic `GETSEC[SENTER]` (Intel TXT) or `SKINIT` (AMD) sequence. The TPM accepts a `Reset` of PCR[17..22] only when the request arrives tagged with locality 4. No software running outside the late-launch instruction can forge that tag. That is the structural reason DRTM's late-launch is verifiable rather than forgeable [@wp-txt].
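A toy model makes the gate visible. The class below is a deliberately simplified stand-in for the TPM (the method names and the `ToyTpm` type are inventions for this sketch, not the TPM 2.0 command set): DRTM PCRs hold their all-ones power-on default until a reset request arrives at locality 4, and any reset attempted from OS-level locality 0 is refused:

```js
// Toy model of TPM locality gating: PCR[17..22] accept a reset only when
// the command arrives at locality 4, which only the CPU's late-launch
// sequence (GETSEC[SENTER] / SKINIT) can assert. Hypothetical model, not
// the real TPM 2.0 command interface.
class ToyTpm {
  constructor() {
    // DRTM PCRs hold all 0xFF at power-on until a late launch resets them.
    this.pcrs = new Map();
    for (let i = 17; i <= 22; i++) this.pcrs.set(i, Buffer.alloc(32, 0xff));
  }
  resetDrtmPcrs(locality) {
    if (locality !== 4) throw new Error('PCR reset refused: locality != 4');
    for (let i = 17; i <= 22; i++) this.pcrs.set(i, Buffer.alloc(32, 0x00));
  }
}

const tpm = new ToyTpm();

// OS-level software issues commands at locality 0: the reset is refused.
try {
  tpm.resetDrtmPcrs(0);
} catch (e) {
  console.log(e.message); // PCR reset refused: locality != 4
}

// The late-launch instruction asserts locality 4: the reset succeeds, and
// PCR[17] drops to all-zero, ready for the measured launch module's extends.
tpm.resetDrtmPcrs(4);
console.log('PCR[17] zeroed after late launch:',
  tpm.pcrs.get(17).every((b) => b === 0)); // true
```

The hard part in real silicon is that the locality tag cannot be forged by software at all; the toy model can only assert the policy, not the bus-level enforcement.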

The asymmetry pays off for an attestation service. If a remote verifier reads PCR[17] and finds it still at its power-on default (all ones on the SHA-256 bank), DRTM did not happen on this boot. If it reads PCR[17] and finds the chain of extends the SINIT ACM logged, seeded from the post-launch reset value of zero -- schematically, $\mathrm{PCR}[17] := H\bigl(0 \,\Vert\, H(\text{SINIT\_ACM\_hash} \,\Vert\, \text{LCP\_hash})\bigr)$ -- then a CPU-vendor-signed SINIT Authenticated Code Module seeded the chain, and the value is recomputable by the verifier from the published, signed SINIT ACM and the platform's Launch Control Policy [@wp-txt, @ms-system-guard]. The verifier's allowlist for DRTM measurements is bounded by the small set of CPU-vendor-signed measured-launch modules in circulation (SINIT ACMs on Intel TXT; the Secure Loader block measured directly by `SKINIT` on AMD) -- not by the cross-product of OEMs, models, and firmware revisions.

<RunnableCode lang="js" title="Simulating a TPM extend chain in JavaScript">{`
// Demonstrates the PCR extend formula:
//   PCR[N] := H( PCR[N] || measurement )
// Run it to see how PCR[4] would evolve as bootmgfw, winload, and ntoskrnl
// hashes are extended one after another.

const { createHash } = require('node:crypto');
const sha256 = (buf) => createHash('sha256').update(buf).digest();

function extend(pcrHex, measurementHex) {
  const pcr = Buffer.from(pcrHex, 'hex');
  const m = Buffer.from(measurementHex, 'hex');
  return sha256(Buffer.concat([pcr, m])).toString('hex');
}

// Real PCRs start as 32 bytes of zero on a TPM 2.0 reset.
let pcr4 = '00'.repeat(32);

const measurements = [
  { name: 'bootmgfw.efi', hash: 'aa'.repeat(32) },
  { name: 'winload.efi',  hash: 'bb'.repeat(32) },
  { name: 'ntoskrnl.exe', hash: 'cc'.repeat(32) },
];

for (const m of measurements) {
  pcr4 = extend(pcr4, m.hash);
  console.log(\`after \${m.name}: PCR[4] = \${pcr4.slice(0, 16)}...\`);
}
`}</RunnableCode>

We now have two rails of trust ready to converge in the kernel. The next thing the kernel has to do is hand control to defenders that can keep the chain alive into runtime.

## 7. ELAM, the kernel, and the Secure Kernel bring-up: where the chain ends

Trusted Boot has signed every kernel-mode binary along the path. Then what? The chain still has to outlive the boot.

<Definition term="Early Launch Anti-Malware (ELAM)">
A specially-signed driver class introduced in Windows 8 (2012) that loads as the *first* boot-start driver -- ahead of every other boot-start driver -- and classifies each subsequent boot-start driver as *Good*, *Bad*, *Unknown*, or *BadButCritical* before the operating-system loader allows it to load [@ms-elam, @ms-elam-driver-requirements]. ELAM's classification influences whether Windows loads the driver. The ELAM driver itself is a Microsoft-signed binary in the `Early-Launch` service-start group and is itself measured into the SRTM chain; the user-mode anti-malware service that consumes its classification events runs as a Protected Process Light (PPL).
</Definition>

ELAM exists for a specific reason. The boot-start group includes anti-malware, device, and disk drivers that have to load before the rest of the operating system. Before Windows 8, those drivers all loaded in an undefined order, with no anti-malware product running yet. A bootkit that survived the kernel's signature check (or a driver that was signed but malicious) had a window in which nothing was watching. ELAM closed that window by ordering one driver -- a Microsoft-signed AM driver -- as the first boot-start driver, and giving it the right to classify those drivers as they loaded [@ms-elam]. ELAM is itself a boot-start driver; the Microsoft documentation specifies the INF requirement plainly: "An ELAM Driver advertises its group as 'Early-Launch'" [@ms-elam-driver-requirements]. The associated user-mode anti-malware service runs as a Protected Process Light (PPL), so even SYSTEM-privileged user-mode code cannot inject into it [@ms-elam, @app-identity-sibling].<Sidenote>The classification surface ELAM exposes is the four-element set Good / Bad / Unknown / BadButCritical, enumerated in Microsoft's `BDCB_CLASSIFICATION` reference (ntddk.h) as `BdCbClassificationKnownGoodImage`, `BdCbClassificationKnownBadImage`, `BdCbClassificationUnknownImage`, and `BdCbClassificationKnownBadImageBootCritical` (the ELAM driver requirements page itself only enumerates three classes in prose; the fourth lives in the enum reference) [@ms-elam-driver-requirements, @ms-bdcb-classification]. The fourth category exists because some drivers are required for the system to boot; the AM driver's verdict on those is advisory rather than blocking. Defender ships the ELAM driver in Windows; Microsoft's interface allows third-party AM products to ship their own [@ms-elam].</Sidenote>
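The decision the classification drives can be sketched as a policy table. The sketch below is a hypothetical model, not the Windows loader's actual code: the four-class enum mirrors `BDCB_CLASSIFICATION`, and the `policy` values are stand-ins for the Group Policy "Boot-Start Driver Initialization Policy" options (load good only; good and unknown; good, unknown and bad-but-critical, the default; all drivers):

```js
// Sketch of the load decision an ELAM verdict drives. Classification names
// mirror BDCB_CLASSIFICATION (ntddk.h); policy names are illustrative
// stand-ins for the boot-start driver initialisation policy options.
const Classification = Object.freeze({
  KnownGoodImage: 'Good',
  KnownBadImage: 'Bad',
  UnknownImage: 'Unknown',
  KnownBadImageBootCritical: 'BadButCritical',
});

function shouldLoadBootStartDriver(classification, policy) {
  switch (classification) {
    case Classification.KnownGoodImage:
      return true;
    case Classification.KnownBadImage:
      // Blocked unless the policy accepts everything.
      return policy === 'AllDrivers';
    case Classification.UnknownImage:
      // The default policy loads unknowns; a stricter fleet policy refuses them.
      return policy !== 'GoodOnly';
    case Classification.KnownBadImageBootCritical:
      // Advisory only: refusing a boot-critical driver would make the machine unbootable.
      return true;
  }
}

const policy = 'GoodUnknownAndBadCritical'; // stand-in for the Windows default
for (const c of Object.values(Classification)) {
  console.log(`${c}: ${shouldLoadBootStartDriver(c, policy) ? 'load' : 'block'}`);
}
```

The `BadButCritical` branch is the structurally interesting one: it encodes, in one line, why ELAM's verdict is an input to a policy rather than a veto.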

The kernel itself does the next set of jobs. `ntoskrnl.exe` initialises memory protections and DMA defences. Kernel DMA Protection enables the IOMMU (Intel VT-d or AMD-Vi) and splits PCIe peripherals into two regimes: devices with DMA-Remapping-compatible drivers are enumerated and started normally but can DMA only into memory their driver has assigned, while devices with incompatible drivers are blocked from starting and from performing DMA at all until an authorised user signs in or unlocks the screen. Both regimes block the drive-by-DMA pattern that targets arbitrary kernel memory and defend against malicious Thunderbolt peripherals [@ms-kernel-dma-protection]. The Driver Block List, enforced at code-integrity load time, refuses to load a recognised set of vulnerable signed drivers (the canonical example is *gdrv2.sys*); details in this article's App Identity sibling [@app-identity-sibling]. HVCI (Hypervisor-Enforced Code Integrity, also called Memory Integrity) is loaded inside the Secure Kernel and enforces W^X on all kernel-mode memory; details in the Secure Kernel sibling [@secure-kernel-sibling].

Then the Secure Kernel comes up. `securekernel.exe` and `skci.dll` initialise in Virtual Trust Level 1 -- a Hyper-V-managed isolation domain that the normal Windows kernel in VTL0 cannot read or write. The first Trustlet is LSAIso, the isolated process Credential Guard uses to hold NTLM hashes and Kerberos tickets out of reach of any kernel-mode attacker [@secure-kernel-sibling]. Control returns to the normal kernel; the user-mode tail begins.

<Mermaid caption="Kernel and Secure Kernel bring-up. winload.efi starts ntoskrnl.exe; ntoskrnl.exe loads ELAM as the first boot-start driver, then brings up the Secure Kernel in VTL1; LSAIso (Credential Guard) is the first Trustlet. Then SMSS, wininit, and winlogon run.">
flowchart TD
    WL["winload.efi"] --> NT["ntoskrnl.exe (VTL0)"]
    NT --> SK["securekernel.exe (VTL1)"]
    SK --> LSA["LSAIso (Credential Guard Trustlet)"]
    NT --> ELAM["ELAM driver"]
    ELAM --> BS["boot-start drivers (classified by ELAM)"]
    BS --> SMSS["smss.exe"]
    SMSS --> WI["wininit.exe"]
    WI --> WL2["winlogon.exe"]
    WL2 --> UI["userinit.exe -> explorer.exe"]
</Mermaid>

The user-mode tail adds no new rungs of verification or measurement. SMSS (the Session Manager) loads system DLLs and starts the first Win32 subsystem session. `wininit.exe` initialises the LSA, the Service Control Manager, and the Local Session Manager. `winlogon.exe` paints the credential UI, calls into Windows Hello [@no-secrets-to-steal-sibling] if applicable, and authenticates the user. `userinit.exe` runs the logon scripts and launches `explorer.exe` [@ms-trusted-boot]. From the boot-integrity perspective, `userinit` is the moment the static-time guarantees of Trusted Boot end and the runtime defences -- Defender, EDR, attestation -- take over.

We have walked the chain end to end. The next question is: when did this chain *actually start working*?

## 8. The breakthroughs that made the chain land (2014-2024)

*Secure Boot existed* in 2012. *Secure Boot worked* (in the sense of defending most of what it claims to defend) only after roughly a decade of operational fixes that almost nobody outside Microsoft and a handful of OEMs ever wrote about. Four breakthroughs deserve naming. The matrix below collates them by *layer fixed* and *fix-delivery vehicle* before the prose treatments that follow.

| # | Breakthrough | Year | Layer it fixed | Fix-delivery vehicle |
|---|--------------|------|----------------|----------------------|
| B1 | PCR[7] becomes the canonical BitLocker seal target | ~2014-2016 | Sealing brittleness; PCR[0..4] churn vs. firmware-revision cadence | Windows servicing + BitLocker policy default change [@syss-bitpixie, @ms-system-guard] |
| B2 | Forced retirement of the Microsoft UEFI CA 2011 | May 2023 - June 2026 | Revocation gap (BlackLotus / Baton Drop) | KB5025885 / CVE-2023-24932 multi-year, opt-in dbx push and CA-2023 enrolment [@kb5025885, @ms-windows-blog-2026] |
| B3 | Secure Kernel becomes the launch destination | Win10 2015 - Win11 2021 | "Kernel signed" is insufficient (TDL-4 lesson) | OS feature ship and WHCP requirement; HVCI / Driver Block List default-on by 2024 [@ms-trusted-boot, @secure-kernel-sibling] |
| B4 | Pluton arrives as a Microsoft-firmware-authored RoT | Nov 2020 announcement; Q1 2022 first silicon | LPC/SPI bus-sniffing class against discrete TPMs; OEM patch-cadence latency for fTPM/PTT firmware | Windows-Update-delivered Pluton firmware (alongside UEFI capsule), Rust-based on 2024+ AMD/Intel parts [@ms-pluton-blog, @ms-pluton] |

The first row is operational, not architectural: PCR[7] becoming the canonical BitLocker seal target, somewhere between Windows 8.1 and Windows 10 1607 [@syss-bitpixie, @ms-system-guard]. Before PCR[7], BitLocker sealed against PCR[0..4]: firmware code, platform configuration, option ROMs, option-ROM configuration, and boot-manager hashes. Every UEFI update -- and on real fleets they happen monthly -- changed PCR[0..4] and forced BitLocker into recovery, which forced an IT staffer to find the recovery key, which was annoying enough to make people turn BitLocker off. PCR[7] sealing decoupled the BitLocker protector from firmware-revision churn -- the operational fix that made Measured Boot durable in practice on a fleet of thousands of laptops with monthly UEFI capsule updates.

The second row is the forced retirement of the Microsoft UEFI CA 2011, which began in May 2023 with KB5025885 and CVE-2023-24932 and is on track to complete in late 2026 [@kb5025885, @ms-windows-blog-2026]. This was the first serious dbx housekeeping in a decade. The relevant point: the fix had to be a *programme*, not a hotfix, because dbx is too small to handle a one-shot revocation of a CA-rooted set without bricking either some Linux dual-boots or some Windows machines. The CA-2023 rollout phases the work across four years.

The third was VBS and the Secure Kernel becoming the launch target the boot chain was actually defending. Without the Secure Kernel as a destination, Trusted Boot's guarantee ended at "the kernel is signed", which TDL-4 had already shown was insufficient -- a signed kernel is of limited use if the SYSTEM-privileged user-mode code that follows can rewrite kernel memory through a vulnerable signed driver. The Secure Kernel arrived in Windows 10 1507 (2015) and matured into its enforced-by-default form in Windows 11 (2021), at which point the chain had a hardware-isolated destination that even a SYSTEM-level attacker could not reach without a hypervisor exploit [@secure-kernel-sibling].

The fourth is still landing. Pluton, the cryptoprocessor whose firmware Microsoft (not the OEM) ships and updates, was announced in November 2020 and reached its first silicon -- AMD Ryzen 6000 -- in Q1 2022 [@ms-pluton-blog, @ms-pluton]. Pluton is not yet ubiquitous, and its Secure Boot story is pending: as of 2026, Pluton ships as a TPM 2.0 implementation [@ms-pluton-as-tpm], not as a replacement verifier. Section 10 unpacks why the Microsoft-firmware-on-silicon-Microsoft-doesn't-own model matters more than the part numbers do.

These were the operational fixes. The architectural breaks they were responding to are the next section.

## 9. The boot-chain attacks that actually worked

There has never been a public Secure Boot attack that broke the cryptographic primitive. Every successful attack has exploited the same gap: between fixing a vulnerability and revoking the signed binaries that carried it. The CVE numbers change. The structure does not.

| Attack | Year | Rung broken | Prerequisite | dbx state at disclosure | Fix path |
|--------|------|-------------|--------------|-------------------------|----------|
| ESPecter | 2021 | ESP-resident bootmgr replacement | Secure Boot disabled | n/a | Enable Secure Boot |
| FinSpy UEFI | 2021 | bootmgfw.efi replaced on ESP | Secure Boot disabled | n/a | Enable Secure Boot |
| BlackLotus / CVE-2022-21894 (Baton Drop) | 2022-23 | Signed-but-vulnerable older bootmgfw | Patched but unrevoked old binaries | Old binaries not revoked | dbx update via KB5025885 |
| Bitpixie / CVE-2023-21563 | 2022-24 | PXE soft-reboot leaks BitLocker VMK | TPM-only BitLocker; LAN + keyboard | n/a (no signature break) | Pre-boot PIN; KB5025885 |
| LogoFAIL / CVE-2023-39539 et al. | 2023 | DXE-phase image-parser RCE | UEFI logo customisation accepting attacker BMP | n/a | OEM UEFI updates |
| Bootkitty | 2024 | Self-signed PoC; Secure Boot disabled or LogoFAIL | Linux target | n/a | Enable Secure Boot; patch LogoFAIL |
| WinRE / CVE-2024-20666 family | 2024 | Recovery Environment downgrade | TPM-only BitLocker; reachable WinRE | n/a | Servicing stack updates |

ESPecter (ESET, October 2021) [@eset-especter] is the simplest case. It is an ESP-resident bootkit that bypasses Driver Signature Enforcement to load its own unsigned kernel driver -- but only on systems with Secure Boot disabled. ESPecter is in the table to make the category visible: the ESP is a writable FAT partition with no signature on the contents, and any malware that can write to the ESP and persuade the firmware to boot from a different `bootmgfw` path can win on a non-Secure-Boot system. The fix is to turn Secure Boot on.

FinSpy (Kaspersky, September 2021) [@kaspersky-finspy] is the same attack family carrying an actual nation-state-grade payload. Kaspersky's GReAT analysis names the mechanism plainly: "All machines infected with the UEFI bootkit had the Windows Boot Manager (`bootmgfw.efi`) replaced with a malicious one." The malicious `bootmgfw` injected code into `winlogon.exe` for persistence. Again, Secure Boot disabled was the precondition. FinSpy was the proof that the ESP-resident category had real-world tradecraft attached, not just academic interest.

BlackLotus (advertised on hacking forums from at least October 2022 [@eset-blacklotus]; ESET writeup 1 March 2023) is the case that defines the modern era [@eset-blacklotus, @nvd-cve-2022-21894, @wack0-batondrop]. BlackLotus does not disable Secure Boot. It chain-loads a legitimately-signed but vulnerable older `bootmgfw.efi` revision. The vulnerability is CVE-2022-21894, nicknamed *Baton Drop*: an older boot manager honoured a `truncatememory` setting that removed blocks of memory containing serialised data structures from the memory map. The Wack0 PoC repository describes the primitive: "Windows Boot Applications allow the truncatememory setting to remove blocks of memory containing 'persistent' ranges of serialised data from the memory map, leading to Secure Boot bypass" [@wack0-batondrop]. The chain: boot the legitimately-signed older bootmgfw; trigger Baton Drop; install a malicious SiPolicy that disables further checks; load an unsigned kernel driver; persistently disable HVCI, BitLocker, and Defender from below the trusted-boot horizon. Microsoft's incident-response guide for BlackLotus enumerates six classes of detection artefact: recently-written ESP files, staging directories, registry entries, event-log evidence of policy changes, network indicators, and BCD-log modifications [@ms-blacklotus-guidance]. The NSA and CISA published a joint mitigation guide on 22 June 2023 [@nsa-blacklotus]. ESET's epitaph is the article's recurring quote:

<PullQuote>
"Exploitation is still possible as the affected, validly signed binaries have still not been added to the [UEFI revocation list]." -- Martin Smolar, ESET, March 2023 [@eset-blacklotus]
</PullQuote>

<Mermaid caption="The BlackLotus exploit chain. Every Secure-Boot-enforced step accepts a legitimately Microsoft-signed binary; the break happens because the older signed bootmgfw revision contains the Baton Drop vulnerability, and Microsoft has not added that older signed binary's hash to dbx. Once Baton Drop runs, it disables HVCI/BitLocker/Defender from below the trusted-boot horizon.">
sequenceDiagram
    participant Attacker
    participant ESP as EFI System Partition
    participant FW as UEFI firmware
    participant BMGR as bootmgfw (older signed)
    participant OS as Windows kernel
    Attacker->>ESP: drop legit but old signed bootmgfw
    FW->>BMGR: LoadImage() -- signature OK, hash NOT in dbx
    Attacker->>BMGR: trigger CVE-2022-21894 (truncatememory)
    BMGR->>BMGR: install malicious SiPolicy
    BMGR->>OS: load unsigned driver
    OS->>OS: disable HVCI, BitLocker, Defender
</Mermaid>

The "disables HVCI / BitLocker / Defender from below the trusted-boot horizon" framing in the caption is verbatim from the ESET disclosure and is reinforced by Microsoft's own incident-response guide [@eset-blacklotus, @ms-blacklotus-guidance].

Bitpixie / CVE-2023-21563 [@neodyme-bitpixie, @nvd-cve-2023-21563, @wack0-bitlocker, @syss-bitpixie] is BlackLotus' twin in BitLocker space. The vulnerability was discovered by `Rairii` in August 2022; Thomas Lambertz of Neodyme published a public PoC at 38C3 in December 2024. The mechanism is a downgrade. The attacker boots the target machine into Windows' PXE network-recovery soft-reboot path, which loads a Microsoft-signed but older `bootmgfw.efi` revision. That older revision does not erase the BitLocker VMK from physical memory before the PXE soft-reboot hands off, leaving the VMK in RAM where the chained payload (a signed Linux PE or downgraded WinPE) can dump it. The combination of TPM-only BitLocker (no pre-boot PIN), a Microsoft-Account-defaulted Windows 11 install (which biases toward TPM-only encryption), and physical access to a network port and keyboard lets an attacker decrypt the disk in minutes. Lambertz' framing: "All an attacker needs is the ability to plug in a LAN cable and keyboard to decrypt the disk" [@neodyme-bitpixie]. Bitpixie does not break Secure Boot. It exploits the same operational invariant -- old-but-signed binaries still validate -- in a different protection domain.

> **Note:** TPM-only BitLocker is no longer a defensible default on Windows 11 once Bitpixie's PoC is public; the attack reduces to a LAN cable and a keyboard. See Section 11's `Replace TPM-only BitLocker` bullet for the pre-boot-factor fix list [@neodyme-bitpixie, @kb5025885].

Bootkitty (ESET, 27 November 2024) [@eset-bootkitty] closes a symmetry. Twelve years after Andrea Allievi's September 2012 PoC -- the first UEFI bootkit designed for Windows 8 [@theregister-allievi] -- Bootkitty is the first UEFI bootkit aimed at Linux. Bootkitty was uploaded as a self-signed PoC, so on systems with Secure Boot enabled, it does not load unless the attacker's certificate has been enrolled in the Machine Owner Key (MOK) list -- either by a user via `mokutil` (the ordinary Linux path), by a prior compromise enrolling the cert, or by chaining LogoFAIL (CVE-2023-40238) to inject a rogue MOK certificate from a malicious BMP, as Binarly demonstrated [@binarly-logofail-bootkitty]. Bootkitty patches kernel-image-integrity functions and pre-loads ELF binaries via `init`. ESET later updated the attribution: an analysis posted in early December 2024 traced the build to a Korean Best of the Best (BoB) student project. The structural lesson is platform-orthogonal -- Secure Boot's gaps live in the firmware and revocation surfaces, not in any one operating system.

<Aside label="Bootkitty closes the symmetry">
The Allievi 2012 ITSEC PoC was *the first UEFI bootkit*, full stop -- a research artefact that demonstrated, on Windows 8, the same trick BootRoot had demonstrated on the Windows NT/2000/XP MBR seven years earlier. Twelve years later, Bootkitty is the first UEFI bootkit *for Linux*, also a research artefact. The arc closes a symmetry: UEFI's verifier is platform-agnostic, so its weaknesses are too. A LogoFAIL-style image-parser bug in DXE compromises Secure Boot whether the operating system above it is Windows or Ubuntu. The attacker community needed twelve years to apply the technique to the second platform, but only because the second platform's market share for boot-chain attacks was smaller, not because the verifier was structurally any safer.
</Aside>

LogoFAIL (Binarly REsearch, Black Hat EU 2023; CVE-2023-39539, CVE-2023-40238, CVE-2023-5058; advisory BRLY-2023-006) [@binarly-logofail, @nvd-cve-2023-39539, @binarly-bh-eu23-slides] is the most architectural of the breaks because it compromises the verifier itself. The DXE phase parses a customisable boot logo image -- the OEM splash screen displayed on power-on -- and the parser is a piece of firmware code accepting an attacker-controlled input. Binarly demonstrated parser bugs in the BMP, GIF, JPEG, PCX, and TGA decoders shipped in reference code by all three major Independent BIOS Vendors -- AMI, Insyde, and Phoenix -- across roughly six hundred enterprise device models. A successful exploit gives the attacker code execution at the DXE phase, which is *below* Secure Boot's `LoadImage()` verifier. From DXE, the attacker can do whatever they want before the operating-system loader runs. Bootkitty later carried a LogoFAIL exploit (CVE-2023-40238) to inject a rogue MOK certificate from a malicious BMP, demonstrating the chain end to end [@binarly-logofail-bootkitty].

Finally, the WinRE / `ReAgent.xml` downgrade family (CVE-2024-20666 and successors) is the smaller cousin of the bigger story [@nvd-cve-2024-20666]. The Recovery Environment is a Windows partition with its own boot path; older WinRE images that contain unrevoked vulnerable `bootmgr.efi` revisions can be persuaded to mount the encrypted volume under attacker control. The attack does not break the Secure Boot chain; it routes around it. The point of including it in this catalogue: it is another instance of the dbx-revocation-by-hash limit. As long as an older signed binary exists and is reachable, Secure Boot's verifier will validate it.

Every attack here exploits the same operational invariant: the gap between *patched* and *revoked* is wide, and dbx is too small to close it. The next section examines whether anything can.

## 10. Theoretical limits, open problems, and the Pluton pivot

If every break has been operational, why has nobody fixed the operations? Because the operational bounds are themselves theoretical.

Six structural limits.

**The verifier-of-verifiers regress.** Secure Boot's verifier is firmware code that itself must be trusted. Boot Guard and AMD PSB push that root one rung deeper, into silicon ROM and OTP fuses [@ioactive-psb, @wp-txt]. Pluton arguably pushes it one rung deeper still, into silicon Microsoft directly updates. There is no software-only bottom turtle. Every architecture in the field has *some* layer that is trusted because there is no further layer to which trust can be deferred. The engineering question is *which party* owns that layer -- OEM, Intel, AMD, or Microsoft via Pluton -- and *on whose update cadence* the layer can be patched. IOActive's 2024 review of AMD PSB found that "various major vendors fail to" configure PSB correctly [@ioactive-psb], which is the kind of operational failure mode no cryptographic primitive can fix.

**Why dbx revocation is hard.** dbx is small, shared with Linux, vendor-implemented, and a brick-risk if mismanaged. The list stayed nearly empty for a decade until BlackLotus forced KB5025885's multi-year programme [@nvd-cve-2023-24932]. SBAT (Secure Boot Advanced Targeting), the partial answer in the rhboot/shim project [@sbat-shim], revokes by *generation number* rather than by image hash. SBAT works by embedding a CSV-formatted vendor-and-component-version table in every shim-signed binary; when the firmware-side SBAT policy variable says "minimum acceptable shim generation is 4", every older shim hashes correctly but is refused for being too old. SBAT collapses tens of revocation events that would each consume hundreds of bytes of dbx into a single small metadata bump. The UEFI Forum has, since 2024, deferred to the canonical Microsoft-managed `secureboot_objects` GitHub repository [@uefi-revocationlist, @ms-secureboot-objects] as the source of truth for KEK, db, and dbx contents.

<Definition term="SBAT (Secure Boot Advanced Targeting)">
A revocation scheme designed by the rhboot/shim project to address dbx capacity exhaustion. Instead of revoking each vulnerable signed binary by Authenticode hash (which consumes ~32 bytes of dbx per binary), SBAT revokes by *generation number*: each signed component carries a CSV-formatted version table; a small SBAT policy variable in firmware specifies the minimum generation accepted; older builds are refused without consuming dbx capacity. SBAT is the project's structural answer to the cohort-revocation problem the §4 Sidenote quantifies.
</Definition>
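The generation comparison is simple enough to sketch. The following is a minimal JavaScript model of the check, under simplified assumptions: a real shim parses the `.sbat` section of the EFI binary and compares it against the firmware-side SBAT policy variable, and the component names and generation numbers below are illustrative, not real policy values.

```javascript
// Parse a CSV-formatted SBAT table: "component,generation" per line.
function parseSbat(csv) {
  return Object.fromEntries(
    csv.trim().split('\n').map(line => {
      const [component, generation] = line.split(',');
      return [component, Number(generation)];
    })
  );
}

// A binary is refused if ANY component it carries is older than the
// minimum generation the firmware-side policy demands. Components the
// policy does not mention are unconstrained.
function sbatAllows(policyCsv, binaryCsv) {
  const policy = parseSbat(policyCsv);
  const binary = parseSbat(binaryCsv);
  return Object.entries(policy).every(([component, minGen]) => {
    const gen = binary[component];
    return gen === undefined || gen >= minGen;
  });
}

const policy = 'sbat,1\nshim,4';                   // "minimum acceptable shim generation is 4"
console.log(sbatAllows(policy, 'sbat,1\nshim,4')); // true: current shim is accepted
console.log(sbatAllows(policy, 'sbat,1\nshim,2')); // false: older shim hashes fine, refused anyway
```

Note what the second call demonstrates: the old binary still carries a valid signature and a valid hash, yet one small policy-variable bump refuses every build of that generation without consuming a single dbx entry.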

<Sidenote>The SBAT generation-number scheme is also the model the Microsoft UEFI CA 2023 rollout extends across the wider Windows install base. KB5025885's mitigation strategy combines a small set of dbx hash revocations with a CA rotation, because no single mechanism by itself can revoke a decade's worth of signed bootloaders within the dbx storage budget [@kb5025885, @sbat-shim].</Sidenote>

**The signed-but-vulnerable problem.** As long as Microsoft-signed bootloaders with known flaws exist on the install media of any production Windows installation, Secure Boot must revoke by hash, by SVN, by SiPolicy, or by certificate -- each with collateral damage. Hash revocation does not cover binaries the attacker has not yet seen. SVN revocation forces a rebuild of every signed binary across the install base. SiPolicy revocation depends on the SiPolicy update reaching every machine. CA rotation breaks PXE recovery, recovery USBs, dual-boot Linux, and custom WinPE images.

**Supply chain at the firmware level.** LogoFAIL, BMC-resident attacks against rack servers, Boot Guard key leaks (which OTP fuses cannot recover from), and OEM ME/PSP fuse misconfiguration are the categories Secure Boot cannot, by construction, defend against. The verifier sits above these layers; if these layers are compromised, the verifier is running on a base it cannot trust.

**SRTM allowlist explosion.** With N OEMs, M models per OEM, and K firmware revisions per model, the allowlist of "good SRTM measurements" grows as N × M × K -- and the asymmetry favours the attacker, because the defender must enumerate every good configuration while the attacker needs only one bad one. DRTM late-launch is the only known way to collapse the allowlist. As Microsoft puts it, "DRTM lets the system freely boot into untrusted code initially, but shortly after launches the system into a trusted state" [@ms-system-guard].
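The arithmetic is worth making concrete. The counts below are illustrative assumptions, not survey data; they only show the shape of the problem DRTM collapses.

```javascript
// Back-of-the-envelope sketch of why the SRTM allowlist explodes.
// All counts are illustrative assumptions.
const oems = 50, modelsPerOem = 30, firmwareRevsPerModel = 20;

// SRTM: every (OEM, model, firmware revision) combination yields a distinct
// set of PCR 0-7 measurements a remote verifier must recognise as "good".
const srtmAllowlist = oems * modelsPerOem * firmwareRevsPerModel;

// DRTM: the late-launch re-anchor means the verifier only needs to trust
// the CPU-vendor-signed launch path itself, regardless of which firmware
// booted the machine.
const drtmAllowlist = 2; // e.g. one Intel GETSEC[SENTER] path, one AMD SKINIT path

console.log({ srtmAllowlist, drtmAllowlist }); // { srtmAllowlist: 30000, drtmAllowlist: 2 }
```

Thirty thousand known-good states versus two is the entire argument for late launch in one division.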

**Bus interception of discrete TPMs.** A discrete TPM on the LPC or SPI bus can be sniffed by a physical attacker. This is what motivates the move to Pluton: the TPM moves on-die, the bus disappears, and the BitLocker VMK no longer crosses a sniffable wire [@ms-pluton-blog, @tpm-sibling].

> **Key idea:** Every public Secure Boot break has exploited the gap between *patched* and *revoked*, not the cryptographic primitive. The dbx revocation half-life is the article's invariant. Pluton closes the cadence gap on the verifier-update side. It does not close the gap between patched and revoked.

**The Pluton pivot.** Pluton's pitch, for the boot chain, is to re-anchor both the verification root (long term) and the measurement endpoint (today) in silicon Microsoft can patch [@ms-pluton, @ms-pluton-as-tpm, @ms-pluton-blog]. Pluton implements TPM 2.0 on the CPU die, so the existing measurement chain plugs in unchanged. What changes is the *firmware update cadence* -- Pluton firmware ships through Windows Update, not through OEM UEFI capsules, so there is no longer a multi-month gap between Microsoft fixing a Pluton bug and the fix landing on a customer device. The bus disappears: Pluton's interface is a SoC-internal mailbox, not LPC or SPI, eliminating bus-sniffing as an attack class. And on 2024+ AMD and Intel parts, the Pluton firmware itself is written in Rust, addressing the memory-safety class of bugs that has historically dominated firmware CVEs [@ms-pluton].

<Mermaid caption="Discrete TPM versus Pluton trust topology. The discrete TPM sits across a package boundary on a sniffable LPC or SPI bus; Pluton lives on the SoC die and communicates with the host CPU over an internal mailbox.">
flowchart LR
    subgraph Discrete["Discrete TPM"]
        CPU1["CPU"] -- LPC/SPI bus<br/>(sniffable) --> dTPM["dTPM (Infineon, STM)"]
    end
    subgraph PlutonTopo["Pluton"]
        CPU2["CPU"] -- on-die mailbox --> PL["Pluton"]
    end
</Mermaid>

<Aside label="Why dbx will never simply be larger">
The first reaction to "dbx is too small" is always: make it bigger. Three constraints stop that. First, dbx is implemented by hundreds of OEM firmware vendors against a UEFI specification floor; raising the floor would invalidate every shipped UEFI implementation. Second, dbx is shared between Windows, Linux, ESXi, and other operating systems, so growing it requires coordination across vendors with different incentives. Third -- and the real blocker -- the variable lives in NV-RAM with limited write cycles; a runaway revocation update can brick a board if the write fails partway through. The realistic fix is SBAT for image-version bumps and CA rotation for cohort-scale revocation. Both are partial.
</Aside>
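To put numbers on the capacity ceiling: the article's ~32 bytes per revoked Authenticode hash is just the SHA-256 digest, and the sketch below deliberately ignores the EFI_SIGNATURE_LIST and owner-GUID overhead that a real dbx entry also carries. The 32 KB variable budget is an illustrative assumption, not a specification value -- real budgets vary by OEM firmware.

```javascript
// Rough dbx budget arithmetic. Both figures are assumptions for illustration:
// a 32 KB NV-RAM budget for the dbx variable, and 32 bytes (one SHA-256
// digest) per hash revocation, ignoring per-entry signature-list overhead.
const dbxBudgetBytes = 32 * 1024;
const bytesPerHashRevocation = 32;

const maxHashRevocations = Math.floor(dbxBudgetBytes / bytesPerHashRevocation);
console.log(maxHashRevocations); // 1024 -- a ceiling a decade of signed bootloaders easily exceeds
```

A thousand-odd slots, shared across every operating system on the machine, against every vulnerable signed bootloader build ever shipped: that is the exhaustion problem SBAT and CA rotation each partially route around.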

<Aside label="Apple, Arm, and the design space">
Pluton's design only makes sense against the contrast with the two endpoints of the design space.

At one endpoint sits Apple. Apple authors the silicon, the Boot ROM, the iBoot bootloader, the kernel, and the Secure Enclave Processor's sepOS firmware. The Apple Boot ROM holds the Apple Root certificate authority public key directly; it verifies iBoot before iBoot loads anything else; on older A-series parts an additional Low-Level Bootloader stage is verified by the Boot ROM and in turn loads and verifies iBoot [@apple-boot]. The Secure Enclave Processor is "a dedicated secure subsystem integrated into Apple SoC", isolated from the main processor and reachable only over a mailbox interface; sepOS is an L4 microkernel Apple ships and updates [@apple-sep]. Every stage of secure boot is signed by the same vendor that ships the operating system, and "secure boot begins in silicon and builds a chain of trust through software" [@apple-system]. The cadence is the iOS / iPadOS / macOS update cadence -- Apple-cadence -- because the same release pipeline ships everything from the Boot ROM update vehicle to the user-facing apps.

At the other endpoint sits Trusted Firmware-A on Armv7-A and Armv8-A platforms. TF-A is the reference secure-world software stack with a Secure Monitor at Exception Level 3 [@tfa-home]. The Trusted Board Boot feature implements Arm's TBBR-CLIENT specification (DEN0006D): "The Trusted Board Boot (TBB) feature prevents malicious firmware from running on the platform by authenticating all firmware images up to and including the normal world bootloader" [@tfa-tbb]. The chain runs BL1 -> BL2 -> BL31 / BL32 -> BL33, anchored on a ROTPK (Root of Trust Public Key) fused per silicon family. Because TBBR is a specification rather than a single shipping product, the actual signing keys and update cadence are the OEM's choice. The silicon vendor sets the fuse policy; the platform vendor signs the boot images; the operating-system vendor sees a verified BL33 handoff and trusts whatever ROTPK the silicon was fused with. There is no monoculture, and there is no single update cadence -- which is exactly what makes the security guarantees uneven across Arm devices in practice.

Pluton sits between Apple and TF-A. Microsoft authors the firmware (vendor-monopoly trust anchor) on silicon Microsoft does not own (AMD, Intel, Qualcomm fabricate it) [@ms-pluton, @ms-pluton-blog]. The contrast is sharpest at the firmware-update cadence. Apple-cadence ships everything as one. OEM-UEFI-capsule-cadence is what discrete TPMs and PCH-isolated fTPM/PTT firmware are stuck with -- which is why a known-bad fTPM firmware can take months to land on every customer device after Microsoft posts a fix. Windows-Update-cadence is what Pluton offers: a Microsoft-authored firmware update riding the same channel that ships kernel patches. The same axis -- *who* owns the trust anchor and *on whose schedule* it ships -- is the axis on which the article's main Pluton argument turns.
</Aside>

There are honest residual limits. Pluton is a TPM, not a verification chain; the rest of Secure Boot still runs in DXE-phase firmware that LogoFAIL can compromise. Adoption is non-universal -- as of 2026, Pluton ships on Microsoft Surface, AMD Ryzen 6000-9000/AI series, a subset of Intel Core Ultra (200V / Series 3) parts, and Qualcomm Snapdragon 8cx Gen 3 / X parts powering Copilot+ PCs, with many enterprise PCs still on discrete TPMs [@ms-pluton]. The OEM still owns PK and the firmware update path *outside* Pluton, so the dbx-revocation problem and the OEM-key-leak problem are unaddressed by Pluton alone. Attestation infrastructure -- Device Health Attestation, Intune device-health Conditional Access -- is still maturing, and the policies that consume attestation outcomes are still hand-rolled per organisation.

Pluton closes the cadence gap. It does not close the gap between *patched* and *revoked* -- nothing yet does, and that is the next decade's problem.

## 11. Practical guide, FAQ, and where the chain goes next

This is the part you do today, on whatever Windows machine is in front of you, before this article ages another year.

**Verify Secure Boot state.** Open an elevated PowerShell prompt and run `Confirm-SecureBootUEFI`. The cmdlet returns `True` only if Secure Boot is currently enforcing. `msinfo32` shows BIOS Mode (UEFI vs Legacy) and Secure Boot State on its System Summary page. `Get-SecureBootPolicy` reveals which Secure Boot policy GUID is in force; the default Microsoft policy on a healthy modern install is `{77fa9abd-0359-4d32-bd60-28f4e78f784b}` (the Microsoft owner GUID for the canonical KEK/db/dbx variables) [@ms-secureboot-objects]. `Get-Tpm` and `tpmtool getdeviceinformation` confirm that the TPM is present, owned, and ready [@ms-trusted-boot, @ms-secure-boot-process].

**Read the TPM event log.** `tpmtool gatherlogs` collects the WBCL files into a working folder you can inspect; `Get-WinEvent -LogName Microsoft-Windows-TPM-WMI` exposes the boot and provisioning events. On a healthy boot, the WBCL and the live PCR state replay to the same digest; mismatch is the attestation signal a remote verifier looks for.

<Spoiler kind="solution" label="One-shot health check (PowerShell snippet)">
The following one-liner gathers the basic state in elevated PowerShell:

```powershell
"" |
  Select-Object @{n='SecureBoot'; e={ Confirm-SecureBootUEFI }},
                @{n='SBPolicy';  e={ (Get-SecureBootPolicy).Publisher }},
                @{n='TPMReady';  e={ (Get-Tpm).TpmReady }},
                @{n='UEFI/BIOS'; e={ (Get-CimInstance Win32_BIOS).SMBIOSBIOSVersion }} |
  Format-List
```

If `SecureBoot` is `False`, your boot chain has no firmware-side allowlist. If `TPMReady` is `False`, BitLocker is sealing to nothing -- recovery-key escrow is your only protector.
</Spoiler>

**Verify your Windows UEFI CA 2023 enrolment.** KB5025885 is a phased deployment; the relevant deployment phase is recorded in the registry under `HKLM\SYSTEM\CurrentControlSet\Control\Secureboot\AvailableUpdates` (or the equivalent CSV in the support article) [@kb5025885]. The current UEFI db can be inspected with `Get-SecureBootUEFI -Name db` (`Format-SecureBootUEFI` is its companion cmdlet for packaging variable updates for signing). The 2023 CA's certificate has subject CN `Windows UEFI CA 2023`. If you do not see it in db and you are running a Windows install that has been online during 2025-2026, the deployment programme has not reached your device; consult the KB article for the next steps.

> **Note:** The 2011 CA expires in late June 2026. After that, signed-but-old bootloaders that depend on the 2011 CA will not validate without explicit dbx housekeeping. If your install media is older than May 2023 and you have not run a full set of cumulative updates, you may end up with a machine that boots today but cannot boot a future Windows recovery image. The fix is to apply the KB5025885 updates and verify the 2023 CA is enrolled before that deadline [@kb5025885, @ms-windows-blog-2026].

**Enable DRTM / System Guard Secure Launch where the silicon supports it.** The control surfaces are:

- MDM CSP: `DeviceGuard/ConfigureSystemGuardLaunch`.
- Group Policy: *Computer Configuration > Administrative Templates > System > Device Guard > Turn On Virtualization Based Security > Secure Launch Configuration*.
- Registry: `HKLM\SYSTEM\CurrentControlSet\Control\DeviceGuard\Scenarios\SystemGuard\Enabled = 1`.

Verify via `msinfo32`: under *System Summary* the *Virtualization-based Security Services Configured / Running* line should include *Secure Launch* [@ms-secure-launch-config, @ms-system-guard].

**Replace TPM-only BitLocker.** After Bitpixie, TPM-only BitLocker is no longer a defensible default. Add a pre-boot PIN (`manage-bde -protectors -add C: -tpmAndPin`), a USB key, or use device encryption with pre-boot authentication [@neodyme-bitpixie, @syss-bitpixie].

<RunnableCode lang="js" title="Sketch of a Secure-Boot + TPM health check (analogue of Confirm-SecureBootUEFI / Get-Tpm)">{`
// JavaScript analogue of the PowerShell one-liner above. The real cmdlets
// query NV variables and the TPM driver directly; this just shows the shape
// of what a remote attestation collector would assemble.

function healthCheck(state) {
  return {
    secureBoot:  state.secureBoot === true,
    sbPolicyGuid: state.policyGuid ?? 'unknown',
    tpmReady:    state.tpmReady === true,
    pcr7:        state.pcr7,
    caEnrolled:  state.dbCerts.includes('Windows UEFI CA 2023'),
    notes:       []
  };
}

const live = healthCheck({
  secureBoot:  true,
  policyGuid:  '{77fa9abd-0359-4d32-bd60-28f4e78f784b}',
  tpmReady:    true,
  pcr7:        '0xab4c...',
  dbCerts:     ['Microsoft Windows Production PCA 2011', 'Microsoft Corporation UEFI CA 2011', 'Windows UEFI CA 2023']
});

console.log(live);
`}</RunnableCode>

<FAQ title="Frequently asked questions">

<FAQItem question="Does Secure Boot stop ransomware?">
No. Secure Boot defends the boot chain. Ransomware targets user data after the operating system is up and the user is logged in, so it sees a signed Windows kernel exactly as it should. The defences against ransomware are runtime: Defender, EDR, Controlled Folder Access, and offline backups. Secure Boot is a precondition for trusting the operating system that hosts those runtime defences, but it is not a runtime defence itself.
</FAQItem>

<FAQItem question="Does Secure Boot protect Linux?">
Yes, via the Microsoft-signed `shim`. The maintenance burden: keep `shim` current under the Windows UEFI CA 2023 rollout (`shim-signed`, `shim-x64`, `mokutil` packages on most distributions) or your Linux install will lose its boot path when older `shim` builds are revoked. See `The shim escape hatch` Aside in §4 for the underlying mechanism [@garrett-shim-2012, @sbat-shim].
</FAQItem>

<FAQItem question="If I have Pluton, do I still need Secure Boot?">
Yes. Pluton replaces the TPM, not the signature-verification chain. Pluton is a cryptoprocessor: it implements TPM 2.0 on the CPU die, holds keys, performs `extend` operations, and signs attestations [@ms-pluton-as-tpm]. Secure Boot is the firmware-side `LoadImage()` allowlist check. The two rails are complementary, not substitutes -- Pluton makes Measured Boot's endpoint better; it does not replace Secure Boot's verifier.
</FAQItem>

<FAQItem question="Why did my Linux dual-boot break after KB5025885?">
The 2011 CA is being revoked. You need a `shim` signed by the 2023 CA. Update from your distribution's secure-boot package (the canonical names are `shim-signed`, `shim-x64`, or `mokutil`). If your installation media is older than May 2023 and you have not run distribution updates, expect breakage somewhere between your next dbx update and the June 2026 expiry [@kb5025885, @sbat-shim].
</FAQItem>

<FAQItem question="Is TPM-only BitLocker still safe?">
Not as a default. After Bitpixie / CVE-2023-21563, TPM-only BitLocker can be defeated with a LAN cable and a keyboard on a Windows 11 install with Microsoft Account defaults. See `Replace TPM-only BitLocker` in Section 11 for the fix list [@neodyme-bitpixie, @nvd-cve-2023-21563].
</FAQItem>

<FAQItem question="What is the difference between Measured Boot and Trusted Boot?">
Trusted Boot is *signature-policy enforcement*: `bootmgfw.efi` and `winload.efi` refuse to load any kernel-mode binary whose Authenticode hash or signer is not in the trusted-boot policy, and the kernel-mode `ci.dll` continues that enforcement after handoff. Measured Boot is *hash-into-PCR recording*: every binary that loads is also extended into a TPM PCR so a verifier (BitLocker locally, an attestation service remotely) can later prove what code ran. Trusted Boot stops bad code; Measured Boot records what code ran. They run in parallel, not in sequence [@ms-trusted-boot, @ms-secure-boot-process].
</FAQItem>

</FAQ>

The chain is longer than it has ever been. It is not yet long enough.

The next article in this series picks up where `userinit` ends. Once Windows is running, the question shifts from *which code loaded?* to *what does this device look like to a remote verifier right now?* Device Health Attestation, runtime measurement of the running kernel and Secure Kernel, and Conditional Access decisions tied to attestation outcomes are the runtime continuation of everything we walked through here. Pluton on the boot chain feeds Pluton-rooted attestation at runtime. Secure Boot ends at the desktop. The runtime chain begins there.

<StudyGuide slug="secure-boot-in-windows" keyTerms={[
  { term: "Bootkit", definition: "Malware that survives operating-system reinstallation by infecting code that runs before the operating system loads -- MBR, ESP, firmware, or below." },
  { term: "UEFI Platform Initialization (PI)", definition: "Four-phase firmware pipeline (SEC, PEI, DXE, BDS); Secure Boot's verifier lives in DXE." },
  { term: "PK / KEK / db / dbx", definition: "Authenticated UEFI variables: Platform Key, Key Exchange Key, allowlist, denylist." },
  { term: "Trusted Boot", definition: "Microsoft's policy enforcement chain from bootmgfw.efi through winload.efi, ntoskrnl.exe, ELAM, and every boot-start driver." },
  { term: "SRTM", definition: "Static Root of Trust for Measurement: the boot-time chain of TPM extends anchored in the immutable CRTM." },
  { term: "DRTM", definition: "Dynamic Root of Trust for Measurement: late-launched via GETSEC[SENTER] or SKINIT to re-anchor measurement after firmware boot." },
  { term: "ELAM", definition: "Early Launch Anti-Malware: a specially-signed driver class that loads as the first boot-start driver, ahead of every other boot-start driver, and classifies them Good/Bad/Unknown/BadButCritical." },
  { term: "PCR[7]", definition: "Platform Configuration Register holding the state of Secure Boot; the canonical BitLocker seal target on modern Windows." },
  { term: "Baton Drop", definition: "CVE-2022-21894: a memory-map manipulation primitive in older signed bootmgfw.efi revisions that BlackLotus used to bypass Secure Boot." },
  { term: "Bitpixie", definition: "CVE-2023-21563: older signed bootmgfw.efi revisions do not erase the BitLocker VMK from physical memory before the PXE soft-reboot handoff, leaving the VMK in RAM where a downgraded payload chain-loaded over PXE can dump it." },
  { term: "SBAT", definition: "Secure Boot Advanced Targeting: rhboot/shim's generation-number revocation scheme, the partial answer to dbx capacity exhaustion." },
  { term: "Pluton", definition: "Microsoft's cryptoprocessor on the CPU die, implementing TPM 2.0, with firmware delivered by Microsoft via Windows Update." }
]} />
