# The Driver That Was Signed and the Driver That Won't Load: Windows Kernel Code Integrity, 2006-2026

> A history of Windows kernel code-signing -- KMCS, BYOVD, HVCI, the Vulnerable Driver Block List, and why a 2026 Windows kernel uses five gates to decide what loads.

*Published: 2026-05-14*
*Canonical: https://paragmali.com/blog/the-driver-that-was-signed-and-the-driver-that-wont-load-win*
*License: CC BY 4.0 - https://creativecommons.org/licenses/by/4.0/*

---
<TLDR>
**Windows ships a list of Microsoft-signed drivers it refuses to load.** That list -- `DriverSiPolicy.p7b` -- exists because every previous generation of kernel-driver trust assumed a signed driver was a safe driver, and a twenty-year run of Bring-Your-Own-Vulnerable-Driver attacks (Stuxnet, Capcom.sys, RTCore64.sys, gdrv.sys) proved that assumption wrong. The 2026 default-on stack -- KMCS, the block list, HVCI in VTL1, Smart App Control, and Defender ASR coverage -- is five gates doing what one ideal gate cannot do: name the specific weakness, not just the publisher. The architectural gap that motivates the stack is undecidable in principle and will not close.
</TLDR>

## 1. The Driver That Loaded

On 13 September 2016, the researcher Matt Nelson posted on his *enigma0x3* blog that a Capcom-published kernel driver, `Capcom.sys`, exposed IOCTL `0xAA013044` and used it to execute a user-supplied function pointer in kernel mode, with [SMEP disabled along the way](https://github.com/tandasat/ExploitCapcom) [@gh-tandasat-capcom]. Within two weeks the technique was operational in Metasploit. Later in September 2016, Capcom pushed the same driver to Street Fighter V's entire installed base as part of an anti-cheat update; in October 2016, Satoshi Tanda published the canonical standalone exploit on GitHub. Capcom withdrew the SFV driver shortly after, but the bytes were already in the wild.<Sidenote>The often-told version of this story compresses three distinct events into one. Matt Nelson's *Let's Be Bad Guys* post on 13 September 2016 disclosed the IOCTL number and the function-pointer-execution primitive. OJ Reeves opened the canonical Metasploit pull request, [rapid7/metasploit-framework#7363](https://github.com/rapid7/metasploit-framework/pull/7363), shortly after; the PR was created on 27 September 2016 and merged the following day [@gh-msf-pr-7363]. Satoshi Tanda's `tandasat/ExploitCapcom` repository was first published in October 2016 and is the canonical standalone PoC, and the artefact this article cites for the IOCTL number and SHA-1 hash.</Sidenote>

The driver was properly Authenticode-signed. It chained to a Microsoft-recognised root. It loaded cleanly on every default-configured Windows 7, 8.1, and 10 machine in the world.

That is the puzzle this article exists to answer. How does an operating system whose entire kernel-loading policy is *was this binary signed?* answer a vulnerability whose only failure mode is *yes, by a real publisher, doing exactly what the signature says it does*?

### A class, not an incident

Capcom.sys was not the first signed kernel driver to hand out a kernel primitive through an IOCTL, and it would not be the last. The pattern recurs across two decades and is the through-line of this article. The catalogue includes Micro-Star's `RTCore64.sys` (the kernel component of MSI Afterburner), Gigabyte's `gdrv.sys`, and the `KProcessHacker` driver shipped with Process Hacker. Section 4 walks through each one with its primary disclosure record.

The attack class has a name. *Bring Your Own Vulnerable Driver*, or BYOVD. The adversary does not need to find a kernel zero-day. They need to find one signed driver, anywhere, whose interface is unsafe by design, and to ship it.

> **Key idea:** Windows in 2026 ships a curated list of Microsoft-signed drivers it refuses to load. Understanding that list is understanding why every previous attempt to make kernel-mode trust mean *safety* instead of just *identity* eventually broke.

The current Windows 11 22H2 client honours `%windir%\system32\CodeIntegrity\DriverSiPolicy.p7b`, a Microsoft-signed deny list enforced by a hypervisor-isolated code-integrity engine sitting in Virtual Trust Level 1. The same engine refuses to map any kernel page that is simultaneously writable and executable. Both behaviours are documented on [Microsoft Learn's Memory Integrity page](https://learn.microsoft.com/en-us/windows/security/hardware-security/enable-virtualization-based-protection-of-code-integrity) and the [Microsoft-recommended driver block rules page](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/microsoft-recommended-driver-block-rules) [@ms-hvci-vbs] [@ms-driver-block-rules]. Neither existed in 2006.
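The shape of that deny decision can be sketched in a few lines. This is a conceptual model only: the real `DriverSiPolicy.p7b` is a signed WDAC binary policy containing hash, filename-plus-version, and certificate rules, not a bare hash set, and the hash below is the Capcom.sys SHA-1 cited later in this article.

```python
# Conceptual sketch of the deny-list gate (illustrative only: the real
# DriverSiPolicy.p7b is a signed WDAC policy with hash, filename+version,
# and certificate rules, not a flat hash set).
BLOCKED_SHA1 = {
    "c1d5cf8c43e7679b782630e93f5e6420ca1749a7",  # Capcom.sys (see Section 4)
}

def load_allowed(image_sha1: str, signature_chain_ok: bool) -> bool:
    # Gate 1: KMCS -- is the identity verifiable?
    # Gate 2: block list -- is that identity on the known-bad list?
    return signature_chain_ok and image_sha1 not in BLOCKED_SHA1

# Capcom.sys is validly signed, yet refused:
print(load_allowed("c1d5cf8c43e7679b782630e93f5e6420ca1749a7", True))  # False
```

The point of the sketch is the second clause: the deny list names the specific binary, where the signature check can only name the publisher.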

To understand why Windows now refuses to load drivers it once asked Microsoft to sign, we need to go back thirty years to the moment Windows first asked a publisher to sign anything at all.

## 2. Advisory Trust: 1996 to 2005

For its first decade, the Windows driver signing policy was a polite recommendation.

Microsoft shipped its first user-mode code-signing primitive, [Authenticode](/blog/authenticode-and-catalog-files-the-crypto-foundation-under-w/), in 1996, packaged for developers in the same toolkit that gave us `SignTool`, `MakeCat`, and `Inf2Cat` -- the suite [Microsoft Learn still documents under "Cryptography tools"](https://learn.microsoft.com/en-us/windows/win32/seccrypto/cryptography-tools) [@ms-crypto-tools]. Authenticode wrapped a PKCS#7 signature around the SHA-1 (and later SHA-256) hash of a PE image and let a recipient walk the signer's certificate chain to a trusted root. It was the first answer to the question *who shipped this binary?* It was, deliberately, never an answer to *is this binary safe?*

<Definition term="Authenticode">
Microsoft's PKCS#7-based code-signing format for Windows binaries. Authenticode attests to the publisher's identity by binding the binary's hash to a certificate chain anchored at a trusted root. It does not analyse the program's behaviour.
</Definition>
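The trust decision Authenticode makes can be modelled as a toy chain walk. This is a minimal sketch, not real PKCS#7 or X.509 parsing; the certificate names and issuer links below are hypothetical stand-ins.

```python
import hashlib

# Toy model of the Authenticode trust decision. Each "certificate" is just a
# name plus a link to its issuer; the real format carries X.509 chains inside
# a PKCS#7 SignedData blob.
TRUSTED_ROOTS = {"Microsoft Root Authority"}

CHAIN = {
    "Capcom Co., Ltd.": "Commercial CA",           # end-entity -> issuing CA
    "Commercial CA": "Microsoft Root Authority",   # cross-certified to MS root
    "Microsoft Root Authority": None,              # self-signed root
}

def publisher_is_identifiable(image_bytes: bytes, signer: str) -> bool:
    """Answer the only question Authenticode answers: does the signer's chain
    terminate at a trusted root? Nothing here inspects the binary's behaviour."""
    hashlib.sha256(image_bytes).hexdigest()  # the hash the signature binds to
    node = signer
    while node is not None:
        if node in TRUSTED_ROOTS:
            return True
        node = CHAIN.get(node)
    return False

# A driver with a function-pointer IOCTL passes exactly like a safe one:
print(publisher_is_identifiable(b"MZ...driver bytes", "Capcom Co., Ltd."))  # True
```

Note what is absent: no branch anywhere examines the bytes themselves. That absence is the design, and it is the gap the rest of this article traces.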

For drivers, the user-mode signing primitive was paired with a separate quality program. The Windows Hardware Quality Labs programme, [documented today via the Hardware Lab Kit](https://learn.microsoft.com/en-us/windows-hardware/test/hlk/), tested third-party drivers against a Microsoft-curated compatibility suite and rewarded passing drivers with a counter-signature that eventually surfaced as the "Designed for Windows" or "Certified for Windows" mark [@ms-hlk]. The badge was operationally meaningful for OEM badging and Windows Update distribution. It was not a load-time gate. An unsigned `.sys` file dropped on disk by a setup script still loaded.

<Definition term="WHQL / HLK (Windows Hardware Quality Labs / Hardware Lab Kit)">
Microsoft's compatibility-test programme for third-party drivers. A driver that passes the HLK test suite receives a Microsoft counter-signature and is eligible for OEM and Windows Update distribution. The programme produces a quality signal, not a load-time enforcement decision.
</Definition>

### The SetupAPI prompt

On 32-bit Windows, the gate the user actually saw was the SetupAPI driver-installation prompt. The administrator could set the system to *Ignore*, *Warn*, or *Block* unsigned drivers; the default was *Warn*. *Warn* meant a click-through dialog at install time. An administrator who clicked *Install this driver anyway* loaded the unsigned driver, no further questions asked. The structural truth is the one [Microsoft's modern KMCS policy page](https://learn.microsoft.com/en-us/windows-hardware/drivers/install/kernel-mode-code-signing-policy--windows-vista-and-later-) acknowledges by contrast: under advisory policy, the prompt is the policy, and a prompt is exactly as strong as the user clicking past it [@ms-kmcs-policy].

The Sony BMG XCP incident in October 2005 made the structural weakness concrete. The XCP copy-protection software, shipped on retail audio CDs, autorun-installed an unsigned kernel-mode filter driver. The driver hid any file, registry key, or process whose name began with the string `$sys$` -- a textbook rootkit by capability if not by intent. The driver loaded after an administrator clicked through the warning prompt, exactly as advisory policy allowed. The pattern is described well in [Wikipedia's code-signing article](https://en.wikipedia.org/wiki/Code_signing) [@wp-code-signing].<Sidenote>The Sony BMG XCP rootkit triggered class-action lawsuits, FTC settlements, and an industry-wide reconsideration of what "the user clicked OK" actually authorises. From a kernel-trust perspective, the lesson is narrower: any policy that ends in a dismissible dialog has the same threat model as no policy at all, against an attacker who can show the user a dialog.</Sidenote>

The structural takeaway from 1996 through 2005 is the one the next decade tried to repair. When the signing policy is advisory, an attacker who has -- or can socially engineer -- administrator privilege only needs to dismiss a prompt to load a kernel driver. The signing primitive worked. The policy around the primitive did not.

If the prompt is the only thing between an attacker and ring zero, the kernel itself has to take over. And on a brand-new x64 architecture, Microsoft could break backward compatibility to make that happen.

## 3. KMCS: The Vista x64 Revolution (2006-2016)

In November 2006, Vista x64 made a decision that x86 never could: it refused to load any unsigned kernel driver, full stop.

The mechanism was Kernel-Mode Code Signing, or KMCS. The previous-versions [Microsoft Learn page on Vista-era driver signing](https://learn.microsoft.com/en-us/previous-versions/windows/hardware/design/dn653567(v=vs.85)) records the policy [@ms-dn653567]. At the point where the I/O manager called `IoLoadDriver`, the Code Integrity module (`ci.dll`) intercepted the load, extracted the Authenticode signature embedded in the PE image or attached via a published catalogue, walked the certificate chain, and refused to map the image if the chain did not terminate at a Microsoft-trusted root. There was no SetupAPI prompt to dismiss. If the kernel refused, the kernel refused. The decision lived below the user's reach.

<Definition term="KMCS (Kernel-Mode Code Signing)">
The Vista-era mandatory load-time signature policy on 64-bit Windows. Before mapping a kernel driver's PE image, the Code Integrity module verifies that the image's Authenticode signature chains to a Microsoft-trusted root. Drivers that fail the check are refused at load time, not at install time.
</Definition>

x86 kept the advisory policy. Microsoft could not break compatibility with two decades of unsigned drivers on the dominant platform. But x64 was a young architecture with a few hundred drivers in the field, and Microsoft used that moment to flip the default. The structural shift was real: kernel-driver trust on x64 became a property of the binary, decided in the kernel, against a fixed set of trusted roots.

### Cross-certificates: opening the gate to the world

A Microsoft-trusted root alone would have meant Microsoft signs every driver, which Microsoft did not want. Instead Microsoft cross-certified a small set of commercial code-signing certificate authorities -- including VeriSign, DigiCert, Entrust, GlobalSign, GoDaddy, and several smaller successors enumerated on the historical [cross-certificate list (2020 archive)](https://web.archive.org/web/2020/https://docs.microsoft.com/en-us/windows-hardware/drivers/install/cross-certificates-for-kernel-mode-code-signing) -- so that a publisher could buy a code-signing certificate from a commercial CA, sign their driver, and have the chain still terminate at a Microsoft-recognised root [@ms-cross-cert-archive]. The architecture is documented on the [cross-certificates for kernel-mode code signing page](https://learn.microsoft.com/en-us/windows-hardware/drivers/install/cross-certificates-for-kernel-mode-code-signing), which now opens with a sentence that did not exist in 2006: "Cross-signing is no longer accepted for driver signing" [@ms-cross-cert]. We will come back to that.

<Mermaid caption="The Vista-era KMCS driver-load decision. Code Integrity intercepts the PE map, extracts the Authenticode signature, walks the cross-cert chain to a Microsoft-trusted root, and either allows the section creation or refuses the load.">
sequenceDiagram
    participant IO as I/O Manager
    participant CI as Code Integrity (ci.dll)
    participant CA as Cross-certified CA chain
    participant Root as Microsoft trusted root

    IO->>CI: Map PE for kernel driver
    CI->>CI: Extract Authenticode signature (PKCS#7)
    CI->>CA: Walk certificate chain
    CA->>Root: Anchor at Microsoft cross-cert
    alt Chain valid and not revoked
        CI->>IO: Allow section creation
        IO->>IO: Load driver into kernel address space
    else Chain invalid or unsigned
        CI->>IO: STATUS_INVALID_IMAGE_HASH
        IO->>IO: Abort load
    end
</Mermaid>

### Documented escape hatches

KMCS shipped with three documented bypasses for developers and special cases, all enumerated on the [KMCS policy page](https://learn.microsoft.com/en-us/windows-hardware/drivers/install/kernel-mode-code-signing-policy--windows-vista-and-later-) [@ms-kmcs-policy]:

- `bcdedit /set TESTSIGNING ON` enables test-signing mode. The kernel will load drivers signed with self-issued test certificates. The cost is a desktop watermark.
- The F8 advanced-boot option *Disable Driver Signature Enforcement* turns off KMCS for one boot.
- The legacy `nointegritychecks` BCD flag disables enforcement entirely, but is rejected on systems where [Secure Boot](/blog/secure-boot-in-windows-the-chain-from-sector-zero-to-userini/) is on.

Each of these was a development workflow concession. Each of them, with admin privileges and a willingness to reboot, also serves as a kernel-driver loading path for an attacker who has already escalated. The policy holds against unprivileged adversaries. Against an attacker who already runs as administrator, the policy was already, by 2010, defending against a different threat than the one people thought it was defending against.<Sidenote>Microsoft has been formally clear about this since at least 2016: the administrator-to-kernel transition is not a security boundary in the MSRC servicing-criteria sense. Elastic Security Labs writes the position out explicitly in [their analysis of vulnerable-driver mitigations](https://www.elastic.co/security-labs/forget-vulnerable-drivers-admin-is-all-you-need) [@elastic-admin]. The historical irony is that Vista x64 KMCS was widely read at the time as a defence against admin-level adversaries; it was actually a defence against unprivileged or pre-admin ones.</Sidenote>

### PatchGuard: the parallel runtime defence

KMCS was a load-time check. The runtime parallel arrived in 2005 with Kernel Patch Protection, informally PatchGuard or KPP, which the [Wikipedia entry on Kernel Patch Protection](https://en.wikipedia.org/wiki/Kernel_Patch_Protection) describes as a feature of 64-bit Windows that prevents patching of critical kernel structures [@wp-kpp]. KPP polls a set of integrity-critical kernel objects -- the System Service Descriptor Table, IDT, GDT, certain function prologues -- and triggers a bug check if it detects tampering. It is the watchdog against runtime modification of the kernel by code that has already loaded; KMCS gates what loads in the first place.

What this fixed: the unsigned-driver-loading path closed on 64-bit Windows in production mode. Kernel rootkits of the early 2000s -- FU, Mailbot, Rustock, and their contemporaries, widely documented in the security-research literature of the era -- could no longer ship as bare `.sys` files an admin script dropped on disk. The structural class of "unsigned kernel rootkit" effectively died on x64.

But the day Vista x64 shipped, two new attack surfaces opened up. The first one Stuxnet found four years later. The second one nobody had a name for yet.

## 4. Stuxnet, BYOVD, and the Two Things Vista Did Not Fix

On 17 June 2010, researchers at the Belarusian security firm VirusBlokAda identified Stuxnet, a worm targeting [supervisory control and data acquisition systems](https://en.wikipedia.org/wiki/Stuxnet) used in industrial-control environments, on machines in Iran [@wp-stuxnet]. Two of its drivers carried perfectly valid Authenticode signatures.

The signatures were genuine. The certificates were not. Stuxnet had been signed with private keys stolen from semiconductor vendors whose code-signing certs chained to legitimate cross-certified roots. KMCS verified the chain, found it good, and let the drivers load.<Sidenote>Stuxnet is widely reported to have used stolen signing keys from two real semiconductor vendors. The malware-analysis literature is consistent on the pattern; specific cert-holder attributions are reproduced in many places but the primary advisory record we cite here is the [Wikipedia Stuxnet article](https://en.wikipedia.org/wiki/Stuxnet) and the general framing in the [Wikipedia code-signing article](https://en.wikipedia.org/wiki/Code_signing) [@wp-stuxnet] [@wp-code-signing].</Sidenote> The reactive answer was certificate revocation, but revocation propagates through Windows on a schedule, not instantly, and the cached chain on millions of machines remained valid for days.

That was the first failure mode KMCS could not block by design. The signature primitive answers *was this signed by a key that chains to a trusted root?* It cannot answer *was the key still in the publisher's control when it signed this?*

### The Capcom.sys reframe

The second failure mode arrived publicly in 2016. A Capcom driver shipped via a Street Fighter V update exposed an IOCTL, numbered `0xAA013044`, that took a user-supplied function pointer and executed it in kernel mode -- with Supervisor Mode Execution Prevention (SMEP) disabled while it did so. The driver was signed and chained correctly. Satoshi Tanda's standalone proof of concept at [`tandasat/ExploitCapcom`](https://github.com/tandasat/ExploitCapcom) remains the canonical reference, including the SHA-1 of the binary (`c1d5cf8c43e7679b782630e93f5e6420ca1749a7`) [@gh-tandasat-capcom].

There was nothing for KMCS to catch. The driver did exactly what the signature said it did: ship bytes from a publisher Microsoft could identify. The signature has no opinion about the IOCTL surface.
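The IOCTL number itself is worth decoding, because the documented Windows `CTL_CODE` bit layout -- `(DeviceType << 16) | (Access << 14) | (Function << 2) | Method` -- tells you who can reach the primitive. A short sketch:

```python
# Decode Capcom.sys's IOCTL using the standard CTL_CODE bit layout:
# CTL_CODE(DeviceType, Function, Method, Access) =
#     (DeviceType << 16) | (Access << 14) | (Function << 2) | Method
IOCTL = 0xAA013044

device_type = IOCTL >> 16           # vendor-defined device type
access      = (IOCTL >> 14) & 0x3   # 0 == FILE_ANY_ACCESS
function    = (IOCTL >> 2) & 0xFFF  # driver-chosen function number
method      = IOCTL & 0x3           # 0 == METHOD_BUFFERED

print(hex(device_type), hex(function), access, method)
```

The decoded fields -- `FILE_ANY_ACCESS`, `METHOD_BUFFERED` -- mean any local caller who can open the device object can hand the driver a buffer, and in this driver's case, a buffer the dispatch routine treated as a function pointer.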

<PullQuote>
A signed driver means only that someone Microsoft can identify shipped this binary. It does not mean the driver lacks a function-pointer IOCTL.
</PullQuote>

That observation is the first of three reframes in this article and the easiest to underestimate. Up to 2010 the conventional security reading of a Microsoft-rooted Authenticode signature was that the driver had passed a review. After Stuxnet, the reading narrowed to *the publisher is identifiable*. After Capcom.sys, it narrowed again to *the binary's identity is verifiable*. None of these readings includes *the binary does not have a kernel-write primitive in its IOCTL handler*.

<Definition term="BYOVD (Bring Your Own Vulnerable Driver)">
An attack pattern in which an adversary, having obtained or already holding administrator privileges, installs a signed but design-vulnerable third-party kernel driver and uses its exposed primitives -- arbitrary memory read/write, port I/O, MSR access, or function-pointer dispatch -- to gain ring-zero capability. The signature primitive does not refuse the load because the driver is, on signature alone, legitimate.
</Definition>

### The catalogue grows

The BYOVD catalogue accumulated through the 2010s.

`RTCore64.sys`, the kernel component of MSI's Afterburner overclocking utility, exposed read/write access to arbitrary kernel memory, I/O ports, and Model-Specific Registers from user mode. The NVD entry for [CVE-2019-16098](https://nvd.nist.gov/vuln/detail/CVE-2019-16098) is unusually direct: "These signed drivers can also be used to bypass the Microsoft driver-signing policy to deploy malicious code." [@nvd-cve-2019-16098] The driver became a workhorse for ransomware crews. Sophos's October 2022 incident analysis of [BlackByte's new variant](https://news.sophos.com/en-us/2022/10/04/blackbyte-ransomware-returns/) documents the abuse: BlackByte "abus[ed] a known vulnerability in the legitimate vulnerable driver RTCore64.sys" to disable "a whopping list of over 1,000 drivers on which security products rely to provide protection" [@sophos-blackbyte].

`gdrv.sys`, the Gigabyte APP Center driver, exposed a ring-zero memcpy-equivalent that a local attacker could use to overwrite arbitrary kernel addresses. [CVE-2018-19320](https://nvd.nist.gov/vuln/detail/CVE-2018-19320) is on CISA's Known Exploited Vulnerabilities catalogue [@nvd-cve-2018-19320]. The RobbinHood ransomware abused it during the 2019 Baltimore municipal-government attack -- a connection widely documented by Sophos and CrowdStrike incident-response teams, though absent from the bare NVD record.

`KProcessHacker`, the kernel companion to the Process Hacker administration tool, exposed a process-termination primitive that bypassed even the Protected Process Light (PPL) shielding around antivirus and EDR processes. CrowdStrike's [DoppelPaymer write-up](https://www.crowdstrike.com/en-us/blog/how-doppelpaymer-hunts-and-kills-windows-processes/) documents the abuse explicitly: "the hijacking technique ... leverages ProcessHacker's kernel driver, KProcessHacker, that has been registered under the service name KProcessHacker3 ... terminate processes, including those protected by Protected Process Light (PPL)." [@cs-doppelpaymer]

<Mermaid caption="The structural BYOVD attack flow. The signed driver passes KMCS unchanged; the adversary's user-mode code, running as administrator, then issues IOCTLs to elicit a kernel-write primitive that disables EDR notify routines or escalates a token.">
sequenceDiagram
    participant Adv as Adversary (admin user mode)
    participant SCM as Service Control Manager
    participant CI as Code Integrity (ci.dll)
    participant Drv as Signed vulnerable driver
    participant K as Kernel state

    Adv->>SCM: Install signed driver as kernel service
    SCM->>CI: Request load
    CI->>CI: Authenticode check passes
    CI->>SCM: Allow
    SCM->>Drv: Load into kernel
    Adv->>Drv: IOCTL with attacker-supplied pointers
    Drv->>K: Write attacker bytes at arbitrary kernel address
    K->>K: Clear EDR notify routine / escalate token
</Mermaid>

### The third bypass: patching the policy from kernel mode

There is a third failure mode that closes the loop. Once an attacker has a signed driver with an arbitrary kernel-write primitive, they can write directly into the in-kernel Code Integrity state. The variable of interest is `g_CiOptions`, an integer inside `ci.dll` whose bits gate Driver Signature Enforcement. TrustedSec describes the technique cleanly: "this configuration variable has a number of flags that can be set, but typically for bypassing DSE this value is set to 0, completely disabled DSE and allows the attacker to load unsigned drivers just fine." [@trustedsec-gcioptions] Set `g_CiOptions` to zero and the subsequent driver loads do not need signatures at all. The signed driver, in effect, is a one-shot key that opens the gate for any unsigned driver behind it. The pattern recurs through the early 2020s; specific malware-family attributions remain research-folklore, but the technique class is well attested in TrustedSec's account.

The structural takeaway: KMCS verifies *who signed*, never *what was signed*. Once an attacker has a signed driver with a write primitive, they have ring zero. Stricter signing closes the front door for new malicious drivers. Every commercial-CA cert that was ever issued is still loadable. The policy decision has to move out of the attacker's reach. And the kernel itself has to stop being the thing that decides.

## 5. Microsoft as the Only Signer (2016-2024)

In August 2016, Microsoft did something it had avoided for the WHQL programme's entire twenty-year run: it became the only entity that could counter-sign a new Windows kernel driver.

The transition shipped with Windows 10 version 1607. The [KMCS policy page](https://learn.microsoft.com/en-us/windows-hardware/drivers/install/kernel-mode-code-signing-policy--windows-vista-and-later-) records the cut precisely: for end-entity certificates issued after 29 July 2015, the chain had to terminate at one of three Microsoft-owned roots -- *Microsoft Root Authority 2010*, *Microsoft Root Certificate Authority*, or *Microsoft Root Authority* -- and the binary had to be counter-signed via the Windows Hardware Dev Center submission portal [@ms-kmcs-policy]. The commercial CAs were out. Microsoft was in, as the single point through which any new third-party kernel driver had to pass.

### Two pipelines

Behind the portal sat two submission paths. The HLK/WHQL path required a full Hardware Lab Kit compatibility test pass on the publisher's hardware -- the lab kit is the modern incarnation of the WHQL programme, [documented on Microsoft Learn](https://learn.microsoft.com/en-us/windows-hardware/test/hlk/) [@ms-hlk]. A passing run produced a "Certified for Windows" mark and made the driver eligible for OEM badging and Windows Update distribution. The lighter-friction path, called [attestation signing](https://learn.microsoft.com/en-us/windows-hardware/drivers/dashboard/code-signing-attestation), did not require an HLK run [@ms-attestation]. The publisher submitted a CAB containing the driver and supporting metadata. Microsoft's backend ran a malware scan and an automated policy check; if both passed, Microsoft applied a counter-signature. Attestation-signed drivers, the page notes, ship only to client SKUs.

<Definition term="Attestation signing">
The lower-friction post-2016 Microsoft signing path for Windows kernel drivers. The publisher uploads a CAB to the Hardware Dev Center; Microsoft runs malware scanning and an automated policy check; on pass, Microsoft applies its counter-signature. The path replaces full HLK testing for client-only drivers.
</Definition>

### EV certificates as the account-binding primitive

Both paths required the publisher to hold an Extended Validation code-signing certificate. The EV cert does not sign the driver image itself; it signs and binds the Hardware Dev Center submission. That gives Microsoft a real-name handle on every kernel-driver publisher. EV certificates are issued only after a strong identity check, cost meaningfully more than commercial OV certs, and live on a hardware token in the publisher's possession. The 2021 Microsoft Security blog announcing the Vulnerable & Malicious Driver Reporting Center spells the requirement out: "Kernel-mode driver publishers must pass the Hardware Lab Kit (HLK) compatibility tests, malware scanning, and prove their identity through extended validation (EV) certificates." [@ms-vdrc-blog]

<Mermaid caption="The post-1607 Hardware Dev Center submission pipeline for a new kernel driver. The publisher's EV-signed CAB enters Microsoft's pipeline; on pass, the driver receives the Microsoft counter-signature and is eligible for Windows Update distribution.">
flowchart LR
    A[Publisher EV cert + driver CAB] --> B[Hardware Dev Center upload]
    B --> C[Malware scan]
    C --> D&#123;HLK required?&#125;
    D -- "Yes" --> E[HLK compatibility test pass]
    D -- "No" --> F[Attestation policy check]
    E --> G[Microsoft counter-sign]
    F --> G
    G --> H[Optional Windows Update distribution]
</Mermaid>

### The legacy long tail

The pivot to Microsoft-only signing closed the door for new drivers. It did not close the door for old ones.

<Aside label="The legacy long tail">
The [KMCS policy page](https://learn.microsoft.com/en-us/windows-hardware/drivers/install/kernel-mode-code-signing-policy--windows-vista-and-later-) is candid about the carve-outs: "Cross-signed drivers are still permitted if any of the following are true: The PC was upgraded from an earlier release of Windows to Windows 10, version 1607. Secure Boot is off in the BIOS. Drivers was signed with an end-entity certificate issued prior to July 29th 2015 that chains to a supported cross-signed CA." [@ms-kmcs-policy]

Operationally, every signed-but-vulnerable driver from the 2006-2015 era remains loadable on a meaningful population of Windows machines: upgraded installs, devices with Secure Boot disabled in firmware, and drivers with pre-cutoff end-entity certs whose chains are still valid. `Capcom.sys`, `RTCore64.sys`, `gdrv.sys`, `KProcessHacker` -- the entire 2010s BYOVD catalogue -- continues to chain to roots Windows still accepts.
</Aside>

### What attestation signing catches and what it does not

The malware scan inside attestation signing looks for known dangerous behaviour. The Microsoft Security blog post on the Vulnerable & Malicious Driver Reporting Center enumerates the categories the backend flags: "Drivers with the ability to read or write arbitrary kernel, physical, or device memory, including Port I/O and central processing unit (CPU) registers from user mode." [@ms-vdrc-blog] In other words, the scanner already understands the BYOVD pattern.

What it does not catch are *novel* design flaws. A driver whose IOCTL surface is structurally unsafe in a way the scanner does not have a signature for passes the scan and ships with a Microsoft counter-signature. The Capcom.sys pattern is in the scanner's repertoire today; the pattern in the next driver to ship is, by definition, not.

A second weakness sits on the publisher side. EV-key compromise -- whether through the LAPSUS$ supply-chain leaks of 2022 or other vendor incidents -- gives the attacker the Microsoft-only-signing flavour of the Stuxnet problem. The signed-by-Microsoft chain is exactly as strong as the EV key's safekeeping at the publisher.

A single signing bottleneck is an improvement. But the bottleneck still trusts the kernel that asks the question. As long as the policy engine runs in the same memory the attacker can write, the policy engine loses.

## 6. HVCI: Moving the Policy Out of Reach (2015-present)

In July 2015, Microsoft shipped a feature so structurally important that it took six years to become a consumer default, and so misunderstood that it still travels under three different names.

The names are the easiest place to start. *Virtualization-Based Security* (VBS) is the platform: a Hyper-V-rooted virtualisation layer that exists on every modern Windows installation that meets the hardware requirements. *Hypervisor-protected Code Integrity* (HVCI) is the kernel-code-integrity consumer of VBS. *Memory Integrity* is the label the Windows Security UI uses today. The [Microsoft Learn page on Memory Integrity](https://learn.microsoft.com/en-us/windows/security/hardware-security/enable-virtualization-based-protection-of-code-integrity) is the canonical primary source [@ms-hvci-vbs]. TrustedSec called out the conflation explicitly in their [`g_CiOptions in a virtualized world` post](https://www.trustedsec.com/blog/g_cioptions-in-a-virtualized-world) [@trustedsec-gcioptions].

> **Key idea:** A security check that shares a trust domain with what it is checking has, by definition, already lost. HVCI moves the check out of the attacker's trust domain. It is the answer to *who decides*. It is not the answer to *what gets decided*.

That sentence is the second of this article's three reframes, and the one that makes everything that follows make sense.

### VBS and the Virtual Trust Levels

On a VBS-on Windows machine, [Hyper-V](/blog/above-ring-zero-how-the-windows-hypervisor-became-a-security/) is the Type-1 hypervisor. The bootloader brings the hypervisor up first, the hypervisor brings up two execution environments side by side, and the normal Windows kernel runs in one of them while a much smaller [Secure Kernel](/blog/when-system-isnt-enough-the-windows-secure-kernel-and-the-en/) runs in the other.

<Definition term="VTL (Virtual Trust Level)">
The VBS abstraction that partitions a Windows installation into two execution environments. VTL0 is the normal Windows kernel and its drivers. VTL1 is a much smaller Secure Kernel and a curated set of "trustlets" -- isolated user-mode processes that hold the most sensitive secrets. VTL1 can read and write VTL0 memory; VTL0 cannot read or write VTL1 memory. Code-integrity policy lives in VTL1.
</Definition>

The Code Integrity engine on an HVCI-on machine -- signature verification and policy-file consultation -- runs inside VTL1's Secure Kernel as the *Secure Kernel Code Integrity* component, SKCI. The VTL0 kernel cannot read or write VTL1 memory by hardware construction: the hypervisor's second-level address translation tables, programmed before VTL0 ever runs, mark VTL1 pages as unreachable from VTL0. The in-memory `g_CiOptions` state continues to reside in `ci.dll`'s VTL0 data section -- it does not relocate into VTL1 -- but on an HVCI-on machine Kernel Data Protection (KDP), exposed to VTL0 drivers as `MmProtectDriverSection`, asks the Secure Kernel to mark the containing page read-only at the SLAT level. A fully compromised VTL0 kernel -- with kernel debugging attached, with all of ring zero's privileges -- cannot rewrite `g_CiOptions` to zero, because the SLAT mapping refuses the write.
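The one-way visibility rule is easy to model. A toy sketch -- the page names, flag shapes, and API here are invented for illustration; the real mechanism is hardware second-level address translation programmed by the hypervisor:

```js
// Toy model of the SLAT-level permission split between VTL0 and VTL1.
// Page names and the API shape are invented; the real hypervisor
// programs hardware second-level translation tables before VTL0 runs.

const slat = new Map([
  //  page               owner    readable/writable from VTL0
  ["ntoskrnl.data",  { owner: "VTL0", r0: true,  w0: true  }],
  ["ci.g_CiOptions", { owner: "VTL0", r0: true,  w0: false }], // KDP: pinned read-only
  ["securekernel",   { owner: "VTL1", r0: false, w0: false }], // unreachable from VTL0
]);

function vtl0Write(page) {
  const entry = slat.get(page);
  if (!entry || !entry.w0) return `FAULT: VTL0 write to ${page} refused by SLAT`;
  return `OK: VTL0 wrote ${page}`;
}

function vtl1Write(page) {
  // VTL1 can reach everything, including VTL0 memory.
  return `OK: VTL1 wrote ${page}`;
}

console.log(vtl0Write("ntoskrnl.data"));  // ordinary kernel data: allowed
console.log(vtl0Write("ci.g_CiOptions")); // KDP-protected: faults
console.log(vtl0Write("securekernel"));   // VTL1 memory: faults
console.log(vtl1Write("ntoskrnl.data"));  // VTL1 -> VTL0: allowed
```

The asymmetry is the entire point: the table is written before VTL0 ever runs, so no amount of VTL0 privilege can amend it.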

<Mermaid caption="The VBS/VTL split. VTL0 is the conventional Windows kernel and its drivers; VTL1 holds the Secure Kernel and the Code Integrity engine (SKCI). When VTL0 tries to map an executable kernel section, the hypervisor mediates the request to VTL1's SKCI, which performs signature verification and policy consultation before the section is created.">
flowchart TD
    subgraph VTL1 [VTL1 -- Secure Kernel]
        SK[Secure Kernel]
        SKCI[SKCI -- Code Integrity]
        Policy["Code Integrity policy<br/>(DriverSiPolicy.p7b)"]
        SK --> SKCI
        SKCI --> Policy
    end
    subgraph VTL0 [VTL0 -- Normal Windows]
        Kern[NT Kernel]
        Drv[Driver attempting load]
        CI["ci.dll -- VTL0-side CI"]
        Kern --> CI
        CI --> Drv
    end
    Hypervisor&#123;"Hyper-V SLAT"&#125;
    Kern -->|"Section create"| Hypervisor
    Hypervisor -->|"Forward"| SKCI
    SKCI -->|"Allow or deny"| Hypervisor
    Hypervisor -->|"Result"| Kern
</Mermaid>

### W^X on kernel memory

There is a second, equally structural property HVCI enforces. When the VTL0 kernel tries to map an executable section -- to create a kernel-executable page from a PE image -- the hypervisor forces the request through SKCI. SKCI verifies the Authenticode signature *at section creation time*, not only at the `IoLoadDriver` entry point [@ms-hvci-vbs]. And SKCI refuses any page that is simultaneously writable and executable. The classic exploitation technique of allocating a writable kernel buffer, writing shellcode into it, and then jumping to it stops working: a page is either writable, in which case it is not executable, or executable, in which case it is not writable.
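The rule itself fits in a few lines. A sketch with invented names -- the real check lives inside the Secure Kernel, not in any callable API:

```js
// Toy W^X check: a kernel page-mapping request is refused whenever it
// asks for write and execute simultaneously. API shape is invented.

function skciMapPage(request) {
  if (request.writable && request.executable) {
    return { ok: false, reason: "W^X violation: writable and executable" };
  }
  return { ok: true, reason: "mapped" };
}

// The classic shellcode pattern needs a page that is both at once.
console.log(skciMapPage({ writable: true,  executable: true  })); // refused
console.log(skciMapPage({ writable: true,  executable: false })); // data page: fine
console.log(skciMapPage({ writable: false, executable: true  })); // code page: fine
```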

The hardware acceleration matters. The [Memory Integrity page](https://learn.microsoft.com/en-us/windows/security/hardware-security/enable-virtualization-based-protection-of-code-integrity) is unusually direct about the requirement: "Memory integrity works better with Intel Kabylake and higher processors with Mode-Based Execution Control, and AMD Zen 2 and higher processors with Guest Mode Execute Trap capabilities. Older processors rely on an emulation of these features, called Restricted User Mode, and will have a bigger impact on performance." [@ms-hvci-vbs]<Sidenote>Mode-Based Execute Control (MBEC) is the Intel feature that lets the hypervisor distinguish "executable in supervisor mode" from "executable in user mode" at the page-table-entry level. AMD's Guest Mode Execute Trap (GMET) is the structurally equivalent feature. Older silicon falls back to Restricted User Mode emulation, which works correctly but pays a meaningfully larger performance tax. The hardware cutoff is a major reason HVCI defaulted off on pre-2017 OEM hardware for years.</Sidenote>

### What HVCI fixed

The `g_CiOptions` patching family, the third bypass we met in section 4, closes on HVCI-on systems. TrustedSec's [post](https://www.trustedsec.com/blog/g_cioptions-in-a-virtualized-world) gives a clean account: with KDP pinning the page that holds `g_CiOptions` read-only at the SLAT level, a VTL0 ring-zero write to the variable faults, and live-kernel debuggers attached to VTL0 cannot rewrite it either [@trustedsec-gcioptions]. The arbitrary-write-to-disable-DSE pattern that worked on Windows 7 through pre-HVCI Windows 10 is, on an HVCI-on Windows 11, no longer a primitive that exists in the attacker's threat model. The trust domain that decides the policy is not the trust domain the attacker can reach.

### What HVCI did not fix

It is essential to be clear about what HVCI does not catch, because misreading this is how the BYOVD class survives.

HVCI verifies the *signature* and enforces W^X. It does not analyse the driver's *behaviour*. The 2019 `RTCore64.sys` driver passes SKCI section-mapping unchanged: it is signed by MSI through a Microsoft-recognised chain, it has no writable-and-executable pages, and the Authenticode hash on disk matches the binary in memory. After it loads, an attacker in user mode sends an IOCTL; the driver, executing legitimately in ring zero, writes attacker-controlled bytes to an attacker-chosen kernel address; the EDR notify routine table is patched; the BYOVD attack proceeds. Everything that happens inside the IOCTL handler happens with kernel privilege, on properly-signed code paths, inside HVCI's W^X policy. The structural BYOVD class is unaffected.
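The gap is easy to make concrete. In the sketch below -- driver shape, handler, and "kernel memory" all invented for illustration -- every property SKCI can check passes, and the semantic flaw survives untouched:

```js
// Sketch of why a behavioural flaw sails through structural checks.
// Everything here is illustrative; no real driver interface is modelled.

const kernelMemory = { edrNotifyRoutine: "0xfffff800deadbeef" };

const rtcoreLike = {
  signatureValid: true,   // signed through a recognised chain
  sectionsWX: false,      // no writable+executable pages
  // The flaw is semantic: the IOCTL handler writes caller-chosen bytes
  // to a caller-chosen address, with kernel privilege.
  ioctl(addr, value) { kernelMemory[addr] = value; },
};

function hvciLoadChecks(drv) {
  // Everything SKCI can see: identity and mapping properties.
  return drv.signatureValid && !drv.sectionsWX;
}

console.log("loads?", hvciLoadChecks(rtcoreLike)); // true -- every gate passes
rtcoreLike.ioctl("edrNotifyRoutine", "0x0");       // post-load abuse
console.log("EDR callback now:", kernelMemory.edrNotifyRoutine); // "0x0"
```

No signature check, hash check, or W^X rule ever inspects what `ioctl` does; that inspection is the job of the block list and, ultimately, the undecidability discussion in section 9.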

That is the gap the next two sections close.

> **Note:** The [Memory Integrity page](https://learn.microsoft.com/en-us/windows/security/hardware-security/enable-virtualization-based-protection-of-code-integrity) is explicit that "some applications and hardware device drivers may be incompatible with memory integrity. This incompatibility can cause devices or software to malfunction and in rare cases may result in a boot failure (blue screen)." [@ms-hvci-vbs] For years OEM and gaming-system vendors shipped with HVCI off because legacy ISV drivers, anti-cheat kernel components, or older virtualisation tools could not coexist with it. On an HVCI-off system the `g_CiOptions` patching family is back in play, the kernel-CI engine and the kernel it polices are in the same trust domain, and the analysis of section 4 applies unchanged. The 2026 default-on baseline is real, but it is not yet universal.

HVCI is the answer to *who decides*. It is not the answer to *what gets decided*. We still need a way to say: this specific signed binary is one we do not trust.

## 7. The Block List: Naming the Weakness (2020-present)

In October 2020, Microsoft started shipping something it had spent twenty-five years avoiding: a list of specific drivers it would refuse to load by name.

The artefact lives at `%windir%\system32\CodeIntegrity\DriverSiPolicy.p7b`. The file is a PKCS#7-signed [App Control for Business](/blog/wdac--hvci-code-integrity-at-every-layer-in-windows/) policy -- "WDAC" by its former name -- whose body consists of deny rules expressed at the granularity of file hash, file name, or publisher. The canonical [Microsoft-recommended driver block rules page](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/microsoft-recommended-driver-block-rules) is the primary source, and is unusually rich for a Microsoft Learn page [@ms-driver-block-rules].

<Definition term="App Control for Business (WDAC)">
Microsoft's policy-driven application-control engine. An App Control policy is a signed XML or binary file that lists allow rules, deny rules, and signer-level rules; at load time, the policy engine consults the rules and either allows or refuses the image. `DriverSiPolicy.p7b` is itself an App Control policy whose body is all deny rules.
</Definition>

### Cadence and the published-vs-shipped gap

The block list is refreshed on two cadences. Microsoft publishes the source XML on the [block-rules page](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/microsoft-recommended-driver-block-rules) on a quarterly schedule and pushes the binary `DriverSiPolicy.p7b` to client devices through monthly Windows servicing [@ms-driver-block-rules]. Microsoft's Security Baselines team also publishes a [running update post](https://techcommunity.microsoft.com/blog/microsoft-security-baselines/microsoft-vulnerable-driver-blocklist-getting-better-and-better/4172168) cataloguing the changes [@ms-tc-blocklist-baselines].

The candid admission on the [block-rules page](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/microsoft-recommended-driver-block-rules) is the part of the story that is most worth understanding.

<PullQuote>
"The blocklist included in this article and in the associated downloadable files usually contains a more complete set of known vulnerable drivers than the version in the OS and delivered by Windows Update. It's often necessary for us to hold back some blocks to avoid breaking existing functionality." -- Microsoft Learn, *Microsoft-recommended driver block rules* [@ms-driver-block-rules]
</PullQuote>

The published list is, on purpose, more inclusive than the shipped list. The reason is operational: every entry in the shipped list is a driver that would refuse to load on millions of devices, some of which have legitimate dependencies. Microsoft holds entries back when the compatibility cost is too high, even when the security signal is strong. We will come back to whether that gap is closeable in section 9.

### The 22H2 cut and the Server 2016 carve-out

Two dates anchor the deployment story.

The block list was an *optional* feature in Windows 10 1809, enabled by default only on systems that ran [Hypervisor-protected Code Integrity, Smart App Control, or Windows in S-mode](https://support.microsoft.com/en-us/topic/kb5020779-the-vulnerable-driver-blocklist-after-the-october-2022-preview-release-3fcbe13a-6013-4118-b584-fcfbc6a09936) [@ms-kb5020779]. With the [Windows 11 2022 Update, also known as 22H2](https://blogs.windows.com/windowsexperience/2022/09/20/available-today-the-windows-11-2022-update/), released on 20 September 2022, default-on coverage extended to every client device, not just the HVCI-on subset [@ms-blogs-win11-2022]. The 22H2 release is the moment the block list became universal Windows client behaviour, six years after the first BYOVD primitive that motivated it.

The [block-rules page](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/microsoft-recommended-driver-block-rules) notes a single explicit carve-out worth flagging.<Sidenote>"Except on Windows Server 2016, the vulnerable driver blocklist is also enforced when either memory integrity (also known as hypervisor-protected code integrity or HVCI), Smart App Control, or S mode is active." [@ms-driver-block-rules] Windows Server 2016 does not get the default-on block list even when HVCI is on. An enterprise admin managing Server 2016 has to deploy an explicit App Control policy to get the same coverage.</Sidenote> The October 2022 preview cycle saw a documented quirk -- [KB5020779](https://support.microsoft.com/en-us/topic/kb5020779-the-vulnerable-driver-blocklist-after-the-october-2022-preview-release-3fcbe13a-6013-4118-b584-fcfbc6a09936) explains that a preview release shipped without an actual blocklist refresh, addressed by a subsequent servicing update [@ms-kb5020779].<Sidenote>The KB5020779 episode is a useful reminder that the in-box block list ships through the same Windows Update cycle as everything else. Preview releases do not always carry a fresh policy, and the cadence on the [block-rules page](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/microsoft-recommended-driver-block-rules) describes the intended steady state rather than every individual update [@ms-driver-block-rules].</Sidenote>

### Naming the weakness, not the publisher

For the first time in the story, the question Windows asks at load time is not only *who signed this binary?* but also *is this specific signed binary one we have learned is unsafe?* The block list is a step the previous generations could not have taken with the primitives they had: it requires a deny list that can be authored after the fact, distributed quickly, and enforced inside a trust domain the attacker cannot reach. KMCS supplied the load-time enforcement primitive; HVCI supplied the immune-from-VTL0 enforcement context; only with both in place could `DriverSiPolicy.p7b` actually do its job.

<Mermaid caption="Block-list enforcement on a 22H2-class machine. When the VTL0 kernel requests a kernel section, SKCI in VTL1 evaluates the Authenticode chain and then consults DriverSiPolicy.p7b for hash, name, and signer deny rules; only when no deny rule matches does the section creation succeed.">
flowchart TD
    A[Driver image requested for load] --> B[Hypervisor mediates section create]
    B --> C[SKCI verifies Authenticode chain]
    C --> D&#123;"Chain OK?"&#125;
    D -- "No" --> X[Refuse]
    D -- "Yes" --> E[Consult DriverSiPolicy.p7b deny rules]
    E --> F&#123;"Hash, name, or signer on deny list?"&#125;
    F -- "Yes" --> X
    F -- "No" --> G[Allow section creation]
    G --> H[Driver maps into kernel address space]
</Mermaid>

### The Vulnerable & Malicious Driver Reporting Center

The block list grew faster after Microsoft built a structured channel to feed it. The [December 2021 Microsoft Security blog post](https://www.microsoft.com/en-us/security/blog/2021/12/08/improve-kernel-security-with-the-new-microsoft-vulnerable-and-malicious-driver-reporting-center/) announced the Vulnerable & Malicious Driver Reporting Center: a portal where researchers and vendors can submit kernel drivers for evaluation, backed by an automated analysis pipeline that looks for the BYOVD primitives -- "the ability to read or write arbitrary kernel, physical, or device memory, including Port I/O and central processing unit (CPU) registers from user mode." [@ms-vdrc-blog] The post explicitly lists the historical CVE backdrop that motivated the centre, naming RobinHood, Uroburos, Derusbi, GrayFish, and Sauron as families that leveraged driver vulnerabilities such as CVE-2008-3431, CVE-2013-3956, CVE-2009-0824, and CVE-2010-1592 [@ms-vdrc-blog].

The same post anchors the EV-certificate publisher requirement and the HLK or attestation gating that produces the block list's inputs in the first place. The reporting centre is the path by which a flagged driver moves from "spotted in research" to "deny rule in the next quarterly XML push".

### Defender ASR as the HVCI-off coverage path

There is a third surface worth knowing about. [Microsoft's Attack Surface Reduction rules](https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/attack-surface-reduction-rules-reference) include "Block abuse of exploited vulnerable signed drivers" (`56a863a9-875e-4185-98a7-b882c64b5ce5`) as part of the standard ASR protection set [@ms-asr-rules]. For Microsoft Defender for Endpoint customers on Windows 10 E3 or E5, the rule covers machines where HVCI is not on. Microsoft notes that "the same blocklist is also used by Microsoft Defender Antivirus customers" via the ASR rule [@ms-vdrc-blog]. The path is narrower than HVCI-rooted enforcement -- Defender has to be running, the rule has to be enabled -- but it extends the block list to enterprise environments that have not yet flipped HVCI on.

### LOLDrivers and the dual-use externality

The block list is not the only catalogue of vulnerable Windows drivers. The community-maintained [LOLDrivers project](https://www.loldrivers.io/) -- "Living Off The Land Drivers" -- collects vulnerable, malicious, and known-malicious Windows drivers in one place. Every entry carries YAML metadata and where possible YARA, Sigma, ClamAV, and Sysmon rules, plus a pre-compiled App Control deny policy that can be deployed standalone [@gh-loldrivers] [@loldrivers-io]. As of the source verification for this article, LOLDrivers catalogued 2,132 driver entries -- considerably more than the Microsoft-shipped list.
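A sketch of how a catalogue entry becomes deny rules -- the field names here only loosely approximate the project's YAML metadata, and real policy compilation is considerably more involved:

```js
// Hypothetical shape of a LOLDrivers-style catalogue entry, reduced to
// the fields a deny rule needs. The real schema differs.

const entry = {
  name: "RTCore64.sys",
  category: "vulnerable driver",
  samples: [
    { sha1: "0000000000000000000000000000000000000000" }, // placeholder hash
  ],
};

function toDenyRules(e) {
  // One file-name rule, plus one hash rule per known sample.
  const rules = [{ kind: "fileName", value: e.name }];
  for (const s of e.samples) rules.push({ kind: "hash", value: s.sha1 });
  return rules;
}

console.log(toDenyRules(entry));
```

Hash rules catch the known-bad bytes; the file-name rule is broader but can be evaded by renaming only when the policy engine matches on the internal original-file-name, which is why real policies prefer signer-plus-version rules where possible.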

Check Point Research called out the dual-use problem in their [2024 piece](https://research.checkpoint.com/2024/breaking-boundaries-investigating-vulnerable-drivers-and-mitigating-risks/): a public catalogue of vulnerable drivers is also a reading list for attackers. The same researchers ran the methodology in reverse: "we conducted a mass hunt for new drivers that may be vulnerable, uncovering thousands of potentially at-risk drivers." [@cpr-byovd] Defenders use the list for hardening; attackers use it for shopping. Both effects are real.

> **Note:** Defenders who can tolerate compatibility risk can compile the source XML from the [block-rules page](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/microsoft-recommended-driver-block-rules) into an App Control policy and deploy it directly, picking up the entries Microsoft holds back from the in-box list. Optionally layer the [LOLDrivers App Control policy](https://github.com/magicsword-io/LOLDrivers) on top for community-curated coverage. Test in audit mode first -- both lists are more aggressive than the shipped baseline and may flag drivers your environment depends on [@ms-driver-block-rules] [@gh-loldrivers].

### A WDAC rule evaluator, in miniature

The semantics of an App Control policy are simple enough to model in a few lines. Deny rules win; allow rules are consulted next; the default action handles whatever is left.

<RunnableCode lang="js" title="Toy App Control evaluator: deny-first, then allow, then default">{`
// Simplified model of the App Control / WDAC rule-evaluation engine.
// Deny rules win, allow rules permit the remainder, and an explicit
// default action handles images neither denied nor allowed.

const policy = {
  denyByHash:    new Set(["c1d5cf8c43e7679b782630e93f5e6420ca1749a7"]), // Capcom.sys
  denyByName:    new Set(["RTCore64.sys"]),
  denyBySigner:  new Set(["CN=Some Compromised Publisher, O=Example"]),
  allowBySigner: new Set(["CN=Microsoft Windows, O=Microsoft Corporation"]),
  defaultAction: "BLOCK",
};

function evaluate(image, policy) {
  if (policy.denyByHash.has(image.sha1)) return "BLOCK (hash on deny list)";
  if (policy.denyByName.has(image.fileName)) return "BLOCK (name on deny list)";
  if (policy.denyBySigner.has(image.signer)) return "BLOCK (signer on deny list)";
  if (policy.allowBySigner.has(image.signer)) return "ALLOW (signer on allow list)";
  return policy.defaultAction === "ALLOW"
    ? "ALLOW (default)"
    : "BLOCK (default)";
}

const cases = [
  { sha1: "c1d5cf8c43e7679b782630e93f5e6420ca1749a7", fileName: "Capcom.sys",
    signer: "CN=CAPCOM Co., Ltd." },
  { sha1: "0000000000000000000000000000000000000000", fileName: "RTCore64.sys",
    signer: "CN=Micro-Star International Co., Ltd." },
  { sha1: "1111111111111111111111111111111111111111", fileName: "ntfs.sys",
    signer: "CN=Microsoft Windows, O=Microsoft Corporation" },
];
for (const c of cases) console.log(c.fileName, "->", evaluate(c, policy));
`}</RunnableCode>

Naming the weakness is genuinely new. But the list only ever lists what someone has already found. The window between disclosure and enforcement is months, and Microsoft documents that the shipped list is by design weaker than the published one. What gets the rest of the way?

## 8. The 2026 Stack: Defence in Depth Made Concrete

On a default-configured Windows 11 22H2 machine in 2026, a kernel driver that tries to load passes through five distinct gates. Each one closes a blind spot the previous one cannot reach.

The order matters, and so do the dependencies. The gates are:

1. **Kernel-Mode Code Signing.** The Authenticode chain must terminate at a Microsoft-owned root. The chain check rejects unsigned drivers and drivers chained to non-Microsoft roots, except under the documented [grandfathering carve-outs](https://learn.microsoft.com/en-us/windows-hardware/drivers/install/kernel-mode-code-signing-policy--windows-vista-and-later-) [@ms-kmcs-policy].
2. **The Vulnerable Driver Block List.** SKCI consults `DriverSiPolicy.p7b` for hash, file-name, and signer-level deny rules. The list is default-on for every client device since [Windows 11 22H2](https://blogs.windows.com/windowsexperience/2022/09/20/available-today-the-windows-11-2022-update/), and is updated quarterly through Microsoft Learn's published source XML and monthly through Windows servicing [@ms-driver-block-rules] [@ms-blogs-win11-2022].
3. **HVCI / SKCI.** The Code Integrity engine runs in VTL1, verifies signatures at section-mapping time rather than only at `IoLoadDriver`, and enforces W^X on kernel memory. The policy engine is structurally out of reach of a fully compromised VTL0 kernel [@ms-hvci-vbs].
4. **App Control / Smart App Control.** Enterprise admins author explicit App Control allowlists; consumer devices on clean Windows 11 installs run [Smart App Control](https://support.microsoft.com/topic/what-is-smart-app-control-285ea03d-fa88-4d56-882e-6698afdb7003), a Microsoft-authored allowlist policy backed by cloud reputation [@ms-sac-faq] [@ms-appcontrol].
5. **Defender ASR.** On Microsoft Defender for Endpoint deployments, the "Block abuse of exploited vulnerable signed drivers" ASR rule extends block-list coverage to HVCI-off environments [@ms-asr-rules].

<Definition term="Smart App Control (SAC)">
The Windows 11 22H2+ consumer-facing front end for App Control for Business. SAC enforces a Microsoft-authored policy and supplements it with cloud reputation lookups from the Intelligent Security Graph. SAC is only available on clean installs and is shipped in evaluation mode by default; once turned on, it also unconditionally enforces the vulnerable driver block list [@ms-sac-faq].
</Definition>

<Definition term="ISG (Intelligent Security Graph)">
The cloud-backed reputation service that Smart App Control consults to predict whether a given binary is safe. When confident, ISG approves the binary; when unconfident, SAC falls back to signature checks; absent both, the binary is blocked [@ms-sac-faq].
</Definition>
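The two definitions compose into a small decision procedure. A toy model -- names invented, and real SAC also weighs publisher reputation, which is why the cold-start failure mode discussed below exists:

```js
// Toy model of Smart App Control's decision chain as described above:
// cloud reputation first, signature as the fallback, block when neither
// gives confidence. The ISG is modelled as a hash -> verdict map.

function sacDecide(binary, isg) {
  const verdict = isg.get(binary.hash); // "safe" | "malicious" | undefined
  if (verdict === "safe") return "ALLOW (ISG: known good)";
  if (verdict === "malicious") return "BLOCK (ISG: known bad)";
  // ISG is unconfident: fall back to the signature. In reality an
  // unknown publisher's signature does not always clear the threshold.
  if (binary.validSignature) return "ALLOW (signed, unknown reputation)";
  return "BLOCK (no reputation, no signature)";
}

const isg = new Map([["aaa", "safe"], ["bbb", "malicious"]]);

console.log(sacDecide({ hash: "aaa", validSignature: false }, isg));
console.log(sacDecide({ hash: "bbb", validSignature: true  }, isg));
console.log(sacDecide({ hash: "ccc", validSignature: true  }, isg));
console.log(sacDecide({ hash: "ddd", validSignature: false }, isg));
```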

### Orthogonality, not redundancy

The five gates look redundant from a distance. They are not. Each closes a class of failure the others cannot reach. The orthogonality is the reason for the stack.

| Gate | Catches | Misses |
|------|---------|--------|
| KMCS | Unsigned and cross-cert-only-signed drivers | Signed-but-vulnerable drivers |
| Block list | Known-vulnerable signed drivers (post-disclosure) | Unknown-vulnerable signed drivers |
| HVCI / SKCI | `g_CiOptions`-patching from VTL0; writable+executable kernel pages | Behavioural BYOVD inside a properly-signed driver |
| WDAC / SAC | Anything not on the allowlist (enterprise) or unknown-reputation (consumer) | Allowlisted drivers with unknown defects |
| Defender ASR | Block-list entries on HVCI-off machines (where the rule is enabled) | Drivers not on Microsoft's blocklist |

The matrix is the practical justification for the stack. If `DriverSiPolicy.p7b` had perfect coverage there would be no need for SAC; if SAC had a complete allowlist there would be no need for the block list; if HVCI proved driver safety rather than driver identity there would be no need for either. None of those preconditions hold, and section 9 explains why they cannot.
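The same matrix can be read as a chain of predicates, each seeing only its own slice of the evidence. A sketch with invented driver fields:

```js
// The five gates as composed predicates. Each gate inspects one slice
// of the evidence; the field names on the driver object are invented.

const gates = [
  ["KMCS",       (d) => d.microsoftRootedChain],
  ["Block list", (d) => !d.onDriverSiPolicyDenyList],
  ["HVCI/SKCI",  (d) => d.sectionHashMatches && !d.hasWXPages],
  ["WDAC/SAC",   (d) => d.allowlistedOrReputable],
  ["ASR",        (d) => !d.asrBlocked],
];

function evaluateStack(driver) {
  for (const [name, pass] of gates) {
    if (!pass(driver)) return `BLOCK at ${name}`;
  }
  return "LOAD";
}

// A signed, reputable, not-yet-catalogued vulnerable driver: the gap.
const unknownVulnerable = {
  microsoftRootedChain: true,
  onDriverSiPolicyDenyList: false, // nobody has named it yet
  sectionHashMatches: true, hasWXPages: false,
  allowlistedOrReputable: true, asrBlocked: false,
};
console.log(evaluateStack(unknownVulnerable)); // "LOAD" -- the irreducible miss
```

Every row of the matrix corresponds to flipping one field: flip `microsoftRootedChain` and gate one fires; flip `onDriverSiPolicyDenyList` and gate two fires. The driver that flips none of them is the one section 9 is about.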

### Smart App Control's particulars

SAC merits a few specifics because its behaviour differs from the rest of the stack in ways that surprise readers. First, it is consumer-facing and only available on clean Windows 11 installs -- an upgrade does not get SAC. Second, SAC ships in *evaluation mode* by default. Windows watches user behaviour, and if the user mostly runs cloud-reputable software, SAC quietly flips to *enforce*; if the user runs a lot of niche or self-developed software, SAC quietly flips to *off*. Third, until a [2024 servicing change](https://support.microsoft.com/topic/what-is-smart-app-control-285ea03d-fa88-4d56-882e-6698afdb7003) made SAC re-enableable from Windows Security, turning SAC off used to require a clean install to bring it back [@ms-sac-faq]. Fourth, on enterprise-managed devices, SAC turns itself off automatically after 48 hours; managed environments are expected to deploy WDAC instead [@ms-appcontrol].

The cold-start failure mode is worth knowing. A small independent hardware vendor whose driver has never been seen at scale lacks a cloud reputation when SAC asks about it. The fallback is signature, but a signed driver from an unknown publisher does not always clear SAC's confidence threshold. Small IHVs occasionally find their drivers blocked on consumer hardware running SAC for that reason alone.

<Mermaid caption="The 2026 default Windows 11 22H2 stack, in evaluation order. KMCS gates identity; the block list gates known-vulnerability; HVCI/SKCI gates trust domain; SAC or WDAC gates allowlist policy; Defender ASR adds coverage on HVCI-off environments. Each layer covers a blind spot the others do not.">
flowchart TD
    A[Driver image requested] --> B[Gate 1: KMCS Authenticode chain]
    B --> C&#123;"Microsoft-rooted?"&#125;
    C -- "No" --> X[Refuse]
    C -- "Yes" --> D[Gate 2: DriverSiPolicy.p7b]
    D --> E&#123;"On block list?"&#125;
    E -- "Yes" --> X
    E -- "No" --> F[Gate 3: HVCI / SKCI section mapping]
    F --> G&#123;"Signature OK, W^X satisfied?"&#125;
    G -- "No" --> X
    G -- "Yes" --> H[Gate 4: App Control / SAC]
    H --> I&#123;"On allowlist or reputable?"&#125;
    I -- "No" --> X
    I -- "Yes" --> J[Gate 5: Defender ASR rule applicable]
    J --> K[Driver loads into VTL0 kernel]
</Mermaid>

### Verifying what the machine actually does

The state of the stack on any given Windows machine is observable. The Win32_DeviceGuard WMI class exposes a `SecurityServicesRunning` array whose integer codes name the security services currently active. The aside below covers the practitioner-facing details.

<Aside label="How to check what your machine actually does">
Two commands answer most of the question. From an elevated PowerShell prompt, `Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard` returns a structure whose `SecurityServicesRunning` array enumerates the services in operation; a value of **1** indicates **Credential Guard**, a value of **2** indicates **HVCI / Memory Integrity**, and additional values cover newer services (System Guard Secure Launch, SMM Firmware Measurement, Kernel-mode Hardware-enforced Stack Protection, and Hypervisor-Enforced Paging Translation) [@ms-hvci-vbs]. `bcdedit /enum {default}` shows whether `hypervisorlaunchtype` is set to `Auto`, the prerequisite for VBS being on at all. The block list file itself lives at `%windir%\system32\CodeIntegrity\DriverSiPolicy.p7b`; if it is missing, the in-box list is not deployed on that machine. None of these tell you whether your Defender ASR rule is active without a separate `Get-MpPreference` check.
</Aside>

A toy decoder helps make the WMI surface concrete.

<RunnableCode lang="js" title="Decode Win32_DeviceGuard SecurityServicesRunning flags">{`
// Mirror of the integer codes the Win32_DeviceGuard WMI class reports
// for SecurityServicesRunning. Documented on Microsoft Learn under
// the Memory Integrity / HVCI guidance.

const SERVICE_NAMES = {
  1: "Credential Guard",
  2: "Hypervisor-protected Code Integrity (HVCI / Memory Integrity)",
  3: "System Guard Secure Launch",
  4: "SMM Firmware Measurement",
  5: "Kernel-mode Hardware-enforced Stack Protection",
  6: "Kernel-mode Hardware-enforced Stack Protection (Audit mode)",
  7: "Hypervisor-Enforced Paging Translation",
};

function explain(servicesRunning) {
  if (!servicesRunning.length) {
    return "No VBS-rooted security services are running on this device.";
  }
  return servicesRunning
    .map((code) => SERVICE_NAMES[code] || ("unknown service " + code))
    .map((s) => "  - " + s)
    .join("\\n");
}

console.log("Sample 1: HVCI on, Credential Guard on");
console.log(explain([1, 2]));
console.log("\\nSample 2: nothing running");
console.log(explain([]));
console.log("\\nSample 3: full stack on a Secured-core PC");
console.log(explain([1, 2, 3, 4, 5]));
`}</RunnableCode>

Five gates is a lot of work to do what one ideal gate could not. The reason for the inflation is uncomfortable: the one ideal gate cannot, in principle, exist.

## 9. The Undecidability Wall

Why does Windows need five layers to do what one perfect signature ought to do? Because the perfect signature is mathematically impossible.

The third reframe of this article is the one that turns engineering frustration into theoretical inevitability. The property of interest -- "this signed driver, when exercised through its IOCTL surface, can be coerced into giving an attacker an arbitrary kernel-write primitive" -- is a non-trivial semantic property of the driver's program text. Rice's theorem says that for any non-trivial semantic property of programs, the predicate is undecidable on the class of all programs. No algorithm exists that, in finite time, answers correctly for every input.

A useful way to state the bound: if $P$ is the set of all kernel drivers and $\text{Unsafe}(p) = 1$ iff driver $p$ exposes a kernel-write primitive through its IOCTL handler, then no total computable function $f: P \to \&#123;0, 1\&#125;$ satisfies $f = \text{Unsafe}$. Every approximation either over-blocks ($f(p) = 1$ when $\text{Unsafe}(p) = 0$, false positives, broken drivers) or under-blocks ($f(p) = 0$ when $\text{Unsafe}(p) = 1$, false negatives, BYOVD in the wild). The signing pipeline scans for the obvious cases; sophisticated dynamic analysers will catch more of the not-obvious cases; but the unrestricted version of the problem has no complete solution.
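The standard reduction behind the claim fits in a sketch. Given any program, build a driver whose IOCTL handler exposes the write primitive only if that program halts; a total decider for $\text{Unsafe}$ would then decide halting. Everything below is illustrative:

```js
// Sketch of the Rice's-theorem reduction. `program` stands in for an
// arbitrary computation, modelled as a step-bounded simulator; the
// driver exposes the write primitive only if the program has halted.

function makeDriver(program) {
  return {
    ioctlHandler(steps, addr, value) {
      // Simulate `program` for `steps` steps; if it halted, expose
      // the arbitrary-write primitive.
      if (program(steps).halted) {
        return { wrote: { addr, value } }; // the BYOVD primitive
      }
      return { wrote: null };
    },
  };
}

// Two stand-in programs: one halts immediately, one never does.
const haltsAtOnce = (steps) => ({ halted: steps >= 1 });
const runsForever = (_steps) => ({ halted: false });

const unsafe = makeDriver(haltsAtOnce);
const safe   = makeDriver(runsForever);

console.log(unsafe.ioctlHandler(10, "0xdeadbeef", 0)); // write fires
console.log(safe.ioctlHandler(10, "0xdeadbeef", 0));   // never fires
// Deciding which driver is Unsafe, for an arbitrary `program`, is
// exactly the halting problem.
```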

> **Key idea:** Whether an arbitrary signed driver can be coerced into giving an attacker a kernel-write primitive is undecidable. No static signing scheme can ever block exactly the unsafe drivers. The Windows answer is therefore not a single perfect gate; it is defence in depth that narrows, but does not close, the gap.

### Microsoft's formal acknowledgement

Microsoft has been formally clear about a related point for years: the administrator-to-kernel transition is not, in the [MSRC servicing-criteria](https://www.elastic.co/security-labs/forget-vulnerable-drivers-admin-is-all-you-need) sense, a security boundary [@elastic-admin]. Elastic Security Labs put the position in plain English: "the blocklist's deployment model can be slow to adapt to new threats, with updates automatically deployed typically only once or twice a year. Users can manually update their blocklists, but such interventions bring us out of 'secure by default' territory ... When determining which vulnerabilities to fix, the Microsoft Security Response Center (MSRC) uses the concept of a security boundary." [@elastic-admin]

<PullQuote>
Administrator-to-kernel is not a security boundary, in the MSRC servicing-criteria sense. The defence-in-depth mechanisms described here mitigate it; from the impossibility result, none can close it.
</PullQuote>

The MSRC framing is engineering policy. The undecidability result is theoretical inevitability. They land in the same place: an attacker who has administrator privilege, who can pick from the entire history of signed Windows drivers, who is patient, is not stopped by any number of signature checks. The defence-in-depth mechanisms make the attacker work harder; they raise the cost; they shrink the surface of viable signed drivers. They do not close the structural gap.

### Closeable gaps and irreducible gaps

It is worth separating two kinds of gap.

The published-vs-shipped block list gap is a *policy* decision, not an engineering limit. Microsoft documents that "it's often necessary for us to hold back some blocks to avoid breaking existing functionality." [@ms-driver-block-rules]<Sidenote>The published-vs-shipped gap is the closeable part. An administrator who can author or import an App Control policy can deploy the published XML directly and pick up Microsoft's full curation. The irreducible part of the gap sits behind it: even the published list lists only what someone has already disclosed. The undecidability result applies to *finding* unsafe drivers, not to *listing* known-unsafe ones.</Sidenote> Defenders willing to accept compatibility risk can close it on their own machines today.

The gap that cannot close is the one between the published list and the universe of vulnerable drivers Microsoft has not yet learned about. That is where the undecidability result bites. No amount of pipeline tightening eliminates the class of design flaws whose recognition requires understanding what the driver's IOCTL handler will do under all possible inputs.

### What static methods *can* achieve

Quantifying what the existing layers achieve is more useful than lamenting what they cannot. The complexity bounds for each layer are well-defined.

Authenticode signature verification is bounded below by one public-key operation and one cryptographic hash over the PE image, regardless of policy. SKCI's per-section cost is dominated by that constant. The Memory Integrity page is conspicuously silent on a published benchmark number; in practice the overhead is small but non-zero on Intel Kaby Lake-and-later or AMD Zen 2-and-later silicon with MBEC/GMET hardware acceleration, and meaningfully higher on the emulated Restricted User Mode fallback path that older silicon uses [@ms-hvci-vbs].

WDAC allowlist evaluation is $O(\log r)$ per image over $r$ rules with a sorted index (expected $O(1)$ with a hash table), or $O(r)$ on the naïve linear scan; the deny-rule check in `DriverSiPolicy.p7b` follows the same bound.
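The two bounds can be illustrated with a toy rule set. The hash strings and corpus size below are invented, and `bisect` over a sorted list stands in for whatever index the real engine uses.

```python
import bisect

# A made-up sorted rule set of 100,000 hash-like strings.
rules = sorted(f"hash{i:08x}" for i in range(100_000))

def linear_match(h: str) -> bool:
    """O(r): the naive scan over every rule."""
    return any(h == rule for rule in rules)

def indexed_match(h: str) -> bool:
    """O(log r): binary search over the sorted rule set."""
    i = bisect.bisect_left(rules, h)
    return i < len(rules) and rules[i] == h

assert linear_match("hash0000002a") and indexed_match("hash0000002a")
assert not linear_match("unknown") and not indexed_match("unknown")
```

Either way the per-image cost is dwarfed by the signature check's hash-over-the-image constant, which is why rule-set size has never been the operational bottleneck.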

The gap between achievable static enforcement and the ideal "block all and only the unsafe drivers" is, in the limit, irreducible.

### Three axes that can be improved

If the gap cannot close, it can be narrowed along three independent axes -- and the improvements that matter look like one of these:

- **Reactiveness.** The disclosure-to-enforcement latency is months today. Forthcoming WHCP submission-time analyses can compress it.
- **Coverage of unknown-bad signed drivers.** Reputation, allowlists, and dynamic analysis at scale extend coverage beyond what a static deny list can enumerate.
- **Visibility into binary contents.** SBOMs answer "what is inside this driver?" -- a question the signature alone never asked.

Each axis is the answer to a different blind spot. None substitutes for another. Section 11 returns to the SBOM axis specifically because it is the one Microsoft is building into the submission flow right now.

Static signing has hit a wall it cannot push through. The only way forward is to widen the question. Two of the answers exist on other operating systems. The third is being built now.

## 10. The Other Two Operating Systems

Linux solved the signing half and pushed the curated-denylist half down to distribution vendors. macOS solved both by making third-party drivers stop being drivers.

### Linux: signatures without a curated denylist

Linux has supported in-kernel module signing since version 3.7 (December 2012), under the configuration symbol `CONFIG_MODULE_SIG`. The [kernel documentation](https://docs.kernel.org/admin-guide/module-signing.html) catalogues the supported algorithms: "The built-in facility currently only supports the RSA, NIST P-384 ECDSA and NIST FIPS-204 ML-DSA public key signing standards." [@docs-kernel-module-sig] The choice of signature scheme is a build-time decision, and the kernel can be told to use a key embedded in the kernel image, a key loaded into the trusted keyring at runtime, or a Machine Owner Key managed by `shim` and the platform's UEFI boot stack.

The structural decision that matters is the enforcement mode. `CONFIG_MODULE_SIG_FORCE` is the toggle. The kernel documentation describes the two settings cleanly: "If this is off (ie. 'permissive'), then modules for which the key is not available and modules that are unsigned are permitted, but the kernel will be marked as being tainted ... If this is on (ie. 'restrictive'), only modules that have a valid signature that can be verified by a public key in the kernel's possession will be loaded." [@docs-kernel-module-sig]

Most mainstream distributions ship permissive: unsigned modules taint the kernel but load. Restrictive enforcement is real in practice on RHEL and modern Ubuntu with Secure Boot enabled, paired with the Linux *lockdown* security module, which restricts certain root-level kernel-modification paths even on signed builds.<Sidenote>The Linux lockdown LSM is the closest mainline-Linux analogue to HVCI's policy-out-of-reach property. The [`kernel_lockdown(7)` man page](https://man7.org/linux/man-pages/man7/kernel_lockdown.7.html) describes lockdown as "designed to prevent both direct and indirect access to a running kernel image" and enumerates the restricted surfaces: `/dev/mem`, `/dev/kmem`, `/dev/kcore`, kprobes, BPF, MSR alteration, ACPI table overrides, and unsigned kexec [@man7-kernel-lockdown]. It is a partial analogue, not equivalent: lockdown still runs in the same trust domain as the kernel it polices, so a sufficient kernel exploit defeats it. HVCI's VTL0/VTL1 split is structurally stronger.</Sidenote>

What Linux does not have is the equivalent of `DriverSiPolicy.p7b`. There is no kernel-level curated denylist of "we have learned this module is unsafe; refuse to load it by name". Defenders rely on per-distribution CVE trackers, on `modprobe.blacklist`, and on `udev` rules to keep specific modules out. The G5 generation -- naming the *weakness* rather than the publisher -- has no mainline Linux equivalent at the kernel-loader level.

### macOS: DriverKit removes the surface

Apple's answer is structurally different. Starting with [macOS Catalina 10.15](https://support.apple.com/en-us/HT210999) in 2019, Apple deprecated legacy kernel extensions for third parties and pushed developers onto the [DriverKit](https://developer.apple.com/documentation/driverkit) framework instead [@apple-legacy-extensions] [@apple-driverkit].

<Definition term="DriverKit">
Apple's user-space driver framework, introduced with macOS Catalina 10.15. Third-party drivers ship as `.dext` user-space extensions linked against a curated IOKit subset; they receive IOKit messages from the kernel and respond with the same operations they used to perform in ring zero, but the code itself runs in user mode under sandbox restrictions. The kernel side of the new model exposes a controlled message surface; the third-party side cannot directly execute kernel code.
</Definition>

A `.dext` runs in user space under a sandbox profile. It can claim devices, register for IOKit interrupts, and exchange messages with kernel-side broker code -- but it cannot, in any usable sense, execute arbitrary code in the kernel address space. The Capcom.sys class of vulnerability cannot be expressed in DriverKit: there is no IOCTL surface whose handler runs in ring zero, because the handler does not run in ring zero. Apple reinforces the boundary further with [System Integrity Protection](https://support.apple.com/guide/security/system-integrity-protection-secb7ea06b49/web) (since 2015) and, on Apple Silicon, Kernel Integrity Protection (KIP), which makes the kernel page tables read-only after boot [@apple-sip].

The price was paid by Apple's IHV community. Whole categories of third-party drivers -- deep audio, virtualisation, certain security tools -- spent years migrating, and some categories took multiple macOS releases before a DriverKit equivalent of a particular kext capability existed. Apple Silicon requires explicit reduced-security mode to load *any* legacy kext at all: Apple's [Platform Security guide](https://support.apple.com/guide/security/securely-extending-the-kernel-sec8e454101b/web) records that "Kexts must be explicitly enabled for a Mac with Apple silicon by holding the power button at startup to enter into One True Recovery (1TR) mode, then downgrading to Reduced Security and checking the box to enable kernel extensions" [@apple-kext-aux].

### Why Windows cannot copy Apple

The reason Windows cannot make Apple's move in the short term is operational, not architectural. Windows' IHV installed base is orders of magnitude larger and less centrally controlled. Microsoft does not own its hardware vendors the way Apple owns Macs. Breaking compatibility with twenty years of shipped kernel drivers would impose unbounded migration cost on third parties Microsoft cannot direct.

| Dimension | Windows (2026) | Linux (mainline + RHEL-class hardening) | macOS (Catalina+ / Apple Silicon) |
|-----------|-----------------|-----------------------------------------|-----------------------------------|
| Default signature enforcement | Mandatory on x64 since 2006 | Permissive (taints kernel); restrictive on hardened distros | Mandatory; legacy kexts deprecated |
| Curated denylist of signed-but-vulnerable artefacts | `DriverSiPolicy.p7b`, default-on since 22H2 | None at kernel loader; per-distro CVE trackers | Not needed -- third-party kexts removed |
| Policy engine isolated from kernel it polices | HVCI in VTL1 | Lockdown LSM (same trust domain) | KIP and SIP on Apple Silicon |
| Third-party drivers in kernel | Yes, still the model | Yes | No -- DriverKit user-space dexts |
| Operational price of the model | Compatibility carve-outs, opt-outs | Permissive default | Multi-year IHV migration |

Windows cannot move drivers to user space at Apple's speed. But it can look at *what is inside* a driver in a way the signature alone never could. And it has been quietly building that capability since 2022.

## 11. What Comes Next: SBOM, Artifact Signing, Dynamic Analysis

If signatures cannot answer "is this driver safe", and the block list can only ever answer "is this driver known-unsafe", the next question Windows has to learn how to ask is "what is inside this driver?"

### SBOM for drivers

A Software Bill of Materials is a structured inventory of the components, dependencies, and versions inside a software artefact. The mainstream community formats are SPDX (now at version 3.0) and CycloneDX; Microsoft contributes to and ships an open-source tool, [microsoft/sbom-tool](https://github.com/microsoft/sbom-tool), that produces SPDX-compatible SBOMs as part of a build pipeline [@gh-sbom-tool]. The repository description is plain: "The SBOM tool is a highly scalable and enterprise ready tool to create SPDX 2.2 and SPDX 3.0 compatible SBOMs for any variety of artifacts. The tool uses the Component Detection libraries to detect components and the ClearlyDefined API to populate license information for these components." [@gh-sbom-tool]

<Definition term="SBOM (Software Bill of Materials)">
A machine-readable inventory of components and dependencies inside a software artefact. For a Windows kernel driver, an SBOM lists the third-party static libraries linked into the PE, the open-source code paths bundled with the driver, and the versions of each, in a format (SPDX, CycloneDX) that automated tools can consume to answer "is any component of this driver subject to a known vulnerability?"
</Definition>
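What such an inventory buys in practice can be shown with a hand-rolled SPDX-2.2-shaped document. The driver name, component list, and known-vulnerable set below are invented for illustration; a real pipeline would consume output from a tool like `microsoft/sbom-tool` and a live CVE feed.

```python
# An SPDX-2.2-shaped inventory for a hypothetical driver. Top-level field
# names follow SPDX 2.2; the packages and the CVE set are made up.
sbom = {
    "spdxVersion": "SPDX-2.2",
    "SPDXID": "SPDXRef-DOCUMENT",
    "name": "example-driver-sbom",
    "packages": [
        {"SPDXID": "SPDXRef-Package-driver", "name": "example.sys",
         "versionInfo": "1.4.2"},
        {"SPDXID": "SPDXRef-Package-zlib", "name": "zlib",
         "versionInfo": "1.2.11"},
    ],
}

# A scanner answers "is any component subject to a known vulnerability?"
# by joining on (name, version) against a vulnerability feed:
known_vulnerable = {("zlib", "1.2.11")}
flagged = [p["name"] for p in sbom["packages"]
           if (p["name"], p["versionInfo"]) in known_vulnerable]
assert flagged == ["zlib"]
```

The join is trivial once the inventory exists; the hard part, and the part the WHCP mandate forces, is making every submitted driver carry the inventory at all.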

The piece that affects Windows drivers specifically is the Windows Hardware Compatibility Program SBOM requirement. The [Microsoft Q&A entry on Hardware Dev Center and CRA compliance](https://learn.microsoft.com/en-us/answers/questions/5732099/would-hardware-dev-center-for-cra-compliance-chang) is candid: "The WHCP SBOM requirement (Device.DevFund.Security.SoftwareBillofMaterials) has been deferred and will only be enforced starting in H2 2026." [@ms-qa-cra] The deferral aligns the WHCP rollout with the European Union's Cyber Resilience Act compliance window.

<Aside label="The compliance angle">
The EU Cyber Resilience Act sets phased compliance obligations for products with digital elements sold into the EU market. Among them is a requirement to produce a machine-readable SBOM that customers and regulators can inspect. Microsoft's WHCP SBOM mandate, scheduled for H2 2026, is the Windows-specific implementation of the same requirement, applied to kernel drivers submitted through the Hardware Dev Center. For regulated-industry IHVs, the WHCP gate and the CRA gate land at the same time and concern the same artefact [@ms-qa-cra].
</Aside>

There is a structural problem an SBOM does not solve on its own. If the SBOM ships separately from the driver, an attacker who controls the distribution path can substitute a clean-looking SBOM for a contaminated driver. The WHCP submission flow is expected to bind the SBOM cryptographically to the artefact it describes so that a recipient can verify the binding, but the public documentation for the binding mechanism is still light beyond the [WHCP SBOM mandate itself](https://learn.microsoft.com/en-us/answers/questions/5732099/would-hardware-dev-center-for-cra-compliance-chang) [@ms-qa-cra].
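The binding requirement can be sketched in a few lines. This is a hypothetical scheme, not the WHCP mechanism: embed the artefact's hash inside the SBOM, then sign the SBOM; substituting either half breaks verification. An HMAC with a shared demo key stands in for a real Authenticode-style signature here.

```python
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"   # stand-in for a real submitter signing key

def bind(artifact: bytes, sbom: dict) -> tuple[bytes, bytes]:
    """Embed the artefact hash in the SBOM, then sign the SBOM blob."""
    bound = dict(sbom, artifactSha256=hashlib.sha256(artifact).hexdigest())
    blob = json.dumps(bound, sort_keys=True).encode()
    return blob, hmac.new(SIGNING_KEY, blob, hashlib.sha256).digest()

def verify(artifact: bytes, blob: bytes, sig: bytes) -> bool:
    if not hmac.compare_digest(
            hmac.new(SIGNING_KEY, blob, hashlib.sha256).digest(), sig):
        return False    # SBOM tampered with or re-signed
    claimed = json.loads(blob)["artifactSha256"]
    return claimed == hashlib.sha256(artifact).hexdigest()  # artefact swapped?

driver = b"MZ...driver bytes..."
blob, sig = bind(driver, {"name": "example-driver-sbom"})
assert verify(driver, blob, sig)
assert not verify(b"MZ...tampered...", blob, sig)
```

The point of the sketch is the failure mode it closes: an unbound SBOM verifies against any artefact, which is exactly the substitution attack the paragraph above describes.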

### Dynamic analysis at submission time

The other axis of improvement is reactiveness. Today, the typical disclosure-to-enforcement cycle for a new BYOVD driver looks like this: vendor ships, attacker exploits, researcher discloses, Microsoft adds to the quarterly published list, Windows servicing pushes to clients. The latency is months. Two recent research programmes show how dynamic analysis at scale can compress it.

The first is the EURECOM/Politecnico di Milano NDSS 2026 paper, listed on the [authors' publication page](https://www.eurecom.fr/publication/8384). The team built a DRAKVUF-based instrumentation layer called Kernelmon and traced every kernel function executed by signed drivers under malware-loaded workloads [@eurecom-paper]. The numbers are unusually concrete: the [paper PDF](https://www.s3.eurecom.fr/docs/ndss26_monzani.pdf) reports that the team "analyzed 8,779 malware samples that load 773 distinct signed drivers. It flagged suspicious behavior in 48 drivers, and subsequent manual verification led to the responsible disclosure of seven previously unknown vulnerable drivers" [@eurecom-paper-pdf]. The companion [S3 blog post](https://www.s3.eurecom.fr/post/2025/10/13/unveiling-byovd-threats-malwares-use-and-abuse-of-kernel-drivers/) corroborates the 48-flagged / 7-disclosed numbers and notes that one of the seven received CVE-2024-26506 [@eurecom-s3-blog]. The technique is dynamic: it runs the driver under a hypervisor, watches what its IOCTL handlers actually do, and flags patterns characteristic of the BYOVD class.

The second is [Check Point Research's 2024 work](https://research.checkpoint.com/2024/breaking-boundaries-investigating-vulnerable-drivers-and-mitigating-risks/), which built a mass-hunt methodology around import-table signatures of risky kernel APIs and ran it across the global driver corpus. "Using the same methodology, we conducted a mass hunt for new drivers that may be vulnerable, uncovering thousands of potentially at-risk drivers." [@cpr-byovd] The technique is static: it asks *what does the driver import?* rather than *what does it do under exercise?* Combined, the two approaches cover complementary halves of the surface.
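A minimal sketch of the static half, assuming a made-up risky-API list (Check Point's actual signature set is not public) and a pre-extracted import table rather than real PE parsing:

```python
# Hypothetical re-creation of an import-table heuristic. The risky-API set
# and the corpus below are illustrative, not any vendor's real signatures.
RISKY_IMPORTS = {
    "MmMapIoSpace",           # arbitrary physical-memory mapping
    "ZwMapViewOfSection",     # mapping sections into arbitrary processes
    "MmGetPhysicalAddress",   # virtual-to-physical translation
    "IoCreateSymbolicLink",   # often paired with a wide-open device ACL
}

def risk_score(imports: set[str]) -> int:
    """Count risky kernel APIs the driver imports; a crude triage signal."""
    return len(imports & RISKY_IMPORTS)

# Triage ordering over a made-up corpus of (driver -> import table):
corpus = {
    "benign_nic.sys": {"NdisMRegisterMiniportDriver", "NdisAllocateMemory"},
    "overclock.sys":  {"MmMapIoSpace", "MmGetPhysicalAddress",
                       "IoCreateSymbolicLink"},
}
ranked = sorted(corpus, key=lambda d: risk_score(corpus[d]), reverse=True)
assert ranked[0] == "overclock.sys"
```

The heuristic is cheap enough to run over every signed driver ever shipped, which is precisely why the static half scales to "thousands of potentially at-risk drivers" while the dynamic half confirms the flagged subset.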

Neither currently gates Hardware Dev Center submissions. Both are candidates for the kind of submission-time check that would compress disclosure-to-enforcement latency from quarters to days.

### Empirical patterns the defences have to recognise

Cisco Talos's BYOVD work, summarised in [their *Exploring vulnerable Windows drivers* post](https://blog.talosintelligence.com/exploring-vulnerable-windows-drivers/), classifies the post-load payloads attackers actually run [@talos-byovd]. Three behaviour classes dominate: token-swap escalation that overwrites the access token in the `_EPROCESS` structure to reach SYSTEM; unsigned-code-loading that uses the kernel-write primitive to disable DSE or patch CI state; and EDR-killing that clears the kernel callback registrations endpoint detection products rely on. Each is a target for the dynamic analyses above, each is detectable by import-table heuristics, and each is what defenders see in the wild today.
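A sketch of what recognising those three classes from a behaviour trace might look like. The trace-event strings are an invented representation for this sketch, not the output of any real tracer; only the class names come from the Talos taxonomy above.

```python
# Map observed post-load behaviour to the three payload classes Talos
# describes. Event strings are an invented trace format for this sketch.
def classify(trace: list[str]) -> set[str]:
    classes = set()
    if any(e.startswith("write:_EPROCESS.Token") for e in trace):
        classes.add("token-swap escalation")    # overwrite token -> SYSTEM
    if any(e.startswith(("write:ci!g_CiOptions", "patch:CI")) for e in trace):
        classes.add("unsigned-code loading")    # disable DSE / patch CI state
    if any(e.startswith("clear:callback:") for e in trace):
        classes.add("EDR killing")              # strip kernel callbacks
    return classes

trace = ["read:MSR:0xC0000082", "write:_EPROCESS.Token",
         "clear:callback:PsSetCreateProcessNotifyRoutine"]
assert classify(trace) == {"token-swap escalation", "EDR killing"}
```

The stability of the payload classes is what makes even a rule set this crude useful: the targets (`_EPROCESS` tokens, CI state, callback arrays) have barely moved in fifteen years.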

The historical roots are old. The Microsoft Security blog tracing the Vulnerable & Malicious Driver Reporting Center is direct: "Multiple malware attacks, including RobinHood, Uroburos, Derusbi, GrayFish, and Sauron, have leveraged driver vulnerabilities (for example CVE-2008-3431, CVE-2013-3956, CVE-2009-0824, and CVE-2010-1592)." [@ms-vdrc-blog] The payload classes have stayed remarkably stable for fifteen years.

> **Note:** The structural gap between *signed* and *safe* cannot close. It can be narrowed along three independent axes. Reactiveness (closeable by submission-time dynamic analysis along the lines of the [EURECOM NDSS 2026 paper](https://www.eurecom.fr/publication/8384) [@eurecom-paper] and [Check Point's mass-hunt methodology](https://research.checkpoint.com/2024/breaking-boundaries-investigating-vulnerable-drivers-and-mitigating-risks/) [@cpr-byovd]). Coverage of unknown-bad signed drivers (extended by reputation-backed allowlists like Smart App Control and by WDAC enterprise policies). Visibility into binary contents (the H2 2026 [WHCP SBOM mandate](https://learn.microsoft.com/en-us/answers/questions/5732099/would-hardware-dev-center-for-cra-compliance-chang) and the SBOM-to-artefact binding the submission flow is expected to enforce [@ms-qa-cra]). Each axis closes a different blind spot. None substitutes for another.

### Threats the stack cannot yet absorb

Three problems remain open and uncovered by the published roadmap. The Smart App Control cold-start window means drivers from small IHVs with no cloud reputation fall through to the signature check alone, and signature alone is exactly what we already established does not answer the question. BYOVD on HVCI-off environments, prevalent in older anti-cheat configurations and on enterprise machines with legacy ISV drivers, still admits the `g_CiOptions`-patching family from VTL0 because there is no VTL1 to keep the policy out of reach. And the shipped-vs-published block list gap, while operationally rational and individually closeable by a willing administrator, is a gap every default-on customer carries.

None of those closes by algorithmic improvement. Each closes only by widening the question.

What started as a yes/no signature check has become a continually expanding set of questions Windows asks before it will hand a driver the keys to ring zero. None of those questions is sufficient. All of them are necessary. And the next one is already being written into the WHCP submission flow.

## 12. What This Means in Practice

Three audiences, three things to do.

**Administrators.** Confirm the stack is on. `Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard` returns a `SecurityServicesRunning` array; a `2` in the array confirms HVCI. A `DriverSiPolicy.p7b` in `%windir%\system32\CodeIntegrity\` confirms the in-box block list is deployed. If you can tolerate the compatibility risk, compile the published [block-rules XML](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/microsoft-recommended-driver-block-rules) into an App Control policy and deploy it (audit mode first) [@ms-driver-block-rules]. If you run Windows Server 2016, you have to deploy an explicit policy yourself because the in-box default does not apply there [@ms-driver-block-rules]. If you ship through the Hardware Dev Center, schedule the H2 2026 WHCP SBOM gate now [@ms-qa-cra]. Subscribe to the Vulnerable & Malicious Driver Reporting Center cadence for new disclosures [@ms-vdrc-blog].

**Driver authors.** Assume your IOCTL surface will be read by [Check Point's import-table mass hunt](https://research.checkpoint.com/2024/breaking-boundaries-investigating-vulnerable-drivers-and-mitigating-risks/) and exercised by [EURECOM's Kernelmon](https://www.eurecom.fr/publication/8384) [@cpr-byovd] [@eurecom-paper]. Any handler that takes a user-supplied address and returns kernel data, or that dispatches a user-supplied function pointer, will end up on a block list on its current trajectory.

**Researchers.** The field is wide open. The undecidability result is real, but the practical gap between what current analyses detect and what is, in principle, detectable for any specific vulnerability class is large. The NDSS 2026 paper found seven CVE-worthy drivers in a corpus of 773. The next paper will find more.

### Every layer is somebody's incident report

Every layer in the 2026 stack exists because the previous one lost to a named adversary. Sony BMG XCP retired advisory signing. Stuxnet retired the assumption that a valid chain is a safe chain. Capcom.sys retired the assumption that a safe chain is a safe driver. RTCore64.sys, gdrv.sys, and KProcessHacker retired the assumption that the BYOVD class would burn itself out. Each entry on `DriverSiPolicy.p7b` is somebody's incident report, recorded in the most permanent place Microsoft can put it -- the kernel loader's deny list.

> **Note:** Windows 11 22H2 ships with a list of drivers Microsoft will not load. The next list will be longer. The story has been adversarial since 1996 and the trajectory does not reverse: every layer was added because the previous one met an attacker. The structural gap is undecidable; the engineering gap, narrowable; the work, unfinished.

## Frequently Asked Questions

<FAQ title="Frequently asked questions">
<FAQItem question="Does HVCI block every vulnerable driver?">
No. HVCI verifies the Authenticode signature at section-mapping time and enforces a write-xor-execute invariant on kernel memory; it does not analyse the driver's IOCTL surface. A signed driver with an unsafe IOCTL passes HVCI unchanged and proceeds to execute its handler in kernel mode with kernel privilege. That is what the Vulnerable Driver Block List is for: HVCI gates *who decides*; the block list gates *what gets decided*. See the [Memory Integrity page](https://learn.microsoft.com/en-us/windows/security/hardware-security/enable-virtualization-based-protection-of-code-integrity) [@ms-hvci-vbs].
</FAQItem>

<FAQItem question="Can I update the block list faster than quarterly?">
Yes. Microsoft publishes the source XML on the [Microsoft-recommended driver block rules page](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/microsoft-recommended-driver-block-rules) [@ms-driver-block-rules]. You can compile it into a binary App Control policy with the standard tooling and deploy it directly, picking up entries Microsoft holds back from the in-box list. Test in audit mode first because the published list is more inclusive than the shipped list and may flag drivers your environment depends on. Many defenders layer the [LOLDrivers App Control policy](https://github.com/magicsword-io/LOLDrivers) on top for community-curated coverage [@gh-loldrivers].
</FAQItem>

<FAQItem question="What about Windows Server 2016?">
Windows Server 2016 does not enforce the block list by default, even when Memory Integrity is on. The [block-rules page](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/microsoft-recommended-driver-block-rules) calls this out explicitly [@ms-driver-block-rules]. If you administer Server 2016, deploy an explicit App Control policy to get the same coverage as the default-on 22H2 client.
</FAQItem>

<FAQItem question="How is Smart App Control different from App Control for Business?">
App Control for Business (the engine formerly known as WDAC) is a policy *you* author. You define what signers, hashes, and paths are allowed; you ship and enforce the policy yourself. Smart App Control is a Microsoft-authored policy bundled with cloud reputation lookups via the Intelligent Security Graph. SAC is the consumer-friendly front end; App Control is the enterprise back end. SAC's default policy ships at `%windir%\schemas\CodeIntegrity\ExamplePolicies\SmartAppControl.xml`. SAC is consumer-only and turns itself off after 48 hours on enterprise-managed devices, where the expectation is that the operator deploys an App Control policy directly. See the [Smart App Control FAQ](https://support.microsoft.com/topic/what-is-smart-app-control-285ea03d-fa88-4d56-882e-6698afdb7003) and the [App Control for Business overview](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/appcontrol) [@ms-sac-faq] [@ms-appcontrol].
</FAQItem>

<FAQItem question="Are anti-cheat engines compatible with HVCI?">
Increasingly yes. Major anti-cheat vendors have shipped HVCI-compatible kernel components since around 2023, but a meaningful tail of older configurations still requires HVCI off. On those configurations, the [`g_CiOptions`-patching technique TrustedSec describes](https://www.trustedsec.com/blog/g_cioptions-in-a-virtualized-world) is back in play because the policy variable is no longer protected behind VTL1 [@trustedsec-gcioptions]. Audit your gaming-rig population if you care about coverage.
</FAQItem>

<FAQItem question="What is the practical difference between the in-box blocklist and LOLDrivers?">
The in-box block list is Microsoft-curated with explicit compatibility holdbacks; the [LOLDrivers catalogue](https://www.loldrivers.io/) is community-curated, considerably more inclusive (approximately 2,132 entries as of the source verification for this article), and ships with App Control deny policies, Sigma, YARA, ClamAV, and Sysmon detection content alongside the entries [@loldrivers-io] [@gh-loldrivers]. For threat hunting, use both. For enforcement, layer the LOLDrivers App Control policy on top of the in-box list if your environment can tolerate the compatibility risk. [Check Point Research](https://research.checkpoint.com/2024/breaking-boundaries-investigating-vulnerable-drivers-and-mitigating-risks/) has documented the dual-use externality of any such public list -- attackers also read them -- but the defender net benefit of broader coverage outweighs the marginal attacker advantage on most environments [@cpr-byovd].
</FAQItem>
</FAQ>

<StudyGuide slug="vulnerable-driver-block-list-hvci-and-the-driver-signing-lifecycle" keyTerms={[
  { term: "Authenticode", definition: "Microsoft's PKCS#7 code-signing format, used in Windows since 1996. Attests to publisher identity; does not analyse program behaviour." },
  { term: "KMCS", definition: "Kernel-Mode Code Signing. The mandatory load-time signature policy on 64-bit Windows since Vista x64 in 2006." },
  { term: "BYOVD", definition: "Bring Your Own Vulnerable Driver. An attack pattern in which an adversary installs a signed but design-vulnerable third-party driver to gain kernel capability." },
  { term: "HVCI", definition: "Hypervisor-protected Code Integrity, also called Memory Integrity. The Code Integrity engine running in VTL1 under a Hyper-V root, isolated from the VTL0 kernel." },
  { term: "VTL", definition: "Virtual Trust Level. VTL0 is the normal Windows kernel; VTL1 is the Secure Kernel and trustlets. VTL1 can read VTL0 memory but not vice versa." },
  { term: "DriverSiPolicy.p7b", definition: "The Microsoft-signed App Control deny policy that lists known-vulnerable signed kernel drivers and is default-on for all Windows 11 22H2 client devices." },
  { term: "App Control for Business", definition: "Microsoft's policy-driven application control engine, formerly WDAC. Used for both deny lists (the block list) and enterprise allowlists." },
  { term: "Smart App Control", definition: "Consumer-facing front end for App Control, backed by ISG cloud reputation. Available on clean Windows 11 22H2+ installs only." },
  { term: "SBOM", definition: "Software Bill of Materials. Machine-readable inventory of components and dependencies. Mandatory for WHCP submissions from H2 2026." },
  { term: "DriverKit", definition: "Apple's user-space driver framework. Third-party drivers ship as sandboxed dexts rather than kernel extensions; the BYOVD class is eliminated by construction." },
]} questions={[
  { q: "Why did the Windows kernel-driver signing policy have to wait until Vista x64 to become mandatory?", a: "The advisory SetupAPI-prompt model on 32-bit Windows could not be made mandatory without breaking compatibility with decades of unsigned drivers. The x64 architecture was a young platform with relatively few drivers in the field, which let Microsoft make the load-time signature requirement mandatory without disrupting an installed base." },
  { q: "What single property of HVCI makes the g_CiOptions patching technique stop working?", a: "HVCI runs the signature-verification and policy-consultation logic inside VTL1's Secure Kernel and uses Kernel Data Protection, exposed to VTL0 drivers as MmProtectDriverSection, to mark the VTL0 page containing g_CiOptions read-only at the second-level address translation level. The variable still resides in ci.dll's VTL0 data section, but a VTL0 ring-zero write to it faults because the SLAT mapping refuses the write -- and a live-kernel debugger attached to VTL0 cannot bypass that protection either." },
  { q: "Why does Microsoft document that the published block list is more inclusive than the shipped one?", a: "Some entries in the published list would block drivers that legitimate environments still depend on. Microsoft holds those entries back from the in-box DriverSiPolicy.p7b to avoid breaking existing functionality, while leaving them available in the source XML for defenders who can author their own App Control policies and accept the compatibility risk." },
  { q: "Why is the BYOVD class undecidable to gate at the signing stage?", a: "Whether an arbitrary signed driver exposes a kernel-write primitive through its IOCTL surface is a non-trivial semantic property of the driver's program text. Rice's theorem says no algorithm decides such properties for all programs. Static and dynamic analyses catch decidable subsets; the unrestricted class admits no complete solution." },
  { q: "Why can Windows not simply move third-party drivers to user space the way macOS DriverKit did?", a: "Apple owns its hardware vendors and could impose a multi-year migration on a comparatively centralised vendor community. Windows' third-party IHV base is much larger and more independent; breaking compatibility with twenty years of shipped kernel drivers would impose unbounded migration cost on parties Microsoft does not direct." },
]} />
