
The Driver That Was Signed and the Driver That Won't Load: Windows Kernel Code Integrity, 2006-2026

A history of Windows kernel code-signing -- KMCS, BYOVD, HVCI, the Vulnerable Driver Block List, and why a 2026 Windows kernel uses five gates to decide what loads.


1. The Driver That Loaded

On 13 September 2016, the researcher Matt Nelson posted on his enigma0x3 blog that a Capcom-published kernel driver, Capcom.sys, exposed IOCTL 0xAA013044 and used it to execute a user-supplied function pointer in kernel mode, with SMEP disabled along the way [1]. Later in September 2016, Capcom pushed the same driver to Street Fighter V's entire installed base as part of an anti-cheat update; Capcom withdrew it shortly after, but the bytes were already in the wild. The often-told version of this story compresses three distinct events into one. Nelson's Let's Be Bad Guys post disclosed the IOCTL number and the function-pointer-execution primitive. OJ Reeves opened the canonical Metasploit pull request, rapid7/metasploit-framework#7363, on 27 September 2016; it was merged the following day, putting the technique in operational tooling within two weeks of disclosure [2]. And Satoshi Tanda's tandasat/ExploitCapcom repository, first published in October 2016, is the canonical standalone PoC and the artefact this article cites for the IOCTL number and SHA-1 hash.

The driver was properly Authenticode-signed. It chained to a Microsoft-recognised root. It loaded cleanly on every default-configured Windows 7, 8.1, and 10 machine in the world.

That is the puzzle this article exists to answer. How does an operating system whose entire kernel-loading policy amounts to the question was this binary signed? answer a vulnerability whose only failure mode is yes, by a real publisher, doing exactly what the signature says it does?

A class, not an incident

Capcom.sys was not the first signed kernel driver with a primitive IOCTL, and it would not be the last. The pattern recurs across two decades and is the through-line of this article. The catalogue includes Micro-Star's RTCore64.sys (the kernel component of MSI Afterburner), Gigabyte's gdrv.sys, and the KProcessHacker driver shipped with Process Hacker. Section 4 walks through each one with its primary disclosure record.

The attack class has a name. Bring Your Own Vulnerable Driver, or BYOVD. The adversary does not need to find a kernel zero-day. They need to find one signed driver, anywhere, whose interface is unsafe by design, and to ship it.

Windows in 2026 ships a curated list of Microsoft-signed drivers it refuses to load. Understanding that list is understanding why every previous attempt to make kernel-mode trust mean safety instead of just identity eventually broke.

The current Windows 11 22H2 client honours %windir%\system32\CodeIntegrity\DriverSiPolicy.p7b, a Microsoft-signed deny list enforced by a hypervisor-isolated code-integrity engine sitting in Virtual Trust Level 1. The same engine refuses to map any kernel page that is simultaneously writable and executable. Both behaviours are documented on Microsoft Learn's Memory Integrity page and the Microsoft-recommended driver block rules page [3] [4]. Neither existed in 2006.

To understand why Windows now refuses to load drivers it once asked Microsoft to sign, we need to go back thirty years to the moment Windows first asked a publisher to sign anything at all.

2. Advisory Trust: 1996 to 2005

For its first decade, the Windows driver signing policy was a polite recommendation.

Microsoft shipped its first user-mode code-signing primitive, Authenticode, in 1996, packaged for developers in the same toolkit that gave us SignTool, MakeCat, and Inf2Cat -- the suite Microsoft Learn still documents under "Cryptography tools" [5]. Authenticode wrapped a PKCS#7 signature around the SHA-1 (and later SHA-256) hash of a PE image and let a recipient walk the signer's certificate chain to a trusted root. It was the first answer to the question who shipped this binary? It was, deliberately, never an answer to is this binary safe?

Authenticode

Microsoft's PKCS#7-based code-signing format for Windows binaries. Authenticode attests to the publisher's identity by binding the binary's hash to a certificate chain anchored at a trusted root. It does not analyse the program's behaviour.
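To make the identity-versus-safety distinction concrete, here is a deliberately tiny model of what an Authenticode-style check does and does not establish. Everything below is illustrative -- toy dictionaries stand in for PKCS#7 structures and certificate chains, and the names are invented -- but the shape of the decision is accurate:

```python
import hashlib

TRUSTED_ROOTS = {"Microsoft Root CA"}  # stand-in for the machine's root store

def verify(image: bytes, sig: dict) -> bool:
    """A toy Authenticode check: hash binding plus chain walk.

    Establishes (1) the bytes match the signed hash and (2) the signer
    chains to a trusted root. It inspects zero bytes of program logic.
    """
    if hashlib.sha256(image).hexdigest() != sig["image_hash"]:
        return False                  # bytes were tampered with after signing
    chain = sig["chain"]              # leaf -> ... -> root, assumed pre-validated
    return chain[-1] in TRUSTED_ROOTS # identity question, answered

driver = b"MZ...IOCTL handler that executes a user-supplied pointer..."
sig = {"image_hash": hashlib.sha256(driver).hexdigest(),
       "chain": ["Example Publisher", "Commercial CA", "Microsoft Root CA"]}

assert verify(driver, sig)  # loads fine: the signature has no opinion
                            # about what the IOCTL handler does
```

The point of the sketch is the last line: a passing check is a statement about provenance, never about behaviour.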

For drivers, the user-mode signing primitive was paired with a separate quality program. The Windows Hardware Quality Labs programme, documented today via the Hardware Lab Kit, tested third-party drivers against a Microsoft-curated compatibility suite and rewarded passing drivers with a counter-signature, eventually surfaced as the "Designed for Windows" or "Certified for Windows" mark [6]. The badge was operationally meaningful for OEM badging and Windows Update distribution. It was not a load-time gate. An unsigned .sys file dropped on disk by a setup script still loaded.

WHQL / HLK (Windows Hardware Quality Labs / Hardware Lab Kit)

Microsoft's compatibility-test programme for third-party drivers. A driver that passes the HLK test suite receives a Microsoft counter-signature and is eligible for OEM and Windows Update distribution. The programme produces a quality signal, not a load-time enforcement decision.

The SetupAPI prompt

On 32-bit Windows, the gate the user actually saw was the SetupAPI driver-installation prompt. The administrator could set the system to Ignore, Warn, or Block unsigned drivers; the default was Warn. Warn meant a click-through dialog at install time. An administrator who clicked Install this driver anyway loaded the unsigned driver, no further questions asked. The structural truth is the one Microsoft's modern KMCS policy page acknowledges by contrast: under advisory policy, the prompt is the policy, and a prompt is exactly as strong as the user clicking past it [7].

The Sony BMG XCP incident in October 2005 made the structural weakness concrete. The XCP copy-protection software, shipped on retail audio CDs, autorun-installed an unsigned kernel-mode filter driver. The driver hid any file, registry key, or process whose name began with the string $sys$ -- a textbook rootkit by capability if not by intent. The driver loaded after an administrator clicked through the warning prompt, exactly as advisory policy allowed. The pattern is described well in Wikipedia's code-signing article [8]. The Sony BMG XCP rootkit triggered class-action lawsuits, FTC settlements, and an industry-wide reconsideration of what "the user clicked OK" actually authorises. From a kernel-trust perspective, the lesson is narrower: any policy that ends in a dismissible dialog has the same threat model as no policy at all, against an attacker who can show the user a dialog.

The structural takeaway from 1996 through 2005 is the one the next decade tried to repair. When the signing policy is advisory, an attacker who has -- or can socially engineer -- administrator privilege only needs to dismiss a prompt to load a kernel driver. The signing primitive worked. The policy around the primitive did not.

If the prompt is the only thing between an attacker and ring zero, the kernel itself has to take over. And on a brand-new x64 architecture, Microsoft could break backward compatibility to make that happen.

3. KMCS: The Vista x64 Revolution (2006-2016)

In November 2006, Vista x64 made a decision that x86 never could: it refused to load any unsigned kernel driver, full stop.

The mechanism was Kernel-Mode Code Signing, or KMCS. The previous-versions Microsoft Learn page on Vista-era driver signing records the policy [9]. At the point where the I/O manager called IoLoadDriver, the Code Integrity module (ci.dll) intercepted the load, extracted the Authenticode signature embedded in the PE image or attached via a published catalogue, walked the certificate chain, and refused to map the image if the chain did not terminate at a Microsoft-trusted root. There was no SetupAPI prompt to dismiss. If the kernel refused, the kernel refused. The decision lived below the user's reach.

KMCS (Kernel-Mode Code Signing)

The Vista-era mandatory load-time signature policy on 64-bit Windows. Before mapping a kernel driver's PE image, the Code Integrity module verifies that the image's Authenticode signature chains to a Microsoft-trusted root. Drivers that fail the check are refused at load time, not at install time.

x86 kept the advisory policy. Microsoft could not break compatibility with two decades of unsigned drivers on the dominant platform. But x64 was a young architecture with a few hundred drivers in the field, and Microsoft used that moment to flip the default. The structural shift was real: kernel-driver trust on x64 became a property of the binary, decided in the kernel, against a fixed set of trusted roots.

Cross-certificates: opening the gate to the world

A Microsoft-trusted root alone would have meant Microsoft signs every driver, which Microsoft did not want. Instead Microsoft cross-certified a small set of commercial code-signing certificate authorities -- including VeriSign, DigiCert, Entrust, GlobalSign, GoDaddy, and several smaller successors enumerated on the historical cross-certificate list (2020 archive) -- so that a publisher could buy a code-signing certificate from a commercial CA, sign their driver, and have the chain still terminate at a Microsoft-recognised root [10]. The architecture is documented on the cross-certificates for kernel-mode code signing page, which now opens with a sentence that did not exist in 2006: "Cross-signing is no longer accepted for driver signing" [11]. We will come back to that.
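The cross-certificate mechanism is easy to model: an ordinary issuer walk from the publisher's leaf reaches the commercial CA's root, and a cross-certificate re-parents that root under a Microsoft one. The sketch below is a toy -- the subject names other than the Microsoft cross-signing root are invented -- but the bridging logic is the idea:

```python
# Toy model of KMCS chain walking with cross-certificates: a commercial
# CA's root is bridged to a Microsoft root by a cross-certificate, so a
# driver signed under the commercial CA still terminates at a root the
# kernel trusts. Subject names below are illustrative.

MICROSOFT_ROOTS = {"Microsoft Code Verification Root"}

# ordinary issuer links: child subject -> issuer subject
ISSUED_BY = {
    "Contoso Drivers Ltd": "Commercial CA Intermediate",
    "Commercial CA Intermediate": "Commercial CA Root",
}

# cross-certificates re-parent a commercial root under a Microsoft root
CROSS_CERTS = {"Commercial CA Root": "Microsoft Code Verification Root"}

def chains_to_microsoft(subject: str) -> bool:
    seen = set()
    while subject not in seen:
        if subject in MICROSOFT_ROOTS:
            return True
        seen.add(subject)
        # follow the ordinary issuer link first, then any cross-cert bridge
        subject = ISSUED_BY.get(subject) or CROSS_CERTS.get(subject) or subject
    return False

print(chains_to_microsoft("Contoso Drivers Ltd"))  # True: via the cross-cert
```

The design consequence is the one the section describes: Microsoft never signs the driver, yet every such chain still ends at a root Microsoft controls -- which is also why revoking the cross-certificates, years later, could shut the whole path down.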

The Vista-era KMCS driver-load decision. Code Integrity intercepts the PE map, extracts the Authenticode signature, walks the cross-cert chain to a Microsoft-trusted root, and either allows the section creation or refuses the load.

Documented escape hatches

KMCS shipped with three documented bypasses for developers and special cases, all enumerated on the KMCS policy page [7]:

  • bcdedit /set TESTSIGNING ON enables test-signing mode. The kernel will load drivers signed with self-issued test certificates. The cost is a desktop watermark.
  • The F8 advanced-boot option Disable Driver Signature Enforcement turns off KMCS for one boot.
  • The legacy nointegritychecks BCD flag disables enforcement entirely, but is rejected on systems where Secure Boot is on.

Each of these was a development-workflow concession. Each of them, given admin privileges and a willingness to reboot, also serves as a kernel-driver loading path for an attacker who has already escalated. The policy holds against unprivileged adversaries; against an attacker who already runs as administrator, it was, by 2010, visibly defending a different boundary than the one most people assumed. Microsoft has been formally clear about this since at least 2016: the administrator-to-kernel transition is not a security boundary in the MSRC servicing-criteria sense. Elastic Security Labs writes the position out explicitly in their analysis of vulnerable-driver mitigations [12]. The historical irony is that Vista x64 KMCS was widely read at the time as a defence against admin-level adversaries; it was actually a defence against unprivileged or pre-admin ones.

PatchGuard: the parallel runtime defence

KMCS was a load-time check. The runtime parallel arrived in 2005 with Kernel Patch Protection, informally PatchGuard or KPP, which the Wikipedia entry on Kernel Patch Protection describes as a feature of 64-bit Windows that prevents patching of critical kernel structures [13]. KPP polls a set of integrity-critical kernel objects -- the System Service Descriptor Table, IDT, GDT, certain function prologues -- and triggers a bug check if it detects tampering. It is the watchdog against runtime modification of the kernel by code that has already loaded; KMCS gates what loads in the first place.
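KPP's discipline -- snapshot integrity-critical structures, re-check on a schedule, bug-check on mismatch -- can be sketched in a few lines. The structure names below are the real ones the section lists; everything else (byte contents, the check trigger) is an illustrative stand-in, not PatchGuard's obfuscated implementation:

```python
import hashlib

# A toy version of KPP: hash integrity-critical structures at boot,
# re-check later, and raise the equivalent of a 0x109 bug check
# (CRITICAL_STRUCTURE_CORRUPTION) on any mismatch.

kernel = {
    "SSDT": b"\x01\x02\x03",   # system service descriptor table entries
    "IDT":  b"\x04\x05\x06",   # interrupt descriptor table
    "GDT":  b"\x07\x08\x09",   # global descriptor table
}
baseline = {name: hashlib.sha256(blob).digest() for name, blob in kernel.items()}

def kpp_check() -> None:
    for name, blob in kernel.items():
        if hashlib.sha256(blob).digest() != baseline[name]:
            raise SystemError(f"CRITICAL_STRUCTURE_CORRUPTION ({name})")

kpp_check()                        # clean pass
kernel["SSDT"] = b"\x99\x02\x03"   # a rootkit hooks a service entry
try:
    kpp_check()
except SystemError as e:
    print(e)                       # the bug check, in miniature
```

The division of labour with KMCS falls out of the sketch: kpp_check polls what is already resident, while KMCS decides what becomes resident at all.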

What this fixed: the unsigned-driver-loading path closed on 64-bit Windows in production mode. Kernel rootkits of the early 2000s -- FU, Mailbot, Rustock, and their contemporaries, widely documented in the security-research literature of the era -- could no longer ship as bare .sys files an admin script dropped on disk. The structural class of "unsigned kernel rootkit" effectively died on x64.

But the day Vista x64 shipped, two new attack surfaces opened up. The first one Stuxnet found four years later. The second one nobody had a name for yet.

4. Stuxnet, BYOVD, and the Two Things Vista Did Not Fix

On 17 June 2010, researchers at the Belarusian security firm VirusBlokAda identified Stuxnet, a worm targeting supervisory control and data acquisition systems in industrial-control environments, on machines in Iran [14]. Two of its drivers carried perfectly valid Authenticode signatures.

The signatures were genuine. The certificates were not. Stuxnet had been signed with private keys stolen from two semiconductor vendors whose code-signing certificates chained to legitimate cross-certified roots. KMCS verified the chain, found it good, and let the drivers load. The malware-analysis literature is consistent on the pattern; specific cert-holder attributions are reproduced in many places, but the primary records this article cites are the Wikipedia Stuxnet article and the general framing in the Wikipedia code-signing article [14] [8]. The reactive answer was certificate revocation, but revocation propagates through Windows on a schedule, not instantly, and the cached chains on millions of machines remained valid for days.

That was the first failure mode KMCS could not block by design. The signature primitive answers was this signed by a key that chains to a trusted root? It cannot answer was the key still in the publisher's control when it signed this?

The Capcom.sys reframe

The second failure mode arrived publicly in 2016. A Capcom driver shipped via a Street Fighter V update exposed an IOCTL, numbered 0xAA013044, that took a user-supplied function pointer and executed it in kernel mode -- with Supervisor Mode Execution Prevention (SMEP) disabled while it did so. The driver was signed and chained correctly. Satoshi Tanda's standalone proof of concept at tandasat/ExploitCapcom remains the canonical reference, including the SHA-1 of the binary (c1d5cf8c43e7679b782630e93f5e6420ca1749a7) [1].
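The IOCTL number itself is worth unpacking. Windows control codes encode four fields -- device type, required access, function number, and buffering method -- in the documented CTL_CODE layout, so 0xAA013044 can be decoded mechanically. A small sketch; the field layout and symbolic names follow the standard DDK definitions:

```python
# Unpack a Windows IOCTL using the CTL_CODE field layout:
#   bits 31..16  device type      bits 15..14  required access
#   bits 13..2   function number  bits  1..0   buffering method

METHODS = {0: "METHOD_BUFFERED", 1: "METHOD_IN_DIRECT",
           2: "METHOD_OUT_DIRECT", 3: "METHOD_NEITHER"}
ACCESS = {0: "FILE_ANY_ACCESS", 1: "FILE_READ_ACCESS",
          2: "FILE_WRITE_ACCESS", 3: "FILE_READ_ACCESS | FILE_WRITE_ACCESS"}

def decode_ioctl(code: int) -> dict:
    return {
        "device_type": code >> 16,
        "access": ACCESS[(code >> 14) & 0x3],
        "function": (code >> 2) & 0xFFF,
        "method": METHODS[code & 0x3],
    }

# Vendor-defined device type 0xAA01, function 0xC11, METHOD_BUFFERED --
# nothing in the encoding marks the request as "executes a function pointer".
print(decode_ioctl(0xAA013044))
```

Which is the point: the encoding carries routing metadata, not semantics. The danger lives entirely in the handler on the other side of the dispatch.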

There was nothing for KMCS to catch. The driver did exactly what the signature said it did: ship bytes from a publisher Microsoft could identify. The signature has no opinion about the IOCTL surface.

A signed driver means only that someone Microsoft can identify shipped this binary. It does not mean the driver lacks a function-pointer IOCTL.

That observation is the first of three reframes in this article and the easiest to underestimate. Up to 2010 the conventional security reading of a Microsoft-rooted Authenticode signature was that the driver had passed a review. After Stuxnet, the reading narrowed to the publisher is identifiable. After Capcom.sys, it narrowed again to the binary's identity is verifiable. None of these readings includes the binary does not have a kernel-write primitive in its IOCTL handler.

BYOVD (Bring Your Own Vulnerable Driver)

An attack pattern in which an adversary, having obtained or already holding administrator privileges, installs a signed but design-vulnerable third-party kernel driver and uses its exposed primitives -- arbitrary memory read/write, port I/O, MSR access, or function-pointer dispatch -- to gain ring-zero capability. The signature primitive does not refuse the load because the driver is, on signature alone, legitimate.

The catalogue grows

The BYOVD catalogue accumulated through the 2010s.

RTCore64.sys, the kernel component of MSI's Afterburner overclocking utility, exposed read/write access to arbitrary kernel memory, I/O ports, and Model-Specific Registers from user mode. The NVD entry for CVE-2019-16098 is unusually direct: "These signed drivers can also be used to bypass the Microsoft driver-signing policy to deploy malicious code." [15] The driver became a workhorse for ransomware crews. Sophos's October 2022 incident analysis of BlackByte's new variant documents the abuse: BlackByte "abus[ed] a known vulnerability in the legitimate vulnerable driver RTCore64.sys" to disable "a whopping list of over 1,000 drivers on which security products rely to provide protection" [16].

gdrv.sys, the Gigabyte APP Center driver, exposed a ring-zero memcpy-equivalent that a local attacker could use to overwrite arbitrary kernel addresses. CVE-2018-19320 is on CISA's Known Exploited Vulnerabilities catalogue [17]. The RobinHood ransomware abused it during the 2019 Baltimore municipal-government attack -- a connection widely documented by Sophos and CrowdStrike incident-response teams, though absent from the bare NVD record.

KProcessHacker, the kernel companion to the Process Hacker administration tool, exposed a process-termination primitive that bypassed even the Protected Process Light (PPL) shielding around antivirus and EDR processes. CrowdStrike's DoppelPaymer write-up documents the abuse explicitly: "the hijacking technique ... leverages ProcessHacker's kernel driver, KProcessHacker, that has been registered under the service name KProcessHacker3 ... terminate processes, including those protected by Protected Process Light (PPL)." [18]

The structural BYOVD attack flow. The signed driver passes KMCS unchanged; the adversary's user-mode code, running as administrator, then issues IOCTLs to elicit a kernel-write primitive that disables EDR notify routines or escalates a token.

The third bypass: patching the policy from kernel mode

There is a third failure mode that closes the loop. Once an attacker has a signed driver with an arbitrary kernel-write primitive, they can write directly into the in-kernel Code Integrity state. The variable of interest is g_CiOptions, an integer inside ci.dll whose bits gate Driver Signature Enforcement. TrustedSec describes the technique cleanly: "this configuration variable has a number of flags that can be set, but typically for bypassing DSE this value is set to 0, completely disabled DSE and allows the attacker to load unsigned drivers just fine." [19] Set g_CiOptions to zero and the subsequent driver loads do not need signatures at all. The signed driver, in effect, is a one-shot key that opens the gate for any unsigned driver behind it. The pattern recurs through the early 2020s; specific malware-family attributions remain research-folklore, but the technique class is well attested in TrustedSec's account.
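The loop TrustedSec describes can be sketched end to end: a signed driver's arbitrary-write IOCTL becomes the pen that zeroes the policy variable, after which unsigned loads sail through. The address and the dictionary standing in for kernel memory are invented for illustration; g_CiOptions is the real variable's name:

```python
# The DSE-disable loop in miniature. A dict stands in for kernel memory;
# the address is made up. The signed-but-unsafe driver performs the write
# in ring zero on behalf of any administrator who asks.

G_CI_OPTIONS = 0xFFFF8000_00C10F00            # pretend g_CiOptions lives here
kernel_memory = {G_CI_OPTIONS: 0x6}           # enforcement bits set

def dse_allows(signed: bool) -> bool:
    return signed or kernel_memory[G_CI_OPTIONS] == 0

def vulnerable_driver_write_ioctl(address: int, value: int) -> None:
    kernel_memory[address] = value            # arbitrary kernel write primitive

assert not dse_allows(signed=False)           # unsigned driver refused
vulnerable_driver_write_ioctl(G_CI_OPTIONS, 0)
assert dse_allows(signed=False)               # gate open for everything behind
```

One signed driver in, every unsigned driver after it -- which is exactly why the policy state, not just the policy code, had to move out of the attacker's address space.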

The structural takeaway: KMCS verifies who signed, never what was signed. Once an attacker has a signed driver with a write primitive, they have ring zero. Stricter signing closes the front door for new malicious drivers. Every commercial-CA cert that was ever issued is still loadable. The policy decision has to move out of the attacker's reach. And the kernel itself has to stop being the thing that decides.

5. Microsoft as the Only Signer (2016-2024)

In August 2016, Microsoft did something the WHQL programme had refused to do for twenty years: it became the only entity that could counter-sign a new Windows kernel driver.

The transition shipped with Windows 10 version 1607. The KMCS policy page records the cut precisely: for end-entity certificates issued after 29 July 2015, the chain had to terminate at one of three Microsoft-owned roots -- Microsoft Root Authority 2010, Microsoft Root Certificate Authority, or Microsoft Root Authority -- and the binary had to be counter-signed via the Windows Hardware Dev Center submission portal [7]. The commercial CAs were out. Microsoft was in, as the single point through which any new third-party kernel driver had to pass.

Two pipelines

Behind the portal sat two submission paths. The HLK/WHQL path required a full Hardware Lab Kit compatibility test pass on the publisher's hardware -- the lab kit is the modern incarnation of the WHQL programme, documented on Microsoft Learn [6]. A passing run produced a "Certified for Windows" mark and made the driver eligible for OEM badging and Windows Update distribution. The lighter-friction path, called attestation signing, did not require an HLK run [20]. The publisher submitted a CAB containing the driver and supporting metadata. Microsoft's backend ran a malware scan and an automated policy check; if both passed, Microsoft applied a counter-signature. Attestation-signed drivers, the page notes, ship only to client SKUs.

Attestation signing

The lower-friction post-2016 Microsoft signing path for Windows kernel drivers. The publisher uploads a CAB to the Hardware Dev Center; Microsoft runs malware scanning and an automated policy check; on pass, Microsoft applies its counter-signature. The path replaces full HLK testing for client-only drivers.

EV certificates as the account-binding primitive

Both paths required the publisher to hold an Extended Validation code-signing certificate. The EV cert does not sign the driver image itself; it signs and binds the Hardware Dev Center submission. That gives Microsoft a real-name handle on every kernel-driver publisher. EV certificates ride a strong identity check, cost meaningfully more than commercial OV certs, and live on a hardware token in the publisher's possession. The 2021 Microsoft Security blog announcing the Vulnerable & Malicious Driver Reporting Center spells the requirement out: "Kernel-mode driver publishers must pass the Hardware Lab Kit (HLK) compatibility tests, malware scanning, and prove their identity through extended validation (EV) certificates." [21]

The post-1607 Hardware Dev Center submission pipeline for a new kernel driver. The publisher's EV-signed CAB enters Microsoft's pipeline; on pass, the driver receives the Microsoft counter-signature and is eligible for Windows Update distribution.

The legacy long tail

The pivot to Microsoft-only signing closed the door for new drivers. It did not close the door for old ones.

What attestation signing catches and what it does not

The malware scan inside attestation signing looks for known dangerous behaviour. The Microsoft Security blog post on the Vulnerable & Malicious Driver Reporting Center enumerates the categories the backend flags: "Drivers with the ability to read or write arbitrary kernel, physical, or device memory, including Port I/O and central processing unit (CPU) registers from user mode." [21] In other words, the scanner already understands the BYOVD pattern.

What it does not catch are novel design flaws. A driver whose IOCTL surface is structurally unsafe in a way the scanner does not have a signature for passes the scan and ships with a Microsoft counter-signature. The Capcom.sys pattern is in the scanner's repertoire today; the pattern in the next driver to ship is, by definition, not.

A second weakness sits on the publisher side. EV-key compromise -- whether through the LAPSUS$ supply-chain leaks of 2022 or other vendor incidents -- gives the attacker the Microsoft-only-signing flavour of the Stuxnet problem. The signed-by-Microsoft chain is exactly as strong as the EV key's safekeeping at the publisher.

A single signing bottleneck is an improvement. But the bottleneck still trusts the kernel that asks the question. As long as the policy engine runs in the same memory the attacker can write, the policy engine loses.

6. HVCI: Moving the Policy Out of Reach (2015-present)

In July 2015, Microsoft shipped a feature so structurally important that it took six years to become a consumer default, and so misunderstood that it still travels under three different names.

The names are the easiest place to start. Virtualization-Based Security (VBS) is the platform: a Hyper-V-rooted virtualisation layer that exists on every modern Windows installation that meets the hardware requirements. Hypervisor-protected Code Integrity (HVCI) is the kernel-code-integrity consumer of VBS. Memory Integrity is the label the Windows Security UI uses today. The Microsoft Learn page on Memory Integrity is the canonical primary source [3]. TrustedSec called out the conflation explicitly in their g_CiOptions in a virtualized world post [19].

A security check that shares a trust domain with what it is checking has, by definition, already lost. HVCI moves the check out of the attacker's trust domain. It is the answer to who decides. It is not the answer to what gets decided.

That sentence is the second of this article's three reframes, and the one that makes everything that follows make sense.

VBS and the Virtual Trust Levels

On a VBS-on Windows machine, Hyper-V is the Type-1 hypervisor. The bootloader brings the hypervisor up first, the hypervisor brings up two execution environments side by side, and the normal Windows kernel runs in one of them while a much smaller Secure Kernel runs in the other.

VTL (Virtual Trust Level)

The VBS abstraction that partitions a Windows installation into two execution environments. VTL0 is the normal Windows kernel and its drivers. VTL1 is a much smaller Secure Kernel and a curated set of "trustlets" -- isolated user-mode processes that hold the most sensitive secrets. VTL1 can read and write VTL0 memory; VTL0 cannot read or write VTL1 memory. Code-integrity policy lives in VTL1.

The Code Integrity engine on an HVCI-on machine -- signature verification and policy-file consultation -- runs inside VTL1's Secure Kernel as the Secure Kernel Code Integrity component, SKCI. The VTL0 kernel cannot read or write VTL1 memory by hardware construction: the hypervisor's second-level address translation tables, programmed before VTL0 ever runs, mark VTL1 pages as unreachable from VTL0. The in-memory g_CiOptions state continues to reside in ci.dll's VTL0 data section -- it does not relocate into VTL1 -- but on an HVCI-on machine Kernel Data Protection (KDP), exposed to VTL0 drivers as MmProtectDriverSection, asks the Secure Kernel to mark the containing page read-only at the SLAT level. A fully compromised VTL0 kernel -- with kernel debugging attached, with all of ring zero's privileges -- cannot rewrite g_CiOptions to zero, because the SLAT mapping refuses the write.
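The structural move -- page permissions owned by VTL1 and consulted on every VTL0 write -- can be modelled in a few lines. This is a toy, not the hypervisor: the page numbers are invented, a set stands in for the SLAT tables, and the protect method stands in for what MmProtectDriverSection asks the Secure Kernel to do:

```python
# Toy model of KDP/SLAT protection: VTL1 programs page permissions that
# VTL0 cannot change, so even a ring-zero write to a protected page faults.

class Slat:
    """Second-level address translation state, owned by VTL1."""
    def __init__(self) -> None:
        self.read_only: set[int] = set()   # unreachable from VTL0 by construction

    def protect(self, page: int) -> None:  # stand-in for MmProtectDriverSection
        self.read_only.add(page)

slat = Slat()
memory = {0x1000: 0x6}                     # pretend page 0x1000 holds g_CiOptions

def vtl0_write(page: int, value: int) -> None:
    if page in slat.read_only:
        raise PermissionError("SLAT violation: page is read-only to VTL0")
    memory[page] = value

vtl0_write(0x1000, 0)                      # pre-HVCI world: the write lands
memory[0x1000] = 0x6                       # reset
slat.protect(0x1000)                       # HVCI on: VTL1 locks the page
try:
    vtl0_write(0x1000, 0)
except PermissionError as e:
    print(e)                               # the write primitive dies here
```

The attacker's primitive is unchanged; what changed is that the page it needs to hit is no longer writable from the trust domain the attacker occupies.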

The VBS/VTL split. VTL0 is the conventional Windows kernel and its drivers; VTL1 holds the Secure Kernel and the Code Integrity engine (SKCI). When VTL0 tries to map an executable kernel section, the hypervisor mediates the request to VTL1's SKCI, which performs signature verification and policy consultation before the section is created.

W^X on kernel memory

There is a second, equally structural property HVCI enforces. When the VTL0 kernel tries to map an executable section -- to create a kernel-executable page from a PE image -- the hypervisor forces the request through SKCI. SKCI verifies the Authenticode signature at section creation time, not only at the IoLoadDriver entry point a load goes through later [3]. And SKCI refuses any page that is simultaneously writable and executable. The classic exploitation technique of allocating a writable kernel buffer, writing shellcode into it, and then jumping to it stops working: the page either is writable, in which case it is not executable, or is executable, in which case it is not writable.
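SKCI's section-creation rule compresses to a single predicate: a kernel page may be writable or executable, never both, and executable pages must carry a verified signature. A sketch of the policy's shape (not Microsoft's implementation):

```python
# The W^X rule at section-creation time, as one predicate.

def skci_map_section(writable: bool, executable: bool,
                     signature_valid: bool) -> bool:
    if writable and executable:
        return False                 # W^X: refused unconditionally
    if executable and not signature_valid:
        return False                 # executable kernel code must be signed
    return True                      # data pages, or verified code pages

assert skci_map_section(writable=True, executable=False, signature_valid=False)
assert not skci_map_section(writable=True, executable=True, signature_valid=True)
assert not skci_map_section(writable=False, executable=True, signature_valid=False)
```

The classic "write shellcode into a kernel buffer, then jump to it" needs a page that is first writable and later executable, and the transition to executable is exactly where the predicate re-runs -- with a signature check the shellcode cannot pass.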

The hardware acceleration matters. The Memory Integrity page is unusually direct about the requirement: "Memory integrity works better with Intel Kabylake and higher processors with Mode-Based Execution Control, and AMD Zen 2 and higher processors with Guest Mode Execute Trap capabilities. Older processors rely on an emulation of these features, called Restricted User Mode, and will have a bigger impact on performance." [3] Mode-Based Execute Control (MBEC) is the Intel feature that lets the hypervisor distinguish "executable in supervisor mode" from "executable in user mode" at the page-table-entry level. AMD's Guest Mode Execute Trap (GMET) is the structurally equivalent feature. Older silicon falls back to Restricted User Mode emulation, which works correctly but pays a meaningfully larger performance tax. The hardware cutoff is a major reason HVCI defaulted off on pre-2017 OEM hardware for years.

What HVCI fixed

The g_CiOptions patching family, the third bypass we met in section 4, closes on HVCI-on systems. TrustedSec's post gives a clean account: with KDP holding the containing page read-only at the SLAT level, a VTL0 ring-zero write to g_CiOptions faults; the VTL0 kernel cannot rewrite the variable, and live-kernel debuggers attached to VTL0 cannot rewrite it either [19]. The arbitrary-write-to-disable-DSE pattern that worked from Windows 7 through pre-HVCI Windows 10 is, on an HVCI-on Windows 11, no longer a primitive that exists in the attacker's threat model. The trust domain that decides the policy is not a trust domain the attacker can reach.

What HVCI did not fix

It is essential to be clear about what HVCI does not catch, because misreading this is how the BYOVD class survives.

HVCI verifies the signature and enforces W^X. It does not analyse the driver's behaviour. The 2019 RTCore64.sys driver passes SKCI section-mapping unchanged: it is signed by MSI through a Microsoft-recognised chain, it has no writable-and-executable pages, and the Authenticode hash on disk matches the binary in memory. After it loads, an attacker in user mode sends an IOCTL; the driver, executing legitimately in ring zero, writes attacker-controlled bytes to an attacker-chosen kernel address; the EDR notify routine table is patched; the BYOVD attack proceeds. Everything that happens inside the IOCTL handler happens with kernel privilege, on properly-signed code paths, inside HVCI's W^X policy. The structural BYOVD class is unaffected.

That is the gap the next two sections close.

HVCI is the answer to who decides. It is not the answer to what gets decided. We still need a way to say: this specific signed binary is one we do not trust.

7. The Block List: Naming the Weakness (2020-present)

In October 2020, Microsoft started shipping something it had spent twenty-five years avoiding: a list of specific drivers it would refuse to load by name.

The artefact lives at %windir%\system32\CodeIntegrity\DriverSiPolicy.p7b. The file is a PKCS#7-signed App Control for Business policy -- "WDAC" by its former name -- whose body consists of deny rules expressed at the granularity of file hash, file name, or publisher. The canonical Microsoft-recommended driver block rules page is the primary source, and is unusually rich for a Microsoft Learn page [4].

App Control for Business (WDAC)

Microsoft's policy-driven application-control engine. An App Control policy is a signed XML or binary file that lists allow rules, deny rules, and signer-level rules; at load time, the policy engine consults the rules and either allows or refuses the image. DriverSiPolicy.p7b is itself an App Control policy whose body is all deny rules.
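The deny-rule logic itself is simple enough to sketch: rules at three granularities (file hash, file name, publisher), consulted before the signature result is allowed to matter. The rule contents below are illustrative stand-ins, not entries from the real DriverSiPolicy.p7b, and the real policy format is signed XML/binary rather than Python sets:

```python
import hashlib

# DriverSiPolicy.p7b reduced to its logic: all-deny rules, checked first.

DENY_HASHES = {hashlib.sha256(b"stand-in for a blocked driver's bytes").hexdigest()}
DENY_NAMES = {"capcom.sys", "gdrv.sys"}      # filename-granularity rules
DENY_PUBLISHERS: set[str] = set()            # publisher-level rules also exist

def blocklist_allows(image: bytes, name: str, publisher: str) -> bool:
    if hashlib.sha256(image).hexdigest() in DENY_HASHES:
        return False
    if name.lower() in DENY_NAMES:
        return False
    if publisher in DENY_PUBLISHERS:
        return False
    return True                              # only now does signing matter

# A perfectly signed driver, refused by name before any chain is walked:
print(blocklist_allows(b"signed bytes", "Capcom.sys", "Capcom Co., Ltd."))  # False
```

The inversion is the whole story of this section: after two decades of asking "is the signature good?", the first question the loader asks is "is this specific binary one we refuse regardless?"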

Cadence and the published-vs-shipped gap

The block list is refreshed on two cadences. Microsoft publishes the source XML on the block-rules page on a quarterly schedule and pushes the binary DriverSiPolicy.p7b to client devices through monthly Windows servicing [4]. Microsoft's Security Baselines team also publishes a running update post cataloguing the changes [22].

The candid admission on the block-rules page is the part of the story that is most worth understanding.

"The blocklist included in this article and in the associated downloadable files usually contains a more complete set of known vulnerable drivers than the version in the OS and delivered by Windows Update. It's often necessary for us to hold back some blocks to avoid breaking existing functionality." -- Microsoft Learn, Microsoft-recommended driver block rules [4]

The published list is, on purpose, more inclusive than the shipped list. The reason is operational: every entry in the shipped list is a driver that would refuse to load on millions of devices, some of which have legitimate dependencies. Microsoft holds entries back when the compatibility cost is too high, even when the security signal is strong. We will come back to whether that gap is closeable in section 9.

The 22H2 cut and the Server 2016 carve-out

Two dates anchor the deployment story.

The block list was an optional feature in Windows 10 1809, enabled by default only on systems that ran Hypervisor-protected Code Integrity, Smart App Control, or Windows in S-mode [23]. With the Windows 11 2022 Update, also known as 22H2, released on 20 September 2022, default-on coverage extended to every client device, not just the HVCI-on subset [24]. The 22H2 release is the moment the block list became universal Windows client behaviour, six years after the first BYOVD primitive that motivated it.

The block-rules page notes a single explicit carve-out worth flagging. "Except on Windows Server 2016, the vulnerable driver blocklist is also enforced when either memory integrity (also known as hypervisor-protected code integrity or HVCI), Smart App Control, or S mode is active." [4] Windows Server 2016 does not get the default-on block list even when HVCI is on. An enterprise admin managing Server 2016 has to deploy an explicit App Control policy to get the same coverage. The October 2022 preview cycle saw a documented quirk -- KB5020779 explains that a preview release shipped without an actual blocklist refresh, addressed by a subsequent servicing update [23]. The KB5020779 episode is a useful reminder that the in-box block list ships through the same Windows Update cycle as everything else. Preview releases do not always carry a fresh policy, and the cadence on the block-rules page describes the intended steady state rather than every individual update [4].

Naming the weakness, not the publisher

For the first time in the story, the question Windows asks at load time is not only who signed this binary? but also is this specific signed binary one we have learned is unsafe? The block list is a step the previous generations could not have taken with the primitives they had: it requires a deny list that can be authored after the fact, distributed quickly, and enforced inside a trust domain the attacker cannot reach. KMCS supplied the load-time enforcement primitive; HVCI supplied the immune-from-VTL0 enforcement context; only with both in place could DriverSiPolicy.p7b actually do its job.

Block-list enforcement on a 22H2-class machine. When the VTL0 kernel requests a kernel section, SKCI in VTL1 evaluates the Authenticode chain and then consults DriverSiPolicy.p7b for hash, name, and signer deny rules; only when no deny rule matches does the section creation succeed.

The Vulnerable & Malicious Driver Reporting Center

The block list grew faster after Microsoft built a structured channel to feed it. The December 2021 Microsoft Security blog post announced the Vulnerable & Malicious Driver Reporting Center: a portal where researchers and vendors can submit kernel drivers for evaluation, backed by an automated analysis pipeline that looks for the BYOVD primitives -- "the ability to read or write arbitrary kernel, physical, or device memory, including Port I/O and central processing unit (CPU) registers from user mode." [21] The post explicitly lists the historical CVE backdrop that motivated the centre, naming RobinHood, Uroburos, Derusbi, GrayFish, and Sauron as families that leveraged driver vulnerabilities such as CVE-2008-3431, CVE-2013-3956, CVE-2009-0824, and CVE-2010-1592 [21].

The same post anchors the EV-certificate publisher requirement and the HLK or attestation gating that produces the block list's inputs in the first place. The reporting centre is the path by which a flagged driver moves from "spotted in research" to "deny rule in the next quarterly XML push".

Defender ASR as the HVCI-off coverage path

There is a third surface worth knowing about. Microsoft's Attack Surface Reduction rules include "Block abuse of exploited vulnerable signed drivers" (56a863a9-875e-4185-98a7-b882c64b5ce5) as part of the standard ASR protection set [25]. For Microsoft Defender for Endpoint customers on Windows 10 E3 or E5, the rule covers machines where HVCI is not on. Microsoft notes that "the same blocklist is also used by Microsoft Defender Antivirus customers" via the ASR rule [21]. The path is narrower than HVCI-rooted enforcement -- Defender has to be running, the rule has to be enabled -- but it extends the block list to enterprise environments that have not yet flipped HVCI on.

LOLDrivers and the dual-use externality

The block list is not the only catalogue of vulnerable Windows drivers. The community-maintained LOLDrivers project -- "Living Off The Land Drivers" -- collects vulnerable, malicious, and known-malicious Windows drivers in one place. Every entry carries YAML metadata and, where possible, YARA, Sigma, ClamAV, and Sysmon rules, plus a pre-compiled App Control deny policy that can be deployed standalone [26] [27]. As of the source verification for this article, LOLDrivers carried 2,132 driver entries -- considerably more than the Microsoft-shipped list.

Check Point Research called out the dual-use problem in their 2024 piece: a public catalogue of vulnerable drivers is also a reading list for attackers. The same researchers ran the methodology in reverse: "we conducted a mass hunt for new drivers that may be vulnerable, uncovering thousands of potentially at-risk drivers." [28] Defenders use the list for hardening; attackers use it for shopping. Both effects are real.

A WDAC rule evaluator, in miniature

The semantics of an App Control policy are simple enough to model in a few lines. Deny rules win; allow rules are consulted next; the default action handles whatever is left.

JavaScript Toy App Control evaluator: deny-first, then allow, then default
// Simplified model of the App Control / WDAC rule-evaluation engine.
// Deny rules win, allow rules permit the remainder, and an explicit
// default action handles images neither denied nor allowed.

const policy = {
  denyByHash:    new Set(["c1d5cf8c43e7679b782630e93f5e6420ca1749a7"]), // Capcom.sys
  denyByName:    new Set(["RTCore64.sys"]),
  denyBySigner:  new Set(["CN=Some Compromised Publisher, O=Example"]),
  allowBySigner: new Set(["CN=Microsoft Windows, O=Microsoft Corporation"]),
  defaultAction: "BLOCK",
};

function evaluate(image, policy) {
  if (policy.denyByHash.has(image.sha1)) return "BLOCK (hash on deny list)";
  if (policy.denyByName.has(image.fileName)) return "BLOCK (name on deny list)";
  if (policy.denyBySigner.has(image.signer)) return "BLOCK (signer on deny list)";
  if (policy.allowBySigner.has(image.signer)) return "ALLOW (signer on allow list)";
  return policy.defaultAction === "ALLOW"
    ? "ALLOW (default)"
    : "BLOCK (default)";
}

const cases = [
  { sha1: "c1d5cf8c43e7679b782630e93f5e6420ca1749a7", fileName: "Capcom.sys",
    signer: "CN=CAPCOM Co., Ltd." },
  { sha1: "0000000000000000000000000000000000000000", fileName: "RTCore64.sys",
    signer: "CN=Micro-Star International Co., Ltd." },
  { sha1: "1111111111111111111111111111111111111111", fileName: "ntfs.sys",
    signer: "CN=Microsoft Windows, O=Microsoft Corporation" },
];
for (const c of cases) console.log(c.fileName, "->", evaluate(c, policy));


Naming the weakness is genuinely new. But the list only ever lists what someone has already found. The window between disclosure and enforcement is months, and Microsoft documents that the shipped list is by design weaker than the published one. What gets the rest of the way?

8. The 2026 Stack: Defence in Depth Made Concrete

On a default-configured Windows 11 22H2 machine in 2026, a kernel driver that tries to load passes through five distinct gates. Each one closes a blind spot the previous one cannot reach.

The order matters, and so do the dependencies. The gates are:

  1. Kernel-Mode Code Signing. The Authenticode chain must terminate at a Microsoft-owned root. The chain check rejects unsigned drivers and drivers chained to non-Microsoft roots, except under the documented grandfathering carve-outs [7].
  2. The Vulnerable Driver Block List. SKCI consults DriverSiPolicy.p7b for hash, file-name, and signer-level deny rules. The list is default-on for every client device since Windows 11 22H2, and is updated quarterly through Microsoft Learn's published source XML and monthly through Windows servicing [4] [24].
  3. HVCI / SKCI. The Code Integrity engine runs in VTL1, verifies signatures at section-mapping time rather than only at IoLoadDriver, and enforces W^X on kernel memory. The policy engine is structurally out of reach of a fully compromised VTL0 kernel [3].
  4. App Control / Smart App Control. Enterprise admins author explicit App Control allowlists; consumer devices on clean Windows 11 installs run Smart App Control, a Microsoft-authored allowlist policy backed by cloud reputation [29] [30].
  5. Defender ASR. On Microsoft Defender for Endpoint deployments, the "Block abuse of exploited vulnerable signed drivers" ASR rule extends block-list coverage to HVCI-off environments [25].
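The ordering above can be modelled as a toy pipeline: an image must clear every gate in turn, and the first failing gate names the block. The field names and gate predicates here are illustrative simplifications, not the real Code Integrity data structures.

```javascript
// Toy model of the 2026 five-gate kernel-driver load path.
// Field and gate names are illustrative, not the real CI structures.

const gates = [
  { name: "KMCS",
    check: (img, env) => img.chainsToMicrosoftRoot },
  { name: "Block list",
    check: (img, env) => !env.denyHashes.has(img.hash) },
  { name: "HVCI/SKCI",
    check: (img, env) => !env.hvciOn || img.sectionsAreWxorX },
  { name: "WDAC/SAC",
    check: (img, env) => !env.allowlistActive || env.allowedSigners.has(img.signer) },
  { name: "Defender ASR",
    check: (img, env) => env.hvciOn || !env.asrEnabled || !env.denyHashes.has(img.hash) },
];

function tryLoad(img, env) {
  for (const gate of gates) {
    if (!gate.check(img, env)) return "BLOCK at " + gate.name;
  }
  return "LOAD";
}

const env = {
  hvciOn: true,
  asrEnabled: true,
  allowlistActive: false,
  denyHashes: new Set(["capcom-hash"]),
  allowedSigners: new Set(),
};

console.log(tryLoad({ chainsToMicrosoftRoot: false }, env));
console.log(tryLoad({ chainsToMicrosoftRoot: true, hash: "capcom-hash" }, env));
console.log(tryLoad({ chainsToMicrosoftRoot: true, hash: "other",
                      sectionsAreWxorX: true }, env));
```

Note how the last gate is written: when HVCI is on, the ASR predicate passes trivially, because block-list enforcement has already happened inside VTL1. That mirrors the coverage-path role the text assigns to Defender ASR.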
Smart App Control (SAC)

The Windows 11 22H2+ consumer-facing front end for App Control for Business. SAC enforces a Microsoft-authored policy and supplements it with cloud reputation lookups from the Intelligent Security Graph. SAC is only available on clean installs and is shipped in evaluation mode by default; once turned on, it also unconditionally enforces the vulnerable driver block list [29].

ISG (Intelligent Security Graph)

The cloud-backed reputation service that Smart App Control consults to predict whether a given binary is safe. When confident, ISG approves the binary; when unconfident, SAC falls back to signature checks; absent both, the binary is blocked [29].
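The decision order in that glossary entry -- reputation first, signature as fallback, block otherwise -- fits in a few lines. The inputs and verdict strings are illustrative, not SAC's real interface.

```javascript
// Sketch of the SAC decision order described above: cloud reputation
// first, the signature check as fallback, block when neither clears.
// Inputs and verdict strings are illustrative, not SAC's real API.

function sacDecide({ isgConfidentSafe, signatureClearsThreshold }) {
  if (isgConfidentSafe) return "ALLOW (ISG reputation)";
  if (signatureClearsThreshold) return "ALLOW (signature fallback)";
  return "BLOCK (no reputation, signature did not clear)";
}

// A widely-seen binary, a reputable-but-unseen one, and the cold-start
// case: a signed driver from a small IHV with no cloud reputation.
console.log(sacDecide({ isgConfidentSafe: true }));
console.log(sacDecide({ isgConfidentSafe: false, signatureClearsThreshold: true }));
console.log(sacDecide({ isgConfidentSafe: false, signatureClearsThreshold: false }));
```

The third case is the cold-start failure mode discussed later in this section: no reputation and an unconvincing signature lands on the block branch.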

Orthogonality, not redundancy

The five gates look redundant from a distance. They are not. Each closes a class of failure the others cannot reach. The orthogonality is the reason for the stack.

| Gate | Catches | Misses |
| --- | --- | --- |
| KMCS | Unsigned and cross-cert-only-signed drivers | Signed-but-vulnerable drivers |
| Block list | Known-vulnerable signed drivers (post-disclosure) | Unknown-vulnerable signed drivers |
| HVCI / SKCI | g_CiOptions-patching from VTL0; writable+executable kernel pages | Behavioural BYOVD inside a properly-signed driver |
| WDAC / SAC | Anything not on the allowlist (enterprise) or unknown-reputation (consumer) | Allowlisted drivers with unknown defects |
| Defender ASR | Block-list entries on HVCI-off machines (where the rule is enabled) | Drivers not on Microsoft's blocklist |

The matrix is the practical justification for the stack. If DriverSiPolicy.p7b had perfect coverage there would be no need for SAC; if SAC had a complete allowlist there would be no need for the block list; if HVCI proved driver safety rather than driver identity there would be no need for either. None of those preconditions hold, and section 9 explains why they cannot.

Smart App Control's particulars

SAC merits a few specifics because its behaviour differs from the rest of the stack in ways that surprise readers. First, it is consumer-facing and only available on clean Windows 11 installs -- an upgrade does not get SAC. Second, SAC ships in evaluation mode by default. Windows watches user behaviour, and if the user mostly runs cloud-reputable software, SAC quietly flips to enforce; if the user runs a lot of niche or self-developed software, SAC quietly flips to off. Third, until a 2024 servicing change made SAC re-enableable from Windows Security, turning SAC off used to require a clean install to bring it back [29]. Fourth, on enterprise-managed devices, SAC turns itself off automatically after 48 hours; managed environments are expected to deploy WDAC instead [30].

The cold-start failure mode is worth knowing. A small independent hardware vendor whose driver has never been seen at scale lacks a cloud reputation when SAC asks about it. The fallback is signature, but a signed driver from an unknown publisher does not always clear SAC's confidence threshold. Small IHVs occasionally find their drivers blocked on consumer hardware running SAC for that reason alone.

The 2026 default Windows 11 22H2 stack, in evaluation order. KMCS gates identity; the block list gates known-vulnerability; HVCI/SKCI gates trust domain; SAC or WDAC gates allowlist policy; Defender ASR adds coverage on HVCI-off environments. Each layer covers a blind spot the others do not.

Verifying what the machine actually does

The state of the stack on any given Windows machine is observable. The Win32_DeviceGuard WMI class exposes a SecurityServicesRunning array whose integer codes name the security services currently active. The aside below covers the practitioner-facing details.

A toy decoder helps make the WMI surface concrete.

JavaScript Decode Win32_DeviceGuard SecurityServicesRunning flags
// Mirror of the integer codes the Win32_DeviceGuard WMI class reports
// for SecurityServicesRunning. Documented on Microsoft Learn under
// the Memory Integrity / HVCI guidance.

const SERVICE_NAMES = {
  1: "Credential Guard",
  2: "Hypervisor-protected Code Integrity (HVCI / Memory Integrity)",
  3: "System Guard Secure Launch",
  4: "SMM Firmware Measurement",
  5: "Kernel-mode Hardware-enforced Stack Protection",
  6: "Kernel-mode Hardware-enforced Stack Protection (Audit mode)",
  7: "Hypervisor-Enforced Paging Translation",
};

function explain(servicesRunning) {
  if (!servicesRunning.length) {
    return "No VBS-rooted security services are running on this device.";
  }
  return servicesRunning
    .map((code) => SERVICE_NAMES[code] || ("unknown service " + code))
    .map((s) => "  - " + s)
    .join("\n");
}

console.log("Sample 1: HVCI on, Credential Guard on");
console.log(explain([1, 2]));
console.log("\nSample 2: nothing running");
console.log(explain([]));
console.log("\nSample 3: full stack on a Secured-core PC");
console.log(explain([1, 2, 3, 4, 5]));


Five gates is a lot of work to do what one ideal gate could not. The reason for the inflation is uncomfortable: the one ideal gate cannot, in principle, exist.

9. The Undecidability Wall

Why does Windows need five layers to do what one perfect signature ought to do? Because the perfect signature is mathematically impossible.

The third reframe of this article is the one that turns engineering frustration into theoretical inevitability. The property of interest -- "this signed driver, when exercised through its IOCTL surface, can be coerced into giving an attacker an arbitrary kernel-write primitive" -- is a non-trivial semantic property of the driver's program text. Rice's theorem says that for any non-trivial semantic property of programs, the predicate is undecidable on the class of all programs. No algorithm exists that, in finite time, answers correctly for every input.

A useful way to state the bound: if $P$ is the set of all kernel drivers and $\text{Unsafe}(p) = 1$ iff driver $p$ exposes a kernel-write primitive through its IOCTL handler, then no total computable function $f : P \to \{0, 1\}$ satisfies $f = \text{Unsafe}$. Every approximation either over-blocks ($f(p) = 1$ when $\text{Unsafe}(p) = 0$: false positives, broken drivers) or under-blocks ($f(p) = 0$ when $\text{Unsafe}(p) = 1$: false negatives, BYOVD in the wild). The signing pipeline scans for the obvious cases; sophisticated dynamic analysers will catch more of the not-obvious cases; but the unrestricted version of the problem has no complete solution.
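The standard route to that result is the classic reduction from the halting problem; a sketch, not the full proof:

```latex
% Sketch: deciding Unsafe would decide halting.
% Given an arbitrary program q and input x, construct a driver p_{q,x}
% whose IOCTL handler first simulates q on x and, only if the simulation
% halts, then exposes a kernel-write primitive. By construction,
\[
  \text{Unsafe}(p_{q,x}) = 1 \iff q \text{ halts on } x .
\]
% A total computable f with f = Unsafe would therefore decide the
% halting problem, which is impossible; so no such f exists.
```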

Whether an arbitrary signed driver can be coerced into giving an attacker a kernel-write primitive is undecidable. No static signing scheme can ever block exactly the unsafe drivers. The Windows answer is therefore not a single perfect gate; it is defence in depth that narrows, but does not close, the gap.

Microsoft's formal acknowledgement

Microsoft has been formally clear about a related point for years: the administrator-to-kernel transition is not, in the MSRC servicing-criteria sense, a security boundary [12]. Elastic Security Labs put the position in plain English: "the blocklist's deployment model can be slow to adapt to new threats, with updates automatically deployed typically only once or twice a year. Users can manually update their blocklists, but such interventions bring us out of 'secure by default' territory ... When determining which vulnerabilities to fix, the Microsoft Security Response Center (MSRC) uses the concept of a security boundary." [12]

Administrator-to-kernel is not a security boundary, in the MSRC servicing-criteria sense. The defence-in-depth mechanisms described here mitigate it; from the impossibility result, none can close it.

The MSRC framing is engineering policy. The undecidability result is theoretical inevitability. They land in the same place: an attacker who has administrator privilege, who can pick from the entire history of signed Windows drivers, who is patient, is not stopped by any number of signature checks. The defence-in-depth mechanisms make the attacker work harder; they raise the cost; they shrink the surface of viable signed drivers. They do not close the structural gap.

Closeable gaps and irreducible gaps

It is worth separating two kinds of gap.

The published-vs-shipped block list gap is a policy decision, not an engineering limit. Microsoft documents that "it's often necessary for us to hold back some blocks to avoid breaking existing functionality." [4] That gap is the closeable part: an administrator who can author or import an App Control policy can deploy the published XML directly and pick up Microsoft's full curation, and defenders willing to accept the compatibility risk can close it on their own machines today. The irreducible part sits behind it: even the published list lists only what someone has already disclosed. The undecidability result applies to finding unsafe drivers, not to listing known-unsafe ones.

The gap that cannot close is the one between the published list and the universe of vulnerable drivers Microsoft has not yet learned about. That is where the undecidability result bites. No amount of pipeline tightening eliminates the class of design flaws whose recognition requires understanding what the driver's IOCTL handler will do under all possible inputs.

What static methods can achieve

Quantifying what the existing layers achieve is more useful than lamenting what they cannot. The complexity bounds for each layer are well-defined.

Authenticode signature verification is bounded below by one public-key operation and one cryptographic hash over the PE image, regardless of policy. SKCI's per-section cost is dominated by that constant. The Memory Integrity page is conspicuously silent on a published benchmark number; in practice the overhead is small but non-zero on Intel Kaby Lake-and-later or AMD Zen 2-and-later silicon with MBEC/GMET hardware acceleration, and meaningfully higher on the emulated Restricted User Mode path that older silicon falls back to [3].

WDAC allowlist evaluation is $O(\log r)$ per image on $r$ rules with a sorted index (a hash index brings the expected cost down to $O(1)$), or $O(r)$ on a naïve linear scan; the deny-rule check in DriverSiPolicy.p7b follows the same bound.
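The difference between an indexed lookup and the linear scan is easy to see in miniature. The rule set below is synthetic; a JavaScript Set stands in for whatever indexed structure the real policy engine uses.

```javascript
// Synthetic comparison of the two deny-rule lookup strategies: an
// indexed structure (a Set here, effectively constant time per probe)
// versus a naive O(r) linear scan over the same r rules.

const r = 200000;
const rules = Array.from({ length: r }, (_, i) => "hash-" + i);
const indexed = new Set(rules);
const target = "hash-" + (r - 1); // worst case for the linear scan

function linearScan(list, h) {
  for (const rule of list) if (rule === h) return true;
  return false;
}

let t0 = process.hrtime.bigint();
const viaScan = linearScan(rules, target);
let t1 = process.hrtime.bigint();
const viaIndex = indexed.has(target);
let t2 = process.hrtime.bigint();

console.log("linear scan :", viaScan, Number(t1 - t0) / 1e6, "ms");
console.log("indexed     :", viaIndex, Number(t2 - t1) / 1e6, "ms");
```

Both paths find the rule; only the cost differs, and the gap widens linearly with the rule count.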

The gap between achievable static enforcement and the ideal "block all and only the unsafe drivers" is, in the limit, irreducible.

Three axes that can be improved

If the gap cannot close, it can be narrowed along three independent axes -- and the improvements that matter look like one of these:

  • Reactiveness. The disclosure-to-enforcement latency is months today. Forthcoming WHCP submission-time analyses can compress it.
  • Coverage of unknown-bad signed drivers. Reputation, allowlists, and dynamic analysis at scale extend coverage beyond what a static deny list lists.
  • Visibility into binary contents. SBOMs answer "what is inside this driver?" -- a question the signature alone never asked.

Each axis is the answer to a different blind spot. None substitutes for another. Section 11 returns to the SBOM axis specifically because it is the one Microsoft is building into the submission flow right now.

Static signing has hit a wall it cannot push through. The only way forward is to widen the question. Two of the answers exist on other operating systems. The third is being built now.

10. The Other Two Operating Systems

Linux solved the signing half and pushed the curated-denylist half down to distribution vendors. macOS solved both by making third-party drivers stop being drivers.

Linux: signatures without a curated denylist

Linux has supported in-kernel module signing since version 3.7 (December 2012), under the configuration symbol CONFIG_MODULE_SIG. The kernel documentation catalogues the supported algorithms: "The built-in facility currently only supports the RSA, NIST P-384 ECDSA and NIST FIPS-204 ML-DSA public key signing standards." [31] The choice of signature scheme is a build-time decision, and the kernel can be told to use a key embedded in the kernel image, a key loaded into the trusted keyring at runtime, or a Machine Owner Key managed by shim and the platform's UEFI boot stack.

The structural decision that matters is the enforcement mode. CONFIG_MODULE_SIG_FORCE is the toggle. The kernel documentation describes the two settings cleanly: "If this is off (ie. 'permissive'), then modules for which the key is not available and modules that are unsigned are permitted, but the kernel will be marked as being tainted ... If this is on (ie. 'restrictive'), only modules that have a valid signature that can be verified by a public key in the kernel's possession will be loaded." [31]
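The permissive/restrictive split in that quote can be modelled in a few lines. The field names are illustrative; the behaviour follows the kernel documentation quoted above.

```javascript
// Toy model of CONFIG_MODULE_SIG_FORCE semantics from the kernel docs
// quoted above: permissive loads unsigned modules but taints the kernel;
// restrictive refuses anything without a verifiable signature.

function loadModule(mod, { sigForce }) {
  const verified = Boolean(mod.signed && mod.keyInKernelKeyring);
  if (verified) return { loaded: true, tainted: false };
  if (sigForce)  return { loaded: false, tainted: false }; // restrictive
  return { loaded: true, tainted: true };                  // permissive
}

console.log(loadModule({ signed: false }, { sigForce: false }));
console.log(loadModule({ signed: false }, { sigForce: true }));
console.log(loadModule({ signed: true, keyInKernelKeyring: true },
                       { sigForce: true }));
```

The tainted flag is the interesting design choice: permissive mode does not pretend the unsigned load never happened; it records it, and the taint shows up in oops reports and support tooling.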

Most mainstream distributions ship permissive: unsigned modules taint the kernel but load. Restrictive enforcement is real in the field -- RHEL and modern Ubuntu enforce it when Secure Boot is on -- and it is paired with the Linux lockdown security module, which restricts certain root-level kernel-modification paths even on signed builds.

The lockdown LSM is the closest mainline-Linux analogue to HVCI's policy-out-of-reach property. The kernel_lockdown(7) man page describes lockdown as "designed to prevent both direct and indirect access to a running kernel image" and enumerates the restricted surfaces: /dev/mem, /dev/kmem, /dev/kcore, kprobes, BPF, MSR alteration, ACPI table overrides, and unsigned kexec [32]. It is a partial analogue, not an equivalent: lockdown runs in the same trust domain as the kernel it polices, so a sufficient kernel exploit defeats it. HVCI's VTL0/VTL1 split is structurally stronger.

What Linux does not have is the equivalent of DriverSiPolicy.p7b. There is no kernel-level curated denylist of "we have learned this module is unsafe; refuse to load it by name". Defenders rely on per-distribution CVE trackers, on modprobe.blacklist, and on udev rules to keep specific modules out. The G5 generation -- naming the weakness rather than the publisher -- has no mainline Linux equivalent at the kernel-loader level.

macOS: DriverKit removes the surface

Apple's answer is structurally different. Starting with macOS Catalina 10.15 in 2019, Apple deprecated legacy kernel extensions for third parties and pushed them onto the DriverKit framework instead [33] [34].

DriverKit

Apple's user-space driver framework, introduced with macOS Catalina 10.15. Third-party drivers ship as .dext user-space extensions linked against a curated IOKit subset; they receive IOKit messages from the kernel and respond with the same operations they used to perform in ring zero, but the code itself runs in user mode under sandbox restrictions. The kernel side of the new model exposes a controlled message surface; the third-party side cannot directly execute kernel code.

A .dext runs in user space under a sandbox profile. It can claim devices, register for IOKit interrupts, and exchange messages with kernel-side broker code -- but it cannot, in any usable sense, execute arbitrary code in the kernel address space. The Capcom.sys class of vulnerability cannot be expressed in DriverKit: there is no IOCTL surface whose handler runs in ring zero, because the handler does not run in ring zero. Apple reinforces the boundary further with System Integrity Protection (since 2015) and, on Apple Silicon, Kernel Integrity Protection (KIP), which makes the kernel page tables read-only after boot [35].

The price was paid by Apple's IHV community. Whole categories of third-party drivers -- deep audio, virtualisation, certain security tools -- spent years migrating, and some categories took multiple macOS releases before a DriverKit equivalent of a particular kext capability existed. Apple Silicon requires explicit reduced-security mode to load any legacy kext at all: Apple's Platform Security guide records that "Kexts must be explicitly enabled for a Mac with Apple silicon by holding the power button at startup to enter into One True Recovery (1TR) mode, then downgrading to Reduced Security and checking the box to enable kernel extensions" [36].

Why Windows cannot copy Apple

The reason Windows cannot make Apple's move in the short term is operational, not architectural. Windows' IHV installed base is orders of magnitude larger and less centrally controlled. Microsoft does not own its hardware vendors the way Apple owns Macs. Breaking compatibility with twenty years of shipped kernel drivers would impose unbounded migration cost on third parties Microsoft cannot direct.

| Dimension | Windows (2026) | Linux (mainline + RHEL-class hardening) | macOS (Catalina+ / Apple Silicon) |
| --- | --- | --- | --- |
| Default signature enforcement | Mandatory on x64 since 2006 | Permissive (taints kernel); restrictive on hardened distros | Mandatory; legacy kexts deprecated |
| Curated denylist of signed-but-vulnerable artefacts | DriverSiPolicy.p7b, default-on since 22H2 | None at kernel loader; per-distro CVE trackers | Not needed -- third-party kexts removed |
| Policy engine isolated from kernel it polices | HVCI in VTL1 | Lockdown LSM (same trust domain) | KIP and SIP on Apple Silicon |
| Third-party drivers in kernel | Yes, still the model | Yes | No -- DriverKit user-space dexts |
| Operational price of the model | Compatibility carve-outs, opt-outs | Permissive default | Multi-year IHV migration |

Windows cannot move drivers to user space at Apple's speed. But it can look at what is inside a driver in a way the signature alone never could. And it has been quietly building that capability since 2022.

11. What Comes Next: SBOM, Artifact Signing, Dynamic Analysis

If signatures cannot answer "is this driver safe", and the block list can only ever answer "is this driver known-unsafe", the next question Windows has to learn how to ask is "what is inside this driver?"

SBOM for drivers

A Software Bill of Materials is a structured inventory of the components, dependencies, and versions inside a software artefact. The mainstream community formats are SPDX (now at version 3.0) and CycloneDX; Microsoft contributes to and ships an open-source tool, microsoft/sbom-tool, that produces SPDX-compatible SBOMs as part of a build pipeline [37]. The repository description is plain: "The SBOM tool is a highly scalable and enterprise ready tool to create SPDX 2.2 and SPDX 3.0 compatible SBOMs for any variety of artifacts. The tool uses the Component Detection libraries to detect components and the ClearlyDefined API to populate license information for these components." [37]

SBOM (Software Bill of Materials)

A machine-readable inventory of components and dependencies inside a software artefact. For a Windows kernel driver, an SBOM lists the third-party static libraries linked into the PE, the open-source code paths bundled with the driver, and the versions of each, in a format (SPDX, CycloneDX) that automated tools can consume to answer "is any component of this driver subject to a known vulnerability?"

The piece that affects Windows drivers specifically is the Windows Hardware Compatibility Program SBOM requirement. The Microsoft Q&A entry on Hardware Dev Center and CRA compliance is candid: "The WHCP SBOM requirement (Device.DevFund.Security.SoftwareBillofMaterials) has been deferred and will only be enforced starting in H2 2026." [38] The deferral aligns the WHCP rollout with the European Union's Cyber Resilience Act compliance window.

There is a structural problem an SBOM does not solve on its own. If the SBOM ships separately from the driver, an attacker who controls the distribution path can substitute a clean-looking SBOM for a contaminated driver. The WHCP submission flow is expected to bind the SBOM cryptographically to the artefact it describes so that a recipient can verify the binding, but the public documentation for the binding mechanism is still light beyond the WHCP SBOM mandate itself [38].

Dynamic analysis at submission time

The other axis of improvement is reactiveness. Today, the typical disclosure-to-enforcement cycle for a new BYOVD driver looks like this: vendor ships, attacker exploits, researcher discloses, Microsoft adds to the quarterly published list, Windows servicing pushes to clients. The latency is months. Two recent research programmes show how dynamic analysis at scale can compress it.

The first is the EURECOM/Politecnico di Milano paper accepted to NDSS 2026. The team built a DRAKVUF-based instrumentation layer called Kernelmon and traced every kernel function executed by signed drivers under malware-loaded workloads [39]. The numbers are unusually concrete: the paper reports that the team "analyzed 8,779 malware samples that load 773 distinct signed drivers. It flagged suspicious behavior in 48 drivers, and subsequent manual verification led to the responsible disclosure of seven previously unknown vulnerable drivers" [40]. The companion S3 blog post corroborates the 48-flagged / 7-disclosed numbers and notes that one of the seven received CVE-2024-26506 [41]. The technique is dynamic: it runs the driver under a hypervisor, watches what its IOCTL handlers actually do, and flags patterns characteristic of the BYOVD class.

The second is Check Point Research's 2024 work, which built a mass-hunt methodology around import-table signatures of risky kernel APIs and ran it across the global driver corpus. "Using the same methodology, we conducted a mass hunt for new drivers that may be vulnerable, uncovering thousands of potentially at-risk drivers." [28] The technique is static: it asks what does the driver import? rather than what does it do under exercise? Combined, the two approaches cover complementary halves of the surface.
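The import-table approach can be caricatured in a few lines: flag any driver whose import set intersects a list of kernel APIs that commonly back read/write-anywhere primitives. The risky-API list and the sample corpus below are illustrative, not Check Point's actual signature set.

```javascript
// Caricature of an import-table hunt: flag drivers importing kernel APIs
// that commonly back BYOVD read/write primitives. The risky-API list is
// illustrative, not Check Point's actual signature set.

const RISKY_IMPORTS = new Set([
  "MmMapIoSpace",        // map arbitrary physical memory
  "ZwMapViewOfSection",  // can map \Device\PhysicalMemory
  "MmGetPhysicalAddress",
]);

function flag(driver) {
  const hits = driver.imports.filter((i) => RISKY_IMPORTS.has(i));
  return { name: driver.name, suspicious: hits.length > 0, hits };
}

const corpus = [
  { name: "RTCore64.sys", imports: ["MmMapIoSpace", "IoCreateDevice"] },
  { name: "ntfs.sys",     imports: ["IoCreateDevice", "ExAllocatePool2"] },
];

for (const d of corpus) console.log(flag(d));
```

The static shortcut is also the static weakness: importing MmMapIoSpace is evidence of capability, not of a reachable IOCTL path to it, which is exactly the gap the dynamic Kernelmon-style analysis covers.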

Neither currently gates Hardware Dev Center submissions. Both are candidates for the kind of submission-time check that would compress disclosure-to-enforcement latency from quarters to days.

Empirical patterns the defences have to recognise

Cisco Talos's BYOVD work, summarised in their Exploring vulnerable Windows drivers post, classifies the post-load payloads attackers actually run [42]. Three behaviour classes dominate: token-swap escalation that overwrites the access token in the _EPROCESS structure to reach SYSTEM; unsigned-code-loading that uses the kernel-write primitive to disable DSE or patch CI state; and EDR-killing that clears the kernel callback registrations endpoint detection products rely on. Each is a target for the dynamic analyses above, each is detectable by import-table heuristics, and each is what defenders see in the wild today.

The historical roots are old. The Microsoft Security blog tracing the Vulnerable & Malicious Driver Reporting Center is direct: "Multiple malware attacks, including RobinHood, Uroburos, Derusbi, GrayFish, and Sauron, have leveraged driver vulnerabilities (for example CVE-2008-3431, CVE-2013-3956, CVE-2009-0824, and CVE-2010-1592)." [21] The payload classes have stayed remarkably stable for fifteen years.

Threats the stack cannot yet absorb

Three problems remain open and uncovered by the published roadmap. The Smart App Control cold-start window means drivers from small IHVs with no cloud reputation fall through to the signature check, and signature alone is exactly the question we already established does not answer. BYOVD on HVCI-off environments, prevalent in older anti-cheat configurations and on enterprise machines with legacy ISV drivers, still admits the g_CiOptions-patching family from VTL0 because there is no VTL1 to keep the policy out of reach. And the shipped-vs-published block list gap, while operationally rational and individually closeable by a willing administrator, is a gap every default-on customer carries.

None of those closes by algorithmic improvement. Each closes only by widening the question.

What started as a yes/no signature check has become a continually expanding set of questions Windows asks before it will hand a driver the keys to ring zero. None of those questions is sufficient. All of them are necessary. And the next one is already being written into the WHCP submission flow.

12. What This Means in Practice

Three audiences, three things to do.

Administrators. Confirm the stack is on. Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard returns a SecurityServicesRunning array; a value of 2 in the array confirms HVCI is running. A DriverSiPolicy.p7b in %windir%\system32\CodeIntegrity\ confirms the in-box block list is deployed. If you can tolerate the compatibility risk, compile the published block-rules XML into an App Control policy and deploy it, audit mode first [4]. If you run Windows Server 2016, you have to deploy an explicit policy yourself because the in-box default does not apply there [4]. If you ship through the Hardware Dev Center, plan now for the H2 2026 WHCP SBOM gate [38]. Subscribe to the Vulnerable & Malicious Driver Reporting Center cadence for new disclosures [21].
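The only fiddly part of the confirmation step is reading the SecurityServicesRunning array. A minimal sketch of the logic, with the array hardcoded here so it runs anywhere (on Windows it comes from the Get-CimInstance query above; per the Memory Integrity documentation, 1 denotes Credential Guard and 2 denotes HVCI):

```python
# Interpret the Win32_DeviceGuard SecurityServicesRunning array.
# 1 = Credential Guard, 2 = HVCI (Memory Integrity).

def hvci_running(security_services_running: list[int]) -> bool:
    """True if HVCI is among the running virtualization-based security services."""
    return 2 in security_services_running

print(hvci_running([1, 2]))  # Credential Guard and HVCI both on -> True
print(hvci_running([]))      # no VBS services running -> False
```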

Driver authors. Assume your IOCTL surface will be read by Check Point's import-table mass hunt and exercised by EURECOM's Kernelmon [28] [39]. Any handler that takes a user-supplied address and returns kernel data, or that dispatches a user-supplied function pointer, will end up on a block list on its current trajectory.

Researchers. The field is wide open. The undecidability result is real, but the practical gap between what current analyses detect and what is, in principle, detectable for any specific vulnerability class is large. The NDSS 2026 paper found seven CVE-worthy drivers in a corpus of 773. The next paper will find more.

Every layer is somebody's incident report

Every layer in the 2026 stack exists because the previous one lost to a named adversary. Sony BMG XCP retired advisory signing. Stuxnet retired the assumption that a valid chain is a safe chain. Capcom.sys retired the assumption that a safe chain is a safe driver. RTCore64.sys, gdrv.sys, and KProcessHacker retired the assumption that the BYOVD class would burn itself out. Each entry on DriverSiPolicy.p7b is somebody's incident report, recorded in the most permanent place Microsoft can put it -- the kernel loader's deny list.

Frequently Asked Questions

Does HVCI block every vulnerable driver?

No. HVCI verifies the Authenticode signature at section-mapping time and enforces a write-xor-execute invariant on kernel memory; it does not analyse the driver's IOCTL surface. A signed driver with an unsafe IOCTL passes HVCI unchanged and proceeds to execute its handler in kernel mode with kernel privilege. That is what the Vulnerable Driver Block List is for: HVCI gates who decides; the block list gates what gets decided. See the Memory Integrity page [3].

Can I update the block list faster than quarterly?

Yes. Microsoft publishes the source XML on the Microsoft-recommended driver block rules page [4]. You can compile it into a binary App Control policy with the standard tooling and deploy it directly, picking up entries Microsoft holds back from the in-box list. Test in audit mode first because the published list is more inclusive than the shipped list and may flag drivers your environment depends on. Many defenders layer the LOLDrivers App Control policy on top for community-curated coverage [26].

What about Windows Server 2016?

Windows Server 2016 does not enforce the block list by default, even when Memory Integrity is on. The block-rules page calls this out explicitly [4]. If you administer Server 2016, deploy an explicit App Control policy to get the same coverage as the default-on 22H2 client.

How is Smart App Control different from App Control for Business?

App Control for Business (the engine formerly known as WDAC) is a policy you author. You define what signers, hashes, and paths are allowed; you ship and enforce the policy yourself. Smart App Control is a Microsoft-authored policy bundled with cloud reputation lookups via the Intelligent Security Graph. SAC is the consumer-friendly front end; App Control is the enterprise back end. SAC's default policy ships at %windir%\schemas\CodeIntegrity\ExamplePolicies\SmartAppControl.xml. SAC is consumer-only and turns itself off after 48 hours on enterprise-managed devices, where the expectation is that the operator deploys an App Control policy directly. See the Smart App Control FAQ and the App Control for Business overview [29] [30].

Are anti-cheat engines compatible with HVCI?

Increasingly yes. Major anti-cheat vendors have shipped HVCI-compatible kernel components since around 2023, but a meaningful tail of older configurations still requires HVCI off. On those configurations, the g_CiOptions-patching technique TrustedSec describes is back in play because the policy variable is no longer protected behind VTL1 [19]. Audit your gaming-rig population if you care about coverage.

What is the practical difference between the in-box blocklist and LOLDrivers?

The in-box block list is Microsoft-curated with explicit compatibility holdbacks; the LOLDrivers catalogue is community-curated, considerably more inclusive (2,132 entries at the time of this article's source verification), and ships with App Control deny policies, Sigma, YARA, ClamAV, and Sysmon detection content alongside the entries [27] [26]. For threat hunting, use both. For enforcement, layer the LOLDrivers App Control policy on top of the in-box list if your environment can tolerate the compatibility risk. Check Point Research has documented the dual-use externality of any such public list -- attackers also read them -- but the defender net benefit of broader coverage outweighs the marginal attacker advantage in most environments [28].

Study guide

Key terms

Authenticode
Microsoft's PKCS#7 code-signing format, used in Windows since 1996. Attests to publisher identity; does not analyse program behaviour.
KMCS
Kernel-Mode Code Signing. The mandatory load-time signature policy on 64-bit Windows since Vista x64 in 2006.
BYOVD
Bring Your Own Vulnerable Driver. An attack pattern in which an adversary installs a signed but design-vulnerable third-party driver to gain kernel capability.
HVCI
Hypervisor-protected Code Integrity, also called Memory Integrity. The Code Integrity engine running in VTL1 under a Hyper-V root, isolated from the VTL0 kernel.
VTL
Virtual Trust Level. VTL0 is the normal Windows kernel; VTL1 is the Secure Kernel and trustlets. VTL1 can read VTL0 memory but not vice versa.
DriverSiPolicy.p7b
The Microsoft-signed App Control deny policy that lists known-vulnerable signed kernel drivers and is default-on for all Windows 11 22H2 client devices.
App Control for Business
Microsoft's policy-driven application control engine, formerly WDAC. Used for both deny lists (the block list) and enterprise allowlists.
Smart App Control
Consumer-facing front end for App Control, backed by ISG cloud reputation. Available on clean Windows 11 22H2+ installs only.
SBOM
Software Bill of Materials. Machine-readable inventory of components and dependencies. Mandatory for WHCP submissions from H2 2026.
DriverKit
Apple's user-space driver framework. Third-party drivers ship as sandboxed dexts rather than kernel extensions; the BYOVD class is eliminated by construction.

Comprehension questions

  1. Why did the Windows kernel-driver signing policy have to wait until Vista x64 to become mandatory?

    The advisory SetupAPI-prompt model on 32-bit Windows could not be made mandatory without breaking compatibility with decades of unsigned drivers. The x64 architecture was a young platform with relatively few drivers in the field, which let Microsoft make the load-time signature requirement mandatory without disrupting an installed base.

  2. What single property of HVCI makes the g_CiOptions patching technique stop working?

    HVCI runs the signature-verification and policy-consultation logic inside VTL1's Secure Kernel and uses Kernel Data Protection, exposed to VTL0 drivers as MmProtectDriverSection, to mark the VTL0 page containing g_CiOptions read-only at the second-level address translation level. The variable still resides in ci.dll's VTL0 data section, but a VTL0 ring-zero write to it faults because the SLAT mapping refuses the write -- and a live-kernel debugger attached to VTL0 cannot bypass that protection either.

  3. Why does Microsoft document that the published block list is more inclusive than the shipped one?

    Some entries in the published list would block drivers that legitimate environments still depend on. Microsoft holds those entries back from the in-box DriverSiPolicy.p7b to avoid breaking existing functionality, while leaving them available in the source XML for defenders who can author their own App Control policies and accept the compatibility risk.

  4. Why is the BYOVD class undecidable to gate at the signing stage?

    Whether an arbitrary signed driver exposes a kernel-write primitive through its IOCTL surface is a non-trivial semantic property of the driver's program text. Rice's theorem says no algorithm decides such properties for all programs. Static and dynamic analyses catch decidable subsets; the unrestricted class admits no complete solution.

  5. Why can Windows not simply move third-party drivers to user space the way macOS DriverKit did?

    Apple owns its hardware vendors and could impose a multi-year migration on a comparatively centralised vendor community. Windows' third-party IHV base is much larger and more independent; breaking compatibility with twenty years of shipped kernel drivers would impose unbounded migration cost on parties Microsoft does not direct.

References

  1. tandasat/ExploitCapcom. https://github.com/tandasat/ExploitCapcom
  2. rapid7/metasploit-framework#7363 -- Add LPE exploit module for the capcom driver flaw. https://github.com/rapid7/metasploit-framework/pull/7363
  3. Enable virtualization-based protection of code integrity (Memory Integrity / HVCI). https://learn.microsoft.com/en-us/windows/security/hardware-security/enable-virtualization-based-protection-of-code-integrity
  4. Microsoft recommended driver block rules. https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/microsoft-recommended-driver-block-rules
  5. Cryptography tools. https://learn.microsoft.com/en-us/windows/win32/seccrypto/cryptography-tools
  6. Windows Hardware Lab Kit. https://learn.microsoft.com/en-us/windows-hardware/test/hlk/
  7. Kernel-mode code signing policy (Windows Vista and later). https://learn.microsoft.com/en-us/windows-hardware/drivers/install/kernel-mode-code-signing-policy--windows-vista-and-later-
  8. Code signing. https://en.wikipedia.org/wiki/Code_signing
  9. Kernel-mode code signing requirements (Windows previous-versions). https://learn.microsoft.com/en-us/previous-versions/windows/hardware/design/dn653567(v=vs.85)
  10. Cross-Certificates for Kernel Mode Code Signing (2020 archive of the historical CA list). https://web.archive.org/web/2020/https://docs.microsoft.com/en-us/windows-hardware/drivers/install/cross-certificates-for-kernel-mode-code-signing
  11. Cross-certificates for kernel-mode code signing. https://learn.microsoft.com/en-us/windows-hardware/drivers/install/cross-certificates-for-kernel-mode-code-signing
  12. Forget vulnerable drivers: admin is all you need. https://www.elastic.co/security-labs/forget-vulnerable-drivers-admin-is-all-you-need
  13. Kernel Patch Protection (PatchGuard). https://en.wikipedia.org/wiki/Kernel_Patch_Protection
  14. Stuxnet. https://en.wikipedia.org/wiki/Stuxnet
  15. CVE-2019-16098 (RTCore64.sys / MSI Afterburner). https://nvd.nist.gov/vuln/detail/CVE-2019-16098
  16. Remove all the callbacks -- BlackByte ransomware disables EDR via RTCore64.sys. https://news.sophos.com/en-us/2022/10/04/blackbyte-ransomware-returns/
  17. CVE-2018-19320 (Gigabyte gdrv.sys). https://nvd.nist.gov/vuln/detail/CVE-2018-19320
  18. How DoppelPaymer hunts and kills Windows processes. https://www.crowdstrike.com/en-us/blog/how-doppelpaymer-hunts-and-kills-windows-processes/
  19. g_CiOptions in a virtualized world. https://www.trustedsec.com/blog/g_cioptions-in-a-virtualized-world
  20. Code signing attestation. https://learn.microsoft.com/en-us/windows-hardware/drivers/dashboard/code-signing-attestation
  21. Improve kernel security with the new Microsoft Vulnerable and Malicious Driver Reporting Center. https://www.microsoft.com/en-us/security/blog/2021/12/08/improve-kernel-security-with-the-new-microsoft-vulnerable-and-malicious-driver-reporting-center/
  22. Microsoft vulnerable driver blocklist: getting better and better. https://techcommunity.microsoft.com/blog/microsoft-security-baselines/microsoft-vulnerable-driver-blocklist-getting-better-and-better/4172168
  23. KB5020779: The vulnerable driver blocklist after the October 2022 preview release. https://support.microsoft.com/en-us/topic/kb5020779-the-vulnerable-driver-blocklist-after-the-october-2022-preview-release-3fcbe13a-6013-4118-b584-fcfbc6a09936
  24. Available today: the Windows 11 2022 Update. https://blogs.windows.com/windowsexperience/2022/09/20/available-today-the-windows-11-2022-update/
  25. Attack surface reduction rules reference. https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/attack-surface-reduction-rules-reference
  26. magicsword-io/LOLDrivers. https://github.com/magicsword-io/LOLDrivers
  27. LOLDrivers. https://www.loldrivers.io/
  28. Breaking boundaries: investigating vulnerable drivers and mitigating risks. https://research.checkpoint.com/2024/breaking-boundaries-investigating-vulnerable-drivers-and-mitigating-risks/
  29. Smart App Control frequently asked questions. https://support.microsoft.com/topic/what-is-smart-app-control-285ea03d-fa88-4d56-882e-6698afdb7003
  30. App Control for Business. https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/appcontrol
  31. Linux kernel module signing. https://docs.kernel.org/admin-guide/module-signing.html
  32. kernel_lockdown(7) -- Linux manual page. https://man7.org/linux/man-pages/man7/kernel_lockdown.7.html
  33. Legacy system extensions in macOS. https://support.apple.com/en-us/HT210999
  34. DriverKit (Apple Developer Documentation). https://developer.apple.com/documentation/driverkit
  35. System Integrity Protection (Apple Platform Security). https://support.apple.com/guide/security/system-integrity-protection-secb7ea06b49/web
  36. Securely extending the kernel in macOS (Apple Platform Security). https://support.apple.com/guide/security/securely-extending-the-kernel-sec8e454101b/web
  37. microsoft/sbom-tool. https://github.com/microsoft/sbom-tool
  38. Would Hardware Dev Center for CRA compliance change?. https://learn.microsoft.com/en-us/answers/questions/5732099/would-hardware-dev-center-for-cra-compliance-chang
  39. Unveiling BYOVD threats: kernel-driver dynamic analysis at scale (NDSS 2026). https://www.eurecom.fr/publication/8384
  40. Unveiling BYOVD threats: kernel-driver dynamic analysis at scale (NDSS 2026 paper PDF). https://www.s3.eurecom.fr/docs/ndss26_monzani.pdf
  41. Unveiling BYOVD threats: malware use and abuse of kernel drivers. https://www.s3.eurecom.fr/post/2025/10/13/unveiling-byovd-threats-malwares-use-and-abuse-of-kernel-drivers/
  42. Exploring vulnerable Windows drivers. https://blog.talosintelligence.com/exploring-vulnerable-windows-drivers/