When SYSTEM Isn't Enough: The Windows Secure Kernel and the End of Total Kernel Trust
How Windows built a hardware-isolated kernel above Ring 0 using Hyper-V, protecting credentials and code integrity even after full NT kernel compromise.
When SYSTEM Isn't Enough
An attacker has achieved the holy grail: SYSTEM-level access on a domain-joined Windows machine. They load Mimikatz, point it at LSASS, and reach for the domain admin's Kerberos ticket. The command runs. The output comes back empty. The credentials are there -- the machine uses them every second -- but they're locked behind a wall that even full kernel access cannot breach.
Welcome to the world of the Windows Secure Kernel.
For decades, Windows security rested on a single hard boundary: user mode versus kernel mode. If you crossed that line -- if you achieved Ring 0 execution -- the system was yours. Every credential, every security policy, every secret was accessible. Tools like Benjamin Delpy's Mimikatz turned this architectural reality into a practical catastrophe, making Pass-the-Hash and Pass-the-Ticket attacks trivially easy across enterprise networks [1].
But on a modern Windows 11 machine with Virtualization-Based Security (VBS) enabled, the rules have changed. A new trust boundary exists -- one enforced not by the kernel, but by the hypervisor running above the kernel. Even SYSTEM-level access in the traditional kernel cannot reach across this boundary [2].
If kernel mode gives you everything, what could possibly be above kernel mode? The answer requires a 30-year journey through Windows security.
The All-or-Nothing Kernel: How Windows NT Was Built
In 1988, Dave Cutler began designing Windows NT with a security model influenced by military security research -- especially the reference monitor concept (distinct from Bell-LaPadula's mandatory-access-control model). It was state-of-the-art for its era. It also contained a fatal assumption.
Security Reference Monitor (SRM): The core component of the Windows NT security architecture that mediates all access to securable objects (files, registry keys, processes) by checking Access Control Lists (ACLs) against the caller's security token. The SRM runs in kernel mode and enforces discretionary access control for every system operation.
The NT kernel drew a hard line between Ring 3 (user mode) and Ring 0 (kernel mode) [3]. User-mode processes could not directly access kernel memory. The Security Reference Monitor mediated all access to system objects. For the early 1990s, this was a significant advance over DOS and Windows 9x, where applications and the OS shared the same memory space with no isolation at all.
Dave Cutler previously designed VMS at Digital Equipment Corporation (DEC). Many NT design principles -- including the SRM, the object manager, and the layered architecture -- trace directly back to VMS. The letters "WNT" are famously one character ahead of "VMS" in the alphabet.
But the NT model contained a fatal assumption: all kernel-mode code is equally trusted. Once a driver or exploit gained Ring 0 access, it shared the same address space and privilege level as the kernel itself. It could read and write any memory, modify the System Service Dispatch Table (SSDT), manipulate the Interrupt Descriptor Table (IDT), or unlink processes from the EPROCESS active process list.
This was the golden age of kernel-mode rootkits. Jamie Butler's FU rootkit (2004) used Direct Kernel Object Manipulation (DKOM) to unlink processes from the active process list, making malicious processes invisible to Task Manager, antivirus tools, and every other system utility [4]. SSDT hooking allowed rootkits to intercept and redirect any system call, providing total control over OS behavior.
Mark Russinovich and Bryce Cogswell built the Sysinternals tools to make these kernel internals visible to defenders [5]. Process Explorer, Filemon, and Regmon became essential diagnostic instruments. But visibility is not protection. Defenders could see the problem; they could not stop it.
The NT kernel drew one hard line -- user mode versus kernel mode. When attackers crossed that line, there was nothing left to protect. Every security mechanism, every credential, every policy lived in the same flat address space. Microsoft needed to draw a new line.
Software Guards for a Hardware Problem: PatchGuard and Friends
What do you do when the prisoners are as powerful as the guards? You send in more guards at the same level. That was Microsoft's first strategy -- and its fundamental flaw.
PatchGuard (Kernel Patch Protection, KPP): A software-only kernel integrity monitor introduced in 2005 for 64-bit Windows. PatchGuard periodically checks critical kernel structures (SSDT, IDT, GDT, processor MSRs) for unauthorized modifications and forces a Blue Screen of Death (CRITICAL_STRUCTURE_CORRUPTION) if tampering is detected.
PatchGuard arrived in Windows XP x64 and Windows Server 2003 SP1 in 2005 [6]. It used obfuscated, randomized integrity checks to detect unauthorized modifications to kernel structures. If it caught tampering, it triggered a BSOD. On the surface, this seemed like a strong defense.
PatchGuard's internal implementation uses extensive obfuscation: randomized check intervals, encrypted context blocks, and self-protecting code that resists static analysis. Microsoft never published its internal design, treating security through obscurity as a deliberate delaying tactic against attackers.
Mandatory kernel-mode code signing followed with Windows Vista x64 in 2007, requiring all kernel drivers to carry a valid Authenticode signature [7]. Data Execution Prevention (DEP) marked memory pages as non-executable [8]. Address Space Layout Randomization (ASLR) randomized the memory layout of loaded modules [9]. Supervisor Mode Execution Prevention (SMEP) blocked kernel code from executing user-mode memory pages.
Each mitigation raised the cost of attack. Together, they made kernel exploitation significantly harder. But each one had a fatal weakness.
Bring Your Own Vulnerable Driver (BYOVD): An attack technique where adversaries install a legitimately signed but vulnerable third-party driver, then exploit the driver's vulnerability to gain arbitrary kernel-mode code execution. Because the driver carries a valid signature, it bypasses kernel-mode code signing enforcement.
PatchGuard runs at Ring 0 -- the same privilege level as the attackers it monitors. In 2019, the InfinityHook project demonstrated how to hook kernel callbacks via the Event Tracing for Windows (ETW) subsystem without patching any kernel structures that PatchGuard checks [10]. PatchGuard never noticed.
Kernel-mode code signing stops unsigned drivers but not signed-and-vulnerable ones. The BYOVD technique became a staple of advanced persistent threat (APT) groups: install a legitimately signed driver with a known vulnerability, exploit that vulnerability, and gain arbitrary kernel execution while all code signing checks pass [11].
DEP is bypassed by Return-Oriented Programming (ROP). Instead of injecting new code, attackers chain existing executable code snippets ("gadgets") to achieve arbitrary computation [12]. ASLR has limited entropy on 32-bit systems and is defeated by information leaks that reveal randomized base addresses [13].
PatchGuard was a guard who could be knocked out by the very prisoners it watched. A defense sharing its privilege level with the attacker can always, given sufficient motivation, be subverted.
No software-only defense can protect against an attacker at the same privilege level. This is not a fixable bug -- it is a structural limitation. PatchGuard delays attacks; it cannot prevent them. Microsoft needed something that kernel-mode code could not even reach.
Building the Foundation: Secure Boot and the Trust Chain
If you cannot trust the kernel at runtime, can you at least trust that it started clean? UEFI Secure Boot bet on that premise.
Windows 8 (October 2012) mandated Secure Boot for certified hardware, establishing a cryptographic chain of trust from firmware through bootloader to OS kernel [14]. Only components signed by trusted authorities could execute during the boot process. Measured Boot extended this by hashing each boot component into TPM Platform Configuration Registers (PCRs), creating a verifiable boot log that remote attestation services could check [15].
This was a real advance. Bootkits like TDL4/Alureon, which operated below the OS and were invisible to all software-based defenses, were effectively blocked [16]. The boot chain was now cryptographically verified.
But Secure Boot had a critical gap: it protected the boot process, not runtime. Once Windows loaded and started executing, a kernel exploit could compromise the system just as before. PatchGuard was still the only runtime defense, and we have already seen its limitations.
Secure Boot ensured the system started clean but could not keep it clean. Microsoft needed runtime isolation -- and the key technology was already sitting on millions of machines, unused for this purpose: the hypervisor.
The Breakthrough: Virtual Trust Levels and the Secure Kernel
The insight that changed everything was deceptively simple: if Ring 0 attackers can compromise anything at Ring 0, create a Ring -1. The hypervisor was already there.
Intel VT-x and AMD-V hardware virtualization extensions, shipping since 2005-2006, gave the hypervisor a privilege level above the OS kernel [18]. Microsoft's Hyper-V already used this capability for virtual machines. The breakthrough was recognizing that the same hardware could create a security boundary within a single OS instance -- not a separate VM, but a hardware-isolated execution context that the kernel could not reach.
Virtual Trust Levels (VTLs): A hardware-enforced execution environment created by the Hyper-V hypervisor using Second Level Address Translation (SLAT). VTL0 is the Normal World where the standard NT kernel, drivers, and applications run. VTL1 is the Secure World where securekernel.exe and security-critical trustlets execute. VTL1 memory is physically inaccessible to all VTL0 code, including the NT kernel.
Second Level Address Translation (SLAT): A hardware feature (Intel Extended Page Tables / AMD Nested Page Tables) that provides a second layer of virtual-to-physical address translation managed by the hypervisor. SLAT enables the hypervisor to control which physical memory pages each VTL can access, making VTL1 memory invisible to VTL0 without any software-level enforcement that could be bypassed.
In May 2015, Brad Anderson announced Virtualization-Based Security, Device Guard, and Credential Guard at Microsoft Ignite [19]. The initial Windows 10 release, version 1507 (July 2015), shipped with VBS, creating two Virtual Trust Levels: VTL0 (Normal World) and VTL1 (Secure World) [2].
Diagram source
flowchart TB
subgraph VTL1["VTL1 -- Secure World"]
SK["securekernel.exe\n(Secure Kernel)"]
IUM["Isolated User Mode"]
LSAISO["lsaiso.exe\n(Credential Guard)"]
ENCLAVE["VBS Enclaves"]
IUM --- LSAISO
IUM --- ENCLAVE
end
subgraph VTL0["VTL0 -- Normal World"]
NT["ntoskrnl.exe\n(NT Kernel)"]
DRIVERS["Kernel Drivers"]
LSASS["lsass.exe\n(LSASS broker)"]
APPS["User Applications"]
end
subgraph HV["Hyper-V Hypervisor"]
SLAT["SLAT Enforcement\n(Intel EPT / AMD NPT)"]
end
HV -->|"enforces memory isolation"| VTL1
HV -->|"enforces memory isolation"| VTL0
VTL0 -.->|"Secure Service Calls\n(controlled boundary)"| VTL1
Here is how it works:
- At boot, the Hyper-V hypervisor initializes and creates both VTLs.
- The standard NT kernel (ntoskrnl.exe), all drivers, and user-mode applications run in VTL0.
- securekernel.exe loads in VTL1 kernel mode. It is a minimal, purpose-built kernel that handles only security-critical functions [20].
- The hypervisor uses SLAT to make VTL1 memory physically inaccessible to VTL0. No amount of Ring 0 code in VTL0 can read or write VTL1 pages.
- Communication between VTL0 and VTL1 occurs only via Secure Service Calls (SSCs) -- controlled hypercalls that cross the VTL boundary under strict validation [2].
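The access rule SLAT enforces can be sketched as a toy model -- not real hypervisor code, just the invariant: each physical page records its owning VTL, and any access from a lower VTL faults before the kernel ever sees the data.

```javascript
// Toy model of SLAT-enforced VTL isolation. Page numbers and contents
// are illustrative; a real SLAT entry also carries R/W/X permissions.
const slat = new Map([
  [0x1000, { owner: 0 }], // NT kernel page: owned by VTL0
  [0x2000, { owner: 1 }], // Secure Kernel page: owned by VTL1
]);

function readPage(callerVtl, page) {
  const entry = slat.get(page);
  // A VTL may access its own pages and anything owned by a lower VTL.
  // VTL0 code -- even at Ring 0 -- can never satisfy this check for
  // a VTL1-owned page; the hypervisor raises an EPT violation instead.
  if (!entry || callerVtl < entry.owner) return { fault: "EPT_VIOLATION" };
  return { data: `contents of page 0x${page.toString(16)}` };
}

console.log(readPage(0, 0x2000)); // VTL0 reading VTL1 memory -> fault
console.log(readPage(1, 0x1000)); // VTL1 reading VTL0 memory -> allowed
```

The asymmetry is the whole design: VTL1 can see all of VTL0, but not vice versa, and the check lives in hardware page tables that VTL0 cannot edit.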
Trustlet: A process running in VTL1 Isolated User Mode (IUM), protected from all VTL0 access by hypervisor-enforced memory isolation. The canonical example is lsaiso.exe, the Credential Guard trustlet that holds NTLM hashes and Kerberos tickets in VTL1 where even a fully compromised NT kernel cannot reach them.
Alex Ionescu's 2015 Black Hat presentation was the first major public technical teardown of the Secure Kernel Mode (SKM) and Isolated User Mode (IUM) architecture [20]. Rafal Wojtczuk (Bromium) followed in 2016 with the first independent security audit of VBS, mapping the trust boundaries and identifying the secure call interface as the primary attack surface [21].
What can an attacker with full SYSTEM access in VTL0 not do?
- Read credentials protected by Credential Guard
- Load unsigned kernel drivers when HVCI is enabled
- Access VTL1 memory or modify Secure Kernel data structures
- Disable VBS without rebooting (and with Secure Boot + UEFI lock, not easily even then)
For the first time, full NT kernel compromise was no longer game over: an attacker who owned every byte of VTL0 still could not reach secrets protected in VTL1. This fundamentally changed the Windows threat model. But what, exactly, does this new architecture protect?
The Pillars: What the Secure Kernel Protects
The Secure Kernel is not a product -- it is a platform. Five distinct security features stand on its shoulders, each protecting a different class of asset.
Credential Guard
When Credential Guard is enabled, NTLM password hashes and Kerberos Ticket-Granting Tickets (TGTs) are stored exclusively in lsaiso.exe -- a trustlet running in VTL1 [22]. The VTL0 lsass.exe process acts as a broker: authentication requests from VTL0 are forwarded to lsaiso.exe via secure RPC over the VTL boundary. lsaiso.exe performs cryptographic operations (challenge signing, ticket generation) within VTL1 and returns only the result -- never the raw secret.
Diagram source
sequenceDiagram
participant App as Application (VTL0)
participant LSASS as lsass.exe (VTL0)
participant HV as Hypervisor
participant LSAISO as lsaiso.exe (VTL1)
App->>LSASS: Authentication request
LSASS->>HV: Secure Service Call
HV->>LSAISO: Forward to VTL1
LSAISO->>LSAISO: Sign challenge with stored credential
LSAISO->>HV: Return signed response (NOT the raw secret)
HV->>LSASS: Forward to VTL0
LSASS->>App: Authentication result
Note over LSASS: Even SYSTEM access here cannot read VTL1 memory
Even a Mimikatz-wielding attacker with SYSTEM access in VTL0 gets nothing -- the raw credentials never exist in VTL0 memory. Credential Guard is enabled by default on domain-joined Windows 11 22H2+ systems [22].
HVCI / Memory Integrity
Hypervisor-Protected Code Integrity (HVCI): A VBS feature (also called "Memory Integrity") that enforces kernel-mode code integrity from VTL1. HVCI ensures only signed code executes in the kernel and enforces W^X (Write XOR Execute) policy on all kernel memory pages via SLAT. No kernel memory page can be both writable and executable simultaneously.
W^X (Write XOR Execute): A memory protection policy enforcing that a page can be either writable or executable, but never both simultaneously. HVCI enforces W^X across all kernel memory via SLAT page permissions controlled from VTL1, preventing attackers from injecting and executing arbitrary code in the kernel.
HVCI moves code integrity enforcement from VTL0 into VTL1 [23]. Before any kernel-mode driver loads, its signature is verified by VTL1 code integrity services. HVCI enforces W^X on kernel memory pages using SLAT: page table modifications that would create a writable-and-executable page are trapped by the hypervisor and denied. Even if an attacker achieves kernel execution in VTL0, they cannot load unsigned drivers or make arbitrary kernel memory executable.
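A toy model of the W^X rule (illustrative only -- real enforcement happens in SLAT entries trapped by the hypervisor, not in software objects):

```javascript
// Toy model of HVCI's W^X policy: VTL1 owns the kernel's page
// permissions, and any request that would leave a page both writable
// and executable is refused, no matter who asks from VTL0.
function requestPagePermissions(page, { write, execute }) {
  if (write && execute) {
    return { granted: false, reason: "W+X violates HVCI policy" };
  }
  page.write = write;
  page.execute = execute;
  return { granted: true };
}

const driverPage = { write: false, execute: true }; // signed driver code
// The classic "make kernel code writable, then patch it" move fails:
console.log(requestPagePermissions(driverPage, { write: true, execute: true }));
// Ordinary data pages can still be writable (just never executable too):
console.log(requestPagePermissions({}, { write: true, execute: false }));
```

An attacker who wants to run new kernel code must therefore either present a valid signature or find a bug in the enforcement itself -- there is no in-between.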
On newer CPUs, Intel Mode-Based Execution Control (MBEC, Kaby Lake / 7th Gen+) and AMD Guest Mode Execute Trap (GMET, Zen 2+) provide hardware-accelerated W^X enforcement. Older CPUs rely on software emulation ("Restricted User Mode"), which increases overhead.
Diagram source
flowchart TD
A["Driver load request\n(VTL0)"] --> B["Signature check\n(VTL1 code integrity)"]
B -->|"Valid signature"| C["Set page permissions:\nExecutable + Read-Only"]
B -->|"Invalid signature"| D["BLOCKED\nDriver cannot load"]
E["Attacker tries to\nmake page W+X"] --> F{"SLAT check\n(Hypervisor)"}
F -->|"Violation: W+X"| G["DENIED\nPage remains Read-Only"]
F -->|"Valid: W XOR X"| H["Allowed"]
VBS Enclaves
VBS Enclave: An isolated memory region backed by VTL1 that allows third-party applications to protect secrets from even admin-level OS compromise. The host application in VTL0 communicates with the enclave via the CallEnclave API. Enclave memory is invisible to all VTL0 code, including the NT kernel. Available since Windows 11 24H2.
Starting with Windows 11 24H2, third-party developers can create their own VTL1-protected enclaves -- isolated memory regions for protecting application-level secrets like encryption keys and authentication tokens [24]. Unlike Intel SGX, VBS Enclaves require no specialized hardware beyond a VBS-capable CPU [25]. Developers define enclave interfaces using EDL (Enclave Description Language) files and build with the VBS Enclave Tooling SDK [26].
System Guard Runtime Attestation
System Guard extends the trust chain from boot into runtime [28]. A trustlet running in VTL1 periodically measures the integrity of critical system components -- boot state, kernel integrity, driver signatures -- and signs these measurements using a hardware-backed TPM key. Because the measurement code runs in VTL1, it is protected from tampering by compromised VTL0 code [29]. Remote attestation services (such as Microsoft Defender for Endpoint) can verify these signed reports to confirm device health -- enabling zero-trust conditional access decisions.
Secured-core PCs
Secured-core PCs integrate hardware, firmware, and VBS into a single security platform requirement [30]. Certified hardware must include a 64-bit CPU with SLAT, IOMMU for DMA protection, TPM 2.0, UEFI with Secure Boot, SMM protection, DRTM support, and VBS/HVCI enabled and firmware-locked. Major OEMs -- Dell, HP, Lenovo, Microsoft Surface -- ship Secured-core PCs for enterprise and government customers.
VBS also enables additional isolation features beyond these core pillars. Windows Defender Application Guard (WDAG) uses Hyper-V containers to isolate untrusted browser sessions and Office documents, preventing web-based exploits from reaching the host OS. Hyper-V container isolation provides similar protection for containerized workloads.
Decision Guide
| Scenario | Recommended Approach |
|---|---|
| Protect domain credentials from Pass-the-Hash/Ticket | Enable Credential Guard |
| Prevent unsigned kernel driver loading | Enable HVCI / Memory Integrity |
| Protect application-level secrets from admin attacks | Develop a VBS Enclave |
| Verify device integrity for zero-trust | Enable System Guard Runtime Attestation |
| Maximum baseline security for new hardware | Require Secured-core PC certification |
The Secure Kernel now protects credentials, code integrity, application secrets, and device health. It is deployed on hundreds of millions of machines. But is it unbreakable?
How Others Solve This Problem: Competing Approaches
Windows is not alone in this challenge. Intel, AMD, and ARM each built their own answer to the same question: how do you protect secrets from a compromised OS? Each made different trade-offs.
Intel SGX
Intel Software Guard Extensions provided hardware enclaves at the CPU level without requiring a hypervisor [31]. Application code and data inside an SGX enclave were encrypted in memory and isolated from the OS, hypervisor, and other applications. The idea was compelling: trust nothing but the CPU itself.
Then side-channel attacks proved the CPU itself was not trustworthy. The Foreshadow attack (2018) exploited L1 Terminal Fault to extract data directly from SGX enclaves via CPU cache side channels [32]. Intel deprecated SGX across 11th Gen client CPUs, including Tiger Lake mobile and Rocket Lake desktop, and continued that direction with 12th Gen Alder Lake [31].
AMD SEV-SNP
AMD Secure Encrypted Virtualization with Secure Nested Paging (SEV-SNP) encrypts VM memory with per-VM keys and enforces page ownership via a Reverse Map Table (RMP) -- a hardware table that records which VM owns each physical page [33]. Even the hypervisor cannot read or remap guest memory without the guest's consent. This is a fundamentally different trust model from VBS: SEV-SNP distrusts the hypervisor, while VBS trusts it. SEV-SNP protects VMs in multi-tenant cloud environments (like Azure Confidential VMs) but does not provide intra-OS isolation within a single machine the way VBS does. The two are complementary, not competing.
Intel TDX
Intel Trust Domain Extensions create hardware-isolated Trust Domains for VMs, excluding the hypervisor from the trusted computing base [34]. The TDX Module runs in a special CPU mode called Secure Arbitration Mode (SEAM) and mediates all interactions between the hypervisor and Trust Domains -- the hypervisor can schedule TD VMs but cannot read their memory or registers. Like SEV-SNP, TDX targets cloud confidential computing rather than intra-OS protection. It complements VBS rather than replacing it.
ARM TrustZone
ARM TrustZone partitions the CPU into a Secure World and a Normal World using a hardware security state bit, predating VBS by a decade (2004 vs. 2015) [35]. World transitions happen through a Secure Monitor Call (SMC) instruction, handled by firmware or a trusted OS like OP-TEE. The concept is similar to VBS -- two execution worlds with hardware isolation -- but the mechanism differs. TrustZone has a smaller attack surface (no hypervisor in the path) but is less flexible: it typically supports only two worlds with coarser granularity. TrustZone dominates mobile and embedded devices; Windows on ARM uses TrustZone as a foundation for VBS.
ARM TrustZone predates VBS by over a decade. The concept of hardware-enforced dual execution worlds was well established in the mobile/embedded world long before Microsoft applied the idea to desktop Windows. The insight was not the dual-world concept itself, but using the x86 hypervisor to implement it.
Linux
No production equivalent of VBS exists in mainline Linux. Linux relies on Mandatory Access Control (SELinux/AppArmor), container isolation (namespaces/cgroups), and VM-level isolation via SEV-SNP or TDX for cloud workloads. Google's pKVM (Protected KVM) in Android and ChromeOS is the closest parallel -- it uses the hypervisor to isolate a secure VM from the host kernel, similar in spirit to VTL1. Research projects have proposed similar intra-OS isolation for desktop Linux, but none has reached mainline. Linux's security philosophy favors defense-in-depth via many smaller mechanisms rather than a single architectural boundary.
Cross-Platform Comparison
| Dimension | Windows VBS | Intel SGX | AMD SEV-SNP | Intel TDX | ARM TrustZone |
|---|---|---|---|---|---|
| Isolation granularity | OS-level (VTL split) | Process-level enclaves | VM-level | VM-level | 2 worlds |
| Trusts the hypervisor? | Yes | No (excluded from TCB) | No | No | N/A |
| Memory encryption | No (isolation only) | Yes | Yes (full VM) | Yes (full VM) | Varies |
| Primary use case | Desktop/server OS | Legacy high-assurance | Cloud confidential VMs | Cloud confidential VMs | Mobile/IoT |
| Status (2025) | Active, expanding | Deprecated on consumer | GA on major clouds | Rolling out | Widely deployed |
| Known weakness | Rollback, side-channels | Foreshadow, deprecated | Physical attacks | Early deployment | Firmware attacks |
Every platform bets on a different trust anchor. VBS trusts the hypervisor. SEV-SNP trusts only the CPU and its encryption keys. SGX trusted the CPU itself -- until side-channel attacks proved that wrong. The uncomfortable question follows: what cannot VBS protect against?
The Limits: What VBS Cannot Protect Against
Every security boundary has an edge. VBS's edge is more nuanced than most defenders realize.
Attacking the Secure Kernel Directly
In August 2020, Saar Amar and Daniel King of Microsoft's own MSRC stood on the Black Hat stage and demonstrated something the community had feared: direct exploitation of securekernel.exe itself [36]. Using a custom fuzzer called Hyperseed, they discovered 10 vulnerabilities in the secure call interface within two weeks [37]. Memory corruption bugs in pool management and interface validation allowed VTL0 code to achieve code execution inside VTL1 -- breaking the isolation entirely.
All vulnerabilities were patched before disclosure. Microsoft has since added mitigations: improved KASLR, Control Flow Guard (CFG) in VTL1, and stricter input validation. But the attack proved that VTL1 is not invulnerable -- the secure call interface is a real attack surface, and any bug there defeats all VBS guarantees.
Pass-the-Challenge: The Protocol-Level Bypass
Oliver Lyak's "Pass-the-Challenge" research revealed a subtle limitation of Credential Guard [38]. Credential Guard prevents credential extraction -- but it cannot prevent credential use. An attacker with SYSTEM access can relay NTLM authentication challenges through lsaiso.exe, using the machine as an "NTLM oracle." The raw hash never leaves VTL1, but the attacker can still sign challenges on demand.
Side-Channel Attacks
Spectre and Meltdown demonstrated that speculative execution creates information leakage channels across any software-enforced boundary [39]. VTL0 and VTL1 share the same physical CPU, including caches, branch predictors, and TLBs. Microsoft has deployed microcode updates and software mitigations (IBRS, STIBP, retpolines) [40], but these reduce the risk rather than eliminating it. Complete elimination requires fundamentally different CPU designs that do not share microarchitectural state across trust boundaries.
The Formal Verification Gap
Neither Hyper-V nor securekernel.exe has a machine-checked correctness proof. The seL4 microkernel showed such proofs are possible, but only at a fraction of Hyper-V's size [41]. VBS's guarantees therefore rest on testing, fuzzing, and patching -- empirical confidence, not mathematical certainty.
Microsoft's Own Boundary
Microsoft explicitly states in its Security Servicing Criteria that an administrator with physical access is not a security boundary [42]. VBS defends against remote kernel exploitation and privilege escalation, but not against an administrator who can modify firmware, attach hardware debuggers, or perform DMA or evil-maid-style physical attacks; Microsoft's VBS guidance separately calls out IOMMU-backed DMA protection as a distinct hardware requirement [2].
This boundary declaration has practical consequences: it is why CVE-2024-21302 (Windows Downdate) required an opt-in fix rather than an automatic security update -- the attack requires admin privileges.
VBS is the strongest runtime isolation Windows has ever had. But it is empirically strong, not mathematically proven. And one attack discovered in 2024 threatened to undo it entirely.
The Arms Race: Rollback Attacks and the Ongoing Battle
In August 2024, Alon Leviev of SafeBreach Labs stood on the Black Hat stage and demonstrated something terrifying: he could silently roll back a "fully patched" Windows system to a state where all VBS protections were vulnerable -- using Windows Update itself.
"I found several vulnerabilities that let me develop Windows Downdate -- a tool to take over the Windows Update process to craft fully undetectable downgrades." -- Alon Leviev, SafeBreach Labs
The Windows Downdate attack (CVE-2024-21302) works by hijacking the Windows Update mechanism to replace current versions of securekernel.exe, ci.dll, and other VBS components with older, vulnerable versions [43]. The system continues to report itself as "fully patched" while running code with known, exploitable vulnerabilities [44]. The attack requires administrator privileges -- which, as we noted, Microsoft does not consider a security boundary.
Diagram source
sequenceDiagram
participant Attacker as Attacker (Admin in VTL0)
participant WU as Windows Update
participant FS as File System
participant Boot as Next Boot
Attacker->>WU: Hijack update process
WU->>FS: Replace securekernel.exe with old version
WU->>FS: Replace ci.dll with old version
Note over FS: System still reports "fully patched"
FS->>Boot: Boot with vulnerable binaries
Boot->>Boot: VBS runs with known vulnerabilities
Note over Boot: All previously patched bugs are re-exposed
Microsoft responded with KB5042562, publishing a SkuSiPolicy.p7b revocation policy to block loading of outdated VBS-related binaries [45]. A UEFI variable lock prevents firmware-level rollback. But deployment is opt-in and complex -- applying it incorrectly can cause boot failures. And the underlying mechanism (admin-level control over the update process) remains exploitable [46].
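The revocation policy's job reduces to a version-floor check at load time. A sketch (the version numbers below are illustrative, not the actual SkuSiPolicy contents):

```javascript
// Toy model of a SkuSiPolicy-style revocation check: boot-time code
// integrity refuses VBS binaries whose version falls below a floor
// recorded in a policy the running OS cannot silently rewrite.
const revocationPolicy = {
  "securekernel.exe": "10.0.22621.2861", // illustrative version floors
  "ci.dll": "10.0.22621.2861",
};

function versionAtLeast(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 4; i++) {
    if (pa[i] !== pb[i]) return pa[i] > pb[i];
  }
  return true; // equal versions pass
}

function mayLoad(name, version) {
  const floor = revocationPolicy[name];
  return !floor || versionAtLeast(version, floor);
}

console.log(mayLoad("securekernel.exe", "10.0.22621.3000")); // true: current
console.log(mayLoad("securekernel.exe", "10.0.19041.1000")); // false: rolled back
```

The catch the Downdate research exposed is exactly where this floor is stored: if an admin-controlled process can substitute the policy or the binaries it governs, the check itself can be downgraded.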
The weaponization of VBS itself followed shortly. At DEF CON 33 in August 2025, Akamai researchers demonstrated "BYOVE" (Bring Your Own Vulnerable Enclave) and "Mirage" -- techniques for running malware inside a VBS enclave, hidden from EDR and antimalware tools that cannot inspect VTL1 memory [47]. The very isolation that protects legitimate secrets can also protect malicious code.
The pattern is clear: VBS raises the cost of attack, attackers find creative bypasses, Microsoft hardens further. The question is no longer "is VBS breakable?" but "where does the research go next?"
Open Questions: Where Research Is Heading
The Secure Kernel is mature but not finished. Five open problems define the next decade of research.
Complete rollback prevention. KB5042562 is a start, but complete protection may require hardware-enforced monotonic version counters -- similar to ARM's anti-rollback fuse bits -- integrated into platform firmware [45]. Without hardware support, the administrator-who-controls-updates problem remains fundamentally unsolved.
Secure Kernel vulnerability discovery. Jonathan Jagt's 2025 MSc thesis at Radboud University documented the process of setting up a Secure Kernel debugging environment and analyzed patched security bugs to identify vulnerability patterns [49]. A key finding: the tooling for VTL1 research is scarce. Building a VTL1 debugging environment requires VMware-specific configurations and custom modifications that most researchers do not have access to. Better tooling would accelerate both offensive and defensive research.
VBS Enclave security model. The tension between protecting legitimate secrets and preventing malware evasion has no clean solution. Microsoft's hardening guidance addresses developer mistakes (TOCTOU races, pointer validation, reentrancy risks) [48], but the architectural problem -- that VTL1 isolation is equally useful to attackers and defenders -- requires a new approach to enclave attestation and monitoring.
Formal verification. Can we ever prove Hyper-V correct? The seL4 proof covers fewer than 10,000 lines of C and assembly [41]. Hyper-V is hundreds of thousands of lines. Current verification technology cannot scale to that size. Partial verification of critical subsystems (the SLAT enforcement logic, the secure call dispatcher) might be feasible and would meaningfully reduce the trusted computing base.
Side-channel elimination. Requires fundamentally different CPU designs. Current mitigations (microcode patches, partitioned caches, branch prediction barriers) reduce the leakage rate but cannot close the channel entirely while VTL0 and VTL1 share physical hardware [39]. Some academic designs propose physically separate execution units for different trust levels, but these are years from production.
The Windows Secure Kernel is the most significant architectural change to Windows security since the NT reference monitor. It does not make Windows invulnerable -- no technology does. But it changed what "kernel compromise" means.
Diagram source
gantt
title Windows Kernel Security Evolution
dateFormat YYYY
axisFormat %Y
section Gen 0
NT Kernel (flat trust) :1993, 2005
section Gen 1
PatchGuard (KPP) :2005, 2012
KMCS (driver signing) :2007, 2012
DEP / ASLR / SMEP :2004, 2012
section Gen 2
UEFI Secure Boot :2012, 2015
Measured Boot + TPM :2012, 2015
section Gen 3
VBS + Secure Kernel :2015, 2026
Credential Guard :2015, 2026
HVCI / Memory Integrity :2015, 2026
System Guard Attestation :2018, 2026
Secured-core PCs :2019, 2026
section Gen 3.5
VBS Enclaves :2024, 2026
Modern Windows runs all three generations simultaneously -- PatchGuard still watches for kernel tampering, Secure Boot still verifies the boot chain, and VBS adds hardware-enforced isolation on top. Newer defenses supplement rather than replace earlier ones.
Theory is valuable; practice pays the bills. Here is how to enable, verify, and troubleshoot VBS on your systems.
Hardware Requirements
VBS requires: a 64-bit CPU with hardware virtualization (Intel VT-x or AMD-V), Second Level Address Translation (Intel EPT or AMD NPT), TPM 2.0, and UEFI firmware with Secure Boot [2]. For optimal HVCI performance, a CPU with hardware mode-based execution control is recommended: Intel Kaby Lake (7th Gen) or newer for MBEC, or AMD Zen 2 or newer for GMET [23].
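The checklist above can be sketched as a simple capability check. This is an illustrative model, not a real Windows API -- the flag names below are invented for the example, and the MBEC/GMET fallback behavior reflects the performance note above.

```javascript
// Hypothetical sketch of the VBS hardware prerequisite check.
// Capability flags are illustrative, not a real Windows API surface.
const hardware = {
  x64: true,
  virtualization: true, // Intel VT-x or AMD-V
  slat: true,           // Intel EPT or AMD NPT
  tpm20: true,
  uefiSecureBoot: true,
  mbecOrGmet: false,    // HVCI acceleration (Kaby Lake+ / Zen 2+)
};

// The first five are hard requirements; MBEC/GMET only affects performance.
const required = ["x64", "virtualization", "slat", "tpm20", "uefiSecureBoot"];
const missing = required.filter(f => !hardware[f]);

if (missing.length === 0) {
  console.log("VBS prerequisites met.");
  console.log(hardware.mbecOrGmet
    ? "HVCI can use hardware MBEC/GMET enforcement."
    : "HVCI will fall back to slower software-emulated enforcement.");
} else {
  console.log("VBS blocked; missing:", missing.join(", "));
}
```

The point of modeling MBEC/GMET separately: its absence does not block VBS, it only changes which enforcement path HVCI takes.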
Enabling VBS
VBS can be enabled through:
- Group Policy: Computer Configuration > Administrative Templates > System > Device Guard > Turn On Virtualization Based Security
- Intune/MDM: Use the DeviceGuard CSP or endpoint security policies
- Registry: Set HKLM\SYSTEM\CurrentControlSet\Control\DeviceGuard\EnableVirtualizationBasedSecurity to 1
HVCI/Memory Integrity can be enabled separately via Windows Security > Device Security > Core Isolation > Memory Integrity.
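For scripting or auditing, it helps to see the registry values the Group Policy setting maps to. The sketch below lists them as data; the paths and value meanings follow Microsoft's DeviceGuard documentation, but verify against current docs before deploying at scale.

```javascript
// Registry values behind the "Turn On Virtualization Based Security" policy,
// expressed as a data table. Verify names/values against current Microsoft docs.
const base = "HKLM\\SYSTEM\\CurrentControlSet\\Control\\DeviceGuard";
const settings = [
  { path: base, name: "EnableVirtualizationBasedSecurity", value: 1,
    meaning: "Turn VBS on" },
  { path: base, name: "RequirePlatformSecurityFeatures", value: 1,
    meaning: "1 = Secure Boot; 3 = Secure Boot + DMA protection" },
  { path: base + "\\Scenarios\\HypervisorEnforcedCodeIntegrity",
    name: "Enabled", value: 1, meaning: "HVCI / Memory Integrity" },
];

for (const s of settings) {
  console.log(`${s.path}\\${s.name} = ${s.value}  (${s.meaning})`);
}
```

Expressing the policy as data rather than imperative `reg add` commands makes it easy to feed the same table into Intune, a DSC configuration, or a compliance scanner.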
Verifying VBS Status
// Simulates Get-CimInstance -ClassName Win32_DeviceGuard
// -Namespace root/Microsoft/Windows/DeviceGuard
const vbsStatus = {
VirtualizationBasedSecurityStatus: 2, // 0=Not enabled, 1=Enabled but not running, 2=Running
RequiredSecurityProperties: [1, 2], // 1=Hypervisor support, 2=Secure Boot
AvailableSecurityProperties: [1, 2, 3, 5, 6], // What hardware supports
SecurityServicesConfigured: [1, 2], // 1=CredentialGuard, 2=HVCI
SecurityServicesRunning: [1, 2], // Which services are active
};
const statusNames = { 0: "Not enabled", 1: "Enabled (not running)", 2: "Running" };
const serviceNames = { 1: "Credential Guard", 2: "HVCI / Memory Integrity", 3: "System Guard" };
console.log("VBS Status:", statusNames[vbsStatus.VirtualizationBasedSecurityStatus]);
console.log("\nConfigured Security Services:");
vbsStatus.SecurityServicesConfigured.forEach(s =>
console.log(" -", serviceNames[s] || "Unknown (" + s + ")")
);
console.log("\nRunning Security Services:");
vbsStatus.SecurityServicesRunning.forEach(s =>
console.log(" -", serviceNames[s] || "Unknown (" + s + ")")
);
console.log("\nTo check on your system, run in PowerShell:");
console.log("Get-CimInstance -ClassName Win32_DeviceGuard -Namespace root/Microsoft/Windows/DeviceGuard");
Press Run to execute.
You can also verify VBS status via:
- msinfo32.exe: Look for "Virtualization-based security" in the System Summary
- Windows Security app: Device Security > Core Isolation details
Troubleshooting common VBS issues
Driver compatibility: Some older drivers violate W^X policy and fail to load with HVCI enabled. Check the Windows Event Log (CodeIntegrity events) for blocked drivers. Microsoft's Hardware Lab Kit (HLK) provides HVCI compatibility testing.
Performance impact: VBS/HVCI adds roughly 5-10% overhead in CPU-bound workloads, with gaming benchmarks showing the largest hits [50]. On modern CPUs with MBEC/GMET, the overhead is lower.
Credential Guard and NLA: Network Level Authentication can fail if Credential Guard is enabled but the domain controller does not support the required Kerberos extensions. Ensure domain controllers are running Windows Server 2016 or later.
Cannot enable VBS: Verify that virtualization is enabled in BIOS/UEFI settings, Secure Boot is on, and TPM 2.0 is present and enabled. Some older systems lack SLAT support.
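In the same spirit as the Win32_DeviceGuard simulation above, here is a sketch of triaging blocked drivers from the CodeIntegrity log. The sample events and file paths are invented; the event IDs (3076 = audit-mode "would be blocked", 3077 = blocked) follow the CodeIntegrity operational log conventions in Microsoft's documentation, but confirm them for your Windows build.

```javascript
// Simulates filtering Microsoft-Windows-CodeIntegrity/Operational events
// for drivers blocked by the code integrity policy. Sample data is invented.
const events = [
  { id: 3077, file: "\\Windows\\System32\\drivers\\oldvendor.sys" }, // blocked
  { id: 3076, file: "\\Windows\\System32\\drivers\\legacy.sys" },    // audit only
  { id: 5038, file: null },                                          // unrelated
];

const blocked = events.filter(e => e.id === 3077).map(e => e.file);
console.log("Drivers blocked by code integrity policy:");
blocked.forEach(f => console.log(" -", f));

console.log("\nTo query a real system, run in PowerShell:");
console.log('Get-WinEvent -LogName "Microsoft-Windows-CodeIntegrity/Operational" | Where-Object Id -eq 3077');
```

Audit-mode events (3076) are worth collecting before enforcing HVCI fleet-wide: they tell you which drivers would break without actually breaking them.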
The Windows Secure Kernel is the most important Windows security feature most people have never heard of. It does not make the headlines that zero-days do. But it quietly changed the fundamental question of Windows security -- from "can we keep attackers out of the kernel?" to "what can we protect even after they get in?" The secrets behind the VTL1 wall remain safe. At least until the next chapter of the arms race.
Frequently Asked Questions
Does VBS make my PC slow?
VBS and HVCI add roughly 5-10% overhead in CPU-bound workloads, with gaming seeing the most noticeable impact [50]. For typical business usage (email, documents, web browsing), the impact is negligible. Modern CPUs with Intel MBEC (Kaby Lake / 7th Gen+) or AMD GMET (Zen 2+) significantly reduce this overhead through hardware-accelerated W^X enforcement.
Does the Secure Kernel replace the NT kernel?
No. securekernel.exe coexists with ntoskrnl.exe. The NT kernel handles all general OS operations -- process management, file systems, networking, device drivers. The Secure Kernel handles only security-critical functions: credential isolation, code integrity enforcement, enclave management. They run in parallel in separate VTLs.
Does VBS protect against all kernel exploits?
No. VBS protects specific assets (credentials, code integrity, application secrets) from a compromised kernel. The NT kernel itself can still be exploited -- an attacker can still gain SYSTEM access, install rootkits in VTL0, and control the standard OS environment. What they cannot do is access VTL1-protected secrets or load unsigned kernel drivers (with HVCI enabled).
Do I need Hyper-V VMs to use VBS?
No. VBS uses the Hyper-V hypervisor, not traditional VMs. You can run VBS without creating any virtual machines. The hypervisor runs as a thin layer beneath both VTLs to enforce memory isolation. If you also use Hyper-V VMs, VBS coexists with them.
Does Credential Guard make passwords unnecessary?
No. Credential Guard protects stored credentials (NTLM hashes, Kerberos TGTs) from extraction, but it does not eliminate the need for strong authentication. It does not protect against phishing, password reuse, or credential relay attacks (as demonstrated by Pass-the-Challenge [38]). Credential Guard is one layer in a defense-in-depth strategy.
Can an attacker disable VBS once it is running?
Not without rebooting. And with Secure Boot and a UEFI lock, VBS cannot be easily disabled even across reboots. However, the Windows Downdate attack demonstrated that VBS components can be silently downgraded to vulnerable versions without disabling VBS itself [43]. Deploying KB5042562 rollback protection mitigates this risk.
Is VBS the same as running a VM?
No. VBS creates isolated execution environments within a single OS instance, not separate VMs. VTL0 and VTL1 share the same OS, the same desktop, the same processes (with the exception of trustlets in VTL1). The isolation is at the memory level via SLAT, not at the OS level. It is more like having a secure safe inside your house than having two separate houses.
Study guide
Key terms
- VBS
- Virtualization-Based Security -- uses Hyper-V hypervisor to create hardware-isolated Virtual Trust Levels within a single OS instance
- VTL0
- Normal World -- where the standard NT kernel, drivers, and applications run
- VTL1
- Secure World -- where securekernel.exe and security-critical trustlets like lsaiso.exe run, isolated by SLAT
- SLAT
- Second Level Address Translation (Intel EPT / AMD NPT) -- hardware feature enabling hypervisor-enforced memory isolation between VTLs
- HVCI
- Hypervisor-Protected Code Integrity -- enforces W^X and code signing from VTL1
- Credential Guard
- VBS feature isolating NTLM hashes and Kerberos TGTs in VTL1 via lsaiso.exe
- BYOVD
- Bring Your Own Vulnerable Driver -- attack using signed-but-vulnerable drivers to bypass code signing
- PatchGuard
- Software-only kernel integrity monitor that runs at Ring 0 -- same level as attackers
- W^X
- Write XOR Execute -- memory policy preventing pages from being both writable and executable
- Trustlet
- A process running in VTL1 Isolated User Mode, protected from all VTL0 access
Comprehension questions
Why can't PatchGuard provide the same security guarantees as VBS?
PatchGuard runs at Ring 0 -- the same privilege level as the attackers it monitors. Any Ring 0 code can find and disable PatchGuard given sufficient effort. VBS uses the hypervisor (Ring -1) to enforce isolation from a higher privilege level.
What is the fundamental difference between VBS and AMD SEV-SNP?
VBS trusts the hypervisor and uses it to protect OS components from a compromised kernel. SEV-SNP distrusts the hypervisor and encrypts VM memory to protect guests from a compromised hypervisor. They address different threat models.
Why can't Credential Guard prevent Pass-the-Challenge attacks?
Credential Guard isolates raw credentials in VTL1 but must provide an interface for using them (via lsaiso.exe). Pass-the-Challenge relays authentication challenges through this interface without extracting the secret -- exploiting the necessary API rather than breaking the isolation.
What would it take to formally verify Hyper-V's isolation guarantees?
seL4 was verified for ~10K lines of C. Hyper-V is hundreds of thousands of lines. Current formal verification tools cannot scale to this size. Partial verification of critical subsystems (SLAT enforcement, secure call dispatch) might be feasible.
References
- Mimikatz. https://github.com/gentilkiwi/mimikatz - Credential extraction tool that motivated Credential Guard ↩
- Virtualization-based Security (VBS). https://learn.microsoft.com/en-us/windows-hardware/design/device-experiences/oem-vbs - Official VBS architecture documentation ↩
- (1992). Inside Windows NT. Microsoft Press. ISBN 978-1-55615-481-2. - Original NT security architecture reference ↩
- (2005). Rootkits: Subverting the Windows Kernel. Addison-Wesley. - FU rootkit and DKOM techniques ↩
- Sysinternals. https://en.wikipedia.org/wiki/Sysinternals - History of Sysinternals, Winternals, and its founders ↩
- Kernel Patch Protection. https://en.wikipedia.org/wiki/Kernel_Patch_Protection - PatchGuard history and mechanism ↩
- Kernel-Mode Code Signing Policy. https://learn.microsoft.com/en-us/windows-hardware/drivers/install/kernel-mode-code-signing-policy--windows-vista-and-later- - Mandatory kernel driver signing on x64 Vista+ ↩
- Data Execution Prevention. https://learn.microsoft.com/en-us/windows/win32/memory/data-execution-prevention - DEP NX bit documentation ↩
- Overview of Threat Mitigations in Windows 10. https://learn.microsoft.com/en-us/windows/security/threat-protection/overview-of-threat-mitigations-in-windows-10 - ASLR and other mitigations ↩
- InfinityHook. https://github.com/everdox/InfinityHook - PatchGuard bypass via ETW callbacks ↩
- Microsoft recommended driver block rules. https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/microsoft-recommended-driver-block-rules - BYOVD technique and vulnerable driver blocklist ↩
- Return-oriented programming. https://en.wikipedia.org/wiki/Return-oriented_programming - ROP technique bypasses DEP ↩
- Address space layout randomization. https://en.wikipedia.org/wiki/Address_space_layout_randomization - ASLR entropy limitations ↩
- Secure Boot Overview. https://learn.microsoft.com/en-us/windows-hardware/design/device-experiences/oem-secure-boot - UEFI Secure Boot chain of trust ↩
- Trusted Boot. https://learn.microsoft.com/en-us/windows/security/operating-system-security/system-security/trusted-boot - Measured Boot via TPM PCR measurements ↩
- Virus:Win32/Alureon.A. https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=Virus:Win32/Alureon.A - Microsoft threat description for the Alureon/TDL4 bootkit family ↩
- (2023). BlackLotus UEFI bootkit: Myth confirmed. https://www.welivesecurity.com/2023/03/01/blacklotus-uefi-bootkit-myth-confirmed/ - First in-the-wild UEFI bootkit bypassing Secure Boot ↩
- x86 virtualization. https://en.wikipedia.org/wiki/X86_virtualization - VT-x and AMD-V introduction timeline and hardware virtualization background ↩
- (2015). Brad Anderson Ignite 2015 Keynote. https://news.microsoft.com/speeches/brad-anderson-ignite-2015/ - VBS/Device Guard/Credential Guard announcement ↩
- (2015). Battle of the SKM and IUM: How Windows 10 Rewrites OS Architecture. https://github.com/tpn/pdfs/blob/master/Battle%20of%20SKM%20and%20IUM%20-%20How%20Windows%2010%20Rewrites%20OS%20Architecture%20-%20Alex%20Ionescu%20-%202015%20(blackhat2015).pdf - First public technical analysis of VBS internals ↩
- (2016). Analysis of the Attack Surface of Windows 10 Virtualization-Based Security. https://infocondb.org/con/black-hat/black-hat-usa-2016/analysis-of-the-attack-surface-of-windows-10-virtualization-based-security - First independent VBS attack surface audit ↩
- Credential Guard Overview. https://learn.microsoft.com/en-us/windows/security/identity-protection/credential-guard/ - Credential Guard isolation of LSASS secrets in VTL1 ↩
- Enable virtualization-based protection of code integrity. https://learn.microsoft.com/en-us/windows/security/hardware-security/enable-virtualization-based-protection-of-code-integrity - HVCI / Memory Integrity documentation ↩
- (2024). Securely design your applications and protect your sensitive data with VBS enclaves. https://techcommunity.microsoft.com/blog/windowsosplatform/securely-design-your-applications-and-protect-your-sensitive-data-with-vbs-encla/4179543 - VBS Enclaves announcement for third-party developers ↩
- VBS Enclaves. https://learn.microsoft.com/en-us/windows/win32/trusted-execution/vbs-enclaves - VBS Enclaves developer documentation ↩
- VBS Enclave Tooling. https://github.com/microsoft/VbsEnclaveTooling - VBS Enclave SDK with Rust crate and VS integration ↩
- VBS Enclaves Developer Guide. https://learn.microsoft.com/en-us/windows/win32/trusted-execution/vbs-enclaves-dev-guide - VBS Enclave development guide ↩
- (2018). Introducing Windows Defender System Guard runtime attestation. https://www.microsoft.com/en-us/security/blog/2018/04/19/introducing-windows-defender-system-guard-runtime-attestation/ - System Guard Runtime Attestation announcement ↩
- How hardware-based root of trust helps protect Windows. https://learn.microsoft.com/en-us/windows/security/hardware-security/how-hardware-based-root-of-trust-helps-protect-windows - System Guard and hardware root of trust ↩
- Secured-core Windows 11 PCs. https://www.microsoft.com/en-us/windows/business/windows-11-secured-core-computers - Secured-core PC program and requirements ↩
- Intel Software Guard Extensions. https://en.wikipedia.org/wiki/Intel_Software_Guard_Extensions - SGX history, deprecation, Foreshadow vulnerability ↩
- (2018). Foreshadow: Breaking the Virtual Memory Abstraction with Transient Out-of-Order Execution. https://foreshadowattack.eu/ - L1 Terminal Fault attack on Intel SGX ↩
- AMD SEV-SNP: Strengthening VM Isolation with Integrity Protection and More. https://docs.amd.com/v/u/en-US/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more - AMD SEV/SEV-ES/SEV-SNP VM memory encryption ↩
- Intel Trust Domain Extensions. https://www.intel.com/content/www/us/en/developer/tools/trust-domain-extensions/overview.html - Intel TDX hardware-isolated Trust Domains ↩
- ARM TrustZone Technology. https://developer.arm.com/documentation/102418/latest/ - ARM TrustZone Secure/Normal World partitioning ↩
- (2020). Breaking VSM by Attacking SecureKernel. https://github.com/microsoft/MSRC-Security-Research/blob/master/presentations/2020_08_BlackHatUSA/Breaking_VSM_by_Attacking_SecureKernel.pdf - 10 vulnerabilities found in securekernel.exe via fuzzing ↩
- Saar Amar Publications. https://saaramar.github.io/Publications/ - Confirms Breaking VSM presentation ↩
- (2022). PassTheChallenge. https://raw.githubusercontent.com/ly4k/PassTheChallenge/main/README.md - Project README for Oliver Lyak's Pass-the-Challenge Credential Guard bypass research and tooling ↩
- (2019). Spectre Attacks: Exploiting Speculative Execution. https://spectreattack.com/spectre.pdf - Spectre speculative execution side-channel attacks ↩
- ADV180002: Guidance to mitigate speculative execution side-channel vulnerabilities. https://msrc.microsoft.com/update-guide/vulnerability/ADV180002 - Microsoft Spectre/Meltdown mitigation guidance ↩
- (2009). seL4: Formal Verification of an OS Kernel. https://doi.org/10.1145/1629575.1629596 - seL4 formally verified microkernel ↩
- Microsoft Security Servicing Criteria. https://www.microsoft.com/en-us/msrc/windows-security-servicing-criteria - Admin-to-kernel is not a security boundary ↩
- (2024). Downgrade Attacks Using Windows Updates. https://www.safebreach.com/blog/downgrade-attacks-using-windows-updates/ - Windows Downdate rollback attack on VBS components ↩
- CVE-2024-21302. https://nvd.nist.gov/vuln/detail/CVE-2024-21302 - VBS rollback elevation-of-privilege vulnerability ↩
- (2024). KB5042562: Guidance for blocking rollback of VBS security updates. https://support.microsoft.com/en-us/topic/guidance-for-blocking-rollback-of-virtualization-based-security-vbs-related-security-updates-b2e7ebf4-f64d-4884-a390-38d63171b8d3 - Microsoft VBS rollback mitigation guidance ↩
- (2024). Update on Windows Downdate Downgrade Attacks. https://www.safebreach.com/blog/update-on-windows-downdate-downgrade-attacks/ - Updated Windows Downdate research including DSE bypass ↩
- (2025). Virtualized (In)Security: How Attackers Can Weaponize VBS Enclaves. https://www.akamai.com/blog/security-research/virtualized-insecurity-attackers-weaponize-vbs-enclaves - VBS enclave weaponization via BYOVE and Mirage ↩
- (2025). Everything Old Is New Again: Hardening the Trust Boundary of VBS Enclaves. https://techcommunity.microsoft.com/blog/microsoft-security-blog/everything-old-is-new-again-hardening-the-trust-boundary-of-vbs-enclaves/4386961 - VBS enclave trust boundary hardening guidance ↩
- (2025). Analysis of Windows Secure Kernel Security Bugs. https://www.cs.ru.nl/masters-theses/2025/J_Jagt___Analysis_of_Windows_Secure_Kernel_security_bugs.pdf - Academic analysis of Secure Kernel vulnerabilities ↩
- (2023). Tested: Default Windows VBS Setting Slows Games Up to 10%, Even on RTX 4090. https://www.tomshardware.com/news/windows-vbs-harms-performance-rtx-4090 - Tom's Hardware benchmarks show roughly 5% average performance impact, with larger hits in some games ↩
- (2024). WindowsDowndate Tool. https://github.com/SafeBreach-Labs/WindowsDowndate - Open-source Windows downgrade attack tool ↩
- CVE-2024-21302 Security Update. https://msrc.microsoft.com/update-guide/vulnerability/CVE-2024-21302 - Official Microsoft advisory for VBS rollback vulnerability ↩
- (2024). Windows Downdate tool lets you unpatch Windows systems. https://www.bleepingcomputer.com/news/microsoft/windows-downdate-tool-lets-you-unpatch-windows-systems/ - Coverage of Windows Downdate tool and BH2024 presentation ↩
- (2021). Windows Internals, 7th Edition. Microsoft Press. ISBN 978-0-13-546240-9. - Canonical VBS/Secure Kernel textbook reference ↩