# Protected Process Light: When the Administrator Isn't Enough

> How a single byte in EPROCESS encodes a signer lattice that denies SYSTEM-integrity admins the right to read LSASS -- and why every public bypass since 2018 attacks the same structural seam.

*Published: 2026-05-12*
*Canonical: https://paragmali.com/blog/protected-process-light-when-the-administrator-isnt-enough*
*License: CC BY 4.0 - https://creativecommons.org/licenses/by/4.0/*

---
<TLDR>
**Windows Protected Process Light (PPL) re-asks the question of who can touch whom one level below the token model.** A single byte in `EPROCESS` packs a process's protection type, audit bit, and signer rung; the kernel's lattice check inside `NtOpenProcess` rejects memory-read attempts from below the target's rung even when the caller is SYSTEM with `SeDebugPrivilege` enabled. Every public bypass since 2018 lives in one structural class -- the kernel verifies the channel by which code enters a PPL, not the behaviour of that code once mapped -- which is why Microsoft classifies PPL as defense in depth rather than a security boundary, and why Credential Guard / `LsaIso.exe` is its necessary VBS-anchored companion.
</TLDR>

## 1. The Hook -- Mimikatz on a Protected Box

A red-team operator has done everything right. The shell is SYSTEM-integrity. `SeDebugPrivilege` is enabled in the token. `whoami /priv` shows every privilege Windows defines. The operator types `mimikatz.exe`, then `privilege::debug` -- *OK*. Then `sekurlsa::logonpasswords` -- and Mimikatz answers:

```
ERROR kuhl_m_sekurlsa_acquireLSA ; Handle on memory : (0x00000005) Access is denied
```

The mechanism that just denied them is not a privilege check at all. It is not an ACL decision. It is not the integrity-level mediator. itm4n recreated exactly this failure in 2021 against a vanilla Windows install with one registry value set [@itm4n-runasppl]. The error code `0x00000005` is `ERROR_ACCESS_DENIED` -- the Win32 surface that `GetLastError` exposes for the kernel's NTSTATUS `STATUS_ACCESS_DENIED = 0xC0000022`. The kernel returns the NTSTATUS out of `NtOpenProcess` before the security descriptor of `lsass.exe` has been consulted; `RtlNtStatusToDosError` then maps it to the Win32 `0x5` that surfaces in `kuhl_m_sekurlsa.c`.
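The two error surfaces are easy to conflate, and the NTSTATUS side at least decodes mechanically. A small sketch of the standard NTSTATUS bit layout (two severity bits, a customer bit, a reserved bit, twelve facility bits, sixteen code bits) shows why `0xC0000022` reads as an error with code `0x22` in the null facility -- while the Win32 `0x5` comes from `RtlNtStatusToDosError`'s lookup table, not from any bit arithmetic:

```js
// Decode an NTSTATUS value per the standard layout:
// bits 31-30 severity, bit 29 customer, bit 28 reserved (N),
// bits 27-16 facility, bits 15-0 code.
function decodeNtStatus(status) {
  const severityNames = ['Success', 'Informational', 'Warning', 'Error'];
  return {
    severity: severityNames[status >>> 30],
    customer: (status >>> 29) & 1,      // 0 = Microsoft-defined
    facility: (status >>> 16) & 0x0fff, // 0 = FACILITY_NULL (core kernel)
    code: status & 0xffff
  };
}

// STATUS_ACCESS_DENIED: severity 'Error', facility 0, code 0x22.
console.log(decodeNtStatus(0xC0000022));
```

The Win32 `ERROR_ACCESS_DENIED = 5` has no structural relationship to those bits; it is whatever the kernel's status-to-DOS mapping table says it is.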

<Definition term="Protected Process Light (PPL)">
A kernel-enforced gating model that decorates a process with a *protection level* -- a structured byte combining a type field, an audit bit, and a signer rung -- and rejects `OpenProcess` requests from callers whose protection level is below the target's, regardless of token privileges or security-descriptor ACLs.
</Definition>

Picture the scenario concretely. A 2026 red-team engagement against a hardened Windows 11 24H2 endpoint. `RunAsPPL` audit mode is on by default, the Windows 11 22H2 rollout having extended the audit-by-default posture to consumer SKUs [@learn-runasppl]. A third-party EDR daemon is already running, signed at the Antimalware rung via the vendor's Microsoft Virus Initiative enrollment. The operator owns local administrator. The operator has SYSTEM. The operator holds every privilege Windows defines. They still cannot read a single byte of LSASS memory.

The denial trace, walked carefully, looks like this. Mimikatz calls `OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, FALSE, lsass_pid)`. The Win32 thunk lands on `NtOpenProcess`, which dispatches to the object-manager callback `PspProcessOpen`. That callback calls `PspCheckForInvalidAccessByProtection`, which calls `RtlTestProtectedAccess` against the caller's `EPROCESS.Protection` byte and the target's. The lattice test fails. The kernel strips `PROCESS_VM_READ` from the requested mask. With the surviving limited mask, the request continues into `SeAccessCheck`, but Mimikatz never wanted the limited mask; it wanted to read memory. Either way, limited handle or outright failure, Mimikatz lands on exactly the path that produces `0x00000005` in `kuhl_m_sekurlsa.c`<Sidenote>The relevant commit is `fe4e98405589e96ed6de5e05ce3c872f8108c0a0`, cited by itm4n as the source for the exact failure path that yields `0x00000005` [@mimikatz-sekurlsa].</Sidenote>.

<Mermaid caption="The Mimikatz `OpenProcess` denial trace -- the protection check fires before `SeAccessCheck` ever runs.">
sequenceDiagram
    participant Mim as Mimikatz (SYSTEM, SeDebugPrivilege)
    participant K32 as kernel32 / OpenProcess
    participant NtOP as NtOpenProcess
    participant PsPO as PspProcessOpen
    participant CHK as PspCheckForInvalidAccessByProtection
    participant Lat as RtlTestProtectedAccess
    participant SAC as SeAccessCheck

    Mim->>K32: OpenProcess(PROCESS_VM_READ, lsass)
    K32->>NtOP: syscall NtOpenProcess
    NtOP->>PsPO: object-manager callback
    PsPO->>CHK: check caller.Protection vs target.Protection
    CHK->>Lat: lattice rule (signer rungs)
    Lat-->>CHK: full mask denied
    CHK-->>PsPO: strip PROCESS_VM_READ
    PsPO->>SAC: residual mask (limited only)
    SAC-->>NtOP: limited handle (read denied)
    NtOP-->>Mim: STATUS_ACCESS_DENIED (NTSTATUS 0xC0000022, Win32 GetLastError = 5)
</Mermaid>

> **Note:** If every privilege Windows defines is held by the caller, what is doing the denying? The answer is a kernel structure that the token model does not see and the security descriptor does not influence -- a byte in `EPROCESS` named `Protection`, mediating a lattice the access check consults *before* it ever asks `SeAccessCheck` about privileges.

This is not a workaround pattern. It is a new dimension. The token model is unchanged. The integrity level is unchanged. The security descriptor on `lsass.exe` is unchanged. What changed is that the kernel now answers a question it did not ask before: *what kind of trust does the caller have to manipulate the address space of the callee?*

<PullQuote>
PPL re-asks the question of who can touch whom one level below the token model.
</PullQuote>

That mechanism has a name (Protected Process Light), an encoding (a single `UCHAR`), and a history that does not begin where you would expect. To understand the byte, we have to understand why Microsoft built it in the first place. The next section starts where the history starts: a 2006 Microsoft whitepaper about Hollywood.

## 2. Historical Origins -- Vista, DRM, and the First Protected Process

The kernel mechanism that today denies admins access to LSASS was invented in 2006 to keep Hollywood happy. The cover page of Microsoft's `process_vista.doc` whitepaper opens with a sentence almost no one quotes today:

> The Microsoft Windows Vista operating system introduces a new type of process known as a protected process to enhance support for Digital Rights Management functionality in Windows Vista.

The whitepaper was published November 27, 2006, two months before Vista's GA, and it is the architectural seed of the byte we will be staring at for the rest of this article [@vista-process-doc]. The motivation was not credential theft. It was HD-DVD and Blu-ray content protection. Studio licensing agreements required that even an administrator on the local machine could not read the audio device graph isolation host's memory while protected content was playing. The Protected Media Path required a kernel-enforced barrier between admin user-mode and the media pipeline.

<Definition term="Protected Media Path (PMP)">
The Vista-era set of components that decrypt and render high-definition video and audio content under DRM. PMP requires kernel-enforced isolation of `audiodg.exe` and a small set of related processes so that local administrators cannot dump intermediate content keys from process memory.
</Definition>

The Vista design was minimal. A single bit in `EPROCESS` marks a process as protected. At `NtCreateUserProcess`, the kernel parses the main image's Authenticode signature and looks for a specific Microsoft EKU OID that only the PMP signing root can issue [@forshaw-2018-10]. If the EKU is present and the chain resolves to that root, the kernel flips the bit. On every subsequent `NtOpenProcess` against that process, the kernel strips a fixed set of access rights from the mask, no matter who is asking.

Alex Ionescu, then a Windows internals researcher and now CrowdStrike's Chief Technology Innovation Officer, enumerated the denials in 2007 [@ionescu-pp-bad-idea]:

> A typical process cannot perform operations such as the following on a protected process: Inject a thread into a protected process; Access the virtual memory of a protected process; Debug an active protected process; Duplicate a handle from a protected process; Change the quota or working set of a protected process.

Five denials. One bit. One certificate root. Ionescu's same essay, titled "Why Protected Processes Are A Bad Idea," made a structural argument that aged well: putting a DRM mechanism in the kernel is a category error. The mechanism is too narrow for non-DRM use because the only certificate accepted is Microsoft's PMP signing root, and the only operations gated are the ones Hollywood cared about. Third parties cannot opt in, and Microsoft itself cannot graduate the level of trust.

<Sidenote>Ionescu's 2007 critique remains worth reading on its own merits. The argument that DRM-shaped kernel features tend to be reused for security mitigations and that this reuse changes their threat-model semantics is exactly what plays out over the next seven years [@ionescu-pp-bad-idea].</Sidenote>

The seven-year pause is its own story. Vista shipped, Vista was followed by Windows 7, and Windows 7 was followed by Windows 8 -- and through all of it, the access-check primitive that protects `audiodg.exe` from administrators remained a DRM artefact. The primitive existed; the *graduated trust dimension* did not. Two parallel failures pushed Microsoft toward widening the encoding.

The first was Mimikatz. Benjamin Delpy's tool was first released in May 2011 and refined through 2013 [@mimikatz-wikipedia]; it made it trivial for an administrator to extract NTLM hashes and Kerberos session keys from `lsass.exe`. The countermeasure of restricting `SeDebugPrivilege` was useless; an attacker who has SYSTEM has every privilege. What Mimikatz exploited was a primitive gap: the kernel had no way to say "lsass is protected against administrators but reachable from privileged Microsoft services."

The second was Mateusz Jurczyk's CSRSS jailbreak of Windows 8 RT in 2013. Jurczyk (who writes as `j00ru`) catalogued more than seventy Win32k system calls that the kernel guarded with the pattern `if (PsGetCurrentProcess() != gpepCsrss) return STATUS_ACCESS_DENIED;` [@j00ru-1393]. That gating mechanism worked only as long as nobody could inject code into `csrss.exe`. On Windows 8 RT, an attacker who could inject into `csrss.exe` could bypass Microsoft's locked-down Surface RT shell. Ionescu later observed that "In Windows 8.1 RT, this jailbreak is 'fixed', by virtue that code can no longer be injected into Csrss.exe for the attack" [@ionescu-part2]. The fix made `csrss.exe` a PPL at the `WinTcb` rung, and the same machinery was generalised to `lsass.exe` and the Antimalware tier.

> **Note:** Mimikatz proved Microsoft needed a graduated trust dimension for `lsass.exe`. The j00ru CSRSS jailbreak proved Microsoft needed it for `csrss.exe` too. The same widening of the encoding answered both.

<Mermaid caption="Vista's single-bit protection versus Windows 8.1's structured byte -- one boolean became a Type / Audit / Signer triple.">
flowchart LR
    subgraph Vista2006[Vista 2006 -- single bit]
        V1[EPROCESS protected = 0 or 1]
        V2[Certificate root: PMP only]
        V3[Access denials: hardcoded 5-tuple]
    end
    subgraph Win81[Windows 8.1 -- _PS_PROTECTION byte]
        W1[Type: 3 bits]
        W2[Audit: 1 bit]
        W3[Signer rung: 4 bits]
        W4[Certificate roots: per-EKU sub-OIDs]
        W5[Access denials: lattice over signer]
    end
    V1 --> W1
    V2 --> W4
    V3 --> W5
</Mermaid>

<Aside label="Why this matters historically">
The DRM-to-credentials repurposing is not unique to PPL. The same pattern shows up in HVCI (originally a Hyper-V kernel-mode integrity feature, later repurposed for general code-integrity enforcement) and in Trustlets (originally an enterprise feature for Credential Guard, later generalised). Kernel mechanisms born in one threat model rarely stay confined to it.
</Aside>

Microsoft already had the access-check primitive. What it didn't have, in 2007, was a way to ask "how much trust does this process carry?" The fix would not arrive until Windows 8.1 in October 2013, and when it arrived, it would fit in a single byte.

## 3. `_PS_PROTECTION` -- The Single-Byte Encoding

The 8.1 fix is so compact it fits in a single byte. Ionescu's Part 1 of the "Evolution of Protected Processes" series, published November 22, 2013, gives the kernel structure verbatim [@ionescu-part1]:

```c
typedef struct _PS_PROTECTION {
    union {
        UCHAR Level;
        struct {
            UCHAR Type   : 3;
            UCHAR Audit  : 1;
            UCHAR Signer : 4;
        };
    };
} PS_PROTECTION, *PPS_PROTECTION;
```

Three fields. One byte. The union with the `UCHAR Level` member exists so that two `_PS_PROTECTION` values can be compared with a single byte load and a single byte compare. The kernel does this on every `NtOpenProcess`. Speed matters; this is the hot path of the security model.

<Definition term="`_PS_PROTECTION` byte">
The kernel structure that encodes a process's protection state in eight bits: three bits of Type (`None`, `ProtectedLight`, `Protected`), one bit of Audit (intended as a forensic side-channel hint, although the exact runtime semantics are not enumerated in the public sources cited here), and four bits of Signer rung. Stored as `EPROCESS.Protection`.
</Definition>

The Type field has three values. `PsProtectedTypeNone = 0` marks a regular process. `PsProtectedTypeProtectedLight = 1` marks a PPL -- the graduated path introduced in 8.1. `PsProtectedTypeProtected = 2` marks a "heavy" Vista-style PP. Heavy PPs still exist; they retain the original DRM semantics where almost nothing from below the protection level may touch them. PPLs are the new general-purpose path where the *signer rung* mediates a graduated lattice.

The Audit bit is the least documented of the three fields. Ionescu's Part 1 lists it as `Audit : Pos 3, 1 Bit` with no semantic gloss; itm4n's RunAsPPL header annotates it as `// Reserved`; Microsoft Learn enumerates CodeIntegrity events `3033`, `3063`, `3065`, and `3066`, but those are triggered by the `AuditLevel` configuration under `Image File Execution Options\LSASS.exe` and concern DLL-load failures, not per-process `OpenProcess` denials [@ionescu-part1] [@itm4n-runasppl] [@learn-runasppl]. The field's name implies a forensic side-channel, but the bit is annotated as reserved; its precise runtime semantics are not enumerated in the public sources cited here.

The Signer field is the structurally interesting one. Ionescu's 2013 enumeration names eight values [@ionescu-part1]:

| Signer constant | Value | Used for |
|---|---|---|
| `PsProtectedSignerNone` | 0 | Non-protected (no rung) |
| `PsProtectedSignerAuthenticode` | 1 | Generic third-party Authenticode (early PPL guests) |
| `PsProtectedSignerCodeGen` | 2 | .NET native runtime code generators |
| `PsProtectedSignerAntimalware` | 3 | EDR / AV daemons admitted via ELAM |
| `PsProtectedSignerLsa` | 4 | `lsass.exe` under `RunAsPPL` |
| `PsProtectedSignerWindows` | 5 | Microsoft Windows components below TCB |
| `PsProtectedSignerWinTcb` | 6 | `csrss.exe`, `smss.exe`, `services.exe` -- the inbox TCB |
| `PsProtectedSignerMax` | 7 | Sentinel value (enumeration upper bound) |

> **Note:** Ionescu's 2013 list is the authoritative *baseline* enumeration. It is not a permanent enumeration. By 2018, James Forshaw's PowerShell tooling (`NtApiDotNet`) was enumerating an additional `App = 8` signer used for AppContainer / TruePlay scenarios [@forshaw-2018-10]. Newer builds of Windows extend the enumeration further. The article will name `WinTcb` (Microsoft's documented inbox-TCB rung) and `Antimalware` (the only non-Microsoft-admissible rung) repeatedly, because they are the load-bearing ones. The intermediate values evolve.

<Sidenote>Adjacent to `EPROCESS.Protection` are two related fields, `EPROCESS.SignatureLevel` and `EPROCESS.SectionSignatureLevel`, which Ionescu introduces in Part 3 [@ionescu-part3]. These fields encode the *binary integrity* the kernel demands at process creation and at every subsequent section load, and they are filled in from a 16-entry Signing Level table that runs from `Unchecked = 0` up to `Windows TCB = 14`. The Signer rung in `Protection` answers "what kind of trust does this process hold?" The SignatureLevel pair answers "what binaries is this process allowed to map?" They are not the same question.</Sidenote>

Now the worked decode. Given the byte value `0x41`, the encoding falls out by hand:

- Low three bits (Type): `0x41 & 0x07 = 0x01` -- `PsProtectedTypeProtectedLight`.
- Bit 3 (Audit): `(0x41 >> 3) & 0x01 = 0` -- Audit off.
- High four bits (Signer): `(0x41 >> 4) & 0x0F = 0x04` -- `PsProtectedSignerLsa`.

A process with `EPROCESS.Protection = 0x41` is a PPL signed at the `Lsa` rung. That is exactly what `lsass.exe` looks like on a host with `RunAsPPL = 1`. Ionescu's blog explicitly states: "it's easy to read 0x41 as Lsa (0x4) + PPL (0x1)" [@ionescu-part1]. The Defender service `MsMpEng.exe`, signed at the Antimalware rung, has `Protection = 0x31`. The client/server runtime subsystem `csrss.exe`, signed at WinTcb, has `Protection = 0x61`.

<Mermaid caption="The `_PS_PROTECTION` byte layout -- three fields packed into eight bits.">
flowchart TD
    B[byte: 8 bits]
    B --> F1[bits 0..2: Type]
    B --> F2[bit 3: Audit]
    B --> F3[bits 4..7: Signer]
    F1 --> T0[0 = None]
    F1 --> T1[1 = ProtectedLight PPL]
    F1 --> T2[2 = Protected PP]
    F3 --> S0[0 None]
    F3 --> S1[1 Authenticode]
    F3 --> S2[2 CodeGen]
    F3 --> S3[3 Antimalware]
    F3 --> S4[4 Lsa]
    F3 --> S5[5 Windows]
    F3 --> S6[6 WinTcb]
</Mermaid>

<RunnableCode lang="js" title="Decode a _PS_PROTECTION byte">{`
function decodeProtection(byteValue) {
  const type = byteValue & 0x07;
  const audit = (byteValue >> 3) & 0x01;
  const signer = (byteValue >> 4) & 0x0F;
  const typeNames = ['None', 'ProtectedLight', 'Protected'];
  const signerNames = [
    'None', 'Authenticode', 'CodeGen', 'Antimalware',
    'Lsa', 'Windows', 'WinTcb', 'Max'
  ];
  return {
    raw: '0x' + byteValue.toString(16).padStart(2, '0'),
    type: typeNames[type] || 'unknown(' + type + ')',
    audit: audit ? 'on' : 'off',
    signer: signerNames[signer] || 'unknown(' + signer + ')'
  };
}

// Worked examples from real Windows processes
console.log('MsMpEng.exe (Defender):', decodeProtection(0x31));
console.log('lsass.exe under RunAsPPL:', decodeProtection(0x41));
console.log('csrss.exe (WinTcb):', decodeProtection(0x61));
`}</RunnableCode>
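The packing direction runs the same way. For symmetry, here is the inverse of the decoder above, a few lines that rebuild the byte from its three fields; the constants mirror Ionescu's enumeration, and the three worked bytes round-trip:

```js
// Pack a _PS_PROTECTION byte from its three fields:
// Type in bits 0-2, Audit in bit 3, Signer in bits 4-7.
function encodeProtection(type, audit, signer) {
  return ((signer & 0x0f) << 4) | ((audit & 0x01) << 3) | (type & 0x07);
}

const PsProtectedTypeProtectedLight = 1;
const PsProtectedSignerAntimalware = 3;
const PsProtectedSignerLsa = 4;
const PsProtectedSignerWinTcb = 6;

// The three worked examples from the article:
console.log(encodeProtection(PsProtectedTypeProtectedLight, 0, PsProtectedSignerAntimalware).toString(16)); // "31"
console.log(encodeProtection(PsProtectedTypeProtectedLight, 0, PsProtectedSignerLsa).toString(16));         // "41"
console.log(encodeProtection(PsProtectedTypeProtectedLight, 0, PsProtectedSignerWinTcb).toString(16));      // "61"
```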

> **Note:** One byte, three fields, eight signer rungs. The kernel reads it on every `OpenProcess`, before any token check, before any ACL evaluation. The encoding is the entire vocabulary the kernel has for asking *how trusted* a process is.

The encoding tells the kernel *what kind* of trust a process holds. It says nothing about *who can touch whom* across rungs. That rule -- the lattice -- is the structure imposed on top of the bytes. The next section is the lattice.

## 4. The Signer Lattice -- Who Can Open Whom

itm4n's 2021 walkthrough states the three rules verbatim, and they have the rare quality of being short enough to memorise [@itm4n-scrt]:

<PullQuote>
A PP can open a PP or a PPL with full access if its signer type is greater or equal. A PPL can open a PPL with full access if its signer type is greater or equal. A PPL cannot open a PP with full access, regardless of its signer type.
</PullQuote>

Three rules. They settle every cross-process access question PPL gates. Let us name them and then read off their consequences.

**Rule 1.** A PP at signer $S_c$ may open with full access a PP or PPL at signer $S_t$ if and only if $S_c \ge S_t$.

**Rule 2.** A PPL at signer $S_c$ may open with full access a PPL at signer $S_t$ if and only if $S_c \ge S_t$.

**Rule 3.** A PPL cannot open a PP with full access, regardless of signer.

The qualifier "with full access" is load-bearing. PPL's lattice gates the *full* mask -- `PROCESS_VM_READ`, `PROCESS_VM_WRITE`, `PROCESS_CREATE_THREAD`, `PROCESS_DUP_HANDLE`, `PROCESS_ALL_ACCESS`. A separate *limited* mask (`SYNCHRONIZE`, `PROCESS_QUERY_LIMITED_INFORMATION`, `PROCESS_SET_LIMITED_INFORMATION`, `PROCESS_SUSPEND_RESUME`, plus `PROCESS_TERMINATE` when the target sits at the `Authenticode`, `CodeGen`, or `Windows` rung) is allowed when the security descriptor permits. The rung matters. Ionescu's verbatim `RtlProtectedAccess[]` table widens the deny mask from `0xFC7FE` to `0xFC7FF` at the `Antimalware`, `Lsa`, and `WinTcb` rungs -- one extra bit, bit 0, which is `PROCESS_TERMINATE` [@ionescu-part2]. So an administrator can still call `OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, ...)` against a protected `lsass.exe` to enumerate threads, but cannot terminate a `PPL/Antimalware`, `PPL/Lsa`, or `PPL/WinTcb` daemon via a direct kill. The lattice does not lock the process; it locks the *interesting* access, and for the top-tier rungs it also locks the kill.

| Caller signer \ Target signer | None | Authenticode (1) | Antimalware (3) | Lsa (4) | Windows (5) | WinTcb (6) |
|---|---|---|---|---|---|---|
| None (SYSTEM admin, unprotected) | full | denied | denied | denied | denied | denied |
| PPL/Authenticode (1) | full | full | denied | denied | denied | denied |
| PPL/Antimalware (3) | full | full | full | denied | denied | denied |
| PPL/Lsa (4) | full | full | full | full | denied | denied |
| PPL/Windows (5) | full | full | full | full | full | denied |
| PPL/WinTcb (6) | full | full | full | full | full | full |

Where "denied" means the *full* mask is rejected; the limited mask continues to apply per the target's security descriptor.
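The three rules and the mask split compress into very little code. The sketch below is an illustrative model of the lattice, not a transcription of `RtlTestProtectedAccess` (whose real form is a data table in `ntoskrnl.exe`); the access-right constants are the standard Win32 values, and the `0xFC7FE`/`0xFC7FF` deny masks are the ones Ionescu quotes:

```js
// Illustrative model of the PPL open check -- not ntoskrnl's actual table.
const Type = { None: 0, ProtectedLight: 1, Protected: 2 };
const Signer = { None: 0, Authenticode: 1, CodeGen: 2, Antimalware: 3,
                 Lsa: 4, Windows: 5, WinTcb: 6 };

const PROCESS_TERMINATE = 0x0001;
const PROCESS_VM_READ = 0x0010;

// Deny masks from Ionescu's RtlProtectedAccess[] quote: the top rungs
// additionally deny bit 0, PROCESS_TERMINATE.
function denyMask(targetSigner) {
  return targetSigner >= Signer.Antimalware ? 0xFC7FF : 0xFC7FE;
}

// itm4n's three rules, applied to a requested access mask.
// Returns the mask bits that survive the protection check.
function protectedAccessCheck(caller, target, desired) {
  if (target.type === Type.None) return desired; // unprotected target
  let fullAllowed;
  if (target.type === Type.Protected) {
    // Rules 1 and 3: only a PP at >= signer reaches a PP with full access.
    fullAllowed = caller.type === Type.Protected && caller.signer >= target.signer;
  } else {
    // Rules 1 and 2: a PP or PPL at >= signer reaches a PPL with full access.
    fullAllowed = caller.type !== Type.None && caller.signer >= target.signer;
  }
  return fullAllowed ? desired : desired & ~denyMask(target.signer);
}

// SYSTEM admin (unprotected) vs lsass at PPL/Lsa: VM_READ is stripped...
const admin = { type: Type.None, signer: Signer.None };
const lsass = { type: Type.ProtectedLight, signer: Signer.Lsa };
console.log(protectedAccessCheck(admin, lsass, PROCESS_VM_READ)); // 0
// ...and so is TERMINATE, because Lsa sits above the 0xFC7FE/0xFC7FF split.
console.log(protectedAccessCheck(admin, lsass, PROCESS_TERMINATE)); // 0
// Two PPL/Antimalware peers can read each other (within-rung symmetry).
const edr = { type: Type.ProtectedLight, signer: Signer.Antimalware };
console.log(protectedAccessCheck(edr, edr, PROCESS_VM_READ) === PROCESS_VM_READ); // true
```

The last case is the within-rung symmetry the section returns to below: "greater or equal" means equal rungs see each other in full.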

<Mermaid caption="The signer lattice as a Hasse diagram -- higher rungs can open lower rungs for full access; equal rungs can open each other; lower rungs cannot reach upward.">
flowchart BT
    None[None / unprotected]
    Auth[Authenticode]
    CG[CodeGen]
    AM[Antimalware]
    Lsa[Lsa]
    Win[Windows]
    Tcb[WinTcb]
    None --> Auth
    Auth --> CG
    CG --> AM
    AM --> Lsa
    Lsa --> Win
    Win --> Tcb
</Mermaid>

The Enhanced Key Usage side of the design holds the lattice together. Microsoft's EKU OID arc `1.3.6.1.4.1.311.10.3.*` defines sub-OIDs per signer rung [@iana-pen311] [@oid-base-eku-arc], and at process creation the kernel parses the main image's Authenticode signature and walks its EKU extensions to determine which rung the binary is entitled to claim. If the certificate chain resolves cleanly to a Microsoft-issued root *and* carries the rung's sub-OID, the kernel records the rung. Otherwise the process either starts unprotected or refuses to start at all.

<Definition term="Enhanced Key Usage (EKU)">
An X.509 v3 certificate extension that asserts what specific purposes a certificate is allowed to certify. Microsoft uses sub-OIDs under `1.3.6.1.4.1.311.10.3.*` to encode protected-process signer rungs as EKU values [@iana-pen311] [@oid-base-eku-arc]. The kernel checks the EKU at process creation; the certificate chain anchors which Microsoft-issued sub-CA may issue at each rung.

<Sidenote>The IANA Private Enterprise Number `311` is registered to Microsoft under the PEN prefix `1.3.6.1.4.1.` [@iana-pen311], so `1.3.6.1.4.1.311.*` is the catch-all namespace for Microsoft-specific X.509 extensions; the `10.3.*` arc within it is the Microsoft Enhanced Key Usage (purpose) sub-tree [@oid-base-eku-arc], and `10.3.<n>` slots map to specific signer purposes including protected-process rungs.</Sidenote>
</Definition>

The most important property of this design is the resolution point. The kernel parses the EKU exactly once, at `NtCreateUserProcess`. It stores the resulting rung in `EPROCESS.Protection`. On every subsequent `OpenProcess` against that process, the kernel consults the byte, not the certificate. This makes the access check fast (one byte load, one byte compare) and decouples policy at runtime from policy at signing time. It also creates the structural seam that every public bypass since 2018 has exploited, because the kernel's confidence in the byte is exactly the confidence it had in the certificate at process-create time, projected forward indefinitely.
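That create-time resolution can be sketched in a few lines. One loud caveat: only the `1.3.6.1.4.1.311.10.3.` arc itself is attested in the cited sources; the concrete per-rung `<n>` suffixes below (`901`, `902`) are hypothetical placeholders, not Microsoft's actual assignments:

```js
// The Microsoft EKU purpose arc attested in the cited sources.
const MS_EKU_ARC = '1.3.6.1.4.1.311.10.3.';

// HYPOTHETICAL sub-OID suffixes -- the real per-rung assignments are not
// enumerated in the sources this article cites; '901'/'902' are placeholders.
const rungBySuffix = {
  '901': 'Protected',      // hypothetical PP rung OID
  '902': 'ProtectedLight'  // hypothetical PPL rung OID
};

// Model of the create-time resolution: walk the image certificate's EKU
// extensions and return the rung the chain is entitled to claim. The kernel
// does this once, at NtCreateUserProcess, and thereafter consults only the
// stored EPROCESS.Protection byte.
function resolveRung(ekuOids) {
  for (const oid of ekuOids) {
    if (!oid.startsWith(MS_EKU_ARC)) continue;
    const rung = rungBySuffix[oid.slice(MS_EKU_ARC.length)];
    if (rung) return rung;
  }
  return null; // no rung EKU: start unprotected, or refuse to start at all
}

console.log(resolveRung(['1.3.6.1.4.1.311.10.3.902'])); // 'ProtectedLight'
console.log(resolveRung(['1.2.840.113549.1.1.11']));    // null (outside the arc)
```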

Ionescu's Part 2 names the implementation directly. The lattice is not code; it is a data table named `RtlProtectedAccess[]` baked into `ntoskrnl.exe` [@ionescu-part2]. Each row of that table corresponds to a (signer, target-type) pair and encodes which access bits are allowed in the full mask. The relevant runtime routines are `PspProcessOpen` and `PspThreadOpen` (the object-manager open callbacks), `PspCheckForInvalidAccessByProtection` (which performs the check), `RtlTestProtectedAccess` (which applies the lattice row), and `RtlValidProtectionLevel` (which sanity-checks the encoded byte for consistency).

> **Note:** The decision of who can touch whom is encoded in a table inside `ntoskrnl.exe`. Changing the lattice means changing a table; widening or narrowing it does not require new code. This is why Microsoft can add `App = 8` to the enumeration over time without touching the access-check routine.

Note one symmetry that becomes important later. "Greater or equal" means that within a rung, every PPL can read every other PPL. Two co-resident `PPL/Antimalware` daemons -- Microsoft Defender's `MsMpEng.exe` and a third-party EDR's agent -- can call `PROCESS_VM_READ` on each other. Within-rung peers leak to each other by design. The lattice prevents *escalation*, not *peer access*.

The lattice settles the rule. The next question is admission: who decides which binaries are allowed to claim the Antimalware rung, and how does Microsoft admit third-party code into it at all? The answer is a driver.

## 5. The Antimalware Rung -- ELAM and Third-Party Code at PPL

PPL is interesting only if it admits non-Microsoft code at *some* rung. The Vista PP design admitted nobody; it required a Microsoft PMP root certificate, full stop. PPL inherited that constraint at every rung except one. The Antimalware rung -- signer value `3` -- is the only rung where third-party vendors can ship their own user-mode binaries as protected processes. The admission mechanism is the Early Launch Anti-Malware driver.

<Definition term="Early Launch Anti-Malware (ELAM)">
A specially signed Microsoft-certified kernel driver shipped by an anti-malware vendor that loads before any other boot-start driver. The ELAM driver participates in trusted-boot measurement, vouches for follow-on drivers, and -- critical to PPL -- carries an embedded resource section enumerating the vendor's user-mode signing certificate hashes. The kernel uses that resource section to admit the vendor's user-mode daemon binaries to `PPL/Antimalware` at service start.
</Definition>

Microsoft Learn's "Protecting Anti-Malware Services" page describes the boot-time admission flow in two sentences [@learn-am-services]:

> The driver must have an embedded resource section containing the information of the certificates used to sign the user mode service binaries. During the boot process, this resource section will be extracted from the ELAM driver to validate the certificate information and register the anti-malware service.

Two consequences. First, the third-party signer set is bounded by a *kernel-readable resource section*, not by an open EKU. Microsoft, not the vendor, controls which user-mode binaries are admissible. Second, the certificate hashes are baked into the driver at signing time and re-validated at every service start. A vendor cannot widen the admissible set after the fact; an attacker cannot drop in their own user-mode binary unless its hash is already listed.

The gate that decides which vendors get ELAM drivers in the first place is the Microsoft Virus Initiative. Microsoft Learn's MVI criteria page enumerates the requirement explicitly [@learn-mvi]:

> Your security solution must be certified within the last 12 months by at least one of the organizations listed below: AV-Comparatives, AVLab Cybersecurity Foundation, AV-Test, MRG Effitas, SE Labs, SKD Labs, VB 100, West Coast Labs.

The same page requires "use of Trusted Signing," Microsoft's cloud-managed code signing service. The implications are operational. To ship code at `PPL/Antimalware`, a vendor must (a) hold MVI membership, (b) maintain independent-lab certification, (c) author an ELAM driver, (d) get the driver through Microsoft WHQL and have it Microsoft co-signed, and (e) embed the user-mode certificate hashes in the driver's resource section.

<Definition term="Microsoft Virus Initiative (MVI)">
A Microsoft program for anti-malware vendors that gates access to ELAM driver signing and to specific Defender APIs. Membership requires independent-lab certification (renewed annually) and Trusted Signing usage; in practical terms, MVI membership is the entry ticket to deploying user-mode binaries at `PPL/Antimalware`.
</Definition>

<Aside label="Hobbyist tooling cannot join the Antimalware rung">
The implication of MVI is that an indie security tool, however technically sound, cannot deploy as `PPL/Antimalware`. The gate is not technical but commercial: independent-lab certification fees, annual renewals, and the engineering investment of building a production-grade ELAM driver. The signer rung is *signed*; the signing program is *gated*.
</Aside>

<Mermaid caption="ELAM admission flow -- the kernel extracts vendor certificate hashes from the driver's resource section at boot, and CI validates user-mode service binaries against them at service start.">
sequenceDiagram
    participant BM as Boot manager
    participant K as Windows kernel
    participant ELAM as Vendor ELAM driver (.sys)
    participant SCM as Service Control Manager
    participant CI as ci.dll (CodeIntegrity)
    participant Svc as Vendor service (e.g. EDR daemon)
    BM->>K: load boot drivers
    K->>ELAM: load ELAM driver early
    K->>ELAM: read embedded ELAM resource section
    K->>K: cache vendor user-mode cert hashes
    Note over K,SCM: Boot continues, OS initialises
    SCM->>Svc: start vendor service
    Svc->>CI: validate service binary signature
    CI->>K: lookup vendor cert against cached hashes
    K-->>CI: match -- admit at PPL/Antimalware
    CI-->>Svc: launch as PPL/Antimalware (Protection = 0x31)
</Mermaid>

By 2024, every major commercial EDR ships through this path. Microsoft Defender's `MsMpEng.exe` uses the inbox `WdBoot.sys` ELAM driver<Sidenote>`WdBoot.sys` ("Windows Defender Boot Driver") is Microsoft's inbox first-party ELAM driver; it ships in every Windows install and is loaded before any third-party ELAM driver. The canonical reference implementation of the ELAM resource-section pattern is Microsoft's `Windows-driver-samples/security/elam` repository [@ms-elam-sample], which also documents the Early Launch EKU `1.3.6.1.4.1.311.61.4.1` verbatim.</Sidenote>. Third-party members of Microsoft's Virus Initiative -- the cohort gated by the MVI criteria quoted above [@learn-mvi] -- ship their own vendor ELAM drivers and run their main user-mode daemons at `PPL/Antimalware`. Microsoft Learn's "Early Launch Antimalware" page is the canonical confirmation [@learn-elam]:

> Because an ELAM service runs as a PPL (Protected Process Light), you need to debug using a kernel debugger.

One Microsoft-signed sentence and a billion endpoints. EDR vendors get protection against administrator-level tampering for free, on top of the kernel telemetry their drivers already collect. Microsoft gets a viable third-party security market without widening the EKU gates beyond a controllable set of vendors.

ELAM admits the *daemon*. The next operational question is what Microsoft does for `lsass.exe` itself -- the canonical credential store, the original Mimikatz target. The mechanism is called `RunAsPPL`.

## 6. RunAsPPL -- Hardening LSASS

The registry value that produced the Mimikatz failure in Section 1 is a single DWORD. itm4n's walkthrough names it verbatim [@itm4n-runasppl]:

> Open the key `HKLM\SYSTEM\CurrentControlSet\Control\Lsa`; add the DWORD value `RunAsPPL` and set it to 1; reboot.

After reboot, `lsass.exe` launches at `PPL/Lsa`, signer rung 4, protection byte `0x41`. Mimikatz running with full SYSTEM-integrity and `SeDebugPrivilege` then receives `0x00000005` on `OpenProcess(PROCESS_VM_READ, lsass.exe)`. The registry knob is one DWORD; the consequences are large.
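
The protection byte is compact enough to decode by hand. A minimal sketch, assuming the publicly documented `PS_PROTECTION` layout (Type in bits 0-2, the Audit bit at bit 3, Signer in bits 4-7) -- a model for illustration, not kernel code:

```python
# Toy decoder for the EPROCESS.Protection byte described in this article.
# Layout assumption: Type in bits 0-2, Audit in bit 3, Signer in bits 4-7.
TYPES = {0: "None", 1: "ProtectedLight", 2: "Protected"}
SIGNERS = {0: "None", 1: "Authenticode", 2: "CodeGen", 3: "Antimalware",
           4: "Lsa", 5: "Windows", 6: "WinTcb", 7: "WinSystem"}

def decode_protection(byte: int) -> str:
    ptype = byte & 0x7           # PS_PROTECTED_TYPE
    audit = (byte >> 3) & 0x1    # the Audit/Reserved bit
    signer = (byte >> 4) & 0xF   # PS_PROTECTED_SIGNER
    return f"{TYPES[ptype]}/{SIGNERS[signer]} (audit={audit})"

print(decode_protection(0x41))  # ProtectedLight/Lsa (audit=0)
print(decode_protection(0x31))  # ProtectedLight/Antimalware (audit=0)
```

Running it on the two bytes this article has named so far recovers `PPL/Lsa` for `0x41` (LSASS under `RunAsPPL`) and `PPL/Antimalware` for `0x31` (an ELAM-admitted EDR daemon).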

<Definition term="Local Security Authority Subsystem Service (LSASS)">
The Windows user-mode process that holds NTLM password hashes, Kerberos Ticket Granting Tickets, MSV1_0 credential caches, DPAPI master keys, and (on legacy builds before Microsoft's 2014 KB2871997 update [@ms-kb2871997]) WDigest plaintext passwords. The canonical target of credential-theft tooling since 2011.
</Definition>

The threat being mitigated is simple. Mimikatz reads LSASS memory via `OpenProcess(PROCESS_VM_READ, lsass.exe)`, walks the internal key-store structures, and extracts NTLM hashes, Kerberos session keys, and (on older configurations) cached plaintext. Restricting `SeDebugPrivilege` does not work, because an attacker with SYSTEM has every privilege. Restricting the security descriptor on `lsass.exe` does not work either, because legitimate services need to interact with it. PPL is the right primitive: it gates the *full* mask irrespective of token state, and the kernel admits only Microsoft-signed code into the `Lsa` rung.

`RunAsPPL = 1` is the stronger form of the setting on Secure Boot-capable machines. On the next boot, the kernel automatically mirrors the policy into a Secure Boot-anchored UEFI variable; once set, the protection survives registry rollback. An attacker who removes the registry key finds that LSASS still launches as PPL on the next boot. The only path to remove the protection is to disable Secure Boot at the firmware level, which requires physical access and which trips other defences. Microsoft Learn's documentation describes it verbatim [@learn-runasppl]:

> You can achieve further protection when you use Unified Extensible Firmware Interface (UEFI) lock and Secure Boot. When these settings are enabled, disabling the `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa` registry key has no effect.

That behaviour is `RunAsPPL = 1`. For environments that need admin-removable protection without the UEFI lock, `RunAsPPL = 2` (available on Windows 11 22H2 and later) omits the UEFI variable: the policy lives in the registry only and can be removed by any administrator -- or by malware running as administrator -- who simply deletes the registry value before reboot.

| `RunAsPPL` value | Behaviour | Removable by? | Persistence |
|---|---|---|---|
| `0` (or absent) | LSASS runs unprotected | n/a | none |
| `1` | LSASS runs as PPL/Lsa; policy mirrored to UEFI variable on Secure Boot machines | Physical access + Secure Boot disable | Firmware-anchored |
| `2` | LSASS runs as PPL/Lsa; registry only (Win11 22H2+ only) | Any admin who deletes the key | Registry only |

> **Note:** The `RunAsPPL = 1` setting is the practical answer to "what stops an attacker who is willing to reboot?" Once the UEFI variable is set, neither registry rollback nor an offline edit of the registry hive from a WinPE boot can disable LSA protection on the next boot.

The deployment cost of `RunAsPPL` is compatibility with third-party authentication modules. LSASS hosts a set of plug-ins: smart-card middleware, third-party Cryptographic Service Providers (CSPs), password-filter DLLs, alternative authentication packages. Under `RunAsPPL`, the kernel demands that every DLL loaded into LSASS be Microsoft-signed at the LSA level (signer rung 4). Vendor DLLs that lack the right EKU are rejected at section creation. The rejections surface as CodeIntegrity events in the system event log. Microsoft Learn enumerates the two relevant event IDs [@learn-runasppl]:

> Event 3065 occurs when a code integrity check determines that a process, usually LSASS.exe, attempts to load a driver that doesn't meet the security requirements for shared sections.
>
> Event 3066 occurs when a code integrity check determines that a process, usually LSASS.exe, attempts to load a driver that doesn't meet the Microsoft signing level requirements.

This is why Microsoft recommends running the setting in *audit mode* before enforcement. Audit mode is enabled by setting a separate `AuditLevel` DWORD to `8`, but -- critically -- under a *different* registry key from the one that hosts `RunAsPPL`. Microsoft Learn places `AuditLevel` under the Image File Execution Options hive for `LSASS.exe` and names the path verbatim [@learn-runasppl]:

> Open the Registry Editor, or enter RegEdit.exe in the Run dialog, and then go to the `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\LSASS.exe` registry key. Open the `AuditLevel` value. Set its data type to `dword` and its data value to `00000008`.

> **Note:** `RunAsPPL` sits under `HKLM\SYSTEM\CurrentControlSet\Control\Lsa`. `AuditLevel = 8` sits under `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\LSASS.exe`. A defender who edits "the same key" silently sets the wrong value and audit mode never engages. The deployment looks correct from the registry; the log surface is empty; the rollout breaks production on enforcement day. Two values. Two hives. Read this twice.

In audit mode, the kernel emits the same 3065 / 3066 events for would-be load rejections but allows the loads to proceed. Two months of audit-mode telemetry typically surfaces every smart-card middleware DLL, every password-filter, every third-party CSP on a corporate fleet. Once the audit log is clean (every vendor's modules have been re-signed at the LSA level or replaced), enforcement mode can be turned on without breaking production logins.

> **Note:** Skipping audit mode is the most common cause of LSA protection rollouts being rolled back after a wave of authentication failures. See §11 Item 1 for the full audit-then-enforce-then-UEFI-lock recipe.

The deployment cadence has been deliberately glacial. `RunAsPPL` shipped in Windows 8.1 in October 2013 -- *opt-in*. It remained opt-in for nine years. Microsoft Learn records the inflection [@learn-runasppl]:

> Audit mode for added LSA protection is enabled by default on devices running Windows 11 version 22H2 and later.

Audit mode default-on. Not enforcement. The Windows 11 24H2 release expanded the audit-mode rollout further. Eleven years from opt-in to effective default. The pace reflects the compatibility risk: every domain with a single non-Microsoft-signed LSASS plug-in would have surfaced as a support call.

The registry knob is simple. The *kernel* check that enforces it is not. The next section walks the access-check pipeline in detail, because the structural reason `SeDebugPrivilege` cannot help an attacker is the order in which the kernel asks its questions.

## 7. The Kernel Access Check -- What Happens Inside `NtOpenProcess`

Recall the trace from Section 1. The denial happens before `SeAccessCheck` runs. The reason `SeDebugPrivilege` does not help is not that the kernel decided to override the privilege; it is that the kernel never asked about the privilege. The order matters. Let us walk it.

The Win32 caller invokes `OpenProcess`, which thunks through `kernel32.dll` to the syscall `NtOpenProcess`. `NtOpenProcess` does its handle-lookup and dispatches to the process-type object-manager open callback, `PspProcessOpen`. Ionescu's Part 2 names the path verbatim [@ionescu-part2]:

> Access to protected processes (and their threads) is gated by the `PspProcessOpen` and `PspThreadOpen` object manager callback routines, which perform two checks. The first, done by calling `PspCheckForInvalidAccessByProtection` (which in turn calls `RtlTestProtectedAccess` and `RtlValidProtectionLevel`) ...

`PspCheckForInvalidAccessByProtection` does two things. First, it splits the caller's requested access mask into two subsets:

- The **limited mask** -- a fixed set of bits (`SYNCHRONIZE`, `PROCESS_QUERY_LIMITED_INFORMATION`, and a small handful of others) that the lattice never forbids. The limited mask is subject only to the standard `SeAccessCheck` against the target's DACL.
- The **full mask** -- everything else, including `PROCESS_VM_READ`, `PROCESS_VM_WRITE`, `PROCESS_CREATE_THREAD`, `PROCESS_DUP_HANDLE`, and `PROCESS_ALL_ACCESS`. The full mask is subject to the lattice rule.

<Definition term="Limited Access Mask">
The subset of `PROCESS_*` access rights that the PPL lattice always allows the standard `SeAccessCheck` to evaluate. Includes `SYNCHRONIZE`, `PROCESS_QUERY_LIMITED_INFORMATION`, `PROCESS_SET_LIMITED_INFORMATION`, and `PROCESS_SUSPEND_RESUME`. `PROCESS_TERMINATE` is included for callers below the Antimalware tier (deny mask `0xFC7FE`), but the kernel widens the deny mask to `0xFC7FF` at the `Antimalware`, `Lsa`, and `WinTcb` rungs -- bit 0, `PROCESS_TERMINATE` -- making those three rungs unkillable except from peers or higher.
</Definition>
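
The widening the definition describes is a one-bit change, which a two-line check makes concrete (the mask values are the ones quoted above):

```python
# Verify that widening the deny mask from 0xFC7FE to 0xFC7FF adds exactly
# one access right: bit 0, PROCESS_TERMINATE.
PROCESS_TERMINATE = 0x0001
added = 0xFC7FF & ~0xFC7FE
assert added == PROCESS_TERMINATE
print(hex(added))  # -> 0x1
```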

Second, it indexes into `RtlProtectedAccess[]` using the caller's signer rung and the target's type, retrieves the relevant row of access bits, and tests the full mask against it. If every bit of the full mask is permitted by the row, the access proceeds; if any bit is forbidden, the kernel strips the full-mask bits from the request and returns either the limited subset (if the caller asked for any limited bits) or `STATUS_ACCESS_DENIED`. `RtlValidProtectionLevel` runs alongside as a sanity check on the encoded byte, catching malformed `EPROCESS.Protection` values that would otherwise let the lattice walk off the end of the table.
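
The two-step pipeline can be caricatured in a few lines. This is a hedged model, not kernel code: a simple rung comparison stands in for the real `RtlProtectedAccess[]` rows, and the limited mask here is a representative subset of the one defined above.

```python
# Hedged model of PspCheckForInvalidAccessByProtection: split the desired
# mask into the limited subset and the full remainder, then gate only the
# full remainder on the caller/target rung comparison.
PROCESS_TERMINATE = 0x0001
PROCESS_VM_READ = 0x0010
PROCESS_QUERY_LIMITED_INFORMATION = 0x1000
SYNCHRONIZE = 0x100000

LIMITED_MASK = SYNCHRONIZE | PROCESS_QUERY_LIMITED_INFORMATION  # illustrative

def residual_mask(caller_rung, target_rung, desired):
    """Mask handed on to SeAccessCheck; None models STATUS_ACCESS_DENIED."""
    limited = desired & LIMITED_MASK
    full = desired & ~LIMITED_MASK
    if full == 0 or caller_rung >= target_rung:
        return desired                   # the lattice has nothing to strip
    return limited if limited else None  # full-mask bits stripped

# An unprotected admin (rung 0) asking PPL/Lsa (rung 4) for VM_READ:
print(residual_mask(0, 4, PROCESS_VM_READ))  # -> None (denied)
```

The same function reproduces the worked traces later in this section: an unprotected admin requesting `PROCESS_VM_READ` gets nothing, while a rung-6 caller against a rung-4 target keeps the full mask.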

<Mermaid caption="`NtOpenProcess` execution path -- the protection check runs before `SeAccessCheck`, which is why no token-level privilege can rescue the denied bits.">
sequenceDiagram
    participant App as Caller (any token)
    participant Nt as NtOpenProcess
    participant PsPO as PspProcessOpen
    participant Chk as PspCheckForInvalidAccessByProtection
    participant Rtl as RtlTestProtectedAccess + RtlValidProtectionLevel
    participant Tab as RtlProtectedAccess[] table
    participant SAC as SeAccessCheck
    App->>Nt: NtOpenProcess(DesiredAccess)
    Nt->>PsPO: dispatch
    PsPO->>Chk: protection check
    Chk->>Rtl: lookup caller / target rungs
    Rtl->>Tab: index row, retrieve allowed bits
    Tab-->>Rtl: row of allowed access bits
    Rtl-->>Chk: full mask allowed or stripped
    Chk-->>PsPO: residual mask (full or limited)
    PsPO->>SAC: residual mask vs DACL + token
    SAC-->>Nt: final mask
    Nt-->>App: handle or STATUS_ACCESS_DENIED
</Mermaid>

> **Key idea:** The protection check runs *before* `SeAccessCheck`. Privileges are evaluated by `SeAccessCheck`. The reason `SeDebugPrivilege` does not help is structural -- it is not consulted at the moment of denial.

Four worked traces make this concrete.

**Case (a): admin -> lsass with `PROCESS_ALL_ACCESS`.** The caller has no `EPROCESS.Protection.Type` (it is `None`). The target is `PPL/Lsa`. The lattice forbids the full mask. The kernel strips every bit of `PROCESS_ALL_ACCESS` except the limited subset. The caller wanted to read memory; the limited subset cannot read memory; the operation effectively fails. This is the Mimikatz scenario.

**Case (b): admin -> lsass with `PROCESS_QUERY_LIMITED_INFORMATION`.** Same caller, same target, but the requested mask sits entirely in the limited subset. The lattice does not gate the limited mask. `SeAccessCheck` evaluates the DACL on `lsass.exe`, finds that administrators are permitted to query basic process information, and the call succeeds. This is why Process Explorer can still enumerate `lsass.exe` and show its threads even when LSA protection is enabled.

**Case (c): `MsMpEng.exe` (PPL/Antimalware, rung 3) -> `lsass.exe` (PPL/Lsa, rung 4) with `PROCESS_VM_READ`.** The lattice rule: caller rung 3 < target rung 4, so the full mask is denied. Defender cannot read LSASS memory. Defender does not need to; the cross-rung isolation prevents one Microsoft service from reading another Microsoft service's secrets even within the same trusted system.

**Case (d): hypothetical `PPL/WinTcb` (rung 6) -> `lsass.exe` (PPL/Lsa, rung 4) with `PROCESS_VM_READ`.** The lattice rule: caller rung 6 >= target rung 4, so the full mask is allowed. A process signed at the WinTcb rung can read LSASS memory by design. This is how Service Control Manager and Windows Error Reporting can still interact with protected `lsass.exe`.

| Caller | Target | Mask | Lattice rule | Outcome |
|---|---|---|---|---|
| Admin, no Protection | PPL/Lsa | PROCESS_ALL_ACCESS | Caller has no rung | Full mask stripped (denied) |
| Admin, no Protection | PPL/Lsa | PROCESS_QUERY_LIMITED_INFORMATION | Limited mask | Allowed (DACL permitting) |
| PPL/Antimalware (3) | PPL/Lsa (4) | PROCESS_VM_READ | 3 < 4 | Denied |
| PPL/WinTcb (6) | PPL/Lsa (4) | PROCESS_VM_READ | 6 >= 4 | Allowed |

The Audit bit revisits the table from a different angle. itm4n's public structure definition annotates the bit `Reserved`, and Ionescu Part 1 names it without semantic gloss. None of the primary sources -- Ionescu Part 1, Forshaw 2018, itm4n's RunAsPPL writeup, Microsoft Learn's RunAsPPL page -- enumerates what the kernel actually emits when an `OpenProcess` call is denied with the bit set; the Learn page's CodeIntegrity events 3033/3063/3065/3066 are scoped to `AuditLevel` under `IFEO\LSASS.exe` and to DLL-load failures, not to per-process Audit-bit denials [@ionescu-part1] [@itm4n-runasppl] [@learn-runasppl]. The field name and bit position imply a forensic side-channel; the exact event shape is not in the public record.

<Sidenote>Two adjacent kernel mechanisms exist in the same neighbourhood but mediate different threat models. `PROCESS_TRUST_LABEL_ACE` (a Trust SID ACL entry, introduced in Windows 8.1 alongside PPL) is an ACL-side companion that runs *inside* `SeAccessCheck` -- it adds a token-style trust label that interacts with the security descriptor in the standard way. Code Integrity Guard (`ProcessSignaturePolicy`) is a per-process *signed-image* enforcer settable at `CreateProcess` time via the `PROC_THREAD_ATTRIBUTE_MITIGATION_POLICY` attribute. Neither is part of PPL; both interact with the same problem space.</Sidenote>

The kernel verifies who is asking, what they are asking for, and at what rung the target sits. What the kernel *cannot* verify is the behaviour of code that arrives through a signed channel and then executes against attacker-controlled data. That structural seam is the entire premise of the bypass arms race, and it is the next section.

## 8. The Bypass Arms Race -- Forshaw, itm4n, Landau

If the kernel only verifies the channel by which code enters a PPL, every bypass should attack the seam between channel and behaviour. Test that prediction against the public record. Since 2018, four named bypass acts have appeared on major security research blogs. All four sit in the same structural class.

> **Key idea:** The kernel verifies the channel. It does not verify the behaviour. Every public PPL bypass since 2018 attacks the seam between what the channel proves (a signature, an EKU, a section identity) and what the code does once mapped.

### Act I (2018) -- Forshaw and JScript-into-PPL

James Forshaw, then at Google Project Zero, published "Injecting Code into Windows Protected Processes Using COM" in October 2018 [@forshaw-2018-10]. The mechanism: a PPL can be made to instantiate a COM object whose CLSID resolves to `scrobj.dll`, the Microsoft-signed Windows Script Component scripting host. Once loaded into the PPL, the script object accepts attacker-supplied source code and executes it inside the protected process. The DLL is signed. The kernel admits it. The kernel cannot reason about the JScript source it then runs.

Microsoft's fix in Windows 10 1803 (April 2018, deployed broadly through that year) was a hardcoded deny-list in `CI.DLL`. Forshaw's own writeup gives the source verbatim [@forshaw-2018-10]:

```c
UNICODE_STRING g_BlockedDllsForPPL[] = {
    DECLARE_USTR("scrobj.dll"),
    DECLARE_USTR("scrrun.dll"),
    DECLARE_USTR("jscript.dll"),
    DECLARE_USTR("jscript9.dll"),
    DECLARE_USTR("vbscript.dll")
};

NTSTATUS CipMitigatePPLBypassThroughInterpreters(
    PEPROCESS Process, LPBYTE Image, SIZE_T ImageSize)
{
    if (!PsIsProtectedProcess(Process)) return STATUS_SUCCESS;
    // walk g_BlockedDllsForPPL; if any match, return STATUS_DYNAMIC_CODE_BLOCKED
    ...
}
```

Five DLLs, hardcoded. Microsoft Learn corroborates the policy on the user-facing side [@learn-am-services]:

> The following scripting DLLs are forbidden by CodeIntegrity inside a protected process: scrobj.dll, scrrun.dll, jscript.dll, jscript9.dll, and vbscript.dll.

Channel: a Microsoft-signed DLL. Behaviour: arbitrary attacker script. The fix narrows the channel by name-listing the five DLLs known to admit attacker behaviour. The class survives.

<Sidenote>The mechanism was previewed at Recon Montreal 2018 in the joint Forshaw-Ionescu talk "Unknown Known DLLs and other Code Integrity Trust Violations" (June 15-17, 2018) [@recon-mtl-2018]. Forshaw's August 2017 "Bypassing VirtualBox Process Hardening" essay [@forshaw-2017-vbox] is the structural precursor -- it makes the same channel-vs-behaviour argument against a different kernel-supported process-hardening regime.</Sidenote>

### Act II (2018-2021) -- DefineDosDevice and PPLdump

In his August 2018 post on object-directory exploits [@forshaw-2018-08], Forshaw added a single throwaway sentence that the security community would spend three years productising. itm4n quotes it verbatim in his 2021 SCRT walkthrough [@itm4n-scrt]:

<PullQuote>
Abusing the DefineDosDevice API actually has a second use, it's an Administrator to Protected Process Light (PPL) bypass.
</PullQuote>

The mechanism, fully worked out by itm4n in April 2021, is structural and uses that same primitive. As an administrator, call `DefineDosDevice` to create a symbolic link in `\KnownDlls\` (the object-manager directory the loader consults for fast known-DLL lookups). The call is dispatched via RPC to `csrss.exe`, which runs at PPL/WinTcb (rung 6) and so has the lattice authority to write into protected directories. The administrator gets a `\KnownDlls\` entry pointing at an attacker-controlled section. Now start a PPL. The PPL's loader resolves DLL names through `\KnownDlls\` and finds the administrator's entry. The PPL maps the attacker's section without re-validating its on-disk signature, because `\KnownDlls\` is the kernel's vouched-for fast path.

itm4n's PPLdump tool, published April 2021, automated the attack. The README test matrix lists every Windows version it ran against [@ppldump-repo]. For fifteen months, an administrator could dump any PPL's memory, including `lsass.exe`, despite `RunAsPPL`.

Microsoft's fix arrived in build 19044.1826 (the July 2022 update to Windows 10 21H2). itm4n's "End of PPLdump" writeup describes the patch and the BinDiff diff verbatim [@itm4n-end-of-ppldump]:

> The conclusion is that PPLs now appear to be behaving just like PPs and therefore no longer rely on Known DLLs.

The fix patched `LdrpInitializeProcess` in NTDLL to skip `\KnownDlls\` for PPL processes, behind a Velocity feature flag (`Feature_Servicing_2206c_38427506__private_IsEnabled`). PPLdump's repository README now opens with [@ppldump-repo]:

> 2022-07-24 - As of Windows 10 21H2 10.0.19044.1826 (July 2022 update), the exploit implemented in PPLdump no longer works. A patch in NTDLL now prevents PPLs from loading Known DLLs.

itm4n's structural finding -- that *PPLs honoured `\KnownDlls\` while PPs did not* -- is the most interesting failure in the eight-year run, because the asymmetry sat in plain sight from 2013 to 2022 and nobody had asked "why are PPs and PPLs loading sections differently?" The fix closes one asymmetry. The structural class survives.
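
The asymmetry is small enough to model directly. A toy sketch, with a dictionary standing in for the object-manager directory and every name here illustrative rather than a Windows API:

```python
# Toy model of the pre-July-2022 asymmetry: full PPs skipped the KnownDlls
# fast path, PPLs did not, so a csrss-planted \KnownDlls\ entry was mapped
# by PPLs without on-disk re-validation. The patch makes PPLs behave like
# PPs (modelled by the known_dlls_enabled_for_ppl flag).
known_dlls = {"ntdll.dll": "legit-section"}

def resolve(name, protection, known_dlls_enabled_for_ppl=True):
    is_pp = protection == "PP"
    is_ppl = protection == "PPL"
    use_fast_path = (not is_pp) and (not is_ppl or known_dlls_enabled_for_ppl)
    if use_fast_path and name in known_dlls:
        return known_dlls[name]         # mapped without signature re-check
    return "validated-on-disk-image"    # full CodeIntegrity validation path

known_dlls["ntdll.dll"] = "attacker-section"   # DefineDosDevice via csrss
print(resolve("ntdll.dll", "PPL"))             # pre-patch: attacker-section
print(resolve("ntdll.dll", "PP"))              # always: validated-on-disk-image
```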

<Sidenote>PPLdump's substitution chain uses NTFS transactions and Forrest Orr's "phantom DLL hollowing" technique to materialise the attacker-controlled section on disk in a way the kernel section creator will accept [@forrest-orr-hollow]. Orr's writeup is the original publication of the hollowing primitive; PPLdump composes it with the `\KnownDlls\` redirection trick.</Sidenote>

### Act III (2022-2024) -- Landau's PPLFault CI TOCTOU

Gabriel Landau, then at Elastic, presented "PPLdump Is Dead. Long Live PPLdump!" at Black Hat Asia 2023 [@bh-asia-2023-pdf]. The mechanism is a Time-Of-Check / Time-Of-Use bug at the section-creation layer.

<Definition term="TOCTOU (Time-Of-Check / Time-Of-Use)">
A class of bug in which a security property is verified at one point in time but the underlying object is mutable between the check and the use. The protected resource passes its check, then changes between check and access, and the operation proceeds against the changed state without re-verification.
</Definition>

The TOCTOU here is subtle. When a PPL calls `NtCreateSection` on a Microsoft-signed DLL, the kernel's memory manager calls `MiValidateSectionCreate`, which calls into `ci.dll` to verify the file's Authenticode signature. The check succeeds. The section is created. But the memory manager does not page in the file contents at section-create time; it pages them in lazily, on demand, when threads first touch the mapped pages. If an attacker can keep the section's backing file *unsubstituted* during the signature check and substituted during the lazy page-in, the kernel will execute attacker bytes through a section whose signature it already verified.
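
The race generalises well beyond sections and oplocks. A deliberately simplified sketch with ordinary files -- hash once at check time, substitute, then consume -- shows the shape that PPLFault plays out at the section-create / lazy-page-in seam (all names illustrative):

```python
# Simplified TOCTOU demonstration: the "kernel" verifies the file once at
# check time, the "attacker" substitutes the contents, and the later use
# consumes bytes the earlier check never saw.
import hashlib, os, tempfile

def check(path):   # time-of-check: a hash stands in for signature validation
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def use(path):     # time-of-use: consume the (still mutable) file
    with open(path, "rb") as f:
        return f.read()

fd, path = tempfile.mkstemp()
os.write(fd, b"signed-microsoft-dll")
os.close(fd)

verified = check(path)                  # the check passes here
with open(path, "wb") as f:             # attacker substitutes the bytes
    f.write(b"attacker-payload")

executed = use(path)                    # the use sees the substituted bytes
assert check(path) != verified and executed == b"attacker-payload"
os.remove(path)
print("check/use divergence:", executed)
```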

Landau's exploit uses Windows' CloudFilter API. An attacker holds an exclusive oplock on a Microsoft-signed DLL during the section-create signature check. After the check passes, the attacker's CloudFilter `FetchDataCallback` provides different bytes (the payload) when the kernel pages in the section. The PPL maps and executes the payload. Landau's Elastic post documents the chain verbatim [@elastic-pplfault]:

> The internal memory manager function `MiValidateSectionCreate` relies on the Code Integrity module `ci.dll` to handle the requisite cryptography and PKI policy.

Microsoft's fix shipped in Windows Insider Canary build 25941 on September 1, 2023 [@elastic-pplfault]:

> On September 1, 2023, Microsoft released a new build of Windows Insider Canary, version 25941 ... Build 25941 includes improvements to the Code Integrity (CI) subsystem that mitigate a long-standing issue that enables attackers to load unsigned code into Protected Process Light (PPL) processes.

The fix narrows the immediate channel by extending page-hash validation to PPL-loaded images that reside on *remote* (SMB redirector) paths -- the precise surface that PPLFault required to drive its CloudFilter `FetchDataCallback` substitution [@elastic-pplfault]. Locally-cached PPL DLL loads continue to rely on the section-create signature check, so the structural seam survives. The GA patch shipped on February 13, 2024 [@pplfault-repo]:

> 2024-02 UPDATE: Microsoft patched PPLFault on 2024-02-13.

Channel: a signed Microsoft DLL whose hash matched at section create. Behaviour: attacker payload mapped via the lazy page-in. The fix narrows the channel by widening the verification surface from "the file at section-create time" to "every page at fault time." The class survives.

### Act IV (2022-2024) -- BYOVDLL and itm4n's KeyIso chain

Bring Your Own Vulnerable DLL. Coined by Gabriel Landau on Twitter in October 2022 (itm4n screenshots the original tweet [@itm4n-ghost-part1]; tweet status 1580067594568364032). Productised by itm4n in August 2024 in "Ghost in the PPL Part 1."

<Definition term="BYOVDLL (Bring Your Own Vulnerable DLL)">
A bypass class against any signature-gated security mechanism in which the attacker loads a *legitimately signed but historically vulnerable* binary and exploits the known vulnerability inside it. The signature check passes; the vulnerability does the work. The structural property that makes the class hard to fix is that the kernel cannot deny-list legitimately signed older Microsoft DLLs without breaking the deployments that still depend on them.
</Definition>

itm4n's specific chain targets the CNG Key Isolation service ("KeyIso"), which runs in `lsass.exe` and so inherits its PPL/Lsa protection. The chain is precise [@itm4n-ghost-part1]:

1. As administrator, stop the KeyIso service.
2. Set `HKLM\SYSTEM\CurrentControlSet\Services\KeyIso\Parameters\ServiceDll` to point at an older `keyiso.dll` extracted from Microsoft update KB5023778. This DLL is Microsoft-signed; the kernel admits it.
3. Restart the KeyIso service. The older `keyiso.dll` loads into LSASS at PPL/Lsa.
4. Trigger CVE-2023-36906, an out-of-bounds read information disclosure in the older `keyiso.dll`, to leak an address.
5. Trigger CVE-2023-28229, one of six use-after-frees in the same DLL, to obtain control of a `CALL` target via the `RAX` register.
6. Execute attacker code at PPL/Lsa.

The CVEs are real and tracked. k0shl's writeup is the primary root-cause analysis [@k0shl-keyiso]:

> Microsoft patched vulnerabilities I reported in CNG Key Isolation service, assigned CVE-2023-28229 and CVE-2023-36906, the CVE-2023-28229 included 6 use after free vulenrabilities with similar root cause and the CVE-2023-36906 is a out of bound read information disclosure.

NVD records both [@nvd-2023-28229] [@nvd-2023-36906]. Y3A's GitHub repository [@y3a-cve-poc] provides a public PoC for CVE-2023-28229 that itm4n's chain composes.

Channel: an actually-Microsoft-signed DLL. Behaviour: the memory-safety vulnerability inside it. There is no general fix announced. Microsoft fixed the specific CVEs by shipping a newer `keyiso.dll`, but the older DLL remains in circulation (it ships inside every patched cumulative update bundle), and a kernel that has to admit every legitimately signed older Microsoft DLL has no general defense against the next CVE-of-the-month.

> **Note:** BYOVDLL has no general patch. Microsoft fixes each underlying CVE on the standard cumulative-update cadence. The class persists for as long as the kernel admits older signed Microsoft DLLs into PPLs, which is for as long as legitimately deployed software depends on the older DLLs.

<Mermaid caption="The four-act bypass arms race -- disclosure to patch half-lives by act.">
timeline
    title PPL Bypass Arms Race (2018-2024)
    2018-10 : Forshaw JScript-into-PPL : Fix 1803 Apr 2018 : g_BlockedDllsForPPL deny-list
    2021-04 : itm4n PPLdump (KnownDlls) : Fix Jul 2022 build 19044.1826 : LdrpInitializeProcess patch
    2022-09 : Landau PPLFault (TOCTOU) : Fix 2024-02-13 GA : CI page-hash for PPLs
    2024-08 : itm4n BYOVDLL KeyIso chain : No general fix : CVEs patched piecewise
</Mermaid>

| Act | Year | Channel verified | Behaviour exploited | Microsoft fix | Fix date |
|---|---|---|---|---|---|
| I | 2018 | Microsoft-signed `scrobj.dll` | JScript source executed by COM object | `g_BlockedDllsForPPL` deny-list of 5 DLLs | Apr 2018 (1803) |
| II | 2021 | `\KnownDlls\` symlink (CSRSS-blessed) | Attacker section mapped without re-validation | NTDLL `LdrpInitializeProcess` patch | Jul 2022 (19044.1826) |
| III | 2023 | Signed DLL passed `MiValidateSectionCreate` | CloudFilter substitutes bytes on lazy page-in | `/INTEGRITYCHECK` page hashes for PPLs | Feb 2024 (GA) |
| IV | 2024 | Legitimately-signed older `keyiso.dll` | Use-after-free + OOB read (CVE-2023-28229, CVE-2023-36906) | None (CVE-by-CVE) | open |

<Mermaid caption="itm4n's BYOVDLL / KeyIso chain -- the channel passes verification at every stage; the behaviour is the vulnerability inside.">
flowchart TD
    A[Admin stops KeyIso service]
    B[Repoint ServiceDll to older keyiso.dll<br/>from KB5023778]
    C[Restart KeyIso service]
    D[Older keyiso.dll loads<br/>into lsass.exe PPL/Lsa]
    E[Trigger CVE-2023-36906<br/>OOB read for info leak]
    F[Trigger CVE-2023-28229<br/>UAF for RAX control]
    G[Code execution at PPL/Lsa]
    A --> B --> C --> D --> E --> F --> G
</Mermaid>

<Aside label="Why itm4n credits Landau">
itm4n explicitly attributes the BYOVDLL framing to Landau's October 2022 tweet, even though itm4n's KeyIso chain is the first public productisation. The attribution chain matters because it documents how a one-line research observation (Twitter status 1580067594568364032, screenshot preserved in [@itm4n-ghost-part1]) became a working exploit two years later. The pattern repeats in this domain: Forshaw's one-sentence DefineDosDevice comment to PPLdump (3 years); Landau's BYOVDLL tweet to itm4n's KeyIso chain (2 years). The structural class outlives its discoverer.
</Aside>

Four acts, one class. Every public bypass since 2018 has lived in the same narrow shape: code that becomes part of a PPL through a signed channel and executes attacker-influenced data once mapped. Each generation of fix narrows what the channel admits -- name-list five DLLs; ignore `\KnownDlls\`; page-hash every section; CVE-patch every vulnerable older DLL. The class survives because the kernel cannot reason about behaviour. By Rice's theorem it cannot reason about behaviour in general; in practice, it has nowhere even to start.

If `lsass.exe` code execution is reachable through BYOVDLL, where are the actual *secrets*? Not in `lsass.exe`. Not anywhere the kernel can read at all. The next section is the companion boundary.

## 9. The Companion Boundary -- Credential Guard, VBS, and `LsaIso.exe`

itm4n opens his RunAsPPL walkthrough with a warning [@itm4n-runasppl]:

> I noticed that this protection tends to be confused with Credential Guard, which is completely different.

The confusion is understandable. Both run on Windows. Both protect LSASS. Both are configured by domain administrators. Both yield "ACCESS_DENIED" to Mimikatz when working correctly. They are nonetheless answering different questions, and they stack rather than replace each other.

PPL stops an *administrator* from reading kernel-trusted user-mode memory. It does nothing against a kernel-mode attacker who can simply zero the `Protection` byte in the target `EPROCESS`. The kernel-mode attacker is the next threat-model rung up, and the kernel-mode attacker is the threat that Credential Guard answers, by moving the credentials themselves out of `lsass.exe` entirely.

<Definition term="Virtualization-Based Security (VBS)">
A Hyper-V-based isolation regime in which the Windows hypervisor partitions the system into Virtual Trust Levels (VTLs). VTL0 contains the normal Windows kernel and user-mode processes. VTL1 contains the Secure Kernel and a small set of user-mode trustlets. Memory in VTL1 is inaccessible to VTL0, even from VTL0 kernel-mode code.
</Definition>

<Definition term="Trustlet">
A user-mode process running inside VTL1. Trustlets are Microsoft-signed at a specific protected-process equivalent rung within VTL1 and serve as the user-mode hosts for VBS-isolated functionality. `LsaIso.exe` is the trustlet that holds the actual credential material on Credential Guard-enabled hosts.
</Definition>

The architecture is, at the highest level, three layers: VTL0 user-mode, VTL0 kernel, and VTL1 (Secure Kernel plus trustlets). On a Credential Guard-enabled host, `lsass.exe` still exists in VTL0 user-mode, still protects itself with PPL/Lsa, and still answers authentication requests. But it no longer holds the NTLM hashes, Kerberos TGT keys, or Cred Manager domain credentials. Those secrets live in `LsaIso.exe`, a trustlet in VTL1. When LSASS needs to authenticate a credential, it makes a hypercall into VTL1, and `LsaIso.exe` performs the cryptographic operation entirely within VTL1 memory, returning only the result. The keys never leave VTL1.
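
The contract is easy to model: key material lives behind the boundary, and only operation results cross it. A hedged toy sketch -- `Vtl1Trustlet` and `prove_possession` are illustrative names, not Windows APIs, and an HMAC stands in for the real credential operations:

```python
# Toy model of the VTL1 isolation contract: the "trustlet" holds the key
# material and exposes only operation results across the "hypercall"
# boundary; nothing the VTL0 caller receives ever contains the key.
import hashlib, hmac

class Vtl1Trustlet:
    def __init__(self, nt_hash: bytes):
        self.__nt_hash = nt_hash          # lives only "inside VTL1"

    def prove_possession(self, challenge: bytes) -> bytes:
        # stand-in for a challenge-response computed entirely in VTL1
        return hmac.new(self.__nt_hash, challenge, hashlib.sha256).digest()

iso = Vtl1Trustlet(b"\x01" * 16)
response = iso.prove_possession(b"server-challenge")
print(len(response))  # 32-byte proof; the 16-byte hash never crossed
```

From the VTL0 side, `iso` yields proofs that LSASS can forward into an authentication exchange; at no point does the hash itself cross the boundary.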

Microsoft's documentation states the threat model directly [@learn-cg]:

> Credential Guard prevents credential theft attacks by protecting NTLM password hashes, Kerberos Ticket Granting Tickets (TGTs), and credentials stored by applications as domain credentials.
>
> Credential Guard uses Virtualization-based security (VBS) to isolate secrets so that only privileged system software can access them.
>
> Malware running in the operating system with administrative privileges can't extract secrets that are protected by VBS.

The third sentence is the load-bearing one. *Malware running with administrative privileges* maps cleanly to a PPL bypass that achieves code execution at PPL/Lsa. Even from inside `lsass.exe`, the secrets are not there.

<Mermaid caption="PPL and Credential Guard stack -- PPL barriers admin from LSASS in VTL0; VBS barriers VTL0 kernel from LsaIso in VTL1.">
flowchart TD
    subgraph VTL0[VTL0 normal world]
        Admin[Admin / SYSTEM token]
        Lsass[lsass.exe at PPL/Lsa]
        Kern0[VTL0 kernel]
    end
    subgraph VTL1[VTL1 secure world]
        SK[Secure Kernel]
        Iso[LsaIso.exe trustlet]
        Secrets[NTLM hashes, Kerberos TGT keys]
    end
    Admin -- "PPL barrier (lattice)" --x Lsass
    Lsass -- hypercall --> Iso
    Kern0 -- "VBS barrier (VTL boundary)" --x Iso
    Iso --> Secrets
</Mermaid>

The two mechanisms stack rather than overlap. PPL prevents an admin from `OpenProcess(PROCESS_VM_READ, lsass)` at the user-mode lattice level. Credential Guard prevents a kernel-mode attacker who *succeeds* against PPL from finding the keys, because the keys are in VTL1 memory that the VTL0 kernel cannot read at all. itm4n's "complementary" framing in the RunAsPPL writeup is the right operational summary [@itm4n-runasppl]: deploy both, always both.

> **Note:** PPL gates user-mode admins out of LSASS code memory. Credential Guard gates everything else (kernel-mode attackers, BYOVDLL execution-at-PPL/Lsa) out of the secrets themselves by moving the secrets to VTL1. Each mechanism answers a layer of the threat model the other does not.

| Dimension | PPL (LSA protection) | Credential Guard |
|---|---|---|
| Threat model | Administrator -> user-mode LSASS | VTL0 kernel + admin -> credential material |
| Layer | VTL0 user-mode lattice | VTL0 / VTL1 VBS boundary |
| Kernel-mode attacker | Cannot stop them | Stops them (VBS-isolated memory) |
| MSRC classification | Defense in depth | Security boundary |
| Default-on (consumer) | Audit mode, Win11 22H2 | n/a (enterprise) |
| Default-on (enterprise) | Audit mode, Win11 22H2 | Enabled, Win11 22H2 / Win Server 2025 (domain-joined non-DC) |

<Aside label="For the deep treatment of LsaIso, see the VBS Trustlets article in this series">
The architecture of `LsaIso.exe`, its trustlet ID, its IUM EKU, and the hypercall plumbing between LSASS and the trustlet are the subject of a separate article in this series ("VBS Trustlets: What Actually Runs in the Secure Kernel"). The cross-link is deliberate: PPL and Credential Guard are paired in practice, but the architectural depth of VTL1 is its own subject.
</Aside>

Credential Guard's default-on rollout, recorded in Microsoft Learn [@learn-cg]:

> Starting in Windows 11, 22H2 and Windows Server 2025, Credential Guard is enabled by default on domain-joined, non-DC systems that meet hardware requirements.

Two stacked mechanisms; one classified as a security boundary, one not. The next section asks what the classification means.

## 10. Where PPL Isn't a Security Boundary -- Microsoft's Servicing Criteria

Gabriel Landau's "Inside Microsoft's Plan to Kill PPLFault" essay states the classification in one sentence [@elastic-pplfault]:

<PullQuote>
Microsoft does not consider PPL to be a security boundary, meaning they won't prioritize security patches for code-execution vulnerabilities discovered therein, but they have historically addressed some such vulnerabilities on a less-urgent basis.
</PullQuote>

Microsoft's "Windows Security Servicing Criteria" defines the term *security boundary* directly [@msrc-servicing]:

> A security boundary provides a logical separation between the code and data of security domains with different levels of trust. For example, the separation between kernel mode and user mode is a classic [...] security boundary.

<Definition term="Security boundary (MSRC sense)">
A logical separation between code and data of security domains with different levels of trust. Microsoft commits to servicing security boundary violations with out-of-band patches when the severity bar is met. The kernel-mode / user-mode separation is the canonical example. Per Microsoft's published servicing criteria, PPL is *not* on the security-boundary list.
</Definition>

<Definition term="Defense in depth">
A security feature that raises the cost of an attack without guaranteeing prevention. Microsoft treats defense-in-depth features as servicing targets on the standard cumulative-update cadence, not as out-of-band patch priorities. PPL falls into this category per Microsoft's published classification.
</Definition>

The relevant excerpts of the criteria page enumerate which surfaces are and are not boundaries. The live MSRC page renders that enumeration table client-side via JavaScript; the raw HTML returned by automated fetchers contains only the React shell. The text of the enumeration is preserved in the Wayback Machine capture at archive date 2023-05-06 [@msrc-criteria-archive], and Landau's follow-on Elastic post quotes the relevant administrative-process row verbatim [@elastic-byovd-admin]:

> Administrative processes and users are considered part of the Trusted Computing Base (TCB) for Windows and are therefore not strong[ly] isolated from the kernel boundary.

The corresponding row for PPL is the same shape: administrative-process-to-PPL is not isolated as a security boundary. Landau filed VULN-074311 with MSRC in September 2022 disclosing both an admin-to-PPL and a PPL-to-kernel zero-day. The Elastic post records MSRC's classification of the disclosure verbatim [@elastic-byovd-admin]:

> MSRC similarly does not consider admin-to-PPL a security boundary, instead classifying it as a defense-in-depth security feature.

<Aside label="The MSRC enumeration table is JavaScript-rendered">
The MSRC servicing-criteria page's *definition* of "security boundary" is retrievable from raw HTML and verified against the live page. The *enumeration* of which Windows surfaces are or are not boundaries lives in a client-side rendered table and is not present in the raw HTML payload. The verifiable trail for "PPL is excluded from the boundary list" is the Wayback Machine capture combined with Elastic's verbatim quotation of MSRC's classification.
</Aside>

The operational consequence is direct. A published PPL bypass does not trigger an out-of-band patch. It is fixed on the next major-release cadence, sometimes faster if Microsoft has internal motivation. The disclosure-to-fix half-lives are public record:

| Bypass | Disclosed | Microsoft fix | Disclosure-to-fix |
|---|---|---|---|
| Forshaw 2018 JScript-into-PPL | Oct 2018 | Apr 2018 (1803, pre-disclosure) | ~0 months (Microsoft fixed first) |
| itm4n 2021 PPLdump (KnownDlls) | Apr 2021 | Jul 2022 (build 19044.1826) | ~15 months |
| Landau 2023 PPLFault (CI TOCTOU) | Apr-Sep 2023 | Feb 2024 (GA) | ~5-11 months |
| itm4n 2024 BYOVDLL (KeyIso chain) | Aug 2024 | none (open, CVE-by-CVE) | open |

> **Note:** A correctly classified PPL bypass is fixed on the standard cumulative-update cadence, not out-of-band. The implication for defenders is operational: PPL is exactly as strong as the engineering velocity Microsoft chooses to invest in it. Treat detection (Section 11) and the Credential Guard companion (Section 9) as load-bearing.

The reader takeaway is the third Aha moment of the article. PPL is real, kernel-enforced, structurally elegant, and demonstrably effective against the threat it was designed for (administrator-from-user-mode reads of LSASS). It is also explicitly *not* a security boundary per Microsoft's own published servicing policy, and that classification is the most important fact about it. Plan for bypasses. Stack with Credential Guard. Treat detection as primary, not secondary.

## 11. Practical Guide -- Configuring, Verifying, and Monitoring PPL

If you are deploying PPL on a corporate fleet, run this checklist. The order is deliberate: audit before you enforce, verify before you trust the verifier, and detect because no static control survives a motivated adversary.

### Deploy

> **Note:** Enable `AuditLevel = 8` under `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\LSASS.exe` for two months [@learn-runasppl]. This is a *different* registry key from `RunAsPPL` (which lives under `HKLM\SYSTEM\CurrentControlSet\Control\Lsa`); mixing the two values up is the most common Stage 0 deployment error (see §6). Collect CodeIntegrity events 3065 and 3066 to enumerate every LSASS plug-in that would fail enforcement (smart-card middleware, third-party CSPs, password-filter DLLs). Re-sign or replace the failing modules. Then set `RunAsPPL = 1` on Secure Boot-capable machines; the kernel automatically anchors the policy in a UEFI variable. `RunAsPPL = 2` (Win11 22H2+) is the softer option that omits the UEFI variable, for environments that require admin-removable protection.
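The two registry locations above are easy to transpose in fleet tooling. As an illustration only (not a Microsoft tool -- `misplacedValues` and the exported-keys shape are hypothetical), a deployment audit script can flag the mix-up mechanically; the paths themselves come from the Microsoft Learn guidance quoted above:

```javascript
// Expected home for each PPL-related value, per the Learn guidance.
const EXPECTED = {
  'HKLM\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\Image File Execution Options\\LSASS.exe':
    ['AuditLevel'],
  'HKLM\\SYSTEM\\CurrentControlSet\\Control\\Lsa':
    ['RunAsPPL'],
};

// deployedKeys: { keyPath: [valueNames] } as exported from the fleet.
// Returns a description of every value sitting under the wrong key.
function misplacedValues(deployedKeys) {
  const errors = [];
  for (const [key, values] of Object.entries(deployedKeys)) {
    const expected = EXPECTED[key] || [];
    for (const v of values) {
      if (!expected.includes(v)) {
        errors.push(`${v} does not belong under ${key}`);
      }
    }
  }
  return errors;
}

// The classic Stage 0 mistake: AuditLevel placed under the Lsa key.
console.log(misplacedValues({
  'HKLM\\SYSTEM\\CurrentControlSet\\Control\\Lsa': ['AuditLevel'],
}));
```

The check is deliberately dumb: it knows nothing about value data, only placement, which is exactly the failure mode described above.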

> **Note:** For third-party EDR, confirm the agent daemon runs at `PPL/Antimalware` (signer rung 3, byte `0x31`). Process Explorer exposes this via View -> Select Columns -> Protection. System Informer (the modern Process Hacker fork that itm4n recommends in his BYOVDLL writeup [@itm4n-ghost-part1]) shows the same field in its process list. If your EDR is *not* running at `PPL/Antimalware`, it does not have the kernel's protection against admin tampering even when its vendor claims "protected" in marketing material. <Sidenote>Process Explorer's "Protection" column ships in the canonical Sysinternals distribution [@sysinternals-procexp]; it reads `EPROCESS.Protection` via the `NtQueryInformationProcess` entry point [@learn-ntqueryinfoproc], although the specific `ProcessProtectionInformation` information-class value is not enumerated in the public Learn `PROCESSINFOCLASS` table -- the value is community-documented from Windows headers and reverse engineering rather than from a Microsoft Learn API reference.</Sidenote>

### Verify

> **Note:** On a host you suspect of misconfiguration, attach WinDbg to the kernel and run `!process 0 7 lsass.exe`. The output includes the `_PS_PROTECTION` byte. Decode it with the formula from §3 above: `((value & 0xF0) >> 4)` is the signer rung; `value & 0x07` is the type; `(value >> 3) & 1` is the audit bit. A `RunAsPPL = 1` host yields `0x41` (PPL + Lsa). The Defender service yields `0x31` (PPL + Antimalware). `csrss.exe` yields `0x61` (PPL + WinTcb). If `lsass.exe` shows `0x00`, the registry policy did not take effect on this boot.

<RunnableCode lang="js" title="Decode a _PS_PROTECTION byte (re-run as a verification utility)">{`
function decode(b) {
  const t = b & 0x07, a = (b >> 3) & 0x01, s = (b >> 4) & 0x0F;
  const tn = ['None', 'ProtectedLight', 'Protected'];
  const sn = ['None','Authenticode','CodeGen','Antimalware',
              'Lsa','Windows','WinTcb','Max'];
  return '0x' + b.toString(16).padStart(2,'0') + ' = ' +
         (sn[s] || s) + '-' + (tn[t] || t) +
         (a ? ' (Audit on)' : '');
}
// Three benchmark values you should be able to recognise by sight
console.log(decode(0x31)); // MsMpEng.exe (Defender at PPL/Antimalware)
console.log(decode(0x41)); // lsass.exe under RunAsPPL=1
console.log(decode(0x61)); // csrss.exe (PPL/WinTcb)
`}</RunnableCode>
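The decoder above feeds an access check, and it helps to see the two together. The sketch below is a deliberately simplified model of the lattice decision: the real kernel consults per-signer dominate masks (per public reverse engineering of `RtlTestProtectedAccess`), and collapsing that table to a plain rung comparison is an assumption that matches the common cases in this article, not the kernel's exact logic:

```javascript
// Signer rungs as used throughout this article (type 0 = unprotected,
// 1 = PPL). Simplified-model assumption: a caller may read a protected
// target's memory only if its own signer rung is >= the target's.
const SIGNER = { None: 0, Authenticode: 1, CodeGen: 2, Antimalware: 3,
                 Lsa: 4, Windows: 5, WinTcb: 6 };

function canReadMemory(callerSigner, callerType, targetSigner, targetType) {
  if (targetType === 0) return true;   // unprotected target: the ACL decides
  if (callerType === 0) return false;  // unprotected caller vs any PPL: denied
  return callerSigner >= targetSigner; // simplified dominance check
}

// SYSTEM admin (unprotected) vs lsass.exe at PPL/Lsa: the Mimikatz denial.
console.log(canReadMemory(SIGNER.None, 0, SIGNER.Lsa, 1));        // false
// Defender at PPL/Antimalware (rung 3) vs PPL/Lsa (rung 4): also denied.
console.log(canReadMemory(SIGNER.Antimalware, 1, SIGNER.Lsa, 1)); // false
// csrss.exe at PPL/WinTcb (rung 6) dominates PPL/Lsa.
console.log(canReadMemory(SIGNER.WinTcb, 1, SIGNER.Lsa, 1));      // true
```

Note what the model makes visible: the denial in §1 happens in the second branch, before any token privilege or security descriptor is consulted -- which is why `SeDebugPrivilege` never enters the picture.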

### Monitor

> **Note:** The CodeIntegrity provider emits four event IDs that matter for PPL monitoring [@learn-runasppl]:
>
> | Event ID | Provider | What it tells you |
> |---|---|---|
> | 3033 | Microsoft-Windows-CodeIntegrity | A DLL load was blocked by CI (PPL or otherwise) |
> | 3063 | Microsoft-Windows-CodeIntegrity | Enforcement mode: LSASS plug-in failed the shared-section security requirement (complement of audit-mode event 3065) |
> | 3065 | Microsoft-Windows-CodeIntegrity | Audit mode: LSASS plug-in failed the shared-section requirement |
> | 3066 | Microsoft-Windows-CodeIntegrity | Audit mode: LSASS plug-in failed the Microsoft signing-level requirement |
>
> Sysmon Event 10 (ProcessAccess) captures `OpenProcess` denials with the requested access mask and is the cheapest detection for a Mimikatz-shaped attempt against a RunAsPPL-protected `lsass.exe`. A burst of 3033 events from a non-Microsoft process targeting `lsass.exe` is the canonical signal that a PPL bypass attempt is under way.
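A sketch of the Sysmon side of that detection follows. The field names (`TargetImage`, `GrantedAccess`) are from Sysmon's Event 10 schema and the mask constants are the documented `PROCESS_*` access rights; the triage logic itself is illustrative, not a vendor rule:

```javascript
// Documented Windows process access rights relevant to credential dumping.
const PROCESS_VM_READ = 0x0010;
const PROCESS_QUERY_LIMITED_INFORMATION = 0x1000;

// event: one parsed Sysmon Event 10 record. Flags any attempt to read
// lsass.exe memory -- the Mimikatz-shaped access pattern.
function isSuspicious(event) {
  const mask = parseInt(event.GrantedAccess, 16);
  const readsMemory = (mask & PROCESS_VM_READ) !== 0;
  return event.TargetImage.toLowerCase().endsWith('\\lsass.exe') && readsMemory;
}

// 0x1010 = PROCESS_VM_READ | PROCESS_QUERY_LIMITED_INFORMATION,
// a mask Mimikatz-style tools commonly request.
console.log(isSuspicious({
  TargetImage: 'C:\\Windows\\System32\\lsass.exe',
  GrantedAccess: '0x1010',
})); // true

// Query-only access without VM_READ does not trip the rule.
console.log(isSuspicious({
  TargetImage: 'C:\\Windows\\System32\\lsass.exe',
  GrantedAccess: '0x1000',
})); // false
```

In production you would additionally whitelist known handle-openers (AV engines, backup agents) by `SourceImage`, but the mask-plus-target predicate is the load-bearing part.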

> **Note:** PPL prevents admin-from-user-mode reads of LSASS. Credential Guard prevents kernel-mode reads of the credentials themselves (and BYOVDLL-style execution at PPL/Lsa). Deploy both. itm4n's "complementary" framing in his RunAsPPL writeup [@itm4n-runasppl] is the right operational model. On Win11 22H2 and Windows Server 2025, Credential Guard is default-on for domain-joined non-DC systems with VBS-capable hardware [@learn-cg]; on older fleets, enable it explicitly via Group Policy or the Device Guard / Credential Guard configuration script. Always both -- either alone leaves a layer of the threat model uncovered.

> **Note:** If you are an EDR vendor wanting your daemon to run at `PPL/Antimalware`, the path is fixed [@learn-mvi] [@learn-am-services]:
>
> 1. Hold Microsoft Virus Initiative membership; maintain independent-lab certification (AV-Comparatives, AV-Test, SE Labs, MRG Effitas, SKD Labs, VB 100, West Coast Labs, AVLab Cybersecurity Foundation).
> 2. Author an ELAM driver with an embedded `<ELAM>` resource section enumerating your user-mode binary signing-certificate hashes.
> 3. Submit the driver through WHQL for Microsoft co-signing.
> 4. Use Trusted Signing for your user-mode binaries.
> 5. Verify with Process Explorer that the service launches at `PPL/Antimalware` after install.

Practitioners who follow the checklist still need to know the common misconceptions. The next section catalogues them.

## 12. FAQ -- Common Misconceptions

Seven questions practitioners ask after their first PPL deployment.

<FAQ title="Frequently asked questions">

<FAQItem question="Does PPL stop an administrator from killing my AV?">
Yes, for direct termination via `OpenProcess(PROCESS_TERMINATE, ...)`: an admin without a higher signer rung cannot kill a `PPL/Antimalware` daemon outright. No, for legitimate uninstall: the vendor's MSI installer (or equivalent) typically signals the daemon to shut itself down through its own service-control path, which is gated by ACL and not by the PPL lattice. Operationally, expect administrators to be able to uninstall your EDR but not to terminate its main process from outside the vendor toolchain.
</FAQItem>

<FAQItem question="Is PPL the same as Credential Guard?">
No. itm4n's verbatim warning is worth repeating [@itm4n-runasppl]: "I noticed that this protection tends to be confused with Credential Guard, which is completely different." PPL protects `lsass.exe` *as a process* from admin-from-user-mode reads. Credential Guard moves the *credentials themselves* into VTL1 memory via VBS. PPL is a VTL0 user-mode lattice control. Credential Guard is a VTL0 / VTL1 hypervisor boundary. They stack; see Section 9 for the layering and Section 11 Item 5 for the deployment recommendation.
</FAQItem>

<FAQItem question="Why doesn't Microsoft service PPL bugs urgently?">
Because Microsoft has not classified PPL as a security boundary. The Windows Security Servicing Criteria define a security boundary as a logical separation between security domains at different levels of trust, and Microsoft's published enumeration excludes administrative-process-to-PPL from that list [@msrc-servicing] [@elastic-byovd-admin]. PPL is treated as a defense-in-depth feature. The operational implication is that PPL bypasses are fixed on the next major release cadence rather than out-of-band, with disclosure-to-fix half-lives ranging from approximately five to fifteen months historically (see Section 10 for the data).
</FAQItem>

<FAQItem question="Can I make my own application a PPL?">
Practically no for non-AV applications. The protected-process EKU OIDs are gated by Microsoft's certificate authorities; only the Antimalware rung admits third-party certificates, and admission is mediated by ELAM driver + Microsoft Virus Initiative membership [@learn-mvi]. Hobbyist tooling cannot opt in. There is no public path for a non-AV third-party application to claim a PPL rung. If your application requires PPL-style anti-tampering, the realistic options are (a) become an MVI member if your application is an AV/EDR, (b) use Process Mitigation Policies such as Code Integrity Guard for code-injection resistance, or (c) deploy your sensitive operations inside a separate Microsoft-signed service.
</FAQItem>

<FAQItem question="What's the difference between PPL and a 'protected service'?">
"Protected service" is informal terminology for a Windows service whose host process runs as a PPL, with the Service Control Manager configured to launch it at a specific signer rung. The deployment plumbing (SCM service configuration, service-DLL packaging, the signing of the host binary) is what makes a service "protected." The PPL machinery is what makes the host process actually resistant to tampering. The two terms describe the same thing from different angles -- one from the SCM-management view, one from the kernel-access-check view.
</FAQItem>

<FAQItem question="Does RunAsPPL break smart cards?">
Only if the smart-card middleware DLL is not signed at the LSA level (signer rung 4). Most major smart-card vendors have updated their middleware to be Microsoft-signed at the required level, but legacy or in-house middleware frequently fails enforcement. The recommended workflow is to run `AuditLevel = 8` for two months [@learn-runasppl], collect CodeIntegrity 3065 / 3066 events, enumerate the failing modules, re-sign or replace them, and only then switch to `RunAsPPL = 1`. Skipping the audit period is the single most common cause of authentication outages during LSA protection rollouts.
</FAQItem>

<FAQItem question="If a kernel-mode driver can disable PPL, why bother with it?">
Because the threat model PPL answers is *administrator-from-user-mode*, not *administrator-from-kernel-mode*. PPL is a kernel-enforced gate in the access-check pipeline, but a kernel-mode driver that can write to `EPROCESS.Protection` can zero the byte and disable the gate for any process. The defense against the kernel-mode attacker is a different mechanism: VBS-isolated credentials in VTL1 (Credential Guard), with HVCI / kernel-mode integrity controls preventing arbitrary kernel-mode code from running in the first place. PPL stops one threat; Credential Guard stops the threat one rung up; and the two are intended to be deployed together (Section 9, Section 11 Item 5).
</FAQItem>

</FAQ>

The arc has run from a single Mimikatz error code to a kernel-enforced lattice, a third-party admission path mediated by ELAM and MVI, an arms race shaped by a single structural insight that the kernel verifies the channel and not the behaviour, and a stacked companion boundary that lives in VTL1 because VTL0 has run out of places to hide a key. PPL is not a security boundary. That classification is not a footnote; it is the most important fact about it, because it tells defenders that the mechanism is exactly as strong as the engineering velocity Microsoft chooses to invest. Deploy it. Stack it with Credential Guard. Monitor for the next bypass.

> **Key idea:** The kernel verifies the channel. It does not verify the behaviour. Every PPL bypass since 2018 has lived in that seam, every fix has narrowed the channel, and the seam survives because behaviour is, by Rice's theorem, structurally outside what static signature verification can reason about.

<StudyGuide slug="protected-process-light-the-ppl-signer-hierarchy-from-wintcb-to-antimalware" keyTerms={[
  { term: "Protected Process Light (PPL)", definition: "A kernel-enforced gating model decorating a process with a structured protection level (Type, Audit, Signer) and rejecting OpenProcess requests from callers below the target's signer rung." },
  { term: "_PS_PROTECTION byte", definition: "The EPROCESS field encoding Type (3 bits), Audit (1 bit), Signer (4 bits) in a single UCHAR; read on every NtOpenProcess." },
  { term: "Signer rung", definition: "The four-bit Signer field of _PS_PROTECTION naming the trust tier of a protected process; values include Authenticode, Antimalware, Lsa, Windows, and WinTcb." },
  { term: "RunAsPPL", definition: "The HKLM\\SYSTEM\\CurrentControlSet\\Control\\Lsa registry knob that launches lsass.exe at PPL/Lsa on the next boot; value 1 anchors the policy in a UEFI variable on Secure Boot machines." },
  { term: "ELAM", definition: "Early Launch Anti-Malware driver -- a Microsoft-certified kernel driver that enrolls a vendor's user-mode signing certificates at PPL/Antimalware via an embedded resource section." },
  { term: "BYOVDLL", definition: "Bring Your Own Vulnerable DLL -- a bypass class against signature-gated security mechanisms in which the attacker loads a legitimately signed but historically vulnerable binary and exploits the known vulnerability inside it." },
  { term: "Credential Guard", definition: "A VBS-based isolation mechanism that moves NTLM hashes, Kerberos TGT keys, and Cred Manager credentials out of lsass.exe and into LsaIso.exe in VTL1." },
  { term: "Security boundary (MSRC)", definition: "Per Microsoft's published servicing criteria, a logical separation between code and data of security domains at different trust levels; PPL is excluded from this list and treated as defense in depth." }
]} />
