# "Who Is This Code?" -- The Quiet 33-Year Reinvention of App Identity in Windows

> NT 3.1 could prove which user typed at the keyboard but had no answer to which code was running. Eight successive primitives later, Windows is still answering the same question.

*Published: 2026-05-08*
*Updated: 2026-05-09*
*Canonical: https://paragmali.com/blog/who-is-this-code----the-quiet-33-year-reinvention-of-app-ide*
*License: CC BY 4.0 - https://creativecommons.org/licenses/by/4.0/*

---
<TLDR>
Windows NT 3.1 (1993) could prove which **user** typed at the keyboard but had no answer to **which code was running**. Over the next thirty-three years, eight successive primitives -- Authenticode, Kernel-Mode Code Signing, Protected Process Light, AppContainer with the Package SID, App Control for Business, Mark of the Web with SmartScreen, the Vulnerable Driver Block List, and Pluton-rooted attestation -- accreted into a single layered code-identity stack. Each was forced into existence by a specific, named failure of the one before it. This is that story, told as one system.
</TLDR>

## Two identities, one operating system

On July 27, 1993 -- the day Windows NT 3.1 shipped -- the new operating system could prove with cryptographic precision who Alice was, which group she belonged to, which file she was allowed to open, and at what level of privilege she was running. It could prove exactly nothing about the program she had just double-clicked.

Thirty-three years later, "Alice" has barely changed. The code she runs has acquired a publisher signature stamped onto its Portable Executable file, a kernel-loader gate that refuses to load unsigned drivers, a signer level in a runtime lattice that decides whether one process can read another's memory, a Package SID derived from a Crockford-Base32 hash of the manifest publisher [@ms-package-identity], a publisher-rule entry in a centrally managed App Control policy [@ms-appcontrol], a Mark-of-the-Web alternate data stream from the browser that downloaded it [@ms-fscc-motw], a SmartScreen reputation score [@learn-smartscreen], a possible entry on a Microsoft-curated denylist that overrides its own valid signature [@msft-driver-blocklist], and -- on a Pluton-equipped 2026 laptop -- a hardware-attested measurement of the boot chain that loaded it [@learn-pluton]. Every one of those identities was forced into existence by a specific failure of the one before. This is that story.

A modern symptom makes the asymmetry concrete. In April 2026, attackers seized the publishing pipeline for the `@bitwarden/cli` npm package -- a credential they had no business holding -- and shipped a backdoored release for ninety-three minutes before maintainers caught it [@bitwarden-statement]. Code identity, as it existed at every layer of every operating system that consumed that package, said the artifact was authentic. The signature was valid. The publisher's account was real. The package metadata was correct. Every check passed. *And the binary was hostile.* That gap, between "who shipped it" and "is it safe to run," is the same gap NT 3.1 first stepped over in 1993 and that Windows has been trying to close ever since.

The Bitwarden case sits in a long company. Stuxnet's stolen Realtek and JMicron driver-signing keys (2010) [@symantec-stuxnet], Flame's MD5 collision against Microsoft's own intermediate CAs (2012) [@ms-2718704], the ASUS ShadowHammer pipeline compromise (operation 2018, disclosed 2019) [@securelist-shadowhammer], every "Bring Your Own Vulnerable Driver" rootkit since 2018 -- they all have the same shape. A valid Windows-anchored signature, on code the publisher did not intend to ship, on a machine that loaded it without complaint.

> **Key idea:** Every Windows code-identity primitive introduced since 1996 was forced into existence by a specific failure of the layer before it. The article's spine is that cascade.

The pieces in 2026 are not a feature checklist. They are a layered system, each layer answering a question its predecessor structurally could not. If you read the Microsoft Learn pages one at a time you see eight unrelated products. If you read them in the order their failures forced them into existence, you see one operating system slowly learning to name the code it runs.

<Mermaid caption="Generation-by-generation evolution. Each layer is a forced response to a specific failure of the one before.">
timeline
    title Windows code identity, 1993 to 2026
    1993 : NT 3.1 ships : user-only principal
    1996 : Authenticode : publisher signature on PE
    2002 : Trustworthy Computing memo : SDL forcing function
    2006 : Vista x64 KMCS : refusal of unsigned kernel code
    2010 : Stuxnet : stolen Realtek + JMicron keys
    2012 : AppContainer : per-app SID
    2012 : Flame : MD5 collision against MS CA
    2013 : Windows 8.1 PPL : signer level as runtime ACL
    2015 : Device Guard / WDAC : publisher policy
    2019 : ASUS ShadowHammer disclosed : compromised pipeline (2018 operation)
    2020 : Pluton announced : in-die security processor
    2022 : Driver Block List default-on : signed != trusted
    2024 : CrowdStrike outage : placement is identity
    2025 : MVI 3.0 user-mode preview : kernel/user split
</Mermaid>

*Timeline sources, in row order (Mermaid syntax does not permit inline tokens inside the timeline block; each event is independently cited in the surrounding prose as well):* 1993 NT 3.1 [@custer-inside-nt]; 1996 Authenticode [@ms-news-1996-authenticode]; 2002 Trustworthy Computing memo [@cnet-gates-memo] [@theregister-tcm]; 2006 Vista x64 KMCS [@ms-kmcs]; 2010 Stuxnet [@symantec-stuxnet]; 2012 AppContainer [@ms-package-identity]; 2012 Flame MD5 collision [@ms-2718704] [@msrc-2718704]; 2013 Windows 8.1 PPL [@ionescu-ppl] [@ms-protected-processes]; 2015 Device Guard / WDAC [@ms-appcontrol]; 2019 ASUS ShadowHammer disclosed (operation 2018) [@securelist-shadowhammer]; 2020 Pluton announced [@learn-pluton]; 2022 Driver Block List default-on [@msft-driver-blocklist]; 2024 CrowdStrike outage [@ms-crowdstrike-blog] [@msft-crowdstrike-best-practices]; 2025 MVI 3.0 user-mode preview [@weston-2024] [@weston-2025].

If user identity was easy, why did code identity take thirty-three years -- and where exactly did each generation break?

## Why code had no name

Helen Custer's 1992 *Inside Windows NT* opens its security chapter on a single principle: the user is the principal [@custer-inside-nt]. Every action the kernel arbitrates is attributable to a user account. The token that the kernel manufactures at logon carries a Security Identifier (SID) for the user, SIDs for each group the user belongs to, a privilege bitmap, and a set of impersonation flags. Every Discretionary Access Control List on every securable object is evaluated against that token [@ms-sids]. The kernel never asks what binary is running. It asks who is running it.

<Definition term="Security Identifier (SID)">
A variable-length value that uniquely identifies a security principal in Windows. Users, groups, computer accounts, and (later) packages and capabilities all receive SIDs. Until Windows 8, every SID encoded a *user* or *group*; AppContainer and Package SIDs (the `S-1-15-2-...` form) extended SIDs to name code instead.
</Definition>
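The shape of that check fits in a few lines. The sketch below is a deliberate simplification -- real ACE evaluation is strictly in list order inside `SeAccessCheck`, and the field names here are invented -- but it makes the founding asymmetry visible: every input is a property of the user token, and the identity of the running binary appears nowhere.

```javascript
// Minimal model of the NT 3.1-era access check: the verdict is a function
// of the user token and the object's DACL only. Nothing about the binary
// that is executing enters the computation. (Illustrative simplification,
// not SeAccessCheck: real evaluation walks ACEs strictly in list order.)
function accessAllowed(token, dacl, desiredMask) {
  let granted = 0;
  for (const ace of dacl) {
    const applies = ace.sid === token.userSid || token.groupSids.includes(ace.sid);
    if (!applies) continue;
    if (ace.type === 'deny' && (ace.mask & desiredMask)) return false; // deny wins
    if (ace.type === 'allow') granted |= ace.mask;
  }
  return (granted & desiredMask) === desiredMask;
}

// Alice's token grants her whatever the DACL says -- including to the
// ActiveX control she just double-clicked, which runs with this same token.
const alice = { userSid: 'S-1-5-21-1-1-1-1001', groupSids: ['S-1-5-32-545'] };
const fileDacl = [{ type: 'allow', sid: 'S-1-5-32-545', mask: 0x1 }]; // Users: read
```

Every layer introduced in the rest of this article amounts to adding an input to that function that names the code instead of the user.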

For 1993's threat model, the user-as-principal model was defensible. NT 3.1 lived on multi-user workstations in a trusted local-area network. The attacker the designers worried about was a malicious insider, a contractor with the wrong group membership, an admin who exceeded his authority. Code arrived on floppies and CDs from coworkers and shrink-wrapped vendors; nobody downloaded executables off the public internet, because for most of the world there was no public internet to download them from.

<Sidenote>Integrity levels (Low, Medium, High, System) were added later, in Vista (2006), and they are still attributes of the *token*, not of the binary on disk. A Low-integrity Internet Explorer process and a Low-integrity Notepad receive the same write restrictions because their tokens carry the same Mandatory Integrity Control label, regardless of which binary loaded.</Sidenote>

Then came Internet Explorer 3.0 in August 1996 and ActiveX. Microsoft repositioned OLE/COM as a cross-internet component model and committed to letting any compliant ActiveX control execute inside the browser [@ms-news-1996-authenticode]. The decision was not casually made; it was the strategic foundation of Microsoft's bet on the web. But its consequence at the security layer was immediate and devastating.

If Alice double-clicks a control on a web page, the operating system's question is "who is running this?" The answer is "Alice." She is allowed to run anything she wants. The control does whatever it likes -- with her token, her files, her privileges, her network access. The user-as-principal model has no second axis to invoke.

There was no theoretical fix at this layer. Alice did genuinely request the download. She did genuinely double-click. NT had no other principal to consult. The model was complete, internally consistent, and exactly wrong for the new threat surface.

What was missing was a cryptographic, network-portable identity for the code itself, attached to the binary in a way nobody downstream could forge. If the kernel cannot see the code, who can put a name on it -- and how do we attach that name to a running PE?

## The first naive attempt: Authenticode (1996)

On August 7, 1996, Microsoft and VeriSign jointly announced the first cryptographic answer Windows had ever offered to "who is this code?" The press release ran twenty-two paragraphs and named every design choice that the next thirty years of Windows code identity would inherit: an X.509 certificate issued by an external commercial Certificate Authority, a PKCS#7 SignedData blob attached directly to the binary, and verification at download or install time by Internet Explorer 3.0 [@ms-news-1996-authenticode].

<Definition term="Authenticode">
A cryptographic format for binding a publisher's identity and a tamper-evident hash to a Portable Executable. The signature is stored in the PE Attribute Certificate Table as a PKCS#7 SignedData structure containing an X.509 certificate chain and a hash that excludes the checksum field, the certificate-table directory entry, and the certificate table itself. Authenticode names the *publisher*, not the code; this is the founding constraint the rest of the article is forced to work around.
</Definition>

<PullQuote>
"The new Microsoft Authenticode technology uniquely identifies the publisher of a piece of software and provides assurance to end users that it has not been tampered with or modified." -- Microsoft press release, August 7, 1996 [@ms-news-1996-authenticode]
</PullQuote>

That sentence is the founding promise of Windows code identity. Read it once and the rest of the article becomes inevitable. Authenticode promises two things. It identifies the publisher. It detects tampering. It does not promise that the publisher is trustworthy, that the publisher's key is uncompromised, or that the bytes it covers are safe to execute. Three decades of failure modes follow from exactly that scoping.

The mechanism is precise enough to demand a diagram. SignTool computes a hash that deliberately skips three regions of the PE: the checksum field (which the loader recomputes), the certificate-table directory entry, and the certificate table itself. The signature does not have to sign the bytes of its own embedding [@ms-pe-format].

It then forms a PKCS#7 SignedData structure [@rfc-2315] containing the hash, an algorithm identifier, the X.509 chain, and an optional RFC 3161 timestamp. That blob is appended to the certificate table. At verify time, `WinVerifyTrust` recomputes the hash, walks the chain to a trusted root, and (if a timestamp is present) honours signatures that were valid as of the timestamped time even if the issuer has since revoked the certificate [@ms-cryptotools].

<Mermaid caption="How Authenticode signs and verifies a PE file. The hash deliberately excludes the certificate table, so the signature does not sign itself.">
sequenceDiagram
    participant Dev as Developer
    participant Sign as SignTool
    participant PE as PE binary
    participant Win as WinVerifyTrust
    participant CA as CA / chain store
    Dev->>Sign: signtool sign /a app.exe
    Sign->>PE: hash bytes (skip checksum + cert table)
    Sign->>PE: build PKCS#7 SignedData
    Sign->>PE: append RFC 3161 timestamp
    Sign->>PE: write into Attribute Cert Table
    Note over Win: at install / download time
    Win->>PE: re-hash same byte ranges
    Win->>PE: extract PKCS#7 SignedData
    Win->>CA: verify X.509 chain to trusted root
    CA-->>Win: chain ok
    Win-->>Win: trust verdict (advisory pre-Vista)
</Mermaid>
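The byte-level layout is concrete enough to parse. The sketch below locates the Attribute Certificate Table entry in a PE image -- the one data-directory entry whose contents the Authenticode hash skips. Offsets follow the published PE/COFF layout [@ms-pe-format]; this is an illustration, not a hardened parser.

```javascript
// Locate the Attribute Certificate Table entry in a PE image. This is
// data-directory entry 4, and -- uniquely -- its "offset" field is a raw
// file offset, not an RVA, because the certificate table is never mapped.
function certTableEntry(buf) {
  const e_lfanew = buf.readUInt32LE(0x3c);          // DOS header -> PE header offset
  if (buf.readUInt32LE(e_lfanew) !== 0x00004550) {  // "PE\0\0"
    throw new Error('not a PE image');
  }
  const optHeader = e_lfanew + 4 + 20;              // skip signature + COFF header
  const magic = buf.readUInt16LE(optHeader);        // 0x10b = PE32, 0x20b = PE32+
  const ddBase = optHeader + (magic === 0x20b ? 112 : 96); // data directories
  const secDir = ddBase + 4 * 8;                    // entry 4 = Certificate Table
  return {
    offset: buf.readUInt32LE(secDir),               // file offset of WIN_CERTIFICATE
    size: buf.readUInt32LE(secDir + 4),             // bytes excluded from the hash
  };
}
```

Everything between `offset` and `offset + size` sits outside the signed hash, which is exactly the region CVE-2013-3900 abuses.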

Three structural failure modes shipped on day one and still ship in 2026.

**Userland was advisory.** A signed `.exe` ran. An unsigned `.exe` also ran. Internet Explorer would prompt the user with a publisher name, but the prompt was a UI feature, not a kernel gate. The signature was a credential offered for inspection, never a wall the loader refused to cross. Closing this gap took ten years for kernel code (Authenticode 1996 [@ms-news-1996-authenticode] -> KMCS, Vista 2006 [@ms-kmcs]) and nineteen years for managed user-mode policy (Authenticode 1996 [@ms-news-1996-authenticode] -> Device Guard, 2015 [@ms-appcontrol]). Unmanaged consumer Windows in 2026 still permits arbitrary unsigned `.exe` to run if the user clicks through SmartScreen.

**The signed hash did not cover the whole file.** This is CVE-2013-3900, disclosed by Microsoft on December 10, 2013 in security bulletin MS13-098 [@ms13-098]. The Authenticode hash skips the certificate-table region by design, and the verifier in `WinVerifyTrust` did not constrain the size of the unsigned PKCS#7 blob. An attacker could append arbitrary unauthenticated bytes inside the `WIN_CERTIFICATE` structure of an already-signed PE without invalidating the signature.

The fix was a registry value, `EnableCertPaddingCheck=1`, that turned on strict verification. Microsoft chose not to enable it by default. Twelve years later, the National Vulnerability Database still records the same scoping note: "Microsoft does not plan to enforce the stricter verification behavior as a default functionality on supported releases of Microsoft Windows" [@nvd-cve-2013-3900]. CISA added CVE-2013-3900 to its Known Exploited Vulnerabilities catalog on January 10, 2022 -- eight years after disclosure, because attackers were still abusing the unfixed default [@nvd-cve-2013-3900].

> **Note:** CVE-2013-3900 is still default-off in 2026. On any Windows endpoint where strict signature verification matters, set `HKLM\Software\Microsoft\Cryptography\Wintrust\Config\EnableCertPaddingCheck=1` (and the WOW6432Node mirror on 64-bit). Microsoft documents the change as opt-in by design [@msrc-cve-2013-3900].
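A toy model makes the scoping visible. Every field name below is invented for illustration -- this is not a real verifier -- but it captures why a valid signature survives appended bytes when the padding check is off.

```javascript
// Model of the CVE-2013-3900 gap: the Authenticode hash never covers the
// certificate table, so bytes smuggled into WIN_CERTIFICATE leave the
// signature intact. EnableCertPaddingCheck=1 rejects the surplus.
// All field names are illustrative, not a real parser.
function signatureVerifies(pe, strictPadding) {
  const declared = pe.certTableBytes;       // bytes actually present on disk
  const accounted = pe.pkcs7Bytes;          // bytes the PKCS#7 structure explains
  const pad = (8 - (accounted % 8)) % 8;    // spec permits 8-byte alignment padding
  if (strictPadding && declared > accounted + pad) return false; // surplus rejected
  return pe.hashOfSignedRangesValid;        // cert table is outside the hash anyway
}

// A signed PE with 224 unauthenticated bytes appended inside the cert table:
const tampered = { certTableBytes: 1024, pkcs7Bytes: 800, hashOfSignedRangesValid: true };
```

`signatureVerifies(tampered, false)` returns `true` -- the 2013 default, still the default in 2026 -- while `signatureVerifies(tampered, true)` returns `false`.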

**Timestamped signatures survive revocation.** The trust evaluator in `WinVerifyTrust` is told to trust signatures as of the timestamped instant, not as of now. Removing this property would invalidate large catalogs of legitimate, archived signed software whose signing certificates have since expired [@ms-cryptotools]. The same property is what let the Stuxnet drivers load on every Windows machine that received them, because Microsoft revoked the Realtek and JMicron certificates *after* Stuxnet had already shipped.

<Sidenote>The architectural choice here is genuinely hard. Synchronous global revocation would break offline software install. Asynchronous revocation, the alternative Microsoft chose, lets pre-revocation signatures continue to verify forever. There is no third option inside the Authenticode design.</Sidenote>
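The revocation semantics reduce to a one-line comparison. The sketch below is hedged on both axes -- the field names are invented and real `WinVerifyTrust` policy is far richer -- but the Stuxnet outcome falls out of it directly.

```javascript
// The asynchronous-revocation rule in miniature: a counter-signed timestamp
// pins the validity question to the signing instant, not to "now".
// Field names are illustrative, not a WinVerifyTrust API.
function verdict({ chainValid, timestampedAt, revokedAt }) {
  if (!chainValid) return 'untrusted';
  if (revokedAt == null) return 'trusted';        // nothing was ever revoked
  if (timestampedAt == null) return 'untrusted';  // no timestamp: revocation wins
  return timestampedAt < revokedAt ? 'trusted' : 'untrusted';
}

// Stuxnet's shape: drivers timestamped months before VeriSign revoked the
// Realtek certificate on 2010-07-16 (the signing date here is illustrative).
const stuxnetDriver = {
  chainValid: true,
  timestampedAt: Date.parse('2010-01-25'),
  revokedAt: Date.parse('2010-07-16'),
};
// verdict(stuxnetDriver) -> 'trusted', on every machine, indefinitely
```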

Pull these three threads together and the first aha falls out. Authenticode names the *publisher*, not the code. A signed binary is a credential, not a verdict. The signature proves the bytes came from a holder of the publisher's private key. It does not prove the publisher is trustworthy, that the publisher's key has not been stolen, or that the bytes are safe to execute. Every failure mode of the next twenty-five years lives in that gap.

Six years of failure modes had to accumulate before Microsoft executive priorities caught up. On January 15, 2002, Bill Gates sent the "Trustworthy Computing" memo company-wide, declaring security a higher priority than features and freezing engineering work for security review across every Microsoft product [@cnet-gates-memo] [@theregister-tcm]. The memo did not specify a code-identity mechanism. It is in this story because every later code-identity primitive -- the Security Development Lifecycle's mandatory SignTool integration, the XP SP2 hardening pass that produced MOTW, and the Vista work that produced KMCS -- shipped under the executive cover the memo provided [@windows-internals-7e].

If unsigned code still runs in userland, what makes us think the same primitive will work for a kernel driver -- where the wrong binary owns the operating system?

## The first refusal: KMCS, EV, and the WHQL pipeline (Vista, 2006)

Vista x64 shipped in November 2006 as the first Windows release that *refuses to load unsigned kernel code* [@ms-kmcs]. The refusal was uncompromising. The kernel loader and the Plug-and-Play manager call into `WinVerifyTrust` for every driver image; if the chain does not terminate at one of a small set of Microsoft-trusted roots, `MmLoadSystemImage` returns `STATUS_INVALID_IMAGE_HASH` and the driver does not load.

<Definition term="Kernel-Mode Code Signing (KMCS)">
The Vista-era policy that requires every kernel-mode driver to carry an Authenticode signature chained to a Microsoft-trusted root. From Windows 10 1607 onward (the August 2016 Anniversary Update), only drivers signed by Microsoft via the Hardware Developer Center are accepted on Secure-Boot systems; end-entity cross-signed certificates issued before July 29, 2015 are grandfathered for legacy devices [@ms-kmcs].
</Definition>

The mechanism is a load-time gate. In 2026, Microsoft offers three signing tiers that all terminate at a Microsoft cross-signed cert: HLK-tested (the full Windows Hardware Lab Kit run, eligible for retail Windows Update distribution), attestation-signed (lighter-weight, EV cert plus Microsoft attestation key, no hardware testing), and preproduction (developer signing for pre-release Windows builds) [@learn-driver-signing-offerings] [@ms-attestation-signing]. Driver `.cat` catalog files extend Authenticode coverage from a single PE to an entire driver package, including INF files and supporting executables [@learn-embedded-sig].

EV certificates -- Extended Validation, with mandatory hardware-security-module key storage and audited issuance -- became the practical floor for kernel signing. The reason was economic, not ceremonial. A Domain Validated Authenticode cert from a commodity CA in that era could be obtained cheaply, often with little more than a working email address. EV raised the cost and binding strength of the publisher claim by an order of magnitude.

Then, on June 17, 2010, Sergey Ulasen of the Belarusian anti-virus vendor VirusBlokAda flagged a strange piece of malware on a customer machine in Iran. It had been signed [@wikipedia-stuxnet].

The Stuxnet dropper carried two kernel drivers, `mrxnet.sys` and `mrxcls.sys`, signed with legitimate Authenticode certificates issued to Realtek Semiconductor and JMicron Technology -- two Taiwanese hardware vendors. Investigators concluded the private keys had been physically exfiltrated from the publishers' Taiwanese offices. Microsoft and VeriSign revoked the Realtek certificate on July 16, 2010 and the JMicron certificate shortly afterward [@symantec-stuxnet]. While the certs were valid, Vista x64 KMCS happily loaded both drivers on every system it touched.

> **Note:** KMCS verifies *who signed*, never *whether the signed code is safe*. Every kernel-mode-identity failure between 2010 and 2026 reduces to that single sentence.

The Stuxnet certificates were not anomalies. The same failure shape -- valid Microsoft-rooted signature, on code the publisher did not intend to ship, on a healthy KMCS-enforcing kernel -- replays at predictable intervals.

<Aside label="Flame and the MD5 collision (2012)">
The Flame espionage toolkit produced a *forged* Microsoft-rooted certificate by exploiting an MD5 chosen-prefix collision against Microsoft's Terminal Server Licensing Service, which still issued MD5-hash code-signing certificates years after MD5's brokenness was known. Microsoft Security Advisory 2718704 revoked three of its own intermediate CAs and emergency-deployed a new Untrusted Certificate Store mechanism through Windows Update [@ms-2718704] [@msrc-2718704]. The episode forced Microsoft to deprecate MD5 in code signing and led directly to the curation infrastructure the Driver Block List uses today.
</Aside>

ASUS ShadowHammer in 2018, disclosed by Kaspersky in 2019, added a third variant. The attackers did not steal an HSM-bound key. They compromised ASUS's signing pipeline and got their backdoor signed by ASUS's *production* signing key in the normal course of a normal release, distributed through ASUS Live Update [@securelist-shadowhammer]. Kaspersky's analysis recorded "trojanized updaters were signed with legitimate certificates (eg: 'ASUSTeK Computer Inc.')" and that "over 57,000 Kaspersky users have downloaded and installed the backdoored version of ASUS Live Update." The trust root, the chain, the cert -- all valid. The bytes -- attacker-controlled.

KMCS verified that a driver was signed, not that it was safe. Signing alone was not enough. But what was?

## The second refusal: identity as a runtime attribute (PPL, 2013)

Until October 17, 2013, code identity gated *whether* code could load. Windows 8.1 quietly shipped a structural shift: code identity now also gated *what one running process could do to another* [@ionescu-ppl]. Alex Ionescu, then independent and previously a co-author of *Windows Internals*, was the first person to publish a detailed external map of the new mechanism. The lineage runs back to Vista's 2006 Protected Process model, originally introduced as a DRM container for protected media playback; PPL is the security-grade descendant of that primitive, repurposed seven years later as a general-purpose process-protection mechanism [@windows-internals-7e].

<Definition term="Protected Process Light (PPL) and signer level">
A protection attribute attached to running processes that mediates inter-process access checks above and beyond the user-token DACL. PPL processes carry a *signer level* (in increasing order, roughly: `Authenticode`, `CodeGen`, `Antimalware`, `Lsa`, `Windows`, `WinTcb`, `WinSystem`). A process can open `PROCESS_VM_READ`, `PROCESS_VM_WRITE`, or `CREATE_THREAD` rights against another protected process only if its own signer level is greater than or equal to the target's [@ionescu-ppl] [@ms-protected-processes].
</Definition>

The mechanism lives in the kernel's `EPROCESS` object. When process A opens process B, the kernel calls into `RtlTestProtectedAccess` (and downstream `PsTestProtectedProcessIncompatibility`) before any DACL evaluation [@scrt-ppl-bypass]. If A's signer level is below B's, sensitive access masks are silently stripped from the returned handle. The classic effect: an attacker running with a SYSTEM token, holding `SeDebugPrivilege`, calling `OpenProcess` on LSASS, gets back a handle without `PROCESS_VM_READ`. Mimikatz can no longer dump the LSASS process memory.

The signer level itself is set by an Enhanced Key Usage extension on the Authenticode certificate Microsoft issues to the binary's publisher. Antimalware vendors receive a certificate carrying the `Antimalware` EKU; only Microsoft-internal binaries carry `WinTcb` [@itm4n-runasppl]. Identity, in this model, is an EKU OID baked into a Microsoft-issued Authenticode cert, attached to the binary, evaluated by the kernel at every cross-process access check.

<Mermaid caption="The PPL signer-level lattice and the access-mask gate. Below WinTcb, processes can no longer touch each other's memory.">
flowchart TD
    A[WinSystem]
    B[WinTcb]
    C[Windows]
    D[Lsa]
    E[Antimalware]
    F[CodeGen]
    G[Authenticode]
    A --> B --> C --> D --> E --> F --> G
    H["Caller (signer level X)"] -- "OpenProcess(target T, signer Y)" --> I&#123;"X >= Y ?"&#125;
    I -- yes --> J["full access mask"]
    I -- no  --> K["VM_READ / VM_WRITE / CREATE_THREAD stripped"]
</Mermaid>
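The lattice and the stripping behaviour fit in a dozen lines. The signer-level ordinals and access-mask constants below are simplified stand-ins; the real check lives in `RtlTestProtectedAccess` and its companions [@scrt-ppl-bypass].

```javascript
// Toy model of the PPL gate: when the caller's signer level is below the
// target's, sensitive rights are silently stripped from the handle before
// any DACL is consulted. Ordinals simplified for illustration.
const SIGNER = { None: 0, Authenticode: 1, CodeGen: 2, Antimalware: 3,
                 Lsa: 4, Windows: 5, WinTcb: 6, WinSystem: 7 };
const PROCESS_VM_READ = 0x0010, PROCESS_VM_WRITE = 0x0020, PROCESS_CREATE_THREAD = 0x0002;
const SENSITIVE = PROCESS_VM_READ | PROCESS_VM_WRITE | PROCESS_CREATE_THREAD;

function openProcess(callerSigner, targetSigner, requestedMask) {
  if (callerSigner >= targetSigner) return requestedMask;  // lattice satisfied
  return requestedMask & ~SENSITIVE;                       // handle returned, rights stripped
}

// Mimikatz running as SYSTEM (unprotected) against LSASS at PPL/Lsa:
const handleMask = openProcess(SIGNER.None, SIGNER.Lsa, PROCESS_VM_READ);
// handleMask === 0: the dump fails before a single credential byte is read
```

Note what the model does not contain: the user token. A SYSTEM caller with `SeDebugPrivilege` scores a zero here just like anyone else, which is the entire point.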

LSASS-as-PPL is the canonical demonstration of the mechanism in practice. Setting `HKLM\SYSTEM\CurrentControlSet\Control\Lsa\RunAsPPL=1` causes the next boot's LSASS to start with `PsProtectedSignerLsa`. From that moment, no process below the `Lsa` signer level can read LSASS memory, regardless of the user account. Mimikatz still runs as code; its `OpenProcess(LSASS, PROCESS_VM_READ)` call returns a handle with the read right stripped, and its memory dump fails with `STATUS_ACCESS_DENIED` before it ever sees a credential blob [@itm4n-runasppl].

<Sidenote>The `RunAsPPL=1` setting is mirrored into a UEFI variable on Secure Boot systems precisely so that an attacker with `HKLM\SYSTEM` registry write but no firmware-level access cannot disable LSA Protection by editing the registry and rebooting. The UEFI mirror is checked before the registry value is read [@itm4n-runasppl].</Sidenote>

ELAM -- Early Launch Antimalware -- is the same idea applied to boot. An ELAM driver, signed with a Microsoft-issued antimalware certificate, runs before any third-party boot driver and gets to vote on which subsequent drivers are allowed to load [@learn-elam]. Signer level enters the boot chain at the earliest moment third-party code can enter the boot chain.

> **Key idea:** PPL's invention is conceptual, not just mechanical. Code identity becomes a runtime ACL between two running processes, not merely a load-time gate. App Control, HVCI, and the Driver Block List all operate on this same conceptual frame: identity continuously evaluated, in context, while code is executing.

PPL was, and is, the right idea. It is also incomplete in two ways that drove every subsequent layer.

The first gap is BYOVD -- Bring Your Own Vulnerable Driver. A signed-but-vulnerable driver such as `RTCore64.sys` (shipped with MSI Afterburner), `Capcom.sys` (shipped with the *Street Fighter V* anti-cheat), or `gdrv.sys` (shipped with Gigabyte motherboard utilities) gives any local administrator arbitrary kernel read/write through an IOCTL. Because these drivers are validly KMCS-signed, they load. From kernel mode, the attacker simply zeroes the `Protection` byte in the target process's `EPROCESS` structure, and PPL evaporates. The signing chain is sound. The signer level is correctly evaluated. The mechanism that decides which kernel code is allowed to *exist* -- not just to be signed -- is what fails.

> **Note:** PPL is bypassed not by attacking PPL itself but by editing `EPROCESS.Protection` from kernel mode. That is exactly why the Driver Block List had to exist as a separate layer above KMCS [@msft-driver-blocklist].

The second gap is the user-mode side. PPLdump and PPLfault demonstrated that confused-deputy DLL loads inside higher-PPL services could be turned into an arbitrary memory read of LSASS. Microsoft eventually patched PPLdump in Windows 10 21H2 build 19044.1826, but the failure *class* remains structural: trusting a higher-signer process to safely load DLLs from publisher-controlled paths is a foot-gun every time a new such service ships [@ppldump-github] [@scrt-ppl-bypass].

If signer level is the principal for OS-internal processes, what is the principal for the next layer up -- the application?

## The application becomes a principal: AppContainer and the Package SID

Two processes, same user, same machine. One can read the user's SSH private keys. The other cannot. Same token. Same DACLs on the file. Different verdict. That is the AppContainer promise [@ms-appcontainer-isolation], and to keep it the operating system needs a *cryptographic identity for the application itself* -- something derived from the application, not from the user, that ACLs can name.

Windows 8 shipped AppContainer in 2012. Internally it was called LowBox, the name surviving in the legacy documentation [@ms-appcontainer-legacy]. Windows 10 generalised the model into MSIX, the modern app-package format [@ms-msix].

<Definition term="AppContainer and Package SID">
AppContainer is a per-process sandbox that augments the user-token security check with an *AppContainer SID* (`S-1-15-2-...`) derived from the package identity of the running application. ACLs and capability claims (such as `internetClient` or `picturesLibrary`) are evaluated against this SID, not against the user. Two processes running as the same user can therefore receive different access verdicts because their AppContainer SIDs differ.
</Definition>
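The two-processes-one-user verdict from the opening of this section can be modeled directly. The SID strings and the AND-composition below are simplified illustrations of the documented behaviour, not the real access-check path.

```javascript
// Sketch of the AppContainer check: an AppContainer token must pass BOTH
// axes -- the user SID and the package/capability SID -- while a classic
// Win32 token passes on the user SID alone. SIDs are illustrative.
function accessCheck(token, grantedSids) {
  const userOk = grantedSids.includes(token.userSid);
  if (!token.packageSid) return userOk;            // classic, un-packaged process
  const pkgOk = grantedSids.includes(token.packageSid) ||
                token.capabilitySids.some(c => grantedSids.includes(c));
  return userOk && pkgOk;                          // both axes must grant
}

// Alice's SSH keys: the DACL names only Alice. Same user, two verdicts.
const keyDacl = ['S-1-5-21-1-1-1-1001'];
const classic  = { userSid: 'S-1-5-21-1-1-1-1001', packageSid: null, capabilitySids: [] };
const packaged = { userSid: 'S-1-5-21-1-1-1-1001',
                   packageSid: 'S-1-15-2-1111', capabilitySids: ['S-1-15-3-1'] };
// accessCheck(classic, keyDacl) -> true; accessCheck(packaged, keyDacl) -> false
```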

The cryptographic move is in how the SID is built.

<Definition term="Package Family Name (PFN) and the 5-tuple">
Every MSIX/APPX package is identified by a five-element tuple: `(Name, Version, Architecture, ResourceId, Publisher)` [@ms-package-identity]. The `Publisher` field is the X.509 subject Distinguished Name of the certificate that signed the package. A 13-character `PublisherId` is derived deterministically from the Publisher DN by Crockford-Base32 encoding the first 64 bits of a SHA-256 hash. The *Package Family Name* is then `<Name>_<PublisherId>`; the *AppContainer SID* is computed deterministically from the full identity tuple and slotted into the `S-1-15-2-...` namespace.
</Definition>

The derivation is dense enough to deserve a worked example. `Microsoft Corporation` plus the `Microsoft.WindowsCalculator` package name yields `Microsoft.WindowsCalculator_8wekyb3d8bbwe` -- the suffix is the Crockford-Base32 PublisherId of `Microsoft Corporation`'s subject DN [@ms-package-identity]. Every MSIX package whose Publisher DN matches will share that suffix; every package whose Publisher DN differs will have a different suffix; an attacker who does not hold the publisher's signing key cannot make a package masquerade as belonging to that publisher's family.

<RunnableCode lang="js" title="Compute a Crockford-Base32 PublisherId from a Publisher DN (illustrative; not bit-exact to Microsoft's internal hashing)">{`
async function publisherIdOf(publisherDN) {
  const data = new TextEncoder().encode(publisherDN);
  const digest = await crypto.subtle.digest('SHA-256', data);
  const first8 = new Uint8Array(digest.slice(0, 8));
  // Crockford Base32 alphabet (no I, L, O, U)
  const alpha = '0123456789abcdefghjkmnpqrstvwxyz';
  let bits = 0n;
  for (const b of first8) bits = (bits << 8n) | BigInt(b);
  let out = '';
  for (let i = 0; i < 13; i++) {
    out = alpha[Number(bits & 31n)] + out;
    bits >>= 5n;
  }
  return out;
}
const dn = 'CN=Microsoft Corporation, O=Microsoft Corporation, L=Redmond, S=Washington, C=US';
publisherIdOf(dn).then(pid => console.log('PFN suffix candidate:', pid));
console.log('Real PFN: Microsoft.WindowsCalculator_8wekyb3d8bbwe');
console.log('Note: the real algorithm is documented in package-identity-overview; this snippet demonstrates the structure, not the exact hash.');
`}</RunnableCode>

Capabilities sit at the same layer. When an MSIX manifest declares `<Capability Name="internetClient" />`, the package is tagged at install time with the corresponding *capability SID* (`S-1-15-3-1`, in the `S-1-15-3-...` capability namespace), and the Windows Filtering Platform evaluates outbound TCP connections against that SID, not against the user's [@p0-appcontainer]. Mandatory Integrity Control labels (Low/Medium/High) compose with the AppContainer SID rather than replacing it [@learn-mic]. A broker process running outside the AppContainer is the only path back to user-scoped resources, and the broker keys its trust decisions on the calling Package SID.

<Aside label="Why Hello's auth broker is identified by Package SID">
Windows Hello's biometric authentication broker is itself an MSIX-style protected service whose AppContainer-flavoured identity is the Package SID derived from its Microsoft-signed manifest. Other processes that want to ask Hello to verify a face or a fingerprint must talk to the broker, and the broker decides whether to honour the request based partly on the caller's package identity. The reason this matters is the same as the LSASS reason: the secret material the broker holds (the user's TPM-bound private key) needs a principal that an attacker holding a SYSTEM token cannot impersonate. User-SID equality is not enough. Package-SID equality is.
</Aside>

<Sidenote>The `8wekyb3d8bbwe` suffix you see on Calculator, Edge, the Microsoft Store, and most other in-box apps is `Microsoft Corporation`'s PublisherId. Once you know what it is, you start seeing it everywhere -- it is the cryptographic fingerprint of "Microsoft signed this package" [@ms-package-identity].</Sidenote>

The aha is the same shape as the PPL aha but at the layer above. Two binaries running as the same user can be authorised differently because the Package SID is derived from the manifest publisher and the package cannot forge it. AppContainer is not a sandbox you opt into. It is a SID you have. Capability ACLs name that SID. The firewall keys on it. The MIC label composes with it. The broker checks it.

The limits are also visible. AppContainer is opt-in for Win32 desktop apps that have not been packaged. Forshaw's 2021 Project Zero analysis of the AppContainer firewall identified loopback-exemption and namespace-isolation holes that Microsoft classified as WontFix [@p0-appcontainer]. Per-app sandbox identity solves the Modern-app problem; it does not solve the legacy Win32 problem. For that, the operating system needs a policy plane that names code in publisher vocabulary instead of path vocabulary.

What does an enterprise admin do when the application refuses to be packaged at all?

## The policy plane: AppLocker, App Control, and the publisher rule

Path-based whitelisting failed for the same reason path-based ACLs failed. Anything writeable can be planted. AppLocker, which shipped with Windows 7 in 2009, remains in the box for compatibility, but Microsoft's own documentation recommends App Control for Business -- the rebranded Windows Defender Application Control -- for new deployments [@ms-applocker] [@ms-appcontrol]. The change is not cosmetic. It is the difference between filename-as-identity and Authenticode-publisher-as-identity.

<Definition term="App Control for Business (formerly WDAC)">
A Code Integrity policy mechanism that expresses allow and deny rules in Authenticode-publisher vocabulary. Policies are authored in XML, compiled to a binary `siPolicy.p7b`, and enforced by the Code Integrity engine at every PE load. With HVCI active, enforcement happens inside the Hyper-V-protected secure kernel, immune to a compromised NT kernel [@ms-appcontrol].
</Definition>

The certificate-and-publisher rule levels run from strictest to broadest as `Hash > FileName > FilePath > FilePublisher > SignedVersion > LeafCertificate > Publisher > PcaCertificate`, with a parallel WHQL-only family for kernel drivers ordered `WHQLFilePublisher > WHQLPublisher > WHQL` [@ms-appcontrol]. `Hash` is the strictest (this exact byte string); `PcaCertificate` is the broadest signer-based level (anything signed under that intermediate CA). Microsoft documents `RootCertificate` as not supported, and `FilePath` -- available for user-mode binaries from Windows 10 1903 onward -- is path-based and so inherits the failure modes the publisher-rule model was designed to escape.

The `LeafCertificate > Publisher` adjacency is the subtle one. `LeafCertificate` pins to one specific signing certificate, so a renewal under a new leaf cert no longer matches. `Publisher` matches any certificate with the same PCA + leaf-CN combination, including future renewals. `LeafCertificate` is the stricter of the two [@ms-appcontrol].

The practical sweet spot is `FilePublisher`. It binds an allow rule to the tuple `(certificate authority + leaf publisher CN + original filename + minimum version)`. That tuple survives recompiles: a benign update from the same publisher under the same name, signed by the same key, with a higher version still passes. It does not survive tampering. Change the original filename in the resource section, change the publisher, change the leaf certificate, and the rule no longer matches.
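What a `FilePublisher` match has to compare can be sketched in a few lines. The field names and the `Contoso` values below are hypothetical, not the real Code Integrity structures:

```js
// Illustrative FilePublisher-rule match: CA + leaf publisher CN +
// original filename + minimum version, all four must hold.
function matchesFilePublisher(rule, binary) {
  return binary.pcaCertificate   === rule.pcaCertificate &&
         binary.publisherCN      === rule.publisherCN &&
         binary.originalFileName === rule.originalFileName &&
         versionGte(binary.fileVersion, rule.minimumFileVersion);
}

// Dotted-quad version compare; equal versions satisfy the minimum.
function versionGte(a, b) {
  const pa = a.split('.').map(Number), pb = b.split('.').map(Number);
  for (let i = 0; i < 4; i++) {
    if ((pa[i] ?? 0) !== (pb[i] ?? 0)) return (pa[i] ?? 0) > (pb[i] ?? 0);
  }
  return true;
}

const rule = {
  pcaCertificate: 'Contoso Code Signing PCA',  // hypothetical publisher
  publisherCN: 'CN=Contoso Ltd',
  originalFileName: 'ContosoAgent.exe',
  minimumFileVersion: '2.1.0.0',
};
// A benign same-publisher update passes; a renamed binary does not.
console.log(matchesFilePublisher(rule, { pcaCertificate: 'Contoso Code Signing PCA',
  publisherCN: 'CN=Contoso Ltd', originalFileName: 'ContosoAgent.exe',
  fileVersion: '2.2.0.0' }));  // true
console.log(matchesFilePublisher(rule, { pcaCertificate: 'Contoso Code Signing PCA',
  publisherCN: 'CN=Contoso Ltd', originalFileName: 'Renamed.exe',
  fileVersion: '2.2.0.0' }));  // false
```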

| Policy primitive | Era | Rule basis | Kernel coverage | Default state |
|---|---|---|---|---|
| Software Restriction Policies (SRP) | XP, 2001 | path / hash / certificate | none | unmanaged |
| AppLocker | Windows 7 Enterprise, 2009 | path / publisher / hash | none | off |
| Device Guard / WDAC | Windows 10, 2015 | publisher / file attributes / hash | full (with [HVCI](/blog/when-system-isnt-enough-the-windows-secure-kernel-and-the-en/)) | off |
| App Control for Business | renamed 2023 | publisher / file attributes / hash | full (with HVCI) | off; on by default in S Mode and on Windows 11 SE |

The Code Integrity engine evaluates an App Control policy on every PE load -- user mode and kernel mode alike. With HVCI active, the policy lives behind the Hyper-V security boundary; even an NT-kernel-level attacker with arbitrary memory write cannot edit it without breaking out of the virtualization layer [@ms-appcontrol]. Deny rules always win; an explicit deny can never be undone by any number of allows on the same binary.
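The deny-wins semantics can be sketched as a two-pass evaluation over the matching rules. The rule shapes below are illustrative, not the engine's internal representation:

```js
// "Deny always wins": a single matching deny overrides any number of
// matching allows; no match at all falls through to deny-by-default.
function policyVerdict(rules, binary) {
  const matching = rules.filter(r => r.matches(binary));
  if (matching.some(r => r.action === 'deny'))  return 'deny';
  if (matching.some(r => r.action === 'allow')) return 'allow';
  return 'deny-by-default';
}

const rules = [
  { action: 'allow', matches: b => b.publisher === 'Contoso Ltd' },  // hypothetical publisher rule
  { action: 'deny',  matches: b => b.sha256 === 'abc123' },          // explicit hash deny
];
console.log(policyVerdict(rules, { publisher: 'Contoso Ltd', sha256: 'abc123' })); // 'deny'
console.log(policyVerdict(rules, { publisher: 'Contoso Ltd', sha256: 'def456' })); // 'allow'
```

The same two-pass shape is why the Driver Block List, distributed as a deny-only policy, can override a publisher allow rule without touching it.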

> **Note:** Author every App Control policy in audit mode for at least one full reference-image cycle before promoting to enforce. Audit mode logs every load that *would have been* blocked, into the `Microsoft-Windows-CodeIntegrity/Operational` event channel, without breaking anything. The pre-deployment failure rate of strict policies on real fleets is high enough that audit mode is not optional [@ms-appcontrol].

App Control inherits the same structural ceiling Authenticode put in place. `Allow Signer = Microsoft Windows` admits the entire LOLBins inventory -- `regsvr32`, `mshta`, `installutil`, `rundll32`, every signed-by-Microsoft binary an attacker can call to execute arbitrary content. `Allow Signer = ASUSTeK` would have admitted ShadowHammer (operation 2018, disclosed 2019), every byte of which carried a valid ASUS production signature [@securelist-shadowhammer]. The publisher-rule model is the right primitive for managed endpoints, and the LOLBins / supply-chain-attack failure modes are the structural ceiling on what the primitive can prove.

PKI-rooted publisher policy still trusts the publisher's key custody. When the key is stolen or the binary is signed but malicious, what does the operating system fall back on?

## Reputation as identity: Mark of the Web and SmartScreen

A novel binary, signed by a freshly issued EV cert, has zero history. PKI says yes. Reputation says: I have never seen this before -- run it past the user.

<Definition term="Mark of the Web (MOTW)">
An NTFS alternate data stream named `Zone.Identifier` written by browsers, mail clients, and other downloaders to record the trust zone of a downloaded file. The stream contains an INI-style `[ZoneTransfer]` block with `ZoneId=3` for files from the public internet, plus optional `ReferrerUrl=` and `HostUrl=` fields. The protocol is documented in the MS-FSCC reference [@ms-fscc-motw]. SmartScreen, Office Protected View, and the Attachment Execution Service all read MOTW to gate behaviour on origin.
</Definition>
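The stream's contents are plain text, so the `[ZoneTransfer]` block is trivially parseable. A sketch follows; the sample contents are typical of a browser download, and actually reading the stream requires NTFS (for example, opening `file.exe:Zone.Identifier` on Windows):

```js
// Parse the INI-style [ZoneTransfer] block carried in a Zone.Identifier
// alternate data stream (per MS-FSCC).
function parseZoneIdentifier(streamText) {
  const result = {};
  let inZoneTransfer = false;
  for (const raw of streamText.split(/\r?\n/)) {
    const line = raw.trim();
    if (line.startsWith('[')) { inZoneTransfer = line === '[ZoneTransfer]'; continue; }
    if (!inZoneTransfer) continue;
    const i = line.indexOf('=');
    if (i < 1) continue;
    result[line.slice(0, i)] = line.slice(i + 1);  // value may itself contain '='
  }
  return result;
}

// Typical stream body written by a browser for an internet download.
const stream = [
  '[ZoneTransfer]',
  'ZoneId=3',
  'ReferrerUrl=https://example.com/downloads/',
  'HostUrl=https://example.com/files/tool.exe',
].join('\r\n');

const motw = parseZoneIdentifier(stream);
console.log(motw.ZoneId);   // '3' -- internet zone; SmartScreen will evaluate
console.log(motw.HostUrl);  // the origin the downloader recorded
```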

MOTW is not an Authenticode replacement. It is a parallel, *origin-based* identity: the binary's provenance, encoded as data the file system carries with it, separate from any signature. Origin is the input to SmartScreen. SmartScreen submits a hash of the binary together with publisher metadata to a Microsoft-hosted reputation service; if the service has not seen the binary before, or has not seen enough downloads to be confident, the user gets the familiar "Windows protected your PC" prompt that requires an explicit More info / Run anyway click [@learn-smartscreen].

The pipeline is parallel to Authenticode and App Control, not a successor. PKI says "this signature chains to a real publisher." Reputation says "this hash has been observed N times in the last 30 days, with prevalence trending up; the publisher account is six years old; M of the downloads were from machines later flagged for malware." None of those signals are derivable from a signature.

<Sidenote>The Defender machine-learning pipeline that powers SmartScreen reputation is the deeper version of the same idea -- already covered in *The Defender's Dilemma* sibling article, which traces the twenty-year arc from Defender's 0.5/6 AV-TEST score to its 100% MITRE detection rate. The reputation primitive sits on top of that ML pipeline.</Sidenote>

The bypass surface is now well-known. Container formats (ISO, IMG, VHD, 7z) historically did not propagate MOTW to files extracted from them, because their on-disk representation does not preserve alternate data streams. Phishing campaigns adapted: send the attacker's `.exe` inside an `.iso`, the user mounts the `.iso`, double-clicks the `.exe`, and SmartScreen sees a binary with no MOTW and offers no warning.

Microsoft's response combined fixes -- VHD and ISO MOTW propagation in Windows 11 22H2, MOTW-aware extraction in OneDrive and the new Windows Archive APIs -- with two attack-surface-reduction rules that gate execution on prevalence and trust independently of MOTW [@learn-asr-reference]. The most useful is rule `01443614-cd74-433a-b99e-2ecdc07bfc25`, "Block executable files from running unless they meet a prevalence, age, or trusted list criterion."

Office is the most consequential consumer of MOTW. A Word, Excel, or PowerPoint file carrying a `ZoneId=3` Mark of the Web opens in Protected View: read-only, in a sandboxed renderer, with macros and active content disabled, until the user clicks "Enable Editing" on the message bar [@learn-protected-view].

The 2022 wave of HTML-smuggling and ISO-borne malware that bypassed SmartScreen still tripped over Protected View at the document layer, and the post-2022 macro-blocked-by-default change extended the same MOTW-gated logic from container files to embedded VBA. Origin is now an input to two parallel pipelines: SmartScreen's reputation check on the executable, and Office's read-only-until-confirmed gate on the document.

<MarginNote>The full ASR rule GUIDs are in the Defender for Endpoint reference. Memorise none of them; pin the page.</MarginNote>

A useful way to read the layered system at this point: Authenticode answered "who shipped it?" KMCS answered "is the kernel allowed to load it?" PPL answered "is this running process allowed to touch that one?" AppContainer answered "what application is this?" App Control answered "does the enterprise honour this publisher?" MOTW and SmartScreen answer the question PKI cannot: "have we seen this before, and from where?" When PKI identity is necessary but not sufficient, reputation closes the gap -- statistically, never absolutely.

PKI says yes; reputation says unknown. What does the operating system do when Microsoft itself says *no* to a signature it just minted?

## The breakthrough: signed is not trusted (Driver Block List, 2022)

December 8, 2021. Microsoft launches the Vulnerable and Malicious Driver Reporting Center [@msft-driver-reporting]. The blog post enumerates the failure shape that drove it: drivers that "map arbitrary kernel, physical, or device memory to user mode," drivers that "provide access to storage that bypass Windows access control," drivers whose IOCTLs let a local admin become an arbitrary kernel writer. Every one of those drivers was signed. Every one of those signatures was valid. Every one of those binaries was loadable on a default Windows install.

By the Windows 11 22H2 update in September 2022, the Vulnerable Driver Block List was enabled by default [@msft-driver-blocklist]. The mechanism is a Microsoft-curated `SiPolicy.p7b` (the same WDAC binary policy format), distributed through Windows Update and Defender intelligence updates, enforced by the Code Integrity engine -- with HVCI when present -- at every driver load. The published rules deny drivers by publisher, original filename, and hash. Critically, *the publisher's signature is still valid*. The Block List is an explicit Microsoft veto layered on top of a working PKI verdict.

<PullQuote>
"The blocklist included in this article ... usually contains a more complete set of known vulnerable drivers than the version in the OS and delivered by Windows Update." -- Microsoft Learn, *Microsoft recommended driver block rules* [@msft-driver-blocklist]
</PullQuote>

That sentence, in Microsoft's own documentation, is the breakthrough. Microsoft is openly admitting that the version of the list shipped with the operating system trails the curated reference list. Curation is now a continuous, asynchronous activity, distinct from signing. The list ships on a quarterly cadence. New BYOVD drivers ship faster than that. The LOLDrivers community catalogue tracks hundreds of vulnerable drivers, many of which are not (yet) on Microsoft's list [@loldrivers].

The Block List has a write-time companion. ASR rule `56a863a9-875e-4185-98a7-b882c64b5ce5`, "Block abuse of exploited vulnerable signed drivers," prevents *writing* a known-vulnerable driver to disk in the first place [@learn-asr-reference]. The defence is layered: the Block List denies load; the ASR rule denies install; together they form a curtain across the BYOVD attack class. Together they do not close the BYOVD class -- the catalogue is a list, the threat is a set, and the gap is structural.

> **Key idea:** A signature attests *who*. A reputation score attests *unfamiliar versus seen-good*. A block list attests *Microsoft has revoked trust at runtime, even though the signature still verifies*. These are three distinct identity layers, and 2022 is the year all three were finally co-deployed by default on the same operating system.

> **Note:** "Curated identity at runtime" is the conceptual breakthrough. "Quarterly cadence" is its operational ceiling. The Driver Block List is a list, the BYOVD threat is a set, and the gap between them is the open problem the next layer (Pluton + attestation + faster curation pipelines) is being asked to close.

The Driver Block List is the operational expression of a 25-year admission. After 1996's "the new Microsoft Authenticode technology uniquely identifies the publisher," after Vista's "we will refuse unsigned kernel drivers," after Windows 8.1's "signer level mediates inter-process access," after Windows 10's "App Control names policy in publisher vocabulary," Microsoft's December 2021 blog post said something different. It said: a signature is a publisher claim; trust is a different claim; we, Microsoft, will curate the second claim continuously, even when we ourselves issued the first one. Identity has become curated, not just verified.

If even Microsoft can no longer trust a valid signature, where does trust ultimately have to live?

## The 2026 stack and the hardware future

The eight primitives from the previous sections do not run in isolation. They compose. A modern Windows boot -- on a Pluton-equipped 2026 laptop running Windows 11 24H2 with HVCI on, App Control in enforce mode, Smart App Control on, and Microsoft Defender as the active anti-malware -- evaluates code identity continuously, top to bottom, from firmware through user mode.

<Mermaid caption="The 2026 layered code-identity stack at boot. Each layer evaluates a question its predecessor structurally cannot.">
flowchart LR
    A["UEFI Secure Boot<br/>firmware-rooted PKI"] --> B["Pluton / TPM<br/>measured boot, PCRs"]
    B --> C["KMCS<br/>chain-to-Microsoft"]
    C --> D["Driver Block List<br/>Microsoft curated veto"]
    D --> E["ELAM<br/>signer-level boot gate"]
    E --> F["User-mode Authenticode<br/>publisher attribution"]
    F --> G["PPL signer-level<br/>runtime ACL"]
    G --> H["AppContainer + Package SID<br/>per-app principal"]
    H --> I["App Control for Business<br/>publisher policy"]
    I --> J["MOTW + SmartScreen<br/>origin + reputation"]
    J --> K["Pluton attestation<br/>device-identity claim"]
</Mermaid>

The hardware root has shifted in five years. Pluton, announced on November 17, 2020 by Microsoft together with AMD, Intel, and Qualcomm, is a security processor integrated into the CPU die rather than a discrete TPM chip on the motherboard bus [@ms-pluton-blog]. AMD Ryzen 6000-series and later (including Ryzen AI), Intel Core Ultra Series 3 and 200V, and Qualcomm Snapdragon 8cx Gen 3 and Snapdragon X Series ship Pluton as the on-die TPM. Pluton's firmware is updated through Windows Update -- not through OEM-controlled SPI flash patches -- and Microsoft has been rewriting the firmware in Rust [@learn-pluton].

The architectural significance is twofold. The trust root is no longer a chip with its bus exposed to a trace-and-sniff attacker. The firmware update path is now a Microsoft-controlled channel rather than thirty different OEM-controlled channels. The same hardware root is what [BitLocker](/blog/bitlocker-on-windows-architecture-attacks-and-the-limits-of-/) depends on when it seals the Volume Master Key to a [measured boot](/blog/the-tpm-in-windows-one-primitive-twenty-five-years-and-the-c/) chain via TPM PCRs [@ms-bitlocker]. On Pluton, those PCR measurements live in-die rather than on a bus-exposed chip, and the sibling article *BitLocker on Windows* traces what that buys and what it does not.

<Aside label="How Apple, Linux, and Android answer the same question">
Apple Gatekeeper plus Notarization is a single-CA model. All Mac binaries that pass Gatekeeper are notarized by Apple, scanning happens server-side, and Apple's own notary signature is the trust root [@apple-gatekeeper]. Linux IMA-Appraisal expresses code identity as a per-host keyring of cryptographic measurements; the kernel evaluates a load against a policy stored in the same keyring [@linux-ima]. Android APK Signature Scheme v3 binds the APK to a per-app signing key with an explicit proof-of-rotation chain that lets a publisher rotate keys without breaking the app's identity [@apksigning-v3]. Windows is the only one of the four that accepts third-party CAs in user mode while reserving Microsoft roots for the kernel. The cost of pluralism is exactly the long tail of failure modes this article enumerates; the benefit is the freedom every Windows ISV has used since 1996 to ship without asking Microsoft's permission.
</Aside>

Then came July 19, 2024.

CrowdStrike's Falcon kernel driver loaded a malformed Channel File 291 update that triggered an out-of-bounds memory read inside `csagent.sys` and raised an invalid page fault [@msft-crowdstrike-best-practices], bug-checking roughly 8.5 million Windows endpoints simultaneously [@ms-crowdstrike-blog]. The driver was correctly Microsoft-signed through the Hardware Developer Center attestation pipeline. Every code-identity layer in the stack -- KMCS, the cross-cert, the EV cert, the attestation key, even the Block List -- said yes. The thing that went wrong was not identity. It was that an identity-blessed driver, running in kernel mode, can fail in ways that take entire continents offline.

> **Note:** The CrowdStrike outage proves that a correctly-signed, attested kernel driver is still a planet-scale liability if its placement is wrong. Identity is not the only dimension. Where in the privilege hierarchy a binary runs is itself a dimension that signing cannot capture.

Microsoft's reaction was structural. On September 12, 2024, David Weston published the recap of the September 10 WESES summit Microsoft had hosted with its endpoint-security partners, committing to provide "additional security capabilities outside of kernel mode" so that EDR vendors could run their detection logic in user mode [@weston-2024].

On June 26, 2025, the Windows Resiliency Initiative announced a private preview of the new endpoint security platform, scheduled for July 2025 delivery to selected MVI partners: Bitdefender, CrowdStrike, ESET, and SentinelOne [@weston-2025]. CrowdStrike's representative was Alex Ionescu, now its Chief Technology Innovation Officer -- the same Alex Ionescu whose 2013 Breakpoint talk publicly mapped PPL signer levels. The arc had closed in twelve years.

MVI 3.0 -- the Microsoft Virus Initiative, version three -- adds Safe Deployment Practices as a contractual condition: staged rollouts, deployment rings, monitoring. The same playbook Microsoft itself follows for Windows updates after the 2024 outage [@msft-crowdstrike-best-practices].

The conceptual move is the same one PPL made in 2013, projected one layer higher. Then: identity becomes a runtime ACL between processes. Now: identity-bound *placement* (kernel mode versus user mode) becomes a trust dimension co-equal with identity-bound *signing*. The question is no longer "is this driver signed and on the allow list?" The question is "should code with this identity be running in this context at all?"

If even attested, signed, blessed kernel code can fail catastrophically, what could code identity in principle ever prove -- and what is provably out of reach?

## Theoretical bounds and open problems

Two papers from the 1980s bound everything that followed.

Fred Cohen's 1984 paper at IFIP-Sec, republished in *Computers & Security* in 1987, proved that perfect virus detection is undecidable: there is no algorithm that, given an arbitrary program, can decide whether it is a virus [@cohen-1986]. Reputation systems are necessarily heuristic. The "first 1,000 downloads" gap -- the window where SmartScreen has not yet seen enough of a new binary to be confident -- is structural, not a tuning problem. You cannot close it by waiting harder.

Ken Thompson's 1984 ACM Turing Award lecture, "Reflections on Trusting Trust," made a different point about a different layer [@thompson-trusting-trust]. Thompson exhibited a compiler that, when used to build itself, inserted a backdoor into a target program; when used to build the compiler, propagated the backdoor invisibly to the next-generation binary. Signing what the compiler emitted never proved the compiler was unmodified. SLSA Level 3+ provenance, reproducible builds, hermetic build environments [@slsa-spec] push the bound back one level. They do not eliminate it.

A third bound is Authenticode-specific. Asynchronous revocation, the property that lets pre-revocation timestamped signatures continue to verify forever, is the reason Stuxnet's drivers loaded after Realtek's certificate was revoked, and the reason every other stolen-key compromise has a window of cryptographic legitimacy [@symantec-stuxnet]. Synchronous global revocation would invalidate large catalogs of legitimate, archived, signed software whose signing certs have since expired. There is no fix inside the design.

Pulled together, these bounds explain the persistent gap. Stolen-but-not-yet-revoked publisher keys are the same failure mode replayed three times in sixteen years: Stuxnet (2010, Realtek and JMicron), ASUS ShadowHammer (operation 2018, disclosed 2019, ASUSTeK production key), [Bitwarden CLI](/blog/when-your-password-manager-attacks-you-inside-the-bitwarden-/) (2026, npm publishing credential). The Pluton firmware-update pipeline is the most credible architectural response yet -- a Microsoft-controlled key-rotation channel that does not depend on OEM-side custody -- but it does not eliminate the class. It compresses the response window.

The other open problem is identity for non-PE artifacts. The Authenticode hash and the WDAC publisher rule were designed for Portable Executable files; everything else gets uneven coverage. PowerShell `.ps1` scripts can be signed and gated through Constrained Language Mode, which the runtime enters automatically when an AppLocker or App Control policy is in force [@learn-clm]. .NET assemblies have strong-name signatures, separate from Authenticode and explicitly not a security boundary; Microsoft's own documentation warns "do not rely on strong names for security" [@learn-strong-name].

JIT-compiled code -- the most common shape of "code" in 2026 -- is signed only insofar as the JIT host is signed. The JIT itself produces unsigned bytes. Container images, WSL guests, AI model files, and (now) agent prompts all live outside the Authenticode universe entirely. Each is its own substrate, with its own emerging signing scheme, and the unification has not happened.

$$\text{trust}_{2026}(\text{binary}) = \text{publisher}(\text{binary}) \land \text{provenance}(\text{build}) \land \text{placement}(\text{runtime}) \land \text{reputation}(\text{telemetry}) \land \neg \text{revoked}(\text{Microsoft})$$

That conjunction is the 2026 verdict. None of its terms are sufficient on their own. Each was forced into existence by a failure of the term before. The arc from "who launched this thread?" in 1993 to that conjunction in 2026 is what thirty-three years of forced moves produced.

What does the layered system look like in practice on a 2026 endpoint -- and what should an admin actually do?

## Practical guide

Six concrete recommendations for a 2026 Windows fleet, each tied to a primary Microsoft Learn or MSRC source.

> **Note:** Turn on the Vulnerable Driver Block List fleet-wide. On Windows 11 22H2 and later it is enabled by default. On Windows 10, Server, and downlevel Windows 11 builds, enable it explicitly through Settings > Privacy & security > Windows Security > Device security > Core isolation > Microsoft Vulnerable Driver Blocklist. HVCI must be on for full enforcement [@msft-driver-blocklist].

> **Note:** Deploy App Control for Business starting from the published Microsoft baseline policies (`Default Windows`, `Allow Microsoft`, the Windows S Mode policy). Run any custom policy in audit mode for a full reference-image cycle, mine the `Microsoft-Windows-CodeIntegrity/Operational` event log for blocked loads, then promote to enforce. Pair with HVCI so the policy lives behind the secure-kernel boundary [@ms-appcontrol]. Deploy through Microsoft Intune (or your MDM of choice), Configuration Manager, or Group Policy -- App Control policy distribution is a first-class managed-endpoint scenario rather than a per-machine hand edit [@learn-appcontrol-deployment].

> **Note:** Enable LSA Protection (the `RunAsPPL` registry value) so LSASS runs as a Protected Process Light. On Secure Boot systems the value is mirrored into a UEFI variable, so registry-only attackers cannot turn it off. Verify with `Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa' -Name RunAsPPL` and the corresponding `RunAsPPLBoot` UEFI variable [@itm4n-runasppl].

> **Note:** SmartScreen alone is bypassed by container-format MOTW stripping. Pair it with ASR rule `01443614-cd74-433a-b99e-2ecdc07bfc25`, which gates execution on prevalence, age, or a trusted list, independently of MOTW [@learn-smartscreen] [@learn-asr-reference].

> **Note:** The Package SID is a free identity for any internal app you ship as MSIX. ACL sensitive resources to it, declare capabilities explicitly in the manifest, and let the AppContainer SID enforce the ACL at the kernel boundary [@ms-package-identity] [@ms-msix].

> **Note:** Treat your code-signing key like a credential, not a build artifact. Rotate the EV cert, revoke the old one, notify customers, and -- if the binary already shipped -- request the offending hash on the Driver Block List or the ASR rule [@msft-driver-reporting]. The Bitwarden CLI 2026 incident took 93 minutes from release to containment, with rollback continuing for several hours afterward [@bitwarden-statement]; have the playbook ready before you need it.

<Spoiler kind="solution" label="A small JS that simulates the boot-time decision tree for a single PE">
```js
function loadDecision({ signed, signerLevel, motwed, onBlockList, allowedByAppControl, smartScreenVerdict }) {
  if (onBlockList) return 'BLOCK -- Microsoft veto, signature ignored';
  if (signed === false && allowedByAppControl === false) return 'BLOCK -- unsigned, App Control denies';
  if (signerLevel === 'WinTcb' || signerLevel === 'WinSystem') return 'LOAD -- protected process';
  if (allowedByAppControl === false) return 'BLOCK -- App Control deny';
  if (motwed && smartScreenVerdict === 'unknown') return 'WARN -- SmartScreen, user gate';
  if (motwed && smartScreenVerdict === 'malicious') return 'BLOCK -- SmartScreen';
  return 'LOAD';
}
console.log(loadDecision({
  signed: true, signerLevel: 'Authenticode',
  motwed: true, onBlockList: false,
  allowedByAppControl: true, smartScreenVerdict: 'good',
}));
```
The decision tree is the practical mental model. Every branch of it is the consequence of one of the failures this article tracks.
</Spoiler>

<FAQ title="Frequently asked questions">

<FAQItem question="Doesn't a valid Authenticode signature mean Microsoft trusts the file?">
No. A signature attests *publisher identity* and *binary integrity*. It does not attest safety. Microsoft trust is a separate, runtime claim expressed through the Driver Block List, App Control policies, and Defender reputation -- evaluated continuously, even on signatures Microsoft itself once minted [@msft-driver-blocklist].
</FAQItem>

<FAQItem question="What is the difference between EV signing and attestation signing?">
Extended Validation Authenticode signing vets organisational identity through an audited issuance process and mandates that the private key live in a hardware security module; the publisher's signature is the trust root. Attestation signing is Microsoft's lighter-weight pipeline for kernel drivers: the publisher submits an EV-signed binary to the Hardware Developer Center, Microsoft re-signs with its own attestation key, and the result is delivered back. Attestation-signed drivers are not WHQL tested and are not distributed via retail Windows Update [@learn-driver-signing-offerings] [@ms-attestation-signing].
</FAQItem>

<FAQItem question="Why does SmartScreen warn on my own EXE?">
MOTW plus low prevalence. SmartScreen sees a binary it has not observed enough times in the global telemetry to be confident, on a file marked as having been downloaded from the internet. Sign the binary with an EV certificate, accumulate downloads on a stable hash, and the warning fades. Internal binaries can have MOTW stripped at deployment time if your distribution channel is itself trusted [@learn-smartscreen].
</FAQItem>

<FAQItem question="Are AppLocker and App Control for Business the same thing?">
No. AppLocker is the Windows 7-era policy mechanism with rules in path/publisher/hash form, no kernel coverage, and no virtualization-based protection of the policy itself. App Control for Business -- formerly Windows Defender Application Control -- is the publisher-rule Code Integrity policy mechanism with HVCI enforcement at the kernel boundary. Microsoft recommends App Control for new deployments and keeps AppLocker for compatibility [@ms-applocker] [@ms-appcontrol].
</FAQItem>

<FAQItem question="Why can't my admin user dump LSASS even with SeDebugPrivilege?">
LSASS is running as a Protected Process Light at the `Lsa` signer level. Signer-level gating sits *above* the token DACL check. Even a SYSTEM-token caller with `SeDebugPrivilege` gets a process handle with `PROCESS_VM_READ` and `PROCESS_VM_WRITE` stripped, because PPL strips access masks before the DACL evaluation. Disable LSA Protection (`RunAsPPL=0`) on a test machine and the same call succeeds [@itm4n-runasppl] [@scrt-ppl-bypass].
</FAQItem>

<FAQItem question="Does Authenticode protect me from supply-chain attacks?">
Only if the publisher's signing-key custody and build pipeline are themselves uncompromised. Stuxnet (stolen Realtek and JMicron keys, 2010), ASUS ShadowHammer (compromised production signing pipeline, operation 2018 / disclosed 2019), and the Bitwarden CLI npm incident (2026) all produced cryptographically valid signatures on attacker-controlled bytes [@symantec-stuxnet] [@securelist-shadowhammer] [@bitwarden-statement]. SLSA-level build provenance and Pluton-rooted attestation are the architectural responses; neither is yet universally deployed [@slsa-spec] [@learn-pluton].
</FAQItem>

</FAQ>

## Where this is going

Pluton-rooted device attestation, MVI 3.0's user-mode security platform, SLSA build provenance, and the post-CrowdStrike push to make placement a first-class identity attribute are all in motion in 2026 [@weston-2025] [@slsa-spec]. The follow-on articles -- Driver Block List in production, App Control with HVCI on real fleets, Secure Boot internals, the Pluton firmware-update channel -- are the operational complement to the conceptual story this article has told.

The story that began with Windows NT 3.1 having no answer to "who is this code?" now has eight overlapping answers, each insufficient on its own. Identity in 2026 is a multi-layered claim about a binary's publisher, its build provenance, its runtime placement, and its reputation, evaluated continuously while the code runs. The arc from 1993's "who launched this thread?" to 2026's "is this signed binary, in this placement, with this build provenance, on Microsoft's curated honour list, today, on this hardware-attested device?" is what thirty-three years of forced moves produced -- and the question the next thirty-three will keep asking, because none of the bounds Cohen and Thompson proved have moved.

<StudyGuide slug="app-identity-in-windows" keyTerms={[
  { term: "Authenticode", definition: "PE-attached PKCS#7 SignedData that names the publisher and detects tampering. Names the publisher, not the code." },
  { term: "Kernel-Mode Code Signing (KMCS)", definition: "Vista x64 policy that refuses to load unsigned kernel drivers; chain-to-Microsoft requirement post-2015." },
  { term: "Protected Process Light (PPL)", definition: "Windows 8.1 attribute that mediates inter-process access by signer level; LSASS-as-PPL defeats user-mode credential dumpers." },
  { term: "Package SID", definition: "Cryptographic application identity (S-1-15-2-...) derived from the MSIX manifest publisher; first-class principal in ACLs and capability checks." },
  { term: "App Control for Business", definition: "Publisher-rule Code Integrity policy formerly called WDAC; enforced by HVCI; ships in S Mode and Windows 11 SE by default." },
  { term: "Mark of the Web (MOTW)", definition: "Zone.Identifier alternate data stream that records a file's origin; input to SmartScreen reputation." },
  { term: "Vulnerable Driver Block List", definition: "Microsoft-curated WDAC-format deny list shipped quarterly; default-on since Windows 11 22H2; the operational expression of 'signed != trusted'." },
  { term: "Pluton", definition: "On-die Microsoft security processor in AMD Ryzen 6000+, Intel Core Ultra 200V, and Qualcomm 8cx Gen 3; firmware updated through Windows Update." }
]} />
