"Who Is This Code?" -- The Quiet 33-Year Reinvention of App Identity in Windows

NT 3.1 could prove which user typed at the keyboard but had no answer to which code was running. Eight successive primitives later, Windows is still answering the same question.

Two identities, one operating system

On July 27, 1993 -- the day Windows NT 3.1 shipped -- the new operating system could prove with cryptographic precision who Alice was, which group she belonged to, which file she was allowed to open, and at what level of privilege she was running. It could prove exactly nothing about the program she had just double-clicked.

Thirty-three years later, "Alice" has barely changed. The code she runs has acquired a publisher signature stamped onto its Portable Executable file, a kernel-loader gate that refuses to load unsigned drivers, a signer level in a runtime lattice that decides whether one process can read another's memory, a Package SID derived from a Crockford-Base32 hash of the manifest publisher [1], a publisher-rule entry in a centrally managed App Control policy [2], a Mark-of-the-Web alternate data stream from the browser that downloaded it [3], a SmartScreen reputation score [4], a possible entry on a Microsoft-curated denylist that overrides its own valid signature [5], and -- on a Pluton-equipped 2026 laptop -- a hardware-attested measurement of the boot chain that loaded it [6]. Every one of those identities was forced into existence by a specific failure of the one before. This is that story.

A modern symptom makes the asymmetry concrete. In April 2026, attackers seized the publishing pipeline for the @bitwarden/cli npm package -- a credential they had no business holding -- and shipped a backdoored release for ninety-three minutes before maintainers caught it [7]. Code identity, as it existed at every layer of every operating system that consumed that package, said the artifact was authentic. The signature was valid. The publisher's account was real. The package metadata was correct. Every check passed. And the binary was hostile. That gap, between "who shipped it" and "is it safe to run," is the same gap NT 3.1 first stepped over in 1993 and that Windows has been trying to close ever since.

The Bitwarden case sits in a long company. Stuxnet's stolen Realtek and JMicron driver-signing keys (2010) [8], Flame's MD5 collision against Microsoft's own intermediate CAs (2012) [9], the ASUS ShadowHammer pipeline compromise (operation 2018, disclosed 2019) [10], every "Bring Your Own Vulnerable Driver" rootkit since 2018 -- they all have the same shape. A valid Windows-anchored signature, on code the publisher did not intend to ship, on a machine that loaded it without complaint.

The pieces in 2026 are not a feature checklist. They are a layered system, each layer answering a question its predecessor structurally could not. Read the Microsoft Learn pages one at a time and you see eight unrelated products; read them in the order their failures forced them into existence, and you see one operating system slowly learning to name the code it runs.

Generation-by-generation evolution. Each layer is a forced response to a specific failure of the one before.

Timeline sources, in row order (Mermaid syntax does not permit inline tokens inside the timeline block; each event is independently cited in the surrounding prose as well): 1993 NT 3.1 [11]; 1996 Authenticode [12]; 2002 Trustworthy Computing memo [13] [14]; 2006 Vista x64 KMCS [15]; 2010 Stuxnet [8]; 2012 AppContainer [1]; 2012 Flame MD5 collision [9] [16]; 2013 Windows 8.1 PPL [17] [18]; 2015 Device Guard / WDAC [2]; 2019 ASUS ShadowHammer disclosed (operation 2018) [10]; 2020 Pluton announced [6]; 2022 Driver Block List default-on [5]; 2024 CrowdStrike outage [19] [20]; 2025 MVI 3.0 user-mode preview [21] [22].

If user identity was easy, why did code identity take thirty-three years -- and where exactly did each generation break?

Why code had no name

Helen Custer's 1992 Inside Windows NT opens its security chapter on a single principle: the user is the principal [11]. Every action the kernel arbitrates is attributable to a user account. The token that the kernel manufactures at logon carries a Security Identifier (SID) for the user, SIDs for each group the user belongs to, a privilege bitmap, and a set of impersonation flags. Every Discretionary Access Control List on every securable object is evaluated against that token [23]. The kernel never asks what binary is running. It asks who is running it.

Security Identifier (SID)

A variable-length value that uniquely identifies a security principal in Windows. Users, groups, computer accounts, and (later) packages and capabilities all receive SIDs. Until Windows 8, every SID encoded a user or group; AppContainer and Package SIDs (the S-1-15-2-... form) extended SIDs to name code instead.
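The textual SID form is mechanical enough to parse in a few lines. A minimal sketch — the classifier and the example SIDs are illustrative, not drawn from a real machine:

```javascript
// Minimal parser for the textual SID form "S-R-A-S1-S2-...":
// revision, identifier authority, then a list of sub-authorities.
function parseSid(sid) {
  const parts = sid.split('-');
  if (parts[0] !== 'S') throw new Error('not a SID string');
  return {
    revision: Number(parts[1]),
    authority: Number(parts[2]),
    subAuthorities: parts.slice(3).map(Number),
  };
}

// Authority 5 (NT) + first sub-authority 21 => a user/group principal;
// authority 15 (App Package) + first sub-authority 2 => an AppContainer SID;
// authority 15 + first sub-authority 3 => a capability SID.
function sidKind(sid) {
  const p = parseSid(sid);
  if (p.authority === 5 && p.subAuthorities[0] === 21) return 'user/group principal';
  if (p.authority === 15 && p.subAuthorities[0] === 2) return 'AppContainer package';
  if (p.authority === 15 && p.subAuthorities[0] === 3) return 'capability';
  return 'other';
}

console.log(sidKind('S-1-5-21-1004336348-1177238915-682003330-512'));   // user/group principal
console.log(sidKind('S-1-15-2-1430448594-2639229838-973813799-439329657-1197984847-4069167804-1277922394')); // AppContainer package
```

The S-1-15-* branch is the 2012-era extension: the same SID machinery, repointed from people to packages.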

For 1993's threat model, the user-as-principal model was defensible. NT 3.1 lived on multi-user workstations in a trusted local-area network. The attacker the designers worried about was a malicious insider, a contractor with the wrong group membership, an admin who exceeded his authority. Code arrived on floppies and CDs from coworkers and shrink-wrapped vendors; nobody downloaded executables off the public internet, because for most of the world there was no public internet to download them from.

Integrity levels (Low, Medium, High, System) were added later, in Vista (2006), and they are still attributes of the token, not of the binary on disk. A Low-integrity Internet Explorer process and a Low-integrity Notepad receive the same write restrictions because their tokens carry the same Mandatory Integrity Control label, regardless of which binary loaded.

Then came Internet Explorer 3.0 in August 1996 and ActiveX. Microsoft repositioned OLE/COM as a cross-internet component model and committed to letting any compliant ActiveX control execute inside the browser [12]. The decision was not casually made; it was the strategic foundation of Microsoft's bet on the web. But its consequence at the security layer was immediate and devastating.

If Alice double-clicks a control on a web page, the operating system's question is "who is running this?" The answer is "Alice." She is allowed to run anything she wants. The control does whatever it likes -- with her token, her files, her privileges, her network access. The user-as-principal model has no second axis to invoke.

There was no theoretical fix at this layer. Alice did genuinely request the download. She did genuinely double-click. NT had no other principal to consult. The model was complete, internally consistent, and exactly wrong for the new threat surface.

What was missing was a cryptographic, network-portable identity for the code itself, attached to the binary in a way nobody downstream could forge. If the kernel cannot see the code, who can put a name on it -- and how do we attach that name to a running PE?

The first naive attempt: Authenticode (1996)

On August 7, 1996, Microsoft and VeriSign jointly announced the first cryptographic answer Windows had ever offered to "who is this code?" The press release ran twenty-two paragraphs and named every design choice that the next thirty years of Windows code identity would inherit: an X.509 certificate issued by an external commercial Certificate Authority, a PKCS#7 SignedData blob attached directly to the binary, and verification at download or install time by Internet Explorer 3.0 [12].

Authenticode

A cryptographic format for binding a publisher's identity and a tamper-evident hash to a Portable Executable. The signature is stored in the PE Attribute Certificate Table as a PKCS#7 SignedData structure containing an X.509 certificate chain and a hash that excludes the checksum field, the certificate-table directory entry, and the certificate table itself. Authenticode names the publisher, not the code; this is the founding constraint the rest of the article is forced to work around.

"The new Microsoft Authenticode technology uniquely identifies the publisher of a piece of software and provides assurance to end users that it has not been tampered with or modified." -- Microsoft press release, August 7, 1996 [12]

That sentence is the founding promise of Windows code identity. Read it once and the rest of the article becomes inevitable. Authenticode promises two things. It identifies the publisher. It detects tampering. It does not promise that the publisher is trustworthy, that the publisher's key is uncompromised, or that the bytes it covers are safe to execute. Three decades of failure modes follow from exactly that scoping.

The mechanism is precise enough to demand a diagram. SignTool computes a hash that deliberately skips three regions of the PE: the checksum field (which the loader recomputes), the certificate-table directory entry, and the certificate table itself. The signature does not have to sign the bytes of its own embedding [24].

It then forms a PKCS#7 SignedData structure [25] containing the hash, an algorithm identifier, the X.509 chain, and an optional RFC 3161 timestamp. That blob is appended to the certificate table. At verify time, WinVerifyTrust recomputes the hash, walks the chain to a trusted root, and (if a timestamp is present) honours signatures that were valid as of the timestamped time even if the issuer has since revoked the certificate [26].

How Authenticode signs and verifies a PE file. The hash deliberately excludes the certificate table, so the signature does not sign itself.

Three structural failure modes shipped on day one and still ship in 2026.

Userland was advisory. A signed .exe ran. An unsigned .exe also ran. Internet Explorer would prompt the user with a publisher name, but the prompt was a UI feature, not a kernel gate. The signature was a credential offered for inspection, never a wall the loader refused to cross. Closing this gap took ten years for kernel code (Authenticode 1996 [12] -> KMCS, Vista 2006 [15]) and nineteen years for managed user-mode policy (Authenticode 1996 [12] -> Device Guard, 2015 [2]). Unmanaged consumer Windows in 2026 still permits arbitrary unsigned .exe to run if the user clicks through SmartScreen.

The signed hash did not cover the whole file. This is CVE-2013-3900, disclosed by Microsoft on December 10, 2013 in security bulletin MS13-098 [27]. The Authenticode hash skips the certificate-table region by design, and the verifier in WinVerifyTrust did not constrain the size of the unsigned PKCS#7 blob. An attacker could append arbitrary unauthenticated bytes inside the WIN_CERTIFICATE structure of an already-signed PE without invalidating the signature.

The fix was a registry value, EnableCertPaddingCheck=1, that turned on strict verification. Microsoft chose not to enable it by default. Twelve years later, the National Vulnerability Database still records the same scoping note: "Microsoft does not plan to enforce the stricter verification behavior as a default functionality on supported releases of Microsoft Windows" [28]. CISA added CVE-2013-3900 to its Known Exploited Vulnerabilities catalog on January 10, 2022 -- eight years after disclosure, because attackers were still abusing the unfixed default [28].

Timestamped signatures survive revocation. The trust evaluator in WinVerifyTrust is told to trust signatures as of the timestamped instant, not as of now. Removing this property would invalidate large catalogs of legitimate, archived signed software whose signing certificates have since expired [26]. The same property is what let the Stuxnet drivers keep loading on every machine that received them: Microsoft revoked the Realtek and JMicron certificates only after Stuxnet had already shipped, and the timestamped signatures continued to verify as of their signing time.

The architectural choice here is genuinely hard. Synchronous global revocation would break offline software install. Asynchronous revocation, the alternative Microsoft chose, lets pre-revocation signatures continue to verify forever. There is no third option inside the Authenticode design.
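The as-of-timestamp rule reduces to choosing the evaluation instant. A sketch — the certificate dates below are illustrative stand-ins, not the real Realtek certificate's fields:

```javascript
// Sketch of the "valid as of the timestamped instant" rule: a revoked or
// expired certificate still verifies signatures whose RFC 3161 timestamp
// predates the revocation. All dates are illustrative.
function verifyAsOf(cert, signature) {
  // Evaluation instant: the countersigned timestamp if present, else now.
  const instant = signature.timestamp ?? Date.now();
  if (instant < cert.notBefore || instant > cert.notAfter) return 'expired';
  if (cert.revokedAt !== null && instant >= cert.revokedAt) return 'revoked';
  return 'valid';
}

const stolenKeyCert = {
  notBefore: Date.parse('2007-06-12'),
  notAfter: Date.parse('2010-06-12'),
  revokedAt: Date.parse('2010-07-16'), // revocation after the malware shipped
};

// Timestamped before revocation: still verifies today.
console.log(verifyAsOf(stolenKeyCert, { timestamp: Date.parse('2010-01-25') })); // valid
// No timestamp: evaluated at "now", long past notAfter.
console.log(verifyAsOf(stolenKeyCert, { timestamp: null })); // expired
```

Asynchronous revocation is exactly this function: the revocation date only bites on instants at or after it, and the timestamp pins the instant in the past.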

Pull these three threads together and the first aha falls out. Authenticode names the publisher, not the code. A signed binary is a credential, not a verdict. The signature proves the bytes came from a holder of the publisher's private key. It does not prove the publisher is trustworthy, that the publisher's key has not been stolen, or that the bytes are safe to execute. Every failure mode of the next twenty-five years lives in that gap.

Six years of failure modes had to accumulate before Microsoft executive priorities caught up. On January 15, 2002, Bill Gates sent the "Trustworthy Computing" memo company-wide, declaring security a higher priority than features and freezing engineering work for security review across every Microsoft product [13] [14]. The memo did not specify a code-identity mechanism. It is in this story because every later code-identity primitive -- the Security Development Lifecycle's mandatory SignTool integration, the XP SP2 hardening pass that produced MOTW, and the Vista work that produced KMCS -- shipped under the executive cover the memo provided [30].

If unsigned code still runs in userland, what makes us think the same primitive will work for a kernel driver -- where the wrong binary owns the operating system?

The first refusal: KMCS, EV, and the WHQL pipeline (Vista, 2006)

Vista x64 shipped in November 2006 as the first Windows release that refuses to load unsigned kernel code [15]. The refusal was uncompromising. The kernel loader and the Plug-and-Play manager call into WinVerifyTrust for every driver image; if the chain does not terminate at one of a small set of Microsoft-trusted roots, MmLoadSystemImage returns STATUS_INVALID_IMAGE_HASH and the driver does not load.

Kernel-Mode Code Signing (KMCS)

The Vista-era policy that requires every kernel-mode driver to carry an Authenticode signature chained to a Microsoft-trusted root. From Windows 10 1607 onward (the August 2016 Anniversary Update), only drivers signed by Microsoft via the Hardware Developer Center are accepted on Secure-Boot systems; end-entity cross-signed certificates issued before July 29, 2015 are grandfathered for legacy devices [15].

The mechanism is a load-time gate. In 2026, Microsoft offers three signing tiers that all terminate at a Microsoft cross-signed cert: HLK-tested (the full Windows Hardware Lab Kit run, eligible for retail Windows Update distribution), attestation-signed (lighter-weight, EV cert plus Microsoft attestation key, no hardware testing), and preproduction (developer signing for pre-release Windows builds) [31] [32]. Driver .cat catalog files extend Authenticode coverage from a single PE to an entire driver package, including INF files and supporting executables [33].

EV certificates -- Extended Validation, with mandatory hardware-security-module key storage and audited issuance -- became the practical floor for kernel signing. The reason was economic, not pedagogical: a Domain Validated Authenticode cert from a commodity CA in that era could be obtained cheaply, often with little more than a working email address. EV raised the cost and binding strength of the publisher claim by an order of magnitude.

Then, on June 17, 2010, Sergey Ulasen of the Belarusian anti-virus vendor VirusBlokAda flagged a strange piece of malware on a customer machine in Iran. It had been signed [34].

The Stuxnet dropper carried two kernel drivers, mrxnet.sys and mrxcls.sys, signed with legitimate Authenticode certificates issued to Realtek Semiconductor and JMicron Technology -- two Taiwanese hardware vendors. Investigators concluded the private keys had been physically exfiltrated from the publishers' Taiwanese offices. Microsoft and VeriSign revoked the Realtek certificate on July 16, 2010 and the JMicron certificate shortly afterward [8]. While the certs were valid, Vista x64 KMCS happily loaded both drivers on every system it touched.

The Stuxnet certificates were not anomalies. The same failure shape -- valid Microsoft-rooted signature, on code the publisher did not intend to ship, on a healthy KMCS-enforcing kernel -- replays at predictable intervals.

ASUS ShadowHammer in 2018, disclosed by Kaspersky in 2019, added a third variant. The attackers did not steal an HSM-bound key. They compromised ASUS's signing pipeline and got their backdoor signed by ASUS's production signing key in the normal course of a normal release, distributed through ASUS Live Update [10]. Kaspersky's analysis recorded "trojanized updaters were signed with legitimate certificates (eg: 'ASUSTeK Computer Inc.')" and that "over 57,000 Kaspersky users have downloaded and installed the backdoored version of ASUS Live Update." The trust root, the chain, the cert -- all valid. The bytes -- attacker-controlled.

KMCS verified that a driver was signed, not that it was safe. Signing alone was not enough. But what was?

The second refusal: identity as a runtime attribute (PPL, 2013)

Until October 17, 2013, code identity gated whether code could load. Windows 8.1 quietly shipped a structural shift: code identity now also gated what one running process could do to another [17]. Alex Ionescu, a co-author of Windows Internals, was the first to publish a detailed external map of the new mechanism. The lineage runs back to Vista's 2006 Protected Process model, originally introduced as a DRM container for protected media playback; PPL is the security-grade descendant of that primitive, repurposed seven years later as a general-purpose process-protection mechanism [30].

Protected Process Light (PPL) and signer level

A protection attribute attached to running processes that mediates inter-process access checks above and beyond the user-token DACL. PPL processes carry a signer level (in increasing order, roughly: Authenticode, CodeGen, Antimalware, Lsa, Windows, WinTcb, WinSystem). A process can open PROCESS_VM_READ, PROCESS_VM_WRITE, or CREATE_THREAD rights against another protected process only if its own signer level is greater than or equal to the target's [17] [18].

The mechanism lives in the kernel's EPROCESS object. When process A opens process B, the kernel calls into RtlTestProtectedAccess (and downstream PsTestProtectedProcessIncompatibility) before any DACL evaluation [35]. If A's signer level is below B's, sensitive access masks are silently stripped from the returned handle. The classic effect: an attacker running with a SYSTEM token, holding SeDebugPrivilege, calling OpenProcess on LSASS, gets back a handle without PROCESS_VM_READ. Mimikatz can no longer dump the LSASS process memory.

The signer level itself is set by an Enhanced Key Usage extension on the Authenticode certificate Microsoft issues to the binary's publisher. Antimalware vendors receive a certificate carrying the Antimalware EKU; only Microsoft-internal binaries carry WinTcb [36]. Identity, in this model, is an EKU OID baked into a Microsoft-issued Authenticode cert, attached to the binary, evaluated by the kernel at every cross-process access check.

The PPL signer-level lattice and the access-mask gate. Below WinTcb, processes can no longer touch each other's memory.
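The gate reduces to a lattice comparison plus a mask strip. A sketch using the signer names from the glossary above; the numeric ranks are an illustrative simplification of the real EPROCESS check, though the access-mask constants are the documented Win32 values:

```javascript
// Sketch of the PPL access-mask gate: sensitive rights are silently stripped
// unless the requester's signer level dominates the target's.
const PROCESS_VM_READ = 0x0010;
const PROCESS_VM_WRITE = 0x0020;
const PROCESS_QUERY_LIMITED_INFORMATION = 0x1000;
const SENSITIVE = PROCESS_VM_READ | PROCESS_VM_WRITE;

// Illustrative ranks; "None" is an unprotected process.
const signerRank = {
  None: 0, Authenticode: 1, CodeGen: 2, Antimalware: 3,
  Lsa: 4, Windows: 5, WinTcb: 6, WinSystem: 7,
};

function grantedAccess(requesterSigner, targetSigner, desired) {
  if (signerRank[requesterSigner] >= signerRank[targetSigner]) return desired;
  return desired & ~SENSITIVE; // handle comes back with read/write stripped
}

// Mimikatz as SYSTEM (unprotected => None) opening LSASS running at Lsa:
const rights = grantedAccess('None', 'Lsa', PROCESS_VM_READ | PROCESS_QUERY_LIMITED_INFORMATION);
console.log((rights & PROCESS_VM_READ) !== 0); // false -- the memory dump fails
console.log((rights & PROCESS_QUERY_LIMITED_INFORMATION) !== 0); // true -- benign queries still work
```

Note what the sketch makes visible: the DACL never enters the picture. The user token can be SYSTEM with every privilege enabled, and the lattice check still strips the read right.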

LSASS-as-PPL is the canonical demonstration of the mechanism in practice. Setting HKLM\SYSTEM\CurrentControlSet\Control\Lsa\RunAsPPL=1 causes the next boot's LSASS to start with PsProtectedSignerLsa. From that moment, no process below the Lsa signer level can read LSASS memory, regardless of the user account. Mimikatz still runs as code; its OpenProcess(LSASS, PROCESS_VM_READ) call returns a handle with the read right stripped, and its memory dump fails with STATUS_ACCESS_DENIED before it ever sees a credential blob [36].

The RunAsPPL=1 setting is mirrored into a UEFI variable on Secure Boot systems precisely so that an attacker with HKLM\SYSTEM registry write but no firmware-level access cannot disable LSA Protection by editing the registry and rebooting. The UEFI mirror is checked before the registry value is read [36].

ELAM -- Early Launch Antimalware -- is the same idea applied to boot. An ELAM driver, signed with a Microsoft-issued antimalware certificate, runs before any third-party boot driver and gets to vote on which subsequent drivers are allowed to load [37]. Signer level enters the boot chain at the earliest moment third-party code can enter the boot chain.

PPL's invention is conceptual, not just mechanical. Code identity becomes a runtime ACL between two running processes, not merely a load-time gate. App Control, HVCI, and the Driver Block List all operate on this same conceptual frame: identity continuously evaluated, in context, while code is executing.

PPL was, and is, the right idea. It is also incomplete in two ways that drove every subsequent layer.

The first gap is BYOVD -- Bring Your Own Vulnerable Driver. A signed-but-vulnerable driver such as RTCore64.sys (shipped with MSI Afterburner), Capcom.sys (shipped with the Street Fighter V anti-cheat), or gdrv.sys (shipped with Gigabyte motherboard utilities) gives any local administrator arbitrary kernel read/write through an IOCTL. Because these drivers are validly KMCS-signed, they load. From kernel mode, the attacker simply zeroes the Protection byte in the target process's EPROCESS structure, and PPL evaporates. The signing chain is sound. The signer level is correctly evaluated. The mechanism that decides which kernel code is allowed to exist -- not just to be signed -- is what fails.

The second gap is the user-mode side. PPLdump and PPLfault demonstrated that confused-deputy code loads inside higher-PPL services could be turned into an arbitrary memory read of LSASS. Microsoft eventually patched PPLdump in Windows 10 21H2 build 19044.1826, but the failure class remains structural: trusting a higher-signer process to safely load DLLs from paths an attacker can influence is a foot-gun every time a new such service ships [38] [35].

If signer level is the principal for OS-internal processes, what is the principal for the next layer up -- the application?

The application becomes a principal: AppContainer and the Package SID

Two processes, same user, same machine. One can read the user's SSH private keys. The other cannot. Same token. Same DACLs on the file. Different verdict. That is the AppContainer promise [39], and to keep it the operating system needs a cryptographic identity for the application itself -- something derived from the application, not from the user, that ACLs can name.

Windows 8 shipped AppContainer in 2012. Internally it was called LowBox, the name surviving in the legacy documentation [40]. Windows 10 generalised the model into MSIX, the modern app-package format [41].

AppContainer and Package SID

AppContainer is a per-process sandbox that augments the user-token security check with an AppContainer SID (S-1-15-2-...) derived from the package identity of the running application. ACLs and capability claims (such as internetClient or picturesLibrary) are evaluated against this SID, not against the user. Two processes running as the same user can therefore receive different access verdicts because their AppContainer SIDs differ.

The cryptographic move is in how the SID is built.

Package Family Name (PFN) and the 5-tuple

Every MSIX/APPX package is identified by a five-element tuple: (Name, Version, Architecture, ResourceId, Publisher) [1]. The Publisher field is the X.509 subject Distinguished Name of the certificate that signed the package. A 13-character PublisherId is derived deterministically from the Publisher DN by Crockford-Base32 encoding the first 64 bits of a SHA-256 hash. The Package Family Name is then <Name>_<PublisherId>; the AppContainer SID is computed deterministically from the full identity tuple and slotted into the S-1-15-2-... namespace.

The derivation is dense enough to deserve a worked example. Microsoft Corporation plus the Microsoft.WindowsCalculator package name yields Microsoft.WindowsCalculator_8wekyb3d8bbwe -- the suffix is the Crockford-Base32 PublisherId of Microsoft Corporation's subject DN [1]. Every MSIX package whose Publisher DN matches will share that suffix; every package whose Publisher DN differs will have a different suffix; an attacker who does not hold the publisher's signing key cannot make a package masquerade as belonging to that publisher's family.

JavaScript -- compute a Crockford-Base32 PublisherId from a Publisher DN (illustrative; not bit-exact to Microsoft's internal hashing):

```javascript
async function publisherIdOf(publisherDN) {
  const data = new TextEncoder().encode(publisherDN);
  const digest = await crypto.subtle.digest('SHA-256', data);
  const first8 = new Uint8Array(digest.slice(0, 8));
  // Crockford Base32 alphabet (no I, L, O, U)
  const alpha = '0123456789abcdefghjkmnpqrstvwxyz';
  let bits = 0n;
  for (const b of first8) bits = (bits << 8n) | BigInt(b);
  let out = '';
  for (let i = 0; i < 13; i++) {
    out = alpha[Number(bits & 31n)] + out;
    bits >>= 5n;
  }
  return out;
}

const dn = 'CN=Microsoft Corporation, O=Microsoft Corporation, L=Redmond, S=Washington, C=US';
publisherIdOf(dn).then(pid => console.log('PFN suffix candidate:', pid));
console.log('Real PFN: Microsoft.WindowsCalculator_8wekyb3d8bbwe');
console.log('Note: the real algorithm is documented in package-identity-overview; this snippet demonstrates the structure, not the exact hash.');
```


Capabilities sit at the same layer. When an MSIX manifest declares <Capability Name="internetClient" />, the package is tagged at install time with a capability SID of the form S-1-15-3-1, and the Windows Filtering Platform evaluates outbound TCP connections against that SID, not against the user's [42]. Mandatory Integrity Control labels (Low/Medium/High) compose with the AppContainer SID rather than replacing it [43]. A broker process running outside the AppContainer is the only path back to user-scoped resources, and the broker keys its trust decisions on the calling Package SID.
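The same-user-different-verdict behaviour is straightforward to model. A sketch with invented SIDs and a toy rule standing in for the Windows Filtering Platform check:

```javascript
// Sketch: an AppContainer-era access check keys on the package's capability
// SIDs, not on the user SID. All SIDs and the rule are illustrative.
const CAP_INTERNET_CLIENT = 'S-1-15-3-1'; // internetClient capability SID

function canConnectOutbound(processToken) {
  // The filtering layer checks the capability claim, not the user.
  return processToken.capabilitySids.includes(CAP_INTERNET_CLIENT);
}

const alice = 'S-1-5-21-1004336348-1177238915-682003330-1001';

const packagedApp = {
  userSid: alice,
  appContainerSid: 'S-1-15-2-111111111-222222222-333333333-444444444-555555555-666666666-777777777',
  capabilitySids: [CAP_INTERNET_CLIENT], // manifest declared internetClient
};
const sandboxedTool = {
  userSid: alice, // same user...
  appContainerSid: 'S-1-15-2-888888888-222222222-333333333-444444444-555555555-666666666-999999999',
  capabilitySids: [], // ...no capability declared
};

console.log(canConnectOutbound(packagedApp));  // true
console.log(canConnectOutbound(sandboxedTool)); // false -- same user, different verdict
```

Two tokens, one user SID, opposite verdicts: the second axis NT 3.1 lacked, expressed as plain SID membership.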

The 8wekyb3d8bbwe suffix you see on Calculator, Edge, the Microsoft Store, and most other in-box apps is Microsoft Corporation's PublisherId. Once you know what it is, you start seeing it everywhere -- it is the cryptographic fingerprint of "Microsoft signed this package" [1].

The aha is the same shape as the PPL aha but at the layer above. Two binaries running as the same user can be authorised differently because the Package SID is derived from the manifest publisher and the package cannot forge it. AppContainer is not a sandbox you opt into. It is a SID you have. Capability ACLs name that SID. The firewall keys on it. The MIC label composes with it. The broker checks it.

The limits are also visible. AppContainer is opt-in for Win32 desktop apps that have not been packaged. Forshaw's 2021 Project Zero analysis of the AppContainer firewall identified loopback-exemption and namespace-isolation holes that Microsoft classified as WontFix [42]. Per-app sandbox identity solves the Modern-app problem; it does not solve the legacy Win32 problem. For that, the operating system needs a policy plane that names code in publisher vocabulary instead of path vocabulary.

What does an enterprise admin do when the application refuses to be packaged at all?

The policy plane: AppLocker, App Control, and the publisher rule

Path-based whitelisting failed for the same reason path-based ACLs failed. Anything writeable can be planted. AppLocker, shipped in Windows 7 in 2009, still stays in the box for compatibility, but Microsoft's own documentation recommends App Control for Business -- the rebranded Windows Defender Application Control -- for new deployments [44] [2]. The change is not cosmetic. It is the difference between filename-as-identity and Authenticode-publisher-as-identity.

App Control for Business (formerly WDAC)

A Code Integrity policy mechanism that expresses allow and deny rules in Authenticode-publisher vocabulary. Policies are authored in XML, compiled to a binary siPolicy.p7b, and enforced by the Code Integrity engine at every PE load. With HVCI active, enforcement happens inside the Hyper-V-protected secure kernel, immune to a compromised NT kernel [2].

The certificate-and-publisher rule levels run from strictest to broadest as Hash > FileName > FilePath > FilePublisher > SignedVersion > LeafCertificate > Publisher > PcaCertificate, with a parallel WHQL-only family for kernel drivers ordered WHQLFilePublisher > WHQLPublisher > WHQL [2]. Hash is the strictest (this exact byte string); PcaCertificate is the broadest signer-based level (anything signed under that intermediate CA). Microsoft documents RootCertificate as not supported, and FilePath -- available for user-mode binaries from Windows 10 1903 onward -- is path-based and so inherits the failure modes the publisher-rule model was designed to escape.

The LeafCertificate > Publisher adjacency is the subtle one. LeafCertificate pins to one specific signing certificate, so a renewal under a new leaf cert no longer matches. Publisher matches any certificate with the same PCA + leaf-CN combination, including future renewals. LeafCertificate is the stricter of the two [2].

The practical sweet spot is FilePublisher. It binds an allow rule to the tuple (certificate authority + leaf publisher CN + original filename + minimum version). That tuple survives recompiles: a benign update from the same publisher under the same name, signed by the same key, with a higher version still passes. It does not survive tampering. Change the original filename in the resource section, change the publisher, change the leaf certificate, and the rule no longer matches.
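FilePublisher matching is a four-field tuple comparison. A sketch, with illustrative field names rather than the real siPolicy XML schema:

```javascript
// Sketch of FilePublisher rule matching: (CA + leaf publisher CN +
// original filename + minimum version). Survives benign same-publisher
// upgrades; fails on any tampering with name, signer, or chain.
function versionAtLeast(v, min) {
  const a = v.split('.').map(Number), b = min.split('.').map(Number);
  for (let i = 0; i < 4; i++) {
    if ((a[i] ?? 0) !== (b[i] ?? 0)) return (a[i] ?? 0) > (b[i] ?? 0);
  }
  return true;
}

function filePublisherMatches(rule, binary) {
  return rule.ca === binary.chainCa &&
         rule.publisherCn === binary.leafCn &&
         rule.originalFileName === binary.originalFileName &&
         versionAtLeast(binary.fileVersion, rule.minVersion);
}

const rule = {
  ca: 'Example Code Signing PCA',   // hypothetical intermediate CA
  publisherCn: 'Contoso Ltd',
  originalFileName: 'agent.exe',
  minVersion: '2.0.0.0',
};

// Benign upgrade from the same publisher, same name, higher version: allowed.
console.log(filePublisherMatches(rule, { chainCa: 'Example Code Signing PCA', leafCn: 'Contoso Ltd', originalFileName: 'agent.exe', fileVersion: '2.3.1.0' })); // true
// Same signer, renamed binary: no longer matches.
console.log(filePublisherMatches(rule, { chainCa: 'Example Code Signing PCA', leafCn: 'Contoso Ltd', originalFileName: 'updater.exe', fileVersion: '2.3.1.0' })); // false
```

The minimum-version term also gives the rule a rollback floor: a downgrade to a known-vulnerable older build fails the match even though every other field is identical.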

| Policy primitive | Era | Rule basis | Kernel coverage | Default state |
|---|---|---|---|---|
| Software Restriction Policies (SRP) | XP, 2001 | path / hash / certificate | none | unmanaged |
| AppLocker | Windows 7 Enterprise, 2009 | path / publisher / hash | none | off |
| Device Guard / WDAC | Windows 10, 2015 | publisher / file attributes / hash | full (with HVCI) | off |
| App Control for Business | renamed 2023 | publisher / file attributes / hash | full (with HVCI) | off; on by default in S Mode and on Windows 11 SE |

The Code Integrity engine evaluates an App Control policy on every PE load -- user mode and kernel mode alike. With HVCI active, the policy lives behind the Hyper-V security boundary; even an NT-kernel-level attacker with arbitrary memory write cannot edit it without breaking out of the virtualization layer [2]. Deny rules always win; an explicit deny can never be undone by any number of allows on the same binary.

App Control inherits the same structural ceiling Authenticode put in place. Allow Signer = Microsoft Windows admits the entire LOLBins inventory -- regsvr32, mshta, installutil, rundll32, every signed-by-Microsoft binary an attacker can call to execute arbitrary content. Allow Signer = ASUSTeK would have admitted ShadowHammer (operation 2018, disclosed 2019), every byte of which carried a valid ASUS production signature [10]. The publisher-rule model is the right primitive for managed endpoints, and the LOLBins / supply-chain-attack failure modes are the structural ceiling on what the primitive can prove.

PKI-rooted publisher policy still trusts the publisher's key custody. When the key is stolen or the binary is signed but malicious, what does the operating system fall back on?

Reputation as identity: Mark of the Web and SmartScreen

A novel binary, signed by a freshly issued EV cert, has zero history. PKI says yes. Reputation says: I have never seen this before -- run it past the user.

Mark of the Web (MOTW)

An NTFS alternate data stream named Zone.Identifier written by browsers, mail clients, and other downloaders to record the trust zone of a downloaded file. The stream contains an INI-style [ZoneTransfer] block with ZoneId=3 for files from the public internet, plus optional ReferrerUrl= and HostUrl= fields. The protocol is documented in the MS-FSCC reference [3]. SmartScreen, Office Protected View, and the Attachment Execution Service all read MOTW to gate behaviour on origin.

MOTW is not an Authenticode replacement. It is a parallel, origin-based identity: the binary's provenance, encoded as data the file system carries with it, separate from any signature. Origin is the input to SmartScreen. SmartScreen submits a hash of the binary together with publisher metadata to a Microsoft-hosted reputation service; if the service has not seen the binary before, or has not seen enough downloads to be confident, the user gets the familiar "Windows protected your PC" prompt that requires an explicit More info / Run anyway click [4].

The pipeline is parallel to Authenticode and App Control, not a successor. PKI says "this signature chains to a real publisher." Reputation says "this hash has been observed N times in the last 30 days, with prevalence trending up; the publisher account is six years old; M of the downloads were from machines later flagged for malware." None of those signals are derivable from a signature.

The Defender machine-learning pipeline that powers SmartScreen reputation is the deeper version of the same idea -- already covered in The Defender's Dilemma sibling article, which traces the twenty-year arc from Defender's 0.5/6 AV-TEST score to its 100% MITRE detection rate. The reputation primitive sits on top of that ML pipeline.

The bypass surface is now well-known. Container formats (ISO, IMG, VHD, 7z) historically did not propagate MOTW to files extracted from them, because their on-disk representation does not preserve alternate data streams. Phishing campaigns adapted: send the attacker's .exe inside an .iso, the user mounts the .iso, double-clicks the .exe, and SmartScreen sees a binary with no MOTW and offers no warning.

Microsoft's response combined fixes -- VHD and ISO MOTW propagation in Windows 11 22H2, MOTW-aware extraction in OneDrive and the new Windows Archive APIs -- with two attack-surface-reduction rules that gate execution on prevalence and trust independently of MOTW [45]. The most useful is rule 01443614-cd74-433a-b99e-2ecdc07bfc25, "Block executable files from running unless they meet a prevalence, age, or trusted list criterion."

Office is the most consequential consumer of MOTW. A Word, Excel, or PowerPoint file carrying a ZoneId=3 Mark of the Web opens in Protected View: read-only, in a sandboxed renderer, with macros and active content disabled, until the user clicks "Enable Editing" on the message bar [46].

The 2022 wave of HTML-smuggling and ISO-borne malware that bypassed SmartScreen still tripped over Protected View at the document layer, and the post-2022 macro-blocked-by-default change extended the same MOTW-gated logic from container files to embedded VBA. Origin is now an input to two parallel pipelines: SmartScreen's reputation check on the executable, and Office's read-only-until-confirmed gate on the document.

The full ASR rule GUIDs are in the Defender for Endpoint reference. Memorise none of them; pin the page.

A useful way to read the layered system at this point: Authenticode answered "who shipped it?" KMCS answered "is the kernel allowed to load it?" PPL answered "is this running process allowed to touch that one?" AppContainer answered "what application is this?" App Control answered "does the enterprise honour this publisher?" MOTW and SmartScreen answer the question PKI cannot: "have we seen this before, and from where?" When PKI identity is necessary but not sufficient, reputation closes the gap -- statistically, never absolutely.

PKI says yes; reputation says unknown. What does the operating system do when Microsoft itself says no to a signature it just minted?

The breakthrough: signed is not trusted (Driver Block List, 2022)

December 8, 2021. Microsoft launches the Vulnerable and Malicious Driver Reporting Center [47]. The blog post enumerates the failure shape that drove it: drivers that "map arbitrary kernel, physical, or device memory to user mode," drivers that "provide access to storage that bypass Windows access control," drivers whose IOCTLs let a local admin become an arbitrary kernel writer. Every one of those drivers was signed. Every one of those signatures was valid. Every one of those binaries was loadable on a default Windows install.

By the Windows 11 22H2 update in September 2022, the Vulnerable Driver Block List was enabled by default [5]. The mechanism is a Microsoft-curated SiPolicy.p7b (the same WDAC binary policy format), distributed through Windows Update and Defender intelligence updates, enforced by the Code Integrity engine -- with HVCI when present -- at every driver load. The published rules deny drivers by publisher, original filename, and hash. Critically, the publisher's signature is still valid. The Block List is an explicit Microsoft veto layered on top of a working PKI verdict.
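Conceptually, a single blocklist entry can fire on any one of those three attributes. A sketch with hypothetical data shapes (the shipped policy is the compiled SiPolicy.p7b binary, not JSON):

```javascript
// A driver is blocked if ANY entry matches on ANY of its listed attributes.
// Note the asymmetry with PKI: entry.publisher here vetoes a still-valid signer.
function isBlocked(driver, blocklist) {
  return blocklist.some((entry) =>
    (entry.publisher !== undefined && driver.publisher === entry.publisher) ||
    (entry.originalFilename !== undefined && driver.originalFilename === entry.originalFilename) ||
    (entry.sha256 !== undefined && driver.sha256 === entry.sha256)
  );
}

// A driver whose signature still verifies, blocked purely on original filename:
console.log(isBlocked(
  { publisher: 'Acme Corp', originalFilename: 'vulndrv.sys', sha256: 'aa11' },
  [{ originalFilename: 'vulndrv.sys' }],
)); // true
```

The filename and hash axes matter because BYOVD drivers are often re-signed or re-hashed; the publisher axis matters when an entire signing key is burned.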

"The blocklist included in this article ... usually contains a more complete set of known vulnerable drivers than the version in the OS and delivered by Windows Update." -- Microsoft Learn, Microsoft recommended driver block rules [5]

That sentence, in Microsoft's own documentation, is the breakthrough. Microsoft is openly admitting that the version of the list shipped with the operating system trails the curated reference list. Curation is now a continuous, asynchronous activity, distinct from signing. The list ships on a quarterly cadence. New BYOVD drivers ship faster than that. The LOLDrivers community catalogue tracks hundreds of vulnerable drivers, many of which are not (yet) on Microsoft's list [48].

The Block List has a write-time companion. ASR rule 56a863a9-875e-4185-98a7-b882c64b5ce5, "Block abuse of exploited vulnerable signed drivers," prevents writing a known-vulnerable driver to disk in the first place [45]. The defence is layered: the Block List denies load; the ASR rule denies install; together they draw a curtain across the BYOVD attack class. They do not close it -- the catalogue is a list, the threat is a set, and the gap is structural.

A signature attests who. A reputation score attests unfamiliar versus seen-good. A block list attests Microsoft has revoked trust at runtime, even though the signature still verifies. These are three distinct identity layers, and 2022 is the year all three were finally co-deployed by default on the same operating system.

The Driver Block List is the operational expression of a 25-year admission. After 1996's "the new Microsoft Authenticode technology uniquely identifies the publisher," after Vista's "we will refuse unsigned kernel drivers," after Windows 8.1's "signer level mediates inter-process access," after Windows 10's "App Control names policy in publisher vocabulary," Microsoft's December 2021 blog post said something different. It said: a signature is a publisher claim; trust is a different claim; we, Microsoft, will curate the second claim continuously, even when we ourselves issued the first one. Identity has become curated, not just verified.

If even Microsoft can no longer trust a valid signature, where does trust ultimately have to live?

The 2026 stack and the hardware future

The eight primitives from the previous sections do not run in isolation. They compose. A modern Windows boot -- on a Pluton-equipped 2026 laptop running Windows 11 24H2 with HVCI on, App Control in enforce mode, Smart App Control on, and Microsoft Defender as the active anti-malware -- evaluates code identity continuously, top to bottom, from firmware through user mode.

The 2026 layered code-identity stack at boot. Each layer evaluates a question its predecessor structurally cannot.

The hardware root has shifted in five years. Pluton, announced on November 17, 2020 by Microsoft together with AMD, Intel, and Qualcomm, is a security processor integrated into the CPU die rather than a discrete TPM chip on the motherboard bus [49]. AMD Ryzen 6000-series and later (including Ryzen AI), Intel Core Ultra Series 3 and 200V, and Qualcomm Snapdragon 8cx Gen 3 and Snapdragon X Series ship Pluton as the on-die TPM. Pluton's firmware is updated through Windows Update -- not through OEM-controlled SPI flash patches -- and Microsoft has been rewriting the firmware in Rust [6].

The architectural significance is twofold. The trust root is no longer a chip with its bus exposed to a trace-and-sniff attacker. The firmware update path is now a Microsoft-controlled channel rather than thirty different OEM-controlled channels. The same hardware root is what BitLocker depends on when it seals the Volume Master Key to a measured boot chain via TPM PCRs [50]. On Pluton, those PCR measurements live in-die rather than on a bus-exposed chip, and the sibling article BitLocker on Windows traces what that buys and what it does not.

Then came July 19, 2024.

CrowdStrike's Falcon kernel driver loaded a malformed Channel File 291 update that triggered an out-of-bounds memory read inside csagent.sys and raised an invalid page fault [20], bug-checking roughly 8.5 million Windows endpoints simultaneously [19]. The driver was correctly Microsoft-signed through the Hardware Developer Center attestation pipeline. Every code-identity layer in the stack -- KMCS, the cross-cert, the EV cert, the attestation key, even the Block List -- said yes. The thing that went wrong was not identity. It was that an identity-blessed driver, running in kernel mode, can fail in ways that take entire continents offline.

Microsoft's reaction was structural. On September 12, 2024, David Weston published the recap of the September 10 WESES summit Microsoft had hosted with its endpoint-security partners, committing to provide "additional security capabilities outside of kernel mode" so that EDR vendors could run their detection logic in user mode [21].

On June 26, 2025, the Windows Resiliency Initiative announced a private preview of the new endpoint security platform, scheduled for July 2025 delivery to selected MVI partners: Bitdefender, CrowdStrike, ESET, and SentinelOne [22]. CrowdStrike's representative was Alex Ionescu, now its Chief Technology Innovation Officer -- the same Alex Ionescu whose 2013 Breakpoint talk publicly mapped PPL signer levels. The arc had closed in twelve years.

MVI 3.0 -- the Microsoft Virus Initiative, version three -- adds Safe Deployment Practices as a contractual condition: staged rollouts, deployment rings, monitoring. The same playbook Microsoft itself follows for Windows updates after the 2024 outage [20].

The conceptual move is the same one PPL made in 2013, projected one layer higher. Then: identity becomes a runtime ACL between processes. Now: identity-bound placement (kernel mode versus user mode) becomes a trust dimension co-equal with identity-bound signing. The question is no longer "is this driver signed and on the allow list?" The question is "should code with this identity be running in this context at all?"

If even attested, signed, blessed kernel code can fail catastrophically, what could code identity in principle ever prove -- and what is provably out of reach?

Theoretical bounds and open problems

Two papers from the 1980s bound everything that followed.

Fred Cohen's 1984 paper at IFIP-Sec, republished in Computers & Security in 1987, proved that perfect virus detection is undecidable: there is no algorithm that, given an arbitrary program, can decide whether it is a virus [54]. Reputation systems are necessarily heuristic. The "first 1,000 downloads" gap -- the window where SmartScreen has not yet seen enough of a new binary to be confident -- is structural, not a tuning problem. You cannot close it by waiting harder.
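The argument is a one-step diagonalization and worth seeing whole. A compressed sketch of the construction (my paraphrase, not Cohen's notation): suppose a total decider D existed, and build a program against it:

```latex
% Hypothetical perfect detector D:  D(p) = \text{true} \iff p \text{ is a virus.}
% Construct the contrary program C, which consults D about itself:
C \;:=\;
\begin{cases}
  \text{halt benignly}      & \text{if } D(C) = \text{true} \\
  \text{replicate (infect)} & \text{if } D(C) = \text{false}
\end{cases}
```

If D(C) is true, C halts benignly and D is wrong; if D(C) is false, C replicates and D is wrong again. No total D survives the construction -- which is why every real detector, SmartScreen included, is heuristic by necessity.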

Ken Thompson's 1984 ACM Turing Award lecture, "Reflections on Trusting Trust," made a different point about a different layer [55]. Thompson exhibited a compiler that inserted a backdoor when it compiled a chosen target program and, when it compiled the compiler itself, propagated that backdoor invisibly into the next-generation binary. Signing what the compiler emitted never proved the compiler was unmodified. SLSA Level 3+ provenance, reproducible builds, and hermetic build environments [56] push the bound back one level. They do not eliminate it.

A third bound is Authenticode-specific. Asynchronous revocation, the property that lets pre-revocation timestamped signatures continue to verify forever, is the reason Stuxnet's drivers loaded after Realtek's certificate was revoked, and the reason every other stolen-key compromise has a window of cryptographic legitimacy [8]. Synchronous global revocation would invalidate large catalogs of legitimate, archived, signed software whose signing certs have since expired. There is no fix inside the design.
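The failure window is easy to state as a predicate. A simplified model (not WinVerifyTrust's actual chain logic): a countersigned timestamp freezes verification at signing time, so revocation only bites signatures timestamped after it. The dates below are the widely reported Stuxnet ones -- driver timestamped January 2010, Realtek certificate revoked July 2010 -- used here for illustration:

```javascript
// Simplified model of Authenticode timestamped-signature verification.
// Times are epoch milliseconds; the real check walks the whole chain.
function signatureStillVerifies({ timestampedAt, certRevokedAt }) {
  if (certRevokedAt === null) return true;   // certificate never revoked
  return timestampedAt < certRevokedAt;      // pre-revocation timestamp: verifies forever
}

console.log(signatureStillVerifies({
  timestampedAt: Date.UTC(2010, 0, 25),      // January 2010
  certRevokedAt: Date.UTC(2010, 6, 16),      // July 2010
})); // true -- the Stuxnet window
```

Flipping the predicate to `timestampedAt < now` would be synchronous revocation, and it would brick every archived, legitimately signed binary whose certificate has since expired -- which is exactly the trade-off the paragraph above describes.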

Pulled together, these bounds explain the persistent gap. Stolen-but-not-yet-revoked publisher keys are the same failure mode replayed three times in sixteen years: Stuxnet (2010, Realtek and JMicron), ASUS ShadowHammer (operation 2018, disclosed 2019, ASUSTeK production key), Bitwarden CLI (2026, npm publishing credential). The Pluton firmware-update pipeline is the most credible architectural response yet -- a Microsoft-controlled key-rotation channel that does not depend on OEM-side custody -- but it does not eliminate the class. It compresses the response window.

The other open problem is identity for non-PE artifacts. The Authenticode hash and the WDAC publisher rule were designed for Portable Executable files; everything else gets uneven coverage. PowerShell .ps1 scripts can be signed and gated through Constrained Language Mode, which the runtime enters automatically when an AppLocker or App Control policy is in force [57]. .NET assemblies have strong-name signatures, separate from Authenticode and explicitly not a security boundary; Microsoft's own documentation warns "do not rely on strong names for security" [58].

JIT-compiled code -- the most common shape of "code" in 2026 -- is signed only insofar as the JIT host is signed. The JIT itself produces unsigned bytes. Container images, WSL guests, AI model files, and (now) agent prompts all live outside the Authenticode universe entirely. Each is its own substrate, with its own emerging signing scheme, and the unification has not happened.

\text{trust}_{2026}(\text{binary}) = \text{publisher}(\text{binary}) \land \text{provenance}(\text{build}) \land \text{placement}(\text{runtime}) \land \text{reputation}(\text{telemetry}) \land \neg \text{revoked}(\text{Microsoft})

That conjunction is the 2026 verdict. None of its terms are sufficient on their own. Each was forced into existence by a failure of the term before. The arc from "who launched this thread?" in 1993 to that conjunction in 2026 is what thirty-three years of forced moves produced.

What does the layered system look like in practice on a 2026 endpoint -- and what should an admin actually do?

Practical guide

Six concrete recommendations for a 2026 Windows fleet, each tied to a primary Microsoft Learn or MSRC source.

A small JavaScript function that simulates the boot-time decision tree for a single PE load:
function loadDecision({ signed, signerLevel, motwed, onBlockList, allowedByAppControl, smartScreenVerdict }) {
  // Microsoft's curated veto is evaluated first and ignores the signature entirely.
  if (onBlockList) return 'BLOCK -- Microsoft veto, signature ignored';
  // Unsigned binaries stand or fall on the App Control policy alone.
  if (signed === false && allowedByAppControl === false) return 'BLOCK -- unsigned, App Control denies';
  // Protected-process signer levels short-circuit: WinTcb/WinSystem code is Windows-signed by definition.
  if (signerLevel === 'WinTcb' || signerLevel === 'WinSystem') return 'LOAD -- protected process';
  if (allowedByAppControl === false) return 'BLOCK -- App Control deny';
  // MOTW routes the file through SmartScreen reputation; 'unknown' means ask the user.
  if (motwed && smartScreenVerdict === 'unknown') return 'WARN -- SmartScreen, user gate';
  if (motwed && smartScreenVerdict === 'malicious') return 'BLOCK -- SmartScreen';
  return 'LOAD';
}
console.log(loadDecision({
  signed: true, signerLevel: 'Authenticode',
  motwed: true, onBlockList: false,
  allowedByAppControl: true, smartScreenVerdict: 'good',
}));

The decision tree is the practical mental model. Every branch of it is the consequence of one of the failures this article tracks.

Frequently asked questions

Doesn't a valid Authenticode signature mean Microsoft trusts the file?

No. A signature attests publisher identity and binary integrity. It does not attest safety. Microsoft trust is a separate, runtime claim expressed through the Driver Block List, App Control policies, and Defender reputation -- evaluated continuously, even on signatures Microsoft itself once minted [5].

What is the difference between EV signing and attestation signing?

Extended Validation Authenticode signing vets organisational identity through an audited issuance process and mandates that the private key live in a hardware security module; the publisher's signature is the trust root. Attestation signing is Microsoft's lighter-weight pipeline for kernel drivers: the publisher submits an EV-signed binary to the Hardware Developer Center, Microsoft re-signs with its own attestation key, and the result is delivered back. Attestation-signed drivers are not WHQL tested and are not distributed via retail Windows Update [31] [32].

Why does SmartScreen warn on my own EXE?

MOTW plus low prevalence. SmartScreen sees a binary it has not observed enough times in the global telemetry to be confident, on a file marked as having been downloaded from the internet. Sign the binary with an EV certificate, accumulate downloads on a stable hash, and the warning fades. Internal binaries can have MOTW stripped at deployment time if your distribution channel is itself trusted [4].

Are AppLocker and App Control for Business the same thing?

No. AppLocker is the Windows 7-era policy mechanism with rules in path/publisher/hash form, no kernel coverage, and no virtualization-based protection of the policy itself. App Control for Business -- formerly Windows Defender Application Control -- is the publisher-rule Code Integrity policy mechanism with HVCI enforcement at the kernel boundary. Microsoft recommends App Control for new deployments and keeps AppLocker for compatibility [44] [2].

Why can't my admin user dump LSASS even with SeDebugPrivilege?

LSASS is running as a Protected Process Light at the Lsa signer level. Signer-level gating sits above the token DACL check. Even a SYSTEM-token caller with SeDebugPrivilege gets a process handle with PROCESS_VM_READ and PROCESS_VM_WRITE stripped, because PPL strips access masks before the DACL evaluation. Disable LSA Protection (RunAsPPL=0) on a test machine and the same call succeeds [36] [35].
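The mask-stripping mechanics can be sketched with the real winnt.h access-right constants. The allowed-set below is a simplification (which rights survive varies by protection level; the kernel's actual check is more involved):

```javascript
// winnt.h process access-right bit values (these constants are real).
const PROCESS_TERMINATE = 0x0001;
const PROCESS_VM_READ   = 0x0010;
const PROCESS_VM_WRITE  = 0x0020;
const PROCESS_QUERY_LIMITED_INFORMATION = 0x1000;

// Sketch: against a PPL target, dangerous bits are masked out of the
// requested access BEFORE the DACL is consulted -- so SeDebugPrivilege
// never even gets to argue about VM_READ/VM_WRITE.
function accessGrantedAgainstPPL(desiredAccess) {
  const allowedAgainstProtected =
    PROCESS_QUERY_LIMITED_INFORMATION | PROCESS_TERMINATE; // simplified allowed set
  return desiredAccess & allowedAgainstProtected;
}

const got = accessGrantedAgainstPPL(
  PROCESS_VM_READ | PROCESS_VM_WRITE | PROCESS_QUERY_LIMITED_INFORMATION
);
console.log((got & PROCESS_VM_READ) === 0); // true -- VM_READ stripped
```

The handle you get back is real and usable for the surviving rights, which is why MiniDumpWriteDump fails against a PPL LSASS rather than OpenProcess itself.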

Does Authenticode protect me from supply-chain attacks?

Only if the publisher's signing-key custody and build pipeline are themselves uncompromised. Stuxnet (stolen Realtek and JMicron keys, 2010), ASUS ShadowHammer (compromised production signing pipeline, operation 2018 / disclosed 2019), and the Bitwarden CLI npm incident (2026) all produced cryptographically valid signatures on attacker-controlled bytes [8] [10] [7]. SLSA-level build provenance and Pluton-rooted attestation are the architectural responses; neither is yet universally deployed [56] [6].

Where this is going

Pluton-rooted device attestation, MVI 3.0's user-mode security platform, SLSA build provenance, and the post-CrowdStrike push to make placement a first-class identity attribute are all in motion in 2026 [22] [56]. The follow-on articles -- Driver Block List in production, App Control with HVCI on real fleets, Secure Boot internals, the Pluton firmware-update channel -- are the operational complement to the conceptual story this article has told.

The arc that began with Windows NT 3.1 having no answer to "who is this code?" now has eight overlapping answers, each insufficient on its own. Identity in 2026 is a multi-layered claim about a binary's publisher, its build provenance, its runtime placement, and its reputation, evaluated continuously while the code is running. The arc from 1993's "who launched this thread?" to 2026's "is this signed binary, in this placement, with this build provenance, on Microsoft's curated honour list, today, on this hardware-attested device?" is the answer thirty-three years of forced moves produced -- and the question the next thirty-three years will keep asking, because none of the bounds Cohen and Thompson proved have moved.

Study guide

Key terms

Authenticode
PE-attached PKCS#7 SignedData that names the publisher and detects tampering. Names the publisher, not the code.
Kernel-Mode Code Signing (KMCS)
Vista x64 policy that refuses to load unsigned kernel drivers; chain-to-Microsoft requirement post-2015.
Protected Process Light (PPL)
Windows 8.1 attribute that mediates inter-process access by signer level; LSASS-as-PPL defeats user-mode credential dumpers.
Package SID
Cryptographic application identity (S-1-15-2-...) derived from the MSIX manifest publisher; first-class principal in ACLs and capability checks.
App Control for Business
Publisher-rule Code Integrity policy formerly called WDAC; enforced by HVCI; ships in S Mode and Windows 11 SE by default.
Mark of the Web (MOTW)
Zone.Identifier alternate data stream that records a file's origin; input to SmartScreen reputation.
Vulnerable Driver Block List
Microsoft-curated WDAC-format deny list shipped quarterly; default-on since Windows 11 22H2; the operational expression of 'signed != trusted'.
Pluton
On-die Microsoft security processor in AMD Ryzen 6000+, Intel Core Ultra 200V, and Qualcomm 8cx Gen 3; firmware updated through Windows Update.

References

  1. Microsoft Package identity overview. https://learn.microsoft.com/en-us/windows/apps/desktop/modernize/package-identity-overview - 5-tuple package identity; 13-char Crockford-Base32 PublisherId; PFN format.
  2. Microsoft App Control for Business. https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/ - Rebranded WDAC; rule-level hierarchy; allow/deny precedence.
  3. Microsoft Mark of the Web zone identifier ADS layout (MS-FSCC). https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/6e3f7352-d11c-4d76-8c39-2516a9df36e8 - MS-FSCC reference for the Zone.Identifier alternate data stream.
  4. Microsoft Microsoft Defender SmartScreen overview. https://learn.microsoft.com/en-us/windows/security/operating-system-security/virus-and-threat-protection/microsoft-defender-smartscreen/ - SmartScreen reputation; MOTW integration; Block executable ASR rule.
  5. Microsoft Microsoft recommended driver block rules. https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/microsoft-recommended-driver-block-rules - Vulnerable Driver Block List; 22H2 default-on; quarterly cadence; HVCI dependency.
  6. Microsoft Microsoft Pluton security processor. https://learn.microsoft.com/en-us/windows/security/hardware-security/pluton/microsoft-pluton-security-processor - In-die security processor; firmware update via Windows Update; TPM 2.0 conformance.
  7. Bitwarden (2026). Bitwarden Statement on Checkmarx Supply Chain Incident. https://community.bitwarden.com/t/bitwarden-statement-on-checkmarx-supply-chain-incident/96127 - Bitwarden CLI npm pipeline compromise, Apr 22 2026; verbatim 5:57 PM-7:30 PM (ET) window (93 minutes); @bitwarden/cli@2026.4.0.
  8. Nicolas Falliere, Liam O'Murchu, & Eric Chien (2011). W32.Stuxnet Dossier (Version 1.4). https://archive.org/details/w32_stuxnet_dossier - Internet Archive copy. Stuxnet kernel drivers mrxnet.sys / mrxcls.sys; stolen Realtek and JMicron certs.
  9. Microsoft (2012). Microsoft Security Advisory 2718704: Unauthorized Digital Certificates Could Allow Spoofing. https://learn.microsoft.com/en-us/security-updates/securityadvisories/2012/2718704 - Flame MD5-collision response; revoked Microsoft intermediate CAs.
  10. Kaspersky GReAT (2019). Operation ShadowHammer. https://securelist.com/operation-shadowhammer/89992/ - Trojanized ASUS Live Update signed with legitimate ASUSTeK certificate; operation June--November 2018, disclosed March 2019.
  11. Helen Custer (1992). Inside Windows NT. Microsoft Press. ISBN 1-55615-481-X. - NT 3.1 user-as-principal security model.
  12. Microsoft (1996). Microsoft and VeriSign Provide First Technology For Secure Downloading of Software Over the Internet. https://news.microsoft.com/source/1996/08/07/microsoft-and-verisign-provide-first-technology-for-secure-downloading-of-software-over-the-internet/ - The Authenticode launch press release. Establishes the publisher-identity framing.
  13. CNET (2002). Gates memo: We can and must do better. https://www.cnet.com/tech/tech-industry/gates-memo-we-can-and-must-do-better/ - Verbatim Gates Trustworthy Computing memo (Jan 15, 2002).
  14. The Register (2002). MS's highest priority must be security: BillG. https://www.theregister.com/2002/01/17/ms_highest_priority_must/ - Independent contemporaneous coverage of the Trustworthy Computing memo.
  15. Microsoft Kernel-Mode Code Signing Policy (Windows Vista and Later). https://learn.microsoft.com/en-us/windows-hardware/drivers/install/kernel-mode-code-signing-policy--windows-vista-and-later- - Vista x64 mandatory KMCS, cross-sign grandfather rule, post-1607 dashboard signing.
  16. MSRC (2012). Microsoft Releases Security Advisory 2718704. https://www.microsoft.com/en-us/msrc/blog/2012/06/microsoft-releases-security-advisory-2718704 - MSRC blog post about Advisory 2718704.
  17. Alex Ionescu (2013). The Evolution of Protected Processes Part 1: Pass-the-Hash Mitigations in Windows 8.1. https://www.alex-ionescu.com/the-evolution-of-protected-processes-pass-the-hash-mitigations-in-windows-8-1/ - PPL signer-level lattice; LSASS RunAsPPL registry value.
  18. Microsoft Protecting Anti-Malware Services. https://learn.microsoft.com/en-us/windows/win32/services/protecting-anti-malware-services- - PPL signer-level / Antimalware-Protected service concepts.
  19. Microsoft (2024). Helping our customers through the CrowdStrike outage. https://blogs.microsoft.com/blog/2024/07/20/helping-our-customers-through-the-crowdstrike-outage/ - 8.5M endpoints; Microsoft response to CrowdStrike Falcon channel-file outage.
  20. Microsoft Security (2024). Windows security best practices for integrating and managing security tools. https://www.microsoft.com/en-us/security/blog/2024/07/27/windows-security-best-practices-for-integrating-and-managing-security-tools/ - User-mode integration roadmap follow-up.
  21. David Weston (2024). Taking steps that drive resiliency and security for Windows customers. https://blogs.windows.com/windowsexperience/2024/09/12/taking-steps-that-drive-resiliency-and-security-for-windows-customers/ - WESES summit recap; user-mode EDR commitment; post-CrowdStrike pivot.
  22. David Weston (2025). The Windows Resiliency Initiative -- Building resilience for a future-ready enterprise. https://blogs.windows.com/windowsexperience/2025/06/26/the-windows-resiliency-initiative-building-resilience-for-a-future-ready-enterprise/ - MVI 3.0; Safe Deployment Practices; user-mode security platform private preview.
  23. Microsoft Security Identifiers. https://learn.microsoft.com/en-us/windows/win32/secauthz/security-identifiers - NT user-as-principal model; Package SID encoding S-1-15-2-...
  24. Microsoft PE Format. https://learn.microsoft.com/en-us/windows/win32/debug/pe-format - PE Attribute Certificate Table layout; bytes excluded from the Authenticode hash.
  25. B. Kaliski (1998). PKCS #7: Cryptographic Message Syntax Version 1.5. https://datatracker.ietf.org/doc/html/rfc2315 - The PKCS#7 SignedData primitive that Authenticode embeds in PE files.
  26. Microsoft Cryptography Tools. https://learn.microsoft.com/en-us/windows/win32/seccrypto/cryptography-tools - Authenticode toolchain index: SignTool, MakeCert, WinVerifyTrust, SIPs.
  27. Microsoft (2013). Microsoft Security Bulletin MS13-098 -- Vulnerability in Windows Could Allow Remote Code Execution. https://learn.microsoft.com/en-us/security-updates/securitybulletins/2013/ms13-098 - Original WinVerifyTrust signature padding advisory.
  28. NIST NVD (2013). CVE-2013-3900. https://nvd.nist.gov/vuln/detail/CVE-2013-3900 - NVD entry; Microsoft has not enforced strict verification by default. CISA KEV Jan 10, 2022.
  29. Microsoft MSRC (2013). CVE-2013-3900 -- WinVerifyTrust Signature Validation Vulnerability. https://msrc.microsoft.com/update-guide/vulnerability/CVE-2013-3900 - MSRC vendor advisory (SPA; content rendered client-side).
  30. Pavel Yosifovich, Alex Ionescu, Mark Russinovich, & David Solomon (2017). Windows Internals, 7th Edition (Parts 1 and 2). Microsoft Press. ISBN 978-0735684188. - Code Integrity, PPL, AppContainer, Package Identity chapters.
  31. Microsoft Driver signing offerings. https://learn.microsoft.com/en-us/windows-hardware/drivers/dashboard/driver-signing-offerings - HLK / attestation / preproduction signing tiers.
  32. Microsoft Attestation signing a kernel driver for public release. https://learn.microsoft.com/en-us/windows-hardware/drivers/dashboard/code-signing-attestation - EV cert requirement plus Microsoft attestation key.
  33. Microsoft Embedded signatures in a driver file. https://learn.microsoft.com/en-us/windows-hardware/drivers/install/embedded-signatures-in-a-driver-file - Catalog files extending Authenticode coverage to driver packages.
  34. Wikipedia contributors Stuxnet. https://en.wikipedia.org/wiki/Stuxnet - Tertiary corroboration: Sergey Ulasen / VirusBlokAda discovery, June 17, 2010; principal infections in Iran.
  35. SCRT Team (2021). Bypassing LSA Protection in Userland. https://blog.scrt.ch/2021/04/22/bypassing-lsa-protection-in-userland/ - PsTestProtectedProcessIncompatibility table; Known-DLL bypass.
  36. Clement Labro (2021). Do You Really Know About LSA Protection (RunAsPPL)?. https://itm4n.github.io/lsass-runasppl/ - RunAsPPL registry value; UEFI variable mirror; PPL signer levels reference.
  37. Microsoft Early Launch Antimalware. https://learn.microsoft.com/en-us/windows-hardware/drivers/install/early-launch-antimalware - ELAM-as-PPL design; runs before any third-party boot driver.
  38. Clement Labro PPLdump. https://github.com/itm4n/PPLdump - PPLdump bypass; patched in Windows 10 21H2 build 19044.1826.
  39. Microsoft AppContainer Isolation. https://learn.microsoft.com/en-us/windows/win32/secauthz/appcontainer-isolation - AppContainer execution environment definition.
  40. Microsoft AppContainer for Legacy Applications. https://learn.microsoft.com/en-us/windows/win32/secauthz/appcontainer-for-legacy-applications- - LowBox terminology; Win32 AppContainer.
  41. Microsoft MSIX overview. https://learn.microsoft.com/en-us/windows/msix/overview - MSIX as the modern Windows app package format.
  42. James Forshaw (2021). Understanding Network Access in Windows AppContainers. https://googleprojectzero.blogspot.com/2021/08/understanding-network-access-windows-app.html - AppContainer firewall semantics; loopback exemption.
  43. Microsoft Mandatory Integrity Control. https://learn.microsoft.com/en-us/windows/win32/secauthz/mandatory-integrity-control - MIC label evaluation in AppContainer access checks.
  44. Microsoft AppLocker overview. https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/applocker/applocker-overview - AppLocker as the legacy compatibility predecessor; Microsoft recommends App Control for new deployments.
  45. Microsoft Attack surface reduction rules reference. https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/attack-surface-reduction-rules-reference - ASR rule 56a863a9-875e-4185-98a7-b882c64b5ce5 -- block abuse of vulnerable signed drivers.
  46. Microsoft What is Protected View? -- Microsoft Office. https://support.microsoft.com/en-us/office/what-is-protected-view-d6f09ac7-e6b9-4495-8e43-2bbcdbcb6653 - Office Protected View opens MOTW-tagged Internet files read-only with editing disabled until 'Enable Editing'.
  47. Microsoft Security (2021). Improve kernel security with the new Microsoft Vulnerable and Malicious Driver Reporting Center. https://www.microsoft.com/en-us/security/blog/2021/12/08/improve-kernel-security-with-the-new-microsoft-vulnerable-and-malicious-driver-reporting-center/ - Reporting Center launch (Dec 8, 2021); vulnerability classification primitives.
  48. LOLDrivers catalogue. https://www.loldrivers.io/ - BYOVD coverage rate vs Block List.
  49. David Weston (2020). Meet the Microsoft Pluton processor -- The security chip designed for the future of Windows PCs. https://www.microsoft.com/en-us/security/blog/2020/11/17/meet-the-microsoft-pluton-processor-the-security-chip-designed-for-the-future-of-windows-pcs/ - Pluton announcement; AMD/Intel/Qualcomm joint launch.
  50. Microsoft BitLocker overview. https://learn.microsoft.com/en-us/windows/security/operating-system-security/data-protection/bitlocker/ - BitLocker volume encryption with TPM-anchored measured-boot integrity verification (PCR-sealed VMK).
  51. Apple Gatekeeper and runtime protection. https://support.apple.com/guide/security/gatekeeper-and-runtime-protection-sec5599b66df/web - Apple Platform Security guide section.
  52. SourceForge IMA project Linux Integrity Measurement Architecture (IMA-Appraisal). https://sourceforge.net/p/linux-ima/wiki/Home/ - IMA-Appraisal + EVM project wiki.
  53. Android Open Source Project APK Signature Scheme v3. https://source.android.com/docs/security/features/apksigning/v3 - Per-app signing key with proof-of-rotation chain.
  54. Fred Cohen (1984). Computer Viruses -- Theory and Experiments. https://all.net/books/virus/index.html - IFIP-Sec 1984 / Computers & Security 6(1) 1987. Undecidability of malware detection.
  55. Ken Thompson (1984). Reflections on Trusting Trust. https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf - ACM Turing Award lecture, published in Communications of the ACM 27(8), 1984.
  56. Supply-chain Levels for Software Artifacts (SLSA) v1.0. https://slsa.dev/spec/v1.0/ - Build-provenance / supply-chain identity layer.
  57. Microsoft about_Language_Modes -- PowerShell. https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_language_modes - PowerShell Constrained Language Mode under AppLocker / WDAC application-control policies.
  58. Microsoft Strong-named assemblies (.NET). https://learn.microsoft.com/en-us/dotnet/standard/assembly/strong-named - .NET strong-name signatures are unique-identity, explicitly not a security boundary.
  59. Microsoft Deploy App Control for Business policies. https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/deployment/appcontrol-deployment-guide - Documents Intune/MDM, Configuration Manager, script, and Group Policy as the supported App Control distribution channels.

