
Apple Secure Enclave vs Microsoft Pluton: Two Roads to Hardware Root of Trust

How Apple SEP and Microsoft Pluton solve the same problem -- keeping your secrets safe from a compromised OS -- using two very different silicon strategies.


1. The bus that taught everyone a lesson

In 2021, a researcher at Pulse Security wired a forty-dollar FPGA to the LPC bus of a Microsoft Surface Pro 3 and a Lenovo laptop, captured a handful of bytes as the machines powered on, and pulled the BitLocker Volume Master Key out of the air. Then they decrypted the drives. They wrote the whole thing up, with photos of the soldering and an open-source sniffer named lpc_sniffer_tpm (Pulse Security: Sniff, there leaks my BitLocker key).

The hardware was working exactly as designed.

That is what makes the story interesting. The Trusted Platform Module released the disk-encryption key the moment the boot configuration matched its sealed policy. It then handed the key, in cleartext, to the CPU over a physical wire on the motherboard. Anyone who could touch that wire could read the key. The chip, the spec, the OS -- all of them did precisely what the standard required. The threat model just never accounted for somebody putting probes on a laptop.

This is the problem hardware-rooted security has spent twenty years trying to dig itself out of. If you trust software, malware wins. If you trust software-plus-discrete-TPM, the bus wins. If you trust software-plus-firmware-TPM, the host operating system's privileged-mode bugs win. Every layer you add closes one class of attack and opens another.

Hardware roots of trust exist because no purely software-defined boundary can survive an attacker who runs code at the same privilege level you do. The only way out is to put the secrets somewhere your main CPU literally cannot read.

Apple and Microsoft both reached the same conclusion roughly a decade apart, and built almost opposite answers. Apple shipped the Secure Enclave Processor (SEP) with the A7 chip in the iPhone 5s in September 2013 -- a dedicated ARM core inside the application SoC, running its own microkernel, talking to the rest of the phone through a hardware mailbox. Microsoft announced Pluton in November 2020, but had been shipping Pluton-class silicon since the original Xbox One in 2013; the Windows version is an on-die security subsystem that pretends to be a TPM 2.0 chip and accepts firmware updates over Windows Update.

Both companies looked at the same threat -- a curious adversary with a screwdriver, an OS-level rootkit, or a $40 logic analyzer -- and decided the answer was to move the keys off the bus. They just disagreed about where to put them.

Hardware Root of Trust (RoT)

A piece of silicon that the rest of a system anchors its security claims to. Keys generated inside the RoT never leave it; measurements taken by the RoT are signed by it; software running outside the RoT cannot rewrite the RoT's behavior. The "root" is the component the rest of the trust chain ultimately reduces to.

Trusted Platform Module (TPM)

A cryptoprocessor specified by the Trusted Computing Group. TPM 2.0 -- the current version, published in 2014 and revised since -- defines Platform Configuration Registers (PCRs), an Endorsement Key burned at manufacture, key creation and sealing primitives, and the TPM2_Quote command for remote attestation. A TPM can be discrete (its own chip), firmware (running inside another security subsystem), or virtual.

This article is the comparison nobody quite writes, partly because both vendors prefer to talk about themselves and partly because the technologies look superficially similar. They are not. The architectures differ. The threat models differ. The patch channels differ. The developer APIs differ enough that the same security goal -- "store this key so nothing but the user's biometric can use it" -- produces wildly different code on each side. By the end of this you should know which one is in your device, why it is there, what it actually defends against, and where the academic literature has already poked holes.

Where the key sits matters. Discrete TPMs expose the chip-to-CPU bus; both SEP and Pluton eliminate that bus by living on the SoC die.

The journey from "trust the OS" to "trust the silicon that even the OS cannot read" is the story of the last fifteen years of platform security. The Surface Pro 3 attack is what happens when you do half of it. Apple's and Microsoft's answers are what it looks like when you do all of it -- in two opposite ways.

2. Apple's answer: a small computer inside your phone

The Apple Secure Enclave Processor is a separate physical CPU core, on the same die as the application processor, with its own memory, its own boot ROM, its own operating system, and its own random number generator. Apple's own framing in the Platform Security Guide is that the SEP "provides the foundation for the secure generation and storage of the keys necessary for encrypting data at rest." That is what it does. How it does it is what is interesting.

2.1 What sits on the die

Inside an A-series or M-series SoC, the SEP is a distinct cluster. According to Apple's published architecture, it includes (Apple Platform Security: Secure Enclave):

  • A dedicated processor core (not an SMT thread, not a shared core) running at a lower clock than the application cores.
  • A Memory Protection Engine (MPE) that encrypts every cache line going to or from SEP-owned DRAM.
  • A True Random Number Generator (TRNG) seeded by silicon noise.
  • A hardware AES engine and a Public Key Accelerator (PKA) for ECC and RSA.
  • A boot ROM masked in silicon at fabrication time.
  • From A13 onward, a relationship with an external Secure Storage Component (SSC) that provides monotonic counters and replay-protected non-volatile storage.

The lower clock speed is not an accident. Apple explicitly notes that the SEP "is designed to operate efficiently at a lower clock speed that helps to protect it against clock and power attacks" (Apple Platform Security). Side-channel resistance starts at the timing budget.

Secure Enclave Processor (SEP)

Apple's dedicated security coprocessor, introduced in the A7 SoC in September 2013. Each Apple-designed SoC since contains one SEP. It runs sepOS, an Apple customization of the L4 microkernel, and exposes its services only via a tightly defined mailbox interface from the application processor.

sepOS

The operating system the SEP runs. Apple describes it as "an Apple-customized version of the L4 microkernel" (Apple Platform Security: Secure Enclave). It is independent of iOS, iPadOS, or macOS, ships in the same firmware bundle as those operating systems, and is signed by Apple. The microkernel design constrains the trusted computing base and forces cross-service communication through IPC.

2.2 The boot chain, in order

When you press the power button, two CPUs come up at once. The application processor begins executing its boot ROM, and the SEP begins executing its own. They are independent boot processes that meet later, after both sides have verified their own firmware.

SEP boot is concurrent with the application processor boot, not subordinate to it. The SEP refuses to run if its image fails Apple's signature check.

The SEP boot ROM is mask ROM. That phrase carries weight. It means the bits were etched into the silicon at fabrication and cannot be rewritten. Apple cannot patch the SEP boot ROM with a software update, even if Apple wants to. This is a feature -- nobody else can patch it either -- and a liability. We will return to it when we discuss checkm8.

After the SEP boot ROM verifies and launches sepOS, the SEP holds two values fused into the silicon at manufacture: a Unique ID (UID) and a Group ID (GID). The UID is per-device. The GID is per-product-family. Both are kept inside the SEP and never appear outside it. Keys derived from the UID are entangled with the specific piece of silicon; you cannot lift a wrapped key, move it to another phone, and unwrap it. The chip is physically the wrap-and-unwrap oracle. The UID is also why factory reset really does erase your data. The data-protection key hierarchy roots in a key derived from the UID plus per-file randomness; rotate the right intermediate and every wrapped file becomes unrecoverable noise.

2.3 The Memory Protection Engine

The SEP's RAM is, physically, in the same DRAM module as everything else. A naive design would let the application processor read it. The MPE prevents that. Every cache line bound for SEP memory is encrypted with AES in XEX mode (a tweakable mode similar to disk-encryption XTS) and authenticated with a CMAC tag. The tweak includes the physical address, so an attacker cannot relocate ciphertext to a different location and have it still verify (Apple Platform Security: Secure Enclave).

Starting with the A11 SoC, the MPE added an anti-replay value per protected block, with the anti-replay tree rooted in dedicated on-die SRAM. The threat this defends against: an attacker who can capture the encrypted DRAM contents at time T1 and overwrite the DRAM with that snapshot at time T2 -- a "store, rewind, replay" attack. Tree-rooted anti-replay defeats it because the root in SRAM no longer matches the old leaves the attacker re-injected.

The tweakable XEX construction has the property that two cache lines containing the same plaintext at different addresses produce different ciphertext, which prevents the pattern-leakage you get from ECB-style encryption. CMAC adds a 128-bit integrity tag.

From the A14 and M1 generation onward, the MPE handles two ephemeral keys: one for SEP-private data and one for data shared with the Secure Neural Engine (used during Face ID matching). The keys are regenerated at every reset, so even capturing the DRAM ciphertext across a reboot leaks nothing.

2.4 The Secure Storage Component

Anti-hammering -- the property that a passcode-guessing attacker is rate-limited and eventually locked out -- requires reliable monotonic state that the attacker cannot rewind. Mask ROM and on-die SRAM are not enough on their own because power loss erases SRAM. From the A13 SoC onward, Apple solves this by adding a separate chip on the logic board: the Secure Storage Component (SSC).

The SSC is small, tamper-resistant, and only the SEP can talk to it. It stores monotonic counters and entropy values that the SEP uses to bind authenticated storage to wall-clock state. If you steal the phone, dump the encrypted blobs, "rewind" by overwriting the flash with an earlier copy, and try to brute-force the passcode again, the SSC's counters no longer match. Anti-hammering survives the rewind.
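A minimal sketch of the freshness check, with invented class names and a deliberately simplified SEP-SSC protocol:

```javascript
// Toy model of anti-hammering state bound to an SSC monotonic counter.
// Names and protocol are illustrative; the real SEP-SSC exchange is not
// public in this detail.
class SecureStorageComponent {
  constructor() { this.counter = 0; }
  incrementAndGet() { return ++this.counter; } // monotonic, cannot rewind
}

class SEPAntiHammer {
  constructor(ssc) { this.ssc = ssc; this.state = { attempts: 0, counter: 0 }; }
  recordAttempt() {
    this.state = { attempts: this.state.attempts + 1,
                   counter: this.ssc.incrementAndGet() };
  }
  // The attacker restores an old flash snapshot of `state`:
  restoreSnapshot(snapshot) { this.state = snapshot; }
  isFresh() { return this.state.counter === this.ssc.counter; }
}

const ssc = new SecureStorageComponent();
const sep = new SEPAntiHammer(ssc);
sep.recordAttempt();
const snapshot = { ...sep.state };   // attacker dumps flash after 1 attempt
sep.recordAttempt();
sep.recordAttempt();                 // now 3 attempts recorded
sep.restoreSnapshot(snapshot);       // "rewind" flash to the old image
console.log("attempts shown:", sep.state.attempts);  // 1 -- looks rewound
console.log("state is fresh:", sep.isFresh());       // false -- SSC counter moved on
```

The snapshot restores the attempt count, but the SSC's counter has already moved on; the SEP treats the mismatch as tampering rather than as a clean slate.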

2.5 The mailbox API

Userspace apps never touch the SEP directly. The application processor reaches it through a hardware mailbox -- a small ring of registers and shared memory that defines the entire API surface from AP to SEP. The kernel exposes higher-level services on top: Touch ID and Face ID matching, Keychain entries flagged with kSecAttrTokenIDSecureEnclave, Data Protection class keys, App Attest signing, and so on.

The constraint is severe. The SEP exposes a fixed set of operations. No app, and no part of the OS, can ask the SEP to do something the firmware did not already implement. Compromise of the AP-side kernel does not produce an arbitrary-code-execution primitive on the SEP. It produces, at most, the ability to call SEP services from a hostile place -- and those services still require user authentication (Face ID, Touch ID, passcode) before they release sensitive operations.

This is the dual of the TPM 2.0 design philosophy. A TPM defines a wide command set in its spec; the firmware implements that command set; software calls those commands. The SEP defines a narrow service set bespoke to Apple's products; everything else is rejected.
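A dispatch-table model makes the constraint concrete. The opcodes and handlers below are invented for illustration; Apple does not document the mailbox protocol at this level:

```javascript
// Sketch of the mailbox discipline: the SEP dispatches only a fixed opcode
// table, so a compromised AP kernel can call services from a hostile place
// but cannot invent new ones.
const SEP_SERVICES = Object.freeze({
  GENERATE_KEY:    (req) => ({ ok: true, publicKey: "..." }),
  SIGN:            (req) => req.userAuthenticated
                     ? { ok: true, signature: "..." }
                     : { ok: false, error: "user authentication required" },
  BIOMETRIC_MATCH: (req) => ({ ok: true, match: false }),
});

function mailboxDispatch(message) {
  const handler = SEP_SERVICES[message.opcode];
  if (!handler) return { ok: false, error: "unknown opcode" }; // everything else rejected
  return handler(message);
}

// Even with full AP-kernel control, this is the whole reachable surface:
console.log(mailboxDispatch({ opcode: "SIGN", userAuthenticated: false }));
console.log(mailboxDispatch({ opcode: "PATCH_FIRMWARE" })); // rejected
```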

If you had to summarize what Apple built in one sentence: they put a second computer in the phone, gave it the keys, gave it a lock on its own door, and left a slot for messages to slide through. That is the design.

3. Microsoft's answer: kill the bus, keep the standard

Apple had the luxury of designing the application processor and the security processor together. Microsoft does not. Microsoft sells software that runs on AMD, Intel, and Qualcomm silicon, on chassis from Dell, HP, Lenovo, Acer, Asus, Microsoft itself, and a long tail of others. The discrete TPM 2.0 standard fixes a contract between Windows and a piece of trusted hardware that any vendor can implement. Pluton's job was to keep that contract while removing the parts that did not survive contact with reality.

The first part of reality Pluton kills is the bus.

3.1 The Xbox lineage

Microsoft did not invent Pluton for Windows. The architecture started in the original Xbox One, shipping in 2013, where it served as the security subsystem that prevented modchipping and verified the boot chain. The same architecture was extended to the Azure Sphere MT3620 microcontroller in 2018, aimed at IoT devices. The Windows variant -- the one most people mean when they say "Pluton" -- was announced in November 2020.

The first shipping Windows silicon containing Pluton was the AMD Ryzen 6000 series ("Rembrandt") in January 2022. Qualcomm Snapdragon 8cx Gen 3 and the Snapdragon X family followed in 2023-2024. Intel's first Pluton-bearing CPU was Core Ultra Series 2 ("Lunar Lake") in late 2024. As of the current Microsoft documentation, the supported matrix is "AMD Ryzen 6000/7000/8000/9000 and Ryzen AI Series; Intel Core Ultra 200V Series, Ultra Series 3; Qualcomm Snapdragon 8cx Gen 3 and Snapdragon X Series" (Microsoft Pluton Security Processor, Microsoft Learn).

This is a deployment claim. Pluton's presence on these CPUs is documented by the silicon vendors and Microsoft. Whether Pluton is enabled by default on a given laptop varies by OEM. Practitioners verifying real fleets need to confirm via Windows' Device Manager and tpm.msc whether the active TPM advertises the Microsoft Pluton manufacturer ID rather than a discrete vendor.

3.2 What sits on the die

Pluton is a security subsystem placed inside the SoC, not on a separate chip on the motherboard. That single architectural decision eliminates the LPC/SPI bus that defeats discrete TPMs. Microsoft's framing in the announcement post: the design targets attacks "where an attacker can steal or temporarily gain physical access to a PC ... on the communication channel between the CPU and TPM" (Microsoft Security Blog).

Microsoft Pluton

Microsoft-authored security subsystem integrated into the SoC die of supported AMD, Intel, and Qualcomm processors. Pluton presents a TPM 2.0 interface to Windows but adds firmware-update via Windows Update and capsule, on-die placement (no external bus to sniff), and a Microsoft-maintained codebase that Microsoft describes as "Rust-based" from 2024 onward on AMD and Intel platforms.

SHACK (Secure Hardware Cryptography Key)

Microsoft's name for keys that are "never exposed outside the protected hardware, even to the Pluton firmware itself" (Microsoft Security Blog, 2020). Conceptually equivalent to Apple's UID-tangled keys: a hardware boundary that even the firmware running on top cannot cross.

Inside the die, Pluton runs its own small processor (the vendors do not publish the ISA in customer-facing docs), with its own ROM, on-die RAM, hardware crypto engines, and a hardware-confined key store. It exchanges messages with the host through a mailbox interface analogous to SEP's, but the higher-level wire protocol it speaks back to the host is TPM 2.0.

3.3 TPM 2.0 as the personality, not the limit

Pluton implements the TPM 2.0 command set. That means BitLocker, Windows Hello, Credential Guard, System Guard, Measured Boot, and Device Health Attestation all work against Pluton with no modifications -- they think they are talking to a TPM 2.0 chip, and they are (Microsoft Pluton as TPM, Microsoft Learn).

TPM 2.0 compatibility is the compromise that buys Microsoft adoption. The entire Windows security stack was already designed against the TCG TPM 2.0 wire protocol. Forcing it onto a new API would have required years of platform engineering. Forcing it onto a new API and getting OEMs to adopt the new chip would have required forever.

Pluton is a TPM 2.0 from Windows's perspective, but also exposes Microsoft-rooted services that pure-spec TPMs do not.

3.4 The patch channel

This is the design feature Microsoft most emphasizes and where the philosophical break with Apple is most visible. Pluton firmware can be updated through two paths (Microsoft Pluton Security Processor, Microsoft Learn):

  1. UEFI capsule update. The Pluton firmware lives on the system's SPI flash and is loaded during early boot. A capsule update -- delivered via the same UEFI mechanism that updates BIOS -- can replace it.
  2. Dynamic loading via Windows Update. Microsoft can ship a new Pluton firmware blob through Windows Update; the OS loader picks it up the next time the subsystem comes online.

Apple's update model is essentially the first path with a different label. The SEP firmware ships inside the iOS/macOS image bundle, signed by Apple, and is loaded at boot. There is no Windows-Update-style ambient channel separate from the OS image.

Patchable. By Microsoft. Through the channel users already trust. This is the single biggest practical advantage Pluton has over discrete TPMs, and the single biggest political problem.

The structure of this difference is what makes the Apple-vs-Microsoft comparison sharp. Apple controls the entire silicon, OS, and update channel. The patch path is fast because everything is one vendor. Microsoft does not control the silicon -- AMD, Intel, and Qualcomm do -- but they wrote the firmware, signed it, and route it through Windows Update. The patch path is fast because Microsoft has been delivering OS-level updates to a billion machines for a quarter century.

3.5 Rust as the firmware base

In 2024 Microsoft began shipping Pluton firmware on AMD and Intel with what the documentation calls "a Rust-based firmware foundation given the importance of memory safety" (Microsoft Pluton Security Processor, Microsoft Learn). This is, as far as we can tell from primary sources, the most prominent shipping production use of Rust inside an x86 platform security subsystem. It addresses the most common class of TPM firmware bugs, which historically have been C memory-safety issues -- bounds errors, use-after-frees, integer overflows.

If the SEP's design philosophy is "small fixed-purpose computer," the Pluton design philosophy is "in-die TPM 2.0 we can actually patch, written carefully enough that we will not have to patch it often." Two different bets about which property mattered most.

4. The tightly-coupled vs SoC-integrated trade-off

So far we have two architectures: SEP as a separate physical core, Pluton as an on-die subsystem. They sound different. They are different. But "separate core" and "on-die subsystem" both refuse the discrete-TPM design where the security chip is off the SoC and reachable over a motherboard bus. Why did both vendors converge there, and what is the trade-off between SEP-style and Pluton-style integration?

4.1 What both reject

The discrete TPM 2.0 model is the baseline. A separate chip, often a Nuvoton, Infineon, or ST device on the motherboard, connected to the platform via LPC, SPI, or I²C. The TCG spec it implements is excellent. The physical placement is the problem.

Pulse Security's attack is the canonical demonstration. With lpc_sniffer_tpm on a $40 FPGA, they probed the LPC bus of a Surface Pro 3 as it booted, captured the bytes the TPM returned for the unsealed Volume Master Key, and used those bytes to decrypt the disk (Pulse Security: TPM Sniffing). The TPM was working correctly. The bus was the problem. There is a mitigation -- pre-boot PIN or USB key, so the VMK is bound to something not on the wire -- but the default BitLocker configuration on most enterprise hardware does not enable it.

Bus snooping (TPM context)

The class of physical-access attacks in which an adversary attaches probes to the motherboard bus carrying TPM responses, captures the cleartext key material the TPM legitimately returns, and uses it directly. Defended against by either eliminating the external bus (Pluton, SEP) or by requiring authenticated/encrypted sessions plus pre-boot user authentication (TPM 2.0 parameter encryption, BitLocker TPM+PIN).

Both SEP and Pluton refuse to expose that bus. The keys never appear on an external wire. That is the structural property both architectures buy by being on the SoC.

4.2 Tightly-coupled (SEP) vs subsystem-on-die (Pluton)

After agreeing on "no external bus," the two diverge sharply on what "on the SoC" should look like.

Two ways to put a root of trust on the SoC: a dedicated physical core (SEP) or a security subsystem sharing the die with the application processor (Pluton).

The SEP is a separate physical core with its own clock, its own voltage rail, and crucially no shared microarchitecture with the application processor. That last point matters because the family of cross-thread, cross-core, and frequency-scaling side channels -- Meltdown, Spectre, Foreshadow, Hertzbleed, and their cousins -- generally requires the attacker code to be co-resident on the same physical pipeline or share a microarchitectural resource. The SEP simply does not share execution resources with potentially hostile code on the application cores (Apple Platform Security: Secure Enclave Processor).

Pluton-on-AMD is implemented inside the AMD Platform Security Processor environment. Pluton-on-Intel is implemented inside Intel's Converged Security and Management Engine. These are pre-existing vendor security subsystems Microsoft layered Pluton atop. The Pluton subsystem is logically separate, with its own firmware and its own key store. Whether it has a fully separate physical voltage rail and clock domain from the application cores is not something the public documentation states clearly, and the answer almost certainly varies by silicon partner.

This is a place where the comparison is hardest to make crisply. Apple has a single answer because Apple makes one SoC family. Microsoft has three answers because Pluton lives inside whatever security subsystem AMD, Intel, or Qualcomm already provide. The detail-level guarantees vary.

4.3 The SGX cautionary tale

There is a third design point worth flagging because both vendors implicitly chose against it: putting the trusted execution environment inside the application CPU cores themselves. Intel SGX, introduced in 2015, did exactly that. Enclaves were memory regions with hardware access control inside the same cores running ordinary software.

SGX was a beautiful idea and an academic catastrophe. Foreshadow, ZombieLoad, SgxPectre, Plundervolt, and a long sequence of related attacks reused the side-channel-rich microarchitecture of modern Intel cores to leak enclave contents. Intel deprecated SGX on most consumer processors in 2022, retaining it on server SKUs for confidential computing scenarios where the threat model is different.

The lesson is something both Apple and Microsoft seem to have absorbed: a trusted execution environment that shares microarchitectural state with the workloads it must defend against is structurally compromised, because microarchitecture is too rich and too leaky to isolate perfectly. The SEP rejects this by living on its own core. Pluton rejects it by living in a separate subsystem.

4.4 The trade-off in one sentence

A dedicated core (SEP) maximises side-channel resistance and minimises attack surface, at the cost of proprietary lock-in and zero portability. An on-die subsystem (Pluton) preserves the TPM 2.0 standard, ships on silicon from three vendors, and inherits the security guarantees of the underlying vendor security subsystem -- whose history, as we will see, is less reassuring than Apple's monopoly on its own silicon.

SEP wins on isolation. Pluton wins on portability. Neither wins on both. The choice you make at the SoC level constrains every API, every patch path, and every threat-model claim downstream.

5. The APIs developers actually call

Architectures are interesting. What ships in production code is what determines whether developers use these things correctly. The API surfaces are wildly different, and the difference matters.

5.1 Apple: SecKey, App Attest, LocalAuthentication

On Apple platforms, the SEP is exposed through a handful of frameworks. The most common entry point is SecKey in the Security framework, with key attributes that bind the key to the SEP:

  • kSecAttrTokenIDSecureEnclave makes the key SEP-resident.
  • kSecAttrAccessControl with LAContext adds biometric or passcode gating.
  • kSecAttrIsPermanent puts it in the Keychain.

The key itself never leaves the SEP. The application receives an opaque handle. Asking the framework to sign a message turns into a mailbox call to the SEP, which evaluates the access-control policy (e.g., "the user must FaceID-authenticate within the last five seconds") and either signs or refuses.

Conceptual model (JavaScript): how a SEP-bound P-256 signature flows

// This is a conceptual model of what happens when iOS code asks the SEP
// to sign a message with a key whose private half lives inside the SEP.
// The real code is Swift + Security.framework; this JS captures the logic.

function generateSEPKey(accessControl) {
  // SEP generates the keypair internally
  const priv = sepRandomBytes(32);            // never leaves the SEP
  const pub  = ecP256ScalarMul(priv, BASE_G);
  const blob = aesKeyWrap(sepUIDDerivedKey, priv);
  return { publicKey: pub, handle: opaque(blob), policy: accessControl };
}

function sign(handle, message) {
  const policy = lookupPolicy(handle);
  // SEP enforces the access control: must the user have authenticated recently?
  if (!policy.satisfied(LAContext.current)) {
    return { error: "user authentication required" };
  }
  const blob = lookup(handle);
  const priv = aesKeyUnwrap(sepUIDDerivedKey, blob);
  return ecdsaP256Sign(priv, sha256(message));
}

const k = generateSEPKey({ requireBiometric: true });
console.log("Public key returned to the app:", k.publicKey);
console.log("Private key location: inside SEP, never accessible to app code");


Beyond SecKey, the SEP underpins:

  • LocalAuthentication -- Face ID / Touch ID matching happens inside the SEP. The biometric template never leaves the SEP, and the application is only told yes/no.
  • DeviceCheck and App Attest -- documented in the Apple Platform Security Guide. App Attest gives each app installation a SEP-rooted asymmetric key whose certificate chains to Apple's CA, letting servers verify that a sign-up came from a genuine app on a genuine Apple device.
  • Data Protection / FileVault -- per-file class keys are wrapped under SEP-held intermediate keys.
  • Apple Pay -- payment credentials are SEP-resident and gated on biometric/passcode authentication.

App Attest

Apple's hardware-backed app integrity service. Each install of each app receives a unique SEP-resident key whose attestation certificate, signed by Apple, lets a back-end server verify that the request originates from a non-tampered installation. The closest cross-platform analogue is Google Play Integrity API; the closest discrete-TPM analogue is TPM 2.0 attestation, but App Attest is more strongly bound to the specific app installation.

5.2 Microsoft: TBS, NCrypt, Pluton-rooted credentials

On Windows, the TPM 2.0 personality means Pluton is reached through the same APIs as any TPM:

  • TPM Base Services (TBS) -- the low-level Win32 API for sending TPM 2.0 commands.
  • CNG (Cryptography Next Generation) with NCrypt and the Microsoft Platform Crypto Provider -- the higher-level key API that asks "store this key in the TPM, gated on the user's PIN."
  • BCryptDecrypt / BCryptSignHash as the in-process crypto API on top.

The DPAPI key-protection model -- file/blob protection rooted in user logon credentials -- has a CNG variant documented as CNG DPAPI that integrates with TPM-rooted hierarchies. Above that sit the consumer-facing systems: BitLocker for disk encryption, Windows Hello for credential storage, Credential Guard for isolating LSA secrets in a virtualization-based security enclave, and Microsoft Entra ID conditional access for cloud sign-in.
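The NCrypt call sequence can be sketched conceptually. The real API is C against ncrypt.h; the class below is a simulation, with the actual CNG entry point named in the comment on each step and everything else (parameters, return shapes) simplified:

```javascript
// Conceptual model of creating a PIN-gated, platform-resident key via the
// Microsoft Platform Crypto Provider. JS simulation; not real Win32 code.
class PlatformCryptoProvider {
  constructor() { this.keys = new Map(); }
  createPersistedKey(name) {                 // NCryptCreatePersistedKey
    const kp = { name, finalized: false, pin: null };
    this.keys.set(name, kp);
    return kp;
  }
  setPinProperty(key, pin) {                 // NCryptSetProperty(NCRYPT_PIN_PROPERTY)
    key.pin = pin;
  }
  finalizeKey(key) { key.finalized = true; } // NCryptFinalizeKey
  signHash(key, pin, hash) {                 // NCryptSignHash
    if (!key.finalized) throw new Error("NTE_BAD_KEY_STATE");
    if (key.pin !== null && pin !== key.pin) throw new Error("NTE_PERM");
    return `sig(${key.name},${hash})`;       // signing happens inside the TPM boundary
  }
}

// Usage: open the provider, create a PIN-gated key, sign with it.
const prov = new PlatformCryptoProvider();   // NCryptOpenStorageProvider(MS_PLATFORM_CRYPTO_PROVIDER)
const key = prov.createPersistedKey("login-key");
prov.setPinProperty(key, "123456");
prov.finalizeKey(key);
console.log(prov.signHash(key, "123456", "deadbeef"));
```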

TPM 2.0

The TCG TPM 2.0 Library Specification defines the command set, object hierarchy, and key-handling semantics of TPM 2.0 chips. Commands include TPM2_CreatePrimary, TPM2_Create, TPM2_Load, TPM2_Seal, TPM2_Unseal, TPM2_Quote, and TPM2_Certify. Both discrete TPMs and Pluton implement this command set.

Roughly equivalent API stacks on Apple and Windows. SEP exposes proprietary services; Pluton presents the TPM 2.0 wire protocol but adds Microsoft-rooted extensions.

5.3 What the API shape tells you

The SEP API forces every call into the small set of operations the SEP firmware implements. There is no TPM2_PolicyLocality(2) equivalent or TPM2_PolicyOR combinator on the SEP. You ask for a key, you ask for a signature, you ask for a biometric match, and that is mostly the surface. From a developer's point of view, the SEP feels like a very small set of well-defined building blocks.

The TPM 2.0 API, by contrast, is enormous. There are well over a hundred commands. The TPM has policy expressions, sessions, hierarchies (storage/owner, endorsement, platform, null), and a half-dozen attestation primitives. This expressiveness was the right call for an open standard -- the TCG had to accommodate every conceivable use case across two decades. It also means that "wrote TPM 2.0 code correctly" is a measurable engineering skill rather than a default.

5.4 A note on what is not exposed

Neither platform exposes the device's per-silicon root key to applications. On Apple, the UID is sealed inside the SEP; on Microsoft, the Pluton Endorsement Key is unique per chip but applications interact only with the AKs (Attestation Keys) derived from it. This is deliberate: per-device permanent keys, if exposed, enable cross-service tracking. The exposed primitives are either per-app/per-installation (App Attest), per-session (TPM2_Quote with a fresh AK), or ephemeral (a freshly-generated SEP key).

That choice maps to a privacy property we will pick up in the next section: how each platform answers "prove this is a real device" without becoming "track this specific user across every service."

6. Identity, attestation, and the privacy problem

The deepest difference between Apple and Microsoft is not architectural. It is the answer each one gives to a question that sounds simple: what does it mean to prove a device is real?

6.1 Why attestation is hard

A naive answer is: burn a unique identifier into every chip and have the chip sign messages with the corresponding private key. That works for proof. It also creates a per-device pseudonym that every service can recognise and correlate. The naive answer is a surveillance disaster.

A better answer keeps the unforgeability of "this signature came from a real device" and adds an unlinkability property: the signature does not identify which device, only that it is genuine. This is what cryptographers call anonymous attestation, and the canonical construction is DAA.

Direct Anonymous Attestation (DAA)

A class of cryptographic protocols that let a hardware token sign messages in a way that proves it belongs to a group of legitimate devices without revealing which device. Introduced by Brickell, Camenisch, and Chen in 2004 as part of the TPM 1.2 specification work, with the elliptic-curve variant ECDAA standardized for TPM 2.0. See the Wikipedia overview for the protocol skeleton.

The mathematics of DAA rests on group signatures with selective linkability. A device runs the join protocol once with a group issuer (the role the Privacy CA played in the older attestation model) and receives a credential. It can then prove, via a Camenisch-Lysyanskaya-style signature of knowledge, that it holds such a credential without revealing which one. With ECDAA, the join and signing operations cost roughly a couple of elliptic-curve multiplications.

The privacy property comes with caveats. Verifiers can opt into "basename" linkability, where signatures from the same device addressed to the same service are linkable -- letting a service recognise a returning user without letting it correlate across services. The math has been deployed in TPM 2.0 since the 2014 spec.

6.2 The Microsoft path: TPM 2.0 attestation plus Microsoft-rooted services

Pluton inherits TPM 2.0's attestation primitives. The standard flow:

  1. Generate an Attestation Key (AK) inside the TPM, with a private half that never leaves.
  2. Certify the AK to a Privacy CA (or via ECDAA) using the Endorsement Key.
  3. Hash the boot configuration into Platform Configuration Registers (PCRs) during measured boot.
  4. Have the relying party send a fresh nonce.
  5. Issue TPM2_Quote(AK, PCR_mask, qualifying_data=nonce).
  6. Send the quote, the AK certificate, and the boot event log to the relying party.
  7. The relying party replays the event log, checks that the replayed PCRs match the quoted ones, validates the AK certificate chain, and validates the signature.
In pseudocode:

attest(nonce, pcr_mask):
    AK = TPM2_Create(parent=EK, type=signing)
    AK_cert = privacy_CA.certify(AK_pub, EK_cert)    # or ECDAA group sig
    quote = TPM2_Quote(AK, pcr_mask, qualifying_data=nonce)
    return (quote, AK_cert, event_log)

verify(nonce, quote, AK_cert, event_log, expected_pcrs):
    assert privacy_CA.verify(AK_cert)
    assert ECDSA_verify(AK_cert.pub, quote.sig, quote.body)
    assert quote.qualifying_data == nonce
    assert replay_log(event_log) == quote.pcrs == expected_pcrs
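Step 7, replaying the event log, is the part relying parties most often get wrong. A minimal runnable sketch of the extend-and-replay logic (simplified: real logs carry per-event metadata and multiple hash banks, and the measurements here are hypothetical):

```python
import hashlib

def replay_log(event_log, pcr_count=24):
    """Recompute PCR values from a measured-boot event log.

    Each event extends one PCR: new = SHA-256(old || measurement).
    The verifier compares the replayed values against the quoted PCRs.
    """
    pcrs = {i: b"\x00" * 32 for i in range(pcr_count)}
    for pcr_index, measurement in event_log:
        pcrs[pcr_index] = hashlib.sha256(pcrs[pcr_index] + measurement).digest()
    return pcrs

log = [
    (0, hashlib.sha256(b"firmware").digest()),    # hypothetical measurements
    (4, hashlib.sha256(b"bootloader").digest()),
    (4, hashlib.sha256(b"kernel").digest()),
]
replayed = replay_log(log)

# Extension order matters: swapping two events changes the final PCR value,
# which is exactly how measured boot detects a reordered boot chain.
assert replay_log([log[0], log[2], log[1]])[4] != replayed[4]
```

The extend operation is the whole trick: PCRs can never be written directly, only folded forward, so a log that replays to the quoted values is the only log that could have produced them.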

That covers raw TPM 2.0. Microsoft layers on top a service called Device Health Attestation that does the verifier work as a cloud service, supplying Reference Integrity Manifests for known-good Microsoft-signed boot states. Microsoft Entra ID conditional access policies can then refuse sign-in to devices whose Pluton-signed health attestation does not match an expected baseline (Microsoft Pluton Security Processor, Microsoft Learn).

The interesting privacy property here is that ECDAA-grade unlinkability is available through TPM 2.0, but Microsoft's deployed services tend to use Privacy-CA-style flows in which the AK certificate is a stable, reusable identifier. Whether a given Microsoft attestation flow is anonymous-unlinkable or pseudonymous-linkable is therefore a per-service detail rather than a platform property.

6.3 The Apple path: rooted in Apple's CA, scoped per app

Apple's DeviceCheck and App Attest take a different approach. App Attest gives each installation of each app a unique SEP-resident key, with an attestation certificate that chains to Apple's CA. An app proves its integrity to its own back end in three moves: the server sends a nonce, the SEP signs it with the per-install key, and the certificate chain back to Apple's CA establishes that the key was issued on a genuine Apple device.

The privacy property is scoped differently from DAA. The key is per-installation, which means uninstalling and reinstalling the app generates a new key with no link to the old one. Across different apps on the same device, the keys are independent -- so two apps cannot collude with their respective back-ends to detect they are on the same phone. The trade-off: there is no formal anonymity within a group; the key is identifiable to its single installation, but that installation is fresh each install.
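The server side of that flow can be sketched as a small state machine. This is a toy under stated assumptions: `AppAttestVerifier` and its methods are invented names, and real verification also parses the CBOR attestation object and validates Apple's certificate chain, both omitted here. What survives the simplification is the two checks a back end must get right, nonce freshness and assertion-counter monotonicity:

```python
import secrets

class AppAttestVerifier:
    """Toy server-side sketch of an App-Attest-style challenge flow."""

    def __init__(self):
        self.pending = {}   # issued challenge -> still unused?
        self.counters = {}  # key_id -> highest assertion counter seen

    def issue_challenge(self) -> bytes:
        """Mint a fresh one-time nonce for the client to get signed."""
        nonce = secrets.token_bytes(32)
        self.pending[nonce] = True
        return nonce

    def accept_assertion(self, key_id: str, nonce: bytes, counter: int) -> bool:
        """Accept a (signature-validated) assertion. Signature checking and
        CA-chain validation are assumed to have happened already."""
        # The nonce must be one we issued, and usable at most once.
        if not self.pending.pop(nonce, False):
            return False
        # The counter must strictly increase; a stale counter suggests a
        # cloned key or a replayed assertion.
        if counter <= self.counters.get(key_id, -1):
            return False
        self.counters[key_id] = counter
        return True
```

A back end that skips either check quietly downgrades App Attest from cryptographic proof to a decorative header.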

DeviceCheck is older and weaker. It gives an app a two-bit value the developer can set per device, retrievable on future runs. It is fraud-signal infrastructure, not cryptographic proof.

6.4 Where the two converge: FIDO2/WebAuthn

Both platforms expose their hardware-backed credentials through a single cross-platform standard: FIDO2/WebAuthn. When a browser asks "create a credential bound to this origin, hardware-resident if possible," the underlying operating system asks SEP or Pluton to generate the key. The resulting public-key credential, signed by the device's attestation key, is what the relying party verifies (FIDO Alliance).

FIDO2/WebAuthn is the cross-platform layer that papers over the SEP-vs-TPM difference. On Apple, the underlying key is SEP-resident; on Windows, it is TPM-2.0-resident; the browser-facing protocol is identical.

FIDO2/WebAuthn is the most boring and most important fact about modern hardware roots of trust: from the application's point of view, you no longer need to know whether you are talking to SEP or Pluton or a discrete TPM. The same JavaScript runs on all of them. We will return to FIDO2 in Section 8.

7. What has actually broken

Architecture is a story you tell about a system. Attacks are the system's reply. Both SEP and Pluton have a public attack history; reading it carefully is the fastest way to understand the real threat model rather than the marketing one.

7.1 checkm8 and the unpatchable boot ROM

In late 2019, the researcher axi0mX published checkm8, an exploit for a use-after-free in the SecureROM USB DFU stack of Apple SoCs from A5 through A11, released as part of the open-source ipwndfu tool. The advisory carries CVE-2019-8900 and CERT/CC VU#941987. Because SecureROM is mask ROM -- etched into the silicon, immutable -- Apple cannot patch it. The only mitigation was new silicon. A12 and later are immune; earlier devices are permanently affected.

What checkm8 buys an attacker is application-processor code execution at boot time, on a device they have physical access to. That is significant. It enables forensically sound extraction tooling -- the Elcomsoft writeup walks through exactly which iPhone models and iOS versions are supported. It also covers the Apple T2 chip used in 2018-2020 Intel Macs, which is built on the same A10-family silicon.

But checkm8 does not, by itself, break SEP secrets. The SEP is still gated by the device passcode and the data-protection class keys. An attacker with checkm8 can run code on the AP, but they still need the passcode to unlock the user's protected data (CERT/CC VU#941987). The forensic value of checkm8 comes from being able to brute-force passcodes more effectively, capture keyboard state, and access classes of data not bound to a passcode -- not from extracting SEP-held keys directly.

The Pangu team's "Blackbird" SEPROM exploit, presented at MOSEC 2019, reportedly compromised SEPROM on A10/A10X devices. Apple has not published a detailed advisory for that work and the original presentation materials are not in the verified-sources list, so we mention it only by way of acknowledging that even SEP boot ROMs have a finite security lifetime. The architectural point stands: any unpatchable ROM becomes a permanent liability when a bug is found in it.

7.2 LPC sniffing and discrete TPMs

We opened with this attack and it deserves a second pass in the context of Pluton's design. The Pulse Security writeup demonstrates extraction of the BitLocker Volume Master Key from a Microsoft Surface Pro 3 (TPM 2.0) and a Lenovo laptop (TPM 1.2) using a $40 FPGA on the LPC bus. The attack requires physical access for under an hour and modest soldering skill.

This is the textbook case where Pluton is structurally better than discrete TPMs: there is no external bus to sniff because the security subsystem lives on the SoC die. The same attack against a Pluton-enabled CPU is not merely harder; it is physically impossible, because there is no exposed bus to attach probes to.

That is not the same as "Pluton is unattackable" -- it just means this specific attack class is closed.

7.3 faulTPM and the AMD PSP

The most consequential publication on Pluton-adjacent silicon is Werling, Buhren, Jacob, and Seifert's 2023 USENIX WOOT paper "faulTPM". The attack: voltage fault injection against AMD's Platform Security Processor (PSP), the TEE on which AMD's fTPM runs, on Zen 2 and Zen 3 CPUs. The result: full extraction of the fTPM key derivation seed. With that seed, the attackers decrypted all sealed objects regardless of PCR policy or anti-hammering, and recovered the BitLocker VMK on a Lenovo Ideapad. The reproducible attack code is PSPReverse/ftpm_attack on GitHub.

Several careful observations:

  • The published attack targets non-Pluton AMD fTPM. Pluton-on-AMD is a separate code path; faulTPM as published does not directly extract Pluton state.
  • Pluton-on-AMD runs in the PSP environment. The underlying TEE that faulTPM compromises is the same TEE Pluton-on-AMD rides on. Whether the additional hardening Pluton adds is sufficient to defeat fault injection at the PSP level is an open empirical question.
  • There is no published voltage-glitch attack against Microsoft Pluton specifically as of May 2026 in the verified sources surveyed. Absence of evidence is not evidence of absence; serious researchers are reportedly working on it.
Voltage fault injection (VFI)

A physical attack class in which the attacker briefly reduces or perturbs the supply voltage to a target chip at a precisely timed moment, causing it to mis-execute an instruction in a controlled way. With sufficient practice, VFI can be used to skip authentication checks, leak intermediate values, or corrupt key derivation. Defenses include redundant voltage sensors, double-execution of sensitive operations, and physically separating the voltage domain of the security subsystem -- mitigations Apple alludes to for SEP and Microsoft alludes to for Pluton, but neither vendor publishes a complete defensive model.
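The double-execution defense can be sketched as a software pattern. This is purely illustrative -- nothing glitches in Python, and real implementations live in firmware or hardened silicon, typically with randomized timing between the two runs -- but the fail-closed shape is the point:

```python
import hmac

def redundant_compare(secret: bytes, provided: bytes) -> bool:
    """Toy sketch of a fault-injection countermeasure.

    Execute the sensitive comparison twice; a single glitched instruction
    cannot flip both results, so disagreement signals a likely fault and
    the check fails closed rather than open.
    """
    first = hmac.compare_digest(secret, provided)
    second = hmac.compare_digest(secret, provided)
    if first != second:
        raise RuntimeError("fault suspected: redundant results disagree")
    return first and second
```

The defensive logic is intentionally paranoid: an attacker who can skip one branch still has to skip the second check and the disagreement test, which multiplies the required glitch precision.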

What faulTPM means for your threat model

If your adversary is a state-level laboratory with $50K of equipment and a few hours of physical access, no commodity hardware root of trust on the market today is fully resistant to fault injection. The realistic question is "how much does extracting the key cost, and is that cost above the value of what is protected?" For consumer threat models, faulTPM is exotic; for high-value enterprise or dissident use cases, it is in scope.

7.4 What is not known to be broken

Modern SEP (A14+/M-series) has no publicly disclosed extraction attack as of the May 2026 verified sources reviewed. The combination of dedicated core, MPE with anti-replay, lower clock, and SSC-backed replay protection has held up. This is consistent with -- but does not prove -- the architectural claim that the dedicated-core design closes the side-channel and co-execution attack surface.

Pluton with the 2024+ Rust firmware foundation has no publicly disclosed direct extraction attack. The faulTPM family of attacks remains an open concern at the PSP layer; the LPC bus class is closed by design; firmware bugs are reduced (not eliminated) by the move to memory-safe code.

Attack surface comparison: which classes apply to which architecture. Pluton's structural wins are bus elimination and patchability; SEP's structural wins are physical core isolation and immutable boot ROM (which is also its specific historical liability via checkm8).

The honest summary is that as you move from discrete TPMs to fTPMs to Pluton to SEP, the attack surface shrinks but the residual attacks get more expensive rather than disappearing. The faulTPM line is still the academic state of the art in showing this.

8. Cross-platform standards: the layer where the divide gets papered over

If you are a web developer in 2026 and a user asks "how do I sign into your site with my Touch ID or my Windows Hello fingerprint?" the answer is the same in either case: WebAuthn. The standard does not care which hardware root of trust the OS happens to expose underneath.

8.1 FIDO2/WebAuthn as the lingua franca

The FIDO Alliance defines the protocols. WebAuthn is the W3C JavaScript API; CTAP (Client to Authenticator Protocol) is the underlying transport between the browser/OS and the authenticator. The authenticator can be a USB security key, a phone, a built-in platform authenticator backed by SEP or Pluton, or something else entirely. The relying party sees the same registration and authentication ceremony in all cases.

The handful of properties WebAuthn guarantees -- origin binding, user gesture, fresh signature per challenge -- are independent of the silicon underneath. The handful of properties it does not try to guarantee -- "is this device freshly compromised by a kernel rootkit" -- are not fixable at the protocol layer either; that is what attestation extensions are for.

8.2 Where attestation extensions vary

WebAuthn defines optional attestation extensions that let a relying party request a hardware-backed proof that the authenticator is genuine. Apple's attestation through WebAuthn rides on App Attest infrastructure; Microsoft's rides on TPM 2.0 attestation. The receipts differ in format and certificate chain, but the higher-level question "does the public key come from genuine hardware" gets answered on both platforms.

For most relying parties, the cross-platform truth is simpler than the underlying mechanics: ask for a hardware-backed credential, accept the WebAuthn response, validate the signature, and let the platform handle what kind of silicon was involved.

8.3 TPM 2.0 as the other lingua franca

TPM 2.0 itself plays this role in non-web contexts. Enterprise tools that need to attest a device's boot state -- Microsoft Entra ID conditional access, MDM compliance evaluators, Linux remote attestation frameworks -- speak TPM 2.0. Pluton exposes the TPM 2.0 wire protocol, so these tools work unchanged (Microsoft Pluton as TPM, Microsoft Learn).

Linux on Apple Silicon (Asahi) currently cannot use SEP for analogous attestation; Apple does not expose the SEP to non-Apple operating systems, and there is no TPM 2.0 emulation. This is a real gap for users who want Apple hardware with a non-Apple OS.

8.4 The Android third corner

This article is about Apple vs Microsoft, but a complete picture must mention that Android has its own hardware root of trust story rooted in Trusty/TEE-style designs on ARM TrustZone plus discrete StrongBox elements on Pixel-class hardware. Cross-platform mobile development frequently abstracts SEP and Android StrongBox under a common interface (e.g., React Native's keychain modules), and the privacy and attestation properties of the two systems are not identical but rhyme. Google Play Integrity API plays the role App Attest plays on iOS.

9. Deployment dynamics: who ships what, where, when

The two industries have different shapes, and that shapes the deployment story.

9.1 Apple: vertical integration, total reach

Every shipping Apple device since the iPhone 5s contains a SEP, by virtue of every shipping Apple SoC containing one. That includes (Apple Platform Security: Secure Enclave):

  • iPhone 5s and later (A7+)
  • iPad Air and later
  • Apple Watch Series 1 and later
  • Apple TV HD and later
  • HomePod and HomePod mini
  • Apple Vision Pro
  • All Apple Silicon Macs (M1, M2, M3, M4 families)
  • All Intel Macs from 2018 to 2020 (via the T2 chip)

There is no SKU differentiation. There is no "Pro vs Air" split on whether security hardware is present. You buy a current-generation Apple device, you get the SEP. This is the upside of vertical integration: deployment by default.

The downside is that nothing else gets the SEP. Linux on Apple Silicon -- the Asahi Linux project -- cannot use the SEP for keychain operations, FileVault wrapping, or attestation. Apple does not expose the SEP outside of macOS, iOS, iPadOS, watchOS, tvOS, and visionOS. The hardware is universal in Apple's product line and absent everywhere else.

9.2 Microsoft: open multivendor, opt-in adoption

Pluton ships in silicon Microsoft does not make. That changes the deployment story in two ways:

  1. Vendor availability. As of the current Microsoft documentation, Pluton is present in AMD Ryzen 6000 and later, Intel Core Ultra Series 2 and later, and Qualcomm Snapdragon 8cx Gen 3 and Snapdragon X Series. Anything older still uses discrete TPM 2.0 or vendor fTPM.
  2. OEM enablement. The chip can be physically present and disabled in UEFI. Microsoft has been pushing OEMs to ship Pluton enabled by default on Copilot+ PCs, but the universe of laptops is heterogeneous, and the practitioner answer is "check tpm.msc to see what manufacturer ID is reported."

Default-enabled-on-shipping-hardware is documented for Surface Laptop 7 and Surface Pro 11 Copilot+ PCs. Various Lenovo ThinkPad Z, Dell Latitude, and HP EliteBook configurations follow (Microsoft Pluton Security Processor, Microsoft Learn). On other devices Pluton may be present but disabled in firmware, falling back to discrete TPM or vendor fTPM.

This is a deployment claim that ages quickly. The shipping matrix shifts every six to twelve months as new SoCs come to market and OEMs rev their UEFI defaults. The verification workflow is the same regardless: Get-PnpDevice and tpm.msc on the actual hardware tell you what is active.

9.3 The patch-channel difference, made concrete

Apple ships SEP firmware inside its OS update. When the user installs iOS 19.4 or macOS 16.2, the bundle includes a new sepOS image; the device verifies and loads it during the next boot (Apple Platform Security).

Microsoft ships Pluton firmware through Windows Update and UEFI capsules. The OS-driven path lets Microsoft push a firmware refresh to billions of machines without OEM cooperation. The capsule path covers the case where the firmware is needed during early boot before Windows itself is in control.

Discrete TPMs occupy the third position: firmware updates exist but require an OEM-issued utility that few users ever run. This is why most enterprise TPMs in the field run firmware from 2020 or earlier.

9.4 The economic and political layer

Apple controls every step from sand to support page. The benefit is consistency. The cost is that Apple decides what the SEP can and cannot do, with no externally visible audit, and the customer cannot verify the firmware. For the Apple-customer market, that has not been a deal-breaker.

Microsoft controls the Pluton firmware. The benefit is that one team's engineering effort propagates across three silicon vendors and thousands of OEM SKUs. The cost is that the OS update channel and the security update channel collapse into one Microsoft-controlled flow. Critics describe this as platform lock-in; supporters describe it as the only way to actually patch the silicon at scale. Both readings have evidence behind them.

The same patch channel that protects users from unpatched silicon bugs is the patch channel a hypothetical compelled-update scenario would use. There is no commodity product that gives the device owner an independent veto on root-of-trust firmware updates.

This is a real open problem, not a fictional one. The Trusted Computing Group has a notion of "owner-authorized" TPM hierarchies; Azure Sphere uses a three-key model in which device owner, vendor, and Microsoft all hold signing capabilities for different scopes. Nothing in the commodity consumer space has yet shipped a model where the device owner can veto a vendor-signed firmware update on the security subsystem.

10. Where this goes next

The honest answer is that the immediate future is more of the same with three new pressures.

10.1 Post-quantum migration

The cryptographic primitives currently rooted in both platforms -- ECDSA P-256 in the SEP, RSA-2048 and ECDSA in TPM 2.0 -- are not post-quantum-safe. NIST standardized ML-KEM and ML-DSA in FIPS 203 and FIPS 204 in 2024 (the NIST publication URLs are outside our verified-source set, so this paragraph states the timeline at the policy level only). Migrating hardware-fused attestation roots to post-quantum schemes is genuinely hard because the silicon-burned UID-equivalent keys are baked at fabrication time and cannot easily be replaced.

The likely path: hardware retains agility at the wrapping layer (the unique chip key) while the attestation key types evolve. TPM 2.0 already supports algorithm agility in the spec, which is the kind of foresight you only appreciate a decade after it was added. SEP's key wrapping is bespoke; Apple has not published a PQC migration plan in the verified sources reviewed.

This is a place where the comparison gets uncertain. Both vendors will need to migrate. Neither has shipped a primary post-quantum-rooted attestation flow in their public 2026 documentation as far as we can verify.

10.2 Confidential computing convergence

The same silicon technologies that build SEP and Pluton are now powering confidential computing -- AMD SEV-SNP, Intel TDX, ARM CCA. These extend the "untrusted host kernel" threat model from disk encryption and credential storage to entire virtual machines. The trust roots of confidential computing currently live in the same chips' security subsystems: AMD's PSP holds SEV-SNP attestation keys; Intel's CSME, working with TDX, holds equivalent keys.

Pluton-on-Intel and Pluton-on-AMD will likely inherit responsibilities here as Microsoft consolidates more of the security subsystem under the Pluton name. Apple has not publicly signaled equivalent ambitions for SEP on the server -- Apple's server presence is mostly internal.

10.3 The AI agent identity problem

This is the next decade's question. When your laptop runs an autonomous AI agent that signs cloud API requests on your behalf, what attests to the agent's identity? The current architectures attest to the device and to user gestures, not to the agent. There is no shipping primitive in either SEP or Pluton that says "this signature came from agent X running on device Y, gated by user policy Z that the user actually consented to."

A defensible reading is that both vendors are moving slowly toward agent-bound credentials, but neither has published a clean primitive. This is an open design space. We mark it as a place to watch rather than a place where shipping products have answers.

10.4 The convergence that probably will not happen

People periodically suggest that Apple should expose the SEP via TPM 2.0 for cross-platform compatibility, or that Microsoft should ship a dedicated security core like SEP. Neither is likely. Apple's value proposition rests on vertical integration; opening the SEP to non-Apple operating systems would dilute it. Microsoft's value proposition rests on multi-vendor compatibility; mandating a SEP-style dedicated core would fragment their silicon partner relationships.

The structural diversity is here to stay. FIDO2/WebAuthn and TPM 2.0 are how the two systems will continue to interoperate without converging on a single hardware architecture. That is fine. It is even, arguably, good -- a monoculture would be worse for security than a duopoly with different threat-model trade-offs.

The interesting question for the next decade is not whether Apple or Microsoft picks a different silicon strategy. It is whether the cross-platform standards layer -- WebAuthn, TPM 2.0, FIDO2 -- evolves fast enough to expose new security primitives (post-quantum attestation, agent identity, owner-vetoable updates) before any one vendor ships proprietary equivalents.

11. Frequently asked questions


Is Pluton just a TPM 2.0, or is it more?

Pluton presents a TPM 2.0 personality to Windows -- so BitLocker, Windows Hello, Credential Guard, and TPM-aware enterprise tools work unchanged -- but it is also more than a TPM 2.0. It exposes Microsoft-rooted services beyond the TCG spec, accepts firmware updates through Windows Update rather than only OEM utilities, lives on the SoC die rather than the motherboard (closing the LPC sniffing attack class), and -- from 2024 -- runs a Rust-based firmware foundation on AMD and Intel platforms (Microsoft Pluton Security Processor, Microsoft Learn).

Why doesn't Apple just expose the SEP through TPM 2.0?

Two reasons. First, the SEP was designed before TPM 2.0 became the relevant cross-platform standard for Apple's product mix; SEP's API surface is bespoke to Apple's frameworks (SecKey, App Attest, LocalAuthentication, Keychain). Second, exposing the SEP via TPM 2.0 would mean making the SEP usable from non-Apple operating systems on Apple hardware -- which is not how Apple ships its platforms. The SEP's lack of TPM 2.0 personality is a deliberate product decision, not a technical limitation.

Does checkm8 break the Secure Enclave?

No -- not directly. Checkm8 (CVE-2019-8900) exploits the SecureROM USB DFU stack on A5-A11 Apple SoCs and the T2 chip in 2018-2020 Intel Macs, giving an attacker with physical access application-processor code execution at boot. The SEP itself remains gated by the device passcode and the data-protection class keys (CERT/CC VU#941987). The forensic value of checkm8 is the ability to mount passcode brute-force more effectively and access classes of data not bound to a passcode, not direct SEP-key extraction.

Is BitLocker on Pluton safe from the LPC sniffing attack?

Yes. The Pulse Security TPM-sniffing attack works because the discrete TPM returns the Volume Master Key over an external motherboard bus that an attacker can probe. Pluton lives on the SoC die; there is no external bus to attach probes to. The attack is structurally impossible against Pluton-rooted BitLocker. On laptops with discrete TPMs, the mitigation remains BitLocker with pre-boot PIN or USB key authentication.

Does faulTPM affect Pluton?

The published faulTPM attack targets AMD's fTPM running in the AMD Platform Security Processor (PSP) on Zen 2 and Zen 3 CPUs, not Pluton specifically. However, Pluton-on-AMD is implemented atop the same PSP environment, so the underlying TEE is fault-attackable in principle. There is no publicly disclosed Pluton-targeted voltage-glitch attack as of May 2026 in the verified sources reviewed; whether Pluton's additional hardening blocks the fault-injection class is an open empirical question.

If I am building a cross-platform app, do I care which hardware root of trust the user has?

For most purposes, no. FIDO2/WebAuthn hides the difference at the API layer -- the same browser code talks to a SEP-backed credential on iOS/macOS and a Pluton-backed credential on Windows. You care about the difference when you need device-class attestation (Apple's App Attest vs Microsoft's Device Health Attestation), when privacy of the attestation key matters (Microsoft offers ECDAA-grade options via TPM 2.0; Apple offers per-installation keys), or when you need to support Linux on Apple Silicon (where neither path is available).

Can I have a device with both a SEP and a Pluton?

Not in any current shipping commodity product. Apple devices ship SEP and no TPM 2.0; Windows devices ship Pluton, discrete TPM, or vendor fTPM but no SEP. The closest historical case is the Apple T2 chip in 2018-2020 Intel Macs: the Mac ran macOS rooted at the T2 SEP, but if you booted Windows on the same hardware via Boot Camp, the T2 still provided the secure-boot anchor though Windows did not interact with it as a TPM.

12. Closing observation

There is a temptation, when comparing two designs as deeply considered as SEP and Pluton, to declare one the winner. Resist that temptation. The two architectures answer different questions for different markets, and the differences are exactly where each one shines. SEP is what you build when you own the silicon, the OS, and the patch channel. Pluton is what you build when you control the OS and the patch channel but need to ride on three other companies' silicon.

The closing observation worth keeping is the one Pulse Security demonstrated by accident: most hardware security failures are not failures of the math. They are failures of the physical placement and the patch flow. SEP and Pluton both close the historical bus-sniffing attack class. They both retain a slow channel for fault-injection research to chip away at. They both depend on the device owner trusting the vendor's signing infrastructure. The next big shift -- if it comes -- will probably be in who controls the patch channel, not in the silicon itself.

That is the bet to watch.

Study guide

Key terms

SEP
Apple Secure Enclave Processor, a dedicated security coprocessor with its own CPU core, sepOS, and mailbox API.
sepOS
Apple's L4-microkernel-derived OS running inside the SEP.
MPE
Memory Protection Engine: encrypts and authenticates SEP-bound DRAM cache lines with anti-replay protection.
SSC
Secure Storage Component: external tamper-resistant chip storing monotonic counters used by the SEP for anti-hammering, present from A13 onward.
Pluton
Microsoft's on-die security subsystem present on supported AMD, Intel, and Qualcomm SoCs; presents a TPM 2.0 personality and accepts firmware updates via Windows Update and UEFI capsule.
SHACK
Microsoft's name for keys that never leave the protected hardware, even to the Pluton firmware itself.
TPM 2.0
Trusted Computing Group's standard cryptoprocessor spec, defining PCRs, EK, AK, sealing, and the TPM2_Quote attestation primitive.
Direct Anonymous Attestation (DAA)
Group-signature scheme letting a device prove membership in a class of legitimate devices without revealing which one. ECDAA is the elliptic-curve variant standardized in TPM 2.0.
App Attest
Apple's per-installation SEP-rooted attestation service; produces a key chained to Apple's CA proving the running app is genuine on a genuine Apple device.
checkm8
CVE-2019-8900: unpatchable boot-ROM use-after-free affecting A5-A11 Apple SoCs and the T2 chip; gives AP code execution at boot to physical attackers.
faulTPM
USENIX WOOT 2023 voltage-fault-injection attack against AMD's PSP, extracting fTPM key derivation seed and recovering BitLocker VMK on a Lenovo Ideapad.
WebAuthn
W3C JavaScript API for hardware-backed credentials, implemented over CTAP, that hides SEP-vs-TPM differences from web developers.

Comprehension questions

  1. Why was the Pulse Security TPM-sniffing attack possible on a Surface Pro 3 despite the TPM working correctly?

    The TPM correctly unsealed and returned the BitLocker VMK over the LPC bus on the motherboard; the attacker could read it because the bus is physically exposed. Pluton eliminates this attack class by living on the SoC die.

  2. Why does Apple ship the SEP as a separate physical core rather than as an enclave inside the application CPU?

    A separate core eliminates the microarchitectural-side-channel and co-execution attack classes (Meltdown/Spectre/Hertzbleed family) that destroyed Intel SGX. The SEP simply does not share execution resources with potentially hostile code on the application cores.

  3. What does Pluton's firmware update model buy that discrete TPMs do not?

    In-field patchability via Windows Update and UEFI capsule, signed by Microsoft. Discrete TPM updates require an OEM utility most users never run, so serious TPM firmware bugs remain unpatched on most deployed devices.

  4. How does App Attest's privacy property differ from TPM 2.0 ECDAA?

    App Attest is per-installation: each install of each app gets a unique key chained to Apple's CA. ECDAA is a group signature: a device proves it belongs to a set of legitimate devices without revealing which one. Different threat models against different correlation adversaries.

  5. What does faulTPM tell us about the security of Pluton-on-AMD?

    It tells us the underlying AMD PSP TEE that Pluton-on-AMD rides on is fault-attackable. Whether Pluton's additional hardening blocks the fault-injection class is open; no Pluton-specific extraction attack is publicly disclosed as of May 2026 in the verified sources.

References

  1. Apple Platform Security Guide. https://support.apple.com/guide/security/welcome/web - Apple canonical reference for SEP, sepOS, SSC, App Attest.
  2. Apple Platform Security: Secure Enclave. https://support.apple.com/guide/security/secure-enclave-sec59b0b31ff/web - Definitive description of SEP architecture, MPE, boot chain.
  3. Apple Platform Security: Secure Enclave Processor. https://support.apple.com/guide/security/secure-enclave-processor-sec59b0b31ff/web
  4. Apple Platform Security: Secure Storage Components. https://support.apple.com/guide/security/secure-storage-components-sec983153c82/web
  5. Apple Platform Security: Keychain data protection. https://support.apple.com/guide/security/keychain-data-protection-secf6276da8a/web
  6. Apple-designed processors (Wikipedia). https://en.wikipedia.org/wiki/Apple-designed_processors
  7. Apple Security Updates. https://www.apple.com/support/security/
  8. Microsoft Security Blog (2020). Meet the Microsoft Pluton processor: the security chip designed for the future of Windows PCs. https://www.microsoft.com/en-us/security/blog/2020/11/17/meet-the-microsoft-pluton-processor-the-security-chip-designed-for-the-future-of-windows-pcs/
  9. Microsoft Pluton Security Processor (Microsoft Learn). https://learn.microsoft.com/en-us/windows/security/hardware-security/pluton/microsoft-pluton-security-processor
  10. Microsoft Pluton as TPM (Microsoft Learn). https://learn.microsoft.com/en-us/windows/security/hardware-security/pluton/pluton-as-tpm
  11. Microsoft Pluton overview (Windows Server Security). https://learn.microsoft.com/en-us/windows-server/security/security-and-assurance/pluton-overview
  12. TPM 2.0 Library Specification (Trusted Computing Group). https://trustedcomputinggroup.org/resource/tpm-library-specification/ - TCG canonical pointer; resource page intermittently 403s.
  13. Arm TrustZone. https://developer.arm.com/Architectures/TrustZone
  14. Intel Software Guard Extensions. https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html
  15. Intel Security Products: Deprecation and End of Life. https://www.intel.com/content/www/us/en/support/articles/000091077/software/intel-security-products.html
  16. FIDO Alliance. https://fidoalliance.org/
  17. CNG DPAPI (Microsoft Win32 docs). https://learn.microsoft.com/en-us/windows/win32/seccng/cng-dpapi
  18. BitLocker overview (Microsoft Learn). https://learn.microsoft.com/en-us/windows/security/operating-system-security/data-protection/bitlocker/
  19. NVD: CVE-2019-8900. https://nvd.nist.gov/vuln/detail/CVE-2019-8900
  20. CERT/CC VU#941987 (checkm8 / SecureROM). https://kb.cert.org/vuls/id/941987
  21. Sniff, there leaks my BitLocker key (Pulse Security). https://pulsesecurity.co.nz/articles/TPM-sniffing
  22. Hans Niklas Jacob, Christian Werling, Robert Buhren, & Jean-Pierre Seifert (2023). faulTPM: Exposing AMD fTPMs' Deepest Secrets. https://arxiv.org/abs/2304.14717
  23. PSPReverse/ftpm_attack (GitHub). https://github.com/PSPReverse/ftpm_attack
  24. Direct Anonymous Attestation (Wikipedia). https://en.wikipedia.org/wiki/Direct_Anonymous_Attestation
  25. Ernie Brickell, Jan Camenisch, & Liqun Chen (2004). Direct Anonymous Attestation (MSR-TR-2004-130). https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tr-2004-130.pdf
  26. Elcomsoft (2023). Using and troubleshooting the checkm8 exploit. https://blog.elcomsoft.com/2023/10/using-and-troubleshooting-the-checkm8-exploit/
  27. axi0mX/ipwndfu (checkm8 source). https://github.com/axi0mX/ipwndfu