Inside Azure Confidential VMs: SEV-SNP, Intel TDX, and the Paravisor that Makes Them a Cloud Product
Azure Confidential VMs combine AMD SEV-SNP and Intel TDX with the OpenHCL paravisor and MAA policy v1.2. A textbook tour from silicon to relying party.
1. Even the cloud operator must not see your memory
A Windows Server VM is running a SQL query on Azure right now. It is joining a million-row variant table against a patient-genome reference, building an index in RAM, and serving the answer back to a clinician's web portal. The customer who owns that VM has every reason to want the query to succeed and every reason to make sure that nobody else can ever read the index it builds: not the hypervisor it runs on, not the host firmware below it, not the Microsoft engineer holding the on-call pager, not even a court-ordered datacentre raid carried out with full physical access to the rack.
As of 2026, that is not a thought experiment. It is the contract Azure signs when you provision a DCasv5 or DCesv5 confidential VM [1]. And the contract has a shape -- an architecturally enforced shape rooted in two distinct CPU mechanisms, wrapped in an open-source Rust paravisor [2], verified by a policy-driven attestation service [3], and dented by four published 2024 attacks that this article will name in order.
The Confidential Computing Consortium defines the contract in one sentence: "Confidential Computing protects data in use by performing computation in a hardware-based, attested Trusted Execution Environment" [4]. That sentence finishes a longer thought. Data at rest gets BitLocker and full-disk encryption. Data in transit gets TLS. Data in use -- the gigabytes that sit in DRAM while a process actually computes against them -- has historically been the unencrypted leg of a three-legged stool.
A Confidential VM is a virtual machine whose memory and CPU state are cryptographically protected from the host hypervisor and the cloud operator's infrastructure, and whose configuration is bound to a hardware-rooted attestation report a remote verifier can check. The Confidential Computing Consortium's framing is the canonical one: "These secure and isolated environments prevent unauthorized access or modification of applications and data while in use" [4].
A Trusted Execution Environment (TEE) is a computing environment whose confidentiality, integrity, and attestability are enforced by hardware mechanisms below the level of the operating system. A TEE may be process-scoped (Intel SGX enclaves), VM-scoped (AMD SEV-SNP, Intel TDX), or board-scoped (AWS Nitro Enclaves). The Confidential VM is the VM-scoped specialisation.
Three concrete workloads make the contract operationally legible. A regulated clean room running joint analytics over patient genomes between an academic medical centre and a pharmaceutical sponsor, where the contract literally forbids the sponsor's staff from reading raw genotypes. A multi-party anti-money-laundering analytic between two competing banks who will share encrypted features but not raw transactions. A sovereign-cloud control plane that must not leak to the hyperscaler's host kernel under any subpoena. In each case the threat model treats the cloud operator as semi-trusted at best and adversarial at worst, and in each case the customer wants the cipher engine to live below the operator's reach.
The architecture that makes this contract real takes vocabulary from Internet standards as well as silicon. RFC 9334, published in January 2023, gives us the verifier / evidence / relying party language we will use throughout the article [5]. An attester (the guest VM plus the paravisor) generates evidence (a hardware attestation report plus a vTPM quote). A verifier (Microsoft Azure Attestation in Azure's case) checks the evidence against a policy and emits an attestation result (a signed JWT). A relying party (Azure Key Vault, or any customer service) consumes the result and decides whether to release a secret. The article you are reading is, at heart, a tour of how a SEV-SNP or TDX guest, an OpenHCL paravisor, and Microsoft Azure Attestation realise that abstract diagram on commodity silicon.
That leads to the obvious question. How can a CPU enforce that even the hypervisor cannot read RAM? And once it can, why does a single mechanism turn out to be insufficient -- why does the architecture need a separate integrity rail on top? The next two sections trace the wrong answers that came first.
2. Why enclaves were not enough
In August 2016 David Kaplan stood on the USENIX Security stage in Austin and described "two new x86 ISA features developed by AMD" that he called "the first general-purpose memory encryption features to be integrated into the x86 architecture" [6]. Kaplan was, in the conference biography's words, the "lead architect for the AMD memory encryption features" [6]. His argument was deceptively simple. An enclave that lives inside a single process is the wrong unit of confidential computation for a cloud workload. The workloads customers actually run -- database engines, analytic services, language runtimes -- want gigabytes of working memory, multiple threads, and an unmodified operating system. None of that fits inside a roughly 96-MiB SGX enclave [7].
Two design ancestors set the shape of the problem before either AMD or Intel solved it.
The first ancestor is the Trusted Platform Module. The TCG TPM specification dates back to 2003, when "the first TPM version that was deployed was 1.1b" [8]. TPM 2.0 was announced on April 9, 2014 [8] and standardised as ISO/IEC 11889. The TPM contributed three concepts that remain load-bearing two decades later: platform configuration registers (the extend-only PCR digests that a measured-boot chain builds), attestation identity keys, and a quote operation that signs PCR state with a key whose origin a remote verifier can trust. The TPM is not a TEE in the modern sense -- it does not host computation -- but it is the first widely deployed device that lets a remote party gain cryptographic assurance about what a machine is running. Every confidential VM design ships a TPM-shaped attestation surface inside it.
The second ancestor is Intel Software Guard Extensions. Designed at the HASP 2013 workshop and delivered on Skylake in 2015 [7], SGX introduced the enclave: a process-scoped TEE backed by the Enclave Page Cache, a CPU-managed memory region whose contents are decrypted only inside the cache. Programs enter and leave through ENCLU-family instructions; cross-domain calls use a partitioned model called ECALL / OCALL; remote attestation is mediated by Intel through a quoting enclave. SGX worked, in the strict sense that the threat model included even a malicious operating system. But three things kept it from generalising.
The Enclave Page Cache (EPC) is a CPU-protected DRAM region that holds an SGX enclave's working memory in encrypted, integrity-checked form. On early Skylake / Kaby Lake parts the EPC was capped at 128 MiB of physical memory, of which roughly 93-96 MiB was usable after EPCM metadata accounting and BIOS reservation [7]. Anything beyond the cap paged through the encrypted-page-eviction path with a substantial performance cliff, which is one of the architectural reasons SGX did not generalise to whole-VM cloud workloads.
The EPC cap was the first. A working set of ~96 MiB is fine for a key-wrapping service or a small ML model, but it is not a cloud-database VM. The second was the partitioned programming model. Real applications had to be split into trusted and untrusted halves with explicit ECALL / OCALL boundaries, which is a refactoring tax that few existing codebases would pay. The third was the side-channel question: Foreshadow [9], SgxPectre [10], and SGAxe [11] each demonstrated that a determined attacker with microarchitectural access could extract secrets from SGX, often without ever defeating the cipher itself.
Microsoft staked Azure publicly to "data in use" on September 14, 2017, when Mark Russinovich announced Azure confidential computing on the company blog: "Microsoft Azure is the first cloud to offer new data security capabilities with a collection of features and services called Azure confidential computing" [@russinovich-azure-2017; @russinovich-azure-2017-wayback]. The same post named the initial backing TEEs. "Initially we support two TEEs, Virtual Secure Mode and Intel SGX. Virtual Secure Mode (VSM) is a software-based TEE that's implemented by Hyper-V in Windows 10 and Windows Server 2016" [12]. VSM was already the substrate of Credential Guard and HVCI inside the operating system; pulling it up as a "TEE the cloud customer can target" was the bridge between the in-OS Secure Kernel story and the eventually-needed silicon-rooted CVM.
The industry got organised two years later. The Confidential Computing Consortium formed under the Linux Foundation on October 17, 2019. The press release names the founding premiere members verbatim: "Alibaba, Arm, Google Cloud, Huawei, Intel, Microsoft and Red Hat" and the general members "Baidu, ByteDance, decentriq, Fortanix, Kindite, Oasis Labs, Swisscom, Tencent and VMware" [13]. An earlier Microsoft Open Source blog post on August 21, 2019, announced the formation with a slightly different membership list (including IBM but not Huawei) [14]; the October press release is the formal founding roster.
Diagram source
flowchart TD
Data["Customer data"] --> Rest["At rest -- BitLocker, SED, KMS"]
Data --> Transit["In transit -- TLS 1.3, IPsec"]
Data --> Use["In use -- ?"]
Use --> CVM["Confidential VMs -- SEV-SNP / Intel TDX"]
CVM --> Para["Paravisor -- OpenHCL"]
Para --> MAA["MAA verifier"]

If a TEE has to be smaller than a single page cache, the unit of confidential computation is wrong. What if the unit were a whole VM, and the cipher engine lived inline with the memory controller? The next section is the first time someone tried.
3. Generation 1 and 1.5: confidentiality without integrity
April 2016. David Kaplan, Jeremy Powell, and Tom Woller publish the AMD whitepaper AMD Memory Encryption [15]. The paper introduces two features in a single document. Secure Memory Encryption (SME) is a chassis-wide bulk cipher: a per-boot AES-128 key, managed by the on-die AMD Secure Processor, encrypts main memory transparently to the operating system. Secure Encrypted Virtualization (SEV) takes the same engine and gives each VM its own AES key tagged into an Address Space Identifier (ASID) in the cache, so two co-resident VMs cannot read each other's memory and neither can the hypervisor. The "C-bit" in the guest page table marks which pages are encrypted [15]. The first silicon to ship SEV was the first-generation EPYC "Naples" launched June 20, 2017 [16].
The C-bit is a high physical-address bit in an AMD SEV guest's page-table entries that signals to the memory controller "this page is encrypted with my VM's key." The C-bit is the per-page opt-in that lets a SEV guest mix encrypted private memory with explicitly shared bounce buffers in the same address space. Its absence means a page is cleartext to the hypervisor; its presence means the AES engine in the memory controller decrypts on every read and encrypts on every write [15].
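The per-page opt-in is just bit manipulation on a 64-bit page-table entry. A sketch in Node.js, with one caveat: the C-bit position is CPU-specific and discovered at runtime via CPUID Fn8000_001F; bit 47 is used here purely for illustration.

```javascript
// Sketch of SEV C-bit handling in a 64-bit page-table entry.
// The bit position is CPU-specific (reported by CPUID Fn8000_001F);
// bit 47 is illustrative, not guaranteed for any given part.
const C_BIT = 47n;

function setEncrypted(pte) {
  return pte | (1n << C_BIT);
}

function clearEncrypted(pte) {
  return pte & ~(1n << C_BIT);
}

function isEncrypted(pte) {
  return ((pte >> C_BIT) & 1n) === 1n;
}

// A guest maps most pages with the C-bit set (private: ciphertext to
// the host) and clears it only on explicit shared bounce buffers.
const privatePte = setEncrypted(0x0000000012345003n); // present | writable
const sharedPte = clearEncrypted(privatePte);
console.log(isEncrypted(privatePte), isEncrypted(sharedPte)); // true false
```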
The threat model was clear and the architecture was honest about it. The hypervisor sees ciphertext on every encrypted page. What the architecture did not do, and what the original whitepaper did not claim, was integrity. The hypervisor remained authoritative over the nested page tables -- it could remap which host physical page a given guest physical address pointed to, and the cipher engine would happily decrypt whatever blob it found under the same key.
That gap produced the architectural lesson.
SEVered (Morbitzer et al., EuroSec 2018)
"We present the design and implementation of SEVered, an attack from a malicious hypervisor capable of extracting the full contents of main memory in plaintext from SEV-encrypted virtual machines." -- Morbitzer, Huber, Horsch, Wessel, EuroSec'18 [17]
The architectural lesson, stated as bluntly as the paper deserves, is that confidentiality without integrity is not confidentiality.
Confidentiality without integrity is not confidentiality. The hypervisor that can move ciphertext between addresses is the hypervisor that can read it. The integrity of the guest-physical-to-host-physical mapping is as load-bearing as the cipher itself.
SEV-ES (February 2017): half a fix
AMD's first response was SEV-ES, dated February 17, 2017 in the whitepaper's PDF cover page [18]. SEV-ES introduced register-state encryption on VMEXIT. Before SEV-ES, every VM exit handed the hypervisor a complete dump of guest CPU registers, including pointers into otherwise-encrypted memory. SEV-ES encrypted the saved register state under the guest key, surfaced a new #VC (VMM Communication) exception (vector 29), and required the guest to use a deliberately shared page called the Guest-Hypervisor Communication Block (GHCB) for everything that genuinely needed to cross the boundary -- emulated I/O, MMIO, time, the works.
The Guest-Hypervisor Communication Block (GHCB) is a page that a SEV-ES (and later SEV-SNP) guest deliberately shares with the hypervisor for the purposes of communicating about events the hypervisor genuinely needs to handle: emulated I/O, MMIO accesses, certain control-plane operations. The GHCB is the explicit, audited "side channel" through the trust boundary. Everything else stays encrypted [18].
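The shape of a GHCB exchange can be sketched as follows. This is a conceptual model, not the GHCB specification's field layout: the point is that the guest copies only the state the hypervisor legitimately needs into the one shared page, and everything else stays encrypted.

```javascript
// Sketch of a GHCB-mediated port-I/O exit under SEV-ES. Field names
// here are illustrative, not the GHCB spec layout.
const ghcb = {}; // the one deliberately shared page

function vmgexit(hypervisor) {
  // VMGEXIT hands control to the hypervisor, which may read and write
  // ONLY this page -- the encrypted register file is out of reach.
  hypervisor(ghcb);
}

// Guest side: the #VC handler for an IN instruction on a serial port.
function vcHandlerPortRead(port) {
  ghcb.exitCode = 'IOIO';
  ghcb.port = port;
  ghcb.direction = 'in';
  vmgexit((page) => {
    // Host side: emulate the device and write the result back.
    page.rax = page.port === 0x3f8 ? 0x55 : 0;
  });
  return ghcb.rax; // guest copies the result into its real RAX itself
}

console.log(vcHandlerPortRead(0x3f8).toString(16)); // 55
```

Before SEV-ES, the hypervisor got the whole register dump on every exit; after it, the guest decides per-exit which values cross the boundary.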
SEV-ES closed one channel and left the other open. The integrity of the GPA-to-HPA mapping was still the hypervisor's problem to behave on, and the cipher was still XEX-mode AES without any keyed authentication. Two more papers made the architectural pressure unbearable.
ICUP (Buhren et al., CCS 2019) and SEVurity (Wilke et al., S&P 2020)
In August 2019, Robert Buhren, Christian Werling, and Jean-Pierre Seifert published Insecure Until Proven Updated [19]. The abstract makes the operational point cleanly: "We demonstrate that it is possible to extract critical CPU-specific keys that are fundamental for the security of the remote attestation protocol. This effectively renders the SEV technology on current AMD Epyc CPUs useless when confronted with an untrusted cloud provider" [19]. The mechanism was a firmware rollback against the AMD-SP that exposed attestation keys.
In May 2020, Wilke, Wichelmann, Morbitzer, and Eisenbarth published SEVurity: No Security Without Integrity at IEEE S&P [20]. Their two new methods, the project-page abstract records verbatim, "allow us to inject arbitrary code into SEV-ES secured virtual machines. Due to the lack of proper integrity protection, it is sufficient to reuse existing ciphertext to build a high-speed encryption oracle" [20]. The architectural diagnosis was now overdetermined: integrity had to enter the design, not as a side feature, but as a load-bearing rail.
The same Buhren-led group escalated to physical fault injection in August 2021 with One Glitch to Rule Them All, voltage-glitching the AMD Secure Processor on Zen 1 / 2 / 3 to extract custom payloads [21]. The PSPReverse GitHub artefact contains the supporting tooling [22]. This is the physical-fault lower bound on the AMD-SP: an adversary with the right glitcher can subvert the security processor itself. The SEV-SNP design assumes a logical adversary; physical-access adversaries remain a known residual that §8 will revisit.

Intel's parallel road: TME and MKTME
Intel's bottom-of-stack cipher engine ran on a parallel track. In December 2017, Intel published Architecture Memory Encryption Technologies Specification, document 336907 rev 1.1 [23], introducing Total Memory Encryption (TME). The multi-key successor, MKTME (later TME-MK), surfaced publicly through a September 7, 2018 Linux-kernel RFC by Alison Schofield archived on LWN: "Multi-Key Total Memory Encryption API (MKTME) ... allows multiple encryption domains, each having their own key. While the main use case for the feature is virtual machine isolation" [24]. TME-MK is the per-keyID memory cipher that the eventual Intel TDX architecture will mount its trust-domain model on top of.
Three papers, two vendors, one architectural verdict: confidentiality without integrity is not confidentiality, and the architecture has to change. What did AMD and Intel actually build in response?
Diagram source
flowchart LR
SME["SME (2016) -- Bulk memory cipher"]
SEV["SEV (Naples, 2017) -- Per-VM AES key"]
ES["SEV-ES (Feb 2017) -- + Register-state cipher"]
SNP["SEV-SNP (Jan 2020) -- + Integrity rail"]
SME --> SEV
SEV -- "SEVered -- (EuroSec 2018)" --> ES
ES -- "ICUP (CCS 2019) -- SEVurity (S&P 2020)" --> SNP

4. Generation 2: the integrity rail
January 9, 2020. AMD publishes the 20-page SEV-SNP whitepaper, sole-authored by David Kaplan, with the title Strengthening VM Isolation with Integrity Protection and More [25]. Eight months later, in September 2020, Intel publishes the first public TDX whitepaper (document 343961-002US, filename tdx-whitepaper-final9-17.pdf, PDF creation date Thursday September 17, 2020) and the companion Architecture Specification doc 344425-001 dated September 1, 2020 [@intel-tdx-whitepaper; @intel-tdx-spec-344425]. Two vendors, two different architectural answers, one shared diagnosis: the hypervisor must be excluded from the GPA-to-HPA mapping, not just from the ciphertext.
AMD SEV-SNP: four ingredients
SEV-SNP keeps the per-VM AES cipher from SEV and the register-state encryption from SEV-ES, and adds four new architectural ingredients that together close the integrity gap.
The first is the Reverse Map Table (RMP). The RMP is a system-wide per-page metadata table consulted on every nested page-table walk. Each entry binds a host physical page to the tuple (assigned ASID, expected guest physical address, VMPL, immutable bit, validated bit). If the hypervisor tries to remap a guest physical address to a different host page, the RMP entry will fail to match and the CPU raises an #NPF(rmpfault). The architecture's own description is verbatim: "SEV-SNP adds strong memory integrity protection to help prevent malicious hypervisor-based attacks like data replay, memory re-mapping, and more to create an isolated execution environment" [27]. This is the integrity rail. It is not a separate keyed MAC over memory; it is a structural binding that turns SEVered-class remappings into faults.
The Reverse Map Table (RMP) is a system-wide AMD SEV-SNP data structure that records, for every host physical page, the guest ASID it belongs to, the guest physical address it is mapped at, the per-VMPL access-control bits, an immutable flag, and a validated flag. Every nested page-table walk consults the RMP; mismatches raise #NPF(rmpfault). The RMP is the architectural answer to SEVered: the hypervisor remains in charge of nested page tables, but the RMP says what each host page is allowed to be used for [@amd-snp-whitepaper; @amd-sev-portal].
The second is the PVALIDATE instruction. A SEV-SNP guest must explicitly validate a page before it uses it for confidential storage. The hypervisor cannot fake validation; if the page has not been validated by the guest, accesses fault. This pushes the responsibility for tracking "is this page really part of my private memory" into the guest, where the hypervisor cannot lie about it.
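The first two ingredients compose into a simple invariant: a page decrypts only if the RMP entry matches the walk and the guest itself has validated the page. A sketch of that check in Node.js; the field names and the fault shape are illustrative, since the real RMP is a hardware table consulted by the page walker, not software.

```javascript
// Sketch of the RMP check on a nested page-table walk, plus guest-side
// PVALIDATE. Field names are illustrative, not the hardware encoding.
const rmp = new Map(); // hpa -> { asid, gpa, validated }

function rmpAssign(hpa, asid, gpa) {
  // Hypervisor assigns a host page to a guest; not yet validated.
  rmp.set(hpa, { asid, gpa, validated: false });
}

function pvalidate(hpa, asid) {
  // Guest-driven validation: the hypervisor cannot fake this step.
  const e = rmp.get(hpa);
  if (!e || e.asid !== asid) throw new Error('#NPF(rmpfault)');
  e.validated = true;
}

function nestedWalk(hpa, asid, gpa) {
  const e = rmp.get(hpa);
  // Entry missing, wrong owner, wrong GPA, or never validated by the
  // guest => fault instead of silently decrypting under the VM key.
  if (!e || e.asid !== asid || e.gpa !== gpa || !e.validated) {
    throw new Error('#NPF(rmpfault)');
  }
  return 'decrypt under VM key';
}

rmpAssign(0x1000, 7, 0xa000);
pvalidate(0x1000, 7);
console.log(nestedWalk(0x1000, 7, 0xa000)); // decrypt under VM key
```

A hypervisor that remaps the guest physical address to a different host page hits the mismatch arm and faults: the SEVered remap becomes a #NPF instead of a silent decrypt.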
The third is the Virtual Machine Privilege Level lattice.
The Virtual Machine Privilege Level (VMPL) is a four-level privilege lattice (VMPL0 highest, VMPL3 lowest) introduced by AMD SEV-SNP. Each RMP entry includes per-VMPL access-control bits, so a single SEV-SNP guest can split itself into multiple ring-shaped partitions where a higher-VMPL component (for example, a paravisor at VMPL0) sees pages that a lower-VMPL component (the customer's kernel at VMPL2) cannot. VMPL appears as a field inside the SNP_REPORT, so a remote verifier can tell which VMPL produced a given quote [25].
The fourth is the attestation report. The SNP_REPORT is an ECDSA-P384 signed blob produced by the AMD-SP, carrying fields including the launch measurement, the guest policy, the user-supplied report_data nonce, the issuing vmpl, the unique chip_id, and the tcb_version. The signing key is the Versioned Chip Endorsement Key (VCEK), derived per chip per TCB version from a long-lived endorsement key, and the certificate chain runs VCEK_cert -> ASK -> AMD root [27].
The Versioned Chip Endorsement Key (VCEK) is the AMD SEV-SNP attestation signing key. It is derived deterministically from each chip's individual endorsement secret and the current TCB version (firmware level), so a single chip exposes one VCEK per TCB version. The certificate chain anchors back to AMD's root via the AMD Signing Key (ASK). The VCEK is what makes SEV-SNP attestation chain to silicon: the verifier checks the SNP_REPORT signature against a VCEK certificate AMD will only issue for genuine AMD-SP firmware [@amd-snp-whitepaper; @amd-sev-portal].
"SEV-SNP adds strong memory integrity protection to help prevent malicious hypervisor-based attacks like data replay, memory re-mapping, and more in order to create an isolated execution environment." -- AMD SEV-SNP whitepaper, January 2020 [25]
Diagram source
sequenceDiagram
autonumber
participant Guest as Guest CPU access
participant NPT as Nested Page Walker
participant RMP as Reverse Map Table
participant AES as AES engine (memory ctrl)
Guest->>NPT: Resolve GVA -> GPA -> HPA
NPT->>RMP: Lookup (HPA)
RMP-->>NPT: ASID, expected GPA, VMPL
alt RMP entry matches request
NPT->>AES: Decrypt under VM key
AES-->>Guest: Plaintext
else Mismatch (SEVered-style remap)
RMP-->>Guest: #NPF (rmpfault)
end

Intel TDX: a different geometry, the same end-state
Intel reached the same architectural conclusion with a different mechanism. Rather than bake integrity into microcode plus the AMD-SP, Intel introduced a new CPU mode and a separately signed software module that runs in it. The Intel TDX overview is verbatim: "A CPU-measured Intel TDX module enables Intel TDX. This software module runs in a new CPU Secure Arbitration Mode (SEAM) as a peer virtual machine manager (VMM) ... hosted in a reserved memory space identified by the SEAM Range Register (SEAMRR)" [28].
The ingredients are seven, not four.
Secure Arbitration Mode (SEAM) is a new CPU privilege state introduced by Intel TDX. Code running in SEAM is hosted in a physical-memory range identified by the SEAM Range Register (SEAMRR) that the legacy VMM cannot inspect. Only the signed Intel TDX Module runs in SEAM, and it does so as a peer VMM that mediates every interaction between the legacy hypervisor and a Trust Domain [28].
The Intel TDX Module is the second ingredient: a CPU-measured, Intel-signed firmware binary, loaded by the SEAMLDR at boot, that mediates every entry into and exit from a Trust Domain via SEAMCALL and SEAMRET instructions. intel-tdx-module-1.5-base-spec-348549002.pdf is the canonical specification for the current generation [29].
The third is the Trust Domain, a VM-shaped container that carries a Shared Bit in the guest physical address. A clear shared bit means the page is private; a set shared bit means the page is deliberately shared with the hypervisor for I/O bounce buffers. The fourth is TME-MK memory encryption, derived from the December 2017 TME spec [23] and the September 2018 MKTME Linux-kernel RFC [24]: AES-128 in XTS mode, with the keyID embedded in the upper physical-address bits, gives one key per Trust Domain.
The fifth ingredient is the structural analogue of AMD's RMP, the Physical-Address-Metadata table (PAMT). The Intel TDX overview enumerates the architectural elements precisely: "Intel TDX uses architectural elements such as SEAM, a shared bit in Guest Physical Address (GPA), secure Extended Page Table (EPT), physical-address-metadata table, Intel Total Memory Encryption -- Multi-Key (Intel TME-MK), and remote attestation" [28].
The sixth ingredient is the measurement registers. The MRTD is the build-time measurement of the initial TD image, similar to a TPM PCR fixed at launch. RTMR0 through RTMR3 are the runtime measurement registers, four PCR-equivalents the TDX Module exposes for runtime measured-boot extensions. These four registers are what a TDX-aware Trusted Boot chain extends.
MRTD and RTMR0-3 are the build-time and runtime measurement registers exposed by an Intel TDX Trust Domain. MRTD is hashed by the TDX Module over the initial TD launch image and is the SEAM analogue of an immutable launch PCR. RTMR0-3 are four extendable runtime registers, the SEAM analogue of the runtime-extension TPM PCRs (the same conceptual role as PCRs 8-15 in the canonical static-OS measurement chain), that hold a measured-boot chain of subsequent components (loaders, kernel, initrd, paravisor pages). The canonical TDX-vTPM event-log convention used by Linux IMA and systemd-stub maps RTMR[0] to PCR[1, 7]; RTMR[1] to PCR[2-6]; RTMR[2] to PCR[8-9]; and RTMR[3] to PCR[14, 17-22]. A TD Quote carries all five values; a verifier evaluates them against a customer-defined policy [@intel-tdx-overview; @intel-tdx-spec-344425].
The seventh is the TD Quote. A TD Quote is produced in two stages. The TD guest first issues TDCALL[TDG.MR.REPORT], which lands in the TDX Module (the VMM-to-Module entry is the separate SEAMCALL interface defined in the comparison table below); the TDX Module returns an in-SEAM SEAMREPORT structure, a Report MAC-signed with a key bound to the platform. A host-side SGX Quoting Enclave then converts that Report into a Quote signed with the SGX-resident QE attestation key. The Quote carries MRTD, RTMR0-3, the TD's TCB SVN (a per-component firmware version vector), and a caller nonce. The Intel Trust Authority (or Microsoft Azure Attestation, or Google's verifier) checks the quote [@intel-tdx-overview; @intel-tdx-module-base-348549].
Diagram source
flowchart TB
HW["Silicon: TME-MK + SEAMRR -- + Secure EPT + PAMT"]
SEAM["Intel TDX Module -- (SEAM mode)"]
VMM["Legacy VMM -- (Hyper-V / KVM)"]
TD1["Trust Domain 1"]
TD2["Trust Domain 2"]
HW --> SEAM
HW --> VMM
VMM -- "SEAMCALL" --> SEAM
SEAM -- "SEAMRET" --> VMM
SEAM -- "TDENTER / TDEXIT" --> TD1
SEAM -- "TDENTER / TDEXIT" --> TD2

Side by side
The two architectures answer the same question and arrive at the same end-state contract through fundamentally different trust geometries.
| Ingredient | AMD SEV-SNP | Intel TDX |
|---|---|---|
| Memory cipher | AES-128, per-VM key in memory controller | AES-128-XTS, per-TD key by keyID (TME-MK) |
| Integrity binding | Reverse Map Table per host page | Physical-Address-Metadata table + Secure EPT |
| Mediating component | AMD-SP firmware (microcode + on-die security processor) | Signed Intel TDX Module in SEAM mode |
| Privilege lattice | VMPL0-VMPL3 (four levels) | TD Partitioning L1/L2 (TDX Module 1.5) |
| Build-time measurement | Launch measurement in SNP_REPORT | MRTD inside the TDX Module |
| Runtime measurement | None at module level (vTPM provides it) | RTMR0-RTMR3 inside the TDX Module |
| Attestation signing key | VCEK (ECDSA-P384), per chip per TCB version | SGX-resident Quoting Enclave key |
| Certificate chain | VCEK -> ASK -> AMD root | Quoting Enclave -> Intel root |
| Page-validation primitive | PVALIDATE (guest-driven) | TDX Module-mediated page acceptance |
| Shared-page indicator | C-bit (clear = shared, set = encrypted) | Shared bit in GPA (set = shared) |
| Hypervisor-to-trust-component call | Mediated VMRUN | SEAMCALL / SEAMRET |
// Pseudo-code sketch of how a SEV-SNP guest assembles an SNP_REPORT
// via SNP_GUEST_REQUEST. Not runnable against silicon; the stub below
// stands in for the AMD-SP so the shape of the evidence the verifier
// receives is visible.
function sp_guest_request(request) {
  // Stub for the AMD Secure Processor. On real silicon this is a
  // guest request message; every field would be filled and signed
  // by the SP itself.
  return {
    version: 2,
    guestSvn: 0,
    policy: 0,
    familyId: '<16-byte ID set at launch>',
    measurement: '<48-byte launch measurement>',
    reportData: request.reportData,
    vmpl: request.vmpl,
    chipId: '<64-byte unique chip ID>',
    tcbVersion: '<boot loader / TEE / SNP / microcode SVNs>',
    signature: '<ECDSA P-384 signature by the VCEK>',
  };
}
function buildSnpReport(nonce32) {
  // Guest builds a request structure with a 32-byte user nonce.
  const request = { reportData: nonce32, vmpl: 0 };
  // The request lands in the AMD-SP, which signs with the VCEK.
  const report = sp_guest_request(request);
  return {
    version: report.version, // structure version
    guestSvn: report.guestSvn, // guest firmware SVN
    policy: report.policy, // SEV policy bits at launch
    familyId: report.familyId, // 16-byte ID set by launch
    measurement: report.measurement, // 48-byte launch measurement
    reportData: report.reportData, // echoes user nonce
    vmpl: report.vmpl, // VMPL of issuing component
    chipId: report.chipId, // 64-byte unique chip ID
    tcbVersion: report.tcbVersion, // boot loader / TEE / SNP / microcode SVNs
    signature: report.signature, // ECDSA P-384 over the report
  };
}
// The verifier walks the certificate chain VCEK -> ASK -> AMD root,
// re-checks the signature, and then evaluates policy on the claims.
console.log(JSON.stringify(buildSnpReport('nonce_from_relying_party'), null, 2));
SEV-SNP and TDX answer the same question differently. AMD bakes integrity into microcode plus the AMD-SP, signs with a per-chip per-TCB VCEK, and exposes a four-level VMPL lattice. Intel puts integrity into a separately loaded, separately signed software module running in a new CPU mode, signs with an SGX-resident Quoting Enclave, and exposes L1/L2 partitioning. The trust roots, the breaking surfaces, and the supply chains are different even when the end-state contract is the same.
Diagram source
flowchart LR
subgraph AMD["AMD SEV-SNP"]
A1["AMD-SP firmware"]
A2["Reverse Map Table"]
A3["VMPL0-3 lattice"]
A4["SNP_REPORT -- VCEK signed"]
end
subgraph INTEL["Intel TDX"]
I1["Signed TDX Module"]
I2["PAMT + Secure EPT"]
I3["L1 / L2 partitioning"]
I4["TD Quote -- Quoting Enclave"]
end
A1 --- I1
A2 --- I2
A3 --- I3
A4 --- I4

Generation 2 makes a confidential VM architecturally possible. But a SEV-SNP guest is not yet a Windows Server VM you can lift and shift onto Azure -- there is a whole productisation problem still to solve. How does Microsoft put a paravisor inside that trust boundary, and what does it deliver?
5. The contract: a cloud-shaped TEE
A confidential VM is two rails, not one. Rail 1 is confidentiality plus integrity of memory and CPU state. Rail 2 is measurement plus attestation. SEV-SNP and TDX each deliver both rails. Anyone who has read the equivalent Secure Boot / Trusted Boot story will recognise the shape: a measurement chain anchored in silicon, terminated in a remote verifier, with a signed result that a relying party can act on.
The Confidential Computing Consortium's framing, repeated here as a contract the architectures actually realise: "Confidential Computing protects data in use by performing computation in a hardware-based, attested Trusted Execution Environment" [4]. Hardware-based is rail 1. Attested is rail 2. The two words together are why a TPM-only system, however well-measured, is not a CVM, and why a SEV-only system, however well-encrypted, is not a CVM either.
RFC 9334 names the actors. The attester is the guest plus the paravisor producing evidence. The evidence is the SNP_REPORT or TD Quote, plus optionally a vTPM quote chained to it. The verifier is the entity that checks the evidence against a policy and emits an attestation result. The relying party is the consumer who acts on the result -- typically a key vault releasing a wrapped secret [5].
The IETF Remote ATtestation procedureS working group's RFC 9334 (January 2023) fixes the vocabulary the rest of the confidential-computing industry uses: an attester produces evidence; a verifier checks it against reference values from an endorser and a reference value provider and emits an attestation result; a relying party acts on the result. RFC 9334 §5 names two topologies. In the Passport model (§5.1), the attester sends evidence directly to the verifier, collects a signed result, and presents that result to the relying party. In the Background-Check model (§5.2), the attester sends evidence to the relying party, which forwards it to the verifier and receives the result on the attester's behalf. Microsoft Azure Attestation, Intel Trust Authority, Google's verifier, and AWS KMS attestation all implement variants of this model [5].
Microsoft Azure Attestation implements the Passport model. The attester -- the CVM, through its in-guest agent -- sends evidence (an SNP_REPORT or TD Quote, plus a vTPM quote) directly to MAA. MAA validates the evidence against the customer-authored policy and returns a signed JWT. The attester then presents that JWT to the relying party. Azure Key Vault authorises Secure Key Release against the MAA-issued claim set, not against raw SNP evidence. The relying party never sees the SNP_REPORT and never calls MAA on the attester's behalf, which is the design signature of Passport rather than Background-Check [@rfc9334; @msdocs-maa-overview].
Diagram source
flowchart LR
Rail1["Rail 1 -- Confidentiality + Integrity"] --> Mem["Encrypted DRAM -- + RMP / PAMT -- + encrypted register state"]
Rail2["Rail 2 -- Measurement + Attestation"] --> Ev["Evidence: -- SNP_REPORT / TD Quote -- + vTPM quote"]
Ev --> Ver["Verifier: -- MAA / Intel Trust Authority"]
Ver --> Tok["Attestation Result -- (signed JWT)"]
Tok --> RP["Relying Party -- (Azure Key Vault)"]
RP --> Secret["Wrapped secret release"]
A Confidential VM is not a memory-encryption product. It is a contract: confidentiality with integrity, plus an evidence-bearing attestation chain that a relying party can verify before it releases a secret. Anyone who sells you "confidential" infrastructure without rail 2 is selling you half the product.
If this is the contract, how does Azure actually build a usable Windows-guest CVM on top of it? What lives where, and who signs what?
6. State of the art on Azure: from silicon to MAA
July 20, 2022. Microsoft Azure announces general availability of the DCasv5 and ECasv5 confidential VM SKUs on AMD third-generation EPYC silicon. The Register's coverage captures the framing: "Microsoft is expanding its Azure confidential computing portfolio with virtual machines that use the encryption and memory protection features of AMD's third-gen Epyc processors. ... Customers using them can also use the free Microsoft Azure Attestation (MAA) service to remotely verify the operating environment and integrity of the software binaries running on it" [30]. That is the moment a confidential VM stops being a research paper and starts being a product the customer can pay for by the hour.
This section walks the Azure stack bottom-up. It is the longest section because it is the article's reason to exist.
The Azure CVM SKU family
Microsoft Learn's confidential-computing products page enumerates the current Azure CVM SKU map. On AMD SEV-SNP: "DCasv5 and ECasv5 enable rehosting of existing workloads" [1]. These are the third-generation EPYC Milan SKUs that went GA in July 2022. The Learn page continues: "DCasv6 and ECasv6 confidential VMs based on fourth-generation AMD EPYC processors are currently in gated preview" [1]. Lenovo Press corroborates that "SEV-SNP is supported on AMD EPYC processors starting with the AMD EPYC 7003 series processors" -- i.e., Milan -- with the third-generation 7003 series being the first SEV-SNP silicon [31].
On Intel TDX: "DCesv5 and ECesv5" are the fourth-generation Xeon Sapphire Rapids SKUs, generally available. SecurityWeek's coverage anchors the Sapphire Rapids launch: "Intel announced on Tuesday that it has added Intel Trust Domain Extensions (TDX) to its confidential computing portfolio with the launch of its new 4th Gen Xeon enterprise processors. ... The feature will be available through cloud providers such as Microsoft, Google, IBM and Alibaba" [32]. Wikipedia notes that "TDX is available for 5th generation Intel Xeon processors (codename Emerald Rapids) and Edge Enhanced Compute variants of 4th generation Xeon processors (codename Sapphire Rapids)" [26]. The fifth-generation Emerald Rapids SKUs DCesv6 and ECesv6 are in preview at the time of writing, per the Learn products page [1].
GPU CVMs anchor on the same CPU-side TEEs and add a GPU TEE. The Learn page describes the NCCadsH100v5 SKU: "NCCadsH100v5 confidential VMs come with a GPU ... use linked CPU and GPU Trusted Execution Environments (TEEs)" [1]. This is the linked-attestation product for confidential AI -- a SEV-SNP host CVM bound by attestation to an NVIDIA H100 in Confidential Compute mode.
March 30, 2026 brings a pricing change customers should plan for. Microsoft Learn states: "From March 30 2026, encrypted OS disks will incur higher costs" [33]. Confidential OS-disk encryption remains the recommended configuration where the workload requires it; the change is to the billing line, not to the architecture.
The paravisor: OpenHCL on OpenVMM
The single most important productisation move Azure made is what Microsoft calls a paravisor. The framing from the October 17, 2024 Tech Community announcement is verbatim: "Microsoft developed the first paravisor in the industry, and for years, we have been enhancing the paravisor offered to Azure customers. This effort now culminates in the release of a new, open source paravisor, called OpenHCL" [2].
A thin operating system running inside the trust boundary of a confidential VM, between the host hypervisor and the customer guest. The paravisor exposes the synthetic devices, the vTPM, and the GPA partitioning that a Windows or Linux guest expects from a Hyper-V environment -- without trusting any of those services to the host below the trust boundary. The paravisor is itself part of the TCB, but on Azure the paravisor binary is open source [@openhcl-blog; @openvmm-repo].
Microsoft's open-source paravisor, released on October 17, 2024. OpenHCL is built on top of OpenVMM, "a modular, cross-platform Virtual Machine Monitor (VMM), written in Rust" [34]. On Azure SEV-SNP CVMs OpenHCL runs at VMPL0; on TDX CVMs it runs in the L1 partition seat under TD Partitioning [@openhcl-blog; @openvmm-dev]. It mediates virtual devices, brokers the vTPM, manages GPA partitioning between private and shared pages, and handles diagnostics, all inside the trust boundary.
"Microsoft developed the first paravisor in the industry, and for years, we have been enhancing the paravisor offered to Azure customers. This effort now culminates in the release of a new, open source paravisor, called OpenHCL." -- Microsoft Tech Community, OpenHCL announcement, October 17, 2024 [2]
The OpenVMM repository README puts the focus crisply: "OpenVMM is a modular, cross-platform Virtual Machine Monitor (VMM), written in Rust. Although it can function as a traditional VMM, OpenVMM's development is currently focused on its role in the OpenHCL paravisor" [34]. The OpenVMM Guide lists the virtualisation APIs OpenVMM supports, including "MSHV (using VSM / TDX / SEV-SNP)" for paravisor mode, WHP for a Windows host, and KVM for a Linux host [35]. The use cases listed include Azure Boost, Trusted Launch, and Confidential VMs.
Because OpenHCL is in the TCB, customers do not avoid trusting Microsoft by running it -- but they can now read the source. That is a categorical change from earlier closed paravisors. The point about a TCB is not its size but its auditability and reviewability.
The canonical Linux-side analogue is AMD's Secure VM Service Module (SVSM), which runs at VMPL0 inside an SEV-SNP guest and provides the same kind of in-trust-boundary services (virtual TPM, paravirtualised I/O brokering, attestation surface) that OpenHCL provides on Azure [36]. SVSM and OpenHCL solve the same problem with different implementations and different signing chains. The Linux community's reference SVSM is the COCONUT-SVSM open-source project [37]. A reader who needs a confidential-VM paravisor on a non-Azure Linux host should look at SVSM; a reader who needs it on Azure gets OpenHCL.
The vTPM
Inside the paravisor's protected memory, OpenHCL synthesises a per-VM virtual TPM. Microsoft Learn is verbatim: "Azure confidential VMs feature a virtual TPM (vTPM) for Azure VMs. ... Confidential VMs have their own dedicated vTPM instance, which runs in a secure environment outside the reach of any VM" [33]. The architectural significance of this single sentence cannot be overstated. The vTPM's endorsement key is bound at provision time to the SEV-SNP or TDX hardware attestation report, so a vTPM quote can be transitively chained back to silicon: vTPM quote -> EK certificate -> SNP_REPORT or TD Quote -> VCEK or Intel signing root [33].
The practical consequence is that a Windows Server CVM runs an unmodified Trusted Boot chain inside the guest. PCR-7 still indexes the Secure Boot signer. Code Integrity policies still extend their own PCRs. BitLocker still seals the Volume Master Key to the TPM. None of those operating-system features need to know that the TPM they are talking to is synthesised by OpenHCL inside an SEV-SNP guest -- and yet every one of those features is now anchored, transitively, to AMD or Intel silicon rather than to a discrete TPM chip on a motherboard the cloud customer cannot inspect.
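The transitive chain from vTPM quote to silicon can be sketched as a link-walking check. This is illustrative only -- the real verification is X.509 path building plus an ECDSA-P384 check of the SNP_REPORT against the VCEK -- and all of the link names below are our labels, not real API identifiers.

```javascript
// Each link records what artefact vouches for what: a quote is signed by the
// EK, the EK is bound into the SNP_REPORT at provision time, the report is
// signed by the VCEK, and the VCEK chains to AMD's signing root.
const chain = [
  { from: 'vTPM quote',          to: 'vTPM EK certificate' },
  { from: 'vTPM EK certificate', to: 'SNP_REPORT' },
  { from: 'SNP_REPORT',          to: 'VCEK' },
  { from: 'VCEK',                to: 'AMD signing root' },
];

// A chain is anchored iff every link's signer is the next link's subject
// and the final signer is the expected silicon-vendor root.
function anchoredTo(links, root) {
  for (let i = 0; i < links.length - 1; i++) {
    if (links[i].to !== links[i + 1].from) return false; // broken link
  }
  return links[links.length - 1].to === root;
}

console.log(anchoredTo(chain, 'AMD signing root'));   // true
console.log(anchoredTo(chain, 'Intel signing root')); // false: wrong root
```

On a TDX CVM the same walk terminates in the Intel signing root via the TD Quote instead of the VCEK.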
Microsoft Azure Attestation
The verifier in Azure's confidential-computing stack is Microsoft Azure Attestation. The Learn overview describes it: "Microsoft Azure Attestation is a unified solution for remotely verifying the trustworthiness of a platform and integrity of the binaries running inside it. The service supports attestation of the platforms backed by Trusted Platform Modules (TPMs) alongside the ability to attest to the state of Trusted Execution Environments (TEEs) such as Intel Software Guard Extensions (SGX) enclaves, Virtualization-based Security (VBS) enclaves ... and Azure confidential VMs" [3].
Azure's unified verifier service for confidential platforms. MAA accepts evidence -- an SNP_REPORT or TD Quote, plus a vTPM quote, plus boot measurements -- evaluates it against a customer-defined attestation policy, and returns a signed JWT carrying the issued claims. MAA's role in the RATS architecture is the verifier, in Passport topology: the attester collects MAA's signed result and presents it to the relying party (Azure Key Vault) [@msdocs-maa-overview; @rfc9334].
The SKR loop is documented verbatim. "When a CVM boots up, SNP report containing the guest VM firmware measurements are sent to Azure Attestation. The service validates the measurements and issues an attestation token that is used to release keys from Managed-HSM or Azure Key Vault. These keys are used to decrypt the vTPM state of the guest VM, unlock the OS disk and start the CVM" [3].
The Azure Key Vault / Managed HSM operation that releases a wrapped key only after the requesting party presents a valid Microsoft Azure Attestation token that satisfies the key's release policy. SKR is what closes the loop between rail 1 (memory protection) and rail 2 (attestation) at the customer's perimeter: a key never leaves the HSM unless the attesting CVM has been verified [@msdocs-maa-overview; @msdocs-azure-cvm].
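The SKR gate at the relying party reduces to a claims check against the release policy before the wrapped key leaves the HSM. The sketch below uses an invented policy shape -- the real SKR policy grammar is authored in Azure Key Vault against MAA-issued JWT claims, and real code must verify the token signature first.

```javascript
// Hypothetical SKR gate: release the wrapped key only if every release-policy
// rule matches a claim in the MAA-issued token. Signature check is elided.
function skrRelease(token, releasePolicy) {
  const claimsOk = releasePolicy.every(
    (rule) => token.claims[rule.claim] === rule.equals
  );
  return claimsOk ? { wrappedKey: 'AES-wrapped key blob' } : null;
}

const token = {
  claims: { 'x-ms-isolation-tee': 'sevsnpvm', secureBootEnabled: true },
};
const policy = [
  { claim: 'x-ms-isolation-tee', equals: 'sevsnpvm' },
  { claim: 'secureBootEnabled', equals: true },
];

console.log(skrRelease(token, policy) !== null);          // true: key released
console.log(skrRelease({ claims: {} }, policy) === null); // true: withheld
```

This is the hinge between rail 1 and rail 2: the key that unlocks the vTPM state and the OS disk simply does not exist inside the CVM until this predicate passes.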
MAA policy v1.2
The policy language is the operational surface customers actually interact with. The MAA policy v1.2 grammar has four segments, verbatim from the Microsoft Learn page: "Policy version 1.2 has four segments: version, configurationrules, authorizationrules, issuancerules" [38]. The critical operational distinction is between the last two. Authorization rules can fail attestation; issuance rules cannot. The docs are explicit: "authorizationrules: ... These rules can be used to fail attestation. issuancerules: ... These rules can be used to add to the outgoing claim set and the response token. These rules can't be used to fail attestation" [38].
The configuration-rule defaults give you sane behaviour out of the box: require_valid_aik_cert defaults to true and required_pcr_mask defaults to 0xFFFFFF (the first twenty-four PCRs must appear in the quote) [38].
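The PCR mask is a plain bitmask over PCR indices: bit i set means PCR i must appear in the vTPM quote, so 0xFFFFFF covers PCRs 0 through 23. A small helper (our own, not MAA syntax) makes the arithmetic visible:

```javascript
// Expand a required_pcr_mask into the list of PCR indices it demands.
function requiredPcrs(mask) {
  const pcrs = [];
  for (let i = 0; mask !== 0; i++, mask >>>= 1) {
    if (mask & 1) pcrs.push(i);
  }
  return pcrs;
}

const dflt = requiredPcrs(0xFFFFFF);
console.log(dflt.length);       // 24
console.log(dflt[0], dflt[23]); // 0 23
console.log(requiredPcrs(0x80)); // [ 7 ] -- PCR-7 only, the Secure Boot signer
```

Narrowing the mask weakens the quote requirement, so the 0xFFFFFF default is the conservative choice.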
Claim extraction uses JmesPath. The Learn page reproduces a Secure Boot detection rule that the verifier can use to flip a secureBootEnabled claim:
// Verbatim from Microsoft Learn (MAA policy v1.2 Secure Boot detection).
// This is JS-style pseudo-code that walks the rule structure, not
// runnable MAA syntax.
const policyRule = {
segment: 'issuancerules',
// "Claim rules" use JmesPath queries against parsed event data.
step1: {
when: 'type == "events" && issuer == "AttestationService"',
add: 'efiConfigVariables',
via: "Events[?EventTypeString == 'EV_EFI_VARIABLE_DRIVER_CONFIG' " +
"&& ProcessedData.VariableGuid == '8BE4DF61-93CA-11D2-AA0D-00E098032B8C']"
},
// GUID 8BE4DF61-93CA-11D2-AA0D-00E098032B8C is the EFI Global Variable
// namespace, which is where 'SecureBoot' lives.
step2: {
issue: 'secureBootEnabled',
via: "[?ProcessedData.UnicodeName == 'SecureBoot'] " +
"| length(@) == 1 && @[0].ProcessedData.VariableData == 'AQ'"
},
// 'AQ' is base64('\x01'), i.e. SecureBoot==1.
fallback: { issue: 'secureBootEnabled', value: false }
};
console.log('Segment :', policyRule.segment); // issuancerules
console.log('Yields :', 'secureBootEnabled claim in JWT');
console.log('Lesson :', 'Add this to authorizationrules to actually fail!');
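The companion move is to use the same claim where it can actually reject a request. Again in JS-style pseudo-code walking the rule structure, not runnable MAA syntax -- the rule shape here is our own illustration of the segment semantics:

```javascript
// An authorization rule may fail attestation; an issuance rule never can.
const authzRule = {
  segment: 'authorizationrules',
  deny: {
    when: 'secureBootEnabled != true',
    action: 'fail attestation', // only this segment has that power
  },
};

// Toy evaluator for the deny rule above.
function evaluate(claims) {
  return claims.secureBootEnabled === true
    ? 'token issued'
    : 'attestation rejected';
}

console.log(evaluate({ secureBootEnabled: true }));  // 'token issued'
console.log(evaluate({ secureBootEnabled: false })); // 'attestation rejected'
```

A policy that only mints secureBootEnabled in issuancerules pushes the enforcement burden onto every relying party; moving the predicate into authorizationrules enforces it once, at the verifier.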
Diagram source
sequenceDiagram
participant E as Evidence (SNP_REPORT + vTPM)
participant C as configurationrules
participant A as authorizationrules
participant I as issuancerules
participant J as Signed JWT
E->>C: parse + defaults -- (require_valid_aik_cert, PCR mask)
C->>A: typed claim set
A-->>A: predicate checks
alt All authorization rules pass
A->>I: continue
I->>J: mint claims (secureBootEnabled, x-ms-isolation-tee, ...)
J-->>E: signed attestation token
else Any authorization rule fails
A-->>E: attestation rejected
end
The two-axis privilege model: VMPL crossed with VTL
A common misconception is that a SEV-SNP CVM makes Virtualization-Based Security inside the guest redundant. The argument goes: "the whole VM is in a TEE, so why do I still need a Secure Kernel?" The architecture answers the question by saying that VMPL and VTL are orthogonal axes.
The VMPL axis is cloud-operator threat model. VMPL0 (the OpenHCL paravisor) sees pages that the customer's kernel at VMPL2 does not, and the host hypervisor below VMPL0 sees none of the encrypted memory at all. VMPL keeps the operator out.
The VTL axis is intra-guest threat model. Inside the guest, VTL1 hosts the Secure Kernel, IUM (Isolated User Mode) trustlets like LSAIso for Credential Guard, and the HVCI code-integrity verifier. VTL0 hosts the normal Windows kernel and user mode. VTL keeps a kernel-mode attacker out of LSA secrets and credential blobs. Without VTL, the customer's own kernel can read its own LSAIso heap; without VMPL, the hypervisor can read the customer's RAM.
VBS-inside-CVM is therefore not a duplication. It closes two different attack classes.
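The orthogonality can be stated as two independent predicates. This is a toy model with invented labels, not an access-control implementation; it only encodes the paragraph above.

```javascript
// VMPL answers "can the host read guest RAM?"; VTL answers "can the guest's
// own VTL0 kernel read VTL1 secrets?". Neither predicate implies the other.
function canRead(attacker, asset) {
  if (attacker === 'host hypervisor') return false; // blocked by VMPL + RMP/PAMT
  if (attacker === 'guest VTL0 kernel' && asset === 'LSAIso heap') return false; // blocked by VTL1 / VBS
  return true; // everything else is in the attacker's own scope
}

console.log(canRead('host hypervisor', 'guest RAM'));        // false: VMPL axis
console.log(canRead('guest VTL0 kernel', 'LSAIso heap'));    // false: VTL axis
console.log(canRead('guest VTL0 kernel', 'VTL0 user mode')); // true: own scope
```

Drop the first clause and the operator reads RAM; drop the second and a kernel-mode exploit reads credentials. Each axis closes a class the other cannot.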
Diagram source
flowchart TB
subgraph Host["Host below trust boundary"]
H["Hyper-V host kernel -- (no access to encrypted RAM)"]
end
subgraph Boundary["Inside SEV-SNP / TDX trust boundary"]
subgraph V0["VMPL0 / L1 TD partition"]
P["OpenHCL paravisor -- (synthetic devices, vTPM)"]
end
subgraph V2["VMPL2 / L2 TD partition (customer guest)"]
subgraph T1["VTL1 (Secure Kernel)"]
SK["Secure Kernel -- + IUM trustlets: -- LSAIso, Credential Guard"]
end
subgraph T0["VTL0 (normal OS)"]
W["Windows Server kernel -- + user mode"]
end
end
end
H -. "blocked by VMPL + -- RMP / PAMT" .-> P
W -. "blocked by VTL 1 -- VBS / HVCI" .-> SK
P --> V2
Confidential Containers: three Azure surfaces
Confidential VMs are not the only Azure surface where SEV-SNP attestation can land. There are three more.
Confidential Containers on Azure Container Instances (ACI), GA. Microsoft Learn: "Confidential containers on Azure Container Instances are deployed in a container group with a Hyper-V isolated TEE, which includes a memory encryption key generated and managed by an AMD SEV-SNP capable processor" [39]. ACI Confidential Containers use confidential computing enforcement (CCE) policies generated by the confcom Azure CLI extension, and they expose SNP attestation reports for the SKR sidecar pattern.
Confidential Containers on AKS, preview, sunsetting. The Learn AKS page is explicit: "The Confidential Containers preview is set to sunset in March 2026. After this date, customers with existing Confidential Container node pools should expect to see reduced functionality, and you won't be able to spin up any new nodes with the KataCcIsolation runtime" [40]. Microsoft routes customers to four alternatives: Confidential VM AKS node pools, ACI Confidential Containers, ARO Confidential Containers, and the upstream Confidential Containers project [40].
Confidential VM AKS worker nodes, GA. A different model -- node-granularity CVM rather than per-pod CVM. Learn: "AKS now supports confidential VM node pools with Azure confidential VMs. These confidential VMs are the generally available DCasv5 and ECasv5 confidential VM-series utilizing 3rd Gen AMD EPYC processors with Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) security features" [41]. This is a lift-and-shift path for existing AKS workloads.
Confidential Containers on ARO is the Red Hat OpenShift equivalent, with Kata-isolated per-container SEV-SNP enforcement.
The cross-cloud parallel is the CNCF Confidential Containers project, accepted to CNCF on March 8, 2022 at the Sandbox maturity level [42]. The project documentation describes it as "an open source project that brings confidential computing to Cloud Native environments, using hardware technology to protect complex workloads" [43]. CoCo's substrate is Kata Containers' MicroVM model; the TEE backing is currently Linux-only; Trustee is the canonical attestation broker on the CNCF side. The open-source community floor under all of this also includes Edgeless's Constellation, historically the canonical confidential-Kubernetes distribution (the upstream repo was archived in 2025-2026, and Edgeless's successor project Contrast [44] now carries the work forward at the workload-confidential-container layer rather than the whole-cluster layer) [45], and COCONUT-SVSM, the AMD-side reference SVSM running at VMPL0 [37].
NVIDIA H100 CC on NCCadsH100v5
The Azure NCCadsH100v5 SKU pairs an SEV-SNP CVM with an NVIDIA H100 in Confidential Compute mode and links the two attestations together. CPU-side rail 1 is SEV-SNP. GPU-side rail 1 is H100 CC. Rail 2 must compose both: the relying party only releases the workload's key if both attestations check out. Cross-vendor attestation composition is one of the open standards problems §9 will revisit.
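The composition rule is an AND-gate over two independent verdicts. The helper below is illustrative (the names are ours); today the relying party really does run the two evaluations separately, which is exactly what makes this a standards gap.

```javascript
// Linked attestation for a CPU+GPU CVM: release the workload key only if
// both rails pass. Either failure alone withholds the key.
function releaseForConfidentialAI(cpuResult, gpuResult) {
  return cpuResult.passed && gpuResult.passed;
}

const cpu = { tee: 'SEV-SNP', passed: true };
const gpu = { tee: 'H100 CC', passed: true };

console.log(releaseForConfidentialAI(cpu, gpu));                       // true
console.log(releaseForConfidentialAI(cpu, { ...gpu, passed: false })); // false
console.log(releaseForConfidentialAI({ ...cpu, passed: false }, gpu)); // false
```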
Diagram source
flowchart TB
subgraph S["Silicon"]
AMD["AMD-SP firmware -- + SEV-SNP RMP"]
INTEL["Intel TDX Module -- (SEAM, SEAMRR)"]
end
subgraph H["Host"]
HV["Azure Hyper-V -- (below trust boundary)"]
end
subgraph P["Paravisor (in TCB)"]
OH["OpenHCL on OpenVMM -- VMPL0 / L1 TD seat"]
VT["vTPM synthesised -- by paravisor"]
end
subgraph G["Customer guest"]
WS["Windows Server CVM -- (VTL0 + VTL1, VBS / HVCI)"]
end
subgraph V["Verifier"]
MAA["Microsoft Azure Attestation -- (policy v1.2)"]
end
subgraph R["Relying party"]
AKV["Azure Key Vault / -- Managed HSM (SKR)"]
APP["Customer application"]
end
AMD --> HV
INTEL --> HV
HV --> OH
OH --> VT
OH --> WS
WS -- "SNP_REPORT -- or TD Quote -- + vTPM quote" --> MAA
MAA -- "Signed JWT" --> AKV
AKV --> APP
That is the Azure stack. But Azure is not the only design point -- Google and AWS chose different glue, and one of them is on a fundamentally different threat model. How do they compare?
7. Competing approaches
Google Cloud Confidential VMs
Google Cloud supports the same two CPU TEEs. The GCP Confidential VM docs are explicit: "AMD Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) expands on SEV, adding hardware-based security to help prevent malicious hypervisor-based attacks like data replay and memory remapping. Attestation reports can be requested at any time directly from the AMD Secure Processor" [46]. And on the Intel side: "Intel Trust Domain Extensions (TDX) creates an isolated trust domain (TD) within a VM, and uses hardware extensions for managing and encrypting memory" [46].
GCP's machine-type mapping is direct. AMD SEV / SEV-SNP runs on N2D and C3D; Intel TDX runs on C3 Confidential VMs. The Confidential Computing product hub lists "Confidential VMs on the C3 machine series brings hardware-level protection to your AI models and data" and "Confidential VMs on the accelerator-optimized A3 machine series with NVIDIA H100 GPUs" as the parallel GPU-CC product [47]. There is a Confidential Space product on top for multi-party analytics, plus Confidential GKE Nodes and Confidential Dataflow.
The verifier-of-record is Google's own attestation service, with the guest's vTPM as the default trust root. Intel Trust Authority is supported as a plug-in alternative for TDX evidence.
A small correction to a widely repeated framing. It is sometimes said that GCP's confidential offerings are "also SEV-SNP" and nothing more. Per the GCP docs, GCP supports both SEV-SNP and TDX [46]. If you are picking a CVM cloud for a multi-vendor strategy, treat GCP as a near-peer to Azure on the CPU dimension and differentiate on the verifier, the SKU mapping, and the live-migration story instead.
AWS Nitro Enclaves: a genuinely different model
The most common confusion in this design space is the assumption that AWS Nitro Enclaves is "AWS's confidential VM product." It is not. It is a different model on a different threat boundary.
The Nitro Enclaves user guide is unambiguous about the threat model. "AWS Nitro Enclaves is an Amazon EC2 feature that allows you to create isolated execution environments ... Enclaves are separate, hardened, and highly-constrained virtual machines. They provide only secure local socket connectivity with their parent instance. They have no persistent storage, interactive access, or external networking" [48]. The same page continues: "Nitro Enclaves is processor agnostic and it is supported on most Intel, AMD, and AWS Graviton-based Amazon EC2 instance types built on the AWS Nitro System" [48]. And: "Nitro Enclaves use the same Nitro Hypervisor technology that provides CPU and memory isolation for Amazon EC2 instances" [48].
Three differences matter.
First, there is no CPU memory cipher. Isolation is enforced by the Nitro hypervisor on a dedicated Nitro System card, not by SEV-SNP or TDX. Memory is in the clear in DRAM, just architecturally walled off by the hypervisor and the hardware root of trust below it.
Second, attestation signs through the Nitro hypervisor and integrates with AWS KMS. There is no VCEK or TDX Quoting Enclave.
Third, the threat model is parent-instance and co-tenant isolation, not cloud-operator isolation. Amazon is in the TCB by design. A subpoena or a compromised AWS operator are within the threat model of Azure / GCP CVMs and outside the threat model of Nitro Enclaves.
Nitro Enclaves still has a role: it is excellent at isolating a long-lived signing service from a more loosely audited application instance, and four enclaves per parent EC2 host is a generous concurrency budget for that pattern.
Confidential Containers and NVIDIA H100 CC
The Confidential Containers project crosses cloud boundaries. CNCF accepted it in March 2022 [42]. The project docs describe it as "an open source project that brings confidential computing to Cloud Native environments, using hardware technology to protect complex workloads" [43]. The Azure surfaces (ACI, AKS, ARO) were covered in §6; the equivalent on AWS is the Kata Containers + Confidential Containers combination on top of bare-metal Nitro hosts, and on GCP it lands on Confidential GKE Nodes.
The NVIDIA H100 CC story is roughly cross-cloud parity. Azure NCCadsH100v5 pairs SEV-SNP with H100 CC; Google's A3 series pairs SEV-SNP and TDX with H100 CC. Cross-vendor attestation composition is the open standards problem on which the relying party experience still depends. On the silicon side, ARM's Confidential Compute Architecture (CCA, with Realm Management Extension) is the ARM-side analogue of SEV-SNP/TDX, and Apple's Secure Enclave Processor is a board-scoped TEE with a different form factor; both are adjacent VM-scoped or board-scoped TEE designs but out of scope for the cloud-CVM body of this article.
The head-to-head matrix
| Dimension | Azure CVM | GCP CVM | AWS Nitro Enclaves | Confidential Containers |
|---|---|---|---|---|
| CPU TEE | SEV-SNP, Intel TDX | SEV / SEV-SNP, Intel TDX | None (Nitro hypervisor) | SEV-SNP, TDX (varies by host) |
| Memory cipher | AES (per-VM, per-TD) | AES (per-VM, per-TD) | None (host RAM) | Inherited from host TEE |
| Integrity rail | RMP (AMD), PAMT (Intel) | RMP, PAMT | Nitro hypervisor isolation | Inherited from host TEE |
| Attestation evidence | SNP_REPORT, TD Quote, vTPM quote | SNP_REPORT, TD Quote, vTPM | Nitro attestation document | TEE evidence + container measurement |
| Verifier | Microsoft Azure Attestation | Google attestation, Intel Trust Authority | AWS KMS | Trustee (CNCF) |
| Operator threat model | Yes (operator excluded) | Yes (operator excluded) | No (Nitro in TCB) | Yes (operator excluded) |
| Lift-and-shift Windows | Yes | Yes | No (custom enclave format) | Linux containers only |
| Live migration of CVM | No | Yes (SEV on N2D / C3D) | N/A | No |
| 2024-era CVE exposure | CacheWarp, WeSee, Heckler (SEV-SNP); Heckler (TDX) | Same upstream CVEs | Distinct (Nitro hypervisor) | Inherited from host TEE |
| Granularity | Whole VM, container | Whole VM | Per enclave (up to 4 per host) | Per pod / per container |
Diagram source
flowchart LR
Nitro["AWS Nitro Enclaves -- (parent-instance threat model)"]
Azure["Azure / GCP CVMs -- (cloud-operator threat model, -- whole VM)"]
CoCo["Confidential Containers -- (per pod / per container)"]
H100["NVIDIA H100 CC -- (CPU + GPU linked TEE)"]
Nitro --- Azure
Azure --- CoCo
CoCo --- H100
If the contract is settled and the products ship, what is still wrong with this picture? Why do four published papers in 2024 demonstrate extracting secrets from a fully-patched SEV-SNP CVM?
8. Theoretical limits and the 2024 attack class
May 2, 2024. ETH Zurich's ZISC group publishes the Ahoi family of attacks. The lab's announcement is brisk: "Researchers from the SECTRS group have now discovered a new class of attacks, dubbed Ahoi attacks, that exploit vulnerabilities in the notification framework in Intel TDX and AMD SEV-SNP. ... the vulnerabilities are tracked under 2 CVEs: CVE-2024-25744, CVE-2024-25743" [49] (with CVE-2024-25742 covering WeSee). WeSee won the Distinguished Paper Award at IEEE S&P 2024 [50]. Heckler appeared at USENIX Security 2024 [51]. CISPA's CacheWarp, also at USENIX Security 2024, cross-cut both [52].
Four 2024-era papers attacking shipping confidential VMs, and a key observation: none of them broke the Generation-2 integrity rail itself. They all exploit seams around it.
Trusted Computing Base accounting
The irreducible silicon-vendor trust root is non-zero by design. On SEV-SNP the customer must trust AMD-SP firmware and the ECDSA-P384 VCEK chain rooted at AMD. On TDX the customer must trust the signed TDX Module binary and the SGX-resident Quoting Enclave's signing root rooted at Intel. On Azure the customer additionally trusts Microsoft's signed OpenHCL binary -- with the consolation that OpenHCL is open source and reviewable [@openhcl-blog; @openvmm-repo]. The verifier (MAA, Intel Trust Authority, Google's verifier) is a separate trust component the relying party must extend.
The set of hardware, firmware, and software components whose correct operation is necessary for a system to enforce its security properties. For an Azure SEV-SNP CVM the TCB is the AMD silicon, the AMD-SP firmware, the OpenHCL paravisor binary, and Microsoft Azure Attestation acting as the verifier. The TCB cannot be empty; the goal is to make it small, auditable, and named [@amd-snp-whitepaper; @openhcl-blog].
The lower bound on TCB is at least one signing root the customer cannot independently rebuild from public artefacts. Reproducible-build transparency over the AMD-SP firmware and the Intel TDX Module is one of the open standards problems on the 2026 frontier. The Google-Intel joint TDX security review from April 2023 is the best public substitute for a reproducible build of the TDX Module today [53].
The 2024 attack class, in order of architectural depth
CacheWarp (USENIX Security 2024; CVE-2023-20592; AMD-SB-3005). A software fault injection. The mechanism, in NVD's verbatim language: "Improper or unexpected behavior of the INVD instruction in some AMD CPUs may allow an attacker with a malicious hypervisor to affect cache line write-back behavior of the CPU leading to a potential loss of guest virtual machine (VM) memory integrity" [54]. The project page is plain: "CacheWarp is a new software fault attack on AMD SEV-ES and SEV-SNP. It allows attackers to hijack control flow, break into encrypted VMs, and perform privilege escalation inside the VM" [55]. The CacheWarp authors -- Ruiyi Zhang, Lukas Gerlach, Daniel Weber, Lorenz Hetterich (CISPA), Youheng Lü (Independent), Andreas Kogler (Graz), Michael Schwarz (CISPA) -- demonstrated full RSA key recovery from Intel IPP, passwordless OpenSSH login, and sudo-to-root escalation [52]. SEV-SNP is affected; the fix is the AMD microcode update tracked by AMD-SB-3005 [56].
WeSee (IEEE S&P 2024 Distinguished Paper; CVE-2024-25742). A malicious #VC injection. The hypervisor coerces the guest's #VC handler into doing the wrong thing by injecting a #VC at a moment the guest does not expect one. The arXiv abstract is verbatim: "We present WeSee attack, where the hypervisor injects malicious #VC into a victim VM's CPU to compromise the security guarantees of AMD SEV-SNP. ... WeSee can leak sensitive VM information (kTLS keys for NGINX), corrupt kernel data (firewall rules), and inject arbitrary code (launch a root shell from the kernel space)" [57]. SEV-SNP only.
A note on attribution. The arXiv metadata for 2404.03526 enumerates the WeSee co-authors as Schlueter, Sridhara, Bertschi, and Shinde [57]. Some earlier writeups listed the third co-author as "Wilke" -- an inadvertent crossover from the SEVurity author list. The canonical list names Andrin Bertschi (ETH Zurich), which matches the project page on ahoi-attacks.github.io/wesee/ [50]; this article uses the corrected attribution.
Heckler (USENIX Security 2024; CVE-2024-25743, CVE-2024-25744). A malicious non-timer interrupt injection. The hypervisor injects int 0x80 or a signal-mapped exception into the guest at a moment that breaks an invariant. The Ahoi Heckler page captures the scope: "All Intel TDX and AMD SEV-SNP processors are vulnerable to Heckler" [58]. The arXiv extended version demonstrates "Heckler on OpenSSH and sudo to bypass authentication. On AMD SEV-SNP we break execution integrity of C, Java, and Julia applications that perform statistical and text analysis" [59]. Mitigations are kernel-side interrupt filtering plus AMD's protected interrupt delivery feature.
Ahoi Attacks (umbrella). The family page describes scope: "Ahoi Attacks is a family of attacks on Hardware-based Trusted Execution Environments (TEEs) to break AMD SEV-SNP, Intel TDX and Intel SGX" [60]. The ZISC news framing names the SECTRS group at ETH Zurich (Shweta Shinde's lab) as the locus [49].
One Glitch to Rule Them All (CCS 2021). The physical-fault lower bound established in §3, included here for completeness. Buhren et al. voltage-glitched the AMD-SP on Zen 1 / 2 / 3 to execute custom payloads and to "reverse-engineer the Versioned Chip Endorsement Key (VCEK) mechanism introduced with SEV Secure Nested Paging (SEV-SNP)" [21], with supplemental tooling published in the PSPReverse GitHub artefact [22]. With physical access and the right glitcher, the AMD-SP is breakable.
"SEV cannot adequately protect confidential data in cloud environments from insider attackers, such as rogue administrators, on currently available CPUs." -- Buhren, Jacob, Krachenfels, Seifert, One Glitch to Rule Them All, 2021 [21]
Diagram source
flowchart TB
INTG["Generation-2 integrity rail -- (RMP / PAMT)"]
INVD["CacheWarp -- CVE-2023-20592 -- INVD seam -- (SEV-ES, SEV-SNP)"]
VC["WeSee -- CVE-2024-25742 -- #VC handler seam -- (SEV-SNP)"]
INT["Heckler -- CVE-2024-25743/4 -- Interrupt-injection seam -- (SEV-SNP, TDX)"]
GLITCH["One Glitch -- Physical voltage-fault -- (AMD-SP firmware)"]
INTG -. "intact" .-> INVD
INTG -. "intact" .-> VC
INTG -. "intact" .-> INT
INTG -. "intact" .-> GLITCH
Composition limits and operational corollaries
Can the verifier itself be a CVM? Can SKR survive a verifier compromise? These are open standards questions; the Confidential Computing Consortium is iterating on them and there is no settled answer. What exists today is operational guidance.
Confidential VMs do not promise side-channel resistance. They promise that the hypervisor cannot directly read memory and that an integrity-broken page cannot be silently substituted. The current equilibrium against the 2024 attack class is patch-after-disclosure plus attestation-policy hygiene. That equilibrium is itself an architectural statement.
The 2024 attacks do not break the SEV-SNP or TDX integrity rail. They exploit seams around the rail: the INVD instruction, the #VC handler, the interrupt-injection path, and the physical AMD-SP. The architecture is settled. The residuals are the work.
If the architecture is settled and the residuals are open, what is the 2026 research frontier actually working on?
9. Open problems
Six open problems shape the 2026 confidential-VM research frontier.
OP1. Nested CVMs. Intel TDX Module 1.5 ships TD Partitioning, where an L1 TD can host L2 TDs of its own [61]. AMD's analogue is the VMPL0 / VMPL2 layout that Azure OpenHCL already exploits. The portable cross-vendor formulation -- nested-CVM evidence that composes both vendors' attestation reports into a single relying-party-checkable artefact -- is not yet standardised. Customers who want a verifier-inside-a-CVM design must build the composition themselves.
OP2. Cross-vendor attestation composition for CPU+GPU CVMs. Azure NCCads H100 v5 and GCP A3 already compose AMD or Intel CPU attestation with NVIDIA H100 GPU attestation in production. The relying party today consumes two separate evidence packages and runs two separate policy evaluations. The RATS working group's RFC 9711 (The Entity Attestation Token, EAT) [62] supplies the canonical wire-format vocabulary -- a JWT- or CWT-encoded attested claims set, produced by a Passport-topology verifier such as Microsoft Azure Attestation -- and is the likely path to a single composed evidence package, but the cross-vendor standards work is unsettled.
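A toy sketch of what a composed package could look like, using RFC 9711's submodule ("submods") claim to nest the two already-verified claim sets. The profile name is hypothetical, and the point of OP2 is precisely that no standard cross-vendor profile exists yet:

```python
def compose_evidence(cpu_claims: dict, gpu_claims: dict) -> dict:
    """Bundle two already-verified claim sets into one EAT-shaped artefact.

    Uses RFC 9711's submodule ("submods") claim; the profile name is
    hypothetical -- the missing standard profile is exactly OP2.
    """
    return {
        "eat_profile": "example.composed-cpu-gpu-cvm",  # hypothetical profile
        "submods": {
            "cpu": cpu_claims,
            "gpu": gpu_claims,
        },
    }
```

A real composition would also have to bind the two submodules together (for example, by cross-quoting a shared nonce), which is part of what remains unstandardised.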
OP3. Transparency and reproducible builds of the AMD-SP firmware and the Intel TDX Module. Both are signed binaries customers trust but do not build. Google's April 2023 joint security review of TDX, authored by Erdem Aktas, Cfir Cohen, Josh Eads (Google Cloud Security), James Forshaw, and Felix Wilhelm (Google Project Zero), enumerated specific vulnerabilities including "Non-Persistent SEAM Loader, Exit Path Interrupt Hijacking, Unsafe Performance Monitoring VMCS Configuration" [53]. That review is the closest thing to public auditability the TDX Module has today. A reproducible build with a binary-transparency log (Rekor-style) would close the residual auditability gap that even open-source OpenHCL leaves on the table for the silicon vendor's firmware.
OP4. Post-quantum attestation signatures. SNP_REPORT signs with ECDSA-P384. TD Quotes are Intel-signed with RSA / ECDSA. The NIST FIPS 204 (ML-DSA) and FIPS 205 (SLH-DSA) standards are final, but vendor-side migration of the CVM signing roots has not been announced for either AMD or Intel. The deployment-feasible path is dual-signing: the SNP_REPORT or TD Quote carries both an ECDSA signature and an ML-DSA signature, the verifier accepts either, and the relying party gates on whichever signing root it trusts most. The transition is non-trivial because the VCEK derivation itself uses a classical KDF chain rooted in classical entropy.
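The acceptance logic of the dual-signing path can be sketched abstractly. The verify callbacks below are stand-ins (real ML-DSA verification needs a post-quantum library, and neither vendor has shipped a dual-signed report format yet); the sketch only shows the relying-party gate, "any one valid signature from a trusted root suffices":

```python
from typing import Callable

# A Verifier checks one signature slot on the raw report bytes.
# Real implementations would wrap an ECDSA-P384 or ML-DSA library call.
Verifier = Callable[[bytes], bool]

def accept_report(report: bytes,
                  verifiers: dict,
                  trusted_algs: set) -> bool:
    """Dual-signing acceptance sketch: during the migration window, any
    one valid signature from a signing root the relying party trusts is
    sufficient."""
    return any(
        alg in trusted_algs and verify(report)
        for alg, verify in verifiers.items()
    )
```

A stricter policy ("require both signatures") is the same loop with `all` in place of `any`; which gate is right depends on whether the classical or the post-quantum root is the one being phased out.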
OP5. Side-channel-resistant CVMs at deployment scale. The CacheWarp, WeSee, Heckler, and Ahoi family is the active frontier. The current operational equilibrium is policy-pinning to the latest TCB SVN plus microcode-update discipline. There is no production CVM architecture that promises constant-time execution across the integrity rail or that closes the cache-side and notification-injection seams at the silicon layer. The 2026 frontier is what architectural mitigations look like, not what microcode patches catch up to.
OP6. Confidential container portability after AKS KataCcIsolation sunset (March 2026). The Azure CoCo surface fragments into ACI per-pod CVM, ARO per-container CVM, AKS Confidential VM node pools at node granularity, and the upstream CoCo project [40]. Customers picking a confidential-containers strategy today need to plan for one of those four routes; the CoCo project itself is Linux-only as of May 2026. Windows confidential containers remain out of scope on every shipping cloud.
Diagram source
flowchart LR
OP1["OP1 -- Nested CVMs -- (TD Part. / VMPL)"]
OP2["OP2 -- Cross-vendor -- attestation composition"]
OP3["OP3 -- Firmware transparency -- + reproducible build"]
OP4["OP4 -- PQ signatures -- (ML-DSA / SLH-DSA)"]
OP5["OP5 -- Side-channel- -- resistant CVMs"]
OP6["OP6 -- CoCo portability -- (post-March-2026)"]
OP1 --- OP2
OP3 --- OP4
OP5 --- OP6
If you are deploying today, what should you do this quarter? The next section is a practical walk-through that ties the architecture to a runnable workflow.
10. Practical guide: VBS-inside-CVM end-to-end
Six steps move you from a credit-card swipe to a Windows Server CVM that runs an attested workload with HSM-backed key release. Treat the list as a checklist; each step is a place where the architecture from the previous sections becomes operational.
Step 1. Provision the CVM. Pick a SEV-SNP SKU (DCasv5 or DCasv6 preview), a supported Windows Server image (2019, 2022, or 2025), and turn on Confidential OS-disk encryption with a customer-managed key in Azure Key Vault or Managed HSM. Bind the key to an MAA-aware release policy. The Learn CVM overview describes the SKU family and the OS-image support [33]. Plan for the March 30, 2026 encrypted-OS-disk pricing change [33].
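A minimal provisioning sketch with the Azure CLI. Resource names are placeholders, the customer-managed-key wiring (disk encryption set plus the MAA-aware release policy) is omitted for brevity, and the flag values should be checked against the current `az vm create` reference:

```shell
# Sketch only: resource names are placeholders; CMK and SKR wiring omitted.
az vm create \
  --resource-group rg-cvm-demo \
  --name cvm-demo-01 \
  --size Standard_DC4as_v5 \
  --image MicrosoftWindowsServer:WindowsServer:2022-datacenter-smalldisk-g2:latest \
  --security-type ConfidentialVM \
  --enable-secure-boot true \
  --enable-vtpm true \
  --os-disk-security-encryption-type DiskWithVMGuestState
```

`DiskWithVMGuestState` is the Confidential OS-disk encryption mode Step 1 describes; binding it to a customer-managed key adds a disk-encryption-set reference on top of this baseline.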
Step 2. Confirm VBS inside the CVM. A common misconception is that turning on SEV-SNP makes Virtualization-Based Security redundant. It does not -- VMPL and VTL are orthogonal. From an elevated PowerShell session:
Step 3. Capture an attestation token and walk it by hand. Use the Azure Attestation client (Microsoft.Azure.Attestation) to send the guest's SNP_REPORT and vTPM quote to the regional MAA endpoint. Inspect the returned JWT. The decoded claim set will include x-ms-isolation-tee describing the TEE (SEV-SNP or TDX), x-ms-runtime describing the guest configuration, the boot measurements, and any custom claims your policy mints. Verify the JWT signature against the region's MAA signing certificate -- not against an arbitrary trusted root; this is the verifier-identity hygiene that closes the SKR loop.
Quick JWT sanity check
A valid MAA JWT will contain x-ms-attestation-type = sevsnpvm (or tdxvm) and an x-ms-compliance-status = azure-compliant-cvm claim. If either is missing or has a different value, the policy did not gate on the TEE and the relying party is about to release a key against unattested evidence.
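The sanity check is scriptable. A stdlib-only sketch that decodes the JWT payload and gates on the two claims; note it deliberately skips signature verification, which must still be done against the regional MAA signing certificate as Step 3 describes:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def looks_like_compliant_cvm(claims: dict) -> bool:
    """Pre-flight gate on the two MAA claims named above.

    This does NOT replace verifying the JWT signature against the
    regional MAA signing certificate (Step 3).
    """
    return (
        claims.get("x-ms-attestation-type") in ("sevsnpvm", "tdxvm")
        and claims.get("x-ms-compliance-status") == "azure-compliant-cvm"
    )
```

Run against a captured token, a False here means the policy file, not the workload, is the thing to fix.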
Step 5. Repeat on a TDX SKU. Provision a DCesv5 or DCesv6 (preview) CVM. The attestation evidence shape changes: TDX evidence carries MRTD plus RTMR0-3 instead of a single SNP measurement, and the claims JSON shape differs. The JmesPath rules in your policy must be parameterised on productId to handle both TEEs from one policy file, or split into two policy files keyed by attestation provider region and TEE type [@intel-tdx-overview; @maa-policy-v12].
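One way to picture the per-TEE parameterisation; the measurement claim names below are illustrative stand-ins, not the exact MAA claim paths:

```python
# Claim names are illustrative stand-ins, not exact MAA claim paths.
TEE_MEASUREMENT_CLAIMS = {
    "sevsnpvm": ["snp-launch-measurement"],                 # single SNP measurement
    "tdxvm": ["mrtd", "rtmr0", "rtmr1", "rtmr2", "rtmr3"],  # MRTD plus RTMR0-3
}

def required_measurement_claims(claims: dict) -> list:
    """Pick which measurement claims to gate on, keyed by TEE type."""
    tee = claims.get("x-ms-attestation-type")
    if tee not in TEE_MEASUREMENT_CLAIMS:
        raise ValueError(f"unsupported or missing TEE type: {tee!r}")
    return TEE_MEASUREMENT_CLAIMS[tee]
```

Whether this dispatch lives in one parameterised policy file or two per-TEE files is the deployment choice the step describes; the shape of the branch is the same either way.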
Step 6. Plan TCB SVN hygiene. Treat the TCB SVN floor in your policy as a moving target, not a one-time configuration. Subscribe to the AMD security bulletins and the Intel TDX security advisories. When CacheWarp's microcode shipped via AMD-SB-3005 [56], the appropriate operational response was to raise the policy's TCB SVN floor to the new microcode level, not to leave the floor at the launch baseline. This is the single most important operational habit a CVM customer can adopt.
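The floor check itself is a component-wise comparison over the TCB version fields (named here after the AMD SNP ABI's TCB_VERSION layout; the numeric values in any real policy come from the vendor bulletin, not from this sketch):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TcbVersion:
    """Component SVNs, named after the AMD SNP ABI's TCB_VERSION layout."""
    boot_loader: int
    tee: int
    snp: int
    microcode: int

def meets_floor(reported: TcbVersion, floor: TcbVersion) -> bool:
    """Every component of the reported TCB must be at or above the floor."""
    return (
        reported.boot_loader >= floor.boot_loader
        and reported.tee >= floor.tee
        and reported.snp >= floor.snp
        and reported.microcode >= floor.microcode
    )
```

Raising the policy floor after a bulletin like AMD-SB-3005 means replacing only the `microcode` component of `floor` with the patched SVN and redeploying the policy; reports from unpatched hosts then fail this check.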
You can build it today. The FAQ below answers the questions readers most often ask after they have built it.
11. FAQ and closing
Frequently asked questions
Does an Azure Confidential VM protect me from Microsoft?
Architecturally, the host hypervisor cannot read your encrypted RAM and cannot silently remap pages without triggering an RMP or PAMT fault [@amd-sev-portal; @intel-tdx-overview]. Operationally, the verifier (Microsoft Azure Attestation) is run by Microsoft, the paravisor (OpenHCL) is built by Microsoft, and the silicon is signed by AMD or Intel. You must still trust those components. The TCB lower bound is the silicon vendor's signing root plus at least one verifier; you can shrink the verifier trust by using a third party (Intel Trust Authority for TDX, or your own deployment of an attestation broker), but you cannot shrink the silicon-vendor root [3].
Is VBS redundant if SEV-SNP is on?
No. VMPL (the SEV-SNP privilege axis) and VTL (the in-guest Virtualization-Based Security axis) are orthogonal -- VMPL gates the operator; VTL gates the guest kernel. See §6 for the full two-axis treatment; a Windows Server CVM should run with VBS, HVCI, and Credential Guard enabled inside the guest exactly as it would outside a CVM [33].
Is an AWS Nitro Enclave a confidential VM?
No. The Nitro hypervisor enforces the enclave boundary in software AWS owns and operates; there is no CPU-level memory cipher, and the threat model is parent-instance isolation rather than cloud-operator isolation. See §7 for the three architectural differences and the operator-trustless callout [48].
Can I bring my own kernel or initrd into a CVM?
Yes, with limits. The attestation surface changes: the SNP_REPORT measurement (or MRTD plus RTMR extensions on TDX) now reflects your custom image. Your MAA policy must whitelist the new measurement values or use issuance-rule projection to bind to attributes you control. You cannot bypass the paravisor without abandoning the OpenHCL-mediated vTPM, which removes the chained vTPM-quote to silicon path most customers depend on [@msdocs-azure-cvm; @openhcl-blog].
Does the vTPM bind to hardware?
Yes -- transitively, through the paravisor. See §6 for the full vTPM quote -> EK certificate -> SNP_REPORT or TD Quote -> VCEK or Intel signing root chain, and read it end-to-end before you accept a vTPM quote as silicon-bound [33].
What is the difference between Confidential VM AKS worker nodes and Confidential Containers on AKS?
Node-granularity CVM versus per-pod CVM. Confidential VM AKS node pools put each worker node inside an SEV-SNP CVM; all pods on that node share the trust boundary [41]. Confidential Containers on AKS used the KataCcIsolation runtime to put each pod inside its own SEV-SNP-backed Kata MicroVM; that preview is sunsetting in March 2026 [40]. Different SKUs, different runtimes, different sunset timelines. Pick node-granularity for lift-and-shift; pick per-pod when you need stricter blast-radius isolation between pods on the same hardware.
Does CacheWarp, WeSee, or Heckler mean I should not use confidential VMs?
No. See §8 for the architectural finding (the Generation-2 integrity rail remains intact under all four 2024 papers; each attack exploits a seam around the rail) and §10 Step 6 for the TCB-SVN-pinning operational habit that translates the finding into deployment policy [@cachewarp-site; @ahoi-heckler; @amd-sb-3005].
Imagine drawing the architecture from memory. Start at the bottom with AMD silicon plus the AMD-SP firmware, or Intel silicon plus the SEAM Range Register and the signed TDX Module. Above that, the Azure Hyper-V host -- below the trust boundary, blind to encrypted RAM. Above that, the OpenHCL paravisor at VMPL0 or the L1 TD seat, mediating synthetic devices and the vTPM. Above that, the Windows Server guest at VMPL2 or the L2 TD, still running VBS, HVCI, and Credential Guard inside. Then evidence flows up: SNP_REPORT or TD Quote plus vTPM quote into Microsoft Azure Attestation, which evaluates policy v1.2 against the evidence and emits a signed JWT, which Azure Key Vault checks before releasing the wrapped OS-disk key. If you can draw it on a napkin in two minutes, you have understood the article. If you can write the MAA policy that says exactly what you mean by "this VM is one of mine," you can build with it.
Study guide
Key terms
- Reverse Map Table (RMP)
- AMD SEV-SNP per-page metadata table enforcing GPA-to-HPA binding; mismatched mappings raise #NPF(rmpfault).
- Virtual Machine Privilege Level (VMPL)
- AMD SEV-SNP four-level privilege lattice; OpenHCL paravisor at VMPL0, customer kernel at VMPL2.
- SNP_REPORT
- ECDSA-P384 signed attestation report from the AMD-SP, carrying measurement, policy, report_data, vmpl, chip_id, tcb_version.
- Secure Arbitration Mode (SEAM)
- Intel CPU privilege state in which the signed TDX Module executes, hosted in the SEAMRR memory range.
- Intel TDX Module
- Signed Intel firmware running in SEAM that mediates entry, exit, and measurement for Trust Domains.
- MRTD
- Build-time TDX measurement of the initial TD image; SEAM analogue of an immutable launch PCR.
- RTMR0-3
- Runtime extendable measurement registers exposed by the TDX Module; SEAM analogue of the runtime-extension TPM PCRs. Canonical TDX-vTPM mapping: RTMR[0]<->PCR[1,7], RTMR[1]<->PCR[2-6], RTMR[2]<->PCR[8-9], RTMR[3]<->PCR[14,17-22].
- OpenHCL paravisor
- Microsoft's open-source Rust paravisor on OpenVMM, running inside the CVM trust boundary at VMPL0 or the L1 TD seat.
- Microsoft Azure Attestation (MAA)
- Azure's RATS verifier; evaluates customer policy v1.2 against SNP_REPORT or TD Quote plus vTPM evidence and returns a signed JWT.
- Secure Key Release (SKR)
- Azure Key Vault / Managed HSM operation gating wrapped-key release on a valid MAA attestation token.
- Versioned Chip Endorsement Key (VCEK)
- AMD per-chip per-TCB-version ECDSA-P384 signing key for SNP_REPORTs; certificate chain anchors to AMD root via the ASK.
References
- Microsoft Learn: Azure confidential computing products overview. https://learn.microsoft.com/en-us/azure/confidential-computing/overview-azure-products
- (2024). OpenHCL: the new, open source paravisor (Microsoft Tech Community). https://techcommunity.microsoft.com/blog/windowsosplatform/openhcl-the-new-open-source-paravisor/4273172
- Microsoft Learn: Microsoft Azure Attestation overview. https://learn.microsoft.com/en-us/azure/attestation/overview
- Confidential Computing Consortium -- About. https://confidentialcomputing.io/about/
- Microsoft Learn: Azure confidential VM overview. https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-vm-overview
- (2023). RFC 9334: Remote ATtestation procedureS (RATS) Architecture. https://datatracker.ietf.org/doc/rfc9334/
- (2016). AMD x86 Memory Encryption Technologies. https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/kaplan
- (2016). Intel SGX Explained. https://eprint.iacr.org/2016/086
- Wikipedia: Trusted Platform Module. https://en.wikipedia.org/wiki/Trusted_Platform_Module
- (2018). Foreshadow: Extracting the Keys to the Intel SGX Kingdom with Transient Out-of-Order Execution. https://www.usenix.org/conference/usenixsecurity18/presentation/bulck
- (2018). SgxPectre: Stealing Intel Secrets from SGX Enclaves via Speculative Execution. https://arxiv.org/abs/1802.09085
- (2020). SGAxe: How SGX Fails in Practice. https://cacheoutattack.com/files/SGAxe.pdf
- (2017). Introducing Azure confidential computing. https://azure.microsoft.com/en-us/blog/introducing-azure-confidential-computing/
- (2017). Introducing Azure confidential computing (Wayback Machine snapshot, September 2017). https://web.archive.org/web/2017/https://azure.microsoft.com/en-us/blog/introducing-azure-confidential-computing/
- Linux Foundation press release: CCC formation (Oct 17 2019). https://www.linuxfoundation.org/press/press-release/confidential-computing-consortium-establishes-formation-with-founding-members-and-open-governance-structure-2
- (2019). Microsoft partners with the Linux Foundation to announce the Confidential Computing Consortium. https://opensource.microsoft.com/blog/2019/08/21/microsoft-partners-linux-foundation-announce-confidential-computing-consortium/
- (2016). AMD Memory Encryption. https://kib.kiev.ua/x86docs/AMD/SEV/memory-encryption-white-paper-Oct-2021.pdf
- (2017). Protecting VM Register State with SEV-ES. https://www.amd.com/content/dam/amd/en/documents/epyc-business-docs/white-papers/Protecting-VM-Register-State-with-SEV-ES.pdf
- (2020). AMD SEV-SNP: Strengthening VM Isolation with Integrity Protection and More. https://www.amd.com/content/dam/amd/en/documents/epyc-business-docs/white-papers/SEV-SNP-strengthening-vm-isolation-with-integrity-protection-and-more.pdf
- (2020). Intel Trust Domain Extensions (white paper, doc 343961-002US). https://www.intel.com/content/dam/develop/external/us/en/documents/tdx-whitepaper-final9-17.pdf
- (2020). Architecture Specification: Intel Trust Domain Extensions Module (doc 344425-001). https://kib.kiev.ua/x86docs/Intel/TDX/344425-001.pdf
- Wikipedia: AMD EPYC. https://en.wikipedia.org/wiki/Epyc
- (2018). SEVered: Subverting AMD's Virtual Machine Encryption. https://arxiv.org/abs/1805.09604
- (2019). Insecure Until Proven Updated: Analyzing AMD SEV's Remote Attestation. https://arxiv.org/abs/1908.11680
- (2020). SEVurity: No Security Without Integrity (project page). https://uzl-its.github.io/SEVurity/
- (2021). One Glitch to Rule Them All: Fault Injection Attacks Against AMD's Secure Encrypted Virtualization. https://arxiv.org/abs/2108.04575
- PSPReverse / amd-sp-glitch (One Glitch artefact). https://github.com/PSPReverse/amd-sp-glitch
- (2017). Intel Architecture Memory Encryption Technologies Specification (doc 336907). https://kib.kiev.ua/x86docs/Intel/MemEncryption/336907-001.pdf
- (2018). LWN-archived RFC: Multi-Key Total Memory Encryption API (MKTME). https://lwn.net/Articles/764480/
- Wikipedia: Trust Domain Extensions. https://en.wikipedia.org/wiki/Trust_Domain_Extensions
- AMD Secure Encrypted Virtualization (SEV) -- developer portal. https://www.amd.com/en/developer/sev.html
- Intel Trust Domain Extensions (Intel TDX) overview. https://www.intel.com/content/www/us/en/developer/tools/trust-domain-extensions/overview.html
- (2024). Intel TDX Module 1.5 Base Architecture Specification (doc 348549, rev 002). https://cdrdv2-public.intel.com/733575/intel-tdx-module-1.5-base-spec-348549002.pdf
- (2022). Microsoft Azure expands confidential VM offerings (The Register). https://www.theregister.com/2022/07/20/microsoft_confidential_vms/
- Lenovo Press: Enabling AMD SEV-SNP on ThinkSystem servers (LP1893). https://lenovopress.lenovo.com/lp1893-enabling-amd-sev-snp-on-thinksystem-servers
- (2023). Intel Adds TDX to Confidential Computing Portfolio with 4th Gen Xeon launch (SecurityWeek). https://www.securityweek.com/intel-adds-tdx-confidential-computing-portfolio-launch-4th-gen-xeon-processors/
- microsoft/openvmm (GitHub). https://github.com/microsoft/openvmm
- OpenVMM Guide. https://openvmm.dev/guide/
- AMD-SEV linux-svsm reference Secure VM Service Module. https://github.com/AMDESE/linux-svsm
- COCONUT-SVSM: an open-source Secure VM Service Module. https://github.com/coconut-svsm/svsm
- Microsoft Learn: Azure Attestation policy version 1.2. https://learn.microsoft.com/en-us/azure/attestation/policy-version-1-2
- Microsoft Learn: Confidential containers on Azure Container Instances. https://learn.microsoft.com/en-us/azure/container-instances/container-instances-confidential-overview
- Microsoft Learn: Confidential containers on AKS (preview / sunset notice). https://learn.microsoft.com/en-us/azure/aks/confidential-containers-overview
- Microsoft Learn: AKS confidential VM node pools. https://learn.microsoft.com/en-us/azure/confidential-computing/confidential-node-pool-aks
- CNCF Confidential Containers project page. https://www.cncf.io/projects/confidential-containers/
- Confidential Containers documentation. https://confidentialcontainers.org/docs/
- Edgeless Contrast: workload-level confidential containers (Constellation successor). https://github.com/edgelesssys/contrast
- Edgeless Constellation: confidential Kubernetes distribution (archived; succeeded by Contrast). https://github.com/edgelesssys/constellation
- Google Cloud: Confidential VM overview. https://cloud.google.com/confidential-computing/confidential-vm/docs/about-cvm
- Google Cloud: Confidential Computing product hub. https://cloud.google.com/confidential-computing
- AWS Nitro Enclaves user guide. https://docs.aws.amazon.com/enclaves/latest/user/nitro-enclave.html
- (2024). ETH Zurich ZISC news: Ahoi attacks disrupting TEEs with malicious notifications. https://zisc.ethz.ch/2024/05/02/ahoi-attacks-disrupting-tees-with-malicious-notifications/
- Ahoi Attacks: WeSee project page. https://ahoi-attacks.github.io/wesee/
- (2024). Heckler -- USENIX Security 2024. https://www.usenix.org/conference/usenixsecurity24/presentation/schl%C3%BCter
- (2024). CacheWarp: Software-based Fault Injection using Selective State Reset. https://www.usenix.org/conference/usenixsecurity24/presentation/zhang-ruiyi
- (2023). Intel Trust Domain Extensions (TDX) Security Review. https://services.google.com/fh/files/misc/intel_tdx_-_full_report_041423.pdf
- NVD record for CVE-2023-20592 (CacheWarp / INVD). https://nvd.nist.gov/vuln/detail/CVE-2023-20592
- CacheWarp project page. https://cachewarpattack.com/
- AMD Security Bulletin AMD-SB-3005 (CacheWarp / CVE-2023-20592). https://www.amd.com/en/resources/product-security/bulletin/amd-sb-3005.html
- (2024). WeSee: Using Malicious #VC Interrupts to Break AMD SEV-SNP. https://arxiv.org/abs/2404.03526
- Ahoi Attacks: Heckler project page. https://ahoi-attacks.github.io/heckler/
- (2024). Heckler: Breaking Confidential VMs with Malicious Interrupts. https://arxiv.org/abs/2404.03387
- Ahoi Attacks family page. https://ahoi-attacks.github.io/
- (2024). Intel TDX Module 1.5 TD Partitioning Architecture Specification (doc 354807, rev 003). https://cdrdv2-public.intel.com/817876/intel-tdx-module-1.5-td-partitioning-spec-354807003.pdf
- (2025). RFC 9711: The Entity Attestation Token (EAT). https://datatracker.ietf.org/doc/rfc9711/