Confidential Computing and Data-Centric Zero Trust: Composable Protection
Encryption operates across three domains: data at rest, data in transit, and data in use. Mature tooling protects the first two through full-disk encryption, TLS 1.3, and envelope encryption in cloud object stores. The third state has historically been the weak vertex of the triangle. Data must be decrypted to be computed on, leaving it exposed in memory, CPU registers, and cache unless the compute environment itself provides cryptographic and hardware-level isolation. Confidential computing addresses this through trusted execution environments (TEEs). Data-centric zero trust, grounded in NIST SP 800-207 and zero trust data fabric (ZTDF) principles, protects data at rest and in transit through attribute-based access control (ABAC), policy enforcement points (PEPs), and cryptographic enforcement. Neither alone is sufficient for regulated, high-value, multi-tenant workloads. Together, they close the triangle and make data protection architectural rather than procedural.
This convergence reflects a maturation in cloud security thinking. For the past decade, organizations have deployed data encryption and network isolation as separate domains. The result: encrypted databases that cloud administrators could still theoretically access through administrative interfaces, encrypted data stores that remain vulnerable while being processed in shared compute pools, and security models that relied on access controls rather than cryptographic proof. A new generation of regulated workloads requires proof that data never exists in decrypted form outside a protected boundary, that proof is cryptographically attested, and that policies are enforced at the data layer, not the network or platform layer.
What Confidential Computing Actually Secures
Confidential computing is a family of hardware-backed isolation technologies: Intel TDX and legacy SGX, AMD SEV-SNP, AWS Nitro Enclaves, Google Confidential VMs on GKE, and Azure Confidential Computing. Each uses memory encryption, instruction-level isolation, and remote attestation to create an execution environment in which decrypted data remains protected from the operating system, hypervisor, and cloud provider personnel.
The guarantee is precise. A TEE protects data in use against logical access by the host system. It does not protect against side-channel attacks, speculative execution leaks, or timing attacks. It does not protect against ciphertext replay, key reuse, or the loss of key material if an attacker gains physical access. Most critically, it depends on trust in the silicon manufacturer. If Intel, AMD, or ARM's cryptographic roots of trust are compromised, attestation itself becomes unreliable. NIST IR 8320 and Confidential Computing Consortium (CCC) documentation detail these boundaries clearly.
The architectural value lies in the hardware-software handshake. When code runs inside a TEE, the hardware guarantees that memory contents are encrypted with a key bound to that specific CPU package. Even an attacker with physical access to the memory bus sees only ciphertext. The hardware also prevents the hypervisor or operating system from directly reading TEE memory, even with administrative access. Intel TDX uses secure extended page tables and multi-key memory encryption to ensure that virtual machine memory is opaque to the host kernel. AMD SEV-SNP extends this to authenticated encryption, meaning the guest can verify that memory contents have not been tampered with by the hypervisor. This distinction matters: authenticated encryption adds integrity protection, preventing replay attacks in which an attacker substitutes old ciphertext blocks for new ones.
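The integrity property can be illustrated with a toy encrypt-then-MAC scheme. This is purely illustrative: the hardware does this with dedicated AES engines, and production schemes also bind version counters to stop rollback of a block to its own older contents.

```python
# Toy encrypt-then-MAC sketch of authenticated memory encryption.
# Binding the MAC to the block index detects relocation/replay of
# ciphertext between positions. Illustrative only; plaintext <= 32 bytes.
import hashlib
import hmac
import secrets

def seal(key: bytes, block_index: int, plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt a memory block and MAC the ciphertext plus its position."""
    keystream = hashlib.sha256(key + block_index.to_bytes(8, "big")).digest()
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream))
    tag = hmac.new(key, block_index.to_bytes(8, "big") + ct, hashlib.sha256).digest()
    return ct, tag

def unseal(key: bytes, block_index: int, ct: bytes, tag: bytes) -> bytes:
    """Verify integrity before decrypting; fail closed on any mismatch."""
    expected = hmac.new(key, block_index.to_bytes(8, "big") + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity failure: block replayed or tampered")
    keystream = hashlib.sha256(key + block_index.to_bytes(8, "big")).digest()
    return bytes(c ^ k for c, k in zip(ct, keystream))

key = secrets.token_bytes(32)
ct0, tag0 = seal(key, 0, b"new secret state")
ct1, tag1 = seal(key, 1, b"other block data")
assert unseal(key, 0, ct0, tag0) == b"new secret state"
# Replay attempt: present block 1's ciphertext at position 0.
try:
    unseal(key, 0, ct1, tag1)
except ValueError:
    print("replay detected")
```

Encryption alone would decrypt the relocated block to garbage without signaling an attack; the MAC turns silent corruption into a detectable integrity failure.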
The Three States and Their Guardians
At rest, envelope encryption with key management systems (KMS) and content-addressed storage (CAS) provide cryptographic enforcement. In transit, TLS 1.3 and post-quantum key encapsulation mechanisms (ML-KEM-768 and ML-KEM-1024, standardized in NIST FIPS 203) augment or replace classical Diffie-Hellman key exchange. In use, data has historically been unprotected except by operating system access controls, which are insufficient in multi-tenant clouds.
Confidential computing changes the in-use state from a liability to an architectural asset. A TEE can seal keys with hardware-backed attestation, meaning key material is decrypted only inside a specific CPU in a specific TEE with a specific software configuration. The Confidential Computing Consortium reports that as of mid-2025, 34% of cloud workloads involving sensitive data are candidates for TEE deployment; adoption is accelerating fastest in healthcare analytics, financial services, and government.
The composition of all three states requires explicit integration. At rest, the envelope encryption key itself is not held in the object store. Instead, it lives in a key management service that enforces policy before releasing keys. In transit, the key travels over TLS 1.3, and increasingly over post-quantum cryptography to defend against harvest-now-decrypt-later attacks. In use, the key is released not to the application binary itself, but only to an attested enclave whose cryptographic measurement matches a trusted hash. This eliminates a long-standing gap: previously, an administrator with sufficient privilege could extract the key while it was in use, whether from process memory, environment variables, or KMS logs. With attested TEEs, the policy enforcement point can cryptographically verify that the key will never exist outside the enclave's protected memory.
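The fail-closed key release can be sketched with a mock KMS that wraps a data key (envelope encryption) and unwraps it only for a caller whose attestation measurement is trusted. All names are illustrative; a real deployment uses a cloud KMS and hardware-signed quotes, not this toy XOR wrap.

```python
# Mock envelope-encryption KMS that gates key release on attestation.
# Illustrative only: the "wrap" is a toy XOR, not real key wrapping.
import hashlib
import secrets

TRUSTED_MEASUREMENTS: set[str] = set()

class MockKMS:
    def __init__(self) -> None:
        self._kek = secrets.token_bytes(32)  # key-encryption key; never leaves the "KMS"

    def wrap(self, dek: bytes) -> bytes:
        stream = hashlib.sha256(self._kek + b"wrap").digest()
        return bytes(a ^ b for a, b in zip(dek, stream))

    def unwrap(self, wrapped: bytes, attestation: dict) -> bytes:
        # Policy evaluation happens BEFORE key release: fail closed.
        if attestation.get("measurement") not in TRUSTED_MEASUREMENTS:
            raise PermissionError("attestation measurement not trusted; key withheld")
        stream = hashlib.sha256(self._kek + b"wrap").digest()
        return bytes(a ^ b for a, b in zip(wrapped, stream))

kms = MockKMS()
dek = secrets.token_bytes(32)          # data-encryption key for one object
wrapped = kms.wrap(dek)

enclave_measurement = hashlib.sha384(b"analytics-binary-v1").hexdigest()
TRUSTED_MEASUREMENTS.add(enclave_measurement)

# Attested enclave gets the key; anything else is refused.
assert kms.unwrap(wrapped, {"measurement": enclave_measurement}) == dek
try:
    kms.unwrap(wrapped, {"measurement": "deadbeef"})
except PermissionError as exc:
    print(exc)
```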
Attestation as a Policy Decision Input
The technical inflection point is attestation-as-an-attribute. A TEE generates a cryptographically signed attestation report containing the enclave's memory hash, CPU hardware identity, software configuration, and a nonce for freshness. This report becomes an attribute in an attribute-based access control (ABAC) policy. A policy decision point (PDP) can now evaluate rules such as "release the customer data key only to code running in an attested Intel TDX instance with build hash X on hardware registered in our trust registry."
This transforms the relationship between data-centric zero trust and confidential computing from independent defense layers into a composed system. A zero trust data fabric (ZTDF) typically enforces policies at the policy enforcement point (PEP), intercepting requests to decrypt or access protected objects. When attestation is available, the PEP can add "requester identity and hardware isolation proof" to the set of attributes the PDP evaluates. Fail-closed policies mean that absent valid attestation, the key is not released, regardless of other attributes.
The attestation report itself is a cryptographic artifact. Intel TDX produces a quote signed by Intel's attestation key infrastructure. That quote includes the TEE's measurement, the SHA-384 hash of the enclave's code and initial data, the TEE's identity indicating which physical CPU it runs on, and a timestamp. A policy engine receiving this report can verify the signature, check that the measurement matches a whitelisted binary, confirm the hardware identity is registered in the organization's hardware root of trust, and verify that the report is fresh. Each of these checks becomes a discrete policy clause in the ABAC rule set, making the security decision transparent, auditable, and composable with other policy attributes like user identity, time of day, or data classification.
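The discrete policy clauses described above can be sketched as an evaluation function that returns exactly which clauses failed, making the decision auditable. The vendor signature is mocked with an HMAC (real TDX quotes are ECDSA-signed through Intel's attestation infrastructure), and every key and registry name here is illustrative.

```python
# Attestation report evaluation as discrete ABAC clauses.
# Vendor PKI is mocked with an HMAC key; names are illustrative.
import hashlib
import hmac
import time

VENDOR_KEY = b"mock-vendor-attestation-key"   # stand-in for vendor signing infrastructure
TRUSTED_HASHES = {hashlib.sha384(b"analytics-binary-v1").hexdigest()}
HW_REGISTRY = {"cpu-7f3a"}                    # org hardware root-of-trust registry
MAX_AGE_S = 300                               # freshness window

def sign_quote(report: dict) -> str:
    msg = repr(sorted(report.items())).encode()
    return hmac.new(VENDOR_KEY, msg, hashlib.sha256).hexdigest()

def evaluate(report: dict, signature: str) -> list[str]:
    """Return the list of failed policy clauses; empty list means allow."""
    failures = []
    if not hmac.compare_digest(signature, sign_quote(report)):
        failures.append("signature")           # clause 1: vendor signature valid
    if report["measurement"] not in TRUSTED_HASHES:
        failures.append("measurement")         # clause 2: code hash whitelisted
    if report["hw_id"] not in HW_REGISTRY:
        failures.append("hardware_identity")   # clause 3: CPU registered
    if time.time() - report["timestamp"] > MAX_AGE_S:
        failures.append("freshness")           # clause 4: inside replay window
    return failures

report = {
    "measurement": hashlib.sha384(b"analytics-binary-v1").hexdigest(),
    "hw_id": "cpu-7f3a",
    "timestamp": time.time(),
}
assert evaluate(report, sign_quote(report)) == []          # all clauses pass
stale = dict(report, timestamp=time.time() - 3600)
assert "freshness" in evaluate(stale, sign_quote(stale))   # one clause fails
```

Because each check is a separate clause, the same engine can AND these results with unrelated attributes such as user identity or data classification.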
Modern implementations also chain attestation across layers. A containerized application running inside a Kubernetes pod on a confidential VM has two attestation boundaries: the VM itself, attested by the cloud provider's infrastructure, and the container or process inside it, attested by the confidential computing platform. Lattix and other platforms are building support for hierarchical attestation, where downstream systems can verify not just whether code is running in a TEE, but whether it is running in a TDX VM in a specific Kubernetes cluster operated by a specific organization.
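The two-level verification can be sketched as a chain in which the provider's root key endorses the VM and a key held by the trusted VM layer endorses the container inside it. HMACs stand in for the real signature chains; all keys and measurements are illustrative.

```python
# Hierarchical attestation sketch: outer VM quote first, then a
# container measurement endorsed by the (now-trusted) VM layer.
import hashlib
import hmac

CLOUD_KEY = b"mock-provider-root-key"     # root: cloud attestation service
VM_LAYER_KEY = b"mock-vm-agent-key"       # held by the attested VM layer

def sign(key: bytes, msg: str) -> str:
    return hmac.new(key, msg.encode(), hashlib.sha256).hexdigest()

vm_measurement = hashlib.sha256(b"confidential-vm-image-v5").hexdigest()
container_measurement = hashlib.sha256(b"analytics-container-v2").hexdigest()

chain = {
    "vm": {"measurement": vm_measurement,
           "sig": sign(CLOUD_KEY, vm_measurement)},
    "container": {"measurement": container_measurement,
                  "sig": sign(VM_LAYER_KEY, container_measurement)},
}

def verify_chain(chain: dict, trusted_vm: str, trusted_container: str) -> bool:
    vm = chain["vm"]
    vm_ok = (vm["measurement"] == trusted_vm
             and hmac.compare_digest(vm["sig"], sign(CLOUD_KEY, vm["measurement"])))
    if not vm_ok:
        return False  # the inner endorsement is meaningless without the outer one
    ctr = chain["container"]
    return (ctr["measurement"] == trusted_container
            and hmac.compare_digest(ctr["sig"], sign(VM_LAYER_KEY, ctr["measurement"])))

assert verify_chain(chain, vm_measurement, container_measurement)
assert not verify_chain(chain, vm_measurement, "unexpected-container-hash")
```

The ordering is the point: the container's endorsement is only as trustworthy as the VM layer that issued it, so verification must walk the chain from the hardware root outward.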
Composing ZTDF with Confidential Computing
The handshake is straightforward. An application runs inside a TEE, generates an attestation report which includes a public key certified by the hardware, and signs a key release request with the corresponding private key. The ZTDF's policy enforcement point sends the request to the policy decision point, which evaluates the attestation and policy rules. If the attestation is fresh, the hardware identity is trusted, the software hash matches a whitelist, and the requester's identity satisfies the access policy, the data key is returned. The requester then decrypts data only inside the TEE, where it cannot escape.
This pattern is particularly valuable for ZTDF implementations that use Merkle-tree lineage tracking for data provenance, which Lattix Technologies integrates into its platform. The lineage itself is protected by the ZTDF's object encryption; the computation that transforms source data into derived data occurs inside an attested enclave. Downstream consumers can then verify not only that a file is encrypted, but that it was processed by trusted code, with cryptographic proof.
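The tamper-evidence property behind such lineage can be sketched with a simple hash-chained log; the Merkle-tree variant the article describes adds efficient inclusion proofs on top of the same idea. Entry fields here are illustrative.

```python
# Hash-chained lineage log: each record commits to its predecessor,
# so rewriting any past entry breaks every later hash.
import hashlib
import json

def append_entry(log: list, entry: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"prev": prev, "entry": entry, "hash": digest})

def verify_chain(log: list) -> bool:
    prev = "0" * 64
    for rec in log:
        body = json.dumps(rec["entry"], sort_keys=True)
        if rec["prev"] != prev:
            return False
        if rec["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

log: list = []
append_entry(log, {"op": "ingest", "source": "site-a.parquet", "code_hash": "aa11"})
append_entry(log, {"op": "transform", "code_hash": "bb22", "enclave": "tdx"})
assert verify_chain(log)

log[0]["entry"]["source"] = "tampered"   # any rewrite of history is detected
assert not verify_chain(log)
```

In the composed design, the enclave signs each appended entry, so the chain proves both what was done and that an attested environment did it.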
The practical workflow demonstrates the depth of integration. A healthcare researcher needs to conduct a HIPAA-compliant analysis on a dataset spread across five organizations. Each organization stores its data encrypted with an envelope key managed by a ZTDF policy server. The researcher's analysis code is packaged as a container image, hashed, and registered in the organization's trusted software registry. The ZTDF policy for each data object is updated to include a rule: release the data key to code with measurement hash X running in a TEE with Intel TDX attestation, but only during specified hours on weekdays. The researcher deploys the container to a confidential VM on Google Cloud. The container startup code generates a TDX attestation report, sends it to each organization's policy server, and requests the data keys. The policy server verifies the attestation, confirming it is genuine, fresh, and matches the trusted hash, checks the time constraint, and releases the encrypted keys. The container decrypts the data inside the VM, performs the analysis, and encrypts the results back. Each transformation is recorded in the lineage log, itself encrypted under the ZTDF policy. The result is cryptographic proof that the analysis touched only approved data, ran only approved code, executed in isolated hardware, during approved hours, across multiple administrative domains.
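The per-object rule in this workflow might be expressed as data and evaluated fail-closed. The field names below are illustrative, not a real ZTDF policy schema.

```python
# Illustrative per-object policy: trusted measurement, required TEE type,
# and a weekday/business-hours time window, evaluated fail-closed.
import hashlib
from datetime import datetime

policy = {
    "required_measurement": hashlib.sha384(b"cohort-analysis-container:v3").hexdigest(),
    "required_tee": "intel-tdx",
    "allowed_days": {0, 1, 2, 3, 4},   # Monday-Friday
    "allowed_hours": range(8, 18),     # 08:00-17:59
}

def authorize(request: dict, now: datetime) -> bool:
    """Every clause must hold; any missing attribute denies (fail closed)."""
    return (
        request.get("measurement") == policy["required_measurement"]
        and request.get("tee") == policy["required_tee"]
        and now.weekday() in policy["allowed_days"]
        and now.hour in policy["allowed_hours"]
    )

req = {"measurement": policy["required_measurement"], "tee": "intel-tdx"}
assert authorize(req, datetime(2025, 6, 4, 10, 30))       # Wednesday 10:30: allowed
assert not authorize(req, datetime(2025, 6, 7, 10, 30))   # Saturday: denied
assert not authorize({"tee": "intel-tdx"}, datetime(2025, 6, 4, 10, 30))  # no measurement
```

Each of the five organizations evaluates this policy independently against the same attestation report, which is what lets the workflow span administrative domains without a shared trust broker.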
Where the Composition Creates New Capabilities
Regulated multi-tenant analytics is the clearest use case. Healthcare organizations that conduct HIPAA-compliant cohort studies across patient data from multiple systems now face conflicting constraints: they need to compute on decrypted data to perform analysis, but cannot allow compute infrastructure or hosting providers to access plaintext. Confidential computing plus ZTDF allows the analytics code to run in a TEE, with data decrypted only by the analytics algorithm inside the enclave. ZTDF policies prevent the key from being released to any other process, and attestation prevents the key from being released if the enclave's code hash or hardware identity changes.
Cross-border financial computation under conflicting data residency rules similarly benefits. Confidential VMs on Google Cloud allow a bank to run an analysis on customer data where the data never leaves the bank's region, but the compute happens in a shared cloud infrastructure, with attestation and ZTDF policies ensuring the compute infrastructure has no access to the plaintext.
Multi-party machine learning partnerships also crystallize the need. Three financial institutions want to build a shared fraud detection model on their combined transaction datasets. None can share raw transaction records. They can each place their data in an object store encrypted with envelope encryption under a ZTDF policy. They deploy an ML training job inside a confidential computing environment with attestation. The training code requests the data keys; ZTDF policies release them only to the attested enclave. The trained model is encrypted and placed in shared storage. Each institution verifies the model's lineage: that it was built from only approved datasets, by the approved algorithm, in an attested environment, with cryptographic proof.
Federal sensitive AI workloads operating in commercial cloud environments use the same pattern. The Department of Defense and intelligence agencies can now run classification-dependent ML training and inference in environments where physical access, administrative access, and even side-channel attacks provide no decryption path.
The Practical Limitations
Attestation supply chain complexity is real. Intel and AMD sign attestation reports with keys rooted in their hardware. Organizations must maintain trust roots for multiple CPU families, update them as new generations ship, and manage the operational burden of revoking or deprecating attestation keys if vulnerabilities are discovered. A 2024 Confidential Computing Consortium assessment found that 40% of organizations attempting first TEE deployments encountered attestation key rotation issues.
The Intel TEE vulnerability disclosures of 2023 and 2024 illustrated this friction. Transient execution flaws in how certain CPU state transitions were handled led to the deprecation of specific microcode versions and a period during which some attestation signing keys were revoked. Organizations that had hardcoded trust in those keys experienced authentication failures until they updated their trust roots. Cloud providers had to roll out microcode patches across millions of instances. ZTDF policy engines had to support key rotation without dropping legitimate requests. This is not a flaw in the TEE model itself, but rather a consequence of trusting silicon manufacturer attestation keys; it requires operational discipline and advance planning.
Enclave size limits matter. Classic SGX enclaves were limited to 128 MB of protected memory (the enclave page cache), though TDX and SEV-SNP protect entire virtual machines with far larger footprints. This constraint is less of an issue for analytics and ML workloads, which typically stream data through large compute pipelines, but it is a hard boundary for applications with large in-memory caches or graph databases. Performance overhead is small but real; TEEs typically incur 5-15% latency overhead on compute-bound workloads and 10-20% on I/O-bound workloads. Debuggability is harder than in conventional compute, because stepping through code in an enclave would reveal the same protected data that the enclave is designed to guard. Bugs in production TEE code are more difficult to diagnose.
Hardware cost is the least-discussed constraint. TEE-capable instance types on major cloud providers command 20-35% cost premiums over equivalent unprotected instances. For organizations running thousands of instances, the operational budget impact is significant. However, ZTDF policies can require attestation only for high-value data or computation, allowing less sensitive workloads to run on cheaper hardware. The composition of ZTDF and TEEs creates an economic model where organizations protect only what they must at high cost, and use traditional encryption for everything else.
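The tiering decision reduces to a small routing function. The classification labels and the cost factor below are illustrative, echoing the 20-35% premium range cited above, not actual provider pricing.

```python
# Illustrative workload placement: require attestation only for
# high-value classifications; route the rest to standard instances.
def placement(classification: str) -> dict:
    """Map a data classification to an instance tier and relative cost."""
    tee_required = classification in {"pii", "phi", "secret"}
    return {
        "tee_required": tee_required,
        "instance_family": "confidential" if tee_required else "standard",
        "cost_factor": 1.3 if tee_required else 1.0,  # hypothetical ~30% TEE premium
    }

assert placement("phi")["tee_required"] is True
assert placement("telemetry")["instance_family"] == "standard"
assert placement("telemetry")["cost_factor"] == 1.0
```

Because the ZTDF policy, not the application, decides where attestation is mandatory, the premium is paid only on the objects whose policies demand it.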
How Lattix Integrates TEE Attestation into Data-Centric Zero Trust
Lattix Technologies' platform treats attestation as a first-class policy attribute. The Lattix policy decision point can integrate attestation verification directly into the ABAC evaluation loop, supporting Intel TDX, AMD SEV-SNP, AWS Nitro Enclaves, and Google Confidential VMs through unified abstraction. Organizations can express policies like "release PII keys only to verified analytics jobs running in attested Google Confidential VMs" without writing cloud-provider-specific code.
Customer patterns show accelerating demand in three domains. Healthcare analytics organizations deploy Lattix ZTDF with attestation to enable researchers to work with de-identified datasets in confidential computing environments, satisfying both data governance and regulatory audit requirements. Financial services firms use the composition to support real-time fraud detection and anti-money-laundering systems where detection code runs in enclaves and data keys are released only to verified, isolated processes. Government agencies use the pattern to implement NIST SP 800-207 zero trust architecture mandates where data protection applies not just at the network or platform layer, but cryptographically, across all three states, with hardware-backed evidence.
The Emerging Ecosystem
The convergence of confidential computing and data-centric zero trust is attracting investment and attention from all major cloud providers. Google Cloud has integrated Confidential VMs directly into GKE, allowing Kubernetes workloads to inherit attestation properties. AWS is expanding Nitro Enclave support to include container runtime attestation. Microsoft Azure has extended Confidential Computing to include attestation policy integration in Key Vault. Each implementation has different APIs, different attestation formats, and different policy expression languages, creating a fragmented operational landscape.
Lattix Technologies, along with other platforms, is building abstraction layers that allow organizations to express confidential computing policies once and deploy them across multiple cloud providers. This reduces operational complexity and allows organizations to avoid vendor lock-in at the policy layer, even if they remain locked in at the cloud provider layer.
The Confidential Computing Consortium has begun standardizing attestation formats and policy languages through its open-source projects. The Confidential Consortium Framework (CCF) provides a reference implementation for building attested services, and the Open Enclave SDK abstracts enclave programming across hardware TEEs. These standards are maturing rapidly; by late 2025, organizations can increasingly write attestation-aware code that ports across cloud providers with little modification.
Adoption Patterns and the Regulatory Gradient
Current adoption follows a clear gradient. Organizations handling the most regulated data are adopting both technologies immediately. Financial institutions managing trading data, healthcare systems managing genetic information, and government agencies managing classified intelligence are prioritizing TEE deployments. These organizations have high compliance costs, high data breach costs, and strong existing zero trust initiatives, making the incremental investment in confidential computing justified.
Organizations in lower-regulation sectors are moving more slowly. SaaS providers building analytics platforms, academic institutions sharing research datasets, and small enterprises managing customer data are watching adoption patterns but not yet deploying. For these organizations, the operational burden and cost premium of confidential computing are harder to justify, and traditional encryption with strong access controls feels sufficient.
The regulatory environment is shifting this equation. Several government agencies have announced zero trust strategy roadmaps that explicitly include cryptographic data protection in use, not just at rest and in transit. The European Union's Digital Services Act creates liability for platforms that fail to protect user data with appropriate technical controls, a category that may eventually include confidential computing. If regulators begin requiring TEE-based processing for sensitive workloads, adoption will accelerate rapidly.
References
- National Institute of Standards and Technology. (2020). Zero Trust Architecture. NIST SP 800-207. https://csrc.nist.gov/publications/detail/sp/800-207/final
- National Institute of Standards and Technology. (2022). Hardware-Enabled Security: Enabling a Layered Approach to Platform Security for Cloud and Edge Computing Use Cases. NIST IR 8320. https://csrc.nist.gov/publications/detail/ir/8320/final
- Confidential Computing Consortium. (2025). State of Confidential Computing 2025. Linux Foundation. https://confidentialcomputing.io/
- Costan, V., & Devadas, S. (2016). Intel SGX explained. Cryptology ePrint Archive, Paper 2016/086. https://eprint.iacr.org/2016/086
- McKeen, F., et al. (2013). Innovative instructions and software model for isolated execution. HASP, 13, 10. https://www.intel.com/content/dam/www/public/us/en/documents/research/hasp-innovative-instructions-paper.pdf
- AMD. (2024). SEV-SNP API Specification. AMD Security Processors. https://www.amd.com/system/files/TechDocs/56860.pdf
- National Institute of Standards and Technology. (2024). Module-Lattice-Based Key-Encapsulation Mechanism Standard. NIST FIPS 203. https://csrc.nist.gov/publications/detail/fips/203/final
- Google Cloud. (2024). Confidential Computing on Google Cloud. https://cloud.google.com/confidential-computing
- Amazon Web Services. (2024). AWS Nitro System: Hardware Architecture. https://aws.amazon.com/ec2/nitro/
- Microsoft Azure. (2024). Azure Confidential Computing: Overview and Use Cases. https://azure.microsoft.com/solutions/confidential-compute/