TDF · Data Security · Open Standards · Zero Trust

Trusted Data Format: From IC Origin to ZTDF Standard


The Trusted Data Format did not start as an enterprise security framework. It started as a specific answer to a specific problem: how to share sensitive intelligence data across organizational boundaries where conventional encryption and access control were insufficient. The intelligence community required something that could travel with the data itself: a portable container that carried both the encrypted payload and the policy that governed its use. That design choice, made under the constraints of cross-domain sharing, proved to be foundational for everything that came after.

The Intelligence Community's Cross-Domain Problem

Before the formation of the Information Sharing and Access Interagency Policy Committee, intelligence data lived in silos. The pre-9/11 IC operated under the principle that classification and organizational boundaries were proxies for access control. Data released from one domain to another followed manual processes governed by memoranda of understanding and human gatekeepers. As data flows scaled and the number of stakeholders multiplied, this model became administratively untenable and operationally risky.

TDF addressed a core technical gap: conventional encryption binds the data encryption key to the recipient's identity at the moment of encryption. If that recipient needs to pass the data forward, or if access policy must change after the data has been created, the data must be re-encrypted or access must be managed through separate infrastructure. The intelligence community needed something different: a format where the policy traveled with the data, where a key access server (KAS) could enforce access decisions independent of the originating system, and where the payload remained encrypted while policy evaluation happened outside the data object.

From Internal Tool to Virtru to OpenTDF

The Trusted Data Format originated within Virtru, a company founded to commercialize what had been an internal IC tool. Virtru's contribution was not inventing TDF; it was recognizing that the format solved a class of problems that extended far beyond intelligence sharing. Enterprise customers dealing with regulated data, multi-cloud environments, and third-party access requirements faced versions of the same challenge. By 2022, Virtru open-sourced TDF as OpenTDF, making the specification and reference implementation publicly available for vendors and integrators to build against.

The move to open-source was accompanied by ODNI sponsorship and participation in OASIS standardization conversations. This trajectory, from classified internal tool to commercial product to open standard, reflected the maturing recognition that data-centric zero trust was not a niche capability. The Office of the Director of National Intelligence understood that interoperability required vendor-agnostic specification. The NSA's Cybersecurity Information Sheet on data-centric security reinforced the same point: policy-bound encryption was becoming a baseline expectation for sensitive data environments.

What TDF Architecture Got Right

The TDF design made three structural choices that proved durable. First, envelope encryption: the data encryption key (DEK) is generated per object and encrypted under the key encryption key (KEK), which is held by the key access server. This separation means the data can be encrypted without the KAS knowing the DEK; the user contacts the KAS only when they need to decrypt. Second, manifest-based policy: the policy that governs who can access the data is embedded in the TDF object itself, alongside the encrypted payload. Third, third-party key servers: access decisions can be delegated to infrastructure outside the originating system, enabling portability across cloud environments, air-gapped networks, and multi-vendor deployments.
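The three choices above can be sketched in a few lines of Python. This is an illustration of the envelope-encryption pattern and manifest shape, not the real TDF wire format or cipher suite: the toy SHA-256 keystream cipher stands in for AES-256-GCM, the `kasUrl` endpoint and attribute names are hypothetical, and the "wrap" step is a placeholder for proper key wrapping under the KAS-held KEK.

```python
import os
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (SHA-256 counter keystream) standing in for
    AES-256-GCM; illustration only, not the real TDF cipher suite."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

def encrypt_tdf(payload: bytes, kek: bytes, policy: dict) -> dict:
    """Envelope encryption: a fresh per-object DEK encrypts the payload;
    the DEK is wrapped under the KEK; policy rides in the manifest."""
    dek = os.urandom(32)                      # per-object data encryption key
    ciphertext = keystream_xor(dek, payload)  # encrypt payload with the DEK
    wrapped_dek = keystream_xor(kek, dek)     # placeholder key wrap under KEK
    manifest = {
        "policy": policy,                     # policy travels with the object
        "wrappedKey": wrapped_dek.hex(),
        "kasUrl": "https://kas.example.org",  # hypothetical KAS endpoint
    }
    return {"manifest": manifest, "payload": ciphertext.hex()}

def kas_unwrap(manifest: dict, kek: bytes, subject_attrs: set) -> bytes:
    """KAS-side decision: release the DEK only if the subject satisfies
    the manifest policy; the encrypted payload never reaches the KAS."""
    required = set(manifest["policy"]["requiredAttributes"])
    if not required <= subject_attrs:
        raise PermissionError("policy not satisfied")
    return keystream_xor(kek, bytes.fromhex(manifest["wrappedKey"]))

# Round trip: encrypt locally, then decrypt via the KAS access decision.
kek = os.urandom(32)
policy = {"requiredAttributes": ["clearance:secret", "org:agency-a"]}
obj = encrypt_tdf(b"field report", kek, policy)
dek = kas_unwrap(obj["manifest"], kek, {"clearance:secret", "org:agency-a"})
plaintext = keystream_xor(dek, bytes.fromhex(obj["payload"]))
```

Note that the creator never talks to the KAS: encryption is entirely local, and the KAS only participates at decrypt time, which is exactly the separation the paragraph above describes.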

These choices aligned TDF with the principles that would later be codified in NIST SP 800-207 and other zero trust frameworks: assume no implicit trust in the network, verify access at the point of use, enforce least privilege at the object boundary.

Where TDF Left Unfinished Work

Despite its architectural soundness, TDF confronted practical boundaries at scale. Key management across federated KAS topologies remained operationally complex. The policy language lacked expressiveness for the attribute-based access control (ABAC) patterns that enterprise deployments required. Audit anchoring, the ability to cryptographically certify who accessed what and when, relied on out-of-band logging. Post-quantum key encapsulation was not in scope. Content-addressed object identity and Merkle-tree lineage required wrapper applications rather than native format support.

These gaps did not indicate design failure. They indicated that a format sufficient for intelligence sharing was incomplete for a data-centric zero trust architecture that needed to support compliance, AI pipelines, and automated policy federation.

ZTDF: The NSA Standardization of Zero Trust Data Format

In 2024, the NSA formally named Zero Trust Data Format (ZTDF) as an interoperability standard, codifying the next phase of TDF evolution. ZTDF retained TDF's core architecture while extending it with cryptographic enforcement mechanisms more aligned with post-quantum key encapsulation (ML-KEM-768/1024), richer policy semantics compatible with ABAC frameworks, and lineage-tracking primitives that record the chain of access and transformation as data moves through systems.

The naming of ZTDF was not an invention of a new format. It was standardization language recognizing that TDF had evolved beyond its original scope and that vendors needed explicit interoperability targets. A system built to ZTDF specifications can delegate KAS operations to third parties, enforce fail-closed policy evaluation, and integrate policy authority federation: the ability to compose access decisions across organizational boundaries.
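Fail-closed evaluation is worth making concrete, since it inverts the default of most legacy access systems. The sketch below is a minimal policy decision point in Python; the rule shapes and attribute names are hypothetical. The defining property is that every abnormal path (no applicable policy, a missing attribute, an evaluation error) resolves to deny rather than allow.

```python
from typing import Callable, Dict, List

# Hypothetical rule type: takes subject attributes, returns True to permit.
Rule = Callable[[Dict[str, str]], bool]

def evaluate_fail_closed(rules: List[Rule], attrs: Dict[str, str]) -> bool:
    """Fail-closed PDP sketch: access requires every rule to permit.
    No applicable policy, a non-matching rule, or any evaluation
    error (e.g. a missing attribute) all resolve to deny."""
    if not rules:
        return False            # no applicable policy -> deny, not allow
    for rule in rules:
        try:
            if not rule(attrs):
                return False    # explicit deny
        except Exception:
            return False        # evaluation error -> deny (fail closed)
    return True

rules: List[Rule] = [
    lambda a: a["clearance"] in {"secret", "topsecret"},
    lambda a: a["org"] == "agency-a",
]

allowed = evaluate_fail_closed(rules, {"clearance": "secret", "org": "agency-a"})
# A subject missing the clearance attribute triggers a KeyError inside the
# rule, which the PDP converts to a deny rather than letting it propagate.
denied = evaluate_fail_closed(rules, {"org": "agency-a"})
```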

The Next Decade: Federated KAS Topologies and Post-Quantum Wrapping

The next phase of TDF/ZTDF maturation centers on three convergences. First, post-quantum key encapsulation: ML-KEM-768 and ML-KEM-1024 will replace RSA-4096 in key wrapping, making ZTDF resistant to cryptanalytic advances in quantum computing. Second, federated KAS architectures: rather than centralizing key access decisions, organizations will compose KAS policies across trust boundaries using ABAC, attribute stores, and policy decision points (PDP) that enforce data-centric zero trust at scale. Third, CAS-X (content-addressed storage with extended semantics) integration: objects will carry cryptographic identity anchors that enable Merkle-tree lineage, making the origin and transformation history of data a queryable property rather than a forensic artifact.
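The lineage idea in the third point can be illustrated with a hash chain, the linear special case of a Merkle structure: each access or transformation record commits to the previous head, so rewriting any past event invalidates every later hash. This is a sketch under stated assumptions, not the ZTDF lineage encoding; the record fields (`actor`, `op`) are hypothetical.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Canonical hash of a lineage record (sorted-key JSON)."""
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()

def append_lineage(chain: list, event: dict) -> list:
    """Append an access/transformation event that commits to the
    previous head, so rewriting history changes every later hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"prev": prev, "event": event}
    record["hash"] = record_hash({"prev": prev, "event": event})
    return chain + [record]

def verify_lineage(chain: list) -> bool:
    """Recompute every link; a single altered event breaks the chain."""
    prev = "0" * 64
    for record in chain:
        expected = record_hash({"prev": record["prev"], "event": record["event"]})
        if record["prev"] != prev or record["hash"] != expected:
            return False
        prev = record["hash"]
    return True

chain: list = []
chain = append_lineage(chain, {"actor": "alice", "op": "create"})
chain = append_lineage(chain, {"actor": "bob", "op": "redact"})
```

Because verification is pure recomputation over the records, lineage becomes a queryable property of the object rather than a forensic reconstruction from external logs.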

Lattix Technologies builds against ZTDF as a baseline architecture, not as a roadmap. The cryptographic enforcement, the PEP/PDP policy separation, and the third-party KAS delegation are already embedded in the platform. This design choice reflects the recognition that data-centric zero trust is not aspirational; it is the architecture that intelligence, defense, and enterprise organizations are converging on now.

The Standard That Came From Solving One Problem

The Trusted Data Format's story is instructive precisely because it did not emerge from a committee designing an abstract security framework. It emerged from the specific constraints of intelligence data sharing, where organizational boundaries were immutable and policy changes were frequent. The design choices made in that context (envelope encryption, manifest policy, third-party key access) proved to be not limitations of a narrow use case but foundational principles of data-centric zero trust. That recognition, combined with ODNI sponsorship and NSA standardization language, has positioned TDF/ZTDF as the interoperability baseline for secure data environments for the next decade.


References

  1. Office of the Director of National Intelligence. Memorandum on the Information Sharing and Access Interagency Policy Committee. 2004.
  2. National Security Agency. Cybersecurity Information Sheet: Data-Centric Security. 2023.
  3. Virtru. OpenTDF Specification. 2022. https://opentdf.io
  4. NIST. SP 800-207: Zero Trust Architecture. 2020.
  5. National Security Agency. Zero Trust Data Format (ZTDF) Interoperability Standard. 2024.
  6. OASIS. eXtensible Access Control Markup Language (XACML) Version 3.0. 2013.
  7. NIST. Post-Quantum Cryptography Standardization. ML-KEM specification. 2024.
  8. Lattix Technologies. Data-Centric Zero Trust Architecture. 2026.
  9. Lattix Technologies. ABAC vs. RBAC in Zero Trust Deployments. 2026.
  10. Lattix Technologies. Post-Quantum Cryptography: Why the Transition Matters Now. 2026.