AI Security, Critical Infrastructure, NIST, Risk Management, Data Provenance

NIST's Critical Infrastructure AI RMF Profile Turns Trustworthy AI Into System Requirements


NIST published the concept note for an AI Risk Management Framework Profile on Trustworthy AI in Critical Infrastructure on April 7, 2026. The profile is the operational artifact between the AI RMF 1.0, released in January 2023, and the system requirements that critical infrastructure operators and their vendors need to write in solicitations. The concept note frames a profile development effort that will run across 2026 and 2027 under a Community of Interest convened by the NIST Information Technology Laboratory AI Program.

The profile matters because the AI RMF's four functions, Govern, Map, Measure, and Manage, do not translate directly into procurement language. The functions describe a lifecycle approach to AI risk that an operator can use to structure an enterprise risk program. They do not, on their own, tell a water utility what to ask for in an RFP for a leak-detection AI capability. The Critical Infrastructure Profile is the translation layer.

What the profile development effort is doing

The April 7 concept note scopes the profile against critical infrastructure sectors named in PPD-21, with explicit attention to energy, water, transportation, and healthcare. The development effort plans seminars, working sessions, requests for information, and draft releases through 2026, with stakeholder input from sector operators, technology vendors, integrators, and federal regulators. The output target is a profile that maps the AI RMF functions into sector-specific control statements that an operator can cite, an integrator can build against, and an auditor can score.

The structural pattern matches the NIST Cybersecurity Framework Profile work that produced the Manufacturing Profile, the Election Infrastructure Profile, and the Critical Infrastructure Cybersecurity Profile under NIST CSF 1.1 and 2.0. The cybersecurity framework profiles took a general-purpose framework and produced a sector-aligned scoring artifact. The AI RMF profile follows the same template, against an emerging risk surface that is moving faster than the underlying framework.

The timing is tight. Several sector regulators are already writing AI-specific cybersecurity language into mandates: the TSA Security Directive series for pipeline and rail, the CISA Cyber Incident Reporting for Critical Infrastructure Act implementation rules, and the HIPAA Security Rule update notice of proposed rulemaking. Each of these touches AI risk in places where the AI RMF functions are not specific enough to be enforceable. The profile is the artifact that closes the specificity gap.

Where data provenance and lineage map to the profile

The AI RMF Govern function carries control statements on data governance, model governance, and lifecycle accountability. The Map function carries control statements on context characterization, including data origin and downstream use. The Measure function carries control statements on data quality, bias evaluation, and outcome verification. The Manage function carries control statements on risk treatment, including data integrity and access control.

Three operational primitives appear in all four functions. Data provenance is the ability to attest to where a data object originated and which transformations it has been through. Data lineage is the ability to trace every read and write event on the object across systems. Data attribute enforcement is the ability to evaluate access decisions against the attribute set bound to the object at the moment of access.
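The three primitives can be sketched as minimal data structures. This is an illustrative model, not the Lattix API or any NIST-defined schema; all names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProvenanceRecord:
    """Attests where a data object originated and what it has been through."""
    origin: str                                   # attested source, e.g. a sensor ID
    transformations: List[str] = field(default_factory=list)

@dataclass
class LineageEvent:
    """One read or write event on the object, traceable across systems."""
    actor: str
    action: str                                   # "read" or "write"
    system: str

def attributes_satisfy(policy: Dict[str, str], presented: Dict[str, str]) -> bool:
    """Attribute enforcement: every attribute the policy binds to the object
    must match the attribute set presented at the moment of access."""
    return all(presented.get(k) == v for k, v in policy.items())
```

The point of the sketch is that all three primitives are properties of the data object itself, not of any session or application that touches it.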

Critical infrastructure AI use cases sharpen the need for these primitives. A water utility AI that recommends valve actuation must trace the sensor telemetry it consumed to attested sources, document the model version that produced the recommendation, and produce a lineage record that the regulator can review after the fact. An energy AI that adjusts grid dispatch must produce the same set of attestations, against a SCADA telemetry pipeline that traverses vendor maintenance access and regulator submissions. A healthcare AI that scores clinical decisions must do the same against PHI under HIPAA notification regimes that attach material penalties to undocumented disclosure.

Network and identity controls do not produce these attestations. The attestations live on the data, not on the session or the application.

What Lattix does against the profile

Lattix Technologies binds policy to the data object through attribute-based access control at the policy enforcement point, post-quantum key encapsulation using ML-KEM-768 and ML-KEM-1024, and Merkle-tree lineage in content-addressed storage. The architecture produces the three operational primitives the AI RMF profile will require.

Data provenance is anchored by the content-addressed storage envelope. Every artifact carries a cryptographic hash that the storage layer enforces. A consumer of the artifact can verify the hash against the lineage record before processing, and the lineage record produces an attested origin claim that cannot be silently altered after the fact.
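The hash check described above reduces to a few lines. This is a generic sketch of content-addressed verification using SHA-256, not Lattix's actual implementation; function names are illustrative.

```python
import hashlib

def content_address(artifact: bytes) -> str:
    """Derive the content address (here SHA-256) the storage layer enforces."""
    return hashlib.sha256(artifact).hexdigest()

def verify_before_processing(artifact: bytes, attested_hash: str) -> bool:
    """A consumer recomputes the hash and compares it against the attested
    origin claim in the lineage record before processing the artifact."""
    return content_address(artifact) == attested_hash
```

Any silent alteration of the artifact changes its hash, so the verification fails and the consumer refuses to process it.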

Data lineage is recorded by the Merkle-tree audit structure. Every read and write event on the artifact is logged with the policy that was evaluated, the attribute set that was presented, and the decision that was returned. The lineage is tamper-evident, and the lineage is independent of any application that mediates access.
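The tamper-evidence property of a Merkle-tree log can be demonstrated in miniature. This is a textbook Merkle root computation over hashed log entries, a sketch of the general technique rather than Lattix's audit structure.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(entries: list) -> bytes:
    """Fold hashed log entries up to a single root. Changing any entry
    changes the root, which makes the log tamper-evident."""
    level = [_h(e) for e in entries]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

An auditor who holds yesterday's root can detect any after-the-fact edit to a logged read or write event by recomputing the root.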

Data attribute enforcement is the core of the ABAC policy enforcement point. Every access decision evaluates the attribute set against the policy bound to the object. A critical infrastructure AI that requests sensor telemetry receives the telemetry only if the AI's attribute claim, including its version, its training-data attestation, and its operational context, satisfies the policy. A model that fails the attribute check does not consume the data.
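A minimal version of that enforcement point, using the attribute names from the telemetry example above, might look like the following. The policy keys and values are hypothetical; this sketches the decision logic, not a real policy language.

```python
def pep_decide(policy: dict, claim: dict) -> str:
    """Policy enforcement point: permit access only if every attribute the
    policy binds to the object is satisfied by the requester's claim."""
    for attribute, required in policy.items():
        if claim.get(attribute) != required:
            return "DENY"
    return "PERMIT"

# Illustrative policy bound to a sensor-telemetry object.
telemetry_policy = {
    "model_version": "2.4.1",
    "training_data_attested": "true",
    "operational_context": "grid-dispatch",
}
```

A model presenting a stale version or an unattested training set fails the check and never consumes the telemetry; the deny decision itself lands in the lineage record.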

This is the architecture the profile development effort will describe in sector-specific control statements over the next twelve to eighteen months. The architecture is procurable today.

Where this goes next

NIST's stakeholder engagement calendar will run through 2026 with seminars, working sessions, requests for information, and draft releases. Critical infrastructure operators with active AI programs should be participating in the Community of Interest, because the profile that emerges will become the procurement language for the FY27 and FY28 acquisition cycles. Vendors with a data-centric architecture should be filing position papers, because the profile that emerges will name the control primitives that procurement will demand.

The AI RMF on its own is a risk management framework. The Critical Infrastructure Profile turns the framework into system requirements. The system requirements turn into solicitations. The solicitations turn into deployed architecture. Each step is a translation, and the data-centric architecture survives every translation by design.
