Connectors
Integrations with the cloud storage systems and applications where your data already lives.
Most enterprise data already lives in an established system. Lattix Connectors integrate with those systems so that classification and protection can be applied without moving data into a new silo.
A connector's job is to bridge the boundary between an external system and the Lattix fabric. It discovers data in the external system, applies the tenant's classification to inbound data, and participates in the access flow so that outbound data is wrapped before it crosses back.
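The three boundary duties above can be pictured as a minimal interface. This is an illustrative sketch only: Lattix does not publish a connector SDK in this document, so every class, method, and field name below is a hypothetical stand-in, and the classification and wrapping logic is a trivial placeholder.

```python
from dataclasses import dataclass, field


@dataclass
class ExternalObject:
    """An object discovered in the external system (hypothetical shape)."""
    source_id: str
    metadata: dict = field(default_factory=dict)


class Connector:
    """Hypothetical sketch of the boundary role: discover data in the
    external system, classify inbound objects, wrap outbound data."""

    def __init__(self, source: dict):
        # `source` stands in for the external system: source_id -> metadata.
        self.source = source

    def discover(self) -> list:
        """Enumerate the data the connector is authorized to see."""
        return [ExternalObject(sid, meta) for sid, meta in self.source.items()]

    def classify(self, obj: ExternalObject) -> dict:
        """Placeholder for the tenant's classification pipeline."""
        tag = "confidential" if obj.metadata.get("sensitive") else "internal"
        return {"source_id": obj.source_id, "classification": tag}

    def wrap(self, obj: ExternalObject, tags: dict) -> dict:
        """Placeholder for producing a protected (ZTDF-style) envelope
        before data crosses back out of the fabric."""
        return {"payload_ref": obj.source_id, "tags": tags, "wrapped": True}
```

A real connector would replace the placeholder bodies with calls into the source system's API and the tenant's actual classification pipeline; the point of the sketch is only the shape of the boundary.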
Currently available
- Microsoft OneDrive
- Microsoft SharePoint
- Google Drive
- Dropbox
- Box
Each connector runs as a tenant-scoped integration configured in the Mesh Dashboard. Connecting an integration requires OAuth authorization to the source system: the tenant administrator grants scoped, revocable access at configuration time.
Additional connectors are added based on customer demand. If you need a specific integration, discuss it with your account team.
What a connector does
Discover. The connector enumerates the data it has been authorized to see — documents, folders, file metadata — and registers the objects it finds.
Classify. Incoming objects are passed through the tenant's classification pipeline (see Concepts → Classification and Tagging). The result is a ZTDF envelope for each object, with classification tags applied according to the tenant's schema.
Enforce. Outbound requests from the external system (a share, a download, a sync to a second location) are handled through the fabric. If the downstream request requires an unwrap, that unwrap follows the normal policy flow.
Audit. Every connector operation produces ledger events — an object was classified, a sync job completed, a policy denial occurred.
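The four verbs above can be strung together as a single sync pass. A minimal sketch, under assumptions: the source is an in-memory dict, the ledger is just a list of event dicts, and a policy check is folded into the pass for illustration. None of these names come from a published Lattix API, and in practice enforcement happens on outbound requests rather than inside the sync loop.

```python
def run_sync(source: dict, deny: set) -> list:
    """Hypothetical sync pass: discover, classify, enforce, audit.

    `source` maps object IDs to metadata; `deny` holds IDs whose outbound
    requests policy would reject. Returns the ledger events produced.
    """
    ledger = []
    for obj_id, meta in source.items():                        # Discover
        tag = "confidential" if meta.get("sensitive") else "internal"
        ledger.append(                                         # Classify
            {"event": "classified", "object": obj_id, "tag": tag}
        )
        if obj_id in deny:                                     # Enforce
            ledger.append({"event": "policy_denied", "object": obj_id})
    ledger.append(                                             # Audit
        {"event": "sync_completed", "objects": len(source)}
    )
    return ledger
```

Note that every branch, including the denial, appends a ledger event; the audit trail records what was attempted, not only what succeeded.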
What connectors do not do
Connectors are not data migration tools. They do not wholesale copy external data into a Lattix-controlled store. The external system remains the system of record; Lattix provides the policy, classification, and audit overlay.
A connector is also not a bypass. If a user accesses the external system directly (not through Lattix), the connector cannot enforce policy on that out-of-band access. This is why tenants that want end-to-end protection typically combine connectors with organization-wide enforcement — identity provider integration that routes application access through the fabric, or Mesh Node sidecars at the application layer.
Configuration scope per connector
Each connector exposes a similar set of per-integration settings in the dashboard:
- Authorization. OAuth credentials for the source system, with the specific scopes the connector requires. Revocation at the source system deauthorizes the connector immediately.
- Discovery scope. Which subset of the source system the connector sees — specific sites, drives, folders, or the full tenant.
- Classification behavior. Whether to apply automatic classification, require author confirmation, or use an inherited default based on the source.
- Sync cadence. How often the connector reconciles its view with the source system. Real-time where supported by the source, periodic otherwise.
- Egress policy. Whether data handled by this connector can leave the original system, and under what conditions.
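Taken together, the settings above can be pictured as a single per-integration record. The field names and the enum values below are illustrative guesses, not the dashboard's actual schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class ClassificationMode(Enum):
    """Hypothetical names for the three classification behaviors."""
    AUTOMATIC = "automatic"            # apply classification automatically
    AUTHOR_CONFIRM = "author_confirm"  # require author confirmation
    INHERITED = "inherited"            # inherited default based on the source


@dataclass
class ConnectorConfig:
    """Hypothetical shape of the per-integration settings."""
    oauth_scopes: list                 # Authorization: scoped, revocable access
    discovery_roots: list              # Discovery scope: sites/drives/folders
    classification: ClassificationMode # Classification behavior
    sync_interval_seconds: Optional[int]  # Sync cadence: None = real-time
    egress_allowed: bool               # Egress policy: may data leave?


cfg = ConnectorConfig(
    oauth_scopes=["files.read"],
    discovery_roots=["/Shared Documents"],
    classification=ClassificationMode.AUTOMATIC,
    sync_interval_seconds=None,
    egress_allowed=False,
)
```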
Configuration details are covered under Configuration → Connectors.
Relationship to concepts
- Connectors apply the tenant's classification schema to incoming objects.
- Downstream protection uses the full Zero Trust Fabric — policies, keys, and ledger records.
- Objects handled by connectors are identified by their CID for lineage tracking across the original system and any Lattix-mediated derivatives.
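The CID-based lineage in the last bullet can be illustrated with content-derived identifiers. A sketch under assumptions: CIDs here are plain SHA-256 digests and lineage is a child-to-parent map, which may differ from Lattix's actual CID scheme and lineage model.

```python
import hashlib


def cid(content: bytes) -> str:
    """Content-derived identifier (assumed here to be a SHA-256 digest)."""
    return hashlib.sha256(content).hexdigest()


# Lineage as a child-CID -> parent-CID map (illustrative structure).
lineage = {}

original = b"quarterly report v1"
derived = b"quarterly report v1 -- redacted excerpt"

parent = cid(original)
child = cid(derived)
lineage[child] = parent  # record that the derivative came from the original


def ancestors(c: str) -> list:
    """Walk the lineage chain from a derivative back to the original."""
    chain = []
    while c in lineage:
        c = lineage[c]
        chain.append(c)
    return chain
```

Because the identifier is derived from content, the same bytes always map to the same CID regardless of which system holds them, which is what lets lineage span the original system and any Lattix-mediated derivatives.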