Connector Types
Three connector types for ingesting data from external systems into the world model, plus the unification engine that maps their output.
The connector runner supports three types of data connectors: REST, file drop, and webhook. Each produces raw records that pass through the unification engine before entering the world model as events.
REST Connector
Polls HTTP endpoints on a schedule. Supports four pagination strategies, configurable authentication, and circuit breaker protection. This is the connector type used for most EHR integrations.
The REST connector handles the operational reality of healthcare APIs: rate limits, business-hour restrictions, inconsistent response formats, and transient failures. It includes content-hash deduplication so repeated polls of unchanged data do not create duplicate events.
For details on polling behavior, adapter-specific logic, and the seven background loops, see Connector Runner.
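The content-hash deduplication described above can be sketched as follows. This is an illustrative implementation, not the connector runner's actual code: it canonicalizes each record (sorted keys, compact separators) so that key order does not change the digest, then drops any record whose hash was seen in a prior poll cycle.

```python
import hashlib
import json

def content_hash(record: dict) -> str:
    """Stable hash of a record: serialize with sorted keys so key
    order does not affect the digest."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def dedupe_poll(records: list[dict], seen: set[str]) -> list[dict]:
    """Return only records not seen in a previous poll cycle,
    recording their hashes for future cycles."""
    fresh = []
    for record in records:
        digest = content_hash(record)
        if digest not in seen:
            seen.add(digest)
            fresh.append(record)
    return fresh
```

With this scheme, polling the same endpoint twice with unchanged data yields an empty second batch, so no duplicate events are emitted.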
File Drop Connector
Watches an S3 bucket for new files. Parses CSV, NDJSON, FHIR Bundles, and raw JSON. Useful for batch data imports where a partner drops a file and the connector picks it up on the next poll cycle.
File drop connectors are common for bulk data loads - initial patient roster imports, historical appointment data, or periodic data exports from systems that do not expose an API.
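A minimal sketch of the file-type dispatch a file drop connector performs, assuming dispatch by file extension (the real connector's detection logic may differ, and this simplified version unwraps FHIR Bundles only when they arrive as `.json`):

```python
import csv
import io
import json

def parse_dropped_file(key: str, body: bytes) -> list[dict]:
    """Parse a dropped file into a list of raw records, dispatching
    on the object key's extension. Illustrative helper only."""
    text = body.decode("utf-8")
    if key.endswith(".csv"):
        return list(csv.DictReader(io.StringIO(text)))
    if key.endswith(".ndjson"):
        return [json.loads(line) for line in text.splitlines() if line.strip()]
    if key.endswith(".json"):
        payload = json.loads(text)
        # A FHIR Bundle wraps its records in an "entry" array; unwrap it.
        if isinstance(payload, dict) and payload.get("resourceType") == "Bundle":
            return [e["resource"] for e in payload.get("entry", [])]
        return payload if isinstance(payload, list) else [payload]
    raise ValueError(f"unsupported file type: {key}")
```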
Webhook Connector
Receives inbound HTTP webhooks from external systems. Instead of polling, the external system pushes data to a registered endpoint. Events are deduplicated by content hash (same mechanism as the REST connector) to handle retries from the sender.
Webhook connectors are used when the external system supports push-based notifications - for example, an EHR that fires a webhook when an appointment is created or updated.
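The retry-handling behavior can be sketched as a framework-agnostic handler. This is a hypothetical function, not the actual endpoint code: a duplicate delivery is acknowledged with a success status (so the sender stops retrying) but is not reprocessed.

```python
import hashlib
import json

def handle_webhook(body: bytes, seen: set[str]) -> tuple[int, bool]:
    """Accept an inbound webhook delivery; return (status_code, processed).
    Sender retries carry identical bodies, so a content hash identifies
    duplicates and lets us acknowledge them without reprocessing."""
    digest = hashlib.sha256(body).hexdigest()
    if digest in seen:
        return 200, False  # acknowledge the retry, skip reprocessing
    try:
        record = json.loads(body)
    except json.JSONDecodeError:
        return 400, False  # malformed payload; reject
    seen.add(digest)
    # ...hand `record` to the unification engine here...
    return 200, True
```

Returning 200 for duplicates matters: responding with an error would cause a well-behaved sender to keep retrying indefinitely.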
Unification Engine
The unification engine is not a connector itself. It is the transformation layer that all connectors feed into. Raw records from any connector type are mapped to world model events using configurable rules.
The rules use dot-path field extraction (similar to JSONPath) to pull values from arbitrarily nested source data into the flat event schema.
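Dot-path extraction of the kind described can be sketched as below. This is an illustrative resolver, not the engine's actual implementation: numeric segments index into lists, and any missing step short-circuits to `None` rather than raising.

```python
def extract(record: dict, path: str):
    """Resolve a dot path like 'patient.name.0.family' against nested
    source data. Numeric segments index into lists; returns None when
    any step along the path is missing."""
    current = record
    for segment in path.split("."):
        if isinstance(current, list) and segment.isdigit():
            index = int(segment)
            current = current[index] if index < len(current) else None
        elif isinstance(current, dict):
            current = current.get(segment)
        else:
            return None
        if current is None:
            return None
    return current
```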
This architecture decouples the transport layer (how data arrives) from the transformation layer (how data is mapped). Adding a new data source requires connector configuration and mapping rules - no custom integration code.
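The transport/transformation split might look like the following configuration sketch. All names, URLs, and field paths here are hypothetical, chosen only to show the shape: one block describes how data arrives, the other how raw fields map to event fields.

```python
# Hypothetical configuration for onboarding a new data source.
# No custom code: the transport block configures the connector,
# the mapping_rules block configures the unification engine.
connector_config = {
    "transport": {
        "type": "rest",                              # or "file_drop", "webhook"
        "base_url": "https://ehr.example.com/api/v2",
        "poll_interval_seconds": 300,
    },
    "mapping_rules": {
        # event field  <-  dot path into the raw record
        "patient_id":  "patient.reference",
        "event_type":  "resourceType",
        "occurred_at": "period.start",
    },
}
```

Swapping `"rest"` for `"file_drop"` changes only the transport block; the mapping rules, and everything downstream of them, stay the same.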
For details on how data flows through the connector runner after ingestion - including entity resolution, confidence gating, and outbound write-back - see Connector Runner.