# Glossary

Key terms used throughout the Amigo platform documentation. Definitions are kept short and practical. Terms are listed in alphabetical order.

## A

**Acceptance Region**: The set of outcomes that count as successful for a given use case. Defined across multiple dimensions (accuracy, safety, empathy, latency, cost). An outcome must satisfy all dimensions simultaneously to fall within the acceptance region.
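
The all-dimensions requirement can be sketched as a single conjunctive check. This is a minimal illustration; the dimension names and threshold values below are invented for the example, not actual platform settings:

```python
# Illustrative thresholds only - not real platform configuration.
THRESHOLDS = {
    "accuracy": 0.95,
    "safety": 1.0,
    "empathy": 0.80,
    "latency_s": 2.0,   # upper bound, seconds
    "cost_usd": 0.10,   # upper bound, dollars
}
MAXIMIZE = {"accuracy", "safety", "empathy"}  # higher is better for these

def in_acceptance_region(outcome: dict) -> bool:
    """An outcome passes only if every dimension passes; one failure rejects it."""
    for dim, threshold in THRESHOLDS.items():
        value = outcome[dim]
        passed = value >= threshold if dim in MAXIMIZE else value <= threshold
        if not passed:
            return False
    return True
```

Note there is no averaging: an outcome that is excellent on four dimensions but misses the fifth falls outside the region.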

**Action** (API: `tool`): A program or integration that an agent can execute during a conversation. Actions connect the agent to external systems such as scheduling APIs, EHR lookups, or notification services. Called "tool" in the API.

**Agent**: A conversational AI system configured to handle a specific set of tasks. Agents have a defined persona, context graph, dynamic behaviors, and evaluation criteria.

**Agent Engine**: The modality-independent reasoning core of the platform. Processes typed signals (utterances, emotion, tool results, silence, barge-in, external events) and emits effects (respond, execute tool, filler, transition, terminate, observe, pause). Voice, text, and simulation are modality adapters that convert channel-specific I/O into signals and execute effects. See [Reasoning Engine](https://docs.amigo.ai/agent/reasoning-engine).
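
The signal-in, effect-out contract can be sketched with two enums and an adapter dispatch. The rendered strings ("TTS: …", "SMS: …") are stand-ins for real channel output, not platform APIs:

```python
from enum import Enum

class Signal(Enum):
    UTTERANCE = "utterance"
    EMOTION = "emotion"
    TOOL_RESULT = "tool_result"
    SILENCE = "silence"
    BARGE_IN = "barge_in"
    EXTERNAL_EVENT = "external_event"

class Effect(Enum):
    RESPOND = "respond"
    EXECUTE_TOOL = "execute_tool"
    FILLER = "filler"
    TRANSITION = "transition"
    TERMINATE = "terminate"
    OBSERVE = "observe"
    PAUSE = "pause"

def execute_effect(effect: Effect, modality: str, payload: str) -> str:
    """Each modality adapter renders the same effect in its own channel format."""
    if effect is Effect.RESPOND:
        return {
            "voice": f"TTS: {payload}",        # synthesized audio
            "text": f"SMS: {payload}",         # outbound message
            "simulation": f"TRACE: {payload}", # trace entry
        }[modality]
    return f"{modality}:{effect.value}"
```

The point of the split is that the engine never knows which channel it is on; only the adapter does.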

**Agent Forge**: The CLI tool for managing agent configurations. Used to create, update, version, and promote services and version sets across environments. See the [Agent Forge](https://docs.amigo.ai/reference/agent-forge) reference page.

**Audio Embedding**: A dense vector representation of an audio segment that captures paralinguistic features (tone, urgency, hesitation) without transcription. Enables semantic search over how something was said, not just what was said.

**Audio Intelligence**: A real-time verification layer that cross-checks STT output using a separate LLM during live calls. Catches misrecognized medical terms and proper nouns before they reach the agent's reasoning pipeline. Distinct from post-call re-transcription, which runs after the call ends.

**Auto-Enrichment**: Background process that generates vector embeddings for world model events after they are written. Runs asynchronously with zero impact on write latency. Enables semantic search over clinical data.

**Automated Translation**: The process of translating world model entity state into EHR-specific formats during outbound write-back. Falls back to deterministic mapping when the primary translation service is unavailable.

**Autonomous Record Creation**: The connector runner's ability to autonomously create patient records in EHR systems when a new patient is identified during a voice call. Runs within the platform's HIPAA-compliant environment.

## B

**Backfill**: The process of replaying historical conversation data through an updated configuration to regenerate metrics and verify that changes produce the expected improvements.

**Barge-In**: The ability for a caller to interrupt the agent while it is speaking. The voice pipeline detects incoming speech during TTS playback and stops the agent's audio so the caller can be heard immediately.

**Burst-to-Experience Mapping**: The translation layer between vocal burst classifications (sighs, laughs, groans) and TTS emotion parameters. Maps 25 vocal burst types to voice delivery adjustments so the agent's tone responds to non-speech sounds.

## C

**Call-Phase Adaptation**: Automatic adjustment of agent behavior based on call duration and emotional trajectory. Prevents calls from dragging on when the caller is clearly unhappy.

**Catalog Discovery**: The automatic process of finding functions in the compute catalog and making them available as agent tools. Functions with descriptions are discovered at session initialization and merged with built-in defaults and workspace-registered functions. See [Platform Functions](https://docs.amigo.ai/agent/platform-functions).

**Clinical Copilot**: A product line for real-time AI-powered clinical documentation and decision support during in-person provider-patient encounters. Produces SOAP notes, ICD-10 code suggestions, and clinical alerts from the encounter conversation. Uses the same reasoning engine and world model as phone and text channels. See [Clinical Copilot](https://docs.amigo.ai/channels/clinical-copilot).

**Clinical Tools**: The built-in tools available to the agent during live interactions (voice or text). Cover patient lookup/create/update, appointment scheduling/cancellation/confirmation, insurance creation, outbound call scheduling, and semantic search. All write operations fire outbound sync events through the connector runner.

**Compound Emotion**: A nuanced emotional state derived from fusing multiple signal layers (acoustic co-activation, temporal trajectory, behavioral signals, conversation context, and linguistic content) at each caller turn. Examples include Resignation, Process Frustration, Cold Hostility, Masked Distress, and Sarcasm. See [Emotion Detection](https://docs.amigo.ai/channels/voice/emotion-detection).

**Conference-First Architecture**: The call transfer model where the agent, caller, and operator are bridged into a three-way conference before the agent drops off. This ensures the operator hears the conversation context and the caller experiences a smooth handoff rather than a cold transfer.

**Confidence Score**: A numeric value used in two contexts. (1) **Data confidence** ranks information by source reliability: 1.0 (authoritative system integration), 0.7 (browser scrape), 0.5 (voice/conversation extraction), 0.3 (agent inference), 0.0 (rejected/contradicted). Higher-confidence sources overwrite lower ones in the world model. (2) **Agent confidence** measures how certain the agent is about its current response or decision. When agent confidence drops below a configured threshold, the agent escalates to a human operator.
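
The data-confidence tiers in the Confidence Score entry imply a simple overwrite rule. A minimal sketch - the source labels are shorthand, and the `>=` comparison (newer value wins on ties) is an assumption:

```python
# Tier values from the Confidence Score definition; labels are shorthand.
SOURCE_CONFIDENCE = {
    "system_integration": 1.0,
    "browser_scrape": 0.7,
    "voice_extraction": 0.5,
    "agent_inference": 0.3,
}

def apply_update(field: dict, new_value, source: str) -> dict:
    """Overwrite only when the new source is at least as reliable as the stored one."""
    new_conf = SOURCE_CONFIDENCE[source]
    if new_conf >= field.get("confidence", 0.0):
        return {"value": new_value, "confidence": new_conf}
    return field  # lower-confidence source cannot overwrite
```

So a phone number extracted from a call (0.5) is replaced by an EHR sync (1.0), but never by an agent inference (0.3).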

**Connector Runner**: The bidirectional integration layer that loads external data from EHR platforms, FHIR stores, and CRMs into the platform and syncs verified data back through a handler registry that routes writes by connector type.

**Context Fusion**: The emotion engine's ability to interpret acoustic and language signals in light of conversation state. The same utterance is interpreted differently depending on whether the conversation is stuck (amplifies frustration signals), just resolved (amplifies positive signals), or involves sensitive topics (heightens empathy sensitivity). See [Emotion Detection](https://docs.amigo.ai/channels/voice/emotion-detection).

**Context Graph** (API: `service_hierarchical_state_machine`): The structured map of a problem space that guides agent behavior. Context graphs define states, transitions, decision points, and safety boundaries for a specific workflow. Called "service hierarchical state machine" in the API.

## D

**Data-MCP** *(deprecated)*: A Model Context Protocol server that exposes workspace data through SQL query tools. Being replaced by [platform functions](https://docs.amigo.ai/agent/platform-functions), which provide the same query capabilities with tighter agent integration.

**Dominance**: An emotional signal representing perceived control in a conversation. Predicted directly from the audio waveform by the dimensional prosody model. High dominance indicates assertiveness; low dominance indicates deference or helplessness. Low dominance combined with negative valence is a strong signal for empathy tier escalation. See [Emotion Detection](https://docs.amigo.ai/channels/voice/emotion-detection).

**Drift**: A gradual change in agent performance over time. Can be caused by changes in the input distribution (new types of conversations), shifts in agent behavior, or evolving requirements. See [Drift Detection](https://docs.amigo.ai/testing/testing/drift-detection).

**Dynamic Behavior** (API: `dynamic_behavior_set`): A workspace-level behavioral rule that defines how the agent adapts based on conversational context, user signals, or emotional state. Each behavior specifies trigger conditions, actions (turn policy overrides, instruction injection, tool exposure changes), and priority. Behavioral control during conversations is applied through the context graph's per-state turn policy configuration. Called "dynamic behavior set" in the API.

## E

**Effect**: An output primitive from the reasoning engine representing something the agent wants to happen. Effect types: respond (say something), execute tool (run a tool), filler (play/send filler content), transition (state change), terminate (end conversation), observe (emit analytics event), pause (deliberate silence). The modality adapter decides how to execute each effect - a respond effect becomes TTS audio on voice, an SMS message on text, or a trace entry in simulation.

**Encounter Entity**: A world model entity type representing a clinical documentation session. Contains structured SOAP notes, ICD-10 codes, clinical alerts, and encounter metadata (provider, patient, duration, type). Created by the clinical copilot and finalized through a multi-stage review workflow before syncing to the EHR. See [Clinical Copilot](https://docs.amigo.ai/channels/clinical-copilot).

**Entity Resolution**: The process of matching records across multiple data sources to a single patient or user identity. Used by the connector runner to unify data from different systems. Includes cross-source merge detection that links entities from different data sources when they match on deterministic identifiers (phone number, email, NPI, or name + date of birth).
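
Deterministic-identifier matching can be sketched as key extraction plus grouping. The priority order among identifiers is an assumption for illustration:

```python
def match_key(record: dict):
    """Return the first deterministic identifier available (assumed priority order)."""
    if record.get("phone"):
        return ("phone", record["phone"])
    if record.get("email"):
        return ("email", record["email"].lower())
    if record.get("npi"):
        return ("npi", record["npi"])
    if record.get("name") and record.get("dob"):
        return ("name_dob", (record["name"].lower(), record["dob"]))
    return None

def merge_candidates(records):
    """Group records from different sources that share a deterministic key."""
    groups = {}
    for r in records:
        key = match_key(r)
        if key is not None:
            groups.setdefault(key, []).append(r)
    return [g for g in groups.values() if len(g) > 1]
```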

**Escalation**: The process of handing a conversation from the agent to a human operator. Escalation is triggered by low confidence, safety boundary conditions, or explicit patient requests.

## F

**FHIR Store Connector**: A connector type that integrates directly with FHIR R4 stores. Polls configurable FHIR resource types with per-resource cadences and supports outbound write-back with optimistic locking.

**File Drop Connector**: A connector type that ingests data from files (CSV, NDJSON, FHIR Bundle, JSON) deposited in cloud storage. Used for batch data imports from partners or external systems.

**Filler Speech**: Short spoken phrases (such as "Let me check on that" or "One moment") that the agent produces while waiting for a backend operation to complete. Filler speech prevents dead air during tool calls and keeps the caller engaged.

## I

**Interaction Insight**: A detailed view of the agent's reasoning for a specific interaction, including which memories were active, what reflections were generated, which state transitions occurred, and which tools were considered. Used for auditing and debugging agent behavior.

## K

**Keyterm Boosting**: A speech-to-text configuration that increases recognition accuracy for domain-specific vocabulary. Medical terms, drug names, provider names, and other specialized words are added to a boost list so the STT engine favors them over phonetically similar common words.

## L

**LLM-Evaluated Metric**: A metric defined in natural language that the platform scores using an evaluation judge rather than deterministic rules. The judge assesses the conversation transcript against the metric's criteria with dedicated compute resources. Used for dimensions that require judgment (clinical accuracy, empathy quality, protocol adherence). See [Metrics](https://docs.amigo.ai/testing/testing/metrics).

## M

**Metric**: A configured evaluation criterion that measures a specific dimension of agent performance. Two flavors: (1) **Evaluation metrics** score individual conversations against rubrics post-session. See [Metrics and Quality](https://docs.amigo.ai/testing/testing/metrics). (2) **Operational metrics** are workspace-level aggregates computed by the config-driven metric pipeline across all channels. See [Metric Store](https://docs.amigo.ai/intelligence-and-analytics/metric-store).

**Metric Store**: The Platform API's unified analytics infrastructure. Computes, stores, and serves workspace-level metrics across voice calls, text sessions, surface submissions, and data quality events. Ships with 21 built-in metrics and supports unlimited custom metrics with six extraction modes (static, AI classification, AI extraction, AI sentiment, custom AI query, ratio). Per-metric latency tiers and freshness SLAs. See [Metric Store](https://docs.amigo.ai/intelligence-and-analytics/metric-store).

**Modality Adapter**: A channel-specific I/O handler that converts raw input (audio, SMS text, API payload) into typed signals for the reasoning engine and executes the engine's output effects in the appropriate format. The platform includes voice (phone), text (SMS), and simulation adapters. Each adapter handles channel-specific concerns (STT, TTS, filler audio for voice; message delivery for text) while delegating all reasoning to the engine.

**Monitor Concept**: A tracked signal or condition that the platform watches for across conversations. Monitors can trigger alerts, escalations, or dynamic behavior changes when specific patterns are detected.

## O

**Operator**: A human staff member who receives escalated conversations from the agent. Operators get a handoff summary with conversation context and patient data so they can continue without starting over.

**Outbound Task**: An entity type in the world model that represents a scheduled outbound call. Created by scheduling rules, follow-up workflows, or manual triggers. The connector runner's outbound dispatch loop reads these entities and initiates calls when they are due.

## P

**Persona (Agent)**: A managed identity resource defining who the agent is - name, role, background, and communication style - independently of how it communicates (voice configuration, channel type). Services reference a persona by ID, enabling the same identity to be reused across multiple services and channels. See [Agent Core](https://docs.amigo.ai/agent/agents).

**Persona (Simulation)**: A synthetic user profile used in simulations. Defines the characteristics, behaviors, and communication style of a test user. See [Simulations](https://docs.amigo.ai/testing/testing/simulations).

**Platform Function**: A declarative tool primitive - SQL, Python, or AI function - that agents call during conversations to query world model data and analytics. Three categories: named functions (pre-built queries), open query (agent writes SQL at runtime), and open write (agent records new observations as world events). Managed through workspace-scoped API endpoints and Agent Forge CLI. See [Platform Functions](https://docs.amigo.ai/agent/platform-functions).

**Pre-Emptive Tone Adjustment**: The voice agent's ability to detect sensitive topics from context graph content and adjust delivery before the caller reacts. Prevents blunt delivery of difficult information.

**Projection Function**: The function that computes an entity's current state from its events. Different entity types have different projection functions (patient, outbound task, generic). Runs atomically with the event write.
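
A projection is a pure fold over the entity's event history. A minimal sketch assuming hypothetical `field_set`/`field_cleared` event shapes (the real event schema is not shown here):

```python
def project_patient(events: list[dict]) -> dict:
    """Compute current entity state by replaying events in write order."""
    state = {}
    for event in events:
        if event["type"] == "field_set":
            state[event["field"]] = event["value"]
        elif event["type"] == "field_cleared":
            state.pop(event["field"], None)
    return state
```

Because the function is deterministic over the event log, re-running it always reproduces the same state - which is what makes running it atomically with the write safe.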

## Q

**Quality Score**: A 1-5 rating produced by post-call analysis across five dimensions: task completion, information accuracy, conversation flow, error recovery, and caller experience.

## R

**Reasoning Engine**: The modality-independent pipeline at the core of the agent engine. Processes signals through three stages: perceive (classify input), reason (navigate context graph, engage LLM, execute tools), and act (emit effects). Supports two modes: streaming (voice, for real-time audio delivery) and batch (text, simulation, API). See [Reasoning Engine](https://docs.amigo.ai/agent/reasoning-engine).

**Regulation Template**: A configurable compliance framework that encodes regulatory requirements (HIPAA, state-specific rules, organizational policies) as constraints the agent must follow.

**Review Queue**: The human review interface where operators examine clinical events flagged by the automated pipeline. Supports approve, reject, and correct actions with confidence elevation.

**Risk Score**: A composite per-turn assessment of interaction health. Combines emotion signals, loop detection, and duration into a score that maps to four levels: normal, monitor, alert, and escalate.
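
One way such a composite could work, as a sketch - the weights, input scales, and level cut-offs below are all assumptions, not documented platform values:

```python
def risk_level(emotion: float, loops: int, duration_min: float) -> str:
    """Map a weighted composite to one of four levels.
    emotion: 0.0 (calm) to 1.0 (highly negative); loops: repeated-turn count;
    duration_min: call length in minutes. All weights are illustrative."""
    score = (
        0.5 * emotion
        + 0.3 * min(loops / 3, 1.0)
        + 0.2 * min(duration_min / 20, 1.0)
    )
    if score < 0.25:
        return "normal"
    if score < 0.5:
        return "monitor"
    if score < 0.75:
        return "alert"
    return "escalate"
```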

## S

**Safety Triage**: Per-turn regulatory safety evaluation that runs against pre-built templates (Joint Commission NPSG 15, VAWA, FDA MedWatch). Returns concern levels 0-3 independent of monitor concept detection.

**Scenario (Simulation)**: A defined situation used in simulations. Describes the context, events, and goals for a test interaction. See [Simulations](https://docs.amigo.ai/testing/testing/simulations).

**Service Voice Configuration**: Per-service voice tuning that controls filler behavior (style, vocabulary, timing), barge-in sensitivity, response length limits, end-of-turn detection, TTS settings, and call forwarding. Different services within the same workspace can have different voice characteristics. Managed through the Platform API and Agent Forge CLI.

**Signal**: An input primitive to the reasoning engine representing something that happened in the conversation. Signal types: utterance (user said something), emotion (emotional state update), tool result (tool execution completed), silence (user inactive beyond threshold), barge-in (user interrupted agent), external event (operator guidance, surface submission), system (timeout, error, connection state). All modality adapters normalize input into these signal types before passing to the engine.

**Silence Monitor**: The voice adapter component that detects and manages caller inactivity. Uses exponential backoff check-ins (10s, 20s, 40s) before ending the call with an operator transfer offer.
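
The backoff schedule can be sketched as follows; treating the 10s/20s/40s delays as cumulative boundaries (check-ins at 10s, 30s, and 70s of total silence) is an assumption:

```python
CHECK_IN_DELAYS = [10, 20, 40]  # seconds of additional silence per round

def next_silence_action(elapsed: float) -> str:
    """Map cumulative caller silence to the pending action."""
    total = 0
    for i, delay in enumerate(CHECK_IN_DELAYS):
        total += delay
        if elapsed < total:
            return f"check_in_{i + 1}"
    # All check-ins exhausted: offer an operator transfer, then end the call.
    return "offer_transfer_and_end"
```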

**Simulation Bridge**: An exploratory testing mode that generates scenario variations from a natural-language objective, runs each as a full multi-turn conversation with an LLM-driven persona, and collects interaction insights for coverage tracking. See [Simulations](https://docs.amigo.ai/testing/testing/simulations).

**Skill**: An LLM-backed micro-agent configured through the Platform API. Skills run on one of five execution tiers: Direct (single LLM call), Orchestrated (multi-turn with tools, default), Autonomous (extended loops with checkpointing), Browser (headless web automation), or Computer Use (full desktop automation via browser, RDP, or VNC). See [Skill Execution Tiers](https://docs.amigo.ai/agent/platform-functions#skill-execution-tiers).

**Speaker Normalization**: Per-call acoustic baseline calibration that measures each caller's deviations from their own vocal patterns rather than population averages. Improves emotion detection accuracy for callers whose baseline volume, pitch, or speech rate differs significantly from the training population. See [Emotion Detection](https://docs.amigo.ai/channels/voice/emotion-detection).

**Surface**: An agent-generated data collection interface delivered to patients through communication channels (SMS, WhatsApp, email, voice, web). Agents analyze entity state and data gaps, generate a surface spec, and the platform renders and delivers it. Submissions flow back to the world model as events. See [Surfaces](https://docs.amigo.ai/channels/surfaces).

**SurfaceSpec**: The structured specification an agent generates to define a surface - title, fields, delivery channel, expiration, and entity association.

## T

**Test Run**: The execution of a test set that produces scored results for each unit test. See [Simulations](https://docs.amigo.ai/testing/testing/simulations).

**Test Set**: A group of related unit tests that are run together. Test sets are often organized by capability area or risk level.

**Tone Momentum**: A mechanism that preserves the previous turn's emotional delivery parameters. Prevents jarring vocal shifts when the emotion detection signal is temporarily weak or unavailable.

**Transcript Extraction**: The process of pulling structured patient data (phone, DOB, email, insurance, address) from conversation transcripts during or after a call. Extractions are written to the world model with voice-level confidence.

**Triage Hint**: A linguistic or behavioral pattern included in a safety template that tells the LLM what to watch for beyond direct statements. Examples: farewell language for suicide risk, unexplained injuries for domestic violence.

## U

**Unification Engine**: The transformation layer that converts raw records from any connector type into world model events using configurable mapping rules with dot-path field extraction.
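
Dot-path field extraction resolves a path like `patient.contact.phone` against a nested record. A minimal sketch (the mapping-rule shape is illustrative):

```python
def extract(record: dict, dot_path: str, default=None):
    """Walk a nested dict one path segment at a time."""
    node = record
    for part in dot_path.split("."):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node

def unify(raw: dict, mapping: dict) -> dict:
    """Apply mapping rules {event_field: dot_path} to build an event payload."""
    return {field: extract(raw, path) for field, path in mapping.items()}
```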

**Unit Test**: A combination of a persona, a scenario, and success criteria that tests a specific agent behavior.

## V

**Version Set**: A named collection of component versions (agent, context graph, dynamic behaviors) that are deployed together. Version sets are promoted through environments (staging to production) as a unit.

## W

**Webhook Connector**: A connector type that receives push-based data from external systems via HTTP webhooks, rather than polling. Events are deduplicated by content hash.
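
Content-hash deduplication can be sketched by canonicalizing the payload before hashing, so two deliveries of the same event hash identically even if key order differs. SHA-256 and in-memory storage are choices for the example, not documented platform details:

```python
import hashlib
import json

def content_hash(event: dict) -> str:
    """Canonicalize (sorted keys, no whitespace) so equivalent payloads hash alike."""
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

class WebhookDeduper:
    def __init__(self):
        self._seen = set()

    def accept(self, event: dict) -> bool:
        """Return True for a first delivery, False for a duplicate."""
        h = content_hash(event)
        if h in self._seen:
            return False
        self._seen.add(h)
        return True
```

This makes webhook delivery idempotent: retried or duplicated pushes from the external system are dropped after the first.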

**Workspace**: A container in the Platform API that groups related skills, agents, and configurations for a specific deployment context.

**World Model**: The platform's unified data layer that assembles information from multiple sources (EHR, conversations, manual entry, connected devices) into a single view accessible to the agent during conversations.

**Write Scope**: A permission boundary that limits what entities and event types the agent can modify during an interaction (voice or text). System services bypass this restriction. Prevents conversation-extracted data from overwriting authoritative records.
