# Healthcare Contact Center

Amigo is an AI-first patient communication platform. A patient calls, an agent answers immediately, pulls their history from the data layer, and handles the conversation end-to-end - scheduling appointments, delivering pre-procedure instructions, collecting missing information, or escalating to a human operator when clinical judgment is required. The same [reasoning engine](https://docs.amigo.ai/agent/reasoning-engine) handles SMS/text conversations, outbound text outreach, and agent-generated data collection forms delivered across channels. Every interaction is scored for quality and written back to the EHR.

This is not a traditional contact center with AI added on. The AI agent is the primary handler across voice and text channels. Human operators intervene selectively on a small fraction of interactions rather than staffing every line. That shift makes many traditional contact center components unnecessary - IVR trees, ACD queues, workforce management, sample-based quality scoring - while changing how others work.

This guide covers how Amigo handles the use cases that contact center platforms traditionally address, where the approach is fundamentally different, and where existing tools still have a role.

{% @mermaid/diagram content="flowchart LR
patient\["Patient"] -->|Calls or texts| agent\["AI Agent\n(primary handler)"]
agent -->|Escalation| operator\["Human Operator\n(selective intervention)"]
agent -->|Data| wm\["World Model\n+ EHR Sync"]
agent -->|Forms| surface\["Surfaces\n(data collection)"]
agent -->|Follow-up| outbound\["Outbound\n(calls, texts)"]" %}

## How Amigo Handles Calls

### Conversational Routing (Replaces IVR)

When a patient calls, the agent answers and listens. There are no menus, no button presses, no "please listen carefully as our options have changed." The caller states their need in natural language.

The agent navigates [context graphs](https://docs.amigo.ai/agent/context-graphs) - state machines that structure conversation flow dynamically based on what the caller says, what the system already knows about them (from the [world model](https://docs.amigo.ai/data/world-model)), and what actions are available.

A patient calling about a follow-up appointment does not navigate a menu. The agent recognizes the intent, pulls their appointment history from the world model, and moves directly into scheduling. If the same patient calls about a billing question, the agent routes to the billing flow - or escalates to a human operator if the topic is outside the agent's scope. The "routing" is the conversation itself.

Phone numbers are provisioned and assigned to specific services (agents) through the platform. Each number maps to an agent configuration, so different departments or use cases can have dedicated numbers with their own context graphs, personas, and escalation rules. See [Phone Number Management](https://docs.amigo.ai/channels/voice).

### Operator Command Center

The [operator command center](https://docs.amigo.ai/channels/operators) is the human interface to the platform, designed for selective intervention rather than handling every call.

**Priority queue** - All active calls are ranked by urgency in a single view. Urgency is computed from the call's risk score, emotion detection, and whether an escalation is active. Operators see caller name, current conversation state, wait time, turn count, and emotion - without joining the call.

**AI briefing (caller context pop-up)** - When an operator selects a call, the system generates a situation briefing. Instead of pulling a CRM record, the briefing synthesizes information across systems in real time:

* **Situation summary** - What the caller needs and where the conversation stands right now
* **Patient context** - Relevant history from the world model (demographics, prior calls, care plan, recent labs, medications)
* **Risk assessment** - Current risk level and what is driving it
* **Key issues** - Specific problems identified during this call
* **Recommended actions** - What the operator should do next
* **Prior interactions** - Summary of previous conversations if applicable

The operator reads the briefing in seconds and joins with full context - no need to listen to minutes of conversation or search through multiple systems.

**Work modes** - Operators work in [listen mode](https://docs.amigo.ai/channels/operators) (monitoring silently) or [takeover mode](https://docs.amigo.ai/channels/operators) (speaking directly with the caller). They switch between modes instantly. The caller experiences no disruption - no hold music, no transfer, no reconnection. The operator joins the existing conference as a third participant.

**Operator guidance** - In listen mode, operators can send [text instructions to the agent](https://docs.amigo.ai/channels/operators) without the caller knowing. For example, "Ask for their insurance ID before confirming" - the agent works it into the conversation naturally.

### Recording and Transcription

All voice calls are recorded as [dual-channel stereo](https://docs.amigo.ai/channels/voice/recordings): one channel for the caller, one for the agent (and operator, when present). Recordings are stored with configurable retention and accessible via secure, time-limited URLs.

Recordings include full transcripts with speaker attribution. When an operator joins a call, their speech is captured through a dedicated STT stream and attributed correctly in the transcript.

{% hint style="warning" %}
**Screen recording is not part of the platform.** Amigo handles voice interactions, not desktop activity. Screen recording for quality assurance would come from your existing desktop monitoring tools.
{% endhint %}

### Outbound Dispatch and Callbacks

The platform initiates outbound calls based on clinical rules and patient data from the [world model](https://docs.amigo.ai/data/world-model). When a caller cannot be reached or requests a callback, the outbound dispatch system handles retry logic - rescheduling based on contact preferences, time zones, and configurable retry windows (same day, next morning, or fall back to SMS).

The agent also supports [deferred transfer](https://docs.amigo.ai/channels/operators) - if a call needs to be forwarded (to a clinic front desk, for example), the transfer waits until the agent's goodbye message finishes so the caller is not cut off mid-sentence.

{% hint style="info" %}
**Coming soon** - Dedicated callback queue management with priority ranking, scheduled callback windows, and patient-facing callback confirmation. Today, callbacks are handled through outbound dispatch retry logic rather than a purpose-built queue interface.
{% endhint %}

### Surfaces - Agent-Generated Data Collection

[Surfaces](https://docs.amigo.ai/channels/surfaces) replace post-call IVR surveys, paper intake forms, patient portal questionnaires, and manual data collection calls with a single mechanism that the agent controls dynamically.

**How it works:** During a call, the agent identifies missing data - insurance information, consent forms, symptom details, pre-procedure checklists - and generates a surface on the fly. The agent decides what to ask based on what it knows and what it needs. It texts the patient a secure link before the call ends. The patient fills it out on their phone.

**What surfaces can collect:** 12 field types cover the range of healthcare data needs - text, dates, phone numbers, single and multi-select, checkboxes, photos (insurance cards, wound images, IDs), digital signatures (consent, HIPAA acknowledgment), and file uploads (referral letters, lab results). Fields support prefilling from known data so patients do not re-enter information the system already has, and conditional logic so the form adapts based on answers.
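As an illustration of field types, prefilling, and conditional logic, here is a hypothetical surface definition. The schema, field names, and the `visible_fields` helper are assumptions for the sketch, not the platform's actual API:

```python
# Hypothetical surface definition - schema and field names are illustrative.
insurance_surface = {
    "title": "Insurance Update",
    "fields": [
        # Prefilled from known data so the patient does not re-enter it
        {"id": "member_name", "type": "text", "prefill": "patient.full_name"},
        {"id": "has_new_card", "type": "single_select", "options": ["yes", "no"]},
        # Conditional logic: only shown when the patient reports a new card
        {"id": "card_photo", "type": "photo",
         "show_if": {"field": "has_new_card", "equals": "yes"}},
        {"id": "consent_signature", "type": "signature"},
    ],
    "ttl": "7d",  # default from the docs: 7 days, configurable 1 hour to 1 year
    "channel": "sms",
}

def visible_fields(surface: dict, answers: dict) -> list[str]:
    """Resolve conditional logic against the patient's partial answers."""
    out = []
    for f in surface["fields"]:
        cond = f.get("show_if")
        if cond is None or answers.get(cond["field"]) == cond["equals"]:
            out.append(f["id"])
    return out
```

With this shape, the photo field appears only after the patient answers "yes", which is how a form adapts based on answers.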

**Real-time during calls:** The agent does not send the surface and forget about it. If the patient opens and completes the surface while still on the call, the agent is notified automatically and can acknowledge it in conversation: "I can see you just uploaded your insurance card - thank you, let me pull that up." Four mid-call tools give the agent full control: create a surface, deliver it, check its status, and list existing surfaces to avoid sending duplicates.

**Multi-channel delivery:** Surfaces reach patients through SMS, WhatsApp, email, voice (IVR-style verbal collection for simple confirmations), or direct web links for portal embedding. The agent selects the channel based on the patient's communication preferences and the nature of the request. A photo upload goes via SMS. A yes/no consent confirmation can happen on the call itself.

**No friction for patients:** Surfaces open as a mobile-first web page via a secure, signed link. No login, no app download, no account creation. Fields auto-save as the patient progresses, so nothing is lost if they close the browser and return later. Surfaces expire after a configurable TTL (1 hour to 1 year, default 7 days).

**Proactive outreach:** Beyond mid-call use, the [automated gap detection](https://docs.amigo.ai/channels/surfaces#automated-gap-detection) scanner runs in the background, checking patient records against configurable requirements. If a patient with an upcoming appointment is missing insurance information or has not completed a consent form, the scanner creates and delivers a surface automatically - days before the appointment, with no agent or human involvement.

**Outreach optimization:** The platform tracks completion rates, channel effectiveness, and field-level abandonment across all surfaces. Before creating a new surface, the agent can check a patient's history: how many surfaces are pending, what their completion rate is, which channel they prefer. If a patient already has three unfinished surfaces, the agent collects data verbally instead. If a patient consistently completes SMS surfaces but ignores email, the agent uses SMS. Fatigue gating prevents the common failure mode of sending more messages to patients who do not respond.
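The fatigue-gating decision above can be sketched in a few lines. The three-surface threshold mirrors the docs' example; choosing the channel by historical completion rate is an assumption about how the selection could work:

```python
def choose_collection(pending_surfaces: int,
                      channel_completion: dict[str, float]) -> str:
    """Fatigue gating sketch: fall back to verbal collection when too many
    surfaces are already unfinished, otherwise pick the channel the patient
    actually completes."""
    if pending_surfaces >= 3:      # threshold from the docs' example
        return "verbal"
    return max(channel_completion, key=channel_completion.get)
```

A patient with three unfinished surfaces gets asked verbally; a responsive patient gets their best-performing channel.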

**Data integrity:** Surface submissions enter the [world model](https://docs.amigo.ai/data/world-model) through the same pipeline as every other data source - same confidence gates, same review queues, same entity resolution. A photo of an insurance card uploaded via a surface goes through the same verification pipeline as insurance data extracted from a phone call. Every surface creation, delivery, and submission is recorded in the [audit trail](https://docs.amigo.ai/safety-and-compliance/compliance#unified-audit-trail).

### Call Intelligence and Quality Scoring

Every call produces a structured intelligence profile - a full analytical breakdown, not a summary - and every call receives the same depth of analysis, not a random sample.

**Two layers of quality analysis run on every call:**

**Layer 1 - Real-time intelligence** (computed from session state at call end) produces seven structured summaries:

* **Emotion** - dominant emotion, valence/arousal averages, peak negative, emotional shifts, final trend
* **Risk** - composite score and contributing signals
* **Latency** - engine response time (avg/p50/p95), audio time-to-first-byte, silence ratio
* **Conversation dynamics** - turn count, states visited, loop count, barge-in count, completion reason
* **Tool performance** - success/failure counts per tool
* **Safety** - rule matches, escalation triggers
* **Operator involvement** - connect time, resolution

**Layer 2 - Post-call quality scoring** (runs asynchronously on the stereo recording) scores each call across five dimensions on a 1-5 scale:

* **Task completion** - did the agent accomplish what the caller needed
* **Information accuracy** - were transcriptions correct, and did the agent act on accurate data
* **Conversation flow** - natural pacing, no awkward pauses or repetitions
* **Error recovery** - did the agent recover gracefully from confusion
* **Caller experience** - based on tone and interaction patterns

Scoring produces an outcome classification: succeeded, partially succeeded, failed, or abandoned.

**Composite quality score** (0-100) summarizes call health using a penalty model - starting at 100 and deducting for high latency, excessive silence, barge-ins, agent loops, escalations, and tool failures. Calls are automatically tiered into excellent, good, fair, and poor.
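As a sketch, the penalty model can be expressed as a small function. The docs name the penalty signals and the four tiers; the specific deduction amounts and tier cutoffs below are assumptions:

```python
def composite_quality_score(metrics: dict) -> tuple[int, str]:
    """Start at 100 and deduct per signal (deduction sizes are illustrative)."""
    score = 100
    if metrics.get("p95_latency_ms", 0) > 2000:
        score -= 15                              # high latency
    if metrics.get("silence_ratio", 0.0) > 0.3:
        score -= 10                              # excessive silence
    score -= 5 * metrics.get("barge_ins", 0)
    score -= 10 * metrics.get("agent_loops", 0)
    if metrics.get("escalated", False):
        score -= 20
    score -= 5 * metrics.get("tool_failures", 0)
    score = max(score, 0)

    # Tier cutoffs are assumptions; the docs define only the four tier names.
    if score >= 90:
        tier = "excellent"
    elif score >= 75:
        tier = "good"
    elif score >= 50:
        tier = "fair"
    else:
        tier = "poor"
    return score, tier
```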

**Key moments are extracted automatically.** The system identifies notable events - moments of high risk, emotional shifts, escalations, tool failures - so reviewers jump directly to what matters instead of listening to entire recordings.

**STT feedback loop.** The quality analysis identifies STT corrections and feeds them back into the speech recognition configuration. Transcription accuracy improves over time for your specific vocabulary without manual tuning.

**Quality trends over time.** Analytics show quality score distribution, escalation rates, and per-component breakdowns across configurable date ranges. Period-over-period comparison measures whether configuration changes improve or degrade quality. See [Analytics](https://docs.amigo.ai/intelligence-and-analytics/intelligence) for the full endpoint set.

If your compliance program requires structured human evaluation against custom scorecards, recordings, transcripts, and intelligence profiles are all accessible via API for import into existing QM tools.

### Real-Time Speech Intelligence

Traditional speech analytics runs batch analysis on recordings after calls end, producing keyword reports and sentiment scores the next day. Amigo's speech intelligence operates during the call and changes how the agent behaves in real time.

**Three parallel emotion models** analyze every moment of the conversation simultaneously. The [prosody model](https://docs.amigo.ai/channels/voice/emotion-detection) reads vocal qualities - pitch, rhythm, timbre, pace - detecting frustration, anxiety, and confusion from how the caller sounds, not what they say. The [vocal burst model](https://docs.amigo.ai/channels/voice/emotion-detection) captures non-speech sounds that transcription discards entirely - sighs, laughs, groans, gasps - each mapped to delivery adjustments (a sigh triggers more empathetic tone, a laugh triggers matched energy). The [language model](https://docs.amigo.ai/channels/voice/emotion-detection) analyzes transcript text for sarcasm, tiredness, disapproval, and enthusiasm - emotions that audio alone cannot detect.

**These models drive agent behavior, not just reports.** When emotion spikes, the agent adjusts its vocal delivery (pace, warmth, emphasis) on the next turn. When the prosody model and language model disagree - a caller says "I'm fine" but their voice is flat and low-energy - the system trusts the voice over the words. When the agent is about to discuss a sensitive topic (difficult diagnosis, billing dispute), it shifts to a more careful delivery before the caller reacts. After 5+ minutes with deteriorating mood, the agent increases pace and directness. After 10+ minutes of sustained negative emotion, escalation triggers automatically.

**Per-turn risk scoring** combines emotion (40% weight), loop detection (30%), and duration (30%) into a [composite risk score](https://docs.amigo.ai/channels/operators) that updates every conversational turn. Risk levels (normal, monitor, alert, escalate) can be overridden per context graph state - a medication verification state can have a lower escalation threshold than a scheduling state because errors carry higher clinical risk.
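The weighted combination can be sketched as follows. The 40/30/30 weights come from the docs; the level cutoffs and the per-state override values are illustrative assumptions:

```python
# Default thresholds, highest first (cutoff values are assumptions).
RISK_LEVELS = [(0.75, "escalate"), (0.5, "alert"), (0.25, "monitor"), (0.0, "normal")]

def composite_risk(emotion: float, loops: float, duration: float,
                   thresholds=RISK_LEVELS) -> tuple[float, str]:
    """Per-turn composite risk: emotion 40%, loop detection 30%, duration 30%.
    Inputs are assumed normalized to [0, 1]."""
    score = 0.4 * emotion + 0.3 * loops + 0.3 * duration
    for cutoff, level in thresholds:
        if score >= cutoff:
            return score, level
    return score, "normal"

# A medication verification state could override thresholds downward,
# escalating earlier because errors carry higher clinical risk.
MED_VERIFY_LEVELS = [(0.6, "escalate"), (0.4, "alert"), (0.2, "monitor"), (0.0, "normal")]
```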

**Output signals** available in real time and in analytics: valence (positive/negative direction), arousal (calm vs. agitated), trend (improving/stable/deteriorating), and coherence (agreement across models - low coherence flags complex emotional states).

**Audio embeddings** capture paralinguistic features - tone, urgency, hesitation, confidence - as dense vectors, enabling semantic search over how something was said across conversations. "Find calls where the caller sounded similar to this one" without relying on keyword matching.

{% hint style="info" %}
**Coming soon** - Batch-style keyword trending and topic clustering across large call volumes. Today, speech intelligence is real-time and per-call, with audio embeddings enabling similarity search.
{% endhint %}

### Capacity and Operator Planning

The AI agent does not have shifts, does not call in sick, and scales to handle call volume without forecasting. Traditional workforce management - shift scheduling, staffing forecasts, adherence tracking - solves a problem that largely disappears.

The remaining question is simpler: how many operators do you need on standby for escalations? The platform provides:

* **Escalation rate tracking** - What percentage of calls require human intervention, trended over time
* **Operator performance metrics** - Handle time, resolution stats, and active time per operator
* **Call volume analytics** - Historical and real-time volume by service and time period
* [**Command center**](https://docs.amigo.ai/intelligence-and-analytics/intelligence) - Single-pane dashboard showing active calls, escalated calls, and system health

For a small operator team, any lightweight scheduling tool works; an enterprise WFM suite is overkill.

### Multi-Channel Patient Engagement

Traditional contact centers treat email and live chat as separate channels that each need their own agent pool, routing logic, and quality management. Amigo takes a different approach: the agent reaches patients across channels through [surfaces](https://docs.amigo.ai/channels/surfaces) and text sessions, all powered by the same reasoning engine.

During or after a call, the agent delivers data collection, instructions, and follow-up content via **SMS, WhatsApp, email, voice (IVR-style verbal collection), and web (direct links for portal embedding)**. The agent picks the channel based on the patient's preferences and what is being collected - a photo upload goes via SMS, a longer consent form via email, a simple confirmation verbally on the call.

A single voice call can trigger patient engagement across every channel: the agent handles the conversation, texts a surface for missing insurance photos, emails pre-procedure instructions, and sends an SMS reminder the day before the appointment. The patient gets a coherent experience orchestrated by the same agent, not fragmented across disconnected systems.

Text sessions also support asynchronous patient engagement, care plan reminders, and secure messaging for use cases that start outside of a phone call.

### Analytics and Reporting

The [analytics suite](https://docs.amigo.ai/intelligence-and-analytics/intelligence) covers both real-time and historical reporting:

**Real-time:**

* [Command center](https://docs.amigo.ai/intelligence-and-analytics/intelligence) dashboard with active call count, escalation status, pipeline health, and data quality metrics
* [SSE event stream](https://docs.amigo.ai/intelligence-and-analytics/intelligence) for live dashboards (`call.started`, `call.ended`, `call.escalated`, `pipeline.sync_completed`, `review.submitted`, `alert`)
* Operator command center with live priority queue and call status
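As a sketch of consuming the event stream, here is a minimal parser for the standard SSE wire format using the event names listed above. The payload fields shown in the test data are assumptions, not the documented schema:

```python
import json

def parse_sse(stream_text: str) -> list[tuple[str, dict]]:
    """Parse a Server-Sent Events body into (event_name, data) pairs.
    Handles the `event:` and `data:` fields; a blank line ends each event."""
    events, name, data_lines = [], None, []
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            name = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and name:
            events.append((name, json.loads("\n".join(data_lines))))
            name, data_lines = [], []
            name = None
    return events
```

A live dashboard would feed chunks from the stream endpoint into a parser like this and dispatch on event name (`call.started`, `call.escalated`, and so on).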

**Historical:**

* [Call intelligence analytics](https://docs.amigo.ai/intelligence-and-analytics/intelligence) - quality trends, emotion trends, safety trends, latency, tool performance, operator performance
* [Percentile analytics](https://docs.amigo.ai/intelligence-and-analytics/intelligence) - p50/p95/p99 for duration and quality, period-over-period comparison
* [Surface analytics](https://docs.amigo.ai/intelligence-and-analytics/intelligence) - completion rates, channel effectiveness, field abandonment
* [Data quality trends](https://docs.amigo.ai/intelligence-and-analytics/intelligence) - confidence distribution, review pipeline performance, event distribution by source and type

All historical endpoints support date range filtering, time bucketing (hourly, daily, weekly), and service-level filtering.

## AI Capabilities

### Automated Appointment Scheduling

The agent handles end-to-end appointment scheduling: identifying the need, checking availability through FHIR API integration, presenting options, booking the slot, and writing the outcome back to the EHR. See [Patient Scheduling and Outreach](https://docs.amigo.ai/use-cases/use-cases/scheduling-outreach) for the full workflow.

This includes handling scheduling complexities that trip up simple automation: multiple providers, insurance verification, pre-appointment instructions, and patient preferences learned from prior interactions.

### Smart Call Routing

Routing in Amigo is not a separate module - it is inherent in how the agent works. The agent determines what the caller needs through conversation (not menu selection) and navigates to the appropriate workflow via [context graphs](https://docs.amigo.ai/agent/context-graphs).

For multi-department deployments, different phone numbers map to different agent configurations with their own context graphs, personas, and escalation rules. A scheduling line, a triage line, and a general inquiries line can each have purpose-built agent behavior while sharing the same underlying platform, world model, and operator pool.

### Agent Assist

Amigo's [operator system](https://docs.amigo.ai/channels/operators) provides agent assist in both directions:

**AI assists the human** - When an operator joins a call, they receive an AI-generated briefing with situation summary, patient context, risk assessment, and recommended actions. During the call, the agent continues processing in the background, maintaining context so it can resume if the operator hands control back.

**Human assists the AI** - Operators in listen mode can inject [guidance](https://docs.amigo.ai/channels/operators) into the agent's reasoning without the caller knowing. This is used for real-time coaching: "mention the copay waiver" or "this patient prefers their maiden name."

### General Inquiries

The agent handles general inquiries using [context graphs](https://docs.amigo.ai/agent/context-graphs) that map common question categories to appropriate responses. Combined with [dynamic behaviors](https://docs.amigo.ai/agent/context-graphs) (runtime rules that fire based on conversation context) and [knowledge](https://docs.amigo.ai/agent/context-graphs) (curated information the agent can reference), the system handles FAQs, office hours, location directions, preparation instructions, and policy questions.

When a question falls outside the agent's configured scope, it escalates to an operator rather than guessing.

### Pre-Procedure Instructions

Two mechanisms handle pre-procedure patient preparation:

**During the call** - The agent pulls procedure-specific instructions from the world model and delivers them verbally, adapting language complexity and pacing to the patient. It can confirm understanding, answer follow-up questions, and re-explain sections the patient found confusing.

**After the call** - The agent generates a [surface](https://docs.amigo.ai/channels/surfaces) with written instructions, checklists, and any forms the patient needs to complete before their procedure. This is delivered via SMS or email so the patient has a reference they can review later. Surfaces support conditional fields, so instructions can be tailored to the specific procedure type.

The [automated gap detection](https://docs.amigo.ai/channels/surfaces#automated-gap-detection) scanner can also proactively send preparation surfaces days before scheduled procedures without any agent or human action.

## Beyond the Contact Center: What Amigo Makes Possible

The sections above map traditional contact center requirements to Amigo's architecture. This section covers capabilities that have no traditional equivalent - things that become possible when an agent with a unified data layer handles every interaction.

### Patient Memory Across Every Interaction

Traditional contact centers are stateless. Each call starts from scratch: a human agent pulls up a CRM record, asks the caller to verify their identity, and has no memory of previous conversations beyond whatever notes the last agent typed.

Amigo's [memory system](https://docs.amigo.ai/agent/memory) operates in four layers that build on each other over time:

* **L0** - Complete raw transcripts of every conversation, stored as ground truth
* **L1** - Extracted memories: net-new information from each interaction, structured and anchored to the patient's model
* **L2** - Episodic models: temporal patterns synthesized over weeks and months (medication adherence cycles, communication preferences, seasonal health patterns)
* **L3** - Global user model: a persistent, evolving understanding of each patient that is immediately available during every call

By the third call with a patient, the agent knows they prefer morning calls, get anxious about lab results, and have trouble remembering evening medications. By the tenth call, the agent has identified that their adherence dips during work travel and can address it ahead of time. This is not a CRM note - it is a model that updates with each interaction and shapes how the agent approaches every conversation.

Safety-critical information - medication allergies, past adverse reactions, crisis indicators - is always accessible through L3 with immediate recall. The agent never loses sight of what matters clinically.

For details, see [Functional Memory](https://docs.amigo.ai/agent/memory) and [Layered Architecture](https://docs.amigo.ai/agent/memory).

### Unified Data Foundation (World Model)

Traditional contact centers store call records in one system, patient data in the EHR, scheduling in another, and agent notes in a CRM. Integrating them requires expensive middleware, and data is often stale or conflicting.

The [world model](https://docs.amigo.ai/data/world-model) is a unified, event-sourced data layer that ingests information from every source - EHR, voice conversations, manual entry, connected devices, surface submissions - and projects it into a single entity state per patient. Every fact carries a source and a [confidence score](https://docs.amigo.ai/data/connectors-and-ehr). When sources conflict, confidence-based resolution determines which fact is authoritative.
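Confidence-based resolution can be sketched as a projection that keeps the highest-confidence fact per field. The field names, sources, and confidence values below are illustrative, not the world model's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Fact:
    field: str          # e.g. "insurance.member_id" (illustrative naming)
    value: str
    source: str         # "ehr", "voice", "surface", "manual"
    confidence: float   # 0.0 - 1.0

def resolve(facts: list[Fact]) -> dict[str, Fact]:
    """For each field, the highest-confidence fact becomes authoritative.
    Tie-breaking rules are omitted in this sketch."""
    state: dict[str, Fact] = {}
    for f in facts:
        current = state.get(f.field)
        if current is None or f.confidence > current.confidence:
            state[f.field] = f
    return state
```

Here a member ID heard over the phone at moderate confidence loses to the same field synced from the EHR at high confidence.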

During a call, the agent does not need to "look up" patient information across systems. The world model pushes relevant context into the agent's reasoning automatically: demographics, recent interactions, clinical history, upcoming appointments, medication list. This is what powers the AI briefing for operators and the agent's ability to have contextual conversations without asking patients to repeat themselves.

### Post-Call Clinical Verification

When the agent extracts information during a call - a medication name, an allergy, a symptom description - that data does not write directly to the EHR. It passes through a [verification pipeline](https://docs.amigo.ai/data/review-queue):

1. Voice-extracted data enters at moderate confidence (speech recognition is not perfect)
2. An automated review judge cross-references the extraction against the full transcript
3. Uncertain items route to a human review queue with priority ranking
4. Only verified data is promoted to higher confidence and eligible for EHR writeback
5. The complete chain - extraction, review, approval, writeback - is recorded in the audit trail

Clinical errors do not propagate silently. The system knows what it is uncertain about and routes those items for review rather than assuming accuracy.
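The gating in steps 2-4 might look like this sketch. The numeric threshold is an assumption; the docs specify the routing behavior (promote verified data, queue uncertain data), not the cutoffs:

```python
def route_extraction(confidence: float, judge_agrees: bool) -> str:
    """Illustrative confidence gate for voice-extracted clinical data."""
    if not judge_agrees:
        return "human_review"   # judge disagreement always routes to review
    if confidence >= 0.9:       # threshold is an assumed value
        return "promote"        # verified: eligible for EHR writeback
    return "human_review"       # uncertain: priority-ranked review queue
```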

### Outbound System

Traditional outbound dialers run through call lists sequentially. Amigo's outbound system is [world model-native](https://docs.amigo.ai/use-cases/use-cases/scheduling-outreach) with five distinct outbound patterns:

* **Scheduled** - Time-triggered calls (appointment reminders, follow-up check-ins)
* **Event-reactive** - Triggered by data changes (lab result arrives, referral approved, insurance verified)
* **Continuous monitoring** - Driven by patient state (adherence tracking, care plan milestones)
* **Conversational follow-through** - Promises made during calls ("I'll call you back tomorrow at 10") become scheduled tasks automatically
* **Orchestrated campaigns** - Goal-driven outreach across patient populations with configurable priority and pacing

The dispatch system respects business hours, contact preferences, and retry logic with exponential backoff. Outbound tasks are first-class entities in the world model, so their outcomes feed back into the patient's record and inform future interactions.
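A minimal sketch of that retry scheduling follows. The base interval, the 24-hour cap, and the 9-to-5 business-hours window are assumed values, not platform defaults:

```python
from datetime import datetime, timedelta

def next_attempt(last: datetime, attempt: int,
                 base_minutes: int = 30,
                 hours: tuple[int, int] = (9, 17)) -> datetime:
    """Exponential backoff clamped to a business-hours window."""
    delay = timedelta(minutes=base_minutes * (2 ** attempt))
    candidate = last + min(delay, timedelta(hours=24))  # cap the backoff
    open_h, close_h = hours
    if candidate.hour >= close_h:          # too late: slide to next morning
        candidate = (candidate + timedelta(days=1)).replace(
            hour=open_h, minute=0, second=0, microsecond=0)
    elif candidate.hour < open_h:          # too early: wait for opening
        candidate = candidate.replace(hour=open_h, minute=0,
                                      second=0, microsecond=0)
    return candidate
```

A real dispatcher would additionally consult the patient's contact preferences and time zone, and fall back to SMS after the retry window closes.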

### Proactive Safety Detection

The agent does not wait for a caller to say "I'm having chest pain" to recognize a safety concern. [Dynamic behaviors](https://docs.amigo.ai/agent/context-graphs) run continuously during every call, evaluating conversation context across multiple dimensions simultaneously:

* Detection of implicit health concerns (a constellation of symptoms mentioned casually that together indicate a clinical issue)
* Sharp conversational pivots (casual tone shifting suddenly to urgency)
* Pattern matching against workspace safety configurations (medication interactions, crisis indicators, escalation triggers)

When a safety concern is detected, the agent shifts its behavior immediately - adjusting tone, asking clarifying questions, or escalating to an operator. This happens proactively, before the caller explicitly asks for help.

### Domain Knowledge in Agent Reasoning

The agent does not follow scripts. The [knowledge system](https://docs.amigo.ai/agent/context-graphs) embeds domain expertise - clinical protocols, compliance frameworks, organizational procedures - directly into the agent's reasoning. When discussing medications, the agent reasons within a pharmaceutical knowledge frame. When handling a billing question, it applies the organization's specific policies.

This is different from FAQ lookup. The agent applies knowledge contextually during conversation, adapting its responses to the patient's specific situation rather than reading from a static answer database.

### Continuous Improvement

The platform does not stay static after deployment. [Pattern discovery](https://docs.amigo.ai/agent/pattern-discovery-and-reuse) identifies successful conversation patterns across your call population and optimizes agent behavior over time. The system balances multiple objectives - clinical accuracy, speed, empathy, safety - and surfaces tradeoffs rather than optimizing for a single metric.

Quality scoring, surface completion rates, escalation patterns, and clinical verification outcomes feed back into configuration improvements. The cycle is: deploy, measure, improve, verify.

### Healthcare Compliance by Design

Compliance is built into the architecture, not added as a separate module:

* **Event-sourced audit trail** - Every call, operator action, data change, and system decision is recorded immutably. See [Compliance and Audit](https://docs.amigo.ai/safety-and-compliance/compliance).
* **Confidence-gated data flow** - Clinical data passes through verification before reaching the EHR. Uncertain data is flagged, not silently propagated.
* **Workspace isolation** - Patient data is isolated at the workspace level. Multi-tenant architecture ensures data does not cross organizational boundaries.
* **PHI handling** - Sensitive fields receive additional encryption and access controls. Surface tokens are short-lived and purpose-bound.
* **Retention policies** - Configurable per workspace with legal hold support.
* **Certifications** - SOC 2 Type II and HIPAA compliance with continuous monitoring.

## What Changes in an AI-First Contact Center

| Traditional Component | What Happens to It                                                                                                                                                                                                                                            |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **IVR trees**         | **Obsolete.** Natural language replaces button menus. Context graphs handle routing through conversation, not DTMF.                                                                                                                                           |
| **ACD queue**         | **Obsolete.** There is no queue because the AI agent answers every call immediately. Operators handle the small fraction that escalate.                                                                                                                       |
| **Agent desktop**     | **Replaced.** Operator command center with AI briefing, priority queue, and listen/takeover controls. Designed for selective intervention, not handling every call.                                                                                           |
| **Call recording**    | **Replaced.** Dual-channel recording with full transcription, speaker attribution, and structured intelligence per call.                                                                                                                                      |
| **QM scoring**        | **Transformed.** Two-layer analysis on every call: real-time intelligence (7 structured summaries) + post-call quality scoring (5 dimensions). 100% coverage, not random sampling. Self-improving STT accuracy.                                               |
| **Speech analytics**  | **Transformed.** Three parallel emotion models + per-turn risk scoring + audio embeddings, all running in real time. Agent acts on what it detects during the call - adjusting tone, escalating, shifting delivery. Not batch reports the next morning.       |
| **Post-call survey**  | **Replaced and expanded.** Surfaces collect photos, signatures, checklists, and documents - not just numeric ratings. Agent-generated, multi-channel, with real-time completion tracking and fatigue gating.                                                  |
| **WFM**               | **Mostly obsolete.** AI agents do not need shifts. For the small operator team, call volume and escalation metrics are available to feed any scheduling tool.                                                                                                 |
| **Email/chat**        | **Covered differently.** Surfaces deliver data collection, instructions, and follow-up via SMS, email, WhatsApp, web, and voice - orchestrated by the agent across channels from a single interaction. Text agent sessions handle SMS and chat conversations. |
| **Screen recording**  | **Not applicable.** Amigo is a voice and data platform, not a desktop monitoring tool.                                                                                                                                                                        |

## Integration Points

Amigo is designed to work alongside existing contact center infrastructure during the transition:

* **Telephony** - Phone numbers are provisioned through the platform. Calls can be transferred to external numbers (clinic front desks, specialist lines) via deferred transfer.
* **EHR** - Bidirectional data exchange through the [connector runner](https://docs.amigo.ai/data/connectors-and-ehr) and [FHIR integration](https://docs.amigo.ai/data/connectors-and-ehr). Call outcomes, scheduling changes, and patient-reported data write back to the EHR automatically.
* **Existing QM tools** - Recordings and transcripts are accessible via API for import into third-party quality management platforms.
* **Existing WFM tools** - Call volume, duration, and escalation metrics are available via analytics APIs for workforce planning.
* **Compliance** - Full [audit trail](https://docs.amigo.ai/safety-and-compliance/compliance#unified-audit-trail) of every call, operator action, data change, and system decision for regulatory review.
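As an example of the WFM integration point, the sketch below buckets per-call records into half-hour intervals and counts escalations — since only escalated calls need human staffing — producing the kind of volume series a scheduling tool consumes. The record shape is an assumption for illustration, not the analytics API's actual response format:

```python
from collections import defaultdict
from datetime import datetime

def staffing_intervals(calls: list[dict], minutes: int = 30) -> dict:
    """Bucket calls into fixed intervals for workforce planning.

    Each call dict is assumed to carry an ISO-8601 'started_at'
    timestamp and an 'escalated' boolean (illustrative schema).
    """
    buckets = defaultdict(lambda: {"calls": 0, "escalations": 0})
    for call in calls:
        ts = datetime.fromisoformat(call["started_at"])
        # Snap the timestamp down to the start of its interval.
        slot = ts.replace(minute=(ts.minute // minutes) * minutes,
                          second=0, microsecond=0)
        key = slot.isoformat()
        buckets[key]["calls"] += 1
        if call["escalated"]:
            buckets[key]["escalations"] += 1
    return dict(buckets)

calls = [
    {"started_at": "2024-05-01T09:05:00", "escalated": False},
    {"started_at": "2024-05-01T09:20:00", "escalated": True},
    {"started_at": "2024-05-01T09:40:00", "escalated": False},
]
print(staffing_intervals(calls))
```

Feeding the `escalations` series (rather than raw call volume) into a scheduling tool reflects the AI-first model: the agent absorbs the volume, and operators are staffed only against the escalated fraction.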

{% hint style="info" %}
For the full list of platform concepts and how they map to API terminology, see the [API Terminology Mapping](https://docs.amigo.ai/reference/api-terminology-mapping).
{% endhint %}
