Classic API vs Platform API

Side-by-side comparison of the Classic API (text chat) and Platform API (voice and EHR), with guidance on choosing the right one.

Amigo exposes two API surfaces built for different deployment contexts. This page lays out the differences so you can pick the right one (or both).

Side-by-Side Comparison

| | Classic API | Platform API |
| --- | --- | --- |
| Base URL | api.amigo.ai | api.platform.amigo.ai |
| Scoping | Organization-scoped | Workspace-scoped |
| Authentication | User JWT (per-user tokens) | API key (Bearer token) |
| Primary channel | Text chat, voice notes, WebSocket | Phone calls (inbound + outbound) |
| Agent actions | Tools (versioned code packages) | Skills (LLM-backed micro-agents) |
| User data | User models with memory and personalization | Event-sourced world model with confidence scoring |
| Data access | SQL API, data sharing, organization tables | Structured query API with vector search |
| Integrations | Webhooks, data sharing | EHR connectors, FHIR, bidirectional sync |
| Escalation | N/A | Operator escalation with conference transfer |
| Testing | Simulations, personas, scenarios, metrics | Workspace-level workflows |
| Safety | Conversation-level controls | Real-time safety monitoring, emotion detection |
| SDKs | amigo-python-sdk, @amigo-ai/sdk | Direct HTTP (REST) |
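The authentication split in the table can be sketched in a few lines of Python. The base URLs and auth schemes below come from the comparison; the endpoint paths (`/v1/conversations`) and the request-dict shape are illustrative assumptions, not documented routes.

```python
# Sketch: how a request to each API surface is authorized.
# Base URLs and auth schemes are from the comparison table; the
# endpoint paths are hypothetical placeholders.

CLASSIC_BASE = "https://api.amigo.ai"            # organization-scoped
PLATFORM_BASE = "https://api.platform.amigo.ai"  # workspace-scoped

def classic_request(user_jwt: str, path: str) -> dict:
    """Classic API: a per-user JWT identifies the end user."""
    return {
        "url": f"{CLASSIC_BASE}{path}",
        "headers": {"Authorization": f"Bearer {user_jwt}"},
    }

def platform_request(api_key: str, path: str) -> dict:
    """Platform API: a workspace API key sent as a Bearer token."""
    return {
        "url": f"{PLATFORM_BASE}{path}",
        "headers": {"Authorization": f"Bearer {api_key}"},
    }

req = classic_request("user-jwt-123", "/v1/conversations")  # hypothetical path
```

The key operational difference: Classic tokens are minted per user (so requests carry user identity), while Platform requests all share a workspace-level credential.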

Classic API

The Classic API is a single-actor system for consumer-facing digital health. Patients interact through text (chat, voice notes, WebSocket streaming) in an organization-scoped environment.

Data model: Each user has a user model that accumulates memory and personalization over time. The system remembers past conversations, preferences, and context across sessions.

Agent actions: Tools are versioned code packages that the agent calls to perform actions (schedule, look up information, send messages). They execute in a sandboxed runtime with defined inputs and outputs.

Testing: A full simulation framework lets you run thousands of synthetic conversations against personas and scenarios, then score results with custom metrics. This is how you validate changes before promoting to production.
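The shape of simulation-based testing can be sketched as follows. Every name here (`Persona`, `Transcript`, the containment metric, the promotion gate) is invented for illustration; the actual simulation framework's API differs.

```python
# Illustrative sketch of simulation-style testing: run synthetic
# conversations against personas, then score them with a custom metric
# and gate promotion on the aggregate. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    traits: list

@dataclass
class Transcript:
    persona: Persona
    turns: list

def containment_metric(t: Transcript) -> float:
    """Custom metric: fraction of turns resolved without escalation."""
    if not t.turns:
        return 0.0
    escalated = sum(1 for turn in t.turns if "escalate" in turn.lower())
    return 1.0 - escalated / len(t.turns)

def promotion_gate(transcripts: list, threshold: float = 0.95) -> bool:
    """Promote a change only if the mean score clears the threshold."""
    scores = [containment_metric(t) for t in transcripts]
    return sum(scores) / len(scores) >= threshold
```

The pattern, not the names, is the point: score thousands of synthetic transcripts, then make the promote/rollback decision mechanical.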

Data access: SQL API and data sharing give you direct query access to organization tables (conversations, users, tools, metrics, simulations) for analytics and reporting.

Voice support: The Classic API supports real-time voice via WebSocket with server-side voice activity detection, millisecond-level transcript alignment, and external event injection during conversations. Voice notes (async audio messages) are also supported for non-real-time interactions.

Conversation intelligence: The system generates personalized conversation starters based on the agent's persona and the user's model, suggests follow-up responses when a user hasn't replied, and exposes interaction insights that show the agent's full reasoning trail - which memories were active, what reflections it made, which tools it considered, and how it transitioned states.

Platform API

The Platform API is a multi-system architecture for enterprise healthcare operations. It handles phone-based voice interactions with real-time EHR integration, operator workflows, and an event-sourced data model.

Data model: The world model stores every fact as an immutable, confidence-scored event. Entity state (patients, appointments, practitioners, locations) is a computed projection from events. When two sources disagree, the higher-confidence source wins. Authoritative API data (1.0) overrides browser-scraped data (0.7), which overrides voice-captured data (0.5).
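The confidence-resolution rule above can be made concrete with a small projection sketch. The source confidences (1.0 / 0.7 / 0.5) are from the text; the event shape and the tie-breaking rule (later event wins at equal confidence) are assumptions for illustration.

```python
# Sketch of the event-sourced world model: facts are immutable events
# with a source confidence, and entity state is a projection where the
# higher-confidence source wins per field. Tie-breaking by recency is
# an assumption, not documented behavior.
from dataclasses import dataclass

CONFIDENCE = {"api": 1.0, "browser": 0.7, "voice": 0.5}  # from the text

@dataclass(frozen=True)
class Event:
    seq: int      # event order
    field: str    # e.g. "phone"
    value: str
    source: str   # "api" | "browser" | "voice"

def project(events: list) -> dict:
    """Fold events into current entity state, field by field."""
    state = {}  # field -> (confidence, seq, value)
    for e in events:
        conf = CONFIDENCE[e.source]
        best = state.get(e.field)
        if best is None or (conf, e.seq) >= (best[0], best[1]):
            state[e.field] = (conf, e.seq, e.value)
    return {field: value for field, (_, _, value) in state.items()}

events = [
    Event(1, "phone", "555-0100", "voice"),    # captured on a call (0.5)
    Event(2, "phone", "555-0199", "api"),      # authoritative EHR record (1.0)
    Event(3, "phone", "555-0123", "browser"),  # scraped later, lower conf (0.7)
]
assert project(events)["phone"] == "555-0199"  # API-sourced fact wins
```

Because events are immutable, the losing values are never discarded: the projection changes, but the full history remains queryable for audit.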

Agent actions: Skills are LLM-backed micro-agents that handle complex multi-step tasks. Tool execution spans four tiers - direct (instant lookups), orchestrated (multi-step coordination), autonomous (long-running background tasks), and browser (UI automation) - each with different latency and supervision profiles.

EHR integration: The connector runner maintains 7 background loops that continuously poll EHR systems, resolve entities across sources, verify data quality, and sync confirmed changes back to the EHR. Data only flows outbound after passing confidence gates.

Operators: Human operators join calls through a conference-based architecture. The operator enters the same conference as the patient and agent, so there are no dropped calls and no blind handoffs.

Choosing Your API

Use the Classic API when:

  • Patients interact through screens (mobile apps, web portals, chat widgets)

  • You need rich conversation memory and personalization

  • You want to run simulation-based testing with synthetic personas

  • Your data needs are covered by SQL queries and data sharing

Use the Platform API when:

  • Patients interact through phone calls

  • You need bidirectional EHR integration

  • You need operator escalation for complex or sensitive cases

  • You need confidence-scored data with audit trails

  • You need emotion detection and adaptive voice delivery

Use both when:

  • You run phone scheduling (Platform API) alongside a patient-facing chat experience (Classic API)

  • Different patient touchpoints have different interaction modalities

Skills vs Actions: A Deeper Look

The Classic API uses Actions (API: tool) and the Platform API uses Skills. The difference is architectural, not just naming.

Actions are versioned code packages. You write Python code, declare dependencies, and deploy through Agent Forge. The agent calls your code at runtime. Actions handle deterministic tasks where you need full control: complex database queries, document generation, multi-step API orchestrations with error handling. You own the code, the retry logic, and the failure modes.

Skills are LLM-backed micro-agents. You configure them with a system prompt, an input schema, and integration references. The platform handles execution. Skills handle tasks where natural language reasoning is the core capability: interpreting patient requests, translating between data formats, summarizing clinical information. You define what the skill should accomplish; the LLM figures out how.

| | Actions (Classic) | Skills (Platform) |
| --- | --- | --- |
| Configuration | Python code + dependencies | System prompt + schema |
| Execution | Your code runs in a sandbox | LLM interprets and executes |
| Best for | Deterministic logic, exact control | Flexible interpretation, NLP tasks |
| Versioning | Semantic versioning, explicit deploys | Prompt versioning through service config |
| Error handling | You implement it | Platform handles retries and fallbacks |
| Execution tiers | Single tier (Lambda) | Four tiers (Direct, Orchestrated, Autonomous, Browser) |

If you need both - for example, a skill that interprets a scheduling request and an action that executes the booking against a specific EHR API - you can use both APIs in the same deployment.
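The contrast between the two shapes can be sketched side by side. The function body and the config keys below are illustrative only; the actual Action packaging format and Skill schema are not shown in this page.

```python
# Contrast sketch: an Action is code you write and deploy; a Skill is
# configuration the platform executes with an LLM. Both the function
# and the config keys are hypothetical.

# Action (Classic): deterministic Python with explicit error handling.
def book_appointment(patient_id: str, slot: str) -> dict:
    """You own the logic, the retries, and the failure modes."""
    if not slot:
        raise ValueError("slot is required")
    return {"status": "booked", "patient_id": patient_id, "slot": slot}

# Skill (Platform): declarative; the LLM decides how to accomplish it.
scheduling_skill = {
    "system_prompt": (
        "Interpret the caller's scheduling request and extract the "
        "desired appointment window."
    ),
    "input_schema": {"utterance": "string"},
    "integrations": ["ehr_connector"],  # hypothetical reference name
}
```

Read together: the skill interprets the free-form request, and an action (or a Platform tool tier) performs the deterministic booking it resolves to.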


Dynamic behaviors are a Classic API feature. The Platform API voice agent does not use dynamic behaviors at runtime. Conversation adaptation in the Platform API is handled through the context graph's state-level configuration and skill-based execution tiers. Tool availability is strictly state-gated - the agent only sees tools listed in the current state's specs, with no global tool pool.


Developer Guide - For API endpoints, SDK examples, and integration details, see the Classic API and Platform API pages in the developer guide. The Platform API does not have official SDKs yet - use direct HTTP with Bearer token authentication.
