Memory Architecture & API Mapping

Four-layer memory model (L0-L3) mapped to Classic API endpoints for reading and writing.

Amigo's Functional Memory system uses a four-layer architecture to build, store, and refine knowledge about each user. This page maps those layers to the API endpoints you use to read and influence them.

Classic API - This memory architecture applies to Classic API user models. The Platform API uses the World Model for patient data.

Conceptual vs. API documentation. The four-layer model (L0-L3) comes from Amigo's Conceptual Documentation. This page bridges those concepts to the REST API you interact with as a developer.

The Four Layers

| Layer | Name | Description | API Access |
| --- | --- | --- | --- |
| L0 | Raw Transcripts | The complete conversation message history | Direct via REST API |
| L1 | Extracted Memories | Atomic facts and observations pulled from conversations | Direct via REST API |
| L2 | Episodic User Models | Per-conversation structured models of the user | Internal only |
| L3 | Global User Model | Aggregated, evolving model across all conversations | Direct via REST API |

Layer-by-Layer API Mapping

L0: Raw Transcripts

Raw transcripts are the unprocessed conversation messages. Retrieve them using the conversation messages endpoint.

Endpoint: GET /v1/{org}/conversation/{conversation_id}/messages/

```python
from amigo_sdk import AmigoClient

with AmigoClient(
    api_key="your-api-key",
    api_key_id="your-api-key-id",
    user_id="your-user-id",
    organization_id="your-org-id"
) as client:
    messages = client.conversation.get_messages(
        conversation_id="conv_abc123"
    )
    for msg in messages:
        print(f"[{msg.role}] {msg.content}")
```

L1: Extracted Memories

After a conversation ends, post-processing extracts atomic facts (memories) from the transcript. These are the building blocks that feed into higher layers.

Endpoint: GET /v1/{org}/user/{user_id}/memory
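A minimal sketch of calling this endpoint over plain HTTP with `requests`. The base URL, bearer-token auth header, and response shape are assumptions for illustration; consult your deployment's API reference for the real values.

```python
import requests

BASE_URL = "https://api.amigo.ai"  # placeholder host -- substitute your deployment's base URL


def memories_url(org: str, user_id: str) -> str:
    """Build the L1 extracted-memories path documented above."""
    return f"{BASE_URL}/v1/{org}/user/{user_id}/memory"


def fetch_memories(org: str, user_id: str, token: str) -> dict:
    """GET a user's extracted memories. The Bearer header is an assumption;
    use whatever auth scheme your API keys require."""
    resp = requests.get(
        memories_url(org, user_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(fetch_memories("your-org", "user_123", "your-token"))
```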

L2: Episodic User Models

Episodic user models are per-conversation structured representations generated during post-processing. They capture what the system learned about the user in that specific conversation.

Episodic user models are internal only. There is no API endpoint to read them directly; their contents surface through the L3 Global User Model once aggregation runs.

L3: Global User Model

The Global User Model is the aggregated, continuously evolving representation of a user across all their conversations. It is organized by dimensions (e.g., "Medical History", "Communication Preferences") and includes supporting insight references.

Endpoint: GET /v1/{org}/user/{user_id}/user_model

See User Models for the full response shape and dimension details.
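A sketch of fetching the Global User Model over plain HTTP. The base URL, auth header, and the `dimensions` key in the response are assumptions; see User Models for the authoritative response shape.

```python
import requests

BASE_URL = "https://api.amigo.ai"  # placeholder host -- substitute your deployment's base URL


def user_model_url(org: str, user_id: str) -> str:
    """Build the L3 Global User Model path documented above."""
    return f"{BASE_URL}/v1/{org}/user/{user_id}/user_model"


def fetch_user_model(org: str, user_id: str, token: str) -> dict:
    """GET the aggregated, continuously evolving user model."""
    resp = requests.get(
        user_model_url(org, user_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    model = fetch_user_model("your-org", "user_123", "your-token")
    # Dimensions such as "Medical History" carry supporting insight references.
    for dimension in model.get("dimensions", []):
        print(dimension)
```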

Enriching the User Model

You can supplement Amigo's automatically generated user model with facts from your own systems using the additional_context field on the user update endpoint.

Endpoint: POST /v1/{org}/user/{requested_user_id}

additional_context is additive. Each call appends new facts. Amigo processes and integrates them into the user model over time. Provide concise, self-contained sentences with units and dates where relevant.

See User Models: Update Additional Context for formatting guidance and examples.
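A sketch of appending one external fact via `additional_context`. The exact request-body shape, base URL, and auth header are assumptions; see User Models: Update Additional Context for the canonical format.

```python
import requests

BASE_URL = "https://api.amigo.ai"  # placeholder host -- substitute your deployment's base URL


def context_update(fact: str) -> dict:
    """Request body carrying one concise, self-contained fact.
    The body shape is an assumption for illustration."""
    return {"additional_context": fact}


def add_fact(org: str, user_id: str, fact: str, token: str) -> None:
    """POST an external fact to the user update endpoint documented above."""
    resp = requests.post(
        f"{BASE_URL}/v1/{org}/user/{user_id}",
        json=context_update(fact),
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()


if __name__ == "__main__":
    # Include units and dates, per the guidance above.
    add_fact("your-org", "user_123",
             "Prescribed lisinopril 10 mg daily on 2024-05-01.",
             "your-token")
```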

Knowing When Memories and Models Update

Memories (L1) and user models (L2/L3) are generated asynchronously during post-processing after a conversation ends. Use the conversation-post-processing-complete webhook event to know exactly when each stage finishes.

Each conversation-post-processing-complete event includes a post_processing_type field identifying which stage just finished.

Post-processing types relevant to memory:

| post_processing_type | Memory Layer Affected | What Happened |
| --- | --- | --- |
| `extract-memories` | L1 | New memories extracted from the conversation |
| `generate-user-models` | L2 and L3 | Episodic model built and global model updated |

Practical Pattern: React to New Memories

  1. A conversation finishes (the user or your app calls the finish endpoint).

  2. Amigo runs post-processing asynchronously.

  3. Your webhook endpoint receives extract-memories -- you can now fetch updated L1 memories.

  4. Your webhook endpoint receives generate-user-models -- you can now fetch the updated L3 user model.
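Steps 3 and 4 can be sketched as a small dispatch helper. The event name and the two post_processing_type values come from the table above; the dict-shaped payload and any other field names are assumptions.

```python
def handle_post_processing(event: dict) -> str:
    """Route a conversation-post-processing-complete event to the memory
    layer it affects, per the table above."""
    kind = event.get("post_processing_type")
    if kind == "extract-memories":
        # L1 is fresh: re-fetch GET /v1/{org}/user/{user_id}/memory
        return "refresh L1 memories"
    if kind == "generate-user-models":
        # L2/L3 are fresh: re-fetch GET /v1/{org}/user/{user_id}/user_model
        return "refresh L3 user model"
    return "ignore"
```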

See Webhooks for setup instructions and signature verification.

Summary Table

| What You Want to Do | API Endpoint | When Available |
| --- | --- | --- |
| Read raw conversation messages | `GET /v1/{org}/conversation/{id}/messages/` | During and after conversation |
| Read extracted memories for a user | `GET /v1/{org}/user/{user_id}/memory` | After `extract-memories` post-processing |
| Read the global user model | `GET /v1/{org}/user/{user_id}/user_model` | After `generate-user-models` post-processing |
| Add external facts to the user model | `POST /v1/{org}/user/{requested_user_id}` | Any time |
| Know when memories/models are ready | Subscribe to `conversation-post-processing-complete` webhook | After conversation ends |
