# Memory Architecture & API Mapping

Amigo's Functional Memory system uses a four-layer architecture to build, store, and refine knowledge about each user. This page maps those layers to the API endpoints you use to read and influence them.

{% hint style="info" %}
**Classic API** - This memory architecture applies to Classic API user models. The Platform API uses the [World Model](https://docs.amigo.ai/developer-guide/platform-api/platform-api/data-world-model) for patient data.
{% endhint %}

{% hint style="info" %}
**Conceptual vs. API documentation.** The four-layer model (L0-L3) comes from Amigo's [Conceptual Documentation](https://docs.amigo.ai). This page bridges those concepts to the REST API you interact with as a developer.
{% endhint %}

## The Four Layers

{% @mermaid/diagram content="%%{init: {"flowchart": {"useMaxWidth": true, "nodeSpacing": 30, "rankSpacing": 50}, "theme": "base", "themeVariables": {"primaryColor": "#D4E2E7", "primaryTextColor": "#100F0F", "primaryBorderColor": "#083241", "lineColor": "#575452", "textColor": "#100F0F", "clusterBkg": "#F1EAE7", "clusterBorder": "#D7D2D0"}}}%%
flowchart TB
L0["L0: Raw Transcripts"]
L1["L1: Extracted Memories"]
L2["L2: Episodic User Models"]
L3["L3: Global User Model"]

L0 -->|extract-memories| L1
L1 -->|generate-user-models| L2
L2 -->|aggregate| L3

style L0 fill:#F1EAE7,stroke:#D7D2D0,color:#100F0F,stroke-width:2px
style L1 fill:#D4E2E7,stroke:#083241,color:#100F0F,stroke-width:2px
style L2 fill:#D4E2E7,stroke:#083241,color:#100F0F,stroke-width:2px
style L3 fill:#E8E2EB,stroke:#C5BACE,color:#100F0F,stroke-width:2px" %}

| Layer  | Name                 | Description                                             | API Access          |
| ------ | -------------------- | ------------------------------------------------------- | ------------------- |
| **L0** | Raw Transcripts      | The complete conversation message history               | Direct via REST API |
| **L1** | Extracted Memories   | Atomic facts and observations pulled from conversations | Direct via REST API |
| **L2** | Episodic User Models | Per-conversation structured models of the user          | Internal only       |
| **L3** | Global User Model    | Aggregated, evolving model across all conversations     | Direct via REST API |

## Layer-by-Layer API Mapping

### L0: Raw Transcripts

Raw transcripts are the unprocessed conversation messages. Retrieve them using the conversation messages endpoint.

**Endpoint:** `GET /v1/{org}/conversation/{conversation_id}/messages/`

{% tabs %}
{% tab title="Python SDK" %}

```python
from amigo_sdk import AmigoClient

with AmigoClient(
    api_key="your-api-key",
    api_key_id="your-api-key-id",
    user_id="your-user-id",
    organization_id="your-org-id"
) as client:
    messages = client.conversation.get_messages(
        conversation_id="conv_abc123"
    )
    for msg in messages:
        print(f"[{msg.role}] {msg.content}")
```

{% endtab %}

{% tab title="curl" %}

```bash
curl -s "https://api.amigo.ai/v1/${ORG_ID}/conversation/${CONVERSATION_ID}/messages/" \
  -H "Authorization: Bearer ${API_KEY}"
```

{% endtab %}
{% endtabs %}

### L1: Extracted Memories

After a conversation ends, post-processing extracts atomic facts (memories) from the transcript. These are the building blocks that feed into higher layers.

**Endpoint:** `GET /v1/{org}/user/{user_id}/memory`

{% tabs %}
{% tab title="Python SDK" %}

```python
with AmigoClient(
    api_key="your-api-key",
    api_key_id="your-api-key-id",
    user_id="your-user-id",
    organization_id="your-org-id"
) as client:
    memories = client.user.get_user_memory(user_id="user_12345")
    for memory in memories:
        print(f"Memory: {memory.content}")
```

{% endtab %}

{% tab title="curl" %}

```bash
curl -s "https://api.amigo.ai/v1/${ORG_ID}/user/${USER_ID}/memory" \
  -H "Authorization: Bearer ${API_KEY}"
```

{% endtab %}
{% endtabs %}

### L2: Episodic User Models

Episodic user models are per-conversation structured representations generated during post-processing. They capture what the system learned about the user in that specific conversation.

{% hint style="warning" %}
**No direct API access.** L2 models are internal to Amigo's processing pipeline. They are consumed automatically when building the L3 Global User Model. You do not need to (and cannot) read or write them directly.
{% endhint %}

### L3: Global User Model

The Global User Model is the aggregated, continuously evolving representation of a user across all their conversations. It is organized by dimensions (e.g., "Medical History", "Communication Preferences") and includes supporting insight references.

**Endpoint:** `GET /v1/{org}/user/{user_id}/user_model`

{% tabs %}
{% tab title="Python SDK" %}

```python
with AmigoClient(
    api_key="your-api-key",
    api_key_id="your-api-key-id",
    user_id="your-user-id",
    organization_id="your-org-id"
) as client:
    user_model = client.user.get_user_model(user_id="user_12345")
    for entry in user_model.user_models:
        print(f"[{entry.dimensions[0].description}] {entry.content}")
```

{% endtab %}

{% tab title="curl" %}

```bash
curl -s "https://api.amigo.ai/v1/${ORG_ID}/user/${USER_ID}/user_model" \
  -H "Authorization: Bearer ${API_KEY}"
```

{% endtab %}
{% endtabs %}

See [User Models](https://docs.amigo.ai/developer-guide/classic-api/core-api/users/user-models) for the full response shape and dimension details.

## Enriching the User Model

You can supplement Amigo's automatically generated user model with facts from your own systems using the `additional_context` field on the user update endpoint.

**Endpoint:** `POST /v1/{org}/user/{requested_user_id}`

```python
import requests

resp = requests.post(
    f"https://api.amigo.ai/v1/{ORG_ID}/user/{USER_ID}",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "additional_context": [
            "Tony's average fasting glucose over the past week is 105 mg/dL.",
            "Tony exercises three times a week (mostly swimming).",
            "Tony prefers morning appointment times.",
        ]
    },
)
resp.raise_for_status()
```

{% hint style="info" %}
**`additional_context` is additive.** Each call appends new facts. Amigo processes and integrates them into the user model over time. Provide concise, self-contained sentences with units and dates where relevant.
{% endhint %}

See [User Models: Update Additional Context](https://docs.amigo.ai/developer-guide/classic-api/core-api/users/user-models#update-additional-context-quick-start) for formatting guidance and examples.

## Knowing When Memories and Models Update

Memories (L1) and user models (L2/L3) are generated asynchronously during post-processing after a conversation ends. Use the **`conversation-post-processing-complete`** webhook event to know exactly when each stage finishes.

**Webhook payload:**

```json
{
    "type": "conversation-post-processing-complete",
    "post_processing_type": "extract-memories",
    "conversation_id": "conv_abc123",
    "org_id": "org_xyz"
}
```

**Post-processing types relevant to memory:**

| `post_processing_type` | Memory Layer Affected | What Happened                                 |
| ---------------------- | --------------------- | --------------------------------------------- |
| `extract-memories`     | L1                    | New memories extracted from the conversation  |
| `generate-user-models` | L2 and L3             | Episodic model built and global model updated |

### Practical Pattern: React to New Memories

{% @mermaid/diagram content="%%{init: {"theme": "base", "themeVariables": {"actorBkg": "#083241", "actorTextColor": "#FFFFFF", "actorBorder": "#083241", "signalColor": "#575452", "signalTextColor": "#100F0F", "labelBoxBkgColor": "#F1EAE7", "labelBoxBorderColor": "#D7D2D0", "labelTextColor": "#100F0F", "loopTextColor": "#100F0F", "noteBkgColor": "#F1EAE7", "noteBorderColor": "#D7D2D0", "noteTextColor": "#100F0F", "activationBkgColor": "#E8E2EB", "activationBorderColor": "#083241", "altSectionBkgColor": "#F1EAE7", "altSectionColor": "#100F0F"}}}%%
sequenceDiagram
autonumber
participant App as Your Application
participant Amigo as Amigo API
participant WH as Webhook Endpoint

App->>Amigo: Conversation ends (finish)
Amigo-->>Amigo: Post-processing begins
Amigo->>WH: conversation-post-processing-complete<br/>(extract-memories)
WH->>Amigo: GET /user/{id}/memory
Amigo-->>WH: Updated memories
Amigo->>WH: conversation-post-processing-complete<br/>(generate-user-models)
WH->>Amigo: GET /user/{id}/user_model
Amigo-->>WH: Updated user model" %}

1. A conversation finishes (the user or your app calls the finish endpoint).
2. Amigo runs post-processing asynchronously.
3. Your webhook endpoint receives `extract-memories` -- you can now fetch updated L1 memories.
4. Your webhook endpoint receives `generate-user-models` -- you can now fetch the updated L3 user model.
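
The reaction logic in steps 3 and 4 can be sketched as a small dispatcher. This is a minimal illustration, not SDK code: `follow_up_path` is a hypothetical helper, and resolving the webhook's `conversation_id` to a `user_id` is assumed to be your own application's lookup. The endpoint paths are the ones documented on this page.

```python
from typing import Optional


def follow_up_path(payload: dict, org_id: str, user_id: str) -> Optional[str]:
    """Return the read endpoint that is now safe to call, or None.

    `payload` is the conversation-post-processing-complete webhook body;
    the mapping follows the post-processing types table above.
    """
    if payload.get("type") != "conversation-post-processing-complete":
        return None
    stage = payload.get("post_processing_type")
    if stage == "extract-memories":
        # L1 memories are fresh for this user
        return f"/v1/{org_id}/user/{user_id}/memory"
    if stage == "generate-user-models":
        # L3 global user model has been rebuilt
        return f"/v1/{org_id}/user/{user_id}/user_model"
    return None
```

Pair this with your webhook handler: after signature verification, call `follow_up_path` and issue the returned GET against the API.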

See [Webhooks](https://docs.amigo.ai/developer-guide/classic-api/webhooks) for setup instructions and signature verification.

## Platform API: Memory Query

The Platform API provides workspace-scoped memory query endpoints for reading entity memory data. These are used by the Developer Console's memory views and are available to any Platform API consumer.

Both endpoints read from pre-computed tables populated by a background pipeline during post-session processing, so reads are fast and do not incur query-time extraction overhead. Newly ingested data is available within seconds of pipeline completion.

### Get Entity Dimensions

|            |                                                    |
| ---------- | -------------------------------------------------- |
| **Method** | `GET`                                              |
| **Path**   | `/v1/{workspace_id}/memory/{entity_id}/dimensions` |
| **Auth**   | Workspace API key                                  |

Returns the pre-aggregated memory dimension breakdown for an entity - which dimensions have data, the number of facts per dimension, average confidence, source count, and latest fact timestamp. All six built-in dimensions are always returned (with zero counts for empty ones).
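
Because empty dimensions are returned with zero counts, a common follow-up is filtering the breakdown down to dimensions that actually hold data. A minimal sketch, assuming the response deserializes to a list of per-dimension objects with `name` and `fact_count` fields (these field names are illustrative assumptions, not confirmed by this page):

```python
def nonempty_dimensions(breakdown: list) -> list:
    """Keep only dimensions with at least one fact.

    `breakdown` is the parsed JSON from the dimensions endpoint;
    "name" and "fact_count" are assumed field names.
    """
    return [d["name"] for d in breakdown if d.get("fact_count", 0) > 0]
```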

### Get Entity Facts

|            |                                               |
| ---------- | --------------------------------------------- |
| **Method** | `GET`                                         |
| **Path**   | `/v1/{workspace_id}/memory/{entity_id}/facts` |
| **Auth**   | Workspace API key                             |

Returns individual memory facts for an entity, classified by dimension. Each fact includes `extracted_text` (a pre-computed human-readable summary), event type, source, confidence, and ingestion timestamp. Supports filtering by dimension and pagination via `limit`/`offset` parameters.
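
The filter and pagination described above can be composed into a request URL. This is a hedged sketch: `entity_facts_url` is a hypothetical helper, the `dimension` query-parameter name is assumed from the description, and the default page size is illustrative.

```python
from urllib.parse import urlencode
from typing import Optional


def entity_facts_url(base: str, workspace_id: str, entity_id: str,
                     dimension: Optional[str] = None,
                     limit: int = 50, offset: int = 0) -> str:
    """Compose the entity-facts URL with an optional dimension filter
    and limit/offset pagination."""
    params = {"limit": limit, "offset": offset}
    if dimension is not None:
        params["dimension"] = dimension  # assumed parameter name
    return f"{base}/v1/{workspace_id}/memory/{entity_id}/facts?{urlencode(params)}"
```

Issue the resulting URL as a `GET` with your workspace API key in the `Authorization` header, stepping `offset` by `limit` until a page comes back short.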

### Memory Settings

|            |                                      |
| ---------- | ------------------------------------ |
| **Method** | `GET` / `PUT`                        |
| **Path**   | `/v1/{workspace_id}/settings/memory` |
| **Auth**   | Workspace API key                    |

Configure which memory dimensions are active for the workspace, their weights, and associated event types. Six built-in dimensions (clinical, behavioral, operational, social, engagement, risk) are provided as defaults. Custom dimensions can be added. See the [Functional Memory](https://docs.amigo.ai/agent/memory) conceptual documentation for details.

## Summary Table

| What You Want to Do                  | API Endpoint                                                 | When Available                               |
| ------------------------------------ | ------------------------------------------------------------ | -------------------------------------------- |
| Read raw conversation messages       | `GET /v1/{org}/conversation/{id}/messages/`                  | During and after conversation                |
| Read extracted memories for a user   | `GET /v1/{org}/user/{user_id}/memory`                        | After `extract-memories` post-processing     |
| Read the global user model           | `GET /v1/{org}/user/{user_id}/user_model`                    | After `generate-user-models` post-processing |
| Add external facts to the user model | `POST /v1/{org}/user/{requested_user_id}`                    | Any time                                     |
| Know when memories/models are ready  | Subscribe to `conversation-post-processing-complete` webhook | After conversation ends                      |
