# Events

When creating or interacting with conversations, the API responds with an NDJSON stream of events. This page summarizes the event types and how to use them.

## Event Types

| Event                     | `type`                   | Description                                                                                          |
| ------------------------- | ------------------------ | ---------------------------------------------------------------------------------------------------- |
| ConversationCreatedEvent  | `conversation-created`   | Contains the `conversation_id` for subsequent calls.                                                 |
| UserMessageAvailableEvent | `user-message-available` | Present when `initial_message` is provided; may represent a user message or an external event.       |
| NewMessageEvent           | `new-message`            | Streaming message chunks from the agent (text or voice).                                             |
| InteractionCompleteEvent  | `interaction-complete`   | Indicates the current interaction completed successfully.                                            |
| ErrorEvent                | `error`                  | Indicates an internal error; the interaction is rolled back.                                         |
| CurrentAgentActionEvent   | `current-agent-action`   | Emitted only when the `current_agent_action_type` query parameter is set.                            |
| ActionTooLongEvent        | `action-too-long`        | Indicates an agent action is taking longer than expected; provides audio/text filler for voice mode. |

## Important Notes

* Persist `conversation_id` from `conversation-created`.
* Continue reading until `interaction-complete`.
* Handle `error` events. Nothing from that interaction is persisted.
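
A minimal consumer sketch in Python, assuming a streaming `requests` response and only the event fields described above (any other payload fields are illustrative, not an exact contract):

```python
import json
import requests  # any HTTP client that exposes the body as a line iterator works


def consume_events(response: requests.Response) -> tuple[str | None, str | None]:
    """Read NDJSON events until the interaction completes or errors out."""
    conversation_id = None
    interaction_id = None
    for raw_line in response.iter_lines():
        if not raw_line:                    # skip keep-alive blank lines
            continue
        event = json.loads(raw_line)
        etype = event["type"]
        if etype == "conversation-created":
            conversation_id = event["conversation_id"]  # persist for later calls
        elif etype == "new-message":
            pass                            # append the streamed chunk to your UI
        elif etype == "interaction-complete":
            interaction_id = event.get("interaction_id")
            break                           # stop reading: the interaction is done
        elif etype == "error":
            raise RuntimeError("Interaction failed; nothing from it was persisted")
    return conversation_id, interaction_id
```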

## Understanding Agent Actions

CurrentAgentActionEvent reveals what the agent is doing while it generates a response.

### Dynamic Behavior Triggered

Emitted when a dynamic behavior is triggered.

```json
{
  "type": "current-agent-action",
  "action": {
    "type": "dynamic-behavior-triggered",
    "dynamic_behavior_set_version_info": ["<DYNAMIC-BEHAVIOR-SET-ID>", 3]
  }
}
```

Use this as a trigger to evaluate metrics and drive business workflows.

### System Integration Flow

1. Capture `dynamic_behavior_set_version_info` when the event appears.
2. Map behavior IDs to the metrics you want to compute in your system.
3. Evaluate metrics and act (route, escalate, create tickets, store results).
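
A rough sketch of this flow in Python, assuming a `requests`-based client; the `BEHAVIOR_TO_METRICS` mapping and the placeholder IDs are illustrative, not part of the API:

```python
import requests

BASE_URL = "https://api.amigo.ai/v1/<YOUR-ORG-ID>"
HEADERS = {"Authorization": "Bearer <JWT>", "Content-Type": "application/json"}

# Illustrative mapping from dynamic behavior set IDs to the metrics you compute.
BEHAVIOR_TO_METRICS = {
    "<DYNAMIC-BEHAVIOR-SET-ID>": ["metric_id_1", "metric_id_2"],
}


def on_dynamic_behavior_triggered(event: dict, conversation_id: str, interaction_id: str) -> None:
    """Steps 1-3: capture version info, map it to metrics, evaluate, and act."""
    behavior_set_id, version = event["action"]["dynamic_behavior_set_version_info"]
    # `version` can be used to look up the exact behavior definition that fired.
    metric_ids = BEHAVIOR_TO_METRICS.get(behavior_set_id, [])
    if not metric_ids:
        return  # no metrics registered for this behavior set

    resp = requests.post(
        f"{BASE_URL}/metric/evaluate",
        headers=HEADERS,
        json={
            "metric_ids": metric_ids,
            "conversation_id": conversation_id,
            "evaluate_to_interaction_id": interaction_id,
        },
    )
    resp.raise_for_status()
    results = resp.json()  # route, escalate, create tickets, or store these results
```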

{% @mermaid/diagram content="%%{init: {'theme': 'base', 'themeVariables': {'actorBkg': '#083241', 'actorTextColor': '#FFFFFF', 'actorBorder': '#083241', 'signalColor': '#575452', 'signalTextColor': '#100F0F', 'labelBoxBkgColor': '#F1EAE7', 'labelBoxBorderColor': '#D7D2D0', 'labelTextColor': '#100F0F', 'loopTextColor': '#100F0F', 'noteBkgColor': '#F1EAE7', 'noteBorderColor': '#D7D2D0', 'noteTextColor': '#100F0F', 'activationBkgColor': '#E8E2EB', 'activationBorderColor': '#083241', 'altSectionBkgColor': '#F1EAE7', 'altSectionColor': '#100F0F'}}}%%
sequenceDiagram
autonumber
participant C as Customer System
participant A as Amigo REST API
participant S as Your System (Metrics)

C->>A: POST /v1/{org}/conversation/<br/>{conversation\_id}/interact
A-->>C: 200 OK (NDJSON stream)
loop NDJSON events
A-->>C: new-message "..."
A-->>C: interaction-complete { interaction\_id }
opt Dynamic behavior triggered
A-->>C: current-agent-action { type: dynamic-behavior-triggered,<br/>dynamic\_behavior\_set\_version\_info: \[id, v] }
C->>S: Forward event to internal handlers
S->>S: Lookup behavior → metric mapping
Note over S: Map behavior to one or more<br/>internal metric\_ids
S->>A: POST /v1/{org}/metric/evaluate<br/>{ metric\_ids, conversation\_id,<br/>evaluate\_to\_interaction\_id }
A-->>S: 200 OK { results }
end
end" %}

#### Retrieve Dynamic Behavior Set Versions

{% openapi src="https://api.amigo.ai/v1/openapi.json" path="/v1/{organization}/dynamic_behavior_set/{dynamic_behavior_set_id}/version/" method="get" %}
<https://api.amigo.ai/v1/openapi.json>
{% endopenapi %}
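
For example, a lookup of the available versions for a behavior set captured from the event could look like the following sketch (placeholders as in the other examples):

```python
import requests

resp = requests.get(
    "https://api.amigo.ai/v1/<YOUR-ORG-ID>/dynamic_behavior_set/"
    "<DYNAMIC-BEHAVIOR-SET-ID>/version/",
    headers={"Authorization": "Bearer <JWT>"},
)
resp.raise_for_status()
versions = resp.json()  # inspect which version of the behavior set was triggered
```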

#### Compute Metrics

{% openapi src="https://api.amigo.ai/v1/openapi.json" path="/v1/{organization}/metric/evaluate" method="post" %}
<https://api.amigo.ai/v1/openapi.json>
{% endopenapi %}

```bash
curl --request POST \
  --url 'https://api.amigo.ai/v1/<YOUR-ORG-ID>/metric/evaluate' \
  --header 'Authorization: Bearer <JWT>' \
  --header 'Content-Type: application/json' \
  --data '{
    "metric_ids": ["metric_id_1", "metric_id_2"],
    "conversation_id": "<CONVERSATION-ID>",
    "evaluate_to_interaction_id": "<INTERACTION-ID>"
  }'
```

### Managing Perceived Latency with Audio Fillers

When using `response_format=voice`, the agent may emit an `ActionTooLongEvent` when an operation exceeds its configured time threshold during an interaction.

```json
{
  "type": "current-agent-action",
  "action": {
    "type": "action-too-long",
    "filler": "base64_encoded_audio_or_text",
    "previously_started_event": {
      "type": "tool-call-started",
      "tool_name": "search_knowledge_base"
    }
  }
}
```

#### Purpose

Audio fillers improve voice conversation experiences by:

* **Reducing perceived latency**: they play contextual audio during processing delays.
* **Maintaining conversation flow**: they provide natural feedback instead of silence.
* **Improving user experience**: wait times feel shorter and more natural.

#### Event Structure

| Field                      | Type                | Description                                                  |
| -------------------------- | ------------------- | ------------------------------------------------------------ |
| `type`                     | `"action-too-long"` | Event type identifier                                        |
| `filler`                   | string              | Base64-encoded PCM audio (16kHz, 16-bit, mono) or plain text |
| `previously_started_event` | object              | The action that is taking longer than expected               |
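
One possible way for a client to handle the `filler` field, assuming the base64-decode check below is an acceptable heuristic for distinguishing audio from the text fallback:

```python
import base64


def handle_filler(filler: str) -> None:
    """Play or show an action-too-long filler while the slow action finishes."""
    try:
        # Heuristic: fillers are normally base64 audio; plain text will fail to decode.
        audio_bytes = base64.b64decode(filler, validate=True)
    except ValueError:                  # binascii.Error is a ValueError subclass
        print(filler)                   # audio generation failed: show the original text
        return
    with open("filler_audio.pcm", "wb") as f:
        f.write(audio_bytes)            # hand 16 kHz / 16-bit / mono PCM to your player
```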

#### Audio Filler Types

Audio fillers are triggered in different scenarios based on **Context Graph state types**. Context Graphs define how agents navigate problem spaces using different types of states:

{% hint style="info" %}
**Context Graph State Types**

Context Graphs (API: `service_hierarchical_state_machine`) consist of different state types:

* **ActionState**: perform actions toward an objective.
* **DecisionState**: choose between multiple paths.
* **ReflectionState**: generate internal analysis.
* **ToolCallState**: execute a specific tool end-to-end.
* **RecallState** and **AnnotationState**: no audio fillers.

Learn more about Context Graphs in our [Conceptual Documentation](https://docs.amigo.ai/agent/context-graphs).
{% endhint %}

Audio fillers are triggered in these scenarios:

**1. Decision-Making Delays** (DecisionState)

When the agent's decision-making LLM interaction exceeds the timeout:

```
Examples: "Let me think about that...", "Just a moment..."
```

**2. Reflection Delays** (ReflectionState)

When reflection generation exceeds the timeout:

```
Examples: "Let me consider this carefully...", "Analyzing that information..."
```

**3. Designated Tool Delays** (ToolCallState)

When the entire tool call process (parameter generation + execution) exceeds the timeout:

```
Examples: "I'm looking that up for you...", "Searching now...", "Let me check on that..."
```

**4. Helper Tool Delays** (during param generation, decisions, reflections, actions)

When helper tools executed during other operations exceed their timeouts:

```
Examples: "Checking that information...", "One moment...", "Let me verify..."
```

<details>

<summary>Audio filler configuration by Context Graph state type</summary>

**Configuration**

Audio fillers are configured in the service's **Context Graph** (API field: `service_hierarchical_state_machine`) using state-specific fields. Each state type in a Context Graph can have audio fillers configured:

**DecisionState:**

* `audio_fillers` + `audio_filler_triggered_after`: for the decision-making process
* `tool_call_specs[].audio_fillers` + `audio_filler_triggered_after`: for helper tools during decision

**ReflectionState:**

* `audio_fillers` + `audio_filler_triggered_after`: for the reflection generation
* `tool_call_specs[].audio_fillers` + `audio_filler_triggered_after`: for helper tools during reflection

**ToolCallState:**

* `designated_tool_call_params_generation_audio_fillers` + `designated_tool_call_params_generation_audio_filler_triggered_after`: for the entire designated tool process (param generation + execution)
* `tool_call_specs[].audio_fillers` + `audio_filler_triggered_after`: for helper tools during param generation

**ActionState:**

* `action_tool_call_specs[].audio_fillers` + `audio_filler_triggered_after`: for tools used during actions
* `exit_condition_tool_call_specs[].audio_fillers` + `audio_filler_triggered_after`: for tools used when evaluating exit conditions

All `audio_fillers` are arrays of text strings (max 5). All `audio_filler_triggered_after` are timeouts in seconds (0 < x ≤ 10). When an operation exceeds its threshold, one audio filler is chosen at random and played.
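
As an illustration only (field names taken from the descriptions above; all other Context Graph fields omitted), a ToolCallState fragment with filler configuration might look like:

```python
# Illustrative fragment only: real Context Graph definitions contain many more
# fields. Only the audio filler settings described above are shown here.
tool_call_state_fragment = {
    "designated_tool_call_params_generation_audio_fillers": [
        "I'm looking that up for you...",
        "Searching now...",
    ],
    "designated_tool_call_params_generation_audio_filler_triggered_after": 0.0001,
    "tool_call_specs": [
        {
            "audio_fillers": ["One moment...", "Checking that information..."],
            "audio_filler_triggered_after": 0.0001,
        }
    ],
}
```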

</details>

{% hint style="warning" %}
**Best Practice: Keep Delays Close to Zero**

The `audio_filler_triggered_after` value should be as close to zero as possible (e.g., `0.0001`). Any delay adds directly to perceived latency for users. Since most operations complete quickly, adding delays hurts the majority of interactions.

**Recommended:** `"audio_filler_triggered_after": 0.0001`

The schema requires `> 0`, so you cannot use exactly `0`, but values below 1ms (0.001s) are instantaneous in practice. Agent Forge will warn if your delay is ≥ 1ms.
{% endhint %}

{% hint style="info" %}
**Related: Tool Result Persistence**

Tool call specifications (`tool_call_specs`, `action_tool_call_specs`, `exit_condition_tool_call_specs`) also include a `result_persistence` property that controls how tool outputs are stored and made available to the agent across interactions. See [Tools: Result Persistence](https://docs.amigo.ai/developer-guide/classic-api/tools#tool-result-persistence) for configuration details.
{% endhint %}

#### Implementation Notes

* **Pre-generation**: audio fillers are pre-generated using the agent's voice configuration when a conversation starts.
* **Storage**: generated audio is stored as base64-encoded PCM WAV (16kHz, 16-bit, mono).
* **Selection**: one filler is chosen at random when the threshold is exceeded.
* **Transparency**: the `filler` field contains either the generated audio (base64) or the original text if audio generation failed.

#### Best Practices

1. **Keep fillers natural**: use conversational phrases appropriate for your use case.
2. **Match the context**: different states can have different fillers (for example, "Searching..." for search tools).
3. **Set appropriate timeouts**: balance between too frequent (annoying) and too late (awkward silence).
4. **Provide variety**: configure multiple fillers to avoid repetition in longer conversations.
