# Simulations

Amigo's simulation system is an evaluation and testing framework for validating agent behavior before deploying to production. It enables you to define simulated users (personas), test scenarios, and success criteria, then run automated conversations to measure how your agent performs.

## How Simulations Work

The simulation system uses five building blocks that compose together:

{% @mermaid/diagram content="%%{init: {\"flowchart\": {\"useMaxWidth\": true, \"nodeSpacing\": 30, \"rankSpacing\": 40}, \"theme\": \"base\", \"themeVariables\": {\"primaryColor\": \"#D4E2E7\", \"primaryTextColor\": \"#100F0F\", \"primaryBorderColor\": \"#083241\", \"lineColor\": \"#575452\", \"textColor\": \"#100F0F\", \"clusterBkg\": \"#F1EAE7\", \"clusterBorder\": \"#D7D2D0\"}}}%%
flowchart TB
P[Persona] --> UT[Unit Test]
S[Scenario] --> UT
SVC[Service + Version Set] --> UT
M[Metrics + Success Criteria] --> UT
UT --> UTS[Unit Test Set]
UTS --> R[Unit Test Set Run]
R --> A[Artifacts / Results]
style P fill:#DDE3DB,stroke:#2c3827,color:#100F0F,stroke-width:2px
style S fill:#DDE3DB,stroke:#2c3827,color:#100F0F,stroke-width:2px
style UT fill:#F0DDD9,stroke:#AA412A,color:#100F0F,stroke-width:2px
style UTS fill:#F0DDD9,stroke:#AA412A,color:#100F0F,stroke-width:2px
style R fill:#D4E2E7,stroke:#083241,color:#100F0F,stroke-width:2px
style A fill:#E8E2EB,stroke:#C5BACE,color:#100F0F,stroke-width:2px" %}

### Building Blocks

| Component                                                                                                                      | Purpose                                                                                                                                                       |
| ------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [**Personas**](https://docs.amigo.ai/developer-guide/classic-api/core-api/simulations/simulation-personas)                     | Simulated user profiles with a background, role, and preferred language. Versioned so you can iterate on persona definitions without breaking existing tests. |
| [**Scenarios**](https://docs.amigo.ai/developer-guide/classic-api/core-api/simulations/simulation-scenarios)                   | Conversation scripts that define the objective, instructions for the simulated user, and how the conversation starts. Also versioned.                         |
| [**Unit Tests**](https://docs.amigo.ai/developer-guide/classic-api/core-api/simulations/simulation-unit-tests)                 | Combine a persona, a scenario, a service (with version set), and success criteria (metrics with thresholds) into a single test case.                          |
| [**Unit Test Sets**](https://docs.amigo.ai/developer-guide/classic-api/core-api/simulations/simulation-unit-test-sets)         | Group multiple unit tests together, each with a configurable run count, to form a test suite.                                                                 |
| [**Unit Test Set Runs**](https://docs.amigo.ai/developer-guide/classic-api/core-api/simulations/simulation-unit-test-set-runs) | Execute a unit test set. The platform runs all unit tests, evaluates metrics, and produces downloadable artifacts with the results.                           |

### Typical Workflow

1. **Define personas** that represent different user archetypes (e.g., "confused new user", "expert power user", "frustrated customer").
2. **Define scenarios** that describe what the simulated user is trying to accomplish and how the conversation should start.
3. **Create unit tests** that pair a persona with a scenario, target a specific service and version set, and set success criteria based on conversation metrics.
4. **Group unit tests into sets** with run counts (e.g., run each test 5 times to average out run-to-run variance).
5. **Execute runs** and review artifacts to see whether your agent meets the defined success criteria.
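As a rough sketch, the workflow above amounts to composing a handful of payloads. The field names and placeholder IDs below are illustrative assumptions, not the documented API schema — see the linked API reference pages for the exact request shapes:

```python
# Illustrative payloads only. Field names are assumptions drawn from the
# concepts on this page, not the documented Amigo API schema.

# Step 1: a persona -- background, role, preferred language.
persona = {
    "name": "confused-new-user",
    "background": "First-time user who has never seen the product.",
    "role": "customer",
    "preferred_language": "en",
}

# Step 2: a scenario -- objective, instructions, conversation start.
scenario = {
    "name": "password-reset",
    "objective": "Get help resetting a forgotten password.",
    "instructions": "Act uncertain and ask clarifying questions.",
}

# Step 3: a unit test pairs persona + scenario, targets a service and
# version set, and sets success criteria as metric thresholds.
unit_test = {
    "persona_id": "<persona-id>",
    "scenario_id": "<scenario-id>",
    "service_id": "<service-id>",
    "service_version_set": "<version-set>",
    "success_criteria": [
        {"metric": "<metric-id>", "threshold": 0.9},
    ],
}

# Step 4: a unit test set groups tests, each with a run count.
unit_test_set = {
    "name": "onboarding-suite",
    "unit_tests": [
        {"unit_test_id": "<unit-test-id>", "run_count": 5},
    ],
}
```

Step 5 is then a single "run this set" call; the platform executes every unit test the configured number of times and produces downloadable result artifacts.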

{% hint style="info" %}
**Versioning**

Personas and scenarios are versioned independently. When you update a persona's background or a scenario's instructions, you create a new version. Unit tests reference the persona and scenario by ID and always use the latest version at run time. This lets you iterate on test definitions without recreating unit tests.
{% endhint %}
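The resolution rule — reference by ID, resolve the latest version at run time — can be sketched with a minimal in-memory model (the store and function names here are illustrative, not the API):

```python
# Minimal sketch of latest-version resolution. Names are illustrative,
# not the Amigo API.

personas: dict[str, list[dict]] = {}  # persona_id -> append-only version list

def update_persona(persona_id: str, background: str) -> None:
    """Updating a persona appends a new version; old versions remain."""
    versions = personas.setdefault(persona_id, [])
    versions.append({"version": len(versions) + 1, "background": background})

def resolve_persona(persona_id: str) -> dict:
    """Unit tests store only the ID and get the newest version at run time."""
    return personas[persona_id][-1]

update_persona("p-1", "New user, never seen the product.")
update_persona("p-1", "New user, signed up yesterday, mildly frustrated.")

unit_test = {"persona_id": "p-1"}        # the test stores only the ID...
latest = resolve_persona(unit_test["persona_id"])  # ...and resolves at run time
assert latest["version"] == 2
```

Because the unit test never pins a version, iterating on the persona automatically flows into every test that references it.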

{% hint style="success" %}
**Tool Execution Modes**

During simulations, tools are invoked with `invocation_mode: "conversation-simulation"` instead of `"regular"`. This lets your tools mock external calls and avoid side effects. See [Tools: Execution Modes](https://docs.amigo.ai/developer-guide/classic-api/tools#execution-modes) for implementation details.
{% endhint %}
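On the tool side, this mode flag is the natural branch point for stubbing out external effects. A minimal sketch — only the mode values (`"regular"`, `"conversation-simulation"`) come from this page; the handler shape and payload fields are assumptions:

```python
# Sketch of a tool handler that branches on invocation_mode.
# Only the mode strings come from the docs; the handler shape and
# payload fields are illustrative assumptions.

def lookup_order(request: dict) -> dict:
    mode = request.get("invocation_mode", "regular")
    if mode == "conversation-simulation":
        # Simulation run: return a canned result and skip side effects.
        return {"order_id": request["order_id"], "status": "shipped", "mock": True}
    # Regular run: call the real backend (placeholder in this sketch).
    raise NotImplementedError("real backend call goes here")

# During a simulation the platform invokes the tool with the
# conversation-simulation mode, so the mock path is taken.
result = lookup_order({"invocation_mode": "conversation-simulation",
                       "order_id": "o-42"})
assert result["mock"] is True
```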

## API Categories

### Personas

[**Simulation Personas**](https://docs.amigo.ai/developer-guide/classic-api/core-api/simulations/simulation-personas) -- Create, list, search, update, delete, and version simulated user profiles.

### Scenarios

[**Simulation Scenarios**](https://docs.amigo.ai/developer-guide/classic-api/core-api/simulations/simulation-scenarios) -- Create, list, search, update, delete, and version conversation test scenarios.

### Unit Tests

[**Simulation Unit Tests**](https://docs.amigo.ai/developer-guide/classic-api/core-api/simulations/simulation-unit-tests) -- Create, list, search, update, and delete individual test cases.

### Unit Test Sets

[**Simulation Unit Test Sets**](https://docs.amigo.ai/developer-guide/classic-api/core-api/simulations/simulation-unit-test-sets) -- Create, list, search, update, and delete grouped test suites.

### Unit Test Set Runs

[**Simulation Unit Test Set Runs**](https://docs.amigo.ai/developer-guide/classic-api/core-api/simulations/simulation-unit-test-set-runs) -- Execute test suites, monitor progress, cancel runs, and download result artifacts.

## CLI Testing Tools (Agent Forge)

The Agent Forge SDK provides CLI commands that build on top of the simulation APIs for automated testing:

* **`forge simulation run`** - Coverage-optimized multi-session simulation that scores recommended responses against the context graph to systematically explore states, behaviors, and tools.
* **`forge simulation bridge`** - Claude-driven multi-scenario testing from a natural language objective, with pass^k consistency testing.
* **`forge simulation plan`** - Generate target specs from natural language objectives or metric stress tests.
* **`forge simulation evaluate`** - Compare metric scores across simulation runs (before/after diff mode).
* **`forge conversation simulate-step`** - Agent-driven step-by-step simulation with interaction insights (current state, behaviors, tools called).

{% hint style="info" %}
These CLI commands use ephemeral test users for parallel execution. See the [Agent Forge README](https://github.com/amigo-ai/agent-forge) for setup and usage.
{% endhint %}

## Related

* Core API → [Services](https://docs.amigo.ai/developer-guide/classic-api/core-api/services)
* Core API → [Tools](https://docs.amigo.ai/developer-guide/classic-api/core-api/tools)
* Data Access → [Simulation Tables](https://docs.amigo.ai/developer-guide/classic-api/data-access/organization-tables/simulation)
* Getting Started → [Authentication](https://docs.amigo.ai/developer-guide/getting-started/authentication)
