# Review Queue

The review queue is where operators and supervisors examine events that the automated pipeline has flagged for human review. Events land in the queue when the [confidence gates](https://docs.amigo.ai/data/connectors-and-ehr) determine that automated verification alone is insufficient.

{% @mermaid/diagram content="flowchart LR
A\[Agent Captures Data] --> CL{Confirmation Level}
CL -->|Confirmed| C5\["0.5 confidence"]
CL -->|Mentioned| C3\["0.3 confidence"]
CL -->|Inferred| C2\["0.2 confidence"]
C5 --> AR\[Automated Review]
C3 --> AR
C2 --> AR
AR -->|Verified| WM\[(World Model Update)]
AR -->|Flagged| RQ\[Review Queue]
RQ --> HR\[Human Review]
HR -->|Approved| WM
HR -->|Corrected| WM
HR -->|Rejected| X\[Excluded]" %}

## How Events Enter the Queue

Events enter the review queue through two paths:

1. **Automated review flags** - The review judge in the [connector runner](https://docs.amigo.ai/data/connectors-and-ehr) evaluates an event and determines it needs human review (ambiguous data, low confidence in its own assessment, or a category that requires human sign-off)
2. **Confidence threshold** - Events at certain confidence levels automatically require human review before they can be promoted to verified status

The review loop is event-driven. When the world model writer commits a low-confidence event, it publishes a notification that the review loop picks up and processes within seconds. A periodic safety-net poll catches any events missed by the real-time path. Stale item expiration runs on its own cadence, separate from the review evaluation cycle.

Each queued event includes the full context an operator needs to make a decision: the event data, the entity it belongs to, the source transcript or record, and the automated review analysis with its reasoning.
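The event-driven loop described above can be sketched as follows. This is an illustrative model only, not the platform's actual implementation: the names (`ReviewLoop`, `on_event_committed`, `safety_net_poll`) and the 0.5 threshold are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Event:
    event_id: str
    entity_id: str
    confidence: float
    queued: bool = False

class ReviewLoop:
    def __init__(self, threshold: float = 0.5):  # threshold is illustrative
        self.threshold = threshold
        self.queue: list[Event] = []

    def on_event_committed(self, event: Event) -> None:
        """Real-time path: invoked when the world model writer publishes
        a commit notification for a low-confidence event."""
        if event.confidence < self.threshold and not event.queued:
            event.queued = True
            self.queue.append(event)

    def safety_net_poll(self, all_events: list[Event]) -> None:
        """Periodic path: re-checks all events so anything the
        notification path missed still lands in the queue."""
        for event in all_events:
            self.on_event_committed(event)
```

The `queued` flag keeps the two paths idempotent: an event picked up by the real-time path is not queued a second time by the poll.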

## Entity-Level Deduplication

The review queue enforces one pending item per entity. When multiple events for the same entity are flagged, they are grouped into a single review item rather than creating duplicate entries. This works at two levels:

1. **Event fetch** - The review loop skips events whose entity already has a pending review item, avoiding redundant review evaluations
2. **Verdict application** - When a new event is flagged for an entity that already has a pending item, the event ID is appended to the existing item instead of creating a new row

This prevents queue bloat in high-volume scenarios where a single entity generates many events in a short window.
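Both levels of deduplication reduce to one invariant: at most one pending item per entity. A minimal sketch, with hypothetical names (`ReviewQueue`, `flag`, `has_pending`):

```python
class ReviewQueue:
    def __init__(self):
        # entity_id -> list of flagged event IDs (one row per entity)
        self.pending: dict[str, list[str]] = {}

    def flag(self, entity_id: str, event_id: str) -> None:
        if entity_id in self.pending:
            # Verdict application level: append to the existing item
            # instead of creating a new row.
            self.pending[entity_id].append(event_id)
        else:
            self.pending[entity_id] = [event_id]

    def has_pending(self, entity_id: str) -> bool:
        # Event fetch level: the review loop skips entities that
        # already have a pending item.
        return entity_id in self.pending
```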

## Review Actions

Operators can take three actions on a queued event:

| Action      | What Happens                                                                                                                                                            |
| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Approve** | The event's confidence is elevated to 0.95 (human-approved). It becomes eligible for EHR sync.                                                                          |
| **Reject**  | The event's confidence is set to 0.0 (rejected). It is excluded from entity projections and will not sync to any external system.                                       |
| **Correct** | The operator provides corrected data. A new event is created at confidence 1.0 (authoritative) that supersedes the original. The original event is preserved for audit. |

Batch operations are supported - operators can approve or reject multiple events at once when reviewing a set of similar items.

## Priority and Ordering

Review items are ordered by priority, not arrival time. Priority is determined by:

* **Entity type** - Patient-facing events (medication changes, appointment bookings) rank higher than administrative events
* **Downstream impact** - Events that block an outbound EHR sync rank higher than events with no pending external action
* **Age** - Older items are promoted to prevent stale reviews from accumulating

Operators see the highest-priority items first. When multiple events are grouped under a single entity (via entity-level deduplication), the group inherits the highest priority of its constituent events.
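One way to picture the ordering is a scoring function over the three factors. The weights and the age cap below are invented for the sketch; only the factors themselves come from the text.

```python
# Illustrative weights: patient-facing entity types rank above
# administrative ones.
ENTITY_WEIGHT = {"medication": 3, "appointment": 2, "administrative": 1}

def priority(item: dict, now: float) -> float:
    score = float(ENTITY_WEIGHT.get(item["entity_type"], 1))
    if item["blocks_ehr_sync"]:
        score += 2  # downstream impact: a pending outbound sync is blocked
    age_hours = (now - item["created_at"]) / 3600
    score += min(age_hours / 24, 2)  # promote stale items, capped
    return score

def group_priority(events: list[dict], now: float) -> float:
    """A deduplicated group inherits the highest priority of its members."""
    return max(priority(e, now) for e in events)
```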

## Who Reviews

Access to the review queue is role-based:

* **Operators** can approve, reject, and correct events within their assigned workspace
* **Supervisors** can review across workspaces within their organization and override prior operator decisions
* **Automated rules** can be configured to auto-approve specific event types that have a consistent track record - for example, patient demographic updates from authoritative EHR sources. Auto-approved events are logged as such for audit purposes
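The access model and auto-approval rules might be sketched as below. The user/item fields, the rule configuration, and the audit log shape are all assumptions for illustration, not the platform's schema.

```python
def can_review(user: dict, item: dict) -> bool:
    if user["role"] == "supervisor":
        # Supervisors review across workspaces within their organization.
        return item["org"] == user["org"]
    if user["role"] == "operator":
        # Operators act only within their assigned workspaces.
        return item["workspace"] in user["workspaces"]
    return False

# Illustrative rule: demographic updates from authoritative EHR sources.
AUTO_APPROVE_TYPES = {"patient_demographics"}

def maybe_auto_approve(event: dict, audit_log: list) -> bool:
    if event["type"] in AUTO_APPROVE_TYPES and event["source"] == "ehr":
        event["confidence"] = 0.95
        # Auto-approved events are logged as such for audit purposes.
        audit_log.append({"event_id": event["id"], "action": "auto_approve"})
        return True
    return False
```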

## The Confidence Pipeline

The review queue sits at the end of a three-stage confidence pipeline:

1. **Source confidence** - Data enters the world model at a confidence level determined by its source: 1.0 (authoritative integration), 0.7 (browser scrape), 0.5 (voice extraction), 0.3 (agent inference)
2. **Automated review** - The [connector runner's](https://docs.amigo.ai/data/connectors-and-ehr) review judge cross-references events against transcripts and source records, promoting or flagging each event
3. **Human review** - Events the automated judge cannot confidently assess land in the review queue for operator decision

After approval, events are promoted to confidence 0.95 (human-approved) and become eligible for outbound EHR sync. Rejected events are set to 0.0 and excluded from entity projections. Corrected events create a new authoritative record at confidence 1.0.

This pipeline ensures that data from noisy sources (phone conversations, patient self-reporting) is verified before it reaches systems of record.
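The first two stages can be summarized in a few lines. The function names are hypothetical; the source-to-confidence mapping and the routing outcomes are taken from the stages above.

```python
# Stage 1: source confidence at ingestion (values from the text above).
SOURCE_CONFIDENCE = {
    "authoritative_integration": 1.0,
    "browser_scrape": 0.7,
    "voice_extraction": 0.5,
    "agent_inference": 0.3,
}

def stage1_ingest(source: str) -> float:
    return SOURCE_CONFIDENCE[source]

def stage2_route(judge_verdict: str) -> str:
    """Stage 2: the automated judge's verdict decides whether an event
    updates the world model or waits for a human in the review queue."""
    return "world_model" if judge_verdict == "verified" else "review_queue"
```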

## Analytics

The review queue tracks operational metrics:

* **Completion rate** - What percentage of queued events are reviewed within a given time window
* **Distribution** - How events break down by action taken (approved, rejected, corrected)
* **Assignment** - Which operators are handling which reviews
* **Correction rate** - How often operators correct data versus approving it as-is (high correction rates may indicate an upstream extraction or transcription issue)
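These metrics are straightforward to compute from review records. A minimal sketch, assuming a hypothetical record shape where `action` is `None` for still-pending items:

```python
from collections import Counter

def queue_metrics(reviews: list[dict]) -> dict:
    total = len(reviews)
    completed = [r for r in reviews if r["action"] is not None]
    corrections = sum(1 for r in completed if r["action"] == "correct")
    return {
        # Share of queued events that have been reviewed.
        "completion_rate": len(completed) / total if total else 0.0,
        # High values may indicate an upstream extraction/transcription issue.
        "correction_rate": corrections / len(completed) if completed else 0.0,
        # Breakdown by action taken.
        "distribution": Counter(r["action"] for r in completed),
    }
```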

{% hint style="info" %}
For how events flow through automated review before reaching the human queue, see [Connector Runner](https://docs.amigo.ai/data/connectors-and-ehr). For how confidence scoring works across the platform, see [World Model](https://docs.amigo.ai/data/world-model).
{% endhint %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.


Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.amigo.ai/data/review-queue.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
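The request above can be issued with only the Python standard library. The endpoint shape is taken from this page; `build_ask_url` and `ask_docs` are illustrative helper names.

```python
from urllib.parse import urlencode
from urllib.request import urlopen

DOCS_PAGE = "https://docs.amigo.ai/data/review-queue.md"

def build_ask_url(question: str, page: str = DOCS_PAGE) -> str:
    # URL-encode the question so spaces and punctuation survive as a
    # query parameter.
    return f"{page}?{urlencode({'ask': question})}"

def ask_docs(question: str) -> str:
    # Performs the actual HTTP GET; requires network access.
    with urlopen(build_ask_url(question)) as resp:
        return resp.read().decode()
```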
