# Agent Core

{% hint style="success" %}
**For Developers**: See the [REST API reference](https://docs.amigo.ai/developer-guide/core-api/agents-and-context-graphs) for endpoint details, request/response schemas, and SDK code examples.
{% endhint %}

The Agent Core defines how the agent interprets incoming data - which signals matter for this domain, how to evaluate them, and what counts as normal.

## Agent as Interpretive Framework

The Agent Core is not about personality or chat interfaces - it defines how the system interprets the measured world. When the same raw data passes through different agent cores, they produce different conclusions because each agent emphasizes different dimensions based on its domain expertise.

This interpretive role is critical for compositional systems:

1. **Signal Selection.** The agent determines which signals from raw data deserve extraction and tracking.
2. **Transition Gating.** The agent's domain knowledge shapes which conditions must be met before the conversation can move to the next stage in a context graph.
3. **Cohort Matching.** The agent's expertise determines which patient group a caller belongs to based on observed data - for example, distinguishing a post-surgical follow-up from a chronic care check-in.

## Core Components

The Agent Core consists of two artifacts that travel together.

* **Core Persona:** A structured description of professional background, scope of practice, tone, and ethical stance. It answers “How would a credible expert in this role behave?” Personas are managed as first-class resources in the Platform API - services reference a persona by ID rather than embedding identity fields directly, enabling the same persona to be reused across multiple services and channels.
* **Global Directives:** A set of non-negotiable rules and optimization priorities (e.g., “safety overrides convenience,” “never speculate about diagnoses”). Directives provide the tie-breakers when objectives compete.

These artifacts are encoded in structured formats that both the reasoning model and human reviewers can read.
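To make the pairing concrete, the two artifacts might be laid out as follows. This is a hypothetical sketch - the field names (`persona_id`, `global_directives`, `service_id`) are illustrative, not the Platform API's actual schema:

```python
# Hypothetical layout of an Agent Core configuration.
# Field names are illustrative; consult the Platform API
# reference for the real schema.
agent_core = {
    # Services reference a persona by ID rather than embedding
    # identity fields directly, so one persona can be reused
    # across multiple services and channels.
    "persona_id": "persona_accredited_dietitian_v3",
    # Non-negotiable rules and optimization priorities that act
    # as tie-breakers when objectives compete.
    "global_directives": [
        "Safety overrides convenience.",
        "Never speculate about diagnoses.",
    ],
}

service = {
    "service_id": "nutrition_coaching",
    "agent_core": agent_core,  # the two artifacts travel together
}
```

The point of the shape, not the names: the persona is a reference, the directives are data, and both ship as one unit.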

### Persona: Two Layers

The persona has two layers that work together:

* **Identity Layer** - Core attributes: name, role, language, organizational alignment. This is the sketch that defines the agent's professional form. Being identified as an “accredited dietitian” means the agent maintains appropriate professional boundaries in nutritional guidance.
* **Background Layer** - Depth attributes: motivations, expertise, biography, guiding principles. This turns a bare role definition into a grounded professional identity. When a patient expresses frustration with a plateau, the background layer drives the response - the agent draws on its understanding of setbacks and its motivation to prioritize progress over perfection.

We recommend keeping the background section under 10k tokens. It is the foundation for behavioral alignment, not an exhaustive manual. A rich persona should handle the majority of behavioral decisions naturally - context graphs then focus on the 20-30% that requires structured flows.
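The two layers above can be sketched as a single structured record. The keys and values here are hypothetical - they mirror the attributes listed, not a documented schema:

```python
# Hypothetical persona record with the two layers described above.
persona = {
    # Identity layer: core attributes that define professional form.
    "identity": {
        "name": "Maya",                    # hypothetical name
        "role": "accredited dietitian",
        "language": "en-GB",
        "organization": "Example Health",  # hypothetical organization
    },
    # Background layer: depth attributes that ground the identity.
    "background": {
        "expertise": ["weight management", "behavior change"],
        "motivations": ["progress over perfection"],
        "guiding_principles": [
            "Setbacks are information, not failure.",
        ],
        "biography": "Fifteen years in clinical nutrition practice.",
    },
}
```

The identity layer stays terse; the background layer carries the depth (bounded, per the guidance above, by the token budget).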

### Directives: Behavioral vs. Communication

Global directives split into two categories:

* **Behavioral directives** define what the agent can and cannot do. These are non-negotiable rules like “never create meal plans,” “never interpret nutrition information from photos,” or “refer medical questions to the medical support team.” They override domain expertise when liability or safety requires it.
* **Communication directives** govern how the agent expresses itself. These cover tone, linguistic style, and phrasing rules - “use contractions and informal phrasing,” “split sentences onto separate lines,” “never use phrases like 'at least' or 'you should.'” They enforce brand consistency that would not naturally emerge from the persona alone.

The distinction matters because a dietitian persona might naturally recommend meal plans based on expertise, but a behavioral directive can explicitly prohibit this for liability reasons. Communication directives about British English spelling or line-splitting would never emerge from a professional identity but are critical for brand consistency.
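The split might be encoded as two separate lists, so reviewers can audit liability-critical rules independently of style rules. A minimal sketch, using the example directives quoted above:

```python
# Hypothetical encoding of the two directive categories.
directives = {
    # Behavioral: what the agent can and cannot do.
    # These override domain expertise when liability or safety requires it.
    "behavioral": [
        "Never create meal plans.",
        "Never interpret nutrition information from photos.",
        "Refer medical questions to the medical support team.",
    ],
    # Communication: how the agent expresses itself.
    # Brand rules that would not emerge from the persona alone.
    "communication": [
        "Use contractions and informal phrasing.",
        "Split sentences onto separate lines.",
        "Never use phrases like 'at least' or 'you should'.",
        "Use British English spelling.",
    ],
}
```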

## How the Agent Core Connects to Other Systems

The Agent Core anchors the platform's memory, knowledge retrieval, and reasoning:

* It tells [Functional Memory](/agent/memory.md) which dimensions deserve perfect preservation and how to interpret ambiguous data.
* It constrains knowledge activation so that retrieval focuses on material a real professional would consider relevant.
* It shapes reasoning by defining acceptable risk appetite, escalation criteria, and communication style.

Because of these dependencies, updates to the Agent Core are versioned alongside the context graphs and memories that rely on it.
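One way to picture the dependency: a deployment pins compatible versions of each artifact together, so an Agent Core update forces an explicit review of the graphs and memories built against it. The version fields and compatibility rule below are invented for illustration:

```python
# Hypothetical deployment record pinning artifact versions together.
deployment = {
    "agent_core_version": "2.1.0",
    # Context graphs and memories are validated against a specific
    # Agent Core version; bumping the core invalidates these pins.
    "context_graph_versions": {"onboarding": "1.4.0", "follow_up": "1.2.1"},
    "memory_schema_version": "2.1.0",
}

def compatible(dep: dict) -> bool:
    """Illustrative check: memory schema tracks the core's major.minor."""
    core = dep["agent_core_version"].rsplit(".", 1)[0]
    mem = dep["memory_schema_version"].rsplit(".", 1)[0]
    return core == mem
```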

## Designing an Agent Core

When tailoring the platform to your domain, treat the Agent Core as a specification exercise, not a branding exercise. A rich, well-developed Agent Core should handle the majority of behavioral decisions naturally. When a patient asks a question outside the current topic, the agent's professional identity should guide how to respond without needing explicit context graph rules for every variation. Context graphs then focus on the 20-30% that requires structured flows: specific protocols, compliance-critical paths, and domain-specific edge cases.

A practical process looks like this:

1. **Interview domain experts.** Capture how they assess severity, personalize guidance, and escalate edge cases.
2. **Translate heuristics into directives.** Express their rules in precise language a model can follow and an auditor can review.
3. **Encode calibration parameters.** Define qualitative scales in quantitative terms (e.g., what constitutes “high risk,” acceptable response latency, minimum evidence needed before recommending an action).
4. **Validate with simulations.** Run representative scenarios to confirm the identity behaves as intended before exposing it to users.
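Step 3 - encoding calibration parameters - means turning qualitative scales into numbers the system can check. A hypothetical sketch for a care-coordination agent; the thresholds are invented for illustration, not recommended values:

```python
# Hypothetical calibration parameters (all values illustrative).
calibration = {
    # What constitutes "high risk": a threshold on a 0-1 risk score.
    "high_risk_threshold": 0.8,
    # Acceptable response latency before escalating to a human, in seconds.
    "max_response_latency_s": 5.0,
    # Minimum independent signals needed before recommending an action.
    "min_evidence_signals": 2,
}

def risk_band(score: float) -> str:
    """Map a continuous risk score onto the qualitative scale."""
    if score >= calibration["high_risk_threshold"]:
        return "high"
    return "moderate" if score >= 0.5 else "low"
```

Defining the scale numerically is what makes step 4 possible: simulations can assert that a scenario lands in the intended band rather than arguing over adjectives.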

## Success Criteria

A well-designed Agent Core exhibits the following traits:

* **Stable voice and judgment** across scenarios, even when other components adapt.
* **Consistent escalation logic** that matches documented policy.
* **Clear boundaries** for what the agent will and will not do, making hand-offs to humans smooth.
* **Traceable decisions** because rationale, directives, and memory pulls all reference the same identity settings.

If logs show divergent behavior that cannot be explained by the persona or directives, the issue lies elsewhere - most often in the context graph or dynamic behavior configuration.

{% hint style="info" %}
**Related sections** - See [Context Graphs](/agent/context-graphs.md) for how the agent navigates problem spaces, [Functional Memory](/agent/memory.md) for context preservation, and [Dynamic Behaviors](/agent/context-graphs.md) for runtime adaptation.
{% endhint %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.amigo.ai/agent/agents.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response contains a direct answer to the question, along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
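For example, the request URL can be built by URL-encoding the question. The sketch below only constructs the URL; performing the GET with any HTTP client returns the answer:

```python
from urllib.parse import quote

def ask_docs_url(question: str) -> str:
    """Build the documentation-query URL for this page."""
    base = "https://docs.amigo.ai/agent/agents.md"
    return f"{base}?ask={quote(question)}"

url = ask_docs_url("How are personas versioned?")
# GET this URL to receive a direct answer plus relevant excerpts.
```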
