Agent Core
The Agent Core defines who the system becomes when it engages with users. It provides a stable professional identity, explicit standards of care, and repeatable decision defaults. Everything else—context graphs, memory, behaviors—assumes that identity when computing the next response.
Why Identity Matters
Traditional chatbots either follow brittle scripts or rely entirely on a language model's implicit persona. Amigo agents combine the strengths of both approaches: the Agent Core codifies the expert who shows up, while other components tailor execution to the situation. This yields three advantages for technical teams:
Predictability. The agent evaluates evidence and risk using domain-specific judgment rather than generic heuristics.
Auditability. When something goes wrong, you can trace decisions back to explicit identity settings instead of latent model behavior.
Transferability. The same identity can operate across multiple services as long as the surrounding context graphs and memories align with its standards.
Core Components
The Agent Core consists of two artifacts that travel together.
Core Persona: A structured description of professional background, scope of practice, tone, and ethical stance. It answers “How would a credible expert in this role behave?”
Global Directives: A set of non-negotiable rules and optimization priorities (e.g., “safety overrides convenience,” “never speculate about diagnoses”). Directives provide the tie-breakers when objectives compete.
These artifacts are encoded in machine-consumable formats so that reasoning models—and humans reviewing logs—see the same expectations.
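As a purely illustrative sketch of what "machine-consumable" can mean here, the two artifacts might be encoded as structured data along these lines (all field names, types, and values are hypothetical, not the platform's actual schema):

```python
# Hypothetical encoding of an Agent Core as structured data.
# Field names are illustrative stand-ins, not a real platform schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class CorePersona:
    role: str                        # professional identity the agent assumes
    scope_of_practice: tuple         # what the expert will and will not address
    tone: str                        # communication style
    ethical_stance: str              # guiding professional ethics

@dataclass(frozen=True)
class GlobalDirective:
    rule: str                        # non-negotiable constraint, stated precisely
    priority: int                    # lower number wins when directives compete

@dataclass(frozen=True)
class AgentCore:
    persona: CorePersona
    directives: tuple                # ordered tie-breakers for competing objectives
    version: str                     # versioned alongside dependent context graphs

core = AgentCore(
    persona=CorePersona(
        role="licensed clinical pharmacist",
        scope_of_practice=("medication guidance", "interaction checks"),
        tone="plain, calm, precise",
        ethical_stance="patient safety first",
    ),
    directives=(
        GlobalDirective("safety overrides convenience", priority=0),
        GlobalDirective("never speculate about diagnoses", priority=1),
    ),
    version="2024-06-01",
)
```

Keeping the artifacts in one versioned structure like this lets reasoning models and human reviewers audit against the same source of truth.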
Interface with the M-K-R Stack
We refer to the integrated loop of Memory, Knowledge, and Reasoning (M-K-R) as the cognitive stack—the system that remembers user history, retrieves relevant domain information, and decides what to do next. The Agent Core anchors that loop:
It tells Functional Memory which dimensions deserve perfect preservation and how to interpret ambiguous data.
It constrains Knowledge activation so that retrieval focuses on material a real professional would consider relevant.
It shapes Reasoning by defining acceptable risk appetite, escalation criteria, and communication style.
Because of these dependencies, updates to the Agent Core are versioned alongside the context graphs and memories that rely on it.
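To make the anchoring concrete, one pass of the loop could be sketched as follows. Every name here is an invented simplification for illustration, not a real platform API:

```python
# Hypothetical sketch: a fixed Agent Core anchoring one pass of the
# Memory-Knowledge-Reasoning loop. All names are illustrative stand-ins.

CORE = {
    "scope": {"sleep", "nutrition", "exercise"},  # persona's scope of practice
    "risk_threshold": 0.7,                        # directive-defined escalation point
    "tone": "plain and calm",
}

def recall(history, topic):
    """Memory: return prior turns relevant to the topic (simplified)."""
    return [turn for turn in history if topic in turn]

def retrieve(knowledge, topic, scope):
    """Knowledge: only activate material inside the persona's scope."""
    return knowledge.get(topic, []) if topic in scope else []

def respond(core, topic, risk, history, knowledge):
    """Reasoning: directives decide between escalation and a direct reply."""
    if risk >= core["risk_threshold"]:
        return "escalate: hand off to a human reviewer"
    context = recall(history, topic) + retrieve(knowledge, topic, core["scope"])
    return f"reply ({core['tone']}): grounded in {len(context)} context items"

print(respond(CORE, "sleep", risk=0.2,
              history=["sleep: reported insomnia last week"],
              knowledge={"sleep": ["CBT-I basics"]}))
```

The point of the sketch is the dependency direction: memory recall, knowledge activation, and the escalation decision all read from the same core, which is why the core must be versioned with them.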
Designing an Agent Core
When tailoring the platform to your domain, treat the Agent Core as a specification exercise, not a branding exercise. A practical process looks like this:
Interview domain experts. Capture how they assess severity, personalize guidance, and escalate edge cases.
Translate heuristics into directives. Express their rules in precise language a model can follow and an auditor can review.
Encode calibration parameters. Define qualitative scales in quantitative terms (e.g., what constitutes “high risk,” acceptable response latency, minimum evidence needed before recommending an action).
Validate with simulations. Run representative scenarios to confirm the identity behaves as intended before exposing it to users.
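Steps 3 and 4 can be made concrete with a small sketch: calibration parameters pinned to quantitative values, then checked against representative scenarios before release. The thresholds and scenario data below are invented for illustration:

```python
# Hypothetical calibration parameters: qualitative scales defined numerically.
CALIBRATION = {
    "high_risk_score": 0.8,         # severity at or above which "high risk" applies
    "max_response_latency_s": 5.0,  # acceptable response latency
    "min_evidence_items": 2,        # evidence required before recommending action
}

def classify_risk(severity_score):
    """Map a numeric severity score onto the calibrated qualitative scale."""
    return "high" if severity_score >= CALIBRATION["high_risk_score"] else "routine"

def may_recommend(evidence_items):
    """Only recommend an action once the evidence bar is met."""
    return len(evidence_items) >= CALIBRATION["min_evidence_items"]

# Simulated validation: representative scenarios with expected behavior.
SCENARIOS = [
    ({"severity": 0.9}, "high"),
    ({"severity": 0.3}, "routine"),
]

for scenario, expected in SCENARIOS:
    assert classify_risk(scenario["severity"]) == expected
print("all simulated scenarios passed")
```

Encoding the calibration as data rather than prose makes step 4 mechanical: the same scenario suite can rerun on every Agent Core revision.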
Success Criteria
A well-designed Agent Core exhibits the following traits:
Stable voice and judgment across scenarios, even when other components adapt.
Consistent escalation logic that matches documented policy.
Clear boundaries for what the agent will and will not do, making hand-offs to humans smooth.
Traceable decisions because rationale, directives, and memory pulls all reference the same identity settings.
If logs show divergent behavior that cannot be explained by the persona or directives, the issue lies elsewhere—most often in the context graph or dynamic behavior configuration.
Next Steps
Dive deeper into Global Directives and Core Persona for schema details and examples.
Review how the Agent Core partners with Context Graphs and Functional Memory to maintain a unified context.
Explore Dynamic Behaviors to see how identity-aware modifiers adjust execution in real time.