System Components

This page explains how our core components work together to create the near-perfect point-in-time context essential for entropy stratification.

  1. Agent Core (foundation): Start here to understand the stable identity and expertise that anchors the system.

  2. Context Graphs (structure): Learn how the problem space is defined and organized.

  3. Functional Memory (persistence): Understand how context is maintained over time.

  4. Dynamic Behaviors (adaptation): Discover how the system adapts with real-time flexibility.

  5. Actions, Evaluations, and RL (execution and improvement): See how the system executes and continuously improves.

The components integrate to form the unified context that enables intelligent decision-making.


Agent

Agent Core

Amigo's agent architecture builds each agent on a stable professional identity: a set of immutable core attributes (identity, background, expertise, and philosophy) that persist across all problem domains and context variations and anchor stable self-perception.

However, these attributes remain latent until activated and shaped by context. A doctor at a party is just another person, but becomes a medical professional when the situation demands it. This context-dependent activation is fundamental to entropy stratification—the same identity exhibits entirely different characteristics based on the problem space it operates within.

The agent's immutable core attributes serve as potential capabilities that become actual behaviors only through contextual activation. This latent-to-active transformation supports coherent complexity assessment across problem spaces.
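To make the latent-to-active pattern concrete, the sketch below models an agent core as a frozen set of attributes that only yields an active profile when paired with a problem context. The class and function names (AgentCore, ProblemContext, activate) are hypothetical illustrations, not Amigo API names.

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: core attributes are immutable once defined
class AgentCore:
    identity: str
    background: str
    expertise: tuple[str, ...]
    philosophy: str


@dataclass
class ProblemContext:
    """Minimal stand-in for the active problem space (e.g. a context graph state)."""
    domain: str
    demands: tuple[str, ...]  # capabilities the current situation calls for


def activate(core: AgentCore, context: ProblemContext) -> dict:
    """Return the subset of latent expertise that the context actually activates.

    The same core produces different active profiles in different contexts,
    mirroring the doctor-at-a-party example above.
    """
    active = [e for e in core.expertise if e in context.demands]
    return {
        "identity": core.identity,
        "active_expertise": active,  # latent -> active
        "dormant_expertise": [e for e in core.expertise if e not in active],
    }


core = AgentCore(
    identity="Dr. Rivera",
    background="15 years in emergency medicine",
    expertise=("triage", "cardiology", "patient education"),
    philosophy="evidence first, empathy always",
)

party = ProblemContext(domain="social", demands=())
clinic = ProblemContext(domain="emergency-triage", demands=("triage", "cardiology"))

print(activate(core, party))   # expertise stays latent
print(activate(core, clinic))  # context activates the medical professional
```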

For more details, see Agent Core.

Context Graphs

Context graphs fundamentally define the problem—nothing in the architecture makes sense without this minimal problem structure. They represent the invariant parts and blueprint of one coherent problem, capturing the service purpose, structural topology, and local intricacies that make a problem space navigable. These partial frameworks become complete only when composed with an agent's identity and dynamic behaviors.

Context graphs offer tremendous flexibility in their design approach. They can range from extremely open, reactive support states that rely entirely on agent intelligence and dynamic behaviors to handle any situation, to strict workflows where each step is necessary and precisely defined. This spectrum allows organizations to match the graph's constraint level to their business requirements, using open graphs where agent expertise should drive the interaction, and structured workflows where compliance or safety requires specific procedural steps.

This structural innovation allows us to define a problem space at three levels of granularity:

  1. The Conceptual Level defines the problem fundamentally through the service description—the session's purpose and the value being delivered. This conceptual grounding helps agents understand why they're navigating this particular problem space.

  2. The Structural Level captures the entire invariant structure of the problem through abstract topology—all possible states and transitions that define how this problem can be navigated. This bird's-eye view shows agents the complete problem landscape, supporting strategic planning and optimal path selection.

  3. The Operational Level defines guidelines—specific constraints, tool interpretations, data access patterns, and local objectives. These details make the abstract problem structure actionable, providing the concrete guidance needed for optimal execution.
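To illustrate the three levels, here is a minimal, hypothetical sketch of how a single context graph might be declared, with one field per level: a service description (conceptual), a state and transition topology (structural), and per-state guidelines (operational). The structure and field names are assumptions made for the example, not the actual configuration schema.

```python
from dataclasses import dataclass


@dataclass
class ContextGraph:
    # Conceptual level: why this problem space exists and what value it delivers.
    service_description: str
    # Structural level: the invariant topology of states and allowed transitions.
    states: set[str]
    transitions: dict[str, set[str]]  # state -> reachable states
    # Operational level: local guidelines that make each state actionable.
    guidelines: dict[str, list[str]]  # state -> constraints and local objectives

    def can_transition(self, src: str, dst: str) -> bool:
        """Check a move against the invariant structure before taking it."""
        return dst in self.transitions.get(src, set())


intake_graph = ContextGraph(
    service_description="Guide a patient from symptom intake to a safe next step.",
    states={"intake", "triage", "escalation", "wrap_up"},
    transitions={
        "intake": {"triage"},
        "triage": {"escalation", "wrap_up"},
        "escalation": {"wrap_up"},
    },
    guidelines={
        "triage": ["ask about symptom duration", "never diagnose, only assess urgency"],
        "escalation": ["hand off to a human clinician with full context"],
    },
)

assert intake_graph.can_transition("intake", "triage")
assert not intake_graph.can_transition("intake", "wrap_up")
```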

For more details, see Context Graphs.

Dynamic Behaviors

Dynamic behaviors are general problem space modifiers that provide real-time adaptations to the problem context. They can overwrite or enrich context graph local states, connect different graphs, trigger deeper reflection processes, reframe problems entirely, or expose external systems through tools, providing the flexibility needed to handle the full spectrum of variations that static context graphs alone cannot capture.

This comprehensive modification capability operates through constrained optimization rather than rigid command execution, using cognitive selection processes that mirror human decision-making: semantic association brings relevant behaviors to mind, logical analysis selects the most appropriate, and constraint solving applies them effectively.

Dynamic behaviors enable the same context graph to handle the complete spectrum of variations within its problem domain—from routine interactions to edge cases.
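The selection process can be pictured as three stages chained together, as in the sketch below. The scoring and filtering logic is a deliberately naive stand-in (keyword overlap instead of real semantic retrieval), meant only to show the shape of semantic association, logical analysis, and constrained application; none of the names correspond to actual Amigo interfaces.

```python
from dataclasses import dataclass


@dataclass
class Behavior:
    name: str
    triggers: set[str]    # terms that associate this behavior with a situation
    applies_in: set[str]  # graph states where it may legally apply
    modification: str     # what it does to the local context


BEHAVIORS = [
    Behavior("deep_reflection", {"uncertain", "conflicting"}, {"triage"}, "trigger reflection state"),
    Behavior("reframe_problem", {"stuck", "repeat"}, {"triage", "intake"}, "reframe the problem"),
    Behavior("expose_scheduler", {"appointment", "book"}, {"wrap_up"}, "expose scheduling tool"),
]


def select_behaviors(situation: set[str], current_state: str) -> list[Behavior]:
    # 1. Semantic association: bring candidate behaviors "to mind" via term overlap.
    candidates = [b for b in BEHAVIORS if b.triggers & situation]
    # 2. Logical analysis: keep only behaviors that make sense in the current state.
    applicable = [b for b in candidates if current_state in b.applies_in]
    # 3. Constraint solving (toy version): rank by relevance and apply the best first.
    return sorted(applicable, key=lambda b: len(b.triggers & situation), reverse=True)


chosen = select_behaviors({"uncertain", "appointment"}, current_state="triage")
for b in chosen:
    print(f"{b.name}: {b.modification}")  # -> deep_reflection: trigger reflection state
```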

For more details, see Dynamic Behaviors.

Functional Memory

Functional Memory serves as the operational blueprint that aligns memory organization with the specific functions each agent performs, ensuring that all function-critical information is available at the right interpretation and granularity for live reasoning. The system centers around user models derived from custom dimensional frameworks that organizations design to interpret raw information through interactions.

Unlike traditional approaches that treat all information equally, our dimensional framework organizes memory according to functional importance, determining what information requires outcome-sufficient preservation (maintaining sufficient statistics—compressed representations preserving all information relevant to outcomes), how contextual relationships should be maintained over time, and when information should be recontextualized based on new understanding.

The memory system operates through a hierarchical architecture (L0→L1→L2→L3) that compresses thousands of observations into 10-50 functional dimensions driving outcomes, preserving what matters while discarding noise. This functional alignment ensures agents have all the context they need for optimal entropy assessment and decision-making without constant information retrieval.
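A rough way to picture the L0→L1→L2→L3 hierarchy is as successive compression passes from raw observations down to a handful of functional dimensions. The sketch below is a toy illustration under that assumption; the layer contents, dimension names, and aggregation rules are invented for the example rather than taken from the actual memory system.

```python
from collections import defaultdict

# L0: raw observations captured during interactions (verbose, noisy).
l0_observations = [
    {"dimension": "sleep_quality", "value": "reports waking at 3am most nights"},
    {"dimension": "sleep_quality", "value": "fell asleep easily after exercise"},
    {"dimension": "medication_adherence", "value": "missed evening dose twice this week"},
    {"dimension": "preferences", "value": "prefers concise, direct explanations"},
]

# L1: observations grouped under the organization's dimensional framework.
l1_by_dimension: dict[str, list[str]] = defaultdict(list)
for obs in l0_observations:
    l1_by_dimension[obs["dimension"]].append(obs["value"])

# L2: each dimension compressed to an outcome-relevant summary
# (a real system would summarize with a model; here we only keep counts and the latest evidence).
l2_summaries = {
    dim: {"evidence_count": len(values), "latest": values[-1]}
    for dim, values in l1_by_dimension.items()
}

# L3: the compact functional profile actually loaded into live reasoning.
l3_profile = {dim: summary["latest"] for dim, summary in l2_summaries.items()}

print(len(l0_observations), "raw observations ->", len(l3_profile), "functional dimensions")
print(l3_profile)
```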

Memory doesn't operate alone—it combines with professional identity (interpretation priors), context graphs (problem structure), and constraints to form the unified context that enables decisions. See Overcoming LLM Limitations for how this hierarchical compression addresses the token bottleneck.

For more details, see Functional Memory.

Actions

Amigo Actions represent the execution layer that transforms our orchestration framework into real-world outcomes through custom programs running in isolated execution environments. Unlike traditional tool calling, Actions can orchestrate entire workflows—authenticating with external systems, processing data through multiple steps, handling errors and retries, and coordinating between different services. The LLM provides contextual reasoning about what needs to happen, while Actions handle the deterministic execution.

Context-aware integration allows sophisticated Action composition and orchestration. Different states in a context graph expose different capabilities: when a clinical agent focuses on emergency triage, it has access to vital sign analyzers; when it transitions to treatment planning, different Actions become available, such as drug interaction checkers and care protocol analyzers. Dynamic behaviors can modify the available Action landscape in real time based on conversational context, creating a fluid, adaptive tool environment where capabilities evolve with the specific problem context.
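The state-dependent exposure described above can be sketched as a registry that maps graph states to the Actions available in them, with dynamic behaviors able to reshape that mapping at runtime. Everything here (state names, Action names, the modifier hook) is a hypothetical illustration, not the product's actual Action API.

```python
from typing import Callable, Iterable

# Base exposure: which Actions each context graph state makes available.
ACTION_EXPOSURE: dict[str, set[str]] = {
    "emergency_triage": {"vital_sign_analyzer"},
    "treatment_planning": {"drug_interaction_checker", "care_protocol_analyzer"},
}


def available_actions(
    state: str,
    modifiers: Iterable[Callable[[set[str]], set[str]]] = (),
) -> set[str]:
    """Resolve the Action set for a state, then let dynamic behaviors reshape it."""
    actions = set(ACTION_EXPOSURE.get(state, set()))
    for modify_exposure in modifiers:
        actions = modify_exposure(actions)
    return actions


# A dynamic behavior that exposes an extra capability based on conversational context.
def expose_allergy_lookup(actions: set[str]) -> set[str]:
    return actions | {"allergy_history_lookup"}


print(available_actions("emergency_triage"))
print(available_actions("treatment_planning", modifiers=[expose_allergy_lookup]))
```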

For more details, see Actions.

Platform

Evaluations

Our evaluations framework answers the fundamental question: "What does successfully solving the problem look like?" The framework verifies that economic work units are delivered acceptably across all problem neighborhoods. This verification embodies the strategic objectives that define when problems are considered solved, with evaluation criteria that evolve as market conditions and problem definitions shift.

Evaluations set a north star for end-to-end optimization of orchestration configurations for the real world. Through sophisticated persona simulations and adversarial testing across thousands of predefined scenarios, the framework supports the discovery of optimal architectures that balance efficiency, performance, and accuracy for specific problem neighborhoods. Continuous verification prevents entropic drift, ensuring the orchestration maintains alignment with reality as it evolves and that successful configurations naturally emerge through selective pressure.

The evaluation framework generates evolutionary pressure that drives the discovery of optimal entropy stratification patterns. It identifies which combinations of agent identities, context graph structures, dynamic behavior sets, and memory strategies form the most effective cognitive architectures for each problem neighborhood. This discovery process achieves perfect entropy stratification, not through predetermined rules, but through empirical validation of what actually works.
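The selective-pressure loop can be approximated as: simulate scenarios against each candidate configuration, score the outcomes, and keep only configurations that clear the verification threshold. The sketch below assumes a scenario format, scoring function, and threshold that are purely illustrative.

```python
import random

random.seed(0)  # deterministic toy run


def run_scenario(config: dict, scenario: str) -> bool:
    """Stand-in for one persona simulation; True means the work unit was verified."""
    # A real framework would run the full agent against the scenario and verify outcomes.
    return random.random() < config["expected_quality"]


def pass_rate(config: dict, scenarios: list[str]) -> float:
    return sum(run_scenario(config, s) for s in scenarios) / len(scenarios)


scenarios = [f"persona_scenario_{i}" for i in range(500)]
candidates = [
    {"name": "open_graph", "expected_quality": 0.78},
    {"name": "strict_workflow", "expected_quality": 0.91},
]

VERIFICATION_THRESHOLD = 0.85
results = {c["name"]: pass_rate(c, scenarios) for c in candidates}
survivors = [name for name, rate in results.items() if rate >= VERIFICATION_THRESHOLD]

print(results)    # per-configuration verification rates
print(survivors)  # configurations that withstand the selective pressure
```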

For more details, see Evaluations.

Reinforcement Learning (RL)

Amigo's RL framework closes foundational reasoning gaps through continuous optimization. Rather than applying RL broadly, the framework focuses investments where they matter most—concentrating exclusively on high-leverage capabilities that directly impact business outcomes while the systematic context management framework handles routine control functions. This targeted approach detects and corrects misalignments before they cause problems, deepens understanding through every edge case, and establishes a clear path from baseline capabilities to optimal performance.

RL serves as the fine-tuning mechanism that takes discovered entropy stratification patterns and optimizes them to their theoretical limits. While evaluations discover which broad configurations work, RL perfects the precise parameters within those configurations—the exact thresholds for cognitive gear-shifting, the optimal timing for memory expansion, or the perfect balance between agent autonomy and structural guidance. This continuous refinement ensures that entropy stratification not only meets verification thresholds but also approaches optimal efficiency.
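As a toy illustration of perfecting precise parameters within a discovered configuration, the sketch below hill-climbs a single threshold (standing in for something like a gear-shifting trigger) against a reward signal. The reward surface and parameter are invented for the example; a real RL loop would optimize against verified session outcomes rather than a closed-form function.

```python
def reward(threshold: float) -> float:
    """Hypothetical reward surface: peaks when the threshold sits near 0.62."""
    return 1.0 - (threshold - 0.62) ** 2


def tune_threshold(start: float, step: float = 0.05, iterations: int = 50) -> float:
    """Simple hill climbing: nudge the parameter whichever direction improves reward."""
    current = start
    for _ in range(iterations):
        candidates = [current - step, current, current + step]
        current = max(candidates, key=reward)
        step *= 0.9  # anneal the step size as the search converges
    return current


# Evaluations discovered that "shift to deeper reflection above some confidence gap" works;
# RL then refines exactly where that trigger point should sit.
tuned = tune_threshold(start=0.4)
print(f"tuned gear-shift threshold: {tuned:.3f}")  # converges near 0.62
```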

For more details, see Reinforcement Learning.

How the Components Work Together

These core components form a unified cognitive architecture that operates at two distinct levels: session-level quantum interactions and system-level optimization.

Session-Level Component Integration: The High-Bandwidth Unified Cognitive System

During session interactions, all components operate through quantum patterns of state transitions. An agent can take an arbitrary number of steps before responding to a user, with each interaction composed of quantum patterns such as [A] → [A] (direct action-to-action), [A] → [D] → [R] → [A] (action to decision to reflection to action), [A] → [C] → [D] → [A] (memory-informed decision making), or any valid combination of state transitions.

Each state itself can contain smaller quanta of operations—tool calls, memory queries, internal computations. The only guarantee is that the agent responds to users in an action state, but the path to that response can involve complex internal processing invisible to the user.
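One way to read the bracketed patterns above is as paths through a small state machine in which only an action state [A] may emit a user-facing response. The sketch below encodes that reading; the allowed-transition table is an assumption made for illustration, not the actual session engine.

```python
# States: A = action, D = decision, R = reflection, C = memory recall/context.
ALLOWED = {
    "A": {"A", "D", "C"},
    "D": {"R", "A"},
    "R": {"A", "D"},
    "C": {"D", "A"},
}


def is_valid_quantum(pattern: list[str]) -> bool:
    """A pattern is valid if every transition is allowed and it ends in an action state,
    since the agent only responds to the user from [A]."""
    transitions_ok = all(b in ALLOWED[a] for a, b in zip(pattern, pattern[1:]))
    return transitions_ok and pattern[-1] == "A"


print(is_valid_quantum(["A", "A"]))            # direct action-to-action
print(is_valid_quantum(["A", "D", "R", "A"]))  # action -> decision -> reflection -> action
print(is_valid_quantum(["A", "C", "D", "A"]))  # memory-informed decision making
print(is_valid_quantum(["A", "D", "R"]))       # invalid: cannot end mid-reflection
```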

For instance:

  1. Agent identity influences memory formation and interpretation as the professional lens shapes what's remembered and how. It drives navigation preferences as identity determines choices between reflection and direct action. It affects dynamic behavior expression as the same behavior manifests differently based on agent identity. It offers the interpretive framework through which all other components are understood.

  2. Context Graphs influence memory recontextualization, with specific states, like recall states, triggering targeted memory operations. They determine how memories are interpreted as the problem structure establishes which memories are relevant. They shape agent identity manifestation as different graph structures elicit different aspects of the same identity. They determine dynamic behavior applicability as the graph structure defines which behaviors can meaningfully apply.

  3. Dynamic Behaviors influence memory recontextualization as behaviors can trigger new interpretations of past events. They modify graph states by enriching or replacing local contexts. They alter agent expression modes by modifying how identity manifests. They create new pathways through the problem space that didn't exist in the base graph.

System-Level Optimization Framework

Beyond session interactions, the broader system uses verification results to continuously optimize the orchestration framework.

Evaluations operate as a complex verification system that reasons about failure modes against orchestration configurations across entire session evolution cycles. The framework analyzes verification failures across entire neighborhoods to understand failure mechanics, proposes variations with the potential to succeed while balancing concerns like overfitting, and creates improvement cycles grounded in reasoning about why failures occurred and how configurations should evolve.

Reinforcement Learning operates within this verification evolutionary chamber to fine-tune topologies within their entropy bands, optimizing how the orchestration framework components integrate and how successful context management patterns propagate throughout the architecture.
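The system-level cycle described in the two paragraphs above can be summarized as: verify, analyze failures by neighborhood, propose a configuration variation, and re-verify, with RL then refining the surviving configuration. The sketch below is only a schematic of that control loop; the analysis and proposal steps are placeholders, not the real reasoning components.

```python
from collections import Counter


def analyze_failures(failures: list[dict]) -> Counter:
    """Group verification failures by problem neighborhood to surface failure mechanics."""
    return Counter(f["neighborhood"] for f in failures)


def propose_variation(config: dict, hotspots: Counter) -> dict:
    """Placeholder for reasoning about why failures occurred and how the config should evolve."""
    worst, _ = hotspots.most_common(1)[0]
    return {**config, "revision": config["revision"] + 1, "focus": worst}


config = {"name": "care_navigation", "revision": 3, "focus": None}
failures = [
    {"neighborhood": "medication_questions"},
    {"neighborhood": "medication_questions"},
    {"neighborhood": "scheduling"},
]

hotspots = analyze_failures(failures)
next_config = propose_variation(config, hotspots)
print(next_config)  # revision 4, focused on the medication_questions neighborhood
# After re-verification, RL would fine-tune parameters inside this revised configuration.
```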

Agent Forge: Enabling Recursive System Evolution

The Agent Forge offers the foundational infrastructure that allows this entire unified cognitive architecture to evolve itself. Through automated configuration management, coding agents can recursively optimize all components of the system—from context graphs and dynamic behaviors to evaluation frameworks and memory systems—transforming what was previously manual system evolution into a data-driven optimization process that scales with deployment complexity. This represents the next evolution in AI systems: not just agents that solve problems, but systems that can improve themselves while maintaining human oversight and safety boundaries.
