Context Graphs

Context graphs are structured frameworks that define and guide AI agents through complex problem spaces. They capture the shape of a problem - its boundaries, optimal paths, key decision points, reflection moments, and problem-solving directions. Unlike traditional flowcharts or decision trees, they provide both structure and flexibility, creating clear pathways while allowing agents to adapt to specific situations.

The Topological Field Approach

Context graphs operate on a fundamentally different principle than traditional AI control mechanisms:

  1. Structured Problem Spaces: Instead of defining rigid sequences, context graphs create structured problem spaces that naturally guide agent behavior toward optimal solutions.

  2. Variable Constraint Regions: Different areas within the graph apply different levels of constraint on agent behavior.

  3. Incomplete by Design: They are intentionally "incomplete hierarchical state machines" that become fully realized through integration with memory systems and dynamic contextual understanding.

This approach mirrors how expert humans navigate complex problems - finding key decision points, recognizing patterns, and making informed choices within a structured space of possibilities. Much like skilled professionals approaching complex situations, Amigo agents intelligently traverse problem spaces through structured context graphs, adaptive understanding, and accumulated experiential insights.

Context graphs allow agents to:

  • Follow Optimal Pathways: Identify and navigate the best routes through complex problem spaces using structured guidance.

  • Adjust to Different Constraint Levels: Achieve high accuracy in critical scenarios while maintaining flexibility in less structured situations.

  • Maintain Critical Context: Preserve essential information to frame interactions, ensuring coherent, relevant, and contextually-informed responses.

  • Transform Knowledge into Navigable Structures: Organize knowledge domains into structured frameworks, facilitating efficient navigation.

  • Learn and Adapt: Continuously improve navigation strategies through reinforcement and ongoing interactions, resulting in increasingly refined and effective agent performance.
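
The variable-constraint principle can be sketched as a graph definition. The "constraint_level" field below is a hypothetical illustration (not a documented schema), used only to show how different regions of the same graph might carry different amounts of structure:

    // Example (illustrative): a single graph mixing high- and low-constraint regions
    {
      "medication_guidance_graph": {
        "states": {
          "confirm_dosage_instructions": {
            "objective": "Relay storage and dosage instructions exactly as specified",
            "constraint_level": "high"
          },
          "explore_patient_concerns": {
            "objective": "Invite open-ended discussion of concerns about the treatment",
            "constraint_level": "low"
          }
        }
      }
    }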

Solving the Token Bottleneck

Context graphs provide a practical solution to a key limitation of current AI models: complex reasoning must be compressed into limited text, with significant information loss.

  1. Structured Problem Spaces

    Context graphs function as frameworks that map out the important elements of complex problem spaces. By creating variable-constraint guidance—strict structure for critical protocols, flexible regions for exploratory reasoning—they provide structured support that compensates for the model's inability to maintain rich internal representations across reasoning steps.

    // Example: Creating structured guidance for medical reasoning
    {
      "assess_patient_symptoms": {
        "objective": "Systematically evaluate patient symptoms using structured medical reasoning framework",
        "action_guidelines": [
          "Begin with chief complaint and systematically assess related body systems",
          "Consider temporal relationship between symptoms",
          "Evaluate severity using standardized scales where appropriate",
          "Note potential correlations between seemingly unrelated symptoms"
        ]
      }
    }
  2. Structured Reasoning Path Preservation

    The token bottleneck severely limits models' ability to maintain complex reasoning chains. Context graphs address this by externalizing the reasoning structure, preserving critical path integrity:

    // Example: Preserving reasoning chain integrity through structured states
    {
      "states": {
        "identify_symptoms": { /* ... */ },
        "analyze_potential_causes": { /* ... */ },
        "assess_contextual_factors": { /* ... */ },
        "consider_differential_diagnoses": { /* ... */ },
        "recommend_next_steps": { /* ... */ }
      }
    }
  3. Domain-Specific Optimization

    Context graphs enable domain-specific optimization of reasoning pathways, creating more efficient externalization patterns tailored to each domain's specific needs:

    // Example: Domain-specific optimization for financial advisory
    {
      "financial_advisory_graph": {
        "global_action_guidelines": [
          "Always consider risk tolerance before making investment recommendations",
          "Frame discussions in terms of long-term goals rather than short-term performance",
          "Maintain detailed tracking of client risk profile across conversation"
        ]
      }
    }

Context Density Variance

Context density defines the balance between structure and autonomy. In high-density regions, the agent follows very specific pathways with minimal deviation. In low-density regions, the agent's identity and intuition have greater influence, allowing for more adaptability while still being guided by the underlying structure.

Amigo agents dynamically adjust behaviors based on the density of their current context graph:

  • High-Density Contexts: Structured interactions with strict adherence to defined protocols (e.g., regulatory compliance).

  • Medium-Density Contexts: Balanced interactions with guidance and controlled flexibility (e.g., coaching frameworks).

  • Low-Density Contexts: Open-ended interactions with minimal constraints, allowing intuitive exploration (e.g., creative ideation).

Example: Varying Context Density

High-Density (Medical Instruction)

User: "How should I store this medication?"

Agent: "Keep your medication in the refrigerator between 2-8°C (36-46°F). Never freeze it. After first use, it can be stored at room temperature below 30°C (86°F) for up to 28 days. Always keep the cap on when not in use to protect from light."

Medium-Density (Coaching Conversation)

User: "I struggle with hunger in the evenings."

Agent: "Evening hunger can be challenging. What types of foods typically appeal to you during those times? Knowing your preferences helps us find suitable strategies that work with your lifestyle."

Low-Density (Exploratory Discussion)

User: "I need to figure out how to make time for self-care."

Agent: "Self-care looks different for everyone. What activities genuinely recharge you? Maybe we can find small pockets in your day that might work?"

This approach combines the dependability of structured processes with the adaptive insight characteristic of human expertise.
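
In a graph definition, density might be annotated per state. The "context_density" field below is a hypothetical sketch of how the three levels above could be expressed, not a documented schema:

    // Example (illustrative): annotating states with context density
    {
      "states": {
        "medication_storage_instructions": { "context_density": "high" },
        "evening_hunger_coaching": { "context_density": "medium" },
        "self_care_exploration": { "context_density": "low" }
      }
    }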

Integration with Functional Memory and Dynamic Behaviors: Orchestrating the M-K-R Cycle

Context graphs achieve their full potential not as standalone constructs, but as orchestrators of the dynamic, cyclical interplay between Memory, Knowledge, and Reasoning (M-K-R). They provide the structured pathways and decision points where these facets of the agent's cognition converge and influence each other. The goal is a high-bandwidth, unified system where improvements in one aspect naturally enhance the others.

  1. Memory Layer Interaction (Memory powering Reasoning, Knowledge/Reasoning recontextualizing Memory): Different memory layers interact differently with context graphs:

    • Working Memory: Active memories retrieved during state traversal directly fuel immediate reasoning.

    • Conversation History: Recent interactions inform current context, influencing reasoning and potentially triggering knowledge retrieval or memory recontextualization.

    • Long-term Memory: Historical patterns and insights, retrieved through recall states within the graph, are brought into the reasoning process. New knowledge or reasoning outcomes can, in turn, lead to the recontextualization of these long-term memories.

  2. Dynamic Behavior (Knowledge activation influencing Reasoning, shaped by Memory): Runtime adaptation of agent behavior based on:

    • Conversation context (which includes Memory)

    • User interactions

    • Previous agent responses

    • Triggered behavior instructions (which activate specific Knowledge)

    • Dynamic behaviors can completely modify the context graph in both additive and overwrite ways. This modification, driven by activated knowledge and current memory context, directly shapes the agent's reasoning pathways.

    • These modifications can cause specialized reasoning to occur (like pausing to think through a medical lens based on specific knowledge and memory cues).

    • The modification always includes additional context infusion (knowledge and memory) but can extend to new tool exposure, hand-off to external systems, new exit conditions, specialized reasoning patterns, and more – all part of the integrated M-K-R process.
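
As a rough sketch, a dynamic behavior's graph modification might look like the following. The field names ("trigger", "modification_mode", "infused_context", and so on) are hypothetical, shown only to make the additive-versus-overwrite distinction concrete:

    // Example (illustrative): a behavior that additively modifies the active context graph
    {
      "triggered_behavior": {
        "trigger": "user reports new chest pain",
        "modification_mode": "additive",
        "infused_context": ["cardiac_escalation_knowledge"],
        "added_states": {
          "medical_escalation_check": { /* specialized reasoning through a medical lens */ }
        },
        "new_exit_conditions": ["hand_off_to_clinician"]
      }
    }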

Cross-Graph Navigation

Cross-graph navigation allows for different related problem spaces to be linked hierarchically (like a "dream within a dream" from the movie Inception) but shouldn't form one massive graph. This approach:

  1. Preserves Problem Space Separation: Maintains clean separation between distinct but related problem domains

  2. Enables Efficient Transitions: Allows seamless movement between specialized problem-solving frameworks

  3. Optimizes for Latency and Performance: Significantly improves both response time and computational efficiency

  4. Preserves Context Integrity: Maintains the logical connections between workflows while preventing context overload
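
Hierarchical linking of this kind might be expressed as a state that defers to a separate graph and then returns, rather than inlining the sub-graph into one massive structure. The "linked_graph" and "return_state" fields are hypothetical illustrations:

    // Example (illustrative): a parent graph deferring to a specialized sub-graph
    {
      "states": {
        "plan_weekly_meals": {
          "objective": "Hand off detailed meal planning to its own problem space",
          "linked_graph": "meal_planning_graph",
          "return_state": "review_weekly_progress"
        }
      }
    }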

User Model Integration (Memory influencing Reasoning & Knowledge application): The dimensional structure of the user model (a key part of Functional Memory) constantly informs context graph navigation. This retrieved memory provides critical context that frames the agent's reasoning and shapes how its knowledge (activated by Dynamic Behaviors) is applied within the current state of the graph.
