Context Graphs

Context graphs are structured frameworks that define and guide AI agents through complex problem spaces. They capture the shape of a problem—its boundaries, optimal paths, key decision points, reflection moments, and problem-solving directions. Unlike traditional flowcharts or decision trees, context graphs provide both structure and flexibility, creating clear pathways while allowing agents to adapt to specific situations.

The Topological Field Approach

Context graphs operate on a fundamentally different principle from traditional AI control mechanisms:

  1. Structured Problem Spaces: Instead of defining rigid sequences, context graphs create structured problem spaces that naturally guide agent behavior toward optimal solutions.

  2. Variable Constraint Regions: Different areas within the graph apply different levels of constraint on agent behavior.

  3. Incomplete by Design: They are intentionally "incomplete hierarchical state machines" that become fully realized through integration with memory systems and dynamic contextual understanding.

This approach mirrors how expert humans navigate complex problems: finding key decision points, recognizing patterns, and making informed choices within a structured space of possibilities. Like skilled professionals approaching complex situations, Amigo agents traverse problem spaces through structured context graphs, adaptive understanding, and accumulated experiential insight.

Context graphs allow agents to:

  • Follow Optimal Pathways: Use structured guidance to identify and navigate the best routes through complex problem spaces.

  • Adjust to Different Constraint Levels: Achieve high accuracy in critical scenarios while maintaining flexibility in less structured situations.

  • Maintain Critical Context: Preserve essential information to frame interactions, ensuring coherent, relevant, and contextually-informed responses.

  • Transform Knowledge into Navigable Structures: Organize knowledge domains into structured frameworks, facilitating efficient navigation.

  • Learn and Adapt: Continuously improve navigation strategies through reinforcement and ongoing interactions, resulting in increasingly refined and effective agent performance.

Solving the Token Bottleneck

Context graphs provide a practical solution to the token bottleneck of current AI models: the need to compress complex reasoning into limited text, with significant information loss at each step.

  1. Structured Problem Spaces

    Context graphs function as frameworks that map out the essential elements of complex problem spaces. By creating variable-constraint guidance—strict structure for critical protocols, flexible regions for exploratory reasoning—they provide structured support that compensates for the model's inability to maintain rich internal representations across reasoning steps.

    // Example: Creating structured guidance for medical reasoning
    {
      "assess_patient_symptoms": {
        "objective": "Systematically evaluate patient symptoms using structured medical reasoning framework",
        "action_guidelines": [
          "Begin with chief complaint and systematically assess related body systems",
          "Consider temporal relationship between symptoms",
          "Evaluate severity using standardized scales where appropriate",
          "Note potential correlations between seemingly unrelated symptoms"
        ]
      }
    }
  2. Structured Reasoning Path Preservation

    The token bottleneck severely limits models' ability to maintain complex reasoning chains. Context graphs address this by externalizing the reasoning structure, preserving critical path integrity:

    // Example: Preserving reasoning chain integrity through structured states
    {
      "states": {
        "identify_symptoms": { /* ... */ },
        "analyze_potential_causes": { /* ... */ },
        "assess_contextual_factors": { /* ... */ },
        "consider_differential_diagnoses": { /* ... */ },
        "recommend_next_steps": { /* ... */ }
      }
    }
  3. Domain-Specific Optimization

    Context graphs enable domain-specific optimization of reasoning pathways, creating more efficient externalization patterns tailored to each domain's specific needs:

    // Example: Domain-specific optimization for financial advisory
    {
      "financial_advisory_graph": {
        "global_action_guidelines": [
          "Always consider risk tolerance before making investment recommendations",
          "Frame discussions in terms of long-term goals rather than short-term performance",
          "Maintain detailed tracking of client risk profile across conversation"
        ]
      }
    }

Context Density Variance: Entropy Control in Action

Context density defines the balance between structure and autonomy, implementing entropy control at the operational level. In high-density regions, the agent operates with low entropy (few degrees of freedom), following very specific pathways with minimal deviation. In low-density regions, the agent operates with high entropy (many degrees of freedom), allowing identity and intuition to have greater influence while still being guided by the underlying structure.

This approach demonstrates strategic entropy management—applying the right level of constraint based on task requirements:

  • High-Density Contexts (Low Entropy): Structured interactions with strict adherence to defined protocols (e.g., regulatory compliance, safety procedures).

  • Medium-Density Contexts (Medium Entropy): Balanced interactions with guidance and controlled flexibility (e.g., coaching frameworks, domain consultations).

  • Low-Density Contexts (High Entropy): Open-ended interactions with minimal constraints, allowing intuitive exploration (e.g., creative ideation, exploratory discussions).

This entropy stratification ensures that agents apply appropriate constraint levels—tight control when precision matters, flexible adaptation when creativity is needed.
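
The density-to-entropy mapping above can be sketched as a small configuration lookup. The profile names, threshold values, and state fields here are illustrative assumptions, not the platform's actual schema:

```python
# Sketch: mapping context density to operational entropy (degrees of freedom).
# All names and values are illustrative assumptions.

DENSITY_PROFILES = {
    "high": {    # low entropy: strict protocol adherence
        "allowed_deviation": 0.0,
        "guidance": "Follow the defined protocol exactly; no improvisation.",
    },
    "medium": {  # medium entropy: guided flexibility
        "allowed_deviation": 0.5,
        "guidance": "Follow the framework, adapting wording and order to the user.",
    },
    "low": {     # high entropy: open-ended exploration
        "allowed_deviation": 1.0,
        "guidance": "Explore freely; let identity and intuition lead.",
    },
}

def constraint_for_state(state: dict) -> dict:
    """Return the entropy profile for a state, defaulting to medium density."""
    return DENSITY_PROFILES[state.get("context_density", "medium")]

# A medication-storage state would be marked high-density (low entropy)
medication_state = {"name": "storage_instructions", "context_density": "high"}
print(constraint_for_state(medication_state)["allowed_deviation"])  # 0.0
```

In this sketch, a state's density tag selects how tightly the agent is constrained; unmarked states default to the balanced middle profile.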

Example: Varying Context Density

High-Density (Medical Instruction)

User: "How should I store this medication?"

Agent: "Keep your medication in the refrigerator between 2-8°C (36-46°F).

Never freeze it.

After first use, it can be stored at room temperature below 30°C (86°F) for up to 28 days.

Always keep the cap on when not in use to protect from light."

Medium-Density (Coaching Conversation)

User: "I struggle with hunger in the evenings."

Agent: "Evening hunger can be challenging.

What types of foods typically appeal to you during those times?

Knowing your preferences helps us find suitable strategies that work with your lifestyle."

Low-Density (Exploratory Discussion)

User: "I need to figure out how to make time for self-care."

Agent: "Self-care looks different for everyone.

What activities genuinely recharge you?

Maybe we can find small pockets in your day that might work?"

This approach combines the dependability of structured processes with the adaptive insight characteristic of human expertise.

Multi-State Traversal: The Agent's Hidden Journey

Context graphs enable a crucial capability: agents can traverse multiple states internally between user interactions. This multi-state traversal allows for sophisticated reasoning and processing that remains invisible to users while ensuring coherent, contextual responses.

The Action State Guarantee

Every user interaction follows a fundamental rule: agents always start and end on action states. This guarantee ensures:

  • Users always receive concrete, actionable responses

  • The agent can take an arbitrary number of steps before responding

  • Each state can itself be composed of smaller quanta of action (such as tool calls)

  • Internal complexity remains hidden from view

  • Conversations feel natural and seamless

  • Agent personality shines through at interaction points

  • The response always arrives from an action state, regardless of the internal path taken

Traversal Patterns

Between action states, agents navigate through various internal states, creating processing "quanta": fundamental units of behavior that compose into complex interactions:

  • Simple: [A] action → [A] action (direct response)

  • Analytical: [A] action → [D] decision → [R] reflection → [A] action (thoughtful evaluation)

  • Memory-Enhanced: [A] action → [C] recall → [D] decision → [A] action (historically-informed response)

  • Complex: [A] action → [R] reflection → [C] recall → [D] decision → [A] action (deep processing)
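
The traversal patterns above, together with the action-state guarantee, can be sketched as a small walk over a state graph. The state types, the `next` field, and the example graph are illustrative assumptions:

```python
# Sketch: multi-state traversal with the action-state guarantee.
# State types and the example graph are illustrative assumptions.

ACTION, DECISION, REFLECTION, RECALL = "A", "D", "R", "C"

def traverse(graph: dict, start: str) -> list:
    """Walk internal states from one action state to the next.

    Enforces the guarantee that every traversal both starts and ends
    on an action state, so users only ever see action states.
    """
    if graph[start]["type"] != ACTION:
        raise ValueError("traversal must start on an action state")
    path = [start]
    current = start
    while graph[current].get("next") is not None:
        current = graph[current]["next"]
        path.append(current)
    if graph[current]["type"] != ACTION:
        raise ValueError("traversal must end on an action state")
    return path

# "Analytical" pattern: [A] action -> [D] decision -> [R] reflection -> [A] action
graph = {
    "ask":      {"type": ACTION, "next": "evaluate"},
    "evaluate": {"type": DECISION, "next": "reflect"},
    "reflect":  {"type": REFLECTION, "next": "respond"},
    "respond":  {"type": ACTION},
}
print(traverse(graph, "ask"))  # ['ask', 'evaluate', 'reflect', 'respond']
```

The intermediate decision and reflection states are exactly the "hidden journey": they appear in the path but never surface to the user.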

Three-Level Navigation Intelligence

Agents navigate these complex paths using three complementary information levels that provide both sparse global views and dense local resolution:

  1. Conceptual Level: Rich service descriptions providing the "why"

    • Sparse, conceptual global understanding of the entire service

    • Philosophy, methodology, and overall approach

    • Enables understanding of purpose across all states

  2. Structural Level: Abstract topology showing the "what"

    • Zoomed-out global map of all possible states and transitions

    • Bird's-eye view of the entire problem space

    • Allows seeing multiple steps ahead

  3. Local Level: Detailed state guidelines providing the "how"

    • Dense, high-resolution view of current state

    • Specific objectives, actions, and boundaries

    • Precise execution instructions

This multi-resolution approach is particularly powerful because it mirrors human expertise: a high-level understanding of the domain paired with detailed knowledge of specific procedures. Agents can:

  • Navigate strategically using global views

  • Execute precisely using local details

  • Balance big-picture thinking with focused action

  • Make intelligent decisions at every scale
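
The three information levels can be sketched as three layers merged into the agent's working context at each state. Field names and content here are illustrative assumptions:

```python
# Sketch: assembling the three navigation levels into one working context.
# Field names and example content are illustrative assumptions.

service = {
    # Conceptual level: sparse global "why"
    "description": "Coaching service grounded in motivational interviewing.",
    # Structural level: abstract topology, the global "what"
    "topology": {
        "identify_goal": ["explore_obstacles"],
        "explore_obstacles": ["plan_next_step"],
        "plan_next_step": [],
    },
    # Local level: dense per-state "how"
    "states": {
        "explore_obstacles": {
            "objective": "Surface concrete obstacles to the stated goal",
            "action_guidelines": ["Ask open questions", "Reflect before advising"],
        },
    },
}

def navigation_context(service: dict, current_state: str) -> dict:
    """Combine sparse global views with dense local resolution."""
    return {
        "why": service["description"],                    # conceptual level
        "what": list(service["topology"]),                # bird's-eye state map
        "reachable": service["topology"][current_state],  # look-ahead
        "how": service["states"].get(current_state, {}),  # local detail
    }

ctx = navigation_context(service, "explore_obstacles")
print(ctx["reachable"])  # ['plan_next_step']
```

The global layers stay constant across the whole traversal, while the dense "how" layer is swapped in per state, keeping the context window focused.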

Example: Invisible Therapeutic Processing

User: "I've been feeling really stuck in my career lately"

Internal Journey:

  1. [A] get_therapeutic_agreement_get_focus - Captures the client's concern about career

  2. [C] recall - Retrieves past career discussions and goals from previous sessions

  3. [R] reflect_on_therapeutic_agreement - Analyzes patterns between past aspirations and current stuck feeling

  4. [D] assess_focus_significance - Evaluates the personal meaning of this career concern

  5. [A] get_therapeutic_agreement_get_outcome - Explores what "unstuck" would look like for them

User Experience: A flowing conversation that feels deeply personalized, with the therapist demonstrating understanding of their career journey without revealing the complex analytical process happening between responses.

Integration with Functional Memory and Dynamic Behaviors: Orchestrating the M-K-R Cycle

Context graphs achieve their full potential not as standalone constructs, but as orchestrators of the dynamic, cyclical interplay between Memory, Knowledge, and Reasoning (M-K-R). They provide the structured pathways and decision points where these facets of the agent's cognition converge and influence each other. The goal is a high-bandwidth, unified system where improvements in one aspect naturally enhance the others.

  1. User Model Integration (Memory influencing Reasoning & Knowledge application): The dimensional structure of the user model (a key part of Functional Memory) constantly informs context graph navigation. This retrieved memory provides critical context that frames the agent's reasoning and shapes how its knowledge (activated by Dynamic Behaviors) is applied within the current state of the graph.

  2. Memory Layer Interaction (Memory powering Reasoning, Knowledge/Reasoning recontextualizing Memory): Different memory layers interact differently with context graphs:

    • Working Memory: Active memories retrieved during state traversal directly fuel immediate reasoning.

    • Conversation History: Recent interactions inform current context, influencing reasoning and potentially triggering knowledge retrieval or memory recontextualization.

    • Long-term Memory: Historical patterns and insights, retrieved through recall states within the graph, are brought into the reasoning process. New knowledge or reasoning outcomes can, in turn, recontextualize these long-term memories.

  3. Dynamic Behavior (Knowledge activation influencing Reasoning, shaped by Memory): Runtime adaptation of agent behavior based on:

    • Conversation context (which includes Memory)

    • User interactions

    • Previous agent responses

    • Triggered behavior instructions (which activate specific Knowledge)

    • Dynamic behaviors can modify the context graph both additively and by overwriting existing elements. This modification, driven by activated knowledge and current memory context, directly shapes the agent's reasoning pathways.

    • These modifications can cause specialized reasoning (like pausing to think through a medical lens based on specific knowledge and memory cues).

    • The modification always includes additional context infusion (knowledge and memory), and it can extend to new tool exposure, hand-offs to external systems, new exit conditions, specialized reasoning patterns, and more, all part of the integrated M-K-R process.
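
A triggered dynamic behavior's additive and overwriting modifications can be sketched as a merge over the graph definition. The behavior schema and state names are illustrative assumptions, not the platform's actual API:

```python
# Sketch: a triggered dynamic behavior modifying a context graph.
# Additive fields are appended, overwrite fields replace existing values;
# the schema and names are illustrative assumptions.

import copy

def apply_behavior(graph: dict, behavior: dict) -> dict:
    """Return a new graph with the behavior's modifications applied."""
    modified = copy.deepcopy(graph)  # leave the base graph untouched
    for state, extra in behavior.get("additive", {}).items():
        modified[state]["action_guidelines"].extend(extra)
    for state, replacement in behavior.get("overwrite", {}).items():
        modified[state].update(replacement)
    return modified

graph = {
    "assess": {
        "objective": "General assessment",
        "action_guidelines": ["Gather context"],
    }
}

# Hypothetical behavior triggered by a medical cue in conversation memory
medical_lens = {
    "additive": {"assess": ["Pause and reason through a medical lens"]},
    "overwrite": {"assess": {"objective": "Medically-informed assessment"}},
}

print(apply_behavior(graph, medical_lens)["assess"]["objective"])
# Medically-informed assessment
```

Because the merge returns a new graph, the base definition stays stable while each conversation gets its own runtime-adapted copy.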

Cross-Graph Navigation

Cross-graph navigation links related problem spaces hierarchically (like a "dream within a dream" in the film Inception) rather than merging them into one massive graph. This approach:

  1. Preserves Problem Space Separation: Maintains clean separation between distinct but related problem domains

  2. Enables Efficient Transitions: Allows seamless movement between specialized problem-solving frameworks

  3. Optimizes for Latency and Performance: Significantly improves both response time and computational efficiency

  4. Preserves Context Integrity: Maintains the logical connections between workflows while preventing context overload
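
Hierarchical linking between separate graphs can be sketched as a stack: the agent pushes into a linked sub-graph and pops back to the parent on exit. The graph names and the `GraphStack` class are illustrative assumptions:

```python
# Sketch: hierarchical cross-graph navigation as a stack of graphs.
# Entering a linked sub-graph pushes it; exiting pops back to the parent
# ("dream within a dream"). Names are illustrative assumptions.

class GraphStack:
    """Keep related problem spaces separate while linking them hierarchically."""

    def __init__(self, root: str):
        self.stack = [root]

    def enter(self, subgraph: str) -> None:
        # Transition into a specialized problem-solving framework
        self.stack.append(subgraph)

    def exit(self) -> str:
        # Return to the parent graph, whose context was preserved intact
        self.stack.pop()
        return self.stack[-1]

    @property
    def active(self) -> str:
        return self.stack[-1]

nav = GraphStack("general_coaching")
nav.enter("nutrition_planning")  # hop into a linked specialist graph
print(nav.active)                # nutrition_planning
print(nav.exit())                # general_coaching
```

Only the active graph's topology needs to be in context at any moment, which is where the latency and context-integrity benefits come from.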
