Context Graphs
Context graphs are structured frameworks that define and guide AI agents through complex problem spaces. They capture the shape of a problem: its boundaries, optimal paths, key decision points, reflection moments, and problem-solving directions. Unlike traditional flowcharts or decision trees, they provide both structure and flexibility, creating clear pathways while allowing agents to adapt to specific situations.
Context graphs operate on a fundamentally different principle than traditional AI control mechanisms:
Structured Problem Spaces: Instead of defining rigid sequences, context graphs create structured problem spaces that naturally guide agent behavior toward optimal solutions.
Variable Constraint Regions: Different areas within the graph apply different levels of constraint on agent behavior.
Incomplete by Design: They are intentionally "incomplete hierarchical state machines" that become fully realized through integration with memory systems and dynamic contextual understanding.
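These principles can be sketched as a minimal data structure. The names below (`State`, `ContextGraph`, the `density` levels) are illustrative assumptions, not the Amigo API; the point is that each region of the graph carries its own constraint level, and low-density regions are intentionally under-specified:

```python
from dataclasses import dataclass, field

@dataclass
class State:
    name: str
    density: str          # "high" | "medium" | "low" constraint region
    guidance: str         # instructions applied while in this state
    transitions: list = field(default_factory=list)  # reachable state names

@dataclass
class ContextGraph:
    states: dict = field(default_factory=dict)

    def add(self, state: State) -> None:
        self.states[state.name] = state

    def allowed_next(self, current: str) -> list:
        # High- and medium-density states restrict movement to declared
        # transitions; low-density states leave navigation to the agent's
        # own judgment -- the graph is "incomplete by design" there.
        state = self.states[current]
        if state.density == "low":
            return list(self.states)
        return state.transitions

graph = ContextGraph()
graph.add(State("intake", "high", "Follow the triage protocol verbatim",
                transitions=["assessment"]))
graph.add(State("assessment", "medium", "Use the coaching framework",
                transitions=["explore"]))
graph.add(State("explore", "low", "Ideate freely within the topic"))
```

In this sketch, the structure constrains the agent tightly in `intake` but only frames the possibility space in `explore`, mirroring the variable-constraint idea above.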
This approach mirrors how expert humans navigate complex problems - finding key decision points, recognizing patterns, and making informed choices within a structured space of possibilities. Much like skilled professionals approaching complex situations, Amigo agents intelligently traverse problem spaces through structured context graphs, adaptive understanding, and accumulated experiential insights.
Context graphs allow agents to:
Follow Optimal Pathways: Identify and navigate the best routes through complex problem spaces using structured guidance.
Adjust to Different Constraint Levels: Achieve high accuracy in critical scenarios while maintaining flexibility in less structured situations.
Maintain Critical Context: Preserve essential information to frame interactions, ensuring coherent, relevant, and contextually-informed responses.
Transform Knowledge into Navigable Structures: Organize knowledge domains into structured frameworks, facilitating efficient navigation.
Learn and Adapt: Continuously improve navigation strategies through reinforcement and ongoing interactions, resulting in increasingly refined and effective agent performance.
Context graphs provide a practical solution to a key limitation of current AI models, which compress complex reasoning into limited text with significant information loss.
Structured Problem Spaces
Context graphs function as frameworks that map out the important elements of complex problem spaces. By creating variable-constraint guidance—strict structure for critical protocols, flexible regions for exploratory reasoning—they provide structured support that compensates for the model's inability to maintain rich internal representations across reasoning steps.
Structured Reasoning Path Preservation
The token bottleneck severely limits models' ability to maintain complex reasoning chains. Context graphs address this by externalizing the reasoning structure, preserving critical path integrity:
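One way to picture this externalization: instead of asking the model to carry the full reasoning chain in its context window, each state's conclusion is checkpointed outside the model and only the compact record re-enters subsequent prompts. The sketch below is a hypothetical illustration; `summarize` stands in for a model call and is an assumption:

```python
def summarize(step_output: str, budget: int = 80) -> str:
    # Placeholder for a model-generated compression of one reasoning step.
    return step_output[:budget]

class ReasoningTrace:
    """Durable, ordered record of the path taken through the graph."""

    def __init__(self):
        self.checkpoints = []

    def record(self, state: str, conclusion: str) -> None:
        self.checkpoints.append({"state": state,
                                 "conclusion": summarize(conclusion)})

    def context_for_next_step(self) -> str:
        # Only compact checkpoints re-enter the prompt, not the full
        # intermediate reasoning, preserving path integrity cheaply.
        return "\n".join(f"[{c['state']}] {c['conclusion']}"
                         for c in self.checkpoints)

trace = ReasoningTrace()
trace.record("triage", "Symptoms match protocol branch B")
trace.record("history", "No contraindications found in record")
```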
Domain-Specific Optimization
Context graphs enable domain-specific optimization of reasoning pathways, creating more efficient externalization patterns tailored to each domain's specific needs:
Context density defines the balance between structure and autonomy. In high-density regions, the agent follows very specific pathways with minimal deviation. In low-density regions, the agent's identity and intuition have greater influence, allowing for more adaptability while still being guided by the underlying structure.
Amigo agents dynamically adjust behaviors based on the density of their current context graph:
High-Density Contexts: Structured interactions with strict adherence to defined protocols (e.g., regulatory compliance).
Medium-Density Contexts: Balanced interactions with guidance and controlled flexibility (e.g., coaching frameworks).
Low-Density Contexts: Open-ended interactions with minimal constraints, allowing intuitive exploration (e.g., creative ideation).
Example: Varying Context Density
High-Density (Medical Instruction)
Medium-Density (Coaching Conversation)
Low-Density (Exploratory Discussion)
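The three density levels above could map to concrete behavior settings. The parameter names in this sketch (`temperature`, `may_skip_steps`) are assumptions chosen for illustration, not Amigo configuration keys:

```python
# Illustrative mapping from context density to agent behavior settings.
DENSITY_POLICY = {
    "high":   {"temperature": 0.1, "may_skip_steps": False,
               "example": "medical instruction"},
    "medium": {"temperature": 0.5, "may_skip_steps": True,
               "example": "coaching conversation"},
    "low":    {"temperature": 0.9, "may_skip_steps": True,
               "example": "exploratory discussion"},
}

def policy_for(density: str) -> dict:
    """Return the behavior settings active in a region of given density."""
    return DENSITY_POLICY[density]
```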
This approach combines the dependability of structured processes with the adaptive insight characteristic of human expertise.
Context graphs achieve their full potential not as standalone constructs, but as orchestrators of the dynamic, cyclical interplay between Memory, Knowledge, and Reasoning (M-K-R). They provide the structured pathways and decision points where these facets of the agent's cognition converge and influence each other. The goal is a high-bandwidth, unified system where improvements in one aspect naturally enhance the others.
Memory Layer Interaction (Memory powering Reasoning, Knowledge/Reasoning recontextualizing Memory): Different memory layers interact differently with context graphs:
Working Memory: Active memories retrieved during state traversal directly fuel immediate reasoning.
Conversation History: Recent interactions inform current context, influencing reasoning and potentially triggering knowledge retrieval or memory recontextualization.
Long-term Memory: Historical patterns and insights, retrieved through recall states within the graph, are brought into the reasoning process. New knowledge or reasoning outcomes can, in turn, lead to the recontextualization of these long-term memories.
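The three layers above can be sketched as one memory system that both feeds reasoning and is rewritten by it. This is a hypothetical illustration; method names like `recall` and `recontextualize` are assumptions:

```python
from typing import Optional

class MemorySystem:
    def __init__(self):
        self.working = []        # active items fueling the current state
        self.conversation = []   # recent turns informing current context
        self.long_term = {}      # topic -> consolidated historical insight

    def recall(self, topic: str) -> Optional[str]:
        # A recall state within the graph would invoke this explicitly.
        return self.long_term.get(topic)

    def recontextualize(self, topic: str, new_insight: str) -> None:
        # New knowledge or reasoning outcomes can rewrite old memories.
        self.long_term[topic] = new_insight

mem = MemorySystem()
mem.recontextualize("sleep", "User reports insomnia on weekdays")
mem.working.append(mem.recall("sleep"))   # retrieved memory fuels reasoning
```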
Dynamic Behavior (Knowledge activation influencing Reasoning, shaped by Memory): Runtime adaptation of agent behavior based on:
Conversation context (which includes Memory)
User interactions
Previous agent responses
Triggered behavior instructions (which activate specific Knowledge)
Dynamic behaviors can completely modify the context graph in both additive and overwrite ways. This modification, driven by activated knowledge and current memory context, directly shapes the agent's reasoning pathways.
These modifications can cause specialized reasoning to occur (like pausing to think through a medical lens based on specific knowledge and memory cues).
The modification always includes additional context infusion (knowledge and memory) but can extend to new tool exposure, hand-off to external systems, new exit conditions, specialized reasoning patterns, and more – all part of the integrated M-K-R process.
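A minimal sketch of the additive-versus-overwrite distinction, with graphs as plain dictionaries; the trigger and merge mechanics here are assumptions for illustration only:

```python
def apply_behavior(graph: dict, behavior: dict) -> dict:
    """Return a modified copy of the graph after a behavior is triggered."""
    modified = dict(graph)
    if behavior["mode"] == "overwrite":
        modified.update(behavior["states"])       # replace matching states
    else:  # "additive"
        for name, state in behavior["states"].items():
            modified.setdefault(name, state)      # only add new states
    # Context infusion always accompanies a modification, regardless of mode.
    modified["_infused_context"] = behavior.get("context", [])
    return modified

base = {"assess": {"guidance": "general check-in"}}
medical_lens = {
    "mode": "overwrite",
    "states": {"assess": {"guidance": "reason through a medical lens"}},
    "context": ["relevant clinical knowledge", "recent symptom memories"],
}
updated = apply_behavior(base, medical_lens)
```

Here the triggered behavior overwrites the `assess` state to force specialized reasoning, while the infused context carries the activating knowledge and memory cues along with it.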
Cross-graph navigation allows for different related problem spaces to be linked hierarchically (like a "dream within a dream" from the movie Inception) but shouldn't form one massive graph. This approach:
Preserves Problem Space Separation: Maintains clean separation between distinct but related problem domains
Enables Efficient Transitions: Allows seamless movement between specialized problem-solving frameworks
Optimizes for Latency and Performance: Significantly improves both response time and computational efficiency
Preserves Context Integrity: Maintains the logical connections between workflows while preventing context overload
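The "dream within a dream" nesting can be modeled as a stack of self-contained graphs: a link state pushes into a sub-graph, and exiting pops back to the parent with its context intact. The class and graph names below are hypothetical:

```python
class GraphStack:
    """Hierarchical navigation across linked, separately-scoped graphs."""

    def __init__(self, root: str):
        self.stack = [root]

    def enter(self, subgraph: str) -> None:
        self.stack.append(subgraph)    # descend into a linked problem space

    def exit(self) -> str:
        # Return to the parent graph; its context was never merged away.
        return self.stack.pop()

    @property
    def active(self) -> str:
        return self.stack[-1]

nav = GraphStack("primary_care_visit")
nav.enter("medication_review")         # specialized linked graph
nav.exit()                             # back to the parent
```

Because each graph stays separate, only the active graph's context is in play at any moment, which is what keeps latency low and prevents context overload.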
User Model Integration (Memory influencing Reasoning & Knowledge application): The dimensional structure of the user model (a key part of the memory system) constantly informs context graph navigation. This retrieved memory provides critical context that frames the agent's reasoning and shapes how its knowledge (activated by triggered behavior instructions) is applied within the current state of the graph.