Dynamic Behaviors
Dynamic behaviors are general modifiers that adapt the compositional system at runtime. They modify optimization constraints, adjust entry and exit conditions, add interpretive framing, and reshape how the partial arc fragments from context graphs compose with other components to form complete quantized arcs.
Dynamic Behaviors as Runtime Problem Definition Changers
A key innovation of dynamic behaviors is their role as runtime problem definition changers—a concept fundamentally different from traditional retrieval-augmented generation (RAG) approaches. Traditional RAG systems retrieve relevant content and inject it into context, but they lack the ability to modify the constraints of an existing problem at runtime.
Dynamic behaviors go further: they can fundamentally transform the optimization problem the agent is solving at any given moment. When a dynamic behavior activates, it doesn't just add information—it can reshape the entire problem definition by:
Modifying optimization constraints: Changing what the agent is trying to achieve
Adjusting available tools: Adding, removing, or replacing the tools available for problem-solving
Transforming action guidelines: Merging new guidelines or completely overriding existing ones
Reshaping exit conditions: Defining new success criteria for the current interaction
This means context graphs provide only partial problem definitions—incomplete fragments that represent the skeletal structure of a problem. Dynamic behaviors complete these fragments at runtime based on conversational context, user state, and emerging patterns. The result is a system that can handle variations that only become apparent during execution, creating a level of adaptability that static retrieval systems cannot achieve.
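As a sketch of this runtime completion (the data structures and field names below are illustrative assumptions, not the actual Amigo implementation), a behavior might complete a partial problem definition like this:

```python
from dataclasses import dataclass, field

@dataclass
class ProblemDefinition:
    """Partial problem definition carried by a context-graph fragment."""
    objective: str
    tools: set = field(default_factory=set)
    guidelines: list = field(default_factory=list)
    exit_conditions: list = field(default_factory=list)

def apply_behavior(problem, behavior):
    """Complete a partial definition at runtime; every key is optional."""
    if "objective" in behavior:                           # modify optimization constraints
        problem.objective = behavior["objective"]
    problem.tools |= set(behavior.get("add_tools", []))   # adjust available tools
    problem.tools -= set(behavior.get("remove_tools", []))
    if behavior.get("override_guidelines"):               # transform action guidelines
        problem.guidelines = list(behavior.get("guidelines", []))
    else:
        problem.guidelines += behavior.get("guidelines", [])
    problem.exit_conditions += behavior.get("exit_conditions", [])  # reshape exit conditions
    return problem

# A skeletal fragment from the context graph...
fragment = ProblemDefinition(objective="answer the user's fitness question",
                             tools={"search"})
# ...completed at runtime by an activated behavior.
completed = apply_behavior(fragment, {
    "objective": "assess safety before giving advice",
    "add_tools": ["medical_history"],
    "exit_conditions": ["user acknowledges safety guidance"],
})
print(completed.objective)
print(sorted(completed.tools))
```

The fragment alone is intentionally incomplete; only the behavior's runtime modifications turn it into a fully specified problem.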
Behaviors as Runtime Modifiers
Dynamic behaviors operate at the composition layer, influencing which arcs execute and under what conditions. They modify this composition process, adjusting how fragments combine with agent identity, memory states, and available actions to become executable arcs.
Related reading
Knowledge explains how behaviors prime the model’s latent space using measurement-backed reframing.
Pattern Discovery and Optimization shows how successful behaviors graduate through the verification evolutionary chamber.
How Dynamic Behaviors Work
Dynamic behaviors influence the system through multiple mechanisms:
Optimization Constraints: Modify the objective functions that guide arc selection, shifting priorities based on detected conditions
Entry/Exit Conditions: Dynamically adjust the predicates that must be satisfied for arc activation and completion
Interpretive Framing: Add new lenses through which measurements are interpreted and sufficient statistics are evaluated
Side-Effect Framework: Trigger actions that modify the compositional structure, update the arc catalogue, or signal blueprint evolution needs
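As one hedged illustration of the entry/exit mechanism (the predicate style and the conversation-state fields such as `risk_flag` are assumptions for this sketch), a behavior might tighten an arc's activation and completion predicates like this:

```python
# Illustrative arc with entry/exit predicates over conversation state.
arc = {
    "entry": lambda s: "injury" in s["topics"],
    "exit": lambda s: s["safety_ack"],
}

def tighten_predicates(arc):
    """A behavior dynamically adjusts the predicates: entry additionally
    requires a risk flag on the user model; exit also requires a referral."""
    old_entry, old_exit = arc["entry"], arc["exit"]
    return {
        "entry": lambda s: old_entry(s) and s["risk_flag"],
        "exit": lambda s: old_exit(s) and s["referred"],
    }

state = {"topics": {"injury"}, "safety_ack": True,
         "risk_flag": True, "referred": False}
tight = tighten_predicates(arc)
print(tight["entry"](state), tight["exit"](state))  # True False
```

Note how the arc can now activate yet refuses to complete until the new exit condition is met, which is the behavior-driven reshaping the list above describes.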
A Comprehensive Action System
Dynamic behaviors represent a sophisticated action system that can:
Execute Complex Tool Calling Sequences: Trigger multi-stage tool calling workflows based on conversational context
Deep System Integration: Connect with enterprise systems to retrieve, analyze, and act on real-time data
Context Graph Modification: Completely transform the problem-solving topology by adding new states, pathways, and exit conditions
Specialized Reasoning Activation: Pause conversation flow to perform deep reflection through domain-specific lenses
Override Local Guidelines: Knock out existing state guidelines when safety or compliance issues are detected
Cross-Domain Coordination: Orchestrate seamless transitions between different specialized knowledge domains
This comprehensive framework means dynamic behaviors aren't just about retrieving knowledge—they're about fundamentally transforming how the agent operates in response to conversation context.
Here is how a typical dynamic behavior is structured and implemented:
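The sketch below is a hypothetical illustration of that structure (every field name and value is an assumption for illustration, not the actual Amigo configuration schema):

```python
# Hypothetical dynamic-behavior definition; the field names here are
# illustrative, not the real Amigo schema.
extreme_training_behavior = {
    "name": "reframe_extreme_training_goals",
    # Conversational triggers: patterns embedded and matched against
    # the interaction vectors at runtime.
    "conversational_triggers": [
        "user sets an aggressive or unsafe training goal",
        "user asks how to train hard every day without rest",
    ],
    # Instructions: the action blueprint once a trigger activates.
    "instructions": (
        "Share evidence about recovery and progressive training, "
        "ask about previous exercise experience, reframe toward a "
        "sustainable progression, and ask what would fit the user's "
        "lifestyle."
    ),
    # Optional side-effects on the compositional structure.
    "side_effects": {
        "add_exit_condition": "user accepts a progressive plan",
    },
}
print(extreme_training_behavior["name"])
```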
Consider how this dynamic behavior transforms a conversation. Without the behavior, the agent simply answers the user's question as asked; with the behavior applied, the response improves in several ways:
Introducing Evidence-Based Context: Sharing research about recovery and progressive training
Personalizing the Interaction: Asking about previous exercise experience
Reframing the Goal: Shifting from extreme training to sustainable progression
Providing Actionable Alternatives: Suggesting a more balanced training approach
Supporting Agency: Asking what would work with their lifestyle
Anatomy of a Dynamic Behavior
All dynamic behaviors are made up of two key components:
Conversational Triggers act as the sensory system, detecting patterns and topics in conversations that indicate when specific behaviors might be relevant. These triggers can range from explicit keywords to subtle contextual cues.
Instructions serve as the action blueprint, guiding how the agent should behave once a trigger has been activated. These instructions can vary widely in their specificity, from general guidance allowing significant discretion to precise protocols demanding exact behaviors.
How Triggers Work: Multi-Vector Broadcast System
The Amigo system uses a multi-vector broadcast approach to evaluate and rank potential dynamic behaviors. This creates a densely connected network where dynamic behaviors are linked through reasoning patterns, conversation outputs, user inputs, tool interactions, and side-effects.
The Multi-Vector Broadcast Architecture
The system generates embeddings across multiple interaction elements and broadcasts them against all conversational triggers simultaneously. This approach is critical for handling sharp conversational pivots—situations where a user suddenly shifts from one topic to an entirely different one (e.g., from discussing their puppy to expressing suicidal thoughts).
The core broadcast vectors include:
Standalone Agent Message Vector: What the agent just said, independent of user input
Standalone User Message Vector: What the user just said, independent of agent context
Combined Agent + User Message Vector: The fused embedding of both messages together, capturing the shared conversational context
Agent Inner Thought Vector: The agent's internal reasoning that may not be expressed to the user (e.g., recognizing concerning patterns)
External Events Vector: Events occurring outside the conversation (sensor data, notifications, system triggers)
Action/Tool Call Vector: Previous tool usage patterns that influence which behaviors might leverage similar capabilities
Why Multiple Vectors Matter
Consider a scenario where a user is discussing their puppy, then suddenly mentions a family emergency. If you only embedded the combined agent + user message, the puppy context might dilute the emergency signal. By broadcasting against standalone vectors, the system can detect sharp conversational turns where individual elements carry distinct semantic weight.
The standalone vectors catch topic pivots that would be missed by combined embeddings alone.
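A toy cosine-similarity calculation makes the dilution effect concrete (the 2-d "embeddings" here are contrived for illustration; real embeddings are high-dimensional):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy 2-d "embeddings": axis 0 ~ pet-care topic, axis 1 ~ emergency topic.
agent_msg = [1.0, 0.0]    # agent was discussing the puppy
user_msg = [0.2, 1.0]     # user pivots to a family emergency
combined = [(a + u) / 2 for a, u in zip(agent_msg, user_msg)]

emergency_trigger = [0.0, 1.0]

print(round(cosine(combined, emergency_trigger), 2))  # 0.64 -- diluted signal
print(round(cosine(user_msg, emergency_trigger), 2))  # 0.98 -- pivot caught
```

The combined vector scores noticeably lower against the emergency trigger than the standalone user vector does, which is exactly why the system broadcasts both.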
The Selection Process
These vectors are broadcast against all conversational triggers using cosine similarity. The process works as follows:
Generate embeddings for each vector from the current interaction
Broadcast each embedding against all trigger embeddings
Rank behaviors based on the highest similarity scores across all vector-trigger combinations
Select top candidates (typically the top 20) for logical evaluation
LLM selection determines which behavior from the candidate pool best fits the current context
The selection considers:
The complete user model (preferences, tier, history, location)
The agent's identity and service context
The previously selected dynamic behavior (which is always carried forward as a candidate)
The full interaction history in detail
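The broadcast-and-rank steps above can be sketched as follows (the similarity function, trigger index, and toy 2-d vectors are illustrative assumptions; the real system's embeddings are high-dimensional and the final choice is made by the LLM, not this ranking):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def rank_candidates(interaction_vectors, trigger_index, previous=None, top_k=20):
    """Broadcast every interaction vector against every trigger, keep each
    behavior's best score, and return the top_k candidates; the previously
    selected behavior is always carried forward into the pool."""
    scores = {}
    for vec in interaction_vectors:
        for behavior, trigger_vec in trigger_index.items():
            s = cosine(vec, trigger_vec)
            scores[behavior] = max(scores.get(behavior, -1.0), s)
    ranked = sorted(scores, key=scores.get, reverse=True)[:top_k]
    if previous is not None and previous not in ranked:
        ranked.append(previous)
    return ranked

# Toy 2-d vectors and triggers (contrived for illustration).
trigger_index = {"cardiac_assessment": [0.0, 1.0],
                 "exercise_tips": [1.0, 0.0]}
vectors = [[0.9, 0.1],   # user message vector
           [0.2, 1.0]]   # agent inner-thought vector
print(rank_candidates(vectors, trigger_index, previous="sleep_coaching"))
```

The returned pool, including the carried-forward behavior, is what the LLM then evaluates against the user model, agent identity, and interaction history.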
Behavior Persistence and Re-Selection
A critical design feature is that the previously selected dynamic behavior is always included in the candidate pool for the next interaction. This creates behavioral continuity without being deterministic:
The LLM can re-select the same behavior if context warrants continuation
The LLM can select a new behavior if the conversation has shifted
The LLM can select nothing if no behavior is contextually appropriate
This persistence mechanism ensures that behaviors don't abruptly disappear when topics evolve—the semantic relevance naturally decays over turns as the conversation moves away from the original trigger context. The system maintains continuity while allowing natural transitions.
Behavior Stickiness vs. Decay
Previously active behaviors remain candidates even without new trigger matches, but their effective relevance decreases as conversational distance increases. This creates a natural "behavior decay" where behaviors gracefully exit the active consideration set rather than being abruptly terminated.
This approach connects behaviors through a web of reasoning, thoughts, outputs, inputs, and system interactions. When one behavior is activated, it shifts this web and influences future behavior selection. This creates a fluid conversation experience that adapts to emerging patterns while maintaining coherence.
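The stickiness-with-decay dynamic can be sketched numerically (the multiplicative decay factor and retirement threshold below are illustrative assumptions, not documented system parameters):

```python
DECAY = 0.7    # assumed per-turn multiplier on a carried-forward behavior
FLOOR = 0.2    # assumed threshold below which the behavior retires

def carried_relevance(initial_score, turns_since_trigger):
    """Effective relevance of a previously selected behavior after N turns
    with no new trigger match."""
    return initial_score * DECAY ** turns_since_trigger

for turn in range(6):
    r = carried_relevance(0.9, turn)
    status = "candidate" if r >= FLOOR else "retired"
    print(turn, round(r, 3), status)
```

Under these assumed parameters the behavior remains a candidate for several turns and then exits consideration gracefully rather than being cut off the moment its trigger stops matching.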
Practical Applications: Topic Transitions and Conversation Flow
The system excels at managing natural topic transitions. If a conversation shifts from nutrition to exercise, for example, the system detects the topic bridge and selects a behavior that spans both domains, adjusting behavior selection without losing the thread of health-related context and creating a natural conversation flow across the topic shift.
Advanced Example: Implicit Health Issue Detection
The multi-dimensional embedding system can detect potential health concerns even when users don't explicitly mention them. Consider a user who frames a query around exercise while describing subtle symptoms and contextual clues that suggest possible cardiac issues. This scenario illustrates several key aspects of the multi-dimensional embedding system:
Pattern Recognition Through Agent Thinking: The agent internally recognizes the constellation of symptoms that might indicate cardiac issues, even though the user never mentioned heart problems
Multiple Vector Activation: Several vectors activate simultaneously, raising different candidate behaviors in the pool
Tool Usage Influencing Candidacy: The medical history tool retrieves critical risk factors that significantly boost the cardiac assessment behavior's ranking
Attribute-Driven Selection Shift: New attributes from the tool call (age, hypertension, family history) dramatically alter behavior selection
Context Modification: The selected behavior modifies the context graph to add appropriate follow-up paths and safety exit conditions
The result is that potentially serious health concerns are identified and addressed appropriately, even when the user frames their query around exercise rather than health concerns. The interconnected embedding system ensures that multiple factors—agent medical knowledge, user symptoms, medical history data, and risk factor analysis—all contribute to selecting the most appropriate behavior.
The impact of this approach includes:
More natural conversation flow that doesn't feel scripted
Consistent agent personality even as conversational focus shifts
Contextually appropriate responses that build on prior exchanges
Fluid transitions between topics without abrupt changes
Persistent themes that carry through conversations even as specific topics change
Coherent integration of tool usage and side-effects with conversational elements
System actions that maintain continuity with conversation context
Detection of implicit concerns that users may not directly express
Appropriate safety protocols triggered by pattern recognition rather than explicit mentions
How Instructions Are Applied: The Instruction Flexibility Spectrum
Selecting a dynamic behavior doesn't guarantee its enactment in a specific manner. This is by design: rather than being a simple "if-then statement" that dictates exact outputs, instructions are seamlessly integrated into the action guidelines of the current state of the context graph. This allows the system to adapt behaviors to specific conversational nuances while preserving overall intent.
Importantly, the flexibility of instructions exists along an instruction flexibility spectrum—implementing entropy control by strategically managing the degrees of freedom available to the agent:
High-Entropy Instructions (Maximum Degrees of Freedom) Vague triggers paired with open context create more autonomous agents. This approach functions like an associative knowledge cluster that the agent can freely draw from as the conversation evolves, intelligently determining behavior based on the user model and interaction context. Such flexibility is particularly valuable in creative, exploratory, or coaching conversations where adaptability outweighs the need for strict adherence to protocols.
Low-Entropy Instructions (Minimal Degrees of Freedom) Strict triggers combined with precise instructions effectively simulate protocol overrides, creating highly constrained decision spaces for predictable behavior. This approach ensures regulatory compliance and consistent handling of sensitive topics. Such strictness is essential in safety-critical contexts where consistent and compliant situation-handling is paramount.
Strategic Entropy Management Most real-world deployments strategically implement a balanced mix across this spectrum (as described in the system components overview). This instruction flexibility approach creates systems that successfully navigate the tension between strict compliance standards and conversational adaptability. The adaptive nature of Amigo's dynamic behavior system enriches actions with contextual awareness, enabling more human-like interaction patterns that evolve alongside the conversation itself while applying appropriate constraint levels based on situational requirements.
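As a hypothetical contrast between the two ends of the spectrum (all field names are illustrative assumptions, not the Amigo schema):

```python
# Hypothetical behaviors at opposite ends of the instruction flexibility
# spectrum; the field names are illustrative assumptions.
high_entropy_behavior = {
    "name": "exploratory_coaching",
    "conversational_triggers": ["user reflects on habits or motivation"],
    # Open instructions leave the agent many degrees of freedom.
    "instructions": ("Draw on the user model and conversation context to "
                     "coach adaptively; choose tone, depth, and follow-up "
                     "questions as the conversation evolves."),
}

low_entropy_behavior = {
    "name": "adverse_event_protocol",
    "conversational_triggers": ["user reports a suspected medication side effect"],
    # Precise instructions simulate a protocol override.
    "instructions": ("Follow the adverse-event script exactly: confirm the "
                     "medication, log the report, and direct the user to "
                     "their prescriber. Do not speculate about causes."),
    "override_local_guidelines": True,
}
print(high_entropy_behavior["name"], low_entropy_behavior["name"])
```

A real deployment would mix behaviors along this spectrum, reserving the low-entropy end for safety-critical and compliance-sensitive paths.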
Automated Optimization Through Agent Forge
Agent Forge revolutionizes dynamic behavior development by enabling coding agents to automatically optimize behavior configurations based on performance data. Rather than manually crafting and refining behaviors, coding agents can systematically analyze which behavior patterns deliver the best outcomes and automatically adjust trigger patterns, instruction specificity, and side-effect configurations. This transforms dynamic behavior evolution from a manual process into a data-driven optimization system that scales with deployment complexity while maintaining human oversight for safety and compliance.