Dynamic Behaviors
What are Dynamic Behaviors?
Dynamic behaviors are a unified system that lets conversational agents manage knowledge, intuition, side-effects, actions, and data integration. In effect, they allow the selection agent to examine available options, modify its own local context graph topology, and trigger side-effects. The result is more natural, context-aware responses that enhance the user experience while maintaining the necessary guardrails.
How Dynamic Behaviors Work
Dynamic behaviors operate on a spectrum from strict compliance requirements (redlining) to flexible user experience enhancements. Here's how the system processes them:
1. Conversational Triggers
Triggers are patterns or keywords in user messages that may activate a specific behavior
Triggers are not required to be exact or close matches; they function only as relative ranking signals
Triggers can be:
Associative tags: loosely related concepts (e.g., "exercise," "anger," "protein")
Specific identifiers: mentions of particular items (e.g., a specific drug, or a food such as "chicken")
Thematic/conceptual: broader topics or semantic areas (better for product experience)
Exact matches: precise phrases that must appear (ideal for safety/compliance cases)
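The four trigger types above can be captured as simple data. This is an illustrative sketch only: the `BehaviorTrigger` class and `match_mode` field are hypothetical names, not the product's real schema.

```python
# Hypothetical representation of the four trigger types described above.
from dataclasses import dataclass

@dataclass
class BehaviorTrigger:
    match_mode: str   # "associative" | "identifier" | "thematic" | "exact"
    patterns: list    # tags, item names, topics, or exact phrases

triggers = [
    BehaviorTrigger("associative", ["exercise", "anger", "protein"]),
    BehaviorTrigger("identifier", ["chicken"]),
    BehaviorTrigger("thematic", ["meal planning"]),
    BehaviorTrigger("exact", ["I want to hurt myself"]),
]
```

Note that only the `exact` mode implies precise phrase detection; the other three exist to feed the relative ranking described next.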
2. Candidate Evaluation and Ranking
The system presents candidate instructions to the behavior selection agent
The selection agent conditions on the interaction logs and the user model
Candidates are ranked by relevance to the current context
Only the relative ranking matters, not absolute scores
Think of this as "the agent moving around the conversation with relevant content to optionally grab or ignore"
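The ranking step above can be sketched as follows. `score_relevance` is a stand-in for the selection agent's conditioning on interaction logs and the user model; the key point is that only the resulting order matters, not the absolute scores.

```python
# Minimal sketch: rank candidate behaviors by relevance to the current
# context. Only the relative ordering is used downstream.
def rank_candidates(candidates, context, score_relevance):
    scored = [(score_relevance(c, context), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored]  # relative order is the only output

ranked = rank_candidates(
    ["nutrition_tips", "injury_check", "crisis_protocol"],
    {"topic": "meal planning"},
    lambda c, ctx: 1.0 if "nutrition" in c else 0.1,  # toy scorer
)
# ranked[0] is the candidate most relevant to the context
```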
3. Selection Decision
The behavior selection agent has three options:
Continue using the previous dynamic behavior (if any) to maintain thread consistency
Select a new candidate behavior that better fits the current context
Select nothing if no behavior is appropriate
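The three-way decision can be sketched as a small function. The relevance threshold and the "stickiness" bonus that favors continuing the previous behavior are assumptions for illustration, not documented values.

```python
# Hypothetical sketch of the three-way selection decision.
def select_behavior(scored, previous=None, threshold=0.5, stickiness=0.1):
    """scored: list of (behavior_name, relevance) pairs."""
    best = max(scored, key=lambda pair: pair[1], default=(None, 0.0))
    prev_score = dict(scored).get(previous, 0.0)
    if previous is not None:
        prev_score += stickiness          # bias toward thread consistency
    if max(best[1], prev_score) < threshold:
        return None                       # option 3: select nothing
    if prev_score >= best[1]:
        return previous                   # option 1: continue previous behavior
    return best[0]                        # option 2: switch to a better fit
```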
4. Context Graph Integration
If a behavior is selected, it is injected into the action guidelines of the current context graph state
This modifies the problem topology, directly influencing agent behavior
The selected behavior becomes part of the agent's decision-making framework, but doesn't guarantee 100% enactment
The behavior merges with the local context graph state to create a more constrained, guided, or contextually rich environment
Every time a dynamic behavior is selected, the context graph is modified
Dynamic behaviors give higher resolution and scale much larger, to roughly 5 million characters without side-effects; with side-effects this can grow by another order of magnitude
The behavior can:
Enrich the response (for user experience)
Constrain the response (for safety/compliance)
Provide additional guidance
Add new tool exposure
Trigger hand-off to external systems
Add new exit conditions
Enable reflection and self-modification
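The merge step above can be sketched as a pure function over the local state. Field names such as `action_guidelines`, `tools`, and `exit_conditions` are illustrative, not the real context graph schema; note that the merged guidance influences the agent without guaranteeing enactment.

```python
# Hypothetical sketch: merge a selected behavior into the local
# context graph state without mutating the original.
def apply_behavior(state, behavior):
    merged = dict(state)
    for key in ("action_guidelines", "tools", "exit_conditions"):
        merged[key] = merged.get(key, []) + behavior.get(key, [])
    return merged  # the agent decides how (and whether) to enact these

state = apply_behavior(
    {"action_guidelines": ["be concise"]},
    {"action_guidelines": ["ask about injury history"],
     "tools": ["meal_planner"]},
)
```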
5. Contextual Implementation
The agent autonomously decides how to implement the behavior in a natural way
The behavior is integrated into the response rather than robotically inserted
Implementation Best Practices
Design Along the Instruction Flexibility Spectrum
Instructions can range widely in their prescriptiveness:
Open-Ended Guidance
Allows significant agent discretion
Provides general direction without strict requirements
Creates a knowledge-enriched environment for the agent
Best for creative, exploratory, or coaching conversations
Example: "You have access to these nutrition resources. Consider their relevance to the user's goals."
Structured Protocols
Provides clear, specific guidelines
Balances direction with some contextual adaptation
Ensures consistency while allowing natural conversation
Example: "When discussing exercise plans, ask about previous injury history before making recommendations."
Strict Instructions
Enforces precise behavior patterns
Minimizes agent discretion for safety-critical scenarios
Essential for regulatory compliance or high-risk situations
Example: "If the user mentions suicidal thoughts, immediately provide crisis resources and follow safety protocol X."
Separate Redlining from Product Experience
Split your dynamic behaviors into two categories:
Redlining (Safety/Compliance)
Purpose: Protect against medical emergencies, legal issues, or user harm
Trigger Design: Exact patterns that must be detected
Instructions: Can be more directive and specific
Testing: Subject to unit tests that must pass for every release
Example: Suicide prevention protocol, legal disclaimers
Product Experience
Purpose: Enhance user experience, provide relevant content
Trigger Design: Broader thematic matches (e.g., "exercise", "protein", "recipes")
Instructions: Provide options rather than mandates (e.g., "You have access to these resources, surface them if relevant")
Testing: Measured via metrics and user experience evaluation
Example: Suggesting relevant meal plans when user discusses nutrition
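The two categories above differ in trigger mode and instruction style, which a behavior registry can make explicit. All names and fields here are hypothetical illustrations of the split, not a real configuration format.

```python
# Hypothetical registry split into the two categories described above.
REDLINING = [
    {"name": "crisis_protocol",
     "triggers": {"mode": "exact", "patterns": ["hurt myself"]},
     # Directive and specific: must be enacted when triggered.
     "instruction": "Immediately provide crisis resources and follow safety protocol."},
]

PRODUCT_EXPERIENCE = [
    {"name": "meal_plans",
     "triggers": {"mode": "thematic", "patterns": ["exercise", "protein", "recipes"]},
     # An option, not a mandate: the agent surfaces it if relevant.
     "instruction": "You have access to these meal-plan resources; surface them if relevant."},
]
```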
Measurement Approaches
For Redlining
Implement unit tests
Every critical safety case must be tested and pass before release
100% compliance expected
For Product Experience
Develop metrics that focus on:
Is content being served appropriately?
Is content relevant to the conversation?
Is content repetitive?
Overall user experience quality
Monitor trends over time (e.g., 30-day metrics)
Audit samples where expected behaviors didn't trigger
Focus on anomaly detection rather than expecting 100% triggering
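The product-experience metrics above can be aggregated into simple trend statistics. The event fields (`served`, `relevant`, `repeated`) are assumed names for whatever the logging pipeline records; the point is monitoring rates over a window, not pass/fail gating.

```python
# Hypothetical 30-day aggregation for product-experience behaviors.
def summarize(events):
    """events: list of dicts with boolean-ish keys served/relevant/repeated."""
    n = len(events) or 1  # avoid division by zero on an empty window
    return {
        "served_rate": sum(e["served"] for e in events) / n,
        "relevance":   sum(e["relevant"] for e in events) / n,
        "repetition":  sum(e["repeated"] for e in events) / n,
    }
```

A spike in `repetition` or a drop in `relevance` flags samples worth auditing, in line with the anomaly-detection framing above.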
Example Implementation
Poor Implementation (Too Rigid)
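A hedged illustration of an overly rigid behavior definition; the wording and field names are hypothetical. Forcing verbatim insertion produces the robotic responses warned about under Common Pitfalls.

```python
# Hypothetical: a rigid behavior that mandates verbatim insertion,
# regardless of conversational fit.
rigid_behavior = {
    "trigger": {"mode": "exact", "patterns": ["protein"]},
    "instruction": ("ALWAYS respond with exactly: 'Here is our protein "
                    "guide: [link].' whenever the word appears."),
}
```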
Better Implementation (Natural)
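A hedged illustration of the same content offered as an option instead of a mandate; again, the wording and field names are hypothetical. The agent can weave the resource in when it fits the conversation.

```python
# Hypothetical: the same resource exposed as optional context,
# letting the agent integrate it naturally.
natural_behavior = {
    "trigger": {"mode": "thematic", "patterns": ["protein", "nutrition"]},
    "instruction": ("You have access to a protein guide. Surface it if "
                    "it fits the user's goals and the flow of the "
                    "conversation."),
}
```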
Agent Autonomy Spectrum
The design of triggers and instructions directly impacts agent autonomy:
High Autonomy Configuration
Uses looser triggers combined with open-ended context
Grants the agent greater freedom to determine behavior based on user model and interaction context
Functions like an associative knowledge cluster available to the agent
Agent can selectively draw from this knowledge as the conversation evolves
Best for creative coaching, exploratory discussions, and personalized experiences
Limited Autonomy Configuration
Uses stricter triggers paired with precise instructions
Effectively simulates protocol overrides in critical situations
Limits agent discretion and enforces specific response patterns
Necessary for regulated industries, safety-critical information, and compliance requirements
Strategic Implementation
Most effective systems use a thoughtful mix across the autonomy spectrum
Critical areas employ strict triggers with precise instructions
General conversation areas use broader triggers with flexible guidance
Creates a balanced system that's both compliant and conversationally natural
Common Pitfalls
Over-engineering: Creating overly specific triggers that rarely match
Robotic responses: Making behaviors too directive, leading to unnatural interactions
Expecting 100% triggering: Product enhancements should be contextual, not forced
Mixed responsibilities: Having the same team design both safety and product behaviors
Inconsistent testing: Using the wrong evaluation approach for different behavior types
Misunderstanding behavior integration: Expecting behaviors to be rigidly enacted rather than merged with the context graph state
Ignoring the selection process: Not accounting for the three options (continue previous, select new, select nothing) in behavior design