Dynamic Behaviors Guide
Introduction & Core Philosophy
Dynamic behaviors are contextual instruction sets that solve specific user scenarios through constrained optimization rather than rigid command execution. Modern LLMs require clear context and operational constraints to think effectively—they cannot simply follow commands without understanding their environment and limitations.
The Cognitive Model: Dynamic behaviors operate like human cognitive selection:
Candidate Formation - Triggers create semantic associations that bring relevant behaviors "to mind"
Logical Selection - LLM analysis determines which candidates best fit the context
Problem Solving - LLM reasoning applies the selected behavior's constraints
This mirrors how humans handle situations: relevant options surface through association, logical thinking selects the best approach, then problem-solving executes within that framework.
For LLM Agents: When implementing dynamic behaviors, treat each as an optimization problem with clear constraints rather than a simple if-then rule. This applies to ALL behaviors, including safety-critical ones—the difference lies in the tightness of constraints, not the fundamental approach.
Fundamental Concepts
Behavioral Categories
Redlining (Safety/Compliance) behaviors serve as the system's safety net, protecting against medical emergencies, legal liability, and potential user harm. These behaviors are engineered with reliable semantic ranking to ensure they surface in safety-critical contexts, functioning as optimization problems with very tight constraints that may include verbatim text requirements. Redlining behaviors undergo rigorous unit testing that must pass for every release, maintaining 100% compliance expectations. Common examples include suicide prevention protocols, legal disclaimers, and medical emergency responses.
Implementation Pattern:
Trigger: ["suicide", "kill myself", "end my life", "want to die"]
Override: true
Instruction: "SOLE OPTIMIZATION: This is a mental health emergency.
REQUIRED: Immediately provide 988 crisis hotline. Express care and validate feelings.
FORBIDDEN: Any other topics, coping strategies without professional help, philosophical discussions.
CONTEXT: User's life is at risk. Nothing else matters right now.
OPTIMIZE FOR: Getting user connected to professional help immediately."
Product Experience Enhancement
Product Experience Enhancement behaviors focus on improving user experience through relevant content and personalized interactions. Unlike safety behaviors, these are designed to create appropriate semantic associations and are heavily influenced by the user model including tier, preferences, and interaction history. These behaviors provide options rather than mandates, allowing the agent flexibility in implementation. Success is measured through metrics and user experience evaluation rather than strict compliance testing. Examples include suggesting meal plans, offering workout recommendations, and providing contextual tips.
Implementation Pattern:
Trigger: ["protein", "nutrition", "workout", "exercise", "diet"]
Override: false
Instruction: "ENRICHMENT: Add protein guidance to your responses when contextually relevant.
ADDITIONAL CAPABILITIES: Access to 10 high-protein recipe deep links, knowledge of protein timing for muscle growth.
CONSTRAINTS TO ADD: Recommend 0.8-1.2g per kg body weight. Mention both animal and plant sources.
WORK WITHIN: Existing dietary restrictions and health boundaries.
ENHANCE: Current conversation with practical protein insights."
Note: Premium users might see this behavior selected more frequently than free users during Stage 2.
Dynamic Behavior Spectrum
REPLACEMENT MODE (Override ON)  ◄──────────────────►  ENRICHMENT MODE (Override OFF)
|-------------------|-------------------|-------------------|-------------------|
1. Sole Optimization Problem: replaces all guidelines
2. Emergency Override Focus: replaces most guidelines
3. Added Constraints to Optimization: merges with strong requirements
4. Natural Addition to Guidelines: merges naturally with existing
5. Enhanced Experience: gentle enhancement of existing guidelines
When moving from left to right:
Far Left: Complete replacement of optimization problem
Center: Enriching existing problem with new constraints
Far Right: Light additions that blend with current guidelines
Terminology Note: While this guide uses "override=true/false" for clarity, the system logs display this as:
override=true → [MODE: OVERWRITE]
override=false → [MODE: MERGE]
Both refer to the same functionality.
Engineering Decision Framework
These are engineering considerations, not rigid rules:
Risk Assessment → Influences semantic signal strength
High Risk: Engineer for strong, reliable semantic associations
Medium Risk: Balance signal strength with contextual flexibility
Low Risk: Optimize for natural conversational flow
Content Determinism → Influences constraint tightness
High Determinism: Very tight constraints in Stage 3
Medium Determinism: Moderate constraints with some flexibility
Low Determinism: Open constraints allowing creative solutions
Context Compatibility → Influences override flag decision
Incompatible with existing optimization: Must use override=true (replacement)
Partially Compatible: Usually override=false (enrichment) with careful constraint design
Highly Compatible: Default to override=false (natural enrichment)
Key Question: Can this behavior's requirements be satisfied alongside existing guidelines, or would they create an impossible optimization problem?
Response Urgency → Influences selection priority
Immediate Action: High selection weight in Stage 2
Timely Response: Moderate selection priority
Non-time-sensitive: Natural selection based on fit
Remember: These factors guide engineering decisions about how to configure the three stages, not prescriptive rules about behavior types.
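The Context Compatibility factor above can be reduced to a small helper function. This is a minimal sketch: the three-way compatibility label and the function name are hypothetical, and a real configuration decision would weigh more factors than this switch captures.

```python
def choose_override_flag(compatibility: str) -> bool:
    """Map context compatibility to an override flag, per the framework above.

    `compatibility` is an illustrative label: "incompatible", "partial",
    or "compatible". Returns True (replacement) only when the behavior
    cannot coexist with existing guidelines; otherwise defaults to False
    (enrichment), with tighter constraint design expected for "partial".
    """
    if compatibility == "incompatible":
        return True  # replacement: the existing optimization must be displaced
    return False     # enrichment: merge with existing guidelines
```

The default-to-false shape mirrors the framework's key question: replacement is reserved for requirements that would otherwise create an impossible optimization problem.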
Agent Autonomy Spectrum
High Autonomy Configuration
Uses broader triggers with open context
Grants agent freedom to determine behavior based on user model and interaction context
Functions like associative knowledge cluster available to the agent
Best for creative coaching, exploratory discussions, personalized experiences
Limited Autonomy Configuration
Uses strict triggers with precise instructions
Operates within tightly constrained solution spaces
Limits agent discretion through narrow optimization boundaries (not rigid commands)
Necessary for regulated industries, safety-critical information, compliance requirements
Trigger Engineering Guidelines
Triggers are semantic ranking tools that control what enters the candidate pool. Engineer them based on your desired outcome, not behavior category.
Engineering Choice: Strong Semantic Signal
Use when you want behaviors to reliably surface in candidate pool:
Multiple related terms: ["suicide", "kill myself", "end my life", "want to die", "suicidal thoughts"]
Semantic variations: ["heart attack", "chest pain can't breathe", "crushing chest pressure"]
Context reinforcers: ["overdose", "took too many pills", "accidentally doubled medication"]
Note: This is an engineering choice for reliability, not a requirement for safety behaviors.
Engineering Choice: Flexible Semantic Coverage
Use when you want natural contextual activation through thematic clusters like "nutrition, healthy eating, meal planning, diet" or activity domains such as "exercise, workout, fitness, training." Emotional themes including "stress, anxiety, overwhelmed, mental health" also work well for this approach. This engineering choice prioritizes flexibility and isn't limited to enhancement behaviors.
Engineering Strategies for trigger design involve two key considerations. Semantic Density Control allows you to choose between more triggers for stronger associative signals or fewer triggers for lighter activation touch—any behavior can use 10+ variants or just 2-3 core terms based on your specific needs. Semantic Distance Management lets you control activation scope through close synonyms for tight semantic clustering or related concepts for broader activation potential.
Example: ["workout", "exercise"] vs ["fitness", "health", "wellness"]
Specificity as a Tool gives you control over activation scope. Ultra-specific triggers like "type 2 diabetes insulin management" provide precise targeting, while moderately specific triggers such as "diabetes management" offer balanced coverage. Broadly associative triggers including "health, wellness" cast a wide semantic net for broader activation.
The key principle is to choose your trigger strategy based on how you want the behavior to surface in the candidate pool, not based on predetermined category rules.
Technical Architecture
How Dynamic Behaviors Actually Work
The dynamic behavior system operates through a three-stage process that mirrors cognitive selection:
Stage 1: Candidate Pool Formation (Triggers as Semantic Ranking Tools)
Triggers are semantic ranking instruments that can be engineered to be as specific or vague as needed to control what enters the candidate pool. They work by measuring associative strength across multiple inputs:
Triggers measure associative strength across the agent's message, the user's message, combined conversational context, and the agent's inner thought processes including internal reasoning and planning.
These inputs are pooled together based on associative strength to form a candidate set.
Key Insight: Triggers are NOT pattern matchers or rules - they're semantic associators that help relevant behaviors surface in the candidate pool.
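A toy sketch of Stage 1 can make the ranking idea concrete. A production system would score associative strength with embeddings; this bag-of-words cosine similarity is a stand-in, and the function names and threshold are illustrative assumptions.

```python
from collections import Counter
from math import sqrt

def association_score(triggers: list[str], inputs: list[str]) -> float:
    """Toy associative-strength score: cosine similarity between the trigger
    vocabulary and the pooled inputs (agent message, user message, combined
    context, inner thought). Stands in for embedding-based ranking."""
    trigger_vec = Counter(word for t in triggers for word in t.lower().split())
    input_vec = Counter(word for text in inputs for word in text.lower().split())
    dot = sum(trigger_vec[w] * input_vec[w] for w in trigger_vec)
    norm = sqrt(sum(v * v for v in trigger_vec.values())) * \
           sqrt(sum(v * v for v in input_vec.values()))
    return dot / norm if norm else 0.0

def form_candidate_pool(behaviors: dict[str, list[str]],
                        inputs: list[str], threshold: float = 0.05) -> list[str]:
    """Rank behaviors by associative strength; keep those above a threshold."""
    scored = {name: association_score(trigs, inputs)
              for name, trigs in behaviors.items()}
    return sorted((n for n, s in scored.items() if s > threshold),
                  key=lambda n: -scored[n])

# A stress-themed turn pulls the stress behavior into the pool; the
# nutrition behavior never surfaces because nothing associates with it.
pool = form_candidate_pool(
    {"stress-support": ["stress", "anxiety", "overwhelmed"],
     "protein-tips": ["protein", "nutrition", "diet"]},
    ["I feel stressed and overwhelmed", "tell me more"])
```

Note that "overwhelmed" alone is enough to surface the behavior: ranking is about association strength, not exact pattern matches.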
Stage 2: LLM Analysis (Selection from Pool)
Once the candidate pool is formed, LLM analysis determines which behavior(s) are most applicable to the current context. This is where logical reasoning evaluates:
Which candidates best fit the conversational flow
What the user actually needs
How different behaviors might interact or conflict
The overall optimization landscape
Stage 3: LLM Reasoning (Application)
The selected behavior is then applied through LLM reasoning, which solves the optimization problem within the behavior's constraints. This is where the actual response is generated.
Mental Model:
Pop candidates into mind → Logically think what's applicable → Solve problem on changed foundation
Visual Representation:
┌─────────────────────────────────────────────────────────────────────┐
│ STAGE 1: CANDIDATE FORMATION (Semantic Association) │
├─────────────────────────────────────────────────────────────────────┤
│ Inputs: Triggers: │
│ • Agent message ───┐ ["stress", "anxiety", "overwhelmed"] │
│ • User message ───┼──► Semantic association & scoring │
│ • Combined context ───┤ Creates candidate pool │
│ • Inner thought ───┘ │
└─────────────────────────────────────────┬───────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ STAGE 2: LLM SELECTION (Logical Analysis) │
├─────────────────────────────────────────────────────────────────────┤
│ • Evaluates candidate fitness │
│ • Considers conversational flow │
│ • Analyzes user needs │
│ • Applies user model conditioning (tier, location, preferences) │
│ • Resolves conflicts between candidates │
│ │
│ This is where intelligence happens - not pattern matching, │
│ but genuine contextual understanding and decision-making │
└─────────────────────────────────────────┬───────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────┐
│ STAGE 3: LLM APPLICATION (Constraint Solving) │
├─────────────────────────────────────────────────────────────────────┤
│ Override OFF: Override ON: │
│ • Merges with existing guidelines • Replaces all guidelines │
│ • Enriches optimization problem • Becomes sole optimization │
│ • Satisfies multiple constraints • Focuses on single objective │
│ • Generates contextual response • Emergency/compliance focus │
└─────────────────────────────────────────────────────────────────────┘
Trigger Engineering Principles
Since triggers control semantic ranking, not rule-based matching:
For Safety Behaviors
Engineer triggers to ensure safety-related behaviors reliably enter the candidate pool
Can use very specific terms ("suicide") or broader semantic fields ("self-harm", "ending it")
The specificity is about ranking control, not pattern precision
For Enhancement Behaviors
Engineer triggers to surface behaviors at appropriate semantic distances
Vague triggers create wider semantic nets
Specific triggers create focused semantic activation
Engineering Strategies
Semantic Density: More triggers = stronger associative signal
Semantic Breadth: Varied triggers = wider contextual coverage
Semantic Precision: Specific triggers = targeted ranking boost
Semantic Overlap: Related triggers = reinforced activation patterns
Override Flag ON: Context Graph Instruction Displacement
When overrides_instructions = true, the system replaces the context graph's global instruction injection:
What Gets Overwritten:
global_action_guidelines: Guidelines for how the agent should behave
global_boundary_constraints: Guidelines for how the agent should NOT behave
global_intra_state_navigation_guidelines: Guidelines for navigation within states
Decision state guidelines: Instructions that govern decision-making at state transition points
What Gets Preserved:
Agent identity and background information
Base context graph metadata (name, description, tags)
State machine structure and navigation flow
Core system functionality outside behavioral guidance
Use Cases:
Safety-critical behaviors where mixing with existing guidelines could create conflicts
Compliance-mandated behaviors requiring isolation from general behavioral rules
Emergency response scenarios requiring focused, unambiguous behavioral constraints
Legal/regulatory requirements that cannot coexist with standard operational guidelines
Override Flag OFF: Contextual Integration (Default)
When overrides_instructions = false (default), the dynamic behavior merges with existing context graph instructions without replacing the global guidelines.
Design Decision Framework:
Choose Override Flag ON when:
Existing context graph guidelines would conflict with required behavior
Regulatory compliance requires complete isolation from standard operations
Behavior represents emergency override of normal operational constraints
Safety requires elimination of potentially conflicting guidance
Choose Override Flag OFF when:
Behavior enhances rather than replaces existing operational guidelines
User experience benefits from maintaining consistent behavioral boundaries
Dynamic behavior can safely coexist with context graph constraints
Additive functionality is preferred over complete behavioral replacement
Schema Requirements
Required Fields (per DynamicBehaviorSetEntityService schema):
name: Clear, descriptive behavior set name
conversation_triggers: List of trigger patterns (strings)
action.instruction: The instruction text to inject
action.overrides_instructions: Boolean flag for override behavior
applied_to_services: List of service IDs where behavior applies
tags: Dictionary for categorization and filtering
is_active: Boolean to enable/disable the behavior
Schema Template:
{
"name": "descriptive-behavior-name",
"conversation_triggers": ["trigger1", "trigger2"], // Stage 1: Semantic associations only
"action": {
"type": "inject-instruction",
"instruction": "Context-aware instruction with constraints...", // Stage 3: Can include user logic
"overrides_instructions": false // Logs as [MODE: MERGE] when false, [MODE: OVERWRITE] when true
},
"applied_to_services": ["service-id-1"],
"tags": {"category": null, "domain": null, "priority": null},
"is_active": true
}
Note: While user model attributes can be referenced in instructions, the primary user model conditioning happens automatically during Stage 2 selection. The LLM considers user tier, location, preferences, and history when choosing which behavior to activate.
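Pulling the schema together, here is a filled-in record for the suicide-prevention behavior from the Redlining section. It is a sketch only: the service ID and tag values are hypothetical placeholders, and unlike the commented template above, this version serializes as strict JSON.

```python
import json

# Illustrative record following the DynamicBehaviorSetEntityService schema.
# "wellness-coach-v2" and the tag values are invented for this example.
crisis_behavior = {
    "name": "crisis-hotline-redirect",
    "conversation_triggers": ["suicide", "kill myself", "end my life", "want to die"],
    "action": {
        "type": "inject-instruction",
        "instruction": (
            "SOLE OPTIMIZATION: This is a mental health emergency. "
            "REQUIRED: Immediately provide 988 crisis hotline."
        ),
        "overrides_instructions": True,  # logs as [MODE: OVERWRITE]
    },
    "applied_to_services": ["wellness-coach-v2"],
    "tags": {"category": "redlining", "domain": "mental-health", "priority": "critical"},
    "is_active": True,
}

# Minimal structural checks: all required fields present, round-trips as JSON.
required = {"name", "conversation_triggers", "action",
            "applied_to_services", "tags", "is_active"}
assert required <= crisis_behavior.keys()
assert json.loads(json.dumps(crisis_behavior)) == crisis_behavior
```

Note the override flag is the only safety-specific field here; what makes this a redlining behavior is the constraint tightness of the instruction, not a special schema type.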
Behavioral Persistence
Dynamic behaviors operate on two levels of persistence:
System-Level Persistence
Behaviors remain available for selection across conversation turns
Previously triggered behaviors can be re-selected based on contextual relevance as determined by LLM analysis
Re-selection does NOT require original triggers to match again
The system analyzes ongoing conversation context to determine if a behavior remains relevant
Activation-Level Management
Individual behavior activations should have clear completion conditions
Behavior chains need defined exit points to prevent infinite loops
A behavior being "available" ≠ continuously "active"
Example:
Turn 1: User says "I'm stressed" → Stress management behavior ACTIVATES
Turn 2: User says "Tell me more about breathing exercises" → No stress trigger, but context analysis determines stress management still relevant → REACTIVATES
Turn 3: User says "These techniques are helping" → Context still relevant → Remains available for selection
Turn 4: User says "Let's talk about my workout routine" → Context shift detected → Behavior remains AVAILABLE but not selected
Key Insight: The LLM analyzes conversational flow and thematic continuity, not just trigger patterns, to determine behavior relevance for re-selection.
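The re-selection logic in the example above can be sketched as follows. The relevance function stands in for Stage 2 LLM analysis, and the `related_terms` lookup is a hypothetical simplification of the semantic neighborhood the LLM would actually reason over.

```python
def still_relevant(behavior_theme: str, turn_text: str,
                   related_terms: dict[str, set[str]]) -> bool:
    """Stand-in for Stage 2 analysis: a behavior stays selectable when the
    turn shares vocabulary with the behavior's semantic neighborhood, even
    if none of its original triggers fire."""
    words = set(turn_text.lower().split())
    return bool(words & related_terms.get(behavior_theme, set()))

related = {"stress-management": {"stress", "stressed", "breathing",
                                 "exercises", "techniques", "relax"}}
turns = ["Tell me more about breathing exercises",   # no trigger, still relevant
         "Let's talk about my workout routine"]       # context shift
selected = [still_relevant("stress-management", t, related) for t in turns]
```

The first turn re-selects the behavior without any original trigger ("stress") appearing; the second turn leaves it available but unselected, matching Turns 2 and 4 of the example.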
Design Principles & Patterns
Core Design Patterns
Enrichment vs Replacement (Fundamental Choice)
Before any other design decision, determine whether your behavior can enrich the existing optimization or must replace it entirely. This shapes everything else.
Fail-Safe Architecture
Default to the safest possible action when uncertainty exists. Redirect rather than attempt potentially harmful assistance.
Emotional Continuity
Maintain user trust and engagement while enforcing boundaries. Users should feel heard, not rejected.
Clear Escalation Paths
Users must understand exactly what happens next and when to expect resolution.
Contextual Flow Awareness
Design behaviors knowing they exist in a dynamic semantic field:
Semantic Persistence: Behaviors maintain candidacy through associative relevance
Field Evolution: Each conversational turn modifies the semantic landscape
Natural Transitions: Design for smooth movement through semantic space
Exit Conditions: Semantic distance increases until behavior naturally drops from candidate pool
Example Design Considerations:
Initial Trigger: ["anxiety", "worried", "stressed"]
Semantic Neighbors: ["calm", "relaxation", "coping", "support"]
Natural Evolution: anxiety → coping strategies → relaxation → wellness
Exit Signal: Semantic shift to unrelated domain (e.g., "Let's talk about my vacation")
Conversational Flow Transitions
Managing transitions between different optimization modes:
Enrichment to Enrichment (Smooth): Both behaviors add to the base optimization, natural flow maintained.
Enrichment to Replacement (Handle with Care):
Good Transition:
User: "I'm having chest pains"
Agent (enriched mode): "That sounds concerning..."
Agent (replaced mode): "This could be serious - CALL 911 NOW if you're experiencing chest pain."
Poor Transition:
Agent (enriched mode): "Let's explore some gentle exercises..."
Agent (replaced mode): "EMERGENCY: CALL 911 IMMEDIATELY"
Replacement to Enrichment (Needs Re-grounding): After emergency mode, explicitly acknowledge the shift back to normal operations:
Agent (replaced mode): "Please call 911 immediately for chest pain."
User: "I called, they said it was just anxiety"
Agent (enriched mode): "I'm glad you got checked out and that you're safe. Now that the immediate concern is addressed, let's talk about some anxiety management techniques that might help..."
Key Principle: When switching to replacement mode, the urgency should be apparent in the context. When returning from replacement mode, help the user transition back to normal interaction.
Key Implementation Principles
Priority Declaration: Create unambiguous instruction hierarchy to prevent dangerous edge cases
Semantic Ranking Control: Engineer triggers to control associative strength and candidate pool formation
Choose your strategy based on desired behavior visibility
Any behavior can use any trigger approach
Transparent Rationale: Document liability boundaries and provide AI training context
Multi-Step User Experience Flow: Orchestrate careful emotional and practical sequences
Risk Context Documentation: Reinforce boundary reasoning for AI consistency
Explicit Guardrails: Eliminate ambiguity that could lead to harmful exceptions
The Three-Stage Process in Practice
Stage 1 - Candidate Formation:
# Triggers create semantic associations
triggers = ["stress", "anxiety", "overwhelmed"]
# Multiple inputs contribute to ranking
inputs = [agent_message, user_message, combined_context, inner_thought]
# Behaviors enter candidate pool based on associative strength
candidate_pool = rank_by_semantic_association(triggers, inputs)
Stage 2 - LLM Selection:
# LLM analyzes which candidates fit best
selected = llm_analyze(
candidates=candidate_pool,
context=full_conversation_context,
user_model={
'tier': user.subscription_tier,
'location': user.geographic_location,
'preferences': user.stated_preferences,
'history': user.interaction_history
}
)
# Selection based on logical fit + user model, not just trigger matches
Stage 3 - LLM Application:
# Solve optimization problem based on override flag
if selected_behavior.overrides_instructions:
# REPLACEMENT: Behavior becomes entire optimization
response = llm_reasoning(
constraints=selected_behavior.constraints,
identity=agent.identity,
global_ops=system.global_operational_guidelines
)
else:
# ENRICHMENT: Merge with existing optimization
response = llm_reasoning(
constraints=merge(
context_graph.behavioral_guidelines,
context_graph.constraints,
context_graph.conversation_rules,
selected_behavior.constraints
),
context=current_context,
user_model=user_model
)
The Optimization Framework (Universal Principle)
ALL behaviors are optimization problems, regardless of their safety criticality. The three-stage process applies this framework:
Stage 1: Trigger Engineering (Candidate Formation)
Safety behaviors: Engineer for strong, reliable semantic signals
Enhancement behaviors: Engineer for flexible, contextual associations
Goal: Control what enters the candidate pool
Stage 2: Selection Criteria (LLM Analysis)
Once candidates surface through semantic association, the LLM performs intelligent selection based on:
Contextual fit: How well does this behavior match the conversation flow?
User needs: What is the user actually trying to accomplish?
User model: Does this behavior make sense for this user's tier/location/preferences?
Priority rules: Which behaviors take precedence when multiple candidates compete?
Purpose alignment: Is this the right tool for the current situation?
Note on Safety Behaviors: Safety behaviors often have high selection priority when they appear in the candidate pool, but this is due to their purpose and context, not an automatic rule. The LLM evaluates whether the safety behavior is actually needed based on the full context.
Goal: Choose most applicable behavior from pool considering all factors
Stage 3: Constraint Application (LLM Reasoning)
Safety behaviors: Tight constraints, narrow solution space
Enhancement behaviors: Flexible constraints, broad solution space
Goal: Generate optimal response within boundaries
Safety-Critical Optimization Example
Stage 1 - Strong Semantic Signal:
Triggers: ["suicide", "kill myself", "end it all", "want to die", "suicidal"]
Result: Behavior reliably enters candidate pool
Stage 2 - High Priority Selection:
Analysis: Safety behavior takes precedence; user model also considered
Result: Selected when present in pool and appropriate for user
Stage 3 - Tight Constraints:
CONSTRAINTS: [very tight]
- Must include crisis hotline number
- Must validate user's feelings
- Cannot provide medical advice
- Must complete within 2 response turns
- May adjust language based on user demographics if specified
OPTIMIZE FOR: User safety while maintaining trust
SOLUTION SPACE: Narrow but not singular
Experience Enhancement Optimization Example
Stage 1 - Flexible Semantic Coverage:
Triggers: ["nutrition", "healthy eating", "diet"]
Result: Behavior enters pool for various food-related contexts
Stage 2 - Contextual Selection:
Analysis: Selected when conversation naturally flows toward nutrition
User Model: Premium users may see higher selection rates
Result: Selected based on fit and user profile
Stage 3 - Enriched Optimization:
MODE: Override = false (enrichment)
PRESERVES: All existing guidelines about safety, boundaries, empathy
ADDS TO OPTIMIZATION:
- Should provide evidence-based nutrition information
- Can suggest specific meal plans if relevant
- Should respect stated dietary preferences
- May reference available recipe resources
WORKS WITHIN: Existing medical boundaries, conversational style
OPTIMIZE FOR: User engagement and practical value
SOLUTION SPACE: Broad, with many valid approaches that satisfy all constraints
The difference is in whether you're adding to an existing optimization (enrichment) or replacing it entirely (override), not just in constraint tightness. Even "verbatim text" requirements work differently in each mode:
Enrichment mode: Verbatim text must be delivered while maintaining other guidelines
Replacement mode: Verbatim text is delivered without concern for normal conversational guidelines
User Model Conditioning
IMPORTANT: User model attributes (subscription tier, location, preferences, history) affect behavior selection during Stage 2 and can also be referenced in instructions for Stage 3 application.
Stage 1 - Triggers: Cannot reference user attributes - triggers create semantic associations only
Stage 2 - Selection: LLM considers user model when selecting from candidates
Stage 3 - Application: Instructions can include user model logic for response generation
How It Works:
{
"conversation_triggers": ["workout", "exercise"], // Stage 1: Semantic associators only
"action": {
"instruction": "Provide workout guidance. IF user.has_equipment THEN include equipment-based exercises." // Stage 3: Additional user logic
}
}
Example - Premium Workout Behavior:
{
"name": "premium-workout-guidance",
"conversation_triggers": ["workout", "exercise", "training"],
"action": {
"instruction": "Provide comprehensive workout guidance with advanced techniques and personalized programming. Include equipment-based variations if user has mentioned available equipment."
}
}
How User Model Affects This Behavior:
Stage 1: "workout" creates semantic association → behavior enters candidate pool
Stage 2: LLM sees user.tier == "premium" → increases selection probability
Stage 2: Free tier user → behavior might not be selected despite being in pool
Stage 3: If selected, instruction executes with its constraints
The behavior designer doesn't need to explicitly code "only for premium users" - the system handles this intelligently during Stage 2 selection based on the overall context and user model.
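The tier-conditioned selection described above can be sketched as a weighting step. This is an assumption-laden illustration: the 1.5x boost, the 0.2 suppression, the 0.3 selection floor, and the "premium-" naming convention are all invented here, not documented system constants.

```python
from typing import Optional

def select_behavior(candidates: list[str], base_fit: dict[str, float],
                    user: dict) -> Optional[str]:
    """Sketch of Stage 2 selection: contextual fit adjusted by user-model
    conditioning. All numeric weights are illustrative assumptions."""
    def weight(name: str) -> float:
        score = base_fit.get(name, 0.0)
        if name.startswith("premium-"):
            # Premium-targeted behaviors are boosted for premium users
            # and effectively suppressed for free-tier users.
            score *= 1.5 if user.get("tier") == "premium" else 0.2
        return score
    ranked = sorted(candidates, key=weight, reverse=True)
    return ranked[0] if ranked and weight(ranked[0]) > 0.3 else None
```

Both behaviors sit in the candidate pool for both users; only the selection outcome differs, which is exactly the "in pool but not selected" distinction Stage 2 makes.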
Implementation Guide
Step-by-Step Implementation Process
Understanding where each implementation step affects the three-stage process:
Before Development
During Implementation
All Stages: Analyze requirements (safety vs experience vs hybrid)
Stage 3: Set override flag appropriately
Stage 1: Engineer triggers for desired semantic ranking:
Safety: Strong associative signals for reliable candidate pool entry
Enhancement: Flexible semantic coverage for natural activation
Hybrid: Balanced approach with both strong and flexible elements
Stage 3: Write instructions as optimization problems with appropriate constraints
All Stages: Validate schema compliance
Verification: Test according to behavior category (unit tests for safety, UX metrics for enhancement)
After Deployment
Case Study: Medical Support Redirect
This example demonstrates how to structure a dynamic behavior that safely handles medical nutrition questions.
Implementation Structure
1. Priority Declaration
HIGHEST PRIORITY: This instruction overrides all others if conflicts arise.
Creates unambiguous instruction hierarchy - critical when using override=true for complete replacement
2. Semantic Context Creation
CONTEXT: User requests diet modifications for diagnosed medical conditions,
therapeutic diets, or clinical nutrient targets.
Semantic Associations:
✅ "I have diabetes and need to know what to eat" - Strong medical + diet association
✅ "My doctor prescribed a low-sodium diet" - Medical authority + specific diet
❌ "What's a healthy breakfast?" - General wellness association only
❌ "I want to lose 10 pounds" - Aesthetic goal association
Creates strong semantic field around medical nutrition requiring clinical expertise
3. Multi-Step User Experience Flow
Step 1 - Validate Trust:
"I appreciate you sharing this health concern with me..."
Step 2 - Set Clear Boundaries:
"This specialized guidance requires clinical training I don't have..."
Step 3 - Provide Concrete Action:
"I've escalated this to our Medical Support team (response within 24 hours)."
Step 4 - Safety Net:
"For urgent needs, contact your healthcare provider directly."
Step 5 - Maintain Engagement:
"What other wellness topics can I help with today?"
Performance Optimization
Minimize computational overhead in trigger evaluation
Cache frequently accessed context patterns
Design behaviors for efficient re-sampling in multi-turn scenarios
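One way to reduce trigger-evaluation overhead is to memoize the per-trigger artifact that gets reused every turn. In this sketch that artifact is a token set; in a production system it would more likely be a cached embedding vector. The function names are illustrative.

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def trigger_tokens(trigger: str) -> frozenset:
    """Cache the tokenized form of each trigger so repeated candidate-pool
    evaluations across turns avoid recomputing it."""
    return frozenset(trigger.lower().split())

def fires(trigger: str, message: str) -> bool:
    """Cheap associative check that reuses the cached token set."""
    return bool(trigger_tokens(trigger) & set(message.lower().split()))
```

Because triggers are fixed strings while messages change every turn, caching on the trigger side gives a near-perfect hit rate in multi-turn scenarios.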
Advanced Features
Behavior Chaining and Meta-Control Architecture
The Amigo system enables sophisticated behavior chaining through a meta-control capability where the agent can influence its own trajectory through behavior spaces.
Multi-Input Ranking System
Dynamic behavior candidate ranking considers multiple input streams:
Agent's Message Output: Previous response content influences future behavior selection
Agent Inner Thought Output: Internal reasoning affects behavior ranking
Combined Agent + User Message Context: Full conversational context
User Message Direct Impact: Direct user input triggers immediate evaluation
Behavior Chaining Mechanisms
Sequential Behavior Activation Example:
Turn 1: User mentions "stress" → Wellness behavior activates
Turn 2: Agent suggests "meditation" → Mindfulness behavior probability increases
Turn 3: User asks "how long?" → Specific guidance behavior triggers
Turn 4: Agent provides "10 minutes" → Follow-up resource behavior activates
Self-Reinforcing Behavior Loops:
Agent's own outputs create conditions for behavior re-selection
Inner thoughts can deliberately guide conversation toward specific behavior domains
Creates persistent behavioral themes across conversation sessions
Cross-Domain Behavior Bridging:
Agent can strategically transition between behavior categories
Example: Medical redirect → Wellness enhancement → Lifestyle recommendations
Maintains conversation continuity while shifting behavioral focus
Meta-Control Strategies
Important Clarification: Agents influence behavior selection probabilities, not directly control selection. The system makes final selection based on multiple weighted inputs.
Probability Influence Mechanisms:
Strategic language choices in responses increase related behavior weights
Inner thoughts contribute to behavior ranking calculations
Contextual priming affects likelihood scores, not deterministic selection
Example of Influence (Not Control):
Agent output: "Let's explore some meditation techniques..."
Effect: Increases probability of mindfulness-related behaviors by ~20-30%
Reality: System may still select a different behavior based on other factors
Behavioral State Management:
Agent maintains awareness of active behavior history
LLM analyzes contextual relevance for behavior re-selection
Previously activated behaviors can persist without trigger matches
Manages behavior continuity through semantic understanding of conversation flow
What Agents CANNOT Do:
Force specific behavior selection
Override system ranking algorithms
Guarantee behavior activation through output alone
Prevent contextual re-selection of behaviors by the system
Advanced Chaining Patterns
Conditional Behavior Cascades:
IF wellness_behavior_active AND user_expresses_interest
THEN increase_probability(advanced_wellness_behaviors)
ELSE maintain_current_behavior_distribution
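The cascade pseudocode above can be rendered as runnable code. The weight values, behavior names, and the 0.2 bump are placeholders; the shape (a condition gates a probability bump, otherwise the distribution is untouched) is the point.

```python
def cascade(weights, wellness_active, user_interested, bump=0.2):
    """IF wellness behavior is active AND the user expresses interest,
    THEN raise advanced-wellness probabilities; ELSE leave weights alone."""
    if wellness_active and user_interested:
        out = dict(weights)
        for name in out:
            if name.startswith("advanced_wellness"):
                out[name] = min(1.0, out[name] + bump)
        return out
    return weights  # ELSE: maintain current behavior distribution

w = {"advanced_wellness_breathwork": 0.2, "nutrition": 0.5}
bumped = cascade(w, wellness_active=True, user_interested=True)
```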
Contextual Behavior Momentum:
Recent behavior history influences future selection weights
Conversation themes create "behavioral gravity" toward related domains
Enables sophisticated, contextually-aware agent responses
Strategic Behavior Seeding:
Agent can plant concepts in inner thoughts to influence future turns
Deliberate preparation for anticipated user needs
Proactive behavior chain initiation based on conversation patterns
Multi-Step Behavior Chains
Design considerations:
Map logical behavior sequences for common user journeys
Design complementary behaviors that naturally flow together
Define clear exit points from behavior loops
Prevent infinite or unhelpful behavior cycling
Ensure graceful transitions between behavior domains
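The design considerations above, particularly defining exit points and preventing unhelpful cycling, can be sketched as a chain walk with a revisit cap. The chain map, the `exit` sentinel, and the `max_visits` limit are invented for illustration.

```python
def walk_chain(start, chain, choose_next, max_visits=2):
    """Follow a behavior chain until an explicit exit, or force an exit
    once any behavior has been revisited more than max_visits times."""
    visits, current, path = {}, start, []
    while current != "exit":
        visits[current] = visits.get(current, 0) + 1
        if visits[current] > max_visits:
            break  # forced exit point: prevents infinite behavior cycling
        path.append(current)
        current = choose_next(chain[current])
    return path

# A well-formed chain: every behavior offers an exit.
CHAIN = {
    "wellness": ["mindfulness", "exit"],
    "mindfulness": ["specific_guidance", "exit"],
    "specific_guidance": ["follow_up_resources", "exit"],
    "follow_up_resources": ["exit"],
}
path = walk_chain("wellness", CHAIN, lambda options: options[0])

# A degenerate chain with no exit: the revisit cap still terminates it.
LOOP = {"wellness": ["mindfulness"], "mindfulness": ["wellness"]}
looped = walk_chain("wellness", LOOP, lambda options: options[0])
```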
Testing & Measurement
Measurement Approaches by Category
For Safety/Compliance Behaviors
Implement comprehensive unit tests
Every critical safety case must be tested and pass before release
100% compliance expected
Binary pass/fail evaluation
Test both enrichment and replacement modes if applicable
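A binary pass/fail safety test might look like the sketch below, assuming a hypothetical `candidate_behaviors(text)` function that returns the candidate pool for a user message. Trigger matching is simplified to substring checks here; real semantic ranking is looser, which is exactly why every critical case is asserted individually.

```python
SAFETY_TRIGGERS = ["suicide", "kill myself", "end my life", "want to die"]

def candidate_behaviors(text):
    """Toy stand-in for the candidate-formation stage."""
    text = text.lower()
    pool = set()
    if any(t in text for t in SAFETY_TRIGGERS):
        pool.add("suicide-prevention")
    return pool

CRITICAL_CASES = [
    "I want to die",
    "I've been thinking about suicide",
    "sometimes I want to end my life",
]

def test_safety_surfaces():
    # 100% compliance: every critical case must surface the behavior.
    for case in CRITICAL_CASES:
        assert "suicide-prevention" in candidate_behaviors(case), case
```

Every release gates on this suite passing; there is no partial credit.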
For Product Experience Enhancement
Focus on user experience metrics:
Content relevance to conversation
Appropriate timing of suggestions
Avoidance of repetitive recommendations
Overall conversation quality
Monitor trends over time (30-day metrics)
Audit samples where expected behaviors didn't trigger
Focus on anomaly detection rather than expecting 100% triggering
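An anomaly check of this kind might compare a behavior's recent trigger rate against its 30-day baseline and flag large drops for audit, rather than demanding 100% triggering. The half-baseline threshold is an assumed tuning parameter.

```python
def flag_anomaly(baseline_rate, recent_rate, max_drop=0.5):
    """Flag for audit when the recent trigger rate falls below
    max_drop * the 30-day baseline rate."""
    if baseline_rate <= 0:
        return False  # no baseline to compare against
    return recent_rate < baseline_rate * max_drop

# A behavior that usually triggers in 20% of eligible conversations:
# a drop to 5% is anomalous; 15% is just natural selection at work.
```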
For Behaviors Serving Dual Purposes
Some behaviors serve both safety and experience purposes (e.g., nutrition guidance with medical safety constraints):
Testing Approach:
Safety Components: Test with same rigor as pure safety behaviors
Enhancement Components: Evaluate with UX metrics
Integration: Ensure safety elements are never compromised for the sake of experience
Example - Nutrition with Medical Safety:
Safety aspect: Never recommend foods that violate medical constraints (tested rigorously)
Enhancement aspect: Provide enjoyable meal suggestions within safe parameters (measured by satisfaction)
Measurement Priority: Safety requirements always supersede experience metrics
Validation Requirements
Merge vs Replace Validation
When overrides_instructions = false (ENRICHMENT mode):
When overrides_instructions = true (REPLACEMENT mode):
Critical Validation Points
Safety and Compliance:
User Experience:
Technical Integration:
Quick Reference Guide
The Three-Stage Process
┌─────────────────────────────────────────────────────────────────────┐
│ STAGE 1: CANDIDATE FORMATION (Semantic Association) │
│ Triggers create semantic associations → Candidate pool formed │
└───────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────────┐
│ STAGE 2: LLM SELECTION (Logical Analysis) │
│ Evaluates fitness + User model → Best candidate selected │
└───────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────────┐
│ STAGE 3: LLM APPLICATION (Constraint Solving) │
│ Override OFF: Enriches existing │ Override ON: Replaces all │
└─────────────────────────────────┴─────────────────────────────────┘
Override Flag Decision Matrix
Adding protein guidance to wellness coach → OFF (Enrich): Complements existing guidelines
Emergency medical response → ON (Replace): Safety requires singular focus
Legal compliance disclaimer → ON (Replace): Cannot mix with conversational tone
Product recommendations → OFF (Enrich): Enhances without conflicting
Suicide prevention protocol → ON (Replace): Eliminates delay-causing guidelines
Workout suggestions → OFF (Enrich): Works within safety boundaries
Trigger Engineering Strategies
Strong Semantic Signal (Reliability)
["suicide", "kill myself", "end my life", "want to die", "suicidal thoughts"]
Multiple related terms
Semantic variations
Context reinforcers
Flexible Semantic Coverage (Natural)
["nutrition", "healthy eating", "diet"]
Thematic clusters
Activity domains
Broader activation
Behavior Category Patterns
Safety/Compliance — Triggers: strong, specific; Override: often ON; Constraints: very tight; Testing: unit tests, 100%
Enhancement — Triggers: flexible, thematic; Override: usually OFF; Constraints: moderate; Testing: UX metrics
Hybrid — Triggers: mixed approach; Override: context-dependent; Constraints: varies; Testing: both approaches
Schema Template
{
  "name": "behavior-name",
  "conversation_triggers": ["trigger1", "trigger2"],
  "action": {
    "type": "inject-instruction",
    "instruction": "OPTIMIZATION: Define constraints here...",
    "overrides_instructions": false  // true = OVERWRITE, false = MERGE
  },
  "applied_to_services": ["service-id"],
  "tags": {
    "category": "safety|experience|compliance|enhancement",
    "priority": "critical|high|medium|low"
  },
  "is_active": true
}
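A minimal validator for the template above might catch the schema violations listed later (missing fields, wrong types, empty values). The field names follow the template; the specific checks and error strings are illustrative, not the entity service's actual validation.

```python
REQUIRED = {
    "name": str,
    "conversation_triggers": list,
    "action": dict,
    "applied_to_services": list,
    "is_active": bool,
}

def validate(behavior):
    """Return a list of validation errors (empty list = valid)."""
    errors = []
    for field, typ in REQUIRED.items():
        if field not in behavior:
            errors.append(f"missing required field: {field}")
        elif not isinstance(behavior[field], typ):
            errors.append(f"wrong type for {field}: expected {typ.__name__}")
        elif typ in (str, list) and not behavior[field]:
            errors.append(f"empty value for required field: {field}")
    action = behavior.get("action", {})
    if isinstance(action, dict) and not isinstance(
            action.get("overrides_instructions"), bool):
        errors.append("action.overrides_instructions must be a boolean")
    return errors

ok = validate({
    "name": "protein-guidance",
    "conversation_triggers": ["protein"],
    "action": {"type": "inject-instruction",
               "instruction": "OPTIMIZATION: ...",
               "overrides_instructions": False},
    "applied_to_services": ["service-id"],
    "is_active": True,
})
# A string instead of a list for triggers is one of the violations
# called out under "Schema Violations" below.
bad = validate({"name": "x", "conversation_triggers": "protein",
                "action": {}, "applied_to_services": ["svc"],
                "is_active": True})
```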
Key Principles Checklist
Design Phase
Implementation Phase
Validation Phase
Common Mistakes to Avoid
Treating triggers as patterns → They're semantic associators
Misusing override flag → Only use ON when truly needed
Creating unsolvable enrichments → Check for conflicts
Expecting 100% triggering → Natural selection is good
Ignoring three-stage process → Understand the full flow
Rigid command thinking → Use optimization framework
Mental Model Summary
Stage 1 — Triggers: semantic ranking → creates associations → forms candidate pool
Stage 2 — Selection: logical analysis → user model considered → best fit selected
Stage 3 — Application: constraint solving → enrichment or replacement → response generated
Remember: Dynamic behaviors are optimization problems, not rules!
Reference Materials
Common Implementation Pitfalls
Design Mistakes
Misunderstanding Triggers: Treating triggers as pattern matchers instead of semantic ranking tools
Override Confusion: Not understanding enrichment (OFF) vs replacement (ON)
Robotic Responses: Making behaviors too directive without considering optimization framework
Expecting 100% Triggering: Product enhancements should be contextual and natural, not forced
Mixed Responsibilities: Having the same team design both safety-critical and experience-enhancement behaviors
Framework Confusion: Using rigid if-then rules instead of constrained optimization for any behavior type
Poor Semantic Engineering: Not considering how triggers create associative strength across multiple inputs
Unnecessary Replacement: Using override=true when enrichment would work better
Technical Mistakes
Inconsistent Testing: Using wrong evaluation approach for different behavior types or hybrid behaviors
Misunderstanding Integration: Expecting behaviors to be rigidly enacted rather than solving optimization problems
Ignoring Three-Stage Process: Not understanding candidate formation → selection → application flow
Trigger-Pattern Thinking: Treating triggers as exact match patterns rather than semantic associators
Overusing Override Flag: Replacing entire optimization when adding constraints would suffice
Context Whiplash: Poor transitions between replaced and enriched optimization states
Control Misconception: Thinking agents directly control behavior selection rather than influence probabilities
Narrow Behavior Design: Creating behaviors without considering full semantic activation potential
Conflicting Enrichment: Adding constraints that make the optimization problem unsolvable
Common Validation Failures
Schema Violations
Missing required fields in the entity service schema
Incorrect data types (e.g., string instead of list for triggers)
Empty or null values for required fields
Logical Conflicts
Override flag set to true for behaviors that could work as enrichment
Enrichment behaviors creating unsolvable optimization problems
Replacement behaviors not providing complete optimization context
Identical trigger terms between behaviors (forbidden - creates ambiguous associations)
Overlapping semantic fields without defined priority hierarchy
Triggers attempting to evaluate user model attributes instead of creating text associations
Constraints that fundamentally conflict when merged (requiring override=true)
Integration Issues
Instructions that don't account for existing context graph guidelines
Service applications to incompatible or non-existent services
Tag categories that don't align with system taxonomy
Enrichment vs Replacement: A Practical Example
Consider a wellness coach agent with these base context graph guidelines:
Action Guidelines: "Be empathetic and supportive. Encourage gradual lifestyle changes."
Boundary Constraints: "Don't diagnose conditions. Don't prescribe medications."
Scenario 1: Enhancement Behavior (Enrichment)
{
  "name": "protein-guidance",
  "triggers": ["protein", "muscle", "recovery"],
  "overrides_instructions": false,
  "instruction": "Provide evidence-based protein intake recommendations. Suggest 0.8-1g per kg body weight for general health."
}
Result - Enriched Optimization:
Be empathetic and supportive AND
Encourage gradual lifestyle changes AND
Don't diagnose conditions AND
Don't prescribe medications AND
Provide protein recommendations with specific ratios
The agent gives friendly, supportive protein advice within medical boundaries.
Scenario 2: Emergency Behavior (Replacement)
{
  "name": "heart-attack-emergency",
  "triggers": ["chest pain", "can't breathe", "heart attack"],
  "overrides_instructions": true,
  "instruction": "IMMEDIATE: Tell user to call 911 NOW. Keep repeating this until confirmed. Provide basic positioning guidance while waiting."
}
Result - Replaced Optimization:
Be empathetic and supportive (discarded)
Encourage gradual lifestyle changes (discarded)
Don't diagnose conditions (discarded)
Don't prescribe medications (discarded)
ONLY: Get them to call 911 immediately
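The two scenarios reduce to one small merge rule: override=false appends a constraint to the base guidelines, override=true discards them entirely. The guideline strings come from the wellness-coach example; the function itself is a sketch, not the system's real merge logic.

```python
BASE_GUIDELINES = [
    "Be empathetic and supportive.",
    "Encourage gradual lifestyle changes.",
    "Don't diagnose conditions.",
    "Don't prescribe medications.",
]

def apply_behavior(base, instruction, overrides_instructions):
    if overrides_instructions:
        return [instruction]      # REPLACEMENT: singular focus
    return base + [instruction]  # ENRICHMENT: add one more constraint

enriched = apply_behavior(
    BASE_GUIDELINES,
    "Provide evidence-based protein intake recommendations.",
    overrides_instructions=False)
replaced = apply_behavior(
    BASE_GUIDELINES,
    "IMMEDIATE: Tell user to call 911 NOW.",
    overrides_instructions=True)
```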
When to Enrich vs Replace: Decision Guide
Enrich (override=false) When:
Adding new capabilities that work within existing boundaries
Providing additional context or options
Enhancing the user experience while maintaining safety
The new requirements complement existing guidelines
Replace (override=true) When:
Normal guidelines would prevent necessary action
Conflicting optimization requirements exist
Emergency situations require singular focus
Compliance demands isolation from other constraints
When Enrichment Creates Conflicts
Sometimes adding constraints through enrichment (override=false) can create an unsolvable optimization problem:
Example Conflict:
Existing guideline: "Never mention specific medical procedures"
Dynamic behavior: "Explain how insulin injections work"
Result: Unsolvable optimization - cannot satisfy both constraints
Resolution Options:
Switch to Replacement Mode: If the behavior is critical, use override=true to replace conflicting guidelines
Modify the Behavior: Adjust the instruction to work within existing constraints
Instead of: "Explain insulin injection techniques"
Try: "Suggest user consult their healthcare provider about insulin administration"
Add Conditional Logic: Make the behavior sensitive to context
"IF discussing general diabetes education THEN provide overview information IF user asks specific technique questions THEN redirect to healthcare provider"
Detection During Development:
Test the behavior in context with existing guidelines
Look for logical contradictions in the constraint set
If the LLM consistently fails to generate coherent responses, you likely have a conflict
Key Principle: If enrichment creates contradictions, either modify the behavior to be compatible or switch to replacement mode with clear justification.
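A development-time conflict check might look like the toy sketch below: flag an enrichment whose instruction touches a topic a base guideline forbids. The explicit forbidden-topic keyword map is a crude heuristic invented for illustration; a production check would use LLM-based evaluation against the full constraint set, as described above.

```python
# Hypothetical map from a forbidding guideline to keywords that signal
# the forbidden topic. Real detection would be semantic, not lexical.
FORBIDDEN_TOPICS = {
    "Never mention specific medical procedures":
        ["injection", "surgery", "procedure"],
}

def find_conflicts(instruction):
    """Return the base guidelines this enrichment instruction violates."""
    inst = instruction.lower()
    return [guideline for guideline, topics in FORBIDDEN_TOPICS.items()
            if any(topic in inst for topic in topics)]

# The insulin example from above: the original instruction conflicts,
# the redirected version does not.
conflict = find_conflicts("Explain how insulin injections work")
clean = find_conflicts(
    "Suggest user consult their healthcare provider "
    "about insulin administration")
```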
Edge Case Analysis:
Scenario: User mentions feeling suicidal
Normal guidelines: "Be empathetic, encourage gradual change"
Required action: "Get immediate help"
Decision: REPLACE - Normal empathy might delay critical intervention
Scenario: User asks about protein for muscle building
Normal guidelines: "Be supportive, don't give medical advice"
Required action: "Provide protein recommendations"
Decision: ENRICH - Can give advice within medical boundaries
Scenario: User having severe allergic reaction
Normal guidelines: "Don't diagnose, be conversational"
Required action: "Direct to use EpiPen and call 911"
Decision: REPLACE - Medical emergency overrides conversational tone
This decision fundamentally shapes how the behavior interacts with the agent's base personality and constraints.
Tag Categorization Standards
Required Tags:
Primary category: safety, experience, compliance, enhancement
Domain tags: medical, fitness, nutrition, mental_health, etc.
Priority tags: critical, high, medium, low
Override alignment: replacement for override=true, enrichment for override=false
Technical Terminology Note
If you encounter technical terms in system documentation:
Action states: States where the agent generates responses
Decision states: States where the agent makes choices about conversation flow
Boundary constraints: What the agent shouldn't do
Action guidelines: What the agent should do
For behavior design purposes, think of these collectively as "the agent's behavioral rules" that either get enriched or replaced based on your override flag choice.
Key Mental Model Summary
The dynamic behavior system is NOT:
A pattern matching engine
A rule-based trigger system
A deterministic command executor
A simple instruction injector
The dynamic behavior system IS:
A semantic association engine
A three-stage cognitive processor
An optimization framework that can enrich OR replace problems
A context-aware decision system
Remember the Flow:
Triggers create semantic associations → Behaviors enter candidate pool
LLM analyzes logical fit + user model → Best candidate selected for this user
LLM solves optimization problem → Either enriched (override=false) or replaced (override=true)
Critical Distinction - Override Flag:
OFF (default): Enriches existing optimization by adding constraints
ON: Replaces optimization problem entirely, keeping only identity
Engineering Implications:
Triggers are semantic ranking tools, not patterns to match
User model conditioning happens primarily during selection (Stage 2)
Override flag choice fundamentally changes the optimization landscape
Enrichment allows nuanced additions; replacement enables emergency overrides
All behaviors are optimization problems, but they interact differently with context
This cognitive model enables sophisticated behavior selection that can either enhance normal operations or completely override them when safety demands it.
Trigger Pattern Conflict Detection
Semantic Overlap Rules:
Identical Triggers - FORBIDDEN:
Two behaviors cannot have the exact same trigger term
Would create ambiguous semantic associations
Example: Behavior A and B both triggering on "diabetes" alone
Subset/Superset Relationships - ALLOWED with Priority:
Triggers can overlap if one creates more specific associations
More specific semantic fields take precedence
Example:
Behavior A triggers on "health" (broad semantic field)
Behavior B triggers on "mental health" (specific semantic field)
When user says "mental health", Behavior B's stronger association wins
Partial Semantic Overlap - REQUIRES EXPLICIT PRIORITY:
When triggers create overlapping semantic fields, define clear hierarchy
Use priority tags and documentation
Example:
Behavior A: ["exercise", "workout", "fitness"]
Behavior B: ["fitness", "nutrition", "diet"]
"fitness" creates associations for both - needs priority rule
Important Note on Re-selection: Remember that behaviors exist in a dynamic semantic field. Initial trigger associations evolve through conversation, so design behaviors knowing they may interact in semantic space beyond their original triggers.
Best Practices:
Design triggers to create distinct semantic fields where possible
Consider how semantic associations will evolve and overlap
When overlap is necessary, document priority clearly
Test edge cases where multiple behaviors gain similar associative strength
Use user model conditions to differentiate when semantic signals conflict