Glossary
This glossary provides definitions for key terms used throughout the Amigo documentation. It's designed to help enterprise readers better understand our platform's terminology and concepts.
Note: Terms are organized by category for easier reference. For any term not found in this glossary, please contact your Amigo implementation team.
Agent Architecture
Agent: Advanced conversational AI that navigates dynamically structured contexts, using adaptive behavior to balance situational flexibility with control.
Static Persona: The foundational identity layer of an agent, defining its consistent attributes, including identity (name, role, language) and background (expertise, motivation, principles). It is recommended to be under 10k tokens, since it serves as the foundation for axiomatic alignment rather than the "final portrait".
Global Directives: Explicit universal rules ensuring consistent agent behavior, including behavioral rules and communication standards that apply across all contexts.
Dynamic Behavior: System enabling real-time agent adaptation through context detection, behavior selection, and adaptive response generation based on conversational cues. Dynamic behavior scales to approximately 5 million characters (without side-effects) and can scale another order of magnitude larger with side-effects.
Conversational Trigger: Pattern or keyword in user messages that may activate a specific dynamic behavior, functioning as a relative ranking mechanism rather than requiring exact matches.
Advanced Ranking Algorithm: Sophisticated multidimensional approach to behavior ranking that separately evaluates user context and conversation history, balancing immediate context with conversation continuity.
Behavior Selection Process: Four-step process (Candidate Evaluation, Selection Decision, Context Graph Integration, Adaptive Application) that determines how dynamic behaviors are identified and applied.
Instruction Flexibility Spectrum: Range of dynamic behavior instructions from open-ended guidance allowing significant agent discretion to extremely rigid protocols requiring precise behavior.
Autonomy Spectrum: Framework describing how trigger and context design impact agent autonomy, from high autonomy (vague triggers with open context) to limited autonomy (strict triggers with precise instructions).
Dynamic Behavior Side-Effect: Action triggered by a dynamic behavior that extends beyond the conversation itself and modifies the local context in which the agent is currently active. Every time a dynamic behavior is selected, the context graph is modified. Side-effects can include retrieving real-time data, modifying the context graph, generating structured reflections, integrating with enterprise systems, exposing new tools, triggering hand-offs to external systems, or adding new exit conditions.
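For illustration, the sketch below shows how a dynamic behavior might be expressed as a simple configuration object, with a trigger list, an instruction that can sit anywhere on the flexibility spectrum, and optional side-effects. All class and field names are hypothetical assumptions and do not reflect Amigo's actual behavior schema.

```python
# Hypothetical sketch of a dynamic behavior definition. Field names are
# illustrative assumptions, not the actual Amigo configuration schema.
from dataclasses import dataclass, field


@dataclass
class SideEffect:
    """Action that extends beyond the conversation, e.g. modifying the
    context graph or calling an enterprise system."""
    kind: str          # e.g. "retrieve_data", "modify_context_graph", "expose_tool"
    target: str        # system, tool, or graph region the effect applies to


@dataclass
class DynamicBehavior:
    name: str
    triggers: list[str]            # conversational cues ranked relatively, not exact-matched
    instructions: str              # open-ended guidance or a rigid protocol
    side_effects: list[SideEffect] = field(default_factory=list)


# Example: a strict-trigger, limited-autonomy behavior with one side-effect.
escalation = DynamicBehavior(
    name="escalate_billing_dispute",
    triggers=["refund", "chargeback", "billing error"],
    instructions="Follow the dispute protocol exactly and confirm the account ID.",
    side_effects=[SideEffect(kind="trigger_handoff", target="billing_support_queue")],
)
```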
Context Graph Framework
Context Graph: Sophisticated topological field that guides AI agents through complex problem spaces, creating "footholds" and "paths of least resistance" rather than rigid decision paths.
Topological Field: The fundamental structure of context graphs that creates gravitational fields guiding agent behavior toward optimal solutions rather than prescribing exact paths.
Context Density: The degree of constraint in different regions of a context graph, ranging from high-density (highly structured) to low-density (minimal constraints).
State: The core building block of a context graph that guides agent behavior and decision-making, including action states, decision states, recall states, reflection states, and side-effect states.
Side-Effect State: A specialized context graph state that enables agents to interact with external systems, triggering actions like data retrieval, tool invocation, alert generation, or workflow initiation beyond the conversation itself.
Gradient Field Paradigm: Approach allowing agents to navigate context graphs like expert rock climbers finding paths through complex terrain, using stable footholds, intuition, and pattern recognition.
Problem Space Topology: The structured mapping of a problem domain showing its boundaries, constraints, and solution pathways, which guides how agents approach and solve problems.
Topological Learning: Process by which agents continuously enhance navigation efficiency across context graphs by learning from prior interactions and adjusting strategies accordingly.
Context Detection: Process identifying conversational patterns, emotional states, user intent, and situational contexts during dynamic behavior selection, evaluating both explicit statements and implicit expressions of user needs across the full conversation history.
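To make the state and context-density concepts concrete, the following sketch models a tiny context graph in plain Python. The structure, state names, and the numeric "density" field are illustrative assumptions only, not the platform's internal representation.

```python
# Illustrative-only model of a context graph: typed states connected by edges,
# with a per-state "density" hint standing in for how constrained that region is.
from dataclasses import dataclass, field
from enum import Enum


class StateType(Enum):
    ACTION = "action"
    DECISION = "decision"
    RECALL = "recall"
    REFLECTION = "reflection"
    SIDE_EFFECT = "side_effect"


@dataclass
class State:
    name: str
    type: StateType
    density: float                       # 0.0 = minimal constraint, 1.0 = highly structured
    next_states: list[str] = field(default_factory=list)


# A low-density opening region flowing into a high-density side-effect region.
graph = {
    "greet": State("greet", StateType.ACTION, density=0.2, next_states=["assess"]),
    "assess": State("assess", StateType.DECISION, density=0.4, next_states=["recall_history", "verify"]),
    "recall_history": State("recall_history", StateType.RECALL, density=0.5, next_states=["verify"]),
    "verify": State("verify", StateType.SIDE_EFFECT, density=0.9, next_states=[]),
}
```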
Memory Architecture
Functional Memory System: Amigo's approach to memory that guarantees precision and contextualization for critical information while maintaining perfect preservation of important data and its context.
Layered Memory Architecture: Three-tiered memory structure including L0 (complete context layer), L1 (observations & insights layer), and L2 (user model layer).
L0 Complete Context Layer: Layer preserving full conversation transcripts with 100% recall of critical information, maintaining all contextual nuances.
L1 Observations & Insights Layer: Layer extracting structured insights from raw conversations, identifying patterns and relationships along user dimensions.
L2 User Model Layer: Consolidated dimensional understanding serving as a blueprint for identifying critical information and detecting knowledge gaps.
User Model: Operational center of the memory system defining dimensional priorities and relationships, orchestrating how information flows, is preserved, retrieved, and interpreted.
Dimensional Framework: The data structure in the user model that defines categories of information with associated precision requirements and contextual preservation needs. It serves as the blueprint for memory operations, determining what information requires perfect recall, how context is preserved, and how information gaps are detected.
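As a rough picture of how the three memory layers and the dimensional framework fit together, the sketch below lays them out as simple data structures. Dimension names and field names are assumptions made for illustration; they do not describe Amigo's storage format.

```python
# Assumed, simplified shapes for the three memory layers. Only the layering
# relationship (L0 transcripts -> L1 observations -> L2 user model) mirrors the docs.
from dataclasses import dataclass


@dataclass
class Dimension:
    """One entry in the dimensional framework: what to capture and how precisely."""
    name: str                 # e.g. "medication_history" (hypothetical)
    precision: str            # e.g. "perfect_recall" vs "gist"
    preserve_context: bool    # whether surrounding context must be kept


@dataclass
class Observation:
    """L1: a structured insight extracted from a raw conversation."""
    dimension: str
    content: str
    session_id: str


@dataclass
class MemoryLayers:
    l0_transcripts: dict[str, str]                 # session_id -> full transcript (complete context)
    l1_observations: list[Observation]             # extracted insights along dimensions
    l2_user_model: dict[str, list[Observation]]    # dimension -> consolidated understanding
```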
Latent Space: The multidimensional conceptual space within language models containing encoded knowledge, relationships, and problem-solving approaches. Effectiveness of AI is determined by activating the right regions of this space rather than simply adding information.
Knowledge Activation: The process of priming specific regions of an agent's latent space to optimize performance for particular tasks, ensuring the right knowledge and reasoning patterns are accessible for solving problems.
Titan Architecture: An advanced memory architecture combining attention mechanisms with neural long-term memory modules, enabling AI models to process both immediate context and historical data while learning dynamically during inference.
Implicit Recall: Memory retrieval triggered by information gap detection during real-time conversation analysis.
Explicit Recall: Memory retrieval triggered by predetermined recall points defined in the context graph structure.
Recent Information Guarantee: Feature ensuring recent information (last n sessions based on information decay) is always available for full reasoning.
Perfect Search Mechanism: Process identifying specific information gaps using the user model and conducting targeted searches near known critical information.
Information Evolution Handling: System for managing changing information through checkpoint + merge operations, accumulating observations by dimension over time.
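The checkpoint + merge pattern can be pictured as folding each session's new observations into per-dimension histories, so that evolving information accumulates rather than being overwritten. The function below is a guess at the shape of that operation, not the actual merge logic.

```python
# Hypothetical illustration of the checkpoint + merge pattern: new observations
# from a session checkpoint are merged into per-dimension histories so that
# changing information accumulates over time instead of being replaced.
from collections import defaultdict


def merge_checkpoint(user_model: dict[str, list[dict]],
                     checkpoint: list[dict]) -> dict[str, list[dict]]:
    """Merge a session checkpoint (a list of {'dimension', 'content', 'session_id'}
    observations) into a user model keyed by dimension."""
    merged = defaultdict(list, {dim: list(obs) for dim, obs in user_model.items()})
    for observation in checkpoint:
        merged[observation["dimension"]].append(observation)   # accumulate, never discard
    return dict(merged)


# Usage: two sessions observing the same dimension both remain available.
model = merge_checkpoint({}, [{"dimension": "sleep", "content": "averages 5h", "session_id": "s1"}])
model = merge_checkpoint(model, [{"dimension": "sleep", "content": "improved to 7h", "session_id": "s2"}])
```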
Processing Methods
Live-Session Processing: Top-down memory operation during live interactions, primarily accessing the user model (L2) for immediate dimensional context.
Post-Processing Memory Management: Efficient cycle ensuring optimal memory performance through session breakpoint management, L0→L1 transformation, checkpoint + merge pattern, and L1→L2 synthesis.
Causation Lineage Analysis: Analytics mapping developmental pathways in user behaviors and outcomes across time to identify formative experiences leading to specific outcomes.
Dimensional Analysis: Evaluation of patterns across user model dimensions to identify success factors and optimization opportunities.
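Read together, the post-processing cycle is a small pipeline. The sketch below strings the stages together with stubbed placeholder functions to show the order of operations (L0→L1 transformation, checkpoint + merge, L1→L2 synthesis); none of these functions reflect the real implementation.

```python
# Placeholder pipeline for the post-processing memory cycle. The stage functions
# are stand-ins; only their ordering reflects the description above.
def extract_observations(transcript: str) -> list[dict]:
    """L0 -> L1: pull structured insights out of the raw transcript (stubbed)."""
    return [{"dimension": "example", "content": transcript[:40], "session_id": "s1"}]


def merge_into_model(l1_history: dict, observations: list[dict]) -> dict:
    """Checkpoint + merge: accumulate observations by dimension (stubbed)."""
    for obs in observations:
        l1_history.setdefault(obs["dimension"], []).append(obs)
    return l1_history


def synthesize_user_model(l1_history: dict) -> dict:
    """L1 -> L2: consolidate per-dimension observations into the user model (stubbed)."""
    return {dim: {"observation_count": len(obs_list)} for dim, obs_list in l1_history.items()}


def post_process_session(transcript: str, l1_history: dict) -> dict:
    observations = extract_observations(transcript)          # L0 -> L1 transformation
    l1_history = merge_into_model(l1_history, observations)  # checkpoint + merge
    return synthesize_user_model(l1_history)                 # L1 -> L2 synthesis
```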
Metrics and Reinforcement Learning
Metrics & Simulations Framework: System providing objective evaluation of agent performance through configurable criteria and simulated conversations.
Metric: A configurable evaluation criterion used to assess the performance of an agent. Metrics can be generated via custom LLM-as-a-judge evals on both real and simulated sessions, as well as unit tests.
Simulations: Simulations describe the situations you want to test programmatically. A simulation contains a Persona and Scenario.
Persona: The user description you want the LLM to emulate when running simulated conversations.
Scenario: The description of the situation you want the LLM to create when simulating conversations.
Unit Tests: Combination of simulations with specific metrics to evaluate critical agent behaviors in a controlled environment.
Feedback Collection: Process of gathering evaluation data through human evals (with scores and tags) and memory-system-driven analysis. These datasets are exportable with filters so data scientists can generate performance reports.
Reinforcement Learning (RL): System enhancing agent behaviors through simulations based on defined metrics, ensuring alignment with organizational objectives. In Amigo, RL bridges the gap between human-level performance and superhuman capabilities through targeted optimization of identified capability gaps.
Reward-Driven Optimization: Training approach where agents receive explicit rewards or penalties, guiding incremental improvements toward optimal behaviors.
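To show how these evaluation concepts compose, the sketch below pairs a persona and scenario into a simulation and attaches metrics to form a unit test. All class names and fields are hypothetical; this is not the Amigo SDK or API.

```python
# Illustrative-only composition of the evaluation concepts above. All names
# are assumptions; nothing here corresponds to an actual Amigo interface.
from dataclasses import dataclass


@dataclass
class Persona:
    description: str          # the user the LLM should emulate


@dataclass
class Scenario:
    description: str          # the situation the LLM should create


@dataclass
class Simulation:
    persona: Persona
    scenario: Scenario


@dataclass
class Metric:
    name: str
    judge_prompt: str         # custom LLM-as-a-judge rubric applied to a session transcript


@dataclass
class UnitTest:
    """Simulation paired with the metrics that must pass for a critical behavior."""
    simulation: Simulation
    metrics: list[Metric]
    passing_threshold: float


refund_test = UnitTest(
    simulation=Simulation(
        persona=Persona("Frustrated customer who received a duplicate charge."),
        scenario=Scenario("The customer asks for a refund during a support chat."),
    ),
    metrics=[Metric("policy_adherence", "Did the agent follow the refund policy exactly?")],
    passing_threshold=0.9,
)
```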