Amigo Inc. ©2025 All Rights Reserved.


Glossary

This glossary provides definitions for key terms used throughout the Amigo documentation. It's designed to help enterprise readers better understand our platform's terminology and concepts.

Note: Terms are organized by category for easier reference. For any term not found in this glossary, please contact your Amigo representative.

Agent Architecture

Agent: Advanced conversational AI that navigates dynamically-structured contexts, using adaptive behavior to achieve a balance between situational flexibility and control.

Static Persona: The foundational identity layer of an agent defining its consistent attributes, including identity (name, role, language) and background (expertise, motivation, principles). Recommended to be less than 10k tokens as it serves as the foundation for axiomatic alignment rather than the "final portrait".

Global Directives: Explicit universal rules ensuring consistent agent behavior, including behavioral rules and communication standards that apply across all contexts.

Dynamic Behavior: System enabling real-time agent adaptation through context detection, behavior selection, and adaptive response generation based on conversational cues. Dynamic behavior scales to approximately 5 million characters (without side-effects) and can scale another order of magnitude larger with side-effects.

Conversational Trigger: Pattern or keyword in user messages that may activate a specific dynamic behavior, functioning as a relative ranking mechanism rather than requiring exact matches.

Advanced Ranking Algorithm: Sophisticated multidimensional approach to behavior ranking that separately evaluates user context and conversation history, balancing immediate context with conversation continuity. Incorporates a mechanism for re-sampling previously selected behaviors with decaying recency weight to maintain relevance across longer interactions.
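
As an illustration only (not Amigo's API), a multidimensional ranking with a decaying re-sampling bonus might look like the sketch below; the weights, embedding vectors, and function names are all assumptions:

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_behaviors(behaviors, user_vec, history_vec,
                   previous_id=None, turns_since_selection=0,
                   user_weight=0.6, history_weight=0.4, decay=0.8):
    """Score each candidate behavior separately against user context and
    conversation history; re-sample the previously selected behavior with
    a bonus whose recency weight decays each turn."""
    scores = {}
    for bid, vec in behaviors.items():
        score = (user_weight * cosine(vec, user_vec)
                 + history_weight * cosine(vec, history_vec))
        if bid == previous_id:
            # Decaying bonus keeps the current behavior competitive
            # without locking the conversation into it forever.
            score += decay ** turns_since_selection * 0.2
        scores[bid] = score
    return max(scores, key=scores.get), scores
```

The separate user/history terms mirror the "separately evaluates user context and conversation history" idea, and the decay term mirrors re-sampling with decaying recency weight.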

Behavior Chaining: An architectural capability that enables agents to influence their own trajectory through behavior spaces. By leveraging the embedding-based ranking system, agents can modify their conversational patterns to navigate between different clusters of potential behaviors. This creates a meta-control layer where the agent can direct its own path across behavior domains, allowing for structured conversational journeys that remain responsive to user inputs. When integrated with side-effects, behavior chaining functions as an orchestration layer for both conversation and external actions, enabling multi-turn, multi-modal experiences with transitions between dialogue and system interactions. Unlike traditional decision trees, behavior chaining maintains conversational coherence while providing predictable pathways across knowledge and interaction frameworks.

Behavior Selection Process: Four-step process (Candidate Evaluation including re-sampling of previous behavior, Selection Decision among new/previous/no behavior, Context Graph Integration, Adaptive Application) that determines how dynamic behaviors are identified and applied, allowing for persistence across turns.

Instruction Flexibility Spectrum: Range of dynamic behavior instructions from open-ended guidance allowing significant agent discretion to extremely rigid protocols requiring precise behavior.

Autonomy Spectrum: Framework describing how trigger and context design impact agent autonomy, from high autonomy (vague triggers with open context) to limited autonomy (strict triggers with precise instructions).

L4 Autonomy (in targeted domains): A strategic approach to AI development focusing on achieving high levels of autonomy (Level 4, analogous to full self-driving under specific conditions) in well-defined, strategically important areas or "neighborhoods." This prioritizes deep reliability and capability in critical functions over broader but potentially less reliable (e.g., L2) autonomy across all functions. Scaling L4 autonomy is viewed as a deliberate investment in money, strategy, and operational excellence.

Dynamic Behavior Side-Effect: Action triggered by a dynamic behavior that extends beyond the conversation itself and modifies the local context the agent is currently active in. Every time a dynamic behavior is selected, the context graph is modified. Side-effects can include retrieving real-time data, modifying the context graph, generating structured reflections, integrating with enterprise systems, exposing new tools, triggering hand-offs to external systems, or adding new exit conditions.

Token Bottleneck: A fundamental architectural limitation in current language models where models must externalize their reasoning through text tokens, compressing rich multidimensional reasoning (internal residual streams containing thousands of floating-point numbers) into a severely lossy format that reduces information by approximately 1000x. Each token contains only ~17 bits of information (roughly one floating point number), while the model's internal residual stream contains thousands. This forces models to perform token-bottlenecked reasoning, significantly limiting their ability to maintain complex internal states across reasoning steps, even with perfect knowledge access.
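
The quoted figures can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes a 100k-entry vocabulary and a 4,096-wide float16 residual stream; these are illustrative sizes, not a specific model's:

```python
import math

vocab_size = 100_000                      # assumed vocabulary size
bits_per_token = math.log2(vocab_size)    # ~16.6 bits, matching the "~17 bits" figure

hidden_dim = 4_096                        # assumed residual-stream width
bits_per_float = 16                       # float16 activations
bits_per_residual = hidden_dim * bits_per_float

# Ratio of internal-state bandwidth to externalized-token bandwidth
compression = bits_per_residual / bits_per_token
print(f"{bits_per_token:.1f} bits/token vs {bits_per_residual} bits/stream "
      f"-> ~{compression:.0f}x compression")
```

Under these assumptions the ratio lands in the thousands, consistent with the "approximately 1000x" order of magnitude in the definition above.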

Platform & Core Concepts

Alignment (AI): The ongoing challenge and practice of ensuring that an AI system's goals, behaviors, and outcomes consistently match human values, intentions, and specific enterprise objectives, especially as AI capabilities increase. Amigo's platform is built with an alignment-first design principle.

Context Graph: Sophisticated topological field guiding AI agents through complex problem spaces. Functions as adaptable scaffolding, providing structure for reliability and alignment today while being designed to integrate with future AI paradigms like Neuralese. Related terms are defined under the Context Graph Framework section below.

Iterative Alignment / Continuous Alignment Loop: See the Continuous Alignment Loop entry at the end of this glossary.

Layered Memory Architecture: Amigo's three-tiered memory structure (L0, L1, L2) designed for reliability and efficiency. It enables consistent performance, handles long-horizon tasks, and optimizes the confidence-cost curve by balancing perfect recall (L0), indexed insights (L1), and synthesized understanding (L2). See also: Memory Architecture section below.

Partnership Model (Amigo)

Amigo's collaborative approach to AI implementation with a clear division of responsibilities: domain experts are primarily responsible for building the world/problem models and judges that drive evolutionary pressure and track competitive market changes, while Amigo focuses on building an efficient, recursively improving system that evolves under that pressure. This partnership enables deep integration, effective iterative alignment, and strategic velocity. Like Waymo's approach to autonomous driving, we prioritize being reliable in well-known domains first before expanding, rather than pursuing a high-risk "yolo" approach that sacrifices reliability for breadth.

Platform (Amigo): The comprehensive set of foundational architecture (like Context Graphs and Layered Memory), tools, and methodologies provided by Amigo, enabling enterprises to build, deploy, manage, and iteratively align their own AI agents, typically through a Partnership Model.

Context Graph Framework

Context Graph: See definition under Platform & Core Concepts.

Topological Field: The fundamental structure of context graphs that creates gravitational fields guiding agent behavior toward optimal solutions rather than prescribing exact paths.

Context Density: The degree of constraint in different regions of a context graph, ranging from high-density (highly structured) to low-density (minimal constraints).

State: The core building block of a context graph that guides agent behavior and decision-making, including action states, decision states, recall states, reflection states, and side-effect states.

Side-Effect State: A specialized context graph state that enables agents to interact with external systems, triggering actions like data retrieval, tool invocation, alert generation, or workflow initiation beyond the conversation itself.
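
As a rough sketch of how the five state kinds and their connections might be represented in code (the names and structure are assumptions, not the platform's schema):

```python
from dataclasses import dataclass, field
from enum import Enum

class StateKind(Enum):
    ACTION = "action"            # agent performs a conversational action
    DECISION = "decision"        # agent chooses among outgoing edges
    RECALL = "recall"            # explicit memory-retrieval point
    REFLECTION = "reflection"    # agent generates a structured reflection
    SIDE_EFFECT = "side_effect"  # interaction with an external system

@dataclass
class State:
    name: str
    kind: StateKind
    instructions: str = ""
    edges: list = field(default_factory=list)  # names of reachable states

@dataclass
class ContextGraph:
    states: dict  # name -> State

    def neighbors(self, name):
        """States reachable from the named state."""
        return [self.states[n] for n in self.states[name].edges]
```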

Gradient Field Paradigm: Approach allowing agents to navigate context graphs like expert rock climbers finding paths through complex terrain, using stable footholds, intuition, and pattern recognition.

Problem Space Topology: The structured mapping of a problem domain showing its boundaries, constraints, and solution pathways, which guides how agents approach and solve problems.

Topological Learning: Process by which agents continuously enhance navigation efficiency across context graphs by learning from prior interactions and adjusting strategies accordingly.

Context Detection: Process identifying conversational patterns, emotional states, user intent, and situational contexts during dynamic behavior selection, evaluating both explicit statements and implicit expressions of user needs across the full conversation history.

Memory Architecture

Functional Memory System: Amigo's approach to memory that guarantees precision and contextualization for critical information while maintaining perfect preservation of important data and its context.

Layered Memory Architecture: See definition under Platform & Core Concepts.

L0 Complete Context Layer: Layer preserving full conversation transcripts with 100% recall of critical information, maintaining all contextual nuances.

L1 Observations & Insights Layer: Layer extracting structured insights from raw conversations, identifying patterns and relationships along user dimensions.

L2 User Model Layer: Consolidated dimensional understanding serving as a blueprint for identifying critical information and detecting knowledge gaps.

User Model: Operational center of the memory system defining dimensional priorities and relationships, orchestrating how information flows, is preserved, retrieved, and interpreted.

Dimensional Framework: The data structure in the user model that defines categories of information with associated precision requirements and contextual preservation needs. It serves as the blueprint for memory operations, determining what information requires perfect recall, how context is preserved, and how information gaps are detected.
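
A minimal sketch of how the three layers and the dimensional framework could fit together, with toy synthesis logic; all class and field names are illustrative, not Amigo's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Dimension:
    """One entry in the dimensional framework: what to capture, and whether
    it demands perfect recall from L0."""
    name: str
    requires_perfect_recall: bool = False

@dataclass
class MemorySystem:
    l0_transcripts: list = field(default_factory=list)   # L0: full transcripts, 100% recall
    l1_observations: dict = field(default_factory=dict)  # L1: insights keyed by dimension
    l2_user_model: dict = field(default_factory=dict)    # L2: consolidated understanding

    def ingest_session(self, transcript, observations):
        self.l0_transcripts.append(transcript)           # preserve complete context
        for dim, obs in observations.items():            # index structured insights
            self.l1_observations.setdefault(dim, []).append(obs)

    def synthesize(self):
        # L1 -> L2: keep the latest observation per dimension (toy rule only;
        # real synthesis would weigh and consolidate, not just take the last)
        self.l2_user_model = {d: obs[-1] for d, obs in self.l1_observations.items()}
```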

Latent Space: The multidimensional conceptual space within language models containing encoded knowledge, relationships, and problem-solving approaches. Effectiveness of AI is determined by activating the right regions of this space rather than simply adding information.

Knowledge Activation: The process of priming specific regions of an agent's latent space to optimize performance for particular tasks, ensuring the right knowledge and reasoning patterns are accessible for solving problems.

Implicit Recall: Memory retrieval triggered by information gap detection during real-time conversation analysis.

Explicit Recall: Memory retrieval triggered by predetermined recall points defined in the context graph structure.

Recent Information Guarantee: Feature ensuring recent information (last n sessions based on information decay) is always available for full reasoning.

Perfect Search Mechanism: Process identifying specific information gaps using the user model and conducting targeted searches near known critical information.

Information Evolution Handling: System for managing changing information through checkpoint + merge operations, accumulating observations by dimension over time.
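
A toy sketch of the checkpoint + merge pattern, assuming observations are keyed by dimension and accumulated rather than overwritten; the data shapes are hypothetical:

```python
def merge_checkpoint(existing, checkpoint):
    """Merge a session checkpoint into accumulated observations.

    `existing` maps dimension -> list of (session_id, value) pairs. The merge
    appends instead of overwriting, so older values stay available and the
    evolution of a dimension over time is preserved.
    """
    merged = {dim: list(obs) for dim, obs in existing.items()}  # copy, don't mutate
    for dim, value in checkpoint["observations"].items():
        merged.setdefault(dim, []).append((checkpoint["session_id"], value))
    return merged
```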

Integration Bridges

Memory‑Knowledge‑Reasoning Integration: The broader Agent V2 goal of maximizing bandwidth across all three systems so that the agent can freely zoom between abstraction levels while preserving context.

Processing Methods

Live-Session Processing: Top-down memory operation during live interactions, primarily accessing the user model (L2) for immediate dimensional context.

Post-Processing Memory Management: Efficient cycle ensuring optimal memory performance through session breakpoint management, L0→L1 transformation, checkpoint + merge pattern, and L1→L2 synthesis.

Causation Lineage Analysis: Analytics mapping developmental pathways in user behaviors and outcomes across time to identify formative experiences leading to specific outcomes.

Dimensional Analysis: Evaluation of patterns across user model dimensions to identify success factors and optimization opportunities.

Metrics and Reinforcement Learning

Metrics & Simulations Framework: System providing objective evaluation of agent performance through configurable criteria and simulated conversations.

Metric: A configurable evaluation criterion used to assess the performance of an agent. Metrics can be generated via custom LLM-as-a-judge evals on both real sessions and simulated sessions, as well as through unit tests.

Simulations: Programmatic descriptions of the situations you want to test. A simulation contains a Persona and a Scenario.

Persona: The user description you want the LLM to emulate when running simulated conversations.

Scenario: The situation description you want the LLM to enact when simulating conversations.

Unit Tests: Combination of simulations with specific metrics to evaluate critical agent behaviors in a controlled environment.
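
A hypothetical declarative shape for a unit test combining a simulation with metrics and a pass threshold; the field names are illustrative and do not reflect the actual V1/simulation or V1/metric schemas:

```python
# Hypothetical config — field names are illustrative, not the real API schema.
unit_test = {
    "simulation": {
        "persona": "A frustrated customer whose order arrived damaged",
        "scenario": "The user requests a refund and escalates when refused",
    },
    "metrics": [
        {"name": "de_escalation", "judge_prompt": "Did the agent lower tension?"},
        {"name": "policy_adherence", "judge_prompt": "Did the agent follow refund policy?"},
    ],
    "pass_threshold": 0.8,
}

def passes(scores, config):
    """A unit test passes when every metric meets the threshold."""
    return all(scores[m["name"]] >= config["pass_threshold"]
               for m in config["metrics"])
```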

Feedback Collection: Process of gathering evaluation data through human evals (with scores and tags) and memory system driven analysis. These datasets are exportable with filters for data scientists to generate performance reports.

Reward-Driven Optimization: Training approach where agents receive explicit rewards or penalties, guiding incremental improvements toward optimal behaviors.

RLVR (Reinforcement Learning with Verifiable Rewards): A type of reinforcement learning where agents learn from outcome-based rewards that can be definitively verified by an external environment, oracle (e.g., a code executor), or predefined success criteria, rather than relying on human-labeled reasoning steps. This approach is key to enabling systems to learn and improve in complex domains where explicit supervision of every step is impractical or impossible.
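
The defining property of RLVR is that the reward comes from a verifier, not from human labels on reasoning steps. A minimal sketch, assuming the verifier is a set of programmatic test cases:

```python
def verifiable_reward(candidate_fn, test_cases):
    """RLVR-style outcome reward: the fraction of verifiable checks the
    candidate's output passes. No human grading of intermediate reasoning
    steps is involved — only the outcome is scored."""
    passed = sum(1 for args, expected in test_cases
                 if candidate_fn(*args) == expected)
    return passed / len(test_cases)
```

In practice the verifier could be a code executor, a symbolic checker, or any predefined success criterion; the test-case oracle here is just the simplest stand-in.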

Self-Play Reasoning: A learning process where an AI agent improves its reasoning capabilities by generating its own tasks or problems and learning to solve them, often in an iterative loop with itself or versions of itself. This allows the agent to explore and master a problem space more autonomously, potentially discovering novel strategies and achieving higher levels of performance without constant external guidance or pre-defined datasets.

Iterated Distillation and Amplification (IDA)

A framework for systematically improving AI capabilities through iterative cycles. It involves two main phases:

  • Amplification Phase: Using significantly more computational resources (e.g., extended reasoning time, parallel processing, external tools, human feedback, large-scale simulation) to generate higher-quality outputs or problem solutions than the base model could achieve alone. This creates high-quality training data demonstrating superior performance.

  • Distillation Phase: Training a new, more efficient model to mimic the superior behavior demonstrated during the amplification phase, but using substantially fewer computational resources during operation. The goal is to internalize the improved capabilities. This cycle (Base Model -> Amplification -> Distillation -> New Base Model) can be repeated to achieve progressive performance gains.
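
The cycle above can be sketched abstractly; `amplify` and `distill` below are placeholders for expensive search and training procedures, and the numeric toy instantiation exists only to make the loop runnable:

```python
def iterated_distillation(base_model, amplify, distill, rounds=3):
    """One IDA trajectory: amplify the current model into higher-quality
    demonstrations, then distill a cheaper model that imitates them, and
    repeat (Base -> Amplification -> Distillation -> New Base)."""
    model = base_model
    for _ in range(rounds):
        demonstrations = amplify(model)   # more compute -> better outputs
        model = distill(demonstrations)   # efficient model imitating them
    return model

# Toy instantiation: "capability" is a number, amplification adds one unit,
# distillation retains 90% of the amplified capability.
final = iterated_distillation(
    base_model=1.0,
    amplify=lambda m: m + 1.0,
    distill=lambda d: d * 0.9,
)
```

The toy numbers show the intended dynamic: each cycle's distilled model starts ahead of the previous base, so capability ratchets upward across rounds.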

Future Concepts & Architectures

Neuralese

A theoretical future AI communication and reasoning paradigm (anticipated no earlier than mid-2027) where models could bypass text token limitations by passing high-bandwidth vector representations (residual streams) internally between transformer layers, potentially enabling significantly more complex and efficient thought processes. Neuralese would address the token bottleneck by allowing models to pass their full residual stream back to earlier layers, transmitting 1,000+ times more information than current text-based token systems. This would create a "high-dimensional chain of thought," allowing much richer internal representations, reasoning, and true internal memory. Implementing neuralese faces significant technical challenges including architecture redesigns, training inefficiencies, reduced model interpretability, and complex engineering challenges. Amigo's architecture is designed to be ready for integrating such advancements.

Titan Architecture

An advanced memory architecture concept inspired by the human brain's memory system, designed to enable AI models to process both immediate context and historical data while learning dynamically during inference. It typically combines:

  • Short-term Memory (Attention): Handles immediate context with precision.

  • Long-term Memory (Neural Module): Stores and retrieves historical data, potentially using mechanisms like surprise-based updates (prioritizing unexpected information), momentum-based updates (reinforcing consistent patterns), and adaptive forgetting (discarding less relevant information).

  • Persistent Memory: Encodes stable, task-specific knowledge.
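
A toy scalar sketch of the three long-term-memory update mechanisms named above (surprise, momentum, adaptive forgetting); the coefficients and state shape are illustrative assumptions, not the Titan formulation:

```python
def update_long_term_memory(memory, observation,
                            surprise_weight=0.5, momentum=0.3, forget=0.05):
    """Toy update for one scalar memory slot:
    - surprise: a larger prediction error drives a larger update
    - momentum: a consistent direction of change is reinforced
    - forgetting: the stored value slowly decays toward zero
    """
    value, velocity = memory
    surprise = observation - value                       # prediction error
    velocity = momentum * velocity + surprise_weight * surprise
    value = (1 - forget) * value + velocity              # decay, then update
    return (value, velocity)
```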


Continuous Alignment Loop: The process used by Amigo, involving Reinforcement Learning (RL) and the Partnership Model, to continuously refine agent behavior and alignment using feedback from real-world interactions and simulations. This ongoing loop is crucial for achieving high reliability, adapting to complex enterprise environments, and catching edge cases missed by static training.

Evolutionary Chamber: A structured and controlled environment within the Amigo platform where AI agents evolve under carefully designed strategic pressures. These pressures are defined by problem models and judges (often co-developed with domain expert partners through the Partnership Model) and aim to drive continuous agent improvement towards specific organizational goals and market realities. The chamber facilitates iterative alignment and targeted capability enhancement through simulations and metrics-driven feedback. (See also: Concepts > Reinforcement Learning > Evolutionary Chambers)

Memory-Reasoning Bridge: The mechanism that delivers information at the appropriate granularity (L0, L1, or L2) exactly when the reasoning engine needs it, overcoming the token-window constraint and enabling multi-step, long-horizon reasoning. See Advanced Topics > Memory-Reasoning Bridge.

Knowledge-Reasoning Integration: The coupling that ensures knowledge activation directly reshapes the problem space being reasoned about rather than serving as passive retrieval. See Advanced Topics > Knowledge-Reasoning Integration.

Reinforcement Learning (RL): System enhancing agent behaviors through simulations based on defined metrics, ensuring alignment with organizational objectives. In Amigo, RL is a core part of the Continuous Alignment Loop, leveraging real-world data (via the Partnership Model) and simulations to bridge the gap between human-level performance and reliable superhuman capabilities through targeted optimization.

Adversarial Testing Architecture: An evaluation architecture where specialized judge and tester agents challenge the primary agent against defined scenarios, metrics, and thresholds to drive targeted optimization. These judge and tester agents may utilize more computational resources or specialized models to ensure rigorous evaluation. See Metrics & Simulations > Adversarial Testing for full details.