Layered Architecture

Amigo's memory architecture employs a multi-layered hierarchy (L0, L1, L2) designed to deliver the reliability and performance that simpler AI memory systems struggle to achieve. This layered approach is fundamental to how Amigo maintains consistent performance and handles long-horizon tasks reliably while keeping compute costs in check. It operates through a complete end-to-end processing cycle and serves as a critical component of the unified Memory-Knowledge-Reasoning (M-K-R) system, providing the right granularity of Memory at each stage of Knowledge application and Reasoning. The table below summarizes the three layers; a minimal data-structure sketch follows it.

| Layer | Description | Live-Session Role | Post-Processing Role |
| --- | --- | --- | --- |
| **L2** (User Model / Synthesized Understanding) | High-level dimensional understanding, distilled insights, and user context. | Primary live access (fast, precise); provides immediate context for Reasoning and Knowledge application. | Consolidated snapshot across checkpoints; evolves with new insights from Knowledge/Reasoning, enabling Memory recontextualization. |
| **L1** (Observations / Indexed Insights) | Contextualized, standalone insights extracted from raw data. | Contextual search index for Reasoning; bridge to L0 when deeper Memory is needed for Knowledge application. | Accumulates checkpoints; extracts patterns for L2 (Memory refinement based on observed K-R patterns). |
| **L0** (Raw Transcripts / Ground Truth) | Complete, verbatim record of interactions. | Deep reasoning source (Memory for high-stakes Reasoning); perfect context preservation for nuanced Knowledge application. | Source for L1 extraction (initial Memory capture). |
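
To make the hierarchy concrete, here is a minimal sketch of the three layers as Python data structures. All names and fields (`L0Session`, `L1Observation`, `L2UserModel`, the `critical` flag) are illustrative assumptions, not Amigo's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class L0Session:
    """Raw transcript: the complete, verbatim ground truth for one session."""
    session_id: str
    transcript: list[str]          # verbatim conversation turns, never rewritten

@dataclass
class L1Observation:
    """A standalone, contextualized insight extracted from raw session data."""
    text: str
    dimensions: set[str]           # dimensional-relevance tags
    source_session_id: str         # linkage back to the L0 ground truth
    critical: bool = False         # critical observations never decay

@dataclass
class L2UserModel:
    """Synthesized dimensional understanding distilled from L1 observations."""
    dimensions: dict[str, str] = field(default_factory=dict)  # dimension -> insight
```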

Live Conversation Processing

During live interactions, the system operates primarily in a top-down fashion (the cascade is sketched in code after this list):

  1. L2 Primary Access (90% of cases)

    • User model provides immediate dimensional context.

    • Critical information is instantly available with high precision.

    • No need for deeper retrieval for most interactions.

    • Ensures low-latency responses while maintaining quality.

  2. L1 Index-Based Retrieval (when needed)

    • When user model lacks specific details, L1 observations serve as a contextualized search index.

    • Each observation contains standalone contextualized information linked to source sessions.

    • The L1 index locates the L0 sessions where the relevant information lives, so retrieval returns both the important information itself and its proximal context.

  3. L0 Deep Reasoning (when needed)

    • For high-stakes decisions requiring perfect context, the system drills down to L0 through the L1 index.

    • Complete reasoning across raw transcripts with full L2 integration, incorporating recent data (roughly the 20 most recent sessions, depending on the decay algorithm), the most recent full user model snapshot, the ~10 sessions most likely to contain relevant information, and relevant external events.

    • The user model provides complete present-day context for interpreting and recontextualizing raw information from the past.

    • Ensures perfect precision and contextualization where it matters most.
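
A minimal sketch of this cascade, reusing the dataclasses from the earlier sketch. The dimensional-tag matching and the `high_stakes` flag are illustrative stand-ins for the real retrieval and escalation logic:

```python
def retrieve_context(
    query_dims: set[str],
    l2: L2UserModel,
    l1_obs: list[L1Observation],
    l0_store: dict[str, L0Session],
    high_stakes: bool = False,
) -> tuple[str, list[str]]:
    """Top-down cascade: L2 first, L1 index when needed, L0 for high stakes.

    Returns (layer_used, context_passages). Tag overlap is a toy stand-in
    for real dimensional retrieval.
    """
    # 1. L2 primary access: the user model answers most queries directly.
    hits = [insight for dim, insight in l2.dimensions.items() if dim in query_dims]
    if hits and not high_stakes:
        return "L2", hits

    # 2. L1 index-based retrieval: observations act as a contextualized index.
    matched = [o for o in l1_obs if o.dimensions & query_dims]
    if matched and not high_stakes:
        return "L1", [o.text for o in matched]

    # 3. L0 deep reasoning: follow L1 linkages into the raw transcripts so the
    #    important information arrives together with its proximal context.
    session_ids = {o.source_session_id for o in matched}
    passages = [line for sid in session_ids for line in l0_store[sid].transcript]
    return "L0", passages
```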

Post-Processing Memory Management

Amigo implements an efficient post-processing cycle that keeps memory both accurate and fast to query (the cycle is sketched in code after this list):

  1. Session Breakpoint Management

    • Each conversation sessionized with clear breakpoints

    • Enables precise tracking of context boundaries

    • Allows efficient batch processing of memory operations

    • Maintains perfect contextual continuity across sessions

  2. L0 → L1 Transformation

    • Raw transcripts analyzed using dimensional framework

    • Important observations extracted and contextualized

    • Each observation tagged with dimensional relevance

    • Linkages maintained to source contextual data

  3. Checkpoint + Merge Pattern

    • L1 observations checkpointed by information volume and dimensional importance

    • Information decay modeled to prioritize persistence

    • Recent observations weighted more heavily in merge operations

    • Critical observations maintain perfect preservation regardless of age

  4. L1 → L2 Synthesis

    • Checkpointed observations synthesized into dimensional understanding

    • Previous user model merged with new observations

    • Evolved understanding captured in updated model
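
The middle steps of this cycle can be sketched as a small pipeline. The `extract` and `synthesize` callables stand in for the model-driven L0 → L1 and L1 → L2 passes, and `checkpoint_size` is an invented trigger; none of these names come from Amigo's actual implementation:

```python
from typing import Callable

def post_process(
    session: L0Session,
    l1_obs: list[L1Observation],
    l2: L2UserModel,
    extract: Callable[[L0Session], list[L1Observation]],
    synthesize: Callable[[L2UserModel, list[L1Observation]], L2UserModel],
    checkpoint_size: int = 50,   # illustrative checkpoint trigger
) -> tuple[list[L1Observation], L2UserModel]:
    """One post-processing cycle, run at a session breakpoint."""
    # L0 -> L1: extract contextualized observations; each keeps its L0 linkage.
    l1_obs = l1_obs + extract(session)

    # Checkpoint + merge: once enough dimensional information has accumulated,
    # synthesize a new user model from the previous one plus the checkpoint.
    if len(l1_obs) >= checkpoint_size:
        l2 = synthesize(l2, l1_obs)   # L1 -> L2: evolved understanding captured

    return l1_obs, l2
```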

Memory Evolution Handling

The end-to-end memory system uniquely handles information evolution (the decay model is sketched in code after this list):

  1. Progressive Recontextualization

    • As new L1 observations accumulate, older information is recontextualized

    • Long-range patterns emerge through observation accumulation

    • Changing understanding (e.g., opinions evolving over time) properly captured

    • Perfect context maintained for critical information despite evolution

    • Contextual relationships preserved through associative binding mechanisms

  2. Decay-Aware Persistence

    • Non-critical information naturally decays according to importance model

    • Critical information maintains perfect persistence regardless of age

    • Decay rates vary by dimensional importance defined in user model blueprints

    • Search efficiency improves as less relevant information naturally fades

    • Dimensional tagging ensures critical information doesn't decay even over long periods

  3. Information Gap Processing

    • Dynamic detection of missing information during live sessions

    • Automatic generation of contextually-aware queries to fill gaps efficiently

    • Preservation of information confidence levels to guide retrieval decisions

    • Progressive refinement of understanding through targeted gap filling
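
One plausible way to model decay-aware persistence, sketched below: non-critical observations decay exponentially with a per-dimension half-life drawn from the user model blueprint, while critical observations stay pinned at full weight. The half-life numbers and the default value are illustrative assumptions:

```python
def retention_weight(obs: L1Observation, age_days: float,
                     half_life_days: dict[str, float]) -> float:
    """Decay-aware retention weight (illustrative model, not Amigo's actual one).

    Critical observations never decay; others follow an exponential decay whose
    half-life is set per dimension, keeping the slowest-decaying matching dimension.
    """
    if obs.critical:
        return 1.0  # critical information persists regardless of age
    # A 14-day default half-life is assumed for dimensions without a blueprint entry.
    slowest = max(half_life_days.get(d, 14.0) for d in obs.dimensions)
    return 0.5 ** (age_days / slowest)

# Example: a non-critical "preferences" observation from 30 days ago with a
# 14-day half-life retains ~23% of its weight, so it ranks lower in search.
obs = L1Observation(text="prefers morning sessions",
                    dimensions={"preferences"}, source_session_id="s-42")
print(round(retention_weight(obs, 30.0, {"preferences": 14.0}), 3))  # 0.226
```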

This complete cycle ensures that Amigo's memory system delivers both perfect preservation of critical information and efficient operation across all memory access patterns, solving the fundamental challenges that plague traditional approaches.

Overcoming the Token Bottleneck

Current language models are constrained by the token bottleneck: they must squeeze high-dimensional internal reasoning into low-bandwidth text tokens, sharply limiting their ability to preserve complex state across steps.

The layered memory system functions as an external high-dimensional memory store, preserving information that would otherwise be lost to token-based reasoning. This is crucial for the integrated M-K-R cycle: it ensures the Reasoning engine has access to faithful, appropriately granular Memory, which in turn enables more accurate Knowledge application and prevents the degradation of complex thought processes:

  • Perfect Information Preservation: While forcing reasoning through text tokens reduces information density by a factor of roughly 1000, our L0 layer maintains complete, verbatim records with perfect fidelity, forming a reliable Memory base for the M-K-R system.

  • Rich Contextual Relationships: The system preserves multidimensional relationships that would be flattened in token externalization, providing richer Memory context for Knowledge and Reasoning.

  • Progressive Abstraction: While maintaining perfect ground truth (L0 Memory), the system creates increasingly abstracted representations (L1, L2 Memory) that optimize for efficient retrieval and integration into the M-K-R cycle, ensuring high-bandwidth interplay.

For details on the upcoming Memory‑Reasoning Bridge planned for Agent V2, see Advanced Topics › Memory‑Reasoning Bridge.