Layered Architecture
Amigo's memory architecture is a multi-layered hierarchy (L0, L1, L2) designed for reliability and performance beyond what simpler AI memory systems achieve. This layered approach is fundamental to how Amigo maintains consistent performance and handles long-horizon tasks reliably, delivering higher reliability with optimized compute. It operates as a complete end-to-end processing cycle and serves as a critical component of the unified Memory-Knowledge-Reasoning (M-K-R) system, providing the right granularity of Memory at each stage of Knowledge application and Reasoning.
L2 (User Model / Synthesized Understanding)
High-level dimensional understanding, distilled insights, and user context.
Primary live access (fast, precise)
Provides immediate context for Reasoning & Knowledge application
Consolidated snapshot across checkpoints
Evolves with new insights from Knowledge/Reasoning, enabling Memory recontextualization
L1 (Observations / Indexed Insights)
Contextualized, standalone insights extracted from raw data.
Contextual search index for Reasoning
Bridge to L0 when deeper Memory is needed for Knowledge application
Accumulates checkpoints
Extracts patterns for L2 (Memory refinement based on observed K-R patterns)
L0 (Raw Transcripts / Ground Truth)
Complete, verbatim record of interactions.
Deep reasoning source (Memory for high-stakes Reasoning)
Perfect context preservation for nuanced Knowledge application
Source for L1 extraction (initial Memory capture)
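The three layers above can be pictured as simple data structures. This is an illustrative sketch only; the names, fields, and types are assumptions for exposition, not Amigo's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class L0Session:
    """Raw transcript: the complete, verbatim ground truth for one session."""
    session_id: str
    transcript: list[str]          # verbatim turns, never mutated

@dataclass
class L1Observation:
    """Standalone, contextualized insight extracted from L0."""
    text: str
    dimensions: list[str]          # dimensional relevance tags
    source_session: str            # linkage back to the source L0 session
    critical: bool = False         # critical observations never decay

@dataclass
class L2UserModel:
    """Synthesized dimensional understanding: the primary live context."""
    dimensions: dict[str, str] = field(default_factory=dict)
```

The key structural point is the downward linkage: every L1 observation keeps a pointer to its L0 source, and L2 is a consolidated view over checkpointed L1 observations.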
Live Conversation Processing
During live interactions, the system operates primarily in a top-down fashion:
L2 Primary Access (90% of cases)
User model provides immediate dimensional context.
Critical information is instantly available with high precision.
No need for deeper retrieval for most interactions.
Ensures low-latency responses while maintaining quality.
L1 Index-Based Retrieval (when needed)
When user model lacks specific details, L1 observations serve as a contextualized search index.
Each observation contains standalone contextualized information linked to source sessions.
The L1 index locates the L0 sessions where the relevant information lives, so retrieval returns both the important information itself and its proximal context.
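One way to picture the L1-to-L0 bridge is a lookup over observations that resolves to source sessions. The keyword matching below is a toy assumption (the real index is presumably semantic); the point is that each hit carries both the observation and its surrounding transcript:

```python
def lookup_l0(query_terms, observations, l0_store):
    """Find L0 sessions via matching L1 observations, returning
    both the observation and its proximal transcript context."""
    hits = []
    for obs in observations:
        if any(t.lower() in obs["text"].lower() for t in query_terms):
            session = l0_store[obs["source_session"]]
            hits.append({"observation": obs["text"],
                         "context": session["transcript"]})
    return hits
```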
L0 Deep Reasoning (when needed)
For high-stakes decisions requiring perfect context, the system drills down to L0 through the L1 index.
Complete reasoning across raw transcripts with full L2 integration, incorporating recent data (the ~20 most recent sessions, depending on the decay algorithm), the most recent full user model snapshot, the ~10 sessions most likely to contain relevant information, and relevant external events.
The user model provides complete present-day context for interpreting and recontextualizing raw information from the past.
Ensures perfect precision and contextualization where it matters most.
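The context assembly described above can be sketched as follows. The function name, parameters, and relevance scores are hypothetical; only the ingredients (recent sessions, user model snapshot, most relevant sessions, external events) come from the description:

```python
def assemble_deep_context(sessions, relevance, model_snapshot, external_events,
                          n_recent=20, n_relevant=10):
    """Assemble the L0 deep-reasoning context: recent sessions, the latest
    user model snapshot, the most relevant sessions, and external events."""
    recent = sessions[-n_recent:]  # the decay algorithm may trim this further
    by_relevance = sorted(sessions, key=lambda s: relevance.get(s["id"], 0.0),
                          reverse=True)[:n_relevant]
    return {
        "recent_sessions": recent,
        "relevant_sessions": by_relevance,
        "user_model": model_snapshot,
        "external_events": external_events,
    }
```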
Post-Processing Memory Management
Amigo implements an efficient post-processing cycle that ensures optimal memory performance:
Session Breakpoint Management
Each conversation sessionized with clear breakpoints
Enables precise tracking of context boundaries
Allows efficient batch processing of memory operations
Maintains perfect contextual continuity across sessions
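A rough sketch of breakpoint-based sessionization follows. The 30-minute silence threshold and the message format here are illustrative assumptions, not Amigo's actual breakpoint policy:

```python
from datetime import datetime, timedelta

def sessionize(messages, gap=timedelta(minutes=30)):
    """Split a timestamped message stream into sessions, starting a new
    session at each breakpoint (here: silence longer than `gap`)."""
    sessions, current = [], []
    last_ts = None
    for ts, text in messages:
        if last_ts is not None and ts - last_ts > gap:
            sessions.append(current)   # close the session at the breakpoint
            current = []
        current.append(text)
        last_ts = ts
    if current:
        sessions.append(current)
    return sessions
```

Clear breakpoints like these give later stages well-defined context boundaries to batch over.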
L0 → L1 Transformation
Raw transcripts analyzed using dimensional framework
Important observations extracted and contextualized
Each observation tagged with dimensional relevance
Linkages maintained to source contextual data
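The L0 → L1 pass can be sketched as below. The keyword-based dimensional tagging is a stand-in assumption for the actual dimensional framework; what matters is that each observation carries its dimensional tags and a link back to its source session:

```python
def extract_observations(session_id, transcript, dimension_keywords):
    """Toy L0 -> L1 pass: tag transcript lines with dimensional
    relevance and keep a linkage back to the source session."""
    observations = []
    for line in transcript:
        tags = [dim for dim, words in dimension_keywords.items()
                if any(w in line.lower() for w in words)]
        if tags:  # only dimensionally relevant lines become observations
            observations.append({"text": line,
                                 "dimensions": tags,
                                 "source_session": session_id})
    return observations
```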
Checkpoint + Merge Pattern
L1 observations checkpointed by information volume and dimensional importance
Information decay modeled to prioritize persistence
Recent observations weighted more heavily in merge operations
Critical observations maintain perfect preservation regardless of age
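The checkpoint + merge pattern above can be illustrated with a minimal sketch, assuming an exponential recency weighting and a fixed retention cutoff (both parameters are invented for illustration; the source only specifies that recency is weighted and critical observations are preserved regardless of age):

```python
import math

def merge_checkpoint(observations, now, half_life=30.0, cutoff=0.05):
    """Weight observations by recency (exponential decay), but preserve
    critical observations at full weight regardless of age."""
    merged = []
    for obs in observations:
        age = now - obs["t"]
        weight = (1.0 if obs.get("critical")
                  else math.exp(-math.log(2) * age / half_life))
        merged.append({**obs, "weight": weight})
    # keep only observations that still carry meaningful weight
    return [o for o in merged if o["weight"] > cutoff]
```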
L1 → L2 Synthesis
Checkpointed observations synthesized into dimensional understanding
Previous user model merged with new observations
Evolved understanding captured in updated model
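The synthesis step can be sketched as a per-dimension merge. The supersede-by-recency rule here is an assumption about how the previous model and new observations are combined:

```python
def synthesize_user_model(previous_model, checkpointed_observations):
    """Merge the previous L2 user model with new checkpointed observations:
    for each dimension, the newest observation supersedes older understanding."""
    model = dict(previous_model)
    for obs in checkpointed_observations:   # assumed ordered oldest -> newest
        for dim in obs["dimensions"]:
            model[dim] = obs["text"]        # latest observation wins per dimension
    return model
```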
Memory Evolution Handling
The end-to-end memory system uniquely handles information evolution:
Progressive Recontextualization
As new L1 observations accumulate, older information is recontextualized
Long-range patterns emerge through observation accumulation
Changing understanding (e.g., opinions evolving over time) properly captured
Perfect context maintained for critical information despite evolution
Contextual relationships preserved through associative binding mechanisms
Decay-Aware Persistence
Non-critical information naturally decays according to importance model
Critical information maintains perfect persistence regardless of age
Decay rates vary by dimensional importance defined in user model blueprints
Search efficiency improves as less relevant information naturally fades
Dimensional tagging ensures critical information doesn't decay even over long periods
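Decay-aware persistence can be sketched as a retention check with blueprint-defined, per-dimension decay rates. The exponential form, rate values, and threshold are illustrative assumptions; the two properties from the text are that rates vary by dimension and that critical information never decays:

```python
import math

def is_retained(obs, age_days, decay_rates, threshold=0.1):
    """Decay-aware persistence: each dimension decays at a blueprint-defined
    rate; an observation persists while any of its dimensions is retained,
    and critical observations never decay."""
    if obs.get("critical"):
        return True
    scores = [math.exp(-decay_rates.get(d, 0.1) * age_days)
              for d in obs["dimensions"]]
    return max(scores, default=0.0) >= threshold
```

As low-scoring observations fall out of the index, the searchable set shrinks, which is how fading improves search efficiency.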
Information Gap Processing
Dynamic detection of missing information during live sessions
Automatic generation of contextually-aware queries to fill gaps efficiently
Preservation of information confidence levels to guide retrieval decisions
Progressive refinement of understanding through targeted gap filling
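Gap processing can be pictured as a confidence-gated scan of the user model. The query template and confidence threshold below are invented for illustration; the text specifies only that gaps are detected dynamically and filled with contextually aware queries guided by confidence levels:

```python
def detect_gaps(required_dimensions, user_model, confidence, min_conf=0.7):
    """Detect missing or low-confidence dimensions in the user model
    and generate targeted queries to fill them."""
    queries = []
    for dim in required_dimensions:
        if dim not in user_model or confidence.get(dim, 0.0) < min_conf:
            queries.append(f"What is the user's {dim}?")
    return queries
```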
This complete cycle ensures that Amigo's memory system delivers both perfect preservation of critical information and efficient operation across all memory access patterns, solving the fundamental challenges that plague traditional approaches.
Overcoming the Token Bottleneck
The layered memory system functions as an external high-dimensional memory store, preserving information that would otherwise be lost in token-based reasoning. This is crucial for the integrated M-K-R cycle: it ensures the Reasoning engine has access to faithful, appropriately granular Memory, which in turn enables more accurate Knowledge application and prevents the degradation of complex thought processes:
Perfect Information Preservation: While token-based reasoning reduces information density by roughly 1000x, our L0 layer maintains complete, verbatim records with perfect fidelity, forming a reliable Memory base for the M-K-R system.
Rich Contextual Relationships: The system preserves multidimensional relationships that would be flattened in token externalization, providing richer Memory context for Knowledge and Reasoning.
Progressive Abstraction: While maintaining perfect ground truth (L0 Memory), the system creates increasingly abstracted representations (L1, L2 Memory) that optimize for efficient retrieval and integration into the M-K-R cycle, ensuring high-bandwidth interplay.