Layered Architecture
A multi-layered hierarchy that creates functional clinical intelligence through unified memory-knowledge-reasoning integration
Amigo's memory architecture employs a multi-layered hierarchy (L0, L1, L2, L3) that creates functional clinical intelligence, with perfect memory as one of its capabilities. It is not just a memory system but a complete cognitive framework that generates interconnected feedback loops between global patient understanding and local processing. The architecture operates through distinct post-processing and live-session phases and serves as a critical component of the unified Memory-Knowledge-Reasoning (M-K-R) system.
Live Conversation Processing
During live interactions, L3 provides memory at the interpretation, precision, and depth needed to power knowledge application and reasoning, without the retrieval latency that would degrade reasoning quality:
L3 (Always Available)
Memory-Knowledge-Reasoning Integration: L3 supplies memory at the precise interpretation depth needed for clinical knowledge application and reasoning
Memory is maintained at the precision and depth that different clinical reasoning tasks require, and is immediately available
Professional identity ensures memory interpretation matches knowledge-application requirements, with no retrieval delays
Healthcare decisions draw on memory-knowledge-reasoning unity: current symptoms connect to historical patterns at the proper contextual depth, with zero-latency access
Unified, immediately available context enables high-quality reasoning because memory, knowledge application, and reasoning operate on consistently interpreted information without retrieval interruption
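The zero-latency property described above can be illustrated with a minimal Python sketch. This is not Amigo's implementation; the `L3Model` class and `build_turn_context` function are hypothetical names chosen for illustration. The point is that the resident L3 model is composed directly into the turn context, with no retrieval call on the live path:

```python
from dataclasses import dataclass

@dataclass
class L3Model:
    """Hypothetical global patient model kept resident in memory."""
    dimensions: dict  # e.g. {"sleep": "improving", "adherence": "high"}
    summary: str      # synthesized longitudinal narrative

def build_turn_context(l3: L3Model, user_message: str) -> str:
    """Assemble the reasoning context for one live turn.

    L3 is already in memory, so no retrieval call (and no retrieval
    latency) sits between the user's message and the model's reasoning.
    """
    dim_lines = "\n".join(f"- {k}: {v}" for k, v in sorted(l3.dimensions.items()))
    return (
        f"Patient model (L3):\n{l3.summary}\n"
        f"Dimensions:\n{dim_lines}\n"
        f"Current message: {user_message}"
    )

l3 = L3Model(
    dimensions={"adherence": "high", "sleep": "improving"},
    summary="Two-year history of migraine, responsive to triptans.",
)
ctx = build_turn_context(l3, "The headaches are back this week.")
```

Because the whole model is already interpreted and resident, every dimension is available at uniform depth in a single pass.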
Rare Recontextualization (Adds Latency)
Perfect Reasoning Foundation: Expansion is rare and occurs only when genuinely new context emerges, not because of L3 limitations; L3 on its own provides a complete reasoning foundation
Complete Memory-Reasoning Foundation: L3 provides complete memory, immediately available, at the interpretation depth clinical reasoning needs
Targeted Historical Insight Extraction: Expansion occurs when L3-guided reasoning identifies an opportunity to extract additional insight from historical context
Complete Context Expansion: Queries generated with full L3 context enable precise extraction of genuinely valuable historical insights
Perfect Recontextualization: Past L0 conversations are recontextualized through L3's complete unified context, so reasoning extracts maximum value from historical information
Complete Professional Integration: Clinical knowledge is fully integrated through L3's comprehensive professional context
Unified Memory-Knowledge-Reasoning: L3 enables coherent reasoning across all information at the precision and depth clinical intelligence requires
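The fast-path/expansion split can be sketched as a simple gate. This is an illustrative Python sketch, not Amigo's API: `maybe_recontextualize`, the dimension dict, and the session index shape are all assumed names. The common path returns immediately with no historical lookup; expansion happens only when the turn raises a topic L3 does not already cover:

```python
def maybe_recontextualize(l3_dimensions, turn_topics, l0_index):
    """Expand into historical L0 transcripts only when the current turn
    raises a topic the resident L3 model does not already cover.

    Returns matching sessions, or [] when L3 alone suffices (the
    common, zero-latency path).
    """
    novel = [t for t in turn_topics if t not in l3_dimensions]
    if not novel:
        return []  # L3 alone is the reasoning foundation; no latency added
    # Expansion query generated with L3 awareness: pull only the
    # historical sessions tagged with the genuinely new topics.
    return [s for s in l0_index if set(novel) & set(s["topics"])]

l0_index = [
    {"session_id": "s1", "topics": ["sleep", "stress"]},
    {"session_id": "s2", "topics": ["medication"]},
]
# Topic already covered by L3: no expansion, no added latency.
fast_path = maybe_recontextualize({"sleep": "improving"}, ["sleep"], l0_index)
# Genuinely new topic: expand into the matching historical sessions.
expanded = maybe_recontextualize({"sleep": "improving"}, ["medication"], l0_index)
```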
Post-Processing Memory Management
Amigo implements a post-processing cycle that creates L3 through progressive synthesis:
L0 → L1: Memory Extraction with L3 Anchoring
Net-New Information Determination: L3 determines what constitutes genuinely new information worth extracting from L0 transcripts
Contextual Interpretation: L3 provides the interpretive lens for understanding L0 conversations from complete historical perspective
Professional Identity Targeting: Service provider background shapes what information is deemed critical for extraction
Dimensional Blueprint Guidance: L3's dimensional framework guides extraction targeting based on functional importance
Perfect Source Linking: Each L1 memory maintains linkage to source L0 for future recontextualization needs
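A minimal sketch of this extraction step, under stated assumptions: `extract_l1` and the session/fact shapes are hypothetical, and real extraction would use the L3-anchored interpretive lens rather than a simple set-membership check. What it does show is the two invariants named above: only net-new information becomes an L1 memory, and every L1 memory keeps a link to its source L0 position:

```python
def extract_l1(l0_session, l3_known_facts):
    """Extract only net-new facts from an L0 transcript, each linked
    back to its source session and position so it can be
    recontextualized later."""
    memories = []
    for i, utterance in enumerate(l0_session["utterances"]):
        # Net-new determination (here a naive membership test; in the
        # architecture described above, L3 decides what is genuinely new).
        if utterance and utterance not in l3_known_facts:
            memories.append({
                "fact": utterance,
                "source": (l0_session["session_id"], i),  # source link to L0
            })
    return memories

session = {"session_id": "s7",
           "utterances": ["Sleep improved", "Started new job"]}
new_memories = extract_l1(session, l3_known_facts={"Sleep improved"})
```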
L1 → L2: Episodic Synthesis when Accumulation Threshold Reached
Accumulation-Based Synthesis: When net-new information reaches the accumulation threshold, L1 memories are synthesized into the L2 episodic user model
L3-Anchored Synthesis: L1 memories synthesized into L2 episodic model with complete L3 awareness
Information Density Management: Prevents information-density explosion while preserving critical insights
Dimensional Organization: Professional identity guides how information is structured in episodic model
Temporal Coherence: Maintains chronological understanding while creating episodic synthesis
Boundary Prevention: L3 anchoring prevents information loss at processing boundaries
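The accumulation trigger can be sketched as follows. This is an illustrative Python sketch: the class name, the numeric threshold, and the episode shape are assumptions, and real synthesis would be L3-anchored rather than a simple concatenation. It shows the mechanism: L1 memories accumulate, and crossing the threshold produces one L2 episode while carrying source links across the boundary:

```python
class EpisodicSynthesizer:
    """Accumulates L1 memories and emits an L2 episode once the
    net-new-information threshold is reached (threshold illustrative)."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.pending = []       # L1 memories awaiting synthesis
        self.l2_episodes = []   # synthesized episodic records

    def add(self, l1_memory):
        self.pending.append(l1_memory)
        if len(self.pending) >= self.threshold:
            # Synthesize one episode, preserving temporal order and
            # source links so no information is lost at the boundary.
            self.l2_episodes.append({
                "facts": [m["fact"] for m in self.pending],
                "sources": [m["source"] for m in self.pending],
            })
            self.pending = []  # density management: buffer resets

syn = EpisodicSynthesizer(threshold=2)
syn.add({"fact": "a", "source": ("s1", 0)})
syn.add({"fact": "b", "source": ("s2", 0)})
```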
L2 → L3: Global Model Evolution through Boundary-Crossing Synthesis
Global Model Merger: Multiple L2 episodic models merged to evolve L3 across all time
Boundary-Crossing Synthesis: Merges L2 episodic models while preventing information density explosion
Complete Temporal Coverage: Creates unified understanding across entire patient history
Dimensional Evolution: User dimensions refined based on patterns discovered across episodes
Professional Identity Integration: Maintains clinically relevant interpretation throughout merger
Continuous Improvement: Each L3 evolution incorporates new insights while preserving historical understanding
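A boundary-crossing merge can be sketched in a few lines. The function name and record shapes below are illustrative, and real merging would involve L3-guided synthesis rather than deduplication; the sketch captures the two properties named above: complete temporal coverage (episodes traversed in order) and density control (repeated facts collapse instead of accumulating):

```python
def merge_into_l3(l2_episodes):
    """Merge episodic models into one global model: full temporal
    coverage, with duplicate facts collapsed to keep density bounded."""
    seen, facts = set(), []
    for episode in l2_episodes:  # assumed to be in chronological order
        for fact in episode["facts"]:
            if fact not in seen:  # density control across episode boundaries
                seen.add(fact)
                facts.append(fact)
    return {"facts": facts, "episode_count": len(l2_episodes)}

l3_model = merge_into_l3([{"facts": ["a", "b"]}, {"facts": ["b", "c"]}])
```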
Dimensional Evolution and Clinical Intelligence
The system creates multiple interconnected feedback loops between global patient understanding and local processing:
Self-Improving System: This complete cycle yields a self-improving clinical intelligence system: patterns discovered in patient groups can retroactively improve the interpretation of all historical data through dimensional evolution and temporal backfill, so clinical understanding keeps improving across the entire patient population.
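Temporal backfill, in sketch form: when a dimension definition evolves, the new interpretation is applied to all historical memories, not only to future sessions. The `backfill` function and the classifier below are hypothetical names for illustration:

```python
def backfill(l1_memories, classify):
    """Re-tag every historical memory under a revised dimension
    definition, so a newly discovered pattern retroactively improves
    the interpretation of past data."""
    return [{**m, "dimension": classify(m["fact"])} for m in l1_memories]

history = [{"fact": "wakes at 3am"}, {"fact": "skipped dose"}]
# A newly evolved dimension definition, applied across all time.
relabeled = backfill(
    history,
    lambda f: "sleep" if "wakes" in f else "adherence",
)
```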
High-Bandwidth Cross-Layer Integrations
Contextualized Historical Access: L3 provides interpretive context for direct L0 access during recontextualization
Temporal Bridging: L3 serves as a bridge between present understanding and raw historical events
Selective Retrieval: L3 dimensions guide which L0 sessions are relevant for expansion queries
Interpretive Anchoring: Raw L0 data is interpreted through L3's global context rather than from an isolated historical perspective
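Selective retrieval can be sketched as dimension-guided ranking: score each historical session by its overlap with the dimensions an expansion query targets, and recontextualize only the best matches. The function name and index shape are assumptions for illustration:

```python
def rank_l0_sessions(query_dimensions, l0_index, top_k=2):
    """Rank historical sessions by overlap with the L3 dimensions an
    expansion query cares about; only the top matches are pulled in."""
    scored = sorted(
        ((len(set(query_dimensions) & set(s["topics"])), s["session_id"])
         for s in l0_index),
        reverse=True,
    )
    # Sessions with zero overlap are never retrieved.
    return [sid for score, sid in scored[:top_k] if score > 0]

index = [
    {"session_id": "s1", "topics": ["sleep"]},
    {"session_id": "s2", "topics": ["sleep", "stress"]},
    {"session_id": "s3", "topics": ["diet"]},
]
relevant = rank_l0_sessions(["sleep", "stress"], index)
```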
User Understanding ↔ Dimension Definition Feedback Loops
Direct User Understanding
Immediate Clinical Context: User understanding directly informs clinical decision-making in live sessions
Dimensional Application: Current dimensional definitions applied to interpret patient information
Professional Identity Filtering: User understanding is filtered through the professional-identity lens
Real-Time Contextualization: Present user state contextualized against historical understanding
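Real-time contextualization can be sketched as a per-dimension comparison of present state against the historical L3 understanding. The `contextualize` name and the state dicts below are illustrative assumptions:

```python
def contextualize(current_state, l3_dimensions):
    """Compare the patient's present state with historical L3
    understanding, per dimension, flagging changes for the live session."""
    return {
        dim: {
            "historical": l3_dimensions.get(dim),
            "current": value,
            "changed": l3_dimensions.get(dim) != value,
        }
        for dim, value in current_state.items()
    }

delta = contextualize({"sleep": "worsening"}, {"sleep": "improving"})
```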
Overcoming the Token Bottleneck
The Core Constraint: Current language models are constrained by the token bottleneck: they must squeeze high-dimensional internal reasoning into low-bandwidth text tokens, sharply limiting their ability to preserve complex state across steps.
The layered memory system functions as an external high-dimensional memory store that preserves information that would otherwise be lost in token-based reasoning. This is crucial for the integrated M-K-R cycle:
While token-based reasoning loses significant information density, our L0 layer maintains complete, verbatim records with perfect fidelity, forming a reliable Memory base for the M-K-R system.
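The fidelity property of L0 can be sketched as an append-only verbatim store. The `L0Store` class is a hypothetical illustration, not Amigo's implementation: nothing at this layer is summarized or compressed, so the higher layers always have full-fidelity source transcripts to recontextualize against:

```python
class L0Store:
    """Append-only verbatim transcript store: records are kept exactly
    as spoken, so later layers can always recontextualize against
    full-fidelity source data."""

    def __init__(self):
        self._sessions = {}

    def append(self, session_id, utterances):
        # Verbatim, immutable record of the session (no compression).
        self._sessions[session_id] = tuple(utterances)

    def get(self, session_id):
        return list(self._sessions[session_id])

store = L0Store()
store.append("s1", ["Hi", "Sleep has been poor"])
```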