Category

Cognitive Architecture

Cognitive science–inspired frameworks for language agents — dual-process systems, neuro-symbolic memory, and biomimetic architectures.

7 papers

Cognitive Architecture · Agent Memory

Aeon: High-Performance Neuro-Symbolic Memory Management for Long-Horizon LLM Agents

Mustafa Arslan

· 2026

Aeon restructures LLM memory using the Atlas, Trace, Semantic Lookaside Buffer, Write-Ahead Log, and Sidecar Blob Arena inside a zero-copy Core Shell kernel. Against FP32 and flat-scan baselines, Aeon reports 4.70 ns INT8 dot products, 3.09 µs Atlas traversal at 100K nodes, 3.1× compression, and 750 ns P99 read latency under 16-thread contention.
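
As an illustration of the kind of primitive behind those numbers, here is a minimal Python sketch of a symmetric INT8 dot product with a dequantization scale; the function names and quantization scheme are assumptions for exposition, not Aeon's kernel.

```python
# Minimal sketch (not Aeon's implementation): an INT8 dot product with
# per-vector scales, the kind of kernel primitive whose nanosecond-level
# latency the paper reports for its semantic lookup path.
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric INT8 quantization: int8 codes plus a dequantization scale."""
    scale = float(np.abs(x).max()) / 127.0
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_dot(qa: np.ndarray, sa: float, qb: np.ndarray, sb: float) -> float:
    """Accumulate in int32 to avoid overflow, then rescale back to float."""
    acc = int(np.dot(qa.astype(np.int32), qb.astype(np.int32)))
    return acc * sa * sb

a = np.random.randn(768).astype(np.float32)
b = np.random.randn(768).astype(np.float32)
qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)
print(int8_dot(qa, sa, qb, sb), float(a @ b))  # quantized score vs. FP32 reference
```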

Cognitive Architecture · Agent Memory

Aligning Progress and Feasibility: A Neuro-Symbolic Dual Memory Framework for Long-Horizon LLM Agents

Bin Wen, Ruoxuan Zhang et al.

· 2026

Neuro-Symbolic Dual Memory Framework uses Progress Memory, Feasibility Memory, a Blueprint Planner Agent, a Progress Monitor Agent, and an Actor Agent to decouple semantic progress guidance from executable feasibility checks. On ALFWorld, the framework achieves a 94.78% success rate versus 88.81% for AWM, and on WebShop it reaches a score of 0.7132 versus 0.5998 for WALL-E 2.0.
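
A rough sketch of the decoupling idea, with every class and method name invented for illustration rather than taken from the paper: progress memory proposes the next semantic milestone, and a symbolic feasibility memory vetoes it unless its preconditions hold in the current state.

```python
# Illustrative sketch only (names and structure are assumptions, not the
# paper's API): progress memory suggests what to do next, while a symbolic
# feasibility memory blocks steps whose preconditions do not hold.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class FeasibilityMemory:
    # Symbolic preconditions per action, e.g. "open fridge" requires "at fridge".
    preconditions: dict[str, set[str]] = field(default_factory=dict)

    def is_feasible(self, action: str, state: set[str]) -> bool:
        return self.preconditions.get(action, set()) <= state

@dataclass
class ProgressMemory:
    # Ordered semantic milestones distilled from past successful trajectories.
    milestones: list[str] = field(default_factory=list)

    def next_step(self, done: set[str]) -> str | None:
        return next((m for m in self.milestones if m not in done), None)

def plan_step(progress: ProgressMemory, feasibility: FeasibilityMemory,
              state: set[str], done: set[str]) -> str | None:
    """Return the next milestone only if its preconditions are satisfied."""
    step = progress.next_step(done)
    if step is not None and feasibility.is_feasible(step, state):
        return step
    return None  # a planner / monitor would replan or repair the state here
```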

Cognitive Architecture · Agent Memory

D-Mem: A Dual-Process Memory System for LLM Agents

Zhixing You, Jiachen Yuan, Jason Cai

· 2026

D-Mem combines Mem0∗, Quality Gating, and Full Deliberation into a dual-process memory system that incrementally stores vector memories and selectively scans raw history. On LoCoMo with GPT-4o-mini, D-Mem’s Quality Gating reaches 53.5 F1 versus the Mem0∗ baseline’s 51.2 F1, recovering 96.7% of the 55.3 F1 Full Deliberation performance with far fewer tokens.
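
A minimal sketch of the dual-process gating pattern, assuming hypothetical retrieval and deliberation callables (none of these names come from the paper): the gate answers from cheap vector memories when retrieval looks good enough and escalates to a full scan of raw history otherwise.

```python
# Sketch of the dual-process idea (threshold and helper names are assumptions):
# answer from vector memories when the quality gate is confident, and only
# fall back to scanning raw history when it is not.
from typing import Callable

def answer_with_gating(
    query: str,
    retrieve_memories: Callable[[str], list[tuple[str, float]]],  # (memory, score)
    full_deliberation: Callable[[str], str],
    answer_from: Callable[[str, list[str]], str],
    gate_threshold: float = 0.6,
) -> str:
    hits = retrieve_memories(query)
    # Quality gate: escalate if the best retrieval score is too low.
    if not hits or max(score for _, score in hits) < gate_threshold:
        return full_deliberation(query)               # slow path: scan raw history
    return answer_from(query, [m for m, _ in hits])   # fast path: vector memories

# Hypothetical stand-ins for the real retrieval / LLM calls:
demo = answer_with_gating(
    "Where did Alice move?",
    retrieve_memories=lambda q: [("Alice moved to Tokyo in May.", 0.82)],
    full_deliberation=lambda q: "(answer derived from scanning full history)",
    answer_from=lambda q, mems: f"Based on memory: {mems[0]}",
)
print(demo)
```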

Cognitive Architecture · Long-Term Memory

Human-Like Lifelong Memory: A Neuroscience-Grounded Architecture for Infinite Interaction

Diego C. Lerma-Torres

· 2026

Human-Like Lifelong Memory combines Executive Function and Working Memory, a Memory Service Knowledge Graph, and a Thalamic Gateway to implement dual-process, valence-aware lifelong memory. It is presented as a theoretical framework with seven functional properties and testable predictions rather than benchmark comparisons against specific baselines.
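
Since the framework is specified conceptually, the following is a purely speculative sketch of what a valence-aware gateway might look like; every name, weight, and threshold here is an assumption, not part of the paper.

```python
# Purely illustrative sketch (component names and thresholds are assumptions
# drawn from the summary, not the paper's specification): a thalamic-gateway
# style filter that decides whether a working-memory item is consolidated into
# long-term storage, weighting salience by emotional valence.
from dataclasses import dataclass

@dataclass
class MemoryItem:
    content: str
    salience: float   # task relevance in [0, 1]
    valence: float    # emotional weight in [-1, 1]

def thalamic_gate(item: MemoryItem, threshold: float = 0.5) -> bool:
    """Consolidate when combined salience and |valence| cross the gate threshold."""
    score = 0.7 * item.salience + 0.3 * abs(item.valence)
    return score >= threshold

working_memory = [
    MemoryItem("user prefers metric units", salience=0.8, valence=0.1),
    MemoryItem("small talk about weather", salience=0.2, valence=0.0),
]
long_term = [m for m in working_memory if thalamic_gate(m)]
print([m.content for m in long_term])
```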

Benchmark · Cognitive Architecture

Learning to Forget: Sleep-Inspired Memory Consolidation for Resolving Proactive Interference in Large Language Models

Ying Xie

· 2026

SleepGate augments Transformers with a Conflict-Aware Temporal Tagger, Forgetting Gate, Consolidation Module, and Sleep Trigger, which together periodically rewrite the KV cache during sleep micro-cycles. On the PI-LLM benchmark, SleepGate achieves 99.5% retrieval accuracy at PI depth 5 and 97.0% at depth 10, while full KV cache, sliding-window, H2O, StreamingLLM, and decay-only-ablation baselines all stay below 18% across all depths.
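
A toy sketch of the consolidation idea, using an invented entry layout rather than SleepGate's actual KV-cache machinery: a periodic "sleep" pass keeps only the newest binding per conflict key, so stale bindings cannot proactively interfere with later retrieval.

```python
# Toy sketch of sleep-style consolidation (data layout and policy are
# assumptions, not SleepGate's kernels): during a periodic "sleep" pass,
# entries whose key conflicts with a newer write are forgotten.
from dataclasses import dataclass

@dataclass
class Entry:
    key: str       # conflict-aware tag, e.g. the entity an attribute binds to
    value: str
    step: int      # temporal tag: when this binding was written

def sleep_consolidate(cache: list[Entry]) -> list[Entry]:
    """Keep only the most recent binding per key; older conflicting ones are dropped."""
    latest = {}
    for e in cache:
        if e.key not in latest or e.step > latest[e.key].step:
            latest[e.key] = e
    return sorted(latest.values(), key=lambda e: e.step)

cache = [Entry("alice.city", "Paris", 1), Entry("alice.city", "Tokyo", 7),
         Entry("bob.city", "Lima", 3)]
print(sleep_consolidate(cache))  # the stale "Paris" binding is forgotten
```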

Cognitive Architecture · Long-Term Memory

Memory as Resonance: A Biomimetic Architecture for Infinite Context Memory on Ergodic Phonetic Manifolds

Tarik Houichime, Abdelghani Souhar, Younes El Amrani

· 2025

Phonetic Trajectory Memory (PTM) combines an Acoustic Injection stage, an Entropy Filter, a Neuro-Symbolic Relay, and a Resonance Engine to encode text as a continuous trajectory on an ergodic Hyper-Torus Memory instead of a growing KV cache. Compared with dense KV baselines, PTM delivers >3,000× signal-to-KV compression while maintaining ≈92% factual accuracy and sub-50 ms retrieval latency on long narrative and scientific corpora.
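
A loose geometric sketch of the trajectory idea, with frequencies and the encoding rule chosen purely for illustration (the paper's construction is phonetic and far richer): a token stream is folded into a fixed number of phases on a torus instead of an ever-growing KV cache.

```python
# Illustrative sketch only, not PTM's construction: a sequence is written as a
# trajectory on a torus by accumulating per-dimension phase increments, so the
# memory state stays fixed-size regardless of sequence length.
import numpy as np

RNG = np.random.default_rng(0)
D = 64                                   # torus dimension
# Distinct per-dimension step sizes; a real construction would pick
# rationally independent frequencies to make the winding ergodic.
FREQS = np.sqrt(np.arange(2, D + 2))

def encode_trajectory(token_ids: list[int]) -> np.ndarray:
    """Fold a token sequence into D phases on [0, 2*pi), one point per step."""
    phase = np.zeros(D)
    points = []
    for tok in token_ids:
        phase = (phase + FREQS * (tok + 1) * 1e-3) % (2 * np.pi)
        points.append(phase.copy())
    return np.stack(points)              # trajectory; last row is the "state"

traj = encode_trajectory(RNG.integers(0, 50_000, size=1_000).tolist())
print(traj.shape, traj[-1][:4])          # fixed-width state regardless of length
```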

Survey · Cognitive Architecture · Memory Architecture

Memory-Augmented Transformers: A Systematic Review from Neuroscience Principles to Enhanced Model Architectures

Parsa Omidi, Xingshuai Huang et al.

arXiv · 2025

Memory-Augmented Transformers organizes functional objectives, memory types, and integration techniques into a unified taxonomy that connects biological memory principles with concrete architectures such as Memformer, Titans, ATLAS, and EMAT. Its main contribution is a systematic three-dimensional classification that links dynamic multi-timescale memory, selective attention, and consolidation to specific Transformer designs and emerging lifelong-learning paradigms.