From Human Memory to AI Memory: A Survey on Memory Mechanisms in the Era of LLMs

Authors: Yaxiong Wu, Sheng Liang, Chen Zhang, Yichao Wang

arXiv 2025

TL;DR

From Human Memory to AI Memory uses a three-dimensional, eight-quadrant (3D-8Q) taxonomy to unify personal and system memory mechanisms in LLM-driven AI systems.

THE PROBLEM

LLM agents lack unified memory beyond short context windows

From Human Memory to AI Memory notes that most existing work focuses only on short-term and long-term memory along the time dimension.

This narrow view fragments personal memory and system memory design, making it hard to build coherent, human-like memory-enhanced LLM agents.

HOW IT WORKS

3D-8Q Memory Taxonomy — object, form, and time

From Human Memory to AI Memory introduces the 3D-8Q Memory Taxonomy, aligning Personal Memory, System Memory, Parametric Memory, and Non-Parametric Memory with human memory categories.

A useful analogy is a brain-inspired storage system in which working memory is RAM, long-term episodic memory is a diary, and semantic memory is a compressed encyclopedia.

This taxonomy lets From Human Memory to AI Memory describe memory behaviors that a plain context window cannot, such as cross-session personalization and lifelong procedural skill refinement.
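The eight quadrants can be sketched as the cross-product of the three dimensions. A minimal sketch, assuming simple string labels for each axis (the labels are illustrative shorthand, not the paper's exact terminology):

```python
from dataclasses import dataclass
from itertools import product

# The three taxonomy dimensions from the survey.
OBJECT = ("personal", "system")          # whose memory it is
FORM = ("parametric", "non-parametric")  # stored in weights vs. external stores
TIME = ("short-term", "long-term")       # retention horizon

@dataclass(frozen=True)
class Quadrant:
    obj: str
    form: str
    time: str

# 2 x 2 x 2 dimension values yield the eight quadrants of the 3D-8Q taxonomy.
QUADRANTS = [Quadrant(o, f, t) for o, f, t in product(OBJECT, FORM, TIME)]

def classify(obj: str, form: str, time: str) -> Quadrant:
    """Place a memory mechanism into its quadrant by its three coordinates."""
    q = Quadrant(obj, form, time)
    if q not in QUADRANTS:
        raise ValueError(f"unknown coordinates: {q}")
    return q

# e.g. a retrieval-augmented personal memory bank sits in the
# (personal, non-parametric, long-term) quadrant.
print(classify("personal", "non-parametric", "long-term"))
```

The point of the sketch is that a time-only view sees just the two TIME values; the survey's object and form axes multiply these into eight distinct design targets.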

DIAGRAM

Human to AI memory mapping taxonomy

This diagram shows how From Human Memory to AI Memory maps human sensory, working, explicit, and implicit memory onto AI personal and system memory types.

DIAGRAM

Survey organization and evaluation landscape

This diagram shows how From Human Memory to AI Memory structures its survey into personal memory, system memory, and open problems sections.

PROCESS

How From Human Memory to AI Memory Organizes Its Survey of Memory Mechanisms

  1. Human Memory

    From Human Memory to AI Memory first analyzes human memory, detailing short-term and long-term mechanisms to ground the later 3D-8Q Memory Taxonomy.

  2. Memory of LLM-Driven AI Systems

    From Human Memory to AI Memory then introduces the memory of LLM-driven AI systems, defining the object, form, and time dimensions for AI memory.

  3. 3D-8Q Memory Taxonomy

    From Human Memory to AI Memory constructs the 3D-8Q Memory Taxonomy, splitting memory into eight quadrants across personal and system memory.

  4. Personal Memory and System Memory

    From Human Memory to AI Memory surveys personal memory and system memory works, connecting them back to the taxonomy and highlighting open problems.

KEY CONTRIBUTIONS


  • Systematically defines memory for LLM-driven AI systems

    From Human Memory to AI Memory links human memory with the memory of LLM-driven AI systems, clarifying how personal and system memory mirror human categories.

  • 3D-8Q Memory Taxonomy

    From Human Memory to AI Memory proposes the 3D-8Q Memory Taxonomy over object, form, and time, organizing eight quadrants such as episodic and procedural memory.

  • Personal Memory and System Memory surveys

    From Human Memory to AI Memory reviews personal memory and system memory, covering systems like MemoryBank, HippoRAG, MemoRAG, and MemoryLLM.

RESULTS

By the Numbers

  • Quadrants in taxonomy: 8 quadrants, covering personal and system memory beyond two time-based categories

  • Memory dimensions: 3 dimensions (object, form, and time) that unify prior fragmented views

  • Surveyed memory types: 2 main types, personal memory and system memory, analyzed in depth

  • Open problem themes: 6 themes (multimodal, streaming, comprehensive, shared, collective privacy, automated evolution)

From Human Memory to AI Memory does not report benchmarks but structures the field with 3 dimensions and 8 quadrants, clarifying how LLM memory research fits into a unified space.


BENCHMARK

Memory dimensions coverage in From Human Memory to AI Memory

Proportion of conceptual focus across the object, form, and time dimensions in the 3D-8Q Memory Taxonomy.

KEY INSIGHT

The Counterintuitive Finding

From Human Memory to AI Memory argues that classifying memory only by time is insufficient, despite the widespread use of short-term and long-term labels.

This challenges the assumption that extending context windows alone solves memory, emphasizing the object and form dimensions: personal versus system memory, and parametric versus non-parametric memory.

WHY IT MATTERS

What this unlocks for the field

From Human Memory to AI Memory enables builders to place any memory mechanism into a precise quadrant, clarifying missing capabilities like procedural system memory.

Developers can now design agents that deliberately combine personal episodic memory, system procedural memory, and parametric semantic memory instead of relying on ad hoc context stuffing.
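The deliberate combination described above can be sketched as a toy memory layout. This is a minimal illustration under assumed design choices (simple list/dict stores, a fixed-size working buffer, naive substring retrieval); the survey prescribes no specific implementation, and all names here are hypothetical:

```python
from collections import deque

class AgentMemory:
    """Toy sketch of deliberately combined memory stores."""

    def __init__(self, working_capacity: int = 4):
        self.working = deque(maxlen=working_capacity)  # short-term, like RAM
        self.episodic = []    # personal long-term memory, like a diary
        self.semantic = {}    # distilled facts, like an encyclopedia
        self.procedural = {}  # system memory: reusable skills and workflows

    def observe(self, event: str) -> None:
        # Everything enters working memory; when the buffer is full,
        # the oldest item spills into the episodic store.
        if len(self.working) == self.working.maxlen:
            self.episodic.append(self.working[0])
        self.working.append(event)

    def learn_fact(self, key: str, value: str) -> None:
        self.semantic[key] = value

    def learn_skill(self, name: str, steps: list) -> None:
        self.procedural[name] = steps

    def recall(self, query: str) -> list:
        # Naive retrieval: substring match over the episodic store.
        return [e for e in self.episodic if query in e]

mem = AgentMemory(working_capacity=2)
for e in ["met Alice", "Alice likes tea", "scheduled meeting", "sent notes"]:
    mem.observe(e)
print(mem.recall("Alice"))  # episodic hits that have left working memory
```

The contrast with "ad hoc context stuffing" is that each store has an explicit role and eviction path, so retrieval can target the right memory type instead of one flat prompt buffer.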


Related papers

Survey · Agent Memory

Anatomy of Agentic Memory: Taxonomy and Empirical Analysis of Evaluation and System Limitations

Dongming Jiang, Yi Li et al.

arXiv 2026

Anatomy of Agentic Memory organizes Memory-Augmented Generation into four structures and empirically compares systems like LOCOMO, AMem, MemoryOS, Nemori, MAGMA, and SimpleMem under benchmark saturation, metric validity, backbone sensitivity, and system cost. On the LoCoMo benchmark, Anatomy of Agentic Memory shows Nemori reaches 0.502 F1 while AMem drops to 0.116, and MAGMA achieves the top semantic judge score of 0.670 under the MAGMA rubric.

Memory Architecture · Survey

Multi-Agent Memory from a Computer Architecture Perspective: Visions and Challenges Ahead

Zhongming Yu, Naicheng Yu et al.

arXiv 2026

Multi-Agent Memory Architecture organizes the **Agent IO Layer**, **Agent Cache Layer**, and **Agent Memory Layer**, plus **Agent Cache Sharing** and **Agent Memory Access** protocols, into a unified architectural framing for multi-agent systems. As a position paper, it reports no benchmark results or numeric comparisons against baselines.

Survey · Memory Architecture

Memory-Augmented Transformers: A Systematic Review from Neuroscience Principles to Enhanced Model Architectures

Parsa Omidi, Xingshuai Huang et al.

arXiv 2025

Memory-Augmented Transformers organizes **functional objectives**, **memory types**, and **integration techniques** into a three-axis taxonomy, grounded in biological systems like sensory, working, and long-term memory. The survey synthesizes dozens of architectures to highlight emerging mechanisms such as hierarchical buffering and surprise-gated updates that move beyond static KV caches.