CAM: A Constructivist View of Agentic Memory for LLM-Based Reading Comprehension

Authors: Rui Li, Zeyu Zhang, Xiaohe Bo et al.

2025

TL;DR

CAM combines an incremental overlapping clustering algorithm with Prune-and-Grow retrieval to reach 52.3 ACC-L on NovelQA, +4.5 over RAPTOR.



THE PROBLEM

LLM reading agents fail on long documents even with extended context

CAM targets scenarios where LLM performance declines as input texts lengthen, even when the input still fits within the model's context window.

When long novels, meetings, or multi-document corpora exceed the context window, reading agents miss dispersed evidence, hurting question answering, summarization, and claim verification.

HOW IT WORKS

Constructivist Agentic Memory with incremental overlapping clustering

CAM centers on an incremental overlapping clustering algorithm, a Foundational Semantic Network (G0), and Ego-Centric Disentanglement, which together build hierarchical schemata.

You can think of CAM as a human schema system plus a dynamic card catalog: chunks are filed into multiple overlapping folders that are locally rebalanced as new material arrives.

This design lets CAM flexibly assimilate new chunks and dynamically accommodate structure changes, enabling retrieval patterns that a flat context window or static tree cannot support.
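The assimilation step can be made concrete with a toy sketch. The following is not the authors' code: the blend weight `alpha`, the exponential positional decay, and the linking `threshold` are illustrative assumptions. The point is the pattern: a new chunk is scored against existing chunks by a composite of embedding similarity and positional proximity, and because it may clear the threshold in several neighborhoods at once, the resulting clusters can overlap.

```python
import numpy as np

def composite_score(emb_new, pos_new, emb_old, pos_old, alpha=0.7):
    """Blend embedding similarity with positional proximity.
    alpha and the decay form are illustrative, not from the paper."""
    cos = float(emb_new @ emb_old /
                (np.linalg.norm(emb_new) * np.linalg.norm(emb_old)))
    prox = np.exp(-abs(pos_new - pos_old))  # nearby chunks score higher
    return alpha * cos + (1 - alpha) * prox

def link_new_chunk(memory, emb_new, threshold=0.6):
    """Connect a new chunk to all sufficiently similar existing chunks.
    A chunk that clears the threshold for several neighborhoods ends up
    in overlapping clusters."""
    pos_new = len(memory)  # chunks arrive in document order
    edges = [i for i, (emb, pos) in enumerate(memory)
             if composite_score(emb_new, pos_new, emb, pos) >= threshold]
    memory.append((emb_new, pos_new))
    return edges
```

Here an adjacent near-duplicate chunk links strongly (high cosine plus high proximity), while a semantically unrelated chunk farther away does not link at all.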

DIAGRAM

Prune-and-Grow associative retrieval in CAM

This diagram shows how CAM performs Prune-and-Grow associative exploration over its memory hierarchy when answering a query.

DIAGRAM

Offline and online evaluation pipeline for CAM

This diagram shows how CAM is evaluated in both offline and batch-level online settings on long-text benchmarks.

PROCESS

How CAM Handles a Long Text Reading Query

  1. Foundational Network Expansion

    CAM encodes new contiguous text chunks into the Foundational Semantic Network G0, linking them via a composite similarity score that combines embedding similarity (f_emb) with positional proximity.

  2. Ego-Centric Disentanglement

    CAM extracts ego networks, partitions them into connected components, and builds a Replica Network that disentangles overlapping clusters via node replication.

  3. Online Clustering Updates

    CAM runs an incremental label-propagation algorithm on the Replica Network, updating clusters and regenerating abstraction nodes in the higher-level schemata.

  4. Prune-and-Grow Associative Retrieval

    Given a query, CAM first localizes relevant nodes via global similarity, then iteratively expands and prunes the activated set before feeding the result to the LLM backbone for answering.
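The retrieval step above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: `adj` stands in for the memory hierarchy's edges, and `k`, `rounds`, and `keep` are made-up parameters. The pattern is what matters: localize by global similarity, then alternate growing along edges with pruning by query relevance.

```python
import numpy as np

def prune_and_grow(query_emb, node_embs, adj, k=2, rounds=2, keep=3):
    """Localize, then alternately grow along edges and prune by score.
    adj maps node id -> neighbour ids; parameters are illustrative."""
    def score(i):
        v = node_embs[i]
        return float(query_emb @ v /
                     (np.linalg.norm(query_emb) * np.linalg.norm(v)))
    # fast localization: top-k nodes by global similarity to the query
    active = set(sorted(range(len(node_embs)), key=score, reverse=True)[:k])
    for _ in range(rounds):
        # grow: activate neighbours of currently active nodes
        frontier = {n for i in active for n in adj.get(i, [])}
        active |= frontier
        # prune: keep only the best-scoring activated nodes
        active = set(sorted(active, key=score, reverse=True)[:keep])
    return sorted(active)
```

For example, with a query near node 0 and edges 0→1 and 2→3, growth activates the neighbours but pruning immediately drops the node orthogonal to the query, so only query-relevant nodes reach the LLM.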

KEY CONTRIBUTIONS

Key Contributions

  • Constructivist design principle for agentic memory

    CAM formalizes structured schemata, flexible assimilation, and dynamic accommodation through an incremental overlapping clustering algorithm and ego-centric disentanglement for long-text reading.

  • Prototype of Constructivist Agentic Memory

    CAM instantiates the principle with a Foundational Semantic Network, a Replica Network, and Prune-and-Grow retrieval, supporting both offline and batch-level online memory development.

  • Evaluation on diverse long-text benchmarks

    CAM achieves dual superiority in performance and efficiency, e.g., 52.3 ACC-L on NovelQA and over 4× faster batch insertion than RAPTOR and GraphRAG.

RESULTS

By the Numbers

ACC-L: 52.3 (+4.5 over RAPTOR on NovelQA)

R-L: 25.4 (+1.7 over RAPTOR on NovelQA)

ACC-L: 57.6 (+3.7 over GraphRAG on QMSum)

ACC-L: 54.6 (+4.4 over GraphRAG on ODSum-Story)

Table 2 reports CAM's results on NovelQA, QMSum, and ODSum-Story, which test long-form question answering and summarization. The gains in ACC-L and ROUGE-L show that CAM's constructivist memory design improves both answer quality and abstraction over strong structured baselines such as RAPTOR and GraphRAG.

BENCHMARK

Reading comprehension results on NovelQA (ACC-L)

ACC-L on NovelQA single document question answering comparing CAM against structured and unstructured memory baselines.

KEY INSIGHT

The Counterintuitive Finding

CAM's online batch insertion is over 4× faster than RAPTOR's and GraphRAG's while maintaining ACC-L comparable to the offline setting.

This is surprising because dynamic online memory updates are often expected to be slower and less stable than a single offline reconstruction over the full corpus.

WHY IT MATTERS

What this unlocks for the field

CAM unlocks constructivist-style agentic memory in which schemata flexibly assimilate new chunks and dynamically accommodate structural changes without a full rebuild.

Builders can now deploy long context reading agents that continuously ingest streaming chapters or news batches while preserving retrieval quality and keeping latency practical.


Related papers

Benchmark · Agent Memory

Active Context Compression: Autonomous Memory Management in LLM Agents

Nikhil Verma · 2026

Focus Agent adds start_focus, complete_focus, a persistent Knowledge block, and an optimized Persistent Bash plus String-Replace Editor scaffold to actively compress context during long software-engineering tasks. On five hard SWE-bench Lite instances against a Baseline ReAct agent, Focus Agent achieves 22.7% token reduction (14.9M → 11.5M) while matching 3/5 = 60% task success.

Agent Memory

ActMem: Bridging the Gap Between Memory Retrieval and Reasoning in LLM Agents

Xiaohui Zhang, Zequn Sun et al. · 2026

ActMem transforms dialogue history into atomic facts via Memory Fact Extraction, groups them with Fact Clustering, links them through a Memory KG Construction module, and uses Counterfactual-based Retrieval and Reasoning for action-aware answers. On ActMemEval, ActMem reaches 76.52% QA accuracy with DeepSeek-V3, beating LightMem’s 63.97% by 12.55 points and NaiveRAG’s 61.54%.
