IMDMR: An Intelligent Multi-Dimensional Memory Retrieval System for Enhanced Conversational AI

Authors: Tejas Pawar, Sarika Patil, Om Tilekar et al.

2025

TL;DR

IMDMR uses a six-dimensional memory retrieval architecture with intelligent query processing to reach an overall score of 0.792, versus 0.207 for spaCy + RAG.



THE PROBLEM

Single-dimensional RAG caps at a 0.207 overall score

IMDMR targets conversational systems where state-of-the-art baselines reach only about 20% overall performance, with spaCy + RAG at 0.207.

These single-dimensional RAG and memory systems miss entity relationships and temporal context, so long-term conversations lose personalization and coherent, user-specific behavior.

HOW IT WORKS

IMDMR multi-dimensional memory retrieval

IMDMR centers on a Memory Storage Layer, Multi-Dimensional Search Engine, Intelligent Query Processor, and Response Generation Module wired to AWS Bedrock, Amazon Titan, and Qdrant.

You can think of IMDMR like a library card catalog plus a timeline and social graph layered over vector search, instead of a single semantic index.

This architecture lets IMDMR combine six memory dimensions with a multi-dimensional bonus factor, retrieving memories that a plain context window or cosine-similarity store would never surface.
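The page does not spell out the exact scoring formula, but the idea of six per-dimension scores combined with a bonus factor capped at 3.0 can be sketched as follows. This is a minimal sketch under stated assumptions: the weight values, the 0.1 match threshold, and the linear bonus schedule are illustrative guesses, not the paper's implementation.

```python
# Hypothetical sketch of multi-dimensional scoring: a weighted sum of
# per-dimension scores, boosted by a bonus that grows with the number of
# dimensions that matched, capped at 3.0 (the cap is from the paper; the
# rest of the formula is assumed).

DIMENSIONS = ["semantic", "entity", "category", "intent", "context", "temporal"]

def combined_score(dim_scores: dict[str, float],
                   weights: dict[str, float],
                   threshold: float = 0.1,
                   max_bonus: float = 3.0) -> float:
    """Weighted sum of dimension scores times a multi-dimensional bonus."""
    base = sum(weights.get(d, 0.0) * dim_scores.get(d, 0.0) for d in DIMENSIONS)
    # Count dimensions that contributed a meaningful match.
    matched = sum(1 for d in DIMENSIONS if dim_scores.get(d, 0.0) > threshold)
    # Bonus scales linearly with matched dimensions, capped at max_bonus.
    bonus = min(1.0 + 0.4 * max(matched - 1, 0), max_bonus)
    return base * bonus

# A memory matching on three dimensions gets boosted over one matching on one.
scores = {"semantic": 0.8, "entity": 0.9, "temporal": 0.5}
weights = {d: 1.0 / len(DIMENSIONS) for d in DIMENSIONS}
print(round(combined_score(scores, weights), 3))
```

The key design point this illustrates is that a memory matching several weaker dimensions can outrank one with a single strong cosine-similarity hit.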

DIAGRAM

IMDMR query time retrieval pipeline

This diagram shows how IMDMR processes a user query through intelligent query analysis and multi-dimensional search to retrieve memories and generate a response.

DIAGRAM

IMDMR evaluation and ablation design

This diagram shows how IMDMR evaluates simulated and production variants against baselines and ablation configurations on the synthetic conversation dataset.

PROCESS

How IMDMR Handles a Multi-Turn Conversation Query

  1. System Architecture

    IMDMR initializes the Memory Storage Layer with semantic, entity, category, intent, context, and temporal metadata, using Amazon Titan and Qdrant for storage.

  2. Multi-Dimensional Search

    When a query arrives, IMDMR routes it to the Multi-Dimensional Search Engine, which computes dimension-specific scores and applies the multi-dimensional bonus factor Bmulti.

  3. Intelligent Query Processing

    The Intelligent Query Processor calls AWS Bedrock for entity extraction and intent classification, then selects dimension weights and search strategies based on the query type.

  4. Entity Extraction and Resolution

    IMDMR runs its entity extraction and cross-memory entity resolution pipeline, building entity graphs that the Response Generation Module uses to synthesize coherent answers.
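The retrieval flow above can be sketched end to end in miniature. Every class and heuristic here is a hypothetical stand-in: the real system uses AWS Bedrock for entity and intent analysis, Amazon Titan for embeddings, and Qdrant for vector storage, none of which appear in this toy version.

```python
# Toy stand-in for multi-dimensional memory search: memories carry entity
# and intent metadata alongside text, and search ranks by entity overlap
# plus intent match (with recency as a tiebreaker) rather than by a single
# semantic similarity score.

from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    entities: set[str]
    intent: str
    timestamp: int

@dataclass
class MemoryStore:
    memories: list[Memory] = field(default_factory=list)

    def search(self, query_entities: set[str], query_intent: str) -> list[Memory]:
        # Score each memory on entity overlap and intent match, newest first.
        def score(m: Memory) -> tuple[int, int]:
            entity_score = len(m.entities & query_entities)
            intent_score = 1 if m.intent == query_intent else 0
            return (entity_score + intent_score, m.timestamp)
        return sorted(self.memories, key=score, reverse=True)

store = MemoryStore()
store.memories.append(Memory("User prefers aisle seats", {"user"}, "preference", 1))
store.memories.append(Memory("User is flying to Pune", {"user", "Pune"}, "travel", 2))
top = store.search({"user", "Pune"}, "travel")[0]
print(top.text)  # prints "User is flying to Pune"
```

Swapping the query's intent to "preference" surfaces the seating memory instead, which is the behavior a pure cosine-similarity store cannot guarantee.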

KEY CONTRIBUTIONS

Key Contributions

  • Multi-dimensional memory retrieval architecture

    IMDMR introduces a six-dimension Multi-Dimensional Search Engine over a Memory Storage Layer, combining semantic, entity, category, intent, context, and temporal scores with a multi-dimensional bonus factor Bmulti of up to 3.0.

  • Intelligent query processing system

    IMDMR adds an Intelligent Query Processor that uses AWS Bedrock for entity and intent analysis, dynamically selecting dimension subsets and weights for each query.

  • Simulation vs. production comparison

    IMDMR provides both IMDMR-Sim and IMDMR-Prod, showing overall scores of 0.314 and 0.792 respectively, quantifying the impact of real AWS Bedrock, Titan, and Qdrant integration.
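The per-query dimension weighting from the second contribution might look roughly like this. The query types, weight values, and keyword heuristic below are illustrative assumptions standing in for the Bedrock-based classification the paper describes.

```python
# Hypothetical per-query weight profiles: the query processor classifies
# the query, then hands the search engine a weight profile emphasizing the
# relevant dimensions (values are made up for illustration).

QUERY_PROFILES = {
    # "Who/what" questions lean on the entity and semantic dimensions.
    "entity_lookup": {"semantic": 0.3, "entity": 0.5, "temporal": 0.2},
    # "When did..." questions lean on temporal and context dimensions.
    "temporal":      {"semantic": 0.2, "context": 0.3, "temporal": 0.5},
}

def select_weights(query: str) -> dict[str, float]:
    """Naive keyword-based stand-in for Bedrock intent classification."""
    if query.lower().startswith("when"):
        return QUERY_PROFILES["temporal"]
    return QUERY_PROFILES["entity_lookup"]

print(select_weights("When did I last mention Qdrant?"))
```

The design choice this captures is that no single static weighting serves every query type; the processor picks the subset of dimensions worth scoring before search runs.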

RESULTS

By the Numbers

  • Overall Score: 0.792 (+0.585 over spaCy + RAG)

  • Entity Extraction F1 Score: 1.000 (vs 0.500 for spaCy + RAG)

  • Memory Relevance: 1.000 (vs 0.333 for spaCy + RAG)

  • BLEU Score: 0.800 (vs 0.058 for spaCy + RAG)

On the synthetic 1,000-conversation dataset testing entity extraction, intent understanding, and answer quality, IMDMR-Prod achieves an overall score of 0.792 versus 0.207 for spaCy + RAG, indicating that multi-dimensional retrieval plus real AWS integration substantially improves conversational memory systems.


BENCHMARK

Comprehensive Baseline System Performance Comparison

Overall Score across IMDMR variants and baseline systems.

BENCHMARK

Ablation Study: Dimension Effectiveness Analysis

Overall Score for IMDMR_Full vs single and hybrid dimension variants.

KEY INSIGHT

The Counterintuitive Finding

IMDMR-Prod reaches a perfect entity extraction F1 of 1.000, while IMDMR-Sim reaches only 0.667 and spaCy + RAG only 0.500.

This is surprising because many assume simulated pipelines are enough for memory research, but IMDMR reveals that real AWS Bedrock and Titan integration can add 0.333 F1 over a sophisticated simulated system.

WHY IT MATTERS

What this unlocks for the field

IMDMR unlocks a practical way to treat conversational memory as a six-dimensional object rather than just a semantic vector, with explicit entity, category, intent, context, and temporal axes.

With IMDMR, builders can deploy production agents that remember user preferences and goals over time, achieving a 0.792 overall score where classic RAG stacks like spaCy + RAG plateau around 0.207.


Related papers

Benchmark · Agent Memory

Active Context Compression: Autonomous Memory Management in LLM Agents

Nikhil Verma

· 2026

Focus Agent adds start_focus, complete_focus, a persistent Knowledge block, and an optimized Persistent Bash plus String-Replace Editor scaffold to actively compress context during long software-engineering tasks. On five hard SWE-bench Lite instances against a Baseline ReAct agent, Focus Agent achieves 22.7% token reduction (14.9M → 11.5M) while matching 3/5 = 60% task success.

Agent Memory

ActMem: Bridging the Gap Between Memory Retrieval and Reasoning in LLM Agents

Xiaohui Zhang, Zequn Sun et al.

· 2026

ActMem transforms dialogue history into atomic facts via Memory Fact Extraction, groups them with Fact Clustering, links them through a Memory KG Construction module, and uses Counterfactual-based Retrieval and Reasoning for action-aware answers. On ActMemEval, ActMem reaches 76.52% QA accuracy with DeepSeek-V3, beating LightMem’s 63.97% by 12.55 points and NaiveRAG’s 61.54%.
