Graph-level Anomaly Detection via Hierarchical Memory Networks

Authors: Chaoxi Niu, Guansong Pang, Ling Chen

arXiv 2023

TL;DR

HimNet uses hierarchical node and graph memory modules inside a graph autoencoder to detect anomalous graphs, reaching 80.6% AUC on DD vs 70.6% for PK-iF (+10.0 points).


THE PROBLEM

Graph-level anomalies hide in local and global patterns

Graph-level anomaly detection must catch graphs that are abnormal in part or in whole, i.e., graphs that contain a few anomalous nodes or substructures (locally anomalous) or exhibit abnormal overall structure (globally anomalous).

Existing GLAD methods focus on learning discriminative graph embeddings and may fail to preserve the primary graph semantics, which makes them less effective when rich structural and attribute information must be captured.

HOW IT WORKS

Hierarchical Memory Networks for GLAD

HimNet introduces a GNN Encoder, Node Memory Module, Graph Memory Module, and Graph Decoder to learn hierarchical node-to-graph normal patterns jointly.

You can think of HimNet like a two-level cache: node memory stores detailed local templates, while graph memory stores global prototypes, similar to RAM plus a library of canonical blueprints.

By reconstructing graphs only from these memory blocks, HimNet exposes graphs that cannot be expressed well by normal patterns, something a plain autoencoder, which can often reconstruct anomalies just as faithfully, cannot reliably do.
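
To make the memory mechanism concrete, below is a minimal PyTorch sketch of attention-based memory lookup, the core operation HimNet applies at both the node and graph levels. The class name, shapes, and the exact form of the entropy term are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryModule(nn.Module):
    """Attention-based memory lookup: an input embedding is replaced by a
    weighted combination of learned "normal pattern" memory items."""

    def __init__(self, num_items: int, dim: int):
        super().__init__()
        # num_items plays the role of P (node level) or Q (graph level).
        self.memory = nn.Parameter(torch.randn(num_items, dim))

    def forward(self, z: torch.Tensor):
        # z: (batch, dim) embeddings from the GNN encoder.
        attn = F.softmax(z @ self.memory.t(), dim=-1)   # (batch, num_items)
        z_hat = attn @ self.memory                      # (batch, dim)
        # Entropy of the attention weights; minimizing it makes lookups sparse.
        entropy = -(attn * (attn + 1e-12).log()).sum(dim=-1).mean()
        return z_hat, entropy
```

Because the decoder only ever sees the approximated embedding z_hat, a graph whose embeddings cannot be composed from the stored normal patterns reconstructs poorly and receives a high anomaly score.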

DIAGRAM

Graph-level anomaly scoring pipeline in HimNet

This diagram shows how HimNet processes a graph at inference time to compute reconstruction- and approximation-based anomaly scores.

DIAGRAM

Training and evaluation pipeline for HimNet

This diagram shows how HimNet is trained on normal graphs and then evaluated against baselines on 16 datasets.

PROCESS

How HimNet Handles Graph-level Anomaly Detection

  1. Graph Autoencoder

    HimNet first trains a graph autoencoder, pairing the GNN Encoder with the Graph Decoder, to learn node and graph representations that preserve structural and attribute semantics.

  2. Node Memory Module

    The Node Memory Module stores P memory blocks and approximates node embeddings, so the Graph Decoder reconstructs graphs only from combinations of normal node patterns.

  3. Graph Memory Module

    The Graph Memory Module stores Q graph memory blocks and approximates graph embeddings, enforcing that each graph is expressed as a mixture of normal global prototypes.

  4. Training and Inference

    HimNet jointly minimizes reconstruction, approximation, and entropy losses during training, then uses the sum of the reconstruction error Lrec and the approximation error Lapp as the anomaly score at inference, as sketched in the code below.
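
A rough sketch of this objective and score under the notation above; lambda_app and lambda_ent are assumed hyperparameter names, and the actual weighting in the paper may differ.

```python
# Sketch of the joint objective and anomaly score described in step 4.
# lambda_app and lambda_ent are assumed hyperparameter names, not values
# taken from the paper.

def training_loss(l_rec, l_app, l_ent, lambda_app=1.0, lambda_ent=0.01):
    # Jointly minimize reconstruction, approximation, and entropy losses.
    return l_rec + lambda_app * l_app + lambda_ent * l_ent

def anomaly_score(l_rec, l_app):
    # At inference, the combined Lrec + Lapp value scores the graph:
    # the higher the score, the less the graph fits the stored normal patterns.
    return l_rec + l_app
```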

KEY CONTRIBUTIONS

Key Contributions

  • Hierarchical node-to-graph memory network (HimNet)

    HimNet is the first memory-based GLAD framework to jointly use the Node Memory Module and Graph Memory Module inside a graph autoencoder for graph-level anomaly detection.

  • Three-dimensional node memory module

    HimNet introduces a three-dimensional Node Memory Module Mn ∈ R^{P×N×D} composed of multiple two-dimensional blocks, each capturing one type of normal pattern across all nodes (see the sketch after this list).

  • Joint learning with reconstruction and approximation

    HimNet jointly minimizes graph reconstruction error and graph approximation error, plus an entropy term, to learn hierarchical normal patterns and detect both locally and globally anomalous graphs.
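
A minimal sketch of the three-dimensional node memory described above, for a single graph with N nodes; the per-block similarity weighting is an illustrative assumption rather than the paper's exact scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NodeMemory(nn.Module):
    """Three-dimensional node memory Mn in R^{P x N x D}: P two-dimensional
    blocks, each an (N x D) pattern spanning all N nodes of a graph."""

    def __init__(self, P: int, N: int, D: int):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(P, N, D))

    def forward(self, H: torch.Tensor):
        # H: (N, D) node embedding matrix of one graph.
        # Weight each block by its similarity to H, then mix the blocks.
        sims = (self.memory * H.unsqueeze(0)).sum(dim=(1, 2))  # (P,)
        w = F.softmax(sims, dim=0)                              # (P,)
        H_hat = (w.view(-1, 1, 1) * self.memory).sum(dim=0)     # (N, D)
        return H_hat, w
```

The decoder then reconstructs the graph from H_hat alone, so node patterns that do not match any combination of stored blocks surface as large reconstruction error.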

RESULTS

By the Numbers

  • AUC 80.6% on DD: +10.0 points over PK-iF

  • AUC 68.6% on NCI1: +14.1 points over OCGCN

  • AUC 78.0% on REDDIT: -0.2 points vs GLocalKD

  • AUC 71.1% on PPAR-gamma: +6.7 points over GLocalKD

These AUC scores come from 16 real-world GLAD datasets, including DD, NCI1, REDDIT, and PPAR-gamma, showing that HimNet consistently improves anomaly detection over both two-step and end-to-end baselines.

BENCHMARK

AUC on the DD protein graph dataset

AUC scores for DD, comparing HimNet against representative two-step and end-to-end GLAD baselines.

KEY INSIGHT

The Counterintuitive Finding

On REDDIT, HimNet reaches 78.0% AUC where the plain GAE baseline, built on the same encoder backbone, scores only 21.8%, a +56.2-point improvement.

This is surprising because simply inserting the Node Memory Module and Graph Memory Module into the Graph Autoencoder radically changes performance, contradicting the intuition that reconstruction-based methods are inherently weak on large graphs.

WHY IT MATTERS

What this unlocks for the field

HimNet shows that explicitly storing hierarchical normal patterns in memory blocks makes reconstruction-based GLAD practical even on large, complex graph datasets.

Builders can now design graph anomaly detectors that distinguish local and global irregularities using memory-guided reconstruction, instead of relying solely on contrastive or distillation-based embeddings.


