
Length Generalization in Hierarchical Sparse Attention

Graduate-level takeaway

The paper provides a systematic dissection of hierarchical sparse attention models, with ablations and theory that isolate three design principles enabling training-free length extrapolation. A model trained on 4K context generalizes to 32M tokens on RULER and BABILong.


Paper facts

Authors: Jiaqi Leng, Xiang Hu, Junxiong Wang, Jianguo Li, Wei Wu, Yucheng Lu
Date: 20 Oct 2025
Venue: arXiv (cs.CL) preprint
arXiv: 2510.17196
DOI: 10.48550/arXiv.2510.17196

Problem setup / 背景

Standard attention has quadratic cost and degrades on very long inputs. Sparse and linear-time alternatives scale better but often lose long-range fidelity. Chunk-based hierarchical sparse attention is a promising route to long-context scaling, yet it has been unclear which architectural components actually drive length generalization, as opposed to merely improving short-context performance.
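To make the architecture concrete, here is a minimal, generic sketch of chunk-based hierarchical sparse attention: the sequence is split into fixed-size chunks, each chunk is summarized by a landmark vector, every query ranks the landmarks and keeps its top-k chunks, and dense attention runs only inside the selected chunks. Names, shapes, and the selection rule are assumptions for illustration, not the paper's model; causal masking is omitted for brevity.

```python
# Illustrative sketch of chunk-based hierarchical sparse attention (assumed design,
# not the paper's exact model).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def hierarchical_sparse_attention(q, k, v, chunk_size=64, top_k=4):
    """q, k, v: (T, d) arrays for one head. Causal masking omitted for brevity."""
    T, d = q.shape
    n_chunks = T // chunk_size
    k_chunks = k[: n_chunks * chunk_size].reshape(n_chunks, chunk_size, d)
    v_chunks = v[: n_chunks * chunk_size].reshape(n_chunks, chunk_size, d)

    # 1) One landmark vector per chunk; a plain mean here (see the Method sketch
    #    below for an attention-based alternative).
    landmarks = k_chunks.mean(axis=1)                          # (n_chunks, d)

    # 2) Each query scores the landmarks and keeps its top_k chunks.
    chunk_scores = q @ landmarks.T / np.sqrt(d)                # (T, n_chunks)
    selected = np.argsort(-chunk_scores, axis=-1)[:, :top_k]   # (T, top_k)

    # 3) Dense attention restricted to tokens inside the selected chunks,
    #    so per-query cost is O(top_k * chunk_size) instead of O(T).
    out = np.zeros_like(q)
    for t in range(T):
        keys = k_chunks[selected[t]].reshape(-1, d)
        vals = v_chunks[selected[t]].reshape(-1, d)
        weights = softmax(q[t] @ keys.T / np.sqrt(d))
        out[t] = weights @ vals
    return out

# Toy usage: 512 tokens, 32-dimensional head.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((512, 32)) for _ in range(3))
print(hierarchical_sparse_attention(q, k, v).shape)            # (512, 32)
```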

Method / 方法

The authors build a unified framework for ablating the components of hierarchical sparse attention models and provide theoretical motivation for intra-chunk processing and landmark generation, isolating the design principles that make training-free length extrapolation possible.
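To illustrate why intra-chunk processing matters for landmark quality, the sketch below contrasts a mean-pooled landmark with one produced by a small attention-pooling step inside the chunk, where a query vector attends over the chunk's tokens so a single salient token is not averaged away. This is one plausible instantiation with assumed names and dimensions, not necessarily the paper's construction.

```python
# Two ways to summarize a chunk into a landmark vector (illustrative sketch; the
# attention-pooling variant is an assumed instantiation of "intra-chunk processing").
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mean_landmark(chunk):
    """chunk: (chunk_size, d). Linear pooling: every token weighted equally."""
    return chunk.mean(axis=0)

def attention_landmark(chunk, pool_query):
    """Non-linear pooling: a query attends over the chunk's tokens, so a salient
    token can dominate the summary instead of being averaged away."""
    d = chunk.shape[-1]
    weights = softmax(pool_query @ chunk.T / np.sqrt(d))   # (chunk_size,)
    return weights @ chunk                                  # (d,)

# Toy comparison: one salient token hidden among near-zero filler tokens.
rng = np.random.default_rng(0)
d, chunk_size = 32, 64
chunk = 0.01 * rng.standard_normal((chunk_size, d))
salient = rng.standard_normal(d)
chunk[17] = salient          # the token a downstream query would need to find
pool_query = salient         # stand-in for a learned, task-aligned pooling query

print(np.dot(mean_landmark(chunk), salient))                    # small: signal diluted
print(np.dot(attention_landmark(chunk, pool_query), salient))   # large: signal preserved
```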

Experiments & Results / 实验与结果

The paper reports comprehensive ablations that confirm each principle is necessary for length extrapolation. The final design yields state-of-the-art training-free length generalization on long-context benchmarks.

4K → 32M: trained on 4K tokens, evaluated on inputs up to 32,000,000 tokens (see the evaluation sketch below).
RULER + BABILong: benchmarks for long-context reasoning and memory.
SOTA: new state of the art in training-free length extrapolation.
Ablations: removing any core principle harms extrapolation.
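For scale, 4K → 32M is roughly an 8,000-fold increase in context length. The sketch below shows the general shape of such a sweep: a single fixed checkpoint probed at geometrically increasing context lengths with needle-in-a-haystack-style questions. RULER and BABILong are far richer task suites than this toy probe, and model_answer is a hypothetical interface.

```python
# Toy length-extrapolation sweep (illustrative protocol only; RULER/BABILong tasks
# are much richer, and `model_answer` is a hypothetical model interface).
import random

def make_probe(context_len, needle="The passcode is 7431."):
    """Bury one relevant sentence in filler text of roughly context_len words."""
    filler = ["The sky was grey that day."] * (context_len // 6)
    filler.insert(random.randrange(len(filler)), needle)
    return " ".join(filler), "What is the passcode?", "7431"

def evaluate(model_answer, lengths=(4_096, 32_768, 262_144, 2_097_152, 32_000_000), trials=20):
    """model_answer(context, question) -> str; same checkpoint at every length."""
    for n in lengths:
        correct = 0
        for _ in range(trials):
            context, question, gold = make_probe(n)
            if gold in model_answer(context, question):
                correct += 1
        print(f"context={n:>10,d} tokens  accuracy={correct / trials:.2f}")

# Example with a trivial "model" that just searches the raw text (an upper bound).
evaluate(lambda context, question: "7431" if "7431" in context else "unknown",
         lengths=(4_096, 32_768), trials=5)
```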

Limitations / 局限

The abstract does not enumerate limitations. Likely constraints include reliance on specific benchmark suites, the lack of publicly released code accompanying the arXiv posting, and unknown latency/memory trade-offs in real-world deployments. Further validation on other workloads (e.g., long-form retrieval, dialogue, multi-document QA) would strengthen the conclusions.

Resources / 资源

Paper (arXiv abstract): https://arxiv.org/abs/2510.17196
PDF: https://arxiv.org/pdf/2510.17196

Suggested citation / 推荐引用

Leng, Jiaqi, Xiang Hu, Junxiong Wang, Jianguo Li, Wei Wu, and Yucheng Lu. 2025. “Understanding and Improving Length Generalization in Hierarchical Sparse Attention Models.” arXiv:2510.17196. https://doi.org/10.48550/arXiv.2510.17196.