Loop-ViT: Scaling Visual ARC with Looped Transformers

Abstract
We introduce Loop-ViT, a weight-tied (looped) transformer for Visual ARC that scales reasoning via iterative inference. By trading model width for “thinking time”, Loop-ViT reaches a new Pareto frontier on ARC-AGI, achieving strong accuracy with significantly fewer parameters.
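The core idea above, reusing one set of weights across many inference iterations so that depth (and hence "thinking time") can be scaled without adding parameters, can be sketched minimally. This is an illustrative toy, not the paper's architecture: the real model loops a full ViT block, while here a single dense layer with a residual connection stands in, and the names `shared_block` and `loop_vit_forward` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16
W = rng.normal(scale=0.1, size=(D, D))  # one shared weight matrix

def shared_block(x):
    # The same parameters W are reused on every iteration (weight tying).
    return x + np.tanh(x @ W)

def loop_vit_forward(x, n_loops):
    # Inference-time compute scales with n_loops at zero extra parameter cost.
    for _ in range(n_loops):
        x = shared_block(x)
    return x

x = rng.normal(size=(1, D))
shallow = loop_vit_forward(x, n_loops=2)
deep = loop_vit_forward(x, n_loops=12)  # more "thinking time", same weights
```

Increasing `n_loops` is the knob that trades compute for reasoning depth; the parameter count is fixed at the size of `W` regardless of how many iterations are run.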
Type: Blog / Preprint
Figure: Loop-ViT overview.