Show HN: Topas-DSPL – A 15M param AI that solves hard reasoning tasks (ARC-AGI-2)
Author here.
We’ve been frustrated by the "Scaling Hypothesis" (just make the model bigger). We believe the issue with current AI isn't size; it's architecture.
We just open-sourced TOPAS-DSPL. It’s a tiny model (~15M params) that uses a Dual-Stream architecture (separating Logic from Execution) to tackle the ARC-AGI-2 benchmark.
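For anyone wondering what "Dual-Stream" means in practice, here is a rough PyTorch sketch of the general idea as I'd describe it: a logic/planning stream and an execution stream that each refine themselves with self-attention, then exchange information via cross-attention. All names, dimensions, and wiring below are my own illustrative placeholders, not the actual code in the repo.

    import torch
    import torch.nn as nn

    class DualStreamBlock(nn.Module):
        """Toy dual-stream block: a logic (planning) stream and an execution stream."""
        def __init__(self, dim=128, heads=4):
            super().__init__()
            self.logic_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.exec_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            # Execution stream reads the "plan" produced by the logic stream.
            self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.logic_ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            self.exec_ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            self.logic_norm = nn.LayerNorm(dim)
            self.exec_norm = nn.LayerNorm(dim)

        def forward(self, logic, execution):
            # Each stream first refines itself with self-attention...
            logic = logic + self.logic_attn(logic, logic, logic)[0]
            execution = execution + self.exec_attn(execution, execution, execution)[0]
            # ...then the execution stream is conditioned on the logic stream.
            execution = execution + self.cross_attn(execution, logic, logic)[0]
            logic = self.logic_norm(logic + self.logic_ff(logic))
            execution = self.exec_norm(execution + self.exec_ff(execution))
            return logic, execution

    # Toy usage: batch of 2, 16 logic tokens, 64 grid tokens, width 128.
    block = DualStreamBlock()
    l, e = block(torch.randn(2, 16, 128), torch.randn(2, 64, 128))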
The model achieves 24% accuracy on the hard evaluation set, which beats many models over 1000x its size.
The repo includes the full training code, data augmentation pipeline, and the MuonClip optimizer we used to stabilize the recursion. It runs on a single consumer GPU.
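On the optimizer: as I understand it, MuonClip (the name comes from the Kimi K2 work) pairs the Muon optimizer with a "QK-Clip" step, where the query/key projection weights are rescaled after an update whenever the largest observed attention logit exceeds a threshold, keeping the logits bounded. Here is a rough, hypothetical sketch of just that clipping step; the threshold value, function name, and the way max logits are tracked are my assumptions, not necessarily how our repo implements it.

    import torch

    @torch.no_grad()
    def qk_clip(q_proj, k_proj, max_logit, tau=100.0):
        """If the observed max attention logit exceeds tau, shrink W_q and W_k
        in place, splitting the rescale factor evenly between the two weights."""
        if max_logit > tau:
            scale = (tau / max_logit) ** 0.5
            q_proj.weight.mul_(scale)
            k_proj.weight.mul_(scale)

    # Intended to be called after optimizer.step(), with max_logit tracked
    # per head (or per layer) during the forward pass.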
We are very excited about this for many reasons, but here is one of them: it's our least efficient and least accurate model, and the one we are most comfortable open-sourcing.
Here's a link to the corresponding paper as well: https://zenodo.org/records/17683673
Our AI agent: https://bitterbot.ai/
Sorry! I linked the wrong paper above; the correct one is: https://zenodo.org/records/17834542