We are a frontier AI lab working on novel neural architectures and open foundation models. From research breakthroughs to production-ready systems — everything we build is designed for long-context intelligence at scale.
Liquid neural networks, continuous-time attention, and flow-based mechanisms whose cost scales linearly with sequence length. We design the architectures that will power the next generation of AI.
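To give a flavor of what "continuous-time" means here, consider a liquid time-constant cell in the style of Hasani et al.: the hidden state evolves as an ODE whose time constant depends on the current input, so the dynamics adapt token by token. This is a generic sketch under those assumptions, not our production cell; every name and shape below is illustrative.

```python
import numpy as np

def ltc_step(x, inp, W, U, b, tau, A, dt=0.1):
    """One fused semi-implicit Euler step of a liquid time-constant cell:
        dx/dt = -(1/tau + f) * x + f * A,   f = sigmoid(W @ x + U @ inp + b)
    The gate f makes the effective time constant input-dependent."""
    f = 1.0 / (1.0 + np.exp(-(W @ x + U @ inp + b)))
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Toy rollout: a 4-dim hidden state driven by a 2-dim input stream.
rng = np.random.default_rng(0)
h, tau, A = np.zeros(4), np.ones(4), rng.normal(size=4)
W, U, b = rng.normal(size=(4, 4)), rng.normal(size=(4, 2)), np.zeros(4)
for _ in range(5):
    h = ltc_step(h, rng.normal(size=2), W, U, b, tau, A)
print(h)  # state after five continuous-time steps
```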
Quasar is our flagship series — 2M+ token context, trained on Bittensor's distributed compute network. Every model ships open-weight under Apache 2.0.
We build products powered by our models: systems that process entire codebases, legal corpora, and research libraries in a single pass. Real applications, not demos.
Our research focus is decentralized, state-of-the-art AI. We tackle one of the hardest problems in the field: memory, so that capable models are no longer boxed in by how much context they can hold.
We give models true long-range memory: the capacity to process millions of tokens and reason across entire codebases, legal corpora, and research libraries without forgetting.
We explore different architectures and attention variants — linear attention, flow-based mechanisms, continuous-time systems — to find the best approach for state-of-the-art long-context performance.
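As one concrete example of the linear-attention family (a minimal NumPy sketch of the kernelized form popularized by Katharopoulos et al., not a description of our models): a positive feature map replaces softmax, so the (N, N) attention matrix never materializes and cost grows linearly with sequence length.

```python
import numpy as np

def elu_feature_map(x):
    # phi(x) = elu(x) + 1, a standard positive feature map for linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Bidirectional linear attention: softmax(Q K^T) V is replaced by
    phi(Q) (phi(K)^T V) / (phi(Q) phi(K)^T 1). phi(K)^T V is computed once
    and reused for every query, so no (N, N) score matrix is ever built."""
    Qf, Kf = elu_feature_map(Q), elu_feature_map(K)  # (N, d)
    KV = Kf.T @ V                                    # (d, d_v): linear in sequence length
    Z = Qf @ Kf.sum(axis=0)                          # (N,): per-query normalizer
    return (Qf @ KV) / Z[:, None]

# Toy check: 8 tokens, width 4.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(8, 4)) for _ in range(3))
print(linear_attention(Q, K, V).shape)  # (8, 4)
```

The causal variant keeps the same trick but accumulates phi(K)^T V as a running state, which is the property that makes very long contexts tractable in this family.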
We train on Bittensor's distributed compute network. Miners compete to produce the best model checkpoints, making frontier-class pretraining possible without centralized GPU clusters.
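The competition is easiest to see schematically. A hedged sketch of the idea, with hypothetical names (Checkpoint, score_round) standing in for the subnet's actual scoring logic rather than Bittensor's real API:

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    miner_id: str   # hypothetical: identifies the miner that submitted the weights
    loss: float     # hypothetical: validator-measured loss on a held-out eval set

def score_round(submissions: list[Checkpoint], reward_pool: float) -> dict[str, float]:
    """Rank submitted checkpoints by eval loss and route rewards toward the
    best ones. Winner-take-most weighting is one common choice; a real subnet
    tunes this curve to keep miners improving the frontier checkpoint."""
    ranked = sorted(submissions, key=lambda c: c.loss)
    weights = [0.5 ** (rank + 1) for rank in range(len(ranked))]  # 1/2, 1/4, 1/8, ...
    total = sum(weights)
    return {c.miner_id: reward_pool * w / total for c, w in zip(ranked, weights)}

print(score_round([Checkpoint("miner-a", 2.31),
                   Checkpoint("miner-b", 2.27),
                   Checkpoint("miner-c", 2.40)], reward_pool=1.0))
# miner-b wins the largest share: lowest held-out loss.
```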