SILX AI — Open Foundation Models for Long-Context Intelligence

Novel architectures. Millions of tokens of context. Trained on decentralized compute. All open-weight.

What We Work On

Novel architectures and training methods that move the field forward.

Linear Attention

Standard transformer attention scales quadratically with sequence length. Our Quasar architecture uses continuous-time attention that scales linearly, handling millions of tokens at a fraction of the compute.
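
To make the scaling difference concrete, here is a minimal sketch of causal linear (kernelized) attention in plain NumPy. It assumes a simple elu+1 feature map, a common choice in the linear-attention literature; this is illustrative only, not the actual Quasar kernel.

```python
# Minimal causal linear attention sketch. Feature map phi = elu(x) + 1
# is an assumption for illustration, not the Quasar implementation.
import numpy as np

def feature_map(x):
    # elu(x) + 1: keeps features positive so the normalizer stays positive.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Causal linear attention in O(n * d * d_v) instead of O(n^2 * d).

    Q, K: (n, d) queries/keys; V: (n, d_v) values.
    """
    Qf, Kf = feature_map(Q), feature_map(K)
    n, d = Qf.shape
    d_v = V.shape[1]
    S = np.zeros((d, d_v))   # running sum of outer(phi(k_t), v_t)
    z = np.zeros(d)          # running sum of phi(k_t), for normalization
    out = np.zeros((n, d_v))
    for t in range(n):
        S += np.outer(Kf[t], V[t])
        z += Kf[t]
        out[t] = Qf[t] @ S / (Qf[t] @ z + 1e-6)
    return out
```

Because the per-token state (S, z) has fixed size, cost grows linearly with sequence length, which is what makes million-token contexts tractable.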

Decentralized Pretraining

We train on Bittensor's distributed compute network. Miners compete to produce the best model checkpoints, making frontier-class training accessible without centralized GPU clusters.
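
As a toy sketch of the competition mechanic, suppose validators rank miner-submitted checkpoints by loss on a shared held-out set and reward the best. Everything below (names, fields, loss values) is hypothetical and illustrative, not Subnet 24's actual API.

```python
# Hypothetical incentive loop: lower held-out loss wins the reward.
# Names and values are illustrative, not the actual Subnet 24 protocol.
from dataclasses import dataclass

@dataclass
class Checkpoint:
    miner_id: str
    eval_loss: float  # loss on a shared held-out evaluation set

def rank_checkpoints(submissions: list[Checkpoint]) -> list[Checkpoint]:
    # Lower loss ranks first; ties broken by miner id for determinism.
    return sorted(submissions, key=lambda c: (c.eval_loss, c.miner_id))

submissions = [
    Checkpoint("miner-a", 2.41),
    Checkpoint("miner-b", 2.38),
    Checkpoint("miner-c", 2.44),
]
best = rank_checkpoints(submissions)[0]
print(f"reward goes to {best.miner_id} (loss {best.eval_loss})")
```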

Open Weights

Every model we release ships with full weights under Apache 2.0. No gated access, no waitlists. Download from Hugging Face and deploy on your own infrastructure.
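
Deployment might look like the following with the transformers library. The repo id "silx-ai/Quasar-2M-Base" is an assumption for illustration; check the Hugging Face page for the published name. A custom linear-attention architecture would typically require trust_remote_code=True.

```python
# Sketch of loading the released weights; the repo id is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "silx-ai/Quasar-2M-Base"  # hypothetical repo id, verify on the Hub
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("Long-context reasoning starts here.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```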

Quasar

Our first foundation model series. Long-context reasoning with linear attention.

Quasar-2M-Base

Status: Live

Foundation model for long-context understanding and reasoning

Parameters: 2.4B
Context: 2M tokens
Architecture: Linear Attention
Trained on: 2T tokens
Open Weights · Apache 2.0 · Bittensor Subnet 24
Download on Hugging Face