Introducing Quasar

A foundation model built to handle long, consistent context. Join the frontier of decentralized reasoning.

Quasar Subnet 24

Backed by the best

BitStarter
DSV

Quasar Foundation Models

Introducing Quasar, a revolutionary series engineered to shatter the context window limitation. With millions of tokens in a single context, Quasar enables reasoning across vast datasets without forgetting a single detail.

Context Window Comparison

DeepSeek-V3.2: 164k
Llama-4-Maverick: 1M
Kimi-Linear: 1M
Quasar (NEW): 10M+ (limitless)

99.9% Recall

Perfect retrieval across millions of tokens.

Linear Scaling

Consistent performance at any depth.

1/10th Cost

Optimized attention reduces inference overhead.

The Challenge

The Context Bottleneck

Traditional Transformers have a quadratic complexity problem ($O(N^2)$). Doubling the context length quadruples the compute cost. This makes long-context reasoning prohibitively expensive and slow.
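To make the scaling concrete, here is a back-of-the-envelope FLOP comparison (a hypothetical illustration with an assumed hidden size, not a measurement of any real model):

```python
# Rough FLOP counts for one attention layer's score/update step.
# d_model = 2048 is an assumed hidden size, for illustration only.
def attention_flops(n_tokens: int, d_model: int = 2048, quadratic: bool = True) -> int:
    if quadratic:
        # Standard attention: every token attends to every other token.
        return 2 * n_tokens * n_tokens * d_model
    # Linear attention: each token updates a fixed-size running state.
    return 2 * n_tokens * d_model * d_model

for n in (8_000, 16_000, 32_000):
    print(f"{n:>6} tokens  quadratic={attention_flops(n):.2e}  "
          f"linear={attention_flops(n, quadratic=False):.2e}")
```

Doubling the token count quadruples the quadratic term but only doubles the linear one, which is the entire argument in one function.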

Linear Attention

Complexity Reduction

Standard: O(N²) · Quasar: O(N)
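As a sketch of where the O(N) reduction comes from, here is a minimal kernelized linear attention in NumPy. The elu+1 feature map follows the common linear-attention formulation; this is an illustrative toy, not Quasar's actual Continuous-Time Attention kernel:

```python
import numpy as np

def linear_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Toy kernelized linear attention: O(N * d^2) instead of O(N^2 * d)."""
    # Positive feature map phi(x) = elu(x) + 1 keeps attention weights >= 0.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                  # (d, d_v) summary, built in a single O(N) pass
    Z = Qp @ Kp.sum(axis=0)        # per-query normalizer, also O(N)
    return (Qp @ KV) / Z[:, None]  # never materializes the N x N score matrix
```

Because the weights are non-negative and normalized, each output row is a convex combination of value rows, so outputs always stay within the range of V.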

Technical Specs

Parameters
2.4B
Architecture
Quasar Continuous-Time Attention Transformer
Training Dataset
2 Trillion Tokens
Optimized for Inference on Consumer Hardware
Economic Advantage

Training Efficiency

By leveraging decentralized compute on Bittensor, Quasar reduces training costs by orders of magnitude compared to centralized labs.

Traditional Pre-training: $10M+
Quasar Pre-training: <$50k
Cost Reduction: -99.5%
Subnet 24 • Decentralized Intelligence

Built on Bittensor

We leverage the $TAO network to orchestrate a global, decentralized training run. Intelligence is no longer centralized; it is mined, verified, and incentivized in real-time.

Quasar Subnet Mining

In this adversarial environment, independent Miners train high-performance variations of the Quasar architecture. They don't just process static data; they compete to solve complex reasoning tasks across a spectrum of sequence lengths.

1

Sequence Competition

Models are challenged on inputs ranging from 8k to 2M tokens. Miners must optimize for both retrieval accuracy and inference speed to survive the network's rigorous selection process.

2

Validator Consensus & Rewards

Validators act as the immutable source of truth. They continually audit miner checkpoints against unseen datasets. Top-performing models are automatically rewarded with $TAO, creating a self-sustaining cycle of improvement.

Node 1
Node 2
Node 3
Node 4
Node 5
Live Network Status
2M+
Max Sequence
450+
Active Miners
LB

LongBench

Language Modeling

Evaluates standard language modeling capabilities across diverse long-context datasets, ensuring scalability and coherence.

SCORE: SOTA
QB

QuasarBench

Synthetic Needle

Strict synthetic "needle-in-a-haystack" testing. Verifies absolute retrieval accuracy across the full context window.

PRECISION: 100%
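A "needle-in-a-haystack" probe is simple to state in code. Below is a minimal, hypothetical harness of the kind such benchmarks use (function names and filler text are illustrative; this is not the QuasarBench implementation):

```python
def build_haystack(n_filler: int, depth: float, passkey: str) -> str:
    """Bury one passkey sentence at a relative depth inside filler text."""
    sentences = ["The grass is green. The sky is blue."] * n_filler
    sentences.insert(int(depth * n_filler), f"The passkey is {passkey}.")
    return " ".join(sentences)

def needle_recalled(model_answer: str, passkey: str) -> bool:
    """Score a model's answer: did the exact passkey come back out?"""
    return passkey in model_answer

# Sweep depths so retrieval is checked at the start, middle, and end of context.
prompts = [build_haystack(10_000, d, "493817") for d in (0.0, 0.5, 0.99)]
```

Retrieval accuracy is then the fraction of (depth, length) pairs for which the model returns the exact passkey.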
// Subnet 24: Continuous Model Evolution

Narrative Understanding

Ingest entire book series. Quasar doesn't just summarize; it remembers every character arc, plot twist, and subtle detail.

What happened in the 3rd book?

In Prisoner of Azkaban, Harry discovers Sirius Black is his godfather.

Full Codebase Analysis

Feed Quasar your entire repository: trace bugs across modules, refactor legacy patterns, or generate comprehensive documentation.

Where is the authentication logic?

Auth logic is in auth.ts (lines 45-120).

Legal & Financial Review

Double the review accuracy: instantly spot contradictions, track liability clauses across documents, and synthesize insights from the entire corpus.


Conflicting liability clauses?

Contract A caps liability at $1M; Contract C implies unlimited.

Meet the Team

Leading SILX with deep AI-research expertise and a clear vision for the future of synthetic intelligence.

Eyad Gomaa

CTO & CO-FOUNDER

Leading researcher exploring new architectures for the next wave of synthetic intelligence.

Connect
Youssef Farahat

CEO & CO-FOUNDER

4+ years of blockchain expertise. Part-time researcher exploring advanced technologies and decentralized systems.

Connect

Our Advisors

Guided by industry experts and strategic minds shaping the future of decentralized intelligence.

Siam Kidd

Connect
Mark Creaser

Connect
Chris Zacharia

Connect

Model Weights

Access our open-source model weights on Hugging Face. Download and deploy Quasar for your own applications.

Quasar-2M-Base

Foundation model optimized for long-context understanding and reasoning tasks

Open Source · 2M Context

All models are released under open-source licenses. Join our community on Hugging Face to collaborate and contribute.