Partnership · April 30, 2026 · 3 min read

SILX AI Partners with Adaption Labs to Advance Quasar Foundation Models

SILX AI is partnering with Adaption Labs to bring high-quality adaptive data into the next phase of Quasar foundation model training.

Quasar · Adaption Labs · Decentralized AI · MoE
(Image: SILX AI and Adaption Labs partnership banner)

At SILX AI, our mission with Quasar is clear: build open, high-quality foundation models that can compete at the frontier.

Today, we are excited to announce a new collaboration with Adaption Labs, an AI research company focused on building adaptive intelligence systems.

Through this partnership, Adaption Labs will provide SILX AI with state-of-the-art adaptive data to support the training of the Quasar foundation models.

Their role will focus on generating and refining high-quality adaptive datasets at scale, helping Quasar continuously improve its reasoning, generalization, and long-context capabilities.

This collaboration strengthens Quasar's path toward achieving SOTA performance and competing with leading closed-source models.

Why Quality Matters in Decentralized Training

Decentralized training runs have historically focused on large-scale execution.

That is important. Open and transparent training needs scale, coordination, and the ability to run complex training workloads across decentralized infrastructure.

But one critical piece has often been missing:

Quality.

A training run can be technically impressive yet still not useful in practice if model quality is not the main goal.

Even some of the most successful decentralized training runs, such as SN3, showed the potential of decentralized execution. But because quality was not the core priority, the final models were not practical enough for real-world use.

With Quasar SN24, we are changing that.

Building the Largest MoE Training Run with Quality at the Core

Quasar is being built with a different standard.

We are not only focused on proving that decentralized training can scale. We are focused on building models that are actually useful, reliable, and competitive.

Our goal is to build one of the largest MoE training runs in decentralized AI, with SOTA performance and model quality as core priorities from day one.

That means focusing on the full stack:

  • High-quality adaptive data
  • Strong long-context capabilities
  • Better reasoning and generalization
  • Scalable decentralized training
  • Practical usability after training

The output should not just be an impressive decentralized training run.

It should be a family of long-context models that are genuinely usable and capable of competing with leading closed-source systems.
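
Since MoE (mixture-of-experts) is central to this run, here is a rough sketch of the core routing idea: a small gating network scores the experts for each token, only the top-k experts actually run, and their outputs are combined using the gate weights. This is a minimal illustrative example in PyTorch with hypothetical layer sizes; it is not Quasar's actual architecture or training code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TopKMoE(nn.Module):
        """Minimal top-k mixture-of-experts layer (illustrative only)."""

        def __init__(self, d_model=512, d_ff=2048, n_experts=8, k=2):
            super().__init__()
            self.k = k
            # Gating network: scores each expert for each token.
            self.router = nn.Linear(d_model, n_experts)
            # Each expert is an ordinary feed-forward block.
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                              nn.Linear(d_ff, d_model))
                for _ in range(n_experts)
            )

        def forward(self, x):                               # x: (tokens, d_model)
            scores = self.router(x)                         # (tokens, n_experts)
            weights, idx = torch.topk(scores, self.k, -1)   # keep top-k experts per token
            weights = F.softmax(weights, dim=-1)            # renormalize over the chosen k
            out = torch.zeros_like(x)
            for slot in range(self.k):
                slot_w = weights[:, slot].unsqueeze(-1)     # (tokens, 1)
                slot_idx = idx[:, slot]                     # which expert fills this slot
                for e, expert in enumerate(self.experts):
                    mask = slot_idx == e                    # tokens routed to expert e
                    if mask.any():
                        out[mask] += slot_w[mask] * expert(x[mask])
            return out

    # Usage: each token's output is a weighted mix of its top-2 experts.
    tokens = torch.randn(16, 512)
    layer = TopKMoE()
    print(layer(tokens).shape)   # torch.Size([16, 512])

In a production MoE run, the router is typically trained with an auxiliary load-balancing loss so tokens spread evenly across experts; the sparse routing is what lets total parameter count grow far faster than per-token compute.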

Why Adaption Labs

Adaption Labs brings deep research experience in adaptive AI systems and data generation.

The company is co-founded by Sara Hooker, former Vice President of Research at Cohere and a veteran researcher from Google Brain, alongside Sudip Roy.

Adaption Labs has also raised $50M in seed funding to advance its mission in adaptive AI.

Their work aligns closely with what Quasar needs at this stage: high-quality, adaptive datasets that can help improve model performance across reasoning, generalization, and long-context tasks.

The Next Phase for Quasar

Quasar SN24 represents a shift in how decentralized AI training should be measured.

The question is no longer only:

Can decentralized networks run large-scale training?

The better question is:

Can decentralized training produce models that people actually want to use?

That is the standard we are building toward.

By combining decentralized training, MoE architecture, long-context modeling, rigorous evaluation, and adaptive data from Adaption Labs, Quasar is moving toward a new class of open foundation models.

Models that are not only transparent, but also high-quality, scalable, and practical.

This partnership with Adaption Labs is an important step toward that vision.