SILX AI – A frontier model in your pocket

Eyad Gomaa
Project Owner

SILX AI – A frontier model in your pocket

Expert Rating

n/a
  • Proposal for BGI Nexus 1
  • Funding Request $50,000 USD
  • Funding Pools Beneficial AI Solutions
  • Total 2 Milestones

Overview

SILX AI is opening a new frontier in AI by building reasoning into the model's core, eliminating the need for chain-of-thought (CoT) or complex prompting. Our early prototype, guided by our latest research paper, introduces two new mechanisms: the Token Temperature Mechanism (TTM) and Guided Sequence of Thought (GSoT). These innovations fundamentally change how models process tokens, leading to more structured and context-aware reasoning. SILX AI is redefining AI intelligence from the ground up.

Proposal Description

How Our Project Will Contribute To The Growth Of The Decentralized AI Platform

Our project contributes affordable AI models that reason at their core, making them more scalable and efficient. By building reasoning control into the model itself, we enhance predictability and mitigate safety risks, ensuring more reliable AI behavior. This approach also democratizes AI development, allowing small labs worldwide to build frontier models and fostering global innovation in AI research.

Our Team

Eyad Gomaa – CEO & Co-Founder

  • AI researcher 
  • Ex-CEO of Triv AI, ranked among the top 42 transportation companies in Egypt
  • Co-author of "Guidance Is All You Need", a groundbreaking paper introducing new AI mechanisms

https://www.linkedin.com/in/eyad-gomaa-silx/ 

Mohamed Sharaf – CTO & Co-Founder

  • AI researcher and engineering expert
  • Ex-CTO of Triv AI, bringing extensive experience in AI-driven solutions

AI services (New or Existing)

Quasar-1

Type

New AI service

Purpose

Quasar-1 is a frontier AI model designed to achieve true reasoning at scale through its Token Temperature Mechanism (TTM) and Guided Sequence of Thought (GSoT). Together, these mechanisms dynamically refine token importance and structure logical progression, reducing computational overhead while improving accuracy. The goal is to create affordable yet powerful AI models that can perform complex reasoning tasks without excessive resource consumption.

AI inputs

Quasar-1 processes natural language inputs, structured data, and contextual information. It utilizes TTM to assign importance levels to different tokens and GSoT to structure its reasoning process in a more predictable and interpretable manner.

AI outputs

Quasar-1 generates highly structured, reasoning-driven responses with minimal compute overhead: context-aware explanations that follow a logical reasoning path, and frontier-level AI capabilities that remain affordable and scalable.

Company Name (if applicable)

SILX AI

The core problem we are aiming to solve

We are solving the challenge of making AI both affordable and frontier. Current AI models are either too expensive to scale or lack true reasoning capabilities. Our focus is on developing models that can reason at their core while remaining cost-efficient, making high-performance AI accessible to researchers and labs worldwide. By ensuring these models are predictable, controllable, and scalable, we push the boundaries of AI without requiring massive infrastructure. This approach democratizes AI development, allowing even small teams to build cutting-edge systems.

 

Our specific solution to this problem

Our solution, as demonstrated through our TTM + GSoT mechanisms, enables AI models to achieve reasoning at scale without requiring extra computation time. Unlike traditional models that rely on test-time compute to refine their outputs, our approach allows the model to reason in real time, eliminating unnecessary processing overhead. This results in lower compute usage while maintaining high performance, making AI both efficient and scalable without sacrificing reasoning capability.

Project details

Quasar-1 is a next-generation AI model designed to achieve true reasoning at scale through its novel Token Temperature Mechanism (TTM) and Guided Sequence of Thought (GSoT). Unlike traditional models that rely on brute-force computation and extensive test-time processing, Quasar-1 introduces a structured approach to token processing and reasoning.

At its core, TTM dynamically adjusts attention based on token importance, distinguishing between hot tokens (critical for reasoning) and cold tokens (less relevant). This mechanism ensures that the model focuses on key parts of the input, optimizing efficiency and reducing unnecessary computation.
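The paper's exact formulation is not reproduced here, but the hot/cold distinction above can be illustrated as a gate on attention scores. In this minimal sketch (function names, the log-gating trick, and the threshold values are our own assumptions, not taken from the paper), a per-token temperature near 1 leaves a token's attention logits untouched, while a temperature near 0 suppresses it:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ttm_attention(q, k, v, temperature):
    """Scaled dot-product attention with per-token temperature gating.

    `temperature` holds one value in (0, 1] per key token: values near 1
    mark "hot" (reasoning-critical) tokens, values near 0 mark "cold"
    (less relevant) tokens. Hypothetical illustration, not the paper's
    exact mechanism.
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)                       # standard attention scores
    # Shift cold tokens' logits down before softmax; log(1) = 0 leaves
    # hot tokens unchanged, so uniform temperature recovers plain attention.
    logits = logits + np.log(np.clip(temperature, 1e-6, 1.0))
    weights = softmax(logits, axis=-1)
    return weights @ v

# Toy example: 3 key tokens, the middle one marked cold.
rng = np.random.default_rng(0)
q = rng.normal(size=(2, 4))
k = rng.normal(size=(3, 4))
v = rng.normal(size=(3, 4))
temp = np.array([1.0, 0.05, 1.0])  # hot, cold, hot
out = ttm_attention(q, k, v, temp)
print(out.shape)  # (2, 4)
```

The design choice worth noting is that gating happens before the softmax, so suppressed mass is redistributed to hot tokens rather than simply zeroed out.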

GSoT further enhances the model’s reasoning ability by structuring token interactions in a way that mimics human-like logical progression. Instead of generating responses through trial and error, Quasar-1 follows a guided thought process, making it more predictable and reducing computational overhead.
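As a rough sketch of what "guided" generation means in contrast to free-form sampling, the loop below runs a fixed sequence of reasoning stages, each conditioned on all prior steps. The stage names and the `step_fn` interface are our own illustrative assumptions, standing in for whatever guided model calls GSoT actually prescribes:

```python
from typing import Callable, List, Tuple

def gsot_answer(question: str,
                step_fn: Callable[[str, str], str],
                stages=("identify", "decompose", "solve", "verify")):
    """Run a fixed sequence of reasoning stages instead of free-form sampling.

    `step_fn(stage, context)` stands in for one guided model call; the
    stage names are illustrative, not taken from the paper.
    """
    context = question
    trace: List[Tuple[str, str]] = []
    for stage in stages:
        out = step_fn(stage, context)   # each call sees all prior steps
        trace.append((stage, out))
        context += "\n" + out
    return trace

# Stub model: echoes the stage it was asked to perform.
trace = gsot_answer("What is 2 + 2?", lambda stage, ctx: f"[{stage}] ...")
print([s for s, _ in trace])  # ['identify', 'decompose', 'solve', 'verify']
```

Because the stage order is fixed up front, the output structure is predictable by construction, which is the property the paragraph above attributes to GSoT.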

By combining TTM and GSoT, Quasar-1 achieves frontier-level performance while remaining computationally efficient. This allows for real-time reasoning without excessive resource consumption, making AI more scalable, controllable, and accessible to researchers and organizations worldwide.

 

Existing resources

We have $3,000 worth of compute.

Open Source Licensing

Apache License

Links and references

Our paper:
https://arxiv.org/abs/2412.06822

Proposal Video

Placeholder for Spotlight Day pitch presentations. Videos will be added by the DF team when available.

  • Total Milestones

    2

  • Total Budget

    $50,000 USD

  • Last Updated

    15 Feb 2025

Milestone 1 - Integrating TTM + GSoT into Pretraining

Description

This milestone focuses on evolving our AI model architecture by embedding the Token Temperature Mechanism (TTM) and Guided Sequence of Thought (GSoT) directly into the pretraining process, rather than applying them as external transformer layers. This way the model learns reasoning and contextual prioritization naturally during training, improving efficiency and scalability while reducing inference-time compute. The work involves:

  • Modifying the transformer backbone to incorporate TTM as an intrinsic attention-modulation mechanism
  • Embedding GSoT within the pretraining loss functions to enable structured reasoning from the ground up
  • Optimizing compute efficiency by eliminating the need for extensive test-time reasoning
  • Running controlled pretraining experiments to benchmark improvements in reasoning capability, response structure, and interpretability

Deliverables

The Deliverable Description for this milestone is a second-generation AI model that integrates the Token Temperature Mechanism (TTM) and Guided Sequence of Thought (GSoT) directly into the pretraining phase rather than as external transformer layers. This approach ensures that reasoning and contextual prioritization are embedded in the model's core, making it more efficient, predictable, and scalable while reducing inference-time computation.

Budget

$14,000 USD

Success Criterion

The success criterion will be measured by:

  • Evaluating the model’s ability to perform structured reasoning without additional external layers
  • Demonstrating improved efficiency in token processing
  • Achieving reduced test-time compute while maintaining or exceeding baseline performance on benchmark tasks
  • Verifying that small research labs with limited resources can train and deploy the model effectively within the estimated $10,000 to $14,000 budget

Milestone 2 - Scaling the AI Model

Description

This phase focuses on scaling our AI model to enhance its pretraining capabilities. By expanding computational resources and optimizing the architecture, we aim to improve efficiency, reasoning, and generalization. The scaling process will involve refining our TTM and GSoT mechanisms to work seamlessly within larger models, ensuring better performance without increasing inference costs. Estimated costs for this milestone range between $10K and $50K, depending on infrastructure and compute availability.

Deliverables

The successfully scaled AI model will incorporate TTM and GSoT at a larger scale, ensuring reasoning is deeply integrated into the pretraining process. The deliverable is a fully trained and optimized model capable of maintaining efficiency while handling more complex tasks. This will be achieved without significantly increasing inference costs, making it accessible for wider adoption.

Budget

$36,000 USD

Success Criterion

The model should demonstrate improved reasoning capabilities, reduced compute requirements during inference, and enhanced scalability. Benchmark evaluations will show measurable gains in performance, efficiency, and accuracy compared to earlier versions. The pretraining process should remain stable and cost-effective, aligning with our goal of creating frontier AI that is both powerful and affordable.

Join the Discussion (3)


3 Comments
  • 0
    Simon250
    Mar 9, 2025 | 1:23 PM

    Under project details there are some weird symbols as shown below.   4o  

    • 0
      Simon250
      Mar 9, 2025 | 1:24 PM

      I can't post the screenshots here. It's above Existing Resources, in the Project Details section.

  • 0
    Sky Yap
    Mar 9, 2025 | 12:59 PM

    I think SILX AI is a really innovative project that’s pushing the boundaries of how AI models can reason. By focusing on mechanisms like TTM and GSoT, it’s addressing a core challenge—enabling models to process tokens in a structured, context-aware way without relying on heavy prompting. This could make AI not only more efficient and scalable but also safer and more predictable. One thing to keep an eye on might be scaling compute resources as the model evolves, but overall, it’s a bold and exciting step forward in AI research.

Expert Ratings

Reviews & Ratings

    No Reviews Available

    Check back later by refreshing the page.
