This proposal evaluates whether quantum computing can outperform classical GPUs in executing core AGI operations from OpenCog Hyperon: reasoning, attention, and memory. It benchmarks leading quantum paradigms (ion-trap, superconducting, photonic, etc.) to determine which, if any, offer superior cost-efficiency. Beyond testing feasibility, the study identifies which architecture holds the greatest promise for AGI and what hardware thresholds must be crossed to gain a real-world advantage.
This RFP seeks a technical and experimental assessment of quantum computing architectures in AGI applications. Proposals should explore the practicality and limitations of various quantum approaches, including trapped-ion, superconducting, photonic, and topological quantum computing, in handling probabilistic reasoning, parallel processing, and large-scale knowledge representation. The research could include quantum-classical hybrid simulations and feasibility studies for applying quantum advancements to AGI workloads. Bids are expected to range from $20,000 to $100,000.
Proposal Description
Company Name (if applicable)
Rubiks Hub
Project details
This proposal investigates whether quantum computing platforms can deliver meaningful advantages over classical hardware in running core AGI workloads derived from OpenCog Hyperon. These workloads (logical inference, memory updates, and attention control) are essential cognitive primitives expected to be central to Artificial General Intelligence systems.
Over 12 weeks, the project will benchmark several leading quantum computing paradigms—including ion-trap, superconducting, photonic, neutral-atom, and topological systems—against GPU-based classical implementations. The core benchmark is a throughput-per-dollar metric (denoted χ), which evaluates how many correct AGI operations each platform can perform per unit cost.
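To make the χ metric concrete, here is a minimal worked sketch of how it could be computed; every figure in it is a hypothetical placeholder chosen only to illustrate the arithmetic, not a measured or sourced value.

```python
# Illustrative only: the throughput, success-rate, and price figures below are
# assumptions for demonstration, not data from the proposal or from real hardware.
def chi(ops_per_second: float, success_rate: float, usd_per_hour: float) -> float:
    """Correct AGI operations performed per US dollar of compute."""
    correct_ops_per_hour = ops_per_second * success_rate * 3600
    return correct_ops_per_hour / usd_per_hour

gpu_chi = chi(ops_per_second=5e6, success_rate=0.999, usd_per_hour=32.0)  # assumed 8xA100 cluster
qpu_chi = chi(ops_per_second=1e3, success_rate=0.95, usd_per_hour=90.0)   # assumed cloud QPU
print(f"classical chi ~ {gpu_chi:.3g}, quantum chi ~ {qpu_chi:.3g} correct ops per USD")
```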
Rather than assuming quantum superiority, this study takes a neutral, comparative stance. The goal is to pinpoint which quantum architectures (if any) are advantageous today, and if none meet the performance-cost crossover point, the study will define how far each platform is from becoming viable. This includes identifying the exact resource bottlenecks (e.g., decoherence, gate latency, compiler inefficiency) that prevent quantum acceleration in AGI contexts.
Ultimately, this project delivers more than performance data. It produces a reproducible framework, a ranked capability map of quantum platforms for AGI, and a strategic blueprint for how future quantum improvements could unlock decisive gains in cognitive computation. This work positions quantum reasoning as a testable pathway toward scalable intelligence.
Total Milestones
4
Total Budget
$40,000 USD
Last Updated
20 May 2025
Milestone 1 - Baseline Mapping
Description
Objective:
To establish a comprehensive and up-to-date baseline of hardware performance and costs for each candidate architecture and the classical baseline.
Actions:
• Collect the latest, peer-reviewed metrics on coherence times, gate fidelities, energy requirements, and access pricing for all five quantum architectures (trapped-ion, superconducting, photonic, topological, neutral-atom) as well as an 8×A100 classical GPU cluster.
• Source data from public spec sheets, technical white papers, and direct communication with quantum cloud providers (IonQ, IBM Q, AWS Braket, etc.).
• Integrate these metrics into a starter throughput-per-dollar table (χ), normalizing across paradigms for direct comparison (a minimal sketch of such a table follows this list).
• Document all sources and assumptions, ensuring transparency and reproducibility.
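As a sketch of what the starter χ table might look like once metrics are normalized across paradigms, the snippet below assembles placeholder values into a single comparable column; the platform figures are assumptions that only demonstrate the structure, not sourced data.

```python
# Illustrative only: every metric value below is a placeholder, not a sourced figure.
import pandas as pd

raw = pd.DataFrame({
    "platform": ["trapped-ion", "superconducting", "photonic",
                 "neutral-atom", "topological", "8xA100 GPU"],
    "gate_fidelity": [0.998, 0.999, 0.97, 0.995, None, 1.0],   # None = no published value yet
    "ops_per_second": [1e3, 1e5, 1e4, 5e3, None, 5e6],
    "usd_per_hour": [90.0, 60.0, 40.0, 50.0, None, 32.0],
})

# Normalized chi: correct operations per dollar, directly comparable across paradigms.
raw["chi"] = raw["ops_per_second"] * raw["gate_fidelity"] * 3600 / raw["usd_per_hour"]
print(raw.sort_values("chi", ascending=False).to_string(index=False))
```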
Deliverables
A baseline χ table with supporting documentation, forming the foundation for subsequent experimental phases.
Budget
$4,190 USD
Success Criterion
Milestone 1 will be considered successful if a transparent, normalized χ baseline is created for all six compute architectures, using at least four validated performance and cost metrics per platform, backed by reproducible documentation, and ready to support cross-paradigm benchmarking in later phases.
Milestone 2 - Kernel Benchmarking
Description
Objective:
To empirically measure the execution performance and error behavior of OpenCog Hyperon’s core cognitive kernels across classical (GPU) and available quantum backends, producing a throughput-per-dollar (χ) figure for each architecture on key AGI micro-tasks under controlled scenarios.
Actions:
1. Kernel Selection & Formalization
2. Hardware-Specific Compilation
3. Simulated & Native Execution (illustrated in the sketch after this list)
4. Throughput and Fidelity Analysis
5. Cross-Platform Benchmark Table
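A minimal sketch of steps 2 to 4 follows, assuming Qiskit and its Aer simulator as one possible toolchain; the toy circuit, cost figure, and correctness check are placeholders standing in for an actual compiled Hyperon kernel.

```python
# Illustrative sketch only: assumes Qiskit + Aer as the toolchain; the kernel,
# pricing, and correctness criterion are hypothetical stand-ins.
import time
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

def toy_kernel() -> QuantumCircuit:
    """Stand-in for a compiled Hyperon micro-kernel (3-qubit GHZ toy circuit)."""
    qc = QuantumCircuit(3, 3)
    qc.h(0)
    qc.cx(0, 1)
    qc.cx(1, 2)
    qc.measure(range(3), range(3))
    return qc

def fidelity_adjusted_chi(qc: QuantumCircuit, shots: int, usd_per_hour: float) -> float:
    """Compile, execute, and return correct operations per dollar (chi)."""
    backend = AerSimulator()
    compiled = transpile(qc, backend)                       # hardware-specific compilation
    start = time.perf_counter()
    counts = backend.run(compiled, shots=shots).result().get_counts()
    elapsed = time.perf_counter() - start
    correct = counts.get("000", 0) + counts.get("111", 0)   # expected GHZ outcomes
    fidelity = correct / shots
    ops_per_second = shots / elapsed                        # one shot treated as one operation
    return ops_per_second * fidelity / (usd_per_hour / 3600.0)

if __name__ == "__main__":
    chi = fidelity_adjusted_chi(toy_kernel(), shots=2000, usd_per_hour=1.0)
    print(f"fidelity-adjusted chi (toy): {chi:.3g} correct ops per USD")
```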
Deliverables
A kernel-specific χ benchmark report, containing:
• Executable representations of each kernel.
• Compilation traces and circuit schematics.
• Benchmark tables of real and simulated performance across platforms.
• Annotated heatmaps comparing throughput, cost, and fidelity.
• Diagnostic notes on failure modes or compile-time bottlenecks.
Budget
$12,165 USD
Success Criterion
Phase 2 will be successful when OpenCog Hyperon’s core AGI kernels are implemented and executed across both classical and multiple quantum hardware backends, yielding real, fidelity-adjusted χ values with full traceability. This outcome establishes the first comparative performance baseline for AGI-relevant computation across emerging quantum paradigms.
Milestone 3 - Sensitivity Mapping and Advantage Frontier
Description
Objective:
To project how each quantum architecture must evolve (in gate fidelity, decoherence time, qubit count, compilation efficiency, or cost) to outperform classical hardware on AGI kernel tasks. This phase quantifies how far each paradigm is from crossing the "quantum advantage frontier" in AGI workloads and identifies the parameters with the greatest leverage.
Actions:
1. Construct Frontier Models
2. Sensitivity Analysis (see the sketch after this list)
3. Construct Quantum Advantage Frontier
4. Forecast Platform Readiness
5. Prescribe Development Priorities
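One way the sensitivity analysis and distance-to-frontier estimate could be framed is sketched below: a simple parametric χ model, finite-difference elasticities per parameter, and the multiplicative gap to a classical baseline. The functional form, parameter values, and baseline figure are assumptions made purely for illustration.

```python
# Illustrative only: the depth-wise error model and all numbers are assumptions,
# not calibrated results from the study.
def chi_model(gate_fidelity: float, ops_per_second: float, usd_per_hour: float,
              circuit_depth: int) -> float:
    """Fidelity-adjusted throughput per dollar for a kernel of given depth."""
    success_prob = gate_fidelity ** circuit_depth        # crude per-gate error model
    return ops_per_second * success_prob * 3600 / usd_per_hour

def elasticity(param: str, base: dict, rel_step: float = 0.001) -> float:
    """Percent change in chi per percent change in one parameter (finite difference)."""
    bumped = dict(base, **{param: base[param] * (1 + rel_step)})
    chi0, chi1 = chi_model(**base), chi_model(**bumped)
    return ((chi1 - chi0) / chi0) / rel_step

base = dict(gate_fidelity=0.999, ops_per_second=1e4, usd_per_hour=60.0, circuit_depth=200)
classical_chi = 5.6e8                                    # placeholder GPU baseline
for p in ("gate_fidelity", "ops_per_second", "usd_per_hour"):
    print(f"elasticity of chi w.r.t. {p}: {elasticity(p, base):+.2f}")
print(f"distance to frontier: x{classical_chi / chi_model(**base):.0f} improvement needed")
```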
Deliverables
• A set of χ parametric models (analytical or empirical)
• Sensitivity heatmaps and elasticity plots per platform
• Quantum Advantage Surface plots (2D/3D)
• Distance-to-frontier scores with annotated platform gaps
• Timeline and feasibility scorecard (including alignment with vendor roadmaps)
Budget
$14,475 USD
Success Criterion
Phase 3 will be considered successful when the project produces calibrated, platform-specific models of χ (fidelity-adjusted throughput-per-dollar), quantifies how each key hardware parameter affects performance, and defines the exact technical conditions under which each quantum architecture can surpass the classical GPU baseline in AGI workloads.
Milestone 4 - Strategy Synthesis and Deployment Blueprint
Description
Objective:
To convert benchmarking results and frontier models into a realistic, staged deployment strategy. This phase determines which quantum architecture is most viable for short- to medium-term AGI integration, which AGI kernels it should run, and how development teams should proceed, technically and operationally, toward hybrid and native quantum AGI execution.
Actions:
1. Platform Suitability Ranking (see the scoring sketch after this list)
2. Kernel–Platform Affinity Mapping
3. Deployment Model Design
4. Timeline + Risk Model
5. Output Strategy Toolkit
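As an illustration of the platform suitability ranking, the sketch below applies a weighted scoring index to a few platforms; the criteria, weights, and scores are hypothetical and would be replaced by the study's actual benchmarking outputs.

```python
# Illustrative only: criteria, weights, and scores are hypothetical examples.
CRITERIA_WEIGHTS = {"chi_today": 0.4, "chi_growth_outlook": 0.3,
                    "toolchain_maturity": 0.2, "cloud_availability": 0.1}

platform_scores = {  # each criterion scored 0-10 by the assessment team
    "trapped-ion":     {"chi_today": 3, "chi_growth_outlook": 7, "toolchain_maturity": 7, "cloud_availability": 8},
    "superconducting": {"chi_today": 4, "chi_growth_outlook": 6, "toolchain_maturity": 9, "cloud_availability": 9},
    "photonic":        {"chi_today": 2, "chi_growth_outlook": 8, "toolchain_maturity": 5, "cloud_availability": 6},
}

def suitability(scores: dict) -> float:
    """Weighted sum used as the platform suitability index."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

for name in sorted(platform_scores, key=lambda p: suitability(platform_scores[p]), reverse=True):
    print(f"{name:16s} suitability index = {suitability(platform_scores[name]):.1f}")
```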
Deliverables
• Platform ranking index and scoring methodology.
• AGI kernel-to-platform affinity matrix.
• Deployment architecture diagrams (hybrid, kernel-attached, full native).
• Timelines with capability checkpoints.
• Strategic risk memo for investors, CTOs, and research leads.
• Open-source toolkit for reproducing χ-based benchmarking.
Budget
$9,170 USD
Success Criterion
Phase 4 will be successful when benchmarking insights and quantum frontier projections are translated into a realistic, technically sound deployment strategy. This includes selecting the most viable quantum architecture, mapping AGI kernels to hardware capabilities, defining integration models, and delivering an actionable roadmap supported by risk and timeline analysis.
RFP compliance
This rating indicates compliance with the 'Must haves' but also adoption of the 'Nice to haves' and non-functional requirements defined in the RFP.
Solution details and team expertise
RFPs will offer varying degrees of freedom. This rating indicates the quality of the team's specific solution ideas, the provided details, and the reviewer's confidence in the team's ability to execute.
Value for money
Each RFP defines a maximum allowed budget, but teams can differentiate their proposal by offering a solution with a lower budget or a wider scope.
About Expert Reviews
Reviews and Ratings in Deep Funding are structured in 4 categories. This will ensure that the reviewer takes all these perspectives into account in their assessment and it will make it easier to compare different projects on their strengths and weaknesses.
Overall (Primary)
This is an average of the 4 perspectives. At the start of this new process, we are assigning an equal weight to all categories, but over time we might change this and make some categories more important than others in the overall score. (This may even be done retroactively.)
Feasibility (secondary)
This represents the user's assessment of whether the proposed project is theoretically possible and if it is deemed feasible. E.g. A proposal for nuclear fission might be theoretically possible, but it doesn’t look very feasible in the context of Deep Funding.
Viability (secondary)
This category is somewhat similar to Feasibility, but it interprets the feasibility against factors such as the size and experience of the team, the budget requested, and the estimated timelines. We could frame this as: “What is your level of confidence that this team will be able to complete this project and its milestones in a reasonable time, and successfully deploy it?”
Examples:
A proposal that promises the development of a personal assistant that outperforms existing solutions might be feasible, but if there is no AI expertise in the team the viability rating might be low.
A proposal that promises a new Carbon Emission Compensation scheme might be technically feasible, but the viability could be estimated low due to challenges around market penetration and widespread adoption.
Desirability (secondary)
Even if the project team succeeds in creating a product, there is the question of market fit. Is this a project that fulfills an actual need? Is there a lot of competition already? Are the USPs of the project sufficient to make a difference?
Example:
Creating a translation service from, say, Spanish to English might be possible, but it's questionable whether such a service would be able to get a significant share of the market.
Usefulness (secondary)
This is a crucial category that aligns with the main goal of the Deep Funding program. The question to be asked here is: “To what extent will this proposal help to grow the Decentralized AI Platform?”
For proposals that develop or utilize an AI service on the platform, the question could be “How many API calls do we expect it to generate?” (and how important or high-valued are these calls?).
For a marketing proposal, the question could be “How large and well-aligned is the target audience?” Another question is related to how the budget is spent. Are the funds mainly used for value creation for the platform or for other things?
Examples:
A metaverse project that spends 95% of its budget on the development of the game and only 5% on the development of an AI service for the platform might expect a low ‘usefulness’ rating here.
A marketing proposal that creates t-shirts for a local high school would get a lower ‘usefulness’ rating than a marketing proposal that has a viable plan for targeting highly esteemed universities in a scalable way.
An AI service that is fully dedicated to a single product does not take advantage of the purpose of the platform. If the same service would also be offered to and useful for other parties, this should increase the ‘usefulness’ rating.