Premier Next-Gen Hardware Paradigms for AGI

Gabriel Axel Montes
Project Owner


Overview

Led by Nature-published researchers Omowuyi Olajide and Gert Cauwenberghs, together with neuroscience/AI research-and-product leader and long-time SingularityNET affiliate Gabriel Axel Montes, this proposal undertakes a comprehensive investigation of eight state‑of‑the‑art hardware architectures and computing paradigms: race logic, analog in‑memory computing (memristors), analog photonics, asynchronous (spiking) architectures, spintronic computing, stochastic computing, approximate computing, and monolithic 3D integration. The goal is to identify and prototype the paradigm that exceeds GPU/TPU baselines on AGI‑specific tasks (recursive reasoning, probabilistic inference, attention mechanisms) in latency, energy efficiency, and scalability.

RFP Guidelines

Explore novel hardware architectures and computing paradigms for AGI

Complete & Awarded
  • Type: SingularityNET RFP
  • Total RFP Funding: $80,000 USD
  • Proposals: 9
  • Awarded Projects: 1
SingularityNET
Apr. 14, 2025

The purpose of this RFP is to identify, assess, and experiment with novel computing paradigms that could enhance AGI system performance and efficiency. By focusing on alternative architectures, this research aims to overcome computational bottlenecks in recursive reasoning, probabilistic inference, attention allocation, and large-scale knowledge representation. Bids are expected to range from $40,000 - $80,000.

Proposal Description

Company Name (if applicable)

Neural Axis LLC

Project details

Main Purpose

To develop a rigorous benchmarking framework, execute proof‑of‑concept implementations, and produce a detailed roadmap, including cost‑benefit and total cost of ownership (TCO) analyses, for integrating the highest‑performing hardware paradigm into AGI systems, in particular Hyperon’s AGI components (PLN, MOSES, ECAN) [1].

---

While GPUs/TPUs excel at dense linear algebra, they incur high energy costs and exhibit limited scalability on AGI’s heterogeneous workloads [8], [15]. We will methodically evaluate the following eight paradigms:

  1. Race Logic: Encoding data as signal delays to accelerate graph and dynamic‑programming tasks, with documented energy savings up to 200× on ASIC prototypes [1], [2] (a minimal software sketch follows this list).

  2. Analog In‑Memory Computing (IMC): Leveraging memristor crossbars for parallel vector‑matrix multiplication to alleviate the von Neumann bottleneck. A full‑stack compute‑in‑memory (CIM) system demonstrates robust training across multiple deep‑learning models [3], [7].

  3. Analog Photonic Computing: On‑chip photonic MVM accelerators delivering sub‑microsecond latency and multi‑GHz throughput. Recent 32‑channel SOI prototypes achieved 93.5 % MNIST accuracy at high speed [4], [9], [10].

  4. Asynchronous Architectures: Event‑driven spiking neural network (SNN) accelerators offering pico‑joule‑scale energy per synaptic event. ICONS 2025 showcased novel toolchains and spiking modules [6].

  5. Spintronic Computing: Antiferromagnetic oscillator devices enabling ternary encoding and sub‑picosecond dynamics for reservoir computing [11].

  6. Stochastic Computing: Utilizing probabilistic bit‑streams to compress arithmetic operations, trading controlled precision for significant power savings [12].

  7. Approximate Computing: Dynamic precision scaling frameworks that adapt numerical fidelity based on workload semantics, optimizing energy and latency trade‑offs [13].

  8. Monolithic 3D Integration: Vertically stacking memory and logic layers to reduce interconnect energy by >10× and enable ultra‑dense neural accelerators [14].
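
To make the race‑logic paradigm (item 1 above) concrete, the following is a minimal software analogue, assuming a delay‑encoded shortest‑path kernel; it is illustrative only and is not one of the Verilog prototypes proposed for Milestone 3. The function name and example graph are hypothetical.

    # Minimal software analogue of race logic (illustrative assumption, not the
    # proposal's HDL deliverable). Values are encoded as signal arrival times:
    # MIN is "first edge to arrive" and adding a constant is a delay element,
    # so a single propagating wavefront computes single-source shortest paths.
    import heapq

    def race_logic_shortest_paths(edges, source):
        """edges: {node: [(neighbor, delay), ...]}; returns arrival time per node."""
        arrival = {source: 0.0}                 # source fires at t = 0
        frontier = [(0.0, source)]              # event queue ordered by time
        while frontier:
            t, node = heapq.heappop(frontier)
            if t > arrival.get(node, float("inf")):
                continue                        # a faster edge already arrived (MIN)
            for neighbor, delay in edges.get(node, []):
                t_new = t + delay               # ADD-by-constant = delay line
                if t_new < arrival.get(neighbor, float("inf")):
                    arrival[neighbor] = t_new
                    heapq.heappush(frontier, (t_new, neighbor))
        return arrival

    # Example: arrival times equal shortest-path distances from node "a".
    graph = {"a": [("b", 2.0), ("c", 5.0)], "b": [("c", 1.0)]}
    print(race_logic_shortest_paths(graph, "a"))   # {'a': 0.0, 'b': 2.0, 'c': 3.0}

Because MIN is computed by whichever signal edge arrives first and adding a constant is simply a delay, the dynamic‑programming recurrence is evaluated by one propagating wavefront rather than by sequential arithmetic, which is the basis of the reported energy savings.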

We will construct a Dockerized benchmarking suite encompassing recursive query resolution, probabilistic inference kernels, and attention subroutines. Benchmarks will run on Nvidia H100 baselines and hardware/emulated prototypes, measuring latency, throughput per area, energy per inference, scalability exponent, and total cost of ownership (TCO). Proof‑of‑concepts include Verilog race logic and SNN modules on FPGA, plus SPICE‑level memristor crossbar and photonic circuit simulations.
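
The sketch below indicates, under stated assumptions, how the suite could wrap a single workload: the kernel, run count, and fixed power figure are illustrative placeholders, and real energy values would come from instrumented hardware rather than a constant.

    # Hedged sketch of the benchmark suite's measurement loop (kernel and the
    # 350 W power figure are placeholders, not the final Dockerized suite).
    import time
    import statistics
    import numpy as np

    def benchmark(kernel, runs=3, avg_power_watts=None):
        """Time a kernel over several runs; optionally convert an externally
        measured average power draw into energy per inference."""
        latencies = []
        for _ in range(runs):
            start = time.perf_counter()
            kernel()
            latencies.append(time.perf_counter() - start)
        result = {"median_latency_s": statistics.median(latencies)}
        if avg_power_watts is not None:
            result["energy_per_inference_J"] = avg_power_watts * result["median_latency_s"]
        return result

    # Toy attention-style kernel standing in for a Hyperon workload.
    q, k = np.random.rand(256, 64), np.random.rand(256, 64)
    print(benchmark(lambda: q @ k.T, avg_power_watts=350.0))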

Functional Requirements

  • Paradigm Whitepapers: Eight detailed technical reviews covering computational principles, PDK compatibility, fabrication constraints, and energy metrics.

  • Benchmark Suite: End‑to‑end scripts implementing the five defined metrics with reproducible results (<2 % variance over ≥3 runs).

  • Prototype Artifacts:

    • HDL Modules (SystemVerilog/Verilog/VHDL) for race logic and SNN, with ≥80 % unit‑test coverage.

    • Analog/Mixed‑Signal Models for RRAM IMC and photonic MVM, annotated with PDK parameters.

  • Analysis Pipelines: Circuit‑level EDA workflows for area/power estimation; Monte Carlo TCO simulations incorporating fabrication, assembly, and operational energy costs.
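
As a hedged illustration of the Monte Carlo TCO simulation named in the last bullet, the sketch below samples a 3‑year cost distribution; every cost range shown is an assumed placeholder, not a project estimate.

    # Hedged Monte Carlo TCO sketch: the real pipeline would draw these ranges
    # from fabrication, assembly, and measured operational-energy data.
    import random

    def simulate_tco(trials=10_000, years=3):
        """Sample a 3-year total cost of ownership from uncertain cost inputs."""
        totals = []
        for _ in range(trials):
            fabrication = random.uniform(15_000, 25_000)    # one-time, USD
            assembly = random.uniform(3_000, 6_000)         # one-time, USD
            energy_price = random.uniform(0.08, 0.20)       # USD per kWh
            annual_kwh = random.uniform(8_000, 12_000)      # per accelerator
            totals.append(fabrication + assembly + years * energy_price * annual_kwh)
        totals.sort()
        return {"median_usd": totals[trials // 2],
                "p90_usd": totals[int(trials * 0.9)]}

    print(simulate_tco())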

Non‑Functional Requirements

  • Scientific Rigor: All methodologies and results will cite peer-reviewed sources, with an emphasis on top journals (Nature Electronics, IEEE Trans.) and conferences (ISCA, BioCAS, ICONS 2025).

  • Open Science: Public GitHub repository under an open license containing all code, benchmarks, data, and documentation.

  • Manufacturability: Designs constrained to standard CMOS PDKs or SOI processes, with documented design rules.

  • Stakeholder Engagement: Regular reviews with ≥3 AGI hardware experts to iteratively refine approach.

Main Evaluation Criteria

  1. Alignment (25 %)

    • Deliverable Completeness: Research plan, literature review, experiments, and prototype must comprehensively address RFP objectives.

    • Innovation Depth: Inclusion of eight paradigms with 2025 breakthroughs in IMC [7], photonics [9], antiferromagnetic neuromorphic devices [11], and stochastic methods [12].

  2. Pre‑existing R&D (25 %)

    • Gert Cauwenberghs (35 years’ experience, multiple Nature publications) pioneered adaptive synaptic microcircuits in silicon, embedding plasticity for femtojoule‑scale synaptic operations in visual template recognition [15].

    • Omowuyi Olajide (10 years’ experience) co‑authored “Reconfigurable Event‑Driven SNN near HBM” at IEEE BioCAS 2023 [16] and first‑authored “Improved Throughput for Non‑Binary LDPC Decoder” in Computer Engineering & Applications [17]. 

    • Gabriel Axel Montes (15 years’ experience) integrates neuroscience with AI research and product development, leading Neural Axis in biologically inspired sensorimotor work, with publications spanning neuroscience, AI, virtual reality, mechanical engineering, and medical devices/biometrics.

    • Theo Valich (20 years’ experience), CEO of Ecoblox, has architected sustainable HPC/data‑center solutions and carbon‑aware routing protocols.

  3. Team Competence (25 %)

    • Publication Record: >150 peer‑reviewed papers across neuromorphic, VLSI, quantum, and FPGA domains.

    • Prototype Expertise: Demonstrated FPGA (race logic, HBM‑SNN) and analog (memristor, photonic) implementations with rigorous CI/test frameworks.

    • Interdisciplinary Synergy: Combined strengths in bioengineering, circuit design, HPC infrastructure, and neuro‑driven benchmarks.

  4. Cost Efficiency (25 %)

    • Budget

      • Literature Review, Simulations (e.g., open-source tools, AMD Vivado), Development & Prototyping, Dev kits (e.g., PLD, PLC, FPGA, SoC), Personnel/Expertise.

    • Total Cost of Ownership (TCO) Analysis: A 3‑year payback period validated against Ecoblox operations, with sensitivity to energy pricing and hardware scaling.
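
A minimal payback‑period sketch, assuming illustrative capital‑cost and annual‑savings figures (not Ecoblox‑validated numbers), shows the arithmetic behind the 3‑year target; the sensitivity analysis would sweep both inputs.

    # Hedged payback-period sketch for the TCO analysis; all figures are
    # illustrative placeholders.
    def payback_period_years(capital_cost_usd, annual_energy_savings_usd):
        """Years until cumulative energy savings repay the upfront hardware cost."""
        if annual_energy_savings_usd <= 0:
            return float("inf")
        return capital_cost_usd / annual_energy_savings_usd

    # Example: a $60k prototype saving $24k/year in energy pays back in 2.5 years,
    # inside the 3-year target.
    print(payback_period_years(60_000, 24_000))   # 2.5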

About the Team 

Omowuyi Olajide, PhD(c)

IC Design and Neuromorphic computing engineer
LinkedIn

Omowuyi Olajide is a distinguished integrated circuit designer and neuromorphic engineer whose decade‑long career bridges VLSI hardware, mixed‑signal circuits, quantum computing, and brain‑inspired computing to realize ultra‑efficient AI accelerators. As a faculty affiliate in the Dept. of Bioengineering and the Institute for Neural Computation at UC San Diego, he co‑authored with Prof. Cauwenberghs the 2025 Nature Communications paper "ON-OFF neuromorphic ISING machines using Fowler-Nordheim annealers" and the IEEE BioCAS 2023 paper “Reconfigurable Event‑Driven Spiking Neuromorphic Computing near High‑Bandwidth Memory”, demonstrating seamless integration of SNN cores with HBM for sub‑microsecond, low‑power inference. His pioneering work on non‑binary low‑density parity‑check decoders—published in the Computer Engineering & Applications Journal—advanced error‑correction throughput in VLSI by over 50 %, showcasing his mastery of mixed‑signal design and coding theory. Olajide’s expertise extends to quantum‑inspired compute models, race logic prototyping on FPGA fabrics, and end‑to‑end analog in‑memory computing simulations using emerging memristor technologies, all complemented by his adeptness in Python‑ and MATLAB‑based benchmarking frameworks. His unique blend of hands‑on hardware development, theoretical rigor, and cross‑disciplinary fluency in AI algorithms positions him to spearhead our proof‑of‑concept implementations and ensure that our AGI hardware paradigms not only meet but exceed the stringent performance, energy‑efficiency, and scalability targets of this proposal.

Prof. Gert Cauwenberghs – Advisor

Prof. of Bioengineering, UCSD. Previously Prof. of Electrical and Computer Engineering, Johns Hopkins University.
Co-director, UCSD Institute for Neural Computation.
Director, UCSD Integrated Systems Neuroengineering Lab.

Specialties: Micropower mixed-signal VLSI circuits and systems, bioinstrumentation, neuron-silicon interfaces, brain-computer interfaces, large-scale neural computation.
LinkedIn 

Prof. Gert Cauwenberghs is a world‐renowned pioneer in neuromorphic engineering and adaptive circuit design, whose 35‑year career at the intersection of neuroscience and microelectronics has yielded foundational advances in energy‑efficient silicon implementations of synaptic plasticity. As the founding director of the UCSD Institute for Neural Computation, he has led the development of massively parallel, mixed‑signal microcircuits that emulate the structure and function of biological neural networks—most recently demonstrating on‑chip synaptic arrays for template‑based visual pattern recognition operating at less than one femtojoule per synaptic event, surpassing the nominal energy efficiency of the human brain. His seminal work on embedding dynamic learning rules directly into CMOS devices has informed the design of race logic architectures, analog in‑memory computing cells, and asynchronous spiking neural processors, making him uniquely qualified to guide our exploration of next‑generation AGI hardware paradigms. With over 200 peer‑reviewed publications in top venues including Nature Electronics, IEEE Transactions on Neural Networks and Learning Systems, and ISCA, and a track record of successful technology transfer to industry, Prof. Cauwenberghs brings unparalleled expertise in co‑designing algorithms and hardware, evaluating fabrication constraints, and delivering manufacturable, high‑performance neuromorphic systems that align precisely with the objectives of this proposal.

Gabriel Axel Montes, PhD
CEO & Founder, Neural Axis
LinkedIn

Neuroscientist and AI entrepreneur; early SingularityNET team member; 0-to-1 builder for neurotech, VR, robotics, and blockchain startups. Co-author of The Consciousness Explosion (AI & consciousness). 10,000+ hours of contemplative practice inform the cognitive benchmarks. Track record of project execution and liaison with Hyperon.


Theo Valich – Advisor
CEO & Founder, Ecoblox (sustainable modular data centers)
LinkedIn

HPC and data-center veteran; architected several national supercomputers and carbon-aware routing protocols. Deep expertise in GPU/CPU roadmaps, market trends, and TCO modeling. Advises on deployment feasibility and industry alignment.

Open Source Licensing

Custom

The proposed license model is as follows:

  • For SingularityNET Foundation: MIT License with Commons Clause:

    • The work will be provided to SingularityNET Foundation under the terms of the MIT License, with the following modification:
      - The entity may not sell the work (or a modified or derivative aspect/version of it), or offer it for sale, unless the entity has a separate commercial license from the copyright holder.

  • For all other non-SingularityNET Foundation entities: Proprietary

This is not a strict criterion of the proposal, but is the proposed license model for the time being.

 

Links and references

Olajide, Cauwenberghs, et al. "ON-OFF neuromorphic ISING machines using Fowler-Nordheim annealers." Nature Communications, 2025.

Cauwenberghs et al. "A compute-in-memory chip based on resistive random-access memory." Nature, 2022.

Cauwenberghs et al. "NeuroBench: A framework for benchmarking neuromorphic computing algorithms and systems." Nature Communications, 2025.

Cauwenberghs et al. "Neuromorphic computing at scale." Nature, 2025.




  • Total Milestones

    4

  • Total Budget

    $80,000 USD

  • Last Updated

    22 May 2025

Milestone 1 - Paradigm Survey & Research Plan

Description

We launch the project by systematically examining eight non-von-Neumann paradigms: race logic, analog in-memory (memristor), analog photonic, asynchronous SNN, spintronic, stochastic, approximate, and 3-D monolithic integration. For each we collect data on computational model, fabrication constraints, latency, energy efficiency, and tooling maturity, drawn from 2023–2025 publications and reports. Findings are consolidated into a 15-page Research Plan and a concise Mapping Matrix that shows how each paradigm’s strengths map onto Hyperon’s PLN, MOSES, and ECAN modules and onto the five quantitative benchmark metrics. Technical risks and mitigation strategies are recorded for every paradigm, establishing a clear, evidence-based foundation for all subsequent benchmarking and prototyping.

Deliverables

- 15‑page Research Plan covering computational models, fabrication requirements, and AGI relevance for eight paradigms.
- Mapping Matrix aligning Hyperon modules (PLN, MOSES, ECAN) with paradigm features and candidate benchmark metrics.

Budget

$20,000 USD

Success Criterion

- All eight paradigms surveyed, with more than 3 publications each from recent years (target: 2023–2025).
- More than 3 technical risks identified per paradigm, each with mitigation strategies.
- Definition of ≥5 quantitative metrics (units and target thresholds).
- Peer approval by ≥2 AGI hardware experts.

Milestone 2 - Literature Review & Benchmark Framework

Description

Building on the survey, we produce a 30-page literature review that synthesizes ≥50 sources (≥10 from 2025), tracing performance, scalability, and manufacturability trends for every paradigm and identifying open research gaps. In parallel we implement a Dockerized Benchmark Suite that runs recursive reasoning, probabilistic inference, and attention kernels representative of Hyperon workloads. The suite reports the five predefined metrics (latency, throughput per area, energy per inference, scalability exponent, and projected TCO) with <2 % run-to-run variance, creating a reproducible yardstick that will be applied to all hardware prototypes.
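
A minimal sketch of the reproducibility gate, assuming the 2 % threshold from this milestone's success criterion and illustrative latency samples:

    # Hedged sketch of the run-to-run reproducibility check; inputs are
    # illustrative, the 2 % threshold comes from the milestone criterion.
    import statistics

    def reproducible(latencies_s, max_cv=0.02):
        """True if run-to-run variation (coefficient of variation) is under 2 %,
        computed over at least three runs."""
        if len(latencies_s) < 3:
            return False
        cv = statistics.stdev(latencies_s) / statistics.mean(latencies_s)
        return cv < max_cv

    print(reproducible([0.101, 0.100, 0.102]))   # True: ~1 % spread
    print(reproducible([0.10, 0.13, 0.09]))      # False: too much spread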

Deliverables

- 30‑page Literature Review synthesizing more than 50 sources, including over 10 publications from 2025.
- Dockerized Benchmark Suite for AGI workloads with automated dashboards.

Budget

$20,000 USD

Success Criterion

- Bibliography of ≥50 unique citations, with ≥10 from 2025.
- Benchmark reproducibility: <2 % variance over ≥3 runs.
- Implementation of all five predefined metrics.
- Validation with ≥2 domain experts; feedback integrated into v1.0.

Milestone 3 - Prototype Implementation & Interim Evaluation

Description

We deliver three proof-of-concept implementations: (i) a race-logic accelerator and (ii) an event-driven SNN core, both on FPGA, and (iii) SPICE-level models of memristor crossbars plus photonic MVM circuits. All HDL, simulation files, and test benches are maintained in a public GitHub repository with continuous integration and ≥80 % unit-test coverage. Each prototype is benchmarked against an Nvidia H100 baseline, producing interim latency, energy, and throughput-per-area results that are compiled in an interim report and reviewed by external experts.

Deliverables

- GitHub Repository containing HDL (race logic, SNN) and analog/mixed‑signal models (RRAM IMC, photonic MVM).
- Interim Report detailing latency, throughput per area, and energy per inference for each paradigm.

Budget

$20,000 USD

Success Criterion

- Functional prototypes for ≥3 paradigms.
- ≥10 % improvement in at least one metric vs. the Nvidia H100 baseline.
- ≥80 % unit‑test coverage with CI passing.
- Interim report endorsed by ≥2 external reviewers with ≤5 major revision requests.

Milestone 4 - Final Prototype & Deployment Roadmap

Description

The highest-performing paradigm is refined to achieve ≥50 % throughput-per-area gain or ≥2× energy efficiency on the full AGI task suite. Comparative tables, a three-year TCO and scalability model, and an integration roadmap for Hyperon components are assembled in the Final Report. A demonstration package shows the prototype in action, while a manuscript formatted for ISCA or Nature Electronics disseminates the project’s results and positions the technology for adoption.

Deliverables

- Final Report with comparative tables, TCO and scalability models, and deployment recommendations.
- Demonstration Package (video + live demo) showcasing throughput/area gain (target: ≥50 %) or energy efficiency (target: ≥2×) over the GPU baseline.

Budget

$20,000 USD

Success Criterion

- Prototype benchmarked against the target performance metrics on the selected AGI task, with targets met or exceeded where feasible.
- TCO model demonstrates a payback period (target: ≤3 years) with sensitivity analysis.
- Demo validated by ≥2 AGI researchers.
- Manuscript formatted and ready for submission to ISCA or Nature Electronics, identified as the optimal target venues.


