Neuromata

Prasad Kumkar
Project Owner

Expert Rating

n/a

Overview

Project Neuromata will explore brain-inspired hardware for AGI, targeting the critical bottlenecks of recursive reasoning, probabilistic inference, attention mechanisms, and large-scale knowledge retrieval. We will survey and benchmark four non-von Neumann paradigms: temporal (race logic), asynchronous neuromorphic, analog in-memory computing, and processing-in-memory. Using Python simulations, FPGA prototypes, and remote neuromorphic boards, we will evaluate each paradigm against defined metrics such as latency, energy per operation, accuracy, and scalability. Integrated with OpenCog Hyperon, Neuromata aims to demonstrate a proof-of-concept accelerator achieving 10×–50× speedups and 10×–100× energy savings on AGI workloads.

RFP Guidelines

Explore novel hardware architectures and computing paradigms for AGI

Complete & Awarded
  • Type: SingularityNET RFP
  • Total RFP Funding: $80,000 USD
  • Proposals: 9
  • Awarded Projects: 1
SingularityNET
Apr. 14, 2025

The purpose of this RFP is to identify, assess, and experiment with novel computing paradigms that could enhance AGI system performance and efficiency. By focusing on alternative architectures, this research aims to overcome computational bottlenecks in recursive reasoning, probabilistic inference, attention allocation, and large-scale knowledge representation. Bids are expected to range from $40,000 to $80,000.

Proposal Description

Our Team

Prasad - 6+ years in Web3 R&D and protocol design, with deep expertise in decentralized ledger consensus, incentive models, and smart-contract architectures. Bachelor’s in Computer Science & Engineering.

Siva - Academic background in cryptography and blockchain; Computer Engineering degree with a strong systems foundation. Advocates for statistically sound evaluation and has designed and executed large-scale benchmark suites for consensus protocols and cryptographic primitives.

Company Name (if applicable)

Chainscore Labs

Project Details

AGI aims to enable systems that can reason, learn and decide in open-ended domains. Today, AGI research is stalled by conventional hardware. GPUs and TPUs excel at dense tensor math but struggle with the irregular, sparse and dynamic workloads at the heart of AGI:

  • Recursive reasoning over large knowledge graphs, which requires pointer chasing, backtracking and dynamic updates.

  • Probabilistic inference that demands high-throughput sampling and branching logic.

  • Attention and pattern matching that call for real-time selection among high-dimensional vectors.

  • Adaptive learning that must update weights or rules on the fly without pausing entire models.

These tasks drive latency into seconds or minutes and energy into hundreds of watts, making real-world deployment impossible. Data movements between memory and CPU add further delay and power drain.

Project Neuromata breaks through these barriers by co-designing hardware and software around AGI’s core primitives. We combine four complementary paradigms—each tailored to specific AGI patterns—to build a brain-inspired accelerator for reasoning, inference and memory tasks:

  1. Asynchronous, event-driven neuromorphic cores to handle sparse, dynamic graph processing.

  2. Temporal (race logic) modules to solve winner-take-all and shortest-path problems in constant time.

  3. Analog in-memory computing arrays for high-throughput vector operations and associative lookups.

  4. Processing-in-memory techniques to eliminate data transfer overhead in large knowledge stores.

By integrating these paradigms into a unified architecture, we target order-of-magnitude gains in speed, latency and energy efficiency for AGI applications.


1. Asynchronous Neuromorphic Reasoning

Why it matters
AGI knowledge bases are sparse: only a few concepts activate per query. Conventional hardware must scan entire datasets, wasting power on inactive elements.

Our approach
We model each concept as a spiking neuron. A query triggers spikes in relevant nodes, and only those nodes consume energy. Synaptic weights sit on-chip in small analog or digital memory so activation sums occur locally.

Plan

  • Simulation: Build a Python spiking-network simulator for Atomspace graphs, driven by MeTTa queries.

  • Prototype: Map the model to an FPGA with asynchronous handshake pipelines or use Intel Loihi boards via research access.

  • Metrics: Compare spikes-per-query, energy per spike and query latency versus CPU/GPU baselines. Our target is a 10×–50× energy reduction for graph traversals and sub-second query times on graphs of 10⁵–10⁶ nodes.

This design also supports probabilistic inference through stochastic synapses, embedding randomness in hardware rather than in software.
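The event-driven principle above can be sketched in a few lines of Python. The toy graph, node names, and threshold below are purely illustrative, not our actual Atomspace model:

```python
import heapq

def spiking_query(graph, seeds, threshold=1.0, max_time=10):
    """Event-driven traversal: only nodes that receive spikes do any work.

    graph: dict node -> list of (neighbor, synaptic_weight)
    seeds: nodes activated directly by the query
    Returns the set of fired nodes and the spike count, a rough energy
    proxy: nodes outside the activated region cost nothing.
    """
    potential = {}                                 # membrane potential per node
    fired = set()
    events = [(0, s, threshold) for s in seeds]    # (time, node, input charge)
    heapq.heapify(events)
    spike_count = 0
    while events:
        t, node, charge = heapq.heappop(events)
        if t > max_time or node in fired:
            continue
        potential[node] = potential.get(node, 0.0) + charge
        if potential[node] >= threshold:           # node fires, forwards spikes
            fired.add(node)
            spike_count += 1
            for nbr, w in graph.get(node, []):
                heapq.heappush(events, (t + 1, nbr, w))
    return fired, spike_count

# Toy knowledge graph: the query touches only the connected region.
graph = {
    "cat": [("mammal", 1.0)],
    "mammal": [("animal", 1.0)],
    "animal": [],
    "rock": [("mineral", 1.0)],   # never activated by this query
    "mineral": [],
}
fired, spikes = spiking_query(graph, seeds=["cat"])
```

Note that the unrelated "rock" subgraph is never visited, which is the source of the energy savings the section describes.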


2. Temporal (Race Logic) Modules

Why it matters
AGI often requires selecting the best among many options (attention, planning). Race logic encodes values as signal delays. The earliest pulse wins, delivering min or argmin in one step.

Our approach

  • Implement Delay, Min, Max and Inhibit primitives in simulation or on FPGA delay lines.

  • Connect a temporal module to neuromorphic cores: when reasoning paths generate candidate solutions, the race logic picks the fastest (lowest-cost) path. For attention, we convert analog weights to delays and let signals compete.

Plan

  • Build an FPGA demo solving a toy planning problem using race logic.

  • Measure latency per selection versus iterative digital comparison.

Metrics

  • Latency: Aim for sub-microsecond winner selection among hundreds of inputs.

  • Energy: Target at least 10× lower energy per operation than digital comparator arrays.

This module accelerates attention mechanisms and graph searches, two major AGI bottlenecks.
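As a software sketch of the race principle, the discrete tick loop below stands in for FPGA delay lines; the candidate names and costs are invented:

```python
def race_argmin(costs):
    """Race-logic selection: each candidate emits a pulse after a delay
    proportional to its cost, and the first pulse to arrive wins. This
    yields argmin in a single 'race' instead of an iterative comparison
    loop. Ticks model delay-line time; the winner inhibits later pulses.
    """
    pending = dict(costs)   # candidate -> delay in integer ticks
    t = 0
    while True:
        arrivals = [c for c, d in pending.items() if d == t]
        if arrivals:
            return arrivals[0], t   # earliest pulse; ties break arbitrarily
        t += 1

# Three candidate reasoning paths, encoded as delays (lower cost = earlier pulse).
costs = {"path_a": 7, "path_b": 3, "path_c": 5}
winner, latency = race_argmin(costs)
```

The simulated "latency" equals the winning cost, mirroring how race logic trades value magnitude for time rather than for comparator circuitry.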


3. Analog In-Memory Computing

Why it matters
Linear algebra underpins both deep learning and attention. In analog crossbars, Kirchhoff’s laws sum currents in one physical step, bypassing digital clock cycles.

Our approach

  • Use memristor or phase-change memory crossbars to store weight matrices (e.g. transformer key/query).

  • Apply voltage vectors as inputs and read summed currents as dot products.

  • Compensate for device variability with calibration algorithms to maintain precision within 2%.

Plan

  • Simulate an analog crossbar in Python using device-level models.

  • If available, run a small analog compute demo on off-the-shelf memristor test hardware or mixed-signal FPGA modules.

  • Integrate with MeTTa’s attention routines and benchmark end-to-end performance.

Metrics

  • Throughput: Single-cycle analog MAC over entire vectors, targeting roughly 50× higher throughput on matrix-vector ops than clocked digital pipelines.

  • Energy: Achieve tens of TOPS/W for analog inference, well above digital accelerators.

Analog compute collapses the memory wall and supports on-chip learning by updating weights directly in memory elements.
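A rough Python model of the analog MAC step follows. Device variability is modeled here as simple multiplicative conductance noise; the noise model and sizes are illustrative, not a calibrated device model:

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_matvec(weights, x, variability=0.02):
    """One analog 'step': by Kirchhoff's current law, each column current
    is the sum of G*V contributions, so a full matrix-vector product
    requires no clocked multiply-accumulate loop. Multiplicative noise
    models device-to-device conductance variation.
    """
    conductances = weights * (1 + rng.normal(0, variability, weights.shape))
    return x @ conductances        # column currents = dot products

W = rng.standard_normal((64, 16))   # stored weight matrix (e.g. attention keys)
x = rng.standard_normal(64)         # input voltage vector
ideal = x @ W
noisy = crossbar_matvec(W, x)
rel_err = np.linalg.norm(noisy - ideal) / np.linalg.norm(ideal)
```

Running this shows the relative error tracking the programmed variability, which is the quantity the calibration algorithms in our approach are meant to keep within the 2% precision target.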


4. Processing-in-Memory (PIM) Techniques

Why it matters
AGI knowledge graphs contain millions of nodes. Transferring data back and forth between DRAM and CPU wastes energy and adds latency.

Our approach

  • Explore PIM commands like bulk bitwise operations (AND, OR, XNOR) for fast pattern filtering.

  • Prototype a content-addressable memory (CAM) unit for associative lookups, retrieving all rows that match a MeTTa pattern in constant time.

  • Use HBM-PIM or simulate DRAM-PIM architectures to evaluate bulk query performance on Atomspace-like datasets.

Plan

  • Model a DRAM-PIM with embedded logic in Python and run sample knowledge-graph queries.

  • If possible, implement on an FPGA-based DRAM soft-IP with logic extensions.

Metrics

  • Latency: 100× faster bulk attribute filtering versus CPU loops.

  • Energy: 5×–10× reduction in joules per query.

PIM makes large-scale graph operations feasible in real time and aligns with AGI’s need for rapid, large-data access.
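The CAM-style associative lookup can be sketched with vectorized bitwise operations standing in for in-memory logic; the row values and query pattern below are made up:

```python
import numpy as np

def cam_match(memory_rows, pattern, mask):
    """Content-addressable lookup: compare every row against a pattern in
    one vectorized step (standing in for in-DRAM bulk bitwise ops).
    mask selects which bits the pattern constrains; unmasked bits are
    wildcards. Returns indices of all matching rows at once, replacing a
    row-by-row CPU scan loop.
    """
    agree = ~(memory_rows ^ pattern)          # XNOR: 1 where bits agree
    return np.flatnonzero((agree & mask) == mask)  # all masked bits agree

# 8-bit rows; query: retrieve rows whose low nibble equals 0b0101.
rows = np.array([0b10100101, 0b11110000, 0b00000101, 0b01010101],
                dtype=np.uint8)
hits = cam_match(rows, np.uint8(0b0101), np.uint8(0x0F))
```

In a real PIM device the XNOR-and-mask step executes inside the memory array, so latency is independent of how many rows are stored.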


Software-Hardware Co-Design & MeTTa Integration

We will develop a Python-centric co-design framework that unifies algorithm modeling, hardware simulation and benchmarking:

  1. Algorithm Modeling: Extend spiking-network and race-logic simulations to accept MeTTa calls.

  2. Hardware Simulation: Build lightweight simulators for race logic, analog crossbars and PIM.

  3. Benchmark Harness: Feed identical AGI tasks (MeTTa reasoning scripts, attention loops, Bayesian samplers) to simulated hardware and CPU/GPU baselines, measuring latency, energy and accuracy.

This loop lets us refine circuit parameters, test impact on AGI tasks and then map the best designs to hardware prototypes. A MeTTa-to-Neuromata driver will offload supported primitives to hardware or simulation, falling back to CPU for others.
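A minimal sketch of the harness idea, assuming each backend reports its own energy proxy (the backend names and toy task here are hypothetical, not the real MeTTa workloads):

```python
import time

def benchmark(task, backends, repeats=5):
    """Run the same task on every backend and collect comparable metrics.
    Each backend is a callable returning (result, energy_proxy); energy is
    whatever counter the backend's simulator reports (e.g. spike or
    operation counts), since wall-clock power isn't measurable in software.
    """
    report = {}
    for name, run in backends.items():
        start = time.perf_counter()
        for _ in range(repeats):
            result, energy = run(task)
        latency = (time.perf_counter() - start) / repeats
        report[name] = {"latency_s": latency,
                        "energy_proxy": energy,
                        "result": result}
    return report

# Toy task: population count over a list, with an op-count energy proxy.
task = [3, 7, 255]
backends = {
    # CPU baseline: one op per bit examined.
    "cpu_baseline": lambda xs: (sum(bin(v).count("1") for v in xs), len(xs) * 8),
    # Simulated PIM: same answer, one bulk op per row.
    "pim_sim": lambda xs: (sum(bin(v).count("1") for v in xs), len(xs)),
}
report = benchmark(task, backends)
```

Comparing identical results across backends is what lets the harness report accuracy alongside latency and energy, as the three metrics in the loop above.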


Within SingularityNET’s ecosystem, Neuromata can become the hardware standard for AGI services, unlocking new levels of performance, scalability and deployment flexibility. By co-designing hardware to match AGI’s unique needs, Project Neuromata ensures that future AGI systems can think, learn and adapt as efficiently as biological brains.

Proposal Video

Not Available Yet

Check back later during the Feedback & Selection period for the RFP that this proposal is applied to.

  • Total Milestones

    4

  • Total Budget

    $40,000 USD

  • Last Updated

    27 May 2025

Milestone 1 - Paradigm Identification & Experiment Plan

Description

Conduct a comprehensive survey of alternative computing paradigms (temporal/race logic, asynchronous neuromorphic, analog in-memory, PIM) and map each to AGI workload requirements (recursive reasoning, inference, attention, graph traversal). Define target tasks and select the top 2–3 paradigms for deeper study.

Deliverables

A “Paradigm & Plan” report listing candidate paradigms and their AGI alignments, plus a detailed experiment plan: simulation tools, prototype platforms, target metrics, and evaluation procedures.

Budget

$10,000 USD

Success Criterion

Stakeholder approval of the paradigm report; clear mapping of each candidate to specific AGI tasks; a complete, actionable experiment plan ready for Phase 2.

Milestone 2 - Literature Review & Benchmark Definition

Description

Perform an exhaustive literature review on the selected paradigms, collating existing hardware and simulation performance data for AGI-relevant tasks. Define quantitative benchmarks: tasks, datasets, and metrics (latency, energy per operation, accuracy, scalability).

Deliverables

A “Review & Benchmarks” document summarizing state-of-the-art results, and a benchmark specification detailing each test case, measurement methodology, and success thresholds.

Budget

$10,000 USD

Success Criterion

Completion of a literature review covering all target paradigms; finalized benchmark suite approved and ready for prototype testing; baseline performance numbers identified.

Milestone 3 - Prototype Experiments & Feasibility Analysis

Description

Develop and run prototype experiments for the top paradigms: build Python simulations and FPGA proofs-of-concept, or use neuromorphic hardware access. Execute benchmark tests on AGI tasks and collect performance and accuracy data.

Deliverables

An “Experimental Results” report detailing prototype implementations, measured speedups, energy-efficiency gains, and accuracy trade-offs, plus a feasibility analysis with recommendations for refinement.

Budget

$10,000 USD

Success Criterion

Successful demonstration of at least two prototypes; empirical data showing ≥10× improvement in key metrics on target tasks; selection of the best approach(es) for Phase 4.

Milestone 4 - Final Prototype MeTTa Integration & Evaluation

Description

Refine the chosen prototype(s), integrate a MeTTa-compatible driver for offloading AGI primitives, and prepare a live or recorded demo. Conduct a full evaluation across all benchmarks and document results.

Deliverables

A working prototype or simulation that accelerates a MeTTa reasoning script; a comprehensive evaluation report comparing performance, energy, accuracy, and scalability against CPU/GPU baselines; plus a demo slide deck.

Budget

$10,000 USD

Success Criterion

Demonstration of an AGI task on Neuromata with ≥10× speed or energy gains and ≤2% accuracy loss; delivery of final evaluation report and demo materials; clear roadmap for scaling to a full-scale hardware design.

