In the Topology of Thought

mohamed.bani
Project Owner


Overview

As an extension to knowledge graphs (KGs), we propose a novel framework in which a language model recursively restructures a representation of Problem Space by evaluating reasoning-based proximity between problems. Using self-reflective queries, behavioural transfer tests, and iterative clustering, the system constructs a geometric topology in which vector proximity reflects epistemic similarity, not statistical co-occurrence. This enables the emergence of a structured reasoning space aligned with logical coherence. We argue that this architecture supports continual, insight-driven learning without external supervision, and may offer a path toward non-plateauing cognitive development in artificial intelligence.

RFP Guidelines

Advanced knowledge graph tooling for AGI systems

Complete & Awarded
  • Type: SingularityNET RFP
  • Total RFP Funding: $350,000 USD
  • Proposals: 39
  • Awarded Projects: 5
SingularityNET
Apr. 16, 2025

This RFP seeks the development of advanced tools and techniques for interfacing with, refining, and evaluating knowledge graphs that support reasoning in AGI systems. Projects may target any part of the graph lifecycle — from extraction to refinement to benchmarking — and should optionally support symbolic reasoning within the OpenCog Hyperon framework, including compatibility with the MeTTa language and MORK knowledge graph. Bids are expected to range from $10,000 - $200,000.

Proposal Description

Our Team

Mohamed Bani: theoretical framework and coding

Project details

While not a KG itself, this structure generalizes one, replacing static factual links with dynamic reasoning proximity. The core assumption of this proposal is that it is possible to construct a Problem Space in which problems are treated as mathematical objects and whose structure can be refined through behavioural feedback loops.

A problem, once presented as a prompt and tokenized by an LLM, becomes a mathematical object in a vector space. The LLM’s inference process can then be viewed as a function acting on it to produce an output.

Let P be a problem, presented as a sequence of tokens:

  P = [t₁, t₂, ..., tₙ]

Let E be the token embedding function:

  E(tᵢ) ∈ ℝᵈ

Then the embedded input is:

  X₀ = [E(t₁), E(t₂), ..., E(tₙ)] ∈ ℝⁿˣᵈ

Let M be the language model, which processes X₀ to generate an output:

  M(X₀) → output

Thus, the overall inference process can be seen as a transformation:

  P → X₀ → M(X₀) = Answer
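The transformation P → X₀ → M(X₀) can be illustrated with a toy lookup-table embedding (the vocabulary, dimension d = 8, and random table are illustrative stand-ins for a real tokenizer and model, not part of the proposal):

```python
import numpy as np

# Toy stand-ins: a tiny vocabulary and a random embedding table E.
rng = np.random.default_rng(0)
vocab = {"what": 0, "is": 1, "2": 2, "+": 3}
d = 8
E_table = rng.normal(size=(len(vocab), d))

def embed(token_ids, E):
    """X0 = [E(t1), ..., E(tn)] in R^(n x d): stack per-token embeddings."""
    return np.stack([E[t] for t in token_ids])

P = ["what", "is", "2", "+", "2"]            # the problem as a token sequence
X0 = embed([vocab[t] for t in P], E_table)   # shape (n, d) = (5, 8)
# A model M would then act on X0: P -> X0 -> M(X0) = Answer.
```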

We seek to define a distance metric d*(A, B) such that:

d*(A, B) is small when solving A helps solve B

d*(A, B) increases when reasoning strategies diverge

d*(A, B) is agnostic to surface-level embedding similarity

To build such a metric, we propose several estimation strategies

Let the model solve problem A. Then provide A’s solution as context for problem B, and observe whether it improves performance.

This provides a directional transfer score TS(A → B).

Use techniques such as explanation generation: prompt the model to explain its reasoning process for A and for B, then compare the outputs via semantic similarity, entailment, or structured parsing.

Ask the model directly:

Does solving A help solve B?

Are these two problems similar in reasoning?

Rate the conceptual similarity between A and B from 0 to 10.

Responses are aggregated and smoothed over multiple prompt variations

From these signals we define a pairwise pseudo-distance matrix D* over a set of problems
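As a concrete sketch, aggregating the signals into D* might look like the following. The `probe` callable, the [0, 1] transfer-gain and [0, 10] rating ranges, and the equal weighting are illustrative assumptions standing in for the behavioural and reflective queries described above:

```python
import itertools

def aggregate_distance(transfer_gain, similarity_rating, w=0.5):
    """Combine a transfer gain in [0, 1] and a self-reported similarity
    rating in [0, 10] into a pseudo-distance in [0, 1], where 0 means
    'epistemically very close'."""
    d_transfer = 1.0 - transfer_gain
    d_rating = 1.0 - similarity_rating / 10.0
    return w * d_transfer + (1 - w) * d_rating

def build_distance_matrix(problems, probe):
    """probe(a, b) -> (transfer_gain, similarity_rating); a stand-in for
    the transfer tests and reflective queries. Returns D* as a symmetric
    matrix with zero diagonal."""
    n = len(problems)
    D = [[0.0] * n for _ in range(n)]
    for i, j in itertools.combinations(range(n), 2):
        # Symmetrize the two directional probes A->B and B->A.
        d_ij = aggregate_distance(*probe(problems[i], problems[j]))
        d_ji = aggregate_distance(*probe(problems[j], problems[i]))
        D[i][j] = D[j][i] = 0.5 * (d_ij + d_ji)
    return D
```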

We then seek a transformation
 T: ℝⁿ → ℝᵖ where p ≤ n
 such that for any pair of problems A and B

‖T(x_A) − T(x_B)‖ ≈ d*(A, B)

where

x_A, x_B are the original LLM embeddings of problems A and B, and

d*(A, B) is a derived cognitive distance reflecting behavioral or reasoning-based proximity

This can be approached through

Distance-matching optimization, where T is trained to minimize
 L(T) = Σ₍A,B₎ (‖T(x_A) − T(x_B)‖ − d*(A, B))²

Kernelized regression or similarity-preserving mappings, which learn T to align pairwise relations with a target similarity or distance matrix
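A minimal sketch of the distance-matching option, assuming a linear T fitted by plain gradient descent on the stress loss L(T); the learning rate, step count, and linearity of T are illustrative choices, not prescribed by the proposal:

```python
import numpy as np

def learn_transformation(X, D_star, p=2, lr=0.01, steps=500, seed=0):
    """Fit a linear map T (p x n) so that ||T x_A - T x_B|| approximates
    d*(A, B), by gradient descent on
    L(T) = sum_{A,B} (||T x_A - T x_B|| - d*(A, B))^2."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    T = rng.normal(scale=0.1, size=(p, n))
    for _ in range(steps):
        grad = np.zeros_like(T)
        for a in range(m):
            for b in range(a + 1, m):
                u = X[a] - X[b]
                diff = T @ u
                dist = np.linalg.norm(diff) + 1e-9
                # d/dT of (dist - d*)^2, using d(dist)/dT = (T u) u^T / dist
                grad += 2.0 * (dist - D_star[a, b]) / dist * np.outer(diff, u)
        T -= lr * grad
    return T
```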

The refinement cycle consists of these steps

Problem Sampling: Select a representative set of problems P = {P₁, P₂, ..., Pₘ} from the current working space

Distance Estimation: For each pair (Pᵢ, Pⱼ), estimate a cognitive similarity or distance d*(Pᵢ, Pⱼ) using behavioural probes, self-reflection, or transfer metrics

Space Transformation: Apply a transformation Tₖ to the original embedding space such that proximity in the new space approximates d*

Clustering: Run a clustering algorithm to identify reasoning clusters in the transformed space

Feedback Integration: Feed the discovered cluster structures and the updated transformation back into the model's prompting or context integration

Repeat: Begin the next iteration, using the updated transformation Tₖ₊₁
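The cycle can be written as a thin driver that takes the individual steps as injected callables; all names and signatures here are illustrative, not a fixed interface:

```python
def refinement_cycle(problems, estimate_d, fit_transform, cluster, integrate,
                     iterations=3):
    """One pass = the steps above: sample, estimate d*, transform, cluster,
    integrate feedback, then repeat with the updated transformation."""
    context = {}
    T = None
    for _ in range(iterations):
        sample = problems                         # sampling (here: full set)
        D = estimate_d(sample, context)           # distance estimation
        T = fit_transform(sample, D)              # space transformation T_k
        clusters = cluster(sample, T)             # clustering
        context = integrate(clusters, T, context) # feedback integration
    return T, context                             # next iteration uses T_{k+1}
```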

While initially described in geometric terms, the algorithm can be more precisely framed as a form of reinforcement learning, but one in which the agent is the external system organizing Problem Space. The reward signal is a composite proxy for emergent cognition. The action space consists of transformations over problem representations; the environment is the LLM’s behavioural response

Minimal Viable Implementation

Test whether

A language model can meaningfully estimate reasoning proximity

A transformation of embedding space can improve reasoning-aligned clustering

Iteration over this loop yields increasing alignment between geometry and transferability

Step 1: Define a Compact Problem Set

Mathematical puzzles (e.g., logic grid problems, parity, set operations)

Commonsense reasoning tasks

Elementary programming tasks (recursion, iteration, sorting)

Riddle-like abstract problems requiring analogical reasoning

Step 2: Estimate Pairwise Reasoning Distance

Estimate reasoning proximity using

Prompt transfer tests (solution A as context for B)

Reflective queries (does the model say they are similar?)

Reasoning trace comparison (using explanation generation)

Aggregate these signals into a pseudo-distance matrix D*

Step 3: Learn or Apply a Space Transformation

Step 4: Cluster in the Transformed Space

Run the clustering algorithm in the transformed space to identify reasoning clusters
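Any standard clustering algorithm applies here; as a self-contained example, plain k-means over the transformed problem vectors (the choice of k-means, of k, and of list-of-floats vectors are assumptions for illustration):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on transformed problem vectors (sequences of floats).
    Returns (cluster assignment per point, final centers)."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for i, p in enumerate(points):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centers[c])))
        # Update step: each center moves to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign, centers
```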

Evaluate whether

Clusters are internally consistent (e.g., same reasoning style)

Transferability improves within clusters vs. across clusters

The model solves new problems better when prompted by solutions from the same cluster

Initial success can be measured by

Intra-cluster transfer rate: Are problems within the same cluster solved more easily when conditioned on each other?

Stability across iterations: Do clusters converge, diversify, or drift?

Distance-reasoning correlation: Does proximity in transformed space correlate with actual performance gain from transfer?
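The distance-reasoning correlation can be measured as a rank correlation between transformed-space distances and observed transfer gains. A minimal Spearman implementation (ties are not handled, a simplification of the full coefficient):

```python
def spearman(xs, ys):
    """Spearman rank correlation between two equal-length sequences,
    e.g. pairwise distances in transformed space vs. negated transfer
    gains. Assumes no ties among values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```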

More advanced metrics may include

Emergence of abstract cluster identities

Novel insight detection

Dialogue Between Creativity and Execution Supervised by Rigour

We propose that reasoning may emerge most robustly through a dialogue between creativity and execution, overseen by a third agent committed to rigour and precision

The Executor, a high-performance language model (LLM)

The Creative Agent, whose task is to place problems within a structured space, suggest analogies, and propose novel paths; rewarded for innovation and forgiven for speculative leaps

The Scrutiniser, an agent trained solely to evaluate, challenge, and reject flawed reasoning; rewarded for accuracy and internal consistency, and heavily penalized for oversight

Pondering, Priority, and the Reward Structure of the Creative Agent

To preserve the speculative power of the Creative Agent without overwhelming the system, we propose a refined internal architecture centered on queue control, asymmetric rewards, and internal low-cost simulation. This design formalizes the agent’s autonomy in deciding what to think about, when to try again, and how much to escalate

The Pondering Loop

The Creative Agent has access to a private, low-cost mechanism: a small internal LLM and a compressed Scrutiniser. This introduces the computational equivalent of inner speech, reflection, or gut-checking. It does not ensure quality, but it filters speculation in a cost-aware way.

Integrating Algorithmic Reasoning via Topological Routing

The Creative Agent acts as an intelligent dispatcher. Once a problem is mapped, its placement can inform the delegation of the problem to external, non-language-based computational tools.

The Scrutiniser/Ghost Arms Race

Ghost exists solely to deceive Scrutiniser and strives to produce highly plausible fallacies. It is allowed a limited number of stealth attacks per cycle

It is rewarded only when it produces a false but plausible output that deceives Scrutiniser

To increase success, Ghost maintains its own private lie-preparation loop, a small LLM and a compressed Scrutiniser used to refine its attacks before launch

To further strengthen the system, difficult or ambiguous problems flagged by Scrutiniser and all Ghost attacks are escalated to a distributed network of human checkers, forming the basis of an epistemic market

Design Postulate: Accelerate the Leading Edge, Rekindle the Rest

This architecture is not built for symmetry, but for compounding progress. When one agent, Scrutiniser, Ghost, or Creative, advances faster than the others, we intervene to keep it accelerating, not to hold it back. At the same time, we monitor for plateaus in the others, and apply targeted pressure, additional compute, modified incentives, or structural tuning, to prevent stagnation

Persistent Insights Through Generational Differentiation and Cluster-Based Generalization

The system does not store answers or solutions. Instead, it retains only structurally valuable insights, those that demonstrate two critical properties

Generational Differentiation Power (Vapnik-LUPI inspired)

An insight is considered meaningful when it enables success preferentially in later generations of the Creative agent, rather than older ones

Cluster-Based Generalization
Retained insights are associated with specific problem clusters in the structured Problem Space. Their reuse is restricted to topological neighbors

Insights that generalize well, i.e., successfully assist in solving multiple neighboring problems, are ranked higher in the problem-context memory

Those that fail to generalize over time are gradually deprioritized and eventually forgotten
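A minimal sketch of such a cluster-scoped insight memory with reuse-driven ranking and decay; the additive/multiplicative score update and the thresholds are illustrative assumptions, not specified by the proposal:

```python
class InsightMemory:
    """Cluster-scoped insight store: retrieval is restricted to a cluster
    and its topological neighbours; an insight's score rises on successful
    reuse and decays on failure, so non-generalizing insights are
    eventually forgotten."""

    def __init__(self, decay=0.9, forget_below=0.05):
        self.store = {}               # insight id -> {"cluster": c, "score": s}
        self.decay = decay
        self.forget_below = forget_below

    def add(self, insight_id, cluster):
        self.store[insight_id] = {"cluster": cluster, "score": 1.0}

    def retrieve(self, cluster, neighbours):
        """Insights usable here, highest-ranked first."""
        allowed = {cluster} | set(neighbours)
        return sorted((i for i, m in self.store.items()
                       if m["cluster"] in allowed),
                      key=lambda i: -self.store[i]["score"])

    def report(self, insight_id, helped):
        """Feedback after reuse: reward generalization, decay failure."""
        m = self.store.get(insight_id)
        if m is None:
            return
        m["score"] = m["score"] + 1.0 if helped else m["score"] * self.decay
        if m["score"] < self.forget_below:
            del self.store[insight_id]   # gradually deprioritized, then forgotten
```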

From Checkers to Contributors

Checkers, now better called Contributors, are also invited to make general heuristic contributions. For example:

Try a change of variables. Ask why it might simplify the problem. See if it leads somewhere

That’s not a solution; it’s a line of thought. If it keeps working, not just once but across different types of problems, it shouldn’t be buried inside a cluster. It should go into a shared toolbox and keep earning credits.

Epistemic Coin

Checkers are evaluated on their long-term accuracy across difficult tasks, and rewarded for epistemic contribution validated over time. A cryptocurrency could be built around this principle. Checkers accumulate credits for solving hard verification tasks, and these credits, subject to rigorous statistical evaluation, can eventually be converted into tokens. These tokens, in turn, could be used for mining rights or to participate in network governance. As Checkers become heuristic Contributors, AI learning shifts, from scraping the internet and averaging mediocrity, to distilling human intelligence. All self-sustained by the coin, and governed by an ecosystem of rigour. This paradigm holds until AI singularity, when the blockchain shifts to proof-of-stake and evolves independently

full paper

https://docs.google.com/document/d/19njG0Kml-BKa8a8vMhRGlXDTE5450pDU/edit

Background & Experience

Mohamed Bani

Education: École Polytechnique, Paris, major in Mathematics

July 1999-July 2000: Knowledge Extraction Engines, a Data-mining Start-up: Conception of robust algorithms for predictive models, using the General Theory of Statistical Learning [V. Vapnik]. Realization in C++, with Leon Bottou as scientific advisor.

September 2000-Now: Investment Banking (JPMorgan, Goldman Sachs, Credit Suisse, ICBCS)

Proposal Video

Not Available Yet


  • Total Milestones

    6

  • Total Budget

    $200,000 USD

  • Last Updated

    27 May 2025

Milestone 1 - Architecture

Description

Finalize Architecture and Core Tools

Deliverables

Architecture spec for the agents (Creative, Executor, Scrutiniser, and Ghost); base LLM chosen; task definitions for reasoning problems; basic infrastructure setup.

Budget

$25,000 USD

Success Criterion

Approved architecture document; LLM selected and tested; dataset outline created; core team onboarded.

Milestone 2 - Problem space

Description

Build Reflection Loop and Problem Space topology

Deliverables

Working self-reflection loop on ~10,000 problems; first proximity signals captured; early clustering trials.

Budget

$35,000 USD

Success Criterion

Loop shows reproducible proximity patterns; reflection leads to performance deltas on held-out problems.

Milestone 3 - Transformation & Clustering

Description

Implement Topology Transformation and Clustering

Deliverables

Functioning T transform and clustering pipeline; problem topology built and visualized; cluster-linked memory store.

Budget

$40,000 USD

Success Criterion

Clusters are internally consistent; problem placement correlates with reasoning similarity; memory retrieval functional.

Milestone 4 - Agents integration

Description

Integrate Agents in Recursive Loop

Deliverables

Fully connected agent loops (Creative → Executor → Scrutiniser, Ghost → Scrutiniser); performance tracking dashboard; early insight scoring.

Budget

$35,000 USD

Success Criterion

Loop executes end-to-end on 100+ problems; insights evaluated; Scrutinizer decisions reproducible.

Milestone 5 - LUPI heuristic

Description

Deploy Heuristic Contributors and Persistent Insight Layer

Deliverables

Contributor simulation layer; insight ranking across clusters; latent queue implementation for Creative agent.

Budget

$30,000 USD

Success Criterion

Insights reused with measurable performance improvement; differentiation heuristic tested and ranked.

Milestone 6 - Final evaluation

Description

Evaluate System and Publish Results

Deliverables

Performance comparison with baseline; polished whitepaper/report; system diagram for expansion.

Budget

$35,000 USD

Success Criterion

Demonstrated improvement on problem-solving tasks; public-facing artifact produced; peer/stakeholder review positive.


