Wisdom-Lattice: Synergy-Driven Mind Ontologies

Gabriel Axel Montes
Project Owner


Overview

We will mine, formalise and creatively blend the consciousness-state vocabularies of at least two sources (chosen from the Taoist and Hindu wisdom traditions and, potentially, from alien-like models of intelligence such as octopus cognition) inside the OpenCog Hyperon / MeTTa stack. Tradition-specific ontologies are built with uncertain Formal Concept Analysis; cross-source blends are ranked with proposed information-theoretic measures: synergy from integrated information decomposition (ΦID) and mutual information (MI). Human, LLM and logical checks ensure novelty and coherence. The resulting concepts, code and datasets integrate into Hyperon’s Distributed Atomspace, fulfilling every must-have, should-have and could-have item in the RFP.

RFP Guidelines

Experiment with concept blending in MeTTa

Complete & Awarded
  • Type: SingularityNET RFP
  • Total RFP Funding: $100,000 USD
  • Proposals: 11
  • Awarded Projects: 1
SingularityNET
Apr. 14, 2025

This RFP seeks proposals that experiment with concept blending techniques and formal concept analysis (including fuzzy and paraconsistent variations) using the MeTTa programming language within OpenCog Hyperon. The goal is to explore methods for generating new concepts from existing data and concepts, and evaluating these processes for creativity and efficiency. Bids are expected to range from $30,000 - $60,000.

Proposal Description

Our Team

With deep expertise spanning neuroscience, cognitive and information theory, and MeTTa development, our team unites proven builders—from SingularityNET's own and iCog Labs engineers to Imperial’s complexity scientists and award-winning cognition researchers—to deliver innovative, rigorously evaluated concept-blending workflows and robust, open-source Hyperon components/modules.

Company Name (if applicable)

Neural Axis LLC

Project details

Drawing on wisdom traditions and possibly alien-like (e.g. octopus) cognition gives the blending engine access to conceptual ecosystems that evolved specifically to explore non-ordinary modes of cognition—states like wu-wei (effortless action) and samādhi (absorbed clarity). Encoding and recombining these motifs lets us prototype cognitive structures that depart sharply from today’s utilitarian, goal-optimising AI heuristics. The resulting blends serve two strategic purposes: (1) they seed future AGI systems with alternative “mental gestures” that might support creativity, intuition or contemplative reasoning not native to mainstream machine learning; and (2) they furnish test-beds for probing how unusual value orientations and inference logics behave inside Hyperon, informing alignment and safety research. As Ben Goertzel’s Hyperseed sketch argues, a minimal core ontology gains power when grafted with culturally diverse conceptual lineages, because it exposes the engine to richer symmetry-breaking and higher-order synergies—exactly the soil in which novel, beneficial forms of general intelligence can sprout.

1 Objectives

  1. Ontology Construction: Extract object–attribute triples describing modes and states of mind from 2+ sources (chosen from among several wisdom traditions and, potentially, non-human intelligence such as octopus cognition); a minimal sketch of such a formal context follows this list.

  2. Uncertain FCA Lattices: Generate weighted concept lattices inside MeTTa, including fuzzy/paraconsistent FCA variants.

  3. Concept Blending & Synergy Ranking: Blend lattices; score candidates with information-theoretic measures (proposed candidates: integrated-information decomposition [ΦID], mutual information [MI] and entropy-surprise; more options below); retain top blends.

  4. Evaluation Framework: Combine PLN logic, LLM critique (ideally multi-agent) and subject-matter expert (SME) review.

  5. Integration & Release: Deliver open-source Hyperon components, docs, demo and datasets.
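
To make Objectives 1–2 concrete, here is a minimal, self-contained Python sketch (all data hypothetical) of a crisp formal context built from object–attribute triples and a naive enumeration of its formal concepts. The project's actual implementation would live in MeTTa, carry fuzzy/paraconsistent weights, and operate on far larger harvested contexts.

```python
# Minimal sketch (hypothetical data): a formal context for a handful of
# mind-state terms, and the two FCA derivation operators used to form
# (extent, intent) concept pairs. Real contexts would be harvested by the
# NLP pipeline and carry fuzzy membership weights rather than crisp sets.

from itertools import chain, combinations

# Objects (mind states) -> attributes (descriptive features); illustrative only.
context = {
    "wu-wei":   {"effortless", "non-striving", "spontaneous"},
    "samadhi":  {"absorbed", "unified", "clarity"},
    "flow":     {"effortless", "absorbed", "clarity"},
}

attributes = set().union(*context.values())

def intent(objects):
    """Attributes shared by every object in the set (derivation operator ')."""
    return set.intersection(*(context[o] for o in objects)) if objects else set(attributes)

def extent(attrs):
    """Objects possessing every attribute in the set (derivation operator ')."""
    return {o for o, feats in context.items() if attrs <= feats}

def formal_concepts():
    """Naive enumeration: for any object set A, (A'', A') is a formal concept."""
    concepts = set()
    for subset in chain.from_iterable(combinations(context, r) for r in range(len(context) + 1)):
        A = set(subset)
        B = intent(A)          # A'
        A_closed = extent(B)   # A''
        concepts.add((frozenset(A_closed), frozenset(B)))
    return concepts

for ext, itt in sorted(formal_concepts(), key=lambda c: len(c[0])):
    print(sorted(ext), "|", sorted(itt))
```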

2 RFP Requirement-Compliance Matrix

RFP Requirement → How We Satisfy

  • Must – implement info-theoretic blending OR uncertain FCA → Implement both: fuzzy/paraconsistent FCA and information-theoretic (ΦID/MI) blend ranking.

  • Must – create novel concepts & evaluate novelty/coherence → Blending engine + PLN/LLM/SME evaluation pipeline.

  • Must – qualitative human assessment → Subject-matter expert review.

  • Should – LLM-based evaluation → LLM critique included.

  • Should – parallel testing of the two approaches → Side-by-side comparison of ΦID vs. fuzzy/paraconsistent FCA pipelines.

  • Could – multi-agent LLMs for real-time creativity filters → Using Manus AI, a multi-agent system.

  • Could – further info-theoretic refinement → Entropy-surprise & MDL (optional).

  • Non-functional – MeTTa + DAS, parallelism, reliability, usability, maintainability → Code is native MeTTa, runs in the Distributed Atomspace, and uses Python FFIs for ΦID; Docker image + API; modular packages, unit tests, fault-tolerant job scheduler.

3 Methodology

3.1 Ontology Harvest: NLP pipelines extract candidate terms; manual curation ensures cultural fidelity and ethics.

3.2 Uncertain FCA: A custom MeTTa library computes lattices; weights capture the vagueness inherent in contemplative vocabulary.

3.3 Concept Blending: An algebraic blend operator generates cross-tradition candidates; logical constraints enforce semantic sanity (a minimal sketch appears at the end of this section).

3.4 Information-Theoretic Scoring (proposed candidate measures*):

  • Primary: ΦID synergy atom – higher-order integration.

  • Secondary: Mutual information, entropy-surprise.

The final metric choice will be data-driven once the initial corpora are mapped; we propose ΦID + MI as the leading pair to maximise reviewer confidence while retaining flexibility.

3.5 Evaluation Loop: PLN truth-maintenance + LLM ensemble + SME rating (see the aggregation sketch after the Deliverables list).

3.6 Integration: All artefacts packaged as Hyperon components with CLI & Jupyter demos.

*See below for a table of all currently considered information-theoretic candidates.
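
As a companion to 3.3–3.4, the following Python sketch (function names, data and the scoring heuristic are placeholders, not the proposed ΦID/MI ranker) illustrates the intended generate → filter → rank flow: an algebraic blend operator over attribute sets, a logical-sanity constraint, and a stand-in scoring hook where the information-theoretic measures would plug in via FFI.

```python
# Minimal sketch (all names hypothetical) of the blend step in 3.3 and a
# stand-in scoring hook for 3.4. A real implementation would live as MeTTa
# rules with a Python FFI helper computing PhiID/MI; here the score is a
# simple overlap heuristic purely to illustrate generate -> filter -> rank.

from itertools import product

def blend(concept_a, concept_b, forbidden_pairs=frozenset()):
    """Union the parents' attribute sets, rejecting blends that contain a
    logically incompatible attribute pair (semantic-sanity constraint)."""
    merged = concept_a | concept_b
    for x, y in forbidden_pairs:
        if x in merged and y in merged:
            return None
    return merged

def score(concept_a, concept_b, merged):
    """Placeholder for the information-theoretic ranker: reward parents that
    share some structure but are not near-duplicates."""
    shared = len(concept_a & concept_b)
    return shared / len(merged) if 0 < shared < min(len(concept_a), len(concept_b)) else 0.0

tao = {"wu-wei": {"effortless", "non-striving", "flow"}}
hindu = {"samadhi": {"absorbed", "unified", "clarity", "flow"}}
forbidden = frozenset({("striving", "non-striving")})

candidates = []
for (na, a), (nb, b) in product(tao.items(), hindu.items()):
    merged = blend(a, b, forbidden)
    if merged is not None:
        candidates.append((score(a, b, merged), f"{na}+{nb}", merged))

for s, name, attrs in sorted(candidates, key=lambda c: c[0], reverse=True):
    print(f"{s:.2f}  {name}  {sorted(attrs)}")
```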

4 Non-Functional Design

  • Architecture: Native MeTTa atoms stored in DAS; Python ΦID helper via FFI.

  • Parallelism: Batch jobs sharded across cores; asynchronous LLM calls.

  • Reliability: Checkpointed blend batches; Atomspace autosave.

  • Usability: create_blends --traditions tao hindu --k 100 style CLI (see the sketch after this list); REST API; extensive docs.

  • Maintainability: Modular repo, MIT licence, CI tests, code comments.
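
As a small illustration of the Usability point above, here is a hedged sketch of what the create_blends CLI surface could look like, assuming Python's argparse; any flag beyond those shown in the bullet (e.g. --measure and its values) is a placeholder not specified in the proposal.

```python
# Minimal sketch of the proposed CLI surface (flag names beyond the Usability
# bullet are placeholders, mirroring the `create_blends --traditions tao hindu
# --k 100` style above).
import argparse

def main():
    parser = argparse.ArgumentParser(
        prog="create_blends",
        description="Generate and rank cross-tradition concept blends.")
    parser.add_argument("--traditions", nargs="+", required=True,
                        help="source ontologies to blend, e.g. tao hindu")
    parser.add_argument("--k", type=int, default=100,
                        help="number of top-ranked blends to keep")
    parser.add_argument("--measure", choices=["phiid", "mi", "entropy"],
                        default="phiid", help="primary ranking measure")
    args = parser.parse_args()
    print(f"Blending {args.traditions}, keeping top {args.k} by {args.measure}")

if __name__ == "__main__":
    main()
```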

5 Deliverables

  1. 2+ annotated MeTTa ontologies & lattice graphs.

  2. Fuzzy FCA library + unit tests.

  3. Blend engine with ΦID/MI ranking.

  4. Evaluation reports & dashboards comparing methods.

  5. Video demo, white-paper, Docker image.
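
To illustrate how the evaluation loop in 3.5 could combine its three checks, here is a hedged Python sketch with stubbed evaluators; in the real pipeline the logic check would be PLN truth-maintenance inside Hyperon, the LLM critique an asynchronous ensemble, and the thresholds tuned empirically. All names and numbers here are illustrative assumptions.

```python
# Minimal sketch (all functions and thresholds hypothetical) of the evaluation
# loop: each candidate blend gets a logical-consistency check, an LLM critique
# score, and an SME rating, combined into a single accept/reject decision.

from dataclasses import dataclass

@dataclass
class Verdict:
    blend_id: str
    logical_ok: bool
    llm_score: float   # 0..1, averaged over the LLM ensemble
    sme_score: float   # 0..1, averaged over expert raters

    @property
    def accepted(self) -> bool:
        # A blend passes only if logically consistent and both evaluators
        # rate it above threshold (placeholder thresholds).
        return self.logical_ok and self.llm_score >= 0.6 and self.sme_score >= 0.5

def evaluate(blends, check_logic, critique_llm, rate_sme):
    """Run the three evaluators over candidate blends and keep the survivors."""
    verdicts = [
        Verdict(b, check_logic(b), critique_llm(b), rate_sme(b)) for b in blends
    ]
    return [v for v in verdicts if v.accepted]

# Toy run with stubbed evaluators.
survivors = evaluate(
    ["wu-wei+samadhi", "wu-wei+octopus-arm-autonomy"],
    check_logic=lambda b: True,
    critique_llm=lambda b: 0.7,
    rate_sme=lambda b: 0.55,
)
print([v.blend_id for v in survivors])
```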

Candidate information-theoretic measures we could execute (a minimal Python sketch of three of them follows the table):

  • Shannon entropy H
    Intuition: Baseline “surprise” of a concept’s attribute distribution; useful to see whether a blend reduces or amplifies uncertainty.
    Implementation sketch (MeTTa / Python helper): Compute attribute-frequency vectors; H = −Σ p log p.

  • Mutual Information I(X;Y)
    Intuition: Amount of information two parent concepts share; high enough ⇒ they relate, but not so high that they are duplicates.
    Implementation sketch: Build term–feature matrices; apply the standard MI formula; threshold for admissible pairs.

  • Interaction Information (multivariate MI)
    Intuition: Captures higher-order synergy among ≥3 features when we merge whole schemas, not just attributes.
    Implementation sketch: Use the Möbius-inversion formula over entropies of all subsets (tractable on sparse feature sets).

  • Information Gain / Surprise of new links
    Intuition: How much the blend changes the predicted truth-value distribution (the idea behind OpenCog’s Surprise, which boosts ShortTermImportance).
    Implementation sketch: ΔH before vs. after adding candidate links; ties directly into Hyperon’s STI update routine.

  • Minimum Description Length (MDL)
    Intuition: Prefer blends that compress the joint description better than keeping the inputs separate (classic Occam’s Razor).
    Implementation sketch: Encode graphs with a naive edge-list compressor; score = bits_saved.

  • Normalized Compression Distance (NCD)
    Intuition: Approximate Kolmogorov-complexity distance between concept graphs; helps pick diverse parents.
    Implementation sketch: Use off-the-shelf compressors on serialised Atomspace sub-graphs.

  • Pointwise Mutual Information (PMI) for link pairs
    Intuition: Quick filter: choose attribute pairs whose PMI > τ but < τ_max.
    Implementation sketch: Pre-compute a PMI table; a MeTTa rule rejects the extremes.
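
As a worked illustration of three of the measures above, the following Python sketch (toy counts, hypothetical helper names) computes the Shannon entropy of an attribute-frequency vector, mutual information from a joint count table, and PMI for a single attribute pair; the real helpers would read these distributions out of the Atomspace.

```python
# Minimal sketch (hypothetical counts) of three of the table's measures:
# Shannon entropy of one concept's attribute distribution, mutual information
# between two attribute indicators, and PMI for an attribute pair.
import math
from collections import Counter

def shannon_entropy(counts):
    """H = -sum p log2 p over an attribute-frequency vector."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def mutual_information(joint):
    """I(X;Y) from a joint count table {(x, y): count}."""
    total = sum(joint.values())
    px, py = Counter(), Counter()
    for (x, y), c in joint.items():
        px[x] += c
        py[y] += c
    mi = 0.0
    for (x, y), c in joint.items():
        if c:
            pxy = c / total
            mi += pxy * math.log2(pxy / ((px[x] / total) * (py[y] / total)))
    return mi

def pmi(joint, x, y):
    """Pointwise MI for one attribute pair; threshold tau < PMI < tau_max upstream."""
    total = sum(joint.values())
    px = sum(c for (a, _), c in joint.items() if a == x) / total
    py = sum(c for (_, b), c in joint.items() if b == y) / total
    pxy = joint.get((x, y), 0) / total
    return math.log2(pxy / (px * py)) if pxy else float("-inf")

# Toy co-occurrence counts of two binary attribute indicators (illustrative).
joint = {(0, 0): 40, (0, 1): 10, (1, 0): 10, (1, 1): 40}
print(shannon_entropy(Counter({"effortless": 6, "absorbed": 3, "clarity": 1})))
print(mutual_information(joint))
print(pmi(joint, 1, 1))
```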

About the Team

Gabriel Axel Montes, PhD
Neuroscientist and AI entrepreneur; early SingularityNET team member; 0-to-1 builder for neurotech, VR, robotics and blockchain startups. Co-author of The Consciousness Explosion (AI & consciousness). 10,000+ hours of contemplative practice inform his cognitive frameworks. Track record of project execution and liaison with the Hyperon team.
LinkedIn

Hope Alemayehu, BSc(c), Computer Science
Hope Alemayehu is a final-year Computer Science undergraduate specializing in quantum software with Qiskit and Cirq. She is a member of iCog Labs and will drive the project work in computer science, FCA, information theory, and MeTTa programming.
LinkedIn

iCog Labs (Addis Ababa)
iCog Labs will lend its R&D, software-development, and MeTTa coding experience within SingularityNET to flexibly fill any gaps or provide supplementary work needed in the project.
Website

Pedro Mediano, PhD
Pedro Mediano is a complexity and cognitive scientist at Imperial College London and first author of a publication (cited in the References section) providing key information-theoretic methods to contribute to the project.
Gitlab, Google Scholar, LinkedIn

Matt Ikle, PhD | Chief Science Officer, SingularityNET
Mathematical-modelling expertise; prior use of integrated information theory (IIT) in SingularityNET projects, e.g. for measuring the consciousness of a system (see references), and existing IIT code.
LinkedIn

Sidney Carls-Diamante, PhD
With her prize-winning work on decentralized octopus cognition (“Octopus and the Unity of Consciousness”, Werner Callebaut Prize 2019), Sidney will contribute her expertise in octopus cognition as inspiration for non-ordinary models of mind in the development of the framework.
Google Scholar

Open Source Licensing

Custom

The proposed license model is as follows:

  • For SingularityNET Foundation: MIT License with Commons Clause:

    • The work will be provided to SingularityNET Foundation under the terms of the MIT License, with the following modification:
      - The entity may not sell the work (or a modified aspect/version of it), or offer it for sale, unless the entity has a separate commercial license from the copyright holder.

  • For all other non-SingularityNET entities: Proprietary

This is not a strict criterion of the proposal, but is the proposed license model for the time being.

 

Proposal Video

Not Available Yet


  • Total Milestones: 4

  • Total Budget: $50,000 USD

  • Last Updated: 23 May 2025

Milestone 1 - Plan & Conceptual Framework

Description

- Submit a comprehensive research plan and a high-level framework design that together set the foundation for the entire project. The research plan details objectives, scope boundaries, ethics pathway, risk register, and agile task breakdown.
- The design specifies how wisdom-tradition concepts will be mapped into formal contexts, how uncertain FCA and information-theoretic measures (proposed candidates: ΦID, mutual information, entropy-surprise) hypothetically interrelate, and how evaluation loops will combine PLN, LLM critique, and subject-matter expertise.
- Optional: A system-level architecture diagram shows data flow from raw text to blended concepts within Hyperon’s Distributed Atomspace. No coding, data harvesting, or lattice generation occurs in this milestone; the output is entirely strategic and design-oriented.

Deliverables

- Research-plan text document
- Conceptual-framework document (definitions, diagrams, metric-selection rationale)
- Timeline/Gantt overview (PNG/SVG)
- Optional: high-level framework design diagram

Budget

$12,500 USD

Success Criterion

MeTTa developer approves the framework/architecture; project plan is in place; the high-level framework design reflects the project's overall end-to-end (A-to-B) flow.

Milestone 2 - Data Harvest & Ontology Skeletons

Description

Assemble a significant proportion of the corpora for the selected sources/wisdom traditions. Extract the initial set of concept–attribute relations and load them into prototype MeTTa context files, creating ontology skeletons for each tradition. Develop and unit-test the uncertain-FCA library; generate first-pass lattices and visualisations capturing the core conceptual clusters.

Deliverables

- Curated corpora of wisdom-tradition sources
- Draft MeTTa context files per tradition
- Uncertain-FCA MeTTa library + unit tests
- Initial lattice graphs (DOT/PNG)

Budget

$12,500 USD

Success Criterion

Corpora and context files compiled; lattices generate without errors; the initial lattices reflect the major conceptual clusters of each tradition.

Milestone 3 - Blend Engine & Initial Benchmarks

Description

Substantial progress on building the blend engine. Run an initial batch of concept blends to validate the end-to-end flow and collect preliminary performance and quality metrics. Refine with feedback from LLMs.

Deliverables

- MeTTa blend rules + (optional) Python FFI helpers
- Sample LLM-ranked blends list with raw scores
- Benchmark summary report (proposed/suggested: run-time, memory, and novelty indicators)

Budget

$12,500 USD

Success Criterion

Preliminary pipeline/components run well within reasonable computing limits (ideally a desktop/VM); generate a set of promising blended concepts judged meaningful by subject-matter experts and LLMs; the preliminary benchmark report shows promise for establishing a performance envelope for final scaling.

Milestone 4 - Evaluation Integration & Release

Description

Iterate blend thresholds using PLN logic (ideally), LLM critique (ideally multi-agent), and subject-matter expert review. Compare the information-theoretic-guided vs. uncertain-FCA-only pipelines with final benchmarks. Package all code, data, and docs; build the Docker image; record a screencast demo; draft the white-paper and final slide deck; push to public GitHub and submit the completion report. Specifics are open to suggestions from the Hyperon/MeTTa teams.

Deliverables

- Evaluation dataset with expert and LLM labels
- Optional: information-theoretic refinement component
- Comparative dashboard notebook
- Git repo (MIT with Commons Clause) + Docker image + docs
- Video demo & white-paper

Budget

$12,500 USD

Success Criterion

Comparative measurements of “novel + coherent” concepts from the information-theoretic pipeline(s) (proposed candidates: ΦID and/or MI) vs. the uncertain-FCA pipeline; Hyperon core team merges the pull request without major rework; public demo released under an open licence.


