Symbolic KG Tools for AGI Using FlowiseAI


Overview

This project delivers an open-source toolkit for building, refining, and benchmarking symbolic knowledge graphs (KGs) for AGI systems. Using FlowiseAI, LangChain, and MeTTa, it enables visual graph construction, semantic compression, contradiction detection, and export to the MORK backend. Led solely by Dr. Freeman Jackson, the project targets AGI reasoning tasks such as multi-hop QA and analogical inference. All tools, benchmarks, and datasets will be publicly released to advance neuro-symbolic AI and integration with the Hyperon framework.

RFP Guidelines

Advanced knowledge graph tooling for AGI systems

Internal Proposal Review
  • Type: SingularityNET RFP
  • Total RFP Funding: $350,000 USD
  • Proposals: 40
  • Awarded Projects: n/a
SingularityNET
Apr. 16, 2025

This RFP seeks the development of advanced tools and techniques for interfacing with, refining, and evaluating knowledge graphs that support reasoning in AGI systems. Projects may target any part of the graph lifecycle — from extraction to refinement to benchmarking — and should optionally support symbolic reasoning within the OpenCog Hyperon framework, including compatibility with the MeTTa language and MORK knowledge graph. Bids are expected to range from $10,000 to $200,000.



  • Total Milestones: 5
  • Total Budget: $100,000 USD
  • Last Updated: 24 May 2025

Milestone 1 - System Architecture & Technical Design

Description

This milestone focuses on creating the full architectural blueprint of the symbolic KG tooling platform. It includes defining the modular system structure, data pipelines, API contracts, and symbolic transformation flows. Detailed integration plans will outline how FlowiseAI pipelines ingest data, how LangChain agents interact with the pipeline, how knowledge is serialized into MeTTa expressions, and how symbolic queries are routed through MORK.
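To make the planned hand-off points concrete, the sketch below expresses the three main contracts (Flowise ingest, MeTTa serialization, MORK query) as Python protocols. All class and method names here are illustrative placeholders for what this milestone will actually specify, not a committed interface.

```python
# Illustrative stage contracts for the proposed pipeline; every name below is hypothetical.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class KGTriple:
    """One subject-predicate-object edge extracted from source data."""
    subject: str
    predicate: str
    object: str


class Ingestor(Protocol):
    """Flowise-side ingest step: raw input (JSON, CSV, XML, LLM output) -> triples."""
    def ingest(self, raw: str) -> list[KGTriple]: ...


class MettaSerializer(Protocol):
    """Serialization step: triples -> MeTTa S-expressions."""
    def to_metta(self, triples: list[KGTriple]) -> str: ...


class MorkClient(Protocol):
    """Query step: submit a MeTTa program plus a goal to the MORK backend."""
    def query(self, metta_program: str, goal: str) -> list[str]: ...
```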

Deliverables

  • High-level and low-level architecture diagrams
  • Technical spec for Flowise → LangChain → MeTTa → MORK integration
  • List and definitions of AGI benchmark tasks (multi-hop QA, contradiction detection, analogical retrieval); see the schema sketch after this list
  • Reference datasets (e.g. scientific text, logic puzzles, Wikidata subsets)
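One way the benchmark task definitions listed above could be represented is sketched below; the field names and the example record are hypothetical and only illustrate the kind of entry the later scoring modules would consume.

```python
# Hypothetical benchmark task schema; field names are illustrative, not final.
from dataclasses import dataclass


@dataclass
class BenchmarkTask:
    task_id: str
    category: str              # "multi_hop_qa" | "contradiction_detection" | "analogical_retrieval"
    question: str              # natural-language prompt posed to the reasoning chain
    context_facts: list[str]   # MeTTa expressions the task is evaluated against
    expected_answer: str       # gold answer used by the scoring module


example = BenchmarkTask(
    task_id="mhqa-0001",
    category="multi_hop_qa",
    question="Which gas is produced when the element discovered by Marie Curie decays?",
    context_facts=["(discovered MarieCurie Radium)", "(decays-to Radium Radon)"],
    expected_answer="Radon",
)
```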

Budget

$15,000 USD

Success Criterion

  • Architecture published on GitHub with clear module boundaries
  • Benchmark framework categories established and documented
  • Dataset sources aligned with AGI reasoning tasks

Milestone 2 - GitHub Repository Setup & Development Environment

Description

This milestone sets up the foundation for development and collaboration. A public GitHub repository will be launched with a fully structured project layout, an Apache 2.0 license, and an issue tracker. A containerized dev environment using Docker will be configured to enable reproducible local builds. The CLI interface will allow developers to interact with the system via symbolic graph commands. Community contribution templates and onboarding guides will be published.
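To give a feel for the planned developer surface, here is a minimal command-line sketch using Python's argparse; the `kgtool` program name, subcommands, and flags are assumptions made for illustration rather than the committed CLI.

```python
# Minimal CLI sketch; command names, flags, and behavior are hypothetical.
import argparse
import json


def main() -> None:
    parser = argparse.ArgumentParser(prog="kgtool", description="Symbolic KG toolkit CLI (sketch)")
    sub = parser.add_subparsers(dest="command", required=True)

    export = sub.add_parser("export", help="Convert a JSON graph file to MeTTa expressions")
    export.add_argument("input", help="Path to a JSON file of subject/predicate/object records")

    query = sub.add_parser("query", help="Run a symbolic query against an exported graph")
    query.add_argument("goal", help="Query goal, e.g. a MeTTa pattern")

    args = parser.parse_args()
    if args.command == "export":
        with open(args.input) as fh:
            records = json.load(fh)
        for r in records:
            print(f'({r["predicate"]} {r["subject"]} {r["object"]})')
    elif args.command == "query":
        # Placeholder: a real build would route the goal to the MORK backend.
        print(f"[stub] would send goal to MORK: {args.goal}")


if __name__ == "__main__":
    main()
```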

Deliverables

  • GitHub repo with README, LICENSE, and issue templates
  • Docker setup with build scripts and service orchestration
  • CLI tools for local dev testing and graph export
  • Contributor guide and development standards

Budget

$20,000 USD

Success Criterion

  • System builds locally via Docker without error
  • CLI functional for basic pipeline tests
  • An external user successfully clones and runs the system
  • GitHub repository adheres to open-source best practices

Milestone 3 - FlowiseAI Pipeline & MeTTa Exporter Prototype

Description

This milestone delivers a working prototype of the visual Flowise pipeline that transforms structured/semi-structured input (e.g. JSON, CSV, XML, LLM output) into MeTTa expressions. The Flowise nodes will include ingest, transform, and output blocks. The MeTTa exporter module will convert processed graph data into valid S-expressions compatible with symbolic reasoning engines.
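A minimal sketch of the exporter's core transformation is shown below. It assumes edges arrive as subject/predicate/object dictionaries and are written out one S-expression per edge; the `(predicate subject object)` ordering and the whitespace policy are assumptions, not the final encoding.

```python
# Sketch of a triple-to-MeTTa exporter; the output convention is an assumption.
def sanitize(value: str) -> str:
    """Collapse whitespace so each value becomes a single MeTTa symbol (illustrative policy)."""
    return "_".join(str(value).split())


def triples_to_metta(triples: list[dict]) -> str:
    """Render [{'subject': ..., 'predicate': ..., 'object': ...}, ...] as S-expressions."""
    lines = []
    for t in triples:
        lines.append(f"({sanitize(t['predicate'])} {sanitize(t['subject'])} {sanitize(t['object'])})")
    return "\n".join(lines)


if __name__ == "__main__":
    sample = [
        {"subject": "Water", "predicate": "has-formula", "object": "H2O"},
        {"subject": "H2O", "predicate": "contains", "object": "Hydrogen"},
    ]
    print(triples_to_metta(sample))
    # (has-formula Water H2O)
    # (contains H2O Hydrogen)
```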

Deliverables

  • Flowise visual pipeline with ingest + MeTTa export nodes
  • Node.js or Python-based MeTTa exporter
  • Test cases using real and synthetic data
  • Unit and integration tests for graph conversion

Budget

$20,000 USD

Success Criterion

  • JSON and LLM data correctly transformed into MeTTa format
  • MeTTa output validated via interpreter or parsing tool (a toy stand-in for this check is sketched below)
  • Graph structure visualized and compared to expected symbolic logic
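In practice the validation step would run the exported file through a MeTTa interpreter or parser; the toy structural check below merely stands in for that, verifying that each exported line is a balanced, non-empty S-expression.

```python
# Toy structural check for exported MeTTa text; a real pipeline would call an actual
# MeTTa interpreter or parser instead of this parenthesis-balance heuristic.
def looks_like_valid_metta(text: str) -> bool:
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        depth = 0
        for ch in line:
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth < 0:           # closing paren with nothing open
                    return False
        if depth != 0 or line == "()":  # unbalanced or empty expression
            return False
    return True


assert looks_like_valid_metta("(capital-of Paris France)")
assert not looks_like_valid_metta("(capital-of Paris France")
```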

Milestone 4 - Toolchain Integration & Workflow Demonstration

Description

This milestone combines all prior components into a unified symbolic reasoning pipeline. Flowise pipelines will pass structured data to LangChain agents, which manage context and tool execution. The resulting knowledge will be serialized via the MeTTa exporter and routed to MORK. The system will demonstrate reasoning workflows such as symbolic QA, analogy resolution, and contradiction spotting using curated datasets.
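The end-to-end flow can be pictured as three composed stages. In the sketch below the Flowise ingest step, the MeTTa exporter, and the MORK client are passed in as plain callables, and the LangChain agent layer is deliberately abstracted away rather than tied to a specific agent API; every name and signature here is illustrative.

```python
# Orchestration sketch for one symbolic QA round; all names and signatures are hypothetical.
from typing import Callable

IngestFn = Callable[[str], list[dict]]     # Flowise ingest: raw text -> triples
ExportFn = Callable[[list[dict]], str]     # MeTTa exporter: triples -> S-expressions
QueryFn = Callable[[str, str], list[str]]  # MORK client: (program, goal) -> answers


def run_reasoning_session(raw_input: str,
                          question: str,
                          ingest: IngestFn,
                          export: ExportFn,
                          query: QueryFn) -> list[str]:
    """Drive one reasoning round: ingest -> serialize to MeTTa -> query MORK."""
    triples = ingest(raw_input)      # Flowise pipeline output
    program = export(triples)        # MeTTa serialization
    return query(program, question)  # symbolic query answered by MORK
```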

Deliverables

  • End-to-end integration of Flowise, LangChain, MeTTa, and MORK
  • Functional reasoning chain tested with real data
  • CLI or visual tool for initiating graph reasoning sessions
  • Demo video, notebook, or walkthrough guide

Budget

$25,000 USD

Success Criterion

  • Reasoning chain completes from input to MORK query
  • AGI benchmark tasks can be triggered and evaluated
  • Integration is reproducible from a clean repo clone
  • Documentation reflects the full workflow lifecycle

Milestone 5 - Benchmarking Toolkit & Public Release

Description

This milestone finalizes the platform with a benchmarking suite that evaluates KGs for AGI utility. It includes metrics for reasoning performance, structural correctness, mutation tracking, and contradiction detection. Documentation, tutorials, and a deployment-ready Docker image or hosted demo will be released. Community onboarding materials and a roadmap will prepare the platform for long-term open-source growth.
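An illustrative scoring loop for the reasoning-performance metric is sketched below; it assumes tasks expose `question`, `context_facts`, and `expected_answer` fields (as in the hypothetical schema from Milestone 1) and treats the reasoning system as an answer-producing callable.

```python
# Sketch of an accuracy scorer over benchmark tasks; the task fields and the exact-match
# scoring rule are assumptions for illustration.
from typing import Callable


def score_accuracy(tasks: list, answer_fn: Callable[[str, list[str]], str]) -> float:
    """Fraction of tasks whose predicted answer matches the gold answer (case-insensitive)."""
    if not tasks:
        return 0.0
    correct = 0
    for task in tasks:
        predicted = answer_fn(task.question, task.context_facts)
        if predicted.strip().lower() == task.expected_answer.strip().lower():
            correct += 1
    return correct / len(tasks)
```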

Deliverables

  • Benchmarking scripts and scoring modules (Python/Node.js)
  • Metrics: symbolic path depth, accuracy, compactness, confidence scoring (an illustrative compactness sketch follows this list)
  • Full documentation: API usage, agent workflows, test datasets
  • Hosted demo (e.g. DockerHub, Hugging Face Spaces)
  • Contributor templates, issue policies, and roadmap
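As one example of how a structural metric could be computed, the sketch below defines compactness as exported expressions per distinct symbol; the exact formula is an assumption that this milestone would replace with the agreed definition.

```python
# Illustrative compactness metric over exported MeTTa text; the definition is an assumption.
def compactness(metta_text: str) -> float:
    """Expressions per distinct symbol, a crude density proxy for the exported graph."""
    expressions = [line.strip() for line in metta_text.splitlines() if line.strip()]
    symbols = set()
    for expr in expressions:
        symbols.update(expr.strip("()").split())
    return len(expressions) / max(len(symbols), 1)


print(compactness("(has-formula Water H2O)\n(contains H2O Hydrogen)"))  # 2 / 5 = 0.4
```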

Budget

$20,000 USD

Success Criterion

  • Benchmark toolkit tested on internal and public datasets
  • System installable and usable by third-party testers
  • GitHub repo includes all deliverables and passes install checks
  • At least one external issue or feedback submission received
