
Luke Mahoney (MLabs)
Project Owner

Luke will act as the Grants Manager on behalf of the MLabs AI team. He manages several similar grant projects at MLabs, including numerous Project Catalyst engagements.
There are many large, generic semantic graphs (e.g. WikiData, DBpedia) alongside a growing number of domain-specific ones. LLMs offer a quick path to KG extraction but introduce several problems:

- Inconsistency: erratic, non-deterministic entity resolution
- Inaccuracy: missing or hallucinated predicates
- Context loss: unstable frames of reference
- Overconfidence: displaying confidence regardless of accuracy
- Speed: impractical at scale

We are developing an AGI-focused KG tooling suite that tackles these challenges via four modules: Context Identification, Entity Management, Predicate Management, and Confidence Management. This proposal addresses entity, predicate, and confidence management.
This RFP seeks the development of advanced tools and techniques for interfacing with, refining, and evaluating knowledge graphs that support reasoning in AGI systems. Projects may target any part of the graph lifecycle — from extraction to refinement to benchmarking — and should optionally support symbolic reasoning within the OpenCog Hyperon framework, including compatibility with the MeTTa language and MORK knowledge graph. Bids are expected to range from $10,000 to $200,000.
This milestone deploys our novel IP for the automatic discovery of aliases for entities in a KG. It combines two elements: our variation of the N-M-W algorithm, which produces a list of statistically plausible alias candidates, and a semantic analysis that refines this list into a set of entity aliases. The algorithm is computationally efficient and produces final candidate lists that require minimal additional curation.
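The N-M-W variation itself is not public, so the two-stage pipeline can only be sketched with stand-ins: below, `SequenceMatcher` plays the role of the statistical candidate scorer and a crude context-token overlap plays the role of the semantic analysis. All mentions, context sets, and thresholds are invented for illustration.

```python
from difflib import SequenceMatcher

def statistical_candidates(entity, mentions, threshold=0.6):
    """Stage 1 stand-in: score corpus mentions by surface similarity
    and keep the statistically plausible alias candidates."""
    scored = [(m, SequenceMatcher(None, entity.lower(), m.lower()).ratio())
              for m in mentions]
    return [m for m, s in scored if s >= threshold and m != entity]

def semantic_filter(entity, candidates, context, min_overlap=2):
    """Stage 2 stand-in: keep candidates whose corpus context tokens
    overlap the entity's context (a crude proxy for semantic analysis)."""
    entity_ctx = context.get(entity, set())
    return [c for c in candidates
            if len(entity_ctx & context.get(c, set())) >= min_overlap]

# Toy corpus mentions and per-mention context tokens (invented).
mentions = ["IBM", "I.B.M.", "International Business Machines", "IKEA"]
context = {
    "IBM": {"computing", "mainframe", "technology"},
    "I.B.M.": {"computing", "technology"},
    "International Business Machines": {"mainframe", "computing"},
    "IKEA": {"furniture", "retail"},
}

cands = statistical_candidates("IBM", mentions)
aliases = semantic_filter("IBM", cands, context)
```

Note that a purely statistical first stage misses surface-dissimilar aliases such as the full company name here; that is exactly the gap the semantic stage (and the real algorithm's candidate generation) is meant to address.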
Software implementation of the compound alias detection algorithm and experimental results.
$15,000 USD
Fully operational software implementation, with a demonstration on a large unstructured or semi-structured text corpus and the WikiData KG, together with accompanying documentation.
In this milestone we will show how source reliability can be bootstrapped using credibility and re-estimation. When constructing AGI KGs, we can initialise with a trusted, well-curated source; this becomes the baseline. When new information is added to the KG, we judge the credibility of the new knowledge and the reliability of its source, using the baseline as ground truth. Having assigned confidence values, we can use consistency to recompute the credibility and reliability metrics. In this way we iteratively refine our estimates of source reliability as the knowledge base grows.
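A minimal sketch of that bootstrap loop, using a simple reliability-weighted vote in place of the proposal's (unspecified) credibility and consistency metrics; the sources, facts, and initial values are invented:

```python
# (source, fact) assertions; "baseline" is the trusted curated source.
claims = [
    ("baseline", "Paris capitalOf France"),
    ("sourceA",  "Paris capitalOf France"),
    ("sourceA",  "Berlin capitalOf France"),
    ("sourceB",  "Paris capitalOf France"),
]

# Baseline is ground truth; unknown sources start at a neutral prior.
reliability = {"baseline": 1.0, "sourceA": 0.5, "sourceB": 0.5}

for _ in range(10):  # iterate until the estimates stabilise
    # Credibility of each fact: normalised reliability-weighted support.
    facts = {f for _, f in claims}
    total = sum(reliability.values())
    credibility = {
        f: sum(reliability[s] for s, g in claims if g == f) / total
        for f in facts
    }
    # Re-estimate each non-baseline source as the mean credibility of
    # the facts it asserts; the baseline stays fixed as ground truth.
    for s in reliability:
        if s == "baseline":
            continue
        own = [credibility[f] for t, f in claims if t == s]
        reliability[s] = sum(own) / len(own)
```

Under this toy scheme sourceB, which only agrees with the baseline, converges to full reliability, while sourceA is penalised for its unsupported claim.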
Source reliability estimation implementation and an illustration of the system working with a small number of variable-quality sources (such as WordNet, WikiData, and automatically generated KGs such as those from AutoKG).
$15,000 USD
Fully operational software implementation, with a demonstration on selected KGs, together with accompanying documentation.
This milestone is a production-level implementation of the deductive, inductive, and abductive scoring for self-consistency, together with the fusion of reliability for evidence from multiple sources. The system will be incorporated into the earlier system for curating KGs and evaluated on the same variable-quality graphs.
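The fusion rule is not specified here; one conventional stand-in for combining reliability-weighted evidence from independent sources is noisy-OR, where each source's failure probability multiplies. The scores below are illustrative, not outputs of the proposed system:

```python
import math

def fuse(evidence):
    """Noisy-OR fusion over independent sources: each (reliability,
    credibility) pair contributes support r*c, and the probabilities
    of every source being wrong multiply."""
    fail = math.prod(1.0 - r * c for r, c in evidence)
    return 1.0 - fail

# (source reliability, local credibility score) per source
# asserting the same triple -- values invented for illustration.
combined = fuse([(0.9, 0.8), (0.6, 0.5), (0.3, 0.9)])
```

A convenient property of this choice is monotonicity: adding any supporting source can only raise the fused score, and a single fully reliable, fully credible source saturates it at 1.0.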
Credibility estimation implementation and an illustration of the system working with a small number of variable-quality sources (such as WordNet, WikiData, and automatically generated KGs such as those from AutoKG).
$15,000 USD
Fully operational software implementation, with a demonstration on selected KGs, together with accompanying documentation.
In the final milestone we include the currency estimation algorithm. The approach is simple and rests on two quantities: how quickly the knowledge is likely to be changing (the derivative) and how long ago the knowledge was acquired (from timestamps). Together these give an estimate of how out-of-date the knowledge is likely to be. The decay of currency is modeled as an inverse exponential function, appropriately calibrated to the type of predicate, the source, and the frame of reference.
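As a sketch of the idea, assuming the decay takes the form exp(-rate × age) with a per-predicate rate; the predicate names and rates below are invented for illustration, not the proposal's calibration:

```python
import math

# Expected change rate per year, by predicate type (illustrative values).
DECAY_RATE = {
    "population": 0.20,   # changes steadily
    "capitalOf": 0.005,   # changes very rarely
}

def currency(predicate, age_years):
    """Estimated probability that a fact acquired age_years ago is
    still current: exponential decay at the predicate's change rate."""
    return math.exp(-DECAY_RATE[predicate] * age_years)

fresh = currency("capitalOf", 10.0)    # slow-changing predicate
stale = currency("population", 10.0)   # fast-changing predicate
```

The same ten-year-old fact thus scores very differently depending on predicate type, which is the point of calibrating the rate rather than using a single global decay.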
Currency estimation implementation and an illustration of the approach operating on information extracted from Wikipedia plus associated documentation.
$15,000 USD
Fully operational software implementation, with a demonstration on Wikipedia, together with accompanying documentation.