NL Explanation Generation for Knowledge Graphs

SciRenovation
Project Owner

Overview

Our team is committed to addressing a critical challenge in graph-based AI systems: the lack of transparency and explainability. We're proposing a comprehensive framework for Natural-Language Explanation Generation (NLEG) in Graph Systems that can trace inference pathways and convert them into human-readable explanations. We're particularly excited about the potential impact on transparency, trustworthiness, and user adoption across various domains - with special attention to compatibility with existing graph database systems and knowledge frameworks.

RFP Guidelines

Advanced knowledge graph tooling for AGI systems

Complete & Awarded
  • Type: SingularityNET RFP
  • Total RFP Funding: $350,000 USD
  • Proposals: 39
  • Awarded Projects: 5
SingularityNET
Apr. 16, 2025

This RFP seeks the development of advanced tools and techniques for interfacing with, refining, and evaluating knowledge graphs that support reasoning in AGI systems. Projects may target any part of the graph lifecycle — from extraction to refinement to benchmarking — and should optionally support symbolic reasoning within the OpenCog Hyperon framework, including compatibility with the MeTTa language and MORK knowledge graph. Bids are expected to range from $10,000 - $200,000.

Proposal Description

Our Team

We're a passionate group of experts who have worked together on complex graph problems for years:

  • 3 dedicated researchers with backgrounds in knowledge representation

  • 1 backend engineer and 1 data engineer with extensive graph database experience

  • 1 data scientist specializing in graph analytics

  • 1 natural language specialist and 1 generative AI developer

  • 1 graph scientist focused on optimization algorithms

  • 1 graph network expert

Company Name (if applicable)

SciRenovation Labs

Project details

Have you ever struggled to understand why an answer was generated the way it was? Can you rely on an answer given by a generative AI system? We doubt you'd be excited to receive the answer "42" without a proper explanation, or to make business or strategy decisions without full trust in the system.

Anyone who's worked with graph-based AI systems has encountered the "black box" problem where systems provide answers without explaining their reasoning. We've experienced these challenges firsthand, and they've motivated us to develop this proposal. Our vision is to create open, extensible, and scalable tools that illuminate the inference paths within graph systems - making them more transparent and trustworthy for users across domains.

We're focusing on three key areas that we believe will make the most significant impact:

1. Advanced Inference Path Identification and Tracing

We're determined to overcome the limitations of current path identification methods by developing more efficient and accurate techniques for tracing reasoning in graph systems.

We're particularly excited about:

  • Implementing specialized graph traversal algorithms (BFS, DFS, A* search) optimized for inference path identification - see the sketch after this list

  • Developing pattern matching systems that can efficiently locate specific structural configurations in knowledge graphs

  • Creating logical inference mechanisms that can trace the application of specific rules in rule-based systems

  • Exploring reinforcement learning approaches for identifying optimal reasoning pathways in complex knowledge graphs

  • Tracing where conclusions came from

  • Tracing where inference engines "got stuck" or skipped over valid logic
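
To ground the traversal bullets above, here is a minimal breadth-first sketch that traces the chain of triples linking a premise to a conclusion. The triple layout and the find_inference_path helper are illustrative assumptions of ours, not an existing API:

```python
# Minimal sketch: BFS inference-path tracing over (subject, predicate,
# object) triples. The toy graph and helper are illustrative only.
from collections import deque

TRIPLES = [
    ("aspirin", "inhibits", "COX-1"),
    ("COX-1", "produces", "thromboxane A2"),
    ("thromboxane A2", "promotes", "platelet aggregation"),
]

def find_inference_path(triples, start, goal):
    """Return the chain of triples linking start to goal, or None."""
    adjacency = {}
    for s, p, o in triples:
        adjacency.setdefault(s, []).append((p, o))
    queue = deque([(start, [])])  # (current node, path of hops so far)
    visited = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for predicate, neighbor in adjacency.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + [(node, predicate, neighbor)]))
    return None  # no path found: a useful signal that reasoning "got stuck"

print(find_inference_path(TRIPLES, "aspirin", "platelet aggregation"))
```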

For larger knowledge graphs with intricate reasoning chains, we're adapting specialized subgraph extraction techniques to focus on the most relevant portions of the graph. Early results suggest this approach significantly improves both efficiency and clarity of explanations.
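
A hedged sketch of that idea, reusing the triple format above (extract_khop_subgraph is an illustrative name, not an existing API): keep only triples whose endpoints lie within k hops of the query's seed entities before tracing paths:

```python
# Illustrative k-hop subgraph extraction around seed entities, so that
# path tracing and verbalization run on a small, relevant slice of a
# large knowledge graph.
def extract_khop_subgraph(triples, seeds, k=2):
    """Return triples whose endpoints lie within k hops of any seed."""
    kept = set(seeds)
    frontier = set(seeds)
    for _ in range(k):
        nxt = set()
        for s, _, o in triples:
            if s in frontier and o not in kept:
                nxt.add(o)
            if o in frontier and s not in kept:
                nxt.add(s)
        kept |= nxt
        frontier = nxt
    return [(s, p, o) for s, p, o in triples if s in kept and o in kept]
```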

Additionally, we're investigating causal graph representations (DAGs) to better capture and visualize the directional flow of reasoning from premises to conclusions.

2. Natural Language Verbalization of Inference Paths

We're a bit frustrated by the limitations of current explanation generation systems. Our goal is to create sophisticated verbalization techniques that can transform structured graph paths into clear, contextually appropriate natural language explanations.

We're exploring:

  • Template-based generation systems with enhanced flexibility for various relationship types - see the sketch after this list

  • Rule-based verbalization frameworks that ensure grammatical correctness and coherence

  • Neural approaches leveraging state-of-the-art language models for more natural-sounding explanations

  • Hybrid systems that combine the reliability of templates with the fluency of neural models
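
To ground the template-based bullet above, a minimal sketch: one phrase template per predicate, rendered hop by hop, with a generic fallback for unseen relations. The templates and the verbalize_path helper are illustrative assumptions:

```python
# Illustrative template-based verbalization of a traced inference path
# (a list of (subject, predicate, object) hops, as in the earlier sketch).
TEMPLATES = {
    "inhibits": "{s} inhibits {o}",
    "produces": "{s} produces {o}",
    "promotes": "{s} promotes {o}",
}
FALLBACK = "{s} is related to {o} via '{p}'"  # covers unknown predicates

def verbalize_path(path):
    """Render a list of (subject, predicate, object) hops as English."""
    steps = [TEMPLATES.get(p, FALLBACK).format(s=s, p=p, o=o)
             for s, p, o in path]
    conclusion = f"Therefore, {path[0][0]} is linked to {path[-1][2]}."
    return " ".join(step + "." for step in steps) + " " + conclusion
```

For the three-hop aspirin path from the earlier sketch, this yields "aspirin inhibits COX-1. COX-1 produces thromboxane A2. thromboxane A2 promotes platelet aggregation. Therefore, aspirin is linked to platelet aggregation." A hybrid system would then hand such a skeleton to a language model for smoothing.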

One of our most ambitious goals is to develop context-aware explanation generation that adapts the level of detail, terminology, and format to match the user's background and needs. We're also creating visualization tools that complement textual explanations with intuitive diagrams of inference paths.

3. Comprehensive Evaluation Framework for Explanation Quality

We're tired of inadequate evaluation metrics that don't capture real explanation effectiveness. Our team is designing a robust evaluation framework that assesses the aspects that truly matter: accuracy, understandability, and usefulness of generated explanations.

Our evaluation framework will measure:

  • Explanation fidelity (how accurately the explanation reflects the actual inference process) - see the sketch after this list

  • Linguistic quality (fluency, grammatical correctness, and readability)

  • User comprehension (how well users understand the system's reasoning after reading the explanation)

  • Task utility (how the explanations improve user performance on specific tasks)

  • Domain appropriateness (suitability of explanations for different domains and use cases)
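
As one deliberately simple instance of an automated fidelity check (explanation_fidelity is an illustrative name, and entity string matching is a stand-in for real alignment), the sketch below scores the fraction of traced hops whose subject and object both appear in the generated text:

```python
# Illustrative fidelity metric: the share of hops in the traced path
# whose subject and object entities are both mentioned in the explanation.
def explanation_fidelity(path, explanation):
    """Return a 0..1 coverage score of path entities in the text."""
    text = explanation.lower()
    covered = sum(1 for s, _, o in path
                  if s.lower() in text and o.lower() in text)
    return covered / len(path) if path else 0.0
```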

Beyond simple metrics, we're focusing on user-centered evaluation methods that capture the real-world impact of explanations on trust, decision-making, and system adoption.

Our project naturally aligns with the Hyperon framework and existing graph database systems by providing essential tools that enhance transparency and user understanding. We're committed to exploring integration with the MORK system to ensure consistent knowledge graph reasoning and validation within the OpenCog ecosystem.


Open Source Licensing

MIT - Massachusetts Institute of Technology License

Background & Experience

This proposal is one of five parts of a unified toolkit to be developed in parallel by a team of 14-16 developers and scientists. We already have a successfully completed DeepFunding grant behind us.

Our team includes current employees of Yandex and Intel, former LinkedIn staff, 3 university lecturers, and 7 PhDs. We're proud of our 10+ presidential awards, several patents, and 30+ publications, including multiple technical books from a well-known publishing house.

If we are awarded funding for 4 out of 5 proposals, we are committed to developing the 5th one at no additional cost.

We believe it’s better to have one toolkit than a kit of tools that need to be duct-taped together.

We aim to deliver a robust, extensible, modular, production-ready ecosystem that can evolve with future RFPs, enabling seamless adoption, innovation, and collaboration. This approach will maximize the utility of knowledge graph fundamentals and pull in other innovative features and technologies from DeepFunding.


  • Total Milestones: 4
  • Total Budget: $87,500 USD
  • Last Updated: 18 May 2025

Milestone 1 - Advanced Inference Path Identification System

Description

We will develop and release a comprehensive inference path identification system capable of tracing reasoning chains in graph-based AI systems. This system will implement optimized graph traversal algorithms specifically designed for knowledge graphs and will include subgraph extraction techniques to handle complex reasoning paths. The system will be thoroughly tested with diverse graph datasets to ensure robustness and accuracy in path identification.

Deliverables

- A documented codebase implementing at least three specialized graph traversal algorithms optimized for inference path identification
- A pattern matching subsystem for identifying structural configurations in knowledge graphs
- A performance evaluation report comparing our system against current path identification methods using standard benchmark datasets

Budget

$25,000 USD

Success Criterion

The system will successfully identify at least 90% of inference paths in test scenarios. The system will be able to process graphs with at least 100,000 nodes within reasonable time constraints.

Milestone 2 - Natural Language Explanation Generator

Description

We will build a sophisticated explanation generation system that transforms identified inference paths into clear natural language explanations. This system will combine template-based approaches with neural methods to generate explanations that are both accurate and natural-sounding. The explanations will be adaptable to different user expertise levels and will include appropriate domain terminology.

Deliverables

- A modular explanation generation system supporting multiple verbalization strategies
- A library of domain-specific templates for at least three domains (e.g., biomedical, financial, educational)
- An interactive demonstration tool allowing users to explore different explanation styles for the same inference path

Budget

$30,000 USD

Success Criterion

The system will generate explanations that achieve human ratings of at least 4/5 for clarity and accuracy in blind evaluation studies. Generated explanations will successfully incorporate domain-specific terminology and adapt to different user expertise levels as measured through comprehension tests.

Milestone 3 - Comprehensive Evaluation Framework

Description

We will develop a multi-dimensional evaluation framework for assessing explanation quality that goes beyond traditional metrics. This framework will include both automated metrics and user-centered evaluation methodologies to provide a holistic assessment of explanation effectiveness. The framework will be designed to be reusable across different explanation generation systems.

Deliverables

- A suite of automated metrics measuring explanation fidelity, linguistic quality, and information completeness
- A standardized protocol for user-centered evaluation covering comprehension, trust, and task performance
- A benchmark dataset of explanations with human annotations for training and evaluation purposes

Budget

$25,000 USD

Success Criterion

The evaluation framework will demonstrate high inter-rater reliability and will identify meaningful differences between explanation generation techniques that correlate with user preferences.

Milestone 4 - Integration with MeTTa/MORK Systems

Description

We will develop robust integration capabilities between our explanation tools and the MeTTa/MORK systems in the OpenCog ecosystem. This integration will allow for transparent reasoning explanations within MeTTa's computational framework and MORK's knowledge representation system.

Deliverables

- A dedicated integration module connecting our explanation system to MeTTa/MORK data structures and workflows
- A set of MeTTa-specific explanation templates optimized for MORK's knowledge representation

Budget

$7,500 USD

Success Criterion

The integrated system will successfully generate explanations for at least 75% of inference paths in MeTTa/MORK without requiring modifications to existing MeTTa/MORK implementations.
