Categorical and symmetry neural-symbolic networks

Elija Perrier
Project Owner

Overview

We are researchers from ANU (PhD) and Stanford (postdoc) specialising in AI, logic, mathematics and physics. We aim to leverage category theory, physics-inspired symmetry mathematics and geometric insights to unify symbolic and neural paradigms for neuro-symbolic AI. We propose to experimentally demonstrate how integration of these methods into Hyperon-based DNNs using frameworks like PyNeuraLogic and KANs can lead to the emergence of neuro-symbolic logic in DNNs. In turn, we aim to demonstrate through simulations how this approach can be leveraged to integrate experiential systems like AIRIS and higher-order reasoning in a distributed, interpretable, scalable and resource-efficient manner.

RFP Guidelines

Neuro-symbolic DNN architectures

Internal Proposal Review
  • Type: SingularityNET RFP
  • Total RFP Funding: $160,000 USD
  • Proposals: 9
  • Awarded Projects: n/a
SingularityNET
Oct. 4, 2024

This RFP invites proposals to explore and demonstrate the use of neuro-symbolic deep neural networks (DNNs), such as PyNeuraLogic and Kolmogorov-Arnold Networks (KANs), for experiential learning and/or higher-order reasoning. The goal is to investigate how these architectures can embed logic rules derived from experiential systems like AIRIS or user-supplied higher-order logic, and apply them to improve reasoning in graph neural networks (GNNs), LLMs, or other DNNs.

Proposal Description


  • Total Milestones: 3
  • Total Budget: $80,000 USD
  • Last Updated: 8 Dec 2024

Milestone 1 - Theoretical Foundations and Research Plan

Description

This milestone establishes the theoretical foundation for our approach to neuro-symbolic DNN emergence, in which, instead of manually imposing logical rules, we leverage geometry, symmetry mathematics and category theory to guide network architecture design:
1. Identify appropriate symmetry groups and geometric invariants so that logical rules become stable, invariant features of neural latent spaces, independent of coordinate choices or arbitrary data transformations.
2. Construct category-theoretic frameworks mapping data representations (e.g. graphs) to logical domains, ensuring logical morphisms are preserved under network operations and transformations.
3. Define algebraic conditions that ensure neural embeddings naturally implement logical connectives (AND, OR), quantifiers and inference steps as stable, continuous transformations rather than mere heuristic constraints (see the sketch after this list).
4. Incorporate these conditions into layer and architecture design so that networks internally encode logical reasoning patterns rather than merely fitting statistical regularities.
5. Explore the chosen symmetries and topological invariants to reveal how robust logical inference can arise dynamically, without manually encoding rules.
6. Translate the geometric and categorical insights into a formal blueprint that guarantees logically coherent, interpretable reasoning emerges from the network's underlying mathematics.
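
As an illustration of point 3, the following minimal sketch (illustrative only, not a milestone deliverable) shows logical connectives realised as smooth, permutation-invariant maps on soft truth values in [0, 1]. The product t-norm, probabilistic sum and geometric-mean quantifier are placeholder choices made for this example, not the framework's final algebraic conditions.

```python
# Illustrative sketch: logical connectives as smooth, permutation-invariant
# maps on soft truth values in [0, 1]. The product t-norm / probabilistic sum
# are placeholder choices, not the framework's final algebraic conditions.
import numpy as np

def soft_and(truths: np.ndarray) -> float:
    """Product t-norm: a differentiable AND over a set of truth values."""
    return float(np.prod(truths))

def soft_or(truths: np.ndarray) -> float:
    """Probabilistic sum: a differentiable OR, the De Morgan dual of soft_and."""
    return 1.0 - float(np.prod(1.0 - truths))

def soft_forall(truths: np.ndarray) -> float:
    """A smooth universal quantifier as a geometric mean (illustrative only)."""
    return float(np.exp(np.mean(np.log(np.clip(truths, 1e-9, 1.0)))))

if __name__ == "__main__":
    t = np.array([0.9, 0.8, 0.95])
    perm = np.random.default_rng(0).permutation(len(t))
    # Invariance under permutation of arguments: the connective is a stable
    # feature of the *set* of truth values, not of their ordering.
    assert np.isclose(soft_and(t), soft_and(t[perm]))
    assert np.isclose(soft_or(t), soft_or(t[perm]))
    print(soft_and(t), soft_or(t), soft_forall(t))
```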

Deliverables

Milestone 1 deliverables are as follows:
1. Research Plan: Formulate a detailed mathematical framework that leverages geometry, symmetry and category theory to ensure logical rules emerge naturally within deep neural network architectures. Specify target algebraic invariants, define the categories and functorial mappings that preserve logical semantics, and establish criteria for measuring the robustness, scalability and stability of these emergent logical structures.
2. Literature Review: Survey current neuro-symbolic systems, with particular attention to existing geometric and category-theoretic approaches in deep learning. Identify where logic does not arise intrinsically, highlight which methods show partial success, and pinpoint gaps our framework can fill. This review will directly inform the selection of the invariants, symmetry groups and categorical constructs necessary for stable, interpretable logic in neural embeddings.
3. Simulation Scoping: Propose simulation scenarios using Hyperon-based prototypes, applying well-defined geometric and categorical conditions to test the spontaneous emergence of logic in minimal ontology-based tasks and structured graph datasets. In consultation with Hyperon's development team, define metrics for logical coherence, interpretability and stability (a sketch of one candidate stability metric follows this list). Outline initial experiments that validate the theoretical principles, ensuring that subsequent implementation and scaling rest on rigorously vetted designs.
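
The sketch below is a hypothetical example of the kind of stability metric envisaged in the simulation scoping: sample transformations from a chosen symmetry group, act on toy latent embeddings, and report how much a logical readout varies. The group (2D rotations), the readout, and the function names are all placeholder assumptions for illustration.

```python
# Hypothetical stability metric: how invariant is a logical readout under a
# chosen symmetry group acting on the latent space? Here the group is random
# 2D rotations (SO(2)) and the readout is a toy sigmoid of the mean embedding
# norm; both are placeholders for the invariants chosen in the research plan.
import numpy as np

def sample_rotation(rng: np.random.Generator) -> np.ndarray:
    """Draw a random element of SO(2) as a 2x2 rotation matrix."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def logical_readout(z: np.ndarray) -> float:
    """Toy readout: a soft truth value derived from the embedding norms."""
    return float(1.0 / (1.0 + np.exp(-(np.linalg.norm(z, axis=-1).mean() - 1.0))))

def stability_score(z: np.ndarray, n_samples: int = 100, seed: int = 0) -> float:
    """Lower is better: std of the readout over random group actions."""
    rng = np.random.default_rng(seed)
    readouts = [logical_readout(z @ sample_rotation(rng).T) for _ in range(n_samples)]
    return float(np.std(readouts))

if __name__ == "__main__":
    z = np.random.default_rng(1).normal(size=(16, 2))  # toy latent embeddings
    # The norm-based readout is rotation-invariant, so the score is ~0 here.
    print(f"stability (std of readout under SO(2)): {stability_score(z):.6f}")
```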

Budget

$20,000 USD

Success Criterion

Achievement of the following:
1. Formalized Mathematical Framework: A clear, coherent theoretical model is established, linking geometry, symmetry, and category theory into a unified formalism. Logical concepts must be precisely defined as algebraic constructs in neural embeddings, ensuring that logical operations correspond to stable transformations identifiable within the network's structure.
2. Defined Logical Invariants and Constraints: Identification of appropriate symmetry groups, invariants, and categorical structures that guarantee logical consistency. These conditions must enable emergent logic without relying on ad-hoc fixes, ensuring that even as the network adapts and generalizes, it preserves meaningful logical relations and inference rules.
3. Integration: The theoretical framework must incorporate and refine established ideas from the literature. This involves referencing known neuro-symbolic approaches, geometric methods, and category-theoretic constructs, confirming that our chosen path aligns with and extends the current state of knowledge.
4. Documentation: Documentation and proofs, where needed, for theoretical assumptions, chosen invariants, and category-theoretic mappings. Clear explanations, diagrams, and formal proofs ensure that future developers, researchers, and stakeholders can understand and trust the framework's foundational principles, setting the stage for subsequent implementation and testing.

Milestone 2 - Simulation Implementation

Description

Stage 2 applies the principles from Stage 1 by implementing category-theoretic, geometric and symmetry-based logic embeddings in PyNeuraLogic and Kolmogorov-Arnold Networks (KANs):
1. PyNeuraLogic Integration: Embed experiential logic rules (e.g. from AIRIS) as algebraic constraints in GNN embeddings, ensuring learned representations respect logical structure. Impose geometric invariants so that transformations in graph topology do not disrupt logical meaning, thereby maintaining consistent inference.
2. KAN Logical Operators: Adapt KAN components to approximate logical connectives (AND, OR) and quantifiers as stable univariate functions (see the sketch after this list). Utilise category theory to treat these connectives as morphisms preserving logical relations across compositional transformations, and apply symmetry mathematics to guarantee equivalences.
3. Symmetry for Robustness: Define symmetry groups acting on latent spaces so that logical invariants persist and are robust to shifts, permutations and coordinate changes in the data, mirroring the stability seen in physical systems.
4. Simulations: Test on minimal ontologies to see whether PyNeuraLogic-based GNNs infer rare conditions from limited data. Track how logic guides reasoning steps and verify that embedded rules remain interpretable. Apply KAN-based reasoning to complex settings such as smart energy grids or market predictions, and use performance assessment to compare reasoning clarity, data efficiency and generalisation against baseline models.
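
To make point 2 concrete, here is a minimal, library-agnostic sketch of the Kolmogorov-Arnold idea applied to a connective: a two-argument soft AND written as an outer function composed with a sum of univariate inner functions, f(x, y) = Phi(phi(x) + phi(y)). The specific phi/Phi below are hand-picked so the identity is exact; in the actual milestone they would be learned KAN spline components.

```python
# Library-agnostic sketch of a KAN-style connective: a soft AND written in
# Kolmogorov-Arnold form  f(x, y) = Phi(phi(x) + phi(y)).
# Here phi = log and Phi = exp recover the product t-norm exactly; in the
# milestone these univariate pieces would be learned spline components.
import numpy as np

EPS = 1e-9

def phi(t: np.ndarray) -> np.ndarray:
    """Inner univariate function (placeholder for a learned KAN spline)."""
    return np.log(np.clip(t, EPS, 1.0))

def Phi(u: np.ndarray) -> np.ndarray:
    """Outer univariate function (placeholder for a learned KAN spline)."""
    return np.exp(u)

def kan_and(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Soft AND in Kolmogorov-Arnold form: Phi(phi(x) + phi(y)) == x * y."""
    return Phi(phi(x) + phi(y))

if __name__ == "__main__":
    x = np.array([0.0, 0.5, 1.0, 0.9])
    y = np.array([1.0, 0.5, 1.0, 0.2])
    # The univariate decomposition reproduces the product t-norm.
    assert np.allclose(kan_and(x, y), x * y, atol=1e-6)
    print(kan_and(x, y))
```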

Deliverables

Milestone 2 deliverables include:
1. Literature Review Update: Expand the initial review to incorporate cutting-edge research on category-theoretic and geometric deep learning strategies, recent neuro-symbolic prototypes, and emerging best practices for embedding logic into neural systems. Highlight approaches that successfully align logic with continuous embeddings, ensuring our planned architectures draw on proven methods and avoid known pitfalls.
2. Simulations: Configure simulations that combine PyNeuraLogic and KAN modules under the algebraic, symmetry and categorical constraints defined in Stage 1 (a sketch of a candidate rule template follows this list). Select representative datasets, such as small ontologies or graph-based tasks, and outline metrics to assess interpretability, reasoning stability and data efficiency. These initial tests form a baseline for gauging how well the theoretical principles translate into measurable improvements.
3. Visualisation and User Tools: Begin developing visualisation utilities and user interfaces tailored to PyNeuraLogic and KAN. Design tools that expose the underlying logical structures, illustrate how symmetry transforms embeddings, and let researchers interactively inspect reasoning steps. These interfaces support iterative debugging, foster understanding, and encourage wider adoption of our integrated neuro-symbolic approach.
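
For deliverable 2, the following fragment sketches how an experiential rule set might be declared, assuming PyNeuraLogic's declarative Template / R / V relational syntax. The "hazard" ontology and all predicate names are invented for illustration, and the Stage 1 geometric and categorical constraints are not yet encoded in this fragment.

```python
# Hedged sketch (assumes PyNeuraLogic's Template / R / V rule syntax):
# a toy experiential rule set declared as a relational template. Predicate
# names and the 'hazard' ontology are invented for illustration only.
from neuralogic.core import Template, R, V

template = Template()

# Experiential rule in the style of an AIRIS-derived hypothesis:
# a node is hazardous if it is faulty and connected to a load-bearing node.
template += R.hazard(V.X) <= (R.faulty(V.X), R.edge(V.X, V.Y), R.load(V.Y))

# Propagate hazard along edges (a simple recursive, GNN-like rule).
template += R.hazard(V.X) <= (R.edge(V.X, V.Y), R.hazard(V.Y))

print(template)
```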

Budget

$40,000 USD

Success Criterion

Achievement of the following:
• Functional Prototype Integration: Demonstrating that PyNeuraLogic-based GNNs and KAN modules operate under the category-theoretic, geometric, and symmetry constraints defined in Stage 1. The resulting prototypes must show that logical rules, including those derived from experiential sources like AIRIS, are consistently and transparently embedded as algebraic relations within the network's latent spaces.
• Stable Logical Inference Under Transformation: Demonstrating that the implemented models, when exposed to changes in graph structure or data representation, preserve logical meaning and inference capabilities. Logical connectives and quantifiers remain identifiable as stable morphisms, and symmetry-based invariants ensure reasoning steps are robust against variations in input.
• Performance Metrics: Demonstrating that initial simulations on small, well-defined datasets confirm that logic-driven constraints enhance interpretability, reasoning accuracy, and data efficiency compared to baseline models. These metrics provide early evidence that the theoretical principles yield tangible benefits in practice.
• Documented Codebase and Procedures: A publicly accessible code repository, annotated scripts, and foundational tutorials are complete. They must enable other researchers, including Hyperon and MeTTa developers, to replicate, modify, and extend the approach, ensuring broader adoption and paving the way for more extensive testing in Stage 3.

Milestone 3 - Scaling, Visualization and Demonstrations

Description

Stage 3 advances beyond the initial simulations. Building on Stages 1 and 2, we now focus on dynamic scenarios in which experiential rules derived from AIRIS evolve and higher-order logic imposes intricate constraints. Our aim is to verify that logical reasoning and structural coherence persist as the network adapts and scales, embedding new rules without compromising stability or interpretability.

We concentrate on enhancing usability, visualisation and code availability in the context of Hyperon's architectures. By creating tools to depict how logical functions shape the network's latent geometry, we help users understand how logic emerges as stable symmetries or manifold deformations (see the sketch below). Such visualisations illuminate the interplay between learned rules and the underlying embeddings, fostering trust and providing insight into the reasoning process.

To validate generality, we apply our approach to diverse reasoning tasks, including structured scientific datasets and cross-domain inference. We test the hypothesis that our methods produce a universal paradigm for embedding logic into DNNs, ensuring robust, scalable neuro-symbolic reasoning. We conclude the project by delivering research reports, academic publications and a publicly accessible codebase, together establishing a new method for constructing explainable, logic-grounded deep neural architectures based on Hyperon and MeTTa protocols.
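
As a sketch of the kind of visualisation intended here (a toy placeholder, not the deliverable tool), the snippet below projects synthetic latent embeddings to 2D with PCA before and after a latent-space symmetry transform, so a viewer can check by eye whether the geometry relevant to a logical readout is preserved. scikit-learn and matplotlib are assumed off-the-shelf dependencies; the real tool would hook into the PyNeuraLogic and KAN models from Stage 2.

```python
# Placeholder visualisation sketch: project toy latent embeddings to 2D with
# PCA before and after an orthogonal transform of the latent space, to
# inspect whether the geometry of the embedding cloud is preserved.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latents = rng.normal(size=(200, 16))                   # toy embeddings
rotation, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthogonal map
transformed = latents @ rotation

proj = PCA(n_components=2).fit(latents)
fig, axes = plt.subplots(1, 2, figsize=(8, 4), sharex=True, sharey=True)
for ax, data, title in [(axes[0], latents, "original"),
                        (axes[1], transformed, "after symmetry transform")]:
    xy = proj.transform(data)
    ax.scatter(xy[:, 0], xy[:, 1], s=8)
    ax.set_title(title)
fig.suptitle("Latent geometry before/after a latent-space symmetry (toy data)")
fig.tight_layout()
plt.show()
```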

Deliverables

1. Final Report and Academic Articles: The final report will summarise the results of the project, including category theory's role in preserving logical morphisms, illustrate how geometric invariants and symmetry principles enhance the stability and interpretability of logic-infused DNNs, and confirm best practices for embedding logic in neural architectures. We expect first drafts of academic articles arising from the report to be available as well.
2. Simulations: We will deliver a final, fully curated code repository featuring finalised PyNeuraLogic and KAN implementations that embody the constraints and principles defined in the previous stages, designed in collaboration with Hyperon and MeTTa developers for interoperability and extensibility. Accompanying scripts, documentation and tutorials will support dataset handling, transformation, performance verification and reproducibility, enabling stakeholders to easily extend and adapt our approach.
3. Visualization and App: Provide a set of visualization modules and interactive interfaces that allow users to inspect logic-driven embeddings and transformations within the DNNs. These tools will highlight how logical structures evolve as new rules are introduced and will demonstrate the system's coherence and trustworthiness in real time.

Budget

$20,000 USD

Success Criterion

Achievement of the following:
1. Scalable and Generalizable Reasoning: Demonstration that our approach, when scaled, exhibits stable logical inference and structural coherence. Logical connectives and higher-order rules maintain their integrity even as new experiential data or abstract constraints are introduced, confirming that the framework scales beyond controlled scenarios.
2. Visualisation and User Experience: Visualisation tools, interfaces, and applications enable stakeholders to intuitively explore latent spaces and track how logical rules shape neural representations. Users can identify logical symmetries, observe shifts in embeddings as rules evolve, and confidently navigate the system's reasoning processes, fostering trust and accelerating adoption.
3. Robust Cross-Domain Demonstrations: Multiple testbeds, ranging from structured scientific datasets to cross-domain inference tasks, validate the protocols using PyNeuraLogic and KANs within the Hyperon framework. In each case, logical coherence, interpretability, and robust inference persist, reinforcing the notion of a universal paradigm for embedding logic into deep neural architectures.
4. Publicly Documented Outcomes: Final research reports, academic publications, and publicly accessible codebases are delivered, fully describing the methods, visualizations, and lessons learned. These resources serve as a comprehensive reference for future projects, ensuring that the concepts, techniques, and findings drive ongoing innovation.
