Categorical and symmetry neural-symbolic networks

Expert Rating 3.6
Elija Perrier
Project Owner

Overview

We are researchers from ANU (PhD) and Stanford (postdoc) specialising in AI, logic, mathematics and physics. We aim to leverage category theory, physics-inspired symmetry mathematics and geometric insights to unify symbolic and neural paradigms for neuro-symbolic AI. We propose to experimentally demonstrate how integration of these methods into Hyperon-based DNNs using frameworks like PyNeuraLogic and KANs can lead to the emergence of neuro-symbolic logic in DNNs. In turn, we aim to demonstrate through simulations how this approach can be leveraged to integrate experiential systems like AIRIS and higher-order reasoning in a distributed, interpretable, scalable and resource-efficient manner.

RFP Guidelines

Neuro-symbolic DNN architectures

Complete & Awarded
  • Type SingularityNET RFP
  • Total RFP Funding $160,000 USD
  • Proposals 9
  • Awarded Projects 2
SingularityNET
Oct. 4, 2024

This RFP invites proposals to explore and demonstrate the use of neuro-symbolic deep neural networks (DNNs), such as PyNeuraLogic and Kolmogorov Arnold Networks (KANs), for experiential learning and/or higher-order reasoning. The goal is to investigate how these architectures can embed logic rules derived from experiential systems like AIRIS or user-supplied higher-order logic, and apply them to improve reasoning in graph neural networks (GNNs), LLMs, or other DNNs.

Proposal Description

Project details

A major focus of AI research is finding architectures that merge the rigour of symbolic logic with the versatility and power of DNNs. The challenge common to all approaches to this problem is how to retain the dynamic generalisation capabilities of DNNs while also providing the constrained structure characteristic of formal systems. Simply hard-coding logical or even symmetry constraints interferes with the complexity that underpins the power of DNNs and lacks versatility when new logics must be learnt.

Our project takes a distinctive approach to solving this dilemma. We propose to ground the construction of neuro-symbolic architectures in physics-inspired complex systems mathematics, leveraging specific tools from geometry, symmetry mathematics and category theory. Starting from these foundational principles where complexity emerges within encoded symmetry constraints that are central to neurosymbolic functionality, we aim to engineer neural architectures that inherently produce stable, controllable, interpretable, and manipulable symbolic functionality. 

Geometry and Symmetry

Category-theoretic, geometric and symmetry-based methods are central to our approach. Neural networks, especially on structured data like graphs, can be viewed as maps between geometric or topological spaces. By imposing geometric and symmetry constraints within the latent space, we aim to experimentally explore how embedded representations of concepts, predicates, and logical transformation rules can emerge. We anticipate that doing so will ensure that logic is tied to relational structure rather than to arbitrary coordinate systems. Symmetry mathematics provides a rich language and set of tools for achieving these invariances. If a logical rule is encoded as a certain subspace or manifold within the network’s embedding layers, then group actions (such as permutations of elements in a graph, or rotations in a continuous embedding space generated by the supervening algebra) that preserve logical meaning should leave the structure unchanged. Ensuring such invariances is critical to making logic robust: it cannot depend on the peculiarities of how data is presented, but should reflect the underlying relational patterns encoded by the rules.
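The invariance requirement above can be pictured with a minimal sketch (our own illustration, not the project's code): a sum-based graph readout is unchanged when the graph's nodes are permuted, which is precisely the kind of symmetry under which encoded logical structure should be preserved.

```python
import numpy as np

def graph_readout(adjacency: np.ndarray, features: np.ndarray) -> np.ndarray:
    """One message-passing step followed by a permutation-invariant sum readout."""
    messages = adjacency @ features           # aggregate neighbour features
    return (features + messages).sum(axis=0)  # sum pooling ignores node order

rng = np.random.default_rng(0)
n = 5
adj = rng.integers(0, 2, size=(n, n))
adj = np.triu(adj, 1)
adj = adj + adj.T                             # symmetric graph, no self-loops
feats = rng.standard_normal((n, 3))

perm = rng.permutation(n)
P = np.eye(n)[perm]                           # permutation matrix (group action)
out_original = graph_readout(adj, feats)
out_permuted = graph_readout(P @ adj @ P.T, P @ feats)
assert np.allclose(out_original, out_permuted)  # relabelling leaves the readout fixed
```

Any logical structure read off from such a pooled representation automatically inherits this invariance, which is the robustness property the proposal targets.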

 

Category Theory

Category theory complements our geometric and symmetry-based methods and Hyperon-based DNN architectures by offering a unifying abstract framework for ensuring that the translation of data into logical structures is coherent, compositional, and well-defined. Category theory (through concepts such as infinity categories) also provides a means of handling (and possibly engineering) the kind of dynamic self-similar structuring characteristic of physical systems and of DNN architectures such as Hyperon.

For example, logical operations such as conjunction, disjunction, or quantification can be seen as morphisms in a category of logical formulas. Neural architectures, in turn, can be modelled as functors that map from a data category (e.g., graphs representing knowledge states) to a logic category (representing logical constraints and rules). By ensuring that these functors preserve certain limits, colimits, or universal properties, we guarantee that logical inferences are represented faithfully by the network’s operations. This categorical lens allows us to impose algebraic constraints, ensuring that certain algebraic operations on embeddings correspond to logical operations and making it possible to guarantee that logical rules can be recovered from the learned representations.
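As a toy instantiation of this functorial picture (our assumption for illustration, not the proposal's formalism), propositional formulas can be mapped to truth-value vectors over all assignments; conjunction and disjunction then become pointwise operations and entailment a pointwise inequality, and functoriality can be checked directly:

```python
# Object map F: formula -> vector of truth values over all assignments.
# Connectives become pointwise vector operations; entailment becomes <=.
from itertools import product
import numpy as np

ATOMS = ["p", "q"]
ASSIGNMENTS = list(product([False, True], repeat=len(ATOMS)))

def F(formula) -> np.ndarray:
    """Map a formula (a predicate on assignments) to its truth-value vector."""
    return np.array([formula(dict(zip(ATOMS, a))) for a in ASSIGNMENTS])

p = lambda a: a["p"]
q = lambda a: a["q"]
conj = lambda f, g: (lambda a: f(a) and g(a))
disj = lambda f, g: (lambda a: f(a) or g(a))

# Functoriality on connectives: F(A AND B) == F(A) & F(B), likewise for OR.
assert np.array_equal(F(conj(p, q)), F(p) & F(q))
assert np.array_equal(F(disj(p, q)), F(p) | F(q))

# The entailment morphism A |- (A OR B) is preserved as a pointwise inequality.
assert np.all(F(p) <= F(disj(p, q)))
```

A learned embedding playing the role of F would need the same preservation properties, which is what the limit/colimit conditions in the text are meant to enforce.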

Project Stages

We structure the project into three stages, gradually moving from theoretical foundations and abstract design principles to concrete implementations and demonstrations using PyNeuraLogic and Kolmogorov-Arnold Networks (KANs). Our goal is to produce practical and demonstrable methods, supported by reproducible simulations, code, and case studies (publicly accessible via repositories) that showcase improved reasoning capabilities, interpretability, and learning efficiency in complex scenarios.

 

Stage 1: Theoretical Foundations

In Stage 1, we focus on building a rigorous mathematical and conceptual foundation that integrates geometry, symmetry, and category theory into the design of neuro-symbolic deep neural networks. We begin by identifying the essential conditions under which logic can emerge as an inherent property of neural representations. A central task here is to define how neural embeddings should behave collectively so that logical connectives, quantifiers, and inference patterns correspond to well-defined algebraic operations in the embedding space.

We examine how symmetry constraints in physical systems can enforce invariants that support stable logical structures while enabling the flexibility (and degeneracy as noted in singular learning theory) that allows for their versatility. This involves selecting symmetry groups that preserve logical meaning and exploring coordinate-free geometric representations that ground logical rules in topological or geometric invariants. Simultaneously, we translate this geometric intuition into category-theoretic language via categories for data domains (such as graphs representing knowledge states), for logical systems (such as categories whose objects are logical formulas and whose morphisms represent inference rules), and for neural architectures (seen as functorial mappings between these categories). By characterising the conditions under which neural modules implement functors that preserve and reflect logical structure, we ensure that logic emerges naturally from neural training rather than needing to be manually inserted. At the end of Stage 1, we aim to have a well-defined formalism that tells us how to design layers, embedding functions, and network operations so that logic arises from fundamental mathematical principles.

Stage 2: Simulation Implementation

In Stage 2, we move from theory to practice. We take the insights developed in Stage 1 and implement them in two frameworks: PyNeuraLogic and Kolmogorov-Arnold Networks (KANs). PyNeuraLogic is designed to integrate logical constraints into neural architectures, particularly graph neural networks (GNNs). Our approach augments PyNeuraLogic with geometric and categorical constraints. For instance, logic rules derived from experiential systems like AIRIS become embedded as algebraic relations within the GNN’s embedding space. By applying our category-theoretic and geometric conditions, we aim to enable rules to shape learned representations consistently and interpretably. We expect that, as in physical systems, symmetry constraints guarantee that logical rules remain invariant under transformations of the input graph’s structure, making reasoning robust even as the environment changes.
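One simple way to picture rules becoming algebraic relations within a GNN's embedding space is as a differentiable penalty on learned scores. The sketch below is plain NumPy, not the PyNeuraLogic API, and the rule and score names are hypothetical illustrations:

```python
import numpy as np

def rule_penalty(symptom: np.ndarray, links: np.ndarray,
                 disease: np.ndarray) -> float:
    """Penalise violations of the soft rule: symptom[x] * links[x, y] <= disease[y].

    A gradient-based learner minimising this term is pushed toward embeddings
    whose scores satisfy 'symptom(x) AND links(x, y) -> disease(y)'.
    """
    implied = symptom[:, None] * links              # soft truth of the rule body
    violation = np.maximum(implied - disease[None, :], 0.0)
    return float(violation.sum())

symptom = np.array([1.0, 0.0])        # symptom s0 present
links = np.array([[1.0, 0.0],         # s0 linked to disease d0
                  [0.0, 1.0]])
disease_ok = np.array([1.0, 0.0])     # d0 inferred: rule satisfied
disease_bad = np.array([0.0, 0.0])    # d0 missing: rule violated

assert rule_penalty(symptom, links, disease_ok) == 0.0
assert rule_penalty(symptom, links, disease_bad) > 0.0
```

Frameworks like PyNeuraLogic express such rules declaratively rather than as hand-written penalties; the sketch only conveys the underlying algebraic idea.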

KANs decompose complex multivariate functions into simpler univariate functions. By applying the category-theoretic and symmetry-based constraints established in Stage 1, we will explore how KAN components can represent stable, algebraically coherent logical operators and how category theory can ensure compositional alignment with logical rules, treating logical connectives like AND or OR as morphisms preserved across transformations. Simultaneously, symmetry mathematics enforces invariances that maintain logical equivalences despite changes in input representation. We intend to train KAN modules so their spline-based functions give rise to logical connectives, as well as more complex quantifiers, and then leverage symbolic regression to translate the resulting functional approximations back into discrete logical formulas.
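In the spirit of the Kolmogorov-Arnold representation (a hand-built illustration, not a trained KAN), a two-input connective can be expressed entirely through univariate functions composed with addition; thresholding the output recovers logical AND:

```python
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

def soft_and(x: float, y: float, sharpness: float = 12.0) -> float:
    """phi(psi(x) + psi(y)): only univariate maps plus addition, KAN-style."""
    psi = lambda t: t                                # inner univariate functions
    phi = lambda s: sigmoid(sharpness * (s - 1.5))   # outer univariate function
    return phi(psi(x) + psi(y))

# On boolean inputs, thresholding at 0.5 recovers logical AND exactly.
for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert (soft_and(x, y) > 0.5) == bool(x and y)
```

In an actual KAN the inner and outer maps would be learned splines; symbolic regression over those splines is what the text proposes for translating them back into discrete formulas.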

 

We test these implementations through numerical simulations and benchmarking tasks. For example, we may explore training a PyNeuraLogic-based GNN with our symmetry constraints on a small medical ontology, where AIRIS-derived rules link symptoms to diseases. We evaluate how well the network can infer rare conditions from minimal data and whether the reasoning steps can be traced back to the embedded logic. Similarly, we may consider a KAN-based system to handle higher-order logic rules governing dynamically distributed networks, such as smart energy grids or market predictions.

 

These simulations are designed to test and validate that the theoretical principles from Stage 1 yield practical benefits. Our hypothesis is that embedding logic using category-theoretic and geometric constraints enhances interpretability, reduces data requirements, and improves reasoning.

Stage 3: Visualization and Demonstrations

In Stage 3, we scale up from controlled simulations to more complex domains and refine the usability and interpretability of our approach. We examine flexible reasoning scenarios such as integrating experiential rules (derived from AIRIS) that evolve as an environment changes, using higher-order logic to specify abstract constraints, and testing the robustness of internal structures as the DNN embeds new rules. A major imperative for the usability of our results is user experience, visualisation and availability of code. We will develop tools to visualise latent spaces and highlight how logical functionality manifests in DNNs. For instance, we will explore how to represent logical equivalences as specific symmetries in the embedding space, or illustrate how adding a new rule distorts, reorients, or refines certain submanifolds within that space. We expect such visualisations will enhance understanding and trust in the model’s reasoning process. In addition, we aim to demonstrate how our model generalises by applying it to reasoning tasks involving structured scientific data or cross-domain inference scenarios. We will test our hypothesis that our underlying architecture satisfies neuro-symbolic reasoning criteria and provides a universal design paradigm for embedding logic into DNNs. At the conclusion of the project, we aim to have produced a research report, academic articles and a code-base that demonstrates a new paradigm for constructing neuro-symbolic deep neural networks.

Open Source Licensing

MIT - Massachusetts Institute of Technology License

Links and references

Elija Perrier Google Scholar: link

Michael Bennett Google Scholar: link

Luke Marks Google Scholar: link

Proposal Video

Not Available Yet

Check back later during the Feedback & Selection period for the RFP that this proposal is applied to.

  • Total Milestones

    3

  • Total Budget

    $80,000 USD

  • Last Updated

    8 Dec 2024

Milestone 1 - Theoretical Foundations and Research Plan

Description

This milestone establishes the theoretical foundation for our approach to neuro-symbolic DNN emergence, where instead of manually imposing logical rules we leverage geometry, symmetry mathematics and category theory to guide network architecture design:

1. Identify appropriate symmetry groups and geometric invariants so logical rules become stable, invariant features of neural latent spaces, independent of coordinate choices or arbitrary data transformations.
2. Construct category-theoretic frameworks mapping data representations (e.g. graphs) to logical domains, ensuring logical morphisms are preserved under network operations and transformations.
3. Define algebraic conditions that ensure neural embeddings naturally implement logical connectives (AND, OR), quantifiers and inference steps as stable, continuous transformations, not mere heuristic constraints.
4. Incorporate these conditions into layer and architecture design so networks internally encode logical reasoning patterns rather than merely fitting statistical regularities.
5. Explore chosen symmetries and topological invariants to reveal how robust logical inference can arise dynamically without manually encoding rules.
6. Translate geometric and categorical insights into a formal blueprint that guarantees logically coherent, interpretable reasoning emerges from the network’s underlying mathematics.

Deliverables

Milestone 1 deliverables are as follows:

1. Research Plan: Formulate a detailed mathematical framework that leverages geometry, symmetry and category theory to ensure logical rules emerge naturally within deep neural network architectures. Specify target algebraic invariants, define the categories and functorial mappings that preserve logical semantics, and establish criteria for measuring robustness, scalability and stability of these emergent logical structures.
2. Literature Review: Survey current neuro-symbolic systems, with particular attention to existing geometric and category-theoretic approaches in deep learning. Identify where logic does not arise intrinsically, highlight which methods show partial success, and pinpoint gaps our framework can fill. This review will directly inform the selection of invariants, symmetry groups and categorical constructs necessary for stable, interpretable logic in neural embeddings.
3. Simulation Scoping: Propose simulation scenarios using Hyperon-based prototypes, applying well-defined geometric and categorical conditions to test the spontaneous emergence of logic in minimal ontology-based tasks and structured graph datasets. In consultation with Hyperon’s development team, define metrics for logical coherence, interpretability and stability. Outline initial experiments that validate the theoretical principles, ensuring that subsequent implementation and scaling rest on rigorously vetted designs.

Budget

$20,000 USD

Success Criterion

Achievement of the following:

  • Formalized Mathematical Framework: A clear, coherent theoretical model is established, linking geometry, symmetry, and category theory into a unified formalism. Logical concepts must be precisely defined as algebraic constructs in neural embeddings, ensuring that logical operations correspond to stable transformations identifiable within the network’s structure.
  • Defined Logical Invariants and Constraints: Identification of appropriate symmetry groups, invariants, and categorical structures that guarantee logical consistency. These conditions must enable emergent logic without relying on ad-hoc fixes, ensuring that even as the network adapts and generalizes, it preserves meaningful logical relations and inference rules.
  • Integration: The theoretical framework must incorporate and refine established ideas from the literature. This involves referencing known neuro-symbolic approaches, geometric methods, and category-theoretic constructs, confirming that our chosen path aligns with and extends the current state of knowledge.
  • Documentation: Documentation and proofs where needed for theoretical assumptions, chosen invariants, and category-theoretic mappings. Clear explanations, diagrams, and formal proofs ensure that future developers, researchers, and stakeholders can understand and trust the framework’s foundational principles, setting the stage for subsequent implementation and testing.

Milestone 2 - Simulation Implementation

Description

Stage 2 applies the principles from Stage 1 by implementing category-theoretic, geometric and symmetry-based logic embeddings in PyNeuraLogic and Kolmogorov-Arnold Networks (KANs):

1. PyNeuraLogic Integration: Embed experiential logic rules (e.g. from AIRIS) as algebraic constraints in GNN embeddings, ensuring learned representations respect logical structure. Impose geometric invariants so that transformations in graph topology do not disrupt logical meaning, thereby maintaining consistent inference.
2. KAN Logical Operators: Adapt KAN components to approximate logical connectives (AND, OR) and quantifiers as stable univariate functions. Utilise category theory to treat these connectives as morphisms preserving logical relations across compositional transformations, and apply symmetry mathematics to guarantee equivalences.
3. Symmetry for Robustness: Define symmetry groups acting on latent spaces so that logical invariants persist and are robust to shifting data, permutations and coordinate changes, mirroring the stability seen in physical systems.
4. Simulations: Test on minimal ontologies to see whether PyNeuraLogic-based GNNs can infer rare conditions from limited data. Track how logic guides reasoning steps and verify that embedded rules remain interpretable. Apply KAN-based reasoning to complex settings like smart energy grids or market predictions, and use performance assessment to compare reasoning clarity, data efficiency and generalisation against baseline models.

Deliverables

Milestone 2 deliverables include:

1. Literature Review Update: Expand the initial review to incorporate cutting-edge research on category-theoretic and geometric deep learning strategies, recent neuro-symbolic prototypes, and emerging best practices for embedding logic into neural systems. Highlight approaches that successfully align logic with continuous embeddings, ensuring our planned architectures draw on proven methods and avoid known pitfalls.
2. Simulations: Configure simulations that combine PyNeuraLogic and KAN modules under the algebraic, symmetry and categorical constraints defined in Stage 1. Select representative datasets—such as small ontologies or graph-based tasks—and outline metrics to assess interpretability, reasoning stability and data efficiency. These initial tests form a baseline for gauging how well theoretical principles translate into measurable improvements.
3. Visualisation and User Tools: Begin developing visualisation utilities and user interfaces tailored to PyNeuraLogic and KAN. Design tools that expose underlying logical structures, illustrate how symmetry transforms embeddings, and let researchers interactively inspect reasoning steps. These interfaces support iterative debugging, foster understanding and encourage wider adoption of our integrated neuro-symbolic approach.

Budget

$40,000 USD

Success Criterion

Achievement of the following:

  • Functional Prototype Integration: Demonstrating that PyNeuraLogic-based GNNs and KAN modules operate under the category-theoretic, geometric, and symmetry constraints defined in Stage 1. The resulting prototypes must demonstrate that logical rules, including those derived from experiential sources like AIRIS, are consistently and transparently embedded as algebraic relations within the network’s latent spaces.
  • Stable Logical Inference Under Transformation: Demonstrating that the implemented models, when exposed to changes in graph structure or data representation, preserve logical meaning and inference capabilities. Logical connectives and quantifiers remain identifiable as stable morphisms, and symmetry-based invariants ensure reasoning steps are robust against variations in input.
  • Performance Metrics: Demonstrating that initial simulations on small, well-defined datasets confirm that logic-driven constraints enhance interpretability, reasoning accuracy, and data efficiency compared to baseline models. These metrics provide early evidence that the theoretical principles yield tangible benefits in practice.
  • Documented Codebase and Procedures: A publicly accessible code repository, annotated scripts, and foundational tutorials are complete. They must enable other researchers, including Hyperon and MeTTa developers, to replicate, modify, and extend the approach, ensuring broader adoption and paving the way for more extensive testing in Stage 3.

Milestone 3 - Scaling Visualization and Demonstrations

Description

Stage 3 advances beyond initial simulations. Building on Stages 1 and 2, we now focus on dynamic scenarios where experiential rules derived from AIRIS evolve and higher-order logic imposes intricate constraints. Our aim is to verify that logical reasoning and structural coherence persist as the network adapts and scales, embedding new rules without compromising stability or interpretability. We concentrate on enhancing usability, visualisation and code availability in the context of Hyperon’s architectures. By creating tools to depict how logical functions shape the network’s latent geometry, we help users understand how logic emerges as stable symmetries or manifold deformations. Such visualisations illuminate the interplay between learned rules and underlying embeddings, fostering trust and providing insight into the reasoning processes. To validate generality, we apply our approach to diverse reasoning tasks, including structured scientific datasets and cross-domain inference. We test the hypothesis that our methods produce a universal paradigm for embedding logic into DNNs, ensuring robust, scalable neuro-symbolic reasoning. We aim to conclude the project by delivering research reports, academic publications and a publicly accessible codebase, together establishing a new method for constructing explainable, logic-grounded deep neural architectures based on Hyperon and MeTTa protocols.

Deliverables

1. Final Report and Academic Articles: The final report will summarise the results of our project, including category theory’s role in preserving logical morphisms, illustrate how geometric invariants and symmetry principles enhance the stability and interpretability of logic-infused DNNs, and confirm best practices for embedding logic in neural architectures. We expect first drafts of academic articles arising from the report to be available as well.
2. Simulations: We will deliver a final, fully curated code repository featuring finalised PyNeuraLogic and KAN implementations that embody the constraints and principles defined in previous stages, designed in collaboration with Hyperon and MeTTa developers for interoperability and extensibility. Accompanying scripts, documentation and tutorials will support dataset handling, transformation, performance verification and reproducibility, enabling stakeholders to easily extend and adapt our approach.
3. Visualization and App: Provide a set of visualization modules and interactive interfaces that allow users to inspect logic-driven embeddings and transformations within the DNNs. These tools will highlight how logical structures evolve as new rules are introduced and will demonstrate the system’s coherence and trustworthiness in real time.

Budget

$20,000 USD

Success Criterion

Achievement of the following:

1. Scalable and Generalizable Reasoning: Demonstration that our approach, when scaled, exhibits stable logical inference and structural coherence. Logical connectives and higher-order rules maintain integrity even as new experiential data or abstract constraints are introduced, confirming that the framework scales beyond controlled scenarios.
2. Visualisation and User Experience: Visualisation tools, interfaces, and applications enable stakeholders to intuitively explore latent spaces and track how logical rules shape neural representations. Users can identify logical symmetries, observe shifts in embeddings as rules evolve, and confidently navigate the system’s reasoning processes, fostering trust and accelerating adoption.
3. Robust Cross-Domain Demonstrations: Multiple testbeds—ranging from structured scientific datasets to cross-domain inference tasks—validate protocols using PyNeuraLogic and KANs within the Hyperon framework. In each case, logical coherence, interpretability, and robust inference persist, reinforcing the notion of a universal paradigm for embedding logic into deep neural architectures.
4. Publicly Documented Outcomes: Final research reports, academic publications, and publicly accessible codebases are delivered, fully describing the methods, visualizations, and lessons learned. These resources serve as a comprehensive reference for future projects, ensuring that the concepts, techniques, and findings drive ongoing innovation.


Expert Ratings

Reviews & Ratings

Group Expert Rating (Final)

Overall

3.6

  • Feasibility 4.0
  • Desirability 4.0
  • Usefulness 3.8

The proposal received high ratings from reviewers, but the experts ultimately selected another winner for strategic relevance. There is potential interest in funding this work in a subsequent round.

  • Expert Review 1

    Overall

    3.0

    • Compliance with RFP requirements 3.0
    • Solution details and team expertise 3.0
    • Value for money 3.0
    Interesting proposal offering unique aspects; however, it provides few technical details.

    Not many details are provided. I would like to see more details on the PRIMUS components research, including experiential learning systems and how rules or logic can be encoded and used in the PyNeuraLogic or KAN networks, the integration points, and how it will improve overall performance. The team members have experience in what they are proposing and are current members of the AGI community.

  • Expert Review 2

    Overall

    4.0

    • Compliance with RFP requirements 5.0
    • Solution details and team expertise 5.0
    • Value for money 4.0
    Solid though complex w risks

    Strong proposal leveraging category theory, symmetry mathematics, and geometry to unify symbolic and neural paradigms, aligning well with RFP goals. Clear plan from theory to practical simulations and visualizations, with applications in structured domains like medical ontologies and energy grids. Challenges are high complexity, scalability risks, and limited implementation details. Promising high-impact potential; funding recommended.

  • Expert Review 3

    Overall

    2.0

    • Compliance with RFP requirements 2.0
    • Solution details and team expertise 2.0
    • Value for money 2.0
    Interesting ideas regarding latent space structures

    "The goal is to investigate how these architectures can embed logic rules derived from experiential systems like AIRIS or user-supplied higher-order logic, and apply them to improve reasoning in graph neural networks (GNNs), LLMs, or other DNNs." Without further details this is hard to buy. GNNs operate with backprop; how do we make them benefit from AIRIS-like rules even when they are embedded in the network? "we may explore training a PyNeuraLogic-based GNN with our symmetry constraints on a small medical ontology, where AIRIS-derived rules link symptoms to diseases." Here I point out that AIRIS is not a suitable choice to model such non-deterministic relations. However, the authors' effort to embed AIRIS-like rules into a GNN could turn out to be promising. I also like the idea of exploring how manifolds with mathematical properties could arise and of using them for cognitive purposes. Latent space structures like that could turn out to be very useful for AI in the future, even if far off at the current stage.

  • Expert Review 4

    Overall

    5.0

    • Compliance with RFP requirements 5.0
    • Solution details and team expertise 5.0
    • Value for money 5.0
    It's a beautiful proposal that directly addresses the RFP's points in an elegant and original way.

    The idea to use categorical and geometrical constraints to structure bridges btw neural and symbolic makes sense, and the team seems to have a plan for fleshing out this idea in a practical way. It seems this could be a significant addition to the Hyperon conceptual and software approach.

  • Expert Review 5

    Overall

    4.0

    • Compliance with RFP requirements 5.0
    • Solution details and team expertise 5.0
    • Value for money 5.0

    Interesting proposal grounding the construction of neuro-symbolic architectures in physics-inspired complex systems mathematics, leveraging specific tools from geometry, symmetry mathematics and category theory.
