Luke Mahoney (MLabs)
Project Owner, Grant Manager
A widespread concern regarding AGI is the possibility that we lose control of the "System," leading to disastrous consequences. Yet there is another, less-discussed failure mode: gradual drift into a state of divergent oscillation, a failure mode common in complex systems. This possibility is exacerbated in highly distributed systems, where a cascade of seemingly unimportant local changes can quickly accumulate into a system-wide global event (e.g., the Flash Crash of 2010 and the Northeast Blackout of 1965). Not enough has been done to address this aspect of AGI. We see an opportunity for the analysis of neuro-symbolic models such as KANs as a means to redress the balance.
This RFP invites proposals to explore and demonstrate the use of neuro-symbolic deep neural networks (DNNs), such as PyNeuraLogic and Kolmogorov-Arnold Networks (KANs), for experiential learning and/or higher-order reasoning. The goal is to investigate how these architectures can embed logic rules derived from experiential systems like AIRIS or user-supplied higher-order logic, and apply them to improve reasoning in graph neural networks (GNNs), LLMs, or other DNNs.
A comprehensive comparative study of KANs' and MLPs' ability to recover the underlying dynamics of known chaotic systems from both clean and noisy data. This initial milestone is largely experimental and will assess whether KANs have better modelling capabilities for the governing ODEs. We will evaluate the approach on a collection of standard attractors: for example, the Rössler, Lorenz, Dadras, Langford, Thomas, Sprott, and Halvorsen attractors. Key to the evaluation is how well KANs are able to model the dynamics from a time-delay embedding as we add noise to the observations, and whether symbolification helps with tolerance to noisy input.
A detailed write-up
$15,000 USD
The write-up should be detailed and effectively communicate our findings, methods, and datasets used for analysis. Code will also be provided to allow replication of our experimental results.
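As an illustration of the data pipeline this milestone describes (not our actual KAN or MLP models), the sketch below simulates the Lorenz system, builds a time-delay embedding of a noisy scalar observation, and fits a simple linear one-step predictor as a baseline; all parameter choices (delay, embedding dimension, noise level) are illustrative placeholders.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz ODE system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(f, s, dt):
    """One classical Runge-Kutta 4 integration step."""
    k1 = f(s); k2 = f(s + dt / 2 * k1); k3 = f(s + dt / 2 * k2); k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(T=50.0, dt=0.01, s0=(1.0, 1.0, 1.0)):
    n = int(T / dt)
    traj = np.empty((n, 3))
    s = np.array(s0, dtype=float)
    for i in range(n):
        traj[i] = s
        s = rk4(lorenz_rhs, s, dt)
    return traj

def delay_embed(x, dim=3, tau=10):
    """Time-delay embedding: rows are (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)

# Observe only the x-coordinate, corrupted by additive noise.
x = simulate()[:, 0]
x_noisy = x + 0.1 * np.random.default_rng(0).normal(size=x.shape)
E = delay_embed(x_noisy)

# Baseline: linear least-squares one-step prediction of the newest coordinate.
X, Y = E[:-1], E[1:, -1]
A = np.c_[X, np.ones(len(X))]
w, *_ = np.linalg.lstsq(A, Y, rcond=None)
rmse = np.sqrt(np.mean((A @ w - Y) ** 2))
```

A KAN or MLP regressor would replace the linear least-squares step, and the comparison would sweep the noise amplitude.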
In this phase, we will use the models approximated in Milestone 1 as the basis for a rigorous perturbation analysis. We can extract the rate at which two almost-identical states of the model diverge as they evolve, most readily encapsulated by the Lyapunov exponents of the phase-space model. These exponents can be factorized into two main components: a temporal component describing how quickly the rate of evolution might change, and a radial component describing how quickly the state of the evolution might change. We will leverage the reasoning and symbolic capabilities of KANs to determine Lyapunov exponents in complex systems and compare these results with those of regular MLPs.
A detailed write-up
$5,000 USD
The write-up should be detailed and effectively communicate our findings, methods, and datasets used for analysis. Code will also be provided to allow replication of our experimental results.
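For concreteness, the standard numerical baseline we would compare against is the Benettin-style estimate of the largest Lyapunov exponent: evolve two nearby trajectories, accumulate the log of their separation growth, and renormalize periodically. The sketch below applies this to the Lorenz system directly (the milestone would instead apply it to the learned KAN/MLP surrogates); integrator and renormalization settings are illustrative.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(f, s, dt):
    k1 = f(s); k2 = f(s + dt / 2 * k1); k3 = f(s + dt / 2 * k2); k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def largest_lyapunov(T=100.0, dt=0.01, d0=1e-8, renorm_every=10):
    s = np.array([1.0, 1.0, 1.0])
    for _ in range(2000):           # discard transient, land on the attractor
        s = rk4(lorenz_rhs, s, dt)
    p = s + np.array([d0, 0.0, 0.0])  # perturbed companion trajectory
    log_sum, n_steps = 0.0, int(T / dt)
    for i in range(1, n_steps + 1):
        s = rk4(lorenz_rhs, s, dt)
        p = rk4(lorenz_rhs, p, dt)
        if i % renorm_every == 0:
            d = np.linalg.norm(p - s)
            log_sum += np.log(d / d0)
            p = s + (p - s) * (d0 / d)  # renormalize separation to d0
    return log_sum / (n_steps * dt)

le = largest_lyapunov()
```

For the standard Lorenz parameters the accepted value is roughly 0.9; a learned surrogate model should reproduce a similar figure if it has captured the dynamics.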
We will now have the instrumentation available to monitor a complex system and exert dynamical control on it to prevent it from entering a state of divergent oscillation. At this stage we will still be using well-known strange attractors as our complex systems, just as in Milestones 1 and 2, while building a KAN architecture to keep the dynamics under control. For example, the Lorenz attractor has two lobes to its chaotic orbit: a KAN will be used to model the dynamics and estimate the Lyapunov exponents as the system evolves. Thereafter we build a KAN to produce control inputs (extra terms in the ODEs of the system representing external forces) so that the system dynamics are constrained to one lobe only. Our aim is to detect the drift into the chaotic regime early enough that the control inputs are minimal. This KAN controller will keep the system in check and ensure that the system dynamics do not diverge into undesired behaviors.
A detailed write-up
$10,000 USD
The write-up should be detailed and effectively communicate our findings, methods, and datasets used for analysis. Code will also be provided to allow replication of our experimental results.
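As a minimal stand-in for the KAN controller described above, the following sketch confines the Lorenz system to its positive lobe using simple proportional full-state feedback toward the fixed point on that lobe. This is not the proposed method, only an illustration of what "extra control terms in the ODEs" means; the gain k is an arbitrary placeholder.

```python
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
x_eq = np.sqrt(beta * (rho - 1.0))
C_plus = np.array([x_eq, x_eq, rho - 1.0])  # fixed point on the positive lobe

def controlled_rhs(s, k=5.0):
    """Lorenz dynamics plus a proportional feedback term pulling toward C_plus."""
    x, y, z = s
    drift = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return drift - k * (s - C_plus)  # control input: extra forcing term

def rk4(f, s, dt):
    k1 = f(s); k2 = f(s + dt / 2 * k1); k3 = f(s + dt / 2 * k2); k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

s, dt = np.array([1.0, 1.0, 1.0]), 0.01
xs = []
for _ in range(3000):
    s = rk4(controlled_rhs, s, dt)
    xs.append(s[0])
```

A learned controller would aim to achieve the same confinement with far smaller control inputs, applied only when drift toward the other lobe is detected.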
This next milestone applies the reasoning power of KANs to the behavior of neural networks themselves. It is well known that there can be issues of stability in deep learning architectures, adversarial attacks being an example (Szegedy et al., 2013). Since any deep neural network can be seen as a complex system that transforms its input variables one step at a time, it is possible to determine the attractor behavior and Lyapunov exponents of the network itself. This helps us understand how stable a neural network is under perturbations of initial conditions, i.e., how susceptible an architecture is to noise. This analysis can apply during training, inference, or fine-tuning. Therefore, learning the complex attractor system of a given neural network will give insight into the status of its architecture and learning methods. Specifically, using an external Kolmogorov-Arnold Network and taking the per-layer outputs of a neural network, we will be able to learn a latent representation of the DNN dynamics, exploiting the KAN's higher expressiveness and reasoning capabilities. While it is our ultimate ambition to do this for the large DNNs that are often regarded as stepping stones to AGI (ChatGPT and other LLMs, for example), we will restrict our investigations at this stage to more manageable yet complex networks such as ResNet. A preliminary analysis of several candidate DNNs will be used to decide which architecture and problem domain form the basis of this milestone.
A detailed write-up & data plots.
$20,000 USD
The detailed write-up will cover our methods and findings as well as the datasets used for analysis, accompanied by code to allow replication of the experiments. The data plots will clearly represent the learning procedure and results.
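The idea of treating depth as a time axis can be sketched very simply: feed two nearby inputs through the same stack of layers and track how their separation grows per layer, a finite-depth analogue of a Lyapunov exponent. The toy network below uses random tanh layers (with a weight scale deliberately in the expansive regime) rather than a trained ResNet, so all architecture choices here are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 64, 30

# Random weight matrices; scale 2/sqrt(width) puts a tanh network in the
# regime where small input perturbations are amplified layer by layer.
Ws = [rng.normal(scale=2.0 / np.sqrt(width), size=(width, width))
      for _ in range(depth)]

def forward(h):
    """Apply every layer, returning the per-layer activations."""
    outs = []
    for W in Ws:
        h = np.tanh(W @ h)
        outs.append(h.copy())
    return outs

h0 = rng.normal(size=width)
eps = 1e-6
acts_a = forward(h0)
acts_b = forward(h0 + eps * rng.normal(size=width))

# Log separation between the two runs at each layer (depth plays the role of time).
log_div = [np.log(np.linalg.norm(a - b)) for a, b in zip(acts_a, acts_b)]
rate = (log_div[-1] - log_div[0]) / (depth - 1)  # average per-layer expansion
```

In the milestone, an external KAN would be fit to these per-layer activation sequences to model the latent dynamics, rather than measuring divergence directly as done here.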
Finally, the main objective of this proposal is to guide a deep learning architecture to an attractor that stabilizes its output under perturbations, noise, and incomplete data. By using KANs as an external observer and controller coupled to another neural network, we can understand the latent dynamics of that network and control its trajectory in parameter space so as to: boost the reasoning capabilities of the network; introduce a dynamical mnemonic effect, augmenting recall and avoiding overfitting; and improve interpretability thanks to the attractor modeling in latent space, thus enhancing the overall performance of the system. To assess the effect of the KAN controllers on DNNs, well-established model architectures will be used. In this scenario, Kolmogorov-Arnold Networks will act as an assisting agent that aids the deep architecture in reasoning, not only creating complex logic rules but also providing mathematical analysis even when information is missing or input data is only partially available.
Complete project summary & architecture code
$25,000 USD
The complete project summary will be detailed, clearly stating the methods used as well as any parameters or random seeds needed for reproducibility. The summary includes, but is not limited to: a mathematical description, comparison tables, and plots of training and testing. The code will incorporate a final architecture using KANs for control of a neural network, and will include an example application ready to use.
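To make the KAN ingredient itself concrete: a KAN layer replaces fixed scalar activations with learnable univariate functions on each edge, in the spirit of the Kolmogorov-Arnold representation. The minimal sketch below fits one learnable univariate function per input (parameterized by a piecewise-linear "hat" basis, solved in closed form by least squares) to an additive target; real KANs use spline bases, multiple layers, and gradient training, so this is a simplified illustration only.

```python
import numpy as np

def hat_basis(x, knots):
    """Piecewise-linear 'hat' functions on a uniform knot grid.

    A weighted sum of these hats is a learnable univariate function,
    the basic building block of a KAN edge."""
    h = knots[1] - knots[0]
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / h)

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(2000, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2  # additive ground truth (illustrative)

knots = np.linspace(-2.0, 2.0, 17)
# One univariate function per input, summed: a single KAN-style layer.
Phi = np.hstack([hat_basis(X[:, 0], knots), hat_basis(X[:, 1], knots)])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
```

Because each edge function can be read off directly (here, as a piecewise-linear curve over the knots), this structure is what makes KAN-based observers comparatively interpretable.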
In this final milestone we will collate the insights discovered in the project as a whole and combine the individual milestone reports into a cohesive document. We will use this as the basis of a research paper for dissemination to SingularityNET and the wider community.
A final research paper
$5,000 USD
The research paper will describe all our findings and detail our experimental results. Success may also extend to submitting this paper to a suitable conference, either in the AGI field, such as the Conference on Artificial General Intelligence (AGI), or to a neural-network audience, such as NeurIPS (the Conference on Neural Information Processing Systems).
© 2025 Deep Funding