K-A Networks for Complex System Control


Overview

A widespread concern regarding AGI is the possibility that we lose control of the system, with disastrous consequences. Yet there is another, less-discussed failure mode: gradual drift into a state of divergent oscillation, a failure common to complex systems. This is a notable risk in highly distributed systems, where a cascade of individually unimportant local changes accumulates into a system-wide global event (e.g., the Flash Crash of 2010, in which a small failure escalated rapidly into a major problem). Not enough has been done to address this aspect of AGI. We see an opportunity for the analysis of neuro-symbolic models such as Kolmogorov-Arnold Networks to redress the balance.

RFP Guidelines

Neural-symbolic DNN architectures

Complete & Awarded
  • Type: SingularityNET RFP
  • Total RFP Funding: $160,000 USD
  • Proposals: 17
  • Awarded Projects: 1
SingularityNET
Apr. 14, 2025

This RFP invites proposals to explore and demonstrate the use of neural-symbolic deep neural networks (DNNs), such as PyNeuraLogic and Kolmogorov-Arnold Networks (KANs), for experiential learning and/or higher-order reasoning. The goal is to investigate how these architectures can embed logic rules derived from experiential systems like AIRIS or user-supplied higher-order logic, and apply them to improve reasoning in graph neural networks (GNNs), LLMs, or other DNNs. Bids are expected to range from $40,000 to $100,000.

Proposal Description

Company Name (if applicable)

MLabs LTD

Project details

We propose an in-depth investigation into how Kolmogorov-Arnold Networks can be used as part of an AGI system operating in a complex and dynamic environment. While KANs might form part of the modular information processing infrastructure, in this study we are more interested in how they might form part of the overarching control system which maintains (dynamic) equilibrium of the AGI architecture as a whole. KANs offer a bridge between symbolic and sub-symbolic methods for information processing, and enable us to use techniques from complex systems theory to characterise the chaotic regions of the underlying phase space.

Kolmogorov-Arnold Networks (KANs) have received much attention over the past few months. KANs are neural networks similar to Multi-Layer Perceptrons (MLPs), but they learn the activation functions of the neurons, not just the weights between neurons. In the standard configuration, a KAN uses piecewise polynomials (B-splines) to model the activation function of each neuron, the input of which is a projection of the data into some subspace. The Kolmogorov-Arnold superposition theorem shows that any multivariate continuous function can be represented as a superposition of continuous univariate functions, which motivates the claim that KANs are universal approximators.
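
To make this architectural difference concrete, here is a minimal sketch of a single KAN-style layer in PyTorch. For brevity it parameterises each learned edge function with Gaussian radial basis functions as a stand-in for the B-splines used in the reference implementation; all dimensions and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class KANLayer(nn.Module):
    """One KAN-style layer: a learnable univariate function per edge."""

    def __init__(self, in_dim, out_dim, num_basis=8, x_min=-2.0, x_max=2.0):
        super().__init__()
        # Fixed basis-function centres spread over the expected input range.
        self.register_buffer("centres", torch.linspace(x_min, x_max, num_basis))
        self.width = (x_max - x_min) / num_basis
        # One coefficient vector per (input, output) edge; these weights
        # *are* the learned activation functions.
        self.coeffs = nn.Parameter(0.1 * torch.randn(in_dim, out_dim, num_basis))

    def forward(self, x):
        # x: (batch, in_dim) -> basis responses: (batch, in_dim, num_basis)
        phi = torch.exp(-((x.unsqueeze(-1) - self.centres) / self.width) ** 2)
        # Evaluate every edge function and sum over the inputs, mirroring
        # the Kolmogorov-Arnold superposition structure: (batch, out_dim)
        return torch.einsum("bik,iok->bo", phi, self.coeffs)

# e.g. a two-layer KAN mapping 3 inputs to 1 output:
model = nn.Sequential(KANLayer(3, 5), KANLayer(5, 1))
out = model(torch.randn(16, 3))  # shape (16, 1)
```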

There is some experimental evidence that KANs can model complex datasets in fewer iterations than equivalent Deep Neural Networks (DNNs) while also utilising far fewer trainable parameters. Although there is concern that KANs are liable to overfit data under certain conditions, there is nonetheless a weight of experimental results indicating that KANs are a genuine alternative to MLPs and DNNs.

A second advantage of KANs is that they explicitly look for simple subspaces and can be used to identify the key variables of the model in the data. When coupled with a search over a library of simple mathematical functions, KANs are sometimes able to discover parsimonious symbolic functions that fit complex data well.
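
As an illustration of this workflow, the sketch below follows the interface of the open-source pykan reference implementation (method names have varied between releases, so treat the exact calls as indicative): a small KAN is fitted to data from a known formula, and each learned edge function is then snapped to a library of elementary functions.

```python
import torch
from kan import KAN, create_dataset

# Toy target: f(x, y) = exp(sin(pi x) + y^2), the standard pykan example.
f = lambda x: torch.exp(torch.sin(torch.pi * x[:, [0]]) + x[:, [1]] ** 2)
dataset = create_dataset(f, n_var=2)

model = KAN(width=[2, 5, 1], grid=5, k=3)   # 2 inputs, 5 hidden nodes, 1 output
model.fit(dataset, opt="LBFGS", steps=50)   # learn the spline coefficients

# Snap each learned edge function to its best match from a small library
# of elementary functions, then read off a closed-form expression.
model.auto_symbolic(lib=["x", "x^2", "sin", "exp"])
print(model.symbolic_formula()[0][0])       # the recovered formula
```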

The third and final advantage of KANs is their demonstrated ability to model systems of partial differential equations. Several authors have noted that KANs are well suited to this task and have reported excellent results on toy data.

We therefore have in KANs a means of:

  • modelling complex datasets parsimoniously and efficiently
  • extracting symbolic representations of the model
  • modelling systems of partial differential equations

It is this combination of capabilities that we intend to explore in detail in this project. We will start with simple systems of differential equations, such as the famous Lorenz system, and reproduce the results obtained by other authors. Crucially, we will also characterise the ability of KANs to model these equations accurately when the observed data is subject to noise (a step which often proves to be the undoing of other modelling approaches, including MLPs).
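
As a concrete example of the data regime for this first step, the sketch below integrates the Lorenz system with SciPy and corrupts the resulting observations with additive Gaussian noise; the noise level shown is an arbitrary illustrative choice, and in the project it would be swept systematically.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz system."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0.0, 50.0, 5000)
sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], t_eval=t_eval)

clean = sol.y.T                     # observations, shape (5000, 3)
noise_level = 0.05                  # relative noise amplitude to sweep
rng = np.random.default_rng(0)
noisy = clean + noise_level * clean.std(axis=0) * rng.standard_normal(clean.shape)
```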

Using Takens’ time-delay embedding theorem, we can approximate the phase space of the underlying system and recover the system dynamics as data points for estimating the governing equations. In doing so, we can model a complex system using only observations of its behaviour, without access to the latent state of the system.
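
A minimal version of this reconstruction step is sketched below: lagged copies of a single observed coordinate are stacked into delay vectors. The embedding dimension and lag are illustrative; in practice they would be selected with standard heuristics such as false nearest neighbours and average mutual information.

```python
import numpy as np

def delay_embed(series, dim=3, tau=10):
    """Stack lagged copies of a scalar series into delay vectors
    [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)]."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# e.g. reconstruct a 3-D phase space from one observed coordinate alone:
x_obs = np.sin(np.linspace(0, 40, 2000))      # stand-in for a measured signal
embedded = delay_embed(x_obs, dim=3, tau=10)  # shape (1980, 3)
```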

Having built a KAN to model this behaviour, we can determine the Lyapunov exponents of the model and track the states of the original system that can lead to chaotic dynamics. Once such states are identified, we can offer help either in an open-loop mode, where the user is warned of the chaotic region and of the likely evolution of the system dynamics that may follow, or in a closed-loop mode, where the KAN outputs are used directly to damp the system dynamics and prevent an escalation to the chaotic regime.

In this way the KAN infrastructure, acting as an assisting neural module, will interact with the main AI architecture, modifying its weights and providing an alternative path for training and inference. The module will therefore be able to assist in a variety of tasks: attentional, linguistic, mathematical, or logical. Moreover, since it is an assisting module, it is not restricted to a single architecture; it can be combined with any of the popular AI networks. This means it can be used with the Hyperon infrastructure in general and the PRIMUS cognitive architecture in particular.

Our analysis will be written up as a research paper, all of our experimental setups will be made accessible to the SingularityNET community, and our final control system will be developed into a demonstrator, with a well-known neural network architecture as the main module.

Challenges

The project has a number of technical challenges, which we have tried to mitigate with up-front investigations.

First is robustness to noise. It is well known that the recovery of complex system dynamics using Takens’ time-delay embedding theorem is somewhat susceptible to observation noise. One of our key investigations is to assess how much more robust KANs are to such noise than other models such as MLPs or DNNs. While this is currently an open theoretical question, we have enough empirical evidence of KANs robustly modelling noisy observations of dynamical systems to be confident that noise will not prevent us from completing the project.

Second is the identification of a suitable AI system to model in detail. We desire a DNN which is simple enough that we can develop a KAN control system and discover verifiable insights into its chaotic behaviour, yet complicated enough that the mental leap to controlling a complete AGI system is not implausible. Our investigations thus far indicate that a DNN implementation such as ResNet would fit these requirements, although we will conduct further research before settling on our example.

Finally, this is an original project investigating novel ideas. While there is some precedent for our approach, and a small body of literature on related techniques, we are breaking new ground and there is inevitably a degree of technical risk. We have assigned senior individuals to this project who, we believe, are best positioned to troubleshoot any difficulties encountered along the way. We will also hold regular reviews and workshops with the wider MLabs AI team, and we are very open to receiving help from the SingularityNET community if needed. We trust that, overall, the technical risks are well contained.

Open Source Licensing

MIT - Massachusetts Institute of Technology License

Links and references

Website: https://www.mlabs.city/

RIGEL DFR3 project: https://github.com/mlabs-haskell/rigel

NEURAL SEARCH DFR4 project: https://github.com/mlabs-ai/neural-search

Proposal Video

Not available yet


  • Total Milestones: 6
  • Total Budget: $80,000 USD
  • Last Updated: 14 May 2025

Milestone 1 - Known Complex Systems Dynamics Modelling

Description

A comprehensive comparative study of the ability of KANs and MLPs to recover the underlying dynamics of known chaotic systems from both clean and noisy data. This initial milestone is largely experimental and will assess whether KANs have better modelling capabilities for the governing differential equations. We will evaluate the approach on a collection of standard attractors: for example, the Rössler, Lorenz, Dadras, Langford, Thomas, Sprott, and Halvorsen attractors. Key to the evaluation is how well KANs can model the dynamics from a time-delay embedding as we add noise to the observations, and whether symbolification helps with tolerance to noisy input.

Deliverables

A detailed write-up

Budget

$15,000 USD

Success Criterion

The write-up should be detailed and effectively communicate our findings, methods, and datasets used for analysis. Code will also be provided to allow replication of our experimental results.

Milestone 2 - Lyapunov Exponent Modelling

Description

In this phase we will use the models approximated in Milestone 1 as the basis for a rigorous perturbation analysis. We can extract the rate at which two almost identical states of the model diverge as they evolve, most readily encapsulated by the Lyapunov exponents of the phase-space model. Lyapunov exponents can be factorised into two main components: a temporal component describing how quickly the rate of evolution might change, and a radial component describing how quickly the state of the evolution might change. We will leverage the reasoning and symbolic capabilities of KANs to determine Lyapunov exponents in complex systems and compare these results with regular MLPs.
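
For reference, a classical numerical baseline for the largest Lyapunov exponent, against which our model-derived estimates can be sanity-checked, is the Benettin-style two-trajectory method sketched below; the step size, step count, and tolerances are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def largest_lyapunov(x0, d0=1e-8, dt=0.1, steps=1000):
    """Average exponential divergence rate of two nearby trajectories,
    renormalising their separation after every short integration step."""
    a = np.asarray(x0, dtype=float)
    b = a + np.array([d0, 0.0, 0.0])
    total = 0.0
    for _ in range(steps):
        a = solve_ivp(lorenz, (0.0, dt), a, rtol=1e-9, atol=1e-9).y[:, -1]
        b = solve_ivp(lorenz, (0.0, dt), b, rtol=1e-9, atol=1e-9).y[:, -1]
        d = np.linalg.norm(b - a)
        total += np.log(d / d0)
        b = a + (b - a) * (d0 / d)   # rescale the separation back to d0
    return total / (steps * dt)

print(largest_lyapunov([1.0, 1.0, 1.0]))  # roughly 0.9 for the Lorenz system
```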

Deliverables

A detailed write-up

Budget

$5,000 USD

Success Criterion

The write-up should be detailed and effectively communicate our findings, methods, and datasets used for analysis. Code will also be provided to allow replication of our experimental results.

Milestone 3 - Complex Systems Dynamical Control

Description

We will now have the instrumentation available to monitor a complex system and exert dynamical control on that system to prevent it from entering a state of divergent oscillation. At this stage we will still be using well-known strange attractors as our complex systems, just as in Milestones 1 and 2, while building a KAN architecture to keep the dynamics under control. For example, the Lorenz attractor has two lobes to its chaotic orbit: a KAN will be used to model the dynamics and estimate the Lyapunov exponents as the system evolves. Thereafter we build a KAN to produce control inputs (extra terms in the differential equations of the system, representing external forces) so that the system dynamics are constrained to one lobe only, as sketched below. We aim to detect the drift into the chaotic regime early enough that the control inputs are minimal. This KAN controller will keep the system in check and ensure that the system dynamics do not diverge into undesired behaviours.
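
The sketch below illustrates the shape of such a closed-loop experiment, with a hand-written proportional feedback law standing in for the eventual KAN controller: an extra forcing term pulls the state toward the fixed point on the positive-x lobe. The gain is an illustrative choice; detecting drift early enough to minimise this control effort is exactly what the milestone investigates.

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
X_STAR = float(np.sqrt(BETA * (RHO - 1.0)))  # x-coordinate of the C+ fixed point

def lorenz_controlled(t, s, gain=20.0):
    x, y, z = s
    u = -gain * (x - X_STAR)   # control input: an extra forcing term in dx/dt
    return [SIGMA * (y - x) + u, x * (RHO - z) - y, x * y - BETA * z]

sol = solve_ivp(lorenz_controlled, (0.0, 50.0), [1.0, 1.0, 1.0],
                t_eval=np.linspace(0.0, 50.0, 5000))
# With sufficient gain the trajectory should never cross to the negative-x lobe:
print("min x along trajectory:", sol.y[0].min())
```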

Deliverables

A detailed write-up

Budget

$10,000 USD

Success Criterion

The write-up should be detailed and effectively communicate our findings, methods, and datasets used for analysis. Code will also be provided to allow replication of our experimental results.

Milestone 4 - AI Systems Dynamics Modeling

Description

This next milestone applies the reasoning power of KANs to the behaviour of neural networks. It is well known that there can be issues of stability in deep learning architectures; adversarial attacks are one such example (Szegedy et al., 2013). Since any deep neural network can be seen as a complex system that transforms its input variables one step at a time, it is possible to determine the attractor behaviour and Lyapunov exponents of the network itself. This helps us understand how stable a neural network is under perturbations of its initial conditions, i.e., how susceptible an architecture is to noise, whether during training, inference, or fine-tuning. Learning the complex attractor system of a given neural network will therefore give insight into the status of its architecture and learning methods. Specifically, by attaching an external Kolmogorov-Arnold Network to the per-layer outputs of a neural network, we will be able to learn a latent representation of the DNN dynamics, exploiting the KAN's higher expressiveness and reasoning capabilities. While it is our ultimate ambition to do this for the large DNNs which are often regarded as stepping stones to AGI (ChatGPT and other LLMs, for example), we will restrict our investigations at this stage to more manageable, yet still complex, networks such as ResNet. A preliminary analysis of several potential DNNs will be used to decide which architecture and problem domain will underlie this milestone.
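
As a sketch of the data-collection step, the code below uses PyTorch forward hooks to record summaries of the per-stage outputs of a torchvision ResNet-18; both the choice of architecture and the per-channel-mean summary are illustrative placeholders for whatever featurisation the milestone settles on.

```python
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # stand-in architecture
trajectory = []

def record(name):
    def hook(module, inputs, output):
        # Summarise each stage's output with a per-channel mean; a richer
        # featurisation could equally be fed to the external KAN.
        trajectory.append((name, output.detach().mean(dim=(0, 2, 3))))
    return hook

# Hook the four residual stages (layer1..layer4 in torchvision's ResNet).
for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(record(name))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))

for name, summary in trajectory:
    print(name, summary.shape)   # e.g. layer1 -> torch.Size([64])
```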

Deliverables

A detailed write-up & data plots.

Budget

$20,000 USD

Success Criterion

The detailed write-up will cover our methods and findings, as well as datasets used for analysis, and code to allow replication of the experiments. The data plots will represent the learning procedure and results.

Milestone 5 - AI Systems Dynamical Control

Description

Finally, the main objective of this proposal is to guide a deep learning architecture to an attractor that stabilises its output under perturbations, noise, and incomplete data. By using KANs as an external observer and controller coupled to another neural network, we can understand the latent dynamics of that network and control its trajectory in parameter space so as to: boost the reasoning capabilities of the network; introduce a dynamical mnemonic effect, augmenting recall and avoiding overfitting; and improve interpretability through the attractor modelling in latent space, thus enhancing the overall performance of the system. To assess the effect of the KAN controllers on DNNs, well-established model architectures will be used. In this scenario, Kolmogorov-Arnold Networks will act as an assisting agent that aids the deep architecture in reasoning, not only creating complex logic rules but also providing mathematical analysis even when information is missing or input data is only partially available.

Deliverables

Complete project summary & architecture code

Budget

$25,000 USD

Success Criterion

The complete project summary will be detailed, clearly stating the methods used as well as any parameters or random seeds needed for reproducibility. This summary includes, but is not limited to: a mathematical description, comparison tables, and plots of training and testing performance. The code will comprise a final architecture using KANs to control a neural network, together with an example application ready to use.

Milestone 6 - Summary Research Paper

Description

In this final milestone we will collate the insights discovered in the project as a whole and combine the individual milestone reports into a cohesive document. We will use this as the basis of a research paper for dissemination to SingularityNET and the wider community.

Deliverables

A final research paper

Budget

$5,000 USD

Success Criterion

The research paper will describe all our findings and detail our experimental results. Success may also extend to submitting this paper to a suitable conference, either in the AGI field, such as the Conference on Artificial General Intelligence (AGI), or to a neural network audience, such as the Conference on Neural Information Processing Systems (NeurIPS).
