Ternary Sheaf Network

Nils Kakoseos
Project Owner

Ternary Sheaf Network

Status

  • Overall Status

    ⏳ Contract Pending

  • Funding Transferred

    $0 USD

  • Max Funding Amount

    $70,000 USD

Funding Schedule

View Milestones
Milestone Release 1
$10,000 USD Pending TBD
Milestone Release 2
$15,000 USD Pending TBD
Milestone Release 3
$30,000 USD Pending TBD
Milestone Release 4
$15,000 USD Pending TBD

Project AI Services

No Service Available

Overview

The transformer architecture has proven efficient to train and run across many applications. But for deductive reasoning tasks in few-shot settings, where little or no training data exists, transformers have algorithmic limitations that make them suboptimal for applications requiring recurrence and the encoding of hyper-graph structures across tokens beyond simple cliques. We propose a novel neurosymbolic architecture operating directly on the graph structure of any domain-specific language (DSL). Moreover, we propose to deploy this learning architecture within a novel continual learning framework termed Ternary Semiself-Reinforced Learning (TSRL).

RFP Guidelines

Neuro-symbolic DNN architectures

Complete & Awarded
  • Type SingularityNET RFP
  • Total RFP Funding $160,000 USD
  • Proposals 9
  • Awarded Projects 2
SingularityNET
Oct. 4, 2024

This RFP invites proposals to explore and demonstrate the use of neuro-symbolic deep neural networks (DNNs), such as PyNeuraLogic and Kolmogorov Arnold Networks (KANs), for experiential learning and/or higher-order reasoning. The goal is to investigate how these architectures can embed logic rules derived from experiential systems like AIRIS or user-supplied higher-order logic, and apply them to improve reasoning in graph neural networks (GNNs), LLMs, or other DNNs.

Proposal Description

Company Name (if applicable)

Musing AB

Project details

Ternary Semiself-Reinforced Learning (TSRL): A Novel Neurosymbolic Architecture and Learning Framework

Introduction

Ternary Semiself-Reinforced Learning (TSRL) is a novel neuro-symbolic framework for online continual learning. It uses a broad generalisation of the transformer architecture that lets the AI system bypass text-like tokenisation and instead operate directly on the execution graph of a domain-specific language (DSL). This model execution environment could, for example, be a DSL specific to high-performance specialised inference hardware such as inference GPUs or domain-specific field-programmable gate arrays (FPGAs). Moreover, the framework features a novel version of test-time inference compute with better adaptation potential in continual learning environments: a structured learning process develops stronger reward models that facilitate artificial curiosity-driven exploration of the data space, as described in more detail below.

There exist two available POC implementations (attached to this application). One was developed in close connection with the recent edition of the “ARC challenge”, featuring the “abstraction and reasoning corpus” for benchmarking reasoning capabilities in AI systems.

The other POC, which exhibits the continual learning and test-time inference compute applied to online weather data for efficient extreme-weather forecasting, was a recent winner of the SingularityNET risk assessment hackathon (as the project proposal “Weather Sheaf”).

The GitHub repositories of these existing projects, which can be readily adapted and built upon to facilitate the proposed project, are provided to the application jury upon request.

MODEL ARCHITECTURE

To facilitate the above-described learning process, we need to develop a novel learning architecture capable of overcoming limitations inherent in current transformer architectures. In particular, to efficiently encode the hyper-graph structures required by our process, the transformer attention mechanism must be generalised to encode more complex hypergraph structures than the fully connected "clique" graph across input tokens used by ordinary vanilla attention. Moreover, to generalise the learning system towards tasks requiring more comprehensive and structured search processes, the forward pass needs to support recurrent processing and admit a theoretically infinite context window, algorithmically allowing any necessary search process across the output solution space as well as the input data space.

Consider the hyper-graph of vector embeddings of values in our DSL type system T, where a hyper-edge corresponds to an input value tuple of one of the primitives in the DSL. Combining the individual value nodes in a hyper-edge yields a particular valid input-value tuple for one of the primitive DSL functions. Abstracting one level up, we consider the graph over the edges of this value-tuple hyper-graph. The nodes of this graph are naturally interpreted as the primitive functions of the DSL, and its edges correspond either to application of one primitive to the output value of another, or to a product-combination of two existing values into a new "merged" value that is also contained in the set of valid input types of the DSL.
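As a minimal illustrative sketch (not the project's actual implementation), this two-level structure could be represented roughly as follows; `Primitive`, `DSLGraph`, and the toy integer primitives are hypothetical names introduced here for illustration:

```python
from dataclasses import dataclass, field
from itertools import product
from typing import Callable

@dataclass(frozen=True)
class Primitive:
    """A DSL primitive: a typed function over values of the type system T."""
    name: str
    input_types: tuple   # the hyper-edge signature: required input value types
    output_type: str
    fn: Callable

@dataclass
class DSLGraph:
    """Level 1: a hyper-graph over typed values, where a hyper-edge is a
    valid input tuple to some primitive. Level 2 (the graph over these
    hyper-edges) is induced by function application and product-merging."""
    primitives: list = field(default_factory=list)

    def valid_edges(self, values):
        """Enumerate all type-valid (primitive, input-tuple) hyper-edges
        reachable from the current pool of values."""
        edges = []
        for p in self.primitives:
            pools = [values.get(t, []) for t in p.input_types]
            edges.extend((p, args) for args in product(*pools))
        return edges

# A toy integer DSL (hypothetical example primitives).
dsl = DSLGraph([
    Primitive("inc", ("int",), "int", lambda x: x + 1),
    Primitive("add", ("int", "int"), "int", lambda x, y: x + y),
])
print(dsl.valid_edges({"int": [1, 2]}))  # all type-valid applications
```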

The output of the model operating on this input graph is interpreted as a beam search over output programs, executed as a random walk on this DSL graph and resulting in valid algorithms that compute input-output mappings, which are then evaluated against the learning criteria. This output defines the model's influence on its environment via the API connections of the DSL application on the data space, and via the API connections defining the data pipelines that provide the model and its learning environment with exogenously generated input data from, for example, the internet or any other external network of sensors. The process of generating model outputs is recurrent: it operates both on observed input data in the environment and on its own previously generated outputs, which can be combined with other input data into new valid input data for generating the model's next steps in the learning environment.
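Continuing the hypothetical sketch above, a single output program could be sampled as a random walk on the DSL graph as follows; in the actual system, the model's logits over input-output value pairs would bias each choice rather than a uniform `random.choice`:

```python
import random

def random_walk_program(dsl, start_values, steps=4, seed=0):
    """Sample one candidate program as a random walk on the DSL graph:
    repeatedly pick a type-valid primitive application, execute it, and
    feed the output back into the pool of available values."""
    rng = random.Random(seed)
    values = {t: list(vs) for t, vs in start_values.items()}
    trace = []
    for _ in range(steps):
        edges = dsl.valid_edges(values)
        if not edges:
            break
        prim, args = rng.choice(edges)   # uniform here; model-scored in practice
        out = prim.fn(*args)
        values.setdefault(prim.output_type, []).append(out)
        trace.append((prim.name, args, out))
    return trace  # a valid program: the sequence of executed applications

print(random_walk_program(dsl, {"int": [1, 2]}))
```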

We now introduce the following neural attention operator on the above-described ternary sheaf morphism structure. It computes attention scores from embedding vectors associated to S_i as a selection mechanism over embedding vectors associated to S_{i+1}. Starting in S_1 (the "sphere" of model state encodings), model states are encoded to vectors by a neural (GNN) operator M1_enc. A section s_1 of the sheaf on S_1 is then a collection of such vectors, selected by a GNN decoder M1_dec. Similarly, in S_2, input-output value mappings are encoded by the GNN M2_enc. Given a collection of embedding vectors s_1 (a section in S_1) and the graph of vector embeddings of S_2 encoded by M2_enc, we construct an attention operator:

att_{S_1:S_2} : (s_1 ∈ S_1, v ∈ S_2^emb) ↦ (Softmax(s_1 ∘ s_1)) ∘ v

This produces attention scores over the embedding vectors of S_2, which, similarly to the usual transformer architecture, are passed via an up-dimension projection through a channel-mixing model component to form a logits distribution over output classes. In this case, these classes are input-output value pairs in the DSL execution environment. The end result of the first step of the ternary attention process is then a section of decoded embedding vectors in S_2. The process continues analogously to construct a section of embedding vectors from the sheaf over S_3, which is then used to select the next section of embedding vectors in S_1, completing a single iteration of the ternary attention process.
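The following PyTorch sketch shows one plausible reading of a single selection step (a section on S_i selecting a section on S_{i+1}) under a standard dot-product interpretation of the formula above; the dimensions, the up-projection, and the channel-mixing MLP are illustrative assumptions, not the proposal's exact design:

```python
import torch
import torch.nn as nn

class TernaryStepAttention(nn.Module):
    """One step of the ternary attention process: a section of embedding
    vectors from sphere S_i selects a section from sphere S_{i+1} and
    emits logits over input-output value classes."""
    def __init__(self, d_model, d_hidden, n_classes):
        super().__init__()
        self.up_proj = nn.Linear(d_model, d_hidden)   # up-dim projection
        self.mix = nn.Sequential(nn.GELU(), nn.Linear(d_hidden, n_classes))  # channel mixing

    def forward(self, s_prev, v_next):
        # s_prev: (n_i, d) section selected on S_i
        # v_next: (n_{i+1}, d) embedding vectors of S_{i+1}
        scores = torch.softmax(s_prev @ v_next.T, dim=-1)  # attention over S_{i+1}
        section = scores @ v_next                          # decoded section in S_{i+1}
        logits = self.mix(self.up_proj(section))           # logits over I/O-value classes
        return section, logits

# One ternary iteration chains three such steps: S_1 -> S_2 -> S_3 -> S_1.
step = TernaryStepAttention(d_model=64, d_hidden=256, n_classes=1000)
s2, logits = step(torch.randn(8, 64), torch.randn(32, 64))
```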

CONTINUAL LEARNING WITHIN THE TSRL FRAMEWORK

The main motivation for the TSRL framework is to facilitate continual online learning while allowing users of the system to focus its attention on particular tasks. We envision a self-improving, online, continually learning system that both improves by attempting to solve user-provided tasks and embodies internal reward mechanisms encoding intentions of data exploration and learning that facilitate solutions to these tasks. A central problem in continual learning is that the learned model output distribution often diverges towards regions of increasing entropy, causing the system to forget learned features and get stuck in bad local equilibria of increasing uniformity in the output distributions. This often occurs due to drifts in the input data distribution, causing the model to overextend its capacity and try to learn too many redundant features of the data space.

The ternary sheaf attention mechanism described above is designed to radically remedy these problems. By allowing the model, given the current global structure of the learning environment across internal model states, output execution environment states, and the state of the reward model, to continuously "zoom in" on the currently most relevant region of model state space for operating on the currently most relevant region of execution environment space, it can best facilitate updates of the next model states via the most relevant reward model states, and so on. Hence the analogy with Penrose's ternary image, where the global structure of the sheaf on the previous "sphere" is compressed to the attention of a local region in the next ternary "sphere", and so forth.

Computationally, one central challenge in continual reinforcement learning is keeping the measure of entropy low globally within the neural learning system. We apply methods from algebra and harmonic analysis to define an information-theoretically motivated measure of "harmonic entropy". Heuristically, this can be understood as first obtaining a frequency decomposition across sections in S_i, and then learning filters on the vector representations in a section that obtain frequency spectra with minimal accumulated entropy in the spectral impulse response. For example, when the impulse comes from S_3, this is like taking a snapshot in time of the evolution of the reward signal and minimizing the entropy of the resulting resonance in sections selected across the vector representations of model states in S_1. For a more detailed exposition and the mathematical derivation, see the appendix provided in the linked Google Drive folder.
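As an information-theoretic stand-in (the actual measure uses the sheaf-theoretic decomposition from the appendix), a spectral-entropy sketch might look like this; the plain FFT over the section axis is an assumption made purely for illustration:

```python
import torch

def harmonic_entropy(section, eps=1e-9):
    """Illustrative 'harmonic entropy': frequency-decompose a section
    (n_vectors x d) along the section axis, normalise the power spectrum
    per channel, and return the mean Shannon entropy of that spectrum."""
    spectrum = torch.fft.rfft(section, dim=0)            # frequency decomposition
    power = spectrum.real ** 2 + spectrum.imag ** 2
    p = power / (power.sum(dim=0, keepdim=True) + eps)   # per-channel spectral distribution
    return -(p * (p + eps).log()).sum(dim=0).mean()      # low entropy = concentrated spectrum

# A learned filter on S_1 would be trained to minimise this entropy in the
# spectral response of selected sections to an impulse arriving from S_3.
loss = harmonic_entropy(torch.randn(128, 64))
```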

This structured construction of reward signals is designed to allow the system to freely explore the data space using artificial emotions that facilitate curiosity-driven search through the input data manifold. In particular, we want to:

  1. Allow the system to freely explore a set of APIs for online available data, and via efficient data pipelines select sources that are most "interesting" to the system’s current state and objectives.

  2. By varying the objective tasks, allow the system to optimize its ability to "learn to learn" new tasks by exploring the available input data APIs along new directed, curiosity-driven paths.

Since the system supports operating on any DSL, human inventiveness and creativity can be fully utilised to create new tasks deemed "interesting" by the user, while the system freely explores all available data streams using the described attention and search process. It fine-tunes its curiosity mechanisms to flexibly optimize performance across a wide array of different human-generated tasks. These "system 2" tasks are restricted to cases where correctness of outputs/solutions is binarily verifiable, in the form of satisfiability of criteria encoded in a binary property-vector (a minimal sketch follows below). A blockchain-based marketplace for model developers and users can be built on this infrastructure, as described below under project milestone 4.
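A minimal sketch of binary property-vector verification, with hypothetical example criteria:

```python
import numpy as np

def verify(solution, criteria):
    """Evaluate a candidate solution against a task's criteria and return
    the binary property-vector; the task counts as solved iff every
    entry is True."""
    return [bool(c(solution)) for c in criteria]

# Hypothetical example: a grid task with two verifiable properties.
criteria = [lambda g: g.shape == (3, 3), lambda g: int(g.sum()) == 9]
property_vector = verify(np.ones((3, 3)), criteria)
solved = all(property_vector)   # True here
```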

Links and references

As a recent winner of the SNET hackathon, I was informed about this opportunity on the deadline day. I nevertheless wanted to apply with this project, which is based on many years of research and was in particular developed during this year's ARC challenge. Apologies for any errors in the under-edited text, which had to be written in a few hours to meet the deadline.

Application attachments via the following link:

https://drive.google.com/drive/folders/1CYE7aPMboKXqiujnsY1RcjgyF24Kj6At?usp=sharing

Additional videos

Application attachments via the following link:

https://drive.google.com/drive/folders/1CYE7aPMboKXqiujnsY1RcjgyF24Kj6At?usp=sharing

Proposal Video

Not Available Yet

Check back later during the Feedback & Selection period for the RFP that this proposal is applied to.

Group Expert Rating (Final)

Overall

5.0

  • Feasibility 3.6
  • Desirability 3.4
  • Usefulness 3.8

New reviews and ratings are disabled for Awarded Projects

Overall Community

3.6

from 5 reviews
  • 5
    2
  • 4
    1
  • 3
    0
  • 2
    2
  • 1
    0

Feasibility

3.6

from 5 reviews

Viability

3.4

from 5 reviews

Desirability

3.8

from 5 reviews

Usefulness

0

from 5 reviews

Sort by

5 ratings
  • Expert Review 1

    Overall

    2.0

    • Compliance with RFP requirements 1.0
    • Solution details and team expertise 1.0
    • Value for money 1.0
    Interesting but out of scope.

    The proposal seems out of scope as it does not touch upon experiential learning system integration and high-level rule embeddings.

  • Expert Review 2

    Overall

    5.0

    • Compliance with RFP requirements 5.0
    • Solution details and team expertise 5.0
    • Value for money 5.0
    Excellent proposal

    Promising, innovative proposal with potential for high impact on continual learning and neuro-symbolic reasoning. Funding recommended.

  • Expert Review 3

    Overall

    2.0

    • Compliance with RFP requirements 2.0
    • Solution details and team expertise 1.0
    • Value for money 3.0
    New attention layer and combination with DSL

    A sheaf attention layer is proposed, which is potentially promising, but the combination with a DSL is not clear, nor how it can lead to proper neuro-symbolic integration, even though it might well be possible to learn embedding vectors for a DSL. It is also not clear how this can lead to a continual learning framework, as it implicitly depends on backpropagation, and no mention is made of how to overcome continual learning limitations. However, 2 stars are given because, if the combination with a DSL is possible, it could help embed sentences of the DSL, which might also work for MeTTa expressions at some point.

  • Expert Review 4

    Overall

    5.0

    • Compliance with RFP requirements 5.0
    • Solution details and team expertise 5.0
    • Value for money 5.0
    An ambitious yet practical and elegant proposal that hits all the bases of the RFP.

    This proposal is excellent and manages to be radically innovative yet also believable and practical, and to fit squarely into the requirements of the RFP. Super nice. I am not sure the proposer can really get quite all this much done within the time/money, but even substantial partial progress in this direction could be a major asset to Hyperon and the AGI field. w00t !!

  • Expert Review 5

    Overall

    4.0

    • Compliance with RFP requirements 5.0
    • Solution details and team expertise 5.0
    • Value for money 5.0

    A novel neuro-symbolic framework for online continual learning using a vast generalisation of the transformer architecture, enabling the AI system to bypass text-like tokenisation and instead operate directly on the execution graph of a domain-specific language (DSL).

  • Total Milestones

    4

  • Total Budget

    $70,000 USD

  • Last Updated

    3 Feb 2025

Milestone 1 - Implementation developed via existing POC

Status
😐 Not Started
Description

We have an existing implementation of a general type system encoding the graph structure of a given DSL (satisfying certain type restrictions), and also an existing PyTorch implementation of the proposed learning architecture. Next, we need to use these to develop a more robust implementation and data pipeline for a fully parallelized training run across GPU clusters. This includes thorough testing of the type system and DSL parsing mechanisms to ensure robust and efficient performance across general DSLs adhering to the required type restrictions. The model architecture will be implemented in PyTorch for the model components later used in the distributed training run, while we develop custom vector database indexing structures adhering to the requirements of the mathematical sheaf structure and vector representations of index groups. For this we plan to build custom PyTorch modules on top of the Faiss vector database framework, ensuring an efficient and consistent C++ framework for our implementation.
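A minimal sketch of the planned Faiss integration; the index layout needed for the sheaf structure and index groups is an assumption here, with exact inner-product search standing in for the custom indexing structures:

```python
import faiss
import numpy as np
import torch
import torch.nn as nn

class SectionIndex(nn.Module):
    """Store sections of embedding vectors in a Faiss inner-product index
    and retrieve the top-k neighbours of a query section, as a building
    block for attention-based section selection."""
    def __init__(self, d):
        super().__init__()
        self.index = faiss.IndexFlatIP(d)   # exact inner-product search

    def add(self, vectors: torch.Tensor):
        self.index.add(vectors.detach().cpu().numpy().astype(np.float32))

    def topk(self, queries: torch.Tensor, k=8):
        scores, ids = self.index.search(
            queries.detach().cpu().numpy().astype(np.float32), k)
        return torch.from_numpy(scores), torch.from_numpy(ids)

idx = SectionIndex(d=64)
idx.add(torch.randn(1024, 64))
scores, ids = idx.topk(torch.randn(4, 64), k=8)
```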

Deliverables

The implementation will provide a production-ready codebase encompassing the complete HSN architecture and its data inference environment. This includes fully documented PyTorch modules implementing the core neural network components, custom C++ extensions integrating with the Faiss framework for efficient vector operations, and a comprehensive type system implementation for DSL parsing and validation. The deliverable will include automated data pipeline configurations for distributed training, complete with efficient data loading mechanisms and custom collation functions optimized for GPU processing. The codebase will be accompanied by extensive technical documentation covering the architecture design, implementation details, and deployment procedures. This documentation will include detailed API specifications, system architecture diagrams, and comprehensive guides for extending the type system with new DSL features. Additionally, we will provide benchmark results demonstrating the system's performance characteristics across various operational scenarios and data scales.

Budget

$10,000 USD

Success Criterion

The implementation must satisfy rigorous technical and performance requirements to be considered successful. All components must pass a comprehensive suite of unit tests with at least 95% code coverage, including specific tests for type system validation, DSL parsing efficiency, and neural network operations. The system must demonstrate stable performance in distributed training scenarios, maintaining consistent throughput across multiple GPU nodes with scaling efficiency above 80%. Performance benchmarks must show that the custom vector database indexing structures achieve query times within 150% of baseline Faiss performance while maintaining the required mathematical properties. The type system implementation must successfully validate and process DSL specifications with parse times under 100ms for typical cases and maintain linear scaling with DSL complexity. The data pipeline must sustain a minimum throughput of 10,000 samples per second per GPU when operating in a distributed environment. Integration tests must verify seamless interaction between all system components, including proper handling of edge cases in the type system, correct propagation of gradients through custom PyTorch modules, and efficient data flow through the vector database infrastructure. The system must demonstrate stable convergence behavior across multiple training runs with different random seeds, maintaining consistent performance metrics within a 5% variance.

Link URL

Milestone 2 - Pre-training and reward model development

Status
😐 Not Started
Description

The HSN learning system is deployed in a self-reinforcing learning environment utilising a network of publicly available APIs across a variety of signals and input/output data types. Extensive transfer learning is performed from domain-specialised pre-trained FN models across multiple domains related to this variety of inputs/outputs. These APIs and corresponding pre-trained FN models include, without limitation: large corpora of symbolic datasets and language datasets (X); SOTA LLMs; computer vision datasets and audio datasets, in particular those relating to human emotional intent-driven input-output mappings, such as audio-visual data relating to music and dance; large physically grounded datasets, in particular the extensive publicly available weather data (see the provided Weather Sheaf POC!); as well as datasets from chemistry and microphysics, in order to cover data mappings across multiple levels of physical magnitude.

Deliverables

A developed reward model for the HSN learning system which balances R1, supervised performance on particular input-output example tasks across the varying I/O types, with R2, model-internal reward signals using various information-theoretically grounded measures (generalisations of the evidence lower bound using total correlation across I/O signals) encoding model perplexity and curiosity towards information-rich regions of the input data manifold with respect to the input/output data types and tasks. A minimal sketch of this balance follows below.
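A minimal sketch of the R1/R2 balance, assuming a simple linear mixing (the weighting scheme and the coefficient `beta` are illustrative assumptions, not the milestone's actual design):

```python
import torch

def tsrl_reward(r1_supervised, r2_intrinsic, beta=0.1):
    """Combine R1 (supervised task performance on input-output examples)
    with R2 (an intrinsic, information-theoretically grounded curiosity
    signal, e.g. an ELBO/total-correlation style measure)."""
    return r1_supervised + beta * r2_intrinsic

reward = tsrl_reward(torch.tensor(0.8), torch.tensor(1.3))
```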

Budget

$15,000 USD

Success Criterion

Observable improvements in performance metrics across supervised sample tasks, weighted with metrics assessing explorative qualities of the model's internal reward system with respect to the input data manifold.

Link URL

Milestone 3 - Distributed training

Status
😐 Not Started
Description

Execute a full-scale training run on a distributed GPU cluster via AWS SageMaker, with online training data pipelines connected to a variety of APIs. This milestone focuses on implementing distributed training for our deep learning model using AWS SageMaker's distributed computing capabilities. The implementation will leverage SageMaker's data parallelism library to partition training data across multiple GPU instances while maintaining model synchronization through AllReduce operations. Our configuration will utilize ml.p4d.24xlarge instances with 8 NVIDIA A100 GPUs each, orchestrated in a cluster architecture to maximize throughput and minimize communication overhead. The scope includes adapting the PyTorch training script for SageMaker's distributed package, implementing gradient compression for bandwidth optimization, and establishing robust checkpointing mechanisms. We will fine-tune distributed training parameters, including gradient accumulation steps, local batch sizes, and learning rate scaling, to ensure consistent convergence. The implementation will include comprehensive monitoring through SageMaker metrics and custom CloudWatch dashboards to track training efficiency and resource utilization. A hedged configuration sketch follows below.
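A configuration sketch using the SageMaker Python SDK; the entry point, IAM role, framework versions, S3 paths, and hyperparameters are placeholders, not the project's actual values:

```python
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train_tsrl.py",                          # hypothetical training script
    role="arn:aws:iam::<account>:role/<sagemaker-role>",  # placeholder IAM role
    framework_version="2.0",
    py_version="py310",
    instance_type="ml.p4d.24xlarge",                      # 8x NVIDIA A100 per node
    instance_count=8,
    # SageMaker's data-parallel library handles AllReduce synchronization.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
    hyperparameters={"grad_accum_steps": 4, "local_batch_size": 32},
    checkpoint_s3_uri="s3://<bucket>/tsrl-checkpoints/",  # robust checkpointing
)
estimator.fit({"train": "s3://<bucket>/tsrl-train/"})
```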

Deliverables

The milestone will deliver a production-ready distributed training infrastructure with the following components: a fully configured distributed training pipeline integrated with AWS SageMaker, complete with automated node initialization and fault tolerance mechanisms; comprehensive technical documentation detailing the distributed architecture, including node configuration, communication patterns, and deployment procedures; and a suite of monitoring tools and dashboards for real-time tracking of training metrics, GPU utilization, and inter-node communication performance. Custom utilities for managing distributed checkpointing and model synchronization will be provided, along with scripts for automated deployment and scaling of training clusters. Performance analysis reports will document scaling efficiency, resource utilization, and cost optimization strategies. The codebase will include unit tests and integration tests specific to distributed operations, ensuring reliability and reproducibility of training runs.

Budget

$30,000 USD

Success Criterion

The implementation will be considered successful upon meeting the following quantitative and qualitative metrics: Achievement of linear scaling efficiency of at least 75% when scaling from 1 to 8 nodes, demonstrated through comprehensive benchmarking tests. Maintenance of model convergence metrics within 1% deviation compared to single-node training results, verified through multiple training runs. The system must demonstrate fault tolerance by successfully recovering from simulated node failures without data loss. Training throughput should show at least 6x improvement when scaling from single-node to 8-node configuration. Resource utilization metrics must indicate GPU utilization above 85% during training. The implementation must maintain consistent convergence behavior across different cluster sizes, verified through loss curve analysis and final model performance metrics. Network bandwidth utilization should remain within 80% of theoretical limits during all-reduce operations, and gradient compression must achieve at least a 3x reduction in communication overhead without impacting model accuracy.

Link URL

Milestone 4 - Deployment testing and validation

Status
😐 Not Started
Description

Deploy the pre-trained system to the continual learning inference environment and integrate it with the SNET platform on the Ethereum blockchain. This includes smart contract integrations allowing developers to build custom learning tasks/criteria for the AI system, following a blueprint template for developing custom DSL functions and custom n-ary criterion vectors, in order to apply the system to particular tasks while improving it on all tasks through the continual learning process. An MVP will be developed and deployed allowing any user of the SNET platform to access model inference by transacting AGIX tokens to the smart contract; the earned tokens will be distributed via smart contracts across contributing developers in proportion to measurable improvements in model performance attributable to the tasks and data pipelines they created. A hedged integration sketch follows below.
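A sketch of the planned on-chain interaction using web3.py; the RPC endpoint, contract addresses, ABIs, account handles, and function names (`payForInference`, `distributeRewards`) are all placeholders for contracts still to be developed in this milestone:

```python
from web3 import Web3

# All addresses, ABIs, accounts and contract methods below are placeholders.
w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<key>"))
ERC20_ABI, MARKET_ABI = [], []   # elided ABI definitions
AGIX_ADDRESS = "0x0000000000000000000000000000000000000000"
MARKET_ADDRESS = "0x0000000000000000000000000000000000000000"

agix = w3.eth.contract(address=AGIX_ADDRESS, abi=ERC20_ABI)
market = w3.eth.contract(address=MARKET_ADDRESS, abi=MARKET_ABI)

def pay_for_inference(user, task_id, amount):
    """User pays AGIX for an inference request on a registered task."""
    agix.functions.approve(MARKET_ADDRESS, amount).transact({"from": user})
    return market.functions.payForInference(task_id, amount).transact({"from": user})

def distribute_rewards(operator, task_id):
    """Split earned tokens across contributing developers in proportion
    to measured performance improvements (hypothetical contract method)."""
    return market.functions.distributeRewards(task_id).transact({"from": operator})
```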

Deliverables

A production-ready system deployed in the inference environment and integrated with the SNET platform, encompassing a suite of smart contracts for token management and reward distribution. The system will include an automated testing framework validating all blockchain interactions, token transactions, and reward distributions. API documentation will detail the integration points between the AI system and blockchain components, including interfaces for custom task development and performance tracking. The deployment package will contain Docker containers for both the inference environment and blockchain nodes, ensuring consistent deployment across environments. Developer documentation will provide detailed guidelines for creating custom DSL functions and criterion vectors, complete with example implementations and best practices. The system will include monitoring dashboards tracking model performance, token transactions, and developer contributions, with automated reporting mechanisms for transparency in reward distribution.

Budget

$15,000 USD

Success Criterion

The deployment will be considered successful upon meeting specific technical and operational benchmarks. The system must demonstrate seamless integration between the AI inference environment and the Ethereum blockchain, with transaction processing times under 30 seconds for model inference requests. Smart contract execution costs must remain below 0.05 ETH per transaction under normal network conditions. The reward distribution mechanism must accurately track and attribute performance improvements to individual developers with 99.9% accuracy. System availability should maintain 99.9% uptime, with automated failover mechanisms for both inference and blockchain components. The platform must successfully process at least 1000 concurrent inference requests while maintaining response times under 2 seconds. All smart contracts must pass comprehensive security audits and demonstrate resistance to common attack vectors. The token management system must accurately track and distribute rewards with zero discrepancies in token accounting. Developer onboarding metrics should show that new contributors can successfully deploy custom tasks within 48 hours of following the documentation. The system should demonstrate the ability to track and validate performance improvements across at least 50 concurrent custom tasks while maintaining accurate reward attribution.

Link URL

Join the Discussion (2)

Sort by

2 Comments
  • 0
    commentator-avatar
    Nils Kakoseos
    Jan 1, 2025 | 11:29 AM

    Thank you for considering our proposal. I strongly recommend reading the new, up-to-date writeup of the architecture, which contains the most essential mathematical definitions in full detail and some derivations fleshed out much more transparently than before: https://drive.google.com/drive/folders/1CYE7aPMboKXqiujnsY1RcjgyF24Kj6At?usp=sharing

  • 0
    commentator-avatar
    Nils Kakoseos
    Dec 13, 2024 | 9:42 AM

    As a recent winner of the SingularityNET risk assessment hackathon, I was informed about this call very recently, and all of the proposal text had to be written in one afternoon during the last hours before the application deadline. I'm thankful for the encouraging support from members of the SingularityNET team, who allowed me some room to add complementary material to the attached Google Drive folder shortly thereafter. In particular, this contains a radically more organised and mathematically explicit overview of the TSRL system of the proposal. To accurately assess this project, I would highly recommend that the judges read the LaTeX-formatted version of the proposal text available in this folder, also linked directly here: https://drive.google.com/file/d/10hzup29B4N6xKcUzoMHTj3Z05un2VJKt/view?usp=sharing And please ignore the latter half or so of the original proposal text, which contains very incomplete and vague descriptions of the technical aspects of the system. Another thing that should perhaps be emphasised more here (as pointed out by Jan Horlings) is the extent to which this rather complex technical project can be successfully executed by me and the team I will put together for it. To this effect I should add that I'm currently employed as a research engineer at a global software consultancy firm (whose reference I'll happily provide upon request) and that, given external funding, it would be very possible for us to put together a highly competent development team to execute the proposed project roadmap. I'm happy to answer any further questions or discuss the project in general, here or via my provided email address.
