
Rob Freeman
Project Owner / Project Leader. Managing the grant and looking for the right collaborative synergies.
Milestone | Amount | Status | Date
Milestone Release 1 | $15,000 USD | Pending | TBD
Milestone Release 2 | $10,000 USD | Pending | TBD
Milestone Release 3 | $20,000 USD | Pending | TBD
Milestone Release 4 | $20,000 USD | Pending | TBD
Milestone Release 5 | $10,000 USD | Pending | TBD
Milestone Release 6 | $4,999 USD | Pending | TBD
Milestone Release 7 | $1 USD | Pending | TBD
An experiment to test whether symbolic cognitive structure can be made to emerge simply as chaotic attractors in a network of observed language. This experiment is an extension of earlier work which: 1) demonstrated partial structuring of natural language from ad hoc (chaotic?) re-structurings of vectors; and 2) demonstrated spontaneous oscillations in a network of language sequences. The core idea posits that neuro-symbolic integration has eluded us because what we perceive as symbolic order in the world may actually be chaotic attractors over what is a fundamentally multiply interpretable reality. If that is so, full neuro-symbolic integration may be easy: we just have to embrace the chaos.
This RFP invites proposals to explore and demonstrate the use of neuro-symbolic deep neural networks (DNNs), such as PyNeuraLogic and Kolmogorov-Arnold Networks (KANs), for experiential learning and/or higher-order reasoning. The goal is to investigate how these architectures can embed logic rules derived from experiential systems like AIRIS or user-supplied higher-order logic, and apply them to improve reasoning in graph neural networks (GNNs), LLMs, or other DNNs.
Milestone 1
Get the sequence network oscillating and identify more tightly clustered subnetworks. Select a platform:
Option 1: Get clustering working in a sequence network using the old neurosimulator code from this paper: https://ncbi.nlm.nih.gov/pmc/articles/PMC3390386/ (code here: https://modeldb.science/144502).
Option 2: Alternatively, extend the already existing implementation of an oscillating sequence network in this GitHub project: https://discourse.numenta.org/t/the-coding-of-longer-sequences-in-htm-sdrs/10597/27
Which is chosen may depend on the experience of the coders hired for the task. Sketching a one-month timeline to absorb project start-up and admin wrinkles. (An illustrative sketch of synchrony-based clustering follows this milestone block.)
Deliverable: Project platform established.
Amount: $15,000 USD
Acceptance: Basic baseline neurosimulator code base selected and working.
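For concreteness, here is a minimal sketch of the behaviour this milestone targets: oscillating nodes coupled over a connectivity matrix, with densely connected subnetworks synchronizing first. It uses a Kuramoto-style phase model as a stand-in for the neurosimulator; the block-structured connectivity, all parameter values, and the phase-sorting readout are illustrative assumptions, not the actual dynamics of either platform option.

```python
# Minimal sketch of synchrony-based subnetwork clustering.
# Kuramoto-style phase oscillators coupled over a block-structured
# connectivity matrix: densely connected blocks phase-lock first.
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 60                                    # nodes (stand-ins for words)
K = rng.random((n, n)) < 0.05             # sparse background coupling
for lo in range(0, n, 20):                # three dense 20-node blocks
    block = slice(lo, lo + 20)
    K[block, block] = rng.random((20, 20)) < 0.6
K = K.astype(float)

theta = rng.uniform(0.0, 2 * np.pi, n)    # oscillator phases
omega = rng.normal(1.0, 0.05, n)          # natural frequencies
dt, coupling = 0.05, 2.0

for _ in range(2000):                     # Kuramoto phase update
    phase_diff = theta[None, :] - theta[:, None]   # theta_j - theta_i
    theta = theta + dt * (omega + (coupling / n) *
                          (K * np.sin(phase_diff)).sum(axis=1))

# Phase-locked nodes end up with similar phases: sorting by phase
# makes the dense subnetworks appear as contiguous runs.
order = np.argsort(np.mod(theta, 2 * np.pi))
print("phase-sorted node order:", order)
```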
Milestone 2
Get the neurosimulator oscillating in a network formed from a moderate-sized language corpus.
Option 1: Brown Corpus
Option 2: British National Corpus
Also sketching a timeline of one month for this. It might not be too hard, but allowing some time for problems. The likely problems are issues with the size of the corpus network; it might require hardware upgrades or adjustments. (A sketch of deriving network connectivity from a corpus follows this milestone block.)
Deliverable: Input language connectivity for a sufficiently large language sample.
Amount: $10,000 USD
Acceptance: Codebase working for language data of sufficiently large size. We may not be able to identify skip vectors yet, but issues with the size of the corpus should be solved, at least enough to enable initial experiments.
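As one way this could look, here is a hedged sketch that derives network connectivity from the Brown Corpus (Option 1) via NLTK. Treating simple word-adjacency (bigram) counts as the edges is an assumption; the project may settle on a different connectivity scheme.

```python
# Sketch: derive network connectivity from the Brown Corpus (Option 1).
# Edges here are word-adjacency (bigram) counts; the real connectivity
# scheme is an open project choice.
from collections import Counter

import nltk
nltk.download("brown", quiet=True)
from nltk.corpus import brown

edges = Counter()
for sent in brown.sents():
    words = [w.lower() for w in sent if w.isalpha()]
    for a, b in zip(words, words[1:]):
        edges[(a, b)] += 1

vocab = {w for pair in edges for w in pair}
print(f"{len(vocab)} nodes, {len(edges)} directed edges")
# A network of this scale is where the anticipated memory/hardware
# issues would first appear.
```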
Milestone 3
This is the first major research question, and it may be tricky: it is the major first step not yet achieved with any other code or data. Identifying equivalents to skip vectors in the corpus network might be achieved using an interface in the form of a raster plot, with some kind of "rheostat" input power variation, and perhaps input source variation (different parts of the network corresponding to different input sentences). The assumption is that by varying the driving oscillation we should be able to observe the raster plot, identify clusters of network nodes that are synchronizing, and equate these to skip vectors over the words associated with the nodes. (A sketch of this cluster-identification step follows this milestone block.) If we can achieve this I will assess it as a major success. At this stage the skip vectors would be limited to single words. I estimate 1-2 months for this step, but if it took longer that would not be a problem, because it would be a major achievement and it is the major stumbling block to further progress.
Deliverable: Word skip vector equivalents identified using a raster plot of oscillations in the network of the language corpus.
Amount: $20,000 USD
Acceptance: Some identifiable mapping of raster plot patterns to skip vectors.
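To make the analysis step concrete, here is a minimal sketch of identifying synchronizing clusters from a raster: bin the spike trains, correlate them pairwise, and cluster the correlation matrix. The synthetic two-group raster, the correlation measure, and the fixed cluster count are all illustrative assumptions; in practice the spike data would come from the neurosimulator under varying drive.

```python
# Sketch: find synchronizing node clusters in a (binned) raster.
# Each recovered cluster is a candidate skip vector over the words
# associated with its nodes. The raster here is synthetic.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)
n_nodes, n_bins = 30, 500
# Two hidden groups share a common rhythmic drive.
drive = rng.random((2, n_bins)) < 0.2
group = np.repeat([0, 1], n_nodes // 2)
raster = (drive[group] & (rng.random((n_nodes, n_bins)) < 0.8)) \
         | (rng.random((n_nodes, n_bins)) < 0.02)

corr = np.corrcoef(raster.astype(float))           # pairwise synchrony
dist = 1.0 - corr[np.triu_indices(n_nodes, k=1)]   # condensed distances
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print("recovered clusters:", labels)
```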
Milestone 4
Re. what aspects of skip vectors to preserve: I think only their predictive properties. In fact I want not so much to preserve skip vectors as to go beyond them, for two reasons.
First, I think they will change. The fact that they change (are dependent on context) is, I believe, what is holding up language models.
Second, and related to exactly the "arbitrarily far apart" aspect above: allowing skip vectors to change as chaotic attractors should allow us to scrunch up or pull apart sub-sequences at will. Take the "keep"->"apart" based skip vector, which might consist of words like {keep, stay, move, ...}. It should also be able (at a higher energy of synchronization?) to break down into k->a, k->p, ... and also k-(arp)->a ... skip vectors, and so overcome the "token" problem which so besets the LLM folks. And in the same way it should crunch down even more, so we get skip vectors for pairs of words and longer. So we'll finally be able to get hierarchy and logical/parse structure. (A toy sketch of variable-length skip vectors follows this milestone block.)
(Though in detail, going below words into letters, skip vectors/shared context as such probably only influence some broad phonetic effects at the sub-word level, and "letters" or phonemes are probably ruled more by things other than skip-vector shared context. But above the word level this should enable us to find phrases etc. for the first time: structure you can use to build an algebra and reason over.)
Deliverable: Skip vectors for "tokens" of variable length. Key to finding hierarchy and ultimately symbolic structure.
Amount: $20,000 USD
Acceptance: Some identifiable mapping of raster plot patterns to skip vectors of different lengths.
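As a purely distributional toy (no oscillations), the following sketch shows what a variable-length skip vector is: spans of any length that share a context become substitutable members of the same class. The miniature corpus, the span lengths, and the (left, right) context definition are illustrative assumptions.

```python
# Toy sketch: skip vectors over variable-length "tokens".
# A span's skip vector is here the set of (left, right) contexts it
# occurs in; spans of any length sharing a context are substitutable.
from collections import defaultdict

corpus = [
    "keep the dogs apart", "keep them apart", "stay apart",
    "move apart", "keep the cats apart",
]

contexts = defaultdict(set)        # span -> contexts it appears in
for sent in corpus:
    w = ["<s>"] + sent.split() + ["</s>"]
    for length in (1, 2, 3):       # variable-length spans
        for i in range(1, len(w) - length):
            span = tuple(w[i:i + length])
            contexts[span].add((w[i - 1], w[i + length]))

by_context = defaultdict(set)      # context -> substitutable spans
for span, ctxs in contexts.items():
    for ctx in ctxs:
        by_context[ctx].add(span)

for ctx, spans in sorted(by_context.items()):
    if len(spans) > 1:             # ('<s>', 'apart') groups spans of
        print(ctx, "->", sorted(spans))  # lengths 1, 2 and 3 together
```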
Milestone 5
If we manage to find skip vectors for sequences of varying length, this should encode implicit hierarchy in a representation space. This step might be as simple as coding an interface to the raster plot which graphically represents the hierarchies associated with differential clustering between words and word groups as driving oscillation power varies. (A dendrogram sketch of such a display follows this milestone block.)
Deliverable: Graphical or other display of (symbolic) hierarchy in representation space.
Amount: $10,000 USD
Acceptance: Some identifiable mapping between patterns in raster plots and meaningful (symbolic) hierarchy.
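One simple form such a display could take is a dendrogram over a synchrony matrix, with cluster granularity standing in for driving power. The word list and the hand-built block similarity matrix below are illustrative stand-ins for measured oscillator synchrony.

```python
# Sketch: display implicit hierarchy as a dendrogram over synchrony.
# The similarity matrix is a hand-built stand-in for measured
# oscillator synchrony at some driving power.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage

words = ["keep", "stay", "move", "the", "a", "dogs", "cats"]
sim = np.full((7, 7), 0.1)
for grp in ([0, 1, 2], [3, 4], [5, 6]):    # verbs, determiners, nouns
    sim[np.ix_(grp, grp)] = 0.8
np.fill_diagonal(sim, 1.0)

dist = 1.0 - sim[np.triu_indices(7, k=1)]  # condensed distance form
dendrogram(linkage(dist, method="average"), labels=words)
plt.title("Hierarchy implied by synchrony (illustrative)")
plt.show()
```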
Milestone 6
This is more of a stretch goal: the problems we might face getting the basic underlying principle working may well absorb all our time. But if we do manage to achieve it, the next step would be to explicitly integrate the symbolic/hierarchical structure found with representation in Hyperon Atom Space. This might be seen as something of the reverse of a solution like PyNeuraLogic: instead of implementing logic rules in weights or networks, it would use networks to emerge logic relations. The idea is that these relations have eluded "neuro-" efforts up to now, and particularly elude Large Language Models, because the structure they seek to "learn" is inherently dynamic and changing. If we embrace the change, clear structure will emerge (though it will change from context to context). Nor should this be only a mapping to a hypergraph representation between "atoms": it would be seen as generating both the representation and the interactions/combinations between "atoms". The idea is that the only reason the tension between neuro- and symbolic has historically been a puzzle is that structure is chaotic and constantly changing. Embrace the chaos and we should get natural neuro-symbolic representations. (A hedged sketch of pushing discovered clusters into Atom Space follows this milestone block.)
Deliverable: Hyperon Atom Space integration to enable actual reasoning over the neuro-symbolic representations found using the basic method. The deliverable envisaged is essentially a complete solution to the tension between neuro- and symbolic representations, which is revealed to have existed because cognitive structure consists of chaotic attractors and constantly changes. So the deliverable is a complete solution to neuro-symbolic representation in AI. (With a pathway to other deliverables of chaos: hints at creativity, as constant novelty; consciousness, as chaos "larger than itself"; and free will, since only the chaotic system fully predicts its own evolution, and even its creator cannot fully predict what it will do or discover.) This should be extensible to other experiential learning frameworks: it is not imagined that this dynamic structuring is limited to language. Language would just be the simplest example. In particular it might be immediately applicable to other experiential learning systems like AIRIS.
Amount: $4,999 USD
Acceptance: Establish a mapping between hierarchies of meaningful "objects" (clusters) in raster plots, and Hyperon Atom Space.
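A heavily hedged sketch of what the Atom Space side might look like, assuming the `hyperon` Python bindings and their MeTTa interpreter. The (member word cluster) schema is a hypothetical convention invented here, not an established Hyperon one, and the cluster contents are placeholders for raster-plot output.

```python
# Sketch: push discovered clusters into Hyperon's Atom Space and query
# them. Assumes the `hyperon` Python package (MeTTa interpreter); the
# (member ...) schema is a hypothetical convention, and the clusters
# are placeholders for raster-plot output.
from hyperon import MeTTa

metta = MeTTa()
clusters = {
    "cluster-1": ["keep", "stay", "move"],   # a discovered skip vector
    "cluster-2": ["dogs", "cats"],
}

for name, words in clusters.items():
    for w in words:
        metta.run(f"(member {w} {name})")    # add atom to &self space

# Reason over the emerged structure: which cluster contains "stay"?
print(metta.run("!(match &self (member stay $c) $c)"))
```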
Milestone 7
This is really stretching the project, but if budget and time remained after an unexpectedly easy accomplishment of the key earlier research milestones, a natural next step would be to relate the dynamic skip vectors (which incorporate a sense of "attention", because they are dynamic according to length) to the prediction task at which current LLMs excel. (A sketch of benchmark scoring follows this milestone block.)
Deliverable: Beat LLMs at their own prediction benchmarks, but do it with an exposed neuro-symbolic hierarchical structure under a Hyperon Atom Space, enabling Hyperon to influence sequence completion or other behavioural "predictions" by applying activation differentially to the hierarchical, variable-length "token" skip vectors associated with input sentences (and eventually other sensory data).
Amount: $1 USD
Acceptance: Demonstrate a result against an existing Large Language Model prediction benchmark or assessment criterion. Success++ would be to beat such a benchmark outright. But LLMs have billions invested and thousands of engineers optimizing them, so beating their benchmarks in the narrow domains where they excel is unlikely to be achievable immediately. As a demonstration of principle, simply demonstrating performance against a benchmark, to be refined later, would be early success enough.
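For the evaluation harness itself, here is a hedged sketch of scoring any sequence completer on next-word prediction over held-out text. The bigram baseline and the tiny corpus are illustrative assumptions, standing in for the skip-vector predictor and a real benchmark dataset.

```python
# Sketch: score a sequence completer on next-word prediction.
# The bigram predictor is a placeholder for the skip-vector method;
# the corpus is a stand-in for a real benchmark dataset.
from collections import Counter, defaultdict

train = "the dog ran . the cat ran . the dog sat .".split()
test = "the cat sat .".split()

follows = defaultdict(Counter)          # prev word -> next-word counts
for a, b in zip(train, train[1:]):
    follows[a][b] += 1

def predict(prev):
    counts = follows.get(prev)
    return counts.most_common(1)[0][0] if counts else None

hits = sum(predict(a) == b for a, b in zip(test, test[1:]))
print(f"next-word accuracy: {hits}/{len(test) - 1}")
```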