DEEP Connects Bold Ideas to Real World Change and build a better future together.

Len Du
Project Owner

3-D Interactive Editing of Programmable Graphs

Expert Rating

n/a

Overview

It is something of a sci-fi trope (e.g. Iron Man) to feature a 3-D interface where one manipulates, and in particular connects, objects floating in 3-D space, as opposed to the 2-D plane of real-world visual programming. The artistic intuition may have scientific merit: arbitrary graphs (even complete graphs) can be embedded in 3-D Euclidean space but not in 2-D. If we visually edit a graph in 3-D, we may avoid the nontrivial problem of wire placement, which may be the reason why 2-D visual programming doesn't scale well with complexity. We will use MORK/MeTTa as the underlying representation, so that future growth in 3-D graph editing will gravitate towards the MORK/MeTTa ecosystem.

RFP Guidelines

Complete & Awarded

Ended on: 27 May 2025
  • Type: SingularityNET RFP
  • Total RFP Funding: $350,000 USD
  • Proposals: 34
  • Awarded Projects: 5

Advanced knowledge graph tooling for AGI systems

Complete & Awarded
  • Type: SingularityNET RFP
  • Total RFP Funding: $350,000 USD
  • Proposals: 39
  • Awarded Projects: 5
SingularityNET
Apr. 16, 2025

This RFP seeks the development of advanced tools and techniques for interfacing with, refining, and evaluating knowledge graphs that support reasoning in AGI systems. Projects may target any part of the graph lifecycle — from extraction to refinement to benchmarking — and should optionally support symbolic reasoning within the OpenCog Hyperon framework, including compatibility with the MeTTa language and MORK knowledge graph. Bids are expected to range from $10,000 - $200,000.

Proposal Description

Our Team

Len Du - Senior Machine Learning Engineer

Company Name (if applicable)

N/A, as this is not for business purposes. However, a company can easily be arranged if beneficial.

Project details

A lot can be said about how (hyper)graphs can be considered closer to the "essence" of a program's semantics (or "meaning" in the general sense), such as naturally circumventing α-equivalence (the fact that we can rename variables inside a program without changing its behaviour) and being "point-free"; this might have been one of the many motivations behind the original OpenCog. But to reap these benefits, directly editing the graphs really needs to be natural and approachable.

It is something of a sci-fi trope (e.g. Iron Man) to feature a 3-dimensional (or "spatial computing", as marketed by Apple) interface where the user interacts with, manipulates, and in particular makes connections between, objects or "nodes" floating around in 3-dimensional space, as opposed to dragging and dropping on a 2-dimensional plane as in all real-world visual programming environments, the most significant and established example being LabVIEW.

The artistic intuition might actually have some scientific merit. The representation of, and interaction with, graphs in 3-D, as opposed to 2-D, differs in a way that is far from superficial. It is a known result that arbitrary graphs (including, in the most demanding case, complete graphs) can be embedded in 3-D Euclidean space (R^3) without edge crossings, whereas in general they cannot in 2-D (R^2). A common complaint about real-world 2-D visual programming languages is that they scale poorly with complexity. When the expressed logic is simple, corresponding to a short script in a textual programming language, the visual language seems very attractive. When the project gets larger, the scalability problem starts to surface, and is often eventually remedied by implementing part of the functionality in textual languages. This is not so much of a problem in some niche applications such as composable shaders, where the complexity is limited by, among other things, the fact that loops are not typically involved.

Embedding a graph in 2-D, often with only limited edge crossing allowed, introduces all sorts of intellectually interesting problems and has invited a long history of research. Users of a 2-D visual programming environment are often effectively solving such problems in their heads, rather than leaving them to the computer. Since any graph is embeddable in 3-D, directly editing the graph in 3-D can hopefully take this burden off the user. If we were to visualize the process of graph rewriting, it would also be necessary for successive layouts to be computed without user input.
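
To make this concrete, the unconstrained 3-D case can be handled by a plain force-directed layout with no crossing-avoidance machinery at all. The following is a minimal Fruchterman-Reingold-style sketch in NumPy; it illustrates the principle only and is not the project's planned implementation (which would run layouts on the GPU), and all names in it are ours.

```python
import numpy as np

def layout_3d(num_nodes, edges, iters=200, seed=0):
    """Simple force-directed layout in R^3.

    edges: list of (u, v) index pairs. Returns a (num_nodes, 3) array.
    """
    rng = np.random.default_rng(seed)
    pos = rng.normal(size=(num_nodes, 3))
    k = (1.0 / num_nodes) ** (1.0 / 3.0)  # ideal edge length in unit volume
    for t in range(iters):
        disp = np.zeros_like(pos)
        # Repulsion between all node pairs (O(n^2); fine for a sketch).
        diff = pos[:, None, :] - pos[None, :, :]      # (n, n, 3)
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        np.fill_diagonal(dist, np.inf)                # no self-repulsion
        disp += (diff / dist[..., None] * (k**2 / dist)[..., None]).sum(axis=1)
        # Attraction along edges.
        for u, v in edges:
            d = pos[u] - pos[v]
            n = np.linalg.norm(d) + 1e-9
            f = d / n * (n**2 / k)
            disp[u] -= f
            disp[v] += f
        # Cooling: cap the step size, shrinking over time.
        step = 0.1 * (1.0 - t / iters)
        lengths = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += disp / lengths * np.minimum(lengths, step)
    return pos
```

Even for a complete graph (where any 2-D drawing of five or more nodes must have crossings), this converges to a readable 3-D arrangement, because the only forces needed are repulsion and edge attraction.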

Note that hypergraphs and metagraphs (in the sense that edges can point to edges, in addition to nodes – a “metagraph” can otherwise mean different things) are not necessarily embeddable even in the 3D space (or in any Euclidean space), so for the purpose of visualization and interactive editing, they have to be converted to proper graphs. Hypergraphs can readily be converted to bipartite graphs. We can then relax the bipartition by allowing edges between the “hyperedge” nodes.
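
The bipartite conversion mentioned above can be sketched in a few lines. The function name and node-tagging scheme below are ours for illustration, not part of MORK's API; relaxing the bipartition for metagraphs would simply mean also allowing edges between the "hyperedge" nodes.

```python
def hypergraph_to_bipartite(hyperedges):
    """Convert a hypergraph to a bipartite proper graph.

    hyperedges: dict mapping hyperedge name -> iterable of vertex names.
    Returns (nodes, edges): the original vertices plus one extra node per
    hyperedge, and ordinary 2-ary edges linking each hyperedge node to
    its member vertices.
    """
    vertex_nodes = set()
    edge_nodes = []
    edges = []
    for he, members in hyperedges.items():
        he_node = ("hyperedge", he)   # tag keeps the two parts disjoint
        edge_nodes.append(he_node)
        for v in members:
            vertex_nodes.add(("vertex", v))
            edges.append((he_node, ("vertex", v)))
    return sorted(vertex_nodes) + edge_nodes, edges
```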

In this project we will use MORK/MeTTa as the underlying machine representation, so that future growth in 3-D graph editing could gravitate towards the application and extension of MORK/MeTTa. Similar 3-D visualization or interactive editing ideas could otherwise also attract knowledge graph researchers or practitioners outside the OpenCog/SingularityNET ecosystem. A major point of this implementation is to draw such efforts towards MORK/MeTTa and grow the influence of this ecosystem.

There are a few design choices that have been deliberated on. The first is how the human user gives input to the graph. The second is whether we use VR: while we don't have holographic displays like those in sci-fi movies, VR headsets are the closest thing we have. There is certainly a push towards adopting VR as a general productive user interface ("spatial computing", as Apple touts it), although the major barrier seems to be the cost versus display quality of the actual hardware, which we can expect to improve gradually over the coming years.

Taking visualization (computer → human) alone, VR headsets at this stage are not necessarily superior to a GUI window on a traditional flat display. While we could argue that VR is likely the future (which depends on hardware becoming cheaper, a quite predictable trend) and is particularly suitable for 3-D interaction (and, further, that we would likely want to "dive in" to closely inspect a small part of a large graph), requiring a VR headset at all would still be a barrier to entry.

However, VR headsets today also provide immediately usable hand tracking, or more generally 3-D input, which otherwise requires niche equipment on a personal computer. So if we consider both input (human → computer) and visualization (computer → human) together, a VR headset actually presents a lower barrier to entry than doing hand tracking or other 3-D input on a computer without VR. Hence we envision our program as a PCVR application. For the scope of this project, the application targets a Linux desktop environment, simply because that is what the team is familiar with, although compatibility with other operating systems will be taken into consideration in other design choices.

The ability for the user to quickly switch between giving 3-D input and typing on a keyboard is also crucial, as we can imagine the user will very frequently need to type in attributes associated with nodes and edges. Hence hand tracking is preferred over bulky handheld controllers, even though the latter still provide more responsive and more accurate input. A potential solution is low-profile hand-tracking gloves designed to accommodate handling objects with the gloves on; how compatible they are with keyboarding remains to be tested. The application should be usable with just a modern VR headset (such as a Meta Quest 3) and a personal computer, although the use of hand-tracking gloves will be incorporated.

Another important choice concerns the graphics pipeline: whether to use a video game engine or to program directly against graphics APIs. Compared with video games, we may need to draw many similar objects on screen very efficiently, while having less variety across these objects. After some initial research, we think it may actually be a better idea to program directly against the graphics API (the modern choice being Vulkan), specifically Vulkan with SDL2, a combination proven to run straight away on Linux. Incidentally, we will bridge the MORK representation and flat arrays (which GPUs prefer) with the toolchain for graph neural networks (in this instance, pytorch-geometric), which has the added potential of attracting the graph neural network research community towards MORK/MeTTa as well.
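
For illustration, the flat-array form in question is essentially the COO convention that pytorch-geometric's `Data.edge_index` uses: a contiguous integer index over nodes plus a `(2, num_edges)` integer array. A NumPy-only sketch (the function name is ours; substituting `torch.tensor` for `np.array` yields the pytorch-geometric form):

```python
import numpy as np

def to_coo_arrays(nodes, edges):
    """Flatten a node/edge-list graph into flat arrays.

    nodes: list of node names; edges: list of (u, v) name pairs.
    Returns the name->index map and a (2, num_edges) int64 array
    in COO format, row 0 holding sources and row 1 targets.
    """
    index = {name: i for i, name in enumerate(nodes)}
    if edges:
        edge_index = np.array([[index[u] for u, _ in edges],
                               [index[v] for _, v in edges]], dtype=np.int64)
    else:
        edge_index = np.zeros((2, 0), dtype=np.int64)
    return index, edge_index
```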

 

Open Source Licensing

AGPL - Affero GPL

To be fully open sourced in the same way as other OpenCog libraries.

Background & Experience

Machine learning background related to AGI and mainstream deep learning.

Publications 
https://dblp.org/pid/258/7336.html

Previous work experience in video game graphics. Did some GPGPU computation with shading languages back when CUDA was not as popular.

Not published but relevant to this project: some work has been done using PyTorch, PyTorch-geometric, and second-order optimization to solve for forces in a cable-stayed truss structure, which is actually quite similar to solving graph embedding/layout.

Links and references

https://link.springer.com/chapter/10.1007/3-540-52698-6_1

https://math.mit.edu/~urschel/publications/p2021a.pdf

https://math.stackexchange.com/questions/3651675/can-all-hypergraphs-be-embedded-in-3d-space

https://docs.unity3d.com/6000.1/Documentation/Manual/shader-graph.html

 

Proposal Video

Not Available Yet

Check back later during the Feedback & Selection period for the RFP that this proposal is applied to.

  • Total Milestones

    4

  • Total Budget

    $80,000 USD

  • Last Updated

    28 May 2025

Milestone 1 - Initial setup

Description

Plumbing through the basic application environment.

Deliverables

An SDL2+Vulkan application that runs on a Linux PC and displays some minimal graphics (e.g. a box) on a Meta Quest 3, preferably wirelessly over ALVR.

Budget

$10,000 USD

Success Criterion

An SDL2+Vulkan application that runs on a Linux PC and displays some minimal graphics on a Meta Quest 3.

Milestone 2 - Generic graph drawing

Description

Implement generic graph drawing and editing (creating/deleting nodes; adding and displaying textual metadata).

Deliverables

Generic graph drawing and editing capability on top of the previous deliverable backed by PyTorch-geometric.

Budget

$30,000 USD

Success Criterion

Generic graph drawing and editing capability, backed by PyTorch-geometric. Responsiveness and smoothness are crucial, so GPU instancing, level-of-detail, and culling are also to be resolved at this stage. At this stage the underlying representation of the graph will just be PyTorch arrays (as in GNNs). One very specific issue to be resolved here is whether we can use the PyTorch array in Vulkan directly. It should be possible, but at worst we would need to copy and slightly transform the array a few times, which wouldn't be a huge problem. Some provisions for playing back graph-rewriting steps are also to be made at this stage.

Milestone 3 - Integration with MORK

Description

Integrate the last deliverable with MORK

Deliverables

Graph drawing and editing capability now backed by MORK, including the handling of hypergraphs/metagraphs and the playback of graph-rewriting steps (especially incremental adjustments to layouts).

Budget

$20,000 USD

Success Criterion

Graph drawing and editing capability backed by, in other words synchronized to, MORK. Hypergraphs/metagraphs are transformed to/from regular graphs between MORK and PyTorch-geometric. Some minor changes/extensions may be required on PyTorch-geometric. Performance aspects associated with local updates will need to be addressed: updating a small part of the graph should not require copying over the whole graph. It may also be possible that the PyTorch representation no longer needs to cover the whole graph. This will be investigated further and decided on during actual implementation.
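
One way to bound the layout cost of a local update is to re-relax only the nodes within a few hops of the edited node and keep the rest of the embedding fixed. A small illustrative BFS sketch (not MORK's API; `adjacency` is assumed to be a plain dict of neighbor lists):

```python
from collections import deque

def k_hop_region(adjacency, start, k):
    """Return the set of nodes within k hops of `start` via BFS.

    After a local edit at `start`, only this region's layout would be
    re-relaxed; everything outside it stays in place.
    """
    seen = {start: 0}       # node -> hop distance
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if seen[u] == k:    # frontier reached; don't expand further
            continue
        for v in adjacency.get(u, ()):
            if v not in seen:
                seen[v] = seen[u] + 1
                queue.append(v)
    return set(seen)
```

The region size grows with the graph's local degree rather than its total size, which is exactly the property needed to avoid whole-graph copies on small edits.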

Milestone 4 - Finetuning/optionals

Description

Some final finetuning/optional features

Deliverables

Some changes to the latest deliverable to improve performance, if not previously addressed. Integration with finger-tracking gloves to improve input responsiveness (note that accuracy largely correlates with responsiveness) and hence usability, if test results indicate that they are compatible with keyboards. Integration with Apple Vision Pro (or newer hardware available at that time with equal or higher resolution), still through ALVR. Approximately $5,000 is allowed for the cost of hardware. Some videos demonstrating the use of the application.

Budget

$20,000 USD

Success Criterion

As per deliverable description

