A working AGI system that builds knowledge through explainable cognitive modules. It performs symbolic reasoning, detects conflicts, learns functions from examples, and generalizes via abstraction — all backed by a local LLM trainer.
Functional Graph AGI is an experimental reasoning engine made of callable, memory-aware modules. Each module carries self-description, trust scoring, and explainability. The system integrates user-guided teaching, semantic abstraction, and curiosity-based learning. It uses a local LLM to generate hypotheses and safely builds a symbolic reasoning graph that evolves over time. The goal is to make AGI transparent, controllable, and open for experimentation across domains like math, logic, education, and simulations.
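The module design described above can be sketched in a few lines. This is a hypothetical illustration, not the project's actual API: the class name, the `explain` method, and the trust-update rule are all assumptions made for clarity.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CognitiveModule:
    """Sketch of a callable, memory-aware module with self-description,
    a trust score, and explainability (hypothetical, for illustration)."""
    name: str
    description: str
    fn: Callable
    trust: float = 0.5              # prior trust; nudged by observed outcomes
    memory: list = field(default_factory=list)

    def __call__(self, *args):
        result = self.fn(*args)
        self.memory.append((args, result))   # memory-aware: every call is logged
        return result

    def reward(self, success: bool, lr: float = 0.1):
        # simple exponential moving average toward 1.0 (success) or 0.0 (failure)
        target = 1.0 if success else 0.0
        self.trust += lr * (target - self.trust)

    def explain(self) -> str:
        # self-description plus current trust and usage history
        return (f"{self.name}: {self.description} "
                f"(trust={self.trust:.2f}, calls={len(self.memory)})")

double = CognitiveModule("double", "multiplies its input by two", lambda x: 2 * x)
double(21)
double.reward(success=True)
print(double.explain())
```

A graph of such modules stays inspectable: any node can report what it does, how often it has been used, and how much the system currently trusts it.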
Live demo (UI walkthrough): https://limitgraph.com
Public API (real-time reasoning): http://api.limitgraph.com:8000/docs
Research paper (Zenodo): https://zenodo.org/records/15611501
AI systems today are opaque, hard to teach, and not built for explainable growth. AGI research often focuses on large monolithic models rather than composable, controllable reasoning systems, leaving a gap between the clarity of symbolic approaches and the adaptability of modern learning.
We built a modular AGI in which each function is a cognitive block. It supports teaching by examples, conflict detection, and abstraction, and it uses a local LLM to hypothesize missing logic, assembling the results into a growing functional reasoning graph. The system is live today and ready for the next stage of development.
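Teaching by examples and conflict detection can be illustrated with a toy sketch. The candidate-function table and helper names below are assumptions for illustration only, not the project's implementation: learning means keeping the candidates consistent with every (input, output) pair, and a conflict is either contradictory examples or a set of examples no candidate explains.

```python
# Toy candidate space of symbolic functions (hypothetical; the real system
# would generate hypotheses with its local LLM instead of a fixed table).
CANDIDATES = {
    "add1":   lambda x: x + 1,
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
}

def learn_from_examples(examples):
    """Return the names of all candidates consistent with every (x, y) pair."""
    return [name for name, fn in CANDIDATES.items()
            if all(fn(x) == y for x, y in examples)]

def detect_conflict(examples):
    """Report a conflict: the same input mapped to two different outputs,
    or no candidate fitting all examples. Returns None if consistent."""
    seen = {}
    for x, y in examples:
        if x in seen and seen[x] != y:
            return f"contradictory examples for input {x}: {seen[x]} vs {y}"
        seen[x] = y
    if not learn_from_examples(examples):
        return "no known function explains all examples"
    return None

print(learn_from_examples([(2, 4), (3, 6)]))      # ['double']
print(detect_conflict([(1, 2), (1, 3)]))          # contradiction reported
```

Note that a single example like (2, 4) is ambiguous (both `double` and `square` fit); adding (3, 6) prunes the hypothesis space to `double`, which is the core loop of user-guided teaching.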
© 2025 Deep Funding