CyberPravda – Knowledge Graph & Rating of Veracity

Timur Sadekov
Project Owner


Overview

LLMs can degenerate by learning from their own hallucinations on the Internet, and humans will degenerate by relying on the degenerated LLMs. This process is called "neural network collapse". For mutual validation and self-improvement of LLMs and humans, we need the ability to match the knowledge of artificial intelligence against collective intelligence. And for all of this, we need a reliability rating of human knowledge. We have found a fundamentally new way to determine the veracity of information that does not depend on admins, biased experts, special content curators, oracles, authority certificates issued by states and corporations, clickbait likes/dislikes, or voting tokens with which any user can be bribed.

RFP Guidelines

Advanced knowledge graph tooling for AGI systems

Complete & Awarded
  • Type SingularityNET RFP
  • Total RFP Funding $350,000 USD
  • Proposals 39
  • Awarded Projects 5
SingularityNET
Apr. 16, 2025

This RFP seeks the development of advanced tools and techniques for interfacing with, refining, and evaluating knowledge graphs that support reasoning in AGI systems. Projects may target any part of the graph lifecycle — from extraction to refinement to benchmarking — and should optionally support symbolic reasoning within the OpenCog Hyperon framework, including compatibility with the MeTTa language and MORK knowledge graph. Bids are expected to range from $10,000 to $200,000.

Proposal Description

Our Team

Timur Sadekov (CEO) — nuclear energy expert and head of nuclear power plant start-up projects www.linkedin.com/in/timursadekov

Alexey Vostryakov (CBDO) — IT-entrepreneur, software engineer and automation specialist www.linkedin.com/in/alekseivostriakov

Dmitry Shalashov (CTO) — ex-CTO of recommender systems at VK Group, founder of Surfingbird and Relap.io (Wired Top 100 Startups of 2015) www.linkedin.com/in/skaurus

Dmitry Prokofev (Data Scientist) — Python, JS, NLP, ML, React www.linkedin.com/in/dimaprokofev

Company Name (if applicable)

CyberPravda SL, Barcelona, Spain

Project details

I am a proponent of the hypothesis that superintelligence will be born from the synergy of artificial intelligence, with its enormous erudition and awareness of trends, and the collective intelligence of living human experts with real knowledge and experience.

Yann LeCun has a great quote: “Large language models seem to possess a surprisingly large amount of background knowledge extracted from written text. But much of human common-sense knowledge is not represented in any text and results from our interaction with the physical world. Because LLMs have no direct experience with an underlying reality, the type of common-sense knowledge they exhibit is very shallow and can be disconnected from reality”.

LeCun goes on to describe tricks that might be used to teach an LLM common sense about the world, but this still leaves the question of validity open: even if these tricks produce results, we still have to ask whether the resulting common-sense knowledge base is valid.

At the moment, every LLM developer claims that their datasets are reliable, but this is obviously not the case: the datasets have been found to contain fabricated content on more than one occasion, and the developers themselves have no criterion at all for the veracity of the information.

The position "my dataset or ontology is trustworthy because it is mine" cannot be the basis of trustworthiness. So for me the future is quite simple and is determined by the following logic:

1. The hallucinations and confabulations of artificial intelligence fundamentally cannot be eliminated: https://www.mdpi.com/1099-4300/26/3/194

2. Cross-training LLMs on each other's hallucinations inevitably leads to "neural network collapse" and to degradation of the knowledge of the people who rely on them: https://arxiv.org/abs/2305.17493v2 and https://gradual-disempowerment.ai

3. Any physical activity in the real world is connected to the physics of the entire universe, and sometimes the slightest mistake in understanding these interrelationships is fatal. A million examples can be seen in industrial safety videos. That is why any hallucination of artificial intelligence that is not grounded in the real experience of real people with real knowledge of the world will end in mistakes and losses for humans, ranging from minor to catastrophic.

Hence the conclusion: people carry the main responsibility for staying connected to reality. The more complex the questions that neural networks solve, the more serious the human responsibility for timely detection of ever more subtle and elusive hallucinations. This requires people with the deepest knowledge, not knowledge memorized under pressure at school, but real experience on almost any issue.

For every task that neural networks take on, there should be superprofessionals in that task. And behind the superprofessionals you need ordinary professionals, assistant professionals, and students of assistant professionals.

And for all of this we need a reliability rating of knowledge, so that we know who is a professional and who is not.

And without an information veracity criterion and a knowledge validity rating, any LLM (and, according to Michael Levin's proof, any artificial system in general) will face imminent collapse.

Only the collective neural network of all human minds can stand as a counterweight to artificial intelligence. For mutual verification and improvement of LLMs and humans, we need the ability to compare the knowledge of artificial intelligence with collective intelligence. This is the only thing that can get us out of the personal reality tunnels and information bubbles in which each of us is getting stuck ever deeper.

All of this is aggravated by the fact that fact-checking in information technology today rests solely on the principle of verification (i.e. confirmation). This approach becomes unacceptable once AI surpasses humans in generating content, and it should be fundamentally replaced by the principle of falsifiability: the meaningfulness, and then the veracity, of hypotheses should be tested not by searching for facts that confirm them, but mainly (or even exclusively) by searching for facts that refute them. This approach is fundamental in science, yet for some reason it has never been implemented in information technology until today.
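To make the contrast concrete, here is a minimal Python sketch of the difference between the two principles. The data and the scoring formula are our own illustrative assumptions, not the production algorithm: a hypothesis is rated by how well it survives refutation attempts, so a claim with many confirmations and strong unanswered refutations scores worse than a claim that has withstood real counterarguments.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hypothesis:
    claim: str
    confirming: List[float] = field(default_factory=list)  # strengths of supporting facts, 0..1
    refuting: List[float] = field(default_factory=list)    # strengths of refuting facts, 0..1

def verificationist_score(h: Hypothesis) -> float:
    """Naive 'fact-checking by confirmation': just add up the support."""
    return sum(h.confirming)

def falsificationist_score(h: Hypothesis) -> float:
    """Illustrative Popper-style score: support only counts to the extent
    that the hypothesis has survived attempts at refutation."""
    support = sum(h.confirming)
    refutation = sum(h.refuting)
    survival = 1.0 / (1.0 + refutation)  # strong unanswered refutations discount the claim
    return support * survival

if __name__ == "__main__":
    flat_earth = Hypothesis("The Earth is flat",
                            confirming=[0.1],             # one weak, itself unsupported source
                            refuting=[0.9, 0.9, 0.8])     # contradicted by physics, astronomy, geodesy
    round_earth = Hypothesis("The Earth is an oblate spheroid",
                             confirming=[0.9, 0.8, 0.9],
                             refuting=[])
    for h in (flat_earth, round_earth):
        print(h.claim, round(verificationist_score(h), 2), round(falsificationist_score(h), 2))
```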

I strongly believe that reputation and trust are based on people saying what they do and doing what they say, so a key element for this whole scheme to work is an independent system that objectively and unbiasedly verifies the veracity of information and compares all existing opinions for mutual consistency and noncontradiction. And such a system must satisfy the highest principles of scientific integrity:

— the system must be completely independent of admins, biased experts, special content curators, oracles, certificates of states and corporations, clickbait likes/dislikes, or voting tokens with which any user can be bribed

— the system should be global, international, multilingual and free of charge, so that it is accessible to users all over the world

— the system should be unbiased toward all authors and open to the publication of any opinions and hypotheses

— the system must be objective, evaluating all facts and arguments without exception purely mathematically, according to the "all with all" principle, the moment they are published

— the system must be decentralized and cryptographically secured, ensuring that even its creators have no way of influencing veracity and reputation ratings

— the system should be available for audit and independent verification at any time.

The technical solution is a discussion platform for crowdsourcing reliable information, with a monetization ecosystem, game mechanics, and a graph-theory-based algorithm that objectively compares all existing points of view. It is a combination of existing technologies that are guaranteed to meet all requirements of objectivity and unbiasedness (a toy data-model sketch illustrating these constraints follows the list below):

— it is built on a blockchain with an independent mathematical algorithm that analyzes the veracity of arguments on the basis of graph theory, with auto-translation of content into 109 languages

— credibility arises only from the mutual influence of at least two different competing hypotheses and does not depend on the number of proponents who defend them

— credibility arises only from the mutual competition of facts and arguments and their interactions with each other

— the only way to influence credibility in this system is to publish your personal opinion on any topic, which will be immediately checked by an unbiased mathematical algorithm for consistency with all other facts and arguments existing in the system

— each author in the system has personal reputational responsibility for the veracity of published arguments.
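As an illustration of these constraints, the following Python sketch (the names, hashing scheme and credibility formula are our own assumptions, not the actual CyberPravda schema) stores each fact as a single deduplicated block regardless of how many authors submit it, connects any pair of facts by at most one edge, and only produces a credibility value from the interaction of at least two competing hypotheses.

```python
import hashlib
from collections import defaultdict

class KnowledgeGraph:
    """Toy model: one block per fact, one edge per fact pair, and credibility
    derived from competition between hypotheses rather than from headcount."""

    def __init__(self):
        self.facts = {}                  # fact_id -> canonical text
        self.authors = defaultdict(set)  # fact_id -> authors bearing reputational responsibility
        self.edges = {}                  # frozenset({a, b}) -> +1 (supports) / -1 (refutes)

    @staticmethod
    def fact_id(text: str) -> str:
        # Identical statements collapse into one information block.
        return hashlib.sha256(text.strip().lower().encode()).hexdigest()[:12]

    def publish(self, author: str, text: str) -> str:
        fid = self.fact_id(text)
        self.facts[fid] = text
        self.authors[fid].add(author)    # more authors does not mean more blocks
        return fid

    def link(self, a: str, b: str, relation: int) -> None:
        # Any pair of facts is connected by a single edge, however many opinions exist.
        self.edges[frozenset((a, b))] = relation

    def credibility(self, hypothesis: str, rival: str) -> float:
        """Credibility only arises from at least two competing hypotheses."""
        def net_support(f):
            return sum(rel for pair, rel in self.edges.items() if f in pair)
        s, r = net_support(hypothesis), net_support(rival)
        total = abs(s) + abs(r)
        return 0.5 if total == 0 else (s - r) / (2 * total) + 0.5
```

However many accounts republish the same statement, publish() resolves it to the same block, so headcount alone cannot move the credibility value.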

The algorithm is based on the principle that for some facts and arguments people are willing to bear reputational responsibility, while for others they are not. The algorithm identifies these contradictions and finds holistically consistent chains of scientific knowledge. These are not likes or dislikes, which can be manipulated, nor votes, which can be falsified: each author publishes his own version of events, for which he is reputationally responsible, rather than upvoting someone else's.

In the global hypergraph, any piece of information is an argument for one thing and a counterargument against another. Different versions compete with each other in terms of the flow of meaning they carry, and the most reliable versions become arguments in chains of events for higher- or lower-level facts. This closes the loop of mutual interaction between arguments and counterarguments and creates a global hypergraph of knowledge, in which the greatest flow of meaning runs through stable chains of consistent scientific knowledge that best satisfy the principle of falsifiability and Popper's criterion.

The algorithm relies on the fact that scientific knowledge is noncontradictory, while ignoramuses, bots and trolls, on the contrary, drown each other in a flood of contradictory arguments. The flat-Earth opinion has a right to exist, but it is supported by only one line from the Bible, which in turn is supported by nothing, while it fundamentally contradicts all the knowledge contained in textbooks of physics, chemistry, math, biology, history and millions of other sources.

The algorithm up-ranks facts whose evidence is confirmed by multiple extended noncontradictory chains of proven facts, and down-ranks arguments that are refuted by reliable facts backed by their own noncontradictory chains of confirmed evidence. Unproven arguments are ignored, and chains of statements built on them are discarded.
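A minimal sketch of this ranking loop, under our own simplifying assumptions (uniform edge weights, a fixed number of iterations, a few hand-picked facts): supported claims inherit credibility from their proof chains, refuted claims are pushed down by the credibility of their refuters, and claims with no proven support are ignored.

```python
from collections import defaultdict

# Directed evidence graph: (source, target, relation), where +1 = supports, -1 = refutes.
EDGES = [
    ("satellite_imagery", "earth_is_round", +1),
    ("ship_hull_disappears_first", "earth_is_round", +1),
    ("earth_is_round", "flat_earth", -1),
    ("single_bible_line", "flat_earth", +1),  # itself unsupported by anything
]

AXIOMS = {"satellite_imagery", "ship_hull_disappears_first"}  # observations taken as proven

def rank(edges, axioms, iterations=20):
    nodes = {n for e in edges for n in e[:2]}
    score = {n: (1.0 if n in axioms else 0.0) for n in nodes}
    incoming = defaultdict(list)
    for src, dst, rel in edges:
        incoming[dst].append((src, rel))
    for _ in range(iterations):
        new = dict(score)
        for node in nodes:
            if node in axioms:
                continue
            support = sum(score[s] for s, r in incoming[node] if r > 0)
            refutation = sum(score[s] for s, r in incoming[node] if r < 0)
            if support == 0:
                new[node] = 0.0  # unproven arguments are ignored
            else:
                new[node] = max(0.0, support - refutation) / (1 + len(incoming[node]))
        score = new
    return score

print(rank(EDGES, AXIOMS))  # "earth_is_round" ends up well above "flat_earth"
```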

The algorithm has natural self-protection against manipulation: no matter how many authors there are, each fact in the decentralized database is a single information block, and no matter how many opinions there are, any pair of facts is connected by a single edge. The flow of meaning along a logical chain of events does not depend on the chain's length, while unjustified branching of a factual statement splits the semantic flow and reduces the credibility of each individual branch, whose veracity then has to be proven anew.
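The length-invariance and branch-splitting rules can be stated as a very small sketch, again in our own toy formulation: flow along a single linear chain does not decay with its length, while splitting a statement into k parallel branches divides the flow by k, so each branch has to earn its credibility separately.

```python
def chain_flow(source_flow, chain_length):
    """Flow of meaning along a single linear chain does not depend on its length."""
    return source_flow  # chain_length is deliberately unused in this toy rule

def branch_flows(source_flow, branches):
    """Branching a statement splits the semantic flow across the branches."""
    return [source_flow / branches] * branches

print(chain_flow(1.0, 2), chain_flow(1.0, 10))  # 1.0 1.0
print(branch_flows(1.0, 4))                     # [0.25, 0.25, 0.25, 0.25]
```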

Similar to Wikipedia, the algorithm creates articles on any controversial topic, which are dynamically rearranged as new evidence arrives and according to the desired level of veracity each reader sets in their personal settings, ranging from zero to 100%. As a result, users have access to multiple Wikipedia-like articles describing competing versions of events, ranked according to their chosen credibility level, and a smart contract allows them to earn money by betting on different versions of events and by fulfilling orders to verify their veracity.
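The reader-side filtering described here can be sketched as follows; the article titles, credibility values and function name are invented for illustration. Each competing article carries a credibility score, and a reader's personal veracity threshold (0 to 100%) selects which versions are shown and in what order.

```python
from typing import List, Tuple

# (article title, credibility in percent) — illustrative values only
VERSIONS: List[Tuple[str, float]] = [
    ("Version A: mainstream scientific account", 92.0),
    ("Version B: alternative interpretation", 55.0),
    ("Version C: conspiracy narrative", 8.0),
]

def articles_for_reader(versions, desired_veracity):
    """Return the competing Wikipedia-like articles at or above the reader's
    chosen credibility level, most credible first."""
    visible = [v for v in versions if v[1] >= desired_veracity]
    return sorted(visible, key=lambda v: v[1], reverse=True)

print(articles_for_reader(VERSIONS, desired_veracity=50.0))  # versions A and B
print(articles_for_reader(VERSIONS, desired_veracity=0.0))   # all versions
```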

Open Source Licensing

Custom

Once the blockchain protocol is launched and debugged, we plan to open it up for development and improvement by the user community.

Background & Experience

Dmitry Prokofev (Data Scientist) — has his own "Knowledge Map" pet project on structuring knowledge as a DAG. He is an active participant in projects related to radical life extension (Open Longevity volunteer), where he used MeTTa to formalize biomedical hypotheses. His "Knowledge Map" project is an analog of the CyberPravda hypergraph, but without veracity ratings for information or reputation ratings for its authors; it includes a hand-built Miro map with 6,000 blocks and links. He uses graph-layout algorithms (e.g. the Sugiyama method, force-directed layouts from D3.js) and has hands-on experience with GraphQL and schemas for managing knowledge graphs, as well as with optimizing large-graph visualization (Pixi.js, GPU) and gRPC integration for high performance when handling millions of links.

Links and references

Brief description and essence of the project — https://drive.google.com/file/d/18OfMG7PI3FvTIRh_PseIlNnR8ccD4bwM 

CyberPravda extended presentation — https://drive.google.com/file/d/1RmEbq4Tsx1uCCjMNjXNK4NXENtriNCGm 

Article about CyberPravda and its application for creating global cybernetic communities — https://www.lesswrong.com/posts/YtCQmiD82tdqDkSSw/cybereconomy-the-limits-to-growth-1 

Mathematical model — https://drive.google.com/file/d/1GrcDP0LPvxJ_4E8wYLp49aZ8R76ysVUo

Additional videos

YouTube video — https://youtu.be/jFZVhp_GJtY

YouTube video-presentation with explanations in English — https://youtu.be/cha7BwZ5t4U

CyberPravda presentation cartoon — https://youtu.be/edI5wLgimPQ 

CyberPravda Data Room Dashboard — https://docs.google.com/spreadsheets/d/16-ySPw5vy2wUIvlsV_nx64jbJpL4_Ns5HxZ0SyfAUP0

Describe the particulars.

Anton Kolonin (Computer Scientist, AI and Blockchain architect, Ph.D.) — admin of the https://agirussia.org site and AGITopics community in Telegram


  • Total Milestones

    1

  • Total Budget

    $200,000 USD

  • Last Updated

    15 May 2025

Milestone 1 - Creation of a minimum viable product (MVP)

Description

Prototype refinement to demonstrate a minimum viable product (MVP): polishing the algorithms, involving users, starting first monetization, and preparing for the transition to blockchain.
1. Algorithm for the veracity of information and the reputation of its authors (ready)
2. Readers' interface (ready)
3. Authors' interface (60% ready)
4. Smart contract for disputes about different versions of events (ready) and for corporate orders to verify the veracity of information (50% ready)
5. LLM trained to find basic facts and arguments in any website or publication (80% ready)

Deliverables

We will launch an app with a fundamentally new algorithm that mathematically determines the veracity of information without AI hallucinations, cryptographic certificates of states and corporations, or voting tokens with which any user can be bribed. The algorithm requires no external administration, no review by experts, and no special content curators. It relies on neither semantics nor linguistics; those approaches have not justified themselves. Instead, it is built on a unique and very unusual combination of mathematics, psychology, and game theory. We have developed a purely mathematical, international, multilingual correlation algorithm that provides a deeper scientometric assessment of the accuracy and reliability of information sources than the PageRank algorithm or the Hirsch index. At a fundamental level it revolutionizes existing information technologies, turning from verification of information to the world's first algorithm for analyzing its compliance with Popper's falsifiability criterion, which is the foundation of the scientific method. The algorithm will be completely transparent and visible to all users of the system: everyone will know the rules of the game and will be able to check them at any time. All of this makes it possible to assign each message in the system a credibility rating and each author a reputation rating. The knowledge graph will support integration with MeTTa.

Budget

$200,000 USD

Success Criterion

Target audience of 100,000 MAU. 20 corporate clients place orders for the analysis and verification of information veracity on average 5 times a month, with an average check of $500. Half of each order amount is automatically paid to the authors of the most reliable information, those with the highest reputation rating in the system, and the remaining 50% commission generates $25K/month in revenue, making the project self-sustaining and ready for the full transition to blockchain (see the worked arithmetic below). Our mathematical modeling shows that after a million information blocks have been created, the leading scientific hypothesis consistently wins the credibility rating against competing versions from ignoramuses, bots and trolls, and that the reputation ratings of the academic minority are much higher than those of countless ignoramuses, bots and trolls, with the gap only widening as the total number of publications grows. Mathematical model — https://youtu.be/r3MCpSDLSEo
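The revenue arithmetic implied by this criterion, as a small check (all figures are taken from the paragraph above):

```python
clients = 20                # corporate clients
orders_per_client = 5       # orders per month per client
average_check = 500         # USD per order
commission_rate = 0.5       # half of each order to the platform, half to authors

gross = clients * orders_per_client * average_check  # $50,000 per month in orders
authors_payout = gross * (1 - commission_rate)       # $25,000 to the top-rated authors
platform_revenue = gross * commission_rate           # $25,000/month platform revenue

print(gross, authors_payout, platform_revenue)       # 50000 25000.0 25000.0
```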


