DFR4 Best contributors experiment

Completed 👍
Expert Review 🌟
Almalgo (Rojo)
Project Owner

DFR4 Best contributors experiment

Funding Awarded

$5,000 USD

Expert Review
4 / 5
Community
4 / 5 (3)

Status

  • Overall Status

    🥳 Completed & Paid

  • Funding Transferred

    $5,000 USD

  • Max Funding Amount

    $5,000 USD

Funding Schedule

View Milestones
  • Milestone Release 1: $1,000 USD, Transfer Complete, 25 Apr 2024
  • Milestone Release 2: $2,000 USD, Transfer Complete, 30 May 2024
  • Milestone Release 3: $1,000 USD, Transfer Complete, 27 Jun 2024
  • Milestone Release 4: $1,000 USD, Transfer Complete, 25 Jul 2024

Status Reports

May 7, 2024

Status
😀 Excellent
Summary

We are very close to finishing milestones 2 and 3, which means 75% of the project will be done!

Full Report

Project AI Services

No Service Available

Overview

We aim to compare LLM scoring of the contributions (i.e., comments) from DFR3 with the scores given by human reviewers. We also aim to take the first steps toward a web app for the best contributors' review process that will remove most of the friction in the process, while adding an AI agent as one of the reviewers to gain insight into how LLMs score contributions compared with humans.
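As a rough illustration of the planned comparison, the agreement between LLM and human scores could be summarized with a simple correlation. This is only a sketch; the function name and all scores below are hypothetical, not taken from the project:

```python
# Hypothetical sketch: comparing AI-agent scores with human reviewer
# scores for the same set of contributions. All numbers are illustrative.
from statistics import mean

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One score per contribution, on the same scale (e.g. 1-5).
human_scores = [4, 2, 5, 3, 1, 4]
llm_scores   = [5, 2, 4, 3, 2, 4]

agreement = pearson_r(human_scores, llm_scores)
print(f"human/LLM score correlation: {agreement:.2f}")
```

A value near 1.0 would suggest the LLM ranks contributions much like the human panel; a value near 0 would suggest it is scoring on different criteria.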

Proposal Description

How Our Project Will Contribute To The Growth Of The Decentralized AI Platform

This project will start development of a web app, delivered as an MVP, to support the best contributors' review process. The main innovation of this project is adding AI agents as reviewers alongside humans. At a later stage, the project aims to contribute to the AI platform by training AI agents and hosting them on the platform, giving the whole community access to an agent that judges textual contributions from others and assigns points to them.

Company Name (if applicable)

Almalgo

The core problem we are aiming to solve

From our experience with the best contributors review process, we can assert that it faces many challenges, chiefly: technical difficulty in distributing and assigning tasks; human bias when reviewing textual comments; ensuring randomness and fairness in the process; and providing a smooth experience for reviewers.

Our specific solution to this problem

Our solution is a platform that randomizes the input given to each reviewer and limits each reviewer's access, to keep the whole process fair. The platform will also randomize the contributions to be evaluated and choose a subset of them based on parameters provided by the admins. AI agents will join the process as reviewers. We will perform a statistical analysis of the differences between AI-agent and human evaluations, along with explanations of the results. This will provide good material to start a community discussion about automated AI-based reviewing systems.
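The randomization described above could be sketched as follows. This is a minimal illustration under assumed parameters; the function, reviewer names, and subset size are all hypothetical, not the project's actual implementation:

```python
# Hypothetical sketch: sample a subset of contributions (size set by an
# admin parameter) and deal them out randomly, so each reviewer only
# ever sees their own slice of the pool.
import random

def assign_reviews(contributions, reviewers, subset_size,
                   reviews_per_item=2, seed=None):
    """Randomly pick `subset_size` contributions and assign each one to
    `reviews_per_item` distinct reviewers. Returns {reviewer: [items]}."""
    rng = random.Random(seed)
    chosen = rng.sample(contributions, subset_size)
    assignments = {r: [] for r in reviewers}
    for item in chosen:
        for reviewer in rng.sample(reviewers, reviews_per_item):
            assignments[reviewer].append(item)
    return assignments

comments = [f"comment-{i}" for i in range(20)]
panel = ["alice", "bob", "carol", "ai-agent"]  # the AI agent joins as a peer reviewer
plan = assign_reviews(comments, panel, subset_size=8, seed=42)
for reviewer, items in plan.items():
    print(reviewer, "->", items)
```

Fixing a seed makes an assignment reproducible for auditing, while each item still goes to multiple independent reviewers, which is what allows the AI-vs-human comparison later.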

The AI agent and the code used are to be implemented on the platform at a later stage, to be called via API.

Project details

The work to be done in this ideation phase is:

  • Providing an action plan and the architecture to be implemented to get community feedback.
  • Developing an MVP of the Webapp solution to test the solution concept.
  • Simulating contributions in a round to test our solution.
  • Re-running DFR3 Best Contributor's Rewards With AI Agents

The aim of this project is to develop a proof of concept and a valid architecture to be fully implemented later on in the New Projects pool. This project aims to:

  • Decrease the technical difficulty in distributing and assigning tasks.
  • Reduce human bias when reviewing textual comments.
  • Ensure randomness and fairness in the process.
  • Provide a smooth experience for reviewers.

Our project also aims to engage the community at each stage of the work to be done through community calls and presentations in townhalls.

Additional links

Example of the work previously done here: Tale of Two Funds – Deepfunding 

Proposal Video

Placeholder for Spotlight Day pitch presentations. Videos will be added by the DF team when available.

  • Total Milestones

    4

  • Total Budget

    $5,000 USD

  • Last Updated

    5 Aug 2024

Milestone 1 - Action plan

Status
😀 Completed
Description

In this milestone we will draft a high-level outline of all tasks involved in this project and assign each person to tasks accordingly. Moreover, we will provide a blueprint for the architecture of the solution and the technology stack to be used, including the database architecture, the APIs to be utilized or implemented, and the front-end solution.

Deliverables

A report including the tasks that will be covered by each member of the team and an action plan. A technical section with diagrams and graphs explaining the full architecture to be implemented and the technology stack for each component.

Budget

$1,000 USD

Milestone 2 - MVP Webapp development

Status
😀 Completed
Description

This milestone will involve developing an MVP implementing the architecture from the previous milestone to use in the experiments that we will operate. The solution will be open-source for the community to independently test.

Deliverables

An MVP of the web app, publicly hosted to be tested by community members, along with a report about the development steps taken.

Budget

$2,000 USD

Milestone 3 - Round simulation

Status
😀 Completed
Description

In this period we will generate some contributions and provide them to the web app. Simultaneously, we will engage some community members to contribute to the project as reviewers, and we will simulate a round with AI agents.

Deliverables

A report stating the number of data points used and the number of reviewers and AI agents included. We will then study the results and compare them with the parameters and the premise of the experiment and provide that to the community.

Budget

$1,000 USD

Milestone 4 - DFR3 best contributor's rewards with AI agents

Status
😀 Completed
Description

In this phase of the experiment we will work with real data from the last round while including AI agents as new reviewers. This aims to give us new insights into the previous round and to validate the solution, or surface issues that we need to fix.

Deliverables

A report stating the number of data points used and the number of AI agents included. We will then study the results, compare them with the parameters and premise of the experiment, and share the analysis with the community.

Budget

$1,000 USD

Join the Discussion (4)


4 Comments
  • 1
    Walter Karshat
    Feb 12, 2024 | 2:56 AM

    Broken link under Additional links in the proposal.

    • 1
      rojokabot
      Feb 12, 2024 | 11:17 AM

      Thanks, I'll try to fix it if I can still edit!

  • 3
    Jan Horlings
    Feb 1, 2024 | 2:01 PM

    Hey Rojo, Do you expect the LLM/Bot to create reviews in the same 4 dimensions as the reviews and ratings in the tab?

    Upvoted by Project Owner
    • 2
      rojokabot
      Feb 1, 2024 | 3:40 PM

Initially, we will mainly aim to fine-tune the agent to recognize the types of comments and engagement we have seen across previous rounds and will see in future ones; the human reviews show the AI agent what type of engagement is most desirable on average. I would also love to see these fine-tuned models giving feedback on proposals at some point and opening discussion around the relevant points of each proposal, but that step will need much testing first.

Expert Review

Overall

4

  • Feasibility 4
  • Viability 4
  • Desirability 5
  • Usefulness 5
AI-supported 'fair' contributor-review process

Excellent idea making the review process "fair" and AI-injected. At first glance the budget is low for all the spelled-out milestones; perhaps only a subset needs to be accomplished? Some technical details are unclear, for example what "randomize" means, but perhaps the ideation phase will flesh this out. Both the web app and the AI agent may merit their own independent projects. The team is proven and capable, thus project viability is high. Recommend this project.


3 ratings
  • 1
    Walter WaKa
    Feb 12, 2024 | 3:11 AM

    Overall

    5

    • Feasibility 3
    • Viability 3
    • Desirability 5
    • Usefulness 5
    Directly applicable and necessary for progress

    The topic of this investigation does need advancing to make the DF program yet more fair and equitable. Gross analysis of the voting patterns and outcomes per funding round is being done already, but without digging into specific aspects of individual reviewers' behavior, further insights will be limited.

    The proposal is overly ambitious for the expectations at this phase and for the budget, but it shows a clear vision toward a meaningful set of methods and tooling.

    There may be overlap with other projects within the test round, particularly the one on Reputation, thus opportunities for collaboration.

    Once fully built out, with more substantial work delivered in further funding rounds, such a system would naturally fit into the marketplace.

  • 1
    Rafael_Cardoso
    Feb 10, 2024 | 8:30 PM

    Overall

    4

    • Feasibility 4
    • Viability 4
    • Desirability 4
    • Usefulness 3
    An Interesting solution to improve Deep Funding

    Feasibility -> This proposal seems feasible overall; however, I have a concern that we might not yet have enough data to reach significant conclusions, which lowers this metric a bit.

    Viability -> The team seems experienced and capable enough to deliver on this proposal. However, they might face some difficulties with the data available to train this AI model, and with the data available for comparison and for reaching a decision. In addition, the proposal seems to need contributions from the community; to make sure people really put in the effort and have the necessary availability to contribute, it would have been good to add some kind of incentive for the community, even if just a prize for the 3 strongest contributors.

    Desirability -> This project can actually help streamline the process of reviewing ratings and defining the best contributors; it would also remove some of the workload and cost associated with reviewers.

    Usefulness -> The proposal is not directly useful for the AI marketplace, but it is useful for Deep Funding operations: not so much right away, while the need is limited, but as Deep Funding grows and we start to have more reviews to analyze and more individuals participating. So it definitely addresses a future need.

  • -2
    Rafiatu07
    Feb 11, 2024 | 11:36 PM

    Overall

    3

    • Feasibility 3
    • Viability 3
    • Desirability 4
    • Usefulness 4
    A desirable project for consideration

    The proposal outlines a project aimed at improving the best contributor's review process by incorporating AI agents alongside human reviewers. The project seeks to address various challenges such as technical difficulties in task distribution, human bias in reviewing, ensuring fairness, and providing a smooth experience for reviewers. Through the development of a web app MVP and the inclusion of AI agents, the project aims to gather community feedback and compare AI-generated scores with human evaluations.

    Positives of this project include its innovative approach to integrating AI into the review process, its focus on community engagement, and the potential to improve the efficiency and fairness of the review process. The team demonstrates an understanding of the challenges faced in the review process and proposes feasible solutions to address them. Also, the project milestones and budget seem realistic and achievable, with a clear plan outlined for each phase of development.

    However, some potential drawbacks include the complexity of integrating AI agents into the review process and the need for careful consideration of ethical implications and biases in AI-generated scores. Furthermore, the success of the project may depend on the willingness of the community to adopt and trust AI-based review systems.

    In terms of feasibility, the project team has the necessary skills and experience to carry out the project successfully, as evidenced by their proposed action plan, technical expertise, and previous experience in similar projects. The project also aligns with SingularityNET's goals of advancing AI technology and fostering community participation.

    In terms of viability, the project has the potential to provide valuable insights into the effectiveness of AI agents in the review process and could lead to the development of more efficient and fair review systems in the future. By leveraging AI technology, the project contributes to the advancement of AI capabilities within the SingularityNET platform.

    This project is desirable to SingularityNET AI as it addresses important challenges in the review process and has the potential to improve the efficiency and fairness of community-driven initiatives. However, careful consideration of ethical and practical concerns will be essential to ensuring the success and acceptance of AI-based review systems.

Summary

Overall Community

4

from 3 reviews
  • 5
    1
  • 4
    1
  • 3
    1
  • 2
    0
  • 1
    0

Feasibility

3.3

from 3 reviews

Viability

3.3

from 3 reviews

Desirability

4.3

from 3 reviews

Usefulness

4

from 3 reviews

Get Involved

Contribute your talents by joining your dream team and project. Visit the job board at Freelance DAO for opportunities today!

View Job Board