Biomedical Algorithm & Data Auditing Service

Mike Duncan
Project Owner

Expert Rating: n/a
  • Proposal for BGI Nexus 1
  • Funding Request: $5,000 USD
  • Funding Pool: Beneficial AI Solutions
  • Total Milestones: 4

Overview

Our BGI Nexus proposal implements an iterative, participatory design process of stakeholder engagement, mapping the relevant scope and necessary parameters for the development of an auditing service agent to be implemented on the SingularityNet platform. Through this auditing service agent we aim to provide empirical measurements of bias and safety, as well as a mechanism for evaluating agents and services on the platform that incorporates human user and expert opinion. The service agent for evaluating bias and safety will integrate with existing reputation systems on the platform.

Proposal Description

How Our Project Will Contribute To The Growth Of The Decentralized AI Platform

Bias is a fundamental potential harm in AI algorithm implementation, especially in training data corpora, and its analysis and mitigation are necessary for ensuring beneficial AGI.

Our Team

Emile Devereaux
Michael Duncan

AI services (New or Existing)

Algorithm & data auditing

Type

New AI service

Purpose

An auditing service that is both human- and machine-interpretable will allow platform users to include auditing results in their decision to use platform services. This service will necessarily have the capacity to evolve in an agile manner as new AI algorithms and new methods of bias measurement and mitigation emerge.

AI inputs

Abstract algorithms and/or code implementations, training data corpora, and target population characteristics (at minimum racial/ethnic categories, gender, disabilities, and age)

AI outputs

Qualitative analyses and empirical measurements of input-system bias, along with suggestions for bias mitigation and other safety risks
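
As an illustration only, the following is a minimal sketch of how the inputs and outputs above could be structured so that audit results are machine-interpretable as well as human-readable. All type and field names (AuditRequest, BiasFinding, AuditReport) are hypothetical placeholders; the actual schema would be defined during the scoping and stakeholder-engagement milestones.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class AuditRequest:
    """Inputs to the auditing service (illustrative names only)."""
    algorithm_description: str                # abstract algorithm and/or reference to a code implementation
    training_data_uri: Optional[str] = None   # training data corpus, if available
    target_population: Dict[str, List[str]] = field(default_factory=dict)
    # e.g. {"race_ethnicity": [...], "gender": [...], "disability": [...], "age": [...]}

@dataclass
class BiasFinding:
    """One empirical measurement paired with its qualitative interpretation."""
    protected_attribute: str    # e.g. "gender"
    metric_name: str            # e.g. "demographic_parity_difference"
    metric_value: float
    interpretation: str         # human-readable analysis
    mitigation_suggestion: str  # suggested mitigation step

@dataclass
class AuditReport:
    """Outputs of the auditing service."""
    audited_service_id: str
    findings: List[BiasFinding]
    other_safety_risks: List[str] = field(default_factory=list)
```

A report of this shape can be serialized (for example, to JSON) for consumption by reputation systems, while the interpretation and mitigation fields keep it legible to human reviewers.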

The core problem we are aiming to solve

There is currently no mechanism to gauge, empirically or otherwise, the algorithmic bias and safety of SingularityNet platform services. With this grant we will develop a plan for an auditing service that is both human- and machine-interpretable, allowing platform users to include auditing results in their decision to use platform services. This service will necessarily have the capacity to evolve in an agile manner as new AI algorithms and new methods of bias measurement and mitigation emerge.

Our specific solution to this problem

The popular reception of and explosive media response to large language models have put the likelihood of racial and gender biases in the outputs of AI systems in the spotlight. The SingularityNet ecosystem has yet to formally address this key aspect of AI safety and ethics in relation to the AI systems currently in development. This proposal will allow SingularityNet research into empirical measurements of algorithm bias and mitigation strategies to engage with the rapid developments of a wider research community.
Through literature review and stakeholder interviews we will establish the scope of a potential auditing service agent to be implemented on the SingularityNet platform that will provide empirical measurements of bias and safety and a mechanism for including human user and expert opinion. This research will then be available for evaluating other agents and services on the platform, as well as for integration with existing reputation systems.
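
To make "empirical measurements of bias" concrete, the sketch below computes two standard group-fairness metrics, the demographic parity difference and the disparate impact ratio, from a model's binary predictions and a protected attribute. This is only an illustrative example; the metrics the service actually adopts would be determined through the literature review and stakeholder engagement described in the milestones.

```python
from typing import Dict, Sequence

def positive_rates(predictions: Sequence[int], groups: Sequence[str]) -> Dict[str, float]:
    """Fraction of positive (1) predictions for each protected group."""
    totals: Dict[str, int] = {}
    positives: Dict[str, int] = {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if pred == 1 else 0)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions: Sequence[int], groups: Sequence[str]) -> float:
    """Largest minus smallest positive-prediction rate across groups (0 means parity)."""
    rates = positive_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

def disparate_impact_ratio(predictions: Sequence[int], groups: Sequence[str]) -> float:
    """Smallest divided by largest positive-prediction rate (the 'four-fifths rule' compares this to 0.8)."""
    rates = positive_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Toy example: a screening model's decisions for two demographic groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
print(disparate_impact_ratio(preds, groups))         # 0.25 / 0.75 ≈ 0.33
```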

Open Source Licensing

MIT - Massachusetts Institute of Technology License

Links and references

These links give additional context for the overview:
https://medium.com/@jeffery-recker/what-is-an-algorithmic-bias-audit-ea71252b0ec3
https://en.wikipedia.org/wiki/Inductive_bias

  • Total Milestones: 4
  • Total Budget: $5,000 USD
  • Last Updated: 24 Feb 2025

Milestone 1 - Literature Review, Scoping and Mapping

Description

Research, identify, and review relevant case studies and literature on the application of AI models to human health, using Rejuve.bio as a case study. By mapping best practice through the existing literature, we aim to define the scope and identify categories of bias and risk that can be mapped across the SingularityNet network.

Deliverables

A bibliographic list of informational sources and a mapping overview of potential risk categories for SingularityNet stakeholders.

Budget

$1,200 USD

Success Criterion

Document the current state of the art for biomedical algorithm auditing and bias mitigation for presentation to service stakeholders: SingularityNet service users, developers, and institutional parties (government and corporate healthcare regulators and quality & safety evaluators).

Milestone 2 - Participatory Stakeholder Engagement 1: Rejuve.bio

Description

Starting with Rejuve.bio as our initial case study, we will conduct interviews with key team members in order to identify, test, and evaluate further areas of potential risk and safety. Using methods of observational design and prototyping, we will demonstrate the basic auditing system functions for Rejuve.bio.

Deliverables

A list of key categories of risk and safety as applicable to Rejuve.bio. An operational flowchart for prototyping and demonstrating the basic auditing system functions for Rejuve.bio and how these might be applied more widely.

Budget

$600 USD

Success Criterion

Buy-in from Rejuve.bio developers, the product manager, and platform beta testers on the relevance and utility of the proposed service requirements.

Milestone 3 - Democratic Participation & Community Feedback

Description

Using the results of Milestones 1 & 2, we will gather feedback from the SingularityNet token-holder community and the larger ecosystem about alignment with community values and goals.

Deliverables

A paper prototype evaluating and expanding on the Rejuve.bio specifications for applicability to the larger SingularityNet ecosystem.

Budget

$600 USD

Success Criterion

Buy-in from the larger SingularityNet community on the service specifications.

Milestone 4 - System Specification for Safety Auditing Service

Description

We will produce a DeepFunding proposal for a SingularityNet marketplace AI auditing service, including the system scope and recruitment of key players for software design and deployment.

Deliverables

A proposal and flowchart for the system, plus a production outline (staffing and timeline), to present to SingularityNet under the appropriate RFP.

Budget

$2,600 USD

Success Criterion

We will assemble a team and produce documentation outlining service coding and implementation that gains approval for DeepFunding of the project.
