Biomedical Algorithm & Data Auditing Service

Project Owner: EmileDevereaux


Funding Requested

$5,000 USD

Expert Review: 4 / 5
Community: 3.6 / 5 (9 ratings)

Overview

Our ideation-phase proposal applies an iterative participatory design process of stakeholder engagement, mapping the relevant scope and necessary parameters for an auditing service agent to be implemented on the SingularityNet platform. Through the development of this auditing service agent we aim to provide empirical measurements of bias and safety, as well as a mechanism for evaluating agents and services on the platform by communicating human user and expert opinion. The service agent for evaluating bias and safety will integrate with existing reputation systems on the platform.

Proposal Description

Our Team

Our team combines academic and industry expertise in the areas of AI development for human health; participatory & service design; institutional frameworks and assessment for Equality, Diversity & Inclusion (EDI); and legal frameworks for assessing AI risk and bias for policy development.

Dr Emile Devereaux, University of Sussex (participatory design, digital media, and EDI)

Michael Duncan, Rejuve.bio (AI and computational biology) 

Professor Angela Daly, University of Dundee (AI and international law)


Company Name (if applicable)

N/A

Please explain how this future proposal will help our decentralized AI platform grow and how this ideation phase will contribute to that proposal.

The popular reception of and explosive media response to large language models have put a spotlight on the likelihood of racial and gender biases in the outputs of AI systems. The SingularityNet ecosystem has yet to address this key aspect of AI safety and ethics in a formal way in relation to the AI systems currently in development. This proposal will allow SingularityNet research into empirical measurements of algorithmic bias and mitigation strategies to engage with the rapid developments of a wider research community.

Through literature review and stakeholder interviews we will establish the scope of a potential auditing service agent to be implemented on the SingularityNet platform, one that will provide empirical measurements of bias and safety and a mechanism for including human user and expert opinion. This research will then be available for evaluating other agents and services on the platform, as well as for integration with existing reputation systems.
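
To make the intended output more concrete, below is a minimal sketch of what a machine-readable audit record combining empirical metrics with human and expert opinion might look like. It is purely illustrative and not a deliverable of this proposal; the ServiceAuditRecord structure, field names, and example metrics are assumptions, and the actual schema would emerge from the literature review and stakeholder interviews.

    # Illustrative sketch only; structure, field names, and metrics are hypothetical.
    from dataclasses import dataclass, field
    from statistics import mean
    import json

    @dataclass
    class ServiceAuditRecord:
        """Hypothetical machine-readable audit result for one platform service."""
        service_id: str
        # Empirical bias metrics, e.g. output disparity per demographic attribute.
        bias_metrics: dict = field(default_factory=dict)
        # Human user and expert ratings on a 0-5 scale.
        user_ratings: list = field(default_factory=list)
        expert_ratings: list = field(default_factory=list)

        def summary(self) -> dict:
            """Collapse metrics and opinions into one machine-readable summary
            that a reputation system could consume alongside its own scores."""
            return {
                "service_id": self.service_id,
                "max_bias_gap": max(self.bias_metrics.values(), default=None),
                "mean_user_rating": mean(self.user_ratings) if self.user_ratings else None,
                "mean_expert_rating": mean(self.expert_ratings) if self.expert_ratings else None,
            }

    # Example: serialise an audit summary for downstream (e.g. reputation) systems.
    record = ServiceAuditRecord(
        service_id="example-biomedical-service",
        bias_metrics={"sex": 0.07, "age_group": 0.12, "skin_tone": 0.09},
        user_ratings=[4, 3, 5],
        expert_ratings=[4],
    )
    print(json.dumps(record.summary(), indent=2))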

 

Clarify what outcomes (if any) will stop you from submitting a complete proposal in the next round.

The ongoing yet largely unknown implications of the token merger of Fetch, Ocean and SingularityNet could impact the functioning of any potential service implemented on the SingularityNet marketplace.

 

The core problem we are aiming to solve

There is currently no mechanism, empirical or otherwise, to gauge the algorithmic bias and safety of SingularityNet platform services. With this grant we will develop a plan for an auditing service, both human and machine interpretable, that will allow platform users to include auditing results in their decision to use platform services. This service will necessarily need the capacity to evolve in an agile manner as new AI algorithms and new methods of bias measurement and mitigation emerge.

 

Our specific solution to this problem

We will develop a proposal for an algorithm auditing service that will, at minimum, provide a transparent description of potential algorithm biases, along with an overview of how the algorithm's outputs may negatively impact people.
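
As a loose illustration of what such a transparent description could look like, the sketch below turns structured bias measurements into a plain-language overview of potential negative impact. The function name, threshold, and wording are assumptions; the actual report format would be defined through the participatory design process described above.

    # Illustrative sketch only; the threshold, wording, and function name are assumptions.
    def transparency_statement(service_id: str, bias_metrics: dict,
                               threshold: float = 0.10) -> str:
        """Render structured bias metrics as a plain-language overview of how
        a service's outputs may negatively impact some groups of people."""
        flagged = {attr: gap for attr, gap in bias_metrics.items() if gap >= threshold}
        if not flagged:
            return (f"Service '{service_id}': no measured disparities exceeded "
                    f"the reporting threshold of {threshold}.")
        lines = [f"Service '{service_id}' showed measurable output disparities for:"]
        for attr, gap in sorted(flagged.items(), key=lambda kv: kv[1], reverse=True):
            lines.append(f"  - {attr}: disparity {gap:.2f}; outputs may be less "
                         f"reliable for some groups defined by this attribute.")
        return "\n".join(lines)

    print(transparency_statement("example-biomedical-service",
                                 {"sex": 0.07, "age_group": 0.12, "skin_tone": 0.09}))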

Project details

N/A

Existing resources

We will draw on existing contacts and the stakeholder knowledge base at Rejuve.bio, Rejuve.ai, and SingularityNet. We will work alongside the AI Risk Management Framework taskforce and draw on the Ambassador program's policy and vision documentation. Academic journals and peer-reviewed research will be accessed through the University of Sussex and the University of Dundee in the United Kingdom.

Additional videos

N/A

Proposal Video

DF Spotlight Day - DFR4 - Emile Devereaux - Biomedical Algorithm & Data Auditing Service

4 June 2024
  • Total Milestones: 5
  • Total Budget: $5,000 USD
  • Last Updated: 4 Jun 2024

Milestone 1 - Literature Review Scoping and Mapping

Description

Research, identify, and review relevant case studies and literature on the application of AI models to human health, using Rejuve.bio as a case study. By mapping best practice through the existing literature, we aim to define the scope and identify categories of bias and risk that can be mapped across the SingularityNet network.

Deliverables

We will produce two deliverables:
  • A bibliographic list of informational sources from peer-reviewed and non-peer-reviewed publications;
  • A mapping overview of potential risk categories for SingularityNet stakeholders.

Budget

$1,200 USD

Milestone 2 - Participatory Stakeholder Engagement 1: Rejuve.bio

Description

Starting with Rejuve.bio as our initial case study, we will conduct interviews with key team members in order to identify, test, and evaluate further areas of potential risk and safety. Using methods of observational design and prototyping, we will assess and determine potential areas of concern.

Deliverables

We will produce two deliverables:
  • A list of key categories of risk and safety as applicable to Rejuve.bio, and how these might be applied more widely;
  • An operational flowchart for prototyping and demonstrating the basic auditing system functions for Rejuve.bio, to be used in engaging with additional stakeholders.

Budget

$800 USD

Milestone 3 - Participatory Stakeholder Engagement 2: Evaluation

Description

The second iteration of our participatory design process builds on the initial work with Rejuve.bio in order to test the auditing service framework with additional AI stakeholders. The second phase engages stakeholders across SingularityNet in a process of democratic participation and community checks and balances to refine the proposed categories of risk and safety assessment.

Deliverables

We will deliver:
  • An updated and refined list of key categories for evaluating risk and safety in AI;
  • A "paper prototype" in the form of a flowchart for the auditing procedures.

Budget

$800 USD

Milestone 4 - Participatory Stakeholder Engagement 3: Policy

Description

In the third iteration of stakeholder engagement we will evaluate the auditing service with respect to current international law and AI policy development, providing checks and balances from a legal perspective. Angela Daly, Professor of Law & Technology in the Leverhulme Research Centre for Forensic Science and Dundee Law School, will evaluate our proposal with respect to stakeholder engagement and relevant legal, governmental, and non-governmental policies. https://discovery.dundee.ac.uk/en/persons/angela-daly

Deliverables

We will deliver:
  • An updated and refined list of key categories for evaluating risk and safety in AI;
  • An updated "paper prototype" in the form of a flowchart for the auditing procedures.

Budget

$1,100 USD

Milestone 5 - Deepfunding Proposal Development

Description

We will develop a DeepFunding proposal for a SingularityNet marketplace AI auditing service, including the system scope and a recruitment plan for the key roles in software design and deployment.

Deliverables

We will deliver a DeepFunding proposal, including a flowchart, to be used in consultation with the Project Manager and SingularityNet engineers for further development.

Budget

$1,100 USD

Join the Discussion (4)

  • 0
    Emotublockchain
    Jun 9, 2024 | 9:17 PM

    What strategies do you plan to employ to ensure the agility of the auditing service agent, allowing it to evolve alongside emerging AI algorithms and bias measurement/mitigation methods?

  • 0
    Emotublockchain
    Jun 9, 2024 | 9:11 PM

    Can you elaborate on how the auditing service agent will integrate into existing reputation systems on the SingularityNet platform, and how it will communicate auditing results to users and decision-makers?

  • 0
    Emotublockchain
    Jun 9, 2024 | 9:10 PM

    How will you ensure that the iterative participatory design process effectively engages stakeholders and incorporates their perspectives into the development of the auditing service agent?  

  • 0
    michael
    Jun 6, 2024 | 4:33 PM

Examples of bias detection software from an NIH-sponsored hackathon: contest description, winning entries, code.

Expert Review

Overall

4

  • Feasibility 3
  • Viability 4
  • Desirability 4
  • Usefulness 4
Comprehensible approach and a reasonable plan.

The proposal aims to provide safety assessments, a crucial feature to help the SNET marketplace stand out. If successful, it could lead to a built-in safety and auditing function for all marketplace projects. The experienced team presents a comprehensible approach and a reasonable plan, and overall, we see it as a proposal that is worthy of support within the Ideation Pool in order to work on the topic further.

However, we would like to point out that the "elimination of bias" approach also has its dangers. It is not clear that it is desirable or possible to eliminate bias. Bias is built into living systems, and the development of bias has been a biological and evolutionary necessity. When it comes to biomedical systems, the desire would be to create the appropriate bias, not to eliminate bias. For example, when providing a diagnosis for an older female patient with dark skin, it would be most appropriate to look at biomedical data that is most relevant to older female patients with dark skin (disease caused by Vitamin D deficiency is more common in older and darker-skinned populations). Eliminating bias in biomedical data would actually create a poor diagnosis. It is of concern that the team has no medical professionals, and we would expect them to understand the implications of bias in medical assessments before eliminating it.

9 ratings
  • 0
    Ayo OluAyoola
    Jun 10, 2024 | 11:20 AM

    Overall

    4

    • Feasibility 4
    • Viability 4
    • Desirability 4
    • Usefulness 4
    Data Auditing Service

    Your proposal outlines a promising approach for developing an auditing service agent on SingularityNet. Here are some critical points for consideration:

    Strengths:

    • Iterative Participatory Design: Engaging stakeholders throughout the process is a strong foundation for building a tool that meets diverse needs.
    • Focus on Bias and Safety: Addressing these critical issues is essential for responsible AI development on SingularityNet.
    • Empirical Measurements: Combining qualitative (human opinion) with quantitative data (bias/safety metrics) provides a more comprehensive evaluation.
    • Integration with Reputation Systems: Utilizing existing infrastructure can streamline adoption and user experience.

    Points for Further Discussion:

    • Scope and Parameters: More details on the specific functionalities and limitations of the auditing service agent would be helpful.
    • Metrics for Bias and Safety: Defining the metrics used to measure bias and safety is crucial for ensuring their effectiveness.
    • Evaluation Methodology: Clarify how human and expert opinions will be collected and incorporated into the evaluation process.
    • Integration Details: Specify how the auditing service agent will interact with existing reputation systems on the platform.

    Additional Considerations:

    • Scalability: How will the agent handle a growing number of agents and services on SingularityNet?
    • Transparency and Explainability: Can the agent explain its findings on bias and safety in a way that users can understand?

    This proposal demonstrates a well-considered approach to building an auditing service agent for SingularityNet. By addressing the points for further discussion and considering the additional aspects, the team can develop a robust and valuable tool for the platform.

    Have you seen Ghost AI? Our ideation project is just as fantastic as yours. Click https://deepfunding.ai/proposal/persona-ai/ to see what we would like to explore.

  • 0
    Gombilla
    Jun 10, 2024 | 7:48 AM

    Overall

    4

    • Feasibility 4
    • Viability 4
    • Desirability 4
    • Usefulness 4
    Will contribute to equitable healthcare solutions

    I am drawn to your approach of incorporating human user and expert opinions in the evaluation process, improving the robustness and accuracy of your auditing outcomes. This is a very important phase in this ideation stage, and I commend your team for that. Also, your provision of empirical measurements of bias in biomedical algorithms will contribute to fairer and more equitable healthcare solutions. Kudos!

  • 0
    Max1524
    Jun 10, 2024 | 3:59 AM

    Overall

    3

    • Feasibility 3
    • Viability 3
    • Desirability 3
    • Usefulness 3
    Are there any risks during implementation?

    The total requested amount of $5,000 represents the true value of what the team is doing. The 4 milestones presented in detail make me quite satisfied and this is a plus point of this proposal. On the other hand, the team should present some aspect of foreseeable risks when implementing the proposal to ensure objectivity. Thank you team.

  • 0
    Emotublockchain
    Jun 9, 2024 | 9:21 PM

    Overall

    3

    • Feasibility 4
    • Viability 3
    • Desirability 3
    • Usefulness 5
    The lack of a mechanism

    Your proposal addresses a critical issue within the SingularityNet platform. The lack of a mechanism to evaluate algorithmic bias and safety. Here's a review of your proposal:

    1. Clear Problem Statement: You articulate the problem well, emphasizing the need for empirical measurement of bias and safety in SingularityNet platform services.
    2. Comprehensive Solution: Your proposal outlines a detailed plan to develop an auditing service agent that addresses the identified problem. It covers stakeholder engagement, scope mapping, and parameters for development.

    Areas of Improvement

    1. Concrete Methodology: While the proposal outlines the objectives and goals clearly, it could benefit from a more detailed methodology section. Specify the steps and methodologies you'll employ during the iterative participatory design process.

        2. Measurement Metrics: Define specific metrics for measuring algorithmic bias and safety.        How will you quantify these concepts to provide empirical measurements?

  • 0
    Onize Olie
    Jun 8, 2024 | 10:07 PM

    Overall

    4

    • Feasibility 3
    • Viability 4
    • Desirability 4
    • Usefulness 4
    Addressing Biases in Data

    Upon reviewing the proposal, I found it innovative in its approach, particularly in stakeholder engagement and the inclusion of an expert team to address bias and safety in AI systems. The methodology's thoroughness, including literature review and stakeholder interviews, is a strong point.

    Overall, the idea holds much promise but might be hard to implement because biases are often unavoidable.

  • 0
    HieuTran
    Jun 1, 2024 | 10:51 AM

    Overall

    3

    • Feasibility 3
    • Viability 3
    • Desirability 3
    • Usefulness 3
    Biomedical algorithms and audit data services.

    Feasibility

    This is a great idea for a biological auditing service. The idea exposes unresolved questions about AI safety and ethics at SingularityNet compared to other AI platforms. At the same time, it is recommended to create a strategy for an audit service that both people and machines can comprehend, allowing platform users to use audit results when deciding whether to use a platform service. This is executable. However, the model selection is based on Rejuve.bio as a case study, and the proposal does not specify the scale of Rejuve.bio, making it difficult to obtain precise data to evaluate the plan's practicality.

    Viability

    The team, milestones, and proposed deliverables are clearly defined. The plan does not provide a time frame for completing each milestone. According to the description, only member Michael's LinkedIn information was discovered. Other project participants have evident competence, but information channels for members EmileDevereaux and Adaly001 could not be found.

    Desirability

    The project provides benefits in terms of auditing and algorithmic services in the biological niche, but the planned team has yet to create data that can be used in other disciplines.

    Usefulness

    This concept is extensively applied and developed on the AI platform, and it considers the use of algorithms at SingularityNet. The platform compares SingularityNet's experimental bias and safety measurements to those of other AI platforms.

     

  • 0
    CLEMENT
    May 31, 2024 | 4:09 PM

    Overall

    4

    • Feasibility 4
    • Viability 3
    • Desirability 3
    • Usefulness 4
    It Addresses concerns with bias in biomedical data

    Although still at an ideation phase, I believe this project holds significant promise both generally and within the SNET AI Marketplace. From a broader perspective, it addresses critical concerns surrounding bias and safety in biomedical algorithms and data, a crucial step in ensuring the reliability and ethical use of AI in healthcare. Moreso, by implementing an iterative participatory design process and engaging stakeholders, this project will ensure that diverse perspectives and expertise are incorporated into the development of the auditing service agent.

    Additionally, within the SNET AI Marketplace, this project will contribute in several ways. Firstly, it will enhance trust and transparency by providing empirical measurements of bias and safety, allowing users to make informed decisions about the algorithms and data they employ.

    Great job ideating this !

  • 0
    Tu Nguyen
    May 31, 2024 | 8:05 AM

    Overall

    4

    • Feasibility 4
    • Viability 4
    • Desirability 4
    • Usefulness 4
    Biomedical Algorithm & Data Auditing Service

    The core problem this project aims to solve is that there is currently no mechanism to evaluate, empirically or otherwise, the bias and algorithmic safety of SingularityNet platform services. The proposal's idea is to develop an algorithm auditing service that would, at a minimum, provide a transparent description of potential algorithm biases and an overview of how algorithmic outputs can negatively impact people.
    Personal advice: they should determine the start and end times of milestones.

  • 0
    Joseph Gastoni
    May 21, 2024 | 1:04 PM

    Overall

    3

    • Feasibility 3
    • Viability 3
    • Desirability 2
    • Usefulness 3
    a process for developing an auditing service agent

    This proposal outlines an iterative design process for developing an auditing service agent to assess bias and safety of AI services on the SingularityNet platform. Here's a breakdown of its strengths and weaknesses:

    Feasibility:

    • Moderate-High: Leveraging existing research and iterative design can lead to a functional auditing service.
    • Strengths: The proposal focuses on a well-defined problem and utilizes established methodologies (literature review, stakeholder interviews).
    • Weaknesses: Developing robust bias detection algorithms can be complex, and integrating human input requires careful design.

    Viability:

    • Moderate: Success depends on the clarity and value proposition of the auditing service for SingularityNet users.
    • Strengths: The proposal addresses a growing concern (AI bias) and could enhance user trust in the platform.
    • Weaknesses: The proposal lacks details on the long-term sustainability of the service and its potential impact on platform operations.

    Desirability:

    • High: For developers concerned about AI bias and users seeking trustworthy AI services, this is desirable.
    • Strengths: The proposal offers transparency and user empowerment in evaluating AI services.
    • Weaknesses: The proposal needs to address concerns about the complexity of bias detection and potential limitations of the service (e.g., not all biases detectable).

    Usefulness:

    • Moderate-High: The service has the potential to improve the safety and fairness of AI services on SingularityNet.
    • Strengths: The proposal offers valuable insights into bias detection and mitigation strategies for SingularityNet developers.
    • Weaknesses: The proposal lacks details on how the service integrates with existing workflows and how it will be maintained and updated.

    Overall, developing an auditing service agent for SingularityNet can be beneficial. However, the team should focus on:

    • Developing clear metrics and methodologies for bias detection that are both human and machine interpretable.
    • Defining the scope and limitations of the service so user expectations are managed.
    • Designing a sustainable model for maintaining and updating the service as AI and bias detection techniques evolve.
    • Engaging with developers and users throughout the design process to ensure the service meets their needs.

    By addressing these considerations, the auditing service agent can become a valuable tool for promoting responsible AI development on the SingularityNet platform.

    Here are some strengths of this project:

    • Addresses a critical concern (AI bias) and offers a potential solution for SingularityNet users and developers.
    • Employs an iterative design process that incorporates stakeholder input and existing research.
    • Focuses on user empowerment and transparency in evaluating AI services on the platform.

    Here are some challenges to address:

    • Developing robust and reliable methods for detecting various types of bias in AI algorithms.
    • Ensuring the long-term sustainability and scalability of the auditing service within the SingularityNet ecosystem.
    • Managing user expectations by clearly communicating the scope and limitations of the service's bias detection capabilities.

Summary

Overall Community

3.6

from 9 reviews
  • 5 stars: 0
  • 4 stars: 5
  • 3 stars: 4
  • 2 stars: 0
  • 1 star: 0

Feasibility

3.6

from 9 reviews

Viability

3.4

from 9 reviews

Desirability

3.3

from 9 reviews

Usefulness

3.8

from 9 reviews
