Stephen Whitenstall
Project Owner: Lead Project Manager and Development Lead.
This project will develop an Ethical AI Auditing framework for graph-based and vectorisation systems, ensuring fairness, transparency, and compliance with ethical standards. It aims to enhance data accuracy, detect bias, and improve accountability in AI decisions. Using community-sourced data, the project will design accessible auditing techniques and share open-source tools to promote ethical alignment and trust in AI systems.
New AI service
Metrics will be applied to quantify graph properties and compare them against domain expectations, such as degree distribution (checking whether node connections follow expected patterns), centrality measures (identifying influential nodes), homophily (measuring whether nodes with similar attributes are disproportionately connected), and clustering coefficients (assessing community structure). An illustrative code sketch follows the dataset entries below.
Archive dataset (json)
Archive dataset (graph)
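A minimal sketch of these metric checks using networkx. The built-in karate club graph and its "club" attribute stand in for an Archives WG graph and a domain label; they are illustrative assumptions, not project data:

```python
import networkx as nx
from collections import Counter

# Stand-in graph: networkx's built-in karate club graph, whose nodes
# carry a "club" attribute we treat as an example domain label.
G = nx.karate_club_graph()

# Degree distribution: do node connections follow expected patterns?
degree_counts = Counter(d for _, d in G.degree())
print("degree distribution:", dict(sorted(degree_counts.items())))

# Centrality measures: identify influential nodes.
top5 = sorted(nx.degree_centrality(G).items(),
              key=lambda kv: kv[1], reverse=True)[:5]
print("most central nodes:", top5)

# Clustering coefficients: assess community structure.
print("average clustering:", nx.average_clustering(G))

# Homophily: fraction of edges whose endpoints share the same attribute.
same = sum(1 for u, v in G.edges() if G.nodes[u]["club"] == G.nodes[v]["club"])
print("same-attribute edge fraction:", same / G.number_of_edges())
```

Each printed value would then be compared against what domain experts expect for the community's data.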
New AI service
Define explicit rules to validate graph integrity, including syntax checks (ensuring nodes and edges have valid IDs and attributes), validation of edge directionality (e.g. "CEO → Company" should not be reversed), and enforcement of ontological consistency with reference to a community's domain-specific logic (e.g. who does what, how, and why). An illustrative code sketch follows the dataset entries below.
Archive dataset (json)
Archive dataset (graph)
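A minimal sketch of such rule-based validation, assuming a JSON-style export with "nodes" and "edges" lists. The field names ("id", "kind", "source", "target", "type") and the single LEADS rule are illustrative assumptions:

```python
def validate_graph(data: dict) -> list[str]:
    errors = []
    nodes = {}

    # Syntax checks: every node needs a non-empty ID.
    for node in data.get("nodes", []):
        if not node.get("id"):
            errors.append(f"node missing id: {node}")
        else:
            nodes[node["id"]] = node

    # Assumed mini-ontology: a LEADS edge must run person -> organisation
    # (e.g. "CEO -> Company"), never the reverse.
    allowed = {"LEADS": ("person", "organisation")}

    for edge in data.get("edges", []):
        src, dst = edge.get("source"), edge.get("target")
        if src not in nodes or dst not in nodes:
            errors.append(f"edge references unknown node: {edge}")
            continue
        rule = allowed.get(edge.get("type"))
        if rule and (nodes[src].get("kind"), nodes[dst].get("kind")) != rule:
            errors.append(f"edge violates direction/ontology: {edge}")
    return errors

example = {
    "nodes": [{"id": "alice", "kind": "person"},
              {"id": "acme", "kind": "organisation"}],
    "edges": [{"source": "acme", "target": "alice", "type": "LEADS"}],
}
print(validate_graph(example))  # flags the reversed CEO -> Company edge
```

In practice the `allowed` table would be populated from the community's own domain-specific logic.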
New AI service
Guidance on a manual audit of graphs for discriminatory patterns: checking attribute parity (how attributes are shared or distributed), edge fairness (identifying disproportionate or over-weighted relations), and counterfactual testing (considering how changing a node's attribute would alter its connections or outcomes). An illustrative code sketch follows the dataset entries below.
Archive dataset (json)
Archive dataset (graph)
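A minimal sketch of these three checks on a toy networkx graph; the "group" attribute and the graph itself are illustrative assumptions:

```python
import networkx as nx
from collections import Counter

# Toy graph with an assumed "group" attribute on each node.
G = nx.Graph()
G.add_nodes_from([(1, {"group": "A"}), (2, {"group": "A"}),
                  (3, {"group": "B"}), (4, {"group": "B"})])
G.add_edges_from([(1, 2), (1, 3), (1, 4)])

# Attribute parity: how is the attribute distributed across nodes?
groups = nx.get_node_attributes(G, "group")
print("attribute distribution:", Counter(groups.values()))

# Edge fairness: does one group account for a disproportionate share
# of edge endpoints?
endpoint_share = Counter()
for u, v in G.edges():
    endpoint_share[groups[u]] += 1
    endpoint_share[groups[v]] += 1
print("edge endpoints per group:", endpoint_share)

# Counterfactual testing: flip one node's attribute and re-measure,
# asking whether outcomes tied to the attribute would change with it.
G.nodes[1]["group"] = "B"
flipped = nx.get_node_attributes(G, "group")
print("distribution after flipping node 1:", Counter(flipped.values()))
```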
New AI service
Trace the origin and transformations of graph data. Guidance on source validation: confirm data sources (e.g. GitHub repository datasets and databases). Update logs: audit timestamps and edit histories to detect tampering or stale data. Privacy checks: ensure sensitive attributes (e.g. personal or financial details) are removed or anonymized. An illustrative code sketch follows the dataset entries below.
Archive dataset (json)
Archive dataset (graph)
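A minimal sketch of these provenance and privacy checks; the record layout, the trusted-source list, the sensitive field names, and the staleness threshold are all illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

TRUSTED_SOURCES = {"github://archives-wg/datasets"}   # assumed source list
SENSITIVE_KEYS = {"email", "phone", "iban"}           # assumed PII fields
STALE_AFTER = timedelta(days=365)

def audit_record(record: dict) -> list[str]:
    findings = []
    # Source validation: confirm the record came from a known source.
    if record.get("source") not in TRUSTED_SOURCES:
        findings.append(f"unverified source: {record.get('source')}")
    # Update logs: flag missing or stale timestamps.
    ts = record.get("updated_at")
    if ts is None or datetime.now(timezone.utc) - ts > STALE_AFTER:
        findings.append("missing or stale update timestamp")
    # Privacy checks: sensitive attributes must be removed or anonymized.
    leaked = SENSITIVE_KEYS & set(record.get("attributes", {}))
    if leaked:
        findings.append(f"PII attributes present: {sorted(leaked)}")
    return findings

print(audit_record({
    "source": "unknown://scrape",
    "updated_at": datetime(2020, 1, 1, tzinfo=timezone.utc),
    "attributes": {"email": "a@b.c", "role": "maintainer"},
}))
```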
New AI service
Engage domain experts or end-users to validate graph utility. Guidance on expert review: validate graph data with domain experts and archive maintainers. User surveys: ask users whether data outputs align with their needs.
Archive dataset (json)
Archive dataset (graph)
New AI service
Combine automated detection with human judgment. Flagged anomalies: use algorithms to highlight unusual nodes or edges (e.g. sudden spikes in relations/edges) and have humans investigate. Red-teaming: deliberately inject synthetic anomalies (e.g. fake edges) to test auditability. An illustrative code sketch follows the dataset entries below.
Archive dataset (json)
Archive dataset (graph)
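A minimal sketch of the flag-and-investigate loop: inject synthetic edges (red-teaming) and check that a simple degree-outlier rule surfaces them for human review. The generated stand-in graph, the target node, and the 3-sigma threshold are illustrative assumptions:

```python
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(100, 2)  # stand-in for a relation graph

# Red-teaming: deliberately attach fake edges to one node to create a
# sudden spike in its relations.
target = 42
fake = [(target, v) for v in random.sample(range(100), 20) if v != target]
G.add_edges_from(fake)

# Flagged anomalies: highlight nodes whose degree is more than 3
# standard deviations above the mean, then hand the list to a human
# reviewer to investigate.
degrees = [d for _, d in G.degree()]
mean = sum(degrees) / len(degrees)
std = (sum((d - mean) ** 2 for d in degrees) / len(degrees)) ** 0.5
flagged = [n for n, d in G.degree() if d > mean + 3 * std]
print("nodes flagged for human review:", flagged)  # should include node 42
```

If the injected node does not appear among the flags, the detection rule is too weak; that is the point of the red-team exercise.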
New AI service
Ensure graphs adhere to regulations and ethical norms. GDPR/CCPA compliance: verify that graphs do not expose personally identifiable information (PII). Transparency: document graph construction logic (e.g. "Why are these two users connected?"). Impact assessments: evaluate risks (e.g. "Could this relation graph amplify polarization?"). An illustrative code sketch follows the dataset entries below.
Archive dataset (json)
Archive dataset (graph)
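A minimal sketch of an automated PII scan plus a transparency check on edge construction logic, using networkx; the regex, the attribute names, and the toy graph are illustrative assumptions, and such a scan is no substitute for legal compliance review:

```python
import re
import networkx as nx

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # crude PII pattern

G = nx.Graph()
G.add_node("u1", bio="contact me at alice@example.org")
G.add_node("u2", bio="archivist")
# Transparency: record why the two users are connected.
G.add_edge("u1", "u2", reason="co-edited the same archive entry")

# GDPR/CCPA check: flag node attributes that look like exposed PII.
for node, attrs in G.nodes(data=True):
    for key, value in attrs.items():
        if isinstance(value, str) and EMAIL.search(value):
            print(f"possible PII on node {node!r}, attribute {key!r}")

# Transparency check: every edge should carry construction logic
# answering "Why are these two users connected?".
for u, v, attrs in G.edges(data=True):
    if "reason" not in attrs:
        print(f"edge ({u}, {v}) lacks a documented construction reason")
```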
- Gather community-sourced data (Archives WG datasets) and prepare for migration to a graph-based system.
- Establish data cleaning and preprocessing pipelines.
- Ensure compliance with ethical and legal data standards (e.g. GDPR).
- Cleaned and preprocessed dataset.
- Documentation of data sources and cleaning procedures.
- Ethical assessment report on data collection and handling practices.
$3,000 USD
- Data Quality: At least 95% accuracy and completeness in the cleaned dataset.
- Compliance: Full compliance with GDPR and other relevant data protection standards.
- Documentation: Clear and comprehensive documentation that allows reproducibility of data preparation steps.
- Design a graph data structure suitable for an auditing process.
- Migrate cleaned data to the graph-based system.
- Ensure graph integrity and consistency.
- Graph schema and data model.
- Initial graph database populated with community-sourced data.
- Documentation on data migration process and structure.
$4,000 USD
- Data Integrity: 100% data consistency and no data loss during migration.
- Performance: Graph database queries perform within acceptable time limits (e.g. <1 second for standard queries).
- Scalability: Graph structure can scale to at least 10x the initial dataset volume.
- Perform statistical analysis on graph data (e.g. degree distribution, centrality, clustering coefficients).
- Conduct initial auditing using visual inspection and rule-based checks.
- Identify and document potential biases or anomalies.
- Statistical analysis report.
- Initial audit findings highlighting anomalies or biases.
- Documentation of auditing methods and tools used.
$6,000 USD
- Accuracy: Statistical metrics calculated with at least 99% accuracy.
- Bias Detection: Identification of at least 3 potential bias patterns for further investigation.
- Tool Verification: Auditing tools validated through benchmark tests.
- Develop an Ethical AI Auditing framework tailored for graph-based systems.
- Integrate stakeholder feedback and domain-specific rules.
- Define guidelines for fairness, transparency, and ethical alignment.
- Ethical AI Auditing framework (including guidelines, checklists, and evaluation criteria).
- Stakeholder feedback integration report.
- Code samples demonstrating framework implementation.
$4,500 USD
- Framework Completeness: Covers at least 90% of identified ethical risks and biases.
- Stakeholder Acceptance: Positive feedback from at least 80% of stakeholders.
- Usability: Framework easily integrated with at least 2 existing graph-based systems.
- Prepare graph data for vector manipulation and generative model integration.
- Implement advanced auditing techniques, including bias and fairness evaluation, counterfactual testing, and ethical compliance checks.
- Conduct anomaly detection with human-in-the-loop evaluation.
- Vectorized graph data ready for generative models.
- Advanced auditing reports including fairness and ethical compliance assessments.
- Anomaly detection results with human verification.
$5,000 USD
- Model Accuracy: Vector representations achieve at least 95% accuracy in similarity tasks.
- Bias Detection: Identification of at least 5 bias patterns using advanced techniques.
- Anomaly Detection Precision: At least 90% precision in detecting anomalies.
- Validate the Ethical AI Auditing framework through the Archives WG real-world application.
- Document findings, methodologies, and challenges encountered.
- Share knowledge through open-source code samples, white papers, or community workshops.
- Case study report validating the framework.
- Comprehensive project documentation including ethical considerations and auditing best practices.
- Open-source repository with code samples and tools.
- White paper or workshop presentation on Ethical AI Auditing practices.
$3,500 USD
- Validation Success: Framework validated in the Archives WG real-world application with positive feedback.
- Knowledge Sharing Reach: White paper accessed, distributed, and commented on.
- Open-Source Adoption: tba.