AI Education Assistant for Crisis-Affected Children
Expert Rating
n/a
Proposal for BGI Nexus 1
Funding Request
$50,000 USD
Funding Pools
Beneficial AI Solutions
Total
3 Milestones
Overview
We aim to create an AI-based application that offers personalized educational content to children in crisis zones, adapting to each child's learning level and needs. The tool seeks to bridge educational gaps and provide continuous learning opportunities in challenging circumstances.
Millions of children in crisis zones are deprived of education due to displacement and resource scarcity. Traditional education models fail to address these challenges effectively. Our AI-driven education assistant will provide an adaptive, interactive learning environment accessible even in unstable conditions.
Our specific solution to this problem
Extent of Contribution to BGI Mission: This project aligns with BGI’s vision of social impact through AI by ensuring that vulnerable children receive quality education despite crises.
How It Helps BGI’s Mission:
Addresses educational inequality, ensuring displaced and underprivileged children have access to personalized learning.
Promotes ethical AI use in education, with reinforcement learning optimizing engagement while protecting students' privacy.
Contributes to long-term social good by equipping children with essential knowledge, fostering opportunities for growth.
Supports a decentralized AI future, as the open-access nature of the platform allows NGOs, schools, and governments to leverage AI for education.
Project details
Objectives:
Develop an AI-driven education system that adapts to the learning pace and skill level of individual students.
Implement gamification elements to improve engagement and retention.
Provide a multilingual experience to support diverse student backgrounds.
Develop an offline-first approach to allow access in low-connectivity regions.
Integrations:
NLP for personalized learning content.
Reinforcement Learning for adaptive difficulty levels.
Cloud-based database for student progress tracking.
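As a rough illustration of the "adaptive difficulty" integration, the sketch below uses a much simpler mechanism than a full reinforcement-learning policy: an Elo-style update that nudges a target difficulty level up after a correct answer and down after a mistake. All names here (`AdaptiveDifficulty`, its parameters) are hypothetical, not part of the proposal.

```python
class AdaptiveDifficulty:
    """Toy stand-in for the proposed RL component: nudge the target
    difficulty up after a correct answer and down after a mistake,
    clamped to a fixed range. An Elo-style update, not a full policy."""

    def __init__(self, level=1.0, step=0.2, min_level=0.0, max_level=5.0):
        self.level = level          # current target difficulty
        self.step = step            # size of each adjustment
        self.min_level = min_level
        self.max_level = max_level

    def update(self, correct):
        """Return the new difficulty after observing one answer."""
        delta = self.step if correct else -self.step
        self.level = min(self.max_level, max(self.min_level, self.level + delta))
        return self.level


# Example: three correct answers and one mistake raise the level slightly.
model = AdaptiveDifficulty()
for answer in [True, True, False, True]:
    model.update(answer)
print(round(model.level, 1))  # 1.4
```

A production version would replace this heuristic with a learned policy (e.g. a contextual bandit over item difficulty), but the interface — observe an answer, emit the next difficulty — stays the same.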
Real-time Interventions:
Instant AI-generated feedback for students.
Teacher dashboard for monitoring progress and intervention suggestions.
Scalability and Accessibility:
Web compatibility, with mobile support planned after successful deployment on the web.
Offline mode for accessibility in underserved areas.
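The offline-first requirement above can be prototyped with a local event queue that flushes progress records to the cloud database whenever connectivity returns. The sketch below is a minimal assumption-laden illustration: `OfflineProgressStore` and its `upload` callback are hypothetical names, not part of the proposal's codebase.

```python
class OfflineProgressStore:
    """Minimal offline-first sketch: progress events are appended to a
    local queue and pushed to the cloud database only when a network
    connection is available (via the hypothetical `upload` callback)."""

    def __init__(self, upload):
        self.upload = upload   # callable(event) -> None; raises ConnectionError if offline
        self.pending = []      # events not yet delivered

    def record(self, student_id, lesson_id, score):
        """Store a progress event locally, regardless of connectivity."""
        self.pending.append(
            {"student": student_id, "lesson": lesson_id, "score": score}
        )

    def sync(self):
        """Try to push queued events; keep any that fail for the next attempt.
        Returns True once the queue is fully drained."""
        remaining = []
        for event in self.pending:
            try:
                self.upload(event)
            except ConnectionError:
                remaining.append(event)
        self.pending = remaining
        return len(self.pending) == 0
```

In a real deployment the queue would live in persistent local storage (e.g. SQLite or IndexedDB in the browser) so progress survives app restarts, but the record-locally, sync-later pattern is the same.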
Open Source Licensing
MIT - Massachusetts Institute of Technology License
Placeholder for Spotlight Day pitch presentations. Videos will be added by the DF team when available.
Total Milestones
3
Total Budget
$50,000 USD
Last Updated
23 Feb 2025
Milestone 1 - AI Model Training
Description
- Develop core adaptive learning algorithms.
- Test on initial dataset and refine performance.
Deliverables
MVP with basic personalized learning paths
Budget
$15,000 USD
Success Criterion
AI-generated lesson plans show measurable improvements in student learning rates.
Milestone 2 - Web-Based Platform Prototype
Description
- Develop an interactive web interface for educational content.
- Integrate learning features.
Deliverables
Beta version of the web platform with core functionalities
Budget
$15,000 USD
Success Criterion
Successful testing with initial user groups.
Milestone 3 - Content Expansion and Optimization - Deployment
Description
- Add multilingual support and advanced AI feedback features.
- Deploy platform publicly with open access.
- Ensure scalability for thousands of concurrent users.
- Enhance UI/UX for seamless engagement.
Deliverables
Live platform with an active user base.
Fully functional web-based learning assistant.
Budget
$20,000 USD
Success Criterion
Positive feedback from test users (Deep Funding community) and educational organizations; 80%+ satisfaction rating from early adopters.
About Expert Reviews
Reviews and Ratings in Deep Funding are structured in 4 categories. This will ensure that the reviewer takes all these perspectives into account in their assessment and it will make it easier to compare different projects on their strengths and weaknesses.
Overall (Primary) This is an average of the 4 perspectives. At the start of this new process, we are assigning an equal weight to all categories, but over time we might change this and make some categories more important than others in the overall score. (This may even be done retroactively).
Feasibility (secondary)
This represents the user's assessment of whether the proposed project is theoretically possible and if it is deemed feasible. E.g. A proposal for nuclear fission might be theoretically possible, but it doesn’t look very feasible in the context of Deep Funding.
Viability (secondary)
This category is somewhat similar to Feasibility, but it interprets the feasibility against factors such as the size and experience of the team, the budget requested, and the estimated timelines. We could frame this as: “What is your level of confidence that this team will be able to complete this project and its milestones in a reasonable time, and successfully deploy it?”
Examples:
A proposal that promises the development of a personal assistant that outperforms existing solutions might be feasible, but if there is no AI expertise in the team the viability rating might be low.
A proposal that promises a new Carbon Emission Compensation scheme might be technically feasible, but the viability could be estimated low due to challenges around market penetration and widespread adoption.
Desirability (secondary)
Even if the project team succeeds in creating a product, there is the question of market fit. Is this a project that fulfills an actual need? Is there a lot of competition already? Are the USPs of the project sufficient to make a difference?
Example:
Creating a translation service from, say, Spanish to English might be possible, but it is questionable whether such a service could capture a significant share of the market.
Usefulness (secondary)
This is a crucial category that aligns with the main goal of the Deep Funding program. The question to be asked here is: “To what extent will this proposal help to grow the Decentralized AI Platform?”
For proposals that develop or utilize an AI service on the platform, the question could be “How many API calls do we expect it to generate” (and how important / high-valued are these calls?).
For a marketing proposal, the question could be “How large and well-aligned is the target audience?” Another question is related to how the budget is spent. Are the funds mainly used for value creation for the platform or on other things?
Examples:
A metaverse project that spends 95% of its budget on the development of the game and only 5% on the development of an AI service for the platform might expect a low 'usefulness' rating here.
A marketing proposal that creates t-shirts for a local high school would get a lower 'usefulness' rating than a marketing proposal with a viable plan for targeting highly esteemed universities in a scalable way.
An AI service that is fully dedicated to a single product does not take advantage of the purpose of the platform. If the same service were offered to and useful for other parties, this should increase the 'usefulness' rating.