Dor Garbash and Johnny Nguyen, creators of Catalyst and Intersect MBO, propose a project to enhance the Singularity Net ecosystem's strategy. They aim to improve the ecosystem's return on investment (ROI) through research, data collection, analysis, and workshops with key stakeholders. The project, funded with $50,000, addresses the challenge of defining Singularity Net's ecosystem strategy in order to attract top talent and foster innovation.
Their solution is a rigorous, builder-centric strategy based on interviews with ecosystem builders and innovators, combined with expert assessment. The target audience includes administrators, DAO members, founding organizations, and builders. The project will disseminate its findings through PDF reports shared on Singularity Net's Discord, forums, Telegram channels, and Townhall. Workshops will be recorded and shared on YouTube.
The project is divided into four milestones: Research, Analysis, Report Publishing, and Workshops, with a total budget of $49,998. The main risk is that the recommendations may not be implemented; to mitigate this, the team plans to engage stakeholders, provide actionable recommendations, and track implementation progress.
Proposal Description
AI services (New or Existing)
Company Name
Radical Ecosystems
Service Details
Without an ecosystem strategy, the ecosystem is unlikely to thrive. We will identify key levers to dramatically improve the ROI of Singularity Net's ecosystem through rigorous data collection, analysis, and workshops with key stakeholders.
Problem Description
What is Singularity Net's ecosystem strategy? Without getting this right, we will not be able to attract the best talent or create the most powerful innovations.
Solution Description
A rigorous, builder-centric strategy based on interviews with ecosystem builders and innovators, and on expert assessment.
Milestone & Budget
Milestone title: Research
Deliverables: Interviews with different participant groups; gathering of publicly available data; document review; information inquiries sent to various stakeholders.
Outputs: The experiences of key participant groups, an estimate of the ecosystem's current size, and a collection of existing governance models, incentives, and decision processes.
Documentation: Raw interview transcriptions, data, and documents.
Time: 1 month
Budget: $16,416
Milestone title: Analysis
Deliverables: Summarize insights from interviews, analyze publicly gathered data, summarize information gathered from documents and inquiries, and provide recommendations.
Outputs: All data processed into our assessment framework in a succinct format; review of outcomes and written recommendations.
Documentation: Report formatted as a Google Doc, plus visualizations of governance and ecosystem models.
Time: 1 month
Budget: $16,416
Milestone title: Report publishing
Deliverables: Organizing all information into a PDF, disseminating the PDF, and sending out invites to public workshops.
Outputs: The community can read the report in a friendly format.
Documentation: A PDF, a slide deck summarizing key findings for each section.
Time: 2 weeks
Budget: $8,083 + $1,000 for design
Milestone title: Workshops
Deliverables: Organizing 2 workshops, gathering feedback on the report and workshop, and disseminating workshop recordings.
Outputs: The report is discussed, and conclusions and next steps are decided.
Documentation: Recording and meeting summary.
Time: 2 weeks
Budget: $8,083
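As a quick consistency check, the milestone budgets listed above can be summed directly; this sketch uses the figures exactly as stated in the proposal, treating the $1,000 design cost as part of the Report publishing milestone:

```python
# Milestone budgets as listed in the proposal (USD).
milestone_budgets = {
    "Research": 16_416,
    "Analysis": 16_416,
    "Report publishing": 8_083 + 1_000,  # base cost + design
    "Workshops": 8_083,
}

# Sum all milestones to get the total project budget.
total = sum(milestone_budgets.values())
print(f"Total: ${total:,}")  # Total: $49,998
```

The sum comes to $49,998, slightly under the $50,000 funding amount stated elsewhere on the page.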
Marketing & Competition
The target audience is all stakeholders who impact the ecosystem: administrators, DAO members, founding organizations, and especially builders.
The report will be formatted as a PDF and disseminated through Singularity Net's Discord, Forum, relevant Telegram channels, and Townhall.
Workshops will be recorded and published on YouTube, with links shared through community channels.
Related Links
https://radicalecosystems.com/
https://projectcatalyst.io/
https://zkignite.minaprotocol.com/
https://www.intersectmbo.org/
Long Description
Radical Ecosystems
Dor Garbash & Johnny Nguyen: Creators of Catalyst and Intersect MBO
Summary
Without an ecosystem strategy, the ecosystem is unlikely to thrive. We will identify key levers to dramatically improve the ROI of Singularity Net's ecosystem through rigorous data collection, analysis, and workshops with key stakeholders.
Funding Amount
$50,000
The Problem to be Solved
What is Singularity Net's ecosystem strategy? Without getting this right, we will not be able to attract the best talent or create the most powerful innovations.
Our Solution
A rigorous, builder-centric strategy based on interviews with ecosystem builders and innovators, and on expert assessment.
Marketing Strategy
The target audience is all stakeholders who impact the ecosystem: administrators, DAO members, founding organizations, and especially builders.
The report will be formatted as a PDF and disseminated through Singularity Net's Discord, Forum, relevant Telegram channels, and Townhall.
Workshops will be recorded and published on YouTube, with links shared through community channels.
Our Project Milestones and Cost Breakdown
Milestone title: Research
Deliverables: Interviews with different participant groups; gathering of publicly available data; document review; information inquiries sent to various stakeholders.
Outputs: The experiences of key participant groups, an estimate of the ecosystem's current size, and a collection of existing governance models, incentives, and decision processes.
Documentation: Raw interview transcriptions, data, and documents.
Time: 1 month
Budget: $16,416
Milestone title: Analysis
Deliverables: Summarize insights from interviews, analyze publicly gathered data, summarize information gathered from documents and inquiries, and provide recommendations.
Outputs: All data processed into our assessment framework in a succinct format; review of outcomes and written recommendations.
Documentation: Report formatted as a Google Doc, plus visualizations of governance and ecosystem models.
Time: 1 month
Budget: $16,416
Milestone title: Report publishing
Deliverables: Organizing all information into a PDF, disseminating the PDF, and sending out invites to public workshops.
Outputs: The community can read the report in a friendly format.
Documentation: A PDF, a slide deck summarizing key findings for each section.
Time: 2 weeks
Budget: $8,083 + $1,000 for design
Milestone title: Workshops
Deliverables: Organizing 2 workshops, gathering feedback on the report and workshop, and disseminating workshop recordings.
Outputs: The report is discussed, and conclusions and next steps are decided.
Documentation: Recording and meeting summary.
Time: 2 weeks
Budget: $8,083
Risk and Mitigation
The key risk is that our recommendations will not be implemented.
Our mitigation approach is the following:
(1) Engage and involve key stakeholders throughout the process.
(2) We have strongly influenced the Cardano and Mina ecosystems in the past, and we know the approach needed to achieve success here as well: thoughtfully articulated, actionable recommendations that take stakeholder feedback into account and achieve alignment.
(3) Our workshops will include an explicit discussion and articulation of who is responsible for implementing each recommendation, so that implementation can be tracked over time.
(4) If a high-impact recommendation is not followed, we will create a follow-up proposal to accelerate the implementation of key recommendations.
(5) If this proposal is accepted, we will submit a follow-up proposal for tracking the impact of the recommendations over time. (Unfortunately, the pool budget does not allow us to include that in scope.)
Voluntary Revenue
---
Open Source
All outputs will be published with a CC0 license (https://creativecommons.org/share-your-work/public-domain/cc0/)
Our Team
Dor Garbash has worked as Head of Governance at IOHK and Head of Ecosystem at Mina. He initiated and led the creation and growth of Catalyst, as well as the early formation of Cardano's governance and DRep structure. At Mina, he led the creation of on-chain governance and the Radical Innovation Fund zkIgnite. Dor also led the development of the first decentralized proposal system on Ethereum as a product manager at DAOstack.
Johnny Nguyen was a Director at IOHK and initially responsible for the Decentralized Consortium Fund and then subsequently the Cardano Member-Based Organization. Johnny laid much of the groundwork that resulted in what is now known as the Intersect MBO. Johnny has a career in enterprise technology that spans nearly three decades with emphasis on software engineering, application development, and enterprise architecture. In addition to his role and contributions at IOG, Johnny has always been a voice in the Cardano Community and is an advisor to several ecosystem projects including Charli3 and Indigo Protocol. Johnny is an advocate for Open Source Software and Open Collaboration and has moderated panels on Open Source Development in Cardano at the Cardano Summit.
Review For: Ecosystem strategy by the creators of Catalyst and Intersect
Expert Review
Rating Categories
Reviews and Ratings in Deep Funding are structured in 4 categories. This will ensure that the reviewer takes all these perspectives into account in their assessment and it will make it easier to compare different projects on their strengths and weaknesses.
Overall (Primary) This is an average of the 4 perspectives. At the start of this new process, we are assigning an equal weight to all categories, but over time we might change this and make some categories more important than others in the overall score. (This may even be done retroactively).
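Concretely, the equal-weight overall score described above reduces to a simple mean of the four category ratings. The sketch below uses hypothetical placeholder scores purely for illustration:

```python
# Equal-weight overall score: the mean of the four secondary category
# ratings, as described above. The scores here are hypothetical examples.
category_scores = {
    "Feasibility": 8.0,
    "Viability": 7.0,
    "Desirability": 6.0,
    "Usefulness": 9.0,
}

# With equal weights, the overall score is just the arithmetic mean.
overall = sum(category_scores.values()) / len(category_scores)
print(overall)  # 7.5
```

If the program later assigns unequal weights (as the text notes it might), this would become a weighted average rather than a plain mean.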
Feasibility (secondary)
This represents the user's assessment of whether the proposed project is theoretically possible and if it is deemed feasible. E.g. A proposal for nuclear fission might be theoretically possible, but it doesn’t look very feasible in the context of Deep Funding.
Viability (secondary)
This category is somewhat similar to Feasibility, but it interprets the feasibility against factors such as the size and experience of the team, the budget requested, and the estimated timelines. We could frame this as: “What is your level of confidence that this team will be able to complete this project and its milestones in a reasonable time, and successfully deploy it?”
Examples:
A proposal that promises the development of a personal assistant that outperforms existing solutions might be feasible, but if there is no AI expertise in the team the viability rating might be low.
A proposal that promises a new Carbon Emission Compensation scheme might be technically feasible, but the viability could be estimated low due to challenges around market penetration and widespread adoption.
Desirability (secondary)
Even if the project team succeeds in creating a product, there is the question of market fit. Is this a project that fulfills an actual need? Is there a lot of competition already? Are the USPs of the project sufficient to make a difference?
Example:
Creating a translation service from, say, Spanish to English might be possible, but it's questionable whether such a service would be able to get a significant share of the market.
Usefulness (secondary)
This is a crucial category that aligns with the main goal of the Deep Funding program. The question to be asked here is: “To what extent will this proposal help to grow the Decentralized AI Platform?”
For proposals that develop or utilize an AI service on the platform, the question could be “How many API calls do we expect it to generate” (and how important / high-valued are these calls?).
For a marketing proposal, the question could be “How large and well-aligned is the target audience?” Another question is related to how the budget is spent. Are the funds mainly used for value creation for the platform or on other things?
Examples:
A metaverse project that spends 95% of its budget on the development of the game and only 5% on the development of an AI service for the platform might expect a low ‘usefulness’ rating here.
A marketing proposal that creates t-shirts for a local high school would get a lower ‘usefulness’ rating than a marketing proposal that has a viable plan for targeting highly esteemed universities in a scalable way.
An AI service that is fully dedicated to a single product does not take advantage of the purpose of the platform. When the same service would be offered and useful to other parties, this should increase the ‘usefulness’ rating.
Total Milestones
4
Total Budget
$50,000 USD
Last Updated
16 Jan 2024
Milestone 1 - Research
Status
🧐 In Progress
Description
Interviews with different participant groups; gathering of publicly available data; document review; information inquiries sent to various stakeholders.
Budget
$16,416 USD
Milestone 2 - Analysis
Status
😐 Not Started
Description
Summarize insights from interviews, analyze publicly gathered data, summarize information gathered from documents and inquiries, and provide recommendations.
Budget
$16,416 USD
Milestone 3 - Report publishing
Status
😐 Not Started
Description
Organizing all information into a PDF, disseminating the PDF, and sending out invites to public workshops.
Budget
$9,083 USD
Milestone 4 - Workshops
Status
😐 Not Started
Description
Organizing 2 workshops, gathering feedback on the report and workshop, and disseminating workshop recordings.
New reviews and ratings are disabled for Awarded Projects