We conducted a market analysis, reviewed materials on AI governance and financing models, interviewed 11 AI entrepreneurs and researchers, wrote a report on the interview findings, and are now working on suggestions and edits to the PRISM model to offer to the community as a framework and tool.
RnDAO and ReLab, under the Regenerative Organizations Laboratory, aim to address the problem of stakeholder misalignment in AI development, particularly AGI, by developing the Preferential Return on Investment Soft-Capped Multistakeholder Model (PRISM). PRISM is designed to provide an alternative fundraising model for AI projects, ensuring alignment between entrepreneurs, researchers, investors, and collaborators while prioritizing humanity's wellbeing and safety. The grant application seeks funding to further research and apply PRISM to AI development and alignment, potentially offering a solution to the risks posed by competitive dynamics in the AI field. The project involves research planning, user interviews, design adaptations, presentation, validation, and publication of findings. It will be conducted in collaboration with SingularityNET, leveraging both organizations' social channels and networks for promotion. The findings will be made publicly available under a Creative Commons license. Mitigation strategies include a collaborative alliance and accounting for the possibility of disproving the hypothesis.
Proposal Description
AI services (New or Existing)
Company Name
RnDAO and ReLab - The Regenerative Organizations Laboratory
Problem Description
Misalignment of interests between stakeholders is one of the major sources of friction for any project and the root of a great deal of humanity's coordination failures. In the development of AI, and particularly of AGI and AI applications that can have a manipulative effect on an individual's sense-making capacity, stakeholder misalignment can become a serious threat to the systems human life depends on for survival and wellbeing. Daniel Schmachtenberger describes this argument very eloquently here.
We are thus left with a scenario where AI development is driven mostly by competitive, Molochian dynamics that pose significant systemic risks, while alternative pathways are rendered inadequate by the large costs of AI model training and research (perhaps the clearest evidence being that most cutting-edge AI research papers of the past few years have come not from universities but from deep-pocketed tech companies and their spinoffs).
Solution Description
Prism (the Preferential Return on Investment Soft-Capped Multistakeholder Model) was inspired by the structure of the Microsoft-OpenAI investment deal, which aimed to create alignment between the investor (Microsoft) and the researchers so that AI would be developed in a responsible, ethical way, not driven purely by fiduciary obligations to maximize profit and shareholder return above the wellbeing and survival of the human species. However pioneering, the Microsoft-OpenAI model has its limitations, and it eventually fell victim to the system's interests when the cap on the return OpenAI must pay Microsoft was raised to 100x ROI.
Prism attempts to address this problem by including explicit clauses and governance structures that maximize transparency among stakeholders, thereby increasing the system's capacity for self-regulation and, if necessary, positive external intervention.
You can read more about Prism in its current form in this article.
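To make the general mechanism concrete, here is a minimal sketch of a soft-capped preferential-return split of the kind Prism describes. All parameters (an 80% preferential share, a 10% post-cap share, a 5x cap multiple) and names are hypothetical illustrations for this sketch, not Prism's actual terms:

```python
from dataclasses import dataclass

@dataclass
class Investor:
    invested: float        # capital contributed
    cap_multiple: float    # soft cap: preferential returns taper after this multiple
    received: float = 0.0  # cumulative distributions to date

def distribute(profit: float, inv: Investor,
               pref_share: float = 0.8, post_cap_share: float = 0.1) -> dict:
    """Split one round of profit between the investor and the other
    stakeholders (team, researchers, a commons fund).

    Below the cap the investor takes `pref_share` of profit; once
    cumulative returns pass `cap_multiple` x capital, their share drops
    to `post_cap_share` rather than to zero -- a soft cap, unlike a
    hard cut-off such as the 100x cap in the Microsoft-OpenAI deal.
    """
    cap = inv.invested * inv.cap_multiple
    headroom = max(cap - inv.received, 0.0)
    # preferential tranche, limited by the headroom left under the cap
    pref_take = min(profit * pref_share, headroom)
    # profit "consumed" at the preferential rate; the rest pays the tail rate
    profit_at_pref = pref_take / pref_share
    tail_take = (profit - profit_at_pref) * post_cap_share
    inv.received += pref_take + tail_take
    return {"investor": pref_take + tail_take,
            "stakeholders": profit - pref_take - tail_take}
```

Under these illustrative numbers, a young project with ample headroom pays the investor 80% of a distribution, while one that has already reached its cap pays only 10%, leaving the remaining 90% for the other stakeholders.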
Our goal with this grant is to further the research and application of Prism specifically to the AI alignment context, and with it, develop relevant knowledge, templates and a simple tool to enable aligned funding for AI research and entrepreneurship (also leveraging an understanding of Web3 tokenomics when relevant).
Milestone & Budget
Milestone 1: Research Planning
Milestone Description: Plan the scope of research, areas of interest, people of interest, and specific research questions. This can be done collaboratively with the SNET, RnDAO and ReLab communities to crowdsource references and tap collective intelligence.
Milestone deliverable: 1. Research questions, knowledge map, schedule of research for the following weeks
Milestone-related budget: $1,000
Period: week 1.
Milestone 2: User Research interviews, market research & summary of findings
Milestone Description: Multimedia deep dives into selected areas of interest connected to the research questions. Ongoing documentation and synthesis of findings in a publicly accessible drive. Outreach to find relevant interview subjects, execution of the interviews, and synthesis of findings in Jobs To Be Done format.
Milestone deliverable: 1. Summary of relevant research questions and topics of interest 2. Summary of user interview findings
Milestone-related budget: $2,000
Period: weeks 2 and 3.
Milestone 3: Design of Prism adaptations to AI Alignment and Fundraising Context
Milestone Description: Given findings from content and user research, apply them in alternative designs for Prism that attempt to solve the AI alignment issue.
Milestone deliverable: 1. AI Alignment optimized versions of the Prism model
Milestone-related budget: $1,000
Period: week 4.
Milestone 4: Presentation, Validation and Adjustment of the Model
Milestone Description: Present proposed solutions to relevant stakeholders, incorporate feedback.
Milestone deliverable:
1. Design and revision of the template spreadsheets and simple websites for modeling of Prism investments
Milestone-related budget: $2,000
Period: weeks 6 and 7.
Milestone 5: Publishing
Milestone Description: Publish findings
Milestone deliverable:
1. Long form article that synthesizes the whole project
2. Templates and tool for modeling made available to the public
Milestone-related budget: $2,000
Period: weeks 7 and 8.
Revenue Sharing
For larger or high-risk projects, there is the option to share part of the success of your project with SNET/Deep Funding. This is completely voluntary but will have the benefit of motivating the community to vote in favor of your project.
Marketing & Competition
We'll leverage ReLab and RnDAO social channels to publicize the initiative and the collaboration with SingularityNET. Any amplification that SingularityNET can provide is most appreciated.
The findings will be (co)published on the RnDAO blog, referencing SingularityNET as our research partner and sponsor, thus providing additional marketing exposure to SN. (Previous partners have included
Summary
Misalignment between stakeholders is a serious problem. But when it comes to AI development, particularly AGI and other applications that can affect humanity's capacity for coordination and sensemaking, it can be an existential one. SingularityNET's objective of pursuing a beneficial singularity, as opposed to an extractivist, profit-driven one, must necessarily be supported by coherent, designed-for-alignment fundraising and financing mechanisms.
PRISM (the Preferential Return on Investment Soft-Capped Multistakeholder Model) is a fundraising model we have been developing for some time. It has the potential to provide a viable alternative for AI projects to raise funds while maintaining alignment between entrepreneurs, researchers, investors and collaborators, maximizing the overall wellbeing and safety of humanity.
We are applying for this grant to further the research and application of PRISM, originally designed with regenerative organizations and impact investment in mind, to the specific problem of AI development and alignment. We hope that this research, and the knowledge, models and templates produced with it, can enable AI teams and projects to do their work in a way that is aligned with the interests of the whole.
Funding Amount
8,000 USD
Risk and Mitigation
Illness and capacity reduction: As a small team, losing a member, even temporarily, risks project delays. As mitigation, we're part of a broader alliance of closely aligned projects (RnDAO), where we benefit from a community talent funnel and multiple other contributors who are also specialized in research, and we can call on them if needed.
Disproving the hypothesis: We're working with the assumption that there's a valuable and viable path for AI research funding using a mix of collaborative-competitive models such as Prism that can enhance alignment between stakeholders for better outcomes. However, the research could yield a negative result. This would be a disappointing scenario, but the findings would help others avoid a dead end, or at least provide initial data.
Open Source
The findings will be publicly available under a creative commons license (CC BY-SA).
Our Team
Davi Lemos (Founder of ReLab, Lead Researcher) - Psychologist, computer scientist, sociologist and systems designer, Davi has been designing incentive systems and collaborative experiences for teams, start-ups and nonprofits for almost 10 years. He is the inventor of Prism and has worked with multiple entrepreneurs, investors and their lawyers, tax advisors and heads of HR on implementing the multistakeholder model across multiple jurisdictions, including the US, Germany and Brazil. He will dedicate 30h per week for the duration of the project.
Danielo (Instigator at RnDAO, Co-Researcher) - Instigator at RnDAO and CoLead at TogetherCrew. Previously Head of Governance at Aragon, with 8 years of experience in organization design consulting (clients include Google, BCG, Daimler, the UN, and multiple startups); he founded two startups and served as visiting lecturer at Oxford University. He will dedicate 10h per week.
Review For: Alignment-Optimized Fundraising Tool for AI Research and Entrepreneurship
Expert Review
Rating Categories
Reviews and Ratings in Deep Funding are structured in 4 categories. This will ensure that the reviewer takes all these perspectives into account in their assessment and it will make it easier to compare different projects on their strengths and weaknesses.
Overall (Primary) This is an average of the 4 perspectives. At the start of this new process, we are assigning an equal weight to all categories, but over time we might change this and make some categories more important than others in the overall score. (This may even be done retroactively).
Feasibility (secondary)
This represents the user's assessment of whether the proposed project is theoretically possible and if it is deemed feasible. E.g. A proposal for nuclear fission might be theoretically possible, but it doesn’t look very feasible in the context of Deep Funding.
Viability (secondary)
This category is somewhat similar to Feasibility, but it interprets the feasibility against factors such as the size and experience of the team, the budget requested, and the estimated timelines. We could frame this as: “What is your level of confidence that this team will be able to complete this project and its milestones in a reasonable time, and successfully deploy it?”
Examples:
A proposal that promises the development of a personal assistant that outperforms existing solutions might be feasible, but if there is no AI expertise in the team the viability rating might be low.
A proposal that promises a new Carbon Emission Compensation scheme might be technically feasible, but the viability could be estimated low due to challenges around market penetration and widespread adoption.
Desirability (secondary)
Even if the project team succeeds in creating a product, there is the question of market fit. Is this a project that fulfills an actual need? Is there a lot of competition already? Are the USPs of the project sufficient to make a difference?
Example:
Creating a translation service from, say, Spanish to English might be possible, but it's questionable whether such a service would be able to get a significant share of the market.
Usefulness (secondary)
This is a crucial category that aligns with the main goal of the Deep Funding program. The question to be asked here is: “To what extent will this proposal help to grow the Decentralized AI Platform?”
For proposals that develop or utilize an AI service on the platform, the question could be “How many API calls do we expect it to generate” (and how important / high-valued are these calls?).
For a marketing proposal, the question could be “How large and well-aligned is the target audience?” Another question is related to how the budget is spent. Are the funds mainly used for value creation for the platform or on other things?
Examples:
A metaverse project that spends 95% of its budget on the development of the game and only 5% on the development of an AI service for the platform might expect a low ‘usefulness’ rating here.
A marketing proposal that creates t-shirts for a local high school would get a lower ‘usefulness’ rating than a marketing proposal that has a viable plan for targeting highly esteemed universities in a scalable way.
An AI service that is fully dedicated to a single product, does not take advantage of the purpose of the platform. When the same service would be offered and useful for other parties, this should increase the ‘usefulness’ rating.
Total Milestones
5
Total Budget
$8,000 USD
Last Updated
6 Oct 2024
Milestone 1 - Research Planning
Status
😀 Completed
Milestone 2 - User Research interviews, market research & summary of findings
Status
😀 Completed