DFR1 – Voting analysis part 3 – Next steps
While the other articles in this series are more of a factual, neutral nature, in this part, I will give the current state of my thinking. These are my personal views based on current knowledge. The conclusions may change, based on new arguments and insights or newly available functionality.
First goal: Reduce the risk and impact of teams that want to influence the outcomes to their personal advantage, e.g. by temporarily buying and using a large number of tokens, just for voting purposes.
Second goal: Ensure that the outcome of the voting process is a good reflection of the overall opinion of our community of token holders, by:
- Including as many community members as possible in the voting process
- Creating a proper balance between the weight of tokens vs the weight of people/wallets
Third goal: Ensure that good and realistic proposals float to the surface (at the cost of projects that are not feasible, not viable, or not desirable).
While these goals will hopefully motivate more community members to participate (which would be very welcome!) I did not specify this as a separate goal. More fundamental than increasing the number of engaged community members are: 1) making sure that the best projects surface and 2) making sure that the outcomes reflect the community’s preferences. Having more community members involved will increase the chances of #2, but a good reputation system might accomplish the same result with fewer people. Expert reviews mentioned below will help to align #1 and #2.
With these goals on the radar, there are a number of options, not mutually exclusive.
Option 1: Expert reviews
This is, in my opinion, conceptually straightforward. Different teams with different backgrounds post proposals that vary widely in style, technical detail, and quality. It would be helpful to have an overview created by a knowledgeable, experienced, and neutral entity that assesses all proposals on the same grounds and gives them an ‘expert rating’ based on predefined criteria. The main benefits of this option are:
- Reducing the threshold to form an opinion and thereby removing hurdles to participate in voting.
- Reducing the risk that unrealistic proposals or teams with insufficient qualifications will get awarded.
- The criteria that the experts are using will be an important guideline for the proposers. Sub-par proposals or teams might abstain from participating at all, while good teams will be able to make better proposals based on these assessment criteria.
How to implement:
Ideally, the experts would be truly neutral. This means that they should not be affiliated with any participant in any way. While I don’t see any issue with experts who work for SNET, it would not be appropriate for them to assess projects that are affiliated with or incubated by SNET.
Technical feasibility would be the first and foremost assessment criterion. Therefore the experts should be technically savvy. Other selection criteria should be team capabilities and business viability.
Challenges and solutions
Some practical challenges:
- We need to find these experts, perhaps rotate them, and make sure they are motivated
- It will be hard to find enough experts to have every proposal assessed
Solution direction:
We can somehow filter which projects should be assessed by this team of experts. I think the sweet spot in the number of proposals per expert is somewhere between 5 and 10. We can filter on the requested amount, on community preference (using the portal features), or we can ask the expert group to make their own choice on which projects they think are most promising.
In the short term, I can imagine we will order the projects by their requested amount and ask the experts as a group to select the most promising projects from the top 20%. This avoids forcing experts to spend quality time on proposals that request a high amount but are clearly low quality.
In the longer term, with sufficient community engagement, we can give the community a role here and create a filter that takes both requested funding and popularity into account, while still giving some freedom to the experts. E.g. we first select 15 proposals based on their requested amount. These are ordered by community rating and the top 4 will be assessed for sure. Out of the remaining 11 proposals, the expert group can make their own choice. (Of course, all these numbers are arbitrary and only added as an example.)
Should the number of proposals and the available funding grow, we can increase the number of experts and assessed proposals, as long as each proposal is assessed by at least, say, 3 experts.
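For concreteness, the two-stage filter described above (shortlist by requested amount, then guarantee the top community-rated picks and let the experts choose from the rest) could be sketched as follows. The function name, data shapes, and default numbers are illustrative assumptions only, mirroring the arbitrary example figures above.

```python
# Hypothetical sketch of the two-stage expert-review filter.
# The numbers (15 shortlisted, top 4 guaranteed) are the article's
# illustrative examples, not fixed rules.

def select_for_expert_review(proposals, shortlist_size=15, guaranteed=4):
    """Return (guaranteed picks, pool the experts choose from).

    `proposals` is a list of dicts with 'name', 'requested_amount',
    and 'community_rating' keys (an assumed data shape).
    """
    # Stage 1: shortlist by requested amount (largest first).
    shortlist = sorted(proposals,
                       key=lambda p: p["requested_amount"],
                       reverse=True)[:shortlist_size]
    # Stage 2: order the shortlist by community rating.
    by_rating = sorted(shortlist,
                       key=lambda p: p["community_rating"],
                       reverse=True)
    # Top-rated proposals are assessed for sure; the experts pick
    # freely from the remainder.
    return by_rating[:guaranteed], by_rating[guaranteed:]
```

With 20 submitted proposals this yields 4 guaranteed reviews and a pool of 11 for the experts to choose from, as in the example above.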
Desired end state:
Community rating: While the portal is open for submissions, and for a limited time after, community members have the option to rate each proposal on a predefined number of criteria, e.g.:
- Technical feasibility (is the project technically possible and feasible with a limited time and budget?)
- Confidence in the project team (team roles, capabilities, experience)
- Business viability (Is there a market for this product? Will it result in many and/or complex API calls on the platform?)
After a set date, a number of projects are selected for an expert review based on the same criteria. Each project will have a community rating, and a selection of projects will also have an expert rating.
Live show
After the experts have made their assessments, the teams and the experts will meet in a live show.
- The proposers will get an opportunity to defend their projects against the given expert feedback. They may give better or more detailed explanations, or propose improvements. These explanations or improvements will also be added to their proposal.
- The experts will ask some critical questions, to make their final assessment.
- At the end of the show, each of the experts will give their personal ranking of all the proposals on the show. (So they do not need to adjust their ratings, but might be allowed to)
- The grand finale is the final expert ranking of the top proposals.
Side note around the skill and appetite to participate in such a show:
Participating publicly in a live show may not be suitable for everyone.
For experts, this means that some would rather decline the role, while for others it may be an incentive and a podium to profile themselves as an expert. This might even make other (AGIX) incentives unnecessary.
For proposers, this also means that some good teams may feel so uncomfortable with these prospects that they would rather not participate in this activity. This should be clarified at an early stage, so the experts only assess teams that are willing to participate in the show. This is not because we want to somehow punish teams that do not want to participate, but to ensure that we have sufficient candidates.
There is an option to add some extra skin in the game by endowing the experts with some voting power. However, I am leaning towards keeping the outcome of the show purely advisory. Just having the benefit of this publicity is already a significant advantage, and we should remain open to high-quality projects from teams that are capable in all important areas except good presentation skills or teams that simply do not desire to put themselves on stage like this.
Option 2: Incentives for voting
A lot of things that we do in our everyday life are steered by incentives. A proper mechanism of the right incentives will lead to a self-organizing system that doesn’t need a lot of rules and overhead. But choosing the right incentives and predicting the outcomes they will lead to, can be hard. In the context of voting for Deep Funding, the first question is: Do we want people to vote because of the rewards? Can we trust that those people will actually do their research and vote conscientiously, rather than getting it over with quickly, just to get the reward?
One theoretical way to steer this in the right direction would be to connect the rewards to the proven benefit of a project to the platform, calculated in the value of the API calls done after a certain period. (This would exclude a few specific categories of projects, but we can manage that separately.) The issue here is that the time needed to collect this evidence is not in balance with the actual rewards which will be relatively low. We could try to amend this by some kind of ‘winner takes all’ principle, but that would create a lot of complexity and make the whole incentive process more like a lottery with a long waiting time.
Another scenario I researched is a voting process in which every voter gets a fixed number of points to distribute over their preferred projects at will. They can give all their points to the best project, or spread them more evenly. Projects with higher requested amounts would also require a higher number of points to be awarded. The idea is that voters would be rewarded based on the number of points they allotted to a winning project, thus gamifying the voting process and enabling voters to earn higher rewards by casting their votes in a smart way.
This may all sound a bit complex, and we can dive deeper into it at a later time. What matters for now is that, while interesting from a voting perspective, awards will not be the distinguishing mechanism for gamifying the voting process: the scenarios show that they would be distributed quite evenly, keeping the award per person relatively small.
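A minimal sketch of the point-distribution scenario above, assuming a simple pro-rata reward rule (the rule, names, and data shapes are my own illustrative assumptions, not a settled design):

```python
# Illustrative sketch: voters spread a fixed budget of points over
# proposals; the reward pool is then shared in proportion to the
# points each voter placed on winning proposals. All names and the
# reward rule are assumptions for illustration only.

POINTS_PER_VOTER = 100  # assumed fixed budget per voter

def voter_rewards(ballots, winners, reward_pool):
    """ballots: {voter: {project: points}}; winners: set of funded
    projects; reward_pool: total reward to distribute."""
    # Points each voter placed on projects that ended up winning.
    winning_points = {
        voter: sum(pts for proj, pts in alloc.items() if proj in winners)
        for voter, alloc in ballots.items()
    }
    total = sum(winning_points.values())
    if total == 0:
        return {voter: 0.0 for voter in ballots}
    # Pro-rata share of the pool.
    return {voter: reward_pool * pts / total
            for voter, pts in winning_points.items()}
```

Note how this makes the evenness problem visible: if most voters back the same winners, every individual share of the pool stays small, which is exactly the objection raised above.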
My conclusion, for the time being, is that offering AGIX rewards just for voting is not yet a good solution. Other rewards, such as NFTs, are an option but suffer the same downsides as AGIX. Our best approach may be related to reputation ratings (more about that further below). I hope this will successfully incentivize positive behavior, but it might not lead to substantially more people voting. If, after other measures have been implemented, we still want to increase the number of votes cast, we may revisit this topic and try out different voting award scenarios.
Option 3: Forms of liquid democracy
With the 3 options described in part 2 (2a, 2b, and 2c) in mind, options b and c, which incorporate some kind of liquid democracy, look best suited to the context of a crypto-DAO. There are many variations and possibilities in the implementation, but I see 2 main challenges:
1) How will people choose to whom they will lend their voting power? How can you tell which person will be best aligned to one’s own preferences?
2) How can we keep this allocation system fresh and dynamic?
A suboptimal outcome would be if the majority of token holders assigned their votes once, only to forget about it and not be bothered anymore. This would basically create a new layer of ‘proxies’ (people chosen to apply other people’s voting power) with structural, long-term voting power. A static layer like that would be susceptible to abuse, or, equally damaging, the perception of abuse, e.g. by being bribed or influenced to vote in a certain way. Also, if these proxies become less engaged or less active, the votes would go un(der)used or be cast without sufficient scrutiny.
This could be mitigated by letting these assignments of voting power expire after a period of time. However, this would raise the threshold for participating again, because people would need to make a new choice every now and then (every year?). And there is some likelihood that they will either ‘forget’ or simply choose the same person again, out of habit and convenience.
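The expiry mitigation could be sketched as follows, assuming a one-year term and simple in-memory bookkeeping (both are assumptions for illustration; an on-chain implementation would differ):

```python
# Minimal sketch of expiring vote delegation: an assignment lapses
# after a fixed term, and the voting power reverts to the token
# holder until they delegate again. The one-year term and the data
# shapes are illustrative assumptions.

from datetime import date, timedelta

TERM = timedelta(days=365)  # assumed expiry period ("every year?")

class Delegations:
    def __init__(self):
        self._grants = {}  # holder -> (proxy, start_date)

    def assign(self, holder, proxy, when):
        """Record (or renew) a delegation starting on `when`."""
        self._grants[holder] = (proxy, when)

    def proxy_for(self, holder, today):
        """Return the active proxy, or None if expired or unset."""
        grant = self._grants.get(holder)
        if grant is None:
            return None
        proxy, start = grant
        return proxy if today - start <= TERM else None
```

The trade-off discussed above is visible here: once `proxy_for` starts returning `None`, the holder must actively call `assign` again, which is exactly the renewed participation threshold.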
Assessing the actual voting behavior of a proxy takes time and dedication. Therefore, in my opinion, assigning your votes to a proxy is not a solution to onboard people that are not sufficiently engaged in the first place, and there is the added risk that a subset of people will float to the surface and accumulate ‘staying power’ that is not based on their true merit, but on popularity, visibility or plain habit. It may however support engaged people by enabling them to outsource some of the work for which they don’t have the time or don’t have the necessary knowledge.
For these reasons, I believe that, before implementing a form of liquid democracy, it makes sense to first develop a system that rates voters based on their behavior: Option 4.
Diving deeper into liquid democracy:
Note that this section only scratches the surface of liquid democracy. There are other topics and options that can be explored, such as: Can I override the votes of my proxy by casting a few votes of my own? (The quick answer here is ‘yes!’) How can I be assured all projects are voted upon by one of my proxies? How do I know which proxy votes for which project? Can I select multiple proxies? Etc.
Or to take a different direction; we could assign proxies for the expert rating rather than for voting. In that case, one could assign one person for their technical prowess, another for their business insights, and a third for their accurate team assessments.
But all these complexities will also require a good solution for the issues described above.
Option 4: Reputation ratings
Let’s start by observing a number of important questions:
- What areas of behavior are important to include in a reputation score? Behavior on the portal like ratings, comments, etc.? Voting behavior over multiple rounds? The number of tokens and how long they have been in someone’s possession?
- Should people/wallets with a high reputation be given an (AGIX) award?
- Should reputation ratings be used to amplify ratings given on the portal, or should they also influence the actual voting outcomes?
I believe that the more relevant data we have on an individual, the better we will be able to give someone a proper reputation, and the harder it will be to game the system. For this reason, I would like to include not just behavior on our portal, but also voting behavior, token balances, and even ‘holding’ behavior in the ratings. At the very least, recording and weighing voting behavior in consecutive rounds will make it much harder for a proposer to have a decisive impact on the outcome of a round by purchasing tokens just to vote up their own proposal. Over time we can improve the way the weights of different attributes make up someone’s overall reputation, and to what extent this influences the voting results. I expect this to be an ongoing learning process, since the dynamics of Deep Funding itself will also change over time. This means that we should define a process for regularly reviewing and adjusting the reputation algorithm and its impact on voting weights.
The definition of a reputation rating algorithm is something to be explored in detail and beyond the scope of this article. It depends on what data is available, and how we want to weigh each attribute relative to others. It is possible that for different purposes we will use different attributes and even assign them different weights. Some examples for applying reputation ratings:
- For official voting I can imagine that we will give a relatively high weight to voting behavior in previous rounds and the token balance of a wallet over a period of time, thus assuring that gaming the system is hard.
- For the expert ratings, the number of tokens someone has is of less importance, but we may look deeper into their past comments and ratings and assess how other people have valued their contributions.
- If we use reputation to help people choose the best voting representative, we may want to give more weight to more recent contributions, so we keep the voting proxies fresh and ensure that the most active people with the most recent and highly valued contributions float to the surface as potential proxy candidates.
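The idea of purpose-specific weight profiles in the bullets above can be sketched as follows. The attribute names and weight values are invented for illustration and would need to be tuned in practice:

```python
# Sketch of purpose-specific reputation profiles: the same
# attributes, weighted differently per use. All attribute names
# and weights below are invented examples.

PROFILES = {
    # Official voting: past voting behavior and long-held token
    # balance dominate, making the system hard to game.
    "voting": {"past_votes": 0.5, "token_holding": 0.4,
               "portal_activity": 0.1},
    # Expert-style ratings: portal contributions and how others
    # valued them matter most; token balance matters less.
    "rating": {"past_votes": 0.2, "token_holding": 0.1,
               "portal_activity": 0.7},
}

def reputation(attributes, purpose):
    """Weighted sum of normalized attribute scores (each in 0..1),
    using the weight profile for the given purpose."""
    weights = PROFILES[purpose]
    return sum(weights[name] * attributes.get(name, 0.0)
               for name in weights)
```

The same wallet can thus carry a high reputation for one purpose and a modest one for another, which is the flexibility the examples above call for.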
KYC or wallet reputation?
From a practical point of view, I would try to avoid any kind of KYC if possible. Although I have no strong objections against it personally, I do expect this to be another hurdle I would rather do without. But with a reputation mechanism as indicated above, we should be able to relate a reputation to a specific wallet. Perhaps we can think of a mechanism for people to optionally connect multiple wallets (e.g. on different chains, or in case they want to start using another wallet) if they so desire. The goal of a reputation system is not to be 100% accurate all the time, but to maximize the quality of the outcomes. This means that there will be some high-quality contributors that may not have a high reputation yet. But the overall allocations of weights should lead to a balanced and representative outcome.
Awarding high reputation contributors
If we want to go in the direction of rewarding people in AGIX as an incentive, we should reward activities that are most valuable to the program, e.g. the number of (liked) comments on proposals, the number and quality of ratings given, improvement suggestions made, support given to proposers, etc. We might also factor in whether the person/wallet is a long-term holder or a new wallet. Of course, not all these activities will be measurable from day one, and properly judging the quality of the contributions will be a process, perhaps someday assisted by an AI.
The best moment to grant such awards would be after voting. There is an opportunity here to relate the awarded amount for a large part to the activity in the previous round, thus incentivizing ‘constructive behavior’ in every new round, as well as the voting process itself. The definition of how to collect reputation points and how this will translate to awards requires a deeper analysis and some conversations with Swae.io, our portal provider.
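One way to relate the awarded amount largely to activity in the previous round is a weighted split between last-round and lifetime contribution scores. The 80/20 split and the scoring inputs below are invented examples, pending the deeper analysis mentioned above:

```python
# Sketch of round-based awards: the payout after each voting round
# leans mostly on activity in that round, keeping the incentive for
# constructive behavior fresh. The 80/20 split is an assumption.

def round_awards(pool, last_round_activity, lifetime_activity,
                 recent_weight=0.8):
    """Split `pool` (in AGIX) between contributors, weighting
    last-round activity at `recent_weight` and lifetime activity
    at the remainder. Activity maps are {member: score}."""
    def share(scores, member):
        # Member's fraction of the total score in this map.
        total = sum(scores.values())
        return scores.get(member, 0) / total if total else 0.0

    members = set(last_round_activity) | set(lifetime_activity)
    return {
        m: pool * (recent_weight * share(last_round_activity, m)
                   + (1 - recent_weight) * share(lifetime_activity, m))
        for m in members
    }
```

Under this rule, a long-term contributor who sat out the last round still receives something, but most of the pool flows to those active in the round just closed.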
Vulnerabilities of the reputation system
As mentioned in the third bullet above we can keep the reputation fresh by factoring in the recency of contributions. There are however still some potential risks.
One risk is for proposers or their affiliates to contact high reputation people in order to convince them towards a certain voting behavior. This might even become annoying for the high reputation people themselves. This could be enough reason to maintain anonymity or at least try to avoid facilitating 1:1 contact based on people’s reputation.
A more intriguing risk is based on the dynamic between liquid democracy and reputation. We should avoid that reputation will amplify liquid democracy (or vice versa) in such a way that the majority of the voting power will come to rest with just a few individuals. In this scenario, liquid democracy is harder to manage than reputation rating. Reputation rating is based on an algorithm that can be tweaked, while liquid democracy is ultimately a choice of a voter that cannot be reversed or adapted other than by the voters themselves. For this reason, I would be cautious in implementing a liquid democracy system and rather start with reputation.
Final thoughts on Reputation rating vs Liquid democracy
In the context of Deep Funding, my current view is that a well-configured reputation-governed system might work out as a meritocracy that can be sustained with a smaller number of people while giving superior outputs compared to a liquid democracy. Regarding liquid democracy, I see a risk that it might become stale over time and degenerate into an oligarchy or plutocracy.
I do, however, keep an open mind, knowing that we are in uncharted territory. Especially the effects of a combination of these 2 systems are very hard to predict and will likely depend on a range of configurations and dynamics that we do not have sufficient insight into at this time. However, I do believe that the safest and most promising route at this point is to start with a reputation system and not yet implement a system based on liquid democracy.
Main -provisional- conclusions
In many of the topics above we are just scratching the surface. There are many variations possible for each option and due to this, even the most promising options can still lead to unexpected/undesired outcomes. So we will need to continue to experiment and learn and not be afraid of making changes if the outcomes are not as desired.
But with that disclaimer, the road I see today centers around these main improvement measures:
- Implement an expert review process that will help the community to better recognize high-quality and realistic projects, ideally having the most promising proposals assessed by independent domain experts.
- Start working on a system of reputation ratings that will gain in reliability and weight as the program advances.
- Ideally, this reputation is based on voting behavior and constructive engagement in our portal. To accomplish this I would like to enable people to connect their online ID with their wallet(s). Hopefully, someday we can fully integrate the voting process into the portal.
- Reputation can be used to give extra weight to ratings and votes and as a scale to reward proactive community members.
- In the case of the voting weights, we will assess a person’s reputation collected over a longer period of time.
- In the case of giving rewards, we will emphasize a person’s constructive activities in the last voting round.
Next steps to take:
- Finding experts that are willing and able to contribute in the way described above, either voluntarily or for a modest compensation in AGIX taken from the DF wallet.
- Flesh out the reputation system and define which actions are recorded and how they affect rating and voting results as well as voting rewards. I imagine we can already define how the rules will change over the first rounds as we collect more historic voting behavior.
- Discuss required / desired functional improvements to our portal with Swae.
I hope that with this article I have given some insight into how I would like to see the voting process and related tooling and actions evolve, and that I have made convincing arguments as to why I believe these measures are our best options. Nevertheless, none of this is written in stone. I am always open to good arguments for other options and alternative suggestions. You are therefore warmly invited to join the conversation about these fascinating topics on our Deep Funding community channels on Telegram, or preferably, on Discord. I’m looking forward to meeting you there to discuss these and other Deep Funding related topics!
While the other articles in this series are more of a factual, neutral nature, in this part, I will give the current state of my thinking. These are my personal views based on current knowledge. The conclusions may change, based on new arguments and insights or newly available functionality.
First goal: Reduce the risk and impact of teams that want to influence the outcomes to their personal advantage, e.g. by temporarily buying and using a large number of tokens, just for voting purposes.
Second goal: Ensure that the outcome of the voting process is a good reflection of the overall opinion of our community of token holders, by:
- Including as many community members as possible in the voting process
- Creating a proper balance between the weight of tokens vs the weight of people/wallets
Third goal: Ensure that good and realistic proposals float to the surface (at the cost of projects that are not feasible, not viable, or not desirable).
While these goals will hopefully motivate more community members to participate (which would be very welcome!) I did not specify this as a separate goal. More fundamental than increasing the number of engaged community members are: 1) making sure that the best projects surface and 2) making sure that the outcomes reflect the community’s preferences. Having more community members involved will increase the chances of #2, but a good reputation system might accomplish the same result with fewer people. Expert reviews mentioned below will help to align #1 and #2.
With these goals on the radar, there are a number of options, not mutually exclusive.
Option 1: Expert reviews
This is in my opinion conceptually straightforward. Different teams with different backgrounds post proposals that are very different in style, and of varying technical detail and quality. It would be helpful to have an overview created by a knowledgeable, experienced, and neutral entity that will assess all proposals on the same grounds and gives them an ‘expert rating’ based on predefined criteria. The main benefits of this option are:
- Reducing the threshold to form an opinion and thereby removing hurdles to participate in voting.
- Reducing the risk that unrealistic proposals or teams with insufficient qualifications will get awarded.
- The criteria that the experts are using will be an important guideline for the proposers. Sub-par proposals or teams might abstain from participating at all, while good teams will be able to make better proposals based on these assessment criteria.
How to implement:
Ideally, the experts would have true neutrality. This means that they should not be affiliated with any participant in any way. While I don’t see any issue in having experts that are working for SNET, this would not be appropriate for assessing projects that are affiliated with or incubated by SNET.
Technical feasibility would be the first and foremost assessment criterium. Therefore the experts should be technically savvy. Other selection criteria should be team capabilities and business viability.
Challenges and solutions
Some practical challenges:
- We need to find these experts, perhaps rotate them and make sure they are motivated
- It will be hard to find enough experts to have every proposal assessed
Solution direction:
We can somehow filter which projects should be assessed by this team of experts. I think the sweet spot in the number of proposals per expert is somewhere between 5 and 10. We can filter on the requested amount, on community preference (using the portal features), or we can ask the expert group to make their own choice on which projects they think are most promising.
In the short term, I can imagine we will order the projects in their requested quantity and ask the experts as a group to make a selection of projects in the top 20% that are most promising. This will avoid experts being forced to spend quality time on proposals that have a high requested amount but are clearly low quality.
In the longer term, with sufficient community engagement, we can give a role here to the community, and create a filter that takes both required funding and popularity in account, while still giving some freedom to the experts. E.g we first filter out 15 proposals based on their requested amount. These are ordered based on community rating and the top 4 will be assessed for sure. Out of the remaining 11 proposals, the experts’ group can make their own choice. (of course, all these numbers are arbitrary, and just added as an example)
Should the number of proposals and the available funding grow, we can grow the number of experts and assessed proposals, as long as each proposal is at least assessed by e.g. 3 experts.
Desired end state:
Community rating: While the portal is open for submissions and for a limited time after, community members have the option to rate each proposal on a predefined number of criteria. E.g
- Technical feasibility (is the project technically possible and feasible with a limited time and budget?)
- Confidence in the project team (team roles, capabilities, experience)
- Business viability (Is there a market for this product? Will it result in many and/or complex API calls on the platform?
After a set date, a number of projects are selected for an expert review, that will be based on the same criteria. Each project will have a community rating and a selection of projects will also have an expert rating.
Live show
After the experts have made their assessments, the teams and the experts will meet in a live show.
- The proposers will get an opportunity to defend their projects against the given expert feedback. They may give better or more detailed explanations, or propose improvements. These explanations or improvements will also be added to their proposal
- The experts will ask some critical questions, to make their final assessment.
- At the end of the show, each of the experts will give their personal ranking of all the proposals on the show. (So they do not need to adjust their ratings, but might be allowed to)
- The grand finale is the final expert ranking of the top proposals.
Side note around the skill and appetite to participate in such a show:
Participating publicly in a live show may not be suitable for everyone.
For experts, this means that some will rather refuse the role, while for others this may be an incentive and a podium to profile themselves as an expert. This might even make other (AGIX) incentives unnecessary.
For proposers, this also means that some good teams may feel so uncomfortable with these prospects that they rather not participate in this activity. This should be clarified at an early stage, so the experts will only assess teams that are willing to participate in the show. This is not because we want to somehow punish teams that do not want to participate but to ensure that we have sufficient candidates.
There is an option to add some extra skin in the game by endowing the experts with some voting power. However, I am leaning towards keeping the outcome of the show purely advisory. Just having the benefit of this publicity is already a significant advantage, and we should remain open to high-quality projects from teams that are capable in all important areas except good presentation skills or teams that simply do not desire to put themselves on stage like this.
Option 2: Incentives for voting.
A lot of things that we do in our everyday life are steered by incentives. A proper mechanism of the right incentives will lead to a self-organizing system that doesn’t need a lot of rules and overhead. But choosing the right incentives and predicting the outcomes they will lead to, can be hard. In the context of voting for Deep Funding, the first question is: Do we want people to vote because of the rewards? Can we trust that those people will actually do their research and vote conscientiously, rather than getting it over with quickly, just to get the reward?
One theoretical way to steer this in the right direction would be to connect the rewards to the proven benefit of a project to the platform, calculated in the value of the API calls done after a certain period. (This would exclude a few specific categories of projects, but we can manage that separately.) The issue here is that the time needed to collect this evidence is not in balance with the actual rewards which will be relatively low. We could try to amend this by some kind of ‘winner takes all’ principle, but that would create a lot of complexity and make the whole incentive process more like a lottery with a long waiting time.
Another scenario I researched is a voting process where we give every voter a fixed number of points to be distributed over the preferred projects at will. They can give all their points to the best project, or distribute them more evenly. Projects with higher requested amounts would also require a higher number of points to become awarded. The idea is that voters would get awarded based on the number of points they allotted to a winning project thus gamifying the voting process and enabling voters to get higher awards by casting their votes in a smart way.
This may all sound a bit complex and we can dive into this deeper at a later time. Important for now is that, while interesting from a voting perspective, it will not be the distinguishing solution to gamify the voting process by means of awards, since the scenarios show that they will be distributed quite evenly, keeping the awards per person still relatively small.
My conclusion, for the time being, is that offering AGIX rewards just for voting is not yet a good solution. Other rewards, such as NFTs, are an option, but suffer from the same downsides as AGIX. Our best approach may be related to reputation ratings (more about that further below). I hope this will successfully incentivize positive behavior, but it might not lead to substantially more people voting. If, after other measures have been implemented, we still want to increase the number of votes cast, we can revisit this topic and try out different voting reward scenarios.
Option 3: Forms of liquid democracy
With the 3 options described in part 2 (2a/b/c) in mind, options b and c, incorporating some kind of liquid democracy, look best suited to the context of a crypto-DAO. There are a lot of variations and possibilities in the implementation, but I see 2 main challenges:
1) How will people choose to whom they lend their voting power? How can one tell which person is best aligned with one’s own preferences?
2) How can we keep this allocation system fresh and dynamic?
A suboptimal outcome would be if the majority of token holders assigned their votes once, only to forget about it and never look back. This would basically create a new layer of ‘proxies’ (people chosen to apply other people’s voting power) with structural, long-term voting power. A static layer like that would be susceptible to abuse, or, equally damaging, the perception of abuse, e.g. proxies being bribed or influenced to vote in a certain way. Also, if the proxies themselves became less engaged or less active, the votes would go un(der)used or be cast without sufficient scrutiny.
This could be mitigated by expiring these assignments of voting power after a period of time. This would, however, raise the threshold for participating again, because people would need to make a new choice every now and then (every year?). And there is some likelihood that they will either ‘forget’ or just choose the same person again, out of habit and convenience.
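The expiry mechanism itself is simple to express. The sketch below assumes a hypothetical one-year validity period and illustrative names; nothing here is a committed design:

```python
from datetime import date, timedelta

# Hypothetical sketch of expiring proxy delegations: a delegation
# counts only while it is younger than a fixed validity period.

VALIDITY = timedelta(days=365)  # assumed one-year expiry

def active_delegations(delegations, today):
    """Keep only delegations that have not yet expired.

    delegations: {voter: (proxy, assigned_on)}
    returns:     {voter: proxy} for unexpired assignments
    """
    return {voter: proxy
            for voter, (proxy, assigned_on) in delegations.items()
            if today - assigned_on < VALIDITY}

delegations = {
    "alice": ("proxy1", date(2023, 1, 1)),  # recent assignment
    "bob":   ("proxy2", date(2022, 1, 1)),  # assigned over a year ago
}
print(active_delegations(delegations, today=date(2023, 6, 1)))
# bob's delegation has lapsed; his voting power reverts to him
```

The hard part, as argued above, is not this check but the human behavior around it: renewals done out of habit rather than renewed assessment.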
Assessing the actual voting behavior of a proxy takes time and dedication. Therefore, in my opinion, assigning your votes to a proxy is not a solution to onboard people that are not sufficiently engaged in the first place, and there is the added risk that a subset of people will float to the surface and accumulate ‘staying power’ that is not based on their true merit, but on popularity, visibility or plain habit. It may however support engaged people by enabling them to outsource some of the work for which they don’t have the time or don’t have the necessary knowledge.
For these reasons, I believe that, before implementing a form of liquid democracy, it makes sense to first develop a system that will rate voters based on their behavior: Option 4 below.
Diving deeper into liquid democracy:
Note that this paragraph only scratches the surface of liquid democracy. There are other topics and options that could be explored, such as: Can I override the votes of my proxy by casting a few votes of my own? (The quick answer here is ‘yes!’) How can I be assured that all projects are voted upon by one of my proxies? How do I know which proxy votes on which project? Can I select multiple proxies? Etc.
Or, to take a different direction: we could assign proxies for the expert rating rather than for voting. In that case, one could choose one person for their technical prowess, another for their business insights, and a third for their accurate team assessments.
But all these complexities will also require a good solution for the issues described above.
Option 4: Reputation ratings
Let’s start by observing a number of important questions:
- What areas of behavior are important to include in a reputation score? Behavior on the portal like ratings, comments, etc.? Voting behavior over multiple rounds? The number of tokens and how long they have been in someone’s possession?
- Should people/wallets with a high reputation be given an (AGIX) award?
- Should reputation ratings be used to amplify ratings given on the portal, or should they also influence the actual voting outcomes?
I believe that the more relevant data we have on an individual, the better we will be able to assign a proper reputation, and the harder it will be to game the system. For this reason, I would like to include not just behavior on our portal, but also voting behavior, token balances, and even ‘holding’ behavior in the ratings. At the very least, recording and weighing voting behavior across consecutive rounds will make it much harder for a proposer to have a decisive impact on the outcome of a round by purchasing tokens just to vote up their own proposal. Over time we can improve how the weights of different attributes make up someone’s overall reputation, and to what extent this influences the voting results. This is a learning process that I expect to be ongoing, as the dynamics of Deep Funding itself will also change over time. This means that we should also define a process for reviewing and adjusting the reputation algorithm, and its impact on voting weights, on a regular basis.
The definition of a reputation rating algorithm is something to be explored in detail and beyond the scope of this article. It depends on what data is available, and how we want to weigh each attribute relative to others. It is possible that for different purposes we will use different attributes and even assign them different weights. Some examples for applying reputation ratings:
- For official voting I can imagine that we will give a relatively high weight to voting behavior in previous rounds and the token balance of a wallet over a period of time, thus assuring that gaming the system is hard.
- For the expert ratings, the number of tokens someone has is of less importance, but we may look deeper into their past comments and ratings and assess how other people have valued their contributions.
- If we use reputation to help people choose the best voting representative, we may want to give more weight to recent contributions, so we keep the voting proxies fresh and ensure that the most active people with the most recent, highly valued contributions float to the surface as potential proxy candidates.
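The idea of purpose-specific weightings can be sketched in a few lines. The attribute names and weight values below are illustrative assumptions only; the actual algorithm, as noted above, still needs to be explored in detail:

```python
# Sketch of purpose-specific reputation weights. Attribute names
# ("past_votes", "token_tenure", "portal") and all numbers are
# hypothetical, not the actual Deep Funding algorithm.

WEIGHT_PROFILES = {
    # Official voting: emphasize past voting and token holding time.
    "voting": {"past_votes": 0.5, "token_tenure": 0.4, "portal": 0.1},
    # Expert ratings: emphasize how peers valued earlier contributions.
    "expert": {"past_votes": 0.2, "token_tenure": 0.1, "portal": 0.7},
}

def reputation(attributes, purpose):
    """Weighted sum of normalized attribute scores (each in 0..1)."""
    weights = WEIGHT_PROFILES[purpose]
    return sum(w * attributes.get(name, 0.0) for name, w in weights.items())

scores = {"past_votes": 0.8, "token_tenure": 0.5, "portal": 0.9}
print(round(reputation(scores, "voting"), 2))  # 0.69
print(round(reputation(scores, "expert"), 2))  # 0.84
```

The same person ends up with a different effective weight per purpose, which is exactly the flexibility the bullets above call for.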
KYC or wallet reputation?
From a practical point of view, I would try to avoid any kind of KYC if possible. Although I have no strong objections against it personally, I do expect it to be another hurdle I would rather do without. With a reputation mechanism as indicated above, we should be able to tie a reputation to a specific wallet. Perhaps we can think of a mechanism for people to optionally connect multiple wallets (e.g. on different chains, or in case they want to start using another wallet). The goal of a reputation system is not to be 100% accurate all the time, but to maximize the quality of the outcomes. This means there will be some high-quality contributors who do not have a high reputation yet. But the overall allocation of weights should lead to a balanced and representative outcome.
Awarding high reputation contributors
If we want to go in the direction of rewarding people in AGIX as an incentive, we should reward the activities that are most valuable to the program, e.g. the number of (liked) comments on proposals, the number and quality of ratings given, improvement suggestions made, support given to proposers, etc. We might also factor in whether the person/wallet is a long-term holder or a new wallet. Of course, not all these activities will be measurable from day one, and properly judging the quality of contributions will be a process, perhaps someday assisted by an AI.
The best moment to grant such awards would be after voting. There is an opportunity here to base the awarded amount largely on activity in the previous round, thus incentivizing ‘constructive behavior’ in every new round, as well as in the voting process itself. Defining how reputation points are collected and how they translate into awards requires a deeper analysis and some conversations with Swae.io, our portal provider.
Vulnerabilities of the reputation system
As mentioned in the third bullet above we can keep the reputation fresh by factoring in the recency of contributions. There are however still some potential risks.
One risk is that proposers or their affiliates contact high-reputation people in order to sway their voting behavior. This might even become annoying for the high-reputation people themselves. This could be reason enough to maintain anonymity, or at least to avoid facilitating 1:1 contact based on people’s reputation.
A more intriguing risk stems from the dynamic between liquid democracy and reputation. We should avoid a situation where reputation amplifies liquid democracy (or vice versa) in such a way that the majority of the voting power comes to rest with just a few individuals. In this scenario, liquid democracy is harder to manage than reputation rating: a reputation rating is based on an algorithm that can be tweaked, while a liquid-democracy delegation is ultimately the choice of a voter that cannot be reversed or adapted other than by the voters themselves. For this reason, I would be cautious about implementing a liquid democracy system and rather start with reputation.
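Whichever system is chosen, the concentration risk described here is easy to monitor. As a sketch, assuming hypothetical proxy names, power values, and an arbitrary 50% alarm threshold, one could track the share of total voting power held by the top few proxies:

```python
# Simple concentration check: what fraction of total voting power
# rests with the k most powerful proxies? All data is hypothetical.

def top_k_share(voting_power, k):
    """voting_power: {proxy: power}. Returns the fraction of total
    power held by the k largest proxies (0.0 for an empty input)."""
    total = sum(voting_power.values())
    if total == 0:
        return 0.0
    top = sorted(voting_power.values(), reverse=True)[:k]
    return sum(top) / total

power = {"p1": 500, "p2": 300, "p3": 100, "p4": 50, "p5": 50}
share = top_k_share(power, k=2)
print(round(share, 2))  # 0.8 -> two proxies hold 80% of the power
if share > 0.5:  # assumed threshold for illustration
    print("warning: voting power is highly concentrated")
```

A metric like this could feed into the regular review process proposed for the reputation algorithm, flagging when delegation and reputation start to compound.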
Final thoughts on Reputation rating vs Liquid democracy
In the context of Deep Funding, my current view is that a well-configured reputation-governed system could work out as a meritocracy that can be sustained with a smaller number of people while giving superior outputs compared to a liquid democracy. With a liquid democracy, I see a risk that the system becomes stale over time and degenerates into an oligarchy or plutocracy.
I do, however, keep an open mind, knowing that we are in uncharted territory. Especially the effects of a combination of these 2 systems are very hard to predict and will likely depend on a range of configurations and dynamics that we do not have sufficient insight into at this time. Still, I believe the safest and most promising route at this point is to start out with a reputation system and not yet implement a system based on liquid democracy.
Main (provisional) conclusions
In many of the topics above we are just scratching the surface. There are many variations possible for each option and due to this, even the most promising options can still lead to unexpected/undesired outcomes. So we will need to continue to experiment and learn and not be afraid of making changes if the outcomes are not as desired.
But with that disclaimer, the road I see today centers around these main improvement measures:
- Implement an expert review process that will help the community to better recognize high-quality and realistic projects, ideally having the most promising proposals assessed by independent domain experts.
- Start working on a system of reputation ratings that will gain in reliability and weight as the program advances.
- Ideally, this reputation is based on voting behavior and constructive engagement in our portal. To accomplish this, I would like to enable people to connect their online ID with their wallet(s). Hopefully, someday we can fully integrate the voting process into the portal.
- Reputation can be used to give extra weight to ratings and votes and as a scale to reward proactive community members.
- In the case of the voting weights, we will assess a person’s reputation collected over a longer period of time.
- In the case of giving rewards, we will emphasize a person’s constructive activities in the last voting round.
Next steps to take:
- Finding experts who are willing and able to contribute in the way described above, either voluntarily or for a modest compensation in AGIX taken from the DF wallet.
- Flesh out the reputation system and define which actions are recorded and how they affect rating and voting results as well as voting rewards. I imagine we can already define how the rules will change over the first rounds as we collect more historic voting behavior.
- Discuss required / desired functional improvements to our portal with Swae.
I hope that with this article I have given some insight into how I would like to see the voting process and related tooling and actions evolve, and that I have made convincing arguments as to why I believe these measures are our best options. Nevertheless, none of this is written in stone. I am always open to good arguments for other options and to alternative suggestions. You are therefore warmly invited to join the conversation about these fascinating topics on our Deep Funding community channels on Telegram, or preferably, on Discord. I’m looking forward to meeting you there to discuss these and other Deep Funding related topics!