DFR1 – Voting analysis Part 1 – number crunching

Author: jan Published: June 16, 2022

We have just completed the first round of voting on Deep Funding. We have determined the winners and published the results. However, this does not answer all our questions. We did a deeper analysis of the outcomes so we can learn and improve as we go along.

Some of the questions we reviewed and will try to answer in this article are:

  • What would be the results if we didn’t count the extremes (1 and 10)?
  • What was the impact of large and very large wallets on the overall results? 
  • What would the result have been if every wallet had had equal voting power, regardless of its AGIX balance?

Of course, the exercises below are not entirely realistic. Had the rules been different, the community's voting behavior would also have changed, in ways more subtle than simply removing or rearranging the votes that were actually cast. So this exercise is only meant as an indication that, at best, gives some direction. While interesting, we should therefore be careful about drawing final conclusions from this imperfect exercise, which is also based on a single round. For this reason, among others, we are not disclosing any particulars about specific projects, but are limiting the results to numeric differences.

 

Grade voting 

While we have been generally conservative in our approach to Deep Funding rules and governance, we did decide to introduce 'grade voting', meaning that every voter could rate a project on a scale of 1-10 rather than casting the usual binary 'yes or no' vote. More than enough reason to take a deeper dive into the data.

Removing the extremes: 1 and 10

We received feedback that some voters were inclined to vote for the extremes, either 1 or 10, instead of going for the middle ground. In our view, this is neither expected nor desired behavior, since, measured against real-world criteria, either extreme should be possible but quite rare. So, based on the current voting results, if we simply removed the extremes and kept only the grades from 2 to 9, what would the result have been?
Answer:   

  • In Pool A, 10 projects would have been eligible against the current 3. But of the 3 teams awarded, 2 would still have been awarded in this scenario. Only one of the 3 outcomes would have been different.
  • In Pool B, the number of eligible teams would stay the same. Of the 9 awarded projects, only 2 different teams would have floated to the surface; 7 would have been unchanged.

Conclusion: The differences are limited. We could argue that if the current extremes had been assigned more conservatively instead of being removed completely, the difference from the actual outcomes would be even smaller.
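The "remove the extremes" exercise can be sketched in a few lines. This is a hypothetical reconstruction, not the actual tallying code: the `eligible_without_extremes` helper, the sample grade lists, and the simple unweighted mean are all assumptions for illustration; the real round weighted votes by AGIX balance.

```python
# Sketch of the "remove the extremes" experiment, assuming a project's
# votes are available as a plain list of 1-10 grades (unweighted here
# for simplicity; the actual round weighted votes by token balance).

THRESHOLD = 6.5  # eligibility threshold used in the actual round

def eligible_without_extremes(votes):
    """Drop all 1s and 10s, then test the mean of what remains against the threshold."""
    kept = [v for v in votes if 2 <= v <= 9]
    if not kept:  # every vote was an extreme; nothing left to average
        return False
    return sum(kept) / len(kept) >= THRESHOLD

# Hypothetical grade list: the 10s and the 1 are discarded, leaving [7, 8].
print(eligible_without_extremes([10, 10, 7, 1, 8]))  # mean 7.5 -> True
```

The same comparison against the unfiltered list shows how much the extremes can swing a borderline project.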

Removing extremes by reducing the scale to 1-5

In this scenario, we bucketed the current answers: votes of 9 and 10 were added to a new '5', votes of 7 and 8 to a new '4', and so on. This is similar to the 5-star rating mechanism used in many reviews. In contrast to the previous experiment, we are still counting all votes cast, but a 9 now carries the same weight as a 10, and so forth. Of course, we adjusted the threshold accordingly, from the old 6.5 to a new 3.25.
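The bucketing described above is a simple pairwise mapping. The sketch below is an illustrative assumption of how it could be computed (the function name and sample votes are invented); it is not the code used for the actual analysis.

```python
# Sketch of rescaling 1-10 grades to a 5-point scale:
# 1+2 -> 1, 3+4 -> 2, 5+6 -> 3, 7+8 -> 4, 9+10 -> 5.

OLD_THRESHOLD, NEW_THRESHOLD = 6.5, 3.25  # threshold scaled down with the grades

def to_five_scale(grade):
    """Collapse each pair of adjacent grades into one bucket."""
    return (grade + 1) // 2

votes = [9, 10, 7, 6, 3]  # hypothetical grade list for one project
rescaled = [to_five_scale(v) for v in votes]
print(rescaled)                                   # [5, 5, 4, 3, 2]
print(sum(rescaled) / len(rescaled) >= NEW_THRESHOLD)  # mean 3.8 -> True
```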

Results:

The impact on eligibility compared to the actual results is very low: only one more eligible project in Pool A and one more in Pool B. The only change to the actual awards would have been one extra awarded project in Pool B.

Conclusion: Based on these results, we don't expect significantly different outcomes from changing the grading scale to 1-5, provided the threshold is set at 3.25. (If we lowered the threshold to 3, comparable to a threshold of 6 on the 1-10 scale, far more projects would have been eligible in Pool A, but the end result of awarded teams would not have been affected at all.)

Redistributing the grades to a binary scale of Yes or No 

This is the ultimate reduction: from 1-10 grading to binary.
We can safely count all votes of 7 and higher as a 'Yes' and all votes of 5 and lower as a 'No'.

A '6' vote is a bit of a mixed signal. According to our rules, we defined a threshold of 6.5, meaning that a 6 should count as a 'No'. However, in everyday use of 1-10 grading, a 6 is often seen as just adequate, which may also have guided the voters. We therefore decided to treat '6' as a neutral grade and ignored those votes in this exercise.
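The binary mapping with a neutral '6' can be sketched as follows. The helper name and sample votes are hypothetical; the sketch only assumes the rules stated above (7-10 counts as 'Yes', 1-5 as 'No', 6 is dropped).

```python
# Sketch of collapsing 1-10 grades to binary Yes/No, treating 6 as neutral.

def to_binary(grade):
    """7-10 -> 'yes', 1-5 -> 'no', 6 -> None (neutral, dropped from the tally)."""
    if grade >= 7:
        return "yes"
    if grade <= 5:
        return "no"
    return None

votes = [8, 6, 4, 10, 6, 7]  # hypothetical grade list; both 6s are ignored
mapped = [m for m in (to_binary(v) for v in votes) if m is not None]
print(mapped.count("yes"), mapped.count("no"))  # 3 1
```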

Results: In Pool A the number of eligible and awarded projects stays the same, but one of the three winners is a different project. In Pool B 4 additional projects would have been eligible and awarded.  

Conclusion: Going all the way to binary has the most impact, but even so, it is not a radical change in results, especially in light of the somewhat arbitrary decision to ignore the '6' votes.

 

The impact of 1 token = 1 vote

Equating someone's voting power with the number of AGIX in their possession can be justified from the standpoint of 'skin in the game'. But one could question whether the vote of someone with a million tokens should really equal that of 100 people with only 10,000 tokens each in their wallets. Of course, there was also a practical reason for this arrangement: basing the results on token balance is the simplest way to prevent people from gaming the system by splitting their wallets. An alternative would have been some kind of KYC to target real individuals, which would have presented an extra hurdle to participation that we would rather avoid, especially in the first round.

Still, looking at the graph below, we can see that under this system a few very large wallets (just the top 3) must have had a disproportionate impact on the overall results.

Fig. 1 Distribution of tokens per wallet involved in the voting process. Balance quantities have been removed for the sake of privacy.

Counting wallets instead of AGIX balances 

To answer the question of whether a personalized approach should be considered in future rounds, we first reviewed what the results would have been if we only counted the voting behavior of wallets, regardless of their AGIX balance. We expect that under the current settings most, if not all, people used only one wallet for voting.
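The contrast between token-weighted and one-wallet-one-vote tallies can be illustrated with a small sketch. The wallet balances and grades below are entirely hypothetical; the point is only to show how a single large wallet can pull a weighted average away from the per-wallet average.

```python
# Sketch contrasting a token-weighted grade with a one-wallet-one-vote
# grade for a single project. Balances and grades are made up.

votes = [           # (AGIX balance, grade given to the project)
    (1_000_000, 4), # one 'whale' voting low
    (50_000, 8),
    (10_000, 9),
    (10_000, 9),
]

# Weighted by token balance: the whale dominates the outcome.
token_weighted = sum(bal * g for bal, g in votes) / sum(bal for bal, _ in votes)

# One wallet, one vote: every wallet counts equally.
wallet_equal = sum(g for _, g in votes) / len(votes)

print(round(token_weighted, 2), round(wallet_equal, 2))  # 4.28 7.5
```

Against a threshold of 6.5, this hypothetical project fails under token weighting but passes comfortably under wallet-equal voting, which mirrors the direction of the differences reported below.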

Results: 

In Pool A, all projects except one would have been eligible. Of the 3 currently awarded teams, only one would still have made it. Two other teams would have been awarded instead.

In Pool B, all projects would have been eligible! All currently awarded teams would still be awarded, and an additional 4 teams would also have been awarded.

Conclusion: 

The results would have changed much more significantly had we counted wallets instead of tokens.

If we combine wallet-based voting with some of the 'bucketing' exercises above, we see some further changes, but the impact is similarly small compared to the wallet-vote baseline.

Effect of large vs smaller wallets

Another exercise reveals the effect of AGIX balances by comparing the behavior and results of smaller and larger wallets. What would happen if we filtered out the (very) large wallets from the results? We reviewed a number of scenarios:

  • Results of wallets <50,000 AGIX
    • In Pool A, all projects would have been eligible. The top 3 awarded projects would have been completely different from the current situation!
    • In Pool B, all projects would also have been eligible. Four additional projects would have been awarded (leaving out only two).
  • Results of wallets >50,000 AGIX
    • In Pool A, the same 3 projects would have been eligible and awarded, so the outcomes would have been identical to the actuals.
    • In Pool B, 9 projects would have been eligible and awarded. Here, too, the same outcomes as the actuals.
  • Results if we filter out the roughly 10% of wallets with the highest balances (meaning wallets > 400,000 AGIX are not counted)
    • In Pool A, 9 projects would have been eligible instead of the current 3. Only one of the current projects would have been awarded, and there would be two other winners instead.
    • In Pool B, 5 more projects would have been eligible, and all eligible projects would (just) have been awarded, so there would have been 5 more awarded projects.
  • Results if we filter out the 3 'whales' holding 1M AGIX or more
    • In Pool A, only 1 project would have been eligible and thus awarded.
    • In Pool B, 8 projects would have been eligible, with one project not in the current set.

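The balance-filter scenarios above all follow the same pattern: drop wallets above (or below) a cutoff, then recompute the token-weighted grade. A minimal sketch, with entirely hypothetical balances and grades:

```python
# Sketch of the balance-filter scenarios: recompute a project's
# token-weighted grade using only wallets below a balance cutoff.
# Balances and grades are hypothetical.

votes = [(1_000_000, 4), (400_000, 5), (50_000, 8), (10_000, 9), (5_000, 10)]

def weighted_grade(votes, max_balance=None):
    """Token-weighted mean grade, optionally restricted to wallets under a cutoff."""
    kept = [(b, g) for b, g in votes if max_balance is None or b < max_balance]
    total = sum(b for b, _ in kept)
    return sum(b * g for b, g in kept) / total

print(round(weighted_grade(votes), 2))          # all wallets: 4.46
print(round(weighted_grade(votes, 50_000), 2))  # only wallets < 50,000 AGIX: 9.33
```

In this toy data, removing the large wallets lifts the project from well below the 6.5 threshold to well above it, the same qualitative shift seen in the Pool A scenarios.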
Conclusion

As might have been expected statistically, the smaller wallets had little impact: the results of the wallets larger than 50K alone are identical to the current outcomes. What is interesting is that the smaller wallets seem to be more forgiving or optimistic, since the number of eligible projects is much larger. The difference in outcomes is especially clear in Pool A. One possible explanation is that, with higher stakes, the 'business model' of buying tokens just for the sake of voting might also be more attractive(!).

Overall, though, the token balance per wallet has a much greater impact on the results than the grading scale.

Overall voting behavior 

We also analyzed the number of projects that wallets voted for. Of the 158 wallets, 21 voted for all 28 projects. On the other hand, 37 wallets only voted for one project. 

This is nicely visualized in the video below.

 

The visual was generated by Robert Haas with his open-source graph visualization library [gravis].
The dots at the top represent wallets ordered from small (left) to large (right).
The dots at the bottom represent the projects. The green ones to the right have been awarded.

  • The thick red lines represent ‘1’ votes
  • The orange lines represent ‘2-6’ votes
  • The light green lines represent ‘7-9’ votes
  • The thick dark green lines represent ’10’ votes.

Try this interactive version, which lets you isolate individual wallets or projects at will!

We can make an educated guess that some wallets belong to participants who voted mainly in favor of their own project, but this is of course not provable, and (for this reason) it was also not forbidden. Interestingly, there are some wallets that voted a 10 on every project, for which we have no good explanation. Besides this, there are patterns of wallets that voted one '10' and mostly 1's on all other projects. This is not the kind of behavior one would like to see if the wallet owner has a personal interest in the project they voted for. In part 2 of this article, we will discuss some options to counter this kind of seemingly ego-centered behavior.
It is especially interesting to look at this graph with the distribution of tokens in figure 1 in mind. It is somewhat comforting that the votes of the largest wallets were not the most extreme, and that the end results are not a complete copy of any of the largest wallets. This is, however, not something we can rely on in the future, so it also points in the direction of a system that somehow reduces (but does not remove) the weight of token balances in the end result.

Other factors that may have influenced the current voting behavior:

We were aware that there might be some confusion between 'voting' on our proposal portal and the official vote cast in the dedicated voting portal. We were, however, limited in the functional options at our disposal. Rather than delaying the first Deep Funding round further until we had a perfect portal, we decided to take the risk of some confusion.

During the voting week, we did see an uplift of 'votes' on our regular proposal portal. This is a bit disturbing, because we may conclude that at least a large part of these voters were under the impression that they were exercising their actual voting rights, even though there was no request to connect their wallet in order to validate their AGIX balance. You can see the effect in the graphic below.

Fig 2 Voting behavior on the proposer portal 

Of course, it is very hard to reliably assess what the impact on the results would have been if these people had voted correctly. Not only do we not know their AGIX balances; it is also possible they did vote on the actual voting portal after all. Nevertheless, we hope to take some measures to avoid confusion in the upcoming rounds. Ultimately, we expect that all activities (submitting, reviewing, rating, and voting) will happen in the same environment, with adequate usability.

So much for the number crunching. In the next article (Voting analysis Part 2), we'll write about our ideas for improving the current situation. Stay tuned!
