Deepfunding Focus Group Listening Sessions

Author: jan Published: November 21, 2023

As you might already know, the DeepFunding Focus Group has begun running fortnightly “listening sessions” on Thursdays, inviting the community to give their views on different aspects of how DeepFunding should be run. We plan to share the notes from these discussions here – so here’s a quick recap of the most recent one.

Authors: On behalf of the Deep Funding Focus Group: Ubio Obu and Vanessa Cardui

Listening Session 2 (9th November 2023)

The topic was How to recognise and reward people’s input. Our two breakout rooms were:

  1. The difference between “contribution” and “reputation” approaches
  2. How do we combat the potential pitfalls of reputation systems?

Key points from breakout room 1

  • Many types of “reputation” and “contribution” – rewarding can (should?) be hybrid and multi-factor. Peer-to-peer assessments, plus observable metrics, plus – what else? Maybe we don’t have to fully separate “reputation” and “contribution” approaches?
  • It’s a process – communities often start with contribution, move to reputation. Where do we move to after that?
  • What governance action(s) should we attach it to? Should it mainly be about assigning voting power?
  • Design for a purpose. What are we trying to achieve with our reward systems; what risks are we trying to prevent? 
  • The aim is to predict how people will behave (i.e. if someone has a high reputation, we’re saying as a community we think they’ll behave in positive ways in future) – but how can we test whether the prediction was correct?
  • Or, it’s about ways of measuring trust. By “trust”, do we mean the “human” type – strength of character, humanistic values, no hypocrisy and self-dealing? Or do we mean predictability of participation and outcomes?
  • Is this a core difference between the two approaches? “Reputation” often involves judging personal character, but “contribution” needn’t. A “bad person” (whatever that is) can still be a contributor; also, people differ on their idea of a “bad person” – contribution systems give less weight to these normative external judgements?
  • “Contribution” doesn’t have to be only about tracking “participation”. 
  • In DF we currently use a contribution threshold rather than voting weight; once the threshold is met, it’s one person, one vote. Technically, the gimbal (GMBL) tokens used for that threshold could be used for voting weight too: just as we use a token policy ID for the threshold, we could use the policy ID of the gimbal tokens to weight votes.
  • What do we think counts as a valid contribution – what are the criteria? (And who decides? Who is “we”?) If we can establish criteria, then we can automate it.
  • Possible criterion: “Added value to proposers”? Constructive criticism, rather than pointing out a problem or issue?
  • If we have criteria for what is “good/wanted behaviour”, won’t we strip out the diversity of thought in our communities, because everyone is chasing “reputation” and acting in conformity with the criteria?
  • With “reputation” – members of marginalised groups will often score lower due to prejudice, or because their lived experience gives them a different perspective which the mainstream isn’t fully able to understand or assess.
  • Should comments from high-reputation users come to the top, for higher visibility? But this risks prioritising people for popularity, not for what they say; plus it’s circular (the higher your reputation, the more you are heard, so your reputation rises further). How to instead evaluate a comment from scratch on its merits? Anonymity for commenters, maybe?
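The contrast discussed above between a contribution threshold with one person, one vote and token-weighted voting can be sketched in a few lines. This is a minimal illustration only; the threshold value, member names, and balances below are hypothetical, not actual Deep Funding parameters.

```python
# Hypothetical gimbal-token balances held under the token policy ID.
balances = {"alice": 150, "bob": 900, "carol": 40}

THRESHOLD = 100  # hypothetical minimum token balance required to vote

def threshold_votes(balances, threshold):
    """Current Deep Funding style: anyone over the threshold gets exactly 1 vote."""
    return {who: 1 for who, bal in balances.items() if bal >= threshold}

def weighted_votes(balances, threshold):
    """Alternative: the same token balances double as voting weight."""
    return {who: bal for who, bal in balances.items() if bal >= threshold}

print(threshold_votes(balances, THRESHOLD))  # carol is below the threshold
print(weighted_votes(balances, THRESHOLD))   # bob's large balance dominates
```

Under the threshold scheme, alice and bob each get one vote; under the weighted scheme, bob's vote counts six times alice's, which is exactly the design choice the room was debating.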

Key points from breakout room 2

In room 2 the discussion fell into three sub-topics:
(i) The ills of “social credit”-based reputation systems
(ii) Possible means of combatting these
(iii) Alternative reputation systems that are not based on social credit.

(i) The ills of social credit-based reputation systems: 

  • A reputation system based on social credit can be limiting and exclusionary: there can be legitimate reasons why certain people do not engage in the way the community demands. Pushing these people out of the community, or reducing their ability to engage and build the social credit needed for reputation and recognition, is problematic.
  • Longevity in a community creates implicit reputation that does not necessarily translate into effective contribution: a social-credit-based reputation system risks rewarding longevity over effectiveness.
  • Reputation in business is different from social reputation: in business, people are more cautious; in social settings, less so. This lack of caution can play out negatively, as people use reputation to hurt other players in the ecosystem. There is an unspoken “reputation shark” in the room, using their reputation to hurt others in the community.

(ii) Possible means of combatting the ills of a social-credit-based reputation system

  • Establish principles of accountability: social credit systems should not be arbitrary. There should be rules of engagement that accommodate the different voices and kinds of people in a community, so that no one is given an advantage over others.
  • Give reputation less explicit power in the ecosystem: this was the room’s most widely agreed solution. People will keep trying to game a system as long as there is an incentive to do so; if the incentive is lower, the motivation is reduced too. In the context of Deepfunding, it was suggested that the 3X, 4X, or 5X factor currently given to some high-reputation individuals only incentivises people to game the system.
  • Limit the number of social-credit-based actions an individual can take: capping the number of likes, comments, etc. that a person can make within a given period can also reduce the pressure to game the system for more social credit.
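The two mitigations above (shrinking the reputation multiplier and rate-limiting social-credit actions) can be sketched as simple caps. The specific numbers here are hypothetical illustrations, not proposals from the session; the 3X to 5X range is the only figure taken from the discussion.

```python
MAX_MULTIPLIER = 1.5      # hypothetical cap, well below the 3X-5X discussed
MAX_ACTIONS_PER_DAY = 20  # hypothetical daily limit on likes/comments

def effective_weight(reputation_multiplier):
    """Clamp a member's reputation multiplier so high reputation buys less power."""
    return min(reputation_multiplier, MAX_MULTIPLIER)

def allowed_actions(requested_actions):
    """Cap how many social-credit actions (likes, comments) count per day."""
    return min(requested_actions, MAX_ACTIONS_PER_DAY)

print(effective_weight(4.0))  # a 4X member is clamped to 1.5
print(allowed_actions(57))    # only 20 of 57 actions count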

(iii) Alternative reputation systems that are not based on social credit: healthier ways of gaining reputation that do not put so much pressure on people to game the system. The following were suggested:

  1. Task-based reputation system
  2. Value-based reputation system
  3. Project-based reputation system
  4. Knowledge-based reputation system
  5. Collaboration-based reputation system
  6. Zero-knowledge-proof of reputation

These alternatives were deemed to be healthier and better than social-credit-based reputation systems.

Plenary session: Further discussion is needed!

The key point in the plenary session was: the discussion isn’t finished yet!

The next Listening Session will be on Thurs 23rd Nov, 19:00 UTC


The topic will be: the decisions of the Review Team on rewards in DFR3 (Deepfunding Round 3), and what should change in DFR4. If we reward commenting, what should the criteria be? Or should we stop rewarding comments altogether?

Meeting registration link:
