A new approach to Content Reward Allocation

by @dantheman
Today I decided to step back and really dig into the nature of Steem to get a better understanding of the exact problem we are trying to solve.  You would think that after living and breathing Steem for almost 6 months I would understand the problem perfectly, but each day I learn more, and today is no exception.

A problem well stated is a problem half solved.  So I decided to ask myself what I would do if it was just me and a couple of good friends trying to divide up a pot amongst ourselves based upon some kind of subjective value judgement. The result was quite illuminating.

## Stating the Problem

Suppose a group of 10 friends gets together for a competition to see who is the best writer. Each individual puts $100 into a pot and then submits a sample of writing.  The friends agree that no outside judges are allowed and that each individual should receive a share of the pot proportional to the quality of their writing as judged by their peers.

The friends need to come up with a system that ensures everyone judges fairly. The temptation is to judge your own work to be of the highest quality and everyone else's to be worthless.  If everyone did this then everyone would get their money back.  If only one person did this, and everyone else was fair, then the cheater would profit.  If this competition were to repeat and defection became profitable, then the amount of defection would increase until everyone defected.

The challenge is to make it more profitable to be honest than to lie.  

## A Proposed Solution

If you only have 10 friends, then each individual's judgment will have a massive impact on the average. If you have 100 friends, then the most any individual could bias the results in their favor would be 1%.  The more people involved, the harder it becomes for an individual defector to profit from lying about the quality of his own submission.

If each individual casts their votes in secret, then the resulting average will be unknown in advance. An honest appraisal will tend to be much closer to the average appraisal if everyone is attempting to predict the average. If all of the voters share a reward pool weighted by the square of their accuracy, then the most honest voters will be rewarded and the most dishonest voters will receive almost nothing.
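As a rough sketch of that weighting in Python (the specific accuracy measure here, one minus the distance from the final average appraisal, is my own assumption rather than anything specified above):

```python
def split_voter_pool(pool, appraisals):
    """Split a reward pool among voters, weighted by the square of their accuracy.

    appraisals: dict of voter -> appraisal in the range [0, 1].
    Accuracy is taken as closeness to the average of all revealed appraisals
    (this particular accuracy measure is an assumption, not from the post).
    """
    average = sum(appraisals.values()) / len(appraisals)
    weights = {v: (1.0 - abs(a - average)) ** 2 for v, a in appraisals.items()}
    total = sum(weights.values())
    return {v: pool * w / total for v, w in weights.items()}

# Voters closest to the consensus appraisal earn the largest shares.
print(split_voter_pool(100.0, {"alice": 0.80, "bob": 0.75, "carol": 0.10}))
```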

So long as the profit from being honest is greater than what the voter could earn from biasing their appraisal of their own work, we can trust all voters to be honest.  If 100% of the rewards went to the voters, then over many independent trials smart, honest voters would earn a profit at the expense of dumb or dishonest voters.   Because Steem needs to reward the author, there is a natural bias toward the author that can never be fully eliminated, but for all practical purposes it will be within the natural margin of error for subjective human evaluation.

## Getting a Quorum 

Suppose only some of your friends show up to vote and you don't want to wait for everyone to show up. How would you divide the funds then?  A simple solution would say that if half showed up, then half of the rewards could be spent by consensus. This approach provides no incentive to reach a quorum.  Instead we can use the percent squared metric.  If half of your friends show up, then you can spend 25% of the budget.  If only 1% show up then you then only 0.01% of the budget be spent.
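A quick worked version of the percent-squared rule (a minimal sketch; the function name is mine):

```python
def spendable_budget(budget, participating_stake, total_stake):
    """Percent-squared quorum rule: the spendable share of the budget is the
    square of the fraction of stake that showed up to vote."""
    participation = participating_stake / total_stake
    return budget * participation ** 2

print(spendable_budget(1000.0, 50, 100))  # half show up -> 250.0, i.e. 25% of the budget
print(spendable_budget(1000.0, 1, 100))   # 1% show up   -> 0.1,   i.e. 0.01% of the budget
```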

## Pulling it all Together
The user interface would work as follows:  each post would have a 5-star rating bar and a spam button.  Posts would be sorted by the size of the quorum that has voted.  When a user votes they will submit both the blinded amount and the unblinded amount to Steemit.  Steemit would automatically reveal the unblinded amount when the time comes. This requires some trust in Steemit. Users who do not trust Steemit can run their own full nodes to automatically submit the unblinded result at the appropriate time.
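The post does not spell out how votes are blinded; one common way to implement this kind of commit-and-reveal is a salted hash commitment, sketched below (the use of SHA-256 and a random salt is my assumption, not Steemit's actual mechanism):

```python
import hashlib
import os

def commit(vote, salt=None):
    """Blind a vote by committing to hash(salt || vote); publish only the digest."""
    salt = salt or os.urandom(16)
    digest = hashlib.sha256(salt + str(vote).encode()).hexdigest()
    return digest, salt  # publish the digest now, keep the salt and vote private

def reveal_is_valid(digest, vote, salt):
    """At reveal time, anyone can check the published vote against the commitment."""
    return hashlib.sha256(salt + str(vote).encode()).hexdigest() == digest

digest, salt = commit(4)
assert reveal_is_valid(digest, 4, salt)      # honest reveal passes
assert not reveal_is_valid(digest, 5, salt)  # a changed vote is rejected
```

With a commitment scheme like this in place, the overall flow would be: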

 1. Someone makes a post.
 2. N people post blinded votes on the relative value of the post.
 3. M people reveal their blinded votes.
 4. The Steem Power weighted average vote is calculated (a value between -5 and 5 on a log10 scale).
 5. The author payout is calculated as (VotingSteemPower / TotalSteemPower)^2 * budget * weighted_average_vote^10 / 5^10.
 6. Each voter's margin of error is calculated as ABS(weighted_average_vote - user_vote)^2.
 7. Voters split an amount equal to the maximum author payout, weighted by Steem Power * (1 - error); a sketch of steps 4-7 follows this list.
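Below is a minimal Python sketch of steps 4 through 7, transcribing the formulas above. The function name, the example numbers, and the clamping of negative weights to zero (for voters whose error exceeds 1, a case the steps above do not address) are my own assumptions.

```python
def payouts(budget, total_steem_power, votes):
    """votes: dict of voter -> (steem_power, vote), with vote between -5 and 5.

    Returns (author_payout, {voter: reward}) following steps 4-7 above.
    """
    voting_sp = sum(sp for sp, _ in votes.values())

    # Step 4: Steem Power weighted average vote.
    weighted_average = sum(sp * v for sp, v in votes.values()) / voting_sp

    # Step 5: author payout, scaled by the quorum squared and the average vote.
    author_payout = ((voting_sp / total_steem_power) ** 2 * budget
                     * weighted_average ** 10 / 5 ** 10)

    # Step 6: each voter's margin of error.
    errors = {voter: abs(weighted_average - v) ** 2 for voter, (_, v) in votes.items()}

    # Step 7: voters split an amount equal to the maximum author payout
    # (i.e. what the author would earn if the weighted average vote were 5),
    # weighted by Steem Power * (1 - error).  Weights are clamped at zero for
    # voters whose error exceeds 1 -- an assumption, since that case is unspecified.
    voter_pool = (voting_sp / total_steem_power) ** 2 * budget
    weights = {voter: sp * max(0.0, 1 - errors[voter]) for voter, (sp, _) in votes.items()}
    total_weight = sum(weights.values())
    voter_rewards = {voter: voter_pool * w / total_weight for voter, w in weights.items()}
    return author_payout, voter_rewards

author, voters = payouts(1000.0, 200.0,
                         {"alice": (50.0, 4.0), "bob": (30.0, 3.5), "carol": (20.0, 4.5)})
print(author, voters)
```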

The financial incentive is for voters to vote on content other voters have voted on, and to do so honestly. To the extent they are dishonest (or are poor predictors of public opinion) they will earn less than those who are good predictors. Honest voters receive income proportional to how much they are diluted; dishonest or inactive voters do not.  The result is a gradual transfer of value from dishonest voters to honest voters and authors.

There is less financial incentive to vote on things no one else has voted on, unless one desires to reward the author by getting more people to see and vote on the same content.  Authors want as many people to see and vote on their content as possible.

## Punishing Users who fail to Reveal
Users have nothing to gain by failing to reveal. They will get 0 rewards instead of a small reward. 

The primary downside to this approach is the need to "commit and reveal," which requires most users to trust a service to automatically reveal on their behalf.  This downside may be insignificant considering how many people already trust the same service to collect their votes, secure their private keys, and submit those votes to the blockchain in the first place.

## What do you think?

Would this method of allocating rewards be perceived as "simpler" and/or "more fair" than the current system?  Is the proposed user interface simple enough that most people can use it without thinking?  What could go wrong?