
RE: Why I Advise Against Linear Reward

in #steem • 6 years ago

It may, but my point is that I would prefer to keep the reward algorithm as simple as possible, and allow the community to regulate it from there. So if the community decides that excessive self-upvoting is bad, they can downvote people who do that, which then removes the incentive that is there now.


Flagged rewards are simply returned to the reward pool and, as stated in my post, those flagged rewards are up for grabs by selfish and selfless upvoters alike, leaving selfish upvoters at an advantage.


Not true. That second round of selfish voters is also more likely to be downvoted. You have to look at this as a large economy, not individual steps. The first downvote of a selfish voter puts the rewards back in the pool, but that doesn't mean selfish and selfless voters are then on equal footing. The downvotes can continue to follow selfish voters indefinitely.

The result is likely to be an equilibrium with far less selfish voting, not only because selfish voters get downvoted, returning those rewards to the pool, but because crowdsourced downvotes change the incentives on selfish voting in the first place. It will not be perfect (nothing is), but it will likely be far better, with more of the pool going to value-adding activities.

That second round of selfish voters is also more likely to be downvoted.

True, but under linear reward, the rewards returned would represent a growing % of the reward pool, with an ever-growing incentive to defect. If downvotes are crowdsourced, my statement is irrelevant.

That entire statement makes no sense. The returned rewards represent either the same % of the reward pool (if all returned) or a smaller % (if only a portion are returned). There is no way for it to increase. I have no idea what sort of convoluted reasoning has led you to conclude this.

I've cut corners while explaining my point.

Let's say someone's flag or vote was initially worth an infinitesimal part of the reward pool. In the most extreme case imaginable, where every single other vote is counteracted, that infinitesimal vote would now control 100% of the pool.

The incentive to defect becomes greater as more flags are given, because flaggers become "fewer" or spent while the potential rewards grow.

It's a very convoluted way to make my point, and I'm not sure how sound my explanation is, but hopefully it's clear enough.

It's good that you are calling me out on the half-baked explanation I gave. I knew when I wrote my answer that it was unclear at best and possibly meaningless at worst.
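The extreme case described above can be sketched numerically. This is a hypothetical toy with made-up numbers, not actual Steem payout code: as flags counteract more and more of the other votes, the one remaining tiny vote's share of the pool approaches 100%.

```python
# Hypothetical illustration with made-up numbers, not actual Steem code.
tiny_vote = 1            # rshares of the one vote that is never counteracted
other_votes = 1_000_000  # combined rshares of every other vote

for counteracted in (0.0, 0.9, 0.999999, 1.0):
    remaining = other_votes * (1 - counteracted)
    share = tiny_vote / (tiny_vote + remaining)
    print(f"{counteracted:>8.6f} of other votes counteracted -> "
          f"tiny vote controls {share:.2%} of the pool")
```

At 0% counteracted the tiny vote controls almost nothing; at 100% counteracted it controls the entire pool, which is the scenario sketched in words above.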

that infinitesimal vote would now control 100% of the pool

Aside from the fact that this situation is contrived and would never happen, you haven't demonstrated anything here. It is just as easy to downvote that vote as all the others, and there is no reason to believe that after all the other votes are counteracted this one wouldn't be too.

You could make a similar argument for non-linear. As votes become more concentrated, there is no incentive to vote for anything but the single highest-paid post (even if it is complete and utter garbage/abuse/etc.). Everything else will pay nothing, making the vote worthless! It becomes a tyranny of vote-for-the-biggest-or-your-vote-is-worthless.

These extreme cases are not helpful.
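The concentration effect under a superlinear curve can be sketched with made-up numbers. This is an illustrative toy (assuming a squared-rshares curve and invented post names), not Steem's actual payout code:

```python
# Hypothetical sketch: how a reward pool splits under a linear vs a
# superlinear (squared) curve. Post names and rshare values are made up.
rshares = {"big_post": 1000, "mid_post": 100, "small_post": 10}

def payout_shares(rshares, curve):
    """Each post's fraction of the pool after applying the reward curve."""
    weights = {post: curve(r) for post, r in rshares.items()}
    total = sum(weights.values())
    return {post: w / total for post, w in weights.items()}

linear = payout_shares(rshares, lambda r: r)
superlinear = payout_shares(rshares, lambda r: r * r)

print(f"linear share of big post:      {linear['big_post']:.3f}")      # ~0.901
print(f"superlinear share of big post: {superlinear['big_post']:.3f}") # ~0.990
```

Under the linear curve the biggest post takes about 90% of this toy pool; squaring the weights pushes it to about 99%, which is the winner-take-most dynamic described above.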

Under superlinear, the rewards go disproportionately toward the consensus; in the same way, flags are disproportionately powerful when cast against the consensus.

Under linear rewards, everyone has, proportionally to their stake, the same incentive to defect (self-vote) as everyone else (except for curation rewards), while under superlinear, the biggest shareholders have a disproportionate incentive to defect; but if too many do, they collapse their own network and Steem net worth. Thus there is an incentive for them to police each other. It is mathematically possible for them to effectively police each other, unlike under linear reward.

Under linear, everyone has the same incentive to defect and thus no incentive to flag. Expecting more flags to be given when there is no incentive to flag goes against logic.
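The "same incentive proportional to stake" claim can be sketched with toy numbers. This is a hypothetical illustration, not actual Steem code: under a linear curve a self-vote's payout is proportional to stake, so the return per unit of stake is identical for every holder, large or small.

```python
# Hypothetical toy numbers, not actual Steem payout code.
POOL = 1_000_000          # reward pool, arbitrary units
TOTAL_STAKE = 100_000_000  # total stake across all holders

for stake in (1_000, 100_000, 10_000_000):
    payout = POOL * stake / TOTAL_STAKE  # linear curve: payout grows with stake
    print(f"stake {stake:>10,}: self-vote payout {payout:>10,.1f}, "
          f"return per unit of stake {payout / stake:.4f}")
```

Every holder gets the same per-stake return (POOL / TOTAL_STAKE), which is why, in this argument, no one under linear has a relative incentive to flag rather than self-vote.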

They're up for grabs for the next least selfish self-voters. The most likely contender for countering self-voting is an independent flagging pool along with burning of flagged rewards, which I guess doesn't have much traction due to the difficulty of technical implementation.

They're up for grabs for the next least selfish self-voters.

Indeed. It's a loophole with serious issues as I've stated.

The most likely contender for countering self-voting is an independent flagging pool along with burning of flagged rewards

That leaves the rest up for grabs for the next least selfish self-voters, leaving the same loophole open, as I understand it.

"next least selfish" = Not selfish.

Not sure how you understand it otherwise.

It doesn't put potential abusers at a disadvantage.

Yes, it does.

Of course it does. They are more likely to be downvoted. Even if it isn't 100% guaranteed, it pushes their incentives toward better (less likely to be downvoted) behavior.

True but under linear reward, the rewards returned would represent a growing % of the reward pool with ever-growing incentive to defect.

Under crowdsourced flags, this isn't so.