You are viewing a single comment's thread from:

RE: Bot War: Keeping Peace With A Reverse Fractional Formula

in #steem • 8 years ago

Do you think an army of human readers could outperform Google's PageRank algorithm? In aggregate, bots are basically just creating a decentralized version of PageRank, aren't they? Yes, it's all new, so there will be problems along the way, but I think they'll eventually prove indispensable to Steemit's growth. The goal should be to find incentives that make the human/bot relationship symbiotic.

(At least) Three incentives drive human curators and bot-curators:
i. Short term revenue in the form of curation rewards.
ii. Long term value in the form of post-quality to attract readers.
iii. Long term value in the form of author reward distribution to attract writers.

Of the two long term goals, I think humans are crucial for the second (post quality), but bots will outperform humans at the third (reward distribution).

I think I might agree with your suggestion to reverse the 75/25 split. I'll have to give that some thought. It might even turn out to benefit the authors, too. For example, 25% of 1,000 is better than 75% of 100. If increasing curation reward percentages increases post reward totals, the authors might ultimately wind up receiving higher average payouts.
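To make that comparison concrete, here's a minimal sketch using hypothetical reward numbers (not actual Steem payouts), assuming a higher curation share attracts enough voting to grow the total pool:

```python
# Hypothetical figures, not Steem data: compare the author's cut of a
# small pool at a 75/25 author/curator split vs. a larger pool at 25/75.
def author_payout(total_rewards, author_share):
    """Author's portion of a post's total reward pool."""
    return total_rewards * author_share

small_pool = author_payout(100, 0.75)   # 75% of 100  -> 75.0
large_pool = author_payout(1000, 0.25)  # 25% of 1,000 -> 250.0
assert large_pool > small_pool
```

The break-even point is just arithmetic: the lower author share wins whenever it grows the pool by more than the share is cut.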


Thinking about this some more: the more I consider rebalancing the author/curator split, the more I like it. First, maybe reversing it to 25/75 isn't the exact right split either, but experimenting with the distribution seems likely to be useful. It's a near certainty that they didn't land on the optimal split on the first try. Second, I wonder if it could be set up so that authors set their own split on each post (along with a suggested default).

For example, if I'm a new author, maybe I want to give 95% to curators to get my name on the board. After all, 5% of something is better than 100% of 0. But if I'm already established, I might want to keep more like 75% for myself.
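A per-post split might look something like this sketch. It's purely hypothetical (no such feature exists in Steem); the function names and the protocol-bounds check are my assumptions:

```python
# Hypothetical per-post split: the author chooses curator_share when
# posting, constrained to a valid fraction of the reward pool.
def split_rewards(total_rewards, curator_share):
    """Return (author_cut, curator_cut) for a post's reward pool."""
    if not 0.0 <= curator_share <= 1.0:
        raise ValueError("curator_share must be between 0 and 1")
    curator_cut = total_rewards * curator_share
    return total_rewards - curator_cut, curator_cut

# New author buying visibility: 95% to curators -> (10.0, 190.0)
new_author = split_rewards(200, 0.95)
# Established author keeping most: 25% to curators -> (150.0, 50.0)
established = split_rewards(200, 0.25)
```

The suggested default could simply be the network-wide split, with the author free to move it in either direction.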