You can say the same thing about all the 'free steem' that is being given out to whales and the people they vote for under the n^2 system.
I feel like we have come to an important understanding.
That is correct: under n^2, whales can simply upvote themselves exclusively, and over time they will reduce the use of Steem to zero and the value of Steem to $0. Likewise, under linear rewards people can upvote themselves and reduce the reward to simple interest on SP, making the case for SP to be reduced to nothing.
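To make that comparison concrete, here is a minimal sketch (not the actual Steem payout code; the stakes, pool size, and exponent are made up) of how a pool split in proportion to rshares behaves under a linear versus a squared curve when everyone simply self-votes:

```python
# A minimal sketch: assume the daily reward pool is split across posts in
# proportion to net rshares raised to some exponent (1 = linear, 2 = n^2).
def payouts(rshares_by_post, pool, exponent=1):
    weights = {post: max(r, 0) ** exponent for post, r in rshares_by_post.items()}
    total = sum(weights.values()) or 1
    return {post: pool * w / total for post, w in weights.items()}

# Everyone self-votes with their full stake (rshares proportional to SP).
stakes = {"whale": 1_000_000, "dolphin": 50_000, "minnow": 1_000}
pool = 10_000  # hypothetical daily pool

linear = payouts(stakes, pool, exponent=1)
squared = payouts(stakes, pool, exponent=2)

for name in stakes:
    print(name, round(linear[name], 2), round(squared[name], 2))
# Linear: each self-voter earns back the pool in proportion to stake, which is
# in effect simple interest on SP. Squared: the whale captures nearly everything.
```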
Should Steem give the biggest advantage to those who don't engage in proof-of-brain?
Those who create bots and vote on 10 of their own comments/posts a day are the ones who don't engage in proof-of-brain: they make practically zero effort and are assured revenue over time unless someone counteracts their votes ad infinitum, all at the cost of the person doing the policing.
The trick is to identify the proper balance between incentives and the risk of abuse. @dantheman
If 50% of people end up voting for themselves while the other 50% end up downvoting them, the whole pool is up for grabs with just one vote from any account, even the smallest one.
How does that make a sounder system than super-linear reward?
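Here is a minimal sketch of that 50/50 scenario (a simplified model assuming the pool is split in proportion to each post's net positive rshares; the numbers are made up):

```python
# Half the accounts self-vote, the other half spend their votes cancelling
# them out, and one tiny uncontested vote remains.
def payouts(net_rshares, pool):
    weights = {post: max(r, 0) for post, r in net_rshares.items()}
    total = sum(weights.values()) or 1
    return {post: pool * w / total for post, w in weights.items()}

net_rshares = {
    "self_voter_1": 0,      # 1,000,000 up from self, 1,000,000 down from police
    "self_voter_2": 0,      # likewise fully counteracted
    "small_account": 100,   # the only net-positive vote left
}
print(payouts(net_rshares, pool=10_000))
# {'self_voter_1': 0.0, 'self_voter_2': 0.0, 'small_account': 10000.0}
# The entire pool goes to the one account whose vote was not cancelled,
# however small it is, and the policing itself earns nothing.
```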
Under super-linear rewards, those who have the most to lose can choose to try to abuse the rewards, but over time that would reduce the demand for Steem and thus their own Steem wealth, making it irrational for them to do so.
Sure, super-linear can't protect against irrational behavior, but neither can linear or any other system.
Under linear rewards, the selfless investors will see their portion of the total Steem shrink over time relative to the selfish investors. The selfless investors want to incentivize selflessness, and yet they will inevitably see their share of Steem decline compared to the selfish investors. The selfless investors will realize this over time and will leave.
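A minimal sketch of that drift (hypothetical rates, not actual Steem inflation): "selfish" stake votes only for itself, "selfless" stake votes for outside authors, and all rewards are paid out as new SP.

```python
selfish, selfless, authors = 1_000_000.0, 1_000_000.0, 0.0
pool_rate = 0.01  # assumed fraction of supply paid out per period

for period in range(365):
    supply = selfish + selfless + authors
    pool = supply * pool_rate
    voting = selfish + selfless
    selfish += pool * (selfish / voting)   # self-votes come back as new SP
    authors += pool * (selfless / voting)  # selfless votes reward authors instead

supply = selfish + selfless + authors
print(f"selfish share: {selfish / supply:.2%}, selfless share: {selfless / supply:.2%}")
# The selfish group's share of total SP never falls, while the selfless
# group's share is steadily diluted, even though both started out equal
# and neither side ever sold a single token.
```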
If 100% of all rewards were distributed via a flawed system, then it could devalue the entire platform; however, if just 1% of rewards are distributed by the same algorithm then any misallocations can be tolerated.
The trick is to identify the proper balance between incentives and the risk of abuse. @dantheman
Are The Selfless Delaying The Inevitable? (link)
Where we seem to get stuck in a loop is that you seem to think that if we switch to n^2 or some other form of super-linear curve, the problem of “non-proof-of-brain” rewards is just automatically fixed. You seem to be characterizing it as if the stakeholders with tons of stake will not act selfishly and will actually vote in ways that benefit the platform.
I keep going back to the same two things: