An Idea for Blockchain Peer Review

in #blockchain · 2 years ago

[Image: censorship vigilante]

As web3 evolves and becomes more prevalent, we are faced with a growing problem of content moderation. We've seen plenty of it in the downvote wars on hive and steem, and that's a microcosm of the wider social media landscape.

What content deserves to be rewarded?

What content should be demonetized entirely?

What content should be hidden or removed?

These are not easy questions.

The web2 solution is simply to have a centralized structure where the operator can pick and choose what to allow. And we have seen how off the rails that can go rather quickly.

So, I've had this idea kicking around in my head for a few years and decided to finally make a post about it.


One of the giant issues with web3 governance is that pure stake-weighted voting is a slow spiral to zero. Early adopters gain outsized influence over a platform, which discourages later adopters from participating: they see the existing landscape and decide there is no way to ever overcome that first-mover advantage. This seigniorage issue is actually endemic to all of crypto. The only ones who can truly overcome it are those with enough wealth outside the system to make acquiring a usable stake affordable, and that does not apply to the vast majority of users.

This is why so many legislatures in the Western tradition developed bicameral systems. One chamber represents the interests of the few and the other represents the interests of the many.

But that is impossible to do in a web3 system because there is no tie between account and identity. We have seen the abject failures of supposed web3 systems that try to verify identity through KYC. There's just no market for it.

So how do we create a system that can moderate content that isn't just a question of whoever has the most money gets to set the rules?


To my mind, the answer is peer review.

It would work like this:

  • UserA posts a piece of content on a web3 platform.
  • UserB thinks UserA's content is bad for some reason and clicks a button to submit UserA's content for peer review.
  • The platform randomly selects X other users (UserA and UserB are excluded) to review UserA's content. These other users are the peer reviewers.
  • The peer reviewers give a yes/no vote on whether UserA's content is acceptable.
  • If quorum is met and a supermajority of peer reviewers decide UserA's content is bad/unworthy/illegal/whatever, then UserA's content is hidden/demonetized/removed entirely/or similar.
  • If quorum is not met, or if quorum is met and enough peer reviewers decide the content is OK, it stays.
  • Optional: if quorum is met but the vote is in a middle range that is indeterminate, resubmit for peer review with a larger group.

How many peer reviewers, what percentage of votes satisfies quorum, and what supermajority vote is needed can all be determined by governance votes.
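
To make that concrete, here is a minimal sketch of the flow in Python. The parameter names and values (panel size, quorum, supermajority, the optional grey zone) are placeholders for whatever governance decides, not an actual Hive implementation:

```python
import random

# Hypothetical parameters; in a real system these would be set by governance votes.
PANEL_SIZE = 1000         # number of peer reviewers drawn per flagged post
QUORUM = 0.50             # fraction of the panel that must respond
SUPERMAJORITY = 0.70      # fraction of respondents needed to act against the content
GREY_ZONE = (0.55, 0.70)  # optional indeterminate band that triggers a larger panel

def select_panel(all_accounts, author, flagger, size=PANEL_SIZE):
    """Randomly sample reviewers, excluding the author and the flagger."""
    eligible = [a for a in all_accounts if a not in (author, flagger)]
    return random.sample(eligible, min(size, len(eligible)))

def resolve_review(votes, panel_size=PANEL_SIZE):
    """Decide 'remove', 'keep', or 'escalate' from yes/no votes.

    `votes` is a list of booleans from reviewers who responded;
    True means the reviewer found the content objectionable.
    """
    responded = len(votes)
    if responded / panel_size < QUORUM:
        return "keep"                       # quorum not met: content stays
    objectionable = sum(votes) / responded
    if objectionable >= SUPERMAJORITY:
        return "remove"                     # hide / demonetize / etc.
    if GREY_ZONE[0] <= objectionable < GREY_ZONE[1]:
        return "escalate"                   # resubmit to a larger panel
    return "keep"
```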

Basically, anybody who is negatively impacted by this system has created content that *we can statistically say is not in line with the community's acceptable use*. If 1000 peer reviewers are selected, 70% respond, and 70% of those vote that the content is objectionable, we can reasonably rely on that determination.
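
As a quick sanity check on those illustrative numbers, a sample that size gives a very tight estimate of the community's actual view (rough back-of-the-envelope math, using a normal approximation):

```python
import math

# Sanity check on the illustrative numbers above: 1000 reviewers selected,
# 70% respond, 70% of respondents vote "objectionable".
panel, response_rate, disapproval_rate = 1000, 0.70, 0.70
respondents = int(panel * response_rate)            # 700
disapprovals = int(respondents * disapproval_rate)  # 490

# 95% normal-approximation confidence interval for the true disapproval share.
p_hat = disapprovals / respondents
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / respondents)
print(f"{p_hat:.2f} ± {margin:.3f}")  # ~0.70 ± 0.034, well above a coin flip
```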

There are those who would say that it should be anything goes on a decentralized content system. But part of that anything goes is for the community to be able to say "we don't want this here."

Having a statistically sound mechanism like this is much better than what we see on hive now with individualized downvoting. One, individual downvotes only have a noticeable effect when coming from the relatively few users with large stake. Two, it is subject to individual biases. Three, it creates personal animosity between users.

It might be that this system is applied in an all-or-nothing way to hide and demonetize, or it might be that different peer review votes could be used to separate demonetization only, hiding content, and expunging it from the blockchain entirely.

As far as I can tell, the only avenue for attack on this method would be to create such a vast number of accounts that the random sampling would include enough controlled accounts to prevent the supermajority from being formed. I think if a system got to that point, it would be under centralized control anyway, so the entire thing becomes moot.
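
To get a rough feel for how many accounts that attack would take, here is a small simulation under simplified assumptions (every selected account responds, attacker accounts always vote to keep, everyone else votes to remove). With a 70% supermajority, the attacker needs to control roughly 30% of all accounts before blocking becomes reliable:

```python
import random

def block_probability(attacker_share, panel_size=1000, supermajority=0.70, trials=1_000):
    """Estimate how often a Sybil attacker holds enough randomly drawn panel
    seats to block a supermajority. Simplifying assumptions: every selected
    account responds, attacker accounts always vote 'keep', honest accounts
    always vote 'remove'. Purely illustrative."""
    seats_needed_to_block = panel_size * (1 - supermajority)  # 'keep' votes that break 70%
    blocked = 0
    for _ in range(trials):
        attacker_seats = sum(random.random() < attacker_share for _ in range(panel_size))
        if attacker_seats > seats_needed_to_block:
            blocked += 1
    return blocked / trials

for share in (0.10, 0.25, 0.30, 0.35):
    print(f"attacker controls {share:.0%} of accounts -> blocks {block_probability(share):.1%} of reviews")
```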


The thing that brought this to mind was a tweet by Mark Cuban:

We add an optimistic roll up to Doge. Everyone puts up 1 doge for unlimited posts. If anyone contests a post and humans confirm it's spam, they get the spammer's Doge. Spammer has to post 100x more Doge. If it's not spam, the contestor loses their Doge. DogeDAO FTW! 🚀🚀🚀

No pay-to-post system is going to work for social media. But I do like the idea of having some kind of penalty for users who get negative peer reviews. In the HIVE ecosystem, that might come in the form of increasing RC costs for transactions.
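
As one hypothetical shape for such a penalty, the multiplier could escalate with each upheld peer review against an account; this is just a sketch, not how RCs are actually calculated on Hive:

```python
def rc_cost_multiplier(upheld_strikes, base=1.0, factor=1.5):
    """Hypothetical penalty curve: each peer review upheld against an account
    raises its resource credit (RC) cost multiplier. Names and numbers are
    illustrative, not how RCs actually work on Hive today."""
    return base * (factor ** upheld_strikes)

# e.g. three upheld strikes -> transactions cost ~3.4x the normal RC rate
print(rc_cost_multiplier(upheld_strikes=3))  # 3.375
```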


A very interesting framework you laid out here. It makes sense to delve into such a thought process.

I think the idea of a reward/penalty system does have merit. The quorum style voting you mention also could increase the overall utility of the system.

Here are a few ideas:

  • In addition to yes/no, another layer could be added for the rewards. This is where a lot of the contesting often comes in. So the quorum would be responsible for coming up with a percentage of acceptable content.

  • Could there be a penalty added for those who use stake for personal vendettas? Perhaps if someone flags (downvotes) too many posts and the quorum goes against them, there is some type of penalty to that account.

Just a couple more thoughts.

Let me put two Facts in Balance:

1. Attacks are real
Without moderation, there will be professional attackers and they will make good profits. And yes, there is no White or Black Knight coming to rescue us from the malicious intents of people who want to game the HIVE systems.

2. Base Layer Censorship is real
A couple of people who coordinate on Discord can wipe out the reach, influence, and rewards of others. We've seen that, and it's a terrible thing to happen. Human thoughts and senses are just interfacing with the world, and we sometimes have a hard time finding common truth. The attempt to get along has to be able to fail, without failing its participants.

Therefore, it's very wise to tend toward a grey solution with percentages.

--

That being said, it takes energy to run such systems of moderation, and human-controlled energy always follows value. The flow of value is modified by the moderation, which in turn draws its value back from the flow it moderates; following that logic, I believe that any moderation system will eventually fall into corruption.

Moderation systems need to be redundant and should compete against each other to create balance and make it possible to let corrupted moderation systems fall into "disgrace".

Thinking about it, going deeper into Tribes and Communities is an obvious solution. Make it less about the nature of the intention behind the public post, and let the free market of ideas and innovation decide where true value can be found. That means HIVE as a single point of reward needs to be balanced against competing tribal tokens with their own rewards/censorship ethos. A well-managed tribal reward system should always have a higher yield than the base token reward system; we know that from the DeFi space. The idea of finding a rewards solution purely by thinking about it is, so to speak, the wrong idea.

Maybe I've lost the plot, just thinking out loud with my keyboard here.

I think negative feedback for over-flagging is a good idea.

If you had a cohesive enough community you could use this kind of system for positive rewards as well. But in a large online community I think there would be too much unrelated stuff a random user wouldn't care about.

The first time I encountered this system in the wild was in the online game "League of Legends".
I don't exactly remember the whole system. People could get flagged for a variety of bad behaviors (swearing, insulting, being generally disruptive, intentionally feeding the enemy, etc.).
These flags were presented to people who would volunteer to review flags. If a hurdle was met, that player was reprimanded in some way.
I think they also had a system to capture who is doing good work as a reviewer and flagger, and who is not.

Yes, it was eventually overwhelmed by the massive flow of flags, to the point that central moderation was implemented to make swift, algorithm-based judgments from digital evidence, leaving people with only a very slow appeal process to counter them.

The LoL flag system did one thing very well: it kept people under the illusion that they could do something about bad actors themselves, and therefore they didn't bother the actual humans working at the support level.

Yeah, it's not a completely new idea. It's basically a jury of your peers.

There's a balancing act between getting enough people to participate in reviews and making the platform too annoying to use. You don't want to have to wade through 37 reviews before actually using your account.

Sounds interesting, but remember that if the system gets bigger, it becomes impossible for humans to check and review everything. I don't know how many posts get uploaded every minute; imagine someone just flags every post. It's surely the right direction, but I don't know how to scale it.

Yeah, there needs to be some kind of negative feedback for a failed flag. Maybe you lose the ability to flag content for a day or something.
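
A per-account cooldown like that would be simple to track; here is a tiny sketch with made-up names and a one-day window:

```python
import time

# Hypothetical cooldown tracker: an account whose flag fails peer review
# cannot flag again for one day. Names and values are made up for illustration.
FLAG_COOLDOWN_SECONDS = 24 * 60 * 60
_last_failed_flag = {}  # account name -> timestamp of the last failed flag

def record_failed_flag(account):
    _last_failed_flag[account] = time.time()

def can_flag(account):
    last = _last_failed_flag.get(account)
    return last is None or time.time() - last >= FLAG_COOLDOWN_SECONDS
```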