
RE: Manual curation

in #manual · last year (edited)

Hello @acidyo, sorry to tag you; I only do it so that you notice I'm replying to the same topic again.

Downvotes are a really delicate topic.

On the one hand, you might be surprised that I actually agree that there are not enough downvotes.

On the other hand, it remains a fact that most users take downvotes personally, whether they are justified or not (and, of course, quite a few downvotes are indeed meant personally).
The downvoter then gets attacked verbally, while the downvoted 'victim' feels unjustly attacked. Not a nice situation for either of them.

My newest idea concerning this dilemma would be to interpose an AI or an algorithm that tries to decide, based on objective criteria, whether a downvote is justified or not: anyone who wants to downvote a post or comment would first inform the AI, together with a reason (plagiarism, spam, overvalued, containing threats, etc.). If the AI agreed with the downvote, the downvoter could argue that their downvote wasn't just a personal attack on a user. It sounds complicated, but it could actually encourage downvotes (and discourage, or even prevent(?), personal attacks).
The AI would not downvote anything itself; it would just serve as a kind of arbiter.
If someone downvoted against the suggestion of the AI, the community could then counter it with upvotes (the other option would be that downvotes against the AI's decision wouldn't be possible at all).
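
Just to make that a bit more concrete, here is a minimal sketch in Python of how such a flow might look. It is purely hypothetical: the reason categories, the `classify` placeholder and the `request_downvote` helper are all made up for illustration and are not part of any existing Hive API.

```python
# Hypothetical sketch of the "AI as downvote arbiter" flow.
# None of this is a real Hive/Steem API; all names are invented for illustration.

from dataclasses import dataclass
from enum import Enum, auto

class Reason(Enum):
    PLAGIARISM = auto()
    SPAM = auto()
    OVERVALUED = auto()
    THREATS = auto()

@dataclass
class Verdict:
    justified: bool
    explanation: str

def classify(post_body: str, reason: Reason) -> Verdict:
    """Placeholder for the actual AI/algorithm checking objective criteria.
    A real version might call a language model or a plagiarism checker."""
    if reason is Reason.SPAM and "http://" in post_body and len(post_body) < 80:
        return Verdict(True, "Short post consisting mostly of a bare link.")
    return Verdict(False, "No objective evidence found for the given reason.")

def request_downvote(post_body: str, reason: Reason) -> Verdict:
    """The downvoter reports the post plus a reason; the AI only arbitrates,
    it never casts the downvote itself."""
    verdict = classify(post_body, reason)
    if verdict.justified:
        print(f"Downvote flagged as justified: {verdict.explanation}")
    else:
        print(f"AI disagrees; downvoting anyway would count as going against "
              f"the arbiter: {verdict.explanation}")
    return verdict

# Example: someone wants to downvote a link-only comment as spam.
request_downvote("Check this out http://example.com", Reason.SPAM)
```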

An alternative to the AI would be a council of respected, well-known users (not only whales, and all given the same voting power within the council!) who would themselves be rewarded for doing the hard work of acting as arbiters.
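
The "same voting power within the council" part could be as simple as one member, one vote with a plain majority deciding, regardless of each member's stake. A toy sketch (the member names and the majority rule are just my assumptions):

```python
# Toy sketch of a council vote where every member counts the same,
# independent of their stake. Member names and the majority rule are assumed.

def council_verdict(votes: dict[str, bool]) -> bool:
    """Each council member gets exactly one equal vote; a simple majority
    of those who voted decides whether the downvote is justified."""
    in_favour = sum(votes.values())
    return in_favour * 2 > len(votes)

votes = {"alice": True, "bob": False, "carol": True}  # hypothetical members
print(council_verdict(votes))  # True: 2 of 3 consider the downvote justified
```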

Both of these ideas may still be half-formed, but I think they point in the right direction.