You are viewing a single comment's thread from:

RE: A peer-to-peer network for sharing and rating information

in #hivemind 3 years ago (edited)

Really enjoyed the read, it's a very interesting topic. I also like the anonymity aspect, because without it this system would be terrible and run into politics. For example, on Hive, if I know freedom has the biggest stake, then I have to make sure I give him a high score in every single category. That might or might not buy me his witness votes, but scoring him as terrible in terms of trust will surely not make him want to vote for me as a witness.

So assuming that there is enough privacy with the ratings, I think this can be revolutionary. However, this feature must be optional for communities. A community must be able to decide if it wants an algorithmic trust rating for its users or not. Forcing that on everyone will cause people to avoid using the product for fear of being scored negatively.

There are also some fundamental issues with such a system:

  • The accuracy of the algorithm cannot be measured using the algorithm, so there is a fundamental assumption (like axioms in math) that the web of trust delivers accurate ratings.

The accuracy of the ratings can only be deemed good or bad by people, or by the developer himself, and as you have amazingly demonstrated in your post, people are not good at that. It is very easy for two devs to develop different algorithms for a web of trust, and each web of trust will show different scores for the same people. People will then choose one algorithm over the other. How do we choose which web of trust is the most accurate? It all comes back to normal life and normal information sharing.
If there were a feedback loop, where we implement a web of trust, look at the trust scores, and judge whether they work, then we could improve it. However, no one can judge whether the scores are correct, so we will never know if a scoring system is accurate. It will all boil down to personal beliefs and a choice between different scoring systems, depending on who you trust in real life, much like the basic information sharing we have now, without any scoring system.
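
To make that concrete, here is a toy sketch of two plausible aggregation rules scoring the exact same set of ratings. Everything here, the names, the data, and the weighting rule, is made up; the point is only that two reasonable algorithms can disagree on the same input:

```python
ratings = {  # (rater, ratee) -> trust score in [0, 1]; hypothetical data
    ("alice", "carol"): 0.9, ("bob", "carol"): 0.2,
    ("alice", "dave"): 0.5, ("bob", "dave"): 0.6,
    ("carol", "alice"): 0.8, ("dave", "bob"): 0.7,
}

def simple_average(ratings, person):
    """Algorithm A: unweighted mean of every rating a person receives."""
    received = [s for (_, ratee), s in ratings.items() if ratee == person]
    return sum(received) / len(received) if received else 0.0

def rater_weighted(ratings, person):
    """Algorithm B: weight each rating by the rater's own average score."""
    total = weight = 0.0
    for (rater, ratee), s in ratings.items():
        if ratee == person:
            w = simple_average(ratings, rater)  # crude credibility weight
            total += w * s
            weight += w
    return total / weight if weight else 0.0

for person in ("carol", "dave"):
    print(person,
          round(simple_average(ratings, person), 2),  # A: both get 0.55
          round(rater_weighted(ratings, person), 2))  # B: carol > dave
```

Algorithm A ties carol and dave; algorithm B ranks carol above dave, purely because of how each dev chose to weight the raters. Neither answer is verifiably "the accurate one".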

  • People are tribalistic and stupid. They believe what they want to believe, no matter what trust score you might show them.

Even if the web of trust scores everyone perfectly, people will not take it into account. For instance, we saw recently with the US elections how some people thought they were rigged, while others think that is a ridiculous claim. While the truth might seem obvious to some, it is just as obviously the opposite to others. I am certain that no matter what score Trump has on a web of trust, his followers will believe what he says and believe the election was rigged, even if every poll expert says it wasn't. And this applies to both sides. This also poses a challenge for the algorithm calculating the scores: how do you score sources whose ratings are extremely polarized, where 50% of the people say it's flat wrong and 50% say it's absolutely true? (See the sketch below.)
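
A quick hypothetical illustration of that polarization problem: the two rating patterns below have the same average, so a naive mean would give both sources the same middling score even though they mean completely different things.

```python
from statistics import mean, stdev

consensus = [0.5] * 10             # everyone says "somewhat trustworthy"
polarized = [0.0] * 5 + [1.0] * 5  # half say "false", half say "true"

for name, scores in (("consensus", consensus), ("polarized", polarized)):
    print(name, "mean:", mean(scores), "spread:", round(stdev(scores), 2))
# Both means are 0.5, but the spread exposes the 50/50 split. An algorithm
# could at least flag such a source as "contested" instead of giving it a
# misleading middle score.
```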

On another note, this post made me think of Bridgewater, Ray Dalio's top-performing hedge fund. I would highly suggest listening to his TED Talks if you haven't already. He has implemented exactly what you are proposing inside his company. People rate others based on their viewpoints on the economy and the market, trust scores are then calculated (he calls them "believability"), and trading/investing decisions are made by an algorithm that takes everyone's believability into account. Bridgewater has the best long-run performance of all the hedge funds. One key difference is that they have real-world feedback: if the fund makes money, the algorithm is useful; if they lose money, then maybe it's time to tweak it.
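
Bridgewater's actual algorithm isn't public, so this is only a guess at the general shape of the idea: a believability-weighted vote, with made-up numbers.

```python
def believability_weighted_vote(opinions):
    """opinions: list of (believability, vote) pairs, where vote is
    +1 (e.g. buy) or -1 (e.g. sell). Returns the weighted decision."""
    score = sum(w * v for w, v in opinions)
    return "buy" if score > 0 else "sell"

# Three low-believability voices say buy; one highly believable voice
# says sell. A simple majority would buy, but the weighted vote sells.
opinions = [(0.9, -1), (0.3, +1), (0.2, +1), (0.3, +1)]
print(believability_weighted_vote(opinions))  # -> "sell"
```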

You might argue that in our case we can judge the success of a web of trust by the real-life "success" of the community using it. But that feedback loop can take years or even decades to play out in society. For example, US monetary policy has brought wealth to the country for decades, despite trade deficits and declining productivity. How would the Fed be rated in a web of trust? What happens when it all crashes?

Edit - I just read one of your replies in the comments. If I understand it correctly, the scores will be different for everyone and will be personalized by the user himself, so all my points above are not valid. Essentially it becomes a tool that lets you quickly identify who to follow and trust on social media. It won't be much more than that. It is definitely useful, but it can also make people live inside their own bubbles, much like what big tech does today: users will endlessly tweak the algorithm until they get recommended the users/posts they enjoy.
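
For what it's worth, here is how I picture the personalized version working. This is just my reading of your reply; the one-hop rule and all the names are invented for illustration:

```python
ratings = {  # hypothetical: each person's direct ratings of others
    "me":    {"alice": 0.9, "bob": 0.1},
    "you":   {"alice": 0.2, "bob": 0.9},
    "alice": {"target": 0.8},
    "bob":   {"target": 0.3},
}

def personal_score(viewer, target, ratings):
    """One-hop personalized trust: weight each intermediary's rating of
    the target by how much the viewer trusts that intermediary."""
    direct = ratings.get(viewer, {})
    if target in direct:
        return direct[target]
    total = weight = 0.0
    for mid, w in direct.items():
        if target in ratings.get(mid, {}):
            total += w * ratings[mid][target]
            weight += w
    return total / weight if weight else None

print(personal_score("me", "target", ratings))   # ~0.75: I trust alice
print(personal_score("you", "target", ratings))  # ~0.39: you trust bob
```

The same target gets a different score per viewer, which is exactly why my objections about a single "accurate" score stop applying, and also why the bubble effect seems built in.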


In this first post, I intentionally didn't say much about how rating software could improve our current methods of rating information, or how it would work, because I had a lot to say about how we rate information now and I hoped to keep the focus on that (and to separate that discussion from talk of a rating system itself, since it's likely more controversial and logically distinct).

I probably should have been clearer on this point in my closing section, because most of the post comments are still based on expectations of how a rating system might work and a discussion of its potential/perceived flaws.

Maybe I can take that to mean there's relatively little disagreement with my interpretation of how we rate information today, which I suppose is a good thing, if it means we have some consensus on much of the information shared so far.

Anyways, my next post will explore some of the concerns you've raised about potential implementations of a rating system.

What about the Bridgewater/Ray Dalio method? Have you heard of it before? It might be interesting to you.

Looking forward to your next post.

I hadn't heard of it, but it does seem to have correspondences with the rating systems I'm envisioning.

I should also add that when I started doing literature searches, I found portions of many of my own ideas espoused by other researchers, sometimes in very similar form, even down to the specific mathematical techniques that might be useful for analyzing web-of-trust data.

But back to the Bridgewater method: updating this type of rating system based on performance is also possible in most information domains, and IMO designs for a rating system should include plans for allowing feedback on current results to influence future results (something like the sketch below). Admittedly, measuring performance won't always be clear-cut in every domain, and will probably depend on Darwinian selection between competing systems to some extent as well.
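
To sketch what that feedback could look like: the update rule here is just one standard choice (a multiplicative-weights style update), not anything the post commits to, and the source names are placeholders.

```python
def update_weights(weights, was_correct, lr=0.5):
    """Shrink the weight of each source whose last verifiable claim was
    wrong, then renormalize so the weights stay comparable."""
    updated = {src: w * (1.0 if was_correct[src] else 1.0 - lr)
               for src, w in weights.items()}
    total = sum(updated.values())
    return {src: w / total for src, w in updated.items()}

weights = {"source_a": 0.5, "source_b": 0.5}
weights = update_weights(weights, {"source_a": True, "source_b": False})
print(weights)  # source_a's influence grows after source_b's miss
```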