Modeling information in an information rating system

In my previous post on this topic, I discussed some of the ways we rate the truth of information. One of the key points of that post was that much of our information rating is based on the opinions of other people that we, for one reason or another, think are good judges of the information’s reliability. Due to the sheer amount of information we deal with, none of us has the time to personally vet the truth of all the information we rely on, even in the cases where we have the skills and resources to do so.

In this post, I’ll start to introduce some possible ways that computers can help improve our ability to rate information. In particular, I want to look at ways to improve our ability to rate information based on other people’s opinions. Many of the approaches we’ll look at are just extensions of methods we already employ in our daily lives.

I want to emphasize that this post is just meant to begin an early exploration of a very broad topic: it is not a prescription for immediate action in the form of explicit algorithms, nor a comprehensive survey (there will be more posts later). Also, I’ll only be discussing the associated mathematics in a very general way, although I’ll add some links to sources with more details on potential approaches.

The need for mathematical models: Computers “think” in math

As any programmer can tell you, one of the most challenging tasks in getting a computer to solve a problem is figuring out how to model the problem in a mathematically formal way. Below I’ll examine some of the ways we can mathematically model a web of trust system for rating information (for brevity, I’ll refer to this system as a trust network).

Trust networks naturally generate individualized ratings

Before we jump into the modeling problem, it’s worth pointing out one important aspect of the trust network approach to rating information that I only addressed in the comments of my last post: these networks don’t need to give universal answers when rating information. For one thing, each person can get individualized answers based on the people they trust and the ways in which they trust them.

Also, each user is free to select their own personalized algorithms for analyzing the information from the people in their web of trust. While I’m visualizing a network of trust that is at least partially publicly accessible, there’s no requirement that all users run the same software. The only requirement for interoperability is that each version of the trust network software can operate over a commonly understood data format that nodes in the trust network can exchange.
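To make that concrete, a rating record exchanged between nodes might look something like the sketch below (every field name here is hypothetical, invented purely for illustration):

```python
# A hypothetical record format for exchanging ratings between nodes;
# all field names are invented for this example.
rating_record = {
    "predicate_id": "b7e2...",  # stable identifier of the predicate being rated
    "rater": "alice",           # the information source issuing the rating
    "probability": 0.9,         # the rater's estimate that the predicate is true
    "signature": "3f9c...",     # cryptographic signature over the fields above
}
```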

Modeling units of “information to be rated” using predicates

The first problem we need to address is how to represent the information we want to rate. This general area of computer research is known as knowledge representation. As you can probably imagine, it is a huge topic, and we’ll make as many simplifications as we can for now to avoid the many complexities that arise.

To assist in the mathematical modeling necessary to map the information rating domain to a computer, I’ll briefly introduce some terms from symbolic logic. Symbolic logic theory is a branch of mathematics that was created to formalize the rules of logic we use when we reason about the truth of statements, so it’s worth a brief look at some of the things it has to offer.

Borrowing from the language of symbolic logic theory, I will refer to the units of information to be rated in a web of trust network as “predicates”. In symbolic logic theory, a predicate is something that is affirmed or denied of the subject in a logical proposition. For example, “Hive has a blockchain” is a predicate. It makes a statement about the subject, Hive, that can be either true or false, but obviously not both simultaneously.

Assigning probabilities to predicates

Traditional symbolic logic requires predicates to be either true or false, but we need a model that represents probabilities of truth rather than absolutes. Let’s assign a decimal value from 0 to 1 to indicate a predicate’s truth probability. For example, 0 means it’s false, 1 means it’s true, and 0.7 means there’s a 70% chance that it is true.
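In code, a predicate and its truth probability could be represented as simply as the sketch below (a hypothetical representation; I’m using Python here and in later sketches purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Predicate:
    subject: str        # e.g. "Hive"
    assertion: str      # e.g. "has a blockchain"
    probability: float  # 0.0 = false, 1.0 = true, 0.7 = 70% chance of being true
```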

In an information rating system, it’s likely that very few predicates should ever get rated as absolutely true (1) or absolutely false (0). This is because, for most of the things we believe to be true, we need to be prepared to receive new information that contradicts our current beliefs. In fact, it might at first seem like we could never rate anything as absolutely true or false, but there are internally consistent information domains such as mathematics, where it is quite possible to rate things as absolutely true (e.g. 1+1 = 2).

Modeling the inputs from a web of trust using probabilities

As a quick refresher from my previous post, a web of trust is one or more information sources that you trust to some extent to rate the truth of information (your friends, for example, are one web of trust you have).

Let’s look at one possible way we can mathematically model the opinions about a predicate’s truth coming from a web of trust. Let’s start with a simple example, where your web of trust consists of two people, Alice and Bob, who both solve the same math problem on a test, with Alice reporting that the answer is 5 and Bob reporting that the answer is 10.

Before we can optimally cheat on our own test, we need to decide which of the two answers is more probable. Typically we’ll do this based on internal rules like “Alice is good at math” and “Bob is no math genius”. In this simple case, we would likely just decide that Alice is right and go with her answer. But then we look around the room a little more and notice that Carol, who is also good at math, has the same answer as Bob. This new evidence may lead us to believe that Bob is actually right and Alice is wrong.

In order for a computer to help us solve this problem, we need to assign probabilities to our ratings of the information sources. So let’s say we believe Alice is right 80% of the time, Bob is right 60% of the time, and Carol is right 70% of the time. At this point, we have enough information to compute which answer is more probable using probability theory (although, if you’re like me, you’re still going to want a computer to do it). One promising avenue for modeling the probability of truth in computations of this sort is Bayesian probability theory.
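To sketch how such a computation might go, here’s the Alice/Bob/Carol example worked out with Bayes’ rule. This is a deliberately simplified model, and its assumptions are mine, not requirements of a trust network: the true answer is one of the reported values, both answers are equally likely a priori, and each person’s correctness is independent of the others.

```python
# A minimal Bayesian sketch of the Alice/Bob/Carol example.
def posterior(candidates, reports, reliability):
    """Return P(answer) for each candidate answer.

    candidates:  the possible true answers
    reports:     {source: the answer that source reported}
    reliability: {source: probability that source is right}
    """
    likelihoods = {}
    for answer in candidates:
        p = 1.0
        for source, reported in reports.items():
            r = reliability[source]
            # A source reports the true answer with probability r and,
            # in this simplified model, the other answer otherwise.
            p *= r if reported == answer else (1.0 - r)
        likelihoods[answer] = p
    total = sum(likelihoods.values())
    return {answer: p / total for answer, p in likelihoods.items()}

print(posterior(
    candidates=[5, 10],
    reports={"alice": 5, "bob": 10, "carol": 10},
    reliability={"alice": 0.8, "bob": 0.6, "carol": 0.7},
))
# -> {5: 0.533, 10: 0.467} (approximately)
```

Interestingly, under these simplifying assumptions Alice’s answer still comes out slightly ahead even though Bob and Carol agree; a more realistic model of how often wrong answers coincide could easily tip the balance the other way.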

Confidence factor: how sure are we of a probability?

In real life, we’re rarely able to rate information so precisely that we’re able to say things like “Alice is right 80% of the time”. So just deciding our own internal ratings of information sources isn’t a trivial problem.

For example, we may believe Alice is right most of the time, but we might not have enough experience with Alice’s math skills to know if that means Alice is right “7 times out of 10” or “9 times out of 10”. Let’s call this a “confidence interval” of .7 to .9, as it’s a measure of how confident we are in rating Alice’s skill at math. Another more powerful way to model this uncertainty mathematically is to rate an information source with a probability distribution rather than a simple confidence interval. In either of these two models, we can tighten the range of our confidence interval or probability distribution as we get new information that allows us to increase our precision in rating Alice’s math ability.
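For instance, a Beta distribution is a common way to represent this kind of uncertainty about a probability. Here’s a minimal sketch using scipy, with hypothetical observation counts:

```python
from scipy.stats import beta

# Suppose we've watched Alice answer 10 math questions: 8 right, 2 wrong.
# Under a uniform prior, the posterior for her reliability is Beta(8+1, 2+1).
alice_reliability = beta(8 + 1, 2 + 1)

print(alice_reliability.mean())          # point estimate of her reliability: 0.75
print(alice_reliability.interval(0.9))   # an interval containing 90% of the probability
# Each new observation of Alice's answers tightens this interval.
```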

For programmers who are interested in experimenting with using computers to perform math with probabilistic distributions, here’s a python-based library that I came across in my research: PyMC3.

Problems with predicate meaning: meaning requires context

One problem that immediately arises with the use of predicates is how to decide when two information sources are reporting on the same subject. As a simple example, let’s say you have two information sources, one that says “Coke is good for you” and another that says “Coke is bad for you”. Serious problems emerge immediately in determining whether these two sources are reporting conflicting information, or whether they are talking about entirely different things. One may be talking about Coke, the product from the Coca-Cola company, whereas the other source may be talking about cocaine.

A related problem is that we have to decide whether the predicates are making the same assertion about the subject of the predicate. For example, in this case, we have to decide what “good for you” means: does it mean “good for your state of mind” or “good for your health”?

There are many possible ways to get around these problems, of course. Normally we use contextual clues to determine the meaning of such statements. If we read the statement in an article about soft drinks, we can be pretty sure it’s talking about the Coca-Cola product.

But while this is easy enough for us to do normally, a few decades ago this would have been a very difficult problem for a computer to solve. Nowadays, however, computers have gotten much better at making distinctions of this sort, so one potential solution is to provide a trust network with enough contextually relevant information that it can use to distinguish seemingly similar predicates (or even identify the same predicate expressed slightly differently).

Another way we can solve this problem is to get the information sources themselves to agree about a unique set of predicates. In other words, we can use human brains to make the distinctions instead of relying on the computers to interpret the words associated with the predicate.

There are many ways that the trust network could go about achieving such consensus on the meaning of predicates, and it’s a research topic in and of itself. One interesting method might be to use computers to identify probable matches in predicates, then have humans in the trust network rate the probability of those predicates being identical.

Unique subjects allow us to create a set of predicates about that subject

In the previous section, we talked about a mechanism for uniquely identifying subjects in predicates. Once we have such a mechanism, we can begin to group predicates associated with that subject. This can be very useful in many ways.

One obvious way this kind of predicate grouping can be useful is simply as a new way to get information about a subject: when we know one thing about a subject, it’s likely that we will want to know other things about it.

Predicate grouping also allows us to reason about the subject and draw conclusions about it. For example, the more information/predicates we know about a subject, the easier it is to identify whether a subject in another predicate is the same subject, because the associated predicates serve as contextual information. It also allows us to deduce new information (new predicates) about the subject from existing predicates, which I’ll discuss in more depth in a later post.

Sentient subjects

One particularly important group of subjects for an information rating system is the subjects that represent a person or an organization, because these are the information sources in our trust network.

Predicates about people can allow us to rate the probability of several important attributes that can help us decide whether to trust them as information sources. For example, one of the most fundamental such attributes is identifying whether an information source is a real person, and not just one of many sock puppets used to magnify the opinions of one person. Predicates can also be used to indicate a person’s specialized areas of knowledge and we can use this information to help make a judgment on how much we should weight their opinions in a given information domain.

We can also associate a set of cryptographic keys with subjects that represent people or organizations. In this way, the owner of the keys can “act” for the subject in the trust network. This has many potential uses, but one of the most obvious is that it enables an information source to cryptographically sign the information they disseminate, in order to authenticate that the information originates from the signing source. This is basically how Hive works today with Hive accounts and posts. In a trust network, we could even assign probabilities to the likelihood that a subject still retains control of their keys.
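As a sketch of how the signing might look (I’m using the PyNaCl library here purely as one example of an Ed25519 signature implementation; nothing ties a trust network to this particular library):

```python
from nacl.signing import SigningKey

# The information source generates a keypair once; the verify key is
# published as part of the subject that represents them in the network.
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

# The source signs a rating before disseminating it.
signed = signing_key.sign(b'{"predicate_id": "b7e2...", "probability": 0.9}')

# Anyone can check that the rating came from the holder of the signing key;
# verify() raises nacl.exceptions.BadSignatureError if the data was tampered with.
verify_key.verify(signed)
```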

Grouping “related predicates” into domains

A problem “related” to that of identifying when information is being reported on the same subject is how to group predicates that are related to each other.

It’s desirable to group predicates into domains, because we would like to be able to rate an information source based on their domain knowledge. In other words, it’s not very useful for our trust network to rate Alice as being good at math, unless we can also classify the predicates we are trying to rate as being in the domain of math or not.

Classification of predicates into domains is also a very general problem in computer science with many possible solutions. One method that many Hive users are very familiar with is the process of adding tags as metadata to a post they create. In this case, we’re classifying the entire post as belonging to one or more domains. In a trust network, user-specified tags could serve as meta-predicates that assert domains to which a predicate belongs, along with associated probabilities that the predicate actually belongs to each of the specified domains.
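In other words, a tag in this scheme might boil down to something like the following hypothetical structure:

```python
# A hypothetical meta-predicate: one user's claim about the domains a
# predicate belongs to, with a probability attached to each claimed domain.
domain_tag = {
    "predicate_id": "b7e2...",
    "tagged_by": "alice",
    "domains": {
        "math": 0.95,     # very likely a math predicate
        "biology": 0.15,  # probably not a biology predicate
    },
}
```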

Another issue that arises with domain representation is how to perform computations on predicates belonging to overlapping domains. If we assign separate judgment rating probabilities to Alice for math and biology, how does the trust network choose which judgment rating to apply when rating a predicate that has been grouped into both domains? Or does it use both in some way, since her ability to rate the predicate is arguably higher when she’s an expert in two domains that the predicate belongs to?

Reducing the scope of the problem: rating useful predefined predicates such as identity

I suppose that just the simple examples above will have given you some insight into the difficulties involved in knowledge representation, but the “good news” is we can often make simplifying assumptions, as long as we’re willing to limit the capabilities of our trust network.

As a starting point, we could simply predefine a set of “interesting” predicates for our trust network and group them into distinct non-overlapping domains. Later work could then focus on how to dynamically introduce new classes of predicates to be rated and how to handle overlapping domains.

For example, applying a network like this to Hive, we could use it to rate Hive accounts as subjects, and rate predicates such as “account X is a real person”, “account X is a government-registered entity”, “account X is a plagiarist”, and “account X is an expert in STEM”. The latter predicate could be generalized to “account X is an expert in Y community”. Again, to avoid controversy, I want to point out that the ratings provided by such a trust network are individualized (and thus very decentralized), so it would serve as an aid to each user, but it would not act as a universal social scoring system.
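A hypothetical starter set along those lines might be as simple as:

```python
# Hypothetical predefined predicate templates, grouped into distinct,
# non-overlapping domains; {x} stands for a Hive account, {y} for a community.
PREDEFINED_PREDICATES = {
    "identity":  ["account {x} is a real person",
                  "account {x} is a government-registered entity"],
    "conduct":   ["account {x} is a plagiarist"],
    "expertise": ["account {x} is an expert in the {y} community"],
}
```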

Upcoming topics: creating and optimizing a web of trust

In this post, I’ve focused on some potential ways to create a probabilistic model of information to be rated by a web of trust. In future posts, we’ll continue to look into modeling issues, ways that computers can collect information to create a web of trust, ways to detect problems with a web of trust, using inference engines to infer new information about predicate subjects and detect contradictions, and methods of optimizing a trust network over time.

I'm guessing the plan is to evolve this into some sort of a web service. "Algorithms" have gotten a bad rep because of big tech, but if it's opensource, gives people freedom of choice, and doesn't skew towards one side whether it's Trump or Biden or Kim Jong Un, it IS free speech.

If gatekeepers start deciding search results for millions of people, then it goes the other way (what Google has become today).

If the idea of "information rating system" is to allow anyone to rate anyone without worrying about being censored, it's probably going in the right direction. I have no idea how this could be defined mathematically, or in code. But if you can do it, build it on top of Hive, facilitate discoverability, Hive might actually start competing against big tech.

You'd go into all sorts of questions like what validates a certain rating or identity, etc. Dan got lazy and tried this with horrendous KYC for Voice. I think you're aiming for something better, more objective. I have questions like what stops someone with a lot of resources from controlling a group of bots to create a fake network? Or what IS identity in this system? What gives one rating more relevance to me vs another rating?

Hive voting and "trending" right now is nothing more than "who's the richest person on Hive and what does he/she think?" It has zero relevance to me. Posts like this kinda make me think there's potential for something more enticing. I'd love to see more details laid out and perhaps opensource code?

I'm guessing the plan is to evolve this into some sort of a web service.

The basis of it wouldn't be a web service, but a peer-to-peer network for exchanging information and associated opinions about that data. However, web services could certainly be built on top of it, just like you see web sites built on top of Hive that offer different ways to look at the underlying data.

The underlying protocol would definitely be open-sourced. For sites built on top of it, the choice of open-sourcing would be up to each operator. But I see the most valuable and trustworthy analysis being done by open-source software being run on computers controlled by each individual user.

If the idea of "information rating system" is to allow anyone to rate anyone without worrying about being censored, it's probably going in the right direction.
I have questions like what stops someone with a lot of resources from controlling a group of bots to create a fake network? Or what IS identity in this system? What gives one rating more relevance to me vs another rating?

One of the aims of this idea is indeed to operate as a better alternative to censorship. I believe the current calls for censorship are the wrong solution to a real problem: the problem of media manipulation in the form of bots, paid shills, etc. Centralized censorship is very dangerous because it puts too much power into too few hands. But the power of a few people to manipulate the information supplied to a lot of people is probably equally dangerous, especially when the information sources can magnify the power of their speech so that it appears to come from many people (e.g. using bots). They are two sides of the same coin.

There are various algorithms that can be used to detect a fake network made up of bots, but this is a somewhat technical topic, so I'm reserving the discussion for a future post.

thanks. i already see people freaking out but i don't see much of an issue with the idea itself. in fact i think it could be something that saves hive.

It's predictable that some people would freak out, I think. I suspect that the more someone's brand is that of an "out of the box" thinker (regardless of the reality of that assessment), the more likely they don't like to imagine that someone may be rating what they say or publishing that rating. Unfortunately, that also goes for con artists.

What's amusing so far is I see one case where two obviously contradictory arguments are used simultaneously to attack the idea. I can already imagine the response when I discuss how computers could help spot cases of cognitive dissonance.

The best and only method that respects freedom is for the reader to do their own research. I'm not in favor of adding algorithms to gauge information.

You are allowed to be ignorant and parrot false statements.

Unique opinions can be true or the best explanation based on the current data and still be wrong, but no AI could accurately gauge it. It's my job to decide if I agree or not.

A completely false statement can be rated as true by all available official and unofficial sources, but there's no way for AI to accurately rate it.

I don't like this type of thinking and if added to the platform it will turn many if not most freedom loving users away.

It's great when people "do their own research", but it's important to recognize the limitations of that approach.

In my first post, I essentially argue that no one does their own research on all topics (or even most topics). Not only that, I don't believe that any human has enough time and mental abilities to do so, even if they wanted to expend that much effort.

I also think you're getting confused about what types of algorithms are primary to the operation of a trust network. They are not mostly AI algorithms, although AI algorithms could be used to supplement the information provided by a trust network (as mentioned, I'll discuss this in a later post). Instead these are "trust algorithms" that allow you to rate your trust in information sources. This is something you already do now, and I'm just looking at ways that a computer can help you make computations based on the information you give it about your trust in different information sources.

A lot of the ideas for this project are in fact in search of means for us to attain more freedom than we have today. Webs of trust allow us to establish trust relationships outside of centralized sources of information, while helping to prevent us from being duped by false sources of information created through the many ways that information platforms today can be gamed by savvy information professionals.

How does information I deem trustworthy through my opinion make it trustworthy? It sounds like you want to build an easy way to build echo chambers.

A trust network can certainly be an echo chamber. Many people today foolishly design their trust networks as an echo chamber because they prefer to avoid information that contradicts what they believe to be true.

There's certainly nothing about using computers that will force people to accept information from diverse sources, for example. But computers could help you analyze the network you're building and at least let you know that you're narrowing your sources of information. They could also inform you in cases where the vast majority of people disagree with the opinions of your trust network. What you do with that information, as always, is up to you. But if you find that your trust network is rating a lot of information incorrectly (in the sense that you find it giving a lot of wrong answers in cases where you're able to verify the truth independently), you may be incentivized to change your network in a way that makes it more useful.

Sure, but it would depend on how intricate you make it. Unfortunately, a consensus doesn't make right or true, so I see this as unnecessary and possibly doing more harm than good. Think China's social scoring...

I've tried to make clear already how completely different this is from social scoring systems. I'm not sure what else I can say on that subject.

But as to consensus, I will argue that some level of consensus is needed for any organized human activity. It doesn't have to be universal consensus, but without some local consensus, none of us will be able to accomplish much.

I've accomplished everything I've done without a consensus. I look up what I want and read all sides. I've never trusted, nor will I ever trust, another's opinion on a subject. A trust score is just another worthless waste of time that represents nothing valid in reality.

Twitter has just launched something similar, and it is only used to silence voices. While your idea may be a bit different, I feel it will be abused or cause the masses to limit their research. This will continue the trend of laziness tech has brought about and even cause relevant info to be skipped over, because it isn't popular enough.

This could even be used to target users with info, similar to targeted ads. Freedom lovers find this to be an overreach and creepy. It could also be used to target and smear info and ppl.

Hive needs to become decentralized. Not only won't this bring us there, but wasting time on such a thing will surely slow down that process, and may make sure it never becomes decentralized.

Oh, based on your final concern, it's also worth noting that any potential integration of a trust network into Hive would almost certainly be as a 2nd layer application, so its usage would be strictly optional.

most people would do their "own research" through google. algorithm itself is not the issue. if google didn't skew towards one side (it actually didn't at one point if you remember) it could've embodied free speech.

I'm not sure that's relevant, since Google still provides most of the links that exist; they are simply further down the list. Also, Google's popularity is waning and we have quite a few alternatives now with more coming.

And... believe it or not, many still use ink and paper for their research. Also, Google doesn't change the wording in the links, so if you find a paper Google doesn't like, you still get to read it unredacted.

I'm talking about the flaw in your logic when you say "The best and only method that respects freedom is for the reader to do their own research."
When you read a book or paper, whether it's in ink or on your screen, the publisher + writer + many more people involved are doing research for you. Google happens to be one example.

So, individual perspective is a flaw? I'm not sure I understand and am pretty sure it's because you haven't stated your case clearly...

yes. by your definition "freedom loving users" can't exist. nobody can do their own research because the fact that you're even reading something in ink means you'll be influenced by the bias that exists in distribution channels and investors in the research.
(unless you're talking about few people that figure out physics and how reality works..
but even they are not immune to the same economy that pushes readers to one side or the other)

i said "flaw" because the more relevant question would probably be something like where do you wanna draw the line and why, rather than simply saying people should do their own research.

So, in your world ppl only research one side of a topic? Also, your flawed take assumes everyone believes everything they read. This is not the case...

And "freedom loving users can't exist?" What? You do know what the basic qualifiers for a false statement are? You use one for each of your statements.

Not everyone is like you, so you cannot use your own actions to claim 'nobody' will do this or that those types 'can't exist'.

I don't like this type of thinking and if added to the platform it will turn many if not most freedom loving users away

That's true. We can see AI in action on YT, FB or TT. I will power down if it comes here.

As a developer, I love these kinds of algorithmic problems, especially when you start getting into neural network territory. However, you need to be careful @blocktrades -- all this talk (however theoretical it may be) is going to rub some members of this platform the wrong way, leading them to jump to conclusions (because a lot of people here are irrational).

I'm sure you've noticed this, but since the early days of Steem, once the fanfare died down and only a smaller subset of the initial users remained, Steem turned into a conspiracy theorist haven and the anti-vaxxer crowd also found a home there.

Now, Hive is still rooted in the legacy that is Steem and with it comes the baggage of that platform. This means when Hive took a snapshot and started anew, these conspiracy theorists and Bill Gates fearing anti-vaxxers still remained.

A few of the top witnesses fall into the anti-vaxxer/conspiracy theory territory. Now, if you were to introduce any system that attempted to classify and rate the accuracy of the content they were posting, it would drive them away. Although, you could argue that isn't a bad thing. I hate the fact this place is a safe harbour for dangerous anti-vaccine rhetoric and Parler antics, but I respect the right of these people to their opinions (however dangerous, misinformed and wrong they are).

One area I would really love to see the @blocktrades team focus on is content quality. I don't care about the accuracy, but I do care that numerous high profile users on this blockchain just post absolute drivel, rhetorical spam. I see nonsensical crap on the trending page daily and it is such a bad look for this site. Articles like, "Reasons crypto will replace fiat" and "Hive is the future of crypto" litter this site because they're easy to write if you're lazy and just wanting to farm from the rewards pool.

An algorithm that actually determined whether a post was worthy of being on the trending page would be welcome. Furthermore, remove the ability for users to self-upvote (or seriously reduce the reward you get for doing so) and build a system that encourages and rewards users for curating and upvoting other posts instead.

Hive in its current state is great for developers, but if you're a content writer, you probably wouldn't feel accomplished or seen here. No offence, but Hive is terrible for content curation and discovery; it's like a blog platform from 2002. The writing experience is also not great (especially on mobile). I think Medium (as bad as it has now become) still has a superior and clean interface, writing experience and content discovery system that doesn't reward its richest and spammiest users.

If Hive wants to grow and not just cater to its witnesses and whales, you need to focus on content discoverability and a way better interface for starters. You need to make newcomers feel welcomed, give them the tools to build a following and audience. Right now, I would imagine that for anyone new to Hive, anything they post is the equivalent of pissing into the ocean.

It should get better when the 5 min curation window is gone. I don't know how long you have been on Hive/Steem, but the trending page has gotten a lot better. Not that I would say it's good, but at least I can see improvement.

I think, the way things are heading, Hive soon won't be all about blogging. The blockchain itself seems to be the valuable aspect (with the 2nd layer applications on top). Maybe we will find other ways of distributing the token? There's talk of microblogging, for example. But yeah, it's still a long road :)

This might be a very controversial thing, with an outcome dangerous to free speech here.

Truth is subjective and it's one's basic freedom to decide what is believed to be so.

Truth isn't subjective. People's opinion of it is, but those are actually quite distinct statements.

From a "rights perspective" of morality, people certainly have the right to their opinions, as long as they don't act on them in ways that infringe the rights of others, and I've not made any argument to the contrary.

More to the point, a trust network doesn't really tell you what to believe. It only helps you do math calculations and make inferences based on what you tell it that you believe already. And if you don't like the results of the calculations, you can change the inputs you feed it (or even the algorithms it uses). It's a mental aid, like a calculator, not an arbiter of information.

[image: Quadrants3.gif]

This diagram limits the space where "truth" applies. Quadrants in the second pic have levels describing areas in the first one.

[image: j_sem20180022_fig_001.jpg]

And seeing that "objective" truth from the perspective of different levels of mind narrows the space of truth even further.

And how would this perspective (which is sort of scientific, but might not always be true) be assessed by an algorithm?

What I meant, in general, is that truth is so narrow that the task gets complicated beyond applicability. It can't be apolitical. Misunderstandings and using an algorithm outside of its scope of usability might cause more damage than good.

This is just an opinion :)

If I understand you correctly, you're assuming that a trust network is only useful for rating things that have an absolute (but potentially unknown) truth. And you're concerned that this system would then be applied to topics that don't have an absolute truth, such as "ketchup tastes good", and this would result in damage of some sort (maybe ketchup would be eliminated from our choice of condiments because there are more Chinese people than Japanese people).

If that's the case, then I hope I can alleviate your concern. Although I've mostly talked about trust networks in terms of their ability to rate truth, its application extends to rating quality as well as truth, because it is just an extension of our normal methods for rating both.

You can use a trust network to rate the likely reliability of a car model, just like you might read a consumer report to help you when selecting a car to buy. And just like reading a consumer report doesn't force you to buy one car or another, this system wouldn't either. It would just provide you more information to help you make a decision, as you weigh the various pros and cons of each car with respect to your personal needs.

I see. It's much clearer with an example now. Thank you. Can it be seen as some sort of reputation/opinion system?

Can it be seen as some sort of reputation/opinion system?

Yes, it is in essence a means for creating a personalized form of that.

Wow!

You chose a rather difficult topic to comment on! For me, truth is too fickle and depends solely on subjective factors.

Everything depends on how much confidence the person giving the information inspires in you, and on how it is supported.

Consider a very obvious situation: when we read a news headline, there are factors that help us decide how accurate the information is for us. Reading "WALTER LOTUS SHOWS SWITZERLAND AS A TECHNOLOGICAL POWER", we would need to know who Walter is and what he symbolizes for us: whether we have an emotional attachment to Walter's excellent handling of the economy, or whether he seems to us an egocentric businessman who only sells us cheap advertisements.

The veracity we grant to that statement depends on it. It is my humble opinion! 🤫🤭☺️ As for the rest of what you raise, I don't know much about programming or computing, but I found the information very nourishing. 😉

Truth itself isn't subjective, but our methods of rating truth are mostly subjective. But computers can help us with subjective computations. For example, finding a "good" article on the web is subjective, but search and AI software help us find articles we would personally be interested in and are therefore more likely to rate as "good".

one of the goals of the system is to make it easier for humans to do their jobs. however, there is the potential for the system to make things difficult. the programs that you create under certain conditions are expected to make it easier for users to achieve their goals

the interface can be easy while technology isn't. it's like most people don't know how internet works but we all use it because there's layers that make certain things seamless. same with dollars or whatever currency that you use everyday.

Just wondering what the net benefit of using such a tool would be. If I have a trust system providing me a rating for all the info I am reading, how would it help me?

I'll never just judge content based on its rating. For instance, someone posted/said something thought provoking. I don't like what was said. But then that same person has a high rating based on my web of trust. What benefit has that brought me? I simply don't care about the rating, I don't like what was said.

The only way this can be useful is if a whole community uses the same algorithm, and the ratings appear the same for every user in the community. But then again, I think this would be scary for humanity, as network effects will form around certain algorithms.

I'll never just judge content based on its rating.

I'll argue that you do this all the time. For most people, they value the diagnosis they get from a doctor more than the diagnosis they get from their hypochondriac aunt. And this generally holds true even if you "don't like what was said". Imagine the doctor tells you that the chest pain you have is a sign of a serious heart condition and not just heartburn. You might not want to believe it, but you'll likely take it seriously, because you have some reason to respect the reputation of the information source.

The only way this can be useful is if a whole community uses the same algorithm, and the ratings appear the same for every user in the community.

In my opinion, that's the opposite of useful. For example, a system that doesn't at least allow for an initial diversity of opinion wouldn't allow for a proper exploration of the solution space needed when tackling scientific questions.

On the other hand, allowing for diversity of opinion doesn't mean that consensus can't emerge on the truth of many issues. I'm simply proposing ways we can use computers to improve the process we already use, and we've seen historically that most of humanity has over time been able to reach consensus on many topics.

I'll argue that you do this all the time.

Theoretically, yes. Your example was a good counterargument; however, I don't think a simple rating from an algorithm would have the same effect as talking to a human doctor.

I have a hard time trusting a computer generated number. Maybe that could change once I start using the trust system. It definitely has potential to be revolutionary.

The ratings from a trust network don't really come from a computer, in some sense. It's just doing "provably correct" probability math on your ratings of other people and their ratings of information. So the computer can still generate wrong probabilities, if you feed it the wrong input probabilities, but to the extent that the input probabilities are correct, the output probabilities will be correct. It's similar to the situation where you use a calculator: if you feed it the right inputs, it will generate the right results, but that can be a "big if".

I have a hard time trusting a computer generated number.

Yes, and rightly so to some extent. For a system like this to be trusted, there'll need to be a lot of transparency surrounding the calculations it does. But I see those capabilities as very necessary anyway, since one of the important capabilities of a system like this will be the ability to identify when you're getting bad inputs that are skewing your results, along with ways to tweak your inputs to adjust its performance.

What you should focus on is TEACHING everyone how to learn and how to figure out what is a fact or not. If everyone was taught that we would all be better off.

The best thing to do would be to leave this alone completely, in terms of making a system of it. What you are talking about doing is shaping the way people think; it will put people into "self-selected echo chambers" and is dangerous. Your well-meaning intentions here are extremely misguided, in my opinion. What you are also talking about doing is creating a sort of decentralized Chinese Social Media Score.... And we all know how that is used by governments...

https://sociable.co/government-and-policy/globalists-embrace-social-media-location-behavioral-data-alternative-credit-scoring/

https://nypost.com/2019/05/18/chinas-new-social-credit-system-turns-orwells-1984-into-reality/

https://www.washingtonexaminer.com/news/hawley-communist-china-social-credit-cancel-culture

people score each other all the time. the only difference is you don't say it to their face or enforce it on them. you looking at some hot chick across the road is 100% decentralization. the chinese version is 0% decentralization. hive or instagram or facebook or youtube or any other app that a lot of people happen to use sit somewhere in between. the question here is where do we wanna place hive.

i hate ccp (probably a lot more than you do). i served in south korean army and i'd gladly **** them if i had the chance because they're the mortal enemy to my freedom, but that's not the issue.

It seems you're simultaneously arguing that a trust network is both an echo chamber AND a universal scoring system (ala Chinese social scoring). Those two terms describe radically different approaches to rating information, so it's beyond me how you can accuse a trust network like I'm describing of being both simultaneously. Anyways, I've addressed both of these characterizations in earlier comments, so I'll just refer you to my responses elsewhere.

https://www.mylife.com How about US and Western social scoring?

A decentralized social score is already used universally to check against people in many countries. Rating information, no matter how decentralized you make it, all gets collated into a centralized service eventually, or the product you are creating gets more centralized over time. Everything historically centralizes over time.

There are already websites and companies that do this. The data is decentralized, in that they pull this data from many different sources and piece it together. It is then centralized. They rate you as a person, your income, your education, and many things.

Anyone who creates a scoring system is to be viewed with extreme suspicion; there's really no good reason to do this because it WILL be abused by people further down the road. I already gave you one example of MyLife, which is just in its infancy and allows pretty much anyone to spy on anyone else for a nominal fee.

Just don't do it, let things be. By making more scoring things you will be feeding into things like this. Someone will take what you did, connect it with other things, and centralize it; things will be done with that data that you couldn't possibly have predicted. Therefore it is better logically to just not do it at all. Working to further make things anonymous, however, is a good endeavor.

There's no way to stop the aggregation of data used to rate people, in my opinion. What we can do is level the playing field somewhat on who owns the data (and hence decentralize the power it yields to its holders).

There are many different ways of stopping data aggregation.

Using a Steem/Hive model that rewards people for creating obfuscation and fake data in large quantities is one way: provable fake data generation (think provable output, taking in real data and changing it just slightly to make it fake, and doing it hundreds of times so it is obfuscated), mass sign-up of social media with garbage info, anonymizing everything, private and encrypted transactions/messaging/emails/everything, even making software for nodes that encrypts and anonymizes traffic.

Depending on how far down you want to go into something like that, a lot can be done.


A very interesting post.

However, the "coke" example is a very bad one. We all know that both (the product from Coca-Cola and cocaine) are bad for us, whether for the mind or for health. 😁

Regarding considering user-chosen tags on posts as useful information for domains, that's a step I probably wouldn't take. There are so many users who choose inappropriate tags, especially in order to get upvotes. Tags should first of all be compared to the content of the post whenever possible.

That said, this post allows me to glimpse the system you want to put in place and its huge potential to provide services for our platform. Now, I feel very excited and eager to read your next posts.

To your point about tags, that's why I also mentioned this:

along with associated probabilities that the predicate actually belongs to each of the specified domains.

In other words, the tag originally assigned by the creator of the information can be rated as not applicable. Similarly other people could propose alternative tags that could then be rated.

Of course, none of the above argues against having an option for an automatic tagging system as a supplementary way to check (or even identify) domains as you're suggesting (and its ratings could also be rated by people). So here we could see cases where humans and AI are both checking each other's work.

I really enjoyed this one, thanks for making excellent entertaining content. Stay safe in these strange and unusual times.

Very important topic. Thanks for sharing it.