
RE: A peer-to-peer network for sharing and rating information

in #hivemind · 3 years ago

> In many other cases, fictional profiles are created based on individuals. In even more cases, the perpetrator claims to act on behalf of an individual.

In the system I'm planning, creation of a convincing fake account that masquerades as another person will be quite difficult. First, that person will need to create multiple fake accounts, because an account's authenticity is tightly correlated with other accounts that vouch for it. But of course it is not difficult for a determined identity thief to create many accounts and have them vouch for each other.

And here's where the serious difficulty comes in: in order for any of the information being reported by these accounts to be taken seriously by a person's local rating system, no matter how many sock puppets the identity thief controls, they have to convince other people in your trusted network that these are all real people.

Now, you may have one gullible friend in your trust network who's willing to vouch for someone's identity based on insufficient information, so we can already see ways that such a system can potentially be gamed. I plan to go into more depth on these attacks, and on the defenses a web of trust can deploy to mitigate them, in my next post.
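The vouching mechanism described above can be sketched roughly like this (a minimal illustration under my own assumptions, not the planned implementation; all names and weights are made up): an account's authenticity score comes only from vouchers that your own network already trusts, so a ring of sock puppets that merely vouch for each other scores nothing.

```python
# Minimal sketch of the vouching idea: an account's authenticity score
# comes only from vouchers your local network already trusts, so sock
# puppets that only vouch for each other contribute nothing.

def authenticity(account, vouches, trusted, damping=0.5):
    """Sum the local trust of every account that vouches for `account`.

    vouches: account -> set of accounts vouching for it
    trusted: account -> trust weight assigned by your local network
    """
    return sum(trusted.get(v, 0.0) * damping for v in vouches.get(account, set()))

vouches = {
    "mallory": {"sock1", "sock2", "sock3"},  # sock puppets vouching for each other
    "sock1": {"mallory", "sock2"},
    "alice": {"bob", "carol"},               # vouched for by your real contacts
}
trusted = {"bob": 1.0, "carol": 0.5}         # your local trust assignments

print(authenticity("alice", vouches, trusted))    # 0.75
print(authenticity("mallory", vouches, trusted))  # 0.0
```

No matter how large the sock-puppet ring grows, its members never appear in your `trusted` map, so their mutual vouches carry zero weight until someone in your network is convinced to trust one of them.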

> Related, to your point of identity verification, I recently attempted a different approach at Hivewatchers

In this paragraph, you're describing an alternative method to using a web of trust to verify information (not just identity information, but all information). In this case, you're using your own critical thinking faculties to analyze the data you've received, looking for contradictions and patterns that can give you clues to whether the person is truly who they claim to be (the "truth" of their claim).

We all use critical thinking to some extent to rate information we receive, and it's how we contribute our own thinking power to the "hivemind" computer of our web of trust network (the people with whom we directly or indirectly share the information and opinions we have).

If none of us employed critical thinking, webs of trust would only be able to rate very basic types of information like what we've seen and heard, and they wouldn't be able to rate what I've referred to as "higher-level" information.

> What I'm getting at is that a system that tests for variance in content

What you're describing goes beyond the scope of what a web of trust system does. You're describing an analytical engine that helps a human spot patterns. It's an aid to our native critical thinking ability.

Such systems can certainly play a role in helping us to properly rate information, in a similar way to the way a calculator can help us do math, and that sort of checking helps us to rate information more precisely, before we share our opinions with others via our personal web of trust.


In reverse order:

> What you're describing goes beyond the scope of what a web of trust system does. You're describing an analytical engine that helps a human spot patterns.

Yes. My interest is in building systems, as you likely know by now. A system can either learn or be adjusted based on the output it receives, or its output parameters can indirectly adjust user behavior. This happened back when my post highlighter ran for the SSG community: some users started formatting their posts to be picked up by the bot, which looked for a few 'quality' measures.

> And here's where the serious difficulty comes in: in order for any of the information being reported by these accounts to be taken seriously by a person's local rating system, no matter how many sock puppets the identity thief controls, they have to convince other people in your trusted network that these are all real people.

This just gave me an idea. One of the ID scammers that I alluded to here and mentioned privately was confirmed to be fraudulent by an individual geographically local to them. The individual was another long-time user from the same hometown who confirmed that it's impossible for them to have the credentials they claim. Geographical coordinates of supporting or refuting parties, where they choose to provide them (such as in a profile), or their knowledge of a particular area, may be incorporated into the weighting algorithm.

> Geographical coordinates of supporting or refuting parties, where they choose to provide them (such as in a profile), or their knowledge of a particular area, may be incorporated into the weighting algorithm.

Yes, when we start looking at using a web of trust to help analyze truth in various domains, one of the more "advanced" topics is how information in related areas can impact opinions about a specific truth (in the example you mention, proximity when verifying identity). Solving problems like this in a general way is very challenging, but coding for the use of specific, logically related information to help rate other information is clearly possible.
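As a rough sketch of the proximity idea (all function names, decay constants, and the weighting formula here are my own illustrative assumptions, not a specification of either system): a voucher's or refuter's base trust weight could be scaled up when their self-reported coordinates place them near the location tied to the claim, while distant parties decay toward a small floor rather than zero, since they may still have relevant knowledge of the area.

```python
# Hypothetical sketch: scale a vouching/refuting party's trust weight by
# geographic proximity to the claimed location. Names and constants are
# illustrative, not part of any existing system.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def proximity_weight(base_weight, voucher_coords, claim_coords, scale_km=100.0):
    """Scale a party's base trust weight by proximity to the claim's location.

    A party in the claimed hometown keeps nearly full weight; a distant one
    decays toward a 0.25 floor, since distance doesn't rule out knowledge.
    Parties that provide no coordinates keep their unchanged base weight.
    """
    if voucher_coords is None or claim_coords is None:
        return base_weight
    d = haversine_km(*voucher_coords, *claim_coords)
    return base_weight * (0.25 + 0.75 * math.exp(-d / scale_km))

# A refuting party from the same hometown counts more than a distant one.
local = proximity_weight(1.0, (52.52, 13.40), (52.50, 13.42))   # Berlin vs Berlin
remote = proximity_weight(1.0, (40.71, -74.01), (52.50, 13.42)) # NYC vs Berlin
print(local > remote)  # True
```

In the hometown example above, the long-time local user's refutation would be weighted close to its full value, while the same refutation from an account with no plausible connection to the area would count for much less.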