You are viewing a single comment's thread from:

RE: A peer-to-peer network for sharing and rating information

in #hivemind • 3 years ago

Some highly creative ID deception and theft attempts have recently been intercepted. In one, the perpetrator took over otherwise abandoned social media accounts belonging to the victim's family and friends, either by coercing the owners or by buying the accounts. In another, the perpetrator created social media accounts for their victim. These two examples involve real persons. In many other cases, fictional profiles are created based on individuals. In even more cases, the perpetrator claims to act on behalf of an individual. This is most common with accounts they claim are 'managed'. The victims normally have no idea what's going on or don't understand that their identities have been monetized. Hivewatchers deals with all of these, but even then it's hard.

Related, to your point about identity verification: I recently attempted a different approach at Hivewatchers with an account that was accused of being a fake ID. What I did was go through all the messages sent by the person. Reading them back to back, you see that on one day the person is happy and answers one way, and on a different day the person is upset and answers another way. They give different answers to the same question depending on what's going on. A fake account or a thematic account (a type of 'managed' account) can't do that. It will always respond in a similar sort of way because the operator is acting. There will be some variation, but it will be minor and will follow a template. Revealing this here won't give anyone the upper hand in defeating the ID verification procedure; it's beyond their acting capability.

As you know, there is in general a lot of misinformation on Hive. The people who spread it do so because it builds up their fan base and creates an echo chamber. Everything is misread and twisted. We're used to seeing this in general, but it's particularly detrimental when it's about Hive itself, as it affects the entire ecosystem: for example, the claim that some mythical secret group decides which proposals get funded, or that the top witnesses are colluding. Because of the sheer number of these people, crowd-sourcing the credibility of information while drawing on the same groups that have been misinformed won't work. Many "confirmed facts" aren't based on experience or education, just guesses. People make observations, grow to believe them, others confirm their ideas, and now there's a majority view.

What I'm getting at is that a system that tests for variance in content, wording, subject matter and range of use by an account could be the start of a good web of trust. A real, trusted person's opinions change; they will be different today from yesterday, they will try different dapps, they will talk about things in their life and grow as a person. A fake account won't.
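To make this concrete, here's a minimal sketch of what such a variance check could look like. It assumes an account's posts are available as plain strings; the features (pairwise wording overlap, post-length spread, vocabulary size) and the threshold are illustrative assumptions rather than a finished detection method, and a human reviewer would still make the final call.

```python
import re
from statistics import mean, pstdev

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def variance_profile(posts):
    token_sets = [set(tokenize(p)) for p in posts]
    lengths = [len(tokenize(p)) for p in posts]

    # Pairwise wording overlap (Jaccard): templated accounts reuse
    # phrasing, so their posts overlap heavily with one another.
    overlaps = []
    for i in range(len(token_sets)):
        for j in range(i + 1, len(token_sets)):
            union = token_sets[i] | token_sets[j]
            if union:
                overlaps.append(len(token_sets[i] & token_sets[j]) / len(union))

    return {
        "mean_overlap": mean(overlaps) if overlaps else 0.0,
        "length_spread": pstdev(lengths) if lengths else 0.0,
        "vocab_size": len(set().union(*token_sets)) if token_sets else 0,
    }

def looks_templated(posts, overlap_threshold=0.6):
    # High wording overlap is only one weak signal of a scripted or
    # 'managed' account; it flags candidates for human review.
    return variance_profile(posts)["mean_overlap"] > overlap_threshold
```

Low overall variance on its own proves nothing; the value is in surfacing accounts whose output never changes so that a person can then read them the way described above.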


That is a very interesting comment.
I think any human pattern can be simulated by bots, however random it looks.
But this is indeed an undecided race.

In many other cases, fictional profiles are created based on individuals. In even more cases, the perpetrator claims to act on behalf of an individual.

In the system I'm planning, creation of a convincing fake account that masquerades as another person will be quite difficult. First, that person will need to create multiple fake accounts, because an account's authenticity is tightly correlated with other accounts that vouch for it. But of course it is not difficult for a determined identity thief to create many accounts and have them vouch for each other.

And here's where the serious difficulty comes in: in order for any of the information being reported by these accounts to be taken seriously by a person's local rating system, no matter how many sock puppets the identity thief controls, they have to convince other people in your trusted network that these are all real people.

Now, you may have one gullible friend in your trust network who's willing to vouch for someone's identity based on insufficient information, so we can already see ways that such a system can potentially be gamed. But I plan on going into more depth on attacks and defenses that a web of trust can deploy to mitigate these attacks in my next post.
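Before that post, here is a rough sketch of how a local rating system might weigh vouches. The graph representation, decay factor and cutoff are assumptions for illustration; the point is that a ring of sock puppets vouching only for each other never acquires trust unless someone already in your network vouches one of them in.

```python
def local_trust(vouches, me, decay=0.5, cutoff=0.05):
    """Propagate trust outward from `me` through vouch edges.
    Accounts reachable only through each other (a sock-puppet ring)
    never gain trust unless someone already trusted vouches them in."""
    trust = {me: 1.0}
    frontier = [me]
    while frontier:
        nxt = []
        for account in frontier:
            weight = trust[account] * decay      # trust decays per hop
            if weight < cutoff:
                continue
            for vouched in vouches.get(account, []):
                if trust.get(vouched, 0.0) < weight:
                    trust[vouched] = weight
                    nxt.append(vouched)
        frontier = nxt
    return trust

# A ring of sock puppets vouching for each other stays invisible:
graph = {
    "me":     ["friend"],
    "friend": ["colleague"],
    "sock1":  ["sock2"],
    "sock2":  ["sock3"],
    "sock3":  ["sock1"],
}
print(local_trust(graph, "me"))
# {'me': 1.0, 'friend': 0.5, 'colleague': 0.25} -- no sock puppets
```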

Related, to your point about identity verification: I recently attempted a different approach at Hivewatchers

In this paragraph, you're describing an alternative method to using a web of trust to verify information (not just identity information, but all information). In this case, you're using your own critical thinking faculties to analyze the data you've received, looking for contradictions and patterns that can give you clues to whether the person is truly who they claim to be (the "truth" of their claim).

We all use critical thinking to some extent to rate information we receive, and it's how we contribute our own thinking power to the "hivemind" computer of our web of trust network (the people with whom we directly or indirectly share the information and opinions we have).

If none of us employed critical thinking, webs of trust would only be able to rate very basic types of information like what we've seen and heard, and they wouldn't be able to rate what I've referred to as "higher-level" information.

What I'm getting at is that a system that tests for variance in content

What you're describing goes beyond the scope of what a web of trust system does. You're describing an analytical engine that helps a human spot patterns. It's an aid to our native critical thinking ability.

Such systems can certainly play a role in helping us rate information properly, much as a calculator can help us do math, and that sort of checking helps us rate information more precisely before we share our opinions with others via our personal web of trust.

In reverse order:

What you're describing goes beyond the scope of what a web of trust system does. You're describing an analytical engine that helps a human spot patterns.

Yes. As you likely know by now, my interest is in building systems. A system can either learn or be adjusted based on the output it receives, or its output can indirectly adjust user behavior. That happened back when my post highlighter ran for the SSG community: some users started formatting their posts to be picked up by the bot, which looked for a few 'quality' measures.

And here's where the serious difficulty comes in: in order for any of the information being reported by these accounts to be taken seriously by a person's local rating system, no matter how many sock puppets the identity thief controls, they have to convince other people in your trusted network that these are all real people.

This just gave me an idea. One of the ID scammers that I alluded to here and mentioned privately was confirmed to be fraudulent by an individual geographically local to them: another long-time user from the same hometown, who confirmed that it's impossible for them to have the credentials they claim. Geographical coordinates of supporting or refuting parties, where they choose to provide them (such as in their profile), or knowledge of a particular area, could be incorporated into the weighting algorithm.

Geographical coordinates of supporting or refuting parties, where they choose to provide them (such as in their profile), or knowledge of a particular area, could be incorporated into the weighting algorithm.

Yes, once we start looking at using a web of trust to help analyze truth in various domains, one of the more "advanced" topics is how information in related areas can influence opinions about a specific truth (in your example, geographic proximity when verifying identity). Solving problems like this in a general way is very challenging, but coding for the use of specific, logically related information to help rate other information is clearly possible.
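As an illustration of that last point, here's a hedged sketch of how optional geographic proximity could feed into the weighting you suggest. The boost formula and distance scale are assumptions, and coordinates would only be used where both parties have chosen to share them.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def proximity_boost(voucher_loc, subject_loc, scale_km=100.0):
    """Multiplier applied to a vouch (or refutation) when the party is
    plausibly local to the subject; 1.0 when either location is missing."""
    if voucher_loc is None or subject_loc is None:
        return 1.0
    distance = haversine_km(voucher_loc, subject_loc)
    return 1.0 + 1.0 / (1.0 + distance / scale_km)

# Someone from the same hometown counts roughly twice as much as
# someone on the other side of the world:
print(proximity_boost((40.71, -74.00), (40.73, -73.99)))   # ~1.98
print(proximity_boost((40.71, -74.00), (-33.87, 151.21)))  # ~1.01
```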