You are viewing a single comment's thread from:

RE: Modeling information in an information rating system

in #hivemind • 3 years ago

The best and only method that respects freedom is for the reader to do their own research. I'm not in favor of adding algorithms to gauge information.

You are allowed to be ignorant and parrot false statements.

Unique opinions can be true, or the best explanation based on the current data and still turn out to be wrong, and no AI could accurately gauge that. It's my job to decide whether I agree or not.

A completely false statement can be rated as true by all available official and unofficial sources, so there's no way for AI to accurately rate it.

I don't like this type of thinking, and if it's added to the platform it will turn many, if not most, freedom-loving users away.


It's great when people "do their own research", but it's important to recognize the limitations of that approach.

In my first post, I essentially argue that no one does their own research on all topics (or even most topics). Not only that, I don't believe that any human has enough time and mental capacity to do so, even if they wanted to expend that much effort.

I also think you're getting confused about what types of algorithms are primary to the operation of a trust network. They are not mostly AI algorithms, although AI algorithms could be used to supplement the information provided by a trust network (as mentioned, I'll discuss this in a later post). Instead, these are "trust algorithms" that allow you to rate your trust in information sources. This is something you already do now; I'm just looking at ways that a computer can help you make computations based on the information you give it about your trust in different information sources.
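To make that a little more concrete, here's a rough sketch in Python of the kind of computation I mean. The source names, trust weights, and weighting scheme are all made up for illustration; this is not an actual design, just the flavor of "bookkeeping on trust judgments you already made":

```python
# Toy sketch only: a user rates their trust in sources they already know,
# and the computer aggregates those sources' ratings of a claim using the
# user's own trust weights. The result is not "objective truth"; it only
# reflects the trust information this particular user supplied.

# Trust I've assigned to sources, from -1.0 (active distrust) to 1.0 (full trust).
my_trust = {
    "alice": 0.9,
    "bob": 0.4,
    "some_news_site": -0.5,
}

# How those sources rated a particular claim: +1 = "true", -1 = "false".
ratings_for_claim = {
    "alice": +1,
    "bob": +1,
    "some_news_site": -1,
}

def my_score(trust, ratings):
    """Weight each source's rating by how much *I* trust that source."""
    rated = [s for s in ratings if s in trust]
    weighted = sum(trust[s] * ratings[s] for s in rated)
    total_weight = sum(abs(trust[s]) for s in rated)
    return weighted / total_weight if total_weight else 0.0

print(my_score(my_trust, ratings_for_claim))  # 1.0 -> leans strongly "true" *for me*
```

Note that nothing here is deciding truth for you: the computer is just keeping track of trust judgments you made yourself and applying them consistently.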

In fact, a lot of the ideas for this project are aimed at giving us more freedom than we have today. Webs of trust allow us to establish trust relationships outside of centralized sources of information, while helping to prevent us from being duped by false sources of information created through the many ways that today's information platforms can be gamed by savvy information professionals.

How does my deeming information trustworthy make it trustworthy? It sounds like you want to build an easy way to create echo chambers.

A trust network can certainly be an echo chamber. Many people today foolishly design their trust networks as an echo chamber because they prefer to avoid information that contradicts what they believe to be true.

There's certainly nothing about using computers that will force people to accept information from diverse sources, for example. But computers could help you analyze the network you're building and at least let you know when you're narrowing your sources of information. They could also inform you in cases where the vast majority of people disagree with the opinions of your trust network. What you do with that information, as always, is up to you. But if you find that your trust network is rating a lot of information incorrectly (in the sense that it gives a lot of wrong answers in cases where you're able to verify the truth independently), you may be incentivized to change your network in a way that makes it more useful.
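Again, purely as an illustration (the "category" labels, scores, and thresholds below are invented for the example, not part of any actual design), the kind of feedback I have in mind is simple bookkeeping like this:

```python
# Toy sketch: two kinds of friendly warnings a trust-network tool could give.
# What the user does with a warning remains entirely up to them.

from collections import Counter

def narrowing_warning(my_sources, min_categories=3):
    """Flag when my trusted sources cluster into very few categories."""
    categories = Counter(src["category"] for src in my_sources)
    if len(categories) < min_categories:
        return f"Heads up: your trusted sources span only {len(categories)} category/categories."
    return None

def consensus_gap_warning(my_network_score, overall_score, tolerance=0.75):
    """Flag claims where my network's rating diverges sharply from the wider user base."""
    if abs(my_network_score - overall_score) > tolerance:
        return "Heads up: your network rates this claim very differently from most users."
    return None

my_sources = [
    {"name": "alice", "category": "independent blog"},
    {"name": "bob", "category": "independent blog"},
]
print(narrowing_warning(my_sources))     # only 1 category -> warning
print(consensus_gap_warning(0.9, -0.6))  # gap of 1.5 -> warning
```

Neither warning hides anything or changes a rating; it just surfaces information about the network you built, which you are free to ignore.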

Sure, but it would depend on how intricate you make it. Unfortunately, consensus doesn't make something right or true, so I see this as unnecessary and possibly doing more harm than good. Think China's social scoring system...

I've tried to make clear already how completely different this is from social scoring systems. I'm not sure what else I can say on that subject.

But as to consensus, I will argue that some level of consensus is needed for any organized human activity. It doesn't have to be universal consensus, but without some local consensus, none of us will be able to accomplish much.

I've accomplished everything I've done without a consensus. I look up what I want and read all sides. I've never trusted, nor will I ever trust, another's opinion on a subject. A trust score is just another worthless waste of time that represents nothing valid in reality.

Twitter has just launched something similar, and it is only used to silence voices. While your idea may be a bit different, I feel it will be abused or cause the masses to limit their research. This will continue the trend of laziness tech has brought about and even cause relevant info to be skipped over because it isn't popular enough.

This could even be used to target users with info, similar to targeted ads. Freedom lovers find this to be an overreach and creepy. It could also be used to target and smear info and people.

Hive needs to become decentralized. Not only will this not bring us there; wasting time on such a thing, if it doesn't ensure Hive never becomes decentralized, will surely slow that process down.

Twitter has just launched something similar, and it is only used to silence voices

Birdwatch: https://www.zdnet.com/article/twitter-introduces-community-based-birdwatch-pilot-to-address-misinformation/

we already have it: @quackwatch lol

I don't know what you do, so I can't comment on how much consensus is required between you and others to do it.

But I can say with certainty that many of the services you rely on (to do the things you do) required some consensus among other people (you're not necessarily included in that consensus, but you still benefit from it). As a simple example, there had to be consensus on network standards to allow the computers you use to communicate with those of other people. You rely on consensus of this sort all the time in your daily life, whether you realize it or not. Without consensus in many areas, human technology wouldn't have advanced much.

Oh, and as for your final concern, it's also worth noting that any potential integration of a trust network into Hive would almost certainly be as a second-layer application, so its usage would be strictly optional.

most people would do their "own research" through Google. the algorithm itself is not the issue. if Google didn't skew towards one side (it actually didn't at one point, if you remember), it could've embodied free speech.

I'm not sure that's relevant, since Google still provides most of the links that exist; they are simply further down the list. Also, Google's popularity is waning and we have quite a few alternatives now, with more coming.

And... believe it or not, many still use ink and paper for their research. Also, Google doesn't change the wording in the links, so if you find a paper Google doesn't like, you still get to read it unredacted.

I'm talking about the flaw in your logic when you say "The best and only method that respects freedom is for the reader to do their own research."
When you read a book or paper, whether it's in ink or on your screen, the publisher + writer + many more people involved are doing research for you. Google happens to be one example.

So, individual perspective is a flaw? I'm not sure I understand, and I'm pretty sure it's because you haven't stated your case clearly...

yes. by your definition, "freedom loving users" can't exist. nobody can do their own research, because the fact that you're even reading something in ink means you'll be influenced by the bias that exists in the distribution channels and the investors behind the research.
(unless you're talking about the few people that figure out physics and how reality works..
but even they are not immune to the same economy that pushes readers to one side or the other)

i said "flaw" because the more relevant question would probably be something like where do you wanna draw the line and why, rather than simply saying people should do their own research.

So, in your world ppl only research one side of a topic? Also, your flawed take assumes everyone believes everything they read. This is not the case...

And "freedom loving users can't exist?" What? You do know what the basic qualifiers for a false statement are? You use one for each of your statements.

Not everyone is like you, so you cannot use your own actions to claim 'nobody' will do this or that those types 'can't exist'.

so you can't figure out i'm using your definition.

but my apologies, cuz i think it's going way off topic now. i wanna point out that what you see today, companies using certain algorithms to take away your freedom, doesn't necessarily reflect what a "trust network" could mean.

influence happens whether you agree or disagree with an idea. every piece of info that you receive is delivered to you by a medium and rated either subtly or directly. if you're just gonna say "freedom loving users" and "do your own research" and count out every process that involves automation or AI, you're gonna be left with pretty much nothing, because using ink or some manual human process doesn't make it different.

i doubt "freedom loving users" will be driven away, not to mention it could only happen after someone can define them to begin with. even if it does happen most users will flock to whatever is proven to be the best in the market. most likely it'll involve some AI and automation (if not complete) because right now it seems like the most suitable way to scale, if not the only way. hopefully it'll have some decentralization too.

I don't like this type of thinking, and if it's added to the platform it will turn many, if not most, freedom-loving users away

That's true. We can see AI in action on YT, FB, or TT. I will power down if it comes here.