You are viewing a single comment's thread from:

RE: Steemit and The Dynamic of the Authoritarian in Assumption

in #steemit 8 years ago

I was just cautioning that the user may think they like something or that they ought to like something based on some idea of their self image or whatever, but it might be more useful to put stuff in front of the user that you think they'll want, and they can choose to axe it and you can make a better guess next time.

I'm sure that people are pretty clear about what they like. While they may have reasons for liking those things that we may disagree with, why is that our problem? I mean that literally – why should I care if people are upvoting things that make them feel good? Is that my business? No; what they do doesn't affect what is presented to me unless I've decided that I like things that they're upvoting – in which case I don't care why they're doing it, I only care that they're doing it in ways that I like.

What you are maintaining is that there is some global measure of "goodness" that we need to try and maximize, and part of that measure of "goodness" is artificial diversity.

Not to put too fine a point on it, but I think that's bullshit.

I have no idea what the best things for you to see are. I wouldn't even hazard a guess, especially if I don't have to guess at all. If I let you tell me the sort of things that you like, I can give you more stuff like that. If you tell me the people that like the sort of things that you like, I can give you more of the stuff that they like – which probably involves a fair chunk of diversity in ways that you are likely to like.
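That "people who like what you like" loop can be sketched in a few lines. This is a toy, not anything Steemit actually runs; the data shapes (sets of item ids per user, overlap as a similarity score) are my own assumptions:

```python
# Toy sketch: recommend items upvoted by users whose upvote history
# overlaps yours. Overlap size stands in for "shared taste".
def recommend(my_likes, others_likes, top_n=3):
    """my_likes: set of item ids; others_likes: {user: set of item ids}."""
    scores = {}
    for user, likes in others_likes.items():
        overlap = len(my_likes & likes)   # how much taste we share
        if overlap == 0:
            continue                      # no signal from this user
        for item in likes - my_likes:     # things they like that I haven't seen
            scores[item] = scores.get(item, 0) + overlap
    # highest-scoring unseen items first
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

Note that the diversity falls out for free: a neighbor who shares two of your interests also drags in their third.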

We are specifically talking about discovery here: providing a service that users will want to engage with and come back to.

The downside is the user may not get a chance to say they like a thing because it was never put in front of them, especially if the raw signal is such a firehose that they drown whenever they go off on their own.

The firehose is real. The firehose is real even now, with no significant incentive to be a consistent creator on Steemit. We know what the current situation with bot swarms and strategic voting leads to – we can see it. Even in that terrible situation, people are posting a lot of content to the platform, so much so that it is literally impossible to keep up with anything but fairly low entropy tags.

So, upfront, realistically, we know that it will be impossible for users to see everything or every type of thing and flag it to the system's attention. Literally impossible.

But that's okay because they don't need to. With a web of trust system all they need to do is indicate some of the content that they like. The content will be related to other things, and the next time that the user looks at the presentation they will see different things – some of which they'll like.
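That indicate-a-few-likes, see-a-refreshed-presentation cycle could look something like this toy sketch. Tag-based content and the simple count-based weighting are illustrative assumptions of mine, not how Steemit models anything:

```python
# Toy sketch of the feedback loop: each round, the user flags a few
# liked items; the profile updates; the next presentation re-ranks.
def update_profile(profile, liked_items):
    """Bump the weight of every tag on an item the user just liked."""
    for item in liked_items:
        for tag in item["tags"]:
            profile[tag] = profile.get(tag, 0) + 1
    return profile

def rank(profile, items):
    """Order candidate items by how well their tags match the profile."""
    score = lambda it: sum(profile.get(t, 0) for t in it["tags"])
    return sorted(items, key=score, reverse=True)
```

Run those two functions in a loop and you have the "repeat ad infinitum" above: no user ever has to see everything, they only have to keep nudging.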

Repeat ad infinitum.

But in order to maneuver through the vast, interrelated vector space that is all of the content posted to a platform, we have to individuate the experience of a user. We know that the current globalized view of value is wrong. We know it's wrong because we look at what's being presented to us by the system and we don't like it. It's a very empirical means of observation.

Give me a reasonable web of trust that's individuated to my expressed desires and I can lens almost anything that happens on the platform. New content? Sure, sort it such that the stuff I'm most likely to like is near the top. Users? Same deal; users with a higher inherited trust value are more likely to be upvoting content I'm interested in. Days of the month? Sure, if they're first-class entities; maybe there are days of the last month whose activity I am more likely to be interested in. It's one more way to explore the space.
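For what it's worth, a bare-bones version of "inherited trust" plus trust-weighted sorting might look like this. The one-hop propagation and the damping factor are assumptions of mine for the sketch, not anything the platform specifies:

```python
# Toy sketch: trust spreads one hop from the people I trust to the
# people they endorse, then posts sort by the trust of their voters.
def inherit(trust, endorsements, damping=0.5):
    """trust: {user: float} I assigned directly.
    endorsements: {user: [users they vouch for]}.
    Endorsed users pick up a damped share of their endorser's trust."""
    inherited = dict(trust)
    for user, t in trust.items():
        for other in endorsements.get(user, []):
            inherited[other] = inherited.get(other, 0.0) + damping * t
    return inherited

def trust_sort(posts, trust):
    """posts: [{'id': ..., 'voters': [...]}]; most-trusted votes first."""
    weight = lambda p: sum(trust.get(v, 0.0) for v in p["voters"])
    return sorted(posts, key=weight, reverse=True)
```

The same `trust_sort` lens works on any first-class entity with voters attached – posts, users, days – which is the point.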

But you have to trust that the user can reasonably express their interest. If you can't accept that the user can reasonably express their interest, you've already violated the central premise of votes on Steemit anyway. If users can't be trusted to vote according to their own interests, the idea that steem value could possibly represent anything like actual worth is ludicrous. (The current state of affairs might suggest that users can't be trusted to make their own decisions, but I reject that out of hand as philosophically untenable and unfairly judged, given the distortion induced by people "playing the game" rather than "looking for content" – the latter being what the system is designed to encourage.)


While they may have reasons for liking those things that we may disagree with, why is that our problem? I mean that literally – why should I care if people are upvoting things that make them feel good?

I don't mean that the things they like are stupid and a waste of time; I mean they upvote something out of obligation (I followed this person and they could use the vote...) or because they think it might be something they like (yeah, I'm a good Christian, I'm going to upvote this scripture), whereas, given the chance, the system could be actively showing them content, developing a model of what they might like, and then vetting it with the user.

In hindsight, I guess such a system doesn't solve this problem as the user would be free to continue to make these errors. Damn.

Is it just me, or is it getting cramped in these margins?