You are viewing a single comment's thread from:

RE: Announcing the Sndbox Video [A Short Film by Humdinger & Sons]

in #marketing • 6 years ago (edited)

I think we are actually calculating different metrics here, so comparing them doesn't really make sense.

Steemreports measures self-votes as a percentage of all votes given, whereas, if I'm not mistaken, your metric measures self-votes as a percentage of the maximum possible reward pool allocation?


Quite right; that was my point, which I hope I made clearly enough: comparing them is not useful.

But no, that's not my method. The metric we use is the extrapolated self-vote return on investment. The self-vote payout is determined over a given period (in this case one week) as a percentage of the voter's investment, which is their net / available SP, and then extrapolated to a year (×52 in this case) as if they voted this way all the time, to give a kind of annual return.

The net / available SP is adjusted for changes in SP so that it is accurate at the time each vote is cast, which is also what matters to the blockchain, and is why we do it. Both of our metrics are sensitive to large changes in SP over the given sampling period.
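To make the calculation concrete, here is a minimal sketch of that idea in Python. This is not the actual sadkitten code; the data layout, field names, and numbers are illustrative assumptions.

```python
# Illustrative sketch of an extrapolated self-vote ROI, per the description above.
WEEKS_PER_YEAR = 52

def extrapolated_self_vote_roi(self_votes, weeks_sampled=1):
    """Annualize the return from one sampling period of self-votes.

    `self_votes` is assumed to be a list of dicts, each with:
      - 'payout_sp': payout attributable to that self-vote, in SP
      - 'net_sp':    the voter's net / available SP at the moment the
                     vote was cast (adjusted for SP changes in the period)
    """
    # Each vote's return is measured against the SP the account actually
    # held when that vote was cast, so power-ups or power-downs during
    # the week don't distort the result.
    period_return = sum(v["payout_sp"] / v["net_sp"] for v in self_votes)

    # Extrapolate the sampled period to a full year, as if the account
    # voted this way all the time.
    return period_return * (WEEKS_PER_YEAR / weeks_sampled) * 100  # percent

# Example: 10 self-votes in one week, each paying 0.5 SP on 10,000 net SP
votes = [{"payout_sp": 0.5, "net_sp": 10_000.0}] * 10
print(f"{extrapolated_self_vote_roi(votes):.1f}% annualised")  # -> 2.6%
```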

OK, I think I get your method now, thanks. Is your code for obtaining the self-voting information open source?

EDIT: I have confirmed that steemreports does not take dynamic SP changes over the period into account, so it has the potential to be very wrong and misleading.

This limitation has always been explained in the information popup on the chart page. Whether it actually results in inaccuracy in the case of outgoing votes is IMO subject to definitional debate.

Yes, the code is open source.

I have found this limitation to be quite severe for my own metric. I'd be interested to know what you consider this definitional debate to be in the context of your metric. Suffice it to say, the metrics are incompatible.

I think that not adapting to changes in SP generally represents users' intentions better, whereas adapting would represent outcomes better.

In cases of sophisticated abuse the latter approach would probably also be preferable, but your method does nothing to deal with more sophisticated abuse either.

I can see the argument for that, though I think the distinction is not clear to users. People clearly use it to argue about outcomes, which is why we're here on this thread discussing @sndbox's confusion.

In what ways do you think our method is lacking in terms of more sophisticated abuse?

I think people use it to argue/reason about both intentions and outcomes.

Your method ignores sock-puppets, and while this is entirely understandable, since dealing with that kind of abuse is a much tougher problem, it may nevertheless have the effect of sweeping abuse under the carpet: out of sight but still present. With it in the open, we get to analyse the dynamics of the situation and perhaps learn how to improve the more fundamental aspects of the blockchain. With it pushed out of sight, the problems besetting the platform are conflated and harder to learn from.

That's very much beside the point of shortcomings in your algorithm, but I'm certainly aware of the limitations and shortcomings of my current sadkitten algorithm. I'm working on a collusive voting detection and scoring algorithm, which will probably inhabit a different avatar than sadkitten. One bot cannot, and should not be required to, embody a total solution; there probably is no such solution.

I do disagree with the test-tube approach to abuse. This is a dynamic system, and you're not going to get much out of standing at a distance with a magnifying glass; one must act.