RE: MasterNode Alternative, Forking Solution, Efficient Self-Healing Network with simple Math

in #gridcoin · 8 years ago

These are good questions. t is designed never to be infinite, precisely to prevent this from happening. t could be a minute, or 10 minutes, but in an implementation such as this one it's a hardcoded value (unless we extend the algorithm further and make it dynamic too, but I was not going that deep).

I should have asked: it is possible that after every iteration of the loop, that a single node accumulates more and more traffic?

It should add a lot of randomization due to factors that are difficult to predict.

Are you saying that 1) there is already randomization inherent to the model, and/or 2) that randomization should be intentionally introduced?


I should have asked: it is possible that after every iteration of the loop, that a single node accumulates more and more traffic?

No, with the current topology it's not possible. The principle we have in place is a single-threaded blocking FIFO, meaning more connections lead to higher latency, and that is what yields fair traffic distribution. The more traffic a node accumulates, the higher its latency, which lowers its rank, so it receives less traffic.
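A minimal sketch of that feedback loop (the function names and numbers here are my own illustration, not taken from any implementation):

```python
# Illustrative sketch: in a single-threaded blocking FIFO, latency
# grows with the number of queued connections, and higher latency
# lowers a node's rank, so it attracts less new traffic.
# All values below are hypothetical.

def latency(connections, per_request_cost=1.0):
    # In a blocking FIFO, each queued request adds a full service time.
    return connections * per_request_cost

def rank(node_connections):
    # Lower latency -> better (higher) rank score.
    return 1.0 / (1.0 + latency(node_connections))

# A node with 50 queued connections ranks below one with 5:
assert rank(50) < rank(5)
```

The self-correcting behavior comes from rank being a decreasing function of load: accumulating traffic is exactly what takes a node out of the top spot.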

Are you saying that 1) there is already randomization inherent to the model, and/or 2) that randomization should be intentionally introduced?

  1. It's definitely inherent to the model, as per the answer above, controlled by both predictable and unpredictable values. It's also intentional: while unpredictable values add a level of randomization, predictable values keep it within the threshold (dynamically calculated from known values). The draft design of the model aims to produce the desirable effects as side effects wherever possible, in order to avoid the need for too much coding and change.

No, with the current topology it's not possible. The principle we have in place is a single-threaded blocking FIFO, meaning more connections lead to higher latency, and that is what yields fair traffic distribution. The more traffic a node accumulates, the higher its latency, which lowers its rank, so it receives less traffic.

Measure latency of each, multiply that by the factor of rejected blocks, assign that value as ‘a score’.

What if the top node has no rejected blocks? Is that possible? What if the number of rejected blocks by other nodes is so high that the top node remains the top node, even taking higher latency into account? Is there a guarantee that one node will never stay at the top? Could a malevolent coalition of nodes take advantage of this model?

Sorry if these questions are addressed by basic knowledge, again I have little experience here.

Under that condition, the node will start accumulating more connections than usual, which directly leads to much higher latency. While it's theoretically possible that it will survive a few iterations, the probability drops drastically with each one. Latency plays an important part here, and the number of connections increases latency exponentially, hence the drastic drop in the probability that a node can stay at the top while accumulating a large number of connections at the same time.

The final result should be that nodes with zero rejected blocks get approximately the same number of connections.

Number of connections is also an important deterministic parameter in this model.
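Putting the pieces mentioned so far together (measured latency, the rejected-block factor from the quoted formula, and connection count), a scoring function might look like the sketch below. The weighting is my own guess for illustration, not the actual algorithm:

```python
# Hypothetical score: lower is better. It combines measured latency,
# a factor derived from rejected blocks, and connection count, as
# discussed above. The exact formula is illustrative only.

def score(latency_ms, rejected_blocks, connections):
    # "Measure latency, multiply by the factor of rejected blocks":
    # a node with zero rejected blocks keeps its raw latency as score.
    rejected_factor = 1 + rejected_blocks
    # Connections feed back into effective latency (blocking FIFO),
    # so a busy node scores worse even with zero rejected blocks.
    effective_latency = latency_ms * (1 + connections)
    return effective_latency * rejected_factor

# A clean top node that accumulates connections loses its advantage:
quiet_clean = score(latency_ms=10, rejected_blocks=0, connections=2)
busy_clean = score(latency_ms=10, rejected_blocks=0, connections=40)
assert busy_clean > quiet_clean
```

This is the property behind the earlier answer: even with zero rejected blocks, the connection-count term alone is enough to push an over-loaded node down the ranking.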

No problem with asking; the post was intentionally explained in common language and common logic so everyone can get involved.

In addition, for the purpose of a better explanation: say 10 of us communicate with the same node. In order for my packet to get an answer, assuming I'm the last one who sent, the node needs to answer the other nine first (each with a whole round trip). That's the key point of how latency caused by single-threaded networking prevents the scenario you describe.
(Of course, it's possible that I am missing something.) This is still at the level of theoretical discussion, so don't take anything as 100% correct.
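The ten-clients example can be put in rough numbers: in a blocking FIFO, the last sender waits out all the round trips queued ahead of it (the round-trip time below is a made-up constant, just for illustration):

```python
# Blocking single-threaded FIFO: the last of N clients waits for
# N-1 full round trips before its own request is served.
# The round-trip time is an assumed illustrative value.

ROUND_TRIP_MS = 20  # hypothetical per-request round trip

def wait_before_served(position_in_queue, rtt_ms=ROUND_TRIP_MS):
    # position_in_queue is 1-based: the first client waits 0 ms.
    return (position_in_queue - 1) * rtt_ms

# Ten of us talk to the same node; I sent last (position 10),
# so I wait nine full round trips:
assert wait_before_served(10) == 9 * ROUND_TRIP_MS
```

The wait grows linearly with queue position here; with more elaborate per-request work it grows even faster, which is what makes accumulating connections so costly for a node's rank.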

I understand better now. Thanks for the patience and explanations.