
RE: Researching a FLOP and Energy Based Model for GridCoin Reward Mechanism

in #gridcoin · 6 years ago

You've done an awesome job moving from the plain idea to the above study!

  • The BOINC credit system already attempts to reward crunchers fairly. Is there any point in creating a secondary system? BOINC has direct access to the hardware, and it's much easier to improve the system within BOINC than to build a new one from scratch. Improvements should be made in cooperation with the BOINC developers and be contained within the BOINC platform.

  • Location: should such a system be incorporated into a blockchain? I'm not convinced it should. If yes, what are the pros and cons? If not, where should it live?

I receive exactly 1 GRC for doing the work mentioned above (i.e., doing either 1.96 GFLOP on my CPU, or 30.5 GFLOP FP32 on my GPU, or 2.6 GFLOP FP64 on my GPU).

  • Let's assume project X has both CPU and GPU FP32 workunits available. The GPU will process 15x more tasks than the CPU, but they would be awarded the same amount of credit? That would be wasteful. The current system enforces wise allocation of resources, at least within some projects, at least a bit.

Benefits include a GRC value tied to a fundamental physical asset

  • I'm not convinced this can be considered a benefit. Here the physical asset, although not fundamental, is the FLOP (ideally it would be something like MAX(SOLUTION/FLOPs), but that's a different topic). Using energy (joules) as a measure is like using mass (grams) to value TV screens or cars.

I'm sorry if my comment sounds so critical; I really appreciate your work.


I'm sorry if my comment sounds so critical; I really appreciate your work.

No need to apologize, thanks for your comment!

Is there any point in creating a secondary system? BOINC has direct access to the hardware, and it's much easier to improve the system within BOINC than to build a new one from scratch. Improvements should be made in cooperation with the BOINC developers and be contained within the BOINC platform.

I'm not insistent on this model being implemented one way or the other - or at all, in fact, if another, better alternative is proposed. As I mentioned briefly regarding WUProp, which is a BOINC project, we may already have the information necessary to do this. That being said, there is the question of whether we want to be able to bring in other distributed computing projects from outside BOINC, which would require a standardized measurement. For now, I'm just trying to get the conversation started.

Let's assume project X has both CPU and GPU FP32 workunits available. The GPU will process 15x more tasks than the CPU, but they would be awarded the same amount of credit? That would be wasteful. The current system enforces wise allocation of resources, at least within some projects, at least a bit.

The GPU will do 15x more FLOP and receive the same amount of credit for it as a CPU crunching a CPU-only task that required 1.96 GFLOP. If a CPU tried to run those same 15 GPU tasks, it would get the same credit, but it would take 15x more energy to do so - i.e., it would cost 15x as much, so why do it?
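The 15x figure can be checked against the per-GRC rates quoted earlier in the thread (1.96 GFLOP on CPU, 30.5 GFLOP FP32 on GPU); a quick back-of-the-envelope sketch, where only the two quoted rates are taken from the study and the rest is arithmetic:

```python
# Per-GRC work rates quoted in the thread (study's example numbers).
GFLOP_PER_GRC_CPU = 1.96   # CPU work per 1 GRC
GFLOP_PER_GRC_GPU = 30.5   # GPU FP32 work per 1 GRC

# Work ratio: how much more FLOP the GPU delivers for the same reward.
work_ratio = GFLOP_PER_GRC_GPU / GFLOP_PER_GRC_CPU
print(f"GPU does {work_ratio:.1f}x the FLOP per GRC")  # ~15.6x

# If rewards are calibrated so both hardware classes earn the same GRC
# per joule, that same ratio is also the energy penalty a CPU pays to
# crunch GPU-sized tasks for identical credit: roughly 15.6x the joules.
```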

As to whether it's fair that the GPU gets the same credit as the CPU, the idea - and feel free to take shots at it - is that many projects simply can't be parallelized on GPUs - otherwise they definitely would be, as GPU computations are much faster. Therefore, if a task is CPU-only, that's because it needs a CPU. At that point, the question is, how can we compare such tasks? My suggestion is to base it on the Joule, as both types of hardware require energy to run. I'm open to other suggestions.
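A minimal sketch of what Joule-based crediting could look like, assuming a single calibration constant (joules per GRC) and a known efficiency per hardware class. The function and all numeric values below are hypothetical, chosen only to reproduce the 1 GRC rates quoted earlier:

```python
# Hypothetical calibration: 1 GRC per 1.96 joules of reference work.
JOULES_PER_GRC = 1.96

def task_credit(gflop: float, gflop_per_joule: float) -> float:
    """Credit a task by the energy its hardware class needs to run it."""
    joules = gflop / gflop_per_joule
    return joules / JOULES_PER_GRC

# A CPU-only task and a GPU FP32 task, using illustrative efficiencies
# (1 GFLOP/J for the CPU, ~15.6 GFLOP/J for the GPU).
cpu_grc = task_credit(1.96, gflop_per_joule=1.0)
gpu_grc = task_credit(30.5, gflop_per_joule=30.5 / 1.96)
print(cpu_grc, gpu_grc)  # both 1.0 GRC: the Joule makes them comparable
```

The point of the sketch is only that energy gives CPU-only and GPU tasks a common denominator, which raw FLOP counts do not.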

I'm not convinced it can be considered as a benefit. Here the physical asset, although not fundamental, is FLOP. (ideally would be something like MAX(SOLUTION/FLOPs), but it's a different topic). Using energy (joules) as a measure is like using mass (grams) to value tv screens or cars.

Ideally, I would like to see all users rewarded for the FLOP they contribute. The Joule was just an intermediary for comparing CPUs and GPUs. As a side result, it also estimates the energy consumption of each individual user.

Just to be clear, I'm not suggesting that people who use more energy should be rewarded more - somewhat the opposite: more efficient hardware would always yield more GRC than less efficient hardware under this model, which isn't the case right now, as it depends on which project you run.
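The "efficiency is always rewarded" property can be sketched as follows: if GRC is paid per FLOP at a fixed reference rate, then GRC earned per joule of electricity scales directly with a device's GFLOP-per-joule efficiency. The reference rate is taken from the CPU example quoted above; the two device efficiencies are hypothetical:

```python
# Reference reward rate from the thread's CPU example: 1 GRC per 1.96 GFLOP.
GFLOP_PER_GRC = 1.96

def grc_per_joule(gflop_per_joule: float) -> float:
    """GRC earned per joule spent, for a device of the given efficiency."""
    return gflop_per_joule / GFLOP_PER_GRC

old_cpu = grc_per_joule(1.0)  # hypothetical older CPU: 1 GFLOP/J
new_cpu = grc_per_joule(2.0)  # hypothetical newer CPU: twice as efficient
assert new_cpu > old_cpu      # more efficient hardware always earns more
```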