
RE: Researching a FLOP and Energy Based Model for GridCoin Reward Mechanism

in #gridcoin • 6 years ago

Or malicious crunchers that try to pass off their older hardware as newer, more energy-efficient hardware in order to get a greater reward, whether through a BIOS flash of their video card or some type of hacking tool.

Still, a very interesting idea/analysis. Would this system also not heavily encourage FP64 and CPU projects above FP32 projects?

In terms of hardware, how would we deal with relatively obscure or less commonly used hardware, and how would we obtain FLOPs figures for it? And how would we ensure that the reported FLOPs of any given piece of hardware are accurate?


Thanks for the good questions!

Or malicious crunchers that try to pass off their older hardware as newer more energy efficient hardware in order to get a greater reward.

The way a user could theoretically manipulate the ER is by owning a large collection of newer, more energy-efficient hardware of one type (e.g. CPUs) and hiding it: this would lower the corresponding number in the ER and thus increase the equivalent amount of work they appear to be doing. I haven't thought of a good way to fix this yet. I was thinking of this proposal as a long-term solution. At GridCoin's current stage this isn't necessarily a problem, so it's not a priority, but if the network scales to 10x or 100x its current size, I think it will become much more important, and hopefully new ideas (maybe replacing this one) will have come forth by then.
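To make the attack concrete, here is a toy sketch. It is not the proposal's actual formulas; it simply assumes the ER for a hardware class is the average FLOPS-per-watt of the hardware crunchers report, and that credited "equivalent energy" for a batch of work is FLOPs done divided by that ER. All numbers are made up for illustration.

```python
# Toy model (illustrative only, not the actual reward formulas):
# ER for a hardware class = average FLOPS-per-watt of *reported* hardware,
# and credited "equivalent energy" = flops_done / ER.

def efficiency_ratio(reported_pool):
    """Average FLOPS-per-watt over the reported hardware pool."""
    return sum(h["flops"] / h["watts"] for h in reported_pool) / len(reported_pool)

def equivalent_energy(flops_done, er):
    """'Equivalent' joules credited for flops_done at reference efficiency er."""
    return flops_done / er

# Hypothetical CPU pool: two efficient new chips, one inefficient old one.
honest_pool = [
    {"flops": 200e9, "watts": 65},   # new, efficient
    {"flops": 180e9, "watts": 65},   # new, efficient
    {"flops": 100e9, "watts": 95},   # old, inefficient
]
# A malicious cruncher hides the efficient chips from the report,
# so only the old hardware enters the ER calculation.
gamed_pool = honest_pool[2:]

work = 1e15  # FLOPs actually crunched (on the hidden efficient hardware)
honest_credit = equivalent_energy(work, efficiency_ratio(honest_pool))
gamed_credit = equivalent_energy(work, efficiency_ratio(gamed_pool))
print(gamed_credit > honest_credit)  # -> True: same work, more credit
```

The point is only that any network-average denominator that relies on self-reported hardware can be dragged down by selective reporting, inflating the attacker's equivalent work.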

Would this system also not heavily encourage FP64 and CPU projects above FP32 projects?

In general, it should encourage using hardware in the most efficient way possible (e.g. not using the 1080 on FP64, or the 7970 on FP32). If you take a look at the table for the i5-6600K, 7970, and 1080, the cheapest task to run there is the 1080 on FP32.
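A quick back-of-the-envelope check of that claim, using approximate publicly listed peak figures (GFLOPS and board/package power). These numbers are rough and for illustration only; the table in the original post is the authoritative source.

```python
# Rough peak specs: (GFLOPS, watts). Approximate, illustrative values.
specs = {
    ("i5-6600K", "FP32"): (224, 91),    # ~4 cores x 3.5 GHz x 16 FLOP/cycle
    ("i5-6600K", "FP64"): (112, 91),
    ("HD 7970", "FP32"): (3789, 250),
    ("HD 7970", "FP64"): (947, 250),    # strong 1:4-rate FP64
    ("GTX 1080", "FP32"): (8873, 180),
    ("GTX 1080", "FP64"): (277, 180),   # FP64 at 1/32 rate
}

def joules_per_gflop(gflops, watts):
    # watts = J/s and gflops = GFLOP/s, so watts / gflops = J per GFLOP
    return watts / gflops

costs = {hw: joules_per_gflop(g, w) for hw, (g, w) in specs.items()}
cheapest = min(costs, key=costs.get)
print(cheapest)  # -> ('GTX 1080', 'FP32')
```

With these figures the 1080 on FP32 costs roughly 0.02 J per GFLOP, while the same card on FP64 costs over 0.6 J per GFLOP, which is why mismatching hardware and precision is so wasteful.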

In terms of hardware, how would we deal with relatively obscure or less commonly used hardware and the obtainment of the FLOPs figures?

I don't know. There might be a way to get that data by benchmarking if it's not publicly available. Or if it's so obscure/uncommonly used, maybe it wouldn't affect the ER much.

And how would we ensure that the reported FLOPs of any given hardware is accurate?

That's the biggest question, and I've only thought about it a bit. Ideally the check would be as simple as possible; maybe run a benchmark on each type of subtask across BOINC projects and cross-check the results against already existing credit data. It's something I have to look into.