An Update on the Threadripper Experiment


It has been almost a week since I started the CPU project comparison. I started out with 15 projects, one of which has now had to be dropped due to computation errors. BOINC does a good job of fetching work for one project for ~12 hours, finishing it, and then moving on to the next project in a round-robin manner.

[Image: IMG_5429.JPG]

  • SRBase: 0.25 Mag - now removed due to computation errors (~32 tasks failed after ~34 h of running). I might add it again in the future, but probably not within the next month.
  • Sourcefinder: 0 Mag - managed to fetch some WUs, but they ended in computation errors. I will probably remove that one as well...
  • Yafu: 0.09 Mag - the project unfortunately had a server error, which cost me almost all of the work I did for it this week.
  • TheSkynetPogs: 0.4 Mag - just reached its turn for the second time, so it has a good reason to be one of the lowest.
  • Nfs@home: 0.53 Mag
  • Drugdiscovery@home: 0.61 Mag
  • Rosetta@home: 0.82 Mag
  • Numberfields@home: 0.9 Mag
  • Citizen Science Grid: 0.98 Mag
  • Universe@home: 1.28 Mag
  • TnGrid: 1.39 Mag
  • Yoyo@home: 1.61 Mag
  • ODLK1: 1.71 Mag
  • VGTU project@home: 4.10 Mag
  • (AmicableNumbers: 20.99 Mag - not in the competition as it runs on my 2 GPUs)

This data should be taken with a grain of salt. As mentioned above, it matters which project has just had its turn; I estimate that this alone can make a difference of up to a third of the magnitude. It also has to be mentioned that some projects do not validate all of my tasks immediately (Citizen Science Grid, for example), so their magnitude lags behind.
Furthermore, magnitude builds up over a period of five weeks. I expect the data to become more meaningful by then, since the fluctuations should shrink to a smaller percentage of the total magnitude and single computation errors won't affect the results as much as they do now.

I plan on making some graphs in the future, as they just always look cooler :) With only one data point per project, however, they would not make much sense yet. A rough idea of what I have in mind is sketched below.
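
Here is a minimal plotting sketch for those future graphs: one magnitude-over-time line per project. The file magnitudes.csv and its columns (date, project, magnitude) are my own convention for logging the values by hand, not anything Gridcoin or BOINC provides.

```python
# Minimal sketch: plot one magnitude-over-time line per project from a
# hand-kept CSV (columns: date, project, magnitude - my own naming).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("magnitudes.csv", parse_dates=["date"])

fig, ax = plt.subplots(figsize=(10, 6))
for project, group in df.groupby("project"):
    ax.plot(group["date"], group["magnitude"], marker="o", label=project)

ax.set_xlabel("Date")
ax.set_ylabel("Magnitude")
ax.set_title("Magnitude per CPU project over time")
ax.legend(fontsize="small", ncol=2)
fig.tight_layout()
fig.savefig("magnitude_comparison.png", dpi=150)
```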

Same shout-out as before: if anyone has an idea of how I could extract the CPU-time data from the project websites, it would be great to get in touch!
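
In case it helps the discussion, this is roughly the direction I am thinking of: a small Python sketch that scrapes per-task CPU times from a stock BOINC results page. The base URL, user id and column index below are placeholders/assumptions, and projects that customise their web pages (or hide results behind a login) will need adjustments.

```python
# Rough sketch, not a polished tool: scrape per-task CPU time from a stock
# BOINC project's results page. URL layout, column order and login
# requirements differ between projects, so treat these as assumptions.
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example-project.org"   # placeholder: the project's URL
USER_ID = 12345                            # placeholder: your user id there

def fetch_cpu_times(base_url: str, user_id: int, offset: int = 0) -> list[float]:
    """Return the CPU times (in seconds) of one page of reported tasks."""
    url = f"{base_url}/results.php?userid={user_id}&offset={offset}"
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    cpu_times = []
    for row in soup.select("table tr"):
        cells = [c.get_text(strip=True) for c in row.find_all("td")]
        # On stock BOINC pages the CPU time is one of the numeric columns
        # near the end of the row; the index below is a guess and has to be
        # checked against each project's table layout.
        if len(cells) >= 8:
            try:
                cpu_times.append(float(cells[7].replace(",", "")))
            except ValueError:
                pass  # skip header rows, "---" entries, etc.
    return cpu_times

if __name__ == "__main__":
    times = fetch_cpu_times(BASE_URL, USER_ID)
    print(f"{len(times)} tasks, {sum(times) / 3600:.1f} CPU hours on this page")
```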