Message boards : Number crunching : Subpar credit per CPU second for R@H. Why?
student_ (Joined: 24 Sep 05, Posts: 34, Credit: 4,765,223, RAC: 1,786)
According to the BOINCstats project credit comparison, Rosetta@home grants significantly less credit than most other projects for the same amount of CPU time. Comparing only the ratios from hosts that work on both projects (e.g. a host that works on Rosetta@home and Einstein@home), Rosetta@home grants about 70% as much credit as Einstein@home, 72% as much as Docking@home, 80% as much as SETI@home, and in general less than average.

Optimizing Rosetta@home's efficiency may not be strictly necessary to increase the network's performance, considering it is almost twice as powerful as it was at the beginning of CASP 7 (37 teraFLOPS then, 70 teraFLOPS now). Since the number of hosts increased by less than a third over that two-year span (65,000 then, 85,000 now), Rosetta@home will probably depend more on Moore's law than on mass appeal for its growth in CPU power.

Still, is the below-average credit rate basically due to the lack of an optimized application, 64-bit clients, etc.? In particular, with Docking@home, could that project's higher FLOPS per CPU second be due to its use of the CHARMM molecular dynamics package?
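A quick back-of-the-envelope check of the growth figures quoted above (Python; the numbers are just the ones from the post, not independently verified):

```python
# Back-of-the-envelope check of the growth figures quoted above.
tflops_then, tflops_now = 37, 70          # teraFLOPS at the start of CASP 7 vs. now
hosts_then, hosts_now = 65_000, 85_000    # active hosts over the same period

print(f"throughput growth: {tflops_now / tflops_then:.2f}x")   # ~1.89x ("almost twice")
print(f"host growth:       {hosts_now / hosts_then:.2f}x")     # ~1.31x (less than a third more)
print(f"per-host change:   {(tflops_now / hosts_now) / (tflops_then / hosts_then):.2f}x")  # ~1.45x
```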
dcdc (Joined: 3 Nov 05, Posts: 1832, Credit: 119,675,695, RAC: 11,002)
> According to the BOINCstats project credit comparison, Rosetta@home grants significantly less credit than most other projects for the same amount of CPU time.

Rosetta's efficiency has no effect here: if you doubled the speed at which Rosetta crunches, the credit granted wouldn't change, because the credit assignment would simply expect twice as much work for each credit.

To change the granted credit, a multiplier would need to be added to the initial calculation that assigns the per-decoy value for each work unit; I believe that is done on the in-house machines before the jobs are released. The same multiplier would also have to be applied to the BOINC claimed credit, otherwise the average claims would drag the decoy's value back down. Alternatively, it could be added to the bakerlab's credit-granting calculation, in which case all of the claimed credit values would have to be multiplied by it...
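To make that concrete, here is a minimal sketch (Python, illustrative only; the function name and the averaging rule are assumptions based on the description above, not the actual crediting code) of why a multiplier only works if it is applied consistently to both the claimed and granted sides:

```python
# Illustrative only: if the granted credit per decoy is the running average of the
# claimed credit per decoy, a one-off boost to the granted value gets dragged back
# toward the unboosted average as new claims arrive, whereas multiplying the claims
# themselves (or the final grant) shifts everything uniformly.

def granted_per_decoy(claims_per_decoy, multiplier=1.0):
    """Average of the claimed credit/decoy values seen so far, scaled by a multiplier."""
    return multiplier * sum(claims_per_decoy) / len(claims_per_decoy)

claims = [14.2, 15.1, 13.8, 14.9]        # hypothetical claimed credit per decoy
print(granted_per_decoy(claims))          # 14.5 credits/decoy - the status quo
print(granted_per_decoy(claims, 1.25))    # 18.125 credits/decoy - uniformly boosted
```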
student_ (Joined: 24 Sep 05, Posts: 34, Credit: 4,765,223, RAC: 1,786)
Rosetta's efficiency has no effect - if you doubled the speed at which Rosetta crunches it wouldn't change the credit assigned, as the basis for the credit assignment would just expect twice as much work for each credit. Thanks for the clarification. I was going on the assumption that credits approximated FLOPS per the relationship used on the main page (daily credit/100,000 = estimated teraFLOPS), which doesn't seem to reflect the actual situation. How does the multiplier work? Maybe it actually does try to estimate the floating point operations done to produce one decoy for each workunit, or what? |
dcdc (Joined: 3 Nov 05, Posts: 1832, Credit: 119,675,695, RAC: 11,002)
> Thanks for the clarification. I was assuming that credits approximated FLOPS via the relationship used on the main page (daily credit / 100,000 = estimated teraFLOPS), which doesn't seem to reflect the actual situation.

I'm assuming the first reported tasks come from the bakerlab's in-house clusters, but that might not be true; maybe the test runs aren't included in the results pool. Either way, the first task reported is the one that sets the initial credit granted, i.e. it gets what it claims. Later submissions get the average of all previous claimed credits (per decoy).

I believe the idea was to make one BOINC cobblestone (not sure how that's determined...) equal one credit on R@H, so the multiplier must just be applied to the BOINC benchmark score. In that case it would simply be a matter of increasing that value.

Danny
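A minimal sketch of the per-decoy crediting scheme as described above (hypothetical names; the benchmark-based claim formula is an assumption, roughly ~100 cobblestones per GFLOPS-day rather than the exact BOINC formula):

```python
# Sketch of the per-decoy crediting described above (illustrative, not actual server code).

def claimed_credit(benchmark_gflops: float, cpu_days: float) -> float:
    """Benchmark-based claim: assume ~100 cobblestones per GFLOPS-day of benchmarked speed."""
    return 100.0 * benchmark_gflops * cpu_days

class WorkUnitCredit:
    """Tracks the running average of claimed credit per decoy for one work unit."""

    def __init__(self):
        self.claims = []  # claimed credit per decoy, in the order results are reported

    def report(self, claimed_per_decoy: float) -> float:
        """First report gets what it claims; later reports get the average of all prior claims."""
        granted = claimed_per_decoy if not self.claims else sum(self.claims) / len(self.claims)
        self.claims.append(claimed_per_decoy)
        return granted

wu = WorkUnitCredit()
# First result (perhaps an in-house machine): 3 decoys in 0.1 CPU-days on a 2 GFLOPS host.
print(wu.report(claimed_credit(2.0, 0.1) / 3))   # ~6.67 credits/decoy, sets the initial value
# A later result claims more per decoy but is granted the average of earlier claims.
print(wu.report(claimed_credit(3.0, 0.1) / 3))   # granted ~6.67, not the ~10 it claimed
```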