Message boards : Number crunching : Optimized Rosetta
Demoman (Joined: 4 Feb 08, Posts: 1, Credit: 1,328,407, RAC: 0)

Is there some optimized Rosetta client for SSE, MMX, 3DNow!, etc.?
dcdc (Joined: 3 Nov 05, Posts: 1832, Credit: 119,675,695, RAC: 11,002)

> Is there some optimized Rosetta client for SSE, MMX, 3DNow!, etc.?

There's only one version of Rosetta per platform, which is the one automatically downloaded by BOINC. There are lots of threads on the problems with optimising it for any particular extensions, and dispute as to whether there are any gains to be made... so you might say it's already optimised ;)

HTH
Danny
Allan Hojgaard (Joined: 4 May 08, Posts: 9, Credit: 591,749, RAC: 0)

I am more interested in seeing improvements in speed for the Linux client. I am running a Core 2 T7300 @ 2.0 GHz (Ubuntu 8.04), and its average time to crunch a WU is about 10,000 seconds, while another computer of mine, a rather lowly AMD Sempron 3000+ (WinXP), takes on average about 8,000 seconds. Sure, I get more credits, but I am more interested in how many WUs I can crunch than in points (science over personal gain), so it feels strange that there is no more optimization to be done on the client when the WinXP client can out-crunch a Linux client running on a faster CPU. Are you absolutely sure there is no room for SSE, MMX, 3DNow!, Enhanced 3DNow!, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, SSE4a, or the upcoming SSE5 and AVX from AMD and Intel respectively? Perhaps some other tweaks to make the Linux client run faster?
dcdc (Joined: 3 Nov 05, Posts: 1832, Credit: 119,675,695, RAC: 11,002)

> Are you absolutely sure there is no room for SSE, MMX, 3DNow! [...]? Perhaps some other tweaks to make the Linux client run faster?

The credits directly relate to how much work has been done; there's very little difference between the Windows and Linux clients as far as efficiency goes.
Chilean (Joined: 16 Oct 05, Posts: 711, Credit: 26,694,507, RAC: 0)

> I am running a Core 2 T7300 @ 2.0 GHz (Ubuntu 8.04), and its average time to crunch a WU is about 10,000 seconds, while another computer of mine, a rather lowly AMD Sempron 3000+ (WinXP), takes on average about 8,000 seconds.

Average time is irrelevant to CPU power, because YOU choose how long each WU takes (check your preferences). The default is 3 hrs per WU. You get credit for how many models you predicted within each WU.
Allan Hojgaard (Joined: 4 May 08, Posts: 9, Credit: 591,749, RAC: 0)

Since the default is 3 hours, does that mean that if I set it higher, the client asks for more advanced WUs or spends more time looking for models in a WU?
Adak (Joined: 16 Aug 08, Posts: 14, Credit: 1,136,669, RAC: 0)

> Since the default is 3 hours, does that mean that if I set it higher, the client asks for more advanced WUs or spends more time looking for models in a WU?

Yes.
Mod.Sense (Volunteer moderator, Joined: 22 Aug 06, Posts: 4018, Credit: 0, RAC: 0)

Yes Allan, the complexity of the tasks is identical, but your machine will spend more (or less) time creating models, and therefore create more (or fewer) models for that task. This is why credit is based on work completed. It gives everyone the flexibility to spend an amount of time on tasks that is comfortable for their environment.

It does introduce other challenges, such as when each model of a given task takes 6 hours to complete: all the folks with a runtime preference of less than 6 hours think it is going haywire... oh well :)

By the way, it is the Rosetta Preferences where you configure your runtime preference.

Rosetta Moderator: Mod.Sense
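The behaviour described above can be sketched in a few lines of Python. This is a hedged illustration, not Rosetta's actual code: the assumption is that a task keeps starting new models until the runtime preference is reached, and always finishes the model in progress, which is why a 6-hour model overshoots a 3-hour preference.

```python
# Illustrative sketch (assumed behaviour, not Rosetta source): a task runs
# whole models until the runtime preference is used up, and always finishes
# the model it has started.

def run_task(model_hours: float, preference_hours: float) -> tuple[int, float]:
    """Return (models completed, total hours actually spent)."""
    models = 0
    elapsed = 0.0
    while elapsed < preference_hours:
        elapsed += model_hours  # finish the model in progress, even past the preference
        models += 1
    return models, elapsed

# A 6-hour model with a 3-hour preference still runs about 6 hours:
print(run_task(6.0, 3.0))   # (1, 6.0)
# A 1-hour model with a 3-hour preference completes 3 models:
print(run_task(1.0, 3.0))   # (3, 3.0)
```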
Allan Hojgaard (Joined: 4 May 08, Posts: 9, Credit: 591,749, RAC: 0)

> Yes Allan, the complexity of the tasks is identical, but your machine will spend more (or less) time creating models, and therefore create more (or fewer) models for that task. [...]

I see. So the system works like this: all WUs need 1 day of processing (judging from the Rosetta preferences), and what I am currently doing is crunching 3 hours' worth of it. Then it gets sent to another computer for anywhere from 1 hour to 21 hours until it is completed? And should I decide to take the full day of processing, I would get an enormous amount of credits in exchange for my patience, as well as a bigger chance of becoming "Predictor of the day"?
Mod.Sense (Volunteer moderator, Joined: 22 Aug 06, Posts: 4018, Credit: 0, RAC: 0)

It's much simpler than that. Each task is capable of producing an effectively unlimited number of models. You produce, basically, as many as you like, and send back your results. No need for anyone else to finish anything; they are already working on a few of the other possible models.

Yes, the longer you spend on a task, the more models you will complete, and the more credit you will be granted. There is no exponential bonus for time, just linear credit for the number of models completed. So, for a given task, if you currently got 40 credits for 3 hours of work, then had you done that same task for 12 hours, you'd have received 160 credits. In reality, by the time you get to try a 12-hour preference, you will probably receive a task from a different batch with different characteristics, but you get the idea. And the more models you produce, regardless of whether they are all on a single task or not, the better your chance of winning the user-of-the-day spot some day.

Rosetta Moderator: Mod.Sense
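The linear scaling in the example above (40 credits for 3 hours, 160 for 12) can be written out as a small sketch. The function name and the idea of extrapolating from one observed (hours, credits) pair are illustrative assumptions; the only claim taken from the post is that credit grows linearly with runtime on a given task batch.

```python
# Sketch of the linear credit model described above: credit is proportional
# to models completed, and models completed are proportional to runtime,
# so credit scales linearly with the runtime preference.

def credits_for_runtime(hours: float,
                        baseline_hours: float = 3.0,
                        baseline_credits: float = 40.0) -> float:
    """Extrapolate credit linearly from one observed (hours, credits) pair."""
    return baseline_credits * (hours / baseline_hours)

print(credits_for_runtime(3))   # 40.0  (the observed baseline)
print(credits_for_runtime(12))  # 160.0 (4x the time, 4x the credit)
```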
©2024 University of Washington
https://www.bakerlab.org