Optimized Rosetta

Message boards : Number crunching : Optimized Rosetta

Demoman

Joined: 4 Feb 08
Posts: 1
Credit: 1,328,407
RAC: 0
Message 55460 - Posted: 1 Sep 2008, 17:49:47 UTC

Is there an optimized Rosetta client for SSE, MMX, 3DNow!, etc.?
ID: 55460
Profile dcdc

Joined: 3 Nov 05
Posts: 1832
Credit: 119,675,695
RAC: 11,002
Message 55462 - Posted: 1 Sep 2008, 18:39:27 UTC - in response to Message 55460.  

Is there an optimized Rosetta client for SSE, MMX, 3DNow!, etc.?

There's only one version of Rosetta per platform, which is the one automatically downloaded by BOINC. There are lots of threads on the problems with optimising it for any particular extensions, and dispute as to whether there are any gains to be made... so you might say it's already optimised ;)

HTH
Danny
ID: 55462
Allan Hojgaard

Joined: 4 May 08
Posts: 9
Credit: 591,749
RAC: 0
Message 55542 - Posted: 4 Sep 2008, 18:08:07 UTC

I am more interested in seeing improvements in speed for the Linux client. I am running a Core2 T7300 @ 2.0 GHz (Ubuntu 8.04), and its average time to crunch a WU is about 10,000 seconds, while another computer of mine, a rather lowly AMD Sempron 3000+ (WinXP), takes on average about 8,000 seconds. Sure, I get more credits, but I am more interested in how many WUs I can crunch than in points (science over personal gain), so it feels strange that there is no more optimization to be done on the client when the WinXP client can out-crunch a Linux client running on a faster CPU. Are you absolutely sure there is no room for SSE, MMX, 3DNow!, Enhanced 3DNow!, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, SSE4a, and the upcoming SSE5 and AVX from AMD and Intel respectively? Perhaps some other tweaks to make the Linux client run faster?
ID: 55542
Profile dcdc

Joined: 3 Nov 05
Posts: 1832
Credit: 119,675,695
RAC: 11,002
Message 55543 - Posted: 4 Sep 2008, 18:19:56 UTC - in response to Message 55542.  

I am more interested in seeing improvements in speed for the Linux client. I am running a Core2 T7300 @ 2.0 GHz (Ubuntu 8.04), and its average time to crunch a WU is about 10,000 seconds, while another computer of mine, a rather lowly AMD Sempron 3000+ (WinXP), takes on average about 8,000 seconds. Sure, I get more credits, but I am more interested in how many WUs I can crunch than in points (science over personal gain), so it feels strange that there is no more optimization to be done on the client when the WinXP client can out-crunch a Linux client running on a faster CPU. Are you absolutely sure there is no room for SSE, MMX, 3DNow!, Enhanced 3DNow!, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, SSE4a, and the upcoming SSE5 and AVX from AMD and Intel respectively? Perhaps some other tweaks to make the Linux client run faster?

The credits directly relate to how much work has been done; there's very little difference between the Windows and Linux clients as far as efficiency goes.
ID: 55543
Profile Chilean

Joined: 16 Oct 05
Posts: 711
Credit: 26,694,507
RAC: 0
Message 55551 - Posted: 4 Sep 2008, 22:46:32 UTC - in response to Message 55542.  

I am more interested in seeing improvements in speed for the Linux client. I am running a Core2 T7300 @ 2.0 GHz (Ubuntu 8.04), and its average time to crunch a WU is about 10,000 seconds, while another computer of mine, a rather lowly AMD Sempron 3000+ (WinXP), takes on average about 8,000 seconds. Sure, I get more credits, but I am more interested in how many WUs I can crunch than in points (science over personal gain), so it feels strange that there is no more optimization to be done on the client when the WinXP client can out-crunch a Linux client running on a faster CPU. Are you absolutely sure there is no room for SSE, MMX, 3DNow!, Enhanced 3DNow!, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, SSE4a, and the upcoming SSE5 and AVX from AMD and Intel respectively? Perhaps some other tweaks to make the Linux client run faster?


Average time is unrelated to CPU power, because YOU choose how long each WU takes (check your preferences). The default is 3 hours per WU. You get credit for how many models you predicted within each WU.
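
As an illustrative aside (not from the project docs; the per-hour model rates below are invented): with a fixed runtime preference, a fast and a slow host report roughly the same task time, but the faster one completes more models in that window. A minimal Python sketch:

# Illustrative only: the model rates are made up, not measured Rosetta figures.
RUNTIME_PREF_HOURS = 3  # the runtime preference ("target CPU run time"), default 3 hours

def models_completed(models_per_hour, runtime_hours=RUNTIME_PREF_HOURS):
    # How many models a host finishes before the runtime preference is reached.
    return int(models_per_hour * runtime_hours)

fast_host = models_completed(models_per_hour=4.0)   # hypothetical faster CPU
slow_host = models_completed(models_per_hour=2.5)   # hypothetical slower CPU

print(f"Both hosts run ~{RUNTIME_PREF_HOURS} h per task,")
print(f"but return {fast_host} models vs {slow_host} models in that time.")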
ID: 55551
Allan Hojgaard

Joined: 4 May 08
Posts: 9
Credit: 591,749
RAC: 0
Message 55553 - Posted: 5 Sep 2008, 1:22:51 UTC

Since the default is 3 hours, does that mean that if I set it higher, the client asks for more advanced WUs or spends more time looking for models in a WU?
ID: 55553
Adak

Joined: 16 Aug 08
Posts: 14
Credit: 1,136,669
RAC: 0
Message 55555 - Posted: 5 Sep 2008, 4:32:11 UTC - in response to Message 55553.  

Since the default is 3 hours, does that mean that if I set it higher, the client asks for more advanced WUs or spends more time looking for models in a WU?


Yes.
ID: 55555
Mod.Sense
Volunteer moderator

Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 55557 - Posted: 5 Sep 2008, 9:12:40 UTC

Yes, Allan, the complexity of the tasks is identical, but your machine will spend more (or less) time creating models, and therefore create more (or fewer) models for that task. This is why credit is based on work completed. It gives everyone the flexibility to spend an amount of time on tasks that is comfortable for their environment.

It does introduce other challenges, such as when each model in a given task takes 6 hours to complete. All the folks with a runtime preference that is less than 6 hours think it is going haywire... oh well :)
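
A hedged sketch of that edge case (the "finish the current model before stopping" behaviour is my reading of this post, not official documentation): because a task cannot stop mid-model, its actual runtime can overshoot a short preference.

# Sketch of the runtime-overshoot edge case described above.
# Assumption (from this post, not official docs): a task finishes the model
# it is working on before reporting, so it can run past the preference.
import math

def actual_runtime_hours(pref_hours, hours_per_model):
    # Run whole models until the preference is reached, then stop after the current one.
    models = max(1, math.ceil(pref_hours / hours_per_model))
    return models * hours_per_model

print(actual_runtime_hours(pref_hours=3, hours_per_model=6))  # 6 -> looks "haywire" next to a 3 h preference
print(actual_runtime_hours(pref_hours=3, hours_per_model=1))  # 3 -> matches the preference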

By the way, it is in the Rosetta Preferences that you configure your runtime preference.
Rosetta Moderator: Mod.Sense
ID: 55557
Allan Hojgaard

Joined: 4 May 08
Posts: 9
Credit: 591,749
RAC: 0
Message 55622 - Posted: 8 Sep 2008, 20:06:28 UTC - in response to Message 55557.  

Yes, Allan, the complexity of the tasks is identical, but your machine will spend more (or less) time creating models, and therefore create more (or fewer) models for that task. This is why credit is based on work completed. It gives everyone the flexibility to spend an amount of time on tasks that is comfortable for their environment.

It does introduce other challenges, such as when each model in a given task takes 6 hours to complete. All the folks with a runtime preference that is less than 6 hours think it is going haywire... oh well :)

By the way, it is in the Rosetta Preferences that you configure your runtime preference.


I see. So the system works like this:
All WUs need 1 day of processing (judging from the Rosetta preferences), and what I am currently doing is crunching 3 hours' worth of it. Then it gets sent to another computer for anywhere from 1 hour to 21 hours until it is completed? And should I decide to do the full day of processing, I would get an enormous amount of credit in exchange for my patience, as well as a bigger chance of becoming "Predictor of the day"?
ID: 55622
Mod.Sense
Volunteer moderator

Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 55623 - Posted: 8 Sep 2008, 20:23:36 UTC

It's much simpler than that. Each task is capable of producing an essentially unlimited number of models. You produce, basically, as many as you like, and send back your results. No need for anyone else to finish anything; they are already working on a few of the other possible models.

Yes, the longer you spend on a task, the more models you will complete, and the more credit you will be granted. There is no exponential bonus for time, just linear credit for the number of models completed. So, for a given task, if you currently got 40 credits for 3 hours of work, then had you done that same task for 12 hours, you'd have received 160 credits. In reality, by the time you get to try a 12-hour preference, you will probably receive a task from a different batch with different characteristics. But you get the idea.
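
A rough Python sketch of that linear relationship, using the 40-credits-in-3-hours figure from this example as the only input (real per-model credit varies from batch to batch):

# Linear credit scaling as described above: credit tracks models completed,
# with no extra bonus for longer runtimes.
def expected_credit(runtime_hours, reference_credit=40.0, reference_hours=3.0):
    # reference_credit / reference_hours is taken from the example in this post.
    return reference_credit * (runtime_hours / reference_hours)

for hours in (3, 6, 12, 24):
    print(f"{hours:>2} h runtime -> ~{expected_credit(hours):.0f} credits")
# 3 h -> 40 and 12 h -> 160, matching the figures above.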

And the more models you produce, regardless of whether they are all on a single task or not, the better your chance of winning the User of the Day spot some day.
Rosetta Moderator: Mod.Sense
ID: 55623



