When will CUDA support be ready?

Message boards : Number crunching : When will CUDA support be ready?

todd

Joined: 12 Oct 06
Posts: 2
Credit: 1,215,594
RAC: 0
Message 64790 - Posted: 4 Jan 2010, 18:15:50 UTC

I know this question has been asked before and will continue to be asked until the dev team gets with it.

So, once again: when can we expect CUDA support?
ID: 64790
mikey
Joined: 5 Jan 06
Posts: 1895
Credit: 9,208,737
RAC: 3,249
Message 64814 - Posted: 5 Jan 2010, 9:47:59 UTC - in response to Message 64790.  

I know this question has been asked before and will continue to be asked until the dev team gets with it.

So, once again: when can we expect CUDA support?


I believe the answer is that it has been looked at and that the current spec of GPUs does not make it worthwhile. In short, there is no advantage to using a GPU due to its limitations. I am sure that when those limitations are surpassed, the GPU will again be considered as a resource to be exploited. The devil is always in the details, and the reason we have a CPU and a GPU is that they each do things differently. The CPU can be very precise while the GPU does not have to be, and scientific research MUST be very precise at times.
ID: 64814
Profile dcdc

Joined: 3 Nov 05
Posts: 1832
Credit: 119,821,902
RAC: 15,180
Message 64820 - Posted: 5 Jan 2010, 10:43:20 UTC - in response to Message 64790.  

I know this question has been asked before and will continue to be asked until the dev team gets with it.

So, once again: when can we expect CUDA support?


It's not that the dev team needs to get with it - you need to understand the problem. Rosetta is about developing the software. Porting to a GPU properly is still far more complex than throwing a compiler switch, and that would need doing each time the software was modified. GPUs can be great for some types of static code, but Rosetta isn't that.

There might be an opportunity to make a GPU version of a Rosetta build, but I would assume that would be a side project or done outside the bakerlab, as improving the accuracy and flexibility of the software is the bakerlab's current objective.
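
To make the "more than a compiler switch" point concrete, here is a minimal, hypothetical CUDA sketch - it is not Rosetta's actual algorithm or source, just an illustration of the structural issue: a Monte Carlo-style search is sequential inside a single trajectory, because every move depends on the previously accepted state, so the only cheap parallelism is running many independent trajectories side by side. Reorganising a large, constantly changing code base around that decomposition, and keeping it in step with the CPU version, is exactly the cost described above.

// Hypothetical sketch, NOT Rosetta code: one toy trajectory per thread.
// A single trajectory cannot be split across threads because each step
// depends on the previously accepted state; the parallelism comes from
// running many independent trajectories.
#include <cstdio>
#include <curand_kernel.h>

__device__ float perturb_and_score(float state, curandState* rng) {
    // Stand-in for "perturb the conformation and re-score it".
    return state + curand_normal(rng) * 0.1f;
}

__global__ void independent_trajectories(float* final_scores, int n_steps,
                                          unsigned long long seed) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState rng;
    curand_init(seed, tid, 0, &rng);

    float state = 100.0f;                      // toy starting "score"
    for (int i = 0; i < n_steps; ++i) {        // sequential inside one trajectory
        float trial = perturb_and_score(state, &rng);
        if (trial < state) state = trial;      // toy "accept only if better" rule
    }
    final_scores[tid] = state;
}

int main() {
    const int n_threads = 64 * 256;            // 64 blocks of 256 threads
    float* d_scores;
    cudaMalloc(&d_scores, n_threads * sizeof(float));
    independent_trajectories<<<64, 256>>>(d_scores, 10000, 42ULL);
    cudaDeviceSynchronize();
    cudaFree(d_scores);
    printf("ran %d toy trajectories on the GPU\n", n_threads);
    return 0;
}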
ID: 64820
muddocktor

Joined: 11 May 07
Posts: 17
Credit: 14,543,886
RAC: 0
Message 64823 - Posted: 5 Jan 2010, 18:58:17 UTC

If I'm not mistaken, the SETI GPU client was pretty much written for them by Nvidia, not Berkeley. So I would imagine that someone would have to see if they could interest either Nvidia or AMD in this project and look into getting them to write a GPU app for Rosetta.
ID: 64823
mikey
Joined: 5 Jan 06
Posts: 1895
Credit: 9,208,737
RAC: 3,249
Message 64829 - Posted: 6 Jan 2010, 9:34:00 UTC - in response to Message 64823.  

If I'm not mistaken, the SETI GPU client was pretty much written for them by Nvidia, not Berkeley. So I would imagine that someone would have to see if they could interest either Nvidia or AMD in this project and look into getting them to write a GPU app for Rosetta.


You are correct; however, the Collatz project has written its own Nvidia and ATI versions of the software that work within BOINC, and they are doing just fine. It has taken a long time and over 10 grand of the guy's own money to get things to where we users can depend on Collatz being there day in and day out! The guy in charge, Slicker, is WONDERFUL, and Gipsel, the programmer, is VERY smart! So it is possible, but as you said, a stable project really does make a lot of difference!
ID: 64829
Profile bruce boytler
Joined: 17 Sep 05
Posts: 68
Credit: 3,565,442
RAC: 0
Message 64833 - Posted: 6 Jan 2010, 13:11:54 UTC

AQUA@HOME had a GPU client and they found it very limited and not worth the effort. The CPU actually turned out to do the work many times faster.

And the guy at DWAVE was a professional programmer.
ID: 64833
DJStarfox

Joined: 19 Jul 07
Posts: 145
Credit: 1,250,162
RAC: 0
Message 64836 - Posted: 6 Jan 2010, 14:45:03 UTC

I think a programmer familiar with the Rosetta code should first look at:
http://en.wikipedia.org/wiki/Stream_processing

...and also other published articles and books before deciding whether a GPGPU application (ATI or CUDA or both) would be of any performance benefit. If you're not doing a lot of SIMD or MIMD work, then there won't be much benefit. With the introduction of double-precision floating point in newer video cards, precision should no longer be an issue with GPU processing.
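
For readers new to the term, the work a GPU rewards is uniform data parallelism: the same short operation applied to a huge number of independent elements. A minimal, purely illustrative CUDA sketch of that shape (the atom/distance example and every name in it are mine, not taken from Rosetta or any other project):

// Illustrative only: every thread runs the same instructions on its own
// independent element, which is exactly the SIMD-style shape GPUs like.
#include <cstdio>

struct Atom { float x, y, z; };

__global__ void distances_to_origin(const Atom* atoms, float* dist, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        Atom a = atoms[i];
        dist[i] = sqrtf(a.x * a.x + a.y * a.y + a.z * a.z);
    }
}

int main() {
    const int n = 1 << 20;                     // about a million toy "atoms"
    Atom* d_atoms;
    float* d_dist;
    cudaMalloc(&d_atoms, n * sizeof(Atom));
    cudaMalloc(&d_dist, n * sizeof(float));
    cudaMemset(d_atoms, 0, n * sizeof(Atom));  // zero-filled dummy input

    int block = 256, grid = (n + block - 1) / block;
    distances_to_origin<<<grid, block>>>(d_atoms, d_dist, n);
    cudaDeviceSynchronize();

    printf("computed %d distances on the GPU\n", n);
    cudaFree(d_atoms);
    cudaFree(d_dist);
    return 0;
}

Code with heavy branching, pointer-chasing and step-to-step dependencies does not map onto that shape nearly as well, which is the question a Rosetta developer would have to answer first.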
ID: 64836
TomaszPawel

Joined: 28 Apr 07
Posts: 54
Credit: 2,791,145
RAC: 0
Message 64845 - Posted: 7 Jan 2010, 11:34:20 UTC - in response to Message 64836.  

I think a programmer familiar with the Rosetta code should first look at... With the introduction of double-precision floating point in newer video cards, precision should no longer be an issue with GPU processing.

DP on the new ATI Radeon HD 58xx cards is very fast!

The ATI Stream Software Development Kit (SDK) v2.0 is available.

Nothing is standing in the way of writing applications for the GPU.

Only the administrators have to want it.
WWW of Polish National Team - Join! Crunch! Win!
ID: 64845
Profile dcdc

Joined: 3 Nov 05
Posts: 1832
Credit: 119,821,902
RAC: 15,180
Message 64847 - Posted: 7 Jan 2010, 12:18:49 UTC - in response to Message 64845.  
Last modified: 7 Jan 2010, 12:21:54 UTC

I think a programmer familiar with the Rosetta code should first look at... With the introduction of double-precision floating point in newer video cards, precision should no longer be an issue with GPU processing.


https://boinc.bakerlab.org/rosetta/forum_thread.php?id=5023&nowrap=true#62931


Nothing is standing in the way of writing applications for the GPU.

Only the administrators have to want it.


Assuming GPUs can help with the Rosetta code, porting it to CUDA would be a huge job (see DEK's post linked above), and then the CUDA version would have to be updated for each subsequent release. The CUDA version would no doubt spit out slightly different results due to FP rounding etc., so hunting down deviations between the versions could grind the improvements to a standstill.
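
The rounding point is easy to demonstrate: float addition is not associative, and a parallel reduction regroups the additions, so a GPU build can legitimately return slightly different numbers from the same inputs. A tiny host-only illustration with synthetic values (no project code; compiles with any C++ compiler or nvcc):

// Why a parallel port can give slightly different numbers: float
// addition is not associative, and a GPU-style reduction regroups it.
#include <cstdio>

int main() {
    const int n = 1 << 20;
    const float tiny = 1e-4f;   // small term added many times
    const float big  = 1e8f;    // large term that swallows it

    // "Serial order": start from the big value and add the tiny terms
    // one by one. Each addition rounds back to 1e8, so they all vanish.
    float serial = big;
    for (int i = 0; i < n; ++i) serial += tiny;

    // "Reduction order": sum the tiny terms in their own partial sum
    // first (as a block-wise GPU reduction would), then add the big one.
    float partial = 0.0f;
    for (int i = 0; i < n; ++i) partial += tiny;
    float regrouped = big + partial;

    printf("serial     = %.1f\n", serial);
    printf("regrouped  = %.1f\n", regrouped);
    printf("difference = %.1f\n", regrouped - serial);
    return 0;
}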

If porting Rosetta to CUDA were easy, I'm sure they'd do it. If it's easy, and anyone thinks they're not doing it because the devs have a vendetta against CUDA, then do the CUDA port and prove how easy it is!

As I posted previously, again assuming GPGPU can offer a speedup for Rosetta, it might well be possible to port a version of minirosetta to CUDA to use for large-scale testing of certain protein interactions that the software already handles well, but I would guess that would be a side project or a project for someone outside the bakerlab.

If a GPU app were available for Rosetta, I'd buy a top-end GPU instantly, but it's not as straightforward as some people are making out.
ID: 64847
Profile Gen_X_Accord
Joined: 5 Jun 06
Posts: 154
Credit: 279,018
RAC: 0
Message 65590 - Posted: 19 Mar 2010, 12:54:10 UTC

What part of "Never" don't people seem to understand?
ID: 65590
Mod.Sense
Volunteer moderator

Joined: 22 Aug 06
Posts: 4018
Credit: 0
RAC: 0
Message 65591 - Posted: 19 Mar 2010, 16:05:42 UTC

I don't believe anyone would say "never". But I don't believe anyone has indicated that you should expect it any time soon either. It is a major retrofit to run effectively on a GPU. Anyone who says otherwise is usually selling GPUs.
Rosetta Moderator: Mod.Sense
ID: 65591
Mad_Max

Joined: 31 Dec 09
Posts: 209
Credit: 26,262,530
RAC: 19,111
Message 65671 - Posted: 29 Mar 2010, 1:12:34 UTC - in response to Message 64836.  

I think a programmer familiar with the Rosetta code should first look at:
http://en.wikipedia.org/wiki/Stream_processing

...and also other published articles and books before deciding whether a GPGPU application (ATI or CUDA or both) would be of any performance benefit. If you're not doing a lot of SIMD or MIMD work, then there won't be much benefit. With the introduction of double-precision floating point in newer video cards, precision should no longer be an issue with GPU processing.

YES, modern cards CAN work in double precision, BUT DP on GPUs still comes with a steep drop in computation speed: three to six times slower than SP, because the main purpose of a GPU (computer graphics) has no need for DP (it is mainly SP and integer work).

Even so, the latest generation of mid-range and top-end GPUs is still several times faster than today's CPUs, but the real difference in scientific computing is several times smaller than you might think after reading the marketing press releases.
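
For anyone who wants to check the SP/DP gap on their own card, a minimal CUDA timing sketch along these lines should do it (a synthetic multiply-add kernel, illustrative only; the three-to-six-times figure above is the poster's observation, not something this snippet asserts):

// Same arithmetic-heavy kernel instantiated for float and for double,
// timed with CUDA events, to expose the SP vs DP throughput gap.
#include <cstdio>

template <typename T>
__global__ void fma_loop(T* out, int iters) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    T a = T(1.000001), x = T(i);
    for (int k = 0; k < iters; ++k)
        x = x * a + T(0.5);            // chain of dependent multiply-adds
    out[i] = x;                        // keep the compiler from removing the loop
}

template <typename T>
float time_kernel(int n, int iters) {
    T* d_out;
    cudaMalloc(&d_out, n * sizeof(T));
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    fma_loop<T><<<n / 256, 256>>>(d_out, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_out);
    return ms;
}

int main() {
    const int n = 256 * 1024, iters = 20000;
    float sp_ms = time_kernel<float>(n, iters);
    float dp_ms = time_kernel<double>(n, iters);
    printf("float : %.2f ms\n", sp_ms);
    printf("double: %.2f ms\n", dp_ms);
    printf("ratio : %.1fx\n", dp_ms / sp_ms);
    return 0;
}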
ID: 65671
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 65676 - Posted: 29 Mar 2010, 17:28:49 UTC
Last modified: 29 Mar 2010, 17:32:12 UTC

Except, perhaps, for the recently released nVidia Fermi (aka GTX 480/470: "The Soul of a Supercomputer in the Body of a GPU"), which is envisioned more as a GPGPU than a GPU...

From Wikipedia: GeForce 400 Series

The white paper describes the chip much more as a general purpose processor for workloads encompassing tens of thousands of threads - reminiscent of the Tera MTA architecture, though without that machine's support for very efficient random memory access - than as a graphics processor


The chip features ECC protection on the memory, and is believed to be 400% faster than previous Nvidia chips in double-precision floating point operations.
ID: 65676
Profile Chilean
Joined: 16 Oct 05
Posts: 711
Credit: 26,694,507
RAC: 0
Message 65682 - Posted: 31 Mar 2010, 0:46:31 UTC - in response to Message 65676.  

Except, perhaps, for the recently released nVidia Fermi (aka GTX 480/470: "The Soul of a Supercomputer in the Body of a GPU"), which is envisioned more as a GPGPU than a GPU...

From Wikipedia: GeForce 400 Series

The white paper describes the chip much more as a general purpose processor for workloads encompassing tens of thousands of threads - reminiscent of the Tera MTA architecture, though without that machine's support for very efficient random memory access - than as a graphics processor


The chip features ECC protection on the memory, and is believed to be 400% faster than previous Nvidia chips in double-precision floating point operations.


Still couldn't beat ATI's leading card. I don't know whether it's easier to code for an Nvidia video card or an ATI one, though.
ID: 65682
The_Bad_Penguin
Joined: 5 Jun 06
Posts: 2751
Credit: 4,271,025
RAC: 0
Message 65686 - Posted: 1 Apr 2010, 15:05:48 UTC - in response to Message 65682.  

I personally couldn't care less about using a graphics card for... graphics.

Personally, I only care about its ability to crunch for science, as a GPGPU.

I don't know the answer for certain, but I suspect it is "No": can "ATI's leading card" handle double-precision floating point as well as Fermi?

Still couldn't beat ATI's leading card. I don't know whether it's easier to code for an Nvidia video card or an ATI one, though.

ID: 65686
deesy58

Joined: 20 Apr 10
Posts: 75
Credit: 193,831
RAC: 0
Message 66658 - Posted: 23 Jun 2010, 20:17:35 UTC

I believe the answer is that it has been looked at and that the current spec of GPUs does not make it worthwhile. In short, there is no advantage to using a GPU due to its limitations. I am sure that when those limitations are surpassed, the GPU will again be considered as a resource to be exploited. The devil is always in the details, and the reason we have a CPU and a GPU is that they each do things differently. The CPU can be very precise while the GPU does not have to be, and scientific research MUST be very precise at times.


I used my GTX 295 on the FAH project at Stanford for a while, and it greatly outperformed my two CPUs. All four processors would be running simultaneously, and the video card would finish its work units much more rapidly than the two CPUs. Perhaps they were smaller WUs, but my scores started to increase at a much higher rate, so the contribution must have been significant. If GPUs can be made useful and valuable by FAH at Stanford, why can't they be made useful and valuable by Rosetta? Aren't both projects studying proteins?

deesy
ID: 66658
Profile dcdc

Joined: 3 Nov 05
Posts: 1832
Credit: 119,821,902
RAC: 15,180
Message 66659 - Posted: 23 Jun 2010, 21:35:08 UTC - in response to Message 66658.  

I believe the answer is that it has been looked at and that the current spec of GPUs does not make it worthwhile. In short, there is no advantage to using a GPU due to its limitations. I am sure that when those limitations are surpassed, the GPU will again be considered as a resource to be exploited. The devil is always in the details, and the reason we have a CPU and a GPU is that they each do things differently. The CPU can be very precise while the GPU does not have to be, and scientific research MUST be very precise at times.


I used my GTX 295 on the FAH project at Stanford for a while, and it greatly outperformed my two CPUs. All four processors would be running simultaneously, and the video card would finish its work units much more rapidly than the two CPUs. Perhaps they were smaller WUs, but my scores started to increase at a much higher rate, so the contribution must have been significant. If GPUs can be made useful and valuable by FAH at Stanford, why can't they be made useful and valuable by Rosetta? Aren't both projects studying proteins?

deesy

Some fundamental issues for R@H GPGPU:

  • The Rosetta code is very large - I'm not sure there's another BOINC project (and that includes Folding) that comes close(?). Porting would be a huge job, which would require a lot of testing.
  • R@H is as much about developing the software as about the results it produces along the way - it would require two code bases to be maintained (one CPU, one GPU).
  • Past tests with CUDA suggested it wouldn't give a speedup for R@H.



GPGPU is in its infancy and will improve over time with compiler improvements, OpenCL, Fusion, etc. My (uninformed) guess is that R@H will get a GPU version whenever it becomes possible to create a robust one using a compiler for their current code, or when the work is done at the hardware level, pushing the FPU work etc. off onto the GPGPU. I'd like to hear the opinion of someone who knows about the practicalities of GPGPU coding, though...

Mod Sense - can we get a sticky on GPGPU? The question gets asked a lot!


ID: 66659
TomaszPawel

Joined: 28 Apr 07
Posts: 54
Credit: 2,791,145
RAC: 0
Message 66673 - Posted: 24 Jun 2010, 7:29:10 UTC

Don't worry, my friends, soon you will be folding under BOINC.

Rosetta doesn't want to make a GPU app, and it doesn't have to.

Other projects will write the applications needed to exploit the enormous computing power of GPUs.

Be patient!
WWW of Polish National Team - Join! Crunch! Win!
ID: 66673
mikey
Avatar

Send message
Joined: 5 Jan 06
Posts: 1895
Credit: 9,208,737
RAC: 3,249
Message 66677 - Posted: 24 Jun 2010, 10:36:11 UTC - in response to Message 66673.  

Don't worry, my friends, soon you will be folding under BOINC.

Rosetta doesn't want to make a GPU app, and it doesn't have to.

Other projects will write the applications needed to exploit the enormous computing power of GPUs.

Be patient!


Over 95% of BOINC users are CPU-only users; that leaves only 5% of us who use our GPUs to crunch. As usage of the GPU climbs, the number of projects able to use it will also climb. Right now there are maybe 5 or 6 projects where you can use your GPU to crunch; next year there will be even more, the year after that more, etc.
ID: 66677
Dirk Broer

Joined: 16 Nov 05
Posts: 22
Credit: 3,390,179
RAC: 3,244
Message 70317 - Posted: 11 May 2011, 11:04:53 UTC - in response to Message 65686.  

I personally couldn't care less about using a graphics card for... graphics.

Personally, I only care about its ability to crunch for science, as a GPGPU.

I don't know the answer for certain, but I suspect it is "No": can "ATI's leading card" handle double-precision floating point as well as Fermi?

Still couldn't beat ATI's leading card. I don't know whether it's easier to code for an Nvidia video card or an ATI one, though.



Actually, the leading ATI/AMD cards of the last few years, the HD 5970 and now the HD 6990, can run rings around the best nVidia card when it comes to double precision.

The GTX 295, nVidia's leading card of days gone by, could do 1788 GFlops single precision against a very meagre 149 GFlops double precision.
The GTX 590, nVidia's present leading card, does 2486 GFlops single precision against a still meagre 311 GFlops double precision.
The Quadro 6000 - nVidia's less-limited leading professional card - gives 1030 GFlops single precision, but delivers a modest 515 GFlops double precision.

And ATI/AMD?
The HD 5970 delivers 4640 GFlops single precision and does 928 GFlops double precision.
The HD 6990 delivers 5099 GFlops single precision and does 1275 GFlops double precision.

I do know the answer for certain, and it is "YES": Can "ATI's leading card" handle double-precision floating point as well as Fermi?

ID: 70317