Message boards : Number crunching : When will cuda support be ready ?
todd | Joined: 12 Oct 06 | Posts: 2 | Credit: 1,215,594 | RAC: 0
I know this question has been asked before and will continue to be asked until the dev team gets with it. So, once again: when can we expect CUDA support?
mikey | Joined: 5 Jan 06 | Posts: 1895 | Credit: 9,208,737 | RAC: 3,249
> I know this question has been asked before and will continue to be asked until the dev team gets with it.

I believe the answer is that it has been looked at, and the current spec of GPUs does not make it worthwhile. In short, there is no advantage to using a GPU because of its limitations. I am sure that when those limitations are surpassed, the GPU will again be considered a resource to be exploited. The devil is always in the details, and the reason we have a CPU and a GPU is that they each do things differently. The CPU can be very precise while the GPU does not have to be, and scientific research MUST be very precise at times.
dcdc | Joined: 3 Nov 05 | Posts: 1832 | Credit: 119,821,902 | RAC: 15,180
> I know this question has been asked before and will continue to be asked until the dev team gets with it.

It's not that the dev team needs to get with it - you need to understand the problem. Rosetta is about developing the software. Porting it to a GPU properly is still far more complex than throwing a compiler switch, and that work would need redoing each time the software was modified. GPUs can be great for some types of static code, but Rosetta isn't that. There might be an opportunity to make a GPU version of a Rosetta build, but I would assume that would be a side project or done outside the bakerlab, as improving the accuracy and flexibility of the software is the bakerlab's current objective.
muddocktor | Joined: 11 May 07 | Posts: 17 | Credit: 14,543,886 | RAC: 0
If I'm not mistaken, the SETI GPU client was pretty much written for them by Nvidia, not Berkeley. So I would imagine that someone would have to see if they could interest either Nvidia or AMD in this project and look into getting them to write a GPU app for Rosetta.
mikey | Joined: 5 Jan 06 | Posts: 1895 | Credit: 9,208,737 | RAC: 3,249
> If I'm not mistaken, the SETI GPU client was pretty much written for them by Nvidia, not Berkeley. [...]

You are correct; however, the Collatz project has written its own Nvidia and ATI versions of the software that work within BOINC, and they are doing just fine. It has taken a long time and over 10 grand of the guy's own money to get things to where we users can depend on Collatz being there day in and day out! The guy in charge, Slicker, is WONDERFUL, and Gipsel, the programmer, is VERY smart! So it is possible, but as you said, a stable project really does make a lot of difference!
bruce boytler | Joined: 17 Sep 05 | Posts: 68 | Credit: 3,565,442 | RAC: 0
AQUA@HOME had a GPU client and they found it very limited and not worth the effort. The CPU actually turned out to do the work many times faster. And the guy at D-Wave was a professional programmer.
DJStarfox | Joined: 19 Jul 07 | Posts: 145 | Credit: 1,250,162 | RAC: 0
I think a programmer familiar with the Rosetta code should first look at http://en.wikipedia.org/wiki/Stream_processing and other published articles/books before deciding whether a GPGPU application (ATI or CUDA or both) would be of any performance benefit. If you're not doing a lot of SIMD or MIMD work, there won't be much benefit. With the introduction of double-precision floating point in newer video cards, precision should no longer be an issue with GPU processing.
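To make the SIMD point concrete, here is a minimal CUDA sketch (the kernel name, the toy Lennard-Jones-style formula, and the data are all invented for illustration; none of this is Rosetta code). A kernel like this, where every thread applies the same arithmetic to its own element with no data-dependent branching, is the kind of streaming workload a GPU rewards; Rosetta's branch-heavy Monte Carlo trajectories look nothing like it.

```cuda
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Hypothetical kernel, not Rosetta code: every thread runs the same arithmetic
// on its own element, with no data-dependent branching -- the "stream" pattern
// that GPUs reward.
__global__ void pairEnergy(const float* dist, float* energy, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float r6 = powf(dist[i], -6.0f);    // same toy Lennard-Jones-style
        energy[i] = 4.0f * (r6 * r6 - r6);  // expression for every element
    }
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> dist(n);
    for (int i = 0; i < n; ++i) dist[i] = 1.0f + 0.001f * (i % 1000);

    float *d_dist, *d_energy;
    cudaMalloc(&d_dist, n * sizeof(float));
    cudaMalloc(&d_energy, n * sizeof(float));
    cudaMemcpy(d_dist, dist.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    pairEnergy<<<(n + 255) / 256, 256>>>(d_dist, d_energy, n);
    cudaDeviceSynchronize();

    std::vector<float> energy(n);
    cudaMemcpy(energy.data(), d_energy, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("energy[0] = %f\n", energy[0]);

    cudaFree(d_dist);
    cudaFree(d_energy);
    return 0;
}
```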
TomaszPawel | Joined: 28 Apr 07 | Posts: 54 | Credit: 2,791,145 | RAC: 0
> With the introduction of double-precision floating point in newer video cards, precision should no longer be an issue with GPU processing.

DP on the new ATI Radeon HD 58xx cards is very fast! The ATI Stream Software Development Kit (SDK) v2.0 is available. Nothing stands in the way of writing applications for the GPU; the administrators just have to want it.
WWW of Polish National Team - Join! Crunch! Win!
dcdc | Joined: 3 Nov 05 | Posts: 1832 | Credit: 119,821,902 | RAC: 15,180
> I think a programmer familiar with the Rosetta code should first look at [...] With the introduction of double-precision floating point in newer video cards, precision should no longer be an issue with GPU processing.

https://boinc.bakerlab.org/rosetta/forum_thread.php?id=5023&nowrap=true#62931

Assuming GPUs can help with the Rosetta code, porting it to CUDA would be a huge job (see DEK's post linked above), and then the CUDA version would have to be updated for each subsequent release. The CUDA version would no doubt spit out slightly different results due to FP rounding etc., so hunting down deviations between the versions could grind the improvements to a standstill. If porting Rosetta to CUDA were easy, I'm sure they'd do it. If it's easy and anyone thinks they're not doing it because the devs have a vendetta against CUDA, then do the CUDA port and prove how easy it is! As I posted previously, again assuming GPGPU can offer a speedup for Rosetta, it might well be possible to port a version of minirosetta to CUDA to use for large-scale testing of certain protein interactions that the software already does well, but I would guess that would be a side project or a project for someone outside the bakerlab. If a GPU app were available for Rosetta I'd buy a top-end GPU instantly, but it's not as straightforward as some people are making out.
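To illustrate the "slightly different results" point, here is a tiny self-contained sketch (plain host code, compilable with nvcc or any C++ compiler; nothing Rosetta-specific). Floating-point addition is not associative, so a port that reorders a reduction, as a GPU version inevitably would, changes the low bits of the result even on identical inputs, and those small deviations are what would have to be chased down after every release.

```cuda
#include <cstdio>

// Host-only demonstration: floating-point addition is not associative, so
// merely changing the order in which the same terms are summed changes the
// low bits of the result.
int main()
{
    const int n = 1000000;
    float forward = 0.0f, backward = 0.0f;

    for (int i = 1; i <= n; ++i)      // small terms added last
        forward += 1.0f / i;
    for (int i = n; i >= 1; --i)      // small terms added first
        backward += 1.0f / i;

    printf("forward    = %.8f\n", forward);
    printf("backward   = %.8f\n", backward);
    printf("difference = %g\n", forward - backward);
    return 0;
}
```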
Gen_X_Accord | Joined: 5 Jun 06 | Posts: 154 | Credit: 279,018 | RAC: 0
What part of "Never" don't people seem to understand? |
Mod.Sense (Volunteer moderator) | Joined: 22 Aug 06 | Posts: 4018 | Credit: 0 | RAC: 0
I don't believe anyone would say "never". But I don't believe anyone has indicated that you should expect it, either. It is a major retrofit to run effectively on a GPU. Anyone who says otherwise is usually selling GPUs.
Rosetta Moderator: Mod.Sense
Mad_Max | Joined: 31 Dec 09 | Posts: 209 | Credit: 26,262,530 | RAC: 19,111
> I think a programmer familiar with the Rosetta code should first look at [...]

YES, modern cards CAN work in double precision, BUT DP on GPUs still comes with a big drop in computation speed: 3 to 6 times slower than SP, because the main purpose of a GPU (computer graphics) has no need for DP (it is mainly SP and integer work). Even allowing for that, the latest generation of mid-range and top-end GPUs is still a few times faster than today's CPUs, but the real difference in scientific computing is several times smaller than you might think after reading the marketing press releases.
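For anyone curious what their own NVIDIA card supports, a minimal sketch along the following lines (standard CUDA runtime calls; hardware double precision first appeared with compute capability 1.3) reports whether each device can do DP at all. The SP-to-DP throughput ratio itself still has to come from the vendor's specifications for the particular chip.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: list each CUDA device and whether it supports double
// precision at all (hardware DP arrived with compute capability 1.3).
int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        bool hasDP = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
        printf("Device %d: %s, compute capability %d.%d, double precision: %s\n",
               dev, prop.name, prop.major, prop.minor, hasDP ? "yes" : "no");
    }
    return 0;
}
```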
The_Bad_Penguin | Joined: 5 Jun 06 | Posts: 2751 | Credit: 4,271,025 | RAC: 0
Except, perhaps, for the recently released nVidia Fermi (aka GTX 480/470: "The Soul of a Supercomputer in the Body of a GPU"), which is envisioned more as a GPGPU than a GPU... From Wikipedia, GeForce 400 Series: "The white paper describes the chip much more as a general purpose processor for workloads encompassing tens of thousands of threads - reminiscent of the Tera MTA architecture, though without that machine's support for very efficient random memory access - than as a graphics processor." The chip features ECC protection on the memory and is believed to be 400% faster than previous Nvidia chips in double-precision floating point operations.
Chilean | Joined: 16 Oct 05 | Posts: 711 | Credit: 26,694,507 | RAC: 0
> Except, perhaps, for the recently released nVidia Fermi (aka GTX 480/470: "The Soul of a Supercomputer in the Body of a GPU"), which is envisioned more as a GPGPU than a GPU...

It still couldn't beat ATI's leading card. Although I don't know whether it's easier to code for an Nvidia video card or an ATI one.
The_Bad_Penguin | Joined: 5 Jun 06 | Posts: 2751 | Credit: 4,271,025 | RAC: 0
> It still couldn't beat ATI's leading card. Although I don't know whether it's easier to code for an Nvidia video card or an ATI one.

I personally couldn't care less about using a graphics card for... graphics. Personally, I only care about its ability to crunch for science, as a GPGPU. I don't know the answer for certain, but I suspect it is "no": can "ATI's leading card" handle double-precision floating point as well as Fermi?
deesy58 | Joined: 20 Apr 10 | Posts: 75 | Credit: 193,831 | RAC: 0
> I believe the answer is that it has been looked at, and the current spec of GPUs does not make it worthwhile. In short, there is no advantage to using a GPU because of its limitations. [...]

I used my GTX 295 on the FAH project at Stanford for a while, and it greatly outperformed my two CPUs. All four processors would be running simultaneously, and the video card would finish its work units much more rapidly than the two CPUs. Perhaps they were smaller WUs, but my scores started to increase at a much higher rate, so the contribution must have been significant. If GPUs can be made useful and valuable by FAH at Stanford, why can't they be made useful and valuable by Rosetta? Aren't both projects studying proteins?
deesy
dcdc | Joined: 3 Nov 05 | Posts: 1832 | Credit: 119,821,902 | RAC: 15,180
> I believe the answer is that it has been looked at, and the current spec of GPUs does not make it worthwhile. [...]

Some fundamental issues for R@H GPGPU:
- porting Rosetta to a GPU properly is far more complex than throwing a compiler switch;
- the port would have to be updated every time the science code changes, which is often;
- the GPU build would round floating point differently, so its results would deviate from the CPU build and the deviations would have to be chased down;
- Rosetta isn't the kind of static, regular code that GPUs are good at.
TomaszPawel | Joined: 28 Apr 07 | Posts: 54 | Credit: 2,791,145 | RAC: 0
Don't worry, my friends, soon you will be folding under BOINC. Rosetta doesn't want to make a GPU app, and it doesn't have to. Other projects will write the applications needed to exploit the enormous computing power of GPUs. Be patient!
WWW of Polish National Team - Join! Crunch! Win!
mikey | Joined: 5 Jan 06 | Posts: 1895 | Credit: 9,208,737 | RAC: 3,249
> Don't worry, my friends, soon you will be folding under BOINC.

Over 95% of BOINC users are CPU-only users; that leaves only 5% of us who crunch with our GPUs. As GPU usage climbs, the number of projects able to use it will also climb. Right now there are maybe 5 or 6 projects where you can crunch with your GPU; next year there will be even more, the year after that more, and so on.
Dirk Broer | Joined: 16 Nov 05 | Posts: 22 | Credit: 3,390,179 | RAC: 3,244
> I personally couldn't care less about using a graphics card for... graphics. [...] Can "ATI's leading card" handle double-precision floating point as well as Fermi?

Actually, the leading ATI/AMD cards of the last few years - the HD 5970 until recently and now the HD 6990 - can run rings around the best nVidia card when it comes to double precision. The GTX 295, nVidia's leading card of days gone by, could do 1788 GFLOPS single precision against a very meagre 149 GFLOPS double precision. The GTX 590, nVidia's present leading card, does 2486 GFLOPS single precision against a still meagre 311 GFLOPS double precision. The Quadro 6000 - nVidia's less-limited leading professional card - gives 1030 GFLOPS single precision but delivers a more respectable 515 GFLOPS double precision. And ATI/AMD? The HD 5970 delivers 4640 GFLOPS single precision and 928 GFLOPS double precision. The HD 6990 delivers 5099 GFLOPS single precision and 1275 GFLOPS double precision. So I do know the answer for certain, and it is "yes": ATI's leading card can handle double-precision floating point at least as well as Fermi.
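For what it's worth, the DP:SP ratios implied by the figures quoted above (my arithmetic, using nothing beyond the numbers already given) work out to roughly:

```latex
\begin{aligned}
\text{GTX 295:}     &\quad 149/1788  \approx 1/12 \\
\text{GTX 590:}     &\quad 311/2486  \approx 1/8  \\
\text{Quadro 6000:} &\quad 515/1030  = 1/2        \\
\text{HD 5970:}     &\quad 928/4640  = 1/5        \\
\text{HD 6990:}     &\quad 1275/5099 \approx 1/4
\end{aligned}
```

That spread is consistent both with the earlier caveat that DP runs several times slower than SP on consumer GPUs and with the ATI-versus-nVidia comparison above.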