Message boards : Number crunching : GPU not getting work
Author | Message |
---|---|
snipaho Joined: 10 Mar 06 Posts: 2 Credit: 2,560,436 RAC: 0 |
I have an nVidia 8800GT, which is CUDA-enabled. I am using BOINC 6.6.36 with the latest nVidia driver (190.38) on Windows 7 RC. In my preferences I have told it to always use the GPU (not just when the computer is idle). Under Tasks, I only see 4 units being worked on. I have an Intel Core 2 Quad Q6700 - shouldn't I see 5 units going? How do I make BOINC get work for my GPU? Or does Rosetta not support GPU work units? |
snipaho Joined: 10 Mar 06 Posts: 2 Credit: 2,560,436 RAC: 0 |
Never mind - I think I found the answer (that Rosetta doesn't do GPU work units). |
Joined: 30 May 06 Posts: 5691 Credit: 5,859,226 RAC: 0 |
Never mind - I think I found the answer (that Rosetta doesn't do GPU work units). That is the correct answer; Rosetta is a CPU-only project. I'm sure you have found some other projects that do use GPUs. |
Marko Joined: 6 Aug 09 Posts: 1 Credit: 8,702 RAC: 0 |
Will Rosetta@home ever support GPU computing? |
Joined: 30 May 06 Posts: 5691 Credit: 5,859,226 RAC: 0 |
Will Rosetta@home ever support GPU computing? This thread had a brief discussion about GPUs and R@H: https://boinc.bakerlab.org/rosetta/forum_thread.php?id=4266&nowrap=true#54700 |
Joined: 30 May 06 Posts: 5691 Credit: 5,859,226 RAC: 0 |
Maybe it would be a good idea to make a sticky thread on this subject; it might prevent the ever-recurring question. Get an official answer from DEK or someone on the technical side about WHY GPUs are not supported by R@H, and whether there are any plans to support them in the future. Then put that answer here and in the FAQ. Problem solved. |
Joined: 18 Sep 05 Posts: 655 Credit: 11,873,832 RAC: 2,171 |
>>> Problem solved. Dream on Greg! Wave upon wave of demented avengers march cheerfully out of obscurity into the dream. |
Joined: 30 May 06 Posts: 5691 Credit: 5,859,226 RAC: 0 |
>>> Problem solved. In an ideal world, which this forum is not. |
Volunteer moderator Project administrator Project developer Project scientist Joined: 1 Jul 05 Posts: 1480 Credit: 4,334,829 RAC: 0 |
I spent some time looking into using CUDA for NVIDIA GPUs, but it turns out that a pretty extensive rewrite would be required to really make good use of them, and CUDA does not support C++ yet. I even tried to port a couple of routines, but the memory bandwidth they required was just too much. |
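Rosetta's own scoring routines are not shown in this thread, so the following is purely a hypothetical illustration of the memory-bandwidth point above, not Rosetta code. It is a minimal CUDA sketch of a low arithmetic-intensity kernel: each thread streams two coordinate records from device memory and performs only a handful of floating-point operations per byte moved, so the card's many cores spend most of their time waiting on the memory bus. All names and numbers are illustrative assumptions.

```cuda
// Hypothetical sketch only - NOT Rosetta code. Each thread reads two
// float4 "atom" records (32 bytes), does roughly nine floating-point
// operations, and writes one float (4 bytes). With so few FLOPs per byte
// of memory traffic, the GPU is limited by memory bandwidth, not by how
// many cores it has.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void pair_distance(const float4* a, const float4* b,
                              float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float dx = a[i].x - b[i].x;
    float dy = a[i].y - b[i].y;
    float dz = a[i].z - b[i].z;
    out[i] = sqrtf(dx * dx + dy * dy + dz * dz);
}

int main()
{
    const int n = 1 << 20;            // one million atom pairs
    float4 *a, *b;
    float  *out;
    cudaMallocManaged(&a,   n * sizeof(float4));
    cudaMallocManaged(&b,   n * sizeof(float4));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) {
        a[i] = make_float4(i * 0.001f, 0.0f, 0.0f, 0.0f);
        b[i] = make_float4(0.0f, i * 0.002f, 0.0f, 0.0f);
    }
    pair_distance<<<(n + 255) / 256, 256>>>(a, b, out, n);
    cudaDeviceSynchronize();
    printf("out[42] = %f\n", out[42]);
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

Roughly nine FLOPs against about 36 bytes of traffic per thread is far below the FLOP-per-byte ratio a GPU needs to be compute-bound, which is consistent with the observation above that the ported routines ran into a memory-bandwidth wall.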
Joined: 30 May 06 Posts: 5691 Credit: 5,859,226 RAC: 0 |
Thanks for that info. Now we know the real reason R@H has not gone to the GPU. Maybe you could write something official along these lines, and then we can point others to that message? |
Orgil Joined: 11 Dec 05 Posts: 82 Credit: 169,751 RAC: 0 |
AMD recently released the HD 5850 GPU, which is far better than Nvidia's for GPU computing and uses OpenCL (among other languages) for application development. Could R@H use the AMD option? It is cheaper than Nvidia, non-CUDA, and would enable a lot of computing resources. The thing is, GPU computing is an inevitable advancement for DC projects, yet only a couple of BOINC projects out of the whole lot have adopted it, which means progress is really slow. |
Joined: 16 Jun 08 Posts: 1235 Credit: 14,372,156 RAC: 1,319 |
As best I can tell, Rosetta@home uses an algorithm in minirosetta that requires so much memory per processor that it won't get much, if any, benefit from graphics cards, which offer many more processors but far less memory per processor - whether Nvidia, ATI, or the AMD card Orgil recommends. Some other BOINC projects that need much less memory per processor probably could benefit, if the software existed to compile the language they are written in into code at least one of those cards can run - and they don't yet have many choices of which code will work. OpenCL, the idea of a single code shared by all new graphics cards, is rather new, and proper software to handle it is probably slow in coming. I've found GPUGRID to be a good source of workunits that run well on many of the more recent Nvidia cards, though not on those with the fewest processors or too old a generation of Nvidia chips. Their algorithm appears to handle only the protein-folding part of what Rosetta@home can do, and works with less memory per processor. I'd like a new Rosetta@home program that handles only some of the functions the other Rosetta@home programs do but runs on at least one type of graphics card instead of the CPU only; however, the current state of compiler support makes writing such programs slow enough that few BOINC projects have gone far beyond their initial efforts to try it. |
mikey Joined: 5 Jan 06 Posts: 1896 Credit: 9,881,381 RAC: 36,738 |
AMD recently released the HD 5850 GPU, which is far better than Nvidia's for GPU computing and uses OpenCL (among other languages) for application development. There are actually more than just two, but I do agree that a few more could if they had the incentive or money. Here are a few BOINC projects that do have GPU processing right now: GPUGRID, SETI, Milkyway, Collatz, AQUA; and then there is always Folding@home as a non-BOINC project. There are more projects coming that promise GPU crunching, but time will tell whether they can follow through or not. |
Joined: 5 Jun 06 Posts: 154 Credit: 279,018 RAC: 0 |
The CUDA website does say that Fortran and C++ will be supported in the future. However, Rosetta needs "X" amount of memory per processor, and these new cards have something like 240 GPU cores but only 512 MB-2 GB of total memory. I'm not sure how that translates to memory per core, but I bet it is not much. http://www.nvidia.com/object/cuda_what_is.html And I know the ATI people love to ask "What about us?", and their cards do more GPU computing overall than Nvidia's, but from the few things I have read, the FireStream ATI drivers still suck and still crash many a computer. I'd rather have stability than more GFLOPS of work done. |
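As a rough check of that division (the 240-core and 512 MB-2 GB figures come from the post above; the assumption that a minirosetta task needs a few hundred MB is illustrative, not an official figure), here is a back-of-the-envelope sketch:

```cuda
// Back-of-the-envelope arithmetic only; compiles as plain C++ or with nvcc.
// The card memory sizes and core count come from the post above; the
// per-task memory footprint is an assumed round number for illustration.
#include <cstdio>

int main()
{
    const double cores         = 240.0;             // stream processors on the card
    const double task_mem_mb   = 300.0;             // assumed minirosetta task footprint (MB)
    const double card_mem_mb[] = {512.0, 2048.0};   // 512 MB to 2 GB total

    for (double mem : card_mem_mb)
        printf("%4.0f MB card: ~%.1f MB per processor (a CPU task uses ~%.0f MB)\n",
               mem, mem / cores, task_mem_mb);
    return 0;
}
```

That works out to roughly 2 MB per processor on a 512 MB card and about 8.5 MB on a 2 GB card, far less than a single CPU task's working set, even before considering that all the processors share one memory bus.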
mwgiii Joined: 29 Sep 05 Posts: 3 Credit: 90,006 RAC: 0 |
While you are waiting for an R@H CUDA app, you can always take a look at GPUGrid (http://www.gpugrid.net/). They do all-atom biomolecular simulations using the GPU only, so you can keep R@H going full blast while your GPU also works. Concerning ATI, there are a couple of BOINC projects that use ATI cards for GPU processing, but most projects seem to be waiting on OpenCL to add ATI support. |
mikey Joined: 5 Jan 06 Posts: 1896 Credit: 9,881,381 RAC: 36,738 |
While you are waiting for an R@H CUDA app, you can always take a look at GPUGrid (http://www.gpugrid.net/). They do all-atom biomolecular simulations using the GPU only, so you can keep R@H going full blast while your GPU also works. Collatz is one of the projects able to use ATI cards, and it works quite well for most people. http://boinc.thesonntags.com/collatz/ They can also use a ton of cards, not just the newest; some people have even been able to use cards built into the motherboard, both CUDA and ATI versions. Not everyone has been able to do that, but some have. One thing they have done recently is to require video cards with 128 MB of memory or more; the 64 MB cards just didn't cut it for the science anymore, so they had to drop support for them. If your card works, you can set BOINC to get only GPU units from them and CPU units from here, and then you will be crunching with both your GPU and CPU at the same time. You MUST use a very recent version of BOINC, though; they are using the test version 6.10.13 over there now. |