Message boards : Number crunching : Crunching with GPU?
HTH Joined: 6 Mar 06 Posts: 15 Credit: 250,712 RAC: 0
Hi! I was wondering whether it is possible to use a GPU (3D graphics card) to crunch Rosetta@home work units. GPUs have a great deal of computing power, after all...
dcdc Joined: 3 Nov 05 Posts: 1832 Credit: 119,891,919 RAC: 1,902
Hi! This topic has been brought up a number of times - here's a link to previous posts. HTH, Danny
Bob Guy Joined: 7 Oct 05 Posts: 39 Credit: 24,895 RAC: 0
Hi! Just to complete the thought - the problem is that GPUs only have single-precision floating-point capability, while BOINC projects usually need at least double precision. So GPU crunching is a no-go for now.
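A quick way to see the single-precision limit, as a toy Python illustration (nothing to do with Rosetta's actual code): a 32-bit float carries only about 7 significant decimal digits, so beyond 2^24 it cannot even distinguish consecutive integers, while a 64-bit double still can.

```python
import struct

def to_f32(x):
    """Round a Python float (64-bit) to the nearest 32-bit IEEE-754 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

big = 16777216.0  # 2**24, where single precision stops resolving integers

print(to_f32(big + 1.0) == big)   # True: the +1 is silently lost in f32
print(big + 1.0 == 16777217.0)    # True: a double keeps it exactly
```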
Jimi@0wned.org.uk Joined: 10 Mar 06 Posts: 29 Credit: 335,252 RAC: 0
Do the Ageia PhysX PPUs have this limitation?
Ethan Volunteer moderator Joined: 22 Aug 05 Posts: 286 Credit: 9,304,700 RAC: 0
Hi! Actually, the last time someone from the project replied to this question, the answer was that Rosetta uses mostly single-precision floats (note: I'm not on the project, just reposting what's been said).
Bob Guy Joined: 7 Oct 05 Posts: 39 Credit: 24,895 RAC: 0
The physics boards now available do not have sufficient precision, as far as I have been able to find out. That may change, but the current boards are designed for a specific type of physics solution, not for general mathematics processing. The boards are not an FPU replacement, no matter what 'advertising' you may have seen.

Regarding 'mostly single precision' - mostly is not close enough. Even one set of double-precision calculations can completely remove any advantage of using a GPU.

In addition, the only real advantage of a GPU is its unique ability to do signal processing, which is NOT what Rosetta does. The signal processing I refer to is that done by the Fourier transform algorithm, for example, and other classes of matrix transformation. A GPU is not suitable for sufficiently precise processing of this kind. The 'sloppy' processing of visual data is acceptable because a person will be unable to detect the few 'errors' just by looking at the rendered results, but when used in a precise mathematical algorithm the errors are sufficient to make the results unusable.
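The point about small errors compounding into unusable results can be seen even without a GPU. In this toy Python illustration (emulating 32-bit arithmetic via round-tripping through `struct`; not Rosetta's algorithm), a long running sum drifts far further from the exact answer in single precision than in double:

```python
import struct

def to_f32(x):
    """Round to the nearest 32-bit float, as single-precision hardware would."""
    return struct.unpack('f', struct.pack('f', x))[0]

total32, total64 = to_f32(0.0), 0.0
for _ in range(100_000):
    total32 = to_f32(total32 + to_f32(0.1))  # every operation rounded to f32
    total64 += 0.1                           # ordinary f64 arithmetic

# The exact answer is 10000; compare how far each version has drifted.
print(abs(total32 - 10000.0), abs(total64 - 10000.0))
```

The double-precision error stays below a millionth, while the single-precision sum is off by orders of magnitude more - and that is after only 100,000 operations, far fewer than a Rosetta trajectory performs.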
suguruhirahara Joined: 7 Mar 06 Posts: 27 Credit: 181,020 RAC: 172
http://setiathome.berkeley.edu/forum_thread.php?id=29562 "Physics processing performance" - this SETI forum thread may be useful; it discusses much the same question.
Jimi@0wned.org.uk Joined: 10 Mar 06 Posts: 29 Credit: 335,252 RAC: 0
Well, I've seen it mooted that it is a multi-core processor with a 4x4 matrix and inter-core communication speeds of 2 Tb/s. Not that this helps <sigh>. Ageia doesn't say anything about the architecture.
BennyRop Joined: 17 Dec 05 Posts: 555 Credit: 140,800 RAC: 0
We've had a link or two to the Folding@Home GPU client page, which lists a few of the physics processors that are out there. After two years(?) they still haven't released a GPU client to the general public - and the collaboration with the add-in physics processor makers hasn't resulted in a released public client either. For some reason, creating a client for non-CPU hardware doesn't seem to be as easy as it should be.
Dirk Broer Joined: 16 Nov 05 Posts: 22 Credit: 3,514,521 RAC: 5,392
Hi! Well, at some point in time you were right, but any nVidia card with compute capability 1.3 or higher has double-precision floating-point capability - meaning any card from the GTX 260 up. It is also present on the ATI Radeon HD 5970, the HD 5800 Series (5830, 5850 and 5870), the HD 4800 Series, the Mobile Radeon HD 4800 Series, the HD 3800 Series, the FirePro V8800, V8700 and V7800 Series, and AMD FireStream 9200 Series GPUs. So the double-precision floating-point argument falls flat.
Chilean Joined: 16 Oct 05 Posts: 711 Credit: 26,694,507 RAC: 0
Imagine a huge huge huge classroom full of 5th graders. Now imagine another room with two math professors. The huge classroom is the GPU, and the two professors are the CPU (a dual-core one). Which one do you think is up to the task of solving a college-level Calculus III problem? The CPU is. =] Sure, if you have a task where you must solve 3 trillion relatively easy math problems and can hand one problem to each student, the classroom will finish way faster than the two professors. Best analogy I've come up with so far for why GPUs can't crunch Rosetta yet.
dcdc Joined: 3 Nov 05 Posts: 1832 Credit: 119,891,919 RAC: 1,902
"Imagine a huge huge huge classroom full of 5th graders." And it's a good one! ;)
JLConawayII Joined: 21 Sep 10 Posts: 2 Credit: 1,009,812 RAC: 0
"Imagine a huge huge huge classroom full of 5th graders." Then why do other molecular dynamics projects run on GPUs?
Chilean Joined: 16 Oct 05 Posts: 711 Credit: 26,694,507 RAC: 0
"Imagine a huge huge huge classroom full of 5th graders." I think it's because even though those projects take lots and lots of operations, the operations aren't complex. You know that "this" goes this way under X circumstances. Rosetta is aimed at developing software that can predict a protein's structure from its amino acid sequence. If you just wanted to run the finished predictor on a certain protein, then you could use GPUs. But to develop the software, you need a more "off-road" car than an F1 car. Correct me if I'm wrong.
The_Bad_Penguin Joined: 5 Jun 06 Posts: 2751 Credit: 4,271,025 RAC: 0
What Chilean said...
borg Joined: 4 Dec 07 Posts: 3 Credit: 142,556 RAC: 0
Chilean, I believe you are wrong. The only reason GPUs can't be used with Rosetta is that nobody has yet put in the effort to write the code. GPUs have a sufficient instruction set, faster RAM, and greater computing power than CPUs. Complaints about their single-precision architecture are misplaced: doubles can be processed on any CUDA-capable GPU, it just takes more cycles. The "classroom" and "off-road" analogies are too simplistic - a CPU is not smart, it only does what you tell it to. There is no fixed connection between the complexity of Rosetta and the x86/x64 architecture: any task can be programmed to run on any kind of processor, given enough memory and time.
mikey Joined: 5 Jan 06 Posts: 1896 Credit: 9,387,844 RAC: 9,807
"Chilean, I believe you are wrong." You are correct, but the memory structure and the way a GPU works are totally different from a CPU. A GPU can do one thing a lot of times very well, but it is limited by how much different info you can fit into its limited memory; beyond that, a lot of efficiency is lost because you are moving things in and out of memory, causing tremendous slowdown and losing all of the GPU's great advantages.

Collatz's admin, Slicker, has explained this several times, and why his project works so well on a GPU as opposed to a CPU: they have tweaked the project to load as much as possible into the GPU's memory, but not so much that it starts swapping. Rosetta's problem is more memory-intensive and not conducive to fitting within a GPU's memory limits. It is also constantly changing, which is another problem for GPUs - GPUs like the same set of parameters to be used over and over and over, while CPUs are very flexible.

Could it be done? Maybe. Will it be done anytime soon? Probably not. What they have works for them and their customers, so putting a ton of money into doing things another way is currently not cost-effective - and since it is a working lab that needs to make money, that is a consideration. Now if they were to get a donation of a few million bucks targeted at making it work on a GPU, they could put forth some resources to take a closer look. Until then, those of us with GPUs must just crunch elsewhere.
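The memory-swapping penalty can be put in rough numbers with a toy cost model (all figures below are made-up round numbers for illustration, not measured specs of any card): once the working set no longer fits in GPU memory, every pass must stream the data across the host bus, and for memory-intensive work the bus, not the GPU, sets the pace.

```python
# Toy cost model - every number here is an illustrative assumption.
GPU_MEM_GB = 1.0      # on-board GPU memory
BUS_GB_S   = 8.0      # host <-> device transfer bandwidth
GPU_GFLOPS = 1000.0   # peak arithmetic throughput

def seconds_per_pass(working_set_gb, flops_per_byte):
    """Rough time for one pass over the data set."""
    compute = working_set_gb * flops_per_byte / GPU_GFLOPS
    if working_set_gb <= GPU_MEM_GB:
        return compute                    # data stays resident on the card
    transfer = working_set_gb / BUS_GB_S  # must be streamed in every pass
    return max(compute, transfer)         # the slower pipe dominates

# Fits in GPU memory: compute-bound and fast.
print(seconds_per_pass(0.5, 10))   # 0.005
# Doesn't fit: same work per byte, but now transfer-bound and far slower per GB.
print(seconds_per_pass(4.0, 10))   # 0.5
```

In this model the oversized working set takes 0.125 s/GB instead of 0.01 s/GB - the GPU's arithmetic advantage is erased by the bus, which is the effect described above.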
Mod.Sense Volunteer moderator Joined: 22 Aug 06 Posts: 4018 Credit: 0 RAC: 0
Speaking as an informed volunteer here (recall that I am not part of the Rosetta project team), I've taken several CUDA webinars. The main challenge in getting a GPU application written, and performing better than a CPU application, is memory management. You have to make the instructions you wish to perform available to the GPU, and it has to get them via the CPU and its disk and memory accesses.

So if you have a tiny program that processes each element in an array, it runs like greased lightning - massively parallel. But if you need a suite of routines to process different elements differently, the GPU ends up spending most of its time waiting for the required parts of the program to be brought over from the CPU side. When they arrive, an existing program element must be evicted due to insufficient GPU memory, and so the cycle of waiting continues. The Rosetta executable is many MB just for the download; at runtime, hundreds of MB of program and data are being processed. This is most likely the crux of why the study of the applicability of GPUs to Rosetta did not result in a GPU application.

In short, you can't believe all of the hype about new products. The comments are specifically chosen to make the new product sound dramatically new, different, easy, and beneficial, but it doesn't mean your clothes will get any whiter with the new detergent.

Rosetta Moderator: Mod.Sense
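The "different elements processed differently" problem has a name on GPUs: branch divergence. In the SIMD execution model, all lanes in a group step through the same instruction stream, so when lanes disagree on a branch the group pays for both sides. A minimal Python sketch of that cost model (an illustration of the principle, not CUDA's actual scheduler):

```python
def warp_cost(lane_takes_branch, cost_then, cost_else):
    """Cost for one SIMD group ('warp') executing an if/else.

    If every lane agrees, only one side runs; if they diverge, the
    hardware runs both sides with non-participating lanes masked off.
    """
    if all(lane_takes_branch):
        return cost_then
    if not any(lane_takes_branch):
        return cost_else
    return cost_then + cost_else   # divergence: both paths are executed

uniform   = warp_cost([True] * 32, 10, 50)                      # all agree
divergent = warp_cost([i % 2 == 0 for i in range(32)], 10, 50)  # half and half

print(uniform, divergent)  # 10 60
```

A CPU core would pay only for the branch actually taken; a SIMD group pays for the union of all branches its lanes take, which is why code with many differently-handled cases loses the GPU's advantage.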
borg Joined: 4 Dec 07 Posts: 3 Credit: 142,556 RAC: 0
OK, that makes more sense.
©2025 University of Washington
https://www.bakerlab.org