Message boards : Graphics cards (GPUs) : Resource utilisation: still too immature
Here is my conclusion after running GPUGrid for a few hours. I have previously run another non-BOINC project, Folding@Home GPU, for a couple of days as well.
ID: 6532
> BOINC-specifically, GPUGrid is actually wasting CPU time. I have a 4-core CPU. Normally all 4 cores are kept busy by various BOINC tasks. In comes GPUGrid. It is counted as a task, but as it runs on the GPU, one of my CPU cores is not utilised. I tried setting the limit in preferences to allow 5 simultaneous tasks on a multi-core CPU, but BOINC would still only start 4, and with GPUGrid barely touching the CPU, the CPU is only 75% used.

It depends on the BOINC version you run. You can have 4+1 on a quad-core machine (assuming you have 1 GPU). Under 6.4.5 you can fudge it by overriding the number of CPUs; under 6.5.0 it works okay (but it won't shut down tasks on exit). The various 6.6.x versions will also run 4+1 but have work-fetch issues which they are still sorting through. The current GPUgrid app (6.62) wants 0.13 of a core to feed the GPU, so it will happily co-exist with another science app running on the same core.
____________
BOINC blog
ID: 6533
One other note: memory is important. I have 1 GB or better on all my GPU cards and see little of the stuttering and the like on any system. Of course I do not do high-intensity gaming ... but even on the 9800 GT there are no hesitations on the system that I can point to the GPU as the cause ...
ID: 6537
> You can have 4+1 on a quad-core machine (assuming you have 1 GPU). Under 6.4.5 you can fudge it by overriding the number of CPUs, ...

I am running 6.4.5. Could you tell me how to override the number of CPUs? I've already set the web preference "On multiprocessors, use at most ... processors" (the one enforced by version 5.10 and earlier) to 5, but I can't find the option to set the exact number of CPUs.

I have an NVIDIA GeForce 8600 GT overclocked from 475/475 to 585/555 with 512 MB of dedicated video memory, driver version 181.22. The system itself has 4 GB of RAM (Vista 64-bit). Even though GPUGrid is useless while I am using the machine, I might still run it during the nights. It seems I will manage the deadlines (according to BoincView 1.4.2 estimates).
ID: 6538
Just use 6.5.0 and running 4+1 shouldn't be a problem.
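If you'd rather stay on 6.4.5, the CPU override is normally done with a cc_config.xml file in the BOINC data directory; a minimal sketch (the value 5 is just an example for a quad core feeding one GPU task):

    <cc_config>
      <options>
        <ncpus>5</ncpus>
      </options>
    </cc_config>

Restart BOINC, or have the Manager re-read the config file, and it should then be willing to run 5 tasks at once.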
ID: 6546
> Just use 6.5.0 and running 4+1 shouldn't be a problem.

Thanks for the update ... If I knew what I was doing I would not be here ... :) I am trying to learn as fast as I can ... I just can't keep all these different cards straight ... even with the lists ... sigh ... Some day I will get it right ...
ID: 6549
You're doing fine :)
ID: 6551
> as soon as someone mentions a card slower than a 9600GT, it's just not worth keeping these in mind!

My twin 8800 GTs' little brother, the 9500 GT, is busting his butt crunching, even if it does take the poor guy about 40 hours to crunch one WU. :-) Don't forget about the little guys. LOL
ID: 6553
I also think that this "all or nothing" GPU usage is just too restrictive. I know my graphics card might not be "fast enough" for GPUGRID, but it is able to finish GPUGRID WUs in time (GeForce 9400 GT, 512 MB). The only problem is that I can run 11 other BOINC (CPU) projects at 100% CPU usage in the background without any issues, but GPUGRID sucks so much out of the GPU that desktop usage becomes sluggish and choppy, which is no fun.
ID: 6578
> I often read that the SETI@home GPU version doesn't have this problem (can't check, as I have Linux and there is no CUDA version for Linux out yet). So it doesn't seem to be a CUDA API problem.

I explained this many times before, so I'll keep it short: currently the GPU is blocked as long as it executes general-purpose calculations. In SETI or Folding@home the calculations can be split into smaller parts, and you get screen refreshes in between. That's why you don't see the lag as badly. It's a limitation of DirectX 10, CUDA, drivers, GPU-Grid ... everything together.
MrS
____________
Scanning for our furry friends since Jan 2002
ID: 6582
Okay, this means it's a problem of the GPUGRID WUs. I don't know the WUs' algorithms, but I'm a software developer and I can't see a problem with designing the WU processing in a more resource-friendly way. I guess all BOINC projects have calculations inside loops with countless iterations, so it should be no problem to split such calculations into smaller "chunks" ... just like SETI@home and Folding@home?
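Something along these lines (just a hand-written sketch of what I mean, not GPUGRID's actual code; the kernel, step counts and launch sizes are all invented): issue the same total work as many short kernel launches instead of one long launch, so the driver gets gaps between launches in which it can redraw the screen.

    #include <cuda_runtime.h>

    // Invented stand-in for one piece of the science calculation.
    __global__ void simulation_step(float *state, int n, int steps)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        for (int s = 0; s < steps; ++s)
            state[i] = state[i] * 0.999f + 0.001f;   // placeholder arithmetic
    }

    // One long launch: the GPU is busy until every step is done,
    // so the desktop cannot be redrawn in the meantime.
    void run_monolithic(float *d_state, int n, int total_steps)
    {
        int blocks = (n + 255) / 256;
        simulation_step<<<blocks, 256>>>(d_state, n, total_steps);
        cudaDeviceSynchronize();
    }

    // The same work as many short launches: between launches the GPU is
    // briefly idle, which gives the driver a chance to refresh the screen.
    void run_chunked(float *d_state, int n, int total_steps, int chunk)
    {
        int blocks = (n + 255) / 256;
        for (int done = 0; done < total_steps; done += chunk) {
            int steps = (total_steps - done < chunk) ? (total_steps - done) : chunk;
            simulation_step<<<blocks, 256>>>(d_state, n, steps);
            cudaDeviceSynchronize();   // wait here before issuing the next chunk
        }
    }

The obvious cost is the extra launch and synchronisation overhead once the chunks get very small.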
ID: 6584
> Okay, this means it's a problem of the GPUGRID WUs.

Well, a property of the GPU-Grid WUs. They tried to split the calculations even further, but performance was horrible.
MrS
____________
Scanning for our furry friends since Jan 2002
ID: 6589