
Message boards : Graphics cards (GPUs) : NVIDIA 560Ti vs GTX680 performance questions

William Timbrook
Joined: 16 Dec 09
Posts: 7
Credit: 1,327,997,278
RAC: 11,380
Message 29614 - Posted: 30 Apr 2013 | 8:57:57 UTC
Last modified: 30 Apr 2013 | 9:01:04 UTC

I have a pair of EVGA Superclocked 560 Ti cards with 384 CUDA cores each. I already RMA'd one of them (too much dust impacted the fan), and the second one is now hitting 85°C with the fan at 78% when doing a long GPUGrid task.

I'll probably be looking at swapping out the video cards within the next 6 months, but I don't know if there's a direct correlation between the CUDA core count and how fast a given task gets finished, assuming the card clock speeds are the same.

Using GPU-Z, my 384-CUDA-core cards run at 88% utilization for GPUGrid, 74% for Einstein (about 20 minutes), 97% for Milkyway (about 7 minutes), and 93% for World Community Grid (about 10 minutes).

Can anyone share how the EVGA Superclocked GTX 680, with its 1536 CUDA cores, handles those kinds of tasks?

I think I understand that the performance of GPU applications will vary with how much bandwidth a task needs, but I'm hoping that the near-90%+ utilization means the tasks could take advantage of more CUDA cores.

My 560 is a 1GB card while the 680 is a 2GB card. I'm not in a position to swap out the motherboard, etc., so I'm going to be running in PCI Express 2.0 slots.

Any comments about potentially getting a GTX 690 instead of a pair of 680s would also be appreciated. I did upgrade from an ASUS motherboard to a Gigabyte Assassin, but I think the damage was already done to my video card fans.



Thanks,
William

Simba123
Joined: 5 Dec 11
Posts: 147
Credit: 69,970,684
RAC: 0
Message 29615 - Posted: 30 Apr 2013 | 12:11:37 UTC

Running crunching tasks, you really have to make sure you keep your cards clean. There are a few tutorials around that you should probably look into.

As for what card to upgrade to:
1. Do you use the computer for anything else?

2. What's your budget?

3. Any power consumption considerations?

The generally accepted bang-for-buck card at the moment is the 660Ti,
though a stock 660 can put up some good numbers too.


William Timbrook
Message 29620 - Posted: 30 Apr 2013 | 16:30:55 UTC

Simba123,


Thanks for the response.

I did poke around and didn't see any "tutorials". I've seen number improvements between cards but nothing that stood out regarding CUDA core counts. You're correct that I haven't been blowing out the innards on a regular basis.

I'm currently using the computer lightly after hours so it can crunch during the day.

Don't really have a budget, but I'm not willing to build a new system since this one is about 1.5 years old, running an Intel 970 with 24GB and a 512GB SATA 3 SSD. Water cooling really isn't an option because of the lack of case space. I'm thinking in the 1k range, which would be either the pair of 680s or the single 690. But I'm looking for an idea of how the jobs respond to the CUDA core count.

I'm currently running an 850W PSU and have a spare 800W (went through a debugging issue with an ASUS board). The system is hooked up to an 850 watt UPS. I'm currently drawing about 530 watts, which was confirmed with a Kill-A-Watt plug-in meter. The UPS states I have about 6 minutes, which is enough for the occasional power flickers we have here during the winter months.

Since I currently have a pair of 560 Tis, I'm assuming the 660Ti would be an incremental upgrade.



Thanks,
William

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 29636 - Posted: 2 May 2013 | 20:08:22 UTC - in response to Message 29620.

I don't have hard result numbers at hand, but you're right: the upgrade from a 500 to a 600 series card in itself isn't that much. I'd only do it if I could sell the 560Ti for a good price. If you do so, GPU-Grid is one of the better projects to run on the Keplers. Performance will improve and power consumption will go down, making it a double-win.

The GTX680 isn't as much faster at crunching than a GTX660Ti as the price might make you believe. Both are clocked comparably, and the bigger card has 1536/1344 = 1.14, i.e. 14% more shaders. It also has 33% more memory bandwidth, but this doesn't usually limit GPU crunching.

I'd expect the GTX680 to be about 14% faster for large WUs, whereas for smaller ones the scaling with core count doesn't seem to work as well.
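As a back-of-the-envelope sketch (assuming equal clocks, perfect scaling with shader count, and no bandwidth limit; the 18,000 s baseline is just an illustrative figure, not a measured one):

```python
# Naive shader-count scaling estimate: GTX680 vs GTX660Ti.
# Assumes equal clocks and perfect scaling with core count,
# which only roughly holds, and only for large WUs.
shaders_680 = 1536
shaders_660ti = 1344

ratio = shaders_680 / shaders_660ti
print(f"Shader ratio: {ratio:.2f}x (~{(ratio - 1) * 100:.0f}% more)")

# Applied to an illustrative 18,000 s long WU on the 660Ti:
baseline_s = 18000
print(f"Estimated GTX680 time: {baseline_s / ratio:.0f} s")  # -> 15750 s
```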

MrS
____________
Scanning for our furry friends since Jan 2002

flashawk
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Message 29640 - Posted: 2 May 2013 | 22:28:50 UTC

Are you using a utility to set up fan profiles? I had the same 560 in the past and was able to manually set the fan at 100%, and it never went over 68°C. Maybe try EVGA's Precision X, MSI's Afterburner, or Zotac's Firestorm. They'll all work with their competitors' cards as long as they're NVIDIA.

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 29642 - Posted: 3 May 2013 | 0:33:04 UTC - in response to Message 29640.

Are you using a utility to set up fan profiles? I had the same 560 in the past and was able to manually set the fan at 100%, and it never went over 68°C. Maybe try EVGA's Precision X, MSI's Afterburner, or Zotac's Firestorm. They'll all work with their competitors' cards as long as they're NVIDIA.

MSI Afterburner works for everything in my experience, maybe with the exception of some ASUS cards with proprietary fan control chips. Have never had to set any fan to 100% though, but all my cases have adequate cooling.

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Message 29663 - Posted: 4 May 2013 | 4:10:27 UTC - in response to Message 29614.

I have two EVGA FTW+ 680 4GB cards running on this project. They tend to finish the long Nathan units in 5 - 6 hours at around 80% GPU usage, usually running around 60 - 63 degrees with the fan at 60%. The cards run at 1163MHz and 1176MHz when on this project.

I don't run Milkyway on my nvidia cards since the ATI HD7950 on my other computer runs through the GPU tasks so much more quickly (8 - 10 minutes on the 680s, 25 - 30 seconds on the 7950). The 680s get through WCG HCC in about 7 minutes. My times for Einstein look about the same as yours.

ExtraTerrestrial Apes
Message 29667 - Posted: 4 May 2013 | 10:02:04 UTC - in response to Message 29663.

5 - 6 hours is also what a GTX660Ti needs for the current long-run Nathans, depending on clock speeds and system. Even the GTX660 is not far behind at approximately 6h.

MrS

Trotador
Joined: 25 Mar 12
Posts: 103
Credit: 9,769,314,893
RAC: 39,662
Message 29672 - Posted: 4 May 2013 | 11:47:31 UTC

Just as a reference, my EVGA GTX 660Ti SC in Linux (no idea of the actual clock speed; anyone know?) finishes the current Nathan units a little below 5 hours, between 17200 and 17900 seconds, most commonly around 17400 sec. They certainly run hot, 75-80°C.

I'm also interested in crunching times for the 650 Ti and turbo models.

Beyond
Message 29674 - Posted: 4 May 2013 | 11:59:55 UTC - in response to Message 29672.

I'm also interested in crunching times for the 650 Ti and turbo models.

I have 3 650 Ti GPUs running GPUGrid. On long-run Nathans they consistently do 8:15 - 8:20. They're OC'd at +110 core and +350 memory, run at around 46-50°C, and sip power compared to my other cards.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 29695 - Posted: 5 May 2013 | 18:36:47 UTC - in response to Message 29672.

Just as a reference, my EVGA GTX 660Ti SC in Linux (no idea of the actual clock speed; anyone know?) finishes the current Nathan units a little below 5 hours, between 17200 and 17900 seconds, most commonly around 17400 sec. They certainly run hot, 75-80°C.

I'm also interested in crunching times for the 650 Ti and turbo models.

The GTX660Ti's will auto-adjust the clocks towards a power usage target. I noted that when I let my card manage its own temperature, it did so very badly; it went up to 80°C. When I increased the fan speed to 74% the temps fell to around 55°C, and as a result the clocks rose and GPU utilization rose slightly too. When I increased the fan rate I noticed the power target fall from ~91% to ~87% while the clocks remained the same (indicating a 4.5% power saving). The clocks then rose several times, in increments of 13MHz. Your time of just under 5h would have been while the GPU was around 50MHz lower than my cards (if they are both clocked the same and perform the same). This ties in fairly accurately with the well-established observation that Linux is around 11% faster than W7. Good to see this reaffirmed.

Matt, your GPU utilization is quite low and most of your NATHAN_dhfr36_ runtimes are about the same as my GTX660Ti's (18,000 sec or more), but I did see one that was ~10% faster (16,781.24 sec) than my fastest. I've seen times of ~15,000 sec from a GTX690 on W7, and 13,700 sec from a GTX680 on XP.

These things suggest that your setup is restricting your GPU performance somehow. My guess is that if you free up the CPU a bit you could see a 10% or more improvement in the run times of your GPUs.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Matt
Message 29700 - Posted: 6 May 2013 | 5:54:01 UTC

Just averaged out my 12 most recent long tasks on my two GTX 680s. Came out with 20599 seconds. How does that compare with everyone else? Of those 12, the shortest was 16781 and the longest was 24861.

Curious how anyone else running GTX 680s is doing. Anyone getting higher clock speeds (mine are 1163 & 1176 MHz for each card on this project) without OC'ing? If you're OC'ing, are you able to get this project to run smoothly? I've had no luck there so far using EVGA Precision X.

Anyone have any ideas on how running other projects affects the run-times for this project? My CPU is i7-3770k and I have 7/8 cores enabled in BOINC. Running both GPUs on GPUgrid means I only have 6/8 running other projects (BOINC 7.0.64). I run a number of different projects.

flashawk
Message 29701 - Posted: 6 May 2013 | 7:43:18 UTC

I'm running 2 GTX680's on this computer

http://www.gpugrid.net/results.php?hostid=129644

It's 13700 - 13800 per run on NATHAN long; I've got XP Pro x64 on it, which is a little faster.

William Timbrook
Message 29702 - Posted: 6 May 2013 | 7:46:24 UTC

flashawk, I'm using EVGA's Precision X and yes, I can crank up the fans, but since I'm not OC'ing beyond what EVGA set, I'm assuming the default fan settings should be fine.

Matt, thanks for your May 4th post on your numbers on the 680s. That's what I wanted to know instead of using the theoretical: http://www.hwcompare.com/12368/geforce-gtx-560-ti-vs-geforce-gtx-680/

It sounds like I'm about 2 to 3 hours behind on the long Nathan units, since I'm coming in between 8 and 8.5 hours on the 560 Tis.

I know this is a different question and maybe should be a different thread, but I'll ask anyway. I've seen some references suggesting you might be able to run 2 different classes of video cards in the same machine? Maybe BOINC will pick them up, or maybe you need to tweak a config file? Comments?



Thanks,
William




skgiven
Message 29703 - Posted: 6 May 2013 | 8:41:19 UTC - in response to Message 29702.
Last modified: 6 May 2013 | 9:06:30 UTC

Matt, W7 is ~11% slower than XP, so all else being equal you should be getting run times of ~15,263 sec. At present your average is ~30% slower than it should be - that's an alarmingly poor performance! I suggest you begin by reducing your CPU usage to <75% (say 70% of the CPUs); each GPUGrid WU will use a full CPU core when running on your Keplers.

We don't see this when looking at your run times!
For example, your GTX680's run times,
I27R18-NATHAN_dhfr36_6-9-32-RND3323_0 4424143 4 May 2013 | 8:36:40 UTC 5 May 2013 | 19:45:33 UTC Completed and validated 24,470.02 19,380.00 59,000.00
I35R15-NATHAN_dhfr36_6-8-32-RND8996_0 4423003 4 May 2013 | 0:22:00 UTC 5 May 2013 | 5:21:20 UTC Completed and validated 19,955.23 18,754.91 59,000.00
I33R15-NATHAN_dhfr36_5-16-32-RND2126_0 4422237 3 May 2013 | 18:44:18 UTC 5 May 2013 | 2:25:44 UTC Completed and validated 19,275.76 16,659.75 59,000.00
I43R16-NATHAN_dhfr36_5-16-32-RND4262_0 4422052 3 May 2013 | 16:57:56 UTC 4 May 2013 | 13:56:54 UTC Completed and validated 18,363.21 16,200.13 70,800.00

Compare a couple of my runs (660Ti),
I48R5-NATHAN_dhfr36_6-11-32-RND9074_0 4427445 5 May 2013 | 10:01:37 UTC 6 May 2013 | 3:06:06 UTC Completed and validated 19,077.53 19,077.53 70,800.00
I72R12-NATHAN_dhfr36_6-12-32-RND7519_0 4426662 5 May 2013 | 4:20:50 UTC 5 May 2013 | 12:36:12 UTC Completed and validated 18,930.02 18,930.02 70,800.00

My understanding is that Run Time and CPU Time should be identical under optimal conditions for GPU crunching here. If they are not, you are running sub-optimally. My times are identical (because I prefer to run using <100% CPU usage). This way the CPU is never saturated and is always available to service the GPU. The fact that some of your tasks have a Run Time to CPU Time delta of 25% is key to your problems.
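To put rough numbers on that delta (a sketch, computed here relative to run time, so the exact percentages may differ slightly from the ones quoted in the posts; the task figures are the ones listed in this thread):

```python
# Run Time vs CPU Time delta as a rough sub-optimality indicator.
def cpu_delta_pct(run_time_s, cpu_time_s):
    """Percent of wall-clock time the WU ran without a full CPU core."""
    return (run_time_s - cpu_time_s) / run_time_s * 100

# Matt's slowest GTX680 task: 24,470.02 s run, 19,380.00 s CPU
print(f"Matt:     {cpu_delta_pct(24470.02, 19380.00):.1f}%")  # -> 20.8%

# flashawk's GTX680 task: 13,789.27 s run, 13,678.75 s CPU
print(f"flashawk: {cpu_delta_pct(13789.27, 13678.75):.1f}%")  # -> 0.8%
```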

flashawk's times suggest he is within ~0.8% of optimal WRT his CPU usage,
6835642 4429145 6 May 2013 | 0:07:36 UTC 6 May 2013 | 7:48:21 UTC Completed and validated 13,789.27 13,678.75 70,800.00
6835638 4426644 5 May 2013 | 23:29:19 UTC 6 May 2013 | 6:55:22 UTC Completed and validated 13,799.34 13,703.42 70,800.00

Similarly, Zoltan's GTX680 runtimes are within ~0.3% of optimal WRT CPU usage,
I28R7-NATHAN_dhfr36_5-17-32-RND0133_0 4429334 6 May 2013 | 0:48:59 UTC 6 May 2013 | 8:23:48 UTC Completed and validated 13,764.26 13,715.78 70,800.00
I49R20-NATHAN_dhfr36_6-4-32-RND7847_1 4429122 5 May 2013 | 23:32:31 UTC 6 May 2013 | 7:08:39 UTC Completed and validated 13,657.53 13,594.50 70,800.00

William, I suggest you go by flashawk and Zoltan's GTX680 times when comparing your GPU's performance (at least for now).
I have used a GTX470 and a GTX660Ti in the same system to run GPUGrid tasks. Nothing special required. At present I have an ATI and a NVidia in a system, running different projects. Again, it's just a matter of installing the drivers and attaching to a GPU project. So, generally it's not a problem, however there are some exceptions, so if you are thinking about adding another card from a different generation ask for advice (best to start a new thread or use an existing similar thread). One issue that springs to mind is that I can't control the fan speeds of my MSI 5850 using MSI Afterburner when I have the GTX660Ti in the same system. Last time I had two NVidia's in the same system this wasn't an issue.

Beyond
Message 29704 - Posted: 6 May 2013 | 12:23:25 UTC - in response to Message 29703.

One issue that springs to mind is that I can't control the fan speeds of my MSI 5850 using MSI Afterburner when I have the GTX660Ti in the same system. Last time I had two NVidia's in the same system this wasn't an issue.

What happens? I'm running 7 machines with mixed ATI/AMD & NVIDIA cards, and Afterburner controls them all with no problems (including fan speeds). In fact, one of them has an MSI HD 5850 along with an EVGA GTX 460. The only issues I've ever run into with controlling fan speeds are on some ASUS (grrrrrrr) cards with their brain-dead proprietary fan controller, which can only be seen by their infantile SmartDoctor utility.

skgiven
Message 29710 - Posted: 6 May 2013 | 22:34:42 UTC - in response to Message 29704.

I don't know why it doesn't work. I've tried uninstalling and reinstalling Afterburner. Although it lists the ATI card as being there, it doesn't list the ATI's clocks, fan speed or temps. The fan control is also greyed out.
I was thinking of swapping the ATI out and replacing it with a 470, but it's a W7 rig, and I'm not happy about losing 11% performance for one GPU, never mind two.

Beyond
Message 29713 - Posted: 7 May 2013 | 2:01:32 UTC - in response to Message 29710.

I don't know why it doesn't work. I've tried uninstalling and reinstalling Afterburner. Although it lists the ATI card as being there, it doesn't list the ATI's clocks, fan speed or temps. The fan control is also greyed out.
I was thinking of swapping the ATI out and replacing it with a 470, but it's a W7 rig, and I'm not happy about losing 11% performance for one GPU, never mind two.

Is the ATI in the primary or secondary PCIe slot?

skgiven
Message 29718 - Posted: 7 May 2013 | 8:48:21 UTC - in response to Message 29713.

The 660Ti is in the top slot and the ATI is in the lower PCIE3 slot; however, MSI Afterburner reports the GTX660Ti as GPU2. Device Manager reports that the 660Ti uses PCI device 1 and the ATI uses PCI device 2. DXDIAG just lists the 660Ti (as it's used for the display). From testing I'm sure that the top slot is the main slot; it's the one that operates at PCIE3 x16, while the lower slot operates at PCIE3 x8 (and forces the top slot to drop to PCIE3 x8 when the lower slot is occupied). I think the lower slot is presently operating at PCIE2 x16 and again forcing the top slot to PCIE2 rates. Perhaps Afterburner is struggling with PCIE3/2.

I expect the ATI is generating a bit of heat and its fans could be turned up to reduce this, but it's not too big a deal; the 660Ti is at 66°C, and I have a case fan blowing directly onto both GPUs and another sitting at the rear of the case pulling the hot air away.

William Timbrook
Message 29768 - Posted: 9 May 2013 | 5:36:11 UTC

skgiven,


Looking at your "6 May 2013 | 9:06:30 UTC" posting to Matt, your comment:

My understanding is that Run Time and CPU time should be identical under optimal conditions for GPU crunching here. If they are not you are running sub-optimally. My times are identical (because I prefer to run using <100% CPU usage). This way the CPU is never saturated and always available to service the GPU. The fact that some of your tasks have a Run Time to CPU Time delta of 25% is key to your problems.

I know my CPUs are busy, but:
6836895 4430213 106041 6 May 2013 | 8:11:12 UTC 6 May 2013 | 17:00:28 UTC Completed and validated 31,134.25 4,468.79 70,800.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)
6834899 4420368 106041 5 May 2013 | 18:59:58 UTC 6 May 2013 | 3:56:50 UTC Completed and validated 29,190.20 4,219.95 70,800.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)
6834834 4428103 106041 5 May 2013 | 18:59:58 UTC 6 May 2013 | 8:10:06 UTC Completed and validated 29,514.90 4,545.73 70,800.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)


My machine is an Intel 980 6-core with hyperthreading running at 3.33GHz, and my BOINC preferences are set to let it use everything. If I'm really that far off, then would you suggest backing off this machine by using
http://boinc.berkeley.edu/wiki/Client_configuration - <ncpus>N</ncpus>?
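If I'm reading that wiki page correctly, the cc_config.xml fragment would look something like this (untested on my end; the value 10 is just an example for my 12-thread i7-980, leaving a couple of threads free to feed the GPUs):

```xml
<cc_config>
  <options>
    <!-- Report fewer CPUs to BOINC than the machine has,
         e.g. 10 of the i7-980's 12 threads.
         Example value only. -->
    <ncpus>10</ncpus>
  </options>
</cc_config>
```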


As for mixing the 2 video cards, I'd probably run with a 560 Ti and a 680.

Thank you for the insight.



William

William Timbrook
Message 29769 - Posted: 9 May 2013 | 5:52:06 UTC - in response to Message 29768.

I guess where I'm getting confused - sort of - is that this is what my BOINC Manager looks like. I understand that hyperthreading is just cramming more stuff into the same pipe.




Thanks,
William

skgiven
Message 29772 - Posted: 9 May 2013 | 10:12:26 UTC - in response to Message 29769.
Last modified: 9 May 2013 | 10:12:56 UTC

William, your system is working normally.

It's only the Kepler GPUs that are optimal when using a full CPU core, and this was only apparent for the NATHAN_dhfr36_ WUs. It actually suggests that Noelia's work units might benefit from having more CPU access, but that's a different issue.

Your GTX560Ti is a Fermi (not a Kepler). It's Compute Capability 2.1 (while Keplers are CC3.0, and Titan is CC3.5). The Fermi GPUs don't use as much CPU because they perform some calculations on the GPU that the Kepler cannot (different architectures), so the Keplers use more CPU (well, that's my simplified interpretation). The theory is that if the Keplers are competing for CPU access, it will slow down the WU. This was demonstrated to be the case, to some extent, for both Nathan's and Noelia's WUs. Basically, if you use too much CPU crunching CPU tasks for other projects, it will slow the work down here (again, only for the Kepler GPUs, not your Fermi).

Your "0.571 CPUs" is just an estimate, and is usually inaccurate. My "estimate" is ~0.687 for both Nathan's and Noelia's WUs on both my Kepler and Fermi; however, the Kepler actually uses 100% of a CPU core for Nate's work and 44% for Noelia's, while my Fermi uses ~12.5% for Nate's work units and between 5 and 10% for Noelia's.

William Timbrook
Message 29786 - Posted: 10 May 2013 | 7:08:05 UTC

skgiven et al.

Thanks for the explanations!

William

Matt
Message 30074 - Posted: 18 May 2013 | 21:10:14 UTC - in response to Message 29703.
Last modified: 18 May 2013 | 21:15:20 UTC

skgiven, thanks for the advice. I didn't realize this, as the tasks report using 0.721 cores each, so together the two 680s would only remove 1 of my cores from crunching CPU tasks (I knew the math didn't make sense). I've now lowered my CPU usage. Hopefully this will improve my contributions. Looking into other sources of slowdown as well.

Matt
Message 30362 - Posted: 25 May 2013 | 18:10:18 UTC - in response to Message 29703.

This may be a silly question, but in BOINC preferences, do I need to set the number of processors used to a certain ratio of the total processors? For example, if I set it to 75% of my 8 cores, that will enable 6 cores. If I set it to 62.5% that will enable 5 of 8 cores. If I were to set it to 70%, this would be 5.6 cores. Is this possible? How does this work, or should I stick to percentages that give whole numbers of cores?

skgiven
Message 30363 - Posted: 25 May 2013 | 18:23:21 UTC - in response to Message 30362.

It can't do fractions of a core/thread, so 75% up to 86% is still 6/8 threads. 87.5% is 7/8 threads, as is 89%, 94% and 99%. 100% is all CPU threads, 8/8.
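Sketched out (assuming BOINC simply drops the fractional part, which is what the numbers above indicate):

```python
# BOINC's "use at most N% of processors" preference maps to whole
# threads; the fractional part is dropped (floor).
def threads_enabled(percent, total_threads=8):
    return int(percent / 100 * total_threads)

for pct in (62.5, 70, 75, 86, 87.5, 94, 100):
    print(f"{pct:>5}% -> {threads_enabled(pct)}/8 threads")
# 70% and 86% still give 5 and 6 threads respectively;
# only at 87.5% does the 7th thread kick in.
```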

Matt
Message 30374 - Posted: 26 May 2013 | 0:06:41 UTC - in response to Message 30363.
Last modified: 26 May 2013 | 0:07:09 UTC

Alright, that's what I thought. Just wanted to make sure I wasn't missing something. Thank you.
