Message boards : Graphics cards (GPUs) : NVidia GTX 650 Ti & comparisons to GTX660, 660Ti, 670 & 680

Profile skgiven
Volunteer moderator
Volunteer tester
Message 27059 - Posted: 9 Oct 2012 | 21:50:09 UTC
Last modified: 9 Oct 2012 | 22:31:19 UTC

As expected, NVidia has released an additional GeForce card in the form of the GTX650Ti, filling out the GeForce 600 range.

The GK106 GTX650Ti has 768 CUDA cores (shaders); reference models are clocked at 925MHz, sport 1GB GDDR5 and have a TDP of 110W (so one 6-pin connector is sufficient).
Manufacturer variants will include 2GB versions and a range of clocks from 925MHz to 1071MHz (~16% range so far)…

As with other GK cards, these are PCIE3 compliant, but like the GTX650 they do not feature boost (get a fast one)!

Cost:
In the UK these begin at £115 and rise to ~£135 (at good online retailers).
In the USA the starting price is $145, and in the Euro zone they also start around 145Euro.

Performance:
With 768 CUDA cores these cards fall between the GTX650 and the GTX660Ti (the GTX660 is OEM only), and should give roughly twice the performance of a GTX650.

In terms of performance per Watt, the GTX650Ti should slightly outperform the GTX650 on its own, and in terms of performance per outlay (purchase cost) it's certainly worth the extra $40 for twice the crunching performance!

While it’s generally preferable for larger cards to have 2 or 3 fans, I think one fan should be sufficient for these cards. I would however be concerned about the size of the fan; in my experience small fans fail very frequently, while larger fans generally keep running for years.

Overclocking aside (if even possible), factory-improved cards (cards with higher frequencies) generally give at least as good performance per Watt as reference models. So once you add the system Wattage overhead (motherboard, CPU, RAM, HDD...), they are better value for money. For the same reason it's usually more efficient overall to get a 'bigger' card.
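A rough sketch of that value argument in Python (the 60 W system overhead, the GTX650's ~64 W TDP and ~$105 price, and the 2x performance ratio are illustrative assumptions based on the figures in this post, not measurements):

system_overhead_w = 60  # assumed: motherboard, CPU, RAM, HDD...
cards = {
    # name: (relative crunching performance, card TDP in W, assumed price in USD)
    "GTX650":   (1.0,  64, 105),
    "GTX650Ti": (2.0, 110, 145),
}
for name, (perf, tdp_w, price) in cards.items():
    perf_per_system_watt = perf / (tdp_w + system_overhead_w)   # include the fixed system overhead
    perf_per_dollar = perf / price
    print(f"{name}: {perf_per_system_watt:.4f} perf/system-W, {perf_per_dollar:.4f} perf/$")
# The faster card comes out ahead on both metrics once the fixed system overhead is counted.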
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 27060 - Posted: 10 Oct 2012 | 17:28:39 UTC - in response to Message 27059.

Update: GTX660 with GK106 and 960 shaders at 980 MHz went retail a few weeks ago.

MrS
____________
Scanning for our furry friends since Jan 2002

werdwerdus
Message 27117 - Posted: 21 Oct 2012 | 17:36:56 UTC
Last modified: 21 Oct 2012 | 17:46:12 UTC

Several e-tailers are offering a free download code for Assassin's Creed III. This could likely be sold (e.g. on eBay) for at least $30, making this a VERY high value card IMO!

edit: just purchased an EVGA GTX 650 Ti SSC at 1071 MHz for $159.99 - $5.00 mail-in rebate; plan to sell the game code for likely $30-$45 = approx. $114.99 to $129.99 total price :D
____________
XtremeSystems.org - #1 Team in GPUGrid

Profile dskagcommunity
Message 27129 - Posted: 23 Oct 2012 | 12:12:56 UTC

Yes, I read about the release of this card too and was impressed by the number of CUDA cores.
____________
DSKAG Austria Research Team: http://www.research.dskag.at



werdwerdus
Message 27178 - Posted: 30 Oct 2012 | 2:09:15 UTC - in response to Message 27117.

Several e-tailers are offering a free download code for Assassin's Creed III. This could likely be sold (e.g. on eBay) for at least $30, making this a VERY high value card IMO!

edit: just purchased an EVGA GTX 650 Ti SSC at 1071 MHz for $159.99 - $5.00 mail-in rebate; plan to sell the game code for likely $30-$45 = approx. $114.99 to $129.99 total price :D


Well, I was able to sell the Assassin's Creed 3 code for $43. But the buyer is claiming the code is "already used", so I may end up being out the $43 with nothing to show for it if the buyer requests a refund through PayPal. :(
____________
XtremeSystems.org - #1 Team in GPUGrid

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 27179 - Posted: 30 Oct 2012 | 8:55:54 UTC - in response to Message 27178.

Ouch, that's pretty bad! By now you won't be able to use the code either, and you'll never know if that guy is happily running the game now.. or is there anything the game distributor could help you with here? Like invalidate all current installations using this code?

MrS
____________
Scanning for our furry friends since Jan 2002

werdwerdus
Message 27187 - Posted: 31 Oct 2012 | 1:41:17 UTC - in response to Message 27179.

The buyer tried it again the next day and it worked!
____________
XtremeSystems.org - #1 Team in GPUGrid

voss749
Message 27200 - Posted: 2 Nov 2012 | 22:59:03 UTC - in response to Message 27059.

For the price it's a fantastic number cruncher. It's doing a long work unit in 8 hours!

voss749
Message 27292 - Posted: 12 Nov 2012 | 20:44:17 UTC - in response to Message 27200.

My 650 Ti is currently posting slightly better averages than my 560, and it has not peaked yet, so I'm not exactly sure what its peak performance is.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 27293 - Posted: 12 Nov 2012 | 21:21:02 UTC - in response to Message 27292.

About 190k RAC using only long-run tasks.

MrS
____________
Scanning for our furry friends since Jan 2002

Jim1348
Message 27378 - Posted: 22 Nov 2012 | 16:47:13 UTC - in response to Message 27293.

How does it do for desktop lag? I usually use a dedicated card for DC projects, but this time can only free up the display card slot. I do some video editing, but otherwise just do web browsing with Firefox.

The GT 440 (96 cores) I tried on GPUGrid was not so great for either desktop lag or speed. But GPUGrid seems to be a good match for the Keplers, which is not true for all projects, as you know.

werdwerdus
Message 27379 - Posted: 22 Nov 2012 | 17:21:55 UTC

Worst case scenario, you can just disable "Use GPU while computer is in use" in BOINC.
____________
XtremeSystems.org - #1 Team in GPUGrid

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 27380 - Posted: 22 Nov 2012 | 19:45:49 UTC

With my GTX660Ti I didn't notice disturbing desktop lag, although HD video playback may sometimes have dropped below 25 fps for short periods. But it's got a lot more raw horsepower than a GTX650Ti.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile X1900AIW
Message 27400 - Posted: 24 Nov 2012 | 16:25:08 UTC
Last modified: 24 Nov 2012 | 16:27:01 UTC

Short test with a GTX650Ti (MSI Power Edition/OC) & ACEMD2: GPU molecular dynamics v6.16 (cuda42), i7-3770K @3.8 GHz, 9 threads in BOINC (8x SIMAP, 1x GPUGrid); no problem if this slows down the performance. Progress counts up steadily across all nine threads.

3x @Stock (993/1350)
Runtimes in s
- 10,260
- 10,294
- 10,207 : mean 10254

1x @ Core +211 = 1202Mhz (Memory @stock)
Runtime in s
- 9,136
- 9,406
- (third in queue) : mean 9271 (-983)

OC +22% (temperatures + 5 degrees Celsius, VCore 1.087 V > 1.150 V)
Result +9% (983/10254)

Question
With previous GPUs it was "safe" to concentrate on shader OC; overclocking the memory (I found) was not recommended if you wanted to keep reliability.

What about the new Kepler GTX 650 Ti (GK106-220): how safe is it to tune up the memory clock? Any experiences available, or should I just give it a try?

I would prefer valid results; I wouldn't want to test just a small increase, but a "respectable" one. In a related OC review they OCed the memory from 1350 to 1550 (+200 MHz); do you think that is extreme, lucky, or necessary to match the core OC (+211)?

Can such a recommendation be made for the particular kind of calculations and requirements of GPUGrid workunits? Does GPUGrid need memory performance? More than core clock, or a balanced mix? (I can guess your answer: a balanced mix.)

In other words: how do I get the missing 13 percent? (22-9) ;-)

I have no experience with Kepler GPUs; separate shader tuning is probably outdated by now.

P.S. Yes, of course, no testing with scientific workunits as a benchmark; I'm thinking of OpenCL benchmarks to verify the stability of OCing both core and memory. Temperatures are never a problem, as my short test shows, though the card no longer runs as quietly at GPUGrid as it did at WCG/HCC1; it makes a distinct noise (fan @47% = 1600 rpm). I bought the GTX 650 Ti as a small & quiet cruncher with low idle consumption when paused. Mission accomplished; now fine tuning shall make it perfect.
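One hedged way to frame the "missing 13 percent": if only a fraction of the runtime scales with the core clock (the rest being bound by memory, CPU feeding, PCIe and so on), the speedup from a core OC is capped. A minimal sketch using only the mean runtimes quoted above (the single-fraction model itself is an assumption for illustration):

# Assume a fraction f of the runtime scales with core clock; the rest does not.
# Then new_time = old_time * (f / 1.22 + (1 - f)); solve for f from the observed result.
old_time, new_time = 10254, 9271       # mean runtimes from the post above (+22% core OC)
clock_ratio = 1.22
f = (1 - new_time / old_time) / (1 - 1 / clock_ratio)
print(f"core-clock-bound fraction of runtime: ~{f:.0%}")   # roughly half in this test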

Profile X1900AIW
Message 27406 - Posted: 24 Nov 2012 | 20:24:10 UTC

(edit timeout ?)

1x workunit (ACEMD2) @ Core +211 / Memory +198 = 1202/2898 MHz (1.175 V)
- 9,057 (-1197)

OC Core +22%
OC Memory +14%
Result +11.6% (1197/10254) - was more of that gained from the core or the memory?

That should be the limit. In the next few days I'll look at undervolting and better balancing.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 27417 - Posted: 25 Nov 2012 | 21:53:34 UTC - in response to Message 27406.

Your core OC still yielded more, percentage-wise, than your mem OC. However, the GT640, GTX650Ti and GTX660Ti are less balanced than usual cards, in that they pack a surprising amount of shader power into a package with "usually just enough" memory bandwidth. In games these cards love memory OCs, much more so than usual cards. It seems that on these cards GPU-Grid is finally also affected by the mem clock by a measurable amount.

You can take a look at the memory controller load via e.g. GPU-Z. Your numbers should be interesting, with and without OC. I've got a slightly core-OC'ed GT640 at ~66% mem controller load and a slightly core-OC'ed GTX660Ti at 44%.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile X1900AIW
Message 27425 - Posted: 26 Nov 2012 | 5:49:09 UTC

Thank you. Memory controller load is ~48% in OC mode; switched to normal mode (= MSI core OC, memory at stock) it is ~46%. Underclocking the core to 928 MHz was not possible; the -65 offset was not accepted.

Latest runtimes in s with this OC setting:
- 8,791
- 8,822
- 8,765 : mean 8793

Result +14% (1461/10254). That's OK; as long as this setting produces valid results I can live with it.

Should I try a long run workunit? Host 135701 has never crunched one of them; maybe they are reserved for field-testing on the really fast GPUs. I'm quite happy to join in with this starter piece of Kepler.

Profile skgiven
Volunteer moderator
Volunteer tester
Message 27426 - Posted: 26 Nov 2012 | 11:18:59 UTC - in response to Message 27425.

Should I try a long run workunit?

Yes...
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile X1900AIW
Message 27464 - Posted: 28 Nov 2012 | 20:58:25 UTC

Two long run workunits finished (GTX 650 Ti; GK106, 1GB). It looks like these are different WU types, so sorry, not enough samples to show the effect of OCing:

- 29,753 s: 60,900 credits (run @stock frequencies) /VRAM used: 965 MB - would a 2GB card make a difference?
- 41,032 s: 103,500 credits (run @Core +207 & Memory +400 MHz) /VRAM used: 920 MB
(+14% faster than the 48,030 s below, without knowing that host's frequencies)

The memory usage is higher than with short run workunits, close to the card's built-in 1 GB limit.



For comparison I searched for hosts at related performance levels:

first example of ...GTX 650 TI (GK106; 1GB) ... Linux !
- 28,771 s: 60,900 credits
- 48,030 s: 101,400 credits (base for OC effect, see above, if valid)
- 60,746 s: 113,400 credits - are even longer workunits being distributed?

second example of ... GTX 650 TI (GK106; 2GB) ...
- 28,436 s: 60,900 credits
- 46,178 s: 103,500 credits
- 54,151 s: 101,850 credits - bonus missed ?

example of ... GTX 660M (GK107) ...
- 41,402 s: 60,900 credits
- 69,650 s: 101,400 credits

example of ... GT 640 (GK107) ... Linux !
- 42,481 s: 60,900 credits
- 70,912 s: 101,400 credits
- 68,785 s: 103,500 credits

example of ... GTX 560 (GF110 or 114; 336 or 384 shaders) ...
- 29,081 s: 60,900 credits
- 50,328 s: 101,400 credits
- 63,407 s: 113,400 credits

example of ... GTX 560 TI (GF 110 or 114; 352 or 384 or 448 shaders) ... Linux !
- 20,379 s: 60,900 credits
- 34,320 s: 101,400 credits

example of ... GTX 660 (GK106) ...
- 20,516 s: 60,900 credits
- 34,253 s: 101,400 credits

My conclusion
No errors so far, in spite of extensive OCing; overvolting at the highest level probably helped.
The GTX 650 Ti performs respectably whether stock or OCed. Between 1GB and 2GB there is no apparent difference. I can't quite work out how workunits map to credit, but that doesn't matter; more interesting, I think, is the time taken to reach a certain credit level. The CPU share of the work appears high (CPU time), but doesn't differ much between Windows and Linux.

I am not able to analyze all factors; some of the linked examples may be OCed in one or all of their results, so this synopsis may contain a lot of mistakes. In the end this investigation isn't significant at all. Hope you enjoyed the show .. ehm ... the reading, at least. ;-)

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 27465 - Posted: 28 Nov 2012 | 22:19:30 UTC - in response to Message 27464.

Thanks for collecting and sharing that data. This should be enough for anyone who wants to know how well this card (and some comparably fast ones) performs :)

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Message 27476 - Posted: 29 Nov 2012 | 13:58:30 UTC - in response to Message 27465.
Last modified: 29 Nov 2012 | 14:04:23 UTC

Interesting read.

- 54,151 s: 101,850 credits - bonus missed ?

The bonus wasn't missed; the task was returned & reported inside 24h. In this case the credit is less than might be expected due to the WU type; you can see the credit doesn't line up with other WU's of similar runtime. Usually WU's grant the same credit per unit time (give or take a very small percentage), however some were not assessed accurately enough and the credit awarded is skewed one way or another - in this case downward compared to the others. So you really have to compare like-for-like Work Units to get enough accuracy to compare card performances.
The tasks that were granted 60,900 credits are the giveaway for GPU comparison. Of note is that the GTX650Ti matches a GTX560 for performance. This is a good generation-on-generation marker.
With more powerful cards other variables (and there are many that influence performance) can significantly affect GPU comparison. Yesterday I was looking at two GPU's of the same type on different OS's. I wanted to work out the relative performance influence of the operating systems, but I encountered a discrepancy - basically it was due to the system RAM; DDR2-800 vs DDR3-2133. It's been a while since I've encountered such an unbalanced system. You just can't configure your way out of slow memory.
At this stage I'm fairly convinced that a GTX660Ti matches a GTX580, so we have a few generation on generation markers. I think it's still up for debate whether or not the GTX660Ti fully matches a GTX670. The speculation is that it does, or near-enough, but without hard figures we don't know for sure.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile X1900AIW
Message 27478 - Posted: 29 Nov 2012 | 16:17:59 UTC

Many thanks for the explanations !
I will continue with long runs.

Profile dskagcommunity
Message 27479 - Posted: 29 Nov 2012 | 17:02:44 UTC

Yeah, thanks, perfect analysis of the 650 Ti; I was waiting for something like that :)
____________
DSKAG Austria Research Team: http://www.research.dskag.at



John C MacAlister
Message 28625 - Posted: 20 Feb 2013 | 2:32:39 UTC
Last modified: 20 Feb 2013 | 2:34:15 UTC

Hi, GDF:

I want to process GPU tasks, but need an NVIDIA video card. Which should I choose to suit my budget of about $150?

I have the following PCs processing various tasks:

#1 AMD-A10 HD 7660D, HD 6670, HD 6570, MOBO ASUS F2 A85V PRO;
#2 AMD-A10 HD 7660D, HD6790,MOBO ASUS F2 A85V PRO
#3 AMD 1090T, HD 6570 MOBO ASUS M5A99XEVO.

All Win7 64 bit.

Thank you,

John


Jim1348
Message 28626 - Posted: 20 Feb 2013 | 2:51:20 UTC - in response to Message 28625.

John,

I think it is a fairly easy answer, at least if you are in the U.S. The GTX 650 Ti does very well, and you have several choices at that price.

I have a GIGABYTE GV-N65TOC-1GI, since I needed a short card that had good cooling and low noise. But if you can take a longer card, the Asus is also very good; there are normal-speed and overclocked versions. I usually avoid the overclocked ones, since you can get errors, but the Kepler chips run cool and the Gigabyte (factory overclocked to 1032 MHz) is doing very well. It processes a long Noelia work unit in about 18 hours, essentially the same as my GTX 560, which uses more power and is a longer card.

But the GTX 650 Ti does reserve a whole core on my E8400 Core2 Duo, whereas the GTX 560 does not. I don't know how that would work with your CPUs.

Dylan
Message 28627 - Posted: 20 Feb 2013 | 2:52:50 UTC - in response to Message 28625.

http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&N=-1&isNodeId=1&Description=gtx+650&x=0&y=0


Quick bunch of GTX 650 and GTX 650 Ti's on Newegg ranging from about $100 to $180 or so.

John C MacAlister
Message 28628 - Posted: 20 Feb 2013 | 4:05:26 UTC - in response to Message 28626.

Many thanks for your help, Jim1348. Prices in the US & Canada appear similar these days.

John C MacAlister
Message 28629 - Posted: 20 Feb 2013 | 4:07:09 UTC - in response to Message 28627.

Many thanks, Dylan. Both you and Jim1348 have given me enough information to start the hunt!

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 28644 - Posted: 20 Feb 2013 | 20:35:39 UTC - in response to Message 28629.
Last modified: 21 Feb 2013 | 20:17:19 UTC

Be sure to get a GTX 650 Ti, as it's massively faster than a GTX 650 for crunching. Oh that sweet naming madness..

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Message 28645 - Posted: 20 Feb 2013 | 21:11:08 UTC - in response to Message 28644.
Last modified: 20 Feb 2013 | 21:19:50 UTC

The GTX 650 Ti is twice as fast as the GTX 650, and costs about 35% more. It's well worth the extra cost.

I moved these well off-topic posts from the "New app is out for testing" News thread to this NVidia GTX 650 Ti Graphics cards (GPUs) thread.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

John C MacAlister
Message 28913 - Posted: 2 Mar 2013 | 13:25:43 UTC
Last modified: 2 Mar 2013 | 13:27:28 UTC

Hi,

I aborted the task 063px10x2-NOELIA_063p-0-2-RND4695 as it was showing about 45% complete after about 11.7h. I have now set my preferences to short runs only. Should my GTX 650 Ti be able to process long run tasks without issues? I have successfully completed three, but subsequent long tasks appeared to stall.

Thanks,

John

063px10x2-NOELIA_063p-0-2-RND4695
application Long runs (8-12 hours on fastest card)
created 1 Mar 2013 | 10:12:59 UTC
minimum quorum 1
initial replication 1
max # of error/total/success tasks 7, 10, 6
Task 6557534 | Computer 146594 | Sent: 1 Mar 2013 | 10:49:59 UTC | Reported: 2 Mar 2013 | 11:42:49 UTC | Status: Aborted by user | Run time (sec): 42,105.23 | CPU time (sec): 18,527.88 | Credit: --- | Application: Long runs (8-12 hours on fastest card) v6.18 (cuda42)

Operator
Message 28919 - Posted: 2 Mar 2013 | 17:26:35 UTC - in response to Message 28913.
Last modified: 2 Mar 2013 | 17:28:20 UTC

John;

I am running an EVGA GTX 650 Ti 2GB with no issues at all on long runs exclusively. And I usually get the bonus for coming in under 24hrs.

I'm running a light OC at 1124 graphics clock and 2722 on mem, and 1.10 V.

This card was already factory OCed to 1071 to begin with.

Been stable now for weeks.

Operator
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 28993 - Posted: 5 Mar 2013 | 21:37:33 UTC

John, your issue is caused by the new features employed in the NOELIA WU. It's not related to your card at all.

MrS
____________
Scanning for our furry friends since Jan 2002

John C MacAlister
Message 29020 - Posted: 6 Mar 2013 | 21:00:20 UTC - in response to Message 28993.

Many thanks to all for responses. Processing short tasks and non-NOELIA tasks: all well.

John

Operator
Message 29168 - Posted: 14 Mar 2013 | 15:15:41 UTC - in response to Message 28919.

Thought I'd follow up on GTX 650 Ti performance...

I have a Quad core (9550) running Win7x64 that is currently set to crunch long tasks only, and has been since the card was installed about two months ago.

The card came overclocked as stock (1071) and then I took it a bit higher (1124).

My current average credit with this card running long tasks exclusively is 75,446.58.

Don't know if anybody else is getting any better or worse....

Operator


____________

Profile skgiven
Volunteer moderator
Volunteer tester
Message 29169 - Posted: 14 Mar 2013 | 15:49:11 UTC - in response to Message 29168.

Going by your runtimes I would say you are getting around 190K/day
http://www.gpugrid.net/results.php?hostid=144671
(86,400 s/day ÷ 32,000 s/task) × 70,800 credits/task ≈ 190,000 credits/day
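A small sketch of that estimate (the ~32,000 s runtime and 70,800 credits per task are the figures plugged in above; the helper name is just illustrative):

def estimated_daily_credit(runtime_s, credit_per_task):
    """Estimate credit/day from the average runtime of one task, assuming 24/7 crunching."""
    return (86_400 / runtime_s) * credit_per_task

print(round(estimated_daily_credit(32_000, 70_800)))   # ~191,000 credits/day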
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile X1900AIW
Message 29171 - Posted: 14 Mar 2013 | 16:22:31 UTC

In the last few days I have clocked my GTX 650 Ti at reduced OC frequencies (EVGA Precision 4.0.0): +100/+100 MHz without extra voltage (@stock: 1.087 V); runtimes are in line with those of other users.

Operator
Message 29173 - Posted: 15 Mar 2013 | 2:06:04 UTC - in response to Message 29168.

My current average credit with this card running long tasks exclusively is 75,446.58.


Actually, I was looking at the wrong computer's stats.

The average credit for the one with the GTX 650 Ti is 150,825.35.

Sorry for any confusion.

Operator
____________

tomba
Message 29600 - Posted: 29 Apr 2013 | 16:20:29 UTC - in response to Message 27059.

This looks to be the right thread for my questions. I confess I don’t understand most of the technical aspects posted here, but I think I get the gist.

Currently I run a modest ASUS GTX 460 1GB for GPUGRID 24/7 (only – no games…) which, I understand, does not use all of the 300+ shaders available. I over-clock it to 850/1700/2000 with no problems. Recent NATHANs complete in about nine hours.

I’m about to invest in an Antec 620W PSU in preparation for a GPU upgrade. Which upgrade?

The budget puts me in the GTX 650/660 (perhaps TI…) range.

Which GPU (I do like ASUS) do you recommend that uses 100% of the shaders and lets me optimise my GPUGRID contribution?

Thank you. Tom

____________

Jim1348
Message 29601 - Posted: 29 Apr 2013 | 16:58:06 UTC - in response to Message 29600.

The budget puts me in the GTX 650/660 (perhaps TI…) range.

Which GPU (I do like ASUS) do you recommend that uses 100% of the shaders and lets me optimise my GPUGRID contribution?

I recommend the GTX 660. It takes about 6 hours to complete a Nathan long, which seems to be about the same as (or not much longer than) a GTX 660 Ti. And I just completed a new build, and was able to measure the power into the card as being 127 watts while crunching Nathan longs (that includes 10 watts static power, and accounts for the 91% efficiency of my power supply). The GTX 650 Ti is very efficient too, but took about 9 or 10 hours on Nathan longs the last time I tried it a couple of months ago.

While the GTX 660 Ti adds more shaders, that does not necessarily result in correspondingly faster performance. It depends on the complexity of the work; they found that out a long time ago on Folding@home, where the more complex proteins do better on the higher-end cards, but the ordinary ones can often do better on the mid-range cards, since their clock rate is usually higher than the more expensive cards, which may more than compensate for fewer shaders.

I like the Asus cards too; they are very well built, but to be safe I would avoid the overclocked ones, though Asus seems to do a better job than most in testing the chips used in their overclocked cards. But that may not be saying much; the main market for all of these cards is for gamers, where an occasional error is not noticed, but can completely ruin a GPUGrid work unit.

Profile skgiven
Volunteer moderator
Volunteer tester
Message 29602 - Posted: 29 Apr 2013 | 17:38:56 UTC - in response to Message 29601.
Last modified: 29 Apr 2013 | 17:44:18 UTC

A reference GTX660 has a boost clock of 1084. My GTX660Ti operates at around 1200MHz, and the reference GTX660Ti is 1058MHz, so you can't say the GTX660 has faster clocks.

I would like to see some actual results from your GTX660, to see if it really is completing Nathan Long tasks in 6h. I doubt it because my GTX660Ti takes around 5 1/2h and it's 40% faster by my reckoning:

I34R5-NATHAN_dhfr36_5-17-32-RND3107_0 4392017 24 Apr 2013 | 11:12:40 UTC 24 Apr 2013 | 19:20:22 UTC Completed and validated 19,505.27 18,807.06 70,800.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)

A 960 CUDA core GTX660 should take ~7.5h. The 1152-shader versions are OEM.
These GeForce Kepler cards are either GK106 or GK104, but architecturally they are very similar. It's not like comparing high end and mid range Fermi's which were quite different. All the GK106 and GK104 cards are super-scalar.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 29603 - Posted: 29 Apr 2013 | 21:14:55 UTC

This "not using all shaders" is not the best wording if you don't know what it means. As a brief explanation: it's a property of the fundamental chip architecture, every chip / card since "compute capability 2.1" has it, which would be the mainstream Fermis and then all Keplers. This move allowed nVidia to increase the number of shaders dramatically compared to older designs, but at the cost of sometimes not being able to use all of them. At GPU-Grid it seems like the "bonus shaders" can not be used at all.. but this matters only when comparing to older cards. The newer ones are way more power efficient, even with this handicap.

Regarding the actual choice between 650Ti, 650Ti Boost, 660 and 660Ti: Jim is right that higher clock speeds are to be preferred over more shaders.. but in this case all these cards are based on the same architecture and lithography and hence can reach similar clock speeds and similar voltages (= efficiencies). I don't see a penalty here for the bigger GK104 cards. My GTX660Ti runs GPU-Grid happily at 1.23 GHz and POEM at 1.33 GHz. It was one of the first, and from what I've heard it's more the norm than an exceptionally good card.

Regarding the issue "GTX660Ti is not faster than GTX660" I'd say "show us the numbers". Theoretically the Ti should be 30% faster in the same setup.. and this rule hasn't failed us up to now.

I'd go for the largest of these cards which fits nicely into your budget. But don't go higher than a GTX660Ti, since the GTX670 has the same number-crunching power but higher game performance.. which you'd unnecessarily pay for.

MrS
____________
Scanning for our furry friends since Jan 2002

Jim1348
Message 29607 - Posted: 30 Apr 2013 | 2:01:13 UTC - in response to Message 29602.

I would like to see some actual results from your GTX660, to see if it really is completing Nathan Long tasks in 6h. I doubt it because my GTX660Ti takes around 5 1/2h and it's 40% faster by my reckoning:

Here is the first one at 5 hours 50 minutes:

6811663 150803 29 Apr 2013 | 17:21:11 UTC 30 Apr 2013 | 0:44:44 UTC Completed and validated 21,043.76 21,043.76 70,800.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)

You can look at the others for a while, though I will be changing this card to a different PC shortly.
http://www.gpugrid.net/results.php?hostid=150803

This GTX 660 is running non-overclocked at 1110 MHz boost (993 MHz default), and is supported at the moment by a full i7-3770. However, when I run it on a single virtual core, the time will increase to about 6 hours 15 minutes.

Simba123
Message 29608 - Posted: 30 Apr 2013 | 2:34:16 UTC - in response to Message 29607.

I would like to see some actual results from your GTX660, to see if it really is completing Nathan Long tasks in 6h. I doubt it because my GTX660Ti takes around 5 1/2h and it's 40% faster by my reckoning:

Here is the first one at 5 hours 50 minutes:

6811663 150803 29 Apr 2013 | 17:21:11 UTC 30 Apr 2013 | 0:44:44 UTC Completed and validated 21,043.76 21,043.76 70,800.00 Long runs (8-12 hours on fastest card) v6.18 (cuda42)

You can look at the others for a while, though I will be changing this card to a different PC shortly.
http://www.gpugrid.net/results.php?hostid=150803

This GTX 660 is running non-overclocked at 1110 MHz boost (993 MHz default), and is supported at the moment by a full i7-3770. However, when I run it on a single virtual core, the time will increase to about 6 hours 15 minutes.



That's amazingly fast. My 660Ti does Nathan's in about 19000 @ 1097Mhz .
I would like to see what a 660 can do with a Noelia unit (when they start working again) and see if the more complex task takes a proportionate time increase.

flashawk
Message 29611 - Posted: 30 Apr 2013 | 3:14:36 UTC

My GTX670's do the current NATHANs in about 4 hours at 1200MHz; that's not the same as a 660. When we were doing the NOELIAs, the difference was even greater. I know some will disagree with this, but I believe the 256-bit onboard bus makes a difference: I'm not pushing data through a smaller pipe. They do have a much greater advantage on power consumption and the amount of heat they put off.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 29613 - Posted: 30 Apr 2013 | 8:53:49 UTC

@flashawk: you've also got the significant performance bonus of running Win XP. The other numbers here are from Win 7/8.

@Jim: that is impressive performance, thanks for showing the numbers. As a comparison I'll use your config with only a logical core assisting the GPU, as this is what I'm also running. In this case we've got 6:15 h = 22500 s at 1110 MHz.

I'm running at 1228 MHz and should hence get 20400 s. The difference in shaders is 1344/960 = 1.4, which means I should be getting 14530 s. Well, this is clearly not the case :D

Instead I'm seeing 19030 s for these WUs, just 7% faster at the same clock speed. I can't look the memory controller load up right now, but it was fairly low, so it shouldn't limit performance this much (especially since memory OC didn't really help GPU-Grid historically). It seems like these WUs are too small for the larger cards to stretch their legs, i.e. there's too much context switching, PCIe transfers etc. happening.
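A hedged sketch of that estimate, scaling the GTX660 runtime by clock and shader count (all numbers are the ones quoted above; the ideal linear scaling is exactly the assumption being tested):

gtx660_runtime_s = 22500            # the GTX660 with one logical core feeding the GPU
clock_ratio = 1228 / 1110           # GTX660Ti clock above vs GTX660 clock
shader_ratio = 1344 / 960           # GTX660Ti vs GTX660 CUDA cores
expected_clock_only = gtx660_runtime_s / clock_ratio          # ~20,300 s
expected_full_scaling = expected_clock_only / shader_ratio    # ~14,500 s
observed = 19030
print(f"clock-only estimate: {expected_clock_only:.0f} s")
print(f"clock + shader estimate: {expected_full_scaling:.0f} s")
print(f"observed: {observed} s -> only {(expected_clock_only / observed - 1):.0%} faster than clock scaling alone")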

MrS
____________
Scanning for our furry friends since Jan 2002

Jim1348
Message 29616 - Posted: 30 Apr 2013 | 12:35:07 UTC - in response to Message 29608.

That's amazingly fast. My 660Ti does Nathan's in about 19000 @ 1097Mhz .
I would like to see what a 660 can do with a Noelia unit (when they start working again) and see if the more complex task takes a proportionate time increase.

Good question. I am changing this machine around today so the link will no longer be good for this card, but will post later once I get some.

Profile Beyond
Message 29618 - Posted: 30 Apr 2013 | 13:53:19 UTC - in response to Message 29601.

I recommend the GTX 660. It takes about 6 hours to complete a Nathan long, which seem to be about the same (or not much longer) as for a GTX 660 Ti. And I just completed a new build, and was able to measure the power into the card as being 127 watts while crunching Nathan longs (that includes 10 watts static power, and accounts for the 91% efficiency of my power supply). The GTX 650 Ti is very efficient too, but takes about 9 or 10 hours on Nathan longs the last time I tried it a couple of months ago.

I'm running 3 MSI Power Edition GTX 650 Ti cards and the times on the long run Nathans range from 8:15 to 8:19. They're all OCed at +110 core and +350 memory (these cards use very fast memory chips) and none of them has failed a WU yet. Temps range from 46°C to 53°C with quite low fan settings.

I like the Asus cards too; they are very well built, but to be safe I would avoid the overclocked ones, though Asus seems to do a better job than most in testing the chips used in their overclocked cards.

Interesting. For years I have run scores of GPUs (21 running at the moment on various projects) of pretty much all brands, and for me ASUS has had BY FAR the highest failure rate. In fact ASUS cards are the ONLY ones that have had catastrophic failures; other brands have only had fan failures. Of the many ASUS cards I've had, only 1 is still running; every other ASUS has failed completely except for 1 that is waiting for a new fan. Personally I have had good luck with the XFX cards (among other brands) with double lifetime warranty; out of many XFX cards I have had 2 fan failures and they've shipped me complete new HS/fan assemblies in 2 days both times. Also have had good luck with MSI, Sapphire, Powercolor and Diamond.

tomba
Message 29619 - Posted: 30 Apr 2013 | 16:15:27 UTC
Last modified: 30 Apr 2013 | 16:15:59 UTC

Many thanks to all who contributed to my question. Quite stimulating!

With apologies to Beyond, who is not impressed with ASUS, I've decided to go with the ASUS GTX 660. All the ASUS GPUs I've had have turned in exemplary performance.

€190 (about £161) on Amazon France looks like the best deal available to me. I live in France so I get free shipping!

All I have to do now is to persuade Her That Matters that it's a good idea!

Tom
____________

Profile Beyond
Message 29621 - Posted: 30 Apr 2013 | 18:57:41 UTC - in response to Message 29619.

With apologies to Beyond, who is not impressed with ASUS, I've decided to go with the ASUS GTX 660. All the ASUS GPUs I've had have turned in exemplary performance.

No need to apologize, just relating my experience with the 7 ASUS cards I've owned as opposed to the scores of other brands. For me the ASUS cards failed at an astounding rate. Of course YMMV.

Profile skgiven
Volunteer moderator
Volunteer tester
Message 29622 - Posted: 30 Apr 2013 | 23:04:48 UTC - in response to Message 29607.
Last modified: 1 May 2013 | 9:44:40 UTC

Jim1348, is that definitely a 960-shader version and not an 1152-shader OEM card?
Is it a 2GB or 3GB card?

It might be the case that some WU's benefit from the extra memory bandwidth. The bus width of the GTX660Ti is 192 bits, the same as the GeForce GTX 660 and GTX650Ti, but because the GTX660Ti has more shaders and SM's, its bandwidth is relatively less. I typically see a 40% memory controller load when running GPUGrid WU's on the GTX660Ti. This is a lot higher than on any previous cards I've had.

One problem in assessing the impact of bandwidth is that these tasks can vary by some margin. Just looking at 8 WU's I've seen a 7% variation in runtime on my own system. It's also hard to know what influence the operating system has. It might just be 11% from XP to W7, as it used to be, or it could be more for some WU's.

The 11% difference isn't sufficient to move from 14500sec to 18800sec though. flashawk's GTX670 (on XP) is about 30% faster than my GTX660Ti on W7x64. So some factor other than the OS is involved, and the likely candidate is bandwidth. Another consideration might be CPU usage. I might look at this at the weekend.
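A back-of-the-envelope version of that argument (the ~11% OS factor is the old XP-vs-Win7 figure mentioned above, assumed to still hold; the runtimes are the ones quoted in this post):

observed_gap = 18800 / 14500      # the 14,500 s vs 18,800 s figures quoted above (~1.3x)
os_factor = 1.11                  # assumed XP-vs-Win7 advantage
residual = observed_gap / os_factor
print(f"total gap: {observed_gap:.2f}x, left after the OS effect: {residual:.2f}x")
# ~1.17x remains unexplained, hence the suspicion of memory bandwidth (or another factor).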
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

flashawk
Message 29623 - Posted: 1 May 2013 | 0:01:52 UTC

40% is really good; I think you're actually using more shaders than I am on my 670's. The highest I've seen on the memory controllers is 32% for a 670 and 34% for a 680.

SK, out of curiosity, what brand are your 660's?

Profile skgiven
Volunteer moderator
Volunteer tester
Message 29624 - Posted: 1 May 2013 | 9:42:24 UTC - in response to Message 29623.
Last modified: 1 May 2013 | 11:54:21 UTC

My GTX660Ti is made by Gigabyte, and comes with a dual fan. It's a decent model, but fairly standard; most boost up to around 1200MHz.

The theory has been that because the GTX660Ti has the same number of shaders as a GTX670 it should be just as fast. However, that doesn't seem to be the case. So perhaps the memory bandwidth is having more of an influence.

My CPU is an i7-3770 @4.2 GHz and I have 2133MHz system memory and a SATA6 drive, so it should not be bottlenecked anywhere else. I do use the CPU, but I usually have 2 threads free.

It's difficult to compare GPU's on different operating systems but the fact that the GTX660 also performs so well comparatively leads me to think that the GPU memory bandwidth is more of an issue than was previously thought.

- It just occurred to me that the drop from 32 to 24 ROPs could be the issue for the GTX660Ti, rather than or as well as the memory bandwidth; the GTX670 has 32 ROPs, but the GTX660Ti only has 24 ROPs. The GTX660 also has 24 ROPs - perhaps a better ROPs to CUDA core ratio.
If that is the case then we might also see a similar difference between the "GTX650Ti Boost" and the "GTX650Ti"; the Boost also has 24 ROPs, but the 650Ti only has 16 ROPs.

Going back over some old posts it looks like the new apps use the PCIE a bit less than before but use the GPU memory a bit more. The new apps also favor the super-scalar cards, making the old comparison charts redundant even for CC2.0 vs CC2.1 architectures.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Beyond
Message 29626 - Posted: 1 May 2013 | 13:08:35 UTC - in response to Message 29624.

- It just occurred to me that the drop from 32 to 24 ROP's could be the issue for the GTX660Ti rather than or as well as the memory bandwidth; the GTX670 has 32 Rops, but the GTX660Ti only has 24 Rops. The GTX660 also has 24 Rops - perhaps a better Rops to Cuda Core ratio.
If that is the case then we might also see a similar difference between the "GTX650Ti Boost" and the "GTX650Ti"; the Boost also has 24 Rops, but the 650Ti only has 16 Rops.

I'd be interested in seeing the performance of the 650 TI Boost if anyone has some figures for GPUGrid. So far I've been adding 650 TI cards as they seem to run at about 65% of the speed of the 660 TI, are less than 1/2 the cost and are very low power. The 650 TIs I've been adding have very fast memory chips and boosting the memory speed makes them significantly faster. That would make me think that more memory bandwidth might also help. PCIe speed (version) has no effect at all in my tests, but that's on the 650 TI. Can't say for faster cards. Will say though that the 650 TI is about 35-40% faster than the GTX 460 at GPUGrid, yet the 460 is faster at other projects (including OpenCL). As an aside, it's also interesting that the 660 is faster than the 660 TI at OpenCL Einstein, wonder why?

Jim1348
Message 29628 - Posted: 1 May 2013 | 14:40:23 UTC - in response to Message 29622.

Jim1348, is that definitely a 960shader version and not an 1152 OEM card?
Is it a 2GB or 3GB card?

Yes, 960 shaders at 2 GB; nothing special.
http://www.newegg.com/Product/Product.aspx?Item=N82E16814500270


It might be the case that some WU's benefit from the extra memory bandwidth. The bus width of the GTX660Ti is 192bits, the same as the GeForce GTX 660 and GTX650Ti, but because the GTX660Ti has more shaders and SM's it's bandwidth is relatively less. I typically see a 40% memory controller load when running GPUGRid WU's on the GTX660Ti. This is a lot higher than any previous cards I've had.

I noticed when I bought it that it had good memory bandwidth, relatively speaking. But another factor is that the GTX 660 runs on a GK106 chip, whereas the GTX 660 Ti uses a GK104 as you noted above. The GK106 is not in general a better chip, but there may be other features of the architecture that favor one chip over another for a given type of work unit. Nvidia sells them mainly for gaming, with number-crunching being an afterthought for them. It could well be that the GK104 runs Noelias better; we won't know until we see.


ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 29637 - Posted: 2 May 2013 | 20:27:14 UTC

Some quick measurements / observations:

- running my stock config (with GPU OC and everything) I'm seeing 38% memory controller load and 86 - 90% GPU utilization

- deactivating 8 Einstein WUs on the i7+HT increases power consumption a bit and GPU utilization to 92%

- increasing the memory speed by 130 MHz increases power by ~1%, but nothing else changes (from 1.5 GHz to 1.63 GHz, without multiplying by 4 for the DDR clock rate)

- decreasing the memory speed by 250 MHz lowers power by about 5-6%, increases the memory controller load to 41%(+/-1%) and increases GPU utilization to 93%

It seems like "waiting for memory" does not count as idle time for the GPU and that some of it is happening here. Been running a regular Nathan.

MrS
____________
Scanning for our furry friends since Jan 2002

flashawk
Message 29639 - Posted: 2 May 2013 | 21:48:11 UTC

It looks as though you found part of the puzzle. Just for reference, my GTX680's have 96% GPU utilization and the 670's are at 94%. I'm sure you and sk will figure this out. What about downclocking due to heat? All my rigs are liquid cooled and don't get over 45 °C.

Profile skgiven
Volunteer moderator
Volunteer tester
Message 29645 - Posted: 3 May 2013 | 10:07:00 UTC - in response to Message 29639.
Last modified: 3 May 2013 | 13:26:14 UTC

Heat isn't an issue; dual fan card, open case with additional large case fans, 137W used - GTX660Ti presently at 55°C and would rarely reach 60°C.

The problem isn't lack of memory either; the 3GB versions don't perform any better running one GPUGrid task at a time.

I noticed that even when running two GPUGrid tasks at a time the GPU memory controller load only rose by about 1% even though the GPU utilization rose from around 90% to around 98%. I suspect that the reported memory controller load does not accurately reflect saturation - the full extent of the bottleneck. The way the memory is accessed could be part of the issue. It certainly seems to struggle to go much past 40%.

I'm still not fully aware of how the ROP's are used. Since the shaders were trimmed down, a lot now takes place on the GPU cores, so having relatively fewer ROP's could be key to the relatively poor performance. The drop to 75% of the ROP's from the 670 to the 660Ti could be the problem. It's either the memory bandwidth/controller load or the ROP count, or both. I suspect the ROP count is limiting the memory controller load, but I don't know enough about the ROP usage to be confident.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Message 29666 - Posted: 4 May 2013 | 9:56:46 UTC - in response to Message 29645.

Ah, I forgot to comment on the ROP suggestion yesterday. Well, ROP stands for "render output processor". I've never heard of these units being used in GP-GPU at all, which seems logical to me, since the calculation results wouldn't need to be assembled into a framebuffer the size of the screen. And blended/softened via anti-aliasing.. which you really wouldn't want to do with accurate calculation results :D

And if there's a bottleneck in memory bandwidth then this might not need to show up as 100% memory controller utilization. For example the need for new data from memory might not be distributed evenly over time.

Could anyone run some comparable tests with the current GPU-Grid app, probably with the Nathan long-runs? Preferably within the same PC. We'd need an exceptionally low-bandwidth GPU like the GT640 or GTX660Ti compared to a regularly balanced one like.. pretty much any other. Over- and downclock the memory by a noticeable amount (about 10% should be fine.. not sure if my memory could take this as OC, though) and observe the change in performance, as in the sketch below. Ideally no CPU tasks would be run along with this, so the measurements aren't disturbed by changing CPU load. Run a few units in each configuration, average the runtimes and compare: is the performance change with memory clock significantly higher on the low-bandwidth card?
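A minimal sketch of how such a comparison could be scored (the card labels and runtimes below are placeholders, not measurements; the protocol itself is as described above):

from statistics import mean

runtimes_s = {
    # card: {memory-clock config: runtimes of a few identical WUs}
    "GTX660Ti (low bandwidth)": {"mem_stock": [19000, 19200, 18900], "mem_-10%": [20100, 20300, 19900]},
    "GTX670 (balanced)":        {"mem_stock": [14500, 14700, 14600], "mem_-10%": [14800, 14900, 14750]},
}
for card, cfg in runtimes_s.items():
    change = mean(cfg["mem_-10%"]) / mean(cfg["mem_stock"]) - 1
    print(f"{card}: {change:+.1%} runtime change from a ~10% memory downclock")
# If the low-bandwidth card slows down much more, memory bandwidth is the limiter.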

MrS
____________
Scanning for our furry friends since Jan 2002

GoodFodder
Message 29675 - Posted: 4 May 2013 | 12:31:11 UTC

I've been following this thread as I recently put together a dedicated folding machine on a tight budget, hence I've been looking for the best 'bang per buck per watt'.

My tuppence: I suspect the performance improvement of the GTX 660 over, say, the GTX 650 Ti is due to the larger cache size (384K vs 256K) rather than bandwidth (the GTX 670/680 incidentally has 512K).
Agreed, I fail to see how ROPs could affect CUDA performance, or the amount of physical memory for that matter for single WUs; personally I have not seen memory utilisation above 880 MB with the largest of WUs for the past year - it would be interesting to see if anybody has seen otherwise.
Incidentally, I also suspect memory latency could play an important part for GPUGrid; increasing the memory clock could of course affect reliability as well, potentially producing inaccurate results.
For those who may be interested: with a budget of 400 Euro I ended up with 2x GTX 650 Ti (1GB), a G2020 CPU, an MSI Z77 mATX motherboard (supporting 2x PCIe x16 slots @ 8x), a Fractal 1000 case and a single 4GB DIMM. I recycled an old 300W 80+ PSU and a laptop HDD. I'm hoping the overall daily folding output will be similar to a single GTX 680 when OCed to 1006/1500 (MemtestG80 tested for 12 hrs). Power measured at the plug is around the 180W mark at full load; expected running cost is 0.7 Euro/day.
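For reference, the 0.7 Euro/day figure is consistent with a tariff of roughly 0.16 Euro/kWh; a tiny sketch of the arithmetic (the tariff is an assumption back-calculated from those two numbers):

power_w = 180                 # measured at the plug at full load, from the post above
tariff_eur_per_kwh = 0.16     # assumed electricity price
kwh_per_day = power_w / 1000 * 24
print(f"{kwh_per_day:.2f} kWh/day -> ~{kwh_per_day * tariff_eur_per_kwh:.2f} EUR/day")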

Interestingly, in my case (Win XP SP3) I found that fixing the CPU affinity helped with GPU utilisation, which seems to be the opposite of what I have been reading.
Using a little tool called imagecfg I set the WU's exe to run in uniprocessor mode; this appears to have the effect of binding the WU's exe to a 'free' core for the run duration, hence reducing context switching.
(e.g. 'imagecfg.exe -u acemd.2865P.exe')
My 650 Tis are now running at an almost constant 99%, which is cool.
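For anyone who would rather not patch the executable, a hedged alternative sketch of the same idea: pinning the running acemd process to one core with psutil (the 'acemd' name match and the choice of core 0 are assumptions for illustration; imagecfg, as described above, instead modifies the exe itself):

import psutil

# Pin every running acemd process to logical core 0 so it stops migrating between cores.
for proc in psutil.process_iter(["name"]):
    name = (proc.info["name"] or "").lower()
    if name.startswith("acemd"):
        proc.cpu_affinity([0])   # restrict the process to the first logical core
        print(f"pinned {proc.info['name']} (pid {proc.pid}) to core 0")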

Profile skgiven
Volunteer moderator
Volunteer tester
Message 29685 - Posted: 5 May 2013 | 1:13:28 UTC - in response to Message 29675.

The GTX660Ti has one less ROP cluster (of ROPs, L2 cache and memory) than a GTX670. So it drops from 32 ROPs with 512KB of L2 cache and quad-channel memory to 24 ROPs, 384KB of L2 cache and triple-channel memory.

I don't know how you can separate these cluster elements, but given what MrS said I accept that any slow-down isn't ROP-count based. I reckon NVidia thinks 384K is sufficient to support the reduced memory bandwidth, but I'm not sure you can really separate the two. We know that the GTX670 performs better in the more memory-intensive games. So that leaves the memory bandwidth looking like the culprit. The 3GB cards aren't any faster either.

I would go along with the idea of uneven memory requirements. Something is stopping the memory controller load going past ~41%. I'm currently using 38% but that's with the CPU usage at 100%. When I disabled CPU tasks and ran two WU's it only went up to ~41% and that was with 99% GPU utilization.

GoodFodder, I think we ran some >1GB Long WU's a few months back.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29693 - Posted: 5 May 2013 | 15:11:04 UTC

I suspect the performance improvement of the gtx 660 over, say, the gtx 650ti is due to the larger cache size (384K v 256K) rather than bandwidth (the gtx 670/680 incidentally has 512K).

Don't forget the number of shaders, which is the most basic performance metric!

Well.. the L2 cache coupled to the memory controller and ROP block is something which could make a difference here. Not for games, as things are mostly streamed there and bandwidth is key. But for number crunching caches are often quite important.

I'm currently using EVGA Precision X to set clock speeds. And this won't let me underclock GPU memory more than 250 (real) MHz. I suspect I could get memory controller utilization above 41% if I could set lower clocks. But I'm not too keen on installing another one of these utilities :p

MrS
____________
Scanning for our furry friends since Jan 2002

Jim1348
Send message
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29722 - Posted: 7 May 2013 | 10:47:58 UTC - in response to Message 29616.

I would like to see what a 660 can do with a Noelia unit (when they start working again) and see if the more complex task takes a proportionate time increase.

My first two Noelias are in, and are running 12-13+ hours on my two GTX 660s, each supported by one virtual core of an i7-3770 (not overclocked). The other six cores of the CPU are now running WCG/CEP2.

063ppx25x2-NOELIA_klebe_run-0-3-RND3968_1 12:05:29 (05:47:34) 5/7/2013 5:46:00 AM 5/7/2013 6:05:05 AM 0.627C + 1NV 47.91 Reported: OK *

041px24x3-NOELIA_klebe_run-0-3-RND0769_0 13:15:57 (06:17:45) 5/7/2013 5:09:05 AM 5/7/2013 5:22:42 AM 0.627C + 1NV 47.46 Reported: OK *

The Nathans now run a little over 6 hours on this setup.
http://www.gpugrid.net/results.php?hostid=150900&offset=0&show_names=1&state=0&appid=

Also, I run a GTX 650 Ti, and the first Noelia was 18 hours 14 minutes.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29724 - Posted: 7 May 2013 | 12:49:15 UTC - in response to Message 29722.
Last modified: 12 May 2013 | 16:53:32 UTC

As some people suggested it appears that the GTX660Ti is relatively better (slightly) when it comes to Noelia's WU's, compared to Nate's.
My GTX660Ti was 25% faster than a GTX660; however, for most of the run I only crunched one CPU WU, which enabled the Noelia task to use 94% of the GPU. Using the CPU less accounts for at least 4%, and probably >6%, compared to crunching CPU tasks on 6 CPU threads. My CPU is also at 4.2GHz, so that might make a little difference compared to stock clocks, possibly bringing it to 9 or 10%. If you ran a WU without any CPU tasks running you would find out.

Might have been better to post in the NOELIAs are back! thread, but I might rename this one as we have drifted into a GPU runtime comparison thread.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

matlock
Send message
Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 29889 - Posted: 13 May 2013 | 4:18:08 UTC
Last modified: 13 May 2013 | 4:47:03 UTC

How does the 650 Ti BOOST perform compared to the 650 Ti and 660?

The 650 Ti has: 768 shaders. 16 ROPs. 128 bit mem bus. 256 KB L2 Cache.
The 650 Ti BOOST has: 768 shaders. 24 ROPs. 192 bit mem bus. 384 KB L2 Cache.
The 660 has: 960 shaders. 24 ROPs. 192 bit mem bus. 384 KB L2 Cache.

It seems the 650 Ti and 660 are both great for price/performance, but I may get the BOOST as it is about $40 cheaper than the 660 and may perform close to it.

GoodFodder
Send message
Joined: 4 Oct 12
Posts: 53
Credit: 333,467,496
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 29891 - Posted: 13 May 2013 | 9:21:15 UTC

Unfortunately I have not seen any stats for the 650 Ti Boost; however, personally, as GPUGrid appears to be mainly shader bound, I would go for either a base 650 Ti or the 660. I would expect the performance of the Boost to be only marginally quicker than a base 650 Ti. I guess it really depends on your budget, the price differences of the products in your area, the supporting hardware for the card, and whether the machine will be used for purposes other than crunching.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29892 - Posted: 13 May 2013 | 10:21:50 UTC - in response to Message 29891.
Last modified: 13 May 2013 | 11:42:08 UTC

The 650TiBoost is basically a 660 with fewer shaders. While the shaders are historically the most important component, they have been trimmed down significantly from the CC2.0 and CC2.1 cards to the CC3.0 cards. This means that there is more reliance on other components, especially the core, but even the CPU. So the relative importance of the shaders is open to debate. It certainly seems likely that the GPU memory bandwidth and probably the different cache memories have more significance with the GeForce 600 GPU's. I think the new apps have exposed this (they generally optimize better for the newer GPU's), making initial assessments of performance outdated. There seems to be some performance difference with the different WU types, but that would need to be better explored/reported on before we could assess its importance when it comes to choosing a GPU. Even then it's down to the researchers to decide what sort of research they will be doing long term. I'm expecting more Noelia type WU's, as they extend the research boundaries, but that's just my hunch.

The Boost is definitely worth a look, but whether it's closer to a 660 or a 650Ti remains to be seen. I would speculate that it performs slightly better than a 650Ti but not that close to a 660; the Boost has fewer shaders to feed, so the memory won't be that important (proportionately less so). A comparison of the 650Ti to a 650TiBoost should reveal the importance of cache (at least the difference between 256 and 384K). As with any GPU, it comes down to the price, performance and running cost.

These 'mid-range' GPU's are very interesting when it comes to crunching. They let normal computer users, especially light-gamers, participate in the project (rather than just the enthusiasts). I would include the GTX 660 (OEM)'s as interesting mid-range GPUs, though they are pushing the high end bracket with 1152shaders, and we don't know the price (OEM). If you include the different memory amounts these mid-range GPU's have and the OEM cards, there are 7 to choose from. I would still like to see non-OEM 1152shader GPU's, to fill out the range; one version has a 256bit bus, so it should match the GTX660Ti (192bit). I'm sure manufacturers could make a sweet one.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29914 - Posted: 13 May 2013 | 23:56:33 UTC - in response to Message 29892.
Last modified: 18 May 2013 | 20:25:35 UTC

Pulled a GTX660Ti and a GTX470 from a system to test a GTX660 and a GTX650TiBoost:
I7-3770K@4.2GHz, with the GTX660 in the top PCIE3 slot (0) and the GTX650TiBoost in PCIE3 slot 1. Running 5 CPU WU’s, and two GPUGrid WU’s, VM off.
While running two Nathan WU’s the GPU power reported for both GPU’s stayed around 94%. With the fans on auto the temperatures stabilized at ~62degC. Fan speeds at 56% and 57%. GPU usage stayed around 93 to 94%. The GTX660’s core clock is 1071 while the GTX650TiBoost’s clock is 1124.
The GTX660’s memory clock is listed as 3005MHz and the GTX650TiBoost’s is 3055MHz. The GTX660 is using 453MB GDDR (~62MB for the system), and the GTX650TiBoost is using 380MB.
Memory controller loads are 31% for the 660 and 26% for the 650TiBoost
At 5% into the run (which is usually a very accurate indication of expected run times for GPUGrid WU’s, while the estimated remaining time isn’t) the GTX660 had taken 18min 10sec. Expected run time ~22000sec
At 5% into the run the GTX650TiBoost took 20min 9sec, so the expected run time is 24200sec.
The GTX660 is about 11% faster than the GTX650TiBoost.
- At 45% this still looked about right.
Considering the shader count and the core frequency (but nothing else) you would expect the GTX660 to be ~19% faster - Shaders; 960/768=1.25, core freq. 1071/1124 =0.95
So it appears the GTX650TiBoost gains some additional benefit, probably from having the same L2 cache and memory bandwidth serving fewer shaders.
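
As a quick sketch, the back-of-envelope estimate above works out like this (all figures copied from the measurements; nothing new):

# Back-of-envelope scaling from shader count and core clock only (numbers
# copied from the measurements above; ignores cache and memory bandwidth).
gtx660    = {"shaders": 960, "core_mhz": 1071, "time_5pct_s": 18 * 60 + 10}
gtx650tib = {"shaders": 768, "core_mhz": 1124, "time_5pct_s": 20 * 60 + 9}

expected = (gtx660["shaders"] / gtx650tib["shaders"]) * (gtx660["core_mhz"] / gtx650tib["core_mhz"])
measured = gtx650tib["time_5pct_s"] / gtx660["time_5pct_s"]

print(f"Expected GTX660 advantage: {expected:.3f}x (~{(expected - 1) * 100:.0f}%)")   # ~19%
print(f"Measured GTX660 advantage: {measured:.3f}x (~{(measured - 1) * 100:.0f}%)")   # ~11%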

Mid-range GPU performances against the first high-end GPU (running Nathan WU’s):
GTX660Ti - 100% - £210
GTX660 - 88% - £153 (73% cost of a GTX660Ti) – 20.5% better performance/£
GTX650Ti Boost 79% - £138 (66%) – 19.6% better performance/£
GTX650Ti - 58% - £110 (52%) – 11.5% better performance/£

At the above prices the GTX660 offers the best performance/purchase price, but is only slightly better than the GTX650TiBoost. Prices are subject to change.

Running costs:
These cards all optimize towards a similar power % (91 to 95), so we should be able to go by the reference TDP’s and come up with a reasonably accurate measure of performance/Watt:
GTX660Ti – 100% – 150W – 100% (performance/Watt)
GTX660 - 88% – 140W – 94%
GTX650Ti Boost – 79% - 134W – 88%
GTX650Ti – 58% - 110W – 79%

Actual wattages might change this a bit, but it's roughly what I would expect; the performance/Watt of the higher-specced cards would be better. Again the 660 looks quite good.
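
For anyone who wants to redo these with their own local prices, a small sketch of how both tables are derived (performance figures, UK prices and reference TDPs copied from above; expect small rounding differences against the quoted percentages):

# Relative performance (GTX660Ti = 100%), UK prices and reference TDPs copied
# from the two tables above.
cards = {
    "GTX660Ti":       {"perf": 100, "price_gbp": 210, "tdp_w": 150},
    "GTX660":         {"perf": 88,  "price_gbp": 153, "tdp_w": 140},
    "GTX650Ti Boost": {"perf": 79,  "price_gbp": 138, "tdp_w": 134},
    "GTX650Ti":       {"perf": 58,  "price_gbp": 110, "tdp_w": 110},
}

base = cards["GTX660Ti"]
base_per_pound = base["perf"] / base["price_gbp"]
base_per_watt = base["perf"] / base["tdp_w"]

for name, c in cards.items():
    per_pound = (c["perf"] / c["price_gbp"]) / base_per_pound * 100
    per_watt = (c["perf"] / c["tdp_w"]) / base_per_watt * 100
    print(f"{name:15s} perf/£ = {per_pound:5.1f}%   perf/W = {per_watt:5.1f}%")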
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

GoodFodder
Send message
Joined: 4 Oct 12
Posts: 53
Credit: 333,467,496
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 29923 - Posted: 14 May 2013 | 10:31:05 UTC

Interesting comparison, it would be great to see further results for the various WUs; out of curiosity, which cards do you have running in there at the moment?
Personally would leave power and cost out of the equation for the following reasons:
Pricing - really area and time of purchase dependent.
TDP - personal experience (as indeed you mentioned) can be a little misleading, as the actual power draw for running a particular app on a particular card really depends on a number of factors, e.g. the number of memory modules and chip quality (binning), not to mention Boost (which I hate), which as you know can ramp the voltage up to 1.175V - I have to limit the power target of my 670 to 72% as it gets toasty.
Sorry I'm rabbiting on, will leave it there, good show.

Jim1348
Send message
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29925 - Posted: 14 May 2013 | 11:02:19 UTC - in response to Message 29923.
Last modified: 14 May 2013 | 11:09:21 UTC

TDP - personal experience (as indeed you mentioned) can be a little misleading, as the actual power draw for running a particular app on a particular card really depends on a number of factors, e.g. the number of memory modules and chip quality (binning), not to mention Boost (which I hate), which as you know can ramp the voltage up to 1.175V - I have to limit the power target of my 670 to 72% as it gets toasty.

I would leave in the TDP - it is useful for comparative purposes, even if it varies in absolute terms for the factors you mentioned.

However, I would add a correction for power supply efficiency. For example, a Gold 90+ power supply would probably run about 91% efficient at those loads, and so the GTX 660 that measures 140 watts at the AC input (as with a Kill-A-Watt) is actually drawing 140 X 0.91 = 127 watts. That is not particularly important if you are just comparing cards to be run on the same PC, but could be if you are comparing the measurements for your card with someone else's on another PC, or when building a new PC and choosing a power supply for example.
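
In sketch form (the efficiency figure is just an example for a Gold unit at that load):

ac_reading_w = 140       # Kill-A-Watt reading at the wall
psu_efficiency = 0.91    # e.g. an 80+ Gold unit around this load point

dc_draw_w = ac_reading_w * psu_efficiency   # power actually delivered to the components
print(f"~{dc_draw_w:.0f} W")                # ~127 W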

GoodFodder
Send message
Joined: 4 Oct 12
Posts: 53
Credit: 333,467,496
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 29926 - Posted: 14 May 2013 | 11:28:43 UTC

Yes, upon reflection you're right - TDP is useful for comparative purposes; I was thinking out loud.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29929 - Posted: 14 May 2013 | 14:37:04 UTC - in response to Message 29926.
Last modified: 16 May 2013 | 12:41:51 UTC

The TDP is for reference but it's certainly not a simple consideration. You would really need to list all your components when designing a system if TDP (power usage) is a concern. Key would be the PSU; if your PSU is only 80% efficient, its lack of efficiency impacts upon all the other components. However, if you are buying for a low end system with only one 6-pin PCIE connector required, then the purchase cost vs the running cost of such a PSU (and other components) is likely to make it more acceptable. For mid-range systems (with mid-range GPU's) an 85%+ PSU is a reasonable balance. For a high end system (with more than one high end GPU) I would only consider a 90+ efficiency PSU (Professional series).

I just ran the two WU's last night. Today I'm finishing off a Noelia (on the 660) and have started a Nathan WU on the 650Ti. The Nathan WU is at 63% and the anticipated runtime is close to the previous WU. I will run a few Nathan WU's to get more accurate results, but so far the Relative performances (in previous post) look reasonably accurate.

One thing to note is that most GTX660Ti's are FOC. My 660 is very much a reference model, no FOC GPU or memory. The 650TiBoost on the other hand is an FOC card, with a core clock 8.8% over reference and a 1.8% memory FOC.
The reason for this is that the GTX660 has one 6-pin PCIE power connector and a TDP of 140W. This is very close to the 150W maximum, so it doesn't give you much headroom for overclocking. The reference TDP for a GTX650TiBoost is 134W, and having fewer shaders is bound to reduce the power draw. So it lends itself to FOC, as does the GTX660Ti.

Jim, if a GPU has a TDP of 140W and while running a task is at 95% power, then the GPU is using 133W. To the GPU it's irrelevant how efficient the PSU is, it still uses 133W. However to the designer, this is important. It shouldn't be a concern when buying a GPU but when buying a system or upgrading it is.

To support a draw of 133W requires:
@80% efficiency - ~166W from the wall
@85% efficiency - ~156W
@91% efficiency - ~146W

The difference of roughly 10W between an 80+ and an 85+ PSU, or another 10W between the 85% and 91% PSU's, isn't a massive consideration when you have one or two mid-range GPU's. However when you have 2 or more high end GPU's it is:
Two GTX680's (195W TDP) @ 90% power usage on an 80% efficient PSU waste nearly 90W just supporting the GPU's; probably over 100W for the whole system. This can be more than halved by using a top PSU. The added benefits are reduced heat radiation, and thus reduced noise from not needing the fans to spin as fast. The heat problem increases considerably when you add a 3rd or 4th GPU, but you need a more powerful PSU anyway, just for the PCIE power connectors.
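
As a small sketch of that arithmetic (loads and efficiencies as above):

# Wall draw needed to deliver a given DC load, and the waste (heat) in the PSU.
def wall_and_waste(dc_load_w, efficiency):
    wall_w = dc_load_w / efficiency
    return wall_w, wall_w - dc_load_w

for eff in (0.80, 0.85, 0.91):
    wall_w, waste_w = wall_and_waste(133, eff)           # one mid-range GPU at 95% power
    print(f"133W GPU @ {eff:.0%} PSU: {wall_w:.0f}W from the wall, {waste_w:.0f}W wasted")

wall_w, waste_w = wall_and_waste(2 * 195 * 0.90, 0.80)   # two GTX680's at ~90% power
print(f"Two GTX680's @ 80% PSU: {wall_w:.0f}W from the wall, {waste_w:.0f}W wasted")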
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29930 - Posted: 14 May 2013 | 14:48:37 UTC - in response to Message 29914.

A very nice bit of information. Thanks!

To put these mid-range GPU performances into perspective, against the first high-end GPU (running Nathan WU’s):
GTX660Ti - 100% - £210
GTX660 - 88% - £153 (73% cost of a GTX660Ti) – 20.5% better performance/£
GTX650Ti Boost 79% - £138 (66%) – 19.6% better performance/£
GTX650Ti - 58% - £110 (52%) – 11.5% better performance/£

Prices are a bit different in the states though. Best price at newegg AR shipped:

650 TI: $120 (after $10 rebate) (MSI)
650 TI Boost: $162 (no rebate, rebate listed at newegg in error on this GPU) (MSI)
660: $165 (after $15 rebate) (PNY)
660 TI: $263 (after $25 rebate) (Galaxy)

So for instance the 650 TI is only 45.6% of the cost of the 660 TI here and the 660 is only 62.7% of the cost of the 660 TI. Neither the 650 TI Boost nor the 660 TI is looking too good (initial purchase) at these prices.

Running costs:
These cards all optimize towards a similar power % (91 to 95), so we should be able to go by the reference TDP’s and come up with a reasonably accurate measure of performance/Watt:
GTX660Ti – 100% – 150W – 100% (performance/Watt)
GTX660 - 88% – 140W – 94%
GTX650Ti Boost – 79% - 134W – 88%
GTX650Ti – 58% - 110W – 79%

Actual wattages might change this a bit, but it's roughly what I would expect; the performance/Watt of the higher-specced cards would be better. Again the 660 looks quite good.

I don't think the TDP estimates mean too much for our purposes. Do you have some Kill-A-Watt figures?

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29936 - Posted: 14 May 2013 | 15:52:38 UTC - in response to Message 29930.
Last modified: 14 May 2013 | 15:55:30 UTC

In the UK running cost difference for these mid-range GPU's would only be around £10 a year, so yeah, no big deal, and only a consideration if you are getting two or three. In a lot of locations it would be less, and in Germany and a few other EU countries it would be more. So it depends where you are.

In the US the prices seem to be stretched out more, with the higher end GPU's costing relatively more than the lower end, but I expect the GTX650TiBoost to fall in price; it was only released two or three weeks ago, and there might not be too many to choose from. There is no way I would get one for a few $'s less than a GTX660, which still looks like the best bang for buck.

At present I have both the GTX660 and GTX650TiBoost in the system, and I'm CPU crunching on a modestly OC'd i7-3770K (337W at the wall, 91% PSU). I would really need to pull both GPU's, take a reading, add one, only crunch on the GPU, take a reading and then do the same for the other GPU to have accurate figures. A bit much for 2 GPU's that have a TDP difference of 6W, but I might do it in a few days, after I get a few more runs in.
In the meantime I suppose I could make reasonable estimates based on idle power usage (from review forums)...
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29938 - Posted: 14 May 2013 | 16:04:13 UTC

I posted here at the end of April about my plan to upgrade my PSU and my GTX 460.

The Antec 620W PSU is up and running (I’m amazed I installed it without screwing up my Dell Studio XPS 435 i7 PC…), and I just ordered an ASUS GTX 660 from Amazon France; the best deal (with free shipping) I could find at €141 (£120, $182), but I did have to pay, in addition, the 20% VAT (value-added tax) the European Union demands. You guys State-side probably complain about 5-6% State taxes…

I’ve been running my GTX 460 at 850/1700/2000 vs. the stock 675/1350/1800 with no problems.

What OC-ing should I try with the 660?

Thanks, Tom
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29940 - Posted: 14 May 2013 | 16:26:24 UTC - in response to Message 29938.
Last modified: 14 May 2013 | 16:26:53 UTC

I would suggest a very modest OC, and no more. Antec don't make bad PSU's, especially 600W+ models, but I would still be concerned about pulling too many Watts through the PCIE slot on that motherboard. I suggest the first thing you do is complete a WU and get a benchmark, then nudge up the clocks ever so slightly. I'll try to resist the urge to OC for a while, to complete more runs and get a more accurate performance table.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29941 - Posted: 14 May 2013 | 16:30:28 UTC - in response to Message 29940.

That sounds like very good advice!

Thank you.
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29942 - Posted: 14 May 2013 | 18:08:41 UTC - in response to Message 29930.

Measured Wattages:

0 GPUGrid wU’s + no CPU WU’s – System usage 77W (this includes the GPU idle power, of 7W each)
1 GPUGrid WU’s (on the GTX650TiBoost) + no CPU WU’s - 191W
1 GPUGrid WU’s (on the GTX660) + no CPU WU’s - 197W
2 GPUGrid WU’s + no CPU WU’s - 314W
2 GPUGrid WU’s + 5 CPU WU’s - 338W

Excluding the GPU’s idle power,
The GTX 650Ti Boost used 114W
The GTX 660 used 120W (6 Watts more, as expected going by the TDP)
Adding the idle Wattage,
The GTX 650Ti Boost used 121W
The GTX 660 used 127W

Measured when running Nathan WU’s at 95% power and 95% GPU utilization (no CPU WU’s running). Note that these WU's also use part of the CPU, which does increase the power a bit.

These figures suggest that multiplying the reported power percentage by the GPU utilization gives the actual wattage used: 0.95*0.95 is ~90% of the TDP, i.e. 90% of 134W for the GTX650TiBoost = ~121W, and 90% of 140W for the GTX660 = ~126W (close to the measured 127W). Though I would want confirmation of that from others before I accept it.
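
Expressed as a sketch (this is only the hypothesis above, not a confirmed formula):

# Hypothesis to be confirmed: actual draw ~= TDP * power% * utilization.
def estimated_draw_w(tdp_w, power_pct, utilization_pct):
    return tdp_w * (power_pct / 100.0) * (utilization_pct / 100.0)

print(estimated_draw_w(134, 95, 95))   # GTX650TiBoost -> ~121 W (measured ~121 W)
print(estimated_draw_w(140, 95, 95))   # GTX660        -> ~126 W (measured ~127 W)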

____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

GoodFodder
Send message
Joined: 4 Oct 12
Posts: 53
Credit: 333,467,496
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 29943 - Posted: 14 May 2013 | 19:42:46 UTC

Just for comparison:

System: Win XP sp3, G2020, 2x GTX 650 Ti (@1033,1550), 300W 80+ PSU rated at 83% at 50% load.

Would allow +- 3W for power meter accuracy

At the wall:
idle no GPUs = 30W
CPU load (intel burn test) no GPUs = 40W

0 GPUGrid WU’s + no CPU WU’s = 42W
1 GPUGrid WU’s (NATHAN) + no CPU WU’s = 112W
1 GPUGrid WU’s (SDOERR) + no CPU WU’s = 113W
2 GPUGrid WU’s + no CPU WU’s (only has 2 cores) ~ 188W (CPU 100%, GPU 99%)

GTX650Ti idle = 6W
GTX650Ti WU inc 1 CPU core ~83W (at the wall)

est. power @83% for one GTX 650Ti WU inc 1 CPU core = 68.9W

GPU rated TDP 110W, CPU 55W

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29944 - Posted: 14 May 2013 | 20:10:02 UTC

To me it looks like the reported power percentage is calculated relative to the power target rather than the TDP. The former is not reported as widely (though it is in e.g. Anandtech reviews), but is generally ~15 W below the TDP. If this is true we could easily calculate the real power draw as "power target * claimed power percentage".
Some factory OC'ed cards have higher power targets built in, though (e.g. mine).
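
As a sketch of that idea (the ~15 W gap between power target and TDP is just the rough figure mentioned above, not a measured value for any specific card):

tdp_w = 150                  # e.g. a reference GTX660Ti
power_target_w = tdp_w - 15  # assumption: power target typically sits ~15 W below TDP
claimed_power = 0.95         # the "power %" the monitoring tool reports

real_draw_w = power_target_w * claimed_power
print(f"~{real_draw_w:.0f} W")   # ~128 W under this assumption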

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29946 - Posted: 14 May 2013 | 20:52:43 UTC - in response to Message 29943.
Last modified: 14 May 2013 | 21:06:08 UTC

You have raised a few interesting points:

Different operating systems perform differently here. Linux can be up to ~5% faster than XP. XP is ~11% faster than Vista and W7. I think W8 is roughly the same as W7 in terms of GPU performance. 2003R2 servers are slightly faster than XP (but only 1 to 3% last time I measured it), and 2008 servers are ~5% slower than XP (but a bit better than W7, except when it comes to drivers).
Obviously GPU utilization is higher with the faster operating systems.

I suppose I should have taken into consideration my PSU efficiency (it's 91%+).

The one big problem with these measurements is that these WU's use the CPU and the GPU. So you are not measuring the GPU running alone. The problem with this is that you can't accurately account for the CPU's power usage; running a CPU WU from another project and taking the CPU power usage from that is not accurate - you can see a difference of up to 30W in power consumption between different CPU WU's. How much power the Nathan WU's actually draw is open to debate. I suspect it's a good bit less than the average CPU WU would draw.

For reference:
0 GPUGrid wU’s + 1 CPU BoincSimap WU’s – System usage 91W
0 GPUGrid wU’s + 2 CPU BoincSimap WU’s – System usage 104W
0 GPUGrid wU’s + 1 CPU Ibercivis WU’s – System usage 93W
0 GPUGrid wU’s + 2 CPU Ibercivis WU’s – System usage 106W
0 GPUGrid wU’s + 1 CPU Climate WU’s – System usage 95W
0 GPUGrid wU’s + 2 CPU Climate WU’s – System usage 112W

Yeah, that was too easy - Power target it is. Are there any tools that can tell you what your GPU's Power Target actually is?

I noticed that with MSI Afterburner I cannot unlock the Core Voltage for the GTX650TiBoost, but I can for the GTX660. I can move the Power Limit for both. The last time I played with that it was really inaccurate.
GPUZ is telling me that my power consumption is ~96% of the TDP for my 660 and 95% of the TDP for the 650TiBoost, but that just matches Afterburners power percentage.

I've tweaked things:
660,
Core Voltage +12mV, power limit 109%, Core Clock +78MHz (multiple of 13!), Memory Clock +50MHz; GPU power % now 98%, GPU power usage ~92% (with 2 CPU WU's), core clock 1137MHz, memory clock 3055MHz.

650TiBoost,
Core Voltage (can't budge), power limit 109%, Core Clock +78MHz (multiple of 13!), Memory Clock +55MHz; GPU power % now 98%, GPU power usage ~93%, core clock 1202MHz, memory clock 3110MHz.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Jim1348
Send message
Joined: 28 Jul 12
Posts: 819
Credit: 1,591,285,971
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29952 - Posted: 15 May 2013 | 2:06:49 UTC - in response to Message 29929.
Last modified: 15 May 2013 | 2:39:42 UTC

Jim, if a GPU has a TDP of 140W and while running a task is at 95% power, then the GPU is using 133W. To the GPU it's irrelevant how efficient the PSU is, it still uses 133W. However to the designer, this is important. It shouldn't be a concern when buying a GPU but when buying a system or upgrading it is.

OK, I was measuring it at the AC input, as mentioned in my post. Either should work to get the card power, though if you measure the AC input you need to know the PS efficiency, which is usually known these days for the high-efficiency units. (I trust Kill-A-Watt more than the circuitry on the cards for measuring power, but that is just a personal preference.)

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29957 - Posted: 15 May 2013 | 12:36:20 UTC - in response to Message 29952.

My GTX660 really doesn't like being overclocked. Stability was very poor when it comes to crunching with even a very low OC. It's definitely not worth a 1.3% speed up (task return time) if the error rate rises even slightly, and my error rate rose a lot. This might be down to having a reference GTX660 or it being used for the display; I hadn't been using the system for a bit, and with the GPU barely overclocked, within a minute of me using the system a WU failed, and after 6h with <10min to go! It's been reset to stock.

On the other hand the GTX650TiBoost sticks a modest OC very well, and has returned WU's with the shaders up to 1202MHz (the same as my GTX660Ti), albeit for only a 3.5% decrease in runtime. I dare say a 5%+ performance increase is readily achievable. However, I'm using W7; I would get more than that by just sticking it in an XP rig, and more again using Linux. Also, in XP OCing might not be as beneficial; the GPU would already be running at ~99%. Ditto for Linux.

It's looking like a FOC GTX660 is the best mid-range card to invest in.

____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29959 - Posted: 15 May 2013 | 13:41:28 UTC - in response to Message 29957.

It's looking like a FOC GTX660 is the best mid-range card to invest in.

What's "FOC"?
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29966 - Posted: 15 May 2013 | 16:57:08 UTC - in response to Message 29959.

Factory Over Clocked.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29969 - Posted: 15 May 2013 | 17:15:22 UTC - in response to Message 29966.

Factory Over Clocked.

Thank you!

____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29978 - Posted: 15 May 2013 | 21:23:05 UTC

Actually the higher the GPU utilization, the more a GPU core OC should benefit performance. Because every additional clock is doing real work, whereas at lower utilization levels only a fraction of the added clocks will be used.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 29983 - Posted: 15 May 2013 | 23:27:52 UTC - in response to Message 29978.

Yeah, you're right. I was thinking that you wouldn't be able to OC as much to begin with, but a 5% core OC at 99% utilization gains more than the same 5% OC at 88% utilization: 4.95% vs 4.4%.
Overclocking the GPU core doesn't actually improve the GPU utilization, it just exploits what's there. To improve the GPU utilization you have to solve other bottlenecks, such as the CPU (higher clocks and >availability), PCIE (PCIE3>PCIE2, X16>x8>x4) and the Memory Controller load/GPU memory bandwidth (OC the GDDR, use Virtu if your board is licensed and capable). Faster system memory and disk might also help a tiny amount.
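
As a very rough sketch of why the OC pays off more at higher utilization (a simplified model that ignores the other bottlenecks just listed):

# Only the utilized fraction of the extra clocks does useful work.
def effective_gain(oc_fraction, utilization):
    return oc_fraction * utilization

print(f"{effective_gain(0.05, 0.99):.2%}")   # ~4.95% at 99% utilization
print(f"{effective_gain(0.05, 0.88):.2%}")   # ~4.40% at 88% utilization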
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30076 - Posted: 18 May 2013 | 22:05:40 UTC - in response to Message 29930.
Last modified: 19 May 2013 | 10:36:45 UTC

Just meant as a very rough guide, but it serves to highlight price variation and its effect on performance/purchase price.

£:
GTX660Ti - 100% - £210
GTX660 - 88% - £153 (73% cost of a GTX660Ti) – 20.8% better performance/£
GTX650Ti Boost 79% - £138 (cost 66%) – 19.7% better performance/£
GTX650Ti - 58% - £110 (cost 52%) – 11.5% better performance/£

$ (from Beyond's post):
GTX660Ti - 100% - $263
GTX660 - 88% - $165 (63% cost of a GTX660Ti) – 39.7% better performance/$
GTX650Ti Boost 79% - $162 (62%) – 27.4% better performance/$
GTX650Ti - 58% - $120 (46%) – 26.1% better performance/$

€ (from a site MrS linked to):
GTX660Ti - 100% - €229
GTX660 - 88% - €160 (70% cost of a GTX660Ti) – 25.7% better performance/€
GTX650Ti Boost - 79% - €129 (56%) – 41.1% better performance/€
GTX650Ti - 58% - €104 (45%) – 28.8% better performance/€

CAD $ (matlock):
GTX660Ti - 100% - $300
GTX660 - 88% - $220 (73% cost of a GTX660Ti) – 20.5% better performance/CAD
GTX650Ti Boost - 79% - $180 (60% cost of a GTX660Ti) – 31.6% better performance/CAD
GTX650Ti - 58% - $150 (50% cost of a GTX660Ti) – 16% better performance/CAD

Going by these figures the GTX660 is the best value for money in the UK and the US, but the 650TiBoost is the best value for money in Germany (Euro) and Canada.

Beyond's $84 GTX650Ti offers roughly 80% better performance/$ than the $263 GTX660Ti. As long as you have the space, such bargains are great value.

I will fill this out a bit later.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

matlock
Send message
Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 30079 - Posted: 18 May 2013 | 23:42:13 UTC - in response to Message 30076.

I think it's best to stick with one manufacturer when comparing prices. This also includes reference vs OC models. Looking at the very lowest prices isn't always great as I don't want another Zotac (my 560ti448 was very loud and hot). There are also mail-in-rebates, but I tend to ignore those when comparing prices.

Memory Express is a retailer in Western Canada that has some of the best prices in the area, and they will also price-beat other stores including Newegg. Here are the prices in Canadian dollars for the Asus DirectCU II OC line of 600 series cards (without MIR and without price-beat):

660Ti - $300
660 - $220
650Ti Boost - $180
650Ti - $150

By using their price beat (beating $214.99 by 5%), I just picked up an Asus 660 for $204. I also have a MIR I can send in for another $20 off. $184 for a very high quality card. Runs cool and quiet, unlike my old Zotac.

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30081 - Posted: 19 May 2013 | 4:55:04 UTC - in response to Message 30079.

Newegg just had the MSI 650 Ti (non OC) on sale for $84.19 shipped AR. Unfortunately the sale just ended yesterday, only lasted a day or two.

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30617 - Posted: 1 Jun 2013 | 19:38:53 UTC - in response to Message 28645.
Last modified: 1 Jun 2013 | 19:41:07 UTC

The GTX 650 Ti is twice as fast as the GTX 650, and costs about 35% more. It's well worth the extra cost.

Well, it's been a few short months and it looks like the 650 Ti has had its day at GPUGrid. While it's very efficient, it can no longer (in non-OCed form) make the 24hr cutoff for the crazy long NATHAN_KIDc22 WUs. So I suspect the 650 Ti Boost and the 660 will be the next victims to join the DOA list. Just a warning :-(

http://www.gpugrid.net/workunit.php?wuid=4490227

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 30620 - Posted: 1 Jun 2013 | 21:05:38 UTC - in response to Message 30617.

I've completed 4 NATHAN_KID WUs with my stock-clocked 650ti all in ~81K secs (~22.5 hours). I am about to finish my fifth, also expected to take ~22.5 hours.

22.5 hours is pretty close to the 24h window, so one has to be careful with cache settings. I've set my minimum work buffer to 0.

Maybe something just slowed down crunching for this WU?

One of mine: http://www.gpugrid.net/result.php?resultid=6912798

Comparing the values for "time per step", it is clear all the difference in total time was because of a greater time per step. Maybe you had some application eating up GPU cycles?
____________

Profile Beyond
Avatar
Send message
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Level
Tyr
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30624 - Posted: 1 Jun 2013 | 22:00:48 UTC - in response to Message 30620.

Maybe something just slowed down crunching for this WU?

Comparing the values for "time per step", it is clear all the difference in total time was because of a greater time per step. Maybe you had some application eating up GPU cycles?

No, it's a machine that's currently dedicated to crunching. You're running Linux, which is about 15% faster than Win7-64 at GPUGrid according to SKG. That's the difference, and even then you would have to micromanage, and still you don't always make the 24 hour cutoff:

Sent 25 May 2013 | 7:52:09 UTC, reported 26 May 2013 | 9:50:00 UTC
Completed and validated, run time 81,103.17 sec
139,625.00 credits out of 167,550.00

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 30629 - Posted: 2 Jun 2013 | 7:10:23 UTC - in response to Message 30624.

I see, yes, maybe it is because of Linux. Maybe you could cut some time with a mild OC? Or maybe you could install Linux? :)

I missed the 24h window for my first NATHAN_KID, but that was before setting the min work buffer to 0. Since setting it to 0, it's been working like clockwork.

At least, until they make WUs bigger! I hope not, at least not in the immediate future. There aren't that many people with GTX 660s out there.
____________

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30632 - Posted: 2 Jun 2013 | 9:47:30 UTC - in response to Message 30629.
Last modified: 2 Jun 2013 | 10:46:37 UTC

For a long time the difference between Linux and XP was very small (<3%) and XP was around 11% faster than Vista/W7/W8. However since the new apps have arrived it's not as clear. Some have reported big differences and some have reported small differences. The question is why?

As well as the different apps in use (6.18, 6.52, 6.49 and 6.53 - last 2 Betas), there have been several different WU types (NATHAN_KIDc22, GIANNI_VIL1, SDOERR_2HDQc, NATHAN_dhfr36, NOELIA_klebe), and the GPU's in use are CC2.0, 2.1, 3.0 and 3.5 for the latest Betas. Additionally, we know that there are bandwidth &/or cache issues with some of the GF600 range, and our established perceptions regarding performances of older cards is open for debate.

So different apps &/or WU's might perform slightly differently on different operating systems (and we tend to be a bit generic when referring to Linux; some versions may be faster than others). Apps and WU's may also perform differently on the different GPU's architectures and there might even be some performance variation due to GDDR bandwidth for the GF600's.

I had a quick look at NOELIA_klebe WU's on a GTX650TiBoost and found a ~4% difference between Linux and a 2008R2 server (same system/hardware). The difference used to be around 5%, so that's still roughly in line.

I also looked at a few NATHAN_dhfr36 WU's on a Linux SKT775 2.66GHz PCIE2 1066 DDR3 HDD system vs a W7 i7-3770K @ 4.2 GHz PCIE3 2133MHz DDR3 SSD, again on the GTX650TiBoost.
Linux was only 2% faster, but there are explanations; the GPU's frequency was probably slightly higher under W7, the CPU usage for these GPU's is ~100% so processing power probably does have a role to play, as does PCIE3 vs PCIE2, system memory and even the drive. Individually these are not much, and not particularly important, but collectively they are important as the effect is cumulative.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30655 - Posted: 4 Jun 2013 | 20:16:01 UTC - in response to Message 30617.

So I suspect the 650 TI Boost and the 660 are the next victims to join the DOA list. Just a warning :-(

That's why I prefer few large GPUs here over more smaller ones, as long as the price does not become excessive (GTX680+). However, I also think there's no need to increase the WU sizes too fast.

MrS
____________
Scanning for our furry friends since Jan 2002

GoodFodder
Send message
Joined: 4 Oct 12
Posts: 53
Credit: 333,467,496
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 30659 - Posted: 4 Jun 2013 | 21:41:49 UTC
Last modified: 4 Jun 2013 | 22:02:59 UTC

My gtx 650 ti's are running NATHAN_KIDc22 in less than 70k secs with a mild OC of 1033, 1350 - Win XP, mind. Nathans seem to be very much CPU bound from what I have seen - have you tried upping the process priority to 'Normal' and, if only using one GPU per machine, setting the CPU affinity with something like ImageCFG?

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30881 - Posted: 19 Jun 2013 | 0:29:58 UTC - in response to Message 30659.
Last modified: 19 Jun 2013 | 4:48:18 UTC

Got some PCIE risers to play with :))
Very nice, so long as you don't mind fat GPU's hanging out of a case!

On my main system (which, like many, only has 4 PCIE power connectors) the top slot is occupied by a GTX660Ti (slot 0; PCIE 3 X8), the next with a GTX660 (slot 1; PCIE 3 X8), and now the third with a GTX650TiBoost (slot 2; PCIE 2 X4).

The memory controller load of the Boost is only 22%, the GTX660's memory controller load is 26% and the GTX660Ti's memory controller load is 36%.

Of course Boinc reports three GTX650Ti Boosts!
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30882 - Posted: 19 Jun 2013 | 6:30:12 UTC - in response to Message 30881.

Got some PCIE risers to play with :))

I can't visualize what that looks like, but how about this for an alternative...

I recently upgraded to a GTX 660, so my old GTX 460 now sits in its box.

Is there an adapter I can mount in a PCI slot, on top of which I mount the 460? I have a 620W PSU and four PCIE power connectors.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30883 - Posted: 19 Jun 2013 | 7:01:12 UTC - in response to Message 30882.
Last modified: 19 Jun 2013 | 7:25:50 UTC

I think this 'dual PCIE splitter/riser' is what you are interested in,


A bifurcation capable motherboard would be required!
http://www.ameri-rack.com/ARC2-PELY423-C7_m.html

The risers I have are fairly basic, something like this one

I'm using one in a 3 slot motherboard. The bottom slot is so close to the base of the system that I can't physically install a card directly.

There are more adaptable risers (for mounting),
http://www.logicsupply.com/images/photos/accessories/pelx16-c11_big.jpg

You can see a few in use in this image,

- www.moddiy.com

A 1 to 16 slot converter might also interest some,

http://www.amfeltec.com/products/x1x16pcie-riser.php

With all these risers you need to understand the power consequences! A PCIE2.1 X16 slot can deliver up to 75W of power to the GPU. An X1 slot cannot, therefore all the power would need to be delivered through the PCIE power connector(s).

Some interesting PCIE products,

http://www.ameri-rack.com/PCI-EXPRESS.htm

http://www.cyclone.com/products/expansion_backplanes/
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30884 - Posted: 19 Jun 2013 | 8:51:37 UTC - in response to Message 30883.

My first WU using a riser failed after 7h 55min :(
I was opening and closing application windows at the time, so I expect that had something to do with it.

Outcome Computation error
Client state Compute error
Exit status -1 (0xffffffffffffffff) Unknown error number

I'll try some short WU's...
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30886 - Posted: 19 Jun 2013 | 16:43:51 UTC - in response to Message 30883.

I've spent all day looking at how to add my redundant GTX 460 to my GTX 660 crunching effort. Thanks, skgiven, for the leads!

I'm not technical in any way, shape, or form, but I've come up with the following:

A 1 to 16 slot converter might also interest some,

http://www.amfeltec.com/products/x1x16pcie-riser.php

Note that, for Europe (me), the link is:
http://www.thedebugstore.com/acatalog/SKU-074-01.html

With all these risers you need to understand the power consequences! A PCIE2.1 X16 slot can deliver up to 75W of power to the GPU. An X1 slot cannot, therefore all the power would need to be delivered through the PCIE power connector(s).


That looks like a possibility for me. Love the plug 'n' play bit.
I have an almost-4-year-old Dell XPS 435, whose performance still amazes me! First question: do I have any PCIe x1 slots? I found I have two:



Good start! Next, is there room in the case? Amfeltech don't give the dimensions of their card, but I reckon I have five cm of height before it becomes a problem and I need to run with the cover off.

Finally, do I have power? Previously I posted I had four PCIe power connectors. Not true. I have two, but one of these, for the 460 (it needs two), should be enough:

http://tinyurl.com/mkmof9g

And... I have 620W of power:



Enough?




matlock
Send message
Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 30889 - Posted: 20 Jun 2013 | 6:02:34 UTC - in response to Message 30886.


Finally, do I have power? Previously I posted I had four PCIe power connectors. Not true. I have two, but one of these, for the 460 (it needs two), should be enough:

And... I have 620W of power:

Enough?


I wouldn't do it. There's a reason your PSU didn't come with 4 PCIe connectors. I would get a new motherboard and a new PSU if I were going to run dual GPUs with that setup. You don't want to be at the limits of your hardware when it comes to the power supply. You want some headroom.

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30892 - Posted: 20 Jun 2013 | 11:49:13 UTC - in response to Message 30889.


Finally, do I have power? Previously I posted I had four PCIe power connectors. Not true. I have two, but one of these, for the 460 (it needs two), should be enough:

And... I have 620W of power:

Enough?


I wouldn't do it. There's a reason your PSU didn't come with 4 PCIe connectors. I would get a new motherboard and a new PSU if I was going to run dual GPUs having that setup. You don't want to be at the limitations of your hardware when it comes to power supply. You want some headroom.

Might you be a tad pessimistic? Here's the way I see it.

My (new) 620 watts PSU has two PCIe connectors so it can run two GPUs. I need the splitter to run the (legacy) GTX 460 since it requires two feeds. The (new) GTX 660 requires just one.

Nvidia's Web site tells me that the minimum system power requirement for each of these GPUs, separately, is 450 watts. That means I have 170 watts spare to run a second GPU. Surely that's enough?

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 30896 - Posted: 20 Jun 2013 | 14:50:57 UTC - in response to Message 30892.

Personally, I say go ahead! It will either work, or it won't, you don't have much to lose!

Just keep an eye (and your nose) on the system when you start crunching, observe its temps* for a few hours, and don't leave it unattended. You'll want to stop it before the motherboard bursts into flames!

* including motherboard temperature
____________

captainjack
Send message
Joined: 9 May 13
Posts: 171
Credit: 3,562,389,156
RAC: 18,013,879
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30899 - Posted: 20 Jun 2013 | 15:01:15 UTC

Tomba,

You should be able to find some power supply wattage calculators around the internet. I use the one at Newegg frequently. You can put in your system configuration and it will tell you how many watts you need.

http://images10.newegg.com/BizIntell/tool/psucalc/index.html?name=Power-Supply-Wattage-Calculator

When Nvidia says that you need 450 watts, that is for the whole system, not just the video card. The two cards that you mentioned use ~150-200 watts per card.

Hope that helps.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30900 - Posted: 20 Jun 2013 | 18:41:37 UTC - in response to Message 30886.

Finally, do I have power? Previously I posted I had four PCIe power connectors. Not true. I have two, but one of these, for the 460 (it needs two), should be enough:

http://tinyurl.com/mkmof9g

And... I have 620W of power:

Enough?


That splitter won't make any difference to power delivery! It would just split 75W into two - not give you two times 75W. This is what you would need,

http://www.amazon.co.uk/Neewer-PCI-E-Splitter-Power-Adapter/dp/B005J8DGTU/ref=pd_sim_computers_10

The PSU should be powerful enough.

A GTX460 has a TDP of 150W or 160W depending on the model. Two PCIE 6pin power connectors can supply 150W, which should be sufficient for crunching here (actual usage is likely to be ~115W on the GPU).

I'm not convinced it would work, but there is nothing obvious to prevent it so I would give it a try, if only to advise others that it doesn't work.

Since I started using my riser cable I've had lots of errors. In part these may be due to using the iGPU on my i7-3770K (the display is a bit less responsive) and down to some CPU tasks, however I think I was having a power issue; the card being raised was a GeForce GTX650TiBoost, which only has one 6-pin PCIE power connector. That would have been delivering 75W, but the TDP is 134W for a reference card. The card works at 1134MHz rather than 1033, so it might have had a higher bespoke TDP/needed more power. It's also on a PCIE2 slot, unlike the other slots. Anyway, there is likely to be some loss of power delivery through the cable and it might have caused a problem. I've swapped the GPU's around now, so that my GTX660Ti (with two 6-pin PCIE power cables supplying 150W) is the card on the riser. That's actually enough for a reference model on its own, without any power from the PCIE slot.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30901 - Posted: 20 Jun 2013 | 19:01:58 UTC - in response to Message 30899.

Tomba,

You should be able to find some power supply wattage calculators around the internet. I use the one at Newegg frequently. You can put in your system configuration and it will tell you how many watts you need.

http://images10.newegg.com/BizIntell/tool/psucalc/index.html?name=Power-Supply-Wattage-Calculator


Great link! Thank you!!

I spent the afternoon there. I started with my Dell XPS 435 as delivered, four years ago, with a GeForce 210. 386 watts for a 475 PSU. No problem.

I then gave the calculator my three-year-old GTX 460; 553 watts vs. 475 PSU!! That sure is one power-hungry device. And I ran GPUGrid 24/7, under-watted, for three years!

Immediate conclusion? Give the GTX 460 to my grandson for gaming (making sure he has enough watts). And I would still like to keep in the range of my new 610 PSU (should have done the sums before I bought it).

The bottom of the Nvidia range today is the GTX 650. One takes 429 watts, two take 512. One GTX 660 takes 527 watts, two take 709 watts. Unfortunately the Newegg calculator does not allow for more than one GPU but, given these numbers, I reckon one of each - 650 and 660 - will be fine for a 610 watts PSU. Or am I dreaming?

Your thoughts will be most welcome! Thank you, Tom

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30902 - Posted: 20 Jun 2013 | 19:44:42 UTC - in response to Message 30900.

That splitter won't make any difference to power delivery! It would just split 75W into two - not give you two times 75W. This is what you would need,

http://www.amazon.co.uk/Neewer-PCI-E-Splitter-Power-Adapter/dp/B005J8DGTU/ref=pd_sim_computers_10

Many thanks for the heads-up. Ordered...

The PSU should be powerful enough.

I shall certainly try the GTX 460 in the riser, with that molex power supply.

Tom

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 30905 - Posted: 21 Jun 2013 | 9:27:05 UTC - in response to Message 30902.

I think that newegg calculator is only taking into account the Nvidia PSU recommendations, not the cards' actual power draw.

For example, my stock-clocked 650Ti has a "Maximum Graphics Card Power" rating of 110W, but the PSU recommendation is 400W! Nvidia is obviously being cautious and factoring in power consumption by the rest of the computer, but equally obviously, it is being overly cautious! A 110W graphics card does not by any means require a PC that consumes 290W without the card!

A typical CPU will consume less than 100W and a typical spinning disk something like 10W, adding up to 110W. Let's be overly pessimistic and say that the rest of the components will consume 50W; it all adds up to 160W, way less than the 290W assumed by Nvidia! Together with the 650Ti, the power draw gets up to ~270W. One could get along just fine with a 350W PSU. And I'm adding Watts pretty aggressively here...

TL/DR: Don't buy too big PSUs without reason! Find the power draw of your major components, add a safety margin (something like 10%), factor in PSU efficiency (80-90%) and THEN buy the PSU you need.
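
In rough numbers, that method looks something like this (a minimal sketch - every wattage below is an illustrative assumption, not a measurement, so plug in your own components' figures):

# Back-of-the-envelope PSU sizing following the method above.
# All wattages are illustrative assumptions - substitute your own components' figures.
components_w = {
    "cpu": 100,              # typical desktop CPU under load (assumed)
    "gpu_650ti": 110,        # Nvidia's "Maximum Graphics Card Power" figure
    "spinning_disk": 10,
    "everything_else": 50,   # motherboard, RAM, fans - deliberately pessimistic
}

total_w = sum(components_w.values())   # ~270 W sustained draw
with_margin_w = total_w * 1.10         # ~10% safety margin

print(f"Estimated sustained draw: {total_w} W")
print(f"With a 10% margin, a PSU rated around {with_margin_w:.0f} W or more should do")
# -> ~297 W, so a decent 350 W unit still has headroom to spare
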
____________

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30908 - Posted: 21 Jun 2013 | 17:33:48 UTC - in response to Message 30896.

Just keep an eye (and your nose) on the system when you start crunching, observe its temps* for a few hours and don't leave it unattended. You'll want to stop it before the motherboard bursts into flames!

* including motherboard temperature

Thanks for your response.

Re "motherboard temperature" - I found a Win 7 app that measures the CPU core temps. Is that what you mean, or is it something else?

Tom

captainjack
Send message
Joined: 9 May 13
Posts: 171
Credit: 3,562,389,156
RAC: 18,013,879
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30912 - Posted: 21 Jun 2013 | 18:56:19 UTC

Motherboard temperature is different from CPU temperature. I just found and installed Speccy (http://www.piriform.com/speccy), a Windows tool which will tell you both (and a whole lot more information about your system). They have a free version and a paid version; I used the free version.

Just so you know, there is also a Windows tool called GPU-Z that will tell you quite a bit of real-time information about your GPU(s) including temperature and % utilization. I use that one frequently.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30913 - Posted: 21 Jun 2013 | 19:58:34 UTC - in response to Message 30912.
Last modified: 23 Jun 2013 | 8:46:12 UTC

Since I started using my riser cable I've had lots of errors. In part these may be due to using the iGPU on my i7-3770K (the display is a bit less responsive) and to some CPU tasks, but I think I was also having a power issue; the card being raised was a GeForce GTX650TiBoost, which only has one 6-pin PCIE power connector. That would have been delivering 75W, but the TDP is 134W for a reference card. The card runs at 1134MHz rather than 1033MHz, so it might have a higher bespoke TDP/need more power. It's also in a PCIE2 slot, unlike the others. Anyway, there is likely to be some loss of power delivery through the cable, and that might have caused a problem. I've swapped the GPUs around now, so the GTX660Ti (with two 6-pin PCIE power cables supplying 150W) is on the riser instead. That's actually enough for a reference model on its own, without any power from the PCIE slot.

Just an update,
Since I moved the GTX660Ti onto the PCIE riser (instead of the GTX650TiBoost) I have been getting fewer errors. All cards are returning completed results, and the GTX660Ti, despite being in a PCIE2 @X2 slot (going by GPU-Z), is doing very well. If anything it's the most stable of the cards. On the other hand the GTX650TiBoost is running at 98% power, 95% GPU usage and 69°C (which is surprisingly high for that card and is probably causing issues).

Relative performances (GTX660Ti, GTX660, GTX650TiBoost):
GTX660Ti      - I397-SANTI_baxbim1-2-62-RND5561_0 (WU 4536452): sent 21 Jun 2013 13:03:04 UTC, returned 21 Jun 2013 17:21:28 UTC, run time 10,299.97 s, CPU time 10,245.68 s, credit 20,550.00
GTX660        - I693-SANTI_baxbim1-1-62-RND4523_0 (WU 4535331): sent 21 Jun 2013 5:50:58 UTC, returned 21 Jun 2013 16:32:07 UTC, run time 12,587.32 s, CPU time 12,434.03 s, credit 20,550.00
GTX650TiBoost - I538-SANTI_baxbim1-2-62-RND4314_0 (WU 4535821): sent 21 Jun 2013 9:54:01 UTC, returned 21 Jun 2013 17:49:17 UTC, run time 14,005.43 s, CPU time 13,897.61 s, credit 20,550.00
Even on PCIE2 @X2 the GTX660Ti is 36% faster than the GTX650TiBoost and 22% faster than the GTX660. For these WUs, and on this setup (Intel HD Graphics 4000 being used for the display), PCIE width is irrelevant. This confirms what Beyond and dskagcommunity reported.
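
Those percentages are just ratios of the run times; a quick sketch of the arithmetic, using the figures from the table above purely as an example:

run_time_s = {
    "GTX660Ti": 10_299.97,
    "GTX660": 12_587.32,
    "GTX650TiBoost": 14_005.43,
}

baseline = run_time_s["GTX660Ti"]
for card, t in run_time_s.items():
    if card != "GTX660Ti":
        # a card is X% faster if the other one takes X% longer on the same WU type
        print(f"GTX660Ti is {t / baseline - 1:.0%} faster than the {card}")
# -> ~22% faster than the GTX660, ~36% faster than the GTX650TiBoost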

Update - 23 valid out of the last 26, with all cards completing tasks. Reasonably stable...
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30923 - Posted: 22 Jun 2013 | 13:18:41 UTC - in response to Message 30905.

TL/DR: Don't buy too big PSUs without reason! Find the power draw of your major components, add a safety margin (something like 10%), factor in PSU efficiency (80-90%) and THEN buy the PSU you need.

Vagelis is right here, and with the rest of his post. The only points I'd change are these:

1. For power supplies you want way more than a 10% safety margin under sustained load. Drive the PSU too hard and it will fail earlier. Maximum PSU efficiency is achieved around 50% load, and the historical guideline has been to shoot for ~50% load. However, with more efficient PSUs I think it's no problem to aim for 50 - 80% load, as the amount of heat generated inside the PSU (which kills it over time) is significantly reduced by the higher efficiency, and by now we've got 120 mm fans instead of 80 mm (it's easier to cool things this way). Modern PSUs also have flatter efficiency curves, so efficiency doesn't drop much if you exceed 50% load.

To summarize: I'd want more than 20% safety margin between the maximum power draw of the system and what the PSU can deliver. Typical power draw under BOINC will be lower than whatever you come up with considering maximums.

2. PSU efficiency determines how much power the PSU draws from the wall plug. For the PC a "400 W" unit can always deliver 400 W, independent of its efficiency. At 100% load and 80% efficiency you'd be paying for 500 W, whereas at 90% efficiency you'd be paying for 444 W from the wall plug.
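
A minimal sketch of that arithmetic (the 400 W load is just the example figure from above):

def wall_draw_w(delivered_w: float, efficiency: float) -> float:
    """Power drawn from the wall plug for a given load delivered to the PC."""
    return delivered_w / efficiency

load_w = 400.0  # what the PSU delivers to the components at full load

for eff in (0.80, 0.90):
    at_wall = wall_draw_w(load_w, eff)
    waste = at_wall - load_w  # turned into heat inside the PSU
    print(f"{eff:.0%} efficient: {at_wall:.0f} W from the wall, {waste:.0f} W lost as heat")
# 80% -> 500 W at the wall (100 W of heat); 90% -> 444 W at the wall (44 W of heat)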

MrS
____________
Scanning for our furry friends since Jan 2002

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 30931 - Posted: 23 Jun 2013 | 7:11:34 UTC - in response to Message 30923.

I agree completely, ETA! Very valid points!

The good thing these days is that we can find very good quality PSUs at good prices. Of course, there's always the junk that's sold with cases or as an accessory (together with the screws, nuts and bolts, I guess!), but if one is willing to look around a little, good PSUs are there to be found.
____________

Profile dskagcommunity
Avatar
Send message
Joined: 28 Apr 11
Posts: 456
Credit: 817,865,789
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30940 - Posted: 23 Jun 2013 | 12:23:50 UTC - in response to Message 30883.
Last modified: 23 Jun 2013 | 12:37:42 UTC

Because you wrote about the PCIe x1->x16 adapters: there are some with Molex power to supply the needed power, but it's not easy to get them shipped to Austria in my case :( In Germany they are available from several shops. I found one for UK people too.
____________
DSKAG Austria Research Team: http://www.research.dskag.at



tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30941 - Posted: 23 Jun 2013 | 13:45:28 UTC - in response to Message 30940.

Because you wrote about the PCIe x1->x16 adapters: there are some with Molex power to supply the needed power, but it's not easy to get them shipped to Austria in my case :( In Germany they are available from several shops. I found one for UK people too.

I bought one here.

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30942 - Posted: 23 Jun 2013 | 15:21:18 UTC - in response to Message 30912.

I just found and installed Speccy (http://www.piriform.com/speccy), a Windows tool which will tell you both (and a whole lot more information about your system). They have a free version and a paid version; I used the free version.

That's a "Bingo!". Thank you.

Right now, before I install the second GPU, the room temperature is 26.5C and Speccy shows my Dell/Intel mobo between 63C and 67C.

When I install the second GPU, what is the max mobo temperature I should be looking for to avoid it bursting into flames??

Thanks, Tom

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30943 - Posted: 23 Jun 2013 | 15:37:17 UTC - in response to Message 30942.

That's difficult to say when we don't know what is actually being measured. Usually reported mainboard temperatures in well-ventilated cases are in the range of 35 - 45°C, so >60°C would already be really hot. But if that temperature only applies to the VRM circuitry then you're fine, as these can usually take 80 - 100°C.

Do you have any case ventilation? How hot does the air feel inside your case if you quickly open it after sustained crunching?

MrS
____________
Scanning for our furry friends since Jan 2002

Profile dskagcommunity
Avatar
Send message
Joined: 28 Apr 11
Posts: 456
Credit: 817,865,789
RAC: 0
Level
Glu
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30944 - Posted: 23 Jun 2013 | 17:49:39 UTC - in response to Message 30941.

Because you wrote about the PCIe x1->x16 adapters: there are some with Molex power to supply the needed power, but it's not easy to get them shipped to Austria in my case :( In Germany they are available from several shops. I found one for UK people too.

I bought one here.


That's not a PCIe x1 to x16 adapter ;)
____________
DSKAG Austria Research Team: http://www.research.dskag.at



tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30946 - Posted: 23 Jun 2013 | 18:09:37 UTC - in response to Message 30944.

Because you wrote about the PCIe x1->x16 adapters: there are some with Molex power to supply the needed power, but it's not easy to get them shipped to Austria in my case :( In Germany they are available from several shops. I found one for UK people too.

I bought one here.


That's not a PCIe x1 to x16 adapter ;)

Oh dear... This is getting complicated.

Please show me one. Thank you.

TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30947 - Posted: 23 Jun 2013 | 18:45:35 UTC

Interesting reading here.
I have some tips for measuring these things. I use a power monitor, a device that sits between the wall plug and the PC plug; the reading gives the total wattage used. There is a clear difference between idle and full load: I see around 300W with one GPU (550Ti) and 4 CPU cores at full load, and 358W with a GTX660 and 7 cores of an i7, plus 1 SSD, 1 Blu-ray, 1 DVD and 2 HDDs. That PSU is rated at 880W.

A second handy thing is an infrared thermometer; they're not expensive anymore (I bought mine at Conrad). With the laser pointer you can aim at a specific spot and read the temperature there. With an open case it is even easier.

Both instruments can be an eye-opener for power draw/consumption and temperature/overheating.

@Tomba, with that PSU of yours, 2 GPUs will work. However, I don't know about the riser, as I have no experience with that. But skgiven wrote that he had errors, so keep in mind that if it doesn't work in your system, don't immediately assume the PSU is the bottleneck.
____________
Greetings from TJ

matlock
Send message
Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 30950 - Posted: 23 Jun 2013 | 19:51:37 UTC - in response to Message 30947.

And 358W with a GTX660 and 7 cores of an i7, plus 1 SSD, 1 Blu-ray, 1 DVD and 2 HDDs. That PSU is rated at 880W.


How much is it drawing when your HDDs and optical drives are in use?

You have a lot of headroom with your PSU, but imagine you only had around 600W (at 100% PSU capacity) and two mid-to-high range GPUs at full load. With HDDs connected I would be concerned about possible damage, meaning data backups would be needed more frequently. Overclocking also adds to the power stability requirements. Many people may be comfortable with 80%+ load, but I like to keep it under that.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30954 - Posted: 23 Jun 2013 | 20:58:56 UTC - in response to Message 30950.

Modern desktop HDDs draw about 3 W more under load than when idle. Add 1 - 2 W for older ones. Idle power draw is in the range of 5 W, up to 8 W for older ones. At startup they'll briefly need 20 - 25 W, but at this point the GPUs do not yet draw power, so for cruncher PCs this doesn't matter at all ;)

Not as sure for optical drives (they're hardly ever under load anymore, are they?), but around 10 W under load should be about right, a bit more for a burner.

MrS
____________
Scanning for our furry friends since Jan 2002

matlock
Send message
Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Level
Thr
Scientific publications
watwatwatwatwatwatwatwatwat
Message 30957 - Posted: 24 Jun 2013 | 4:11:14 UTC - in response to Message 30954.

Western Digital Black drives pull about 10-11 W under load. I could debate the numbers on optical drives, etc., but it would distract from my main point: the more PSU headroom the better, and going over 80% load may cause stability issues. This is just for caution - if someone is comfortable with pushing their PSU, more power (hah) to them.

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 30962 - Posted: 24 Jun 2013 | 9:49:58 UTC - in response to Message 30942.

I just found and installed Speccy (http://www.piriform.com/speccy), a Windows tool which will tell you both (and a whole lot more information about your system). They have a free version and a paid version; I used the free version.

That's a "Bingo!". Thank you.

Right now, before I install the second GPU, the room temperature is 26.5C and Speccy shows my Dell/Intel mobo between 63C and 67C.

When I install the second GPU, what is the max mobo temperature I should be looking for to avoid it bursting into flames??

Thanks, Tom


>60C for your motherboard is VERY hot! In the hot Greek summer I'm going through now, with ambient temps ~35C, my temps are:
coretemp-isa-0000
Adapter: ISA adapter
Core 0: +72.0°C (high = +83.0°C, crit = +99.0°C)
Core 1: +66.0°C (high = +83.0°C, crit = +99.0°C)
Core 2: +64.0°C (high = +83.0°C, crit = +99.0°C)
Core 3: +64.0°C (high = +83.0°C, crit = +99.0°C)

atk0110-acpi-0
Adapter: ACPI interface
Vcore Voltage: +1.22 V (min = +0.80 V, max = +1.60 V)
+3.3V Voltage: +3.38 V (min = +2.97 V, max = +3.63 V)
+5V Voltage: +5.14 V (min = +4.50 V, max = +5.50 V)
+12V Voltage: +12.26 V (min = +10.20 V, max = +13.80 V)
CPU Fan Speed: 1110 RPM (min = 600 RPM)
Chassis1 Fan Speed: 1520 RPM (min = 600 RPM)
Chassis2 Fan Speed: 1052 RPM (min = 600 RPM)
Power Fan Speed: 896 RPM (min = 0 RPM)
CPU Temperature: +71.0°C (high = +45.0°C, crit = +45.5°C)
MB Temperature: +38.0°C (high = +45.0°C, crit = +46.0°C)

Fan Speed : 55 %
Gpu : 67 C

With a 27C ambient temperature, my temps would be <60C (around 55) for CPU and GPU and about 30C for my motherboard.

Most probably, the temperature you're seeing is not your motherboard's.
____________

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30963 - Posted: 24 Jun 2013 | 11:15:39 UTC - in response to Message 30962.

Most probably, the temperature you're seeing is not your motherboard's.


TJ
Send message
Joined: 26 Jun 09
Posts: 815
Credit: 1,470,385,294
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30964 - Posted: 24 Jun 2013 | 13:09:48 UTC - in response to Message 30963.

That is hot, Tomba.
I have checked my PCs with the same program and see readings between 36-48°C with an ambient temperature of 30.5°C!
Very strange for a Dell, as I have several models, even an XPS420, and that one runs 24/7 and quite cool (40°C for the MOBO), though it has a smaller PSU than yours - still the original Dell one.
Your temperature is too high; you should try to do something about it. If you clean all the fans with canned air, close the case (the airflow will be channelled then - Dell does that smartly), and don't start any applications for a while, what is the temperature reading then?
____________
Greetings from TJ

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30965 - Posted: 24 Jun 2013 | 14:08:43 UTC - in response to Message 30964.

Your temperature is too high; you should try to do something about it. If you clean all the fans with canned air, close the case (the airflow will be channelled then - Dell does that smartly), and don't start any applications for a while, what is the temperature reading then?

Thanks for responding. As a first step I suspended the active GPUGrid WU. Temp is now 40C/41C.

Now for the blow bit...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30974 - Posted: 24 Jun 2013 | 16:57:30 UTC - in response to Message 30965.

First I'd make sure the reading is actually correct. Which sensor measures what is something that can quickly be mixed up by any party involved. That the tool calls it "system temperature" doesn't mean anything on its own. Hence my suggestion to feel the air for yourself - then you'll know for sure whether you've got a software reading error or you're cooking your hardware.

And hence the question regarding the case fans... the fewer there are, the more likely the reading is true ;)

MrS
____________
Scanning for our furry friends since Jan 2002

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30982 - Posted: 24 Jun 2013 | 19:14:48 UTC - in response to Message 30974.

Going by that window grab, 'System Temperature' means the South Bridge chip, and it's 63°C. It's probably covered by a heatsink. IIRC 63°C is reasonable enough for an early i7 platform.
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 30995 - Posted: 25 Jun 2013 | 5:40:23 UTC - in response to Message 30965.

Now for the blow bit...

I usually blow out the case every three months so, apart from a small amount of dust, it was clean enough. All the case-side ventilation holes were clear of fluff and spider webs(!)

After reassembling I took temp readings with nothing running - top pic - and with just GPUGrid running - bottom pic:

Vagelis Giannadakis
Send message
Joined: 5 May 13
Posts: 187
Credit: 349,254,454
RAC: 0
Level
Asp
Scientific publications
watwatwatwatwatwatwatwatwatwat
Message 30996 - Posted: 25 Jun 2013 | 11:54:14 UTC - in response to Message 30995.

Your CPU and system temperatures are too high. The CPU I can attribute to the lousy Intel stock cooler - it is an EPIC fail. I HIGHLY recommend a good aftermarket cooler, especially for prolonged crunching. You can find many decent ones at very reasonable prices that will just blow the Intel cooler out of the water!

A high system temperature is unusual. I would attribute it to poor case ventilation or even a failure of some cooling part. Perhaps you should check your motherboard's coolers. Maybe something like the south-bridge cooler has detached from the chip? I would do a visual as well as "touching" test of the various coolers. See if everything is where it must be and make sure it is firmly seated / attached to what it's supposed to cool.
____________

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31002 - Posted: 25 Jun 2013 | 14:24:42 UTC

I know some of you will be cringing, with finger on fire department emergency button, but I had to do it...

The PCIe 1-16 riser, and the Molex PCIe power adapter, arrived. I connected the 660 to the Molex, and the two PSU PCIe connectors to the 460. Windows spent a long time installing S/W for the 460 but it appeared. See pic below.

My next challenge (Help!) is how to tell BOINC about the 460. I tried several GPUGrid "Updates" but no joy. How do I do that?

flashawk
Send message
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Level
Arg
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwat
Message 31005 - Posted: 25 Jun 2013 | 15:36:26 UTC

Did you try this?

<cc_config>
<options>
<use_all_gpus>1</use_all_gpus>
</options>
</cc_config>

Save it as cc_config.xml. If you already have one, disregard this.

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31006 - Posted: 25 Jun 2013 | 15:37:17 UTC - in response to Message 31002.
Last modified: 25 Jun 2013 | 15:39:05 UTC

cc_config - use all GPU's - FAQ - Best configurations for GPUGRID
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31007 - Posted: 25 Jun 2013 | 15:40:13 UTC - in response to Message 31005.

Did you try this?

<cc_config>
<options>
<use_all_gpus>1</use_all_gpus>
</options>
</cc_config>

Save as cc_config.xml if you have it already, disregard.


Where do I save it in Win 7 ??

Profile skgiven
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Level
His
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31008 - Posted: 25 Jun 2013 | 15:41:45 UTC - in response to Message 31007.
Last modified: 25 Jun 2013 | 15:42:25 UTC

C:\ProgramData\BOINC (this folder will likely be hidden, but should open if you type it into Explorer)

From, FAQ - Best configurations for GPUGRID
____________
FAQ's

HOW TO:
- Opt out of Beta Tests
- Ask for Help

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31012 - Posted: 25 Jun 2013 | 16:26:02 UTC - in response to Message 31008.

C:\ProgramData\BOINC (this folder will likely be hidden, but should open if you type it into Explorer)

Wow! It's all happening!! Thank you!

First, I got two POEMs, and the GPUGrid WU went to sleep. Then one of the POEMs died and a GPUGrid WU downloaded. Here we are now, which I guess is what I was aiming for!

Many thanks, Tom



Temps are here:

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31013 - Posted: 25 Jun 2013 | 16:46:17 UTC

One interesting observation:

With only the 660 running, it consumed a whole CPU thread - 13% of the i7.

With the 460 running too, it uses a lot less:

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31015 - Posted: 25 Jun 2013 | 18:06:09 UTC
Last modified: 25 Jun 2013 | 18:07:00 UTC

You can't run more than 1 nVidia for POEM (and all other OpenCL projects, I think). To keep running some POEMs you'd need to exclude one GPU from this project, or mix AMDs and nVidias in one machine.
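
For example, an exclusion in cc_config.xml along these lines keeps a project off a chosen card (the URL and device number below are placeholders, not real values - substitute the project you want to exclude and that card's BOINC device number):

<cc_config>
  <options>
    <use_all_gpus>1</use_all_gpus>
    <exclude_gpu>
      <url>http://example-project-url/</url>   <!-- project to keep off this GPU -->
      <device_num>1</device_num>               <!-- BOINC device number of the excluded GPU -->
    </exclude_gpu>
  </options>
</cc_config>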

Edit: yes, the CPU usage again. For Keplers a whole thread is used, for older cards much less.

MrS
____________
Scanning for our furry friends since Jan 2002

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31016 - Posted: 25 Jun 2013 | 18:34:17 UTC - in response to Message 31015.

You can't run more than 1 nVidia for POEM (and all other OpenCL projects, I think). To keep running some POEMs you'd need to exclude one GPU from this project, or mix AMDs and nVidias in one machine.

I wonder why I got two...

Edit: yes, the CPU usage again. For Keplers a whole thread is used, for older cards much less.

MrS

That would explain it. Thank you!

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31024 - Posted: 25 Jun 2013 | 20:22:48 UTC - in response to Message 31016.
Last modified: 25 Jun 2013 | 20:23:03 UTC

You can't run more than 1 nVidia for POEM (and all other OpenCL projects, I think). To keep running some POEMs you'd need to exclude one GPU from this project, or mix AMDs and nVidias in one machine.

I wonder why I got two...

Random chance. Not sure you understood correctly: any POEM WU assigned to your 2nd GPU will fail in the stock config.

MrS
____________
Scanning for our furry friends since Jan 2002

tomba
Send message
Joined: 21 Feb 09
Posts: 497
Credit: 700,690,702
RAC: 0
Level
Lys
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31070 - Posted: 27 Jun 2013 | 18:05:37 UTC - in response to Message 31002.

I know some of you will be cringing, with finger on fire department emergency button, but I had to do it...

The PCIe 1-16 riser, and the Molex PCIe power adapter, arrived. I connected the 660 to the Molex, and the two PSU PCIe connectors to the 460. Windows spent a long time installing S/W for the 460 but it appeared.

Well - if you were cringing you were right! Too many times the CPU temp went into the red zone, so I removed the riser, and its GTX 460. I'm now back to just the GTX 660, and all is well.

Question: was my problem the lack of power (610W for the two GPUs), or was it insufficient cooling?


ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Avatar
Send message
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Level
Met
Scientific publications
watwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwatwat
Message 31078 - Posted: 27 Jun 2013 | 21:05:23 UTC - in response to Message 31070.

If the CPU becomes too hot it's insufficient case cooling, assuming the CPU fan is already working as hard as it can. "Too hot" just means too much power is being drawn and turned into heat, so the PSU did its job just fine ;)

MrS
____________
Scanning for our furry friends since Jan 2002

Post to thread

Message boards : Graphics cards (GPUs) : NVidia GTX 650 Ti & comparisons to GTX660, 660Ti, 670 & 680
