Message boards : Graphics cards (GPUs) : 560Ti Owners!
What are your current runtimes for long tasks?
ID: 27977 | Rating: 0
17.5 h with the current application is a good value on what is, I think in your case, the 384-core version of the 560 Ti.
ID: 27985 | Rating: 0
Your GTX560Ti times are better than I would have predicted, suggesting a relative performance improvement for the super-scalar cards.
ID: 27995 | Rating: 0
Yes, it is a Ti card. It's an MSI GTX560 Ti 2G GDDR5 Twin Frozr II OC.
ID: 27999 | Rating: 0
Yes, it is a Ti card. It's an MSI GTX560 Ti 2G GDDR5 Twin Frozr II OC. It is also just the 384-core card, not the 448.
ID: 28001 | Rating: 0
Simba123 wrote: What are your current runtimes for long tasks?

Looks like both GPUs are super-scalar. I got 48,817.82 seconds for a 135,150-point WU named "9px26_6-NOELIA_hfXA_long_ligs89-0-2-RND0470_0", while all 12 CPU threads were fully loaded by the F@H SMP client the whole time. My card is a 3-slot Asus ENGTX560Ti, 448 cores at 1730 MHz @ 1.038 V core @ 2000 MHz memory; it's a GF110 chip.
ID: 28023 | Rating: 0
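Those figures can be turned into a rough credits-per-day estimate. This is only an illustrative sketch (the function name is made up, and BOINC's actual RAC is a decaying average, not this simple ratio):

```python
# Back-of-the-envelope credit rate from the figures quoted above.
# Illustration only: real RAC is an exponentially decaying average
# maintained by BOINC, not a straight points/runtime ratio.
SECONDS_PER_DAY = 86400

def credits_per_day(points, runtime_seconds):
    """Credits earned per day if identical WUs ran back to back."""
    return points * SECONDS_PER_DAY / runtime_seconds

rate = credits_per_day(135150, 48817.82)
print(round(rate))  # about 239k credits/day for this WU type
```

That ballpark is consistent with the ~250k RAC estimate mentioned later in the thread.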
Frontiers, what is your GPU clock rate?
ID: 28035 | Rating: 0
I use PCIe 1.0 with a 560 Ti 448-core edition. It works fine in there. :D If I weren't stuck with these big uploads and my small upload bandwidth, I would surely get a 250k RAC with these NOELIAs.
ID: 28041 | Rating: 0
I use PCIe 1.0 with a 560 Ti 448-core edition. It works fine in there. :D If I weren't stuck with these big uploads and my small upload bandwidth, I would surely get a 250k RAC with these NOELIAs.

I think that 'you can't use PCIE X1 or X4' has now been proven a misconception. Looking at your times and mine, there is no difference between running in the X1 slot and the X16 slot. Quite surprising actually, but the numbers don't lie. Happy though, as it means I can leave my 560Ti in the bottom X4 slot, which means lower temps.
ID: 28061 | Rating: 0
I think that 'you can't use PCIE X1 or X4' has now been proven a misconception.

PCIE1 doesn't mean x1 or x4. It can be x16, x8, x4 or x1. PCIE1 x16 is as fast as PCIE2 x8, and even PCIE3 x4. So PCIE1 isn't an obstacle that will prevent you crunching; it will just slow you down. How much depends on whether it's x16, x8 or x4.

Then there is the fact that PCIE1 boards tend to only support DDR or DDR2 memory. This also reduces performance, as does the CPU, which would again be limited compared to a top 3rd-generation Intel CPU, or your i7-2600K. So while PCIE2 x4 does work, there will be some performance loss due to the reduced PCIE rates. With a PCIE2 motherboard, you could be using dual- or triple-channel DDR3 and have a very fast CPU. You also have to consider the operating system performance differences (XP or Linux will outperform W7 by >11%), and don't forget that there is a 448-CUDA-core version of the 560Ti and a 384-core (with 256 usable cores) version.

This has been discussed, demonstrated and proven. How accurate the conclusions are is open to debate after every new application release, but as a general rule, when you reduce the PCIE bus rate, performance will drop, and this will be more noticeable for more powerful GPUs.

As you have the 384-CUDA-core version, I would not be too worried about having it in the PCIE2.0 x4 slot. You might want to check whether your GTX660Ti's slot drops to X8 or remains at X16 when the X4 slot is populated. This is board-specific.
____________
FAQ's
HOW TO:
- Opt out of Beta Tests
- Ask for Help
ID: 28062 | Rating: 0
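The bandwidth equivalences claimed above (PCIE1 x16 ≈ PCIE2 x8 ≈ PCIE3 x4) follow from the per-lane rate of each generation. A quick sketch using the commonly cited approximate per-lane figures (the function name here is made up):

```python
# Approximate one-direction per-lane bandwidth in MB/s:
# PCIe 1.x runs at 2.5 GT/s with 8b/10b encoding -> ~250 MB/s/lane,
# PCIe 2.0 doubles the rate -> ~500 MB/s/lane,
# PCIe 3.0 runs at 8 GT/s with 128b/130b encoding -> ~985 MB/s/lane.
PER_LANE_MBPS = {1: 250, 2: 500, 3: 985}

def slot_bandwidth(gen, lanes):
    """Approximate one-direction bandwidth of a PCIe slot in MB/s."""
    return PER_LANE_MBPS[gen] * lanes

print(slot_bandwidth(1, 16))  # PCIe 1.0 x16 -> 4000 MB/s
print(slot_bandwidth(2, 8))   # PCIe 2.0 x8  -> 4000 MB/s
print(slot_bandwidth(3, 4))   # PCIe 3.0 x4  -> 3940 MB/s
```

So a PCIe 1.0 x16 slot and a PCIe 2.0 x8 slot offer the same raw bandwidth, and PCIe 3.0 x4 is only marginally behind.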
Hi,
ID: 28070 | Rating: 0
No, lower the voltage first. That's a free drop in power consumption (= electricity bill), heat and noise, as long as it's still stable. Only lower frequency and voltage further if this isn't enough.
ID: 28074 | Rating: 0
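The reasoning behind "lower the voltage first" is that a chip's dynamic power scales roughly with frequency times voltage squared, so a voltage drop pays off quadratically while a clock drop only pays off linearly. A small illustrative sketch (the function name is made up, and this ignores static leakage power):

```python
# Dynamic power of CMOS logic scales roughly as P ~ f * V^2.
# This toy model ignores static/leakage power, so treat the
# numbers as a rough guide only.
def relative_dynamic_power(v_ratio, f_ratio=1.0):
    """Dynamic power relative to stock, given voltage/clock ratios."""
    return f_ratio * v_ratio ** 2

# Dropping core voltage by 10% at the same clock cuts dynamic
# power by about 19%, with no loss of crunching speed:
print(1 - relative_dynamic_power(0.9))  # ~0.19
```

That is why undervolting at stock clocks is "free" efficiency, provided the card still passes stability testing.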
I don't agree with you.
ID: 28080 | Rating: 0
That's why I said "as long as it's still stable" - stability testing is needed when lowering the voltage, but the result is more efficient operation.
ID: 28081 | Rating: 0
I think that 'you can't use PCIE X1 or X4' has now been proven a misconception.

I remember that discussion. It motivated me to upgrade from a Core 2 Quad to a Core i7-870 and a 970. While every statement of yours is still true, my experience suggests that the CUDA 4.2 client is throttled far less by factors such as PCIe bandwidth than the CUDA 3.1 client was (at least for some types of workunits).

One of my Core i7-970s and its motherboard failed recently, and I could replace it only with a Core 2 Duo 6700 (@ stock clock) in an Intel D975XBX motherboard. It has 3 PCIe (1st gen.) x16 slots; two of them share the same x16 lanes (so they become x8 when both are populated), and the 3rd slot is PCIe x4 only. I have two GTX 480s @ 800MHz, in slot 1 (@ x16) and in slot 3 (@ x4). Taking the running times into consideration, it's indistinguishable which workunit was processed on which card. Even if you compare them with the running times of my other host (similar GPU, but a Core i7-870 @ 3.85GHz), there is only about a 5-7% improvement.

I have another experimental host with a GTX480 @ 800MHz. Originally it had a Pentium D 2.8GHz CPU in an Intel DQ965GF motherboard (PCIe x16 1st gen.), but I recently upgraded its CPU to a Core 2 Duo 6600. The results from before this upgrade are still on the list on this host's page. I haven't experienced any decrease in the running times after upgrading the CPU (while I recall there was such a decrease in the times of the CUDA 3.1 client). I know that the GTX480 is not the fastest GPU any more, so to make things clearer, I've put a GTX670 into this host. Its results will come in the following days.
ID: 28085 | Rating: 0
I would expect the CPU speed requirement and the PCIe bandwidth (and latency) to matter less the more complex the WUs are and the slower the GPU is. And I wouldn't be surprised if the current long runs were simulating rather complex molecules... whatever wasn't possible previously due to a lack of computing power.
ID: 28088 | Rating: 0
I know that the GTX480 is not the fastest GPU any more, so to make things clearer, I've put a GTX670 into this host. Its results will come in the following days.

It finished the last NOELIA_hfXA_long_ligs89 in 38,746 seconds (10h 45m 46s). The GTX480 @ 800MHz processed it to 5.2%, and the GTX670 processed the rest (94.8%). There is no significant difference in the running times compared to my other host with a Core i7-870 and two GTX670s (that MB has two real x16 PCIe 2.0 slots). I have to mention that this host consumes 105W less with the GTX670 @ 1084MHz than with the GTX480 @ 800MHz. The next workunit on my experimental host was a TONI_AGGd8, completed in 20,637 seconds (5h 44m); it took just as long as it will on my other host (which is at 20% atm).
ID: 28089 | Rating: 0
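The seconds-to-h/m/s conversions quoted in this thread are easy to double-check; a small sketch (the helper name is made up):

```python
def hms(seconds):
    """Format a whole number of seconds as 'Hh Mm Ss'."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h}h {m}m {s}s"

print(hms(38746))  # -> 10h 45m 46s (the NOELIA long run above)
print(hms(20637))  # -> 5h 43m 57s, i.e. ~5h 44m (the TONI_AGGd8)
```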