
Message boards : Graphics cards (GPUs) : GTX 980Ti

Francois Normandin
Joined: 8 Mar 11
Posts: 71
Credit: 654,432,613
RAC: 0
Message 41218 - Posted: 31 May 2015 | 17:38:55 UTC

The NVIDIA GeForce GTX 980 Ti launches in two days, and we have already seen how the reference cards look and perform. The main attraction of the GeForce GTX 980 Ti will be the non-reference lineup from NVIDIA's AIB partners, which will include several high-end designs with triple-fan, hybrid, and liquid coolers.


http://wccftech.com/nvidia-aib-partners-prepare-huge-geforce-gtx-980-ti-nonreference-lineup-hybrid-triple-fan-cards-unveiled/

[CSF] Thomas H.V. DUPONT
Joined: 20 Jul 14
Posts: 732
Credit: 126,845,366
RAC: 156,524
Message 41221 - Posted: 1 Jun 2015 | 5:46:58 UTC

https://twitter.com/TEAM_CSF/status/605247234679128064
____________
[CSF] Thomas H.V. Dupont
Founder of the team CRUNCHERS SANS FRONTIERES 2.0
www.crunchersansfrontieres

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,247,165,968
RAC: 3,858,781
Message 41570 - Posted: 28 Jul 2015 | 8:19:45 UTC
Last modified: 28 Jul 2015 | 9:15:30 UTC

Ladies and Gentlemen, Boys and Girls,
May I present to you the fastest GERARD_FXCXCL12 ever processed:

e1s5_2-GERARD_FXCXCL12_LIG_13185441-0-1-RND4038_1
It took only 19517 seconds (that is 5h 25m 17s) to process on a Gigabyte GV-N98TG1 GAMING-6GD. Too bad that I'm only testing this card :)

e2s7_e1s6f207-GERARD_FXCXCL12_LIG_11680461-0-1-RND0759_0
19496 seconds (5h 24m 56s)

The specs:
GPU clock: 1447MHz (+80MHz set by MSI Afterburner)
GPU voltage: 1.230V (+38mV set by MSI Afterburner)
GPU Temp: 62°C (max) ambient temp: 20°C (the PC is in the open air)
GPU fan speed: 70%, 2936rpm (Windforce 3x)
GPU memory clock: 3505MHz (set by NVidia inspector)
GPU usage: 96%
GPU FB usage: 28% (@3505MHz)
GPU power: 85% (267W)
Total system power draw (measured at the 230V 50Hz wall outlet): 350W with 1 GPU task running, 83W idle
Power Supply: Corsair AX1200 (80 Plus Gold)
CPU: Core i7-4790k @4.00GHz
PCIe: 3.0@16x
MB: Gigabyte GA-Z87X-OC
No other GPUs were present.
No other tasks were running.
OS: Windows XP x64
NVidia Driver: 353.30
Compared to a GTX 980 (1415MHz@1.218V & 3505MHz):
40% more CUDA cores (2816 vs 2048)
~32.3% better performance (~19500s vs ~25800s)
~50% higher cost
+34.6% total system power draw (260W vs 350W)
+46.7% GPU power draw (182W vs 267W) (total system power draw with GTX980 idle: 78W)

This 1447MHz@1.230V seems to be the maximum setting, as I had a task which errored out after 543 seconds.

The scaling of these cards is a bit below a direct ratio, so there must be some bottleneck in the system (apart from the WDDM, as I'm using Windows XP x64). It could be the application (as it's CUDA 6.5), the GPU, the PCIe bus, or a combination of them.

[CSF] Thomas H.V. DUPONT
Joined: 20 Jul 14
Posts: 732
Credit: 126,845,366
RAC: 156,524
Message 41574 - Posted: 28 Jul 2015 | 12:18:05 UTC - in response to Message 41570.

Awesome Retvari!
Thanks for the heads-up :)
____________
[CSF] Thomas H.V. Dupont
Founder of the team CRUNCHERS SANS FRONTIERES 2.0
www.crunchersansfrontieres

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,247,165,968
RAC: 3,858,781
Message 41577 - Posted: 28 Jul 2015 | 15:16:57 UTC - in response to Message 41570.

40% more CUDA cores (2816 vs 2048)

That is "only" 37.5% more CUDA cores, so there's only a 5.2 percentage point gap between the CUDA core ratio and the performance ratio.
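For anyone who wants to check the arithmetic, here is a quick Python sketch (the runtimes are the approximate figures quoted above, not new measurements):

# Core-count ratio vs. measured performance ratio (GTX 980 Ti vs GTX 980)
cores_ti, cores_980 = 2816, 2048
t_ti, t_980 = 19500, 25800           # approximate GERARD runtimes in seconds
core_ratio = cores_ti / cores_980    # 1.375 -> +37.5% cores
perf_ratio = t_980 / t_ti            # ~1.323 -> ~+32.3% performance
print(f"cores: +{core_ratio - 1:.1%}, performance: +{perf_ratio - 1:.1%}")
print(f"gap: {core_ratio - perf_ratio:.1%} points")  # ~5.2 percentage points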

eXaPower
Joined: 25 Sep 13
Posts: 293
Credit: 1,897,601,978
RAC: 0
Message 41582 - Posted: 28 Jul 2015 | 16:41:06 UTC - in response to Message 41570.
Last modified: 28 Jul 2015 | 16:44:58 UTC

This 1447MHz@1.230V seems to be the maximum setting, as I had a task which errored out after 543 seconds.

5pot reported (on Windows 8.1) a 980 Ti boost bin of 1450 MHz, at roughly 2 hours slower per task. XP with GM200 offers about a 20% performance advantage over WDDM.

The scaling of these cards is a bit below a direct ratio, so there must be some bottleneck in the system (apart from the WDDM, as I'm using Windows XP x64). It could be the application (as it's CUDA 6.5), the GPU, the PCIe bus, or a combination of them.

WDDM is holding back a single WU by more than ~10% with GM200. I'm curious to see what an app update does. The CUDA 6.5 app has the 980 Ti equaling two 970s or 780s, just as the 980 equals two 680s. There are so many micro-factors to get hold of when seeking peak optimization. ACEMD is one of a kind, challenging the GPU with little latitude.

Trotador
Joined: 25 Mar 12
Posts: 103
Credit: 13,920,977,393
RAC: 7,489,952
Message 41587 - Posted: 29 Jul 2015 | 4:26:50 UTC

Nice, complete report. Almost 1 million PPD, wow!

Nicolas_orleans
Joined: 25 Jun 14
Posts: 15
Credit: 450,569,525
RAC: 11,473
Message 41607 - Posted: 2 Aug 2015 | 18:20:53 UTC
Last modified: 2 Aug 2015 | 18:21:47 UTC

Thanks Retvari for sharing these results!

Your report is very detailed, but I could not find whether you were running one task at a time or two (or three) simultaneously.

Based on previous reports:
- 5pot's 980 Ti yields 25-27k sec / GERARD, on Win 8.1 and i7-4770k - 81% GPU usage
- localizer's 980 Ti yields 25-27k sec / GERARD, Win 7 and i7-4770k - 79-80% GPU usage
- petebe's 980 Tis yield 22-26k sec / GERARD, Win XP and i7-4770k, unknown GPU usage
... but yours shows 96% GPU usage?

Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,247,165,968
RAC: 3,858,781
Message 41609 - Posted: 2 Aug 2015 | 21:16:50 UTC - in response to Message 41607.

...I could not find whether you were running one task at a time or two (or three) simultaneously.

You can tell from the fact that this is the shortest runtime ever achieved that I was running only one task; if I had run two simultaneously, the runtimes would have roughly doubled. I don't think it's worth the hassle to run two or more workunits at the same time under Windows XP (or Linux), as these OSes don't have the WDDM overhead.

- petebe's 980 Tis yield 22-26k sec / GERARD, Win XP and i7-4770k, unknown GPU usage

His GPU usage is between 93 and 95%, but he's running 4 GPUs (two GTX 980s and two GTX 980 Tis) in the same host. From the large difference between his runtimes and mine, I came to the conclusion that one should not take away any bandwidth if one wants full utilization of the GTX 980 Ti. This GPU is simply too big and too fast.

... but yours shows 96% GPU usage?

Yes.

Nicolas_orleans
Joined: 25 Jun 14
Posts: 15
Credit: 450,569,525
RAC: 11,473
Message 41622 - Posted: 4 Aug 2015 | 19:24:21 UTC - in response to Message 41609.

From the large difference between his runtimes and mine, I came to the conclusion that one should not take away any bandwidth if one wants full utilization of the GTX 980 Ti. This GPU is simply too big and too fast.


Thanks for your replies and conclusion.

Matt
Joined: 11 Jan 13
Posts: 216
Credit: 846,538,252
RAC: 0
Message 41712 - Posted: 29 Aug 2015 | 13:51:33 UTC

My 980 Ti achieved over 1 million points yesterday. I've not been able to do that since I was running two 780 Tis and two 680s. It's a lot more power-efficient as well, with current tasks only taking it up to about 80% power with a small overclock.

Bedrich Hajek
Joined: 28 Mar 09
Posts: 485
Credit: 11,130,303,472
RAC: 15,430,035
Message 41806 - Posted: 13 Sep 2015 | 11:27:39 UTC
Last modified: 13 Sep 2015 | 11:29:28 UTC

Last week, one of my GTX 690 cards failed in my Windows 7 computer (the other one is still working), so I bought a GTX 980 Ti and experimented. Here are my findings.

The motherboard in that computer has the following slots: 1 x PCI Express x16 slot (PCI_E2) with x16 operation, and 1 x PCI Express x16 slot (PCI_E3) with x4 operation. When I installed the 980 Ti in the x16 slot, a GERARD_FXCXCL12_LIG_ unit would finish in a little over 9 hours at 70% usage, but in the x4 slot it would finish in just under 16 hours at 55% usage. This compares with the 690, which would finish 2 GERARD units (one on each GPU) in over 16 hours in either slot, with usage in the mid-80s%. The 980 Ti runs in the mid-50s C with the fan at 70%, while the 690 runs in the low to mid-70s C with the fan at 90%. The 980 Ti has a better cooling system. It is also very bandwidth sensitive.

Is there any way around this, or would the next GPUGRID application version improve performance?

For the record, my computer has 1 GTX 980 Ti and 1 GTX 690 card, not 3 GTX 980 Tis. See the link below.

https://www.gpugrid.net/show_host_detail.php?hostid=127986

Otherwise, everything is working fine.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 41808 - Posted: 13 Sep 2015 | 13:02:31 UTC - in response to Message 41806.

Bedrich, you could run 2 WUs concurrently on your GTX 980 Ti. With Gerards this already helps a lot on a GTX 970, boosting GPU usage from ~70% to 93% on my main machine.

Apart from that: see my posts a few messages above. Everything else I mentioned there also applies to your card, or more specifically: it applies even more, since the card has even more CUDA cores to feed with work.

MrS
____________
Scanning for our furry friends since Jan 2002

Bedrich Hajek
Joined: 28 Mar 09
Posts: 485
Credit: 11,130,303,472
RAC: 15,430,035
Message 41809 - Posted: 13 Sep 2015 | 15:53:09 UTC - in response to Message 41808.

The only problem with that solution is that it would also run 2 WUs concurrently on each of the GTX 690's GPUs, which I don't want, because they are already running at near capacity.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 41811 - Posted: 13 Sep 2015 | 20:26:40 UTC - in response to Message 41809.

There wouldn't be much gain on the GTX690, but it wouldn't hurt either. I've run 2 concurrent WUs on my older GTX660Ti, and it was OK (although of no benefit) for the larger Noelias, which yield a higher GPU utilization than the Gerards. There was a clear benefit for short runs, though. Each GPU on your GTX690 has 8/7 the shaders and 4/3 the memory bandwidth of my old GPU, so it should profit a bit more from 2 concurrent WUs.

And which "capacity" do you mean they're running at? Heat, noise and power consumption? In that case: simply lower the power target a bit. By default the GPU runs at full throttle, at maximum turbo boost clock, with voltages of up to 1.175 V. Scaling it back so that it runs at ~1.10 V yields a ~14% improvement in power efficiency from the voltage alone. At the same time the clock speed would drop by ~7%. If you then increase GPU utilization from 85% to 92%, you make up for that performance drop.

Net gain: ~14% lower power consumption and heat produced. This way you'd run the hardware more efficiently physically and use more of its capabilities via software.
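The ~14% figure follows from the usual rough model that GPU dynamic power scales with clock times voltage squared. A quick check in Python with the numbers from this post (a sketch, not a measurement):

# Rough model: GPU dynamic power ~ clock * voltage^2
v_stock, v_reduced = 1.175, 1.10
clock_factor = 0.93                   # ~7% lower clock at ~1.10 V
efficiency_gain = (v_stock / v_reduced) ** 2 - 1
power_factor = clock_factor * (v_reduced / v_stock) ** 2
print(f"efficiency gain from voltage alone: {efficiency_gain:.1%}")  # ~14.1%
print(f"power draw vs. stock: {power_factor:.2f}x")                  # ~0.81x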

MrS
____________
Scanning for our furry friends since Jan 2002

Bedrich Hajek
Joined: 28 Mar 09
Posts: 485
Credit: 11,130,303,472
RAC: 15,430,035
Message 41822 - Posted: 15 Sep 2015 | 1:49:56 UTC - in response to Message 41811.

There wouldn't be much gain on the GTX690, but it wouldn't hurt either. I've run 2 concurrent WUs on my older GTX660Ti, and it was OK (although of no benefit) for the larger Noelias, which yield a higher GPU utilization than the Gerards. There was a clear benefit for short runs, though. Each GPU on your GTX690 has 8/7 the shaders and 4/3 the memory bandwidth of my old GPU, so it should profit a bit more from 2 concurrent WUs.

And which "capacity" do you mean they're running at? Heat, noise and power consumption? In that case: simply lower the power target a bit. By default the GPU runs at full throttle, at maximum turbo boost clock, with voltages of up to 1.175 V. Scaling it back so that it runs at ~1.10 V yields a ~14% improvement in power efficiency from the voltage alone. At the same time the clock speed would drop by ~7%. If you then increase GPU utilization from 85% to 92%, you make up for that performance drop.

Net gain: ~14% lower power consumption and heat produced. This way you'd run the hardware more efficiently physically and use more of its capabilities via software.

MrS



Actually both heat and power: I am running the GTX 690 at 70+ degrees C and at 95% power already, and I have been doing that for over 2 1/2 years. I don't want to push it any harder. Even so, it takes over 16 hours to complete a GERARD WU. If I run 2 WUs per GPU, the gain will be small, and it will take more than 24 hours to complete each WU. I'd slow the project down, potentially get more errors, and lose my 24-hour bonus.

It's not worth it.

On another topic related to the 980 Ti: how does one get it to run on a Windows XP machine?





Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,247,165,968
RAC: 3,858,781
Message 41823 - Posted: 15 Sep 2015 | 2:38:56 UTC - in response to Message 41822.
Last modified: 15 Sep 2015 | 2:43:09 UTC

On another topic related to the 980 Ti, how does one get to run it on a Windows XP machine?

You need to download the GTX 960 Windows XP driver (or the x64 version) and add two lines to its nv4_dispi.inf file.

1. Download the recent driver for GTX 960 from NVidia.
2. Start the installer, and copy the path of the installation files to the clipboard.
3. Close the installer.
4. Open the nv4_dispi.inf file (in the Display.Driver folder of the installation files) with a text editor (like Notepad).
-- You can do this by creating a shortcut on your desktop like:
-- notepad "<the path of the installation files>\Display.Driver\nv4_dispi.inf"
-- If you left the installation path at its default value, the shortcut would look like this:
-- notepad "C:\NVIDIA\DisplayDriver\355.82\WinXP\English\Display.Driver\nv4_dispi.inf"
5. Search for the string dev.1401 (press Ctrl-F in Notepad).
6. The first hit will look like this:

%NVIDIA_DEV.1401% = Section008, PCI\VEN_10DE&DEV_1401

7. Copy the whole line and paste it under the original, then change both occurrences of 1401 to 17c8 (leave the original line unchanged):

%NVIDIA_DEV.17c8% = Section008, PCI\VEN_10DE&DEV_17c8

8. Search again for the string dev.1401 (press Ctrl-F).
9. It will find this:

NVIDIA_DEV.1401 = "NVIDIA GeForce GTX 960"

10. Copy the whole line and paste it under the original, then change the 1401 to 17c8 and the 960 to 980 Ti (leave the original line unchanged):

NVIDIA_DEV.17c8 = "NVIDIA GeForce GTX 980 Ti"

11. Save the nv4_dispi.inf file (overwrite the original).
12. Start the installer. (Or install the driver manually; there will be a warning about an unsigned driver, but it's safe and should be ignored.)

Note for future use of this method: NVidia may change the Section numbers in the inf file from time to time, so you must use the correct section number from your own original file in step 6 (don't just copy the line shown here into the inf file in step 7).

This method is explained in this post; I've just updated the numbers.
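For those who prefer to script steps 4-11, here is a minimal Python sketch (a hypothetical helper, assuming the default installation path shown above; it clones every line that mentions the GTX 960's device ID):

# patch_inf.py - duplicate the GTX 960 (DEV 1401) entries for the GTX 980 Ti (DEV 17c8)
import shutil

INF = r"C:\NVIDIA\DisplayDriver\355.82\WinXP\English\Display.Driver\nv4_dispi.inf"
shutil.copyfile(INF, INF + ".bak")  # keep a backup of the original

patched = []
with open(INF, encoding="cp1252") as f:
    for line in f:
        patched.append(line)
        if "DEV.1401" in line or "DEV_1401" in line:
            # clone the GTX 960 line, keeping its Section number intact
            patched.append(line.replace("1401", "17c8").replace("GTX 960", "GTX 980 Ti"))

with open(INF, "w", encoding="cp1252") as f:
    f.writelines(patched)

As with the manual edit, the driver will then install as unsigned.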

Bedrich Hajek
Joined: 28 Mar 09
Posts: 485
Credit: 11,130,303,472
RAC: 15,430,035
Message 41830 - Posted: 16 Sep 2015 | 23:40:52 UTC - in response to Message 41823.

Thanks Zoltan for the help. I knew it was going to be just like last time, but I didn't know which lines to copy and how to edit them, with the exception of substituting "NVIDIA GeForce GTX 980 Ti" for "NVIDIA GeForce GTX 960". It's better to ask than to screw it up.

The results for the XP computer were very good, especially since it is an old model, almost 9 years old.

Task 14543988 (workunit 11192988), sent 16 Sep 2015 | 5:35:24 UTC, reported 16 Sep 2015 | 13:02:44 UTC: Completed and validated, run time 24,567.65 s, CPU time 22,693.45 s, credit 255,000.00, Long runs (8-12 hours on fastest card) v8.47 (cuda65)

https://www.gpugrid.net/result.php?resultid=14543988


This is a lot better than my Windows 7 computer:

Task 14543025 (workunit 11192376), sent 15 Sep 2015 | 20:26:30 UTC, reported 16 Sep 2015 | 6:10:17 UTC: Completed and validated, run time 32,033.23 s, CPU time 30,958.19 s, credit 255,000.00, Long runs (8-12 hours on fastest card) v8.47 (cuda65)

https://www.gpugrid.net/result.php?resultid=14543025


The maximum GPU usage is 93% on XP and 70% on 7, per MSI Afterburner.
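Putting a number on the XP advantage, here is a quick ratio of the two runtimes quoted in this post:

# XP vs. Windows 7 runtime for the same task type on the same card model
t_xp, t_win7 = 24567.65, 32033.23  # seconds, from the two results above
print(f"Windows 7 takes {t_win7 / t_xp - 1:.1%} longer")   # ~30.4%
print(f"XP saves {1 - t_xp / t_win7:.1%} of the runtime")  # ~23.3%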

And all I did was install the card and driver, and clean up the registry and file clutter. Clean and easy!

The interesting thing is, when both the XP and the 7 computer were using GTX 690 cards, the finish times were about the same.

I think the issue may be in the GPUGRID application itself with 900 series cards on Windows 7 (I am not sure about Windows 8 or 10).

Did anybody say in one of the posts, "I think you need to ditch XP for 7 or 8..."? I don't remember!

I will update when the next application fixes this slow performance issue. Maybe!




Retvari Zoltan
Joined: 20 Jan 09
Posts: 2343
Credit: 16,247,165,968
RAC: 3,858,781
Message 41831 - Posted: 17 Sep 2015 | 8:32:45 UTC - in response to Message 41830.

Thanks Zoltan for the help.

You're welcome.

The interesting thing is, when both XP and 7 computers were using the GTX 690 cards, the finish times were about the same.

That's because the GTX 690 has a PCIe splitter chip, so its two GPUs share the same PCIe bus, and since the two GPUGrid apps generate a lot of traffic, they hold each other back (especially if the MB has only PCIe 2.0, in which case both GPU chips on the card run at PCIe 2.0).

I think the issue may be in the GPUGRID application itself, 900 series cards on windows 7, (I am not sure about windows 8 or 10).

The issue is a combination of the WDDM latency, the dual-GPU architecture (of the GTX 590, 690 and Titan Z), and PCIe bandwidth. Since the same app runs on Windows XP and on Windows 7, the issue would affect Windows XP as well if it were only in the GPUGrid app.

Did anybody say in one of the posts to "I think you need to ditch XP for 7 or 8..."? I don't remember!

I think you're referring to this post, but I posted that method there for the GTX 980 and 970 (the GTX 980 Ti didn't exist at that time, so its device ID was unknown), precisely to avoid ditching Windows XP.

I will update when the next application fixes this slow performance issue. Maybe!

It could get a bit better, but there's no way for the app to bypass the WDDM overhead under Windows 7 and later. Windows 10 could theoretically be better than earlier versions, since it has WDDM 2.0 (which is said to be optimized for better performance than the previous versions), but there's no sign of this better performance yet.
Theoretically Windows 7 could use XDDM drivers (the architecture Windows XP uses), but I couldn't install NVidia's Windows XP drivers on Windows 7. I did successfully install Windows 7 on my old laptop, which doesn't have Windows 7 drivers for its video controller (it has an AGP interface), so I had to install the Windows XP drivers under Windows 7, and it's working (without the Aero Glass interface). This method is probably not applicable to PCIe cards.

PS: To make your Windows XP safe, you should follow the instructions in this post.

Mel S Stark
Joined: 4 Jun 13
Posts: 4
Credit: 1,718,458,492
RAC: 0
Message 41936 - Posted: 4 Oct 2015 | 7:36:56 UTC - in response to Message 41808.

How exactly can one run two work units on the same GTX 980? I used to do this on Einstein, but I do not know where to find the BOINC config for GPUGRID that allows one to set usage like on Einstein. I may be able to get slightly higher performance. Would this dual-unit configuration also work on a GTX 780? That one only has 3 GB RAM. GPU utilization is 80% on both cards. I have thermal headroom on both of my CPUs. Would overclocking them 300 MHz help enough to offset the diminished CPU lifespan that comes with 10 C hotter temps? My i7-3770K runs @ 75C @ 4.2 GHz @ 65% load.
____________

Snow Crash
Joined: 4 Apr 09
Posts: 450
Credit: 539,316,349
RAC: 0
Message 41938 - Posted: 4 Oct 2015 | 9:59:13 UTC
Last modified: 4 Oct 2015 | 10:01:00 UTC

How exactly can one run two work units on the same GTX 980?

https://www.gpugrid.net/forum_thread.php?id=4155&nowrap=true#41796
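In short, the linked post describes BOINC's app_config.xml mechanism: a file placed in the GPUGrid project folder that tells the client to schedule a fraction of a GPU per task. A minimal sketch (the app name acemdlong is my assumption for the long-run application; check the exact name in your BOINC client before using it):

<app_config>
  <app>
    <name>acemdlong</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage> <!-- half a GPU per task, i.e. 2 tasks per GPU -->
      <cpu_usage>1.0</cpu_usage> <!-- reserve a full CPU core per task -->
    </gpu_versions>
  </app>
</app_config>

Save it as app_config.xml in the project folder under the BOINC data directory (typically projects\www.gpugrid.net), then use BOINC Manager's "Read config files" command or restart BOINC to apply it.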
____________
Thanks - Steve

Mel S Stark
Joined: 4 Jun 13
Posts: 4
Credit: 1,718,458,492
RAC: 0
Message 41939 - Posted: 4 Oct 2015 | 10:41:43 UTC - in response to Message 41938.

Works great! Now if only I could get two more work units for the second GPU. Apparently there are none available to send. Thanks for the help!


Kylinblue
Joined: 4 Mar 14
Posts: 2
Credit: 12,310,575
RAC: 0
Message 42263 - Posted: 1 Dec 2015 | 15:34:27 UTC

Just got my 980 Ti under a full-cover water block. It now shows 75% utilization and 37C when running a single task at 1496 MHz. Am I making full use of the GPU?

AyalaZero
Joined: 22 Nov 15
Posts: 8
Credit: 19,131,150
RAC: 0
Message 42264 - Posted: 1 Dec 2015 | 16:18:37 UTC - in response to Message 42263.

Just got my 980 Ti under a full-cover water block. It now shows 75% utilization and 37C when running a single task at 1496 MHz. Am I making full use of the GPU?


You can make it faster by assigning it a CPU core all to itself. You can do that via SWAN_SYNC.
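SWAN_SYNC is a system-wide environment variable; when it is set, the ACEMD app spins a full CPU core to feed the GPU instead of sleeping between kernels. A sketch for Windows (the value 0 is what older GPUGrid FAQ posts give; verify against the current FAQ):

rem Run in an elevated command prompt; /M makes the variable system-wide.
rem Restart BOINC afterwards so the science app picks it up.
setx SWAN_SYNC 0 /M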
