
Message boards : Graphics cards (GPUs) : Overclocking GPU...

UL1
Joined: 16 Sep 07
Posts: 56
Credit: 35,013,195
RAC: 0
Message 2788 - Posted: 4 Oct 2008 | 7:46:35 UTC

When overclocking a GPU one can normally modify three values: core, shader and memory clock. When altering these values, which one is most effective at speeding up crunching?

Btw: the PC, or rather the GPU, is only used for crunching, so there's no need for a 'good' picture... ;)

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 2793 - Posted: 4 Oct 2008 | 10:00:16 UTC - in response to Message 2788.
Last modified: 4 Oct 2008 | 10:00:30 UTC

so there's no need for a 'good' picture... ;)


Yes, but a need for accurate calculations ;)

You'll want to raise both the core and the shader clock, whereas the memory clock should be pretty much irrelevant.

MrS
____________
Scanning for our furry friends since Jan 2002

naja002
Joined: 25 Sep 08
Posts: 111
Credit: 10,352,599
RAC: 0
Message 2799 - Posted: 4 Oct 2008 | 22:17:58 UTC - in response to Message 2793.

You'll want to raise both the core and the shader clock, whereas the memory clock should be pretty much irrelevant.

MrS



Sorry, but that's a very popular myth about the memory. In the F@H GPU community people say that the memory doesn't matter--it's just not true. Set the core and shaders, then start upping the memory and watch the PPD go up. Lower the memory and watch the PPD go down... It works that way with every card I have (4). I have absolutely no idea why it would be any different for GPUGrid. Just try it for yourself.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 2800 - Posted: 4 Oct 2008 | 23:10:59 UTC

I remember that in folding@home GPU 1 a 50% drop in memory clock would cause a slow-down of 10-15%. So if I increased my 9800's mem clock from 1100 to 1160 I could expect a 1.6% performance increase.

Things could have changed with GPU 2, though, and things could be different in GPU-Grid. It depends on how localized the algorithm is, that is, how well the caches can be used and whether execution ever has to wait for memory requests. GPU-Grid calculates larger molecules than f@h, so it *may* be more memory-performance bound. I'll give it a try, but improvements of a few % will be difficult to see and need averaging over several WUs.
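
As a quick sketch of that arithmetic (assumptions: linear scaling, with the 0.3 sensitivity taken from the upper end of the 10-15%-per-50% figure quoted above):

def mem_oc_gain(old_mhz, new_mhz, sensitivity=0.3):
    # Fractional clock increase, scaled by an assumed linear sensitivity.
    return (new_mhz - old_mhz) / old_mhz * sensitivity * 100

print(mem_oc_gain(1100, 1160))  # ~1.6 (percent), matching the estimate above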

MrS
____________
Scanning for our furry friends since Jan 2002

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 2808 - Posted: 5 Oct 2008 | 15:18:50 UTC

Wow, this was unexpected! First result is in:

Previously I was at 61.0 - 62.5 ms/step and never below 61 ms with app 6.45. The WU with mem clock increased from 1100 to 1160 MHz needed 59.5 ms/step. 5.5% mem clock increase -> 2.5 - 5.0% performance increase, depending on what the normal time would have been for that WU.

So I take everything back and claim the opposite: mem clock does matter in GPU-Grid.
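
For transparency, the 2.5 - 5.0% range follows directly from the uncertainty in the baseline; a minimal check in Python using the numbers above:

old_low, old_high = 61.0, 62.5      # ms/step range before the memory OC
new = 59.5                          # ms/step at 1160 MHz memory
print((old_low - new) / old_low)    # ~0.025 -> 2.5% if the WU would have been a fast one
print((old_high - new) / old_high)  # ~0.048 -> ~5.0% if it would have been a slow one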

MrS
____________
Scanning for our furry friends since Jan 2002

KyleFL
Joined: 28 Aug 08
Posts: 33
Credit: 786,046
RAC: 0
Message 2810 - Posted: 5 Oct 2008 | 17:32:21 UTC - in response to Message 2808.

Wow, this was unexpected! First result is in:

Previously I was at 61.0 - 62.5 ms/step and never below 61 ms with app 6.45. The WU with mem clock increased from 1100 to 1160 MHz needed 59.5 ms/step. 5.5% mem clock increase -> 2.5 - 5.0% performance increase, depending on what the normal time would have been for that WU.

So I take everything back and claim the opposite: mem clock does matter in GPU-Grid.

MrS


Good to know.
Thanks to that info I just decided to push the memory clock of my GTX260 a little bit further.
I'll post some results.
Maybe the increase won't be the same across different cards, as some may not be as memory-bandwidth starved as others.


Cu KyleFL

UL1
Joined: 16 Sep 07
Posts: 56
Credit: 35,013,195
RAC: 0
Message 2813 - Posted: 5 Oct 2008 | 18:30:41 UTC

When running at stock values (600/1500/1000) I had about 65 ms/step...
Now I'm running at 790/1800/1100 and had 52 ms/step for my last WU... ;)
But I'm still wondering which one has the biggest impact on decreasing the calculation times...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 2814 - Posted: 5 Oct 2008 | 19:05:34 UTC - in response to Message 2810.

Maybe the increase won't be the same across different cards, as some may not be as memory-bandwidth starved as others.


Yes, I'd expect as much. But most cards are pretty balanced anyway. On GT200 the larger caches should also help.

MrS
____________
Scanning for our furry friends since Jan 2002

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 2825 - Posted: 6 Oct 2008 | 20:38:32 UTC

The next 2 WUs are in: 62.7 ms with some interactive use and 61.3 ms with only minor use. Damn those long-term averages...

MrS
____________
Scanning for our furry friends since Jan 2002

GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Message 2826 - Posted: 6 Oct 2008 | 20:45:07 UTC - in response to Message 2825.

Can you remind me what the tool to overclock from Linux is?

gdf

UL1
Joined: 16 Sep 07
Posts: 56
Credit: 35,013,195
RAC: 0
Message 2827 - Posted: 6 Oct 2008 | 20:54:45 UTC

There are CoolBits from Nvidia and NVclock...
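
For later readers, a minimal sketch of both approaches; option names and supported cards vary by driver generation, so treat this as an assumption to check against your driver's README. Coolbits goes into the Device section of /etc/X11/xorg.conf and unlocks the clock controls in nvidia-settings:

Section "Device"
    Identifier "Videocard0"
    Driver     "nvidia"
    Option     "Coolbits" "1"
EndSection

NVClock is driven from the shell; assuming a 650 MHz core / 1000 MHz memory target:

nvclock -i               # show the card's current info and clocks
nvclock -n 650 -m 1000   # set core (nvclk) and memory (memclk) in MHz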

koschi
Joined: 14 Aug 08
Posts: 124
Credit: 792,979,198
RAC: 17,226
Message 2828 - Posted: 6 Oct 2008 | 21:13:00 UTC

If you are lucky and the card and driver support it, somehow...
I tried with nvclock (I even compiled the newest one from SVN) and Coolbits set in xorg.conf, but both give me a white screen whenever I change a value. It does not matter whether I lower the clocks or increase them.
I logged in via SSH and nvclock -i reported the core speed somewhere over 900MHz, way too much...

I ended up overclocking my 8800GTS under Windows, modifying and flashing the video BIOS to the new values.

The Gas Giant
Joined: 20 Sep 08
Posts: 54
Credit: 607,157
RAC: 0
Message 2856 - Posted: 7 Oct 2008 | 20:35:26 UTC
Last modified: 7 Oct 2008 | 20:37:33 UTC

I've got an ASUS 9600GT, stock at 650/1625/1800, and am wondering the 'best' way to OC it. Do I keep all the ratios the same, or just the engine/shader ratio?

I've upped it to 700/1800/1900 without seeing a noticeable difference and now have it set to 725/1815/1944.

Live long and BOINC!

ps. If I had known that an 8800 was 'faster' than a 9600 then I would have bought an 8800! Gotta love marketing! But then I was a real n00b, now I'm just a noob.

GDF
Volunteer moderator
Project administrator
Project developer
Project tester
Volunteer developer
Volunteer tester
Project scientist
Joined: 14 Mar 07
Posts: 1957
Credit: 629,356
RAC: 0
Message 2862 - Posted: 7 Oct 2008 | 21:30:19 UTC - in response to Message 2856.

Has anybody overclocked a GTX280?

gdf

KyleFL
Joined: 28 Aug 08
Posts: 33
Credit: 786,046
RAC: 0
Message 2863 - Posted: 7 Oct 2008 | 21:46:26 UTC

Not a GTX280, but a GTX260 (if that helps)

I figured something out:

Overclocking core & shaders by ~15% had an almost linear impact on WU times.
Overclocking the memory by 10% got me a performance gain of ~1-2%.

Stock clocks:
Core 576 - Shader 1242 - Memory 999 : Time per step: 41.7
Overclocking core & shaders:
Core 650 - Shader 1401 - Memory 999 : Time per step: 37.5
Overclocking memory (and a little bit more on core & shaders again):
Core 661 - Shader 1425 - Memory 1101 : Time per step: 37.3

It seems to me that the last gain of 0.2ms is only because of the minimally raised core & shader speed, and not because of the 10% higher memory clock.
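
A quick cross-check of those figures (a sketch; all numbers taken from the table above):

print(650 / 576)              # ~1.128: core/shader clocks up ~12.8%
print(41.7 / 37.5)            # ~1.112: ~11.2% faster, i.e. near-linear scaling
print(661 / 650, 1101 / 999)  # step two: ~1.7% core/shader vs ~10.2% memory
print(37.5 / 37.3)            # ~1.005: the ~0.5% gain matches the small core/shader bump alone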


Cu KyleFL

The Gas Giant
Joined: 20 Sep 08
Posts: 54
Credit: 607,157
RAC: 0
Message 2886 - Posted: 8 Oct 2008 | 18:59:45 UTC - in response to Message 2856.

I've got an ASUS 9600GT, stock at 650/1625/1800, and am wondering the 'best' way to OC it. Do I keep all the ratios the same, or just the engine/shader ratio?

I've upped it to 700/1800/1900 without seeing a noticeable difference and now have it set to 725/1815/1944.

I must admit I'm underwhelmed by the responses to what I thought was a fairly simple question, anyhoo.....

The first wu completed at the new speeds shows a dramatic decrease in time. Fantastic stuff! I'll see how the next one goes.

Live long and BOINC!

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 2887 - Posted: 8 Oct 2008 | 19:36:50 UTC - in response to Message 2886.

I must admit I'm underwhelmed by the responses to what I thought was a fairly simple question, anyhoo.....


No, it's not simple. Intuition tells me that the ratios don't matter, as long as you don't go extreme. But I can't be 100% sure since I didn't test it specifically.

My suggestion: find the maximum stable clock for engine and shader first, then for the memory and afterwards back off a bit for safety. And don't care about the ratios at all.

MrS
____________
Scanning for our furry friends since Jan 2002

naja002
Joined: 25 Sep 08
Posts: 111
Credit: 10,352,599
RAC: 0
Message 2890 - Posted: 8 Oct 2008 | 20:43:21 UTC - in response to Message 2887.

My suggestion: find the maximum stable clock for engine and shader first, then for the memory and afterwards back off a bit for safety. And don't care about the ratios at all.

MrS



Agreed, that's pretty much how I approach it....

Edboard
Joined: 24 Sep 08
Posts: 72
Credit: 12,410,275
RAC: 0
Message 2891 - Posted: 8 Oct 2008 | 21:40:00 UTC - in response to Message 2862.

I have it (GTX280) overclocked to:

clock: 697, shaders: 1500, mem: stock

In my last three units I got these times per step:

29.688 ms
29.096 ms
29.391 ms

This OC is not the maximum possible. I simply tried putting the shaders at 1500 (linked clock) and found it works fine; I have not tried to go beyond it. I have never had a WU error out with these settings, as you can see in my account.

The Gas Giant
Joined: 20 Sep 08
Posts: 54
Credit: 607,157
RAC: 0
Message 2899 - Posted: 9 Oct 2008 | 2:36:59 UTC - in response to Message 2890.

My suggestion: find the maximum stable clock for engine and shader first, then for the memory and afterwards back off a bit for safety. And don't care about the ratios at all.
MrS

Agreed, that's pretty much how I approach it....

Excellent! Many thanks for the responses. Looks like the 11% engine/shader OC and 8% memory OC gave a 15% decrease in overall GPU time for the first WU. The WU time is now below 24 hrs!
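
For what it's worth, that is roughly consistent with the near-linear core/shader scaling reported earlier in the thread (a sketch, assuming that linearity):

print(1 - 1 / 1.11)  # ~0.099: the 11% engine/shader OC alone predicts ~10% less time
# The remaining few percent are plausibly the 8% memory OC plus WU-to-WU variation.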

Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 5311 - Posted: 5 Jan 2009 | 18:03:47 UTC

Maybe I should run that test utility that finds out what the card can do ... :)

Then again ... with my luck ... maybe I shouldn't ....

Nognlite
Joined: 9 Nov 08
Posts: 69
Credit: 25,106,923
RAC: 0
Message 5323 - Posted: 5 Jan 2009 | 23:49:23 UTC

I have an XFX GTX280 XXX with all the following stock settings:

clock: 670, shader: 1458, memory: 2500

In my last three WU's I got:

27.644 ms,
26.656 ms, and
26.796 ms.

I believe that most GTX280's could attain these speeds without great difficulty. Just keep them cool. Mine run between 79-82°C.

Pat

Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 5324 - Posted: 6 Jan 2009 | 0:02:24 UTC - in response to Message 5323.

I have an XFX GTX280 XXX with all the following stock settings:

clock: 670, shader: 1458, memory: 2500

In my last three WU's I got:

27.644 ms,
26.656 ms, and
26.796 ms.

I believe that most GTX280's could attain these speeds without great difficulty. Just keep them cool. Mine run between 79-82°C.

Pat


My 280 is running 27 - 32 ms per step over a bunch of tasks ... the 9800 GT is 91 - 92 ms ...

Westsail and *Pyxey*
Joined: 12 Feb 08
Posts: 11
Credit: 3,194,461
RAC: 0
Message 5370 - Posted: 8 Jan 2009 | 1:38:35 UTC

Sorry guys, dumb question..
Where are you getting these numbers for ms/step? Wondering what my card's doing.. Thanks!

Also, on the overclocking.. these are pretty much my thoughts:
number of shaders X core clock = amount of work done

So I always OC to the highest stable core clock, and the rest doesn't really matter.
Does anyone know... My 260 won't OC. Is there a proggy besides the Nvidia stuff? I tried nTune and the new one, but I don't have the performance tab on the 260 machine!?! It is an MSI card btw.

My 9500 GT overclocks like a champ and is maybe the best RAC/$ of my current setups. ..checking... GPU-Z gives its current clocks as 700/570/1750; stock is 550/500/1375. It is made by EVGA. The 9800GTX runs 800/1115/1984.

Mahalos


____________

Scott Brown
Joined: 21 Oct 08
Posts: 144
Credit: 2,973,555
RAC: 0
Message 5371 - Posted: 8 Jan 2009 | 2:23:54 UTC - in response to Message 5370.

Sorry guys, dumb question..
Where you getting these numbers for ms/step? Wondering what my cards doing..Thanks!


Look at the tasks under each computer; it's in the output. For example:

http://www.gpugrid.net/result.php?resultid=193826


Scott Brown
Joined: 21 Oct 08
Posts: 144
Credit: 2,973,555
RAC: 0
Message 5372 - Posted: 8 Jan 2009 | 2:35:28 UTC - in response to Message 5324.



My 280 is running 27 - 32 ms per step over a bunch of tasks ... the 9800 GT is 91 - 92 ms ...


The older posts in this thread from October were during a period where only one kind of workunit was being examined. Now we have three different kinds (with credits of 32xx, 24xx, and 18xx) that have widely varying ms per step on the same card. It would be interesting to see how cards are doing with these different types of work and how much spread there really is. I suspect that the spread is fairly tight on high-end cards with the differences becoming more substantial as one moves to the low-end. Not sure if different aspects of different OC would have different performance effects on different types of work? and yes, I could not fit another "different" into that last sentence ;)


Thamir Ghaslan
Joined: 26 Aug 08
Posts: 55
Credit: 1,475,857
RAC: 0
Message 5373 - Posted: 8 Jan 2009 | 4:22:02 UTC - in response to Message 5372.
Last modified: 8 Jan 2009 | 4:22:41 UTC



My 280 is running 27 - 32 ms per step over a bunch of tasks ... the 9800 GT is 91 - 92 ms ...


The older posts in this thread from October were during a period where only one kind of workunit was being examined. Now we have three different kinds (with credits of 32xx, 24xx, and 18xx) that have widely varying ms per step on the same card. It would be interesting to see how cards are doing with these different types of work and how much spread there really is. I suspect that the spread is fairly tight on high-end cards with the differences becoming more substantial as one moves to the low-end. Not sure if different aspects of different OC would have different performance effects on different types of work? and yes, I could not fit another "different" into that last sentence ;)



That explains a lot to me! I haven't been following up on the message boards and probably miss a lot of topics and project progress.

I have a 280, and just quick-sampled my tasks. I've noticed that the three credit WU types are almost equally spread, making things fair I suppose. Another thing I've noticed is that the 18xx tend to get close to 35-40 ms while the 32xx tend to do 20-25 ms. The 24xx fall between 25-35 ms.

Just a crude estimate. Perhaps the project admins can come up with a more accurate picture by querying the database and averaging out the cards/tasks.

X1900AIW
Joined: 12 Sep 08
Posts: 74
Credit: 23,566,124
RAC: 0
Message 5379 - Posted: 8 Jan 2009 | 14:30:56 UTC - in response to Message 5372.

The older posts in this thread from October were during a period where only one kind of workunit was being examined. Now we have three different kinds (with credits of 32xx, 24xx, and 18xx) that have widely varying ms per step on the same card. It would be interesting to see how cards are doing with these different types of work and how much spread there really is.


I started (1) with a GTX 260/192 @stock, later added (2) a second GTX 260/216 @stock, and (3) flashed both to 666/1500/1150 (since 26./27. Nov.). Meanwhile I switched the cards between computer IDs (around 4. Jan. the last time), so don't be surprised by mixed values when viewing the different task lists. Here are the results for the three credit groups 1888/2435/3232 (the 2933-credit WU may be an exception):

GTX 260/192 (666/1500/1150) (TaskID > credits > ms/step) - some examples

  • 203862 > 3232 > 30.451 ms
  • 202155 > 3232 > 30.363 ms
  • 199331 > 3232 > 30.188 ms
  • 198010 > 3232 > 43.650 ms (MDIO ERROR)
  • 201276 > 2933 > 57.040 ms (MDIO ERROR)
  • 200798 > 2435 > 36.197 ms
  • 200793 > 2435 > 36.363 ms
  • 196008 > 2435 > 42.375 ms (MDIO ERROR)
  • 202088 > 1888 > 38.864 ms
  • 197912 > 1888 > 38.721 ms
  • 196876 > 1888 > 39.508 ms


GTX 260/216 (666/1500/1150) (TaskID > credits > ms/step)


  • 202904 > 3232 > 27.505 ms (MDIO ERROR)
  • 202898 > 3232 > 27.503 ms (MDIO ERROR)
  • 182989 > 3232 > 27.513 ms (MDIO ERROR)
  • 185601 > 2435 > 33.772 ms (MDIO ERROR)
  • 185199 > 2435 > 33.813 ms (MDIO ERROR)
  • 184240 > 2435 > 34.298 ms (MDIO ERROR)
  • 186263 > 1888 > 35.726 ms (MDIO ERROR)
  • 185932 > 1888 > 35.593 ms (MDIO ERROR)
  • 185160 > 1888 > 37.406 ms (MDIO ERROR)


Are those MDIO errors signs of serious problems? Hopefully not, with these OC settings.

Comparing these values with those from the OCed GTX 280, I assume there is a measurable effect from the difference in shader counts (192-216-240), which can be reduced by heavy overclocking, at least to some degree.
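
The shader-count effect can be read off the 3232-credit numbers above (a sketch; both cards run the same 666/1500/1150 clocks per the post):

ms_192 = 30.3           # typical ms/step, GTX 260/192
ms_216 = 27.5           # typical ms/step, GTX 260/216, same WU type
print(ms_192 / ms_216)  # ~1.10: measured speed-up from the extra shaders
print(216 / 192)        # 1.125: expected ratio if purely shader-bound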

Scott Brown
Joined: 21 Oct 08
Posts: 144
Credit: 2,973,555
RAC: 0
Message 5384 - Posted: 8 Jan 2009 | 16:07:23 UTC

Okay... On the low end I have numbers from a 9500GT with 512MB (700 core, 1750 shader, 2000 memory), and a mid-range 9600 GSO with 384MB (600, 1700, 1800). Using 6.5.0 for all unless noted otherwise.

9500GT

3232> 210ms
2435> 272ms
1888> 293ms


9600GSO

3232> 85ms to 89ms
2435> 111ms to 116ms
1888> 102ms to 115ms

The odd thing with the 9600GSO 1888-credit units is that the range is deceiving, since there were really no middle units--a couple at 102ms with the rest around 115ms.

It does appear that the spread magnifies as one moves from faster to slower cards.


*Notes: 1) The 9500GT numbers are from single workunits at each credit level. 2) The 9500GT shader clock increased from 1750 to 1800 for the 3232-credit unit. 3) The 9600GSO 1888-credit units all ran with BOINC 6.3.21.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 5461 - Posted: 10 Jan 2009 | 18:24:00 UTC

I've noticed that the three credit WU types are almost equally spread, making things fair I suppose.


The WUs do differ in ms/step because they differ in complexity. They also differ in number of steps, so the overall time consumed corresponds to the credits, and therefore the statistical spread of the WU types does not matter (no fair or unfair). Currently the 1888-credit WUs are off and give fewer credits per unit of time, but the problem has already been reported.

Not sure if different aspects of different OC would have different performance effects on different types of work?


I don't think there'll be any dramatic effects here.. the WUs are not that much different. The more complex ones could respond more strongly to memory frequency increases, though.

So I always OC to the highest stable core clock, and the rest doesn't really matter.


No. You need both core and shader frequency. Some utilities clock both up synchronously; maybe that's why you didn't notice the shader went up as well? And the memory clock also matters, just not as much as the other two.
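
To make the shader-clock point concrete: peak arithmetic throughput scales with shader count times the *shader* clock, not the core clock. A sketch (the 3 flops/SP/clock factor is the usual G200 MADD+MUL assumption, not a figure from this thread):

sps, shader_mhz = 216, 1242        # stock GTX 260/216, per the posts above
print(sps * shader_mhz * 3 / 1e3)  # ~805 GFLOPS peak single precision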

Does anyone know... My 260 won't OC. Is there a proggy besides the Nvidia stuff?


If RivaTuner can't do it, probably no one can ;)

Are those MDIO errors signs of serious problems? Hopefully not, with these OC settings.


Nope. They just tell you there was no file to resume computation from, because you didn't stop & restart BOINC during the WU.

MrS
____________
Scanning for our furry friends since Jan 2002

X1900AIW
Joined: 12 Sep 08
Posts: 74
Credit: 23,566,124
RAC: 0
Message 5463 - Posted: 10 Jan 2009 | 18:42:41 UTC - in response to Message 5461.

Are those MDIO errors signs of serious problems? Hopefully not, with these OC settings.

Nope. They just tell you there was no file to resume computation from, because you didn't stop & restart BOINC during the WU.


Ah, good to know, thanks !

Scott Brown
Joined: 21 Oct 08
Posts: 144
Credit: 2,973,555
RAC: 0
Message 5495 - Posted: 11 Jan 2009 | 14:49:40 UTC - in response to Message 5384.


9600GSO

3232> 85ms to 89ms
2435> 111ms to 116ms
1888> 102ms to 115ms



Update for the 9600 GSO

29xx workunit > 175ms

Also, since this one took about 4 hours longer than the 3232-credit units, it looks like these might be 'off' in a similar manner to the 18xx-credit units.

schizo1988
Joined: 16 Dec 08
Posts: 16
Credit: 10,644,256
RAC: 0
Message 6419 - Posted: 5 Feb 2009 | 3:10:22 UTC - in response to Message 5461.

I was having the same problem and I found an app put out by EVGA, simply called EVGA_Precision_1.40.exe, that I had read works on almost any brand of card.

I have a BFG card and it seems to be working for me. By that I mean I just started using it today and don't yet understand how the various changes affect my run times; GPU overclocking is new territory for me (CPUs yes, GPUs no). It seems really simple to use: it has four sliders, for Core Clock, Shader Clock, Memory Clock, and Fan Speed. I was glad to see the last one, as my fan was stuck on Auto and never moved; it was always 40% no matter the conditions, and now I can increase it as needed. I'm very familiar with the danger of overheating when overclocking CPUs, and I assume the same holds for GPUs; I can't afford to risk hurting my GTX 260.

Hope this helps you

STE\/E
Joined: 18 Sep 08
Posts: 368
Credit: 3,257,972,792
RAC: 50,625,673
Message 6434 - Posted: 5 Feb 2009 | 13:09:43 UTC - in response to Message 6419.

I was having the same problem and I found an app put out by EVGA, simply called EVGA_Precision_1.40.exe, that I had read works on almost any brand of card.

I have a BFG card and it seems to be working for me. By that I mean I just started using it today and don't yet understand how the various changes affect my run times; GPU overclocking is new territory for me (CPUs yes, GPUs no). It seems really simple to use: it has four sliders, for Core Clock, Shader Clock, Memory Clock, and Fan Speed. I was glad to see the last one, as my fan was stuck on Auto and never moved; it was always 40% no matter the conditions, and now I can increase it as needed. I'm very familiar with the danger of overheating when overclocking CPUs, and I assume the same holds for GPUs; I can't afford to risk hurting my GTX 260.

Hope this helps you


EVGA Precision only works in Windows, though, and it only overclocks one core of a GTX 295. For now I'm using EVGA Precision to control the fan speed and the Gainward EXPERTool to control the second core of the 295.

schizo1988
Joined: 16 Dec 08
Posts: 16
Credit: 10,644,256
RAC: 0
Message 6442 - Posted: 5 Feb 2009 | 16:54:42 UTC - in response to Message 6434.

I didn't know about the program only controlling one core of a GTX 295, because I have yet to work up the nerve to buy one. I am looking to build a new system but am going nuts trying to figure out what hardware to purchase. Its main purpose will be crunching BOINC apps and a lot of video conversions, both of which can benefit from a great GPU, but I am going to be putting together a whole system: case, power supply, video card, motherboard, RAM, processor; the list seems endless.

I am hoping that during the time I spend working up the nerve to lay out what for me will be a small fortune, the prices on video cards at least will stabilize. I saw a BFG GTX 280 OC for $365 CAN today and the 295 is around $660, but I think the 295's, and now the 285's, will push the 280's price down even further very soon.

The problem is I don't game at all; it is something I have never been interested in, and while this will not only date me, it's also a little embarrassing to say the last game I played was something like Frogger or Pac-Man back in university. I was even considering getting dual 295's, as I can probably scrape up the cash, but can I justify that kind of outlay just to increase my RAC and hopefully do some worthwhile science in the process?

J.D.
Joined: 2 Jan 09
Posts: 40
Credit: 16,762,688
RAC: 0
Message 6444 - Posted: 5 Feb 2009 | 17:49:52 UTC - in response to Message 6442.

The problem is I don't game at all; it is something I have never been interested in, and while this will not only date me, it's also a little embarrassing to say the last game I played was something like Frogger or Pac-Man back in university.


Frogger!
Wasn't there a 3D remake of that? :-)


I was even considering getting dual 295's, as I can probably scrape up the cash, but can I justify that kind of outlay just to increase my RAC and hopefully do some worthwhile science in the process?


The GTX 295 offers very good performance per dollar because it's two-in-one. You can always buy one now, and get the second later. Get at least a 750 or 850 Watt power supply up front so in the future you can simply add the second card without additional PSU swapping.

Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 6445 - Posted: 5 Feb 2009 | 17:52:33 UTC

If this is going to be the rig for a few years ... get the best MB with 3 PCIe slots. Then the best and fastest CPU and memory you can afford ... if need be, skimp on the memory as that can be replaced more cheaply than the CPU ... the high-quality MB for the same reason.

Get the best GPU you can afford with what is left. You have expansion slots and can slowly add GPUs as money comes in later on ... so, for a start let's say you get a 9800GT for $100 ... it does about 5K per day ... a year from now you get a 295 ... and now you are well above 15K a day; add another 295 ... then replace the 9800GT with a third ... by then maybe you will be in the market for a new PC ... but now you have a small farm of GPUs to start ...

Rinse ... repeat ...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6451 - Posted: 5 Feb 2009 | 19:32:09 UTC - in response to Message 6445.

Then the best and fastest CPU and memory you can afford ...


-> Then the best and fastest CPU and memory you want to afford ...

;)

MrS
____________
Scanning for our furry friends since Jan 2002

[AF>Libristes] Dudumomo
Joined: 30 Jan 09
Posts: 45
Credit: 425,620,748
RAC: 0
Message 6460 - Posted: 6 Feb 2009 | 8:05:36 UTC
Last modified: 6 Feb 2009 | 8:08:33 UTC

h

koubi
Joined: 15 Sep 08
Posts: 2
Credit: 4,315,885
RAC: 0
Message 6463 - Posted: 6 Feb 2009 | 8:16:29 UTC

Hello everybody, I would like your advice on my overclocking. Is it good?

GeForce GTX 260 216sp 55nm (original clocks: 576/1242/999)

now overclocked to: (GPU: 756 / shader: 1512 / memory: 1188)
temp idle: 55°C
temp at full load with FurMark in extreme mode: 81°C


Task ID 282813
Name hG13938-SH2_US_5-3-10-SH2_US_51910000_1
Workunit 210038
Created 6 Feb 2009 0:48:21 UTC
Sent 6 Feb 2009 0:49:32 UTC
Received 6 Feb 2009 7:39:45 UTC
Server state Over
Outcome Success
Client state Done
Exit status 0 (0x0)
Computer ID 22576
Report deadline 10 Feb 2009 0:49:32 UTC
CPU time 2416.602
stderr out

<core_client_version>6.4.5</core_client_version>
<![CDATA[
<stderr_txt>
# Using CUDA device 0
# Device 0: "GeForce GTX 260"
# Clock rate: 1512000 kilohertz
# Number of multiprocessors: 27
# Number of cores: 216
MDIO ERROR: cannot open file "restart.coor"
# Time per step: 31.006 ms
# Approximate elapsed time for entire WU: 15503.221 s
called boinc_finish

</stderr_txt>
]]>

Validate state Valid
Claimed credit 2478.98611111111
Granted credit 2478.98611111111
application version 6.59
***********************************
Task ID 281496
Name Fm24458-GRA1-4-5-acemd_0
Workunit 210000
Created 5 Feb 2009 14:10:57 UTC
Sent 5 Feb 2009 19:55:55 UTC
Received 6 Feb 2009 7:23:43 UTC
Server state Over
Outcome Success
Client state Done
Exit status 0 (0x0)
Computer ID 22576
Report deadline 9 Feb 2009 19:55:55 UTC
CPU time 2636.021
stderr out

<core_client_version>6.4.5</core_client_version>
<![CDATA[
<stderr_txt>
# Using CUDA device 0
# Device 0: "GeForce GTX 260"
# Clock rate: 1512000 kilohertz
# Number of multiprocessors: 27
# Number of cores: 216
MDIO ERROR: cannot open file "restart.coor"
# Using CUDA device 0
# Device 0: "GeForce GTX 260"
# Clock rate: 1512000 kilohertz
# Number of multiprocessors: 27
# Number of cores: 216
# Using CUDA device 0
# Device 0: "GeForce GTX 260"
# Clock rate: 1512000 kilohertz
# Number of multiprocessors: 27
# Number of cores: 216
# Time per step: 24.897 ms
# Approximate elapsed time for entire WU: 21162.294 s
called boinc_finish

</stderr_txt>
]]>

Validate state Valid
Claimed credit 3215.36111111111
Granted credit 3215.36111111111
application version 6.59

Is that a good job for a GTX260?
____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6464 - Posted: 6 Feb 2009 | 8:29:59 UTC - in response to Message 6463.

Well, it's certainly a high OC, but not an extreme one. And you finished one WU. But is it good? If it's stable then "yes", otherwise "no". How did you test stability?

MrS
____________
Scanning for our furry friends since Jan 2002

koubi
Joined: 15 Sep 08
Posts: 2
Credit: 4,315,885
RAC: 0
Message 6465 - Posted: 6 Feb 2009 | 8:55:55 UTC - in response to Message 6464.

How did you test stability?


- Stressed the GPU for 48 hours with FurMark in "extreme mode" (1280x1024) with 16x anisotropic filtering (average 17 fps); max temp 81°C (GPU crashes at 83°C)
- 6 hours using 3DMark 2006 (74°C)

____________

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6474 - Posted: 6 Feb 2009 | 19:38:53 UTC - in response to Message 6465.

So the answer is a big fat "yes"!

MrS
____________
Scanning for our furry friends since Jan 2002

Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 6479 - Posted: 6 Feb 2009 | 22:13:30 UTC - in response to Message 6474.

So the answer is a big fat "yes"!


I think the answer is maybe, and that's final!

Tixx
Joined: 15 Jan 09
Posts: 7
Credit: 1,766,054
RAC: 0
Message 6911 - Posted: 23 Feb 2009 | 4:59:54 UTC - in response to Message 6479.

hello all,

I've tried OCing my 9800GTX card. Got it up about 15%, and it's stable everywhere else, but GPUGrid is not liking it very much. Temps at full load are around 60 deg.
I can only complete maybe 1 out of 5 WUs issued by GPUGrid. Any idea why this might be the case? As I said, the card is perfectly stable in other apps.

Cheers

UL1
Joined: 16 Sep 07
Posts: 56
Credit: 35,013,195
RAC: 0
Message 6922 - Posted: 23 Feb 2009 | 10:06:23 UTC - in response to Message 6911.

Had the same experience as you: my card errored out each & every WU on GG, and when I switched to SETI it ran without any probs...
If you want to continue to crunch GPUGrid WUs you have to lower your OCing until you get no more computation errors...

STE\/E
Joined: 18 Sep 08
Posts: 368
Credit: 3,257,972,792
RAC: 50,625,673
Message 6923 - Posted: 23 Feb 2009 | 11:11:13 UTC - in response to Message 6922.
Last modified: 23 Feb 2009 | 12:06:54 UTC

Had the same experience as you: my card errored out each & every WU on GG, and when I switched to SETI it ran without any probs...
If you want to continue to crunch GPUGrid WUs you have to lower your OCing until you get no more computation errors...


I believe something has changed in the applications myself. I have an 8800GT OC that was running the WUs just fine, then a few days ago it started erroring out every WU after 3 seconds. Like other people, I can run the SETI GPU WUs just fine though.

I've even underclocked the 8800GT OC and it still errors out the WUs, so lowering your clock speed isn't necessarily a cure for the errors; at least it wasn't in my case.

The 8800GT OC runs perfectly fine for everything else at its stock or the overclocked speed it came with, except for running the GPUGrid WUs, so like I said, I think it's something in the applications that's changed ... ???

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 6930 - Posted: 23 Feb 2009 | 13:14:28 UTC - in response to Message 6911.

Tixx, how do you determine that it's stable everywhere else?

MrS
____________
Scanning for our furry friends since Jan 2002

Tixx
Joined: 15 Jan 09
Posts: 7
Credit: 1,766,054
RAC: 0
Message 6949 - Posted: 23 Feb 2009 | 22:43:57 UTC - in response to Message 6930.

I've run benchmarks in 3DMark, stress-tested with SiSoft Sandra, several games, and SETI WUs.

They are all fine.

Also, like I said, 1 out of every 5 GG WUs will calculate fine; the rest error out.

Strange.

STE\/E
Joined: 18 Sep 08
Posts: 368
Credit: 3,257,972,792
RAC: 50,625,673
Message 6950 - Posted: 23 Feb 2009 | 23:44:29 UTC - in response to Message 6949.
Last modified: 23 Feb 2009 | 23:45:41 UTC

I've run benchmarks in 3DMark, stress-tested with SiSoft Sandra, several games, and SETI WUs.

They are all fine.

Also, like I said, 1 out of every 5 GG WUs will calculate fine; the rest error out.

Strange.


I get the same thing with my 8800GT OC: every so often it will actually run & finish a WU; the rest error out after just 3 seconds ...

naja002
Joined: 25 Sep 08
Posts: 111
Credit: 10,352,599
RAC: 0
Message 6954 - Posted: 24 Feb 2009 | 1:31:07 UTC - in response to Message 6923.

Had the same experience like you: my card errored out each & every WU on GG and when I switched to SETI it ran without any probs...
If you want to continue to crunch GPUGrid-WUs you have to lower your OCing until you get no more computation errors...


I believe something has changed in the Application's myself. I have a 8800GT OC that was running the WU's just fine then a few days ago started erring out every WU after 3 Seconds. Like other people I can run the SETI GPU WU's just fine though.

I've even under clocked the 8800GT OC & it still errors out the WU's, so lowering your clock speed isn't necessarily a cure for the errors, at least it wasn't in my case.

The 8800GT OC runs perfectly fine though for everything else @ it's Stock or the Overclocked Speed it came with except for running the GPUGrid WU's so like I said I think it's something in the Application's that's changed ... ???



I must say that I am seeing the same basic effect with my Asus 8800GT. I run 3x 8800GS and 1x 8800GT. The GS's are running fine as usual. The GT collected dust for several months, but is now back up and running--and kicking out one compute error after another. This particular card has always been the biggest PITA of the 4 (even on F@H). I just lowered my OC a bit once again, but I expect it to start throwing errors again... maybe not, though. I thought it was the card and/or OC... however, after reading this, I'm wondering if there is an 8800GT issue. It's a G92 card.....

Pwrguru
Joined: 26 Nov 08
Posts: 5
Credit: 50,514,446
RAC: 0
Message 6956 - Posted: 24 Feb 2009 | 1:49:42 UTC - in response to Message 6954.

I must say that I am seeing the same basic effect with my Asus 8800GT. I run 3x 8800GS and 1x 8800GT. The GS's are running fine as usual. The GT collected dust for several months, but is now back up and running--and kicking out one compute error after another. This particular card has always been the biggest PITA of the 4 (even on F@H). I just lowered my OC a bit once again, but I expect it to start throwing errors again... maybe not, though. I thought it was the card and/or OC... however, after reading this, I'm wondering if there is an 8800GT issue. It's a G92 card.....

I run 5 8800GT's (all OC'ed) from three different manufacturers and only extremely rarely have a work unit with a problem... I am still running 6.5.0 on all my machines, so maybe the problem is with the 6.6.xx application............

Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 6960 - Posted: 24 Feb 2009 | 5:01:34 UTC

Some programs are more sensitive to errors than others; aside from occasional errors, I have been running for two months with very few problems. The system may pass all tests, yet not run a specific application. If you bring down the OC and the application begins to run without errors, then regardless of other tests the OC was at fault.

Another cause can be the amount of memory on the card.

A third cause can be other events on the system that the system as a whole does not react well to and causes tasks to error out...

There is also a slight amount of evidence that some of the tasks issued here may not run well or at all on the lower end cards.

At SaH one of the error modes is that once tasks fail, all will fail until the system is rebooted. There are dozens of reasons that this can happen with the simplest being that the API does not properly initialize the GPU after certain errors so that the error on one task will contaminate all other tasks started.

STE\/E
Joined: 18 Sep 08
Posts: 368
Credit: 3,257,972,792
RAC: 50,625,673
Message 6964 - Posted: 24 Feb 2009 | 8:59:36 UTC

I run 5 8800GT's (all OC'ed) from three different manufacturers and only extremely rarely have a work unit with a problem... I am still running 6.5.0 on all my machines, so maybe the problem is with the 6.6.xx application


I'm running the 6.5.0 client too, & not the 6.6.xx client, which I think is what you meant ... ??? If you meant the 6.62 application, I've already surmised that & brought that point up ...

@Paul D.B:
If you bring down the OC and the application begins to run without errors, then regardless of other tests the OC was at fault.


I've already stated that I've underclocked the 8800GT with the same results. I took the card all the way down to 550 core speed, which is 100 below its stock setting, & the WUs still erred out 3 seconds after starting. That consistent 3-second failure is what leads me to believe the application is at fault & not the card.

Another cause can be the amount of memory on the card


That's a distinct possibility, but up until a week or so ago the card ran the WUs just fine, so it must have had enough memory up until then if that's the reason. The card does have 512MB of memory & only uses about 79MB of it when running the WUs, so I don't think that's the reason.

A third cause can be other events on the system that the system as a whole does not react well to and causes tasks to error out


I've put the 8800GT OC in 4 different systems with the same results; those systems were already running GTX 260's just fine before I tried the 8800GT in them.

At SaH one of the error modes is that once tasks fail, all will fail until the system is rebooted. There are dozens of reasons that this can happen with the simplest being that the API does not properly initialize the GPU after certain errors so that the error on one task will contaminate all other tasks started


I've tried the re-boot trick already, but the WUs still error out after re-booting.

Basically I've retired the 8800GT from running GPU WUs either here or at the SETI project, because it won't run the WUs here anymore and is just creating a lot of erred-out WUs, & to me it's just a waste of electricity to run the SETI ones for the credit it gets from them.

I have 2 ATI 4870's coming, which should be here either today or tomorrow at the latest, & one of them is going to go into the box the 8800GT is in now and the other into a box that doesn't have a GPU-capable video card in it already.

The 8800GT will just become a spare card in the event I need a backup for one of the 200-series cards I have, in case one goes bad, until I can get a replacement. It won't be able to run the WUs here, but it can still be used for display-only purposes ...

naja002
Joined: 25 Sep 08
Posts: 111
Credit: 10,352,599
RAC: 0
Message 6965 - Posted: 24 Feb 2009 | 12:38:28 UTC - in response to Message 6954.


I must say that I am seeing the same basic effect with my Asus 8800GT. I run 3x 8800GS and 1x 8800GT. The GS's are running fine as usual. The GT collected dust for several months, but is now back up and running--and kicking out one compute error after another. This particular card has always been the biggest PITA of the 4 (even on F@H). I just lowered my OC a bit once again, but I expect it to start throwing errors again... maybe not, though. I thought it was the card and/or OC... however, after reading this, I'm wondering if there is an 8800GT issue. It's a G92 card.....


Don't get me wrong with my earlier post... I'm not discounting it being the card. This particular card has always been the most... uh... "sensitive". I like Asus MBs, but I won't be buying any more of their vid cards. I can set the OC on this card and it will run fine... then it starts throwing errors left and right for no reason. Lowering the OC usually helps, but really just starts the cycle all over again. Then eventually it will OC back up... just to start the eventual spiral back downward. This isn't the only MB/system that it's been in, but it seems to be starting once again. And you're right, Paul... there are many possible reasons why this can happen. It's simply that when I read the previous post... it seemed very familiar... very familiar. So, if there's any kind of "GT" issue with the app, WUs, etc., it would be nice to know.

Paul D. Buck
Joined: 9 Jun 08
Posts: 1050
Credit: 37,321,185
RAC: 0
Message 6968 - Posted: 24 Feb 2009 | 16:51:18 UTC

Just trying to brainstorm ...

I hate it when we cannot find reasons for things ... like why one of my computers all of a sudden seems to want to run the hard drive all the time to the point where nothing gets done ...

Indexing is off, computer is only used for BOINC ... no AV installed ...

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 7047 - Posted: 28 Feb 2009 | 10:42:22 UTC

PoorBoy,

you're seeing a different issue. If the WUs always fail after 3 s, that's very systematic and not one of the usual "too much OC" cases. I don't know what it may be, though.

Naja,

as far as I can remember, the (newer?) 8800GS also uses G92 chips and is clocked quite a bit lower than an 8800GT. So if the initial clock speed isn't pushed as hard, you'll get more OC headroom. I'm sure there's no 8800GT issue, as many of these cards run just fine. It's just that there are many of them out there, so if x% of all cards fail, then it's quite probable that y% of those will be 8800GTs, with y being some surprisingly high number.

Tixx,

Paul is absolutely right about different applications reacting / failing in different ways.
When you run GPU-Grid you crunch numbers for 12h or longer, calculating ~0.8 million time steps overall. Each step uses the results of the previous one, and the simulated atoms are coupled to each other via chemical bonds. So if you get an error in some calculation, it will be passed on to the next iteration, and either be dampened by the (correct) force calculations or be amplified and trigger an error-detection mechanism. Errors are critical here.

Now consider what happens when you run 3D Mark. If you push the clock speed too high it will likely crash; maybe you can see some artefacts prior to this point. But what happens below this threshold clock? What happens when a calculation in a pixel shader is wrong? Well.. one out of several million pixels has a wrong color for one frame. Will 3D Mark detect this, or are you going to see it? I don't think so. That's why running games and 3D Mark successfully cannot guarantee that your card is still calculating correctly. I don't know about Sandra, but generally I never found such tools to be useful here. And seti@gpu is less stressful for the card than GPU-Grid, and the WUs are shorter, reducing the "time for failures" per WU. Additionally, I think their calculations are largely independent of each other, so errors won't be passed along like in GPU-Grid.

It could also be that GPU-Grid uses different functions of the chip, some of which may be more timing-critical than others. E.g. an addition may actually need 0.5 clocks, whereas a multiplication may need 0.95 clocks. So if all you did was run adds, you could OC by 100% before you saw any errors, but as soon as you put a single mul in, you'd get errors. [In a real chip the mul operation would probably be split into more pipeline steps, so that the clock speed limit for operations would be comparable. Well, in a fast chip ;) ]
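
A toy illustration of why long, coupled runs are so much less forgiving than independent frames (the error rate is assumed purely for illustration, not GPUGrid's actual figure):

err_prob = 1e-6   # say one bad result per million operations
steps = 800_000   # ~0.8M dependent time steps in one WU
frames = 10_000   # independent frames in a benchmark run

# Chance that at least one step of the coupled simulation is hit; once it
# is, the error propagates through every subsequent step:
print(1 - (1 - err_prob) ** steps)  # ~0.55: most WUs would be affected

# The same rate across independent frames just means ~0.01 expected glitched
# frames, each invisible and gone in 1/60th of a second:
print(err_prob * frames)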

BTW: 6.5.0 and 6.6.x are the BOINC-versions.. they're not crunching, just launching the apps. I'm quite sure these are not causing your problems.

MrS
____________
Scanning for our furry friends since Jan 2002

Jeremy
Joined: 15 Feb 09
Posts: 55
Credit: 3,542,733
RAC: 0
Message 7109 - Posted: 2 Mar 2009 | 18:52:13 UTC - in response to Message 7047.

ATItool is a small free app that's very useful for testing video overclocks. It contains a "scan for artifacts" function that can catch things the naked eye cannot. My usual arsenal of apps for testing the stability of my system is:

Prime95 - 64 bit, multi core version. Run on all cores using Small FFTs, Round Off Checking enabled
FurMark - Stability Test, Xtreme Burning Mode, non-fullscreen, set resolution to match desktop resolution and minimize to get it out of the way
ATItool - set to "scan for artifacts"
SpeedFan - temps calibrated for accuracy

Prime + Fur will put more heat and load into your system than almost anything else. The Round Off Checking in Prime95 will check for CPU computation errors, the Scan for Artifacts tool in ATItool will check for GPU computation errors. Let it run for a few hours. If no errors present, your overclock isn't causing issues. If you also have a system memory/FSB overclock going, run Prime95 on "Blend" to test for memory errors. Typically, I'll find the max stable overclock, then reduce clocks one step down from that level to ensure stability.

Happy overclocking!

uBronan
Joined: 1 Feb 09
Posts: 139
Credit: 575,023
RAC: 0
Message 7233 - Posted: 6 Mar 2009 | 10:08:20 UTC
Last modified: 6 Mar 2009 | 10:09:14 UTC

Hmm, lol, I am not really a hero in this area called OC.
But I finally decided to OC my EVGA 9600 GT
and found it running nicely at 725/1800/1100 real (2200).
Now let's wait and see if it really gains me some speed ;)
Until now most units did:
2478 / 139-143 ms
3718 / 117-131 ms
3848 / 187-206 ms

Jeremy
Joined: 15 Feb 09
Posts: 55
Credit: 3,542,733
RAC: 0
Message 7520 - Posted: 16 Mar 2009 | 17:00:22 UTC - in response to Message 7233.

UPDATE, just in case anybody was using the stability test I encouraged earlier in this thread: CPU testing with Prime95 small FFTs is NOT sufficient unless you let it run for around 40 hours.

Instead, I'm now using a program called "Intel Burn Test". It'll tell you if your CPU is operating within spec in 30 minutes.
