
Message boards : Graphics cards (GPUs) : What card?

HTH
Joined: 1 Nov 07 | Posts: 38 | Credit: 6,365,573 | RAC: 0
Message 1888 - Posted: 29 Aug 2008 | 9:20:56 UTC

Hi!

a) What is the cheapest possible graphics card for crunching PS3grid.net workunits?
b) What is the best/fastest possible graphics card for crunching PS3grid.net workunits?

Thanks!
____________

Profile koschi
Joined: 14 Aug 08 | Posts: 124 | Credit: 792,979,198 | RAC: 11,592
Message 1889 - Posted: 29 Aug 2008 | 9:39:41 UTC - in response to Message 1888.

Hi!

a) What is the cheapest possible graphics card for crunching PS3grid.net workunits?
b) What is the best/fastest possible graphics card for crunching PS3grid.net workunits?

Thanks!


a) GeForce 8400GS - but better to run it on a single-core PC, as otherwise the other units won't finish before the deadline because the card is sooo slow. It has only a few shaders at a low clock speed.
b) GeForce GTX280 - if you can afford it

HTH
Joined: 1 Nov 07 | Posts: 38 | Credit: 6,365,573 | RAC: 0
Message 1890 - Posted: 29 Aug 2008 | 9:54:09 UTC - in response to Message 1889.

a) GeForce 8400GS - but better to run it on a single-core PC, as otherwise the other units won't finish before the deadline because the card is sooo slow. It has only a few shaders at a low clock speed.
b) GeForce GTX280 - if you can afford it


Thanks.

Ok. c) What is the best possible (not too slow, not too expensive) graphics card for crunching PS3grid.net workunits?

Henri.
____________

Profile GDF
Volunteer moderator, Project administrator, Project developer, Project tester, Volunteer developer, Volunteer tester, Project scientist
Joined: 14 Mar 07 | Posts: 1957 | Credit: 629,356 | RAC: 0
Message 1894 - Posted: 29 Aug 2008 | 10:14:18 UTC - in response to Message 1890.

Ok. c) What is the best possible (not too slow, not too expensive) graphics card for crunching PS3grid.net workunits?


8800GT 512MB

gdf

HTH
Joined: 1 Nov 07 | Posts: 38 | Credit: 6,365,573 | RAC: 0
Message 1896 - Posted: 29 Aug 2008 | 10:24:36 UTC - in response to Message 1894.
Last modified: 29 Aug 2008 | 10:36:55 UTC


8800GT 512MB
gdf


Thanks! :)

So, is this the correct model?

Does this card support CUDA 2? Will it work fast enough in PCI Express 1.1? I only have a PCI Express 1.1 motherboard.

Where can I get the latest drivers for that card? I have no experience with NVIDIA cards (I have an ATI card at the moment).

Henri.
____________

TomaszPawel
Joined: 18 Aug 08 | Posts: 121 | Credit: 59,836,411 | RAC: 0
Message 1901 - Posted: 29 Aug 2008 | 12:00:39 UTC - in response to Message 1896.
Last modified: 29 Aug 2008 | 12:01:04 UTC


8800GT 512MB
gdf


Thanks! :)

So, is this the correct model?

Does this card support CUDA 2? Will it work fast enough in PCI Express 1.1? I only have a PCI Express 1.1 motherboard.

Where can I get the latest drivers for that card? I have no experience with NVIDIA cards (I have an ATI card at the moment).

Henri.


I recommend the 8800 GTS 512 - 128 processors....

It is PCI-E 2.0, but it should work without problems on PCI-E 1.0

drivers: http://www.nvidia.com/object/cuda_get.html

Profile Krunchin-Keith [USA]
Joined: 17 May 07 | Posts: 512 | Credit: 111,288,061 | RAC: 0
Message 1903 - Posted: 29 Aug 2008 | 12:17:24 UTC

I run 3 8800GTs, 1 in each computer. They work very well and I see not much slowdown with normal Windows operation. There is some, depending on what I'm running, but mostly not too bad. Some people have reported they can't even use their computer when it runs here. My two at work I use all day, heavily, with little noticeable slowdown, and they are both P4-HT, running full BOINC using both CPU and GPU. These were well priced for me, not too expensive like the high-end cards. Mine are XFX brand and are single slot wide, an important thing to consider as some computers, especially all of mine, cannot take double-wide cards without sacrificing another PCI slot, which I could not do as I have other boards and no empty slots to move them to. These are PCIe x16 2.0, but that is backward compatible with PCIe x16 1.1 slots. They worked fine in my PCIe x16 1.1 slots.

Mine came with a double life-time warranty. I guess that means when I die, I get to take it with me to the afterlife ;)

The number of stream processors (shown as cores) is important; fewer processors means the application takes longer.

512MB memory would be good.

There is 1 device supporting CUDA

Device 0: "GeForce 8800 GT" (640MHz version)
Major revision number: 1
Minor revision number: 1
Total amount of global memory: 536543232 bytes
Number of multiprocessors: 14
Number of cores: 112
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 16384 bytes
Total number of registers available per block: 8192
Warp size: 32
Maximum number of threads per block: 512
Maximum sizes of each dimension of a block: 512 x 512 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 1
Maximum memory pitch: 262144 bytes
Texture alignment: 256 bytes
Clock rate: 1.62 GHz
Concurrent copy and execution: Yes

Test PASSED

There is 1 device supporting CUDA

Device 0: "GeForce 8800 GT" (600MHz Version x 2)
Major revision number: 1
Minor revision number: 1
Total amount of global memory: 536543232 bytes
Number of multiprocessors: 14
Number of cores: 112
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 16384 bytes
Total number of registers available per block: 8192
Warp size: 32
Maximum number of threads per block: 512
Maximum sizes of each dimension of a block: 512 x 512 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 1
Maximum memory pitch: 262144 bytes
Texture alignment: 256 bytes
Clock rate: 1.51 GHz
Concurrent copy and execution: Yes

Test PASSED

See some of the other reports by users in other threads.

Links to downloads are on the front page. NVIDIA supports their product well on their website. Visit the CUDA Zone section for the CUDA drivers. See the FAQ section for a list of cards that are supported here.
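
For reference, a listing like the ones above can be reproduced with the CUDA runtime API. The following is a minimal sketch (not the official deviceQuery sample), and the "8 cores per multiprocessor" factor only holds for these compute 1.x parts:

// Minimal sketch: print a few of the fields shown in the listings above.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("There are %d devices supporting CUDA\n", count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: \"%s\"\n", dev, prop.name);
        printf("  Revision (compute capability): %d.%d\n", prop.major, prop.minor);
        printf("  Total global memory: %lu bytes\n", (unsigned long)prop.totalGlobalMem);
        printf("  Multiprocessors: %d (about %d cores on these chips)\n",
               prop.multiProcessorCount, prop.multiProcessorCount * 8);
        printf("  Clock rate: %.2f GHz\n", prop.clockRate / 1.0e6);  // clockRate is in kHz
    }
    return 0;
}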

Wolfram1
Joined: 24 Aug 08 | Posts: 45 | Credit: 3,431,862 | RAC: 0
Message 1905 - Posted: 29 Aug 2008 | 13:19:03 UTC - in response to Message 1901.
Last modified: 29 Aug 2008 | 13:19:31 UTC


8800GT 512MB
gdf


I recommend the 8800 GTS 512 - 128 processors....

It is PCI-E 2.0, but it should work without problems on PCI-E 1.0

drivers: http://www.nvidia.com/object/cuda_get.html


I think the 9800GTX+ 512MB is a good (or better) solution. You have to pay 10 euros more but you get more power, don't you? Or am I wrong?

TomaszPawel
Joined: 18 Aug 08 | Posts: 121 | Credit: 59,836,411 | RAC: 0
Message 1906 - Posted: 29 Aug 2008 | 13:30:51 UTC - in response to Message 1905.


8800GT 512MB
gdf


I recommend the 8800 GTS 512 - 128 processors....

It is PCI-E 2.0, but it should work without problems on PCI-E 1.0

drivers: http://www.nvidia.com/object/cuda_get.html


I think the 9800GTX+ 512MB is a good (or better) solution. You have to pay 10 euros more but you get more power, don't you? Or am I wrong?


To be honest, you can buy the 8800GTS 512 much cheaper than the 9800GTX+.

Both cards use the G92; the real difference between these two is that the 9800GTX+ has a smaller chip on 55nm while the 8800 is on 65nm, and the 9800GTX+ is slightly faster due to additional MHz on the core and memory. So in price/performance the 8800GTS is better, but thermally the 9800GTX+ wins.

HTH
Joined: 1 Nov 07 | Posts: 38 | Credit: 6,365,573 | RAC: 0
Message 1908 - Posted: 29 Aug 2008 | 14:09:44 UTC

Thanks again!

One more thing. I want the card to be as silent as possible. It would be very annoying to crunch ~24/7 if the GPU fan screams all the time. So, what exact make and model is quiet enough?

Henri.
____________

Profile MJH
Project administrator, Project developer, Project scientist
Joined: 12 Nov 07 | Posts: 696 | Credit: 27,266,655 | RAC: 0
Message 1909 - Posted: 29 Aug 2008 | 14:25:48 UTC - in response to Message 1908.

So, what exact make and model is quiet enough?


There do exist 8800-series cards with passive cooling systems (i.e. no fan), but do expect to pay a premium for these!

So long as the devices conform to Nvidia's reference designs, we'd expect them to be OK for GPUGRID.

MJH

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 1919 - Posted: 29 Aug 2008 | 19:04:53 UTC

Regarding noise I'll refer to the thread I just created.

MrS
____________
Scanning for our furry friends since Jan 2002

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 1921 - Posted: 29 Aug 2008 | 19:40:09 UTC

And let's get serious about the "most effective card" question.

Buying anything smaller than a G92-based card is not the way to go - they are not that much cheaper but much slower. I'd like to know how fast a GTX260 is "in real world", because it's considerably more expensive than the 9800GTX+, the most expensive G92 card, but has about the same maximum GFlops.

Let's consider these cards: 8800GT, 8800GTS 512, 9800GTX+ and GTX280. I collect current pricing for Germany and the GFlops from the wiki (here, here and here).

Doing that, the
- 8800GT has 504 GFlops for 110€ -> 4.58 GFlops/€
- 8800GTS 512 has 624 GFlops for 130€ -> 4.80 GFlops/€
- 9800GTX+ has 705 GFlops for 155€ -> 4.54 GFlops/€
- GTX280 has 933 GFlops for 350€ -> 2.66 GFlops/€

So the 8800GTS 512 seems to be the most efficient card of these. However, you have to take into account that you also need a PC to run the card in, and one CPU core. I'll use my main rig to give an example of what I mean by that.

On average my 9800GTX+ needs 44569s/WU with the 6.41 client. That means running 24/7 it earns 3850 credits/day, or 6265 with the 6.43 client. Since I sacrifice one CPU core I lose about 1000 credits/day (Q6600@3GHz running QMC). Therefore the net gain from running GPU-Grid is 2850 or 5265 credits/day. Assuming linear performance scaling with the GFlops rating, an 8800GTS 512 would earn 5545 credits/day, which is a net win of 4545 credits/day.

Therefore the 8800GTS 512 gives you 35 credits/day/€ and the 9800GTX+ 34 credits/day/€, so the 8800GTS 512 is the efficiency winner. However, are 700 credits/day worth a one-time investment of 25€ for you? Your choice.. I certainly made mine ;)

Of course you could always overclock either card.. but I don't think the software is stable enough for that yet. I'd rather have the additional speed guaranteed. And going with a 55 nm chip doesn't help much, but doesn't hurt either.

MrS
____________
Scanning for our furry friends since Jan 2002
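
For anyone who wants to redo this arithmetic, the sketch below reproduces it. The prices, GFlops figures and the 6265 credits/day reference point are the ones quoted in the post above; the linear GFlops scaling and the 1000 credits/day cost of the feeding CPU core are the assumptions stated there, not measured facts.

// Sketch of the price/performance arithmetic from the post above.
#include <cstdio>

struct Card { const char* name; double gflops; double price_eur; };

int main() {
    const Card cards[] = {
        {"8800GT",      504, 110},
        {"8800GTS 512", 624, 130},
        {"9800GTX+",    705, 155},
        {"GTX280",      933, 350},
    };
    const double ref_credits   = 6265;  // 9800GTX+ with the 6.43 client, running 24/7
    const double ref_gflops    = 705;
    const double cpu_core_cost = 1000;  // credits/day lost on the feeding CPU core

    for (const Card& c : cards) {
        double gross = ref_credits * c.gflops / ref_gflops;  // assumed linear scaling
        double net   = gross - cpu_core_cost;
        printf("%-12s %5.2f GFlops/EUR, ~%4.0f net credits/day, %4.1f credits/day/EUR\n",
               c.name, c.gflops / c.price_eur, net, net / c.price_eur);
    }
    return 0;
}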

Wolfram1
Joined: 24 Aug 08 | Posts: 45 | Credit: 3,431,862 | RAC: 0
Message 1924 - Posted: 29 Aug 2008 | 20:12:27 UTC - in response to Message 1921.

And let's get serious about the "most effective card" question.

MrS


What you wrote is very interesting. I also have a Q6600 overclocked to 3 GHz and will buy a 9800GTX+ tomorrow.

The quota of 1 WU per CPU per day seems very small to me. By your calculation it should be 2 WUs.

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 1925 - Posted: 29 Aug 2008 | 20:29:09 UTC

Yes, actually I have a hard time establishing a 2-day cache.. but the GPU has not run dry yet :)

MrS
____________
Scanning for our furry friends since Jan 2002

Profile [FVG] bax
Joined: 18 Jun 08 | Posts: 29 | Credit: 17,772,874 | RAC: 0
Message 1926 - Posted: 29 Aug 2008 | 20:57:18 UTC - in response to Message 1925.

Once upon a time... 2 weeks ago... we were all happy owners of:

# GeForce 8800 GTS - 320/640M - 96 shader units
# GeForce 8800 GTX - 768M - 128 shader units
# GeForce 8800 Ultra - 768M - 128 shader units

crunching the 6.25 application on a happy Linux OS....

Do you think it is possible to make us happy again in the future? Right now we can't help the project, but we want to!!!


sorry but... I was so happy 2 weeks ago :-))

Profile Stefan Ledwina
Joined: 16 Jul 07 | Posts: 464 | Credit: 240,957,518 | RAC: 4,566,173
Message 1927 - Posted: 29 Aug 2008 | 21:17:52 UTC - in response to Message 1921.
Last modified: 29 Aug 2008 | 21:18:13 UTC

...I'd like to know how fast a GTX260 is "in real world", because it's considerably more expensive than the 9800GTX+, the most expensive G92 card, but has about the same maximum GFlops.

...


Well, I was able to run a few tasks on my GTX 260 with an earlier app version in the first tests under Linux64, but I couldn't crunch more than one WU in a row because of driver problems, so I switched it to the Vista box...

But as for the speed comparison - my EVGA GTX 260 was as fast as my EVGA 9800 GTX SC (super clocked), actually a little bit slower!
____________

pixelicious.at - my little photoblog

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 1929 - Posted: 29 Aug 2008 | 21:40:15 UTC - in response to Message 1927.

But as for the speed comparison - my EVGA GTX 260 was as fast as my EVGA 9800 GTX SC (super clocked), actually a little bit slower!


Thx! So the architectural fine-tuning (more registers etc.) of the GT200 doesn't yield any benefits (yet) for GPU-Grid, and these cards thus have rather poor performance for the money.

If I put the numbers in for the GTX280 I get 2000 credits/day more than a 9800GTX+, for 200€ more. Not a terrible deal, but I wouldn't recommend it.

And I forgot the 9800GX2! 1 TFlops for 260€ -> 8900 credits/day, 1600 credits/day more than the 9800GTX+ (assuming 1000 cr/day for both CPU cores) for 100€ more. The downside of this card is that it needs a 6-pin and an 8-pin power plug, and aftermarket cooling solutions likely won't work due to the two-chip architecture.

MrS
____________
Scanning for our furry friends since Jan 2002

Thamir Ghaslan
Joined: 26 Aug 08 | Posts: 55 | Credit: 1,475,857 | RAC: 0
Message 1931 - Posted: 30 Aug 2008 | 5:15:49 UTC - in response to Message 1921.

And let's get serious about the "most effective card" question.

...

MrS


I just sold my 8800 GS and went for a 280. I don't regret the upgrade despite many complaining about the affordability!

Real-world benchmarks with Folding@home, PS3grid and Futuremark showed me a 3x gain since the upgrade. I sold my 8800 GS for a third of the price of the 280.

"Flops" are misleading; I think the number of stream processors plays a bigger role, and frankly, I was never a big fan of SLI.

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 1932 - Posted: 30 Aug 2008 | 9:17:36 UTC - in response to Message 1931.

"Flops" are misleading, I think the number of stream processors plays a bigger role, and frankly, I was never a big fan of SLIs.



Well.. no. Flops are calculated as "number of shaders" * "shader clock" * "instructions per clock per shader". The latter could be 2 (one MADD) or 3 (one MADD + one MUL), but it's constant for all G80/90/GT200 chips. So Flops are a much better performance measure than "number of shaders", because they also take the frequency into account.

And SLI.. yeah, just forget it for games. And for folding you'd have to disable it anyway.

MrS
____________
Scanning for our furry friends since Jan 2002
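
As a worked example of that formula (a sketch only - the shader counts and clocks are the published reference specs, and the factor 3 is the MADD+MUL dual issue mentioned above), this reproduces the peak GFlops numbers used earlier in the thread:

// Sketch of the peak-Flops formula: shaders * shader clock (GHz) * ops per clock.
#include <cstdio>

double peak_gflops(int shaders, double shader_clock_ghz, int flops_per_clock = 3) {
    return shaders * shader_clock_ghz * flops_per_clock;
}

int main() {
    printf("8800GT:      %.0f GFlops\n", peak_gflops(112, 1.500));  // 504
    printf("8800GTS 512: %.0f GFlops\n", peak_gflops(128, 1.625));  // 624
    printf("9800GTX+:    %.0f GFlops\n", peak_gflops(128, 1.836));  // 705
    printf("GTX280:      %.0f GFlops\n", peak_gflops(240, 1.296));  // 933
    return 0;
}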

Profile KyleFL
Joined: 28 Aug 08 | Posts: 33 | Credit: 786,046 | RAC: 0
Message 1933 - Posted: 30 Aug 2008 | 9:49:19 UTC - in response to Message 1932.
Last modified: 30 Aug 2008 | 9:50:54 UTC

Hello

I just got a 9800GT for 110€ (the same price as the 8800GT, and the same G92 chip & clock speed).
Running time for one WU is ~21h on 6.43 on a Core2 Duo ~2.1GHz (E6300).
I have it running together with a SETI WU on the other core.
(Last night I stopped the SETI project to see if it has an impact on the GPUGRID WU time, but it doesn't seem so.)


Regards, Thorsten "KyleFL"

Profile Kokomiko
Joined: 18 Jul 08 | Posts: 190 | Credit: 24,093,690 | RAC: 0
Message 1937 - Posted: 30 Aug 2008 | 10:50:31 UTC

I've one 8800GT and one GTX280 running. The 8800GT needs 11:39h for one WU and I got 1987.41 credits; that's ca. 170 cr/h. The card works on an AMD Phenom 9850 BE. The GTX280 needs only 7:50h for one WU and I got 3232.06 for it; that's ca. 415 cr/h. Are these different WUs, or why are the credits higher?
____________

Temujin
Joined: 12 Jul 07 | Posts: 100 | Credit: 21,848,502 | RAC: 0
Message 1939 - Posted: 30 Aug 2008 | 11:01:50 UTC - in response to Message 1937.
Last modified: 30 Aug 2008 | 11:02:46 UTC

I've one 8800GT and one GTX280 running. The 8800GT needs 11:39h for one WU and I got 1987.41 credits; that's ca. 170 cr/h. The card works on an AMD Phenom 9850 BE. The GTX280 needs only 7:50h for one WU and I got 3232.06 for it; that's ca. 415 cr/h. Are these different WUs, or why are the credits higher?
It's the new credit award with app v6.42, your 8800GT will also start to get 3232/WU when it runs v6.42 (or higher) ;-)

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 1940 - Posted: 30 Aug 2008 | 11:11:34 UTC

Kokomiko,

both of your cards are rather fast. Are they overclocked? What are the shader clocks on both? Are you running Win or Linux? Using my values as reference a stock GTX280 would need ~9:20h.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Kokomiko
Joined: 18 Jul 08 | Posts: 190 | Credit: 24,093,690 | RAC: 0
Message 1941 - Posted: 30 Aug 2008 | 11:11:51 UTC - in response to Message 1939.

It's the new credit award with app v6.42, your 8800GT will also start to get 3232/WU when it runs v6.42 (or higher) ;-)


Both machines run v6.43 - what's wrong?
____________

Profile Kokomiko
Joined: 18 Jul 08 | Posts: 190 | Credit: 24,093,690 | RAC: 0
Message 1943 - Posted: 30 Aug 2008 | 11:17:23 UTC - in response to Message 1940.
Last modified: 30 Aug 2008 | 11:18:57 UTC

Kokomiko,

both of your cards are rather fast. Are they overclocked? What are the shader clocks on both? Are you running Win or Linux? Using my values as reference a stock GTX280 would need ~9:20h.

MrS


Both are not overclocked. The 8800GT (Gigabyte, 112 shaders, 1728 MHz) runs under Vista 64-bit on a Phenom 9850 BE at 2.5 GHz; the GTX280 (XFX, 240 shaders, 1296 MHz) also runs under Vista 64-bit, on a Phenom 9950 at 2.6 GHz.
____________

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 1944 - Posted: 30 Aug 2008 | 11:21:01 UTC

The last WU finished by your 8800GT was still using 6.41, hence the lower credits.

So your 8800GT is not overclocked by you, but is clocked way higher than the stock 1500 MHz. Interesting though, it's clearly faster than my 9800GTX+ with fewer shaders and a lower shader clock. Which driver are you using?

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Kokomiko
Joined: 18 Jul 08 | Posts: 190 | Credit: 24,093,690 | RAC: 0
Message 1945 - Posted: 30 Aug 2008 | 11:25:59 UTC - in response to Message 1944.

Which driver are you using?

MrS


The newest, 177.84.

____________

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 1946 - Posted: 30 Aug 2008 | 11:33:16 UTC

Same for me. Now the only differences are that you use Vista 64 versus my XP 32, and my Q6600 @ 3 GHz on a P35 board versus your Phenom. But this shouldn't have such strong effects.

MrS
____________
Scanning for our furry friends since Jan 2002

TomaszPawel
Joined: 18 Aug 08 | Posts: 121 | Credit: 59,836,411 | RAC: 0
Message 1952 - Posted: 30 Aug 2008 | 14:03:43 UTC - in response to Message 1944.

The last WU finished by your 8800GT was still using 6.41, hence the lower credits.

So your 8800GT is not overclocked by you, but is clocked way higher than the stock 1500 MHz. Interesting though, it's clearly faster than my 9800GTX+ with fewer shaders and a lower shader clock. Which driver are you using?

MrS


It is well known that some manufacturers make 3D cards with higher clocks than the reference design...

HTH
Joined: 1 Nov 07 | Posts: 38 | Credit: 6,365,573 | RAC: 0
Message 1954 - Posted: 30 Aug 2008 | 15:31:40 UTC - in response to Message 1921.

So the 8800GTS 512 seems to be the most efficient card of these. However, you have to take into account that you also need a PC to run the card in and one CPU core. I'll use my main rig to give an example of what I mean by that.


I do not understand. Can't I run two SETI@home WUs + GPUGRID all at once with my dual-core Pentium D 920 (and NVIDIA)?

Henri.
____________

TomaszPawel
Joined: 18 Aug 08 | Posts: 121 | Credit: 59,836,411 | RAC: 0
Message 1955 - Posted: 30 Aug 2008 | 16:25:46 UTC - in response to Message 1954.
Last modified: 30 Aug 2008 | 16:28:36 UTC

So the 8800GTS 512 seems to be the most efficient card of these. However, you have to take into account that you also need a PC to run the card in and one CPU core. I'll use my main rig to give an example of what I mean by that.


I do not understand. Can't I run two SETI@home WUs + GPUGRID all at once with my dual-core Pentium D 920 (and NVIDIA)?

Henri.


No, you cannot.

E.g. I have a Q6600 and an 8800GTS, and I crunch Rosetta@home and PS3Grid in TomaszPawelTeam :)

So Rosetta runs on 3 cores and PS3Grid runs on 1 core.

On your computer, SETI@home will run on 1 core and PS3Grid on the second core.

....

I know, it is strange to me too, and it shows that the GPU is very powerful, but it needs help from the CPU to crunch.... So one core is always given up to one GPU...
P.S.
If I had more $$$ I would buy a GTX280... but I don't, so I crunch on an 8800GTS 512... If you have the $$$ :) you should buy a GTX280 :)

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 1956 - Posted: 30 Aug 2008 | 16:36:40 UTC - in response to Message 1952.

It is well known that some manufacturers make 3D cards with higher clocks than the reference design...


Sure. The point is that he's about 30 min faster than me with 112 shaders at 1.73 GHz, whereas I have 128 shaders at 1.83 GHz. That's a difference worth investigating. My prime candidate would be the Vista / Vista 64 driver.

And, yes, currently you need one CPU core per GPU-WU. It's not doing any actual work, just keeping the GPU busy (sort of).

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Kokomiko
Joined: 18 Jul 08 | Posts: 190 | Credit: 24,093,690 | RAC: 0
Message 1957 - Posted: 30 Aug 2008 | 16:49:08 UTC - in response to Message 1956.

It is well known that some manufacturers make 3D cards with higher clocks than the reference design...


Sure. The point is that he's about 30 min faster than me with 112 shaders at 1.73 GHz, whereas I have 128 shaders at 1.83 GHz. That's a difference worth investigation. My prime candidate would be the Vista / Vista 64 driver.


My wife has an MSI 8800GT on a Phenom 9850 BE running under XP 32-bit; the shaders run at 1674 MHz and she needs 13:40h for one WU. That also seems faster than the stock frequency, but much slower than my card under Vista 64.

____________

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 1959 - Posted: 30 Aug 2008 | 17:31:48 UTC

GDF said going with the 2.0 CUDA compilers caused a 20% performance hit under Win XP, which may be improved by future drivers. The Vista driver is different from the one for XP. So it seems the Vista driver took less than a 20% performance hit.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile Krunchin-Keith [USA]
Joined: 17 May 07 | Posts: 512 | Credit: 111,288,061 | RAC: 0
Message 1960 - Posted: 30 Aug 2008 | 18:36:08 UTC - in response to Message 1932.

"Flops" are misleading, I think the number of stream processors plays a bigger role, and frankly, I was never a big fan of SLIs.



Well.. no. Flops are calculated as "number of shaders" * "shader clock" * "instructions per clock per shader". The latter could be 2 (one MADD) or 3 (one MADD + one MUL), but it's constant for all G80/90/GT200 chips. So Flops are a much better performance measure than "number of shaders", because they also take the frequency into account.

And SLI.. yeah, just forget it for games. And for folding you'd have to disable it anyway.

MrS

Remember the Flops formula is the best the GPU can do (peak), but very few real-world applications can issue the maximum instructions every cycle, unless you just have an application adding and multiplying useless numbers to maintain the maximum.

HTH
Joined: 1 Nov 07 | Posts: 38 | Credit: 6,365,573 | RAC: 0
Message 1963 - Posted: 30 Aug 2008 | 19:04:21 UTC

Thanks for the info once again, guys!

It's a bit sad that one CPU core is wasted even if the GPU is used. Can they change this someday?

Henri.
____________

Profile Stefan Ledwina
Joined: 16 Jul 07 | Posts: 464 | Credit: 240,957,518 | RAC: 4,566,173
Message 1965 - Posted: 30 Aug 2008 | 19:17:26 UTC
Last modified: 30 Aug 2008 | 19:18:21 UTC

It is not wasted. If I understood it right, the CPU is needed to feed the GPU with data...
It's the same with Folding@home on the GPU, but they only need about 5% of one core, and they are planning to distribute an application that only uses the GPU, without needing the CPU, in the future.

Don't know if this would also be possible with the application here on PS3GRID...
____________

pixelicious.at - my little photoblog
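
As an illustration of that point - a hedged sketch, not the actual PS3GRID/ACEMD code, with a hypothetical dummy_step kernel standing in for one simulation step - the host thread normally spin-waits while the GPU works, which is what occupies a full core; later CUDA versions can be asked to block instead of spin, which is roughly how other GPU clients get their CPU usage down:

// Illustration only: why a feeding CPU core shows up as 100% busy.
#include <cuda_runtime.h>

__global__ void dummy_step() { /* stand-in for one simulation step */ }

int main() {
    // Default behaviour: the waiting host thread polls (spins on) the GPU and
    // therefore shows up as one fully loaded core. Requesting blocking
    // synchronization (where the CUDA version supports it) yields the core:
    cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);

    for (int step = 0; step < 1000; ++step) {
        dummy_step<<<1, 1>>>();
        cudaDeviceSynchronize();  // with the flag above this sleeps instead of spinning
    }
    return 0;
}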

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 1967 - Posted: 30 Aug 2008 | 19:58:11 UTC - in response to Message 1960.
Last modified: 30 Aug 2008 | 19:59:59 UTC

Remember the flops formula is the best the GPU can do (peak), but very few real world applications can issue the max instructions every cycle


Yes, we're only calculating theoretical maximum Flops here, the real performance is going to be lower. This "lower" is basically the same factor for all G8x / G9x chips, but GT200 received a tweaked shader core and could therefore show higher real world GPU-Grid-performance with the same Flops rating. That's why I asked for the GTX260 :)

Edit: and regarding CPU usage, F@H also needed 100% of one core in GPU1. The current GPU2 client seems tremendously improved. Maybe whatever F@H did could also help GPU-Grid?

MrS
____________
Scanning for our furry friends since Jan 2002

Robinski
Joined: 2 Jun 08 | Posts: 25 | Credit: 0 | RAC: 0
Message 1969 - Posted: 1 Sep 2008 | 11:58:12 UTC - in response to Message 1967.

Remember the flops formula is the best the GPU can do (peak), but very few real world applications can issue the max instructions every cycle


Yes, we're only calculating theoretical maximum Flops here, the real performance is going to be lower. This "lower" is basically the same factor for all G8x / G9x chips, but GT200 received a tweaked shader core and could therefore show higher real world GPU-Grid-performance with the same Flops rating. That's why I asked for the GTX260 :)

Edit: and regarding CPU usage, F@H also needed 100% of one core in GPU1. The current GPU2 client seems tremendously improved. Maybe whatever F@H did could also help GPU-Grid?

MrS


I really hope improvements can be made in the future so more and more GPU computing will be available. I also hope more projects will try to build GPU applications so we are able to use the full hardware potential for calculations.

Profile KyleFL
Joined: 28 Aug 08 | Posts: 33 | Credit: 786,046 | RAC: 0
Message 2036 - Posted: 2 Sep 2008 | 20:01:30 UTC
Last modified: 2 Sep 2008 | 20:03:37 UTC

In every one of my finished workunits I find the following text:

<core_client_version>6.3.10</core_client_version>
<![CDATA[
<stderr_txt>
# Using CUDA device 0
# Device 0: "GeForce 9800 GT"
# Clock rate: 1512000 kilohertz
MDIO ERROR: cannot open file "restart.coor"
called boinc_finish
</stderr_txt>
]]>

What I'm wondering about is the MDIO error message. What exactly is meant by that?
The results are always valid and the credits are granted. But it seems that 21.5h of computing time on a GF9800GT is a little bit too slow, isn't it?

GPUGRID Role account
Joined: 15 Feb 07 | Posts: 134 | Credit: 1,349,535,983 | RAC: 0
Message 2037 - Posted: 2 Sep 2008 | 20:05:02 UTC - in response to Message 2036.

What I'm wondering about is the MDIO error message.


The restart.coor file that the error mentions is created when the processing of the work unit is suspended and stores the state of the simulation so that it may be restarted later.

MJH
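
As a generic sketch of that checkpoint/restart pattern (an illustration only, not the actual application code - the restart.coor name comes from the log above, but the payload layout and the helper names here are assumptions):

// Generic checkpoint/restart sketch: dump state periodically, resume if a file exists.
#include <fstream>
#include <vector>

struct State { long step = 0; std::vector<double> coords; };

static bool load_restart(State& s, const char* path = "restart.coor") {
    std::ifstream in(path, std::ios::binary);
    if (!in) return false;                       // first start: no checkpoint yet (the MDIO note)
    size_t n = 0;
    in.read(reinterpret_cast<char*>(&s.step), sizeof s.step);
    in.read(reinterpret_cast<char*>(&n), sizeof n);
    s.coords.resize(n);
    in.read(reinterpret_cast<char*>(s.coords.data()), n * sizeof(double));
    return bool(in);
}

static void save_restart(const State& s, const char* path = "restart.coor") {
    std::ofstream out(path, std::ios::binary | std::ios::trunc);
    size_t n = s.coords.size();
    out.write(reinterpret_cast<const char*>(&s.step), sizeof s.step);
    out.write(reinterpret_cast<const char*>(&n), sizeof n);
    out.write(reinterpret_cast<const char*>(s.coords.data()), n * sizeof(double));
}

int main() {
    State s;
    load_restart(s);                             // resume from the checkpoint if one exists
    for (; s.step < 100000; ++s.step) {
        // ... advance the simulation one step ...
        if (s.step % 10000 == 0) save_restart(s);  // in a BOINC app this would typically be
                                                   // gated by the client's checkpoint signal
    }
    save_restart(s);
    return 0;
}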

Profile koschi
Joined: 14 Aug 08 | Posts: 124 | Credit: 792,979,198 | RAC: 11,592
Message 2039 - Posted: 2 Sep 2008 | 20:13:10 UTC

I have the same card, but mine needs ~58,600 seconds for one work unit.

http://www.ps3grid.net/results.php?hostid=7528

The older ones at 61,000 seconds were done at a lower VRAM speed of only 500MHz, but I finally found the solution to force the highest PowerMizer setting.

The first thing is to check whether your card runs at its designed speed while crunching - get yourself a copy of GPU-Z to find out =)

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 2041 - Posted: 2 Sep 2008 | 20:20:49 UTC

I totally agree with koschi.

And @ koschi: so you crunched with only ~50% of the stock VRAM clock and got about 4% slower? That's good to know.. I suspected mem speed would have little effect, but it's good to have a hard number.

MrS
____________
Scanning for our furry friends since Jan 2002

Wolfram1
Joined: 24 Aug 08 | Posts: 45 | Credit: 3,431,862 | RAC: 0
Message 2042 - Posted: 2 Sep 2008 | 20:40:05 UTC - in response to Message 2036.

But it seems that 21.5h of computing time on a GF9800GT is a little bit too slow, isn't it?


Yes, I think so. I need 10.5 h with a GF9800GTX+.

Profile koschi
Joined: 14 Aug 08 | Posts: 124 | Credit: 792,979,198 | RAC: 11,592
Message 2043 - Posted: 2 Sep 2008 | 20:42:18 UTC
Last modified: 2 Sep 2008 | 20:48:57 UTC

Back in the old days (hmm, 2 weeks ago?) I once managed to force the card to PowerMizer level 3, but then the improvement was only from 48,000 to 47,000. Those were the days of the faster CUDA 1.x 6.25 app.

I can't guarantee that the recent decrease in crunching time is only the result of the VRAM now running at full speed, as I also updated from 177.68 to 177.70 at almost the same time.
So there might or might not be an impact from the graphics driver, but I doubt that, as no one else reported any improvements.

Following the Folding@home GPU2 thread in our team's forum, it seemed that the biggest speedup can be achieved by overclocking the shaders. Core and memory had only a small impact.

Should be the same here...

edit:

Well, there are some small differences between the 9800GT and the GTX+...
The GT is basically a relabeled 8800GT with the same clock speeds (shader +12MHz) and only 112 shader units. The GTX+ has 128 shaders at 1836MHz versus 1500/1512MHz on the x800 GT side.

But still, more than 20h is way too long and needs to be investigated ;-)

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 2044 - Posted: 2 Sep 2008 | 21:33:52 UTC

Yes, his 9800GT is way too slow. Either the software is horribly broken / disturbed or it's not running at stock clocks (-> GPU-Z).

@koschi: OK, I'll take the number with a grain of salt. And sure, the shader clock should be responsible for >95% of the speed on the hardware side. I already downclocked my core and this also reduced speed - not directly proportionally (as it should be in the case of the shaders), but much more than in your VRAM experiment.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile KyleFL
Joined: 28 Aug 08 | Posts: 33 | Credit: 786,046 | RAC: 0
Message 2047 - Posted: 2 Sep 2008 | 22:27:33 UTC

My 9800GT is running at stock speed:

600 / 900 / 1500 (looked it up in GPU-Z 0.2.7)
Driver is 177.83
PhysX is running OK

Shaders: 1512000 kilohertz is detected by the 6.43 application (it's in the text I posted - so the 1500MHz shader clock speed seems to be working)

Running time is always around 78026.66 s, like in that task:
http://www.gpugrid.net/result.php?resultid=49731


Very strange..


Thanks for your help, Kyle

Profile KyleFL
Joined: 28 Aug 08 | Posts: 33 | Credit: 786,046 | RAC: 0
Message 2393 - Posted: 16 Sep 2008 | 16:07:00 UTC - in response to Message 2047.

My 9800GT is running at stock speed:

600 / 900 / 1500 (looked it up in GPU-Z 0.2.7)
Driver is 177.83

Running time is always around 78026.66 s, like in that task:
http://www.gpugrid.net/result.php?resultid=49731


Update:

With the shaders overclocked @ 1750MHz, a WU took ~68000s (~19h) on my GF9800.
Everything about that card seemed to be normal. GPU-Z detected 112 shaders, and the memory bus and clock speed were OK, too.
I updated the video driver to 177.92, but crunching times didn't change.

Yesterday I kicked the 9800GT out and put in a GTX260.
It's a pity the 6.45 app was out on that day, so I couldn't test a WU on 6.43 with it.
With the 6.45 a WU takes ~10h - that seems to be OK - not as fast as some reported with the 6.43 (7-8h), but not such a big difference as my 9800GT showed.

It seems there wasn't anything wrong with my system; the graphics card itself was at fault.
The 9800GT was a Gainward Bliss 9800GT (no Golden Sample - just a normal standard card).


Happy Cruning... Regards, Thorsten

Profile koschi
Joined: 14 Aug 08 | Posts: 124 | Credit: 792,979,198 | RAC: 11,592
Message 2395 - Posted: 16 Sep 2008 | 16:40:18 UTC

I also have a Gainward Bliss 9800GT 512MB, but mine is ~20,000 seconds faster than yours was. No idea why yours was so slow...

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 2400 - Posted: 16 Sep 2008 | 18:04:40 UTC

Kyle, do you have another PC to put the 9800 in? If it's also slow there we can surely blame a "dodgy" card.

MrS
____________
Scanning for our furry friends since Jan 2002

Profile KyleFL
Joined: 28 Aug 08 | Posts: 33 | Credit: 786,046 | RAC: 0
Message 2406 - Posted: 16 Sep 2008 | 19:48:40 UTC
Last modified: 16 Sep 2008 | 19:58:07 UTC

No, I took them at their word on the 14-day return right and sent the 9800GT back.... :)
I have a GTX260 now - just wanted to let you know.


110€ for the 9800GT
210€ for the GTX260

and the Leadtek GTX260 is MUCH quieter than the Gainward 9800GT (but gets ~20° hotter - 73° vs ~53°).
The 9800GT was a lot louder than the 7800GT I had before - the GTX260 is better in that way, too, so I won't complain about the upgrade. Should have gone with the GTX260 from the beginning :)


Regards, KyleFL

HTH
Joined: 1 Nov 07 | Posts: 38 | Credit: 6,365,573 | RAC: 0
Message 4452 - Posted: 18 Dec 2008 | 7:53:44 UTC - in response to Message 1921.
Last modified: 18 Dec 2008 | 7:54:11 UTC

- GTX280 has 933 GFlops for 350€ -> 2.66 GFlops/€


A GTX280 for 350€ in April 2008?

Can I, soon after Christmas 2008, buy a passively cooled GTX280 for just 300€? Is it completely impossible to have a passively cooled (and stable) GTX280 at 100% GPU load 24/7?

Merry Christmas!

HTH.
____________

HTH
Joined: 1 Nov 07 | Posts: 38 | Credit: 6,365,573 | RAC: 0
Message 4487 - Posted: 18 Dec 2008 | 16:13:01 UTC - in response to Message 4452.


Can I, soon after Christmas 2008, buy a passively cooled GTX280 for just 300€? Is it completely impossible to have a passively cooled (and stable) GTX280 at 100% GPU load 24/7?


Maybe GeForce GTX 285. It should run cooler, shouldn't it? :)

Henri.
____________

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 4503 - Posted: 18 Dec 2008 | 18:32:58 UTC - in response to Message 4452.

A GTX280 for 350€ in April 2008?


I posted this on the 29th of August, so I'm pretty sure I was not referring to the price back in April, when this card was not yet released.

Can I, soon after Christmas 2008, buy a passively cooled GTX280 for just 300€? Is it completely impossible to have a passively cooled (and stable) GTX280 at 100% GPU load 24/7?


If you find that fairy, tell her I'll also take one of those.. but I'd like mine to be 100€. No, seriously, Kokomiko was right when he said you can totally forget about cooling a GTX 280 passively (under load and in a computer case).. that's why no one else commented.

You can easily handle 50W passively, but 70 - 80W gets painful, i.e. the card runs very hot and fails without sufficient case ventilation (which negates the benefits of running passive..). A GTX280 has 150+W power consumption.. do I need to say more? And, btw., 55 nm isn't going to help much.

MrS
____________
Scanning for our furry friends since Jan 2002

HTH
Joined: 1 Nov 07 | Posts: 38 | Credit: 6,365,573 | RAC: 0
Message 4504 - Posted: 18 Dec 2008 | 18:52:54 UTC - in response to Message 4503.
Last modified: 18 Dec 2008 | 18:54:09 UTC

A GTX280 has 150+W power consumption.. do I need to say more? And, btw., 55 nm isn't going to help much.


Ok. Are those fans too noisy on those cards? I don't want to buy a card that screams so much I never ever want to have a 100% GPU load again...

PS. Water cooling is out of the question. I don't want to mess with water and electricity.

Henri.
____________

Scott Brown
Joined: 21 Oct 08 | Posts: 144 | Credit: 2,973,555 | RAC: 0
Message 4505 - Posted: 18 Dec 2008 | 18:56:20 UTC - in response to Message 4504.
Last modified: 18 Dec 2008 | 18:56:55 UTC

Henri,

The ASUS cards have fairly quiet fans in my experience. My 9600 GSO has one of the big "glaciator" fans, and the noise increase over the power supply fan is negligible at most (I really can't tell a difference to be honest).

HTH
Joined: 1 Nov 07 | Posts: 38 | Credit: 6,365,573 | RAC: 0
Message 4509 - Posted: 18 Dec 2008 | 20:42:38 UTC - in response to Message 4505.

The ASUS cards have fairly quiet fans in my experience. My 9600 GSO has one of the big "glaciator" fans, and the noise increase over the power supply fan is negligible at most (I really can't tell a difference to be honest).


Ok. Thanks, Scott and everyone!

Henri.
____________

ExtraTerrestrial Apes
Volunteer moderator, Volunteer tester
Joined: 17 Aug 08 | Posts: 2705 | Credit: 1,311,122,549 | RAC: 0
Message 4580 - Posted: 19 Dec 2008 | 21:58:54 UTC
Last modified: 19 Dec 2008 | 21:59:40 UTC

Look here for further information about cooling. I suppose the stock GTX 260 cooler would still be too noisy for my ears, but any card with an Accelero S1 and 2 slow 120 mm fans is *cool*.

MrS
____________
Scanning for our furry friends since Jan 2002
