MJH (Project administrator, Project developer, Project scientist)
Hi,
With the arrival of Maxwell, we are now supporting 4 generations of hardware, and 6 separate builds of our application on 2 platforms.
This is starting to tax our resources, so some rationalisation is required. I therefore propose the following:
* Remove support for SM1.3
Geforce 200 series cards contribute rather less than 1% of our capacity and, furthermore, lack hardware features that we want to use in future development work.
* Discontinue CUDA 4.2
We need CUDA 6 to properly support the new Maxwell, so I'll be retiring the old CUDA 4.2 build. If you are still receiving version 4.2 WUs, please consider updating your driver to version 343 or above.
These changes will take effect from January 2015
Matt

Jim1348
Very good. I wish more projects would do that.
MJH:
I am in agreement with your proposal.
In fact, the 343+ drivers have dropped support for many of the older GPUs, such that new driver installs will not install drivers for them. In reaction to their announcement, a couple of months ago I replaced my aging GTS 240 with a 2nd GTX 660 Ti, so that I could continue to install the latest drivers in my multi-generational-GPU rig.
http://nvidia.custhelp.com/app/answers/detail/a_id/3473
This is good news indeed.
I think that crunching on pre-Kepler cards (GTX-2xx, GTX-4xx, GTX-5xx) is a waste of electricity.

MJH (Project administrator, Project developer, Project scientist)
RZ,
In the stats I posted a wee while ago, you'll see that Fermi-era cards still constitute 10-20% of our capacity. The lower-end 2.1 cards aren't much good for anything other than acemdshort, however.
http://www.gpugrid.net/forum_thread.php?id=3864
Matt
I think that crunching on pre-Kepler cards (GTX-2xx, GTX-4xx, GTX-5xx) is a waste of electricity.
RZ,
In the stats I posted a wee while ago, you'll see that Fermi-era cards still constitute 10-20% of our capacity. The lower-end 2.1 cards aren't much good for anything other than acemdshort, however.
http://www.gpugrid.net/forum_thread.php?id=3864
Matt
Matt,
I'm sorry for not being clear, I didn't mean that you should drop support for the Fermi-based cards.
I intended my message to encourage my fellow crunchers to upgrade their GPUs, as the Fermi-based cards have become inefficient; running them in the long term is *not* worth the cost anymore.
Zoltan
Hi, Matt:
I understand the reason for this rationalization. At present I process GPUGrid tasks with two NVIDIA GeForce GTX 650 Ti (1024MB) GPUs and driver 334.89. I have no intention of spending any more on my PCs: can you please let me know when you will discontinue providing work for this GPU?
Thanks,
John
MJH (Project administrator, Project developer, Project scientist)
John,
At present I process GPUGrid tasks with two NVIDIA GeForce GTX 650 Ti (1024MB) GPUs and driver 334.89.
Those cards are just fine, and will be useful for a long time to come. It's only the Geforce 200-series we need to wave goodbye to.
If you could update your driver to 343 sometime before the new year, that'd be appreciated. You'll get the newer CUDA 6.5 app, which will - eventually - be faster than the 6.0 one.
Matt

sis651
I'm crunching on an NV Optimus notebook. The latest repository drivers that run GPUGrid are 331.38, and those only do CUDA 4.2. Official drivers have problems with Optimus. It seems the time to switch to CPU projects is coming. Rosetta, probably...
skgiven (Volunteer moderator, Volunteer tester)
I've had a GTX470 sitting on the shelf for some time. It's still a good GPU, but for crunching it just uses too much power relative to the 600 and 700 ranges, never mind the 980 and 970.
Going by the official GFLOPS/Watt (Single Precision) you can see a big jump from Fermi to Kepler:
GTX GF/W (by series number)
980 28.0
780 15.9
680 15.85
580 6.48
480 5.38
GTX GF/W (by most efficient in series)
980 28.0
780Ti 20.2
690 18.74
560Ti 7.42
480 5.38
Didn’t include the 750Ti (21.8) because it’s a Maxwell.
In terms of running costs, the GTX980 is 3.77 times more efficient than the GTX560Ti (the most efficient 900 series GPU vs the most efficient 500 series GPU). The 980 is even 40% more efficient than a GTX780Ti.
For those with a budget the running cost of a GTX970 (24.1) is 3.24 times more efficient than a GTX560Ti, and the 750Ti (21.8) is 2.93 times more efficient than a GTX560Ti.
So it would be in the interest of most crunchers to stop using anything below the GTX600 range, and as Zoltan pointed out, worth encouraging. I suggest giving notice to drop support for 500's in ~6 months (March 2015).
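For anyone who wants to rerun these comparisons for their own cards, here is a minimal sketch that reproduces the ratios above from the quoted GFLOPS/W figures (only values already listed in this post; nothing new is measured):
```
// efficiency_ratio.cu - reproduce the GFLOPS/W ratios quoted above.
// The figures are the official single-precision GFLOPS/W values listed in this post.
#include <cstdio>

struct Card { const char* name; double gflops_per_watt; };

int main() {
    const Card cards[] = {
        {"GTX 980",    28.0},
        {"GTX 970",    24.1},
        {"GTX 750Ti",  21.8},
        {"GTX 780Ti",  20.2},
        {"GTX 690",    18.74},
        {"GTX 560Ti",   7.42},  // most efficient 500-series card (baseline)
        {"GTX 480",     5.38},
    };
    const double baseline = cards[5].gflops_per_watt;   // GTX 560Ti
    for (const Card& c : cards)
        printf("%-10s %6.2f GFLOPS/W  (%.2fx the GTX 560Ti)\n",
               c.name, c.gflops_per_watt, c.gflops_per_watt / baseline);
    return 0;
}
```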
____________
FAQ's
HOW TO:
- Opt out of Beta Tests
- Ask for Help
eXaPower
I agree with Skgiven and RZ.
Once GM204 Maxwell driver support fully integrates filtering technologies, molecular programs like PyMOL/JyMOL will be able to view fragments (atoms) and residues (amino acids) like never before. Maxwells are certainly helping/advancing science in many ways.
On a side note: mobile Maxwells were released, and have very curious core counts. A future GM206/desktop GTX960? The GTX970M is 1280 cores/10 SMM/3 GPC/80 TMU/48 ROP.
(A future GTX960?) The GTX980M (1536 cores/96 TMU/64 ROPs) is 12 SMM/3 GPC, rather than the 13 SMM/4 GPC of the desktop GTX970.
This is pure speculation on my part: I think NVidia is finding that the Maxwell GPC (4 SMM/32 TMU/16 ROPs) is unable to disable an SMM in a similar fashion to SMX, due to the 32-core subsets each having a separate warp scheduler, unlike the SMX monolith design where the SMX shares all cores, SFUs and LD/ST units between warp schedulers. A Kepler SMX has 8 separate dispatch units feeding one large crossbar, with separate issue for each of the four 32-core subsets, one issue for 32 SFUs and one issue for 32 LD/ST units. Maxwell breaks dispatch up into four pairs of two, each feeding a separate crossbar with separate issue to a 32-core subset plus issue for 8 SFUs and 8 LD/ST units (so an SMM has 4 issues for SFUs, 4 for LD/ST, and 4 for the four 32-core subsets).
Beyond
GTX GF/W (by most efficient in series)
980 28.0
780Ti 20.2
690 18.74
560Ti 7.42
480 5.38
Didn’t include the 750Ti (21.8) because it’s a Maxwell.
In terms of running costs, the GTX980 is 3.77 times more efficient than the GTX560Ti (the most efficient 900 series GPU vs the most efficient 500 series GPU). The 980 is even 40% more efficient than a GTX780Ti.
For those with a budget the running cost of a GTX970 (24.1) is 3.24 times more efficient than a GTX560Ti, and the 750Ti (21.8) is 2.93 times more efficient than a GTX560Ti.
Has anyone actually measured the power draw from various 970 and 980 cards while running GPUGrid? We know the 750Ti is running at about 60 watts. According to the Tom's Hardware article linked below, 970/980 power consumption while running CUDA tasks may be rather high. It seems we need to see some actual power draw figures for various cards while running GPUGRID tasks.
Tom's Hardware power consumption dissertation for the 970/980:
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-970-maxwell,3941-12.html
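For a software-side number to set against a wall meter, the NVML library can report board power draw while a task is running; a minimal sketch, assuming a driver recent enough to expose power readings for the card (link with -lnvidia-ml; GeForce support for this reading varies by model and driver):
```
// power_poll.cu - print the board power each GPU reports, once per second.
#include <cstdio>
#include <unistd.h>
#include <nvml.h>

int main() {
    if (nvmlInit() != NVML_SUCCESS) { fprintf(stderr, "NVML init failed\n"); return 1; }
    unsigned int count = 0;
    nvmlDeviceGetCount(&count);
    for (int sample = 0; sample < 10; ++sample) {      // ten one-second samples
        for (unsigned int i = 0; i < count; ++i) {
            nvmlDevice_t dev;
            nvmlDeviceGetHandleByIndex(i, &dev);
            char name[64];
            nvmlDeviceGetName(dev, name, sizeof(name));
            unsigned int mw = 0;                        // reported in milliwatts
            if (nvmlDeviceGetPowerUsage(dev, &mw) == NVML_SUCCESS)
                printf("GPU %u (%s): %.1f W\n", i, name, mw / 1000.0);
        }
        sleep(1);
    }
    nvmlShutdown();
    return 0;
}
```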
Jim1348
For those with a budget the running cost of a GTX970 (24.1) is 3.24 times more efficient than a GTX560Ti, and the 750Ti (21.8) is 2.93 times more efficient than a GTX560Ti.
So it would be in the interest of most crunchers to stop using anything below the GTX600 range, and as Zoltan pointed out, worth encouraging. I suggest giving notice to drop support for 500's in ~6 months (March 2015).
That is a very nice comparison, and I am sure quite useful for those who have not done the tests themselves. (It is no accident however that I now have six GTX 750 Ti's, which I picked up heading into the summer.)
However, my concern is not trying to tell people how to spend their money or force them into more efficient cards, useful though that would be for some purposes. (The operating costs are different in the U.S. than in Europe, for example.) My concern is compatibility and the effort that MJH and others must expend to keep the older cards operating. I have seen many projects where problems arise trying to keep a few crunchers happy with their present hardware, even though that detracts from the science of the overall effort. I think we should keep focused on why we are here.
biodoc
Has anyone actually measured the power draw from various 970 and 980 cards while running GPUGrid? We know the 750Ti is running at about 60 watts. According to the Tom's Hardware article linked below, 970/980 power consumption while running CUDA tasks may be rather high. It seems we need to see some actual power draw figures for various cards while running GPUGRID tasks.
Tom's Hardware power consumption dissertation for the 970/980:
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-980-970-maxwell,3941-12.html
I looked at power consumption of my 980 on F@H. Below are my data and also a link to the thread. I will do the same sort of test on GPUGrid this weekend.
"I have a Kill-a-Watt power meter at the wall on this 64-bit linux mint 17 LTS system:
Intel 3930K, Asus x79 sabertooth MB, Gigabyte GTX980 Nvidia reference design (Model #: GV-N980D5-4GD-B)
Power readings (SMP-10, project 6348, GPU core 17, project 9201)
Full load with 980@1252MHz (no overclock): TPF (1 min 38 sec), 387 watts total power
Full load with 980@1452MHz (+200 MHz overclock)): TPF (1 min 28 sec), 401 watts total power
GPU Folding on Pause (980@135 MHz): 263 watts total power
Both GPU and SMP folding on pause: 114 watts total power
980 from Idle to 1252MHz (no overclock): 124 watts
980 from Idle to 1452MHz (+200 MHz overclock): 138 watts
I'm not sure how much power the 980 is drawing at idle, but these numbers look pretty good to me."
https://foldingforum.org/viewtopic.php?f=38&t=26757&start=15
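The card-only figures in the quote are just the loaded readings minus the paused reading at the wall; a trivial check with the numbers above:
```
// wall_delta.cu - card-only draw inferred from the Kill-a-Watt readings quoted above.
#include <cstdio>
int main() {
    const double paused = 263.0;  // watts at the wall with GPU folding paused
    const double stock  = 387.0;  // watts at the wall, 980 @ 1252 MHz under load
    const double oc     = 401.0;  // watts at the wall, 980 @ 1452 MHz under load
    printf("980 @ 1252 MHz: ~%.0f W above its paused state\n", stock - paused);
    printf("980 @ 1452 MHz: ~%.0f W above its paused state\n", oc - paused);
    return 0;
}
```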
MJH (Project administrator, Project developer, Project scientist)
Bumping, since the first announcement had no effect on reducing the CUDA 4.2 app contribution.
Possibly it's crunchers who never read the forums...
TJ
Bumping, since first announcement had no effect on reducing 4.2 app contribution.
Possibly crunchers who never read the forums..
Hi Matt, you can use the "Notices" in BOINC Manager to do the announcement and perhaps repeat that every fortnight.
____________
Greetings from TJ

MJH (Project administrator, Project developer, Project scientist)
Hi Matt, you can use the "Notices" in BOINC Manager to do the announcement and perhaps repeat that every fortnight.
Neat, thanks. Can you see the notice now?
Matt

Speedy
Hi Matt, you can use the "Notices" in BOINC Manager to do the announcement and perhaps repeat that every fortnight.
Neat, thanks. Can you see the notice now?
Matt
Yes, I can see it in notices. Keep up the great work.
I've got a GeForce 760 card, and the driver is 344.48. I know "every bit counts," but it makes sense that you're moving away from older hardware. Eventually the older hardware just becomes a waste because newer hardware can do it faster, better, and more efficiently.
____________
Qax
Hello.
What a lot of you don't realize is that every day there are fewer and fewer people using BOINC. Maybe users are stable on GPUGRID; however, overall people seem to get bored, even long-time users, and eventually stop running BOINC.
So the desire of some users to cut off any more than what the admins have suggested based on efficiency is just wrong. Sure, it'd be ideal if everyone had the most efficient cards. But right now, BOINC is sort of spasming due to a lack of new users to make up for the ones leaving. They need a good PR campaign or something. Or some kind of better reward system (maybe actually giving something like bitcoins for use, such as GRIDcoins). Anyway - cutting off users due to inefficiency is a bad idea. It'd alienate people and would not be worth making the world a slightly better place. I don't think I'd return to the project if it did that to me, seeing how many other ones there are out there that wouldn't do that to me.
Everyone who reads this ought to try to get a friend who's never used BOINC to try it, rather than worry about inefficient GPUs on the project. For real.
Regards,
Matthew Van Grinsven

MJH (Project administrator, Project developer, Project scientist)
You can see from our live performance data (http://www.gpugrid.net/graphs/trend-cc-scaled.svg), which breaks down our throughput by compute capability, that Geforce 400 & 500 series cards (red, green) still contribute a bit more than 10% of our work.
We'll probably not discontinue support until it falls to <5%, or NVIDIA deprecates support in the compiler, whichever comes first.
Matt

Jim1348
What a lot of you don't realize is that every day there are fewer and fewer people using BOINC. Maybe users are stable on GPUGRID; however, overall people seem to get bored, even long-time users, and eventually stop running BOINC.
So the desire of some users to cut off any more than what the admins have suggested based on efficiency is just wrong. Sure, it'd be ideal if everyone had the most efficient cards. But right now, BOINC is sort of spasming due to a lack of new users to make up for the ones leaving.
GPUGrid (and others) have maintained a high degree of backward compatibility, so if people are leaving, it is not for that reason. And if they do leave, it will be the ones with the least efficient cards. What the BOINC projects should be doing is keeping the ones with the most efficient cards happy.
Hi Folks, running a 660GTX here, I hope it's still OK. It runs 24/7, but if need be, depending on pricing, what should I get? I do play a few games, but I take it most good crunching cards are very good to excellent gaming cards anyway, a win-win situation. If my card is still supported, I will wait until after Xmas, and I am planning on building a brand new system anyway. Current setup is a Xeon socket 775 quad core @ 3.6GHz and one 660 GTX. Might just add a new card and run 2 cards for crunching!
take care folks!
Sean

Jim1348
Hi Folks, running a 660GTX here, I hope it's still ok.
That is a Kepler card, and should be fine for several years more since it can run CUDA 6.5, which is the latest version.
As for the future, the Maxwell cards are the way to go, being the GTX 750/750 Ti and the GTX 970/980 at the moment. But by the time you need to buy anything, there will be a lot of others out.
Qax, you do have a point there. In BOINC "user motivation" may actually be the most important aspect of all. However, I think the project staff made it clear that they're not deprecating the old app / GPU support to push power/monetary efficiency, but rather to keep development efforts in check (I think you're well aware of this).
MrS
____________
Scanning for our furry friends since Jan 2002
Well then, I guess I'm set for a little while still. Thanks, hope to output more results this coming year.
Are you going to update the GPU Model List?
From what I can tell any GTX 6xx and above will still be supported?
Thanks,
bcavnaugh
____________
Crunching@EVGA The Number One Team in the BOINC Community. Folding@EVGA The Number One Team in the Folding@Home Community.
I was very dismayed by the announcement that you plan to no longer support NVIDIA GPUs using the 340 driver. I am a long-time BOINC supporter and recent GPUGRID supporter. I've recently completed testing of a Linux cluster for my home that I expect will be idle most of the time other than running BOINC jobs. This cluster is now running about 500 GFLOPS for CPUs and 7000 GFLOPS for NVIDIA GPUs. These GPUs are NVIDIA Tesla C1060 cards and GTX 770 cards. These are not the fastest GPUs, but they are able to run your Long tasks in about 24 hours, which is about half as fast as you claim the fastest cards are able to run. These NVIDIA Tesla C1060 cards are not supported by the 343 drivers for Linux. Their 343 driver is buggy and on Ubuntu fails on most cards, including my GTX 770 cards, so I am forced to use the 340 driver on all my machines.
I am very interested in the research you are doing and would very much like to continue work on your BOINC tasks beyond the end of 2014. If you proceed with your plans to drop support for the 340 drivers, you will not only lose my machines, but most Ubuntu Linux users as well. Are you sure that the performance improvements you will gain on the newest cards by changing to 343 will offset the loss of all Linux users? These Tesla C1060 cards are now being sold on the used market for under $75, so hobbyists are just now beginning to have access to these cards, which are fast and support double precision as well. Windows users with these Tesla cards will be forced to stay with the 340 driver as well.
I'm sure that BOINC projects like SETI and POEM are hoping that you proceed with your plan, as they will inherit many GFLOPS from users like me. I currently run those tasks when I cannot get work from your project, which in 2015 will be all the time.
Thank you,
Greg Tippitt
BOINC GTIPPITT - Team Leader for
STARFLEET - Star Trek Fan Association
Greg,
Sorry to hear about your bad luck with the NVIDIA 343 drivers.
I would like to say though that I am running Ubuntu 14.04, a GTX 770 GPU, and Nvidia 343.22 drivers and it seems to be working just fine. Most long units complete in the 8-10 hour range.
What kind of problems are you having when using the 343 drivers on the GTX 770 cards?

MJH (Project administrator, Project developer, Project scientist)
Dear Greg,
Quite apart from the driver issue, the C1060 is a compute capability 1.3 device and we intend to drop support for these entirely.
As you'll see from http://www.gpugrid.net/graphs/trend-cc-scaled.svg, cc 1.3 devices, represented by the blue sliver, constitute less than 1% of our throughput.
I hope you'll continue to crunch with the 770s, which will give you better performance and a lower power bill. I'm sure one of the other volunteers will be able to help you out with your driver problems.
Matt
Hi, Matt
Many thanks for your response. I have attempted to update my drivers as you suggest. I have not succeeded as Win7 reports my drivers are up to date.
I will search further...
John
Hi, Matt
Many thanks for your response. I have attempted to update my drivers as you suggest. I have not succeeded as Win7 reports my drivers are up to date.
I will search further...
John
John,
Microsoft Update doesn't have the latest 3rd-party drivers; you should check and download them directly from the vendor's website (http://www.nvidia.com, http://www.geforce.com), or download the latest driver with the vendor's updater application (GeForce Experience).
Greg, I hope this Linux driver mess can somehow be sorted out. Maybe the confusion appears when you're mixing C1060's and GTX770's? Would it help to separate them and keep only the boxes with GTX770s at GPU-Grid?
BTW: the Tesla C1060 uses the GT200 chip from 2008. They may be cheap now, but I have a really hard time coming up with any scenario where they would actually be a better choice than current offerings. The only variant which comes to my mind would be software which insists on running on a Tesla but refuses a Geforce.
For SP performance: the C1060 provides 622 GFlops (MAD) at 190 W TDP, whereas a GTX750Ti provides 1300 GFlops (MAD) at 60 W TDP. It should cost about 100$ and quickly pay for itself via the electricity bill. In Germany it would take just about 1.5 months of sustained load to make up for that 30$/€ higher purchase cost.
For DP performance: C1060 provides 77 GFlops (FMA). All newer Geforce cards support DP, but are heavily crippled in performance. Only Titan and more recent Teslas or Quadros would be worth running in DP mode. Or high end AMD GPUs. Anyway, even an old Fermi card like a mainstream GTX550Ti almost matches this performance (58 GFlops DP FMA, 120 W TDP, released 2011 at 150$), whereas bigger cards easily surpass it.
From my point of view: if you pay anything at all for electricity you'll probably do yourself a favor by removing or replacing those old GPUs. There's a good reason they are so cheap nowadays.
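A back-of-the-envelope version of that payback estimate; the TDP and price figures are the ones quoted above, and the electricity price is an assumption (0.21 EUR/kWh roughly reproduces the ~1.5-month figure; substitute your own rate):
```
// payback.cu - rough break-even time for swapping a C1060 for a GTX 750 Ti.
#include <cstdio>
int main() {
    const double old_watts  = 190.0;  // Tesla C1060 TDP
    const double new_watts  =  60.0;  // GTX 750 Ti TDP
    const double extra_cost =  30.0;  // rough price difference quoted above (EUR)
    const double price_kwh  =   0.21; // assumed electricity price in EUR/kWh
    const double saved_kw   = (old_watts - new_watts) / 1000.0;
    const double hours      = extra_cost / (saved_kw * price_kwh);
    printf("Break-even after %.0f hours (~%.1f months) of sustained load\n",
           hours, hours / (24 * 30));
    return 0;
}
```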
MrS
____________
Scanning for our furry friends since Jan 2002
Thanks, Zoltan.
sis651
I have problems with drivers from the Nvidia site due to the Optimus system. Also, any driver newer than 331.38 in the *.deb repositories can't work with Boinc. So I started crunching other projects. For example, Collatz gives good credits, but for what? For a result file that is less than half a kilobyte. :D And for some mathematical problem that I don't know what it will help with.
Matt,
Good luck with your new software development. I will return to other BOINC projects when my 7 TeraFlops are no longer supported by GPUGRID. This summer I started running GPUGRID tasks because of a shortage of GPU tasks on the SETI@Home and POEM@Home projects, but they are back up now with plenty of tasks. I continued running your tasks because I was looking for medical tasks. You also dramatically inflate credit compared to most other projects. For example, I got 300,000 credits today from GPUGrid, while other projects give me half that much credit per day on average. GPUGRID gives the most credit per runtime of any project I've found except Collatz, and certainly more credit than any project doing research on medicine, climate, or cosmology that I've found.
Thanks, Greg
CaptainJack,
I'm using the same 3.13.0-39-generic kernel as you. Is yours 32 or 64 bit?
I'm not mixing the Tesla cards and the GTX 770's. I only have a single GPU per motherboard. My cluster is using quad-CPU SuperMicro H8QME-2 motherboards that only have a single x16 slot, and lots of (nearly useless) PCI-X slots. For CPU tasks, they are great with 24 AMD 2.4GHz cores.
There is lots of talk on other BOINC forums about the Tesla C1060 cards being overpriced versions of cheaper consumer video cards. I don't know a lot about them, other than they complete work units faster than the much newer and more expensive GTX770's I've bought. I've got the 770's installed on my home theatre PC and my desktop PC, where I need the video capabilities of the card.
After seeing the notice in BOINCMGR, I gave the 64-bit 343 drivers from NVIDIA's website a test on a newer machine I was working on. It is a newer SuperMicro H8QM3-2s motherboard with more PCIe slots. I'm using a pair of GTX 770 cards in this motherboard and no Tesla cards. I'm using 64-bit Ubuntu 14.04 with all updates. With the NVIDIA 343 drivers, Xserver won't start, but the system runs fine with the 340 release.
I live in the state of Tennessee in the US, where we are fortunate to have more affordable electricity than many other places as a legacy of 1940s (TVA) hydroelectric construction projects along the many lakes built on the Tennessee River for flood control and power generation. I pay about 10 cents per kilowatt hour. I probably won't be buying any more new hardware, as I am far further past my sell-by-date than my hardware is. I'm old enough that when I started programming in FORTRAN IV, supercomputers were round instead of square.
I got the hardware for my cluster as a cheap bunch of untested junk, much of which would not work. It was damaged stuff from a local PC recycling center that had been discarded by the nearby Oak Ridge National Laboratory, where they keep some of our biggest supercomputers. I've been working on this cluster as a hobby to see if I could build a supercomputer from garbage. I've not run it much of the time until the weather has gotten cool. The cost to run the cluster this winter will be offset greatly by it heating my home. I've now gotten it to more than 150 CPU cores and 1500 GPU cores. Now that I've almost finished the hardware, I'm going to start on some parallel software ideas I wanted to play with. When I'm not using it for anything, I'll have it running BOINC tasks.
It's now a bookshelf full of stuff that could win an award for the ugliest computer ever built.
Thanks,
Greg
Greg,
I'm using 64-bit and using the NVIDIA driver installation method posted here:
http://www.gpugrid.net/forum_thread.php?id=3713&nowrap=true#36671
If you scroll down a few more posts, skgiven has added some additional tips.
Sounds like you have an interesting project going on there.
sis651,
I'm not very fond of Collatz either. But not due to the small result file; rather due to the fact that we cannot prove the Collatz Conjecture by trying all numbers, because frankly there's an infinite amount of them. We would only have been able to disprove it by finding a single number where it doesn't hold true. By now we have pushed so far and not found a single counterexample that it's probably true.
You may want to try Einstein@Home instead. It doesn't yield as many credits as other GPU projects, but many like their science.
It would also be nice if this driver issue could be sorted out, but honestly I wouldn't expect anything in this regard with Linux combined with Optimus.
Greg,
it sounds like you know very well what you're doing :)
Just one more remark: if you say the C1060's are completing WUs faster than GTX770's, are you using double precision? Apart from weird software issues or a far too slow CPU feeding the GPU that should be the only circumstance where this could be possible.
MrS
____________
Scanning for our furry friends since Jan 2002
Jack and ET-Ape,
I don't see what the size of the result file has to do with how useful a project is, as long as the result is not "42". For me, some of the BOINC projects do seem like a waste of electricity, but others might say that about SETI@Home, which I've been a supporter of since the original "Pre-BOINC" days. I run work for several projects for SETI, medical research, climate change, and Cosmology. I run both MilkyWay and Einstein.
I was wearing a MilkyWay@Home T-shirt last week, and a guy in my cardiologist's waiting room struck up a conversation with me about BOINC. He was talking about how some projects seem frivolous, and he didn't see the point of the one for rendering animations. I told him to download and watch the Big Buck Bunny video. I explained that students trying to learn and practice high-end animation skills don't have access to supercomputers. Steve Jobs made his fortune with Pixar, because George Lucas sold him Pixar cheap. Jobs realized that the reason Pixar's software wasn't working was because they were running it on an old DEC VAX mainframe. After he bought Pixar, he replaced the old mainframe the artists were sharing with Silicon Graphics workstations for each artist. The software then worked fine, and they finished the original Toy Story movie. An example of Jobs' marketing genius was that he had the IPO for Pixar stock the same day the movie opened. Rendering 4K-3D animations takes lots of computing power, so students using a desktop PC can use BOINC to render their projects, just as researchers needing data analysis can.
Thanks for the link about getting the drivers working, I'll take a look. I'm in the process of getting the machine I use as my desktop to run a virtual machine that will be part of my cluster as well. I've just started testing the GPUGrid CPU tasks. They run really cool on these nodes. Each motherboard has 24 cores, but I limit BOINC to only running 20 CPU jobs at a time. That leaves cores for feeding the GPU job, system overhead and IO overhead. The GPUGRID tasks for CPU (and MilkyWay nBody tasks) will grab all 20 of the CPU cores and start grinding away on a task.
The machine I'm working on now has more PCie slots, so I'm trying both GTX 770 cards and a Tesla card in the same machine. The Tesla card is 50% faster than the GTX 770 card. When Matt makes the changes, I can continue to run the GPUGrid jobs for CPU, and let POEM and SETI use the GPUs.
NVIDIA GPU 0: Tesla T10 Processor (driver version 340.32, device version OpenCL 1.0 CUDA, 4096MB, 4041MB available, 933 GFLOPS peak)
NVIDIA GPU 1: GeForce GTX 770 (driver version 340.32, device version OpenCL 1.0 CUDA, 2048MB, 1984MB available, 624 GFLOPS peak)
NVIDIA GPU 2: GeForce GTX 770 (driver version 340.32, device version OpenCL 1.0 CUDA, 2048MB, 1984MB available, 624 GFLOPS peak)
Since the motherboards are too large for most cases, I've got the motherboards in a bookshelf using nylon wire ties and Ducktape. Here is a picture of what I call my Cluster Duck.
If you download the Nvidia GeForce Experience tool, it will keep you up to date on driver releases from Nvidia. I don't get my drivers from anyone but Nvidia. The third-party manufacturers just ship the same driver Nvidia does anyway.

MJH (Project administrator, Project developer, Project scientist)
Greg,
The machine I'm working on now has more PCie slots, so I'm trying both GTX 770 cards and a Tesla card in the same machine. The Tesla card is 50% faster than the GTX 770 card.
For what activity, out of interest? The 770 is superior to the Tesla in every respect except for total memory size.
Hi Greg,
I don't see what the size of the result file has to do with how useful a project is, as long as the result is not "42".
This was addressed to sis651, wasn't it? Otherwise I would kindly ask you to re-read my post. Regarding the remaining points of project choice: I hope everyone can agree here to disagree, as these discussions often heat up fairly quickly. Personally I obviously have my preferences, and strong objections only against a few projects. Most of those give very high credits, though.
Regarding drivers: as far as I know Geforce Experience (as suggested by cowboy2199) is only available under Win. Which makes sense, considering that there are hardly any games for Linux.
How do you know your Tesla is 50% faster? The GFlops you quote there are correct for the C1060 with a maximum throughput of 933 GFlops total SP (with MUL+ADD+SF) or 622 GFlops using only MAD SP. However, the numbers are completely off for the GTX770: it has 3.2 TFlops MAD SP instead of 624 GFlops! Your BOINC and driver are recent, so I have no idea where this number comes from. If it's from the BOINC startup messages it doesn't matter for the actual crunching performance. Also that card and driver are OpenCL 1.1 capable, so I don't know why it's only listed as 1.0. It could be a difference between Win and Linux, as nVidia doesn't like to put any development effort into OpenCL, and surely much less for the few Linux users running OpenCL on nVidia cards.
And finally - sorry, that image link doesn't work.
Best regards,
MrS
____________
Scanning for our furry friends since Jan 2002
How did you install this on Ubuntu 14.04 - the Nvidia 343.22 driver, that is?
Do you have the commands needed for us Windows users?
Thanks
To: ET-Ape,
"DON'T PANIC inscribed in large friendly letters"
I'm sorry if I stepped on anyone's toes (or prehensile tail). I wasn't looking to start a fight. Quite the opposite: I was trying to say that there are different views on what "useful" projects are. Even though it's not a project I run tasks for (though it's cool), the RenderFarm was my example of how "frivolous" versus "useful" is a matter of perspective. Listening for ET transmissions is important to me, but considered frivolous by some/many. I'm a huge fan of Douglas Adams' writings and was just trying to be humorous with my reference to 42 being the short answer to "the meaning of life, the universe, and everything".
I'm not sure why the link to the picture didn't work. I'm not up to speed with BBCodes.
https://drive.google.com/file/d/0B2gn3MG8nZvAMDRWaEhGV0dkMlk/view
To: Matt, et al.,
As for the performance of the GTX770's, you all obviously know far more about that than I do, which isn't difficult as I'm fairly ignorant on graphics cards. The only computer games I play are GO and Star Trek Armada, so I've never worried about frame rates. My GTX770 cards are generic Chinese imports. They say NVIDIA GeForce GTX770 on the outside, and the NVIDIA drivers say they are the GTX770. The number of CUDA cores reported for GTX770 cards is somewhat confusing, as some things refer to the number of shader cores and others to the number of texture cores.
The GFLOPs numbers in my message were from the BOINC start-up. A few months ago, I did a comparison of wall time for running some Einstein tasks. The time to completion for a GTX770 card was about 25% longer than for the same system with a Tesla card installed instead. As for whether I'm using single or double precision, I don't know which the projects use. Except for an old PCI video card in my file server that uses a GeForce 8400 GS, all of my cards support DP if required. It is a big performance hit when it's needed. One of my first programming jobs (so long ago that double precision would be needed to calculate) was doing actuarial programming in FORTRAN for life insurance companies. To control rounding errors and improve performance on ancient hardware, floating point was avoided whenever possible. Money amounts were always in pennies using unsigned long integers, such as 100,000 for $1000.00.
A comment about the declining number of BOINC volunteers.
Lots of users are giving up their desktop computers as their tablets and phones gain more capabilities to take their place by using online resources. In homes that might have had multiple desktop PCs, there may now be a single desktop to use when they need something more powerful than their touchscreen devices. I don't do any calculations on my Android devices, because it drains the battery too fast. I use the BOINC client portion to attach to other systems on my home network to monitor them, but I don't run any calculations on these devices.
As the team leader for the BOINC "Starfleet" team, I've seen the number of active users on my team decline to about half what it was 7 years ago, but the work unit production of the team has increased as the desktop machines of the remaining users have become more powerful. In the US, we also don't seem to get much press anymore. I used to hear or read a mention of the SETI@Home project periodically in bits of science news. It has been a few years since I've seen anything about BOINC projects outside of BOINC forums or articles about computer clustering.
I'm very sorry if I made anyone angry. I have a sarcastic sense of humor, and I should take more care so as not to cause offense.
Greg
popandbob
My GTX770 cards are generic Chinese imports.
There's your problem... they aren't actually GTX770's. Einstein is reporting that they only have 128 CUDA cores.

Beyond
My GTX770 cards are generic Chinese imports.
There's your problem... they aren't actually GTX770's. Einstein is reporting they only have 128 cuda cores.
Something's wonky: Stderr output:
# Name : GeForce GTX 770
# ECC : Disabled
# Global mem : 2047MB
# Capability : 1.1
# PCI ID : 0000:07:00.0
# Device clock : 1625MHz
# Memory clock : 700MHz
# Memory width : 384bit
Edit, here's a normal GTX770:
# Name : GeForce GTX 770
# ECC : Disabled
# Global mem : 2047MB
# Capability : 3.0
# PCI ID : 0000:01:00.0
# Device clock : 1189MHz
# Memory clock : 3505MHz
# Memory width : 256bit

MJH (Project administrator, Project developer, Project scientist)
Greg,
Looking at your computers' work history on GPUGrid, I can't see any circumstance where a GPU work unit (called acemdlong) has run on one of your 770s.
The reason for this is that, notwithstanding the branding of the cards, they are reporting as very old silicon, older even than the Tesla.
It's possible that this is a driver problem, perhaps getting confused by the presence of different devices in the same system. A less pleasant alternative is that you have fake cards. This is made more plausible given that you said that Einstein WUs ran slowly on them.
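One way to see what the silicon actually reports, independent of the sticker and the driver's marketing name, is to query the device properties directly; a minimal sketch using the CUDA runtime (compile with nvcc). A genuine GTX 770 should show compute capability 3.0, 8 multiprocessors (1536 CUDA cores) and a 256-bit memory bus:
```
// devcheck.cu - print the properties the driver actually reports for each GPU.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        printf("GPU %d: %s\n", i, p.name);
        printf("  compute capability : %d.%d\n", p.major, p.minor);
        printf("  multiprocessors    : %d\n", p.multiProcessorCount);
        printf("  core clock         : %.0f MHz\n", p.clockRate / 1000.0);
        printf("  memory bus width   : %d bit\n", p.memoryBusWidth);
        printf("  global memory      : %zu MB\n", p.totalGlobalMem >> 20);
    }
    return 0;
}
```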
Sorry to be the bearer of bad news.
Matt
Everybody,
Thanks for all the info. I was beginning to smell fake last night as I read more specs on the real GTX770s, which is why I mentioned mine were from China (on AliExpress). I guess you get what you pay for at 3 cards for $250. At the time, I needed something faster than the ATI ES1000 graphics on the motherboard with 16MB. When I got them, they were fast enough and cheap enough, so I was happy enough.
After I posted last night, I did a Google search (I can't say Googling, without thinking of Décolletage) for "fake NVIDIA GTX 770". There were lots of discussions regarding cards bought on AliExpress. At least mine isn't as obvious as the one below, where they were selling a NVDIA Radeon card by ASUS, which looks somewhat like my cards other than the ASUS sticker. Mine also has two fans so it must be twice as fast, right?
I am feeling an odd sense of simultaneous stupidity and enlightenment. To paraphrase former Def. Sec. Donald Rumsfeld, at least now I know what I don't know, which is better than not knowing what I don't know. I'm sorry to have inadvertently hijacked this news thread for tech support.
I think perhaps I'll change my name, be quiet from now on, and go back to looking for cures for cancer and broadcasts of "I Love Lucy" reruns with Vulcan subtitles (using my CPUs).
Greg Inadvertent Tippitt
Hi Greg,
don't worry, you didn't upset me (and I don't think anyone else) in this thread. If we were this thin-skinned we probably wouldn't post on message boards any more ;)
And I actually meant my "kindly ask" in the true sense of the word. In a laid-back and relaxed way, yet slightly alarmed that you might have misunderstood things (which you didn't).. not some woman-like indirect way of saying "I'm two seconds away from bursting into fury" :D
Regarding your cards: it seems very probable to me that you got fake ones. At 128 shaders they could use the venerable G92 chip, which was used from 9800GT to GTS250 and is one generation earlier than your Teslas. This matches your observed Einstein performance approximately: 128 shader * 1625 MHz is 67% of 240 shaders at 1296 MHz. This would make your Teslas at maximum 33% faster than your "GTX770", which is well within experimental error range of 25% (and like most projects Einstein does scale absolutely linear with shader count and clock speed).
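For reference, that estimate is just peak MAD throughput scaling with shader count times clock (2 FLOP per core per clock); a quick check using the published reference specs:
```
// shader_math.cu - relative peak throughput from shader count x clock.
#include <cstdio>

static double peak_gflops(int shaders, double clock_mhz) {
    return 2.0 * shaders * clock_mhz / 1000.0;   // MAD = 2 FLOP per core per clock
}

int main() {
    const double fake770 = peak_gflops(128, 1625);   // what the "GTX 770" reports
    const double c1060   = peak_gflops(240, 1296);   // Tesla C1060 / T10
    const double real770 = peak_gflops(1536, 1046);  // genuine GTX 770, reference clock
    printf("reported 'GTX 770': %4.0f GFLOPS (%.0f%% of the C1060)\n",
           fake770, 100.0 * fake770 / c1060);
    printf("Tesla C1060       : %4.0f GFLOPS\n", c1060);
    printf("genuine GTX 770   : %4.0f GFLOPS\n", real770);
    return 0;
}
```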
This also explains the OpenCL 1.0 support, as that G92 chip is actually CUDA compute capability 1.1 (1.3 for the Teslas). And why you're getting problems with newer drivers: nVidia has actually discontinued support for those old chips and is only providing some bug fixes in the 340 legacy branch.
I have fond memories of this chip. I actually started with one at GPU-Grid, somewhere in the time frame 2008 - 2009. It was making nice 10k credits/day. I sold that card for 40€ back then, and it's still working in a system of my colleague for the occasional casual game. For todays number crunching it's not really suitable any more, though.
MrS
____________
Scanning for our furry friends since Jan 2002
This is good news indeed.
I think that crunching on pre-Kepler cards (GTX-2xx, GTX-4xx, GTX-5xx) is a waste of electricity.
I disagree :) My GTX560 is crunching fine on short WUs
Not everyone has the bucks to upgrade to something new whenever a new architecture is released, so don't make such remarks
____________
Team Belgium

sis651
In fact I'm trying new projects, checking the run times and the credit they give. File size was just something that caught my attention when crunching, not proof of a bad project. I've read many things about the usefulness of those projects and don't feel great about Collatz, but anyway, it's one of those projects.
I ran Einstein@home and it's on the list; I loved the CPU job graphics. But my priority is projects about health and medical issues. I was planning to switch to Poem@home, but they don't seem to have those OpenCL tasks there.
I've now checked again and Boinc is receiving CUDA 4.2 Noelia jobs, so I'll continue with them.
Nvidia has never officially supported their Optimus products on Linux. Anyway, Bumblebee is fine for normal usage but not for crunching. Also, drivers downloaded from the Nvidia site work and crunch fine, but I can't switch to integrated graphics with them. In this area we have frequent blackouts, and Optimus or no Optimus means two hours of difference in the notebook's battery life...
Sorry for causing some annoyance here. :)
____________
sis651,
Regarding POEM@Home, the availability of their GPU work is sort of hit or miss, and it doesn't run as well on GPUs as some other projects do either. I think their code is maybe better suited to CPUs. Their GPU tasks also require a full CPU core while running. The difference in speed between running CPU and GPU tasks for POEM is not as great as with other projects like SETI AstroPulse, so I run CPU work for POEM and use my GPUs elsewhere. You run almost exactly the same projects as I do.
Matt and ET-Ape,
This weekend, I snagged a pair of used NVIDIA 670 cards (real ones) on eBay for $250. They're not as fast as Titans, but as bottom-feeding goes, I'm happy. The StdOut from BOINC shows the numbers from my "funny 770", which explains why I thought my Tesla cards were so fast. The fake 770's were the first CUDA GPUs I had ever bought, so I thought the old Tesla cards seemed really fast by comparison. ("So it's not a Greyhound, it's a Basset Hound? I always wondered why his legs were so short.") When I've had a chance to move the older cards to another machine, I can then upgrade my drivers to 343 for the machine with the two (real) 670s. In the meantime, I wonder how many workunits I can finish per day with this machine's 4 GPUs and 24 CPU cores.
GPU 0: GeForce GTX 670...... 2048MB - 2915 GFLOPS
GPU 1: Tesla T10 Processor.. 4096MB - 933 GFLOPS
GPU 2: GeForce GTX 770...... 2048MB - 624 GFLOPS
GPU 3: GeForce GTX 670...... 2048MB - 2915 GFLOPS
There is a listing on eBay with lots of bids for a fake 770 like mine. The NVIDIA driver and settings program report that the card is a GTX 770. The clue is the 128 CUDA cores. Well there is also the matter of it not looking like a 770, I guess.
http://www.ebay.com/itm/231383397036?_trksid=p2055119.m1438.l2649&ssPageName=STRK%3AMEBIDX%3AIT
Well, so much for my promise to be quiet from now on...
Greg Tippitt
BOINC Team Leader for
STARFLEET - Star Trek Fan Association
I am using driver 334.89. NVIDIA tells me this is the latest driver for my two GTX 650Ti GPUs.
Not being particularly tech savvy, I can't do any more. Not sure what effect, if any, this will have on my ability to process after 1 January 2015.
John
Damn you lot and your arms race. You have just forced me to upgrade again to an Asus GTX970 and use my Asus GTX660Ti to replace an old GTX460.
That only leaves the GTX560Ti to replace.
I am using driver 334.89. NVIDIA tells me this is the latest driver for my two GTX 650Ti GPUs.
Not being particularly tech savvy, I can't do any more. Not sure what effect, if any, this will have on my ability to process after 1 January 2015.
John
John,
You'll find the latest NVidia drivers (344.65) for your GTX650Ti for Windows 7 x64 on this page.
This is a direct link to the executable installer:
http://us.download.nvidia.com/Windows/344.65/344.65-desktop-win8-win7-winvista-64bit-international-whql.exe
EDIT: eXaPower linked these drivers in the other thread, but I'll leave this post here in case someone gets here first, just like I did.
Damn you lot and your arms race. You have just forced me to upgrade again to an Asus GTX970 and use my Asus GTX660Ti to replace an old GTX460.
That only leaves the GTX560Ti to replace.
Is this a joke? The 1st post explicitly says "Series 200", which does not apply to a GTX460 from the "Series 400". So actually nVidia "forced" you to upgrade by providing a new, superior product ;)
MrS
____________
Scanning for our furry friends since Jan 2002
Yes, it was meant to be a joke.
ET-Ape,
We realize that Vulcans have difficulties with the humour of humans, but you could use logic and language syntax to help you with the recognition of irony and sarcasm.
Peace and long life,
Greg

Chilean
Ahhh, well at least I got a good laugh reading this thread's misunderstood sarcasm... :D
Also, sorry to hear about the fake cards.
____________
Haha! Yeah, sorry, I should have gotten it. At least I was suspicious that the sarcasm module of my tricorder hadn't triggered properly ;)
MrS
____________
Scanning for our furry friends since Jan 2002

BDDave
Hey Matt,
Thanks for the overage GPU card usage percentages. I've been waiting to upgrade my EVGA 470 since last June, but the new cards were delayed and even skipped a generation, as you all know. Happy to say that my new EVGA 970 is arriving tomorrow! So amazing to see the difference in crunch times. After 14 years of crunching, I'm looking forward to a fresh card with my new computer.
Hmmm, should I keep the 470 in my computer as a second cruncher? The fan is a bit loud and it does use a lot of power... Feedback would be most welcome...
Get Crunchin'
BDDave
____________
Hmmm, should I keep the 470 in my computer as a second cruncher? The fan is a bit loud and it does use a lot of power... Feedback would be most welcome...
Since you're from the US you are paying much less for electricity than I am. For me it would be an easy decision not to crunch with a GTX470 anymore, because it's far too inefficient. Sell it to some gamer on a budget, who'll be able to make some good use of it for a few more years.
If your electricity is really cheap there'd still be two arguments against continuing to use the GTX470: eco-conscience and the question whether your case cooling, PSU and ears could take both cards together.
BTW: while Kepler GPUs are also less efficient than Maxwells, I would not yet discourage crunching on them ;)
MrS
____________
Scanning for our furry friends since Jan 2002
The waste is heat, so during the winter the cost is offset by the heat keeping your house warm, but come summer, doing some math to estimate the monthly cost might be needed, since you then must also pay to cool your home. I don't crunch during the daytime in summer when my AC is running, but wait until evenings when I open the windows (meaning the holes in the walls covered with glass, not the software from Redmond, WA).
ET-Ape,
That was a joke, but not a good one, so don't use it for calibrating your tricorder.
Greg

Beyond
The waste is heat, so during the winter the cost is offset by the heat keeping your house warm, but come summer doing some math to estimate the monthly cost might be needed, since you then must also pay to cool your home. I don't crunch during the daytime during summer when my AC is running
Ditto here. In Minnesota it's heating season and my house has a computerized heating system :-).

BDDave
Greg,
Actually the computer IS in the bedroom, and this room gets hot in the summer time as a result! The heat from the GPUGRID projects ran my EVGA 470 at around 85C on average. I had to run the air conditioner in the living room, then run another floor fan by the hallway to push that cool air in. My summer electric bill is usually twice as much as the winter bill here in the Los Angeles area.
Also, the new EVGA 970SC is in and running at 73C with 86% GPU usage. That alone is awesome, and I hear NO noise with the ACX cooler fan.
Get Crunchin'
BDDave
____________
BDD,
ET-Ape's suggestion is a good one about passing the card along to somebody that can use it. You could get them to use it for BOINCing in exchange for a free graphics card. Perhaps even include a "dead rat with a string to swing it with".
I'm glad my cluster is in my dining room rather than the bedroom, as I definitely wouldn't get any sleep with its noise. It has 36 80mm fans on the CPUs and GPUs. I've also got three 19-inch box fans to move air through the bookshelf, which holds the 6 open-air motherboards. These box fans do double duty as air filters to prevent dust from clogging up the heat sinks. On the back of each 19-inch box fan, I put a 20-inch square air-conditioner filter to trap hair and dust. I've got 3 dogs that produce tons of hair that accumulates no matter how often I vacuum. I've also got long hair and a beard that's not quite ZZ-Top length, but almost. The AC filters cut the air flow a bit through the 19-inch fans, but it's worth it for the dust they trap. The cluster has been running 8 hours a day for the past 5 months, and when I had to replace a CPU fan that failed recently, the heat sink had trapped very little dust when I blew it out with compressed air.
It was 20 degrees Fahrenheit here in Knoxville, TN last night. My central heating thermostat is set at 60 degrees, but the house is kept 70 to 75 degrees by my AMD Opteron heating system that's crunching about 500,000 credits per day.
(ET-Ape - Humor warning) I wonder what it would cost to replace the heat sinks and fans with waterblocks to use the cluster as a water heater? Why do some people call them "hot water heaters"? They heat cold water, as there would be no reason to heat water if it were already hot.
Greg
MJH (Project administrator, Project developer, Project scientist)
Per this plan, SM1.3 support is dropped from today. No new work to pre-Fermi GPUs. Comments please on the main News thread about the change.
Matt