
Message boards : Graphics cards (GPUs) : PCIe 2.0 vs 3.0 & # of lanes

Jack Lightholder
Joined: 1 Jul 11
Posts: 13
Credit: 643,725,988
RAC: 103,634
Message 29870 - Posted: 12 May 2013 | 15:27:49 UTC

Hi all,
I've been looking into building a GPUGRID-specific rig since my only Nvidia card died on me a year ago and I had to replace it with an AMD. I've been doing some analysis on the number of lanes and PCIe 2.0 vs 3.0 to optimize output per dollar. What I seem to be seeing is that going with PCIe 3.0 isn't worth the additional cost of newer boards and chips, and that lane speed isn't ultimately significant. Can someone confirm this? I'm looking at an MSI GD-70, which runs four PCIe 2.0 slots at x16 with two cards or x8 with four. Is that the best layout for a GPU cruncher? From what I've seen it appears to be more cost effective than something like an ASUS Rampage with three x16 slots running 3.0 at almost four times the cost.

If I do go the 2.0 route, do I have to worry about certain cards not being fully utilized or bottlenecking on the lanes? I was thinking of filling it with four GTX 560 Ti cards, but wanted to double-check that the x8 configuration on PCIe 2.0 would be fine compared to the other options.
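For a rough sense of the numbers behind this question, here is a minimal sketch (Python, assuming the standard signalling rates and encoding overheads: 8b/10b for PCIe 1.0/2.0, 128b/130b for 3.0) of usable per-direction bandwidth:

    # Usable per-direction PCIe bandwidth by generation and lane count.
    RATE_GT_S = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0}              # transfers/s per lane
    ENCODING  = {"1.0": 8 / 10, "2.0": 8 / 10, "3.0": 128 / 130}  # payload efficiency

    def bandwidth_gb_s(gen: str, lanes: int) -> float:
        """GB/s in one direction: rate * efficiency * lanes, converted bits -> bytes."""
        return RATE_GT_S[gen] * ENCODING[gen] * lanes / 8

    for gen in ("1.0", "2.0", "3.0"):
        for lanes in (4, 8, 16):
            print(f"PCIe {gen} x{lanes}: {bandwidth_gb_s(gen, lanes):5.2f} GB/s")

By this arithmetic, PCIe 2.0 x8 gives ~4 GB/s each way, the same as PCIe 1.0 x16 and close to PCIe 3.0 x4, which is the equivalence that comes up later in the thread.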

Thanks!

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 29874 - Posted: 12 May 2013 | 17:00:06 UTC - in response to Message 29870.
Last modified: 12 May 2013 | 17:03:42 UTC

Decide if you want your system to host 2 GPUs or more, and build to that design.

Unless you are going to use 3 or more high-end GPUs, more than 2 PCIe slots isn't of any benefit and is a poor/irrelevant design.

Consider the price difference between a mid-to-high-end PSU (which can support 2 high-end GPUs) and an extreme-end PSU (which can support 4 high-end GPUs): around $75 vs $300+.

I wouldn't invest in old kit for a new system. If you already have a motherboard with 4 PCIe 2.0 x8 slots then fine, build on it, but don't buy a new PCIe 2.0 motherboard for 4 mid-range previous-generation GPUs.

You would be better off with 2 mid-range to high-end GPUs in a new system:

GTX680 (£370)
I43R11-NATHAN_dhfr36_6-11-32-RND9487_1 (WU 4442761): sent 12 May 2013 3:05:32 UTC, returned 12 May 2013 12:43:51 UTC, completed and validated; run time 13,641.54 s, CPU time 13,612.11 s, credit 70,800.00
A GTX670 @ £290 seems better value to me.
Two GTX670s would require four 6-pin PCIe power connectors, while the 680s need at least one 8-pin connector.

GTX560Ti (£150)
I42R18-NATHAN_dhfr36_5-15-32-RND7485_0 (WU 4427556): sent 5 May 2013 11:25:59 UTC, returned 5 May 2013 19:23:29 UTC, completed and validated; run time 28,503.59 s, CPU time 4,085.87 s, credit 70,800.00

4 GTX560Tis would require eight 6-pin PCIe power connectors - a very high-end modular PSU.

If mid-range cards are more within your budget, the GTX650Ti Boost and the GTX660 (£165) look like good value for their performance. The GTX660Ti sits between the 670 and the 660.

Even a GTX650Ti is £115 and can match a GTX560Ti for performance while costing less to buy and less to run:

I73R20-NATHAN_dhfr36_6-12-32-RND2263_0 (WU 4443459): sent 12 May 2013 6:47:31 UTC, returned 12 May 2013 14:48:49 UTC, completed and validated; run time 28,497.36 s, CPU time 28,410.83 s, credit 70,800.00

These only require one 6-pin PCIe power connector each.

PCIe 3.0 isn't much faster than PCIe 2.0 here, but it is future-proofed and lets you use a 3rd-generation Intel CPU, which uses less electricity, as do the GeForce 600 GPUs.

Remember - only get what you can afford!

GoodFodder
Joined: 4 Oct 12
Posts: 53
Credit: 333,467,496
RAC: 0
Message 29879 - Posted: 12 May 2013 | 18:32:03 UTC

I would also suggest, for a dedicated rig, that two smaller machines may be cheaper, more efficient, and in the long run easier to manage than one large one. Don't forget to take into account your yearly running costs, which may end up costing as much as your hardware!
I would recommend using an online PSU calculator (such as eXtreme's) and taking 65-75% of the 'minimum recommended PSU wattage' value as a rough power-usage estimate for crunching.
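As a minimal sketch of that rule of thumb (Python; the 550 W figure and the $0.12/kWh tariff below are illustrative assumptions, not from the post):

    # Estimate steady crunching draw as 65-75% of a PSU calculator's
    # "minimum recommended PSU wattage" figure.
    def crunch_draw_estimate(min_recommended_w: float) -> tuple[float, float]:
        """Return a (low, high) estimate of crunching power draw in watts."""
        return 0.65 * min_recommended_w, 0.75 * min_recommended_w

    low, high = crunch_draw_estimate(550)  # hypothetical calculator output
    print(f"Expected draw while crunching: ~{low:.0f}-{high:.0f} W")

    # Yearly running cost if the rig crunches 24/7:
    kwh_per_year = (low + high) / 2 / 1000 * 24 * 365
    print(f"~{kwh_per_year:.0f} kWh/year, e.g. ~${kwh_per_year * 0.12:,.0f} at $0.12/kWh")

At a ~385 W average that works out to roughly 3,400 kWh a year, which shows why the running cost can rival the hardware cost.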
Also note that the higher-end Nvidia cards are rumored to be replaced shortly by the 7 series, which may bring prices down.

Mumak
Joined: 7 Dec 12
Posts: 92
Credit: 225,897,225
RAC: 0
Message 29881 - Posted: 12 May 2013 | 18:56:21 UTC

I would also add that if you want to run multiple GPUs on a motherboard, you need to consider how the PCIe slots are spaced. GPUs that occupy more than one slot might not fit, and cooling them properly matters too: running the cards 24/7 will generate a lot of heat, so they need enough spacing for proper air cooling. If that won't work, you'll need to consider liquid coolers for the GPUs.

Jack Lightholder
Joined: 1 Jul 11
Posts: 13
Credit: 643,725,988
RAC: 103,634
Message 29887 - Posted: 13 May 2013 | 0:45:08 UTC - in response to Message 29874.

Decide if you want your system to host 2 GPUs or more, and build to that design.

Unless you are going to use 3 or more high-end GPUs, more than 2 PCIe slots isn't of any benefit and is a poor/irrelevant design.

Consider the price difference between a mid-to-high-end PSU (which can support 2 high-end GPUs) and an extreme-end PSU (which can support 4 high-end GPUs): around $75 vs $300+.

I wouldn't invest in old kit for a new system. If you already have a motherboard with 4 PCIe 2.0 x8 slots then fine, build on it, but don't buy a new PCIe 2.0 motherboard for 4 mid-range previous-generation GPUs.

You would be better off with 2 mid-range to high-end GPUs in a new system:

GTX680 (£370)
I43R11-NATHAN_dhfr36_6-11-32-RND9487_1 (WU 4442761): sent 12 May 2013 3:05:32 UTC, returned 12 May 2013 12:43:51 UTC, completed and validated; run time 13,641.54 s, CPU time 13,612.11 s, credit 70,800.00
A GTX670 @ £290 seems better value to me.
Two GTX670s would require four 6-pin PCIe power connectors, while the 680s need at least one 8-pin connector.

GTX560Ti (£150)
I42R18-NATHAN_dhfr36_5-15-32-RND7485_0 (WU 4427556): sent 5 May 2013 11:25:59 UTC, returned 5 May 2013 19:23:29 UTC, completed and validated; run time 28,503.59 s, CPU time 4,085.87 s, credit 70,800.00

4 GTX560Tis would require eight 6-pin PCIe power connectors - a very high-end modular PSU.

If mid-range cards are more within your budget, the GTX650Ti Boost and the GTX660 (£165) look like good value for their performance. The GTX660Ti sits between the 670 and the 660.

Even a GTX650Ti is £115 and can match a GTX560Ti for performance while costing less to buy and less to run:

I73R20-NATHAN_dhfr36_6-12-32-RND2263_0 (WU 4443459): sent 12 May 2013 6:47:31 UTC, returned 12 May 2013 14:48:49 UTC, completed and validated; run time 28,497.36 s, CPU time 28,410.83 s, credit 70,800.00

These only require one 6-pin PCIe power connector each.

PCIe 3.0 isn't much faster than PCIe 2.0 here, but it is future-proofed and lets you use a 3rd-generation Intel CPU, which uses less electricity, as do the GeForce 600 GPUs.

Remember - only get what you can afford!


Thanks for the quick responses, guys.

How would it be advantageous to build on newer MOBO architectures, where the boards that handle 2-4 GPUs range from $300-$400? If lane count and GT/s per lane aren't determining factors, wouldn't I be better off going with the cheaper, older board setups? My main computer has an ASUS Rampage with 7950s, but I can't see building another one of those to dedicate to GPUGRID when I can get a MOBO for 1/4 of the price with all other things (PSU, HDD, etc.) being the same. I guess that assumes the lower lane count/slower GT/s is fine, though. Is that a fair assumption? Would an older board bottleneck with 2 or 4 560Tis on it?

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 29893 - Posted: 13 May 2013 | 11:49:28 UTC - in response to Message 29887.
Last modified: 13 May 2013 | 11:49:59 UTC

How would it be advantageous to build on newer MOBO architectures, where the boards that handle 2-4 GPUs range from $300-$400?

4-slot motherboards start at ~$300, but you can get a two-slot PCIe 3.0 motherboard for less than $100 (which was my point)! There are many to choose from, and some even have 3 PCIe slots (albeit the third slot is only PCIe 2.0).
MSI has several LGA1155 Z77A-G45 models to choose from with more than one PCIe 3.0 slot. Gigabyte, Asus, ASRock, and Intel also have numerous Z77 models.

Jack Lightholder
Joined: 1 Jul 11
Posts: 13
Credit: 643,725,988
RAC: 103,634
Message 29913 - Posted: 13 May 2013 | 23:50:12 UTC - in response to Message 29893.

How would it be advantageous to build on newer MOBO architectures, where the boards that handle 2-4 GPUs range from $300-$400?

4-slot motherboards start at ~$300, but you can get a two-slot PCIe 3.0 motherboard for less than $100 (which was my point)! There are many to choose from, and some even have 3 PCIe slots (albeit the third slot is only PCIe 2.0).
MSI has several LGA1155 Z77A-G45 models to choose from with more than one PCIe 3.0 slot. Gigabyte, Asus, ASRock, and Intel also have numerous Z77 models.


Ah! Thanks, now we're on the same page, haha. I'm going to pick up three 660 Ti cards to fill my ASUS Rampage, move both my AMD cards to my second computer, and then eventually build a third machine around the two-card design. Hopefully I'll be back on GPUGRID in the coming weeks.

Last question: does the amount of onboard RAM on the card (1GB, 2GB, 3GB) have a big impact for GPUGRID? Are any of them overkill/underpowered for the application?

Thanks!

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 29950 - Posted: 14 May 2013 | 23:12:35 UTC - in response to Message 29913.

I suggest you also look at and compare the prices of the GTX660s and the GTX670s.
If you can get a 3GB GPU for a little more than the 2GB version it might be worth it; however, I wouldn't spend much more. The GTX650Ti comes in both 1GB and 2GB versions, but cards above that are all 2GB or more (with the exception of a 1.5GB OEM GTX660).

Jack Lightholder
Joined: 1 Jul 11
Posts: 13
Credit: 643,725,988
RAC: 103,634
Message 29967 - Posted: 15 May 2013 | 17:03:52 UTC

Excellent, thank you everyone. I picked up two GTX 660 Ti (2GB) cards and will hopefully have them up and running by the end of the week.

~Jack

matlock
Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Message 29971 - Posted: 15 May 2013 | 18:47:23 UTC
Last modified: 15 May 2013 | 19:45:57 UTC

GoodFodder built a machine with two 650 Tis and an Intel G2020 dual-core CPU.

That CPU only supports PCIe 2.0 no matter the motherboard. I'm curious whether there is a bottleneck with this type of setup at PCIe 2.0 in dual x8 mode. If not, would there potentially be a bottleneck with two 680s?

I have found it inconvenient to do much crunching on my main desktop machine, so I'm also thinking about building a dedicated machine on a budget. It seems that two 660s would be cheaper and perhaps gain a better RAC than one 680?

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 29979 - Posted: 15 May 2013 | 21:28:26 UTC - in response to Message 29971.
Last modified: 15 May 2013 | 21:30:26 UTC

PCIe 2.0 at 8 lanes should still be enough, as in "barely measurably slower". But the faster the GPU, the more performance is lost. In the past, I think about 5-10% performance loss was measured for a GTX580 at 4 lanes, although I forget whether that was PCIe 1.0 or 2.0.

Two GTX660s will definitely be faster than a GTX680 (just add up their raw SP performance), but they will also use more power. I don't think the GTX680 is a good deal; there's too much of a premium asked just for being the fastest regular nVidia GPU (not counting the extreme models).

BTW: if your main display is disturbed too much by crunching, you could drive it from the IGP in your i5. This costs almost no power at all (it's by far the most efficient GPU available), and using Virtu (which should have been bundled with your mainboard, or will work with it anyway) you can still use your big GPU for games.

MrS

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 29982 - Posted: 15 May 2013 | 22:23:47 UTC - in response to Message 29971.
Last modified: 15 May 2013 | 22:29:46 UTC

Presently, PCIe bandwidth doesn't appear to be overly important. With previous app versions and generations of GPU it was more so (PCIe 2.0 x16 to x4 lost something like 8% for high-end GTX500s). Lesser GPUs use relatively less of it, so it's intrinsically more important for larger GPUs to have more PCIe bandwidth, just as it's better to have a more efficient PSU, faster system memory, and a faster CPU to support bigger GPUs.

You actually need an i5-3xxx Intel CPU to get PCIe 3.0.
Processors such as the i5-3330, i5-3470, i5-3570 and i5-3570K are PCIe 3.0, as are the i7-3xxx processors. The i3 CPUs are only PCIe 2.0, as are the i7-3930K and i7-3820 (though some people reported using a modified BIOS to achieve PCIe 3.0).

A dual-core 2.9GHz 22nm CPU isn't going to struggle too much supporting two high-end crunching GPUs, but it's a bad balance for normal use. GPU performance might take a slight hit because the GPUs have to run at PCIe 2.0 x8/x8, and another slight hit because it's an entry-level CPU, but for crunching it's a cheap solution. You would save $125 by buying a G2020 instead of an i5-3330.

Two GTX660s would get you a better RAC, and they are cheaper to buy.
RAC ~278,000 each on W7 (tested). Perhaps ~300K on XP and Linux (untested by me, but others complete tasks at this rate).
The GTX680 brings in ~435K on XP, ~395K on W7 (with some variation).

GTX680 ~£370
GTX660 ~£160 (so two cost £320), do 40% more work, and save you £50.

The running cost would be higher for two GTX660s, but how important that is depends on your electricity cost and other components, especially the PSU.
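A quick check of those figures (Python, using the W7 RAC numbers and prices quoted above):

    # Compare total cost, total RAC, and RAC per pound for the two options.
    options = {
        "1x GTX680": {"price_gbp": 370, "rac_each": 395_000, "count": 1},
        "2x GTX660": {"price_gbp": 160, "rac_each": 278_000, "count": 2},
    }
    for name, o in options.items():
        cost = o["price_gbp"] * o["count"]
        rac = o["rac_each"] * o["count"]
        print(f"{name}: £{cost}, RAC ~{rac:,}, ~{rac / cost:,.0f} RAC/£")

Two GTX660s come out at £320 for ~556,000 RAC, i.e. roughly 40% more work than the GTX680's ~395,000 for £50 less, matching the comparison above.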

matlock
Joined: 12 Dec 11
Posts: 34
Credit: 86,423,547
RAC: 0
Message 30094 - Posted: 19 May 2013 | 22:19:32 UTC

It looks like the Intel Z87 boards will be available soon, in addition to the new Haswell processors.

Here is the line of Asus boards: http://benchmarkreviews.com/index.php?option=com_content&task=view&id=22905&Itemid=47

I haven't found any information regarding additional PCIe 3.0 support yet.

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30097 - Posted: 20 May 2013 | 0:03:50 UTC - in response to Message 30094.
Last modified: 6 Jun 2013 | 11:34:04 UTC

Some boards supposedly support two PCIe 3.0 x16 slots (at x16), and up to four PCIe 3.0 slots at x8, but the processors won't.

Intel's 4th-generation desktop CPUs don't bring more PCIe lane support. You might think that the i7-4770K would add 16 lanes, but it doesn't, and it still uses dual-channel RAM - only the server parts will be quad-channel.

dskagcommunity
Joined: 28 Apr 11
Posts: 456
Credit: 817,865,789
RAC: 0
Message 30101 - Posted: 20 May 2013 | 8:37:41 UTC
Last modified: 20 May 2013 | 8:41:12 UTC

Soon I can give an answer on how a 570 runs in a PCIe 1.0 x4 slot, because I got my next one on eBay :D If it runs well, I think the sort of PCIe ports you buy with your new board isn't important :)
Even now, two 570s are running perfectly in PCIe 1.0 x16 slots, which means each has the bandwidth of PCIe 2.0 x8 or roughly PCIe 3.0 x4.

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30104 - Posted: 20 May 2013 | 10:09:36 UTC

Haswell for desktops will quite probably stay at "just" 16 PCIe lanes - simply because we didn't hear anything else up to now, and because it's enough for most people; for the others there's always socket 2011. And Intel couldn't just increase the number of lanes for the top model, as this would require either different silicon (far too expensive) or unused silicon on all the smaller chips (far too expensive), as well as a different socket to route those additional lanes.

MrS

dskagcommunity
Joined: 28 Apr 11
Posts: 456
Credit: 817,865,789
RAC: 0
Message 30590 - Posted: 31 May 2013 | 12:22:36 UTC
Last modified: 31 May 2013 | 12:23:24 UTC

OK, tested: there is no difference between the speeds of any kind of PCIe slot. I know there are faster cards out there than 570s, but these run fine even in PCIe 1.0 x4 :) GPUGRID needs fast GPUs, but the rest of the computer you can get from garbage, so the project is a good recycler ^^

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30601 - Posted: 1 Jun 2013 | 7:59:33 UTC - in response to Message 30104.
Last modified: 1 Jun 2013 | 8:00:40 UTC

Haswell for desktops will quite probably stay at "just" 16 PCIe lanes - simply because we didn't hear anything else up to now, and because it's enough for most people; for the others there's always socket 2011. And Intel couldn't just increase the number of lanes for the top model, as this would require either different silicon (far too expensive) or unused silicon on all the smaller chips (far too expensive), as well as a different socket to route those additional lanes.

MrS

Haswell does need a different socket - LGA1150, as opposed to LGA1155 for the i7-3770K. So you would need an Intel 8-series motherboard for an ix-4xxx CPU. However, it is still 16 lanes, at least for single-socket boards. As for silicon space, it's 20 shaders too fat.

OK, tested: there is no difference between the speeds of any kind of PCIe slot. I know there are faster cards out there than 570s, but these run fine even in PCIe 1.0 x4 :) GPUGRID needs fast GPUs, but the rest of the computer you can get from garbage, so the project is a good recycler ^^

Looks like a 0.7% difference, if I've interpreted your results correctly. A GTX570 is a big and powerful enough card, and would have taxed the PCIe bus more under previous apps. It's good that it doesn't with the present app, except for those of us who spent extra on PCIe 3.0 setups based on previous app performance.

It's hard to know how much PCIe bandwidth is being used by a GF500-series GPU. While we can't go entirely by CPU usage, it does suggest much more limited PCIe usage by the GF500 and GF400 cards. Last time I looked, a GTX470 only needed ~8% of a CPU core/thread.
On the other hand, a full CPU core/thread is needed for the GF600 (and presumably GF700) cards. So these cards are more likely to be impacted by a PCIe reduction due to high CPU usage, as well as being the new top performers.

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30604 - Posted: 1 Jun 2013 | 12:11:10 UTC - in response to Message 30601.
Last modified: 1 Jun 2013 | 12:12:30 UTC

OK, tested: there is no difference between the speeds of any kind of PCIe slot. I know there are faster cards out there than 570s, but these run fine even in PCIe 1.0 x4 :) GPUGRID needs fast GPUs, but the rest of the computer you can get from garbage, so the project is a good recycler ^^

Looks like a 0.7% difference, if I've interpreted your results correctly. A GTX570 is a big and powerful enough card, and would have taxed the PCIe bus more under previous apps. It's good that it doesn't with the present app, except for those of us who spent extra on PCIe 3.0 setups based on previous app performance.

It's hard to know how much PCIe bandwidth is being used by a GF500-series GPU. While we can't go entirely by CPU usage, it does suggest much more limited PCIe usage by the GF500 and GF400 cards. Last time I looked, a GTX470 only needed ~8% of a CPU core/thread.
On the other hand, a full CPU core/thread is needed for the GF600 (and presumably GF700) cards. So these cards are more likely to be impacted by a PCIe reduction due to high CPU usage, as well as being the new top performers.

On my 650 Ti GPUs I see about a 2% loss in speed in a PCIe 2.0 x4 slot compared to PCIe 2.0 x16. On Einstein the difference is MUCH larger.

Off topic: my house got hit directly by lightning a night ago. Luckily I have a lightning rod, but the energy still goes into the ground and feeds back up through the phone line, even though the DSL has a surge protector inline. It took out the DSL modem (fried), the switch the modem was on (5 ports good, 3 bad), one of the motherboards on a machine connected to that switch, and maybe 1 GPU on a different box too. I spent half the day yesterday getting the network diagnosed (4 switches and 11 machines) and getting the DSL and routers configured and working again. Third lightning hit in the 20+ years I've lived here; the house is the highest one in the immediate vicinity, which is probably the reason...

skgiven
Volunteer moderator
Volunteer tester
Joined: 23 Apr 09
Posts: 3968
Credit: 1,995,359,260
RAC: 0
Message 30619 - Posted: 1 Jun 2013 | 20:18:01 UTC - in response to Message 30604.

Some good info on PCIe performance with a GF600 GPU. A few more such measurements and we could put together a solid table.
Going by relative performance, I would expect a GTX680's loss to be around twice that, and even going from PCIe 3.0 x16 to PCIe 2.0 x4 might only be ~6%. Not much of a hit for most.
Obviously you wouldn't want to be sticking a Titan in a PCIe 1.0 x4 slot, but then if you can afford a Titan, you wouldn't be using a PCIe 1.0 board.
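A toy extrapolation of that estimate (Python; it simply assumes the PCIe penalty scales with relative GPU throughput, which is an assumption, not a measurement):

    # Scale the measured ~2% loss (650 Ti, PCIe 2.0 x16 -> x4, above) by a
    # rough relative-throughput factor to guess the hit for a faster card.
    measured_loss_650ti = 0.02  # from the post above
    relative_throughput = {"GTX650Ti": 1.0, "GTX680": 2.0}  # rough SP ratio

    for card, factor in relative_throughput.items():
        est = measured_loss_650ti * factor
        print(f"{card}: estimated ~{est:.0%} loss at PCIe 2.0 x4")

That gives ~4% for a GTX680 dropping to PCIe 2.0 x4 within the same generation; starting from PCIe 3.0 x16 the bandwidth drop is larger, which is where the ~6% guess comes from.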

Sorry to hear about your strike.

flashawk
Joined: 18 Jun 12
Posts: 297
Credit: 3,572,627,986
RAC: 0
Message 30623 - Posted: 1 Jun 2013 | 21:53:38 UTC - in response to Message 30604.

Off topic: my house got hit directly by lightning a night ago. Luckily I have a lightning rod, but the energy still goes into the ground and feeds back up through the phone line, even though the DSL has a surge protector inline. It took out the DSL modem (fried), the switch the modem was on (5 ports good, 3 bad), one of the motherboards on a machine connected to that switch, and maybe 1 GPU on a different box too. I spent half the day yesterday getting the network diagnosed (4 switches and 11 machines) and getting the DSL and routers configured and working again. Third lightning hit in the 20+ years I've lived here; the house is the highest one in the immediate vicinity, which is probably the reason...


Did you lose any computers or GPUs? Must have been pretty loud, eh? I've been watching it on CNN & FOX; you guys have been getting hammered pretty hard with tornadoes and T-storms back east, plus a heat wave too. This time of the year we worry about forest fires - some real dimwits out there camping.

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30627 - Posted: 1 Jun 2013 | 22:29:13 UTC - in response to Message 30623.

Off topic: my house got hit directly by lightning a night ago. Luckily I have a lightning rod, but the energy still goes into the ground and feeds back up through the phone line, even though the DSL has a surge protector inline. It took out the DSL modem (fried), the switch the modem was on (5 ports good, 3 bad), one of the motherboards on a machine connected to that switch, and maybe 1 GPU on a different box too. I spent half the day yesterday getting the network diagnosed (4 switches and 11 machines) and getting the DSL and routers configured and working again. Third lightning hit in the 20+ years I've lived here; the house is the highest one in the immediate vicinity, which is probably the reason...

Did you lose any computers or GPU's? Must have been pretty loud, eh? I've been watching it on CNN & FOX, you guys have been getting hammered pretty hard with tornados and T-storms back east plus a heat wave too. This time of the year we worry about forest fires, some real dimwits out there camping.

I woke up at 2:26 AM (looked at the clock next to the bed). I didn't know why, but I got up and wandered around the house for a few minutes. Then there was another close hit about 10 minutes later. Took a look at the network and guess what: it went down at 2:26 AM. I worked until about 5 AM and got the network mostly running, but the DSL modem was completely fried, so no internet. Lost a switch (had a spare, though), one 970 motherboard dead, and 1 GPU is acting flaky, probably damaged. It won't run GPUGrid any more but will run some other projects. I think I was very lucky not to have lost more. A friend lent me his old DSL modem, as he switched to cable a couple of months ago (over 20x faster than DSL and cheaper), so at least I'm back on the internet. I sure wish I could get that cable connection, but they stopped the line about a mile from here. In fact, I might not have had a problem at all with cable, as the problem is ALWAYS that the surge feeds into the house through the copper DSL line. I wonder if optical DSL would solve the problem, but it's not available here yet. 1.5 Mbps copper only - thanks, CenturyLink. Not! :-)

ExtraTerrestrial Apes
Volunteer moderator
Volunteer tester
Joined: 17 Aug 08
Posts: 2705
Credit: 1,311,122,549
RAC: 0
Message 30654 - Posted: 4 Jun 2013 | 20:10:27 UTC - in response to Message 30601.

On the other hand, a full CPU core/thread is needed for the GF600 (and presumably GF700) cards. So these cards are more likely to be impacted by a PCIe reduction due to high CPU usage

I don't think so, as the added CPU usage appears just to be polling the GPU, i.e. equivalent to SWAN_SYNC=0. On older cards SWAN_SYNC=1 is still being used.

And thanks for clarifying the socket issue. In my mind it was clear that I meant "you'd need a larger socket for more lanes"... but that is not what I actually wrote :p

MrS

Beyond
Joined: 23 Nov 08
Posts: 1112
Credit: 6,162,416,256
RAC: 0
Message 30915 - Posted: 22 Jun 2013 | 5:22:27 UTC - in response to Message 30604.
Last modified: 22 Jun 2013 | 5:30:22 UTC

Off topic: my house got hit directly by lightning a night ago. Luckily I have a lightning rod, but the energy still goes into the ground and feeds back up through the phone line, even though the DSL has a surge protector inline. It took out the DSL modem (fried), the switch the modem was on (5 ports good, 3 bad), one of the motherboards on a machine connected to that switch, and maybe 1 GPU on a different box too. I spent half the day yesterday getting the network diagnosed (4 switches and 11 machines) and getting the DSL and routers configured and working again. Third lightning hit in the 20+ years I've lived here; the house is the highest one in the immediate vicinity, which is probably the reason...

I think maybe Thor is unhappy with me. Last night we had another big lightning storm and 2.75 inches of rain. This time the surge came through the power lines. Lost a network port on one motherboard and an HD 5850 GPU on another PC. Five GPUGrid WUs crashed and burned. Swapped in a spare GTX 460, so the end result is that I have one more GPUGrid client running. Maybe Thor just likes GPUGrid :-)


