Message boards : Graphics cards (GPUs) : Nvidia GT300
GPU specifications: Guru3d, Brightsideofnews | |
ID: 12937 | Rating: 0 | rate: / Reply Quote | |
It all sounds good but no news yet on when it will be out, other than the "in the next few weeks" line. | |
ID: 12938 | Rating: 0 | rate: / Reply Quote | |
From Anandtech: "Widespread availability won't be until at least Q1 2010." (Anandtech) | |
ID: 12941 | Rating: 0 | rate: / Reply Quote | |
Some more technical data on Nvidia's G300. | |
ID: 12961 | Rating: 0 | rate: / Reply Quote | |
How big of a loan would one have to apply for to get one of these things? Do I need to be looking at a ten year term? | |
ID: 12969 | Rating: 0 | rate: / Reply Quote | |
It is the 25 percent deposit that I am worried about! | |
ID: 12982 | Rating: 0 | rate: / Reply Quote | |
How big of a loan would one have to apply for to get one of these things? Do I need to be looking at a ten year term? I think they will be looking at similar pricing to the HD5870, as that's their main competition. The GT200-based cards will probably fall in price in the short term, so they can compete against ATI until the GT300 gets into volume production. ____________ BOINC blog | |
ID: 12988 | Rating: 0 | rate: / Reply Quote | |
I have heard 20% higher than current price of GTX295. | |
ID: 12994 | Rating: 0 | rate: / Reply Quote | |
The best article is from realworldtech.com. It's a good read; they know what they are talking about. Will GPUGRID take advantage of the double precision capabilities? Does scientific computing cache well, i.e. does it have data locality? | |
ID: 13014 | Rating: 0 | rate: / Reply Quote | |
The best article is from realworldtech.com. It's a good read; they know what they are talking about. Will GPUGRID take advantage of the double precision capabilities? Does scientific computing cache well, i.e. does it have data locality? You'll have to wait for someone on the GPUGRID project team to check whether their program, and the underlying science, even have a significant need for double precision, which would probably double the amount of GPU board memory needed to make full use of the same number of GPU cores. In case you're interested in the Milkyway@home project, I've already found that the GPU version of their application requires a card with double precision support, and therefore a 200-series chip if the card uses an Nvidia GPU. It looks worth checking whether that's a good use for the many GTX 260 cards that show a rather high error rate under GPUGRID, though. They've already said that their underlying science needs double precision to produce useful results. As for caching well, I'd expect that to depend heavily on how the program was written, and not to be the same for all scientific computing. | |
ID: 13034 | Rating: 0 | rate: / Reply Quote | |
We will take advantage of the 2.5 Tflops single precision. | |
ID: 13052 | Rating: 0 | rate: / Reply Quote | |
We will take advantage of the 2.5 Tflops single precision. With those flops, that card would finish a current WU in a little under 2 hours, if my math is correct, which it may not be. | |
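Roughly like this, scaling by the quoted peak figures (a back-of-envelope sketch only; the GTX 285 peak below is the commonly quoted number, and the baseline WU time is an assumption, not a measured GPUGRID figure):

```python
# Back-of-envelope scaling, not a benchmark. The 4.5 h baseline is an ASSUMED
# runtime for a current WU on a GTX 285, used purely to illustrate the ratio.
gtx285_peak_tflops = 1.06        # 240 SPs * 3 flops/clock * ~1.476 GHz
gt300_quoted_tflops = 2.5        # figure quoted above in this thread

speedup = gt300_quoted_tflops / gtx285_peak_tflops     # ~2.4x
assumed_wu_hours_on_gtx285 = 4.5
print(f"estimated WU time: {assumed_wu_hours_on_gtx285 / speedup:.1f} h")   # ~1.9 h
```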
ID: 13054 | Rating: 0 | rate: / Reply Quote | |
My confidence in NVidia has dropped significantly. | |
ID: 13070 | Rating: 0 | rate: / Reply Quote | |
Scientific accuracy can be obtained/validated in more than one way. | |
ID: 13093 | Rating: 0 | rate: / Reply Quote | |
My confidence in NVidia has dropped significantly. Apparently the GTX260 and GTX275 have been killed off. Better get your last orders in. They expect the GTX295 will be next. A pity they haven't got anything to replace them with yet (i.e. the GT300-based cards). ____________ BOINC blog | |
ID: 13101 | Rating: 0 | rate: / Reply Quote | |
My confidence in NVidia has dropped significantly. Doesn't sound too good for NVidia at the moment. Nvidia kills GTX285, GTX275, GTX260, abandons the mid and high end market: http://www.semiaccurate.com/2009/10/06/nvidia-kills-gtx285-gtx275-gtx260-abandons-mid-and-high-end-market/ Current NV high-end cards dead, Fermi so far vapor, development stopped on all chipsets, legal problems with Intel. Add that to the Sony PS3 no-Linux debacle and I'd think GPUGRID might want to accelerate the development of an ATI client. | |
ID: 13102 | Rating: 0 | rate: / Reply Quote | |
I have run across some other discussion threads that don't seem to see things quite so direly ... only time will tell whether all this is true and how it will shake out. Maybe Nvidia is dead or will be dead ... but it's not as if all the current cards are going to die tomorrow ... so there will be plenty of support for months to come. | |
ID: 13105 | Rating: 0 | rate: / Reply Quote | |
Guys the G300 is simply amazing! | |
ID: 13107 | Rating: 0 | rate: / Reply Quote | |
Showing a mock-up that is not functional at a press briefing? Makes sense to me ... who in their right mind wants to risk one of the few working prototypes to butterfingered execs or press flacks? The issue is: the mock-up was presented as the real item, and it was not at a press briefing, it was at the GPU Technology Conference, in front of developers and investors. If they had a real card, it would have made sense to me to actually show it, because at that time rumours were already spreading that they did not have one. A few smudges on a real card, or even a damaged one if dropped, would be nothing compared to a stain on the NVidia image. It reminds me of a (freely reproduced) quote by an Intel exec many years ago when they introduced the microchip: "Someone asked me 'how are they going to service such a small part when one breaks?' I replied 'if one breaks there are plenty more where that one came from'; they just did not understand the concept of a replaceable chip." If you drop the shell of a real GT300, you replace the shell. If you cannot even show a real shell, what does that say about the egg? Sounds like the chicken won here. (And believe me, I'd love to have a nest of GT300s today.) | |
ID: 13112 | Rating: 0 | rate: / Reply Quote | |
Showing a mock-up that is not functional at a press briefing? Makes sense to me ... who in their right mind wants to risk one of the few working prototypes to butterfingered execs or press flacks? Still sounds like a dog-and-pony show ... And I am sorry, why is waving around a "real" card that much more impressive than waving about a "fake" one? To be honest I just want to see it work in a machine. All else is meaningless ... As to GDF's comment, yes I would like to buy a GTX300, but if the only option is to buy at the top, I would rather buy a "slower" and less capable GTX295 so I can get the multi-core aspect of processing two tasks at once in the same box ... at times throughput is not just measured by how fast you can pump out the tasks, but by how many you can have in flight ... If Nvidia takes out too many levels in the structure they may lose me on my upgrade path, as productivity-wise the HD4870s I have beat all the Nvidia cards I have, and they are cheaper too ... Should OpenCL take off and several projects move to support it ... well, I would not hesitate to consider replacing failed Nvidia cards with ATI versions ... | |
ID: 13119 | Rating: 0 | rate: / Reply Quote | |
Well, I would think that none of your ATI cards could actually run faster than a GTX285 for our application. If you mean that you get more credits, that is just due to overcrediting by those projects, something that is bound to change in the long term. | |
ID: 13122 | Rating: 0 | rate: / Reply Quote | |
Hi Paul, you wrote: why is waving around a "real" card that much more impressive than waving about a "fake" one If I show my Alcoholics Anonymous membership card and proclaim 'Here is my real American Express Gold card' that may not cast such a good impression in my local Gucci store. If I pass my real American Express Gold card to the cashier and say 'could you please wrap the purse as a gift' in the same Gucci store, I may have more credibility. If you claim to have something, it can be wise to actually show it. Especially if your audience consists of investors and important business relations. If you do not have it it may be unwise to show a mockup and pretend it is the real thing. I am not against NVidia, mind you, but this was not a wise move. | |
ID: 13128 | Rating: 0 | rate: / Reply Quote | |
Wow, did you read about their Nexus plugin for Visual Studio? Now you can have a machine/GPU state debugger for CUDA applications in an integrated development environment. Heck, I might even take on GPU coding; this stuff may be a good job skill to have in the coming years. | |
ID: 13132 | Rating: 0 | rate: / Reply Quote | |
If you claim to have something, it can be wise to actually show it. Especially if your audience consists of investors and important business relations. If you do not have it, it may be unwise to show a mockup and pretend it is the real thing. I am not against NVidia, mind you, but this was not a wise move. Having worked with electronics for years I can also assure you that waving a "working" version of a card around with no anti-static protection can turn that "working" card into a dead one in moments. The cases you used are interesting but not parallel. There is no risk, other than having your card remotely scanned, in waving it about in a crowd. There is a huge risk to a chunk of electronics. You see value to the exercise and I do not ... simple as that, and we are never going to agree ... :) | |
ID: 13136 | Rating: 0 | rate: / Reply Quote | |
Well, I would think that none of your ATI cards could actually run faster than a GTX285 for our application. If you mean that you get more credits, that is just due to overcrediting by those projects, something that is bound to change in the long term. I have not yet tried to compare something that is more apples to apples, but the cards' relative speeds can be guesstimated from a comparison on Collatz, which would be more of an SP-to-SP comparison ... I know my 4870s are 3-4 times faster than the 260 cards for MW, in part because of the 260 cards' weakness in DP capability ... but my understanding is that the same carries through, to a lesser extent, between the 260 cards and the 4870s on Collatz. The new 5870 is about 2x faster than the 4870 ... and is shipping now ... Yes, the GTX300 will redress this balance somewhat; how much is still not known, as the card is not shipping yet ... then we also get into that whole price-to-performance thing ... | |
ID: 13137 | Rating: 0 | rate: / Reply Quote | |
There is a fundamental flaw with NVidia. They sell GPUs that are excellent for gamers and for crunchers, but they are expensive due to the research and design of cutting-edge technology. Partially as a result of this, they don't sell well against Intel in the low-end market – where most of the sales occur. Unfortunately for NVidia, Intel have proprietary rights on chip design and can therefore hold NVidia to ransom – and have been doing so for some time! With a year of financial instability it is little wonder that the manufacturers of a pinnacle technological design are struggling. When you are faced with competitors that are capable of flexing considerable muscle (buy our GPUs or you won't get our CPUs) and governments who are scared to help NVidia, you are in a difficult position! | |
ID: 13143 | Rating: 0 | rate: / Reply Quote | |
This is just an FYI, I do not want to start a new discussion. | |
ID: 13162 | Rating: 0 | rate: / Reply Quote | |
So by the end of the year there will be some G300s out (I hope more than a few). | |
ID: 13165 | Rating: 0 | rate: / Reply Quote | |
There is a fundamental flaw with NVidia. They sell GPUs that are excellent for gamers and for crunchers, but they are expensive due to the research and design of cutting-edge technology. Partially as a result of this, they don't sell well against Intel in the low-end market – where most of the sales occur. Unfortunately for NVidia, Intel have proprietary rights on chip design and can therefore hold NVidia to ransom – and have been doing so for some time! With a year of financial instability it is little wonder that the manufacturers of a pinnacle technological design are struggling. When you are faced with competitors that are capable of flexing considerable muscle (buy our GPUs or you won't get our CPUs) and governments who are scared to help NVidia, you are in a difficult position! That's a little out there, to think a company will fail from one generation. AMD is still with us. The 5870 is not 2x faster; it is bottlenecked by bandwidth, specifically L1 cache. On Milkyway the card gets 2 Tflops, which is a perfect match for the 2 TB/s of bandwidth. Nvidia's L1 bandwidth on Fermi should be 3 TB/s, so the card will be very fast. Something that should be noted about ATI's architecture is that it was designed for DX10, not GPGPU. Not all applications can take full advantage of VLIW or vectors. Nvidia has spent very little on R&D lately; GT200 and G92 were very small tweaks. Fermi is the same basic architecture with more programmability and cache. How GPUGRID performs with ATI is a mystery; it's going to come down to bandwidth and ILP. | |
ID: 13259 | Rating: 0 | rate: / Reply Quote | |
well said. | |
ID: 13262 | Rating: 0 | rate: / Reply Quote | |
I did not say anyone would fail. I suggested that a plan to save the company was in place, and this does not include selling old-line GPUs! Nice technical insight, but businesses go bust because they can't generate enough business to pay off their debts in time, not because of the technical details of a future architecture. | |
ID: 13270 | Rating: 0 | rate: / Reply Quote | |
We will take advantage of the 2.5 Tflops single precision. You expect Fermi to hit more than a 2.4 GHz shader clock? The extra MUL is gone with Fermi. The theoretical peak throughput for single precision is just: number of SPs * 2 * shader clock. That means for the top model with 512 SPs and a clock of 1.7 GHz (if nv reaches that) we are talking about 1.74 TFlop/s theoretical peak in single precision. | |
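In numbers (a sketch only; the GTX 285 figures are the commonly quoted ones and the 1.7 GHz Fermi clock is, of course, speculation):

```python
# Theoretical single precision peak = SPs * flops per SP per clock * shader clock.
# GT200 was quoted with MAD + MUL = 3 flops/clock; Fermi drops the extra MUL, so 2 flops/clock.
def peak_sp_tflops(num_sp, flops_per_clock, shader_clock_ghz):
    return num_sp * flops_per_clock * shader_clock_ghz / 1000.0

gtx285 = peak_sp_tflops(240, 3, 1.476)   # ~1.06 TFlop/s
fermi  = peak_sp_tflops(512, 2, 1.700)   # ~1.74 TFlop/s (if nv reaches 1.7 GHz)
print(f"GTX 285 ~ {gtx285:.2f} TFlop/s, Fermi @ 1.7 GHz ~ {fermi:.2f} TFlop/s")

# Shader clock needed to actually hit the quoted 2.5 TFlop/s with 512 SPs:
print(f"needed clock: {2.5 * 1000 / (512 * 2):.2f} GHz")   # ~2.44 GHz
```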
ID: 13429 | Rating: 0 | rate: / Reply Quote | |
I have not yet tried to compare something that is more apples to apples, but the cards' relative speeds can be guesstimated from a comparison on Collatz, which would be more of an SP-to-SP comparison ... Collatz does not run a single floating point instruction on the GPU; it's pure integer math. Nvidias are currently slower there because 32-bit integer math is not exactly one of the strengths of Nvidia GPUs, and their memory controller and cache system have more difficulty with the random accesses necessary there (coalescing the memory accesses is simply not possible). But integer operations get a significant speedup with Fermi, and the new cache system as well as the new memory controller should be able to handle their tasks much better than current Nvidia GPUs do. With Fermi I would expect a significant speedup (definitely more than a factor of 2 compared to a GTX285) for Collatz. How badly Nvidia currently does there (originally I expected them to be faster than ATI GPUs) is maybe clearer when I say that the average utilization of the 5 slots of each VLIW unit of the ATI GPUs is actually only ~50% on Collatz (MW arrives at ~4.3/5, i.e. 86% on average). That is also the reason the GPUs consume less power on Collatz than with MW. | |
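For anyone wondering what "pure integer math" means here, this is just the textbook 3n+1 iteration, not the project's actual kernel, but it shows that every step is shifts, adds and a data-dependent branch:

```python
# Textbook Collatz iteration (a sketch, not Collatz@home's actual GPU kernel).
# Every step is integer-only, and which branch is taken depends on the data,
# which is part of why the access/branch pattern is awkward for current GPUs.
def collatz_steps(n):
    """Count steps of T(n) = n/2 (n even) or (3n+1)/2 (n odd) until n reaches 1."""
    steps = 0
    while n > 1:
        if n & 1:                    # odd: multiply-add, then halve
            n = (3 * n + 1) >> 1
        else:                        # even: just a shift
            n >>= 1
        steps += 1
    return steps

print(collatz_steps(27))             # 27 has a famously long trajectory
```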
ID: 13430 | Rating: 0 | rate: / Reply Quote | |
"The 5870 is not 2x faster; it is bottlenecked by bandwidth, specifically L1 cache. [..] Nvidia's L1 bandwidth on Fermi should be 3 TB/s, so the card will be very fast."
For some problems it is 2x as fast (or even more), just think of MW and Collatz. And the L1 bandwidth didn't change per unit and clock (each SIMD engine can fetch 64 bytes per clock from L1, and there are twenty of them ;). That means it isn't more of a bottleneck than it was with the HD4800 series. What wasn't scaled as well is the L2 cache bandwidth (only the size doubled) and the memory bandwidth. I don't know where you got the L1 bandwidth figure for Fermi from, but it is a bit speculative to assume every L/S unit can fetch 8 bytes per clock. Another estimate would be 16 SMs * 16 L/S units per SM * 4 bytes per clock = 1024 bytes per clock (Cypress stands at 1280 bytes/clock, but at a significantly lower clock speed and with more units). With a clock of about 1.5 GHz one would arrive at roughly half of your figure.
"On Milkyway the card gets 2 Tflops, which is a perfect match for the 2 TB/s of bandwidth."
As said, a HD5870 has only about 1.1 TB/s of L1 cache bandwidth. The 2 TB/s figure someone came up with actually adds the L1 cache and shared memory bandwidth together. And I can tell you that it is nothing to consider at all for the MW application. First, the MW ATI application doesn't use the shared memory (the CUDA version does, but I didn't find it useful for ATIs), and second, the MW applications are so severely compute bound that the bandwidth figures don't matter at all. Depending on the problem one has between 5 and 12 floating point operations per fetched byte (not per fetched value), and we are speaking about double precision operations. A HD5870 comes close to about 400 GFlop/s (double precision) over at MW; that means with the longer WUs (which consume less bandwidth than the shorter ones) one needs only about 33 GB/s of L1 bandwidth. Really nothing to write home about. That is a bit different with Collatz, which is quite bandwidth hungry - not exactly cache bandwidth hungry but memory bandwidth hungry. A HD5870 peaks somewhere just below 100 GB/s of used bandwidth. And that with virtually random accesses (16 bytes are fetched by each access) to a 16 MB buffer (larger than all caches), which is actually a huge lookup table with 2^20 entries. Quite amazing that a HD5870 is able to pull that off (from some memory bandwidth scaling experiments with a HD4870 I first thought it would be a bottleneck). Obviously the on-chip buffers are quite deep, so they can find some consecutive accesses to raise the efficiency of the memory controllers. Contrary to Nvidia, the coalescing of the accesses is apparently not that important for ATI cards.
"Something that should be noted about ATI's architecture is that it was designed for DX10, not GPGPU."
Actually, Cypress was designed for DX11 with a bit of GPGPU in mind, which is now even part of the DirectX 11 specification (the DX compute shader). In fact, DX11 required that the shared memory be doubled compared to what is available on the latest DX10.x compatible cards.
"GT200 and G92 were very small tweaks. Fermi is the same basic architecture with more programmability and cache."
I would really oppose these statements, as Fermi is going to be too large a step to be considered the same architecture.
"How GPUGRID performs with ATI is a mystery; it's going to come down to bandwidth and ILP."
I guess GDF will know best what GPUGrid stresses most and whether there are particular weaknesses of the architectures.
Generally GPUGrid does some kind of molecular dynamics, which has the potential to run fast on ATI hardware if some conditions are met. At the moment ATI's OpenCL implementation is lacking the image extension, which really helps the available bandwidth in quite some usage scenarios. And OpenCL is far from mature right now. That means the first ATI applications may very well not show the true potential the hardware is capable of. And whether ATI cards can match Nvidia's offerings here is of course dependent on the details of the actual code and the employed algorithms to the same extent as it is dependent on the hardware itself ;) | |
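The arithmetic above, spelled out (a sketch; the clock speeds are the commonly quoted ones, and the Fermi figures are of course estimates):

```python
# Reproducing the back-of-envelope numbers from the post above (estimates, not specs).

# Cypress (HD5870): 20 SIMD engines, 64 bytes per clock each from L1, ~850 MHz core clock
cypress_l1_tbs = 20 * 64 * 0.850e9 / 1e12          # ~1.09 TB/s
# Fermi estimate: 16 SMs * 16 load/store units * 4 bytes per clock, assumed ~1.5 GHz
fermi_l1_tbs   = 16 * 16 * 4 * 1.5e9 / 1e12        # ~1.5 TB/s

# Milkyway on a HD5870: ~400 GFlop/s double precision at ~12 flops per fetched byte
mw_l1_needed_gbs = 400e9 / 12 / 1e9                # ~33 GB/s of L1 bandwidth

# Collatz lookup table: 2^20 entries * 16 bytes fetched per access = 16 MB buffer
collatz_table_mb = (2**20) * 16 / 2**20            # 16 MB

print(f"HD5870 L1 ~ {cypress_l1_tbs:.2f} TB/s, Fermi estimate ~ {fermi_l1_tbs:.2f} TB/s")
print(f"MW needs only ~ {mw_l1_needed_gbs:.0f} GB/s; Collatz table is {collatz_table_mb:.0f} MB")
```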
ID: 13431 | Rating: 0 | rate: / Reply Quote | |
As soon as we get a HD5870 I will be able to tell you. | |
ID: 13432 | Rating: 0 | rate: / Reply Quote | |
We got access to a 4850, and you were right: shared memory is still emulated via global memory, so it's of no use. | |
ID: 13454 | Rating: 0 | rate: / Reply Quote | |
We got access to a 4850, and you were right: shared memory is still emulated via global memory, so it's of no use. Not sure I am picking this up right. Am I right in thinking that the HD4850's bottleneck is due to using system memory? | |
ID: 13688 | Rating: 0 | rate: / Reply Quote | |
NVIDIA has released another GT300 card. | |
ID: 13861 | Rating: 0 | rate: / Reply Quote | |
LOL!!! | |
ID: 13872 | Rating: 0 | rate: / Reply Quote | |
Please nobody buy a G315 ever. | |
ID: 13874 | Rating: 0 | rate: / Reply Quote | |
Please nobody buy a G315 ever. I have some on Pre-Order ... ;) | |
ID: 13882 | Rating: 0 | rate: / Reply Quote | |
I've noticed that some of the recent high end HP computers offer Nvidia boards that I don't remember seeing on your list of what is recommended and what is not. | |
ID: 13885 | Rating: 0 | rate: / Reply Quote | |
robertmiles | |
ID: 13891 | Rating: 0 | rate: / Reply Quote | |
The Nvidia site said otherwise when I asked it which driver was suitable. | |
ID: 13895 | Rating: 0 | rate: / Reply Quote | |
Now, Nvidia has changed their driver downloads page since the last time I looked at it. It now says that 195.62 will work on MOST, but not all, of their laptop video cards, at least on most laptops. | |
ID: 13898 | Rating: 0 | rate: / Reply Quote | |
just try :-) | |
ID: 13899 | Rating: 0 | rate: / Reply Quote | |
Here is my opinion on the GT210, GT220 and GT315. | |
ID: 13904 | Rating: 0 | rate: / Reply Quote | |
just try :-) So far, it's worked successfully for both Collatz and Einstein. The G105M board in that laptop is listed as not suitable for GPUGRID, so I haven't tried it for GPUGRID. Recently, I looked at what types of graphics boards are available for the high-end HP desktop computers, using the option to have one built with your choice of options. As far as I could tell, the highest-end graphics boards they offer are the G210 (not the same as a GT210), the GTX 260 (with no indication of which core), and, for ATI, part of the HD4800 series. Looks like a good reason NOT to buy an HP computer now, even though they look good for the CPU-only BOINC projects. Even that much wasn't available until I entered reviews for several of the high-end HP computers, saying that I had thought of buying them until I found that they weren't available with sufficiently high-end graphics cards to use with GPUGRID. Anyone else with an HP computer (and therefore eligible for an HP Passport account) want to enter more such reviews, to see if that will persuade them even more? | |
ID: 13919 | Rating: 0 | rate: / Reply Quote | |
Seriously, notebooks and GPU-crunching don't mix well. Nobody expects a desktop GPU to run under constant load and even less so for a mobile card. It will challenge the cooling system heavily, if the GPU has any horsepower at all (e.g. not G210). | |
ID: 13930 | Rating: 0 | rate: / Reply Quote | |
MrS, as always you are quite correct, but perhaps he will be fine with Einstein, for a while at least. | |
ID: 13933 | Rating: 0 | rate: / Reply Quote | |
robertmiles | |
ID: 13970 | Rating: 0 | rate: / Reply Quote | |
robertmiles From what I've read, Dell is even worse: significant reliability problems, even compared to HP. I'm no longer capable of handling a desktop well enough to build one myself, or even to unpack one built elsewhere and then shipped here, so I'll need to choose SOME brand that offers the service of building it and unpacking it for me. Want to suggest one available in the southeast US? There's a local Best Buy, but they do not have any of the recommended Nvidia boards in stock; that's the closest I've found yet. For the high-end HP products I've been looking at lately, I've found that they offer you SOME choices in what to include, at least for the custom-built models, just not enough; and they don't send them sealed. The rest of your description could fit, though; I don't have a good way of checking. I sent email to CyberPower today asking them if they offer the unpacking service, so there's a possibility that I may just have to check more brands to find one that meets my needs. | |
ID: 13997 | Rating: 0 | rate: / Reply Quote | |
Ask around and find someone who can build a system for you. Pay them $200 and you will save money over buying an OEM HP, Dell... | |
ID: 14001 | Rating: 0 | rate: / Reply Quote | |
The gaming-related features of GT300 (or now GF100) have been revealed (link). Impressive raw power and more flexible than previous chips. | |
ID: 14256 | Rating: 0 | rate: / Reply Quote | |
Good Link MrS. | |
ID: 14263 | Rating: 0 | rate: / Reply Quote | |
Unrelated. | |
ID: 14267 | Rating: 0 | rate: / Reply Quote | |
Perhaps someone else would care to speculate Well.. :D
- 512/240 = 2.13 times faster per clock is a good starting point
- 2x the performance also requires 2x the memory bandwidth
- they'll get approximately 2x the clock speed from GDDR5 compared to GDDR3, but reduce the bus width from 512 to 384 bit -> ~1.5 times more bandwidth -> so GDDR5 is not going to speed things up, but I'd suspect it to be sufficiently fast that it doesn't hold back performance
- the new architecture is generally more efficient per clock and has the following benefits: better cache system, much more double precision performance, much more raw geometry power, much more texture filtering power, more ROPs, more flexible "fixed function hardware" -> the speedup due to these greatly depends on the application
- 40 nm alone doesn't make it much faster: the current ATIs aren't clocked much higher than their 55 and 65 nm brothers
- neither does the high transistor count add anything more: it's already included in the 512 "CUDA cores"
- and neither does 40 nm guarantee a cool card: look at RV790 vs RV870: double the transistor count, same clock speed, almost similar power consumption and significantly reduced voltage -> ATI needed to lower the voltage considerably (1.3V on RV770 to 1.0V on RV870) to keep power in check
- nVidia is more than doubling transistors (1.4 to 3.0 billion) and has already been at lower voltages of ~1.1V before (otherwise GT200 would have consumed too much power) and paid a clock speed penalty compared to G92, even at the 65 nm node (1.5 GHz on GT200 compared to 2 GHz on G92) -> nVidia can't lower their voltage as much as ATI did (without choosing extremely low clocks) and needs to power even more transistors -> I expect GT300 / GF100 to be heavily power limited, i.e. to run at very low voltages (maybe 0.9V) and to barely reach the same clock speeds as GT200 (it could run faster at higher voltages, but that would blow the power budget) -> I expect anywhere between 200 and 300W for the single chip flagship.. more towards 250W than 200W -> silent air cooling will be a real challenge and thus temperatures will be very high.. except people risk becoming deaf
- definitely high prices.. 3 billion transistors is just f*cking large and expensive
- I think GF100's smaller brother could be a real hit: give it all the features and half the crunching power (256 shaders) together with a 256 bit GDDR5 interface. That's 66% of the bandwidth and 50% of the performance per clock. However, since you'd now be at 1.6 - 1.8 billion transistors it'd be a little cheaper than RV870 and consume less power. RV870 is already power constrained: it needs 1.0V to keep power in check and thus can't fully exploit the clock speed headroom the design and process have. With fewer transistors the little GF100 could hit the same power envelope at ~1.1V. Compare this to my projected 0.9V for the big GF100 and you'll get considerably higher clock speeds at a reasonable power consumption (by today's standards..) and you'd probably end up at ~66% of the performance of a full GF100 at half the die size.
MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 14273 | Rating: 0 | rate: / Reply Quote | |
My rough guess of 512/240 = 2.13, multiplied by an improved-architecture factor (1.4), giving about 3 times the performance of a GTX 285, is only meant as a guide for anyone reading this and interested in one of these cards, as are my price speculation and power consumption guess! | |
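In numbers (a sketch of the estimate above and of the previous post's bandwidth reasoning; every figure here is speculation quoted from this thread, not a spec or a benchmark):

```python
# Back-of-envelope GT300/GF100 vs GTX 285, using only numbers quoted in this thread.
shader_ratio = 512 / 240                     # ~2.13x more shaders per clock than GT200
arch_factor = 1.4                            # assumed per-clock architecture improvement
rough_overall = shader_ratio * arch_factor   # ~3x a GTX 285, if clock speeds are comparable

# Memory bandwidth: ~2x the data rate from GDDR5, on a 384-bit bus instead of 512-bit
bandwidth_ratio = 2.0 * (384 / 512)          # ~1.5x

print(f"shaders {shader_ratio:.2f}x, arch-adjusted {rough_overall:.1f}x, bandwidth {bandwidth_ratio:.2f}x")
```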
ID: 14275 | Rating: 0 | rate: / Reply Quote | |
| |
ID: 14276 | Rating: 0 | rate: / Reply Quote | |
I can't remember what article I read a couple of days ago, but Nvidia admitted the new cards will run VERY hot. They claimed an average PC case and cooling will NOT handle more than one of these new cards. Just FYI..... | |
ID: 14282 | Rating: 0 | rate: / Reply Quote | |
It is slightly slower if compiled for 1.1. We are trying to optimize it; the only other solution is to release an application for 1.3 cards alone. Can't the BOINC server be set to send separate apps based on compute capability? If not, and if there's only a slight difference, I'd vote for the one most compatible with all cards. Another option would be differently compiled, optimized apps available to install via an app_info.xml. | |
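For anyone who hasn't used BOINC's anonymous platform mechanism before, an app_info.xml along these rough lines is what I mean (a sketch only; the file name, app name and version number below are placeholders, not GPUGRID's actual ones):

```xml
<!-- Sketch of an anonymous-platform app_info.xml; names and version are placeholders. -->
<app_info>
    <app>
        <name>acemd</name>
    </app>
    <file_info>
        <name>acemd_optimized.exe</name>
        <executable/>
    </file_info>
    <app_version>
        <app_name>acemd</app_name>
        <version_num>600</version_num>
        <coproc>
            <type>CUDA</type>
            <count>1</count>
        </coproc>
        <file_ref>
            <file_name>acemd_optimized.exe</file_name>
            <main_program/>
        </file_ref>
    </app_version>
</app_info>
```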
ID: 14285 | Rating: 0 | rate: / Reply Quote | |
Hi SK, I like the sound of your little GF100. Me too :D I just hope nVidia doesn't do it the GT200 way: don't release a mainstream version of the fat "flagship" chip at all. They've done quite some dodgy moves in the past (just remember the endless renaming) .. but I don't think they're stupid or suicidal! Best regards, MrS ____________ Scanning for our furry friends since Jan 2002 | |
ID: 14286 | Rating: 0 | rate: / Reply Quote | |
Message boards : Graphics cards (GPUs) : Nvidia GT300