Message boards : Graphics cards (GPUs) : GPU performance chart
I've just included one last chart in the "Performance" tab, featuring the long WU return time per GPU type. Please check it out and tell me if the ranking is consistent with the expected performance.
ID: 41859
> I've just included one last chart in the "Performance" tab, featuring the long WU return time per GPU type. Please check it out and tell me if the ranking is consistent with the expected performance.

Wow! I really appreciate the work you've done. The first inconsistency shows up right at the first three GPUs:

GPU          N    avg   min  max
GTX TITAN X   40  8.07  6.9  13.3
GTX 980 Ti   290  8.99  4.0  18.1
GTX 980      747  9.89  4.8  16.5

It's more evident if you look at the bars, as the 'average' box of the GTX 980 Ti is lower than that of the GTX TITAN X. But it's misleading that the GTX 980 and the GTX 980 Ti could appear faster than the GTX TITAN X, because the only reason for this is that no one with a non-WDDM OS has a GTX TITAN X. (Could someone please send me one, just to make this graph more consistent :) )

Also, mixing very different long runs in a chart like this will produce inconsistencies, especially when one batch outnumbers the others (SDOERR_ntl9evSSXX ~4.1h, GERARD_FXCXCL12_LIG ~5.5h, GERARD_VACXCL12_LIG ~5.7h on my GTX 980 Ti). Mixing OSes which have the WDDM overhead (Windows Vista & newer) with OSes which don't (Windows XP, Linux) causes the same kind of inconsistency; other such factors are the use of SWAN_SYNC and overclocking. If you want to avoid this, I can only suggest making two charts: one for WDDM OSes and one for the others. (Or four, or eight - no, that makes no sense.) But despite all these inconsistencies, please don't take this chart down. Maybe a brief note about the different OSes and SWAN_SYNC under the chart would do.
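A minimal sketch of what that WDDM / non-WDDM split could look like, assuming the raw results are available as (GPU, OS, runtime) records; the field names and the sample values here are hypothetical:

```python
from collections import defaultdict
from statistics import mean

# Illustrative records only: (gpu model, os name, runtime in hours)
results = [
    ("GTX 980 Ti", "Windows 7", 9.0),
    ("GTX 980 Ti", "Linux", 6.8),
    ("GTX TITAN X", "Windows 10", 8.1),
]

# WinXP, Server 2003 and Linux have no WDDM overhead; Vista and newer do.
NON_WDDM = ("Windows XP", "Windows Server 2003", "Linux")

def wddm_group(os_name: str) -> str:
    return "non-WDDM" if os_name in NON_WDDM else "WDDM"

stats = defaultdict(list)
for gpu, os_name, hours in results:
    stats[(wddm_group(os_name), gpu)].append(hours)

# One candlestick row (N, avg, min, max) per group and GPU
for (group, gpu), times in sorted(stats.items()):
    print(f"{group:9s} {gpu:12s} N={len(times):4d} "
          f"avg={mean(times):.2f} min={min(times):.1f} max={max(times):.1f}")
```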
ID: 41861
If the WU assignment is random (as I assume it is), a large number of samples (N) will diminish the effect of batch-related time differences, at least in terms of the average.
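This is easy to sanity-check with a small simulation, assuming WUs are drawn uniformly at random from the three batch runtimes quoted above (the uniform mix is my assumption):

```python
import random

BATCHES = [4.1, 5.5, 5.7]  # approximate per-batch runtimes in hours

random.seed(1)  # reproducible draws
for n in (10, 30, 100, 1000):
    sample = [random.choice(BATCHES) for _ in range(n)]
    print(f"N={n:5d}  avg={sum(sample) / n:.2f} h")
# As N grows, the average settles near the mixture mean (~5.1 h), so the
# batch-related scatter shrinks - provided assignment really is random.
```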
ID: 41862
Sorry, new thoughts keep popping into my mind after I press the "post reply" button. The question is: what should be the minimum number of sample WUs (N) before we can say the batch effect is small enough and the data is therefore more reliable? The reliability is independent of the number of GPUs when there are empty areas in the matrix of WDDM, GPU, SWAN_SYNC, WU batch and overclocking.

> For now I picked to show graphics cards with N>=30, which is considerably low but allows plotting the whole spectrum of GPU cards. I could raise it to 100...

I think you should not raise it.
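Rather than a fixed cutoff, one way to judge whether N is big enough is the standard error of the mean; a sketch with invented runtimes:

```python
import math
from statistics import mean, stdev

times = [4.1, 5.5, 5.7, 4.2, 5.6, 5.5, 4.0, 5.8]  # invented runtimes (hours)

n = len(times)
se = stdev(times) / math.sqrt(n)  # standard error of the mean
print(f"N={n}  mean={mean(times):.2f} h  standard error={se:.2f} h")
# A card could be plotted once its standard error drops below some tolerance
# (say 0.1 h), instead of applying the same N>=30 bar to every GPU.
```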
ID: 41863
Ok, this time I won't edit my post; instead I'll reply to myself.

> The question is: what should be the minimum number of sample WUs (N) before we can say the batch effect is small enough and the data is therefore more reliable?

As GPUs get faster, it becomes more and more evident that the GPUGrid client on a WDDM OS will never be as fast as on a non-WDDM OS. With the GTX 980 Ti, WDDM has become the main divide in performance, and that won't change unless NVIDIA integrates an (ARM) CPU into their GPUs. (I expected them to do it with the Maxwells, but apparently they didn't.) That's why I suggested the two GPU performance graphs (WDDM vs non-WDDM OS).
ID: 41864
> I've just included one last chart in the "Performance" tab, featuring the long WU return time per GPU type. Please check it out and tell me if the ranking is consistent with the expected performance.

Thank you for the new performance chart: it will help rookie and seasoned crunchers alike to gauge the overall performance of their GPU(s). Would it be possible to include GPU performance quartiles on the account page, alongside the already implemented personal records list with its per-workunit quartiles?
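The quartiles themselves would be cheap to compute; a sketch assuming one user's WU runtimes are available as a plain list (the values are invented):

```python
from statistics import quantiles  # standard library, Python 3.8+

my_times = [8.2, 8.4, 8.9, 9.1, 9.3, 9.8, 10.2]  # invented runtimes (hours)

q1, median, q3 = quantiles(my_times, n=4)  # the three quartile cut points
print(f"Q1={q1:.2f} h  median={median:.2f} h  Q3={q3:.2f} h")
# Comparing these against the project-wide quartiles for the same GPU model
# would show whether a host runs faster or slower than its peers.
```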
ID: 41866
I would switch the GPUs to the vertical axis, add a scroll bar, and move the time to the horizontal axis. That way you could put the full card names on the left side without squeezing them. I agree that we should break each card out into WDDM and non-WDDM operating systems and stop there; it gets too cumbersome after that.
ID: 41867
I thought about horizontal charts; the problem is that Google Charts hasn't implemented that feature for candlestick charts yet, so I'm afraid we'll have to stick with the present layout. (Remember that if you hover over the chart, you get further information about each candlestick.)
ID: 41870
Hi Gerard,
ID: 41956
You are right... circumstance has brought us to a scenario where the batch names are waaay too long. I'll do something about it.
ID: 41957
I run two long-run tasks per GPU on my two SLI'd Titan Black cards (I use SLI for other applications) on Windows 7, so four of them run simultaneously. While this significantly increases the time each task takes, it also somewhat increases the total amount of work done per unit of time.
ID: 42151
> I run two long-run tasks per GPU on my two SLI'd Titan Black cards (I use SLI for other applications) on Windows 7, so four of them run simultaneously. While this significantly increases the time each task takes, it also somewhat increases the total amount of work done per unit of time.

All your Titan Blacks seem to be doing at the moment is throwing errors, and that doesn't increase the total amount of work done. Am I missing something?
ID: 42154
> All your Titan Blacks seem to be doing at the moment is throwing errors, and that doesn't increase the total amount of work done. Am I missing something?

I'm not sure how that's relevant. My computer crashed today while I had it open to clean it, corrupting all the tasks in progress. The majority of the work it has done for both BOINC and GPUGrid has been with this configuration. ... Actually, it just crashed again. This is the first time I've used it since moving, so something may have been damaged in transit, or the new drivers and tuning software I updated to aren't agreeing with something. Either way, it's still irrelevant to my previous post, but thanks for your concern.
____________
My BOINC Cruncher, Minecraft Multiserver, Mobile Device Mainframe, and Home Entertainment System/Workstation: http://www.overclock.net/lists/display/view/id/4678036#
ID: 42155
I know there are few of them, but Windows XP and Server 2003 systems, like Linux, do not incur the WDDM overhead.
ID: 42170