Message boards : News : WU: CASP
Hey all. Over the next few weeks I will be sending out some simulations whose names start with CASP. There will be a total of 3600 simulations running at any time (meaning that once some of them complete, new ones are sent out to reach 3600 again), probably for a month or so.
ID: 44670
Thank you very much, Stefan!
ID: 44679
In my experience, these workunits tolerate less overclocking.
ID: 44682
The simulations have their length written in their name (1ns, 5ns, 20ns and 50ns). Some will end up in the short queue and some in the long queue. The longest simulation time I measured on a 780 was 8 hours for one of the proteins at 50ns, and the shortest was 0.1 hours.

Just had 3 run through my GTX 670 (Win7-64): two 1ns took about 7 minutes each, and the 50ns took just over 6 hours. All 3 WUs ran fine.
ID: 44683
I got 2 of these tasks, both completed successfully:
ID: 44689
https://www.gpugrid.net/result.php?resultid=15344597
ID: 44690
@Bedrich The experiment I'm doing dictates the length of the simulations. If I make them longer I won't be able to prove my point. So unfortunately I can't make them longer.
ID: 44695
@Bedrich The experiment I'm doing dictates the length of the simulations. If I make them longer I won't be able to prove my point. So unfortunately I can't make them longer.

Hi Stefan, I don't think Bedrich was complaining. Personally I'm very happy that WUs have returned to a more manageable size. Everything's running smoother and I thank you guys for that. BTW, the 1ns WUs are taking about 9-10 minutes on a 750Ti and the 5ns WUs average about 47 minutes on a 750Ti (Win7-64). Looks like the 20ns are running about 3 hours and the 50ns about 8.5 hours on the same GPUs.

Edit: made a quick list since someone asked on the other thread. SDOERR_CASP times:
GTX 670: 1ns about 7 minutes, 5ns about 35 minutes, 20ns about 2.5 hours, 50ns about 6 hours.
GTX 750Ti: 1ns about 9.5 minutes, 5ns about 47 minutes, 20ns about 3 hours, 50ns about 8.5 hours.
ID: 44697
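The times above scale roughly linearly with the simulated length, so a rough estimate for other lengths is easy to make. Below is a minimal sketch of that estimate; the per-nanosecond rates are taken from the GTX 670 and GTX 750Ti figures quoted in this post, and the function name and structure are purely illustrative, not anything from the GPUGRID client.

```python
# Rough runtime estimate assuming runtime grows roughly linearly with the
# simulated length. The per-ns rates below come from the 5ns times quoted
# above (illustrative figures, not project-supplied values).

MINUTES_PER_NS = {
    "GTX 670": 35 / 5,      # ~7 min per ns (5ns WU took ~35 minutes)
    "GTX 750Ti": 47 / 5,    # ~9.4 min per ns (5ns WU took ~47 minutes)
}

def estimate_runtime_minutes(gpu: str, nanoseconds: float) -> float:
    """Very rough CASP runtime estimate for a WU of the given length."""
    return MINUTES_PER_NS[gpu] * nanoseconds

if __name__ == "__main__":
    for gpu in MINUTES_PER_NS:
        for ns in (1, 5, 20, 50):
            minutes = estimate_runtime_minutes(gpu, ns)
            print(f"{gpu}: {ns:>2}ns ~ {minutes:.0f} min ({minutes / 60:.1f} h)")
```

For the 50ns WUs this gives roughly 5.8 hours on the GTX 670 and 7.8 hours on the 750Ti, close to the observed 6 and 8.5 hours; the 1ns WUs run a bit longer than the linear estimate because of fixed per-task overhead.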
10.5 mins
ID: 44704
@Bedrich The experiment I'm doing dictates the length of the simulations. If I make them longer I won't be able to prove my point. So unfortunately I can't make them longer.

For the record, I was merely stating my observation and opinion, not complaining nor intending to be derogatory. And here is some more: I was able to finish the 50ns task in under 3 hours, so that would technically make it a short run, not a long run:

15345412 30790 -- sent 14 Oct 2016 6:05:02 UTC, reported 14 Oct 2016 13:46:26 UTC, Completed and validated, run time 9,970.77 s, CPU time 9,892.05 s, credit 63,750.00, Long runs (8-12 hours on fastest card) v8.48 (cuda65)
http://www.gpugrid.net/workunit.php?wuid=11767231

Also, the GPU usage on the Windows 10 computer was 72% and power usage was 59%, which are low. On the Windows XP machine, the GPU usage was 87% and power usage was 65%, which are also a little low.
ID: 44706
Oh, come on, be derogatory; this forum could do with it.
ID: 44707
You can take the lead on derogatory; I'll just be sarcastic.
ID: 44710
01:30:00
ID: 44719
I had 2 tasks (labeled as long runs), each completed in a little over 1 hour:
ID: 44773
I had 2 tasks (labeled as long runs), each completed in a little over 1 hour:

Stefan already answered you why: "@Bedrich The experiment I'm doing dictates the length of the simulations. If I make them longer I won't be able to prove my point. So unfortunately I can't make them longer."
ID: 44779
CASP WUs on two GTX 970s at 1.5 GHz run 2.5~2.7x slower with PCIe 2.0 x1 compared to PCIe 3.0 x4.
ID: 44781
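For a rough sense of why the slot matters: using the standard usable per-lane rates (about 0.5 GB/s for PCIe 2.0 and about 0.985 GB/s for PCIe 3.0 after encoding overhead), PCIe 2.0 x1 has roughly an eighth of the bandwidth of PCIe 3.0 x4, so bus-heavy WUs like these have far less headroom in an x1 slot. The small sketch below just works out that comparison; it is an illustration, not anything from GPUGRID.

```python
# Approximate usable PCIe bandwidth per lane (GB/s), after encoding overhead:
#   PCIe 2.0: 5 GT/s with 8b/10b encoding    -> ~0.5 GB/s per lane
#   PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane
PER_LANE_GBPS = {"2.0": 0.5, "3.0": 8 * (128 / 130) / 8}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Rough usable bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

if __name__ == "__main__":
    slow = link_bandwidth("2.0", 1)   # PCIe 2.0 x1
    fast = link_bandwidth("3.0", 4)   # PCIe 3.0 x4
    print(f"PCIe 2.0 x1: {slow:.2f} GB/s")
    print(f"PCIe 3.0 x4: {fast:.2f} GB/s")
    print(f"The x1 slot has ~{fast / slow:.1f}x less bus bandwidth")
```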
CASP WUs on two GTX 970s at 1.5 GHz run 2.5~2.7x slower with PCIe 2.0 x1 compared to PCIe 3.0 x4.

Or PCIe 2.0 x16...
ID: 44783
I had 2 tasks (labeled as long runs), each completed in a little over 1 hour:

My post was not about the length of the tasks. It was about classifying the tasks correctly and updating the definition of each category. Please read the post more carefully next time.
ID: 44788
I notice that with the CASP units I have around 80% GPU usage using Windows XP and a GTX 960. I see that others are having lower GPU usage as well. Will the CASP units always have this low a GPU usage?
ID: 44816
I notice that with the CASP units I have around 80% GPU usage using Windows XP and a GTX 960. I see that others are having lower GPU usage as well. Will the CASP units always have this low a GPU usage?

Most probably they will. I think it's because the models these units are simulating have "only" 11,340 atoms, while others have 2-5 times as many. The smaller the model, the more frequently the CPU has to do the DP part of the simulation, resulting in lower GPU usage. (However, there were high atom-count batches with low GPU usage in the past, so a larger model could also need relatively high CPU-GPU interaction.)
ID: 44817
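A minimal sketch of that reasoning, under made-up constants: assume the GPU part of each timestep scales with the atom count while the CPU/transfer part of each step is a fixed overhead. A small system like the ~11,340-atom CASP models then spends a larger fraction of each step waiting on the CPU, which shows up as lower GPU usage. None of the numbers below are measured ACEMD values; they were chosen only to show the trend.

```python
# Toy model: GPU utilization vs. atom count when each MD timestep has a GPU
# portion proportional to the atom count plus a fixed CPU/transfer portion.
# All constants are made up for illustration; they are not ACEMD measurements.

GPU_MS_PER_ATOM = 0.00005   # hypothetical GPU time per atom per step (ms)
CPU_OVERHEAD_MS = 0.25      # hypothetical fixed CPU/PCIe time per step (ms)

def gpu_utilization(atoms: int) -> float:
    """Fraction of a timestep the GPU is busy in this toy model."""
    gpu_ms = GPU_MS_PER_ATOM * atoms
    return gpu_ms / (gpu_ms + CPU_OVERHEAD_MS)

if __name__ == "__main__":
    for atoms in (11_340, 2 * 11_340, 5 * 11_340):
        print(f"{atoms:>6} atoms -> ~{gpu_utilization(atoms):.0%} GPU usage")
```

With these particular constants the model gives roughly 70% usage at 11,340 atoms and over 90% at five times that, which is the qualitative pattern described above.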
If my CPU and GPU have to interact a lot more often, I might be in trouble, since the only machine I have access to right now has a Pentium D CPU. I'll just hope for the best until I can get my hands on a newer machine.
ID: 44818
Hm, okay, I can move some stuff from the long to the short queue. I was stretching the definition a bit, but Gianni said to go with it. But if it's an issue I will send only 4+ hour runs to the long queue and anything under that to the short queue.
ID: 44829
Hm, okay, I can move some stuff from the long to the short queue. I was stretching the definition a bit, but Gianni said to go with it. But if it's an issue I will send only 4+ hour runs to the long queue and anything under that to the short queue.

Stefan, I don't see it as an issue at all. Leave them in the long queue.
ID: 44832
Yeah, I switched to the 4-hour thing, but Gianni also told me to revert it back. I guess for this project we can "suffer" some shorter sims in the long queue.
ID: 44834
Yeah, I switched to the 4-hour thing, but Gianni also told me to revert it back. I guess for this project we can "suffer" some shorter sims in the long queue.

Thumbs up on the long queue decision and a double thumbs up on being a DOCTOR! BTW, I've a krick in my knee...
ID: 44844
Yeah, I switched to the 4-hour thing, but Gianni also told me to revert it back. I guess for this project we can "suffer" some shorter sims in the long queue.

It's OK to have the 50ns and 20ns simulations in the long queue, as long as there are any in the long queue; but since the 5ns and 1ns simulations take only 14 minutes and 3 minutes (respectively) to process, the data transfer of a long task takes more time than the calculation of a short task, which makes my hosts download tasks from my backup project (because of the limit of two GPUGrid workunits).

I am sending now some thousands more simulations including a new protein called alpha3D (a3D). This should be no problem. :) Awesome job by the way on the crunching. I guess since I never did many simulations before I hadn't realized just how awesome the throughput of GPUGRID is :P Totally enjoying it now, hehe.

Besides enjoying it, it is your job and responsibility to use and nourish this huge computing power as wisely as possible. As most of us (your volunteer crunchers) are not into biochemistry, this job partly consists of keeping us motivated to support research whose results we can't comprehend as much as you do (to be polite :) ).
ID: 44845
BTW, I've a krick in my knee...

Zoltan, once we have some results I will make a post about it to explain it. I know we are not really top at the communication aspect. Right now it's still a bit in the making and a bit under the covers to avoid competition etc. But if the results are nice it will probably make a quite important publication.
ID: 44855
CASP WUs on two GTX 970s at 1.5 GHz run 2.5~2.7x slower with PCIe 2.0 x1 compared to PCIe 3.0 x4.

CASP runtimes (atom and step counts) vary, so this is just a general reference.

GTX 1070 (PCIe 3.0 x8) CASP runtimes:
-- 1ns (ntl9) 600 credits = 240 sec @ 2.1GHz / 59% GPU / 15% MCU / 37% BUS / 78W power
-- 1ns (a3d) 1,350 credits = 330 sec @ 2.1GHz / 70% GPU / 24% MCU / 39% BUS / 96W power
-- 5ns (ntl9) 3,150 credits = 1,200 sec @ same usage and power numbers as 1ns
-- 5ns (a3d) 6,900 credits = 1,600 sec @ same usage and power numbers as 1ns

GTX 1060 (3GB) PCIe 3.0 x4 CASP runtimes:
-- 1ns (ntl9) 600 credits = 300 sec @ 2.1GHz / 63% GPU / 17% MCU / 51% BUS / 74W power
-- 1ns (a3d) 1,350 credits = 450 sec @ 2.1GHz / 74% GPU / 24% MCU / 59% BUS / 88W power
-- 5ns (ntl9) 3,150 credits = 1,500 sec @ same GPU usage and power as 1ns
-- 5ns (a3d) 6,900 credits = 2,275 sec @ same GPU usage and power as 1ns

IMO: a (1152 CUDA) GTX 1060 is on par with a (2048 CUDA) GTX 980 and ~20% faster than a (1664 CUDA) GTX 970. The (1920 CUDA) GTX 1070 is as fast as, if not ~5% faster than, a (2816 CUDA) GTX 980 Ti.
ID: 45007
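To make the comparison at the end more concrete, the sketch below turns the quoted 5ns a3D runtimes and credits into credits per hour. The input numbers are copied from the post above; the script is only an illustration of the arithmetic, not an official benchmark.

```python
# Credits per hour computed from the 5ns a3D runtimes and credits quoted
# above. These are single-host sample numbers, not a formal benchmark.

RESULTS = {
    # gpu: (credits per WU, runtime in seconds)
    "GTX 1070 (PCIe 3.0 x8)": (6_900, 1_600),
    "GTX 1060 3GB (PCIe 3.0 x4)": (6_900, 2_275),
}

for gpu, (credits, runtime_s) in RESULTS.items():
    per_hour = credits / (runtime_s / 3600)
    print(f"{gpu}: ~{per_hour:,.0f} credits/hour")
```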
IMO: a (1152 CUDA) GTX 1060 is on par with a (2048 CUDA) GTX 980 and ~20% faster than a (1664 CUDA) GTX 970.

This is true if you compare these cards under a WDDM OS. I guesstimate that my GTX 980Ti @ 1390MHz/3500MHz will be ~6% faster under Windows XP x64 than my GTX 1080 @ 2000MHz/4750MHz under Windows 10 x64 (while processing CASP11_crystal_contacts_**ns_a3D workunits). I also guesstimate that the TITAN X (Pascal) cards won't scale well under a WDDM OS, especially with low atom-count workunits; the TITAN X (Pascal) will be just slightly faster than a GTX 1080. In general, under a WDDM OS the TITAN X (Pascal) GPUs won't be as much faster as they should be, taking the ratio of the CUDA cores of the GTX 1080 and the TITAN X (Pascal) into consideration.
ID: 45037
BTW, I've a krick in my knee...

Dr. Who?
ID: 45300
hah!
ID: 45347