Gridcoin GPU mining (6): Obtaining the maximum performance out of your GPUs

by @vortac
<html>
<p><img src="http://www4.slikomat.com/13/0323/azd-scienc.jpg"/></p>
<p><br></p>
<p>Welcome to the sixth installment of the <a href="https://www.gridcoin.us">Gridcoin</a> GPU mining series, continuing our exploration of the world of computational science, done through the <a href="http://boinc.berkeley.edu">BOINC</a> network and rewarded through Gridcoin - a cryptocurrency that rewards BOINC computations on top of Proof-of-Stake.</p>
<p><br></p>
<p>Although BOINC is a volunteer effort, it has a very competitive community. Every BOINC project maintains a list of <a href="https://milkyway.cs.rpi.edu/milkyway/top_users.php">Top Participants</a>, <a href="https://milkyway.cs.rpi.edu/milkyway/top_teams.php">Top Teams</a> and <a href="https://milkyway.cs.rpi.edu/milkyway/top_hosts.php">Top Hosts</a>. Many BOINC crunchers study those lists carefully, comparing the performance of their hardware to that of other crunchers. Of course, with Gridcoin there is also a monetary incentive to maximize the performance of your hardware: more successfully completed BOINC tasks also mean a larger Gridcoin income. So, I am going to provide all the obvious and less obvious hints on how to squeeze every bit of performance out of your GPUs, maximizing your BOINC output and, in turn, your Gridcoin earnings. I am currently mostly involved with the <a href="http://milkyway.cs.rpi.edu/milkyway/">MilkyWay@home</a> BOINC project, so some hints will be relevant only to that particular project, but others will improve your GPU performance across all BOINC GPU applications (and maybe even your Proof-of-Work gigahashes, if that's your thing).</p>
<p><br></p>
<p><img src="http://www4.slikomat.com/13/0316/ixw-top-se.png"/></p>
<p><a href="https://setiathome.berkeley.edu/show_user.php?userid=407"><em>Laurent Domisse</em></a><em>, top SETI@home user, </em><a href="https://setiathome.berkeley.edu/hosts_user.php?userid=407"><em>has 107 machines</em></a><em> with numerous high-end GPUs crunching for science. Yes, the competition is tough.</em></p>
<p><br></p>
<h2>1. Choose your GPU and BOINC project carefully</h2>
<p>I've already written about this in <a href="https://steemit.com/gridcoin/@vortac/gridcoin-gpu-mining-3-blast-from-the-past">one of my previous articles</a>, but it's worth repeating here. Unlike simple hashing, which mostly deals with integers, BOINC (and computational science in general) deals with decimal numbers and <a href="https://en.wikipedia.org/wiki/FLOPS">floating point operations</a>. The majority of BOINC projects use FP32 computations (so-called single precision), but <a href="http://milkyway.cs.rpi.edu/milkyway/">MilkyWay@home</a> (and some other BOINC projects) require FP64, or double precision. So yes, your shiny new GTX 1080 Ti can achieve 10.8 TFLOPS in FP32, but only 0.34 TFLOPS in FP64 - using it for MilkyWay@home is a waste of resources and it will perform terribly there. Use it for FP32 BOINC projects instead (there are plenty of them). In fact, if you check MilkyWay's <a href="http://milkyway.cs.rpi.edu/milkyway/top_hosts.php">Top hosts list</a>, you will see that it is populated mostly with AMD 7970 and R9 280X GPUs, which are fairly outdated but still renowned for their high FP64 performance at affordable prices. So choose your GPUs and BOINC projects carefully if you want maximum performance. Don't bring a knife to a gunfight.</p>
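<p>The gap is easy to quantify. Here is a quick sketch comparing the cards mentioned above (the spec numbers are approximate public figures, included as illustrative assumptions rather than exact benchmarks):</p>

```python
# Rough FP32 vs FP64 throughput comparison for the GPUs mentioned in the text.
# Numbers are approximate theoretical peak TFLOPS from public spec sheets.
gpus = {
    # name: (FP32 TFLOPS, FP64 TFLOPS)
    "GTX 1080 Ti": (10.8, 0.34),   # FP64 at roughly 1/32 of FP32
    "AMD HD 7970": (3.8, 0.95),    # FP64 at 1/4 of FP32
    "AMD R9 280X": (4.1, 1.02),    # FP64 at 1/4 of FP32
}

for name, (fp32, fp64) in gpus.items():
    ratio = fp32 / fp64
    print(f"{name}: FP32 {fp32:.2f} TFLOPS, FP64 {fp64:.2f} TFLOPS "
          f"(FP64 is 1/{ratio:.0f} of FP32)")
```

<p>The point in one glance: an old 7970 delivers almost three times the FP64 throughput of a 1080 Ti, despite having far less FP32 power.</p>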
<p><br></p>
<p><img src="http://cdn.wccftech.com/wp-content/uploads/2016/05/Nvidia-GTX-1080-Ti-Featured.jpg" width="1273" height="740"/></p>
<p><em>Nvidia GeForce GTX 1080 Ti. So awesome. Except in FP64.</em></p>
<p><br></p>
<h2>2. Free up some CPU cores</h2>
<p>A typical newbie mistake: load all CPU and GPU cores with BOINC tasks. CPU utilization at 100% all the time - surely that's the maximum performance I can get out of my BOINC machine, right? Well, not quite. With your CPU busy all the time, GPU tasks will often stall, severely reducing your overall BOINC output. To put it simply, there are no pure GPU workloads - the CPU is often needed to provide some 'assistance' with BOINC computations. And if your CPU is 100% busy, your GPU will have to wait its turn. That's bad: you don't want your GPU tasks waiting and stalling, you want them running at full throttle all the time. So free up some CPU cores now. There is an option in your BOINC Manager just for that:</p>
<p><br></p>
<p><img src="http://s1.upslike.net/2017/03/16/85a5d350d553bcd9fa275a9731d9fece.png"/></p>
<p><em>Here it is, under Options-&gt;Computing Preferences, use at most 75% of the cores (or any other number you choose).</em></p>
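<p>The same setting can also be applied without the GUI, using BOINC's standard preferences-override mechanism: create a file named <strong>global_prefs_override.xml</strong> in the BOINC data directory (a sketch - the 75% value here mirrors the screenshot above; pick whatever suits your machine):</p>

```xml
<global_preferences>
   <!-- Use at most 75% of the CPU cores for BOINC tasks -->
   <max_ncpus_pct>75</max_ncpus_pct>
</global_preferences>
```

<p>After saving the file, restart the client or use Options-&gt;Read local prefs file in BOINC Manager for it to take effect.</p>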
<p><br></p>
<p>How many cores should you free up? How can you be sure you are doing this right? Open your Task Manager and monitor your CPU utilization. Are there any flat lines at 100%? Free up more cores. This is the CPU utilization history for my machine (<a href="http://milkyway.cs.rpi.edu/milkyway/show_host_detail.php?hostid=439806">ID 439806</a>) and you will probably want your chart to look approximately the same (a few spikes are OK).&nbsp;</p>
<p><img src="http://www4.slikomat.com/13/0316/fqg-CPU-ut.png"/></p>
<p><br></p>
<p>Yes, following these instructions, your CPU output will certainly be reduced in the end. But this article is about GPU performance, first and foremost, and GPU tasks bring far more BOINC credits. Do you want your machine to be at the top and to strike stunning disbelief into the hearts of other BOINC crunchers? Then free up some CPU cores and let your GPU tasks fly. When you hit record numbers, no one will say "but one of your CPU cores is underutilized".</p>
<p><br></p>
<h2>3. Run multiple BOINC tasks per GPU</h2>
<p>Modern GPUs are becoming extremely powerful. The GTX 1080 Ti, mentioned before, is equipped with 11 GB of memory and 10.8 TFLOPS of FP32 computing power (typical of the <a href="https://en.wikipedia.org/wiki/History_of_supercomputing">fastest supercomputers only 15-16 years ago</a>). To put it simply, many BOINC tasks aren't yet demanding enough to utilize such a large computing resource efficiently. However, there is a solution: run multiple GPU tasks simultaneously. Your average runtimes will increase, of course, but so will your overall BOINC output.</p>
<p>This is especially important for short, non-memory-intensive tasks, which are usually completed in a minute or less (MilkyWay@home is notorious for its short-running tasks). With such a workload, your GPU has to switch tasks every minute, finishing the previous task and loading a new one, with some inevitable idling in between. But if multiple tasks run concurrently, your GPU never goes fully idle: concurrent tasks almost never finish at the same time, so a balanced load forms between the tasks that are finishing (and idling) and the tasks that are running at full throttle.</p>
<p><br></p>
<p><img src="http://www4.slikomat.com/13/0316/69i-BOINC2.png"/></p>
<p><em>&nbsp;I have 4 GPUs each running 4 MilkyWay@home tasks (16 tasks in total). This is how they look in my BOINC Manager - no two progress bars are the same. A mix of ending, running and starting tasks ensures that no GPU ever goes idle.</em></p>
<p><br></p>
<p>So the question is: how many tasks should you run on a single GPU to obtain maximum BOINC output? Unfortunately, there is no universal answer, since it depends on many factors. Generally, for very long and demanding tasks (such as PrimeGrid <a href="http://www.primegrid.com/forum_thread.php?id=3980">Genefer World Record</a> workunits, which run for 72 hours or more), one task per GPU is usually more than enough. For short tasks (like MilkyWay@home), the optimum is somewhere between 3 and 8 per GPU, I think. Some experimenting is needed to find out what works best for your hardware. If your system becomes slow and unresponsive, decrease the number of running tasks.</p>
<p><br></p>
<p><img src="http://www4.slikomat.com/13/0317/j7f-artefa.jpg"/></p>
<p><em>Artefacts? Corrupted frames? Video stutter? General unresponsiveness? Many BOINC tasks running simultaneously can hammer the GPU pretty hard. Revert to one task per GPU and things will normalize again (i.e. no permanent damage is done).</em></p>
<p><br></p>
<p>By default, BOINC is configured to run only one task per GPU, but that can be changed by creating and editing some XML configuration files. Only Notepad is required and it's a simple process: you need to create a file named <strong>app_config.xml</strong> in the project's directory. The BOINC Manager will automatically detect and load it on startup.&nbsp;</p>
<p>An example of app_config.xml for MilkyWay@home can be found on <a href="https://cryptocointalk.com/topic/49370-milkywayhome/?p=222555">Gridcoin Cryptocointalk</a>.</p>
<p>More details about BOINC XML configuration <a href="https://boinc.berkeley.edu/wiki/Client_configuration">can be found here.</a></p>
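<p>As a sketch, an app_config.xml along these lines runs four tasks per GPU. The application name below is an assumption - check client_state.xml or the project's Applications page for the exact name used on your host:</p>

```xml
<app_config>
   <app>
      <!-- must match the project's short application name (assumed here) -->
      <name>milkyway</name>
      <gpu_versions>
         <!-- 0.25 GPUs per task => 4 tasks share one GPU -->
         <gpu_usage>0.25</gpu_usage>
         <!-- reserve a small CPU share per task for feeding the GPU -->
         <cpu_usage>0.05</cpu_usage>
      </gpu_versions>
   </app>
</app_config>
```

<p>Lowering gpu_usage to 0.125 would allow eight tasks per GPU, and so on. Start conservatively and work your way up while watching runtimes and system responsiveness.</p>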
<p><br></p>
<h2>No GPUs were harmed in the process</h2>
<p>By now, we have more or less covered "conventional" stuff (meaning that your GPU warranty is still 100% valid). In the next article, we are going to look into increasing your GPU performance even further, no holds barred: overclocking, overvolting, modding the BIOS, voiding the warranty and turning your GPU into a fiery furnace, bent on the absolute maximum BOINC performance possible, power consumption and heat output be damned. You have been warned :)</p>
<p><br></p>
<p>&nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp; &nbsp;<img src="http://www4.slikomat.com/13/0305/eju-Gridco.jpg"/></p>
</html>