@ipka, the way to fold efficiently is actually to cut power to your GPUs.
Most of my GPUs run at ~55-60% of their max TDP: the 190W 2060s run at 125W, the 225W cards at 127W, the 250W cards at 140W, and the 2080 Tis at ~180W. All of them still run at 1900+MHz, with the 2060 ROG Strix occasionally hitting 2010MHz.
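For anyone who wants to try the same thing, a power limit like this can be set with nvidia-smi (a sketch; the wattage values are just the ones from my setup above, and GPU indices will differ on your system):

```shell
# Enable persistence mode so the limit survives between CUDA/OpenCL sessions
sudo nvidia-smi -pm 1

# Set a power limit per GPU (watts); -i selects the GPU index.
# Example: cap GPU 0 (a 190W 2060) at 125W, GPU 1 (a 250W card) at 140W.
sudo nvidia-smi -i 0 -pl 125
sudo nvidia-smi -i 1 -pl 140

# Verify the new limits and current draw
nvidia-smi --query-gpu=index,name,power.limit,power.draw --format=csv
```

Note that each card enforces a minimum power limit, so nvidia-smi will reject values below it.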
The difference between 1900MHz and 2010MHz is only 5-10% of PPD, but the power savings amount to 40-50% of total system power!
Those energy savings are better spent on another GPU, which will more than make up for the 10% performance loss!
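The arithmetic behind that claim can be sketched quickly (a rough illustration using numbers from this post: ~8% PPD loss for a 2060 limited from 190W to 125W; real PPD varies per card and per WU):

```python
# PPD-per-watt comparison for a power-limited card (illustrative numbers).
# Stock: 100% PPD at 190 W. Power-limited: ~92% PPD at 125 W.
stock_ppd, stock_watts = 1.00, 190.0
limited_ppd, limited_watts = 0.92, 125.0

stock_eff = stock_ppd / stock_watts        # PPD per watt at stock
limited_eff = limited_ppd / limited_watts  # PPD per watt when power-limited

gain = limited_eff / stock_eff - 1.0       # relative efficiency gain
print(f"Efficiency gain from power limiting: {gain:.0%}")  # → 40%
```

In other words, giving up a single-digit percentage of PPD buys back roughly 40% in points per watt, which is exactly the headroom that pays for an extra card.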
That being said, the Turing cards usually have only about 10% more to give; they're already running at 85-90% of their peak efficiency when tuned this way.
That's a smaller difference than running Linux vs. Windows for Nvidia GPU folding.
I also have my doubts about CUDA. It already works in Windows, but gives some strange results; for example, sometimes resources of one card are allocated to another, so one WU finishes faster while the other finishes slower.
The way they could be optimized is by re-routing part of the work from a card running a high-atom WU to a card running a low-atom WU. However, that adds PCIe latency, meaning the high-atom WU would run about 10% slower; so you're just hoping the secondary GPU can take on more than 10% of the workload.
I'm not confident that CUDA is a good solution here, as it has many drawbacks.
Perhaps in a few years, when the technology has matured...
post edited by ProDigit - 2019/11/02 03:14:08