TL;DR - I ran some tests with two sets of timings at DDR4-3200 on a 4.2 GHz i7 6900 and saw barely measurable differences in performance, all of which fall within the margin of error and are too small to matter anyway.
Long-winded - When I bought the motherboard and RAM for my rebuild, I spent extra $$ for CL14 (14-14-14-34-2T) DDR4-3200. It cost me around $239 for 32 GB (4x8 GB). Cheaper 16-16-16-36-2T or 16-18-18-38-2T RAM at the same size and clock speed was available for between $164 and $194, depending on the model and timings. That means I paid roughly a $40-60 premium to have RAM with the tightest timings at that clock speed.
I do recall reading articles and comparisons in the past that predicted the real-world differences would be meager, but all of those comparisons used looser 18-18-18-38 RAM at DDR4-3200 and compared it against 14-14-14-34 timings at slower speeds like 2133 MHz. When I made my decision I reasoned that if the improvements were meager because the faster clock came with worse timings, then keeping the same aggressive timings along with the quicker clock should actually be worth a little more.
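For anyone wondering why I expected tighter timings to help in the first place, the first-word latency in nanoseconds is just the CAS latency divided by the memory clock (half the transfer rate). A quick sketch of that arithmetic (plain C, using only the published timings of the kits discussed above, not anything from my test runs):

```c
#include <stdio.h>

/* First-word latency in nanoseconds:
 * latency_ns = CL / (transfer_rate / 2)  (DDR moves two transfers per clock)
 *            = 2000 * CL / MT_per_s
 */
static double cas_ns(int cl, int mt_s) {
    return 2000.0 * cl / mt_s;
}

int main(void) {
    printf("DDR4-3200 CL14: %.2f ns\n", cas_ns(14, 3200)); /* 8.75 ns   */
    printf("DDR4-3200 CL16: %.2f ns\n", cas_ns(16, 3200)); /* 10.00 ns  */
    printf("DDR4-2133 CL14: %.2f ns\n", cas_ns(14, 2133)); /* ~13.13 ns */
    return 0;
}
```

On paper that's roughly a 12% lower first-word latency for the CL14 kit at the same clock, which is why I figured it should show at least something.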
So tonight I decided to test this out. I ran a bunch of different CPU-related tests on my machine (i7 6900 running at 4.2 GHz with the cache at 3.5 GHz) using the DDR4-3200 XMP setting, which gives me the 14-14-14-34-2T timings. I did modify that, changing the 2T to 1T, and it's been perfectly fine. I then rebooted the machine, set the timings to 16-18-18-38-2T (and left the rest of the settings as they were), and reran all of the tests.
The results are astounding, but not in a positive way. It's astounding how little advantage there is in buying aggressively timed RAM these days.
For each of these tests I ran the test five times and averaged the results. For the Cinebench R15 CPU benchmark I ran 10 passes at each RAM setting, because it's such a quick benchmark.
Intel Extreme Tuning Utility benchmark
14-14-14-34-1T: 2286.4
16-18-18-38-2T: 2289 (.1% faster)
POV-Ray 3.7 benchmark scene rendered at 1920x1080 with AA0.3, elapsed wall clock time:
14-14-14-34-1T: 371.064 seconds
16-18-18-38-2T: 369.11 seconds (about .5% less elapsed time)
Cinebench R15 benchmark
14-14-14-34-1T: 1772.4 (.2% faster)
16-18-18-38-2T: 1768.6
Fire Strike with Precision X OC set to +130/+450 (my recent 24/7 settings)
14-14-14-34-1T: 20530.4 (.3% faster)
16-18-18-38-2T: 20470.2
Time Spy
14-14-14-34-1T: 8234.2 (.25% faster)
16-18-18-38-2T: 8213.4
You'll notice that the Intel Extreme Tuning Utility benchmark and the POV-Ray 3.7 render actually gave the slower RAM a very slight edge, while the Cinebench, Fire Strike, and Time Spy tests gave a very slight edge to the faster RAM. The largest difference was .5%, with most differences being .1% to .3% either way. With only five runs each (10 for Cinebench), it's almost certain that the run-to-run variance was larger than the actual differences, so a larger sample size might change the results, though probably not enough to make the differences more meaningful.
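To make the "margin of error" point concrete, here's a quick sketch of the arithmetic (plain C). The averages are the ones posted above; the five per-run scores at the bottom are made-up placeholder numbers, not my real run data, included only to show how easily normal run-to-run spread swamps a few tenths of a percent:

```c
#include <stdio.h>
#include <math.h>

/* Percent difference of the CL14 kit's result relative to the CL16 kit's. */
static double pct_delta(double cl16, double cl14) {
    return 100.0 * (cl14 - cl16) / cl16;
}

int main(void) {
    /* Averaged scores posted above (CL16 kit first, CL14 kit second). */
    printf("XTU:         %+.2f%%\n", pct_delta(2289.0, 2286.4));
    printf("POV-Ray:     %+.2f%% (wall time, so positive = slower)\n",
           pct_delta(369.11, 371.064));
    printf("Cinebench:   %+.2f%%\n", pct_delta(1768.6, 1772.4));
    printf("Fire Strike: %+.2f%%\n", pct_delta(20470.2, 20530.4));
    printf("Time Spy:    %+.2f%%\n", pct_delta(8213.4, 8234.2));

    /* Hypothetical per-run scores (NOT real data from the tests above),
     * just to illustrate typical run-to-run spread vs. a ~0.2% delta. */
    double runs[5] = {1770.0, 1779.0, 1765.0, 1774.0, 1768.0};
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < 5; i++) mean += runs[i] / 5.0;
    for (int i = 0; i < 5; i++) var += (runs[i] - mean) * (runs[i] - mean) / 4.0;
    printf("Sample spread: mean %.1f, stddev %.1f (%.2f%% of the mean)\n",
           mean, sqrt(var), 100.0 * sqrt(var) / mean);
    return 0;
}
```

With a spread like that, a 0.1-0.3% difference between configurations simply isn't distinguishable from noise at five runs apiece.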
Now, to make sense of this, I consider the following: this CPU is the Core i7 6900, which has a 20 MB L3 cache. Compare that to a Skylake chip like the 6700, which has 8 MB of L3 cache shared by its 4 cores. I would bet that the smaller-cache processors would see a more meaningful improvement from RAM timings than these socket 2011-v3 chips with their gargantuan caches. How much more of an improvement would have to be measured, but I bet there would be one.
I have no doubt that RAM speed could make a difference in other kinds of tests. For example, if I downloaded some video encoding tests and encoded videos several gigabytes in size, RAM speed would probably make a measurable difference, because we'd be talking about data sets that can't be held entirely within the cache most of the time. I can imagine that rendering 1080p videos in Adobe Premiere, for instance, would show some differences due to RAM speed.
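For anyone who wants to see roughly where a 20 MB L3 stops hiding memory latency, here's a minimal pointer-chase sketch (plain C with POSIX clock_gettime; this is not something I ran as part of the tests above, and the array sizes and iteration count are arbitrary). Access times should stay low while the working set fits in cache and jump sharply once it spills out to DRAM, which is the regime where timings and clock speed actually get exercised:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase_ns_per_access(size_t n_elems, size_t iters) {
    size_t *next = malloc(n_elems * sizeof *next);
    if (!next) return -1.0;

    /* Sattolo's algorithm: a random single-cycle permutation, so the
     * dependent loads can't be prefetched and must walk the whole array. */
    for (size_t i = 0; i < n_elems; i++) next[i] = i;
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;   /* assumes a large RAND_MAX (e.g. glibc) */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t i = 0; i < iters; i++) p = next[p];   /* chain of dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    free(next);
    return p >= n_elems ? -1.0 : ns / (double)iters;  /* use p so the loop isn't optimized out */
}

int main(void) {
    /* Working sets from well inside the 20 MB L3 to well past it. */
    size_t sizes_mb[] = {1, 4, 16, 64, 256};
    for (int i = 0; i < 5; i++) {
        size_t n = sizes_mb[i] * 1024 * 1024 / sizeof(size_t);
        printf("%3zu MB working set: %.1f ns/access\n",
               sizes_mb[i], chase_ns_per_access(n, 20u * 1000u * 1000u));
    }
    return 0;
}
```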
One thing I didn't test, which I'd like to go back and try when I have more time (maybe tomorrow night), is the Ashes of the Singularity CPU tests. There's enough data being accessed by so many different threads in that game that there's a pretty good chance the cache won't be able to hide all the accesses, and a measurable difference might show up.
I'm going to bet that in most other games, the difference in performance between the $239 32 GB kit and the $164 32 GB kit of slower RAM will range from unmeasurable to measurable but more or less meaningless.
I knew there was a chance it could turn out this way, having read those articles from a couple of years ago when the DDR4-3200 RAM being tested was all of the 16-18-18-38 variety, but I deemed it worth the chance. I probably don't get to keep smugly thinking my machine is superior by dint of faster RAM anymore, though some will always assume that faster is faster, whether or not it's actually better in real life.
Looking back, I don't know whether I would rather have ploughed the extra $75 into 64 GB of the slower RAM, or just saved the $75, knowing I was probably going to spend $75 more on water-cooling paraphernalia at some point anyway.
I can at the very least say, with confidence, that the tighter-timed RAM makes no significant difference in the kinds of workloads I put my machine to 95-99% of the time.