EVGA

New to SR-2, Need help with OC, Ram and settings.

Author
xxBenja
New Member
  • Total Posts : 4
  • Reward points : 0
  • Joined: 2015/05/06 18:03:35
  • Status: offline
  • Ribbons : 0
2015/05/06 18:22:48 (permalink)
Hi all.
I have just got an SR-2 mobo and finished assembling my setup, so now I'm starting to play with the BIOS and such.
I would like a stable 24/7 OC; my goal is 4 GHz or more.
I have read a lot of the threads, but I think I need my own.
Also, I have 6x 4GB Corsair XMS3 2000 MHz RAM, and I have 2x 4GB Corsair Vengeance Red CL8 1600 MHz lying around. Can I combine those? With the current setup, the computer won't see them.
What settings would you guys recommend for my setup? It's going to be used as a storage and media encoding server, controlled via RDP.
Also, why do I only bench ~450 MB/s on the SSDs, and only ~35 MB/s on the two RAID PCI-E cards?
 
My setup is:
Lian Li PC-D8000 + D8001 + D8002
EVGA Classified SR-2
2x Xeon X5680
6x Corsair XMS3 2000 MHz DDR3
2x Corsair Vengeance Red CL8 1600 MHz
GeForce 9600 GT (I don't need graphics, but I have two GTX 680s I plan to plug in if I find it necessary)
1x Corsair H100
1x Corsair H80 (which I will switch to an H100 soon)
2x Kingston HyperX 240GB in RAID 0
1x Adaptec RAID 51245
1x Adaptec RAID 5805
12x 2 TB WD Red in RAID 6
8x 2 TB WD Red in RAID 5
 
I really hope you guys will help me out.
Thanks!

#1


    gordan79
    SSC Member
    • Total Posts : 531
    • Reward points : 0
    • Joined: 2013/01/27 00:17:36
    • Status: offline
    • Ribbons : 3
    Re: New to SR-2, Need help with OC, Ram and settings. 2015/05/07 03:17:31 (permalink)
    For a 24/7-stable 4 GHz:
     
    BCLK=166
    Multiplier=x24
    QPI=4.8GT/s
    TurboBoost=off
    SpeedStep=off (it will boost to +1 multiplier with SpeedStep on even if TurboBoost is off! If you want to enable SpeedStep, reduce the multiplier to x23 to compensate)
    VCore=1.30V
    VTT=1.325V (do not push it past 1.35)
    IOH=1.25V
    VDIMM=1.35V (or whatever your DIMMs are specced at)
    Everything else on auto.
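     
    The arithmetic behind those settings is easy to sanity-check (a quick sketch; the core clock is just BCLK times multiplier):

    ```python
    # SR-2 core clock is BCLK (MHz) x CPU multiplier.
    bclk = 166
    multiplier = 24

    print(bclk * multiplier)    # 3984 MHz, i.e. just under the 4 GHz target

    # With SpeedStep enabled the chip boosts one multiplier bin higher,
    # hence the advice above to drop to x23 to land in the same place:
    print(bclk * (23 + 1))      # 3984 MHz again
    ```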
     
    Most SAS cards (I have personally tried several generations of LSI, Adaptec and 3Ware) will crash the machine randomly if used with IOMMU virtualization (VT-d). If you need to run VMs with PCI passthrough (e.g. passing the GPU through for gaming, as I do), you'll have to lose the SAS cards. If you don't intend to run such VMs, disable VT-d in the BIOS.
     
    I would strongly advise you trade your RAM in for some ECC RDIMMs (dual rank x4 type), or you are liable to chase your tail for weeks if you have a marginal DIMM.
     
    RAID 5 and 6 are expected to be slow, especially with mechanical drives. There is a whole array (no pun intended) of things you need to pay attention to when using various RAID levels regarding storage stack alignment; it is well worth reading up on, and the same principles apply to any OS and file system.
     
    Regarding bottlenecking at 450MB/s on the SSDs, I presume you have them connected to the SATA-3 Marvell controller (red ports). The Marvell controller sits behind a single (x1) PCIe 2.0 lane, which gives it upstream connectivity of 5GBit/s (yes, that's right: some bright spark decided to put 2x 6GBit SATA ports behind 1x 5GBit of PCIe bandwidth). PCIe 2.0 uses 8b/10b encoding (10 bits sent on the wire for every 8 bits of payload), which gives a theoretical maximum of 500MB/s between the controller and the PCIe bus. There will be further overheads on top of that, so a 450MB/s peak sounds about right. Note that this is the maximum total bandwidth of the controller, so it is shared between both of its ports. You would actually get more total combined bandwidth out of your SSDs if you connected one of them to the ICH10 controller (one of the black ports).
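     
    That 500MB/s ceiling falls straight out of the PCIe 2.0 link parameters (a sketch of the arithmetic, using decimal units as link rates are normally quoted):

    ```python
    # PCIe 2.0: 5 GT/s per lane, 8b/10b encoding (10 bits on the wire per 8 payload bits).
    line_rate_gt_s = 5.0            # single (x1) lane
    encoding_efficiency = 8 / 10    # usable payload fraction under 8b/10b

    payload_gbit_s = line_rate_gt_s * encoding_efficiency   # 4 Gbit/s of payload
    payload_mb_s = payload_gbit_s * 1000 / 8                # convert to MB/s

    print(payload_mb_s)             # 500.0 MB/s, shared between both Marvell SATA ports
    ```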
    post edited by gordan79 - 2015/05/10 03:43:51

    Supermicro X8DTH-6, 2x X5690
    Crucial 12x 8GB x4 DR 1.35V DDR3-1600 ECC RDIMMs (96GB)
    3x GTX 1080Ti
    Triple-Seat Virtualized With VGA Passthrough (KVM)
    #2
    xxBenja
    New Member
    • Total Posts : 4
    • Reward points : 0
    • Joined: 2015/05/06 18:03:35
    • Status: offline
    • Ribbons : 0
    Re: New to SR-2, Need help with OC, Ram and settings. 2015/05/07 03:22:38 (permalink)
    Thanks buddy, I appreciate your answer! :)
     
    What RAM would you recommend? Something price-friendly, and I'm thinking 48GB.
     
    So, about the SSDs, you're saying one red, one black, and then run software RAID?
     
    Again, thank you!

    #3
    gordan79
    SSC Member
    • Total Posts : 531
    • Reward points : 0
    • Joined: 2013/01/27 00:17:36
    • Status: offline
    • Ribbons : 3
    Re: New to SR-2, Need help with OC, Ram and settings. 2015/05/07 04:12:31 (permalink)
    I only use Crucial RAM because their warranty and customer service are superb. I'm sure other brands will do just fine provided they match the spec, though, if you find them to be considerably cheaper.
     
    Yes, one red, one black, with software RAID. That's assuming you think 450MB/s isn't sufficient, which is rather questionable. In reality, you'll be dealing with two types of data:
    1) Data that you use often, which will be cached in RAM and so won't hit the SSDs.
    2) Data that requires processing at load time, which will saturate the CPU (often single-threaded, so many cores won't help) long before you run into the 450MB/s bottleneck.
    Basically, unless you are running something like a huge, memory-starved database server, you are unlikely to use up 450MB/s of disk I/O.
     
    Additionally, you say you'll be using it as a storage and media encoding server:
    1) The media encoding part will be CPU-bottlenecked, so you'll never hit the disk I/O limit.
    2) The storage part will be network-limited. Even if you use 802.3ad LACP link aggregation on your switch to bond the two Marvell NICs, that's still a theoretical maximum of 2Gb/s in each direction, i.e. 4Gb/s total combined, which in the extreme best case might reach 400MB/s, a figure your SSDs already exceed in your tests.
    Which raises the question of why you think 450MB/s might be insufficient.
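     
    The network ceiling in point 2 can be sketched the same way (the ~20% protocol-overhead figure here is my assumption, chosen to illustrate how the ~400MB/s estimate arises):

    ```python
    # Two bonded gigabit NICs under 802.3ad LACP.
    nics = 2
    per_nic_gbit_s = 1.0

    per_direction = nics * per_nic_gbit_s   # 2 Gbit/s each way
    combined = 2 * per_direction            # 4 Gbit/s total, both directions
    raw_mb_s = combined * 1000 / 8          # MB/s with zero protocol overhead

    print(raw_mb_s)                         # 500.0
    print(raw_mb_s * 0.8)                   # 400.0, a rough best case after overhead
    ```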
     
    Personally, I'd be more worried about RAID0 without redundancy on the SSDs.
    Also, 12x2TB RAID6 is, IMO, not enough redundancy.
    8x2TB RAID5 is even worse. Remember that disks are rated for a 10^-14 unrecoverable read error rate, i.e. about one unrecoverable sector for every 11TB read. Say you have a disk failure on your 8x2TB RAID5: you have to read 7x2TB to rebuild the data onto the new, replacement disk. That's 14TB of reads. So statistically, you'll lose some data every time you lose a disk, even if another disk doesn't fail during the non-trivial resilvering time. Worse, most hardware RAID firmware is too simplistic to handle this case; when it encounters a bad sector during resilvering, it will often bail and trash the whole array. Whichever way you look at it, RAID5 is unfit for purpose with 21st-century disk sizes. For the RAID6 part, I'd probably run 2x (6x2TB RAID6) instead. But if you have the option, scratch traditional RAID (hardware or software) and switch to ZFS.
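     
    The URE arithmetic can be made concrete (a sketch; modelling the errors as independent is my simplifying assumption, not a manufacturer spec):

    ```python
    import math

    # Consumer drives: one unrecoverable read error (URE) per 1e14 bits read.
    ure_per_bit = 1e-14
    bits_per_tb = 8e12              # 1 TB = 1e12 bytes = 8e12 bits (decimal units)

    # Rebuilding an 8x2TB RAID5 after one failure: read the 7 surviving disks.
    rebuild_reads_tb = 7 * 2        # 14 TB of reads
    expected_ures = rebuild_reads_tb * bits_per_tb * ure_per_bit
    print(expected_ures)            # about 1.12 expected errors per rebuild

    # Chance of a rebuild with zero UREs, treating errors as independent (Poisson):
    p_clean = math.exp(-expected_ures)
    print(round(p_clean, 2))        # 0.33, so most rebuilds hit at least one URE
    ```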

    #4
    xxBenja
    New Member
    • Total Posts : 4
    • Reward points : 0
    • Joined: 2015/05/06 18:03:35
    • Status: offline
    • Ribbons : 0
    Re: New to SR-2, Need help with OC, Ram and settings. 2015/05/11 01:56:15 (permalink)
    Thank you, Gordan.
     
    To answer why I want more than 450 on the SSDs: it isn't that I need it; many people OC without a reason, simply because it's possible.
    And RAID0 was chosen because I reached 1GB/s on my other gaming rig. In this case, it's for the extra space.
     
    About the RAID6, I think 2 redundancy disks is enough, but sadly, one of the SAS ports on the 51245 died last night, which means 18TB of data loss :(
     
    What about RAM timings and such, what's your opinion on that? BIOS settings, I mean :)
     
    Thanks.

    #5
    gordan79
    SSC Member
    • Total Posts : 531
    • Reward points : 0
    • Joined: 2013/01/27 00:17:36
    • Status: offline
    • Ribbons : 3
    Re: New to SR-2, Need help with OC, Ram and settings. 2015/05/11 03:08:56 (permalink)
    The last two and a half years of SR-2 ownership have taught me, among other things, that buying an expensive overclocking motherboard to save a bit on the CPUs is usually a false economy. A decent, well-engineered, well-tested, well-debugged server-grade motherboard with top-of-the-line CPUs already runs within 10% of the clock speeds I could hope to achieve by overclocking a cheaper, lower-multiplier CPU.
     
    The X5690 boosts to x27 on all cores, which at the default BCLK gets it to 3.6GHz.
    The X5650 boosts to x22 on all cores, which even if you manage a fully stable OC at 177 BCLK is 3.9GHz.
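     
    Plugging in the numbers (the stock 133 MHz BCLK is assumed, as on any X58/5520-chipset board):

    ```python
    # All-core turbo multipliers vs. achievable BCLK.
    x5690_mhz = 133 * 27    # 3591 MHz: stock BCLK, no tuning needed
    x5650_mhz = 177 * 22    # 3894 MHz: best-case stable overclock

    print(x5690_mhz, x5650_mhz)
    print(round(100 * x5690_mhz / x5650_mhz, 1))    # 92.2 percent of the OC'd clock
    ```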
     
    So the X5690 gets you over 92% of the maximum you can realistically hope to achieve with full stability, and does so without any configuration or tuning whatsoever.
    If you end up wasting more than 3-4 days of your time getting the X5650 stable at 177 BCLK, then unless you value your time quite low, you might as well have worked those extra 3-4 days and put the money toward something that works with no messing about at all.
     
    Is your time really worth so little that the numbers on the benchmark screenshot justify it?
     
    As I said earlier, hardware RAID is evil. If you had just lost a port while using software RAID or ZFS, you would have been able to hook the drives up to another controller and it would all appear again. To recover from the sort of failure you describe, you'd need to get another identical RAID controller and hope it manages to reassemble its predecessor's RAID array. It is far more trouble than it's worth, and it offers less protection against certain failure modes than a solution like ZFS.
     
    As for RAM timings: mine are all auto-detected, then manually set explicitly to the auto-detected values. Manually setting all the timings and setting the command rate to 2T is necessary when running with 96GB of RAM. I never bothered trying to squeeze more out of the timings, as the measurable overall benefit isn't worth the testing time, even with ECC RAM making errors (correctable or otherwise) much more obvious. And considering that a typical system experiences a bit flip from RF noise, cosmic rays and other causes at least once a month on average, it is not a margin for error I am interested in reducing, even with ECC RAM.

    #6