EVGA

PCI-E Lanes ?

Page: 12 > Showing page 1 of 2
Author
billpayer2005
New Member
  • Total Posts : 35
  • Reward points : 0
  • Joined: 2008/11/18 18:04:37
  • Status: offline
  • Ribbons : 0
2015/01/04 11:49:33 (permalink)
Hello,
I am running an SR-2 with 4 x GTX 780's.
 
Are there any issues with maximum PCI-E lanes ? I have read some i7 chips only support a limited number of lanes...
 
I am running with 2 x X5650 (Xeon)
 
Thanks !

EVGA SR-2
2 x x5650 Xeon
4 x EVGA GTX 780 6Gb
aw yeah.
#1

32 Replies

    cuda-dude
    New Member
    • Total Posts : 91
    • Reward points : 0
    • Joined: 2014/08/10 00:38:14
    • Status: offline
    • Ribbons : 1
    Re: PCI-E Lanes ? 2015/01/04 12:12:09 (permalink)
    I run 4 780's with no problem

    EVGA Sr-2 w58 bios
    xeon x5690 x 2
    evga GTX 780 ti Kingpins  x4 sli
    clocked on H20 to 4.3
    48 gig Corsair 1600 ram
    Geekbench 34475 Windows 8.1 x64
    Geekbench 36660 Linux Generic Kernel x64
    3Dmark 11 Performance  26393  #1 x5690 Record
    3Dmark Fire Strike Extreme  14775
    3Dmark Fire Strike Ultra     10953
    3dmark Fire Strike   25315
    #2
    ty_ger07
    Insert Custom Title Here
    • Total Posts : 21174
    • Reward points : 0
    • Joined: 2008/04/10 23:48:15
    • Location: traveler
    • Status: offline
    • Ribbons : 270
    Re: PCI-E Lanes ? 2015/01/04 12:56:54 (permalink)
    Here is the manual:
    https://www.evga.com/supp.../files/270-WS-W555.pdf

    Page 33 says that it is designed to support up to four x16 or x8 graphics cards. Pretty vague actually regarding the lane limitations.

    Here's another link with more information:
    http://www.evga.com/suppo...ewfaq.aspx?faqid=59283

    W555 – Classified SR-2
    PCIe slot 1: 16x (8x if slot 2 is filled)
    PCIe slot 2: 8x
    PCIe slot 3: 16x (8x if slot 4 is filled)
    PCIe slot 4: 8x
    PCIe slot 5: 16x (8x if slot 6 is filled)
    PCIe slot 6: 8x
    PCIe slot 7: 16x


    Hope this helps.

    So, somewhere between 56 and 64 lanes max if I understand correctly. I think the max is 56 lanes and I think slot 7 will be 8x with four cards, but the documentation is a bit confusing.
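To make the possible combinations concrete, here is a small Python sketch of the slot-width rules from the FAQ table above. The pairing of each x16 slot with the x8 slot that shares its lanes is my reading of that table, not something EVGA states explicitly:

```python
# Sketch of the SR-2 slot-width rules from the EVGA FAQ table above.
# Assumption: each x16 slot (1, 3, 5) drops to x8 when its paired
# x8 slot (2, 4, 6) is populated; slot 7 stays x16 per the table.
def slot_widths(populated):
    pairs = {1: 2, 3: 4, 5: 6}  # x16 slot -> x8 slot that shares its lanes
    widths = {}
    for slot in sorted(populated):
        if slot in pairs and pairs[slot] in populated:
            widths[slot] = 8
        elif slot in (1, 3, 5, 7):
            widths[slot] = 16
        else:  # slots 2, 4, 6 are always x8
            widths[slot] = 8
    return widths

# Four dual-slot cards in the primary slots:
print(slot_widths({1, 3, 5, 7}))  # {1: 16, 3: 16, 5: 16, 7: 16}
```

Under that reading, four cards in slots 1, 3, 5, 7 would see 64 electrical lanes in total at the slots.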
    post edited by ty_ger07 - 2015/01/04 13:04:31

    ASRock Z77 • Intel Core i7 3770K • EVGA GTX 1080 • Samsung 850 Pro • Seasonic PRIME 600W Titanium
    My EVGA Score: 1546 • Zero Associates Points • I don't shill

    #3
    billpayer2005
    New Member
    • Total Posts : 35
    • Reward points : 0
    • Joined: 2008/11/18 18:04:37
    • Status: offline
    • Ribbons : 0
    Re: PCI-E Lanes ? 2015/01/04 13:08:39 (permalink)
    @cuda-dude  interesting... We have similar setups! But you are running newer chips. Is the BIOS you are running new also? How is it?
    Have you ever tried the 3gb option for memory buffer in bios ? Seemed to help run the system smoother. Also, I don't use SLI.
     
    @ty_ger07  Thanks for the info. Yes I know that manual back to front... All cards should run x16 with slots 1,3,5,7
     
     
    post edited by billpayer2005 - 2015/01/04 13:10:56

    EVGA SR-2
    2 x x5650 Xeon
    4 x EVGA GTX 780 6Gb
    aw yeah.
    #4
    ty_ger07
    Insert Custom Title Here
    • Total Posts : 21174
    • Reward points : 0
    • Joined: 2008/04/10 23:48:15
    • Location: traveler
    • Status: offline
    • Ribbons : 270
    Re: PCI-E Lanes ? 2015/01/04 13:14:21 (permalink)
    If you already know, why are you asking?

    It's only x16 on 1, 3, 5, and 7 in certain circumstances.

    It could be 8x, 8x, 8x, and 16x with four video cards in the worst case. It could be 16x, 8x, 8x, and 16x. It could be all sorts of different combinations. You need to make your question much clearer if you want a better response. It depends on whether your video cards are single-slot cards (liquid cooled?) and how many cards you plan to put in the other slots. If they are dual-slot video cards and you aren't using other slots, and you already know the answer, why did you ask? Just to make fun of people who answered?

    If you are only using 4 graphics cards in slots 1, 3, 5, and 7 and are using no other cards, all four cards will have x16 wide PCI-E 2.0 with those CPUs. But you already knew that. So, the point of asking?
    post edited by ty_ger07 - 2015/01/04 13:45:20

    ASRock Z77 • Intel Core i7 3770K • EVGA GTX 1080 • Samsung 850 Pro • Seasonic PRIME 600W Titanium
    My EVGA Score: 1546 • Zero Associates Points • I don't shill

    #5
    cuda-dude
    New Member
    • Total Posts : 91
    • Reward points : 0
    • Joined: 2014/08/10 00:38:14
    • Status: offline
    • Ribbons : 1
    Re: PCI-E Lanes ? 2015/01/04 13:14:58 (permalink)
    yea the 58 bios seemed the best out of all of them, 3gb is what I have to run to make them all work. And I have SLI on when benching and gaming but turn it off for mining or folding. I have tried all the nvidia drivers and 344.11 is the best for me. 16x v2 for all four cards.

    EVGA Sr-2 w58 bios
    xeon x5690 x 2
    evga GTX 780 ti Kingpins  x4 sli
    clocked on H20 to 4.3
    48 gig Corsair 1600 ram
    Geekbench 34475 Windows 8.1 x64
    Geekbench 36660 Linux Generic Kernel x64
    3Dmark 11 Performance  26393  #1 x5690 Record
    3Dmark Fire Strike Extreme  14775
    3Dmark Fire Strike Ultra     10953
    3dmark Fire Strike   25315
    #6
    billpayer2005
    New Member
    • Total Posts : 35
    • Reward points : 0
    • Joined: 2008/11/18 18:04:37
    • Status: offline
    • Ribbons : 0
    Re: PCI-E Lanes ? 2015/01/04 16:40:17 (permalink)
    @ty_ger07 You misunderstood ? I am not making fun ?! I didn't know the pci-e bandwidth stuff at all and I appreciate your help.
     
    @cuda-dude Nice ! thanks ! what is 16x v2 for all the cards ?

    EVGA SR-2
    2 x x5650 Xeon
    4 x EVGA GTX 780 6Gb
    aw yeah.
    #7
    cuda-dude
    New Member
    • Total Posts : 91
    • Reward points : 0
    • Joined: 2014/08/10 00:38:14
    • Status: offline
    • Ribbons : 1
    Re: PCI-E Lanes ? 2015/01/04 16:58:08 (permalink)
    16x v2 vs. v3, which the newest motherboards have, but most of the newer boards make you go to 8x for the other cards. Anyway, just open up GPU-Z and on the right you can click the x16 lane test to see what you are getting. On nvidia GPUs you sometimes have to go into the nvidia control panel and choose "prefer maximum performance" instead of adaptive under power management to get x16 for all cards in SLI.

    EVGA Sr-2 w58 bios
    xeon x5690 x 2
    evga GTX 780 ti Kingpins  x4 sli
    clocked on H20 to 4.3
    48 gig Corsair 1600 ram
    Geekbench 34475 Windows 8.1 x64
    Geekbench 36660 Linux Generic Kernel x64
    3Dmark 11 Performance  26393  #1 x5690 Record
    3Dmark Fire Strike Extreme  14775
    3Dmark Fire Strike Ultra     10953
    3dmark Fire Strike   25315
    #8
    ty_ger07
    Insert Custom Title Here
    • Total Posts : 21174
    • Reward points : 0
    • Joined: 2008/04/10 23:48:15
    • Location: traveler
    • Status: offline
    • Ribbons : 270
    Re: PCI-E Lanes ? 2015/01/04 21:08:33 (permalink)
    Oh, I misunderstood. I thought you said you already read the manual front to back and already knew that the four cards would run at x16.

    ASRock Z77 • Intel Core i7 3770K • EVGA GTX 1080 • Samsung 850 Pro • Seasonic PRIME 600W Titanium
    My EVGA Score: 1546 • Zero Associates Points • I don't shill

    #9
    billpayer2005
    New Member
    • Total Posts : 35
    • Reward points : 0
    • Joined: 2008/11/18 18:04:37
    • Status: offline
    • Ribbons : 0
    Re: PCI-E Lanes ? 2015/01/05 12:30:36 (permalink)
    @ty_ger07 Yes, I have read the manual. Yes, I knew 4 cards would run x16. But I did not know that x16 means 16 lanes and hence 16 x 4 = 64 lanes, which is not in the manual anywhere.
     
    @cuda-dude I guess you mean PCIe versions?
     
    I am having a problem where only half my ram slots seems to work.
    But it's not the same slots, and the ram sticks seem fine.
    It's like the system is limited to 12gb. I was wondering if it might be a pci lane issue.

    EVGA SR-2
    2 x x5650 Xeon
    4 x EVGA GTX 780 6Gb
    aw yeah.
    #10
    cuda-dude
    New Member
    • Total Posts : 91
    • Reward points : 0
    • Joined: 2014/08/10 00:38:14
    • Status: offline
    • Ribbons : 1
    Re: PCI-E Lanes ? 2015/01/05 12:34:20 (permalink)
    Most RAM slot issues where the RAM isn't detected end up being a CPU that isn't seated correctly or bent pins in the CPU socket. It can also be the RAM itself, though, so it's easy to switch sticks around as well.

    EVGA Sr-2 w58 bios
    xeon x5690 x 2
    evga GTX 780 ti Kingpins  x4 sli
    clocked on H20 to 4.3
    48 gig Corsair 1600 ram
    Geekbench 34475 Windows 8.1 x64
    Geekbench 36660 Linux Generic Kernel x64
    3Dmark 11 Performance  26393  #1 x5690 Record
    3Dmark Fire Strike Extreme  14775
    3Dmark Fire Strike Ultra     10953
    3dmark Fire Strike   25315
    #11
    gordan79
    SSC Member
    • Total Posts : 531
    • Reward points : 0
    • Joined: 2013/01/27 00:17:36
    • Status: offline
    • Ribbons : 3
    Re: PCI-E Lanes ? 2015/01/11 06:46:23 (permalink)
    If CPU-Z can see the RAM slots it's almost certainly not bent CPU socket pins. Most likely it is just the BIOS being crap. Note your settings, clear the CMOS, make sure you are running the latest BIOS and see if it is still happening. The only time I had missing RAM and that didn't help, it turned out I had a duff DIMM.

    Supermicro X8DTH-6, 2x X5690
    Crucial 12x 8GB x4 DR 1.35V DDR3-1600 ECC RDIMMs (96GB)
    3x GTX 1080Ti
    Triple-Seat Virtualized With VGA Passthrough (KVM)
    #12
    billpayer2005
    New Member
    • Total Posts : 35
    • Reward points : 0
    • Joined: 2008/11/18 18:04:37
    • Status: offline
    • Ribbons : 0
    Re: PCI-E Lanes ? 2015/01/12 22:02:07 (permalink)
    @gordan79 by 'see the slot' do you mean CPU-Z gives data on it (or that the slot pull down is available)?
    it's definitely not seeing all the ram on the SPD page, however the total is correct on the Memory page.
     
    It might be possible some CPU pins are bent; I had some cables pushing against the CPU heatsink, which might have screwed with it...
     
    I'm running the oldest bios, 2010. It's been ok for me...

    EVGA SR-2
    2 x x5650 Xeon
    4 x EVGA GTX 780 6Gb
    aw yeah.
    #13
    gordan79
    SSC Member
    • Total Posts : 531
    • Reward points : 0
    • Joined: 2013/01/27 00:17:36
    • Status: offline
    • Ribbons : 3
    Re: PCI-E Lanes ? 2015/01/13 07:18:21 (permalink)
    Check the CPU pins. If SPD isn't showing up that is mildly concerning if you are running at default clocks and voltages. Have you confirmed whether the RAM shows up if you ONLY use the slots that aren't showing up at the moment?

    Supermicro X8DTH-6, 2x X5690
    Crucial 12x 8GB x4 DR 1.35V DDR3-1600 ECC RDIMMs (96GB)
    3x GTX 1080Ti
    Triple-Seat Virtualized With VGA Passthrough (KVM)
    #14
    billpayer2005
    New Member
    • Total Posts : 35
    • Reward points : 0
    • Joined: 2008/11/18 18:04:37
    • Status: offline
    • Ribbons : 0
    Re: PCI-E Lanes ? 2015/01/23 13:39:17 (permalink)
    After running CPU-Z, I tried putting all the ram back in and it works !
    So it was probably a loose RAM stick or loose CPU thing.
    Thank you for the help !

    EVGA SR-2
    2 x x5650 Xeon
    4 x EVGA GTX 780 6Gb
    aw yeah.
    #15
    boumay
    New Member
    • Total Posts : 85
    • Reward points : 0
    • Joined: 2011/06/04 23:15:02
    • Status: offline
    • Ribbons : 0
    Re: PCI-E Lanes ? 2015/11/07 03:34:12 (permalink)
    @billpayer2005
    Can you tell which power supply you have? How much power is needed for 4x 780's to run with the dual 5650?
    Thank you.

    SR-2
    Dual xeon 5650
    48gb G-skill ripjaws 1600 cl9
    LianLi PC-P80 case
    GTX 780 6gb
    GTX 970
    Cooler Master Gold 1000w
    #16
    gordan79
    SSC Member
    • Total Posts : 531
    • Reward points : 0
    • Joined: 2013/01/27 00:17:36
    • Status: offline
    • Ribbons : 3
    Re: PCI-E Lanes ? 2015/11/07 03:55:50 (permalink)
    A GTX 780 (reference version) has a TDP of 230W, so 4x230W=920W _just_ for the GPUs.
    Xeon X5650s have a TDP of 95W each, 2x95W=190W (assuming stock clocks and voltages, no overclocking).
    Just between those two: 920W+190W=1110W.
    The motherboard itself doesn't appear to have a stated TDP for all the components listed, but given all the power conversion circuitry, the ICH10, and a pair of Nvidia NF200 PCIe bridges (those can get quite hot), I would want to budget a _minimum_ of 100W for the motherboard itself.
    So call that a 1210W _minimum_, without any margin for error, without overclocking, without disks, and without any other peripherals connected (e.g. via USB).
    Then consider that you should always scale the PSU so that you never have to push it past 80% of its rating for optimal efficiency: 1210W/0.8, and you are already at 1512.5W.
     
    And that is without overclocking anything in the machine. Once you start adding disks and OC-ing, even relatively conservatively, it wouldn't be surprising to see the total according to above calculations hit 2000W.
     
    The system in my signature is running with a CoolerMaster 1500W PSU, and I wouldn't want to try adding an extra pair of GPUs to it without getting a bigger PSU. Also note that decent PSUs over 1500W are quite difficult to find, as none of the good manufacturers (e.g. EVGA, CoolerMaster, PC Power and Cooling) make them. And believe me, you don't want that much expensive hardware hanging off a 3rd-rate PSU manufacturer (and there's precious little choice between the good PSU manufacturers named above and the 3rd-rate ones you shouldn't be touching with a barge pole). For example, a friend's machine crossed my desk the other day that was a complete write-off after the ****ty Tagan PSU in it fried everything.
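    The arithmetic above is easy to sanity-check with a few lines of Python. The TDP figures are as quoted in the post; the 100W board allowance and the 80% load ceiling are the poster's assumptions, not vendor specifications:

```python
# PSU sizing arithmetic from the post above (all figures as quoted).
GPU_TDP = 230       # W, reference GTX 780
CPU_TDP = 95        # W, Xeon X5650
BOARD_BUDGET = 100  # W, rough allowance for VRMs, ICH10, dual NF200

load = 4 * GPU_TDP + 2 * CPU_TDP + BOARD_BUDGET
psu = load / 0.8    # keep the PSU at or below ~80% of its rating

print(load)  # 1210
print(psu)   # 1512.5
```

    Adding disks, peripherals, and any overclocking headroom on top of `load` before dividing by 0.8 is what pushes the recommendation toward 2000W.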

    Supermicro X8DTH-6, 2x X5690
    Crucial 12x 8GB x4 DR 1.35V DDR3-1600 ECC RDIMMs (96GB)
    3x GTX 1080Ti
    Triple-Seat Virtualized With VGA Passthrough (KVM)
    #17
    boumay
    New Member
    • Total Posts : 85
    • Reward points : 0
    • Joined: 2011/06/04 23:15:02
    • Status: offline
    • Ribbons : 0
    Re: PCI-E Lanes ? 2015/11/07 06:01:33 (permalink)
    Thank you very much gordan79 for your detailed explanation, very informative.
    But now, how are billpayer2005 and people like him running their systems? I'm curious. 2 PSUs? lol. I've heard of people using two PSUs, but how do they synchronize the startup? There is only one connector on the motherboard...

    SR-2
    Dual xeon 5650
    48gb G-skill ripjaws 1600 cl9
    LianLi PC-P80 case
    GTX 780 6gb
    GTX 970
    Cooler Master Gold 1000w
    #18
    Cool GTX
    EVGA Forum Moderator
    • Total Posts : 31001
    • Reward points : 0
    • Joined: 2010/12/12 14:22:25
    • Location: Folding for the Greater Good
    • Status: offline
    • Ribbons : 122
    Re: PCI-E Lanes ? 2015/11/07 06:45:18 (permalink)
    I'm enjoying my EVGA Supernova 1600 P2 in my rig.
     
    If you are using multiple PSUs, you split the load. 
     
    Put water pumps, non-boot drives and such on the second PSU. This of course means you need to add a switch to start the auxiliary PSU before you start the main PSU/PC. 
     
    Your wall outlet had better be up to the task - you are probably looking at a dedicated circuit by then, as 1600W will pull up to 13.3 Amps at 120V.
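    The mains-draw figure follows directly from P = V × I. A quick sketch; note this ignores PSU efficiency losses, which would push the actual wall draw higher still:

```python
# Wall current for a fully loaded 1600 W PSU on a 120 V circuit.
# Ignores PSU efficiency; e.g. at 90% efficiency the input power
# would be ~1778 W and the current correspondingly higher.
watts = 1600
volts = 120
amps = watts / volts
print(round(amps, 1))  # 13.3
```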

    Learn your way around the EVGA Forums, Rules & limits on new accounts Ultimate Self-Starter Thread For New Members

    I am a Volunteer Moderator - not an EVGA employee

    https://foldingathome.org -->become a citizen scientist and contribute your compute power to help fight global health threats

    RTX Project EVGA X99 FTWK Nibbler EVGA X99 Classified EVGA 3080Ti FTW3 Ultra


    #19
    gordan79
    SSC Member
    • Total Posts : 531
    • Reward points : 0
    • Joined: 2013/01/27 00:17:36
    • Status: offline
    • Ribbons : 3
    Re: PCI-E Lanes ? 2015/11/07 06:54:31 (permalink)
    boumay
    But now, how is billpayer2005 and people like him running his system? I'm curious. 2 psu's? lol. I've heard people using two psu's, but how do they synchronize the startup? There is only one connector on the mother board...



    In many cases, yes, they use multiple PSUs.
    You can get Y splitters for ATX, or you can short green-to-black on the ATX connector with a paper clip so that the secondary PSU is always on, and use the switch on the back of the PSU or on the wall socket to switch it off. Or use a relay running off the primary PSU power line so that when that powers up, the secondary PSU is told to power up. There are many easy ways to deal with this.
     
     
    Cool GTX
    Putting water pumps, non-boot drives and such on the second PSU.  This of course means you need to add a switch to start the auxiliary PSU before you start the main PSU/PC. 

     
    I would normally put the water pumps on the same circuit powering the device that is being cooled. That way you cannot get into a situation where one PSU fails and some hardware keeps running with its cooling no longer working.
     
     
    Cool GTX
    Your wall outlet had better be up to the task - probably looking at a dedicated circuit buy then as 1600W will pull up to 13.3 Amps at 120V.

     
    This is an excellent point. PSUs over about 1200W come with 16A connectors rather than the standard 13A kettle-plug connectors, and most wall _plugs_ are only rated for up to 13A.
    post edited by gordan79 - 2015/11/07 07:00:01

    Supermicro X8DTH-6, 2x X5690
    Crucial 12x 8GB x4 DR 1.35V DDR3-1600 ECC RDIMMs (96GB)
    3x GTX 1080Ti
    Triple-Seat Virtualized With VGA Passthrough (KVM)
    #20
    boumay
    New Member
    • Total Posts : 85
    • Reward points : 0
    • Joined: 2011/06/04 23:15:02
    • Status: offline
    • Ribbons : 0
    Re: PCI-E Lanes ? 2015/11/07 07:48:46 (permalink)
    Thank you again, very interesting.
    So, if I want to add, say 2 gtx 980ti to my setup (see below) for instance, I can add a second 1000w psu and it will be enough, right?
    But I'm afraid of screwing up my system by playing the electronics apprentice; do you know any good links on how to perform this safely?
     

    SR-2
    Dual xeon 5650
    48gb G-skill ripjaws 1600 cl9
    LianLi PC-P80 case
    GTX 780 6gb
    GTX 970
    Cooler Master Gold 1000w
    #21
    Cool GTX
    EVGA Forum Moderator
    • Total Posts : 31001
    • Reward points : 0
    • Joined: 2010/12/12 14:22:25
    • Location: Folding for the Greater Good
    • Status: offline
    • Ribbons : 122
    Re: PCI-E Lanes ? 2015/11/07 07:48:51 (permalink)
    gordan79
     
    Cool GTX
    Putting water pumps, non-boot drives and such on the second PSU.  This of course means you need to add a switch to start the auxiliary PSU before you start the main PSU/PC. 

     
    I would normally put the water pumps on the same circuit powering the device that is being cooled. That way you cannot get into a situation where one PSU fails and some hardware stays running without it's cooling still running.
     
     




    Though I follow the logic in your statement, water pumps pull quite a bit of juice. In a 2-CPU & 4-GPU setup, most would run two separate loops with 1 or 2 pumps per loop. At 24W (2A) per pump for a D5 pump, it may be too much for the main PSU.
     
    With software and hardware monitoring of CPU & GPU, auto-shutdown is not only feasible but generally the best-practice standard.
     
    Again, I agree: if the main PSU has the capacity to support your pump(s), use it.
     
     

    Learn your way around the EVGA Forums, Rules & limits on new accounts Ultimate Self-Starter Thread For New Members

    I am a Volunteer Moderator - not an EVGA employee

    https://foldingathome.org -->become a citizen scientist and contribute your compute power to help fight global health threats

    RTX Project EVGA X99 FTWK Nibbler EVGA X99 Classified EVGA 3080Ti FTW3 Ultra


    #22
    gordan79
    SSC Member
    • Total Posts : 531
    • Reward points : 0
    • Joined: 2013/01/27 00:17:36
    • Status: offline
    • Ribbons : 3
    Re: PCI-E Lanes ? 2015/11/07 08:47:03 (permalink)
    boumay
    So, if I want to add, say 2 gtx 980ti to my setup (see below) for instance, I can add a second 1000w psu and it will be enough, right?
    But I'm afraid of screwing up my system by playing the electronic apprentice, do you know any good l inks on how to perform this safely?



    Google it, what you'll find is going to be no worse than what I can find using the same methods. :)
    The rig in my sig is comfortably running using a single 1500W PSU. My recommendation is to just buy a bigger PSU, until you have no choice but to run two (unless it's for redundancy). EVGA do make some 1600W models.
     
    Cool GTX
    gordan79
    Cool GTX
    Putting water pumps, non-boot drives and such on the second PSU.  This of course means you need to add a switch to start the auxiliary PSU before you start the main PSU/PC. 

     
    I would normally put the water pumps on the same circuit powering the device that is being cooled. That way you cannot get into a situation where one PSU fails and some hardware stays running without it's cooling still running.
     




    Though I follow the logic in your statement; WP pull quite a bit of juice.  In a 2 CPU & 4 GPU setup, most would run two separate loops with 1 or 2 pumps per loop. At 24W (2A) per pump for a D5 pump it may be too much for the main PSU.
     
    With software and hardware monitoring of CPU & GPU,  auto-shutdown is not only feasible but generally the best practices standard.
     
    Again I agree if the main PSU has the capacity to support your pump(s) use it.



    What I'm saying is that whatever pump provides the cooling for a device should be on the same PSU as that device, i.e. have the mobo + CPU pump run off the main PSU, and the GPUs and the pump for their loop run off the second PSU. That way, if a PSU fails, both the thermal load and the cooling for that load fail at the same time, and thus nothing gets fried, even if the rest of the machine stays running. It's always a good idea to design things so that in the worst case they fail safe.
     
    Particularly important since PSUs are the second most likely component to fail in a computer (after spinning-rust hard disks, at least for some brands of disks).

    Supermicro X8DTH-6, 2x X5690
    Crucial 12x 8GB x4 DR 1.35V DDR3-1600 ECC RDIMMs (96GB)
    3x GTX 1080Ti
    Triple-Seat Virtualized With VGA Passthrough (KVM)
    #23
    boumay
    New Member
    • Total Posts : 85
    • Reward points : 0
    • Joined: 2011/06/04 23:15:02
    • Status: offline
    • Ribbons : 0
    Re: PCI-E Lanes ? 2015/11/07 08:50:08 (permalink)
    Thank you gordan, but, as mentioned before, a 1600w won't be enough with 4 gpu's, and my 5650's are oc'd, so I guess I would have no choice but to add a second psu.
     

    SR-2
    Dual xeon 5650
    48gb G-skill ripjaws 1600 cl9
    LianLi PC-P80 case
    GTX 780 6gb
    GTX 970
    Cooler Master Gold 1000w
    #24
    gordan79
    SSC Member
    • Total Posts : 531
    • Reward points : 0
    • Joined: 2013/01/27 00:17:36
    • Status: offline
    • Ribbons : 3
    Re: PCI-E Lanes ? 2015/11/07 12:54:59 (permalink)
    As far as performance is concerned, you are wasting your time and money on running 4 GPUs anyway. The reason I have two in my machine is that the 2nd one is passed to my wife's virtual gaming workstation. She games at 2560x1600; I use the Linux host and game at 3840x2400. Neither of us feels an overwhelming need for a faster GPU. I'm pondering getting a 980Ti, but only because I'm thinking about upgrading to a 5K monitor, and my 780Ti doesn't have the dual DP outputs required to drive it.
     
    If you are planning to use 4 GPUs for virtualization (quad-seat virtual workstation), I'd be inclined to under-clock and under-volt the CPUs and GPUs until it all fits comfortably into the power envelope of the biggest single PSU you can find.
     
    Also, 2000W PSUs do, in fact, exist. For example:
    https://www.overclockers....pply-bl-ca-031-sf.html
    post edited by gordan79 - 2015/11/07 13:05:49

    Supermicro X8DTH-6, 2x X5690
    Crucial 12x 8GB x4 DR 1.35V DDR3-1600 ECC RDIMMs (96GB)
    3x GTX 1080Ti
    Triple-Seat Virtualized With VGA Passthrough (KVM)
    #25
    boumay
    New Member
    • Total Posts : 85
    • Reward points : 0
    • Joined: 2011/06/04 23:15:02
    • Status: offline
    • Ribbons : 0
    Re: PCI-E Lanes ? 2015/11/07 21:27:09 (permalink)
    I don't plan to play games on my GPUs; I need them for GPU computing, so the more the better.
    And thank you for the 2000w psu link, it seems very good quality.

    SR-2
    Dual xeon 5650
    48gb G-skill ripjaws 1600 cl9
    LianLi PC-P80 case
    GTX 780 6gb
    GTX 970
    Cooler Master Gold 1000w
    #26
    gordan79
    SSC Member
    • Total Posts : 531
    • Reward points : 0
    • Joined: 2013/01/27 00:17:36
    • Status: offline
    • Ribbons : 3
    Re: PCI-E Lanes ? 2015/11/08 03:47:41 (permalink)
    It's the only sanely priced 2000W PSU I am aware of; that is not to say I am making any statements as to its quality. I have never owned a PSU of that brand, nor have I seen proper technical reviews of it.
     
    That's not to say that there aren't plenty of reviews of it out there; what I mean is that nearly all reviews of PSUs are completely useless. They are based around somebody putting it in a machine and then feeling clever when they say "Hey, I put all this hardware on it and it didn't blow up, and the machine didn't crash in the hour I spent 'testing' it." The vast majority of reviews are far from worth the electrons that were displaced to download the page. In fact, they are worse than useless, because they imply something may or may not be a good product without any meaningful analysis having been performed on it.
     
    If you are looking for PSU reviews: unless the load was generated by a proper, calibrated, industrial-grade load tester and the voltages and ripple were measured with an oscilloscope at a good range of loads (say, every 10% between 0% and 110% of its rated load), then the review contains no useful information at all; it's not a technical review, it's pure marketing.

    Supermicro X8DTH-6, 2x X5690
    Crucial 12x 8GB x4 DR 1.35V DDR3-1600 ECC RDIMMs (96GB)
    3x GTX 1080Ti
    Triple-Seat Virtualized With VGA Passthrough (KVM)
    #27
    boumay
    New Member
    • Total Posts : 85
    • Reward points : 0
    • Joined: 2011/06/04 23:15:02
    • Status: offline
    • Ribbons : 0
    Re: PCI-E Lanes ? 2015/11/08 04:10:18 (permalink)
    lol, that's right :))

    SR-2
    Dual xeon 5650
    48gb G-skill ripjaws 1600 cl9
    LianLi PC-P80 case
    GTX 780 6gb
    GTX 970
    Cooler Master Gold 1000w
    #28
    RainStryke
    The Advocate
    • Total Posts : 15872
    • Reward points : 0
    • Joined: 2007/07/19 19:26:55
    • Location: Kansas
    • Status: offline
    • Ribbons : 60
    Re: PCI-E Lanes ? 2015/11/08 05:38:14 (permalink)
    On this platform, the PCI-E lanes were on the motherboard chipset rather than integrated into the CPU; Socket 1366 is the last generation where the PCI-E lanes weren't integrated.
     
    The 5520 chipset had 32 lanes of PCI-E 2.0
    http://ark.intel.com/products/36783/Intel-5520-IO-Hub
     
    Then you have 2 NF200 controllers that add another 32 lanes.
    http://www.legitreviews.com/evga-classified-super-record-2-sr-2-motherboard-review_1437/2
     
    You have 64 lanes in all.

    Main PC | Secondary PC
    Intel i9 10900K | Intel i7 9700K

    MSI MEG Z490 ACE | Gigabyte Aorus Z390 Master
    ASUS TUF RTX 3090 | NVIDIA RTX 2070 Super
    32GB G.Skill Trident Z Royal 4000MHz CL18 | 32GB G.Skill Trident Z RGB 4266MHz CL17
    SuperFlower Platinum SE 1200w | Seasonic X-1250
    Samsung EVO 970 1TB and Crucial P5 1TB | Intel 760p 1TB and Crucial MX100 512GB
    Cougar Vortex CF-V12HPB x9 | Cougar Vortex CF-V12SPB-RGB x5
     
    3DMark Results:Time Spy|Port Royal

    #29
    gordan79
    SSC Member
    • Total Posts : 531
    • Reward points : 0
    • Joined: 2013/01/27 00:17:36
    • Status: offline
    • Ribbons : 3
    Re: PCI-E Lanes ? 2015/11/08 05:56:28 (permalink)
    Not quite. There are 32 lanes' worth of bandwidth to the NB and thus to the CPU. What the NF200 bridges do is provide multiplexing. The peak throughput doesn't increase (there is no way to get past that upstream limit of 32 lanes). However, since you are typically not using _all_ of the bandwidth to _all_ of the cards at the same time, the average performance increases because you are saturating the upstream bandwidth more of the time.
     
    It's also worth noting that each of the NF200s has 16 lanes to the NB and provides 32 lanes to the slots. So if you really are heavily I/O bound on the GPUs and you are only using two, you would do a little better with one GPU in slots 1 or 3 and the other in slots 5 or 7. For gaming, however, it has been demonstrated time and again that the PCIe bandwidth makes negligible difference so you wouldn't be able to tell the difference.
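    The topology described above can be sketched numerically, with lane counts as stated in this and the previous post:

```python
# SR-2 PCIe topology as described above: two NF200 bridges, each with
# 16 upstream lanes to the 5520 IOH and 32 downstream lanes to the slots.
bridges = 2
upstream_per_bridge = 16
downstream_per_bridge = 32

slot_lanes = bridges * downstream_per_bridge    # lanes at the slots
upstream_lanes = bridges * upstream_per_bridge  # lanes to the IOH

print(slot_lanes, upstream_lanes)  # 64 32
```

    This is why splitting two heavily I/O-bound GPUs across the two bridges (one in slot 1 or 3, the other in slot 5 or 7) helps: each card then gets its own bridge's 16 upstream lanes instead of the two contending for one.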
    post edited by gordan79 - 2015/11/14 06:17:45

    Supermicro X8DTH-6, 2x X5690
    Crucial 12x 8GB x4 DR 1.35V DDR3-1600 ECC RDIMMs (96GB)
    3x GTX 1080Ti
    Triple-Seat Virtualized With VGA Passthrough (KVM)
    #30