SiriusDragon
I'm also curious about the PCI-E lanes... I assume they did it for 4-way SLI compatibility...
But they could have powered the secondary slots from the secondary CPU, allowing for extra cards like RAID cards, PCI-E SSDs, etc. without affecting the GPUs.
While being able to run the board with 1 CPU is nice, it's hardly a selling point.
Assuming unlocked CPUs come out it'll still be a nice board, but the gimping of some of the features makes it slightly less appealing than it was shaping up to be. It's probably a shame for EVGA too, as I guess they were hoping for this to be an epic flagship board.
I don't mind the RAM slots... I think that's one of the few complaints about the board I don't share. I'd rather have better VRMs and stuff than more RAM capacity.
Actually, looking at the ASUS board's manual, they have a 16x/8x, 8x, 16x/8x, 8x, 16x, 8x, 16x setup in which the first four slots are powered by the primary CPU and the last three slots are powered by the secondary CPU. They also say the board can run 4-way SLI with this setup, all at 16x. Now, whether having two CPUs power the two pairs of GPUs gives better or worse performance than running 32 lanes from a single CPU as EVGA did (expanded by a PLX chip, of course, but those cards are still getting squeezed down to a shared 16x link) is up for debate. Then again, I think most of the crosstalk between GPUs would go over the SLI bridge and not the PCIe interface; the PCIe side would mostly be for communication with the CPU.
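To put rough numbers on that comparison, here's a quick back-of-the-envelope sketch. It's only an approximation: I'm assuming each Sandy Bridge-EP socket exposes 40 PCIe 3.0 lanes (neither manual spells that out), and I'm simplifying the SR-X to the 32-lanes-plus-PLX description above.

```python
# Rough lane math for 4-way SLI; 40 lanes per socket is my assumption.
LANES_PER_CPU = 40

# ASUS-style split: two x16 GPUs hang directly off each CPU.
asus_primary = 16 + 16       # slots 1 and 3 -> 32 of 40 lanes on CPU0
asus_secondary = 16 + 16     # slots 5 and 7 -> 32 of 40 lanes on CPU1
assert asus_primary <= LANES_PER_CPU and asus_secondary <= LANES_PER_CPU

# EVGA-style (as described above): four x16 slots behind PLX switches that
# are fed by 32 lanes from one CPU, so GPU-to-CPU traffic shares that uplink.
evga_uplink_lanes = 32
evga_slot_lanes = 4 * 16     # 64 lanes' worth of slots downstream

print("ASUS:", asus_primary, "+", asus_secondary, "lanes, split across two CPUs")
print("EVGA:", evga_slot_lanes, "slot lanes funneled into", evga_uplink_lanes, "uplink lanes")
```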
Anyway, if I were to redesign the SR-X (or an SR-X2), I would do just a few things differently:
1. Use a 16x, 4x, 16x, 4x, 16x, 4x, 16x setup with the first 3 slots coming from the primary CPU and the last 4 slots coming from the secondary CPU. Of course, all of them could use 16x physical slots. The reason for this is purely from a gamer's perspective: to maximize the bandwidth between my GPUs and the CPUs. For a CUDA rig, ten 8x slots (16x physical) would be more appropriate. If I populate the 4x slots in this scheme, my 16x slots won't suffer (the quick sketch after this list checks this lane math, along with the bandwidth numbers for items 3 and 5).
2. Use the C608 chipset, if for nothing else than its RST3 SAS RAID 5 support.
3. Perhaps use an Intel X540-AT2 10Gb/s Ethernet controller. This would depend heavily on the projected adoption rate of 10Gb/s Ethernet, but considering that Sandy Bridge-EP boards from both Tyan and Supermicro are using it, adoption seems to be improving. Of course I would use an RJ-45 plug and not anything fancy like SFP+. Since the controller uses 8 PCIe 2.0 lanes, I would let it take all of the PCH's lanes.
4. I would use a TI TSB83AA23 PCI-FireWire chip, since the C600 series supports PCI and the PCIe-FireWire chips are really PCIe-to-PCI-to-FireWire bridges. Also, I would place one FireWire 800Mb/s port on the rear.
5. Then I would connect 4 NEC µPD720201 USB 3.0 controllers to the secondary CPU's 4x PCIe 2.0 link (the one normally used to connect to the PCH), each one driving 1 rear USB 3.0 port.
6. Perhaps I would move some SATA 3Gb/s ports (native to the chipset) to the rear for eSATA, but I'm not a big fan of external hard drives to begin with.
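As mentioned in item 1, here's a quick sketch of the lane and bandwidth math behind items 1, 3, and 5. It's back-of-the-envelope only: I'm assuming 40 PCIe 3.0 lanes per socket, roughly 500 MB/s per PCIe 2.0 lane per direction after 8b/10b encoding, and that the X540-AT2 is the dual-port part; none of that comes from the SR-X documentation itself.

```python
# Rough numbers for items 1, 3, and 5 above; all figures are approximations.
LANES_PER_CPU = 40          # assumed PCIe 3.0 lanes per Sandy Bridge-EP socket
PCIE2_MBps_PER_LANE = 500   # 5 GT/s with 8b/10b -> ~500 MB/s per direction

# Item 1: 16x, 4x, 16x, 4x, 16x, 4x, 16x split across the two CPUs.
primary_slots = [16, 4, 16]        # first three slots
secondary_slots = [4, 16, 4, 16]   # last four slots
assert sum(primary_slots) <= LANES_PER_CPU     # 36 of 40 lanes used
assert sum(secondary_slots) <= LANES_PER_CPU   # 40 of 40 lanes used

# Item 3: X540-AT2 on all 8 of the PCH's PCIe 2.0 lanes vs. two 10Gb/s ports.
x540_link_MBps = 8 * PCIE2_MBps_PER_LANE   # ~4000 MB/s per direction
two_ports_MBps = 2 * 10000 / 8             # ~2500 MB/s of raw Ethernet traffic

# Item 5: one USB 3.0 controller per lane of the secondary CPU's x4 Gen2 link.
usb3_port_MBps = 5000 / 8 * 0.8            # ~500 MB/s after 8b/10b
lane_MBps = 1 * PCIE2_MBps_PER_LANE        # ~500 MB/s, so one lane per port is a wash

print(sum(primary_slots), sum(secondary_slots))
print(x540_link_MBps, ">=", two_ports_MBps)
print(lane_MBps, "~=", usb3_port_MBps)
```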