I've got to say, I'm a huge fan of the concept of the SR- series of motherboards; what I want in a motherboard is essentially a big break-out board for all the various features of the platform I'm running - and that is exactly what the SR- series is trying to offer.
However, I've got a few nits to pick with the SR-X, from what I've seen of it, because I don't think it goes far enough.
First off, RAM slots: I couldn't care less about aesthetics, but an asymmetric NUMA configuration is next-to-useless - there's potential for real performance issues in those configurations, and naively implemented NUMA-aware software won't behave nicely (see the sketch below). I'd much rather see 4 DIMMs per socket on both sockets, to free up board space and minimize trace lengths - 8 and 16GB DIMMs are readily available, and we haven't even seen LRDIMMs on the market yet; those should double capacities at similar prices per GB. Memory capacity will hardly be the limit of this platform.
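To put that NUMA concern in concrete terms, here's a minimal sketch - purely my own illustration, assuming Linux with libnuma, nothing SR-X-specific - of the per-node capacity check that naive software tends to skip:

```c
/* Sketch only: per-node memory query on Linux via libnuma (gcc ... -lnuma).
 * Naively NUMA-aware code often splits its working set as total/nodes per
 * node; with asymmetric DIMM population the smaller node overflows and
 * allocations spill over to the remote node. */
#include <numa.h>
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not supported here\n");
        return 1;
    }
    int nodes = numa_num_configured_nodes();
    for (int n = 0; n < nodes; n++) {
        long long free_b = 0;
        long long total = numa_node_size64(n, &free_b);
        printf("node %d: %lld MiB total, %lld MiB free\n",
               n, total >> 20, free_b >> 20);
    }
    /* e.g. 8 DIMMs on node 0 and 4 on node 1: an even split
     * overcommits node 1 long before node 0 is full. */
    return 0;
}
```

Software that just divides its working set by the node count is assuming every node is the same size; on an asymmetric board the smaller node fills first, and further "local" allocations silently land on the remote node, with the attendant latency and bandwidth hit.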
Secondly, I/O: why are we stuck with only dual GbE LANs and 4 USB3 ports? Why do we have only 7 PCI-E slots? The biggest advantage of the SR-X that I can see is in I/O-bound or lightly threaded workloads that need high configurability and performance, and the Romley-EP platform has FAR too many PCI-E lanes to simply break them all out into slots, even on an HPTX motherboard. As far as I can tell, if you went with 4 DIMMs/socket on both sockets, you'd have room for 8 FHFL PCI-E slots: each CPU socket provides 40 lanes of PCIE3, one of the sockets offers an additional 4 lanes of PCIE2 because it isn't connected to the Patsburg chipset, and Patsburg itself offers 8 PCIE2 lanes. From my perspective, the SR- series motherboards are about making the most of what the platform has to offer, and I don't think the current configuration quite achieves that. Personally, I'd like to see 8 PCI-E slots (4 per socket), configured electrically as PCIE3 x8, x16, x0/x8, and x16/x8 for each socket - this exposes all 40 PCIE3 lanes per socket to the user and leaves room for a PCIE3 x8 card even in quad-GPU scenarios (lane math below). That leaves the question of what to do with our 12 remaining PCI-E lanes.
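For anyone who wants to check my lane math, here's the budget I'm working from - these are my proposed numbers, not anything off EVGA's spec sheet:

```c
/* Back-of-envelope lane budget for the slot layout proposed above
 * (my numbers, not EVGA's). Each Sandy Bridge-EP socket has 40 PCIe 3.0
 * lanes; the x0/x8 and x16/x8 slots share a mux. */
#include <stdio.h>

int main(void) {
    const int budget = 40;                 /* PCIe 3.0 lanes per socket */
    int mux_empty[]  = {8, 16, 0, 16};     /* x0/x8 slot unpopulated    */
    int mux_filled[] = {8, 16, 8, 8};      /* x0/x8 slot populated      */

    int a = 0, b = 0;
    for (int i = 0; i < 4; i++) { a += mux_empty[i]; b += mux_filled[i]; }
    printf("per-socket lanes used: %d / %d (slot empty), %d / %d (filled)\n",
           a, budget, b, budget);

    /* Leftover PCIe 2.0: 4 lanes off the non-Patsburg socket plus 8 off
     * Patsburg itself = 12 lanes for the NIC / mini-PCIe ideas below. */
    printf("spare PCIe 2.0 lanes: %d\n", 4 + 8);
    return 0;
}
```

Either mux state sums to exactly 40 lanes per socket, so nothing from the PCIE3 root complex goes to waste, and the 12 leftover PCIE2 lanes are what the next paragraph spends.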
For the PCIE2 x4 coming directly off the socket without Patsburg, I'd really like to see a quad-GbE NIC at the very least - something along the lines of Intel's I350-AM4 controller would do the trick very nicely. I'd also like to see 2-4 of the lanes from Patsburg going to mini-PCIE slots with mSATA connections - those are nifty little slots that don't take much board real estate, and mSATA for SSD caching would be a nice way to use the space under the PCI-E cards. Needless to say, the native SATA and SAS interfaces on Patsburg should all be broken out, but that's already present on the current SR-X. Pretty much anything else should go to USB3 (or maybe FireWire or Thunderbolt to cater to the video-editing crowd), because why the hell not? This is a "build it and they will come" product model: it's not "because we need it", it's "because someone, somewhere, might want it, and even if they don't, we might as well." Let's take that sentiment and run with it!
I wouldn't normally go this far with armchair design suggestions, but this platform is likely to be the absolute peak of the high-end market for both Sandy Bridge-EP and Ivy Bridge-EP; if this high end has to last us two years until Haswell (or, hope against hope, a halfway-decent AMD platform :P), it'd better be done right.