EVGA

SR2 with PCIe Drives and SLI

Author
Trogdoor2010
Superclocked Member
  • Total Posts : 235
  • Reward points : 0
  • Joined: 2010/03/31 06:45:24
  • Status: offline
  • Ribbons : 0
2010/03/31 07:13:26 (permalink)
Hey all,
 
Long time lurker, first time poster. I was really impressed with the 4-way sli classified mobo and my mind was blown when the SR2 was revealed not too long ago. I am now certain that my next build will be an SR2, costs be damned! I work as an engineer, I have money, and I don't want/need a new car anytime soon.
 
In keeping with my costs-be-damned philosophy, I have been tentatively putting together a parts list for the build.
The list is FAR from complete and will likely change between now and when I order all my parts en masse, but one thing I would like to do that I have not seen mentioned here is use PCIe drives as my primary boot/program drives. Obviously, single-slot water cooling will be used for the graphics cards. I mean really...spending so much on graphics cards and high-end hardware only to run it on AIR? It's like buying a Ferrari with a limiter.
 
The product made by Fusion-IO:
http://www.fusionio.com/products/ioxtreme/
 
The Fusion-io drive is pretty impressive, especially if you put a pro and a regular card together on the same board in a RAID 0 configuration. Granted, the cost will be over 2K right now, and the things are not even bootable at the moment. That should change in the future, once OCZ comes out with similar technology at a lower price that can boot. (I think they have something out now, but I don't think it was very impressive, and it didn't speed up OS performance much over regular SSDs.)
 
How would a setup with two of these cards (likely around 160GB of total space, RAID 0) and a dedicated hardware RAID card linked to some SSDs (for all those games that just won't fit on the PCIe drives) fare if one were to use SLI, tri-SLI, or even the inevitable quad-SLI EVGA Fermi cards? I'd like to think the graphics cards will still get true x16 performance from their slots, but from what I read they might get limited to x8, which is unfortunate. I have read that for tri- and quad-SLI setups, being limited to x8 severely limits performance gains and scalability (even more so than using a tri/quad-SLI setup does in the first place).
 
I might just have to wait for the GTX 495 dual-GPU cards to come out for this setup to be viable with true x16 PCIe for both cards. They might even have a solution for this "micro-stuttering" phenomenon I've read about. Waiting isn't always a bad thing; with computers it only means that whatever you eventually buy will be better than what was available before at the same price.
 
 
 
 
 
 

System Specs:
Core2 quad q9450 2.66Ghz
XFX 780i Motherboard
4GB ram
ZOTAC GTX 480 1536MB Graphics Card
Geforce 9400 1024MB graphics card (For Tri-Screen Action)
700W Power Supply
Windows 7
160GB HD (OS+Progs)
750GB HD (Short Term Backup)
2TB JBOD Array (Long Term Storage)

Bottleneck? WAT!? :p
#1

42 Replies Related Threads

    The-Hunter
    Superclocked Member
    • Total Posts : 233
    • Reward points : 0
    • Joined: 2009/03/02 11:22:57
    • Status: offline
    • Ribbons : 1
    Re:SR2 with PCIe Drives and SLI 2010/03/31 08:49:23 (permalink)

The Fusion-io card does not let you boot from it. I have looked closely at this option for my own SR-2 build. You are best off with two Areca 1680i cards and 4 to 8 Intel V2 SSDs per card, depending on your budget. Make 100% sure you get the latest-generation Intel SSDs. It's the RAID chip that will cap you, not the PCIe bus; an x8 PCIe link gives you roughly 250 MB/s per lane * 8 lanes, in each direction. [link=http://www.areca.us/support/download/RaidCards/Documents/Manual_Spec/ARC1680_series_Specification.zip]http://www.areca.us/suppo...ries_Specification.zip[/link] Make sure you get the RAID card edition with enough connectors for what you have in mind.
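For anyone who wants to sanity-check the numbers, here is a quick back-of-the-envelope sketch (Python, purely illustrative) using the standard per-lane rates after 8b/10b encoding: roughly 250 MB/s for PCIe 1.x and 500 MB/s for PCIe 2.0, per direction.

[code]
# Theoretical one-way PCIe bandwidth, counting only the 8b/10b encoding overhead.
PER_LANE_MBPS = {"1.x": 250, "2.0": 500}   # MB/s per lane, per direction

def link_bandwidth_mbps(gen, lanes):
    """Theoretical one-way bandwidth of a PCIe link in MB/s."""
    return PER_LANE_MBPS[gen] * lanes

for gen in ("1.x", "2.0"):
    for lanes in (4, 8, 16):
        print(f"PCIe {gen} x{lanes}: ~{link_bandwidth_mbps(gen, lanes)} MB/s each way")
[/code]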

You can do 4-way SLI with the current water-cooled 480s today and have full x16 performance on those. I am not sure whether a fifth PCIe slot will run at x16 in that setup, with one slot at x8 for the second RAID card. In any case, I don't believe you will ever see a performance difference in real applications, only a theoretical maximum that can be measured on paper.
I have yet to experience anything close to this supposed x8-versus-x16 bottleneck myself. I am not sure that kind of bandwidth is even needed, since the focus now is on loading the GPU with more and more computation in games, such as tessellation and other routines that divide up the work, handle reflections, and so on. The power of these cards is not at 25 fps ray-tracing levels yet either.

I would not wait! Waiting is boring, and by the time there is something new, something newer is bound to come right after it.

    Cosmos II water cooled, EVGA SR-X, Intel E5-2687W x2, EVGA  Titan Black Hydrocopper signature x3, 1 x Dell 30" 308WFP,  96Gb 1600Mhz ram, Creative XB-X-FI,  256GB OCZ SSD, Storage controller: Areca 1222 in Raid 0 with 3 x, 2 TB Seagate HD, EVGA 1500W PSU
    __________________________________________________


    #2
    Trogdoor2010
    Superclocked Member
    • Total Posts : 235
    • Reward points : 0
    • Joined: 2010/03/31 06:45:24
    • Status: offline
    • Ribbons : 0
    Re:SR2 with PCIe Drives and SLI 2010/03/31 09:13:24 (permalink)
    So, here is what I am thinking:

    1: GTX480
    2: Areca 1680i
    3: GTX480
    4: ???
    5: GTX480
    6: Areca 1680i
    7: GTX480

Now, can the Areca 1680i cards work in tandem for RAID 0? Meaning, if I had 4 Intel V2 SSDs on each Areca card, could the cards be configured to work together and form essentially an 8-drive RAID array?

I know there is a point at which these cards get saturated and can no longer scale SSDs linearly, which is why I'd rather add another card with its own set of SSDs and have them work in tandem than hang more SSDs off the same RAID card for diminishing returns.

I hate the thought of being limited by some theoretical maximum in the x16-vs-x8 question when it comes to graphics card setups. It's just that once you get all four Fermi cards in there at x16 and add two RAID cards, having slots fall back to x8 is annoying, even if the difference may never be noticed.

    System Specs:
    Core2 quad q9450 2.66Ghz
    XFX 780i Motherboard
    4GB ram
    ZOTAC GTX 480 1536MB Graphics Card
    Geforce 9400 1024MB graphics card (For Tri-Screen Action)
    700W Power Supply
    Windows 7
    160GB HD (OS+Progs)
    750GB HD (Short Term Backup)
    2TB JBOD Array (Long Term Storage)

    Bottleneck? WAT!? :p
    #3
    The-Hunter
    Superclocked Member
    • Total Posts : 233
    • Reward points : 0
    • Joined: 2009/03/02 11:22:57
    • Status: offline
    • Ribbons : 1
    Re:SR2 with PCIe Drives and SLI 2010/03/31 09:21:58 (permalink)
    Trogdoor2010

    So, here is what I am thinking:

    1: GTX480
    2: Areca 1680i
    3: GTX480
    4: ???
    5: GTX480
    6: Areca 1680i
    7: GTX480

Now, can the Areca 1680i cards work in tandem for RAID 0? Meaning, if I had 4 Intel V2 SSDs on each Areca card, could the cards be configured to work together and form essentially an 8-drive RAID array?

I know there is a point at which these cards get saturated and can no longer scale SSDs linearly, which is why I'd rather add another card with its own set of SSDs and have them work in tandem than hang more SSDs off the same RAID card for diminishing returns.

I hate the thought of being limited by some theoretical maximum in the x16-vs-x8 question when it comes to graphics card setups. It's just that once you get all four Fermi cards in there at x16 and add two RAID cards, having slots fall back to x8 is annoying, even if the difference may never be noticed.


You can do this, yes, both in hardware and in software. I believe nesting the RAID 0s will get you the best performance: each adapter runs its own RAID 0 and presents a single volume, and you then software-RAID 0 across the two adapters. At least that beat the hardware option of spanning RAID controllers together on the Areca the last time I tried it. I believe it comes down to an I/O wait-state issue; that could be what is going on there.

Anyway, the software-based RAID 0 across two adapters gives you incredible, or should I say ridiculous, performance when done. Please post your findings when you are done!!!
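If it helps anyone reason about it, here is a rough back-of-the-envelope model (Python, purely illustrative; the controller ceiling and per-SSD figures are assumptions, not Areca measurements) of why the nested RAID 0 roughly doubles the ceiling:

[code]
# Rough model: each Areca runs its own hardware RAID 0, and the OS stripes
# (software RAID 0) across the two resulting volumes. All numbers here are
# illustrative assumptions, not measurements.
def adapter_throughput(n_ssds, per_ssd_mbps, controller_cap_mbps, link_cap_mbps):
    """Sequential throughput of one adapter's RAID 0 volume, in MB/s."""
    return min(n_ssds * per_ssd_mbps, controller_cap_mbps, link_cap_mbps)

one_card = adapter_throughput(n_ssds=4, per_ssd_mbps=250,
                              controller_cap_mbps=1200,   # assumed RAID-chip ceiling
                              link_cap_mbps=8 * 250)      # x8 link at PCIe 1.x rates
two_cards = 2 * one_card   # the software stripe roughly doubles the ceiling,
                           # ignoring CPU and interrupt overhead
print(one_card, two_cards)
[/code]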

I don't have the I/O schematics showing which slot goes to the 5520 controller and which is shared on the NF200, so I can't yet tell you which card to put on which bus for your other question. We will get that soon, though, as soon as the board ships to customers, I think. If you want an answer ASAP, I think people are under NDA, so only EVGA could reply if they want to.

     

    Cosmos II water cooled, EVGA SR-X, Intel E5-2687W x2, EVGA  Titan Black Hydrocopper signature x3, 1 x Dell 30" 308WFP,  96Gb 1600Mhz ram, Creative XB-X-FI,  256GB OCZ SSD, Storage controller: Areca 1222 in Raid 0 with 3 x, 2 TB Seagate HD, EVGA 1500W PSU
    __________________________________________________


    #4
    _NickM
    FTW Member
    • Total Posts : 1130
    • Reward points : 0
    • Joined: 2007/05/11 15:28:19
    • Status: offline
    • Ribbons : 3
    Re:SR2 with PCIe Drives and SLI 2010/03/31 19:07:11 (permalink)
If you run slots 1, 3, 5, and 7 only, you will have x16 on all of them through the NF200s.

If you put a card in slot 2, slot 1 drops to x8, and so on. The bottom-most slot 7 is always at x16 via NF200.
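Going only by that description, here is a little sketch of the resulting link widths (Python; the pairing of slots 3/4 and 5/6 is my reading of the "and so on" and may not be exactly how the board does it):

[code]
# Link widths per the description above: slots 1/3/5/7 alone run x16 through the
# NF200s; populating the even slot of a pair splits that pair into x8/x8, and
# slot 7 stays x16 regardless. (Pairing 2->1, 4->3, 6->5 is my assumption.)
SLOT_PAIRS = {2: 1, 4: 3, 6: 5}

def slot_widths(populated):
    """Return {slot: lane width} for a set of populated slots."""
    widths = {}
    for slot in populated:
        if slot == 7:
            widths[slot] = 16
        elif slot in SLOT_PAIRS:          # even slot: shares lanes with the odd slot above it
            widths[slot] = 8
            if SLOT_PAIRS[slot] in populated:
                widths[SLOT_PAIRS[slot]] = 8
        elif widths.get(slot) != 8:       # odd slot that has not been split yet
            widths[slot] = 16
    return widths

print(slot_widths({1, 3, 5, 7}))      # all four at x16
print(slot_widths({1, 2, 3, 5, 7}))   # slots 1 and 2 fall back to x8
[/code]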

May I ask, why two RAID cards? One of those cards alone supports way more than 8 drives.

     
    EU Questions? Contact me @ 
    nickm@evga.com | +49 89 18 90 49 - 27 
    #5
    Rudster816
    CLASSIFIED Member
    • Total Posts : 2080
    • Reward points : 0
    • Joined: 2007/08/03 22:07:51
    • Location: Eastern Washington
    • Status: offline
    • Ribbons : 18
    Re:SR2 with PCIe Drives and SLI 2010/03/31 19:15:14 (permalink)
With multiple graphics cards/RAID cards you don't need to pay much attention to whether your slots are running at x16 or x8.

The 5520 chipset only has 36 PCIe lanes, period. That is how much bandwidth you're going to get. The only advantage of the NF200 chips is being able to link every slot at a full x16 while still getting, say, ~x12 worth of bandwidth, which isn't possible without them.

The NF200 chips will not add bandwidth. I doubt it will matter, though. You won't be using all of that bandwidth at any one time, so I doubt you will notice a drop in performance when running multiple graphics cards/RAID cards.
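To put that in one line of arithmetic (Python, simplified; the even split across busy cards is just for illustration):

[code]
# The NF200s let every slot link at x16, but everything still funnels through
# the 5520's fixed upstream lanes. Figures are the ones from this thread
# (36 lanes, ~500 MB/s per PCIe 2.0 lane); the even split is a simplification.
UPSTREAM_LANES = 36
PER_LANE_MBPS = 500

def upstream_share(n_busy_cards):
    """MB/s each card gets if n cards demand bandwidth at the same moment."""
    return UPSTREAM_LANES * PER_LANE_MBPS / n_busy_cards

# Four GPUs plus two RAID cards all busy at once:
print(upstream_share(6))   # 3000.0 MB/s each under this naive even split
[/code]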

I also wonder why you want two RAID cards. A good RAID card will easily handle 8 SSDs.

    [22:00:32] NordicJedi: the only way i can read this chatroom is if i imagine you're all dead
     

    #6
    shamino
    EVGA Overclocking Evangelist
    • Total Posts : 375
    • Reward points : 0
    • Joined: 2008/09/03 19:19:39
    • Status: offline
    • Ribbons : 6
    Re:SR2 with PCIe Drives and SLI 2010/03/31 19:42:59 (permalink)
    #7
    Trogdoor2010
    Superclocked Member
    • Total Posts : 235
    • Reward points : 0
    • Joined: 2010/03/31 06:45:24
    • Status: offline
    • Ribbons : 0
    Re:SR2 with PCIe Drives and SLI 2010/03/31 21:29:48 (permalink)
    Rudster816

With multiple graphics cards/RAID cards you don't need to pay much attention to whether your slots are running at x16 or x8.

The 5520 chipset only has 36 PCIe lanes, period. That is how much bandwidth you're going to get. The only advantage of the NF200 chips is being able to link every slot at a full x16 while still getting, say, ~x12 worth of bandwidth, which isn't possible without them.

The NF200 chips will not add bandwidth. I doubt it will matter, though. You won't be using all of that bandwidth at any one time, so I doubt you will notice a drop in performance when running multiple graphics cards/RAID cards.

I also wonder why you want two RAID cards. A good RAID card will easily handle 8 SSDs.


It depends on the card. I saw an article once where they hooked up 8+ SSDs through a hardware RAID array and it didn't scale linearly after about 4 drives.

Obviously, with an adequately powerful (and expensive) card this won't be a problem. I've noticed that as RAID card quality increases, price rises exponentially; eventually you can get two lesser cards for the price of one more advanced card that is only 50% faster than the cheaper one. It all depends on the company, model, brand, etc., but that's basically why you'd consider it.

I'd be most interested in the cards that have the RAM slot. Putting 4GB of ECC memory into one of those should give extremely fast writes, like a drive buffer on uber steroids.
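Roughly speaking, the cache helps bursts but not sustained streams; a toy model (Python, all figures made up) of what the host would see:

[code]
# Writes smaller than the free cache complete at bus speed; sustained writes
# eventually fall back to the array's own write rate. All figures are made up.
def apparent_write_time(burst_mb, cache_mb, bus_mbps, array_mbps):
    """Seconds the host waits for a write burst with write-back caching."""
    cached = min(burst_mb, cache_mb)   # portion absorbed by the cache
    spill = burst_mb - cached          # remainder paced by the drives
    return cached / bus_mbps + spill / array_mbps

print(apparent_write_time(burst_mb=2000, cache_mb=4096, bus_mbps=4000, array_mbps=1000))
# ~0.5 s: the whole burst fits in a 4 GB cache
print(apparent_write_time(burst_mb=16000, cache_mb=4096, bus_mbps=4000, array_mbps=1000))
# ~12.9 s: most of a 16 GB write is paced by the drives
[/code]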

    System Specs:
    Core2 quad q9450 2.66Ghz
    XFX 780i Motherboard
    4GB ram
    ZOTAC GTX 480 1536MB Graphics Card
    Geforce 9400 1024MB graphics card (For Tri-Screen Action)
    700W Power Supply
    Windows 7
    160GB HD (OS+Progs)
    750GB HD (Short Term Backup)
    2TB JBOD Array (Long Term Storage)

    Bottleneck? WAT!? :p
    #8
    The-Hunter
    Superclocked Member
    • Total Posts : 233
    • Reward points : 0
    • Joined: 2009/03/02 11:22:57
    • Status: offline
    • Ribbons : 1
    Re:SR2 with PCIe Drives and SLI 2010/04/01 04:40:48 (permalink)
    EVGATech_NickM

If you run slots 1, 3, 5, and 7 only, you will have x16 on all of them through the NF200s.

If you put a card in slot 2, slot 1 drops to x8, and so on. The bottom-most slot 7 is always at x16 via NF200.

May I ask, why two RAID cards? One of those cards alone supports way more than 8 drives.

Two RAID cards because, with four PCIe x16 graphics cards plus one RAID card, the RAID card ends up bottlenecked on an x8 bus - or will it do 4 x16 and stop there? That would mean we cannot do quad SLI plus a RAID card delivering more than an x8 bus can handle, if we attach enough SSDs to feed it that bandwidth. There are people out there, including me, with more money than sense who want to do these kinds of things. At 250 MB/s x 8 lanes we are bandwidth limited, and one Areca card can take 24 SSDs if we wanted.
Hence 8 SSDs, peaking at around 250 MB/s each, per card max. RAIDing this together into an internal software RAID 0 for Avid DS 4K video files makes a lot of sense, as that application can eat and put to good use all the bandwidth you can throw at it.
     
I custom build these things and did most of the consulting on the clock sync of the original PCIe x1 design of the storage unit for that machine, before the business unit was sold off as Softimage DS and later renamed Avid DS. I was the guy who put together the storage solution for the first commercial product, and I still consult at times for some of the key players in the industry who need this effects and editing machine. Today this is also one of the few growing business areas in this market, as RED cameras are a massive hit and this is the only real post and effects production product available that handles raw RED files. www.red.com www.avid.com
     

We need details on this stuff to know what to purchase, or what to recommend to potential buyers who are now making up their minds. It would be great if these details were made public ASAP. I believe the workstation purchasers of this board need them.

Shamino has now shed new light on NF200 limitations and issues I was not aware of, per his last post. I think it's time we got all the details of what can be done, the limitations and the opportunities, out in the open here before we smash each other up with silly speculative arguments, arguments this community could avoid if we knew the design decisions as facts, not speculation.

Please help us out with as many details of the I/O design and layout as possible.
    post edited by The-Hunter - 2010/04/01 04:56:42

    Cosmos II water cooled, EVGA SR-X, Intel E5-2687W x2, EVGA  Titan Black Hydrocopper signature x3, 1 x Dell 30" 308WFP,  96Gb 1600Mhz ram, Creative XB-X-FI,  256GB OCZ SSD, Storage controller: Areca 1222 in Raid 0 with 3 x, 2 TB Seagate HD, EVGA 1500W PSU
    __________________________________________________


    #9
    The-Hunter
    Superclocked Member
    • Total Posts : 233
    • Reward points : 0
    • Joined: 2009/03/02 11:22:57
    • Status: offline
    • Ribbons : 1
    Re:SR2 with PCIe Drives and SLI 2010/04/01 05:02:42 (permalink)
    Trogdoor2010



It depends on the card. I saw an article once where they hooked up 8+ SSDs through a hardware RAID array and it didn't scale linearly after about 4 drives.

Obviously, with an adequately powerful (and expensive) card this won't be a problem. I've noticed that as RAID card quality increases, price rises exponentially; eventually you can get two lesser cards for the price of one more advanced card that is only 50% faster than the cheaper one. It all depends on the company, model, brand, etc., but that's basically why you'd consider it.

I'd be most interested in the cards that have the RAM slot. Putting 4GB of ECC memory into one of those should give extremely fast writes, like a drive buffer on uber steroids.


That is correct, though Areca has found ways to keep scaling all the way up to 8 SSDs; most cards and RAID control chips with typical designs can't handle it. There is also a new card out shortly that will scale to 16 of the fastest current SSDs. If you have the money, then with the info at hand it sounds like what I described is the best option. I doubt you will run four graphics cards eating all their x16 bandwidth at the same moment you consume the shared x8 connections via the NF200, so you should be fine if you need or want quad SLI. With tri-SLI you can add a single Areca 1680i if you only ever want 8 SSDs and nothing else on that card, and nothing more in any other slots, going by what Shamino posted today!



    Cosmos II water cooled, EVGA SR-X, Intel E5-2687W x2, EVGA  Titan Black Hydrocopper signature x3, 1 x Dell 30" 308WFP,  96Gb 1600Mhz ram, Creative XB-X-FI,  256GB OCZ SSD, Storage controller: Areca 1222 in Raid 0 with 3 x, 2 TB Seagate HD, EVGA 1500W PSU
    __________________________________________________


    #10
    Trogdoor2010
    Superclocked Member
    • Total Posts : 235
    • Reward points : 0
    • Joined: 2010/03/31 06:45:24
    • Status: offline
    • Ribbons : 0
    Re:SR2 with PCIe Drives and SLI 2010/04/01 05:26:42 (permalink)
Indeed, it seems tri-SLI with a RAID card in slot 7 is the only way to guarantee the maximum theoretical bandwidth out of every slot.

If you are not fully utilizing your graphics cards and don't need their bandwidth while transferring a large number of files through the RAID cards, will the x8 bandwidth available to a card in, say, slot 2 or 4 rise to something like x10 or x15, since the demand is coming from the RAID cards rather than the graphics cards?

It would solve just about all my worries if the board could dynamically scale the bandwidth allocated to each card based on its utilization. I suspect, however, that this is not the case.

In any case, if the RAID card is going to be limited to x8 speeds anyway, you might as well get one that can saturate just slightly more than an x8 PCIe link can handle. I am not as familiar with high-quality hardware RAID controllers as I should be; until recently most of my research was focused on PCIe drives like the Fusion-io.


    System Specs:
    Core2 quad q9450 2.66Ghz
    XFX 780i Motherboard
    4GB ram
    ZOTAC GTX 480 1536MB Graphics Card
    Geforce 9400 1024MB graphics card (For Tri-Screen Action)
    700W Power Supply
    Windows 7
    160GB HD (OS+Progs)
    750GB HD (Short Term Backup)
    2TB JBOD Array (Long Term Storage)

    Bottleneck? WAT!? :p
    #11
    The-Hunter
    Superclocked Member
    • Total Posts : 233
    • Reward points : 0
    • Joined: 2009/03/02 11:22:57
    • Status: offline
    • Ribbons : 1
    Re:SR2 with PCIe Drives and SLI 2010/04/01 05:35:02 (permalink)
    Trogdoor2010

Indeed, it seems tri-SLI with a RAID card in slot 7 is the only way to guarantee the maximum theoretical bandwidth out of every slot.

If you are not fully utilizing your graphics cards and don't need their bandwidth while transferring a large number of files through the RAID cards, will the x8 bandwidth available to a card in, say, slot 2 or 4 rise to something like x10 or x15, since the demand is coming from the RAID cards rather than the graphics cards?

It would solve just about all my worries if the board could dynamically scale the bandwidth allocated to each card based on its utilization. I suspect, however, that this is not the case.

In any case, if the RAID card is going to be limited to x8 speeds anyway, you might as well get one that can saturate just slightly more than an x8 PCIe link can handle. I am not as familiar with high-quality hardware RAID controllers as I should be; until recently most of my research was focused on PCIe drives like the Fusion-io.



The 5520 I/O hub sends bandwidth where it is needed among the devices linked to it and feeds that into the Xeons' QPI links. The wild card is how the two NF200s handle the balancing act on the buses connected to them. I have no clue at this point whether it is hardware scaling with some limitation on dynamic address allocation. There is talk of some I/O resource limitation issue.. interrupt issues? I am lost.. need details... as said.

    Cosmos II water cooled, EVGA SR-X, Intel E5-2687W x2, EVGA  Titan Black Hydrocopper signature x3, 1 x Dell 30" 308WFP,  96Gb 1600Mhz ram, Creative XB-X-FI,  256GB OCZ SSD, Storage controller: Areca 1222 in Raid 0 with 3 x, 2 TB Seagate HD, EVGA 1500W PSU
    __________________________________________________


    #12
    estwash
    New Member
    • Total Posts : 10
    • Reward points : 0
    • Joined: 2010/02/19 09:15:53
    • Status: offline
    • Ribbons : 0
    Re:SR2 with PCIe Drives and SLI 2010/04/01 09:36:43 (permalink)
This was my interest too - using a slot that water cooling "frees up" - but I have bandwidth questions.

3-way SLI with a c1060 in slot #7 - HOPING to put a RAID card in one of the slots freed up by the 2nd or 3rd GPU (most likely the 3rd) - but the bandwidth question and its impact on SLI performance is the issue...something I've not found a clear answer on even in the forums, folks disagree...and, being a noob, I'm still confused...
    #13
    _NickM
    FTW Member
    • Total Posts : 1130
    • Reward points : 0
    • Joined: 2007/05/11 15:28:19
    • Status: offline
    • Ribbons : 3
    Re:SR2 with PCIe Drives and SLI 2010/04/01 11:40:54 (permalink)
The Areca RAID cards are x8 bandwidth max. From what I have heard, you will start to get diminishing returns (but still increased performance overall) after 4-5 drives, as the onboard cache gets used up. The PCMark Vantage record was just broken a couple of days ago with 11 SSDs, one Areca 1680, and an E760 Classified. I personally don't think PCIe bandwidth would be your main concern in this scenario.
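A toy model of that diminishing-returns curve (Python; the 0.85 per-drive efficiency factor is an arbitrary illustration, not an Areca spec):

[code]
# Each added SSD contributes a bit less than the one before it once the
# controller's cache and processor start to saturate. The 0.85 factor is an
# arbitrary illustration, not a measured Areca figure.
def array_throughput(n_drives, per_drive_mbps=250, efficiency=0.85):
    """Estimated sequential MB/s for an n-drive RAID 0 on one controller."""
    return sum(per_drive_mbps * (efficiency ** i) for i in range(n_drives))

for n in (2, 4, 8, 11):
    print(n, round(array_throughput(n)))
[/code]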

     
    EU Questions? Contact me @ 
    nickm@evga.com | +49 89 18 90 49 - 27 
    #14
    Trogdoor2010
    Superclocked Member
    • Total Posts : 235
    • Reward points : 0
    • Joined: 2010/03/31 06:45:24
    • Status: offline
    • Ribbons : 0
    Re:SR2 with PCIe Drives and SLI 2010/04/01 11:50:04 (permalink)
    EVGATech_NickM

The Areca RAID cards are x8 bandwidth max. From what I have heard, you will start to get diminishing returns (but still increased performance overall) after 4-5 drives, as the onboard cache gets used up. The PCMark Vantage record was just broken a couple of days ago with 11 SSDs, one Areca 1680, and an E760 Classified. I personally don't think PCIe bandwidth would be your main concern in this scenario.


Bandwidth for the RAID card's own functions is not a concern, I agree.

The part I am concerned with is what effect the RAID card muscling in on a graphics card's bandwidth, limiting it to just x8, has on gaming performance in an SLI, tri-SLI, or quad-SLI configuration.

With just one card, I doubt it matters much (or so various articles have told me), but running a single card isn't nearly as awesome as an SLI setup, and according to Maingear at least, the Fermi cards scale very well and outperform even CrossFired 5970s in SLI or tri-SLI configurations.

    System Specs:
    Core2 quad q9450 2.66Ghz
    XFX 780i Motherboard
    4GB ram
    ZOTAC GTX 480 1536MB Graphics Card
    Geforce 9400 1024MB graphics card (For Tri-Screen Action)
    700W Power Supply
    Windows 7
    160GB HD (OS+Progs)
    750GB HD (Short Term Backup)
    2TB JBOD Array (Long Term Storage)

    Bottleneck? WAT!? :p
    #15
    The-Hunter
    Superclocked Member
    • Total Posts : 233
    • Reward points : 0
    • Joined: 2009/03/02 11:22:57
    • Status: offline
    • Ribbons : 1
    Re:SR2 with PCIe Drives and SLI 2010/04/01 15:28:45 (permalink)
    EVGATech_NickM

The Areca RAID cards are x8 bandwidth max. From what I have heard, you will start to get diminishing returns (but still increased performance overall) after 4-5 drives, as the onboard cache gets used up. The PCMark Vantage record was just broken a couple of days ago with 11 SSDs, one Areca 1680, and an E760 Classified. I personally don't think PCIe bandwidth would be your main concern in this scenario.


Aye, I stand corrected, the Areca 1680i series is indeed x8.

I had certainly mixed things up a bit; I found the data I was looking for, and it is all x8 based, not x16, so my bad on the 1680i being x16.

    Here is the card I was thinking of
    http://www.youtube.com/watch?v=ilPYNjBlf38

    8 port version available "immediately"

    ARC-1880LP 8 ports 1x SFF-8087 1x SFF-8088
    ARC-1880i 8 ports 2x SFF-8087
    ARC-1880x 8 ports 2x SFF-8088
    ARC-1880ix-12 12(+4 ext) ports 3x SFF-8087 1x SFF-8088, DIMM slot
    ARC-1880ix-16 16(+4 ext) ports 4x SFF-8087 1x SFF-8088, DIMM slot
    ARC-1880ix-24 24(+4 ext) ports 6x SFF-8087 1x SFF-8088, DIMM slot


And they are also all x8.

The products listed above should be available at the end of April. As for the chip and I/O, they are based on a Marvell 88RC9580 processor.

The new adapters are no longer based on I/O processors from Intel but run on a Marvell 88RC9580 processor. This chip is clocked at 800MHz and features an integrated 6G SAS controller. The cards have a PCI Express 2.0 x8 interface and come standard with 512MB of DDR2-533 memory.

    Source:


The DIMM slot on these cards should be upgradeable to 4 GB of DDR2 ECC with no compatibility issues.

One-way bus bandwidth should top out somewhere around the 2 GB/s mark, so connecting 24 SSDs, if anyone here were crazy enough to do that, would probably not be a good idea; that is, if the card even lets you push that much through it. I will wait until I can get two of the 1880ix-24s, stick them into the SR-2, and post some screenshots with performance numbers when they get to me. I will enable 8 SSDs at first and 16 SSDs later, when the new Intel drives come out this Q3 or Q4 (fall/winter). I won't try to hook up 16 before then, and I will only run Intel SSDs; I have crashed and burned 2 OCZ SSDs on my Areca so far. (This is a general issue related to all SSDs, namely wear from heavy usage, and is not directly related to any OCZ products.) All the Intel SSDs I have ever purchased are still running fine.
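For what it's worth, the raw link math (Python; the ~250 MB/s per SSD figure is an assumption for the Intel drives of the day, and real controllers will land below the theoretical ceiling):

[code]
import math

LINK_MBPS = 8 * 500        # PCIe 2.0 x8, ~500 MB/s per lane, each way (theoretical)
PER_SSD_MBPS = 250         # assumed sequential rate per SSD

drives_to_saturate = math.ceil(LINK_MBPS / PER_SSD_MBPS)
print(f"x8 link ceiling: ~{LINK_MBPS} MB/s each way")
print(f"SSDs needed to reach it: ~{drives_to_saturate}; anything past that just idles on big reads")
[/code]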

That is, if nobody beats me to examining the same thing and publishes results showing that it fails badly. As secondary, slower storage for the Avid DS I will use 24 HDs in RAID 6.
I will try to see how that works together with an InfiniBand card via IBM, a Mellanox ConnectX dual-port 4x QDR HCA (MHQH29B-XTR).

For graphics I will do this together with a fully enabled 512-shader version of the Fermi chip in some future revision of the card. Or, if EVGA could be so nice as to get a supply of those, call the non-crippled (not 480-shader) Fermi something flashy like Classified, and water cool it, then I would get some of those ASAP, and it would be awesome: three single-chip 512-shader cards for tri-SLI.
Or I will get them once the die-shrink version of the full product is out, or once they get a card out on the current die with the full shader set enabled as the chip was designed.
Until either a die shrink or a 512-shader version of the card happens, I won't go shopping for any of those Fermi chips.

Before that happens, I will play around with the SR-2 at various overclocks just for kicks and let it do bigadv folding or similar, until all the parts are fully tested and, of course, actually out and shipped to me. I don't expect any of this to be available and in my hands within the next 3 months; these things tend to take time. If it all works out fine, I will go talk to some Avid DS buddies who use this with the RED camera and have the same needs.

    For video I/O I will use this. It comes bundled in the Avid DS 10.3 package along with some other stuff that goes with it. It enables this:
    HD-SDI/SDI, SMPTE-259/292/296,  Dual-link HD 4:4:4, 2K HSDL

This is my daily workstation that I do everything on: word processing, office work, web browsing, a few VMs (some huge MS SQL ERP and OLAP databases, an Exchange server for my house, a link to the company Exchange server, and a few more things), as well as playing my games. On top of that come most Adobe products, most Autodesk products, and the Avid DS software; I have all of that from a past project and tend to play around with it to test things and see what I can make happen.

If it works out on my private testing-ground machine, I will put this new workstation config to real use by purchasing more SR-2 mobos and parts, putting them into the studio, and letting people use the tried and tested solution for post-production and editing work. Talking to more people who have exactly the same needs goes along with that. If I get graphics driver problems I will start hassling my IBM buddies for driver fixes or whatever other issues I run into; I have a feeling I will need a tweaked graphics driver from their IBM workstation series to make this work with Avid DS. It would be a driver-only fix they could help me with, though; this is not an IBM box..

All of this private fun stuff will go into a new case that Spotswood is working on designing. Build logs will come when we get closer to having the parts and the case design done.

    That is.. unless.. .. 
If EVGA does think this is going to fail badly because of I/O resources, please let me know. If I had all the I/O specs for the mobo I could do some risk assessment, talk to more people, and figure out whether I am wasting my time or yours here. But I don't have that PCIe NF200 chipset info at hand.

When I search Google I come up blank on the details of the NF200 I/O design. I could talk to some helpful, close contacts and get info on the chip and how it might work with what is on the SR-2, but I have yet to do that; after all, the SR-2 is not even out the door yet. Until today's post on this forum that 7 x 480s will not work in one machine because of I/O resources, I was not worried. After today's post by EVGA's OC guru, I am now in a worried state of mind, hence this post, so you have the full picture of what I am thinking of doing. The exact details of what exactly the issue is would be a nice reply.
    post edited by The-Hunter - 2010/04/01 15:52:38

    Cosmos II water cooled, EVGA SR-X, Intel E5-2687W x2, EVGA  Titan Black Hydrocopper signature x3, 1 x Dell 30" 308WFP,  96Gb 1600Mhz ram, Creative XB-X-FI,  256GB OCZ SSD, Storage controller: Areca 1222 in Raid 0 with 3 x, 2 TB Seagate HD, EVGA 1500W PSU
    __________________________________________________


    #16
    russianhaxor
    iCX Member
    • Total Posts : 454
    • Reward points : 0
    • Joined: 2005/11/01 17:39:51
    • Location: San Diego
    • Status: offline
    • Ribbons : 3
    Re:SR2 with PCIe Drives and SLI 2010/04/01 18:54:08 (permalink)
It is my humble opinion that running those Areca RAID cards between GTX 480s could potentially be dangerous. Granted, I assume you'll be using water cooling and everything; I just feel like it will get TOO hot in there even with water blocks... you would need to create quite a bit of additional airflow to ensure no dead RAID cards.

Also, I was reading TechPowerUp and they tested x16 and x8 bandwidth with GTX 480s and only lost 2% of performance... so it's not really going to impact your performance much, if at all.

     
    #17
    Spotswood
    iCX Member
    • Total Posts : 268
    • Reward points : 0
    • Joined: 2009/08/01 17:19:50
    • Location: New Hampshire, USA
    • Status: offline
    • Ribbons : 7
    Re:SR2 with PCIe Drives and SLI 2010/04/01 19:23:52 (permalink)
    russianhaxor

It is my humble opinion that running those Areca RAID cards between GTX 480s could potentially be dangerous. Granted, I assume you'll be using water cooling and everything; I just feel like it will get TOO hot in there even with water blocks... you would need to create quite a bit of additional airflow to ensure no dead RAID cards.


    Nothing a custom water block or two couldn't fix.  hehe

    #18
    linuxrouter
    CLASSIFIED Member
    • Total Posts : 4605
    • Reward points : 0
    • Joined: 2008/02/28 14:47:45
    • Status: offline
    • Ribbons : 104
    Re:SR2 with PCIe Drives and SLI 2010/04/01 19:26:27 (permalink)
    russianhaxor

It is my humble opinion that running those Areca RAID cards between GTX 480s could potentially be dangerous. Granted, I assume you'll be using water cooling and everything; I just feel like it will get TOO hot in there even with water blocks... you would need to create quite a bit of additional airflow to ensure no dead RAID cards.
     


    I agree. I have an Adaptec SAS controller sitting below a 295 and it gets to be very hot without active cooling. I ended up having to add a side-panel fan blowing air over the card and a PCI-slot exhaust fan below the RAID card to keep it cool.

    CaseLabs M-S8 - ASRock X99 Pro - Intel 5960x 4.2 GHz - XSPC CPU WC - EVGA 980 Ti Hybrid SLI - Samsung 950 512GB - EVGA 1600w Titanium
    Affiliate Code: OZJ-0TQ-41NJ
    #19
    _NickM
    FTW Member
    • Total Posts : 1130
    • Reward points : 0
    • Joined: 2007/05/11 15:28:19
    • Status: offline
    • Ribbons : 3
    Re:SR2 with PCIe Drives and SLI 2010/04/01 20:13:00 (permalink)
Something like this can work, at least according to a Newegg reviewer of an Areca 1231: http://www.performance-pcs.com/catalog/index.php?main_page=product_info&products_id=2110 It looks like the 1680 card has similar mounting points.

     
    EU Questions? Contact me @ 
    nickm@evga.com | +49 89 18 90 49 - 27 
    #20
    SAL36864
    New Member
    • Total Posts : 11
    • Reward points : 0
    • Joined: 2010/02/17 19:45:44
    • Status: offline
    • Ribbons : 0
    Re:SR2 with PCIe Drives and SLI 2010/04/01 20:38:45 (permalink)
The 1680 has an operating temperature range of +5°C to +50°C, and the GTX 480 Hydro Copper FTW has an advertised temperature of 49°C (and I would think you could get better cooling than that; according to EVGA_JacobF they use a 360 rad with low-speed fans on a loop with the CPU), so assuming you have decent case airflow, it should not be much of an issue.
    #21
    russianhaxor
    iCX Member
    • Total Posts : 454
    • Reward points : 0
    • Joined: 2005/11/01 17:39:51
    • Location: San Diego
    • Status: offline
    • Ribbons : 3
    Re:SR2 with PCIe Drives and SLI 2010/04/01 21:35:49 (permalink)
    Spotswood

    russianhaxor

It is my humble opinion that running those Areca RAID cards between GTX 480s could potentially be dangerous. Granted, I assume you'll be using water cooling and everything; I just feel like it will get TOO hot in there even with water blocks... you would need to create quite a bit of additional airflow to ensure no dead RAID cards.


    Nothing a custom water block or two couldn't fix.  hehe



Theoretically, yes. But would you want to, or could you even, run the RAID card's water block on the same loop as the GPUs?


Can I get a hell no? :P

     
    #22
    0xdeadbeef
    New Member
    • Total Posts : 45
    • Reward points : 0
    • Joined: 2007/05/28 11:45:59
    • Status: offline
    • Ribbons : 0
    Re:SR2 with PCIe Drives and SLI 2010/04/01 23:44:48 (permalink)
    Some info about water blocks from MIPS for Adaptec RAID controller:

    http://www.xtremesystems.org/forums/showthread.php?t=245607

    They aren't released yet but I think you can preorder if you're interested.
    #23
    The-Hunter
    Superclocked Member
    • Total Posts : 233
    • Reward points : 0
    • Joined: 2009/03/02 11:22:57
    • Status: offline
    • Ribbons : 1
    Re:SR2 with PCIe Drives and SLI 2010/04/02 03:32:58 (permalink)
As SAL36864 said, cooling of the Areca series is a non-issue. I have one stuck in between my Hydro Copper 285s at 750 MHz, and the fan on the side of my case blowing over the cards gives more than enough airflow to the Areca; it never goes above 45°C. Mine does not run quite as hot as the 1680 would, but it still gives a good indication. The Areca 1222 I have seems to have the same fan mounted on it as the 1680 series.

Though, if you are going to get this mobo and need or just want the "best" RAID card out there, wait for the new 1880; it will be on the market and available to consumers in less than 30 days. It has a passive CPU cooler on it, though, so you might want to swap that out or mount a tiny fan on it, just to be 100% safe.

I don't understand why people put water blocks on things that make no noise; I can't hear mine from one foot (30 cm) away, and it does not run anywhere near hot.
    post edited by The-Hunter - 2010/04/02 03:37:58

    Cosmos II water cooled, EVGA SR-X, Intel E5-2687W x2, EVGA  Titan Black Hydrocopper signature x3, 1 x Dell 30" 308WFP,  96Gb 1600Mhz ram, Creative XB-X-FI,  256GB OCZ SSD, Storage controller: Areca 1222 in Raid 0 with 3 x, 2 TB Seagate HD, EVGA 1500W PSU
    __________________________________________________


    #24
    estwash
    New Member
    • Total Posts : 10
    • Reward points : 0
    • Joined: 2010/02/19 09:15:53
    • Status: offline
    • Ribbons : 0
    Re:SR2 with PCIe Drives and SLI 2010/04/02 08:22:26 (permalink)
    on still being the noob...thank you all, I actually believe I understand the bandwidth issue - and, yes, heat issues and so on.

    I really need the RAID card, and actually after following up on the upcoming cards, I'm going to wait for one of those.  I'll have good air flow/air change in my MM case, and still hoping the heat of the 3way sli watercooled won't be any issue for the sandwiched RAID card.

    I am more persuaded though, that I'll wait through Q2 for any changes/revisions or "improved" (512 core) 400 series.  I don't mind waiting into the summer...I'll have to wait for the SR-2 anyway.  Perhaps, by then, nVidia will have released the c2050 with more than x2 of the c1060 performance at less heat/power...I can only hope. 

    also, PLEASE, Hunter, I can't find out anymore data on software switching for my folding work and gaming time - the stuff you mentioned in the other thread where I asked about this "freed up" x8 slot for my RAID with 3way sli and Tesla as 4 cards...etc.  would appreciate any input. 

    also, have personal and family health investments into the folding@home purpose.  we hope someday, with such an incredible working "background" network of such a diverse group of people globally (work pc's and gamer's pc's), they'll find cures for such things as only currently brings sorrow and loss to families such as my own.  I'm hoping the CUDA aspect of the Fermi 400's in 3way along with the Tesla, will be useful for folding@home background use, while I still enjoy the system in both work and play...just a noob shiver, hehe.

    thx all.
    #25
    estwash
    New Member
    • Total Posts : 10
    • Reward points : 0
    • Joined: 2010/02/19 09:15:53
    • Status: offline
    • Ribbons : 0
    Re:SR2 with PCIe Drives and SLI 2010/04/02 08:30:48 (permalink)
    Please Spotswood, how do I talk to you privately about your cases?  I am forced into a MountainMods case, appreciate them, but would rather go another route, and I am amazed at your build logs I've found.  This will be my last case for many reasons, and I'd prefer "art" along with my tech...how can I contact you, and where can I see SR-2 compatible case options?  thx :)

    EDIT: I was referring to the "Custom Wooden" case thread...I originally found you through a google search, before seeing you were on the EVGA forums already...
    post edited by estwash - 2010/04/02 08:35:44
    #26
    The-Hunter
    Superclocked Member
    • Total Posts : 233
    • Reward points : 0
    • Joined: 2009/03/02 11:22:57
    • Status: offline
    • Ribbons : 1
    Re:SR2 with PCIe Drives and SLI 2010/04/02 10:15:02 (permalink)
    estwash

    on still being the noob...thank you all, I actually believe I understand the bandwidth issue - and, yes, heat issues and so on.

    I really need the RAID card, and actually after following up on the upcoming cards, I'm going to wait for one of those.  I'll have good air flow/air change in my MM case, and still hoping the heat of the 3way sli watercooled won't be any issue for the sandwiched RAID card.

    I am more persuaded though, that I'll wait through Q2 for any changes/revisions or "improved" (512 core) 400 series.  I don't mind waiting into the summer...I'll have to wait for the SR-2 anyway.  Perhaps, by then, nVidia will have released the c2050 with more than x2 of the c1060 performance at less heat/power...I can only hope. 

    also, PLEASE, Hunter, I can't find out anymore data on software switching for my folding work and gaming time - the stuff you mentioned in the other thread where I asked about this "freed up" x8 slot for my RAID with 3way sli and Tesla as 4 cards...etc.  would appreciate any input. 

    also, have personal and family health investments into the folding@home purpose.  we hope someday, with such an incredible working "background" network of such a diverse group of people globally (work pc's and gamer's pc's), they'll find cures for such things as only currently brings sorrow and loss to families such as my own.  I'm hoping the CUDA aspect of the Fermi 400's in 3way along with the Tesla, will be useful for folding@home background use, while I still enjoy the system in both work and play...just a noob shiver, hehe.

    thx all.


estwash, I am glad there are good people like yourself and all the other folders on this planet. I am sure some day we will have results. There was recently a breakthrough where one unit folded together, so something went forward thanks to this work we do. I don't know the ins and outs, but I know we are helping a good cause, and with our help things will happen for the good of us all.

Regarding folding versus gaming: all I suggest you do is turn off SLI beyond a maximum of 4 cards. I do not believe you can SLI more than that anyway, but that is more than enough for gaming or any other entertainment reason you may have for SLI. SLI in effect bundles the cards' performance together via the bridge connector that you clip on top of the cards.

In regards to whether we can, in theory, stick 7 GTX 480s in one machine or not, we do not yet have a single reply from EVGA on that question. As we said, any cards above 4 would be for pure folding in any case. It should not be hard for EVGA to test this and tell us the result, or to explain in detail what I have been asking. But unless I get more data from somewhere on exactly what the NF200 chips do to provide more PCIe lanes, I cannot really help beyond asking for info and looking at it once I have received those details.

I do not believe EVGA made a board that cannot do 4 SLI cards plus the rest of the machine I described needing for my own purposes. In my case that is 4 EVGA GTX 480s (512-shader version, when it is out).

    My setup would be something like this. 

PCIe 1: GTX 480
PCIe 2: ARC-1880ix-24
PCIe 3: GTX 480
PCIe 4: ARC-1880ix-24 / Mellanox ConnectX dual-port 4x QDR HCA MHQH29B-XTR (depending on the SSD performance over one Areca; the end result is that I will keep more or fewer of the normal HDs in my workstation depending on how the Areca 1880 performs)
PCIe 5: GTX 480
PCIe 6: AJA-OEM2K
PCIe 7: GTX 480

I find it rather strange that EVGA can't reply to this, but they reply to other posts ASAP. Please help me out here, dear EVGA product managers :-)

Also, I asked EVGA in a post what the limitation is if you fill every slot with GTX 480 cards and use the machine for folding only, with no SLI, so that I can reply to you as well as possible.
I would have imagined this should be a rather simple question to answer. I certainly understand that SLI would not work across all the cards, and that only one PCIe slot would run at x16 speed, but as for the rest of the logic of why this cannot be done, as I posted in my huge posts above, I have no idea of the reasons, so I cannot give you an answer.. Let's hope EVGA gets around to replying. Alternatively, send over details of the NF200 in a private message so at least I can calm down.

    Cosmos II water cooled, EVGA SR-X, Intel E5-2687W x2, EVGA  Titan Black Hydrocopper signature x3, 1 x Dell 30" 308WFP,  96Gb 1600Mhz ram, Creative XB-X-FI,  256GB OCZ SSD, Storage controller: Areca 1222 in Raid 0 with 3 x, 2 TB Seagate HD, EVGA 1500W PSU
    __________________________________________________


    #27
    estwash
    New Member
    • Total Posts : 10
    • Reward points : 0
    • Joined: 2010/02/19 09:15:53
    • Status: offline
    • Ribbons : 0
    Re:SR2 with PCIe Drives and SLI 2010/04/02 12:27:00 (permalink)
    Hunter - thx for the words :)  family appreciates it. 

    originally, was going to go straight folding w/4way and c1060's...1 c1060 is amazing dif in Adobe work, unless people use it, there's really no way to show the CUDA benefits.

    BUT, once I saw the sr-2 was coming, and, since have already budget prepped for MM case for folding, am going to go the sr-2 route instead of the 4way class board.  AND, here's my planned setup:
    pci-e 1&2 - gtx 480 (or wait til summer brings 485 with full core use)
    pci-e 3&4 - same
    pci-e 5&6 - same
    pci-e 7 - c2050 (if out in summer, or take c1060 from other sys and use it in mean while)

    wanted to put RAID in freed up slot 6 (water cooled gpu's)...

I was hoping to SLI for personal and work use, and since the Fermi 400 series is CUDA-based, I hoped the GPUs could also serve as GPGPUs alongside the Tesla c1060 (or upcoming c2050)...

    I am persuaded at this point, that it's going to be functional and beneficial to use the 6th slot for RAID card, while wc'ing the gpu's.  the Tesla card gets hot, but don't think wc'ing is an issue there.

my guilty question, seriously, is about being able to fold@home in the background and yet still play my games in SLI across 3 monitors.

I'm looking forward to Adobe's upgrades, and Fermi's support of them for work...and, am anxious to see the high quality of gaming on high-res monitors........AND, hope it doesn't hinder the folding@home activity...DO I NEED TO interrupt anything in order to "game" and "work" while the system keeps folding?  I'm hoping not to, but if it does, I'll leave this sys for folding and keep my Asus sys for gaming/work, just upgrade 1 gpu...if I have to.

    thx
    #28
    Spotswood
    iCX Member
    • Total Posts : 268
    • Reward points : 0
    • Joined: 2009/08/01 17:19:50
    • Location: New Hampshire, USA
    • Status: offline
    • Ribbons : 7
    Re:SR2 with PCIe Drives and SLI 2010/04/02 14:16:30 (permalink)
    estwash

    Please Spotswood, how do I talk to you privately about your cases?  I am forced into a MountainMods case, appreciate them, but would rather go another route, and I am amazed at your build logs I've found.  This will be my last case for many reasons, and I'd prefer "art" along with my tech...how can I contact you, and where can I see SR-2 compatible case options?  thx :)

    EDIT: I was referring to the "Custom Wooden" case thread...I originally found you through a google search, before seeing you were on the EVGA forums already...


    I sent you a PM. 

    And I too like to blend art with technology.    
    #29
    _NickM
    FTW Member
    • Total Posts : 1130
    • Reward points : 0
    • Joined: 2007/05/11 15:28:19
    • Status: offline
    • Ribbons : 3
    Re:SR2 with PCIe Drives and SLI 2010/04/02 14:45:36 (permalink)
Sorry, but we tested in-house with a couple of water-blocked cards and a couple of RAID controllers on the SR-2, and it will not fit.

The Hydro Copper cards are slightly thicker than a single slot from the point where the VREG heatsink starts. A shorter card like a network adapter or sound card will fit, but a longer card like the Areca controllers (tested with a 1210 and a 1231, or longer GPUs) will press against the VREG heatsink on the Hydro Copper cards. Cards can be about 6 inches long before they make contact with the VREG heatsink.

    post edited by EVGATech_NickM - 2010/04/02 14:51:31

     
    EU Questions? Contact me @ 
    nickm@evga.com | +49 89 18 90 49 - 27 
    #30