EVGA

X299 Manuals posted

Showing page 2 of 3
Author
Iamrogue
Superclocked Member
  • Total Posts : 227
  • Reward points : 0
  • Joined: 2010/01/24 01:58:41
  • Location: P(r)oland
  • Status: offline
  • Ribbons : 4
Re: X299 Manuals posted 2017/07/11 09:19:33 (permalink)
slidey
Iamrogue, do you think cable management will still work well if you leave the grommets in, or is that too tight?



I left them in and there was no problem with anything, at all.

7980XE @ 4,8GHz | 32GB @ 3800 16/17/17/38 | X299 DARK | 2xGTX1080Ti @ 2101MHz | 1600W T2 | CPU loop=420mm HWLabs + D5 + eLOOP @ 25% | GPU loop=480mm HWLabs + D5 + eLOOP @ 25% | Merlin
Max:


GPU x1:
https://www.3dmark.com/spy/3203275
GPU SLI:
https://www.3dmark.com/spy/3460940 https://www.3dmark.com/fs/15137311 https://www.3dmark.com/fs/15137359
#31
Sajin
EVGA Forum Moderator
  • Total Posts : 37090
  • Reward points : 0
  • Joined: 2010/06/07 21:11:51
  • Location: Texas, USA.
  • Status: offline
  • Ribbons : 195
Re: X299 Manuals posted 2017/07/11 10:38:19 (permalink)
Iamrogue
tusharsingal
Great manuals! Any idea when the board might actually come out?
 
Any concerns regarding OC'ing a 7900X on the X299 Micro's single 8-pin connector? 


4,7GHz is 100%
all my chips do this clock, and i tried 14 so far :)


14 7900x's? 

Why so many?

Want to save 5 to 10% on your next EVGA purchase? Just click on the associates banner to save, or enter the associates code at checkout on your next purchase. If you choose to use my code I want to personally say "Thank You" for using it.
 

 
#32
Iamrogue
Superclocked Member
  • Total Posts : 227
  • Reward points : 0
  • Joined: 2010/01/24 01:58:41
  • Location: P(r)oland
  • Status: offline
  • Ribbons : 4
Re: X299 Manuals posted 2017/07/11 10:45:14 (permalink)
Sajin
 
14 7900x's? 

Why so many?


Looking for the one I will use myself; selecting, or whatever you would like to call it.

#33
Jbj5000
Superclocked Member
  • Total Posts : 127
  • Reward points : 0
  • Joined: 2017/06/26 18:05:07
  • Status: offline
  • Ribbons : 0
Re: X299 Manuals posted 2017/07/11 11:01:09 (permalink)

#34
tusharsingal
New Member
  • Total Posts : 17
  • Reward points : 0
  • Joined: 2014/10/04 22:37:36
  • Status: offline
  • Ribbons : 0
Re: X299 Manuals posted 2017/07/11 11:14:16 (permalink)
Iamrogue
tusharsingal
Great manuals! Any idea when the board might actually come out?
 
Any concerns regarding OC'ing a 7900X on the X299 Micro's single 8-pin connector? 


4,7GHz is 100%
all my chips do this clock, and i tried 14 so far :)


That's not my concern. der8auer noted dangerous levels of power going through a single 8-pin when OC'd; I wanted to hear EVGA's thoughts.
#35
TECH_DaveB
EVGA Alumni
  • Total Posts : 4893
  • Reward points : 0
  • Joined: 2008/09/26 17:03:47
  • Status: offline
  • Ribbons : 46
Re: X299 Manuals posted 2017/07/13 00:48:30 (permalink)
willdearborn
TECH_DaveB
willdearborn
@TECH_DaveB
 
Hey Dave sorry but I just have one more question about the X299 Dark.
 
I know you said PE6 would only get x4 lanes from the PCH with a 28-lane CPU. My question is: would that be adequate for a card that is there simply to drive extra DVI displays? I'm sure x4 lanes are fine for a card I'm not gaming on, but since they would go through the PCH, is this okay? Would it affect M.2 or SATA? I plan on using all 8x SATA and 2x M.2 slots.
 
My setup would be 2x GTX 1080s in SLI in PE1 and PE3, and then a GTX 660 in PE6 simply because I have 2 extra dual-link-DVI-only displays; all that card does is drive those monitors. Could I get away with a 28-lane CPU in that case? I don't want to have to buy active DP to Dual Link DVI adapters just for this purpose, for several reasons: #1 cost, #2 DP hot plug detect, #3 no adapters is just preferable.
 
Thanks so much for the info!




PE6 will be x4 from the PCH, which in most cases will not support a GPU; when used for video output, cards are supposed to have a minimum of x8. I have seen cards work fine, albeit SLOWLY, in x4 slots, and other times not be detected at all, so since they are meant to run at x8 I cannot say 100% that it will work as you want. Enabling PE6 will disable the 80mm M.2 slot, which by default also removes Optane support for the board.
Also, you cannot run a card in PE3 with a 28-lane CPU; it will be PE1 and PE4. You can run the 3 cards in PE1, PE2, PE4 and they should work as intended, but I know that is a bit more cramped than you wanted.
 



Thanks for the reply Dave, that helps a lot. I guess I'll have to opt for a 44-lane CPU to get the setup I want. So with a 44-lane CPU I would be able to use PE1 (x16), PE3 (x8), PE6 (x8), but I see you mention that if PE6 is used, the 80mm M.2 slot switches to PCH lanes. Would this stop me from using 2x M.2 slots for RAID 0, since one would be using CPU lanes and the other PCH lanes? And why does this even happen, given that with the 3 GPUs we are only at 32 lanes? Shouldn't that be enough to still use 8 more lanes for 2x M.2s from the CPU?


I think it will prevent it, as I do not think you can use RAID across CPU- and PCH-controlled slots.
 
As for the 32-lane argument, it is not as simple as lanes being in a pool to be used. CPU-based lanes are assigned in blocks of 4: 0-3, 4-7, 8-11, 12-15. The last 2 blocks are (typically) the sharable ones, as the first 4 lanes have to be there for the next 4 to work, and so on. There is a component that the lanes shared between 2 slots go through, and the allocation is detection based: if PE2 shares with PE1, then when PE2 is populated, PE1 loses its 8 lanes (8-11 & 12-15) to PE2. PE2 can't just pull 12-15 if it holds a sound card, because 12-15 are traced (wired, in essence) to the second set of 4 lanes on the slot; you would have the second set present without the first, and the second set cannot work without the primary 4 (the 8-11) being filled. So plugging in a video card, or a USB add-in card, will pull the 3rd and 4th blocks of PCIe lanes from PE1.
Because this board has 5 x16 slots, some interesting routing was done to make it work; 4 is MUCH easier. A component like the one mentioned above assigns the lanes to PE6 and the 80mm M.2. PE6 has 2 sets of 4 lanes from the CPU and one block of 4 lanes from the PCH; when a card using x8 lanes is in PE6, the M.2 receives x4 from the PCH. When PE6 is empty, or holds an x4 or x1 card, the M.2 receives x4 lanes from the CPU. All of this assumes you have a 44-lane processor.
 
This is also where some of the juggling act comes in with 44-, 28-, and 16-lane procs, because not all segments can pull lanes from the CPU when some sections are empty. For the sake of a diagram, think of the CPUs as horizontal rows in Excel: the 44-lane proc on top with 11 groups of 4 lanes, the 28-lane under it with 7 groups of 4, and the 16-lane with 4 groups of 4. However, the 28- and 16-lane procs do NOT have their lanes all grouped together, so there are 4-lane sections that are blank. On top of that, there are only so many board layers, and only so many additional components you want to add to handle allocation across different configs. So it is basically: look at the 16-lane proc and see what it takes to make PE1 run at x16/x8 and PE2 at x8; make sure that lines up with the lanes for the 28-lane proc AND the 44 (the 44 is easy); work out how to allocate the best combo of parts between 5 x8 slots (when all are used), 1 x4 slot, U.2, and M.2; find where all the break points are for what is lost with a given CPU, and which peripheral will be disabled due to lack of lanes or sharing of lanes; and have all of this work around the limitations of traces and, IIRC, a 12-layer PCB. Crosstalk and various other electrical concerns exist as well, not just how to allocate the resources.
 
If PCIe lanes were a pool, you would be 100% right about having the spare lanes; however, that is simply not how it works. I do not have access to the wiring diagrams anymore as I am no longer an employee, but if that was confusing I can make a quick and dirty diagram to clarify, if this left you with more questions than it answered. I know PCIe lane routing is kind of a pain, but I hope this sheds a little light on the question I have received many times over the years: "But I am not using all the lanes, why can't I use xxx and yyy together?"
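The detection-based switching described above can be sketched as a toy model. This is illustrative only, not EVGA's actual routing tables; the slot names and block assignments follow the examples in the post:

```python
# Toy model of detection-based PCIe lane allocation (illustrative only,
# NOT the board's actual routing).  CPU lanes come in 4-lane blocks.

def pe1_pe2_split(pe2_populated: bool) -> dict:
    """PE1 owns four 4-lane blocks (0-3, 4-7, 8-11, 12-15).
    Blocks 8-11 and 12-15 are traced to PE2 as a pair, so ANY card in
    PE2 (even an x4 sound card) pulls both blocks away from PE1."""
    if pe2_populated:
        return {"PE1": 8, "PE2": 8}
    return {"PE1": 16, "PE2": 0}

def m2_80mm_source(pe6_card_width: int) -> str:
    """PE6 has two 4-lane CPU blocks plus one 4-lane PCH block (44-lane
    CPU assumed).  An x8 card in PE6 consumes both CPU blocks, so the
    80mm M.2 falls back to x4 from the PCH; empty, x1, or x4 in PE6
    leaves the M.2 on CPU lanes."""
    return "PCH x4" if pe6_card_width >= 8 else "CPU x4"

print(pe1_pe2_split(True))   # PE1 drops to x8 when PE2 is populated
print(m2_80mm_source(8))     # x8 card in PE6 pushes the M.2 onto the PCH
```

The point the model makes concrete: lanes are not a free pool; they move in fixed, pre-traced blocks, which is why "unused" lanes cannot simply be reassigned.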
#36
willdearborn
CLASSIFIED Member
  • Total Posts : 2318
  • Reward points : 0
  • Joined: 2008/01/04 18:54:42
  • Status: offline
  • Ribbons : 5
Re: X299 Manuals posted 2017/07/13 15:36:56 (permalink)
TECH_DaveB
Thanks again Dave. What a great explanation! I guess to do all the things I really want to do such as SLI with M.2 RAID 0,  I will have to forget about the 3rd card. At this point I'm thinking Dual Link DVI to DP adapters are the way to go so I can connect all 4 of my monitors to the primary card. Thanks again for the great explanation, you are the man! I will be sure to refer people to this post who have a question about PCIe lane allocation.

EVGA RTX 2080 Ti • EVGA X299 Dark • Intel Core i7 9800X • 32GB G.Skill TridentZ Black
2x Samsung 970 EVO 500GBs • 3x Samsung 860 EVO 1TBs • 2x WD Black 2TBs
43" 4K Wasabi Mango UHD430 • Lian Li PC D600WB • EVGA 1600T2


#37
TECH_DaveB
EVGA Alumni
  • Total Posts : 4893
  • Reward points : 0
  • Joined: 2008/09/26 17:03:47
  • Status: offline
  • Ribbons : 46
Re: X299 Manuals posted 2017/07/13 17:58:23 (permalink)
What is your exact monitor config again? If you are using the SLI set for a high-end monitor, and a smaller card just to have access to other screens (not as part of a surround setup or something like that), there are some options you might consider.
If you need 2x DVI @ 1080p from the other card, how about PE1 and PE4 for the SLI set? Then in PE3 use a single-slot dual-DVI card, or in PE2 any dual-DVI card.
If you are running the HP monitors in your sig, that will be more of a challenge: you will need DL-DVI ports to run their native resolution of 2560x1600, which is a different challenge altogether. If you can give a little more detail on the exact end game you are looking for, I think I can help find a solution that will work, just maybe not HOW you originally thought it would.
#38
willdearborn
CLASSIFIED Member
  • Total Posts : 2318
  • Reward points : 0
  • Joined: 2008/01/04 18:54:42
  • Status: offline
  • Ribbons : 5
Re: X299 Manuals posted 2017/07/14 14:52:11 (permalink)
TECH_DaveB
You are correct: I am using 2x GTX 1080s in SLI for the 4K monitor in my sig, connected to the HDMI 2.0 port on the primary 1080. I also have the 3x 30" monitors in my sig that run at 2560x1600 and are Dual Link DVI only. They are not in surround; each one is a single monitor. One of them runs off the Dual Link DVI port on the primary GTX 1080, and I have a GTX 660 in there to drive the other two. Right now I have an ASUS Rampage IV Extreme, and it allows me to have all 3 cards installed with a space between each for optimal airflow. I would really like to have an empty slot below each 1080; if I sandwich them, temps go up considerably. Here is my current setup:
 
 

 
Ideally I would like to get rid of the GTX 660 so I wouldn't have to worry about having this exact slot setup, but doing so would require me to buy 3x active Dual Link DVI to DP adapters (to drive 2560x1600) that are $100+ each, and I CANNOT STAND the fact that DisplayPort has the hot-plug-detect "feature": when monitors are put to sleep or turned off, any windows or icons on that monitor get rearranged because Windows thinks the monitor has been unplugged, when it has really only been put to sleep. DVI and HDMI don't have this issue, but it is part of the DisplayPort spec and is a nightmare for multi-monitor users. (Ugh... who thought that was a good idea?)
 
The bottom line is I need 3x Dual Link DVI ports and 1 HDMI 2.0 port with SLI, while also keeping a slot between each card for airflow. And I'd like to purchase the X299 Dark to accomplish this. But I'd also like to put 2x M.2 SSDs in RAID 0 and use all 8x SATA ports.
 
My 30" Dual-Link-DVI-only monitors are perfectly fine and I have no need to replace them (not to mention 16:10 monitors are becoming scarce, I hate 16:9 1440p, and like I said above, DisplayPort is a no-go for me due to hot plug detect), but Nvidia seems intent on moving away from DVI, and it's been a complete nightmare for me ever since they made this decision.
 
If you have any solution to my problem it would be appreciated, but I really don't think one exists; I have been thinking about this for the last few years and there is no board/GPU configuration that offers what I need. I can't even upgrade to 1080 Tis, as I need blower-style cards to keep temps in check in this config (blower cards have also gone out of style... BOO!!!!), and the only ones that exist have NO DVI ports at all!!! (I guess I could go to a full custom water setup, but that would be another $1200+.)
 
There seems to be nothing I can do which would allow me to do EVERYTHING I want. I could forgo the M.2 RAID 0 and, I guess, run a single drive off PCH lanes? Or buy 3x active DL-DVI to DP adapters (another $330 on top of $1000 for a 7900X, which I don't even need other than for the PCIe lanes; Intel sucks for giving the 7820X only 28 lanes!).
 
I am trying to get everything I want while not having to go with a custom water loop, simply to keep the price somewhat in check. Right now it seems any upgrade in CPU/platform/NVMe SSD would actually be a downgrade in GPU temps, and that's why I'm still rocking an X79 with 1080s.
 
Thanks for taking the time to look into my setup Dave! It is very much appreciated!
post edited by willdearborn - 2017/07/14 15:22:21

#39
TECH_DaveB
EVGA Alumni
  • Total Posts : 4893
  • Reward points : 0
  • Joined: 2008/09/26 17:03:47
  • Status: offline
  • Ribbons : 46
Re: X299 Manuals posted 2017/07/14 23:37:37 (permalink)
Well, I have 2 suggestions, as the end game is SLI w/ 4K, plus 3x 1600p monitors, and NVMe RAID.
 
I would have to check with one of the PM guys internally at the company, as I am about 90% sure this would work: run your cards in PE1/4/6 like originally suggested, then in PE2/3 run M.2 PCIe risers and put the NVMe drives there. I think you can put them into RAID from there, but the in-house PM guys can check on that for you.
 
OR
 
X299 supports VROC (Virtual RAID on CPU) natively. Out of the box it should support NVMe RAID 0, and if you have the VROC key for the header it should support other RAID functions as well. As this is a new technology there may be some growing pains involved, and I have not used it personally; however, I was pretty well read on it as of 1-1.5 months ago and it seems like a really viable option, though I have not seen the VROC cards hit the market yet.
 
OR
 
OK, 3 suggestions then, and this may not be a great idea, but since you are not stressing the 660 very hard, maybe run one of the riser ribbons on it and place the card OVER PE6 without plugging into it. I have seen them used to mount GPUs vertically in full-tower cases, so in theory that should work just fine, and it would be cheap and easy. Run the 1080s in PE1/3, then PE4 has the ribbon, and it runs under the edge of the lower 1080. That might work out quite well; I haven't tried it, but it seems viable. Some of that will depend on your case, though.
#40
willdearborn
CLASSIFIED Member
  • Total Posts : 2318
  • Reward points : 0
  • Joined: 2008/01/04 18:54:42
  • Status: offline
  • Ribbons : 5
Re: X299 Manuals posted 2017/07/15 15:08:59 (permalink)
TECH_DaveB
Thanks for the suggestions Dave. I will definitely put some thought into those. One of those might end up working for me.
 
Or what if I was to forget about NVMe RAID 0 and just run a single M.2? Using a 44-lane CPU, would I then be able to use PE1/PE3/PE6 at x16/x8/x8 and still use a single M.2 on PCH lanes?
 
Or would that same config (PE1/PE3/PE6) work with 2x M.2s both off the PCH lanes? From what I see, RAID 0 is possible with both M.2s on PCH lanes. Or are any of the M.2 slots on this board CPU lanes only?
 
Thanks for all your help. Also, do you know if they are planning on posting the Dark manual soon?
post edited by willdearborn - 2017/07/16 08:34:27

#41
TECH_DaveB
EVGA Alumni
  • Total Posts : 4893
  • Reward points : 0
  • Joined: 2008/09/26 17:03:47
  • Status: offline
  • Ribbons : 46
Re: X299 Manuals posted 2017/07/16 13:26:43 (permalink)
willdearborn
Honestly, I am not sure how much difference you will see between a single NVMe drive and NVMe in RAID 0, though I would love to find out.
You can use your desired PCIe config and run 1 NVMe drive; it would have to be in the 110mm slot, not the 80mm. Also, the 110mm slot is CPU PCIe only; it does not get lanes from the PCH. Again, I would have to have one of the PM guys check with the engineers, but it may be possible to run an M.2 in the 110mm slot and use a PCIe riser to put the other M.2 in a slot. The reason we need to check is that these things are possible in principle, but I am not sure the board is built to do them, as this board is more about overclocking than running really tricked-out storage configs; it may work, I just can't say 100%.
#42
willdearborn
CLASSIFIED Member
  • Total Posts : 2318
  • Reward points : 0
  • Joined: 2008/01/04 18:54:42
  • Status: offline
  • Ribbons : 5
Re: X299 Manuals posted 2017/07/16 14:04:57 (permalink)
TECH_DaveB
Thanks for letting me know. I will be fine with 1 NVMe drive if that's all I can do. I just thought it would be cool to try out 2 in RAID 0, but running all my displays on native connections with good airflow for the cards is much more important to me.

#43
TECH_DaveB
EVGA Alumni
  • Total Posts : 4893
  • Reward points : 0
  • Joined: 2008/09/26 17:03:47
  • Status: offline
  • Ribbons : 46
Re: X299 Manuals posted 2017/07/16 19:41:43 (permalink)
Understood, and agreed there.
VROC may be an option, but that may come down the road a bit. I'm not sure how it will be implemented, but it would be the top-performing desktop storage solution once the early-adopter bugs are worked out.
#44
willdearborn
CLASSIFIED Member
  • Total Posts : 2318
  • Reward points : 0
  • Joined: 2008/01/04 18:54:42
  • Status: offline
  • Ribbons : 5
Re: X299 Manuals posted 2017/07/21 17:16:03 (permalink)
I've been looking at the ASUS X299 manuals, and their boards have both M.2 slots going through the PCH lanes, so not only can you put NVMe drives into RAID 0 via IRST (without the need for VROC), but since they aren't on CPU lanes they don't affect PCIe slot lanes at all. I wonder why EVGA chose to route one of the M.2 slots through CPU lanes, making RAID 0 impossible and also taking up valuable CPU lanes for storage? With ASUS boards (and a 44-lane CPU) you can have 2 cards in SLI, both @ x16, plus 2x NVMe M.2 drives in RAID 0, and still have an extra PCIe slot @ x8. That way uses all the CPU lanes for PCIe slots and leaves all 24 PCH lanes for storage. Seems like the way EVGA allocated the lanes isn't as useful as it could have been.

#45
Locutus494
Superclocked Member
  • Total Posts : 110
  • Reward points : 0
  • Joined: 2013/09/24 05:42:30
  • Status: offline
  • Ribbons : 0
Re: X299 Manuals posted 2017/07/22 13:21:51 (permalink)
willdearborn
What?! It's better to have the M.2 slots go through the CPU, especially when you have 44 lanes available; EVGA dropped the ball by not having both M.2 slots off the CPU. Having the M.2 slots go through the PCH bottlenecks them, especially in RAID setups. Remember, the interface between the PCH and the CPU is equivalent to only four PCIe lanes, so devices like M.2 SSDs, USB ports, LAN, and onboard audio all have to share four PCIe lanes' worth of bandwidth to the CPU.
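Rough numbers make the bottleneck obvious. The per-lane and drive figures below are round approximations for PCIe 3.0 and a high-end NVMe drive, not measurements:

```python
# Back-of-envelope: why PCH-attached NVMe RAID 0 gains little.
# All figures are round approximations.
PCIE3_LANE_MBS = 985            # usable bandwidth of one PCIe 3.0 lane, MB/s
DMI3_MBS = 4 * PCIE3_LANE_MBS   # PCH-to-CPU link is ~ a PCIe 3.0 x4 pipe
DRIVE_MBS = 3200                # typical high-end NVMe sequential read

single = min(DRIVE_MBS, DMI3_MBS)      # one drive fits under the cap
raid0 = min(2 * DRIVE_MBS, DMI3_MBS)   # two drives striped behind the PCH
                                       # hit the cap, not 2x the drive speed
print(DMI3_MBS, single, raid0)
```

So a PCH-side RAID 0 of two fast drives tops out just above single-drive speed, which is the point being made about CPU-attached M.2 slots.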

Digital Storm Hailstorm (Corsair 800D), EVGA X58 Classified 3, Intel Core i7 Extreme Edition 990X 3.46GHz Six-Core, 24GB Kingston HyperX T1 1600MHz DDR3, 2x EVGA GTX 780 Dual Classified Hydro Copper, 30" Dell U3011 2560x1600, 2x 600GB Intel 320 SSD RAID 0 (OS), 2x 2TB Western Digital Black Edition 7200 RPM 64MB Cache RAID 0 (data), Sound Blaster X-Fi Titanium Fatal1ty Champion, Corsair HX1200 PSU, Windows 10 Pro 64-bit.
#46
willdearborn
CLASSIFIED Member
  • Total Posts : 2318
  • Reward points : 0
  • Joined: 2008/01/04 18:54:42
  • Status: offline
  • Ribbons : 5
Re: X299 Manuals posted 2017/07/22 17:14:37 (permalink)
.
 
post edited by willdearborn - 2017/07/25 17:37:12

EVGA RTX 2080 Ti • EVGA X299 Dark • Intel Core i7 9800X • 32GB G.Skill TridentZ Black
2x Samsung 970 EVO 500GBs • 3x Samsung 860 EVO 1TBs • 2x WD Black 2TBs
43" 4K Wasabi Mango UHD430 • Lian Li PC D600WB • EVGA 1600T2


#47
willdearborn
CLASSIFIED Member
  • Total Posts : 2318
  • Reward points : 0
  • Joined: 2008/01/04 18:54:42
  • Status: offline
  • Ribbons : 5
Re: X299 Manuals posted 2017/07/23 06:48:46 (permalink)
Locutus494
 
What?! It's better to have the M.2 slots through the CPU, especially when you have 44 lanes available. EVGA dropped the ball by not having both M.2 slots off the CPU. Having the M.2 slots go through the PCH bottlenecks them, especially in RAID setups. Remember, the interface between the PCH and the CPU is equivalent to only four PCIe lanes, so devices like M.2 SSDs, USB ports, LAN, and onboard audio all have to share four PCIe lanes' worth of bandwidth to the CPU.




 
I looked a little more into it and you are right. All 24 PCH lanes are capped by a x4 connection to the CPU, so any RAID array going through the PCH would see little to no increase in speed over a single drive. The only way to have a RAID 0 array at full speed is to connect all the drives to CPU lanes, and that requires Intel VROC. VROC allows up to 20 drives (with the use of add-in cards) in RAID 0, but the only way to make a BOOTABLE partition using VROC is to use Intel SSDs. That's right: no Samsung 960s or any other brand will be bootable on VROC. ONLY INTEL BRAND! So there doesn't seem to be any way to get a full-speed bootable RAID 0 array using Samsung SSDs on X299. I guess I really will be sticking to a single SSD so I can use a Samsung 960. Intel has really lost their minds with this platform. You have to spend $1000 just to get 44 lanes, and VROC only lets you make a bootable RAID array using Intel's crappy SSDs. Not to mention if you want any other RAID level (1/5/10) on VROC you have to purchase a separate VROC key, which isn't even on sale yet, and no one knows the price. Most people think anywhere from $99-$299. Just to enable other RAID levels!
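To see why CPU-attached M.2 plus SLI can still fit, here's a quick sketch of the CPU-lane accounting on a 44-lane part. The slot widths are illustrative; this isn't a claim about any specific board's routing:

```python
# CPU-lane budget sketch for a 44-lane X299 processor.
CPU_LANES = 44

def lanes_left(*widths):
    """Remaining CPU lanes after allocating each device's link width."""
    used = sum(widths)
    if used > CPU_LANES:
        raise ValueError(f"over budget: {used} > {CPU_LANES}")
    return CPU_LANES - used

# Two x16 GPUs plus two x4 NVMe drives kept on CPU lanes for VROC:
print(lanes_left(16, 16, 4, 4))  # 4 lanes to spare
```

So two full x16 GPUs and two CPU-attached x4 drives add up to 40 lanes, leaving a little headroom on a 44-lane chip.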
 
 
post edited by willdearborn - 2017/07/25 15:06:14

EVGA RTX 2080 Ti • EVGA X299 Dark • Intel Core i7 9800X • 32GB G.Skill TridentZ Black
2x Samsung 970 EVO 500GBs • 3x Samsung 860 EVO 1TBs • 2x WD Black 2TBs
43" 4K Wasabi Mango UHD430 • Lian Li PC D600WB • EVGA 1600T2


#48
willdearborn
CLASSIFIED Member
  • Total Posts : 2318
  • Reward points : 0
  • Joined: 2008/01/04 18:54:42
  • Status: offline
  • Ribbons : 5
Re: X299 Manuals posted 2017/07/25 15:17:59 (permalink)
I am sorry Dave but I just have one more question. I wish the Dark manual was available so I could look for myself, but...
 
So I understand that if PE6 is used, one of the M.2 slots switches to PCH lanes. Does this mean that if PE6 is populated, both M.2 slots are connected to PCH lanes? I am just wondering if it is at all possible (with a 44-lane CPU) to have any configuration where both M.2 slots come from the same type of lanes to allow RAID? Even if it's through the PCH?
 
Two scenarios I can think of are:
-#1 For IRST RAID: If you were to populate PE6, would both M.2 slots then be coming from the PCH? If I were to populate PE1/PE4/PE6 @ 16x/16x/8x, would both M.2 slots be going through the PCH and therefore able to use IRST RAID?
-#2 for VROC: If you used slots PE1/PE2/PE4 @ 16x/8x/16x (without populating PE6) is it possible to have M.2 RAID 0 through CPU lanes using VROC?
 
Would either or both of these scenarios work for M.2 RAID 0, and do I have the PCIe slot bandwidth correct? Thanks again, Dave, you have been super helpful.
post edited by willdearborn - 2017/07/25 17:43:13

EVGA RTX 2080 Ti • EVGA X299 Dark • Intel Core i7 9800X • 32GB G.Skill TridentZ Black
2x Samsung 970 EVO 500GBs • 3x Samsung 860 EVO 1TBs • 2x WD Black 2TBs
43" 4K Wasabi Mango UHD430 • Lian Li PC D600WB • EVGA 1600T2


#49
TECH_DaveB
EVGA Alumni
  • Total Posts : 4893
  • Reward points : 0
  • Joined: 2008/09/26 17:03:47
  • Status: offline
  • Ribbons : 46
Re: X299 Manuals posted 2017/07/26 18:43:29 (permalink)
willdearborn
I am sorry Dave but I just have one more question. I wish the Dark manual was available so I could look for myself, but...
 
So I understand that if PE6 is used, one of the M.2 slots switches to PCH lanes. Does this mean that if PE6 is populated, both M.2 slots are connected to PCH lanes? I am just wondering if it is at all possible (with a 44-lane CPU) to have any configuration where both M.2 slots come from the same type of lanes to allow RAID? Even if it's through the PCH?
 
Two scenarios I can think of are:
-#1 For IRST RAID: If you were to populate PE6, would both M.2 slots then be coming from the PCH? If I were to populate PE1/PE4/PE6 @ 16x/16x/8x, would both M.2 slots be going through the PCH and therefore able to use IRST RAID?
-#2 for VROC: If you used slots PE1/PE2/PE4 @ 16x/8x/16x (without populating PE6) is it possible to have M.2 RAID 0 through CPU lanes using VROC?
 
Would either or both of these scenarios work for M.2 RAID 0, and do I have the PCIe slot bandwidth correct? Thanks again, Dave, you have been super helpful.




Not a problem, always happy to help.
OK, if you populate PE6 with an x8/x16 card then yes, the 80mm M.2 will be set to PCH lanes; however, if you use an x4 or x1 card, like a sound card or Ethernet card, it should stay on 4 CPU lanes.
 
#1: No, the 110mm M.2 only uses CPU-based lanes when using a 44-lane processor.
 
#2: VROC will only work through the CPU-based lanes (Virtual RAID On CPU) as long as the config allows adequate CPU lanes to the given slot, and the config you suggested would give you x8/x8/x16 and up to x8 for VROC on PE6.
 
Scenario 2 would work well, I think.
 
And not a problem, I am glad to help. I find the technical stuff interesting, which is why I planned to stick around with the community when I left anyway; then they made me a moderator. Always feel free to ask me ANYTHING (computer-wise) and I will do my best to help out. I am a no-BS kinda guy: if I don't know, I will tell you.
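The switching rule Dave describes for the 80mm M.2 can be sketched as a toy function. The name, signature, and threshold here are my own illustration of the stated behavior, not EVGA firmware logic:

```python
from typing import Optional

def m2_80mm_lane_source(pe6_card_width: Optional[int]) -> str:
    """Which lanes feed the 80mm M.2 slot for a given card width in PE6.

    Per the rule described above: the slot keeps its 4 CPU lanes unless
    PE6 holds an x8-or-wider card, in which case it falls back to PCH.
    """
    if pe6_card_width is not None and pe6_card_width >= 8:
        return "PCH"  # an x8/x16 card in PE6 claims the CPU lanes
    return "CPU"      # slot empty, or an x1/x4 card (sound, Ethernet)

print(m2_80mm_lane_source(None))  # PE6 empty
print(m2_80mm_lane_source(4))     # x4 sound card
print(m2_80mm_lane_source(16))    # x16 GPU
```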
#50
willdearborn
CLASSIFIED Member
  • Total Posts : 2318
  • Reward points : 0
  • Joined: 2008/01/04 18:54:42
  • Status: offline
  • Ribbons : 5
Re: X299 Manuals posted 2017/07/27 17:06:34 (permalink)
TECH_DaveB
 
Not a problem, always happy to help.
OK, if you populate PE6 with an x8/x16 card then yes, the 80mm M.2 will be set to PCH lanes; however, if you use an x4 or x1 card, like a sound card or Ethernet card, it should stay on 4 CPU lanes.
 
#1: No, the 110mm M.2 only uses CPU-based lanes when using a 44-lane processor.
 
#2: VROC will only work through the CPU-based lanes (Virtual RAID On CPU) as long as the config allows adequate CPU lanes to the given slot, and the config you suggested would give you x8/x8/x16 and up to x8 for VROC on PE6.
 
Scenario 2 would work well, I think.
 
And not a problem, I am glad to help. I find the technical stuff interesting, which is why I planned to stick around with the community when I left anyway; then they made me a moderator. Always feel free to ask me ANYTHING (computer-wise) and I will do my best to help out. I am a no-BS kinda guy: if I don't know, I will tell you.




Ok, I see now. So one of the M.2 slots is always going through CPU lanes no matter what. So with EVGA X299 boards, NVMe RAID through standard IRST on PCH lanes is not possible. Not a big deal since it's limited by DMI 3.0 anyway. But it is good to hear that both M.2 slots default to CPU lanes (unless you populate PE6 with an 8x or higher card). VROC would be a good option in that case. It's just too bad Intel plans on limiting bootable arrays on VROC to Intel-branded drives only. It's a real bummer Intel chose to do this, considering their own NVMe SSDs aren't all that great. They must think this will sell SSDs for them; I can't see another reason to limit VROC this way. Other than maybe it's a way to push their Optane drives? Whatever the reason, giving us less choice by artificially limiting this is just another shady move by Intel with X299 (just like limiting the full 44 lanes to the 7900X and up). Anyway, thanks for all the help Dave! I think with all your explanations I am completely clear on PCIe/M.2 for the Dark.

EVGA RTX 2080 Ti • EVGA X299 Dark • Intel Core i7 9800X • 32GB G.Skill TridentZ Black
2x Samsung 970 EVO 500GBs • 3x Samsung 860 EVO 1TBs • 2x WD Black 2TBs
43" 4K Wasabi Mango UHD430 • Lian Li PC D600WB • EVGA 1600T2


#51
iskren87
New Member
  • Total Posts : 15
  • Reward points : 0
  • Joined: 2016/07/19 15:17:10
  • Status: offline
  • Ribbons : 0
Re: X299 Manuals posted 2017/07/28 07:05:27 (permalink)
I am still curious where the M.2 slots go on the Dark version... The space under that plate looks too small for them? Are there any pictures of that mobo without the plate?
Also, I am planning to use 4x GPUs and probably one M.2 SSD. At what speed will they work, since the M.2 slot is routed to the CPU lanes?
post edited by iskren87 - 2017/07/28 07:09:12
#52
Tuxedo.
New Member
  • Total Posts : 67
  • Reward points : 0
  • Joined: 2016/03/26 11:59:08
  • Status: offline
  • Ribbons : 1
Re: X299 Manuals posted 2017/07/28 07:48:24 (permalink)
Both M.2 slots are under the plastic shroud; both will get cooled by the small fan.
#53
willdearborn
CLASSIFIED Member
  • Total Posts : 2318
  • Reward points : 0
  • Joined: 2008/01/04 18:54:42
  • Status: offline
  • Ribbons : 5
Re: X299 Manuals posted 2017/07/31 19:31:09 (permalink)
Hey Dave,
 
I thought I had all my questions answered about the Dark but I just have a few more about the PCIe slots.
 
1- If you were to use a 44-lane CPU, is there any slot configuration that allows 16x/16x/8x? Would this work in PE1/PE3/PE6? Or PE1/PE4/PE6? Or both? Or if 3 slots are populated, are you stuck with 16x/8x/8x?
 
2- If you were to use a 28-lane CPU, is there any slot configuration that allows 8x/8x/8x? Would this work in PE1/PE4/PE6? I know you said PE3 is disabled with a 28-lane CPU, so I'm assuming this is the only config that might be possible?

EVGA RTX 2080 Ti • EVGA X299 Dark • Intel Core i7 9800X • 32GB G.Skill TridentZ Black
2x Samsung 970 EVO 500GBs • 3x Samsung 860 EVO 1TBs • 2x WD Black 2TBs
43" 4K Wasabi Mango UHD430 • Lian Li PC D600WB • EVGA 1600T2


#54
wjerla
New Member
  • Total Posts : 57
  • Reward points : 0
  • Joined: 2009/05/09 09:46:54
  • Status: offline
  • Ribbons : 0
Re: X299 Manuals posted 2017/08/01 04:37:25 (permalink)
@TECH_DaveB
 
Do you know if the U.2 connectors on the Dark are CPU, PCH, or a mixture of both?
 
Thanks!
#55
EVGATech_LeeM
EVGA Forum Moderator
  • Total Posts : 1331
  • Reward points : 0
  • Joined: 2016/11/04 14:43:35
  • Location: Brea, CA
  • Status: offline
  • Ribbons : 14
Re: X299 Manuals posted 2017/11/21 14:43:30 (permalink)
Added the Dark to the OP.
#56
Gawl86
New Member
  • Total Posts : 21
  • Reward points : 0
  • Joined: 2016/01/26 14:32:36
  • Status: offline
  • Ribbons : 0
Re: X299 Manuals posted 2017/11/23 13:35:19 (permalink)
Finally!!!
#57
Locutus494
Superclocked Member
  • Total Posts : 110
  • Reward points : 0
  • Joined: 2013/09/24 05:42:30
  • Status: offline
  • Ribbons : 0
Re: X299 Manuals posted 2017/11/23 13:48:09 (permalink)
EVGATech_LeeM
Added the Dark to the OP.


FINALLY! A board that has both M.2 slots run off the CPU, while having a two-way SLI setup!

Digital Storm Hailstorm (Corsair 800D), EVGA X58 Classified 3, Intel Core i7 Extreme Edition 990X 3.46GHz Six-Core, 24GB Kingston HyperX T1 1600MHz DDR3, 2x EVGA GTX 780 Dual Classified Hydro Copper, 30" Dell U3011 2560x1600, 2x 600GB Intel 320 SSD RAID 0 (OS), 2x 2TB Western Digital Black Edition 7200 RPM 64MB Cache RAID 0 (data), Sound Blaster X-Fi Titanium Fatal1ty Champion, Corsair HX1200 PSU, Windows 10 Pro 64-bit.
#58
sam nelson
iCX Member
  • Total Posts : 370
  • Reward points : 0
  • Joined: 2014/01/06 21:07:07
  • Status: offline
  • Ribbons : 9
Re: X299 Manuals posted 2017/11/24 10:33:06 (permalink)
Will, I've been reading and studying the manual, and I think you can run PE1 at x8, PE2 at x8, PE4 at x8, and PE6 at x8 for 4-way SLI. Then you can use the two U.2 ports with two of Intel's 900p SSDs, which each work as a PCIe x4 device, and RAID 0 them in CPU VROC. You will get 64 Gbit/s of bandwidth. To get the full 128 Gbit/s you have to drop back to 2-way. But for me, 4-way and RAID 0 with 2 cards, wow. You must have the full 44 lanes to do it; that's why I got my delidded 7980XE. 
post edited by sam nelson - 2017/11/25 05:18:28
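Sam's lane and bandwidth arithmetic checks out at a rough level. A quick sketch, assuming PCIe 3.0's raw 8 Gbit/s per lane and treating the slot widths as he describes them (not verified against the board's actual routing):

```python
# Checking the 4-way SLI + 2x U.2 lane math described above.
slots = [8, 8, 8, 8]    # PE1/PE2/PE4/PE6 at x8 each for 4-way SLI
u2 = [4, 4]             # two Intel 900p U.2 drives, x4 each, in VROC RAID 0

total_lanes = sum(slots) + sum(u2)
print(total_lanes)      # 40 lanes: fits a 44-lane CPU, not a 28-lane one

raw_gbit = sum(u2) * 8  # PCIe 3.0 raw signaling: 8 Gbit/s per lane
print(raw_gbit)         # 64 Gbit/s raw across the RAID 0 pair
```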
#59
GGTV-Jon
FTW Member
  • Total Posts : 1787
  • Reward points : 0
  • Joined: 2017/11/25 14:11:43
  • Location: WA, USA
  • Status: offline
  • Ribbons : 17
Re: X299 Manuals posted 2017/11/25 14:52:57 (permalink)
After looking over (read as: scoured) the Dark manual, I'm bummed about how things were configured for the larger M.2 slot and the primary U.2 port (page 28) for the 44-lane procs.
 
I take it it came down to trace layouts on the board? As this is a different configuration than the FTW K.
 
My planned build has 1 M.2 as the primary OS drive and 2 U.2 drives for everything else. This leaves me wondering how the airflow over the 80mm M.2 will be without a restriction in the 110mm pathway of the ducting.
 
Not that it is going to change my buying decision, but I am curious as to what the rest of the onboard sound components are to complement the Core3D CA0132.
 
Oh and HI - my first post here


#60