2022/05/06 01:52:15
kram36
Monstieur
With the newer BIOS versions, if you enable the "CPU Attached RAID" option in the UEFI and set the bifurcation to non-VROC, you can now create RAID arrays using the RST software in Windows instead of VROC. Previously RST was limited to chipset lanes. Both VROC and RST are just software RAID. VROC has a slightly better abstraction layer which hides the individual drives when the driver is not installed, but with RST the individual drives are visible without the driver. I don't believe there is any extra offloading done by VROC compared to RST - they're both done in the driver.
https://www.intel.com/con...upport_on_X299_FAQ.pdf


I was doing that over a year ago using non-Intel NVMe drives. The problem that still exists is that you must use Intel NVMe drives for a bootable RAID.
2022/05/06 02:33:44
Monstieur
kram36
I was doing that over a year ago using non-Intel NVMe drives. The problem that still exists is that you must use Intel NVMe drives for a bootable RAID.

RST RAID has been bootable using non-Intel drives for 15 years or so. It's the same regardless of whether you use SATA or PCIe drives. Both chipset and CPU PCIe lanes can be used to create bootable arrays with RST.
 
The problem could be the RST EFI modules in your BIOS. I manually upgraded the EFI modules to the latest 18.x version using the UBU tool at Win-RAID. IIRC the latest EVGA BIOS contains 17.x. Earlier BIOS versions shipped with 16.x, which could not even create Optane Memory accelerated SATA drives, even though that was an advertised feature of the board.
2022/05/06 13:52:15
DEJ915
Monstieur
kram36
I was doing that over a year ago using non-Intel NVMe drives. The problem that still exists is that you must use Intel NVMe drives for a bootable RAID.

RST RAID has been bootable using non-Intel drives for 15 years or so. It's the same regardless of whether you use SATA or PCIe drives. Both chipset and CPU PCIe lanes can be used to create bootable arrays with RST.
 
The problem could be the RST EFI modules in your BIOS. I manually upgraded the EFI modules to the latest 18.x version using the UBU tool at Win-RAID. IIRC the latest EVGA BIOS contains 17.x. Earlier BIOS versions shipped with 16.x, which could not even create Optane Memory accelerated SATA drives, even though that was an advertised feature of the board.


That's great that it works with your modified BIOS, but if you haven't gotten it working on a release BIOS then it doesn't really help most people, since they won't want to do that.


"You can use this instead," but it turns out you are using a modified BIOS and did not mention it.
2022/05/06 13:55:26
Monstieur
DEJ915
That's great that it works with your modified BIOS, but if you haven't gotten it working on a release BIOS then it doesn't really help most people, since they won't want to do that.


"You can use this instead," but it turns out you are using a modified BIOS and did not mention it.


RST PCIe RAID always worked. It's only Optane Memory Acceleration that was broken until EVGA updated the modules (they added "CPU Attached RAID" as well). I merely updated the modules before they did.
2022/07/31 17:42:38
JK_DC
I am having no luck with the Dark and NVMe RAID. I have CPU Attached Storage on, the slot with the Hyper M.2 set to non-VROC, and PCIe RAID turned on. The IRSTe software refuses to install, saying the platform is not supported. I can install RST, but it doesn't see the drives at all to let me RAID them as a data drive, although I can see them in Windows and Disk Management.
 
I also have an ASRock board that I tested the Hyper M.2 card on. I set it to enterprise in the BIOS, bifurcated it, and set the slot to VROC AIC, and it works flawlessly as a data drive. IRSTe installed both SATA and VMD drivers.
 
The problem with the Dark is that there is no device for the RAID driver. I don't know whether the enterprise switch in the ASRock BIOS exposes the NVMe VMD device or not. There is no such option on the Dark. I am running the 1.28 BIOS on the Dark, so it should work. If I set the slot to VROC on the Dark, it refuses to POST.
 
I am souring on EVGA motherboards after purchasing the Dark. The CMOS battery was dead when I got it, SLI doesn't work, and now I can't get NVMe RAID to work. I ordered an Intel VROC key and some Intel drives to see if that will make it work. Maybe it will expose the device ID for the VMD RAID controller after I install it. We'll see. Otherwise I will swap it for an ASUS board, which will probably work more smoothly.
2022/07/31 19:43:05
Monstieur
JK_DC
I am having no luck with the Dark and NVMe RAID. I have CPU Attached Storage on, the slot with the Hyper M.2 set to non-VROC, and PCIe RAID turned on. The IRSTe software refuses to install, saying the platform is not supported. I can install RST, but it doesn't see the drives at all to let me RAID them as a data drive, although I can see them in Windows and Disk Management.
 
I also have an ASRock board that I tested the Hyper M.2 card on. I set it to enterprise in the BIOS, bifurcated it, and set the slot to VROC AIC, and it works flawlessly as a data drive. IRSTe installed both SATA and VMD drivers.
 
The problem with the Dark is that there is no device for the RAID driver. I don't know whether the enterprise switch in the ASRock BIOS exposes the NVMe VMD device or not. There is no such option on the Dark. I am running the 1.28 BIOS on the Dark, so it should work. If I set the slot to VROC on the Dark, it refuses to POST.
 
I am souring on EVGA motherboards after purchasing the Dark. The CMOS battery was dead when I got it, SLI doesn't work, and now I can't get NVMe RAID to work. I ordered an Intel VROC key and some Intel drives to see if that will make it work. Maybe it will expose the device ID for the VMD RAID controller after I install it. We'll see. Otherwise I will swap it for an ASUS board, which will probably work more smoothly.


Let me clear things up as I've tested all combinations extensively.
The RSTe / IRSTe terminology is outdated; the feature is now simply called VROC. It's unrelated to RST.
CPU Attached Storage is merely a configuration setting that affects only RST and has nothing to do with VROC.
 
RST RAID with CPU Attached Storage enabled works only with PE6, PU1, (maybe PU2), PM1, and PM2, when using CPU lanes for these slots. You have to use the RST v18 driver and the Intel Optane Memory and Storage Management application or UEFI to create RAID arrays. EVGA needs to release a BIOS update that supports PE1, PE2, PE3, and PE4, as other manufacturers support all CPU PCIe slots.
 
RST RAID with CPU Attached Storage disabled works only with PE6 and PM2, when using PCH lanes for these slots. You have to use the RST v18 driver and the Intel Optane Memory and Storage Management application or UEFI to create RAID arrays. EVGA needs to release a BIOS update that supports PE5. There is no reason to use this mode as it's bottlenecked by the chipset's DMI PCIe 3.0 x4 connection.
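As a rough back-of-the-envelope illustration of that bottleneck (my own numbers, assuming typical ~3.5 GB/s Gen3 NVMe drives, not figures from EVGA or Intel):

```python
# Back-of-the-envelope math for the DMI bottleneck (assumed drive speeds, not board-specific).
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b line encoding.
per_lane_gbs = 8e9 * (128 / 130) / 8 / 1e9   # ~0.985 GB/s usable per lane
dmi_x4_gbs = 4 * per_lane_gbs                # ~3.94 GB/s ceiling for the whole chipset uplink
raid0_two_drives_gbs = 2 * 3.5               # two typical Gen3 NVMe drives striped

print(f"DMI 3.0 x4 ceiling    : {dmi_x4_gbs:.2f} GB/s")
print(f"2x Gen3 NVMe in RAID0 : {raid0_two_drives_gbs:.2f} GB/s (already past the DMI link)")
```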
 
Bifurcation (PCIe slots set to non-VROC) works for PE1, PE2, PE3, PE4, and PE6, when using CPU lanes for these slots. No driver is required, but the RST v18 generic driver will work for NVMe drives. You have to use software RAID in Windows such as Storage Spaces.
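To make the software RAID part concrete, here is a minimal sketch of striping the bifurcated drives with Storage Spaces by driving the built-in PowerShell Storage cmdlets from Python (my own example, not an Intel or EVGA tool; the 'NVMePool' / 'Stripe' names are placeholders, and it needs an elevated prompt):

```python
# Minimal Storage Spaces sketch: pool the bifurcated NVMe drives and stripe them.
# Names are placeholders; run elevated. This is an illustration, not an official tool.
import subprocess

def ps(command: str) -> str:
    """Run one PowerShell command and return its stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# 1. The bifurcated NVMe drives should show up as poolable disks.
print(ps("Get-PhysicalDisk -CanPool $true | Format-Table FriendlyName, BusType, Size"))

# 2. Create a pool from every poolable disk.
ps("New-StoragePool -FriendlyName 'NVMePool' "
   "-StorageSubSystemFriendlyName 'Windows Storage*' "
   "-PhysicalDisks (Get-PhysicalDisk -CanPool $true)")

# 3. Carve out a striped ('Simple' resiliency) virtual disk, then initialize,
#    partition, and format it as a regular NTFS data volume.
ps("New-VirtualDisk -StoragePoolFriendlyName 'NVMePool' -FriendlyName 'Stripe' "
   "-ResiliencySettingName Simple -UseMaximumSize")
ps("Get-VirtualDisk -FriendlyName 'Stripe' | Get-Disk | "
   "Initialize-Disk -PassThru | "
   "New-Partition -AssignDriveLetter -UseMaximumSize | "
   "Format-Volume -FileSystem NTFS")
```

The same thing can of course be done entirely from the Storage Spaces UI or a plain PowerShell prompt; the point is just that the OS, not the firmware, owns the array in this mode.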
 
VROC RAID works for PE1, PE2, PE3, PE4, PE6, PU1, PU2, PM1, and PM2, when using CPU lanes for these slots. You have to use the VROC v7 or v8 driver and the Intel Virtual RAID on CPU application or UEFI to create RAID arrays. With Intel drives, you can create a bootable RAID0 array without a VROC key. You can also create a non-bootable array using the Windows application (the array will not be detected in the UEFI, but it works). VROC is unreliable with non-Intel drives as they drop from the array once in a while (the RAID0 array is recoverable).
 
VROC and RST (CPU Attached Storage) are completely separate things. You can use either depending on the slot configuration that works. Or just use bifurcation which always works.
 
The above is true for all manufacturers. The only difference is which slots are enabled for CPU Attached Storage.
2022/07/31 19:54:51
ZoranC
Monstieur
JK_DC
I am having no luck with the Dark and NVMe RAID. I have CPU Attached Storage on, the slot with the Hyper M.2 set to non-VROC, and PCIe RAID turned on. The IRSTe software refuses to install, saying the platform is not supported. I can install RST, but it doesn't see the drives at all to let me RAID them as a data drive, although I can see them in Windows and Disk Management.
 
I also have an ASRock board that I tested the Hyper M.2 card on. I set it to enterprise in the BIOS, bifurcated it, and set the slot to VROC AIC, and it works flawlessly as a data drive. IRSTe installed both SATA and VMD drivers.
 
The problem with the Dark is that there is no device for the RAID driver. I don't know whether the enterprise switch in the ASRock BIOS exposes the NVMe VMD device or not. There is no such option on the Dark. I am running the 1.28 BIOS on the Dark, so it should work. If I set the slot to VROC on the Dark, it refuses to POST.
 
I am souring on EVGA motherboards after purchasing the Dark. The CMOS battery was dead when I got it, SLI doesn't work, and now I can't get NVMe RAID to work. I ordered an Intel VROC key and some Intel drives to see if that will make it work. Maybe it will expose the device ID for the VMD RAID controller after I install it. We'll see. Otherwise I will swap it for an ASUS board, which will probably work more smoothly.


Let me clear things up as I've tested all combinations extensively.
 
RST RAID with CPU Attached RAID enabled works only with PE6, PU1, (maybe PU2), PM1, and PM2, when using CPU lanes for these slots. You have to use the RST v18 driver and the Intel Optane Memory and Storage Management application. EVGA needs to release a BIOS update that supports all slots as other manufacturers support all CPU PCIe slots.
 
RST RAID with CPU Attached RAID disabled works only with PE6 and PM2, when using PCH lanes for these slots. You have to use the RST v18 driver and the Intel Optane Memory and Storage Management application. There is no reason to use this mode as it's bottlenecked by the chipset's DMI PCIe 3.0 x4 connection.
 
Bifurcation (PCIe slots set to non-VROC) works for PE1, PE2, PE3, PE4, and PE6, when using CPU lanes for these slots. No driver is required, but the RST v18 generic driver will work for NVMe drives. You have to use software RAID in Windows such as Storage Spaces.
 
VROC RAID works for PE1, PE2, PE3, PE4, PE6, PU1, PU2, PM1, and PM2, when using CPU lanes for these slots. You have to use the VROC v7 or v8 driver and the Intel Virtual RAID on CPU application. You can create a bootable RAID0 array in the UEFI or Windows application with Intel drives without a VROC key. You can also create a non-bootable RAID0 array with the Windows application. VROC is unreliable with non-Intel drives as they drop from the array once in a while.
 
The RSTe / IRSTe terminology is outdated and is now simply called VROC. VROC and CPU Attached RAID are two separate things.


That is great info, Monstieur, thank you for sharing it. Personally I am using Storage Spaces with bifurcated PE4 and the 'CPU Attached Storage' option in the BIOS disabled. One thing I wasn't able to figure out is whether there is any drawback or benefit to having 'CPU Attached Storage' disabled vs. enabled in such a setup, and which is the technically correct way, because both seemed to work. Do you happen to know the answer to that, please?
2022/07/31 19:57:15
Monstieur
ZoranC
That is great info, Monstieur, thank you for sharing it. Personally I am using Storage Spaces with bifurcated PE4 and the 'CPU Attached Storage' option in the BIOS disabled. One thing I wasn't able to figure out is whether there is any drawback or benefit to having 'CPU Attached Storage' disabled vs. enabled in such a setup, and which is the technically correct way, because both seemed to work. Do you happen to know the answer to that, please?

The only purpose of CPU Attached Storage is to remap the drives into Intel RST to allow creation of a RAID array using drives connected to CPU lanes. This would hide the drives from Windows when you create an array. It's also required to enable Optane Memory Acceleration with an Optane 900p in PE6 or PU1. For software RAID or simply attaching multiple drives, always use bifurcation.
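One rough way to see what Windows thinks it is talking to (my own sanity check, not an Intel-documented procedure) is the BusType it reports per physical disk: disks owned by the RST RAID driver generally report RAID, while drives Windows addresses directly report NVMe. For example:

```python
# Rough sanity check (assumption, not an Intel-documented procedure): list each
# physical disk's BusType - RST-managed disks typically report "RAID", drives
# that Windows talks to directly report "NVMe".
import subprocess

output = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     "Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType | Format-Table -AutoSize"],
    capture_output=True, text=True, check=True,
)
print(output.stdout)
```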
2022/07/31 20:03:34
ZoranC
Monstieur
ZoranC
That is great info, Monstieur, thank you for sharing it. Personally I am using Storage Spaces with bifurcated PE4 and the 'CPU Attached Storage' option in the BIOS disabled. One thing I wasn't able to figure out is whether there is any drawback or benefit to having 'CPU Attached Storage' disabled vs. enabled in such a setup, and which is the technically correct way, because both seemed to work. Do you happen to know the answer to that, please?

The only purpose of CPU Attached RAID is to hide the drives from Windows and remap them into Intel RST to create a RAID array that way or enable Optane Memory Acceleration. For software RAID, always use bifurcation.

I _am_ using bifurcation. Do I interpret your words correctly that, when using bifurcation, the 'CPU Storage Configuration -> CPU Attached Storage' option should be disabled? If yes, that means I am using it correctly, but I am a little confused by you saying enabling it would hide the drives from Windows, because when I had it enabled Windows still saw the drives in the bifurcated PE4.
2022/07/31 20:04:43
Monstieur
ZoranC
I _am_ using bifurcation. Do I interpret your words correctly that, when using bifurcation, the 'CPU Storage Configuration -> CPU Attached Storage' option should be disabled? If yes, that means I am using it correctly, but I am a little confused by you saying enabling it would hide the drives from Windows, because when I had it enabled Windows still saw the drives in the bifurcated PE4.

The drives would disappear only when you configure RAID in Intel RST. If CPU Attached Storage is disabled, Intel RST will not allow you to configure RAID on those drives if the slot uses CPU lanes. There may be a slight bootup delay due to the unnecessary remapping if you leave the setting enabled.
 
CPU Attached Storage does not work on PE4 anyway, so it will have no effect in your case.
