EVGA

VROC update (Answered)

Author
kram36
The Destroyer
  • Total Posts : 21477
  • Reward points : 0
  • Joined: 2009/10/27 19:00:58
  • Location: United States
  • Status: offline
  • Ribbons : 72
Re: VROC update 2022/05/06 01:52:15 (permalink)
Monstieur
With the newer BIOS versions, if you enable the "CPU Attached RAID" option in the UEFI and set the bifurcation to non-VROC, you can now create RAID arrays using the RST software in Windows instead of VROC. Previously RST was limited to chipset lanes. Both VROC and RST are just software RAID. VROC has a slightly better abstraction layer which hides the individual drives when the driver is not installed, but with RST the individual drives are visible without the driver. I don't believe there is any extra offloading done by VROC compared to RST - they're both done in the driver.
https://www.intel.com/con...upport_on_X299_FAQ.pdf


I was doing that over a year ago using non-Intel NVMe drives. The problem that still exists: you must use Intel NVMe drives for a bootable RAID.
#31
Monstieur
Superclocked Member
  • Total Posts : 128
  • Reward points : 0
  • Joined: 2016/08/31 02:04:28
  • Status: offline
  • Ribbons : 5
Re: VROC update 2022/05/06 02:33:44 (permalink) ☄ Helpful by Cool GTX 2022/05/06 08:47:39
kram36
I was doing that over a year ago using non-Intel NVMe drives. The problem that still exists: you must use Intel NVMe drives for a bootable RAID.

RST RAID has been bootable using non-Intel drives for 15 years or so. It's the same regardless of whether you use SATA or PCIe drives. Both chipset and CPU PCIe lanes can be used to create bootable arrays with RST.
 
The problem could be the RST EFI modules in your BIOS. I manually upgraded the EFI modules to the latest 18.x version using the UBU tool at Win-RAID. IIRC the latest EVGA BIOS contains 17.x. Earlier BIOS versions shipped with 16.x, which could not even create Optane Memory-accelerated SATA drives, even though that was an advertised feature of the board.
post edited by Monstieur - 2022/05/06 02:36:13
#32
DEJ915
SSC Member
  • Total Posts : 544
  • Reward points : 0
  • Joined: 2013/11/03 21:58:26
  • Status: offline
  • Ribbons : 11
Re: VROC update 2022/05/06 13:52:15 (permalink)
Monstieur
kram36
I was doing that over a year ago using non-Intel NVMe drives. The problem that still exists: you must use Intel NVMe drives for a bootable RAID.

RST RAID has been bootable using non-Intel drives for 15 years or so. It's the same regardless of whether you use SATA or PCIe drives. Both chipset and CPU PCIe lanes can be used to create bootable arrays with RST.
 
The problem could be the RST EFI modules in your BIOS. I manually upgraded the EFI modules to the latest 18.x version using the UBU tool at Win-RAID. IIRC the latest EVGA BIOS contains 17.x. Earlier BIOS versions shipped with 16.x, which could not even create Optane Memory-accelerated SATA drives, even though that was an advertised feature of the board.


That's great that it works with your modified BIOS, but if you haven't gotten it working on a release BIOS, it doesn't really help most people, since they won't want to do that.


"You can use this instead" but turns out you are using modified bios and did not mention it.
#33
Monstieur
Superclocked Member
  • Total Posts : 128
  • Reward points : 0
  • Joined: 2016/08/31 02:04:28
  • Status: offline
  • Ribbons : 5
Re: VROC update 2022/05/06 13:55:26 (permalink)
DEJ915
That's great that it works with your modified BIOS, but if you haven't gotten it working on a release BIOS, it doesn't really help most people, since they won't want to do that.


"You can use this instead" but turns out you are using modified bios and did not mention it.


RST PCIe RAID always worked. It’s only the Optane Memory Acceleration that was broken until EVGA updated the modules (they added “CPU Attached RAID” as well). I merely updated the modules before they did.
#34
JK_DC
iCX Member
  • Total Posts : 370
  • Reward points : 0
  • Joined: 2007/11/01 11:31:14
  • Status: offline
  • Ribbons : 0
Re: VROC update 2022/07/31 17:42:38 (permalink)
I am having no luck with the Dark and NVMe RAID. I have CPU Attached Storage on, the slot with the Hyper M.2 set to non-VROC, and PCIe RAID turned on. The IRSTe software refuses to install, saying the platform is not supported. I can install RST, but it doesn't see the drives at all to allow me to RAID them as a data drive, although I can see them in Windows and Disk Management.
 
I also have an ASRock board that I tested the Hyper M.2 card on. I set it to enterprise in the BIOS, bifurcated it, and set the slot to VROC AIC, and it works flawlessly as a data drive. IRSTe installed both SATA and VMD drivers.
 
The problem with the Dark is that there is no device for the RAID driver. I don't know whether the enterprise switch in the ASRock BIOS exposes the NVMe VMD device or not. There is no such option on the Dark. I am running the 1.28 BIOS on the Dark, so it should work. If I set the slot to VROC on the Dark, it refuses to POST.
 
I am souring on EVGA motherboards after purchasing the Dark. The CMOS battery was dead when I got it, SLI doesn't work, and now I can't get NVMe RAID to work. I ordered an Intel VROC key and some Intel drives to see if that will make it work. Maybe it will expose the device ID for the VMD RAID controller after I install it. We'll see. Otherwise I will swap it for an ASUS board, which will probably work more smoothly.
post edited by JK_DC - 2022/07/31 17:45:12

Core i7 3820 @ 4.75 1.36v, Prolimatech Mega Shadow push/pull 
Asrock X79 Extreme 6,  16 GB G. Skill Z Series 1666 7-8-8-24 1T 1.56v 
EVGA 680 2GB SLI 1272/3200  (1.175v)
CM Storm Sniper Blk,  CM Silent Power Pro 1000w
Creative X-Fi Xtrememusic, Creative G500 5.1 310 watts
Intel X25-M G2 80GB,  Corsair M4 128GB, WD Caviar Blk 750GBX2, 1.5TB
Logitech G510, M705
#35
Monstieur
Superclocked Member
  • Total Posts : 128
  • Reward points : 0
  • Joined: 2016/08/31 02:04:28
  • Status: offline
  • Ribbons : 5
Re: VROC update 2022/07/31 19:43:05 (permalink)
JK_DC
I am having no luck with the Dark and NVMe RAID. I have CPU Attached Storage on, the slot with the Hyper M.2 set to non-VROC, and PCIe RAID turned on. The IRSTe software refuses to install, saying the platform is not supported. I can install RST, but it doesn't see the drives at all to allow me to RAID them as a data drive, although I can see them in Windows and Disk Management.
 
I also have an ASRock board that I tested the Hyper M.2 card on. I set it to enterprise in the BIOS, bifurcated it, and set the slot to VROC AIC, and it works flawlessly as a data drive. IRSTe installed both SATA and VMD drivers.
 
The problem with the Dark is that there is no device for the RAID driver. I don't know whether the enterprise switch in the ASRock BIOS exposes the NVMe VMD device or not. There is no such option on the Dark. I am running the 1.28 BIOS on the Dark, so it should work. If I set the slot to VROC on the Dark, it refuses to POST.
 
I am souring on EVGA motherboards after purchasing the Dark. The CMOS battery was dead when I got it, SLI doesn't work, and now I can't get NVMe RAID to work. I ordered an Intel VROC key and some Intel drives to see if that will make it work. Maybe it will expose the device ID for the VMD RAID controller after I install it. We'll see. Otherwise I will swap it for an ASUS board, which will probably work more smoothly.


Let me clear things up as I've tested all combinations extensively.
The RSTe / IRSTe terminology is outdated and is now simply called VROC. It's unrelated to RST.
CPU Attached Storage is merely a configuration setting that affects only RST and has nothing to do with VROC.
 
RST RAID with CPU Attached Storage enabled works only with PE6, PU1 (maybe PU2), PM1, and PM2, when using CPU lanes for these slots. You have to use the RST v18 driver and the Intel Optane Memory and Storage Management application or UEFI to create RAID arrays. EVGA needs to release a BIOS update that supports PE1, PE2, PE3, and PE4, as other manufacturers support all CPU PCIe slots.
 
RST RAID with CPU Attached Storage disabled works only with PE6 and PM2, when using PCH lanes for these slots. You have to use the RST v18 driver and the Intel Optane Memory and Storage Management application or UEFI to create RAID arrays. EVGA needs to release a BIOS update that supports PE5. There is no reason to use this mode as it's bottlenecked by the chipset's DMI PCIe 3.0 x4 connection.
 
Bifurcation (PCIe slots set to non-VROC) works for PE1, PE2, PE3, PE4, and PE6, when using CPU lanes for these slots. No driver is required, but the RST v18 generic driver will work for NVMe drives. You have to use software RAID in Windows such as Storage Spaces.
 
VROC RAID works for PE1, PE2, PE3, PE4, PE6, PU1, PU2, PM1, and PM2, when using CPU lanes for these slots. You have to use the VROC v7 or v8 driver and the Intel Virtual RAID on CPU application or UEFI to create RAID arrays. With Intel drives, you can create a bootable RAID0 array without a VROC key. You can also create a non-bootable array with the Windows application (the array will not be detected in the UEFI, but it works). VROC is unreliable with non-Intel drives, as they drop from the array once in a while (the RAID0 array is recoverable).
 
VROC and RST (CPU Attached Storage) are completely separate things. You can use either depending on the slot configuration that works. Or just use bifurcation which always works.
 
The above is true for all manufacturers. The only difference is which slots are enabled for CPU Attached Storage.
post edited by Monstieur - 2022/07/31 20:26:38
#36
ZoranC
FTW Member
  • Total Posts : 1099
  • Reward points : 0
  • Joined: 2011/05/24 17:22:15
  • Status: offline
  • Ribbons : 16
Re: VROC update 2022/07/31 19:54:51 (permalink)
Monstieur
JK_DC
I am having no luck with the Dark and NVMe RAID. I have CPU Attached Storage on, the slot with the Hyper M.2 set to non-VROC, and PCIe RAID turned on. The IRSTe software refuses to install, saying the platform is not supported. I can install RST, but it doesn't see the drives at all to allow me to RAID them as a data drive, although I can see them in Windows and Disk Management.
 
I also have an ASRock board that I tested the Hyper M.2 card on. I set it to enterprise in the BIOS, bifurcated it, and set the slot to VROC AIC, and it works flawlessly as a data drive. IRSTe installed both SATA and VMD drivers.
 
The problem with the Dark is that there is no device for the RAID driver. I don't know whether the enterprise switch in the ASRock BIOS exposes the NVMe VMD device or not. There is no such option on the Dark. I am running the 1.28 BIOS on the Dark, so it should work. If I set the slot to VROC on the Dark, it refuses to POST.
 
I am souring on EVGA motherboards after purchasing the Dark. The CMOS battery was dead when I got it, SLI doesn't work, and now I can't get NVMe RAID to work. I ordered an Intel VROC key and some Intel drives to see if that will make it work. Maybe it will expose the device ID for the VMD RAID controller after I install it. We'll see. Otherwise I will swap it for an ASUS board, which will probably work more smoothly.


Let me clear things up as I've tested all combinations extensively.
 
RST RAID with CPU Attached RAID enabled works only with PE6, PU1 (maybe PU2), PM1, and PM2, when using CPU lanes for these slots. You have to use the RST v18 driver and the Intel Optane Memory and Storage Management application. EVGA needs to release a BIOS update that supports all slots, as other manufacturers support all CPU PCIe slots.
 
RST RAID with CPU Attached RAID disabled works only with PE6 and PM2, when using PCH lanes for these slots. You have to use the RST v18 driver and the Intel Optane Memory and Storage Management application. There is no reason to use this mode as it's bottlenecked by the chipset's DMI PCIe 3.0 x4 connection.
 
Bifurcation (PCIe slots set to non-VROC) works for PE1, PE2, PE3, PE4, and PE6, when using CPU lanes for these slots. No driver is required, but the RST v18 generic driver will work for NVMe drives. You have to use software RAID in Windows such as Storage Spaces.
 
VROC RAID works for PE1, PE2, PE3, PE4, PE6, PU1, PU2, PM1, and PM2, when using CPU lanes for these slots. You have to use the VROC v7 or v8 driver and the Intel Virtual RAID on CPU application. You can create a bootable RAID0 array in the UEFI or Windows application with Intel drives without a VROC key. You can also create a non-bootable RAID0 array with the Windows application. VROC is unreliable with non-Intel drives as they drop from the array once in a while.
 
The RSTe / IRSTe terminology is outdated and is now simply called VROC. VROC and CPU Attached RAID are two separate things.


That is great info, Monstieur, thank you for sharing it. Personally I am using Storage Spaces with bifurcated PE4 and the 'CPU Attached Storage' option in the BIOS disabled. One thing I wasn't able to figure out is whether there is any drawback or benefit to having 'CPU Attached Storage' disabled vs. enabled in such a setup, and which is the technically correct way, because both seemed to work. Do you happen to know the answer to that, please?
#37
Monstieur
Superclocked Member
  • Total Posts : 128
  • Reward points : 0
  • Joined: 2016/08/31 02:04:28
  • Status: offline
  • Ribbons : 5
Re: VROC update 2022/07/31 19:57:15 (permalink)
ZoranC
That is great info, Monstieur, thank you for sharing it. Personally I am using Storage Spaces with bifurcated PE4 and the 'CPU Attached Storage' option in the BIOS disabled. One thing I wasn't able to figure out is whether there is any drawback or benefit to having 'CPU Attached Storage' disabled vs. enabled in such a setup, and which is the technically correct way, because both seemed to work. Do you happen to know the answer to that, please?

The only purpose of CPU Attached Storage is to remap the drives into Intel RST to allow creation of a RAID array using drives connected to CPU lanes. This would hide the drives from Windows when you create an array. It's also required to enable Optane Memory Acceleration with an Optane 900p in PE6 or PU1. For software RAID or simply attaching multiple drives, always use bifurcation.
post edited by Monstieur - 2022/07/31 20:30:09
#38
ZoranC
FTW Member
  • Total Posts : 1099
  • Reward points : 0
  • Joined: 2011/05/24 17:22:15
  • Status: offline
  • Ribbons : 16
Re: VROC update 2022/07/31 20:03:34 (permalink)
Monstieur
ZoranC
That is great info, Monstieur, thank you for sharing it. Personally I am using Storage Spaces with bifurcated PE4 and the 'CPU Attached Storage' option in the BIOS disabled. One thing I wasn't able to figure out is whether there is any drawback or benefit to having 'CPU Attached Storage' disabled vs. enabled in such a setup, and which is the technically correct way, because both seemed to work. Do you happen to know the answer to that, please?

The only purpose of CPU Attached RAID is to hide the drives from Windows and remap them into Intel RST to create a RAID array that way or enable Optane Memory Acceleration. For software RAID, always use bifurcation.

I _am_ using bifurcation. Do I interpret your words correctly that, when using bifurcation, the 'CPU Storage Configuration -> CPU Attached Storage' option should be disabled? If so, that means I am using it correctly, but I am a little confused by you saying enabling it would hide the drives from Windows, because when I had it enabled, Windows still saw the drives in bifurcated PE4.
#39
Monstieur
Superclocked Member
  • Total Posts : 128
  • Reward points : 0
  • Joined: 2016/08/31 02:04:28
  • Status: offline
  • Ribbons : 5
Re: VROC update 2022/07/31 20:04:43 (permalink)
ZoranC
I _am_ using bifurcation. Do I interpret your words correctly that, when using bifurcation, the 'CPU Storage Configuration -> CPU Attached Storage' option should be disabled? If so, that means I am using it correctly, but I am a little confused by you saying enabling it would hide the drives from Windows, because when I had it enabled, Windows still saw the drives in bifurcated PE4.

The drives would disappear only when you configure RAID in Intel RST. If CPU Attached Storage is disabled, Intel RST will not allow you to configure RAID on those drives if the slot uses CPU lanes. There may be a slight bootup delay due to the unnecessary remapping if you leave the setting enabled.
 
CPU Attached Storage does not work on PE4 anyway, so it will have no effect in your case.
post edited by Monstieur - 2022/07/31 20:35:46
#40
ZoranC
FTW Member
  • Total Posts : 1099
  • Reward points : 0
  • Joined: 2011/05/24 17:22:15
  • Status: offline
  • Ribbons : 16
Re: VROC update 2022/07/31 20:22:40 (permalink)
Monstieur
ZoranC
I _am_ using bifurcation. Do I interpret your words correctly that, when using bifurcation, the 'CPU Storage Configuration -> CPU Attached Storage' option should be disabled? If so, that means I am using it correctly, but I am a little confused by you saying enabling it would hide the drives from Windows, because when I had it enabled, Windows still saw the drives in bifurcated PE4.

The drives would disappear only when you configure RAID in Intel RST. If CPU Attached Storage is disabled, Intel RST will not allow you to configure RAID on those drives if the slot uses CPU lanes. There may be a slight bootup delay due to the unnecessary remapping if you leave the setting enabled.

Thank you for clarifying that!
 
Monstieur
CPU Attached RAID does not work on PE4 anyway, so it will have no effect in your case.

I wasn't aware of how exactly this works until you posted the details above. Now that you did, and are pointing out that PE4 is not in the equation, I understand why I didn't see any difference. Thank you again :)
#41
ZoranC
FTW Member
  • Total Posts : 1099
  • Reward points : 0
  • Joined: 2011/05/24 17:22:15
  • Status: offline
  • Ribbons : 16
Re: VROC update 2022/07/31 20:28:02 (permalink)
Monstieur
You have to use software RAID in Windows such as Storage Spaces.

BTW, do you have any advanced knowledge of working with Storage Spaces?
 
#42
Monstieur
Superclocked Member
  • Total Posts : 128
  • Reward points : 0
  • Joined: 2016/08/31 02:04:28
  • Status: offline
  • Ribbons : 5
Re: VROC update 2022/07/31 20:33:27 (permalink)
ZoranC
Monstieur
You have to use software RAID in Windows such as Storage Spaces.

BTW, do you have any advanced knowledge of working with Storage Spaces?
 


I have a 4-column striped space with 4x NVMe drives in PE4. I also had a 3-column striped two-way mirror tiered space with 2x SSDs and 6x HDDs.
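For reference, here's a rough sketch of how a 4-column striped (Simple) space like that can be set up. It drives the standard Storage Spaces PowerShell cmdlets from Python purely for illustration; the pool and volume names are placeholders, and you can just as well paste the PowerShell lines into an elevated prompt:

```python
# Sketch only: build a 4-column striped ("Simple") Storage Space from all
# poolable drives, e.g. 4x NVMe behind a bifurcated slot. Names are placeholders.
import subprocess

ps = r"""
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'NVMePool' `
    -StorageSubSystemFriendlyName 'Windows Storage*' -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName 'NVMePool' -FriendlyName 'Stripe' `
    -ResiliencySettingName Simple -NumberOfColumns 4 -UseMaximumSize
Get-VirtualDisk -FriendlyName 'Stripe' | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Stripe'
"""

# Requires an elevated session; check=True raises if any step fails.
subprocess.run(["powershell", "-NoProfile", "-Command", ps], check=True)
```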
post edited by Monstieur - 2022/07/31 20:35:05
#43
ZoranC
FTW Member
  • Total Posts : 1099
  • Reward points : 0
  • Joined: 2011/05/24 17:22:15
  • Status: offline
  • Ribbons : 16
Re: VROC update 2022/07/31 20:46:51 (permalink)
Monstieur
ZoranC
Monstieur
You have to use software RAID in Windows such as Storage Spaces.

BTW, do you have any advanced knowledge of working with Storage Spaces?

I have a 4-column striped space with 4x NVMe drives in PE4. I also had a 3-column striped two-way mirror tiered space with 2x SSDs and 6x HDDs.

I currently have 2x NVMe and 2x HDD in a two-tier mirrored setup (an NVMe tier plus an NVMe write-back cache in front of the HDD tier). I am trying to figure out:
 
1. Is there a way to change the size of the write cache without tearing the whole storage pool apart?
2. How do I add more drives to a given tier, and how do I remove drives from that tier? (I'm considering going from the 2x 2TB I have in the NVMe tier to 4x 4TB, and also from 2x HDD to 4x HDD.)
3. Is there a way to change the NTFS cluster size for the volume that sits on that storage pool without tearing the volume apart and doing a huge backup/restore?
4. Once new drives are added, how do I expand the current volume?
#44
Monstieur
Superclocked Member
  • Total Posts : 128
  • Reward points : 0
  • Joined: 2016/08/31 02:04:28
  • Status: offline
  • Ribbons : 5
Re: VROC update 2022/07/31 20:51:45 (permalink)
ZoranC
I currently have 2x NVMe and 2x HDD in a two-tier mirrored setup (an NVMe tier plus an NVMe write-back cache in front of the HDD tier). I am trying to figure out:
 
1. Is there a way to change the size of the write cache without tearing the whole storage pool apart?
2. How do I add more drives to a given tier, and how do I remove drives from that tier? (I'm considering going from the 2x 2TB I have in the NVMe tier to 4x 4TB, and also from 2x HDD to 4x HDD.)
3. Is there a way to change the NTFS cluster size for the volume that sits on that storage pool without tearing the volume apart and doing a huge backup/restore?
4. Once new drives are added, how do I expand the current volume?

1. No
2. Add them to the pool and set the media type to SSD or HDD. You can then expand the pool. This won't add more columns or rebalance data, so you won't benefit from increased speeds. You should recreate the whole pool. (See the sketch below.)
3. No
4. Windows Disk Management can expand the volume once new drives are added to the pool.
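Along the lines of #2 and #4, a rough sketch of the PowerShell involved (driven from Python here just for illustration; the pool, disk, and space names are placeholders, and for a tiered space you would resize the individual tiers with Resize-StorageTier rather than the virtual disk directly):

```python
# Sketch only: add new drives to an existing pool, tag their media type so they
# land in the correct tier, then grow the space. All names are placeholders.
import subprocess

ps = r"""
# Add every currently un-pooled drive to the existing pool
Add-PhysicalDisk -StoragePoolFriendlyName 'TieredPool' `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

# Tag the new drives as SSD or HDD so they are assigned to the right tier
Set-PhysicalDisk -FriendlyName 'PhysicalDisk5' -MediaType SSD
Set-PhysicalDisk -FriendlyName 'PhysicalDisk6' -MediaType HDD

# Grow the space; note that existing data is NOT re-striped onto the new drives
Resize-VirtualDisk -FriendlyName 'Space' -Size 16TB
"""

subprocess.run(["powershell", "-NoProfile", "-Command", ps], check=True)
```

The partition on top can then be extended in Disk Management, as noted in #4.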
 
The SSD caching is useless, as large sequential writes bypass the cache anyway and write at HDD speed. It's good only for random writes. I would just get a single large HDD and an Optane 905p and use Optane Memory Acceleration. This will accelerate all writes. Just back up your data incrementally with the Veeam agent for Windows (free) or Macrium Reflect to a second HDD.
post edited by Monstieur - 2022/07/31 21:02:10
#45
ZoranC
FTW Member
  • Total Posts : 1099
  • Reward points : 0
  • Joined: 2011/05/24 17:22:15
  • Status: offline
  • Ribbons : 16
Re: VROC update 2022/07/31 21:00:45 (permalink)
Monstieur
ZoranC
I currently have 2x NVMe and 2x HDD in a two-tier mirrored setup (an NVMe tier plus an NVMe write-back cache in front of the HDD tier). I am trying to figure out:
 
1. Is there a way to change the size of the write cache without tearing the whole storage pool apart?
2. How do I add more drives to a given tier, and how do I remove drives from that tier? (I'm considering going from the 2x 2TB I have in the NVMe tier to 4x 4TB, and also from 2x HDD to 4x HDD.)
3. Is there a way to change the NTFS cluster size for the volume that sits on that storage pool without tearing the volume apart and doing a huge backup/restore?
4. Once new drives are added, how do I expand the current volume?

1. No
2. Add them to the pool and set the media type to SSD or HDD. You can then expand the pool. This won't add more columns or rebalance data, so you won't benefit from increased speeds. You should recreate the whole pool.
3. No
4. Windows Disk Management can expand the volume once new drives are added to the pool.
 
The SSD caching is useless, as sequential writes bypass the cache anyway. It's good only for random writes. I would just get a single large HDD and an Optane 905p and use Optane Memory Acceleration. This will accelerate all writes. Just back up your data incrementally with Veeam Backup & Restore (free) or Macrium Reflect.

Re #2 and #4, I think I might be able to figure out how to do that -BUT- 'no' is what I too felt was the answer for #1 and #3, which makes them a showstopper. So I think I will "skin this cat differently" when the time comes.
#46
ZoranC
FTW Member
  • Total Posts : 1099
  • Reward points : 0
  • Joined: 2011/05/24 17:22:15
  • Status: offline
  • Ribbons : 16
Re: VROC update 2022/07/31 21:26:24 (permalink)
Monstieur
The SSD caching is useless, as large sequential writes bypass the cache anyway and write at HDD speed. It's good only for random writes.

Are you positive about that? If memory serves me well, I did benchmarks after doing this and sequential writes were accelerated too. Granted, I did not build my storage pool with Microsoft's default 1 GB write-back cache; my WBC is 100 GB.
 
Monstieur
I would just get a single large HDD and an Optane 905p and use Optane Memory Acceleration. This will accelerate all writes. Just back up your data incrementally with the Veeam agent for Windows (free) or Macrium Reflect to a second HDD.

I was considering the Optane path before going with Storage Spaces, but then Intel handed over development of its software to Microsoft, and they limited the cache size to a ridiculously low value.
#47
Monstieur
Superclocked Member
  • Total Posts : 128
  • Reward points : 0
  • Joined: 2016/08/31 02:04:28
  • Status: offline
  • Ribbons : 5
Re: VROC update 2022/07/31 21:31:59 (permalink)
ZoranC
Are you positive about that? If memory serves me well, I did benchmarks after doing this and sequential writes were accelerated too. Granted, I did not build my storage pool with Microsoft's default 1 GB write-back cache; my WBC is 100 GB.
 
I was considering the Optane path before going with Storage Spaces, but then Intel handed over development of its software to Microsoft, and they limited the cache size to a ridiculously low value.

I believe the limit is a 128 KiB buffer for bypassing the cache. There's a white paper about it. You can monitor the Storage Space cache hits in Windows Resource Monitor. File Explorer copies seem to hit the cache most of the time. Sequential writes from other applications typically don't. Any write-through requests also bypass the cache. Optane Memory Acceleration caches everything including write-through because it's basically immune to power loss.
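If you want to check a specific workload rather than eyeball Resource Monitor, a throwaway script along these lines (path and sizes are placeholders; it's a crude check, not a proper benchmark) will show whether a large sequential write outside File Explorer finishes at cache speed or at HDD speed:

```python
# Crude sequential-write check: writes ~8 GiB to the Storage Spaces volume and
# reports throughput. HDD-class numbers suggest the write bypassed the cache.
import os
import time

TARGET = r"E:\wbc_test.bin"   # file on the Storage Spaces volume (placeholder)
BLOCK = 1024 * 1024           # 1 MiB per write
TOTAL = 8 * 1024**3           # 8 GiB total

buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(TARGET, "wb", buffering=0) as f:
    for _ in range(TOTAL // BLOCK):
        f.write(buf)
    os.fsync(f.fileno())      # force the data out of the OS file cache
elapsed = time.perf_counter() - start
print(f"{TOTAL / elapsed / 1024**2:.0f} MiB/s sequential write")
os.remove(TARGET)
```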
 
You can use any size Optane drive for acceleration with CPU Attached Storage. I'd get a 480 GB or 960 GB drive from eBay.
#48
ZoranC
FTW Member
  • Total Posts : 1099
  • Reward points : 0
  • Joined: 2011/05/24 17:22:15
  • Status: offline
  • Ribbons : 16
Re: VROC update 2022/07/31 21:51:20 (permalink)
Monstieur
ZoranC
Are you positive about that? If memory serves me well, I did benchmarks after doing this and sequential writes were accelerated too. Granted, I did not build my storage pool with Microsoft's default 1 GB write-back cache; my WBC is 100 GB.
 
I was considering the Optane path before going with Storage Spaces, but then Intel handed over development of its software to Microsoft, and they limited the cache size to a ridiculously low value.

I believe the limit is a 128 KiB buffer for bypassing the cache. There's a white paper about it. You can monitor the Storage Space cache hits in Windows Resource Monitor. File Explorer copies seem to hit the cache most of the time. Sequential writes from other applications typically don't. Any write-through requests also bypass the cache. Optane Memory Acceleration caches everything including write-through because it's basically immune to power loss.
 
You can use any size Optane drive for acceleration with CPU Attached Storage. I'd get a 480 GB or 960 GB drive from eBay.


I just tested copying 22 GB of files using File Explorer onto my Storage Spaces volume and got an average speed of 2 GB/s. I wouldn't be able to get that speed if the WBC were ignored.
 
I can use any size of Optane drive -BUT- Intel has handed over development of the Optane software to Microsoft (Intel's version has been EOL'd), and the Microsoft Store version limits the cache size to 64 GB, which is too little cache for too much money. See, among others, this thread: Optane 900P -AIC- as HDD cache module? | guru3D Forums
 
#49
Monstieur
Superclocked Member
  • Total Posts : 128
  • Reward points : 0
  • Joined: 2016/08/31 02:04:28
  • Status: offline
  • Ribbons : 5
Re: VROC update 2022/07/31 21:54:23 (permalink)
ZoranC
I just tested copying 22 GB of files using File Explorer onto my Storage Spaces volume and got an average speed of 2 GB/s. I wouldn't be able to get that speed if the WBC were ignored.
 
I can use any size of Optane drive -BUT- Intel has handed over development of the Optane software to Microsoft (Intel's version has been EOL'd), and the Microsoft Store version limits the cache size to 64 GB, which is too little cache for too much money. See, among others, this thread: Optane 900P -AIC- as HDD cache module? | guru3D Forums
 

Yeah, File Explorer does use the cache. Other applications were always slow for me.
 
I was able to use the full 280 GB cache on my Optane 900p just a few months ago before I got rid of my last HDD. My incremental backups are over 100 GB and they all went straight to cache at maximum speed. The apps are just hosted on the Microsoft Store. There's no indication they aren't being developed by Intel. You can download the native version directly from Intel. The 64 GB limit was for non-Optane SSDs when using Intel Smart Response Technology as a caching solution.
post edited by Monstieur - 2022/07/31 21:58:01
#50
ZoranC
FTW Member
  • Total Posts : 1099
  • Reward points : 0
  • Joined: 2011/05/24 17:22:15
  • Status: offline
  • Ribbons : 16
Re: VROC update 2022/07/31 22:02:38 (permalink)
Monstieur
Yeah, File Explorer does use the cache. Other applications were always slow for me.

Which applications were slow for you, please? I would like to try them to see what I get.
 
Monstieur
I was able to use the full 280 GB cache on my Optane 900p just a few months ago before I got rid of my last HDD. My incremental backups are over 100 GB and they all went straight to cache at maximum speed. The apps are just hosted on the Microsoft Store. There's no indication they aren't being developed by Intel. You can download the native version directly from Intel. The 64 GB limit was for non-Optane SSDs when using Intel Smart Response Technology as a caching solution.

Intel tech support themselves told me on their forums that they are EOLing their own version, that everything has been handed over to Microsoft, and that whole-drive support was never official, so whatever the Microsoft Store version has will be what's official.
#51
Monstieur
Superclocked Member
  • Total Posts : 128
  • Reward points : 0
  • Joined: 2016/08/31 02:04:28
  • Status: offline
  • Ribbons : 5
Re: VROC update 2022/07/31 22:09:48 (permalink)
ZoranC
Which applications were slow for you, please? I would like to try them to see what I get.
 
Intel tech support themselves told me on their forums that they are EOLing their own version, that everything has been handed over to Microsoft, and that whole-drive support was never official, so whatever the Microsoft Store version has will be what's official.

Remuxing 4K video with MKVToolnix. Creating ISO images from video files. These activities always wrote at the speed of the HDD. Writing to a standalone SSD was at the expected speed.
 
Optane just uses the regular RST driver that's developed by Intel. It's used for all of their chipsets' SATA and NVMe ports. The application shell around it doesn't matter. The driver doesn't come with the application from the Microsoft Store. You need to install the driver separately or from Windows Update. You don't even need Windows to use Optane - you can set it up in the UEFI for the full capacity of the Optane drive and Windows will see the configuration when you log in.
post edited by Monstieur - 2022/07/31 22:14:28
#52
JK_DC
iCX Member
  • Total Posts : 370
  • Reward points : 0
  • Joined: 2007/11/01 11:31:14
  • Status: offline
  • Ribbons : 0
Re: VROC update 2022/07/31 23:27:57 (permalink)
Thanks for the information, Monstieur. That will help clarify things when I install it on Win10+. I am, however, using it on Win7, and it doesn't work like it does on other motherboards. VROC, RST, and RSTe are all supported in Win7, but the Dark seems to have compatibility problems with it, since there is no other RAID device listed besides the C220/C600 SATA driver. There should be another RAID controller that accepts the VMD NVMe RAID driver, but it isn't listed. You mention that a key isn't needed, but in Win7, since there is no Storage Spaces, it has to rely on the RST/IRST drivers to work. If a data drive is created, there is a 90-day trial before it can't be accessed anymore. I am going to do an install in Win10 as well after I get it all working in Win7 for my older programs. It sounds like Win10 will be much easier.
 
It looks to me like it has to have an Intel key and Intel drives to avoid a trial period and to make it bootable in Win7 on X299. So my decision to replace the Dark will be based on how it supports NVMe RAID in Win7. The Dark is compatible with everything else in Win7, so it might be something they chose not to include in their BIOS while other manufacturers have. I'll have more information on what works later in the week. I wanted a 6-drive RAID 5 array, but to make it bootable it can't span VMDs, so I will most likely have to do a 2-drive RAID 0 and a 4-drive RAID 0 or 5.
post edited by JK_DC - 2022/07/31 23:33:23
#53
Monstieur
Superclocked Member
  • Total Posts : 128
  • Reward points : 0
  • Joined: 2016/08/31 02:04:28
  • Status: offline
  • Ribbons : 5
Re: VROC update 2022/07/31 23:36:48 (permalink)
There was a period where NVMe RSTe was renamed to VROC while SATA RSTe was still called RSTe. As of now both are called VROC. The consumer boards don't even have SATA RSTe as it's a workstation / server board feature. Consumer boards are just NVMe VROC and SATA / NVMe RST.
 
You can create a VROC RAID array on a secondary installation of Windows and it will work indefinitely on your primary installation without using the trial.
 
I can confirm that you can create a RAID0 array with Intel drives straight from the UEFI without a key. Non-Intel drives will never boot on X299 in any RAID configuration with or without a key. The feature is simply broken on all X299 boards.
post edited by Monstieur - 2022/07/31 23:38:47
#54
Monstieur
Superclocked Member
  • Total Posts : 128
  • Reward points : 0
  • Joined: 2016/08/31 02:04:28
  • Status: offline
  • Ribbons : 5
Re: VROC update 2022/07/31 23:49:58 (permalink)
JK_DC
there is no other RAID device listed besides the C220/C600 SATA driver. There should be another RAID controller that accepts the VMD NVMe RAID driver, but it isn't listed.

The VROC device will appear only when you actually have an NVMe drive in the slot where VROC is enabled. You don't need to create a RAID array for this. The moment you enable VROC on a slot, the native drive will disappear from Windows and be replaced with a VROC device. It creates a separate VROC device for each individual drive if you have not created a RAID array yet.
post edited by Monstieur - 2022/07/31 23:51:24
#55
ZoranC
FTW Member
  • Total Posts : 1099
  • Reward points : 0
  • Joined: 2011/05/24 17:22:15
  • Status: offline
  • Ribbons : 16
Re: VROC update 2022/08/01 12:30:39 (permalink)
Monstieur
ZoranC
Which applications were slow for you, please? I would like to try them to see what I get.
 
Intel tech support themselves told me on their forums that they are EOLing their own version, that everything has been handed over to Microsoft, and that whole-drive support was never official, so whatever the Microsoft Store version has will be what's official.

Remuxing 4K video with MKVToolnix. Creating ISO images from video files. These activities always wrote at the speed of the HDD. Writing to a standalone SSD was at the expected speed.

I'm not using MKVToolnix, so I can't comment with certainty on what might be going on there. All I know is that I haven't yet noticed any signs of the write cache being bypassed, and CrystalDiskMark sequential write speeds didn't show that either. So unless my experience turns out to be inaccurate (I'm not in a position to test right now), the only things that might explain the difference in our experiences are that either you created the pool with Microsoft's default WBC size, which is too small, so the cache is getting thrashed, or those apps use I/O requests that force a cache flush, or both.
 
Monstieur
Optane just uses the regular RST driver that's developed by Intel. It's used for all of their chipsets' SATA and NVMe ports. The application shell around it doesn't matter. The driver doesn't come with the application from the Microsoft Store. You need to install the driver separately or from Windows Update. You don't even need Windows to use Optane - you can set it up in the UEFI for the full capacity of the Optane drive and Windows will see the configuration when you log in.

I don't know why statements by Intel's tech support and some other Optane users contradict yours, but even if they are incorrect and one could still use the full size of an Optane drive for cache, I feel Optane in such a use case is still not for me. My purpose for the NVMe + HDD mix is a storage volume that is noticeably faster than an HDD the majority of the time while having a much lower cost per TB than pure NVMe. For work where ultimate speed is critical, I am using a different volume that is pure NVMe, at a price per TB that is still much less than Optane.
#56
ZoranC
FTW Member
  • Total Posts : 1099
  • Reward points : 0
  • Joined: 2011/05/24 17:22:15
  • Status: offline
  • Ribbons : 16
Re: VROC update 2022/08/01 17:28:26 (permalink)
Just came across another reason why Optane would be a no-go for me: https://www.youtube.com/watch?v=7Fw3bkmm3o0
#57
JK_DC
iCX Member
  • Total Posts : 370
  • Reward points : 0
  • Joined: 2007/11/01 11:31:14
  • Status: offline
  • Ribbons : 0
Re: VROC update 2022/08/03 00:29:01 (permalink)
Monstieur
JK_DC
there is no other RAID device listed besides the C220/C600 SATA driver. There should be another RAID controller that accepts the VMD NVMe RAID driver, but it isn't listed.

The VROC device will appear only when you actually have an NVMe drive in the slot where VROC is enabled. You don't need to create a RAID array for this. The moment you enable VROC on a slot, the native drive will disappear from Windows and be replaced with a VROC device. It creates a separate VROC device for each individual drive if you have not created a RAID array yet.




OK, I got two 670p's in today and put them in the Hyper M.2 card. I see them in the selection area and can create an array, and it says it is bootable. If I enable VROC for the slot it's in, I get a 94 error and the PC refuses to POST. If I leave it on non-VROC and go to Disk Management, it shows the drives as separate and not in RAID at all. If I install the VROC driver, it doesn't recognize my SATA RAID controller but shows the SATA drives. It doesn't show the NVMe drives at all. I think I have to have the VMD RAID driver, but it isn't there. I am hoping that when I put a VROC key in the slot, it will show the RAID device so I can install the driver. It works seamlessly on my ASRock board, exactly as you say. So my board might be defective. Are you able to boot the Dark with a Hyper M.2 card in a slot with VROC enabled?
 

Core i7 3820 @ 4.75 1.36v, Prolimatech Mega Shadow push/pull 
Asrock X79 Extreme 6,  16 GB G. Skill Z Series 1666 7-8-8-24 1T 1.56v 
EVGA 680 2GB SLI 1272/3200  (1.175v)
CM Storm Sniper Blk,  CM Silent Power Pro 1000w
Creative X-Fi Xtrememusic, Creative G500 5.1 310 watts
Intel X25-M G2 80GB,  Corsair M4 128GB, WD Caviar Blk 750GBX2, 1.5TB
Logitech G510, M705
#58
Monstieur
Superclocked Member
  • Total Posts : 128
  • Reward points : 0
  • Joined: 2016/08/31 02:04:28
  • Status: offline
  • Ribbons : 5
Re: VROC update 2022/08/03 00:48:40 (permalink)
JK_DC
OK, I got two 670p's in today and put them in the Hyper M.2 card. I see them in the selection area and can create an array, and it says it is bootable. If I enable VROC for the slot it's in, I get a 94 error and the PC refuses to POST. If I leave it on non-VROC and go to Disk Management, it shows the drives as separate and not in RAID at all. If I install the VROC driver, it doesn't recognize my SATA RAID controller but shows the SATA drives. It doesn't show the NVMe drives at all. I think I have to have the VMD RAID driver, but it isn't there. I am hoping that when I put a VROC key in the slot, it will show the RAID device so I can install the driver. It works seamlessly on my ASRock board, exactly as you say. So my board might be defective. Are you able to boot the Dark with a Hyper M.2 card in a slot with VROC enabled?
 

The VROC device which requires the VROC v7 or v8 driver will show up only when VROC is enabled in the BIOS. You don't need a VROC key and don't need to create an array for this. While VROC is enabled, the NVMe drives will not show up in Windows until you install the driver. You cannot install the VROC VMD driver for any other device. Merely enabling VROC will replace each NVMe drive with a VROC device. When you create a VROC RAID array, the individual VROC devices will be replaced with a single VROC device.
 
Error 94 is a PCI device enumeration error. It could be a compatibility issue with other devices sharing the PCIe lanes that belong to the same VMD on which VROC is enabled. If you have the Hyper M.2 in PE4, disconnect PE3 even if you use only x8 lanes in PE4. You can view the VMD grouping in the BIOS. The same applies to PE1 / PE2, and PM1 / PU1 / PE6 / PM2 / PU2.
 
I have used the Hyper M.2 card with 4x non-bootable non-Intel drives in PE4 in VROC mode. I have also used 2x bootable Intel Optane drives in PE6 and PU2 in VROC mode.
post edited by Monstieur - 2022/08/03 00:53:49
#59
JK_DC
iCX Member
  • Total Posts : 370
  • Reward points : 0
  • Joined: 2007/11/01 11:31:14
  • Status: offline
  • Ribbons : 0
Re: VROC update 2022/08/04 00:06:13 (permalink)
Monstieur
The VROC device which requires the VROC v7 or v8 driver will show up only when VROC is enabled in the BIOS. You don't need a VROC key and don't need to create an array for this. While VROC is enabled, the NVMe drives will not show up in Windows until you install the driver. You cannot install the VROC VMD driver for any other device. Merely enabling VROC will replace each NVMe drive with a VROC device. When you create a VROC RAID array, the individual VROC devices will be replaced with a single VROC device.
 
Error 94 is a PCI device enumeration error. It could be a compatibility issue with other devices sharing the PCIe lanes that belong to the same VMD on which VROC is enabled. If you have the Hyper M.2 in PE4, disconnect PE3 even if you use only x8 lanes in PE4. You can view the VMD grouping in the BIOS. The same applies to PE1 / PE2, and PM1 / PU1 / PE6 / PM2 / PU2.
 
I have used the Hyper M.2 card with 4x non-bootable non-Intel drives in PE4 in VROC mode. I have also used 2x bootable Intel Optane drives in PE6 and PU2 in VROC mode.




OK, I was able to get the driver to install by enabling VROC on PE3, which has nothing in it, but that doesn't help, since it has to be enabled for the slot the card is in. I cannot boot with VROC enabled on PE4; it gives a 94 error. I will try it again, but I disabled bifurcation on all other slots except PE4 and it won't boot. I only have a video card, the Hyper card, and a wireless card in PE5. None of those should conflict with PE4. The big question is why it won't boot in PE4 with VROC enabled, but it will boot with non-VROC. Both are bifurcated.
 
The other interesting thing is that I can create a RAID on PE4 from the RST page in the BIOS, but only if CPU Attached Storage is on. I believe you said RST doesn't work on PE4?
post edited by JK_DC - 2022/08/04 00:48:43
#60