I don't know about your friend, but I've run RAID 10 SSD arrays on every chipset from X58 to Z97. All worked, or are still working, perfectly well. You have to account for certain deficiencies - e.g. writes take a hit, there's no way around that. My last surviving RAID 10 array is in my Z97 work machine as a boot drive. No issues to report. In general RAID is by now obsolete technology, belonging to a different era. If you will be performing a lot of writes then RAID 10 is not a great idea, as everything gets written twice across all 4 drives. Write amplification. For normal day-to-day use it's not a problem, but e.g. regularly recording large video streams to RAID 10 may affect the array's performance in the long run.
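To put a number on that "written twice" point: a minimal sketch (hypothetical helper name, illustrative figures) of how mirroring in RAID 10 doubles the NAND wear regardless of how many drives are in the array:

```python
# Sketch: physical bytes written to NAND for a RAID 10 (striped mirrors) array.
# RAID 10 = RAID 0 across RAID 1 pairs, so every logical write also lands
# on the mirror of its stripe member - 2x physical writes, on any drive count.

def raid10_physical_bytes(logical_bytes: int) -> int:
    """Each logical write goes to a stripe member AND its mirror partner."""
    MIRROR_COPIES = 2
    return logical_bytes * MIRROR_COPIES

# Recording a 50 GB video stream burns 100 GB of NAND endurance on the array.
GB = 1024 ** 3
print(raid10_physical_bytes(50 * GB) // GB)  # 100
```

So for a write-heavy workload like video capture, the SSDs age at roughly twice the rate the logical write volume would suggest.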
No array level other than RAID 0 supports TRIM (and even that only on some chipsets). You have to deal with this, as any SSD put behind a RAID controller has to rely on its internal garbage collection (GC). On the other hand, the importance of TRIM is vastly overstated. I have SSDs in my possession which were NEVER TRIMed or serviced in any way. My oldest, a Corsair Force 3 I believe, lost only 3% of its life expectancy, and it's about 10 years old while still clocking speeds according to specification.
I have a plethora of SSDs connected to RAID controllers (proper cards, not pseudo chipset RAID) and these work just fine without any side effects from the lack of TRIM behind a RAID controller. I don't run arrays anymore and have moved to drive pooling, which is vastly superior redundancy-wise. The biggest problem with RAID is that it writes data in stripe-sized chunks instead of just the required amount. For HDDs it doesn't matter; for SSDs it does, as space is at a premium and NAND is not exactly designed for writing useless data. Example: a 129KB file on an array configured with a 64KB or 128KB stripe means you have to write either 3 or 2 chunks just to store that one extra 1KB.
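The 129KB example above can be sketched as a quick round-up calculation (hypothetical helper name; stripe sizes taken from the post, assuming the array allocates in whole stripe-unit chunks):

```python
import math

# Sketch: how stripe-sized allocation inflates a file's on-array footprint.
# Assumes writes are rounded up to whole stripe units (sizes illustrative).

def chunks_written(file_kb: int, stripe_kb: int) -> int:
    """Whole stripe-unit chunks needed to hold file_kb of data."""
    return math.ceil(file_kb / stripe_kb)

for stripe in (64, 128):
    n = chunks_written(129, stripe)
    print(f"{stripe}KB stripe: {n} chunks = {n * stripe}KB stored for 129KB")
# 64KB stripe  -> 3 chunks (192KB); 128KB stripe -> 2 chunks (256KB)
```

Either way, that trailing 1KB costs a full extra chunk, which is wasted space on an SSD and extra data the NAND has to hold.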
One thing to remember is that Intel chipsets don't support RAID 10 with 6 drives, only 4.