I have a 3080Ti FTW3 Ultra that I bought direct from EVGA. I'm located in the USA.
Test Bench:
Asus X99-E-10G WS (BIOS up to date, CMOS cleared)
Intel Xeon E5-2697A v4
Micron 64GB (4x 16GB) DDR4-3200 RDIMM, running at 2400 due to CPU limit
Seasonic Prime 1000W Platinum (all voltages check out, well within spec)
SanDisk 1TB 2.5" SATA SSD (no nvme drives installed)
No other PCIe devices installed
Some months ago I noticed the 3rd fan was having issues. Sometimes it would not spin at all, and when it was spinning, it would make noise (grinding, squealing, etc.). I put it off at first since the card was otherwise working fine, but the noise became incessant and I couldn't ignore it anymore. So a few weeks before Christmas, I RMA'd the card, opting for a cross-ship RMA to avoid downtime during shipping.
I received the first replacement card, tested it briefly on my test bench, then swapped it into the main system. It worked fine for about a week, then the system suddenly started crashing. The crashes were intermittent at first, but quickly became constant and consistent whenever there was load on the GPU. I tested and confirmed the same behavior with this first replacement card on my test bench: any load (any kind of benchmark) would crash the whole system almost instantly with graphical artifacts, then a black screen, and the system would have to be hard rebooted. It would idle at the desktop and handle 2D work fine, but any intense 3D load or even compute load would crash it. I have several 3080Tis and only this card behaved this way. The PSU and the rest of the system were ruled out as variables, since the behavior followed the card no matter what system it was in, and other cards worked fine in the same systems. I initiated another cross-ship RMA, though the whole process was a little delayed by tech support backlogs over the holidays, which is understandable.
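For context, the loads that tripped it were nothing exotic. Here's a rough sketch of the kind of sustained compute load that would bring a card like that down (this assumes PyTorch with CUDA is installed; it's not my exact test, just an illustration, since any heavy benchmark did the same thing):

import torch

# Two large FP32 matrices; repeated matmuls keep the GPU at sustained full load.
a = torch.randn(8192, 8192, device="cuda")
b = torch.randn(8192, 8192, device="cuda")

for _ in range(2000):
    c = a @ b

# Wait for all queued work to finish; the bad card artifacted and hung long before this point.
torch.cuda.synchronize()
print("survived:", c[0, 0].item())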
Then I received the second replacement GPU yesterday. I loaded it up on my test bench and it seemed to work OK; it passed all the load tests and benchmarks, but the GPU was stuck at x8 PCIe lanes. This was with the GPU in the top slot closest to the CPU. I tried the GPU in all four x16 slots (slots 1, 3, 5, 7, which are all x16 electrically as well) and each time it would only link up at x8. No risers are in use. To verify it wasn't the board, I put the old/bad 3080Ti in, and while it crashes under load, it can be used to run the desktop and links at x16 no problem. I put one of my spare EVGA RTX 3060 cards in the test bench and it also ran at x16 fine. I also tried this GPU in another system and it would only negotiate x8 there as well. I inspected the PCIe fingers/pins and they all looked fine with no clear signs of damage, but I cleaned them with alcohol anyway just to try; no luck, still only x8. So there's something going on with this second replacement's PCIe link width that is beyond my ability to remedy.
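In case it helps anyone chasing the same issue, here's a minimal sketch of how the negotiated link width can be read programmatically (assumes the NVIDIA driver's nvidia-smi is on the PATH; GPU-Z shows the same current/max link readout on Windows). A healthy card in slots 1/3/5/7 on this board should report x16 on both fields:

import subprocess

# Ask the driver for the current and maximum PCIe link width of each GPU.
out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,pcie.link.width.current,pcie.link.width.max",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.strip().splitlines():
    name, cur, maximum = [field.strip() for field in line.split(",")]
    status = "OK" if cur == maximum else "DEGRADED"
    print(f"{name}: x{cur} (max x{maximum}) -> {status}")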
I called EVGA up again, and they are making an exception to Advance RMA me another card, which is very much appreciated. But I have to wonder what's going on with the QA of their refurbs that this keeps happening. The first replacement failing after a week I can give them a pass on, since the problem wasn't immediately noticeable, but a card that will only negotiate an x8 link width should have been caught by their QA right away. Unfortunately this seems to be a common occurrence with RMAs on 30-series cards, based on other threads I've read here (especially with 3090 and 3080Ti cards).
EVGA, I love you guys, and your support team has been great and helpful/understanding. But I think you need to make some improvements to the QA process to properly test your refurbished cards.
Rig1: EPYC 7V12 | [4] RTX A4000
Rig2: EPYC 7B12 | [5] 3080Ti + [2] 2080Ti
Rig3: EPYC 7B12 | [6] 3070Ti + [2] 3060
Rig4: [2] EPYC 7742 | RTX A2000
Rig5: [2] EPYC 7642
Rig6: EPYC 7551 | [4] Titan V