Jonathan.Focarino
I'm having the same issue myself. I have two 980 Tis and they are showing up reversed in the NVIDIA Control Panel. I believe it's related to the motherboard. I'm using a Gigabyte Gaming GT Z97. Perhaps the PLX chip is causing the issue? I'd be curious to see if you removed any add-on cards (wifi, sound card, etc.) and had just the two GPUs installed, whether it resets back to the correct order.
Same here with my Sigs. It is VERY annoying. I already posted here about this issue, but nobody cares...
Have a look at Device Manager. Right-click the GPUs; you should see the same wrong order (PCI bus #).
This seems to be a very common issue. Many monitoring apps do not report multi-GPU setups correctly. I don't think it is mobo-related, but software-related.
CPUID HWMonitor gives correct results (GPU 1 is first, GPU 2 is second, and so on). Have a try.
The reason is lazy use of NVAPI, or an NVAPI bug.
Technical hypothesis (and workaround for coders; it worked for me):
The enumeration function
NvAPI_EnumPhysicalGPUs() fills an array with physical GPU handles and returns the GPU count (these handles are needed to communicate with the GPUs). But the array is not sorted. Most monitoring applications assume that the 1st physical GPU handle matches the 1st slot, the 2nd the 2nd, and so on. This is untrue (the array seems to be filled more or less randomly by the driver call). Another function has to be called to get the bus ID from each physical handle:
NvAPI_GPU_GetBusId(), and the array has to be sorted by bus ID (a simple bubble sort and Bob's your uncle...). Even Device Manager doesn't care about this.
I just read the NVIDIA documentation and wrote my own tiny monitoring app using NVAPI, so I get the correct order. (Luckily I am only interested in temps: o/c functions need the top-secret "non-disclosure" API.)