Status update: The next day I tried to repeat the Fire Strike Ultra benchmark while running the fans at 100%, and this time the situation reversed: card #2 ran cooler than card #1.
So I decided to change how I test. First, I switched to Time Spy Extreme to rule out the specific benchmark as a factor. Second, I decided to run both a benchmark and a stress test, do two runs of each, and observe all temperature sensors for both averages and maximums. Results are in the screenshot below (average values are what 3DMark reported; max values are what GPU-Z reported).
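For anyone wanting to repeat this, the average/maximum bookkeeping can be sketched as a small script over a GPU-Z sensor log exported to CSV. This is a minimal sketch, not GPU-Z's own tooling; the column names below are assumptions and should be adjusted to match the headers in your actual log file:

```python
# Sketch: summarize selected sensor columns from a GPU-Z CSV log
# into (average, maximum) pairs. Column names are assumptions --
# check the header row of your own GPU-Z export and adjust.
import csv

def summarize_log(path, columns=("GPU Temperature [C]", "Board Power Draw [W]")):
    """Return {column: (average, maximum)} for the selected sensor columns."""
    sums = {c: 0.0 for c in columns}
    maxes = {c: float("-inf") for c in columns}
    count = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            count += 1
            for c in columns:
                value = float(row[c])
                sums[c] += value
                maxes[c] = max(maxes[c], value)
    return {c: (sums[c] / count, maxes[c]) for c in columns}
```

Running it against logs from both runs makes it easy to compare the two cards side by side instead of eyeballing the sensor graphs.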
If I interpret the results correctly, the findings are:
- When running benchmarks, the card in the 10900X system has slightly lower scores (around 1-3%) while drawing slightly less power (also around 1-3%) and running approximately 2°C hotter.
- When running stress tests, the card in the 10900X system draws slightly MORE power (around 2%) while running approximately 2-3°C hotter.
In other words, a difference does exist, but it doesn't seem as big as the initial Fire Strike Ultra runs indicated. Now we are looking at ~2% and 2-3°C.
Altogether this makes me feel the difference is small enough that it can be attributed to normal variation in silicon, power delivery, and cooling, combined with measurement error.
That in turn indicates the cards themselves are practically identical, neither one of them is "off", so there would be no justification at this moment to crack them open and reapply TIM.
What are your thoughts, please?
https://imgur.com/a/mkN4dd6
post edited by ZoranC - 2021/01/30 17:45:24