I recently replaced my EVGA GTX 750 Ti with an EVGA GTX 970 FTW. This is my first experience with a factory-overclocked card. I've been trying to use the debug option in the NVIDIA Control Panel to temporarily downclock to reference stock speeds while troubleshooting TDR errors in some games. Here's what I'm expecting (and what used to happen):
- When I first got the card and tried this, I would check System Information in the NVIDIA Control Panel and note the graphics clock speed at 1215 MHz under Details.
- Turn on Debug Mode.
- The graphics clock speed would then drop to 1050 MHz under System Information.
- Precision X 16 would also show the white arrow moving down to around 1050 MHz with a -165 MHz offset on the GPU clock (versus 1215 MHz and a 0 MHz offset at the factory overclock).
Here's what is happening now:
- Check System Information and see 1215 MHz under Details.
- Turn on Debug Mode.
- Still see 1215 MHz under Details (even after closing and re-opening NVCP).
- Check Precision X 16: I see the -165 MHz offset, but the clock speeds still read at factory-overclock levels.
- When I put load on the GPU, HWMonitor still appears to show it hitting the higher speeds (though I need to do more elaborate testing to confirm).
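For that more elaborate testing, one low-effort option (just a suggestion on my part, assuming the standard driver install) is to log the actual graphics clock with the nvidia-smi tool that ships with the NVIDIA driver (on Windows it's typically under C:\Program Files\NVIDIA Corporation\NVSMI) while a game is running:

```shell
# Sample the current graphics/memory clocks, temperature, and GPU utilization
# every 2 seconds and append to a CSV, so a stuck or non-downclocked state
# shows up clearly in the log. Run from the NVSMI folder or add it to PATH.
nvidia-smi --query-gpu=timestamp,clocks.gr,clocks.mem,temperature.gpu,utilization.gpu --format=csv -l 2 -f clocklog.csv
```

If Debug Mode were actually taking effect, clocks.gr should top out near 1050 MHz under load instead of 1215 MHz.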
I work in IT, so one of the first questions I usually ask a user is "well, what has changed?" Unfortunately, a few things have changed here:
- I was a moron (albeit an excited one) when I first got the card and booted up with the old drivers from the 750 Ti still installed. Not a clean setup by any means, but things did work. Anyway, between the time I got the expected behavior and now, I uninstalled the NVIDIA drivers, wiped things with DDU, and then did a clean install of the latest drivers. So, technically, that's two changes: a clean driver install and an upgrade to the current stable release from last week, 368.81.
- Getting the new card led me to upgrade my monitor to an Acer XB271HU. The key change is going from a 1920x1080, 60 Hz monitor to a 2560x1440, 144 Hz monitor with G-Sync.
Things I've tried since then:
- Uninstalling my old copies of Precision X 16 (along with deleting old profile files) and OC Scanner from last year. Rebooting. Installing new versions that are current as of today.
- I read in a few places that the 144 Hz refresh rate combined with 1440p can cause issues with clock speeds, so I dropped it to 120 Hz just in case.
- Tried a clean boot of Windows with most startup programs disabled, since some programs like Razer Synapse keep the GPU clocks elevated just by running in the background.
Other things I've noticed:
- Last night, I found a bunch of games performing horribly while I was tweaking settings for the new monitor. I finally popped open HWMonitor and found the GPU clock stuck at 405 MHz. Rebooting fixed the problem, but it was pretty strange.
- I seem to get a lot of TDR errors when games switch back and forth between windowed/fullscreen modes and when returning to the Windows desktop; I never have problems in the games themselves. Games running at 4K with DSR seem to be the most frequent offenders. I've considered the power supply being at fault, but why would the system run GPU stress tests stably for an hour without any errors if the 12V rail were unstable? Also, I never had these errors with the 750 Ti, though I realize this is a more demanding card and the games are now using higher settings.
- I'd also like to note that I'm using the 2nd BIOS via the toggle switch on the back of the card. I wasn't planning to do much overclocking, but I wanted the more aggressive fan curve that I read is included in that BIOS.
Aside from more nuclear options like rebuilding the OS from scratch and/or RMAing the card, any suggestions for things I can try? I guess my next step could be another clean driver removal followed by an install of the previous driver I was using. This isn't a dealbreaker at this point; I really just want to understand why the GPU isn't responding to requests to adjust the target clock speeds (with NVIDIA's own debug function, no less).
Thanks for any assistance I can get! I humbly defer to this community's expertise for help with this problem. (Also, sorry for the huge wall of text. Sheesh, can I ramble or what?) :)
Travis
post edited by twalls - 2016/07/21 15:24:06