11/9/2022
the_Scarlet_one
4090 owners, could you please run the GPU-Z sensor tab, look at your PCIe slot power draw, and post what GPU you have along with its PCIe slot power draw? So far I have seen between 7w and 17w at full load on the 600w BIOS, but nothing more than 17w.
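
(Side note: if anyone would rather log it than watch the sensor tab, here is a rough sketch using NVML via the nvidia-ml-py package. Note that NVML only reports total board power, not the per-rail PCIe-slot vs 12-pin split that GPU-Z shows, so it can't replace the slot-power sensor reading; it's just handy for watching the total under load.)

```python
# Rough logging sketch. Assumes the nvidia-ml-py package is installed
# ("pip install nvidia-ml-py"). NVML reports TOTAL board power only,
# not the PCIe-slot vs 12-pin split that GPU-Z's sensor tab shows.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):  # older pynvml builds return bytes
    name = name.decode()
print(f"Logging total board power for: {name}")

try:
    while True:
        milliwatts = pynvml.nvmlDeviceGetPowerUsage(handle)
        print(f"{time.strftime('%H:%M:%S')}  {milliwatts / 1000:.1f} W")
        time.sleep(1.0)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```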

I do not have a 4090, and I am just super curious whether NVIDIA and the AIBs are strictly pulling the 600w power limit off the 12 pin, or whether any AIB is leveraging the PCIe slot's 75w allowance to take some of the load off the 12 pin.

I watched JayzTwoCents' latest “EVGA not 4090” video, and he has 7.5 watts max PCIe slot power draw in the two-second clip where he shows the 600w BIOS at load. I won’t link the video because it is just 15 minutes of JayzTwoCentsless (spelled that way on purpose) rambling. It’s not worth sacrificing your ears; the BIOS doesn’t net higher scores on the pre-production GPU core anyway.
11/9/2022
Sajin
Gigabyte Gaming OC with Cyberpunk 2077 ray tracing load...
 

11/10/2022
CraptacularOne
Same for me; the card draws very minimal power from the PCIe slot. GN also mentioned how these cards sip power from the slot.
 
Did the same as above in Cyberpunk 2077, max settings with psycho RT enabled at 5120x1440 resolution. Mine is a bit higher at about 31w PCIe slot power draw. RTX 4090 Gaming Trio flashed with the 520w MSI Suprim BIOS.

11/10/2022
the_Scarlet_one
I am super curious why NVIDIA wouldn’t put more of the power draw on the PCIe slot to reduce the power pulled through the 12 pin, as a way to slightly reduce heat for those experiencing issues.

I would assume that if they did that, it would be seen as admitting there is a problem, which is why they likely won’t do anything.
11/10/2022
CraptacularOne
the_Scarlet_one
I am super curious why NVIDIA wouldn’t put more of the power draw on the PCIe slot to reduce the power pulled through the 12 pin, as a way to slightly reduce heat for those experiencing issues.

I would assume that if they did that, it would be seen as admitting there is a problem, which is why they likely won’t do anything.

The whole thing is getting blown WAY WAY out of proportion. There have now been numerous tests by people intentionally trying to compromise these cables and get them to melt, yet no matter what they do they haven't been able to recreate it. I'm not saying it doesn't or can't happen, because clearly it did, but it's not because of a poor design of the cable; more than likely it's user error, people not fully inserting the plug into the socket, that's causing the small number of issues.
 
https://overclock3d.net/news/power_supply/psu_guru_chimes_in_on_12vhpwr_cable_controversy_-_insert_your_cables_fully/1
 
The people with damaged cables are not going to outright say "yeah, maybe I didn't fully insert the cable" and potentially void their warranty, even though the issue is more than likely user error. They of course don't want to admit user error, be on the hook, and be out $1600.
 
This whole thing is just another "POSCAP" overreaction, just like when the 30 series first launched and it ended up being a driver bug with nothing actually wrong with the GPUs. These cases of burned cables are user error, I'm almost certain at this point. What will be interesting is how the various GPU vendors handle the warranty claims. I've had my card for about a month now with no issues. It has run at a daily overclock of 2850MHz core and 22GHz memory from basically day one, through many long gaming sessions.
 
Nvidia didn't pull more power from the PCIe slot because they really don't need to, and regardless, the 66w they could have drawn from the slot's 12v rail wouldn't have made any real difference in terms of heat or power draw for the card.
11/10/2022
kougar
the_Scarlet_one
I am super curious why NVIDIA wouldn’t put more of the power draw on the PCIe slot to reduce the power pulled through the 12 pin, as a way to slightly reduce heat for those experiencing issues.

I would assume that if they did that, it would be seen as admitting there is a problem, which is why they likely won’t do anything.



The slot is maxed at 75w, so in the total scheme of things leaving 40w on the table isn't much. A tiny bump in clocks and a 120% power target will easily make the card consume more than that. As AMD cards have demonstrated in the past, it's very easy to overdraw the PCIe slot, and that's a very bad thing to do. There are only so many ways to use the slot power, so because of this GPUs often reserve it for separate circuitry that doesn't tie into the main rails.
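
To put rough numbers on that (a quick sketch; it assumes a perfectly even current split across the six 12 V pins, which real connectors don't guarantee):

```python
# Quick arithmetic behind the point above, using only published figures:
# the 12VHPWR connector carries current over six 12 V pins, and the PCIe
# slot tops out at 75 W. Assumes a perfectly even split across pins.
VOLTS = 12.0
PINS = 6  # 12 V current-carrying pins in the 12VHPWR connector

def per_pin_amps(connector_watts: float) -> float:
    """Average current per 12 V pin for a given connector-side load."""
    return connector_watts / VOLTS / PINS

print(f"600 W all on the connector : {per_pin_amps(600):.2f} A/pin")  # 8.33
print(f"525 W + full 75 W on slot  : {per_pin_amps(525):.2f} A/pin")  # 7.29
# A 120% power target (720 W) eats the slot's contribution right back:
print(f"720 W total - 75 W on slot : {per_pin_amps(720 - 75):.2f} A/pin")  # 8.96
```

So offloading the full slot allowance only drops per-pin current by about 1 A, and a 120% power target eats that back and then some.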
 
CraptacularOne
The whole thing is getting blown WAY WAY out of proportion. There have now been numerous tests by people intentionally trying to compromise these cables and get them to melt, yet no matter what they do they haven't been able to recreate it.


Blown out of proportion? Honestly, that's probably true; ~30 crisped cards out of thousands isn't much. But it's still 100% unacceptable.
 
Also, as quick as JonnyGuru and some others are to blame the victims, the fact that they haven't been able to recreate the burned-up cables by not fully inserting the connector themselves means they're still guessing. If it's SO SIMPLE, just a not-fully-inserted cable, then why can't they recreate it themselves, eh? More than just Jonny have tried. There are still other theories and probable causes in play.
11/10/2022
the_Scarlet_one
CraptacularOne

The whole thing is getting blown WAY WAY out of proportion. There have now been numerous tests by people intentionally trying to compromise these cables and get them to melt, yet no matter what they do they haven't been able to recreate it. I'm not saying it doesn't or can't happen, because clearly it did, but it's not because of a poor design of the cable; more than likely it's user error, people not fully inserting the plug into the socket, that's causing the small number of issues.
 
https://overclock3d.net/news/power_supply/psu_guru_chimes_in_on_12vhpwr_cable_controversy_-_insert_your_cables_fully/1
 
The people with damaged cables are not going to outright say "yeah, maybe I didn't fully insert the cable" and potentially void their warranty, even though the issue is more than likely user error. They of course don't want to admit user error, be on the hook, and be out $1600.
 
This whole thing is just another "POSCAP" overreaction, just like when the 30 series first launched and it ended up being a driver bug with nothing actually wrong with the GPUs. These cases of burned cables are user error, I'm almost certain at this point. What will be interesting is how the various GPU vendors handle the warranty claims. I've had my card for about a month now with no issues. It has run at a daily overclock of 2850MHz core and 22GHz memory from basically day one, through many long gaming sessions.
 
Nvidia didn't pull more power from the PCIe slot because they really don't need to, and regardless, the 66w they could have drawn from the slot's 12v rail wouldn't have made any real difference in terms of heat or power draw for the card.


Funny you quote JonnyGuru as the source for blaming end users… Real stand-up guy, following through on his lies while he keeps sticking his foot further down his own throat. Keep in mind, he said the quad 8 pin adapter didn’t have any way to tell whether four cables were connected because he didn’t actually look into it, just shoved that foot in his mouth and then got defensive when he got caught. He was a great reviewer, but now that his name rides on the manufacturing of the product, he is vehemently defending it while deflecting blame. He has walked back too many topics for me to care what he says at this point.

Most of the reviewers testing these scenarios have been running them for 1 to 2 hours, some longer, but they haven’t been using the cards as much or as long as most end users would. More of them are looking at the probabilities now, but that doesn’t change the fact that end users have received 12+4 pin connectors with plastic molded directly into the actual pin. While there aren't a lot of cards experiencing the issue currently, how many people are not even paying attention to this topic right now? JayzTwoCentsless and JonnyGuru are not my go-to people at this point, because one knee-jerk reacts to everything on reddit and the other is seemingly defending his own manufacturing.

And with the cables that have melted, why is it only the outside pins melting, typically on both sides at once, and not the center pins? If all of the pins are loose because the end user didn’t fully seat the connector, then all of the pins should be generating heat and melting more area than just the outside edges.
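
One way to reason about that with a toy circuit model: the 12 V pins are in parallel, so a pin whose contact has degraded (say, because the connector is tilted at the edges) carries less current yet can still dissipate the most heat, since its heat is generated in a tiny contact patch. A minimal sketch below; every resistance value in it is invented purely for illustration:

```python
# Toy model of the outer-pin question. ALL resistance values are made up
# for illustration; real contact resistances depend on the terminals.
# The six 12 V pins sit in parallel, so each pin's current depends on its
# path resistance, and the heat generated AT the contact is I^2 * R_contact.
TOTAL_AMPS = 50.0      # ~600 W / 12 V across the whole connector
WIRE_OHMS = 0.010      # assumed per-pin wire + crimp resistance
GOOD_CONTACT = 0.005   # assumed healthy contact resistance
BAD_CONTACT = 0.015    # assumed degraded (tilted / partially seated) contact

def pin_currents_and_heat(contacts):
    """Per-pin (current, contact dissipation) for parallel pin paths."""
    paths = [c + WIRE_OHMS for c in contacts]
    # Every parallel path sees the same voltage: V = I_total / sum(1/R_i)
    volts = TOTAL_AMPS / sum(1 / r for r in paths)
    return [(volts / r, (volts / r) ** 2 * c) for r, c in zip(paths, contacts)]

even = pin_currents_and_heat([GOOD_CONTACT] * 6)
# Connector tilted at the edges: the two outer pins have degraded contacts.
tilted = pin_currents_and_heat([BAD_CONTACT] + [GOOD_CONTACT] * 4 + [BAD_CONTACT])

for label, pins in [("even seating:", even), ("tilted edges:", tilted)]:
    print(label, ", ".join("%.1fA/%.2fW" % p for p in pins))
```

In this toy model the degraded edge pins dissipate the most heat at the contact even though they carry the least current, and the healthy center pins heat up too because they pick up the extra current. Whether that matches the real failures would take actual contact-resistance measurements.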
11/10/2022
the_Scarlet_one
kougar
The slot is maxed at 75w, so in the total scheme of things leaving 40w on the table isn't much. A tiny bump in clocks and a 120% power target will easily make the card consume more than that. As AMD cards have demonstrated in the past, it's very easy to overdraw the PCIe slot, and that's a very bad thing to do. There are only so many ways to use the slot power, so because of this GPUs often reserve it for separate circuitry that doesn't tie into the main rails.

 
AMD doesn't report PCIe slot power draw through GPU-Z, only voltage, and prior to the 3090 Ti most GPUs from NVIDIA leveraged up to the full 75w with no problem. It was even reported that some 3090 FTW3 cards were drawing up to 85w from the PCIe slot, and users RMA'd them and needed new BIOSes to get it lowered. So the use of PCIe slot power only recently went down, as far as can be tracked.

A perfect example is the 3070 Founders Edition I just sold. It used only one 8 pin and had a 225w BIOS: 150w from the 8 pin and 75w from the PCIe slot. It is far more uncommon for the PCIe slot's 75w to go almost completely unused, as only two products so far have not made use of it.
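
The budget math in that example is just additive per power input. A trivial sketch (the per-input limits are the standard PCIe CEM spec values; the helper function itself is hypothetical):

```python
# Budget math for the example above, using the standard per-input limits
# (PCIe CEM): slot = 75 W, 6-pin = 75 W, 8-pin = 150 W, 12VHPWR = up to 600 W.
LIMITS_W = {"slot": 75, "6-pin": 75, "8-pin": 150, "12VHPWR": 600}

def board_budget(inputs):
    """Max spec-compliant board power for a given set of power inputs."""
    return sum(LIMITS_W[i] for i in inputs)

print(board_budget(["slot", "8-pin"]))    # 225 -> matches the 3070 FE BIOS
print(board_budget(["slot", "12VHPWR"]))  # 675 -> a 4090 that also used the slot
```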
11/10/2022
CraptacularOne
the_Scarlet_one
Funny you quote JonnyGuru as the source for blaming end users… Real stand-up guy, following through on his lies while he keeps sticking his foot further down his own throat. Keep in mind, he said the quad 8 pin adapter didn’t have any way to tell whether four cables were connected because he didn’t actually look into it, just shoved that foot in his mouth and then got defensive when he got caught. He was a great reviewer, but now that his name rides on the manufacturing of the product, he is vehemently defending it while deflecting blame. He has walked back too many topics for me to care what he says at this point.

Most of the reviewers testing these scenarios have been running them for 1 to 2 hours, some longer, but they haven’t been using the cards as much or as long as most end users would. More of them are looking at the probabilities now, but that doesn’t change the fact that end users have received 12+4 pin connectors with plastic molded directly into the actual pin. While there aren't a lot of cards experiencing the issue currently, how many people are not even paying attention to this topic right now? JayzTwoCentsless and JonnyGuru are not my go-to people at this point, because one knee-jerk reacts to everything on reddit and the other is seemingly defending his own manufacturing.

And with the cables that have melted, why is it only the outside pins melting, typically on both sides at once, and not the center pins? If all of the pins are loose because the end user didn’t fully seat the connector, then all of the pins should be generating heat and melting more area than just the outside edges.


You seem very angry about something you don’t even own yet. Opinions of JonnyGuru aside, the fact remains that no one has been able to recreate the melting issue in a controlled setting. They have nearly destroyed these cables trying to get them to fail and overheat, and they can’t make it happen. So the only other logical assumption is user error, and that seems plausible at this point as we don’t really have any other reasonable conclusion. At last estimate there have been 100K RTX 4090s sold and only a minuscule number of people (roughly 30 or so cases) have had their cable melt. While I agree that is not acceptable, you also must agree that the failure rate, roughly 0.03%, is very, very low. I’m not trying to make excuses, but look at the bigger picture here.

They can’t recreate the melting in a lab and have no way to verify under what conditions users are installing their cards. If this were a massive issue there would be a lot more reports than the handful we have, considering there are 100K cards out in the wild. I’m speaking from experience here: my card is fine, no signs of melting or other damage, and I’ve gamed heavily on it quite regularly with an increased power target and a flashed BIOS.

Like I said, you don’t have to like the guy; that’s not the point. The point is we don’t have any more plausible cause for these isolated issues.
11/10/2022
tresnugget
CraptacularOne
You seem very angry about something you don’t even own yet. Opinions of JonnyGuru aside, the fact remains that no one has been able to recreate the melting issue in a controlled setting. They have nearly destroyed these cables trying to get them to fail and overheat, and they can’t make it happen. So the only other logical assumption is user error, and that seems plausible at this point as we don’t really have any other reasonable conclusion. At last estimate there have been 100K RTX 4090s sold and only a minuscule number of people (roughly 30 or so cases) have had their cable melt. While I agree that is not acceptable, you also must agree that the failure rate, roughly 0.03%, is very, very low. I’m not trying to make excuses, but look at the bigger picture here.

They can’t recreate the melting in a lab and have no way to verify under what conditions users are installing their cards. If this were a massive issue there would be a lot more reports than the handful we have, considering there are 100K cards out in the wild. I’m speaking from experience here: my card is fine, no signs of melting or other damage, and I’ve gamed heavily on it quite regularly with an increased power target and a flashed BIOS.

Like I said, you don’t have to like the guy; that’s not the point. The point is we don’t have any more plausible cause for these isolated issues.


The closest I've seen to an adapter melting in a controlled setting was Ronaldo from TecLab hitting nearly 120°C on the adapter with it not all the way plugged in under a 450w load. If he'd left it in, I think it would've kept rising. He hit almost the same temps with it 100% plugged in under a 1500w load.
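
If those numbers are right, here's a rough back-of-the-envelope on what they imply (a sketch under big simplifying assumptions, not a real thermal model):

```python
# Back-of-the-envelope on the TecLab result above. Assumes contact heating
# scales as I^2 * R and that cooling conditions were comparable between runs
# (a big simplification), so "same temperature" means roughly "same I^2 * R".
VOLTS = 12.0
partial_amps = 450 / VOLTS   # ~37.5 A, partially seated connector
full_amps = 1500 / VOLTS     # ~125 A, fully seated connector

# Equal heat: partial_amps^2 * R_partial == full_amps^2 * R_full
ratio = (full_amps / partial_amps) ** 2
print(f"Implied contact-resistance ratio: ~{ratio:.0f}x")  # ~11x
```

In other words, a partially seated contact would only need on the order of ten times the resistance of a healthy one to run just as hot at a third of the load, which seems well within what a tiny contact patch could produce.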
