2016/11/08 23:27:08
Cyricor
mullets
I recommended one of these cards to a friend for VR, so to a degree I feel responsible. Am I able to order this on his behalf? He's a computer potato, and there's no way in holy hell he'd be able to do this fix himself.


I did the same thing for a designer. I registered the product under my name, with his permission of course, and I had the pads shipped to my address since I will be the one doing the mod on his card. You will need the card's serial number, and maybe a photo of the receipt just in case.
2016/11/09 00:26:50
Xfade81
GFAFS
 
FurMark does not disable any protection mechanism (it's even stated in the post itself...). It only pushes the GPU chip to its maximum, causing it to draw MORE power. The more power the VRMs have to deliver and regulate, the lower their efficiency gets, and the higher the temperatures climb (the by-product of low efficiency is HEAT, in watts; in return, heat will further degrade efficiency if not dealt with... AKA thermal runaway). If the power delivery/regulation stage is undersized, not efficient enough, and/or not cooled enough, components will blow, melt, or wear out. MOSFETs, anyone? (Electronics 101: watts must be dissipated.) You see, it all comes down to basic design and proper measures to counter one nefarious effect: HEAT.
 
Now, don't get me wrong, you can tamper with these protections in some cases, but certainly not with FurMark. It usually requires a hard mod (changing resistors/components, or a hard bypass) or tampering at a lower instruction level (the BIOS, or ROM/EPROM in general, what M.G calls "other applications" in that case: NiBiTor, the Maxwell BIOS editor, the Kepler BIOS editor, tools used to alter the card's BIOS).
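To put rough numbers on that efficiency point, here is a minimal sketch, assuming a hypothetical 180 W load and made-up efficiency figures (none of these are measurements from this card):

```python
# Heat the VRM itself must shed: P_loss = P_out * (1/efficiency - 1).
# All numbers below are illustrative assumptions, not card specs.
def vrm_loss_watts(p_out_w: float, efficiency: float) -> float:
    return p_out_w * (1.0 / efficiency - 1.0)

for eff in (0.95, 0.90, 0.85, 0.80):
    # Assume the GPU pulls ~180 W through the VRM stage (hypothetical load).
    print(f"at {eff:.0%} efficiency the VRM dissipates "
          f"{vrm_loss_watts(180.0, eff):.1f} W")
```

As efficiency falls with rising temperature, the loss term grows further, which is exactly the thermal-runaway feedback loop described above.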
 




NVIDIA's card drivers used to check whether you were running FurMark or OCCT. When that was the case, sensors on the card would lower the clock speeds until the overvoltage situation went away, and then restore the clocks. If those checks in the driver and/or the sensors aren't there anymore, then sure, apparently you can freely use FurMark.
Which is dangerous, to say the least.
 
If they are present, FurMark of all things should not burn out a card. Yet apparently it does/did.
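As an illustration of that throttle-and-restore behavior, here is a toy sketch; it is not NVIDIA's actual driver logic, and read_vrm_temp_c / set_core_clock_mhz are hypothetical stand-ins for real sensor and clock-control interfaces:

```python
import random
import time

BASE_CLOCK_MHZ = 1506   # assumed stock boost clock
SAFE_CLOCK_MHZ = 1100   # assumed throttled clock
VRM_LIMIT_C = 105       # assumed sensor trip point

def read_vrm_temp_c() -> float:
    # Stand-in for a real VRM temperature sensor read.
    return random.uniform(90.0, 115.0)

def set_core_clock_mhz(mhz: int) -> None:
    # Stand-in for a real clock-control call.
    print(f"core clock -> {mhz} MHz")

for _ in range(10):  # a few polling cycles, for illustration
    if read_vrm_temp_c() > VRM_LIMIT_C:
        set_core_clock_mhz(SAFE_CLOCK_MHZ)   # throttle until the reading drops
    else:
        set_core_clock_mhz(BASE_CLOCK_MHZ)   # restore once back under the limit
    time.sleep(0.1)
```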
2016/11/09 01:31:11
Cyanold
Cyricor
It's a bit worse than EVGA just deciding to cut corners. It's a design flaw. Whoever designed the cooling solution on this card didn't have common sense.
They covered the MOSFETs with the midplate? Some of the hottest components on the card? Whereas ASUS and MSI, with far more overbuilt VRM solutions that by design run cooler due to the much higher amperage they can handle, have them actively cooled.

It's a facepalm moment, and worthy of my pity to be honest. If they had tried to cut corners I would be angry, but all the evidence points to project failure, absent quality control, and bad management decisions overall (whoever is responsible for the cooler design should be fired, even if he's the CEO's nephew, as it seems :P ).
Because it's my firm belief that anyone with half a brain could have designed it better at the same or lower cost.

I'm seriously considering taking a Dremel to the midplate to expose the MOSFETs to direct airflow and attaching some low-profile copper heatsinks.

Can anyone upload a picture of the back of the front heat spreader, to see how the original thermal pad is applied to the MOSFETs?
2016/11/09 02:55:14
SinistaLad
Cyanold
Cyricor
It's a bit worse than EVGA just deciding to cut corners. It's a design flaw. Whoever designed the cooling solution on this card didn't have common sense.
They covered the MOSFETs with the midplate? Some of the hottest components on the card? Whereas ASUS and MSI, with far more overbuilt VRM solutions that by design run cooler due to the much higher amperage they can handle, have them actively cooled.

It's a facepalm moment, and worthy of my pity to be honest. If they had tried to cut corners I would be angry, but all the evidence points to project failure, absent quality control, and bad management decisions overall (whoever is responsible for the cooler design should be fired, even if he's the CEO's nephew, as it seems :P ).
Because it's my firm belief that anyone with half a brain could have designed it better at the same or lower cost.

I'm seriously considering taking a Dremel to the midplate to expose the MOSFETs to direct airflow and attaching some low-profile copper heatsinks.

Can anyone upload a picture of the back of the front heat spreader, to see how the original thermal pad is applied to the MOSFETs?



Edit: it would appear I can't, as I'm still a "new" member.
Here is a link:
https://postimg.org/image/yt310ye6p/
 
2016/11/09 04:33:29
Cyanold
SinistaLad
Cyanold
Cyricor
It's a bit worse than EVGA just deciding to cut corners. It's a design flaw. Whoever designed the cooling solution on this card didn't have common sense.
They covered the MOSFETs with the midplate? Some of the hottest components on the card? Whereas ASUS and MSI, with far more overbuilt VRM solutions that by design run cooler due to the much higher amperage they can handle, have them actively cooled.

It's a facepalm moment, and worthy of my pity to be honest. If they had tried to cut corners I would be angry, but all the evidence points to project failure, absent quality control, and bad management decisions overall (whoever is responsible for the cooler design should be fired, even if he's the CEO's nephew, as it seems :P ).
Because it's my firm belief that anyone with half a brain could have designed it better at the same or lower cost.

I'm seriously considering taking a Dremel to the midplate to expose the MOSFETs to direct airflow and attaching some low-profile copper heatsinks.

Can anyone upload a picture of the back of the front heat spreader, to see how the original thermal pad is applied to the MOSFETs?

Edit: it would appear I can't, as I'm still a "new" member.
Here is a link:
https://postimg.org/image/yt310ye6p/
 


Thanks for the picture. Did you see the compression marks on the VRAM and MOSFET area (the narrow strip on the left-hand side of the small window)? From the picture, there is pretty limited contact on the bottom VRAM. I can see how EVGA intended to cool the area: heat should transfer along the path component -> thermal pad -> front heat spreader. It's pretty weird that the card runs hot on the VRM while the heat spreader on top of it looks rather cool in the Guru3D review. One potential reason I can think of is that either the thermal pad is too thin or EVGA got the standoff height wrong. Could anyone confirm whether the thermal pad on top of the MOSFETs makes good contact? I will check that once I receive my pads.
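To see why pad thickness and standoff height matter so much, here is a rough sketch of conduction across a pad; the per-MOSFET wattage, pad conductivity, and contact area are assumptions for illustration only:

```python
# Temperature rise across a thermal pad: dT = P * t / (k * A).
def pad_delta_t_c(p_watts: float, thickness_mm: float,
                  k_w_per_mk: float, area_mm2: float) -> float:
    return p_watts * (thickness_mm / 1000.0) / (k_w_per_mk * area_mm2 * 1e-6)

# Hypothetical: 5 W per MOSFET, 10 mm x 10 mm contact patch, k = 3 W/mK pad.
for t_mm in (0.5, 1.0, 2.0):
    print(f"{t_mm} mm pad: ~{pad_delta_t_c(5.0, t_mm, 3.0, 100.0):.1f} C "
          f"rise across the pad")
```

If a wrong standoff height leaves an air gap instead of a compressed pad, the effective conductivity collapses (air is around 0.026 W/mK), which would fit hot MOSFETs sitting under a cool-looking heat spreader.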
 
I don't really believe that the new thermal pad applied on top of the front heat spreader will address the root cause, beyond stopping a FLIR camera from getting a true reading. They need to find out why the heat from the VRM is not being transferred to the front heat spreader.
 
Can Jacob or Scarlet confirm this?
 
 
2016/11/09 04:34:09
PietroBR
Noob question, guys:
 
Does anyone know if the thermal pad fix will also fix the "black screen and 100% fan" issue? (I tried searching the forum but didn't find anything like it.)
 
 
2016/11/09 04:43:18
GFAFS
Xfade81
GFAFS
 
FurMark does not disable any protection mechanism (it's even stated in the post itself...). It only pushes the GPU chip to its maximum, causing it to draw MORE power. The more power the VRMs have to deliver and regulate, the lower their efficiency gets, and the higher the temperatures climb (the by-product of low efficiency is HEAT, in watts; in return, heat will further degrade efficiency if not dealt with... AKA thermal runaway). If the power delivery/regulation stage is undersized, not efficient enough, and/or not cooled enough, components will blow, melt, or wear out. MOSFETs, anyone? (Electronics 101: watts must be dissipated.) You see, it all comes down to basic design and proper measures to counter one nefarious effect: HEAT.
 
Now, don't get me wrong, you can tamper with these protections in some cases, but certainly not with FurMark. It usually requires a hard mod (changing resistors/components, or a hard bypass) or tampering at a lower instruction level (the BIOS, or ROM/EPROM in general, what M.G calls "other applications" in that case: NiBiTor, the Maxwell BIOS editor, the Kepler BIOS editor, tools used to alter the card's BIOS).
 

NVIDIA's card drivers used to check whether you were running FurMark or OCCT. When that was the case, sensors on the card would lower the clock speeds until the overvoltage situation went away, and then restore the clocks. If those checks in the driver and/or the sensors aren't there anymore, then sure, apparently you can freely use FurMark.
Which is dangerous, to say the least.
 
If they are present, FurMark of all things should not burn out a card. Yet apparently it does/did.

It was a way, at the time, to mitigate poor card hardware/design. The point being: FurMark is not the culprit and never was. Poor hardware/design/QC is, and it must be acknowledged and addressed fully.
 
2016/11/09 04:51:08
Cyanold
GFAFS
Xfade81
GFAFS
 
FurMark does not disable any protection mechanism (it's even stated in the post itself...). It only pushes the GPU chip to its maximum, causing it to draw MORE power. The more power the VRMs have to deliver and regulate, the lower their efficiency gets, and the higher the temperatures climb (the by-product of low efficiency is HEAT, in watts; in return, heat will further degrade efficiency if not dealt with... AKA thermal runaway). If the power delivery/regulation stage is undersized, not efficient enough, and/or not cooled enough, components will blow, melt, or wear out. MOSFETs, anyone? (Electronics 101: watts must be dissipated.) You see, it all comes down to basic design and proper measures to counter one nefarious effect: HEAT.
 
Now, don't get me wrong, you can tamper with these protections in some cases, but certainly not with FurMark. It usually requires a hard mod (changing resistors/components, or a hard bypass) or tampering at a lower instruction level (the BIOS, or ROM/EPROM in general, what M.G calls "other applications" in that case: NiBiTor, the Maxwell BIOS editor, the Kepler BIOS editor, tools used to alter the card's BIOS).
 

NVIDIA's card drivers used to check whether you were running FurMark or OCCT. When that was the case, sensors on the card would lower the clock speeds until the overvoltage situation went away, and then restore the clocks. If those checks in the driver and/or the sensors aren't there anymore, then sure, apparently you can freely use FurMark.
Which is dangerous, to say the least.
 
If they are present, FurMark of all things should not burn out a card. Yet apparently it does/did.

It was a way, at the time, to mitigate poor card hardware/design. The point being: FurMark is not the culprit and never was. Poor hardware/design/QC is, and it must be acknowledged and addressed fully.

Totally agreed. FurMark is benchmark software, used here to compare the performance of the EVGA card with other companies' cards. All we can see is that the EVGA card has significantly higher temperatures in the VRM area than other brands, which are probably still within the "marginal" range of the components' specifications. But it confirms that EVGA falls behind in cooling design if EVGA doesn't come up with a plan B.


2016/11/09 04:58:00
luckyirishlad
DeathAngel74
Ghetto tek VRM cooling


I have my new brochure out with some great Cool Tech products, take a look.
 
All EVGA VRM cooling certified too.

Attached Image(s)

2016/11/09 05:10:26
Cyanold
luckyirishlad
DeathAngel74
Ghetto tek VRM cooling


I have my new brochure out with some great Cool Tech products, take a look.
 
All EVGA VRM cooling certified too.

The place I work at has an excellent cooling solution for a 12 MW motor during the summer (45 °C): they put an evaporative air-conditioning unit on the inlet of the motor's cooling axial fan, which works brilliantly.
