kram36: I'm calling BS on this. The only real way to know is to buy a card like an EVGA iCX card, which has sensors for the memory and a proper program to read them. I know for a fact that GDDR6 on a 256-bit bus runs at 43°C max on a card with a good cooler mining Ethereum. There is no way GDDR6X is running at over double that temperature.
aka_STEVE_b: Not too sure that's completely accurate, but I'll bet it's still something insane.
Bruno747 (replying to kram36): Are you sure about that? I'm pretty certain EVGA doesn't manufacture the individual RAM chips or the temperature sensors in the actual GPU. So if EVGA is adding a sensor to the side of or underneath a RAM chip, it is going to give a vastly different reading than one built into the die, and the on-die sensor is going to be far more accurate. Who knows, iCX may be using the built-in die sensor on the memory; in that case, the tool that just added support may not be decoding the signal properly.
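For anyone who wants to see what the publicly exposed sensors actually report, here is a minimal sketch using NVIDIA's NVML library via the pynvml Python bindings, assuming GPU index 0. Note that public NVML only exposes the core sensor; the GDDR6X memory-junction temperature discussed here is not part of the official API, which is exactly why third-party tools had to add support for it separately.

```python
# Minimal sketch: poll the GPU core temperature via NVML (pynvml bindings).
# Assumes `pip install pynvml` and an installed NVIDIA driver.
# Public NVML exposes only the core sensor; the GDDR6X memory-junction
# temperature debated in this thread is NOT available through this API.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    for _ in range(10):
        core_temp = pynvml.nvmlDeviceGetTemperature(
            handle, pynvml.NVML_TEMPERATURE_GPU
        )
        print(f"core: {core_temp} °C")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```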
ty_ger07: Mining hammers memory. Power target only has a small effect on memory performance. And the people mining for profit are overclocking their memory as far as they can, and reducing core power consumption to whatever is most profitable. The memory is still being hammered; that's where the mining money lies.
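To make the "how miners actually run their cards" point concrete: a typical Ethereum setup caps the core power limit and raises the memory clock offset. A rough sketch of that configuration on Linux follows; the tools (nvidia-smi, nvidia-settings) are real, but the 220 W cap, +1000 MHz offset, and performance-level index 4 are illustrative assumptions that vary per card and GPU generation, not recommendations.

```python
# Sketch of a typical mining-style setup on Linux: cap core power, raise
# the memory clock offset. All values are illustrative; tune per card.
# Requires root for `nvidia-smi -pl`, and Coolbits enabled in X for offsets.
import subprocess

POWER_LIMIT_W = 220    # assumed core power cap, in watts
MEM_OFFSET_MHZ = 1000  # assumed memory transfer-rate offset

# Cap the board power limit (cuts core heat, barely touches hashrate).
subprocess.run(["nvidia-smi", "-i", "0", "-pl", str(POWER_LIMIT_W)], check=True)

# Raise the memory offset; the perf-level index (4 here, for Ampere)
# differs between GPU generations.
subprocess.run(
    ["nvidia-settings", "-a",
     f"[gpu:0]/GPUMemoryTransferRateOffset[4]={MEM_OFFSET_MHZ}"],
    check=True,
)
```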
ty_ger07: I don't think that core temperature would have a large effect on internal memory temperature, but feel free to prove me wrong.
ty_ger07: Feel free to prove me wrong.
kram36 (replying to ty_ger07): Consider yourself proven wrong. In less than two minutes of running the card at stock clocks, the memory temperature went up at a lower memory clock speed, and the miner lost MH/s.
ty_ger07 (replying to kram36): Huh? What does that have to do with anything you said earlier? You can't change your standpoint to something nonsensical and then call that proof.
ty_ger07: All you have proven is that the memory temperature varies somewhat with the core's heat generation. We already knew that; it's obvious. But that doesn't prove you right and the article wrong. Prove me wrong. Prove the article wrong. To do that, you need to prove:

1. That YOUR card's memory throttles by default (to show your card is a valid test platform to compare against the reported results).
2. What the most profitable core power limit is for your card (to back up your statement about miners' typical power target).
3. That at that most profitable power limit, your card's memory no longer throttles (to support your claim that the article is wrong because of the way miners actually use their cards, according to you).
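A neutral way to settle this argument would be to log the memory clock and core temperature over a long mining session and look for sustained memory-clock drops at steady load, which is what throttling looks like from outside. A rough sketch using pynvml again; since NVML has no public memory-junction sensor, throttle detection here is inferred from the reported memory clock, and the four-hour duration and output filename are arbitrary choices.

```python
# Sketch: log core temp and memory clock once a second while mining, so
# throttling shows up as a sustained memory-clock drop at steady load.
# NVML has no public memory-junction sensor, so this is an indirect check.
import csv
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

with open("throttle_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "core_temp_c", "mem_clock_mhz"])
    start = time.time()
    for _ in range(4 * 3600):  # log for four hours
        temp = pynvml.nvmlDeviceGetTemperature(
            handle, pynvml.NVML_TEMPERATURE_GPU
        )
        mem_clock = pynvml.nvmlDeviceGetClockInfo(
            handle, pynvml.NVML_CLOCK_MEM
        )
        writer.writerow([round(time.time() - start), temp, mem_clock])
        time.sleep(1)

pynvml.nvmlShutdown()
```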
ty_ger07: Fail.
ty_ger07: Not even close.
kram36 (replying to ty_ger07): Easily proved you wrong; it only took two minutes to do so.
ty_ger07 (replying to kram36): You didn't, though. You are using a straw-man argument. Any outside observer knowledgeable on the topic can easily see that you have proven nothing. You said the article was wrong because the author doesn't know how a miner actually uses their card. Then, as "proof" that the article is wrong, you set up your card in the complete opposite of the way any miner would set it up. That is not proof. You have some strange ways.

Your "proof" was to purposely limit the memory's heat generation as much as possible (by underclocking it), and then show that when the memory is not generating its own heat, its internal temperature is largely driven by external factors. It's a "duh!" proof, and it is completely irrelevant to the topic of discussion. What you originally set out to prove was that after hours of mining Ethereum on overclocked memory (the way miners actually use it, according to you), the memory's internal temperature depends more on outside factors. You haven't proven that. In my opinion, the internal temperature is affected more by the workload and the memory's own activity than by outside factors.
Hoggle: Also, please end the fighting in this thread. It's not helping the discussion move forward to see people just going back and forth at each other.
badboy64: So does the memory temperature reach 100°C only when mining, or does it also do this while benchmarking and gaming?
kram36 (replying to badboy64): I doubt the numbers are true, but gaming would make the memory run hotter and cause the memory to throttle down.