EVGA

The next Geforce doesn’t use HBM 2

Author
Xavier Zepherious
CLASSIFIED ULTRA Member
2017/06/17 13:08:15 (permalink)
Our well-informed sources tell us that the next GeForce will not use HBM 2 memory. It is too early for that, and HBM 2 is still expensive. This is, of course, when you ask Nvidia, as AMD is committed to making an HBM 2 GPU
....
The next GeForce - its actual codename is still a secret - will use GDDR5X memory as the best solution around. We can only speculate that the card is even a Volta-architecture GeForce (VbG). The big chip that would replace the 1080 Ti could end up with the Gx104 codename. It is still too early for the rumored GDDR6, which will arrive next year at the earliest.
 
 
http://www.fudzilla.com/news/graphics/43873-next-geforce-doesn-t-use-hbm-2
----------------------------
 
 
As I said in an earlier post, Micron is able to push GDDR5X to 16 Gbps with normal stock (not selected/cherry-picked or special batches).
It's all about noise, power, and the algorithms used to handle the chips at those speeds.
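Quick back-of-the-envelope on why that matters (the bus widths below are just today's Pascal configurations, used for illustration, not anything confirmed for the next chip):

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8.
# Bus widths are current Pascal configurations, used purely for illustration.
def bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gb_s(10, 256))   # GTX 1080 today (10 Gbps GDDR5X, 256-bit): 320 GB/s
print(bandwidth_gb_s(16, 256))   # 16 Gbps GDDR5X on the same 256-bit bus:   512 GB/s
print(bandwidth_gb_s(16, 384))   # 16 Gbps GDDR5X on a 384-bit GP102-class bus: 768 GB/s
print(bandwidth_gb_s(2, 2048))   # two HBM2 stacks at the rated 2 Gbps/pin:  512 GB/s
```

So 16 Gbps GDDR5X on an ordinary bus lands in the same bandwidth range as a two-stack HBM 2 setup, without the interposer cost.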
 
So GDDR5X could be used in the interim until GDDR6 is available in quantity.
That means Volta could arrive any time this fall - ideally for Nvidia around the time AMD releases Vega - so they don't lose market share.
 
However, the longer AMD delays Vega, the more likely Nvidia is to wait for GDDR6 and a January launch.


#1


    lehpron
    Regular Guy
    Re: The next Geforce doesn’t use HBM 2 2017/06/17 17:38:03 (permalink)
    Got to laugh at the people surprised by this.
    Xavier Zepherious
    It is too early for that, and HBM 2 is still expensive. This is, of course, when you ask Nvidia, as AMD is committed to making an HBM 2 GPU
    AMD has no choice: their GPUs have to be large just to compete with nVidia, and if GDDRx had been used instead of HBMx, the power draw would have been too high.
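    To put rough numbers on the power side of that argument, here's a sketch; the energy-per-bit values are placeholders I'm assuming for the comparison, not vendor specs, since real figures vary by implementation:

```python
# Memory interface power ~= bandwidth * energy per transferred bit.
# The pJ/bit values below are ASSUMED placeholders for illustration only;
# they just reflect that on-package HBM spends far less energy per bit
# than an off-package GDDR bus does.
def interface_power_w(bandwidth_gb_s: float, pj_per_bit: float) -> float:
    bits_per_second = bandwidth_gb_s * 8e9        # GB/s -> bits/s
    return bits_per_second * pj_per_bit * 1e-12   # pJ -> J (i.e. W)

BW = 512  # GB/s, roughly the bandwidth class being discussed

print(interface_power_w(BW, 20))  # assumed GDDR-class ~20 pJ/bit -> ~82 W
print(interface_power_w(BW, 7))   # assumed HBM-class   ~7 pJ/bit -> ~29 W
```

    Push that to the bandwidth a big compute chip needs and the GDDR-only option eats a large chunk of the board power budget, which is exactly the corner AMD is in.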


    Looks to me like nVidia didn't want to end up blaming an HBM 2.0 supplier for disrupting their product cycle the way it has disrupted their competitor's; had they jumped onto the HBM 2.0 bandwagon, their products would be late as well. So the expense isn't the HBM implementation itself; it would be the cost of products being late and delayed while recycling old stock when they don't need to.

    #2