Gold Leader
GM204 you mean Lehpron xD Graphics Maxwell 204
A typo I would have never caught...
Gold Leader
It's downside is 256Bit GDDR5, 512Bit would of been more appropriate imo
I don't know, we've seen 256-bit forever; as long as the frequency goes up, the bandwidth goes up with it. As long as the bandwidth scales up with the extra GPU performance, the only real loss would come from applying an extreme usage scenario to any regular card.
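To put rough numbers on that, here's a back-of-the-envelope sketch in Python, assuming the usual GDDR5 convention where bandwidth = (bus width / 8) x effective data rate per pin:

def bandwidth_gbps(bus_width_bits, effective_rate_gbps):
    # peak memory bandwidth in GB/s for a GDDR5 bus
    # bus_width_bits: interface width, e.g. 256 or 384
    # effective_rate_gbps: effective data rate per pin, the "7GHz" figure
    return bus_width_bits / 8 * effective_rate_gbps

print(bandwidth_gbps(256, 7.0))   # 224.0 GB/s, e.g. GTX770
print(bandwidth_gbps(384, 7.0))   # 336.0 GB/s, e.g. GTX780 Ti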
For instance, say a Kepler-based GPU is made with 3200 CUDA cores at a 900MHz base clock, which is 11.1% more cores than GK110 as it stands. If the memory bandwidth were increased at the same scale, it would be a proper fit, so to speak: keep the same 384-bit interface but run it at 7.7GHz instead of 7GHz. I could see how 256-bit could be a hindrance, but only if the kinds of scenarios that GK110 handles better are pushed. Not everyone has bandwagoned to 4K, so they wouldn't need much bandwidth as they aren't pushing textures; but the extra GPU performance means not needing SLI scaling to attain high frame rates.
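Checking that scaling the same way (assuming the full GK110's 2880 CUDA cores as the baseline):

print(3200 / 2880 - 1)                 # ~0.111, i.e. 11.1% more cores than GK110
print(384 / 8 * 7.0 * (3200 / 2880))   # ~373 GB/s if bandwidth scaled by the same 11.1%
print(384 / 8 * 7.7)                   # 369.6 GB/s from 384-bit at 7.7GHz, close enough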
That said, looking back at the GTX680 and GTX770: same 1536 CUDA cores with about 5% more frequency, but the change in memory from 6GHz to 7GHz accounted for most of the performance difference-- we could say 256-bit at 6GHz held back GK104 as it was. 7.7GHz would add another boost to the same GK104, and 7.7GHz is 28.3% scaling over 6GHz-- that would translate into a 1GHz 2000-CUDA GPU chip, or lowering the GPU frequency to around 890MHz so the CUDA count can go up to 2304. So GTX780 could also have used 256-bit had the RAM scaled up to 7.7GHz.
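Same rough math for this one, treating shader throughput as roughly cores x clock (a simplification) and assuming the GTX770's ~1046MHz base clock as the starting point:

print(7.7 / 6.0 - 1)                  # ~0.283, i.e. 28.3% more bandwidth than 6GHz
budget = 1536 * 1046 * (7.7 / 6.0)    # scale GK104's cores x clock by the same 28.3%
print(budget / 1000)                  # ~2060 cores if the clock stays around 1GHz
print(budget / 2304)                  # ~895MHz if the core count goes up to 2304

Which lines up roughly with the 2000-CUDA / 890MHz figures above.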
256-bit at 7.7GHz certainly doesn't match the bandwidth of 384-bit, but it could suggest that the extra wasn't necessary for most scenarios. Extreme bandwidth will always be necessary for extreme usage scenarios (a combination of high resolution, multiple displays, and high-res textures)-- the trick is whether those scenarios come up often enough in the target market to bother implementing it, versus only implementing those features in a premium non-reference product. That is nVidia's and EVGA's call; doing it anyway raises the cost and power requirements, and what if not everyone needs it? Let those who prefer it pay for it. This is a business decision, not just a question of performance balance.
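For reference, the actual gap between those two configurations:

print(256 / 8 * 7.7)    # 246.4 GB/s for 256-bit at 7.7GHz
print(384 / 8 * 7.0)    # 336.0 GB/s for 384-bit at 7GHz
print(246.4 / 336.0)    # ~0.73, so roughly 27% less bandwidth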
IMO, if the VRAM were scaled up to around 8GHz on a factory-overclocked model, then 256-bit can work with a ~3000-CUDA-core GPU for most use cases, i.e. a single monitor. In other words, I'm willing to bet nVidia already thought of that, and if those leaked/speculative specs are accurate, they reflect that decision. For AMD, using 512-bit may only mean they don't need as high a GDDR5 frequency to achieve the same bandwidth goals. I wonder whether a wider interface consumes more power than just a higher frequency...
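Roughly, hitting the same bandwidth target from either direction looks like this (hypothetical numbers, just to show the trade-off):

target = 256 / 8 * 8.0       # 256.0 GB/s from 256-bit at 8GHz
print(target / (512 / 8))    # 4.0, so a 512-bit bus only needs ~4GHz effective to match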