Well, did you know the only difference in specs between the GTX570 and the GTX560 Ti 448-core is 32 CUDA cores, accounting for just 9W of power? You can use that as a reference for some simple math: 480 x (9W / 32) means your 480-core GF110 die ends up using (not dissipating; TDP is not electrical draw) around 135W of power by itself, but only when running at 1464MHz. Faster or slower, scale with frequency. So out of the 219W reference GTX570, the GPU alone accounts for about 61% of the draw, the rest being VRAM and other circuitry.
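Here's that reference math as a quick sketch (a toy model only -- it assumes die power scales linearly with core count and shader clock, and the function name is just illustrative):

```python
# Toy model: Fermi die power scales linearly with CUDA core count and shader clock.
# Reference point: GTX570 (480 cores) and GTX560 Ti 448 (448 cores) differ by
# 32 cores and 9W of TDP at the same 1464MHz shader clock.
W_PER_CORE = 9 / 32    # ~0.28W per CUDA core at 1464MHz
REF_MHZ = 1464

def fermi_die_watts(cores, shader_mhz=REF_MHZ):
    """Estimated Fermi die power, scaled with core count and shader clock."""
    return cores * W_PER_CORE * (shader_mhz / REF_MHZ)

die = fermi_die_watts(480)   # GTX570's GF110
print(f"{die:.0f}W, {die / 219:.1%} of the 219W card")   # 135W, 61.6%
```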
Except it doesn't scale properly when estimating a whole new architecture like Kepler. Even taking into account the shrink to 28nm, which reduces the area of the die and scales power/heat accordingly, and the lower frequency, since Kepler has no hotclock (shader-to-core is 1:1 instead of Fermi's 2:1), the GK110 at 837MHz on 28nm still calculates to about 211W. That seems a bit high, since the whole reference card is 250W and there are more physical VRAM chips to account for (6GB versus the 2GB on your GTX570).
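Extending the same toy model (assuming, per the above, that the shrink scales power with die area, i.e. with the node length squared):

```python
# Naive extension to GK110: scale the Fermi per-core figure by core count,
# by the lower 837MHz clock (no hotclock), and by the 40nm -> 28nm shrink,
# assuming power tracks die area (node length squared).
w_per_core = 9 / 32   # from the Fermi reference above
gk110 = 2688 * w_per_core * (837 / 1464) * (28 / 40) ** 2
print(f"{gk110:.0f}W")   # ~212W -- too high for a 250W card that also feeds 6GB of VRAM
```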
For Kepler, you need to invoke a new math reference. For one, your estimate of three GTX570s could be somewhat off. Most reviews put Titan at 30% better than the GTX680, and the GTX680 tended to trade punches with the GTX590, putting it about 80% faster than a GTX570. So in reality, about 2.3x the rendering power of a GTX570 (1.8 x 1.3 = 2.34) would match a stock Titan at 1920x1200. That resolution effectively bottlenecks the 6GB card; it is meant for higher resolutions, and only there would your estimate really scale.
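The performance chain, spelled out with those review-derived multipliers (rough numbers, so treat the result as rough too):

```python
# Rough performance chain from period reviews (approximate multipliers):
gtx680_over_570 = 1.8   # GTX680 trades punches with the GTX590, ~80% over a GTX570
titan_over_680 = 1.3    # Titan reviewed at ~30% faster than a GTX680
print(f"{gtx680_over_570 * titan_over_680:.2f}x")   # ~2.34x a GTX570 for a stock Titan
```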
For the power estimate, the GTX670 and GTX660 Ti share most of the same specs, except that losing 64 bits of memory interface accounts for 20W. Not too helpful; we can't just double up the interface and claim a GTX660 Ti widened from 192-bit to 384-bit would somehow use 150 + (3 x 20) = 210W without extra CUDA cores or VRAM. The specs of the GTX670 and GTX680 differ only in CUDA cores and frequency, by 25W, which my best guess puts the GK104 at 113W at 1006MHz, and therefore the GK110 at 837MHz at about 164W. Is this reasonable? Well, the GF110 at 1544MHz on the GTX580, using the Fermi math, would end up around 152W. They are both 250W-ish cards and end up around the same temperature with similar heatsinks.
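Here's the Kepler side as the same kind of sketch (the 113W GK104 figure is my best guess above, so everything scaled from it is equally rough):

```python
# Kepler toy model, anchored on the guessed 113W for a full GK104 at 1006MHz:
def kepler_die_watts(cores, shader_mhz):
    return 113 * (cores / 1536) * (shader_mhz / 1006)

print(f"{kepler_die_watts(2688, 837):.1f}W")   # GK110 on Titan: ~164W
# Sanity check against the Fermi math: GTX580's GF110, 512 cores at 1544MHz.
print(f"{512 * (9 / 32) * (1544 / 1464):.1f}W")   # ~152W -- both ~250W cards
```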
With all this graphics technology, the only thing really changing is the GPU die itself; VRAM has been getting faster and more dense, but power levels per chip haven't changed dramatically. On the other hand, analog capacitors and resistors are starting to vanish, while the rest of the circuitry has been stagnant.
To match that performance using your GTX570 math, a Fermi-based Titan would require a single die of 1024 CUDA cores at 1464MHz, and at 40nm that would be well over 200W for the die itself and maybe 300W for the whole card, maybe more with all the VRAM chips. Going ahead with 2688 CUDA cores at 40nm would be insane; it might not survive even at Kepler Titan speeds.
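Running those hypotheticals through the same Fermi toy model:

```python
# Hypothetical Fermi-based Titan on 40nm, same per-core math as before:
w_per_core = 9 / 32
print(f"{1024 * w_per_core:.0f}W")                  # ~288W for the die alone at 1464MHz
print(f"{2688 * w_per_core * (837 / 1464):.0f}W")   # ~432W even at Titan's 837MHz
```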