EVGA

Musings about fermi vs kepler

Author
mastergenera1
New Member
  • Total Posts : 9
  • Reward points : 0
  • Joined: 2009/10/22 20:46:47
  • Status: offline
  • Ribbons : 0
2013/04/27 00:01:31 (permalink)
So I've been looking at Titan, but I own a reference EVGA 570, and I figured out it would take three GTX 570s to reach Titan-like performance in an optimal situation in a bitcoin miner (roughly 1440 Fermi cores = 2688 Kepler cores in this workload). That got me thinking: if NVIDIA were insane enough to make a Fermi "Titan" card (single GPU, 2688 cores), how big would it be? Plus some speculation about power draw and TDP/heat production.
post edited by mastergenera1 - 2013/04/27 00:06:45
#1

2 Replies Related Threads

    lehpron
    Regular Guy
    • Total Posts : 8858
    • Reward points : 0
    • Joined: 2006/05/18 15:22:06
    • Status: offline
    • Ribbons : 191
    Re:Musings about fermi vs kepler 2013/04/27 01:13:33 (permalink)
    Well, did you know the only difference in specs between the GTX570 and the GTX560 Ti 448-core is 32 CUDA cores, accounting for just 9W of power?  You can use that as a reference for some simple math: your 480-core GF110 die ends up using (not dissipating; TDP is not electrical draw) around 135W of power for itself, if running at 1464MHz.  Faster or slower, scale with frequency.  So out of the 219W reference GTX570, the GPU alone accounts for 61% of usage, the rest being Vram and other circuitry.
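    That back-of-envelope math can be sketched in a few lines of Python (all figures are the estimates from this post, not measured values):

    ```python
    # GTX560 Ti 448 -> GTX570: +32 CUDA cores for ~9 W (the spec delta above)
    watts_per_core = 9 / 32                 # ~0.28 W per Fermi core at 1464 MHz
    gf110_die_w = 480 * watts_per_core      # full 480-core GF110 die: ~135 W
    gpu_share = gf110_die_w / 219           # against the 219 W reference GTX570
    print(f"{gf110_die_w:.0f} W die, {gpu_share:.1%} of board power")
    ```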
     
    Except it doesn't scale properly when estimating a whole new architecture like Kepler.  Taking into account the shrink to 28nm, which reduces the area of the die and scales power/heat accordingly, and the lower frequency since Kepler has no hotclock (shader-to-core is 1:1 instead of Fermi's 2:1), the GK110 at 837MHz and 28nm would calculate to 211W.  That seems a bit high, since the total reference card is 250W and there are more physical Vram chips to account for (6GB versus the 1.25GB in your GTX570).
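    The 211W figure falls out of the same per-core number with two scaling factors applied. A rough sketch — the squared-area shrink and linear frequency scaling are simplifying assumptions, not measurements:

    ```python
    p = 2688 * (9 / 32)     # GK110's core count at Fermi's ~0.28 W/core: 756 W
    p *= 837 / 1464         # no hotclock: cores run at 837 MHz, not 1464 MHz
    p *= (28 / 40) ** 2     # die-area scaling for the 40 nm -> 28 nm shrink
    print(f"{p:.0f} W")     # lands around the ~211 W estimate above
    ```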
     
    For Kepler, you need to invoke a new math reference.  For one, your estimate of three GTX570s could be somewhat off.  Most reviews put Titan at 30% better than the GTX680, and the GTX680 tended to trade punches with the GTX590, putting it just 80% faster than a GTX570.  So in reality, 2.3 times the rendering power of a GTX570 would match a stock Titan at 1920x1200-- a resolution that effectively bottlenecks the 6GB card.  It is meant for higher resolutions, and only there would your estimate really scale better.
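    The 2.3x figure is just the two review ratios multiplied together (both ratios are this post's rough reading of the reviews, not benchmarks):

    ```python
    titan_over_680 = 1.30   # Titan ~30% faster than GTX680 in most reviews
    g680_over_570 = 1.80    # GTX680 ~80% faster than GTX570
    titan_over_570 = titan_over_680 * g680_over_570
    print(f"{titan_over_570:.2f}x")   # ~2.34x, i.e. roughly 2.3 GTX570s
    ```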
     
    For the power estimate, the GTX670 and GTX660 Ti share most of the same specs, except that losing 64 bits of memory interface accounts for 20W.  Not too helpful; we can't double up the interface and claim a GTX660 Ti with a 384-bit bus would somehow use 150+80=230W without extra CUDA cores or Vram.  The specs of the GTX670 and GTX680 differ only in CUDA cores and frequency, a 25W gap, and my best guess puts the GK104 at 113W at 1006MHz, and therefore the GK110 at 837MHz at 164W.  Is this reasonable?  Well, the GF110 at 1566MHz on the GTX580, using the Fermi math, would end up at 154W.  They are both 250W-ish cards and end up around the same temperature with similar heatsinks.
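    A sketch of that guesswork — the 113W GK104 figure is this post's own estimate, and scaling linearly in core count and clock is an assumption:

    ```python
    gk104_w = 113                                      # GK104 at 1006 MHz (estimate)
    gk110_w = gk104_w * (2688 / 1536) * (837 / 1006)   # scale cores and clock: ~164 W
    gf110_580_w = 512 * (9 / 32) * (1566 / 1464)       # GTX580 via Fermi math: ~154 W
    print(f"GK110 ~{gk110_w:.0f} W, GF110 on GTX580 ~{gf110_580_w:.0f} W")
    ```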
     
    With all this graphics technology, the only thing really changing is the GPU die itself.  Vram has been getting faster and more dense, but power levels per chip haven't changed dramatically.  On the other hand, analog capacitors and resistors are starting to vanish, while the rest of the circuitry has been stagnant.
     
    Considering equivalent performance to your GTX570, a Fermi Titan would require a single die of 1024 CUDA cores at 1464MHz, and at 40nm it would be well over 200W for the die itself and maybe 300W for the whole card, perhaps more with all the Vram chips.  Going ahead with 2688 CUDA cores at 40nm would be insane; it may not survive at Kepler Titan speeds.
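    For completeness, the same Fermi per-core math applied to that hypothetical 1024-core Fermi Titan die (pure speculation, using only the figures from this post):

    ```python
    die_w = 1024 * (9 / 32)   # 288 W for the die alone at 1464 MHz on 40 nm
    # Vram, VRM losses and other circuitry would push the whole card toward ~300 W.
    ```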
    post edited by lehpron - 2013/04/27 01:17:24

    For Intel processors, 0.122 x TDP = Continuous Amps at 12v [source].  

    Introduction to Thermoelectric Cooling
    #2
    mastergenera1
    New Member
    • Total Posts : 9
    • Reward points : 0
    • Joined: 2009/10/22 20:46:47
    • Status: offline
    • Ribbons : 0
    Re:Musings about fermi vs kepler 2013/04/27 17:07:01 (permalink)
    You bring up good points from an architectural standpoint about the advantages of Kepler in terms of gaming performance. I was just thinking of simple tasks like the bitcoin test I mentioned above: Titan would produce about 320 Mhash/s, while my reference 570 (1.2GB RAM) runs at about 120 Mhash/s. My other thought was that since the 6xx series are supposed to be about 50% faster than their equivalent 5xx parts, Titan for compute purposes, core for core, still can't hold a candle to Fermi. It would've been funny had a Fermi Titan been made, though, even if you would need LN2 and your own power station to run it :P. BTW, 320ish Mhash/s is worse than a single 7970, so that just really shows how bad nvidia cards are at simple stuff :P
     
    Edit: although I'd never expect Fermi to safely maintain Kepler clocks, I was just wondering what a Fermi Titan would look like without GPU Boost at stock-ish Titan speeds as well, so mid-800s to early-900s clocks. 
    post edited by mastergenera1 - 2013/04/27 17:27:30
    #3