EVGA

PC World tests DX12 with Futuremark

Page: < 12 Showing page 2 of 2
Author
warrior10
FTW Member
  • Total Posts : 1039
  • Reward points : 0
  • Joined: 2011/03/12 02:29:09
  • Status: offline
  • Ribbons : 0
Re: PC World tests DX12 with Futuremark 2015/03/31 08:30:05 (permalink)
Vlada011
No matter which card, we will need 6GB+ and cards at least as strong as an overclocked TITAN X for DX12 and Windows 10.
I strongly believe in a GM200 6GB with a 1150-1200MHz base clock this summer for around $750.
AMD will force NVIDIA to launch that card and ask only $100 more than the R9 390X 8GB.
Of course I would not wait for such a card, but the TITAN X is too expensive hardware for normal humans.
Especially in Europe: NVIDIA couldn't ask more than $1000, because that would be €1500 in Europe,
and even then they again have a more expensive card than the TITAN Black.
Again, the yearly tradition continues.
In some places in Europe the TITAN X price is such that people in the USA could buy 3 cards for the same money a European pays for 2. Only $150 more to pay. €1300 = $1420. Or one TITAN X free for every 2.5 bought.




I made a post earlier about how new-gen games will use up to 6-8GB of memory; The Witcher, for example, might fully utilize 6GB. If AMD's 390 is $100 less than an Nvidia GPU that has less memory and is also not as fast in processing as the AMD 390, then after almost 20 years I will be going back to an ATI/AMD GPU. The only thing I'd have to worry about from AMD is how they update their drivers nowadays.

Only idiots would pay $100 more for an inferior card just because it's Nvidia. On the other hand, if Nvidia came out with a GPU, say a 980 Ti with 8GB instead of 6 and as much processing power as ATI/AMD will supposedly have, then that might be different, because I like EVGA's customer service, which no other GPU seller and maker matches.

Oh, and thanks, Drazhar, for the reply.
post edited by warrior10 - 2015/03/31 08:40:33

My new 
https://www.gigabyte.com/...70-Gaming-K7-rev-10#kf
HAF-X Coolermaster: 4 fans
New Rig: Ryzen 1700 CPU, H100i Arctic liquid cooled
16GB Corsair Vengeance DDR4-2400
1080 Ti Hybrid
Blu-ray drive, SCSI
CD-RW drive, SCSI
WD Black Caviar: 2x 1TB and 1x 500GB
WD Blue 1TB, 1 SanDisk SSD 240GB for OS
Corsair PSU 1020
SWTOR mouse and headset
Razer CHROMA keyboard
Dell 1440p G-Sync monitor @ 144Hz to 160Hz
#31
warrior10
FTW Member
  • Total Posts : 1039
  • Reward points : 0
  • Joined: 2011/03/12 02:29:09
  • Status: offline
  • Ribbons : 0
Re: PC World tests DX12 with Futuremark 2015/03/31 08:37:34 (permalink)
lehpron
rjohnson11
 
In other words (if I interpret this correctly) how many times have we posted in the forums that a weak CPU would cause drop in framerates even with a strong GPU? DX12 is engineered to reverse that psychology. 
In part, sure, but many of us compensate by overclocking the CPU to make the weak strong, and a better overclocker was always recommended.  The true litmus test of whether DX12 works is whether anyone can tell the difference between an Intel and an AMD CPU while playing certain games; historically, a GPU-bound game doesn't care while a CPU-bound game does.

I don't know the statistics, but I figure those playing MMOs justify their CPU choices because MMOs are more CPU-bound, even with the details turned up.  The CPU has to communicate with the ethernet controller and handle all the traffic, beyond just running the game and telling the graphics card what to draw.  But most reviews never cover multiplayer gaming benchmarks, because there isn't a single test that represents all scenarios with different players.  So in effect, any game benchmark is biased towards single-player, and that makes them more GPU-bound.

We must beware of DX12 bias: reviewers won't test CPU-bound scenarios, in order to inflate GPU-bound scores.  That being said, I'm curious what effect DX12 has on multiplayer; I honestly don't know.  Ultimately, multiplayer is the domain of consoles.




And that's the stupidity of Microsoft and other companies, because probably 30 million or more people globally play MMOs like me. That's just my estimate, and the real number is probably much higher, especially when it comes to Asian players. I play SWTOR and ESO, and would like to try more games just for the heck of it.
 
MMO players do need to be represented in these tests.
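The CPU-bound vs GPU-bound distinction quoted above can be sketched with a toy model (all numbers here are hypothetical, not benchmark data): a frame takes roughly the longer of the CPU's draw-call submission time and the GPU's render time, so a DX12-style cut in per-call CPU cost only raises the frame rate while the CPU side is the bottleneck.

```python
def frame_rate(draw_calls, cpu_us_per_call, gpu_ms):
    """FPS when a frame costs max(CPU submission time, GPU render time)."""
    cpu_ms = draw_calls * cpu_us_per_call / 1000.0  # CPU-side submission cost
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical 4 us/call (DX11-style) vs 1 us/call (DX12-style) submission,
# with a fixed 10 ms of GPU work per frame:
print(frame_rate(10_000, 4.0, 10.0))  # CPU-bound: 40 ms CPU vs 10 ms GPU -> 25.0 FPS
print(frame_rate(10_000, 1.0, 10.0))  # GPU-bound: both 10 ms -> 100.0 FPS
```

In an MMO, extra CPU work (network traffic, many visible players) raises the CPU side of that max(), which is why a CPU-bound multiplayer scene can behave very differently from a single-player benchmark.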

#32
lehpron
Regular Guy
  • Total Posts : 16254
  • Reward points : 0
  • Joined: 2006/05/18 15:22:06
  • Status: offline
  • Ribbons : 191
Re: PC World tests DX12 with Futuremark 2015/03/31 13:45:31 (permalink)
Drazhar
At least they did a test where they disabled several cores and turned off HT and progressively expanded to show the influence of more cores?
Yup, the Star Swarm preview by AnandTech used a 6-core Ivy Bridge-E to simulate a quad-core and a dual-core.  Their results with a single GPU showed no benefit from enabling the 6th core, the dual-core was within 15% of quad-core performance, and some results didn't even need a quad-core.  

Though keep in mind, on an Intel processor disabling cores doesn't disable the shared L3 cache; so even with one core active, it has access to the entire L3 memory bank, giving it an unfair advantage over a lesser-cached Core i3 or i5.  AnandTech would have to rerun the preview with a real i3, i5, and i7 to separate out those discrepancies.  

In the Futuremark DX12 test, the AMD 8-core shows scaling that isn't present in the AnandTech preview's simulated Intel multi-core test.  I admit I don't know whether the cache is disabled along with the cores on an AMD processor; if it is, the scaling shown makes sense.  

Considering nVidia's intention with Pascal to go up to 8-way SLI with cards carrying up to 32GB each, it makes you wonder what multi-core CPU is needed for anything less, let alone the available system RAM.
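The scaling pattern described above (dual-core within 15% of quad, nothing from a 6th core) is roughly what Amdahl's law predicts when part of each frame can't be parallelized. A sketch with a made-up 60% serial fraction, purely for illustration:

```python
def amdahl_speedup(cores, serial_fraction):
    """Speedup over one core when serial_fraction of the work stays serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With a hypothetical 60% serial fraction, the second core buys most of the
# available gain and cores beyond four add almost nothing:
for n in (1, 2, 4, 6):
    print(n, round(amdahl_speedup(n, 0.6), 2))
```

If DX12 really spreads draw-call submission across cores, it effectively shrinks the serial fraction, which is why the AMD 8-core result could scale where the simulated Intel test didn't.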

For Intel processors, 0.122 x TDP = Continuous Amps at 12v [source].  

Introduction to Thermoelectric Cooling
#33
-ReZ-
New Member
  • Total Posts : 5
  • Reward points : 0
  • Joined: 2010/11/04 11:07:30
  • Status: offline
  • Ribbons : 0
Re: PC World tests DX12 with Futuremark 2015/03/31 15:14:16 (permalink)
Sorry for the stupid question, but how do those results compare? Are more draw calls/sec good or bad?
I'm just confused because my "old" PC does more draw calls/sec than stronger new ones.
 
NVIDIA GeForce GTX 670 (1x) and Intel Core i5-760 Processor
DX11 Multi-threaded draw calls per second: 2 400 900
DX11 Single-threaded draw calls per second: 1 778 152
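For what it's worth: more draw calls per second is better for that API path, but the numbers only compare at identical test settings. From the GTX 670 figures above, the gain from DX11 multi-threading works out to about 1.35x:

```python
# Draw-call figures quoted from the result above.
mt = 2_400_900  # DX11 multi-threaded draw calls/sec
st = 1_778_152  # DX11 single-threaded draw calls/sec
print(round(mt / st, 2))  # -> 1.35
```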



#34
warrior10
FTW Member
  • Total Posts : 1039
  • Reward points : 0
  • Joined: 2011/03/12 02:29:09
  • Status: offline
  • Ribbons : 0
Re: PC World tests DX12 with Futuremark 2015/04/01 10:38:18 (permalink)
lehpron
Drazhar
At least they did a test where they disabled several cores and turned off HT and progressively expanded to show the influence of more cores?
Yup, the Star Swarm preview by AnandTech used a 6-core Ivy Bridge-E to simulate a quad-core and a dual-core.  Their results with a single GPU showed no benefit from enabling the 6th core, the dual-core was within 15% of quad-core performance, and some results didn't even need a quad-core.  

Though keep in mind, on an Intel processor disabling cores doesn't disable the shared L3 cache; so even with one core active, it has access to the entire L3 memory bank, giving it an unfair advantage over a lesser-cached Core i3 or i5.  AnandTech would have to rerun the preview with a real i3, i5, and i7 to separate out those discrepancies.  

In the Futuremark DX12 test, the AMD 8-core shows scaling that isn't present in the AnandTech preview's simulated Intel multi-core test.  I admit I don't know whether the cache is disabled along with the cores on an AMD processor; if it is, the scaling shown makes sense.  

Considering nVidia's intention with Pascal to go up to 8-way SLI with cards carrying up to 32GB each, it makes you wonder what multi-core CPU is needed for anything less, let alone the available system RAM.




That's crazy though; I don't know of any upcoming motherboard that could have more than 4-way SLI. How is a motherboard manufacturer going to make an 8-way SLI board that fits in any ATX case, with each GPU carrying 32GB? I can see 4-way, but I don't see any motherboard maker making room for 8-way SLI/CrossFire. Plus all ATX cases would have to be totally redesigned. Just me.
 
post edited by warrior10 - 2015/04/01 10:41:54

#35
ManBearPig
CLASSIFIED ULTRA Member
  • Total Posts : 6130
  • Reward points : 0
  • Joined: 2007/10/31 12:02:13
  • Location: Imaginationland
  • Status: offline
  • Ribbons : 20
Re: PC World tests DX12 with Futuremark 2015/04/01 10:44:42 (permalink)
If the cards are water cooled and single-slot, then it is doable.


 
#36
lehpron
Regular Guy
  • Total Posts : 16254
  • Reward points : 0
  • Joined: 2006/05/18 15:22:06
  • Status: offline
  • Ribbons : 191
Re: PC World tests DX12 with Futuremark 2015/04/01 13:15:54 (permalink)
warrior10
 
That's crazy though; I don't know of any upcoming motherboard that could have more than 4-way SLI.
That's what bugs me.  As I mentioned in the other thread about 8-way scaling: in order to keep the secret so well, they must be making the board in-house with no other companies involved (much like their Shield products, which no one knew about until debut because they developed them themselves).  This means everything on the motherboard is proprietary, and it is possible that it won't use an AMD or Intel CPU, but rather an nVidia one.  Otherwise the claimed bandwidth of NVLink that Pascal uses, being 5-12 times faster than PCIe 3.0, would bottleneck in an AMD or Intel system still reliant on PCI Express.


I acknowledge that 8-way scaling is still possible with four dual-GPU cards, but I think true 8-way scaling will be reserved for Teslas or Quadros on mainboards with more than 256GB of DDR3/4.



#37
seth89
CLASSIFIED ULTRA Member
  • Total Posts : 5290
  • Reward points : 0
  • Joined: 2007/11/13 11:26:18
  • Status: offline
  • Ribbons : 14
Re: PC World tests DX12 with Futuremark 2015/04/02 15:26:10 (permalink)
One card at default test settings (not 1080p):
http://www.3dmark.com/aot/13024
AMD Radeon R9 290 (1x) and Intel Core i7-3820 Processor
DX11 Multi-threaded draw calls per second: 744 324
DX11 Single-threaded draw calls per second: 815 372
DX12 draw calls per second: 0
Mantle draw calls per second: 7 609 667
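A quick ratio on the figures above (the DX12 result of 0 just means that run couldn't execute the DX12 test): Mantle's lower submission overhead lets the same system push roughly nine times the draw calls of its faster DX11 path.

```python
mantle = 7_609_667   # Mantle draw calls/sec from the result above
dx11_st = 815_372    # single-threaded DX11, the faster DX11 path here
print(round(mantle / dx11_st, 1))  # -> 9.3
```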


#38