  • jardows2 - Monday, October 26, 2015 - link

    Time to go Team Orange!
  • hans_ober - Monday, October 26, 2015 - link

    That moment when you unintentionally perform better together with your competition compared to your own homies.
  • medi03 - Monday, October 26, 2015 - link

    I must have missed why they didn't compare against SLI/Crossfire for the older cards.
  • Ryan Smith - Monday, October 26, 2015 - link

    To be clear, that would require Ashes to support implicit multi-adapter, which it does not.
  • wishgranter - Monday, October 26, 2015 - link

    And how does it scale with 3-4+ cards in a system? Or is it limited to a dual-card config right now?
  • willis936 - Monday, October 26, 2015 - link

    I think you mean team yellow if it's additive or team brown if it's subtractive.
  • rituraj - Tuesday, October 27, 2015 - link

    +1
  • pogostick - Tuesday, October 27, 2015 - link

    So, team spotted banana then.
  • BurntMyBacon - Tuesday, October 27, 2015 - link

    Clever analogy, but this worked way too well to be associated with an overripe banana. How about yellow banana for the ATi + nVidia combo (since it seems to be a good amount of ripe for both setups) and brown banana for the nVidia + ATi combo (since its performance was a little rotten for the older card setup).

    On a more serious note, I wonder what the results would be if you used a less powerful ATi card and a more powerful nVidia card for the older setup. Maybe an HD7950 + GTX780 and vice versa.
  • pogostick - Tuesday, October 27, 2015 - link

    Hey, it was still better than team turds with corn.
  • Scootiep7 - Thursday, October 29, 2015 - link

    Actually, shouldn't that be switched? Brown would be the additive tertiary color wheel result, and yellow would be the subtractive tertiary color wheel result last I checked.
  • xenol - Monday, October 26, 2015 - link

    Now we can finally put this AMD vs. NVIDIA war to rest. Get both cards to get exclusive technologies, then when the game supports it, get the best horsepower of both.
  • Tunnah - Monday, October 26, 2015 - link

    We need to see the SLI/Xfire numbers first. It's all kinda pointless if 2 980Tis trounce em all
  • silverblue - Monday, October 26, 2015 - link

    I doubt two 980 TIs would make much of a difference over a 980 TI and a Titan X.
  • Refuge - Monday, October 26, 2015 - link

    I think you are crazy if you are going to use any of these numbers for any argument or buying decision.

    While interesting information to digest, it is merely that; this isn't Beta, this isn't even Alpha, this is literally engineers saying "Hey guys check this shit out!"

    I love what I'm seeing so far though, this is exciting, but the thought of EA, Bethesda, and Ubisoft being in control of so much does scare me...

    EA = Pure evil

    Bethesda = Trees sticking through walls, Dragons having seizures in the air, and Chickens reporting your crimes to the local Police.

    And Ubisoft? Well... I haven't given them much credit since the mid to late 90's...
  • naretla - Tuesday, October 27, 2015 - link

    EA hasn't been the worst publisher for some time now. You could call them greedy, but at least they're relatively competent.
  • Refuge - Tuesday, October 27, 2015 - link

    SimCity and Dragon Age: Inquisition beg to differ.
  • naretla - Tuesday, October 27, 2015 - link

    Yeah, they went too far with SimCity, but it's been over two and a half years since then. Please elaborate re:DAI.
  • Refuge - Tuesday, October 27, 2015 - link

    I pre-ordered that game, against my better judgment I admit, and I wasn't able to play it for months because of fucking bugs. I won't pre-order anything from anyone until I see a change in the way these companies do business.

    I also agree that Simcity was a long time ago, but I've also not forgotten. I won't expect them to do better until I see them doing better.

    It isn't just EA though; I won't pre-order Fallout 4 either, for the very same reasons. Different publisher/developer, but the industry has still left me jaded.

    It is insulting to me when a publisher thinks that I'm stupid enough to be ok with paying full price for a game, only to get a beta build.

    It's antics like that that give me pause about celebrating all the new fine-grained controls offered by DX12.

    The potential is huge. The power and life extracted from the Xbox 360 and PS3 were impressive towards the end of their life cycles, and I would love to see what they could have done back then had they had the control they are being offered now. I also agree that the devs are the best ones to make the most of this technology.

    But devs (unless indie) are under the thumb of publishers like EA, and their deadlines and budgets. It is this that gives me a lot of fear about the half-baked disasters that could be headed this way.
  • SunnyNW - Tuesday, October 27, 2015 - link

    I would think that the GPU vendors would help game developers with engineering resources. Probably not with this particular setup (EMA), but with, say, split-frame rendering with two identical GPUs.
  • SuperVeloce - Sunday, December 6, 2015 - link

    The X360 and PS3 were already much more "to the metal" than we were used to from DX9. In this respect those consoles have more in common with DX12 than with any older DX version.
  • Creig - Tuesday, October 27, 2015 - link

    How long before Nvidia sabotages this the way they stop PhysX from working if an AMD card is detected in your system?
  • 0ldman79 - Monday, November 2, 2015 - link

    I think Nvidia has already disabled Physx from working if AMD is detected.

    I couldn't run PhysX with my Radeon 5750 and any GeForce card as a PhysX co-processor. There was a massive workaround that I got going once for about half an hour, then I rebooted and couldn't get it working again.

    I spent around 4 hours of my life trying to get paper to flutter around more realistically in Batman only to have it fail on reboot.

    Nvidia really has the wrong idea in this situation. It worked just fine in the past.
  • Samus - Monday, October 26, 2015 - link

    You know...nothing rhymes with orange ;)
  • rituraj - Tuesday, October 27, 2015 - link

    flange?
  • AndrewJacksonZA - Tuesday, October 27, 2015 - link

    Please suck on a lozenge while rhyming with orange.
  • uglyduckling81 - Sunday, November 1, 2015 - link

    Nvidia are not going to like this. They will patch their drivers to make sure this no longer works.
  • lilmoe - Monday, October 26, 2015 - link

    What about Intel's iGPU?
  • Ryan Smith - Monday, October 26, 2015 - link

    Since this early release is limited to basic AFR, there's little sense in testing an iGPU. It may be able to contribute in the future with another rendering mode, but right now it's not nearly fast enough to be used effectively.
  • DanNeely - Monday, October 26, 2015 - link

    What about pairing the IGP with a low end discrete part where the performance gap is much smaller? I'm thinking about the surface book where there's only a 2:1 gap between the two GPUs; but any laptop with a 920-940M part would have a similar potential gain.

    On the desktop side, AMD's allowed heterogeneous xFire with their IGP and low end discrete cards for a few years. How well that setup works with DX12 would be another interesting test point.
  • JamesDax3 - Monday, October 26, 2015 - link

    Agreed. Let's see the test.
  • [-Stash-] - Monday, October 26, 2015 - link

    Very interesting first data.

    Also, where's the 980Ti SLI, Titan X SLI, Fury X Crossfire, Fury Crossfire comparison data? I'd like to see how this compares to the currently existing technology.
  • extide - Monday, October 26, 2015 - link

    I don't think current SLI/CF actually works with DX12 -- although I could be wrong.
  • Ryan Smith - Monday, October 26, 2015 - link

    Not without a bit more work. That would be the implicit multi-adapter option.
  • Manch - Monday, October 26, 2015 - link

    Does the i7 Surface Book have the Iris 540 iGPU like the Surface Pro? If so, how does it compare to the dGPU in the Book? If they're similar in performance, would AFR work on that? I heard it was supposed to be pretty good.
  • DanNeely - Monday, October 26, 2015 - link

    No. The Surface Book uses an i7-6600U with HD 520 graphics that has a minimum 2.6GHz CPU clock and can turbo to 3.4GHz. The Surface Pro 4 uses an i7-6650U that has Iris 540 graphics but only guarantees 2.2GHz, although it can still turbo to 3.4GHz.
  • Manch - Monday, October 26, 2015 - link

    Oh, that's too bad. I would like to see an attempt to AFR that iGPU and dGPU. Even a modest bump would be better than nothing. Either way, I just want to see what it does.
  • Manch - Tuesday, October 27, 2015 - link

    Why the distinction between the two with regard to the ability to turbo? Does the Surface Pro dissipate heat poorly compared to the Book? For both, the guts are all crammed into the same locations.
  • DanNeely - Tuesday, October 27, 2015 - link

    They can both turbo to 3.4GHz. However, the CPU in the Surface Pro has 2x as many GPU cores; when the iGPU is running at full power there's less headroom for the CPU, which is why Intel set the minimum guaranteed speed 400MHz lower.
  • medi03 - Monday, October 26, 2015 - link

    It's mentioned in the article:

    "As a result NVIDIA only allows identical cards to be paired up in SLI, and AMD only allows a slightly wider variance (typically cards using the same GPU)."

    Although it sounds misleading to me.
  • Grimsace - Tuesday, October 27, 2015 - link

    Nvidia and AMD both have a history of re-branding cards with the same GPUs. Nvidia SLI requires that the cards are exactly the same model (i.e. a GTX 760 with another GTX 760), while AMD still allows you to CrossFire cards as long as they share the same basic architecture (i.e. a Radeon 7870 and an R9 280X).
  • Ryan Smith - Monday, October 26, 2015 - link

    "What about pairing the IGP with a low end discrete part where the performance gap is much smaller? "

    Right now it would still be too slow. Oxide is only ready to test Ashes on high-end hardware at this point.

    "This pre-beta requires very high-end hardware and should only be joined by people with substantial technical expertise and experience in real-time strategy games.

    These builds are buggy, gameplay is very incomplete and it'll probably kill your pets."

    Which is not to say that I'm not curious as well. But it's one of those matters where Ashes needs some more development work before they're ready to show off any kind of fusion involving an iGPU.
  • JamesDax3 - Monday, October 26, 2015 - link

    Intel iGPUs may not be up to par, but AMD APUs should be. Would love to see this done with an A10-7870K paired with an R7 360/370 or GTX 950/960.
  • naretla - Tuesday, October 27, 2015 - link

    Intel Iris Pro actually outperforms AMD APUs: http://www.tomshardware.com/reviews/intel-core-i7-...
  • silverblue - Tuesday, October 27, 2015 - link

    ...for double the price. Will Intel's parts also benefit from DX12?
  • nathanddrews - Tuesday, October 27, 2015 - link

    It always costs more to get better performance. Why would that suddenly change in the case of Iris Pro vs APU? If you recall, Intel has been showing DX12 demos on Haswell, Broadwell, and Skylake for some time now. Skylake has been confirmed to support feature level 12_1.
  • silverblue - Tuesday, October 27, 2015 - link

    That doesn't necessarily mean it'll perform better in DX12 than in DX11; ask NVIDIA. Then again, NVIDIA's DX11 performance is so good that it's little surprise they're not benefitting from DX12.

    It does cost more to get better performance, you're right; however, until Broadwell, Intel hadn't provided something to challenge AMD on the desktop. Intel's CPUs generally did cost more regardless of the strength of their IGP.
  • nathanddrews - Wednesday, October 28, 2015 - link

    I'm not sure I follow your post. Intel is more expensive than AMD because they get better CPU performance AND better IGP performance (Iris Pro only). They have also shown - in demos and game engines - that DX12 performance is better than DX11 performance.

    Not sure what NVIDIA has to do with this...
  • looncraz - Wednesday, October 28, 2015 - link

    It better be, for twice the money!

    AMD could easily build an APU twice as fast, but memory bandwidth is a real issue.

    We will see what they have up their sleeves in the coming year...
  • patrickjp93 - Wednesday, October 28, 2015 - link

    How can bandwidth be the issue when Intel gets around it so easily? Even without eDRAM HD 6000 smacks Kaveri upside the head. Maybe Intel's just better at the integrated game...
  • mosu - Thursday, October 29, 2015 - link

    Did you ever own or touch an Iris HD 6000? Or at least know someone who did?
  • wiak - Friday, October 30, 2015 - link

    eDRAM...
    if AMD goes HBM2 like they did in the past with DDR3 sideport memory.

    Just a thought:
    an AMD Zen 4-8 core with Radeon graphics (2048+ shaders, 2 or 4GB of HBM2, either as a slot on the motherboard or on-die like Fury).

    I think I read somewhere there will be a single socket for APUs and CPUs,
    so AMD's lineup could be a Zen CPU with 8-16 cores for performance systems and a Zen APU with 4-8 cores, 2048+ shaders and HBM2 for mainstream/laptop computers.
  • Michael Bay - Thursday, October 29, 2015 - link

    If it actually could, we would be able to buy it. No such luck.
  • Revdarian - Thursday, October 29, 2015 - link

    Well, it currently has two offerings: one is called the Xbox One, and the other, more powerful one is called the PlayStation 4.

    Those are technically APUs, developed by AMD, and they can be bought right now. Just saying, it is possible.
  • Midwayman - Monday, October 26, 2015 - link

    Seems like it would be great to do post effects and free up the main gpu to work on rendering.
  • Alexvrb - Monday, October 26, 2015 - link

    Agreed, as far as dGPU and iGPU cooperation goes I think Epic is on to something there. Free 10% performance boost? Why not. As for dGPU + dGPU modes, I am not sold on the idea of unlinked mode. Seems like developers would have their work cut out for them with all the different possible configurations. Linked mode makes the most sense to me for consistency and relative ease of implementation. Plus, anyone using multiple GPUs is already used to using a pair of the same GPUs.

    Regardless of whether they go linked or unlinked though... I'd really like them to do something other than AFR. Split-frame, tile-based, something, anything. Blech.
  • Refuge - Monday, October 26, 2015 - link

    For high-end AAA titles, linked mode would be optimal, I agree. It allows for their fast releases, and still gives a great performance boost. Their target demographic is already used to having to jump through hoops to get the results they want; getting identical GPUs won't affect them.

    For games with extended lifetimes, like MMOs such as WoW, SWTOR, etc., unlinked mode is worth the investment, as it allows your game to hit a MUCH wider customer base with increased graphical performance. These are crowds that are easy to poll for data, so the developers would easily know who they are directing their efforts towards, and the lifespan of the game makes the extra man-hours a worthy investment.
  • Gadgety - Tuesday, October 27, 2015 - link

    @alexvrb And game testers have their work cut out for them as well, testing all sorts of hardware configurations.

    In addition, game developers will likely see the need for new skill sets, and this will likely benefit larger outfits that are able to cope with developing and tuning their games for various hardware combinations.
  • DanNeely - Tuesday, October 27, 2015 - link

    I suspect most small devs will continue to use their engine in the normal way, not taking advantage of most of the DX12 multi-GPU features any more than they did SLI/XFire in DX11 or prior. The only exception I see might be offloading post-processing to the IGP. That looks like a much simpler split to implement, and might be something they could get for free from the next version of their engine.
  • nightbringer57 - Monday, October 26, 2015 - link

    Wow. I didn't expect this to work this well.

    Just out of curiosity... Could you get a few more data points to show how a Titan X + Fury X/Fury X + Titan X would fare?
  • ImSpartacus - Monday, October 26, 2015 - link

    Yeah, I expected it to be pretty bad in the first year or two. Really fascinating results.
  • shing3232 - Monday, October 26, 2015 - link

    Yep. I wonder how an Intel GPU would help, for that matter.
  • Demon-Xanth - Monday, October 26, 2015 - link

    It's likely a deal like this hypothetical situation:
    You have four pizza cooks: Al, Arnie, Nate, and Nick.
    They all take 10 minutes to prep a pizza.
    Al and Arnie take 3 minutes to spread the dough, and 7 to put on the toppings.
    Nate and Nick take 7 minutes to spread the dough, but only 3 to put on the toppings.

    Al and Arnie together can crank a pizza out every 5 minutes; Nate and Nick can do the same. But Al and Nick (in that order) can put them out in 3, while Nate and Artie take 7.

    So the order of stacking becomes important.
  • Torgog - Monday, October 26, 2015 - link

    Now I'm hungry.
  • Manch - Monday, October 26, 2015 - link

    If Nate and Artie are making pizzas, what the hell happened to Arnie?! :D
  • Mr Perfect - Monday, October 26, 2015 - link

    He ran to da choppa.

    Okay, that was bad...
  • AndrewJacksonZA - Tuesday, October 27, 2015 - link

    Lol, nice one! :-)
  • yzzir - Monday, October 26, 2015 - link

    Based on your assumptions for Al and Nick, and Nate and Arnie:
    Al takes 3 minutes to spread dough
    Arnie takes 7 minutes to put on toppings
    Nate takes 7 minutes to spread dough
    Nick takes 3 minutes to put on toppings

    It takes 7 minutes (after the initial 10 minutes for the first) for Al and Arnie to crank out a pizza, assuming Al continues spreading dough for the next pizza as soon as he finishes spreading dough for the previous pizza, so 3 minutes of work done by Al are hidden by Arnie's 7 minutes of work.

    It also takes 7 minutes (after the initial 10 minutes for the first) for Nate and Nick to crank out a pizza, assuming Nate continues spreading dough for the next pizza as soon as he finishes spreading dough for the previous pizza; 4 minutes of idle time are added to Nick's 3 minutes of work while he waits on Nate to finish spreading dough.
  • Gigaplex - Monday, October 26, 2015 - link

    I think the 5-minute figure comes from each person taking 10 minutes to do a complete pizza; since they're getting 2 pizzas per 10 minutes, that's an average of 1 per 5 minutes.
  • Ryan Smith - Monday, October 26, 2015 - link

    I love that analogy. However it only works if you use a more involved rendering mode than AFR, where the work within a single frame gets split up among multiple GPUs. With AFR it's more like each cook has to complete pizzas on their own, without any help from the others.
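
    For anyone who wants to play with the arithmetic, here is a minimal sketch of the two models being discussed, using the hypothetical 3/7-minute splits from the analogy (the timings are illustrative, not measurements):

    ```python
    # Pizza-shop model of multi-GPU work sharing.
    # "Pipelined" = one cook spreads dough while the other tops the previous
    # pizza (work within a "frame" is split); "AFR" = each cook builds whole
    # pizzas on their own, which is the mode Ashes currently uses.

    def pipelined_rate(dough_min, toppings_min):
        # Steady-state output is limited by the slower of the two stages.
        return 1 / max(dough_min, toppings_min)   # pizzas per minute

    def afr_rate(*whole_pizza_minutes):
        # Each cook works independently; their rates simply add up.
        return sum(1 / t for t in whole_pizza_minutes)

    pairs = {
        "Al + Arnie, pipelined (3 dough, 7 toppings)": pipelined_rate(3, 7),
        "Nate + Nick, pipelined (7 dough, 3 toppings)": pipelined_rate(7, 3),
        "Al + Nick, pipelined (3 dough, 3 toppings)": pipelined_rate(3, 3),
        "Nate + Arnie, pipelined (7 dough, 7 toppings)": pipelined_rate(7, 7),
        "Any two cooks, AFR-style (10 min each)": afr_rate(10, 10),
    }

    for name, rate in pairs.items():
        print(f"{name}: one pizza every {1 / rate:.0f} minutes")
    ```

    With these numbers the complementary pairing (3-minute dough with 3-minute toppings) wins at one pizza every 3 minutes, while the matched pairs settle at 5 or 7 minutes depending on the model; as noted above, though, Ashes' AFR mode corresponds to the last line, where mixing cooks buys you nothing by itself.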
  • jimjamjamie - Tuesday, October 27, 2015 - link

    [pizza-making intensifies]
  • geniekid - Monday, October 26, 2015 - link

    On one hand the idea of unlinked EMA is awesome. On the other hand, I have to believe 95% of developers will shy away from implementing anything other than AFR in their game due to the sheer amount of effort the complexity would add to their QA/debugging process. If Epic manages to pull off their post-processing offloading I would be very impressed.
  • DanNeely - Monday, October 26, 2015 - link

    I'd guess it'd be the other way around. SLI/XFire AFR is complicated enough that it's normally only done for big budget AAA games. Other than replacing two vendor APIs with a single OS API DX12 doesn't seem to offer a whole lot of help there; so I don't expect to see a lot change.

    Handing off the tail end of every frame seems simpler; especially since the frame pacing difficulties that make AFR so hard and require a large amount of per game work won't be a factor. This sounds like something that could be baked into the engines themselves, and that shouldn't require a lot of extra work on the game devs part. Even if it ends up only being a modest gain for those of us with mid/high end GPUs; it seems like it could end up being an almost free gift.
  • nightbringer57 - Monday, October 26, 2015 - link

    That's only half relevant.
    I wonder how much can be implemented at the engine level. This kind of thing may be at least partially transparent to devs if, say, Unreal Engine and Unity get compatibility for it... I don't know how much it can do, though.
  • andrewaggb - Monday, October 26, 2015 - link

    Agreed. I would hope that if Unreal Engine, Unity, Frostbite etc. support it, then maybe 50% or more of new games will support it.

    We'll have to see, though. The idea of having both an AMD and an Nvidia card in the same machine is both appealing and terrifying. Occasionally games work better on one than the other, so you might avoid some pain sometimes, but I'm sure you'd get a whole new set of problems sometimes as well.

    I think making use of the iGPU and discrete cards is probably the better scenario to optimize for. (Like Epic is apparently doing)
  • Gigaplex - Monday, October 26, 2015 - link

    Problems such as NVIDIA intentionally disabling PhysX when an AMD GPU is detected in the system, even if it's not actively being used.
  • Friendly0Fire - Monday, October 26, 2015 - link

    It really depends on a lot of factors I think, namely how complex the API ends up being.

    For instance, I could really see shadow rendering being offloaded to one GPU. There's minimal crosstalk between the two GPUs, the shadow renderer only needs geometry and camera information (quick to transfer/update) and only outputs a single frame buffer (also very quick to transfer), yet the process of shadow rendering is slow and complex and requires extremely high bandwidth internally, so it'd be a great candidate for splitting off.

    Then you can also split off the post-processing to the iGPU and you've suddenly shaved maybe 6-8ms off your frame time.
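
    As a rough back-of-the-envelope illustration of what that kind of split could buy, here is a sketch; every millisecond figure in it is an invented assumption, not a measurement from Ashes or any real GPU:

    ```python
    # Hypothetical frame-time budget for offloading shadows + post-processing
    # to a second GPU. All numbers are made up for illustration.

    TRANSFER_MS = 1.0   # assumed cost of shipping buffers between the GPUs

    def single_gpu(shadow_ms, shading_ms, post_ms):
        # One GPU does everything serially within the frame.
        return shadow_ms + shading_ms + post_ms

    def offloaded(shadow_ms, shading_ms, post_ms):
        # Shadows and post move to the secondary GPU and are pipelined against
        # the next frame, so throughput is set by whichever GPU is busier.
        primary = shading_ms + TRANSFER_MS
        secondary = shadow_ms + post_ms + TRANSFER_MS
        return max(primary, secondary)

    base = single_gpu(3.0, 10.0, 4.0)
    split = offloaded(3.0, 10.0, 4.0)
    print(f"single GPU: ~{base:.0f} ms/frame, offloaded: ~{split:.0f} ms/frame")
    # -> ~17 ms vs ~11 ms with these invented numbers, i.e. the same ballpark
    #    as the 6-8 ms saving mentioned above, at the price of roughly one
    #    extra frame of latency from the pipelining.
    ```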
  • Oogle - Monday, October 26, 2015 - link

    Yikes. Just one more combinatorial factor to add when doing benchmarks. More choice is great for us consumers, but reviews and comparisons are going to start looking more complicated. I'll be interested to see how you guys make recommendations when it comes to multi-GPU setups.
  • tipoo - Monday, October 26, 2015 - link

    Wow, seems like a bigger boost than I had anticipated. Will be nice to see all that unused silicon (in dGPU environments) getting used.
  • gamerk2 - Monday, October 26, 2015 - link

    "As this test is a smaller number of combinations it’s not clear where the bottlenecks are, but it’s none the less very interesting how we get such widely different results depending on which card is in the lead. In the GTX 680 + HD 7970 setup, either the GTX 680 is a bad leader or the HD 7970 is a bad follower, and this leads to this setup spinning its proverbial wheels. Otherwise letting the HD 7970 lead and GTX 680 follow sees a bigger performance gain than we would have expected for a moderately unbalanced setup with a pair of cards that were never known for their efficient PCIe data transfers. So long as you let the HD 7970 lead, at least in this case you could absolutely get away with a mixed GPU pairing of older GPUs."


    Drivers. Pretty much that simple. Odds are, the NVIDIA drivers are treating the HD 7970 the same way they treat the GTX 680, which will result in performance problems. AMD and NVIDIA use very different GPU architectures, and you're seeing it here. NVIDIA is probably attempting to utilize the 7970 in a way it just can't handle.

    I'd be very interested to see something like 680/Titan, or some form of lower/newer setup, which is what most people would actually use this for (GPU upgrade).
  • medi03 - Monday, October 26, 2015 - link

    Uhm, and what about 7970 simply being the faster card?
  • Gigaplex - Monday, October 26, 2015 - link

    That would explain why 7970+680 is faster than 680+7970, but not why 680 is faster than 680+7970.
  • prtskg - Wednesday, October 28, 2015 - link

    When the 680 is in the lead, it not only has to render frames but also assign work to the 7970 and receive completed frames from it. For it to be slower than the 680 alone means assigning and receiving work is very slow on it.
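
    A crude model of that point, with invented numbers rather than anything measured from the article, looks something like this:

    ```python
    # Why a slow "leader" can drag an AFR pair below a single card: the lead
    # GPU renders its own frames AND spends time receiving/queuing the
    # follower's frames. All timings below are invented for illustration.

    def single_fps(render_ms):
        return 1000.0 / render_ms

    def afr_pair_fps(lead_render_ms, follow_render_ms, lead_overhead_ms):
        # Per two-frame cycle the leader renders one frame and also pays
        # lead_overhead_ms to handle the follower's frame; the follower just
        # renders its own frame. The busier card sets the cycle time.
        lead_busy = lead_render_ms + lead_overhead_ms
        cycle_ms = max(lead_busy, follow_render_ms)
        return 2000.0 / cycle_ms

    print(f"leader alone:               {single_fps(25.0):.0f} fps")
    print(f"pair, cheap frame handoff:  {afr_pair_fps(25.0, 24.0, 2.0):.0f} fps")
    print(f"pair, costly frame handoff: {afr_pair_fps(25.0, 24.0, 30.0):.0f} fps")
    # -> 40, 74 and 36 fps: if receiving and queuing the other card's frames
    #    is expensive enough, the pair really can end up slower than the lead
    #    card on its own.
    ```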
  • Hulk - Monday, October 26, 2015 - link

    Could it be that the AMD/nVidia mixed setup performs better because each card has different strengths and weaknesses and they complement each other, rather than having two cards with the same strengths and weaknesses and therefore more probable bottlenecks?
    Just a thought.
  • DragonJujo - Monday, October 26, 2015 - link

    They didn't include any direct comparisons to matched CrossFireX or SLI, so it would be a bit premature. The idea itself is quite interesting in light of the GameWorks problems that show up on AMD hardware because of the heavy tessellation (which can be limited).
  • Ryan Smith - Monday, October 26, 2015 - link

    As Ashes uses AFR, each card essentially has to stand on its own. Right now they only work together in as much as each gets assigned work, and then the secondary card ships off completed frames to the primary card for display queuing. There's no greater sharing of work; no opportunity for each card to work on what it does best.
  • bug77 - Monday, October 26, 2015 - link

    Technically, this is a major achievement. But in a world where multi-GPU setups are still in the single digits, these setups will be a niche of a niche.
  • Manch - Monday, October 26, 2015 - link

    I think you'll see more setups with mixed cards. If I don't have to toss my old card and can simply buy a new-gen card with similar performance, versus outlaying cash for SLI or CrossFire off the bat, that would be awesome. I want to know if this can handle 4x mixed cards.
  • fingerbob69 - Wednesday, October 28, 2015 - link

    I think DX12 will lead to dual-card setups becoming common, if not the norm.

    I have an R9 280. In the next year or so I'll upgrade to either the next-gen AMD or Nvidia card, with the 280 becoming the secondary to the new card's lead.
  • plopke - Monday, October 26, 2015 - link

    Why three shades of grey on some graphs (no pun intended)? Maybe I didn't understand it, maybe it is my screen, but I can hardly make a distinction between the shades.
  • MobiusPizza - Monday, October 26, 2015 - link

    It's the same shade, that's why you can't distinguish between them. But you are not supposed to distinguish them by shade, but by bar length! They correspond to different graphics quality profiles. The longest is obviously "Normal average", the middle is "Medium average" and the shortest is "Heavy average".
  • jmke - Monday, October 26, 2015 - link

    Ashes of the Singularity = Supreme Commander remake?
    I wonder how it will compare, since SupCom is pretty much the cream of the crop when it comes to large-scale RTS.
  • CaedenV - Monday, October 26, 2015 - link

    This is rather impressive! Was thinking about updating my GPU this winter... but maybe my 'ol 570 can limp along a little longer to see how this technology shapes up. Perhaps getting 2 decent midrange cards from different vendors to get exclusive rendering tech would be a smarter move than getting a single high-end card.
  • Manch - Monday, October 26, 2015 - link

    or competing game bundles!
  • Murloc - Monday, October 26, 2015 - link

    it seems to me that it's a few years off...
  • nos024 - Monday, October 26, 2015 - link

    So...does that mean we can use Freesync or Gsync with this combo?
  • extide - Monday, October 26, 2015 - link

    It would probably depend on which card is the leader -- although there was a checkbox for FreeSync in the screenshot so, maybe the game has to support it? It's not entirely clear at the moment.
  • Ryan Smith - Monday, October 26, 2015 - link

    Yes, Ashes supports Freesync. It should probably support G-Sync as well, but only Freesync has been explicitly mentioned so far (and as you note, it's even a settings option).
  • WaltC - Monday, October 26, 2015 - link

    I should think this will soon filter-down into the engine-development area, where the engine that developers elect to use will support (or not) these nice capabilities. For the odd game that still requires a home-grown engine--yes, the impetus will be up to the developer.
  • Refuge - Tuesday, October 27, 2015 - link

    To be honest, this is exactly what I'm hoping happens.
  • Badelhas - Monday, October 26, 2015 - link

    Great article but... This is all very nice and everything, but what I really miss is a PC game with a graphics breakthrough, like Crysis when it was launched back in 2007. None of the games I've seen in the meantime had that WOW factor. I blame consoles.
  • tipoo - Monday, October 26, 2015 - link

    Star Citizen maybe? Pretty good at bringing top systems to their knees. Though yeah, nothing is the singular leader like Crysis in 2007 was, but that's also a product of every engine getting up to a good level.
  • Nfarce - Monday, October 26, 2015 - link

    My thoughts exactly. The jump from games like HL2 and BF2 to Crysis was like a whole new world. Unfortunately it was such a big hit on performance that only the deepest pockets could afford to play that game at full capability. I wasn't able to do it until 2009 when building a new rig, and even then I wasn't able to get 60fps at 1080p.
  • Marc HFR - Monday, October 26, 2015 - link

    Hello,

    In AFR, isn't the (small) difference in rendering between AMD and NVIDIA annoying?
  • PVG - Monday, October 26, 2015 - link

    Most interesting would be to see results with new+old GPUs. As in, "Should I keep my old card in tandem with my new one?"
  • extide - Monday, October 26, 2015 - link

    With AFR, no. If they do a different type of split where each card gets a different set of work to do, and one card gets more than the other, then yes.
  • Refuge - Tuesday, October 27, 2015 - link

    I was under the impression that it had to be DX12 compatible to work.

    That cuts out 90% of the older GPUs out there.
  • DanNeely - Tuesday, October 27, 2015 - link

    Most of the make it faster stuff in DX12 will work on DX11 capable hardware; the stuff that needs new hardware is relatively incidental. AMD intends to support all GCN cards, NVidia as far back as the 4xx family (excluding any low end rebadges). I'm not sure how far along they are with extending support back that far yet.
  • Refuge - Tuesday, October 27, 2015 - link

    These new multi-GPU modes will require fully DX12-compliant cards though, correct?

    And thank you for the info, I was unaware of how far back support was going. I'm pleasantly surprised :D
  • rascalion - Monday, October 26, 2015 - link

    I'm excited to see how this technology plays out in the laptop space with small dGPU + iGPU working together.
  • TallestJon96 - Monday, October 26, 2015 - link

    Crazy stuff. 50% chance either AMD or more likely NVIDIA locks this out via drivers unfortunately.

    To me, the most logical use of this is to have a strong GPU rendering the scene, and a weak GPU handling post-processing. This way the strong GPU is freed up, and as long as the weak GPU is powerful enough, you do not have any slowdown or micro-stutter; you only get an improvement in performance, and the opportunity to increase the quality of post-processing. This has significantly fewer complications than AFR, is simpler than two cards working on a single frame, and is pretty economical. For example, I could have kept my 750 Ti with my new 970, and had the 750 Ti handle post-processing while the 970 did everything else. No micro-stutter, relatively simple and inexpensive, all while improving performance and post-processing effects.

    Between multi-adapter support, multi-core improvements in DX12, FreeSync and G-Sync, HBM, and possibly X-Point, there is quite a bit going on for PC gaming. All of these new technologies fundamentally improve the user experience and the way we render games. Add in the slow march of Moore's law, an overdue die shrink next year for GPUs, and the abandonment of last-generation consoles, and the next 3-5 years are looking pretty damn good.
  • Refuge - Tuesday, October 27, 2015 - link

    I think that would be the dumbest thing either one of them could do.

    Also, if they locked it out, then their cards would no longer be DX12 compliant. Losing that endorsement would be a devastating blow for even Nvidia.
  • Gigaplex - Tuesday, October 27, 2015 - link

    NVIDIA has a habit of making dumb decisions to intentionally sabotage their own hardware when some competitor kit is detected in the system.
  • tamalero - Monday, October 26, 2015 - link

    Question is, will Nvidia be able to block this feature in their drivers? It's not the first time they've tried to block anything that is not Nvidia (see PhysX, which DOES work fine with AMD + Nvidia combos, but is disabled on purpose).
  • martixy - Monday, October 26, 2015 - link

    What about stacking abstractions? Could you theoretically stack a set of linked-mode for main processing on top of unlinked mode for offloading post to the iGPU?
  • Ryan Smith - Monday, October 26, 2015 - link

    Sure. The unlinked iGPU just shows up as another GPU, separate from the linked adapter.
  • lorribot - Monday, October 26, 2015 - link

    From a continuous-upgrade point of view you could buy a new card, shove it in as the primary, and keep the old card as a secondary. It could make smaller, more frequent upgrade steps a possibility rather than having to buy the one big card.

    Would be interesting to see something like an HD 7850 paired with a GTX 780 or R9 290.
  • boeush - Monday, October 26, 2015 - link

    In addition to post-processing, I wonder what implications/prospects there might be when it comes to offloading physics (PhysX, Havok, etc.) processing onto, say, the iGPU while the dGPU handles pure rendering... Of course that would require a major upgrade to the physics engines to support DX12 and EMA, but then I imagine they should already be well along on that path.
  • Gigaplex - Tuesday, October 27, 2015 - link

    That was already possible with DirectCompute. I don't think many games made much use of it.
  • nathanddrews - Tuesday, October 27, 2015 - link

    This is my fear - that these hyped features will end up not being used AT ALL in the real world. One tech demo proves that you can use different GPUs together... but how many people with multi-GPU setups will honestly choose to buy one of each flagship instead of going full homogeneous SLI or CF?

    It seems to me that the only relevant use case for heterogeneous rendering/compute is to combine an IGP/APU with a dGPU... and so far only AMD has been pushing that feature with their Dual Graphics setup, despite other solutions being available. If it were realistic, I think it would exist all over already.
  • IKeelU - Monday, October 26, 2015 - link

    We've come a hell of a long way since Voodoo SLI.

    Leaving it up to developers is most definitely a good thing, and I'm not just saying that as hindsight on the article. We'll always be better off not depending on a small cadre of developers in Nvidia/AMD's driver departments determining SLI performance optimizations. Based on what I'm reading here, the field should be much more open. I can't wait to see how different dev houses deal with these challenges.
  • lorribot - Monday, October 26, 2015 - link

    Generally speaking, leaving it up to developers is a bad thing; you will end up with lots of fragmentation, patchy/incomplete implementations and a whole new level of instability. That is why DirectX came about in the first place.
    I just hope this doesn't break more than it can fix.
    We need an old-school 50% upgrade to hardware capability to deliver 4K at a reasonable price point, but I don't see that coming any time soon judging by the last 3 or 4 years of small incremental steps.
    All of this is the industry recognising its inability to deliver hardware and wringing every last drop of performance from the existing equipment/nodes/architecture.
  • McDamon - Tuesday, October 27, 2015 - link

    Really? I'm a developer, so I'm biased, but to me, leaving it up to the developer is what drives the innovation in this space. DirectX, much like OpenGL, was conceived to unify APIs and devices (Glide and such). In fact, as is obvious, both APIs have moved away from the fixed-function pipeline to a programmable model to allow for developer flexibility, not hinder it. Sure, there will be challenges for the first few tries with the new model, but that's why companies hire smart people, right?
  • CiccioB - Tuesday, October 27, 2015 - link

    Slow incremental steps during the last 3-4 years?
    You are probably speaking about AMD only, as NVIDIA has made great progress from the GTX 680 to the GTX 980 Ti, both in terms of performance and power consumption. All of this on the same process node.
  • loguerto - Sunday, November 1, 2015 - link

    You are hugely underestimating the GCN architecture. NVIDIA might have had a jump from Kepler to Maxwell in terms of efficiency (in part by cutting down double-precision performance), but still, with the same slightly improved GCN architecture, AMD competes in DX11 and often outperforms Maxwell in the latest DX12 benchmarks. And when I say that, I invite everyone to look at the entire GPU lineup and not only the 980 Ti vs Fury X benchmarks.
  • IKeelU - Tuesday, October 27, 2015 - link

    Your first statement is pretty much entirely wrong: a) we already have fragmentation in the form of different hardware manufacturers and driver streams, b) common solutions will be created in the form of licensed engines, c) the people currently solving these problems *are* developers, they just work for Nvidia and AMD instead of those directly affected by the quality of the end product (game companies).

    Your contention that solutions should be closed off only really works when there's a clearly dominant and common solution to the problem. As we've learned over the last 15 years, there simply isn't. Every game release triggers a barrage of optimizations from the various driver teams. That code is totally out of scope - it should be managed by the concerned game company, not Nvidia/AMD/Intel.
  • callous - Monday, October 26, 2015 - link

    Why not test with an Intel APU + Fury? It's more of a mainstream configuration than 2 video cards.
  • Refuge - Tuesday, October 27, 2015 - link

    I believe it is too large of a performance gap, it would just hamstring the Fury.
  • nagi603 - Monday, October 26, 2015 - link

    nVidia already forcefully disabled using an nVidia card as a PhysX add-in card with an AMD main GPU. When will they try to disable this extra feature?
  • silverblue - Tuesday, October 27, 2015 - link

    They may already have; then again, there could be a legitimate reason for the less than stellar performance with an AMD card as the slave.
  • xjointsx - Monday, October 26, 2015 - link

    Now I can imagine an AMD & Nvidia GPU on one PCB.

    GTR9 Futan X.
  • Refuge - Tuesday, October 27, 2015 - link

    +1
  • at80eighty - Monday, October 26, 2015 - link

    Great. Now the video card forums aren't going to be as fun anymore T_T
  • ajmiles - Monday, October 26, 2015 - link

    I wouldn't normally post a comment just to correct a typo, but as a Rendering Engineer by day, the idea of a "rendering implantation" was too good to pass up (page 3). Sounds very sci-fi!
  • silverblue - Tuesday, October 27, 2015 - link

    Yes, but you obviously forgot to invert the polarity, realign the phase inducers with the ODN matrices AND do the hokey-cokey.
  • moozoo - Monday, October 26, 2015 - link

    How fine-grained is their assignment of workload to the different cards?
    If AMD is faster at doing X and Nvidia is faster at doing Y, then by putting more of the X work on the AMD card and more of the Y work on the Nvidia card, you would expect the result to be faster than two cards that are the same.
    i.e. two AMD cards would be bottlenecked by the Y work and two Nvidia cards would be bottlenecked doing the X work.
  • Kodiack - Tuesday, October 27, 2015 - link

    "YouTube limits 60fps videos to 1080p at this time."

    Fortunately, this is no longer the case. As of a few months ago, you can even watch 4K60 content on YouTube! Your videos are currently showing as 1440p60, and they look wonderful for it.
  • Chaython - Tuesday, October 27, 2015 - link

    Excuse me, how does a high-end card stack up with a low-end one? Does it downgrade the high-end card (as with previous bridging), or will it actually perform better than just the high-end card alone?
    e.g., a 970 + 6700K iGPU
  • CiccioB - Tuesday, October 27, 2015 - link

    As it is AFR (alternate frame rendering), where each GPU completes an entire frame on its own, that kind of mix will simply, at best, double the performance of the iGPU, leaving the beefy GPU sleeping most of the time while it waits for the iGPU to finish its work.
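
    A quick worked example with made-up frame times shows why; the 50ms/10ms figures below are assumptions for illustration only:

    ```python
    # Under strict alternate-frame rendering the pair delivers two frames per
    # "slowest GPU" period, so a fast dGPU paired with a slow iGPU tops out
    # at roughly twice the iGPU's own frame rate. Frame times are assumed.

    igpu_ms, dgpu_ms = 50.0, 10.0   # iGPU: 20 fps alone, dGPU: 100 fps alone

    igpu_alone = 1000.0 / igpu_ms
    dgpu_alone = 1000.0 / dgpu_ms
    afr_pair = 2000.0 / max(igpu_ms, dgpu_ms)   # two frames per slowest period

    print(f"iGPU alone: {igpu_alone:.0f} fps, dGPU alone: {dgpu_alone:.0f} fps, "
          f"AFR pair: {afr_pair:.0f} fps")
    # -> 20, 100 and 40 fps: double the iGPU, far below the dGPU, which is
    #    why AFR is a poor fit for very unbalanced pairings.
    ```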
  • Intel999 - Tuesday, October 27, 2015 - link

    The ability to mix GPUs bodes well for AMD, in that on the laptop front an APU can be mixed with any discrete GPU. This will, theoretically, make a moderately priced laptop perform well above entry-level gaming on the cheap. Better than an Intel iGPU, since Intel's graphics don't bring much to the party unless you are willing to pay top dollar for the highest-end iGPU, which only matches AMD's cheaper APUs.
  • Gigaplex - Tuesday, October 27, 2015 - link

    Intel's top iGPUs can beat AMD's top ones, but expect to pay a premium.
  • loguerto - Sunday, November 1, 2015 - link

    I love how Intel managed to implement on-die RAM as a workaround for the huge DDR3 bottleneck. I wonder why AMD did not choose to do the same, as their 7870K is evidently bottlenecked by DDR3. Is there a cost problem, or are they waiting to switch directly to HBM memory?
  • CiccioB - Tuesday, October 27, 2015 - link

    A test with the Titan X as the master card would be interesting. It may show whether the sync problem is HW or SW related.
    Tests with lower-tier cards should be run at 1080p. The GTX 680 has never been that good at higher resolutions, so a test at Full HD may better level both graphics cards' performance and show different results with mixed cards.

    BTW, nvidia cards/drivers are not optimized for PCIe transfers, as they use proprietary connectors for SLI and synchronization, while AMD cards use PCIe transfers to do all of the above. Maybe the problem is that.
    It would also be interesting to see how these mixes work when used on slower PCIe lanes. You know, not all PCs have PCIe 3.0 or run at 16x.

    Specific results apart (they will most probably change with driver updates), it is interesting to see that this particular feature works.
  • VarthDaver - Tuesday, October 27, 2015 - link

    Can we also get this? "In conjunction with last week’s Steam Early Access release of the game, Oxide has sent over a very special build of Ashes." I have had access to Ashes for a while but do not see the AFR checkbox in my version to match their special one. I would be happy to provide some 2x TitanX performance numbers if I could get a copy with AFR enabled.
  • Ryan Smith - Tuesday, October 27, 2015 - link

    As I briefly mention elsewhere, AFR support is very much an experimental feature in Ashes at the moment. Oxide has mentioned elsewhere that they will eventually push it out in public builds, but not until the feature is in a better state.
  • silverblue - Tuesday, October 27, 2015 - link

    That's correct as regards the 290, but the 7970 uses a CrossFire bridge.
  • MrPoletski - Tuesday, October 27, 2015 - link

    What about integrated graphics solutions? It'd be nice to see what this does to our potential CPU choice. Can we see a top-of-the-line Intel CPU vs a top-of-the-line AMD CPU now, and see how each one's iGPU helps out with a 980 Ti/Fury X?
  • CiccioB - Tuesday, October 27, 2015 - link

    I suggest that you, and all the others who keep suggesting such tests or fantasizing about hybrid systems, first understand how AFR works, so you can see for yourselves why it is useless to use an iGPU with it.
  • Gigaplex - Tuesday, October 27, 2015 - link

    And perhaps you should read the article, where it explicitly states that AFR isn't the only form of multi GPU load sharing. The iGPU could do post processing, such as deferred rendering of lighting. It's not implemented in this particular benchmark yet, but it's been demonstrated in the Unreal engine.
  • Harry Lloyd - Tuesday, October 27, 2015 - link

    I do not see this ever being practical. I would rather see the results of split frame rendering on two identical GPUs, that seems to have real potential.
  • AndrewJacksonZA - Tuesday, October 27, 2015 - link

    Did you try running three cards Ryan?
  • BrokenCrayons - Tuesday, October 27, 2015 - link

    It's pretty interesting stuff, but I think most of the people who buy computing hardware couldn't care less about the performance of a pair of high-end graphics processors that are individually priced well above the cost of a nice laptop. What's more significant here is the ability to make use of an iGPU when a dGPU is present in the system, to obtain a better overall user experience. Very, very few people are going to throw away money on even one graphics card that costs over $400, let alone two, and then also soak up the cost of the components necessary to support their operation. I can't imagine developers, those in control of whether or not this new feature gets any support in the industry, are going to invest much money writing code to support such a small subset of the overall market. In the end, what DX12 might do is have the exact opposite effect of what we're predicting... it could ultimately kill off high-end multi-GPU setups as impractical and wholly unsupported by game production studios. The most I'd see this ever doing is making the i+dGPU scenario practical. Everything else seems a little too expensive to implement for a limited market of large desktop computers that are rapidly fading away as small form factor and mobile devices replace them.
  • diola - Tuesday, October 27, 2015 - link

    Hi, after connecting the cards to the mainboard and installing the drivers, do you need anything else for it to work? I was unable to get the second card recognized when the first is from AMD and the second from Nvidia.
  • Wunkbanok - Tuesday, October 27, 2015 - link

    How does this multi-adapter thing work? Can I use a new card with an older card and get any improvement? Which cards are supported?
  • dray67 - Tuesday, October 27, 2015 - link

    I'm very tempted to give the Ashes alpha a try. I've recently upgraded from a 680 to a 980 Ti and I like the idea of a linked setup, but there are so many questions, the main one being compatibility with other hardware, motherboards etc.

    I can't help but think that the bigger the disparity between cards, the less you'd gain, but either way I like where this is going (giggly).
  • Valantar - Tuesday, October 27, 2015 - link

    I really wish you would take this one step further and test combinations of new and old cards - Fury X + 7970/GTX 680 and 980 Ti + GTX 680/7970 would be the really interesting combinations here. Also, far more relevant for most gamers looking to upgrade at some point in the next few years.
  • loguerto - Tuesday, October 27, 2015 - link

    I would like to mention an off-topic point:
    in its time, the Kepler GTX 680 was constantly outperforming the GCN 1.0 7970, and was on par with the 7970 GHz Edition. Now we have the 7970 non-GHz edition outperforming the GTX 680 by 20%. It's just a fact that shows how AMD cards hold up compared to Nvidia's.
  • Clone12100 - Tuesday, October 27, 2015 - link

    Why no comparison of the older cards with the newer cards? Like a 7970 with a 980ti
  • WhisperingEye - Tuesday, October 27, 2015 - link

    Because they are using Alternate Frame Rendering. This means that the slowest card drives the set. So your new $400 card will go at the speed of your (now) $150 card.
  • Oxford Guy - Friday, October 30, 2015 - link

    Would have liked to have seen the 290X or 390X.
  • andrew_pz - Tuesday, October 27, 2015 - link

    The Radeon is placed in a 16x slot, while the GeForce is installed in a 4x slot only. WHY?
    It's a cheat!
  • silverblue - Tuesday, October 27, 2015 - link

    There isn't a 4x slot on that board. To quote the specs...

    "- 4 x PCI Express 3.0 x16 slots (PCIE1/PCIE2/PCIE4/PCIE5: x16/8/16/0 mode or x16/8/8/8 mode)"

    Even if the GeForce was in an 8x slot, I really doubt it would've made a difference.
  • Ryan Smith - Wednesday, October 28, 2015 - link

    Aye. And just to be clear here, both cards are in x16 slots (we're not using tri-8 mode).
  • brucek2 - Tuesday, October 27, 2015 - link

    The vast majority of PCs, and 100% of consoles, are single GPU (or less.) Therefore developers absolutely must ensure their game can run satisfactorily on one GPU, and have very little to gain from investing extra work in enabling multi GPU support.

    To me this suggests that moving the burden of enabling multi-GPU support from hardware sellers (who can benefit from selling more cards) to game publishers (who basically have no real way to benefit at all) means that the only sane decision is to not invest any additional development or testing in multi-GPU support, and that therefore multi-GPU support will effectively be dead in the DX12 world.

    What am I missing?
  • willgart - Tuesday, October 27, 2015 - link

    Well... you no longer need to replace your card with a big one; you can just upgrade your PC with a low- or mid-range card to get a good boost, and you keep your old one. From a long-term point of view we win, not the hardware resellers.
    Imagine you have a GTX 970 today; in 4 years you could get a GTX 2970 and have a stronger system than a single 2980 card... especially the FPS/$ is very interesting.

    And when you compare the HD 7970 + GTX 680 setup, which maybe costs $100 today(?), it can be compared to a single GTX 980, which costs nearly $700...
  • brucek2 - Tuesday, October 27, 2015 - link

    I understand the benefit to the user. What I worry is missing is the incentive for the game developer. For them the new arrangement sounds like nothing but extra cost and likely extra technical support hassle to make multi-GPU work. Why would they bother? (To use your example of a user with a 7970+680, the 680 alone would at least meet the console-equivalent setting, so they'd probably just tell you to use that.)
  • prtskg - Wednesday, October 28, 2015 - link

    It would make their game run better and thus improve their brand name.
  • brucek2 - Wednesday, October 28, 2015 - link

    Making it run "better" implies it runs "worse" for the 95%+ of PC users (and 100% of console users) who do not have multi-GPU. That's a non-starter. The publisher has to make it a good experience for the overwhelmingly common case of single gpu or they're not going to be in business for very long. Once they've done that, what they are left with is the option to spend more of their own dollars so that a very tiny fraction of users can play the same game at higher graphics settings. Hard to see how that's going to improve their brand name more than virtually anything else they'd choose to spend that money on, and certainly not for the vast majority of users who will never see or know about it.
  • BrokenCrayons - Wednesday, October 28, 2015 - link

    You're not missing anything at all. Multi-GPU systems, at least in the case of there being more than one discrete GPU, represent a small number of halo desktop computers. Desktops, gaming desktops in particular, are already a shrinking market and even the large majority of such systems contain only a single graphics card. This means there's minimal incentive for a developer of a game to bother soaking up the additional cost of adding support for multi GPU systems. As developers are already cost-sensitive and working in a highly competitive business landscape, it seems highly unlikely that they'll be willing to invest the human resources in the additional code or soak up the risks associated with bugs and/or poor performance. In essence, DX12 seems poised to end multi GPU gaming UNLESS the dGPU + iGPU market is large enough in modern computers AND the performance benefits realized are worth the cost to the developers to write code for it. There are, after all, a lot more computers (even laptops and a very limited number of tablets) that contain an Intel graphics processor and an NV or more rarely an AMD dGPU. Though even then, I'd hazard a guess to say that the performance improvement is minimal and not worth the trouble. Plus most computers sold contain only whatever Intel happens to throw onto the CPU die so even that scenario is of limited benefit in a world of mostly integrated graphics processors.
  • mayankleoboy1 - Wednesday, October 28, 2015 - link

    Any idea what LucidLogix are doing these days?
    Last I remember, they had released some software solutions which reduced battery drain on Samsung devices (by dynamically decreasing the game rendering quality).
  • LemmingOverlord - Wednesday, October 28, 2015 - link

    Now that's what I call ... DRIFT COMPATIBLE!!!!! <cue guitar riff>
  • albert89 - Wednesday, October 28, 2015 - link

    Would love to see DX12 multi-adapter work with a gaming card and workstation card combination. Is it even possible?
  • wiak - Thursday, October 29, 2015 - link

    The reason that AMD primary + Nvidia secondary is better is pretty much the GCN architecture; GCN has ACEs that help distribute the work to the secondary Nvidia GPU.
  • zodiacfml - Friday, October 30, 2015 - link

    Not holding my breath on this one. I would be happy though if they can put the integrated GPU to good use.
  • drep - Friday, October 30, 2015 - link

    Maybe I missed it somewhere, but does this mean you no longer need the SLI link on your Nvidia cards when using DX12 with these features? I assume you don't, since you can't use a link to the AMD card. Is there a performance difference if you run two Nvidia cards using the SLI link versus not using it in this bench?
  • Oxford Guy - Friday, October 30, 2015 - link

    Isn't it the 290X/390X that gets the biggest performance boost from Ashes? It would be nice to have one of those included.
  • gothxx - Saturday, October 31, 2015 - link

    @Ryan Smith, what do mixed GPUs mean for G-Sync/FreeSync? Is it possible to support both, just by switching which GPU is the primary and which card the monitor is connected to?
  • echtogammut - Monday, November 2, 2015 - link

    You can be certain that either AMD or NVidia will kill off this ability with some driver "update".
  • Haravikk - Tuesday, November 3, 2015 - link

    Promising stuff.

    I'm curious actually, but how well (if at all) does DirectX 12/Mantle interact with HSA? Presumably DirectX 12 and Mantle have their own mechanisms for handling memory use, but does either graphics API leverage HSA in systems that have it, or is that still something that developers need to do themselves? I could see that making a difference in an APU iGPU + dGPU setup, maybe even more so if the discrete graphics are also from AMD.
  • RavenSe - Tuesday, November 3, 2015 - link

    Well, actually I believe that's one of the best moves for the crowds by MS. Fck branding, let them fight together and we'll take advantage of both anyway.
    I can see Ryan said he doesn't have a clue how mixing gets better than sticking to one side. Did you look at the minimums? I've got a simple hunch... how about this: there are parts of both GPUs that handle specific tasks worse, but since those GPUs are so architecturally different, they help each other handle those bottlenecks. I'm not sure it can work this way, but that would be logical... give a task to the card that can handle it better, for example PhysX to nVidia and some heavy computing to AMD. One way or another, this is actually the first time in my life that I'll start considering having both vendors at once in the same rig!
