Key Takeaways
- Dual-GPUs fell out of favor as deferred rendering exposed the communication bottleneck between chips and their inefficient, mirrored use of VRAM.
- Performance issues, including micro-stuttering, and lackluster support from game developers contributed to the decline of multi-GPU setups.
- Single-chip GPUs improved rapidly, making dual-GPUs an impractical and costly investment compared to regularly upgrading GPUs.
Everybody knows about multi-GPU gaming setups, which were the spectacle of the previous decade. But did you know that mainstream GPUs that had more than one chip on their PCB used to exist? These rendering beasts have been around since 1997, with the Dynamic Pictures Oxygen 402 being one of the first with not two, but four GPU chips on a single PCB.
Companies like 3dfx brought dual-GPUs to fame, and afterward, the mantle was taken up by ATI (later bought by AMD) and Nvidia, which continued making dual-GPUs through the better part of the last decade. Sadly, you won't find any consumer-grade dual-GPUs anymore. What exactly turned the once-revolutionary dual-GPU formula into a mere memory of a bygone era?
4 Deferred rendering
Dual-GPUs just didn't go well with deferred rendering
With how quickly the graphics of video games improved after the year 2000, developers started to opt for deferred rendering. Unlike the traditional forward rendering technique, deferred rendering was far more efficient at handling a scene with multiple light sources and less taxing on the graphics card, with little effect on visual quality. It did this by breaking the rendering pipeline into multiple stages: geometry data such as depth, normals, and material colors was first written into intermediate buffers, and lighting was then computed only for the pixels that actually end up on screen, avoiding unnecessary rendering work.
The beef between deferred rendering and multi or dual-GPUs was that, since the final frame depended on the previous stages of its rendering, the data from those stages had to be passed from one GPU to the other. Initially, the PCIe interface was used to pass that data, and eventually, the SLI and CrossFire connectors were added, but even they weren't enough for the sheer bandwidth this communication required.

Another issue was the inefficient use of VRAM. Since each GPU had access only to its own VRAM, both had to store the same set of textures to render the same scene. This meant that pairing two 4GB GPUs gave you only 4GB of effective VRAM.
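The VRAM math can be sketched in a few lines. This is a hypothetical illustration, not any real driver API, showing why mirrored rendering modes like SLI and CrossFire never pooled memory:

```python
# Hypothetical sketch: effective VRAM under mirrored (SLI/CrossFire-style)
# rendering, where every GPU must hold a full copy of the scene's assets.
def effective_vram_gb(vram_per_gpu_gb: float, num_gpus: int, mirrored: bool = True) -> float:
    """Mirrored mode duplicates assets, so capacity doesn't scale with GPU count."""
    if mirrored:
        return vram_per_gpu_gb          # each card stores the same textures
    return vram_per_gpu_gb * num_gpus   # idealized pooled memory (not how SLI worked)

# Two 4 GB GPUs in SLI still leave you with only 4 GB of usable texture memory.
print(effective_vram_gb(4, 2))  # → 4
```

The same duplication applied to dual-chip cards: a "12GB" dual-GPU with 6GB per chip effectively behaved like a 6GB card.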
Both these issues existed in dual-GPUs as well. Despite both chips sitting on the same PCB, a communication gap remained between the chips and their VRAMs. With enough engineering effort, that gap could have been narrowed, but lacking any real incentive, Nvidia and AMD didn't pay it much heed.
3 Performance issues
The ever-present and infamous micro-stuttering
Among the issues that multi and dual-GPUs had, arguably the most notorious was micro-stuttering. It came down to how the two GPUs split the workload of rendering frames, typically taking turns on alternate frames, and without proper support from developers, there was no workaround, even if you added a third or fourth GPU to your setup.
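A toy simulation makes the effect visible. The numbers here are illustrative, not measured: two GPUs each take 30 ms per frame in alternate frame rendering (AFR), but if the second GPU starts its frame only 5 ms after the first instead of the ideal 15 ms, frames arrive in uneven bursts even though the average frame rate doubles:

```python
# Toy model of AFR frame pacing with two GPUs alternating frames.
# render_ms: time each GPU needs per frame; offset_ms: stagger between the GPUs.
def frame_times(render_ms: float, offset_ms: float, frames: int) -> list[float]:
    """Completion timestamps when two GPUs alternate frames (AFR)."""
    times = []
    for i in range(frames):
        # Even frames go to GPU 0, odd frames to GPU 1 (started offset_ms later).
        gpu_start = (i // 2) * render_ms + (offset_ms if i % 2 else 0)
        times.append(gpu_start + render_ms)
    return times

smooth = frame_times(30, 15, 6)    # ideal pacing: a new frame every 15 ms
stutter = frame_times(30, 5, 6)    # poor pacing: gaps alternate 5 ms / 25 ms
gaps = [round(b - a) for a, b in zip(stutter, stutter[1:])]
print(gaps)  # → [5, 25, 5, 25, 5]
```

Both runs average 15 ms per frame, but the second one alternates between a near-instant frame and a 25 ms wait, which the eye perceives as stutter despite the healthy fps counter.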
With the release of DirectX 12, support for multi-GPU actually improved. Yet, this API is often credited with being the final nail in the coffin for multi-GPU setups. That's because while DirectX 12 could significantly increase the performance of multi-GPU setups in games, all the work had to be done by game developers. With only a minute percentage of gamers rocking setups that could leverage this, devs had no incentive to put in extra hours and money adding a feature that only a few people could take advantage of. The few DX12 games that did add multi-GPU support, like Ashes of the Singularity and Shadow of the Tomb Raider, ran well on such builds.
Even if we ignore the steep price and power-hungry nature of a multi-GPU setup, the gaming community's failure to readily adopt the technology led to lackluster support from game developers, which led to performance and compatibility issues, which in turn led to people avoiding multi-GPU setups entirely. It was a vicious cycle that would only end with the death of multi- and, consequently, dual-GPU graphics cards.
2 Rapid improvement and innovation in single-chip GPUs
Single-chip GPUs were the smarter purchase
In 2014, Nvidia released its last ever dual-GPU, the Titan Z, for $3,000. This was a dream card for many, boasting 5 TFLOPS of FP32 compute and 336 GB/s of memory bandwidth. Merely four years later, in 2018, the RTX 2080 Ti launched for less than half the price of the Titan Z, with 13.45 TFLOPS of FP32 and 616 GB/s of memory bandwidth, plus revolutionary features like DLSS and ray tracing.
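That 13.45 TFLOPS figure isn't marketing magic; it falls out of a simple back-of-envelope formula. The sketch below assumes the commonly cited RTX 2080 Ti specs (4,352 CUDA cores, ~1.545 GHz boost clock):

```python
# Back-of-envelope FP32 throughput: shader cores x clock x 2, because one
# fused multiply-add counts as 2 floating-point operations per cycle.
def fp32_tflops(cores: int, clock_ghz: float) -> float:
    return cores * clock_ghz * 2 / 1000  # cores * GHz gives GFLOPS-ish units

# Commonly cited RTX 2080 Ti figures: 4352 CUDA cores at ~1.545 GHz boost.
print(round(fp32_tflops(4352, 1.545), 2))  # → 13.45
```

Real-world throughput depends on sustained clocks and workload, so treat this as a spec-sheet estimate rather than a guaranteed gaming number.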
With single-chip GPUs improving so rapidly, investing such an absurd amount of money in a dual-GPU that might not stay relevant for more than a few years made no sense; periodically upgrading a single GPU was the better choice. Hence, even if GPU manufacturers still made dual-GPUs, only enthusiast PC gamers would buy them, which wouldn't generate many sales, nor would it be enough to incentivize game developers to optimize their games for such graphics cards.
1 Dual-GPUs stopped being relevant
They lost the advantage that their legacy once had
While dual-chip GPUs were once priced competitively and provided a real performance increase, the category eventually devolved into a race to see which company could create the most power-hungry and expensive GPU. During their peak, dual-GPUs offered much better performance for the money. For example, the GTX 295 – a dual-GPU that MSRPed for $499 and combined features of the GTX 260 and GTX 280 – outperformed two GTX 260s in SLI, a venture that would have set you back $900.
Nvidia's Titan Z in 2014 was nothing more than an exercise in hubris. With a price tag of $3,000 and performance worse than two GTX 780s in SLI, each of which cost only $650, it was an utter failure. AMD's R9 380x2 was a similar story; requiring four 8-pin connectors and carrying a 580W TDP, it was destined to fail. While these cards did grab attention, hardly anyone bought them. Poor multi-GPU support meant that only one chip in these cards would do the heavy lifting in most games, making them extremely bad value for money. And since single-chip GPUs of those generations were adequately powerful, ran quieter while using less power, and cost less, everybody went in that direction. The only real advantage dual-GPUs retained was the space they saved: instead of using two separate cards in two PCIe slots, you got the same functionality from a single slot.
Will dual GPUs make a comeback?
You may be surprised to learn that dual GPUs are still being made today; AMD's Radeon Pro Vega II Duo, used in Apple's Mac Pro, is one example. Multi-GPU setups also exist, but not in the form we used to know and love. Only professionals and industries that require such GPUs and setups use them, and thanks to AMD's mGPU support, the very few gamers who still want to run two GPUs can do so.
But for gaming, it's safe to say that dual-chip GPUs won't be making a comeback anytime soon. That is, not in the way that we would think. The CPU and GPU industry is heading toward chiplets, with AMD leading the way. In its current state, this is in no way equal to having two GPUs on the same die since the main chip is broken down into functional blocks to reduce production costs. But maybe in the future, chiplets might evolve to a point where we could see something that actually counts as a dual-GPU.
Readers like you help support XDA. When you make a purchase using links on our site, we may earn an affiliate commission. Read More.