DemonicRyzen666
- Joined: Apr 30, 2020
- Messages: 945
System Specs
System Name | SLI + RTX research rig
---|---
Processor | Ryzen 7 5800X3D
Motherboard | MSI MEG X570 ACE
Cooling | Corsair H150i Elite Capellix
Memory | Corsair Vengeance RGB Pro 3200 MHz 32 GB
Video Card(s) | 2x Dell RTX 2080 Ti in SLI
Storage | Western Digital SATA 6.0 SSD 500 GB + fanxiang S660 4TB PCIe 4.0 NVMe M.2
Display(s) | HP X24i
Case | Corsair 7000D Airflow
Power Supply | EVGA G+ 1600 W
Mouse | Corsair Scimitar
Keyboard | Corsair K55 RGB Pro
- Feb 17, 2024
- #21
droopyRO said:
I only once did the stupid thing of going multi-GPU, back 11-12 years ago: I got a GTX 660, and a few months later I got a second GTX 660 and an R9 280X. After a few days I returned the 280X and kept the SLI, one of, if not the dumbest hardware decision I ever made. The 280X had 1 GB more VRAM, a wider bus, better cooling, and was overall the better card. I ran games on the SLI for a few more months and then sold them both. Stutter, games not optimized for multi-GPU, and bad frame times were all SLI got me. Never again, except maybe as a retro hobby thing if I ever decide to do it again.
Wasn't that card made on a smaller node, though?
Honestly, narrower buses seem to alleviate a lot of the other problems certain series of cards have in multi-card setups, unless the card actually needs the extra bandwidth. That's probably what happened with the 280X vs the GTX 660.
BMfan80 said:
Maybe you should have watched a video first, before spewing nonsense.
An SLI 2080 Ti setup was only 8 fps higher than a 3090; I think it was 67 vs 75.
In Red Dead Redemption 2 the average for the 4090 was 121 fps. He also mentions that the 2080 Tis were drawing close to 550 watts.
That may be because of the way TAA is used, or whatever AA/AF they enabled. I have done a lot of research on TAA and still cannot figure out why developers ever bothered with it; it has a lot of inherent problems that other AA/AF techniques don't have.
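For what it's worth, the appeal of TAA is that it amortizes supersampling across frames: each frame is jittered slightly and blended into a running average of previous frames. A rough sketch of the per-pixel resolve step (all names made up, just to illustrate the idea, not any particular engine's code):

```cpp
#include <cstdio>

struct Color { float r, g, b; };

// history = last frame's resolved output, fetched where this pixel was
//           last frame (motion-vector reprojection)
// current = this frame's jittered, aliased sample
// alpha   = blend weight, typically ~0.1 (90% history, 10% new sample)
Color taaResolve(Color history, Color current, float alpha = 0.1f) {
    return { history.r + (current.r - history.r) * alpha,
             history.g + (current.g - history.g) * alpha,
             history.b + (current.b - history.b) * alpha };
}

int main() {
    // Over many frames the running average converges on the true edge
    // coverage, which is where the "free" antialiasing comes from.
    Color history  = {1.0f, 1.0f, 1.0f};  // pixel was fully lit last frame
    Color current  = {0.0f, 0.0f, 0.0f};  // this frame's jitter missed the edge
    Color resolved = taaResolve(history, current);
    std::printf("resolved r = %.2f\n", resolved.r);  // prints 0.90
}
```

That history dependence is also why TAA fights alternate-frame multi-GPU: frame N needs frame N-1's resolved output, which lives on the other card.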
Being close to 550 watts isn't that bad when you consider that the RTX 4090 is on a node roughly 4x smaller while still needing up to around 450 watts at stock, and overclocked models push it right back up to 550-600 watts.
Also, like I said before, the single narrower bus is going to help. It's one 384-bit bus on the 4090 vs two 352-bit buses on the 2080 Tis, 704 bits in total; that's a lot for the CPU to keep fed.
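Back-of-the-envelope, assuming stock memory specs (14 Gbps GDDR6 on the 2080 Ti's 352-bit bus, 21 Gbps GDDR6X on the 4090's 384-bit bus) and the fps and wattage figures quoted above:

$$
\begin{aligned}
\text{2080 Ti:}\quad & \tfrac{352}{8} \times 14~\text{Gbps} = 616~\text{GB/s per card},\ \approx 1232~\text{GB/s across the pair} \\
\text{4090:}\quad & \tfrac{384}{8} \times 21~\text{Gbps} = 1008~\text{GB/s} \\
\text{fps/W:}\quad & 75/550 \approx 0.14~\text{(2080 Ti SLI)} \quad\text{vs}\quad 121/450 \approx 0.27~\text{(4090)}
\end{aligned}
$$

So the 4090 delivers roughly twice the frames per watt, which lines up with the smaller-node point.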
A long time ago, back when the HD 5000 series was out and Eyefinity launched, someone did a Tri-Fire test with three HD 5670s vs two HD 5870 2GBs. The smaller cards balanced their loads better and scaled better in more games than the two big cards did. The only time the two cards could outdo the three was when the VRAM buffer was too small.
My current system is CPU-bottlenecked: the low clock speeds of the Ryzen 7 5800X3D hold back my 2080 Tis in SLI.
I'm sure that if I were running two RTX 2070 Supers in SLI it would be a better balance.
I can't even move these cards to an AM5 board, where I could get the CPU clocks and IPC I need, because there are no SLI AM5 boards. I was hoping to find out whether any of the TRX50 or WRX90 boards have support like the older TRX40 and WRX80 boards did, since those supported both workstation NVLink and all SLI setups.
Dragam1337 said:
I was personally a big advocate of SLI for many years. I had SLI setups, tri-SLI setups, and even quad-SLI setups, starting with the GeForce 8000 series and ending with the 1080 Ti. And while the 1080 Tis did indeed SMOKE the 2080 Ti in everything where SLI worked, that list became excruciatingly small over time, and to even get it to work you had to do quite a lot of "hacks". The primary reason is that SLI and temporal effects really don't go hand in hand... and games are just super dependent on them today. RDR2 is a prime example: you could get SLI to run super fast, but you needed to have TAA disabled, otherwise performance absolutely tanked... so while performance was super fast, the game looked like **** and was a massive shimmerfest.
For those reasons I don't ever see SLI really making a comeback.
Honestly, frame times became an entirely non-issue with G-Sync together with an fps cap.
But prior to G-Sync, it could be pretty bad, yeah.
Basically all Frostbite-based games gave a 97-98% performance uplift with SLI, provided you weren't bottlenecked by anything else in your system. But really, a lot of games saw very high SLI scaling.
Very old post of mine
SLI scaling - is it really only useful in benchmarks ? - Imgur
You do realize that you could / can run SLI with G-Sync + a frame cap as well, right? And yes, that will be buttery smooth as well.
I don't like post-processing of any kind. Matter of fact, I've stopped using AA & AF for now; they just don't seem to work, really. I usually just upscale everything to 2160p and run it on a 144 Hz 1080p monitor, lol. I can't see any jaggies at all then.
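Rendering at 2160p for a 1080p panel is effectively plain 2x2 ordered-grid supersampling, which is why the jaggies vanish:

$$
\frac{3840 \times 2160}{1920 \times 1080} = 4~\text{rendered samples averaged into every displayed pixel}
$$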
I have about 5 new games that support the newer mGPU, and I can tell you right now that mGPU just makes your games CPU-bound, from what I've seen so far. I think everyone, including developers, really didn't know how much the drivers were helping graphics cards out. Even with Intel's Arc it's quite clear that drivers still have a massive impact on how well a game runs, regardless of DX12 or Vulkan being a much lower-level API.
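To give a sense of where that CPU load comes from, here's a minimal sketch of just the first step an explicit-mGPU title now has to do itself: enumerating the adapters. These are standard DXGI calls (Windows-only, link against dxgi.lib), but everything the SLI driver used to hide, the work splitting, inter-GPU copies, fences, and frame pacing, lands on the application after this point:

```cpp
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    // Walk every GPU in the system; a multi-adapter renderer has to pick
    // and manage each of these itself instead of seeing one "SLI" device.
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        std::wprintf(L"Adapter %u: %s (%llu MB dedicated VRAM)\n",
                     i, desc.Description,
                     (unsigned long long)(desc.DedicatedVideoMemory >> 20));
        adapter.Reset();  // release before fetching the next adapter
    }
    // A DX12 title would then create a separate device per adapter and
    // schedule, copy, and synchronize across them explicitly.
}
```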
It feels like DX12 is a failure with a lot of broken promises in the past, as DXR & RTX are always confused by consumers. Only about 50% of games have been releasing with any sort of RT features, and Vulkan is just the lone wolf that everyone seems to have forgotten about.