I run dual gpu. Looking to upgrade. should i stick with dual or go for 1 card?

Thanks for all the info! I really appreciate it, you’ve been a big help.


Rather than open a new thread, I thought I’d ask in here.

I recently bought a 2080 Super. I also have an old 1070 in my machine. Firstly, turning on Optix and turning off all of the Cuda options is way faster. It’s amazing.

Here’s my real question: is there a way to let one instance of Blender use both cards at the same time, with the 2080 using Optix and the 1070 using Cuda? When I select both cards in the Cuda settings, they render together, but if I turn on Optix while even one of the cards is selected in the Cuda settings, it defaults back to Cuda.

Can anyone clear that up for me? Is Cuda the default regardless?

Well, that’s somewhat logical. Blender doesn’t decide which tile is to be computed with which tech. Instead it prepares the whole scene to be rendered and the cards grab tiles one after another, so the lowest common tech is used.
What you can do, though, is look into bucket rendering. Launch an instance of Blender for each card, render partial frames and then stitch them together.
But there’ll be dragons! You’ll most likely end up in some pretty involved optimization math.
Edit: That’s for stills, of course. With animations and frame ranges it can be handled much more easily.
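For the animation case, the frame-range route can be sketched with Blender’s actual CLI flags (-b, -s, -e, -j, -a). Below is a hypothetical helper that builds one background-render command per instance using frame interleaving; the scene path and frame range are made up, and you’d still have to enable a different GPU in each instance’s Cycles preferences yourself:

```python
# Hypothetical helper: split an animation across two GPUs by launching one
# Blender instance per card. The CLI flags are real Blender flags:
#   -b (background), -s/-e (start/end frame), -j (frame jump), -a (animation).
def gpu_render_commands(blend_file, start, end, num_instances):
    """Build one background-render command per Blender instance.

    Instance i starts at frame start+i and jumps num_instances frames
    each step, so the instances cover the range without overlap.
    """
    commands = []
    for i in range(num_instances):
        commands.append(
            f"blender -b {blend_file} -s {start + i} -e {end} "
            f"-j {num_instances} -a"
        )
    return commands

for cmd in gpu_render_commands("scene.blend", 1, 120, 2):
    print(cmd)
```

Run the first command in the instance set to the 2080 and the second in the instance set to the 1070, and each card renders every other frame of the sequence.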

So, what you’re saying is that I basically should open two instances of Blender and set one to render on the 1070 and one to render on the 2080?

Yes, together with some technique to combine the outputs afterwards. Otherwise you can’t use pure CUDA and RTX together.
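For stills, the “combine the outputs” step could look something like this minimal sketch. It assumes each instance rendered a horizontal band of the frame (e.g. via a border render) and uses plain row-lists as a stand-in for real image data:

```python
# Minimal sketch of stitching two partial stills: take the top band of the
# frame from one card's render and the bottom band from the other's.
def stitch_bands(top_half, bottom_half, split_row):
    """Take rows [0, split_row) from one render and the rest from the other."""
    return top_half[:split_row] + bottom_half[split_row:]

# Toy 4-row "frames": card A rendered the top two rows, card B the bottom two.
card_a = [[1, 1], [1, 1], [0, 0], [0, 0]]
card_b = [[0, 0], [0, 0], [2, 2], [2, 2]]
print(stitch_bands(card_a, card_b, 2))  # → [[1, 1], [1, 1], [2, 2], [2, 2]]
```

The tricky optimization math mentioned above comes in when choosing split_row: the bands should be sized so both cards finish at roughly the same time, which depends on their relative speed for that particular scene.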


Indeed be there dragons.

I rendered out a heavy sequence over the weekend using two Blender instances. One running the 1070, the other running the 2080.

When I took the sequence into After Effects there was a big problem with the rendered output. The gamma/colour space (not sure exactly what) between the two cards is way off, causing a very noticeable flickering. Same file, opened twice; I changed the card settings and that was it. No idea why that happened.

Any ideas about this? How do render farms guarantee that this doesn’t happen?
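For what it’s worth, a crude way to quantify that kind of flicker is to compare mean frame brightness across the sequence. This is a toy sketch with plain Python lists standing in for real pixel data, and the 0.05 threshold is arbitrary:

```python
# Rough flicker check: flag any frame whose mean brightness jumps relative
# to the previous frame by more than a threshold.
def flicker_frames(frames, threshold=0.05):
    """Return indices of frames whose mean brightness jumps vs. the prior frame."""
    means = [sum(f) / len(f) for f in frames]
    return [i for i in range(1, len(means))
            if abs(means[i] - means[i - 1]) > threshold]

# Toy sequence: frame 2 is noticeably brighter than its neighbours,
# so both the jump into it and the jump out of it get flagged.
seq = [[0.5, 0.5], [0.5, 0.5], [0.7, 0.7], [0.5, 0.5]]
print(flicker_frames(seq))  # → [2, 3]
```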

I think it is more to do with Optix vs CUDA. Technically speaking they should be 100% identical, just like CUDA and OpenCL.

I’d recommend testing this on another scene, a default one like BMW, Classroom, or another Blender production file. If there are still noticeable differences, report a bug.

Also, are you rendering any fog/cloud/volume-related scene?

No volumetric stuff at all.

I’m just testing it again with my scene, but this time I’m rendering the same frame and then changing the card settings. Rendering to a different slot, of course.

Both frames came back identical. I did a render using the Cuda E-Cycles version and that’s where the discrepancy arises. Damn!

Report it as a bug to bliblubli; there shouldn’t be much difference between E-Cycles and normal Cycles renders.
