Now, if I try to render it all in one go (as above), Blender has a hissy fit and throws a "CUDA: out of memory" error in my face (something it's never done before, but given I've got 345,289 verts in this one, I can sorta see why xD). Searching the internet, the usual advice is that I either have to reduce the number of verts, make the scene smaller and less complicated, or just outright buy a new card. All of which I'd rather not do! (I'm a bit of a perfectionist on a low budget.)
Anyway, I've now been looking into how render layers work and how they could be used to render the same scene as above, but in smaller chunks, so the GPU doesn't crap itself every time.
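From what I've pieced together so far, the idea looks roughly like this. It's an untested sketch, it assumes a recent Blender where render layers are called view layers, and the "Foreground"/"Background" collection names are just placeholders for however the scene actually ends up being split:

```python
import bpy

scene = bpy.context.scene

# One view layer per chunk of the scene (names are placeholders).
fg_layer = scene.view_layers.new(name="FG_only")
bg_layer = scene.view_layers.new(name="BG_only")

# Exclude the other chunk from each layer, so each render only has to
# fit part of the scene into GPU memory.
fg_layer.layer_collection.children["Background"].exclude = True
bg_layer.layer_collection.children["Foreground"].exclude = True

# Render one layer at a time; with only one layer enabled,
# render() should only process that layer.
for active in (fg_layer, bg_layer):
    for layer in scene.view_layers:
        layer.use = (layer.name == active.name)
    scene.render.filepath = "//layers/" + active.name
    bpy.ops.render.render(write_still=True)
```

Since both layers still look through the same camera, the pieces should line up exactly, and they can be put back together with an Alpha Over node in the compositor (assuming the front layer is rendered with a transparent film).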
My question to you lovely lot is this. My scene relies on everything being rendered as close together as possible:
If I'm understanding what you're after (and what I've read elsewhere), the problem isn't so much which card you have as the amount of memory on it, which, unfortunately, is a limitation of GPU rendering.
That said, I believe there might be a way to slice the frame up, render it in pieces, and stitch it back together again, but I haven't figured out how to do that yet. (I'd like to figure it out too, since on my laptop long render times lead to crashes that aren't always memory related when I try to render some of my images at the large sizes I'm after.)
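For what it's worth, Blender's border render settings can be driven from a script to do the slice-into-pieces part. Below is a rough, untested sketch; the 2x2 tile count and the "//tiles/" output folder are arbitrary, and I'm honestly not sure how much GPU memory it saves, since Cycles may still load the whole scene for every tile:

```python
import bpy

scene = bpy.context.scene
render = scene.render

render.use_border = True           # only render the region inside the border
render.use_crop_to_border = False  # keep the full frame size so tiles line up

tiles = 2  # 2x2 grid; bump this up to make each piece smaller
for row in range(tiles):
    for col in range(tiles):
        # Border coordinates are fractions of the frame (0.0 to 1.0).
        render.border_min_x = col / tiles
        render.border_max_x = (col + 1) / tiles
        render.border_min_y = row / tiles
        render.border_max_y = (row + 1) / tiles
        render.filepath = "//tiles/tile_%d_%d" % (row, col)
        bpy.ops.render.render(write_still=True)
```

With crop-to-border left off, each tile should come out at the full frame size with only its own region filled in, so the pieces can be layered back together in the compositor or an image editor without any manual lining up.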