When I try to render a scene that I’m working on, I get this error:
The image did not catch it all, so here is the error: CUDA error: invalid value in cuTexRefSetAddress(NULL, texref, cuda_device_pointer(mem.device_pointer), size)
As you can see I’m not hitting the memory limit on my GPU, so I have no idea what’s going on.
Anyone have a suggestion?
You are running out of VRAM: The error message is quite clear about that.
Could be the nasty GTX 6xx 4 GB memory limit bug we discussed recently:
Cycles will not render a scene on GTX 670 and 680 cards with 4 GB if the geometry data exceeds 2 GB of VRAM. So, e.g. 1.8 GB of geometry data plus another 1.5 GB of texture data would render perfectly fine, but 2.1 GB of bare geometry data wouldn’t…
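To make the asymmetry clearer, here is a minimal sketch of that limit as a check (the cap value and function name are just illustrations of the community's observation, not anything from the Cycles code):

```python
GEOMETRY_CAP_GB = 2.0   # observed cap on geometry data alone (GTX 670/680 4 GB bug)
VRAM_GB = 4.0           # total memory on the card

def scene_fits(geometry_gb, texture_gb, other_gb=0.0):
    """Rough check: geometry must stay under the cap AND the total under VRAM."""
    total = geometry_gb + texture_gb + other_gb
    return geometry_gb < GEOMETRY_CAP_GB and total < VRAM_GB

# The two examples from above:
print(scene_fits(1.8, 1.5))  # 1.8 GB geometry + 1.5 GB textures -> True
print(scene_fits(2.1, 0.0))  # 2.1 GB of bare geometry -> False
```

Note that the total in the first example (3.3 GB) is larger than the geometry in the second (2.1 GB), yet only the second fails: it is the geometry figure alone that hits the limit.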
Check your scene: Perhaps you can reduce the number of subdivisions at render time for some objects to save geometry memory. Please report back if that helps!
We have already talked about that, haven’t we?
Just took another look at your screenshot: Cycles alone was using almost 3.4 GB of your card’s 4 GB VRAM when it crashed. Windows takes its share, too, and I see you have about a dozen web browser windows/tabs open which will also eat away quite a bit of VRAM. So, even without that aforementioned memory limitation you are scraping the barrel here…
Yeah, you warned me about that bug earlier. Does that bug work the other way around as well? I mean, can I have more than 2 GB of textures, or is that the same problem as with the geometry? I’m “only” at 668K verts on this scene.
I have no clue about programming, and with the error code it gave me, I just can’t see how it refers to memory. Normally I get a direct message that Blender is out of memory when I exceed the amount of VRAM that I have.
The 680 is my dedicated render card; I have another card for the Windows stuff.
I will get a GTX 770 4 GB tomorrow, so that one should be able to render the scene without problems, right?
From “our” (as in: the community’s) testing, this memory limitation only applies to geometry; more than 2 GB of textures renders fine. Mind you, though, that the GTX 770 is nothing more than a rebranded and overclocked GTX 680, so I would not hold my breath for it to work any better.
Concerning the number of verts:
That’s 668K verts in the viewport. The subsurf modifier (for example) has independent settings for viewport and render, so the polycount at render time might be significantly higher than that!
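To illustrate how quickly the render-time polycount can outgrow the viewport figure: Catmull-Clark subdivision roughly quadruples the face count per level (exactly, for an all-quad mesh). The base quad count below is a made-up number for illustration, not taken from the actual scene:

```python
def subsurf_faces(base_faces, levels):
    """Catmull-Clark subdivision roughly quadruples the face count per level
    (exact for an all-quad mesh)."""
    return base_faces * 4 ** levels

base = 334_000  # hypothetical all-quad base mesh

# Viewport at level 1 vs. render at level 3:
print(subsurf_faces(base, 1))  # 1,336,000 faces
print(subsurf_faces(base, 3))  # 21,376,000 faces -- 16x the viewport count
```

So a scene that looks modest in the viewport can easily multiply its geometry memory several times over at render time.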
Well, that’s really something that needs to be looked into if it affects both the 6 series AND the 7 series. I mean, if Cycles can’t render a scene like the one I have on that many graphics cards, then there is a problem. The scene I’m doing is an interior with some high-poly furniture, but it’s nothing out of the ordinary.
Has anyone reported this as a bug? I’ve never reported one and have no idea where to look.
Well, I’m not sure if the “real” cards from the GTX 7xx series are affected (the ones that truly are new designs, i.e. the 750, 760 and 780). And to be fair, running out of scarce memory has always been the Achilles’ heel of GPU rendering.
Not sure if someone has already filed a bug report for this - and it does indeed seem to be a bug, since no other GPU renderer I tried this on suffers from this limitation. I think bugs have to be reported here, but am not sure, either.
But bug or not - your scene is already at 3.4 GB, which will cause trouble sooner or later on a 4 GB card anyway…
Solved by getting the latest buildbot version. I can now render up to 3.5 GB of geometry data alone.