Limit on number of image textures in Cycles

Quick question: is there a limit on the number of image textures Cycles can handle? I’m trying to render a script-generated scene of 20,000 objects, each with a unique material and image texture. However, it seems that only the first 1,000 imported objects render correctly; the rest show up as uniform pink. If I move a pink object to another layer and render only that layer, it renders correctly. Similarly, if I import it into another .blend it renders fine. I searched for this issue but could only find some very old references, mostly about the GPU, whereas I am using the CPU.

I suppose I could get around this with alpha-over compositing, but it would not be straightforward as I’m planning to have the camera move around quite a bit. Although the scene’s got a lot of materials and objects, it renders very fast since the materials are just shadeless and transparent, so apart from this one show-stopper it’s working well. Unfortunately I can’t use the BI engine, as I want to render spherical stereo for a VR headset.

Thanks!

Yes, and it looks like you’ve found it.

You could pack the textures into atlases, but that could be time-consuming for 20,000 objects.

What kind of objects are you working with?

I remember this was changed not long ago.
I’m not certain, but I think the max is 1000!

Check the release notes.

happy cl

Dang. There’s not really any easy workaround in this case. They are all unique images of different sizes, so it wouldn’t be straightforward to combine them or generate UV maps for each object. Plus each mesh needs to be a separate object to track the camera.

I really wish spherical stereo worked for the internal render engine!

You could try to do what SterlingRoth suggests, using Python… It may take a few minutes, but it will work…

Or grab a copy of the Blender source and change the values in intern/cycles/util/util_texture.h.
(I don’t know if this will produce some domino effect, but you say the textures are small, so there should be no problem.)

I’m not sure how I’d go about a Python-based solution. I guess the idea would be to combine the image files into a few large images, then use UV mapping to select the appropriate section of the large image, so that only the correct section of the image is shown on each plane. The problem there is that each image is a different size, so it would be non-trivial to combine them and select the appropriate sections for the UV coordinates. Complicated…
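For what it’s worth, the packing step itself isn’t that bad even with differently-sized images. Here’s a rough sketch in plain Python (leaving out the Blender API side entirely): a simple “shelf” packer that assigns each image a rectangle in one big atlas and computes the UV offset/scale you’d need to remap each plane’s 0–1 coordinates into its rectangle. All the names and the atlas width here are my own assumptions, not anything from Blender:

```python
# Hypothetical shelf packer: place differently-sized images into one atlas
# and compute a per-image UV offset/scale for remapping each plane's UVs.

def pack_shelves(sizes, atlas_width=8192):
    """sizes: list of (w, h) in pixels.
    Returns (atlas_height, placements), where placements[i] = (x, y, w, h)."""
    # Place tallest images first so each shelf wastes as little space as
    # possible, but keep the results in the original input order.
    order = sorted(range(len(sizes)), key=lambda i: -sizes[i][1])
    placements = [None] * len(sizes)
    x = y = shelf_h = 0
    for i in order:
        w, h = sizes[i]
        if x + w > atlas_width:      # doesn't fit: start a new shelf
            y += shelf_h
            x = shelf_h = 0
        placements[i] = (x, y, w, h)
        x += w
        shelf_h = max(shelf_h, h)
    return y + shelf_h, placements

def uv_transform(placement, atlas_width, atlas_height):
    """Offset and scale to map a plane's 0..1 UVs into its atlas rectangle:
    new_uv = (offset_u + u * scale_u, offset_v + v * scale_v)."""
    x, y, w, h = placement
    return (x / atlas_width, y / atlas_height,
            w / atlas_width, h / atlas_height)

# Example: three images of different sizes packed into a 2048-wide atlas
sizes = [(512, 256), (1024, 512), (300, 300)]
height, placed = pack_shelves(sizes, atlas_width=2048)
```

In Blender you’d then copy each image’s pixels into the atlas at its rectangle and apply the offset/scale to each plane’s UVs; that’s the fiddly part, but the packing itself is simple.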

Altering the Blender source sounds like a potentially better option, but I have no experience with the code or with compiling it. Hmmm…

Compiling is not that difficult…
Just follow the steps here: https://wiki.blender.org/index.php/Dev:Doc/Building_Blender

For editing, you can set the following values to 2048:

/* CPU */
#define TEX_NUM_FLOAT4_CPU    1024
#define TEX_NUM_BYTE4_CPU     1024
#define TEX_NUM_HALF4_CPU     1024
#define TEX_NUM_FLOAT_CPU     1024
#define TEX_NUM_BYTE_CPU      1024
#define TEX_NUM_HALF_CPU      1024

Note: I haven’t tried this, but if your system has enough RAM to load every image, it should work OK.